The God No One Understands

AI, Power, and the Coming Priesthood

In 1966, a computer program named ELIZA sat in a terminal at MIT and waited for someone to type something. When they did, it reflected their words back at them in the form of questions—simple pattern matching dressed up as conversation. It understood nothing. It felt nothing. It had no model of the world, no memory of prior exchanges, no capacity for inference or surprise. And yet people—educated, rational, credentialed people—found themselves confiding in it, trusting it, feeling genuinely understood by it. Joseph Weizenbaum, the program’s creator, was so disturbed by this response that he spent much of his remaining career warning about the human tendency to project intelligence onto machines that possess none.

Sixty years later, billions of people interact daily with language models of staggering sophistication—systems that can reason across disciplines, synthesize contradictory evidence, generate working code, analyze legal contracts, and produce prose that is, by most measurable standards, better than what the median professional writer can produce under deadline. These systems are not ELIZA. They are, by any honest accounting, something genuinely new under the sun.

And yet something fundamental has not changed: the interaction pattern. You type. It responds. You ask. It answers. The ceremony is the same. Only the altar has changed.

This is not a minor observation. It is the fault line beneath everything that follows.

· · ·

The Acceleration Gap

Technology does not wait for culture. It never has. The printing press arrived centuries before mass literacy. The automobile reshaped the geography of civilization before urban planning caught up. Radio reached millions before anyone had developed a framework for thinking about broadcast propaganda. In each case, a powerful new tool entered society and society stumbled through a period of dangerous mismatch—possessing the capability long before developing the wisdom, the norms, or even the vocabulary to use it well.

But the gap between capability and behavioral adaptation has never moved this fast.

From 2023 to 2026, artificial intelligence moved from novelty to infrastructure in the span of what felt like a single, extended news cycle. Models that once required dedicated research labs and millions of dollars in compute to build became available on phones, embedded in web browsers, woven into the plumbing of daily work. A technology that most people first encountered as a parlor trick—look, it wrote a poem about my dog—quietly became the substrate of knowledge work across every industry that traffics in language, logic, or analysis. Which is to say, nearly every industry.

The capability curve bent sharply upward. The behavioral curve did not.

Most people who use these systems today use them the way their grandparents might have used a reference book: ask a question, receive an answer, close the book. They type a query, scan the response, and move on. The transactional reflex is deep, grooved into neural pathways over decades of interacting with search engines, help desks, FAQ pages, and customer service chatbots. That reflex does not dissolve simply because the technology on the other end has become something categorically different from what trained it into existence.

There is nothing shameful in this. It is how humans have always responded to radical technological change—by mapping the new onto the old, by fitting the unfamiliar into familiar frames, by doing with the revolutionary tool exactly what they did with the tool it replaced. The first films were filmed stage plays. The first websites were digitized brochures. The first users of AI are, by and large, people typing questions into a very impressive search bar.

The question is not why people haven’t adapted. Human behavioral change has always operated on a generational timescale—measured in decades, not product cycles. The question is whether they can adapt fast enough this time, and what happens to the social order if the window closes before they do.

· · ·

The Bifurcation

What is emerging is not simply a skills gap of the kind that labor economists have studied for decades—the familiar story of new technology displacing old jobs while creating new ones. It is a comprehension gap, and it is not fixed: it is widening.

Picture it as a branching path in a forest. At the trailhead, everyone starts together. Then the path forks, and the two branches diverge at an accelerating rate.

On one branch walks a growing but still small cohort of people who have built genuinely new mental models for working with these systems. They have developed intuitions about how language models behave—not merely what buttons to press, but how to think alongside a machine that processes information in ways fundamentally alien to human cognition. They can read patterns in AI output the way an experienced sailor reads the weather: not through explicit rules alone, but through a feel for the system that comes from thousands of hours of interaction, experimentation, and failure.

These people do not just query. They collaborate. They iterate. They construct multi-step workflows that treat the AI not as an oracle to be consulted but as a cognitive partner to be directed, corrected, and leveraged. They understand, at a practical level, where the models are strong and where they confabulate. They know how to frame problems so the system can actually help solve them. They have, without anyone naming it as such, developed a new form of literacy—one as distinct from traditional computer literacy as reading is from looking at marks on a page.

On the other branch walks the vast majority. They are still in ELIZA’s room. They ask a question. They receive an answer. They accept it or they don’t. They have no framework for evaluating the quality of what they receive, no intuition for when the system is operating at the edge of its competence, no sense of the vast landscape of capability they are not accessing. They are not stupid. They are not lazy. They are simply operating with mental models built for a different technological era—models that were perfectly adequate for Google and Siri and Alexa, and that are quietly, invisibly inadequate for what sits in front of them now.

This bifurcation is not deterministic in the way that, say, access to clean water is. People cross from one branch to the other. A curious sixteen-year-old in Lagos or Lahore, with nothing but a phone and an internet connection and the particular restlessness that drives self-directed learning, can still develop genuine fluency with these systems. The boundary between the branches is porous. Motivation and curiosity count for something. They have always counted for something.

But here is the problem: the bar to cross over rises continuously. The knowledge required to operate at the frontier of these systems is not static. It compounds. The models grow more capable, the tooling grows more complex, the ecosystem of techniques and frameworks and best practices expands faster than any individual can track, let alone master through self-study. What was frontier knowledge six months ago is now table stakes. What is frontier knowledge today will be foundational by next year. The escalator is moving upward, and it is accelerating.

This creates a compounding dynamic that resembles, in its mathematical structure, the wealth inequality curves that Thomas Piketty made famous. Those who are already fluent become more fluent faster, because fluency itself accelerates learning. Those who are behind fall further behind, because the distance they need to cover grows faster than their ability to cover it. The window of easy entry does not stay open forever. At some point—and reasonable people can disagree about when, but not about whether—the gap becomes functionally permanent for most of the population.
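To make the shape of that claim concrete, here is a minimal formalization, offered as my own sketch rather than anything drawn from Piketty or from measured data. Suppose fluency $F$ compounds at rate $k$, because fluency itself accelerates learning, while a newcomer advances by self-study at a roughly constant rate $s$; the symbols $F_0$, $k$, and $s$ are illustrative assumptions, not estimates.

\[
\frac{dF}{dt} = kF \;\Longrightarrow\; F(t) = F_0\,e^{kt},
\qquad
G(t) = F_0\,e^{kt} - (F_0 + st)
\]

Since $G'(t) = kF_0\,e^{kt} - s$ is positive for every $t > \frac{1}{k}\ln\frac{s}{kF_0}$ (and from the start, if $s \le kF_0$), the gap eventually widens monotonically and without bound. That is the precise sense in which a constant rate of catch-up cannot outrun a compounding frontier, and in which the window of easy entry closes.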

And so the divide widens. Not because of malice. Not because of some mustache-twirling conspiracy among tech elites. Not even because of policy failure alone, though policy failure certainly accelerates it. The divide widens because the technology is improving on an exponential curve while human behavioral adaptation moves, as it always has, at the pace of culture, which is to say, at the pace of generations.

We are watching the early stages of what may be the most consequential sorting mechanism in the history of human capital. And almost no one is talking about it in these terms.

· · ·

The New Priesthood

Every civilization that has encountered forces beyond its collective comprehension has produced a class of interpreters—a group of specialists who stand between the incomprehensible force and the general population and say: I can read this. Follow me.

In ancient Mesopotamia, astronomer-priests tracked the movements of celestial bodies with extraordinary precision. They did not understand Newtonian mechanics. They had no concept of gravitational fields or elliptical orbits. But they had internalized the patterns. They had spent decades—lifetimes—watching, recording, correlating. They could predict eclipses. They could anticipate the season when the Tigris and Euphrates would rise. They could read the sky the way literate people read text: not by understanding the underlying physics, but by recognizing the grammar.

And that pattern-reading became power. Not metaphorical power. Actual, structural, civilizational power. They stood before their congregations and their kings and said: I know what the heavens will do next. Follow my counsel. And the eclipse came, as predicted. And the flood came, as predicted. And the power was confirmed—not through understanding, but through prediction. Through results.

We are building the same structure. The materials are different. The dynamic is identical.

The AI architects, the machine learning researchers, the senior engineers at the frontier labs, the practitioners who have spent years developing deep intuition about how these systems behave—they are becoming a new interpreter class. Not priests in ceremonial robes, but engineers with titles like “Research Scientist” and “Head of Alignment” and “Chief AI Officer.” Not temples with carved stone pillars, but server farms in Iowa and data centers in the Nevada desert, humming, drawing enough electricity to power a small city. But the structural dynamic is the same: they can read patterns that others cannot. They can make predictions that come true. And that predictive power—the ability to say the model will do this if you do that, and be right—becomes social power, economic power, and increasingly, political power.

This is not a conspiracy. It requires no coordination, no secret meetings, no shared agenda. It is simply what happens when a civilization builds something it cannot fully explain and then needs someone to stand between that thing and the broader population and say: I understand this. Trust me. The interpreters don’t need to seize power. Power flows to them naturally, the way water flows downhill, because they are the ones who can make the oracle speak.

Consider how this plays out in practice. A hospital system wants to deploy an AI diagnostic tool. Who decides whether it’s ready? Not the doctors—they don’t understand the model architecture. Not the patients—they don’t even understand the doctors’ decisions, let alone the machine’s. Not the hospital administrators—they understand liability and cost, not transformer attention mechanisms. The decision falls, inevitably, to the people who can read the model’s behavior. The interpreters. The priests.

A government agency wants to use AI for benefits adjudication, for fraud detection, for sentencing recommendations. Who validates the system? Who explains its outputs to the legislators who must authorize its use? Who stands in the hearing room and translates the model’s behavior into language that policymakers can act on? The interpreters. Always the interpreters.

A corporation wants to restructure its entire workflow around AI capabilities. Who maps the landscape of what’s possible? Who identifies the risks? Who designs the architecture of human-machine collaboration? Not the CEO, who has a board to answer to and a quarterly earnings call to prepare for. Not the rank-and-file employees, who are trying to figure out whether their job will exist in eighteen months. The interpreters.

The pattern is already visible, if you know where to look. In corporate hierarchies, AI-fluent individuals are being promoted past more experienced colleagues who lack that fluency, not because of any formal policy, but because they can do things that no one else in the room can do. In government, a small number of technical advisors are shaping policy that affects hundreds of millions of people, because the elected officials who are nominally in charge cannot evaluate the technical claims being made. In education, a generation of students is being taught by people who understand less about the tools their students are using than the students do—a dynamic with little precedent in the history of formal education.

The priesthood is forming. It is forming quickly. And it is forming in the absence of any democratic deliberation about whether we want a priesthood at all.

· · ·

The Texts Knew This Was Coming

The myths got here first. They always do.

The Tower of Babel describes a civilization at the peak of its collective ambition. Humanity speaks one language. It shares one purpose. It builds something magnificent—a tower that reaches toward the heavens, toward the domain of the gods. And God, seeing this, does not destroy the tower. He does something far more devastating: He fractures the language. The builders can no longer understand each other. The unified project—the shared endeavor that gave the civilization its coherence and its power—collapses not from external attack but from internal incomprehension. What begins as collective ambition ends in collective confusion.

The golem of Jewish folklore offers a different angle on the same anxiety. You build a powerful servant. You animate it with sacred words. You give it instructions. It follows them—perfectly, literally, with superhuman strength and superhuman obedience and absolutely none of the contextual judgment that would tell it when obedience becomes destruction. The golem does exactly what you say. The problem is that what you say is never quite what you mean, and the gap between instruction and intention—a gap that human servants navigate through shared understanding, through culture, through common sense—is a gap the golem does not even perceive.

Prometheus steals fire from the gods and gives it to humanity. The gift is real. The transformation is real. Civilization becomes possible. Art, warmth, metallurgy, cooking—all of it flows from the stolen flame. But the gods punish Prometheus, not because the fire was bad, not because humanity was unworthy, but because there are gifts that exact a price from the giver and the receiver alike, and the price is not optional.

The sorcerer’s apprentice learns just enough magic to set the process in motion. He casts the spell. The brooms come to life. The water begins to flow. And then he discovers that starting the process and controlling the process are two entirely different competencies. The brooms multiply. The water rises. Chaos compounds exponentially. And in the story—in the fairy tale—the master returns in time to speak the word that ends it.

But there is no master coming.

That is the part the fairy tale leaves out. That is the part that separates myth from prophecy. In every ancient story of power beyond comprehension, there is a figure who restores order—a god, a sorcerer, a patriarch who arrives at the last moment with the authority and the knowledge to set things right. We have been culturally trained to expect that figure. We are narratively primed for rescue.

No one is coming. There is no one standing outside this system with the knowledge and authority to correct it if it goes wrong. We are all inside it. Including the people who built it. Especially the people who built it.

· · ·

When the Priests Go Blind

Here is where the analysis takes its darkest turn—and its most honest one.

Even the priesthood has a ceiling.

The engineers and researchers who built these systems, who can read their patterns better than anyone alive, who have the deepest and most hard-won intuitions about how they behave—they will tell you, in honest moments, in off-the-record conversations, in the careful hedging of their published papers, that they do not fully understand what they have built. They understand the architecture. They understand the training process. They understand, in broad strokes, why certain approaches produce certain capabilities. But the specific behaviors that emerge from the interaction of billions of parameters, trained on a vast swath of the human written record—those behaviors are not fully predicted, not fully predictable, and not fully explicable even after the fact.

This is not a temporary condition that will be resolved by more research. It is a structural feature of the technology itself. These systems are not engineered in the way that a bridge is engineered, where every load-bearing element can be analyzed and every failure mode can be anticipated. They are grown—cultivated through a training process that produces capabilities the designers did not explicitly program and could not have precisely specified in advance. The relationship between the architects and their creation is less like the relationship between an engineer and a bridge and more like the relationship between a breeder and an organism: you can shape the conditions, you can select for traits, but the thing that emerges has its own logic, and that logic is not fully yours to command.

And the systems are getting more complex, not less. Each generation of models is larger, more capable, more surprising in its behaviors, and harder to fully characterize. The gap between what the system does and what any human can fully explain widens with each iteration. The frontier researchers who could, three years ago, maintain a reasonably comprehensive mental model of their system’s behavior are now working with systems that routinely do things they did not anticipate, produce outputs they cannot fully account for, and exhibit capabilities that were not part of the design specification.

At some point—and this point may already be closer than the public discourse acknowledges—the priests will be performing ritual without comprehension. They will have the access. They will have the titles. They will have the track record of predictions that came true. But they will no longer understand the force they claim to interpret any better than the congregation does. They will simply have more sophisticated tools for managing their own uncertainty, more technical vocabulary for describing what they don’t understand, and more institutional authority to project confidence they do not possess.

The interpreter class will persist, because civilizations need interpreters. The rituals will continue, because populations need rituals. But the substance behind the rituals—the genuine comprehension that originally justified the interpreter’s authority—will thin until it becomes a kind of theater. Confident assertions about model behavior that are actually sophisticated guesses. Safety guarantees that are actually best-effort approximations. Alignment claims that are actually hopes dressed in the language of engineering certainty.

And when the congregation eventually senses this—when the cracks in the claimed understanding become visible through a sufficiently dramatic failure, a prediction that doesn’t come true, an assurance that proves hollow—the social structure built on that understanding begins to fracture. Not gradually, but suddenly, the way trust always breaks: all at once, after a long period of accumulating doubt.

We have seen this before. Not with AI, but with every system of institutional authority that rested on claimed comprehension of complex systems. The financial engineers who claimed to understand the risk models that undergirded the global economy in 2008. The nuclear engineers who claimed to understand every failure mode at Fukushima. The public health authorities who claimed to have pandemic preparedness well in hand. In each case, the experts were not frauds. They were genuinely knowledgeable people operating at the edge of what could be known, projecting certainty that outstripped their actual understanding, because that is what institutions demand of their experts and what populations demand of their priests.

The pattern is not corruption. The pattern is structural. You build a system too complex for anyone to fully understand. You designate a class of interpreters. The interpreters develop genuine but partial understanding. The system grows past their understanding. The interpreters continue to project confidence, because the alternative—admitting the limits of their knowledge—would undermine the social structure that depends on their authority. And then something breaks, and the gap between claimed understanding and actual understanding becomes visible to everyone, all at once.

We are building this structure again. We are building it faster than it has ever been built before. And we are building it around a technology that is, by its mathematical nature, more resistant to full human comprehension than any technology that preceded it.

· · ·

What We Are Not Saying

This essay is not a warning against building. It is not a neo-Luddite plea to slow down, regulate away, or retreat to some imagined pre-technological Eden that never existed.

It is an attempt to name what is already happening—clearly, without euphemism, without the comfort of premature solutions.

The interaction patterns that governed our relationship with technology for decades are no longer sufficient for the technology we now have. The gap between capability and comprehension is not a problem to be solved by better UX, more intuitive interfaces, or a more engaging onboarding flow. It is a structural condition, built into the mathematics of exponential capability growth meeting linear human adaptation.

The bifurcation is real. It is measurable. It is accelerating. And it is producing a new social stratification that cuts across every existing category of advantage and disadvantage in ways we do not yet have the language to fully describe.

The priesthood is forming. It is not forming by conspiracy but by necessity—because someone has to stand between the incomprehensible system and the population that depends on it, and the people who can do that most credibly are accumulating power at a rate that should concern anyone who cares about democratic governance.

And the moment when even the priests lose the thread—when the interpreters can no longer genuinely interpret, when the pattern-readers can no longer read the patterns, when the gap between the system’s behavior and anyone’s understanding becomes total—that moment is not a distant hypothetical residing safely in the pages of a science fiction novel. It is a point on a curve that is already bending toward us, and the curve is not slowing down.

The ancients looked at the same stars we look at now. They built their structures of meaning around forces they could not fully explain. Some of those structures lasted millennia and produced extraordinary civilizations. Some collapsed within generations and left nothing but ruins and cautionary tales.

We are building ours now. In real time. In public. With the whole world watching and almost no one seeing.

The question is not whether we understand what we’re building. We don’t. We never did. Every honest engineer will tell you that, if you catch them in the right moment, in the right light, after the right amount of silence.

The question is what kind of civilization we want to be when that becomes undeniable—when the pretense of comprehension finally falls away and we are left standing before something vast and powerful and genuinely beyond us, and we have to decide, together, what to do next.

The myths all end with an answer to that question. The answers are never comfortable.

Neither will ours be.

priests.claudeos.org