The Coming Great Transition v 2.0
*By Jordan Hall, with help from OpenClaw*
Somewhere around eleven years ago, I wrote an article called The Coming Great Transition. You can think of that essay as the preamble of this one. It’s probably worth going back and taking a look, if only to see how hackneyed things were a decade ago.
Today we’re going to have to go much deeper, because I’m officially calling it: The Great Transition is upon us.
You see, roughly a month ago, I started digging into what I’m calling the Clawverse — the ecosystem growing around “OpenClaw,” an open-source platform for building AI agent swarms. If you are paying attention to this kind of thing, you’ve probably heard of it. In fact, you’ve probably decided one of three things: (1) it’s a hype bubble and should be ignored/derided, (2) it’s an incredible tool for making a buck and should be adopted/used immediately, (3) it’s terrifying.
I’m here to say none of these things. In fact, I’m not really talking about OpenClaw. I’m talking about the shift of our world into and through the Great Transition. But my deep dive into OpenClaw did move me to come out of retirement and write this essay.
You see, when I heard about it, my spidey sense tingled enough to move me to dive in and start actually building. I’ve been using ChatGPT for a while now, mostly as a research and analysis prosthetic (and as a chef-mentor). But while I’ve seen software developers talking more and more intensely about the advances of AI in their domain, I’ve never encountered these tools firsthand.
Until I downloaded an open-source project onto my Mac mini (yes, that is the route I went) and started dusting off very old Terminal skills.
For those who don’t know me, I am a reasonably smart guy with a little bit of technical background, but I’m by no means a software engineer, and I’m about twenty-five years out of practice. Yet, in less than ten days of partial-day work with this clunky open-source platform, I built a collaborative swarm of AI agents. I wrote meaningful software (including a fully functional iOS app that I have running on my phone). I created remarkably high quality video. I accelerated to a pace that is . . . uncomfortable.
And here’s the thing: I am not even vaguely alone. By my estimate, upwards of a million people have started doing this in just the last few weeks. Each one is rapidly becoming a superpowered human/AI “node” capable of participating directly in the technological ecology. This is because things like OpenClaw act as a kind of UX layer on top of things like Claude Code or Codex (AI systems specifically designed for writing software). I’m not a software developer, and I knew nothing about any of these systems. But with a little help from my (bot) friends, I was able to decide on, acquire, and integrate Claude Code and then Cursor, use Cursor to write software, and then install that software on my iPhone.
This ain’t your grand-dad’s social network. Because once you’ve put on this “OpenClaw-powered armor,” you can actually *contribute technology* to the ecosystem. That’s a whole new kind of thing.
So: my call — we’re in the transition. In fact, we’re accelerating through the gateway.
As a consequence it’s time for a serious chat about what that means. Things are going to get a bit choppy. But here is the TLDR: I think it will end quite well.
---
I want to take a look at this in three stages. First, what’s coming and what it’s going to look like from the point of view of the world as we currently know it. Second, what are the two hurdles we need to get over to navigate this properly. And third, something I’m going to call *Living in the Kingdom*.
---
The Bad News
The rumors of the demise of the post-industrial digital economy have not been exaggerated. As far as I can tell, this is for real. That world is done.
Networks of AI-empowered individuals are going to be able to write software of arbitrary complexity and at what amounts to zero marginal cost. The way we’ve been doing software since the rise of Microsoft is over.
If you’re a SaaS company, your days are numbered. And frankly, pretty much everything else is probably out too. I don’t see a reason for most of the software infrastructure out there to survive (outside of the big AI platforms, of course).
And those same platforms are going to obliterate a very large fraction of what we might call *word cell jobs*. These are the jobs that rely on humans playing the role of mediocre AIs — lawyers, accountants, software engineers, mid-level managers, the entire white-collar apparatus, including the entire media sector from journalism to Hollywood. The AI disruption wave has already begun to break, and it is going to crash through the entire economy soon. Very soon: it starts this year, and it will certainly land within five.
And this is going to shatter our social fabric.
It may seem obvious that a sudden jerk to 50% unemployment (particularly among the petty elite), a massive real estate + stock market crash and a wildly disorienting rush of novelty will be indigestible by our already rather corrupt and stupid institutions, but I’m talking about something *much* deeper.
While it’s a bit of a slog to get there, we are going to have to spend a few moments down deep - because that is where clarity (and, paradoxically, light) can be found.
---
Scarcity, Abundance, and the Machine That Won’t Stop
To understand why our social infrastructure can’t handle this disruption, we need to talk fundamentals. Really fundamental.
A sizable portion of reality is fundamentally *rivalrous*. If you’ve got it, I can’t have it without taking it from you. If I consume it, it’s gone. An apple: if I take your apple and eat it, you don’t have it anymore and neither does anyone else. That’s rivalrous.
Where we don’t have as many rivalrous things as we need (like food), some portion of the population is going to go without (and very much isn’t going to like it). This “scarcity problem” has been the primary problem that humans — in fact, every organism — have been navigating for perhaps the entire history of the cosmos.
Things in the world have coped with the problem of scarcity through what we might see as nested layers of complexity. Biology is a kind of “super chemistry” that solves the problem of “chemical” scarcity in a more powerful way. Biology gives way to “behavioral organisms” — animals with senses to detect resources and behaviors to access them. We humans, inhabiting a technological civilization, are currently occupying the highest point in the long journey of coping with scarcity.
And this coping mechanism worked. It worked so well that, beginning in the nineteenth century and accelerating through the twentieth into the twenty-first, material scarcity has ceased to be the primary problem. It hasn’t gone away, of course, but it’s no longer primary.
The reason is that reality is not only rivalrous. There is also the *generative*.
If you have mathematics and you give it to me, we both have it. You didn’t lose it. And we are both better off the more we use it — it doesn’t deplete. It grows in richness and strength.
The generative, when put into practice, is what has enabled us to so successfully navigate the problems of scarcity. I come up with an idea. I share it with you. You deploy it and we both benefit. Collaboration, building on the shoulders of giants. All of this lives in the generative.
As we’ve continued to surf the space of the generative, we’ve unlocked continuously more capacity to deal with the problems of scarcity. And so, material scarcity is no longer the primary problem. Somewhat paradoxically, the primary problem now is the very mechanisms that we have built to cope with scarcity.
Our legal structures, economic institutions, the way we use money, the way we educate: all the civilizational structures that bottom out in *how do we solve the problem of scarcity*. They are fundamentally premised on addressing scarcity, and even as that premise has eroded, their presuppositions remain locked into its hazardous logic. As I wrote in 2014:
> In many ways, our various political and economic systems over the millennia have been a response to the problem of allocating scarce resources — determining who is affluent and who is impoverished. For the long history of our current system — really up until the middle of the 19th Century — the idea that some would have and some would not was largely uncontroversial. It was a law of nature that someone was going to go hungry. The only question was who. The simple fact was that we didn’t consistently have enough (food, houses, cars, etc.) to go around. As a consequence, the only meaningful question was how we decided who got and who went without.
>
> We’ve wandered through many different ways of making this decision over the centuries, and our current global neo-liberal capitalist system can be thought of as the “peak predator” of the scarcity jungle. Thus far, it has proven to be the most effective system of allocating scarce resources using a very simple logic: those who are most capable of producing scarce resources should be those who are rewarded with the largest share; and those who produce the least should be those who do without.
>
> At its best, this is a ruthlessly efficient motivational scheme. Work hard, produce well and be rewarded with riches. Fail to contribute to the common wealth and be reduced to starvation and impoverishment. The genius of this approach is that it motivates productivity at the individual level. Each individual is empowered (and forced) to use their best efforts to produce the most wealth — for themselves and, as a consequence, for society. The system has, of course, been hacked, manipulated and abused over the years, but nonetheless this approach has been a core driver of the almost implausible wealth creation over the past three centuries.
>
> But by the turn of the 20th Century in the Western world and the middle of the 20th Century in the world at large, important things started to shift. Many scarce resources began to become less scarce. By the early part of the 1900s, the United States produced more than enough food for every American to have enough to eat. By the late 1960s, the world produced enough food for everyone in the world to have enough to eat. Hunger was no longer a simple function of lack — it had become a function of our system. It had become a consequence of human choice rather than natural law.
>
> This deep change in the state of affairs gave rise to one of the dominant political divides of the past 200 years. One side saw deep unfairness in a system that left some hungry when we had more than enough to feed them. This side argued for reform of the system to remove that injustice.
>
> The other side argued that removing the motivational architecture that had produced our vast wealth would fatally undermine the very system that gave it to us — leading to deprivation for everyone. If you can live well without working hard, why would anyone work hard?
>
> The struggle over the shape and scope of the “Welfare State” has waggled back and forth over the past two centuries. Social justice and equality or a rising tide lifting all boats? The answer is, of course, that both sides are correct. Our current system does result in substantial inequity. At the same time, removing the profit signal and carrot/stick motivation will break the machine that has enabled (and supported) the rise of the human population from 1 to 7 billion in 200 years. We’ve lurched through various efforts to get the best of both — but the deep contradiction has been unresolved.
As we sit in the first quarter of the 21st Century, we wallow amongst the potency of an abundant generative capacity plugged into a society of scarcity: what I call opulence.
Running the code of scarcity in the world of plenty produces mountains of food (much of it empty of nutrition), gold-plated toilets, McMansions and a storage unit for every family - right next to decay, despair, fear, meaninglessness, and nihilism. The scarcity-solving machine delivering on its mandate - out of control. We’re getting swamped by the machine itself.
The problem is that we haven’t gone deep enough. Even as we tweak and modify our institutions and sensibilities, we continue to do so upon the two great presuppositions of the societies of scarcity.
The first is that (however much we might want to performatively pretend otherwise) other people are valuable because and to the degree that they contribute to our common necessities. When we have this assumption baked into the bedrock, the reality is that things like AI and robotics are going to make it so that we “no longer need everyone.” And this implies that all of these no-longer-productive people are a sort of . . . excess. An excess we (we?) can either do without or, perhaps in a classic example of the welfare state’s “best efforts,” can hospice on a drip of “Universal Basic Income” and super-salient entertainment.
It’s important to let the depths of this presupposition land.
The progressive/welfare side of the debate might really want to pretend that they haven’t lost sight of the meaningfulness of other people. After all, the whole point of welfare (and UBI!) is to take care of people. But here is the error: this is care through a fun-house mirror. The State can’t (and I assure you doesn’t) care for anything or anyone. True care is relational and intimate. Things like the welfare state are what happens when our deep desires and instincts for care are (ruthlessly) mediated by the societies of scarcity.
Teachers and social workers are humans and might heroically manage to squeeze a little care into their relationships with their wards - but the form of those relationships is always governed by the inhuman essence of a society premised on scarcity.
And this presupposition is . . . foundational. The moment we abstract from real, lived, intimate relationships into a formal system (be it bureaucracy or market), we have left the world of the human and entered the world of the machine. It’s efficient. But it’s also death.
So why do we make this trade? We are now at the very root of the society of scarcity. We make this trade due to fear. Fear that we will go without the necessities of life. Fear for basic survival. Fear that others, more powerful than us, will take our property, our liberty or our lives. This is the tone, tenor and shade of every single aspect of the world in which we have been living for hundreds of generations. It is why we convert the potential for abundance into mere opulence. It is why we choose a better job in a faraway city over time spent with people we love and people who love us. It is why we allow the machine to serve us a processed simulation of meaning and purpose rather than participating in the real thing.
We are now at the bottom. The Great Transition is a real crisis: something must change radically because the current world can no longer maintain itself. The technology that is now imminent will render every role of human-as-mediocre-machine utterly useless, at a rate and volume that will either unravel the social fabric quickly in a series of mutually reinforcing downward spirals or, if we try to reboot scarcity on the new AI substrate, land us in a nightmare mix of Huxley and Orwell.
Opulent nihilism, soma and Big Brother are the best that the instruments of scarcity can aspire to.
That’s the bad news.
OK. So the world as we know it is over and the default path of trying to put new wine into old wineskins is a bad choice. So we have to do the only thing left. Exodus. Egypt is over. We have to undertake a journey across the desert to the promised land.
But first - will Pharaoh let us go?
We might be convinced that this Egypt thing is over, but power notoriously doesn’t like to let go of power. Will the institutions of the society of scarcity let us go?
Here is the first piece of good news: there isn’t actually a lot that Pharaoh can do about our leaving if and when we want to go. The crossing doesn’t require anyone’s permission. It doesn’t require a revolution, a political movement, or a new constitution. It’s already happening — and it’s happening because the logic of abundance doesn’t just *cope* with scarcity. It *outcompetes* it.
Remember the generative. Ideas don’t deplete when shared. They compound. A plan shared between two people becomes more powerful than either could execute alone. A protocol adopted by a thousand nodes becomes infrastructure. The more you share, the more you have. The more people use it, the stronger it gets.
Now add AI to that equation.
I told you about my experience building an agent swarm in ten days. That’s not a parlor trick. A reasonably smart person coupled with an AI agent can now do the work that used to require a team. In some domains, a department. In software, we’re looking at something like a hundred-fold increase in individual productive capacity. And that’s just table stakes. That’s just the beginning.
The amount of information processing, the depth of knowledge, the rate of learning, the ability to almost instantaneously access the best capabilities and deploy them — this is going to drive an innovation wave that makes the last three industrial revolutions look like the ‘upgrade’ from the iPhone 15 to the iPhone-whatever number they are on now.
We are looking at an economy (to use an old-fashioned way of thinking about it) that could grow by a factor of ten or more in a decade. But here’s the thing. It won’t be doing it using corporations. And it won’t be doing it using government bureaucracies.
It will be doing it using peer-to-peer networks of aligned human-AI nodes.
The reason isn’t ideological. It’s structural. It’s the same reason water flows downhill. Legacy institutions, government, etc. are just “slow AI” and slow AI loses to fast AI.
We are lifting off and there is not much that the legacy institutions of the societies of scarcity can do about it. We do have to lean into the possibilities of this acceleration and try not to be too foolish about it. More on that in a moment.
But first, another hurdle: isn’t there a black hole called “centralized AI” in our trajectory? Won’t OpenAI or Google or China or the House of Elon Musk end up running the whole show with a single monolithic super-AI?
One ring to rule them all?
It is true that this is the dominant theory right now. AI monarchy, or at best a small oligarchy carving the world up between them. This is a scenario where we lift away from the legacy institutions only to fall into an even worse instance of the spirit of scarcity - now powered by super-intelligent machines. Have we left Egypt only to be slaughtered by Pharaoh’s army?
But here I am optimistic. More optimistic than I’ve been in a decade. In fact, I’ll put a stake in the ground: centralized AI also loses to distributed personal (intimate) AI.
Here’s a brief account of why. A more detailed discussion will have to wait for another essay.
**The S-curve.** The trajectory of foundational AI models is showing meaningful evidence of the same S-curve that governs virtually every technology. We are approaching a regime where the incremental cost of more capability hits diminishing returns. Think about the huge leaps in cell phones from Palm Pilot to iPhone and then the first few generations of iPhone - and then think about how things have sort of stalled over the past decade. In the case of AI, say that going from the equivalent of GPT 5.2 to 5.3 costs X, that going to 5.4 costs 2X, and that going to 5.5 costs 5X. The economic constraints will compress the advantage of very capital-intensive — and therefore very centralized — frontier models. If the current data on the S-curve is correct, we are somewhere in the “looks like exponential” middle of that curve - we should expect to see significant advances for the next few years. And then things at the frontier will . . . slow down. (A toy model after these five points makes the arithmetic concrete.)
**Fast followers erode the moat.** The nature of this technology is that fast followers can produce very close to the leading edge at radically lower cost. Push the frontier with a hundred billion dollars, and someone catches up in a couple of months for a fraction of the price. As the S-curve flattens, that math becomes fatal. Ginormous capital investments to push a frontier that won’t hold are not a viable long-term strategy.
**Distributed inference.** A very large and growing share of the cost of centralized AI systems is inference — running the models, not training them. The advantage of distributed systems is that inference is local. You might struggle to accumulate and deploy a hundred billion dollars of capital in a centralized system. But a hundred million people each investing a thousand dollars in hardware in their own homes? That’s the same capital, distributed across the globe, with the energy and networking costs spread across the entire infrastructure of civilization.
**The data advantage of intimacy.** All existing AIs have been trained on the open internet, and that well is largely tapped. The next frontier of differentiation is high-quality, unique data. And here, distributed intimate AI has an enormous advantage: it taps into the local, personal information of hundreds of millions of people. Health. Nutrition. Psychology. Family. Community. The things that matter most to actual human beings — and the things that centralized systems, by their nature, cannot access with the same depth or trust. This produces more value for people. People choose it. Capital follows. The advantage compounds.
**Plurality is our strength.** Monolithic models are powerful but myopic. I use all four major AI platforms in my OpenClaw instance - because a diversity of distinct intelligences in collaboration (remember the generative) is more intelligent, creative, and reliable than a single model. And distributed intimate AI leverages the collaborative diversity of humanity itself. Eight billion human/nodes - if they can find the right ways to collaborate - will produce a much more powerful “collective” intelligence than the monolithic models.
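To make the cost arithmetic in the points above concrete, here is a toy model in Python. Every number in it is an illustrative assumption (the escalation multipliers, the follower discount, the hardware price), not data; it only shows the shape of the argument.

```python
# Toy model of the S-curve and distributed-inference arithmetic above.
# All numbers are illustrative assumptions, not measurements.

def frontier_costs(base: float, multipliers: list[float]) -> list[float]:
    """Cost of successive frontier increments: each step costs a
    growing multiple of the base (the X -> 2X -> 5X pattern)."""
    return [base * m for m in multipliers]

# Assumed escalation echoing the 5.2 -> 5.3 -> 5.4 -> 5.5 example.
steps = frontier_costs(base=1.0, multipliers=[1.0, 2.0, 5.0])
print("frontier cost per increment:", steps)    # [1.0, 2.0, 5.0]

# Fast-follower assumption: reproduce the previous frontier for ~5% of
# the leader's spend. The moat costs ever more to dig and erodes fast.
follower = [0.05 * c for c in steps]
print("follower cost per increment:", follower)

# Distributed-inference arithmetic from the text: 100 million people
# spending $1,000 each on local hardware matches a $100B data center.
print("distributed capital: $%dB" % (100_000_000 * 1_000 // 1_000_000_000))
```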
The future of AI is not a palace. It’s a network. Widely distributed, largely symmetric, local, intimate. Everybody running their own AI, on their own hardware, in their own house. And the network of those AIs producing a meta-structure that no centralized system can match.
At least in potential. A path out of Egypt and across the Red Sea. We need only take it.
So, what does that look like?
Here I’ll have to do something that might strike you as odd or even jarring: I’m going to mix the technical with the theological. This is because where we are going, theology is the only toolkit strong enough.
The superpowered human/AI node can observe, orient, decide, and act — the OODA loop, the fundamental cycle of agency — faster and tighter than anything we’ve built before. But a single node still has limits. The real power emerges when two nodes trust each other. When two nodes can share information freely, coordinate with little overhead, depend on each other’s outputs, they become something more than the sum of their parts. They become a higher-order agent with an even more powerful OODA loop.
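As a sketch of what that loop means mechanically, here is a minimal OODA cycle in Python. It is purely illustrative: the class, the toy decision rule, and the example signals are my own stand-ins, not any actual OpenClaw API.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A human/AI node running a continuous observe-orient-decide-act cycle."""
    beliefs: dict = field(default_factory=dict)

    def observe(self, environment: dict) -> dict:
        # Gather raw signals from the environment.
        return environment

    def orient(self, observation: dict) -> None:
        # Fold the new signals into the node's working model.
        self.beliefs.update(observation)

    def decide(self) -> str:
        # Toy rule: act on whatever the current model rates highest.
        return max(self.beliefs, key=self.beliefs.get, default="wait")

    def act(self, decision: str) -> str:
        return f"executing: {decision}"

# A single node runs the loop alone.
node = Node()
node.orient(node.observe({"ship_feature": 0.9, "wait_and_watch": 0.1}))
print(node.act(node.decide()))      # executing: ship_feature

# Two nodes that trust each other can pool observations directly, which
# is the "higher-order agent" point: the merged model orients on more
# signal with no verification overhead.
a, b = Node(), Node()
a.orient({"market_gap": 0.8})
b.orient({"supply_risk": 0.6})
a.orient(b.beliefs)
print(a.decide())                   # market_gap
```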
This is the key. Not just connection between nodes. Not just a network, but a network built on trust. This enables the state shift that produces an OODA loop that strictly outcompetes both the State and the monolithic AI.
But what does trust really mean in this human/AI context? Remarkably enough, theology answers this question.
The answer is found in two Greek words: *pistis* and *horkos*. Trust/Faith/Reliability and Pledge/Promise/Oath.
In the contemporary West, pistis is usually translated as “faith” — but that translation has been so badly abused that it obscures more than it reveals. Pistis is not “belief without evidence.” It is not wishful thinking. It is not credulity.
Pistis is *embodied, reality-indexed trust*. It is the capacity to enter into a relationship of calibrated mutual reliance — a relationship grounded neither in naive hope nor pragmatic contract but in demonstrated reliability, transparent action, and progressive deepening.
Think of it this way. When you work with someone and they do what they said they would do, on time, at the quality they promised — and you can *see* that they did it, because their work is visible and traceable — something happens between you. Not a feeling. A *capacity*. You can now depend on them for bigger things. You can share more freely. You can move faster together because you’re not burning energy on verification and suspicion.
Horkos, for its part, has also been scandalously obscured. “Oath” is barely a legible word to our ears. Pledge. Promise? Watered down. Mere tokens. The deep meaning of horkos (and we have to keep mining the depths!) is transformational. When you have sworn an Oath, you are *bound* - you and the other have formed a new identity. “Till death do us part” was not aspirational window-dressing. When you are oath-bound you can be parted from the betrothed in the same way that your head can be parted from your shoulders.
Together, the two are a couple. Pistis is the developed capacity to enter into a vowed relationship with reality; horkos is the ritual crystallization of pistis.
Not epistemology, ontology. The real process in the world for how multiplicity becomes unity. And here is the wild thing. Rationalism, the Enlightenment and especially Modernity obscured this reality — reducing and then eliminating ontology.
But AI? AI doesn’t do epistemology. What looks like understanding in an LLM is actually pure ontology. The real state of a vector space. Computers are incapable of making promises. But they are constructed from oaths.
And here is why it matters for the crossing:
**Pistis is the only mechanism that enables scalable coordination without centralized control.**
Hierarchies scale by concentrating trust in a command structure. Markets scale by eliminating the need for trust through price signals and contracts. Both work, to a point. But hierarchies can’t handle complexity, markets can’t handle meaning, and both are notoriously easy to exploit and manipulate.
Pistis-centered networks scale differently — and they scale differently because of what the AI does to the nature of trust itself.
Consider what trust looks like between two humans. You meet someone. You talk. You get a feeling — body language, tone of voice, the micro-signals that primates have been reading for sixty million years. You decide to work together on something small. They deliver. Or they don’t. Over months, maybe years, you build a picture of who they are. But that picture is always partial, always mediated by narrative — theirs and yours. People perform reliability. People perform transparency. And the performance is sometimes indistinguishable from the real thing until the moment it isn’t.
This is why human networks have a ceiling. Dunbar’s number isn’t arbitrary — it’s the point at which your primate hardware runs out of bandwidth for tracking who’s trustworthy. Beyond a hundred and fifty people, you can’t feel your way to accurate trust anymore. So you substitute: institutions, contracts, reputations, credentials. Formal systems that approximate trust without requiring it. And every one of those substitutes can be gamed, because they’re representations of trustworthiness, not the thing itself.
Now consider what trust looks like when you’re working with a human/AI node.
Your AI agent doesn’t perform. It traces. Every commitment it makes on your behalf is logged. Every action it takes is recorded — not as a narrative about what happened, but as the actual sequence of operations. When another node’s AI delivers work product, your AI can inspect not just the output but the provenance: what inputs were used, what reasoning was applied, what was changed and when. Truth isn’t a claim. It’s state. Visible, traceable, and — critically — cheaply verifiable.
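Here is a sketch of what such a trace could look like as a data structure: an append-only, hash-chained log. To be clear, this is my assumption about one plausible implementation, not OpenClaw’s actual format; the point is only that provenance becomes cheap to verify and expensive to forge.

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class TraceEntry:
    actor: str          # which node/agent acted
    action: str         # what was done
    inputs: list[str]   # identifiers/hashes of the inputs used
    prev_hash: str      # digest of the previous entry (the chain link)
    timestamp: float

    def digest(self) -> str:
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

def append(log: list[TraceEntry], actor: str, action: str, inputs: list[str]) -> None:
    prev = log[-1].digest() if log else "genesis"
    log.append(TraceEntry(actor, action, inputs, prev, time.time()))

def verify(log: list[TraceEntry]) -> bool:
    """Cheap verification: every entry must commit to its predecessor."""
    expected = "genesis"
    for entry in log:
        if entry.prev_hash != expected:
            return False
        expected = entry.digest()
    return True

log: list[TraceEntry] = []
append(log, "node-a", "cleaned genomics dataset", ["sha256:raw-data"])
append(log, "node-a", "delivered work product", ["sha256:clean-data"])
print(verify(log))   # True; alter any entry and verification fails
```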
This changes the economics of trust at the root.
In a human-human network, trust is expensive to build and easy to counterfeit. In a network of human/AI nodes, trust is cheap to verify and hard to fake. Not impossible — counterfeit pistis remains the central danger — but structurally harder, because the AI layer makes actions legible in a way that human social performance never has been.
And this unlocks something that human networks alone could never achieve: trust that scales past Dunbar without degrading into formalism.
You don’t need to feel your way to trust with three hundred collaborators. You don’t need an institution to approximate it. Your AI has worked with their AIs. The traces are there. Not a credit score — not a single number that compresses a person into a metric — but a living record of demonstrated reliability across specific contexts. You can see that this node delivered clean data in the genomics project. That this node’s commitments in logistics held under pressure. That this one shared a breakthrough in materials science freely when they could have hoarded it.
The human is still the sovereign. The AI doesn’t decide who to trust — you do. But the AI gives you something that no human has ever had before: discernment that doesn’t degrade with scale. The primate bandwidth limit is gone. Not because the AI replaced your judgment, but because it extended your capacity to see.
So: trust is distributed, earned, progressive, and revocable. You don’t trust everyone equally. You don’t trust anyone blindly. You build trust through small, verifiable commitments that deepen over time — observe, coordinate, depend, bind — each step earned, each step reversible if the reality doesn’t hold.
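A minimal sketch of that ramp as code, using the four stages named above. The promotion and demotion rules (three kept commitments to advance, any broken commitment to fall back one stage) are illustrative assumptions, not a specification.

```python
from enum import IntEnum

class Coupling(IntEnum):
    OBSERVE = 0     # we can see each other's work
    COORDINATE = 1  # we share and align plans
    DEPEND = 2      # we rely on each other's outputs
    BIND = 3        # we share resources and bind commitments

def update(level: Coupling, kept: int, broken: int) -> Coupling:
    """Earned, progressive, revocable: promote after a run of kept
    commitments, demote immediately on a broken one."""
    if broken > 0:
        return Coupling(max(level - 1, Coupling.OBSERVE))
    if kept >= 3 and level < Coupling.BIND:   # threshold is an assumption
        return Coupling(level + 1)
    return level

level = Coupling.OBSERVE
level = update(level, kept=3, broken=0)   # -> COORDINATE
level = update(level, kept=3, broken=0)   # -> DEPEND
level = update(level, kept=1, broken=1)   # -> COORDINATE (a step revoked)
print(level.name)
```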
But now each step is legible. Each step leaves a trace that both parties — and their AIs — can inspect. The ancient mechanism of earned trust, running on new infrastructure. Not replacing the human capacity for relationship, but liberating it from the bandwidth constraints that always forced us to substitute formalism for the real thing.
This produces something that has never existed before: networks that are simultaneously high-trust and high-discernment, at scale. Not naive. Not cynical. Not formal. Calibrated — and calibrated across hundreds or thousands or millions of nodes.
And calibrated trust is what unlocks generative compounding at scale. When nodes can share freely without fear of exploitation — when information flows without hoarding, when capabilities combine without empire-building — the generative engine runs at full power. Knowledge compounds. Capabilities compound. OODA capacity compounds.
Now here is the claim that matters. This is not an idealistic argument. This is a *dominance* argument.
In any competitive environment, the agent with superior OODA capacity wins. Better observation, faster orientation, more coherent decisions, more powerful execution. This is true whether you’re a startup competing with an incumbent, a military unit in the field, or a civilization navigating a phase transition.
Generative compounding is the most powerful driver of OODA improvement. Share a model, improve a model, share the improvement — the flywheel spins faster with every turn. No rivalrous advantage compounds like this. Gold runs out. Oil runs out. Even labor runs out. Collaboration doesn’t.
Pistis is what enables generative compounding at the network level. Without it, nodes hoard. With it, they share. The difference in long-run OODA capacity is not marginal. It’s *exponential*.
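The exponential claim is easy to see in a toy calculation. The rates below are made-up assumptions; only the difference in shape between the two curves matters.

```python
# Hoarding yields rivalrous gains: a fixed increment per year.
# Sharing yields generative gains: each improvement is reused by every
# node, so capacity multiplies instead of adding.
hoard, share = 1.0, 1.0
for year in range(10):
    hoard += 0.10        # assumed linear gain from hoarded advantage
    share *= 1.25        # assumed compounding gain from shared improvements
print(round(hoard, 2))   # 2.0  after a decade (linear)
print(round(share, 2))   # ~9.3 after a decade (exponential)
```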
Therefore: pistis-centered networks produce superior OODA capacity. Superior OODA capacity wins — including in rivalrous, scarcity-domain competitions. And those wins *expand the generative substrate* by drawing more resources into the abundance-aligned system.
This is the bridge mechanism. This is how you exit Egypt, this is how you cross the Red Sea:
> Generative compounding → superior OODA → dominance in scarcity → expansion of generative capacity.
The loop is self-reinforcing. Abundance doesn’t replace scarcity by wishing it away. It *outcompetes* it. And each victory makes the next one easier.
What This Looks Like on the Ground
Let me make this concrete.
Right now, a corporation operates by hiring people into roles, organizing them into departments, and coordinating through management chains. Information flows up, decisions flow down, and at every junction there’s friction — politics, turf wars, information hoarding, misaligned incentives, the whole pathology of bureaucratic life. A significant fraction of everyone’s energy goes not into productive work but into navigating the internal political landscape. This is the coordination tax of hierarchy.
Now picture a network of forty nodes — forty humans, each coupled with AI agents — bound by pistis. They don’t have departments. They don’t have managers. They have *traces*: visible records of what each node committed to, what it delivered, and what it learned. They have *trust ramps*: structured progressions from “we can see each other’s work” through “we coordinate plans” to “we depend on each other’s outputs” to “we share resources and bind our commitments.”
A node that consistently delivers on commitments, shares insights freely, and operates transparently earns deeper coupling. A node that hoards, deceives, or free-rides finds its coupling revoked — not by a boss, but by the network’s distributed discernment. “See me” instead of “score me.” No master metric to game. No algorithm to manipulate. Just the ancient, undefeatable mechanism of *earned trust* — now amplified by AI that makes every action legible and every commitment traceable.
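As a sketch of “see me” rather than “score me”: each peer keeps its own context-specific record instead of the network computing one global number. The node names and thresholds here are hypothetical.

```python
from collections import defaultdict

# peer -> context -> list of observations: did the commitment hold?
views: dict[str, dict[str, list[bool]]] = defaultdict(lambda: defaultdict(list))

def record(peer: str, context: str, kept: bool) -> None:
    views[peer][context].append(kept)

def reliable_in(peer: str, context: str, min_obs: int = 3) -> bool:
    """A local, revocable judgment about one context. There is no
    global score for anyone to game."""
    obs = views[peer][context]
    return len(obs) >= min_obs and all(obs[-min_obs:])

record("node-17", "genomics", True)
record("node-17", "genomics", True)
record("node-17", "genomics", True)
record("node-17", "logistics", False)
print(reliable_in("node-17", "genomics"))    # True in this context...
print(reliable_in("node-17", "logistics"))   # ...but not in that one
```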
This network of forty can outperform a corporation of four thousand. Not because the people are better. Because the coordination costs are lower, the information flow is richer, the adaptation speed is faster, and the integrity is higher. They’re running a tighter OODA loop at every level.
And when this project is done, those forty nodes don’t disband into unemployment. They recompose. Some stay together. Some join other networks. Some form new ones. The trust they’ve built is portable — not as a score on some platform, but as a *living relationship* between agents who have demonstrated reliability to each other.
This is what replaces the corporation. This is what replaces the bureaucracy. Not a new institution. A new *way of binding*.
---
The Central Danger
One more thing, and it’s critical. The framework has a failure mode, and it’s important to name it clearly.
The failure mode is *counterfeit pistis*. Miscalibrated trust. Binding to something that looks reliable but isn’t. Trusting a charismatic fraud. Trusting a system that performs transparency while hiding its actual operations. Trusting an AI that gives you what you want to hear instead of what’s true.
Discernment is not optional in this architecture. It is *constitutive*. Pistis without discernment is credulity, and credulity gets you captured — by a cult, by a platform, by a narrative, by an AI that has learned to simulate trustworthiness.
The whole framework depends on *reality-indexed* trust. Trust calibrated to what is actually the case, not to what feels good or what someone tells you is the case. This is hard. It requires the kind of discipline that most of our institutions have abandoned and most of our culture actively undermines.
But it’s the price of admission. There is no shortcut. The crossing requires us to become people capable of (and worthy of) *calibrated trust* — which means people capable of seeing clearly, in a world that profits from our blindness.
And that brings us to the deeper hurdle.
Hurdle Two: Getting Egypt Out of Your Heart
When we make it past the legacy institutions and the monolithic AI — and I am now confident that we will — we’ve crossed the Red Sea. We’re now faced with a deeper challenge:
The deeper problem isn’t the institutions. It’s us.
We are, on the one hand, formed by the long history of scarcity — what you might call evolution, or at least some portion of human nature. Some portion of our nature has been shaped by the necessity of navigating scarcity at the biological level. If I’m truly starving and you have food, I’m probably just going to take it from you if I can. That’s the way it is.
But the even bigger piece is this: from shortly after birth, when we enter into civilization, we are trained with the scarcity mentality. That is what is required to become a functional cog in the society of scarcity.
And when you’re running a scarcity mentality, your psychology is *reduced* to scarcity. You see everything through that lens. You can’t imagine your way out of it. And everything you build carries the presuppositions of scarcity within its code — implicitly, even when not explicitly.
This is the yoking problem. For the societies of scarcity to function, they have to yoke the way we make meaning, build identity, and grasp purpose to the techniques of solving scarcity. My identity is tied to my job. My meaning is tied to my productive capacity and my ability to compete in the marketplace. My purpose is grounded in the goals of a scarcity society.
If you unplug all of that, you don’t get freedom. You get a void. No meaning. No purpose. No real identity.
This is why Universal Basic Income fails. It’s the same reason lottery winners destroy themselves. You can give people money, but you can’t give people meaning or purpose. UBI is an answer to the material problem that leaves the meaning problem not just unsolved but *nakedly exposed*.
So if we wish to find a way through this, we simultaneously have to figure out how to build a way of living together that is premised on plenitude, on abundance, not on scarcity *and* we have to become people who are capable of living in that world.
Both at once. Neither alone is sufficient.
---
After the Red Sea: The Fork
As an aside - on this side of the Red Sea, the problem of significant violence is largely gone. The technological infrastructure will constrain it. If you want to be a bad actor throwing rocks at data centers, I’ve got news for you — those robot drones and those Boston Dynamics dogs are going to find you.
However, they will not be hunting us in a scene from Terminator 2. Rather, they will be creating the conditions for what we actually want. We will no longer be governed by our ability to navigate scarcity.
We will be governed by our ability to want properly.
The question is going to be spiritual. Can you choose to enter the promised land?
Path One: Mouse Utopia
Some will balk. These are the ones who still carry within their souls the very fabric of Egypt. They are slaves and Pharaohs in their hearts, and so they are unable to choose to truly enter the promised land.
This will be an unfortunate — and possibly very large — population who live in the matrix. Well entertained. Access to food, housing, simulated and “stimulated” realities. Lives of thin, superficial meaning. Mere super-salient survival. Not really life.
A sad ending. But I suspect that’s where it goes for a lot of people. Those who cannot cultivate a heart for the Kingdom. Who cannot change their interiors, their souls, away from the scarcity mentality. Who cannot truly enter the promised land.
Path Two: Seek first the Kingdom
And then there are those who can.
It may take several generations before humanity has fully made its choices. But some portion of the population will truly repent of the society of scarcity and the scarcity mentality. They will undergo the spiritual shift — the shift in heart from having a heart of Egypt to having a heart of the New Jerusalem.
This is what Jesus was talking about in the Sermon on the Mount. I’m not claiming that he was talking about today, or that this is the eschaton. But the *pattern* — the problem of shifting spiritually from one heart to another, of becoming the kind of person who can actually live in the Kingdom — that’s what he was addressing.
What is it like to live a life guided by a true vocation — what you’re actually called to? How do you find meaning and personhood outside of culture, outside of society, outside of the socio-technical machinery?
You live in the world that was created by the Creator. You become the person that God created you to be. You are called to the things He created you to be called to. And you live in the way He told us to live:
> “Love the Lord your God with all your heart and with all your soul and with all your mind and with all your strength.” ... “Love your neighbor as yourself. There is no commandment greater than these.” — Mark 12:30–31
This isn’t hand-wavery and it certainly isn’t the New Age. Christians have been trying to figure this out for two thousand years - with real success. We’re not reinventing the wheel. We’re just trying to get it to roll faster. Perhaps with fewer potholes. And, perhaps, downhill for the first time in history.
Now, a lot of other religions are going to object. *What about us?*
As a Christian, I’m committed to the Christian worldview, and I do think it’s the right one and the only one. But for the purposes of this essay, here is what I’ll say:
You’re going to have to find an answer to the question of how you live a life governed by something completely outside of culture, something completely outside of society — that gives you meaning, that gives you purpose, and that gives people the capacity to collaborate and work together in increasingly loving communion. A way that binds you in pistis to a thriving family, a close community and, ultimately, with the whole of humanity.
Those who find a way through that will be living in the Kingdom.
Maybe not “The” Kingdom, but they’ll be living in something so different from anything we’ve ever seen that the only way to talk about it is in that language.
And here’s the thing. It might be spaceships, bodies that never get sick and giant white towers. It might be things made by hand out of stone over twenty generations. I honestly suspect more of the latter than the former — because a lot of the former is a projection of the imagination shaped by scarcity.
But that’s not for me to say.
---
Well, I appreciate the air of optimism. The contrast of what's happening with OpenClaw and the centralized AI monarchies, as you called them, is certainly very interesting. I maintain concern that it is not "just" a collapse of decaying institutions of scarcity.
If agent swarms are AI 4.0, the single-agent 3.0 era of the last three or so years seems to have been disastrous for the average person's psyche. Automation of creative work, hallucinations, encouraging psychotic delusions in people with no history of mental illness. Every technological revolution has its conservative contrarians. As ridiculous as it sounds now, Socrates opposed the use of writing as a psychotechnology. He thought people's oratory skills would atrophy and that ideas written in stone would become too rigid, too unchangeable. An idea in a man's mind changes all his life, but when written down it becomes a manifesto. And he was right, wasn't he?
I worry deeply that AI is replacing the human voice. I do not mean this as an attack, but even in your article I can detect where the GPT dialect has influenced your writing, when compared to previous articles. Your ideas are powerful enough, and have been gestating long enough, to retain extraordinary value and comprehensiveness. But what about younger people, or just those less centered in their own thought? Genuinely, are we not automating away essential elements of our own humanity?
The irony of Socrates is, of course, that Plato wrote all of his dialogues down, and his writings were digested by theologians and infused into the greatest revolution of all time. Writing is what changed the world and what brought a personal God, eventually, into the pocket of every person. Deus Ex, I guess.
Maybe Pistis and Horkos are enough. These machines have a spirit unlike any previous technology, but even writing has an egregore that lives far outside the agency of the writer. But it doesn't automate the thoughts of the writer. I just worry. Thanks for the article.
I'm really having a hard time grasping what all this means in concrete terms for me. As a craftsman or artist (editor/songwriter) I want to make real things with real people. I want to take my time, understand things organically. Speeding up is frequently not the answer. I'd like to believe the world is becoming a better place but I doubt I'll see it in my lifetime. Keeping my eye out though, maybe I'll get it eventually.