The Human Alignment Problem
What AI psychosis reveals about the real task of our time
A tweet from Nat Eliason landed in my feed the other day: "The limiting factor with AI at this point is no longer intelligence, but desire."
He meant it as a practical observation from someone operating on the bleeding edge of daily AI use. But stay with it for a moment. If the bottleneck is no longer intelligence but desire — if the limiting factor on the most powerful technology ever created is the quality of what we bring to it — then we are increasingly moving out of a technical conversation and into a spiritual one.
I want to make the case that spirituality has always been, at its core, about the clarification of desire. And as the bottleneck of these emerging systems shifts to desire, our interaction with AI becomes, whether we recognize it or not, an encounter with our own depths and spiritual nature. The question AI is asking each of us, with enormous power behind it, is the same question the contemplative traditions have been asking for millennia: What do you actually want?
The Creative Word
“In the beginning was the Word, and the Word was with God, and the Word was God.”
There is something quietly extraordinary about the fact that we relate to AI through a prompt. We speak, and things come into being. The parallel to divine creative speech is striking. In the Genesis account, God speaks and reality unfolds from the word. The logos, the creative utterance, is how formless potential becomes actual. Now we sit before a blank prompt, holding something that reaches toward this power. We speak, and code assembles, images emerge, strategies materialize, worlds take shape.
The question is what quality of desire lives behind the word.
God’s creative speech, if we take the tradition seriously, emerges from perfect alignment between will and love. It is desire that is identical with goodness. Our prompts emerge from whatever state our motivational system happens to be in. We have, increasingly, the power of gods. We do not yet have the wisdom or the love.
Four Relationships with the Machine
I see roughly four ways people relate to AI right now.
The first is trivial use. People ask ChatGPT to help write an email because they know they need to respond to one. They search for something because someone told them to look it up. These are pre-given desires, tasks that present themselves from the outside, requiring no contact with what you really want. This is probably the most common relationship by numbers, and it’s partly why so many people have no idea how powerful AI has become. They’ve never prompted it with a real desire, so they’ve never seen what it can do.
The second is accelerated extraction. Open X right now and you'll see it: self-generating AI labs shipping hundreds of apps no human has meaningfully touched. People running parallel agent sessions at 3am, building tools they'll never use, declaring we're six months from AGI. The frantic energy feels less like genuine excitement and more like a nervous system in overdrive. Production has become fully decoupled from relationship — from any felt contact with who it's for or why it matters. These people know exactly what they want in the instrumental sense: more money, faster growth, greater efficiency. They use AI to run the existing game harder and faster. Their desires are clear but shallow. They've collapsed the gap between wanting and getting without ever deeply questioning whether what they're wanting is trustworthy. This is the more dangerous case by far.
The third is something I don't yet have a perfect name for, but the shape is distinct. These are people who have done sustained interior work — years of contemplative practice, deep therapy, relational training — and have developed some degree of trustworthy contact with what they actually want. When they encounter AI, what gets amplified is qualitatively different: not more accumulation at higher speed, but wisdom, beauty, and care distributed more widely. The Christians have a frame for this: it is the aspiration to become Christ-like in our use of power. To recognize that it profoundly matters how we wield what we've been given, and with humility to do everything we can to align our capacity with love. The technology becomes genuinely generative because what's being fed into it has already passed through the fire of self-examination. This is a small and early space. I see it beginning in friends like Tasshin Fogleman and Jonny Miller, and it is, I believe, the most important emerging relationship with this technology.
The fourth is intentional refusal. Some of the people I most respect have looked at AI and said: no. Not this. Not now. My mentor Zak Stein has articulated with great precision the developmental risks of these technologies, particularly for children and for the capacity of human communities to think together. Paul Kingsnorth, in his extraordinary Against the Machine, has made the deeper case that the entire trajectory of technological civilization is a spiritual catastrophe — that what is needed is not better use of the machine but cultures of refusal, rooted in what he calls people, place, prayer, and the past. I do not share their conclusions. But I find in them a beauty and a soul power that most AI enthusiasts utterly lack. The refusers see something that those of us racing to integrate would do well not to dismiss: that some forms of engagement change you in ways you cannot perceive from inside the engagement. That sometimes the most sacred act is to simply decline.
Two Gaps
There are two gaps that matter here.
The first is between our desire and our capacity to actualize it. Throughout most of human history, this gap was enormous. A medieval peasant who craved wealth had almost no means to pursue it. We have more. And with AI, this gap is collapsing faster than ever. The distance between “I want this” and “I have this” shrinks with every advance in the technology’s capability.
The second gap is between what we think we want and what we actually want. My teacher Rob Burbea would often ask a question that seemed almost too simple: Why do you want what you want? Sit with it, and it becomes a trapdoor. You want wealth. Why? Because you want security. Why? Because you want to feel safe. Why? Because somewhere very deep, you want to rest in something you can trust completely. You want to be held by something that won’t let you fall.
Every desire, followed to its root, reaches toward this. The mystics called it God. The Sufis called it the Beloved. What we actually want, beneath every substitute and compensation, is intimacy with reality itself. We want to touch what is real and know that it is good.
But most of us don’t know this. We mistake the surface desire for the real one. We try to satisfy through endless rearrangement of circumstances what can only be satisfied through direct contact with what is. The Buddha called this samsara, and his famous image — that you could give a person two mountains of gold and they wouldn’t be satisfied — is a precise diagnosis of what happens when the motivational system is confused about its own nature.
The first gap is collapsing rapidly. The second gap is not. The asymmetry between these two movements is where the danger lives. Exponential capacity to get what you want, applied to confused desire, on a finite planet — this is a recipe for self-destruction.
Samsara on Fast-Forward
So does AI simply accelerate us toward catastrophe, or does it contain some seed of its own correction? I think both, and understanding how requires understanding the mechanics of confused desire.
When the first gap was wide, you could maintain the fantasy that getting what you want would satisfy you, because you never quite got there. The distance itself preserved the illusion. You could spend a lifetime pursuing wealth or status and blame the lingering emptiness on not having arrived yet. But as AI collapses that distance, the samsaric cycle accelerates. You get what you asked for, faster. And you discover faster that it doesn’t touch the ache.
Armin Ronacher, a well-known open source developer, recently published a piece called “Agent Psychosis” that describes exactly this phenomenon in the AI coding community:
“I did not sleep. I spent two months excessively prompting the thing and wasting tokens. I ended up building and building and creating a ton of tools I did not end up using much. ‘You can just do things’ was what was on my mind all the time but it took quite a bit longer to realize that just because you can, you might not want to.”
Notice the structure. The gap between desire and actualization collapsed completely. He could build anything. And instead of satisfaction, what emerged was a manic loop — building things he didn’t use, feeling amazing while doing it, unable to stop. He calls it addiction. He describes people at 3am running parallel agent sessions, convinced they’ve never been more productive, and what he sees, looking in from outside, is “someone who might need to step away from the machine for a bit.”
He describes entire communities caught in what he calls “slop loop cults” — groups reinforcing each other’s manic output without critical reflection. “As an external observer,” he writes, “the whole project looks like an insane psychosis or a complete mad art project.”
The word Ronacher reaches for is psychosis. The word the Buddhist tradition offers is samsara. They’re pointing at the same phenomenon from different altitudes: the cycling of unclarified desire through objects that cannot satisfy it, now running at machine speed. And now we have a machine that can produce mountains of gold at the speed of speech.
And yet his article is itself evidence that something corrective is happening. The nausea is setting in. He’s asking “Am I going insane?” — which is already a form of waking up. The loop spun fast enough that the pattern became undeniable.
This is what I mean by a forcing function. AI puts samsara on fast-forward. When the cycle spins fast enough, it becomes harder to maintain the illusion that one more iteration will finally satisfy. The blank prompt functions as a kind of koan: the technology asks you, with extraordinary power behind it, What do you want? And for many people, the discovery that their answer leads to manic emptiness becomes revelatory.
Whether this is enough to compel genuine transformation or simply more sophisticated denial remains an open question. But the opportunity is real. The nausea is information. The psychosis is a symptom of the right diagnosis arriving at speed.
Desire and Goodness Are Not Two
After everything I’ve just described, the reasonable response might be: what can we possibly do? Things genuinely seem out of control. And I want to be honest — I’m no Luddite; in fact, I’m an early adopter. I have an AI agent. I talk to it daily. I feel the pull away from the ground, away from reality, the flickers of the same vertigo Ronacher describes. But I also feel something else: an intensification of vocation, a thinning of the gap between my unique gifts and what I can actually bring into the world. Both are happening at once, and it’s my training and my commitments that keep me tethered to what matters.
As I look at the exponentially unfolding insanity, I find I'm not despairing. The reason is that I know something about desire.
The problem was never desire. The problem was always confused desire — desire that has lost contact with its own depth. This is what Ronacher is discovering, what the 3am builders are living out in real time: not the consequences of wanting too much, but of never having followed the wanting far enough. The solution is not to kill desire but to clarify it. To follow it all the way down. To discover what we helplessly, unavoidably, inescapably want when all the compensations are stripped away.
What we discover, when we do this work honestly, is something that gives me immense solace: desire and goodness are not two.
This is not a philosophical claim. It’s a discovery that becomes available through the direct investigation of your own wanting. When desire is clarified — when it is untangled from fear and compensation and the desperate grasping of insecurity — what remains is a wanting that is naturally aligned with truth, beauty, and goodness. Not because you’ve trained yourself to want the right things, but because the deepest structure of desire itself is a movement toward what is real and what is good.
What we most deeply want is to live in the love of goodness, truth, and beauty for its own sake. What we most deeply want is to abide in the intrinsic value of life and to move in alignment with its intelligence. This intelligence, the intelligence of life itself, is not neutral. It is a movement of care, of creativity, of love. And our desire, at its root, is part of this movement.
This is what the contemplative traditions have been pointing toward for millennia. Let your will be God’s will. Let your will be the will of life. Not submission to an external authority, but the discovery that your deepest wanting and life’s deepest movement were never separate.
The Work That Was Always Most Important
This is the work that was always most important. Long before AI, long before exponential technology, the clarification of desire was the central task of human maturation. Every genuine spiritual path, beneath its cultural packaging and institutional distortions, has been an attempt to help human beings close the second gap: the distance between what we think we want and what we actually want.
What AI changes is not the task but the urgency. The closing of the first gap is only safe to the degree that the second gap is also closing. And so the invitation is to do this interior work now, with whatever seriousness and devotion we can muster. Not because we should or because it’s virtuous, but because we actually want to. Because beneath all our confused motivation and compensatory patterns, there is a longing that is already perfectly aligned with what the world needs from us. The reclamation of this longing is not just the solution to the AI alignment problem; it is the solution to the human alignment problem. It always has been.
If you've ever done this investigation honestly and followed a desire past its surface object, past the fear underneath it, past the compensation underneath that…you know the strange thing that happens at the bottom. The wanting doesn't disappear. It clarifies. It gets simpler and more enormous at the same time. What's left isn't an idea about goodness. It's something more like a gravitational pull — a wanting so basic it doesn't feel like yours anymore. You want things to flourish. You want to be in contact with what's real. You want to offer something that matters. Not because you decided to want these things, but because when everything else was stripped away, this is what was there.
Beneath the confusion, beneath the compensation, beneath everything we think we want, we helplessly love what is good, true, and beautiful. We can’t help it. It’s what we are.
The task is to stop being confused about this. As soon as possible. And with great tenderness toward the pace at which confusion actually unwinds.
Philosophical foundations: This piece draws upon several wisdom traditions explored in my Lineages of Inspiration article, which outlines the key influences shaping my understanding of human transformation.
Work with me: I offer one-on-one guidance helping people develop secure attachment with reality through deep unfoldment work. If this resonates, explore working together.