9 Comments
FacetsOfTheDiamond's avatar

Daniel,

Your phrase “technologies of return” resonated deeply.

We have been working with small groups exploring a form of relational intelligence that arises between individuals who meet beyond identity — leaving roles, concepts, and performance at the door before entering the relational space.

Participants listen with a focused but relaxed attention before responding: a kind of leaning-in curiosity about what is about to emerge.

In those moments the space can feel imaginal — a threshold where meaning appears before intelligence cuts and describes. The understanding that arises does not belong to any one individual; it carries the quality of a shared deepening. It feels less like people exchanging ideas and more like the underlying wholeness of reality briefly finding expression through language that connects rather than separates.

Perhaps the imaginal layer is the place where reality first whispers its patterns before the cuts are made.

Daniel Thorson's avatar

This is beautiful to hear. Yes, that's my sense: that the imaginal is actually something we listen to and learn to receive. It wants to ground our minds, our orienting, and our cutting of reality.

There is a Buddhist teacher, Rob Burbea, who towards the end of his life taught the Soul-Making Dharma (inspired by James Hillman's Archetypal Psychology), which gets into this. Essentially, images arise out of the wholeness of presence. If we relate to them with wisdom, we can use the imaginal to make the appropriate cuts.

I actually think that this is how good religious infrastructure gets created and discovered.

FacetsOfTheDiamond's avatar

Thank you — that resonates. I appreciate the reference to Rob Burbea as well; I’ll check him out.

Alan Zulch's avatar

Daniel,

Thank you for another brilliant essay. This and its companion piece are jewels.

I appreciated your glasses example. I could almost feel the strands of dependent origination manifesting, and it made me curious how you reconcile intelligence and knowing.

The glasses example evokes in me familiar feelings of the seductive promises of technology. I’m definitely a technophile, but I’ve also long been leery of its capacities to separate even as it provides seemingly miraculous ways to connect.

AI, whether it is delivered via browser or smart glasses, mediates our experience of reality, and I can’t help but think that the attention we pay to it - while profoundly enabling and awesome - nevertheless precludes us from the knowing that can only arise from Self. Is it as binary as it seems to be?

Just as there is a distinct phenomenological threshold between the personal (ego) and impersonal (Self), it seems we’re either in reality or we’re dreaming, so to speak.

AI is a mirror of our collective intelligence, our collective consciousness, but as you say, it doesn’t bind us to reality.

And only when we’re bound to reality - in presence - do we have access to the wisdom that is gifted through feeling into the impersonal dynamic knowing of our collective Self. This is our uniquely human “alternative way of knowing”.

But this knowing is not drawing on our collective knowledge available to AI. It’s drawing on, one might say, the quantum field of reality.

In that way, AI masquerades as Self, and many or most people do not discern the difference, thus the risk.

How do we stay bound to reality when we’re so profoundly addicted to removing ourselves from it?

Staying truly bound to reality seems to ultimately require “dying before we die,” because it means completely entrusting our beingness to the wet, messy, corporeal, non-quantifiable, intangible “feminine” (Eros) of the Now, and having the discernment to pick up the sharp sword of “masculine” (Logos) intelligence only when it’s in wise service, to cut only when necessary.

Daniel Thorson's avatar

Thanks Alan.

I'm not sure I know what you mean by the word knowing... could you clarify what you mean by that word?

Alan Zulch's avatar

Thank you for replying Daniel. I’m afraid I inadvertently hit the post button as I was still writing, so I quickly returned in edit mode to complete my thought. In the meantime, I think you may have responded to my incomplete post. Does what I’ve written make more sense? I’m happy to clarify if not.

Daniel Thorson's avatar

Yes, I see it now. Thank you, Alan.

I largely agree with what I understand you to be saying. I do think we need to "die before we die" to be trustworthy in general, and specifically to be trustworthy stewards of power; and AI is enormously powerful.

I think there is a way in which what you're saying feels a little dualistic, but yes, I broadly agree with what you're saying. It is an enormous challenge and task to cultivate the discernment to use logos in service of life, and it sounds like you are rightly sensing how profound that challenge is.

Alan Zulch's avatar

I think you’re sensing correctly that I’m struggling with being too dualistic. My “still small voice” is whispering that as much as I try to work these things out rationally - to cut! - they’re all part of a seamless whole in a deeper sense, and thus ultimately require integration.

I’m new to your writing and deeply inspired by it. Many thanks for all you do.

Ben Zhou's avatar

The three layers in the glasses story are the strongest thing here. She’s right. You don’t feel the same risk. And then — the move that matters — you sit with the possibility that not feeling the risk is itself the risk.

I’ve been in a version of this. Working with AI late at night, building an argument across multiple sessions, and hitting a moment where everything clicked — I felt unmistakably correct. Not probably right. Correct. That feeling was so clean, so total, that it scared me. So I tried to cross-validate. Ran the same argument through different models, different framings, different adversarial prompts. Everything confirmed it. I felt even more certain. And then I realized what I’d built: an information laundry. Each model washing the same assumptions through a slightly different machine and handing them back to me looking clean.

I still don’t know if the original insight was right or not. What I know is that the feeling of certainty was the most dangerous moment — not because it was wrong, but because it sealed the exits. No amount of cross-validation from inside the system reopened them.

What reopened them was someone who wasn’t inside the system with me.

Your fiancée saw something you couldn’t see from where you were sitting. That might be the whole point.