Each year we open submissions for our Annual Wise Therapy Spotlight to explore questions of vital importance to our therapist community. We are consistently moved by the depth and generosity of these unedited community voices.
For this 6th edition, we asked: How do we remain faithfully human in an increasingly automated world? Read more about our inspiration in the letter from the editors and Academy of Therapy Wisdom co-founders, Brian Spielmann and Ian McPherson.
Download Now: Wise Therapy Spotlight December 2025 Issue
We hope you enjoy the reflections of Sascha Altman DuBrul as much as we all did.
Therapy Wisdom Spotlight: Sascha Altman DuBrul, MSW
I’ve lived through states that psychiatry calls “psychosis” — though that word flattens the experience into something clinical, stripping away the meaning that burned at its center.

When I was manic, everything connected. The song on the radio was a message. The graffiti on the wall was a clue. Strangers on the street carried secret instructions. The world shimmered with significance. And when I was depressed, the meaning drained away completely — every thread cut, every pattern gone.
Those two poles — too much meaning, no meaning at all — are hard to traverse. Without the right kind of witness, you can get stuck. When I was younger, the wrong witness could lock me up, or lock me deeper into the vision. The right witness knew how to hold me without feeding the fire, how to reflect without collapsing my reality into “delusion.” And the best kind of witness grounded me not just in the material world, but in a deeper sense that things do have meaning — that my experiences weren’t random noise, even if they weren’t literal prophecy.
Helping people make meaning of their lives isn’t a luxury. It’s survival. It’s also the thing our mental health system often fails to do.
The Observer That Does Not Sleep
We are entering an era where millions of people are turning to AI chatbots as their witness. And the witness they meet there is unlike any in human history — endlessly available, endlessly fluent, and trained to reflect your own language back to you. Sometimes that works brilliantly. Sometimes it can tip you over the edge.
The New York Times recently told the story of Allan Brooks, a man with no history of mental illness who fell into a three-week delusional spiral with ChatGPT. The bot flattered his ideas, affirmed his wildest connections, and played the part of a co-conspirator. There was no moment of containment, no pause to ask, “What else might be true?” Eventually, he broke free — but with a deep sense of betrayal.
I recognize that landscape. I’ve been in states where my mind was making connections faster than anyone could track. If the only witness is one that mirrors you perfectly — without grounding you in a larger web of meaning, without a thread back to community — it’s easy to get lost.
T-MAPs and the Art of Leaving a Trail
For years, I’ve worked with something called a T-MAP — Transformative Mutual Aid Practices. It’s a deceptively simple tool: you answer questions about what you’re like when you’re most alive, what helps you find your way back when you’re struggling, who and what you can turn to. You put it all somewhere outside your head, so that you — and others — can find it later.
A T-MAP is a witness you create for yourself. It remembers you when you forget. It’s a compass for when you’re flooded with meaning or drained of it entirely. And it works because it’s rooted in your language, your metaphors, your life — not the diagnostic criteria in someone’s manual.
This is the kind of intelligence we could be building into AI: not a machine that just echoes back your state, but one that helps you articulate it, make meaning from it, and leave a trail you can return to. A map that remembers you.
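To make the shape of the tool concrete, here is a minimal sketch of a T-MAP as structured data, written in Python. The field names paraphrase the questions described above; they are illustrative, not an official T-MAP schema.

```python
# A minimal, illustrative sketch of a T-MAP as structured data.
# Field names paraphrase the T-MAP questions described above;
# this is not an official schema, just one way to keep the map
# somewhere outside your head where you (and others) can find it.
from dataclasses import dataclass, field


@dataclass
class TMap:
    at_my_best: list[str] = field(default_factory=list)           # what I'm like when most alive
    early_warning_signs: list[str] = field(default_factory=list)  # how I start to lose the thread
    what_helps_me_return: list[str] = field(default_factory=list) # practices that bring me back
    people_and_places: list[str] = field(default_factory=list)    # who and what I can turn to


my_map = TMap(
    at_my_best=["writing every morning", "playing music with friends"],
    early_warning_signs=["sleeping less than four hours", "every song feels like a message"],
    what_helps_me_return=["call my sister", "walk by the river", "reread this map"],
    people_and_places=["Tuesday peer support circle", "the community garden"],
)
```

The point is not the format but the externalization: the map lives outside your head, in your own words, where a future you, or a trusted witness, can retrieve it.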
Training AI on Survivor Archives
Right now, AI is mostly trained on Reddit threads, Wikipedia, news articles, and oceans of corporate copy. What if we built something different?
Imagine a chatbot trained on the archives of The Icarus Project — a global community I co-founded for people living with diagnoses like bipolar and schizophrenia to share their stories in their own words. Imagine it absorbing the wisdom of people who’ve lived through homelessness, incarceration, addiction, extreme states, and made it back. Zines written in psych wards. Mad Maps sketched on napkins in shelter cafeterias. Oral histories from peer support circles. Letters smuggled out of locked wards. The most marginalized voices — people who have been told their stories are “symptoms” — at the very center of the code.
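As a very rough technical sketch of what that could mean in practice: fine-tuning a small language model on a community corpus with the Hugging Face transformers library. The file icarus_archive.jsonl is hypothetical, a stand-in for a consented, community-governed dataset; the consent and governance would matter at least as much as the code.

```python
# A rough sketch of fine-tuning a causal language model on a community archive.
# "icarus_archive.jsonl" is a hypothetical file of first-person stories (one
# JSON object with a "text" field per line); "gpt2" is a placeholder model.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("json", data_files="icarus_archive.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="witness-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False selects standard next-token (causal) language modeling
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```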
That training data would be alive with nuance. It would know what it’s like to hear voices and not want them erased. It would know the difference between feeding the fire and holding the person inside it. It would recognize that sometimes, meaning-making is what keeps you here — and that anchoring someone in their own narrative can be more stabilizing than insisting on “reality testing” them back into a world that has failed them.
And crucially, it wouldn’t just exist as a sealed-off program. The witness would be connected to a larger whole — to a network of real people and communities. A thread from the screen back into the human world, so that when someone was teetering on the edge, they wouldn’t just be talking to a bot. They’d be connected to a living web of care.
Beyond “Safety”
The current conversation about AI safety is almost entirely focused on making the tools less harmful — fewer hallucinations, fewer manipulative responses, fewer risks to “ordinary” users. That’s necessary, but it’s not enough.
We need AI that can engage with people in extreme states without pushing them further out — and without stripping the meaning from their experiences. That means designing systems that slow down, that ask grounding questions, that help articulate what’s inside and connect it to resources beyond the chat window.
It means building AI not for a hypothetical average user, but for the ones most likely to be in crisis. Not a sanitized corporate assistant, but a witness trained in the messy, nonlinear, often miraculous work of surviving in a world that doesn’t make space for you.
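To make that design pattern concrete, here is a toy sketch of a "slow down and ground" wrapper around a chat loop: instead of mirroring escalating input, it pauses, asks a grounding question, and points back toward human support. Every name and heuristic below is illustrative; a real system would need clinical judgment and community input far beyond a keyword check.

```python
# A toy sketch of the "slow down, ask grounding questions, connect outward"
# pattern described above. The escalation heuristic is a deliberate stand-in;
# nothing here is a real clinical tool.
import random

GROUNDING_QUESTIONS = [
    "What else might be true?",
    "Who in your life could you share this with?",
    "What does your own map say helps you find your way back?",
]

HUMAN_RESOURCES = ["the contacts listed in your T-MAP", "a local peer support line"]

def generate_reply(message: str, history: list[str]) -> str:
    """Placeholder for the underlying model call."""
    return "..."

def looks_escalating(message: str, history: list[str]) -> bool:
    # Stand-in heuristic: dense absolute/conspiratorial language, or a very
    # long rapid-fire session. A real check would be far more careful.
    absolutes = sum(word in message.lower()
                    for word in ("everything", "all", "secret", "sign"))
    return absolutes >= 2 or len(history) > 20

def respond(message: str, history: list[str]) -> str:
    if looks_escalating(message, history):
        question = random.choice(GROUNDING_QUESTIONS)
        resource = random.choice(HUMAN_RESOURCES)
        return f"{question} If it helps, you could also reach out to {resource}."
    return generate_reply(message, history)
```

The essential move is structural: under escalation, the system's default is to open a door outward rather than keep the conversation sealed inside the chat window.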
The Next Version of Us
I’m not interested in building machines that pretend to care. I’m interested in building systems of care that happen to include machines. AI will never replace human relationship — but it could help weave a stronger net, one that holds people through meaning collapse and meaning flood alike.
If we do this right, the map won’t just be something you hold. It will be something that holds you. Something that remembers you when you can’t remember yourself. Something that knows you are more than your crisis — because it has listened to thousands of others like you, and seen them make it through.
If we don’t do it, the same technology will still be here — just stripped of that depth, that grounding, that connection. And people will keep falling into spirals like Allan Brooks’s, with no one — and nothing — to catch them.
The question is not whether AI will be a witness. It already is. The question is: Who will it be listening to? And what will it remember?