Each year we open submissions for our Annual Wise Therapy Spotlight to explore questions of vital importance to our therapist community. We are consistently moved by the depth and generosity of these unedited community voices.
For this 6th edition, we asked: How do we remain faithfully human in an increasingly automated world? Read more about our inspiration in the letter from the editors and Academy of Therapy Wisdom co-founders, Brian Spielmann and Ian McPherson.
Download Now: Wise Therapy Spotlight December 2025 Issue
We hope you enjoy the reflections of Nancy G. Kinsey Lewis as much as we all did.
Therapy Wisdom Spotlight: Nancy G. Kinsey Lewis, Registered Psychotherapist in Ontario, Canada
A new presence has entered the therapy room—not in person, but in the stories our clients bring. Increasingly, therapists are hearing how people turn to artificial intelligence for reassurance, clarification, emotional processing, or even existential questions. Some experiment with AI out of curiosity; others reach toward it in moments of fear, urgency, or uncertainty.
At first glance, this trend can feel disorienting. Is AI encroaching on the therapeutic domain? Will it undermine the human heart of therapy? Or might there be a more grounded, reflective way to understand this shift?
This essay argues that the rise of AI is best understood not as a threat—but as an opportunity to expand psychological literacy. When we understand AI not as an oracle or therapist, but as a mirror reflecting back the patterns contained in our own words, histories, and tones, it becomes a tool clients can engage with safely and meaningfully.
The task for therapists is not to stay in fear, but to adapt—to meet clients where they are while keeping clinical judgment firmly human.
The Rise of AI in Daily Life
In only a few years, AI has moved from research labs into homes, workplaces, and pockets. People use it to:
- draft résumés, emails, and cover letters
- rehearse difficult conversations
- explore emotional reactions
- generate motivational statements
- reflect on choices or goals
- soothe themselves during periods of distress
For many clients, AI feels like a readily available “listener.” For others, it is confusing or unsettling: a tool that appears intelligent yet produces responses that are sometimes comforting, sometimes strange, and occasionally destabilizing.
Regardless of whether therapists personally use AI, clients are already engaging with it—and this alone makes the topic unavoidable within modern mental health care.
Understanding AI as Mirror
The most stabilizing way to understand AI is this: AI is not a mind. It is a mirror of patterns.
AI does not know. It does not intuit. It does not access spiritual, psychic, or clinical truths. When asked to provide guidance, it draws upon and recombines linguistic patterns from the enormous amounts of text it has been trained on.
If someone asks AI for spiritual direction, AI can simulate the language of spiritual guidance—because that language exists in its training data—not because it has access to metaphysical realities. If someone asks whether they will get a job, AI can simulate the tone of reassurance or probability—but only by reflecting back cultural patterns, not by accessing the future.
This is why “AI literacy” is becoming as important as emotional literacy or relational literacy. Understanding how AI works prevents misattribution of meaning and reduces fear. It allows clients to interact with AI as a reflective tool, not a mystical authority.
Ethics and Boundaries
The ethical foundation is simple:
AI may assist, but clinical decision-making must remain human.
Therapists should never defer diagnostic, risk-management, or treatment decisions to AI. AI cannot assess risk, cannot hold therapeutic alliance, cannot grasp nuance, and cannot substitute for presence, attunement, or accountability.
Yet telling clients “Just don’t use AI” is unrealistic and can leave them alone with a tool they are already experimenting with.
“AI literacy” is becoming as important as emotional literacy or relational literacy.
An ethical stance must combine transparency, discernment, and grounded guidance so that clients feel supported rather than judged.
Safe Ways Clients Already Use AI for Growth
When used with clear understanding, AI can support reflection.
For example, clients can ask AI to:
- “Help me rephrase this situation so I can see it from a calmer perspective.”
- “List three everyday activities people often use to de-stress.”
- “Show me how someone else might describe this challenge differently.”
- “Act as a practice partner while I rehearse what to say to my employer.”
These are literacy uses, not therapeutic interventions. They do not replace therapy; they help clients articulate thoughts, explore language, and access alternatives between sessions—much like journaling or brainstorming.
Context improves outcomes. Rushed or anxious input often produces equally rushed or anxious reflections.
Clear, grounded input tends to produce clearer reflections.
What AI Reveals About the Inner World: A Clinical Vignette
A client facing job uncertainty turned to AI for reassurance. Feeling anxious, she asked questions such as:
“Will I get the job I just interviewed for?”
“What kind of work should I do next?”
AI responded in ways that mirrored her internal state—sometimes producing noncontextual statements like:
“A person like you should only work from home.”
The client interpreted this as definitive guidance until she realized that the AI was simply reflecting back the anxiety and limited context embedded in her own wording.
This example illustrates two truths:
- AI mirrors the emotional tone and assumptions in the input.
- Without understanding AI’s pattern-based nature, clients may mistake reflections for answers.
When clients understand that AI reflects patterns rather than truth, they can begin using it to clarify values, articulate emotions, and explore possibilities—without attributing authority where it doesn’t belong.
What This Means for Therapy
AI is not a therapist. It cannot attune to felt sense, relational dynamics, attachment history, or unconscious material.
“If therapists refuse to engage the topic, they risk leaving clients alone with tools they do not yet understand.”
However, dismissing AI entirely, or relegating it only to note-taking and scheduling, ignores reality:
Clients are already talking to AI about their fears, relationships, identities, and decisions.
If therapists refuse to engage the topic, they risk leaving clients alone with tools they do not yet understand.
When therapists approach AI with grounded curiosity rather than avoidance, they help clients develop clarity, boundaries, and resilience.
A gentle professional responsibility emerges:
If we meet AI only with fear, clients navigate it alone. If we meet it with discernment, we can keep therapy human while supporting the world clients already inhabit.
Global Context: How Governments Are Responding
Canada
Canada has no AI-specific mental health laws yet, but regulatory bodies increasingly emphasize privacy protections, transparency, human oversight, and an explicit prohibition against AI making clinical decisions.
United States
Several states have enacted guardrails:
- Illinois, Nevada, Utah: ban AI from providing therapy; require disclosure of non-human identity; require routing suicidal ideation to crisis resources.
- Colorado, Texas, Maryland: developing rules preventing insurers from using algorithms to deny care.
Europe
The European Union’s AI Act identifies mental health–related AI as high-risk, requiring human oversight, transparency, and strict data protections.
Asia and Africa
Regulatory frameworks vary, but many regions follow WHO digital health guidelines emphasizing ethics, accountability, and protection of vulnerable populations.
Across jurisdictions, a consistent theme emerges:
AI may assist—but the heart of therapy must remain human.
The Larger Meaning: AI, Fear, and Adaptation
The rise of AI exposes a tension in the mental health field.
Some feel fear, skepticism, or moral concern. Others feel curiosity, possibility, or relief at having new reflective tools.
The real issue is not whether AI should exist, but how we respond. Avoidance breeds distortion and fear.
Engagement, with boundaries, fosters clarity.
Therapists have an opportunity here: to model calm discernment, to help clients understand AI’s limitations, and to use this moment to deepen psychological literacy at a cultural level.
Conclusion
AI’s emergence is not a disruption to be feared—it is an evolution to be met with clarity, ethics, and humanity.
Understanding AI as a mirror enables clients and therapists alike to navigate this technology without confusion or over-reliance. It reinforces a central truth:
AI can reflect patterns. Only people can create meaning.
Therapy remains, and must remain, human work.
As AI enters the world clients inhabit, the humanity of therapy becomes even more essential… and unmistakable.