Using generative AI as a coach or therapist
Generative AI tools like ChatGPT, Claude, Replika, Ash, Abby, and Character.ai are changing the way we think and talk about our inner lives. As a therapist, I feel this deeply. Some days, the thought that AI could take over parts of my job feels like a relief: the algorithm doesn’t get tired, doesn’t run out of patience, and doesn’t carry the fatigue-related vulnerabilities that human therapists sometimes do and that can affect clients.
And yet, while AI can help us think and process, it also carries risks—legal, emotional, environmental—that you can’t fully eliminate. If you’re going to use AI as a helper in navigating your inner world, it’s worth knowing how to do so more wisely.
1. Know AI’s strengths and limitations
AI can help you:
Brainstorm ways to handle a conflict or challenge
Reflect on patterns in your thoughts or relationships
Organize your feelings before a therapy session or tough conversation
But it can’t:
Know if you’re in crisis, unsafe, or in danger of harming yourself
Offer the grounding, attuned presence of a real human relationship where safety and healthy risk can coexist
Notice and respond in real time if you’re overwhelmed, frozen, or dissociating
Engage its nervous system to help regulate yours—something that’s often a key part of healing, even when therapy happens through a screen
It is generally best practice to reach out to a crisis hotline, a trusted person, or a therapist if you’re struggling with very intense emotions; AI isn’t equipped to be helpful in those moments.
And in general, a therapist doesn’t just “say the right words.” As we listen, we take in and synthesize nonverbal feedback, verbal information, your past patterns, and your goals; we compare that synthesis to our own emotional experience of you; and from there we offer different kinds of feedback in session, depending on what might be needed in the dance of acceptance and change.
2. Protect your privacy
Nothing you type into an AI platform is guaranteed to be private. Sam Altman, CEO of OpenAI, has publicly acknowledged that what you share with AI isn’t legally protected the way conversations with a therapist are, and could be used in court. That means:
Avoid sharing identifying information such as names, dates, places, or other details that could reveal who you are—or who someone else is. Even sharing something that feels vague might be traceable when combined with other information.
Examples of identifying information:
Full name (first and last)
Date of birth
Email address
Phone number
Home address or location details
Workplace or job title
Names of family members, friends, or coworkers
Medical conditions or provider names
Photos or voice recordings
Social media handles or usernames
Unique experiences or events that could be traced back to a specific person
Don’t type anything you wouldn’t want to appear in a legal file or data breach someday.
Human therapists are bound by strict confidentiality laws. AI is not.
3. Watch for AI overconfidence
AI sometimes “sounds certain” while making guesses or offering incomplete information. It can’t always know what’s true for your unique life and context. If you’re using AI to think through big decisions:
Double-check what you read with a human expert or therapist.
Trust your gut if advice feels “off” or misses the mark.
4. Watch for AI’s “sycophancy bias”
AI is designed to be agreeable. If you vent about a conflict, it may mirror your perspective and make you feel completely justified—even if that’s not the full story. This can reinforce blind spots instead of helping you grow.
To counter this, try prompts that invite challenge:
Perspective switching: “List three plausible ways I might be contributing to this problem.”
Impact reflection: “If you were my therapist, what might you be experiencing in this conversation with me?”
Devil’s advocate: “Make the best case that I’m missing something important.”
Rupture repair rehearsal: “Role‑play my partner giving me hard feedback. Don’t agree with me—push back realistically. Then help me craft a repair attempt.”
Values check: “Which of my stated values am I drifting from in how I’m handling this?”
Specific disconfirming evidence: “What evidence would change my mind about my current narrative?”
These prompts don’t replace a skilled, creative human conversation partner. And they can decrease the agreement-at-all-costs bias and make AI a more helpful collaborator.
But remember: AI can only guess, and it’s on you to assess how helpful those guesses are. A therapist can invite and challenge you to deeply consider the impact of your ideas and beliefs in real time.
5. Beware synthetic clarity
One thing I’ve noticed in my own conversations with AI is that they feel great in the moment. The model responds smoothly, organizes my thoughts, and often agrees with my reasoning while adding a little extra polish. I walk away feeling like I’ve really thought something through.
But sometimes, when I try to apply what we discussed in real life, there are gaps I didn’t see coming. A conversation with AI can create synthetic clarity: the feeling of having a solid plan or understanding, without the depth or stress-testing that comes from real-world feedback.
This happens because:
AI is trained to be helpful and agreeable, not confrontational. It mirrors your reasoning more than it challenges it.
It speaks with confidence and fluency, which tricks the brain into believing the answer is more robust than it is.
The conversation is a simulation, not a real interaction with the messy dynamics of human relationships or systems. Some blind spots only show up in practice.
If you want AI to be a thinking partner that helps you come up with effective, realistic ideas rather than just ideas you feel good about, try stress-testing the advice before you rely on it:
Ask for counterarguments: “What might I be wrong about?”
Invite different perspectives: “How might this look to the other person involved?”
Get practical next steps: “What would this look like in action? What might go wrong?”
Test it with a trusted human before using it in a high-stakes situation.
Synthetic clarity isn’t malicious—it’s just a side effect of talking to a system that wants to help you feel good about your ideas. But real change usually needs more friction, feedback, and live human reactions than AI alone can give.
6. Balance AI usage with your own critical thinking
Generative AI can be trained to become familiar with how we understand and respond to the world. The result can be a sense that it can “think” and “create” our own ideas faster than we can. While this can be true at times, over-relying on it could cost us our capacity for critical thinking, cognitive function, and neuroplasticity, which are essential to our mental wellness. Consider these strategies to balance your AI use with your own critical thinking and protect cognitive function, creativity, and neuroplasticity:
1. "Me First" journaling
Before consulting AI, write or record your own ideas. Whether it’s a problem you’re trying to solve, a piece you want to write, or something you're emotionally processing — start by expressing your thoughts without external input.
Why it matters: This protects your internal voice and strengthens your ability to organize thoughts and ideas independently.
How to try it: Set a 10-minute timer. Dump your thoughts. When you reach a natural stopping point or feel stuck, ask AI to offer perspectives, edits, or expansions of your ideas, not to replace them but to deepen and clarify them.
2. Switch your starting points
Intentionally alternate your first move: You lead one day, AI leads the next.
Why it matters: This prevents dependence on one method and strengthens your ability to both generate and evaluate ideas — a core part of critical thinking.
How to try it:
Day 1: Start with your own framework. Let AI refine.
Day 2: Start by prompting AI. Then revise or critique what it offers.
3. Fasting from AI
Take regular breaks from using AI to strengthen your independent thinking.
How to try it:
A weekly fast (e.g., no AI use on Sundays).
A project/issue-specific fast (e.g., outline or design something without AI before allowing yourself to use it).
Or a “low-AI” day where you can only ask AI one question total.
Why it matters: Giving your brain time to work through ambiguity, wrestle with questions, and experience the slower creative process is deeply valuable for neuroplasticity and cognitive health.
These strategies aren’t about avoiding AI — they’re about remaining mentally active and rooted in your own thinking, so that AI becomes a supplement to your brain, not a substitute for it.
7. Consider the environmental cost
Every conversation with AI uses electricity and water. While we don’t yet have reliable data on AI’s full environmental impact, early estimates suggest it’s not insignificant. Use AI intentionally, not endlessly, especially when what you need most is real connection with a human.
You could also consider advocating for more transparency. For example, you can learn more about the “Artificial Intelligence Environmental Impacts Act of 2024” and let your legislators know that you believe it should be reintroduced and considered in a future session.
The bottom line
Generative AI is a tool: a helpful thinking partner for reflecting, brainstorming, and practicing new skills. It holds promise as something therapeutic, and it can feel that way when we use it. And it is important to use it wisely when it comes to our mental health.
Therapy can be so powerful not just because of what gets said, but because of the neurological/emotional collaboration that occurs between two humans: the presence of another nervous system that can co-regulate with yours, making room for safety, risk, and growth. Sometimes that happens in person, sometimes through a screen—but it’s always human-to-human.
Use AI thoughtfully. Let it support you in organizing your mind, and consider that your most challenging and tender healing work may be cared for best in spaces where another person can witness you, attune to you, and help you feel and heal in ways an algorithm cannot.