
AI In Mental Health: Eight Evidence-Based Predictions and Three Suggestions

Invited guest post by Alex Vaz, PhD, Chief Academic Officer at Sentio University


AI’s impact in mental health is already here, and it will only increase over time.


Research suggests that clinicians tend to underestimate how many people are already using AI for emotional support, as well as how helpful they often find it.


Sentio’s Marriage and Family Therapy program in California has been at the forefront of AI research in mental health. Our AI research team studies how people use AI for psychological support (when it helps, when it harms) and collaborates with AI safety leaders to create resources for clinicians.


As part of Sentio’s AI initiatives, we have been offering AI trainings and free resources to professionals. We recently made our 4-hour introductory AI training series freely available, because we believe every therapist and every training program should have access to it right now.


One recurring question we hear from our peers and trainees is: If AI isn’t going away, what’s our role in this? What can we possibly do now?


Well, it turns out, quite a lot. How deeply you want to dive in is a personal choice.


Integrating AI safely and ethically in clinical work is a set of trainable skills that clinicians, supervisors and trainees can hone over time. 

Flowchart: “Current Clinical Uses of AI” splits into two branches, “For Clinicians” and “Client-Facing.”

Clinician categories: 1. Documentation and notes, 2. Theory and learning, 3. Clinical brainstorming, 4. Skill building.

Client-facing categories: 1. Assessing AI use, 2. Educating on safe use, 3. Co-designing tasks.

For example, Sentio’s AI Implementation Ladder offers an overview of several skills you can develop, from lowest to highest burden:

Sentio’s AI Implementation Ladder shows five steps along a colored bar from green to red: 1. Assessment, 2. Basic psychosocial education, 3. Teaching safe use of AI, 4. Between-session therapy tasks (and homework), 5. Structured prompts.

The other main question we get asked is: “What will the field look like in the future?”


We don’t know. No one does. Not even the CEOs of AI companies.


We could be wrong, but here are our best evidence-based guesses for the next 5-10 years:

  1. HIPAA-compliant AIs go mainstream. AI use becomes “official” in healthcare systems.

  2. Most clinical documentation becomes semi-automated. Notes, summaries, and treatment plans are mostly drafted by AI.

  3. Much “therapy” starts outside human therapy. People get first-line support from always-on AI, and book a human therapist when they are stuck or escalating.

  4. The market bifurcates: Low-cost AI-first support scales massively; human therapy becomes more premium.

  5. Clinicians see fewer “mild” cases. Caseloads skew toward comorbidity, high-risk, trauma/dissociation, personality patterns, couples/family, and “when AI didn’t help.”

  6. A new referral category emerges: “AI-related complications.” Clinicians treat AI dependency and attachment displacement, misinformation spirals, and escalation or conflict caused by clients following AI advice.

  7. Training programs, licensure, and standards change. Expect new required competencies such as AI-informed consent, safety planning around AI use, and documentation of AI use.

  8. Supervising AI becomes an industry in and of itself, monitoring the safety, accuracy, bias, and real-world impact of AI systems in clinical care.


From our read of the literature, here are three quick future-proofing tips:

  1. Become a “complexity specialist”. Lean into work that stays hardest to automate: high-risk care, trauma/dissociation, personality patterns, comorbidity, couples/family.

  2. Differentiate with a clear niche. Pick a specific population/problem, publish on your approach (website, social media, writings, talks, supervision), and become an obvious referral.

  3. Invest in AI training, for yourself and for others. You can build part of your career around training and supervising clinicians on safe AI use and screening, running staff training workshops, or building AI resources for specific clinical populations and issues.


For a deeper dive, join Sentio’s next advanced online trainings: https://sentio.org/offerings


Alexandre Vaz, PhD

Contact: avaz{at}sentio.org

Headshot of Alex Vaz, PhD.
Alex Vaz, PhD

Alexandre Vaz, PhD, is co-founder and Chief Academic Officer of Sentio University and the Sentio Counseling Center. Dr. Vaz is the author of eighteen books on psychotherapy training and deliberate practice, and co-editor of the American Psychological Association’s book series “The Essentials of Deliberate Practice.” He has held multiple committee roles for the Society for the Exploration of Psychotherapy Integration (SEPI) and the Society for Psychotherapy Research (SPR).

