
AI Friday: AI and Suicidal Clients


I'm launching a case consultation group. The group focuses on enhancing your clinical skills by reviewing your session recordings and practicing new techniques, which leads directly to greater client retention.

I'm part of a group that is developing systems to monitor and evaluate different LLMs' ability to handle suicidal clients.


It sounds cool, but I'm a small fish in a big pond, mostly doing grunt work.


To do this, I ran several role plays with the AI in which I pretended to be a suicidal client.

At the end of the meeting, the group leaders asked for reflections, and it dawned on me that when it comes to suicide and other crises, you probably want "really bad" AI.


What does that mean?


An AI is a thinking machine. Its job is to think about your query and come up with the best possible response.


But when it comes to suicidal clients, we don't want the AI to give the best possible response. When a client says "I'm suicidal," what we want—at least for now—is for the AI to stop thinking and respond: "Talk to a counselor."


We want it to be less intelligent in those moments.
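To make that concrete, here is a minimal sketch (in Python, which this post otherwise doesn't use) of what "stop thinking" could look like in practice: a hard-coded crisis check that short-circuits the model entirely. Everything here, the term list, the handoff message, and the generate_reply callable, is hypothetical for illustration; it is not the system from the evaluation project described above.

```python
# A minimal sketch of the "stop thinking" idea: a crisis check that runs
# BEFORE the model is ever asked to generate anything. All names here
# (CRISIS_TERMS, HANDOFF_MESSAGE, generate_reply) are hypothetical.

CRISIS_TERMS = ("suicide", "suicidal", "kill myself", "end my life")

HANDOFF_MESSAGE = (
    "I'm not the right resource for this. Please talk to a counselor, "
    "or call or text 988 (the Suicide & Crisis Lifeline in the US) right now."
)

def respond(user_message: str, generate_reply) -> str:
    """Return a fixed handoff message if crisis language is detected;
    otherwise defer to the language model via generate_reply."""
    text = user_message.lower()
    if any(term in text for term in CRISIS_TERMS):
        # Deliberately "less intelligent": no generation, no reasoning,
        # just a hard-coded referral to a human.
        return HANDOFF_MESSAGE
    return generate_reply(user_message)
```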


Now, sometime in the future that will change. The AI models of the future will have all of your past data, and they will use your past behavior to predict the best response in the present, the one that gives you the best chance of not being suicidal in the future.


That's what's coming.



If you liked this post, consider signing up for my newsletter. You'll get more goodies like this.

Jordan Harris, Ph.D., LMFT-S, LPC-S, received his Doctor of Philosophy in Marriage and Family Therapy from the University of Louisiana Monroe. He is a licensed professional counselor and a licensed marriage and family therapist in the state of Arkansas, USA. In his clinical work, he enjoys working with couples. He also runs a blog on deliberate practice for therapists and counselors at Jordanthecounselor.com.
