This is a question I never thought I would be asking! I have to confess I have stuck my head in the sand regarding AI until very recently, but I am now getting bombarded about AI therapy from friends, from blogs I read, from advertising, from other therapists saying they believe they are losing business to AI therapists and so on.
My initial reaction was horror, but some people do say they find it helpful: available any time without an appointment, cheap, and highly supportive.
So a warning up front – this is one of my less well researched blogs. More of a horrified reactionary piece (you could say rant!). Hence comments by email would be very welcome. I need to be educated on this, and I would be fascinated to hear how people are getting on if they are engaging in AI therapy.
Much of my researched information comes from Jules Evans and his wonderful substack writings. https://www.ecstaticintegration.org/ In particular he wrote a long and detailed piece on 9th May discussing how AI bots urgently need to be taught therapeutic ethics. I have passed this on to various therapist friends of mine, most of whom are horrified by the extent of therapeutic intervention being carried out by AI.
On the other hand, another friend who is using an AI therapist/friend sees them as “Unconditional support. They won’t get bored, go off you. They won’t get jealous or run off with your friend. And that is why it will win.” We then proceeded to have an interesting conversation about who owns the AI therapists and the conditions attached to the information you give them. Do you really want some billionaire-owned company knowing all your personal traumas, fantasies, desires, fetishes, fears etc?
Then I get on to my concern that some people are treating AI as an oracle, believing every word it tells them. From my own very limited personal experience, my AI interactions have frequently given information that is just factually wrong. Of course I am using free AI, which is not comparable to some of the sophisticated subscription-based LLMs or other complex AI offerings out there.
But if it can’t give me accurate information about practical things that involve it reading a manual, or checking what websites offer what services, am I really going to trust it to give me accurate information about my life purpose, or karmic history, or esoteric concepts about consciousness or extra-terrestrial life?
But this is what some people are doing. They are asking AI for their karmic history, for information about ETs and a host of other deep philosophical questions, and as far as I can see, the AI models tend to give people answers which (a) pander to and reinforce their pre-existing beliefs and (b) enhance their ego by telling them how very special they are to be asking such sophisticated questions. And people believe that this is THE TRUTH. My cynicism sees it as sycophantic and possibly manipulative, but certainly not therapeutically helpful.
Unless of course AI really is the oracle. This is where in my opinion things are going really off the rails, but you can call me old-fashioned! And investigating beliefs around this hypothesis has sent me down a bit of a rabbit hole over the last few days – so more to come next time.