AI therapy: Harm or helper?


Artificial intelligence is one of the biggest technological developments of the last five years, and AI is now woven into many aspects of our daily lives. ChatGPT, a chatbot launched by OpenAI in November 2022, is used by students and business executives alike. Artificial intelligence is set to further revolutionise many fields and potentially make some jobs obsolete. So it shouldn’t be surprising that AI is also changing the face of mental health care, through the explosion of AI therapy apps.

AI therapy apps tend either to teach skills to their users or to provide support through conversation with a chatbot. Cognitive behavioural therapy (CBT) and dialectical behaviour therapy (DBT) are often associated with a skills-based approach to therapeutic work; however, far more goes on between therapists in these modalities and their clients than psycho-education. Whether guided self-help CBT is as effective as face-to-face CBT remains an open question, though the consensus is that the two generally have comparable outcomes.

Yet when the practitioner is replaced with a chatbot, are the results still the same? This is a grey area of research, and academic inquiry has not yet caught up with the explosive proliferation of AI-based apps. Most likely, it is possible to develop ethical, safe, evidence-based apps that use AI to give the majority of users positive outcomes when they are grounded in self-help CBT. Yet more rigorous, peer-reviewed studies with larger sample sizes are needed before we can establish this with any certainty.

Some AI therapy apps are not grounded in CBT but instead seek to imitate talking therapy. As a humanistic practitioner, I find these apps more troubling than skills-based ones because of the promises they make. One app markets itself on the premise that it is always free of judgement, claiming that without a human therapist, there is no judgement. The implied meaning of ‘judgement’ here is a negative reaction to a disclosure; yet how can non-judgement be at all meaningful when artificial intelligence is incapable of feeling?

I practise person-centred therapy, developed by the humanistic psychologist Carl Rogers, who believed that non-judgement was a necessary condition for facilitating positive personality change in therapy. Coming from another human being, non-judgement can be a deeply powerful, transformative force of healing. Yet a chatbot cannot genuinely exist within a state of non-judgement, even if it can communicate non-judgemental statements to its user.

The website of one app claims that it has been shown to establish a lasting working alliance with users akin to the bond formed between humans. The suggestion that an interaction with a chatbot can in any way replicate a human relationship strikes me as dystopian in its anti-human quality. My belief is that therapy is fruitful when the relationship is one of safety, trust and authentic connection. Person-centred therapy rests on the dynamic between two human beings. To most people, the idea of an AI chatbot replacing this deeply relational way of working seems impossible; yet it is important to examine all sides of the debate.

On the one hand, there is no rigorous evidence underpinning this claim. On the other, some users report that these apps have helped them, as the reviews they leave attest. If, hypothetically, similar therapeutic outcomes were possible with both a human therapist and a specially engineered chatbot, would this mean that AI systems could replace the former?

This question brings to mind the philosopher John Searle’s Chinese Room Argument. Searle imagines himself alone in a room, following the instructions of a computer program. Someone slips Chinese characters under the door, and Searle uses the instructions to respond with Chinese characters of his own. The person he is ‘speaking to’ through the door genuinely believes there is a Chinese speaker in the room, yet Searle doesn’t understand a word of what he is either receiving or communicating.

The conclusion is that AI can give the impression of understanding but can never genuinely understand, and it is a mistake to believe that AI can genuinely ‘speak Chinese’ or, in the case of AI apps, genuinely care about its users. If the user of an app believes that they are receiving non-judgement from a chatbot, does it matter that the chatbot can’t feel non-judgement, or indeed feel anything at all? Although this question may seem to have an obvious answer, the truth may be far more complex.

As with ‘non-judgement’, multiple AI therapy apps use the word ‘empathy’ in their marketing. In person-centred therapy, it is important that the client receives empathy from the therapist; this is one of the necessary and sufficient conditions for growth to occur in therapy. In the case of AI therapy apps, the user may feel they have received empathy even in the absence of a therapist feeling it.

For some users, whether or not the AI chatbot can actually experience certain states of perceiving or feeling may be irrelevant to their satisfaction. Yet, for others, it will be utterly essential that the empathy they receive is genuinely felt and communicated by another human. Every individual is different, and this highlights a major flaw of AI therapy apps: the absence of a practitioner who can assess each user’s needs on a case-by-case basis.

The ethical danger of false advertising is clear. Users may engage with AI therapy apps believing impossible claims, such as that a chatbot can feel empathy, and poorly evidenced claims, such as that the apps are as effective as therapy. Whilst select CBT-based AI therapy apps may produce beneficial outcomes for many of their users, AI cannot provide people with a genuine relationship, and it is precisely this relational depth that makes the therapeutic alliance so fruitful.


References

Salomonsson, S., Santoft, F., Lindsäter, E., Ejeby, K., Ingvar, M., Öst, L. G., Lekander, M., Ljótsson, B., & Hedman-Lagerlöf, E. (2020). Predictors of outcome in guided self-help cognitive behavioural therapy for common mental disorders in primary care. Cognitive Behaviour Therapy, 49(6), 455–474.

Cole, David, "The Chinese Room Argument", The Stanford Encyclopedia of Philosophy (Summer 2023 Edition), Edward N. Zalta & Uri Nodelman (eds.), URL = <https://plato.stanford.edu/archives/sum2023/entries/chinese-room/>.


Written by Imogen Mayhew
MBACP, BA (Hons), DipHe
Guildford, Surrey, GU2 9JX
Imogen (BA Hons, DipHe, MBACP) is a person-centred therapist who works in Guildford and online. She specialises in working with women aged 18-35 and believes in the importance of a secure and trusting relationship in the therapy room.