Last month marked the one-year anniversary of the release of ChatGPT, OpenAI’s chatbot. As companies rush to incorporate the groundbreaking technology into their operations, many workers remain anxious that generative artificial intelligence – which tends to rely on large language models (LLMs) – will replace them. Ironically, this anxiety is shared by professionals who are trained to deal with it: therapists.
To be sure, generative AI services, of which ChatGPT, Google’s Bard, and Meta’s LLaMA are only the tip of the iceberg, will disrupt work as we know it today. Accenture estimates that language tasks account for 62 percent of total work time in the United States, and that LLMs are likely to automate or augment 65 percent of those tasks.
Earlier this year, the US National Bureau of Economic Research published a study showing that access to a generative AI-based conversational assistant increased the productivity of customer-support agents by an average of 14 percent.
Global health systems could also benefit from productivity gains, as many are contending with underfunded prevention programmes, overworked staff, and the rising costs of chronic diseases. This is especially true in the mental-health field, where providers have struggled to meet growing demand in the wake of the pandemic.
According to a 2021 report by the Organisation for Economic Co-operation and Development (OECD), 67 percent of people had difficulties getting the mental-health support that they needed. Moreover, the US Centers for Disease Control and Prevention found that, in 2022, one in eight Americans regularly had feelings of worry, nervousness, or anxiety, while nearly half of US health workers reported often feeling burned out.
So, will generative AI revolutionise mental-health care by reducing therapists’ workload, or even replace human therapists altogether? Can LLMs like ChatGPT or Bard “treat” us the way human therapists do?
Given that language and communication are the main tools of psychotherapy, one might assume that generative AI could easily automate treatment. Because these models can digest thousands of pages of therapy manuals, research papers, and clinical case studies faster than any human with a PhD, they could conceivably use this knowledge base to tailor psychotherapy for each patient.
But this aspiration (or fear) ignores how psychotherapy works and what makes it effective. Research shows that successful treatment depends mainly on two elements: “specific” and “common” factors.
Specific factors are techniques, such as relaxation exercises and exposure therapy, that psychotherapists intentionally use. For example, while a person can talk for hours about their arachnophobia, conquering their fear requires gradual exposure to spiders.
But the key to effective psychotherapy lies in common factors. These include genuinely human traits such as empathy and hope. They also encompass the actions – listening and sharing emotions and thoughts – that form the basis of human connection and are crucial to building a bond of trust between patient and therapist, without which psychotherapy is doomed to fail.
Moreover, only within the confines of such a relationship can the two parties agree on expectations and goals – another important common factor.
Ultimately, both categories can help explain why psychotherapy may work in one case but not in another. For example, a distant and cold expert who selects the right technique will not be successful, nor will a kind and motivated therapist who addresses issues that are irrelevant to the patient.
As a board-certified psychotherapist who regularly sees patients, I do not believe that generative AI will automate the profession. Psychotherapy is a deeply human interaction, in which two people meet to relieve one of them of personal distress.
Despite the speed and ease with which LLMs can conjure text using natural language, they will not soon form connections with humans as we do with each other.
Moreover, psychotherapy offers a safe space – protected by professional confidentiality – to talk about feelings, vulnerabilities, and thoughts we might be too embarrassed, or ashamed, to discuss with another person. What is discussed in the therapy room remains there.
The idea that we will be comfortable sharing intrusive thoughts, obsessions, and sensitive information with a system that may use any input to further improve its output is far-fetched.
Even if data were anonymised, patients would still share sensitive health information with private companies on a large scale, which is fundamentally different from a face-to-face session with a single therapist.
Generative AI could, however, augment the work of therapists by making it easier to determine the most suitable techniques for each patient. More specifically, in complex cases, it could propose a list of personalised interventions from which to choose, nudging therapists in the right direction.
One can also imagine an AI-enabled self-referral tool expanding access to mental-health services, especially for minority groups or those with communication needs, such as a preference for plain language. All of these developments would aid therapy, not replace it.
Therapists need not fear generative AI, even though it has been hailed as a transformative innovation. According to the World Economic Forum, marriage and family therapists and mental-health counsellors hold some of the jobs that are least likely to be transformed by LLMs.
Instead, therapists working through their anxiety about being replaced by technology must take their own advice: face the fear head-on. What they would see is that AI fails to replicate our common humanity, the foundation on which psychotherapy is constructed.
Marc Augustin, a German board-certified psychiatrist/psychotherapist, is a professor at the Protestant University of Applied Sciences in Bochum, Germany, and a SCIANA fellow.