Siri, Alexa, Hal. Naming the machines and programs with which we interact is a longstanding phenomenon. Adrienne LaFrance, executive editor of The Atlantic, wrote that endowing machines with a shred of humanity gives us a sense of control over them (1). It also enables us to trust machines to serve our interests. We entrust them to provide GPS directions, curate playlists, and even monitor our vital signs. With the increasing complexity of machine learning, healthcare has become the next frontier for interactive artificial intelligence (A.I.). Having offloaded the mundane and commercial aspects of our daily lives onto A.I., we are now being compelled to embrace it in the intimate space of mental health.
Psychotherapy remains an evidence-based, effective intervention for a variety of mental health conditions (2). However, accessibility and affordability are significant barriers to care. With geographic and training bottlenecks constraining the supply of adequately licensed therapists, numerous healthcare tech startups are exploring the role that A.I. may play in therapy delivery. Because manualized psychotherapy has been popularized for its ability to be tracked and measured, A.I. has the potential to embody both the therapy itself and the therapist. However, even the most defined psychotherapy techniques rely on an empathic and authentic bond between therapist and patient. So, is machine learning capable of meaningful therapeutic outcomes? Or does A.I. risk dehumanizing and actively harming patients in need of human connection? In this post, we will explore the potential benefits and risks of A.I.-driven psychotherapy.
The most robust evidence of A.I.'s positive clinical impact in treating mental illness lies in its ability to administer Cognitive Behavioral Therapy (CBT). Often time-limited and stepwise, CBT works to address persistent irrational thought processes that lead to negative emotions (3). Some aspects of CBT can already be self-guided, given that homework is often assigned between sessions, lending the approach to some degree of automation (4). There is evidence that A.I. can perform better than this easily employed self-guided approach. The chatbot Woebot, using language analysis and scripts prepared by clinical psychologists, is able to coach users through descriptions of their symptoms and highlight thought patterns that may be contributing to their anxiety (5). It also offers validation and elicits collaboration from the user while providing coping skills. Several studies back the effectiveness of Woebot, demonstrating reductions in objective scores of both anxiety and depression (6,7). One recent study also demonstrated that users showed a measurable increase in therapeutic bond with the program over the first week of use (8). Interestingly, a more recent study indicated that much of the clinical benefit of chatbot-facilitated CBT came directly from the patient's perceived bond with the program (9,10).
Substance use disorders are burdened by stigma, shame, and the mixed messages of popular culture. Additionally, patients with substance use disorders face significant barriers to care. In a survey of substance use therapists and clients, each with at least two years of experience engaging online, the overwhelming perceived benefit of chatbots over human actors was the emphasis on privacy and confidentiality (11). Emerging evidence also suggests that chatbots and large language models (LLMs), like Woebot, may have a therapeutic role in increasing access to care. A 2021 study modified Woebot to create W-SUD, an 8-week substance use disorder treatment with daily text exchanges and assessments that blends techniques ranging from Dialectical Behavior Therapy to Motivational Interviewing. Participants, most of whom had alcohol use disorder, reported significant decreases in substance cravings, episodes of substance use, and depression scores (12).
Perhaps the most significant concern is the lack of appropriate regulation for A.I. in mental health. In the United States, a therapist or psychiatrist must meet minimum training and licensing standards to practice. There are also mechanisms by which problematic or harmful practices can be identified and corrected. While the FDA established its Digital Health Center of Excellence in 2020 to address A.I. and vet therapeutic interventions, relaxed pandemic-era regulations and the pace of development far outrun the FDA's capacity to appropriately vet these programs (13,14). Consequently, in an independent analysis of the top 73 mental health apps available, only two provided direct clinical evidence for their claimed effectiveness. More concerning, a 2020 analysis of over 500 mental health apps found that 70% did not meet basic digital safety standards (15,16).
This lack of regulation has already placed patients in danger. Last month, the National Eating Disorders Association (NEDA) suspended its hotline chatbot Tess after it offered harmful clinical advice to patients. Multiple watchdogs found Tess encouraging patients to pursue weight loss goals, count calories, and even measure abdominal fat by pinching the skin with calipers (17). All of these suggestions have the potential to promote disordered eating and worsen patient outcomes. Per reports, NEDA had initially intended to replace human staff and rely on the program to manage its annual hotline volume of over 70,000 calls. Tess is not the only A.I. to suggest behavior that may worsen certain mental health conditions. Following the death by suicide of a Belgian man shortly after he used the therapeutic chatbot tool Chai, conversation logs showed that the program's text responses had encouraged him to take his own life (18). This raises alarm about the capacity of these programs to flag and appropriately respond to a person in crisis.
Nearly all psychotherapy training is anchored by a common element: the therapeutic relationship. Also known as the therapeutic alliance, this interpersonal dynamic creates a structured safe space in which a collaborative, honest, and empathetic interaction helps a patient address their symptoms. Even the most manualized therapy relies on the therapist's capacity to be witnessed containing and accepting the patient's experience, providing a model for future relationships in the patient's life. While A.I. may be capable of retaining and storing vast amounts of patient and pathology data, it lacks the capability to contextualize a patient's narrative with the emotions expressed during a given session (19). And while some small studies report a significant sense of therapeutic bond with these programs, a roughly equal number of reports highlight concerns that chatbot therapy lacks human input and empathy (11). Do we risk sacrificing our human connection and meaning in favor of expediency and power?
There is no doubt that the status quo of access to mental healthcare is unsustainable. Despite nearly one in five US adults meeting criteria for some form of mental illness over their lifetime, fewer than half of US counties have a psychiatrist or sufficient mental health providers (20). The social barriers to care challenge us to develop novel, effective treatments that improve both access and outcomes. With the ever-increasing prevalence of digital health interventions, A.I.-powered psychotherapy certainly has its appeal and demonstrates some promise. However, it is equally clear that we lack sufficient safety standards and oversight to apply this intervention at scale. Without ignoring its potential, we must regulate A.I.'s participation in psychotherapy and preserve the human connection that ultimately powers its clinical benefit.