“Medicine should be viewed as social justice work in a world that is so sick and so riven by inequities.” - Paul Farmer (1)
Like many physicians of my generation, I was inspired to enter the field of medicine by Tracy Kidder's Mountains Beyond Mountains, an extraordinarily inspiring book on the life of the physician and advocate Paul Farmer. The late Dr. Farmer compelled practitioners of medicine not only to recognize, but to act against, the social and economic forces that perpetuate health inequities worldwide as part of their daily work, regardless of the setting in which they practice.
Over the previous posts, we have discussed how the increasing use of digital healthcare technology, specifically artificial intelligence (A.I.), has the capacity both to empower patients and to worsen health inequity. Because governments and industry regulators are already behind in placing patients first, physicians must be prepared both to compel regulatory action and to take part in protecting patients in the digital age. In what follows, I offer four ways in which physicians can address the rise of A.I. and ensure its appropriate and just application for patients.
Previously, I highlighted a few of the many applications of machine learning, specifically in the specialty of psychiatry. However, the state and variety of these applications change almost daily. Consequently, understanding the use and current limitations of A.I. within a given practice setting can be difficult. Several public-private partnerships have developed databases to help guide practicing physicians through this new frontier (4). One such resource is the Health AI Partnership, a collaboration between multiple academic medical centers, the American Medical Association, and the global law firm DLA Piper. The Health AI Partnership collates media coverage of A.I. in healthcare, vets current research, and provides guidance on the safe implementation of A.I. in a given practice (5), including identifying the scope of A.I. use, understanding product specifications, and developing measures of success.
On July 21, 2023, the Biden administration announced that it had secured commitments from industry leaders in A.I. to submit to rigorous data-security and consumer-protection regulations (6). While this is an important step, it is imperative that physicians understand, and reiterate, how poorly prepared our current regulatory environment is to manage the wave of A.I. applications in progress. As noted in a recent report on the state of A.I. in healthcare, the FDA's current medical device framework is insufficient to assess the safety and accessibility of A.I. in healthcare (7). The FDA has released a draft of its proposed framework for managing the release of healthcare-related machine learning here. There are several ways in which physicians can advocate for and help ensure that this regulation happens. One is to submit comments on the FDA policy directly at this site. Physicians may also align with organizations that have the financial and institutional resources to amplify their voices. With regard to A.I., the leading voices in physician advocacy include the American Medical Association and the Stanford University RAISE-Health initiative (8, 9).
The role and skills required of physicians in all specialties remain in constant evolution. While it may seem counterintuitive, the appropriate and well-regulated implementation of A.I. in healthcare requires physicians to be familiar with its use. Thus, integrating A.I. competency into physician training, as well as into continuing medical education, plays a vital role in improving outcomes and protecting patients. Given that core competencies for resident physicians are defined by the completion of Entrustable Professional Activities (EPAs), several leading researchers suggest developing technical EPAs for the use of A.I., thereby integrating it fundamentally into the physician's core training (10). The fields of radiology and pathology are furthest along in this process, given the more direct role A.I. currently plays in diagnosis and management in those fields (11, 12). One of the most significant competencies in these proposed trainings is the physician's capacity to explain the machine learning process and, consequently, to identify and contest misuse of, or errors made by, A.I. (13).
Just as they do in the present healthcare environment, patients have the most to lose from the unchecked and reckless application of machine learning. Fortunately, the clinical relationship is a crucial source of patient education and empowerment. A recent survey by Yale University found that patients across multiple age groups and ethnic backgrounds trusted that A.I. would have a positive impact on their overall healthcare (14). However, most respondents agreed that the biggest risks of A.I. were misdiagnosis, privacy breaches, and reduced physician involvement in care. Thus, while optimistic, patients are realistic about the potential risks, providing ample opportunity for education.
Physicians can reinforce patient safety through education in several ways. First, physicians must reiterate to patients their rights under the Health Insurance Portability and Accountability Act (HIPAA). Given that so much of the technology space compels patients to share significant amounts of data in their personal lives, patients may not be aware of their rights, nor of the consequences of personal health data being leaked. Patients must also be made aware that the majority of A.I. systems in production, or being experimentally applied in healthcare, are not yet HIPAA-compliant. Patients should be educated to ask what data will be stored, how, and for how long (15, 16, 17). Physicians also have the opportunity to demonstrate the use of A.I. directly and collaboratively in clinic, given that it already exists across many healthcare systems in the form of personal assistants embedded in the electronic medical record (18, 19). By engaging patients directly in the use of A.I., physicians make relatable and directly impactful a subject that patients may otherwise find abstract or distant from their own care (20).
A.I. is the technological breakthrough of the moment, and it appears to be advancing rapidly and garnering significant enthusiasm. In a healthcare system marred by systemic inequities and profit-driven entities, technological innovation risks prioritizing productivity and profit over patient outcomes. As Paul Farmer so clearly distilled, physicians are often called to provide care in opposition to a long line of societal forces that perpetuate disease and suffering. Consequently, it is imperative for the safety of patients and the advancement of healthcare that physicians take an active role in shaping the use of A.I. in medicine. By staying informed, demanding regulatory action, demonstrating competence in machine learning, and providing patient education, physicians have the power to prevent future inequities and ensure patient safety.
The Boston Congress of Public Health Thought Leadership for Public Health Fellowship (BCPH Fellowship) is guided by an overall vision to provide a platform, training, and support network for the next generation of public health thought leaders and public scholars to explore and grow their voice.