When you have the privilege of leading a global enterprise like The Joint Commission, certain universal truths are revealed. One especially stands out.
Healthcare today is brutally complex. Our field is facing unprecedented clinical, operational, financial and even political challenges. Patients still have trouble accessing the safest, highest-quality care available. In fact, a 2024 study led by the University of California San Francisco’s Andrew Auerbach, MD, and published in JAMA Internal Medicine suggests that 23% of patients who die or deteriorate in U.S. hospitals do so because of a missed or incorrect diagnosis. U.S. hospitals and health systems are under immense strain, with 37% operating at a loss, Kaufman Hall reported in February. Workforce shortages and burnout persist. Violence toward healthcare workers is intensifying. And the headwinds keep coming.
Still, there is optimism, as we are on the brink of significant transformation. Artificial intelligence offers tremendous opportunity for a much-needed step-change in healthcare. From meeting patients’ needs in a timelier fashion, reducing administrative overhead, easing clinician burden and improving diagnosis to dramatically accelerating drug discovery, AI has the potential to unlock advances we have yet to envision.
Consider for a moment the enormous amount of data contemporary clinicians are expected to manage. In the ICU, evidence indicates the average patient generates more than 1,300 data points per day. Cognitive psychologist George Miller’s “magical number seven” theory suggests the brain’s working memory can hold only about seven variables simultaneously—the length of a local U.S. phone number, excluding the area code.
Just how many unique orderings can seven distinct digits take? 7 factorial, or 5,040. Now, let’s do the same factorial exercise for 1,300 variables, calculating all the possible orderings in which 1,300 data points could surface in the ICU for a given patient.
Is it 1 million? 1 billion? No; it’s the head-spinning number of 3.16 times 10 to the 3,485th power. That’s more than the number of sand grains estimated to be on Earth, and far more than the roughly 10-to-the-80th-power particles thought to exist in the observable universe. We have reached a point where “the complexity of modern medicine has surpassed the capacity of the human mind,” as my colleagues and I wrote in the New England Journal of Medicine AI last year.
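For readers who want to check the arithmetic, a few lines of Python reproduce both figures; the only assumption is that we are counting orderings (permutations) of distinct items, which is exactly what the factorial counts.

```python
import math

# Orderings (permutations) of 7 distinct items: 7! = 5,040
print(math.factorial(7))  # 5040

# 1300! is astronomically large, so compute its base-10 logarithm
# instead, using the log-gamma function: ln(n!) = lgamma(n + 1).
log10_value = math.lgamma(1301) / math.log(10)
exponent = math.floor(log10_value)
mantissa = 10 ** (log10_value - exponent)
print(f"{mantissa:.2f} x 10^{exponent}")  # 3.16 x 10^3485
```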
The bottom line is healthcare needs help. That’s why the potential for AI to augment our work is so intriguing, across all aspects of quality, patient safety and operations. Early uses of AI and their results have been encouraging.
Six years ago, for instance, following traditional attempts to improve care for sepsis and reduce mortality rates, a large, national health system turned to AI and developed an algorithm that alerted care teams the instant an inpatient began exhibiting signs of sepsis, using data tracked in the patient’s electronic health record (EHR). The algorithm, called SPOT, or Sepsis Prediction and Optimization of Therapy, dramatically improved sepsis diagnosis and saved an estimated 8,000 additional lives over a five-year period. This occurred well before ChatGPT and the rapidly evolving AI capabilities we are witnessing today.
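To make the pattern concrete, here is a deliberately minimal sketch of the general approach: continuously screening vital signs pulled from the EHR against clinical criteria and alerting the care team when enough of them trip. To be clear, this is not SPOT’s actual logic; the Vitals schema, thresholds and SIRS-based rule below are illustrative assumptions, standing in for the far richer inputs and learned models a production system would use.

```python
from dataclasses import dataclass

@dataclass
class Vitals:
    """One set of vital signs from an EHR feed (hypothetical schema)."""
    temp_c: float        # body temperature, Celsius
    heart_rate: int      # beats per minute
    resp_rate: int       # breaths per minute
    wbc_k_per_ul: float  # white blood cell count, thousands per microliter

def sirs_flags(v: Vitals) -> int:
    """Count how many SIRS (systemic inflammatory response syndrome) criteria are met."""
    return sum([
        v.temp_c > 38.0 or v.temp_c < 36.0,
        v.heart_rate > 90,
        v.resp_rate > 20,
        v.wbc_k_per_ul > 12.0 or v.wbc_k_per_ul < 4.0,
    ])

def should_alert(v: Vitals) -> bool:
    # A common screening convention: two or more criteria trigger review.
    return sirs_flags(v) >= 2

# A patient trending toward sepsis trips the alert:
print(should_alert(Vitals(temp_c=38.6, heart_rate=112, resp_rate=24, wbc_k_per_ul=13.5)))  # True
```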
At the same time, two fears persist. The first is that, as a society, we won’t have sufficient guardrails in place to protect us from the unintended consequences of AI in healthcare, which could cause real harm. These range from user error, hallucinations and algorithmic biases that amplify care disparities to novel data security threats and inappropriate use. The second is that overregulation and too many controls will stifle progress, hampering the ingenuity of entrepreneurial innovators and obstructing healthcare’s ability to harness the transformative power of AI.
We must land somewhere in the middle. Last September, The Joint Commission convened experts to discuss operationalizing the responsible use of AI in healthcare. U.S. policymakers, patient advocates, healthcare workers, tech industry leaders and healthcare executives gathered for spirited discussions around the opportunities and challenges AI presents. What emerged was excitement about the value of AI tools, a desire for guidance to drive innovation and a strong interest in common-sense guardrails, with a focus on governance, to ensure healthcare organizations meet their obligations to patients.
When we think about developing a framework governing the responsible use of AI in healthcare, it’s important to recognize that AI is not one thing, a fact that adds complexity to an already complex picture. As Michael Howell, MD, chief clinical officer, and Karen DeSalvo, MD, chief health officer, both at Google, wrote last year in the Journal of the American Medical Association, we can divide AI into three epochs, each with “fundamentally different capabilities and risks.”
- AI 1.0, which dates back to the 1950s and is characterized by symbolic and probabilistic models; picture the if/then statements sketched in code after this list.
- AI 2.0, which encompasses deep learning models that “do one thing at a time” and “primarily focus on classification and prediction.”
- AI 3.0, which comprises foundation models and generative AI that “can do many different kinds of tasks without being retrained on a new dataset.”
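As a toy illustration of the first two epochs (all names and numbers here are invented for the example): an AI 1.0 system encodes an expert’s rule directly, while an AI 2.0 system learns a similar decision boundary from labeled examples. AI 3.0 has no analogue this small; the point of foundation models is that one pretrained model handles many tasks without task-specific retraining.

```python
# AI 1.0: a symbolic rule, hand-written by an expert (an if/then statement).
def flag_fever_rule(temp_c: float) -> bool:
    return temp_c > 38.0  # threshold chosen by a person, not learned

# AI 2.0: the same decision, learned from labeled examples. A one-feature
# perceptron-style update stands in for a deep learning classifier.
def train_threshold(examples: list[tuple[float, bool]], epochs: int = 100) -> float:
    threshold, lr = 37.0, 0.01
    for _ in range(epochs):
        for temp, is_fever in examples:
            predicted = temp > threshold
            if predicted != is_fever:
                # Nudge the boundary toward the misclassified example.
                threshold += lr if predicted else -lr
    return threshold

data = [(36.5, False), (37.2, False), (38.4, True), (39.1, True), (37.9, False)]
print(f"learned threshold: {train_threshold(data):.2f} C")  # ~37.90, near the expert's 38.0
```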
Therefore, the governance and regulation of AI in healthcare will necessitate agility, with consideration for the specific AI category and its use cases rather than a one-size-fits-all approach. Probabilistic models of the kind we’ve been creating for more than half a century, for example, may require a different framework than, say, agentic AI, which can operate with some autonomy and make decisions. James Zou, PhD, of Stanford University, and Eric Topol, MD, of Scripps Research, recently wrote in a column published in The Lancet that such AI agents hold great promise in medicine and “have the potential to become valuable teammates to human clinicians,” provided they are carefully investigated and regulated.
In 2024, The Joint Commission launched its Responsible Use of Health Data certification program, which provides healthcare organizations with a blueprint for safely and appropriately managing secondary patient data—health data used for purposes beyond clinical care, such as research, registry creation or the training of AI tools.
Informed by Health Evolution Forum’s “The Trust Framework for Accelerating Responsible Use of De-identified Data in Algorithm and Product Development,” the certification affirms that organizations have the necessary protocols and governance processes in place to keep patients informed, safeguard their privacy, prevent data misuse and validate algorithms.
The Joint Commission sees this as the precursor to a future certification program that will guide healthcare organizations on the responsible use of health AI. The organization’s vision is to empower hospitals and health systems to improve patient outcomes by harnessing AI’s potential while also addressing and mitigating safety concerns. More details are on the horizon, but the effort begins with a strong belief that the speed of innovation necessary for transformative solutions to pressing problems should be nurtured with responsible, informed self-governance, not stifled with overregulation.
It’s fashionable in safety and quality circles to say we want to eliminate variation, but a higher aspiration is required. Let’s use AI to harvest variation. Let’s use AI to discover the best practices we didn’t intuit. Let’s recognize the potential for a new level of thinking, augmented by technology, that can help us wrestle with those 3.16-times-10-to-the-3,485th-power challenges of complexity. All of us—as healthcare leaders—bear the responsibility of making healthcare safer, more effective, more efficient, more accessible, more affordable and more compassionate.
Paradoxically, as AI grows more adept at reducing clinician burden and improving diagnosis—from generating visit notes to analyzing radiology images—we have the chance to return joy to work by preserving more time for interaction with the patient, not the computer.
I have seen examples of how AI can relieve a patient’s anxiety by speeding the delivery of reassuring biopsy results and by channeling patients diagnosed with cancer to appropriate care.
Not only does this produce better outcomes for patients with time-sensitive cancers, but it also transforms the role of nurse navigators, who have gone from spending 70% of their time on administrative tasks to devoting 70% of their time to direct patient engagement.
As we look ahead to the many possibilities, let’s embrace the potential for AI to make healthcare more human. And there is nothing artificial about that.
Jonathan B. Perlin, MD, PhD, FACMI, is president and CEO of The Joint Commission Enterprise, Oakbrook Terrace, Ill.