Cover Story

Improving Patient Care and Safety With AI

Artificial intelligence has demonstrated that it can improve patient care and safety by helping clinicians analyze imaging and pathology reports. Many applications, though, are still in the early stages of development and adoption, especially in the automation of administrative tasks.

“For quality care, I believe AI allows us to augment clinical decision-making,” says Sachin Shah, MD, an associate professor of medicine and pediatrics and chief medical information officer at UChicago Medicine in Chicago. He notes that predictive analytics and machine learning can significantly leverage the large amount of data now available. As a result, “we can better identify patterns, predict health issues and customize treatment plans,” Shah says.

Achieving that level of precision and personalization at the patient level will help improve outcomes and streamline many care-delivery processes, according to Shah.

Patient safety is also enhanced by AI’s capacity to analyze vast volumes of data in real time. “This can lead to early detection of clinical deterioration, intervention ahead of potential adverse drug reactions and an overall reduction of medical errors,” Shah says. “Clinicians are alerted sooner that something may be trending in the wrong direction, well before it may become clinically evident.”

AI’s Potential

Richard Greenhill, DHA, CPHQ, FACHE, director of the undergraduate healthcare management program in the School of Health Professions at Texas Tech University Health Sciences Center, says high-quality healthcare depends on reliable and accurate processes in the delivery of patient care “to ensure that safety is prioritized for sustainable outcomes. Process variation is the enemy of quality, so the tools of AI are exciting because they can undertake some of the more routine and repeatable processes that are susceptible to the impact of human factors.” 

The TTUHSC Institute of Telehealth and Digital Innovation was established in September 2023 to create a digital health ecosystem, where technologies such as AI transform care delivery in West Texas. “AI has the potential to unleash new ways of approaching performance improvement and patient safety as we begin to tap into the universe of data that encompasses health—both inside and outside of the delivery system,” says Greenhill.

Systems thinking is also an important component of quality care. “Systems thinking in care delivery helps us focus on interrelationships in processes to solve complex problems in our quality improvement strategy,” Greenhill explains. “This is why we should not ever first blame people when a medical mistake or undesired event occurs. A system produces the results for which it was designed. Thus, a person or a tool, such as AI in our delivery system, can never itself be the focus when an error occurs.”

Healthcare facilities must look deeply at the design of how these resources were planned for and supported in their activities, according to Greenhill. AI allows “us to look across interprofessional processes and aggregate datasets to discover insights that we may not have assumed.” Examples include early detection via analysis of diagnostic studies, such as radiology and clinical lab, and more quickly identifying high-risk patients based on treatment and disease progression. “AI can assist quality professionals, clinicians and leaders to make sense of the complexities in our systems in managing risks associated with handoffs, care coordination and treatment interventions,” Greenhill says.

AI can enhance patient care in everything from ophthalmology to cardiology and pathology/oncology to psychiatry, according to Thomas Fuchs, DrSc, dean of the Department of Artificial Intelligence and Human Health at Mount Sinai’s Icahn School of Medicine in New York City. “AI should help physicians to be faster and more effective, do new things they currently cannot do and reduce burnout,” he says.

Even before the AI department was established in the fall of 2021, Mount Sinai was testing AI systems in the ICU to alert nurses to a patient’s risk of malnutrition, deterioration or falls, while also reducing false alarms and demands on staff. The system draws information from the EHR and from various devices in the ICU.
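
To make the idea concrete, the sketch below shows, in Python, the general shape of a rule-based ICU alerting check of this kind. It is a hypothetical illustration only: the patient fields, thresholds and alert categories are assumptions for demonstration, not a description of Mount Sinai’s actual system.

# Hypothetical illustration: a minimal rule-based ICU alerting sketch.
# The fields, thresholds and alert categories below are assumptions for
# demonstration purposes, not Mount Sinai's actual system.
from dataclasses import dataclass

@dataclass
class Vitals:
    heart_rate: float   # beats per minute, from the bedside monitor
    spo2: float         # oxygen saturation (%), from the pulse oximeter
    systolic_bp: float  # mm Hg

@dataclass
class Patient:
    albumin_g_dl: float       # from the EHR lab feed
    days_of_poor_intake: int  # from nursing documentation
    mobility_score: int       # 0 (bedbound) to 4 (independent)

def risk_flags(patient, recent_vitals):
    """Return alerts, requiring persistence across readings so that a
    single aberrant measurement does not trigger a false alarm."""
    flags = []

    # Malnutrition screen: low albumin plus several days of poor intake.
    if patient.albumin_g_dl < 3.0 and patient.days_of_poor_intake >= 2:
        flags.append("malnutrition risk")

    # Deterioration screen: two consecutive abnormal vital-sign readings.
    abnormal = [v for v in recent_vitals[-2:] if v.spo2 < 90 or v.systolic_bp < 90]
    if len(abnormal) == 2:
        flags.append("possible deterioration")

    # Fall screen: partially mobile patient with marked tachycardia.
    if patient.mobility_score in (1, 2) and recent_vitals and recent_vitals[-1].heart_rate > 120:
        flags.append("fall risk")

    return flags

patient = Patient(albumin_g_dl=2.7, days_of_poor_intake=3, mobility_score=2)
vitals = [Vitals(118, 89, 88), Vitals(125, 88, 86)]
print(risk_flags(patient, vitals))
# -> ['malnutrition risk', 'possible deterioration', 'fall risk']

Requiring two consecutive abnormal readings in the sketch reflects the same design goal the Mount Sinai work describes: suppressing one-off spikes to keep false alarms, and the resulting alert fatigue, down.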

Cancer detection. AI has also benefited pathology at Mount Sinai by detecting and grading cancer earlier than before. The institution has developed an AI foundation model based on 3 billion images from 400,000 digital microscopy slides of cancer biopsies and resections. “The model can detect biomarkers to predict what cancer treatment will work for a specific patient,” Fuchs says.

Diagnostic imaging assisted by AI holds promise as well. A retrospective study published online in European Radiology in March 2023 examined more than 1,200 breast cancer cases in Norway and found that an AI system assigned its highest risk-of-malignancy score, 10 on a scale of 1 to 10, to 93% of the screen-detected cancers and 40% of the interval cancers. Among women with the highest breast density, the system assigned a score of 10 to all screen-detected cancers and nearly 50% of interval cancers. For this group, the AI system achieved a sensitivity of 80.9%, compared with 62.8% for independent double reading by radiologists.

Roughly 40% of the screen-detected cancers had already received an AI score of 10 on the prior mammograms, indicating the potential for earlier detection when AI is used in screen-reading.
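
For readers unfamiliar with the metric, sensitivity is simply the share of actual cancers that a reader, human or AI, flags. The short Python snippet below shows the calculation; the counts are illustrative placeholders, since the study reports percentages rather than these exact numbers.

# Sensitivity is detected cancers divided by all cancers in the group.
# The counts below are illustrative placeholders; the study reports the
# resulting percentages, not these exact numbers.
def sensitivity(detected, missed):
    return detected / (detected + missed)

# For example, 38 detected out of 47 dense-breast cancers gives about 80.9%:
print(f"{sensitivity(38, 9):.1%}")  # 80.9%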

Physicians at the Breast Center at Mount Sinai Medical Center believe the use of AI for detecting new abnormalities in mammograms and MRI scans is promising. They also predict that AI will help radiologists interpret mammograms while reducing the number of false-positive interpretations, thus avoiding unnecessary biopsies. However, further research is needed to confirm the technology’s reliability and the anticipated benefits.

Informatics. Excitement about the potential for AI led the University of Kentucky to convene a multidisciplinary collaborative of physicians, data scientists, biomedical engineers and others, the Artificial Intelligence in Medicine Alliance. After two years of success launching projects in radiology, pharmacy and cancer care, the alliance merged in 2022 with an innovations program to form the Center for Applied AI within biomedical informatics, enabling even broader AI applications across healthcare and the university. Most AI healthcare initiatives at the university hospital stem from a desire to improve quality of care and reporting, such as better implementing the Society of Thoracic Surgeons recommendation that a patient take a beta-blocker within 24 hours of surgery. Using tools already developed or in development in one discipline allows center members to more quickly apply solutions to other projects.
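
A measure like the beta-blocker recommendation lends itself to this kind of automation because it reduces to a time-window check against the EHR. The Python sketch below is a hypothetical illustration of such a check; the function, field names and dates are assumptions for demonstration, not the university’s actual reporting code.

# Hypothetical illustration of the automated check behind such a measure:
# was a beta-blocker administered within 24 hours of surgery? The function,
# field names and dates are assumptions for demonstration only.
from datetime import datetime, timedelta

def beta_blocker_within_24h(surgery_time, beta_blocker_admin_times):
    """True if any recorded beta-blocker dose falls within 24 hours of surgery."""
    window = timedelta(hours=24)
    return any(abs(dose - surgery_time) <= window for dose in beta_blocker_admin_times)

surgery = datetime(2024, 3, 5, 8, 0)            # surgery at 8:00 a.m. March 5
doses = [datetime(2024, 3, 4, 21, 30)]          # dose documented the evening before
print(beta_blocker_within_24h(surgery, doses))  # True -> measure met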

A chatbot is being developed at the college that will connect with patients preparing to undergo cardiac surgery, guiding them on when to take their medications and answering questions they may have about the procedure.

Over the past year, there has been a rapid increase in the use of AI to streamline administrative tasks, including documentation, prior authorizations and the revenue cycle. “We are also able to better communicate with our patients and give our patients more self-efficacy,” Shah of UChicago Medicine says. “Conversational AI is very well suited to replace many of the burdensome nonclinical tasks that do not require clinical judgment.”

UChicago Medicine is scheduled to begin a pilot study this spring in which 250 of its physicians will use generative AI to streamline and take over much of their clinical documentation.

Some EHR providers include AI algorithms in their products, according to Greenhill. “These tools show promise to augment and enhance work for clinicians,” he says. “However, as these tools become more widely used, we need to maintain vigilance about the risk that they pose to patients and processes due to model and data bias.”

AI chatbots—automated software applications—can perform many mundane tasks and reduce the burden on staff, physicians and nurses so they can pursue “the meaningful work they signed up to do, such as physicians spending more time with patients,” Greenhill says. 

“If a human does the same task 10 times every hour, he or she is more likely to make a mistake due to a host of human factors,” Greenhill says. By contrast, AI chatbots can operate nonstop, requiring neither breaks nor rest. “This is a win in the war on process variation,” he says. 

Surgical care. Danielle Walsh, MD, FAAP, FACS, a professor of surgery and vice chair of Surgery for Quality and Process Improvement at the University of Kentucky College of Medicine in Lexington, was part of a panel of surgeons who discussed the impact of AI on surgical practices at the American College of Surgeons Clinical Congress 2023 in October. By allowing AI to take over many of the repetitive and rote administrative tasks that burden physicians, “the physician can perform more cognitive decision-making and focus more on human connections and time spent with patients,” says Walsh. 

AI will also play a major role in mandatory reporting of quality metrics to governmental and other entities, according to Walsh, freeing up even more staff members to perform more substantial work to truly improve quality of care. A study published online in JAMA Network in June 2023 found that just measuring and reporting data on 162 unique quality metrics required an estimated 108,478 person-hours, about $5 million in personnel costs and another $603,000 in vendor fees. “Those funds are better spent on interventions than reporting,” Walsh says.

AI tools also can communicate with a patient at home to ensure drug adherence, collect feedback on how patients are feeling, and use that feedback to become more connected with the patient in between visits or after discharge from the hospital. “And if you add in digital sensors, we can do tremendous monitoring at home,” Walsh says. “During COVID, people at home were at ICU-level care.”

A patient with a blood vessel bypassed in the leg, for example, could wear a sock with a sensor at home to detect if blood flow begins to decrease before the blood vessel blocks off again and before the patient develops a blue, painful foot. “AI combined with sensors will allow us to intervene earlier and prevent more serious complications,” Walsh says. 
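
The sort of logic such a sensor-plus-AI setup might run is straightforward to sketch. The Python example below flags a downward trend in perfusion readings using a simple least-squares slope; the readings, units and threshold are illustrative assumptions, not a description of any specific device.

# Hypothetical sketch of the trend check a home-monitoring sensor could run.
# The readings, units and threshold are illustrative assumptions, not a
# description of any specific device.
def declining_trend(readings, drop_per_day=-0.03):
    """Fit a least-squares slope to recent perfusion readings and flag the
    patient for follow-up if flow is falling faster than the threshold."""
    n = len(readings)
    if n < 3:
        return False  # not enough data to judge a trend
    mean_x = (n - 1) / 2
    mean_y = sum(readings) / n
    numerator = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(readings))
    denominator = sum((x - mean_x) ** 2 for x in range(n))
    return numerator / denominator < drop_per_day

# One week of daily perfusion-index values from the sensor (arbitrary units):
week = [0.92, 0.90, 0.87, 0.82, 0.78, 0.71, 0.66]
if declining_trend(week):
    print("Flag for the care team: blood flow is trending downward.")

Looking at a slope over several days, rather than any single reading, mirrors the goal Walsh describes: catching a decline before it becomes an obvious, symptomatic complication.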

Looking Forward

As for AI cost, some healthcare facilities already have 5G network capability, along with modern wiring and systems that are compatible with AI, while others will need to invest in infrastructure improvements. “There will also be tradeoffs,” Walsh says. “Because AI requires a tremendous amount of computing power, you are not going to be able to run every AI algorithm simultaneously. And as the area grows, you will need to prioritize AI systems that meet the needs of a specific institution and the specific patient population being served.”

Which areas of AI to pursue differs by organization and culture, according to Shah, and is determined by staffing and the ability of teams and leadership to adapt. “There is not necessarily a lot of replacement of human expertise coming, but certainly a lot of reallocating human resources to different work that is ideally more top-of-license work, by automating some of the repetitive administrative tasks and algorithmic work,” he says.

However, the quality of the output from AI is contingent on the quality of data entered, “so you need to invest a lot into your data infrastructure,” Shah says. “The quality of your data needs to be high.”

There are also ethical considerations. “There has to be clear transparency around the decision-making in these models,” Shah says. “The concept of algorithmic bias is really important to understand and to ensure we are minimizing. Humans need to be at the center to carefully scrutinize any potential bias by closely monitoring the inputs, the algorithms and outputs to ensure they are not leading to unintended consequences.”  

Bob Kronemyer is a freelance writer based in Elkhart, Ind.

 

Acceptance and Implementation Strategies

Danielle Walsh, MD, FAAP, FACS, a professor of surgery and vice chair of Surgery for Quality and Process Improvement at the University of Kentucky College of Medicine in Lexington, believes there are two big myths about using AI in healthcare: that physicians don’t want AI and that patients don’t want AI.
