An artificial intelligence tool can provide meaningful input for optimizing clinical decision support (CDS) alerts, according to research by biomedical informatics and computer science specialist Siru Liu, Ph.D., and colleagues.
CDS alerts provide important data to physicians about their patients, including details such as drug-drug interactions and best practices based on individual characteristics.
“CDS is a very important part of the EHR,” said Liu, an assistant professor of biomedical informatics and computer science at Vanderbilt University Medical Center.
Unfortunately, acceptance rates of clinical decision support alerts tend to be low, she added.
“Only 10 percent of CDS alerts are accepted by users, so most of them are overridden or ignored,” Liu said.
A high volume of CDS alerts streaming into a clinician’s workflow can lead to alert fatigue, Liu explained. Alert fatigue occurs when so many unhelpful alerts arrive that a clinician begins to ignore all of them, potentially endangering patient safety.
A CDS review committee can aid in evaluating alerts, but this approach is often time-intensive and impractical to deploy, Liu said, so she and her colleagues explored the potential for automated tools to assist.
Evaluating AI-Generated Suggestions
They tested the AI-based chatbot tool ChatGPT for its potential to review CDS alert logic and to improve alerts. The work was conducted while Liu was a postdoctoral researcher working with Adam Wright, Ph.D., a professor in the Department of Biomedical Informatics at Vanderbilt. The team reported findings from this research in the Journal of the American Medical Informatics Association.
The researchers mixed AI-generated suggestions with human-generated suggestions developed by a CDS review committee and provided them to five clinicians for evaluation.
The clinicians were not told whether the suggestions were developed by humans or by AI.
After considering a total of 36 AI-generated and 29 human-generated suggestions, the clinicians rated them on usefulness, acceptance, relevance, understanding, workflow, bias, inversion and redundancy.
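To illustrate the kind of blinded comparison the team describes, the minimal Python sketch below shuffles suggestions so reviewers cannot infer their source, then compares ratings by source on each criterion. The data structures, function names and the choice of a Mann–Whitney U test are assumptions for illustration only, not the authors’ actual analysis code.

```python
# Illustrative sketch of a blinded rating comparison (not the study's code).
# Assumes each reviewer rates every suggestion on a numeric scale per criterion.
import random
from scipy.stats import mannwhitneyu

CRITERIA = ["usefulness", "acceptance", "relevance", "understanding",
            "workflow", "bias", "inversion", "redundancy"]

def blind(suggestions):
    """Shuffle AI- and human-generated suggestions together so that
    reviewers cannot infer the source from ordering."""
    pool = list(suggestions)
    random.shuffle(pool)
    return pool

def compare_sources(ratings):
    """ratings: list of dicts such as
    {"source": "ai", "usefulness": 4, "acceptance": 3, ...}.
    Returns a p-value per criterion comparing AI vs. human suggestions."""
    results = {}
    for criterion in CRITERIA:
        ai = [r[criterion] for r in ratings if r["source"] == "ai"]
        human = [r[criterion] for r in ratings if r["source"] == "human"]
        _, p_value = mannwhitneyu(ai, human, alternative="two-sided")
        results[criterion] = p_value
    return results
```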
Features of AI Suggestions
A key finding of the study was that nine of the top 20 suggestions were AI-generated. Furthermore, Liu said, five of the reviewers’ top 10 suggestions were generated by AI. For many of the rating factors, the differences between AI- and human-generated suggestions were not statistically significant.
An unexpected finding was that the AI tool generated a name for a medication that does not exist, a phenomenon sometimes observed in chatbot tools known as “hallucination.” In this instance, the word “etanerfigut” was included as a treatment suggestion, but no such agent exists. Liu noted that the tool’s capacity to invent a drug name indicates more research is needed, especially before considering its use in the clinic.
Liu pointed out that one advantage of an AI-based tool is that it can review alerts from a more comprehensive perspective. Humans, in contrast, may be limited in their understanding of alerts outside their fields of specialty.
“They also may understand the CDS alert’s usage in their own workflow, but they may ignore the usage in other workflows or other types of professional roles,” Liu said.
Optimizing AI’s Role in Clinical Decision Support
Liu explained that ChatGPT is a general-purpose foundation model not specifically trained on medical journals, textbooks or clinical guidelines.
She said a next step in the research is to evaluate whether fine-tuning the model by using clinical guidelines or incorporating the UpToDate database would improve its performance.
Additionally, Liu said she and her colleagues are developing a rating system that uses human input to help improve the model. Through this type of reinforcement learning, with a human-AI feedback loop, they expect the model to generate increasingly useful suggestions.
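As a rough illustration of such a human-AI feedback loop, the sketch below stores clinician ratings alongside each generated suggestion and keeps the highest-rated examples for reuse as preference or fine-tuning data. The class names, rating scale and threshold are assumptions for illustration, not the team’s implementation.

```python
# Illustrative human-in-the-loop feedback collection (not the team's code).
# Clinician ratings are stored with each generated suggestion so that
# highly rated examples can later feed a fine-tuning or reward step.
from dataclasses import dataclass, field

@dataclass
class Feedback:
    suggestion: str
    rating: int          # assumed scale: 1 (not useful) to 5 (very useful)
    reviewer: str

@dataclass
class FeedbackStore:
    records: list = field(default_factory=list)

    def add(self, suggestion: str, rating: int, reviewer: str) -> None:
        self.records.append(Feedback(suggestion, rating, reviewer))

    def training_examples(self, min_rating: int = 4):
        """Return suggestions rated highly enough to reuse as
        preference data in a later training round."""
        return [r.suggestion for r in self.records if r.rating >= min_rating]

# Example usage with made-up suggestions:
store = FeedbackStore()
store.add("Suppress this alert for patients already on the drug.", 5, "clinician_a")
store.add("Reword the alert to state the specific lab threshold.", 4, "clinician_b")
print(store.training_examples())
```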