Innovative framework developed to ensure the accuracy and reliability of AI-generated medical summaries.
San Jose, CA, and Amherst, MA – August 6, 2024 – Mendel, a leader in Clinical AI, and the University of Massachusetts Amherst (UMass Amherst) have jointly published pioneering research addressing the critical issue of faithfulness hallucinations in AI-generated medical summaries. This collaborative effort marks a significant advancement in ensuring the safety and reliability of AI applications in healthcare settings.
Research Overview
In recent years, large language models (LLMs) such as GPT-4o and Llama-3 have shown remarkable capabilities in generating medical summaries. However, the risk of hallucinations—where AI outputs include false or misleading information—remains a significant concern. This study aimed to systematically detect and categorize these hallucinations to improve the trustworthiness of AI in clinical contexts.
The research team developed a robust hallucination detection framework, categorizing hallucinations into five subtypes of medical event inconsistency, incorrect reasoning, and chronological inconsistency. A pilot study of 100 summaries from the GPT-4o and Llama-3 models revealed that GPT-4o produced longer summaries (>500 words) and often made bold, two-step reasoning statements, leading to hallucinations. Llama-3 hallucinated less by avoiding extensive inferences, but its summaries were of lower quality. For each model, the study also reports how many of its 50 summaries contain incorrect information according to the source medical records.
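To make the annotation scheme concrete, the following is a minimal, illustrative sketch in Python of how labeled summaries could be tallied per model. The names here (HallucinationCategory, AnnotatedSummary, summaries_with_errors_per_model) are our own shorthand and not the study's actual data structures, and the five subtypes of medical event inconsistency are omitted.

```python
from dataclasses import dataclass, field
from enum import Enum


class HallucinationCategory(Enum):
    """Top-level categories named in the study; the framework further splits
    medical event inconsistency into five subtypes (not listed here)."""
    MEDICAL_EVENT_INCONSISTENCY = "medical_event_inconsistency"
    INCORRECT_REASONING = "incorrect_reasoning"
    CHRONOLOGICAL_INCONSISTENCY = "chronological_inconsistency"


@dataclass
class AnnotatedSummary:
    model: str          # e.g. "GPT-4o" or "Llama-3"
    summary_id: str
    labels: list[HallucinationCategory] = field(default_factory=list)


def summaries_with_errors_per_model(annotations: list[AnnotatedSummary]) -> dict[str, int]:
    """Count, per model, how many summaries carry at least one hallucination label."""
    counts: dict[str, int] = {}
    for a in annotations:
        if a.labels:
            counts[a.model] = counts.get(a.model, 0) + 1
    return counts
```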
“Our findings highlight the critical risks posed by hallucinations in AI-generated medical summaries,” said Andrew McCallum, Distinguished Professor of Computer Science, University of Massachusetts Amherst. “Ensuring the accuracy of these models is paramount to preventing potential misdiagnoses and inappropriate treatments in healthcare.”
The study also explored automated detection methods to mitigate the high costs and time associated with human annotations. The Hypercube system, leveraging medical knowledge bases, symbolic reasoning and NLP, played a crucial role in detecting hallucinations. It provided a comprehensive representation of patient documents, aiding in the initial detection step before human expert review.
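This two-stage workflow, automated pre-screening followed by expert confirmation, can be sketched as follows. The interface is hypothetical and for illustration only; Finding, AutomatedDetector, and two_stage_review are assumed names and do not reflect Hypercube's actual API.

```python
from dataclasses import dataclass
from typing import Callable, Protocol


@dataclass
class Finding:
    sentence: str   # summary sentence flagged by the automated pass
    reason: str     # e.g. "medication not supported by the source records"


class AutomatedDetector(Protocol):
    """Abstract stand-in for a Hypercube-style detector; the real system's API is not shown here."""
    def detect(self, source_records: str, summary: str) -> list[Finding]: ...


def two_stage_review(detector: AutomatedDetector,
                     source_records: str,
                     summary: str,
                     expert_confirms: Callable[[Finding], bool]) -> list[Finding]:
    """Automated detection proposes candidate inconsistencies; a human expert
    then confirms or rejects each one, as in the review process described above."""
    candidates = detector.detect(source_records, summary)
    return [f for f in candidates if expert_confirms(f)]
```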
“We are committed to continually enhancing Hypercube’s capabilities. The future of healthcare AI depends on reliable, accurate tools, and Hypercube’s evolving features, including real-time data processing and adaptive learning algorithms, will keep it at the forefront of clinical innovation,” said Dr. Wael Salloum, Chief Scientific Officer of Mendel AI.
Future Prospects
As AI continues to integrate into healthcare, addressing hallucinations in LLM outputs will be vital. Future research will focus on refining detection frameworks and advancing automated systems such as Hypercube to ensure the highest levels of accuracy and reliability in AI-generated medical content.
Accepted Paper
Mendel's work on Hypercube for detecting hallucinations has been recognized by the academic community. The research paper, “Faithfulness Hallucination Detection in Healthcare AI” by Prathiksha Rumale V*, Simran Tiwari*, Tejas G Naik*, Sahil Gupta*, Dung N Thai*, Wenlong Zhao*, Sunjae Kwon, Victor Ardulov, Karim Tarabishy, Andrew McCallum, and Wael Salloum, has been accepted for oral presentation at the KDD AI conference in August 2024. It details the methodologies and technologies underpinning Hypercube’s success.
For more information about the Hypercube platform, try the Hypercube demo.
About Mendel
Mendel AI supercharges clinical data workflows by coupling large language models with a proprietary clinical hypergraph, delivering scalable clinical reasoning without hallucinations and ensuring 100% explainability. Headquartered in San Jose, California, Mendel is backed by blue-chip investors, including Oak HC/FT and DCM. For more information, visit Mendel or contact marketing@mendel.ai.
About UMass Amherst
UMass Amherst, the flagship campus of the University of Massachusetts system, is a nationally ranked public research university known for its excellence in teaching, research, and community engagement. The university fosters innovation and collaboration across a wide range of disciplines. For more information, visit UMass Amherst.
Press Contact:
Jessica McNellis
Gale Strategies
Jessica@GaleStrategies.com