Press Release

AI Misses Critical Nuances in Complex Clinical Decisions, Medint Study Published in Nature Scientific Reports Finds

TEL AVIV, Israel, Nov. 12, 2025 /PRNewswire/ — A new peer-reviewed study published in Nature Scientific Reports reveals that large language models (LLMs) often overlook key clinical nuances when handling complex medical queries, raising questions about their reliability in health care decision-making.

The study, titled “Evaluating the performance of large language models versus human researchers on real-world complex medical queries,” was conducted by Medint’s research team. It compared leading AI models with trained human researchers as they analyzed real-world clinical dilemmas, from routine cases to highly complex, patient-centered scenarios.

When AI sounds right but gets it wrong

While AI tools can provide accurate advice in simple cases, such as managing a sore throat, they falter when context and clinical complexity increase. In one real case examined, a 32-year-old pregnant woman with a rare blood-clotting disorder faced anesthesia risks during a scheduled cesarean section. The question of whether to administer medication before proceeding with her preferred anesthesia required synthesizing data from multiple medical domains, a task AI struggled to perform effectively.

The study found that LLMs often produced references that appeared authoritative but were irrelevant to the actual clinical question. In contrast, human researchers consistently produced more relevant, context-aware reports, even when citing lower-ranked journals.

The “dangerous disconnect” between confidence and quality

Researchers identified a critical disconnect between perceived and actual quality. Physician satisfaction with AI outputs did not correlate with the factual accuracy or clinical appropriateness of those outputs. In some cases, AI-generated citations were fabricated or misaligned with the question.

“AI systems can sound confident and convincing, but that doesn’t always mean they’re correct,” said Sigal Ben-Ari, PhD, Vice President of Product at Medint. “The issue isn’t physician ability; it’s applying AI effectively in the complex realities of patient care. Our goal is to keep clinicians central to every decision while making validation natural, not burdensome.”

Building AI that strengthens, not replaces, human judgment

The findings reinforce Medint’s philosophy that AI should enhance, not replace, clinical reasoning. The company’s platform integrates AI capabilities with transparent, human-centered validation tools that help clinicians verify sources and patient-specific factors in real time. This ensures that every recommendation supports, rather than shortcuts, expert judgment.

“AI can accelerate information gathering and support decision-making, but medicine requires patient context, experience, empathy, and critical thinking, the kind that comes from direct patient care, not just data analysis,” Ben-Ari said. “Our goal is to provide physicians with tools that keep them in the driver’s seat, ensuring the clinician remains an active, critical thinker throughout the decision-making process.”

About Medint

Medint helps clinicians manage complex, multidisciplinary cases by embedding transparent, human-centered AI into clinical workflows. Its solution ensures that physicians remain fully informed and engaged throughout treatment planning, reinforcing clinical judgment and contextual relevance.

Read the full study: Nature Scientific Reports, October 2025

Media contact: Itamar Ben Shitrit, Hadas Sasson-Zitomer
Email: [email protected]
Sigal Ben-Ari
Email: [email protected]
Phone: 646-460-1888
Website: www.medint.ai

View original content to download multimedia: https://www.prnewswire.com/news-releases/ai-misses-critical-nuances-in-complex-clinical-decisions-medint-study-published-in-nature-scientific-reports-finds-302611531.html

SOURCE Medint
