AI Outperforms ER Doctors in Diagnostic Cases, Study Points to Collaborative Care

While large language models can match or exceed emergency physicians in specific contexts, AI can't replace doctors.

Macy is a writer on the AI Team. She covers how AI is changing daily life and how to make the most of it. This includes writing about consumer AI products and their real-world impact, from breakthrough tools reshaping daily life to the intimate ways people interact with AI technology day-to-day. Macy is a North Carolina native who graduated from UNC-Chapel Hill with a BA in English and a second BA in Journalism. You can reach her at mmeyer@cnet.com.

Expertise: Macy covers consumer AI products and their real-world impact. She has been writing for CNET for nearly two years; before joining CNET, she received a North Carolina College Media Association award in sports writing.

Have you ever wondered how artificial intelligence compares with a human physician in an emergency diagnostic setting? New research published Thursday might have you reconsidering that question.

The study, published in the journal Science, found that a state-of-the-art large language model outperformed human doctors on a range of common clinical tasks. Using real emergency department data and hundreds of physician comparisons, the model matched or even exceeded human clinician performance in diagnostic choices, emergency triage and determining next steps in management. 

The authors of the study said those results do not mean AI models are ready to replace human doctors. Instead, the results indicate that industry professionals need faster, more rigorous standards for evaluation and rules for using AI in medicine. 

The researchers tested OpenAI's o1 series large language model, released in 2024, across six experiments that blended standardized clinical cases with a real-world sample of randomly selected emergency room patients at a medical center in Massachusetts. 

The model's advantage was most evident in early-stage triage, when decisions must be made with little information. Both the human clinicians and the AI model improved as more data became available to them, but the study found that the LLM handled uncertainty far better, using fragmented or unstructured health data and notes more effectively.

These findings build on decades of using difficult diagnostic cases to evaluate medical-computing systems. Earlier LLMs already outperformed older algorithmic approaches, but what sets this study apart is its scale and its head-to-head comparison between human doctors and AI in real clinical scenarios.

The authors stressed that we should remain skeptical of these results. Real clinical work in hospitals and emergency rooms often relies on visual and auditory cues, rather than text-based reasoning, which AI cannot yet interpret fully and accurately. "Future work is needed to assess how humans and machines may effectively collaborate in the use of nontext signals," the study notes.

When considering AI-assisted medical care, it's also critical to assess whether it will be safe, equitable and cost-effective, aspects that were not tested in this study. 

Read also: If AI Health Advice From Apple Is Coming, I Want to Be Ready

"Long story short, the model outperformed our very large physician baseline. You'll see this in detail, but this included board-certified, actively practicing physicians and real messy cases," Arjun Manrai, an assistant professor of biomedical informatics at Harvard Medical School, said during a virtual press briefing.

"I don't think our findings mean that AI replaces doctors, despite what some companies are likely to say, and how they're likely to use these results," Manrai said. "I think it does mean that we're witnessing a really profound change in technology that will reshape medicine, and that we need to evaluate this technology now, and rigorously conduct in prospective clinical trials." 

Regulators, hospitals and healthcare providers should work together to test these tools thoroughly before they're deployed to ensure safety and equity for all patients. 

In a commentary also published Thursday in Science, Ashley M. Hopkins and Eric Cornelisse, researchers at Flinders University in Australia, wrote that the study is a step toward better evaluation of AI systems in healthcare, but that medicine is a complex field that requires rigorous oversight to ensure patients receive the best possible care.

"We do not allow doctors to practice without supervision and evaluation, and AI should be held to comparable standards," Cornelisse said in a statement.

Read also: AI Chatbots Miss More Than Half of Medical Diagnoses, Study Finds
