A major new study found AI outperformed doctors in ER diagnosis — but there’s a catch


When I think of heroic doctors, I think of the physician in the hospital who’s presented with a patient suffering bizarre or vague symptoms and pulls out the right diagnosis just in time. It’s the basis of almost every medical procedural TV show, from House, M.D. to The Pitt. It’s the mystique that has made doctors among the most revered professionals in society.

But what if a machine could make that call just as well, or even better? And what should we do about that here in the real world?

That question is becoming more urgent. According to a major new study published in Science, advanced artificial intelligence programs often outperform human doctors when diagnosing people seeking emergency medical care.

AI has already, for better or worse, become a part of modern medicine. Different programs are being used to do everything from collating physician notes to identifying promising new candidates for drug development. The authors of the Science study portrayed their findings as strong evidence that AI could be valuable in the emergency room as well — as long as it is fully vetted in clinical trials for specific uses.

Lest the hype outpace the science, the authors made a point to say that they feared their research would be cited to justify replacing human doctors with software programs: “I get a little bit queasy about how some of these results might be used,” said co-author Dr. Adam Rodman, a general internist and medical educator at Beth Israel Deaconess Medical Center. They warned against taking such a simplistic view of their findings.

“No one should look at this and say we do not need doctors,” Rodman said in a call with reporters.

At the same time, the researchers did argue that AI had reached the point where it could be a genuine asset for doctors in certain situations — especially in the ER, where physicians are frequently working with imperfect information. They called for clinical trials that would properly assess the safety and efficacy of using AI for those tasks: serving as a second pair of virtual eyes that could act as a gut check for human physicians, or helping them when they encounter a case outside their experience or expertise.

AI can clearly be a force for good in health care, they said — so long as we recognize its limitations and use it in conjunction with, rather than as a replacement for, our human doctors.

“We’re witnessing a really profound change in technology that will reshape medicine,” Arjun Manrai, who studies machine learning and statistical modeling for medical decision-making at Harvard Medical School, said.

AI outperformed human doctors in making emergency diagnoses

The researchers evaluated OpenAI’s o1 reasoning model, a more specialized AI program than, say, ChatGPT, one that works more deliberately and with an emphasis on internal logic. They ran the program through several experiments, evaluating its accuracy on simulated and historical cases that have been used in medical training to test physicians’ critical thinking, as well as on real-world emergency cases from Beth Israel Deaconess. The study then compared how the o1 model performed against human doctors, ChatGPT, and human doctors using ChatGPT.

Assessing the training cases allowed the researchers to compare o1’s performance to a very large sample of existing data from human doctors who took the same tests. And across those different scenarios, the AI consistently outperformed those physicians and offered the correct diagnosis or a helpful plan for patient management in the vast majority of the cases studied.


But its accuracy when evaluating raw electronic health record data from real-world ER cases was especially impressive. This is closest to the messy reality emergency doctors often work in: they are dealing with a person in serious need of speedy treatment, and they have incomplete, unfiltered information, if they have much information at all. In reviewing those cases, the o1 model identified the exact diagnosis or a very close one 67 percent of the time at the patient’s initial presentation at triage (versus 50 and 55 percent, respectively, for the two expert doctors the AI was measured against) and 81 percent of the time once the patient was ready to be admitted to the hospital (versus 70 and 79 percent for the human doctors).

“We can definitively say…reasoning models can meet that criteria for making diagnostic reasoning at the highest levels of human performance,” Rodman told reporters.

Two experts I consulted who were unaffiliated with the study — Dr. Sanjay Basu at UC-San Francisco and Nigam Shah at Stanford — praised its rigor, but they also noted its limitations. The preexisting training cases have been curated specifically for evaluating physicians’ accuracy, so they may overstate how well the model would perform in the real world. And in one case-study experiment involving a set of “cannot-miss” diagnoses, conditions where overlooking the problem puts the patient at risk of serious harm or death, the AI model performed no better than ChatGPT or human doctors.

Even the ER findings, which come closest to assessing the o1 model’s performance under true-to-life conditions, were retrospective reviews of existing cases; the model was not actually asked to diagnose or manage patients in real time.

That is why, as even the Science study’s authors argued, the next step should not be immediately putting OpenAI’s model in charge of emergency triage at hospitals across the country. Instead, they called for clinical trials that could assess the model’s performance — in both accuracy and safety — under real-world conditions.

“Medicine is high stakes… and we have ways to mitigate these risks. They’re called clinical trials,” Rodman told reporters. “What these results support is a robust and ambitious research agenda.”

AI could be valuable for doctors — but patients should be cautious

AI hype, especially in medicine, is high right now. While listening to the authors discuss their findings, what struck me was their own awareness that their research could be used as a justification for cutting the human medical workforce — and the risks that could end up creating for patients.

“There’s a lot of these so-called AI doctor companies out there that are trying to either cut doctors out of the loop or have minimal clinical supervision,” Rodman said. “As one of the senior authors on the study, I do not think that these results support that.”

The authors emphasized that based on their results, they would envision AI models in the ER being overseen by an actual doctor. Making a diagnosis is only part of treating a patient; it also includes figuring out a treatment plan and monitoring for developments — as well as the human element. “Humans want humans to guide them through life-or-death decisions,” Manrai said.

Basu and Shah said they supported narrowly defined uses for AI in the ER based on the collective research so far. It could offer second opinions when a patient is being handed off to another clinician, or weigh in on specific high-risk situations (such as a patient presenting with sepsis or stroke symptoms) where time is of the essence. It could also reduce paperwork for doctors, an application featured in the most recent season of The Pitt. Shah pointed to prior authorization, documentation, and scheduling as obvious areas where AI could help.

At the same time, AI models should absolutely not be deployed to autonomously diagnose and manage treatment, Basu said.

Individuals should also be cautious about using AI to make medical decisions. Other studies of AI diagnosis have found worrying results, especially for consumer-facing models like ChatGPT. A paper published in Nature Medicine earlier this year evaluated how ChatGPT did when presented with scenarios that ranged from non-urgent to emergent and found the model underestimated the seriousness of the patient’s condition in 52 percent of cases; patients who were on the verge of diabetic shock or respiratory failure were instead referred to 24- or 48-hour monitoring. The model repeatedly failed to identify clear signs of suicidal ideation.

As Shah put it to me, the Science paper represents a “ceiling” for using AI for diagnosis, while the Nature Medicine paper represents a floor. The two studies show how precise we need to be when considering AI’s use in clinical decisions: while the more sophisticated o1 model did well reviewing curated cases in the Science study, the consumer-facing ChatGPT — developed by the very same company, OpenAI — underperformed in the other paper.

“Both can be true,” Basu told me. “Both are.”


In the call with reporters, Manrai described both “green” (low-risk) scenarios where an AI might genuinely be helpful even to a lay person and “red” (high-risk) cases where you should always involve a medical professional. A green use would be, for example, asking a model about a diet that could help manage your hypertension or stretches that could alleviate a recent back injury. Think of it more as lifestyle advice than hard clinical guidance.

A red use, on the other hand, would involve serious medical situations with life-or-death consequences: chest pain, to give one of many possible examples, is cause to go straight to a doctor or the hospital, not to consult ChatGPT.

We are getting closer to unlocking the awesome potential of these powerful programs to improve medical care, to make what was once science fiction a reality. But even these researchers at the cutting edge agree that we need to move cautiously — and keep the real experts, the doctors, in the loop.
