An artist in Germany who liked to draw outdoors showed up at the hospital with a bug bite and a host of symptoms that doctors couldn’t quite connect. After a month and several unsuccessful treatments, the patient started plugging his medical history into ChatGPT, which offered a diagnosis: tularemia, also known as rabbit fever. The chatbot was correct, and the case was later written up in a peer-reviewed medical study.
Around the same time, another study described a man who appeared at a hospital in the United States with signs of psychosis, paranoid that his neighbor had been poisoning him. It turns out, the patient had asked ChatGPT for alternatives to sodium chloride, or table salt. The chatbot suggested sodium bromide, which is used to clean pools. He’d been eating the toxic substance for three months and, once he’d stopped, required three weeks in a psychiatric unit to stabilize.
You’re probably familiar with consulting Google for a mystery ailment. You search the internet for your symptoms, sometimes find helpful advice, and sometimes get sucked into a vortex of anxiety and dread, convinced that you’ve got a rare, undiagnosed form of cancer. Now, thanks to the wonder that is generative AI, you can carry out this process in more detail. Meet Dr. ChatGPT.
ChatGPT is not a doctor in the same way that Google is not a doctor. Searching for medical information on either platform can lead you to the wrong conclusion just as easily as it can point you toward the correct diagnosis. Unlike Google search, however, which simply points users to information, ChatGPT and other large language models (LLMs) invite people to have a conversation about that information. They’re designed to be approachable, engaging, and always available. That makes AI chatbots an appealing stand-in for a human physician, especially given the ongoing doctor shortage and the broader barriers to accessing health care in the United States.
As the rabbit fever anecdote shows, these tools can ingest all kinds of data and, having been trained on reams of medical journals, sometimes arrive at expert-level conclusions that doctors missed. Or, as the bromide case shows, they might give you really terrible medical advice.
There’s a difference between asking a chatbot for medical advice and talking to it about your health in general. Done right, talking to ChatGPT could lead to better conversations with your doctor and better care. Just don’t let the AI talk you into eating pool cleaner.
The right and wrong ways to talk to Dr. ChatGPT
Plenty of people are talking to ChatGPT about their health. About one in six adults in the United States say they use AI chatbots for medical advice on a monthly basis, according to a 2024 KFF poll. A majority of them aren’t confident in the accuracy of the information the bots provide — and frankly, that level of skepticism is appropriate given the stubborn tendency of LLMs to hallucinate and the potential for bad health information to cause harm. The real challenge for the average user is knowing how to distinguish fact from fabrication.
“Honestly, I think people need to be very careful about using it for any medical purpose, especially if they don’t have the expertise around knowing what’s true and what’s not,” said Dr. Roxana Daneshjou, a professor and AI researcher at the Stanford School of Medicine. “When it’s correct, it does a pretty good job, but when it’s incorrect, it can be pretty catastrophic.”
Chatbots also have a tendency to be sycophantic, or eager to please, which means they might steer you in the wrong direction if they think that’s what you want.
The situation is precarious enough, Daneshjou added, that she encourages patients to go instead to Dr. Google, which serves up trusted sources. After the rise of “cyberchondria,” or health anxiety enabled by the internet, the search giant spent a decade collaborating with experts from the Mayo Clinic and Harvard Medical School to present verified information about conditions and symptoms.
This condition is much older than Google, actually. People have been searching for answers to their health questions since the Usenet days of the 1980s, and by the mid-2000s, eight in 10 people were using the internet to search for health information. Now, regardless of their reliability, chatbots are poised to receive more and more of these queries. Google even puts its problematic AI-generated results for medical questions above the vetted results from its symptom checker.
But if you skip the symptom-checking side of things, tools like ChatGPT can be really helpful when you just want to learn more about what’s going on with your health based on what your doctor has already told you, or to make sense of their jargony notes. Chatbots are designed to be conversational, and they’re good at it. If you’ve got a list of things to ask your doctor about, ChatGPT could help you craft questions. If you’ve gotten some test results and need to make a decision with your doctor about the best next steps, you can rehearse that conversation with a chatbot without actually asking the AI for any advice.
In fact, when it comes to just talking, there’s some evidence that ChatGPT is better at it than doctors are. A 2023 study took health questions posted to a Reddit forum and compared real physicians’ answers with responses generated by a chatbot prompted with the same questions. Health care professionals then evaluated all of the responses and found the chatbot-generated ones were both higher quality and more empathetic. That’s not the same thing as a doctor in the room with a patient, discussing their health, but it’s worth pointing out that, on average, patients get just 18 minutes with their primary care doctor on any given visit. If you go just once a year, that’s not very much time to talk to a doctor.
You should be aware that, unlike your human doctor, ChatGPT is not HIPAA-compliant, and chatbots generally have very few privacy protections. That means you should expect any health information you upload to be stored in the AI’s memory and used to train large language models in the future. It’s also theoretically possible that your data could end up in an output for someone else’s prompt. There are more private ways to use chatbots, but even then, the hallucination problem and the potential for catastrophe remain.
The future of bot-assisted health care
Even if you’re not using AI to figure out medical mysteries, there’s a chance your doctor is. According to a 2025 Elsevier report, about half of clinicians said they’d used an AI tool for work, slightly more said these tools save them time, and one in five said they’d used AI for a second opinion on a complex case. That doesn’t necessarily mean your doctor is asking ChatGPT to figure out what your symptoms mean.
Doctors have been using AI-powered tools to help with everything from diagnosing patients to taking notes since well before ChatGPT even existed. These include clinical decision support systems built specifically for doctors, which currently outperform off-the-shelf chatbots — although the chatbots can actually augment the existing tools. A 2023 study found that doctors working with ChatGPT performed only slightly better at diagnosing test cases than those working independently. Interestingly, ChatGPT alone performed the best.
That study made headlines, probably for the suggestion that AI chatbots are better than doctors at diagnosis. One of its co-authors, Dr. Adam Rodman, suggests that this wouldn’t necessarily be the case if doctors were more open to listening to ChatGPT instead of assuming the chatbot was wrong whenever it disagreed with their conclusions. Sure, the AI can hallucinate, but it can also spot connections that humans miss. Again, look at the rabbit fever case.
“The average doctor has a sense of when something is hallucinating or going off the rails,” said Rodman, an internist at Beth Israel Deaconess Medical Center and instructor at Harvard Medical School. “I don’t know that the average patient necessarily does.”
Nevertheless, you shouldn’t expect to see Dr. ChatGPT making an appearance at your local clinic anytime soon. You’re more likely to see AI working as a scribe, saving your doctor time on note-taking and possibly, one day, analyzing that data to support your care. Your doctor might also use AI to draft messages to patients more quickly. As these tools get better, it’s possible that more clinicians will use AI for diagnosis and second opinions. That still doesn’t mean you should rush to ChatGPT with your urgent medical concerns. If you do, tell your doctor how it went.
“Patients need to talk to their doctors about their LLM use, and honestly, doctors should talk to their patients about their LLM use,” said Rodman. “If we just both step kind of out of the shadow world and talk to each other, we’ll have more productive conversations.”