
Asking ChatGPT for Health Advice: What You Need to Know
More and more people are turning to AI chatbots like ChatGPT for medical advice. These tools seem smart and helpful, and they are available anytime you have a health question. But a recent case in which a man was hospitalized for three weeks after following ChatGPT's dietary advice [1] shows why this trend is so concerning.
Why People Love Getting Health Advice from AI
It's easy to see why AI health advice has become so popular. ChatGPT and similar tools are helping doctors write medical documents and communicate with patients better[2,3]. For regular people seeking health information, these AI tools offer some clear benefits.
First, they're always available. Unlike your doctor's office, with its limited hours and appointment waiting times, you can ask ChatGPT about your symptoms at 2 AM on a Sunday. Second, it's completely free. While doctor visits take time to book and can be expensive, asking an AI provides an immediate answer and costs nothing. Many people also find it less embarrassing to discuss sensitive health issues with a computer than with a human doctor. Finally, these AI systems seem to know about everything from heart surgery to skin conditions, making them appear like knowledgeable medical experts.
The Convenience of AI Medical Advice
AI health tools offer a level of convenience that traditional healthcare can't match. They can explain complicated medical terms in plain English, helping you understand what your doctor told you during your last visit. You can ask as many follow-up questions as you want without feeling rushed. You can ask about multiple symptoms at the same time, get information in different languages, and receive answers that seem tailored to your specific situation.
This convenience has led millions of people to use these tools for health information, often without understanding their serious limitations[2].
When AI Advice Nearly Killed Someone
The risks of trusting AI for medical advice became frighteningly clear in a case recently published in a medical journal[1]. A 60-year-old man had been reading about the health problems caused by eating too much salt. He wanted to find a substitute for table salt (sodium chloride) and asked ChatGPT for suggestions.
ChatGPT told him he could replace table salt with sodium bromide. While the AI mentioned that "context matters," it never warned him that sodium bromide is actually toxic to humans. The man bought sodium bromide online and used it as his salt substitute for three months.
The results were devastating. He developed a condition called bromism, which caused severe symptoms including constant fatigue, inability to sleep, poor balance, facial acne, excessive thirst, and most seriously, hallucinations and paranoia. He became convinced his neighbor was trying to poison him and ended up in the emergency room.
Doctors had to treat him with IV fluids, medications to balance his body chemistry, and anti-psychotic drugs to stop the hallucinations. He spent three weeks in the hospital recovering from following ChatGPT's advice. Worryingly, when doctors later asked ChatGPT the same question, they got the same dangerous recommendation.
Why AI Gets Medical Advice Wrong
This case shows several serious problems with using AI for health advice.
AI systems don't think like doctors. A real doctor would have asked why the man wanted a salt substitute, warned him about the dangers of sodium bromide, and probably suggested safer alternatives like reducing salt intake. ChatGPT gave a chemically correct but medically dangerous answer without considering the human context.
These AI systems also lack built-in safety checks. They're trained on massive amounts of text from the internet, which includes both reliable medical information and complete nonsense. The AI can't tell the difference between a legitimate medical study and someone's dangerous home remedy posted on a blog.
Most importantly, AI can't examine you physically, doesn't know your medical history, can't order lab tests, and can't consider all the factors that go into real medical decision-making. Current AI systems perform much worse than real doctors at diagnosing medical conditions and often ignore established medical guidelines[5].
AI systems also suffer from "hallucinations" - they can generate information that sounds completely believable but is totally made up. When this happens with medical advice, the consequences can be life-threatening.
How AI Gets Its Medical Information
Understanding where AI gets its medical knowledge helps explain why it's so unreliable. These systems are trained on enormous amounts of text scraped from websites, online books, research papers, and other internet sources. This creates several major problems.
Much of the medical information online is outdated. For example, medical databases still contain thousands of articles promoting lobotomy, a brain surgery that's now considered harmful and barbaric. AI systems can't distinguish between current, evidence-based medical practice and historical mistakes.
The internet is also full of medical misinformation. Anti-vaccine propaganda, conspiracy theories about diseases, and dangerous home remedies all get mixed into the data that trains AI systems. Even worse, research shows that adding just a tiny amount of false medical information to an AI system's training can make it much more likely to give dangerous advice[6].
Once an AI system is trained, it can't learn new information. Medical knowledge advances constantly - new treatments are discovered, old ones are found to be harmful, and guidelines change. But AI systems remain frozen with whatever information they learned during training, which could be months or years old.
The Ongoing Problems
Despite these serious risks, people continue using AI for medical advice in large numbers. Most users don't understand the limitations of these systems. The companies that make AI tools protect themselves with legal disclaimers saying their products shouldn't be used for medical decisions, but people use them anyway. Most doctors don't know their patients are consulting AI for health advice, so they can't warn them about potential dangers.
There's currently no system in place to track when AI gives dangerous medical advice or to prevent similar cases from happening. The regulations that govern medical devices and software don't adequately cover AI systems that give medical advice to the general public[8].
It also takes time for information to filter from key opinion leaders to experts, then to patients, and finally more widely across the internet. In a recent conversation I had with ChatGPT about recommendations for surgery for node-positive lung cancer, its confident recommendation was surgery and chemotherapy, which is now outdated. When I challenged it on why it did not recommend chemo-immunotherapy before surgery, it simply agreed with me (like an ignorant medical student eager to please). When I pushed further about chemo-immunotherapy before and immunotherapy after surgery, it changed its opinion again. Finally, when I asked why it kept changing its answer, the reply was: "Your follow-up specifically asked about neoadjuvant chemo-immunotherapy, which prompted a reassessment toward the most cutting-edge evidence".
The Bottom Line
The case of the man who was hospitalized after following ChatGPT's dietary advice should serve as a serious warning. While AI tools might seem helpful and knowledgeable, they have fundamental limitations that make them unsuitable for medical decision-making.
These systems lack the clinical training, judgment, and safety awareness that real medical professionals possess. They can't consider your individual medical history, perform physical examinations, or understand the full context of your health situation. Most dangerously, they can confidently provide information that sounds medical and authoritative but is actually harmful or even life-threatening.
The convenience and accessibility of AI health advice will likely continue to attract people seeking medical information. However, the underlying problems that led to the bromism case remain present in all current AI systems. The promise of easy access to medical knowledge comes with demonstrable risks that have already caused serious harm to real patients.
The man who spent three weeks in the hospital after following AI dietary advice learned this lesson the hard way. His experience should remind us all that when it comes to our health, for now there's no substitute for real medical expertise and human judgment.
August 2025
Background research and drafting assisted by an LLM; reviewing, editing, and final approval by Prof Eric Lim.
References
1. Zhang, Y., et al. (2025). A Case of Bromism Influenced by Use of Artificial Intelligence. Annals of Internal Medicine: Clinical Cases, 2024, 1260.
2. Busch, A., et al. (2025). Current applications and challenges in large language models for patient care: a systematic review. Communications Medicine, 8, 15.
3. Liu, S., et al. (2024). The application of large language models in medicine: A scoping review. iScience, 27(9), 110350.
4. Clusmann, J., et al. (2023). The future landscape of large language models in medicine. Communications Medicine, 6, 141.
5. Dash, D., et al. (2024). Evaluation and mitigation of the limitations of large language models in clinical decision-making. Nature Medicine, 30, 1863-1873.
6. Wang, C., et al. (2024). Medical large language models are vulnerable to data-poisoning attacks. Nature Machine Intelligence, 6, 516-526.
7. Thompson, A., et al. (2024). Large language models in medical and healthcare fields: applications, advances, and challenges. Artificial Intelligence Review, 57, 289.
8. Martinez, R., et al. (2024). A future role for health applications of large language models depends on regulators enforcing safety standards. The Lancet Digital Health, 6(9), e601-e610.