When Margie Smith got sick in 2022, she sought help from a parade of specialists. She saw an allergist for an intractable cough; three pulmonologists for the cough and breathlessness; an ear, nose and throat doctor for severe acid reflux; a cardiologist after she almost passed out while exercising. She got the sense that most were siloed in their specialities and couldn’t assemble the full puzzle.

Eventually, Smith, 70, of Swannanoa, North Carolina, turned to the artificial intelligence chatbot Claude. Through lengthy chats, as well as a Facebook group, she concluded that she had long COVID and it was causing dysautonomia – a condition, common in post-viral syndromes, in which the body struggles to regulate functions like pulse, blood pressure, digestion and temperature. Smith now goes to appointments with AI suggestions in hand, and she chooses providers in part based on whether they are receptive to its role in her decision-making.
She said a combination of recommendations from doctors and from Claude had made her symptoms manageable.
“The medical system really failed me,” she said.
“Is it a good thing to be depending on AI for medical advice? I don’t think so. But it’s the option that’s available.”
More people are asking chatbots for health advice: A third of adults use them for that purpose, according to a poll released in March. Reporting by The New York Times suggests that one notable subset is women with complex chronic illnesses, which are often poorly understood. It can take years to receive a diagnosis, much less relief.
That is partly because symptoms span multiple specialities. But also, many of these illnesses – like long COVID and autoimmune diseases – disproportionately affect women, and doctors are more likely to minimise or delay treating women’s symptoms. Hundreds of people responded to a request last fall to discuss how they were using AI for their health.
Since then, the Times has conducted dozens of interviews about patterns that emerged. The women interviewed for this article said they knew chatbots often provided misinformation, and some had encountered serious errors. Most said they would rather rely on doctors, but felt they couldn’t.
“There are a lot of problems” with using chatbots for medical advice, said James Landay, a co-director of Stanford University’s Institute for Human-Centered AI.
“But I think we also have to admit that there’s a reason people are doing this.”

Old Pattern, New Technology

Patients have long self-diagnosed through forums, social media, Google and WebMD. It’s easy to find patients who were dismissed by doctors, did their own research and were proven right – as well as patients who pursued unapproved treatment plans and were catastrophically wrong.
So in some ways, using AI to compensate for health care failures is a new version of an old story, said Dr John J. Whyte, CEO of the American Medical Association. But the nature of the technology makes it both more powerful and more risky. Chatbots often invite people to describe their medical histories in detail, including by uploading test results.
And they can offer responses that feel personalised, comprehensive and authoritative, even when they aren’t. Some startups are testing specialised AI products to help diagnose illnesses. But general-purpose chatbots “have not been thoroughly evaluated” for personalised diagnoses and can err in significant ways, said Dr Danielle Bitterman, the clinical lead for data science and AI at Mass General Brigham.
AI models can draw from both high- and low-quality sources, or hallucinate. Users won’t always get citations unless they ask, and it takes scientific literacy to determine whether those sources are reputable and support the chatbot’s claims. Chatbots can sometimes diagnose tough cases.
Take Patty Costello, a user experience researcher in Idaho. More than a decade ago, Costello woke up feeling off. She would have flares of nausea, diarrhea, heartburn and fatigue for days or weeks at a time, with respites but no long-term improvement.
She saw numerous doctors who ordered a variety of tests, several of which showed signs of inflammation, but none brought a diagnosis. The flares grew more frequent.
“This is ruining my life,” she told ChatGPT last year, describing her symptoms and overall health, and mentioning the inflammation. As one of nine possible diagnoses, the chatbot listed mast cell activation syndrome, in which mast cells – a part of the immune system – flag incorrectly that there is something dangerous in the body, causing allergic reactions with no clear trigger. Costello said that everything she read about the disease seemed to click with her symptoms.
She went to an allergist with the suggestion and received an MCAS diagnosis. With medication, she estimates she’s about 80% better. Costello is not alone in finding a diagnosis through AI, but her experience isn’t the norm.
A study published in February found that, when people without medical training were given detailed scenarios and told to use chatbots to identify a diagnosis and determine next steps, they reached the correct answers less than half the time. A spokesperson for OpenAI, which makes ChatGPT, referred to an earlier statement from Karan Singhal, who leads the company’s health team. (The Times has sued OpenAI, claiming copyright infringement; OpenAI has denied the claims.) Singhal said the February study’s design didn’t match how people used chatbots in the real world. The company also noted that its models had become more advanced over time, while emphasising that they are still “not a substitute for professional medical advice.”
Anthropic, which makes Claude, did not respond to a request for comment.

Scientific Literacy and Skepticism

It is perhaps unsurprising that many of the success stories shared with the Times came from people with medical expertise. Caroline Gamwell, 31, is a pelvic floor physical therapist in Denver.
She has training in anatomy and physiology and regularly sees patients with chronic pain. Her own pain began when she was a teenager. She felt spasms along her spine and through her torso and pelvis, like “everything twisting in on itself,” she said.
When she had sex, it felt like sandpaper. At 17, she was told she had anxiety; in college, fibromyalgia; in graduate school, chronic fatigue syndrome plus psychosomatic symptoms; then back to fibromyalgia. But she had seen fibromyalgia in her patients and didn’t think the diagnosis fit.
In October 2025, she described her symptoms to ChatGPT using precise medical terminology, and asked for 10 possible diagnoses. Her expertise enabled her to reject many of its suggestions. Over more than 12,000 words, she pushed back on implausible diagnoses and explored ones that felt reasonable. One of ChatGPT’s suggestions was pelvic congestion syndrome, a vascular disease. Gamwell sought a procedure that confirmed it. She had surgery in January and is now symptom-free.
“I’ve been wanting so badly to send a message to my primary care, but I haven’t yet, to kind of be like: ‘I told you so,’” she said.
“‘You were going to have me live the rest of my life in this chronic pain.’” She recognised that many users couldn’t have prompted ChatGPT and assessed its responses as she had. How many people, she asked, would have realised that several of the suggestions made no sense?

– ©2026 The New York Times Company