The researchers observed that people seeking health advice through AI received a blend of accurate and misleading responses. This inconsistency raised concerns about users’ ability to make appropriate decisions about their care.
The findings come as AI adoption for wellbeing continues to rise. A survey by Mental Health UK in November 2025 revealed that more than one-third of people in the UK now turn to AI tools for mental health or wellbeing support.
Dr Rebecca Payne, the study's lead medical practitioner, warned that relying on chatbots to assess symptoms could be dangerous, given how inconsistently the tools performed.
As part of the study, 1,300 participants were presented with medical scenarios, such as experiencing an intense headache or, as a new mother, coping with extreme fatigue. Participants were divided into two groups, with one group using AI tools to help identify possible conditions and decide on next steps.
Researchers then assessed whether participants correctly understood what might be wrong and whether they made appropriate decisions, such as visiting a GP or seeking emergency care. Results showed that those using AI often struggled to ask the right questions and received different answers depending on how their queries were phrased.
The AI-generated responses typically included a mix of helpful and unhelpful information, leaving many participants unsure how to interpret or act on the advice.
Dr Adam Mahdi, the study’s senior author, told the BBC that while AI can provide medical facts, users frequently find it difficult to extract clear, actionable guidance. He explained that people tend to share information gradually and often omit important details, which can cause AI systems to offer multiple possible explanations without clarity on which is most relevant.
Dr Amber W. Childs, an associate professor of psychiatry at Yale School of Medicine, pointed out another concern: chatbots may reinforce long-standing biases embedded in medical data and practices. She noted that AI systems are only as accurate as the clinical standards they are trained on, which are themselves imperfect.
At the same time, Dr Bertalan Meskó, editor of The Medical Futurist, said progress is being made. He highlighted that major AI companies, including OpenAI and Anthropic, have recently introduced health-focused versions of their chatbots, which he believes could perform differently if tested under similar conditions.
According to Dr Meskó, the priority should be continuous improvement of healthcare-specific AI, alongside strong national regulations, clear safeguards, and well-defined medical guidelines.