Link Centre - Search Engine and Internet Directory


Can You Trust AI With Your Health? Oxford Researchers Sound the Alarm

AI chatbots are increasingly being used for health guidance, but new research suggests they may provide advice that is unreliable and sometimes unsafe. A study conducted by the University of Oxford found that medical guidance from AI systems often varied in quality, making it difficult for users to determine which information they could trust.

[Image: the ChatGPT app shown on a phone.]

The researchers observed that people seeking health advice through AI received a blend of accurate and misleading responses. This inconsistency raised concerns about users’ ability to make appropriate decisions about their care.

The findings come as AI adoption for wellbeing continues to rise. A survey by Mental Health UK in November 2025 revealed that more than one-third of people in the UK now turn to AI tools for mental health or wellbeing support.

Dr Rebecca Payne, the study’s lead medical practitioner, warned that relying on chatbots to assess symptoms could pose serious risks, describing the practice as potentially dangerous.

As part of the study, 1,300 participants were presented with medical scenarios, such as a person experiencing an intense headache or a new mother dealing with extreme fatigue. Participants were divided into two groups, with one group using AI tools to help identify possible conditions and decide on next steps.

Researchers then assessed whether participants correctly understood what might be wrong and whether they made appropriate decisions, such as visiting a GP or seeking emergency care. Results showed that those using AI often struggled to ask the right questions and received different answers depending on how their queries were phrased.

The AI-generated responses typically included a mix of helpful and unhelpful information, leaving many participants unsure how to interpret or act on the advice.

Dr Adam Mahdi, the study’s senior author, told the BBC that while AI can provide medical facts, users frequently find it difficult to extract clear, actionable guidance. He explained that people tend to share information gradually and often omit important details, which can cause AI systems to offer multiple possible explanations without clarity on which is most relevant.

Dr Amber W. Childs, an associate professor of psychiatry at Yale School of Medicine, pointed out another concern: chatbots may reinforce long-standing biases embedded in medical data and practices. She noted that AI systems are only as accurate as the clinical standards they are trained on, which are themselves imperfect.

At the same time, Dr Bertalan Meskó, editor of The Medical Futurist, said progress is being made. He highlighted that major AI companies, including OpenAI and Anthropic, have recently introduced health-focused versions of their chatbots, which he believes could perform differently if tested under similar conditions.

According to Dr Meskó, the priority should be continuous improvement of healthcare-specific AI, alongside strong national regulations, clear safeguards, and well-defined medical guidelines.
