Oxford study warns against relying on AI chatbots for health advice

A new study led by the Oxford Internet Institute has found that AI chatbots are not reliable sources of medical advice, despite their increasing popularity. The research highlights serious risks associated with depending on large language models (LLMs) like ChatGPT, Meta’s Llama 3, and Cohere for health-related decisions.

In a controlled experiment involving 1,300 UK-based participants, researchers tested decision-making across several medical scenarios. The results showed no significant advantage when users consulted AI tools compared with relying on their own judgement or online searches. In many cases, chatbot advice was unclear or mixed accurate information with harmful suggestions, leading to poor health decisions and the underestimation of serious conditions.

Although LLMs perform well on standardised medical exams, the study found that this success does not translate into safe or effective real-world use. The problem is not only accuracy, but also how people interpret and act on chatbot outputs.

Why this matters for civil society

Civil society must remain vigilant about the role of AI in critical areas like healthcare. When users turn to chatbots out of convenience or necessity, for example because of high costs or long wait times, they risk receiving misleading or unsafe information. Unequal access to quality healthcare may worsen if vulnerable groups come to rely on unregulated tools that lack safeguards or proper oversight.

What civil society can do

Civil society organisations can raise awareness about the limitations and risks of using AI for medical decisions. They should advocate for clear digital health standards, transparency in AI training data, and stronger data protection for users. Public campaigns can help educate individuals on when AI tools may assist and when to rely on certified healthcare professionals.
