Risks of Trusting AI Chatbots for Medical Advice

Published on Feb 09, 2026.

In recent years, AI chatbots have surged in popularity as an accessible source of medical advice. However, a study by the University of Oxford raises serious concerns about the reliability of the advice these chatbots offer. With over one-third of UK residents reportedly using AI for mental health support, understanding how accurate these responses are becomes critical as people increasingly turn to technology with their health concerns.

The Oxford study involved nearly 1,300 participants who were given health scenarios, such as a severe headache or fatigue after childbirth, and asked to use AI chatbots to identify the likely condition and decide on appropriate next steps. Many struggled to phrase their questions effectively: small differences in wording could produce vastly different AI responses, leaving participants with a confusing mix of possible conditions and recommended actions. As Dr. Adam Mahdi noted, this inconsistency left people uncertain about which medical conditions might apply to them, increasing the risk of either underestimating a serious health issue or seeking unnecessary medical attention.

These findings highlight a significant misconception about AI chatbots: that they can autonomously provide precise and tailored medical advice. Instead, their efficacy largely hinges on how users interact with them. This raises questions about the potential dangers inherent in using AI for health inquiries, particularly as chatbots may inadvertently reflect biases present in historical medical advice. Experts like Dr. Amber W. Childs emphasize that chatbots are limited by the quality of their training data and may reinforce outdated norms. As AI in healthcare evolves, there's a pressing need for clear regulations and guidelines to ensure these systems prioritize user safety.

Tags: AI, Healthcare, Chatbots, Medical Advice, Oxford University Study