Study Reveals Challenges in Obtaining Effective Health Advice from Chatbots
As healthcare systems grapple with long waiting lists and rising costs, many people are turning to AI-powered chatbots for medical self-diagnosis. A recent survey found that roughly one in six American adults already uses a chatbot for health advice at least once a month, a sign of growing reliance on technology for health-related decisions.
The Risks of Relying on AI for Health Advice
While AI chatbots like ChatGPT offer quick access to information, placing too much trust in their outputs can be risky. A study led by the University of Oxford highlights the challenges users face when interacting with these systems, particularly in knowing what information to provide in order to receive useful health recommendations.
Understanding the Study Findings
The Oxford study involved around 1,300 participants in the U.K., who were presented with health scenarios written by medical professionals. Participants had to identify potential conditions and decide on a course of action, such as visiting a doctor or going to the hospital. They used several AI models, including:
- GPT-4o (the default model powering ChatGPT)
- Cohere’s Command R+
- Meta’s Llama 3
Results showed that those using chatbots were less likely to accurately identify relevant health conditions and often underestimated the severity of any conditions they did recognize. Adam Mahdi, director of graduate studies at the Oxford Internet Institute and a co-author of the study, noted, “The responses they received frequently combined good and poor recommendations.”
Challenges with AI Communication
Participants often omitted essential details when querying the chatbots and received answers that were difficult to interpret, pointing to a breakdown in communication on both sides. Mahdi noted that this gap is not captured by how these systems are typically assessed: “Current evaluation methods for chatbots do not reflect the complexity of interacting with human users.”
AI in the Healthcare Landscape
Despite the risks, tech companies are increasingly promoting AI as a solution to enhance health outcomes. Notable developments include:
- Apple: Working on an AI tool for advice on exercise, diet, and sleep.
- Amazon: Exploring AI to analyze medical databases for social determinants of health.
- Microsoft: Assisting in developing AI to triage messages from patients to care providers.
However, both healthcare professionals and patients remain cautious about AI’s readiness for high-stakes medical applications. The American Medical Association advises against using chatbots like ChatGPT for clinical decision-making, and leading AI companies, including OpenAI, warn against relying on chatbots for diagnoses.
Expert Recommendations
Mahdi advises, “We would recommend relying on trusted sources of information for healthcare decisions. Like clinical trials for new medications, chatbot systems should be tested in the real world before being deployed.”