Developers Explore AI for Medical Advice Amid Accuracy Concerns
Software developers are experimenting with AI-powered chatbots designed to provide medical advice and diagnose conditions, but questions about their accuracy persist.
In the spring, Google introduced its “AI Overviews” feature, which presents chatbot-generated answers above traditional search results, including those related to health. While the concept seemed promising, issues quickly emerged regarding the reliability of the advice provided.
During the feature’s first week, users reported serious errors. In one case, Google’s AI offered potentially dangerous advice for treating a rattlesnake bite; in another, a search returned a recommendation to consume “at least one small rock per day” for nutritional benefits, an error traced back to a satirical source.
In response, Google has since adjusted its algorithms to reduce the inclusion of satirical and humor-based content in its AI Overviews. The company has also removed some misleading search results that gained viral attention.
“Most AI Overviews deliver high-quality information with links for further exploration,” a Google spokesperson told CBS News. “For health-related queries, we have stringent quality and safety measures in place, including disclaimers advising users to seek professional advice. We continue to refine how we present AI Overviews to ensure the information is both reliable and accurate.”
Despite these efforts, health misinformation persists. As late as June, searches about introducing solid food to infants under six months old still returned incorrect advice, even though the American Academy of Pediatrics recommends that solid foods not be introduced until around six months of age. Searches related to dubious wellness trends, such as detox diets or drinking raw milk, also sometimes surfaced discredited claims.
Even so, many healthcare professionals remain hopeful about the potential of AI chatbots to revolutionize the industry. Dr. Nigam Shah, Chief Data Scientist at Stanford Healthcare, expressed cautious optimism: “While I’m somewhat skeptical in the short term, I believe that these technologies will ultimately benefit us greatly.”
Proponents of chatbots also argue that medical professionals are not infallible. A 2022 study by the Department of Health and Human Services estimated that up to 2% of patients in emergency departments might experience harm due to misdiagnoses by healthcare providers.
Shah drew a comparison to the early days of Google Search: “When Google Search first appeared, there were concerns that people would misdiagnose themselves and chaos would ensue. That didn’t happen. Similarly, while early-stage chatbots will have their share of mistakes, having access to information when other options are unavailable is beneficial.”
The World Health Organization (WHO) is also exploring AI with its chatbot, Sarah, which provides information drawn from the WHO’s own resources and those of trusted partners. Sarah offers advice on heart attack prevention, focusing on stress management, sleep, and a healthy lifestyle.
As AI design and oversight advance, chatbot accuracy is expected to improve. For now, anyone seeking medical advice from an AI should take Google’s own disclaimer to heart: “Info quality may vary.”