ChatGPT is a powerful language model, boasting 175 billion parameters behind its novel capabilities. Following a wave of publicity in recent months, the AI is also becoming more widely used (it is the fastest-growing app yet created).
A new study conducted by a team of medical researchers, led by Dr. Edna Skopljak and Dr. Donika Vata, has analysed the accuracy of ChatGPT’s health information. With the rise of chatbots and AI in healthcare, many people are turning to these tools for medical advice, despite some security concerns. The accuracy of the answers, however, remains an open question.
To assess how appropriate the AI technology is for delivering health-related information, the researchers analysed over 100 Google health search terms, which were posed as questions to ChatGPT (a sketch of how such querying might be scripted follows the list below).
Following this, medical experts evaluated the accuracy of the information provided and considered whether the answers could be relied upon as a first source of medical knowledge.
Questions for ChatGPT included:
- Flu Symptoms
- Diabetes Symptoms
- ADHD Symptoms
- How to Lower Blood Pressure
- How to Get Rid of Hiccups
- How to Lower Cholesterol
- What Is RSV?
- Is Pneumonia Contagious?
- What Causes High Blood Pressure?
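The study does not say whether the questions were submitted by hand or programmatically. As an illustration only, a minimal sketch of how the list above could be batched through ChatGPT, assuming the `openai` Python SDK (v1+), a `gpt-3.5-turbo` model, and an `OPENAI_API_KEY` in the environment, might look like this:

```python
# Hypothetical sketch: batch the study's health questions through ChatGPT.
# Assumes the `openai` Python SDK (v1+) with OPENAI_API_KEY set in the
# environment; the study itself does not describe its querying setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTIONS = [
    "What are the symptoms of flu?",
    "What are the symptoms of diabetes?",
    "What are the symptoms of ADHD?",
    "How can I lower my blood pressure?",
    "How do I get rid of hiccups?",
    "How can I lower my cholesterol?",
    "What is RSV?",
    "Is pneumonia contagious?",
    "What causes high blood pressure?",
]

def ask(question: str) -> str:
    """Send one health question to the chat model and return its answer."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Collect the answers so that medical experts can review them for accuracy.
    for q in QUESTIONS:
        print(f"Q: {q}\nA: {ask(q)}\n")
```

The model name and question phrasings here are assumptions for the sketch; the experts' accuracy review would still happen offline, on the collected answers.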
The researchers concluded that while the chatbot provided some accurate information, there were also many inaccuracies and gaps in knowledge. There is also the patient impact to consider. According to Pew Research, 6 in 10 U.S. adults say they would feel “uncomfortable” if their doctor relied upon artificial intelligence to diagnose disease and provide treatment recommendations.
Another concern is that when using ChatGPT, medical professionals are feeding it data, and in healthcare that data will often be confidential patient information. These privacy worries come on top of the study’s findings about the accuracy of its answers.
As an example, the chatbot tested offered incorrect information about the availability of a vaccine to prevent RSV infection. It also failed to tailor its answers to the personal health characteristics of the user, details that a qualified clinician would draw on to understand a specific medical condition.
The authors concluded that the findings of the review carry important implications for the use of chatbots and AI in healthcare. In short, while these tools can be helpful, people should be aware of their limitations and seek medical advice from a licensed healthcare professional when necessary.
