A recent study has shown that the AI language model ChatGPT outperforms human doctors in the quality and empathy of its written advice, The Guardian reports.
The research suggests that AI assistants have the potential to play a significant role in medicine and could help improve doctors' communication with their patients.
AI Passes Quality and Empathy Test for Doctors!
The study, published in the journal JAMA Internal Medicine, examined data from Reddit's AskDocs community, where verified healthcare professionals answer medical questions posted by internet users.
The researchers took a random sample of 195 AskDocs exchanges in which a verified doctor responded to a public question. The original queries were then posed to ChatGPT, which was instructed to respond.
A panel of three licensed healthcare professionals, who did not know whether each answer came from an actual physician or from ChatGPT, assessed the responses for quality and empathy.
Essentially, the researchers attempted to carry out a Turing Test equivalent for an AI chatbot within the medical field.
Incredible Results for ChatGPT
Before discussing the results, it is worth noting that OpenAI's ChatGPT has previously undergone similar assessments. In January, ChatGPT drew attention for earning a B to B- grade on an MBA exam.
In February, ChatGPT successfully advanced past the initial interview stages for an L3 software engineering position.
This is a significant achievement, as the L3 position is typically held by new college graduates seeking to begin their careers in coding.
In the same month, a new study found that OpenAI's ChatGPT scored close to the 60% passing threshold on the United States Medical Licensing Exam (USMLE), coming near to passing the exam.
Returning to the quality and empathy test, The Guardian reports that the panel preferred ChatGPT's responses to those given by a human doctor 79% of the time.
ChatGPT's responses were also rated good or very good in quality 79% of the time, compared with 22% of doctors' responses. Likewise, 45% of the ChatGPT answers were rated empathic or very empathic, compared with just 5% of doctors' replies.
The AI bot may have fallen short on an actual medical licensing exam, but these results suggest it can serve as an empathic guide when given the right prompts.
This is good news, as many companies have already started integrating the chatbot into their websites as an automated response tool.
ChatGPT Promises Improvements in Healthcare
Dr. John Ayers of the University of California San Diego, one of the study's authors, said that the results highlighted the potential for AI assistants to improve healthcare. "The opportunities for improving healthcare with AI are massive," he said.
Dr. Christopher Longhurst of UC San Diego Health also commented on the results, saying that the study suggests that tools like ChatGPT can efficiently draft high-quality, personalized medical advice for review by clinicians. He added that they are already beginning the process of using ChatGPT at UCSD Health.
Stay posted here at Tech Times.
Related Article: AI May Just Prevent the Next Pandemic, but How?