Scientists examined 30 cases treated in an emergency service in the Netherlands in 2022, feeding anonymized patient histories, lab tests, and the doctors' own observations to ChatGPT and asking it to provide five possible diagnoses. (REUTERS)

AI System ChatGPT Accurately Diagnoses Emergency Room Patients

Dutch researchers have found that the artificial intelligence chatbot ChatGPT diagnosed patients rushed to the emergency department at least as well as doctors, and in some cases outperformed them, a finding they say suggests AI could one day transform the field of medicine.

But Wednesday’s report also emphasized that emergency room doctors don’t need to hang up their scrubs just yet, as chatbots can potentially speed up diagnosis but won’t replace human medical judgment and experience.

Researchers studied 30 cases treated in an emergency department in the Netherlands in 2022, feeding ChatGPT an anonymized patient history, laboratory tests and doctors’ own observations and asking it to provide five possible diagnoses.

They then compared the chatbot's list to a list of five diagnoses compiled by emergency physicians who had access to the same information, and checked both lists against the correct diagnosis in each case.

Doctors had the correct diagnosis in their top five in 87 percent of cases, compared with 97 percent for ChatGPT version 3.5 and 87 percent for version 4.0.
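The metric the study uses is a simple "top-5 accuracy": a case counts as a hit if the confirmed diagnosis appears anywhere in the five suggestions. A minimal sketch of that calculation, with invented toy data that is not from the study:

```python
def top5_accuracy(suggestions, confirmed):
    """Fraction of cases where the confirmed diagnosis appears in the
    list of five suggested diagnoses for that case."""
    hits = sum(1 for top5, truth in zip(suggestions, confirmed) if truth in top5)
    return hits / len(confirmed)

# Toy data (hypothetical, for illustration only): three cases,
# each with a list of five suggested diagnoses.
cases = [
    ["appendicitis", "gastroenteritis", "cholecystitis", "pancreatitis", "diverticulitis"],
    ["pneumonia", "bronchitis", "pulmonary embolism", "heart failure", "COPD"],
    ["migraine", "tension headache", "sinusitis", "cluster headache", "meningitis"],
]
truths = ["pancreatitis", "asthma", "meningitis"]

print(top5_accuracy(cases, truths))  # 2 of 3 cases hit -> 0.666...
```

In the study, the same score was computed once over the doctors' lists and once over each ChatGPT version's lists, which is what produces the 87/97/87 percent figures above.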

“Simply put, this shows that ChatGPT was able to suggest medical diagnoses just like a human doctor would,” said Hidde ten Berg from the emergency department at Jeroen Bosch Hospital in the Netherlands.

Co-author Steef Kurstjens told AFP that the study did not suggest that computers could one day run the emergency department, but that AI could play a vital role in assisting doctors under pressure.

“The most important thing is that the chatbot does not replace the doctor, but it can help with the diagnosis and maybe it can generate ideas that the doctor hasn’t thought of,” Kurstjens told AFP.

Large language models like ChatGPT are not designed to be medical devices, he stressed, and feeding confidential and sensitive medical information into a chatbot would also cause problems.

– “Bloopers” –

And as in other fields, ChatGPT showed some limitations.

The chatbot’s reasoning was “at times medically implausible or inconsistent, which can lead to misinformation or misdiagnosis with significant consequences,” the report said.

The researchers also acknowledged some flaws in the study. The sample size was small, with 30 cases studied. In addition, only relatively simple cases where patients had a single primary complaint were considered.

It wasn’t clear how well the chatbot would fare in more complex cases. “The effectiveness of ChatGPT in providing multiple diagnoses for patients with complex or rare diseases remains unproven,” the report noted.

Sometimes the chatbot failed to include the correct diagnosis in its top five possibilities, Kurstjens explained, notably in the case of an abdominal aortic aneurysm, a potentially life-threatening condition in which the aorta swells.

The only consolation for ChatGPT: in that case, the doctor was also wrong.

The report outlines what it calls medical “bloopers” that the chatbot made, such as diagnosing anemia (a low level of hemoglobin in the blood) in a patient with a normal hemoglobin level.

“It is important to remember that ChatGPT is not a medical device and that there are privacy concerns when using ChatGPT with medical information,” said ten Berg.

“However, this has the potential to save time and reduce waiting times in the emergency department. The benefit of using AI could be to support less experienced doctors, or it could help detect rare diseases,” he added.

The results, published in the medical journal Annals of Emergency Medicine, will be presented at the European Congress of Emergency Medicine (EUSEM) 2023 in Barcelona.
