Study Finds ChatGPT 4 Capable of Selecting Appropriate Medical Imaging Tests for Diagnosis

According to a study, OpenAI’s ChatGPT has the ability to aid in clinical decision-making, specifically in selecting the appropriate radiological imaging tests for breast cancer screening or breast pain.

Research conducted at Mass General Brigham in the US suggests that large language models can help primary care physicians and referring providers make decisions when evaluating patients and ordering imaging tests for breast pain and breast cancer screening. The results are published in the Journal of the American College of Radiology.

“In this scenario, the capabilities of ChatGPT were impressive,” said corresponding author Marc D. Succi, associate director of innovation and commercialization at Mass General Brigham Radiology and director of the MESH Incubator.

“I see it as a bridge between the referring healthcare professional and the specialist radiologist – as a trained consultant to recommend the right imaging test at the point of care without delay.

“This could reduce administrative time for both referring and consulting physicians to make these evidence-based decisions, optimize workflow, reduce burnout, and reduce patient confusion and wait times,” Succi said.

In the study, researchers asked ChatGPT 3.5 and ChatGPT 4 to recommend imaging tests for 21 patient scenarios involving breast cancer screening or breast pain, using established appropriateness criteria.

They posed the questions in two formats: open-ended, and with ChatGPT given a list of imaging options to choose from. They tested ChatGPT 3.5 as well as ChatGPT 4, a newer, more advanced version.

ChatGPT 4 outperformed ChatGPT 3.5, especially when provided with the list of available imaging options.

For example, when asked about breast cancer screening and multiple-choice imaging options, ChatGPT 3.5 answered an average of 88.9 percent of the prompts correctly, and ChatGPT 4 about 98.4 percent correctly.

“This study does not compare ChatGPT to existing radiologists because the current gold standard is actually a set of guidelines from the American College of Radiology, which is the comparison we did,” Succi said.

“This is a purely additive study, so we’re not saying that AI is better than your doctor in selecting an imaging test, but it can be an excellent tool for optimizing the doctor’s time in non-interpretive tasks.”
