Exploring the potential of ChatGPT for clinical reasoning and decision-making: a cross-sectional study on the Italian Medical Residency Exam
Authors
Giacomo Scaioli, Giuseppina Lo Moro, Francesco Conrado, Lorenzo Rosset, Fabrizio Bert, Roberta Siliquini
Abstract
Background. This study aimed to assess the performance of ChatGPT, a large language model (LLM), on the Italian Medical Residency Exam (SSM) to determine its potential as a tool for medical education and clinical decision-making support.
Materials and methods. A total of 136 questions were obtained from the official SSM test. ChatGPT's responses were analyzed and compared with the performance of the medical doctors who took the test in 2022. Questions were classified into clinical cases (CC) and notional questions (NQ).
Results. ChatGPT achieved an overall accuracy of 90.44%, performing better on clinical cases (92.45%) than on notional questions (89.15%). Compared with the medical doctors' scores, ChatGPT outperformed 99.6% of the participants.
Conclusions. These results suggest that ChatGPT holds promise as a valuable tool for clinical decision-making, particularly in the context of clinical reasoning. Further research is needed to explore the potential applications and implementation of LLMs in medical education and medical practice.