Empowering Patients: Comparing ChatGPT and Google in Medical Report Interpretation

Authors

  • Tisa Dobovšek, Comenius University Bratislava

Abstract

Introduction

Medical reports play a crucial role in healthcare communication, significantly influencing patients' understanding and decision-making regarding their health. However, patients often struggle to interpret these reports due to limited time with doctors, a lack of medical knowledge, and inadequate access to information, resulting in confusion and distress [2]. Conversational AI models, such as ChatGPT, have emerged as potential solutions, offering emotional support and the potential to enhance understanding [1]. Despite their promising applications in various domains, their effectiveness in the interpretation of medical reports remains largely unexplored. This study aims to compare the effectiveness of ChatGPT and Google in facilitating understanding and in providing empathic responses and emotional support. Furthermore, the study will investigate trust in technology, user experience, and information reliability and validity, addressing the emotional and cognitive needs of patients.

Methods

This experimental study will involve 96 participants recruited from a university setting. They will be randomly assigned to either ChatGPT or Google as a tool to aid their interpretation of presented medical reports, while a control group will receive doctor-supported interpretations. After a 30-minute interpretation phase, participants will provide feedback on trust, user experience, and perceived empathy using a Likert scale questionnaire. Likert scale ratings will also be used to assess the quality, reliability, and validity of the information provided, with the medical reports and AI-generated answers compared against expert evaluations. To evaluate understanding in the context of multimedia learning theory, qualitative interviews, comprehension questions, and tests (e.g., retention, transfer) will be conducted to gain in-depth insights into participants' understanding. Data analysis will involve statistical techniques such as ANOVA to examine differences between the groups.
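As an illustration of the planned analysis, the one-way ANOVA comparing the three conditions could be computed as sketched below. The group labels and Likert ratings are hypothetical placeholders, not study data; a real analysis would use a statistics package rather than this minimal hand-rolled F-statistic.

```python
def one_way_anova_f(groups):
    """Return the F-statistic for a one-way ANOVA over a list of groups."""
    k = len(groups)                           # number of groups
    n = sum(len(g) for g in groups)           # total number of observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (variation of group means around grand mean)
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (variation of observations around group means)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Hypothetical Likert ratings (1-5) for perceived empathy in each condition
chatgpt = [4, 5, 4, 4, 5]
google = [3, 2, 3, 3, 2]
doctor = [5, 4, 5, 5, 4]
f_stat = one_way_anova_f([chatgpt, google, doctor])
```

A large F-statistic relative to the critical value for (k−1, N−k) degrees of freedom would indicate that at least one condition differs from the others, after which post-hoc pairwise comparisons would identify which.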

Expected Results

ChatGPT is expected to outperform Google owing to its conversational capabilities, enhancing both emotional support and understanding. The findings will advance the understanding of machine learning in medical report interpretation, with practical implications for healthcare integration, user experience, and information reliability. However, a further larger-scale study involving healthcare participants is necessary to capture the complexities of real-world medical report interpretation.

References

[1] M. Cascella, J. Montomoli, V. Bellini, and E. Bignami, “Evaluating the feasibility of ChatGPT in healthcare: An analysis of multiple clinical and research scenarios,” Journal of Medical Systems, vol. 47, no. 1, 2023. doi:10.1007/s10916-023-01925-4

[2] J. W. Ayers et al., “Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum,” JAMA Internal Medicine, 2023. doi:10.1001/jamainternmed.2023.1838


Published

2023-06-05