Academic Journal
The doc versus the bot: A pilot study to assess the quality and accuracy of physician and chatbot responses to clinical questions in gynecologic oncology
| Field | Value |
| --- | --- |
| Title | The doc versus the bot: A pilot study to assess the quality and accuracy of physician and chatbot responses to clinical questions in gynecologic oncology |
| Authors | Mary Katherine Anastasio, Pamela Peters, Jonathan Foote, Alexander Melamed, Susan C. Modesitt, Fernanda Musa, Emma Rossi, Benjamin B. Albright, Laura J. Havrilesky, Haley A. Moss |
| Source | Gynecologic Oncology Reports, Vol 55, Iss , Pp 101477- (2024) |
| Publisher | Elsevier, 2024. |
| Publication year | 2024 |
| Collection | LCC: Gynecology and obstetrics; LCC: Neoplasms. Tumors. Oncology. Including cancer and carcinogens |
| Subject terms | Artificial intelligence, Gynecologic oncology, Patient education, Gynecology and obstetrics, RG1-991, Neoplasms. Tumors. Oncology. Including cancer and carcinogens, RC254-282 |
| Description | Artificial intelligence (AI) applications to medical care are currently under investigation. We aimed to evaluate and compare the quality and accuracy of physician and chatbot responses to common clinical questions in gynecologic oncology. In this cross-sectional pilot study, ten questions about the knowledge and management of gynecologic cancers were selected. Each question was answered by a recruited gynecologic oncologist, the ChatGPT (Generative Pre-trained Transformer) AI platform, and the Bard by Google AI platform. Five recruited gynecologic oncologists who were blinded to the study design were allowed 15 min to respond to each of two questions. Chatbot responses were generated by inserting the question into a fresh session in September 2023. Qualifiers and language identifying the response source were removed. Three gynecologic oncology providers who were blinded to the response source independently reviewed and rated response quality using a 5-point Likert scale, evaluated each response for accuracy, and selected the best response for each question. Overall, physician responses were judged to be best in 76.7 % of evaluations versus ChatGPT (10.0 %) and Bard (13.3 %; p |
| Document type | article |
| File description | electronic resource |
| Language | English |
| ISSN | 2352-5789 |
| Relation | http://www.sciencedirect.com/science/article/pii/S2352578924001565; https://doaj.org/toc/2352-5789 |
| DOI | 10.1016/j.gore.2024.101477 |
| Access URL | https://doaj.org/article/06e9d8256ccc491aba40ea304ee13c7d |
| Accession number | edsdoj.06e9d8256ccc491aba40ea304ee13c7d |
| Database | Directory of Open Access Journals |