Background: Large language models (LLMs) such as ChatGPT enhance scientific research by processing vast amounts of data and improving personalized learning, clinical reasoning, and workflow efficiency in healthcare. Ethical considerations and reduced human interaction highlight the need for medical education to incorporate AI literacy, data science, and bioethics to prepare future physicians for AI-integrated healthcare. Methods: A brief questionnaire was developed as per the requirements of this study. This was a descriptive cross-sectional study, and data were collected by circulating electronic surveys and physical questionnaires using a simple random sampling technique. The inclusion criteria were clinicians practicing medicine in India with various degrees (e.g., MBBS, BAMS, BHMS). Clinicians who did not have access to the digital platforms or technology necessary for interacting with ChatGPT were excluded. Data were documented and analysed using Microsoft Excel 2019. The chi-square test was used to determine the association between various parameters and the extent of familiarity with ChatGPT. Results: A total of 380 responses were collected. The majority were aged 35-44 years (30.8%), with MBBS/MD/MS or equivalent being the most common qualification (37.6%). Most respondents had 0-5 years of clinical experience (36.5%). Only 21% were very familiar with ChatGPT, while 41.8% were not familiar. Nearly half (45.5%) had interacted with a chatbot, and 65% used ChatGPT in less than 20% of their practice. While 45% viewed AI chatbots as beneficial, 28.4% were unsure of their impact. Confidence in AI accuracy was mixed, with only 20.7% being very confident. Ethical concerns were reported by 32.1% of participants. Interest in AI training was high (64.2%), though 66.6% were unaware of government policies on AI in healthcare. Familiarity with AI was significantly associated with age (p=0.004), sex (p=0.0002), qualification (p=0.003), and experience (p=0.0001).
Conclusion: AI could be an excellent resource that augments the work of physicians and makes a significant impact on efficiency. However, considerable progress remains to be made, and AI cannot yet fully replace the role of a physician. Incorporating AI training into medical education is the need of the hour.
Ever-growing scientific advances and data present a significant challenge: a “burden” of knowledge that leaves researchers struggling to keep up with the expanding scientific literature. At the same time, machine intelligence is helping to manage this explosion of knowledge and data. The rapid progress in generative AI in the past few years, especially in large language models (LLMs), is a game-changer. LLMs are well suited to distilling this vast body of knowledge and have the potential to revolutionize scientific research [1]. ChatGPT and other large language models generate sentences and paragraphs by recognizing word patterns in the data on which they were trained: a model learns to predict the next word in a sentence from its pre-training data. By allowing users to communicate with an artificial intelligence model in a human-like way, ChatGPT has crossed the technological adoption barrier into the mainstream [2]. Although AI-based language models like ChatGPT have demonstrated impressive capabilities, it is uncertain how well they will perform in real-world scenarios, particularly in fields such as medicine where high-level and complex thinking is necessary. Furthermore, while the use of ChatGPT in writing scientific articles and other scientific outputs may have potential benefits, important ethical concerns must also be addressed [3].
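The next-word-prediction objective described above can be illustrated with a toy counting model: tally which word follows which in a small corpus, then predict the most frequent follower. This sketch is purely illustrative; the corpus is invented, and ChatGPT itself uses transformer neural networks over subword tokens rather than bigram counts.

```python
from collections import Counter, defaultdict

# Invented toy "training corpus" -- a stand-in for the web-scale text
# that real LLMs are trained on.
corpus = ("the patient has a fever . the patient has a cough . "
          "the doctor sees the patient .").split()

# Count, for every word, which words follow it and how often.
follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def predict_next(word):
    """Predict the most frequently observed word after `word`."""
    return follow[word].most_common(1)[0][0]

print(predict_next("the"))      # "patient" follows "the" 3 times, "doctor" once
print(predict_next("patient"))  # "has" follows "patient" twice, "." once
```

An LLM replaces the raw counts with a neural network that generalizes across contexts, but the training objective is the same: given the words so far, predict the next one.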
Clinical decision support (CDS) systems that use AI can suggest diagnoses and provide physicians with treatment recommendations. They improve diagnostic accuracy, efficiency, and safety through optimal interactions between physicians and CDS systems [4]. Data suggest that in medical education, the benefits of ChatGPT include the possibility of improving personalized learning, clinical reasoning, and the understanding of complex medical concepts [5]. AI is being tested in healthcare to assist with and replace repetitive tasks such as image recognition in diagnosis and image augmentation in radiology. AI is expected to augment healthcare workflows through automated triage, improve the productivity of individual physicians, reduce human error, discover better patterns of patient care, defray medical costs, perform minimally invasive surgery, and reduce mortality rates [6].
Until recently, AI models have lacked sufficient accuracy and power to engage meaningfully in clinical decision-making. However, the advancement of large language models (LLMs), which are trained on large amounts of human-generated text such as that found on the Internet, has motivated further investigation into whether AI can serve as an adjunct throughout the entire clinical workflow, from triage to diagnosis to management [7]. Challenges include: (1) data privacy concerns, as ChatGPT may have access to vast amounts of personal health information; (2) inconsistent accuracy, since ChatGPT may not always provide accurate answers, particularly for complex medical questions; and (3) bias in the training data: ChatGPT is only as good as the data it was trained on, and if the training data are biased, the model may perpetuate that bias. Pitfalls include: (1) misleading information, since ChatGPT is not a substitute for a healthcare provider and incorrect or misleading information may harm patients; and (2) dependence on technology, as ChatGPT may become a crutch for healthcare providers, reducing their ability to diagnose and treat patients without relying on it [8].
The technological revolution raises many challenges with regard to the ethical considerations of AI-based implementation in healthcare. Minority exclusion in databases, issues with legal protections, and a decrease in the humanistic touch, among other ethical issues, raise concerns about the adoption of AI in healthcare. These reasons underscore the importance of acquiring sufficient knowledge of and experience with AI, an obligation of high importance for future physicians. Medical schools should take the necessary steps to educate students in basic and clinical medicine along with data science, biostatistics, the bioethical implications of AI, and evidence-based medicine. Part of a medical student’s training should include developing the ability to distinguish correct information within the vast amount of data and to understand how to create and disseminate thoroughly validated, trustworthy information for patients and the public [9].
Study design and setting
Before data collection, institutional ethical clearance was obtained from the Ethical Committee of Dr. Vithalrao Vikhe Patil Foundation’s Medical College. Additionally, informed consent was obtained from all participants, stating that the identity of candidates participating in the study would not be disclosed, that the remaining data would be available only to the investigator involved in the study and the regulatory authorities, and that a break in confidentiality would be possible only after a detailed review by the investigator and with the permission of the ethical committee.
This was a descriptive cross-sectional study, and data were collected by circulating electronic surveys as well as physical questionnaires, using a simple random sampling technique. The inclusion criteria were clinicians practicing medicine in India with various degrees (e.g., MBBS, BAMS, BHMS). Clinicians who did not have access to the digital platforms or technology necessary for interacting with ChatGPT were excluded.
Study Questionnaire
A brief questionnaire was developed as per the requirements of this study, and data were collected across three sections. The questionnaire included a statement confirming the participant’s agreement to participate in the study. Section 1 collected demographic information: age, sex, medical qualification, and years of clinical experience. Section 2 assessed clinicians’ familiarity and awareness of ChatGPT and artificial intelligence. Section 3 dealt with clinicians’ perceptions and understanding of the feasibility and use of ChatGPT and artificial intelligence in their clinical experience and practice.
Statistical Analysis
Data were documented and analysed using Microsoft Excel 2019. Descriptive statistics such as percentages and frequencies were calculated. The chi-square test was used to determine the association of sex, age, medical qualification, and years of clinical experience with the extent of familiarity with ChatGPT. A p-value of less than 0.05 was considered statistically significant.
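The chi-square computation behind this analysis can be made concrete with a minimal sketch. The following pure-Python code (an illustration, not the study’s Excel workflow) computes the Pearson chi-square statistic for a hypothetical 2x3 table of sex versus familiarity level; the counts are invented, not the study’s data. For a 2x3 table there are (2-1)(3-1) = 2 degrees of freedom, and for exactly 2 degrees of freedom the chi-square tail probability reduces to exp(-χ²/2).

```python
import math

def chi2_statistic(observed):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (obs - expected) ** 2 / expected
    return chi2

# Hypothetical counts: rows = sex, columns = not/somewhat/very familiar.
table = [[50, 30, 20],
         [20, 30, 50]]
chi2 = chi2_statistic(table)
dof = (len(table) - 1) * (len(table[0]) - 1)  # = 2
p = math.exp(-chi2 / 2)  # exact tail probability when dof == 2
print(round(chi2, 2), dof, p < 0.05)  # 25.71 2 True
```

In practice the same result is obtained with `scipy.stats.chi2_contingency`, which also handles tables with other numbers of degrees of freedom.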
Demographics
A total of 380 responses were collected. The majority of participants were in the 35-44 age group (117, 30.8%), followed by <25 years (83, 21.8%), 25-34 years (75, 19.7%), 45-54 years (52, 13.7%), and ≥55 years (53, 13.9%). In terms of medical qualifications, the largest group held MBBS/MD/MS or equivalent degrees (143, 37.6%), followed by BHMS (75, 19.7%), BAMS (45, 11.8%), UNANI (44, 11.6%), BDS/MDS (30, 7.9%), and other qualifications (36, 9.5%).
Years of clinical experience
Data were collected from clinicians with varying years of experience. The largest group, 139 (36.5%), had 0-5 years of experience, followed by 11-15 years (63, 16.6%), 16-20 years (61, 16.1%), 6-10 years (61, 16.1%), 21-25 years (39, 10.3%), 26-30 years (11, 2.9%), and >30 years (6, 1.6%).
Interaction with Artificial Intelligence
A total of 80 (21%) were very familiar with ChatGPT and other similar conversational agents, 141 (37.1%) were somewhat familiar, and 159 (41.8%) were not familiar. Additionally, 173 (45.5%) had interacted with a chatbot in the past, whereas 207 (54.5%) had not.
Regarding the extent of ChatGPT usage, 247 (65%) of respondents reported using it in less than 20% of their practice. 48 (12.6%) participants used ChatGPT in 20-40%, 42 (11%) in 40-60%, 28 (7.3%) in 60-80% and 15 (3.9%) in 80-100% of their clinical practice.
Influence of ChatGPT and specific uses
When asked about the potential influence of AI chatbots in healthcare, 171 (45%) viewed them as having a positive impact, while 67 (17.6%) believed they had no significant effect, 34 (8.9%) thought they had a negative impact, and 108 (28.4%) were unsure.
A total of 107 (28.1%) clinicians were aware of specific uses of AI beyond general information retrieval, whereas 142 (37.3%) had a general idea but expressed an inclination to know more and 131 (34.4%) were not aware of any such uses.
Accuracy, Reliability and Ethical concerns
Regarding clinicians’ confidence in the accuracy and reliability of AI-powered chatbots in providing medical information or advice, only 79 (20.7%) said that they were very confident, while the largest group, 175 (46.1%), were somewhat confident, and 126 (33.1%) were not confident.
When asked about apprehensions regarding potential ethical issues related to the use of AI-powered chatbots in healthcare, such as patient privacy or data security, the largest group of participants, 122 (32.1%), had ethical concerns, while 93 (24.4%) had only minor concerns. A total of 65 (17.1%) indicated that they had no ethical concerns, and 100 (26.3%) were unaware of ChatGPT.
Training, Workshops and Government policies and research initiatives
Amongst all participants, 244 (64.2%) indicated that they would be interested in participating in training and workshops designed to educate doctors about AI and its specific applications in healthcare, while 136 (35.8%) were not interested at this time. Regarding awareness of government policies on AI in healthcare, 127 (33.4%) were aware of such initiatives, whereas 253 (66.6%) were not. Additionally, only 75 (19.7%) respondents were aware of specific research initiatives on AI in healthcare, while 150 (39.5%) had a general understanding, and 155 (40.8%) were unaware of such research.
Associations
The extent of familiarity with ChatGPT and other forms of artificial intelligence was significantly associated with age (p=0.004), sex (p=0.0002), medical qualification (p=0.003), and years of clinical experience (p=0.0001).
Table 1: Frequency and Percentage of variables
Variable | Frequency | Percentage |
Age wise distribution | ||
<25 | 83 | 21.8% |
25-34 | 75 | 19.7% |
35-44 | 117 | 30.8% |
45-54 | 52 | 13.7% |
≥55 | 53 | 13.9% |
Qualification | ||
BAMS | 45 | 11.8% |
BDS/MDS | 30 | 7.9% |
BHMS | 75 | 19.7% |
BPTH | 7 | 1.8% |
MBBS/MD/MS | 143 | 37.6% |
OTHER | 36 | 9.5% |
UNANI | 44 | 11.6% |
Years of Clinical Experience | ||
0-5 | 139 | 36.6% |
6-10 | 61 | 16.05% |
11-15 | 63 | 16.6% |
16-20 | 61 | 16.05% |
21-25 | 39 | 10.3% |
26-30 | 11 | 2.9% |
>30 | 6 | 1.6% |
Have you heard of the use of AI-powered chatbots or virtual assistants in healthcare? | ||
YES | 252 | 66.3% |
NO | 128 | 33.7% |
Are you familiar with the term ‘ChatGPT’ or other such conversational agents? | ||
Yes | 248 | 65.2% |
No | 132 | 34.8% |
Extent of familiarity | ||
Not familiar | 159 | 41.8% |
Somewhat familiar | 141 | 37.1% |
Very familiar | 80 | 21.1% |
Have you interacted or utilized a chatbot or virtual assistant for healthcare related queries or patient support in the past? | ||
Yes | 173 | 45.5% |
No | 207 | 54.5% |
If yes, in what % of your practice do you use AI powered chatbots like ChatGPT? | ||
0-20% | 247 | 65% |
20-40% | 48 | 12.6% |
40-60% | 42 | 11% |
60-80% | 28 | 7.4% |
80-100% | 15 | 3.9% |
Have you participated in any conferences, workshops or educational sessions that discuss the role of AI in healthcare? | ||
Yes, I have attended as an observer | 160 | 42.1% |
No, I have not attended any such events | 220 | 57.9% |
In your opinion, what is the potential influence of AI chatbots on enhancing healthcare accessibility and delivery in India, particularly for underserved populations? | ||
Negative impact | 34 | 8.9% |
Positive impact | 171 | 45% |
No significant effect | 67 | 17.6% |
Not sure | 108 | 28.4% |
Are you acquainted with any particular instances or implementations where AI powered chatbots are utilized in healthcare, beyond their general role in information retrieval? | ||
Yes, I am familiar with specific uses | 107 | 28.2% |
I have a general idea but would like to know more | 142 | 37.4% |
No, I am not aware of specific uses | 131 | 34.4% |
How confident are you in the accuracy and reliability of AI powered chatbots like ChatGPT in providing medical information or advice? | ||
Very confident | 79 | 20.8% |
Somewhat confident | 175 | 46.1% |
Not confident at all | 126 | 33.2% |
Do you have any apprehensions regarding potential ethical issues related to the use of AI powered chatbots in healthcare, such as patient privacy or data security? | ||
Yes, I am concerned about ethical issues | 122 | 32.1% |
I have some concerns but they are not major | 93 | 24.5% |
I am unaware about ChatGPT | 100 | 26.3% |
No, I do not have any concerns | 65 | 17.1% |
Would you be interested in participating in training programs or workshops specifically designed to educate doctors about AI powered chatbots and their applications in healthcare? | ||
Yes, I would be interested in participating | 244 | 64.2% |
No, I am not interested at this time | 136 | 35.8% |
Are you aware of any ongoing research or studies evaluating the effectiveness and impact of AI powered chatbots in healthcare? | ||
I have heard of such studies but do not have detailed information | 150 | 39.5% |
No, I am not aware of the applications of ChatGPT in this scenario | 155 | 40.8% |
Yes, I am aware of specific research initiatives | 75 | 19.7% |
Are you aware of any government initiatives or policies in India that encourage the use of AI or chatbot technology in healthcare delivery? | ||
Yes | 127 | 33.4% |
No | 253 | 66.6% |
Table 2: Association between sex, age, medical qualification and years of experience and extent of familiarity with ChatGPT
Variable | Not familiar | Somewhat familiar | Very familiar | Total | p-value |
Sex | |||||
Female | 60 | 74 | 41 | 175 | 0.004 |
Male | 99 | 67 | 39 | 205 | |
Age Group | |||||
Under 25 | 18 | 42 | 23 | 83 | 0.0002 |
25-34 | 28 | 28 | 19 | 75 | |
35-44 | 52 | 41 | 24 | 117 | |
45-54 | 27 | 19 | 6 | 52 |
55 and above | 34 | 11 | 8 | 53 | |
Medical Qualification | |||||
BAMS | 23 | 10 | 10 | 43 | 0.003 |
BDS/MDS | 17 | 8 | 5 | 30 | |
BHMS | 40 | 23 | 12 | 75 | |
MBBS/MD/MS or Equivalent | 44 | 72 | 27 | 143 | |
OTHER | 17 | 16 | 12 | 45 | |
UNANI | 18 | 12 | 14 | 44 | |
Years of Experience | |||||
0-5 | 40 | 66 | 33 | 139 | 0.0001 |
6 to 10 | 26 | 20 | 15 | 61 | |
11 to 15 | 24 | 24 | 15 | 63 | |
16-20 | 32 | 22 | 7 | 61 | |
21-25 | 28 | 4 | 7 | 39 | |
26-30 | 8 | 2 | 1 | 11 | |
>30 | 1 | 3 | 2 | 6 |
In this study, we conducted a comprehensive analysis of clinicians' perceptions of ChatGPT and other artificial intelligence tools in healthcare. Our findings indicate that 65.2% of clinicians were familiar with the term ChatGPT, while 45% held a positive opinion of AI chatbots. This suggests that familiarity with ChatGPT does not translate into trust in its use in clinical practice, a claim further supported by the finding that 65% of participants relied on artificial intelligence in less than 20% of their practice. A significant portion (32.1%) expressed concerns about the ethical implications of AI in healthcare. These concerns likely stem from issues related to patient privacy, misinformation regarding diseases and treatment, incorrect diagnoses, medicolegal issues, and the potential of AI to unduly influence decision-making. These ethical challenges emphasize the need for legislation, regulatory frameworks, and guidelines to ensure the responsible integration and implementation of AI in healthcare [10,11]. So far, ChatGPT and AI cannot replace humans; they can, however, help to support physicians and enhance efficiency [10]. AI is an extremely powerful tool that can handle enormous amounts of data. It can be a valuable aid in administrative tasks such as streamlining medical records, accessing relevant patient information with ease, reducing paperwork, organizing appointments, and saving costs [12]. It is also a useful tool for generating differential diagnoses; however, the accuracy of a physician in correctly diagnosing a patient remains unchallenged [13].
ChatGPT can be a decent tool for patients to access health information about diseases, but it provides only a generalized picture. As such, it could help users become more informed and encourage them to seek professional medical care in situations where they otherwise would not have done so [7,14]. It can also be a valuable avenue for improving public health literacy by making health information highly accessible and easy to understand [15].
Despite these limitations, ChatGPT shows great promise in medical education and research. It has demonstrated the ability to pass various medical licensing examinations at the level expected of doctors at that stage of training [15,16,17]. ChatGPT shines when asked to simplify and explain complex medical concepts in an accessible manner [18,19]. With regard to applications in research, it could be used to rapidly generate summaries and to enhance language clarity and flow, thereby facilitating better understanding [20]. This, however, poses some issues: concerns about originality of work and plagiarism arise with the use of ChatGPT in research, and incorrect information and implausible statements could negatively impact the quality of research [21,22].
This study has certain limitations. Firstly, the sample obtained through the random sampling technique is relatively small, so the data cannot be generalized at the national or international level. Secondly, the majority of participants had less than 5 years of clinical experience, which may have influenced their perception of AI, skewing the results toward a more positive outlook.
The integration of artificial intelligence into healthcare is inevitable, and advances in this field cross new boundaries each day. Provided that a defined and standardized protocol addressing the ethical concerns is in place, AI could be an excellent resource that augments the work of physicians and makes a significant impact on efficiency. However, considerable progress remains to be made, and AI cannot yet fully replace the role of a physician. Incorporating AI training into medical education is the need of the hour, so that future physicians are better prepared to deal with emerging technologies, integrate AI-driven tools into clinical practice, and navigate the ethical and practical challenges associated with AI in healthcare.
1. Lin Z. Why and how to embrace AI such as ChatGPT in your academic life [Internet]. PsyArXiv. 2023. Available from: http://dx.doi.org/10.31234/osf.io/sdx3j
2. Ge J, Lai JC. Artificial intelligence-based text generators in hepatology: ChatGPT is just the beginning. Hepatol Commun [Internet]. 2023;7(4). Available from: http://dx.doi.org/10.1097/HC9.0000000000000097
3. Cascella M, Montomoli J, Bellini V, Bignami E. Evaluating the feasibility of ChatGPT in healthcare: An analysis of multiple clinical and research scenarios. J Med Syst [Internet]. 2023;47(1):33. Available from: http://dx.doi.org/10.1007/s10916-023-01925-4
4. Hirosawa T, Harada Y, Yokose M, Sakamoto T, Kawamura R, Shimizu T. Diagnostic accuracy of differential-diagnosis lists generated by generative pretrained transformer 3 chatbot for clinical vignettes with common chief complaints: A pilot study. Int J Environ Res Public Health [Internet]. 2023;20(4). Available from: http://dx.doi.org/10.3390/ijerph20043378
5. Sallam M, Salim N, Barakat M, Al-Tammemi A. ChatGPT applications in medical, dental, pharmacy, and public health education: A descriptive study highlighting the advantages and limitations. Narra J [Internet]. 2023;3(1):e103. Available from: http://dx.doi.org/10.52225/narra.v3i1.103
6. Grunhut J, Wyatt AT, Marques O. Educating future physicians in Artificial Intelligence (AI): An integrative review and proposed changes. J Med Educ Curric Dev [Internet]. 2021;8:23821205211036836. Available from: http://dx.doi.org/10.1177/23821205211036836
7. Rao A, Pang M, Kim J, Kamineni M, Lie W, Prasad AK, et al. Assessing the utility of ChatGPT throughout the entire clinical workflow. medRxiv [Internet]. 2023; Available from: http://dx.doi.org/10.1101/2023.02.21.23285886
8. Baumgartner C. The potential impact of ChatGPT in clinical and translational medicine. Clin Transl Med [Internet]. 2023;13(3):e1206. Available from: http://dx.doi.org/10.1002/ctm2.1206
9. Grunhut J, Marques O, Wyatt ATM. Needs, challenges, and applications of artificial intelligence in medical education curriculum. JMIR Med Educ [Internet]. 2022;8(2):e35587. Available from: http://dx.doi.org/10.2196/35587
10. Sallam M. ChatGPT utility in healthcare education, research, and practice: Systematic review on the promising perspectives and valid concerns. Healthcare (Basel) [Internet]. 2023;11(6):887. Available from: https://doi.org/10.3390/healthcare11060887
11. Korngiebel DM, Mooney SD. Considering the possibilities and pitfalls of Generative Pre-trained Transformer 3 (GPT-3) in healthcare delivery. NPJ Digit Med [Internet]. 2021;4(1):93. Available from: https://doi.org/10.1038/s41746-021-00464-x
12. Xu L, Sanders L, Li K, Chow JCL. Chatbot for health care and oncology applications using artificial intelligence and machine learning: Systematic review. JMIR Cancer [Internet]. 2021;7(4):e27850. Available from: https://doi.org/10.2196/27850
13. Hirosawa T, Harada Y, Yokose M, Sakamoto T, Kawamura R, Shimizu T. Diagnostic accuracy of differential-diagnosis lists generated by generative pretrained transformer 3 chatbot for clinical vignettes with common chief complaints: A pilot study. Int J Environ Res Public Health [Internet]. 2023;20(4):3378. Available from: https://doi.org/10.3390/ijerph20043378
14. Yeo YH, Samaan JS, Ng WH, Ting PS, Trivedi H, Vipani A, et al. Assessing the performance of ChatGPT in answering questions regarding cirrhosis and hepatocellular carcinoma. Clin Mol Hepatol [Internet]. 2023;29(3):721-32. Available from: https://doi.org/10.3350/cmh.2023.0089
15. Kung TH, Cheatham M, Medenilla A, Sillos C, De Leon L, Elepaño C, et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digit Health [Internet]. 2023;2(2):e0000198. Available from: https://doi.org/10.1371/journal.pdig.0000198
16. Gilson A, Safranek CW, Huang T, Socrates V, Chi L, Taylor RA, et al. How does ChatGPT perform on the United States Medical Licensing Examination (USMLE)? The implications of large language models for medical education and knowledge assessment. JMIR Med Educ [Internet]. 2023;9:e45312. Available from: https://doi.org/10.2196/45312
17. Huh S. Are ChatGPT’s knowledge and interpretation ability comparable to those of medical students in Korea for taking a parasitology examination?: A descriptive study. J Educ Eval Health Prof [Internet]. 2023;20:1. Available from: https://doi.org/10.3352/jeehp.2023.20.1
18. Xu T, Weng H, Liu F, Yang L, Luo Y, Ding Z, et al. Current status of ChatGPT use in medical Education: Potentials, challenges, and strategies. Journal of Medical Internet Research [Internet]. 2024 Jun 29;26:e57896. Available from: https://www.jmir.org/2024/1/e57896/
19. Ahmed Y. Utilization of ChatGPT in medical education: Applications and implications for curriculum enhancement. Acta Inform Med [Internet]. 2023;31(4):300-5. Available from: https://doi.org/10.5455/aim.2023.31.300-305
20. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature [Internet]. 2023;613(7945):612. Available from: https://doi.org/10.1038/d41586-023-00191-1
21. Thorp HH. ChatGPT is fun, but not an author. Science [Internet]. 2023;379(6630):313. Available from: https://doi.org/10.1126/science.adg7879
22. Kitamura FC. ChatGPT is shaping the future of medical writing but still requires human judgment. Radiology [Internet]. 2023;307(2):e230171. Available from: https://doi.org/10.1148/radiol.230171