Editorial Open Access
Copyright ©The Author(s) 2025. Published by Baishideng Publishing Group Inc. All rights reserved.
World J Diabetes. Mar 15, 2025; 16(3): 98408
Published online Mar 15, 2025. doi: 10.4239/wjd.v16.i3.98408
Prospects and perils of ChatGPT in diabetes
Gumpeny R Sridhar, Department of Endocrinology and Diabetes, Endocrine and Diabetes Centre, Visakhapatnam 530002, Andhra Pradesh, India
Lakshmi Gumpeny, Department of Internal Medicine, Gayatri Vidya Parishad Institute of Healthcare & Medical Technology, Visakhapatnam 530048, Andhra Pradesh, India
ORCID number: Gumpeny R Sridhar (0000-0002-7446-1251); Lakshmi Gumpeny (0000-0002-1368-745X).
Author contributions: Sridhar GR conceived the study and outlined a draft of the manuscript; Gumpeny L contributed to the writing and editing of the manuscript.
Conflict-of-interest statement: The authors have no conflicts of interest to declare.
Open Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: https://creativecommons.org/Licenses/by-nc/4.0/
Corresponding author: Gumpeny R Sridhar, Department of Endocrinology and Diabetes, Endocrine and Diabetes Centre, 15-12-15 Krishnanagar, Visakhapatnam 530002, Andhra Pradesh, India. sridharvizag@gmail.com
Received: June 25, 2024
Revised: November 5, 2024
Accepted: December 3, 2024
Published online: March 15, 2025
Processing time: 209 Days and 18.1 Hours

Abstract

ChatGPT, a popular large language model developed by OpenAI, has the potential to transform the management of diabetes mellitus. It is a conversational artificial intelligence model trained on extensive datasets that are not specifically health-related. Its core components include neural networks and machine learning. Because the current model has not been trained on diabetes-specific datasets, it carries limitations such as the risk of inaccuracies and the need for human supervision. Nevertheless, it has the potential to aid in patient engagement, medical education, and clinical decision support. In diabetes management, it can contribute to patient education, personalized dietary guidance, and emotional support. Specifically, it is being tested in clinical scenarios such as the assessment of obesity, screening for diabetic retinopathy, and the provision of guidelines for the management of diabetic ketoacidosis. Ethical and legal considerations must be addressed before ChatGPT can be integrated into healthcare. Potential concerns relate to data privacy, the accuracy of responses, and maintenance of the patient-doctor relationship. Ultimately, while ChatGPT and large language models hold immense potential to revolutionize diabetes care, their limitations, ethical implications, and the need for human supervision must be weighed. Their integration promises a future of proactive, personalized, and patient-centric care in diabetes management.

Key Words: Large language models; Hallucinations; Ethics; Queries; Accuracy; Scholar GPT; ChatGPT medical professional edition

Core Tip: ChatGPT, a large language model, was released to the public in late 2022. Its popularity lies in queries posed as conversation, eliciting human-like responses. Although the publicly available version is not trained on domain-specific datasets (e.g., diabetes), it is increasingly being used in diabetes care. Once its accuracy improves, it promises to change the way diabetes is managed by aiding patient education, providing dietary guidance, and offering emotional support. In addition, it has been used to assess a number of clinical situations and offer guidance in these areas.



INTRODUCTION

ChatGPT is an artificial intelligence (AI) prototype developed by OpenAI. Released in November 2022, it is a conversational model that produces human-like responses to questions posed in natural language[1,2]. It is being utilized in a variety of fields owing to its user-friendly interface. ChatGPT-3 was trained on a diverse dataset of about 500 billion tokens. It is important to bear in mind that ChatGPT versions 3 and 4 were trained on general text, not specifically health-related sources. Therefore, the responses may not always be accurate, a caveat that must be kept in mind. Despite these limitations, ChatGPT can be used in healthcare settings in the areas of patient engagement, medical education, and clinical decision support[1]. AI encompasses multiple fields and is defined as a computer’s ability to perform tasks requiring human-like cognition, or as a machine that simulates complex human thinking[2].

DEVELOPMENT OF CHATGPT

At its core, ChatGPT is built on natural language processing, machine learning, and deep learning, with neural network models as its main component. Neural networks are mathematical models, implemented as computer programs, that mimic the functioning of the human brain. Machine learning can be considered an advanced statistical approach to uncovering relationships among various parameters. It utilizes algorithms such as linear regression for predicting relationships between variables, and classification algorithms such as support vector machines and random forests for categorizing data. Because of the way it is trained, ChatGPT does not possess the ability to understand the broader context of a question, which can result in errors or inconsistencies. These limitations are particularly important when dealing with questions of fact or logic. They can be overcome to a certain extent by incorporating available medical knowledge bases (e.g., the Unified Medical Language System)[3].
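To make the "advanced statistics" view of machine learning concrete, the sketch below fits an ordinary least-squares regression line from scratch in Python. This is an illustration only, not part of ChatGPT: the function name is our own, and the data pairs are synthetic values generated from the familiar linear relationship between HbA1c and estimated mean glucose.

```python
# Minimal illustration of machine learning as advanced statistics:
# ordinary least-squares linear regression fitted from scratch.
# The data below are synthetic and for illustration only.

def fit_linear_regression(xs, ys):
    """Return (slope, intercept) minimizing the squared prediction error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Synthetic pairs of HbA1c (%) and estimated mean glucose (mg/dL),
# generated from the well-known linear relationship between the two.
hba1c = [5.0, 6.0, 7.0, 8.0, 9.0]
glucose = [97.0, 126.0, 154.0, 183.0, 212.0]

slope, intercept = fit_linear_regression(hba1c, glucose)
predicted = slope * 10.0 + intercept  # predicted mean glucose at HbA1c 10%
```

Classification algorithms such as support vector machines and random forests follow the same pattern, fitting parameters to data, but predict categories rather than continuous values.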

APPLICATIONS OF LARGE LANGUAGE MODELS SUCH AS CHATGPT IN THE MEDICAL FIELD

Large language models (LLMs) have a long history. As early as the 1950s, Shannon[4] applied information theory to human language to evaluate how well simple n-gram language models could predict or compress natural language text. Since then, statistical language modeling has become the foundation for a range of tasks, from speech recognition and machine translation to information retrieval. Popular LLMs are broadly classified by architecture (encoder-only, decoder-only, and encoder-decoder) and by family (the GPT, LLaMA, and PaLM families). ChatGPT belongs to the GPT family. The early versions of ChatGPT were openly accessible, unlike recent releases that require access through an application programming interface. In contrast to other digital tools such as stand-alone apps or telemedicine platforms, ChatGPT has the advantage of being platform-independent. Questions posed in natural language elicit responses in a conversational format. It can complement existing digital tools through integration with them, leveraging the strengths of both.
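The n-gram models Shannon studied can be illustrated with a toy bigram predictor: count which word follows which, then predict the most frequent continuation. The training sentence and function names below are invented for illustration; real language models operate on billions of tokens rather than a single sentence.

```python
from collections import Counter, defaultdict

# A toy bigram model in the spirit of Shannon's n-gram experiments:
# estimate P(next word | current word) from counts, then predict the
# most likely continuation. The training text is a made-up example.

def train_bigrams(words):
    """Map each word to a Counter of the words that follow it."""
    follow = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follow[current][nxt] += 1
    return follow

def predict_next(follow, word):
    """Return the most frequent continuation of `word`, or None if unseen."""
    if word not in follow:
        return None
    return follow[word].most_common(1)[0][0]

text = ("the patient checked blood glucose and the patient took insulin "
        "and the patient checked blood pressure").split()
model = train_bigrams(text)
```

Modern LLMs replace these raw counts with neural networks that learn continuations from context windows of thousands of tokens, but the underlying task, predicting the next token, is the same.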

ChatGPT has the potential to help patients and clinicians in many areas: providing reminders to take medications, responding to questions related to their disease, and referring them to appropriate healthcare services[5]. Additionally, it enables the analysis of unstructured clinical data and the identification of patterns that may be missed by clinicians. It could eventually also aid in the development of personalized treatment plans. In the broad areas of biomedicine and health, LLMs such as ChatGPT find applications in retrieving and extracting information, summarizing text, and medical education[6]. If its integration into the practice of medicine is safe and transparent, it can enhance the quality of care and lead to favorable outcomes[7].

To date, ChatGPT has chiefly been studied in the context of medical education, research, and consultation. Published studies have not reported on its deployment in clinical settings; therefore, ethical issues are bound to arise because the available evidence is only experimental[2]. Compared with existing digital health tools, ChatGPT has natural language processing ability: queries posed in conversational language generate responses without the need for restrictive query terms. ChatGPT can be expected to be integrated with future digital tools, including sources of sensor data.

The performance of ChatGPT has been compared with that of other LLMs such as Microsoft Bing and Google Bard. Although all three models provide satisfactory responses in different domains, further studies are necessary before they can be clinically useful[8]. ChatGPT outperformed Google Bard in providing accurate and comprehensive responses to myopia-related queries[9]. In the field of dentistry, although ChatGPT-4.0 outperformed Bing Chat and Bard, the results were not accurate enough for routine clinical use[10]. In palliative care, although the responses of Bard, Copilot, Perplexity, ChatGPT, and Gemini were above the recommended levels in terms of text readability, their scores on text content quality assessment were low[11]. In diabetes, comparative studies of ChatGPT with other LLMs are not available; most publications have been reviews, case reports, news reports, or essays[12].

APPLICATIONS IN DIABETES MELLITUS
Patient education

Patient education, an integral part of clinical care, promotes self-management of diabetes. It improves glucose control and quality of life and reduces diabetes complications. Traditional methods of patient education have had limited success. LLMs and AI have the potential to make a direct impact because they are based on language patterns and are therefore accessible and popular. Sng et al[13] used ChatGPT to answer questions on four domains of diabetes self-management: Diet and exercise, hypoglycemia and hyperglycemia education, insulin storage, and administration. ChatGPT answered all questions with clear, concise, well-organized, and comprehensible instructions. Inaccuracies were observed in a few situations, and it was inflexible in certain others, requiring further queries to obtain proper answers. The authors concluded that ChatGPT could help ease the burden of dispensing routine, basic diabetes patient education. The potential benefits of LLMs in patient education include their ability to provide individualized advice and support outside the healthcare setting, independent of time and location[14]. However, in a Turing test-inspired survey, participants were able to distinguish ChatGPT-generated responses from those written by humans better than chance. ChatGPT needs further development before it can be confidently integrated into routine clinical care[15].

Dietary guidelines

Subjects with diabetes can be given personalized dietary guidelines through analysis of their dietary preferences, past glycemic control, and the dietary advice of national societies. Real-time feedback is also possible when ChatGPT is integrated with continuous glucose monitoring systems[16]. The potential application of an AI-based nutritional program using large language and image recognition models has been reported: Sun et al[17] used such a model to identify ingredients from images of a patient’s meal and generate guidance and recommendations. This was presented as a user-friendly app that integrated language and image recognition to assist in the nutritional management of subjects with type 2 diabetes mellitus.
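Before an LLM could give the real-time feedback described above, the raw sensor stream would typically be summarized into clinically meaningful figures. The sketch below is a hypothetical illustration of that pre-processing step, not any product's actual interface: the readings are invented, and the function and variable names are our own. The 70-180 mg/dL window is the commonly used time-in-range target.

```python
# Sketch of pre-processing a CGM trace into a summary that an
# LLM-based assistant could turn into dietary feedback.
# Readings are hypothetical; the 70-180 mg/dL range is the
# commonly cited time-in-range target.

HYPO_LIMIT = 70    # mg/dL, below this counts as hypoglycemia
HYPER_LIMIT = 180  # mg/dL, above this counts as hyperglycemia

def summarize_cgm(readings_mg_dl):
    """Condense a list of glucose readings into time-in-range and hypo counts."""
    in_range = sum(HYPO_LIMIT <= g <= HYPER_LIMIT for g in readings_mg_dl)
    hypo_events = sum(g < HYPO_LIMIT for g in readings_mg_dl)
    return {
        "time_in_range_pct": round(100 * in_range / len(readings_mg_dl), 1),
        "hypo_events": hypo_events,
    }

readings = [95, 110, 160, 210, 185, 140, 65, 80, 120, 150]
summary = summarize_cgm(readings)
```

A structured summary like this, rather than hundreds of raw values, is the kind of input that keeps an LLM's dietary suggestions anchored to the patient's actual glycemic pattern.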

Provision of emotional support

LLMs can offer emotional support with knowledge and foresight[18]. Since psychological stress is common in subjects with diabetes mellitus[19], LLMs can play a potential role, building on their applications in primary mental health care[20]. Proof-of-concept studies have shown that psychological outcomes can be predicted using neural network models[21]. The ways in which ChatGPT can be engaged in patient care continue to evolve, and one can expect many innovative uses to emerge.

CHATGPT UTILITY IN CLINICAL SCENARIOS
Assessment of obesity

ChatGPT allows access to information about obesity in type 2 diabetes mellitus. When the credibility of the LLM was evaluated, ChatGPT was 100% compatible with the guidelines for the assessment of obesity in type 2 diabetes mellitus[22]. Compatibility was lower for treatment aspects such as nutrition, medications, and surgical procedures for weight loss. In other areas, ChatGPT required more prompts to provide complete information. Although they hold promise, LLMs cannot replace human interaction in a patient-centric approach[22].

Use in screening for diabetic retinopathy

LLMs can be integrated into a well-designed screening program for diabetic retinopathy (DR). Automation of DR assessment has proceeded stepwise; the turning point came with the deployment of deep learning for classifying fundus images to identify DR. The United States Food and Drug Administration has approved a DR screening camera, which is available on the market. ChatGPT can be used to help patients decide whether a medical consultation is necessary, schedule an appointment, and educate themselves about various aspects of DR. Healthcare providers can use it to create medical charts, referral letters, and discharge summaries. A comprehensive system integrated into a well-designed screening program is essential[23]. Gopalakrishnan et al[24] reported that such a system could play a crucial role in identifying DR in newly diagnosed diabetes subjects. However, further validation is required before it is applied in clinical encounters.

Adaptation of guidelines for diabetic ketoacidosis from different sources

Diabetic ketoacidosis is a medical emergency that requires prompt action. Guidelines for its management provided by different scientific societies may differ in some respects. ChatGPT was employed to synthesize the Diabetes Canada Clinical Practice Guidelines, guidance on the emergency management of hyperglycemia in primary care, and the Joint British Diabetes Societies guidelines[25]. Although it was able to generate a table comparing these guidelines, there were errors and inconsistencies, reflecting the limitations of the current version of ChatGPT. With future refinements, it may provide practical advice.

Once trained on larger and more specialized databases[26], LLMs such as ChatGPT can help in drafting scientific publications. Even so, they remain complementary to, rather than a replacement for, human expertise[7]. In addition, ChatGPT is useful in rural areas and in low- and middle-income countries with limited access to medical services. Integration with telehealth platforms can reduce the burden on healthcare professionals while improving access to medical information on prevention and primary health care, and giving them access to decision support systems[27].

What is the likely impact of ChatGPT on healthcare dynamics such as the doctor-patient relationship? Temsah et al[28] assessed the effect of ChatGPT on teleconsultation practice. Positive changes were reported in informational support, assistance with diagnosis, communication, efficiency, personalization of care, multilingual support, decision making, documentation, and continuing education. These were offset by misdiagnosis, results compromised by limited medical context, increased dependency on technology, and ethical, legal, security, and privacy issues. The economic impact of ChatGPT use on diabetes outcomes also needs to be studied: it is plausible that gains would result from a shift to a healthcare delivery model that is accurate and timely, independent of place or time of day, but real-world studies are required.

CAUTION AND DRAWBACKS WITH CHATGPT

The enticing convenience of ChatGPT must not mask its limitations for use in diabetes mellitus. Because it is trained to statistically analyze publicly available data, there is a danger of it providing information that is inaccurate, if not completely untrue. Therefore, medical supervision should ideally be available before patients act on advice provided by ChatGPT. Even physicians may come to rely increasingly on results from ChatGPT, which can appear glib and even authoritative; this must be guarded against. Misinformation fed into the training data may overshadow genuine scientific evidence, creating an echo chamber in which the misinformation is repeated in future responses. Ultimately, information provided by ChatGPT must be validated by a subject expert before it is acted upon.

It bears reiteration that publicly available versions of ChatGPT are not trained on data specific to diabetes mellitus. Despite their facile, human-like responses, they are based on the analysis of billions of terms to produce an answer without truly understanding logic or context. Sometimes the responses are untrue, a phenomenon termed hallucination. Therefore, a specialist’s background knowledge is required to corroborate the responses given by ChatGPT. Improvement requires access to accurate data and the context of that knowledge, along with consideration of ethical and legal aspects[29]. Posing the same questions at different times may yield different answers, because the model cannot distinguish genuine from fake information. It is essential that domain-specific versions be trained on specialized knowledge areas to improve reliability.

The challenges that need to be addressed include access to comprehensive data, to reduce bias and thus mitigate ethical issues, followed by technical solutions for integrating the system into routine clinical workflow[30]. Much of the published information is based on reviews, case reports, news items, or essays; there has been only one meta-analysis and no randomized clinical trials[12]. Of particular importance is the need for meticulous care in using chatbots embedded in online health portals, which can bypass professional advice[31]. Another interesting aspect of LLMs is their use in evaluating ‘soft data’ such as social determinants of health[32]. Similarly, there is an interesting possibility of generating electronic medical record phenotyping algorithms[33].

ETHICAL AND LEGAL ASPECTS

Despite the many benefits of ChatGPT, ethical and legal regulatory frameworks are necessary[15]. Of foremost concern are data privacy and security in processing patient information. Ultimately, the employment of LLMs must rest on a fine balance between technological assistance and human judgement, without supplanting the patient-doctor relationship. In the development of LLMs, a critical issue is informed consent for the use of data beyond the purpose for which they were originally collected, which involves privacy, ethics, and legality. O’Brien[34] categorized biobank ethics as allowing openness for scientific inquiry, provisions for unanticipated use, the intellectual investment of its creation, and protection of proprietary information. Privacy breaches must be precluded. Secure patient information is expected to remain only with the physician; when it is shared with others, it must be within a framework of robust anonymization measures. Currently available protocols include public key cryptography, multiparty computation, homomorphic encryption, k-anonymity, l-diversity, t-closeness, and clustering-based k-anonymity[35]. All of these require substantial computational power. Recently, Shaik et al[36] updated the Data Information Knowledge Wisdom model, which involves a hierarchical process of collecting data from many sources, which are then analyzed and interpreted to reach first the information level and then the knowledge level.
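Of the anonymization criteria listed above, k-anonymity is the simplest to illustrate: a released table is k-anonymous if every combination of quasi-identifiers (attributes that could be linked to external data) appears in at least k records. The sketch below checks this property on synthetic records; the field names and values are invented for illustration.

```python
from collections import Counter

# Minimal illustration of the k-anonymity criterion: a table is
# k-anonymous if every combination of quasi-identifier values
# (here an age band and a postcode prefix) occurs at least k times.
# All records below are synthetic.

def is_k_anonymous(records, quasi_identifiers, k):
    """True if every quasi-identifier combination appears >= k times."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in combos.values())

records = [
    {"age_band": "40-49", "postcode": "530", "hba1c": 7.2},
    {"age_band": "40-49", "postcode": "530", "hba1c": 8.1},
    {"age_band": "50-59", "postcode": "530", "hba1c": 6.9},
    {"age_band": "50-59", "postcode": "530", "hba1c": 7.8},
]
```

Here each (age band, postcode) pair occurs twice, so the table is 2-anonymous but not 3-anonymous. l-diversity and t-closeness refine this idea by also constraining the sensitive values (here HbA1c) within each group.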

All of these presuppose careful consideration of ethics, ensuring that algorithmic bias is addressed and that equity of access to AI and representative geographical and ethnic sources of data are guaranteed. Reliability and safety entail collaboration among government, industry, and academia to ensure the reliability of the data used to train AI systems. There is a gray zone concerning the legal responsibility of technology developers and of users (doctors). The tools must be validated by testing and by measurements of dependability, performance, ethical compliance, and safety[37]. The data on which algorithms are trained could introduce bias, which must be avoided: ‘responsible AI systems’ must be ‘transparent, explainable and accountable’[38]. Explainability of AI output is a contentious issue. A trade-off is necessary between explainability and accuracy, and it is imperative to involve the public in making such decisions. Some argue that validity is more important than explainability, but critics point out that bias may be built into the models used to validate an outcome[39].

OTHER ISSUES AWAITING RESOLUTION

ChatGPT is being refined to target specific fields of knowledge. One can expect models trained on diabetes-related data, considering the prevalence of the disease and the potential benefit such models offer. Low-hanging fruit includes the analysis of sensor data such as continuous glucose monitoring. Early studies show that it may eventually be useful in providing evidence-based advice for both physicians and patients. By extension, this would help in the development of predictive analytics to prevent diabetic complications. Exploratory studies have been conducted in the management of DR. At the next stage, voice input and responses in local languages would make ChatGPT vastly more useful. Beyond physicians and patients, the development of a model that fits into the clinical workflow requires the collaboration of a multidisciplinary team: providing solutions to clinical problems requires the close cooperation of clinicians, informaticians, data scientists, and software engineers. It is important to note that the integration of data science solutions into clinical practice is often iterative, requiring multiple feedback loops between the development team and the end-user[40].

Studies of ChatGPT in patient education and in guidance on diet composition and habits suggest that it is potentially useful, but further refinements are necessary before it can enter mainstream medical care[41,42]. Practical solutions to the issues raised above are technically feasible, but real-world translation is beset with roadblocks such as financial, logistic, and infrastructure constraints. These are likely to be worsened by the exclusion of training data from underdeveloped and developing nations that most need the technology. Despite the caveats, ChatGPT is likely to revolutionize the medical education of healthcare providers and to improve access to relevant, readily available information for the general public as well as physicians. Solutions have been proposed to address the explainability problem of black-box AI: Imans et al[43] described a dynamic ensemble framework that identified depression and graded its severity while allowing the physician to understand the model’s decisions.

CONCLUSION

Generative AI such as ChatGPT promises to combine the power of AI with the particular needs of managing chronic diseases such as diabetes mellitus. This requires multifaceted integration: Patient engagement, support for physicians, and personalized treatment. It signals a transformative era in healthcare, providing care that is personalized, efficient, and patient-centered. Future developments require navigating challenges such as data privacy, the quality of AI recommendations, and ethical and legal aspects. Standing at the cusp of a technological revolution, one can expect it to deliver proactive, personalized, and patient-centric care in diabetes.

ACKNOWLEDGEMENTS

We thank Mr. Venkat Yarabati for assistance in the preparation of this manuscript.

Footnotes

Provenance and peer review: Invited article; Externally peer reviewed.

Peer-review model: Single blind

Specialty type: Endocrinology and metabolism

Country of origin: India

Peer-review report’s classification

Scientific Quality: Grade B, Grade D

Novelty: Grade A, Grade C

Creativity or Innovation: Grade B, Grade C

Scientific Significance: Grade B, Grade C

P-Reviewer: Cai ST; Zhang ZQ S-Editor: Wei YF L-Editor: Filipodia P-Editor: Xu ZH

References
1. Briganti G. How ChatGPT works: a mini review. Eur Arch Otorhinolaryngol. 2024;281:1565-1569.
2. Li J, Dada A, Puladi B, Kleesiek J, Egger J. ChatGPT in healthcare: A taxonomy and systematic review. Comput Methods Programs Biomed. 2024;245:108013.
3. Su P, Vijay-Shanker K. Investigation of improving the pre-training and fine-tuning of BERT model for biomedical relation extraction. BMC Bioinformatics. 2022;23:120.
4. Shannon CE. Prediction and Entropy of Printed English. Bell Syst Tech J. 1951;30:50-64.
5. Souza LLD, Fonseca FP, Martins MD, de Almeida OP, Pontes HAR, Coracin FL, Lopes MA, Khurram SA, Santos-Silva AR, Hagag A, Vargas PA. ChatGPT and medicine: a potential threat to science or a step towards the future? J Med Artif Intell. 2023;6:19.
6. Tian S, Jin Q, Yeganova L, Lai PT, Zhu Q, Chen X, Yang Y, Chen Q, Kim W, Comeau DC, Islamaj R, Kapoor A, Gao X, Lu Z. Opportunities and challenges for ChatGPT and large language models in biomedicine and health. Brief Bioinform. 2023;25:bbad493.
7. Jeyaraman M, Ramasubramanian S, Balaji S, Jeyaraman N, Nallakumarasamy A, Sharma S. ChatGPT in action: Harnessing artificial intelligence potential and addressing ethical challenges in medicine, education, and scientific research. World J Methodol. 2023;13:170-178.
8. Lee Y, Shin T, Tessier L, Javidan A, Jung J, Hong D, Strong AT, McKechnie T, Malone S, Jin D, Kroh M, Dang JT; ASMBS Artificial Intelligence and Digital Surgery Task Force. Harnessing artificial intelligence in bariatric surgery: comparative analysis of ChatGPT-4, Bing, and Bard in generating clinician-level bariatric surgery recommendations. Surg Obes Relat Dis. 2024;20:603-608.
9. Lim ZW, Pushpanathan K, Yew SME, Lai Y, Sun CH, Lam JSH, Chen DZ, Goh JHL, Tan MCJ, Sheng B, Cheng CY, Koh VTC, Tham YC. Benchmarking large language models' performances for myopia care: a comparative analysis of ChatGPT-3.5, ChatGPT-4.0, and Google Bard. EBioMedicine. 2023;95:104770.
10. Giannakopoulos K, Kavadella A, Aaqel Salim A, Stamatopoulos V, Kaklamanos EG. Evaluation of the Performance of Generative AI Large Language Models ChatGPT, Google Bard, and Microsoft Bing Chat in Supporting Evidence-Based Dentistry: Comparative Mixed Methods Study. J Med Internet Res. 2023;25:e51580.
11. Hancı V, Ergün B, Gül Ş, Uzun Ö, Erdemir İ, Hancı FB. Assessment of readability, reliability, and quality of ChatGPT®, BARD®, Gemini®, Copilot®, Perplexity® responses on palliative care. Medicine (Baltimore). 2024;103:e39305.
12. Gödde D, Nöhl S, Wolf C, Rupert Y, Rimkus L, Ehlers J, Breuckmann F, Sellmann T. A SWOT (Strengths, Weaknesses, Opportunities, and Threats) Analysis of ChatGPT in the Medical Literature: Concise Review. J Med Internet Res. 2023;25:e49368.
13. Sng GGR, Tung JYM, Lim DYZ, Bee YM. Potential and Pitfalls of ChatGPT and Natural-Language Artificial Intelligence Models for Diabetes Education. Diabetes Care. 2023;46:e103-e105.
14. Zheng Y, Wu Y, Feng B, Wang L, Kang K, Zhao A. Enhancing Diabetes Self-management and Education: A Critical Analysis of ChatGPT's Role. Ann Biomed Eng. 2024;52:741-744.
15. Hulman A, Dollerup OL, Mortensen JF, Fenech ME, Norman K, Støvring H, Hansen TK. ChatGPT- versus human-generated answers to frequently asked questions about diabetes: A Turing test-inspired survey among employees of a Danish diabetes center. PLoS One. 2023;18:e0290773.
16. Dey AK. ChatGPT in Diabetes Care: An Overview of the Evolution and Potential of Generative Artificial Intelligence Model Like ChatGPT in Augmenting Clinical and Patient Outcomes in the Management of Diabetes. Int J Diabetes Technol. 2023;2:66-72.
17. Sun H, Zhang K, Lan W, Gu Q, Jiang G, Yang X, Qin W, Han D. An AI Dietitian for Type 2 Diabetes Mellitus Management Based on Large Language and Image Recognition Models: Preclinical Concept Validation Study. J Med Internet Res. 2023;25:e51300.
18. Huang J, Yeung AM, Kerr D, Klonoff DC. Using ChatGPT to Predict the Future of Diabetes Technology. J Diabetes Sci Technol. 2023;17:853-854.
19. Sridhar GR. On Psychology and Psychiatry in Diabetes. Indian J Endocrinol Metab. 2020;24:387-395.
20. Singh OP. Artificial intelligence in the era of ChatGPT - Opportunities and challenges in mental health care. Indian J Psychiatry. 2023;65:297-298.
21. Mohamed ES, Naqishbandi TA, Bukhari SAC, Rauf I, Sawrikar V, Hussain A. A hybrid mental health prediction model using Support Vector Machine, Multilayer Perceptron, and Random Forest algorithms. Healthc Analytics. 2023;3:100185.
22. Barlas T, Altinova AE, Akturk M, Toruner FB. Credibility of ChatGPT in the assessment of obesity in type 2 diabetes according to the guidelines. Int J Obes (Lond). 2024;48:271-275.
23. Kawasaki R. How Can Artificial Intelligence Be Implemented Effectively in Diabetic Retinopathy Screening in Japan? Medicina (Kaunas). 2024;60:243.
24. Gopalakrishnan N, Joshi A, Chhablani J, Yadav NK, Reddy NG, Rani PK, Pulipaka RS, Shetty R, Sinha S, Prabhu V, Venkatesh R. Recommendations for initial diabetic retinopathy screening of diabetic patients using large language model-based artificial intelligence in real-life case scenarios. Int J Retina Vitreous. 2024;10:11.
25. Hamed E, Eid A, Alberry M. Exploring ChatGPT's Potential in Facilitating Adaptation of Clinical Guidelines: A Case Study of Diabetic Ketoacidosis Guidelines. Cureus. 2023;15:e38784.
26. Chatterjee J, Dethlefs N. This new conversational AI model can be your friend, philosopher, and guide ... and even your worst enemy. Patterns (N Y). 2023;4:100676.
27. Ahmed SK, Hussein S, Aziz TA, Chakraborty S, Islam MR, Dhama K. The power of ChatGPT in revolutionizing rural healthcare delivery. Health Sci Rep. 2023;6:e1684.
28. Temsah MH, Aljamaan F, Malki KH, Alhasan K, Altamimi I, Aljarbou R, Bazuhair F, Alsubaihin A, Abdulmajeed N, Alshahrani FS, Temsah R, Alshahrani T, Al-Eyadhy L, Alkhateeb SM, Saddik B, Halwani R, Jamal A, Al-Tawfiq JA, Al-Eyadhy A. ChatGPT and the Future of Digital Health: A Study on Healthcare Workers' Perceptions and Expectations. Healthcare (Basel). 2023;11:1812.
29. Wang X, Sanders HM, Liu Y, Seang K, Tran BX, Atanasov AG, Qiu Y, Tang S, Car J, Wang YX, Wong TY, Tham YC, Chung KC. ChatGPT: promise and challenges for deployment in low- and middle-income countries. Lancet Reg Health West Pac. 2023;41:100905.
30. Alsadhan A, Al-Anezi F, Almohanna A, Alnaim N, Alzahrani H, Shinawi R, AboAlsamh H, Bakhshwain A, Alenazy M, Arif W, Alyousef S, Alhamidi S, Alghamdi A, AlShrayfi N, Rubaian NB, Alanzi T, AlSahli A, Alturki R, Herzallah N. The opportunities and challenges of adopting ChatGPT in medical research. Front Med (Lausanne). 2023;10:1259640.
30.  Alsadhan A, Al-Anezi F, Almohanna A, Alnaim N, Alzahrani H, Shinawi R, AboAlsamh H, Bakhshwain A, Alenazy M, Arif W, Alyousef S, Alhamidi S, Alghamdi A, AlShrayfi N, Rubaian NB, Alanzi T, AlSahli A, Alturki R, Herzallah N. The opportunities and challenges of adopting ChatGPT in medical research. Front Med (Lausanne). 2023;10:1259640.  [PubMed]  [DOI]  [Cited in This Article: ]  [Reference Citation Analysis (0)]
31.  Ajagunde J, Das NK. ChatGPT Versus Medical Professionals. Health Serv Insights. 2024;17:11786329241230161.  [PubMed]  [DOI]  [Cited in This Article: ]  [Reference Citation Analysis (0)]
32.  Ong JCL, Seng BJJ, Law JZF, Low LL, Kwa ALH, Giacomini KM, Ting DSW. Artificial intelligence, ChatGPT, and other large language models for social determinants of health: Current state and future directions. Cell Rep Med. 2024;5:101356.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 16]  [Reference Citation Analysis (0)]
33.  Yan C, Ong HH, Grabowska ME, Krantz MS, Su WC, Dickson AL, Peterson JF, Feng Q, Roden DM, Stein CM, Kerchberger VE, Malin BA, Wei WQ. Large language models facilitate the generation of electronic health record phenotyping algorithms. J Am Med Inform Assoc. 2024;31:1994-2001.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 2]  [Reference Citation Analysis (0)]
34.  O'Brien SJ. Stewardship of human biospecimens, DNA, genotype, and clinical data in the GWAS era. Annu Rev Genomics Hum Genet. 2009;10:193-209.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 33]  [Cited by in F6Publishing: 31]  [Article Influence: 1.9]  [Reference Citation Analysis (0)]
35.  Andrew J, Eunice RJ, Karthikeyan J. An anonymization-based privacy-preserving data collection protocol for digital health data. Front Public Health. 2023;11:1125011.  [PubMed]  [DOI]  [Cited in This Article: ]  [Reference Citation Analysis (0)]
36.  Shaik T, Tao X, Li L, Xie H, Velásquez JD. A survey of multimodal information fusion for smart healthcare: Mapping the journey from data to wisdom. Inf Fus. 2024;102:102040.  [PubMed]  [DOI]  [Cited in This Article: ]
37.  Sridhar GR, Lakshmi G. Ethical Issues of Artificial Intelligence in Diabetes Mellitus. Med Res Arch. 2023;11.  [PubMed]  [DOI]  [Cited in This Article: ]
38.  Naik N, Hameed BMZ, Shetty DK, Swain D, Shah M, Paul R, Aggarwal K, Ibrahim S, Patil V, Smriti K, Shetty S, Rai BP, Chlosta P, Somani BK. Legal and Ethical Consideration in Artificial Intelligence in Healthcare: Who Takes Responsibility? Front Surg. 2022;9:862322.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 51]  [Cited by in F6Publishing: 174]  [Article Influence: 58.0]  [Reference Citation Analysis (0)]
39.  Reddy S. Explainability and artificial intelligence in medicine. Lancet Digit Health. 2022;4:e214-e215.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 4]  [Cited by in F6Publishing: 4]  [Article Influence: 1.3]  [Reference Citation Analysis (0)]
40.  Poddar M, Marwaha JS, Yuan W, Romero-Brufau S, Brat GA. An operational guide to translational clinical machine learning in academic medical centers. NPJ Digit Med. 2024;7:129.  [PubMed]  [DOI]  [Cited in This Article: ]  [Reference Citation Analysis (0)]
41.  Sharma S, Pajai S, Prasad R, Wanjari MB, Munjewar PK, Sharma R, Pathade A. A Critical Review of ChatGPT as a Potential Substitute for Diabetes Educators. Cureus. 2023;15:e38380.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 11]  [Cited by in F6Publishing: 10]  [Article Influence: 5.0]  [Reference Citation Analysis (0)]
42.  Ponzo V, Goitre I, Favaro E, Merlo FD, Mancino MV, Riso S, Bo S. Is ChatGPT an Effective Tool for Providing Dietary Advice? Nutrients. 2024;16:469.  [PubMed]  [DOI]  [Cited in This Article: ]  [Reference Citation Analysis (0)]
43.  Imans D, Abuhmed T, Alharbi M, El-Sappagh S. Explainable Multi-Layer Dynamic Ensemble Framework Optimized for Depression Detection and Severity Assessment. Diagnostics (Basel). 2024;14:2385.  [PubMed]  [DOI]  [Cited in This Article: ]  [Reference Citation Analysis (0)]