1
Khokhar PB, Gravino C, Palomba F. Advances in artificial intelligence for diabetes prediction: insights from a systematic literature review. Artif Intell Med 2025;164:103132. [PMID: 40258308] [DOI: 10.1016/j.artmed.2025.103132]
Abstract
Diabetes mellitus (DM), a prevalent metabolic disorder, has significant global health implications. The advent of machine learning (ML) has revolutionized the ability to predict and manage diabetes early, offering new avenues to mitigate its impact. This systematic review examined 53 articles on ML applications for diabetes prediction, focusing on datasets, algorithms, training methods, and evaluation metrics. Various datasets, such as the Singapore National Diabetic Retinopathy Screening Program, REPLACE-BG, the National Health and Nutrition Examination Survey (NHANES), and the Pima Indians Diabetes Database (PIDD), have been explored, highlighting their unique features and challenges, such as class imbalance. This review assesses the performance of various ML algorithms, such as convolutional neural networks (CNN), support vector machines (SVM), logistic regression, and XGBoost, for predicting diabetes outcomes from multiple datasets. In addition, it explores explainable AI (XAI) methods such as Grad-CAM, SHAP, and LIME, which improve the transparency and clinical interpretability of AI models in assessing diabetes risk and detecting diabetic retinopathy. Techniques such as cross-validation, data augmentation, and feature selection are discussed in terms of their influence on model versatility and robustness. Evaluation techniques, including k-fold cross-validation and external validation, and performance indicators such as accuracy, area under the curve, sensitivity, and specificity are also presented. The findings highlight the usefulness of ML in addressing the challenges of diabetes prediction, the value of sourcing diverse data types, the need to make models explainable, and the need to keep models clinically relevant.
This study highlights significant implications for healthcare professionals, policymakers, technology developers, patients, and researchers, advocating interdisciplinary collaboration and ethical considerations when implementing ML-based diabetes prediction models. By consolidating existing knowledge, this SLR outlines future research directions aimed at improving diagnostic accuracy, patient care, and healthcare efficiency through advanced ML applications. This comprehensive review contributes to the ongoing efforts to utilize artificial intelligence technology for a better prediction of diabetes, ultimately aiming to reduce the global burden of this widespread disease.
Affiliation(s)
- Pir Bakhsh Khokhar
- Department of Informatics, University of Salerno, Via Giovanni Paolo II, 132, Fisciano, 84084 Salerno, Italy.
- Carmine Gravino
- Department of Informatics, University of Salerno, Via Giovanni Paolo II, 132, Fisciano, 84084 Salerno, Italy.
- Fabio Palomba
- Department of Informatics, University of Salerno, Via Giovanni Paolo II, 132, Fisciano, 84084 Salerno, Italy.
2
Deng J, Qin Y. Current Status, Hotspots, and Prospects of Artificial Intelligence in Ophthalmology: A Bibliometric Analysis (2003-2023). Ophthalmic Epidemiol 2025;32:245-258. [PMID: 39146462] [DOI: 10.1080/09286586.2024.2373956]
Abstract
PURPOSE Artificial intelligence (AI) has gained significant attention in ophthalmology. This paper reviews, classifies, and summarizes the research literature in this field and aims to provide readers with a detailed understanding of the current status and future directions, laying a solid foundation for further research and decision-making. METHODS Literature was retrieved from the Web of Science database. Bibliometric analysis was performed using VOSviewer, CiteSpace, and the R package Bibliometrix. RESULTS The study included 3,377 publications from 4,035 institutions in 98 countries. China and the United States had the most publications. Sun Yat-sen University is a leading institution. "Translational Vision Science & Technology" published the most articles, while "Ophthalmology" had the most co-citations. Among 13,145 researchers, Ting DSW had the most publications and citations. Keywords included "Deep learning," "Diabetic retinopathy," "Machine learning," and others. CONCLUSION The study highlights the promising prospects of AI in ophthalmology. Automated eye disease screening, particularly its core technology of retinal image segmentation and recognition, has become a research hotspot. AI is also expanding into complex areas such as surgical assistance and predictive models. Multimodal AI, Generative Adversarial Networks, and ChatGPT have driven further technological innovation. However, implementing AI in ophthalmology also faces many challenges, including technical, regulatory, and ethical issues, among others. As these challenges are overcome, we anticipate more innovative applications, paving the way for more effective and safer eye disease treatments.
Affiliation(s)
- Jie Deng
- First Clinical College of Traditional Chinese Medicine, Hunan University of Chinese Medicine, Changsha, Hunan, China
- Graduate School, Hunan University of Chinese Medicine, Changsha, Hunan, China
- YuHui Qin
- First Clinical College of Traditional Chinese Medicine, Hunan University of Chinese Medicine, Changsha, Hunan, China
- Graduate School, Hunan University of Chinese Medicine, Changsha, Hunan, China
3
Balas M, Micieli JA, Wong JCY. Integrating AI with tele-ophthalmology in Canada: a review. Can J Ophthalmol 2025;60:e337-e343. [PMID: 39255951] [DOI: 10.1016/j.jcjo.2024.08.013]
Abstract
The field of ophthalmology is rapidly advancing, with technological innovations enhancing the diagnosis and management of eye diseases. Tele-ophthalmology, or the use of telemedicine for ophthalmology, has emerged as a promising solution to improve access to eye care services, particularly for patients in remote or underserved areas. Despite its potential benefits, tele-ophthalmology faces significant challenges, including the need for high volumes of medical images to be analyzed and interpreted by trained clinicians. Artificial intelligence (AI) has emerged as a powerful tool in ophthalmology, capable of assisting clinicians in diagnosing and treating a variety of conditions. Integrating AI models into existing tele-ophthalmology infrastructure has the potential to revolutionize eye care services by reducing costs, improving efficiency, and increasing access to specialized care. By automating the analysis and interpretation of clinical data and medical images, AI models can reduce the burden on human clinicians, allowing them to focus on patient care and disease management. Available literature on the current status of tele-ophthalmology in Canada and successful AI models in ophthalmology was acquired and examined using the Arksey and O'Malley framework. This review covers literature up to 2022 and is split into 3 sections: 1) existing Canadian tele-ophthalmology infrastructure, with its benefits and drawbacks; 2) preeminent AI models in ophthalmology, across a variety of ocular conditions; and 3) bridging the gap between Canadian tele-ophthalmology and AI in a safe and effective manner.
Affiliation(s)
- Michael Balas
- Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Jonathan A Micieli
- Department of Ophthalmology and Vision Sciences, University of Toronto, ON, Canada; Division of Neurology, Department of Medicine, St. Michael's Hospital, University of Toronto, Toronto, ON, Canada; Department of Ophthalmology, St. Michael's Hospital, Toronto, ON, Canada
- Jovi C Y Wong
- Department of Ophthalmology and Vision Sciences, University of Toronto, ON, Canada.
4
Sobhi N, Sadeghi-Bazargani Y, Mirzaei M, Abdollahi M, Jafarizadeh A, Pedrammehr S, Alizadehsani R, Tan RS, Islam SMS, Acharya UR. Artificial intelligence for early detection of diabetes mellitus complications via retinal imaging. J Diabetes Metab Disord 2025;24:104. [PMID: 40224528] [PMCID: PMC11993533] [DOI: 10.1007/s40200-025-01596-7]
Abstract
Background Diabetes mellitus (DM) increases the risk of vascular complications, and retinal vasculature imaging serves as a valuable indicator of both microvascular and macrovascular health. Moreover, artificial intelligence (AI)-enabled systems developed for high-throughput detection of diabetic retinopathy (DR) using digitized retinal images have become clinically adopted. This study reviews AI applications using retinal images for DM-related complications, highlighting advancements beyond DR screening, diagnosis, and prognosis, and addresses implementation challenges, such as ethics, data privacy, equitable access, and explainability. Methods We conducted a thorough literature search across several databases, including PubMed, Scopus, and Web of Science, focusing on studies involving diabetes, the retina, and artificial intelligence. We reviewed the original studies' methodology, AI algorithms, data processing techniques, and validation procedures to ensure a detailed analysis of AI applications in diabetic retinal imaging. Results Retinal images can be used to diagnose DM complications including DR, neuropathy, nephropathy, and atherosclerotic cardiovascular disease, as well as to predict the risk of cardiovascular events. Beyond DR screening, AI integration also offers significant potential to address the challenges in the comprehensive care of patients with DM. Conclusion With the ability to evaluate the patient's health status in relation to DM complications as well as risk prognostication of future cardiovascular complications, AI-assisted retinal image analysis has the potential to become a central tool for modern personalized medicine in patients with DM.
Affiliation(s)
- Navid Sobhi
- Nikookari Eye Center, Tabriz University of Medical Sciences, Tabriz, Iran
- Majid Mirzaei
- Student Research Committee, Tabriz University of Medical Sciences, Tabriz, Iran
- Mirsaeed Abdollahi
- Nikookari Eye Center, Tabriz University of Medical Sciences, Tabriz, Iran
- Ali Jafarizadeh
- Nikookari Eye Center, Tabriz University of Medical Sciences, Tabriz, Iran
- Siamak Pedrammehr
- Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, 75 Pigdons Rd, Waurn Ponds, VIC 3216, Australia
- Faculty of Design, Tabriz Islamic Art University, Tabriz, Iran
- Roohallah Alizadehsani
- Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, 75 Pigdons Rd, Waurn Ponds, VIC 3216, Australia
- Ru-San Tan
- National Heart Centre Singapore, Singapore
- Duke-NUS Medical School, Singapore
- Sheikh Mohammed Shariful Islam
- Institute for Physical Activity and Nutrition, School of Exercise and Nutrition Sciences, Deakin University, Melbourne, VIC, Australia
- Cardiovascular Division, The George Institute for Global Health, Newtown, Australia
- Sydney Medical School, University of Sydney, Camperdown, Australia
- U. Rajendra Acharya
- School of Mathematics, Physics, and Computing, University of Southern Queensland, Springfield, QLD 4300, Australia
- Centre for Health Research, University of Southern Queensland, Springfield, Australia
5
Pedrini A, Nowosielski Y, Rehak M. Diabetic retinopathy-recommendations for screening and treatment. Wien Med Wochenschr 2025;175:253-263. [PMID: 40343680] [DOI: 10.1007/s10354-025-01088-6]
Abstract
Diabetic retinopathy (DR), the prevalence of which continues to rise, is one of the most common causes of vision loss worldwide. Experimental and clinical research in recent years has contributed to a better understanding of the pathogenesis of DR, which is complex and results from many interrelated processes leading to abnormal permeability and occlusion of the retinal vasculature, with ischemia and subsequent neovascularization. According to the absence or presence of neovascularization, DR is divided into two main forms: nonproliferative and proliferative DR. From nonproliferative to proliferative disease, diabetic macular edema (DME) can develop anywhere along the spectrum. As the majority of diabetics have no ophthalmologic symptoms, screening plays an important role in preventing the development of retinal disease. Specific treatment options beyond metabolic risk factor control, including intravitreal administration of anti-vascular endothelial growth factor (VEGF) agents or corticosteroids, laser photocoagulation, and vitreous surgery, are effective approaches for ocular diabetic complications.
Affiliation(s)
- Alisa Pedrini
- Department of Ophthalmology, Medical University of Innsbruck, Anichstraße 35, 6020, Innsbruck, Austria
- Yvonne Nowosielski
- Department of Ophthalmology, Medical University of Innsbruck, Anichstraße 35, 6020, Innsbruck, Austria
- Matus Rehak
- Department of Ophthalmology, Medical University of Innsbruck, Anichstraße 35, 6020, Innsbruck, Austria
6
Peoples N, McBee D, Xiong S, Alvarez A, Wang E, Ricciardelli A, Wang S, Clark DL, Wong TY. Diabetic Retinopathy Screening Rates at Student-Run Clinics in the United States: A Systematic Review and Meta-Analysis. Ophthalmic Epidemiol 2025;32:356-360. [PMID: 39212490] [DOI: 10.1080/09286586.2024.2378778]
Affiliation(s)
- Nicholas Peoples
- College of Medicine, Baylor College of Medicine, Houston, Texas, USA
- Dylan McBee
- College of Medicine, Baylor College of Medicine, Houston, Texas, USA
- Shangzhi Xiong
- The George Institute for Global Health, Faculty of Medicine and Health, University of New South Wales, Sydney, New South Wales, Australia
- Alexandra Alvarez
- College of Medicine, Baylor College of Medicine, Houston, Texas, USA
- Emily Wang
- College of Medicine, Baylor College of Medicine, Houston, Texas, USA
- Shiwei Wang
- Department of Counseling & Clinical Psychology, Teachers College, Columbia University, New York, New York, USA
- Dana L Clark
- Department of Family and Community Medicine, Baylor College of Medicine, Houston, Texas, USA
- Tien Yin Wong
- Tsinghua Medicine, Tsinghua University, Beijing, China
- Ocular Epidemiology Research Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
7
Naskar S, Sharma S, Kuotsu K, Halder S, Pal G, Saha S, Mondal S, Biswas UK, Jana M, Bhattacharjee S. The biomedical applications of artificial intelligence: an overview of decades of research. J Drug Target 2025;33:717-748. [PMID: 39744873] [DOI: 10.1080/1061186x.2024.2448711]
Abstract
A significant area of computer science called artificial intelligence (AI) is successfully applied to the analysis of intricate biological data and the extraction of substantial associations from datasets for a variety of biomedical uses. AI has attracted significant interest in biomedical research due to its features: (i) better patient care through early diagnosis and detection; (ii) enhanced workflow; (iii) lowering medical errors; (iv) lowering medical costs; (v) reducing morbidity and mortality; (vi) enhancing performance; (vii) enhancing precision; and (viii) time efficiency. Quantitative metrics are crucial for evaluating AI implementations, providing insights, enabling informed decisions, and measuring the impact of AI-driven initiatives, thereby enhancing transparency, accountability, and overall impact. The implementation of AI in biomedical fields faces challenges such as ethical and privacy concerns, lack of awareness, technology unreliability, and professional liability. A brief discussion is given of the AI techniques, which include virtual screening (VS), deep learning (DL), machine learning (ML), hidden Markov models (HMMs), neural networks (NNs), generative models (GMs), molecular dynamics (MD), and structure-activity relationship (SAR) models. The study explores the application of AI in biomedical fields, highlighting its enhanced predictive accuracy, treatment efficacy, diagnostic efficiency, faster decision-making, personalised treatment strategies, and precise medical interventions.
Affiliation(s)
- Sweet Naskar
- Department of Pharmaceutics, Institute of Pharmacy, Kalyani, West Bengal, India
- Suraj Sharma
- Department of Pharmaceutics, Sikkim Professional College of Pharmaceutical Sciences, Sikkim, India
- Ketousetuo Kuotsu
- Department of Pharmaceutical Technology, Jadavpur University, Kolkata, West Bengal, India
- Suman Halder
- Medical Department, Department of Indian Railway, Kharagpur Division, Kharagpur, West Bengal, India
- Goutam Pal
- Service Dispensary, ESI Hospital, Hooghly, West Bengal, India
- Subhankar Saha
- Department of Pharmaceutical Technology, Jadavpur University, Kolkata, West Bengal, India
- Shubhadeep Mondal
- Department of Pharmacology, Momtaz Begum Pharmacy College, Rajarhat, West Bengal, India
- Ujjwal Kumar Biswas
- School of Pharmaceutical Science (SPS), Siksha O Anusandhan (SOA) University, Bhubaneswar, Odisha, India
- Mayukh Jana
- School of Pharmacy, Centurion University of Technology and Management, Centurion University, Bhubaneswar, Odisha, India
- Sunirmal Bhattacharjee
- Department of Pharmaceutics, Bharat Pharmaceutical Technology, Amtali, Agartala, Tripura, India
8
Fan W, Jager MJ, Dai W, Heindl LM. Deep learning-based system for automatic identification of benign and malignant eyelid tumours. Br J Ophthalmol 2025:bjo-2025-327127. [PMID: 40348397] [DOI: 10.1136/bjo-2025-327127]
Abstract
AIMS Our aim is to develop a deep learning-based system for automatically identifying and classifying benign and malignant tumours of the eyelid to improve diagnostic accuracy and efficiency. METHODS The dataset includes photographs of normal eyelids, benign and malignant eyelid tumours and was randomly divided into a training and validation dataset in a ratio of 8:2. We used the training dataset to train eight convolutional neural network models to classify normal eyelids, benign and malignant eyelid tumours. These models included VGG16, ResNet50, Inception-v4, EfficientNet-V2-M and their variants. The validation dataset was used to evaluate and compare the performance of the different deep learning models. RESULTS All eight models achieved an average accuracy greater than 0.746 for identifying normal eyelids, benign and malignant eyelid tumours, with an average sensitivity and specificity exceeding 0.790 and 0.866, respectively. The mean area under the receiver operating characteristic curve (AUC) for the eight models was more than 0.904 in correctly identifying normal eyelids, benign and malignant eyelid tumours. The dual-path Inception-v4 network demonstrated the highest performance, with an AUC of 0.930 (95% CI 0.900 to 0.954) and an F1-score of 0.838 (95% CI 0.787 to 0.882). CONCLUSION The deep learning-based system shows significant potential in improving the diagnosis of eyelid tumours, providing a reliable and efficient tool for clinical practice. Future work will validate the model with more extensive and diverse datasets and integrate it into clinical workflows for real-time diagnostic support.
Affiliation(s)
- Wanlin Fan
- Department of Ophthalmology, University of Cologne, Faculty of Medicine and University Hospital Cologne, Cologne, Germany
- Martine Johanna Jager
- Department of Ophthalmology, Leiden University Medical Center, Leiden, The Netherlands
- Weiwei Dai
- Changsha Aier Eye Hospital, Hunan, China
- Ludwig M Heindl
- Department of Ophthalmology, University of Cologne, Faculty of Medicine and University Hospital Cologne, Cologne, Germany
- Center for Integrated Oncology (CIO), Aachen-Bonn-Cologne-Duesseldorf, Cologne, Germany
9
Abreu-Gonzalez R, Susanna-González G, Blair JPM, Lasagni Vitar RM, Ciller C, Apostolopoulos S, De Zanet S, Rodríguez Martín JN, Bermúdez C, Calle Pascual AL, Rigo E, Cervera Taulet E, Escobar-Barranco JJ, Cobo-Soriano R, Donate-Lopez J. Validation of artificial intelligence algorithm LuxIA for screening of diabetic retinopathy from a single 45° retinal colour fundus images: the CARDS study. BMJ Open Ophthalmol 2025;10:e002109. [PMID: 40340790] [PMCID: PMC12067837] [DOI: 10.1136/bmjophth-2024-002109]
Abstract
OBJECTIVE This study validated the artificial intelligence (AI)-based algorithm LuxIA for screening more-than-mild diabetic retinopathy (mtmDR) from a single 45° colour fundus image of patients with diabetes mellitus (DM, type 1 or type 2) in Spain. Secondary objectives included validating LuxIA according to the International Clinical Diabetic Retinopathy (ICDR) classification and comparing its performance between different devices. METHODS In this multicentre, cross-sectional study, retinal colour fundus images of adults (≥18 years) with DM were collected from five hospitals in Spain (December 2021-December 2022). 45° colour fundus photographs were captured using non-mydriatic Topcon and ZEISS cameras. The Discovery platform (RetinAI) was used to collect images. LuxIA output was an ordinal score (1-5), indicating a classification as mtmDR based on an ICDR severity score. RESULTS 945 patients with DM were included; the mean (SD) age was 64.6 (13.5) years. The LuxIA algorithm detected mtmDR with a sensitivity and specificity of 97.1% and 94.8%, respectively. The area under the receiver-operating characteristic curve was 0.96, indicating high test accuracy. The 95% CI data for overall accuracy (94.8% to 95.6%), sensitivity (96.8% to 98.2%) and specificity (94.3% to 95.1%) indicated robust estimation by LuxIA, which maintained concordance of classification (N=829, kappa=0.837, p=0.001) when used to classify Topcon images. Validation on ZEISS-obtained images demonstrated high accuracy (90.6%) and specificity (92.3%) but lower sensitivity (83.3%) compared with Topcon-obtained images. CONCLUSIONS AI algorithms such as LuxIA are making DR screening more feasible for healthcare professionals. This study validates the real-world utility of LuxIA for mtmDR screening.
Affiliation(s)
- Rodrigo Abreu-Gonzalez
- Ophthalmology, University Hospital of La Candelaria, La Matanza, Spain
- Fundación VerSalud, Madrid, Spain
- Carlos Bermúdez
- Innovation & Digital Health Service, Servicio Canario de Salud, Santa Cruz de Tenerife, Spain
- Alfonso Luis Calle Pascual
- Endocrinology & Nutrition Department, HCSC, Complutense University of Madrid, Madrid, Spain
- CIBERDEM, Madrid, Spain
- Elena Rigo
- Ophthalmology, Son Llàtzer Hospital, Palma de Mallorca, Spain
- Rosario Cobo-Soriano
- Ophthalmology, Hospital Universitario del Henares, Coslada, Spain
- Francisco de Vitoria University, Majadahonda, Spain
- Juan Donate-Lopez
- Fundación VerSalud, Madrid, Spain
- Ophthalmology, La Luz Hospital, Madrid, Spain
10
Luo X, Xu Q, Wang Z, Huang C, Liu C, Jin X, Zhang J. A Lesion-Fusion Neural Network for Multi-View Diabetic Retinopathy Grading. IEEE J Biomed Health Inform 2025;29:3184-3193. [PMID: 38568769] [DOI: 10.1109/jbhi.2024.3384251]
Abstract
As the most common complication of diabetes, diabetic retinopathy (DR) is one of the main causes of irreversible blindness. Automatic DR grading plays a crucial role in early diagnosis and intervention, reducing the risk of vision loss in people with diabetes. In recent years, various deep-learning approaches to DR grading have been proposed. Most previous DR grading models are trained on datasets of single-field fundus images, but the entire retina cannot be fully visualized in a single field of view. Fundus images also present problems of scattered lesion locations and great differences in lesion appearance. To address the limitations caused by incomplete fundus features and the difficulty of obtaining lesion information, this work introduces a novel multi-view DR grading framework that jointly learns from fundus images acquired from multiple fields of view. Furthermore, the proposed model combines multi-view inputs such as fundus images and lesion snapshots. It utilizes heterogeneous convolution blocks (HCB) and scalable self-attention classes (SSAC), which enhance the model's ability to obtain lesion information. The experimental results show that the proposed method outperforms benchmark methods on a large-scale dataset.
11
Chinta SV, Wang Z, Palikhe A, Zhang X, Kashif A, Smith MA, Liu J, Zhang W. AI-driven healthcare: Fairness in AI healthcare: A survey. PLOS Digit Health 2025;4:e0000864. [PMID: 40392801] [PMCID: PMC12091740] [DOI: 10.1371/journal.pdig.0000864]
Abstract
Artificial intelligence (AI) is rapidly advancing in healthcare, enhancing the efficiency and effectiveness of services across various specialties, including cardiology, ophthalmology, dermatology, and emergency medicine, among others. AI applications have significantly improved diagnostic accuracy, treatment personalization, and patient outcome predictions by leveraging technologies such as machine learning, neural networks, and natural language processing. However, these advancements also introduce substantial ethical and fairness challenges, particularly related to biases in data and algorithms. These biases can lead to disparities in healthcare delivery, affecting diagnostic accuracy and treatment outcomes across different demographic groups. This review paper examines the integration of AI in healthcare, highlighting critical challenges related to bias and exploring strategies for mitigation. We emphasize the necessity of diverse datasets, fairness-aware algorithms, and regulatory frameworks to ensure equitable healthcare delivery. The paper concludes with recommendations for future research, advocating for interdisciplinary approaches, transparency in AI decision-making, and the development of innovative and inclusive AI applications.
Affiliation(s)
- Zichong Wang
- Florida International University, Miami, Florida, United States of America
- Avash Palikhe
- Florida International University, Miami, Florida, United States of America
- Xingyu Zhang
- University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Ayesha Kashif
- Jose Marti MAST 6-12 Academy, Hialeah, Florida, United States of America
- Jun Liu
- Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Wenbin Zhang
- Florida International University, Miami, Florida, United States of America
12
Zhu Z, Wang Y, Qi Z, Hu W, Zhang X, Wagner SK, Wang Y, Ran AR, Ong J, Waisberg E, Masalkhi M, Suh A, Tham YC, Cheung CY, Yang X, Yu H, Ge Z, Wang W, Sheng B, Liu Y, Lee AG, Denniston AK, Wijngaarden PV, Keane PA, Cheng CY, He M, Wong TY. Oculomics: Current concepts and evidence. Prog Retin Eye Res 2025;106:101350. [PMID: 40049544] [DOI: 10.1016/j.preteyeres.2025.101350]
Abstract
The eye provides novel insights into general health, as well as the pathogenesis and development of systemic diseases. In the past decade, growing evidence has demonstrated that the eye's structure and function mirror multiple systemic health conditions, especially cardiovascular diseases, neurodegenerative disorders, and kidney impairment. This has given rise to the field of oculomics-the application of ophthalmic biomarkers to understand mechanisms and to detect and predict disease. The development of this field has been accelerated by three major advances: 1) the availability and widespread clinical adoption of high-resolution, non-invasive ophthalmic imaging ("hardware"); 2) the availability of large studies to interrogate associations ("big data"); 3) the development of novel analytical methods, including artificial intelligence (AI) ("software"). Oculomics offers an opportunity to enhance our understanding of the interplay between the eye and the body, while supporting the development of innovative diagnostic, prognostic, and therapeutic tools. These advances have been further accelerated by developments in AI, coupled with large-scale datasets linking ocular imaging with systemic health data. Oculomics also enables the detection, screening, diagnosis, and monitoring of many systemic health conditions. Furthermore, oculomics with AI allows prediction of the risk of systemic diseases, enabling risk stratification and opening new avenues for individualized risk prediction and prevention, facilitating personalized medicine. In this review, we summarise current concepts and evidence in the field of oculomics, highlighting the progress that has been made, remaining challenges, and the opportunities for future research.
Affiliation(s)
- Zhuoting Zhu: Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, VIC, Australia; Department of Surgery (Ophthalmology), University of Melbourne, Melbourne, VIC, Australia
- Yueye Wang: School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
- Ziyi Qi: Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, VIC, Australia; Department of Surgery (Ophthalmology), University of Melbourne, Melbourne, VIC, Australia; Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Eye Diseases, Shanghai, China
- Wenyi Hu: Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, VIC, Australia; Department of Surgery (Ophthalmology), University of Melbourne, Melbourne, VIC, Australia
- Xiayin Zhang: Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Siegfried K Wagner: NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK; Institute of Ophthalmology, University College London, London, UK
- Yujie Wang: Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, VIC, Australia; Department of Surgery (Ophthalmology), University of Melbourne, Melbourne, VIC, Australia
- An Ran Ran: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Joshua Ong: Department of Ophthalmology and Visual Sciences, University of Michigan Kellogg Eye Center, Ann Arbor, USA
- Ethan Waisberg: Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
- Mouayad Masalkhi: University College Dublin School of Medicine, Belfield, Dublin, Ireland
- Alex Suh: Tulane University School of Medicine, New Orleans, LA, USA
- Yih Chung Tham: Department of Ophthalmology and Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Carol Y Cheung: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Xiaohong Yang: Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Honghua Yu: Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Zongyuan Ge: Monash e-Research Center, Faculty of Engineering, Airdoc Research, Nvidia AI Technology Research Center, Monash University, Melbourne, VIC, Australia
- Wei Wang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Bin Sheng: Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yun Liu: Google Research, Mountain View, CA, USA
- Andrew G Lee: Center for Space Medicine and the Department of Ophthalmology, Baylor College of Medicine, Houston, USA; Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, USA; The Houston Methodist Research Institute, Houston Methodist Hospital, Houston, USA; Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, USA; Department of Ophthalmology, University of Texas Medical Branch, Galveston, USA; University of Texas MD Anderson Cancer Center, Houston, USA; Texas A&M College of Medicine, Bryan, USA; Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, USA
- Alastair K Denniston: NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK; Institute of Ophthalmology, University College London, London, UK; National Institute for Health and Care Research (NIHR) Birmingham Biomedical Research Centre (BRC), University Hospital Birmingham and University of Birmingham, Birmingham, UK; University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK; Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK; Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK
- Peter van Wijngaarden: Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, VIC, Australia; Department of Surgery (Ophthalmology), University of Melbourne, Melbourne, VIC, Australia; Florey Institute of Neuroscience and Mental Health, University of Melbourne, Parkville, VIC, Australia
- Pearse A Keane: NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK; Institute of Ophthalmology, University College London, London, UK
- Ching-Yu Cheng: Department of Ophthalmology and Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Mingguang He: School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China; Research Centre for SHARP Vision (RCSV), The Hong Kong Polytechnic University, Kowloon, Hong Kong, China; Centre for Eye and Vision Research (CEVR), 17W Hong Kong Science Park, Hong Kong, China
- Tien Yin Wong: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Tsinghua Medicine, Tsinghua University, Beijing, China
13
Tsai TY, Chang YF, Kang EYC, Chen KJ, Wang NK, Liu L, Hwang YS, Lai CC, Chen SY, Chen J, Lai CS, Wu WC. Grading of Foveal Hypoplasia Using Deep Learning on Retinal Fundus Images. Transl Vis Sci Technol 2025; 14:18. [PMID: 40402544] [DOI: 10.1167/tvst.14.5.18]
Abstract
Purpose This study aimed to develop and evaluate a deep learning model for grading foveal hypoplasia using retinal fundus images. Methods This retrospective study included patients with foveal developmental disorders, using color fundus images and optical coherence tomography scans taken between January 1, 2001, and August 31, 2021. In total, 605 retinal fundus images were obtained from 303 patients (male, 55.1%; female, 44.9%). After augmentation, the training, validation, and testing data sets comprised 1229, 527, and 179 images, respectively. A deep learning model was developed for binary classification (normal vs. abnormal foveal development) and six-grade classification of foveal hypoplasia. Model performance was compared with that of senior and junior clinicians. Results Higher grades of foveal hypoplasia were associated with worse visual outcomes (P < 0.001). The binary classification achieved a best testing accuracy of 84.36% using the EfficientNet_b1 model, with 84.51% sensitivity and 84.26% specificity. The six-grade classification achieved a best testing accuracy of 78.21% with the same model. The model achieved an area under the receiver operating characteristic curve (AUROC) of 0.9441 and an area under the precision-recall curve (AUPRC) of 0.9654 (both P < 0.0001) in the validation set, and an AUROC of 0.8777 and an AUPRC of 0.8327 (both P < 0.0001) in the testing set. Compared with junior and senior clinicians, the EfficientNet_b1 model exhibited superior performance in both binary and six-grade classification (both P < 0.00001). Conclusions The deep learning model in this study proved more efficient and accurate than junior and senior clinicians at identifying foveal developmental diseases in retinal fundus images. With the aid of the model, we were able to accurately assess patients with foveal developmental disorders.
Translational Relevance This study underscores the value of a pediatric deep learning system for supporting clinical evaluation, particularly in cases that rely on retinal fundus images.
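As an aside for readers comparing the AUROC figures quoted above across entries: AUROC can be computed directly from ranked scores as a Mann-Whitney statistic, the probability that a randomly chosen positive outranks a randomly chosen negative. A minimal self-contained sketch; the scores below are made up for illustration, not data from the study:

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney U statistic: fraction of positive-negative
    pairs in which the positive receives the higher score (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model scores: 4 abnormal (label 1) and 4 normal (label 0) scans.
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.95, 0.80, 0.70, 0.40, 0.60, 0.30, 0.20, 0.10]
print(auroc(labels, scores))  # 0.9375
```

The rank formulation makes clear that AUROC depends only on the ordering of the scores, not on any particular decision threshold.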
Affiliation(s)
- Tsung-Ying Tsai: Department of Ophthalmology, Chang Gung Memorial Hospital, Taoyuan, Taiwan
- Ying-Feng Chang: Artificial Intelligence Research Center, Chang Gung University, Taoyuan, Taiwan; Department of Gastroenterology and Hepatology, New Taipei Municipal Tu Cheng Hospital (Built and Operated by Chang Gung Medical Foundation), New Taipei City, Taiwan
- Eugene Yu-Chuan Kang: Department of Ophthalmology, Chang Gung Memorial Hospital, Taoyuan, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Kuan-Jen Chen: Department of Ophthalmology, Chang Gung Memorial Hospital, Taoyuan, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Nan-Kai Wang: Department of Ophthalmology, Chang Gung Memorial Hospital, Taoyuan, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Laura Liu: Department of Ophthalmology, Chang Gung Memorial Hospital, Taoyuan, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Yih-Shiou Hwang: Department of Ophthalmology, Chang Gung Memorial Hospital, Taoyuan, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Chi-Chun Lai: Department of Ophthalmology, Chang Gung Memorial Hospital, Taoyuan, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Sin-You Chen: Artificial Intelligence Research Center, Chang Gung University, Taoyuan, Taiwan
- Jenhui Chen: Department of Computer Science and Information Engineering, Chang Gung University, Taiwan; Division of Breast Surgery and General Surgery, Department of Surgery, Chang Gung Memorial Hospital, Taiwan; Department of Electronic Engineering, Ming Chi University of Technology, Taiwan
- Chao-Sung Lai: Department of Electronic Engineering, Chang Gung University, Taoyuan, Taiwan; Department of Nephrology, Chang Gung Memorial Hospital, Taiwan; Department of Materials Engineering, Ming Chi University of Technology, Taiwan
- Wei-Chi Wu: Department of Ophthalmology, Chang Gung Memorial Hospital, Taoyuan, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan
14
Dinesen S, Schou MG, Hedegaard CV, Subhi Y, Savarimuthu TR, Peto T, Andersen JKH, Grauslund J. A Deep Learning Segmentation Model for Detection of Active Proliferative Diabetic Retinopathy. Ophthalmol Ther 2025; 14:1053-1063. [PMID: 40146482] [PMCID: PMC12006569] [DOI: 10.1007/s40123-025-01127-w]
Abstract
INTRODUCTION Existing deep learning (DL) algorithms lack the capability to accurately identify patients in immediate need of treatment for proliferative diabetic retinopathy (PDR). We aimed to develop a DL segmentation model to detect active PDR in six-field retinal images through annotation of new retinal vessels and preretinal hemorrhages. METHODS We identified six-field retinal images classified at level 4 of the International Clinical Diabetic Retinopathy Disease Severity Scale, collected on the island of Funen from 2009 to 2019 as part of the Danish screening program for diabetic retinopathy (DR). A certified grader (grader 1) manually dichotomized the images into active or inactive PDR, and the images were then reassessed by two independent certified graders. In cases of disagreement, the final classification was decided in collaboration between grader 1 and one of the secondary graders. Overall, 637 images were classified as active PDR. We then applied our pre-established DL segmentation model to annotate nine lesion types before training the algorithm. The segmentations of new vessels and preretinal hemorrhages were corrected for any inaccuracies before training the DL algorithm. After the classification and pre-segmentation phases, the images were divided into training (70%), validation (10%), and testing (20%) datasets. We added 301 images with inactive PDR to the testing dataset. RESULTS We included 637 images of active PDR and 301 images of inactive PDR from 199 individuals. The training dataset contained 1381 new vessel and preretinal hemorrhage lesions, the validation dataset 123 lesions, and the testing dataset 374 lesions. The DL system demonstrated a sensitivity of 90% and a specificity of 70% for annotation-assisted classification of active PDR. The negative predictive value was 94%, and the positive predictive value was 57%.
CONCLUSIONS Our DL segmentation model achieved excellent sensitivity and acceptable specificity in distinguishing active from inactive PDR.
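To make screening metrics like those reported above concrete: sensitivity, specificity, and the two predictive values all follow from a single 2x2 confusion matrix. A minimal sketch; the counts below are hypothetical, chosen only to be consistent with 90% sensitivity and roughly 70% specificity on a test set with 301 negative images, not taken from the paper:

```python
def screening_metrics(tp, fn, tn, fp):
    """Standard 2x2 confusion-matrix metrics for a binary screening test."""
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts: 130 active and 301 inactive PDR images.
m = screening_metrics(tp=117, fn=13, tn=211, fp=90)
print(m)
```

Note how a moderate false-positive count drags the PPV well below the sensitivity when negatives outnumber positives, which is why a screen can have 90% sensitivity yet only ~57% PPV.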
Affiliation(s)
- Sebastian Dinesen: Department of Ophthalmology, Odense University Hospital, Sdr. Boulevard 29, 5000, Odense, Denmark; Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Steno Diabetes Centre Odense, Odense University Hospital, Odense, Denmark
- Marianne G Schou: Department of Ophthalmology, Odense University Hospital, Sdr. Boulevard 29, 5000, Odense, Denmark; Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Christoffer V Hedegaard: Department of Ophthalmology, Odense University Hospital, Sdr. Boulevard 29, 5000, Odense, Denmark
- Yousif Subhi: Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Department of Ophthalmology, Rigshospitalet, Glostrup, Denmark; Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
- Tunde Peto: Centre for Public Health, Queen's University Belfast, Belfast, UK
- Jakob K H Andersen: The Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Odense, Denmark
- Jakob Grauslund: Department of Ophthalmology, Odense University Hospital, Sdr. Boulevard 29, 5000, Odense, Denmark; Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Steno Diabetes Centre Odense, Odense University Hospital, Odense, Denmark
15
Tan-Torres A, Praveen PA, Jeji D, Brant A, Yin X, Yang L, Singh P, Ali T, Traynis I, Jadeja D, Sawhney R, Webster DR, Hammel N, Liu Y, Widner K, Virmani S, Venkatesh P, Krause J, Tandon N. Validation of a Deep Learning Model for Diabetic Retinopathy on Patients with Young-Onset Diabetes. Ophthalmol Ther 2025; 14:1147-1155. [PMID: 40087218] [PMCID: PMC12006647] [DOI: 10.1007/s40123-025-01116-z]
Abstract
INTRODUCTION While many deep learning systems (DLSs) for diabetic retinopathy (DR) have been developed and validated on cohorts with an average age of 50s or older, fewer studies have examined younger individuals. This study aimed to understand DLS performance for younger individuals, who tend to display anatomic differences, such as prominent retinal sheen. This sheen can be mistaken for exudates or cotton wool spots, and potentially confound DLSs. METHODS This was a prospective cross-sectional cohort study in a "Diabetes of young" clinic in India, enrolling 321 individuals between ages 18 and 45 (98.8% with type 1 diabetes). Participants had fundus photographs taken and the photos were adjudicated by experienced graders to obtain reference DR grades. We defined a younger cohort (age 18-25) and an older cohort (age 26-45) and examined differences in DLS performance between the two cohorts. The main outcome measures were sensitivity and specificity for DR. RESULTS Eye-level sensitivity for moderate-or-worse DR was 97.6% [95% confidence interval (CI) 91.2, 98.2] for the younger cohort and 94.0% [88.8, 98.1] for the older cohort (p = 0.418 for difference). The specificity for moderate-or-worse DR significantly differed between the younger and older cohorts, 97.9% [95.9, 99.3] and 92.1% [87.6, 96.0], respectively (p = 0.008). Similar trends were observed for diabetic macular edema (DME); sensitivity was 79.0% [57.9, 93.6] for the younger cohort and 77.5% [60.8, 90.6] for the older cohort (p = 0.893), whereas specificity was 97.0% [94.5, 99.0] and 92.0% [88.2, 95.5] (p = 0.018). Retinal sheen presence (94% of images) was associated with DME presence (p < 0.0001). Image review suggested that sheen presence confounded reference DME status, increasing noise in the labels and depressing measured sensitivity. The gradability rate for both DR and DME was near-perfect (99% for both). 
CONCLUSION DLS-based DR screening performed well in younger individuals aged 18-25, with comparable sensitivity and higher specificity compared to individuals aged 26-45. Sheen presence in this cohort made identification of DME difficult for graders and depressed measured DLS sensitivity; additional studies incorporating optical coherence tomography may improve accuracy of measuring DLS DME sensitivity.
Affiliation(s)
- Pradeep A Praveen: Department of Endocrinology and Metabolism, AIIMS, New Delhi, India; Goa Institute of Management, Goa, India
- Arthur Brant: Work done at Google via Oregon Vision and Pharmacy LLC, Keizer, OR, USA
- Lu Yang: Google, Mountain View, CA, USA
- Tayyeba Ali: Work done at Google via Vituity, Emeryville, CA, USA
- Ilana Traynis: Work done at Google via Oregon Vision and Pharmacy LLC, Keizer, OR, USA
- Yun Liu: Google, Mountain View, CA, USA
- Nikhil Tandon: Department of Endocrinology and Metabolism, AIIMS, New Delhi, India
16
Chen Y, Song F, Zhao Z, Wang Y, To E, Liu Y, Shi D, Chen X, Xu L, Shang X, Lai M, He M. Acceptability, applicability, and cost-utility of artificial-intelligence-powered low-cost portable fundus camera for diabetic retinopathy screening in primary health care settings. Diabetes Res Clin Pract 2025; 223:112161. [PMID: 40194705] [DOI: 10.1016/j.diabres.2025.112161]
Abstract
AIMS To evaluate the acceptability, applicability, and cost-utility of AI-powered portable fundus cameras for diabetic retinopathy (DR) screening in Hong Kong, providing a viable alternative screening solution for resource-limited areas. METHODS This pragmatic trial was conducted in an optometric clinic and two optical shops. A self-testing system was used, integrating a portable fundus camera with AI software that automatically identified DR. Three months after the screening, selected participants were invited to complete an open-ended questionnaire. RESULTS A total of 316 subjects participated, with a mean age of 60.80 ± 8.30 years. The success rate of the self-testing system without active assistance was 89%. Among the 61 subjects who completed the follow-up interview, a majority agreed that the system and report were easy to follow and understand (85.3% and 75.4%, respectively). The satisfaction rate was 64%, and the willingness to use the system again was 80%. The AI screening showed a cost saving of 6312.92 USD per QALY, while the adjusted AI model saved 18,639 USD per QALY. Both the AI screening and the adjusted model outperformed traditional screening (net monetary benefit 367,863.31 and 354,904.76 vs. 339,919.83 USD). CONCLUSIONS The AI-powered portable fundus camera demonstrated high acceptability and applicability in real-world settings, suggesting that AI screening could be a viable alternative in resource-limited settings.
Affiliation(s)
- Yanxian Chen: School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong; Research Centre for SHARP Vision (RCSV), The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Fan Song: School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong; Research Centre for SHARP Vision (RCSV), The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Ziwei Zhao: School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong; Research Centre for SHARP Vision (RCSV), The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Yueye Wang: School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong; Research Centre for SHARP Vision (RCSV), The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Elaine To: School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong; Research Centre for SHARP Vision (RCSV), The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Yanjun Liu: School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong; Research Centre for SHARP Vision (RCSV), The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Danli Shi: School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong; Research Centre for SHARP Vision (RCSV), The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Xiaolan Chen: School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong; Research Centre for SHARP Vision (RCSV), The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Liya Xu: Department of Public Health & Community Medicine, Tufts University School of Medicine, MA, USA
- Xianwen Shang: School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong; Research Centre for SHARP Vision (RCSV), The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Mengying Lai: School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong; Research Centre for SHARP Vision (RCSV), The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Mingguang He: School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong; Research Centre for SHARP Vision (RCSV), The Hong Kong Polytechnic University, Kowloon, Hong Kong; Centre for Eye and Vision Research (CEVR), 17W Hong Kong Science Park, Hong Kong
17
Agnihotri AP, Nagel ID, Artiaga JCM, Guevarra MCB, Sosuan GMN, Kalaw FGP. Large Language Models in Ophthalmology: A Review of Publications from Top Ophthalmology Journals. Ophthalmol Sci 2025; 5:100681. [PMID: 40114712] [PMCID: PMC11925577] [DOI: 10.1016/j.xops.2024.100681]
Abstract
Purpose To review and evaluate the current literature on the application and impact of large language models (LLMs) in the field of ophthalmology, focusing on studies published in high-ranking ophthalmology journals. Design This is a retrospective review of published articles. Participants This study did not involve human participants. Methods Articles published in first-quartile (Q1) ophthalmology journals on the Scimago Journal & Country Rank discussing different LLMs up to June 7, 2024, were reviewed, parsed, and analyzed. Main Outcome Measures All available articles were parsed and analyzed, including article and author characteristics and data regarding the LLM used and its applications, focusing on use in medical education, clinical assistance, research, and patient education. Results There were 35 Q1-ranked journals identified, 19 of which contained articles discussing LLMs, with 101 articles eligible for review. One-third were original investigations (32%; 32/101), with an average of 5.3 authors per article. The United States (50.4%; 51/101) was the most represented country, followed by the United Kingdom (25.7%; 26/101) and Canada (16.8%; 17/101). ChatGPT was the most used LLM among the studies, with different versions discussed and compared. LLM applications were discussed in terms of their implications for medical education, clinical assistance, research, and patient education. Conclusions The numerous publications on the use of LLMs in ophthalmology can provide valuable insights for stakeholders and consumers of these applications. LLMs present significant opportunities for advancement in ophthalmology, particularly in team science, education, clinical assistance, and research. Although LLMs show promise, they also pose challenges such as performance inconsistencies, bias, and ethical concerns. The study emphasizes the need for ongoing artificial intelligence improvement, ethical guidelines, and multidisciplinary collaboration. Financial Disclosures: The author(s) have no proprietary or commercial interest in any materials discussed in this article.
Affiliation(s)
- Akshay Prashant Agnihotri: Jacobs Retina Center, University of California, San Diego, La Jolla, California; Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, La Jolla, California; Retina Care Hospital, Nagpur, India
- Ines Doris Nagel: Jacobs Retina Center, University of California, San Diego, La Jolla, California; Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, La Jolla, California; Department of Ophthalmology, University Hospital Augsburg, Augsburg, Germany
- Jose Carlo M Artiaga: Department of Ophthalmology and Visual Sciences, Philippine General Hospital, University of the Philippines Manila, Manila City, Philippines; International Eye Institute, St. Luke's Medical Center Global City, Taguig City, Philippines
- Ma Carmela B Guevarra: Department of Ophthalmology, Massachusetts Eye and Ear, Boston, Massachusetts; Harvard Medical School, Department of Ophthalmology, Boston, Massachusetts
- George Michael N Sosuan: Department of Ophthalmology and Visual Sciences, Philippine General Hospital, University of the Philippines Manila, Manila City, Philippines
- Fritz Gerald P Kalaw: Jacobs Retina Center, University of California, San Diego, La Jolla, California; Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, La Jolla, California; Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, La Jolla, California
18
Sasaki K, Garcia-Manero G, Nigo M, Jabbour E, Ravandi F, Wierda WG, Jain N, Takahashi K, Montalban-Bravo G, Daver NG, Thompson PA, Pemmaraju N, Kontoyiannis DP, Sato J, Karimaghaei S, Soltysiak KA, Raad II, Kantarjian HM, Carter BW. Artificial Intelligence Assessment of Chest Radiographs for COVID-19. Clin Lymphoma Myeloma Leuk 2025; 25:319-327. [PMID: 39710565] [PMCID: PMC11993350] [DOI: 10.1016/j.clml.2024.11.013]
Abstract
BACKGROUND The sensitivity of reverse-transcription polymerase chain reaction (RT-PCR) is limited for the diagnosis of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Chest computed tomography (CT) is reported to have high sensitivity; however, given the limited availability of chest CT during a pandemic, assessment of more readily available imaging, such as chest radiographs, augmented by artificial intelligence may substitute for detection of the features of coronavirus disease 2019 (COVID-19) pneumonia. METHODS We trained a deep convolutional neural network to detect SARS-CoV-2 pneumonia using publicly available chest radiography data comprising 8,851 normal, 6,045 pneumonia, and 200 COVID-19 pneumonia radiographs. The entire cohort was divided into training (n = 13,586) and test (n = 1,510) groups. We assessed the accuracy of prediction with independent external data. RESULTS The sensitivity and positive predictive value of the artificial intelligence assessment were 96.8% and 90.9%, respectively. In the first external validation of 204 chest radiographs from 107 patients with confirmed COVID-19, the artificial intelligence algorithm correctly identified 174 (85%) chest radiographs as COVID-19 pneumonia, covering 97 (91%) patients. In the second external validation, among 50 immunocompromised patients with leukemia, a higher artificial intelligence-predicted probability of COVID-19 correlated with radiographic features suggestive of COVID-19, while radiographs read as normal correlated with a high artificial intelligence-predicted likelihood of normal findings. CONCLUSIONS The artificial intelligence assessment method identified suspicious lung lesions on chest radiographs. This novel approach can identify patients for confirmatory chest CT before the progression of COVID-19 pneumonia.
Affiliation(s)
- Koji Sasaki: Department of Leukemia, The University of Texas MD Anderson Cancer Center, Houston, TX; Department of Hematology, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, Tokyo, Japan
- Masayuki Nigo: Division of Infectious Diseases, Department of Internal Medicine, McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX
- Elias Jabbour: Department of Leukemia, The University of Texas MD Anderson Cancer Center, Houston, TX
- Farhad Ravandi: Department of Leukemia, The University of Texas MD Anderson Cancer Center, Houston, TX
- William G Wierda: Department of Leukemia, The University of Texas MD Anderson Cancer Center, Houston, TX
- Nitin Jain: Department of Leukemia, The University of Texas MD Anderson Cancer Center, Houston, TX
- Koichi Takahashi: Department of Leukemia, The University of Texas MD Anderson Cancer Center, Houston, TX
- Naval G Daver: Department of Leukemia, The University of Texas MD Anderson Cancer Center, Houston, TX
- Philip A Thompson: Department of Leukemia, The University of Texas MD Anderson Cancer Center, Houston, TX
- Naveen Pemmaraju: Department of Leukemia, The University of Texas MD Anderson Cancer Center, Houston, TX
- Dimitrios P Kontoyiannis: Department of Infectious Disease, The University of Texas MD Anderson Cancer Center, Houston, TX
- Junya Sato: Department of Leukemia, The University of Texas MD Anderson Cancer Center, Houston, TX
- Sam Karimaghaei: McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX
- Kelly A Soltysiak: Department of Leukemia, The University of Texas MD Anderson Cancer Center, Houston, TX
- Issam I Raad: Department of Infectious Disease, The University of Texas MD Anderson Cancer Center, Houston, TX
- Hagop M Kantarjian: Department of Leukemia, The University of Texas MD Anderson Cancer Center, Houston, TX
- Brett W Carter: Department of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, Houston, TX
Collapse
|
19
Kuo D, Gao Q, Patel D, Pajic M, Hadziahmetovic M. How Foundational Is the Retina Foundation Model? Estimating RETFound's Label Efficiency on Binary Classification of Normal versus Abnormal OCT Images. Ophthalmol Sci 2025; 5:100707. [PMID: 40161460] [PMCID: PMC11950740] [DOI: 10.1016/j.xops.2025.100707] [Received: 08/25/2024] [Revised: 12/25/2024] [Accepted: 01/07/2025] [Indexed: 04/02/2025]
Abstract
Objective While the availability of public internet-scale datasets of images and language has catalyzed remarkable progress in machine learning, medical datasets are constrained by regulations protecting patient privacy and the time and cost required for curation and labeling. Self-supervised learning, or pretraining, has demonstrated great success in learning meaningful representations from large unlabeled datasets to enable efficient learning on downstream tasks. In ophthalmology, the RETFound model, a large vision transformer (ViT-L) model trained by masked autoencoding on 1.6 million color fundus photos and OCT B-scans, is the first model pretrained at such scale for ophthalmology, demonstrating strong performance on downstream tasks from diabetic retinopathy grading to stroke detection. Here, we measure the label efficiency of the RETFound model in learning to identify normal vs. abnormal OCT B-scans obtained as part of a pilot study for primary care-based diabetic retinopathy screening in North Carolina. Design A total of 1150 TopCon Maestro OCT central B-scans (981 normal and 169 abnormal) were randomly split 80/10/10 into training, validation, and test datasets. Model training and hyperparameter tuning were performed on the training set guided by validation set performance. The best-performing models were then evaluated on the final test set. Subjects Six hundred forty-seven patients with diabetes in the Duke Health System participating in primary care diabetic retinopathy screening contributed 1150 TopCon Maestro OCT central B-scans. Methods Three models (ResNet-50, ViT-L, and RETFound) were fine-tuned on the full training dataset of 915 OCT B-scans and on smaller training data subsets of 500, 250, 100, and 50 OCT B-scans, across 3 random seeds.
Main Outcome Measures Mean accuracy, area under the receiver operating characteristic curve (AUROC), area under the precision-recall curve (AUPRC), F1 score, precision, and recall on the final held-out test set were reported for each model. Results Across 3 random seeds and all training dataset sizes, RETFound outperformed both ResNet-50 and ViT-L on all evaluation metrics on the final held-out test dataset. ViT-L and ResNet-50 performed comparably at the largest training dataset sizes of 915 and 500 OCT B-scans; however, ResNet-50 suffered more pronounced performance degradation at the smallest dataset sizes of 100 and 50 OCT B-scans. Conclusions Our findings validate the benefits of RETFound's additional retina-specific pretraining. Further research is needed to establish best practices for fine-tuning RETFound to downstream tasks. Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
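The held-out metrics this study reports (accuracy, AUROC, F1, precision, recall) can be computed from labels and model scores alone. The sketch below is an illustrative stand-alone implementation, not the study's actual pipeline; it obtains AUROC from the Mann-Whitney rank statistic rather than an explicit ROC curve.

```python
def binary_metrics(y_true, y_score, threshold=0.5):
    """Held-out metrics for a binary classifier (label 1 = abnormal scan)."""
    y_pred = [1 if s >= threshold else 0 for s in y_score]
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0  # sensitivity
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    # AUROC as the Mann-Whitney U statistic: the probability that a random
    # positive case scores higher than a random negative (ties count half).
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    auroc = wins / (len(pos) * len(neg))
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "auroc": auroc}
```

Note that AUROC is threshold-free while the other metrics depend on the chosen operating point, which is why imbalanced datasets like this one (981 normal vs. 169 abnormal) are often reported with both AUROC and AUPRC.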
Affiliation(s)
- David Kuo
- Department of Ophthalmology, Duke University, Durham, North Carolina
- Qitong Gao
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina
- Dev Patel
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina
- Miroslav Pajic
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina
- Department of Computer Science, Duke University, Durham, North Carolina
- Majda Hadziahmetovic
- Department of Ophthalmology, Duke University, Durham, North Carolina
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina
20
Fickweiler W, Sampani K, Markel DS, Levine SR, Sun JK, Gardner TW. Advancing Toward a World Without Vision Loss From Diabetes: Insights From The Mary Tyler Moore Vision Initiative Symposium 2024 on Curing Vision Loss From Diabetes. Transl Vis Sci Technol 2025; 14:12. [PMID: 40338731] [PMCID: PMC12077579] [DOI: 10.1167/tvst.14.5.12] [Received: 03/03/2025] [Accepted: 03/07/2025] [Indexed: 05/10/2025]
Abstract
The Mary Tyler Moore Vision Initiative (MTM Vision) honors Mary Tyler Moore's commitment to ending vision loss from diabetes. Founded by Moore's husband, Dr. S. Robert Levine, MTM Vision aims to accelerate breakthroughs in diabetic retinal disease (DRD). At the MTM Vision Symposium 2024 on Curing Vision Loss from Diabetes, experts highlighted the urgent need for updated DRD staging systems, clinically relevant endpoints, and novel biomarkers to detect early disease changes. MTM Vision is advancing two clinical trials in collaboration with the DRCR Retina Network, launching a public awareness campaign, and welcoming Boehringer Ingelheim as the first founding industry member of its pre-competitive Consortium. Speakers emphasized big-data strategies and artificial intelligence (AI)-driven tools to improve DRD diagnosis, risk prediction, and personalized treatment. They also showcased new efforts to bridge academic discoveries with industry expertise, illustrating promising work on vascular regeneration and cellular senescence that may yield future therapies. The MTM Vision Biorepository and Resource Center is expanding tissue collections, enabling multi-omics analyses to study DRD mechanisms. Patient voices were central to the discussion, with calls for enhanced patient-reported outcomes, caregiver support, and broader education on DRD's risks. The symposium also underscored the importance of integrating mental health, quality of life measures, and ongoing patient input to guide clinical research.
Affiliation(s)
- Ward Fickweiler
- Beetham Eye Institute, Joslin Diabetes Center, Boston, MA, USA
- Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Konstantina Sampani
- Beetham Eye Institute, Joslin Diabetes Center, Boston, MA, USA
- Department of Medicine, Harvard Medical School, Boston, MA, USA
- Jennifer K. Sun
- Beetham Eye Institute, Joslin Diabetes Center, Boston, MA, USA
- Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Thomas W. Gardner
- Department of Ophthalmology and Visual Sciences, Kellogg Eye Center, University of Michigan Medical School, Ann Arbor, MI, USA
21
Meng Z, Guan Z, Yu S, Wu Y, Zhao Y, Shen J, Lim CC, Chen T, Yang D, Ran AR, He F, Hamzah H, Singh S, Abd Raof AS, Lee-Boey JWS, Lim SK, Sun X, Ge S, Xu G, Su H, Cheng Y, Lu F, Liao X, Jin H, Deng C, Ruan L, Zhang C, Wu C, Dai R, Jin Y, Wang W, Li T, Liu R, Li J, Shu J, Lu Y, Wang X, Wu Q, Qin Y, Tang J, Sheng X, Jiao Q, Yang X, Guo M, McKay GJ, Hogg RE, Liew G, Chee EYL, Hsu W, Lee ML, Szeto S, Luk AOY, Chan JCN, Cheung CY, Tan GSW, Tham YC, Cheng CY, Sabanayagam C, Lim LL, Jia W, Li H, Sheng B, Wong TY. Non-invasive biopsy diagnosis of diabetic kidney disease via deep learning applied to retinal images: a population-based study. Lancet Digit Health 2025:100868. [PMID: 40312169] [DOI: 10.1016/j.landig.2025.02.008] [Received: 03/12/2024] [Revised: 01/13/2025] [Accepted: 02/26/2025] [Indexed: 05/03/2025]
Abstract
BACKGROUND Improving the accessibility of screening diabetic kidney disease (DKD) and differentiating isolated diabetic nephropathy from non-diabetic kidney disease (NDKD) are two major challenges in the field of diabetes care. We aimed to develop and validate an artificial intelligence (AI) deep learning system to detect DKD and isolated diabetic nephropathy from retinal fundus images. METHODS In this population-based study, we developed a retinal image-based AI-deep learning system, DeepDKD, pretrained using 734 084 retinal fundus images. First, for DKD detection, we used 486 312 retinal images from 121 578 participants in the Shanghai Integrated Diabetes Prevention and Care System for development and internal validation, and ten multi-ethnic datasets from China, Singapore, Malaysia, Australia, and the UK (65 406 participants) for external validation. Second, to differentiate isolated diabetic nephropathy from NDKD, we used 1068 retinal images from 267 participants for development and internal validation, and three multi-ethnic datasets from China, Malaysia, and the UK (244 participants) for external validation. Finally, we conducted two proof-of-concept studies: a prospective real-world study with 3 months' follow-up to evaluate the effectiveness of DeepDKD in screening DKD; and a longitudinal analysis of the effectiveness of DeepDKD in differentiating isolated diabetic nephropathy from NDKD on renal function changes with 4·6 years' follow-up. FINDINGS For detecting DKD, DeepDKD achieved an area under the receiver operating characteristic curve (AUC) of 0·842 (95% CI 0·838-0·846) on the internal validation dataset and AUCs of 0·791-0·826 across external validation datasets. For differentiating isolated diabetic nephropathy from NDKD, DeepDKD achieved an AUC of 0·906 (0·825-0·966) on the internal validation dataset and AUCs of 0·733-0·844 across external validation datasets. 
In the prospective study, compared with the metadata model, DeepDKD could detect DKD with higher sensitivity (89·8% vs 66·3%, p<0·0001). In the longitudinal study, participants with isolated diabetic nephropathy and participants with NDKD identified by DeepDKD had a significant difference in renal function outcomes (proportion of estimated glomerular filtration rate decline: 27·45% vs 52·56%, p=0·0010). INTERPRETATION Among diverse multi-ethnic populations with diabetes, a retinal image-based AI-deep learning system showed its potential for detecting DKD and differentiating isolated diabetic nephropathy from NDKD in clinical practice. FUNDING National Key R & D Program of China, National Natural Science Foundation of China, Beijing Natural Science Foundation, Shanghai Municipal Key Clinical Specialty, Shanghai Research Centre for Endocrine and Metabolic Diseases, Innovative research team of high-level local universities in Shanghai, Noncommunicable Chronic Diseases-National Science and Technology Major Project, Clinical Special Program of Shanghai Municipal Health Commission, and the three-year action plan to strengthen the construction of public health system in Shanghai.
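A head-to-head sensitivity comparison like the one reported here (89·8% vs 66·3%, p&lt;0·0001) is commonly assessed with a two-proportion z-test. The snippet below is a generic sketch with illustrative counts, not the study's actual statistical analysis, which may well have used a paired method such as McNemar's test if both models scored the same participants.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for equality of two proportions (pooled standard error)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal survival function.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Illustrative: 90/100 vs 50/100 cases detected by two screening models.
z, p = two_proportion_z(90, 100, 50, 100)
```

With a gap of this size even modest samples yield z well above 6 and a vanishingly small p-value, consistent with the "p&lt;0·0001" style of reporting above.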
Affiliation(s)
- Ziyao Meng
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Institute for Proactive Healthcare, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Centre for Diabetes, Shanghai Key Laboratory of Diabetes Mellitus, Shanghai, China; MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Zhouyu Guan
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Institute for Proactive Healthcare, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Centre for Diabetes, Shanghai Key Laboratory of Diabetes Mellitus, Shanghai, China
- Shujie Yu
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Institute for Proactive Healthcare, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Centre for Diabetes, Shanghai Key Laboratory of Diabetes Mellitus, Shanghai, China
- Yilan Wu
- Beijing Visual Science and Translational Eye Research Institute (BERI), Beijing Tsinghua Changgung Hospital Eye Center, School of Clinical Medicine, Tsinghua Medicine, Tsinghua University, Beijing, China
- Yaoning Zhao
- Beijing Visual Science and Translational Eye Research Institute (BERI), Beijing Tsinghua Changgung Hospital Eye Center, School of Clinical Medicine, Tsinghua Medicine, Tsinghua University, Beijing, China
- Jie Shen
- Medical Records and Statistics Office, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Cynthia Ciwei Lim
- Department of Renal Medicine, Singapore General Hospital, SingHealth-Duke Academic Medical Centre, Singapore, Singapore
- Tingli Chen
- Department of Ophthalmology, Shanghai Health and Medical Center, Wuxi, China
- Dawei Yang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- An Ran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Feng He
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Haslina Hamzah
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Sarkaaj Singh
- Department of Medicine, Faculty of Medicine, Universiti Malaya, Kuala Lumpur, Malaysia
- Soo-Kun Lim
- Department of Medicine, Faculty of Medicine, Universiti Malaya, Kuala Lumpur, Malaysia
- Xufang Sun
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Shuwang Ge
- Department of Nephrology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Gang Xu
- Department of Nephrology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hua Su
- Department of Nephrology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yang Cheng
- Department of Ophthalmology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Feng Lu
- National Engineering Research Centre for Big Data Technology and System, Services Computing Technology and System Laboratory, Cluster and Grid Computing Laboratory, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China
- Xiaofei Liao
- National Engineering Research Centre for Big Data Technology and System, Services Computing Technology and System Laboratory, Cluster and Grid Computing Laboratory, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China
- Hai Jin
- National Engineering Research Centre for Big Data Technology and System, Services Computing Technology and System Laboratory, Cluster and Grid Computing Laboratory, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China
- Chenxin Deng
- Department of Geriatrics, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China; Key Laboratory of Vascular Aging, Ministry of Education, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Lei Ruan
- Department of Geriatrics, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China; Key Laboratory of Vascular Aging, Ministry of Education, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Cuntai Zhang
- Department of Geriatrics, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China; Key Laboratory of Vascular Aging, Ministry of Education, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Chan Wu
- Department of Ophthalmology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Rongping Dai
- Department of Ophthalmology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Yixiao Jin
- Beijing Visual Science and Translational Eye Research Institute (BERI), Beijing Tsinghua Changgung Hospital Eye Center, School of Clinical Medicine, Tsinghua Medicine, Tsinghua University, Beijing, China
- Wenxiao Wang
- Key Laboratory of River Basic Digital Twinning of Ministry of Water Resources, Macau University of Science and Technology, Macau Special Administrative Region, China
- Tingyao Li
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Institute for Proactive Healthcare, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Centre for Diabetes, Shanghai Key Laboratory of Diabetes Mellitus, Shanghai, China; MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ruhan Liu
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Institute for Proactive Healthcare, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Centre for Diabetes, Shanghai Key Laboratory of Diabetes Mellitus, Shanghai, China; MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jiajia Li
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Institute for Proactive Healthcare, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Centre for Diabetes, Shanghai Key Laboratory of Diabetes Mellitus, Shanghai, China; MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jia Shu
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Institute for Proactive Healthcare, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Centre for Diabetes, Shanghai Key Laboratory of Diabetes Mellitus, Shanghai, China; MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yuwei Lu
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Institute for Proactive Healthcare, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Centre for Diabetes, Shanghai Key Laboratory of Diabetes Mellitus, Shanghai, China
- Xiangning Wang
- Department of Ophthalmology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Qiang Wu
- Department of Ophthalmology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yiming Qin
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Institute for Proactive Healthcare, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Centre for Diabetes, Shanghai Key Laboratory of Diabetes Mellitus, Shanghai, China; MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jin Tang
- Department of Clinical Laboratory, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xiaohua Sheng
- Department of Nephrology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Qiong Jiao
- Department of Pathology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xiaokang Yang
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Minyi Guo
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Institute for Proactive Healthcare, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Centre for Diabetes, Shanghai Key Laboratory of Diabetes Mellitus, Shanghai, China; MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Gareth J McKay
- Centre for Public Health, School of Medicine, Dentistry, and Biomedical Sciences, Queen's University Belfast, Belfast, UK
- Ruth E Hogg
- Centre for Public Health, School of Medicine, Dentistry, and Biomedical Sciences, Queen's University Belfast, Belfast, UK
- Gerald Liew
- Westmead Institute for Medical Research, University of Sydney, NSW, Australia
- Wynne Hsu
- School of Computing, National University of Singapore, Singapore
- Mong Li Lee
- School of Computing, National University of Singapore, Singapore
- Simon Szeto
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Andrea O Y Luk
- Department of Medicine and Therapeutics, Hong Kong Institute of Diabetes and Obesity, Li Ka Shing Institute of Health Sciences, The Chinese University of Hong Kong, Prince of Wales Hospital, Hong Kong Special Administrative Region, China
- Juliana C N Chan
- Department of Medicine and Therapeutics, Hong Kong Institute of Diabetes and Obesity, Li Ka Shing Institute of Health Sciences, The Chinese University of Hong Kong, Prince of Wales Hospital, Hong Kong Special Administrative Region, China
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Gavin Siew Wei Tan
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Yih-Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore
- Charumathi Sabanayagam
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore
- Lee-Ling Lim
- Department of Medicine, Faculty of Medicine, Universiti Malaya, Kuala Lumpur, Malaysia; Department of Medicine and Therapeutics, Hong Kong Institute of Diabetes and Obesity, Li Ka Shing Institute of Health Sciences, The Chinese University of Hong Kong, Prince of Wales Hospital, Hong Kong Special Administrative Region, China
- Weiping Jia
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Institute for Proactive Healthcare, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Centre for Diabetes, Shanghai Key Laboratory of Diabetes Mellitus, Shanghai, China
- Huating Li
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Institute for Proactive Healthcare, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Centre for Diabetes, Shanghai Key Laboratory of Diabetes Mellitus, Shanghai, China.
- Bin Sheng
- Shanghai Belt and Road International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Institute for Proactive Healthcare, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Centre for Diabetes, Shanghai Key Laboratory of Diabetes Mellitus, Shanghai, China; MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- Tien Yin Wong
- Beijing Visual Science and Translational Eye Research Institute (BERI), Beijing Tsinghua Changgung Hospital Eye Center, School of Clinical Medicine, Tsinghua Medicine, Tsinghua University, Beijing, China; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Beijing Key Laboratory of Intelligent Diagnostic Technology and Devices for Major Blinding Eye Diseases, Tsinghua Medicine, Tsinghua University, Beijing, China.
22
Ciapponi A, Ballivian J, Gentile C, Mejia JR, Ruiz-Baena J, Bardach A. Diagnostic utility of artificial intelligence software through non-mydriatic digital retinography in the screening of diabetic retinopathy: an overview of reviews. Eye (Lond) 2025:10.1038/s41433-025-03809-y. [PMID: 40301668] [DOI: 10.1038/s41433-025-03809-y] [Received: 04/26/2024] [Revised: 04/07/2025] [Accepted: 04/08/2025] [Indexed: 05/01/2025]
Abstract
OBJECTIVE To evaluate the capability of artificial intelligence (AI) in screening for diabetic retinopathy (DR) utilizing digital retinography captured by non-mydriatic (NM) ≥45° cameras, focusing on diagnostic accuracy, effectiveness, and clinical safety. METHODS We performed an overview of systematic reviews (SRs) up to May 2023 in Medline, Embase, CINAHL, and Web of Science. We used the AMSTAR-2 tool to assess the reliability of each SR. We reported meta-analysis estimates or ranges of diagnostic performance figures. RESULTS Out of 1336 records, ten SRs were selected, most deemed low or critically low quality. Eight primary studies were included in at least five of the ten SRs and 125 in fewer than five SRs. No SR reported efficacy, effectiveness, or safety outcomes. The sensitivity and specificity for referable DR were 68-100% and 20-100%, respectively, with an AUROC range of 88 to 99%. For detecting DR at any stage, sensitivity was 79-100%, and specificity was 50-100%, with an AUROC range of 93 to 98%. CONCLUSIONS AI demonstrates strong diagnostic potential for DR screening using NM cameras, with adequate sensitivity but variable specificity. While AI is increasingly integrated into routine practice, this overview highlights significant heterogeneity in AI models and the cameras used. Additionally, our study highlights the low quality of existing systematic reviews and the significant challenge of integrating the rapidly growing volume of emerging evidence in this field. Policymakers should carefully evaluate AI tools in specific contexts, and future research must generate updated high-quality evidence to optimize their application and improve patient outcomes.
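Sensitivity and specificity ranges like those above do not by themselves say how trustworthy a positive screen is: at typical screening prevalence, positive predictive value can be modest even for an accurate test. The sketch below applies Bayes' rule to convert sensitivity, specificity, and prevalence into predictive values; the numbers in the usage line are illustrative, not drawn from the included reviews.

```python
def screening_predictive_values(sensitivity, specificity, prevalence):
    """PPV and NPV of a screening test at a given disease prevalence."""
    # Expected outcome fractions per screened person (Bayes' rule).
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    tn = specificity * (1 - prevalence)
    ppv = tp / (tp + fp)   # P(disease | positive screen)
    npv = tn / (tn + fn)   # P(no disease | negative screen)
    return ppv, npv

# Illustrative: 90% sensitivity, 90% specificity, 10% prevalence of referable DR.
ppv, npv = screening_predictive_values(0.9, 0.9, 0.1)
```

Under these illustrative figures only about half of positive screens are true positives, which is one reason the variable specificity noted above matters for referral workload.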
Affiliation(s)
- Agustín Ciapponi
- Instituto de Efectividad Clínica y Sanitaria (IECS), Buenos Aires, Argentina.
- Jamile Ballivian
- Instituto de Efectividad Clínica y Sanitaria (IECS), Buenos Aires, Argentina
- Carolina Gentile
- Hospital Italiano de Buenos Aires, Servicio de Oftalmología, Buenos Aires, Argentina
- Jhonatan R Mejia
- Instituto de Efectividad Clínica y Sanitaria (IECS), Buenos Aires, Argentina
- Jessica Ruiz-Baena
- Àrea d'Avaluació i Qualitat, Agència de Qualitat i Avaluació Sanitàries de Catalunya (AQuAS), Catalunya, España
- Ariel Bardach
- Instituto de Efectividad Clínica y Sanitaria (IECS), Buenos Aires, Argentina
23
Lin F, Su Y, Zhao C, Akter F, Yao S, Huang S, Shao X, Yao Y. Tackling visual impairment: emerging avenues in ophthalmology. Front Med (Lausanne) 2025; 12:1567159. [PMID: 40357281] [PMCID: PMC12066777] [DOI: 10.3389/fmed.2025.1567159] [Received: 01/26/2025] [Accepted: 04/14/2025] [Indexed: 05/15/2025]
Abstract
Visual impairment, stemming from genetic, degenerative, and traumatic causes, affects millions globally. Recent advancements in ophthalmology offer novel strategies for managing and potentially reversing these conditions. Here, we explore ten emerging avenues: gene therapy, stem cell therapy, advanced imaging, novel therapeutics, nanotechnology, artificial intelligence (AI) and machine learning, teleophthalmology, optogenetics, bionics, and neuro-ophthalmology, all making strides to improve diagnosis, treatment, and vision restoration. Among these, gene therapy and stem cell therapy are revolutionizing the treatment of retinal degenerative diseases, while advanced imaging technologies enable early detection and personalized care. Therapeutic advancements such as anti-vascular endothelial growth factor therapies and neuroprotective agents, along with nanotechnology, have improved clinical outcomes for multiple ocular conditions. AI, especially machine learning, is enhancing diagnostic accuracy, facilitating early detection, and enabling personalized treatment strategies, particularly when integrated with advanced imaging technologies. Teleophthalmology, further strengthened by AI, is expanding access to care, particularly in underserved regions, whereas emerging technologies such as optogenetics, bionics, and neuro-ophthalmology offer new hope for patients with severe vision impairment. In light of ongoing research, we summarize the current clinical landscape and the potential of these innovations to revolutionize the management of visual impairments. We also address the challenges and limitations associated with these emerging avenues, providing insights into their future trajectories in clinical practice. Continued advancements in these fields promise to reshape the landscape of ophthalmic care, ultimately improving the quality of life for individuals with visual impairments.
Affiliation(s)
- Fang Lin
- Department of Ophthalmology, Xinjiang 474 Hospital, China RongTong Medical Healthcare Group CO. LTD, Urumqi, Xinjiang Uygur Autonomous Region, China
- Yuxing Su
- Department of Ophthalmology, Xinjiang 474 Hospital, China RongTong Medical Healthcare Group CO. LTD, Urumqi, Xinjiang Uygur Autonomous Region, China
- Chenxi Zhao
- Department of Ophthalmology, Xinjiang 474 Hospital, China RongTong Medical Healthcare Group CO. LTD, Urumqi, Xinjiang Uygur Autonomous Region, China
- Farhana Akter
- Faculty of Arts and Sciences, Harvard University, Cambridge, MA, United States
- Shun Yao
- Department of Neurosurgery, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Sheng Huang
- Department of Ophthalmology, TongRen Municipal People's Hospital, Tongren, Guizhou, China
- Xiaodong Shao
- Department of Neurosurgery, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Yizheng Yao
- Department of Neurology and Clinical Research Center of Neurological Disease, The Second Affiliated Hospital of Soochow University, Soochow University, Suzhou, Jiangsu, China
24
Zhang J, Tian B, Tian M, Si X, Li J, Fan T. A scoping review of advancements in machine learning for glaucoma: current trends and future direction. Front Med (Lausanne) 2025; 12:1573329. [PMID: 40342583 PMCID: PMC12059588 DOI: 10.3389/fmed.2025.1573329] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2025] [Accepted: 04/03/2025] [Indexed: 05/11/2025] Open
Abstract
Introduction Machine learning technology has demonstrated significant potential in glaucoma research, particularly in early diagnosis, prediction of disease progression, evaluation of treatment responses, and development of personalized treatment strategies. The application of machine learning not only enhances understanding of the pathological mechanisms of glaucoma and optimizes the diagnostic process but also provides patients with accurate medical services. Methods This study aimed to describe the current state of research, highlight directions for further development, and identify potential trends for improvement. The review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for scoping reviews (PRISMA-ScR) to showcase advancements in the application of machine learning to glaucoma research and treatment. Results We employed a comprehensive search strategy to retrieve literature from the Web of Science Core Collection database, ultimately including 3,581 articles in the analysis. Through data analysis, we identified current research hotspots, noted differences in researchers' attitudes and opinions, and predicted potential future development trends. Discussion We divided the research topics into six categories, identifying "eye diseases", "retinal fundus imaging", and "risk factors" as the key terms for the development of this field. These findings signify the promising prospects of machine learning, particularly when integrated with multimodal technologies and large language models, to enhance the diagnosis and treatment of glaucoma.
Affiliation(s)
- Jiatong Zhang
- The First Clinical Medical School, China Medical University, Shenyang, China
- Bocheng Tian
- The Second Clinical Medical School, China Medical University, Shenyang, China
- Mingke Tian
- Emory College of Arts and Sciences, Emory University, Atlanta, GA, United States
- Xinxin Si
- The Fourth Clinical Medical School, China Medical University, Shenyang, China
- Jiani Li
- The First Clinical Medical School, China Medical University, Shenyang, China
- Ting Fan
- School of Intelligent Medicine, China Medical University, Shenyang, China
25
Jin L, Tao Y, Liu Y, Liu G, Lin L, Chen Z, Peng S. SEM model analysis of diabetic patients' acceptance of artificial intelligence for diabetic retinopathy. BMC Med Inform Decis Mak 2025; 25:175. [PMID: 40275308 PMCID: PMC12023383 DOI: 10.1186/s12911-025-03008-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2025] [Accepted: 04/17/2025] [Indexed: 04/26/2025] Open
Abstract
AIMS This study aimed to investigate diabetic patients' acceptance of artificial intelligence (AI) devices for diabetic retinopathy screening and the related influencing factors. METHODS An integrated model was proposed, and structural equation modeling was used to evaluate item and construct reliability and validity via confirmatory factor analysis. The model's path effects, significance, goodness of fit, and mediation and moderation effects were analyzed. RESULTS Intention to Use (IU) was significantly affected by Subjective Norms (SN), Resistance Bias (RB), and Uniqueness Neglect (UN). Perceived Usefulness (PU) and Perceived Ease of Use (PEOU) were significant mediators between IU and the other variables. The moderating effect of Trust (TR) on the path from PU to IU was non-significant. CONCLUSIONS The significant positive impact of SN may be caused by China's collectivist and authoritarian cultures. Both PU and PEOU had significant mediation effects, suggesting that impressions influence acceptance. Although the moderating effect of TR was not significant, its unstandardized factor loading remained positive; we presume this may be due to an insufficient sample size and the public's unfamiliarity with AI medical devices.
Affiliation(s)
- Luchang Jin
- Provincial Key Laboratory of Intelligent Medical Care and Elderly Health Management, Chengdu Medical College, Chengdu, China
- Yanmin Tao
- School of Nursing, Chengdu University of Traditional Chinese Medicine, Chengdu, China
- Ya Liu
- Department of Endocrinology, Hospital of Chengdu University of Traditional Chinese Medicine, Chengdu, China
- Gang Liu
- The First Affiliated Hospital of Chengdu Medical College, Chengdu, China
- Lin Lin
- School of Elderly Health/Collaborative Innovation Centre of Elderly Care and Health, Chengdu Medical College, Chengdu, China
- Zixi Chen
- Eighth Branch of the Democratic Construction Association of Sichuan Provincial Working Committee, Chengdu, China
- Sihan Peng
- TCM Regulating Metabolic Diseases Key Laboratory of Sichuan Province, Hospital of Chengdu University of Traditional Chinese Medicine, Chengdu, China
26
Tan TE, Ng YP, Calhoun C, Chaung JQ, Yao J, Wang Y, Zhen L, Xu X, Liu Y, Goh RS, Piccoli G, Vujosevic S, Tan GS, Sun JK, Ting DS. Detection of Center-Involved Diabetic Macular Edema with Visual Impairment using Multimodal Artificial Intelligence Algorithms. Ophthalmol Retina 2025:S2468-6530(25)00173-3. [PMID: 40286985 DOI: 10.1016/j.oret.2025.04.016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2024] [Revised: 04/14/2025] [Accepted: 04/15/2025] [Indexed: 04/29/2025]
Abstract
PURPOSE To develop artificial intelligence (AI) models for automated detection of center-involved diabetic macular edema (CI-DME) with visual impairment using color fundus photographs (CFP) and optical coherence tomography (OCT) scans. DESIGN AI effort using pooled data from multi-center studies. PARTICIPANTS Datasets consisted of diabetic participants with or without CI-DME, who had CFP, OCT, and best corrected visual acuity (BCVA) obtained after manifest refraction. The development dataset was from DRCR Retina Network clinical trials, external testing dataset 1 was from the Singapore National Eye Centre, Singapore, and external testing dataset 2 was from the Eye Clinic, IRCCS MultiMedica, Milan, Italy. METHODS AI models were trained to detect CI-DME, visual impairment (BCVA 20/32 or worse), and CI-DME with visual impairment, using CFPs alone, OCTs alone, and both CFPs and OCTs together (multimodal). Data from 1,007 eyes were used to train and validate the algorithms, and data from 448 eyes were used for testing. MAIN OUTCOME MEASURES Area under the receiver operating characteristic curve (AUC) values. RESULTS In the primary testing set, the CFP model, OCT model, and multimodal model had AUCs of 0.848 (95% CI 0.787-0.900), 0.913 (95% CI 0.870-0.947), and 0.939 (95% CI 0.906-0.964), respectively, for detection of CI-DME with visual impairment. In external testing dataset 1, the corresponding AUCs were 0.756 (95% CI 0.624-0.870), 0.949 (95% CI 0.889-0.989), and 0.917 (95% CI 0.837-0.979). In external testing dataset 2, the corresponding AUCs were 0.881 (95% CI 0.822-0.940), 0.828 (95% CI 0.749-0.905), and 0.907 (95% CI 0.852-0.952). CONCLUSION The AI models showed good diagnostic performance for detection of CI-DME with visual impairment. The multimodal (CFP and OCT) model did not offer additional benefit over the OCT model alone. If validated in prospective studies, these AI models could potentially help to improve triage and detection of patients who require prompt treatment.
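AUC point estimates with 95% confidence intervals like those above are commonly obtained by bootstrap resampling of the test eyes. A dependency-free sketch (the paper does not state its CI method, so the percentile bootstrap here is an assumption):

```python
import random
from itertools import product

def auc(y_true, scores):
    """AUC as the probability that a random positive outranks a random
    negative (Mann-Whitney formulation; ties count one half)."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

def bootstrap_auc_ci(y_true, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the AUC,
    resampling cases (eyes) with replacement."""
    rng = random.Random(seed)
    cases = list(zip(y_true, scores))
    stats = []
    while len(stats) < n_boot:
        sample = [rng.choice(cases) for _ in cases]
        ys = [y for y, _ in sample]
        if 0 < sum(ys) < len(ys):  # need both classes in the resample
            stats.append(auc(ys, [s for _, s in sample]))
    stats.sort()
    return stats[int((alpha / 2) * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]
```

Note that resampling whole cases (rather than predictions) preserves the pairing between label and score, which is what makes the interval valid for a fixed test set.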
Affiliation(s)
- Tien-En Tan
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-National University of Singapore Medical School, Singapore
- Yi Pin Ng
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
- Jia Quan Chaung
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Jie Yao
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-National University of Singapore Medical School, Singapore
- Yan Wang
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
- Liangli Zhen
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
- Xinxing Xu
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
- Yong Liu
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
- Rick Sm Goh
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
- Stela Vujosevic
- Eye Clinic, IRCCS MultiMedica, Milan, Italy; Department of Biomedical, Surgical and Dental Sciences, University of Milan, Milan, Italy
- Gavin Sw Tan
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-National University of Singapore Medical School, Singapore
- Jennifer K Sun
- Joslin Diabetes Center, Beetham Eye Institute, Harvard Department of Ophthalmology, Boston, Massachusetts
- Daniel Sw Ting
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-National University of Singapore Medical School, Singapore
27
Kashiwagi K, Toyoura M, Mao X, Kawase K, Tanito M, Nakazawa T, Miki A, Mori K, Yoshitomi T. Influence of artificial intelligence on ophthalmologists' judgments in glaucoma. PLoS One 2025; 20:e0321368. [PMID: 40238811 PMCID: PMC12002539 DOI: 10.1371/journal.pone.0321368] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2024] [Accepted: 03/05/2025] [Indexed: 04/18/2025] Open
Abstract
PURPOSE To examine the influence of artificial intelligence (AI) on physicians' judgments regarding the presence and severity of glaucoma on fundus photographs in an online simulation system. METHODS Forty-five trainee and expert ophthalmologists independently evaluated 120 fundus photographs, including 30 photographs each from patients with no glaucoma, mild glaucoma, moderate glaucoma, and severe glaucoma. A second trial was conducted at least one week after the initial trial in which photograph presentation order was randomized. During the second trial, 30% of the glaucoma judgments made by the AI system were intentionally incorrect. The evaluators were asked about their thoughts on AI in ophthalmology via a 3-item questionnaire. RESULTS The percentage of correct responses for all images significantly improved (P < 0.001) from 48.4 ± 24.8% in the initial trial to 59.6 ± 20.3% in the second trial. The improvement in the correct response rate was significantly greater for trainees (14.2 ± 19.0%) than for experts (8.6 ± 11.4%) (P = 0.04). The correct response rate was 63.9 ± 20.6% when the AI response was correct, significantly greater than the 47.9 ± 26.6% when the AI response was incorrect (P < 0.0001). For trainees, the correct response rate was significantly greater when the AI's response was correct than when it was incorrect. However, for experts, the effect was less pronounced. The decision time was significantly longer when the AI response was incorrect than when it was correct (P = 0.003). CONCLUSION In fundus photography-based glaucoma detection, the results of AI systems can influence physicians' judgments, particularly those of physicians with less experience.
Affiliation(s)
- Kenji Kashiwagi
- Department of Ophthalmology, University of Yamanashi Faculty of Medicine, Chuo, Japan
- Masahiro Toyoura
- Department of Computer Science and Engineering, University of Yamanashi, Kofu, Japan
- Xiaoyang Mao
- Department of Computer Science and Engineering, University of Yamanashi, Kofu, Japan
- Kazuhide Kawase
- Yasuma Eye Clinic, Nagoya, Japan
- Department of Ophthalmology Protective Care for Sensory Disorders, Nagoya University Graduate School of Medicine, Nagoya, Japan
- Masaki Tanito
- Department of Ophthalmology, Shimane University Faculty of Medicine, Izumo, Japan
- Toru Nakazawa
- Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan
- Atsuya Miki
- Department of Myopia Control Research, Aichi Medical University, Nagoya, Japan
- Kazuhiko Mori
- Baptist Eye Institute, Kyoto, Japan
- Department of Ophthalmology, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Takeshi Yoshitomi
- Department of Orthoptics, Fukuoka International University of Health and Welfare, Fukuoka, Japan
28
Maino AP, Klikowski J, Strong B, Ghaffari W, Woźniak M, Bourcier T, Grzybowski A. Artificial Intelligence vs. Human Cognition: A Comparative Analysis of ChatGPT and Candidates Sitting the European Board of Ophthalmology Diploma Examination. Vision (Basel) 2025; 9:31. [PMID: 40265399 PMCID: PMC12015923 DOI: 10.3390/vision9020031] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2025] [Revised: 04/07/2025] [Accepted: 04/07/2025] [Indexed: 04/24/2025] Open
Abstract
BACKGROUND/OBJECTIVES This paper assesses ChatGPT's performance in answering European Board of Ophthalmology Diploma (EBOD) examination papers and compares the results to pass benchmarks and candidate results. METHODS This cross-sectional study used a sample of past exam papers from the 2012, 2013, and 2020-2023 EBOD examinations. It analyzed ChatGPT's responses to 440 multiple choice questions (MCQs), each containing five true/false statements (2,200 statements in total), and 48 single best answer (SBA) questions. RESULTS For MCQs, ChatGPT scored 64.39% on average; its strongest metric was precision (68.76%), and it performed best on pathology MCQs (Grubbs test p < 0.05). Optics and refraction had the lowest-scoring MCQ performance across all metrics. ChatGPT-3.5 Turbo performed worse than human candidates and ChatGPT-4o on easy questions (75% vs. 100% accuracy) but outperformed both on challenging questions (50% vs. 28% accuracy). ChatGPT's SBA performance averaged 28.43%, with the highest score and strongest performance in precision (29.36%); pathology SBA questions were consistently the lowest-scoring topic across most metrics. ChatGPT showed a nonsignificant tendency to select option 1 more frequently (p = 0.19). On SBAs, human candidates scored higher than ChatGPT in all metric areas measured. CONCLUSIONS ChatGPT performed more strongly on true/false questions, reaching a pass mark in most instances. Performance was poorer on SBA questions, suggesting that ChatGPT's information retrieval is better than its knowledge integration. ChatGPT could become a valuable tool in ophthalmic education, allowing exam boards to test their papers to ensure they are pitched at the right level, mark open-ended questions, and provide detailed feedback.
Affiliation(s)
- Anna P. Maino
- Manchester Royal Eye Hospital, Manchester M13 9WL, UK
- Jakub Klikowski
- Department of Systems and Computer Networks, Wrocław University of Science and Technology, 50-370 Wrocław, Poland
- Brendan Strong
- European Board of Ophthalmology Examination Headquarters, RP56PT10 Kilcullen, Ireland
- Wahid Ghaffari
- Department of Medical Education, Stockport NHS Foundation Trust, Stockport SK2 7JE, UK
- Michał Woźniak
- Department of Systems and Computer Networks, Wrocław University of Science and Technology, 50-370 Wrocław, Poland
- Tristan Bourcier
- Department of Ophthalmology, University of Strasbourg, 67081 Strasbourg, France
29
Liu Y, Peng X, Wei X, Geng L, Zhang F, Xiao Z, Lin JCW. Label-Aware Dual Graph Neural Networks for Multi-Label Fundus Image Classification. IEEE J Biomed Health Inform 2025; 29:2731-2743. [PMID: 39255075 DOI: 10.1109/jbhi.2024.3457232] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/12/2024]
Abstract
Fundus disease is a complex and widespread condition involving a variety of pathologies. Early diagnosis from fundus images can effectively prevent further disease and provide targeted treatment plans for patients. Deep learning models for classifying fundus disease have recently emerged as a critical research field attracting widespread attention. In practice, however, most existing methods focus only on the local visual cues of a single image, ignoring both the explicit interaction similarity between subjects and the correlation among pathologies in fundus diseases. In this paper, we propose a novel label-aware dual graph neural network for multi-label fundus image classification, consisting of population-based and pathology-based graph representation learning modules. Specifically, we first construct a population-based graph that integrates image features and non-image information to learn patient representations by incorporating associations between subjects. We then represent pathologies as a sparse graph whose nodes carry pathology-based feature vectors and whose edges correspond to the probability of label co-occurrence, generating a set of classifier scores through multi-layer graph information propagation. Finally, our model adaptively recalibrates the multi-label outputs. Detailed experiments and analysis of our results show the effectiveness of our method compared with state-of-the-art multi-label fundus image classification methods.
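The pathology-graph module described above propagates classifier scores along label co-occurrence edges. A much-simplified, dependency-free sketch of one such propagation step (the blending weight `alpha` and the row-normalization scheme are illustrative assumptions, not the paper's exact formulation):

```python
def propagate_label_scores(scores, cooccur, alpha=0.5):
    """One propagation step over a label co-occurrence graph: each label's
    classifier score is blended with the scores of labels it co-occurs with.
    A simplified stand-in for a pathology-graph module."""
    k = len(scores)
    # Row-normalize the co-occurrence counts into transition weights.
    norm = [[cooccur[i][j] / sum(cooccur[i]) for j in range(k)] for i in range(k)]
    # Mix each label's score with its neighbours' scores.
    mixed = [sum(norm[i][j] * scores[j] for j in range(k)) for i in range(k)]
    return [(1 - alpha) * s + alpha * m for s, m in zip(scores, mixed)]

# Three labels; label 0 co-occurs with 1, and label 1 with both neighbours:
cooccur = [[1, 1, 0],
           [1, 1, 1],
           [0, 1, 1]]
out = propagate_label_scores([1.0, 0.0, 0.0], cooccur)
```

A strong score on one pathology thus raises the scores of pathologies that frequently co-occur with it, which is the intuition behind propagating over the label graph.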
30
Yao H, Wang X, Suo Y, He J, Chu C, Yang Z, Xu Q, Zhou J, Zhu M, Sun X, Ge L. Primary angle-closed diseases recognition through artificial intelligence-based anterior segment-optical coherence tomography imaging. Graefes Arch Clin Exp Ophthalmol 2025; 263:1081-1087. [PMID: 39680113 DOI: 10.1007/s00417-024-06709-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2024] [Revised: 09/21/2024] [Accepted: 12/05/2024] [Indexed: 12/17/2024] Open
Abstract
PURPOSE In this study, artificial intelligence (AI) was used to learn the classification of anterior segment optical coherence tomography (AS-OCT) images. The AI system automatically analyzes the angular structure of AS-OCT images and classifies the anterior chamber angle, improving the efficiency of AS-OCT image analysis. METHODS The subjects were drawn from a glaucoma screening and prevention project for elderly people in a Shanghai community. Each scan contained 72 cross-sectional AS-OCT frames. We developed deep learning-based software for automatic anterior chamber angle analysis of AS-OCT images. Classifier performance was evaluated against glaucoma experts' grading of AS-OCT images as the reference standard. Outcome measures included accuracy (ACC) and area under the receiver operating characteristic curve (AUC). RESULTS 94,895 AS-OCT images were collected from 687 participants, of which 69,243 were annotated as open, 16,433 as closed, and 9,219 as non-gradable. Class-balanced training data were formed by randomly extracting the same number of open-angle images as closed-angle images, yielding 22,393 images (11,127 open, 11,256 closed). The best-performing classifier was developed by applying transfer learning to the ResNet-50 architecture; against experts' grading, it achieved an AUC of 0.9635. CONCLUSION Deep learning classifiers effectively detect angle closure through automated analysis of AS-OCT images. This system could automate clinical evaluation of the anterior chamber angle and improve the efficiency of interpreting AS-OCT images. The results demonstrate the potential of the deep learning system for rapid recognition of populations at high risk of primary angle-closure disease (PACD).
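The class-balancing step described in the abstract (undersampling the majority open-angle class to match the closed-angle count) can be sketched as follows; the function name and seed handling are illustrative:

```python
import random

def class_balance_indices(labels, seed=0):
    """Undersample every class down to the size of the smallest one,
    mirroring the balancing step described above (equal numbers of open-
    and closed-angle frames). Returns the sorted indices of kept samples."""
    rng = random.Random(seed)
    by_class = {}
    for i, lab in enumerate(labels):
        by_class.setdefault(lab, []).append(i)
    n = min(len(ix) for ix in by_class.values())
    keep = []
    for ix in by_class.values():
        keep.extend(rng.sample(ix, n))  # random draw without replacement
    return sorted(keep)

# Toy run: 5 "open" frames and 2 "closed" frames -> 2 of each are kept.
kept = class_balance_indices(["open"] * 5 + ["closed"] * 2)
```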
Affiliation(s)
- Haipei Yao
- Department of Ophthalmology, Shanghai Eye Diseases Prevention & Treatment Center/Shanghai Eye Hospital, School of Medicine, Tongji University, Shanghai, China
- Xiaolei Wang
- Department of Ophthalmology & Visual Science, Eye & ENT Hospital, Shanghai Medical College, Fudan University, Shanghai, 200031, China
- Yan Suo
- Department of Ophthalmology, Shanghai Eye Diseases Prevention & Treatment Center/Shanghai Eye Hospital, School of Medicine, Tongji University, Shanghai, China
- Jiangnan He
- Department of Preventative Ophthalmology, Shanghai Eye Diseases Prevention & Treatment Center, Shanghai Eye Hospital, School of Medicine, Tongji University, Shanghai, China
- Chen Chu
- Department of Preventative Ophthalmology, Shanghai Eye Diseases Prevention & Treatment Center, Shanghai Eye Hospital, School of Medicine, Tongji University, Shanghai, China
- Qiuzhuo Xu
- TowardPi Medical Technology, Beijing, China
- Jian Zhou
- TowardPi Medical Technology, Beijing, China
- Xinghuai Sun
- Department of Ophthalmology & Visual Science, Eye & ENT Hospital, Shanghai Medical College, Fudan University, Shanghai, 200031, China
- State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Institutes of Brain Science, Fudan University, Shanghai, 200032, China
- NHC Key Laboratory of Myopia, Chinese Academy of Medical Sciences, and Shanghai Key Laboratory of Visual Impairment and Restoration, Fudan University, Shanghai, 200031, China
- Ling Ge
- Department of Ophthalmology, Shanghai Eye Diseases Prevention & Treatment Center/Shanghai Eye Hospital, School of Medicine, Tongji University, Shanghai, China
31
Chen J, Yuan XL, Liao Z, Zhu W, Zhou X, Duan X. Research Trends and Hotspots of Big Data in Ophthalmology: A Bibliometric Analysis and Visualization. Semin Ophthalmol 2025; 40:210-222. [PMID: 39460752 DOI: 10.1080/08820538.2024.2421478] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/16/2024] [Revised: 10/14/2024] [Accepted: 10/20/2024] [Indexed: 10/28/2024]
Abstract
PURPOSE The explosion of modern information has propelled global medicine into a new era of big-data healthcare. Ophthalmology is one of the medical specialties most strongly driven by big-data analytics. This study aimed to describe the development status and research hotspots of big data in ophthalmology. METHODS English articles and reviews related to big data in ophthalmology published from January 1, 1999, to April 30, 2024, were retrieved from the Web of Science Core Collection. The relevant information was analyzed and visualized using VOSviewer and CiteSpace software. RESULTS A total of 406 qualified documents were included in the analysis. The annual number of publications on big data in ophthalmology entered a rapidly increasing stage in 2019. The United States (n = 147) led in the number of publications, followed by India (n = 77) and China (n = 69). The L.V. Prasad Eye Institute in India was the most productive institution (n = 50), and Anthony Vipin Das was the most influential author with the most relevant literature (n = 45). Electronic medical records were the primary source of ophthalmic big data, and artificial intelligence served as the principal analytics tool. Diabetic retinopathy, glaucoma, and myopia are currently the main topics of interest in this field. CONCLUSIONS The application of big data in ophthalmology has grown rapidly in recent years, and big data is expected to play an increasingly significant role in shaping the future of research and clinical practice in ophthalmology.
Affiliation(s)
- Jiawei Chen
- Aier Academy of Ophthalmology, Central South University, Changsha, Hunan Province, P.R. China
- Aier Glaucoma Institute, Hunan Engineering Research Center for Glaucoma with Artificial Intelligence in Diagnosis and Application of New Materials, Changsha Aier Eye Hospital, Changsha, Hunan Province, P.R. China
- Xiang-Ling Yuan
- Aier Academy of Ophthalmology, Central South University, Changsha, Hunan Province, P.R. China
- Aier Eye Institute, Changsha Aier Eye Hospital, Changsha, Hunan Province, P.R. China
- Zhimin Liao
- Aier Academy of Ophthalmology, Central South University, Changsha, Hunan Province, P.R. China
- Aier Glaucoma Institute, Hunan Engineering Research Center for Glaucoma with Artificial Intelligence in Diagnosis and Application of New Materials, Changsha Aier Eye Hospital, Changsha, Hunan Province, P.R. China
- Wenxiang Zhu
- Aier Academy of Ophthalmology, Central South University, Changsha, Hunan Province, P.R. China
- Aier Glaucoma Institute, Hunan Engineering Research Center for Glaucoma with Artificial Intelligence in Diagnosis and Application of New Materials, Changsha Aier Eye Hospital, Changsha, Hunan Province, P.R. China
- Xiaoyu Zhou
- Aier Academy of Ophthalmology, Central South University, Changsha, Hunan Province, P.R. China
- Aier Glaucoma Institute, Hunan Engineering Research Center for Glaucoma with Artificial Intelligence in Diagnosis and Application of New Materials, Changsha Aier Eye Hospital, Changsha, Hunan Province, P.R. China
- Xuanchu Duan
- Aier Academy of Ophthalmology, Central South University, Changsha, Hunan Province, P.R. China
- Aier Glaucoma Institute, Hunan Engineering Research Center for Glaucoma with Artificial Intelligence in Diagnosis and Application of New Materials, Changsha Aier Eye Hospital, Changsha, Hunan Province, P.R. China
32
Carlà MM, Crincoli E, Rizzo S. RETINAL IMAGING ANALYSIS PERFORMED BY CHATGPT-4o AND GEMINI ADVANCED: The Turning Point of the Revolution? Retina 2025; 45:694-702. [PMID: 39715322 DOI: 10.1097/iae.0000000000004351] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2024]
Abstract
PURPOSE To assess the diagnostic capabilities of the most recent chatbot releases, GPT-4o and Gemini Advanced, when presented with different retinal diseases. METHODS Exploratory analysis of 50 cases with different surgical (n = 27) and medical (n = 23) retinal pathologies, whose optical coherence tomography/angiography scans were dragged into the ChatGPT and Gemini interfaces. The authors then asked "Please describe this image" and classified the diagnosis as: 1) correct; 2) partially correct; 3) wrong; 4) unable to assess exam type; or 5) diagnosis not given. RESULTS ChatGPT indicated the correct diagnosis in 31 of 50 cases (62%), significantly more than Gemini Advanced's 16 of 50 (P = 0.0048). In 24% of cases, Gemini Advanced was unable to produce any answer, stating "That's not something I'm able to do yet." For both, the primary misdiagnosis was macular edema, given erroneously in 16% and 14% of cases, respectively. ChatGPT-4o showed higher rates of correct diagnoses in both surgical (52% vs. 30%) and medical retina (78% vs. 43%). Notably, when optical coherence tomography angiography scans were presented without the corresponding structural image, Gemini was unable to recognize them in any case, confusing the images for artworks. CONCLUSION ChatGPT-4o outperformed Gemini Advanced in diagnostic accuracy on optical coherence tomography/angiography images, even if the range of diagnoses is still limited.
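A paired comparison of two chatbots' correct/incorrect calls on the same 50 cases is typically tested with McNemar's test on the discordant pairs; the paper does not state its exact test, so this exact-binomial sketch with hypothetical counts is an assumption:

```python
from math import comb

def mcnemar_exact_p(b, c):
    """Exact two-sided McNemar test: b = cases model A got right and model B
    got wrong, c = the reverse. Under H0 the discordant pairs split 50/50."""
    n, k = b + c, min(b, c)
    # One-sided binomial tail, then doubled (capped at 1) for a two-sided test.
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical discordant counts (e.g., 17 cases only ChatGPT correct,
# 2 cases only Gemini correct) -- illustrative, not from the paper:
p = mcnemar_exact_p(17, 2)
```

Only the discordant pairs carry information here; cases where both chatbots agree drop out of the statistic entirely.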
Affiliation(s)
- Matteo Mario Carlà
- Ophthalmology Department, "Fondazione Policlinico Universitario A. Gemelli, IRCCS", Rome, Italy
- Ophthalmology Department, Catholic University "Sacro Cuore", Rome, Italy
- Emanuele Crincoli
- Ophthalmology Department, "Fondazione Policlinico Universitario A. Gemelli, IRCCS", Rome, Italy
- Ophthalmology Department, Catholic University "Sacro Cuore", Rome, Italy
- Stanislao Rizzo
- Ophthalmology Department, "Fondazione Policlinico Universitario A. Gemelli, IRCCS", Rome, Italy
- Ophthalmology Department, Catholic University "Sacro Cuore", Rome, Italy
- Consiglio Nazionale Delle Ricerche, Istituto di Neuroscienze, Pisa, Italy
33
Irodi A, Zhu Z, Grzybowski A, Wu Y, Cheung CY, Li H, Tan G, Wong TY. The evolution of diabetic retinopathy screening. Eye (Lond) 2025; 39:1040-1046. [PMID: 39910282] [PMCID: PMC11978858] [DOI: 10.1038/s41433-025-03633-4]
Abstract
Diabetic retinopathy (DR) is a leading cause of preventable blindness and has emerged as a global health challenge, necessitating the development of robust management strategies. As DR prevalence continues to rise, advancements in screening methods have become increasingly critical for timely detection and intervention. This review examines three key advancements in DR screening: a shift from a specialist to a generalist approach, the adoption of telemedicine strategies for expanded access and enhanced efficiency, and the integration of artificial intelligence (AI). In particular, AI offers unprecedented benefits in sustainability and scalability, not only for DR screening but also for other aspects of eye health and the medical field as a whole. Though barriers remain to be addressed, AI holds vast potential for reshaping DR screening and significantly improving patient outcomes globally.
Affiliation(s)
- Anushka Irodi
- School of Clinical Medicine, University of Cambridge, Cambridge, United Kingdom
- Zhuoting Zhu
- Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Andrzej Grzybowski
- Department of Ophthalmology, University of Warmia and Mazury, Olsztyn, Poland
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland
- Yilan Wu
- Tsinghua Medicine, Tsinghua University, Beijing, China
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Huating Li
- Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Centre for Diabetes, Shanghai International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Shanghai, China
- Gavin Tan
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Tien Yin Wong
- Tsinghua Medicine, Tsinghua University, Beijing, China
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Beijing Visual Science and Translational Eye Research Institute (BERI), School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Tsinghua Medicine, Tsinghua University, Beijing, China
34
Malik MH, Wan Z, Gao Y, Ding DW. Efficient diagnosis of retinal disorders using dual-branch semi-supervised learning (DB-SSL): An enhanced multi-class classification approach. Comput Med Imaging Graph 2025; 121:102494. [PMID: 39914126] [DOI: 10.1016/j.compmedimag.2025.102494]
Abstract
The early diagnosis of retinal disorders is essential in preventing permanent or partial blindness, as prompt identification enables early treatment. However, the challenge lies in accurately diagnosing these conditions, especially with limited labeled data. This study aims to enhance the diagnostic accuracy of retinal disorders using a novel Dual-Branch Semi-Supervised Learning (DB-SSL) approach that leverages both labeled and unlabeled data for multi-class classification of eye diseases. Employing Color Fundus Photography (CFP), our research uses a Convolutional Neural Network (CNN) that fuses features from two parallel branches. This framework effectively handles the complexity of ocular imaging by utilizing self-training-based semi-supervised learning to exploit relationships within unlabeled data. We propose and evaluate six CNN models: ResNet50, DenseNet121, MobileNetV2, EfficientNetB0, SqueezeNet1_0, and a hybrid of ResNet50 and MobileNetV2, on their ability to classify four key eye conditions (cataract, diabetic retinopathy, glaucoma, and normal) using a large, diverse OIH dataset containing 4217 fundus images. Among the evaluated models, ResNet50 emerged as the most accurate, achieving 93.14% accuracy on unseen data. The model demonstrates robustness with a sensitivity of 93% and specificity of 98.37%, along with a precision and F1 score of 93% each and a Cohen's kappa of 90.85%. Additionally, it exhibits an AUC of 97.75%, approaching perfection. Systematically removing components from the ResNet50 model in ablation experiments further validates its efficacy. Our findings underscore the potential of advanced CNN architectures combined with semi-supervised learning in enhancing the accuracy of eye disease classification systems, particularly in resource-constrained environments where the procurement of large labeled datasets is challenging and expensive.
This approach is well-suited for integration into Clinical Decision Support Systems (CDSS), providing valuable diagnostic assistance in real-world clinical settings.
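The self-training idea this abstract describes (train on labeled data, pseudo-label confident predictions on unlabeled data, refit) can be sketched with scikit-learn. The linear model, synthetic features, and 80%-unlabeled split below are illustrative stand-ins for the paper's CNN branches and fundus images, not its actual pipeline:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.semi_supervised import SelfTrainingClassifier

# Synthetic 4-class stand-in for fundus-derived features (hypothetical data).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Hide 80% of the training labels: -1 marks unlabeled samples.
y_semi = y_train.copy()
rng = np.random.default_rng(0)
y_semi[rng.random(len(y_semi)) < 0.8] = -1

# Self-training: iteratively pseudo-label points predicted with >= 0.9
# confidence and refit -- the core mechanism behind DB-SSL's use of
# unlabeled data, here with a logistic model instead of a CNN.
model = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.9)
model.fit(X_train, y_semi)
acc = model.score(X_test, y_test)
print(f"test accuracy with ~20% labels: {acc:.3f}")
```

The threshold trades off pseudo-label coverage against noise: a higher value admits fewer but cleaner pseudo-labels into the next fit.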
Affiliation(s)
- Muhammad Hammad Malik
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Zishuo Wan
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Yu Gao
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Da-Wei Ding
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China; Key Laboratory of Knowledge Automation for Industrial Processes, Ministry of Education, University of Science and Technology Beijing, Beijing 100083, China
35
Esmaeilkhanian H, Gutierrez KG, Myung D, Fisher AC. Detection Rate of Diabetic Retinopathy Before and After Implementation of Autonomous AI-based Fundus Photograph Analysis in a Resource-Limited Area in Belize. Clin Ophthalmol 2025; 19:993-1006. [PMID: 40144136] [PMCID: PMC11937645] [DOI: 10.2147/opth.s490473]
Abstract
Purpose To evaluate the use of an autonomous artificial intelligence (AI)-based device to screen for diabetic retinopathy (DR) and to evaluate the frequency of diabetes mellitus (DM) and DR in an under-resourced population served by the Stanford Belize Vision Clinic (SBVC). Patients and Methods The records of all patients from 2017 to 2024 were collected and analyzed. The study was divided into two time periods, Pre-AI (before June 2022, prior to the implementation of the LumineticsCore® device at SBVC) and Post-AI (from June 2022 to the present), further subdivided into pre- and post-COVID-19 periods. Patients were categorized based on self-reported past medical history (PMH) as DM positive (diagnosed DM) or DM negative (no PMH of DM). AI camera outcomes included: negative for more than mild DR (MTMDR), positive for MTMDR, and insufficient exam quality. Results A total of 1897 patients with a mean age of 47.6 years were included. The gradability of encounters by the AI device was 89.1%. The frequency of DR detection increased significantly in the Post-AI period (55/639) compared to the Pre-AI period (38/1258), including during the COVID-19 pandemic. The mean age at DR diagnosis was significantly lower in the Post-AI period (44.1 years) than in the Pre-AI period (60.7 years) among DM negative patients. There was a significant association between DR and hypertension. Additionally, the detection rate of DM increased in the Post-AI period compared to the Pre-AI period. Conclusion Autonomous AI-based screening significantly improves the detection of patients with DR in areas with limited healthcare resources by reducing dependence on on-site ophthalmologists. This approach can be seamlessly integrated into primary care settings, with technicians capturing images quickly and efficiently within a few minutes.
This study demonstrates the effectiveness of autonomous AI in identifying patients with both DR and DM, as well as associated high-burden diseases such as hypertension, across various age ranges.
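The detection frequencies the abstract reports (38/1258 Pre-AI vs. 55/639 Post-AI) can be checked with a simple two-proportion z-test. This is a generic sanity check under a pooled null, not necessarily the statistical method the authors themselves used:

```python
import math

# DR detections as reported in the abstract.
pre_dr, pre_n = 38, 1258    # Pre-AI period
post_dr, post_n = 55, 639   # Post-AI period

p1, p2 = pre_dr / pre_n, post_dr / post_n

# Two-proportion z-test: pooled proportion under H0 (equal rates),
# standard error, then the z statistic for the rate difference.
p_pool = (pre_dr + post_dr) / (pre_n + post_n)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / pre_n + 1 / post_n))
z = (p2 - p1) / se
print(f"Pre-AI rate {p1:.1%}, Post-AI rate {p2:.1%}, z = {z:.2f}")
```

A |z| well above 1.96 is consistent with the abstract's claim that the Post-AI detection rate rose significantly.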
Affiliation(s)
- Houri Esmaeilkhanian
- Department of Ophthalmology, Byers Eye Institute, Stanford University School of Medicine, Palo Alto, CA, USA
- Karen G Gutierrez
- Department of Ophthalmology, Byers Eye Institute, Stanford University School of Medicine, Palo Alto, CA, USA
- Department of Ophthalmology, USC Roski Eye Institute, Keck School of Medicine of the University of Southern California, Los Angeles, CA, USA
- David Myung
- Department of Ophthalmology, Byers Eye Institute, Stanford University School of Medicine, Palo Alto, CA, USA
- Ann Caroline Fisher
- Department of Ophthalmology, Byers Eye Institute, Stanford University School of Medicine, Palo Alto, CA, USA
36
Wan X, Zhang R, Wang Y, Wei W, Song B, Zhang L, Hu Y. Predicting diabetic retinopathy based on routine laboratory tests by machine learning algorithms. Eur J Med Res 2025; 30:183. [PMID: 40102923] [PMCID: PMC11921716] [DOI: 10.1186/s40001-025-02442-5]
Abstract
OBJECTIVES This study aimed to identify risk factors for diabetic retinopathy (DR) and develop machine learning (ML)-based predictive models using routine laboratory data in patients with type 2 diabetes mellitus (T2DM). METHODS Clinical data from 4259 T2DM inpatients at Beijing Tongren Hospital were analyzed, divided into a model construction data set (N = 3936) and an external validation data set (N = 323). Using 39 optimal variables, a prediction model was constructed using the eXtreme Gradient Boosting (XGBoost) algorithm and compared with four other algorithms: support vector machine (SVM), gradient boosting decision tree (GBDT), neural network (NN), and logistic regression (LR). The Shapley Additive exPlanation (SHAP) method was employed to interpret the XGBoost model. External validation was performed to assess model performance. RESULTS DR was present in 47.69% (N = 1877) of T2DM patients in the model construction data set. Among the models tested, the XGBoost model performed best with an AUC of 0.831, accuracy of 0.757, sensitivity of 0.754, specificity of 0.759, and F1-score of 0.752. SHAP explained feature importance for the XGBoost model and identified key risk factors for DR. External validation yielded an accuracy of 0.650 for the XGBoost model. CONCLUSIONS The XGBoost-based prediction model effectively assesses DR risk in T2DM patients using routine laboratory data, aiding clinicians in identifying high-risk individuals and guiding personalized management strategies, especially in medically underserved areas.
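A minimal sketch of the workflow this abstract describes: train a gradient-boosted classifier on routine-lab-style tabular features, score it by AUC, and rank feature importance. scikit-learn's `GradientBoostingClassifier` and permutation importance stand in for the paper's XGBoost/SHAP stack, and the five features and labels below are synthetic:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical standardized routine-lab features (e.g. HbA1c, glucose, ...).
X = rng.normal(size=(n, 5))
# Outcome driven mostly by feature 0, weakly by 1 and 2; 3 and 4 are noise.
logits = 1.5 * X[:, 0] + 0.8 * X[:, 1] - 0.5 * X[:, 2]
y = (logits + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

# Permutation importance: AUC/accuracy drop when a feature is shuffled,
# a model-agnostic cousin of the SHAP ranking used in the paper.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
print(f"test AUC: {auc:.3f}")
print("importance ranking (high to low):", np.argsort(imp.importances_mean)[::-1])
```

Permutation importance gives a global ranking only; SHAP additionally attributes each individual prediction to its features, which is what makes it attractive for per-patient clinical explanation.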
Affiliation(s)
- Xiaohua Wan
- Department of Clinical Laboratory, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, People's Republic of China
- Beijing Center for Clinical Laboratories, Beijing, People's Republic of China
- Department of Clinical Laboratory, Beijing Tongren Hospital, Capital Medical University, Beijing, People's Republic of China
- Ruihuan Zhang
- The Inner Mongolia Medical Intelligent Diagnostics Big Data Research Institute, Inner Mongolia, People's Republic of China
- Yanan Wang
- The Inner Mongolia Medical Intelligent Diagnostics Big Data Research Institute, Inner Mongolia, People's Republic of China
- Wei Wei
- Department of Medical Record, Beijing Tongren Hospital, Capital Medical University, Beijing, People's Republic of China
- Biao Song
- The Inner Mongolia Medical Intelligent Diagnostics Big Data Research Institute, Inner Mongolia, People's Republic of China
- Lin Zhang
- Department of Medical Record, Beijing Tongren Hospital, Capital Medical University, Beijing, People's Republic of China
- Department of Endocrinology, Beijing Tongren Hospital, Capital Medical University, Beijing, People's Republic of China
- Beijing Diabetes Research Institute, Beijing, People's Republic of China
- Yanwei Hu
- Department of Clinical Laboratory, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, People's Republic of China
- Beijing Center for Clinical Laboratories, Beijing, People's Republic of China
37
Moannaei M, Jadidian F, Doustmohammadi T, Kiapasha AM, Bayani R, Rahmani M, Jahanbazy MR, Sohrabivafa F, Asadi Anar M, Magsudy A, Sadat Rafiei SK, Khakpour Y. Performance and limitation of machine learning algorithms for diabetic retinopathy screening and its application in health management: a meta-analysis. Biomed Eng Online 2025; 24:34. [PMID: 40087776] [PMCID: PMC11909973] [DOI: 10.1186/s12938-025-01336-1]
Abstract
BACKGROUND In recent years, artificial intelligence and machine learning algorithms have been used increasingly to diagnose diabetic retinopathy and other diseases, yet the effectiveness of these methods has not been thoroughly investigated. This study aimed to evaluate the performance and limitations of machine learning and deep learning algorithms in detecting diabetic retinopathy. METHODS This study was conducted based on the PRISMA checklist. We searched online databases, including PubMed, Scopus, and Google Scholar, for relevant articles up to September 30, 2023. After title, abstract, and full-text screening, data extraction and quality assessment were performed for the included studies. Finally, a meta-analysis was performed. RESULTS We included 76 studies with a total of 1,371,517 retinal images, of which 51 were used for the meta-analysis. Our meta-analysis showed a significant pooled sensitivity of 90.54% (95% CI [90.42, 90.66], P < 0.001) and specificity of 78.33% (95% CI [78.21, 78.45], P < 0.001). However, the AUC (area under the curve) did not differ statistically across studies, with a pooled value of 0.94 (95% CI [-46.71, 48.60], P = 1). CONCLUSIONS Although machine learning and deep learning algorithms can properly diagnose diabetic retinopathy, their discriminating capacity is limited. However, they could simplify the diagnostic process. Further studies are required to improve the algorithms.
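Pooled sensitivity of the kind reported above is commonly computed by inverse-variance weighting on the logit scale. The four study counts below are hypothetical, and the fixed-effect model is a deliberate simplification of a full meta-analytic workflow (which would also handle heterogeneity and random effects):

```python
import math

# Hypothetical per-study counts: (true positives, false negatives).
studies = [(90, 10), (180, 25), (45, 8), (300, 30)]

def pooled_sensitivity(counts):
    """Fixed-effect inverse-variance pooling of proportions on the logit scale."""
    num = den = 0.0
    for tp, fn in counts:
        logit = math.log(tp / fn)          # logit of tp/(tp+fn)
        var = 1.0 / tp + 1.0 / fn          # delta-method variance of the logit
        w = 1.0 / var                      # inverse-variance weight
        num += w * logit
        den += w
    pooled_logit = num / den
    return 1.0 / (1.0 + math.exp(-pooled_logit))  # back-transform to proportion

sens = pooled_sensitivity(studies)
print(f"pooled sensitivity: {sens:.3f}")
```

Working on the logit scale keeps the pooled estimate inside (0, 1) and gives larger, more precise studies proportionally more weight.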
Affiliation(s)
- Mehrsa Moannaei
- School of Medicine, Hormozgan University of Medical Sciences, Bandar Abbas, Iran
- Faezeh Jadidian
- School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Tahereh Doustmohammadi
- Department and Faculty of Health Education and Health Promotion, Student Research Committee, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Amir Mohammad Kiapasha
- Student Research Committee, School of Medicine, Shahid Beheshti University of Medical Science, Tehran, Iran
- Romina Bayani
- Student Research Committee, School of Medicine, Shahid Beheshti University of Medical Science, Tehran, Iran
- Fereshteh Sohrabivafa
- Health Education and Promotion, Department of Community Medicine, School of Medicine, Dezful University of Medical Sciences, Dezful, Iran
- Mahsa Asadi Anar
- Student Research Committee, Shahid Beheshti University of Medical Science, Arabi Ave, Daneshjoo Blvd, Velenjak, Tehran, 19839-63113, Iran
- Amin Magsudy
- Faculty of Medicine, Islamic Azad University Tabriz Branch, Tabriz, Iran
- Seyyed Kiarash Sadat Rafiei
- Student Research Committee, Shahid Beheshti University of Medical Science, Arabi Ave, Daneshjoo Blvd, Velenjak, Tehran, 19839-63113, Iran
- Yaser Khakpour
- Faculty of Medicine, Guilan University of Medical Sciences, Rasht, Iran
38
Rajalakshmi R, PramodKumar TA, Dhara AK, Kumar G, Gulnaaz N, Dey S, Basak S, Shankar BU, Goswami R, Mohammed R, Manikandan S, Mitra S, Thethi H, Jebarani S, Mathavan S, Sarveswaran T, Anjana RM, Mohan V, Ghosh S, Bera TK, Raman R. Creating a retinal image database to develop an automated screening tool for diabetic retinopathy in India. Sci Rep 2025; 15:7853. [PMID: 40050377] [PMCID: PMC11885578] [DOI: 10.1038/s41598-025-91941-w]
Abstract
Diabetic retinopathy (DR), a prevalent microvascular complication of diabetes, is the fifth leading cause of blindness worldwide. Given the critical nature of the disease, it is paramount that individuals with diabetes undergo annual screening for early and timely detection of DR, facilitating prompt ophthalmic assessment and intervention. However, screening for DR, which involves assessing visual acuity and examining the retina through ophthalmoscopy or retinal photography, presents a significant global challenge due to the massive volume of individuals requiring annual review. To counter this challenge, there has been increasing interest in the potential of artificial intelligence (AI) tools for automated diagnosis of DR. These AI tools primarily utilize deep learning (DL) techniques and are tailored to analyse extensive medical image data and provide diagnostic outputs, essentially streamlining the DR screening process. However, the development of such AI tools requires access to a comprehensive retinal image database with a plethora of high-resolution fundus images from various cameras, covering all DR lesions. Additionally, the accurate training of these AI algorithms necessitates skilled professionals, such as optometrists or ophthalmologists, to provide reliable ground truths that ensure the precision of the diagnostic outputs. To address these prerequisites, we have initiated a study involving multiple institutions to establish a large-scale online 'Retinal Image Database' in India, aiming to contribute significantly to DR research. This paper delineates the methodology employed for this undertaking, detailing the steps taken to create the large retinal image database, as well as the framework for developing a cost-effective, robust AI-based DR diagnostic tool. Our work is expected to mark a significant stride in DR detection and management, promising a more efficient and scalable solution for tackling this global health challenge.
Affiliation(s)
- Ramachandran Rajalakshmi
- Department of Ophthalmology, Madras Diabetes Research Foundation & Dr. Mohan's Diabetes Specialities Centre, 6, Conran Smith Road Gopalapuram, Chennai, 600 086, India
- Ashis Kumar Dhara
- Department of Electrical Engineering, National Institute of Technology Durgapur, Durgapur, India
- Geetha Kumar
- Department of Vitreo-Retina, Vision Research Foundation, Sankara Netralaya, Chennai, India
- Naziya Gulnaaz
- Department of Research Operations, Data Management and Diabetes Complications, Madras Diabetes Research Foundation, Chennai, India
- Shramana Dey
- Machine Intelligence Unit, Indian Statistical Institute, Kolkata, India
- Sourav Basak
- Department of Electrical Engineering, National Institute of Technology Durgapur, Durgapur, India
- B Uma Shankar
- Machine Intelligence Unit, Indian Statistical Institute, Kolkata, India
- Raka Goswami
- Department of Electrical Engineering, National Institute of Technology Durgapur, Durgapur, India
- Raja Mohammed
- Department of Ophthalmology, Madras Diabetes Research Foundation & Dr. Mohan's Diabetes Specialities Centre, 6, Conran Smith Road Gopalapuram, Chennai, 600 086, India
- Suchetha Manikandan
- Centre for Healthcare Advancements, Innovation and Research, Vellore Institute of Technology, Chennai, India
- Sushmita Mitra
- Machine Intelligence Unit, Indian Statistical Institute, Kolkata, India
- Saravanan Jebarani
- Department of Research Operations, Data Management and Diabetes Complications, Madras Diabetes Research Foundation, Chennai, India
- Sinnakaruppan Mathavan
- Department of Vitreo-Retina, Vision Research Foundation, Sankara Netralaya, Chennai, India
- Tamilselvi Sarveswaran
- Centre for Healthcare Advancements, Innovation and Research, Vellore Institute of Technology, Chennai, India
- Ranjit Mohan Anjana
- Department of Diabetology, Madras Diabetes Research Foundation & Dr. Mohan's Diabetes Specialities Centre, Chennai, India
- Viswanathan Mohan
- Department of Diabetology, Madras Diabetes Research Foundation & Dr. Mohan's Diabetes Specialities Centre, Chennai, India
- Sambuddha Ghosh
- Department of Ophthalmology, Calcutta National Medical College, Kolkata, India
- Tushar Kanti Bera
- Department of Electrical Engineering, National Institute of Technology Durgapur, Durgapur, India
- Rajiv Raman
- Department of Vitreo-Retina, Vision Research Foundation, Sankara Netralaya, Chennai, India
39
Ong SS, Varghese JS. Closing Gaps in Diabetic Retinopathy Screening in India Using a Deep Learning System. JAMA Netw Open 2025; 8:e250991. [PMID: 40105846] [DOI: 10.1001/jamanetworkopen.2025.0991]
Affiliation(s)
- Sally S Ong
- Department of Ophthalmology, Wake Forest School of Medicine, Winston-Salem, North Carolina
- Jithin Sam Varghese
- Hubert Department of Global Health, Rollins School of Public Health, Emory University, Atlanta, Georgia
- Emory Global Diabetes Research Center of Woodruff Health Sciences Center and Emory University, Atlanta, Georgia
40
Cho HS, Hwang EJ, Yi J, Choi B, Park CM. Artificial intelligence system for identification of overlooked lung metastasis in abdominopelvic computed tomography scans of patients with malignancy. Diagn Interv Radiol 2025; 31:102-110. [PMID: 39248126] [PMCID: PMC11880870] [DOI: 10.4274/dir.2024.242835]
Abstract
PURPOSE This study aimed to evaluate whether an artificial intelligence (AI) system can identify basal lung metastatic nodules on abdominopelvic computed tomography (CT) that were initially overlooked by radiologists. METHODS We retrospectively included abdominopelvic CT images meeting the following criteria: a) CT images from patients with solid organ malignancies between March 1 and March 31, 2019, in a single institution; and b) abdominal CT images interpreted as negative for basal lung metastases. Reference standards for the diagnosis of lung metastases were confirmed by reviewing medical records and subsequent CT images. An AI system that automatically detects lung nodules on CT images was applied retrospectively. A radiologist reviewed the AI detection results to classify them as lesions with the possibility of metastasis or clearly benign. The performance of the initial AI results and of the radiologist's review of the AI results was evaluated using patient-level and lesion-level sensitivities, false-positive rates, and the number of false-positive lesions per patient. RESULTS A total of 878 patients (580 men; mean age, 63 years) were included, with overlooked basal lung metastases confirmed in 13 patients (1.5%). The AI exhibited an area under the receiver operating characteristic curve of 0.911 for the identification of overlooked basal lung metastases. Patient- and lesion-level sensitivities of the AI system ranged from 69.2% to 92.3% and 46.2% to 92.3%, respectively. After the radiologist reviewed the AI results, sensitivity remained unchanged. The false-positive rate and the number of false-positive lesions per patient ranged from 5.8% to 27.6% and 0.1 to 0.5, respectively. Radiologist review significantly reduced both the false-positive rate (to 2.4%-12.6%) and the number of false-positive lesions per patient (to 0.03-0.20; all P values < 0.001).
CONCLUSION The AI system could accurately identify basal lung metastases in abdominopelvic CT images that were overlooked by radiologists, suggesting its potential as a tool to support radiologist interpretation. CLINICAL SIGNIFICANCE The AI system can identify missed basal lung lesions in abdominopelvic CT scans of patients with malignancy, providing feedback to radiologists and reducing the risk of missing basal lung metastasis.
Affiliation(s)
- Hye Soo Cho
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Republic of Korea
- Eui Jin Hwang
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Jaeyoun Yi
- Coreline Soft Inc., Seoul, Republic of Korea
- Chang Min Park
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
41
Sim MA, Tham YC, Nusinovici S, Quek TC, Yu M, Xue CC, Chee ML, Peng QS, Tan ESJ, Chan SP, Cai Y, Chong EJY, Tan BY, Venketasubramanian N, Hilal S, Lai MKP, Choi H, Richards AM, Cheng C, Chen CLH. A deep-learning retinal aging biomarker for cognitive decline and incident dementia. Alzheimers Dement 2025; 21:e14601. [PMID: 40042460] [PMCID: PMC11881618] [DOI: 10.1002/alz.14601]
Abstract
INTRODUCTION The utility of retinal photography-derived aging biomarkers for predicting cognitive decline remains under-explored. METHODS A memory-clinic cohort in Singapore was followed up for 5 years. RetiPhenoAge, a retinal aging biomarker, was derived from retinal photographs using deep learning. Using competing risk analysis, we determined the associations of RetiPhenoAge with cognitive decline and dementia, with the UK Biobank utilized as the replication cohort. The associations of RetiPhenoAge with MRI markers (cerebral small vessel disease [CSVD] and neurodegeneration) and its underlying plasma proteomic profile were evaluated. RESULTS Of 510 memory-clinic subjects (N = 155 with cognitive decline), RetiPhenoAge was associated with incident cognitive decline (subdistribution hazard ratio [SHR] 1.34, 95% confidence interval [CI] 1.10-1.64, p = 0.004) and incident dementia (SHR 1.43, 95% CI 1.02-2.01, p = 0.036). In the UK Biobank (N = 33,495), RetiPhenoAge similarly predicted incident dementia (SHR 1.25, 95% CI 1.09-1.41, p = 0.008). RetiPhenoAge was significantly associated with CSVD, brain atrophy, and plasma proteomic signatures related to aging. DISCUSSION RetiPhenoAge may provide a non-invasive prognostic screening tool for cognitive decline and dementia. HIGHLIGHTS RetiPhenoAge, a retinal aging marker, was studied in an Asian memory clinic cohort. Older RetiPhenoAge predicted future cognitive decline and incident dementia. It was also linked to neuropathological markers and plasma proteomic profiles of aging. UK Biobank replication found that RetiPhenoAge predicted 12-year incident dementia. Future studies should validate RetiPhenoAge as a prognostic biomarker for dementia.
Affiliation(s)
- Ming Ann Sim
- Memory Aging and Cognition Centre, Departments of Pharmacology and Psychological Medicine, National University of Singapore, Singapore, Singapore
- Department of Anesthesia, National University Health System and National University of Singapore, Yong Loo Lin School of Medicine, Singapore, Singapore
- Yih Chung Tham
- Centre for Innovation and Precision Eye Health and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Ocular Epidemiology Research Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore
- Simon Nusinovici
- Ocular Epidemiology Research Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ten Cheer Quek
- Ocular Epidemiology Research Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Marco Yu
- Ocular Epidemiology Research Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Can Can Xue
- Ocular Epidemiology Research Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Miao Li Chee
- Ocular Epidemiology Research Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Qing Sheng Peng
- Centre for Innovation and Precision Eye Health and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Eugene S. J. Tan
- National University Heart Centre, National University Hospital, Singapore, Singapore
- Siew Pang Chan
- National University Heart Centre, National University Hospital, Singapore, Singapore
- Yuan Cai
- Division of Neurology, Department of Medicine and Therapeutics, Faculty of Medicine, The Chinese University of Hong Kong, Ma Liu Shui, Hong Kong
- Eddie Jun Yi Chong
- Memory Aging and Cognition Centre, Departments of Pharmacology and Psychological Medicine, National University of Singapore, Singapore, Singapore
- Saima Hilal
- Saw Swee Hock School of Public Health, National University of Singapore, Singapore, Singapore
- Mitchell K. P. Lai
- Memory Aging and Cognition Centre, Departments of Pharmacology and Psychological Medicine, National University of Singapore, Singapore, Singapore
- Hyungwon Choi
- Cardiovascular Metabolic Disease Translational Research Program, National University of Singapore, Singapore, Singapore
- Arthur Mark Richards
- Cardiovascular Metabolic Disease Translational Research Program, National University of Singapore, Singapore, Singapore
- Christchurch Heart Institute, University of Otago, Dunedin North, Dunedin, New Zealand
- Ching-Yu Cheng
- Centre for Innovation and Precision Eye Health and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Ocular Epidemiology Research Group, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore
- Christopher L. H. Chen
- Memory Aging and Cognition Centre, Departments of Pharmacology and Psychological Medicine, National University of Singapore, Singapore, Singapore
42
Kanca E, Ayas S, Baykal Kablan E, Ekinci M. Evaluating and enhancing the robustness of vision transformers against adversarial attacks in medical imaging. Med Biol Eng Comput 2025; 63:673-690. [PMID: 39453557] [DOI: 10.1007/s11517-024-03226-5] [Received: 05/06/2024] [Accepted: 10/12/2024] [Indexed: 10/26/2024]
Abstract
Deep neural networks (DNNs) have demonstrated exceptional performance in medical image analysis. However, recent studies have uncovered significant vulnerabilities in DNN models, particularly their susceptibility to adversarial attacks that manipulate these models into making inaccurate predictions. Vision Transformers (ViTs), despite their advanced capabilities in medical imaging tasks, have not been thoroughly evaluated for their robustness against such attacks in this domain. This study addresses this research gap by conducting an extensive analysis of various adversarial attacks on ViTs specifically within medical imaging contexts. We explore adversarial training as a potential defense mechanism and assess the resilience of ViT models against state-of-the-art adversarial attacks and defense strategies using publicly available benchmark medical image datasets. Our findings reveal that ViTs are vulnerable to adversarial attacks even with minimal perturbations, although adversarial training significantly enhances their robustness, achieving over 80% classification accuracy. Additionally, we perform a comparative analysis with state-of-the-art convolutional neural network models, highlighting the unique strengths and weaknesses of ViTs in handling adversarial threats. This research advances the understanding of ViTs' robustness in medical imaging and provides insights into their practical deployment in real-world scenarios.
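Gradient-sign attacks of the kind benchmarked in studies like this one can be illustrated in a few lines. The sketch below applies the Fast Gradient Sign Method (FGSM) to a toy logistic-regression "pixel vector" classifier; it is a hand-rolled illustration of the attack family, not the study's code or models, and the weights and budget `eps` are made up:

```python
import math

def fgsm_perturb(x, w, b, y, eps):
    """FGSM on a single logistic-regression input.

    x: input features, w: weights, b: bias, y: true label (0/1),
    eps: perturbation budget. Returns x_adv = x + eps * sign(dL/dx).
    """
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))          # model's predicted P(y=1)
    # d(binary cross-entropy)/dx_i = (p - y) * w_i
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

def predict(x, w, b):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

# A toy input the model classifies correctly as class 1
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1
x_adv = fgsm_perturb(x, w, b, y, eps=0.8)

print(predict(x, w, b))      # 1 -- clean input is classified correctly
print(predict(x_adv, w, b))  # 0 -- the signed perturbation flips the label
```

The same one-step mechanism, with the gradient supplied by backpropagation through a ViT or CNN, underlies the stronger iterative attacks (e.g., PGD) that such robustness studies typically evaluate.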
Affiliation(s)
- Elif Kanca
- Department of Software Engineering, Karadeniz Technical University, Trabzon, Turkey.
| | - Selen Ayas
- Department of Computer Engineering, Karadeniz Technical University, Trabzon, Turkey.
| | - Elif Baykal Kablan
- Department of Software Engineering, Karadeniz Technical University, Trabzon, Turkey
| | - Murat Ekinci
- Department of Computer Engineering, Karadeniz Technical University, Trabzon, Turkey
43
Redaelli E, Calvo B, Rodriguez Matas JF, Luraghi G, Grasa J. A POD-NN methodology to determine in vivo mechanical properties of soft tissues. Application to human cornea deformed by Corvis ST test. Comput Biol Med 2025; 187:109792. [PMID: 39938339] [DOI: 10.1016/j.compbiomed.2025.109792] [Received: 08/23/2024] [Revised: 12/12/2024] [Accepted: 01/31/2025] [Indexed: 02/14/2025]
Abstract
The interaction between optical and biomechanical properties of the corneal tissue is crucial for the eye's ability to refract and focus light. The mechanical properties vary among individuals and can change over time due to factors such as eye growth, ageing, and diseases like keratoconus. Estimating these properties is crucial for diagnosing ocular conditions, improving surgical outcomes, and enhancing vision quality, especially given increasing life expectancies and societal demands. Current ex-vivo methods for evaluating corneal mechanical properties are limited and not patient-specific. This study aims to develop a model to estimate, in real time, the in-vivo mechanical properties of the corneal tissue. It comprises both a proof of concept and a clinical application. For the proof of concept, we used high-fidelity Fluid-Structure Interaction (FSI) simulations of Non-Contact Tonometry (NCT) with the Corvis ST® (OCULUS, Wetzlar, Germany) device to create a large dataset of corneal deformation evolution. Proper Orthogonal Decomposition (POD) was applied to this dataset to identify principal modes of variation, resulting in a reduced-order model (ROM). We then trained a Neural Network (NN) using the reduced coefficients, intraocular pressure (IOP), and corneal geometry derived from Pentacam® (OCULUS, Wetzlar, Germany) elevation data to predict the mechanical properties of the corneal tissue. This methodology was then applied to a clinical case in which the mechanical properties of the corneal tissue are estimated based on Corvis ST results. Our method demonstrated the potential for real-time, in-vivo estimation of corneal biomechanics, offering a significant advancement over traditional approaches that require time-consuming numerical simulations. This model, being entirely data-driven, eliminates the need for complex inverse analyses, providing an efficient and accurate tool to be implemented directly in the Corvis ST device.
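The POD step in pipelines like this can be summarised compactly. The following is a generic sketch in standard POD-ROM notation (not the authors' exact formulation): the N simulated deformation snapshots are stacked as columns of a snapshot matrix, a truncated SVD extracts the dominant modes, and each new deformation is then represented by a handful of reduced coefficients that a network can learn from:

```latex
S = \begin{bmatrix} u_1 & u_2 & \cdots & u_N \end{bmatrix}
  \;\approx\; \Phi_r \,\Sigma_r V_r^{\top},
\qquad
u \;\approx\; \sum_{k=1}^{r} a_k\,\phi_k ,
\qquad
a_k = \phi_k^{\top} u .
```

The NN then regresses from the coefficients $a_1,\dots,a_r$ together with IOP and corneal geometry to the material parameters, which is what lets the method bypass the iterative inverse finite-element analysis.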
Affiliation(s)
- Elena Redaelli
- Aragón Institute of Engineering Research (I3A), Universidad de Zaragoza, Zaragoza, Spain.
| | - Begoña Calvo
- Aragón Institute of Engineering Research (I3A), Universidad de Zaragoza, Zaragoza, Spain; Centro de Investigación Biomecánica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Zaragoza, Spain
| | - Jose Felix Rodriguez Matas
- LaBS, Department of Chemistry, Materials and Chemical Engineering "Giulio Natta", Politecnico di Milano, Milan, Italy
| | - Giulia Luraghi
- LaBS, Department of Chemistry, Materials and Chemical Engineering "Giulio Natta", Politecnico di Milano, Milan, Italy
| | - Jorge Grasa
- Aragón Institute of Engineering Research (I3A), Universidad de Zaragoza, Zaragoza, Spain; Centro de Investigación Biomecánica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Zaragoza, Spain
44
David D, Zloto O, Katz G, Huna-Baron R, Vishnevskia-Dai V, Armarnik S, Zauberman NA, Barnir EM, Singer R, Hostovsky A, Klang E. The use of artificial intelligence based chat bots in ophthalmology triage. Eye (Lond) 2025; 39:785-789. [PMID: 39592814] [PMCID: PMC11885819] [DOI: 10.1038/s41433-024-03488-1] [Received: 05/11/2024] [Revised: 11/07/2024] [Accepted: 11/12/2024] [Indexed: 11/28/2024] Open
Abstract
PURPOSE To evaluate AI-based chat bots' ability to accurately answer common patient questions in the field of ophthalmology. METHODS An experienced ophthalmologist curated a set of 20 representative questions, and responses were sought from two AI generative models: OpenAI's ChatGPT and Google's Bard (Gemini Pro). Eight expert ophthalmologists from different sub-specialties assessed each response, blinded to the source, and rated them on three metrics (accuracy, comprehensiveness, and clarity) on a 1-5 scale. RESULTS For accuracy, ChatGPT scored a median of 4.0, whereas Bard scored a median of 3.0. In terms of comprehensiveness, ChatGPT achieved a median score of 4.5, compared to Bard which scored a median of 3.0. Regarding clarity, ChatGPT maintained a higher score with a median of 5.0, compared to Bard's median score of 4.0. All comparisons were statistically significant (p < 0.001). CONCLUSION AI-based chat bots can provide relatively accurate and clear responses for addressing common ophthalmological inquiries. ChatGPT surpassed Bard in all measured metrics. While these AI models exhibit promise, further research is indicated to improve their performance and allow them to be used as a reliable medical tool.
Affiliation(s)
- Daniel David
- Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel.
- Goldschleger Eye Institute, Sheba Medical Center, Tel Hashomer, Israel.
| | - Ofira Zloto
- Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Goldschleger Eye Institute, Sheba Medical Center, Tel Hashomer, Israel
| | - Gabriel Katz
- Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Goldschleger Eye Institute, Sheba Medical Center, Tel Hashomer, Israel
| | - Ruth Huna-Baron
- Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Goldschleger Eye Institute, Sheba Medical Center, Tel Hashomer, Israel
| | - Vicktoria Vishnevskia-Dai
- Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Goldschleger Eye Institute, Sheba Medical Center, Tel Hashomer, Israel
| | - Sharon Armarnik
- Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Goldschleger Eye Institute, Sheba Medical Center, Tel Hashomer, Israel
| | - Noa Avni Zauberman
- Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Goldschleger Eye Institute, Sheba Medical Center, Tel Hashomer, Israel
| | - Elinor Megiddo Barnir
- Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Goldschleger Eye Institute, Sheba Medical Center, Tel Hashomer, Israel
| | - Reut Singer
- Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Goldschleger Eye Institute, Sheba Medical Center, Tel Hashomer, Israel
| | - Avner Hostovsky
- Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Goldschleger Eye Institute, Sheba Medical Center, Tel Hashomer, Israel
| | - Eyal Klang
- Goldschleger Eye Institute, Sheba Medical Center, Tel Hashomer, Israel
- Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY, USA
45
Duong D, Solomon BD. Artificial intelligence in clinical genetics. Eur J Hum Genet 2025; 33:281-288. [PMID: 39806188] [PMCID: PMC11894121] [DOI: 10.1038/s41431-024-01782-w] [Received: 12/12/2024] [Accepted: 12/19/2024] [Indexed: 01/16/2025] Open
Abstract
Artificial intelligence (AI) has been growing more powerful and accessible, and will increasingly impact many areas, including virtually all aspects of medicine and biomedical research. This review focuses on previous, current, and especially emerging applications of AI in clinical genetics. Topics covered include a brief explanation of different general categories of AI, including machine learning, deep learning, and generative AI. After introductory explanations and examples, the review discusses AI in clinical genetics in three main categories: clinical diagnostics; management and therapeutics; clinical support. The review concludes with short, medium, and long-term predictions about the ways that AI may affect the field of clinical genetics. Overall, while the precise speed at which AI will continue to change clinical genetics is unclear, as are the overall ramifications for patients, families, clinicians, researchers, and others, it is likely that AI will result in dramatic evolution in clinical genetics. It will be important for all those involved in clinical genetics to prepare accordingly in order to minimize the risks and maximize benefits related to the use of AI in the field.
Affiliation(s)
- Dat Duong
- Medical Genetics Branch, National Human Genome Research Institute, National Institutes of Health, Bethesda, MD, USA
| | - Benjamin D Solomon
- Medical Genetics Branch, National Human Genome Research Institute, National Institutes of Health, Bethesda, MD, USA.
46
Yew SME, Lei X, Chen Y, Goh JHL, Pushpanathan K, Xue CC, Wang YX, Jonas JB, Sabanayagam C, Koh VTC, Xu X, Liu Y, Cheng CY, Tham YC. Deep Imbalanced Regression Model for Predicting Refractive Error from Retinal Photos. OPHTHALMOLOGY SCIENCE 2025; 5:100659. [PMID: 39931359] [PMCID: PMC11808727] [DOI: 10.1016/j.xops.2024.100659] [Received: 06/11/2024] [Revised: 11/17/2024] [Accepted: 11/18/2024] [Indexed: 02/13/2025]
Abstract
Purpose Recent studies utilized ocular images and deep learning (DL) to predict refractive error and yielded notable results. However, most studies did not address biases from imbalanced datasets or conduct external validations. To address these gaps, this study aimed to integrate the deep imbalanced regression (DIR) technique into ResNet and Vision Transformer models to predict refractive error from retinal photographs. Design Retrospective study. Subjects We developed the DL models using up to 103 865 images from the Singapore Epidemiology of Eye Diseases Study and the United Kingdom Biobank, with internal testing on up to 8067 images. External testing was conducted on 7043 images from the Singapore Prospective Study and 5539 images from the Beijing Eye Study. Retinal images and corresponding refractive error data were extracted. Methods This retrospective study developed regression-based models, including ResNet34 with DIR and SwinV2 (Swin Transformer) with DIR, incorporating Label Distribution Smoothing and Feature Distribution Smoothing. These models were compared against their baseline versions, ResNet34 and SwinV2, in predicting spherical and spherical equivalent (SE) power. Main Outcome Measures Mean absolute error (MAE) and coefficient of determination were used to evaluate the models' performance. The Wilcoxon signed-rank test was performed to assess statistical significance between the DIR-integrated models and their baseline versions. Results For prediction of spherical power, ResNet34 with DIR (MAE: 0.84D) and SwinV2 with DIR (MAE: 0.77D) significantly outperformed their baselines, ResNet34 (MAE: 0.88D; P < 0.001) and SwinV2 (MAE: 0.87D; P < 0.001), in the internal test. For prediction of SE power, ResNet34 with DIR (MAE: 0.78D) and SwinV2 with DIR (MAE: 0.75D) likewise significantly outperformed their baselines, ResNet34 (MAE: 0.81D; P < 0.001) and SwinV2 (MAE: 0.78D; P < 0.05), in the internal test. Similar trends were observed in the external test sets for both spherical and SE power prediction. Conclusions DIR-integrated DL models showed potential in addressing data imbalances and improving the prediction of refractive error. These findings highlight the potential utility of combining DL models with retinal imaging for opportunistic screening of refractive errors, particularly in settings where retinal cameras are already in use. Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
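Label Distribution Smoothing (LDS), one of the DIR components named in the abstract above, can be sketched generically: smooth the empirical label histogram with a Gaussian kernel, then reweight each sample's loss by the inverse of the smoothed density so that rare label values count more. This is an illustrative implementation of the general technique, not the authors' code; the bin count, kernel width, and label range assumed here are arbitrary:

```python
import math

def lds_weights(labels, bins, sigma=1.0):
    """Per-sample inverse-density weights via Label Distribution Smoothing.

    labels are assumed to lie in [0, 1); they are discretised into `bins`
    bins, the histogram is convolved with a truncated Gaussian kernel,
    and each sample gets weight 1 / smoothed_density(its bin).
    """
    # empirical histogram over discretised label bins
    hist = [0] * bins
    idx = [min(int(l * bins), bins - 1) for l in labels]
    for i in idx:
        hist[i] += 1
    # convolve with a truncated Gaussian kernel (radius = 3 sigma)
    radius = max(1, int(3 * sigma))
    kernel = [math.exp(-0.5 * (k / sigma) ** 2) for k in range(-radius, radius + 1)]
    smoothed = []
    for i in range(bins):
        s = sum(kernel[k + radius] * hist[i + k]
                for k in range(-radius, radius + 1) if 0 <= i + k < bins)
        smoothed.append(s)
    # inverse-density weight, one per input sample
    return [1.0 / max(smoothed[i], 1e-8) for i in idx]

# rare labels (near 0.9) receive larger loss weights than common ones (near 0.1)
labels = [0.1] * 90 + [0.9] * 10
w = lds_weights(labels, bins=10)
print(w[0] < w[-1])  # True
```

In training, these weights multiply the per-sample regression loss (e.g., weighted L1), which is what counteracts the bias toward the over-represented refractive-error range.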
Affiliation(s)
- Samantha Min Er Yew
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Xiaofeng Lei
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A∗STAR), Singapore, Singapore
| | - Yibing Chen
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
| | - Jocelyn Hui Lin Goh
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
| | - Krithi Pushpanathan
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Can Can Xue
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
| | - Ya Xing Wang
- Beijing Ophthalmology and Visual Science Key Lab, Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Jost B. Jonas
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
| | - Charumathi Sabanayagam
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology and Visual Sciences (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
| | - Victor Teck Chang Koh
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Xinxing Xu
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A∗STAR), Singapore, Singapore
| | - Yong Liu
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A∗STAR), Singapore, Singapore
| | - Ching-Yu Cheng
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology and Visual Sciences (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
| | - Yih-Chung Tham
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology and Visual Sciences (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
47
Yang Q, Bee YM, Lim CC, Sabanayagam C, Yim-Lui Cheung C, Wong TY, Ting DS, Lim LL, Li H, He M, Lee AY, Shaw AJ, Keong YK, Wei Tan GS. Use of artificial intelligence with retinal imaging in screening for diabetes-associated complications: systematic review. EClinicalMedicine 2025; 81:103089. [PMID: 40052065] [PMCID: PMC11883405] [DOI: 10.1016/j.eclinm.2025.103089] [Received: 07/20/2024] [Revised: 12/30/2024] [Accepted: 01/16/2025] [Indexed: 03/09/2025] Open
Abstract
Background Artificial Intelligence (AI) has been used to automate detection of retinal diseases from retinal images with great success, in particular for screening for diabetic retinopathy, a major complication of diabetes. Since persons with diabetes routinely receive retinal imaging to evaluate their diabetic retinopathy status, AI-based retinal imaging may have potential to be used as an opportunistic comprehensive screening for multiple systemic micro- and macro-vascular complications of diabetes. Methods We conducted a qualitative systematic review of published literature using AI on retinal images to detect systemic diabetes complications. We searched three main databases: PubMed, Google Scholar, and Web of Science (January 1, 2000, to October 1, 2024). Research that used AI to evaluate the associations between retinal images and diabetes-associated complications, or research involving diabetes patients with retinal imaging and AI systems, was included. Our primary focus was on articles related to AI, retinal images, and diabetes-associated complications. We evaluated the robustness of each study based on the development of the AI algorithm, the size and quality of the training dataset, internal validation and external testing, and the reported performance. Quality assessments were employed to ensure the inclusion of high-quality studies, and data extraction was conducted systematically to gather pertinent information for analysis. This study has been registered on PROSPERO under the registration ID CRD42023493512. Findings From a total of 337 abstracts, 38 studies were included. These studies covered a range of topics related to prediction of diabetes among pre-diabetic or non-diabetic individuals (n = 4), diabetes-related systemic risk factors (n = 10), detection of microvascular complications (n = 8), and detection of macrovascular complications (n = 17).
Most studies (n = 32) utilized color fundus photographs (CFP) as the retinal image modality, while others employed optical coherence tomography (OCT) (n = 6). The performance of the AI systems varied, with an AUC ranging from 0.676 to 0.971 in the prediction or identification of different complications. Study designs included cross-sectional and cohort studies with sample sizes ranging from 100 to over 100,000 participants. Risk of bias was evaluated using the Newcastle-Ottawa Scale and AXIS, with most studies scoring as low to moderate risk. Interpretation Our review highlights the potential for the use of AI algorithms applied to retinal images, particularly CFP, to screen, predict, or diagnose the various microvascular and macrovascular complications of diabetes. However, we identified few studies with longitudinal data and a paucity of randomized controlled trials, reflecting a gap between the development of AI algorithms and real-world implementation and translational studies. Funding Dr. Gavin Siew Wei TAN is supported by: 1. DYNAMO: Diabetes studY on Nephropathy And other Microvascular cOmplications II supported by National Medical Research Council (MOH-001327-03): data collection, analysis, trial design 2. Prognostic significance of novel multimodal imaging markers for diabetic retinopathy: towards improving the staging for diabetic retinopathy supported by NMRC Clinician Scientist Award (CSA)-Investigator (INV) (MOH-001047-00).
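The AUC figures reported in reviews like this follow the standard pairwise (Mann-Whitney) definition: the probability that a randomly chosen positive case is scored above a randomly chosen negative case, with ties counting half. A minimal reference implementation, using made-up toy scores for illustration only:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the pairwise (Mann-Whitney) definition.

    labels: 0/1 ground truth; scores: model risk scores.
    Counts the fraction of positive-negative pairs ranked correctly
    (ties count 0.5). Equivalent to the trapezoidal area under the ROC.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy screening model: higher score = higher predicted complication risk
labels = [0, 0, 1, 1, 0, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.5, 0.9]
print(roc_auc(labels, scores))  # ~0.778: 7 of 9 pos-neg pairs ranked correctly
```

On this scale, the review's reported range of 0.676 to 0.971 spans models that are only modestly better than chance (0.5) up to models approaching perfect ranking (1.0).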
Affiliation(s)
- Qianhui Yang
- Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, China
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Republic of Singapore
- Duke-NUS Medical School, Singapore, Republic of Singapore
| | - Yong Mong Bee
- Department of Endocrinology, Singapore General Hospital, Singapore
| | - Ciwei Cynthia Lim
- Department of Renal Medicine, Singapore General Hospital, Academia Level 3, 20 College Road, Singapore, 169856, Singapore
| | - Charumathi Sabanayagam
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Republic of Singapore
- Duke-NUS Medical School, Singapore, Republic of Singapore
| | - Carol Yim-Lui Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
| | - Tien Yin Wong
- Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, China
- Tsinghua Medicine, Tsinghua University, Beijing, China
| | - Daniel S.W. Ting
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Republic of Singapore
- Duke-NUS Medical School, Singapore, Republic of Singapore
| | - Lee-Ling Lim
- Department of Medicine, Faculty of Medicine, University of Malaya, Kuala Lumpur, 50603, Malaysia
| | - HuaTing Li
- Department of Endocrinology and Metabolism, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai Key Laboratory of Diabetes Mellitus, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, 600 Yishan Road, Shanghai, 200233, China
| | - Mingguang He
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Research Centre for SHARP Vision (RCSV), The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Centre for Eye and Vision Research (CEVR), 17W Hong Kong Science Park, Hong Kong
| | - Aaron Y. Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, United States
| | - A Jonathan Shaw
- Department of Biology & L. E. Anderson Bryophyte Herbarium, Duke University, Durham, NC, USA
| | - Yeo Khung Keong
- Department of Cardiology, National Heart Centre Singapore, Singapore
| | - Gavin Siew Wei Tan
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Republic of Singapore
- Duke-NUS Medical School, Singapore, Republic of Singapore
48
D N S, Pai RM, Bhat SN, Pai M M M. Assessment of perceived realism in AI-generated synthetic spine fracture CT images. Technol Health Care 2025; 33:931-944. [PMID: 40105176] [DOI: 10.1177/09287329241291368] [Indexed: 03/20/2025]
Abstract
BACKGROUND Deep learning-based decision support systems require synthetic images generated by adversarial networks, which in turn require clinical evaluation to ensure their quality. OBJECTIVE The study evaluates the perceived realism of high-dimension synthetic spine fracture CT images generated by Progressive Growing Generative Adversarial Networks (PGGANs). METHODS The study used 2820 spine fracture CT images from 456 patients to train a PGGAN model. The model synthesized images up to 512 × 512 pixels, and the realism of the generated images was assessed using Visual Turing Tests (VTT) and a Fracture Identification Test (FIT). Three spine surgeons evaluated the images, and the clinical evaluation results were statistically analysed. RESULTS Spine surgeons had an average prediction accuracy of nearly 50% during clinical evaluation, indicating difficulty in distinguishing between real and generated images. The accuracy varied across dimensions, with synthetic images being more realistic at higher resolutions, especially 512 × 512 pixels. During the FIT, among 16 generated images of each fracture type, 13-15 were correctly identified, indicating that the 512 × 512 images are realistic and clearly depict fracture lines. CONCLUSION The study reveals that AI-based PGGANs can generate realistic synthetic spine fracture CT images up to 512 × 512 pixels that are difficult to distinguish from real images, supporting the improvement of automatic spine fracture type detection systems.
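The "near 50% accuracy" criterion in a Visual Turing Test is usually checked against the chance level with a binomial test: if raters' real-vs-synthetic accuracy is not significantly different from 0.5, the images are considered indistinguishable. The sketch below is a generic exact two-sided binomial test (standard statistics, not the paper's analysis code; the rater counts are invented):

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial p-value for k successes in n trials.

    Sums the probabilities of all outcomes no more likely than the
    observed one (the 'minlike' convention). Under H0: accuracy = p,
    a large p-value means performance is consistent with guessing.
    """
    pmf = lambda i: comb(n, i) * p**i * (1 - p)**(n - i)
    pk = pmf(k)
    return sum(pmf(i) for i in range(n + 1) if pmf(i) <= pk + 1e-12)

# 52 correct of 100: statistically indistinguishable from coin-flipping
print(binom_two_sided_p(52, 100) > 0.05)   # True
# 80 correct of 100: raters clearly separate real from synthetic
print(binom_two_sided_p(80, 100) < 0.001)  # True
```

Applied to a VTT, `k` is the number of images a rater labels correctly and `n` the number shown; the study's ~50% surgeon accuracy corresponds to the first, non-significant case.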
Affiliation(s)
- Sindhura D N
- Department of Data Science and Computer Applications, Manipal Institute of Technology, Manipal, Manipal Academy of Higher Education, Manipal, India
| | - Radhika M Pai
- Department of Data Science and Computer Applications, Manipal Institute of Technology, Manipal, Manipal Academy of Higher Education, Manipal, India
| | - Shyamasunder N Bhat
- Department of Orthopaedics, Kasturba Medical College, Manipal, Manipal Academy of Higher Education, Manipal, India
| | - Manohara Pai M M
- Department of Information and Communication Technology, Manipal Institute of Technology, Manipal, Manipal Academy of Higher Education, Manipal, India
49
Olawade DB, Weerasinghe K, Mathugamage MDDE, Odetayo A, Aderinto N, Teke J, Boussios S. Enhancing Ophthalmic Diagnosis and Treatment with Artificial Intelligence. MEDICINA (KAUNAS, LITHUANIA) 2025; 61:433. [PMID: 40142244] [PMCID: PMC11943519] [DOI: 10.3390/medicina61030433] [Received: 01/28/2025] [Revised: 02/20/2025] [Accepted: 02/26/2025] [Indexed: 03/28/2025]
Abstract
The integration of artificial intelligence (AI) in ophthalmology is transforming the field, offering new opportunities to enhance diagnostic accuracy, personalize treatment plans, and improve service delivery. This review provides a comprehensive overview of the current applications and future potential of AI in ophthalmology. AI algorithms, particularly those utilizing machine learning (ML) and deep learning (DL), have demonstrated remarkable success in diagnosing conditions such as diabetic retinopathy (DR), age-related macular degeneration, and glaucoma with precision comparable to, or exceeding, human experts. Furthermore, AI is being utilized to develop personalized treatment plans by analyzing large datasets to predict individual responses to therapies, thus optimizing patient outcomes and reducing healthcare costs. In surgical applications, AI-driven tools are enhancing the precision of procedures like cataract surgery, contributing to better recovery times and reduced complications. Additionally, AI-powered teleophthalmology services are expanding access to eye care in underserved and remote areas, addressing global disparities in healthcare availability. Despite these advancements, challenges remain, particularly concerning data privacy, security, and algorithmic bias. Ensuring robust data governance and ethical practices is crucial for the continued success of AI integration in ophthalmology. In conclusion, future research should focus on developing sophisticated AI models capable of handling multimodal data, including genetic information and patient histories, to provide deeper insights into disease mechanisms and treatment responses. Also, collaborative efforts among governments, non-governmental organizations (NGOs), and technology companies are essential to deploy AI solutions effectively, especially in low-resource settings.
Affiliation(s)
- David B. Olawade
- Department of Allied and Public Health, School of Health, Sport and Bioscience, University of East London, London E16 2RD, UK
- Department of Research and Innovation, Medway NHS Foundation Trust, Gillingham ME7 5NY, UK; (K.W.); (J.T.); (S.B.)
- Department of Public Health, York St John University, London YO31 7EX, UK
- School of Health and Care Management, Arden University, Arden House, Middlemarch Park, Coventry CV3 4FJ, UK
| | - Kusal Weerasinghe
- Department of Research and Innovation, Medway NHS Foundation Trust, Gillingham ME7 5NY, UK; (K.W.); (J.T.); (S.B.)
| | | | | | - Nicholas Aderinto
- Department of Medicine and Surgery, Ladoke Akintola University of Technology, Ogbomoso 210214, Nigeria;
| | - Jennifer Teke
- Department of Research and Innovation, Medway NHS Foundation Trust, Gillingham ME7 5NY, UK; (K.W.); (J.T.); (S.B.)
- Faculty of Medicine, Health and Social Care, Canterbury Christ Church University, Canterbury CT1 1QU, UK
- Stergios Boussios
- Department of Research and Innovation, Medway NHS Foundation Trust, Gillingham ME7 5NY, UK
- Faculty of Medicine, Health and Social Care, Canterbury Christ Church University, Canterbury CT1 1QU, UK
- School of Cancer & Pharmaceutical Sciences, King’s College London, Strand, London WC2R 2LS, UK
- Kent Medway Medical School, University of Kent, Canterbury CT2 7NZ, UK
- Department of Medical Oncology, Medway NHS Foundation Trust, Gillingham ME7 5NK, UK
- AELIA Organization, 57001 Thessaloniki, Greece
50
Chu B, Zhao J, Zheng W, Xu Z. (DA-U)2Net: double attention U2Net for retinal vessel segmentation. BMC Ophthalmol 2025; 25:86. [PMID: 39984892 PMCID: PMC11844045 DOI: 10.1186/s12886-025-03908-0] [Received: 10/11/2024] [Accepted: 02/10/2025] [Indexed: 02/23/2025] Open
Abstract
BACKGROUND Morphological changes in the retina serve as valuable references in the clinical diagnosis of ophthalmic and cardiovascular diseases. However, the retinal vascular structure is complex, making manual segmentation time-consuming and labor-intensive. METHODS This paper proposes a retinal segmentation network that integrates feature channel attention and the Convolutional Block Attention Module (CBAM) within the U2Net model. First, a feature channel attention module is introduced into the RSU (Residual Spatial Unit) block of U2Net, forming an Attention-RSU block, which focuses on significant areas during feature extraction and suppresses the influence of noise. Second, a Spatial Attention Module (SAM) is introduced into the high-resolution module of Attention-RSU to enrich feature extraction along both spatial and channel dimensions, and a Channel Attention Module (CAM) is integrated into the low-resolution module of Attention-RSU, using dual channel attention to reduce detail loss. Finally, dilated convolution is applied during the upscaling and downscaling processes to expand the receptive field in low-resolution states, allowing the model to better integrate contextual information. RESULTS Evaluation across multiple clinical datasets demonstrated excellent performance on various metrics, with an accuracy (ACC) of 98.71%. CONCLUSION The proposed network is general, and we believe it can easily be extended to other medical image segmentation tasks in which large scale variation and complicated features are the main challenges.
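The sequential channel- and spatial-attention scheme this entry builds on (CBAM) can be sketched in NumPy. The pooling and sigmoid gating below follow the general CBAM formulation, but the random MLP weights and the simplified spatial fusion (a plain mean instead of a learned 7x7 convolution) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def channel_attention(x, w1, w2):
    # Global average- and max-pooled channel descriptors pass through a
    # shared two-layer MLP (ReLU hidden layer); the summed outputs are
    # squashed by a sigmoid into per-channel weights in (0, 1).
    avg = x.mean(axis=(1, 2))                    # (C,)
    mx = x.max(axis=(1, 2))                      # (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)
    w = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))
    return x * w[:, None, None]

def spatial_attention(x):
    # Average and max across channels give two (H, W) maps; CBAM fuses
    # them with a learned 7x7 convolution, replaced here by a plain mean.
    avg = x.mean(axis=0)
    mx = x.max(axis=0)
    w = 1.0 / (1.0 + np.exp(-(avg + mx) / 2.0))
    return x * w[None, :, :]

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 16, 16))          # (C, H, W) feature map
w1 = rng.standard_normal((2, 8)) * 0.1           # reduction ratio 4
w2 = rng.standard_normal((8, 2)) * 0.1
out = spatial_attention(channel_attention(feat, w1, w2))
print(out.shape)                                 # prints (8, 16, 16)
```

Because both gates are sigmoids, the refined map keeps the input's shape while every activation is attenuated by a factor in (0, 1), which is how such modules re-weight rather than reshape features.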
Affiliation(s)
- Bing Chu
- Department of Medical Engineering, Wannan Medical College, Wuhu, Anhui, 241002, China
- Jinsong Zhao
- School of Medical Imageology, Wannan Medical College, Wuhu, Anhui, 241002, China
- Wenqiang Zheng
- Department of Nuclear Medicine, First Affiliated Hospital of Wannan Medical College, Wuhu, Anhui, 241001, China
- Zhengyuan Xu
- Department of Medical Engineering, Wannan Medical College, Wuhu, Anhui, 241002, China