1. Liu S, Liu L, Ma C, Su S, Liu Y, Li B. Association between retinal vascular fractal dimensions and retinopathy of prematurity: an AI-assisted retrospective case-control study. Int Ophthalmol 2025;45:105. [PMID: 40100468] [DOI: 10.1007/s10792-025-03461-1]
Abstract
PURPOSE The main objective of this study was to analyze the fractal dimensions (D(f)) of retinal vasculature in premature infants with retinopathy of prematurity (ROP) and determine their correlation with ROP severity. METHODS We conducted a single-center retrospective case-control study involving 641 premature patients with ROP (641 eyes) and 684 normal preterm infants (684 eyes) matched for corrected gestational age (CGA). Computer-assisted techniques were used to quantify peripapillary retinal vascular D(f), vessel tortuosity (VT), and vessel width (VW). RESULTS Compared to the normal preterm group, patients with ROP exhibited a significant increase in retinal vascular D(f) of 0.0061 (P = 0.0002). Subgroup analyses revealed a significant association between increasing ROP severity and increased retinal vascular D(f) (P < 0.05). Multivariable-adjusted ordered logistic regression models demonstrated that retinal vascular D(f) (aOR: 3.307, P < 0.0001) was significantly and independently associated with ROP severity. For every 0.1 increase in D(f), the probability of ROP requiring intervention increased by 33.07%. Multiple linear regression models indicated a significant positive correlation of D(f) with VT, as well as with VW around the optic disc (P < 0.0001). For every 1 × 10⁴ cm⁻³ increase in VT, D(f) increased by 0.0010; similarly, for every 1 μm increase in VW, D(f) increased by 0.0025. CONCLUSIONS Our findings suggest that increased D(f) in retinal vessels is a pathological characteristic of ROP. This increase may be attributed to the curvature and width of the retinal vasculature in infants with ROP. Quantitative measurement of retinal vascular D(f) could serve as a valuable vascular indicator for assessing the severity of ROP.
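The abstract does not specify which D(f) estimator was used; a common choice for a binarized vessel map is box counting. A minimal sketch follows, with the function name, box sizes, and input format as illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def box_counting_dimension(mask: np.ndarray, box_sizes=(2, 4, 8, 16, 32, 64)) -> float:
    """Estimate the fractal dimension D_f of a binary vessel mask by box counting.

    mask: 2-D boolean array, True where a vessel pixel is present.
    Assumes every box size yields at least one occupied box (true for real vessel maps).
    """
    counts = []
    for size in box_sizes:
        # Trim so the image tiles evenly, then count boxes containing any vessel pixel.
        h, w = (mask.shape[0] // size) * size, (mask.shape[1] // size) * size
        tiled = mask[:h, :w].reshape(h // size, size, w // size, size)
        counts.append(tiled.any(axis=(1, 3)).sum())
    # D_f is the slope of log(count) against log(1/size).
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope
```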
Affiliation(s)
- Shuai Liu
- School of Information Science and Technology, University of Science and Technology of China, Hefei, 230022, China
- Eye Institute, Affiliated Hospital of Nantong University, Medical School of Nantong University, Nantong, 226006, China
- Lei Liu
- School of Information Science and Technology, University of Science and Technology of China, Hefei, 230022, China
- Cuixia Ma
- Anhui Province Maternity and Child Health Hospital, Maternity and Child Health Hospital affiliated to Anhui Medical University, Hefei, 230001, China
- Shu Su
- Eye Institute, Affiliated Hospital of Nantong University, Medical School of Nantong University, Nantong, 226006, China
- Ying Liu
- Department of Radiology, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Jiangxi, 330006, China
- Bin Li
- School of Information Science and Technology, University of Science and Technology of China, Hefei, 230022, China
2. Redaelli E, Calvo B, Rodriguez Matas JF, Luraghi G, Grasa J. A POD-NN methodology to determine in vivo mechanical properties of soft tissues. Application to human cornea deformed by Corvis ST test. Comput Biol Med 2025;187:109792. [PMID: 39938339] [DOI: 10.1016/j.compbiomed.2025.109792]
Abstract
The interaction between optical and biomechanical properties of the corneal tissue is crucial for the eye's ability to refract and focus light. The mechanical properties vary among individuals and can change over time due to factors such as eye growth, ageing, and diseases like keratoconus. Estimating these properties is crucial for diagnosing ocular conditions, improving surgical outcomes, and enhancing vision quality, especially given increasing life expectancies and societal demands. Current ex-vivo methods for evaluating corneal mechanical properties are limited and not patient-specific. This study aims to develop a model to estimate the mechanical properties of the corneal tissue in-vivo in real time. It comprises both a proof of concept and a clinical application. For the proof of concept, we used high-fidelity Fluid-Structure Interaction (FSI) simulations of Non-Contact Tonometry (NCT) with the Corvis ST® (OCULUS, Wetzlar, Germany) device to create a large dataset of corneal deformation evolution. Proper Orthogonal Decomposition (POD) was applied to this dataset to identify principal modes of variation, resulting in a reduced-order model (ROM). We then trained a Neural Network (NN) using the reduced coefficients, intraocular pressure (IOP), and corneal geometry derived from Pentacam® (OCULUS, Wetzlar, Germany) elevation data to predict the mechanical properties of the corneal tissue. This methodology was then applied to a clinical case in which the mechanical properties of the corneal tissue were estimated based on Corvis ST results. Our method demonstrated the potential for real-time, in-vivo estimation of corneal biomechanics, offering a significant advancement over traditional approaches that require time-consuming numerical simulations. This model, being entirely data-driven, eliminates the need for complex inverse analyses, providing an efficient and accurate tool that could be implemented directly in the Corvis ST device.
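The core reduced-order idea can be sketched in a few lines: stack simulated deformation snapshots, extract POD modes via SVD, and train a regressor from reduced coefficients plus IOP/geometry features to material parameters. Everything below (array shapes, the scikit-learn regressor, the number of retained modes, the random placeholder data) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_sims, n_dofs, k = 200, 1500, 10          # simulations, deformation DOFs, POD modes

# Placeholder snapshot matrix (rows = simulations); in the paper these would
# come from FSI simulations of the Corvis ST air puff.
snapshots = rng.normal(size=(n_sims, n_dofs))
material = rng.uniform(0.1, 1.0, size=(n_sims, 2))   # e.g. two stiffness parameters
iop_geom = rng.normal(size=(n_sims, 5))              # IOP + geometry descriptors

# POD: truncated SVD of the mean-centered snapshots gives the modal basis.
mean = snapshots.mean(axis=0)
_, _, vt = np.linalg.svd(snapshots - mean, full_matrices=False)
basis = vt[:k]                                       # first k POD modes
coeffs = (snapshots - mean) @ basis.T                # reduced coefficients

# NN maps (reduced coefficients, IOP, geometry) -> material parameters.
X = np.hstack([coeffs, iop_geom])
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, material)
print(model.predict(X[:1]))                          # near-instant inference once trained
```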
Affiliation(s)
- Elena Redaelli
- Aragón Institute of Engineering Research (I3A), Universidad de Zaragoza, Zaragoza, Spain
- Begoña Calvo
- Aragón Institute of Engineering Research (I3A), Universidad de Zaragoza, Zaragoza, Spain; Centro de Investigación Biomecánica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Zaragoza, Spain
- Jose Felix Rodriguez Matas
- LaBS, Department of Chemistry, Materials and Chemical Engineering "Giulio Natta", Politecnico di Milano, Milan, Italy
- Giulia Luraghi
- LaBS, Department of Chemistry, Materials and Chemical Engineering "Giulio Natta", Politecnico di Milano, Milan, Italy
- Jorge Grasa
- Aragón Institute of Engineering Research (I3A), Universidad de Zaragoza, Zaragoza, Spain; Centro de Investigación Biomecánica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Zaragoza, Spain
3. Stuermer L, Braga S, Martin R, Wolffsohn JS. Artificial intelligence virtual assistants in primary eye care practice. Ophthalmic Physiol Opt 2025;45:437-449. [PMID: 39723633] [PMCID: PMC11823310] [DOI: 10.1111/opo.13435]
Abstract
PURPOSE To propose a novel artificial intelligence (AI)-based virtual assistant trained on tabular clinical data that can provide decision-making support in primary eye care practice and optometry education programmes. METHODS Anonymised clinical data from 1125 complete optometric examinations (2250 eyes; 63% women, 37% men) were used to train different machine learning algorithm models to predict eye examination classification (refractive, binocular vision dysfunction, ocular disorder or any combination of these three options). After modelling, adjustment, mining and preprocessing (one-hot encoding and SMOTE techniques), 75 input (preliminary data, history, oculomotor tests and ocular examinations) and three output (refractive, binocular vision status and eye disease) features were defined. The data were split into training (80%) and test (20%) sets. Five machine learning algorithms were trained, and the best algorithms were subjected to fivefold cross-validation. Model performance was evaluated for accuracy, precision, sensitivity, F1 score and specificity. RESULTS The random forest algorithm was the best for classifying eye examination results, with a performance >95.2% (based on 35 input features from preliminary data and history), for proposing a subclassification of ocular disorders, with a performance >98.1% (based on 65 features from preliminary data, history and ocular examinations), and for differentiating binocular vision dysfunctions, with a performance >99.7% (based on 30 features from preliminary data and oculomotor tests). These models were integrated into a responsive web application, available in three languages, allowing intuitive access to the AI models via conventional clinical terms. CONCLUSIONS An AI-based virtual assistant that performed well in predicting patient classification, eye disorders or binocular vision dysfunction has been developed, with potential use in primary eye care practice and education programmes.
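A minimal sketch of the kind of tabular pipeline this abstract describes: one-hot encoding, SMOTE oversampling, an 80/20 split, a random forest, and fivefold cross-validation. The column names and the toy data frame are invented for illustration; the real model used 75 clinical input features:

```python
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

# Toy stand-in for the anonymised optometric records (features are invented).
df = pd.DataFrame({
    "age": [8, 35, 67, 23, 45, 12, 70, 29] * 30,
    "symptom": ["blur", "diplopia", "none", "blur", "none", "diplopia", "blur", "none"] * 30,
    "label": [0, 1, 2, 0, 0, 1, 2, 0] * 30,   # refractive / binocular / ocular disorder
})
X = pd.get_dummies(df[["age", "symptom"]])     # one-hot encode categorical inputs
y = df["label"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # oversample rare classes

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_bal, y_bal)
print("test accuracy:", clf.score(X_te, y_te))
print("5-fold CV:", cross_val_score(clf, X_bal, y_bal, cv=5).mean())
```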
Affiliation(s)
- Leandro Stuermer
- Department of Optometry, University of Contestado, Canoinhas, Brazil
- Optometry Research Group, School of Optometry, IOBA Eye Institute, University of Valladolid, Valladolid, Spain
- Sabrina Braga
- Department of Optometry, University of Contestado, Canoinhas, Brazil
- Optometry Research Group, School of Optometry, IOBA Eye Institute, University of Valladolid, Valladolid, Spain
- Raul Martin
- Optometry Research Group, School of Optometry, IOBA Eye Institute, University of Valladolid, Valladolid, Spain
- Departamento de Física Teórica, Atómica y Óptica, Universidad de Valladolid, Valladolid, Spain
- James S. Wolffsohn
- Optometry and Vision Sciences Research Group, Aston University, Birmingham, UK
4. D N S, Pai RM, Bhat SN, Pai M M M. Assessment of perceived realism in AI-generated synthetic spine fracture CT images. Technol Health Care 2025;33:931-944. [PMID: 40105176] [DOI: 10.1177/09287329241291368]
Abstract
BACKGROUND Deep learning-based decision support systems require synthetic images generated by adversarial networks, and these images require clinical evaluation to ensure their quality. OBJECTIVE The study evaluates the perceived realism of high-dimension synthetic spine fracture CT images generated by Progressive Growing Generative Adversarial Networks (PGGANs). METHODS The study used 2820 spine fracture CT images from 456 patients to train a PGGAN model. The model synthesized images up to 512 × 512 pixels, and the realism of the generated images was assessed using Visual Turing Tests (VTT) and a Fracture Identification Test (FIT). Three spine surgeons evaluated the images, and the clinical evaluation results were statistically analysed. RESULTS Spine surgeons had an average prediction accuracy of nearly 50% during clinical evaluations, indicating difficulty in distinguishing between real and generated images. The accuracy varied across image dimensions, with synthetic images being more realistic, especially at the 512 × 512 dimension. During the FIT, 13-15 of the 16 generated images of each fracture type were correctly identified, indicating that the 512 × 512 images are realistic and clearly depict fracture lines. CONCLUSION The study reveals that an AI-based PGGAN can generate realistic synthetic spine fracture CT images up to 512 × 512 pixels, making them difficult to distinguish from real images and supporting the improvement of automatic spine fracture type detection systems.
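A Visual Turing Test reduces to asking whether a reader's real-vs-synthetic accuracy differs from the 50% expected by chance. A sketch of that check follows; the counts are illustrative, not the study's raw responses:

```python
from scipy.stats import binomtest

# One surgeon's hypothetical VTT responses: 52 correct out of 100 shown images.
result = binomtest(k=52, n=100, p=0.5, alternative="two-sided")
print(f"accuracy = {52 / 100:.2f}, P = {result.pvalue:.3f}")
# A P-value near 1 with accuracy near 0.5 means the reader cannot reliably
# separate real from PGGAN-generated images: the study's criterion for realism.
```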
Affiliation(s)
- Sindhura D N
- Department of Data Science and Computer Applications, Manipal Institute of Technology, Manipal, Manipal Academy of Higher Education, Manipal, India
- Radhika M Pai
- Department of Data Science and Computer Applications, Manipal Institute of Technology, Manipal, Manipal Academy of Higher Education, Manipal, India
- Shyamasunder N Bhat
- Department of Orthopaedics, Kasturba Medical College, Manipal, Manipal Academy of Higher Education, Manipal, India
- Manohara Pai M M
- Department of Information and Communication Technology, Manipal Institute of Technology, Manipal, Manipal Academy of Higher Education, Manipal, India
5. Lima RV, Arruda MP, Muniz MCR, Filho HNF, Ferrerira DMR, Pereira SM. Artificial intelligence methods in diagnosis of retinoblastoma based on fundus imaging: a systematic review and meta-analysis. Graefes Arch Clin Exp Ophthalmol 2025;263:547-553. [PMID: 39289309] [DOI: 10.1007/s00417-024-06643-2]
Abstract
BACKGROUND Artificial intelligence (AI) algorithms for the detection of retinoblastoma (RB) by fundus image analysis have been proposed as a potentially effective technique to facilitate diagnosis and screening programs. However, doubts remain about the accuracy of the technique, the best type of AI for this situation, and its feasibility for everyday use. Therefore, we performed a systematic review and meta-analysis to evaluate this issue. METHODS Following PRISMA 2020 guidelines, a comprehensive search of the MEDLINE, Embase, ClinicalTrials.gov and IEEEX databases identified 494 studies whose titles and abstracts were screened for eligibility. We included diagnostic studies that evaluated the accuracy of AI in identifying retinoblastoma based on fundus imaging. Univariate and bivariate analyses were performed using the random effects model. The study protocol was registered in PROSPERO under CRD42024499221. RESULTS Six studies with 9902 fundus images were included, of which 5944 (60%) had confirmed RB. Only one dataset used a semi-supervised machine learning (ML)-based method; all other studies used supervised ML, three using architectures requiring high computational power and two using more economical models. The pooled analysis of all models showed a sensitivity of 98.2% (95% CI: 0.947-0.994), a specificity of 98.5% (95% CI: 0.916-0.998) and an AUC of 0.986 (95% CI: 0.970-0.989). Subgroup analyses comparing models with high and low computational power showed no significant difference (p=0.824). CONCLUSIONS AI methods showed high precision in the diagnosis of RB based on fundus images, with no significant difference between high and low computational power models, suggesting the viability of their use. Validation and cost-effectiveness studies are needed in countries of different income levels. Subpopulations should also be analyzed, as AI may be useful as an initial screening tool in populations at high risk for RB, serving as a bridge to the pediatric ophthalmologist or ocular oncologist, who are scarce globally. KEY MESSAGES What is known: Retinoblastoma is the most common intraocular cancer in childhood, and diagnostic delay is the main factor leading to a poor prognosis. The application of machine learning techniques offers reliable methods for screening and diagnosis of retinal diseases. What is new: The meta-analysis of the diagnostic accuracy of artificial intelligence methods for diagnosing retinoblastoma based on fundus images showed a sensitivity of 98.2% (95% CI: 0.947-0.994) and a specificity of 98.5% (95% CI: 0.916-0.998). There was no statistically significant difference in the diagnostic accuracy of high and low computational power models. The overall performance of supervised machine learning was better than that of unsupervised learning, although few studies of the latter type were available.
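The pooled estimates above come from a bivariate random-effects model. As a simplified, univariate illustration of the same idea, a DerSimonian-Laird pooling of logit-transformed per-study sensitivities might look like this (the per-study counts are invented placeholders):

```python
import numpy as np

def pool_logit_dl(successes, totals):
    """Random-effects (DerSimonian-Laird) pooling of proportions on the logit scale.

    A simplified univariate stand-in for the bivariate model used in such
    meta-analyses; successes/totals are per-study true positives / diseased
    (sensitivity) or true negatives / non-diseased (specificity).
    """
    successes = np.asarray(successes, float)
    totals = np.asarray(totals, float)
    # Continuity correction keeps logits finite for 0% / 100% studies.
    p = (successes + 0.5) / (totals + 1.0)
    y = np.log(p / (1 - p))                                         # per-study logit
    v = 1.0 / (successes + 0.5) + 1.0 / (totals - successes + 0.5)  # logit variance
    w = 1.0 / v
    q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)            # Cochran's Q
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                                       # DL weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    inv = lambda x: 1.0 / (1.0 + np.exp(-x))                        # back-transform
    return inv(pooled), (inv(pooled - 1.96 * se), inv(pooled + 1.96 * se))

# Hypothetical per-study sensitivity counts (not the paper's data):
sens, ci = pool_logit_dl([95, 180, 60], [100, 190, 62])
print(f"pooled sensitivity {sens:.3f}, 95% CI {ci[0]:.3f}-{ci[1]:.3f}")
```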
Affiliation(s)
- Rian Vilar Lima
- Department of Medicine, University of Fortaleza, Av. Washington Soares, 1321 - Edson Queiroz, Fortaleza - CE, Ceará, 60811-905, Brazil
- Maria Carolina Rocha Muniz
- Department of Medicine, University of Fortaleza, Av. Washington Soares, 1321 - Edson Queiroz, Fortaleza - CE, Ceará, 60811-905, Brazil
- Helvécio Neves Feitosa Filho
- Department of Medicine, University of Fortaleza, Av. Washington Soares, 1321 - Edson Queiroz, Fortaleza - CE, Ceará, 60811-905, Brazil
6. Kiruthika M, Malathi G. A comprehensive review on early detection of drusen patterns in age-related macular degeneration using deep learning models. Photodiagnosis Photodyn Ther 2025;51:104454. [PMID: 39716627] [DOI: 10.1016/j.pdpdt.2024.104454]
Abstract
Age-related Macular Degeneration (AMD) is a leading cause of visual impairment and blindness, affecting people aged fifty-five and older. It affects the retina, the light-sensitive layer of the eye. In early AMD, yellowish deposits called drusen form under the retina, which can result in distortion and gradual blurring of vision. The presence of drusen is the first sign of early dry AMD. As the disease progresses, more and larger deposits develop, and blood vessels grow from beneath the retina, leading to leakage of blood that damages the retina. In advanced AMD, peripheral vision may remain, but central vision is lost. Detecting AMD early is crucial, but treatments are limited, and nutritional supplements such as the AREDS2 formula may only slow disease progression. AMD diagnosis relies primarily on drusen identification from fundus photographs by ophthalmologists, but in the early stages of AMD this task is challenging because drusen regions are ambiguous. Furthermore, existing models have difficulty predicting drusen regions correctly because of the limited resolution of fundus images, and a deep learning-based model is proposed as a solution. Performance can be optimized by employing both local and global image information while AMD is still in its early phases. The retinal regions where drusen form were identified by image segmentation, and the deposits were then recognized automatically using pattern recognition techniques.
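The review does not tie itself to one pipeline. As a rough illustration of the segmentation-then-recognition idea, a classical baseline might threshold bright fundus regions and keep drusen-sized blobs; the threshold parameters and size limits below are invented for illustration:

```python
import numpy as np
from skimage import filters, measure, morphology

def drusen_candidates(green_channel: np.ndarray, min_area=15, max_area=500):
    """Return a label image of bright, drusen-sized blobs in a fundus green channel.

    green_channel: 2-D float array in [0, 1]; drusen appear as small bright spots.
    """
    # Local (adaptive) thresholding copes with uneven fundus illumination.
    thresh = filters.threshold_local(green_channel, block_size=51, offset=-0.02)
    bright = green_channel > thresh
    bright = morphology.remove_small_objects(bright, min_size=min_area)
    labels = measure.label(bright)
    # Keep only connected components in a plausible drusen size range.
    for region in measure.regionprops(labels):
        if not (min_area <= region.area <= max_area):
            labels[labels == region.label] = 0
    return labels
```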
Affiliation(s)
- Kiruthika M
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, India
- Malathi G
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, India
7. Cho U, Gwon YN, Chong SR, Han JY, Kim DK, Doo SW, Yang WJ, Kim K, Shim SR, Jung J, Kim JH. Satisfactory Evaluation of Call Service Using AI After Ureteral Stent Insertion: Randomized Controlled Trial. J Med Internet Res 2025;27:e56039. [PMID: 39836955] [PMCID: PMC11795156] [DOI: 10.2196/56039]
Abstract
BACKGROUND Ureteral stents, such as double-J stents, have become indispensable in urologic procedures but are associated with complications like hematuria and pain. While the advancement of artificial intelligence (AI) technology has led to its increasing application in the health sector, AI has not been used to provide information on potential complications and to facilitate subsequent measures in the event of such complications. OBJECTIVE This study aimed to assess the effectiveness of an AI-based prediction tool in providing patients with information about potential complications from ureteroscopy and ureteric stent placement and indicating the need for early additional therapy. METHODS Overall, 28 patients (aged 20-70 years) who underwent ureteral stent insertion for the first time, without a history of psychological illness, were consecutively included. A "reassurance-call" service was set up to equip patients with details about the procedure and postprocedure care and to monitor for complications and their severity. Patients were randomly allocated into 2 groups: reassurance-call by AI (group 1) and reassurance-call by humans (group 2). The primary outcome was the level of satisfaction with the reassurance-call service itself, measured using a Likert scale. Secondary outcomes included satisfaction with the AI-assisted reassurance-call service, also measured using a Likert scale, and the levels of satisfaction (Likert scale and Visual Analogue Scale [VAS]) and anxiety (State-Trait Anxiety Inventory and VAS) related to managing complications in both groups. RESULTS Of the 28 recruited patients (14 in each group), 1 patient in group 2 dropped out. Baseline characteristics of patients showed no significant differences (all P>.05). Satisfaction with the reassurance-call averaged 4.14 (SD 0.66; group 1) and 4.54 (SD 0.52; group 2), with no significant difference between AI and humans (P=.11). AI-assisted reassurance-call satisfaction averaged 3.43 (SD 0.94). Satisfaction with the management of complications, measured using the Likert scale, averaged 3.79 (SD 0.70) and 4.23 (SD 0.83), respectively, showing no significant difference (P=.14), but a significant difference was observed with the VAS (P=.01): 6.64 (SD 2.13) in group 1 versus 8.69 (SD 1.80) in group 2. Anxiety about complications measured with the State-Trait Anxiety Inventory averaged 36.43 (SD 9.17) and 39.23 (SD 8.51; P=.33), while anxiety measured with the VAS averaged 4.86 (SD 2.28) and 3.46 (SD 3.38; P=.18), respectively, showing no significant differences. Multiple regression analysis was performed on all outcomes; humans achieved higher satisfaction than AI in the management of complications, while most other variables showed no significant differences (P>.05). CONCLUSIONS This is the first study to use AI for patient reassurance regarding complications after ureteric stent placement. The study found that patients were similarly satisfied with reassurance calls conducted by AI or humans. Further research in larger populations is warranted to confirm these findings. TRIAL REGISTRATION Clinical Research Information System KCT0008062; https://tinyurl.com/4s8725w2.
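Likert and VAS scores are ordinal, so two small independent groups like these are often compared with a rank-based test. The abstract does not name the exact tests used, and the scores below are illustrative, not the trial's raw data:

```python
from scipy.stats import mannwhitneyu

# Hypothetical VAS satisfaction scores (0-10) for the AI-call and human-call groups.
vas_ai    = [7, 6, 8, 5, 7, 9, 6, 4, 8, 7, 6, 5, 9, 7]
vas_human = [9, 8, 10, 9, 7, 9, 8, 10, 9, 8, 7, 9, 10]

# Rank-based two-sided comparison, a common choice for small ordinal samples.
stat, p = mannwhitneyu(vas_ai, vas_human, alternative="two-sided")
print(f"U = {stat:.1f}, P = {p:.3f}")
```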
Affiliation(s)
- Ukrae Cho
- AI product Biz Team, AI service division, SK Telecom, Seoul, Republic of Korea
- College of Business, KAIST, Seoul, Republic of Korea
- Yong Nam Gwon
- Department of Urology, Soonchunhyang University Seoul Hospital, Soonchunhyang University Medical College, Seoul, Republic of Korea
- Seung Ryong Chong
- Social Safety Net Team, ESG Office, SK Telecom, Seoul, Republic of Korea
- Ji Yeon Han
- Department of Urology, Soonchunhyang University Seoul Hospital, Soonchunhyang University Medical College, Seoul, Republic of Korea
- Do Kyung Kim
- Department of Urology, Soonchunhyang University Seoul Hospital, Soonchunhyang University Medical College, Seoul, Republic of Korea
- Seung Whan Doo
- Department of Urology, Soonchunhyang University Seoul Hospital, Soonchunhyang University Medical College, Seoul, Republic of Korea
- Won Jae Yang
- Department of Urology, Soonchunhyang University Seoul Hospital, Soonchunhyang University Medical College, Seoul, Republic of Korea
- Kyeongmin Kim
- Department of Engineering, University of Hong Kong, Hong Kong, China (Hong Kong)
- Sung Ryul Shim
- Department of Biomedical Informatics, College of Medicine, Konyang University, Daejeon, Republic of Korea
- Jaehun Jung
- Department of Preventive Medicine, Korea University College of Medicine, Seoul, Republic of Korea
- Jae Heon Kim
- Department of Urology, Soonchunhyang University Seoul Hospital, Soonchunhyang University Medical College, Seoul, Republic of Korea
8. Zuo H, Huang B, He J, Fang L, Huang M. Machine Learning Approaches in High Myopia: Systematic Review and Meta-Analysis. J Med Internet Res 2025;27:e57644. [PMID: 39753217] [PMCID: PMC11748443] [DOI: 10.2196/57644]
Abstract
BACKGROUND In recent years, with the rapid development of machine learning (ML), it has gained widespread attention from researchers in clinical practice. ML models appear to demonstrate promising accuracy in the diagnosis of complex diseases, as well as in predicting disease progression and prognosis. Some studies have applied ML to ophthalmology, primarily for the diagnosis of pathologic myopia and high myopia-associated glaucoma, as well as for predicting the progression of high myopia. ML-based detection still requires evidence-based validation to prove its accuracy and feasibility. OBJECTIVE This study aims to assess the performance of ML methods in detecting high myopia and pathologic myopia in clinical practice, thereby providing evidence-based support for the future development and refinement of intelligent diagnostic or predictive tools. METHODS PubMed, Cochrane, Embase, and Web of Science were comprehensively searched up to September 3, 2023. The Prediction model Risk Of Bias ASsessment Tool (PROBAST) was used to appraise the risk of bias in the eligible studies. The meta-analysis was implemented using a bivariate mixed-effects model. In the validation set, subgroup analyses were conducted based on the ML target events (diagnosis and prediction of high myopia and diagnosis of pathological myopia and high myopia-associated glaucoma) and modeling methods. RESULTS This study ultimately included 45 studies, of which 32 were used for quantitative meta-analysis. The meta-analysis results unveiled that for the diagnosis of pathologic myopia, the summary receiver operating characteristic (SROC), sensitivity, and specificity of ML were 0.97 (95% CI 0.95-0.98), 0.91 (95% CI 0.89-0.92), and 0.95 (95% CI 0.94-0.97), respectively. Specifically, deep learning (DL) showed an SROC of 0.97 (95% CI 0.95-0.98), sensitivity of 0.92 (95% CI 0.90-0.93), and specificity of 0.96 (95% CI 0.95-0.97), while conventional ML (non-DL) showed an SROC of 0.86 (95% CI 0.75-0.92), sensitivity of 0.77 (95% CI 0.69-0.84), and specificity of 0.85 (95% CI 0.75-0.92). For the diagnosis and prediction of high myopia, the SROC, sensitivity, and specificity of ML were 0.98 (95% CI 0.96-0.99), 0.94 (95% CI 0.90-0.96), and 0.94 (95% CI 0.88-0.97), respectively. For the diagnosis of high myopia-associated glaucoma, the SROC, sensitivity, and specificity of ML were 0.96 (95% CI 0.94-0.97), 0.92 (95% CI 0.85-0.96), and 0.88 (95% CI 0.67-0.96), respectively. CONCLUSIONS ML demonstrated highly promising accuracy in diagnosing high myopia and pathologic myopia. Moreover, based on the limited evidence available, we also found that ML appeared to have favorable accuracy in predicting the risk of developing high myopia in the future. DL can be used as a potential method for intelligent image processing and intelligent recognition, and intelligent examination tools can be developed in subsequent research to provide help for areas where medical resources are scarce. TRIAL REGISTRATION PROSPERO CRD42023470820; https://tinyurl.com/2xexp738.
Affiliation(s)
- Huiyi Zuo
- Ophthalmology Department, First Affiliated Hospital of GuangXi Medical University, Nanning, China
- Baoyu Huang
- Ophthalmology Department, First Affiliated Hospital of GuangXi Medical University, Nanning, China
- Jian He
- Ophthalmology Department, First Affiliated Hospital of GuangXi Medical University, Nanning, China
- Liying Fang
- Ophthalmology Department, First Affiliated Hospital of GuangXi Medical University, Nanning, China
- Minli Huang
- Ophthalmology Department, First Affiliated Hospital of GuangXi Medical University, Nanning, China
9. Yang Z, Tian D, Zhao X, Zhang L, Xu Y, Lu X, Chen Y. Evolutionary patterns and research frontiers of artificial intelligence in age-related macular degeneration: a bibliometric analysis. Quant Imaging Med Surg 2025;15:813-830. [PMID: 39839014] [PMCID: PMC11744182] [DOI: 10.21037/qims-24-1406]
Abstract
Background Age-related macular degeneration (AMD) represents a significant clinical concern, particularly in aging populations, and recent advancements in artificial intelligence (AI) have catalyzed substantial research interest in this domain. Despite the growing body of literature, there remains a need for a comprehensive, quantitative analysis to delineate key trends and emerging areas in the field of AI applications in AMD. This bibliometric analysis sought to systematically evaluate the landscape of AI-focused research on AMD to illuminate publication patterns, influential contributors, and focal research trends. Methods Using the Web of Science Core Collection (WoSCC), a search was conducted to retrieve relevant publications from 1992 to 2023. This analysis involved an array of bibliometric indicators to map the evolution of AI research in AMD, assessing parameters such as publication volume, national/regional and institutional contributions, journal impact, author influence, and emerging research hotspots. Visualization tools, including Bibliometrix, CiteSpace and VOSviewer, were employed to generate comprehensive assessments of the data. Results A total of 1,721 publications were identified, with the USA leading in publication output and the University of Melbourne as the most prolific institution. The journal Investigative Ophthalmology & Visual Science published the highest number of articles, and Schmidt-Erfurth emerged as the most active author. Keyword and clustering analyses, along with citation burst detection, revealed three distinct research stages within the field from 1992 to 2023. Presently, research efforts are concentrated on developing deep learning (DL) models for AMD diagnosis and progression prediction. Prominent emerging themes include early detection, risk stratification, and treatment efficacy prediction. The integration of large language models (LLMs) and vision-language models (VLMs) for enhanced image processing also represents a novel research frontier. Conclusions This bibliometric analysis provides a structured overview of prevailing research trends and emerging directions in AI applications for AMD. These findings furnish valuable insights to guide future research and foster collaborative advancements in this evolving field.
Affiliation(s)
- Zuyi Yang
- Eight-year Medical Doctor Program, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Department of Ophthalmology, Key Lab of Ocular Fundus Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Dianzhe Tian
- Eight-year Medical Doctor Program, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Department of Liver Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xinyu Zhao
- Department of Ophthalmology, Key Lab of Ocular Fundus Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Lei Zhang
- Department of Liver Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yiyao Xu
- Department of Liver Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xin Lu
- Department of Liver Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Youxin Chen
- Department of Ophthalmology, Key Lab of Ocular Fundus Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
10. Chen JS, Reddy AJ, Al-Sharif E, Shoji MK, Kalaw FGP, Eslani M, Lang PZ, Arya M, Koretz ZA, Bolo KA, Arnett JJ, Roginiel AC, Do JL, Robbins SL, Camp AS, Scott NL, Rudell JC, Weinreb RN, Baxter SL, Granet DB. Analysis of ChatGPT Responses to Ophthalmic Cases: Can ChatGPT Think like an Ophthalmologist? Ophthalmol Sci 2025;5:100600. [PMID: 39346575] [PMCID: PMC11437840] [DOI: 10.1016/j.xops.2024.100600]
Abstract
Objective Large language models such as ChatGPT have demonstrated significant potential in question-answering within ophthalmology, but there is a paucity of literature evaluating their ability to generate clinical assessments and discussions. The objectives of this study were to (1) assess the accuracy of assessments and plans generated by ChatGPT and (2) evaluate ophthalmologists' abilities to distinguish between responses generated by clinicians versus ChatGPT. Design Cross-sectional mixed-methods study. Subjects Sixteen ophthalmologists from a single academic center, of whom 10 were board-eligible and 6 were board-certified, were recruited to participate in this study. Methods Prompt engineering was used to ensure ChatGPT output discussions in the style of the ophthalmologist author of the Medical College of Wisconsin Ophthalmic Case Studies. Cases where ChatGPT accurately identified the primary diagnoses were included and then paired. Masked human-generated and ChatGPT-generated discussions were sent to participating ophthalmologists, who were asked to identify the author of each discussion. Response confidence was assessed using a 5-point Likert scale score, and subjective feedback was manually reviewed. Main Outcome Measures Accuracy of ophthalmologist identification of discussion author, as well as subjective perceptions of human-generated versus ChatGPT-generated discussions. Results Overall, ChatGPT correctly identified the primary diagnosis in 15 of 17 (88.2%) cases. Two cases were excluded from the paired comparison due to hallucinations or fabrications of nonuser-provided data. Ophthalmologists correctly identified the author in 77.9% ± 26.6% of the 13 included cases, with a mean Likert scale confidence rating of 3.6 ± 1.0. No significant differences in performance or confidence were found between board-certified and board-eligible ophthalmologists. Subjectively, ophthalmologists found that discussions written by ChatGPT tended to contain more generic responses, include irrelevant information, hallucinate more frequently, and follow distinct syntactic patterns (all P < 0.01). Conclusions Large language models have the potential to synthesize clinical data and generate ophthalmic discussions. While these findings have exciting implications for artificial intelligence-assisted health care delivery, more rigorous real-world evaluation of these models is necessary before clinical deployment. Financial Disclosures The author(s) have no proprietary or commercial interest in any materials discussed in this article.
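The study's prompt engineering is described only at a high level. A hedged sketch of style-conditioned generation with the openai v1 Python client might look like the following; the model name, prompts, and case text are placeholders, not the study's materials:

```python
from openai import OpenAI  # assumes the openai>=1.0 Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Style-conditioning system prompt, loosely mirroring the study's approach of
# steering output toward a named case-series author's discussion style.
system = (
    "You are an academic ophthalmologist writing the Discussion section of a "
    "teaching case series. Write a concise assessment and plan in that style."
)
case = "A 7-year-old presents with acute-onset esotropia and diplopia..."  # placeholder

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; the exact ChatGPT model/version is per the paper
    messages=[{"role": "system", "content": system},
              {"role": "user", "content": case}],
    temperature=0.7,
)
print(response.choices[0].message.content)
```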
Affiliation(s)
- Jimmy S. Chen
- Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- UCSD Health Department of Biomedical Informatics, University of California San Diego, La Jolla, California
- Akshay J. Reddy
- School of Medicine, California University of Science and Medicine, Colton, California
- Eman Al-Sharif
- Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Surgery Department, College of Medicine, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Marissa K. Shoji
- Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Fritz Gerald P. Kalaw
- Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- UCSD Health Department of Biomedical Informatics, University of California San Diego, La Jolla, California
- Medi Eslani
- Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Paul Z. Lang
- Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Malvika Arya
- Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Zachary A. Koretz
- Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Kyle A. Bolo
- Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Justin J. Arnett
- Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Aliya C. Roginiel
- Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Jiun L. Do
- Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Shira L. Robbins
- Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Andrew S. Camp
- Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Nathan L. Scott
- Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Jolene C. Rudell
- Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- Robert N. Weinreb
- Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- UCSD Health Department of Biomedical Informatics, University of California San Diego, La Jolla, California
- Sally L. Baxter
- Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
- UCSD Health Department of Biomedical Informatics, University of California San Diego, La Jolla, California
- David B. Granet
- Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, La Jolla, California
11. Nadarajasundaram A, Harrow S. The Role of Artificial Intelligence in Triaging Patients in Eye Casualty Departments: A Systematic Review. Cureus 2025;17:e78144. [PMID: 39877053] [PMCID: PMC11774558] [DOI: 10.7759/cureus.78144]
Abstract
Visual impairment and eye disease remain a significant burden, highlighting the need for further support for eye care services. Artificial intelligence (AI), with its rapid advancements, offers a potential avenue for transforming healthcare and addressing the growing challenges in eye health, including in settings such as eye casualty departments. This review aims to evaluate current studies on AI implementation in eye casualty triage to understand its potential future application. A systematic review was conducted across a range of sources and databases, initially identifying 77 records, of which four studies were included in the final analysis. The findings demonstrated that AI tools can triage patients consistently and accurately and improve work efficiency without compromising safety. However, we note limitations of the included studies, such as limited external validation of results and limited general applicability at present. All the studies highlight the need for further research and testing to better understand and validate AI tools in eye casualty triaging.
Affiliation(s)
- Simeon Harrow
- Emergency Medicine Department, Maidstone and Tunbridge Wells NHS Trust, Maidstone, GBR
12. Huang YP, Vadloori S, Kang EYC, Fukushima Y, Takahashi R, Wu WC. Computer-aided detection of retinopathy of prematurity severity assessment via vessel tortuosity measurement in preterm infants' fundus images. Eye (Lond) 2024;38:3309-3317. [PMID: 39097674] [PMCID: PMC11584778] [DOI: 10.1038/s41433-024-03285-w]
Abstract
OBJECTIVE To develop a computer-aided diagnostic system for retinopathy of prematurity (ROP) using retinal vessel morphological features. METHODS A total of 200 fundus images from 136 preterm infants with stage 1 to 3 ROP were analysed. Two methods were developed to measure vessel tortuosity: the peak-and-valley method and the polynomial curve fitting method. Correlations of temporal artery tortuosity (TAT) and temporal vein tortuosity (TVT) with ROP severity were investigated, as were the relationships of vessel tortuosity with vessel angles (TAA and TVA) and vessel widths (TAW and TVW). A separate dataset from Japan containing 126 images from 97 preterm patients was used for verification. RESULTS Both methods identified similar tortuosity in images without ROP and in mild ROP cases. However, the polynomial curve fitting method demonstrated enhanced tortuosity detection in stages 2 and 3 ROP compared to the peak-and-valley method. A strong positive correlation was revealed between ROP severity and increased arterial and venous tortuosity (P < 0.0001). A significant negative correlation between TAA and TAT (r = -0.485, P < 0.0001) and between TVA and TVT (r = -0.281, P < 0.0001), and a significant positive correlation between TAW and TAT (r = 0.204, P = 0.0040), were identified. Similar results were found in the test dataset from Japan. CONCLUSIONS ROP severity was associated with increased retinal tortuosity and retinal vessel width, alongside a decrease in retinal vascular angle. This quantitative analysis of retinal vessels provides crucial insights for advancing ROP diagnosis and understanding its progression.
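The abstract names two tortuosity measures without giving formulas. One common definition is the arc-to-chord ratio, sketched here along a polynomial fitted to vessel centerline points; the polynomial degree, the sampling density, and the assumption that the centerline is a function of x are illustrative, not the paper's exact method:

```python
import numpy as np

def polyfit_tortuosity(x, y, degree=4, samples=500):
    """Arc-to-chord tortuosity of a vessel centerline via polynomial curve fitting.

    x, y: centerline point coordinates ordered along the vessel.
    Returns arc_length / chord_length (1.0 = perfectly straight).
    """
    coeffs = np.polyfit(x, y, degree)                 # smooth the centerline
    xs = np.linspace(min(x), max(x), samples)
    ys = np.polyval(coeffs, xs)
    arc = np.sum(np.hypot(np.diff(xs), np.diff(ys)))  # numerical arc length
    chord = np.hypot(xs[-1] - xs[0], ys[-1] - ys[0])
    return arc / chord

# Toy centerlines: a gentle sine wave is more tortuous than a near-straight line.
x = np.linspace(0, 100, 50)
print(polyfit_tortuosity(x, 5 * np.sin(x / 10)))   # noticeably > 1
print(polyfit_tortuosity(x, 0.1 * x))              # approximately 1
```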
Affiliation(s)
- Yo-Ping Huang
- Department of Electrical Engineering, National Penghu University of Science and Technology, Penghu, 88046, Taiwan
- Department of Electrical Engineering, National Taipei University of Technology, Taipei, 10608, Taiwan
- Department of Information and Communication Engineering, Chaoyang University of Technology, Taichung, 41349, Taiwan
- Spandana Vadloori
- Department of Electrical Engineering, National Penghu University of Science and Technology, Penghu, 88046, Taiwan
- Eugene Yu-Chuan Kang
- Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou, 33305, Taiwan
- College of Medicine, Chang Gung University, Taoyuan, 33305, Taiwan
- Yoko Fukushima
- Department of Ophthalmology, Osaka University, Osaka, 565-0871, Japan
- Rie Takahashi
- Department of Ophthalmology, Fukuoka University, Fukuoka, 814-0180, Japan
- Wei-Chi Wu
- Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou, 33305, Taiwan
- College of Medicine, Chang Gung University, Taoyuan, 33305, Taiwan
13. Gwon YN, Cho U, Chong SR, Han JY, Kim DK, Doo SW, Yang WJ, Kim K, Shim SR, Jung J, Kim JH. Coping with Complications that Occur after Prostate Biopsy for Satisfactory Evaluation of Call Service Using Artificial Intelligence: A Pilot Randomized Controlled Trial. World J Mens Health 2024;42:42.e101. [PMID: 39743217] [DOI: 10.5534/wjmh.240191]
Abstract
PURPOSE To assess whether an artificial intelligence (AI)-based reassurance-call can inform patients about potential complications and provide reassurance following a prostate biopsy. MATERIALS AND METHODS From October 2022 to December 2023, 42 patients aged 40 to 70 years undergoing their first prostate biopsy were recruited. The 'Reassurance-call' service was utilized to inform and monitor patients for complications. Participants were randomized into three groups: AI-assisted Reassurance-call service (Group 1), human-assisted Reassurance-call service (Group 2), and no call (Group 3). The primary outcome measured was patient satisfaction with the Reassurance-call service, assessed using a Likert scale. Secondary outcomes included satisfaction with complication management and anxiety levels, evaluated using the Likert scale, visual analog scale (VAS), and the state-trait anxiety inventory (STAI). RESULTS Satisfaction with Reassurance-call averaged 4.20 (standard deviation [SD] 0.56) for Group 1 and 4.71 (SD 0.47) for Group 2, showing a statistically significant difference. Satisfaction regarding complication management using the Likert scale was 4.13 (SD 0.52) for Group 1, 4.43 (SD 0.76) for Group 2, and 3.79 (SD 0.80) for Group 3, with no statistically significant differences. Satisfaction regarding complication management using the VAS averaged 8.33 (SD 1.23) for Group 1, 8.57 (SD 1.45) for Group 2, and 7.07 (SD 1.86) for Group 3, indicating significant differences. Anxiety levels using the STAI averaged 40.00 (SD 10.54) for Group 1, 39.14 (SD 8.29) for Group 2, and 35.00 (SD 9.46) for Group 3, showing no significant differences. Anxiety levels using the VAS averaged 5.07 (SD 2.79) for Group 1, 2.21 (SD 2.64) for Group 2, and 3.50 (SD 2.28) for Group 3, with significant differences observed. CONCLUSIONS AI demonstrated potential in enhancing patient reassurance and managing complications post-prostate biopsy, although human interaction proved superior in certain aspects. Further studies with larger cohorts are necessary to verify the effectiveness of AI-based tools.
Affiliation(s)
- Yong Nam Gwon
- Department of Urology, Soonchunhyang University Seoul Hospital, Soonchunhyang University Medical College, Seoul, Korea
- Ukrae Cho
- AI Product Biz Team, AI Service Division, SK Telecom, Seoul, Korea
- Ji Yeon Han
- Department of Urology, Soonchunhyang University Seoul Hospital, Soonchunhyang University Medical College, Seoul, Korea
- Do Kyung Kim
- Department of Urology, Soonchunhyang University Seoul Hospital, Soonchunhyang University Medical College, Seoul, Korea
- Seung Whan Doo
- Department of Urology, Soonchunhyang University Seoul Hospital, Soonchunhyang University Medical College, Seoul, Korea
- Won Jae Yang
- Department of Urology, Soonchunhyang University Seoul Hospital, Soonchunhyang University Medical College, Seoul, Korea
- Kyeongmin Kim
- Department of Engineering, University of Hong Kong, Hong Kong, China
- Sung Ryul Shim
- Department of Biomedical Informatics, College of Medicine, Konyang University, Daejeon, Korea
- Jaehun Jung
- Artificial Intelligence and Big-Data Convergence Center, Gil Medical Center, Gachon University College of Medicine, Incheon, Korea
- Department of Preventive Medicine, Gachon University College of Medicine, Incheon, Korea
- Jae Heon Kim
- Department of Urology, Soonchunhyang University Seoul Hospital, Soonchunhyang University Medical College, Seoul, Korea
14. Coyner AS, Young BK, Ostmo SR, Grigorian F, Ells A, Hubbard B, Rodriguez SH, Rishi P, Miller AM, Bhatt AR, Agarwal-Sinha S, Sears J, Chan RVP, Chiang MF, Kalpathy-Cramer J, Binenbaum G, Campbell JP. Use of an Artificial Intelligence-Generated Vascular Severity Score Improved Plus Disease Diagnosis in Retinopathy of Prematurity. Ophthalmology 2024;131:1290-1296. [PMID: 38866367] [PMCID: PMC11499038] [DOI: 10.1016/j.ophtha.2024.06.006]
Abstract
PURPOSE To evaluate whether providing clinicians with an artificial intelligence (AI)-based vascular severity score (VSS) improves consistency in the diagnosis of plus disease in retinopathy of prematurity (ROP). DESIGN Multireader diagnostic accuracy imaging study. PARTICIPANTS Eleven ROP experts, 9 of whom had been in practice for 10 years or more. METHODS RetCam (Natus Medical Incorporated) fundus images were obtained from premature infants during routine ROP screening as part of the Imaging and Informatics in ROP study between January 2012 and July 2020. From all available examinations, a subset of 150 eye examinations from 110 infants were selected for grading. An AI-based VSS was assigned to each set of images using the i-ROP DL system (Siloam Vision). The clinicians were asked to diagnose plus disease for each examination and to assign an estimated VSS (range, 1-9) at baseline, and then again 1 month later with AI-based VSS assistance. A reference standard diagnosis (RSD) was assigned to each eye examination from the Imaging and Informatics in ROP study based on 3 masked expert labels and the ophthalmoscopic diagnosis. MAIN OUTCOME MEASURES Mean linearly weighted κ value for plus disease diagnosis compared with RSD. Area under the receiver operating characteristic curve (AUC) and area under the precision-recall curve (AUPR) for labels 1 through 9 compared with RSD for plus disease. RESULTS Expert agreement improved significantly, from substantial (κ value, 0.69 [0.59, 0.75]) to near perfect (κ value, 0.81 [0.71, 0.86]), when AI-based VSS was integrated. Additionally, a significant improvement in plus disease discrimination was achieved as measured by mean AUC (from 0.94 [95% confidence interval (CI), 0.92-0.96] to 0.98 [95% CI, 0.96-0.99]; difference, 0.04 [95% CI, 0.01-0.06]) and AUPR (from 0.86 [95% CI, 0.81-0.90] to 0.95 [95% CI, 0.91-0.97]; difference, 0.09 [95% CI, 0.03-0.14]). CONCLUSIONS Providing ROP clinicians with an AI-based measurement of vascular severity in ROP was associated with both improved plus disease diagnosis and improved continuous severity labeling as compared with an RSD for plus disease. If implemented in practice, AI-based VSS could reduce interobserver variability and could standardize treatment for infants with ROP. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
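As an illustration of the study's three reported metrics, the snippet below computes a linearly weighted kappa against a reference standard and treats the 1-9 severity label as a continuous score for AUC and AUPR. All data here are simulated placeholders, not study data:

```python
import numpy as np
from sklearn.metrics import average_precision_score, cohen_kappa_score, roc_auc_score

# Simulated stand-ins: a binary plus-disease reference standard (RSD), one
# expert's 1-9 severity label, and that expert's dichotomous diagnosis.
rng = np.random.default_rng(0)
reference = rng.integers(0, 2, size=150)                           # RSD: plus vs. not
severity = np.clip(reference * 4 + rng.integers(1, 6, size=150), 1, 9)
diagnosis = (severity >= 6).astype(int)                            # expert's call

# Linearly weighted kappa: agreement of the expert diagnosis with the RSD
# (the paper averages this across its 11 experts).
kappa = cohen_kappa_score(diagnosis, reference, weights="linear")

# AUC / AUPR treat the 1-9 severity label as a score against the RSD.
auc = roc_auc_score(reference, severity)
aupr = average_precision_score(reference, severity)
print(f"kappa={kappa:.2f}, AUC={auc:.2f}, AUPR={aupr:.2f}")
```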
Affiliation(s)
- Aaron S Coyner
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Benjamin K Young
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Susan R Ostmo
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Florin Grigorian
- Arkansas Children's Hospital, University of Arkansas for Medical Sciences, Little Rock, Arkansas
- Anna Ells
- Calgary Retina Consultants, University of Calgary, Calgary, Alberta, Canada
- Baker Hubbard
- Emory Eye Center, Emory University School of Medicine, Atlanta, Georgia
- Sarah H Rodriguez
- Department of Ophthalmology and Visual Science, University of Chicago, Chicago, Illinois
- Pukhraj Rishi
- Truhlsen Eye Institute, University of Nebraska Medical Centre, Omaha, Nebraska
- Aaron M Miller
- Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, Texas
- Amit R Bhatt
- Department of Ophthalmology, Texas Children's Hospital, Houston, Texas
- Jonathan Sears
- Cole Eye Institute, The Cleveland Clinic, Cleveland, Ohio
- R V Paul Chan
- Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
- Michael F Chiang
- National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Jayashree Kalpathy-Cramer
- National Eye Institute, National Institutes of Health, Bethesda, Maryland; Department of Ophthalmology, University of Colorado School of Medicine, Aurora, Colorado
- Gil Binenbaum
- Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- J Peter Campbell
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
15. Akbari M, Pourreza HR, Khalili Pour E, Dastjani Farahani A, Bazvand F, Ebrahimiadib N, Imani Fooladi M, Ramazani K F. FARFUM-RoP, A dataset for computer-aided detection of Retinopathy of Prematurity. Sci Data 2024;11:1176. [PMID: 39478004] [PMCID: PMC11525552] [DOI: 10.1038/s41597-024-03897-7]
Abstract
Retinopathy of Prematurity (ROP) is a critical eye disorder affecting premature infants, characterized by abnormal blood vessel development in the retina. Plus Disease, indicating severe ROP progression, plays a pivotal role in diagnosis. Recent advancements in Artificial Intelligence (AI) have shown parity with, or even surpassed, human experts in ROP detection, especially for Plus Disease. However, the success of AI systems depends on high-quality datasets, emphasizing the need for collaboration and data sharing among researchers. To address this challenge, the paper introduces a new public dataset, FARFUM-RoP (Farabi and Ferdowsi University of Mashhad's ROP dataset), comprising 1533 ROP fundus images from 68 patients, annotated independently by five experienced pediatric ophthalmologists as "Normal," "Pre-Plus," or "Plus." Ethical principles and consent were meticulously followed during data collection. The paper presents the dataset structure, patient details, and expert labels.
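Datasets with several independent graders are typically consumed by aggregating labels and reporting inter-rater agreement. A sketch with invented labels follows (majority vote plus Fleiss' kappa; the 0/1/2 coding and the label matrix are assumptions, not the dataset's file format):

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical label matrix: rows = images, columns = the 5 graders,
# values 0/1/2 = "Normal" / "Pre-Plus" / "Plus".
labels = np.array([
    [0, 0, 0, 1, 0],
    [2, 2, 1, 2, 2],
    [1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
])

# Majority vote gives a single consensus label per image.
consensus = [np.bincount(row, minlength=3).argmax() for row in labels]

# Fleiss' kappa quantifies chance-corrected agreement among the five graders.
table, _ = aggregate_raters(labels)
print("consensus:", consensus)
print("Fleiss kappa:", fleiss_kappa(table))
```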
Affiliation(s)
- Morteza Akbari
- Machine Vision Lab., Faculty of Engineering, Ferdowsi University of Mashhad, Mashhad, 9177948974, Iran
- Hamid-Reza Pourreza
- Machine Vision Lab., Faculty of Engineering, Ferdowsi University of Mashhad, Mashhad, 9177948974, Iran
- Faculty of Engineering, McMaster University, Hamilton, Ontario, L8S 4L7, Canada
- Elias Khalili Pour
- Department of Pediatric Ophthalmology, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, 1336616351, Iran
- Afsar Dastjani Farahani
- Department of Pediatric Ophthalmology, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, 1336616351, Iran
- Fatemeh Bazvand
- Department of Pediatric Ophthalmology, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, 1336616351, Iran
- Nazanin Ebrahimiadib
- Department of Pediatric Ophthalmology, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, 1336616351, Iran
- Marjan Imani Fooladi
- Department of Pediatric Ophthalmology, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, 1336616351, Iran
- Fereshteh Ramazani K
- Machine Vision Lab., Faculty of Engineering, Ferdowsi University of Mashhad, Mashhad, 9177948974, Iran
16. Guo MY, Zheng YY, Xie Q. A preliminary study of artificial intelligence to recognize tessellated fundus in visual function screening of 7-14 year old students. BMC Ophthalmol 2024;24:471. [PMID: 39472791] [PMCID: PMC11520471] [DOI: 10.1186/s12886-024-03722-0]
Abstract
BACKGROUND To evaluate the accuracy of artificial intelligence (AI)-based technology in recognizing tessellated fundus in students aged 7-14 years. METHODS A retrospective study was conducted on consecutive fundus photographs collected during visual function screening of students aged 7-14 years in Haikou City from June 2018 to May 2019; 1907 cases were included (949 male, 958 female). All images were also analyzed manually by two attending ophthalmologists, and in case of discrepancy between the AI and manual results, the manual result was used as the reference standard. To assess the sensitivity and specificity of AI in recognizing tessellated fundus, a Kappa consistency test was performed comparing manual with AI recognition. RESULTS In 1782 of 1907 cases (93.4%), the manual and AI recognition results were fully concordant; the remaining 125 cases (6.6%) showed discrepancies. The diagnostic rates of manual and AI recognition for tessellated fundus were 26.1% and 26.4%, respectively. The sensitivity, specificity, and area under the ROC curve (AUC) of AI for recognizing tessellated fundus in students aged 7-14 years were 88.0%, 95.4%, and 0.917, respectively. The Kappa test showed that the manual and AI identification results were highly consistent (κ = 0.831, P < 0.001). CONCLUSION AI analysis has high specificity and sensitivity for tessellated fundus identification in students aged 7-14 years, and applying artificial intelligence to visual function screening in this age group is feasible.
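To make the agreement analysis concrete, here is a minimal sketch (not the study's code; the labels below are hypothetical placeholders) of computing Cohen's kappa plus sensitivity and specificity against the manual standard with scikit-learn.
```python
# Agreement between manual and AI labels (1 = tessellated fundus, 0 = normal),
# using Cohen's kappa and a 2x2 confusion matrix; data are placeholders.
from sklearn.metrics import cohen_kappa_score, confusion_matrix

manual = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0]  # reference standard
ai     = [1, 0, 0, 1, 0, 0, 0, 1, 0, 1]  # AI output

kappa = cohen_kappa_score(manual, ai)
tn, fp, fn, tp = confusion_matrix(manual, ai).ravel()
sensitivity = tp / (tp + fn)   # AI sensitivity vs. the manual standard
specificity = tn / (tn + fp)
print(f"kappa={kappa:.3f} sensitivity={sensitivity:.3f} specificity={specificity:.3f}")
```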
Collapse
Affiliation(s)
- Meng-Ying Guo
- Department of Ophthalmology, Haikou Affiliated Hospital of Central South University Xiangya School of Medicine, Haikou, Hainan, 570208, China
| | - Yun-Yan Zheng
- Department of Ophthalmology, Haikou Affiliated Hospital of Central South University Xiangya School of Medicine, Haikou, Hainan, 570208, China
| | - Qing Xie
- Department of Ophthalmology, Haikou Affiliated Hospital of Central South University Xiangya School of Medicine, Haikou, Hainan, 570208, China.
| |
Collapse
|
17
|
Xiaojian Y, Zhanbo Q, Jian C, Zefeng W, Jian L, Jin L, Yuefen P, Shuwen H. Deep learning application in prediction of cancer molecular alterations based on pathological images: a bibliographic analysis via CiteSpace. J Cancer Res Clin Oncol 2024; 150:467. [PMID: 39422817 PMCID: PMC11489169 DOI: 10.1007/s00432-024-05992-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2024] [Accepted: 10/09/2024] [Indexed: 10/19/2024]
Abstract
BACKGROUND Advances in artificial intelligence (AI) technology for image recognition are propelling molecular pathology research into a new era. OBJECTIVE To summarize the hot spots and research trends in the field of molecular pathology image recognition. METHODS Relevant articles from January 1st, 2010, to August 25th, 2023, were retrieved from the Web of Science Core Collection. Subsequently, CiteSpace was employed for bibliometric and visual analysis, generating diverse network diagrams illustrating keywords, highly cited references, hot topics, and research trends. RESULTS A total of 110 relevant articles were extracted from a pool of 10,205 articles. The overall publication count exhibited a rising trend each year. The leading contributors in terms of institutions, countries, and authors were Maastricht University (11 articles), the United States (38 articles), and Kather Jacob Nicholas (9 articles), respectively. Half of the top ten research institutions, based on publication volume, were affiliated with Germany. The most frequently cited article was authored by Nicolas Coudray et al., accumulating 703 citations. The keyword "Deep learning" had the highest frequency in 2019. Notably, the highlighted keywords from 2022 to 2023 included "microsatellite instability", and 21 articles focused on utilizing algorithms to recognize microsatellite instability (MSI) in colorectal cancer (CRC) pathological images. CONCLUSION The use of deep learning (DL) is expected to provide a new strategy to effectively solve the current problem of time-consuming and expensive molecular pathology detection. Further research is needed to address issues such as data quality and standardization, model interpretability, and resource and infrastructure requirements.
Collapse
Affiliation(s)
- Yu Xiaojian
- Huzhou Central Hospital, Affiliated Central Hospital Huzhou University, No.1558, Sanhuan North Road, Wuxing District, Huzhou, 313000, Zhejiang Province, China
- Key Laboratory of Multiomics Research and Clinical Transformation of Digestive Cancer of Huzhou, Huzhou, China
- Huzhou Central Hospital, Fifth School of Clinical Medicine of Zhejiang Chinese Medical University, Huzhou, China
| | - Qu Zhanbo
- Huzhou Central Hospital, Affiliated Central Hospital Huzhou University, No.1558, Sanhuan North Road, Wuxing District, Huzhou, 313000, Zhejiang Province, China
- Key Laboratory of Multiomics Research and Clinical Transformation of Digestive Cancer of Huzhou, Huzhou, China
- Huzhou Central Hospital, Fifth School of Clinical Medicine of Zhejiang Chinese Medical University, Huzhou, China
| | - Chu Jian
- Huzhou Central Hospital, Affiliated Central Hospital Huzhou University, No.1558, Sanhuan North Road, Wuxing District, Huzhou, 313000, Zhejiang Province, China
- Key Laboratory of Multiomics Research and Clinical Transformation of Digestive Cancer of Huzhou, Huzhou, China
- Huzhou Central Hospital, Fifth School of Clinical Medicine of Zhejiang Chinese Medical University, Huzhou, China
| | - Wang Zefeng
- Huzhou Central Hospital, Affiliated Central Hospital Huzhou University, No.1558, Sanhuan North Road, Wuxing District, Huzhou, 313000, Zhejiang Province, China
- Key Laboratory of Multiomics Research and Clinical Transformation of Digestive Cancer of Huzhou, Huzhou, China
- Huzhou Central Hospital, Fifth School of Clinical Medicine of Zhejiang Chinese Medical University, Huzhou, China
| | - Liu Jian
- Huzhou Central Hospital, Affiliated Central Hospital Huzhou University, No.1558, Sanhuan North Road, Wuxing District, Huzhou, 313000, Zhejiang Province, China
- Key Laboratory of Multiomics Research and Clinical Transformation of Digestive Cancer of Huzhou, Huzhou, China
- Huzhou Central Hospital, Fifth School of Clinical Medicine of Zhejiang Chinese Medical University, Huzhou, China
| | - Liu Jin
- Huzhou Central Hospital, Affiliated Central Hospital Huzhou University, No.1558, Sanhuan North Road, Wuxing District, Huzhou, 313000, Zhejiang Province, China
- Key Laboratory of Multiomics Research and Clinical Transformation of Digestive Cancer of Huzhou, Huzhou, China
- Huzhou Central Hospital, Fifth School of Clinical Medicine of Zhejiang Chinese Medical University, Huzhou, China
| | - Pan Yuefen
- Huzhou Central Hospital, Affiliated Central Hospital Huzhou University, No.1558, Sanhuan North Road, Wuxing District, Huzhou, 313000, Zhejiang Province, China.
- Key Laboratory of Multiomics Research and Clinical Transformation of Digestive Cancer of Huzhou, Huzhou, China.
- Huzhou Central Hospital, Fifth School of Clinical Medicine of Zhejiang Chinese Medical University, Huzhou, China.
| | - Han Shuwen
- Huzhou Central Hospital, Affiliated Central Hospital Huzhou University, No.1558, Sanhuan North Road, Wuxing District, Huzhou, 313000, Zhejiang Province, China.
- Key Laboratory of Multiomics Research and Clinical Transformation of Digestive Cancer of Huzhou, Huzhou, China.
- Huzhou Central Hospital, Fifth School of Clinical Medicine of Zhejiang Chinese Medical University, Huzhou, China.
- ASIR (Institute - Association of intelligent systems and robotics), Rueil-Malmaison, France.
| |
Collapse
|
18
|
Tsai ASH, Yip M, Song A, Tan GSW, Ting DSW, Campbell JP, Coyner A, Chan RVP. Implementation of Artificial Intelligence in Retinopathy of Prematurity Care: Challenges and Opportunities. Int Ophthalmol Clin 2024; 64:9-14. [PMID: 39480203 DOI: 10.1097/iio.0000000000000532] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2024]
Abstract
The diagnosis of retinopathy of prematurity (ROP) is primarily image-based and therefore well suited to the implementation of artificial intelligence (AI) systems. The increasing incidence of ROP, especially in low- and middle-income countries, has also put tremendous stress on health care systems. Barriers to the implementation of AI include infrastructure, regulatory, legal, cost, sustainability, and scalability issues. This review describes currently available AI and imaging systems, explains why a stable telemedicine infrastructure is crucial to AI implementation, and describes how successful ROP programs have been run in both low- and middle-income countries and high-income countries. More work is needed to validate AI systems in different populations and with the various low-cost imaging devices that have recently been developed. A sustainable and cost-effective ROP screening program is crucial in the prevention of childhood blindness.
Collapse
Affiliation(s)
- Andrew S H Tsai
- Singapore National Eye Centre, Singapore
- Duke-NUS Medical School, Singapore
| | | | - Amy Song
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Illinois Eye and Ear Infirmary, Chicago, IL
| | - Gavin S W Tan
- Singapore National Eye Centre, Singapore
- Duke-NUS Medical School, Singapore
| | - Daniel S W Ting
- Singapore National Eye Centre, Singapore
- Duke-NUS Medical School, Singapore
| | - J Peter Campbell
- Casey Eye Institute, Oregon Health & Science University, Portland, OR
| | - Aaron Coyner
- Casey Eye Institute, Oregon Health & Science University, Portland, OR
| | - Robison Vernon Paul Chan
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Illinois Eye and Ear Infirmary, Chicago, IL
| |
Collapse
|
19
|
Balas M, Micieli JA, Wong JCY. Integrating AI with tele-ophthalmology in Canada: a review. CANADIAN JOURNAL OF OPHTHALMOLOGY 2024:S0008-4182(24)00259-X. [PMID: 39255951 DOI: 10.1016/j.jcjo.2024.08.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/29/2023] [Revised: 05/21/2024] [Accepted: 08/18/2024] [Indexed: 09/12/2024]
Abstract
The field of ophthalmology is rapidly advancing, with technological innovations enhancing the diagnosis and management of eye diseases. Tele-ophthalmology, or the use of telemedicine for ophthalmology, has emerged as a promising solution to improve access to eye care services, particularly for patients in remote or underserved areas. Despite its potential benefits, tele-ophthalmology faces significant challenges, including the need for high volumes of medical images to be analyzed and interpreted by trained clinicians. Artificial intelligence (AI) has emerged as a powerful tool in ophthalmology, capable of assisting clinicians in diagnosing and treating a variety of conditions. Integrating AI models into existing tele-ophthalmology infrastructure has the potential to revolutionize eye care services by reducing costs, improving efficiency, and increasing access to specialized care. By automating the analysis and interpretation of clinical data and medical images, AI models can reduce the burden on human clinicians, allowing them to focus on patient care and disease management. Available literature on the current status of tele-ophthalmology in Canada and successful AI models in ophthalmology was acquired and examined using the Arksey and O'Malley framework. This review covers literature up to 2022 and is split into 3 sections: 1) existing Canadian tele-ophthalmology infrastructure, with its benefits and drawbacks; 2) preeminent AI models in ophthalmology, across a variety of ocular conditions; and 3) bridging the gap between Canadian tele-ophthalmology and AI in a safe and effective manner.
Collapse
Affiliation(s)
- Michael Balas
- Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
| | - Jonathan A Micieli
- Department of Ophthalmology and Vision Sciences, University of Toronto, ON, Canada; Division of Neurology, Department of Medicine, St. Michael's Hospital, University of Toronto, Toronto, ON, Canada; Department of Ophthalmology, St. Michael's Hospital, Toronto, ON, Canada
| | - Jovi C Y Wong
- Department of Ophthalmology and Vision Sciences, University of Toronto, ON, Canada.
| |
Collapse
|
20
|
Kıran Yenice E, Kara C, Erdaş ÇB. Automated detection of type 1 ROP, type 2 ROP and A-ROP based on deep learning. Eye (Lond) 2024; 38:2644-2648. [PMID: 38918566 PMCID: PMC11385231 DOI: 10.1038/s41433-024-03184-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2023] [Revised: 06/10/2024] [Accepted: 06/11/2024] [Indexed: 06/27/2024] Open
Abstract
PURPOSE To provide automatic detection of Type 1 retinopathy of prematurity (ROP), Type 2 ROP, and aggressive ROP (A-ROP) by deep learning (DL)-based analysis of fundus images obtained by clinical examination using convolutional neural networks. MATERIAL AND METHODS A total of 634 fundus images of 317 premature infants born at 23-34 weeks of gestation were evaluated. After image pre-processing, we obtained a rectangular region of interest (ROI). RegNetY002 was used for algorithm training, and stratified 10-fold cross-validation was applied during training to evaluate and standardize our model. The model's performance was reported as accuracy and specificity and described by the receiver operating characteristic (ROC) curve and area under the curve (AUC). RESULTS The model achieved 0.98 accuracy and 0.98 specificity in detecting Type 2 ROP versus Type 1 ROP and A-ROP. In the analysis of the ROI regions, the model achieved 0.90 accuracy and 0.95 specificity in detecting Stage 2 ROP versus Stage 3 ROP, and 0.91 accuracy and 0.92 specificity in detecting A-ROP versus Type 1 ROP. The AUC scores were 0.98 for Type 2 ROP versus Type 1 ROP and A-ROP, 0.85 for Stage 2 ROP versus Stage 3 ROP, and 0.91 for A-ROP versus Type 1 ROP. CONCLUSION Our study demonstrated that ROP classes can be distinguished with high accuracy and specificity by DL-based analysis of fundus images. Integrating DL-based artificial intelligence algorithms into clinical practice may reduce the workload of ophthalmologists in the future and provide support for decision-making in the management of ROP.
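As a rough illustration of the evaluation design described above — a RegNetY-002 classifier assessed with stratified 10-fold cross-validation — the following sketch uses timm and scikit-learn; the labels and the elided training step are placeholders, not the authors' implementation.
```python
# Illustrative sketch, not the authors' code: RegNetY-002 evaluated with
# stratified 10-fold cross-validation over 634 images.
import numpy as np
import timm
from sklearn.model_selection import StratifiedKFold

X = np.arange(634)                                  # indices of the fundus ROI images
y = np.random.default_rng(0).integers(0, 2, 634)    # placeholder binary labels

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(skf.split(X, y)):
    model = timm.create_model("regnety_002", pretrained=False, num_classes=2)
    # ... train on train_idx, then report accuracy/specificity/AUC on val_idx ...
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val")
```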
Collapse
Affiliation(s)
- Eşay Kıran Yenice
- Department of Ophthalmology, University of Health Sciences, Etlik Zübeyde Hanım Maternity and Women's Health Teaching and Research Hospital, Ankara, Turkey.
| | - Caner Kara
- Department of Ophthalmology, Etlik City Hospital, Ankara, Turkey
| | | |
Collapse
|
21
|
Grzybowski A, Jin K, Zhou J, Pan X, Wang M, Ye J, Wong TY. Retina Fundus Photograph-Based Artificial Intelligence Algorithms in Medicine: A Systematic Review. Ophthalmol Ther 2024; 13:2125-2149. [PMID: 38913289 PMCID: PMC11246322 DOI: 10.1007/s40123-024-00981-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2024] [Accepted: 04/15/2024] [Indexed: 06/25/2024] Open
Abstract
We conducted a systematic review of research on artificial intelligence (AI) for retinal fundus photographic images. We highlight the use of various AI algorithms, including deep learning (DL) models, for application in ophthalmic and non-ophthalmic (i.e., systemic) disorders. We found that the use of AI algorithms for the interpretation of retinal images, benchmarked against clinical data and physician experts, represents an innovative solution with demonstrated superior accuracy in identifying many ophthalmic (e.g., diabetic retinopathy (DR), age-related macular degeneration (AMD), optic nerve disorders) and non-ophthalmic disorders (e.g., dementia, cardiovascular disease). A substantial amount of clinical and imaging data is now available for this research, opening the way to incorporating AI and DL for automated analysis. AI has the potential to transform healthcare by improving accuracy, speed, and workflow, lowering cost, increasing access, reducing mistakes, and transforming healthcare worker education and training.
Collapse
Affiliation(s)
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznań, Poland.
| | - Kai Jin
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
| | - Jingxin Zhou
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
| | - Xiangji Pan
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
| | - Meizhu Wang
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
| | - Juan Ye
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China.
| | - Tien Y Wong
- School of Clinical Medicine, Tsinghua Medicine, Tsinghua University, Beijing, China
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
| |
Collapse
|
22
|
Zhu J, Yan Y, Jiang W, Zhang S, Niu X, Wan S, Cong Y, Hu X, Zheng B, Yang Y. A Deep Learning Model for Automatically Quantifying the Anterior Segment in Ultrasound Biomicroscopy Images of Implantable Collamer Lens Candidates. ULTRASOUND IN MEDICINE & BIOLOGY 2024; 50:1262-1272. [PMID: 38777640 DOI: 10.1016/j.ultrasmedbio.2024.05.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/23/2024] [Revised: 04/24/2024] [Accepted: 05/03/2024] [Indexed: 05/25/2024]
Abstract
OBJECTIVE This study aimed to develop and evaluate a deep learning-based model that could automatically measure anterior segment (AS) parameters on preoperative ultrasound biomicroscopy (UBM) images of implantable Collamer lens (ICL) surgery candidates. METHODS A total of 1164 panoramic UBM images were preoperatively obtained from 321 patients who received ICL surgery in the Eye Center of Renmin Hospital of Wuhan University (Wuhan, China) to develop an imaging database. First, the UNet++ network was utilized to automatically segment AS tissues, such as the cornea, lens, and iris. In addition, image processing techniques and geometric localization algorithms were developed to automatically identify the anatomical landmarks (ALs) of pupil diameter (PD), anterior chamber depth (ACD), angle-to-angle distance (ATA), and sulcus-to-sulcus distance (STS). Based on the results of the latter two processes, PD, ACD, ATA, and STS can be measured. Meanwhile, an external dataset of 294 images from Huangshi Aier Eye Hospital was employed to further assess the model's performance in another center. Lastly, a subset of 100 random images from the external test set was chosen to compare the performance of the model with that of senior experts. RESULTS In both the internal and external test datasets, using manual labeling as the reference standard, the model achieved a mean Dice coefficient exceeding 0.880. Additionally, the intra-class correlation coefficients (ICCs) of the ALs' coordinates were all greater than 0.947, and the percentage of Euclidean distances of the ALs within 250 μm was over 95.24%. The ICCs for PD, ACD, ATA, and STS were all greater than 0.957, and the average relative errors (AREs) of PD, ACD, ATA, and STS were below 2.41%. In terms of human versus machine performance, the ICCs between the measurements performed by the model and those by senior experts were all greater than 0.931. CONCLUSION A deep learning-based model could measure AS parameters using UBM images of ICL candidates and exhibited performance similar to that of a senior ophthalmologist.
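The segmentation and landmark metrics reported above can be illustrated with a short sketch (an assumption-laden demonstration, not the authors' pipeline): the Dice coefficient for binary masks, and a Euclidean landmark error converted to microns under a hypothetical pixel spacing.
```python
# Dice = 2|A∩B| / (|A|+|B|) for binary masks, plus landmark distance in μm.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    """Dice overlap between two boolean segmentation masks."""
    inter = np.logical_and(pred, ref).sum()
    return (2.0 * inter + eps) / (pred.sum() + ref.sum() + eps)

pred = np.zeros((64, 64), bool); pred[10:40, 10:40] = True
ref  = np.zeros((64, 64), bool); ref[12:42, 12:42] = True
print(f"Dice: {dice(pred, ref):.3f}")

# Landmark agreement: Euclidean distance, scaled by an assumed pixel spacing.
px_um = 25.0                                  # hypothetical microns per pixel
p, q = np.array([120, 88]), np.array([123, 90])
print(f"landmark error: {np.linalg.norm(p - q) * px_um:.1f} μm")
```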
Collapse
Affiliation(s)
- Jian Zhu
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
| | - Yulin Yan
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
| | - Weiyan Jiang
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
| | - Shaowei Zhang
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
| | - Xiaoguang Niu
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
| | - Shanshan Wan
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
| | - Yuyu Cong
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
| | - Xiao Hu
- Wuhan EndoAngel Medical Technology Company, Wuhan, China
| | - Biqin Zheng
- Wuhan EndoAngel Medical Technology Company, Wuhan, China
| | - Yanning Yang
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China.
| |
Collapse
|
23
|
Benetz BAM, Shivade VS, Joseph NM, Romig NJ, McCormick JC, Chen J, Titus MS, Sawant OB, Clover JM, Yoganathan N, Menegay HJ, O'Brien RC, Wilson DL, Lass JH. Automatic Determination of Endothelial Cell Density From Donor Cornea Endothelial Cell Images. Transl Vis Sci Technol 2024; 13:40. [PMID: 39177992 PMCID: PMC11346145 DOI: 10.1167/tvst.13.8.40] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2024] [Accepted: 06/21/2024] [Indexed: 08/24/2024] Open
Abstract
Purpose To determine endothelial cell density (ECD) from real-world donor cornea endothelial cell (EC) images using a self-supervised deep learning segmentation model. Methods Two eye banks (Eversight, VisionGift) provided 15,138 single, unique EC images from 8169 donors along with their demographics, tissue characteristics, and ECD. This dataset was utilized for self-supervised training and deep learning inference. The Cornea Image Analysis Reading Center (CIARC) provided a second dataset of 174 donor EC images selected on the basis of image and tissue quality. These images were used to train a supervised deep learning cell border segmentation model. Evaluation between manual and automated determination of ECD was restricted to the 1939 test EC images with at least 100 cells counted by both methods. Results The ECD measurements from both methods were in excellent agreement, with an rc of 0.77 (95% confidence interval [CI], 0.75-0.79; P < 0.001) and a bias of 123 cells/mm² (95% CI, 114-131; P < 0.001); 81% of the automated ECD values were within 10% of the manual ECD values. When the analysis was further restricted to the cropped image, the rc was 0.88 (95% CI, 0.87-0.89; P < 0.001), the bias was 46 cells/mm² (95% CI, 39-53; P < 0.001), and 93% of the automated ECD values were within 10% of the manual ECD values. Conclusions Deep learning analysis provides accurate ECDs from donor images, potentially reducing analysis time and training requirements. Translational Relevance The approach of this study, a robust methodology for automatically evaluating donor cornea EC images, could expand the quantitative determination of endothelial health beyond ECD.
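A hedged sketch of the agreement statistics used above, on synthetic placeholder data: the concordance coefficient rc (computed here as Lin's concordance correlation coefficient, an assumption about the study's exact statistic), the mean bias, and the fraction of automated ECD values within 10% of manual values.
```python
# Agreement statistics between manual and automated ECD (synthetic data).
import numpy as np

def lins_ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

rng = np.random.default_rng(0)
manual = rng.normal(2700, 300, 500)           # manual ECD, cells/mm^2
auto = manual + rng.normal(120, 150, 500)     # automated ECD with some bias

print(f"rc   = {lins_ccc(manual, auto):.2f}")
print(f"bias = {(auto - manual).mean():.0f} cells/mm^2")
print(f"within 10%: {(np.abs(auto - manual) / manual < 0.10).mean():.0%}")
```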
Collapse
Affiliation(s)
- Beth Ann M. Benetz
- Department of Ophthalmology and Visual Sciences, Case Western Reserve University, Cleveland, OH, USA
- Cornea Image Analysis Reading Center, University Hospitals Eye Institute, Cleveland, OH, USA
| | - Ved S. Shivade
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
| | - Naomi M. Joseph
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
| | - Nathan J. Romig
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
| | - John C. McCormick
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
| | - Jiawei Chen
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
| | | | - Onkar B. Sawant
- Eversight, Ann Arbor, MI, USA
- Center for Vision and Eye Banking Research, Eversight, Cleveland, OH, USA
| | | | | | - Harry J. Menegay
- Department of Ophthalmology and Visual Sciences, Case Western Reserve University, Cleveland, OH, USA
- Cornea Image Analysis Reading Center, University Hospitals Eye Institute, Cleveland, OH, USA
| | | | - David L. Wilson
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
| | - Jonathan H. Lass
- Department of Ophthalmology and Visual Sciences, Case Western Reserve University, Cleveland, OH, USA
- Cornea Image Analysis Reading Center, University Hospitals Eye Institute, Cleveland, OH, USA
| |
Collapse
|
24
|
Mathieu A, Ajana S, Korobelnik JF, Le Goff M, Gontier B, Rougier MB, Delcourt C, Delyfer MN. DeepAlienorNet: A deep learning model to extract clinical features from colour fundus photography in age-related macular degeneration. Acta Ophthalmol 2024; 102:e823-e830. [PMID: 38345159 DOI: 10.1111/aos.16660] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2023] [Revised: 01/11/2024] [Accepted: 01/25/2024] [Indexed: 07/09/2024]
Abstract
OBJECTIVE This study aimed to develop a deep learning (DL) model, named 'DeepAlienorNet', to automatically extract clinical signs of age-related macular degeneration (AMD) from colour fundus photography (CFP). METHODS AND ANALYSIS The ALIENOR Study is a cohort of French individuals 77 years of age or older. A multi-label DL model was developed to grade the presence of 7 clinical signs: large soft drusen (>125 μm), intermediate soft drusen (63-125 μm), large area of soft drusen (total area >500 μm), presence of central soft drusen (large or intermediate), hyperpigmentation, hypopigmentation, and advanced AMD (defined as neovascular or atrophic AMD). Prediction performances were evaluated using cross-validation, with the expert human interpretation of the clinical signs as the ground truth. RESULTS A total of 1178 images were included in the study. Averaging the detection performances across the 7 clinical signs, DeepAlienorNet achieved an overall sensitivity, specificity, and AUROC of 0.77, 0.83, and 0.87, respectively. The model demonstrated particularly strong performance in predicting advanced AMD and large areas of soft drusen. It can also generate heatmaps highlighting the image areas relevant to its interpretation. CONCLUSION DeepAlienorNet demonstrates promising performance in automatically identifying clinical signs of AMD from CFP, offering several notable advantages. Its high interpretability reduces the black-box effect, addressing ethical concerns. Additionally, the model can easily be integrated to automate well-established and validated AMD progression scores, and the user-friendly interface further enhances its usability. The main value of DeepAlienorNet lies in its ability to assist in precise severity scoring for further adapted AMD management, all while preserving interpretability.
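To illustrate how such a multi-label grader is typically scored — per-sign sensitivity, specificity, and AUROC averaged across the 7 signs — here is a sketch on random placeholder predictions (not DeepAlienorNet's outputs; the sign names are shorthand).
```python
# Per-label sensitivity/specificity/AUROC for a multi-label grader (toy data).
import numpy as np
from sklearn.metrics import roc_auc_score

signs = ["large_drusen", "intermediate_drusen", "large_drusen_area",
         "central_drusen", "hyperpigmentation", "hypopigmentation", "advanced_AMD"]
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, (1178, len(signs)))            # hypothetical gradings
y_prob = np.clip(0.4 * y_true + rng.random(y_true.shape), 0, 1)

stats = []
for j in range(len(signs)):
    pred = y_prob[:, j] >= 0.5
    pos, neg = y_true[:, j] == 1, y_true[:, j] == 0
    sens = (pred & pos).sum() / pos.sum()
    spec = (~pred & neg).sum() / neg.sum()
    stats.append((sens, spec, roc_auc_score(y_true[:, j], y_prob[:, j])))

se, sp, auc = np.mean(stats, axis=0)                       # averages over signs
print(f"mean sensitivity={se:.2f} specificity={sp:.2f} AUROC={auc:.2f}")
```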
Collapse
Affiliation(s)
- Alexis Mathieu
- Inserm, Bordeaux Population Health Research Center, UMR 1219, University of Bordeaux, Bordeaux, France
- Service d'Ophtalmologie, Centre Hospitalier Universitaire de Bordeaux, Bordeaux, France
| | - Soufiane Ajana
- Inserm, Bordeaux Population Health Research Center, UMR 1219, University of Bordeaux, Bordeaux, France
| | - Jean-François Korobelnik
- Inserm, Bordeaux Population Health Research Center, UMR 1219, University of Bordeaux, Bordeaux, France
- Service d'Ophtalmologie, Centre Hospitalier Universitaire de Bordeaux, Bordeaux, France
| | - Mélanie Le Goff
- Inserm, Bordeaux Population Health Research Center, UMR 1219, University of Bordeaux, Bordeaux, France
| | - Brigitte Gontier
- Service d'Ophtalmologie, Centre Hospitalier Universitaire de Bordeaux, Bordeaux, France
| | | | - Cécile Delcourt
- Inserm, Bordeaux Population Health Research Center, UMR 1219, University of Bordeaux, Bordeaux, France
- Service d'Ophtalmologie, Centre Hospitalier Universitaire de Bordeaux, Bordeaux, France
| | - Marie-Noëlle Delyfer
- Inserm, Bordeaux Population Health Research Center, UMR 1219, University of Bordeaux, Bordeaux, France
- Service d'Ophtalmologie, Centre Hospitalier Universitaire de Bordeaux, Bordeaux, France
- FRCRnet/FCRIN Network, Bordeaux, France
| |
Collapse
|
25
|
Chatzimichail E, Feltgen N, Motta L, Empeslidis T, Konstas AG, Gatzioufas Z, Panos GD. Transforming the future of ophthalmology: artificial intelligence and robotics' breakthrough role in surgical and medical retina advances: a mini review. Front Med (Lausanne) 2024; 11:1434241. [PMID: 39076760 PMCID: PMC11284058 DOI: 10.3389/fmed.2024.1434241] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2024] [Accepted: 06/26/2024] [Indexed: 07/31/2024] Open
Abstract
Over the past decade, artificial intelligence (AI) and its subfields, deep learning and machine learning, have become integral parts of ophthalmology, particularly in the field of ophthalmic imaging. A diverse array of algorithms has emerged to facilitate the automated diagnosis of numerous medical and surgical retinal conditions. The development of these algorithms necessitates extensive training using large datasets of retinal images. This approach has demonstrated a promising impact, especially in increasing accuracy of diagnosis for unspecialized clinicians for various diseases and in the area of telemedicine, where access to ophthalmological care is restricted. In parallel, robotic technology has made significant inroads into the medical field, including ophthalmology. The vast majority of research in the field of robotic surgery has been focused on anterior segment and vitreoretinal surgery. These systems offer potential improvements in accuracy and address issues such as hand tremors. However, widespread adoption faces hurdles, including the substantial costs associated with these systems and the steep learning curve for surgeons. These challenges currently constrain the broader implementation of robotic surgical systems in ophthalmology. This mini review discusses the current research and challenges, underscoring the limited yet growing implementation of AI and robotic systems in the field of retinal conditions.
Collapse
Affiliation(s)
| | - Nicolas Feltgen
- Department of Ophthalmology, University Hospital of Basel, Basel, Switzerland
| | - Lorenzo Motta
- Department of Ophthalmology, School of Medicine, University of Padova, Padua, Italy
| | | | - Anastasios G. Konstas
- Department of Ophthalmology, School of Medicine, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | - Zisis Gatzioufas
- Department of Ophthalmology, University Hospital of Basel, Basel, Switzerland
| | - Georgios D. Panos
- Department of Ophthalmology, School of Medicine, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Department of Ophthalmology, Queen’s Medical Centre, Nottingham University Hospitals, Nottingham, United Kingdom
- Division of Ophthalmology and Visual Sciences, School of Medicine, University of Nottingham, Nottingham, United Kingdom
| |
Collapse
|
26
|
Peng J, Xie X, Lu Z, Xu Y, Xie M, Luo L, Xiao H, Ye H, Chen L, Yang J, Zhang M, Zhao P, Zheng C. Generative adversarial networks synthetic optical coherence tomography images as an education tool for image diagnosis of macular diseases: a randomized trial. Front Med (Lausanne) 2024; 11:1424749. [PMID: 39050535 PMCID: PMC11266019 DOI: 10.3389/fmed.2024.1424749] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2024] [Accepted: 06/19/2024] [Indexed: 07/27/2024] Open
Abstract
Purpose This study aimed to evaluate the effectiveness of generative adversarial networks (GANs) in creating synthetic OCT images as an educational tool for teaching image diagnosis of macular diseases to medical students and ophthalmic residents. Methods In this randomized trial, 20 fifth-year medical students and 20 ophthalmic residents were enrolled and randomly assigned (1:1 allocation) to Group real OCT or Group GANs OCT. All participants took a pretest to assess their educational background, followed by a 30-min smartphone-based education program using GANs or real OCT images for macular disease recognition training. Two additional tests were scheduled: one 5 min after the training to assess short-term performance, and another 1 week later to assess long-term performance. Scores and time consumption were recorded and compared. After all the tests, participants completed an anonymous subjective questionnaire. Results Group GANs OCT scores increased from 80.0 (46.0 to 85.5) to 92.0 (81.0 to 95.5) 5 min after training (p < 0.001) and to 92.30 ± 5.36 one week after training (p < 0.001). Similarly, Group real OCT scores increased from 66.00 ± 19.52 to 92.90 ± 5.71 (p < 0.001). When compared between the two groups, no statistically significant difference was found in test scores, score improvements, or time consumption. After training, medical students had a significantly higher score improvement than residents (p < 0.001). Conclusion The education tool using synthetic OCT images had educational ability similar to that using real OCT images, improving the interpretation ability of ophthalmic residents and medical students in both short-term and long-term performance. The smartphone-based educational tool could be widely promoted for educational applications. Clinical trial registration: https://www.chictr.org.cn, Chinese Clinical Trial Registry [No. ChiCTR 2100053195].
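For readers unfamiliar with GANs, the following is a minimal DCGAN-style generator/discriminator pair in PyTorch, purely illustrative of the technique named above; the study's actual architecture, image sizes, and training details are not specified here, so everything below is an assumption.
```python
# Toy GAN: a generator maps noise to 64x64 grayscale "OCT-like" images, and a
# discriminator maps images to a real/fake logit. Illustrative only.
import torch
import torch.nn as nn

G = nn.Sequential(                       # 64-d noise -> 1x64x64 image
    nn.ConvTranspose2d(64, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
    nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.BatchNorm2d(16), nn.ReLU(True),
    nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Tanh(),
)
D = nn.Sequential(                       # image -> real/fake logit
    nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2, True),
    nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
    nn.Conv2d(64, 1, 16), nn.Flatten(),
)
z = torch.randn(8, 64, 1, 1)
fake = G(z)                              # batch of synthetic images
print(fake.shape, D(fake).shape)         # (8, 1, 64, 64) and (8, 1)
```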
Collapse
Affiliation(s)
- Jie Peng
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Xiaoling Xie
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, China
| | - Zupeng Lu
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Department of Ophthalmology, Shanghai Children’s Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Yu Xu
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Meng Xie
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Li Luo
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, China
| | - Haodong Xiao
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Hongfei Ye
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Li Chen
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Jianlong Yang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Mingzhi Zhang
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, China
| | - Peiquan Zhao
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Ce Zheng
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Institute of Hospital Development Strategy, China Hospital Development Institute Shanghai Jiao Tong University, Shanghai, China
| |
Collapse
|
27
|
Chu Y, Hu S, Li Z, Yang X, Liu H, Yi X, Qi X. Image Analysis-Based Machine Learning for the Diagnosis of Retinopathy of Prematurity: A Meta-analysis and Systematic Review. Ophthalmol Retina 2024; 8:678-687. [PMID: 38237772 DOI: 10.1016/j.oret.2024.01.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2023] [Revised: 01/02/2024] [Accepted: 01/09/2024] [Indexed: 02/17/2024]
Abstract
TOPIC To evaluate the performance of machine learning (ML) in the diagnosis of retinopathy of prematurity (ROP) and to assess whether it can be an effective automated diagnostic tool for clinical applications. CLINICAL RELEVANCE Early detection of ROP is crucial for preventing tractional retinal detachment and blindness in preterm infants, which has significant clinical relevance. METHODS Web of Science, PubMed, Embase, IEEE Xplore, and Cochrane Library were searched for published studies on image-based ML for diagnosis of ROP or classification of clinical subtypes from inception to October 1, 2022. The quality assessment tool for artificial intelligence-centered diagnostic test accuracy studies was used to determine the risk of bias (RoB) of the included original studies. A bivariate mixed effects model was used for quantitative analysis of the data, and Deeks' test was used to assess publication bias. Quality of evidence was assessed using the Grading of Recommendations Assessment, Development and Evaluation approach. RESULTS Twenty-two studies were included in the systematic review; 4 studies had high or unclear RoB. In the index test domain, only 2 studies had high or unclear RoB because they did not establish predefined thresholds. In the reference standard domain, 3 studies had high or unclear RoB. Regarding applicability, only 1 study was considered to have high or unclear applicability concerns in terms of patient selection. The sensitivity and specificity of image-based ML for the diagnosis of ROP were 93% (95% confidence interval [CI]: 0.90-0.94) and 95% (95% CI: 0.94-0.97), respectively. The area under the receiver operating characteristic curve (AUC) was 0.98 (95% CI: 0.97-0.99). For the classification of clinical subtypes of ROP, the sensitivity and specificity were 93% (95% CI: 0.89-0.96) and 93% (95% CI: 0.89-0.95), respectively, and the AUC was 0.97 (95% CI: 0.96-0.98). The classification results were highly similar to those of clinical experts (Spearman's R = 0.879). CONCLUSIONS Machine learning algorithms are no less accurate than human experts and hold considerable potential as automated diagnostic tools for ROP. However, given the quality and high heterogeneity of the available evidence, these algorithms should be considered supplementary tools to assist clinicians in diagnosing ROP. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
Collapse
Affiliation(s)
- Yihang Chu
- Central South University of Forestry and Technology, Changsha, Hunan, China; State Key Laboratory of Pathogenesis, Prevention and Treatment of High Incidence Diseases in Central Asia, Clinical Medical Research Institute, The First Affiliated Hospital of Xinjiang Medical University, Urumqi, Xinjiang, China
| | - Shipeng Hu
- Central South University of Forestry and Technology, Changsha, Hunan, China
| | - Zilan Li
- Department of Biochemistry, McGill University, Montreal, Quebec, Canada
| | - Xiao Yang
- State Key Laboratory of Pathogenesis, Prevention and Treatment of High Incidence Diseases in Central Asia, Clinical Medical Research Institute, The First Affiliated Hospital of Xinjiang Medical University, Urumqi, Xinjiang, China
| | - Hui Liu
- Central South University of Forestry and Technology, Changsha, Hunan, China.
| | - Xianglong Yi
- Department of Ophthalmology, The First Affiliated Hospital of Xinjiang Medical University, Urumchi, China.
| | - Xinwei Qi
- State Key Laboratory of Pathogenesis, Prevention and Treatment of High Incidence Diseases in Central Asia, Clinical Medical Research Institute, The First Affiliated Hospital of Xinjiang Medical University, Urumqi, Xinjiang, China.
| |
Collapse
|
28
|
Hashemian H, Peto T, Ambrósio Jr R, Lengyel I, Kafieh R, Muhammed Noori A, Khorrami-Nejad M. Application of Artificial Intelligence in Ophthalmology: An Updated Comprehensive Review. J Ophthalmic Vis Res 2024; 19:354-367. [PMID: 39359529 PMCID: PMC11444002 DOI: 10.18502/jovr.v19i3.15893] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2024] [Accepted: 07/06/2024] [Indexed: 10/04/2024] Open
Abstract
Artificial intelligence (AI) holds immense promise for transforming ophthalmic care through automated screening, precision diagnostics, and optimized treatment planning. This paper reviews recent advances and challenges in applying AI techniques such as machine learning and deep learning to major eye diseases. In diabetic retinopathy, AI algorithms analyze retinal images to accurately identify lesions, assisting clinicians in ophthalmology practice. Systems like IDx-DR (IDx Technologies Inc, USA) are FDA-approved for autonomous detection of referable diabetic retinopathy. For glaucoma, deep learning models assess optic nerve head morphology in fundus photographs to detect damage. In age-related macular degeneration, AI can quantify drusen and diagnose disease severity from both color fundus and optical coherence tomography images. AI has also been used in screening for retinopathy of prematurity, keratoconus, and dry eye disease. Beyond screening, AI can aid treatment decisions by forecasting disease progression and anti-VEGF response. However, potential limitations, such as the quality and diversity of training data, lack of rigorous clinical validation, and challenges in regulatory approval and clinician trust, must be addressed for the widespread adoption of AI. Two other significant hurdles are the integration of AI into existing clinical workflows and ensuring transparency in AI decision-making processes. With continued research to address these limitations, AI promises to enable earlier diagnosis, optimized resource allocation, personalized treatment, and improved patient outcomes. Moreover, synergistic human-AI systems could set a new standard for evidence-based, precise ophthalmic care.
Collapse
Affiliation(s)
- Hesam Hashemian
- Translational Ophthalmology Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | - Tunde Peto
- School of Medicine, Dentistry and Biomedical Sciences, Centre for Public Health, Queen’s University Belfast, Northern Ireland, UK
| | - Renato Ambrósio Jr
- Department of Ophthalmology, Federal University the State of Rio de Janeiro (UNIRIO), Brazil
- Department of Ophthalmology, Federal University of São Paulo, São Paulo, Brazil
- Brazilian Study Group of Artificial Intelligence and Corneal Analysis – BrAIN, Rio de Janeiro & Maceió, Brazil
- Rio Vision Hospital, Rio de Janeiro, Brazil
- Instituto de Olhos Renato Ambrósio, Rio de Janeiro, Brazil
| | - Imre Lengyel
- School of Medicine, Dentistry and Biomedical Sciences, Queen’s University Belfast, Northern Ireland
| | - Rahele Kafieh
- Department of Engineering, Durham University, United Kingdom
| | | | - Masoud Khorrami-Nejad
- School of Rehabilitation, Tehran University of Medical Sciences, Tehran, Iran
- Department of Optical Techniques, Al-Mustaqbal University College, Hillah, Babylon 51001, Iraq
| |
Collapse
|
29
|
Kang D, Wu H, Yuan L, Shi Y, Jin K, Grzybowski A. A Beginner's Guide to Artificial Intelligence for Ophthalmologists. Ophthalmol Ther 2024; 13:1841-1855. [PMID: 38734807 PMCID: PMC11178755 DOI: 10.1007/s40123-024-00958-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2024] [Accepted: 04/22/2024] [Indexed: 05/13/2024] Open
Abstract
The integration of artificial intelligence (AI) in ophthalmology has promoted the development of the discipline, offering opportunities for enhancing diagnostic accuracy, patient care, and treatment outcomes. This paper aims to provide a foundational understanding of AI applications in ophthalmology, with a focus on interpreting studies related to AI-driven diagnostics. The core of our discussion is to explore various AI methods, including deep learning (DL) frameworks for detecting and quantifying ophthalmic features in imaging data, as well as using transfer learning for effective model training in limited datasets. The paper highlights the importance of high-quality, diverse datasets for training AI models and the need for transparent reporting of methodologies to ensure reproducibility and reliability in AI studies. Furthermore, we address the clinical implications of AI diagnostics, emphasizing the balance between minimizing false negatives to avoid missed diagnoses and reducing false positives to prevent unnecessary interventions. The paper also discusses the ethical considerations and potential biases in AI models, underscoring the importance of continuous monitoring and improvement of AI systems in clinical settings. In conclusion, this paper serves as a primer for ophthalmologists seeking to understand the basics of AI in their field, guiding them through the critical aspects of interpreting AI studies and the practical considerations for integrating AI into clinical practice.
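One of the basics the guide covers, transfer learning for training on limited datasets, can be sketched in a few lines: freeze a pretrained backbone and train only a new task head. The model choice and layer names below are assumptions for illustration, not from the paper.
```python
# Transfer-learning recipe: frozen backbone, trainable classification head.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)          # in practice, load pretrained weights
for p in model.parameters():                   # freeze the backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # new head trains from scratch

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)                               # ['fc.weight', 'fc.bias']
```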
Collapse
Affiliation(s)
- Daohuan Kang
- Department of Ophthalmology, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China
| | - Hongkang Wu
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
| | - Lu Yuan
- Department of Ophthalmology, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China
| | - Yu Shi
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Zhejiang University School of Medicine, Hangzhou, China
| | - Kai Jin
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China.
| | - Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland.
| |
Collapse
|
30
|
Maitra P, Shah PK, Campbell PJ, Rishi P. The scope of artificial intelligence in retinopathy of prematurity (ROP) management. Indian J Ophthalmol 2024; 72:931-934. [PMID: 38454859 PMCID: PMC11329810 DOI: 10.4103/ijo.ijo_2544_23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2023] [Revised: 01/02/2024] [Accepted: 01/04/2024] [Indexed: 03/09/2024] Open
Abstract
Artificial Intelligence (AI) is a revolutionary technology that has the potential to develop into a widely implemented system that could reduce the dependence on qualified professionals/experts for screening the large at-risk population, especially in the Indian scenario. Deep learning involves learning without being explicitly told what to focus on and utilizes several layers of artificial neural networks (ANNs) to create a robust algorithm capable of high-complexity tasks. Convolutional neural networks (CNNs) are a subset of ANNs that are particularly useful for image processing as well as cognitive tasks. Training these algorithms involves inputting raw human-labeled data, which are then processed through the algorithm's multiple layers, allowing the CNN to develop its own learning of image features. AI systems must be validated using different population datasets, since the performance of an AI system varies according to the population. Indian datasets have been used in an AI-based risk model that could predict whether an infant would develop treatment-requiring retinopathy of prematurity (ROP). AI has also served as an epidemiological tool by objectively showing that ROP severity was higher in neonatal intensive care units (NICUs) that did not have the resources to monitor and titrate oxygen. There are rising concerns about the medicolegal aspects of AI implementation, as well as discussion of the possibility of catastrophic life-threatening diseases like retinoblastoma and lipemia retinalis being missed by AI. Computer-based systems have the advantage over humans of not being susceptible to bias or fatigue. This is especially relevant in a country like India, with its increased rate of ROP and a preexisting strained doctor-to-preterm-child ratio. Many AI algorithms can perform in a way comparable to or exceeding human experts, and this opens possibilities for future large-scale prospective studies.
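To make the CNN description concrete, here is a toy convolutional network in PyTorch: a generic illustration of stacked convolutional feature layers feeding a classifier head, not any specific published ROP model.
```python
# Minimal CNN: conv layers learn image features, a linear head classifies.
import torch
import torch.nn as nn

class TinyROPNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(     # stacked conv layers extract features
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(         # classifier on pooled features
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

logits = TinyROPNet()(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 2])
```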
Collapse
Affiliation(s)
- Puja Maitra
- Department of Vitreoretina Services, Aravind Eye Hospital, Chennai, Tamil Nadu, India
| | - Parag K Shah
- Department of Pediatric Retina and Ocular Oncology, Aravind Eye Hospital, Coimbatore, Tamil Nadu, India
| | - Peter J Campbell
- Department of Ophthalmology, Oregon Health and Science University, Portland, Oregon, United States
| | - Pukhraj Rishi
- Ocular Oncology and Vitreoretinal Surgery, Truhlsen Eye Institute, University of Nebraska Medical Centre, Omaha, NE, USA
| |
Collapse
|
31
|
Lee SB. Development of a chest X-ray machine learning convolutional neural network model on a budget and using artificial intelligence explainability techniques to analyze patterns of machine learning inference. JAMIA Open 2024; 7:ooae035. [PMID: 38699648 PMCID: PMC11064095 DOI: 10.1093/jamiaopen/ooae035] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2024] [Revised: 04/03/2024] [Accepted: 04/10/2024] [Indexed: 05/05/2024] Open
Abstract
Objective Machine learning (ML) will have a large impact on medicine, and accessibility is important. The model built in this study was used to explore various concepts, including how varying features of a model impact its behavior. Materials and Methods This study built an ML model that classified chest X-rays as normal or abnormal, using ResNet50 as a base with transfer learning. A contrast enhancement mechanism was implemented to improve performance. After training with a dataset of publicly available chest radiographs, performance metrics were determined with a test set. The ResNet50 base was then substituted with deeper architectures (ResNet101/152), and visualization methods were used to help determine patterns of inference. Results Performance metrics were an accuracy of 79%, recall of 69%, precision of 96%, and area under the curve of 0.9023. Accuracy improved to 82% and recall to 74% with contrast enhancement. When visualization methods were applied and the ratio of pixels used for inference measured, deeper architectures resulted in the model using larger portions of the image for inference compared with ResNet50. Discussion The model performed on par with many existing models despite consumer-grade hardware and smaller datasets. Individual models vary; thus, a single model's explainability may not be generalizable. Therefore, this study varied the architecture and studied the resulting patterns of inference. With deeper ResNet architectures, the machine used larger portions of the image to make decisions. Conclusion This example using a custom model showed that artificial intelligence (AI) can be accessible on consumer-grade hardware, and it also demonstrated an approach to studying themes of ML explainability by varying ResNet architectures.
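A hedged sketch of the pipeline this abstract describes: a ResNet50 base adapted for binary chest X-ray classification, a contrast-enhancement step (CLAHE is used here as an assumption; the abstract says only "contrast enhancement"), and an architecture switch mirroring the ResNet101/152 comparison.
```python
# Sketch, not the study's code: CLAHE preprocessing plus a ResNet classifier
# whose backbone can be swapped between ResNet50/101/152.
import cv2
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

def enhance(gray: np.ndarray) -> np.ndarray:
    """Contrast-limited adaptive histogram equalization on an 8-bit image."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray)

def build_model(arch: str = "resnet50") -> nn.Module:
    ctor = {"resnet50": models.resnet50,
            "resnet101": models.resnet101,
            "resnet152": models.resnet152}[arch]
    model = ctor(weights=None)                     # transfer learning would load pretrained weights
    model.fc = nn.Linear(model.fc.in_features, 2)  # normal vs abnormal
    return model

img = enhance(np.random.randint(0, 256, (224, 224), dtype=np.uint8))
x = torch.from_numpy(img).float().div(255).expand(1, 3, -1, -1)
model = build_model("resnet50").eval()             # try "resnet101"/"resnet152" to compare
with torch.no_grad():
    print(model(x).shape)                          # torch.Size([1, 2])
```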
Collapse
Affiliation(s)
- Stephen B Lee
- Division of Infectious Diseases, Department of Medicine, College of Medicine, University of Saskatchewan, Regina, S4P 0W5, Canada
| |
Collapse
|
32
|
Poh SSJ, Sia JT, Yip MYT, Tsai ASH, Lee SY, Tan GSW, Weng CY, Kadonosono K, Kim M, Yonekawa Y, Ho AC, Toth CA, Ting DSW. Artificial Intelligence, Digital Imaging, and Robotics Technologies for Surgical Vitreoretinal Diseases. Ophthalmol Retina 2024; 8:633-645. [PMID: 38280425 DOI: 10.1016/j.oret.2024.01.018] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2023] [Revised: 01/14/2024] [Accepted: 01/19/2024] [Indexed: 01/29/2024]
Abstract
OBJECTIVE To review recent technological advancements in imaging, surgical visualization, robotics technology, and the use of artificial intelligence in surgical vitreoretinal (VR) diseases. BACKGROUND Technological advancements in imaging enhance both preoperative and intraoperative management of surgical VR diseases. Widefield imaging in fundus photography and OCT can improve the assessment of peripheral retinal disorders such as retinal detachments, degeneration, and tumors. OCT angiography provides rapid and noninvasive imaging of the retinal and choroidal vasculature. Surgical visualization has also improved, with intraoperative OCT providing a detailed real-time assessment of retinal layers to guide surgical decisions. Heads-up displays and head-mounted displays utilize 3-dimensional technology to provide surgeons with enhanced visual guidance and improved ergonomics during surgery. Intraocular robotics technology allows for greater surgical precision and has been shown to be useful in retinal vein cannulation and subretinal drug delivery. In addition, deep learning techniques leverage diverse data, including widefield retinal photography and OCT, for better predictive accuracy in classification, segmentation, and prognostication of many surgical VR diseases. CONCLUSION This review article summarizes the latest updates in these areas and highlights the importance of continuous innovation and improvement in technology within the field. These advancements have the potential to reshape the management of surgical VR diseases in the very near future and ultimately improve patient care. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
Collapse
Affiliation(s)
- Stanley S J Poh
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
| | - Josh T Sia
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
| | - Michelle Y T Yip
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
| | - Andrew S H Tsai
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
| | - Shu Yen Lee
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
| | - Gavin S W Tan
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
| | - Christina Y Weng
- Department of Ophthalmology, Baylor College of Medicine, Houston, Texas
| | | | - Min Kim
- Department of Ophthalmology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
| | - Yoshihiro Yonekawa
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
| | - Allen C Ho
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
| | - Cynthia A Toth
- Departments of Ophthalmology and Biomedical Engineering, Duke University, Durham, North Carolina
| | - Daniel S W Ting
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore; Byers Eye Institute, Stanford University, Palo Alto, California.
| |
33
Ahn J, Choi M. Advancements and turning point of artificial intelligence in ophthalmology: A comprehensive analysis of research trends and collaborative networks. Ophthalmic Physiol Opt 2024; 44:1031-1040. [PMID: 38581209 DOI: 10.1111/opo.13315] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2023] [Revised: 03/25/2024] [Accepted: 03/26/2024] [Indexed: 04/08/2024]
Abstract
Artificial intelligence (AI) has emerged as a transformative force with great potential in various fields, including healthcare. In recent years, AI has garnered significant attention due to its potential to revolutionise ophthalmology, leading to advancements in patient care such as disease detection, diagnosis, treatment and monitoring of disease progression. This study presents a comprehensive analysis of the research trends and collaborative networks at the intersection of AI and ophthalmology. We conducted an extensive search of the Web of Science Core Collection to identify articles related to 'artificial intelligence' in ophthalmology published from 1968 to 2023, and performed keyword co-occurrence and co-authorship network analyses using VOSviewer software to explore the relationships between keywords and collaboration between countries. We found a remarkable surge in articles applying AI in ophthalmology after 2017, marking a turning point in the integration of AI within the medical field. The primary application of AI shifted towards the diagnosis of ocular disease, evident through keywords such as glaucoma, diabetic retinopathy and age-related macular degeneration. Analysis of the country collaboration networks revealed a global expansion of ophthalmology-related AI research. This study provides valuable insights into the evolving landscape of AI integration in ophthalmology, indicating its growing potential for enhancing disease detection, diagnosis, treatment planning and monitoring of disease progression. To translate AI technologies into clinical practice effectively, it is imperative to comprehend the evolving research trends and advancements at the intersection of AI and ophthalmology.
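At its core, the keyword co-occurrence analysis described above reduces to counting how often pairs of terms appear together across bibliographic records; VOSviewer then lays the resulting network out visually. A minimal sketch of the counting step, using invented records rather than the study's Web of Science corpus:

```python
# Count keyword pair co-occurrences across bibliographic records.
# The records below are illustrative, not data from the study.
from collections import Counter
from itertools import combinations

records = [
    {"artificial intelligence", "glaucoma", "deep learning"},
    {"artificial intelligence", "diabetic retinopathy"},
    {"deep learning", "diabetic retinopathy", "glaucoma"},
]

pair_counts = Counter()
for keywords in records:
    # Each unordered keyword pair in a record adds one co-occurrence edge.
    for a, b in combinations(sorted(keywords), 2):
        pair_counts[(a, b)] += 1

for (a, b), n in pair_counts.most_common():
    print(f"{a} -- {b}: {n}")
```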
Affiliation(s)
- Jihye Ahn
- Department of Optometry, College of Energy and Biotechnology, Seoul National University of Science and Technology, Seoul, Republic of Korea
| | - Moonsung Choi
- Department of Optometry, College of Energy and Biotechnology, Seoul National University of Science and Technology, Seoul, Republic of Korea
- Convergence Institute of Biomedical Engineering and Biomaterials, Seoul National University of Science and Technology, Seoul, Republic of Korea
| |
34
Sorrentino FS, Gardini L, Fontana L, Musa M, Gabai A, Maniaci A, Lavalle S, D’Esposito F, Russo A, Longo A, Surico PL, Gagliano C, Zeppieri M. Novel Approaches for Early Detection of Retinal Diseases Using Artificial Intelligence. J Pers Med 2024; 14:690. [PMID: 39063944 PMCID: PMC11278069 DOI: 10.3390/jpm14070690] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2024] [Revised: 06/24/2024] [Accepted: 06/25/2024] [Indexed: 07/28/2024] Open
Abstract
BACKGROUND An increasing number of people worldwide are affected by retinal diseases, such as diabetic retinopathy, vascular occlusions, maculopathy, alterations of systemic circulation, and metabolic syndrome. AIM This review discusses novel technologies in, and potential approaches to, the detection and diagnosis of retinal diseases with the support of cutting-edge machines and artificial intelligence (AI). METHODS Demand for retinal diagnostic imaging has increased, but the number of eye physicians and technicians is too small to meet it. Algorithms based on AI have therefore been used, representing valid support for early detection and helping doctors establish diagnoses and differential diagnoses. AI enables patients living far from hub centers to obtain testing and a rapid initial diagnosis, sparing them travel and long waits for a medical reply. RESULTS Highly automated systems for screening, early diagnosis, grading, and tailored therapy will facilitate care, even in remote regions and countries. CONCLUSION Massive and extensive use of AI might optimize the automated detection of subtle retinal alterations, allowing eye doctors to provide their best clinical assistance and to choose the best options for the treatment of retinal diseases.
Affiliation(s)
| | - Lorenzo Gardini
- Unit of Ophthalmology, Department of Surgical Sciences, Ospedale Maggiore, 40100 Bologna, Italy; (F.S.S.)
| | - Luigi Fontana
- Ophthalmology Unit, Department of Surgical Sciences, Alma Mater Studiorum University of Bologna, IRCCS Azienda Ospedaliero-Universitaria Bologna, 40100 Bologna, Italy
| | - Mutali Musa
- Department of Optometry, University of Benin, Benin City 300238, Edo State, Nigeria
| | - Andrea Gabai
- Department of Ophthalmology, Humanitas-San Pio X, 20159 Milan, Italy
| | - Antonino Maniaci
- Department of Medicine and Surgery, University of Enna “Kore”, Piazza dell’Università, 94100 Enna, Italy
| | - Salvatore Lavalle
- Department of Medicine and Surgery, University of Enna “Kore”, Piazza dell’Università, 94100 Enna, Italy
| | - Fabiana D’Esposito
- Imperial College Ophthalmic Research Group (ICORG) Unit, Imperial College, 153-173 Marylebone Rd, London NW1 5QH, UK
- Department of Neurosciences, Reproductive Sciences and Dentistry, University of Naples Federico II, Via Pansini 5, 80131 Napoli, Italy
| | - Andrea Russo
- Department of Ophthalmology, University of Catania, 95123 Catania, Italy
| | - Antonio Longo
- Department of Ophthalmology, University of Catania, 95123 Catania, Italy
| | - Pier Luigi Surico
- Schepens Eye Research Institute of Mass Eye and Ear, Harvard Medical School, Boston, MA 02114, USA
- Department of Ophthalmology, Campus Bio-Medico University, 00128 Rome, Italy
| | - Caterina Gagliano
- Department of Medicine and Surgery, University of Enna “Kore”, Piazza dell’Università, 94100 Enna, Italy
- Eye Clinic, Catania University, San Marco Hospital, Viale Carlo Azeglio Ciampi, 95121 Catania, Italy
| | - Marco Zeppieri
- Department of Ophthalmology, University Hospital of Udine, 33100 Udine, Italy
| |
35
Zivojinovic S, Petrovic Savic S, Prodanovic T, Prodanovic N, Simovic A, Devedzic G, Savic D. Neurosonographic Classification in Premature Infants Receiving Omega-3 Supplementation Using Convolutional Neural Networks. Diagnostics (Basel) 2024; 14:1342. [PMID: 39001234 PMCID: PMC11241385 DOI: 10.3390/diagnostics14131342] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2024] [Revised: 06/14/2024] [Accepted: 06/21/2024] [Indexed: 07/16/2024] Open
Abstract
This study focuses on developing a model for the precise determination of ultrasound image density and classification using convolutional neural networks (CNNs) for rapid, timely, and accurate identification of hypoxic-ischemic encephalopathy (HIE). Image density is measured by comparing two regions of interest on ultrasound images of the choroid plexus and brain parenchyma using the Delta E CIE76 value. These regions are then combined and serve as input to the CNN model for classification. Classification of images into three groups (Normal, Moderate, and Intensive) demonstrates high model efficiency, with an overall accuracy of 88.56% and precision of 90% for Normal, 85% for Moderate, and 88% for Intensive. The overall F-measure is 88.40%, indicating a successful balance of precision and recall in classification. This study is significant because it enables rapid and accurate identification of hypoxic-ischemic encephalopathy in newborns, which is crucial for the timely implementation of appropriate therapeutic measures and for improving long-term outcomes for these patients. Such advanced techniques allow medical personnel to manage treatment more efficiently, reducing the risk of complications and improving the quality of care for newborns with HIE.
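The Delta E CIE76 value used here is simply the Euclidean distance between two CIELAB colour vectors. A minimal sketch, assuming ROI extraction and Lab conversion happen upstream; the mean Lab values below are synthetic stand-ins:

```python
# Delta E (CIE76) between the mean Lab colours of two regions of interest.
import numpy as np

def delta_e_cie76(lab1: np.ndarray, lab2: np.ndarray) -> float:
    """Euclidean distance between two CIELAB colour vectors."""
    return float(np.linalg.norm(lab1 - lab2))

# Mean Lab values over the choroid plexus and parenchyma ROIs (synthetic).
roi_choroid_plexus = np.array([62.0, 1.5, 4.0])
roi_parenchyma = np.array([48.0, 0.8, 2.5])

print(delta_e_cie76(roi_choroid_plexus, roi_parenchyma))  # ~14.1
```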
Affiliation(s)
- Suzana Zivojinovic
- Department of Pediatrics, Faculty of Medical Sciences, University of Kragujevac, Svetozara Markovica 69, 34000 Kragujevac, Serbia; (S.Z.); (T.P.); (A.S.); (D.S.)
- Center for Neonatology, Pediatric Clinic, University Clinical Center Kragujevac, Zmaj Jovina 30, 34000 Kragujevac, Serbia
| | - Suzana Petrovic Savic
- Department for Production Engineering, Faculty of Engineering, University of Kragujevac, Sestre Janjic 6, 34000 Kragujevac, Serbia; (S.P.S.); (G.D.)
| | - Tijana Prodanovic
- Department of Pediatrics, Faculty of Medical Sciences, University of Kragujevac, Svetozara Markovica 69, 34000 Kragujevac, Serbia; (S.Z.); (T.P.); (A.S.); (D.S.)
- Center for Neonatology, Pediatric Clinic, University Clinical Center Kragujevac, Zmaj Jovina 30, 34000 Kragujevac, Serbia
| | - Nikola Prodanovic
- Department of Surgery, Faculty of Medical Sciences, University of Kragujevac, Svetozara Markovica 69, 34000 Kragujevac, Serbia
- Clinic for Orthopaedic and Trauma Surgery, University Clinical Center Kragujevac, Zmaj Jovina 30, 34000 Kragujevac, Serbia
| | - Aleksandra Simovic
- Department of Pediatrics, Faculty of Medical Sciences, University of Kragujevac, Svetozara Markovica 69, 34000 Kragujevac, Serbia; (S.Z.); (T.P.); (A.S.); (D.S.)
- Center for Neonatology, Pediatric Clinic, University Clinical Center Kragujevac, Zmaj Jovina 30, 34000 Kragujevac, Serbia
| | - Goran Devedzic
- Department for Production Engineering, Faculty of Engineering, University of Kragujevac, Sestre Janjic 6, 34000 Kragujevac, Serbia; (S.P.S.); (G.D.)
| | - Dragana Savic
- Department of Pediatrics, Faculty of Medical Sciences, University of Kragujevac, Svetozara Markovica 69, 34000 Kragujevac, Serbia; (S.Z.); (T.P.); (A.S.); (D.S.)
- Center for Neonatology, Pediatric Clinic, University Clinical Center Kragujevac, Zmaj Jovina 30, 34000 Kragujevac, Serbia
| |
36
Kim JH, Hong H, Lee K, Jeong Y, Ryu H, Kim H, Jang SH, Park HK, Han JY, Park HJ, Bae H, Oh BM, Kim WS, Lee SY, Lee SU. AI in evaluating ambulation of stroke patients: severity classification with video and functional ambulation category scale. Top Stroke Rehabil 2024:1-9. [PMID: 38841903 DOI: 10.1080/10749357.2024.2359342] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2023] [Accepted: 05/18/2024] [Indexed: 06/07/2024]
Abstract
BACKGROUND The evaluation of gait function and severity classification of stroke patients are important for determining rehabilitation goals and exercise levels. Physicians often evaluate patients' walking ability qualitatively through visual gait analysis using the naked eye, video images, or standardized assessment tools. Gait evaluation through observation relies on the physician's empirical judgment, potentially introducing subjectivity, so research establishing a basis for more objective judgment is needed. OBJECTIVE To validate a deep learning model that classifies gait image data of stroke patients according to the Functional Ambulation Category (FAC) scale. METHODS Gait video data from 203 stroke patients and 182 healthy individuals recruited from six medical institutions were collected to train a deep learning model for classifying gait severity in stroke patients. The recorded videos were processed using OpenPose. The dataset was randomly split into 80% for training and 20% for testing. RESULTS The deep learning model attained a training accuracy of 0.981 and a test accuracy of 0.903, with area under the curve (AUC) values of 0.93, 0.95, and 0.96 for discriminating among the mild, moderate, and severe stroke groups, respectively. CONCLUSION These results confirm the potential of vision-based human pose estimation not only for developing gait parameter models but also for building models that classify severity according to the FAC criteria used by physicians. Developing an AI-based severity classification model requires a large amount and variety of data, and data collected in non-standardized real-world environments, rather than laboratories, can also be used meaningfully.
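As a hedged sketch of the pipeline described (per-frame pose keypoints flattened into one feature vector per walking clip, then an 80/20 train/test split), the following uses random placeholder data and a scikit-learn classifier in place of the authors' unspecified network:

```python
# Severity classification from pose-keypoint features with an 80/20 split.
# Data are random placeholders, not the study's videos.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_clips, n_frames, n_keypoints = 200, 60, 25
# (x, y) per keypoint per frame (e.g., from OpenPose), flattened per clip.
X = rng.normal(size=(n_clips, n_frames * n_keypoints * 2))
y = rng.integers(0, 3, size=n_clips)  # 0=mild, 1=moderate, 2=severe (FAC-derived)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```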
Affiliation(s)
- Jeong-Hyun Kim
- Department of Rehabilitation Medicine, Seoul Metropolitan Government Boramae Medical Center, Seoul, South Korea
| | - Hyeon Hong
- Department of Rehabilitation Medicine, Seoul Metropolitan Government Boramae Medical Center, Seoul, South Korea
| | - Kyuwon Lee
- Department of Rehabilitation Medicine, Seoul Metropolitan Government Boramae Medical Center, Seoul, South Korea
| | - Yeji Jeong
- Department of Rehabilitation Medicine, Seoul Metropolitan Government Boramae Medical Center, Seoul, South Korea
| | - Hokyoung Ryu
- Department of Graduate School of Technology and Innovation Management, Hanyang University, Seoul, South Korea
| | - Hyundo Kim
- Department of Intelligence Computing, Hanyang University, Seoul, South Korea
| | - Seong-Ho Jang
- Department of Rehabilitation Medicine, Hanyang University, Guri Hospital, Gyeonggi-do, South Korea
| | - Hyeng-Kyu Park
- Department of Physical & Rehabilitation Medicine, Regional Cardiocerebrovascular Center, Center for Aging and Geriatrics, Chonnam National University Medical School & Hospital, Gwangju, South Korea
| | - Jae-Young Han
- Department of Physical & Rehabilitation Medicine, Regional Cardiocerebrovascular Center, Center for Aging and Geriatrics, Chonnam National University Medical School & Hospital, Gwangju, South Korea
| | - Hye Jung Park
- Department of Rehabilitation Medicine, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, South Korea
| | - Hasuk Bae
- Department of Rehabilitation Medicine, Ewha Woman's University, Seoul, South Korea
| | - Byung-Mo Oh
- Department of Rehabilitation, Seoul National University Hospital, Seoul, South Korea
| | - Won-Seok Kim
- Department of Rehabilitation Medicine, Seoul National University College of Medicine, Seoul, South Korea
| | - Sang Yoon Lee
- Department of Rehabilitation Medicine, Seoul National University College of Medicine, SMG-SNU Boramae Medical Center, Seoul, South Korea
| | - Shi-Uk Lee
- Department of Rehabilitation Medicine, Seoul Metropolitan Government Boramae Medical Center, Seoul, South Korea
- Department of Physical Medicine & Rehabilitation, College of Medicine, Seoul National University, Seoul, South Korea
| |
37
Chen S, Zhao X, Wu Z, Cao K, Zhang Y, Tan T, Lam CT, Xu Y, Zhang G, Sun Y. Multi-risk factors joint prediction model for risk prediction of retinopathy of prematurity. EPMA J 2024; 15:261-274. [PMID: 38841619 PMCID: PMC11147992 DOI: 10.1007/s13167-024-00363-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2024] [Accepted: 04/17/2024] [Indexed: 06/07/2024]
Abstract
Purpose Retinopathy of prematurity (ROP) is a retinal vascular proliferative disease common in low-birth-weight and premature infants and one of the main causes of blindness in children. In the context of predictive, preventive and personalized medicine (PPPM/3PM), early screening, identification, and treatment of ROP directly improve patients' long-term visual prognosis and reduce the risk of blindness. Our objective was to combine an artificial intelligence (AI) algorithm with clinical demographics to create a risk model for ROP, including treatment-requiring retinopathy of prematurity (TR-ROP). Methods A total of 22,569 infants who underwent routine ROP screening in Shenzhen Eye Hospital from March 2003 to September 2023 were included, comprising 3335 infants with ROP, of whom 1234 had TR-ROP. Two machine learning methods (logistic regression and decision tree) and a deep learning method (multi-layer perceptron) were trained on combinations of risk factors such as birth weight (BW), gestational age (GA), gender, multiple birth (MB), and mode of delivery (MD) to predict the risk of ROP and TR-ROP. We used five evaluation metrics to assess the performance of the risk prediction models, with the area under the receiver operating characteristic curve (AUC) and the area under the precision-recall curve (AUCPR) as the main measures. Results In risk prediction for ROP, BW + GA demonstrated the optimal performance (mean ± SD, AUCPR: 0.4849 ± 0.0175; AUC: 0.8124 ± 0.0033). In risk prediction of TR-ROP, reasonable performance was achieved using GA + BW + Gender + MD + MB (AUCPR: 0.2713 ± 0.0214; AUC: 0.8328 ± 0.0088). Conclusions Combining risk factors with AI in ROP screening programs can predict the risk of ROP and TR-ROP, detect TR-ROP earlier, and reduce the number of ROP examinations and unnecessary physiological stress in low-risk infants. Combining ROP-related biometric information with AI is therefore a cost-effective strategy for predictive diagnostics, targeted prevention, and personalization of medical services in early ROP screening and treatment.
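The BW + GA logistic-regression risk model and its two headline metrics (AUC and AUCPR) can be outlined as follows. The cohort is synthetic, with a toy risk function standing in for the real screening data:

```python
# Logistic-regression ROP risk model over birth weight and gestational age,
# evaluated with AUC and AUCPR. Synthetic cohort, not the study's 22,569 infants.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(1)
n = 5000
bw = rng.normal(1800, 500, n)   # birth weight, g
ga = rng.normal(32, 3, n)       # gestational age, weeks
# Lower BW/GA raises ROP risk in this toy generative model.
logit = 6.0 - 0.002 * bw - 0.12 * ga
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([bw, ga])
model = LogisticRegression().fit(X, y)
p = model.predict_proba(X)[:, 1]
print("AUC:  ", roc_auc_score(y, p))
print("AUCPR:", average_precision_score(y, p))
```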
Affiliation(s)
- Shaobin Chen
- Faculty of Applied Sciences, Macao Polytechnic University, Gomes Street, Macao, China
| | - Xinyu Zhao
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, 518040 China
| | - Zhenquan Wu
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, 518040 China
| | - Kangyang Cao
- Faculty of Applied Sciences, Macao Polytechnic University, Gomes Street, Macao, China
| | - Yulin Zhang
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, 518040 China
| | - Tao Tan
- Faculty of Applied Sciences, Macao Polytechnic University, Gomes Street, Macao, China
| | - Chan-Tong Lam
- Faculty of Applied Sciences, Macao Polytechnic University, Gomes Street, Macao, China
| | - Yanwu Xu
- School of Future Technology, South China University of Technology, Guangzhou, China; Pazhou Lab, Guangzhou, China
| | - Guoming Zhang
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, 518040 China
| | - Yue Sun
- Faculty of Applied Sciences, Macao Polytechnic University, Gomes Street, Macao, China
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, 5612 AP The Netherlands
| |
38
Abstract
OBJECTIVE To summarize current research on machine learning and venous thromboembolism. METHODS Recent literature on machine learning applied to the risk factors, diagnosis, prevention, and prognosis of venous thromboembolism was reviewed. RESULTS Machine learning is central to the future of biomedical research, personalized medicine, and computer-aided diagnosis, and will significantly advance biomedical research and healthcare. However, many medical professionals are not familiar with it. In this review, we introduce several machine learning algorithms commonly used in medicine, discuss the application of machine learning to venous thromboembolism, and outline the challenges and opportunities of machine learning in medicine. CONCLUSION The incidence of venous thromboembolism is high and its diagnostic measures are diverse. Machine learning, as a research tool, warrants dedicated study in the classification and treatment of venous thromboembolism.
Affiliation(s)
- Shirong Zou
- West China Hospital of Medicine, West China Hospital Operation Room /West China School of Nursing, Sichuan University, Chengdu, China
| | - Zhoupeng Wu
- Department of vascular surgery, West China Hospital, Sichuan University, Chengdu, China
| |
39
Roubelat FP, Soler V, Varenne F, Gualino V. Real-world artificial intelligence-based interpretation of fundus imaging as part of an eyewear prescription renewal protocol. J Fr Ophtalmol 2024; 47:104130. [PMID: 38461084 DOI: 10.1016/j.jfo.2024.104130] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2023] [Revised: 11/17/2023] [Accepted: 11/23/2023] [Indexed: 03/11/2024]
Abstract
OBJECTIVE A real-world evaluation of the diagnostic accuracy of the Opthai® software for artificial intelligence-based detection of fundus image abnormalities in the context of the French eyewear prescription renewal protocol (RNO). METHODS A single-center, retrospective review of the sensitivity and specificity of the software in detecting fundus abnormalities among consecutive patients seen in our ophthalmology center in the context of the RNO protocol from July 28 through October 22, 2021. We compared abnormalities detected by the software operated by ophthalmic technicians (index test) with diagnoses confirmed by the ophthalmologist following additional examinations and/or consultation (reference test). RESULTS The study included 2056 eyes/fundus images of 1028 patients aged 6-50 years. The software detected fundus abnormalities in 149 (7.2%) eyes or 107 (10.4%) patients. After examining the same fundus images, the ophthalmologist detected abnormalities in 35 (1.7%) eyes or 20 (1.9%) patients. The ophthalmologist did not detect abnormalities in fundus images deemed normal by the software. The most frequent diagnoses made by the ophthalmologist were glaucoma suspect (0.5% of eyes), peripapillary atrophy (0.44% of eyes), and drusen (0.39% of eyes). The software showed an overall sensitivity of 100% (95% CI 0.879-1.00) and an overall specificity of 94.4% (95% CI 0.933-0.953). Most false-positive software detections (5.6%) were glaucoma suspects, the differential diagnosis being large physiological optic cups. Immediate OCT imaging by the technician allowed diagnosis by the ophthalmologist without a separate consultation for 43/53 (81%) patients. CONCLUSION Ophthalmic technicians can use this software for highly sensitive screening for fundus abnormalities that require evaluation by an ophthalmologist.
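The sensitivity and specificity reported above derive from a standard 2 x 2 screening table. The sketch below uses Wilson 95% intervals and counts reconstructed to be consistent with the reported totals (35 true positives, 114 false positives, 1907 true negatives); this is a hypothetical reconstruction, and the paper's intervals may have been computed with a different method:

```python
# Sensitivity and specificity with 95% Wilson score intervals.
import math

def wilson_ci(successes: int, total: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return centre - half, centre + half

tp, fn = 35, 0       # abnormal fundi flagged / missed (hypothetical counts)
tn, fp = 1907, 114   # normal fundi cleared / over-called (hypothetical counts)

sens, spec = tp / (tp + fn), tn / (tn + fp)
lo, hi = wilson_ci(tp, tp + fn)
print(f"sensitivity {sens:.3f} (95% CI {lo:.3f}-{hi:.3f})")
lo, hi = wilson_ci(tn, tn + fp)
print(f"specificity {spec:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```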
Affiliation(s)
- F-P Roubelat
- Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital, Toulouse, France
| | - V Soler
- Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital, Toulouse, France
| | - F Varenne
- Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital, Toulouse, France
| | - V Gualino
- Ophthalmology Department, Clinique Honoré-Cave, Montauban, France.
| |
40
Ramakrishnan MS, Kovach JL, Wykoff CC, Berrocal AM, Modi YS. American Society of Retina Specialists Clinical Practice Guidelines on Multimodal Imaging for Retinal Disease. J Vitreoretin Dis 2024; 8:234-246. [PMID: 38770073 PMCID: PMC11102716 DOI: 10.1177/24741264241237012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2024]
Abstract
Purpose: Advancements in retinal imaging have augmented our understanding of the pathology and structure-function relationships of retinal disease. No single diagnostic test is sufficient; rather, diagnostic and management strategies increasingly involve the synthesis of multiple imaging modalities. Methods: This literature review and editorial offer practical clinical guidelines for how the retina specialist can use multimodal imaging to manage retinal conditions. Results: Various imaging modalities offer information on different aspects of retinal structure and function. For example, optical coherence tomography (OCT) and B-scan ultrasonography can provide insights into the microstructural anatomy; fluorescein angiography (FA), indocyanine green angiography (ICGA), and OCT angiography (OCTA) can reveal vascular integrity and perfusion status; and near-infrared reflectance and fundus autofluorescence (FAF) can characterize molecular components within tissues. Managing retinal vascular diseases often includes fundus photography, OCT, OCTA, and FA to evaluate for macular edema, retinal ischemia, and the secondary complications of neovascularization (NV). OCT and FAF play a key role in diagnosing and treating maculopathies. FA, OCTA, and ICGA can help identify macular NV, posterior uveitis, and choroidal venous insufficiency, which guides treatment strategies. Finally, OCT and B-scan ultrasonography can help with preoperative planning and prognostication in vitreoretinal surgical conditions. Conclusions: Today, the retina specialist has access to numerous retinal imaging modalities that can augment the clinical examination to help diagnose and manage retinal conditions. Understanding the capabilities and limitations of each modality is critical to maximizing its clinical utility.
Affiliation(s)
- Meera S. Ramakrishnan
- Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Irving Medical Center, New York, NY, USA
- Department of Ophthalmology, New York University Langone Medical Center, New York, NY, USA
| | - Jaclyn L. Kovach
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
| | - Charlie C. Wykoff
- Retina Consultants of Houston, Blanton Eye Institute, Houston Methodist Hospital, Weill Cornell Medical College, Houston, TX, USA
| | - Audina M. Berrocal
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
| | - Yasha S. Modi
- Department of Ophthalmology, New York University Langone Medical Center, New York, NY, USA
| |
41
Marra KV, Chen JS, Robles-Holmes HK, Miller J, Wei G, Aguilar E, Ideguchi Y, Ly KB, Prenner S, Erdogmus D, Ferrara N, Campbell JP, Friedlander M, Nudleman E. Development of a Semi-automated Computer-based Tool for the Quantification of Vascular Tortuosity in the Murine Retina. Ophthalmol Sci 2024; 4:100439. [PMID: 38361912 PMCID: PMC10867761 DOI: 10.1016/j.xops.2023.100439] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/19/2022] [Revised: 10/10/2023] [Accepted: 11/27/2023] [Indexed: 02/17/2024]
Abstract
Purpose The murine oxygen-induced retinopathy (OIR) model is one of the most widely used animal models of ischemic retinopathy, mimicking the hallmark pathophysiology of initial vaso-obliteration (VO) resulting in ischemia that drives neovascularization (NV). In addition to NV and VO, human ischemic retinopathies, including retinopathy of prematurity (ROP), are characterized by increased vascular tortuosity. Vascular tortuosity is an indicator of disease severity, the need to treat, and treatment response in ROP. Current literature investigating novel therapeutics in the OIR model often reports their effects on NV and VO, and measurements of vascular tortuosity are less commonly performed. No standardized quantification of vascular tortuosity exists to date despite this metric's relevance to human disease. This proof-of-concept study aimed to apply a previously published semi-automated computer-based image analysis approach (iROP-Assist) to develop a new tool to quantify vascular tortuosity in mouse models. Design Experimental study. Subjects C57BL/6J mice subjected to the OIR model. Methods In a pilot study, vasculature was manually segmented on flat-mount images of OIR and normoxic (NOX) mouse retinas and segmentations were analyzed with iROP-Assist to quantify vascular tortuosity metrics. In a large cohort of age-matched (postnatal day 12 [P12], P17, P25) NOX and OIR mouse retinas, NV, VO, and vascular tortuosity were quantified and compared. In a third experiment, vascular tortuosity in OIR mouse retinas was quantified on P17 following intravitreal injection with anti-VEGF (aflibercept) or immunoglobulin G isotype control on P12. Main Outcome Measures Vascular tortuosity. Results Cumulative tortuosity index was the best metric produced by iROP-Assist for discriminating between OIR mice and NOX controls. Increased vascular tortuosity correlated with disease activity in OIR. Treatment of OIR mice with aflibercept rescued vascular tortuosity. Conclusions Vascular tortuosity is a quantifiable feature of the OIR model that correlates with disease severity and may be quickly and accurately quantified using the iROP-Assist algorithm. Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
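Tortuosity metrics of the kind iROP-Assist aggregates are commonly built from an arc-length-to-chord-length ratio along each vessel centreline. A minimal sketch of that single metric follows; the cumulative tortuosity index's exact formulation is not reproduced here:

```python
# Arc length over chord length along a vessel centreline polyline.
import numpy as np

def tortuosity_index(points: np.ndarray) -> float:
    """Arc length divided by straight-line (chord) length; 1.0 = straight."""
    segment_lengths = np.linalg.norm(np.diff(points, axis=0), axis=1)
    arc = segment_lengths.sum()
    chord = np.linalg.norm(points[-1] - points[0])
    return float(arc / chord)

# A gently sinusoidal vessel is more tortuous than a straight one.
t = np.linspace(0, 1, 100)
vessel = np.column_stack([t, 0.05 * np.sin(8 * np.pi * t)])
print(tortuosity_index(vessel))  # > 1.0
```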
Affiliation(s)
- Kyle V. Marra
- Department of Molecular Medicine, The Scripps Research Institute, San Diego, California
- School of Medicine, University of California San Diego, San Diego, California
| | - Jimmy S. Chen
- Department of Ophthalmology, Shiley Eye Institute, University of California San Diego, San Diego, California
| | - Hailey K. Robles-Holmes
- Department of Ophthalmology, Shiley Eye Institute, University of California San Diego, San Diego, California
| | - Joseph Miller
- Department of Ophthalmology, Shiley Eye Institute, University of California San Diego, San Diego, California
| | - Guoqin Wei
- Department of Molecular Medicine, The Scripps Research Institute, San Diego, California
| | - Edith Aguilar
- Department of Molecular Medicine, The Scripps Research Institute, San Diego, California
| | - Yoichiro Ideguchi
- Department of Molecular Medicine, The Scripps Research Institute, San Diego, California
| | - Kristine B. Ly
- College of Optometry, Pacific University, Forest Grove, Oregon
| | - Sofia Prenner
- Department of Ophthalmology, Shiley Eye Institute, University of California San Diego, San Diego, California
| | - Deniz Erdogmus
- Department of Electrical and Computer Engineering, Northeastern University, Boston, Massachusetts
| | - Napoleone Ferrara
- Department of Ophthalmology, Shiley Eye Institute, University of California San Diego, San Diego, California
| | - J. Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
| | - Martin Friedlander
- Department of Molecular Medicine, The Scripps Research Institute, San Diego, California
| | - Eric Nudleman
- Department of Ophthalmology, Shiley Eye Institute, University of California San Diego, San Diego, California
| |
42
Padhi TR, Bhunia S, Das T, Nayak S, Jalan M, Rath S, Barik B, Ali H, Rani PK, Routray D, Jalali S. Outcome of real-time telescreening for retinopathy of prematurity using videoconferencing in a community setting in Eastern India. Indian J Ophthalmol 2024; 72:697-703. [PMID: 38389241 PMCID: PMC11168531 DOI: 10.4103/ijo.ijo_2024_23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2023] [Revised: 11/01/2023] [Accepted: 11/06/2023] [Indexed: 02/24/2024] Open
Abstract
PURPOSE To evaluate the feasibility and outcome of a real-time retinopathy of prematurity (ROP) telescreening strategy using videoconferencing in a community setting in India. METHODS In a prospective study, trained allied ophthalmic personnel obtained the fundus images in the presence of the parents and local childcare providers. Analysis of images and parental counseling were done in real time by an ROP specialist located at a tertiary center using videoconferencing software. A subset of babies was also examined using bedside indirect ophthalmoscopy by an ROP care-trained ophthalmologist. The data were analyzed using descriptive statistics, sensitivity, specificity, positive and negative predictive values, and the correlation coefficient. RESULTS Over 9 months, we examined 576 babies (1152 eyes) in six rural districts of India. The parents accepted the model as they recognized that a remotely located specialist was evaluating all images in real time. The strategy saved ROP specialists 477 hours (47.7 working days) of travel time and parents 47,406 hours (1975.25 days), along with the associated travel costs. In a subgroup analysis (100 babies, 200 eyes), the technology had a high sensitivity (97.2%) and negative predictive value (92.7%). It showed substantial agreement (κ = 0.708) with bedside indirect ophthalmoscopy by ROP specialists with respect to the detection of treatment-warranting ROP. The strategy also helped train the participants. CONCLUSION Real-time ROP telescreening using videoconferencing is sensitive enough to detect treatment-warranting ROP and saves skilled workforce effort and time. The real-time audiovisual connection allows optimal supervision of imaging, provides excellent training opportunities, and connects ophthalmologists directly with the parents.
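The substantial agreement reported (κ = 0.708) is Cohen's kappa, which discounts the agreement expected by chance. A toy computation with fabricated labels:

```python
# Cohen's kappa between telescreening reads and bedside ophthalmoscopy.
# Labels here are fabricated placeholders, not the study's data.
from sklearn.metrics import cohen_kappa_score

tele = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]     # 1 = treatment-warranting ROP
bedside = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]

# ~0.78 here, which falls in the "substantial" band (0.61-0.80).
print(cohen_kappa_score(tele, bedside))
```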
Affiliation(s)
- Tapas R Padhi
- Vitreoretinal Services, Anant Bajaj Retina Institute, Mithu Tulsi Chanrai Campus, LV Prasad Eye Institute, Bhubaneswar, Odisha, India
| | - Souvik Bhunia
- Vitreoretinal Services, Anant Bajaj Retina Institute, Mithu Tulsi Chanrai Campus, LV Prasad Eye Institute, Bhubaneswar, Odisha, India
| | - Taraprasad Das
- Vitreoretinal Services, Anant Bajaj Retina Institute, Hyderabad, Telangana, India
| | - Sameer Nayak
- Vitreoretinal Services, Anant Bajaj Retina Institute, Mithu Tulsi Chanrai Campus, LV Prasad Eye Institute, Bhubaneswar, Odisha, India
| | - Manav Jalan
- Vitreoretinal Services, Anant Bajaj Retina Institute, Mithu Tulsi Chanrai Campus, LV Prasad Eye Institute, Bhubaneswar, Odisha, India
| | - Suryasnata Rath
- Vitreoretinal Services, Anant Bajaj Retina Institute, Mithu Tulsi Chanrai Campus, LV Prasad Eye Institute, Bhubaneswar, Odisha, India
| | - Biswajeet Barik
- Vitreoretinal Services, Anant Bajaj Retina Institute, Mithu Tulsi Chanrai Campus, LV Prasad Eye Institute, Bhubaneswar, Odisha, India
| | - Hasnat Ali
- Department of Biostatistics, Kallam Anji Reddy Campus, LV Prasad Eye Institute, Hyderabad, Telangana, India
| | - Padmaja Kumari Rani
- Vitreoretinal Services, Anant Bajaj Retina Institute, Hyderabad, Telangana, India
| | - Dipanwita Routray
- Department of Community Medicine, District Medical College Hospital, Keonjhar, Odisha, India
| | - Subhadra Jalali
- Vitreoretinal Services, Anant Bajaj Retina Institute, Hyderabad, Telangana, India
| |
43
Driban M, Yan A, Selvam A, Ong J, Vupparaboina KK, Chhablani J. Artificial intelligence in chorioretinal pathology through fundoscopy: a comprehensive review. Int J Retina Vitreous 2024; 10:36. [PMID: 38654344 PMCID: PMC11036694 DOI: 10.1186/s40942-024-00554-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2024] [Accepted: 04/02/2024] [Indexed: 04/25/2024] Open
Abstract
BACKGROUND Applications for artificial intelligence (AI) in ophthalmology are continually evolving. Fundoscopy is one of the oldest ocular imaging techniques but remains a mainstay in posterior segment imaging due to its prevalence, ease of use, and ongoing technological advancement. AI has been leveraged for fundoscopy to accomplish core tasks including segmentation, classification, and prediction. MAIN BODY In this article we provide a review of AI in fundoscopy applied to representative chorioretinal pathologies, including diabetic retinopathy and age-related macular degeneration, among others. We conclude with a discussion of future directions and current limitations. SHORT CONCLUSION As AI evolves, it will become increasingly essential for the modern ophthalmologist to understand its applications and limitations to improve patient outcomes and continue to innovate.
Affiliation(s)
- Matthew Driban
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
| | - Audrey Yan
- Department of Medicine, West Virginia School of Osteopathic Medicine, Lewisburg, WV, USA
| | - Amrish Selvam
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
| | - Joshua Ong
- Michigan Medicine, University of Michigan, Ann Arbor, USA
| | | | - Jay Chhablani
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA.
| |
44
Coyner AS, Murickan T, Oh MA, Young BK, Ostmo SR, Singh P, Chan RVP, Moshfeghi DM, Shah PK, Venkatapathy N, Chiang MF, Kalpathy-Cramer J, Campbell JP. Multinational External Validation of Autonomous Retinopathy of Prematurity Screening. JAMA Ophthalmol 2024; 142:327-335. [PMID: 38451496 PMCID: PMC10921347 DOI: 10.1001/jamaophthalmol.2024.0045] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2023] [Accepted: 12/15/2023] [Indexed: 03/08/2024]
Abstract
Importance Retinopathy of prematurity (ROP) is a leading cause of blindness in children, with significant disparities in outcomes between high-income and low-income countries, due in part to insufficient access to ROP screening. Objective To evaluate how well autonomous artificial intelligence (AI)-based ROP screening can detect more-than-mild ROP (mtmROP) and type 1 ROP. Design, Setting, and Participants This diagnostic study evaluated the performance of an AI algorithm, trained and calibrated using 2530 examinations from 843 infants in the Imaging and Informatics in Retinopathy of Prematurity (i-ROP) study, on 2 external datasets (6245 examinations from 1545 infants in the Stanford University Network for Diagnosis of ROP [SUNDROP] and 5635 examinations from 2699 infants in the Aravind Eye Care Systems [AECS] telemedicine programs). Data were taken from 11 and 48 neonatal care units in the US and India, respectively. Data were collected from January 2012 to July 2021, and data were analyzed from July to December 2023. Exposures An image processing pipeline was created using deep learning to autonomously identify mtmROP and type 1 ROP in eye examinations performed via telemedicine. Main Outcomes and Measures The area under the receiver operating characteristic curve (AUROC) as well as sensitivity and specificity for detection of mtmROP and type 1 ROP at the eye examination and patient levels. Results The prevalence of mtmROP and type 1 ROP were 5.9% (91 of 1545) and 1.2% (18 of 1545), respectively, in the SUNDROP dataset and 6.2% (168 of 2699) and 2.5% (68 of 2699) in the AECS dataset. Examination-level AUROCs for mtmROP and type 1 ROP were 0.896 and 0.985, respectively, in the SUNDROP dataset and 0.920 and 0.982 in the AECS dataset. At the cross-sectional examination level, mtmROP detection had high sensitivity (SUNDROP: mtmROP, 83.5%; 95% CI, 76.6-87.7; type 1 ROP, 82.2%; 95% CI, 81.2-83.1; AECS: mtmROP, 80.8%; 95% CI, 76.2-84.9; type 1 ROP, 87.8%; 95% CI, 86.8-88.7). At the patient level, all infants who developed type 1 ROP screened positive (SUNDROP: 100%; 95% CI, 81.4-100; AECS: 100%; 95% CI, 94.7-100) prior to diagnosis. Conclusions and Relevance Where and when ROP telemedicine programs can be implemented, autonomous ROP screening may be an effective force multiplier for secondary prevention of ROP.
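The two evaluation levels used in this study (per-examination AUROC, plus a patient-level screen that flags a patient if any examination exceeds threshold) can be sketched with synthetic scores and labels as follows:

```python
# Exam-level vs patient-level evaluation of a screening score.
# Data are synthetic stand-ins for telemedicine examinations.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n_exams = 1000
patient_id = rng.integers(0, 300, n_exams)
y_exam = (rng.random(n_exams) < 0.06).astype(int)          # ~6% positive exams
score = np.clip(rng.normal(0.2 + 0.6 * y_exam, 0.2), 0, 1)  # model output

print("exam-level AUROC:", roc_auc_score(y_exam, score))

# Patient level: take the max score and max label across each patient's exams.
patients = np.unique(patient_id)
p_score = np.array([score[patient_id == p].max() for p in patients])
p_label = np.array([y_exam[patient_id == p].max() for p in patients])
print("patient-level AUROC:", roc_auc_score(p_label, p_score))
```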
Affiliation(s)
- Aaron S. Coyner
- Casey Eye Institute, Oregon Health & Science University, Portland
| | - Tom Murickan
- Casey Eye Institute, Oregon Health & Science University, Portland
| | - Minn A. Oh
- Casey Eye Institute, Oregon Health & Science University, Portland
| | | | - Susan R. Ostmo
- Casey Eye Institute, Oregon Health & Science University, Portland
| | - Praveer Singh
- Ophthalmology, University of Colorado School of Medicine, Aurora
| | - R. V. Paul Chan
- Illinois Eye and Ear Infirmary, University of Illinois at Chicago
| | - Darius M. Moshfeghi
- Byers Eye Institute, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, California
| | - Parag K. Shah
- Pediatric Retina and Ocular Oncology, Aravind Eye Hospital, Coimbatore, India
| | | | - Michael F. Chiang
- National Eye Institute, National Institutes of Health, Bethesda, Maryland
- National Library of Medicine, National Institutes of Health, Bethesda, Maryland
| | | | | |
45
Sharafi SM, Ebrahimiadib N, Roohipourmoallai R, Farahani AD, Fooladi MI, Khalili Pour E. Automated diagnosis of plus disease in retinopathy of prematurity using quantification of vessels characteristics. Sci Rep 2024; 14:6375. [PMID: 38493272 PMCID: PMC10944526 DOI: 10.1038/s41598-024-57072-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2023] [Accepted: 03/14/2024] [Indexed: 03/18/2024] Open
Abstract
The condition known as Plus disease is distinguished by atypical alterations in the retinal vasculature of neonates born prematurely. Its diagnosis has been shown to be subjective and qualitative in nature, and the use of quantitative methods and computer-based image analysis to enhance the objectivity of Plus disease diagnosis is well established in the literature. This study presents a computer-based image analysis method for automatically distinguishing Plus images from non-Plus images. The proposed methodology conducts a quantitative analysis of the vascular characteristics linked to Plus disease, thereby aiding physicians in making informed judgments. A collection of 76 posterior retinal images from a diverse group of infants who underwent screening for Retinopathy of Prematurity (ROP) was obtained. A reference standard diagnosis was established as the majority of the labels assigned by three ROP experts during two separate sessions. Retinal vessels were segmented using a semi-automatic methodology, and computer algorithms were developed to compute the tortuosity, dilation, and density of vessels in various retinal regions as potential discriminative characteristics. A classifier was provided with a set of selected features to distinguish between Plus and non-Plus images. The study included 76 infants (49 [64.5%] boys) with a mean birth weight of 1305 ± 427 g and a mean gestational age of 29.3 ± 3 weeks. The average level of inter-expert agreement for the diagnosis of Plus disease was 79% (standard deviation 5.3%), and the average intra-expert agreement was 85% (standard deviation 3%). The average tortuosity of the five most tortuous vessels was significantly higher in Plus images than in non-Plus images (p ≤ 0.0001). Point-based curvature values were significantly higher in Plus images than in non-Plus images (p ≤ 0.0001). The maximum diameter of vessels within a region extending 5 disc diameters from the border of the optic disc (5DD) was significantly greater in Plus images (p ≤ 0.0001), and vessel density was significantly higher in Plus images than in non-Plus images (p ≤ 0.0001). The classifier's accuracy in distinguishing between Plus and non-Plus images, determined through tenfold cross-validation, was 0.86 ± 0.01, higher than the diagnostic accuracy of one of the three experts against the reference standard. The implemented algorithm demonstrated a commendable level of accuracy in detecting Plus disease in retinopathy of prematurity, comparable to expert diagnoses. Objective analysis of vessel characteristics makes it possible to quantitatively assess features of disease progression. This automated system has the potential to enhance physicians' ability to diagnose Plus disease, offering a valuable contribution to the management of ROP through the integration of traditional ophthalmoscopy and image-based telemedicine methodologies.
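The tenfold cross-validated accuracy quoted above can be outlined as below. The vessel features are random placeholders, and the choice of classifier is an assumption, since the abstract does not name one:

```python
# Tenfold cross-validation of a classifier over vessel-derived features.
# Feature values are random placeholders standing in for the 76 images.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_images = 76
# Columns: tortuosity of 5 most tortuous vessels, max diameter in 5DD, density.
X = rng.normal(size=(n_images, 3))
y = rng.integers(0, 2, size=n_images)  # 1 = Plus, 0 = non-Plus

scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=10)
print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```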
Affiliation(s)
- Sayed Mehran Sharafi
- Retinopathy of Prematurity Department, Retina Ward, Farabi Eye Hospital, Tehran University of Medical Sciences, South Kargar Street, Qazvin Square, Tehran, Iran
| | - Nazanin Ebrahimiadib
- Ophthalmology Department, College of Medicine, University of Florida, Gainesville, FL, USA
| | - Ramak Roohipourmoallai
- Department of Ophthalmology, Morsani College of Medicine, University of South Florida, Tampa, FL, USA
| | - Afsar Dastjani Farahani
- Retinopathy of Prematurity Department, Retina Ward, Farabi Eye Hospital, Tehran University of Medical Sciences, South Kargar Street, Qazvin Square, Tehran, Iran
| | - Marjan Imani Fooladi
- Clinical Pediatric Ophthalmology Department, UPMC, Children's Hospital of Pittsburgh, Pittsburgh, PA, USA
| | - Elias Khalili Pour
- Retinopathy of Prematurity Department, Retina Ward, Farabi Eye Hospital, Tehran University of Medical Sciences, South Kargar Street, Qazvin Square, Tehran, Iran.
| |
46
Liu R, Li X, Liu Y, Du L, Zhu Y, Wu L, Hu B. A high-speed microscopy system based on deep learning to detect yeast-like fungi cells in blood. Bioanalysis 2024; 16:289-303. [PMID: 38334080 DOI: 10.4155/bio-2023-0193] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/10/2024] Open
Abstract
Background: Invasive bloodstream fungal infections can be fatal, yet their diagnosis remains challenging. Methods: A high-speed microscopy detection system was constructed comprising a microfluidic system, a microscope connected to a high-speed camera, and a deep learning analysis section. Results: On training data, the sensitivity and specificity of the convolutional neural network model were 93.5% (92.7-94.2%) and 99.5% (99.1-99.5%), respectively. On validation data, the sensitivity and specificity were 81.3% (80.0-82.5%) and 99.4% (99.2-99.6%), respectively. Cryptococcal cells were found in 22.07% of blood samples. Conclusion: This high-speed microscopy system can analyze fungal pathogens in blood samples rapidly, with high sensitivity and specificity, and can help dramatically accelerate the diagnosis of fungal infectious diseases.
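As a rough sketch of the deep learning analysis section, a compact CNN over single-cell frames might look like the following; the architecture, input size, and two-class layout are illustrative assumptions, not the paper's network:

```python
# A small CNN that classifies grayscale cell crops as fungal vs non-fungal.
import torch
import torch.nn as nn

class CellCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),  # fungal / non-fungal logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

model = CellCNN()
frames = torch.randn(8, 1, 64, 64)  # a batch of grayscale 64x64 cell crops
print(model(frames).shape)          # torch.Size([8, 2])
```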
Affiliation(s)
- Ruiqi Liu
- Guangxi Key Laboratory of Special Biomedicine, School of Medicine, Guangxi University, Nanning, Guangxi, P.R. China
| | - Xiaojie Li
- Department of Laboratory Medicine, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, P.R. China
| | - Yingyi Liu
- Guangxi Key Laboratory of Special Biomedicine, School of Medicine, Guangxi University, Nanning, Guangxi, P.R. China
| | - Lijun Du
- Department of Clinical Laboratory, Huadu District People's Hospital of Guangzhou, Guangdong, China
| | - Yingzhu Zhu
- Guangzhou Waterrock Gene Technology, Guangdong, China
| | - Lichuan Wu
- Guangxi Key Laboratory of Special Biomedicine, School of Medicine, Guangxi University, Nanning, Guangxi, P.R. China
| | - Bo Hu
- Department of Laboratory Medicine, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, P.R. China
| |
47
Demirbaş KC, Yıldız M, Saygılı S, Canpolat N, Kasapçopur Ö. Artificial Intelligence in Pediatrics: Learning to Walk Together. Turk Arch Pediatr 2024; 59:121-130. [PMID: 38454219 PMCID: PMC11059951 DOI: 10.5152/turkarchpediatr.2024.24002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2024] [Accepted: 02/02/2024] [Indexed: 03/09/2024]
Abstract
In this era of rapidly advancing technology, artificial intelligence (AI) has emerged as a transformative force, even being called part of the Fourth Industrial Revolution alongside gene editing and robotics. While it has undoubtedly become an increasingly important part of our daily lives, it must be recognized that it is not merely another tool but a complex concept that poses a variety of challenges. AI, with considerable potential, has found its place in both medical care and clinical research. Within the vast field of pediatrics, it stands out as a particularly promising advancement. As pediatricians, we are witnessing the impactful integration of AI-based applications into our daily clinical practice and research efforts. These tools are being used for tasks ranging from the simple to the complex, such as diagnosing clinically challenging conditions, predicting disease outcomes, creating treatment plans, educating patients and healthcare professionals, and generating accurate medical records or scientific papers. In conclusion, the multifaceted applications of AI in pediatrics will increase efficiency and improve the quality of healthcare and research. However, certain risks and threats accompany this advancement, including biases that may contribute to health disparities, and inaccuracies. It is therefore crucial to recognize and address the technical, ethical, and legal challenges while exploring the benefits in both clinical and research fields.
Affiliation(s)
- Kaan Can Demirbaş
- İstanbul University-Cerrahpaşa, Cerrahpaşa Faculty of Medicine, İstanbul, Turkey
| | - Mehmet Yıldız
- Department of Pediatric Rheumatology, İstanbul University-Cerrahpaşa, Cerrahpaşa Faculty of Medicine, İstanbul, Turkey
| | - Seha Saygılı
- Department of Pediatric Nephrology, İstanbul University-Cerrahpaşa, Cerrahpaşa Faculty of Medicine, İstanbul, Turkey
| | - Nur Canpolat
- Department of Pediatric Nephrology, İstanbul University-Cerrahpaşa, Cerrahpaşa Faculty of Medicine, İstanbul, Turkey
| | - Özgür Kasapçopur
- Department of Pediatric Rheumatology, İstanbul University-Cerrahpaşa, Cerrahpaşa Faculty of Medicine, İstanbul, Turkey
| |
48
Gomes RFT, Schmith J, de Figueiredo RM, Freitas SA, Machado GN, Romanini J, Almeida JD, Pereira CT, Rodrigues JDA, Carrard VC. Convolutional neural network misclassification analysis in oral lesions: an error evaluation criterion by image characteristics. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 137:243-252. [PMID: 38161085 DOI: 10.1016/j.oooo.2023.10.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2023] [Revised: 10/02/2023] [Accepted: 10/04/2023] [Indexed: 01/03/2024]
Abstract
OBJECTIVE This retrospective study analyzed the errors generated by a convolutional neural network (CNN) performing automated classification of oral lesions according to their clinical characteristics, seeking to identify patterns of systematic error in the intermediate layers of the CNN. STUDY DESIGN A cross-sectional analysis nested in a previous trial in which a CNN model performed automated classification of elementary lesions from clinical images of oral lesions. The resulting CNN classification errors formed the dataset for this study. A total of 116 real outputs diverged from the estimated outputs, representing 7.6% of the total images analyzed by the CNN. RESULTS The discrepancies between the real and estimated outputs were associated with problems of image sharpness, resolution, and focus; human errors; and the impact of data augmentation. CONCLUSIONS Qualitative analysis of errors in the automated classification of clinical images confirmed the impact of image quality and identified the strong influence of the data augmentation process. Knowledge of the factors that models evaluate to make decisions can increase confidence in the high classification potential of CNNs.
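The error-harvesting step underlying this analysis (comparing the CNN's estimated outputs with the real outputs and keeping the divergent cases for qualitative review) is straightforward to sketch; the arrays below are placeholders sized to match the reported 7.6% error rate:

```python
# Collect misclassified cases for qualitative error analysis.
import numpy as np

rng = np.random.default_rng(4)
y_true = rng.integers(0, 4, size=1526)   # elementary lesion classes (placeholder)
y_pred = y_true.copy()
flip = rng.random(1526) < 0.076          # ~7.6% error rate, as reported
y_pred[flip] = (y_pred[flip] + 1) % 4    # corrupt the flipped predictions

errors = np.flatnonzero(y_pred != y_true)
print(len(errors), "misclassified images to inspect")  # ~116
```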
Affiliation(s)
- Rita Fabiane Teixeira Gomes
- Department of Oral Pathology, Faculdade de Odontologia-Federal University of Rio Grande do Sul-UFRGS, Porto Alegre, Brazil.
| | - Jean Schmith
- Polytechnic School, University of Vale do Rio dos Sinos-UNISINOS, São Leopoldo, Brazil; Technology in Automation and Electronics Laboratory-TECAE Lab, University of Vale do Rio dos Sinos-UNISINOS, São Leopoldo, Brazil
| | - Rodrigo Marques de Figueiredo
- Polytechnic School, University of Vale do Rio dos Sinos-UNISINOS, São Leopoldo, Brazil; Technology in Automation and Electronics Laboratory-TECAE Lab, University of Vale do Rio dos Sinos-UNISINOS, São Leopoldo, Brazil
| | - Samuel Armbrust Freitas
- Department of Applied Computing, University of Vale do Rio dos Sinos-UNISINOS, São Leopoldo, Brazil
| | | | - Juliana Romanini
- Oral Medicine, Otorhynolaringology Service, Hospital de Clínicas de Porto Alegre (HCPA), Porto Alegre, Rio Grande do Sul, Brazil
| | - Janete Dias Almeida
- Department of Biosciences and Oral Diagnostics, São Paulo State University, Campus São José dos Campos, São Paulo, Brazil
| | | | - Jonas de Almeida Rodrigues
- Department of Surgery and Orthopaedics, Faculdade de Odontologia-Federal University of Rio Grande do Sul-UFRGS, Porto Alegre, Brazil
| | - Vinicius Coelho Carrard
- Department of Oral Pathology, Faculdade de Odontologia-Federal University of Rio Grande do Sul-UFRGS, Porto Alegre, Brazil; TelessaudeRS-UFRGS, Federal University of Rio Grande do Sul, Porto Alegre, Rio Grande do Sul, Brazil; Oral Medicine, Otorhynolaringology Service, Hospital de Clínicas de Porto Alegre (HCPA), Porto Alegre, Rio Grande do Sul, Brazil
| |
49
Vilela MAP, Arrigo A, Parodi MB, da Silva Mengue C. Smartphone Eye Examination: Artificial Intelligence and Telemedicine. Telemed J E Health 2024; 30:341-353. [PMID: 37585566 DOI: 10.1089/tmj.2023.0041] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/18/2023] Open
Abstract
Background: The current medical scenario is closely linked to recent progress in telecommunications, photodocumentation, and artificial intelligence (AI). Smartphone eye examination may represent a promising tool in this technological spectrum, of special interest for primary health care services. Obtaining fundus images with this technique has improved and democratized the teaching of fundoscopy and, in particular, contributes greatly to screening for diseases with high rates of blindness. Eye examination using smartphones is essentially a cheap and safe method, thus contributing to public policies on population screening. This review aims to provide an update on the use of this resource and its future prospects, especially as a screening and ophthalmic diagnostic tool. Methods: We surveyed major published advances in retinal and anterior segment analysis using AI. We performed an electronic search of the Medical Literature Analysis and Retrieval System Online (MEDLINE), EMBASE, and the Cochrane Library for published literature without a date limit. We included studies that compared the diagnostic accuracy of smartphone ophthalmoscopy for detecting prevalent diseases against an accurate or commonly employed reference standard. Results: Few databases have complete metadata providing demographic data, and few contain sufficient images involving current or new therapies. It should be borne in mind that these databases contain images captured using different systems and formats, with information often excluded without essential detail on the reasons for exclusion, further distancing them from real-life conditions. The safety, portability, low cost, and reproducibility of smartphone eye images are discussed in several studies, with encouraging results. Conclusions: The high level of agreement between conventional and smartphone-based methods provides a powerful arsenal for screening and early diagnosis of the main causes of blindness, such as cataract, glaucoma, diabetic retinopathy, and age-related macular degeneration. In addition to streamlining the medical workflow and benefiting public health policies, smartphone eye examination can make safe, high-quality assessment available to the population.
Collapse
Affiliation(s)
| | - Alessandro Arrigo
- Department of Ophthalmology, Scientific Institute San Raffaele, Milan, Italy
- University Vita-Salute, Milan, Italy
| | - Maurizio Battaglia Parodi
- Department of Ophthalmology, Scientific Institute San Raffaele, Milan, Italy
- University Vita-Salute, Milan, Italy
| | - Carolina da Silva Mengue
- Post-Graduation Ophthalmological School, Ivo Corrêa-Meyer/Cardiology Institute, Porto Alegre, Brazil
| |
Collapse
|
50
|
Ong KTI, Kwon T, Jang H, Kim M, Lee CS, Byeon SH, Kim SS, Yeo J, Choi EY. Multitask Deep Learning for Joint Detection of Necrotizing Viral and Noninfectious Retinitis From Common Blood and Serology Test Data. Invest Ophthalmol Vis Sci 2024; 65:5. [PMID: 38306107 PMCID: PMC10851173 DOI: 10.1167/iovs.65.2.5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2023] [Accepted: 01/09/2024] [Indexed: 02/03/2024] Open
Abstract
Purpose Necrotizing viral retinitis is a serious eye infection that requires immediate treatment to prevent permanent vision loss. Uncertain clinical suspicion can result in delayed diagnosis, inappropriate administration of corticosteroids, or repeated intraocular sampling. To quickly and accurately distinguish between viral and noninfectious retinitis, we aimed to develop deep learning (DL) models using only noninvasive blood test data. Methods This cross-sectional study trained DL models on common blood and serology test data from 3080 patients (noninfectious uveitis of the posterior segment [NIU-PS] = 2858, acute retinal necrosis [ARN] = 66, cytomegalovirus [CMV] retinitis = 156). After developing separate base DL models for ARN and CMV retinitis, multitask learning (MTL) was employed to enable simultaneous discrimination. Advanced MTL models incorporating adversarial training were used to enhance DL feature extraction from the small, imbalanced data. We evaluated model performance, disease-specific important features, and the causal relationship between DL features and detection results. Results All of the presented models achieved excellent detection performance, with the adversarial MTL model achieving the highest areas under the receiver operating characteristic curve (0.932 for ARN and 0.982 for CMV retinitis). Significant features for ARN detection included varicella-zoster virus (VZV) immunoglobulin M (IgM), herpes simplex virus immunoglobulin G, and neutrophil count, while for CMV retinitis they encompassed VZV IgM, CMV IgM, and lymphocyte count. The adversarial MTL model exhibited substantial changes in detection outcomes when the key features were contaminated, indicating stronger causality between DL features and detection results. Conclusions The adversarial MTL model, using blood test data, may serve as a reliable adjunct for the expedited simultaneous diagnosis of ARN, CMV retinitis, and NIU-PS in real clinical settings.
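To make the multitask setup concrete, here is a minimal sketch of a shared-encoder network with one binary head per disease, trained jointly on tabular blood-test features. The FGSM-style input perturbation stands in for the paper's adversarial training component (an assumption); layer sizes, hyperparameters, and all names are illustrative, not the authors' exact model.

```python
# Minimal sketch, assuming PyTorch: shared encoder over blood/serology
# features, one binary head per task, joint training on clean and
# adversarially perturbed inputs (FGSM-style perturbation is an assumption).
import torch
import torch.nn as nn

class MultitaskNet(nn.Module):
    """Shared feature extractor with separate ARN and CMV-retinitis heads."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.arn_head = nn.Linear(hidden, 1)  # ARN vs. NIU-PS
        self.cmv_head = nn.Linear(hidden, 1)  # CMV retinitis vs. NIU-PS

    def forward(self, x):
        z = self.encoder(x)
        return self.arn_head(z).squeeze(-1), self.cmv_head(z).squeeze(-1)

def train_step(model, optimizer, x, y_arn, y_cmv, eps=0.01):
    """One joint update on clean and adversarially perturbed inputs."""
    bce = nn.BCEWithLogitsLoss()
    x = x.clone().requires_grad_(True)
    arn_logit, cmv_logit = model(x)
    clean_loss = bce(arn_logit, y_arn) + bce(cmv_logit, y_cmv)
    # Perturb the inputs along the sign of the loss gradient (FGSM-style).
    (grad,) = torch.autograd.grad(clean_loss, x, retain_graph=True)
    x_adv = (x + eps * grad.sign()).detach()
    arn_adv, cmv_adv = model(x_adv)
    total = clean_loss + bce(arn_adv, y_arn) + bce(cmv_adv, y_cmv)
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.item()
```

The shared trunk is what lets the rare ARN and CMV retinitis labels (66 and 156 cases) borrow statistical strength from the 2858 NIU-PS samples, which is the usual motivation for MTL on small, imbalanced tabular data.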
Collapse
Affiliation(s)
- Kai Tzu-iunn Ong
- Department of Artificial Intelligence, Yonsei University College of Computing, Seoul, Republic of Korea
| | - Taeyoon Kwon
- Department of Artificial Intelligence, Yonsei University College of Computing, Seoul, Republic of Korea
| | - Harok Jang
- Department of Artificial Intelligence, Yonsei University College of Computing, Seoul, Republic of Korea
| | - Min Kim
- Department of Ophthalmology, Institute of Vision Research, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
| | - Christopher Seungkyu Lee
- Department of Ophthalmology, Institute of Vision Research, Severance Eye Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
| | - Suk Ho Byeon
- Department of Ophthalmology, Institute of Vision Research, Severance Eye Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
| | - Sung Soo Kim
- Department of Ophthalmology, Institute of Vision Research, Severance Eye Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
| | - Jinyoung Yeo
- Department of Artificial Intelligence, Yonsei University College of Computing, Seoul, Republic of Korea
| | - Eun Young Choi
- Department of Ophthalmology, Institute of Vision Research, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
| |
Collapse
|