1
Chen KY, Chan HC, Chan CM. Can Stem Cell Therapy Revolutionize Ocular Disease Treatment? A Critical Review of Preclinical and Clinical Advances. Stem Cell Rev Rep 2025. PMID: 40266467. DOI: 10.1007/s12015-025-10884-x.
Abstract
Stem cell therapy offers considerable scope in regenerative medicine for treating ocular diseases, aiming to repair damaged tissue and restore vision. The present review focuses on advancements in stem cell therapies for ocular disorders, their mechanisms of action, and clinical applications, while addressing outstanding challenges. Stem cells, including embryonic stem cells (ESCs), induced pluripotent stem cells (iPSCs), mesenchymal stem cells (MSCs), and retinal progenitor cells, have regenerative potential for ocular repair: they differentiate into specialized ocular cell types, confer neuroprotection, and modulate immune responses. Preclinical and clinical studies indicate that stem cell therapy can treat corneal disorders such as limbal stem cell deficiency, retinal diseases such as dry age-related macular degeneration and retinitis pigmentosa, and diabetic retinopathy. Several studies suggest that stem cells also hold promise in glaucoma treatment by supporting retinal ganglion cell survival and optic nerve regeneration. Advanced approaches such as gene editing, organoid generation, and artificial intelligence further enhance these therapies. Effective delivery to target areas, engraftment, orientation, and long-term survival of transplanted cells still need optimization, and issues such as immune rejection and tumorigenicity must be addressed. Progress is further hindered by regulatory hurdles and overly complicated approval processes and trials, while ethical issues related to sourcing embryonic stem cells and patient consent add complexity. The cost of manufacturing stem cells and their limited accessibility pose additional barriers to widespread application. These regulatory, ethical, and economic issues must be tackled if stem cell treatments are to be made safe, accessible, and effective. Future studies will focus on refining therapeutic protocols, scaling manufacturing processes, and overcoming socio-economic barriers, ultimately improving clinical outcomes.
Affiliation(s)
- Kai-Yang Chen: School of Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan
- Hoi-Chun Chan: School of Pharmacy, China Medical University, Taichung, Taiwan
- Chi-Ming Chan: Department of Ophthalmology, Cardinal Tien Hospital, New Taipei City, Taiwan; School of Medicine, Fu Jen Catholic University, New Taipei City, Taiwan
2
Cromack SC, Lew AM, Bazzetta SE, Xu S, Walter JR. The perception of artificial intelligence and infertility care among patients undergoing fertility treatment. J Assist Reprod Genet 2025; 42:855-863. PMID: 39776390. PMCID: PMC11950478. DOI: 10.1007/s10815-024-03382-5.
Abstract
PURPOSE To characterize the opinions of patients undergoing infertility treatment on the use of artificial intelligence (AI) in their care. METHODS Patients planning or undergoing in vitro fertilization (IVF) or frozen embryo transfers were invited to complete an anonymous electronic survey from April to June 2024. The survey collected demographics, technological affinity, general perceptions of AI, and its applications to fertility care. Patient-reported trust in AI compared with a physician for fertility care (e.g., gamete selection, gonadotropin dosing, and stimulation length) was analyzed. Descriptive statistics were calculated, and subgroup analyses by age, occupation, and parity were performed. Chi-squared tests were used to compare categorical variables. RESULTS A total of 200 patients completed the survey; they were primarily female (n = 193/200) and of reproductive age (mean 37 years), well educated, and of high technological affinity. Respondents were familiar with AI (93%) and generally supported its use in medicine (55%), but fewer trusted AI-informed reproductive care (46%). More patients disagreed (37%) that AI should be used to determine gonadotropin dose or stimulation length compared with embryo selection (26.5%; p = 0.01). When physician and AI recommendations conflicted, patients preferred the physician's recommendation in all treatment-related decisions, although a larger proportion favored AI recommendations for gamete (22%) and embryo (14.5%) selection than for gonadotropin dosing (6.5%) or stimulation length (7.0%). Most would not be willing to pay more for AI-informed fertility care. CONCLUSIONS In this highly educated infertile population familiar with AI, patients still preferred physician-based recommendations over AI.
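The subgroup comparison above (37% vs. 26.5% disagreement among 200 respondents) can be sketched as a chi-squared test on a 2×2 contingency table. This is a minimal illustration with counts reconstructed from the reported percentages; the study's actual table and test details may differ.

```python
from scipy.stats import chi2_contingency

n = 200
# Hypothetical counts reconstructed from the reported percentages:
# 37% disagreed with AI-determined gonadotropin dose/stimulation length,
# 26.5% disagreed with AI-driven embryo selection.
disagree_dosing = round(0.37 * n)   # 74
disagree_embryo = round(0.265 * n)  # 53

table = [
    [disagree_dosing, n - disagree_dosing],
    [disagree_embryo, n - disagree_embryo],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}")
```

Because the same 200 patients answered both questions, a paired test such as McNemar's would arguably be more appropriate; the snippet only mirrors the chi-squared analysis the abstract names.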
Affiliation(s)
- Sarah C Cromack: Department of Obstetrics and Gynecology, Division of Reproductive Endocrinology and Infertility, Northwestern University, 259 E Erie St Suite 2400, Chicago, IL, 60611, USA
- Ashley M Lew: Department of Obstetrics and Gynecology, Division of Reproductive Endocrinology and Infertility, Northwestern University, 259 E Erie St Suite 2400, Chicago, IL, 60611, USA
- Sarah E Bazzetta: Department of Obstetrics and Gynecology, Division of Reproductive Endocrinology and Infertility, Northwestern University, 259 E Erie St Suite 2400, Chicago, IL, 60611, USA
- Shuai Xu: Querrey Simpson Institute for Bioelectronics, Northwestern University, Chicago, IL, USA
- Jessica R Walter: Department of Obstetrics and Gynecology, Division of Reproductive Endocrinology and Infertility, Northwestern University, 259 E Erie St Suite 2400, Chicago, IL, 60611, USA; Querrey Simpson Institute for Bioelectronics, Northwestern University, Chicago, IL, USA
3
Muthukumar KA, Nandi D, Ranjan P, Ramachandran K, Pj S, Ghosh A, M A, Radhakrishnan A, Dhandapani VE, Janardhanan R. Integrating electrocardiogram and fundus images for early detection of cardiovascular diseases. Sci Rep 2025; 15:4390. PMID: 39910082. PMCID: PMC11799439. DOI: 10.1038/s41598-025-87634-z.
Abstract
Cardiovascular diseases (CVD) are a predominant health concern globally, emphasizing the need for advanced diagnostic techniques. In our research, we present a methodology that integrates ECG readings and retinal fundus images to facilitate early disease tagging and triaging of CVDs in order of disease priority. Recognizing the intricate vascular network of the retina as a reflection of the cardiovascular system, along with the dynamic cardiac insights from ECG, we sought to provide a holistic diagnostic perspective. Initially, a Fast Fourier Transform (FFT) was applied to both the ECG signals and fundus images, transforming the data into the frequency domain. Subsequently, the Earth Mover's Distance (EMD) was computed for the frequency-domain features of both modalities. These EMD values were then concatenated, forming a comprehensive feature set that was fed into a neural network classifier. This approach, leveraging the FFT's spectral insights and EMD's capability to capture nuanced data differences, offers a robust representation for CVD classification. Preliminary tests yielded an accuracy of 84%, underscoring the potential of this combined diagnostic strategy. As we continue our research, we anticipate refining and validating the model further to enhance its clinical applicability in resource-limited healthcare ecosystems prevalent across the Indian subcontinent and the world at large.
Affiliation(s)
- K A Muthukumar: University of Petroleum and Energy Studies, Dehradun, Uttarakhand, India
- Dhruva Nandi: Faculty of Medicine and Health Sciences, SRM Medical College Hospital and Research Centre, SRM IST, Kattankulathur, Chengalpattu, Tamil Nadu, India
- Priya Ranjan: University of Petroleum and Energy Studies, Dehradun, Uttarakhand, India
- Krithika Ramachandran: Centre for High Impact Neuroscience and Translational Applications, TCG Crest, Kolkata, West Bengal, India
- Shiny Pj: Faculty of Medicine and Health Sciences, SRM Medical College Hospital and Research Centre, SRM IST, Kattankulathur, Chengalpattu, Tamil Nadu, India
- Anirban Ghosh: Department of Electronics and Communication, SRM University AP, Neerukonda, Andhra Pradesh, India
- Ashwini M: Ashwini Eye Care, Chennai, Tamil Nadu, India
- Aiswaryah Radhakrishnan: Faculty of Medicine and Health Sciences, SRM Medical College Hospital and Research Centre, SRM IST, Kattankulathur, Chengalpattu, Tamil Nadu, India
- Rajiv Janardhanan: Faculty of Medicine and Health Sciences, SRM Medical College Hospital and Research Centre, SRM IST, Kattankulathur, Chengalpattu, Tamil Nadu, India
4
Musetti D, Cutolo CA, Bonetto M, Giacomini M, Maggi D, Viviani GL, Gandin I, Traverso CE, Nicolò M. Autonomous artificial intelligence versus teleophthalmology for diabetic retinopathy. Eur J Ophthalmol 2025; 35:232-238. PMID: 38656241. DOI: 10.1177/11206721241248856.
Abstract
Purpose: To assess the role of artificial intelligence (AI)-based automated software for detection of diabetic retinopathy (DR) compared with the evaluation of digital retinography by two masked retina specialists. Methods: Two hundred one patients (mean age 65 ± 13 years) with type 1 or type 2 diabetes mellitus were included. All patients underwent retinography and spectral-domain optical coherence tomography (SD-OCT, DRI 3D OCT-2000, Topcon) of the macula. The retinal photographs were graded using two validated AI DR screening software packages (EyeArt and IDx-DR) designed to identify more than mild DR. Results: Retinal images of 201 patients were graded. DR (more than mild DR) was detected by the ophthalmologists in 38 (18.9%) patients and by the AI algorithms in 36 patients (with 30 eyes diagnosed by both algorithms). Patients ungradable by the AI software numbered 13 (6.5%) for EyeArt and 16 (8%) for IDx-DR. Both AI software packages showed high sensitivity and specificity for detecting any more than mild DR, without any statistically significant difference between them. Conclusions: The comparison between the diagnoses provided by AI-based automated software and the reference clinical diagnosis showed that the software can work at a level of sensitivity similar to that achieved by experts.
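Agreement between an AI grader and the reference standard is typically summarised by sensitivity and specificity, as in the abstract above. A minimal sketch with made-up binary grades (not the study's data):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical reference (retina specialists) vs AI grades for 10 gradable eyes;
# 1 = more-than-mild DR, 0 = no/mild DR. Values are illustrative only.
reference = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 0])
ai_grade  = np.array([1, 1, 0, 0, 0, 1, 0, 1, 0, 0])

tn, fp, fn, tp = confusion_matrix(reference, ai_grade).ravel()
sensitivity = tp / (tp + fn)  # 3/4 = 0.75
specificity = tn / (tn + fp)  # 5/6 ≈ 0.83
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```

Eyes ungradable by the software (6.5% and 8% here) are usually excluded from, or reported alongside, these metrics, since they change the effective screening yield.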
Affiliation(s)
- Donatella Musetti: Clinica Oculistica DiNOGMI, Università di Genova, Ospedale Policlinico San Martino IRCCS, Genova, Italy
- Carlo Alberto Cutolo: Clinica Oculistica DiNOGMI, Università di Genova, Ospedale Policlinico San Martino IRCCS, Genova, Italy
- Davide Maggi: Clinica Diabetologica, Università di Genova, Ospedale Policlinico San Martino IRCCS, Genova, Italy
- Giorgio Luciano Viviani: Clinica Diabetologica, Università di Genova, Ospedale Policlinico San Martino IRCCS, Genova, Italy
- Ilaria Gandin: Biostatistics Unit, University of Trieste, Italy
- Carlo Enrico Traverso: Clinica Oculistica DiNOGMI, Università di Genova, Ospedale Policlinico San Martino IRCCS, Genova, Italy
- Massimo Nicolò: Clinica Oculistica DiNOGMI, Università di Genova, Ospedale Policlinico San Martino IRCCS, Genova, Italy; Fondazione per la Macula onlus, Genova, Italy
5
Banna HU, Slayo M, Armitage JA, Del Rosal B, Vocale L, Spencer SJ. Imaging the eye as a window to brain health: frontier approaches and future directions. J Neuroinflammation 2024; 21:309. PMID: 39614308. PMCID: PMC11606158. DOI: 10.1186/s12974-024-03304-3.
Abstract
Recent years have seen significant advances in diagnostic testing of central nervous system (CNS) function and disease. However, there remain challenges in developing a comprehensive suite of non- or minimally invasive assays of neural health and disease progression. Due to the direct connection with the CNS, structural changes in the neural retina, retinal vasculature and morphological changes in retinal immune cells can occur in parallel with disease conditions in the brain. The retina can also, uniquely, be assessed directly and non-invasively. For these reasons, the retina may prove to be an important "window" for revealing and understanding brain disease. In this review, we discuss the gross anatomy of the eye, focusing on the sensory and non-sensory cells of the retina, especially microglia, that lend themselves to diagnosing brain disease by imaging the retina. We include a history of ocular imaging to describe the different imaging approaches undertaken in the past and outline current and emerging technologies including retinal autofluorescence imaging, Raman spectroscopy, and artificial intelligence image analysis. These new technologies show promising potential for retinal imaging to be used as a tool for the diagnosis of brain disorders such as Alzheimer's disease and others and the assessment of treatment success.
Affiliation(s)
- Hasan U Banna: School of Health and Biomedical Sciences, RMIT University, Bundoora, Melbourne, VIC, Australia
- Mary Slayo: School of Health and Biomedical Sciences, RMIT University, Bundoora, Melbourne, VIC, Australia; Institute of Veterinary Physiology and Biochemistry, Justus Liebig University, Giessen, Germany
- James A Armitage: School of Medicine (Optometry), Deakin University, Waurn Ponds, VIC, Australia
- Loretta Vocale: School of Health and Biomedical Sciences, RMIT University, Bundoora, Melbourne, VIC, Australia
- Sarah J Spencer: School of Health and Biomedical Sciences, RMIT University, Bundoora, Melbourne, VIC, Australia
6
O'Shaughnessy E, Senicourt L, Mambour N, Savatovsky J, Duron L, Lecler A. Toward Precision Diagnosis: Machine Learning in Identifying Malignant Orbital Tumors With Multiparametric 3 T MRI. Invest Radiol 2024; 59:737-745. PMID: 38597586. DOI: 10.1097/rli.0000000000001076.
Abstract
BACKGROUND Orbital tumors present a diagnostic challenge due to their varied locations and histopathological differences. Although recent advancements in imaging have improved diagnosis, classification remains a challenge. The integration of artificial intelligence in radiology and ophthalmology has demonstrated promising outcomes. PURPOSE This study aimed to evaluate the performance of machine learning models in accurately distinguishing malignant orbital tumors from benign ones using multiparametric 3 T magnetic resonance imaging (MRI) data. MATERIALS AND METHODS In this single-center prospective study, patients with orbital masses underwent presurgery 3 T MRI scans between December 2015 and May 2021. The MRI protocol comprised multiparametric imaging, including dynamic contrast-enhanced (DCE), diffusion-weighted imaging (DWI), and intravoxel incoherent motion (IVIM) acquisitions, as well as morphological imaging. A repeated nested cross-validation strategy using random forest classifiers was used for model training and evaluation, considering 8 combinations of explanatory features. Shapley additive explanations (SHAP) values were used to assess feature contributions, and model performance was evaluated using multiple metrics. RESULTS One hundred thirteen patients were analyzed (57/113 [50.4%] were women; average age was 51.5 ± 17.5 years, range: 19-88 years). Among the 8 combinations of explanatory features assessed, the most comprehensive model, incorporating all 46 explanatory features (morphology, DWI, DCE, and IVIM), achieved an area under the curve of 0.90 [0.73-0.99] for predicting malignancy. The streamlined "10-feature signature" model reached an area under the curve of 0.88 [0.71-0.99]. Random forest feature importance, measured by mean SHAP values, pinpointed the 10 most impactful features: 3 quantitative IVIM features, 4 quantitative DCE features, 1 quantitative DWI feature, 1 qualitative DWI feature, and age. CONCLUSIONS Our findings demonstrate that a machine learning approach integrating multiparametric MRI data such as DCE, DWI, IVIM, and morphological imaging offers high-performing models for differentiating malignant from benign orbital tumors. The streamlined 10-feature signature, with performance close to the comprehensive model, may be more suitable for clinical application.
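The repeated nested cross-validation described above can be sketched with scikit-learn: an inner grid search tunes the random forest, and an outer repeated stratified split estimates performance. Synthetic data stands in for the 113-patient, 46-feature cohort; the authors' actual hyperparameter grid, repeat counts, and SHAP-based importance analysis are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold, cross_val_score

# Stand-in cohort mirroring only the study's dimensions: 113 samples, 46 features.
X, y = make_classification(n_samples=113, n_features=46, n_informative=10,
                           random_state=0)

# Inner loop: hyperparameter tuning (hypothetical grid).
inner = GridSearchCV(
    RandomForestClassifier(n_estimators=100, random_state=0),
    param_grid={"max_depth": [3, None]},
    cv=3, scoring="roc_auc",
)

# Outer loop: repeated stratified folds give the performance estimate,
# so tuning never sees the fold it is scored on.
outer = RepeatedStratifiedKFold(n_splits=5, n_repeats=2, random_state=0)
aucs = cross_val_score(inner, X, y, cv=outer, scoring="roc_auc")
print(f"nested-CV AUC: {aucs.mean():.2f} +/- {aucs.std():.2f}")
```

SHAP values (e.g., via a tree explainer on the fitted forests) would then rank per-feature contributions, as the study reports for its 10-feature signature.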
Affiliation(s)
- Emma O'Shaughnessy: Department of Neuroradiology, Rothschild Foundation Hospital, Paris, France (E.O.S., J.S., L.D., A.L.); Department of Data Science, Rothschild Foundation Hospital, Paris, France (L.S.); Department of Ophthalmology, Rothschild Foundation Hospital, Paris, France (N.M.)
7
Sirocchi C, Bogliolo A, Montagna S. Medical-informed machine learning: integrating prior knowledge into medical decision systems. BMC Med Inform Decis Mak 2024; 24:186. PMID: 38943085. PMCID: PMC11212227. DOI: 10.1186/s12911-024-02582-4.
Abstract
BACKGROUND Clinical medicine offers a promising arena for applying Machine Learning (ML) models. However, despite numerous studies employing ML in medical data analysis, only a fraction have impacted clinical care. This article underscores the importance of ML in medical data analysis while recognising that ML alone may not adequately capture the full complexity of clinical data, and therefore advocates integrating medical domain knowledge into ML. METHODS The study conducts a comprehensive review of prior efforts to integrate medical knowledge into ML and maps these integration strategies onto the phases of the ML pipeline: data pre-processing, feature engineering, model training, and output evaluation. The study further explores the significance and impact of such integration through a case study on diabetes prediction, in which clinical knowledge, encompassing rules, causal networks, intervals, and formulas, is integrated at each stage of the ML pipeline, resulting in a spectrum of integrated models. RESULTS The findings highlight the benefits of integration in terms of accuracy, interpretability, data efficiency, and adherence to clinical guidelines. In several cases, integrated models outperformed purely data-driven approaches, underscoring the potential for domain knowledge to enhance ML models through improved generalisation. In other cases, the integration was instrumental in enhancing model interpretability and ensuring conformity with established clinical guidelines. Notably, knowledge integration also proved effective in maintaining performance under limited data scenarios. CONCLUSIONS By illustrating various integration strategies through a clinical case study, this work provides guidance to inspire and facilitate future integration efforts. Furthermore, the study identifies refining domain knowledge representation and fine-tuning its contribution to the ML model as the two main challenges to integration, and aims to stimulate further research in this direction.
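One of the integration strategies surveyed, injecting clinical rules at the feature-engineering stage, can be illustrated as below. The thresholds encode standard clinical cut-offs (the ADA fasting-glucose criterion for diabetes, the WHO BMI cut-off for obesity); the column layout and feature choice are illustrative, not the paper's dataset.

```python
import numpy as np

def add_clinical_rule_features(X, glucose_col=0, bmi_col=1):
    """Append binary features that make clinical threshold rules explicit,
    so the model does not have to rediscover them from limited data."""
    hyperglycemia = (X[:, glucose_col] >= 126).astype(float)  # ADA: fasting glucose >= 126 mg/dL
    obesity = (X[:, bmi_col] >= 30).astype(float)             # WHO: BMI >= 30 kg/m^2
    return np.column_stack([X, hyperglycemia, obesity])

# Two illustrative patients: (fasting glucose in mg/dL, BMI).
X = np.array([[140.0, 32.0],
              [ 95.0, 24.0]])
X_aug = add_clinical_rule_features(X)
print(X_aug)  # first row gains [1., 1.], second gains [0., 0.]
```

Downstream, any classifier trained on `X_aug` receives the guideline thresholds as explicit inputs, which is one route to the improved data efficiency and guideline conformity the abstract describes.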
Affiliation(s)
- Christel Sirocchi: Department of Pure and Applied Sciences, University of Urbino, Piazza della Repubblica, 13, Urbino, 61029, Italy
- Alessandro Bogliolo: Department of Pure and Applied Sciences, University of Urbino, Piazza della Repubblica, 13, Urbino, 61029, Italy
- Sara Montagna: Department of Pure and Applied Sciences, University of Urbino, Piazza della Repubblica, 13, Urbino, 61029, Italy
8
Iorga RE, Costin D, Munteanu-Dănulescu RS, Rezuș E, Moraru AD. Non-Invasive Retinal Vessel Analysis as a Predictor for Cardiovascular Disease. J Pers Med 2024; 14:501. PMID: 38793083. PMCID: PMC11122007. DOI: 10.3390/jpm14050501.
Abstract
Cardiovascular disease (CVD) is the most frequent cause of death worldwide, and alterations in the microcirculation may predict cardiovascular mortality. The retinal vasculature can be used as a model to study vascular alterations associated with cardiovascular disease. To quantify microvascular changes non-invasively, fundus images can be taken and analysed. The central retinal arteriolar equivalent (CRAE), the central retinal venular equivalent (CRVE), and the arteriolar-to-venular diameter ratio (AVR) can be used as biomarkers to predict cardiovascular mortality: a narrower CRAE, wider CRVE, and lower AVR have been associated with increased cardiovascular events. Dynamic retinal vessel analysis (DRVA) allows the quantification of retinal changes using digital image sequences in response to visual stimulation with flicker light. This article is not just a review of the current literature; it also discusses methodological benefits and identifies research gaps, highlighting the potential use of microvascular biomarkers for screening and treatment monitoring in cardiovascular disease. Artificial intelligence (AI) tools, such as Quantitative Analysis of Retinal Vessel Topology and Size (QUARTZ) and the SIVA deep-learning system (SIVA-DLS), are efficient at extracting information from fundus photographs and can increase diagnostic accuracy and improve patient care by complementing the role of physicians. Retinal vascular imaging using AI may help identify cardiovascular risk and is an important tool in primary cardiovascular disease prevention. Further research should explore the potential clinical application of retinal microvascular biomarkers to assess systemic vascular health status and predict cardiovascular events.
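The AVR biomarker above is simply the ratio of the two summary diameter equivalents. A small sketch (the 150/225 µm inputs are illustrative values, not normative data):

```python
def arteriolar_venular_ratio(crae_um: float, crve_um: float) -> float:
    """AVR = CRAE / CRVE. Lower values (narrower arterioles and/or wider
    venules) have been associated with increased cardiovascular events."""
    if crae_um <= 0 or crve_um <= 0:
        raise ValueError("vessel diameter equivalents must be positive")
    return crae_um / crve_um

avr = arteriolar_venular_ratio(crae_um=150.0, crve_um=225.0)
print(f"AVR = {avr:.2f}")  # 0.67
```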
Affiliation(s)
- Raluca Eugenia Iorga: Department of Surgery II, Discipline of Ophthalmology, “Grigore T. Popa” University of Medicine and Pharmacy, Strada Universitatii No. 16, 700115 Iași, Romania
- Damiana Costin: Doctoral School, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iași, Romania
- Elena Rezuș: Department of Internal Medicine II, Discipline of Rheumatology, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iași, Romania
- Andreea Dana Moraru: Department of Surgery II, Discipline of Ophthalmology, “Grigore T. Popa” University of Medicine and Pharmacy, Strada Universitatii No. 16, 700115 Iași, Romania
9
Chandrakanth P, Akkara JD, Joshi SM, Gosalia H, Chandrakanth KS, Narendran V. The Slitscope. Indian J Ophthalmol 2024; 72:741-744. PMID: 38189430. PMCID: PMC11168557. DOI: 10.4103/ijo.ijo_1589_23.
Abstract
The slit lamp biomicroscope is the right hand of the ophthalmologist. Though precise, its bulky design and complex working process are limiting constraints, making it difficult to use for screening at outreach camps, which are an integral part of this field's effort to eliminate needless blindness. The torchlight is the main tool used for screening. Recently, the integration of smartphones with instruments and the digitization of the slit lamp have been explored to provide simple and easy hacks. By bringing the slit of the slit lamp to the traditional torchlight, we have created "The Slitscope". It combines the best of both worlds as a simple, innovative, do-it-yourself technique for precise cataract screening. It is especially useful in peripheral centers, vision centers, and outreach camps. We present two prototypes, which can also be 3D printed.
Affiliation(s)
- Prithvi Chandrakanth: Department of Vitreoretinal Services, Aravind Eye Hospital, Coimbatore, Tamil Nadu, India
- John Davis Akkara: Department of Glaucoma Services, Chaitanya Eye Hospital and Westend Eye Hospital, Kochi, Kerala, India
- Saloni M Joshi: Department of General Ophthalmology, Aravind Eye Hospital, Pondicherry, India
- Hirika Gosalia: Department of General Ophthalmology, Aravind Eye Hospital, Pondicherry, India
- K S Chandrakanth: Chief Medical Officer, General Ophthalmology, Dr. Chandrakanth Nethralaya, Kozhikode, Kerala, India
- V Narendran: Department of Vitreoretinal Services, Aravind Eye Hospital, Coimbatore, Tamil Nadu, India
10
Lu Y, Armstrong GW. Prognostic Factors for Visual Outcomes in Open Globe Injury. Int Ophthalmol Clin 2024; 64:175-185. PMID: 38525990. DOI: 10.1097/iio.0000000000000496.
11
Li A, Tandon AK, Sun G, Dinkin MJ, Oliveira C. Early Detection of Optic Nerve Changes on Optical Coherence Tomography Using Deep Learning for Risk-Stratification of Papilledema and Glaucoma. J Neuroophthalmol 2024; 44:47-52. PMID: 37494177. DOI: 10.1097/wno.0000000000001945.
Abstract
BACKGROUND The use of artificial intelligence is becoming more prevalent in medicine, with numerous successful examples in ophthalmology. However, much of the work has focused on replicating what ophthalmologists already do. Given the analytical potential of artificial intelligence, it is plausible that it can detect microfeatures not readily distinguished by humans. In this study, we tested the potential for artificial intelligence to detect early optical coherence tomography changes that predict progression toward papilledema or glaucoma when no significant changes are detected on optical coherence tomography by clinicians. METHODS Prediagnostic optical coherence tomography scans of patients who developed papilledema (n = 93, eyes = 166) and glaucoma (n = 187, eyes = 327) were collected. Given the discrepancy in average cup-to-disc ratios of the experimental groups, control groups for papilledema (n = 254, eyes = 379) and glaucoma (n = 441, eyes = 739) were matched by cup-to-disc ratio. The publicly available VGG-19 model was retrained on each experimental group and its respective control group to predict progression to papilledema or glaucoma. Images used for training included the retinal nerve fiber layer thickness map, extracted vertical tomogram, ganglion cell thickness map, and ILM-RPE thickness map. RESULTS The trained model predicted progression to papilledema with a precision of 0.714 and a recall of 0.769 when trained with the retinal nerve fiber layer thickness map, but not with other image types. The model predicted progression to glaucoma with a precision of 0.682 and a recall of 0.857 when trained with the extracted vertical tomogram, but not with other image types. Areas under the precision-recall curve of 0.826 and 0.785 were achieved for the papilledema and glaucoma models, respectively. CONCLUSIONS Our proof-of-concept study showed that artificial intelligence (AI) algorithms have the potential to detect early changes on optical coherence tomography, not readily observed by clinicians, to predict progression. Further research may help establish AI models that can assist with early diagnosis or risk stratification in ophthalmology.
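The reported metrics follow directly from confusion-matrix counts. A sketch with hypothetical counts chosen only to land near the paper's papilledema figures (the abstract does not give the true counts):

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision = TP / (TP + FP); recall (sensitivity) = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical counts: 10 true positives, 4 false positives, 3 false negatives.
p, r = precision_recall(tp=10, fp=4, fn=3)
print(f"precision={p:.3f}, recall={r:.3f}")  # precision=0.714, recall=0.769
```

The area under the precision-recall curve reported for each model summarises these two quantities across all classification thresholds rather than at a single operating point.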
Affiliation(s)
- Anfei Li: Department of Ophthalmology, New York Presbyterian Hospital, New York, New York (AL); Department of Ophthalmology, Weill Cornell Medicine, New York, New York (AKT, GS, MJD, CO)
12
Lapka M, Straňák Z. The Current State of Artificial Intelligence in Neuro-Ophthalmology: A Review. Cesk Slov Oftalmol 2024; 80:179-186. PMID: 38538291. DOI: 10.31348/2023/33.
Abstract
This article summarises recent advances in the development and use of complex systems employing artificial intelligence (AI) in neuro-ophthalmology. The aim is to present the principles of AI and the algorithms currently in use, or still under evaluation or validation, within the neuro-ophthalmology environment. For the purpose of this text, a literature search was conducted using specific keywords in available scientific databases, cumulatively up to April 2023. The AI systems developed across neuro-ophthalmology mostly achieve high sensitivity, specificity, and accuracy. Individual AI systems and algorithms are selected, briefly described, and compared in the article. The results of the individual studies differ significantly depending on the chosen methodology, the set goals, the size of the test and evaluation sets, and the evaluated parameters. AI has been shown to greatly speed up the evaluation of various diseases and should make diagnosis more efficient in the future, showing high potential as a useful tool in clinical practice even as patient numbers increase significantly.
13
Oganov AC, Seddon I, Jabbehdari S, Uner OE, Fonoudi H, Yazdanpanah G, Outani O, Arevalo JF. Artificial intelligence in retinal image analysis: Development, advances, and challenges. Surv Ophthalmol 2023; 68:905-919. PMID: 37116544. DOI: 10.1016/j.survophthal.2023.04.001.
Abstract
Modern advances in diagnostic technologies offer the potential for unprecedented insight into ophthalmic conditions relating to the retina. We discuss the current landscape of artificial intelligence in retina with respect to screening, diagnosis, and monitoring of retinal pathologies such as diabetic retinopathy, diabetic macular edema, central serous chorioretinopathy, and age-related macular degeneration. We review the methods used in these models, evaluate their performance in both research and clinical contexts, and discuss future directions for investigation, the use of multiple imaging modalities in artificial intelligence algorithms, and challenges in applying artificial intelligence to retinal pathologies.
Affiliation(s)
- Anthony C Oganov: Department of Ophthalmology, Renaissance School of Medicine, Stony Brook, NY, USA
- Ian Seddon: College of Osteopathic Medicine, Nova Southeastern University, Fort Lauderdale, FL, USA
- Sayena Jabbehdari: Jones Eye Institute, University of Arkansas for Medical Sciences, Little Rock, AR, USA
- Ogul E Uner: Casey Eye Institute, Department of Ophthalmology, Oregon Health and Science University, Portland, OR, USA
- Hossein Fonoudi: Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Iranshahr University of Medical Sciences, Iranshahr, Sistan and Baluchestan, Iran
- Ghasem Yazdanpanah: Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, IL, USA
- Oumaima Outani: Faculty of Medicine and Pharmacy of Rabat, Mohammed 5 University, Rabat, Morocco
- J Fernando Arevalo: Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
14
Li T, Stein J, Nallasamy N. Evaluation of the Nallasamy formula: a stacking ensemble machine learning method for refraction prediction in cataract surgery. Br J Ophthalmol 2023;107:1066-1071. [PMID: 35379599] [PMCID: PMC9530066] [DOI: 10.1136/bjophthalmol-2021-320599]
Abstract
AIMS To develop a new intraocular lens power selection method with improved accuracy for general cataract patients receiving Alcon SN60WF lenses. METHODS AND ANALYSIS A total of 5016 patients (6893 eyes) who underwent cataract surgery at University of Michigan's Kellogg Eye Center and received the Alcon SN60WF lens were included in the study. A machine learning-based method was developed using a training dataset of 4013 patients (5890 eyes) and evaluated on a testing dataset of 1003 patients (1003 eyes). The performance of the method was compared with that of Barrett Universal II, Emmetropia Verifying Optical (EVO), Haigis, Hoffer Q, Holladay 1, PearlDGS and SRK/T. RESULTS The mean absolute error (MAE) of the Nallasamy formula in the testing dataset was 0.312 dioptres (D) and the median absolute error (MedAE) was 0.242 D. The performance of the existing methods was as follows: Barrett Universal II MAE=0.328 D, MedAE=0.256 D; EVO MAE=0.322 D, MedAE=0.251 D; Haigis MAE=0.363 D, MedAE=0.289 D; Hoffer Q MAE=0.404 D, MedAE=0.331 D; Holladay 1 MAE=0.371 D, MedAE=0.298 D; PearlDGS MAE=0.329 D, MedAE=0.258 D; SRK/T MAE=0.376 D, MedAE=0.300 D. The Nallasamy formula performed significantly better than the seven existing methods based on the paired Wilcoxon test with Bonferroni correction (p<0.05). CONCLUSIONS The Nallasamy formula (available at https://lenscalc.com/) outperformed the seven other formulas studied on overall MAE, MedAE, and percentage of eyes within 0.5 D of prediction. Clinical significance may be primarily at the population level.
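As a sketch of how the study's headline metrics are computed, the snippet below derives MAE and MedAE from paired predicted and achieved postoperative refractions; the refraction values are illustrative, not data from the paper.

```python
from statistics import mean, median

def refraction_errors(predicted, achieved):
    """Absolute prediction errors in dioptres for a set of eyes."""
    return [abs(p - a) for p, a in zip(predicted, achieved)]

def mae(predicted, achieved):
    """Mean absolute error (MAE) in dioptres."""
    return mean(refraction_errors(predicted, achieved))

def medae(predicted, achieved):
    """Median absolute error (MedAE) in dioptres."""
    return median(refraction_errors(predicted, achieved))

# Illustrative refractions only (not study data)
predicted = [-0.50, -0.25, 0.00, -1.00, 0.25]
achieved  = [-0.75, -0.25, 0.50, -0.75, 0.00]
print(mae(predicted, achieved))    # 0.25
print(medae(predicted, achieved))  # 0.25
```

MedAE is less sensitive to outlier eyes than MAE, which is why formula comparisons such as this one report both.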
Affiliation(s)
- Tingyang Li: Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, Michigan, USA
- Joshua Stein: Kellogg Eye Center, Department of Ophthalmology and Visual Sciences, University of Michigan, Ann Arbor, Michigan, USA; Center for Eye Policy and Innovation, University of Michigan, Ann Arbor, Michigan, USA; Department of Health Management and Policy, University of Michigan School of Public Health, Ann Arbor, Michigan, USA
- Nambi Nallasamy: Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, Michigan, USA; Kellogg Eye Center, Department of Ophthalmology and Visual Sciences, University of Michigan, Ann Arbor, Michigan, USA
15
Li A, Winebrake JP, Kovacs K. Facilitating deep learning through preprocessing of optical coherence tomography images. BMC Ophthalmol 2023;23:158. [PMID: 37069534] [PMCID: PMC10108538] [DOI: 10.1186/s12886-023-02916-2]
Abstract
BACKGROUND While deep learning has delivered promising results in ophthalmology, the barrier to completing a deep learning study is high. In this study, we aim to facilitate small-scale model training by exploring the role of preprocessing in reducing computational burden and accelerating learning. METHODS A small subset of a previously published dataset containing optical coherence tomography images of choroidal neovascularization, drusen, diabetic macular edema, and normal macula was modified using a Fourier transform and bandpass filter, producing high-frequency images, original images, and low-frequency images. Each set of images was trained with the same model, and their performances were compared. RESULTS Compared with the original image dataset, the model trained with the high-frequency image dataset achieved an improved final performance and reached maximum performance much earlier (in fewer epochs). The model trained with low-frequency images did not achieve meaningful performance. CONCLUSION Appropriate preprocessing of training images can accelerate the training process and can potentially facilitate modeling with artificial intelligence when sample size or computational power is limited.
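The preprocessing described, a Fourier transform plus bandpass filter that splits an image into low- and high-frequency versions, can be sketched as below. The cutoff radii and random test image are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def frequency_filter(img, low_cut, high_cut):
    """Keep only spatial-frequency radii in [low_cut, high_cut) of a 2-D image."""
    f = np.fft.fftshift(np.fft.fft2(img))          # spectrum, DC at center
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h // 2, xx - w // 2)    # distance from DC component
    mask = (radius >= low_cut) & (radius < high_cut)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

rng = np.random.default_rng(0)
img = rng.random((64, 64))                          # stand-in for an OCT B-scan
low  = frequency_filter(img, 0, 8)                  # low-frequency version
high = frequency_filter(img, 8, 64)                 # high-frequency version
# The two bands partition the spectrum, so they sum back to the original
print(np.allclose(low + high, img))                 # True
```

Training on the `high` variant is what the authors found accelerated convergence; the low-frequency band alone discards the edge detail the classifier needs.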
Affiliation(s)
- Anfei Li: Department of Ophthalmology, New York Presbyterian Hospital, 1305 York Ave 11th floor, New York, NY, 10021, USA
- James P Winebrake: Department of Ophthalmology, New York Presbyterian Hospital, 1305 York Ave 11th floor, New York, NY, 10021, USA
- Kyle Kovacs: Department of Ophthalmology, Weill Cornell Medicine, 1305 York Ave 11th floor, New York, NY, 10021, USA
16
Islam MT, Khan HA, Naveed K, Nauman A, Gulfam SM, Kim SW. LUVS-Net: A Lightweight U-Net Vessel Segmentor for Retinal Vasculature Detection in Fundus Images. Electronics 2023;12:1786. [DOI: 10.3390/electronics12081786]
Abstract
This paper presents LUVS-Net, a lightweight convolutional network for retinal vessel segmentation in fundus images, designed for resource-constrained devices that typically cannot meet the computational requirements of large neural networks. The computational challenges arise from low-quality retinal images, wide variance in image acquisition conditions, and disparities in intensity, so existing segmentation methods require a multitude of trainable parameters, resulting in computational complexity. The proposed Lightweight U-Net for Vessel Segmentation Network (LUVS-Net) achieves high segmentation performance with only a few trainable parameters. The network uses an encoder–decoder framework in which edge data are transposed from the first layers of the encoder to the last layer of the decoder, markedly improving convergence. Additionally, LUVS-Net's design allows a dual-stream information flow both inside and outside the encoder–decoder pair. The network width is enhanced using group convolutions, which allow the network to learn a larger number of low- and intermediate-level features. Spatial information loss is minimized using skip connections, and class imbalance is mitigated using Dice loss for pixel-wise classification. The performance of the proposed network is evaluated on the publicly available retinal blood vessel datasets DRIVE, CHASE_DB1 and STARE. LUVS-Net proves highly competitive, outperforming alternative state-of-the-art segmentation methods and achieving comparable accuracy with two to three orders of magnitude fewer trainable parameters than comparative state-of-the-art methods.
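The Dice loss used for pixel-wise classification against class imbalance can be sketched as a soft Dice formulation in NumPy; the smoothing term `eps` is an assumption of this sketch, not a published parameter of LUVS-Net.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss for binary segmentation; pred and target in [0, 1]."""
    pred, target = pred.ravel(), target.ravel()
    intersection = (pred * target).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    return 1.0 - dice

# Toy vessel mask: 1 = vessel pixel, 0 = background
mask = np.array([[1, 1, 0], [0, 1, 0]], dtype=float)
print(dice_loss(mask, mask))      # perfect overlap -> ~0 loss
print(dice_loss(mask, 1 - mask))  # no overlap -> ~1 loss
```

Because Dice normalizes the overlap by the total foreground in both masks, the sparse vessel class is not swamped by the much larger background class, unlike plain pixel-wise cross-entropy.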
Affiliation(s)
- Muhammad Talha Islam: Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan
- Haroon Ahmed Khan: Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan
- Khuram Naveed: Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan; Department of Electrical and Computer Engineering, Aarhus University, 8000 Aarhus, Denmark
- Ali Nauman: Department of Information and Communication Engineering, Yeungnam University, Gyeongsan-si 38541, Republic of Korea
- Sardar Muhammad Gulfam: Department of Electrical and Computer Engineering, Abbottabad Campus, COMSATS University Islamabad (CUI), Abbottabad 22060, Pakistan
- Sung Won Kim: Department of Information and Communication Engineering, Yeungnam University, Gyeongsan-si 38541, Republic of Korea
17
Ricur G, Reyes J, Alfonso E, Marino RG. Surfing the COVID-19 Tsunami with Teleophthalmology: the Advent of New Models of Eye Care. Curr Ophthalmol Rep 2023;11:1-12. [PMID: 36743397] [PMCID: PMC9883823] [DOI: 10.1007/s40135-023-00308-9]
Abstract
Purpose of Review In this article, we review the impact of the COVID-19 pandemic on the traditional model of care in ophthalmology. Recent Findings Though virtual eye care has existed for more than 20 years, the COVID-19 pandemic has established a precedent for seriously considering its role in the evolving paradigm of vision and eye care. New hybrid models of care have enhanced or replaced traditional synchronous and asynchronous visits. The increased use of smartphone photography and mobile applications has enhanced the remote examination of patients, and e-learning has become a mainstream tool for continued access to education and training. Summary Teleophthalmology has demonstrated its value for screening, examining, diagnosing, and monitoring treatment, as well as for increasing access to education. However, much of the progress made following the COVID-19 pandemic is at risk of being lost as society pushes to reestablish normalcy. Further studies during the new norm are required to establish a more permanent role for virtual eye care.
Affiliation(s)
- Giselle Ricur: Bascom Palmer Eye Institute, University of Miami, 900 NW 17th St., Miami, FL 33136, USA
- Joshua Reyes: Bascom Palmer Eye Institute, University of Miami, 900 NW 17th St., Miami, FL 33136, USA
- Eduardo Alfonso: Bascom Palmer Eye Institute, University of Miami, 900 NW 17th St., Miami, FL 33136, USA
- Raul Guillermo Marino: Facultad de Ciencias Exactas y Naturales, Universidad Nacional de Cuyo, Mendoza, Argentina
18
Deans AM, Basilious A, Hutnik CM. Assessing the Performance of a Novel Bayesian Algorithm at Point of Care for Red Eye Complaints. Vision (Basel) 2022;6:64. [PMID: 36412645] [PMCID: PMC9680424] [DOI: 10.3390/vision6040064]
Abstract
The current diagnostic aids for red eye are static flowcharts that do not provide dynamic, stepwise workups. The diagnostic accuracy of a novel dynamic Bayesian algorithm for red eye was tested. Fifty-seven patients with red eye were evaluated by an emergency medicine physician who completed a questionnaire about symptoms and findings (without requiring extensive slit-lamp findings). An ophthalmologist then assigned an independent "gold-standard" diagnosis. The algorithm used the questionnaire data to suggest a differential diagnosis. The referrer's diagnostic accuracy was 70.2%, while the algorithm's accuracy was 68.4%, increasing to 75.4% when the algorithm's top two diagnoses were included and 80.7% with the top three included. In urgent cases of red eye (n = 26), referrer diagnostic accuracy was 76.9%, while the algorithm's top diagnosis was 73.1% accurate, increasing to 84.6% (top two included) and 88.5% (top three included). The algorithm's sensitivity for urgent cases was 76.9% (95% CI: 56-91%) using its top diagnosis, with a specificity of 93.6% (95% CI: 79-99%). This novel algorithm provides dynamic workups based on clinical symptoms and may be used as an adjunct to clinical judgement for triaging the urgency of ocular causes of red eye.
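The paper does not publish its algorithm, but the core of any dynamic Bayesian workup is a posterior proportional to prior times likelihood, re-applied as each finding arrives. The sketch below is a hypothetical illustration with invented priors and symptom likelihoods, not the study's model.

```python
def bayes_update(priors, likelihoods):
    """One Bayesian step: posterior proportional to prior * P(finding | diagnosis)."""
    unnorm = {d: priors[d] * likelihoods[d] for d in priors}
    total = sum(unnorm.values())
    return {d: p / total for d, p in unnorm.items()}

# Hypothetical priors over a red-eye differential (illustrative values only)
priors = {"conjunctivitis": 0.6, "uveitis": 0.2, "acute glaucoma": 0.2}
# Hypothetical P(photophobia | diagnosis) for each candidate
photophobia = {"conjunctivitis": 0.1, "uveitis": 0.8, "acute glaucoma": 0.6}

posterior = bayes_update(priors, photophobia)
ranked = sorted(posterior, key=posterior.get, reverse=True)
print(ranked[0])  # uveitis now tops the differential
```

Each answered questionnaire item feeds another `bayes_update` call, which is what makes the workup dynamic rather than a static flowchart branch.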
Affiliation(s)
- Alexander M. Deans: Schulich School of Medicine and Dentistry, Western University, 1151 Richmond St., London, ON N6A 5C1, Canada (Correspondence; Tel.: +226-246-8142)
- Amy Basilious: Schulich School of Medicine and Dentistry, Western University, 1151 Richmond St., London, ON N6A 5C1, Canada
- Cindy M. Hutnik: Department of Ophthalmology, Schulich School of Medicine and Dentistry, Western University, 1151 Richmond St., London, ON N6A 5C1, Canada
19
Bassi A, Krance SH, Pucchio A, Pur DR, Miranda RN, Felfeli T. The Application of Artificial Intelligence in the Analysis of Biomarkers for Diagnosis and Management of Uveitis and Uveal Melanoma: A Systematic Review. Clin Ophthalmol 2022;16:2895-2908. [PMID: 36065357] [PMCID: PMC9440710] [DOI: 10.2147/opth.s377358]
Abstract
Purpose This study aims to identify the available literature describing the utilization of artificial intelligence (AI) as a clinical tool in uveal diseases. Methods A comprehensive literature search was conducted in five electronic databases for studies relating AI to uveal diseases. Results After screening 10,258 studies, 18 studies met the inclusion criteria. Uveal melanoma (44%) and uveitis (56%) were the two uveal diseases examined. Ten studies (56%) used complex AI methods, while 13 studies (72%) used regression methods. Lactate dehydrogenase (LDH), examined in 50% of the uveal melanoma studies, was the only biomarker that overlapped across multiple studies. Notably, 94% of studies reported that their biomarkers of interest were significant. Conclusion This study highlights the value of both complex and simple AI tools in uveal diseases. In particular, complex AI methods can be used to weigh the merit of significant biomarkers, such as LDH, in order to create staging tools and predict treatment outcomes.
Affiliation(s)
- Arshpreet Bassi: Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada
- Saffire H Krance: Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada
- Aidan Pucchio: School of Medicine, Queen's University, Kingston, Ontario, Canada
- Daiana R Pur: Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada
- Rafael N Miranda: Toronto Health Economics and Technology Assessment Collaborative, Toronto, Ontario, Canada; The Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
- Tina Felfeli: Toronto Health Economics and Technology Assessment Collaborative, Toronto, Ontario, Canada; The Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada; Department of Ophthalmology and Visual Sciences, University of Toronto, Toronto, Ontario, Canada (Correspondence: 340 College Street, Suite 400, Toronto, ON M5T 3A9, Canada; Fax +416-978-4590)
20
Design of Intelligent Diagnosis and Treatment System for Ophthalmic Diseases Based on Deep Neural Network Model. Contrast Media Mol Imaging 2022;2022:4934190. [PMID: 35854765] [PMCID: PMC9277203] [DOI: 10.1155/2022/4934190]
Abstract
Artificial intelligence (AI) has developed rapidly in the field of ophthalmology. Fundus images have become a research hotspot because they are easy to obtain and rich in biological information, and the application of AI to fundus image analysis has steadily deepened and expanded. A variety of AI studies have now been carried out on the clinical screening, diagnosis, and prognosis of eye diseases, and their results are gradually being applied in clinical practice. Applying AI to fundus image analysis can ease the shortage of medical resources and improve diagnostic efficiency. Future research on AI for ocular imaging should focus on comprehensive intelligent diagnosis of multiple ophthalmic and complex diseases, with emphasis on integrating standardized, high-quality data resources, improving algorithmic efficiency, and formulating corresponding clinical research plans.
21
Gajendran MK, Rohowetz LJ, Koulen P, Mehdizadeh A. Novel Machine-Learning Based Framework Using Electroretinography Data for the Detection of Early-Stage Glaucoma. Front Neurosci 2022;16:869137. [PMID: 35600610] [PMCID: PMC9115110] [DOI: 10.3389/fnins.2022.869137]
Abstract
Purpose Early-stage glaucoma diagnosis has been a challenging problem in ophthalmology. Current state-of-the-art glaucoma diagnosis techniques do not fully leverage the immense potential of functional measures such as the electroretinogram (ERG); instead, the focus is on structural measures like optical coherence tomography. The current study takes a foundational step toward a novel and reliable predictive framework for early detection of glaucoma, using machine-learning algorithms capable of leveraging the medically relevant information that ERG signals contain. Methods ERG signals from 60 eyes of DBA/2 mice were grouped for binary classification based on age. The signals were also grouped based on intraocular pressure (IOP) for multiclass classification. Statistical and wavelet-based features were engineered and extracted. Important predictors (ERG tests and features) were determined, and the performance of five machine-learning methods was evaluated. Results A random forest (bagged trees) ensemble classifier provided the best performance in both binary and multiclass classification of ERG signals, achieving accuracies of 91.7% and 80%, respectively. This suggests that machine-learning models can detect subtle changes in ERG signals when trained on advanced features such as those based on wavelet analyses. Conclusions The present study describes a novel, machine-learning-based method of analyzing ERG signals that provides additional information which may be used to detect early-stage glaucoma. Based on the promising performance metrics obtained with the proposed framework on an established ERG dataset, we conclude that it allows detection of functional deficits at early and various stages of glaucoma in mice.
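As an illustration of wavelet-based feature engineering of the kind described (not the authors' actual pipeline), the sketch below computes one level of the Haar wavelet transform and derives simple energy features from a stand-in waveform; a real pipeline would feed such features to the classifier.

```python
import numpy as np

def haar_step(signal):
    """One level of the Haar wavelet transform: approximation and detail coefficients."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)   # smoothed, low-frequency content
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)   # local differences, high-frequency content
    return approx, detail

def features(signal):
    """Simple statistical and wavelet-energy features for one trace."""
    approx, detail = haar_step(signal)
    return {
        "mean": float(np.mean(signal)),
        "std": float(np.std(signal)),
        "approx_energy": float(np.sum(approx ** 2)),
        "detail_energy": float(np.sum(detail ** 2)),
    }

trace = np.sin(np.linspace(0, 4 * np.pi, 64))  # stand-in for an ERG waveform
f = features(trace)
print(f["approx_energy"] > f["detail_energy"])  # smooth signal: energy sits in the approximation
```

The Haar step is orthonormal, so `approx_energy + detail_energy` equals the signal's total energy; shifts of energy into the detail band are the kind of subtle high-frequency change a classifier can pick up.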
Affiliation(s)
- Mohan Kumar Gajendran: Department of Civil and Mechanical Engineering, School of Computing and Engineering, University of Missouri-Kansas City, Kansas City, MO, United States
- Landon J. Rohowetz: Vision Research Center, Department of Ophthalmology, University of Missouri-Kansas City, Kansas City, MO, United States
- Peter Koulen: Vision Research Center, Department of Ophthalmology, University of Missouri-Kansas City, Kansas City, MO, United States; Department of Biomedical Sciences, University of Missouri-Kansas City, Kansas City, MO, United States
- Amirfarhang Mehdizadeh (Correspondence): Department of Civil and Mechanical Engineering, School of Computing and Engineering, University of Missouri-Kansas City, Kansas City, MO, United States; Vision Research Center, Department of Ophthalmology, University of Missouri-Kansas City, Kansas City, MO, United States
22

23
Kim CB, Armstrong GW. Characterizing Infectious Keratitis Using Artificial Intelligence. Int Ophthalmol Clin 2022;62:41-53. [PMID: 35325909] [DOI: 10.1097/iio.0000000000000405]
24
A Deep Learning Ensemble Method to Visual Acuity Measurement Using Fundus Images. Appl Sci (Basel) 2022;12:3190. [DOI: 10.3390/app12063190]
Abstract
Visual acuity (VA) is a measure of the ability to distinguish shapes and details of objects at a given distance, and thus of the spatial resolution of the visual system. Vision is one of the basic health indicators closely related to a person's quality of life, and VA is among the first basic tests performed when an eye disease develops. VA is usually measured using a Snellen chart or E-chart from a specific distance. In some cases, however, such as unconscious patients or conditions like dementia, it can be impossible to measure VA with such traditional chart-based methods. This paper provides a machine learning-based VA measurement methodology that determines VA from fundus images alone. In particular, the levels of VA, conventionally divided into 11 levels, are grouped into four classes, and three machine learning algorithms, one SVM model and two CNN models, are combined into an ensemble method to predict the VA class from a fundus image. Based on a performance evaluation conducted using 4000 randomly selected fundus images, we confirm that our ensemble method estimates the four VA classes with an average accuracy of 82.4%, with per-class accuracies for Class 1 through Class 4 of 88.5%, 58.8%, 88%, and 94.3%, respectively. To the best of our knowledge, this is the first paper on VA measurement from fundus images using deep machine learning.
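The ensemble combines one SVM and two CNN classifiers. A minimal way to fuse such per-model class predictions is majority voting, sketched below with hypothetical model outputs; the paper's actual fusion rule may weight the models differently.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine class predictions from several models; ties go to the first-seen class."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-model outputs for one fundus image (VA class 1-4)
svm_pred, cnn1_pred, cnn2_pred = 2, 3, 3
print(majority_vote([svm_pred, cnn1_pred, cnn2_pred]))  # 3
```

With three heterogeneous models, a majority vote corrects a single model's error whenever the other two agree, which is one reason ensembles of an SVM plus CNNs can beat any member alone.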
25
Zhu S, Lu B, Wang C, Wu M, Zheng B, Jiang Q, Wei R, Cao Q, Yang W. Screening of Common Retinal Diseases Using Six-Category Models Based on EfficientNet. Front Med (Lausanne) 2022;9:808402. [PMID: 35280876] [PMCID: PMC8904395] [DOI: 10.3389/fmed.2022.808402]
Abstract
PURPOSE A six-category model of common retinal diseases is proposed to help primary medical institutions in the preliminary screening of five common retinal diseases. METHODS A total of 2,400 fundus images of normal fundus and five common retinal diseases were provided by a cooperative hospital. Two six-category deep learning models, based on EfficientNet-B4 and ResNet50, were trained, and their results were compared with those of a five-category ResNet50 model from our previous study. A total of 1,315 fundus images were used to test the models, and the diagnoses of the two six-category models were compared with the clinical diagnoses. The main evaluation indicators were sensitivity, specificity, F1-score, area under the curve (AUC), 95% confidence interval, kappa and accuracy, and the receiver operating characteristic curves of the two six-category models were compared. RESULTS The diagnostic accuracy of the EfficientNet-B4 model was 95.59% and the kappa value was 94.61%, indicating high diagnostic consistency. The AUCs for the normal diagnosis and the five retinal diseases were all above 0.95. The sensitivity, specificity, and F1-score were 100%, 99.9%, and 99.83% for normal fundus images; 95.68%, 98.61%, and 93.09% for RVO; 96.1%, 99.6%, and 97.37% for high myopia; 97.62%, 99.07%, and 94.62% for glaucoma; 90.76%, 99.16%, and 93.3% for DR; and 92.27%, 98.5%, and 91.51% for MD. CONCLUSION The EfficientNet-B4 model was used to design a six-category model of common retinal diseases that diagnoses the normal fundus and five common retinal diseases from fundus images. It can help primary doctors screen for common retinal diseases and give suitable suggestions and recommendations; timely referral can improve the efficiency of eye disease diagnosis in rural areas and avoid delayed treatment.
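The per-class metrics reported here (sensitivity, specificity, F1-score, accuracy) all derive from confusion-matrix counts. The sketch below computes them from illustrative counts, not the study's data.

```python
def metrics(tp, fp, tn, fn):
    """Per-class screening metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                   # recall on the disease class
    specificity = tn / (tn + fp)                   # recall on the non-disease class
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, f1, accuracy

# Illustrative counts for one disease class in a 1000-image test set (not study data)
sens, spec, f1, acc = metrics(tp=90, fp=10, tn=880, fn=20)
print(round(sens, 3), round(spec, 3), round(f1, 3), round(acc, 3))
```

In screening, high sensitivity matters most (missed disease is costly), while high specificity limits unnecessary referrals; F1 balances precision against sensitivity for each class.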
Affiliation(s)
- Shaojun Zhu: School of Information Engineering, Huzhou University, Huzhou, China; Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
- Bing Lu: School of Information Engineering, Huzhou University, Huzhou, China
- Chenghu Wang: The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Maonian Wu: School of Information Engineering, Huzhou University, Huzhou, China; Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
- Bo Zheng: School of Information Engineering, Huzhou University, Huzhou, China; Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
- Qin Jiang: The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Ruili Wei: Department of Ophthalmology, Shanghai Changzheng Hospital, Huangpu, China
- Qixin Cao: Huzhou Traditional Chinese Medicine Hospital Affiliated to Zhejiang University of Traditional Chinese Medicine, Huzhou, China
- Weihua Yang: The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
26
Pettit RW, Fullem R, Cheng C, Amos CI. Artificial intelligence, machine learning, and deep learning for clinical outcome prediction. Emerg Top Life Sci 2021;5:ETLS20210246. [PMID: 34927670] [PMCID: PMC8786279] [DOI: 10.1042/etls20210246]
Abstract
AI is a broad concept grouping initiatives that use a computer to perform tasks that would usually require a human. AI methods are well suited to predicting clinical outcomes. In practice, AI methods can be thought of as functions that learn the outcomes accompanying standardized input data in order to produce accurate outcome predictions when tried on new data. Current methods for cleaning, creating, accessing, extracting, augmenting, and representing data for training AI clinical prediction models are well defined. The use of AI to predict clinical outcomes is a dynamic and rapidly evolving arena, with new methods and applications emerging. Extraction or accession of electronic health care records, and combining these with patient genetic data, is an area of present attention with tremendous potential for future growth. Machine learning approaches, including the decision-tree methods Random Forest and XGBoost, and deep learning techniques, including deep multi-layer and recurrent neural networks, afford unique capabilities for accurately creating predictions from high-dimensional, multimodal data. Furthermore, AI methods are increasing our ability to accurately predict clinical outcomes that were previously difficult to model, including time-dependent and multi-class outcomes. Barriers to robust deployment of AI-based clinical outcome models include changing AI product development interfaces, the specificity of regulation requirements, and limitations in ensuring model interpretability, generalizability, and adaptability over time.
Affiliation(s)
- Rowland W. Pettit: Institute for Clinical and Translational Research, Baylor College of Medicine, Houston, TX, U.S.A
- Robert Fullem: Department of Molecular and Human Genetics, Baylor College of Medicine, Houston, TX, U.S.A
- Chao Cheng: Institute for Clinical and Translational Research, Baylor College of Medicine, Houston, TX, U.S.A; Section of Epidemiology and Population Sciences, Department of Medicine, Baylor College of Medicine, Houston, TX, U.S.A
- Christopher I. Amos: Institute for Clinical and Translational Research, Baylor College of Medicine, Houston, TX, U.S.A; Section of Epidemiology and Population Sciences, Department of Medicine, Baylor College of Medicine, Houston, TX, U.S.A; Dan L Duncan Comprehensive Cancer Center, Baylor College of Medicine, Houston, TX, U.S.A
27
Shekar S, Satpute N, Gupta A. Review on diabetic retinopathy with deep learning methods. J Med Imaging (Bellingham) 2021;8:060901. [PMID: 34859116] [DOI: 10.1117/1.jmi.8.6.060901]
Abstract
Purpose: The purpose of our review is to examine the existing literature on methods for diabetic retinopathy (DR) recognition employing deep learning (DL) and machine learning (ML) techniques, and to address the difficulties posed by the various datasets used for DR. Approach: DR is a progressive illness and may cause vision loss. Early identification of DR lesions is therefore helpful in preventing damage to the retina. However, this is a complex task because DR is symptomless in its early stages, and traditional approaches have required ophthalmologists. Recently, automated DR identification has been reported based on image processing, ML, and DL. We analyze the recent literature and provide a comparative study that also covers the limitations of the literature and directions for future work. Results: A comparative analysis of the databases used, the performance metrics employed, and the ML and DL techniques recently adopted for DR detection based on various DR features is presented. Conclusion: Our review discusses the methods employed in DR detection along with the technical and clinical challenges encountered, which are missing in existing reviews, as well as future scope to assist researchers in the field of retinal imaging.
Affiliation(s)
- Shreya Shekar, College of Engineering Pune, Department of Electronics and Telecommunication Engineering, Pune, Maharashtra, India
- Nitin Satpute, Aarhus University, Department of Electrical and Computer Engineering, Aarhus, Denmark
- Aditya Gupta, College of Engineering Pune, Department of Electronics and Telecommunication Engineering, Pune, Maharashtra, India
28
Jahangir S, Khan HA. Artificial intelligence in ophthalmology and visual sciences: Current implications and future directions. Artif Intell Med Imaging 2021; 2:95-103. [DOI: 10.35711/aimi.v2.i5.95] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Revised: 06/30/2021] [Accepted: 10/27/2021] [Indexed: 02/06/2023] Open
Abstract
Since its inception in 1959, artificial intelligence (AI) has evolved at an unprecedented rate and has revolutionized the world of medicine. Ophthalmology, being an image-driven field of medicine, is well suited for the implementation of AI. Machine learning (ML) and deep learning (DL) models are being used to screen for vision-threatening ocular conditions. These models have proven accurate and reliable for diagnosing anterior and posterior segment diseases, screening large populations, and even predicting the natural course of various ocular morbidities. With a growing population and the global burden of managing irreversible blindness, AI offers a unique solution when implemented in clinical practice. In this review, we discuss what AI, ML, and DL are, their uses, future directions for AI, and its limitations in ophthalmology.
Affiliation(s)
- Smaha Jahangir, School of Optometry, The University of Faisalabad, Faisalabad, Punjab 38000, Pakistan
- Hashim Ali Khan, Department of Ophthalmology, SEHHAT Foundation, Gilgit 15100, Gilgit-Baltistan, Pakistan
29
Nuzzi R, Boscia G, Marolo P, Ricardi F. The Impact of Artificial Intelligence and Deep Learning in Eye Diseases: A Review. Front Med (Lausanne) 2021; 8:710329. [PMID: 34527682 PMCID: PMC8437147 DOI: 10.3389/fmed.2021.710329] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2021] [Accepted: 07/23/2021] [Indexed: 12/21/2022] Open
Abstract
Artificial intelligence (AI) is a subset of computer science dealing with the development and training of algorithms that try to replicate human intelligence. We report a clinical overview of the basic principles of AI that are fundamental to appreciating its application to ophthalmology practice. Here, we review the most common eye diseases, focusing on some of the potential challenges and limitations emerging with the development and application of this new technology in ophthalmology.
Affiliation(s)
- Raffaele Nuzzi, Ophthalmology Unit, A.O.U. City of Health and Science of Turin, Department of Surgical Sciences, University of Turin, Turin, Italy
30
Takhchidi K, Gliznitsa PV, Svetozarskiy SN, Bursov AI, Shusterzon KA. Labelling of data on fundus color pictures used to train a deep learning model enhances its macular pathology recognition capabilities. Bull Russ State Med Univ 2021. [DOI: 10.24075/brsmu.2021.040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
Abstract
Retinal diseases remain one of the leading causes of visual impairment in the world. The development of automated diagnostic methods can improve the efficiency and availability of mass screening programs for macular pathology. The objective of this work was to develop and validate deep learning algorithms detecting macular pathology (age-related macular degeneration, AMD) based on the analysis of color fundus photographs with and without data labeling. We used 1,200 color fundus photographs from local databases, including 575 retinal images of AMD patients and 625 images of healthy retinas. The deep learning algorithm was built on the Faster R-CNN network with a ResNet50 convolutional backbone, using transfer learning. Without labeling, the accuracy of the model was unsatisfactory (79%) because the neural network selected the areas of attention incorrectly. Data labeling improved the efficacy of the method: on the test dataset, the model located the areas with informative features adequately, and the classification accuracy reached 96.6%. Thus, image data labeling significantly improves the accuracy of color retinal image recognition by a neural network and enables the development and training of effective models with limited datasets.
Affiliation(s)
- KhP Takhchidi, Pirogov Russian National Research Medical University, Moscow, Russia
- PV Gliznitsa, OOO Innovatsioonniye Tekhnologii (Innovative Technologies, LLC), Nizhny Novgorod, Russia
- SN Svetozarskiy, Volga District Medical Center under the Federal Medical-Biological Agency, Nizhny Novgorod, Russia
- AI Bursov, Ivannikov Institute for System Programming of RAS, Moscow, Russia
- KA Shusterzon, L.A. Melentiev Energy Systems Institute, Irkutsk, Russia
31
Feng R, Xu Z, Zheng X, Hu H, Jin X, Chen DZ, Yao K, Wu J. KerNet: A Novel Deep Learning Approach for Keratoconus and Sub-clinical Keratoconus Detection Based on Raw Data of the Pentacam System. IEEE J Biomed Health Inform 2021; 25:3898-3910. [PMID: 33979295 DOI: 10.1109/jbhi.2021.3079430] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Keratoconus is one of the most severe corneal diseases; it is difficult to detect at the early stage (i.e., sub-clinical keratoconus) and can result in vision loss. In this paper, we propose a novel end-to-end deep learning approach, called KerNet, which processes the raw data of the Pentacam system to detect keratoconus and sub-clinical keratoconus. First, we collect raw data from the Pentacam system. The raw data has a specific format: each sample consists of five numerical matrices, corresponding to the front and back surface curvature, the front and back surface elevation, and the pachymetry of an eye. We then propose a novel convolutional neural network, KerNet, containing five branches as the backbone with a multi-level fusion architecture. The five branches receive the five slices separately and effectively capture the features of each slice through several cascaded residual blocks. The multi-level fusion architecture (i.e., low-level fusion and high-level fusion) accounts for the correlation among the five slices and fuses the extracted features for better prediction. Specifically, five spatial attention modules, one per branch, guide the low-level fusion. The high-level fusion is implemented by simply concatenating the output feature maps of the last residual block in each branch. Experimental results show that: (1) our approach outperforms state-of-the-art methods on an in-house dataset, by ~1% for keratoconus detection accuracy and ~4% for sub-clinical keratoconus detection accuracy; (2) the attention maps visualized by Grad-CAM show that KerNet places more attention on the inferior temporal part for sub-clinical keratoconus, which previous clinical studies have identified as the region ophthalmologists examine to detect sub-clinical keratoconus. To the best of our knowledge, we are the first to propose an end-to-end deep learning approach that uses raw Pentacam data for keratoconus and sub-clinical keratoconus detection. The prediction performance and clinical significance of KerNet were evaluated and confirmed by two clinical experts. Our code is available at https://github.com/upzheng/Keratoconus.
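The branch-plus-fusion design the abstract describes can be sketched compactly. The following NumPy toy is not the authors' implementation: the 8x8 slices, the random stand-in filters, and the channel-mean attention cue are all invented for illustration; only the overall shape (five branches, attention-weighted low-level fusion, concatenation as high-level fusion) mirrors the description.

```python
import numpy as np

rng = np.random.default_rng(0)

def branch_features(slice_map, n_filters=4):
    """Stand-in for one branch of cascaded residual blocks:
    produces one feature map per (hypothetical random) filter."""
    filters = rng.standard_normal((n_filters, 3, 3))
    h, w = slice_map.shape
    feats = np.zeros((n_filters, h - 2, w - 2))
    for k, f in enumerate(filters):
        for i in range(h - 2):
            for j in range(w - 2):
                feats[k, i, j] = np.sum(slice_map[i:i+3, j:j+3] * f)
    return feats

def spatial_attention(feats):
    """Low-level fusion cue: a sigmoid map over the channel mean
    reweights every spatial location of the branch's features."""
    attn = 1.0 / (1.0 + np.exp(-feats.mean(axis=0)))
    return feats * attn  # broadcasts over the channel axis

# Five Pentacam-style slices: front/back curvature, front/back
# elevation, pachymetry (synthetic 8x8 maps for illustration).
slices = [rng.standard_normal((8, 8)) for _ in range(5)]

# Per-branch features with attention, then high-level fusion by
# concatenating the branches' pooled feature vectors.
pooled = [spatial_attention(branch_features(s)).mean(axis=(1, 2))
          for s in slices]
fused = np.concatenate(pooled)  # 5 branches x 4 filters = 20 features
print(fused.shape)              # (20,)
```

In the real network a classifier head would sit on top of `fused`; the point here is only how the five slices stay in separate branches until the two fusion stages.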
32
Saifee M, Wu J, Liu Y, Ma P, Patlidanon J, Yu Y, Ying GS, Han Y. Development and Validation of Automated Visual Field Report Extraction Platform Using Computer Vision Tools. Front Med (Lausanne) 2021; 8:625487. [PMID: 33996848 PMCID: PMC8116600 DOI: 10.3389/fmed.2021.625487] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2020] [Accepted: 03/31/2021] [Indexed: 02/01/2023] Open
Abstract
Purpose: To introduce and validate hvf_extraction_script, an open-source software script for the automated extraction and structuring of metadata, value plot data, and percentile plot data from Humphrey visual field (HVF) report images. Methods: Validation was performed on 90 HVF reports across three different report layouts, covering a total of 1,530 metadata fields, 15,536 value plot data points, and 10,210 percentile data points, comparing the computer script and four human extractors against DICOM reference data. Computer extraction and human extraction were compared on extraction time as well as accuracy of extraction for metadata, value plot data, and percentile plot data. Results: Computer extraction required 4.9-8.9 s per report, compared to the 6.5-19 min required by human extractors, a more than 40-fold difference in extraction speed. The computer metadata extraction error rate varied from 1.2% to 3.5% in aggregate, compared to 0.2-9.2% for human metadata extraction across all layouts. Computer value data point extraction had an aggregate error rate of 0.9% for version 1, <0.01% for version 2, and 0.15% for version 3, compared to an aggregate error rate of 0.8-9.2% for human extraction. Computer percentile data point extraction likewise had very low error rates, with no errors in versions 1 and 2 and a 0.06% error rate in version 3, compared to 0.06-12.2% for human extraction. Conclusions: This study introduces and validates hvf_extraction_script, an open-source tool for fast, accurate, automated data extraction from HVF reports to facilitate analysis of large-volume HVF datasets, and demonstrates the value of image processing tools in enabling faster and cheaper large-volume data extraction in research settings.
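The aggregate error rates reported above are field-wise mismatch fractions between extracted and reference data. A minimal sketch of such a tally follows; the field names and values are hypothetical, not data from the study or the script's actual API.

```python
# Hypothetical extracted vs. DICOM-reference values for a handful
# of visual-field report fields (illustrative names only).
reference = {"MD": -2.5, "PSD": 1.8, "VFI": 98, "fovea": 35}
extracted = {"MD": -2.5, "PSD": 1.8, "VFI": 96, "fovea": 35}

def aggregate_error_rate(extracted, reference):
    """Fraction of reference fields whose extracted value differs,
    i.e. the style of aggregate metric the study reports."""
    errors = sum(1 for k in reference if extracted.get(k) != reference[k])
    return errors / len(reference)

rate = aggregate_error_rate(extracted, reference)
print(f"{rate:.0%}")  # one of four fields wrong -> 25%
```

Scaled to thousands of fields per layout, the same count/divide gives the percentages quoted in the Results.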
Affiliation(s)
- Murtaza Saifee, Department of Ophthalmology, University of California, San Francisco, San Francisco, CA, United States
- Jian Wu, Department of Ophthalmology, University of California, San Francisco, San Francisco, CA, United States; Beijing Ophthalmology and Visual Science Key Lab, Beijing Tongren Eye Center, Beijing Tongren Hospital, Beijing Institute of Ophthalmology, Capital Medical University, Beijing, China
- Yingna Liu, Department of Ophthalmology, University of California, San Francisco, San Francisco, CA, United States
- Ping Ma, Department of Ophthalmology, University of California, San Francisco, San Francisco, CA, United States; Department of Ophthalmology, Shandong Provincial Hospital, Shandong First Medical University, Jinan, China
- Jutima Patlidanon, Department of Ophthalmology, University of California, San Francisco, San Francisco, CA, United States; Department of Ophthalmology, Bhumibol Adulyadej Hospital, Bangkok, Thailand
- Yinxi Yu, Center for Preventive Ophthalmology and Biostatistics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Gui-Shuang Ying, Center for Preventive Ophthalmology and Biostatistics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Ying Han, Department of Ophthalmology, University of California, San Francisco, San Francisco, CA, United States; Ophthalmology Section, Surgical Service, San Francisco Veterans Affairs Medical Center, San Francisco, CA, United States
33
Jiang J, Wang L, Fu H, Long E, Sun Y, Li R, Li Z, Zhu M, Liu Z, Chen J, Lin Z, Wu X, Wang D, Liu X, Lin H. Automatic classification of heterogeneous slit-illumination images using an ensemble of cost-sensitive convolutional neural networks. Ann Transl Med 2021; 9:550. [PMID: 33987248 DOI: 10.21037/atm-20-6635] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Background: Lens opacity seriously affects the visual development of infants. Slit-illumination images play an irreplaceable role in lens opacity detection; however, these images exhibit varied phenotypes with severe heterogeneity and complexity, particularly among pediatric cataracts. An effective computer-aided method is therefore urgently needed to automatically diagnose heterogeneous lens opacity and provide appropriate treatment recommendations in a timely manner. Methods: We integrated three different deep learning networks and a cost-sensitive method into an ensemble learning architecture, yielding a model called CCNN-Ensemble [ensemble of cost-sensitive convolutional neural networks (CNNs)] for automatic lens opacity detection. A total of 470 slit-illumination images of pediatric cataracts were used for training and for comparison between the CCNN-Ensemble model and conventional methods. Finally, we used two external datasets (132 independent test images and 79 internet-based images) to further evaluate the model's generalizability and effectiveness. Results: Experimental results and comparative analyses demonstrated that the proposed method was superior to conventional approaches and provided clinically meaningful performance on three grading indices of lens opacity (specificity and sensitivity): area (92.00% and 92.31%), density (93.85% and 91.43%), and opacity location (95.25% and 89.29%). Furthermore, comparable performance on the independent testing dataset and the internet-based images verified the effectiveness and generalizability of the model. Finally, we developed and deployed website-based software for automated pediatric cataract grading diagnosis in ophthalmology clinics. Conclusions: The CCNN-Ensemble method demonstrates higher specificity and sensitivity than conventional methods on multi-source datasets. This study provides a practical strategy for heterogeneous lens opacity diagnosis and has the potential to be applied to the analysis of other medical images.
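A cost-sensitive ensemble of the kind described can be illustrated as probability-level soft voting with class-cost weighting. Everything below is invented for illustration: the three probability matrices stand in for the three base CNNs' outputs, and the cost vector is not the paper's; only the averaging-then-reweighting pattern reflects the described approach.

```python
import numpy as np

# Probability outputs of three hypothetical base CNNs for four
# images over three lens-opacity grades (each row sums to 1).
p1 = np.array([[0.7, 0.2, 0.1], [0.1, 0.6, 0.3],
               [0.2, 0.3, 0.5], [0.5, 0.4, 0.1]])
p2 = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3],
               [0.1, 0.2, 0.7], [0.4, 0.5, 0.1]])
p3 = np.array([[0.8, 0.1, 0.1], [0.3, 0.4, 0.3],
               [0.2, 0.2, 0.6], [0.3, 0.6, 0.1]])

# Cost-sensitive weighting: missing a severe grade is costlier,
# so its probability is up-weighted before the vote
# (illustrative costs, not values from the study).
class_costs = np.array([1.0, 1.5, 2.0])

def cost_sensitive_vote(probas, costs):
    """Average the members' probabilities, scale by per-class
    misclassification costs, and pick the top class per sample."""
    avg = np.mean(probas, axis=0)
    return np.argmax(avg * costs, axis=1)

preds = cost_sensitive_vote([p1, p2, p3], class_costs)
print(preds)
```

With the costs above, samples whose averaged probabilities are close between grades tip toward the costlier (more severe) grade, which is the behavior a cost-sensitive ensemble buys over plain soft voting.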
Affiliation(s)
- Jiewei Jiang, School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an, China
- Liming Wang, School of Computer Science and Technology, Xidian University, Xi'an, China
- Haoran Fu, School of Computer Science and Technology, Xidian University, Xi'an, China
- Erping Long, State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Yibin Sun, School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an, China
- Ruiyang Li, State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zhongwen Li, State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Mingmin Zhu, School of Mathematics and Statistics, Xidian University, Xi'an, China
- Zhenzhen Liu, State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Jingjing Chen, State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zhuoling Lin, State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Xiaohang Wu, State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Dongni Wang, State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Xiyang Liu, School of Computer Science and Technology, Xidian University, Xi'an, China
- Haotian Lin, State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
34
Brill D, Papaliodis G. Uveitis Specialists Harnessing Disruptive Technology during the COVID-19 Pandemic and Beyond. Semin Ophthalmol 2021; 36:296-303. [PMID: 33755525 DOI: 10.1080/08820538.2021.1896753] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
Spurred by the coronavirus disease pandemic and a shortage of eye care providers, telemedicine is transforming the way ophthalmologists care for their patients. Video conferencing, ophthalmic imaging, hybrid visits, intraocular inflammation quantification, and portable technology are evolving areas that may allow more uveitis patients to be evaluated via telemedicine. Despite these promising disruptive technologies, significant technological limitations, legal barriers, variable insurance coverage for virtual visits, and a lack of clinical trials remain before uveitis specialists can fully embrace telemedicine.
Affiliation(s)
- Daniel Brill, Ocular Immunology and Uveitis Service, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- George Papaliodis, Ocular Immunology and Uveitis Service, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
35
Armstrong GW, Kalra G, De Arrigunaga S, Friedman DS, Lorch AC. Anterior Segment Imaging Devices in Ophthalmic Telemedicine. Semin Ophthalmol 2021; 36:149-156. [PMID: 33656960 DOI: 10.1080/08820538.2021.1887899] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/30/2023]
Abstract
Obtaining a clear assessment of the anterior segment is critical for disease diagnosis and management in ophthalmic telemedicine. The anterior segment can be imaged with slit lamp cameras, robotic remote controlled slit lamps, cell phones, cell phone adapters, digital cameras, and webcams, all of which can enable remote care. The ability of these devices to identify various ophthalmic diseases has been studied, including cataracts, as well as abnormalities of the ocular adnexa, cornea, and anterior chamber. This article reviews the current state of anterior segment imaging for the purpose of ophthalmic telemedical care.
Affiliation(s)
- Grayson W Armstrong, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Gagan Kalra, Department of Ophthalmology, Government Medical College and Hospital, Chandigarh, India
- Sofia De Arrigunaga, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- David S Friedman, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Alice C Lorch, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
36
Moraru AD, Costin D, Moraru RL, Branisteanu DC. Artificial intelligence and deep learning in ophthalmology - present and future (Review). Exp Ther Med 2020; 20:3469-3473. [PMID: 32905155 PMCID: PMC7465350 DOI: 10.3892/etm.2020.9118] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2020] [Accepted: 06/30/2020] [Indexed: 02/06/2023] Open
Abstract
Since its introduction in 1959, artificial intelligence (AI) technology has evolved rapidly and has benefited research, industry, and medicine. Deep learning, a subset of AI, is used in ophthalmology for data analysis, segmentation, automated diagnosis, and outcome prediction. The combination of deep learning and optical coherence tomography (OCT) has proven reliable for detecting retinal diseases and improving diagnostic performance for diseases of the eye's posterior segment. This review explored the possibility of implementing and using AI to establish the diagnosis of retinal disorders. The benefits and limitations of AI in the medical management of retinal disease were investigated by analyzing the most recent literature. Furthermore, future trends of AI involvement in ophthalmology were analyzed, as AI will become part of decision-making in scientific investigation, diagnosis, and therapeutic management.
Affiliation(s)
- Andreea Dana Moraru, Department of Ophthalmology, 'Grigore T. Popa' University of Medicine and Pharmacy, 700115 Iaşi, Romania; Department of Ophthalmology, 'N. Oblu' Clinical Hospital, 700309 Iaşi, Romania
- Danut Costin, Department of Ophthalmology, 'Grigore T. Popa' University of Medicine and Pharmacy, 700115 Iaşi, Romania; Department of Ophthalmology, 'N. Oblu' Clinical Hospital, 700309 Iaşi, Romania
- Radu Lucian Moraru, Department of Otorhinolaryngology, Transmed Expert, 700011 Iaşi, Romania; 'Retina Center' Eye Clinic, 700126 Iaşi, Romania
- Daniel Constantin Branisteanu, Department of Ophthalmology, 'Grigore T. Popa' University of Medicine and Pharmacy, 700115 Iaşi, Romania; 'Retina Center' Eye Clinic, 700126 Iaşi, Romania
37
Abstract
Telemedicine is the provision of healthcare-related services from a distance and is poised to move healthcare from the physician's office back into the patient's home. The field of ophthalmology is often at the forefront of technological advances in medicine, including telemedicine and the use of artificial intelligence. Multiple studies have demonstrated the reliability of tele-ophthalmology for screening and diagnostics and have demonstrated benefits to patients, physicians, and payors. Obstacles to widespread implementation remain, but recent legislation and regulation passed in response to the devastating COVID-19 pandemic have helped to reduce some of these barriers. This review describes the current status of tele-ophthalmology in the United States, including benefits, hurdles, current programs, technology, and developments in artificial intelligence. With ongoing advances, patients may benefit from improved detection and earlier treatment of eye diseases, resulting in better care and improved visual outcomes.
Affiliation(s)
- Deep Parikh, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Grayson Armstrong, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Victor Liou, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Deeba Husain, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA