1. Wang Y, Wang Z, Lin W, Li Q. Artificial Intelligence Technologies for Nursing Development: A Review of the Current Literature. Br J Hosp Med (Lond) 2025; 86:1-13. [PMID: 40405854; DOI: 10.12968/hmed.2024.0947]
Abstract
Recent years have seen the rapid development of artificial intelligence (AI) technology, which has transformed the healthcare industry, especially the field of nursing, and highlighted its great potential for application. Aside from helping nurses make more accurate decisions in complex clinical environments, AI also provides patients with more convenient remote care services. These trends underscore the indispensable value of AI in nursing. This study comprehensively reviewed the literature on the application of AI in nursing environments, aiming to analyze the current status of AI technology in nursing practice and to provide a prospective outlook on its future development in the nursing field. Through this review, we hope to provide nursing practitioners and healthcare policy makers with valuable information to facilitate the further application of AI technologies in enhancing the quality and efficiency of nursing care.
Affiliation(s)
- Yundan Wang
  - Nursing Department, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Zanli Wang
  - Nursing Department, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Wujia Lin
  - Nursing Department, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Qian Li
  - Nursing Department, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
2. Rakhilin N, Morris HD, Pham DL, Hood MN, Ho VB. Opportunities for Artificial Intelligence in Operational Medicine: Lessons from the United States Military. Bioengineering (Basel) 2025; 12:519. [PMID: 40428137; DOI: 10.3390/bioengineering12050519]
Abstract
Conducted in challenging environments such as disaster or conflict areas, operational medicine presents unique challenges for the delivery of efficient and quality healthcare. It exposes first responders and medical personnel to many unexpected health risks and dangerous situations. To tackle these issues, artificial intelligence (AI) has been progressively incorporated into operational medicine, both on the front lines and, more recently, in support roles. The ability of AI to rapidly analyze high-dimensional data and make inferences has opened up a wide variety of opportunities and increased efficiency for its early adopters, notably the United States military, in non-invasive medical imaging and mental health applications. This review discusses the current state of AI and highlights its broad array of potential applications in operational medicine as developed for the United States military.
Affiliation(s)
- Nikolai Rakhilin
  - Department of Radiology and Bioengineering, Uniformed Services University for Health Science, 4301 Jones Bridge Rd, Bethesda, MD 20814, USA
- H Douglas Morris
  - Department of Radiology and Bioengineering, Uniformed Services University for Health Science, 4301 Jones Bridge Rd, Bethesda, MD 20814, USA
- Dzung L Pham
  - Department of Radiology and Bioengineering, Uniformed Services University for Health Science, 4301 Jones Bridge Rd, Bethesda, MD 20814, USA
- Maureen N Hood
  - Department of Radiology and Bioengineering, Uniformed Services University for Health Science, 4301 Jones Bridge Rd, Bethesda, MD 20814, USA
- Vincent B Ho
  - Department of Radiology and Bioengineering, Uniformed Services University for Health Science, 4301 Jones Bridge Rd, Bethesda, MD 20814, USA
3. Chen L, Xu F, Chen L. Diagnostic accuracy of artificial intelligence based on imaging data for predicting distant metastasis of colorectal cancer: a systematic review and meta-analysis. Front Oncol 2025; 15:1558915. [PMID: 40421093; PMCID: PMC12104061; DOI: 10.3389/fonc.2025.1558915]
Abstract
Background Colorectal cancer is the third most common malignant tumor by incidence. Distant metastasis is the main cause of death in colorectal cancer patients. Early detection and prognostic prediction of colorectal cancer have improved with the widespread use of artificial intelligence technologies. Purpose The aim of this study was to comprehensively evaluate the accuracy and validity of AI-based imaging data for predicting distant metastasis in colorectal cancer patients. Methods A systematic literature search was conducted across several databases to find relevant studies published up to January 2024. The quality of articles was assessed using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool. The predictive value of AI-based imaging data for distant metastasis in colorectal cancer patients was assessed using pooled sensitivity and specificity. To explore the reasons for heterogeneity, subgroup analyses were performed using different covariates. Results Seventeen studies were included in the systematic evaluation. The pooled sensitivity, specificity, and AUC of AI-based imaging data for predicting distant metastasis in colorectal cancer patients were 0.86, 0.82, and 0.91, respectively. Based on QUADAS-2, risk of bias was detected in patient selection, the diagnostic test under evaluation, and the gold standard. Subgroup analyses found that the duration of follow-up, site of metastasis, and other covariates had a significant impact on heterogeneity. Conclusion Imaging-based artificial intelligence algorithms have good diagnostic accuracy for predicting distant metastasis in colorectal cancer patients and have potential for clinical application. Systematic review registration https://www.crd.york.ac.uk/PROSPERO/, identifier PROSPERO (CRD42024516063).
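For readers unfamiliar with how per-study results are combined, the sketch below illustrates the basic idea of pooling sensitivity and specificity on the logit scale using invented 2x2 counts (TP, FN, FP, TN). It is a simplified fixed-effect illustration, not the bivariate random-effects model a full diagnostic meta-analysis such as this one would typically use.

```python
import numpy as np

# Hypothetical per-study 2x2 counts: (TP, FN, FP, TN).
# These are illustrative values, not the data from the seventeen included studies.
studies = [
    (45, 5, 8, 42),
    (60, 12, 10, 70),
    (30, 4, 6, 35),
]

def pooled_logit(events, totals):
    """Inverse-variance pooled proportion on the logit scale (fixed effect)."""
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    # 0.5 continuity correction avoids zero cells.
    p = (events + 0.5) / (totals + 1.0)
    logit = np.log(p / (1 - p))
    var = 1.0 / (events + 0.5) + 1.0 / (totals - events + 0.5)
    w = 1.0 / var
    pooled = np.sum(w * logit) / np.sum(w)
    return 1.0 / (1.0 + np.exp(-pooled))  # back-transform to a proportion

tp, fn, fp, tn = (np.array(x) for x in zip(*studies))
print("Pooled sensitivity:", round(pooled_logit(tp, tp + fn), 3))
print("Pooled specificity:", round(pooled_logit(tn, tn + fp), 3))
```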
Affiliation(s)
- Lulin Chen
  - Postgraduate Affairs Department, Zhejiang Chinese Medical University, Hangzhou, Zhejiang, China
  - Department of Ultrasound, Affiliated Hospital of Shaoxing University, Shaoxing, Zhejiang, China
- Fei Xu
  - Department of Ultrasound, Affiliated Hospital of Shaoxing University, Shaoxing, Zhejiang, China
- Lujiao Chen
  - Department of Radiology, Shaoxing People’s Hospital, Shaoxing, Zhejiang, China
4. Chadha S, Mukherjee S, Sanyal S. Advancements and implications of artificial intelligence for early detection, diagnosis and tailored treatment of cancer. Semin Oncol 2025; 52:152349. [PMID: 40345002; DOI: 10.1016/j.seminoncol.2025.152349]
Abstract
The complexity and heterogeneity of cancer make early detection and effective treatment crucial to enhancing patient survival and quality of life. The intrinsic creative ability of artificial intelligence (AI) offers improvements in patient screening, diagnosis, and individualized care. Advanced technologies, such as computer vision, machine learning, deep learning, and natural language processing, can analyze large datasets and identify patterns that permit early cancer detection, diagnosis, management, and the incorporation of conclusive treatment plans, ensuring improved quality of life for patients by personalizing care and minimizing unnecessary interventions. Genomics, transcriptomics, and proteomics data can be combined with AI algorithms to unveil an extensive overview of cancer biology, assisting in its detailed understanding and helping to identify new drug targets and develop effective therapies. This can also help to identify personalized molecular signatures that facilitate tailored interventions addressing the unique aspects of each patient. AI-driven transcriptomics, proteomics, and genomics represent a revolutionary strategy to improve patient outcomes by offering precise diagnosis and tailored therapy. The inclusion of AI in oncology may boost efficiency, reduce errors, and save costs, but it cannot take the role of medical professionals. While clinicians and doctors have the final say in all matters, AI might serve as their faithful assistant.
Affiliation(s)
- Sonia Chadha
  - Amity Institute of Biotechnology, Amity University Uttar Pradesh, Lucknow Campus, Lucknow, Uttar Pradesh, India
- Sayali Mukherjee
  - Amity Institute of Biotechnology, Amity University Uttar Pradesh, Lucknow Campus, Lucknow, Uttar Pradesh, India
- Somali Sanyal
  - Amity Institute of Biotechnology, Amity University Uttar Pradesh, Lucknow Campus, Lucknow, Uttar Pradesh, India
5. Donkor A, Kumi D, Amponsah E, Della Atuwo-Ampoh V. Principles for enhancing trust in artificial intelligence systems among medical imaging professionals in Ghana: A nationwide cross-sectional study. Radiography (Lond) 2025; 31:102953. [PMID: 40228323; DOI: 10.1016/j.radi.2025.102953]
Abstract
INTRODUCTION To realise the full potential of artificial intelligence (AI) systems in medical imaging, it is crucial to address challenges such as cyberterrorism in order to foster trust and acceptance. This study aimed to determine the principles that enhance trust in AI systems from the perspective of medical imaging professionals in Ghana. METHODS An anonymous, online, nationwide cross-sectional survey was conducted. The survey contained questions related to socio-demographic characteristics and AI trustworthy principles, including "human agency and oversight", "technical robustness and safety", "data privacy, security and governance" and "transparency, fairness and accountability". RESULTS A total of 370 respondents completed the survey. Among the respondents, 66.5 % (n = 246) were diagnostic radiographers. A considerable number of respondents (n = 121, 32.7 %) reported having little or no understanding of how medical imaging AI systems work. Overall, 54.9 % (n = 203) of the respondents agreed or strongly agreed that each of the four principles was important to enhance trust in medical imaging AI systems, with a composite mean score of 3.88 ± 0.45. Transparency, fairness and accountability had the highest rating (4.27 ± 0.58), whereas the mean score for human agency and oversight was 3.89 ± 0.53. Technical robustness and safety as well as data privacy, security and governance obtained mean scores of 3.79 ± 0.61 and 3.58 ± 0.65, respectively. CONCLUSION Medical imaging professionals in Ghana agreed that human agency, technical robustness, data privacy and transparency are important principles to enhance trust in AI systems; however, future plans, including medical imaging AI educational interventions, are required to improve AI literacy among medical imaging professionals in Ghana. IMPLICATIONS FOR PRACTICE The evidence presented should encourage organisations to design and deploy trustworthy medical imaging AI systems.
Affiliation(s)
- A Donkor
  - Department of Medical Imaging, Faculty of Allied Health Sciences, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana
  - IMPACCT (Improving Palliative, Aged and Chronic Care Through Clinical Research and Translation), Faculty of Health, University of Technology Sydney, Australia
- D Kumi
  - Department of Medical Imaging, Faculty of Allied Health Sciences, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana
- E Amponsah
  - Department of Medical Imaging, Faculty of Allied Health Sciences, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana
- V Della Atuwo-Ampoh
  - Department of Medical Imaging, School of Allied Health Sciences, University of Health and Allied Sciences, Ho, Ghana
6. Chang SH, Yeh LK, Hung KH, Chiu YJ, Hsieh CH, Ma CP. Machine Learning-Driven Transcriptome Analysis of Keratoconus for Predictive Biomarker Identification. Biomedicines 2025; 13:1032. [PMID: 40426861; DOI: 10.3390/biomedicines13051032]
Abstract
Background: Keratoconus (KTCN) is a multifactorial disease characterized by progressive corneal degeneration. Recent studies suggest that a gene expression analysis of corneas may uncover potential novel biomarkers involved in corneal matrix remodeling. However, identifying reliable combinations of biomarkers that are linked to disease risk or progression remains a significant challenge. Objective: This study employed multiple machine learning algorithms to analyze the transcriptomes of keratoconus patients, identifying feature gene combinations and their functional associations, with the aim of enhancing the understanding of keratoconus pathogenesis. Methods: We analyzed the GSE77938 (PRJNA312169) dataset for differential gene expression (DGE) and performed gene set enrichment analysis (GSEA) using Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways to identify enriched pathways in keratoconus (KTCN) versus controls. Machine learning algorithms were then used to analyze the gene sets, with SHapley Additive exPlanations (SHAP) applied to assess the contribution of key feature genes in the model's predictions. Selected feature genes were further analyzed through Gene Ontology (GO) enrichment to explore their roles in biological processes and cellular functions. Results: Machine learning models, including XGBoost, Random Forest, Logistic Regression, and SVM, identified a set of important feature genes associated with keratoconus, with 15 notable genes appearing across multiple models, such as IL1R1, JUN, CYBB, CXCR4, KRT13, KRT14, S100A8, S100A9, and others. The under-expressed genes in KTCN were involved in the mechanical resistance of the epidermis (KRT14, KRT15) and in inflammation pathways (S100A8/A9, IL1R1, CYBB, JUN, and CXCR4), as compared to controls. The GO analysis highlighted that the S100A8/A9 complex and its associated genes were primarily involved in biological processes related to the cytoskeleton organization, inflammation, and immune response. Furthermore, we expanded our analysis by incorporating additional datasets from PRJNA636666 and PRJNA1184491, thereby offering a broader representation of gene features and increasing the generalizability of our results across diverse cohorts. Conclusions: The differing gene sets identified by XGBoost and SVM may reflect distinct but complementary aspects of keratoconus pathophysiology. Meanwhile, XGBoost captured key immune and chemotactic regulators (e.g., IL1R1, CXCR4), suggesting upstream inflammatory signaling pathways. SVM highlighted structural and epithelial differentiation markers (e.g., KRT14, S100A8/A9), possibly reflecting downstream tissue remodeling and stress responses. Our findings provide a novel research platform for the evaluation of keratoconus using machine learning-based approaches, offering valuable insights into its pathogenesis and potential therapeutic targets.
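As a rough illustration of the feature-attribution step described in this abstract, the sketch below trains a gradient-boosted classifier on a synthetic expression matrix and ranks genes by mean absolute SHAP value. The data, labels, and hyperparameters are placeholders rather than the study's actual pipeline, and it assumes the xgboost and shap Python packages are installed.

```python
import numpy as np
import pandas as pd
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

# Synthetic expression matrix: 60 samples x 8 genes (gene names reused from the abstract).
genes = ["IL1R1", "JUN", "CYBB", "CXCR4", "KRT13", "KRT14", "S100A8", "S100A9"]
X = pd.DataFrame(rng.normal(size=(60, len(genes))), columns=genes)
# Toy labels loosely driven by two of the genes, purely for demonstration.
y = (X["IL1R1"] - X["KRT14"] + rng.normal(scale=0.5, size=60) > 0).astype(int)

model = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
model.fit(X, y)

# SHAP values quantify each gene's contribution to the model's predictions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
importance = pd.Series(np.abs(shap_values).mean(axis=0), index=genes)
print(importance.sort_values(ascending=False))
```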
Affiliation(s)
- Shao-Hsuan Chang
  - Department of Biomedical Engineering, Chang Gung University, Taoyuan 33302, Taiwan
  - Department of Ophthalmology, Linkou Chang Gung Memorial Hospital, Taoyuan 33305, Taiwan
- Lung-Kun Yeh
  - Department of Ophthalmology, Linkou Chang Gung Memorial Hospital, Taoyuan 33305, Taiwan
  - College of Medicine, Chang Gung University, Taoyuan 33302, Taiwan
- Kuo-Hsuan Hung
  - Department of Ophthalmology, Linkou Chang Gung Memorial Hospital, Taoyuan 33305, Taiwan
  - College of Medicine, Chang Gung University, Taoyuan 33302, Taiwan
- Yen-Jung Chiu
  - Department of Biomedical Engineering, Chang Gung University, Taoyuan 33302, Taiwan
- Chia-Hsun Hsieh
  - College of Medicine, Chang Gung University, Taoyuan 33302, Taiwan
  - Division of Oncology, Department of Internal Medicine, Linkou Chang Gung Memorial Hospital, Taoyuan 33305, Taiwan
- Chung-Pei Ma
  - Department of Biomedical Sciences, College of Medicine, Chang Gung University, Taoyuan 33302, Taiwan
7. Azma R, Hareendranathan A, Li M, Nguyen P, Wahd AS, Jaremko JL, Almeida FT. Automated pediatric TMJ articular disk identification and displacement classification in MRI with machine learning. J Dent 2025; 155:105622. [PMID: 39952550; DOI: 10.1016/j.jdent.2025.105622]
Abstract
OBJECTIVE To evaluate the performance of an automated two-step model interpreting pediatric temporomandibular joint (TMJ) magnetic resonance imaging (MRI) using artificial intelligence (AI). Using deep learning techniques, the model first automatically identifies the disk and the TMJ osseous structures, and then an automated algorithm classifies disk displacement. MATERIALS AND METHODS MRI images of the TMJ from 235 pediatric patients (470 joints) were reviewed. TMJ structures were segmented, and the disk position was classified as dislocated or not dislocated. The UNet++ model was trained on MRI images from 135 and tested on images from 100 patients. Disk displacement was then classified by an automated algorithm assessing the location of disk centroid and surfaces for bone landmarks. RESULTS The mean age was 14.6 ± 0.1 years (Female: 138/235, 58 %), with 104 of 470 disks (22 %) anteriorly dislocated. UNet++ performed well in segmenting the TMJ anatomical structures, with a Dice coefficient of 0.67 for the disk, 0.91 for the condyle, and a Hausdorff distance of 2.8 mm for the articular eminence. The classification algorithm showed disk displacement classification comparable to human experts, with an AUC of 0.89-0.92 for the distance between the disk center and the eminence-condyle line. CONCLUSION A two-step automated model can accurately identify TMJ osseous structures and classify disk dislocation in pediatric TMJ MRI. This tool could assist clinicians who are not MRI experts when assessing pediatric TMJ disorders. CLINICAL SIGNIFICANCE Automated software that assists in locating the articular disk and surrounding structures and classifies disk displacement would improve the TMJ-MRI interpretation and the assessment of TMJ disorders in children.
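To make the two evaluation steps concrete, here is a minimal Python sketch of (i) the Dice coefficient used to score segmentations and (ii) a simplified centroid-versus-landmark-line displacement rule. The masks, landmark points, and threshold are illustrative assumptions, not the study's published algorithm.

```python
import numpy as np

def dice(mask_pred: np.ndarray, mask_true: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    intersection = np.logical_and(mask_pred, mask_true).sum()
    denom = mask_pred.sum() + mask_true.sum()
    return 2.0 * intersection / denom if denom else 1.0

def disk_displaced(disk_mask: np.ndarray, condyle_pt, eminence_pt, threshold: float = 0.0) -> bool:
    """Toy rule: call the disk displaced if its centroid lies on one side of the
    eminence-condyle line by more than a signed-distance threshold."""
    ys, xs = np.nonzero(disk_mask)
    centroid = np.array([xs.mean(), ys.mean()])          # (x, y) of the disk centroid
    a, b = np.asarray(condyle_pt, float), np.asarray(eminence_pt, float)
    direction = (b - a) / np.linalg.norm(b - a)
    normal = np.array([-direction[1], direction[0]])      # perpendicular to the line
    signed_dist = np.dot(centroid - a, normal)            # sign encodes the side of the line
    return signed_dist > threshold

# Tiny example masks and landmarks (placeholder data, not MRI).
pred = np.zeros((10, 10), bool)
pred[2:5, 2:5] = True
true = np.zeros((10, 10), bool)
true[2:5, 3:6] = True
print("Dice:", round(dice(pred, true), 3))
print("Displaced:", disk_displaced(pred, condyle_pt=(8, 8), eminence_pt=(1, 8)))
```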
Affiliation(s)
- Roxana Azma
  - Mike Petryk School of Dentistry, Faculty of Medicine and Dentistry, University of Alberta, Canada
  - Department of Radiology and Diagnostic Imaging, Faculty of Medicine and Dentistry, University of Alberta, Canada
- Abhilash Hareendranathan
  - Department of Radiology and Diagnostic Imaging, Faculty of Medicine and Dentistry, University of Alberta, Canada
- Mengxun Li
  - Department of Prosthodontics, School of Stomatology, Wuhan University, China
- Phu Nguyen
  - Department of Computing Science, Faculty of Science, University of Alberta, Canada
- Assefa S Wahd
  - Department of Radiology and Diagnostic Imaging, Faculty of Medicine and Dentistry, University of Alberta, Canada
- Jacob L Jaremko
  - Department of Radiology and Diagnostic Imaging, Faculty of Medicine and Dentistry, University of Alberta, Canada
- Fabiana T Almeida
  - Mike Petryk School of Dentistry, Faculty of Medicine and Dentistry, University of Alberta, Canada
8. Kleinhendler E, Pinkhasov A, Hayek S, Man A, Freund O, Perluk TM, Gershman E, Unterman A, Fire G, Bar-Shai A. Interpretation of cardiopulmonary exercise test by GPT - promising tool as a first step to identify normal results. Expert Rev Respir Med 2025; 19:371-378. [PMID: 40012496; DOI: 10.1080/17476348.2025.2474138]
Abstract
BACKGROUND Cardiopulmonary exercise testing (CPET) is used in the evaluation of unexplained dyspnea. However, its interpretation requires expertise that is often not available. We aim to evaluate the utility of ChatGPT (GPT) in interpreting CPET results. RESEARCH DESIGN AND METHODS This cross-sectional study included 150 patients who underwent CPET. Two expert pulmonologists categorized the results as normal or abnormal (cardiovascular, pulmonary, or other exercise limitations), being the gold standard. GPT versions 3.5 (GPT-3.5) and 4 (GPT-4) analyzed the same data using pre-defined structured inputs. RESULTS GPT-3.5 correctly interpreted 67% of the cases. It achieved a sensitivity of 75% and specificity of 98% in identifying normal CPET results. GPT-3.5 had varying results for abnormal CPET tests, depending on the limiting etiology. In contrast, GPT-4 demonstrated improvements in interpreting abnormal tests, with sensitivities of 83% and 92% for respiratory and cardiovascular limitations, respectively. Combining the normal CPET interpretations by both AI models resulted in 91% sensitivity and 98% specificity. Low work rate and peak oxygen consumption were independent predictors for inaccurate interpretations. CONCLUSIONS Both GPT-3.5 and GPT-4 succeeded in ruling out abnormal CPET results. This tool could be utilized to differentiate between normal and abnormal results.
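For reference, the sensitivity and specificity reported for identifying normal CPET results can be derived from the model's labels and the pulmonologists' gold standard as in this minimal sketch; the label lists below are invented for illustration and are not the study's data.

```python
from typing import List, Tuple

def sens_spec(pred: List[str], truth: List[str], positive: str = "normal") -> Tuple[float, float]:
    """Sensitivity and specificity of `pred` against `truth` for one positive class."""
    tp = sum(p == positive and t == positive for p, t in zip(pred, truth))
    fn = sum(p != positive and t == positive for p, t in zip(pred, truth))
    tn = sum(p != positive and t != positive for p, t in zip(pred, truth))
    fp = sum(p == positive and t != positive for p, t in zip(pred, truth))
    return tp / (tp + fn), tn / (tn + fp)

# Invented example labels (gold standard vs. an LLM's interpretation).
truth = ["normal", "cardiac", "normal", "respiratory", "normal", "cardiac"]
pred  = ["normal", "cardiac", "normal", "normal",      "normal", "respiratory"]
sens, spec = sens_spec(pred, truth)
print(f"Sensitivity for normal studies: {sens:.2f}, specificity: {spec:.2f}")
```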
Affiliation(s)
- Eyal Kleinhendler
  - Division of Pulmonary Medicine, Tel-Aviv Sourasky Medical Center, Tel Aviv, Israel
  - School of Medicine, Faculty of Medical and Health Sciences, Tel-Aviv University, Tel-Aviv, Israel
- Avital Pinkhasov
  - Department of Epidemiology and Preventive Medicine, School of Public Health, Faculty of Medical and Health Sciences, Tel-Aviv University, Tel-Aviv, Israel
- Samah Hayek
  - Department of Epidemiology and Preventive Medicine, School of Public Health, Faculty of Medical and Health Sciences, Tel-Aviv University, Tel-Aviv, Israel
  - Clalit Innovation, Clalit Health Services, Ramat Gan, Israel
- Avraham Man
  - Division of Pulmonary Medicine, Tel-Aviv Sourasky Medical Center, Tel Aviv, Israel
  - School of Medicine, Faculty of Medical and Health Sciences, Tel-Aviv University, Tel-Aviv, Israel
- Ophir Freund
  - Division of Pulmonary Medicine, Tel-Aviv Sourasky Medical Center, Tel Aviv, Israel
  - School of Medicine, Faculty of Medical and Health Sciences, Tel-Aviv University, Tel-Aviv, Israel
- Tal Moshe Perluk
  - Division of Pulmonary Medicine, Tel-Aviv Sourasky Medical Center, Tel Aviv, Israel
  - School of Medicine, Faculty of Medical and Health Sciences, Tel-Aviv University, Tel-Aviv, Israel
- Evgeni Gershman
  - Division of Pulmonary Medicine, Tel-Aviv Sourasky Medical Center, Tel Aviv, Israel
  - School of Medicine, Faculty of Medical and Health Sciences, Tel-Aviv University, Tel-Aviv, Israel
- Avraham Unterman
  - Division of Pulmonary Medicine, Tel-Aviv Sourasky Medical Center, Tel Aviv, Israel
  - School of Medicine, Faculty of Medical and Health Sciences, Tel-Aviv University, Tel-Aviv, Israel
- Gil Fire
  - Division of Pulmonary Medicine, Tel-Aviv Sourasky Medical Center, Tel Aviv, Israel
  - School of Medicine, Faculty of Medical and Health Sciences, Tel-Aviv University, Tel-Aviv, Israel
- Amir Bar-Shai
  - Division of Pulmonary Medicine, Tel-Aviv Sourasky Medical Center, Tel Aviv, Israel
  - School of Medicine, Faculty of Medical and Health Sciences, Tel-Aviv University, Tel-Aviv, Israel
9. Lo Mastro A, Grassi E, Berritto D, Russo A, Reginelli A, Guerra E, Grassi F, Boccia F. Artificial intelligence in fracture detection on radiographs: a literature review. Jpn J Radiol 2025; 43:551-585. [PMID: 39538068; DOI: 10.1007/s11604-024-01702-4]
Abstract
Fractures are one of the most common reasons for admission to the emergency department, affect individuals of all ages and regions worldwide, and can be misdiagnosed during radiologic examination. Accurate and timely diagnosis of fractures is crucial for patients, and artificial intelligence, which uses algorithms to imitate human intelligence in order to aid or enhance human performance, is a promising solution to address this issue. In the last few years, numerous commercially available algorithms have been developed to enhance radiology practice, and a large number of studies apply artificial intelligence to fracture detection. Recent contributions in the literature have described numerous advantages, showing that artificial intelligence performs better than doctors with less experience in interpreting musculoskeletal X-rays, and that assisting radiologists increases diagnostic accuracy and sensitivity, improves efficiency, and reduces interpretation time. Furthermore, algorithms perform better when they are trained with big data covering a wide range of fracture patterns and variants, and can provide standardized fracture identification across different radiologists thanks to structured reporting. In this review article, we discuss the use of artificial intelligence in fracture identification, its benefits and disadvantages, and its current potential impact on the field of radiology and radiomics.
Affiliation(s)
- Antonio Lo Mastro
  - Department of Radiology, University of Campania "Luigi Vanvitelli", Naples, Italy
- Enrico Grassi
  - Department of Orthopaedics, University of Florence, Florence, Italy
- Daniela Berritto
  - Department of Clinical and Experimental Medicine, University of Foggia, Foggia, Italy
- Anna Russo
  - Department of Radiology, University of Campania "Luigi Vanvitelli", Naples, Italy
- Alfonso Reginelli
  - Department of Radiology, University of Campania "Luigi Vanvitelli", Naples, Italy
- Egidio Guerra
  - Emergency Radiology Department, "Policlinico Riuniti Di Foggia", Foggia, Italy
- Francesca Grassi
  - Department of Radiology, University of Campania "Luigi Vanvitelli", Naples, Italy
- Francesco Boccia
  - Department of Radiology, University of Campania "Luigi Vanvitelli", Naples, Italy
10. Hanna MG, Pantanowitz L, Dash R, Harrison JH, Deebajah M, Pantanowitz J, Rashidi HH. Future of Artificial Intelligence-Machine Learning Trends in Pathology and Medicine. Mod Pathol 2025; 38:100705. [PMID: 39761872; DOI: 10.1016/j.modpat.2025.100705]
Abstract
Artificial intelligence (AI) and machine learning (ML) are transforming the field of medicine. Health care organizations are now starting to establish management strategies for integrating such platforms (AI-ML toolsets) that leverage the computational power of advanced algorithms to analyze data and to provide better insights that ultimately translate to enhanced clinical decision-making and improved patient outcomes. Emerging AI-ML platforms and trends in pathology and medicine are reshaping the field by offering innovative solutions to enhance diagnostic accuracy, operational workflows, clinical decision support, and clinical outcomes. These tools are also increasingly valuable in pathology research, in which they contribute to automated image analysis, biomarker discovery, drug development, clinical trials, and predictive analytics. Other related trends include the adoption of ML operations for managing models in clinical settings, the application of multimodal and multiagent AI to utilize diverse data sources, expedited translational research, and virtualized education for training and simulation. As the final chapter of our AI educational series, this review article delves into the current adoption, future directions, and transformative potential of AI-ML platforms in pathology and medicine, discussing their applications, benefits, challenges, and future perspectives.
Affiliation(s)
- Matthew G Hanna
  - Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania
  - Computational Pathology and AI Center of Excellence, University of Pittsburgh, Pittsburgh, Pennsylvania
- Liron Pantanowitz
  - Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania
  - Computational Pathology and AI Center of Excellence, University of Pittsburgh, Pittsburgh, Pennsylvania
- Rajesh Dash
  - Department of Pathology, Duke University, Durham, North Carolina
- James H Harrison
  - Department of Pathology, University of Virginia, Charlottesville, Virginia
- Hooman H Rashidi
  - Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania
  - Computational Pathology and AI Center of Excellence, University of Pittsburgh, Pittsburgh, Pennsylvania
11. Cai C, Xiao X, Wen Q, Luo Z, Wang S. The research progress of label-free optical imaging technology in intraoperative real-time navigation of parathyroid glands. Lasers Med Sci 2025; 40:154. [PMID: 40113605; DOI: 10.1007/s10103-025-04418-7]
Abstract
Intraoperative misidentification of or vascular injury to the parathyroid glands can lead to hypoparathyroidism and hypocalcemia, resulting in serious postoperative complications. Therefore, functional localization of the parathyroid glands during thyroid (parathyroid) surgery is a key focus and challenge in thyroid surgery. The clinical prospects of the various optical imaging technologies for intraoperative localization, identification, and protection of the parathyroid glands vary. However, label-free optical imaging technology is increasingly favored by surgeons due to its simplicity, efficiency, safety, real-time capability, and non-invasiveness. This manuscript focuses on the relatively well-researched near-infrared autofluorescence (NIRAF) and NIRAF-combined studies, including those integrating laser speckle imaging, artificial intelligence (AI) optimization, hardware integration, and optical path improvements. It also briefly introduces promising technologies, including Laser-Induced Fluorescence (LIF), Hyperspectral Imaging (HSI), Fluorescence Lifetime Imaging (FLIm), Laser-Induced Breakdown Spectroscopy (LIBS), Optical Coherence Tomography (OCT), and Dynamic Optical Contrast Imaging (DOCI). While these technologies are still in the early stages, with limited clinical application and standardization, current research highlights their potential for improving intraoperative parathyroid identification. Future studies should focus on refining these methods for broader clinical use.
Affiliation(s)
- Chang Cai
  - The Fifth Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Xiao Xiao
  - The Fifth Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Qiye Wen
  - The Fifth Clinical College of Guangzhou Medical University, Guangzhou, China
- Zifeng Luo
  - Hunan Institute of Technology, Hengyang, China
- Song Wang
  - The Fifth Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
12. Li R, Liu G, Zhang M, Rong D, Su Z, Shan Y, Lu J. Integration of artificial intelligence in radiology education: a requirements survey and recommendations from faculty radiologists, residents, and medical students. BMC Med Educ 2025; 25:380. [PMID: 40082889; PMCID: PMC11908051; DOI: 10.1186/s12909-025-06859-8]
Abstract
BACKGROUND To investigate the perspectives and expectations of faculty radiologists, residents, and medical students regarding the integration of artificial intelligence (AI) in radiology education, a survey was conducted to collect their opinions and attitudes on implementing AI in radiology education. METHODS An online questionnaire was used for this survey, and the participant anonymity was ensured. In total, 41 faculty radiologists, 38 residents, and 120 medical students from the authors' institution completed the questionnaire. RESULTS Most residents and students experience different levels of psychological stress during the initial stage of clinical practice, and this stress mainly stems from tight schedules, heavy workloads, apprehensions about making mistakes in diagnostic report writing, as well as academic or employment pressures. Although most of the respondents were not familiar with how AI is applied in radiology education, a substantial proportion of them expressed eagerness and enthusiasm for the integration of AI into radiology education. Especially among radiologists and residents, they showed a desire to utilize an AI-driven online platform for practicing radiology skills, including reading medical images and writing diagnostic reports, before engaging in clinical practice. Furthermore, faculty radiologists demonstrated strong enthusiasm for the notion that AI training platforms can enhance training efficiency and boost learners' confidence. Notably, only approximately half of the residents and medical students shared the instructors' optimism, with the remainder expressing neutrality or concern, emphasizing the need for robust AI feedback systems and user-centered designs. Moreover, the authors' team has developed a preliminary framework for an AI-driven radiology education training platform, consisting of four key components: imaging case sets, intelligent interactive learning, self-quiz, and online exam. CONCLUSIONS The integration of AI technology in radiology education has the potential to revolutionize the field by providing innovative solutions for enhancing competency levels and optimizing learning outcomes.
Affiliation(s)
- Ruili Li
  - Department of Radiology and Nuclear Medicine, Xuanwu Hospital, Capital Medical University, No.45 Changchun Street, Xicheng District, Beijing, 100053, China
- Guangxue Liu
  - Department of Natural Medicines, School of Pharmaceutical Sciences, Peking University Health Science Center, Beijing, 100191, China
- Miao Zhang
  - Department of Radiology and Nuclear Medicine, Xuanwu Hospital, Capital Medical University, No.45 Changchun Street, Xicheng District, Beijing, 100053, China
- Dongdong Rong
  - Department of Radiology and Nuclear Medicine, Xuanwu Hospital, Capital Medical University, No.45 Changchun Street, Xicheng District, Beijing, 100053, China
- Zhuangzhi Su
  - Department of Radiology and Nuclear Medicine, Xuanwu Hospital, Capital Medical University, No.45 Changchun Street, Xicheng District, Beijing, 100053, China
- Yi Shan
  - Department of Radiology and Nuclear Medicine, Xuanwu Hospital, Capital Medical University, No.45 Changchun Street, Xicheng District, Beijing, 100053, China
- Jie Lu
  - Department of Radiology and Nuclear Medicine, Xuanwu Hospital, Capital Medical University, No.45 Changchun Street, Xicheng District, Beijing, 100053, China
13. Sorantin E, Grasser MG, Hemmelmayr A, Heinze S. Let us talk about mistakes. Pediatr Radiol 2025; 55:420-428. [PMID: 39210092; PMCID: PMC11882668; DOI: 10.1007/s00247-024-06034-z]
Abstract
Unfortunately, errors and mistakes are part of life. Errors and mistakes can harm patients and incur unplanned costs. Errors may arise from various sources, which may be classified as systematic, latent, or active. Intrinsic and extrinsic factors also contribute to incorrect decisions. In addition to cognitive biases, our personality, socialization, personal chronobiology, and way of thinking (heuristic versus analytical) are influencing factors. Factors such as overload from private situations, long commuting times, and the complex environment of information technology must also be considered. The objective of this paper is to define and classify errors and mistakes in radiology, to discuss the influencing factors, and to present strategies for prevention. Hierarchical responsibilities and team "well-being" are also discussed.
Affiliation(s)
- Erich Sorantin
  - Division of Pediatric Radiology, Department of Radiology, Medical University Graz, Auenbruggerplatz 34, 8036, Graz, Austria
- Michael Georg Grasser
  - Division of Pediatric Radiology, Department of Radiology, Medical University Graz, Auenbruggerplatz 34, 8036, Graz, Austria
- Ariane Hemmelmayr
  - Division of Pediatric Radiology, Department of Radiology, Medical University Graz, Auenbruggerplatz 34, 8036, Graz, Austria
- Sarah Heinze
  - Diagnostic and Research Institute of Forensic Medicine, Medical University Graz, Neue Stiftingtalstrasse 6, 8010, Graz, Austria
14. Xiao L, Zhao Y, Li Y, Yan M, Liu Y, Liu M, Ning C. Developing an interpretable machine learning model for diagnosing gout using clinical and ultrasound features. Eur J Radiol 2025; 184:111959. [PMID: 39893823; DOI: 10.1016/j.ejrad.2025.111959]
Abstract
OBJECTIVE To develop a machine learning (ML) model using clinical data and ultrasound features for gout prediction, and apply SHapley Additive exPlanations (SHAP) for model interpretation. METHODS This study analyzed 609 patients' first metatarsophalangeal (MTP1) joint ultrasound data from two institutions. Institution 1 data (n = 571) were split into training cohort (TC) and internal testing cohort (ITC) (8:2 ratio), while Institution 2 data (n = 92) served as external testing cohort (ETC). Key predictors were selected using Random Forest (RF), Least Absolute Shrinkage and Selection Operator (LASSO), and Extreme Gradient Boosting (XGBoost) algorithms. Six ML models were evaluated using standard performance metrics, with SHAP analysis for model interpretation. RESULTS Five key predictors were identified: serum uric acid (SUA), deep learning (DL) model predictions, tophus, bone erosion, and double contour sign (DCs). The logistic regression (LR) model demonstrated optimal performance, achieving Area Under the Curve (AUC) values of 0.870 (95% CI: 0.820-0.920) in ITC and 0.854 (95% CI: 0.804-0.904) in ETC. The model showed good calibration with Brier scores of 0.138 and 0.159 in ITC and ETC, respectively. CONCLUSION This study developed an interpretable ML model for gout prediction and utilized SHAP to elucidate feature contributions, establishing a foundation for future applications in clinical decision support for gout diagnosis.
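The reported evaluation (AUC for discrimination, Brier score for calibration) can be reproduced in a few lines of scikit-learn. The sketch below uses synthetic stand-ins for the five selected predictors (SUA, DL prediction, tophus, bone erosion, DCs) rather than the study's patient data, and the coefficients generating the toy outcome are arbitrary.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 600

# Synthetic stand-ins for the five selected predictors.
sua = rng.normal(420, 80, n)        # serum uric acid
dl_pred = rng.uniform(0, 1, n)      # deep-learning model probability
tophus = rng.integers(0, 2, n)      # binary ultrasound findings
erosion = rng.integers(0, 2, n)
dcs = rng.integers(0, 2, n)
X = np.column_stack([sua, dl_pred, tophus, erosion, dcs])

# Toy outcome loosely driven by the predictors (arbitrary weights).
logit = 0.01 * (sua - 420) + 2 * dl_pred + 0.8 * tophus + 0.6 * erosion + 1.0 * dcs - 2
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
prob = clf.predict_proba(X_te)[:, 1]
print("AUC:", round(roc_auc_score(y_te, prob), 3))          # discrimination
print("Brier score:", round(brier_score_loss(y_te, prob), 3))  # calibration
```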
Affiliation(s)
- Lishan Xiao
  - Department of Ultrasound, the Affiliated Hospital of Qingdao University, Qingdao, China
- Yizhe Zhao
  - The School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
  - MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China
- Yuchen Li
  - Department of Ultrasound, the Affiliated Hospital of Qingdao University, Qingdao, China
- Mengmeng Yan
  - Department of Ultrasound, the Affiliated Hospital of Qingdao University, Qingdao, China
- Yongming Liu
  - Department of Ultrasound, Shandong Province Chronic Disease Hospital (Shandong Province Rehabilitation Center), Qingdao, China
- Manhua Liu
  - The School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
  - MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China
- Chunping Ning
  - Department of Ultrasound, the Affiliated Hospital of Qingdao University, Qingdao, China
15. Loi SJ, Ng W, Lai C, Chua ECP. Artificial intelligence education in medical imaging: A scoping review. J Med Imaging Radiat Sci 2025; 56:101798. [PMID: 39718290; DOI: 10.1016/j.jmir.2024.101798]
Abstract
BACKGROUND The rise of Artificial intelligence (AI) is reshaping healthcare, particularly in medical imaging. In this emerging field, clinical imaging personnel need proper training. However, formal AI education is lacking in medical curricula, coupled with a shortage of studies synthesising the availability of AI curricula tailored for clinical imaging personnel. This study therefore addresses the question "what are the current AI training programs or curricula for clinical imaging personnel?" METHODS This review follows Arksey & O'Malley's framework and the PRISMA Extension for Scoping Reviews checklist. Six electronic databases were searched between June and September 2023 and the screening process comprised two stages. Data extraction was performed using a standardised charting form. Data was summarised in table format and thematically. RESULTS Twenty-two studies were included in this review. The goals of the curriculum include enhancing AI knowledge through the delivery of educational content and encouraging practical application and skills development in AI. The learning objectives comprise technical proficiency and model development, foundational knowledge and understanding, literature review and information utilisation, and practical application and problem-solving skills. Course content spanned nine areas, from fundamentals of AI to imaging informatics. Most curricula adopted an online mode of delivery, and the program duration varied significantly. All programs utilised didactic presentations, with several incorporating additional teaching methods and activities to fulfil curriculum goals. The target audiences and participants primarily involved radiology residents, while the creators and instructors comprised a multidisciplinary team of radiology and AI personnel. Various tools and resources were utilised, encompassing online courses and cloud-based notebooks. The curricula were well-received by participants, and time constraint emerged as a major challenge. CONCLUSION This scoping review provides an overview of the AI educational programs from existing literature to guide future developments in AI educational curricula. Future education efforts should prioritise evidence-based curriculum design, expand training offerings to radiographers, increase content offerings in imaging informatics, and effectively utilise different teaching strategies and training tools and resources in the curriculum.
Affiliation(s)
- Su Jean Loi
  - Singapore Institute of Technology, 10 Dover Drive, 138683, Singapore
- Wenhui Ng
  - Singapore Institute of Technology, 10 Dover Drive, 138683, Singapore
- Christopher Lai
  - Singapore Institute of Technology, 10 Dover Drive, 138683, Singapore
16. Díaz Moreno A, Cano Alonso R, Fernández Alfonso A, Álvarez Vázquez A, Carrascoso Arranz J, López Alcolea J, García Castellanos D, Sanabria Greciano L, Recio Rodríguez M, Andreu-Vázquez C, Thuissard Vasallo IJ, Martínez De Vega V. Diagnostic Performance of an Artificial Intelligence Software for the Evaluation of Bone X-Ray Examinations Referred from the Emergency Department. Diagnostics (Basel) 2025; 15:491. [PMID: 40002642; PMCID: PMC11854177; DOI: 10.3390/diagnostics15040491]
Abstract
Background/Objectives: The growing use of artificial intelligence (AI) in musculoskeletal radiographs presents significant potential to improve diagnostic accuracy and optimize clinical workflow. However, assessing its performance in clinical environments is essential for successful implementation. We hypothesized that our AI applied to urgent bone X-rays could detect fractures, joint dislocations, and effusion with high sensitivity (Sens) and specificity (Spec). The specific objectives of our study were as follows: 1. To determine the Sens and Spec rates of AI in detecting bone fractures, dislocations, and elbow joint effusion compared to the gold standard (GS). 2. To evaluate the concordance rate between AI and radiology residents (RR). 3. To compare the proportion of doubtful results identified by AI and the RR, and the rates confirmed by GS. Methods: We conducted an observational, double-blind, retrospective study on adult bone X-rays (BXRs) referred from the emergency department at our center between October and November 2022, with a final sample of 792 BXRs, categorized into three groups: large joints, small joints, and long-flat bones. Our AI system detects fractures, dislocations, and elbow effusions, providing results as positive, negative, or doubtful. We compared the diagnostic performance of AI and the RR against a senior radiologist (GS). Results: The study population's median age was 48 years; 48.6% were male. Statistical analysis showed Sens = 90.6% and Spec = 98% for fracture detection by the RR, and 95.8% and 97.6% by AI. The RR achieved higher Sens (77.8%) and Spec (100%) for dislocation detection compared to AI. The Kappa coefficient between RR and AI was 0.797 for fractures in large joints, and concordance was considered acceptable for all other variables. We also analyzed doubtful cases and their confirmation by GS. Additionally, we analyzed findings not detected by AI, such as chronic fractures, arthropathy, focal lesions, and anatomical variants. Conclusions: This study assessed the impact of AI in a real-world clinical setting, comparing its performance with that of radiologists (both in training and senior). AI achieved high Sens, Spec, and AUC in bone fracture detection and showed strong concordance with the RR. In conclusion, AI has the potential to be a valuable screening tool, helping reduce missed diagnoses in clinical practice.
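The agreement statistic quoted above (Cohen's kappa between the AI and the radiology residents) can be computed as in this short sketch; the readings below are fabricated for illustration and are not the study's data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same cases."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement expected from each rater's marginal label frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Fabricated readings for 10 radiographs: "fracture" vs. "no fracture".
ai       = ["fracture", "no", "no", "fracture", "no", "fracture", "no", "no", "fracture", "no"]
resident = ["fracture", "no", "no", "fracture", "no", "no",       "no", "no", "fracture", "no"]
print("Cohen's kappa:", round(cohens_kappa(ai, resident), 3))
```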
Affiliation(s)
- Alejandro Díaz Moreno
  - Hospital Universitario QuironSalud Madrid, 28223 Madrid, Spain
  - Department of Medicine, Faculty of Medicine, Health and Sports, Universidad Europea de Madrid, 28670 Madrid, Spain
- Raquel Cano Alonso
  - Hospital Universitario QuironSalud Madrid, 28223 Madrid, Spain
  - Department of Medicine, Faculty of Medicine, Health and Sports, Universidad Europea de Madrid, 28670 Madrid, Spain
- Ana Fernández Alfonso
  - Hospital Universitario QuironSalud Madrid, 28223 Madrid, Spain
  - Department of Medicine, Faculty of Medicine, Health and Sports, Universidad Europea de Madrid, 28670 Madrid, Spain
- Ana Álvarez Vázquez
  - Hospital Universitario QuironSalud Madrid, 28223 Madrid, Spain
  - Department of Medicine, Faculty of Medicine, Health and Sports, Universidad Europea de Madrid, 28670 Madrid, Spain
- Javier Carrascoso Arranz
  - Hospital Universitario QuironSalud Madrid, 28223 Madrid, Spain
  - Department of Medicine, Faculty of Medicine, Health and Sports, Universidad Europea de Madrid, 28670 Madrid, Spain
- Julia López Alcolea
  - Hospital Universitario QuironSalud Madrid, 28223 Madrid, Spain
  - Department of Medicine, Faculty of Medicine, Health and Sports, Universidad Europea de Madrid, 28670 Madrid, Spain
- David García Castellanos
  - Hospital Universitario QuironSalud Madrid, 28223 Madrid, Spain
  - Department of Medicine, Faculty of Medicine, Health and Sports, Universidad Europea de Madrid, 28670 Madrid, Spain
- Lucía Sanabria Greciano
  - Hospital Universitario QuironSalud Madrid, 28223 Madrid, Spain
  - Department of Medicine, Faculty of Medicine, Health and Sports, Universidad Europea de Madrid, 28670 Madrid, Spain
- Manuel Recio Rodríguez
  - Hospital Universitario QuironSalud Madrid, 28223 Madrid, Spain
  - Department of Medicine, Faculty of Medicine, Health and Sports, Universidad Europea de Madrid, 28670 Madrid, Spain
- Cristina Andreu-Vázquez
  - Faculty of Biomedical and Health Science, Universidad Europea de Madrid, 28670 Madrid, Spain
- Vicente Martínez De Vega
  - Hospital Universitario QuironSalud Madrid, 28223 Madrid, Spain
  - Department of Medicine, Faculty of Medicine, Health and Sports, Universidad Europea de Madrid, 28670 Madrid, Spain
17. Gallo ML, Moriconi M, Phé V. Current applications and future perspectives of artificial intelligence in functional urology and neurourology: how far can we get? Minerva Urol Nephrol 2025; 77:33-42. [PMID: 40183181; DOI: 10.23736/s2724-6051.25.06195-6]
Abstract
In the last few years, the scientific community has seen increasing interest in the potential applications of artificial intelligence in medicine and healthcare. In this context, urology represents an area of rapid development, particularly in uro-oncology, where a wide range of applications has focused on prostate cancer diagnosis. Other urological branches are also starting to explore the potential advantages of AI in the diagnostic and therapeutic process, and functional urology and neurourology are among them. Although the experience in this area has been quite limited so far, some AI applications have already started to show potential benefits, especially for urodynamic and imaging interpretation, as well as for the development of AI-based predictive models of treatment response. A few experiences with the use of ChatGPT to answer questions on functional urology and neurourology topics have also been reported. Conversely, AI applications in functional urology surgery remain largely unexplored. This paper provides a critical overview of the current evidence on this topic, highlighting the potential benefits for the diagnostic workflow, therapeutic evaluation, and surgical training, as well as the current limitations that need to be addressed to enable the integration of these tools into clinical practice in the future.
Affiliation(s)
- Maria Lucia Gallo
  - Department of Minimally Invasive and Robotic Urologic Surgery, Careggi University Hospital, University of Florence, Florence, Italy
  - Sorbonne University, Department of Urology AP-HP, Tenon Hospital, Paris, France
- Martina Moriconi
  - Sorbonne University, Department of Urology AP-HP, Tenon Hospital, Paris, France
  - Department of Maternal-Infant and Urological Sciences, Sapienza University, Rome, Italy
- Véronique Phé
  - Sorbonne University, Department of Urology AP-HP, Tenon Hospital, Paris, France
18. de Oliveira MBM, Mendes F, Martins M, Cardoso P, Fonseca J, Mascarenhas T, Saraiva MM. The Role of Artificial Intelligence in Urogynecology: Current Applications and Future Prospects. Diagnostics (Basel) 2025; 15:274. [PMID: 39941204; PMCID: PMC11816405; DOI: 10.3390/diagnostics15030274]
Abstract
Artificial intelligence (AI) is the new medical hot topic, being applied mainly in specialties with a strong imaging component. In the domain of gynecology, AI has been tested and shown vast potential in several areas with promising results, with an emphasis on oncology. However, fewer studies have been made focusing on urogynecology, a branch of gynecology known for using multiple imaging exams (IEs) and tests in the management of women's pelvic floor health. This review aims to illustrate the current state of AI in urogynecology, namely with the use of machine learning (ML) and deep learning (DL) in diagnostics and as imaging tools, discuss possible future prospects for AI in this field, and go over its limitations that challenge its safe implementation.
Affiliation(s)
- Maria Beatriz Macedo de Oliveira
  - Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Francisco Mendes
  - Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
  - WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Miguel Martins
  - Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
  - WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Pedro Cardoso
  - Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
  - Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
  - WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- João Fonseca
  - CINTESIS@RISE, Department of Community Medicine, Information and Health Decision Sciences (MEDCIDS), Faculty of Medicine, University of Porto, 4200-427 Porto, Portugal
- Teresa Mascarenhas
  - Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
  - Department of Obstetrics and Gynecology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Miguel Mascarenhas Saraiva
  - Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
  - Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
  - WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
| |
Collapse
|
19
|
Glicksman M, Wang S, Yellapragada S, Robinson C, Orhurhu V, Emerick T. Artificial intelligence and pain medicine education: Benefits and pitfalls for the medical trainee. Pain Pract 2025; 25:e13428. [PMID: 39588809 DOI: 10.1111/papr.13428] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2024]
Abstract
OBJECTIVES Artificial intelligence (AI) represents an exciting and evolving technology that is increasingly being utilized across pain medicine. Large language models (LLMs) are one type of AI that has become particularly popular. Currently, there is a paucity of literature analyzing the impact that AI may have on trainee education. As such, we sought to assess the benefits and pitfalls that AI may have on pain medicine trainee education. Given the rapidly increasing popularity of LLMs, we particularly assessed how these LLMs may promote and hinder trainee education through a pilot quality improvement project. MATERIALS AND METHODS A comprehensive search of the existing literature regarding AI within medicine was performed to identify its potential benefits and pitfalls within pain medicine. The pilot project was approved by the UPMC Quality Improvement Review Committee (#4547). Three of the most commonly utilized LLMs at the initiation of this pilot study - ChatGPT Plus, Google Bard, and Bing AI - were asked a series of multiple choice questions to evaluate their ability to assist in learner education within pain medicine. RESULTS Potential benefits of AI within pain medicine trainee education include ease of use, imaging interpretation, procedural/surgical skills training, learner assessment, personalized learning experiences, the ability to summarize vast amounts of knowledge, and preparation for the future of pain medicine. Potential pitfalls include discrepancies between AI devices and associated cost differences, correlating radiographic findings to clinical significance, interpersonal/communication skills, educational disparities, bias/plagiarism/cheating concerns, lack of incorporation of private domain literature, and the absence of training specifically for pain medicine education. Regarding the quality improvement project, ChatGPT Plus answered the highest percentage of all questions correctly (16/17). The lowest correctness scores by LLMs were in answering first-order questions, with Google Bard and Bing AI answering 4/9 and 3/9 first-order questions correctly, respectively. Qualitative evaluation of the LLM-provided explanations in answering second- and third-order questions revealed some reasoning inconsistencies (e.g., providing flawed information in selecting the correct answer). CONCLUSIONS AI represents a continually evolving and promising modality to assist trainees pursuing a career in pain medicine. Still, limitations currently exist that may hinder its independent use in this setting. Future research exploring how AI may overcome these challenges is thus required. Until then, AI should be utilized as a supplementary tool within pain medicine trainee education and with caution.
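As a rough illustration of the scoring described in this pilot (not the authors' code), the minimal Python sketch below tallies per-model and per-question-order correctness from manually graded multiple-choice responses; the records shown are hypothetical placeholders.

```python
# Minimal sketch: tally multiple-choice correctness per model and per
# question order, in the spirit of the pilot described above.
from collections import defaultdict

# Each record: (model, question_order, model_answer, correct_answer)
# These entries are hypothetical placeholders, not the study's data.
graded = [
    ("ChatGPT Plus", 1, "B", "B"),
    ("Google Bard", 1, "C", "B"),
    ("Bing AI", 2, "A", "A"),
    # ... one record per question asked of each model
]

# model -> question order -> [number correct, number asked]
scores = defaultdict(lambda: defaultdict(lambda: [0, 0]))
for model, order, given, truth in graded:
    correct, total = scores[model][order]
    scores[model][order] = [correct + (given == truth), total + 1]

for model, by_order in scores.items():
    for order, (correct, total) in sorted(by_order.items()):
        print(f"{model}: order-{order} questions {correct}/{total} correct")
```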
Collapse
Affiliation(s)
- Michael Glicksman
- Department of Physical Medicine and Rehabilitation, University of Pittsburgh Medical Center (UPMC), Pittsburgh, Pennsylvania, USA
| | - Sheri Wang
- Department of Anesthesiology and Perioperative Medicine, University of Pittsburgh Medical Center (UPMC), Pittsburgh, Pennsylvania, USA
| | - Samir Yellapragada
- University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
| | - Christopher Robinson
- Department of Anesthesiology, Perioperative, and Pain Medicine, Harvard Medical School, Brigham and Women's Hospital, Boston, Massachusetts, USA
| | - Vwaire Orhurhu
- University of Pittsburgh Medical Center (UPMC), Susquehanna, Williamsport, Pennsylvania, USA
- MVM Health, East Stroudsburg, Pennsylvania, USA
| | - Trent Emerick
- Department of Anesthesiology and Perioperative Medicine, Chronic Pain Division, University of Pittsburgh Medical Center (UPMC), Pittsburgh, Pennsylvania, USA
| |
Collapse
|
20
|
Mohanarajan M, Salunke PP, Arif A, Iglesias Gonzalez PM, Ospina D, Benavides DS, Amudha C, Raman KK, Siddiqui HF. Advancements in Machine Learning and Artificial Intelligence in the Radiological Detection of Pulmonary Embolism. Cureus 2025; 17:e78217. [PMID: 40026993 PMCID: PMC11872007 DOI: 10.7759/cureus.78217] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/29/2025] [Indexed: 03/05/2025] Open
Abstract
Pulmonary embolism (PE) is a clinically challenging diagnosis whose presentation varies from silent to life-threatening symptoms. Timely diagnosis of the condition relies on clinical assessment, D-dimer testing and radiological imaging. Computed tomography pulmonary angiogram (CTPA) is considered the gold standard imaging modality, although some cases can be missed due to reader dependency, resulting in adverse patient outcomes. Hence, it is crucial to implement faster and more precise diagnostic strategies to help clinicians diagnose and treat PE patients promptly and mitigate morbidity and mortality. Machine learning (ML) and artificial intelligence (AI) are newly emerging tools in the medical field, including radiological imaging, with the potential to improve diagnostic efficacy. Our review of the studies showed that computer-aided detection (CAD) and AI tools displayed similar or superior sensitivity and specificity in identifying PE on CTPA compared with radiologists. Several tools demonstrated potential in identifying minor PE on radiological scans, showing promising ability to help clinicians substantially reduce missed cases. However, it is imperative to design sophisticated tools and conduct large clinical trials to integrate AI into everyday clinical settings and establish guidelines for its ethical application. ML and AI can also potentially help physicians formulate individualized management strategies to enhance patient outcomes.
Collapse
Affiliation(s)
| | | | - Ali Arif
- Medicine, Dow University of Health Sciences, Karachi, PAK
| | | | - David Ospina
- Internal Medicine, Universidad de los Andes, Bogotá, COL
| | | | - Chaithanya Amudha
- Medicine and Surgery, Saveetha Medical College and Hospital, Chennai, IND
| | - Kumareson K Raman
- Cardiology, Nottingham University Hospitals National Health Service (NHS) Trust, Nottingham, GBR
| | - Humza F Siddiqui
- Internal Medicine, Jinnah Postgraduate Medical Centre, Karachi, PAK
| |
Collapse
|
21
|
Jin Y, Liang L, Li J, Xu K, Zhou W, Li Y. Artificial intelligence and glaucoma: a lucid and comprehensive review. Front Med (Lausanne) 2024; 11:1423813. [PMID: 39736974 PMCID: PMC11682886 DOI: 10.3389/fmed.2024.1423813] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2024] [Accepted: 11/25/2024] [Indexed: 01/01/2025] Open
Abstract
Glaucoma is a pathologically irreversible eye disease. Because its concealed, gradual progressive changes are difficult to detect, clinical diagnosis and treatment of glaucoma are extremely challenging, and screening and monitoring of disease progression are therefore crucial. Artificial intelligence technology has advanced rapidly in many fields, particularly medicine, thanks to ongoing in-depth research and algorithm development, and research on and applications of machine learning and deep learning in glaucoma are evolving quickly. With its numerous advantages, artificial intelligence can raise the accuracy and efficiency of glaucoma screening and diagnosis to new heights and significantly cut the cost of diagnosis and treatment for the majority of patients. This review summarizes the relevant applications of artificial intelligence in the screening and diagnosis of glaucoma, reflects on the limitations and difficulties of its current application in this field, and presents prospects for the application of artificial intelligence to glaucoma and other eye diseases.
Collapse
Affiliation(s)
| | - Lina Liang
- Department of Eye Function Laboratory, Eye Hospital, China Academy of Chinese Medical Sciences, Beijing, China
| | | | | | | | | |
Collapse
|
22
|
Xuereb F, Portelli DJL. The knowledge and perception of patients in Malta towards artificial intelligence in medical imaging. J Med Imaging Radiat Sci 2024; 55:101743. [PMID: 39317135 DOI: 10.1016/j.jmir.2024.101743] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2024] [Revised: 07/23/2024] [Accepted: 07/31/2024] [Indexed: 09/26/2024]
Abstract
INTRODUCTION Artificial intelligence (AI) is becoming increasingly implemented in radiology, especially in image reporting. Patients' perceptions of AI integration in medical imaging are a relatively unexplored area that has received limited investigation in the literature. This study aimed to determine the current knowledge and perceptions of patients in Malta towards AI application in medical imaging. METHODS A cross-sectional study using a self-designed paper-based questionnaire, partly adapted with permission from two previous studies, was distributed in English or Maltese amongst eligible outpatients attending medical imaging examinations across public hospitals in Malta and Gozo in March 2023. RESULTS 280 questionnaires were analysed, corresponding to a 5.83% margin of error. 42.1% of patients indicated basic AI knowledge, while 36.4% reported minimal to no knowledge. Responses indicated favourable opinions towards the collaborative integration of humans and AI to improve healthcare. However, participants expressed a preference for doctors to retain final decision-making when AI is used. For some statements, a statistically significant association was observed between patients' perception of AI-based technology and their gender, age, and educational background. Notably, 92.1% expressed the importance of being informed whenever AI is to be utilised in their care. DISCUSSION As key stakeholders, patients should be actively involved when AI technology is used. Informing patients about the use of AI in medical imaging is important to cultivate trust, address ethical concerns, and help ensure that AI integration in healthcare systems aligns with patients' values and needs. CONCLUSION This study highlights the need to enhance AI literacy amongst patients, possibly through awareness campaigns or educational programmes. Additionally, clear policies relating to the use of AI in medical imaging and how such AI use is communicated to patients are necessary.
Collapse
Affiliation(s)
- Francesca Xuereb
- Department of Radiography, Faculty of Health Sciences, University of Malta, Msida, Malta.
| | - Dr Jonathan L Portelli
- Department of Radiography, Faculty of Health Sciences, University of Malta, Msida, Malta.
| |
Collapse
|
23
|
Matthijs L, Delande L, De Tobel J, Büyükçakir B, Claes P, Vandermeulen D, Thevissen P. Artificial intelligence and dental age estimation: development and validation of an automated stage allocation technique on all mandibular tooth types in panoramic radiographs. Int J Legal Med 2024; 138:2469-2479. [PMID: 39105781 DOI: 10.1007/s00414-024-03298-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2023] [Accepted: 07/16/2024] [Indexed: 08/07/2024]
Abstract
Age estimation in forensic odontology is mainly based on the development of permanent teeth. To register the developmental status of an examined tooth, staging techniques were developed. However, due to inappropriate calibration, uncertainties during stage allocation, and lack of experience, non-uniformity in stage allocation exists between expert observers. As a consequence, related age estimation results are inconsistent. An automated staging technique applicable to all tooth types can overcome this drawback. This study aimed to establish an integrated automated technique to stage the development of all mandibular tooth types and to compare their staging performances. Calibrated observers staged FDI teeth 31, 33, 34, 37 and 38 according to a ten-stage modified Demirjian staging technique. Using a standardised bounding box around each examined tooth, the retrospectively collected panoramic radiographs were cropped with Photoshop CC 2021® software (Adobe®, version 23.0). A gold standard set of 1639 radiographs was selected (n31 = 259, n33 = 282, n34 = 308, n37 = 390, n38 = 400) and input into a convolutional neural network (CNN) trained for optimal staging accuracy. The performance evaluation of the network was conducted in a five-fold cross-validation scheme. In each fold, the entire dataset was split into a training and a test set in a non-overlapping fashion between the folds (i.e., 80% and 20% of the dataset, respectively). Staging performances were calculated per tooth type and overall (accuracy, mean absolute difference, linearly weighted Cohen's Kappa and intra-class correlation coefficient). Overall, these metrics equalled 0.53, 0.71, 0.71, and 0.89, respectively. All staging performance indices were best for tooth 37 and worst for tooth 31. The largest number of misclassifications involved adjacent stages, and most misclassifications were observed across all available stages of tooth 31. Our findings suggest that the developmental status of mandibular molars can be taken into account in an automated approach for age estimation, while taking incisors into account may hinder age estimation.
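For context only (not taken from the study), the minimal Python sketch below shows how the reported agreement metrics (accuracy, mean absolute difference, linearly weighted Cohen's kappa) can be computed from predicted versus reference stages; the stage labels used here are made up.

```python
# Illustrative sketch: staging agreement metrics from predicted vs. reference
# stages, assuming a ten-stage scheme coded 0-9. Labels below are invented.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

reference = np.array([3, 4, 4, 7, 9, 2, 5, 6])
predicted = np.array([3, 5, 4, 7, 8, 2, 5, 7])

accuracy = accuracy_score(reference, predicted)
mean_abs_diff = np.mean(np.abs(reference - predicted))          # mean absolute difference
weighted_kappa = cohen_kappa_score(reference, predicted, weights="linear")

print(f"accuracy={accuracy:.2f}, MAD={mean_abs_diff:.2f}, weighted kappa={weighted_kappa:.2f}")
```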
Collapse
Affiliation(s)
- Lander Matthijs
- Oral Health Sciences and Dentistry, KU Leuven, Leuven, Belgium
| | - Lauren Delande
- Oral Health Sciences and Dentistry, KU Leuven, Leuven, Belgium
| | - Jannick De Tobel
- Diagnostic Sciences - Radiology, Ghent University, Ghent, Belgium.
| | - Barkin Büyükçakir
- Electrical Engineering - Processing Speech and Images, KU Leuven, Leuven, Belgium
| | - Peter Claes
- Electrical Engineering - Processing Speech and Images, KU Leuven, Leuven, Belgium
| | - Dirk Vandermeulen
- Electrical Engineering - Processing Speech and Images, KU Leuven, Leuven, Belgium
| | - Patrick Thevissen
- Imaging and Pathology - Forensic Odontology, KU Leuven, Leuven, Belgium
| |
Collapse
|
24
|
Kumari K, Pahuja SK, Kumar S. A Comprehensive Examination of ChatGPT's Contribution to the Healthcare Sector and Hepatology. Dig Dis Sci 2024; 69:4027-4043. [PMID: 39354272 DOI: 10.1007/s10620-024-08659-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/02/2024] [Accepted: 09/20/2024] [Indexed: 10/03/2024]
Abstract
Artificial Intelligence and Natural Language Processing technologies have demonstrated significant promise across several domains within the medical and healthcare sectors and have numerous potential uses in healthcare. One of the primary challenges in implementing ChatGPT in healthcare is the requirement for precise and up-to-date data. When sensitive medical information is involved, it is imperative to carefully address concerns regarding privacy and security when using GPT in the healthcare sector. This paper outlines ChatGPT and its relevance to the healthcare industry. It discusses the important aspects of ChatGPT's workflow and highlights the typical features of ChatGPT specifically tailored to the healthcare domain. The present review uses the ChatGPT model within the research domain to investigate disorders associated with the hepatic system. This review demonstrates the possible use of ChatGPT in supporting researchers and clinicians in analyzing and interpreting liver-related data, thereby improving disease diagnosis, prognosis, and patient care.
Collapse
Affiliation(s)
- Kabita Kumari
- Department of Instrumentation and Control Engineering, Dr B. R. Ambedkar National Institute of Technology, Jalandhar, Punjab, 144011, India.
| | - Sharvan Kumar Pahuja
- Department of Instrumentation and Control Engineering, Dr B. R. Ambedkar National Institute of Technology, Jalandhar, Punjab, 144011, India
| | - Sanjeev Kumar
- Biomedical Instrumentation Unit, CSIR-Central Scientific Instruments Organisation (CSIR-CSIO), Chandigarh, India
| |
Collapse
|
25
|
Lindner C. Contributing to the prediction of prognosis for treated hepatocellular carcinoma: Imaging aspects that sculpt the future. World J Gastrointest Surg 2024; 16:3377-3380. [PMID: 39575286 PMCID: PMC11577411 DOI: 10.4240/wjgs.v16.i10.3377] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/27/2024] [Revised: 08/19/2024] [Accepted: 08/28/2024] [Indexed: 09/27/2024] Open
Abstract
A novel nomogram model to predict the prognosis of hepatocellular carcinoma (HCC) treated with radiofrequency ablation and transarterial chemoembolization was recently published in the World Journal of Gastrointestinal Surgery. This model includes clinical and laboratory factors, but emerging imaging aspects, particularly from magnetic resonance imaging (MRI) and radiomics, could enhance the predictive accuracy thereof. Multiparametric MRI and deep learning radiomics models significantly improve prognostic predictions for the treatment of HCC. Incorporating advanced imaging features, such as peritumoral hypointensity and radiomics scores, alongside clinical factors, can refine prognostic models, aiding in personalized treatment and better predicting outcomes. This letter underscores the importance of integrating novel imaging techniques into prognostic tools to better manage and treat HCC.
Collapse
Affiliation(s)
- Cristian Lindner
- Department of Radiology, Faculty of Medicine, University of Concepcion, Concepcion 4030000, Biobío, Chile
- Department of Radiology, Hospital Regional Guillermo Grant Benavente, Concepcion 4030000, Biobío, Chile
| |
Collapse
|
26
|
Hachache R, Yahyaouy A, Riffi J, Tairi H, Abibou S, Adoui ME, Benjelloun M. Advancing personalized oncology: a systematic review on the integration of artificial intelligence in monitoring neoadjuvant treatment for breast cancer patients. BMC Cancer 2024; 24:1300. [PMID: 39434042 PMCID: PMC11495077 DOI: 10.1186/s12885-024-13049-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2024] [Accepted: 10/08/2024] [Indexed: 10/23/2024] Open
Abstract
PURPOSE Despite suffering from the same disease, each patient exhibits a distinct microbiological profile and variable reactivity to prescribed treatments. Most doctors typically use a standardized treatment approach for all patients suffering from a specific disease. Consequently, the challenge lies in the effectiveness of this standardized treatment and in adapting it to each individual patient. Personalized medicine is an emerging field in which doctors use diagnostic tests to identify the most effective medical treatments for each patient. Prognosis, disease monitoring, and treatment planning rely on manual, error-prone methods. Artificial intelligence (AI) uses predictive techniques capable of automating prognostic and monitoring processes, thus reducing the error rate associated with conventional methods. METHODS This paper conducts an analysis of the current literature, encompassing the period from January 2015 to 2023, based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). RESULTS In assessing 25 pertinent studies concerning the prediction of neoadjuvant treatment (NAT) response in breast cancer (BC) patients, the studies explored various imaging modalities (magnetic resonance imaging, ultrasound, etc.), evaluating results based on accuracy, sensitivity, and area under the curve. Additionally, the technologies employed, such as machine learning (ML), deep learning (DL), statistics, and hybrid models, were scrutinized. The presentation of datasets used for predicting pathological complete response (pCR) was also considered. CONCLUSION This paper seeks to unveil crucial insights into the application of AI techniques in personalized oncology, particularly in the monitoring and prediction of responses to NAT for BC patients. Finally, the authors suggest avenues for future research into AI-based monitoring systems.
Collapse
Affiliation(s)
- Rachida Hachache
- Department of Computer Sciences, LISAC Laboratory, Sidi Mohammed Ben Abdellah University, Fez, Morocco.
| | - Ali Yahyaouy
- Department of Computer Sciences, LISAC Laboratory, Sidi Mohammed Ben Abdellah University, Fez, Morocco
- USPN, La Maison Des Sciences Numériques, Paris, France
| | - Jamal Riffi
- Department of Computer Sciences, LISAC Laboratory, Sidi Mohammed Ben Abdellah University, Fez, Morocco
| | - Hamid Tairi
- Department of Computer Sciences, LISAC Laboratory, Sidi Mohammed Ben Abdellah University, Fez, Morocco
| | - Soukayna Abibou
- Department of Computer Sciences, LISAC Laboratory, Sidi Mohammed Ben Abdellah University, Fez, Morocco
| | - Mohammed El Adoui
- Computer Science Unit, Faculty of Engineering, University of Mons, Place du Parc, 20, Mons, 7000, Belgium
| | - Mohammed Benjelloun
- Computer Science Unit, Faculty of Engineering, University of Mons, Place du Parc, 20, Mons, 7000, Belgium
| |
Collapse
|
27
|
Dadgar M, Verstraete A, Maebe J, D'Asseler Y, Vandenberghe S. Assessing the deep learning based image quality enhancements for the BGO based GE omni legend PET/CT. EJNMMI Phys 2024; 11:86. [PMID: 39412633 PMCID: PMC11484998 DOI: 10.1186/s40658-024-00688-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2024] [Accepted: 10/01/2024] [Indexed: 10/19/2024] Open
Abstract
BACKGROUND This study investigates the integration of artificial intelligence (AI) in compensating for the lack of time-of-flight (TOF) of the GE Omni Legend PET/CT, which utilizes BGO scintillation crystals. METHODS The current study evaluates the image quality of the GE Omni Legend PET/CT using a NEMA IQ phantom. It investigates the impact of various deep learning precision levels (low, medium, high) on imaging performance across different data acquisition durations. Quantitative analysis was performed using metrics such as the contrast recovery coefficient (CRC), background variability (BV), and contrast-to-noise ratio (CNR). Additionally, patient images reconstructed with various deep learning precision levels are presented to illustrate the impact on image quality. RESULTS The deep learning approach significantly reduced background variability, particularly for the smallest region of interest. We observed improvements in background variability of 11.8%, 17.2%, and 14.3% for low, medium, and high precision deep learning, respectively. The results also indicate a significant improvement in larger spheres when considering both background variability and the contrast recovery coefficient. The high precision deep learning approach proved advantageous for short scans and exhibited potential in improving the detectability of small lesions. The example patient study shows that noise was suppressed in all deep learning cases, but low precision deep learning also reduced the lesion contrast (by about 30%), while high precision deep learning increased the contrast (by about 10%). CONCLUSION This study conducted a thorough evaluation of deep learning algorithms in the GE Omni Legend PET/CT scanner, demonstrating that these methods enhance image quality, with notable improvements in CRC and CNR, thereby optimizing lesion detectability and offering opportunities to reduce image acquisition time.
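For readers unfamiliar with the NEMA IQ-style metrics named above, the sketch below uses their common definitions with made-up ROI statistics; it is an illustrative assumption, not the vendor's or the authors' implementation.

```python
# Hedged sketch of common NEMA IQ-style image quality metrics (CRC, BV, CNR)
# from region-of-interest statistics. All numbers below are invented.
def contrast_recovery_coefficient(mean_sphere, mean_background, activity_ratio):
    """Hot-sphere CRC: measured contrast relative to the true activity ratio."""
    return ((mean_sphere / mean_background) - 1.0) / (activity_ratio - 1.0)

def background_variability(sd_background, mean_background):
    """Background variability as a percentage of the background mean."""
    return 100.0 * sd_background / mean_background

def contrast_to_noise_ratio(mean_sphere, mean_background, sd_background):
    """CNR: sphere-to-background contrast divided by background noise."""
    return (mean_sphere - mean_background) / sd_background

# Example with hypothetical ROI statistics (arbitrary units)
print(contrast_recovery_coefficient(22000.0, 6000.0, 4.0))
print(background_variability(450.0, 6000.0))
print(contrast_to_noise_ratio(22000.0, 6000.0, 450.0))
```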
Collapse
Affiliation(s)
- Meysam Dadgar
- Department of Electronics and Information Systems, Medical Image and Signal Processing, Ghent University, C. Heymanslaan 10, Ghent, Belgium.
| | - Amaryllis Verstraete
- Department of Electronics and Information Systems, Medical Image and Signal Processing, Ghent University, C. Heymanslaan 10, Ghent, Belgium
| | - Jens Maebe
- Department of Electronics and Information Systems, Medical Image and Signal Processing, Ghent University, C. Heymanslaan 10, Ghent, Belgium
| | - Yves D'Asseler
- Department of Electronics and Information Systems, Medical Image and Signal Processing, Ghent University, C. Heymanslaan 10, Ghent, Belgium
| | - Stefaan Vandenberghe
- Department of Electronics and Information Systems, Medical Image and Signal Processing, Ghent University, C. Heymanslaan 10, Ghent, Belgium
| |
Collapse
|
28
|
Hamd ZY, Alorainy AI, Aldhahi MI, Gareeballah A, F Alsubaie N, A Alshanaiber S, S Almudayhesh N, A Alyousef R, A AlNiwaider R, A Bin Moammar L, M Abuzaid M. Evaluation of the Impact of Artificial Intelligence on Clinical Practice of Radiology in Saudi Arabia. J Multidiscip Healthc 2024; 17:4745-4756. [PMID: 39411200 PMCID: PMC11476743 DOI: 10.2147/jmdh.s465508] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2024] [Accepted: 08/17/2024] [Indexed: 10/19/2024] Open
Abstract
Background Artificial Intelligence (AI) is becoming integral to the health sector, particularly radiology, because it enhances diagnostic accuracy and optimizes patient care. This study aims to assess the awareness and acceptance of AI among radiology professionals in Saudi Arabia, identifying the educational and training needs to bridge knowledge gaps and enhance AI-related competencies. Methods This cross-sectional observational study surveyed radiology professionals across various hospitals in Saudi Arabia. Participants were recruited through multiple channels, including direct invitations, emails, social media, and professional societies. The survey comprised four sections: demographic details, perceptions of AI, knowledge about AI, and willingness to adopt AI in clinical practice. Results Out of 374 radiology professionals surveyed, 45.2% acknowledged AI's significant impact on their field. Approximately 44% showed enthusiasm for AI adoption. However, 58.6% reported limited AI knowledge and inadequate training, with 43.6% identifying skill development and the complexity of AI educational programs as major barriers to implementation. Conclusion While radiology professionals in Saudi Arabia are generally positive about integrating AI into clinical practice, significant gaps in knowledge and training need to be addressed. Tailored educational programs are essential to fully leverage AI's potential in improving medical imaging practices and patient care outcomes.
Collapse
Affiliation(s)
- Zuhal Y Hamd
- Department of Radiological Sciences, College of Health and Rehabilitation Sciences, Princess Nourah bint Abdulrahman University, Riyadh, 11671, Saudi Arabia
| | - Amal I Alorainy
- Department of Radiological Sciences, College of Health and Rehabilitation Sciences, Princess Nourah bint Abdulrahman University, Riyadh, 11671, Saudi Arabia
| | - Monira I Aldhahi
- Department of Rehabilitation Sciences, College of Health and Rehabilitation Sciences, Princess Nourah bint Abdulrahman University, Riyadh, 11671, Saudi Arabia
| | - Awadia Gareeballah
- Department of Diagnostic Radiology, College of Applied Medical Science, Taibah University, Al-Madinah Al-Munawwarah, Saudi Arabia
| | - Naifah F Alsubaie
- Department of Radiological Sciences, College of Health and Rehabilitation Sciences, Princess Nourah bint Abdulrahman University, Riyadh, 11671, Saudi Arabia
| | - Shahad A Alshanaiber
- Department of Radiological Sciences, College of Health and Rehabilitation Sciences, Princess Nourah bint Abdulrahman University, Riyadh, 11671, Saudi Arabia
| | - Nehal S Almudayhesh
- Department of Radiological Sciences, College of Health and Rehabilitation Sciences, Princess Nourah bint Abdulrahman University, Riyadh, 11671, Saudi Arabia
| | - Raneem A Alyousef
- Department of Radiological Sciences, College of Health and Rehabilitation Sciences, Princess Nourah bint Abdulrahman University, Riyadh, 11671, Saudi Arabia
| | - Reem A AlNiwaider
- Department of Radiological Sciences, College of Health and Rehabilitation Sciences, Princess Nourah bint Abdulrahman University, Riyadh, 11671, Saudi Arabia
| | - Lamia A Bin Moammar
- Department of Radiological Sciences, College of Health and Rehabilitation Sciences, Princess Nourah bint Abdulrahman University, Riyadh, 11671, Saudi Arabia
| | - Mohamed M Abuzaid
- Medical Diagnostic Imaging Department, College of Health Sciences, University of Sharjah, Sharjah, United Arab Emirates
- Research Institute for Medical and Health Sciences, University of Sharjah, Sharjah, United Arab Emirates
| |
Collapse
|
29
|
Hu CL, Wang YC, Wu WF, Xi Y. Evaluation of AI-enhanced non-mydriatic fundus photography for diabetic retinopathy screening. Photodiagnosis Photodyn Ther 2024; 49:104331. [PMID: 39245303 DOI: 10.1016/j.pdpdt.2024.104331] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2024] [Revised: 08/26/2024] [Accepted: 09/04/2024] [Indexed: 09/10/2024]
Abstract
OBJECTIVE To assess the feasibility of using non-mydriatic fundus photography in conjunction with an artificial intelligence (AI) reading platform for large-scale screening of diabetic retinopathy (DR). METHODS In this study, we selected 120 patients with diabetes hospitalized in our institution from December 2019 to April 2021. Retinal imaging of 240 eyes was obtained using non-mydriatic fundus photography. The fundus images of these patients were divided into two groups based on the interpretation method. In Experiment Group 1, the images were analyzed and graded for DR diagnosis using an AI reading platform. In Experiment Group 2, the images were analyzed and graded for DR diagnosis by an associate chief physician in ophthalmology specializing in fundus diseases. Concurrently, all patients underwent the gold standard for DR diagnosis and grading, fundus fluorescein angiography (FFA), with the outcomes serving as the Control Group. The diagnostic value of the two methods was assessed by comparing the results of Experiment Groups 1 and 2 with those of the Control Group. RESULTS With the Control Group (FFA results) as the gold standard, no significant differences were observed between the two experimental groups regarding diagnostic sensitivity, specificity, false positive rate, false negative rate, positive predictive value, negative predictive value, Youden's index, Kappa value, and diagnostic accuracy (χ2 = 0.371, P > 0.05). CONCLUSION Compared with the manual reading group, the AI reading group showed no significant differences across all diagnostic indicators, exhibiting high sensitivity and specificity as well as a relatively high positive predictive value. Additionally, it demonstrated a high level of diagnostic consistency with the gold standard. This technology therefore holds potential for large-scale DR screening.
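A minimal sketch (with hypothetical counts, not the study's data) of the diagnostic indices compared above, derived from a 2x2 table against the FFA gold standard.

```python
# Illustrative sketch: diagnostic indices from a 2x2 table versus a gold
# standard. The counts passed in at the bottom are hypothetical.
def diagnostic_indices(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    youden = sensitivity + specificity - 1.0
    return dict(sensitivity=sensitivity, specificity=specificity,
                ppv=ppv, npv=npv, youden_index=youden)

# Hypothetical AI-reading results for 240 eyes graded against FFA
print(diagnostic_indices(tp=95, fp=8, fn=7, tn=130))
```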
Collapse
Affiliation(s)
- Chen-Liang Hu
- Department of Endocrinology, Huangshan City People's Hospital, Huangshan 245000, China.
| | - Yu-Chan Wang
- Department of Endocrinology, Huangshan City People's Hospital, Huangshan 245000, China
| | - Wen-Fang Wu
- Department of Ophthalmology, Huangshan City People's Hospital, Huangshan 245000, China
| | - Yu Xi
- Department of Endocrinology, Huangshan City People's Hospital, Huangshan 245000, China
| |
Collapse
|
30
|
Sheerah HA, AlSalamah S, Alsalamah SA, Lu CT, Arafa A, Zaatari E, Alhomod A, Pujari S, Labrique A. The Rise of Virtual Health Care: Transforming the Health Care Landscape in the Kingdom of Saudi Arabia: A Review Article. Telemed J E Health 2024; 30:2545-2554. [PMID: 38984415 DOI: 10.1089/tmj.2024.0114] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/11/2024] Open
Abstract
Background: The rise of virtual healthcare underscores the transformative influence of digital technologies in reshaping the healthcare landscape. As technology advances and the global demand for accessible and convenient healthcare services escalates, the virtual healthcare sector is gaining unprecedented momentum. Saudi Arabia, with its ambitious Vision 2030 initiative, is actively embracing digital innovation in the healthcare sector. Methods: In this narrative review, we discussed the key drivers and prospects of virtual healthcare in Saudi Arabia, highlighting its potential to enhance healthcare accessibility, quality, and patient outcomes. We also summarized the role of the COVID-19 pandemic in the digital transformation of healthcare in the country. Healthcare services provided by Seha Virtual Hospital in Saudi Arabia, the world's largest and Middle East's first virtual hospital, were also described. Finally, we proposed a roadmap for the future development of virtual health in the country. Results and conclusions: The integration of virtual healthcare into the existing healthcare system can enhance patient experiences, improve outcomes, and contribute to the overall well-being of the population. However, careful planning, collaboration, and investment are essential to overcome the challenges and ensure the successful implementation and sustainability of virtual healthcare in the country.
Collapse
Affiliation(s)
- Haytham A Sheerah
- Ministry of Health, Office of the Vice Minister of Health, Riyadh, Saudi Arabia
| | - Shada AlSalamah
- Information Systems Department, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Department of Digital Health and Innovation, Science Division, World Health Organization, Geneva, Switzerland
| | - Sara A Alsalamah
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University, Riyadh, Saudi Arabia
- Department of Computer Science, Virginia Tech, Blacksburg, Virginia, USA
| | - Chang-Tien Lu
- Department of Computer Science, Virginia Tech, Blacksburg, Virginia, USA
| | - Ahmed Arafa
- Department of Preventive Cardiology, National Cerebral and Cardiovascular Center, Suita, Japan
- Department of Public Health and Community Medicine, Faculty of Medicine, Beni-Suef University, Beni-Suef, Egypt
| | - Ezzedine Zaatari
- Ministry of Health, Office of the Vice Minister of Health, Riyadh, Saudi Arabia
| | - Abdulaziz Alhomod
- Ministry of Health, SEHA Virtual Hospital, Riyadh, Saudi Arabia
- Emergency Medicine Administration, King Fahad Medical City, Riyadh, Saudi Arabia
| | - Sameer Pujari
- Department of Digital Health and Innovation, Science Division, World Health Organization, Geneva, Switzerland
| | - Alain Labrique
- Department of International Health, Johns Hopkins University Bloomberg School of Public Health, Baltimore, Maryland, United States
| |
Collapse
|
31
|
Zhao X, Dong YH, Xu LY, Shen YY, Qin G, Zhang ZB. Deep bone oncology Diagnostics: Computed tomography based Machine learning for detection of bone tumors from breast cancer metastasis. J Bone Oncol 2024; 48:100638. [PMID: 39391583 PMCID: PMC11466622 DOI: 10.1016/j.jbo.2024.100638] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2024] [Revised: 09/12/2024] [Accepted: 09/21/2024] [Indexed: 10/12/2024] Open
Abstract
Purpose The objective of this study is to develop a novel diagnostic tool using deep learning and radiomics to determine whether bone tumors on CT images are metastases from breast cancer. By providing a more accurate and reliable method for identifying metastatic bone tumors, this approach aims to significantly improve clinical decision-making and patient management in the context of breast cancer. Methods This study utilized CT images of bone tumors from 178 patients, including 78 cases of breast cancer bone metastases and 100 cases of bone metastases not originating from breast cancer. The dataset was processed using the Medical Image Segmentation via Self-distilling TransUNet (MISSU) model for automated segmentation. Radiomics features were extracted from the segmented tumor regions using the Pyradiomics library, capturing various aspects of tumor phenotype. Feature selection was conducted using LASSO regression to identify the most predictive features. The model's performance was evaluated using ten-fold cross-validation, with metrics including accuracy, sensitivity, specificity, and the Dice similarity coefficient. Results The developed radiomics model using the SVM algorithm achieved high discriminatory power, with an AUC of 0.936 on the training set and 0.953 on the test set. The model's performance metrics demonstrated strong accuracy, sensitivity, and specificity. Specifically, the accuracy was 0.864 for the training set and 0.853 for the test set. Sensitivity values were 0.838 and 0.789 for the training and test sets, respectively, while specificity values were 0.896 and 0.933 for the training and test sets, respectively. These results indicate that the SVM model effectively distinguishes between bone metastases originating from breast cancer and those from other origins. Additionally, the average Dice similarity coefficient for the automated segmentation was 0.915, demonstrating a high level of agreement with manual segmentations. Conclusion This study demonstrates the potential of combining CT-based radiomics and deep learning for the accurate detection of bone metastases from breast cancer. The high performance metrics indicate that this approach can significantly enhance diagnostic accuracy, aiding in early detection and improving patient outcomes. Future research should focus on validating these findings on larger datasets, integrating the model into clinical workflows, and exploring its use in personalized treatment planning.
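As an illustrative sketch only, assuming scikit-learn as a stand-in for the authors' tooling and synthetic data in place of the real radiomics features, the pipeline below mirrors the described workflow of LASSO-based feature selection followed by an SVM evaluated with ten-fold cross-validated AUC.

```python
# Hedged sketch: LASSO feature selection + SVM with ten-fold cross-validated
# AUC, on synthetic stand-in data for a radiomics feature matrix.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for extracted radiomics features (178 lesions x 100 features)
X, y = make_classification(n_samples=178, n_features=100, n_informative=15, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectFromModel(Lasso(alpha=0.01, max_iter=10000))),  # LASSO-based selection
    ("svm", SVC(kernel="rbf", probability=True)),
])

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
aucs = cross_val_score(pipeline, X, y, cv=cv, scoring="roc_auc")
print(f"mean ten-fold AUC: {aucs.mean():.3f}")
```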
Collapse
Affiliation(s)
- Xiao Zhao
- Department of Applied Engineering, Zhejiang Institute of Economics and Trade, Hangzhou, Zhejiang Province, 310018, China
| | - Yue-han Dong
- Department of Applied Engineering, Zhejiang Institute of Economics and Trade, Hangzhou, Zhejiang Province, 310018, China
| | - Li-yu Xu
- Department of Applied Engineering, Zhejiang Institute of Economics and Trade, Hangzhou, Zhejiang Province, 310018, China
| | - Yan-yan Shen
- Department of Applied Engineering, Zhejiang Institute of Economics and Trade, Hangzhou, Zhejiang Province, 310018, China
| | - Gang Qin
- Department of Applied Engineering, Zhejiang Institute of Economics and Trade, Hangzhou, Zhejiang Province, 310018, China
| | - Zheng-bo Zhang
- Wuxi Hospital of Traditional Chinese Medicine, Wuxi, Jiangsu Province, 214071, China
| |
Collapse
|
32
|
van Erck D, Moeskops P, Schoufour JD, Weijs PJM, Scholte Op Reimer WJM, van Mourik MS, Planken RN, Vis MM, Baan J, Išgum I, Henriques JP, de Vos BD, Delewi R. Low muscle quality on a procedural computed tomography scan assessed with deep learning as a practical useful predictor of mortality in patients with severe aortic valve stenosis. Clin Nutr ESPEN 2024; 63:142-147. [PMID: 38944828 DOI: 10.1016/j.clnesp.2024.06.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2023] [Revised: 05/18/2024] [Accepted: 06/11/2024] [Indexed: 07/02/2024]
Abstract
BACKGROUND & AIMS Accurate diagnosis of sarcopenia requires evaluation of muscle quality, which refers to the amount of fat infiltration in muscle tissue. In this study, we aim to investigate whether we can independently predict mortality risk in transcatheter aortic valve implantation (TAVI) patients, using automatic deep learning algorithms to assess muscle quality on procedural computed tomography (CT) scans. METHODS This study included 1199 patients with severe aortic stenosis who underwent transcatheter aortic valve implantation (TAVI) between January 2010 and January 2020. A procedural CT scan was performed as part of the preprocedural-TAVI evaluation, and the scans were analyzed using deep-learning-based software to automatically determine skeletal muscle density (SMD) and intermuscular adipose tissue (IMAT). The association of SMD and IMAT with all-cause mortality was analyzed using a Cox regression model, adjusted for other known mortality predictors, including muscle mass. RESULTS The mean age of the participants was 80 ± 7 years, 53% were female. The median observation time was 1084 days, and the overall mortality rate was 39%. We found that the lowest tertile of muscle quality, as determined by SMD, was associated with an increased risk of mortality (HR 1.40 [95%CI: 1.15-1.70], p < 0.01). Similarly, low muscle quality as defined by high IMAT in the lowest tertile was also associated with increased mortality risk (HR 1.24 [95%CI: 1.01-1.52], p = 0.04). CONCLUSIONS Our findings suggest that deep learning-assessed low muscle quality, as indicated by fat infiltration in muscle tissue, is a practical, useful and independent predictor of mortality after TAVI.
Collapse
Affiliation(s)
- Dennis van Erck
- Department of Cardiology, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam, The Netherlands.
| | - Pim Moeskops
- Quantib - AI Radiology Software, Westblaak 106, 3012 KM, Rotterdam, The Netherlands
| | - Josje D Schoufour
- Center of Expertise Urban Vitality, Faculty of Health, Amsterdam University of Applied Science, Tafelbergweg 51, 1105 BD, Amsterdam, The Netherlands; Center of Expertise Urban Vitality, Faculty of Sports and Nutrition, Amsterdam University of Applied Sciences, Dokter Meurerlaan 8, 1067 SM, Amsterdam, The Netherlands
| | - Peter J M Weijs
- Center of Expertise Urban Vitality, Faculty of Sports and Nutrition, Amsterdam University of Applied Sciences, Dokter Meurerlaan 8, 1067 SM, Amsterdam, The Netherlands
| | - Wilma J M Scholte Op Reimer
- Department of Cardiology, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam, The Netherlands; Research Group Chronic Diseases, HU University of Applied Sciences, Heidelberglaan 15, 3584 CS, Utrecht, The Netherlands
| | - Martijn S van Mourik
- Department of Cardiology, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam, The Netherlands
| | - R Nils Planken
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam, The Netherlands
| | - Marije M Vis
- Department of Cardiology, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam, The Netherlands
| | - Jan Baan
- Department of Cardiology, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam, The Netherlands
| | - Ivana Išgum
- Quantib - AI Radiology Software, Westblaak 106, 3012 KM, Rotterdam, The Netherlands; Department of Radiology and Nuclear Medicine, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam, The Netherlands
| | - José P Henriques
- Department of Cardiology, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam, The Netherlands
| | - Bob D de Vos
- Quantib - AI Radiology Software, Westblaak 106, 3012 KM, Rotterdam, The Netherlands; Department of Biomedical Engineering and Physics, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam, The Netherlands
| | - Ronak Delewi
- Department of Cardiology, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam, The Netherlands
| |
Collapse
|
33
|
Daum N, Blaivas M, Goudie A, Hoffmann B, Jenssen C, Neubauer R, Recker F, Moga TV, Zervides C, Dietrich CF. Student ultrasound education, current view and controversies. Role of Artificial Intelligence, Virtual Reality and telemedicine. Ultrasound J 2024; 16:44. [PMID: 39331224 PMCID: PMC11436506 DOI: 10.1186/s13089-024-00382-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2024] [Accepted: 06/11/2024] [Indexed: 09/28/2024] Open
Abstract
The digitization of medicine will play an increasingly significant role in future years. In particular, telemedicine, Virtual Reality (VR) and innovative Artificial Intelligence (AI) systems offer tremendous potential in imaging diagnostics and are expected to shape ultrasound diagnostics and teaching significantly. However, it is crucial to consider the advantages and disadvantages of employing these new technologies and how best to teach and manage their use. This paper provides an overview of telemedicine, VR and AI in student ultrasound education, presenting current perspectives and controversies.
Collapse
Affiliation(s)
- Nils Daum
- Department of Anesthesiology and Intensive Care Medicine (CCM/CVK), Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität Zu Berlin, Berlin, Germany
- Brandenburg Institute for Clinical Ultrasound (BICUS) at Brandenburg Medical University, Neuruppin, Germany
| | - Michael Blaivas
- Department of Medicine, University of South Carolina School of Medicine, Columbia, SC, USA
| | | | - Beatrice Hoffmann
- Department of Emergency Medicine, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
| | - Christian Jenssen
- Brandenburg Institute for Clinical Ultrasound (BICUS) at Brandenburg Medical University, Neuruppin, Germany
- Department for Internal Medicine, Krankenhaus Märkisch Oderland, Strausberg, Germany
| | | | - Florian Recker
- Department of Obstetrics and Prenatal Medicine, University Hospital Bonn, Bonn, Germany
| | - Tudor Voicu Moga
- Department of Gastroenterology and Hepatology, "Victor Babeș" University of Medicine and Pharmacy, Piața Eftimie Murgu 2, 300041, Timișoara, Romania
- Center of Advanced Research in Gastroenterology and Hepatology, "Victor Babeș" University of Medicine and Pharmacy, 300041, Timisoara, Romania
| | | | - Christoph Frank Dietrich
- Department Allgemeine Innere Medizin (DAIM), Kliniken Hirslanden Beau Site, Salem und Permanence, Bern, Switzerland.
| |
Collapse
|
34
|
Kalidindi S. The Role of Artificial Intelligence in the Diagnosis of Melanoma. Cureus 2024; 16:e69818. [PMID: 39308840 PMCID: PMC11415605 DOI: 10.7759/cureus.69818] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/20/2024] [Indexed: 09/25/2024] Open
Abstract
The incidence of melanoma, the most aggressive form of skin cancer, continues to rise globally, particularly among fair-skinned populations (type I and II). Early detection is crucial for improving patient outcomes, and recent advancements in artificial intelligence (AI) have shown promise in enhancing the accuracy and efficiency of melanoma diagnosis and management. This review examines the role of AI in skin lesion diagnostics, highlighting two main approaches: machine learning, particularly convolutional neural networks (CNNs), and expert systems. AI techniques have demonstrated high accuracy in classifying dermoscopic images, often matching or surpassing dermatologists' performance. Integrating AI into dermatology has improved tasks, such as lesion classification, segmentation, and risk prediction, facilitating earlier and more accurate interventions. Despite these advancements, challenges remain, including biases in training data, interpretability issues, and integration of AI into clinical workflows. Ensuring diverse data representation and maintaining high standards of image quality are essential for reliable AI performance. Future directions involve the development of more sophisticated models, such as vision-language and multimodal models, and federated learning to address data privacy and generalizability concerns. Continuous validation and ethical integration of AI into clinical practice are vital for realizing its full potential for improving melanoma diagnosis and patient care.
Collapse
Affiliation(s)
- Sadhana Kalidindi
- Clinical Research, Apollo Radiology International Academy, Hyderabad, IND
| |
Collapse
|
35
|
Melesse GT, Amde T, Tezera R. Competency in evidence-based medicine and associated factors among medical radiology technologists in Addis Ababa, Ethiopia. J Med Radiat Sci 2024; 71:344-354. [PMID: 38445830 PMCID: PMC11569404 DOI: 10.1002/jmrs.777] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2023] [Accepted: 02/19/2024] [Indexed: 03/07/2024] Open
Abstract
INTRODUCTION Evidence-based medicine integrates clinical expertise, patient values and best research evidence in clinical decision-making. This study aimed to assess evidence-based medicine knowledge, attitudes, practices and associated factors among medical radiology technologists in Addis Ababa, Ethiopia. METHODS A cross-sectional study was conducted among 392 medical radiology technologists from May to August 2022 using a self-administered questionnaire. Bivariate and multivariate logistic regression identified factors associated with evidence-based medicine practice. RESULTS Most medical radiology technologists (57.7%) had moderate evidence-based medicine knowledge and 94.9% had favourable attitudes. However, 64.8% demonstrated poor evidence-based medicine practice. Factors significantly associated with better evidence-based medicine practice were moderate knowledge (AOR 1.949, 95% CI 1.155-3.291), good statistical understanding (AOR 1.824, 95% CI 1.135-2.930), sufficient time for evidence-based medicine (AOR 1.892, 95% CI 1.140-3.141), institutional support (AOR 2.093, 95% CI 1.271-3.440) and evidence-based medicine resource access (AOR 1.653, 95% CI 1.028-2.656). CONCLUSION Despite moderate knowledge and positive attitudes towards evidence-based medicine, most medical radiology technologists had suboptimal utilisation. Strategies to improve knowledge, ensure dedicated time, provide institutional support and resources could enhance evidence-based radiology practice.
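As an illustration of how adjusted odds ratios (AORs) with 95% CIs of the kind reported above are typically obtained, a minimal sketch assuming statsmodels and fabricated example data; the variable names are hypothetical and not taken from the study.

```python
# Hedged sketch: multivariate logistic regression and adjusted odds ratios
# on fabricated binary predictors and outcome.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 392
df = pd.DataFrame({
    "good_practice": rng.integers(0, 2, n),
    "moderate_knowledge": rng.integers(0, 2, n),
    "good_statistics": rng.integers(0, 2, n),
    "sufficient_time": rng.integers(0, 2, n),
    "institutional_support": rng.integers(0, 2, n),
})

model = smf.logit(
    "good_practice ~ moderate_knowledge + good_statistics + sufficient_time + institutional_support",
    data=df,
).fit(disp=False)

aor = np.exp(model.params)                    # adjusted odds ratios
ci = np.exp(model.conf_int())                 # 95% confidence intervals
print(pd.concat([aor.rename("AOR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```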
Collapse
Affiliation(s)
- Girma Tufa Melesse
- Department of Midwifery, Institute of HealthBule Hora UniversityBule HoraEthiopia
| | - Tewodros Amde
- Department of Medical Radiology, College of Medical and Health ScienceAddis Ababa UniversityAddis AbabaEthiopia
| | - Robel Tezera
- Department of Medical Radiology, College of Medical and Health ScienceAddis Ababa UniversityAddis AbabaEthiopia
| |
Collapse
|
36
|
Bratan T, Schneider D, Funer F, Heyen NB, Klausen A, Liedtke W, Lipprandt M, Salloch S, Langanke M. [Supporting medical and nursing activities with AI: recommendations for responsible design and use]. Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz 2024; 67:1039-1046. [PMID: 39017712 PMCID: PMC11349829 DOI: 10.1007/s00103-024-03918-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2023] [Accepted: 06/12/2024] [Indexed: 07/18/2024]
Abstract
Clinical decision support systems (CDSS) based on artificial intelligence (AI) are complex socio-technical innovations and are increasingly being used in medicine and nursing to improve the overall quality and efficiency of care, while also addressing limited financial and human resources. However, in addition to such intended clinical and organisational effects, far-reaching ethical, social and legal implications of AI-based CDSS on patient care and nursing are to be expected. To date, these normative-social implications have not been sufficiently investigated. The BMBF-funded project DESIREE (DEcision Support In Routine and Emergency HEalth Care: Ethical and Social Implications) has developed recommendations for the responsible design and use of clinical decision support systems. This article focuses primarily on ethical and social aspects of AI-based CDSS that could have a negative impact on patient health. Our recommendations are intended as additions to existing recommendations and are divided into the following action fields with relevance across all stakeholder groups: development, clinical use, information and consent, education and training, and (accompanying) research.
Collapse
Affiliation(s)
- Tanja Bratan
- Competence Center Neue Technologien, Fraunhofer-Institut für System- und Innovationsforschung ISI, Breslauer Straße 48, 76139, Karlsruhe, Deutschland.
| | - Diana Schneider
- Competence Center Neue Technologien, Fraunhofer-Institut für System- und Innovationsforschung ISI, Breslauer Straße 48, 76139, Karlsruhe, Deutschland
| | - Florian Funer
- Institut für Ethik, Geschichte und Philosophie der Medizin, Medizinische Hochschule Hannover (MHH), Hannover, Deutschland
- Institut für Ethik und Geschichte der Medizin, Eberhard Karls Universität Tübingen, Tübingen, Deutschland
| | - Nils B Heyen
- Competence Center Neue Technologien, Fraunhofer-Institut für System- und Innovationsforschung ISI, Breslauer Straße 48, 76139, Karlsruhe, Deutschland
| | - Andrea Klausen
- Uniklinik RWTH Aachen, Institut für Medizinische Informatik, Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen, Aachen, Deutschland
| | - Wenke Liedtke
- Theologische Fakultät, Universität Greifswald, Greifswald, Deutschland
| | - Myriam Lipprandt
- Uniklinik RWTH Aachen, Institut für Medizinische Informatik, Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen, Aachen, Deutschland
| | - Sabine Salloch
- Institut für Ethik, Geschichte und Philosophie der Medizin, Medizinische Hochschule Hannover (MHH), Hannover, Deutschland
| | - Martin Langanke
- Angewandte Ethik/Fachbereich Soziale Arbeit, Evangelische Hochschule Rheinland-Westfalen-Lippe, Bochum, Deutschland
| |
Collapse
|
37
|
Yuan Y, Pan B, Mo H, Wu X, Long Z, Yang Z, Zhu J, Ming J, Qiu L, Sun Y, Yin S, Zhang F. Deep learning-based computer-aided diagnosis system for the automatic detection and classification of lateral cervical lymph nodes on original ultrasound images of papillary thyroid carcinoma: a prospective diagnostic study. Endocrine 2024; 85:1289-1299. [PMID: 38570388 DOI: 10.1007/s12020-024-03808-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/25/2023] [Accepted: 03/26/2024] [Indexed: 04/05/2024]
Abstract
PURPOSE This study aims to develop a deep learning-based computer-aided diagnosis (CAD) system for the automatic detection and classification of lateral cervical lymph nodes (LNs) on original ultrasound images of papillary thyroid carcinoma (PTC) patients. METHODS A retrospective data set of 1801 cervical LN ultrasound images from 1675 patients with PTC and a prospective test set including 185 images from 160 patients were collected. Four different deep learning models were trained and validated on the retrospective data set. The best model was selected for CAD system development and compared with three sonographers in the retrospective and prospective test sets. RESULTS The Deformable Detection Transformer (DETR) model showed the highest diagnostic efficacy, with a mean average precision score of 86.3% in the retrospective test set, and was therefore used in constructing the CAD system. The detection performance of the CAD system was superior to that of the junior and intermediate sonographers, with accuracies of 86.3% and 92.4% in the retrospective and prospective test sets, respectively. The classification performance of the CAD system was better than that of all sonographers, with areas under the curve (AUCs) of 94.4% and 95.2% in the retrospective and prospective test sets, respectively. CONCLUSIONS This study developed a Deformable DETR model-based CAD system for automatically detecting and classifying lateral cervical LNs on original ultrasound images, which showed excellent diagnostic efficacy and clinical utility. It can be an important tool for assisting sonographers in the diagnostic process.
Collapse
Affiliation(s)
- Yuquan Yuan
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Graduate School of Medicine, Chongqing Medical University, Chongqing, China
| | - Bin Pan
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Graduate School of Medicine, Chongqing Medical University, Chongqing, China
| | - Hongbiao Mo
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
| | - Xing Wu
- College of Computer Science, Chongqing University, Chongqing, China
| | - Zhaoxin Long
- College of Computer Science, Chongqing University, Chongqing, China
| | - Zeyu Yang
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Graduate School of Medicine, Chongqing Medical University, Chongqing, China
| | - Junping Zhu
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
| | - Jing Ming
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
| | - Lin Qiu
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
| | - Yiceng Sun
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
| | - Supeng Yin
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China.
- Chongqing Hospital of Traditional Chinese Medicine, Chongqing, China.
| | - Fan Zhang
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China.
- Graduate School of Medicine, Chongqing Medical University, Chongqing, China.
- Chongqing Hospital of Traditional Chinese Medicine, Chongqing, China.
| |
Collapse
|
38
|
Aytaç E, Gönen M, Tatli S, Balgetir F, Dogan S, Tuncer T. Large vessel occlusion detection by non-contrast CT using artificial intelligence. Neurol Sci 2024; 45:4391-4397. [PMID: 38622451 PMCID: PMC11306655 DOI: 10.1007/s10072-024-07522-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2023] [Accepted: 04/06/2024] [Indexed: 04/17/2024]
Abstract
INTRODUCTION Computer vision models have been used to diagnose some disorders using computed tomography (CT) and magnetic resonance (MR) images. In this work, our objective is to detect large and small brain vessel occlusion in acute ischemic stroke using a deep feature engineering model. METHODS We used our own dataset, which contains CT images from 324 patients in two classes: large and small brain vessel occlusion. We divided the collected images into horizontal and vertical patches. A pretrained AlexNet was then utilized to extract deep features; the fc6 and fc7 layers (the sixth and seventh fully connected layers) were used to extract deep features from the created patches. The features generated from the patches were concatenated to form the final feature vector. To select the best combination from this final feature vector, an iterative selector (iterative neighborhood component analysis, INCA) was used, and it selected 43 features. These 43 features were used for classification. In the last phase, we used a kNN classifier with tenfold cross-validation. RESULTS Using the 43 selected features and a kNN classifier, our AlexNet-based deep feature engineering model surprisingly attained 100% classification accuracy. CONCLUSION This perfect classification performance demonstrates that our approach can separate large from small brain vessel occlusions on non-contrast CT images. In this respect, the model can assist neurology experts in identifying patients with a chance of early recanalization.
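A minimal sketch of the deep feature engineering pipeline summarized above is given below, under stated assumptions: an ImageNet-pretrained torchvision AlexNet stands in for the study's network, scikit-learn's SelectKBest replaces the iterative NCA (INCA) selector (which has no off-the-shelf implementation used here), and random images replace the 324-patient CT patch dataset.

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()
preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

def fc6_fc7_features(img: Image.Image) -> np.ndarray:
    """Concatenate AlexNet fc6 and fc7 activations (4096 + 4096 values) for one image."""
    x = preprocess(img.convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        x = alexnet.avgpool(alexnet.features(x)).flatten(1)
        fc6 = alexnet.classifier[1](alexnet.classifier[0](x))                           # Linear 9216 -> 4096
        fc7 = alexnet.classifier[4](alexnet.classifier[3](alexnet.classifier[2](fc6)))  # Linear 4096 -> 4096
    return torch.cat([fc6, fc7], dim=1).squeeze(0).numpy()

# Synthetic stand-in data; in the study, features come from horizontal/vertical CT patches
# and are merged per image before selection.
rng = np.random.default_rng(0)
images = [Image.fromarray(rng.integers(0, 255, (224, 224, 3), dtype=np.uint8)) for _ in range(40)]
y = np.array([0, 1] * 20)   # 0 = small vessel occlusion, 1 = large vessel occlusion (placeholder labels)

X = np.stack([fc6_fc7_features(im) for im in images])
clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=43), KNeighborsClassifier())
scores = cross_val_score(clf, X, y, cv=10)   # tenfold cross-validation, as in the paper
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```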
Collapse
Affiliation(s)
- Emrah Aytaç
- Department of Neurology, Faculty of Medicine, Fırat University, Elazig, Turkey
| | - Murat Gönen
- Department of Neurology, Faculty of Medicine, Fırat University, Elazig, Turkey
| | - Sinan Tatli
- Department of Neurology, Faculty of Medicine, Fırat University, Elazig, Turkey
| | - Ferhat Balgetir
- Department of Neurology, Faculty of Medicine, Fırat University, Elazig, Turkey.
| | - Sengul Dogan
- Department of Digital Forensics Engineering, College of Technology, Fırat University, Elazig, Turkey
| | - Turker Tuncer
- Department of Digital Forensics Engineering, College of Technology, Fırat University, Elazig, Turkey
| |
Collapse
|
39
|
Vij O, Calver H, Myall N, Dey M, Kouranloo K. Evaluating the competency of ChatGPT in MRCP Part 1 and a systematic literature review of its capabilities in postgraduate medical assessments. PLoS One 2024; 19:e0307372. [PMID: 39083455 PMCID: PMC11290618 DOI: 10.1371/journal.pone.0307372] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2024] [Accepted: 07/03/2024] [Indexed: 08/02/2024] Open
Abstract
OBJECTIVES As a large language model (LLM) trained on a large data set, ChatGPT can perform a wide array of tasks without additional training. We evaluated ChatGPT's performance on UK postgraduate medical examinations through a systematic literature review and by testing it on the Membership of the Royal College of Physicians (MRCP) Part 1 examination. METHODS The Medline, Embase and Cochrane databases were searched. Articles discussing the performance of ChatGPT in UK postgraduate medical examinations were included in the systematic review. Information on exam performance, including percentage scores and pass/fail rates, was extracted. MRCP UK Part 1 sample paper questions were entered into ChatGPT-3.5 and -4 four times each, and the responses were marked against the correct answers provided. RESULTS Twelve studies were ultimately included in the systematic literature review. ChatGPT-3.5 scored 66.4% and ChatGPT-4 scored 84.8% on the MRCP Part 1 sample paper, 4.4 and 22.8 percentage points above the historical pass mark, respectively. Both ChatGPT-3.5 and -4 performed significantly above the historical pass mark for MRCP Part 1, indicating they would likely pass this examination. ChatGPT-3.5 failed eight of the nine postgraduate exams it attempted, scoring on average 5.0 percentage points below the pass mark. ChatGPT-4 passed nine of the eleven postgraduate exams it attempted, scoring on average 13.56 percentage points above the pass mark. ChatGPT-4 performed significantly better than ChatGPT-3.5 in all examinations on which both models were tested. CONCLUSION ChatGPT-4 performed above passing level for the majority of UK postgraduate medical examinations it was tested on. ChatGPT is prone to hallucinations, fabrications and reduced explanation accuracy, which could limit its potential as a learning tool. The potential for these errors is inherent to LLMs and may always be a limitation for medical applications of ChatGPT.
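A hedged sketch of the scoring workflow implied by this abstract follows: each multiple-choice item is sent to a chat model several times and the answers are marked against a key. The model identifiers, prompt wording, and example item are assumptions rather than the authors' exact protocol, and an OPENAI_API_KEY must be available in the environment.

```python
from collections import defaultdict
from openai import OpenAI

client = OpenAI()

questions = [
    # Hypothetical MRCP-style item: stem, lettered options, correct letter.
    {"stem": "A 62-year-old man presents with sudden painless visual loss...",
     "options": {"A": "Central retinal artery occlusion", "B": "Acute angle-closure glaucoma",
                 "C": "Optic neuritis", "D": "Retinal detachment", "E": "Vitreous haemorrhage"},
     "answer": "A"},
]

def ask(model_name: str, item: dict) -> str:
    """Return the single letter the model chooses for one question."""
    prompt = (item["stem"] + "\n"
              + "\n".join(f"{k}. {v}" for k, v in item["options"].items())
              + "\nAnswer with a single letter only.")
    response = client.chat.completions.create(
        model=model_name,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()[:1].upper()

results = defaultdict(list)
for model_name in ("gpt-3.5-turbo", "gpt-4"):   # assumed API model names
    for _ in range(4):                          # four runs per model, as in the study
        correct = sum(ask(model_name, q) == q["answer"] for q in questions)
        results[model_name].append(correct / len(questions))

for model_name, runs in results.items():
    print(model_name, [f"{score:.0%}" for score in runs])
```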
Collapse
Affiliation(s)
- Oliver Vij
- Guy’s Hospital, Guy’s and St Thomas’ NHS Foundation Trust, Great Maze Pond, London, United Kingdom
| | | | - Nikki Myall
- British Medical Association Library, BMA House, Tavistock Square, London, United Kingdom
| | - Mrinalini Dey
- Centre for Rheumatic Diseases, Denmark Hill Campus King’s College London, London, United Kingdom
| | - Koushan Kouranloo
- Department of Rheumatology, University Hospital Lewisham, London, United Kingdom
- School of Medicine, Cedar House, University of Liverpool, Liverpool, United Kingdom
| |
Collapse
|
40
|
Kaya K, Gietzen C, Hahnfeldt R, Zoubi M, Emrich T, Halfmann MC, Sieren MM, Elser Y, Krumm P, Brendel JM, Nikolaou K, Haag N, Borggrefe J, Krüchten RV, Müller-Peltzer K, Ehrengut C, Denecke T, Hagendorff A, Goertz L, Gertz RJ, Bunck AC, Maintz D, Persigehl T, Lennartz S, Luetkens JA, Jaiswal A, Iuga AI, Pennig L, Kottlors J. Generative Pre-trained Transformer 4 analysis of cardiovascular magnetic resonance reports in suspected myocarditis: A multicenter study. J Cardiovasc Magn Reson 2024; 26:101068. [PMID: 39079602 PMCID: PMC11414660 DOI: 10.1016/j.jocmr.2024.101068] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2024] [Revised: 07/04/2024] [Accepted: 07/24/2024] [Indexed: 09/13/2024] Open
Abstract
BACKGROUND Diagnosing myocarditis relies on multimodal data, including cardiovascular magnetic resonance (CMR), clinical symptoms, and blood values. The correct interpretation and integration of CMR findings require radiological expertise and knowledge. We aimed to investigate the performance of Generative Pre-trained Transformer 4 (GPT-4), a large language model, for report-based medical decision-making in the context of cardiac MRI for suspected myocarditis. METHODS This retrospective study includes CMR reports from 396 patients with suspected myocarditis from eight centers. CMR reports and patient data, including blood values, age, and further clinical information, were provided to GPT-4 and to radiologists with 1 (resident 1), 2 (resident 2), and 4 years (resident 3) of experience in CMR and knowledge of the 2018 Lake Louise Criteria. The final impression of the report regarding the radiological assessment of whether myocarditis was present was not provided. The performance of GPT-4 and the human readers was compared to a consensus reading by two board-certified radiologists with 8 and 10 years of experience in CMR. Sensitivity, specificity, and accuracy were calculated. RESULTS GPT-4 yielded an accuracy of 83%, sensitivity of 90%, and specificity of 78%, which was comparable to the physician with 1 year of experience (R1: 86%, 90%, 84%, p = 0.14) and lower than that of the more experienced physicians (R2: 89%, 86%, 91%, p = 0.007 and R3: 91%, 85%, 96%, p < 0.001). GPT-4 and the human readers showed higher diagnostic performance when results from T1- and T2-mapping sequences were part of the reports, reaching statistical significance for residents 1 and 3 (p = 0.004 and p = 0.02, respectively). CONCLUSION GPT-4 yielded good accuracy for diagnosing myocarditis based on CMR reports in a large, multicenter dataset and therefore holds potential to serve as a diagnostic decision-support tool, particularly for less experienced physicians. Further studies are required to explore the full potential and elucidate educational aspects of integrating large language models into medical decision-making.
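A small sketch of the evaluation step described here: binary "myocarditis present" calls are compared against the consensus reading to derive accuracy, sensitivity, and specificity. The label vectors below are fabricated placeholders, not study data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

consensus = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 1])   # 1 = myocarditis per consensus reading
reader    = np.array([1, 1, 0, 1, 1, 0, 0, 0, 0, 1])   # 1 = myocarditis per GPT-4 (or a resident)

tn, fp, fn, tp = confusion_matrix(consensus, reader, labels=[0, 1]).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"accuracy={accuracy:.2f}, sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```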
Collapse
Affiliation(s)
- Kenan Kaya
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany.
| | - Carsten Gietzen
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Robert Hahnfeldt
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Maher Zoubi
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Bonn, University of Bonn, Bonn, Germany
| | - Tilman Emrich
- Department of Diagnostic and Interventional Radiology, University Medical Center of the Johannes-Gutenberg-University, Mainz, Germany; Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, South Carolina, USA; German Centre for Cardiovascular Research, Partner Site Rhine-Main, Mainz, Germany
| | - Moritz C Halfmann
- Department of Diagnostic and Interventional Radiology, University Medical Center of the Johannes-Gutenberg-University, Mainz, Germany
| | - Malte Maria Sieren
- Department of Radiology and Nuclear Medicine, UKSH, Campus Lübeck, Lübeck, Germany; Institute of Interventional Radiology, UKSH, Campus Lübeck, Lübeck, Germany
| | - Yannic Elser
- Department of Radiology and Nuclear Medicine, UKSH, Campus Lübeck, Lübeck, Germany
| | - Patrick Krumm
- Department of Radiology, Diagnostic and Interventional Radiology, University of Tübingen, Tübingen, Germany
| | - Jan M Brendel
- Department of Radiology, Diagnostic and Interventional Radiology, University of Tübingen, Tübingen, Germany
| | - Konstantin Nikolaou
- Department of Radiology, Diagnostic and Interventional Radiology, University of Tübingen, Tübingen, Germany
| | - Nina Haag
- Institute for Radiology, Neuroradiology and Nuclear Medicine Johannes Wesling University Hospital/Mühlenkreiskliniken, Bochum/Minden, Germany
| | - Jan Borggrefe
- Institute for Radiology, Neuroradiology and Nuclear Medicine Johannes Wesling University Hospital/Mühlenkreiskliniken, Bochum/Minden, Germany
| | - Ricarda von Krüchten
- Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
| | - Katharina Müller-Peltzer
- Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
| | - Constantin Ehrengut
- Department of Diagnostic and Interventional Radiology, University of Leipzig, Leipzig, Germany
| | - Timm Denecke
- Department of Diagnostic and Interventional Radiology, University of Leipzig, Leipzig, Germany
| | | | - Lukas Goertz
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Roman J Gertz
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Alexander Christian Bunck
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - David Maintz
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Thorsten Persigehl
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Simon Lennartz
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Julian A Luetkens
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Bonn, University of Bonn, Bonn, Germany
| | - Astha Jaiswal
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Andra Iza Iuga
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Lenhard Pennig
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Jonathan Kottlors
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| |
Collapse
|
41
|
Woessner AE, Anjum U, Salman H, Lear J, Turner JT, Campbell R, Beaudry L, Zhan J, Cornett LE, Gauch S, Quinn KP. Identifying and training deep learning neural networks on biomedical-related datasets. Brief Bioinform 2024; 25:bbae232. [PMID: 39041915 PMCID: PMC11264291 DOI: 10.1093/bib/bbae232] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2023] [Revised: 03/26/2024] [Indexed: 07/24/2024] Open
Abstract
This manuscript describes the development of a resources module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' (https://github.com/NIGMS/NIGMS-Sandbox). The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module delivers learning materials on implementing deep learning algorithms for biomedical image data in an interactive format that uses appropriate cloud resources for data access and analyses. Biomedical datasets are widely used in both research and clinical settings, but interpreting them becomes more difficult for professionally trained clinicians and researchers as their size and breadth increase. Artificial intelligence, and specifically deep learning neural networks, has recently become an important tool in novel biomedical research. However, use is limited by computational requirements and confusion regarding different neural network architectures. The goal of this learning module is to introduce types of deep learning neural networks and cover practices that are commonly used in biomedical research. The module is subdivided into four submodules that cover classification, augmentation, segmentation, and regression. Each complementary submodule was written on the Google Cloud Platform and contains detailed code and explanations, as well as quizzes and challenges to facilitate user training. Overall, the goal of this learning module is to enable users to identify and integrate the correct type of neural network with their data while highlighting the ease of use of cloud computing for implementing neural networks.
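To give a flavor of what the classification and augmentation submodules cover, the sketch below fine-tunes a small pretrained network on an image folder with simple augmentation. The folder layout ("data/train/<class>/*.png"), the choice of ResNet-18, and the hyperparameters are assumptions for illustration, not the module's actual notebooks.

```python
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as T
from torch.utils.data import DataLoader
from torchvision.datasets import ImageFolder

transform = T.Compose([
    T.Resize((224, 224)),
    T.RandomHorizontalFlip(),          # basic augmentation, in the spirit of the augmentation submodule
    T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = ImageFolder("data/train", transform=transform)   # assumed class-per-folder layout
train_dl = DataLoader(train_ds, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))  # new classification head
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in train_dl:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```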
Collapse
Affiliation(s)
- Alan E Woessner
- Arkansas Integrative Metabolic Research Center, University of Arkansas, Fayetteville, AR
- Department of Biomedical Engineering, University of Arkansas, Fayetteville, AR
| | - Usman Anjum
- Arkansas Integrative Metabolic Research Center, University of Arkansas, Fayetteville, AR
- Department of Computer Science, University of Cincinnati, Cincinnati, OH
- Department of Computer Science and Computer Engineering, University of Arkansas, Fayetteville, AR
| | - Hadi Salman
- Arkansas Integrative Metabolic Research Center, University of Arkansas, Fayetteville, AR
- Department of Computer Science and Computer Engineering, University of Arkansas, Fayetteville, AR
| | - Jacob Lear
- Department of Computer Science and Computer Engineering, University of Arkansas, Fayetteville, AR
| | | | - Ross Campbell
- Health Data and AI, Deloitte Consulting LLP, Arlington VA, USA
| | | | - Justin Zhan
- Arkansas Integrative Metabolic Research Center, University of Arkansas, Fayetteville, AR
- Department of Computer Science, University of Cincinnati, Cincinnati, OH
- Department of Computer Science and Computer Engineering, University of Arkansas, Fayetteville, AR
| | - Lawrence E Cornett
- Department of Physiology and Cell Biology, University of Arkansas for Medical Sciences, Little Rock, AR
| | - Susan Gauch
- Department of Computer Science and Computer Engineering, University of Arkansas, Fayetteville, AR
| | - Kyle P Quinn
- Arkansas Integrative Metabolic Research Center, University of Arkansas, Fayetteville, AR
- Department of Biomedical Engineering, University of Arkansas, Fayetteville, AR
| |
Collapse
|
42
|
Maksim R, Buczyńska A, Sidorkiewicz I, Krętowski AJ, Sierko E. Imaging and Metabolic Diagnostic Methods in the Stage Assessment of Rectal Cancer. Cancers (Basel) 2024; 16:2553. [PMID: 39061192 PMCID: PMC11275086 DOI: 10.3390/cancers16142553] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2024] [Revised: 07/04/2024] [Accepted: 07/12/2024] [Indexed: 07/28/2024] Open
Abstract
Rectal cancer (RC) is a prevalent malignancy with significant morbidity and mortality rates. The accurate staging of RC is crucial for optimal treatment planning and patient outcomes. This review aims to summarize the current literature on imaging and metabolic diagnostic methods used in the stage assessment of RC. Various imaging modalities play a pivotal role in the initial evaluation and staging of RC. These include magnetic resonance imaging (MRI), computed tomography (CT), and endorectal ultrasound (ERUS). MRI has emerged as the gold standard for local staging due to its superior soft tissue resolution and ability to assess tumor invasion depth, lymph node involvement, and the presence of extramural vascular invasion. CT imaging provides valuable information about distant metastases and helps determine the feasibility of surgical resection. ERUS aids in assessing tumor depth, perirectal lymph nodes, and sphincter involvement. Understanding the strengths and limitations of each diagnostic modality is essential for accurate staging and treatment decisions in RC. Furthermore, the integration of multiple imaging and metabolic methods, such as PET/CT or PET/MRI, can enhance diagnostic accuracy and provide valuable prognostic information. Thus, a literature review was conducted to investigate and assess the effectiveness and accuracy of diagnostic methods, both imaging and metabolic, in the stage assessment of RC.
Collapse
Affiliation(s)
- Rafał Maksim
- Department of Radiotherapy, Maria Skłodowska-Curie Białystok Oncology Center, 15-027 Bialystok, Poland;
| | - Angelika Buczyńska
- Clinical Research Centre, Medical University of Bialystok, 15-276 Bialystok, Poland; (A.B.); (A.J.K.)
| | - Iwona Sidorkiewicz
- Clinical Research Support Centre, Medical University of Bialystok, 15-276 Bialystok, Poland;
| | - Adam Jacek Krętowski
- Clinical Research Centre, Medical University of Bialystok, 15-276 Bialystok, Poland; (A.B.); (A.J.K.)
- Department of Endocrinology, Diabetology and Internal Medicine, Medical University of Bialystok, 15-276 Bialystok, Poland
| | - Ewa Sierko
- Department of Oncology, Medical University of Bialystok, 15-276 Bialystok, Poland
- Department of Radiotherapy I, Maria Sklodowska-Curie Bialystok Oncology Centre, 15-027 Bialystok, Poland
| |
Collapse
|
43
|
Zhu B, Yang Y. Quality assessment of abdominal CT images: an improved ResNet algorithm with dual-attention mechanism. Am J Transl Res 2024; 16:3099-3107. [PMID: 39114678 PMCID: PMC11301486 DOI: 10.62347/wkns8633] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2024] [Accepted: 05/19/2024] [Indexed: 08/10/2024]
Abstract
OBJECTIVES To enhance medical image classification using a Dual-attention ResNet model and investigate the impact of attention mechanisms on model performance in a clinical setting. METHODS We utilized a dataset of medical images and implemented a Dual-attention ResNet model, integrating self-attention and spatial attention mechanisms. The model was trained and evaluated using binary and five-level quality classification tasks, leveraging standard evaluation metrics. RESULTS Our findings demonstrated substantial performance improvements with the Dual-attention ResNet model in both classification tasks. In the binary classification task, the model achieved an accuracy of 0.940, outperforming the conventional ResNet model. Similarly, in the five-level quality classification task, the Dual-attention ResNet model attained an accuracy of 0.757, highlighting its efficacy in capturing nuanced distinctions in image quality. CONCLUSIONS The integration of attention mechanisms within the ResNet model resulted in significant performance enhancements, showcasing its potential for improving medical image classification tasks. These results underscore the promising role of attention mechanisms in facilitating more accurate and discriminative analysis of medical images, thus holding substantial promise for clinical applications in radiology and diagnostics.
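The abstract does not specify the exact attention modules, so the sketch below illustrates one common way to add a "dual-attention" head, combining channel and spatial reweighting, to a ResNet-18 backbone in PyTorch. The backbone choice, reduction ratio, and binary class count are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                 nn.Linear(channels // reduction, channels))
    def forward(self, x):
        w = self.mlp(x.mean(dim=(2, 3)))                 # squeeze spatial dimensions
        return x * torch.sigmoid(w)[:, :, None, None]

class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)
    def forward(self, x):
        pooled = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(pooled))

class DualAttentionResNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # keep conv stages only
        self.channel_att = ChannelAttention(512)
        self.spatial_att = SpatialAttention()
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, num_classes))
    def forward(self, x):
        f = self.features(x)
        f = self.spatial_att(self.channel_att(f))        # dual-attention reweighting
        return self.head(f)

logits = DualAttentionResNet(num_classes=2)(torch.randn(4, 3, 224, 224))
print(logits.shape)   # torch.Size([4, 2]) for the binary quality task
```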
Collapse
Affiliation(s)
- Boying Zhu
- Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China
- University of Chinese Academy of Sciences, Beijing 100049, China
| | - Yuanyuan Yang
- Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China
- University of Chinese Academy of Sciences, Beijing 100049, China
| |
Collapse
|
44
|
Micali G, Corallo F, Pagano M, Giambò FM, Duca A, D’Aleo P, Anselmo A, Bramanti A, Garofano M, Mazzon E, Bramanti P, Cappadona I. Artificial Intelligence and Heart-Brain Connections: A Narrative Review on Algorithms Utilization in Clinical Practice. Healthcare (Basel) 2024; 12:1380. [PMID: 39057522 PMCID: PMC11276532 DOI: 10.3390/healthcare12141380] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2024] [Revised: 07/04/2024] [Accepted: 07/08/2024] [Indexed: 07/28/2024] Open
Abstract
Cardiovascular and neurological diseases are a major cause of mortality and morbidity worldwide. Such diseases require careful monitoring to effectively manage their progression. Artificial intelligence (AI) offers valuable tools for this purpose through its ability to analyse data and identify predictive patterns. This review evaluated the application of AI in cardiac and neurological diseases and its clinical impact on the general population. We reviewed studies on the application of AI in the neurological and cardiological fields. Our search was performed on the PubMed, Web of Science, Embase and Cochrane Library databases. Of the initial 5862 studies, 23 met the inclusion criteria. The studies showed that the most commonly used algorithms in these clinical fields are random forests and artificial neural networks, followed by logistic regression and support vector machines. In addition, an ECG-AI algorithm based on convolutional neural networks has been developed and widely used in several studies for the detection of atrial fibrillation with good accuracy. AI has great potential to support physicians in interpretation, diagnosis, risk assessment and disease management.
Collapse
Affiliation(s)
- Giuseppe Micali
- IRCCS Centro Neurolesi Bonino-Pulejo, Via Palermo, S.S. 113, C.da Casazza, 98124 Messina, Italy; (G.M.)
| | - Francesco Corallo
- IRCCS Centro Neurolesi Bonino-Pulejo, Via Palermo, S.S. 113, C.da Casazza, 98124 Messina, Italy; (G.M.)
| | - Maria Pagano
- IRCCS Centro Neurolesi Bonino-Pulejo, Via Palermo, S.S. 113, C.da Casazza, 98124 Messina, Italy; (G.M.)
| | - Fabio Mauro Giambò
- IRCCS Centro Neurolesi Bonino-Pulejo, Via Palermo, S.S. 113, C.da Casazza, 98124 Messina, Italy; (G.M.)
| | - Antonio Duca
- IRCCS Centro Neurolesi Bonino-Pulejo, Via Palermo, S.S. 113, C.da Casazza, 98124 Messina, Italy; (G.M.)
| | - Piercataldo D’Aleo
- IRCCS Centro Neurolesi Bonino-Pulejo, Via Palermo, S.S. 113, C.da Casazza, 98124 Messina, Italy; (G.M.)
| | - Anna Anselmo
- IRCCS Centro Neurolesi Bonino-Pulejo, Via Palermo, S.S. 113, C.da Casazza, 98124 Messina, Italy; (G.M.)
| | - Alessia Bramanti
- Department of Medicine, Surgery and Dentistry, University of Salerno, 84081 Baronissi, Italy
| | - Marina Garofano
- Department of Medicine, Surgery and Dentistry, University of Salerno, 84081 Baronissi, Italy
| | - Emanuela Mazzon
- IRCCS Centro Neurolesi Bonino-Pulejo, Via Palermo, S.S. 113, C.da Casazza, 98124 Messina, Italy; (G.M.)
| | - Placido Bramanti
- IRCCS Centro Neurolesi Bonino-Pulejo, Via Palermo, S.S. 113, C.da Casazza, 98124 Messina, Italy; (G.M.)
- Faculty of Psychology, Università degli Studi eCampus, Via Isimbardi 10, 22060 Novedrate, Italy
| | - Irene Cappadona
- IRCCS Centro Neurolesi Bonino-Pulejo, Via Palermo, S.S. 113, C.da Casazza, 98124 Messina, Italy; (G.M.)
| |
Collapse
|
45
|
Wang Y, Fu W, Zhang Y, Wang D, Gu Y, Wang W, Xu H, Ge X, Ye C, Fang J, Su L, Wang J, He W, Zhang X, Feng R. Constructing and implementing a performance evaluation indicator set for artificial intelligence decision support systems in pediatric outpatient clinics: an observational study. Sci Rep 2024; 14:14482. [PMID: 38914707 PMCID: PMC11196575 DOI: 10.1038/s41598-024-64893-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2023] [Accepted: 06/13/2024] [Indexed: 06/26/2024] Open
Abstract
Artificial intelligence (AI) decision support systems in pediatric healthcare have a complex application background. Because an AI decision support system (AI-DSS) can be costly, it is crucial, once deployed, to monitor its performance, interpret its success, and update it to ensure consistent ongoing success. Therefore, a set of evaluation indicators was explicitly developed for AI-DSS in pediatric healthcare, enabling continuous and systematic performance monitoring. The study unfolded in two stages. The first stage encompassed establishing the evaluation indicator set through a literature review, a focus group interview, and expert consultation using the Delphi method. In the second stage, weight analysis was conducted: subjective weights were calculated from expert opinions using the analytic hierarchy process, while objective weights were determined using the entropy weight method. Subjective and objective weights were then synthesized to form the combined weights. In the two rounds of expert consultation, the authority coefficients were 0.834 and 0.846, and Kendall's coefficient of concordance was 0.135 in Round 1 and 0.312 in Round 2. The final evaluation indicator set has three first-class indicators, fifteen second-class indicators, and forty-seven third-class indicators. Indicator I-1 (Organizational performance) carries the highest weight, followed by Indicator I-2 (Societal performance) and Indicator I-3 (User experience performance), in both the objective and combined weights. Conversely, 'Societal performance' holds the most weight among the subjective weights, followed by 'Organizational performance' and 'User experience performance'. In this study, a comprehensive and specialized set of evaluation indicators for AI-DSS in the pediatric outpatient clinic was established and then implemented. Continuous evaluation still requires long-term data collection to optimize the weight proportions of the established indicators.
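A worked sketch of the weighting step follows: entropy weights are computed from an indicator score matrix and then fused with subjective (AHP-derived) weights. The sample matrix, the subjective weights, and the multiplicative combination rule are illustrative assumptions, not the study's actual data or exact formula.

```python
import numpy as np

# rows = evaluated periods/settings, columns = three first-class indicators (hypothetical scores)
X = np.array([[0.82, 0.74, 0.91],
              [0.67, 0.88, 0.85],
              [0.75, 0.69, 0.78],
              [0.90, 0.81, 0.88]])

def entropy_weights(X: np.ndarray) -> np.ndarray:
    """Entropy weight method: less uniform columns (more divergence) get more weight."""
    P = X / X.sum(axis=0)                      # column-wise proportions
    k = 1.0 / np.log(X.shape[0])
    entropy = -k * (P * np.log(P, where=P > 0, out=np.zeros_like(P))).sum(axis=0)
    divergence = 1.0 - entropy
    return divergence / divergence.sum()

subjective = np.array([0.30, 0.45, 0.25])      # e.g., derived from an AHP pairwise-comparison matrix
objective = entropy_weights(X)
combined = subjective * objective / (subjective * objective).sum()   # one common combination rule

print("objective:", objective.round(3))
print("combined: ", combined.round(3))
```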
Collapse
Affiliation(s)
- Yingwen Wang
- Nursing Department, Children's Hospital of Fudan University, Shanghai, 201102, China
| | - Weijia Fu
- Medical Information Center, Children's Hospital of Fudan University, Shanghai, 201102, China
| | - Yuejie Zhang
- School of Computer Science, Fudan University, Shanghai, 200438, China
| | - Daoyang Wang
- School of Public Health, Fudan University, Shanghai, 200032, China
| | - Ying Gu
- Nursing Department, Children's Hospital of Fudan University, Shanghai, 201102, China
| | - Weibing Wang
- School of Public Health, Fudan University, Shanghai, 200032, China
| | - Hong Xu
- Nephrology Department, Children's Hospital of Fudan University, Shanghai, 201102, China
| | - Xiaoling Ge
- Statistical and Data Management Center, Children's Hospital of Fudan University, Shanghai, 201102, China
| | - Chengjie Ye
- Medical Information Center, Children's Hospital of Fudan University, Shanghai, 201102, China
| | - Jinwu Fang
- School of Public Health, Fudan University, Shanghai, 200032, China
| | - Ling Su
- Statistical and Data Management Center, Children's Hospital of Fudan University, Shanghai, 201102, China
| | - Jiayu Wang
- National Health Commission Key Laboratory of Neonatal Diseases (Fudan University), Children's Hospital of Fudan University, Shanghai, 201102, China
| | - Wen He
- Respiratory Department, Children's Hospital of Fudan University, Shanghai, 201102, China
| | - Xiaobo Zhang
- Respiratory Department, Children's Hospital of Fudan University, Shanghai, 201102, China.
| | - Rui Feng
- School of Computer Science, Fudan University, Shanghai, 200438, China.
- School of Computer Science, Fudan University, 2005 Songhu Road, Shanghai, 200438, China.
| |
Collapse
|
46
|
Yao S, Yao D, Huang Y, Qin S, Chen Q. A machine learning model based on clinical features and ultrasound radiomics features for pancreatic tumor classification. Front Endocrinol (Lausanne) 2024; 15:1381822. [PMID: 38957447 PMCID: PMC11218542 DOI: 10.3389/fendo.2024.1381822] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/04/2024] [Accepted: 06/03/2024] [Indexed: 07/04/2024] Open
Abstract
Objective This study aimed to construct a machine learning model using clinical variables and ultrasound radiomics features to predict the benign or malignant nature of pancreatic tumors. Methods A total of 242 patients with pancreatic tumors who were hospitalized at the First Affiliated Hospital of Guangxi Medical University between January 2020 and June 2023 were included in this retrospective study. The patients were randomly divided into a training cohort (n=169) and a test cohort (n=73). We collected 28 clinical features from the patients. Concurrently, 306 radiomics features were extracted from the ultrasound images of the patients' tumors. Initially, a clinical model was constructed using the logistic regression algorithm. Subsequently, radiomics models were built using the SVM, random forest, XGBoost, and KNN algorithms. Finally, we combined the clinical features with a new feature, RAD prob, calculated by applying the radiomics model, to construct a fusion model, and developed a nomogram based on the fusion model. Results The performance of the fusion model surpassed that of both the clinical and radiomics models. In the training cohort, the fusion model achieved an AUC of 0.978 (95% CI: 0.96-0.99) during 5-fold cross-validation and an AUC of 0.925 (95% CI: 0.86-0.98) in the test cohort. Calibration curve and decision curve analyses demonstrated that the nomogram constructed from the fusion model has high accuracy and clinical utility. Conclusion The fusion model combining clinical and ultrasound radiomics features showed excellent performance in predicting the benign or malignant nature of pancreatic tumors.
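A hedged sketch of the fusion idea described above: a radiomics classifier is trained first, its predicted probability ("RAD prob") is appended to the clinical features, and a logistic regression is fit on the combined inputs. Synthetic arrays stand in for the study's data; the SVM choice for the radiomics model and the split ratio are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n = 242
clinical = rng.normal(size=(n, 28))     # 28 clinical features (synthetic)
radiomics = rng.normal(size=(n, 306))   # 306 ultrasound radiomics features (synthetic)
y = rng.integers(0, 2, size=n)          # 1 = malignant (synthetic labels)

idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3, random_state=0, stratify=y)

# Radiomics model with probability outputs -> RAD prob.
# In practice, cross-validated probabilities on the training set avoid optimistic leakage.
rad_model = make_pipeline(StandardScaler(), SVC(probability=True))
rad_model.fit(radiomics[idx_train], y[idx_train])
rad_prob_train = rad_model.predict_proba(radiomics[idx_train])[:, 1]
rad_prob_test = rad_model.predict_proba(radiomics[idx_test])[:, 1]

# Fusion model: clinical features plus RAD prob.
fusion_train = np.column_stack([clinical[idx_train], rad_prob_train])
fusion_test = np.column_stack([clinical[idx_test], rad_prob_test])
fusion_model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
fusion_model.fit(fusion_train, y[idx_train])

auc = roc_auc_score(y[idx_test], fusion_model.predict_proba(fusion_test)[:, 1])
print(f"fusion model test AUC on synthetic data: {auc:.2f}")   # ~0.5 here; real data differs
```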
Collapse
Affiliation(s)
- Shunhan Yao
- Medical College, Guangxi University, Nanning, China
- Monash Biomedicine Discovery Institute, Monash University, Melbourne, VIC, Australia
| | - Dunwei Yao
- Department of Gastroenterology, The First Affiliated Hospital of Guangxi Medical University, Nanning, China
- Department of Gastroenterology, The People’s Hospital of Baise, Baise, China
| | - Yuanxiang Huang
- School of Computer, Electronic and Information, Guangxi University, Nanning, China
| | - Shanyu Qin
- Department of Gastroenterology, The First Affiliated Hospital of Guangxi Medical University, Nanning, China
| | - Qingfeng Chen
- School of Computer, Electronic and Information, Guangxi University, Nanning, China
| |
Collapse
|
47
|
Ravelo V, Acero J, Fuentes-Zambrano J, García Guevara H, Olate S. Artificial Intelligence Used for Diagnosis in Facial Deformities: A Systematic Review. J Pers Med 2024; 14:647. [PMID: 38929868 PMCID: PMC11204491 DOI: 10.3390/jpm14060647] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2024] [Revised: 05/26/2024] [Accepted: 06/02/2024] [Indexed: 06/28/2024] Open
Abstract
AI is now embedded in many different systems, and several AI-based software programs in facial surgery are oriented toward diagnosis. This study aims to evaluate the capacity and training of artificial intelligence models for the diagnosis of dentofacial deformities in class II and class III patients, and their potential use for indicating orthognathic surgery. The search strategy covered 1943 to April 2024 in PubMed, Embase, Scopus, Lilacs, and Web of Science. Studies that used imaging to assess anatomical structures, airway volume, and craniofacial positions using an AI algorithm in human populations were included. The methodological quality of the studies was assessed using the Effective Public Health Practice Project instrument. The systematic search identified 697 articles; eight studies remained for descriptive analysis after applying our inclusion and exclusion criteria. All studies were retrospective in design. A total of 5552 subjects with an age range of 14.7 to 56 years were included; 2474 (44.56%) subjects were male and 3078 (55.43%) were female. Six studies analyzed 2D imaging and obtained highly accurate results in diagnosing skeletal features and determining the need for orthognathic surgery, and two studies used 3D imaging for measurement and diagnosis. Limitations of the studies, such as age ranges, facial deformity diagnoses, and the variables included, were observed. Concerning overall analysis bias, six studies were at moderate risk due to weak study designs, while two were at high risk of bias. We conclude that, based on the few articles included, AI-based software allows some craniometric recognition and measurement to support the diagnosis of facial deformities, mainly using 2D analysis. However, studies based on three-dimensional images, larger sample sizes, and models trained in different populations are needed to ensure the accuracy of AI applications in this field; the models can then be trained for dentofacial diagnosis.
Collapse
Affiliation(s)
- Victor Ravelo
- Grupo de Investigación de Pregrado en Odontología (GIPO), Universidad Autónoma de Chile, Temuco 4780000, Chile;
- PhD Program in Morphological Science, Universidad de La Frontera, Temuco 4780000, Chile
| | - Julio Acero
- Department of Oral and Maxillofacial Surgery, Ramon y Cajal University Hospital, Ramon y Cajal Research Institute (IRYCIS), University of Alcala, 28034 Madrid, Spain;
| | | | - Henry García Guevara
- Department of Oral Surgery, La Floresta Medical Institute, Caracas 1060, Venezuela;
- Division for Oral and Maxillofacial Surgery, Hospital Ortopedico Infantil, Caracas 1060, Venezuela
| | - Sergio Olate
- Center for Research in Morphology and Surgery (CEMyQ), Universidad de La Frontera, Temuco 4780000, Chile
- Division of Oral, Facial and Maxillofacial Surgery, Universidad de La Frontera, Temuco 4780000, Chile
| |
Collapse
|
48
|
Silva Santana L, Borges Camargo Diniz J, Mothé Glioche Gasparri L, Buccaran Canto A, Batista Dos Reis S, Santana Neville Ribeiro I, Gadelha Figueiredo E, Paulo Mota Telles J. Application of Machine Learning for Classification of Brain Tumors: A Systematic Review and Meta-Analysis. World Neurosurg 2024; 186:204-218.e2. [PMID: 38580093 DOI: 10.1016/j.wneu.2024.03.152] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2024] [Revised: 03/25/2024] [Accepted: 03/26/2024] [Indexed: 04/07/2024]
Abstract
BACKGROUND Classifying brain tumors accurately is crucial for treatment and prognosis. Machine learning (ML) shows great promise in improving tumor classification accuracy. This study evaluates ML algorithms for differentiating various brain tumor types. METHODS A systematic review and meta-analysis were conducted, searching PubMed, Embase, and Web of Science up to March 14, 2023. Studies that only investigated image segmentation accuracy or brain tumor detection instead of classification were excluded. We extracted binary diagnostic accuracy data, constructing contingency tables to derive sensitivity and specificity. RESULTS Fifty-one studies were included. The pooled areas under the curve for glioblastoma versus lymphoma and for low-grade versus high-grade gliomas were 0.99 (95% confidence interval [CI]: 0.98-1.00) and 0.89, respectively. The pooled sensitivity and specificity for benign versus malignant tumors were 0.90 (95% CI: 0.85-0.93) and 0.93 (95% CI: 0.90-0.95), respectively. The pooled sensitivity and specificity for low-grade versus high-grade gliomas were 0.99 (95% CI: 0.97-1.00) and 0.94 (95% CI: 0.79-0.99), respectively. Primary versus metastatic tumor identification yielded a sensitivity of 0.89 (95% CI: 0.83-0.93) and a specificity of 0.87 (95% CI: 0.82-0.91). The differentiation of gliomas from pituitary tumors yielded the highest results among primary brain tumor classifications: sensitivity of 0.99 (95% CI: 0.99-1.00) and specificity of 0.99 (95% CI: 0.98-1.00). CONCLUSIONS ML demonstrated excellent performance in classifying brain tumor images, with near-maximum areas under the curve, sensitivity, and specificity.
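A sketch of the per-study computation described in the methods follows: sensitivity and specificity are recovered from each 2x2 contingency table (TP, FP, FN, TN). The tables below are made up, and the crude pooled values shown by summing cells are a simplification for illustration only, not the formal meta-analytic pooling the review itself would use.

```python
import numpy as np

# columns: TP, FP, FN, TN (hypothetical extracted counts for three studies)
tables = np.array([[45,  5,  3, 47],
                   [88, 12,  9, 91],
                   [30,  2,  6, 62]])

tp, fp, fn, tn = tables.T
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
for i, (se, sp) in enumerate(zip(sensitivity, specificity), start=1):
    print(f"study {i}: sensitivity={se:.2f}, specificity={sp:.2f}")

pooled_se = tp.sum() / (tp.sum() + fn.sum())   # naive pooled estimate, for illustration only
pooled_sp = tn.sum() / (tn.sum() + fp.sum())
print(f"naive pooled sensitivity={pooled_se:.2f}, specificity={pooled_sp:.2f}")
```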
Collapse
Affiliation(s)
| | | | | | | | | | - Iuri Santana Neville Ribeiro
- Department of Neurology, Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo, São Paulo, Brazil
| | - Eberval Gadelha Figueiredo
- Department of Neurology, Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo, São Paulo, Brazil
| | - João Paulo Mota Telles
- Department of Neurology, Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo, São Paulo, Brazil.
| |
Collapse
|
49
|
Fang TY, Lin TY, Shen CM, Hsu SY, Lin SH, Kuo YJ, Chen MH, Yin TK, Liu CH, Lo MT, Wang PC. Algorithm-Driven Tele-otoscope for Remote Care for Patients With Otitis Media. Otolaryngol Head Neck Surg 2024; 170:1590-1597. [PMID: 38545686 DOI: 10.1002/ohn.738] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2023] [Revised: 02/05/2024] [Accepted: 02/29/2024] [Indexed: 05/31/2024]
Abstract
OBJECTIVE The COVID-19 pandemic has spurred a growing demand for telemedicine. Artificial intelligence and image processing systems with wireless transmission functionalities can facilitate remote care for otitis media (OM). Accordingly, this study developed and validated an algorithm-driven tele-otoscope system equipped with Wi-Fi transmission and a cloud-based automatic OM diagnostic algorithm. STUDY DESIGN Prospective, cross-sectional, diagnostic study. SETTING Tertiary Academic Medical Center. METHODS We designed a tele-otoscope (Otiscan, SyncVision Technology Corp) equipped with digital imaging and processing modules, Wi-Fi transmission capabilities, and an automatic OM diagnostic algorithm. A total of 1137 otoscopic images, comprising 987 images of normal cases and 150 images of cases of acute OM and OM with effusion, were used as the dataset for image classification. Two convolutional neural network models, trained using our dataset, were used for raw image segmentation and OM classification. RESULTS The tele-otoscope delivered images with a resolution of 1280 × 720 pixels. Our tele-otoscope effectively differentiated OM from normal images, achieving a classification accuracy rate of up to 94% (sensitivity, 80%; specificity, 96%). CONCLUSION Our study demonstrated that the developed tele-otoscope has acceptable accuracy in diagnosing OM. This system can assist health care professionals in early detection and continuous remote monitoring, thus mitigating the consequences of OM.
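A conceptual sketch of the two-stage pipeline described above: a segmentation network proposes the eardrum region, and the cropped region is passed to a classification network for a normal-versus-OM call. Both networks here are untrained placeholders (a torchvision segmentation model and a ResNet-18); the authors' architectures and weights are not public, so this only mirrors the overall structure.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models
from torchvision.models.segmentation import deeplabv3_mobilenet_v3_large

segmenter = deeplabv3_mobilenet_v3_large(weights=None, num_classes=2).eval()   # background vs eardrum
classifier = models.resnet18(weights=None, num_classes=2).eval()               # normal vs otitis media

def classify_otoscopic_frame(frame: torch.Tensor) -> int:
    """frame: (3, H, W) tensor scaled to [0, 1]; returns 1 if otitis media is predicted."""
    with torch.no_grad():
        mask = segmenter(frame.unsqueeze(0))["out"].argmax(dim=1)[0]   # (H, W) label map
        ys, xs = torch.nonzero(mask, as_tuple=True)
        if len(ys) == 0:                                               # no eardrum region found
            return 0
        crop = frame[:, ys.min():ys.max() + 1, xs.min():xs.max() + 1]  # bounding box of the mask
        crop = F.interpolate(crop.unsqueeze(0), size=(224, 224))
        return classifier(crop).argmax(dim=1).item()

# 1280 x 720 matches the resolution reported for the tele-otoscope; the input here is random.
print(classify_otoscopic_frame(torch.rand(3, 720, 1280)))
```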
Collapse
Affiliation(s)
- Te-Yung Fang
- Department of Otolaryngology, Cathay General Hospital, Taipei, Taiwan
- School of Medicine, Fu-Jen Catholic University, New Taipei City, Taiwan
- Department of Otolaryngology, Sijhih Cathay General Hospital, New Taipei City, Taiwan
| | - Tse-Yu Lin
- Department of Biomedical Sciences and Engineering, National Central University, Taoyuan, Taiwan
| | - Chung-Min Shen
- School of Medicine, Fu-Jen Catholic University, New Taipei City, Taiwan
- Department of Pediatrics, Cathay General Hospital, Taipei, Taiwan
| | - Su-Yi Hsu
- Department of Otolaryngology, Cathay General Hospital, Taipei, Taiwan
- School of Medicine, Fu-Jen Catholic University, New Taipei City, Taiwan
| | - Shing-Huey Lin
- School of Medicine, Fu-Jen Catholic University, New Taipei City, Taiwan
- Department of Family and Community Medicine, Cathay General Hospital, Taipei, Taiwan
| | - Yu-Jung Kuo
- Department of Biomedical Sciences and Engineering, National Central University, Taoyuan, Taiwan
| | - Ming-Hsu Chen
- Department of Otolaryngology, Cathay General Hospital, Taipei, Taiwan
| | - Tan-Kuei Yin
- Department of Otolaryngology, Cathay General Hospital, Taipei, Taiwan
| | - Chih-Hsien Liu
- Department of Otolaryngology, Cathay General Hospital, Taipei, Taiwan
| | - Men-Tzung Lo
- Department of Biomedical Sciences and Engineering, National Central University, Taoyuan, Taiwan
| | - Pa-Chun Wang
- Department of Otolaryngology, Cathay General Hospital, Taipei, Taiwan
- School of Medicine, Fu-Jen Catholic University, New Taipei City, Taiwan
- Department of Biomedical Sciences and Engineering, National Central University, Taoyuan, Taiwan
- Department of Medical Research, China Medical University Hospital, China Medical University, Taichung, Taiwan
| |
Collapse
|
50
|
Oeding JF, Pareek A, Kunze KN, Nwachukwu BU, Greditzer HG, Camp CL, Kelly BT, Pearle AD, Ranawat AS, Williams RJ. Segond Fractures Can Be Identified With Excellent Accuracy Utilizing Deep Learning on Anteroposterior Knee Radiographs. Arthrosc Sports Med Rehabil 2024; 6:100940. [PMID: 39006790 PMCID: PMC11240019 DOI: 10.1016/j.asmr.2024.100940] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2023] [Accepted: 03/25/2024] [Indexed: 07/16/2024] Open
Abstract
Purpose To develop a deep learning model for the detection of Segond fractures on anteroposterior (AP) knee radiographs and to compare model performance to that of trained human experts. Methods AP knee radiographs were retrieved from the Hospital for Special Surgery ACL Registry, which enrolled patients between 2009 and 2013. All images corresponded to patients who underwent anterior cruciate ligament reconstruction by 1 of 23 surgeons included in the registry data. Images were categorized into 1 of 2 classes based on radiographic evidence of a Segond fracture and manually annotated. Seventy percent of the images were used to populate the training set, while 20% and 10% were reserved for the validation and test sets, respectively. Images from the test set were used to compare model performance to that of expert human observers, including an orthopaedic surgery sports medicine fellow and a fellowship-trained orthopaedic sports medicine surgeon with over 10 years of experience. Results A total of 324 AP knee radiographs were retrieved, of which 34 (10.4%) images demonstrated evidence of a Segond fracture. The overall mean average precision (mAP) was 0.985, and this was maintained on the Segond fracture class (mAP = 0.978, precision = 0.844, recall = 1). The model demonstrated 100% accuracy with perfect sensitivity and specificity when applied to the independent testing set and the ability to meet or exceed human sensitivity and specificity in all cases. Compared to an orthopaedic surgery sports medicine fellow, the model required 0.3% of the total time needed to evaluate and classify images in the independent test set. Conclusions A deep learning model was developed and internally validated for Segond fracture detection on AP radiographs and demonstrated perfect accuracy, sensitivity, and specificity on a small test set of radiographs with and without Segond fractures. The model demonstrated superior performance compared with expert human observers. Clinical Relevance Deep learning can be used for automated Segond fracture identification on radiographs, leading to improved diagnosis of easily missed concomitant injuries, including lateral meniscus tears. Automated identification of Segond fractures can also enable large-scale studies on the incidence and clinical significance of these fractures, which may lead to improved management and outcomes for patients with knee injuries.
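An illustrative computation of the metric types reported above (precision, recall, and average precision) from per-image confidence scores on a held-out set. The scores, labels, and 0.5 operating threshold are synthetic placeholders, not the registry data or the authors' model outputs.

```python
import numpy as np
from sklearn.metrics import average_precision_score, precision_score, recall_score

y_true = np.array([1, 0, 0, 1, 0, 1, 0, 0, 0, 1])                        # 1 = Segond fracture present
y_score = np.array([0.92, 0.08, 0.31, 0.88, 0.12, 0.97, 0.05, 0.44, 0.20, 0.81])
y_pred = (y_score >= 0.5).astype(int)                                    # assumed operating threshold

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("AP:       ", round(average_precision_score(y_true, y_score), 3))
```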
Collapse
Affiliation(s)
- Jacob F Oeding
- Mayo Clinic Alix School of Medicine, Rochester, Minnesota, U.S.A
| | - Ayoosh Pareek
- Sports Medicine and Shoulder Service, Hospital for Special Surgery, New York, New York, U.S.A
| | - Kyle N Kunze
- Sports Medicine and Shoulder Service, Hospital for Special Surgery, New York, New York, U.S.A
| | - Benedict U Nwachukwu
- Sports Medicine and Shoulder Service, Hospital for Special Surgery, New York, New York, U.S.A
| | - Harry G Greditzer
- Department of Radiology and Imaging, Hospital for Special Surgery, New York, New York, U.S.A
| | - Christopher L Camp
- Department of Orthopedic Surgery and Sports Medicine, Mayo Clinic, Rochester, Minnesota, U.S.A
| | - Bryan T Kelly
- Sports Medicine and Shoulder Service, Hospital for Special Surgery, New York, New York, U.S.A
| | - Andrew D Pearle
- Sports Medicine and Shoulder Service, Hospital for Special Surgery, New York, New York, U.S.A
| | - Anil S Ranawat
- Sports Medicine and Shoulder Service, Hospital for Special Surgery, New York, New York, U.S.A
| | - Riley J Williams
- Sports Medicine and Shoulder Service, Hospital for Special Surgery, New York, New York, U.S.A
| |
Collapse
|