1. Hariri M, Aydın A, Sıbıç O, Somuncu E, Yılmaz S, Sönmez S, Avşar E. LesionScanNet: dual-path convolutional neural network for acute appendicitis diagnosis. Health Inf Sci Syst 2025;13:3. [PMID: 39654693] [PMCID: PMC11625030] [DOI: 10.1007/s13755-024-00321-7]
Abstract
Acute appendicitis is an abrupt inflammation of the appendix that causes symptoms such as abdominal pain, vomiting, and fever. Computed tomography (CT) is a useful tool for the accurate diagnosis of acute appendicitis; however, interpretation is challenging owing to factors such as the anatomical structure of the colon and the localization of the appendix in CT images. In this paper, a novel convolutional neural network model, LesionScanNet, is proposed for the computer-aided detection of acute appendicitis. For this purpose, a dataset of 2400 CT scan images was collected by the Department of General Surgery at Kanuni Sultan Süleyman Research and Training Hospital, Istanbul, Turkey. LesionScanNet is a lightweight model with 765 K parameters and includes multiple DualKernel blocks, each containing convolution, expansion, and separable convolution layers together with skip connections. The DualKernel blocks process the input along two paths, one using 3 × 3 filters and the other 1 × 1 filters. The LesionScanNet model achieved an accuracy of 99% on the test set, exceeding the performance of benchmark deep learning models. In addition, its generalization ability was demonstrated on a chest X-ray dataset for pneumonia and COVID-19 detection. In conclusion, LesionScanNet is a lightweight and robust network that achieves superior performance with a smaller number of parameters, and its use can be extended to other medical application domains.
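The dual-path block described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the filter counts, expansion ratio, and input size are assumed values, and only the 3 × 3 / 1 × 1 branching with expansion, separable convolution, and skip connections is mirrored.

```python
# Illustrative sketch of a DualKernel-style block (assumed layer sizes, not the published model).
import tensorflow as tf
from tensorflow.keras import layers

def dual_kernel_block(x, filters, expand_ratio=2):
    """Two parallel paths (3x3 separable and 1x1 convolutions) merged with a skip connection."""
    expanded = layers.Conv2D(filters * expand_ratio, 1, padding="same", activation="relu")(x)
    path_a = layers.SeparableConv2D(filters, 3, padding="same", activation="relu")(expanded)  # 3x3 path
    path_b = layers.Conv2D(filters, 1, padding="same", activation="relu")(expanded)           # 1x1 path
    merged = layers.Add()([path_a, path_b])
    skip = layers.Conv2D(filters, 1, padding="same")(x)  # project block input for the skip connection
    return layers.Add()([merged, skip])

inputs = tf.keras.Input(shape=(224, 224, 3))
x = dual_kernel_block(inputs, filters=32)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # appendicitis vs. no appendicitis
model = tf.keras.Model(inputs, outputs)
```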
Affiliation(s)
- Muhab Hariri
- Electrical and Electronics Engineering Department, Çukurova University, 01330 Adana, Turkey
- Ahmet Aydın
- Biomedical Engineering Department, Çukurova University, 01330 Adana, Turkey
- Osman Sıbıç
- General Surgery Department, Derik State Hospital, 47800 Mardin, Turkey
- Erkan Somuncu
- General Surgery Department, Kanuni Sultan Suleyman Research and Training Hospital, 34303 Istanbul, Turkey
- Serhan Yılmaz
- General Surgery Department, Bilkent City Hospital, 06800 Ankara, Turkey
- Süleyman Sönmez
- Interventional Radiology Department, Kanuni Sultan Suleyman Research and Training Hospital, 34303 Istanbul, Turkey
- Ercan Avşar
- Section for Fisheries Technology, Institute of Aquatic Resources, DTU Aqua, Technical University of Denmark, 9850 Hirtshals, Denmark
2. Erman A, Ferreira J, Ashour WA, Guadagno E, St-Louis E, Emil S, Cheung J, Poenaru D. Machine-learning-assisted Preoperative Prediction of Pediatric Appendicitis Severity. J Pediatr Surg 2025;60:162151. [PMID: 39855986] [DOI: 10.1016/j.jpedsurg.2024.162151]
Abstract
PURPOSE This study evaluates the effectiveness of machine learning (ML) algorithms for improving the preoperative diagnosis of acute appendicitis in children, focusing on the accurate prediction of the severity of disease. METHODS An anonymized clinical and operative dataset was retrieved from the medical records of children undergoing emergency appendectomy between 2014 and 2021. We developed an ML pipeline that pre-processed the dataset and developed algorithms to predict 5 appendicitis grades (1 - non-perforated, 2 - localized perforation, 3 - abscess, 4 - generalized peritonitis, and 5 - generalized peritonitis with abscess). Imputation strategies were used for missing values and upsampling techniques for infrequent classes. Standard classifier models were tested. The best combination of imputation strategy, class-balancing technique, and classification model was chosen based on validation performance. Model explainability was verified by a pediatric surgeon. Our model's performance was compared to another pediatric appendicitis severity prediction tool. RESULTS The study used a retrospective cohort of 1980 patients (60.6% males, average age 10.7 years). The grade distribution in the cohort was as follows: grade 1, 70%; grade 2, 8%; grade 3, 7%; grade 4, 7%; grade 5, 8%. Every combination of 6 imputation strategies, 7 class-balancing techniques, and 5 classification models was tested. The best-performing combined ML pipeline distinguished non-perforated from perforated appendicitis with 82.8 ± 0.2% NPV and 56.4 ± 0.4% PPV, and differentiated between severity grades with 70.1 ± 0.2% accuracy and 0.77 ± 0.00 AUROC. The other pediatric appendicitis severity prediction tool gave an accuracy of 71.4%, an AUROC of 0.54, and an NPV/PPV of 71.8%/64.7%. CONCLUSION Prediction of appendiceal perforation outperforms prediction of the continuum of appendicitis grades. The variables our models primarily rely on to make predictions are consistent with clinical experience and the literature, suggesting that the ML models uncovered useful patterns in the dataset. Our model outperforms the other pediatric appendicitis prediction tool. The ML model developed for grade prediction is the first of its type, offering a novel approach for assessing appendicitis severity in children preoperatively. Following external validation and silent clinical testing, this ML model has the potential to enable personalized, severity-based treatment of pediatric appendicitis and optimize resource allocation for its management. LEVEL OF EVIDENCE: 3
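A single imputation/balancing/classifier combination from such a pipeline might look like the sketch below; the authors searched over many combinations, and the file name, column names, and chosen estimators here are hypothetical.

```python
# Hedged sketch of one imputation + class-balancing + classifier combination (hypothetical data columns).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline

df = pd.read_csv("pediatric_appendicitis.csv")   # hypothetical dataset
X, y = df.drop(columns=["grade"]), df["grade"]   # grades 1-5 as defined in the abstract

pipeline = Pipeline(steps=[
    ("impute", SimpleImputer(strategy="median")),      # one of several imputation strategies
    ("balance", SMOTE(random_state=42)),               # upsampling of infrequent grades
    ("classify", RandomForestClassifier(n_estimators=300, random_state=42)),
])

# Validation performance drives the choice of the best combination.
print(cross_val_score(pipeline, X, y, cv=5, scoring="balanced_accuracy").mean())
```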
Affiliation(s)
- Aylin Erman
- Department of Computer Science, McGill University, Montreal, QC, Canada.
- Julia Ferreira
- Harvey E. Beardmore Division of Pediatric Surgery, The Montreal Children's Hospital, McGill University Health Centre, Montreal, QC, Canada
- Waseem Abu Ashour
- Harvey E. Beardmore Division of Pediatric Surgery, The Montreal Children's Hospital, McGill University Health Centre, Montreal, QC, Canada
- Elena Guadagno
- Harvey E. Beardmore Division of Pediatric Surgery, The Montreal Children's Hospital, McGill University Health Centre, Montreal, QC, Canada
- Etienne St-Louis
- McGill University Faculty of Medicine and Health Sciences, Canada; Harvey E. Beardmore Division of Pediatric Surgery, The Montreal Children's Hospital, McGill University Health Centre, Montreal, QC, Canada
- Sherif Emil
- McGill University Faculty of Medicine and Health Sciences, Canada; Harvey E. Beardmore Division of Pediatric Surgery, The Montreal Children's Hospital, McGill University Health Centre, Montreal, QC, Canada
- Jackie Cheung
- Department of Computer Science, McGill University, Montreal, QC, Canada; Canada CIFAR AI Chair, Mila, Canada
- Dan Poenaru
- McGill University Faculty of Medicine and Health Sciences, Canada; Harvey E. Beardmore Division of Pediatric Surgery, The Montreal Children's Hospital, McGill University Health Centre, Montreal, QC, Canada
3. Lee JO, Zhou HY, Berzin TM, Sodickson DK, Rajpurkar P. Multimodal generative AI for interpreting 3D medical images and videos. NPJ Digit Med 2025;8:273. [PMID: 40360694] [PMCID: PMC12075794] [DOI: 10.1038/s41746-025-01649-4]
Abstract
This perspective proposes adapting video-text generative AI to 3D medical imaging (CT/MRI) and medical videos (endoscopy/laparoscopy) by treating 3D images as videos. The approach leverages modern video models to analyze multiple sequences simultaneously and provide real-time AI assistance during procedures. The paper examines medical imaging's unique characteristics (synergistic information, metadata, and world model), outlines applications in automated reporting, case retrieval, and education, and addresses challenges of limited datasets, benchmarks, and specialized training.
Affiliation(s)
- Jung-Oh Lee
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Hong-Yu Zhou
- Department of Biomedical Informatics, Harvard Medical School, Boston, USA
- Tyler M Berzin
- Center for Advanced Endoscopy, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, USA
- Daniel K Sodickson
- Center for Advanced Imaging Innovation and Research, Department of Radiology, New York University Grossman School of Medicine, New York, USA
- Pranav Rajpurkar
- Department of Biomedical Informatics, Harvard Medical School, Boston, USA
4. Sibic O, Somuncu E, Yilmaz S, Avsar E, Bozdag E, Ozcan A, Aydin MO, Ozkan C. Diagnosis of Acute Appendicitis with Machine Learning-Based Computer Tomography: Diagnostic Reliability and Role in Clinical Management. J Laparoendosc Adv Surg Tech A 2025;35:313-317. [PMID: 39967483] [DOI: 10.1089/lap.2024.0374]
Abstract
Purpose: Acute appendicitis (AA) is a common surgical emergency affecting 7-8% of the population. Timely diagnosis and treatment are crucial for preventing serious morbidity and mortality. Diagnosis typically involves physical examination, laboratory tests, ultrasonography, and computed tomography (CT). This study aimed to evaluate the effectiveness of artificial intelligence (AI) in analyzing CT images for the early diagnosis of AA and prevention of complications. Methods: CT images of patients who underwent surgery for AA at the General Surgery Clinic of Kanuni Sultan Suleyman Health Application and Research Center between January 1, 2019, and June 31, 2023, were analyzed. A total of 1200 CT images were evaluated using four different AI models. The model performance was assessed using a confusion matrix. Results: The median age of the patients was 28 years, with a similar sex distribution. No significant differences were observed in terms of age or sex (P = .168 and P = .881, respectively). Among the AI models, MobileNet v2 showed the highest accuracy (0.7908) and precision (0.8203), whereas Inception v3 had the highest F-score (0.7928). In the receiver operating characteristic analysis, MobileNet v2 achieved an area under the curve (AUC) of 0.8767. Conclusion: AI's role in daily life is expanding. In the present study, the highest sensitivity and specificity were 77% and 86%, respectively. Supporting CT imaging with AI systems can enhance the accuracy of AA diagnoses.
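A transfer-learning setup with MobileNet v2, the best-performing of the four models compared, can be sketched as below; the directory layout, image size, and training settings are assumptions rather than the study's protocol.

```python
# Minimal MobileNetV2 transfer-learning sketch for CT image classification (assumed paths and settings).
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "ct_images/train", image_size=(224, 224), batch_size=32)  # hypothetical folder per class

base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # reuse ImageNet features, train only the new head

model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects inputs in [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # appendicitis vs. non-appendicitis
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc"), "accuracy"])
model.fit(train_ds, epochs=10)
```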
Affiliation(s)
- Osman Sibic
- General Surgery Service, Derik State Hospital, Derik, Turkey
- Erkan Somuncu
- General Surgery Service, Kanuni Sultan Suleyman Training and Research Hospital, Istanbul, Turkey
- Serhan Yilmaz
- General Surgery Service, Bilkent City Hospital, Cankaya, Turkey
- Ercan Avsar
- Technical University of Denmark National Institute of Aquatic Resources, Lyngby, Denmark
- Emre Bozdag
- Gastroenterology Surgery Service, Kanuni Sultan Suleyman Training and Research Hospital, Istanbul, Turkey
- Adem Ozcan
- Surgical Oncology Service, Bilkent City Hospital, Cankaya, Turkey
- Cenk Ozkan
- General Surgery Service, Kanuni Sultan Suleyman Training and Research Hospital, Istanbul, Turkey
5. Maleš I, Kumrić M, Huić Maleš A, Cvitković I, Šantić R, Pogorelić Z, Božić J. A Systematic Integration of Artificial Intelligence Models in Appendicitis Management: A Comprehensive Review. Diagnostics (Basel) 2025;15:866. [PMID: 40218216] [PMCID: PMC11988987] [DOI: 10.3390/diagnostics15070866]
Abstract
Artificial intelligence (AI) and machine learning (ML) are transforming the management of acute appendicitis by enhancing diagnostic accuracy, optimizing treatment strategies, and improving patient outcomes. This study reviews AI applications across all stages of appendicitis care, from triage to postoperative management, using sources from PubMed/MEDLINE, IEEE Xplore, arXiv, Web of Science, and Scopus, covering publications up to 14 February 2025. AI models have demonstrated potential in triage, enabling rapid differentiation of appendicitis from other causes of abdominal pain. In diagnostics, ML algorithms incorporating clinical, laboratory, imaging, and demographic data have improved accuracy and reduced uncertainty. These tools also predict disease severity, aiding decisions between conservative management and surgery. Radiomics further enhances diagnostic precision by analyzing imaging data. Intraoperatively, AI applications are emerging to support real-time decision-making, assess procedural steps, and improve surgical training. Postoperatively, ML models predict complications such as abscess formation and sepsis, facilitating early interventions and personalized recovery plans. This is the first comprehensive review to examine AI's role across the entire appendicitis treatment process, including triage, diagnosis, severity prediction, intraoperative assistance, and postoperative prognosis. Despite its potential, challenges remain regarding data quality, model interpretability, ethical considerations, and clinical integration. Future efforts should focus on developing end-to-end AI-assisted workflows that enhance diagnosis, treatment, and patient outcomes while ensuring equitable access and clinician oversight.
Affiliation(s)
- Ivan Maleš
- Department of Abdominal Surgery, University Hospital of Split, Spinčićeva 1, 21000 Split, Croatia
- Marko Kumrić
- Department of Pathophysiology, School of Medicine, University of Split, Šoltanska 2A, 21000 Split, Croatia
- Laboratory for Cardiometabolic Research, School of Medicine, University of Split, Šoltanska 2A, 21000 Split, Croatia
- Andrea Huić Maleš
- Department of Pediatrics, University Hospital of Split, Spinčićeva 1, 21000 Split, Croatia
- Ivan Cvitković
- Department of Anesthesiology and Intensive Care, University Hospital of Split, Spinčićeva 1, 21000 Split, Croatia
- Roko Šantić
- Department of Pathophysiology, School of Medicine, University of Split, Šoltanska 2A, 21000 Split, Croatia
- Zenon Pogorelić
- Department of Surgery, School of Medicine, University of Split, Šoltanska 2A, 21000 Split, Croatia
- Department of Pediatric Surgery, University Hospital of Split, Spinčićeva 1, 21000 Split, Croatia
- Joško Božić
- Department of Pathophysiology, School of Medicine, University of Split, Šoltanska 2A, 21000 Split, Croatia
- Laboratory for Cardiometabolic Research, School of Medicine, University of Split, Šoltanska 2A, 21000 Split, Croatia
6. Li J, Ye J, Luo Y, Xu T, Jia Z. Progress in the application of machine learning in CT diagnosis of acute appendicitis. Abdom Radiol (NY) 2025. [PMID: 40095017] [DOI: 10.1007/s00261-025-04864-5]
Abstract
Acute appendicitis represents a prevalent condition within the spectrum of acute abdominal pathologies, exhibiting a diverse clinical presentation. Computed tomography (CT) imaging has emerged as a prospective diagnostic modality for the identification and differentiation of appendicitis. This review aims to synthesize current applications, progress, and challenges in integrating machine learning (ML) with CT for diagnosing acute appendicitis while exploring prospects. ML-driven advancements include automated detection, differential diagnosis, and severity stratification. For instance, deep learning models such as AppendiXNet achieved an AUC of 0.81 for appendicitis detection, while 3D convolutional neural networks (CNNs) demonstrated superior performance, with AUCs up to 0.95 and an accuracy of 91.5%. ML algorithms effectively differentiate appendicitis from similar conditions like diverticulitis, achieving AUCs between 0.951 and 0.972. They demonstrate remarkable proficiency in distinguishing between complex and straightforward cases through the innovative use of radiomics and hybrid models, achieving AUCs ranging from 0.80 to 0.96. Even with these advancements, challenges remain, such as the "black-box" nature of artificial intelligence, its integration into clinical workflows, and the significant resources required. Future directions emphasize interpretable models, multimodal data fusion, and cost-effective decision-support systems. By addressing these barriers, ML holds promise for refining diagnostic precision, optimizing treatment pathways, and reducing healthcare costs.
Affiliation(s)
- Jiaxin Li
- Shanghai Jiao Tong University, Shanghai, China
- Jiayin Ye
- Shanghai Jiao Tong University, Shanghai, China
- Yiyun Luo
- Shanghai Jiao Tong University, Shanghai, China
- Tianyang Xu
- Shanghai Jiao Tong University, Shanghai, China
- Zhenyi Jia
- Shanghai Sixth People's Hospital, Shanghai, China.
7. Kim M, Park T, Kang J, Kim MJ, Kwon MJ, Oh BY, Kim JW, Ha S, Yang WS, Cho BJ, Son I. Development and validation of automated three-dimensional convolutional neural network model for acute appendicitis diagnosis. Sci Rep 2025;15:7711. [PMID: 40044743] [PMCID: PMC11882796] [DOI: 10.1038/s41598-024-84348-6]
Abstract
Rapid, accurate preoperative imaging diagnosis of appendicitis is critical for surgical decisions in emergency care. This study developed a fully automated diagnostic framework that uses a 3D convolutional neural network (CNN) to identify appendicitis from contrast-enhanced abdominopelvic computed tomography images and clinical information of patients with abdominal pain. A deep learning model, Information of Appendix (IA), was developed, and the volume of interest (VOI) corresponding to the anatomical location of the appendix was automatically extracted. The VOI was analysed using a two-stage binary algorithm with transfer learning, predicting three categories: non-appendicitis, simple appendicitis, and complicated appendicitis. The 3D-CNN architecture incorporated ResNet, DenseNet, and EfficientNet backbones. In stage 1 classification (non-appendicitis versus appendicitis), the IA model utilising DenseNet169 demonstrated 79.5% accuracy (76.4-82.6%), 70.1% sensitivity (64.7-75.0%), 87.6% specificity (83.7-90.7%), and an area under the curve (AUC) of 0.865 (0.862-0.867), with a negative appendectomy rate of 12.4%. In stage 2 (simple versus complicated appendicitis), the IA model exhibited 76.1% accuracy (70.3-81.9%), 82.6% sensitivity (62.9-90.9%), 74.2% specificity (67.0-80.3%), and an AUC of 0.827 (0.820-0.833). This IA model can provide physicians with reliable diagnostic information on appendicitis, with generality and reproducibility within the VOI.
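The two-stage design can be illustrated with a toy example: one binary 3D classifier applied to the extracted VOI for appendicitis versus non-appendicitis, and a second for complicated versus simple disease. The small network below is a stand-in for the transfer-learned ResNet/DenseNet/EfficientNet backbones actually used; shapes and thresholds are assumptions.

```python
# Conceptual two-stage 3D-CNN classification over an appendix VOI (toy network, assumed VOI size).
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    """Tiny 3D CNN mapping a single-channel VOI volume to one binary logit."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

stage1 = Small3DCNN()  # appendicitis vs. non-appendicitis
stage2 = Small3DCNN()  # complicated vs. simple appendicitis

voi = torch.randn(1, 1, 64, 64, 64)                   # automatically extracted VOI (N, C, D, H, W)
if torch.sigmoid(stage1(voi)) > 0.5:                  # stage 1 decision
    complicated = torch.sigmoid(stage2(voi)) > 0.5    # stage 2 decision
```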
Affiliation(s)
- Minsung Kim
- Department of Surgery, Hallym University Medical Center, Hallym Sacred Heart Hospital, Hallym University College of Medicine, 22 Gwanpyeong-ro 170 beon-gil, Pyeongan-dong, Dongan-gu, Anyang, Gyeonggi-do, Republic of Korea
- Taeyong Park
- Medical Artificial Intelligence Center, Hallym University Medical Center, Anyang, Republic of Korea
- Jaewoong Kang
- Medical Artificial Intelligence Center, Hallym University Medical Center, Anyang, Republic of Korea
- Min-Jeong Kim
- Department of Radiology, Hallym Sacred Heart Hospital, Hallym University College of Medicine, Anyang, Republic of Korea
- Mi Jung Kwon
- Department of Pathology, Hallym Sacred Heart Hospital, Hallym University College of Medicine, Anyang, Republic of Korea
- Bo Young Oh
- Department of Surgery, Hallym University Medical Center, Hallym Sacred Heart Hospital, Hallym University College of Medicine, 22 Gwanpyeong-ro 170 beon-gil, Pyeongan-dong, Dongan-gu, Anyang, Gyeonggi-do, Republic of Korea
- Jong Wan Kim
- Department of Surgery, Dongtan Sacred Heart Hospital, Hallym University College of Medicine, Hwaseong, Republic of Korea
- Sangook Ha
- Department of Emergency Medicine, Hallym University Sacred Heart Hospital, Hallym University Medical Center, Anyang, Republic of Korea
- Won Seok Yang
- Department of Emergency Medicine, Hallym University Sacred Heart Hospital, Hallym University Medical Center, Anyang, Republic of Korea
- Bum-Joo Cho
- Medical Artificial Intelligence Center, Hallym University Medical Center, Anyang, Republic of Korea.
- Iltae Son
- Department of Surgery, Hallym University Medical Center, Hallym Sacred Heart Hospital, Hallym University College of Medicine, 22 Gwanpyeong-ro 170 beon-gil, Pyeongan-dong, Dongan-gu, Anyang, Gyeonggi-do, Republic of Korea.
8. Yao J, Chu LC, Patlas M. Applications of Artificial Intelligence in Acute Abdominal Imaging. Can Assoc Radiol J 2024;75:761-770. [PMID: 38715249] [DOI: 10.1177/08465371241250197]
Abstract
Artificial intelligence (AI) is a rapidly growing field with significant implications for radiology. Acute abdominal pain is a common clinical presentation that can range from benign conditions to life-threatening emergencies. The critical nature of these situations renders emergent abdominal imaging an ideal candidate for AI applications. CT, radiographs, and ultrasound are the most common modalities for imaging evaluation of these patients. For each modality, numerous studies have assessed the performance of AI models for detecting common pathologies, such as appendicitis, bowel obstruction, and cholecystitis. The capabilities of these models range from simple classification to detailed severity assessment. This narrative review explores the evolution, trends, and challenges in AI applications for evaluating acute abdominal pathologies. We review implementations of AI for non-traumatic and traumatic abdominal pathologies, with discussion of potential clinical impact, challenges, and future directions for the technology.
Affiliation(s)
- Jason Yao
- Department of Radiology, McMaster University, Hamilton, ON, Canada
- Linda C Chu
- Department of Radiology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Michael Patlas
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
9. Dandıl E, Baştuğ BT, Yıldırım MS, Çorbacı K, Güneri G. MaskAppendix: Backbone-Enriched Mask R-CNN Based on Grad-CAM for Automatic Appendix Segmentation. Diagnostics (Basel) 2024;14:2346. [PMID: 39518314] [PMCID: PMC11544770] [DOI: 10.3390/diagnostics14212346]
Abstract
BACKGROUND A leading cause of emergency abdominal surgery, appendicitis is a common condition affecting millions of people worldwide. Automatic and accurate segmentation of the appendix from medical imaging is a challenging task due to its small size, variability in shape, and proximity to other anatomical structures. METHODS In this study, we propose a backbone-enriched Mask R-CNN architecture (MaskAppendix) on the Detectron platform, enhanced with Gradient-weighted Class Activation Mapping (Grad-CAM), for precise appendix segmentation on computed tomography (CT) scans. In the proposed MaskAppendix deep learning model, the ResNet101 network is used as the backbone. By integrating Grad-CAM into the MaskAppendix network, our model improves feature localization, allowing it to better capture subtle variations in appendix morphology. RESULTS We conduct extensive experiments on a dataset of abdominal CT scans, demonstrating that our method achieves state-of-the-art performance in appendix segmentation, outperforming traditional segmentation techniques in terms of both accuracy and robustness. In the automatic segmentation of the appendix region in CT slices, a DSC score of 87.17% was achieved with the proposed approach, and the results obtained have the potential to improve clinical diagnostic accuracy. CONCLUSIONS This framework provides an effective tool for aiding clinicians in the diagnosis of appendicitis and other related conditions, reducing the potential for diagnostic errors and enhancing clinical workflow efficiency.
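A baseline Mask R-CNN with a ResNet-101 backbone can be configured as in the sketch below, using Detectron2 as a stand-in for the Detectron platform mentioned above; the dataset names are hypothetical, and the Grad-CAM enrichment that defines MaskAppendix is specific to the paper and not shown.

```python
# Rough Detectron2 sketch: Mask R-CNN with ResNet-101 backbone for single-class appendix segmentation.
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml")   # COCO-pretrained weights
cfg.DATASETS.TRAIN = ("appendix_ct_train",)   # hypothetical registered CT dataset
cfg.DATASETS.TEST = ("appendix_ct_val",)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1           # single class: appendix
cfg.SOLVER.MAX_ITER = 5000

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```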
Affiliation(s)
- Emre Dandıl
- Department of Computer Engineering, Faculty of Engineering, Bilecik Seyh Edebali University, 11230 Bilecik, Türkiye
- Betül Tiryaki Baştuğ
- Radiology Department, Faculty of Medicine, Bilecik Şeyh Edebali University, 11230 Bilecik, Türkiye
- Mehmet Süleyman Yıldırım
- Department of Söğüt Vocational School, Computer Technology, Bilecik Şeyh Edebali University, Söğüt, 11600 Bilecik, Türkiye
- Kadir Çorbacı
- General Surgery Department, Bilecik Osmaneli Mustafa Selahattin Çetintaş Hospital, 11500 Bilecik, Türkiye
- Gürkan Güneri
- General Surgery Department, Faculty of Medicine, Bilecik Şeyh Edebali University, 11230 Bilecik, Türkiye
10. Baştuğ BT, Güneri G, Yıldırım MS, Çorbacı K, Dandıl E. Fully Automated Detection of the Appendix Using U-Net Deep Learning Architecture in CT Scans. J Clin Med 2024;13:5893. [PMID: 39407953] [PMCID: PMC11478302] [DOI: 10.3390/jcm13195893]
Abstract
Background: The accurate segmentation of the appendix with well-defined boundaries is critical for diagnosing conditions such as acute appendicitis. The manual identification of the appendix is time-consuming and highly dependent on the expertise of the radiologist. Method: In this study, we propose a fully automated approach to the detection of the appendix in CT scans using a deep learning architecture based on U-Net with specific training parameters. The proposed U-Net architecture is trained on an annotated original dataset of abdominal CT scans to segment the appendix efficiently and with high performance. In addition, to extend the training set, data augmentation techniques are applied to the created dataset. Results: In experimental studies, the proposed U-Net model is implemented using hyperparameter optimization, and the performance of the model is evaluated using key metrics to measure diagnostic reliability. The trained U-Net model detected the appendix in CT slices with a Dice Similarity Coefficient (DSC), Volumetric Overlap Error (VOE), Average Symmetric Surface Distance (ASSD), Hausdorff Distance 95 (HD95), Precision (PRE), and Recall (REC) of 85.94%, 23.29%, 1.24 mm, 5.43 mm, 86.83%, and 86.62%, respectively. Moreover, our model outperforms other methods by leveraging the U-Net's ability to capture spatial context through encoder-decoder structures and skip connections, providing correct segmentation output. Conclusions: The proposed U-Net model showed reliable performance in segmenting the appendix region, with some limitations in cases where the appendix was close to other structures. These results highlight the potential of deep learning to significantly improve clinical outcomes in appendix detection.
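The overlap metrics quoted above can be computed directly from binary masks; a minimal NumPy example for the Dice Similarity Coefficient and the Volumetric Overlap Error is given below (the toy masks are placeholders).

```python
# DSC and VOE computed from binary segmentation masks (toy example masks).
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """DSC = 2|P ∩ T| / (|P| + |T|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

def volumetric_overlap_error(pred, truth, eps=1e-7):
    """VOE = 1 - |P ∩ T| / |P ∪ T|."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return 1.0 - intersection / (union + eps)

pred_mask = np.zeros((128, 128), dtype=np.uint8); pred_mask[40:60, 40:60] = 1
true_mask = np.zeros((128, 128), dtype=np.uint8); true_mask[42:62, 42:62] = 1
print(dice_coefficient(pred_mask, true_mask), volumetric_overlap_error(pred_mask, true_mask))
```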
Affiliation(s)
- Betül Tiryaki Baştuğ
- Department of Radiology, Medical Faculty, Bilecik Şeyh Edebali University, Bilecik 11230, Türkiye
- Gürkan Güneri
- Department of General Surgery, Medical Faculty, Bilecik Şeyh Edebali University, Bilecik 11230, Türkiye
- Mehmet Süleyman Yıldırım
- Department of Sogut Vocational School, Computer Technology, Bilecik Şeyh Edebali University, Bilecik 11600, Türkiye
- Kadir Çorbacı
- Department of General Surgery, Bilecik Osmaneli Mustafa Selahattin Çetintaş Hospital, Bilecik 11500, Türkiye
- Emre Dandıl
- Department of Computer Engineering, Faculty of Engineering, Bilecik Seyh Edebali University, Bilecik 11230, Türkiye
11. An J, Kim IS, Kim KJ, Park JH, Kang H, Kim HJ, Kim YS, Ahn JH. Efficacy of automated machine learning models and feature engineering for diagnosis of equivocal appendicitis using clinical and computed tomography findings. Sci Rep 2024;14:22658. [PMID: 39349512] [PMCID: PMC11442641] [DOI: 10.1038/s41598-024-72889-9]
Abstract
This study evaluates the diagnostic efficacy of automated machine learning (AutoGluon) with automated feature engineering and selection (autofeat), focusing on clinical manifestations, and a model integrating both clinical manifestations and CT findings in adult patients with ambiguous computed tomography (CT) results for acute appendicitis (AA). This evaluation was compared with conventional single machine learning models such as logistic regression (LR) and established scoring systems such as the Adult Appendicitis Score (AAS) to address the gap in diagnostic approaches for uncertain AA cases. In this retrospective analysis of 303 adult patients with indeterminate CT findings, the cohort was divided into appendicitis (n = 115) and non-appendicitis (n = 188) groups. AutoGluon and autofeat were used for AA prediction. The AutoGluon-clinical model relied solely on clinical data, whereas the AutoGluon-clinical-CT model included both clinical and CT data. The area under the receiver operating characteristic curve (AUROC) and other metrics for the test dataset, namely accuracy, sensitivity, specificity, PPV, NPV, and F1 score, were used to compare AutoGluon models with single machine learning models and the AAS. The single ML models in this study were LR, LASSO regression, ridge regression, support vector machine, decision tree, random forest, and extreme gradient boosting. Feature importance values were extracted using the "feature_importance" attribute from AutoGluon. The AutoGluon-clinical model demonstrated an AUROC of 0.785 (95% CI 0.691-0.890), and the ridge regression model with only clinical data revealed an AUROC of 0.755 (95% CI 0.649-0.861). The AutoGluon-clinical-CT model (AUROC 0.886 with 95% CI 0.820-0.951) performed better than the ridge model using clinical and CT data (AUROC 0.852 with 95% CI 0.774-0.930, p = 0.029). A new feature, exp(−(duration from pain to CT)³ + rebound tenderness), was identified (importance = 0.049, p = 0.001). AutoML (AutoGluon) and autoFE (autofeat) enhanced the diagnosis of uncertain AA cases, particularly when combining CT and clinical findings. This study suggests the potential of integrating AutoML and autoFE in clinical settings to improve diagnostic strategies and patient outcomes and make more efficient use of healthcare resources. Moreover, this research supports further exploration of machine learning in diagnostic processes.
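The AutoML-plus-automated-feature-engineering workflow can be sketched as follows; the file, column names, and settings are assumptions, and the engineered feature mentioned in the comment merely echoes the kind of transform reported above.

```python
# Hedged sketch of autofeat feature engineering followed by AutoGluon model selection (assumed columns).
import pandas as pd
from autofeat import AutoFeatClassifier
from autogluon.tabular import TabularPredictor

train = pd.read_csv("equivocal_ct_train.csv")                 # hypothetical clinical + CT findings table
X, y = train.drop(columns=["appendicitis"]), train["appendicitis"]

# Automated feature engineering: creates nonlinear combinations such as the
# exp(-(duration)^3 + rebound tenderness)-style feature reported in the study.
autofe = AutoFeatClassifier(feateng_steps=2, verbose=0)
X_eng = autofe.fit_transform(X, y)

# Automated model selection, tuning, and ensembling.
train_eng = X_eng.copy()
train_eng["appendicitis"] = y.values
predictor = TabularPredictor(label="appendicitis", eval_metric="roc_auc").fit(train_eng)
print(predictor.feature_importance(train_eng).head())
```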
Affiliation(s)
- Juho An
- Department of Emergency Medicine, Ajou University School of Medicine, World Cup-ro, Suwon, Gyeonggi-do, 16499, South Korea
- Il Seok Kim
- Department of Anesthesiology and Pain Medicine, Kangdong Sacred Hospital, Hallym University College of Medicine, Seongan-ro, Seoul, 05355, South Korea
- Kwang-Ju Kim
- Electronics and Telecommunications Research Institute (ETRI), Techno sunhwan-ro, Daegu, 42994, South Korea
- Ji Hyun Park
- Office of Biostatistics, Medical Research Collaborating Center, Ajou Research Institute for Innovative Medicine, Ajou University Medical Center, World Cup-ro, Suwon, Gyeonggi-do, 16499, South Korea
- Hyuncheol Kang
- Department of Big Data and AI, Hoseo University, Hoseo-ro, Asan, Chungcheongnam-do, 31499, South Korea
- Hyuk Jung Kim
- Department of Radiology, Daejin Medical Center, Bundang Jesaeng General Hospital, Seohyeon-ro, Seongnam, Gyeonggi-do, 13590, South Korea
- Young Sik Kim
- Department of Emergency Medicine, Daejin Medical Center, Bundang Jesaeng General Hospital, Seohyeon-ro, Seongnam, Gyeonggi-do, 13590, South Korea
- Jung Hwan Ahn
- Department of Emergency Medicine, Ajou University School of Medicine, World Cup-ro, Suwon, Gyeonggi-do, 16499, South Korea.
- Electronics and Telecommunications Research Institute (ETRI), Techno sunhwan-ro, Daegu, 42994, South Korea.
12. Gollapalli M, Rahman A, Kudos SA, Foula MS, Alkhalifa AM, Albisher HM, Al-Hariri MT, Mohammad N. Appendicitis Diagnosis: Ensemble Machine Learning and Explainable Artificial Intelligence-Based Comprehensive Approach. Big Data and Cognitive Computing 2024;8:108. [DOI: 10.3390/bdcc8090108]
Abstract
Appendicitis is a condition wherein the appendix becomes inflamed, and it can be difficult to diagnose accurately. The type of appendicitis can also be hard to determine, leading to misdiagnosis and difficulty in managing the condition. To avoid complications and reduce mortality, early diagnosis and treatment are crucial. While Alvarado's clinical scoring system is not sufficient, ultrasound and computed tomography (CT) imaging are effective but have downsides such as operator dependency and radiation exposure. This study proposes the use of machine learning methods and a locally collected, reliable dataset to enhance the identification of acute appendicitis while distinguishing between complicated and non-complicated appendicitis. Machine learning can help reduce diagnostic errors and improve treatment decisions. This study conducted four different experiments using various ML algorithms, including K-nearest neighbors (KNN), decision tree (DT), bagging, and stacking. The experimental results showed that the stacking model had the highest training accuracy, test set accuracy, precision, and F1 score, which were 97.51%, 92.63%, 95.29%, and 92.04%, respectively. Feature importance and explainable AI (XAI) identified neutrophils, WBC_Count, Total_LOS, P_O_LOS, and Symptoms_Days as the principal features that significantly affected the performance of the model. Based on the outcomes and feedback from medical professionals, the scheme is promising in terms of its effectiveness in diagnosing acute appendicitis.
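A stacking ensemble over the base learners named above can be set up directly in scikit-learn; the hyperparameters below are illustrative, and the feature matrix would hold the clinical variables listed in the abstract.

```python
# Sketch of a stacking ensemble with KNN, decision-tree, and bagging base learners (illustrative settings).
from sklearn.ensemble import BaggingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

stack = StackingClassifier(
    estimators=[
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=7))),
        ("tree", DecisionTreeClassifier(max_depth=5)),
        ("bagging", BaggingClassifier(n_estimators=100)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
# stack.fit(X_train, y_train)  # X would contain neutrophils, WBC count, length of stay, symptom days, etc.
```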
Affiliation(s)
- Mohammed Gollapalli
- Department of Computer Information Systems, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, P.O. Box 1982, Dammam 31441, Saudi Arabia
- Atta Rahman
- Department of Computer Science, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, P.O. Box 1982, Dammam 31441, Saudi Arabia
- Sheriff A. Kudos
- Department of Computer Engineering, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, P.O. Box 1982, Dammam 31441, Saudi Arabia
- Mohammed S. Foula
- Department of Surgery, King Fahd University Hospital, Imam Abdulrahman Bin Faisal University, P.O. Box 1982, Dammam 31441, Saudi Arabia
- Abdullah Mahmoud Alkhalifa
- Department of Surgery, King Fahd University Hospital, Imam Abdulrahman Bin Faisal University, P.O. Box 1982, Dammam 31441, Saudi Arabia
- Hassan Mohammed Albisher
- Department of Surgery, King Fahd University Hospital, Imam Abdulrahman Bin Faisal University, P.O. Box 1982, Dammam 31441, Saudi Arabia
- Mohammed Taha Al-Hariri
- Department of Physiology, College of Medicine, Imam Abdulrahman Bin Faisal University, P.O. Box 1982, Dammam 31441, Saudi Arabia
- Nazeeruddin Mohammad
- Cybersecurity Center, Prince Mohammad Bin Fahd University, P.O. Box 1664, Alkhobar 31952, Saudi Arabia
13. Marullo G, Ulrich L, Antonaci FG, Audisio A, Aprato A, Massè A, Vezzetti E. Classification of AO/OTA 31A/B femur fractures in X-ray images using YOLOv8 and advanced data augmentation techniques. Bone Rep 2024;22:101801. [PMID: 39324016] [PMCID: PMC11422035] [DOI: 10.1016/j.bonr.2024.101801]
Abstract
Femur fractures are a significant worldwide public health concern that affects patients as well as their families because of their high frequency, morbidity, and mortality. Computer-aided diagnostic (CAD) technologies have shown promising results in the efficiency and accuracy of fracture classification, particularly with the growing use of deep learning (DL) approaches. Nevertheless, the complexity is further increased by the need to collect enough input data to train these algorithms and by the challenge of interpreting the findings. By improving on the results of the most recent deep learning-based Arbeitsgemeinschaft für Osteosynthesefragen and Orthopaedic Trauma Association (AO/OTA) classification of femur fractures, this study intends to support physicians in making correct and timely decisions regarding patient care. A state-of-the-art architecture, YOLOv8, was used and refined while paying close attention to the interpretability of the model. Furthermore, data augmentation techniques were applied during preprocessing, increasing the number of dataset samples through image processing alterations. The fine-tuned YOLOv8 model achieved remarkable results, with 0.9 accuracy, 0.85 precision, 0.85 recall, and 0.85 F1-score, computed by averaging the values across all the individual classes for each metric. This study shows the proposed architecture's effectiveness in enhancing the AO/OTA classification of femur fractures, assisting physicians in making prompt and accurate diagnoses.
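A fine-tuning run for a YOLOv8 classification model, with a few of the library's built-in augmentation hyperparameters enabled, could look like the sketch below; the checkpoint size, dataset path, and settings are placeholders rather than the study's configuration.

```python
# Minimal Ultralytics YOLOv8 classification fine-tuning sketch (placeholder dataset path and settings).
from ultralytics import YOLO

model = YOLO("yolov8s-cls.pt")              # pretrained classification checkpoint
model.train(
    data="ao_ota_femur_xrays",              # hypothetical folder with train/val class subfolders
    epochs=100,
    imgsz=640,
    degrees=10, translate=0.1, fliplr=0.5,  # examples of built-in augmentation hyperparameters
)
metrics = model.val()                       # top-1 / top-5 accuracy on the validation split
```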
Affiliation(s)
- Giorgia Marullo
- Department of Management, Production, and Design, Politecnico di Torino, C.so Duca degli Abruzzi, 24, Torino 10129, Italy
- Luca Ulrich
- Department of Management, Production, and Design, Politecnico di Torino, C.so Duca degli Abruzzi, 24, Torino 10129, Italy
- Francesca Giada Antonaci
- Department of Management, Production, and Design, Politecnico di Torino, C.so Duca degli Abruzzi, 24, Torino 10129, Italy
- Andrea Audisio
- Pediatric Orthopaedics and Traumatology, Regina Margherita Children's Hospital, Torino 10126, Italy
- Alessandro Aprato
- Department of Surgical Sciences, University of Turin, Torino 10124, Italy
- Alessandro Massè
- Department of Surgical Sciences, University of Turin, Torino 10124, Italy
- Enrico Vezzetti
- Department of Management, Production, and Design, Politecnico di Torino, C.so Duca degli Abruzzi, 24, Torino 10129, Italy
14. Cappuccio M, Bianco P, Rotondo M, Spiezia S, D'Ambrosio M, Menegon Tasselli F, Guerra G, Avella P. Current use of artificial intelligence in the diagnosis and management of acute appendicitis. Minerva Surg 2024;79:326-338. [PMID: 38477067] [DOI: 10.23736/s2724-5691.23.10156-0]
Abstract
INTRODUCTION Acute appendicitis is a common and time-sensitive surgical emergency, requiring rapid and accurate diagnosis and management to prevent complications. Artificial intelligence (AI) has emerged as a transformative tool in healthcare, offering significant potential to improve the diagnosis and management of acute appendicitis. This review provides an overview of the evolving role of AI in the diagnosis and management of acute appendicitis, highlighting its benefits, challenges, and future perspectives. EVIDENCE ACQUISITION We performed a literature search on articles published from 2018 to September 2023. We included only original articles. EVIDENCE SYNTHESIS Overall, 121 studies were examined. We included 32 studies: 23 addressed diagnosis, 5 the differentiation between complicated and uncomplicated appendicitis, and 4 the management of acute appendicitis. CONCLUSIONS AI is poised to revolutionize the diagnosis and management of acute appendicitis by improving accuracy, speed, and consistency. It could potentially reduce healthcare costs. As AI technologies continue to evolve, further research and collaboration are needed to fully realize their potential in the diagnosis and management of acute appendicitis.
Affiliation(s)
- Micaela Cappuccio
- Department of Clinical Medicine and Surgery, University of Naples Federico II, Naples, Italy
- Paolo Bianco
- Hepatobiliary and Pancreatic Surgery Unit, Pineta Grande Hospital, Castel Volturno, Caserta, Italy
- Marco Rotondo
- V. Tiberio Department of Medicine and Health Sciences, University of Molise, Campobasso, Italy
- Salvatore Spiezia
- V. Tiberio Department of Medicine and Health Sciences, University of Molise, Campobasso, Italy
- Marco D'Ambrosio
- V. Tiberio Department of Medicine and Health Sciences, University of Molise, Campobasso, Italy
- Germano Guerra
- V. Tiberio Department of Medicine and Health Sciences, University of Molise, Campobasso, Italy
- Pasquale Avella
- Department of Clinical Medicine and Surgery, University of Naples Federico II, Naples, Italy
- Hepatobiliary and Pancreatic Surgery Unit, Pineta Grande Hospital, Castel Volturno, Caserta, Italy
15. Bianchi V, Giambusso M, De Iacob A, Chiarello MM, Brisinda G. Artificial intelligence in the diagnosis and treatment of acute appendicitis: a narrative review. Updates Surg 2024;76:783-792. [PMID: 38472633] [PMCID: PMC11129994] [DOI: 10.1007/s13304-024-01801-x]
Abstract
Artificial intelligence is transforming healthcare. It can improve patient care by analyzing large amounts of data to support more informed treatment decisions, and it can enhance medical research by analyzing and interpreting data from clinical trials and research projects to identify subtle but meaningful trends beyond ordinary perception. Artificial intelligence refers to the simulation of human intelligence in computers: artificial intelligence systems can perform tasks that require human-like intelligence, such as speech recognition, visual perception, pattern recognition, decision-making, and language processing. Artificial intelligence has several subdivisions, including machine learning, natural language processing, computer vision, and robotics. By automating specific routine tasks, artificial intelligence can improve healthcare efficiency. By leveraging machine learning algorithms, artificial intelligence systems can offer new opportunities for enhancing both the efficiency and effectiveness of surgical procedures, particularly in the training of minimally invasive surgery. As artificial intelligence continues to advance, it is likely to play an increasingly significant role in the field of surgical learning. Physicians have witnessed a growing role of artificial intelligence over the last decade, involving different medical specialties such as ophthalmology, cardiology, and urology, as well as abdominal surgery. In addition to improvements in diagnosis, assessment of treatment efficacy, and autonomous actions, artificial intelligence has the potential to improve surgeons' ability to decide whether acute surgery is indicated. The role of artificial intelligence in emergency departments has also been investigated. We considered one of the most common conditions emergency surgeons face, acute appendicitis, to assess the state of the art of artificial intelligence in this frequent acute disease. The role of artificial intelligence in the diagnosis and treatment of acute appendicitis is discussed in this narrative review.
Affiliation(s)
- Valentina Bianchi
- Emergency Surgery and Trauma Center, Department of Abdominal and Endocrine Metabolic Medical and Surgical Sciences, IRCCS, Fondazione Policlinico Universitario A Gemelli, Largo Agostino Gemelli 8, 00168, Rome, Italy
- Mauro Giambusso
- General Surgery Operative Unit, Vittorio Emanuele Hospital, 93012, Gela, Italy
- Alessandra De Iacob
- Emergency Surgery and Trauma Center, Department of Abdominal and Endocrine Metabolic Medical and Surgical Sciences, IRCCS, Fondazione Policlinico Universitario A Gemelli, Largo Agostino Gemelli 8, 00168, Rome, Italy
- Maria Michela Chiarello
- Department of Surgery, General Surgery Operative Unit, Azienda Sanitaria Provinciale Cosenza, 87100, Cosenza, Italy
- Giuseppe Brisinda
- Emergency Surgery and Trauma Center, Department of Abdominal and Endocrine Metabolic Medical and Surgical Sciences, IRCCS, Fondazione Policlinico Universitario A Gemelli, Largo Agostino Gemelli 8, 00168, Rome, Italy.
- Catholic School of Medicine, University Department of Translational Medicine and Surgery, 00168, Rome, Italy.
16. Liang D, Fan Y, Zeng Y, Zhou H, Zhou H, Li G, Liang Y, Zhong Z, Chen D, Chen A, Li G, Deng J, Huang B, Wei X. Development and Validation of a Deep Learning and Radiomics Combined Model for Differentiating Complicated From Uncomplicated Acute Appendicitis. Acad Radiol 2024;31:1344-1354. [PMID: 37775450] [DOI: 10.1016/j.acra.2023.08.018]
Abstract
RATIONALE AND OBJECTIVES This study aimed to develop and validate a deep learning and radiomics combined model for differentiating complicated from uncomplicated acute appendicitis (AA). MATERIALS AND METHODS This retrospective multicenter study included 1165 adult AA patients (training cohort, 700 patients; validation cohort, 465 patients) with available abdominal pelvic computed tomography (CT) images. The reference standard for complicated/uncomplicated AA was the surgery and pathology records. We developed our combined model with CatBoost based on the selected clinical characteristics, CT visual features, deep learning features, and radiomics features. We externally validated our combined model and compared its performance with that of the conventional combined model, the deep learning radiomics (DLR) model, and the radiologist's visual diagnosis using receiver operating characteristic (ROC) curve analysis. RESULTS In the training cohort, the area under the ROC curve (AUC) of our combined model in distinguishing complicated from uncomplicated AA was 0.816 (95% confidence interval [CI]: 0.785-0.844). In the validation cohort, our combined model showed robust performance across the data from three centers, with AUCs of 0.836 (95% CI: 0.785-0.879), 0.793 (95% CI: 0.695-0.872), and 0.723 (95% CI: 0.632-0.802). In the total validation cohort, our combined model (AUC = 0.799) performed better than the conventional combined model, DLR model, and radiologist's visual diagnosis (AUC = 0.723, 0.755, and 0.679, respectively; all P < 0.05). Decision curve analysis showed that our combined model provided greater net benefit in predicting complicated AA than the other three models. CONCLUSION Our combined model allows the accurate differentiation of complicated and uncomplicated AA.
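Fusing the feature groups into a single gradient-boosting classifier can be sketched as follows; the feature blocks here are random placeholders standing in for the extracted clinical, CT-visual, deep learning, and radiomics features.

```python
# Sketch of a CatBoost model over concatenated clinical, CT-visual, deep, and radiomics features
# (random placeholder features; extraction of the real features is outside this snippet).
import numpy as np
from catboost import CatBoostClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
clinical  = rng.random((1165, 5))      # e.g., age, white blood cell count, ...
ct_visual = rng.random((1165, 4))      # radiologist-scored CT signs
deep      = rng.random((1165, 128))    # deep learning features
radiomics = rng.random((1165, 60))     # selected radiomics features
y = rng.integers(0, 2, 1165)           # complicated (1) vs. uncomplicated (0)

X = np.hstack([clinical, ct_visual, deep, radiomics])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)

model = CatBoostClassifier(iterations=500, depth=6, eval_metric="AUC", verbose=0)
model.fit(X_tr, y_tr, eval_set=(X_te, y_te))
print(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```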
Affiliation(s)
- Dan Liang
- First Affiliated Hospital of Jinan University, Guangzhou, Guangdong, People's Republic of China (D.L.); Department of Radiology, Guangzhou First People's Hospital, School of Medicine, South China University of Technology, Guangzhou, Guangdong, People's Republic of China (D.L., Y.L., D.C., A.C., J.D., X.W.)
- Yaheng Fan
- Medical AI Lab, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, Guangdong, People's Republic of China (Y.F., Y.Z., Z.Z., B.H.)
- Yinghou Zeng
- Medical AI Lab, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, Guangdong, People's Republic of China (Y.F., Y.Z., Z.Z., B.H.)
- Hui Zhou
- Department of Radiology, The Sixth Affiliated Hospital of Guangzhou Medical University, Qingyuan People's Hospital, Qingyuan, Guangdong, People's Republic of China (Hui Zhou, Guangming Li)
- Hong Zhou
- Department of Radiology, The First Affiliated Hospital of University of South China, Hengyang, Hunan, People's Republic of China (Hong Zhou)
- Guangming Li
- Department of Radiology, The Sixth Affiliated Hospital of Guangzhou Medical University, Qingyuan People's Hospital, Qingyuan, Guangdong, People's Republic of China (Hui Zhou, Guangming Li)
- Yingying Liang
- Department of Radiology, Guangzhou First People's Hospital, School of Medicine, South China University of Technology, Guangzhou, Guangdong, People's Republic of China (D.L., Y.L., D.C., A.C., J.D., X.W.)
- Zhangnan Zhong
- Medical AI Lab, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, Guangdong, People's Republic of China (Y.F., Y.Z., Z.Z., B.H.)
- Dandan Chen
- Department of Radiology, Guangzhou First People's Hospital, School of Medicine, South China University of Technology, Guangzhou, Guangdong, People's Republic of China (D.L., Y.L., D.C., A.C., J.D., X.W.)
- Amei Chen
- Department of Radiology, Guangzhou First People's Hospital, School of Medicine, South China University of Technology, Guangzhou, Guangdong, People's Republic of China (D.L., Y.L., D.C., A.C., J.D., X.W.)
- Guanwei Li
- Department of Colorectal & Anal Surgery, Guangzhou First People's Hospital, School of Medicine, South China University of Technology, Guangzhou, Guangdong, People's Republic of China (Guanwei Li)
- Jinhe Deng
- Department of Radiology, Guangzhou First People's Hospital, School of Medicine, South China University of Technology, Guangzhou, Guangdong, People's Republic of China (D.L., Y.L., D.C., A.C., J.D., X.W.)
- Bingsheng Huang
- Medical AI Lab, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, Guangdong, People's Republic of China (Y.F., Y.Z., Z.Z., B.H.)
- Xinhua Wei
- Department of Radiology, Guangzhou First People's Hospital, School of Medicine, South China University of Technology, Guangzhou, Guangdong, People's Republic of China (D.L., Y.L., D.C., A.C., J.D., X.W.)
17. Yi PH, Garner HW, Hirschmann A, Jacobson JA, Omoumi P, Oh K, Zech JR, Lee YH. Clinical Applications, Challenges, and Recommendations for Artificial Intelligence in Musculoskeletal and Soft-Tissue Ultrasound: AJR Expert Panel Narrative Review. AJR Am J Roentgenol 2024;222:e2329530. [PMID: 37436032] [DOI: 10.2214/ajr.23.29530]
Abstract
Artificial intelligence (AI) is increasingly used in clinical practice for musculoskeletal imaging tasks, such as disease diagnosis and image reconstruction. AI applications in musculoskeletal imaging have focused primarily on radiography, CT, and MRI. Although musculoskeletal ultrasound stands to benefit from AI in similar ways, such applications have been relatively underdeveloped. In comparison with other modalities, ultrasound has unique advantages and disadvantages that must be considered in AI algorithm development and clinical translation. Challenges in developing AI for musculoskeletal ultrasound involve both clinical aspects of image acquisition and practical limitations in image processing and annotation. Solutions from other radiology subspecialties (e.g., crowdsourced annotations coordinated by professional societies), along with use cases (most commonly rotator cuff tendon tears and palpable soft-tissue masses), can be applied to musculoskeletal ultrasound to help develop AI. To facilitate creation of high-quality imaging datasets for AI model development, technologists and radiologists should focus on increasing uniformity in musculoskeletal ultrasound performance and increasing annotations of images for specific anatomic regions. This Expert Panel Narrative Review summarizes available evidence regarding AI's potential utility in musculoskeletal ultrasound and challenges facing its development. Recommendations for future AI advancement and clinical translation in musculoskeletal ultrasound are discussed.
Affiliation(s)
- Paul H Yi
- University of Maryland Medical Intelligent Imaging Center, University of Maryland School of Medicine, Baltimore, MD
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD
- Anna Hirschmann
- Imamed Radiology Nordwest, Basel, Switzerland
- Department of Radiology, University of Basel, Basel, Switzerland
- Jon A Jacobson
- Lenox Hill Radiology, New York, NY
- Department of Radiology, University of California, San Diego Medical Center, San Diego, CA
- Patrick Omoumi
- Department of Radiology, Lausanne University Hospital, Lausanne, Switzerland
- Department of Radiology, University of Lausanne, Lausanne, Switzerland
- Kangrok Oh
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul 03722, South Korea
- John R Zech
- Department of Radiology, Columbia University Irving Medical Center, New York-Presbyterian Hospital, New York, NY
- Young Han Lee
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul 03722, South Korea
18. Baskaran RKR, Link A, Porr B, Franke T. Classification of chemically modified red blood cells in microflow using machine learning video analysis. Soft Matter 2024;20:952-958. [PMID: 38088860] [DOI: 10.1039/d3sm01337e]
Abstract
We classify native and chemically modified red blood cells with an AI-based video classifier. Using TensorFlow video analysis enables us to capture not only the morphology of the cells but also the trajectories of motion of individual red blood cells and their dynamics. We chemically modify cells in three different ways to model different pathological conditions and obtain classification accuracies of more than 90% between native and modified cells for all three classification tasks. Unlike standard cytometers, which are based on immunophenotyping, our microfluidic cytometer can rapidly categorize cells without any fluorescence labels, simply by analysing the shape and flow of red blood cells.
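A minimal TensorFlow video classifier of the kind described, a small 3D convolutional network over short frame sequences, is sketched below; the clip shape, class count, and layer sizes are assumptions.

```python
# Minimal 3D-CNN video classifier sketch in TensorFlow (assumed clip shape and class count).
import tensorflow as tf
from tensorflow.keras import layers

num_classes = 2  # e.g., native vs. chemically modified red blood cells

model = tf.keras.Sequential([
    tf.keras.Input(shape=(16, 64, 64, 1)),           # (frames, height, width, channels) per clip
    layers.Conv3D(16, 3, padding="same", activation="relu"),
    layers.MaxPooling3D(pool_size=(1, 2, 2)),         # pool spatially, keep temporal resolution
    layers.Conv3D(32, 3, padding="same", activation="relu"),
    layers.GlobalAveragePooling3D(),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(clips, labels, epochs=20)  # clips: array of shape (N, 16, 64, 64, 1)
```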
Collapse
Affiliation(s)
- R K Rajaram Baskaran
- Division of Biomedical Engineering, School of Engineering, University of Glasgow, Oakfield Avenue, Glasgow G12 8LT, UK.
| | - A Link
- Division of Biomedical Engineering, School of Engineering, University of Glasgow, Oakfield Avenue, Glasgow G12 8LT, UK.
| | - B Porr
- Division of Biomedical Engineering, School of Engineering, University of Glasgow, Oakfield Avenue, Glasgow G12 8LT, UK.
| | - T Franke
- Division of Biomedical Engineering, School of Engineering, University of Glasgow, Oakfield Avenue, Glasgow G12 8LT, UK.
| |
Collapse
|
19
|
Zhao Y, Wang X, Zhang Y, Liu T, Zuo S, Sun L, Zhang J, Wang K, Liu J. Combination of clinical information and radiomics models for the differentiation of acute simple appendicitis and non simple appendicitis on CT images. Sci Rep 2024; 14:1854. [PMID: 38253872 PMCID: PMC10803326 DOI: 10.1038/s41598-024-52390-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2023] [Accepted: 01/18/2024] [Indexed: 01/24/2024] Open
Abstract
To investigate radiomics models for the differentiation of simple and non-simple acute appendicitis. This study retrospectively included 334 appendectomy cases (76 simple and 258 non-simple cases) of acute appendicitis. These cases were divided into training (n = 106) and test cohorts (n = 228). A radiomics model was developed using the radiomic features of the appendix area on CT images as the input variables. A CT model was developed using the clinical and CT features as the input variables. A combined model was developed by combining the radiomics model and clinical information. These models were tested, and their performance was evaluated by receiver operating characteristic curves and decision curve analysis (DCA). The variables independently associated with non-simple appendicitis in the combined model were body temperature, age, percentage of neutrophils, and Rad-score. The AUC of the combined model was significantly higher than that of the CT model (P = 0.041). The AUC of the radiomics model was also higher than that of the CT model but did not reach statistical significance (P = 0.053). DCA showed that all three models had a higher net benefit (NB) than the default strategies, and the combined model presented the highest NB. A nomogram of the combined model was developed as the graphical representation of the final model. It is feasible to use the combined information of the clinical and CT radiomics models for the differentiation of simple and non-simple acute appendicitis.
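The combined model described above fuses a radiomics signature with clinical variables in a single regression. The sketch below illustrates that late-fusion idea with scikit-learn on synthetic data; the feature set (body temperature, age, neutrophil percentage, Rad-score) follows the abstract, but the data, value ranges, and hyperparameters are assumptions, not the study's cohort or pipeline.

```python
# Illustrative combined clinical + radiomics model on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 334
X = np.column_stack([
    rng.normal(37.5, 0.8, n),   # body temperature (degrees C)
    rng.normal(40, 15, n),      # age (years)
    rng.normal(75, 10, n),      # neutrophil percentage
    rng.normal(0.0, 1.0, n),    # Rad-score produced by a radiomics model
])
y = rng.integers(0, 2, n)       # 0 = simple, 1 = non-simple appendicitis

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
combined = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, combined.predict_proba(X_test)[:, 1])
print(f"Combined-model AUC on synthetic data: {auc:.2f}")
```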
Collapse
Affiliation(s)
- Yinming Zhao
- Department of Gastrointestinal Surgery, Peking University First Hospital, Beijing, China
| | - Xin Wang
- Department of Gastrointestinal Surgery, Peking University First Hospital, Beijing, China
| | - Yaofeng Zhang
- Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
| | - Tao Liu
- Department of Gastrointestinal Surgery, Peking University First Hospital, Beijing, China
| | - Shuai Zuo
- Department of Gastrointestinal Surgery, Peking University First Hospital, Beijing, China
| | - Lie Sun
- Department of Gastrointestinal Surgery, Peking University First Hospital, Beijing, China
| | - Junling Zhang
- Department of Gastrointestinal Surgery, Peking University First Hospital, Beijing, China.
| | - Kexin Wang
- School of Basic Medical Sciences, Capital Medical University Beijing, Beijing, China.
| | - Jing Liu
- Department of Radiology, Peking University First Hospital, Beijing, China.
| |
Collapse
|
20
|
Marcinkevičs R, Reis Wolfertstetter P, Klimiene U, Chin-Cheong K, Paschke A, Zerres J, Denzinger M, Niederberger D, Wellmann S, Ozkan E, Knorr C, Vogt JE. Interpretable and intervenable ultrasonography-based machine learning models for pediatric appendicitis. Med Image Anal 2024; 91:103042. [PMID: 38000257 DOI: 10.1016/j.media.2023.103042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2023] [Revised: 11/10/2023] [Accepted: 11/20/2023] [Indexed: 11/26/2023]
Abstract
Appendicitis is among the most frequent reasons for pediatric abdominal surgeries. Previous decision support systems for appendicitis have focused on clinical, laboratory, scoring, and computed tomography data and have ignored abdominal ultrasound, despite its noninvasive nature and widespread availability. In this work, we present interpretable machine learning models for predicting the diagnosis, management and severity of suspected appendicitis using ultrasound images. Our approach utilizes concept bottleneck models (CBM) that facilitate interpretation and interaction with high-level concepts understandable to clinicians. Furthermore, we extend CBMs to prediction problems with multiple views and incomplete concept sets. Our models were trained on a dataset comprising 579 pediatric patients with 1709 ultrasound images accompanied by clinical and laboratory data. Results show that our proposed method enables clinicians to utilize a human-understandable and intervenable predictive model without compromising performance or requiring time-consuming image annotation when deployed. For predicting the diagnosis, the extended multiview CBM attained an AUROC of 0.80 and an AUPR of 0.92, performing comparably to similar black-box neural networks trained and tested on the same dataset.
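A concept bottleneck model forces the prediction to pass through a small layer of clinician-interpretable concepts, which is what makes it inspectable and intervenable. The Keras sketch below shows that basic structure under assumed image size, concept count, and loss weights; it is a single-view simplification, not the authors' multiview, incomplete-concept-set implementation.

```python
# Minimal concept-bottleneck sketch: image -> concepts -> diagnosis.
# Concept count, image shape, and loss weights are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CONCEPTS = 8          # high-level sonographic findings (assumed number)
IMG_SHAPE = (224, 224, 1)

image = layers.Input(shape=IMG_SHAPE)
x = layers.Conv2D(16, 3, activation="relu")(image)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)
# Bottleneck: each unit is supervised with an annotated concept label, so a
# clinician can inspect or override ("intervene on") its predicted value.
concepts = layers.Dense(NUM_CONCEPTS, activation="sigmoid", name="concepts")(x)
diagnosis = layers.Dense(1, activation="sigmoid", name="diagnosis")(concepts)

cbm = models.Model(image, [concepts, diagnosis])
cbm.compile(optimizer="adam",
            loss={"concepts": "binary_crossentropy",
                  "diagnosis": "binary_crossentropy"},
            loss_weights={"concepts": 1.0, "diagnosis": 1.0})
cbm.summary()
```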
Collapse
Affiliation(s)
- Ričards Marcinkevičs
- Department of Computer Science, ETH Zurich, Universitätstrasse 6, Zürich, 8092, Switzerland.
| | - Patricia Reis Wolfertstetter
- Department of Pediatric Surgery and Pediatric Orthopedics, Hospital St. Hedwig of the Order of St. John of God, University Children's Hospital Regensburg (KUNO), Steinmetzstrasse 1-3, Regensburg, 93049, Germany; Faculty of Medicine, University of Regensburg, Franz-Josef-Strauss-Allee 11, Regensburg, 93053, Germany.
| | - Ugne Klimiene
- Department of Computer Science, ETH Zurich, Universitätstrasse 6, Zürich, 8092, Switzerland
| | - Kieran Chin-Cheong
- Department of Computer Science, ETH Zurich, Universitätstrasse 6, Zürich, 8092, Switzerland
| | - Alyssia Paschke
- Faculty of Medicine, University of Regensburg, Franz-Josef-Strauss-Allee 11, Regensburg, 93053, Germany
| | - Julia Zerres
- Faculty of Medicine, University of Regensburg, Franz-Josef-Strauss-Allee 11, Regensburg, 93053, Germany
| | - Markus Denzinger
- Department of Pediatric Surgery and Pediatric Orthopedics, Hospital St. Hedwig of the Order of St. John of God, University Children's Hospital Regensburg (KUNO), Steinmetzstrasse 1-3, Regensburg, 93049, Germany; Faculty of Medicine, University of Regensburg, Franz-Josef-Strauss-Allee 11, Regensburg, 93053, Germany
| | - David Niederberger
- Department of Computer Science, ETH Zurich, Universitätstrasse 6, Zürich, 8092, Switzerland
| | - Sven Wellmann
- Faculty of Medicine, University of Regensburg, Franz-Josef-Strauss-Allee 11, Regensburg, 93053, Germany; Division of Neonatology, Hospital St. Hedwig of the Order of St. John of God, University Children's Hospital Regensburg (KUNO), Steinmetzstrasse 1-3, Regensburg, 93049, Germany
| | - Ece Ozkan
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 43 Vassar Street, Cambridge, 02139, USA
| | - Christian Knorr
- Department of Pediatric Surgery and Pediatric Orthopedics, Hospital St. Hedwig of the Order of St. John of God, University Children's Hospital Regensburg (KUNO), Steinmetzstrasse 1-3, Regensburg, 93049, Germany
| | - Julia E Vogt
- Department of Computer Science, ETH Zurich, Universitätstrasse 6, Zürich, 8092, Switzerland.
| |
Collapse
|
21
|
Issaiy M, Zarei D, Saghazadeh A. Artificial Intelligence and Acute Appendicitis: A Systematic Review of Diagnostic and Prognostic Models. World J Emerg Surg 2023; 18:59. [PMID: 38114983 PMCID: PMC10729387 DOI: 10.1186/s13017-023-00527-2] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2023] [Accepted: 12/06/2023] [Indexed: 12/21/2023] Open
Abstract
BACKGROUND To assess the efficacy of artificial intelligence (AI) models in diagnosing and prognosticating acute appendicitis (AA) in adult patients compared to traditional methods. AA is a common cause of emergency department visits and abdominal surgeries. It is typically diagnosed through clinical assessments, laboratory tests, and imaging studies. However, traditional diagnostic methods can be time-consuming and inaccurate. Machine learning models have shown promise in improving diagnostic accuracy and predicting outcomes. MAIN BODY A systematic review following the PRISMA guidelines was conducted, searching PubMed, Embase, Scopus, and Web of Science databases. Studies were evaluated for risk of bias using the Prediction Model Risk of Bias Assessment Tool. Data points extracted included model type, input features, validation strategies, and key performance metrics. RESULTS In total, 29 studies were analyzed, out of which 21 focused on diagnosis, seven on prognosis, and one on both. Artificial neural networks (ANNs) were the most commonly employed algorithm for diagnosis. Both ANN and logistic regression were also widely used for categorizing types of AA. ANNs showed high performance in most cases, with accuracy rates often exceeding 80% and AUC values peaking at 0.985. The models also demonstrated promising results in predicting postoperative outcomes such as sepsis risk and ICU admission. Risk of bias was identified in a majority of studies, with selection bias and lack of internal validation being the most common issues. CONCLUSION AI algorithms demonstrate significant promise in diagnosing and prognosticating AA, often surpassing traditional methods and clinical scores such as the Alvarado scoring system in terms of speed and accuracy.
Collapse
Affiliation(s)
- Mahbod Issaiy
- School of Medicine, Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Systematic Review and Meta-Analysis Expert Group (SRMEG), Universal Scientific Education and Research Network (USERN), Tehran, Iran
| | - Diana Zarei
- School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Tehran University of Medical Science, Tehran, Iran
| | - Amene Saghazadeh
- Systematic Review and Meta-Analysis Expert Group (SRMEG), Universal Scientific Education and Research Network (USERN), Tehran, Iran.
- Research Center for Immunodeficiencies, Children's Medical Center, Tehran University of Medical Sciences, Tehran, Iran.
| |
Collapse
|
22
|
Harmantepe AT, Dikicier E, Gönüllü E, Ozdemir K, Kamburoğlu MB, Yigit M. A different way to diagnosis acute appendicitis: machine learning. POLISH JOURNAL OF SURGERY 2023; 96:38-43. [PMID: 38629278 DOI: 10.5604/01.3001.0053.5994] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/19/2024]
Abstract
Introduction: Machine learning is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns, and make decisions with minimal human intervention. Aim: Our aim is to predict acute appendicitis, which is the most common indication for emergency surgery, using machine learning algorithms with an easy and inexpensive method. Materials and methods: Patients who were treated surgically with a prediagnosis of acute appendicitis in a single center between 2011 and 2021 were analyzed. Patients with right lower quadrant pain were selected. A total of 189 positive and 156 negative appendectomies were found. Gender and hemogram parameters were used as features. Machine learning algorithms and data analysis were implemented in the Python (3.7) programming language. Results: Negative appendectomies were found in 62% (n = 97) of the women and in 38% (n = 59) of the men. Positive appendectomies were present in 38% (n = 72) of the women and 62% (n = 117) of the men. The accuracy on the test data was 82.7% for logistic regression, 68.9% for support vector machines, 78.1% for k-nearest neighbors, and 83.9% for neural networks. The accuracy of the voting classifier created with logistic regression, k-nearest neighbors, support vector machines, and artificial neural networks was 86.2%. In the voting classifier, the sensitivity was 83.7% and the specificity was 88.6%. Conclusions: The results of our study show that machine learning is an effective method for diagnosing acute appendicitis. This study presents a practical, easy, fast, and inexpensive method to predict the diagnosis of acute appendicitis.
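The soft-voting ensemble described above can be outlined directly in scikit-learn. The sketch below combines logistic regression, k-nearest neighbors, a support vector machine, and a small neural network; the synthetic tabular features stand in for the hemogram-derived variables, and the hyperparameters are assumptions rather than the study's settings.

```python
# Sketch of a voting classifier over hemogram-style tabular features.
# Synthetic data only; feature meanings and hyperparameters are assumed.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.normal(size=(345, 6))        # e.g., WBC, neutrophils, sex, ...
y = rng.integers(0, 2, size=345)     # 1 = positive appendectomy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

voter = VotingClassifier(
    estimators=[
        ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
        ("ann", make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000))),
    ],
    voting="soft",  # average predicted probabilities across the four models
)
voter.fit(X_tr, y_tr)
print(f"Voting-classifier accuracy on synthetic data: {voter.score(X_te, y_te):.2f}")
```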
Collapse
Affiliation(s)
| | - Enis Dikicier
- Sakarya University Faculty of Medicine, Department of General Surgery
| | - Emre Gönüllü
- Sakarya University Education and Research Hospital, Department of General Surgery
| | | | | | - Merve Yigit
- Sakarya University Education and Research Hospital, Department of General Surgery
| |
Collapse
|
23
|
Callahan A, Ashley E, Datta S, Desai P, Ferris TA, Fries JA, Halaas M, Langlotz CP, Mackey S, Posada JD, Pfeffer MA, Shah NH. The Stanford Medicine data science ecosystem for clinical and translational research. JAMIA Open 2023; 6:ooad054. [PMID: 37545984 PMCID: PMC10397535 DOI: 10.1093/jamiaopen/ooad054] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2022] [Revised: 03/14/2023] [Accepted: 07/19/2023] [Indexed: 08/08/2023] Open
Abstract
Objective To describe the infrastructure, tools, and services developed at Stanford Medicine to maintain its data science ecosystem and research patient data repository for clinical and translational research. Materials and Methods The data science ecosystem, dubbed the Stanford Data Science Resources (SDSR), includes infrastructure and tools to create, search, retrieve, and analyze patient data, as well as services for data deidentification, linkage, and processing to extract high-value information from healthcare IT systems. Data are made available via self-service and concierge access, on HIPAA-compliant secure computing infrastructure supported by in-depth user training. Results The Stanford Medicine Research Data Repository (STARR) functions as the SDSR data integration point, and includes electronic medical records, clinical images, text, bedside monitoring data and HL7 messages. SDSR tools include electronic phenotyping and cohort-building tools and a search engine for patient timelines. The SDSR supports patient data collection, reproducible research, and teaching using healthcare data, and facilitates industry collaborations and large-scale observational studies. Discussion Research patient data repositories and their underlying data science infrastructure are essential to realizing a learning health system and advancing the mission of academic medical centers. Challenges to maintaining the SDSR include ensuring sufficient financial support while providing researchers and clinicians with maximal access to data and digital infrastructure, balancing tool development with user training, and supporting the diverse needs of users. Conclusion Our experience maintaining the SDSR offers a case study for academic medical centers developing data science and research informatics infrastructure.
Collapse
Affiliation(s)
- Alison Callahan
- Stanford Center for Biomedical Informatics Research, Stanford University, Stanford, California, USA
| | - Euan Ashley
- Department of Medicine, School of Medicine, Stanford University, Stanford, California, USA
- Department of Genetics, School of Medicine, Stanford University, Stanford, California, USA
- Department of Biomedical Data Science, School of Medicine, Stanford University, Stanford, California, USA
| | - Somalee Datta
- Technology and Digital Solutions, Stanford Medicine, Stanford University, Stanford, California, USA
| | - Priyamvada Desai
- Technology and Digital Solutions, Stanford Medicine, Stanford University, Stanford, California, USA
| | - Todd A Ferris
- Technology and Digital Solutions, Stanford Medicine, Stanford University, Stanford, California, USA
| | - Jason A Fries
- Stanford Center for Biomedical Informatics Research, Stanford University, Stanford, California, USA
| | - Michael Halaas
- Technology and Digital Solutions, Stanford Medicine, Stanford University, Stanford, California, USA
| | - Curtis P Langlotz
- Department of Radiology, School of Medicine, Stanford University, Stanford, California, USA
| | - Sean Mackey
- Department of Anesthesia, School of Medicine, Stanford University, Stanford, California, USA
| | - José D Posada
- Technology and Digital Solutions, Stanford Medicine, Stanford University, Stanford, California, USA
| | - Michael A Pfeffer
- Technology and Digital Solutions, Stanford Medicine, Stanford University, Stanford, California, USA
| | - Nigam H Shah
- Stanford Center for Biomedical Informatics Research, Stanford University, Stanford, California, USA
- Technology and Digital Solutions, Stanford Medicine, Stanford University, Stanford, California, USA
- Clinical Excellence Research Center, School of Medicine, Stanford University, Stanford, California, USA
| |
Collapse
|
24
|
Kelly B, Martinez M, Do H, Hayden J, Huang Y, Yedavalli V, Ho C, Keane PA, Killeen R, Lawlor A, Moseley ME, Yeom KW, Lee EH. DEEP MOVEMENT: Deep learning of movie files for management of endovascular thrombectomy. Eur Radiol 2023; 33:5728-5739. [PMID: 36847835 PMCID: PMC10326097 DOI: 10.1007/s00330-023-09478-3] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2022] [Revised: 11/23/2022] [Accepted: 01/19/2023] [Indexed: 03/01/2023]
Abstract
OBJECTIVES Treatment and outcomes of acute stroke have been revolutionised by mechanical thrombectomy. Deep learning has shown great promise in diagnostics, but applications in video and interventional radiology lag behind. We aimed to develop a model that takes as input digital subtraction angiography (DSA) videos and classifies the video according to (1) the presence of large vessel occlusion (LVO), (2) the location of the occlusion, and (3) the efficacy of reperfusion. METHODS All patients who underwent DSA for anterior circulation acute ischaemic stroke between 2012 and 2019 were included. Consecutive normal studies were included to balance classes. An external validation (EV) dataset was collected from another institution. The trained model was also used on DSA videos post mechanical thrombectomy to assess thrombectomy efficacy. RESULTS In total, 1024 videos from 287 patients were included (44 for EV). Occlusion identification was achieved with 100% sensitivity and 91.67% specificity (EV 91.30% and 81.82%). Accuracy of location classification was 71% for ICA, 84% for M1, and 78% for M2 occlusions (EV 73%, 25%, and 50%). For post-thrombectomy DSA (n = 194), the model identified successful reperfusion with 100%, 88%, and 35% for ICA, M1, and M2 occlusions (EV 89%, 88%, and 60%). The model could also classify post-intervention videos as mTICI < 3 with an AUC of 0.71. CONCLUSIONS Our model can successfully distinguish normal DSA studies from those with LVO, classify thrombectomy outcome, and thereby solve a clinical radiology problem with two temporal elements (dynamic video and pre- and post-intervention imaging). KEY POINTS • DEEP MOVEMENT represents a novel application of a model applied to acute stroke imaging that handles two types of temporal complexity, dynamic video and pre- and post-intervention imaging. • The model takes as input digital subtraction angiograms of the anterior cerebral circulation and classifies them according to (1) the presence or absence of large vessel occlusion, (2) the location of the occlusion, and (3) the efficacy of thrombectomy. • Potential clinical utility lies in providing decision support via rapid interpretation (pre-thrombectomy) and automated, objective gradation of thrombectomy outcomes (post-thrombectomy).
Collapse
Affiliation(s)
- Brendan Kelly
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA.
- Department of Radiology, St Vincent's University Hospital, Elm Park, Dublin 4, Ireland.
- Insight Centre for Data Analytics, University College Dublin, Belfield, Dublin 4, Ireland.
| | - Mesha Martinez
- Department of Clinical Radiology and Imaging Sciences, Indiana University School of Medicine, Indianapolis, IN, USA
| | - Huy Do
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
| | | | - Yuhao Huang
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
| | - Vivek Yedavalli
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
| | - Chang Ho
- Department of Clinical Radiology and Imaging Sciences, Indiana University School of Medicine, Indianapolis, IN, USA
| | - Pearse A Keane
- Moorfields Eye Hospital, London, UK
- Institute of Ophthalmology, University College London, London, UK
| | - Ronan Killeen
- Department of Radiology, St Vincent's University Hospital, Elm Park, Dublin 4, Ireland
| | - Aonghus Lawlor
- Insight Centre for Data Analytics, University College Dublin, Belfield, Dublin 4, Ireland
| | - Michael E Moseley
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
| | - Kristen W Yeom
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
| | - Edward H Lee
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
| |
Collapse
|
25
|
Lee GP, Park SH, Kim YJ, Chung JW, Kim KG. Enhancing Disease Classification in Abdominal CT Scans through RGB Superposition Methods and 2D Convolutional Neural Networks: A Study of Appendicitis and Diverticulitis. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2023; 2023:7714483. [PMID: 37284168 PMCID: PMC10241572 DOI: 10.1155/2023/7714483] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/02/2022] [Revised: 02/10/2023] [Accepted: 04/15/2023] [Indexed: 06/08/2023]
Abstract
The primary symptom of both appendicitis and diverticulitis is pain in the right lower abdomen; it is almost impossible to diagnose these conditions through symptoms alone, and misdiagnoses still occur even when abdominal computed tomography (CT) scans are used. Most previous studies have used a 3D convolutional neural network (CNN) suitable for processing sequences of images. However, 3D CNN models can be difficult to implement in typical computing systems because they require large amounts of data, GPU memory, and extensive training time. We propose a deep learning method utilizing red, green, and blue (RGB) channel superposition images reconstructed from three slices of the image sequence. Using the RGB superposition image as the model input, the average accuracy was 90.98% for EfficientNetB0, 91.27% for EfficientNetB2, and 91.98% for EfficientNetB4. The AUC score using the RGB superposition image was higher than that of the original single-channel image for EfficientNetB4 (0.967 vs. 0.959, p = 0.0087). Comparing the model architectures using the RGB superposition method, EfficientNetB4 showed the highest performance on all indicators, with an accuracy of 91.98% and a recall of 95.35%. EfficientNetB4 using the RGB superposition method had an AUC score 0.011 higher (p = 0.0001) than EfficientNetB0 using the same method. The superposition of sequential slice images in CT scans enhances the distinction of features such as the shape and size of the target and adds spatial information for classifying disease. The proposed method has fewer constraints than the 3D CNN approach and is suitable for environments using 2D CNNs; thus, performance improvement can be achieved with limited resources.
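The RGB superposition idea is simple: three consecutive CT slices are assigned to the red, green, and blue channels of one image, so a standard 2D CNN receives some through-plane context without the cost of a 3D network. A minimal sketch, assuming 224 x 224 slices, a two-class head, and EfficientNetB0 from Keras Applications as a stand-in backbone (EfficientNetB4 would be used the same way); the preprocessing and synthetic volume are assumptions, not the study's pipeline.

```python
# RGB superposition of three consecutive CT slices fed to a 2D CNN.
import numpy as np
import tensorflow as tf

def rgb_superposition(volume: np.ndarray, center: int) -> np.ndarray:
    """Stack slices center-1, center, center+1 of a (slices, H, W) volume
    into a single (H, W, 3) image."""
    return np.stack([volume[center - 1], volume[center], volume[center + 1]],
                    axis=-1)

# Synthetic volume standing in for a windowed, normalized CT series.
volume = np.random.rand(20, 224, 224).astype("float32")
rgb_image = rgb_superposition(volume, center=10)

base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights=None, input_shape=(224, 224, 3), pooling="avg")
outputs = tf.keras.layers.Dense(2, activation="softmax")(base.output)
model = tf.keras.Model(base.input, outputs)

# Keras' EfficientNet applies its own input rescaling, so 0-255 pixel values
# are expected; the class labels (appendicitis vs. diverticulitis) are assumed.
probs = model.predict(rgb_image[np.newaxis] * 255.0)
print(probs.shape)  # (1, 2)
```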
Collapse
Affiliation(s)
- Gi Pyo Lee
- Department of Health Sciences and Technology, Gachon Advanced Institute for Health Sciences and Technology (GAIHST), Gachon University, Incheon, Republic of Korea
| | - So Hyun Park
- Department of Radiology, Gil Medical Center, Gachon University College of Medicine, Incheon, Republic of Korea
| | - Young Jae Kim
- Department of Biomedical Engineering, College of IT Convergence, Gachon University, Gyeonggi-do, Republic of Korea
| | - Jun-Won Chung
- Division of Gastroenterology, Department of Internal Medicine, Gil Medical Center, Gachon University College of Medicine, Incheon, Republic of Korea
| | - Kwang Gi Kim
- Department of Health Sciences and Technology, Gachon Advanced Institute for Health Sciences and Technology (GAIHST), Gachon University, Incheon, Republic of Korea
- Department of Biomedical Engineering Medical Center, College of Medicine, Gachon University, Incheon, Republic of Korea
| |
Collapse
|
26
|
Park SH, Kim YJ, Kim KG, Chung JW, Kim HC, Choi IY, You MW, Lee GP, Hwang JH. Comparison between single and serial computed tomography images in classification of acute appendicitis, acute right-sided diverticulitis, and normal appendix using EfficientNet. PLoS One 2023; 18:e0281498. [PMID: 37224137 PMCID: PMC10208462 DOI: 10.1371/journal.pone.0281498] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2022] [Accepted: 01/24/2023] [Indexed: 05/26/2023] Open
Abstract
This study aimed to develop a convolutional neural network (CNN) using the EfficientNet algorithm for the automated classification of acute appendicitis, acute diverticulitis, and normal appendix and to evaluate its diagnostic performance. We retrospectively enrolled 715 patients who underwent contrast-enhanced abdominopelvic computed tomography (CT). Of these, 246 patients had acute appendicitis, 254 had acute diverticulitis, and 215 had normal appendix. Training, validation, and test data were obtained from 4,078 CT images (1,959 acute appendicitis, 823 acute diverticulitis, and 1,296 normal appendix cases) using both single and serial (RGB [red, green, blue]) image methods. We augmented the training dataset to avoid training disturbances caused by unbalanced CT datasets. For classification of the normal appendix, the RGB serial image method showed a slightly higher sensitivity (89.66 vs. 87.89%; p = 0.244), accuracy (93.62% vs. 92.35%), and specificity (95.47% vs. 94.43%) than did the single image method. For the classification of acute diverticulitis, the RGB serial image method also yielded a slightly higher sensitivity (83.35 vs. 80.44%; p = 0.019), accuracy (93.48% vs. 92.15%), and specificity (96.04% vs. 95.12%) than the single image method. Moreover, the mean areas under the receiver operating characteristic curve (AUCs) were significantly higher for acute appendicitis (0.951 vs. 0.937; p < 0.0001), acute diverticulitis (0.972 vs. 0.963; p = 0.0025), and normal appendix (0.979 vs. 0.972; p = 0.0101) with the RGB serial image method than those obtained by the single method for each condition. Thus, acute appendicitis, acute diverticulitis, and normal appendix could be accurately distinguished on CT images by our model, particularly when using the RGB serial image method.
Collapse
Affiliation(s)
- So Hyun Park
- Department of Radiology, Gil Medical Center, Gachon University College of Medicine, Incheon, South Korea
| | - Young Jae Kim
- Department of Biomedical Engineering, Gachon University, Gil Medical Center, Incheon, South Korea
| | - Kwang Gi Kim
- Department of Biomedical Engineering, Gachon University, Gil Medical Center, Incheon, South Korea
| | - Jun-Won Chung
- Division of Gastroenterology, Department of Internal Medicine, Gil Medical Center, Gachon University College of Medicine, Incheon, South Korea
| | - Hyun Cheol Kim
- Department of Radiology, Kyung Hee University Hospital at Gangdong, Seoul, South Korea
| | - In Young Choi
- Department of Radiology, Korea University Ansan Hospital, Ansan, South Korea
| | - Myung-Won You
- Department of Radiology, Kyung Hee University Hospital, Seoul, South Korea
| | - Gi Pyo Lee
- Department of Health Sciences and Technology, Gachon Advanced Institute for Health Sciences and Technology (GAIHST), Gachon University, Incheon, South Korea
| | - Jung Han Hwang
- Department of Radiology, Gil Medical Center, Gachon University College of Medicine, Incheon, South Korea
| |
Collapse
|
27
|
Nazir S, Dickson DM, Akram MU. Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks. Comput Biol Med 2023; 156:106668. [PMID: 36863192 DOI: 10.1016/j.compbiomed.2023.106668] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2022] [Revised: 01/12/2023] [Accepted: 02/10/2023] [Indexed: 02/21/2023]
Abstract
Artificial Intelligence (AI) techniques based on deep learning have revolutionized disease diagnosis with their outstanding image classification performance. In spite of these results, the widespread adoption of such techniques in clinical practice is still taking place at a moderate pace. One of the major hindrances is that a trained Deep Neural Network (DNN) model provides a prediction, but questions about why and how that prediction was made remain unanswered. This link is of utmost importance for the regulated healthcare domain to increase trust in the automated diagnosis system among practitioners, patients, and other stakeholders. The application of deep learning to medical imaging has to be interpreted with caution due to health and safety concerns, similar to blame attribution in the case of an accident involving an autonomous car. The consequences of both false positive and false negative cases are far reaching for patients' welfare and cannot be ignored. This is exacerbated by the fact that state-of-the-art deep learning algorithms comprise complex interconnected structures and millions of parameters and have a 'black box' nature, offering little understanding of their inner workings, unlike traditional machine learning algorithms. Explainable AI (XAI) techniques help to understand model predictions, which helps develop trust in the system, accelerate disease diagnosis, and meet adherence to regulatory requirements. This survey provides a comprehensive review of the promising field of XAI for biomedical imaging diagnostics. We also provide a categorization of XAI techniques, discuss the open challenges, and provide future directions for XAI that would be of interest to clinicians, regulators and model developers.
Collapse
Affiliation(s)
- Sajid Nazir
- Department of Computing, Glasgow Caledonian University, Glasgow, UK.
| | - Diane M Dickson
- Department of Podiatry and Radiography, Research Centre for Health, Glasgow Caledonian University, Glasgow, UK
| | - Muhammad Usman Akram
- Computer and Software Engineering Department, National University of Sciences and Technology, Islamabad, Pakistan
| |
Collapse
|
28
|
Reis HC, Turk V, Khoshelham K, Kaya S. MediNet: transfer learning approach with MediNet medical visual database. MULTIMEDIA TOOLS AND APPLICATIONS 2023; 82:1-44. [PMID: 37362724 PMCID: PMC10025796 DOI: 10.1007/s11042-023-14831-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/04/2022] [Revised: 04/06/2022] [Accepted: 02/06/2023] [Indexed: 06/28/2023]
Abstract
The rapid development of machine learning has increased interest in the use of deep learning methods in medical research. Deep learning in the medical field is used for disease detection and classification problems in the clinical decision-making process. Large labeled datasets are often required to train deep neural networks; however, in the medical field, the lack of a sufficient number of images in datasets and the difficulties encountered during data collection are among the main problems. In this study, we propose MediNet, a new 10-class visual dataset consisting of Röntgen (X-ray), Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Ultrasound, and Histopathological images of classes such as calcaneal normal, calcaneal tumor, colon benign, colon adenocarcinoma, brain normal, brain tumor, breast benign, breast malignant, chest normal, and chest pneumonia. The AlexNet, VGG19-BN, Inception V3, DenseNet 121, ResNet 101, EfficientNet B0, Nested-LSTM + CNN, and proposed RdiNet deep learning algorithms are used for pre-training and classification with transfer learning. Transfer learning aims to apply previously learned knowledge to a new task. Seven algorithms were trained with the MediNet dataset, and the models obtained from these algorithms, namely feature vectors, were recorded. The pre-trained models were then used for classification studies on chest X-ray, diabetic retinopathy, and Covid-19 datasets with the transfer learning technique. In the performance measurements, the InceptionV3 model obtained an accuracy of 94.84% in the traditional classification study on the Chest X-Ray Images dataset, and the accuracy increased to 98.71% after the transfer learning technique was applied. On the Covid-19 dataset, the classification success of the DenseNet121 model before pre-training was 88%, while the performance after the transfer application with MediNet was 92%. On the Diabetic retinopathy dataset, the classification success of the Nested-LSTM + CNN model before pre-training was 79.35%, while it was 81.52% after the transfer application with MediNet. Comparison of the results obtained from the experimental studies shows that the proposed method produces more successful results.
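The transfer-learning step described above reuses a pretrained backbone and trains only a new classification head on the target task. A minimal Keras sketch follows; it assumes ImageNet weights as the source (the MediNet weights themselves are not bundled with Keras) and a two-class target head, both of which are illustrative assumptions.

```python
# Feature-extraction transfer learning with a frozen pretrained backbone.
import tensorflow as tf

base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze pretrained features; optionally fine-tune later

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.densenet.preprocess_input(inputs)
x = base(x, training=False)
outputs = tf.keras.layers.Dense(2, activation="softmax")(x)  # assumed 2-class target
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```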
Collapse
Affiliation(s)
- Hatice Catal Reis
- Department of Geomatics Engineering, Gumushane University, 2900 Gumushane, Turkey
| | - Veysel Turk
- Department of Computer Engineering, University of Harran, Sanliurfa, Turkey
| | - Kourosh Khoshelham
- Department of Infrastructure Engineering, The University of Melbourne, Parkville, 3052 Australia
| | - Serhat Kaya
- Department of Mining Engineering, Dicle University, Diyarbakir, Turkey
| |
Collapse
|
29
|
Atasever S, Azginoglu N, Terzi DS, Terzi R. A comprehensive survey of deep learning research on medical image analysis with focus on transfer learning. Clin Imaging 2023; 94:18-41. [PMID: 36462229 DOI: 10.1016/j.clinimag.2022.11.003] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Revised: 10/17/2022] [Accepted: 11/01/2022] [Indexed: 11/13/2022]
Abstract
This survey aims to identify commonly used methods, datasets, future trends, knowledge gaps, constraints, and limitations in the field to provide an overview of current solutions used in medical image analysis in parallel with the rapid developments in transfer learning (TL). Unlike previous studies, this survey groups studies from the last five years (January 2017 to February 2021) by anatomical region and details the modality, medical task, TL method, source data, target data, and public or private datasets used in medical imaging. It also provides readers with detailed information on technical challenges, opportunities, and future research trends. In this way, an overview of recent developments is provided to help researchers select the most effective and efficient methods, access widely used and publicly available medical datasets, and identify research gaps and limitations of the available literature.
Collapse
Affiliation(s)
- Sema Atasever
- Computer Engineering Department, Nevsehir Hacı Bektas Veli University, Nevsehir, Turkey.
| | - Nuh Azginoglu
- Computer Engineering Department, Kayseri University, Kayseri, Turkey.
| | | | - Ramazan Terzi
- Computer Engineering Department, Amasya University, Amasya, Turkey.
| |
Collapse
|
30
|
Ni C, Feng B, Yao J, Zhou X, Shen J, Ou D, Peng C, Xu D. Value of deep learning models based on ultrasonic dynamic videos for distinguishing thyroid nodules. Front Oncol 2023; 12:1066508. [PMID: 36733368 PMCID: PMC9887311 DOI: 10.3389/fonc.2022.1066508] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2022] [Accepted: 12/28/2022] [Indexed: 01/18/2023] Open
Abstract
Objective This study was designed to distinguish benign and malignant thyroid nodules by using deep learning (DL) models based on ultrasound dynamic videos. Methods Ultrasound dynamic videos of 1018 thyroid nodules were retrospectively collected from 657 patients in Zhejiang Cancer Hospital from January 2020 to December 2020 and used to test 5 DL models. Results In the internal test set, the area under the receiver operating characteristic curve (AUROC) was 0.929 (95% CI: 0.888, 0.970) for the best-performing model, LSTM. Two radiologists interpreted the dynamic videos with AUROC values of 0.760 (95% CI: 0.653, 0.867) and 0.815 (95% CI: 0.778, 0.853). In the external test set, the best-performing DL model had an AUROC of 0.896 (95% CI: 0.847, 0.945), and the two ultrasound radiologists had AUROC values of 0.754 (95% CI: 0.649, 0.850) and 0.833 (95% CI: 0.797, 0.869). Conclusion This study demonstrates that the DL model based on ultrasound dynamic videos performs better than the ultrasound radiologists in distinguishing thyroid nodules.
Collapse
Affiliation(s)
- Chen Ni
- The Second Clinical School of Zhejiang Chinese Medical University, Hangzhou, China
| | - Bojian Feng
- Key Laboratory of Head and Neck Cancer Translational Research of Zhejiang Province, Hangzhou, China
| | - Jincao Yao
- Department of Ultrasonography, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer, Chinese Academy of Sciences, Hangzhou, Zhejiang, China
| | - Xueqin Zhou
- Clinical Research Department, Esaote (Shenzhen) Medical Equipment Co., Ltd., Xinyilingyu Research Center, Shenzhen, China
| | - Jiafei Shen
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China; Key Laboratory of Head and Neck Cancer Translational Research of Zhejiang Province, Hangzhou, China
| | - Di Ou
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China; Key Laboratory of Head and Neck Cancer Translational Research of Zhejiang Province, Hangzhou, China
| | - Chanjuan Peng
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China; Key Laboratory of Head and Neck Cancer Translational Research of Zhejiang Province, Hangzhou, China
| | - Dong Xu
- Key Laboratory of Head and Neck Cancer Translational Research of Zhejiang Province, Hangzhou, China; Department of Ultrasonography, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer, Chinese Academy of Sciences, Hangzhou, Zhejiang, China
| |
Collapse
|
31
|
Zimmer VA, Gomez A, Skelton E, Wright R, Wheeler G, Deng S, Ghavami N, Lloyd K, Matthew J, Kainz B, Rueckert D, Hajnal JV, Schnabel JA. Placenta segmentation in ultrasound imaging: Addressing sources of uncertainty and limited field-of-view. Med Image Anal 2023; 83:102639. [PMID: 36257132 PMCID: PMC7614009 DOI: 10.1016/j.media.2022.102639] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2021] [Revised: 03/09/2022] [Accepted: 09/15/2022] [Indexed: 02/04/2023]
Abstract
Automatic segmentation of the placenta in fetal ultrasound (US) is challenging due to the (i) high diversity of placenta appearance, (ii) the restricted quality in US resulting in highly variable reference annotations, and (iii) the limited field-of-view of US prohibiting whole placenta assessment at late gestation. In this work, we address these three challenges with a multi-task learning approach that combines the classification of placental location (e.g., anterior, posterior) and semantic placenta segmentation in a single convolutional neural network. Through the classification task the model can learn from larger and more diverse datasets while improving the accuracy of the segmentation task in particular in limited training set conditions. With this approach we investigate the variability in annotations from multiple raters and show that our automatic segmentations (Dice of 0.86 for anterior and 0.83 for posterior placentas) achieve human-level performance as compared to intra- and inter-observer variability. Lastly, our approach can deliver whole placenta segmentation using a multi-view US acquisition pipeline consisting of three stages: multi-probe image acquisition, image fusion and image segmentation. This results in high quality segmentation of larger structures such as the placenta in US with reduced image artifacts which are beyond the field-of-view of single probes.
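The core of the multi-task approach above is a single network whose shared encoder feeds both a placental-location classification head and a pixel-wise segmentation head, so the (larger) classification data can regularize the segmentation. The Keras sketch below illustrates that structure only; input size, layer widths, and the two location classes are illustrative assumptions rather than the authors' architecture.

```python
# Minimal multi-task sketch: shared encoder, classification + segmentation heads.
import tensorflow as tf
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(128, 128, 1))                        # 2D US frame (assumed size)
e1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
p1 = layers.MaxPooling2D()(e1)
e2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)

# Classification head: placental location (e.g., anterior vs. posterior).
cls = layers.GlobalAveragePooling2D()(e2)
location = layers.Dense(2, activation="softmax", name="location")(cls)

# Segmentation head: upsample back to input resolution for a per-pixel mask.
d1 = layers.UpSampling2D()(e2)
d1 = layers.Concatenate()([d1, e1])
d1 = layers.Conv2D(16, 3, padding="same", activation="relu")(d1)
mask = layers.Conv2D(1, 1, activation="sigmoid", name="mask")(d1)

model = models.Model(inputs, [location, mask])
model.compile(optimizer="adam",
              loss={"location": "sparse_categorical_crossentropy",
                    "mask": "binary_crossentropy"})
model.summary()
```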
Collapse
Affiliation(s)
- Veronika A Zimmer
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom; Faculty of Informatics, Technical University of Munich, Germany.
| | - Alberto Gomez
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
| | - Emily Skelton
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom; School of Health Sciences, City, University of London, London, United Kingdom
| | - Robert Wright
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
| | - Gavin Wheeler
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
| | - Shujie Deng
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
| | - Nooshin Ghavami
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
| | - Karen Lloyd
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
| | - Jacqueline Matthew
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
| | - Bernhard Kainz
- BioMedIA group, Imperial College London, London, United Kingdom; FAU Erlangen-Nürnberg, Germany
| | - Daniel Rueckert
- Faculty of Informatics, Technical University of Munich, Germany; BioMedIA group, Imperial College London, London, United Kingdom
| | - Joseph V Hajnal
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
| | - Julia A Schnabel
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom; Faculty of Informatics, Technical University of Munich, Germany; Helmholtz Center Munich, Germany
| |
Collapse
|
32
|
Ding W, Abdel-Basset M, Hawash H, Ali AM. Explainability of artificial intelligence methods, applications and challenges: A comprehensive survey. Inf Sci (N Y) 2022. [DOI: 10.1016/j.ins.2022.10.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
33
|
Benchmarking saliency methods for chest X-ray interpretation. NAT MACH INTELL 2022. [DOI: 10.1038/s42256-022-00536-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Saliency methods, which produce heat maps that highlight the areas of the medical image that influence model prediction, are often presented to clinicians as an aid in diagnostic decision-making. However, rigorous investigation of the accuracy and reliability of these strategies is necessary before they are integrated into the clinical setting. In this work, we quantitatively evaluate seven saliency methods, including Grad-CAM, across multiple neural network architectures using two evaluation metrics. We establish the first human benchmark for chest X-ray segmentation in a multilabel classification set-up, and examine under what clinical conditions saliency maps might be more prone to failure in localizing important pathologies compared with a human expert benchmark. We find that (1) while Grad-CAM generally localized pathologies better than the other evaluated saliency methods, all seven performed significantly worse compared with the human benchmark, (2) the gap in localization performance between Grad-CAM and the human benchmark was largest for pathologies that were smaller in size and had shapes that were more complex, and (3) model confidence was positively correlated with Grad-CAM localization performance. Our work demonstrates that several important limitations of saliency methods must be addressed before we can rely on them for deep learning explainability in medical imaging.
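Grad-CAM, the best-performing saliency method in this benchmark, weights the last convolutional feature maps by the spatial average of the class-score gradients and keeps only the positive evidence. A minimal sketch follows, using MobileNetV2 (random weights) purely as a stand-in classifier; the model, layer name, and input are assumptions for illustration, not the benchmark's setup.

```python
# Minimal Grad-CAM sketch for a Keras CNN classifier.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights=None)  # stand-in classifier
last_conv = "Conv_1"                                      # last conv layer in MobileNetV2

def grad_cam(img_batch, class_index):
    grad_model = tf.keras.Model(
        model.input, [model.get_layer(last_conv).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(img_batch)
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)            # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))      # global-average-pool the gradients
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)
    cam = tf.nn.relu(cam)                             # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

heatmap = grad_cam(np.random.rand(1, 224, 224, 3).astype("float32"), class_index=0)
print(heatmap.shape)  # (1, 7, 7); upsample to image size for an overlay
```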
Collapse
|
34
|
Mastrodicasa D, Codari M, Bäumler K, Sandfort V, Shen J, Mistelbauer G, Hahn LD, Turner VL, Desjardins B, Willemink MJ, Fleischmann D. Artificial Intelligence Applications in Aortic Dissection Imaging. Semin Roentgenol 2022; 57:357-363. [PMID: 36265987 PMCID: PMC10013132 DOI: 10.1053/j.ro.2022.07.001] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2022] [Revised: 06/25/2022] [Accepted: 07/02/2022] [Indexed: 11/11/2022]
Affiliation(s)
- Domenico Mastrodicasa
- Department of Radiology, Stanford University School of Medicine, Stanford, CA; Stanford Cardiovascular Institute, Stanford University School of Medicine, Stanford, CA.
| | - Marina Codari
- Department of Radiology, Stanford University School of Medicine, Stanford, CA
| | - Kathrin Bäumler
- Department of Radiology, Stanford University School of Medicine, Stanford, CA
| | - Veit Sandfort
- Department of Radiology, Stanford University School of Medicine, Stanford, CA
| | - Jody Shen
- Department of Radiology, Stanford University School of Medicine, Stanford, CA
| | - Gabriel Mistelbauer
- Department of Radiology, Stanford University School of Medicine, Stanford, CA
| | - Lewis D Hahn
- University of California San Diego, Department of Radiology, La Jolla, CA
| | - Valery L Turner
- Department of Radiology, Stanford University School of Medicine, Stanford, CA
| | - Benoit Desjardins
- Department of Radiology, Stanford University School of Medicine, Stanford, CA; Department of Radiology, University of Pennsylvania, Philadelphia, PA
| | - Martin J Willemink
- Department of Radiology, Stanford University School of Medicine, Stanford, CA
| | - Dominik Fleischmann
- Department of Radiology, Stanford University School of Medicine, Stanford, CA
| |
Collapse
|
35
|
van der Velden BH, Kuijf HJ, Gilhuijs KG, Viergever MA. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Med Image Anal 2022; 79:102470. [DOI: 10.1016/j.media.2022.102470] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2021] [Revised: 03/15/2022] [Accepted: 05/02/2022] [Indexed: 12/11/2022]
|
36
|
Malik N, Bzdok D. From YouTube to the brain: Transfer learning can improve brain-imaging predictions with deep learning. Neural Netw 2022; 153:325-338. [PMID: 35777174 DOI: 10.1016/j.neunet.2022.06.014] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2021] [Revised: 04/20/2022] [Accepted: 06/09/2022] [Indexed: 12/01/2022]
Abstract
Deep learning has recently achieved best-in-class performance in several fields, including biomedical domains such as X-ray imaging. Yet data scarcity poses a strict limit on training successful deep learning systems in many, if not most, biomedical applications, including those involving brain images. In this study, we translate state-of-the-art transfer learning techniques to single-subject prediction of simpler (sex and age) and more complex phenotypes (number of people in the household, household income, fluid intelligence, and smoking behavior). We fine-tuned 2D and 3D ResNet-18 convolutional neural networks for target phenotype predictions from brain images of ∼40,000 UK Biobank participants, after pretraining on YouTube videos from the Kinetics dataset and natural images from the ImageNet dataset. Transfer learning was effective for several phenotypes, especially sex and age classification. Additionally, transfer learning outperformed deep learning models trained from scratch, particularly at smaller sample sizes. The out-of-sample performance achieved by transferring knowledge previously learned from real-world images and videos could unlock potential in many areas of imaging neuroscience where deep learning solutions are currently infeasible.
Collapse
Affiliation(s)
- Nahiyan Malik
- School of Computer Science, McGill University, Montreal, QC, Canada; Mila - Quebec Artificial Intelligence Institute, Montreal, QC, Canada.
| | - Danilo Bzdok
- School of Computer Science, McGill University, Montreal, QC, Canada; Mila - Quebec Artificial Intelligence Institute, Montreal, QC, Canada; McConnell Brain Imaging Centre, Montreal Neurological Institute (MNI), McGill University, Montreal, QC, Canada; Department of Biomedical Engineering, Faculty of Medicine, McGill University, Montreal, QC, Canada.
| |
Collapse
|
37
|
Talaat M, Si X, Liu X, Xi J. Count- and mass-based dosimetry of MDI spray droplets with polydisperse and monodisperse size distributions. Int J Pharm 2022; 623:121920. [PMID: 35714818 DOI: 10.1016/j.ijpharm.2022.121920] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2022] [Revised: 06/06/2022] [Accepted: 06/12/2022] [Indexed: 11/25/2022]
Abstract
Most previous numerical studies of inhalation drug delivery used monodisperse aerosols or quantified deposition as the ratio of deposited particle number over the total number of released particles (i.e., count-based). These practices are reasonable when the aerosols have a sufficiently narrow size range. However, spray droplets from metered-dose inhalers (MDIs) are often polydisperse with a wide size range, so using monodisperse aerosols and/or count-based deposition quantification may lead to significant errors. The objective of this study was to develop a mass-based dosimetry method and evaluate its performance in lung delivery in a mouth-lung (G9) geometry with an albuterol-CFC inhaler. The conventional practices (monodisperse and polydisperse-count-based) were also simulated for comparison purposes. The MDI actuation in the open space was studied using both high-speed imaging and LES-Lagrangian simulations. Experimentally measured spray velocities and size distribution were implemented in the computational model as boundary conditions. Good agreement was achieved between recorded and simulated spray plume evolution spatially and temporally. The polydisperse-mass-based predictions of MDI doses compared favorably with the measurements in all three regions considered (device, mouth-throat, and lung). Significant errors in MDI regional deposition were predicted using the monodisperse and count-based methods. The new polydisperse-mass-based method also predicted local deposition hot spots that were one order of magnitude higher in intensity than those predicted by the two conventional methods. The results of this study highlighted that a representative polydisperse size distribution and an appropriate deposition quantification method should be applied to reliably predict MDI drug delivery in the human respiratory tract.
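The gap between count-based and mass-based deposition fractions can be shown with a toy calculation: droplet mass scales with diameter cubed, so a minority of large deposited droplets can dominate the mass-based dose even when most droplets, by count, escape. The numbers below are synthetic and only illustrate the bookkeeping, not the study's results.

```python
# Count-based vs. mass-based deposition fractions for a polydisperse spray.
import numpy as np

rng = np.random.default_rng(1)
diameters_um = rng.lognormal(mean=1.0, sigma=0.6, size=10_000)  # polydisperse sizes
deposited = diameters_um > np.quantile(diameters_um, 0.7)       # assume larger droplets deposit

masses = diameters_um ** 3  # mass proportional to d^3 at constant density

count_fraction = deposited.mean()
mass_fraction = masses[deposited].sum() / masses.sum()
print(f"count-based deposition fraction: {count_fraction:.2f}")
print(f"mass-based  deposition fraction: {mass_fraction:.2f}")
```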
Collapse
Affiliation(s)
- Mohamed Talaat
- Department of Biomedical Engineering, University of Massachusetts, 1 University Ave., Lowell, MA 01854, USA.
| | - Xiuhua Si
- Department of Aerospace, Industrial, and Mechanical Engineering, California Baptist University, 8432 Magnolia Ave, Riverside, CA 92504, USA.
| | - Xiaofei Liu
- US Food and Drug Administration, Division of Pharmaceutical Analysis, 1114 Market Street, St. Louis, MO 63101, USA
| | - Jinxiang Xi
- Department of Biomedical Engineering, University of Massachusetts, 1 University Ave., Lowell, MA 01854, USA.
| |
Collapse
|
38
|
Deng A, Qian N, Hua S, Wan J, Lv Z, Zou W. High-resolution ISAR imaging based on photonic receiving for high-accuracy automatic target recognition. OPTICS EXPRESS 2022; 30:20580-20588. [PMID: 36224799 DOI: 10.1364/oe.457443] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Accepted: 05/14/2022] [Indexed: 06/16/2023]
Abstract
A scheme of high-resolution inverse synthetic aperture radar (ISAR) imaging based on photonic receiving is demonstrated. In the scheme, the linear frequency modulated (LFM) pulse echoes with 8 GHz bandwidth at a center frequency of 36 GHz are directly sampled with the photonic analog-to-digital converter (PADC). The ISAR images of complex targets can be constructed without detection range swath limitation due to the fidelity of the sampled results. The images of two pyramids demonstrate that the two-dimensional (2D) resolution is 3.3 cm × 1.9 cm. Furthermore, automatic target recognition (ATR) is performed on the high-resolution experimental dataset with the assistance of deep learning. Despite the small training dataset containing only 50 samples for each model, the ATR accuracy for three complex targets is still validated to be 95% on a test dataset with an equal number of samples.
Collapse
|
39
|
Lachance A, Godbout M, Antaki F, Hébert M, Bourgault S, Caissie M, Tourville É, Durand A, Dirani A. Predicting Visual Improvement After Macular Hole Surgery: A Combined Model Using Deep Learning and Clinical Features. Transl Vis Sci Technol 2022; 11:6. [PMID: 35385045 PMCID: PMC8994199 DOI: 10.1167/tvst.11.4.6] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/02/2022] Open
Abstract
Purpose The purpose of this study was to assess the feasibility of deep learning (DL) methods to enhance the prediction of visual acuity (VA) improvement after macular hole (MH) surgery using a combined model of DL on high-definition optical coherence tomography (HD-OCT) B-scans and clinical features. Methods We trained a DL convolutional neural network (CNN) using pre-operative HD-OCT B-scans of the macula and combined it with a logistic regression model of pre-operative clinical features to predict VA increase ≥15 Early Treatment Diabetic Retinopathy Study (ETDRS) letters at 6 months post-vitrectomy in closed MHs. A total of 121 MHs with 242 HD-OCT B-scans and 484 clinical data points were used to train, validate, and test the model. Prediction of VA increase was evaluated using the area under the receiver operating characteristic curve (AUROC) and F1 scores. We also extracted the weight of each input feature in the hybrid model. Results All performances are reported on the held-out test set, matching results obtained with cross-validation. Using a regression on clinical features, the AUROC was 80.6, with an F1 score of 79.7. For the CNN, relying solely on the HD-OCT B-scans, the AUROC was 72.8 ± 14.6, with an F1 score of 61.5 ± 23.7. For our hybrid regression model using clinical features and the CNN prediction, the AUROC was 81.9 ± 5.2, with an F1 score of 80.4 ± 7.7. In the hybrid model, the baseline VA was the most important feature (weight = 59.1 ± 6.9%), while the weight of the HD-OCT prediction was 9.6 ± 4.2%. Conclusions Both the clinical data and HD-OCT models can predict postoperative VA improvement in patients undergoing vitrectomy for a MH with good discriminative performance. Combining them into a hybrid model did not significantly improve performance. Translational Relevance OCT-based DL models can predict postoperative VA improvement following vitrectomy for MH, but fusing those models with clinical data might not provide improved predictive performance.
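The hybrid late-fusion idea above appends the CNN's image-based probability to the clinical features and refits a logistic regression on the combined vector. A minimal sketch on synthetic data follows; the feature names loosely follow the abstract, and the values, sizes, and CNN probabilities are assumptions, not the study's data.

```python
# Late fusion of a CNN image probability with clinical features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 121
baseline_va = rng.normal(45, 15, n)            # pre-op VA in ETDRS letters (assumed scale)
clinical = np.column_stack([baseline_va, rng.normal(size=(n, 3))])
cnn_prob = rng.uniform(0, 1, n)                # probability output by the image CNN
y = (rng.uniform(0, 1, n) < 0.5).astype(int)   # >=15-letter gain (synthetic labels)

X_hybrid = np.column_stack([clinical, cnn_prob])
hybrid = LogisticRegression(max_iter=1000).fit(X_hybrid, y)
print("AUROC on training data (illustration only):",
      round(roc_auc_score(y, hybrid.predict_proba(X_hybrid)[:, 1]), 2))
```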
Collapse
Affiliation(s)
- Alexandre Lachance
- Faculté de Médecine, Université Laval, Québec, QC, Canada.,Département d'Ophtalmologie et d'oto-Rhino-Laryngologie - Chirurgie Cervico-Faciale, Centre Universitaire d'Ophtalmologie, Hôpital du Saint-Sacrement, CHU de Québec - Université Laval, Québec, QC, Canada
| | - Mathieu Godbout
- Département d'informatique et de Génie Logiciel, Université Laval, Québec, QC, Canada
| | - Fares Antaki
- Département d'ophtalmologie, Centre Hospitalier de l'Université de Montréal (CHUM), Montréal, Québec, QC, Canada
| | - Mélanie Hébert
- Faculté de Médecine, Université Laval, Québec, QC, Canada.,Département d'Ophtalmologie et d'oto-Rhino-Laryngologie - Chirurgie Cervico-Faciale, Centre Universitaire d'Ophtalmologie, Hôpital du Saint-Sacrement, CHU de Québec - Université Laval, Québec, QC, Canada
| | - Serge Bourgault
- Faculté de Médecine, Université Laval, Québec, QC, Canada.,Département d'Ophtalmologie et d'oto-Rhino-Laryngologie - Chirurgie Cervico-Faciale, Centre Universitaire d'Ophtalmologie, Hôpital du Saint-Sacrement, CHU de Québec - Université Laval, Québec, QC, Canada
| | - Mathieu Caissie
- Faculté de Médecine, Université Laval, Québec, QC, Canada.,Département d'Ophtalmologie et d'oto-Rhino-Laryngologie - Chirurgie Cervico-Faciale, Centre Universitaire d'Ophtalmologie, Hôpital du Saint-Sacrement, CHU de Québec - Université Laval, Québec, QC, Canada
| | - Éric Tourville
- Faculté de Médecine, Université Laval, Québec, QC, Canada.,Département d'Ophtalmologie et d'oto-Rhino-Laryngologie - Chirurgie Cervico-Faciale, Centre Universitaire d'Ophtalmologie, Hôpital du Saint-Sacrement, CHU de Québec - Université Laval, Québec, QC, Canada
| | - Audrey Durand
- Département d'informatique et de Génie Logiciel, Université Laval, Québec, QC, Canada.,Département de Génie Électrique et de Génie Informatique, Université Laval, Québec, QC, Canada
| | - Ali Dirani
- Faculté de Médecine, Université Laval, Québec, QC, Canada.,Département d'Ophtalmologie et d'oto-Rhino-Laryngologie - Chirurgie Cervico-Faciale, Centre Universitaire d'Ophtalmologie, Hôpital du Saint-Sacrement, CHU de Québec - Université Laval, Québec, QC, Canada
| |
Collapse
|
40
|
Boehm KM, Khosravi P, Vanguri R, Gao J, Shah SP. Harnessing multimodal data integration to advance precision oncology. Nat Rev Cancer 2022; 22:114-126. [PMID: 34663944 PMCID: PMC8810682 DOI: 10.1038/s41568-021-00408-3] [Citation(s) in RCA: 219] [Impact Index Per Article: 73.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 09/08/2021] [Indexed: 02/07/2023]
Abstract
Advances in quantitative biomarker development have accelerated new forms of data-driven insights for patients with cancer. However, most approaches are limited to a single mode of data, leaving integrated approaches across modalities relatively underdeveloped. Multimodal integration of advanced molecular diagnostics, radiological and histological imaging, and codified clinical data presents opportunities to advance precision oncology beyond genomics and standard molecular techniques. However, most medical datasets are still too sparse to be useful for the training of modern machine learning techniques, and significant challenges remain before this is remedied. Combined efforts of data engineering, computational methods for analysis of heterogeneous data and instantiation of synergistic data models in biomedical research are required for success. In this Perspective, we offer our opinions on synthesizing complementary modalities of data with emerging multimodal artificial intelligence methods. Advancing along this direction will result in a reimagined class of multimodal biomarkers to propel the field of precision oncology in the coming decade.
Collapse
Affiliation(s)
- Kevin M Boehm
- Computational Oncology, Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
| | - Pegah Khosravi
- Computational Oncology, Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
| | - Rami Vanguri
- Computational Oncology, Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
| | - Jianjiong Gao
- Computational Oncology, Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
| | - Sohrab P Shah
- Computational Oncology, Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA.
| |
Collapse
|
41
|
Abstract
Machine learning is an increasingly important technology for dealing with the growing complexity of the digitalised world. Despite the fact that we live in a 'big data' world where almost everything is digitally stored, there are many real-world situations where researchers are still faced with small data samples. The present bibliometric knowledge synthesis study aims to answer the research question 'What is the small data problem in machine learning and how is it solved?' The analysis revealed a positive trend in the number of research publications and substantial growth of the research community, indicating that the research field is reaching maturity. The most productive countries are China, the United States, and the United Kingdom. Despite notable international cooperation, a regional concentration of research literature production in economically more developed countries was observed. Thematic analysis identified four research themes, concerned with dimensionality reduction in complex big data analysis, data augmentation techniques in deep learning, data mining, and statistical learning on small datasets.
Collapse
Affiliation(s)
- Peter Kokol
- Faculty of Electrical Engineering and Computer Science, University of Maribor, Maribor, Slovenia
| | | | | |
Collapse
|
42
|
Yang G, Ye Q, Xia J. Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond. AN INTERNATIONAL JOURNAL ON INFORMATION FUSION 2022; 77:29-52. [PMID: 34980946 PMCID: PMC8459787 DOI: 10.1016/j.inffus.2021.07.016] [Citation(s) in RCA: 195] [Impact Index Per Article: 65.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 05/25/2021] [Accepted: 07/25/2021] [Indexed: 05/04/2023]
Abstract
Explainable Artificial Intelligence (XAI) is an emerging research topic in machine learning aimed at unboxing how AI systems' black-box choices are made. This research field inspects the measures and models involved in decision-making and seeks solutions to explain them explicitly. Many machine learning algorithms cannot manifest how and why a decision has been made; this is particularly true of the most popular deep neural network approaches currently in use. Consequently, our confidence in AI systems can be hindered by the lack of explainability in these black-box models. XAI is becoming increasingly crucial for deep-learning-powered applications, especially for medical and healthcare studies, even though deep neural networks can in general deliver impressive performance. The insufficient explainability and transparency of most existing AI systems is one of the major reasons why successful implementation and integration of AI tools into routine clinical practice remain uncommon. In this study, we first surveyed the current progress of XAI and in particular its advances in healthcare applications. We then introduced our solutions for XAI leveraging multi-modal and multi-centre data fusion and subsequently validated them in two showcases following real clinical scenarios. Comprehensive quantitative and qualitative analyses demonstrate the efficacy of our proposed XAI solutions, from which we envisage successful applications in a broader range of clinical questions.
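The paper's multi-modal showcases are not reproduced here, but a hedged sketch of one widely used post-hoc explanation technique, input-gradient saliency, conveys the kind of method such XAI work builds on (the backbone, input size, and class choice below are arbitrary stand-ins; recent torchvision is assumed):

```python
# Generic post-hoc saliency sketch, not the paper's specific multi-modal solution:
# highlight the input pixels whose perturbation most affects the top class score.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()    # any image classifier stands in here
image = torch.randn(1, 3, 224, 224, requires_grad=True)

score = model(image)[0].max()                   # score of the top predicted class
score.backward()                                # gradients with respect to the input pixels

saliency = image.grad.abs().max(dim=1)[0]       # per-pixel importance map, shape (1, 224, 224)
print(saliency.shape)
```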
Collapse
Affiliation(s)
- Guang Yang
- National Heart and Lung Institute, Imperial College London, London, UK
- Royal Brompton Hospital, London, UK
- Imperial Institute of Advanced Technology, Hangzhou, China
| | - Qinghao Ye
- Hangzhou Ocean’s Smart Boya Co., Ltd, China
- University of California, San Diego, La Jolla, CA, USA
| | - Jun Xia
- Radiology Department, Shenzhen Second People’s Hospital, Shenzhen, China
| |
Collapse
|
43
|
Katsushika S, Kodera S, Nakamoto M, Ninomiya K, Kakuda N, Shinohara H, Matsuoka R, Ieki H, Uehara M, Higashikuni Y, Nakanishi K, Nakao T, Takeda N, Fujiu K, Daimon M, Ando J, Akazawa H, Morita H, Komuro I. Deep Learning Algorithm to Detect Cardiac Sarcoidosis From Echocardiographic Movies. Circ J 2021; 86:87-95. [PMID: 34176867 DOI: 10.1253/circj.cj-21-0265] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
BACKGROUND Because the early diagnosis of subclinical cardiac sarcoidosis (CS) remains difficult, we developed a deep learning algorithm to distinguish CS patients from healthy subjects using echocardiographic movies. METHODS AND RESULTS Among the patients who underwent echocardiography from January 2015 to December 2019, we chose 151 echocardiographic movies from 50 CS patients and 151 from 149 healthy subjects. We trained two 3D convolutional neural networks (3D-CNNs) to identify CS patients using a dataset of 212 echocardiographic movies, with and without a transfer learning method (Pretrained algorithm and Non-pretrained algorithm). On an independent set of 41 echocardiographic movies, the area under the receiver-operating characteristic curve (AUC) of the Pretrained algorithm was numerically greater than that of the Non-pretrained algorithm (0.842, 95% confidence interval (CI): 0.722-0.962 vs. 0.724, 95% CI: 0.566-0.882, P=0.253). The AUC from the interpretation of the same set of 41 echocardiographic movies by 5 cardiologists was not significantly different from that of the Pretrained algorithm (0.855, 95% CI: 0.735-0.975 vs. 0.842, 95% CI: 0.722-0.962, P=0.885). A sensitivity map demonstrated that the Pretrained algorithm focused on the area of the mitral valve. CONCLUSIONS A 3D-CNN with a transfer learning method may be a promising tool for detecting CS from echocardiographic movies.
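The transfer-learning step described above amounts to starting from a video-pretrained 3D CNN and swapping its classification head for a two-class output. A hedged sketch of that idea (the backbone, pretraining source, and clip size below are assumptions for illustration, not the study's exact configuration) is:

```python
# Transfer-learning illustration, not the study's exact model: adapt a 3D CNN
# pretrained on a generic video dataset to binary CS-vs-healthy classification.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

model = r3d_18(weights=R3D_18_Weights.DEFAULT)   # downloads Kinetics-400 weights as a stand-in
model.fc = nn.Linear(model.fc.in_features, 2)    # new head: CS vs. healthy

# A clip tensor is (batch, channels, frames, height, width); sizes are placeholders.
clip = torch.randn(2, 3, 16, 112, 112)
print(model(clip).shape)   # torch.Size([2, 2])
```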
Collapse
Affiliation(s)
- Susumu Katsushika
- Department of Cardiovascular Medicine, The University of Tokyo Hospital
| | - Satoshi Kodera
- Department of Cardiovascular Medicine, The University of Tokyo Hospital
| | | | - Kota Ninomiya
- Department of Cardiovascular Medicine, The University of Tokyo Hospital
| | - Nobutaka Kakuda
- Department of Cardiovascular Medicine, The University of Tokyo Hospital
| | - Hiroki Shinohara
- Department of Cardiovascular Medicine, The University of Tokyo Hospital
| | - Ryo Matsuoka
- Department of Cardiovascular Medicine, The University of Tokyo Hospital
| | - Hirotaka Ieki
- Department of Cardiovascular Medicine, The University of Tokyo Hospital
| | - Masae Uehara
- Department of Cardiovascular Medicine, The University of Tokyo Hospital
| | | | - Koki Nakanishi
- Department of Cardiovascular Medicine, The University of Tokyo Hospital
| | - Tomoko Nakao
- Department of Cardiovascular Medicine, The University of Tokyo Hospital
- Department of Clinical Laboratory, The University of Tokyo Hospital
| | - Norifumi Takeda
- Department of Cardiovascular Medicine, The University of Tokyo Hospital
| | - Katsuhito Fujiu
- Department of Cardiovascular Medicine, The University of Tokyo Hospital
- Department of Advanced Cardiology, The University of Tokyo
| | - Masao Daimon
- Department of Cardiovascular Medicine, The University of Tokyo Hospital
- Department of Clinical Laboratory, The University of Tokyo Hospital
| | - Jiro Ando
- Department of Cardiovascular Medicine, The University of Tokyo Hospital
| | - Hiroshi Akazawa
- Department of Cardiovascular Medicine, The University of Tokyo Hospital
| | - Hiroyuki Morita
- Department of Cardiovascular Medicine, The University of Tokyo Hospital
| | - Issei Komuro
- Department of Cardiovascular Medicine, The University of Tokyo Hospital
| |
Collapse
|
44
|
Decuyper M, Maebe J, Van Holen R, Vandenberghe S. Artificial intelligence with deep learning in nuclear medicine and radiology. EJNMMI Phys 2021; 8:81. [PMID: 34897550 PMCID: PMC8665861 DOI: 10.1186/s40658-021-00426-y] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2021] [Accepted: 11/19/2021] [Indexed: 12/19/2022] Open
Abstract
The use of deep learning in medical imaging has increased rapidly over the past few years, finding applications throughout the entire radiology pipeline, from improved scanner performance to automatic disease detection and diagnosis. These advancements have resulted in a wide variety of deep learning approaches being developed, solving unique challenges for various imaging modalities. This paper provides a review on these developments from a technical point of view, categorizing the different methodologies and summarizing their implementation. We provide an introduction to the design of neural networks and their training procedure, after which we take an extended look at their uses in medical imaging. We cover the different sections of the radiology pipeline, highlighting some influential works and discussing the merits and limitations of deep learning approaches compared to other traditional methods. As such, this review is intended to provide a broad yet concise overview for the interested reader, facilitating adoption and interdisciplinary research of deep learning in the field of medical imaging.
Collapse
Affiliation(s)
- Milan Decuyper
- Department of Electronics and Information Systems, Ghent University, Ghent, Belgium
| | - Jens Maebe
- Department of Electronics and Information Systems, Ghent University, Ghent, Belgium
| | - Roel Van Holen
- Department of Electronics and Information Systems, Ghent University, Ghent, Belgium
| | - Stefaan Vandenberghe
- Department of Electronics and Information Systems, Ghent University, Ghent, Belgium
| |
Collapse
|
45
|
Designing clinically translatable artificial intelligence systems for high-dimensional medical imaging. NAT MACH INTELL 2021. [DOI: 10.1038/s42256-021-00399-8] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
|
46
|
Shad R, Quach N, Fong R, Kasinpila P, Bowles C, Castro M, Guha A, Suarez EE, Jovinge S, Lee S, Boeve T, Amsallem M, Tang X, Haddad F, Shudo Y, Woo YJ, Teuteberg J, Cunningham JP, Langlotz CP, Hiesinger W. Predicting post-operative right ventricular failure using video-based deep learning. Nat Commun 2021; 12:5192. [PMID: 34465780 PMCID: PMC8408163 DOI: 10.1038/s41467-021-25503-9] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2021] [Accepted: 08/11/2021] [Indexed: 11/22/2022] Open
Abstract
Despite progressive improvements over the decades, the rich temporally resolved data in an echocardiogram remain underutilized. Human assessments reduce the complex patterns of cardiac wall motion, to a small list of measurements of heart function. All modern echocardiography artificial intelligence (AI) systems are similarly limited by design - automating measurements of the same reductionist metrics rather than utilizing the embedded wealth of data. This underutilization is most evident where clinical decision making is guided by subjective assessments of disease acuity. Predicting the likelihood of developing post-operative right ventricular failure (RV failure) in the setting of mechanical circulatory support is one such example. Here we describe a video AI system trained to predict post-operative RV failure using the full spatiotemporal density of information in pre-operative echocardiography. We achieve an AUC of 0.729, and show that this ML system significantly outperforms a team of human experts at the same task on independent evaluation.
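The head-to-head comparison above reduces to computing AUCs for model scores and for expert calls on the same held-out studies. A toy sketch with entirely synthetic numbers (not the study's data; the cohort size and score distributions are invented) shows the mechanics:

```python
# Toy evaluation sketch with synthetic data: compare a model's AUC on held-out
# echocardiograms with the AUC implied by binary calls from human readers.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=200)                       # 1 = post-operative RV failure
model_scores = np.clip(y_true * 0.3 + rng.uniform(size=200) * 0.7, 0, 1)
expert_calls = (rng.uniform(size=200) < 0.5).astype(int)    # stand-in expert reads

print("model AUC :", roc_auc_score(y_true, model_scores))
print("expert AUC:", roc_auc_score(y_true, expert_calls))   # binary reads yield a single operating point
```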
Collapse
Affiliation(s)
- Rohan Shad
- Department of Cardiothoracic Surgery, Stanford University, Stanford, CA, USA
| | - Nicolas Quach
- Department of Cardiothoracic Surgery, Stanford University, Stanford, CA, USA
| | - Robyn Fong
- Department of Cardiothoracic Surgery, Stanford University, Stanford, CA, USA
| | - Patpilai Kasinpila
- Department of Cardiothoracic Surgery, Stanford University, Stanford, CA, USA
| | - Cayley Bowles
- Department of Cardiothoracic Surgery, Stanford University, Stanford, CA, USA
| | - Miguel Castro
- Department of Cardiovascular Medicine, Houston Methodist DeBakey Heart Centre, Houston, TX, USA
| | - Ashrith Guha
- Department of Cardiovascular Medicine, Houston Methodist DeBakey Heart Centre, Houston, TX, USA
| | - Erik E Suarez
- Department of Cardiothoracic Surgery, Houston Methodist DeBakey Heart Centre, Houston, TX, USA
| | - Stefan Jovinge
- Department of Cardiovascular Surgery, Spectrum Health Grand Rapids, Grand Rapids, MI, USA
| | - Sangjin Lee
- Department of Cardiovascular Surgery, Spectrum Health Grand Rapids, Grand Rapids, MI, USA
| | - Theodore Boeve
- Department of Cardiovascular Surgery, Spectrum Health Grand Rapids, Grand Rapids, MI, USA
| | - Myriam Amsallem
- Department of Cardiovascular Medicine, Stanford University, Stanford, CA, USA
| | - Xiu Tang
- Department of Cardiovascular Medicine, Stanford University, Stanford, CA, USA
| | - Francois Haddad
- Department of Cardiovascular Medicine, Stanford University, Stanford, CA, USA
| | - Yasuhiro Shudo
- Department of Cardiothoracic Surgery, Stanford University, Stanford, CA, USA
| | - Y Joseph Woo
- Department of Cardiothoracic Surgery, Stanford University, Stanford, CA, USA
| | - Jeffrey Teuteberg
- Department of Cardiovascular Medicine, Stanford University, Stanford, CA, USA
- Stanford Artificial Intelligence in Medicine Centre, Stanford, CA, USA
| | | | - Curtis P Langlotz
- Stanford Artificial Intelligence in Medicine Centre, Stanford, CA, USA
- Department of Radiology and Biomedical Informatics, Stanford University, Stanford, CA, USA
| | - William Hiesinger
- Department of Cardiothoracic Surgery, Stanford University, Stanford, CA, USA.
- Stanford Artificial Intelligence in Medicine Centre, Stanford, CA, USA.
| |
Collapse
|
47
|
Bourcier S, Klug J, Nguyen LS. Non-occlusive mesenteric ischemia: Diagnostic challenges and perspectives in the era of artificial intelligence. World J Gastroenterol 2021; 27:4088-4103. [PMID: 34326613 PMCID: PMC8311528 DOI: 10.3748/wjg.v27.i26.4088] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 03/25/2021] [Accepted: 06/18/2021] [Indexed: 02/06/2023] Open
Abstract
Acute mesenteric ischemia (AMI) is a severe condition associated with poor prognosis, ultimately leading to death due to multiorgan failure. Several mechanisms may lead to AMI, and non-occlusive mesenteric ischemia (NOMI) represents a particular form of AMI. NOMI is prevalent in intensive care units in critically ill patients. In NOMI management, promptness and accuracy of diagnosis are paramount to achieve decisive treatment, but the last decades have been marked by a failure to improve NOMI prognosis, owing to the lack of tools to detect this condition. While real-life diagnostic management relies on a combination of physical examination, several biomarkers, imaging, and endoscopy to detect the possibility of several grades of NOMI, research studies tend to focus on only a few of these elements at a time. In the era of artificial intelligence (AI), which can aggregate thousands of variables in complex longitudinal models, accurate diagnosis through machine-learning-based algorithms may become achievable. In this work, we provide a state-of-the-art literature review of NOMI, covering its presentation, its mechanisms, and the pitfalls of routine diagnostic work-up, including biomarkers, imaging, and endoscopy; we then outline the prospects of new biomarker assays and, after summarizing what AI encompasses, discuss what it may add to the field.
Collapse
Affiliation(s)
- Simon Bourcier
- Department of Intensive Care Medicine, University Hospital of Geneva, Geneva 1201, Switzerland
| | - Julian Klug
- Department of Internal Medicine, Groupement Hospitalier de l’Ouest Lémanique, Nyon 1260, Switzerland
| | - Lee S Nguyen
- Department of Intensive Care Medicine, CMC Ambroise Paré, Neuilly-sur-Seine 92200, France
| |
Collapse
|
48
|
Zimmermann L, Faustmann E, Ramsl C, Georg D, Heilemann G. Technical Note: Dose prediction for radiation therapy using feature-based losses and One Cycle Learning. Med Phys 2021; 48:5562-5566. [PMID: 34156727 PMCID: PMC8518421 DOI: 10.1002/mp.14774] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2020] [Revised: 01/14/2021] [Accepted: 01/29/2021] [Indexed: 12/13/2022] Open
Abstract
Purpose To present the technical details of the runner-up model in the open knowledge-based planning (OpenKBP) challenge for the dose–volume histogram (DVH) stream. The model was designed to ensure simple and reproducible training, without the need for costly advanced generative adversarial network (GAN) techniques. Methods The model was developed on the OpenKBP challenge dataset, consisting of 200 and 40 head-and-neck patients for training and validation, respectively. The final model is a U-Net with additional ResNet blocks between the up- and down-convolutions. The results were obtained by training the model with AdamW and the One Cycle scheduler. The loss function combines an L1 loss with a feature loss, which uses a pretrained video classifier as a feature extractor. Performance was evaluated on another 100 patients in the OpenKBP test dataset. The DVH metrics of the test data were evaluated, where D0.1cc and Dmean were calculated for the organs at risk (OARs) and D1%, D95%, and D99% were computed for the target structures. DVH metric differences between the predicted and true dose are reported as percentages. Results The model achieved 2nd and 4th place in the DVH and dose streams of the OpenKBP challenge, respectively. The dose and DVH scores were 2.62 ± 1.10 and 1.52 ± 1.06, respectively. Mean dose differences for the different structures and DVH parameters were within ±1%. Conclusion This straightforward approach produced excellent results. It incorporated One Cycle Learning, ResNet blocks, and feature-based losses, which are common computer vision techniques.
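The training recipe named above combines an L1 term with a feature ("perceptual") term from a frozen pretrained network, optimized with AdamW under a One Cycle schedule. The sketch below illustrates that recipe under stated assumptions: a 2D VGG slice stands in for the paper's pretrained video classifier, a single convolution stands in for the U-Net/ResNet model, and all sizes and hyperparameters are placeholders.

```python
# Hedged sketch of the loss/optimizer recipe, not the paper's implementation.
import torch
import torch.nn as nn
import torchvision.models as models

feat = models.vgg16(weights=None).features[:16].eval()   # frozen stand-in feature extractor
for p in feat.parameters():
    p.requires_grad_(False)

def combined_loss(pred, target, alpha=0.1):
    l1 = nn.functional.l1_loss(pred, target)
    # replicate the single dose channel to 3 channels for the image-based extractor
    f_pred = feat(pred.repeat(1, 3, 1, 1))
    f_true = feat(target.repeat(1, 3, 1, 1))
    return l1 + alpha * nn.functional.l1_loss(f_pred, f_true)

net = nn.Conv2d(1, 1, 3, padding=1)        # placeholder for the actual U-Net with ResNet blocks
opt = torch.optim.AdamW(net.parameters(), lr=1e-3)
sched = torch.optim.lr_scheduler.OneCycleLR(opt, max_lr=1e-3, total_steps=100)

pred = net(torch.randn(2, 1, 128, 128))    # toy "dose prediction" on random input
loss = combined_loss(pred, torch.rand(2, 1, 128, 128))
loss.backward(); opt.step(); sched.step()  # one training step
```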
Collapse
Affiliation(s)
- Lukas Zimmermann
- Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
| | | | | | - Dietmar Georg
- Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
| | - Gerd Heilemann
- Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
| |
Collapse
|
49
|
Zhou SK, Greenspan H, Davatzikos C, Duncan JS, van Ginneken B, Madabhushi A, Prince JL, Rueckert D, Summers RM. A review of deep learning in medical imaging: Imaging traits, technology trends, case studies with progress highlights, and future promises. PROCEEDINGS OF THE IEEE. INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS 2021; 109:820-838. [PMID: 37786449 PMCID: PMC10544772 DOI: 10.1109/jproc.2021.3054390] [Citation(s) in RCA: 274] [Impact Index Per Article: 68.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/04/2023]
Abstract
Since its renaissance, deep learning has been widely used in various medical imaging tasks and has achieved remarkable success in many medical imaging applications, thereby propelling us into the so-called artificial intelligence (AI) era. It is known that the success of AI is mostly attributed to the availability of big data with annotations for a single task and the advances in high-performance computing. However, medical imaging presents unique challenges that confront deep learning approaches. In this survey paper, we first present traits of medical imaging, highlight both clinical needs and technical challenges in medical imaging, and describe how emerging trends in deep learning are addressing these issues. We cover the topics of network architecture, sparse and noisy labels, federated learning, interpretability, uncertainty quantification, etc. Then, we present several case studies that are commonly found in clinical practice, including digital pathology and chest, brain, cardiovascular, and abdominal imaging. Rather than presenting an exhaustive literature survey, we instead describe some prominent research highlights related to these case study applications. We conclude with a discussion and presentation of promising future directions.
Collapse
Affiliation(s)
- S Kevin Zhou
- School of Biomedical Engineering, University of Science and Technology of China and Institute of Computing Technology, Chinese Academy of Sciences
| | - Hayit Greenspan
- Biomedical Engineering Department, Tel-Aviv University, Israel
| | - Christos Davatzikos
- Radiology Department and Electrical and Systems Engineering Department, University of Pennsylvania, USA
| | - James S Duncan
- Departments of Biomedical Engineering and Radiology & Biomedical Imaging, Yale University
| | | | - Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University and Louis Stokes Cleveland Veterans Administration Medical Center, USA
| | - Jerry L Prince
- Electrical and Computer Engineering Department, Johns Hopkins University, USA
| | - Daniel Rueckert
- Klinikum rechts der Isar, TU Munich, Germany and Department of Computing, Imperial College, UK
| | | |
Collapse
|
50
|
Albahli S, Rauf HT, Algosaibi A, Balas VE. AI-driven deep CNN approach for multi-label pathology classification using chest X-Rays. PeerJ Comput Sci 2021; 7:e495. [PMID: 33977135 PMCID: PMC8064140 DOI: 10.7717/peerj-cs.495] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2021] [Accepted: 03/27/2021] [Indexed: 02/05/2023]
Abstract
Artificial intelligence (AI) has played a significant role in image analysis and feature extraction, applied to detect and diagnose a wide range of chest-related diseases. Although several researchers have used current state-of-the-art approaches and have produced impressive chest-related clinical outcomes, techniques that detect only one type of disease while leaving the rest unidentified offer limited advantages. Previous attempts to identify multiple chest-related diseases have been hampered by insufficient and imbalanced data. This research provides a significant contribution to the healthcare industry and the research community by proposing synthetic data augmentation with three deep Convolutional Neural Network (CNN) architectures for the detection of 14 chest-related diseases. The employed models are DenseNet121, InceptionResNetV2, and ResNet152V2; after training and validation, an average ROC-AUC score of 0.80 was obtained, which is competitive with previous models trained for multi-class classification to detect anomalies in X-ray images. This research illustrates how the proposed approach applies state-of-the-art deep neural networks to classify 14 chest-related diseases with improved accuracy.
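Detecting 14 findings per radiograph is a multi-label problem: each disease gets an independent sigmoid output rather than a single softmax over classes. A minimal sketch in that spirit (not the authors' code; the DenseNet121 backbone is the only element taken from the abstract, and image size, batch size, and weights are placeholders) is:

```python
# Minimal multi-label setup: DenseNet121 with a 14-way head and BCEWithLogitsLoss,
# so each chest finding is predicted independently of the others.
import torch
import torch.nn as nn
import torchvision.models as models

NUM_FINDINGS = 14
model = models.densenet121(weights=None)                        # pretrained weights optional
model.classifier = nn.Linear(model.classifier.in_features, NUM_FINDINGS)

criterion = nn.BCEWithLogitsLoss()                              # independent per-label loss
images = torch.randn(4, 3, 224, 224)                            # stand-in chest X-ray batch
labels = torch.randint(0, 2, (4, NUM_FINDINGS)).float()         # multi-hot targets

logits = model(images)
loss = criterion(logits, labels)
probs = torch.sigmoid(logits)                                   # per-finding probabilities
print(loss.item(), probs.shape)
```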
Collapse
Affiliation(s)
- Saleh Albahli
- Department of Information Technology, College of Computer Science, Qassim University, Buraydah, Saudi Arabia
| | - Hafiz Tayyab Rauf
- Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, stoke on Trent, United Kingdom
| | | | - Valentina Emilia Balas
- Department of Automation and Applied Informatics, Aurel Vlaicu University of Arad, Arad, Romania
| |
Collapse
|