1. Loaiza-Bonilla A, Thaker N, Chung C, Parikh RB, Stapleton S, Borkowski P. Driving Knowledge to Action: Building a Better Future With Artificial Intelligence-Enabled Multidisciplinary Oncology. Am Soc Clin Oncol Educ Book 2025; 45:e100048. [PMID: 40315375] [DOI: 10.1200/edbk-25-100048]
Abstract
Artificial intelligence (AI) is transforming multidisciplinary oncology at an unprecedented pace, redefining how clinicians detect, classify, and treat cancer. From earlier and more accurate diagnoses to personalized treatment planning, AI's impact is evident across radiology, pathology, radiation oncology, and medical oncology. By leveraging vast and diverse data (imaging, genomic, clinical, and real-world evidence), AI algorithms can uncover complex patterns, accelerate drug discovery, and help identify optimal treatment regimens for each patient. However, realizing the full potential of AI also requires addressing concerns regarding data quality, algorithmic bias, explainability, privacy, and regulatory oversight, especially in low- and middle-income countries (LMICs), where disparities in cancer care are particularly pronounced. This study provides a comprehensive overview of how AI is reshaping cancer care, reviews its benefits and challenges, and outlines ethical and policy implications in line with ASCO's 2025 theme, Driving Knowledge to Action. We offer concrete calls to action for clinicians, researchers, industry stakeholders, and policymakers to ensure that AI-driven, patient-centric oncology is accessible, equitable, and sustainable worldwide.
Affiliation(s)
- Arturo Loaiza-Bonilla
- St Luke's University Health Network, Bethlehem, PA
- Massive Bio, Inc, New York, NY
- Lewis Katz School of Medicine at Temple University, Philadelphia, PA
- Caroline Chung
- The University of Texas MD Anderson Cancer Center, Houston, TX
- Shawn Stapleton
- The University of Texas MD Anderson Cancer Center, Houston, TX
2. Mudavadkar GR, Deng M, Al-Heejawi SMA, Arora IH, Breggia A, Ahmad B, Christman R, Ryan ST, Amal S. Gastric Cancer Detection with Ensemble Learning on Digital Pathology: Use Case of Gastric Cancer on GasHisSDB Dataset. Diagnostics (Basel) 2024; 14:1746. [PMID: 39202233] [PMCID: PMC11354078] [DOI: 10.3390/diagnostics14161746]
Abstract
Gastric cancer has become a serious worldwide health concern, underscoring the importance of early diagnosis for improving patient outcomes. Although traditional histological image analysis is regarded as the clinical gold standard, it is manual and labour intensive, which has driven interest in computer-aided diagnostic tools to support pathologists. Deep learning (DL) in particular has emerged as a promising solution, but current DL models remain limited in their ability to extract the rich visual characteristics needed for correct categorization. To address this limitation, this study proposes ensemble models that combine several deep-learning architectures and aggregate their knowledge to improve classification performance, allowing more accurate and efficient gastric cancer detection. The proposed models were compared with other works based on the Gastric Histopathology Sub-Size Images Database (GasHisSDB), a publicly available gastric cancer dataset. The ensemble models achieved high detection accuracy across all sub-databases, with an average accuracy exceeding 99%; ResNet50, VGGNet, and ResNet34 performed better than EfficientNet and VitNet. For the 80 × 80-pixel sub-database, ResNet34 achieved approximately 93% accuracy, VGGNet 94%, and the ensemble model 99%. In the 120 × 120-pixel sub-database, the ensemble model reached 99%, VGGNet 97%, and ResNet50 approximately 97%. For the 160 × 160-pixel sub-database, the ensemble model again achieved 99%, VGGNet 98%, ResNet50 98%, and EfficientNet 92%, highlighting the ensemble model's superior performance across all resolutions.
Overall, the ensemble model consistently achieved 99% accuracy across the three patch-size categories. These findings show that ensemble models can extract critical characteristics from smaller patches while maintaining high performance, helping pathologists diagnose gastric cancer from histopathological images and supporting earlier identification and higher patient survival rates.
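The soft-voting idea behind such an ensemble can be sketched in a few lines: each base model emits class probabilities for a patch, and the ensemble averages them before taking the argmax. The probabilities and model names below are illustrative stand-ins, not outputs of the cited models.

```python
def soft_vote(prob_lists):
    """Average class-probability vectors produced by several models."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    return [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]

def ensemble_predict(per_model_probs):
    """Return (predicted class index, averaged probabilities)."""
    avg = soft_vote(per_model_probs)
    return max(range(len(avg)), key=avg.__getitem__), avg

# Two of three hypothetical base models lean toward "abnormal" (class 1):
probs = [[0.30, 0.70],   # e.g. a ResNet50-style model
         [0.45, 0.55],   # e.g. a VGGNet-style model
         [0.60, 0.40]]   # e.g. a ResNet34-style model
label, avg = ensemble_predict(probs)
```

Averaging probabilities (soft voting) generally beats majority voting on hard labels because it lets a confident model outweigh two marginal ones.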
Affiliation(s)
- Govind Rajesh Mudavadkar
- College of Engineering, Northeastern University, Boston, MA 02115, USA
- Mo Deng
- College of Engineering, Northeastern University, Boston, MA 02115, USA
- Isha Hemant Arora
- Khoury College of Computer Sciences, Northeastern University, Boston, MA 02115, USA
- Anne Breggia
- MaineHealth Institute for Research, Scarborough, ME 04074, USA
- Bilal Ahmad
- Maine Medical Center, Portland, ME 04102, USA
- Robert Christman
- Maine Medical Center, Portland, ME 04102, USA
- Stephen T. Ryan
- Maine Medical Center, Portland, ME 04102, USA
- Saeed Amal
- The Roux Institute, Department of Bioengineering, College of Engineering at Northeastern University, Boston, MA 02115, USA
3. Bordbar M, Helfroush MS, Danyali H, Ejtehadi F. Wireless capsule endoscopy multiclass classification using three-dimensional deep convolutional neural network model. Biomed Eng Online 2023; 22:124. [PMID: 38098015] [PMCID: PMC10722702] [DOI: 10.1186/s12938-023-01186-9]
Abstract
BACKGROUND Wireless capsule endoscopy (WCE) is a patient-friendly, non-invasive technology that scans the whole gastrointestinal tract, including difficult-to-access regions such as the small bowel. A major drawback of this technology is that visual inspection of the large number of video frames produced during each examination makes the physician's diagnostic process tedious and error-prone. Several computer-aided diagnosis (CAD) systems, such as deep network models, have been developed for the automatic recognition of abnormalities in WCE frames. Nevertheless, most of these studies have focused only on the spatial information within individual WCE frames, missing the crucial temporal information across consecutive frames. METHODS In this article, an automatic multiclass classification system based on a three-dimensional deep convolutional neural network (3D-CNN) is proposed, which exploits spatiotemporal information to facilitate the WCE diagnosis process. The 3D-CNN model is fed a series of sequential WCE frames, in contrast to the two-dimensional (2D) model, which treats frames as independent. The proposed 3D deep model is also compared with several pre-trained networks. The models are trained and evaluated on 29 subjects' WCE videos (14,691 frames before augmentation), and the performance advantages of the 3D-CNN over the 2D-CNN and pre-trained networks are verified in terms of sensitivity, specificity, and accuracy. RESULTS The 3D-CNN outperformed the 2D technique on all evaluation metrics (sensitivity: 98.92% vs. 98.05%, specificity: 99.50% vs. 86.94%, accuracy: 99.20% vs. 92.60%). CONCLUSION A novel 3D-CNN model for lesion detection in WCE frames is proposed in this study. The results indicate the superiority of the 3D-CNN over the 2D-CNN and several well-known pre-trained classifier networks; the proposed model uses the rich temporal information in adjacent frames, alongside spatial data, to build an accurate and efficient classifier.
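A minimal, illustrative 3D convolution (not the authors' architecture) makes the spatiotemporal point concrete: the kernel spans several consecutive frames, so each output value mixes temporal and spatial context, whereas a 2D kernel sees one frame at a time. Clip and kernel sizes here are arbitrary.

```python
def conv3d_valid(clip, kernel):
    """'Valid' 3D convolution over a clip shaped [T][H][W] with a kernel [kt][kh][kw]."""
    T, H, W = len(clip), len(clip[0]), len(clip[0][0])
    kt, kh, kw = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for t in range(T - kt + 1):          # slide over time (frames)
        plane = []
        for i in range(H - kh + 1):      # slide over rows
            row = []
            for j in range(W - kw + 1):  # slide over columns
                s = sum(clip[t + dt][i + di][j + dj] * kernel[dt][di][dj]
                        for dt in range(kt) for di in range(kh) for dj in range(kw))
                row.append(s)
            plane.append(row)
        out.append(plane)
    return out

# An 8-frame 6x6 clip convolved with a 3x3x3 averaging kernel yields a 6x4x4
# volume: every output voxel aggregates three consecutive frames.
clip = [[[1.0] * 6 for _ in range(6)] for _ in range(8)]
kernel = [[[1.0 / 27] * 3 for _ in range(3)] for _ in range(3)]
out = conv3d_valid(clip, kernel)
```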
Affiliation(s)
- Mehrdokht Bordbar
- Department of Electrical Engineering, Shiraz University of Technology, Shiraz, Iran
- Habibollah Danyali
- Department of Electrical Engineering, Shiraz University of Technology, Shiraz, Iran
- Fardad Ejtehadi
- Department of Internal Medicine, Gastroenterohepatology Research Center, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
4. Keshtkar K, Reza Safarpour A, Heshmat R, Sotoudehmanesh R, Keshtkar A. A Systematic Review and Meta-analysis of Convolutional Neural Network in the Diagnosis of Colorectal Polyps and Cancer. Turk J Gastroenterol 2023; 34:985-997. [PMID: 37681266] [PMCID: PMC10645297] [DOI: 10.5152/tjg.2023.22491]
Abstract
Convolutional neural networks are a class of deep neural networks used for various clinical purposes, including improving the detection rate of colorectal lesions. This systematic review and meta-analysis aimed to assess the performance of convolutional neural network (CNN)-based models in the detection or classification of colorectal polyps and colorectal cancer. A systematic search was performed in MEDLINE, SCOPUS, Web of Science, and other related databases. The performance measures of the CNN models in the detection of colorectal polyps and colorectal cancer were calculated under two scenarios, best and worst accuracy; Stata and R software were used for the meta-analysis. From 3368 searched records, 24 primary studies were included. Across the worst and best scenarios, the sensitivity and specificity of CNN models in predicting colorectal polyps ranged from 84.7% to 91.6% and from 86.0% to 93.8%, respectively; for colorectal cancer, the corresponding ranges were 93.2% to 94.1% and 94.6% to 97.7%. The positive and negative likelihood ratios ranged from 6.2 to 14.5 and from 0.09 to 0.17 for colorectal polyps, and from 17.1 to 41.2 and from 0.06 to 0.07 for colorectal cancer. The diagnostic odds ratio and accuracy of CNN models in predicting colorectal polyps ranged from 36.0 to 162.0 and from 80.5% to 88.6%, respectively; for colorectal cancer, these values ranged from 239.63 to 677.47 and from 88.2% to 96.4%. The area under the receiver operating characteristic curve ranged from 0.92 to 0.97 for colorectal polyps and from 0.98 to 0.99 for colorectal cancer.
Convolutional neural network-based models showed acceptable accuracy in detecting colorectal polyps and colorectal cancer.
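The pooled measures above (sensitivity, specificity, likelihood ratios, diagnostic odds ratio, accuracy) all derive from a 2x2 confusion table; a sketch with made-up counts, not data from the included studies:

```python
def diagnostic_measures(tp, fp, fn, tn):
    """Standard diagnostic-test measures from a 2x2 table of counts."""
    sens = tp / (tp + fn)                  # sensitivity (true-positive rate)
    spec = tn / (tn + fp)                  # specificity (true-negative rate)
    lr_pos = sens / (1 - spec)             # positive likelihood ratio
    lr_neg = (1 - sens) / spec             # negative likelihood ratio
    dor = lr_pos / lr_neg                  # diagnostic odds ratio, = (tp*tn)/(fp*fn)
    acc = (tp + tn) / (tp + fp + fn + tn)  # overall accuracy
    return sens, spec, lr_pos, lr_neg, dor, acc

# Hypothetical counts: 90 true positives, 10 false positives, etc.
sens, spec, lr_pos, lr_neg, dor, acc = diagnostic_measures(tp=90, fp=10, fn=10, tn=90)
```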
Affiliation(s)
- Kamyab Keshtkar
- University of Tehran School of Electrical and Computer Engineering, Tehran, Iran
- Ali Reza Safarpour
- Gastroenterohepatology Research Center, Shiraz University of Medical Sciences, Shiraz, Iran
- Ramin Heshmat
- Chronic Diseases Research Center, Endocrinology and Metabolism Population Sciences Institute, Tehran University of Medical Sciences, Tehran, Iran
- Rasoul Sotoudehmanesh
- Department of Gastroenterology, Digestive Disease Research Center, Digestive Disease Research Institute, Tehran University of Medical Sciences, Tehran, Iran
- Abbas Keshtkar
- Department of Health Sciences Education Development, Tehran University of Medical Sciences School of Public Health, Tehran, Iran
5. Sivari E, Bostanci E, Guzel MS, Acici K, Asuroglu T, Ercelebi Ayyildiz T. A New Approach for Gastrointestinal Tract Findings Detection and Classification: Deep Learning-Based Hybrid Stacking Ensemble Models. Diagnostics (Basel) 2023; 13:720. [PMID: 36832205] [PMCID: PMC9954881] [DOI: 10.3390/diagnostics13040720]
Abstract
Endoscopic diagnosis of gastrointestinal tract findings depends on specialist experience and is subject to inter-observer variability, which can cause minor lesions to be missed and prevent early diagnosis. In this study, deep learning-based hybrid stacking ensemble modeling is proposed for detecting and classifying gastrointestinal system findings, aiming at early diagnosis with high accuracy and sensitive measurements, reduced specialist workload, and greater objectivity in endoscopic diagnosis. In the first level of the proposed bi-level stacking ensemble approach, predictions are obtained by applying 5-fold cross-validation to three new CNN models. A machine learning classifier selected at the second level is then trained on these predictions to produce the final classification. The stacking models were compared with the individual deep learning models, and McNemar's statistical test was applied to support the results. According to the experimental results, the stacking ensemble models performed significantly better, with 98.42% ACC and 98.19% MCC on the KvasirV2 dataset and 98.53% ACC and 98.39% MCC on the HyperKvasir dataset. This study is the first to offer a learning-oriented approach that efficiently evaluates CNN features and provides objective, reliable results supported by statistical testing, compared with state-of-the-art studies on the subject. The proposed approach improves the performance of deep learning models and outperforms the state-of-the-art studies in the literature.
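The bi-level stacking idea can be sketched compactly: k-fold out-of-fold (OOF) predictions from base models become the meta-features on which a second-level classifier is fit, so no sample is ever predicted by a model that trained on it. The base model and data below are toy stand-ins, not the paper's CNNs.

```python
def kfold_indices(n, k=5):
    """Split indices 0..n-1 into k contiguous validation folds."""
    folds, start = [], 0
    for f in range(k):
        size = n // k + (1 if f < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def oof_predictions(X, y, base_fit_predict, k=5):
    """Level 1: each sample is predicted by a model that never saw it."""
    oof = [None] * len(X)
    for val_idx in kfold_indices(len(X), k):
        train_idx = [i for i in range(len(X)) if i not in val_idx]
        preds = base_fit_predict([X[i] for i in train_idx],
                                 [y[i] for i in train_idx],
                                 [X[i] for i in val_idx])
        for i, p in zip(val_idx, preds):
            oof[i] = p
    return oof

# Toy base model: classify by thresholding the feature at the training-set mean.
def threshold_model(Xtr, ytr, Xval):
    mean = sum(Xtr) / len(Xtr)
    return [1 if x >= mean else 0 for x in Xval]

X = [0.1, 0.2, 0.15, 0.9, 0.8, 0.85, 0.05, 0.95, 0.25, 0.75]
y = [0, 0, 0, 1, 1, 1, 0, 1, 0, 1]
meta_features = oof_predictions(X, y, threshold_model)  # input to the level-2 classifier
```

In the paper's setup there would be one OOF column per CNN, and the level-2 machine learning classifier would be fit on those columns.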
Affiliation(s)
- Esra Sivari
- Department of Computer Engineering, Cankiri Karatekin University, Cankiri 18100, Turkey
- Erkan Bostanci
- Department of Computer Engineering, Ankara University, Ankara 06830, Turkey
- Koray Acici
- Department of Artificial Intelligence and Data Engineering, Ankara University, Ankara 06830, Turkey
- Tunc Asuroglu
- Faculty of Medicine and Health Technology, Tampere University, 33720 Tampere, Finland
6. Shah S, Park N, Chehade NEH, Chahine A, Monachese M, Tiritilli A, Moosvi Z, Ortizo R, Samarasena J. Effect of computer-aided colonoscopy on adenoma miss rates and polyp detection: A systematic review and meta-analysis. J Gastroenterol Hepatol 2023; 38:162-176. [PMID: 36350048] [DOI: 10.1111/jgh.16059]
Abstract
BACKGROUND AND AIM Multiple computer-aided techniques utilizing artificial intelligence (AI) have been created to improve the detection of polyps during colonoscopy and thereby reduce the incidence of colorectal cancer. While adenoma detection rates (ADR) and polyp detection rates (PDR) are important colonoscopy quality indicators, adenoma miss rates (AMR) may better quantify missed lesions, which can ultimately lead to interval colorectal cancer. The purpose of this systematic review and meta-analysis was to determine the efficacy of computer-aided colonoscopy (CAC) with respect to AMR, ADR, and PDR in randomized controlled trials. METHODS A comprehensive, systematic literature search was performed across multiple databases in September 2022 to identify randomized controlled trials that compared CAC with traditional colonoscopy. Primary outcomes were AMR, ADR, and PDR. RESULTS Fourteen studies totaling 10,928 patients were included in the final analysis. There was a 65% reduction in the adenoma miss rate with CAC (OR, 0.35; 95% CI, 0.25-0.49; P < 0.001; I2 = 50%) and a 78% reduction in the sessile serrated lesion miss rate (OR, 0.22; 95% CI, 0.08-0.65; P < 0.01; I2 = 0%). There was a 52% increase in ADR in the CAC group compared with the control group (OR, 1.52; 95% CI, 1.39-1.67; P = 0.04; I2 = 47%) and a 93% increase in the number of adenomas > 10 mm detected per colonoscopy (OR, 1.93; 95% CI, 1.18-3.16; P < 0.01; I2 = 0%). CONCLUSIONS These results demonstrate the promise of CAC in improving AMR, ADR, and PDR across a spectrum of lesion sizes and morphologies.
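Odds ratios with confidence intervals like those reported above come from 2x2 event counts; a sketch of the standard Woolf (log-odds-ratio) interval, using made-up counts rather than the pooled trial data:

```python
import math

def odds_ratio_ci(a, b, c, d):
    """OR and Woolf 95% CI for a 2x2 table:
    a/b = events/non-events in group 1, c/d = events/non-events in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical example: 20/80 missed lesions with CAC vs. 40/60 without.
or_, lo, hi = odds_ratio_ci(a=20, b=80, c=40, d=60)  # OR < 1 favors CAC
```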
Affiliation(s)
- Sagar Shah
- Department of Internal Medicine, University of California Los Angeles Ronald Reagan Medical Center, Los Angeles, California, USA
- Nathan Park
- H. H. Chao Comprehensive Digestive Disease Center, University of California Irvine Medical Center, Orange, California, USA
- Nabil El Hage Chehade
- Division of Internal Medicine, Case Western Reserve University MetroHealth Medical Center, Cleveland, Ohio, USA
- Anastasia Chahine
- H. H. Chao Comprehensive Digestive Disease Center, University of California Irvine Medical Center, Orange, California, USA
- Marc Monachese
- H. H. Chao Comprehensive Digestive Disease Center, University of California Irvine Medical Center, Orange, California, USA
- Amelie Tiritilli
- H. H. Chao Comprehensive Digestive Disease Center, University of California Irvine Medical Center, Orange, California, USA
- Zain Moosvi
- Division of Gastroenterology, Hepatology and Nutrition, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Ronald Ortizo
- H. H. Chao Comprehensive Digestive Disease Center, University of California Irvine Medical Center, Orange, California, USA
- Jason Samarasena
- H. H. Chao Comprehensive Digestive Disease Center, University of California Irvine Medical Center, Orange, California, USA
7. Hsu CM, Hsu CC, Hsu ZM, Chen TH, Kuo T. Intraprocedure Artificial Intelligence Alert System for Colonoscopy Examination. Sensors (Basel) 2023; 23:1211. [PMID: 36772251] [PMCID: PMC9921893] [DOI: 10.3390/s23031211]
Abstract
Colonoscopy is a valuable tool for preventing and reducing the incidence and mortality of colorectal cancer. Although several computer-aided colorectal polyp detection and diagnosis systems have been proposed for clinical application, many remain susceptible to interference, including low image clarity, uneven illumination, and low accuracy in the analysis of dynamic images; these drawbacks limit the robustness and practicality of such systems. This study proposed a deep learning-based intraprocedure alert system for colonoscopy examination. The proposed system comprises blurred-image detection, foreign-body detection, and polyp detection modules built on convolutional neural networks. The training and validation datasets included both high-quality images and low-quality images, the latter comprising blurred images and those containing folds, fecal matter, and opaque water. For detecting blurred images and images containing folds, fecal matter, or opaque water, the accuracy rate was 96.2%. Applied to video images, the system achieved a per-polyp detection accuracy of 100%, with recall rates of 95.7% for high-quality image frames and 92% for polyp image frames. In per-frame analysis of video images, the overall alert accuracy rate was 95.3% and the low-quality false-positive rate was 0.18%. The proposed system can alert colonoscopists to slow their procedural speed or to perform flushing or lumen inflation when the colonoscope is moved too rapidly, fecal residue is present in the intestinal tract, or the colon is inadequately distended.
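Per-frame flags from modules like these typically need damping before they become alerts, or single-frame noise would fire constantly. The debouncing rule below is an assumption for illustration, not the published algorithm: raise an alert only after several consecutive low-quality frames.

```python
def alert_stream(frame_flags, min_run=3):
    """frame_flags: per-frame booleans (True = frame flagged low-quality).
    Returns the per-frame alert state: alert once min_run consecutive frames flag."""
    alerts, run = [], 0
    for flagged in frame_flags:
        run = run + 1 if flagged else 0  # length of the current flagged streak
        alerts.append(run >= min_run)
    return alerts

# A single noisy flagged frame (index 6) never triggers; a streak of four does.
flags = [False, True, True, True, True, False, True, False]
alerts = alert_stream(flags)
```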
Affiliation(s)
- Chen-Ming Hsu
- Department of Gastroenterology and Hepatology, Taoyuan Chang Gung Memorial Hospital, Taoyuan 333, Taiwan
- Department of Gastroenterology and Hepatology, Linkou Chang Gung Memorial Hospital, Taoyuan 333, Taiwan
- College of Medicine, Chang Gung University, Taoyuan 333, Taiwan
- Chien-Chang Hsu
- Department of Computer Science and Information Engineering, Fu-Jen Catholic University, New Taipei 242, Taiwan
- Zhe-Ming Hsu
- Department of Computer Science and Information Engineering, Fu-Jen Catholic University, New Taipei 242, Taiwan
- Tsung-Hsing Chen
- Department of Gastroenterology and Hepatology, Linkou Chang Gung Memorial Hospital, Taoyuan 333, Taiwan
- College of Medicine, Chang Gung University, Taoyuan 333, Taiwan
- Tony Kuo
- Department of Gastroenterology and Hepatology, Linkou Chang Gung Memorial Hospital, Taoyuan 333, Taiwan
8. Li MD, Huang ZR, Shan QY, Chen SL, Zhang N, Hu HT, Wang W. Performance and comparison of artificial intelligence and human experts in the detection and classification of colonic polyps. BMC Gastroenterol 2022; 22:517. [PMID: 36513975] [PMCID: PMC9749329] [DOI: 10.1186/s12876-022-02605-2]
Abstract
OBJECTIVE The main aim of this study was to analyze the performance of different artificial intelligence (AI) models in endoscopic colonic polyp detection and classification and to compare them with doctors of different experience levels. METHODS We searched studies on colonoscopy, colonic polyps, artificial intelligence, machine learning, and deep learning published before May 2020 in PubMed, EMBASE, Cochrane, and the citation index of conference proceedings. Study quality was assessed using the QUADAS-2 criteria for diagnostic test evaluation. The random-effects model was calculated using Meta-DiSc 1.4 and RevMan 5.3. RESULTS A total of 16 studies were included in the meta-analysis; only one presented externally validated results. The area under the curve (AUC) for detection and classification of colonic polyps was 0.940 for the AI group, 0.918 for the expert group, and 0.871 for the non-expert group. The AI group had slightly lower pooled specificity than the expert group (79% vs. 86%, P < 0.05) but higher pooled sensitivity (88% vs. 80%, P < 0.05). Likewise, the non-experts had lower pooled specificity in polyp recognition than the experts (81% vs. 86%, P < 0.05) and higher pooled sensitivity (85% vs. 80%, P < 0.05). CONCLUSION The performance of AI in polyp detection and classification is similar to that of human experts, with high sensitivity and moderate specificity. Different tasks may affect the performance of deep learning models and human experts, especially in terms of sensitivity and specificity.
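AUC values like those compared above have a useful rank interpretation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney statistic). A sketch with made-up scores, not data from the included studies:

```python
def auc_from_scores(pos_scores, neg_scores):
    """AUC as the probability a positive outranks a negative; ties count half."""
    wins = sum((p > n) + 0.5 * (p == n)   # bool arithmetic: win = 1, tie = 0.5
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical model scores for 3 polyp frames and 3 normal frames:
pos = [0.9, 0.8, 0.4]
neg = [0.3, 0.5, 0.2]
auc = auc_from_scores(pos, neg)   # 8 of 9 positive-negative pairs ranked correctly
```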
Affiliation(s)
- Ming-De Li
- Department of Medical Ultrasonics, Ultrasomics Artificial Intelligence X-Lab, Institute of Diagnostic and Interventional Ultrasound, The First Affiliated Hospital of Sun Yat-Sen University, 58 Zhongshan Road 2, Guangzhou 510080, People's Republic of China
- Ze-Rong Huang
- Department of Medical Ultrasonics, Ultrasomics Artificial Intelligence X-Lab, Institute of Diagnostic and Interventional Ultrasound, The First Affiliated Hospital of Sun Yat-Sen University, 58 Zhongshan Road 2, Guangzhou 510080, People's Republic of China
- Quan-Yuan Shan
- Department of Medical Ultrasonics, Ultrasomics Artificial Intelligence X-Lab, Institute of Diagnostic and Interventional Ultrasound, The First Affiliated Hospital of Sun Yat-Sen University, 58 Zhongshan Road 2, Guangzhou 510080, People's Republic of China
- Shu-Ling Chen
- Department of Medical Ultrasonics, Ultrasomics Artificial Intelligence X-Lab, Institute of Diagnostic and Interventional Ultrasound, The First Affiliated Hospital of Sun Yat-Sen University, 58 Zhongshan Road 2, Guangzhou 510080, People's Republic of China
- Ning Zhang
- Department of Gastroenterology, The First Affiliated Hospital of Sun Yat-Sen University, Guangzhou, China
- Hang-Tong Hu
- Department of Medical Ultrasonics, Ultrasomics Artificial Intelligence X-Lab, Institute of Diagnostic and Interventional Ultrasound, The First Affiliated Hospital of Sun Yat-Sen University, 58 Zhongshan Road 2, Guangzhou 510080, People's Republic of China
- Wei Wang
- Department of Medical Ultrasonics, Ultrasomics Artificial Intelligence X-Lab, Institute of Diagnostic and Interventional Ultrasound, The First Affiliated Hospital of Sun Yat-Sen University, 58 Zhongshan Road 2, Guangzhou 510080, People's Republic of China
9. Alkabbany I, Ali AM, Mohamed M, Elshazly SM, Farag A. An AI-Based Colonic Polyp Classifier for Colorectal Cancer Screening Using Low-Dose Abdominal CT. Sensors (Basel) 2022; 22:9761. [PMID: 36560132] [PMCID: PMC9782078] [DOI: 10.3390/s22249761]
Abstract
Among non-invasive colorectal cancer (CRC) screening approaches, Computed Tomography Colonography (CTC) and Virtual Colonoscopy (VC) are among the most accurate. This work proposes an AI-based polyp detection framework for VC addressing two main steps: automatic segmentation to isolate the colon region from its background, and automatic polyp detection. We also evaluate the performance of the proposed framework on low-dose Computed Tomography (CT) scans. We build on our visualization approach, Fly-In (FI), which provides "filet"-like projections of the internal surface of the colon; its performance supports its value in assisting gastroenterologists and holds great promise for combating CRC. In this work, the 2D projections of FI are fused with the 3D colon representation to generate new synthetic images, which are used to train a RetinaNet model to detect polyps. The trained model achieves a 94% f1-score and 97% sensitivity. Furthermore, we study the effect of dose variation in CT scans on the performance of the FI approach in polyp visualization. A simulation platform was developed for CTC visualization using FI, for both regular and low-dose CTC, using a novel AI restoration algorithm that enhances low-dose CT images so that a 3D colon can be successfully reconstructed and visualized with the FI approach. Three senior board-certified radiologists evaluated the framework: at a peak voltage of 30 kV the average relative sensitivity of the platform was 92%, whereas a 60 kV peak voltage produced an average relative sensitivity of 99.5%.
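For a detector, the quoted f1-score and sensitivity follow directly from true-positive, false-positive, and false-negative counts. A sketch with hypothetical counts (not the study's raw numbers) chosen to land near the reported values:

```python
def f1_and_sensitivity(tp, fp, fn):
    """F1 (harmonic mean of precision and recall) and sensitivity from detection counts."""
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)  # recall
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return f1, sensitivity

# Hypothetical: 97 polyps found, 9 false detections, 3 polyps missed.
f1, sensitivity = f1_and_sensitivity(tp=97, fp=9, fn=3)
```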
Affiliation(s)
- Islam Alkabbany
- Computer Vision and Image Processing Laboratory, University of Louisville, Louisville, KY 40292, USA
- Asem M. Ali
- Computer Vision and Image Processing Laboratory, University of Louisville, Louisville, KY 40292, USA
- Mostafa Mohamed
- Computer Vision and Image Processing Laboratory, University of Louisville, Louisville, KY 40292, USA
- Aly Farag
- Computer Vision and Image Processing Laboratory, University of Louisville, Louisville, KY 40292, USA
10. Zhang H, Zhu X, Li B, Dai X, Bao X, Fu Q, Tong Z, Liu L, Zheng Y, Zhao P, Ye L, Chen Z, Fang W, Ruan L, Jin X. Development and validation of a meta-learning-based multi-modal deep learning algorithm for detection of peritoneal metastasis. Int J Comput Assist Radiol Surg 2022; 17:1845-1853. [PMID: 35867303] [DOI: 10.1007/s11548-022-02698-w]
Abstract
PURPOSE Existing medical imaging tools detect peritoneal metastasis (PM) larger than 0.5 cm with 97% accuracy but PM smaller than 0.5 cm with only 29% accuracy, so early detection of PM remains difficult. This study aims to construct a deep convolutional neural network classifier based on meta-learning to predict PM. METHOD Peritoneal metastases were delineated on enhanced CT. The model is trained with meta-learning, and features are extracted using a multi-modal deep convolutional neural network (CNN) on enhanced CT to classify PM. We evaluate performance on the test dataset and compare it with other PM prediction algorithms. RESULTS The training dataset consisted of 9574 images from 43 patients with PM and 67 patients without PM; the testing dataset consisted of 1834 images from 21 patients. To increase prediction accuracy, we combine multi-modal inputs from the plain scan, portal venous, and arterial phases to build a meta-learning-based multi-modal PM predictor. The classifier achieves an accuracy of 87.5%, an area under the curve (AUC) of 0.877, a sensitivity of 73.4%, and a specificity of 95.2% on the testing dataset. This performance is superior to a routine PM classifier based on logistic regression (AUC: 0.795), a deep learning method named ResNet3D (AUC: 0.827), and a domain generalization (DG) method named MADDG (AUC: 0.834). CONCLUSIONS We proposed a novel meta-learning-based training strategy to improve the model's robustness to unseen samples. The experiments show that our meta-learning-based multi-modal PM classifier obtains more competitive results in synchronous PM prediction than existing algorithms and improves generalization even with limited data.
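Meta-learning trains over episodes, each split into a support set used to adapt and a query set used to evaluate. A prototypical-network-style sketch of one episode (an illustration of the general idea, not the authors' model), with toy 2-D features standing in for fused multi-phase CT embeddings:

```python
def centroid(vectors):
    """Mean vector (class prototype) of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[d] for v in vectors) / n for d in range(len(vectors[0]))]

def nearest_centroid_predict(support, query):
    """support: {class: [feature vectors]}; classify each query vector by the
    nearest class prototype (squared Euclidean distance)."""
    protos = {c: centroid(vs) for c, vs in support.items()}
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(protos, key=lambda c: dist2(q, protos[c])) for q in query]

# One toy episode: support samples per class, then held-out query samples.
support = {0: [[0.0, 0.1], [0.2, 0.0]],   # "no PM" support features (made up)
           1: [[1.0, 0.9], [0.8, 1.1]]}   # "PM" support features (made up)
query = [[0.1, 0.0], [0.9, 1.0]]
preds = nearest_centroid_predict(support, query)
```

Training over many such episodes is what encourages robustness to "unseen" samples: the model is repeatedly evaluated on data its adapted parameters never saw.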
Affiliation(s)
- Hangyu Zhang
- The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Xudong Zhu
- The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Bin Li
- The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Xiaomeng Dai
- Institute of Information Science and Electronic Engineering, Zhejiang University, Yuquan Campus, Hangzhou, Zhejiang, China
- Xuanwen Bao
- The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Qihan Fu
- The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Zhou Tong
- The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Lulu Liu
- The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Yi Zheng
- The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Peng Zhao
- The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Luan Ye
- Institute of Information Science and Electronic Engineering, Zhejiang University, Yuquan Campus, Hangzhou, Zhejiang, China
- Zhihong Chen
- Institute of Information Science and Electronic Engineering, Zhejiang University, Yuquan Campus, Hangzhou, Zhejiang, China
- Weijia Fang
- The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Lingxiang Ruan
- The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Xinyu Jin
- Institute of Information Science and Electronic Engineering, Zhejiang University, Yuquan Campus, Hangzhou, Zhejiang, China
11
|
Liang F, Wang S, Zhang K, Liu TJ, Li JN. Development of artificial intelligence technology in diagnosis, treatment, and prognosis of colorectal cancer. World J Gastrointest Oncol 2022; 14:124-152. [PMID: 35116107 PMCID: PMC8790413 DOI: 10.4251/wjgo.v14.i1.124] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/15/2021] [Revised: 08/19/2021] [Accepted: 11/15/2021] [Indexed: 02/06/2023] Open
Abstract
Artificial intelligence (AI) technology has advanced by leaps and bounds since its invention. AI can be subdivided into many technologies, such as machine learning and deep learning, whose application scopes and prospects differ considerably. Currently, AI technologies play a pivotal role in the highly complex and wide-ranging medical field, including medical image recognition, biotechnology, auxiliary diagnosis, drug research and development, and nutrition. Colorectal cancer (CRC) is a common gastrointestinal cancer with high mortality, posing a serious threat to human health. Many CRCs arise from the malignant transformation of colorectal polyps, so early diagnosis and treatment are crucial to CRC prognosis. Diagnostic methods for CRC include imaging, endoscopy, and pathology; treatment methods include endoscopic treatment, surgical treatment, and drug treatment. AI technology is still in the era of weak (narrow) AI and lacks communication capabilities, so it is currently used mainly for image recognition and auxiliary analysis rather than for in-depth communication with patients. This article reviews the application of AI in the diagnosis, treatment, and prognosis of CRC and discusses prospects for its broader application.
Collapse
Affiliation(s)
- Feng Liang
- Department of General Surgery, The Second Hospital of Jilin University, Changchun 130041, Jilin Province, China
| | - Shu Wang
- Department of Radiotherapy, Jilin University Second Hospital, Changchun 130041, Jilin Province, China
| | - Kai Zhang
- Department of General Surgery, The Second Hospital of Jilin University, Changchun 130041, Jilin Province, China
| | - Tong-Jun Liu
- Department of General Surgery, The Second Hospital of Jilin University, Changchun 130041, Jilin Province, China
| | - Jian-Nan Li
- Department of General Surgery, The Second Hospital of Jilin University, Changchun 130041, Jilin Province, China
| |
Collapse
|
12
|
Capsule Endoscopy: Pitfalls and Approaches to Overcome. Diagnostics (Basel) 2021; 11:diagnostics11101765. [PMID: 34679463 PMCID: PMC8535011 DOI: 10.3390/diagnostics11101765] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2021] [Accepted: 09/21/2021] [Indexed: 12/15/2022] Open
Abstract
Capsule endoscopy of the gastrointestinal tract is an innovative technology that serves to replace conventional endoscopy. Wireless capsule endoscopy, which is mainly used for small bowel examination, has recently been used to examine the entire gastrointestinal tract. This method is promising for its usefulness and development potential and enhances convenience by reducing the side effects and discomfort that may occur during conventional endoscopy. However, capsule endoscopy has fundamental limitations, including passive movement via bowel peristalsis and space restriction. This article reviews the current scientific aspects of capsule endoscopy and discusses the pitfalls and approaches to overcome its limitations. This review includes the latest research results on the role and potential of capsule endoscopy as a non-invasive diagnostic and therapeutic device.
Collapse
|
13
|
Nogueira-Rodríguez A, Domínguez-Carbajales R, Campos-Tato F, Herrero J, Puga M, Remedios D, Rivas L, Sánchez E, Iglesias Á, Cubiella J, Fdez-Riverola F, López-Fernández H, Reboiro-Jato M, Glez-Peña D. Real-time polyp detection model using convolutional neural networks. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06496-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
Colorectal cancer is a major health problem, and advances toward computer-aided diagnosis (CAD) systems to assist the endoscopist are a promising path to improvement. Here, a deep learning model for real-time polyp detection is reported, based on a pre-trained YOLOv3 (You Only Look Once) architecture and complemented by a post-processing step based on an object-tracking algorithm to reduce false positives. The base YOLOv3 network was fine-tuned using a dataset composed of 28,576 images labelled with the locations of 941 polyps, which will be made public soon. In a frame-based evaluation using isolated images containing polyps, a general F1 score of 0.88 was achieved (recall = 0.87, precision = 0.89), with lower predictive performance for flat polyps but higher performance for sessile and pedunculated morphologies and with the use of narrow-band imaging, whereas polyp size < 5 mm did not seem to have a significant impact. In a polyp-based evaluation using polyp and normal-mucosa videos, with a positive criterion defined as the presence of at least one 50-frame (window size) segment in which 75% of frames had predicted bounding boxes (frame positivity), a sensitivity of 72.61% (95% CI 68.99–75.95) and a specificity of 83.04% (95% CI 76.70–87.92) were achieved (Youden = 0.55, diagnostic odds ratio (DOR) = 12.98). When the positive criterion is less stringent (window size = 25, frame positivity = 50%), sensitivity reaches around 90% (sensitivity = 89.91%, 95% CI 87.20–91.94; specificity = 54.97%, 95% CI 47.49–62.24; Youden = 0.45; DOR = 10.76). The object-tracking algorithm demonstrated a significant improvement in specificity while maintaining sensitivity, with only a marginal impact on computational performance. These results suggest that the model could be effectively integrated into a CAD system.
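The polyp-based positivity criterion described above (a 50-frame window with at least 75% detected frames, relaxed to 25 frames / 50%) amounts to a sliding-window check over per-frame detection flags. A minimal sketch, with a hypothetical function name:

```python
def video_positive(frame_has_box, window=50, frames_positivity=0.75):
    """Polyp-based positivity criterion: the video counts as positive if
    some run of `window` consecutive frames has at least `frames_positivity`
    fraction of frames with a predicted bounding box."""
    n = len(frame_has_box)
    if n < window:
        return False
    count = sum(frame_has_box[:window])  # detections in the first window
    best = count
    for i in range(window, n):           # slide the window one frame at a time
        count += frame_has_box[i] - frame_has_box[i - window]
        best = max(best, count)
    return best >= frames_positivity * window
```

Under the stricter defaults, a video alternating between detected and undetected frames is negative; under the relaxed criterion the same video becomes positive, mirroring the sensitivity/specificity trade-off reported above.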
Collapse
|
14
|
Nazarian S, Glover B, Ashrafian H, Darzi A, Teare J. Diagnostic Accuracy of Artificial Intelligence and Computer-Aided Diagnosis for the Detection and Characterization of Colorectal Polyps: Systematic Review and Meta-analysis. J Med Internet Res 2021; 23:e27370. [PMID: 34259645 PMCID: PMC8319784 DOI: 10.2196/27370] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2021] [Revised: 03/09/2021] [Accepted: 05/06/2021] [Indexed: 12/15/2022] Open
Abstract
BACKGROUND Colonoscopy reduces the incidence of colorectal cancer (CRC) by allowing detection and resection of neoplastic polyps. Evidence shows that many small polyps are missed on a single colonoscopy. There has been a successful adoption of artificial intelligence (AI) technologies to tackle the issues around missed polyps and as tools to increase the adenoma detection rate (ADR). OBJECTIVE The aim of this review was to examine the diagnostic accuracy of AI-based technologies in assessing colorectal polyps. METHODS A comprehensive literature search was undertaken using the databases of Embase, MEDLINE, and the Cochrane Library. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines were followed. Studies reporting the use of computer-aided diagnosis for polyp detection or characterization during colonoscopy were included. Independent proportions and their differences were calculated and pooled through DerSimonian and Laird random-effects modeling. RESULTS A total of 48 studies were included. The meta-analysis showed a significant increase in pooled polyp detection rate in patients with the use of AI for polyp detection during colonoscopy compared with patients who had standard colonoscopy (odds ratio [OR] 1.75, 95% CI 1.56-1.96; P<.001). When comparing patients undergoing colonoscopy with the use of AI to those without, there was also a significant increase in ADR (OR 1.53, 95% CI 1.32-1.77; P<.001). CONCLUSIONS With the aid of machine learning, there is potential to improve ADR and, consequently, reduce the incidence of CRC. The current generation of AI-based systems demonstrate impressive accuracy for the detection and characterization of colorectal polyps. However, this is an evolving field and before its adoption into a clinical setting, AI systems must prove worthy to patients and clinicians. 
TRIAL REGISTRATION PROSPERO International Prospective Register of Systematic Reviews CRD42020169786; https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42020169786.
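The review above pools effects through DerSimonian and Laird random-effects modeling. For reference, a minimal sketch of that estimator on odds ratios, recovering each study's standard error from its 95% CI on the log scale; the function name and inputs are hypothetical and this is illustrative only, not a reproduction of the review's computation:

```python
import math

def dersimonian_laird(odds_ratios, cis):
    """Pool study odds ratios with DerSimonian-Laird random-effects
    weighting (requires at least two studies). Returns the pooled OR
    and its 95% CI."""
    y = [math.log(o) for o in odds_ratios]                       # log-odds ratios
    se = [(math.log(hi) - math.log(lo)) / (2 * 1.96)             # SE from 95% CI
          for lo, hi in cis]
    w = [1.0 / s ** 2 for s in se]                               # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))       # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)                      # between-study variance
    w_star = [1.0 / (s ** 2 + tau2) for s in se]                 # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    se_mu = math.sqrt(1.0 / sum(w_star))
    return math.exp(mu), (math.exp(mu - 1.96 * se_mu),
                          math.exp(mu + 1.96 * se_mu))
```

When between-study heterogeneity is absent (Q below its degrees of freedom), the estimate collapses to the fixed-effect inverse-variance pool; otherwise the added tau-squared widens the interval.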
Collapse
Affiliation(s)
- Scarlet Nazarian
- Department of Surgery and Cancer, Imperial College London, London, United Kingdom
| | - Ben Glover
- Department of Surgery and Cancer, Imperial College London, London, United Kingdom
| | - Hutan Ashrafian
- Department of Surgery and Cancer, Imperial College London, London, United Kingdom
| | - Ara Darzi
- Department of Surgery and Cancer, Imperial College London, London, United Kingdom
| | - Julian Teare
- Department of Surgery and Cancer, Imperial College London, London, United Kingdom
| |
Collapse
|
15
|
Rahim T, Hassan SA, Shin SY. A deep convolutional neural network for the detection of polyps in colonoscopy images. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102654] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
|
16
|
Yi PS, Hu CJ, Li CH, Yu F. Clinical value of artificial intelligence in hepatocellular carcinoma: Current status and prospect. Artif Intell Gastroenterol 2021; 2:42-55. [DOI: 10.35712/aig.v2.i2.42] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/06/2021] [Revised: 02/25/2021] [Accepted: 03/16/2021] [Indexed: 02/06/2023] Open
Abstract
Hepatocellular carcinoma (HCC) is the most commonly diagnosed type of liver cancer and the fourth leading cause of cancer-related mortality worldwide. Early identification of HCC and effective treatment for it have been challenging. Because the liver retains sufficient compensatory capacity in early disease and symptoms are nonspecific, HCC often escapes diagnosis at the incipient stage, when patients could achieve more satisfactory overall survival through resection or liver transplantation. Patients at advanced stages benefit from radical therapies only in a limited way. To improve the unfavorable prognosis of HCC, both diagnostic ability and treatment efficiency must be improved. The past decade has seen rapid advances in artificial intelligence, underscoring its unique usefulness in almost every field, including medicine. Herein, we sought and reviewed studies that emphasize artificial intelligence and HCC.
Collapse
Affiliation(s)
- Peng-Sheng Yi
- Department of Hepato-Biliary-Pancreas II, Affiliated Hospital of North Sichuan Medical College, Nanchong 637000, Sichuan Province, China
| | - Chen-Jun Hu
- Department of Hepato-Biliary-Pancreas II, Affiliated Hospital of North Sichuan Medical College, Nanchong 637000, Sichuan Province, China
| | - Chen-Hui Li
- Department of Obstetrics and Gynecology, Nanchong Traditional Chinese Medicine Hospital, Nanchong 637000, Sichuan Province, China
| | - Fei Yu
- Department of Radiology, Yingshan County People’s Hospital, Nanchong 610041, Sichuan Province, China
| |
Collapse
|
17
|
Lin B, Wu S. Digital Transformation in Personalized Medicine with Artificial Intelligence and the Internet of Medical Things. OMICS-A JOURNAL OF INTEGRATIVE BIOLOGY 2021; 26:77-81. [PMID: 33887155 DOI: 10.1089/omi.2021.0037] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Digital transformation is impacting every facet of science and society, not least because the COVID-19 pandemic has increased the need for digital services and products. But the need for digital transformation in the diagnostics and personalized medicine fields cuts deeper. In the past, personalized/precision medicine initiatives have been unable to capture patients' experiences and clinical outcomes in real time and in real-world settings. The availability of wearable smart sensors, wireless connectivity, artificial intelligence, and the Internet of Medical Things is changing the personalized/precision medicine research and implementation landscape. Digital transformation is poised to accelerate personalized/precision medicine and systems science on multiple fronts, such as deep real-time phenotyping with patient-reported outcomes, high-throughput association studies between omics and highly granular phenotypic variation, and digital clinical trials, among others. The present expert review offers an analysis of these systems science frontiers with a view to future applications at the intersection of digital health and personalized medicine, signaling the rise of "digital personalized medicine."
Collapse
Affiliation(s)
- Biaoyang Lin
- Zhejiang-California International Nanosystems Institute (ZCNI) Proprium Research Center, Zhejiang University, Hangzhou, Zhejiang, China.,Collaborative Innovation Center for Diagnosis and Treatment of Infectious Diseases, School of Medicine, Zhejiang University, The First Affiliated Hospital, Hangzhou, China.,Department of Urology, University of Washington School of Medicine, Seattle, Washington, USA
| | - Shengjun Wu
- Department of Clinical Laboratories, School of Medicine, Zhejiang University, Sir Run Run Shaw Hospital, Hangzhou, China
| |
Collapse
|
18
|
Artificial Intelligence in Colorectal Cancer Diagnosis Using Clinical Data: Non-Invasive Approach. Diagnostics (Basel) 2021; 11:diagnostics11030514. [PMID: 33799452 PMCID: PMC8001232 DOI: 10.3390/diagnostics11030514] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2021] [Revised: 03/10/2021] [Accepted: 03/11/2021] [Indexed: 02/06/2023] Open
Abstract
Colorectal cancer is the third most common and second most lethal tumor globally, causing 900,000 deaths annually. In this research, a computer-aided diagnosis system was designed to detect colorectal cancer using an innovative dataset comprising both numeric data (blood and urine analysis) and qualitative data (the patient's living environment; tumor position; T, N, M, and Dukes classification; associated pathology; technical approach; complications; incidents; and ultrasonographic dimensions and localization). The intelligent computer-aided colorectal cancer diagnosis system was designed using different machine learning techniques, such as classification and shallow and deep neural networks. The maximum accuracy obtained when solving the binary classification problem with traditional machine learning algorithms was 77.8%. However, the regression problem solved with deep neural networks yielded significantly better performance in terms of mean squared error minimization, reaching a value of 0.0000529.
Collapse
|
19
|
Wang X, Huang J, Ji X, Zhu Z. [Application of artificial intelligence for detection and classification of colon polyps]. NAN FANG YI KE DA XUE XUE BAO = JOURNAL OF SOUTHERN MEDICAL UNIVERSITY 2021; 41:310-313. [PMID: 33624608 DOI: 10.12122/j.issn.1673-4254.2021.02.22] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 12/09/2022]
Abstract
Colorectal cancer is one of the most common cancers worldwide, and colonoscopy has proven to be a preferable modality for colorectal cancer screening and surveillance. This review discusses the clinical application of artificial intelligence (AI) and computer-aided diagnosis for automated colonoscopic detection and diagnosis of colorectal polyps, with a focus on machine learning, deep learning, and convolutional neural networks for colorectal cancer screening and surveillance.
Collapse
Affiliation(s)
- X Wang
- Information Center, First Affiliated Hospital of Kunming Medical University, Kunming 65003, China
| | - J Huang
- Department of Oncology, First Affiliated Hospital of Kunming Medical University, Kunming 65003, China
| | - X Ji
- Day Surgery Center, First Affiliated Hospital of Kunming Medical University, Kunming 65003, China
| | - Z Zhu
- Day Surgery Center, First Affiliated Hospital of Kunming Medical University, Kunming 65003, China
| |
Collapse
|
20
|
Ghatwary N, Zolgharni M, Janan F, Ye X. Learning Spatiotemporal Features for Esophageal Abnormality Detection From Endoscopic Videos. IEEE J Biomed Health Inform 2021; 25:131-142. [PMID: 32750901 DOI: 10.1109/jbhi.2020.2995193] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Esophageal cancer is a disease with a high mortality rate. Early detection of esophageal abnormalities (i.e., precancerous and early cancerous lesions) can improve patients' survival. Deep learning-based methods have recently been proposed for detecting selected types of esophageal abnormality from endoscopic images. However, no methods in the literature cover detection from endoscopic videos, detection in challenging frames, or detection of more than one type of esophageal abnormality. In this paper, we present an efficient method to automatically detect different types of esophageal abnormalities from endoscopic videos. We propose a novel 3D Sequential DenseConvLstm network that extracts spatiotemporal features from the input video. Our network incorporates a 3D Convolutional Neural Network (3DCNN) and Convolutional LSTM (ConvLstm) to efficiently learn short- and long-term spatiotemporal features. The generated feature map is used by a region proposal network and an ROI pooling layer to produce bounding boxes that detect abnormality regions in each frame throughout the video. Finally, we investigate a post-processing method named Frame Search Conditional Random Field (FS-CRF) that improves the overall performance of the model by recovering missing regions in neighboring frames within the same clip. We extensively validate our model on an endoscopic video dataset that includes a variety of esophageal abnormalities. Our model achieved high performance across evaluation metrics: 93.7% recall, 92.7% precision, and 93.2% F-measure.
Moreover, as no results have been reported in the literature for esophageal abnormality detection from endoscopic videos, we validated the robustness of our model on a publicly available colonoscopy video dataset, achieving polyp detection performance of 81.18% recall, 96.45% precision, and 88.16% F-measure, compared to state-of-the-art results of 78.84% recall, 90.51% precision, and 84.27% F-measure on the same dataset. This demonstrates that the proposed method can be adapted to different gastrointestinal endoscopic video applications with promising performance.
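The abstract does not detail how FS-CRF recovers missing regions. A naive sketch of the underlying idea, filling an isolated missed detection by interpolating the bounding boxes of the neighboring frames in the same clip (the function and box format are hypothetical, not the authors' FS-CRF):

```python
def recover_missing(boxes):
    """boxes: per-frame bounding box (x1, y1, x2, y2) or None when the
    detector missed the frame. Fill an isolated missing frame by averaging
    the boxes of its two neighbors, on the assumption that an abnormality
    visible in adjacent frames is likely present in between."""
    out = list(boxes)
    for i in range(1, len(boxes) - 1):
        if boxes[i] is None and boxes[i - 1] is not None and boxes[i + 1] is not None:
            out[i] = tuple((a + b) / 2 for a, b in zip(boxes[i - 1], boxes[i + 1]))
    return out
```

A CRF formulation would instead score candidate boxes jointly across the clip, but the temporal-smoothing intuition is the same.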
Collapse
|
21
|
Yang YJ. The Future of Capsule Endoscopy: The Role of Artificial Intelligence and Other Technical Advancements. Clin Endosc 2020; 53:387-394. [PMID: 32668529 PMCID: PMC7403015 DOI: 10.5946/ce.2020.133] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/15/2020] [Accepted: 05/24/2020] [Indexed: 12/13/2022] Open
Abstract
Capsule endoscopy has revolutionized the management of small-bowel diseases owing to its convenience and noninvasiveness. Capsule endoscopy is a common method for the evaluation of obscure gastrointestinal bleeding, Crohn’s disease, small-bowel tumors, and polyposis syndrome. However, the laborious reading process, oversight of small-bowel lesions, and lack of locomotion are major obstacles to expanding its application. Along with recent advances in artificial intelligence, several studies have reported the promising performance of convolutional neural network systems for the diagnosis of various small-bowel lesions including erosion/ulcers, angioectasias, polyps, and bleeding lesions, which have reduced the time needed for capsule endoscopy interpretation. Furthermore, colon capsule endoscopy and capsule endoscopy locomotion driven by magnetic force have been investigated for clinical application, and various capsule endoscopy prototypes for active locomotion, biopsy, or therapeutic approaches have been introduced. In this review, we will discuss the recent advancements in artificial intelligence in the field of capsule endoscopy, as well as studies on other technological improvements in capsule endoscopy.
Collapse
Affiliation(s)
- Young Joo Yang
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Korea.,Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, Korea
| |
Collapse
|
22
|
Yang YJ, Cho BJ, Lee MJ, Kim JH, Lim H, Bang CS, Jeong HM, Hong JT, Baik GH. Automated Classification of Colorectal Neoplasms in White-Light Colonoscopy Images via Deep Learning. J Clin Med 2020; 9:E1593. [PMID: 32456309 PMCID: PMC7291169 DOI: 10.3390/jcm9051593] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2020] [Revised: 05/14/2020] [Accepted: 05/21/2020] [Indexed: 12/24/2022] Open
Abstract
Background: Classification of colorectal neoplasms during colonoscopic examination is important to avoid unnecessary endoscopic biopsy or resection. This study aimed to develop and validate deep learning models that automatically classify colorectal lesions histologically on white-light colonoscopy images. Methods: White-light colonoscopy images of colorectal lesions with pathological results were collected and classified into seven categories: stage T1-4 colorectal cancer (CRC), high-grade dysplasia (HGD), tubular adenoma (TA), and non-neoplasms. The images were then re-classified into four categories: advanced CRC, early CRC/HGD, TA, and non-neoplasms. Two convolutional neural network models were trained, and their performance was evaluated on an internal test dataset and an external validation dataset. Results: In total, 3828 images were collected from 1339 patients. The mean accuracies of the ResNet-152 model for the seven-category and four-category classifications were 60.2% and 67.3% on the internal test dataset, and 74.7% and 79.2% on the external validation dataset (240 images), respectively. In the external validation, ResNet-152 outperformed two endoscopists for four-category classification and showed a higher mean area under the curve (AUC) for detecting TA+ lesions (0.818) than the worst-performing endoscopist. The mean AUC for detecting HGD+ lesions reached 0.876 with Inception-ResNet-v2. Conclusions: A deep learning model showed promising performance in classifying colorectal lesions on white-light colonoscopy images; this model could help endoscopists build optimal treatment strategies.
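Several entries above report AUC for lesion detection (e.g., 0.818 for TA+ here). For reference, the metric equals the probability that a randomly chosen positive case is scored above a randomly chosen negative one, which yields a direct rank-based computation; a minimal sketch (function name hypothetical):

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney formulation: the fraction of
    (positive, negative) pairs in which the positive case scores higher,
    counting ties as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This O(P*N) pairwise form is fine for illustration; production code would sort once and use ranks.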
Collapse
Affiliation(s)
- Young Joo Yang
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24252, Korea; (Y.J.Y.); (H.L.); (C.S.B.); (H.M.J.); (J.T.H.); (G.H.B.)
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea
| | - Bum-Joo Cho
- Medical Artificial Intelligence Center, Hallym University Medical Center, Anyang 14068, Korea;
- Division of Biomedical Informatics, Seoul National University Biomedical Informatics (SNUBI), Seoul National University College of Medicine, Seoul 03080, Korea;
- Department of Ophthalmology, Hallym University Sacred Heart Hospital, Anyang 14068, Korea
| | - Myung-Je Lee
- Medical Artificial Intelligence Center, Hallym University Medical Center, Anyang 14068, Korea;
| | - Ju Han Kim
- Division of Biomedical Informatics, Seoul National University Biomedical Informatics (SNUBI), Seoul National University College of Medicine, Seoul 03080, Korea;
| | - Hyun Lim
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24252, Korea; (Y.J.Y.); (H.L.); (C.S.B.); (H.M.J.); (J.T.H.); (G.H.B.)
| | - Chang Seok Bang
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24252, Korea; (Y.J.Y.); (H.L.); (C.S.B.); (H.M.J.); (J.T.H.); (G.H.B.)
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea
| | - Hae Min Jeong
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24252, Korea; (Y.J.Y.); (H.L.); (C.S.B.); (H.M.J.); (J.T.H.); (G.H.B.)
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea
| | - Ji Taek Hong
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24252, Korea; (Y.J.Y.); (H.L.); (C.S.B.); (H.M.J.); (J.T.H.); (G.H.B.)
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea
| | - Gwang Ho Baik
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24252, Korea; (Y.J.Y.); (H.L.); (C.S.B.); (H.M.J.); (J.T.H.); (G.H.B.)
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea
| |
Collapse
|
23
|
Kim HS. Decision-Making in Artificial Intelligence: Is It Always Correct? J Korean Med Sci 2020; 35:e1. [PMID: 31898430 PMCID: PMC6942130 DOI: 10.3346/jkms.2020.35.e1] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/29/2019] [Accepted: 11/01/2019] [Indexed: 12/02/2022] Open
Affiliation(s)
- Hun Sung Kim
- Department of Medical Informatics, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Department of Endocrinology and Metabolism, College of Medicine, The Catholic University of Korea, Seoul, Korea.
| |
Collapse
|