1
Jain A, Sinha S, Mazumdar S. Enhancing polyp classification: A comparative analysis of spatio-temporal techniques. Med Eng Phys 2025; 139:104336. [PMID: 40306886] [DOI: 10.1016/j.medengphy.2025.104336]
Abstract
Colorectal cancer (CRC) is a major health concern, ranking as the third deadliest cancer globally. Early diagnosis of adenomatous polyps, which are pre-cancerous abnormal tissue growths, is crucial for preventing CRC. Artificial intelligence-assisted narrow-band imaging (NBI) colonoscopy can significantly increase the accuracy of polyp characterization during the endoscopy procedure. This study presents a comprehensive comparative analysis of the performance of three deep architectures that incorporate temporal information alongside spatial features for colon polyp classification. We employed three models, namely a time-distributed 2D CNN-LSTM, a 3D CNN, and a hybrid 3D CNN-ConvLSTM2D model, and evaluated their polyp characterization performance on a real-world clinical dataset of NBI colonoscopy videos of 64 different polyps from 60 patients in India. Additionally, cross-dataset validation on a publicly available dataset demonstrated the generalizability and robustness of the proposed model. The 3D CNN-ConvLSTM2D model outperforms the other two on all evaluation metrics. Notably, it achieved a mean NPV of 92%, surpassing the minimum NPV threshold set by the PIVI guidelines for reliable polyp diagnosis, which demonstrates its suitability for real-world applications. The performance of the proposed deep architectures is also compared with several existing methods proposed by other researchers; the 3D CNN-ConvLSTM2D model demonstrates significant improvements in both NPV and overall performance metrics while also effectively reducing false positives. This study demonstrates the effectiveness of employing spatiotemporal features for accurate polyp classification. To the best of our knowledge, this is the first study to investigate the effectiveness of spatiotemporal information for polyp classification using an exclusively NBI polyp dataset.
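The abstract contrasts three spatio-temporal architectures. As a rough illustration of the best-performing variant, the tf.keras sketch below stacks 3D convolutions for short-range spatio-temporal features and a ConvLSTM2D layer for longer-range temporal aggregation; the clip length, layer counts, filter sizes, and binary output are placeholder assumptions, not the authors' published configuration.

```python
import tensorflow as tf

def build_hybrid_3dcnn_convlstm(frames=16, size=112, channels=3, num_classes=2):
    """Minimal sketch of a hybrid 3D-CNN + ConvLSTM2D video classifier (illustrative only)."""
    inp = tf.keras.Input(shape=(frames, size, size, channels))
    x = tf.keras.layers.Conv3D(32, (3, 3, 3), padding="same", activation="relu")(inp)
    x = tf.keras.layers.MaxPooling3D((1, 2, 2))(x)           # pool space, keep the time axis
    x = tf.keras.layers.Conv3D(64, (3, 3, 3), padding="same", activation="relu")(x)
    x = tf.keras.layers.MaxPooling3D((2, 2, 2))(x)
    x = tf.keras.layers.ConvLSTM2D(64, (3, 3), padding="same")(x)  # temporal aggregation
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inp, out)

# A time-distributed 2D CNN-LSTM alternative wraps a 2D backbone per frame, e.g.
#   feats = tf.keras.layers.TimeDistributed(backbone)(inp)
#   x = tf.keras.layers.LSTM(128)(feats)
```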
Affiliation(s)
- Aditi Jain: VNIT, South Ambazari Road, Nagpur, 440010, Maharashtra, India.
- Saugata Sinha: VNIT, South Ambazari Road, Nagpur, 440010, Maharashtra, India.
2
Tan J, Yuan J, Fu X, Bai Y. Colonoscopy polyp classification via enhanced scattering wavelet Convolutional Neural Network. PLoS One 2024; 19:e0302800. [PMID: 39392783] [PMCID: PMC11469526] [DOI: 10.1371/journal.pone.0302800]
Abstract
Among the most common cancers, colorectal cancer (CRC) has a high death rate. The best way to screen for CRC is colonoscopy, which has been shown to lower the risk of the disease. As a result, computer-aided polyp classification techniques are applied to identify colorectal cancer. However, visually categorizing polyps is difficult since different polyps are imaged under different lighting conditions. Unlike previous works, this article presents the Enhanced Scattering Wavelet Convolutional Neural Network (ESWCNN), a polyp classification technique that combines a Convolutional Neural Network (CNN) and the Scattering Wavelet Transform (SWT) to improve polyp classification performance. This method concatenates simultaneously learnable image filters and wavelet filters on each input channel. The scattering wavelet filters can extract common spectral features at various scales and orientations, while the learnable filters can capture image spatial features that wavelet filters may miss. A network architecture for ESWCNN is designed based on these principles and trained and tested on colonoscopy datasets (two public datasets and one private dataset). An n-fold cross-validation experiment was conducted for three classes (adenoma, hyperplastic, serrated), achieving a classification accuracy of 96.4%, and 94.8% accuracy was achieved in two-class polyp classification (positive and negative). In the three-class classification, correct classification rates of 96.2% for adenomas, 98.71% for hyperplastic polyps, and 97.9% for serrated polyps were achieved. In the two-class experiment, the proposed method reached an average sensitivity of 96.7% with 93.1% specificity. Furthermore, we compare the performance of our model with state-of-the-art general classification models and commonly used CNNs. Six end-to-end CNN-based models were trained using two datasets of video sequences. The experimental results demonstrate that the proposed ESWCNN method can classify polyps with higher accuracy and efficacy than the state-of-the-art CNN models. These findings can provide guidance for future research in polyp classification.
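The core idea, fixed wavelet filters concatenated with learnable filters on each input channel, can be sketched as below. For brevity this toy stem uses a frozen 2x2 Haar filter bank in a depthwise convolution rather than a full scattering transform; the layer sizes, input shape, and three-class head are illustrative assumptions, not the ESWCNN architecture.

```python
import numpy as np
import tensorflow as tf

def build_wavelet_augmented_stem(input_shape=(224, 224, 3), learned_filters=16):
    """Toy stem: fixed Haar wavelet responses concatenated with learnable conv features."""
    inp = tf.keras.Input(shape=input_shape)

    # Fixed branch: 2x2 Haar analysis filters applied to each input channel (non-trainable).
    haar_branch = tf.keras.layers.DepthwiseConv2D(
        kernel_size=2, strides=2, depth_multiplier=4,
        use_bias=False, trainable=False, padding="same", name="haar")
    # Learnable branch: ordinary convolution capturing detail the fixed filters may miss.
    learned_branch = tf.keras.layers.Conv2D(
        learned_filters, 3, strides=2, padding="same", activation="relu")

    merged = tf.keras.layers.Concatenate()([haar_branch(inp), learned_branch(inp)])
    x = tf.keras.layers.GlobalAveragePooling2D()(merged)
    out = tf.keras.layers.Dense(3, activation="softmax")(x)   # adenoma / hyperplastic / serrated
    model = tf.keras.Model(inp, out)

    # Load the Haar kernels into the frozen branch (kernel shape: kh, kw, in_ch, multiplier).
    haar = np.array([[[ 1,  1], [ 1,  1]],
                     [[ 1,  1], [-1, -1]],
                     [[ 1, -1], [ 1, -1]],
                     [[ 1, -1], [-1,  1]]], dtype=np.float32) * 0.5
    kernel = np.tile(haar.transpose(1, 2, 0)[:, :, None, :], (1, 1, input_shape[-1], 1))
    haar_branch.set_weights([kernel])
    return model
```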
Affiliation(s)
- Jun Tan: School of Mathematics, Sun Yat-Sen University, Guangzhou, Guangdong, China; Guangdong Province Key Laboratory of Computational Science, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Jiamin Yuan: Health Construction Administration Center, Guangdong Provincial Hospital of Chinese Medicine, Guangzhou, Guangdong, China; The Second Affiliated Hospital of Guangzhou University of Traditional Chinese Medicine (TCM), Guangzhou, Guangdong, China
- Xiaoyong Fu: School of Mathematics, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Yilin Bai: School of Mathematics, Sun Yat-Sen University, Guangzhou, Guangdong, China; China Southern Airlines, Guangzhou, Guangdong, China
3
Jain A, Sinha S, Mazumdar S. Comparative analysis of machine learning frameworks for automatic polyp characterization. Biomed Signal Process Control 2024; 95:106451. [DOI: 10.1016/j.bspc.2024.106451]
4
Ding M, Yan J, Chao G, Zhang S. Application of artificial intelligence in colorectal cancer screening by colonoscopy: Future prospects (Review). Oncol Rep 2023; 50:199. [PMID: 37772392] [DOI: 10.3892/or.2023.8636]
Abstract
Colorectal cancer (CRC) has become a severe global health concern, with the third-highest incidence and second-highest mortality rate of all cancers. The burden of CRC is expected to increase by 60% by 2030. Fortunately, effective early evidence-based screening could significantly reduce the incidence and mortality of CRC. Colonoscopy is the core screening method for CRC, with high popularity and accuracy. Yet the accuracy of colonoscopy in CRC screening depends on the experience and state of the operating physicians, and it is challenging to maintain a consistently high diagnostic rate. Artificial intelligence (AI)-assisted colonoscopy can compensate for these shortcomings and improve the accuracy, efficiency, and quality of colonoscopy screening. The unique advantages of AI, such as continuously advancing high-performance computing capabilities and innovative deep-learning architectures, have a substantial impact on controlling CRC morbidity and mortality and highlight its role in colonoscopy screening.
Affiliation(s)
- Menglu Ding: The Second Affiliated Hospital of Zhejiang Chinese Medical University (The Xin Hua Hospital of Zhejiang Province), Hangzhou, Zhejiang 310000, P.R. China
- Junbin Yan: The Second Affiliated Hospital of Zhejiang Chinese Medical University (The Xin Hua Hospital of Zhejiang Province), Hangzhou, Zhejiang 310000, P.R. China
- Guanqun Chao: Department of General Practice, Sir Run Run Shaw Hospital, Zhejiang University, Hangzhou, Zhejiang 310000, P.R. China
- Shuo Zhang: The Second Affiliated Hospital of Zhejiang Chinese Medical University (The Xin Hua Hospital of Zhejiang Province), Hangzhou, Zhejiang 310000, P.R. China
5
Pedroso M, Martins ML, Libânio D, Dinis-Ribeiro M, Coimbra M, Renna F. Fractal Bilinear Deep Neural Network Models for Gastric Intestinal Metaplasia Detection. 2023 IEEE EMBS International Conference on Biomedical and Health Informatics (BHI) 2023:1-5. [DOI: 10.1109/bhi58575.2023.10313503]
Affiliation(s)
- Maria Pedroso: University of Porto, INESC TEC - Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, Faculty of Science
- Miguel L. Martins: University of Porto, INESC TEC - Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, Faculty of Science
- Diogo Libânio: University of Porto, CIDES/CINTESIS, Faculty of Medicine
- Miguel Coimbra: University of Porto, INESC TEC - Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, Faculty of Science
- Francesco Renna: University of Porto, INESC TEC - Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, Faculty of Science
6
Gabralla LA, Hussien AM, AlMohimeed A, Saleh H, Alsekait DM, El-Sappagh S, Ali AA, Refaat Hassan M. Automated Diagnosis for Colon Cancer Diseases Using Stacking Transformer Models and Explainable Artificial Intelligence. Diagnostics (Basel) 2023; 13:2939. [PMID: 37761306] [PMCID: PMC10529133] [DOI: 10.3390/diagnostics13182939]
Abstract
Colon cancer is the third most common cancer type worldwide; in 2020, almost two million cases were diagnosed. As a result, providing new, highly accurate techniques for detecting colon cancer leads to early and successful treatment of this disease. This paper proposes a heterogeneous stacking deep learning model to predict colon cancer. Stacking integrates pretrained convolutional neural network (CNN) models with a metalearner to enhance colon cancer prediction performance. The proposed model is compared with VGG16, InceptionV3, ResNet50, and DenseNet121 using different evaluation metrics. Furthermore, the proposed models are evaluated using the LC25000 and WCE binary and multiclass colon cancer image datasets. The results show that the stacking models recorded the highest performance for the two datasets. For the LC25000 dataset, the stacked model recorded the highest accuracy, recall, precision, and F1 score (100%). For the WCE colon image dataset, the stacked model recorded the highest accuracy, recall, precision, and F1 score (98%). Stacking-SVM achieved the highest performance compared with the individual models (VGG16, InceptionV3, ResNet50, and DenseNet121) because it combines the outputs of multiple single models and trains and evaluates a metalearner on them to produce better predictive results than any single model. The black-box deep learning models are explained using explainable AI (XAI).
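A stacking ensemble of the kind described, base CNN outputs fed to a metalearner, can be illustrated with scikit-learn. The sketch assumes the out-of-fold class-probability matrices from the base CNNs (e.g., VGG16, InceptionV3, ResNet50, DenseNet121) have already been computed; the SVM kernel and split sizes are arbitrary choices, not the paper's settings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def fit_stacking_svm(base_probs, labels):
    """base_probs: list of (n_samples, n_classes) out-of-fold probability matrices,
    one per pretrained base CNN. Stacking trains an SVM metalearner on these outputs."""
    meta_features = np.hstack(base_probs)                  # concatenate base-model outputs
    X_tr, X_te, y_tr, y_te = train_test_split(
        meta_features, labels, test_size=0.2, stratify=labels, random_state=0)
    meta = SVC(kernel="rbf", probability=True)             # SVM metalearner
    meta.fit(X_tr, y_tr)
    print("stacked accuracy:", accuracy_score(y_te, meta.predict(X_te)))
    return meta
```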
Affiliation(s)
- Lubna Abdelkareim Gabralla: Department of Computer Science and Information Technology, Applied College, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Ali Mohamed Hussien: Department of Computer Science, Faculty of Science, Aswan University, Aswan 81528, Egypt
- Abdulaziz AlMohimeed: College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 13318, Saudi Arabia
- Hager Saleh: Faculty of Computers and Artificial Intelligence, South Valley University, Hurghada 84511, Egypt
- Deema Mohammed Alsekait: Department of Computer Science and Information Technology, Applied College, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Shaker El-Sappagh: Faculty of Computer Science and Engineering, Galala University, Suez 34511, Egypt; Information Systems Department, Faculty of Computers and Artificial Intelligence, Benha University, Banha 13518, Egypt
- Abdelmgeid A. Ali: Faculty of Computers and Information, Minia University, Minia 61519, Egypt
- Moatamad Refaat Hassan: Department of Computer Science, Faculty of Science, Aswan University, Aswan 81528, Egypt
7
Gan P, Li P, Xia H, Zhou X, Tang X. The application of artificial intelligence in improving colonoscopic adenoma detection rate: Where are we and where are we going. Gastroenterol Hepatol 2023; 46:203-213. [PMID: 35489584] [DOI: 10.1016/j.gastrohep.2022.03.009]
Abstract
Colorectal cancer (CRC) is one of the most common malignant tumors in the world. Colonoscopy is the crucial examination technique in CRC screening programs for the early detection of precursor lesions and the treatment of early colorectal cancer, which can significantly reduce the morbidity and mortality of CRC. However, pooled polyp miss rates during colonoscopic examination are as high as 22%. Artificial intelligence (AI) provides a promising way to improve the colonoscopic adenoma detection rate (ADR). It may help endoscopists avoid missing polyps and offer an accurate optical diagnosis of suspected lesions. Herein, we describe some of the milestone studies on using AI for colonoscopy and the future directions for applying AI to improve colonoscopic ADR.
Affiliation(s)
- Peiling Gan: Department of Gastroenterology, Affiliated Hospital of Southwest Medical University, Luzhou, China
- Peiling Li: Department of Gastroenterology, Affiliated Hospital of Southwest Medical University, Luzhou, China
- Huifang Xia: Department of Gastroenterology, Affiliated Hospital of Southwest Medical University, Luzhou, China
- Xian Zhou: Department of Gastroenterology, Affiliated Hospital of Southwest Medical University, Luzhou, China
- Xiaowei Tang: Department of Gastroenterology, Affiliated Hospital of Southwest Medical University, Luzhou, China; Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, China
8
Li P, Pan Q, Jiang S, Kuebler WM, Pries AR, Ning G. Visualizing the spatiotemporal pattern of yolk sac membrane vascular network by enhanced local fractal analysis. Microcirculation 2022; 29:e12746. [PMID: 34897901] [DOI: 10.1111/micc.12746]
Abstract
OBJECTIVE: To establish methods for providing a comprehensive and detailed description of the spatial distribution of vascular networks, and to reveal the spatiotemporal pattern of the yolk sac membrane vascular network during angiogenesis. METHODS: Addressing the limitations of conventional local fractal analysis, an improved approach, named the scanning average local fractal dimension, was proposed. This method was applied to six high-resolution vascular images of the yolk sac membrane from three eggs at two stages (E3 and E4) to characterize the spatial distribution of the complexity of the vascular network. RESULTS: With the proposed method, the spatial distribution of the complexity of the yolk sac membrane vascular network was visualized. From E3 to E4, the local fractal dimension increased in all three eggs: 1.80 ± 0.02 vs. 1.85 ± 0.02, 1.72 ± 0.03 vs. 1.83 ± 0.02, and 1.77 ± 0.03 vs. 1.82 ± 0.02, respectively. The mean local fractal dimension in the area most distal from the embryo proper was the lowest at E3 but the highest at E4. At E3, most peaks of the local fractal dimension were located in vein territories, and they shifted to artery territories at E4. CONCLUSIONS: The spatial distribution of the complexity of the yolk sac membrane vascular network exhibited diverse patterns at different stages. In addition, from E3 to E4 the increase in complexity was greatest at the intersection areas between the arteries and the sinus terminalis, which is consistent with physiologic evidence. The present work provides a potential approach for investigating the spatiotemporal pattern of the angiogenic process.
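A local fractal dimension map of this kind can be approximated with plain NumPy box counting over a sliding window. This is a generic sketch of the idea, not the authors' scanning average local fractal dimension implementation; the window size, step, and box sizes are arbitrary, and the input is assumed to be a binary vessel segmentation.

```python
import numpy as np

def box_counting_dimension(patch, sizes=(2, 4, 8, 16)):
    """Box-counting fractal dimension of a binary (vessel = True) patch."""
    counts = []
    for s in sizes:
        h, w = (patch.shape[0] // s) * s, (patch.shape[1] // s) * s
        blocks = patch[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(max(np.count_nonzero(blocks.any(axis=(1, 3))), 1))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

def local_fd_map(binary_vessels, window=128, step=16):
    """Slide a window over the segmented image and compute a local fractal dimension map."""
    rows = range(0, binary_vessels.shape[0] - window + 1, step)
    cols = range(0, binary_vessels.shape[1] - window + 1, step)
    return np.array([[box_counting_dimension(binary_vessels[r:r + window, c:c + window])
                      for c in cols] for r in rows])
```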
Affiliation(s)
- Peilun Li: Department of Biomedical Engineering, Zhejiang University, Hangzhou, China
- Qing Pan: College of Information Engineering, Zhejiang University of Technology, Hangzhou, China
- Sheng Jiang: Department of Biomedical Engineering, Zhejiang University, Hangzhou, China
- Wolfgang M Kuebler: Institute of Physiology, Charité Universitätsmedizin Berlin, Berlin, Germany
- Axel R Pries: Institute of Physiology, Charité Universitätsmedizin Berlin, Berlin, Germany
- Gangmin Ning: Department of Biomedical Engineering, Zhejiang University, Hangzhou, China
9
Kavitha MS, Gangadaran P, Jackson A, Venmathi Maran BA, Kurita T, Ahn BC. Deep Neural Network Models for Colon Cancer Screening. Cancers (Basel) 2022; 14:3707. [PMID: 35954370] [PMCID: PMC9367621] [DOI: 10.3390/cancers14153707]
Abstract
Early detection of colorectal cancer can significantly facilitate clinicians' decision-making and reduce their workload. This can be achieved using automatic systems with endoscopic and histological images. Recently, the success of deep learning has motivated the development of image- and video-based polyp identification and segmentation. Currently, most diagnostic colonoscopy rooms utilize artificial intelligence methods that are considered to perform well in predicting invasive cancer. Convolutional neural network-based architectures, together with image patches and preprocessing, are widely used. Furthermore, transfer learning and end-to-end learning techniques have been adopted for detection and localization tasks, which improve accuracy and reduce user dependence with limited datasets. However, explainable deep networks that provide transparency, interpretability, reliability, and fairness in clinical diagnostics are preferred. In this review, we summarize the latest advances in such models, with or without transparency, for the prediction of colorectal cancer and also address the knowledge gap in the upcoming technology.
Affiliation(s)
- Muthu Subash Kavitha: School of Information and Data Sciences, Nagasaki University, Nagasaki 852-8521, Japan
- Prakash Gangadaran: BK21 FOUR KNU Convergence Educational Program of Biomedical Sciences for Creative Future Talents, School of Medicine, Kyungpook National University, Daegu 41944, Korea; Department of Nuclear Medicine, School of Medicine, Kyungpook National University, Kyungpook National University Hospital, Daegu 41944, Korea
- Aurelia Jackson: Borneo Marine Research Institute, Universiti Malaysia Sabah, Kota Kinabalu 88400, Malaysia
- Balu Alagar Venmathi Maran: Borneo Marine Research Institute, Universiti Malaysia Sabah, Kota Kinabalu 88400, Malaysia
- Takio Kurita: Graduate School of Advanced Science and Engineering, Hiroshima University, Higashi-Hiroshima 739-8521, Japan
- Byeong-Cheol Ahn: BK21 FOUR KNU Convergence Educational Program of Biomedical Sciences for Creative Future Talents, School of Medicine, Kyungpook National University, Daegu 41944, Korea; Department of Nuclear Medicine, School of Medicine, Kyungpook National University, Kyungpook National University Hospital, Daegu 41944, Korea
10
Sharma P, Balabantaray BK, Bora K, Mallik S, Kasugai K, Zhao Z. An Ensemble-Based Deep Convolutional Neural Network for Computer-Aided Polyps Identification From Colonoscopy. Front Genet 2022; 13:844391. [PMID: 35559018] [PMCID: PMC9086187] [DOI: 10.3389/fgene.2022.844391]
Abstract
Colorectal cancer (CRC) is the third leading cause of cancer death globally. Early detection and removal of precancerous polyps can significantly reduce the chance of CRC patient death. Currently, the polyp detection rate mainly depends on the skill and expertise of gastroenterologists. Over time, unidentified polyps can develop into cancer. Machine learning has recently emerged as a powerful method for assisting clinical diagnosis. Several classification models have been proposed to identify polyps, but their performance has not yet been comparable to that of an expert endoscopist. Here, we propose a multiple-classifier consultation strategy to create an effective and powerful classifier for polyp identification. This strategy benefits from recent findings that different classification models learn and extract different information from an image. Therefore, our ensemble classifier can derive a more dependable decision than each individual classifier. The combined information inherits ResNet's advantage of residual connections, while the depth-wise separable convolution layers of the Xception model help extract objects even when they are covered by occlusions. Here, we applied our strategy to still frames extracted from a colonoscopy video. It outperformed other state-of-the-art techniques, with a performance measure greater than 95% on each evaluation metric. Our method will help researchers and gastroenterologists develop clinically applicable, computational-guided tools for colonoscopy screening. It may be extended to other clinical diagnoses that rely on imaging.
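The consultation strategy, combining predictions from complementary backbones such as ResNet and Xception, can be sketched in tf.keras as simple probability averaging. The shared 299x299 input, the two-class softmax heads, and plain averaging are assumptions made for illustration; the paper's actual fusion rule and training details may differ, and per-backbone input preprocessing is omitted for brevity.

```python
import tensorflow as tf

def build_consultation_ensemble(num_classes=2, input_shape=(299, 299, 3)):
    """Average the class probabilities of a ResNet50 head and an Xception head."""
    inp = tf.keras.Input(shape=input_shape)
    heads = []
    for backbone_fn, name in [(tf.keras.applications.ResNet50, "resnet50"),
                              (tf.keras.applications.Xception, "xception")]:
        backbone = backbone_fn(include_top=False, weights="imagenet",
                               input_shape=input_shape, pooling="avg")
        feats = backbone(inp)                              # pooled backbone features
        heads.append(tf.keras.layers.Dense(num_classes, activation="softmax",
                                           name=f"{name}_probs")(feats))
    out = tf.keras.layers.Average(name="consultation")(heads)  # probability averaging
    return tf.keras.Model(inp, out)
```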
Affiliation(s)
- Pallabi Sharma: Department of Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, India
- Bunil Kumar Balabantaray: Department of Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, India
- Kangkana Bora: Computer Science and Information Technology, Cotton University, Guwahati, India
- Saurav Mallik: Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Kunio Kasugai: Department of Gastroenterology, Aichi Medical University, Nagakute, Japan
- Zhongming Zhao: Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States; Human Genetics Center, School of Public Health, The University of Texas Health Science Center at Houston, Houston, TX, United States; MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, Houston, TX, United States
11
Detection and Classification of Colorectal Polyp Using Deep Learning. BioMed Research International 2022; 2022:2805607. [PMID: 35463989] [PMCID: PMC9033358] [DOI: 10.1155/2022/2805607]
Abstract
Colorectal cancer (CRC) is the third most dangerous cancer in the world, and its incidence is increasing steadily. Therefore, timely and accurate diagnosis is required to save patients' lives. Cancer grows from polyps, which can be either cancerous or noncancerous, so if cancerous polyps are detected accurately and removed in time, the dangerous consequences of cancer can be reduced to a large extent. Colonoscopy is used to detect the presence of colorectal polyps. However, manual examinations performed by experts are prone to various errors. Therefore, some researchers have utilized machine and deep learning-based models to automate the diagnosis process. However, existing models suffer from overfitting and vanishing gradient problems. To overcome these problems, a convolutional neural network (CNN)-based deep learning model is proposed. Initially, guided image filtering and dynamic histogram equalization are used to filter and enhance the colonoscopy images. Thereafter, a Single Shot MultiBox Detector (SSD) is used to efficiently detect and classify colorectal polyps from colonoscopy images. Finally, fully connected layers with dropout are used to classify the polyp classes. Extensive experimental results on a benchmark dataset show that the proposed model achieves significantly better results than competitive models. The proposed model can detect and classify colorectal polyps from colonoscopy images with 92% accuracy.
12
Taghiakbari M, Mori Y, von Renteln D. Artificial intelligence-assisted colonoscopy: A review of current state of practice and research. World J Gastroenterol 2021; 27:8103-8122. [PMID: 35068857] [PMCID: PMC8704267] [DOI: 10.3748/wjg.v27.i47.8103]
Abstract
Colonoscopy is an effective screening procedure in colorectal cancer prevention programs; however, colonoscopy practice can vary in terms of lesion detection, classification, and removal. Artificial intelligence (AI)-assisted decision support systems for endoscopy are an area of rapid research and development. The systems promise improved detection, classification, screening, and surveillance for colorectal polyps and cancer. Several recently developed applications for AI-assisted colonoscopy have shown promising results for the detection and classification of colorectal polyps and adenomas. However, their value for real-time application in clinical practice has yet to be determined owing to limitations in the design, validation, and testing of AI models under real-life clinical conditions. Despite these current limitations, ambitious attempts to expand the technology further by developing more complex systems capable of assisting and supporting the endoscopist throughout the entire colonoscopy examination, including polypectomy procedures, are at the concept stage. However, further work is required to address the barriers and challenges of AI integration into broader colonoscopy practice, to navigate the approval process from regulatory organizations and societies, and to support physicians and patients on their journey to accepting the technology by providing strong evidence of its accuracy and safety. This article takes a closer look at the current state of AI integration into the field of colonoscopy and offers suggestions for future research.
Affiliation(s)
- Mahsa Taghiakbari: Department of Gastroenterology, CRCHUM, Montreal H2X 0A9, Quebec, Canada
- Yuichi Mori: Clinical Effectiveness Research Group, University of Oslo, Oslo 0450, Norway; Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama 224-8503, Japan
- Daniel von Renteln: Department of Gastroenterology, CRCHUM, Montreal H2X 0A9, Quebec, Canada
13
Mitsala A, Tsalikidis C, Pitiakoudis M, Simopoulos C, Tsaroucha AK. Artificial Intelligence in Colorectal Cancer Screening, Diagnosis and Treatment. A New Era. Curr Oncol 2021; 28:1581-1607. [PMID: 33922402] [PMCID: PMC8161764] [DOI: 10.3390/curroncol28030149]
Abstract
The development of artificial intelligence (AI) algorithms has permeated the medical field with great success. The widespread use of AI technology in diagnosing and treating several types of cancer, especially colorectal cancer (CRC), is now attracting substantial attention. CRC, which represents the third most commonly diagnosed malignancy in both men and women, is considered a leading cause of cancer-related deaths globally. Our review herein aims to provide in-depth knowledge and analysis of the AI applications in CRC screening, diagnosis, and treatment based on current literature. We also explore the role of recent advances in AI systems regarding medical diagnosis and therapy, with several promising results. CRC is a highly preventable disease, and AI-assisted techniques in routine screening represent a pivotal step in declining incidence rates of this malignancy. So far, computer-aided detection and characterization systems have been developed to increase the detection rate of adenomas. Furthermore, CRC treatment enters a new era with robotic surgery and novel computer-assisted drug delivery techniques. At the same time, healthcare is rapidly moving toward precision or personalized medicine. Machine learning models have the potential to contribute to individual-based cancer care and transform the future of medicine.
Affiliation(s)
- Athanasia Mitsala: Second Department of Surgery, University General Hospital of Alexandroupolis, Democritus University of Thrace Medical School, Dragana, 68100 Alexandroupolis, Greece (correspondence; Tel.: +30-6986423707)
- Christos Tsalikidis: Second Department of Surgery, University General Hospital of Alexandroupolis, Democritus University of Thrace Medical School, Dragana, 68100 Alexandroupolis, Greece
- Michail Pitiakoudis: Second Department of Surgery, University General Hospital of Alexandroupolis, Democritus University of Thrace Medical School, Dragana, 68100 Alexandroupolis, Greece
- Constantinos Simopoulos: Second Department of Surgery, University General Hospital of Alexandroupolis, Democritus University of Thrace Medical School, Dragana, 68100 Alexandroupolis, Greece
- Alexandra K. Tsaroucha: Laboratory of Experimental Surgery & Surgical Research, Democritus University of Thrace Medical School, Dragana, 68100 Alexandroupolis, Greece
14
He Z, Wang P, Liang Y, Fu Z, Ye X. Clinically Available Optical Imaging Technologies in Endoscopic Lesion Detection: Current Status and Future Perspective. Journal of Healthcare Engineering 2021; 2021:7594513. [PMID: 33628407] [PMCID: PMC7886528] [DOI: 10.1155/2021/7594513]
Abstract
Endoscopic optical imaging technologies for the detection and evaluation of dysplasia and early cancer have made great strides in recent decades. With the capacity of in vivo early detection of subtle lesions, they allow modern endoscopists to provide accurate and effective optical diagnosis in real time. This review mainly analyzes the current status of clinically available endoscopic optical imaging techniques, with emphasis on the latest updates of existing techniques. We summarize current coverage of these technologies in major hospital departments such as gastroenterology, urology, gynecology, otolaryngology, pneumology, and laparoscopic surgery. In order to promote a broader understanding, we further cover the underlying principles of these technologies and analyze their performance. Moreover, we provide a brief overview of future perspectives in related technologies, such as computer-assisted diagnosis (CAD) algorithms dealing with exploring endoscopic video data. We believe all these efforts will benefit the healthcare of the community, help endoscopists improve the accuracy of diagnosis, and relieve patients' suffering.
Affiliation(s)
- Zhongyu He: Biosensor National Special Laboratory, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Peng Wang: Biosensor National Special Laboratory, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Yuelong Liang: Department of General Surgery, Sir Run Run Shaw Hospital, College of Medicine, Zhejiang University, Hangzhou 310016, China
- Zuoming Fu: Biosensor National Special Laboratory, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Xuesong Ye: Biosensor National Special Laboratory, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China; State Key Laboratory of CAD and CG, Zhejiang University, Hangzhou 310058, China
15
Misawa M, Kudo SE, Mori Y, Maeda Y, Ogawa Y, Ichimasa K, Kudo T, Wakamura K, Hayashi T, Miyachi H, Baba T, Ishida F, Itoh H, Oda M, Mori K. Current status and future perspective on artificial intelligence for lower endoscopy. Dig Endosc 2021; 33:273-284. [PMID: 32969051] [DOI: 10.1111/den.13847]
Abstract
The global incidence and mortality rate of colorectal cancer remain high. Colonoscopy is regarded as the gold standard examination for detecting and eradicating neoplastic lesions. However, there are some uncertainties in colonoscopy practice that are related to limitations in human performance. First, approximately one-fourth of colorectal neoplasms are missed on a single colonoscopy. Second, it is still difficult for non-experts to perform optical biopsy adequately. Third, recording of some quality indicators (e.g., cecal intubation, bowel preparation, and withdrawal speed) that are related to the adenoma detection rate is sometimes incomplete. With recent improvements in machine learning techniques and advances in computer performance, artificial intelligence-assisted computer-aided diagnosis is being increasingly utilized by endoscopists. In particular, the emergence of deep-learning, data-driven machine learning techniques has made the development of computer-aided systems easier than with conventional machine learning techniques, and deep learning is currently considered the standard artificial intelligence engine of computer-aided diagnosis for colonoscopy. To date, computer-aided detection systems seem to have improved the rate of detection of neoplasms. Additionally, computer-aided characterization systems may have the potential to improve diagnostic accuracy in real-time clinical practice. Furthermore, some artificial intelligence-assisted systems that aim to improve the quality of colonoscopy have been reported. The implementation of computer-aided systems in clinical practice may provide additional benefits such as helping to educate poorly performing endoscopists and supporting real-time clinical decision-making. In this review, we have focused on computer-aided diagnosis during colonoscopy reported by gastroenterologists and discuss its status, limitations, and future prospects.
Affiliation(s)
- Masashi Misawa: Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Shin-Ei Kudo: Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Yuichi Mori: Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan; Clinical Effectiveness Research Group, Institute of Health and Society, University of Oslo, Oslo, Norway
- Yasuharu Maeda: Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Yushi Ogawa: Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Katsuro Ichimasa: Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Toyoki Kudo: Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Kunihiko Wakamura: Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Takemasa Hayashi: Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Hideyuki Miyachi: Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Toshiyuki Baba: Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Fumio Ishida: Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Hayato Itoh: Graduate School of Informatics, Nagoya University, Aichi, Japan
- Masahiro Oda: Graduate School of Informatics, Nagoya University, Aichi, Japan
- Kensaku Mori: Graduate School of Informatics, Nagoya University, Aichi, Japan
16
Patel K, Li K, Tao K, Wang Q, Bansal A, Rastogi A, Wang G. A comparative study on polyp classification using convolutional neural networks. PLoS One 2020; 15:e0236452. [PMID: 32730279] [PMCID: PMC7392235] [DOI: 10.1371/journal.pone.0236452]
Abstract
Colorectal cancer is the third most common cancer diagnosed in both men and women in the United States. Most colorectal cancers start as a growth on the inner lining of the colon or rectum, called a 'polyp'. Not all polyps are cancerous, but some can develop into cancer. Early detection and recognition of the type of polyp is critical to prevent cancer and change outcomes. However, visual classification of polyps is challenging due to the varying illumination conditions of endoscopy, variable texture, appearance, and overlapping morphology between polyps. More importantly, evaluation of polyp patterns by gastroenterologists is subjective, leading to poor agreement among observers. Deep convolutional neural networks have proven very successful in object classification across various object categories. In this work, we compare the performance of state-of-the-art general object classification models for polyp classification. We trained a total of six CNN models end-to-end using a dataset of 157 video sequences composed of two types of polyps: hyperplastic and adenomatous. Our results demonstrate that the state-of-the-art CNN models can successfully classify polyps with an accuracy comparable to or better than that reported among gastroenterologists. The results of this study can guide future research in polyp classification.
Affiliation(s)
- Krushi Patel: School of Engineering, University of Kansas, Lawrence, KS, United States of America
- Kaidong Li: School of Engineering, University of Kansas, Lawrence, KS, United States of America
- Ke Tao: The First Hospital of Jilin University, Changchun, China
- Quan Wang: The First Hospital of Jilin University, Changchun, China
- Ajay Bansal: The University of Kansas Medical Center, Kansas City, KS, United States of America
- Amit Rastogi: The University of Kansas Medical Center, Kansas City, KS, United States of America
- Guanghui Wang: School of Engineering, University of Kansas, Lawrence, KS, United States of America
17
Adenocarcinoma Recognition in Endoscopy Images Using Optimized Convolutional Neural Networks. Applied Sciences (Basel) 2020. [DOI: 10.3390/app10051650]
Abstract
Colonoscopy, which refers to the endoscopic examination of the colon using a camera, is considered the most effective method for the diagnosis of colorectal cancer. Colonoscopy is performed by a medical doctor who visually inspects the colon to find protruding or cancerous polyps. In some situations, these polyps are difficult to find by the human eye, which may lead to a misdiagnosis. In recent years, deep learning has revolutionized the field of computer vision due to its exemplary performance. This study proposes a Convolutional Neural Network (CNN) architecture for classifying colonoscopy images as normal, adenomatous polyp, or adenocarcinoma. The main objective of this study is to aid medical practitioners in the correct diagnosis of colorectal cancer. Our proposed CNN architecture consists of 43 convolutional layers and one fully connected layer. We trained and evaluated our proposed network architecture on a colonoscopy image dataset with 410 test subjects provided by Gachon University Hospital. Our experimental results showed an accuracy of 94.39% over the 410 test subjects.
18
Kudo’s Classification for Colon Polyps Assessment Using a Deep Learning Approach. Applied Sciences (Basel) 2020. [DOI: 10.3390/app10020501]
Abstract
Colorectal cancer (CRC) is the second leading cause of cancer death in the world. This disease can begin as a non-cancerous polyp in the colon; when not treated in a timely manner, such polyps can induce cancer and, in turn, death. We propose a deep learning model for classifying colon polyps based on the Kudo classification schema, using basic colonoscopy equipment. We train a deep convolutional model on a private dataset from the University of Deusto, with and without a VGG model as a feature extractor, and compare the results. We obtained 83% accuracy and an 83% F1-score after fine-tuning our model with the VGG feature extractor. These results show that deep learning algorithms are useful for developing computer-aided tools for early CRC detection, and suggest combining the classifier with a polyp segmentation model for use by specialists.
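Using a VGG model as a frozen feature extractor with a small trainable head, as the abstract describes, is commonly set up as in the tf.keras sketch below. The head layers, input size, and number of Kudo pit-pattern classes are placeholder assumptions rather than the authors' exact configuration.

```python
import tensorflow as tf

def build_kudo_classifier(num_classes=6, input_shape=(224, 224, 3)):
    """Frozen ImageNet VGG16 backbone as a feature extractor plus a small trainable head."""
    base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                       input_shape=input_shape)
    base.trainable = False                                  # VGG filters are used as-is
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    x = tf.keras.layers.Dense(256, activation="relu")(x)
    x = tf.keras.layers.Dropout(0.5)(x)
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(base.input, out)
```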
19
Wickstrøm K, Kampffmeyer M, Jenssen R. Uncertainty and interpretability in convolutional neural networks for semantic segmentation of colorectal polyps. Med Image Anal 2019; 60:101619. [PMID: 31810005] [DOI: 10.1016/j.media.2019.101619]
Abstract
Colorectal polyps are known to be potential precursors of colorectal cancer, which is one of the leading causes of cancer-related deaths on a global scale. Early detection and prevention of colorectal cancer are primarily enabled through manual screenings, in which the intestines of a patient are visually examined. Such a procedure can be challenging and exhausting for the person performing the screening. This has resulted in numerous studies on designing automatic systems aimed at supporting physicians during the examination. Recently, such automatic systems have seen a significant improvement as a result of an increasing amount of publicly available colorectal imagery and advances in deep learning research for object image recognition. Specifically, decision support systems (DSSs) based on Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance on both detection and segmentation of colorectal polyps. However, to be helpful in a medical context, CNN-based models need not only to be precise; interpretability and uncertainty in their predictions must also be well understood. In this paper, we develop and evaluate recent advances in uncertainty estimation and model interpretability in the context of semantic segmentation of polyps from colonoscopy images. Furthermore, we propose a novel method for estimating the uncertainty associated with important features in the input and demonstrate how interpretability and uncertainty can be modeled in DSSs for semantic segmentation of colorectal polyps. Results indicate that deep models utilize the shape and edge information of polyps to make their predictions. Moreover, inaccurate predictions show a higher degree of uncertainty compared to precise predictions.
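One common way to expose the kind of prediction uncertainty the authors discuss is Monte Carlo dropout: keep dropout active at inference and treat the spread of repeated predictions as an uncertainty map. The sketch below assumes a tf.keras segmentation model that contains dropout layers; it illustrates the general technique rather than the specific estimator proposed in the paper.

```python
import numpy as np
import tensorflow as tf

def mc_dropout_uncertainty(model, image, n_samples=20):
    """Run a dropout-equipped segmentation model several times with dropout enabled
    and return the mean per-pixel probability map plus its variance (uncertainty)."""
    batch = tf.expand_dims(image, axis=0)
    samples = np.stack([model(batch, training=True).numpy()[0]   # training=True keeps dropout on
                        for _ in range(n_samples)])
    return samples.mean(axis=0), samples.var(axis=0)
```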
Affiliation(s)
- Kristoffer Wickstrøm: Department of Physics and Technology, UiT The Arctic University of Norway, Tromsø NO-9037, Norway
- Michael Kampffmeyer: Department of Physics and Technology, UiT The Arctic University of Norway, Tromsø NO-9037, Norway
- Robert Jenssen: Department of Physics and Technology, UiT The Arctic University of Norway, Tromsø NO-9037, Norway
20
Ali H, Sharif M, Yasmin M, Rehmani MH, Riaz F. A survey of feature extraction and fusion of deep learning for detection of abnormalities in video endoscopy of gastrointestinal-tract. Artif Intell Rev 2019. [DOI: 10.1007/s10462-019-09743-2]
21
22
Kudo SE, Mori Y, Misawa M, Takeda K, Kudo T, Itoh H, Oda M, Mori K. Artificial intelligence and colonoscopy: Current status and future perspectives. Dig Endosc 2019; 31:363-371. [PMID: 30624835] [DOI: 10.1111/den.13340]
Abstract
BACKGROUND AND AIM: Application of artificial intelligence in medicine is now attracting substantial attention. In the field of gastrointestinal endoscopy, computer-aided diagnosis (CAD) for colonoscopy is the most investigated area, although it is still in the preclinical phase. Because colonoscopy is carried out by humans, it is inherently an imperfect procedure. CAD assistance is expected to improve its quality regarding automated polyp detection and characterization (i.e., predicting the polyp's pathology). It could help prevent endoscopists from missing polyps as well as provide a precise optical diagnosis for those detected. Ultimately, these functions that CAD provides could produce a higher adenoma detection rate and reduce the cost of polypectomy for hyperplastic polyps. METHODS AND RESULTS: Currently, research on automated polyp detection has been limited to experimental assessments using algorithms on ex vivo videos or static images. Performance for clinical use was reported to have >90% sensitivity with acceptable specificity. In contrast, research on automated polyp characterization seems to surpass that for polyp detection. Prospective studies of in vivo use of artificial intelligence technologies have been reported by several groups, some of which showed a >90% negative predictive value for differentiating diminutive (≤5 mm) rectosigmoid adenomas, which exceeded the threshold for optical biopsy. CONCLUSION: We introduce the potential of using CAD for colonoscopy and describe the most recent conditions for regulatory approval of artificial intelligence-assisted medical devices.
Affiliation(s)
- Shin-Ei Kudo: Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Yuichi Mori: Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Masashi Misawa: Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Kenichi Takeda: Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Toyoki Kudo: Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Hayato Itoh: Graduate School of Informatics, Nagoya University, Aichi, Japan
- Masahiro Oda: Graduate School of Informatics, Nagoya University, Aichi, Japan
- Kensaku Mori: Graduate School of Informatics, Nagoya University, Aichi, Japan
23
Wimmer G, Gadermayr M, Wolkersdörfer G, Kwitt R, Tamaki T, Tischendorf J, Häfner M, Yoshida S, Tanaka S, Merhof D, Uhl A. Quest for the best endoscopic imaging modality for computer-assisted colonic polyp staging. World J Gastroenterol 2019; 25:1197-1209. [PMID: 30886503] [PMCID: PMC6421240] [DOI: 10.3748/wjg.v25.i10.1197]
Abstract
BACKGROUND: Previous studies have shown that high-definition endoscopy, high-magnification endoscopy and image enhancement technologies, such as chromoendoscopy and digital chromoendoscopy [narrow-band imaging (NBI), i-Scan], facilitate the detection and classification of colonic polyps during endoscopic sessions. However, there are no comprehensive studies so far that analyze which endoscopic imaging modalities facilitate the automated classification of colonic polyps. In this work, we investigate the impact of endoscopic imaging modalities on the results of computer-assisted diagnosis systems for colonic polyp staging. AIM: To assess which endoscopic imaging modalities are best suited for the computer-assisted staging of colonic polyps. METHODS: In our experiments, we apply twelve state-of-the-art feature extraction methods for the classification of colonic polyps to five endoscopic image databases of colonic lesions. For this purpose, we employ a specifically designed experimental setup to avoid biases in the outcomes caused by differing numbers of images per image database. The image databases were obtained using different imaging modalities. Two databases were obtained by high-definition endoscopy in combination with i-Scan technology (one with chromoendoscopy and one without chromoendoscopy). Three databases were obtained by high-magnification endoscopy (two using narrow-band imaging and one using chromoendoscopy). The lesions are categorized into non-neoplastic and neoplastic according to the histological diagnosis. RESULTS: Generally, it is feature-dependent which imaging modalities achieve high results and which do not. For the high-definition image databases, we achieved overall classification rates of up to 79.2% with chromoendoscopy and 88.9% without chromoendoscopy. In the case of the database obtained by high-magnification chromoendoscopy, the classification rates were up to 81.4%. For the combination of high-magnification endoscopy with NBI, results of up to 97.4% for one database and up to 84% for the other were achieved. Non-neoplastic lesions were generally classified more accurately than neoplastic lesions. It was shown that the image recording conditions highly affect the performance of automated diagnosis systems and can contribute more strongly to the staging results than the imaging modality used. CONCLUSION: Chromoendoscopy has a negative impact on the results of the methods. NBI is better suited than chromoendoscopy. High-definition and high-magnification endoscopy are equally suited.
Affiliation(s)
- Georg Wimmer: Department of Computer Sciences, University of Salzburg, Salzburg 5020, Austria
- Michael Gadermayr: Interdisciplinary Imaging and Vision Institute Aachen, RWTH Aachen, Aachen 52074, Germany
- Gernot Wolkersdörfer: Department of Internal Medicine I, Paracelsus Medical University/Salzburger Landeskliniken (SALK), Salzburg 5020, Austria
- Roland Kwitt: Department of Computer Sciences, University of Salzburg, Salzburg 5020, Austria
- Toru Tamaki: Department of Information Engineering, Graduate School of Engineering, Hiroshima University, Hiroshima 7398527, Japan
- Jens Tischendorf: Internal Medicine and Gastroenterology, University Hospital Aachen, Würselen 52146, Germany
- Michael Häfner: Department of Gastroenterologie and Hepatologie, Krankenhaus St. Elisabeth, Wien 1080, Austria
- Shigeto Yoshida: Department of Endoscopy and Medicine, Graduate School of Biomedical and Health Science, Hiroshima University, Hiroshima 7348551, Japan
- Shinji Tanaka: Department of Endoscopy, Hiroshima University Hospital, Hiroshima 7348551, Japan
- Dorit Merhof: Interdisciplinary Imaging and Vision Institute Aachen, RWTH Aachen, Aachen 52074, Germany
- Andreas Uhl: Department of Computer Sciences, University of Salzburg, Salzburg 5020, Austria
24
Diamantis DE, Iakovidis DK, Koulaouzidis A. Look-behind fully convolutional neural network for computer-aided endoscopy. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2018.12.005]
25
Wimmer G, Gadermayr M, Kwitt R, Häfner M, Tamaki T, Yoshida S, Tanaka S, Merhof D, Uhl A. Training of polyp staging systems using mixed imaging modalities. Comput Biol Med 2018; 102:251-259. [PMID: 29773226] [DOI: 10.1016/j.compbiomed.2018.05.003]
Abstract
BACKGROUND: In medical image datasets, the number of images is usually quite small. The small number of training samples does not allow classifiers to be trained properly, which leads to massive overfitting to the training data. In this work, we investigate whether increasing the number of training samples by merging datasets from different imaging modalities can be effectively applied to improve predictive performance. Further, we investigate whether the features extracted from the employed image representations differ between imaging modalities and whether domain adaptation helps to overcome these differences. METHOD: We employ twelve feature extraction methods to differentiate between non-neoplastic and neoplastic lesions. Experiments are performed using four different classifier training strategies, each with a different combination of training data. The specifically designed setup for these experiments enables a fair comparison between the four training strategies. RESULTS: Combining high-definition with high-magnification training data and chromoscopic with non-chromoscopic training data partly improved the results. The use of domain adaptation has only a small effect on the results compared with using non-adapted training data. CONCLUSION: Merging datasets from different imaging modalities turned out to be partially beneficial when combining high-definition endoscopic data with high-magnification endoscopic data and when combining chromoscopic with non-chromoscopic data. NBI and chromoendoscopy, on the other hand, are mostly too different with respect to the extracted features for images of these two modalities to be combined for classifier training.
Collapse
Affiliation(s)
- Georg Wimmer
- University of Salzburg, Department of Computer Sciences, Jakob Haringerstrasse 2, 5020 Salzburg, Austria.
| | | | - Roland Kwitt
- University of Salzburg, Department of Computer Sciences, Jakob Haringerstrasse 2, 5020 Salzburg, Austria
| | - Michael Häfner
- St. Elisabeth Hospital, Landstraßer Hauptstraße 4a, A-1030 Vienna, Austria
| | - Toru Tamaki
- Hiroshima University, 1-4-1 Kagamiyama, Higashi Hiroshima, Hiroshima 739-8527, Japan
| | - Shigeto Yoshida
- Hiroshima University, 1-4-1 Kagamiyama, Higashi Hiroshima, Hiroshima 739-8527, Japan
| | - Shinji Tanaka
- Hiroshima University, 1-4-1 Kagamiyama, Higashi Hiroshima, Hiroshima 739-8527, Japan
| | - Dorit Merhof
- RWTH Aachen University, Templergraben 55, 52056 Aachen, Germany
| | - Andreas Uhl
- University of Salzburg, Department of Computer Sciences, Jakob Haringerstrasse 2, 5020 Salzburg, Austria.
| |
Collapse
|
26
|
Wimmer G, Vécsei A, Häfner M, Uhl A. Fisher encoding of convolutional neural network features for endoscopic image classification. J Med Imaging (Bellingham) 2018; 5:034504. [PMID: 30840751 PMCID: PMC6152583 DOI: 10.1117/1.jmi.5.3.034504] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2018] [Accepted: 08/21/2018] [Indexed: 12/14/2022] Open
Abstract
We propose an approach for the automated diagnosis of celiac disease (CD) and colonic polyps (CP) based on applying Fisher encoding to the activations of convolutional layers. In our experiments, three different convolutional neural network (CNN) architectures (AlexNet, VGG-f, and VGG-16) are applied to three endoscopic image databases (one CD database and two CP databases). For each network architecture, we perform experiments using a version of the net that is pretrained on the ImageNet database, as well as a version of the net that is trained on a specific endoscopic image database. The Fisher representations of convolutional layer activations are classified using support vector machines. Additionally, experiments are performed by concatenating the Fisher representations of several layers to combine the information of these layers. We will show that our proposed CNN-Fisher approach clearly outperforms other CNN- and non-CNN-based approaches and that our approach requires no training on the target dataset, which results in substantial time savings compared with other CNN-based approaches.
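The following is a minimal sketch of the Fisher-encoding pipeline described above, assuming the local descriptors are the per-position activation vectors of a convolutional feature map; the GMM size, the helper names, and the use of a linear SVM are illustrative choices, not the authors' code.

```python
# Minimal sketch of Fisher encoding of convolutional-layer activations (assumed setup):
# local descriptors are the C-dimensional activation vectors at each spatial position
# of a feature map of shape (C, H, W).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import LinearSVC

def fisher_vector(descriptors, gmm):
    """Improved Fisher vector (mean + variance deviations) for (N, D) local descriptors."""
    q = gmm.predict_proba(descriptors)                          # (N, K) soft assignments
    mu, var, w = gmm.means_, gmm.covariances_, gmm.weights_     # diagonal covariances
    n = descriptors.shape[0]
    diff = (descriptors[:, None, :] - mu[None]) / np.sqrt(var)[None]        # (N, K, D)
    g_mu = (q[:, :, None] * diff).sum(0) / (n * np.sqrt(w)[:, None])
    g_var = (q[:, :, None] * (diff ** 2 - 1)).sum(0) / (n * np.sqrt(2 * w)[:, None])
    fv = np.concatenate([g_mu.ravel(), g_var.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                      # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)                    # L2 normalization

def encode_and_classify(train_maps, y_train, test_maps, n_components=32):
    """train_maps/test_maps: lists of conv feature maps, one (C, H, W) array per image."""
    to_desc = lambda m: m.reshape(m.shape[0], -1).T             # (H*W, C) descriptors
    gmm = GaussianMixture(n_components, covariance_type="diag").fit(
        np.vstack([to_desc(m) for m in train_maps]))
    X_tr = np.stack([fisher_vector(to_desc(m), gmm) for m in train_maps])
    X_te = np.stack([fisher_vector(to_desc(m), gmm) for m in test_maps])
    return LinearSVC().fit(X_tr, y_train).predict(X_te)
```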
Collapse
Affiliation(s)
- Georg Wimmer
- University of Salzburg, Department of Computer Sciences, Salzburg, Austria
| | | | | | - Andreas Uhl
- University of Salzburg, Department of Computer Sciences, Salzburg, Austria
| |
Collapse
|
27
|
Deep learning and conditional random fields-based depth estimation and topographical reconstruction from conventional endoscopy. Med Image Anal 2018; 48:230-243. [PMID: 29990688 DOI: 10.1016/j.media.2018.06.005] [Citation(s) in RCA: 70] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2018] [Revised: 05/04/2018] [Accepted: 06/07/2018] [Indexed: 02/07/2023]
Abstract
Colorectal cancer is the fourth leading cause of cancer deaths worldwide and the second leading cause in the United States. The risk of colorectal cancer can be mitigated by the identification and removal of premalignant lesions through optical colonoscopy. Unfortunately, conventional colonoscopy misses more than 20% of the polyps that should be removed, due in part to poor contrast of lesion topography. Imaging depth and tissue topography during a colonoscopy is difficult because of the size constraints of the endoscope and the deforming mucosa. Most existing methods make unrealistic assumptions that limit accuracy and sensitivity. In this paper, we present a method that avoids these restrictions, using a joint deep convolutional neural network-conditional random field (CNN-CRF) framework for monocular endoscopy depth estimation. The estimated depth is used to reconstruct the topography of the colon surface from a single image. We train the unary and pairwise potential functions of a CRF in a CNN on synthetic data, generated by developing an endoscope camera model and rendering over 200,000 images of an anatomically realistic colon. We validate our approach with real endoscopy images from a porcine colon, transferred to a synthetic-like domain via adversarial training, with ground truth from registered computed tomography measurements. The CNN-CRF approach estimates depths with a relative error of 0.152 for synthetic endoscopy images and 0.242 for real endoscopy images. We show that the estimated depth maps can be used to reconstruct the topography of the mucosa from conventional colonoscopy images. This approach can easily be integrated into existing endoscopy systems and provides a foundation for improving computer-aided detection algorithms for the detection, segmentation, and classification of lesions.
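As a rough illustration only, the sketch below pairs a toy CNN depth regressor with a pairwise smoothness penalty that stands in for the CRF pairwise potentials; the actual system trains CRF potentials inside the CNN on rendered synthetic colons, which this fragment does not attempt to reproduce, and all names are hypothetical.

```python
# Highly simplified sketch (not the authors' network): a small CNN predicts a per-pixel
# depth map, and a pairwise smoothness term mimics the CRF coupling of neighbouring depths.
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),            # one depth value per pixel
        )

    def forward(self, x):
        return self.net(x)

def crf_like_loss(pred, target, smooth_weight=0.1):
    """Unary data term (L1 to rendered ground-truth depth) plus a pairwise smoothness term."""
    unary = (pred - target).abs().mean()
    pairwise = (pred[..., :, 1:] - pred[..., :, :-1]).abs().mean() + \
               (pred[..., 1:, :] - pred[..., :-1, :]).abs().mean()
    return unary + smooth_weight * pairwise

# Hypothetical usage on a batch of synthetic endoscopy frames and rendered depth maps.
model = TinyDepthNet()
frames, depths = torch.rand(4, 3, 128, 128), torch.rand(4, 1, 128, 128)
loss = crf_like_loss(model(frames), depths)
loss.backward()
```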
Collapse
|
28
|
van der Sommen F, Curvers WL, Nagengast WB. Novel Developments in Endoscopic Mucosal Imaging. Gastroenterology 2018; 154:1876-1886. [PMID: 29462601 DOI: 10.1053/j.gastro.2018.01.070] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/07/2017] [Revised: 12/28/2017] [Accepted: 01/06/2018] [Indexed: 12/20/2022]
Abstract
Endoscopic techniques such as high-definition endoscopy and optical chromoendoscopy have had an enormous impact on endoscopy practice. Since these techniques allow assessment of even subtle morphological mucosal abnormalities, further improvements in endoscopic practice lie in increasing the detection efficacy of endoscopists. Several new developments could assist in this. First, web-based training tools could improve the skills of the endoscopist for enhancing the detection and classification of lesions. Second, incorporation of computer-aided detection will be the next step to raise the endoscopic quality of the captured data. These systems will aid the endoscopist in interpreting the increasing amount of visual information in endoscopic images by providing a real-time, objective second reading. In addition, developments in the field of molecular imaging open opportunities to add functional imaging data of the gastrointestinal tract, visualizing biological parameters, to white-light morphology imaging. For the successful implementation of the above-mentioned techniques, a true multidisciplinary approach is of vital importance.
Collapse
Affiliation(s)
- Fons van der Sommen
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
| | - Wouter L Curvers
- Department of Gastroenterology and Hepatology, Catharina Hospital, Eindhoven, The Netherlands
| | - Wouter B Nagengast
- Department of Gastroenterology and Hepatology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands.
| |
Collapse
|
29
|
Evaluation of i-Scan Virtual Chromoendoscopy and Traditional Chromoendoscopy for the Automated Diagnosis of Colonic Polyps. 2017. [DOI: 10.1007/978-3-319-54057-3_6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
|
30
|
Exploring Deep Learning and Transfer Learning for Colonic Polyp Classification. Comput Math Methods Med 2016; 2016:6584725. [PMID: 27847543 PMCID: PMC5101370 DOI: 10.1155/2016/6584725] [Citation(s) in RCA: 81] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/10/2016] [Accepted: 10/04/2016] [Indexed: 12/26/2022]
Abstract
Recently, Deep Learning, especially through Convolutional Neural Networks (CNNs), has been widely used to enable the extraction of highly representative features. This is done among the network layers by filtering and selecting these features and then using them in the last fully connected layers for pattern classification. However, CNN training for automated endoscopic image classification still poses a challenge due to the lack of large, publicly available annotated databases. In this work we explore Deep Learning for the automated classification of colonic polyps using different configurations for training CNNs from scratch (or full training) and distinct architectures of pretrained CNNs tested on 8 HD-endoscopic image databases acquired using different modalities. We compare our results with some commonly used features for colonic polyp classification, and the good results suggest that features learned by CNNs trained from scratch and the “off-the-shelf” CNN features can be highly relevant for the automated classification of colonic polyps. Moreover, we also show that the combination of classical features and “off-the-shelf” CNN features can be a good approach to further improve the results.
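A hedged sketch of the “off-the-shelf” variant described above: an ImageNet-pretrained CNN is used as a fixed feature extractor and a linear SVM is trained on the pooled activations. The torchvision model choice, its recent weights API, and the commented data names are assumptions; the study evaluated several architectures on its own endoscopic databases.

```python
# Sketch of the "off-the-shelf" strategy (assumed setup): an ImageNet-pretrained CNN
# serves as a fixed feature extractor; a conventional classifier is trained on top.
import torch
import torchvision.models as models
from sklearn.svm import LinearSVC

weights = models.VGG16_Weights.IMAGENET1K_V1          # requires a recent torchvision
backbone = models.vgg16(weights=weights).eval()
preprocess = weights.transforms()                      # resizing + ImageNet normalization

@torch.no_grad()
def off_the_shelf_features(pil_images):
    """Return one pooled convolutional feature vector per input PIL image."""
    batch = torch.stack([preprocess(img) for img in pil_images])
    feats = backbone.features(batch)                   # convolutional activations
    feats = backbone.avgpool(feats).flatten(1)         # (B, 512 * 7 * 7)
    return feats.numpy()

# train_images / train_labels are hypothetical, e.g. HD-endoscopy polyp patches.
# clf = LinearSVC().fit(off_the_shelf_features(train_images), train_labels)
```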
Collapse
|
31
|
Gadermayr M, Kogler H, Karla M, Merhof D, Uhl A, Vécsei A. Computer-aided texture analysis combined with experts' knowledge: Improving endoscopic celiac disease diagnosis. World J Gastroenterol 2016; 22:7124-7134. [PMID: 27610022 PMCID: PMC4988309 DOI: 10.3748/wjg.v22.i31.7124] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/15/2016] [Revised: 04/28/2016] [Accepted: 05/23/2016] [Indexed: 02/06/2023] Open
Abstract
AIM: To further improve the endoscopic detection of intestinal mucosa alterations due to celiac disease (CD).
METHODS: We assessed a hybrid approach based on the integration of expert knowledge into the computer-based classification pipeline. A total of 2835 endoscopic images from the duodenum were recorded in 290 children using the modified immersion technique (MIT). These children underwent routine upper endoscopy for suspected CD or non-celiac upper abdominal symptoms between August 2008 and December 2014. Blinded to the clinical data and biopsy results, three medical experts visually classified each image as normal mucosa (Marsh-0) or villous atrophy (Marsh-3). The experts’ decisions were further integrated into state-of-the-art texture recognition systems. Using the biopsy results as the reference standard, the classification accuracies of this hybrid approach were compared to the experts’ diagnoses in 27 different settings.
RESULTS: Compared to the experts’ diagnoses, in 24 of 27 classification settings (consisting of three imaging modalities, three endoscopists and three classification approaches), the best overall classification accuracies were obtained with the new hybrid approach. In 17 of 24 classification settings, the improvements achieved with the hybrid approach were statistically significant (P < 0.05). Using the hybrid approach, classification accuracies between 94% and 100% were obtained. Whereas the improvements are only moderate in the case of the most experienced expert, the results of the less experienced expert could be improved significantly in 17 out of 18 classification settings. Furthermore, the lowest classification accuracy, based on the combination of one database and one specific expert, could be improved from 80% to 95% (P < 0.001).
CONCLUSION: The overall classification performance of medical experts, especially less experienced experts, can be boosted significantly by integrating expert knowledge into computer-aided diagnosis systems.
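One simple way to realize the hybrid idea in this abstract is to append each expert's visual Marsh-0/Marsh-3 call to the texture feature vector before classifier training; the snippet below is only such an illustrative sketch with hypothetical array names, not the specific integration strategies evaluated in the study.

```python
# Illustrative sketch: integrate an expert's visual decision into a texture-based
# classifier by appending it as an extra feature dimension.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def hybrid_features(texture_features, expert_decisions):
    """texture_features: (N, D) descriptors; expert_decisions: (N,) 0/1 visual calls."""
    return np.hstack([texture_features, expert_decisions.reshape(-1, 1)])

# X_tex, expert_calls, y_biopsy are hypothetical arrays (texture features, expert
# Marsh-0/Marsh-3 calls, biopsy-confirmed labels).
# acc = cross_val_score(SVC(kernel="rbf"),
#                       hybrid_features(X_tex, expert_calls), y_biopsy, cv=5).mean()
```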
Collapse
|
32
|
Wimmer G, Tamaki T, Tischendorf JJW, Häfner M, Yoshida S, Tanaka S, Uhl A. Directional wavelet based features for colonic polyp classification. Med Image Anal 2016; 31:16-36. [PMID: 26948110 DOI: 10.1016/j.media.2016.02.001] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2015] [Revised: 02/08/2016] [Accepted: 02/09/2016] [Indexed: 01/27/2023]
Abstract
In this work, various wavelet-based methods such as the discrete wavelet transform, the dual-tree complex wavelet transform, the Gabor wavelet transform, curvelets, contourlets and shearlets are applied for the automated classification of colonic polyps. The methods are tested on 8 HD-endoscopic image databases, where each database is acquired using different imaging modalities (Pentax's i-Scan technology combined with or without staining the mucosa), 2 NBI high-magnification databases and one database with chromoscopy high-magnification images. To evaluate the suitability of the wavelet-based methods for the classification of colonic polyps, the classification performances of 3 wavelet transforms and the more recent curvelets, contourlets and shearlets are compared using a common framework. Wavelet transforms have already been applied frequently and successfully to the classification of colonic polyps, whereas curvelets, contourlets and shearlets have not been used for this purpose so far. We apply different feature extraction techniques to extract the information of the subbands of the wavelet-based methods. Most of the 25 approaches were already published in different texture classification contexts; thus, the aim is also to assess and compare their classification performance using a common framework. Three of the 25 approaches are novel: these three extract Weibull features from the subbands of curvelets, contourlets and shearlets. Additionally, 5 state-of-the-art non-wavelet-based methods are applied to our databases so that we can compare their results with those of the wavelet-based methods. It turned out that extracting Weibull distribution parameters from the subband coefficients generally leads to high classification performance, especially for the dual-tree complex wavelet transform, the Gabor wavelet transform and the shearlet transform. These three wavelet-based transforms in combination with Weibull features even outperform the state-of-the-art methods on most of the databases. We also show that the Weibull distribution is better suited to model the subband coefficient distribution than other commonly used probability distributions such as the Gaussian distribution and the generalized Gaussian distribution. This work thus gives a reasonable summary of wavelet-based methods for colonic polyp classification, and the large number of endoscopic polyp databases used in our experiments assures high significance of the achieved results.
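A minimal sketch of the Weibull-feature idea, using a plain discrete wavelet transform from PyWavelets as a stand-in for the dual-tree complex wavelet, Gabor and shearlet transforms that performed best in the paper; the wavelet choice, decomposition depth and classifier are assumptions.

```python
# Minimal sketch of Weibull features from wavelet subbands: fit a Weibull distribution
# to the coefficient magnitudes of each detail subband and use (shape, scale) as features.
import numpy as np
import pywt
from scipy.stats import weibull_min
from sklearn.svm import SVC

def weibull_wavelet_features(gray_image, wavelet="db2", levels=3):
    """Return [shape, scale] per detail subband of a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(gray_image, wavelet, level=levels)
    feats = []
    for detail_level in coeffs[1:]:                 # skip the approximation band
        for band in detail_level:                   # horizontal, vertical, diagonal
            mags = np.abs(band).ravel() + 1e-8      # Weibull support requires x > 0
            shape, _, scale = weibull_min.fit(mags, floc=0)
            feats.extend([shape, scale])
    return np.array(feats)

# images / labels are hypothetical grayscale NBI or chromoendoscopy polyp patches.
# X = np.stack([weibull_wavelet_features(im) for im in images])
# clf = SVC(kernel="rbf").fit(X, labels)
```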
Collapse
Affiliation(s)
- Georg Wimmer
- University of Salzburg, Department of Computer Sciences, Jakob Haringerstrasse 2, 5020 Salzburg, Austria.
| | - Toru Tamaki
- Hiroshima University, Department of Information Engineering, Graduate School of Engineering, 1-4-1 Kagamiyama, Higashi-hiroshima, Hiroshima 739-8527, Japan
| | - J J W Tischendorf
- Medical Department III (Gastroenterology, Hepatology and Metabolic Diseases), RWTH Aachen University Hospital, Pauwelsstr. 30, 52072 Aachen, Germany
| | - Michael Häfner
- St. Elisabeth Hospital, Landstraßer Hauptstraße 4a, A-1030 Vienna, Austria
| | - Shigeto Yoshida
- Hiroshima University Hospital, Department of Endoscopy, 1-2-3 Kasumi, Minami-ku, Hiroshima 734-8551, Japan
| | - Shinji Tanaka
- Hiroshima University Hospital, Department of Endoscopy, 1-2-3 Kasumi, Minami-ku, Hiroshima 734-8551, Japan
| | - Andreas Uhl
- University of Salzburg, Department of Computer Sciences, Jakob Haringerstrasse 2, 5020 Salzburg, Austria.
| |
Collapse
|