1
Jiang Q, Yu Y, Ren Y, Li S, He X. A review of deep learning methods for gastrointestinal diseases classification applied in computer-aided diagnosis system. Med Biol Eng Comput 2025; 63:293-320. [PMID: 39343842] [DOI: 10.1007/s11517-024-03203-y]
Abstract
Recent advancements in deep learning have significantly improved the intelligent classification of gastrointestinal (GI) diseases, particularly in aiding clinical diagnosis. This paper reviews computer-aided diagnosis (CAD) systems for GI diseases in alignment with the actual clinical diagnostic process. It offers a comprehensive survey of deep learning (DL) techniques tailored for classifying GI diseases, addressing challenges inherent in complex scenes, clinical constraints, and technical obstacles encountered in GI imaging. First, the esophagus, stomach, small intestine, and large intestine are localized to determine which organ contains the lesion. Second, detection and classification of a single disease are performed given the organ to which the image corresponds. Finally, comprehensive classification across multiple diseases is carried out. The single- and multi-class results are compared to achieve more accurate classification outcomes and to construct a more effective computer-aided diagnosis system for gastrointestinal diseases.
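The organ-first, then disease-level pipeline described above can be sketched as a two-stage routing scheme. This is a minimal illustration with hypothetical organ/disease labels and stubbed classifier scores, not the review's implementation:

```python
# Hypothetical two-stage CAD routing: first identify the organ, then apply
# an organ-specific disease classifier. Classifier outputs are stubbed as
# fixed score dictionaries purely for illustration.

def argmax(scores):
    """Return the label with the highest score."""
    return max(scores, key=scores.get)

def classify_organ(image):
    # Stub: a real system would run a trained organ classifier here.
    return argmax({"esophagus": 0.05, "stomach": 0.85,
                   "small_intestine": 0.06, "large_intestine": 0.04})

DISEASE_HEADS = {
    # One disease classifier per organ (stubbed scores, hypothetical labels).
    "stomach": lambda img: argmax({"gastritis": 0.7, "ulcer": 0.2, "normal": 0.1}),
    "large_intestine": lambda img: argmax({"polyp": 0.6, "colitis": 0.3, "normal": 0.1}),
}

def diagnose(image):
    organ = classify_organ(image)          # stage 1: organ localization
    disease = DISEASE_HEADS[organ](image)  # stage 2: organ-specific disease head
    return organ, disease

print(diagnose(image=None))  # ('stomach', 'gastritis')
```

Routing each image through an organ-specific head mirrors the review's premise that single-disease classification assumes the organ is already known.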
Affiliation(s)
- Qianru Jiang
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, Zhejiang, P.R. China
- Yulin Yu
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, Zhejiang, P.R. China
- Yipei Ren
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, Zhejiang, P.R. China
- Sheng Li
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, Zhejiang, P.R. China
- Xiongxiong He
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, Zhejiang, P.R. China.
2
Yousef AM, Deliyski DD, Zacharias SRC, Naghibolhosseini M. Detection of Vocal Fold Image Obstructions in High-Speed Videoendoscopy During Connected Speech in Adductor Spasmodic Dysphonia: A Convolutional Neural Networks Approach. J Voice 2024; 38:951-962. [PMID: 35304042] [PMCID: PMC9474736] [DOI: 10.1016/j.jvoice.2022.01.028]
Abstract
OBJECTIVE Adductor spasmodic dysphonia (AdSD) is a neurogenic voice disorder affecting intrinsic laryngeal muscle control. AdSD leads to involuntary laryngeal spasms and manifests only during connected speech. Laryngeal high-speed videoendoscopy (HSV) coupled with a flexible fiberoptic endoscope provides a unique opportunity to study voice production and visualize vocal fold vibrations in AdSD during speech. The goal of this study is to automatically detect instances during which the image of the vocal folds is optically obstructed in HSV recordings obtained during connected speech. METHODS HSV data were recorded from vocally normal adults and patients with AdSD during reading of the "Rainbow Passage", six CAPE-V sentences, and production of the vowel /i/. A convolutional neural network was developed and trained as a classifier to detect obstructed/unobstructed vocal folds in HSV frames. Manually labelled data were used for training, validating, and testing the network. Moreover, a comprehensive robustness evaluation was conducted to compare the performance of the developed classifier against visual analysis of HSV data. RESULTS The developed convolutional neural network automatically detected vocal fold obstructions in HSV data from vocally normal participants and AdSD patients. The trained network was tested successfully, showing an overall classification accuracy of 94.18% on the testing dataset. The robustness evaluation showed an average overall accuracy of 94.81% on a very large number of HSV frames, demonstrating the high robustness of the introduced technique while maintaining a high level of accuracy. CONCLUSIONS The proposed approach can be used for efficient analysis of HSV data to study laryngeal maneuvers in patients with AdSD during connected speech. Additionally, this method will facilitate the development of vocal fold vibratory measures for HSV frames with an unobstructed view of the vocal folds. Identifying the parts of connected speech that provide an unobstructed view of the vocal folds can also guide the development of optimal passages for precise HSV examination during connected speech and of subject-specific clinical voice assessment protocols.
Affiliation(s)
- Ahmed M Yousef
- Department of Communicative Sciences and Disorders, Michigan State University, East Lansing, Michigan
- Dimitar D Deliyski
- Department of Communicative Sciences and Disorders, Michigan State University, East Lansing, Michigan
- Stephanie R C Zacharias
- Head and Neck Regenerative Medicine Program, Mayo Clinic, Scottsdale, Arizona; Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Phoenix, Arizona
- Maryam Naghibolhosseini
- Department of Communicative Sciences and Disorders, Michigan State University, East Lansing, Michigan.
3
Wan L, Chen Z, Xiao Y, Zhao J, Feng W, Fu H. Iterative feedback-based models for image and video polyp segmentation. Comput Biol Med 2024; 177:108569. [PMID: 38781640] [DOI: 10.1016/j.compbiomed.2024.108569]
Abstract
Accurate segmentation of polyps in colonoscopy images has gained significant attention in recent years, given its crucial role in automated colorectal cancer diagnosis. Many existing deep learning-based methods follow a one-stage processing pipeline, often involving feature fusion across different levels or boundary-related attention mechanisms. Drawing on the success of applying Iterative Feedback Units (IFU) in image polyp segmentation, this paper proposes FlowICBNet, extending the IFU to the domain of video polyp segmentation. By harnessing the unique capability of the IFU to propagate and refine past segmentation results, our method proves effective in mitigating challenges linked to the inherent limitations of endoscopic imaging, notably frequent camera shake and frame defocusing. Furthermore, in FlowICBNet we introduce two pivotal modules: Reference Frame Selection (RFS) and Flow Guided Warping (FGW). These modules play a crucial role in filtering and selecting the most suitable historical reference frames for the task at hand. The experimental results on a large video polyp segmentation dataset demonstrate that our method significantly outperforms state-of-the-art methods by notable margins, achieving an average metric improvement of 7.5% on SUN-SEG-Easy and 7.4% on SUN-SEG-Hard. Our code is available at https://github.com/eraserNut/ICBNet.
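The idea behind flow-guided warping, propagating a previous frame's segmentation along an estimated motion field, can be sketched as follows. This is a toy illustration using a single constant flow vector and nearest-neighbor placement, not the FlowICBNet module itself (a real FGW module would use a dense learned flow field):

```python
# Minimal sketch of flow-guided mask warping: move each foreground pixel of
# the previous frame's segmentation mask along a (dy, dx) flow vector so it
# lines up with the current frame. Pixels warped out of bounds are dropped.

def warp_mask(mask, flow):
    """Warp a binary mask (list of lists) by a constant (dy, dx) flow."""
    h, w = len(mask), len(mask[0])
    dy, dx = flow
    warped = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    warped[ny][nx] = 1
    return warped

# A 2x2 polyp region shifts one pixel down and right between frames.
prev_mask = [[0, 0, 0, 0],
             [0, 1, 1, 0],
             [0, 1, 1, 0],
             [0, 0, 0, 0]]
print(warp_mask(prev_mask, (1, 1)))
```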
Affiliation(s)
- Liang Wan
- College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China.
- Zhihao Chen
- College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China.
- Yefan Xiao
- College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China.
- Junting Zhao
- College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China.
- Wei Feng
- College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China.
- Huazhu Fu
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), Singapore, 138632, Republic of Singapore.
4
Li H, Liu D, Zeng Y, Liu S, Gan T, Rao N, Yang J, Zeng B. Single-Image-Based Deep Learning for Segmentation of Early Esophageal Cancer Lesions. IEEE Trans Image Process 2024; 33:2676-2688. [PMID: 38530733] [DOI: 10.1109/tip.2024.3379902]
Abstract
Accurate segmentation of lesions is crucial for the diagnosis and treatment of early esophageal cancer (EEC). However, neither traditional nor deep learning-based methods to date can meet clinical requirements, with the mean Dice score (the most important metric in medical image analysis) hardly exceeding 0.75. In this paper, we present a novel deep learning approach for segmenting EEC lesions. Our method stands out for its uniqueness, as it relies solely on a single input image from a patient, forming the so-called "You-Only-Have-One" (YOHO) framework. On one hand, this "one-image-one-network" learning ensures complete patient privacy, as it does not use any images from other patients as training data. On the other hand, it avoids nearly all generalization-related problems, since each trained network is applied only to the same input image itself. In particular, we can push the training toward "over-fitting" as much as possible to increase segmentation accuracy. Our technical contributions include an interaction with clinical doctors to utilize their expertise, a geometry-based data augmentation over a single lesion image to generate the training dataset (the biggest novelty), and an edge-enhanced UNet. We have evaluated YOHO on an EEC dataset collected by ourselves and achieved a mean Dice score of 0.888, much higher than existing deep learning methods achieve, thus representing a significant advance toward clinical applications. The code and dataset are available at: https://github.com/lhaippp/YOHO.
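Geometry-based augmentation of a single lesion image can be illustrated with the eight dihedral transforms (four rotations, each optionally mirrored). The exact transform set used by YOHO may differ, so treat this as a sketch of the idea of generating a training set from one image:

```python
# Generate geometric variants of one image: the eight dihedral transforms
# (4 rotations x optional left-right mirror). Images are 2D lists-of-lists.

def rot90(img):
    """Rotate a 2D list-of-lists image 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*img)][::-1]

def flip_h(img):
    """Mirror the image left-right."""
    return [row[::-1] for row in img]

def dihedral_augment(img):
    """Return the 8 dihedral variants of img."""
    out = []
    cur = img
    for _ in range(4):
        out.append(cur)
        out.append(flip_h(cur))
        cur = rot90(cur)
    return out

image = [[1, 2],
         [3, 4]]
variants = dihedral_augment(image)
print(len(variants))  # 8
```

From a single annotated image this yields eight geometrically consistent training samples; real pipelines would add elastic deformations, crops, and photometric jitter on top.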
5
Sakamoto K, Hiraoka SI, Kawamura K, Ruan P, Uchida S, Akiyama R, Lee C, Ide K, Tanaka S. Automated evaluation of masseter muscle volume: deep learning prognostic approach in oral cancer. BMC Cancer 2024; 24:128. [PMID: 38267924] [PMCID: PMC10809430] [DOI: 10.1186/s12885-024-11873-y]
Abstract
BACKGROUND Sarcopenia has been identified as a potential negative prognostic factor in cancer patients. In this study, our objective was to investigate the relationship between sarcopenia, assessed via the masseter muscle volume measured on computed tomography (CT) images, and the life expectancy of patients with oral cancer. We also developed a deep learning model to automatically extract the masseter muscle volume and investigated its association with the life expectancy of oral cancer patients. METHODS To develop the learning model for masseter muscle volume, we used manually extracted data from CT images of 277 patients. We established the association between manually extracted masseter muscle volume and the life expectancy of oral cancer patients. Additionally, we compared the correlation between the manually extracted volumes and those produced by the automatic extraction model. RESULTS Our findings revealed a significant association between manually extracted masseter muscle volume on CT images and the life expectancy of patients with oral cancer. Notably, the manually and automatically extracted volumes showed a high correlation. Furthermore, the masseter muscle volume automatically extracted using the developed learning model exhibited a strong association with life expectancy. CONCLUSIONS This sarcopenia assessment method is useful for predicting the life expectancy of patients with oral cancer. In the future, it will be crucial to validate and analyze various factors within the oral surgery field, extending beyond cancer patients.
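Once the masseter is segmented on each CT slice, the volume follows from the foreground voxel count and the voxel dimensions. A minimal sketch (the spacing values below are illustrative, not from the paper):

```python
# Compute muscle volume from a stack of binary segmentation slices:
# volume = (number of foreground voxels) x (volume of one voxel), where
# voxel volume comes from the CT in-plane pixel spacing and slice thickness.

def mask_volume_mm3(slices, pixel_spacing_mm=(0.5, 0.5), slice_thickness_mm=1.0):
    """slices: list of 2D binary masks (lists of lists of 0/1)."""
    voxel_mm3 = pixel_spacing_mm[0] * pixel_spacing_mm[1] * slice_thickness_mm
    n_voxels = sum(v for s in slices for row in s for v in row)
    return n_voxels * voxel_mm3

# Two toy 2x2 slices with 3 foreground voxels each.
slices = [
    [[0, 1], [1, 1]],
    [[1, 1], [1, 0]],
]
print(mask_volume_mm3(slices))  # 6 voxels x 0.25 mm^3 = 1.5
```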
Affiliation(s)
- Katsuya Sakamoto
- Department of Oral and Maxillofacial Surgery, Graduate School of Dentistry, Osaka University, 1-8 Yamada-Oka, 565-0871, Suita, Osaka, Japan
- Shin-Ichiro Hiraoka
- Department of Oral and Maxillofacial Surgery, Graduate School of Dentistry, Osaka University, 1-8 Yamada-Oka, 565-0871, Suita, Osaka, Japan.
- Kohei Kawamura
- Department of Oral and Maxillofacial Surgery, Graduate School of Dentistry, Osaka University, 1-8 Yamada-Oka, 565-0871, Suita, Osaka, Japan
- Peiying Ruan
- NVIDIA AI Technology Center, NVIDIA Japan, 12F ATT New Tower, 2-11-7, Akasaka, Minato-ku, 107-0052, Tokyo, Japan
- Shuji Uchida
- Department of Oral and Maxillofacial Surgery, Graduate School of Dentistry, Osaka University, 1-8 Yamada-Oka, 565-0871, Suita, Osaka, Japan
- Ryo Akiyama
- Department of Oral and Maxillofacial Surgery, Graduate School of Dentistry, Osaka University, 1-8 Yamada-Oka, 565-0871, Suita, Osaka, Japan
- Chonho Lee
- Cybermedia Center, Osaka University, 5-1 Mihogaoka, 567-0047, Ibaraki city, Osaka, Japan
- Kazuki Ide
- Division of Scientific Information and Public Policy, Center for Infectious Disease Education and Research, Research Center on Ethical, Legal and Social Issues, Osaka University, Techno-Alliance Building C 208, 2-8 Yamadaoka, 565-0871, Suita, Osaka, Japan
- Susumu Tanaka
- Department of Oral and Maxillofacial Surgery, Graduate School of Dentistry, Osaka University, 1-8 Yamada-Oka, 565-0871, Suita, Osaka, Japan
6
Bordbar M, Helfroush MS, Danyali H, Ejtehadi F. Wireless capsule endoscopy multiclass classification using three-dimensional deep convolutional neural network model. Biomed Eng Online 2023; 22:124. [PMID: 38098015] [PMCID: PMC10722702] [DOI: 10.1186/s12938-023-01186-9]
Abstract
BACKGROUND Wireless capsule endoscopy (WCE) is a patient-friendly and non-invasive technology that scans the whole of the gastrointestinal tract, including difficult-to-access regions like the small bowel. A major drawback of this technology is that visual inspection of the large number of video frames produced during each examination makes the physician's diagnostic process tedious and error-prone. Several computer-aided diagnosis (CAD) systems, such as deep network models, have been developed for the automatic recognition of abnormalities in WCE frames. Nevertheless, most of these studies have focused only on spatial information within individual WCE frames, missing the crucial temporal information within consecutive frames. METHODS In this article, an automatic multiclass classification system based on a three-dimensional deep convolutional neural network (3D-CNN) is proposed, which utilizes spatiotemporal information to facilitate the WCE diagnosis process. The 3D-CNN model is fed a series of sequential WCE frames, in contrast to the two-dimensional (2D) model, which treats frames independently. Moreover, the proposed 3D deep model is compared with several pre-trained networks. The proposed models are trained and evaluated with WCE videos from 29 subjects (14,691 frames before augmentation). The performance advantages of the 3D-CNN over the 2D-CNN and pre-trained networks are verified in terms of sensitivity, specificity, and accuracy. RESULTS The 3D-CNN outperforms the 2D technique on all evaluation metrics (sensitivity: 98.92 vs. 98.05, specificity: 99.50 vs. 86.94, accuracy: 99.20 vs. 92.60). CONCLUSION A novel 3D-CNN model for lesion detection in WCE frames is proposed, and the results indicate its advantage over the 2D-CNN and several well-known pre-trained classifier networks. The proposed 3D-CNN model uses the rich temporal information in adjacent frames, as well as spatial data, to develop an accurate and efficient model.
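The spatial-versus-spatiotemporal distinction can be made concrete with a toy valid-mode 3D cross-correlation: the filter spans (time, height, width), so each output value mixes information from consecutive frames, which a 2D filter applied per frame cannot do. Shapes and values here are illustrative only:

```python
# Toy valid-mode 3D cross-correlation with a cubic averaging kernel.
# video is indexed as [time][height][width]; a 2D model would apply the
# same spatial filter to each frame separately and never mix frames.

def conv3d_valid(video, k):
    """video: [T][H][W] floats, k: cubic kernel size. Returns valid output."""
    T, H, W = len(video), len(video[0]), len(video[0][0])
    out = []
    for t in range(T - k + 1):
        frame = []
        for y in range(H - k + 1):
            row = []
            for x in range(W - k + 1):
                s = sum(video[t + dt][y + dy][x + dx]
                        for dt in range(k) for dy in range(k) for dx in range(k))
                row.append(s / k ** 3)  # averaging kernel
            frame.append(row)
        out.append(frame)
    return out

# Two consecutive 2x2 frames; a 2x2x2 averaging filter pools across both,
# so the single output blends the two frames' complementary patterns.
video = [[[0.0, 1.0], [0.0, 1.0]],
         [[1.0, 0.0], [1.0, 0.0]]]
print(conv3d_valid(video, 2))  # [[[0.5]]]
```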
Affiliation(s)
- Mehrdokht Bordbar
- Department of Electrical Engineering, Shiraz University of Technology, Shiraz, Iran
- Habibollah Danyali
- Department of Electrical Engineering, Shiraz University of Technology, Shiraz, Iran
- Fardad Ejtehadi
- Department of Internal Medicine, Gastroenterohepatology Research Center, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
7
Huang Y, Ding X, Zhao Y, Tian X, Feng G, Gao Z. Automatic detection and segmentation of chorda tympani under microscopic vision in otosclerosis patients via convolutional neural networks. Int J Med Robot 2023; 19:e2567. [PMID: 37634074] [DOI: 10.1002/rcs.2567]
Abstract
BACKGROUND Artificial intelligence (AI) techniques, especially deep learning (DL) techniques, have shown promising results for various computer vision tasks in the field of surgery. However, AI-guided navigation during microscopic surgery for real-time surgical guidance and decision support is much more complex, and its efficacy has yet to be demonstrated. We propose a model dedicated to the evaluation of DL-based semantic segmentation of the chorda tympani (CT) during microscopic surgery. METHODS Various convolutional neural networks were constructed, trained, and validated for semantic segmentation of the CT. Our dataset comprises 5817 annotated images from 36 patients, randomly split into a training set (90%, 5236 images) and a validation set (10%, 581 images). In addition, 1500 raw images from 3 patients (500 images randomly selected per patient) were used to evaluate network performance. RESULTS When evaluated on the validation set (581 images), our proposed CT detection networks achieved strong performance, with the modified U-Net performing best (mIoU = 0.892, mPA = 0.9427). Moreover, when applying the U-Net to the test set (1500 raw images from 3 patients), the method also showed strong overall performance (accuracy = 0.976, precision = 0.996, sensitivity = 0.979, specificity = 0.902). CONCLUSIONS This study suggests that DL can be used for automated detection and segmentation of the CT in patients with otosclerosis during microscopic surgery with a high degree of performance. Our research validates the potential feasibility of future vision-based surgical navigation assistance and autonomous surgery using AI.
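The mIoU metric reported above is the per-class intersection-over-union averaged across classes. A minimal sketch on flat label lists with toy values (real evaluation would aggregate over full 2D masks):

```python
# Mean intersection-over-union (mIoU) for semantic segmentation.
# pred and gt are flat lists of per-pixel class labels.

def iou(pred, gt, cls):
    """IoU for one class: |pred==cls AND gt==cls| / |pred==cls OR gt==cls|."""
    inter = sum(1 for p, g in zip(pred, gt) if p == cls and g == cls)
    union = sum(1 for p, g in zip(pred, gt) if p == cls or g == cls)
    return inter / union if union else 1.0  # absent class counts as perfect

def mean_iou(pred, gt, classes):
    return sum(iou(pred, gt, c) for c in classes) / len(classes)

pred = [0, 0, 1, 1, 1, 0]
gt   = [0, 1, 1, 1, 0, 0]
print(mean_iou(pred, gt, classes=[0, 1]))  # 0.5
```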
Affiliation(s)
- Yu Huang
- Department of Otorhinolaryngology Head and Neck Surgery, the Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xin Ding
- Department of Otorhinolaryngology Head and Neck Surgery, the Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yang Zhao
- Department of Otorhinolaryngology Head and Neck Surgery, the Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xu Tian
- Department of Otorhinolaryngology Head and Neck Surgery, the Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Guodong Feng
- Department of Otorhinolaryngology Head and Neck Surgery, the Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Zhiqiang Gao
- Department of Otorhinolaryngology Head and Neck Surgery, the Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
8
Sui D, Liu W, Zhang Y, Li Y, Luo G, Wang K, Guo M. ColonNet: A novel polyp segmentation framework based on LK-RFB and GPPD. Comput Biol Med 2023; 166:107541. [PMID: 37804779] [DOI: 10.1016/j.compbiomed.2023.107541]
Abstract
Colorectal cancer (CRC) holds the distinction of being the most prevalent malignant tumor affecting the digestive system. It is a formidable global health challenge, ranking as the fourth leading cause of cancer-related fatalities around the world. Despite considerable advancements in comprehending and addressing CRC, the likelihood of recurring tumors and metastasis remains a major cause of high morbidity and mortality rates during treatment. Currently, colonoscopy is the predominant method for CRC screening, and artificial intelligence has emerged as a promising tool for aiding polyp diagnosis. Unfortunately, most segmentation methods face challenges in terms of limited accuracy and generalization to different datasets; in particular, slow processing and analysis speed has become a major obstacle. In this study, we propose a fast and efficient polyp segmentation framework based on the Large-Kernel Receptive Field Block (LK-RFB) and Global Parallel Partial Decoder (GPPD). Our proposed ColonNet has been extensively tested and proven effective, achieving a DICE coefficient of over 0.910 and an FPS of over 102 on the CVC-300 dataset. Compared to state-of-the-art (SOTA) methods, ColonNet achieves the best or comparable performance on five publicly available datasets while maintaining the highest FPS (over 102), establishing a new SOTA. The code will be released at: https://github.com/SPECTRELWF/ColonNet.
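The DICE coefficient quoted above is 2|A∩B| / (|A| + |B|) over the predicted and ground-truth masks. A minimal sketch on flat binary lists with toy values:

```python
# Dice coefficient for binary segmentation masks (flat 0/1 lists):
# dice = 2 * |pred AND gt| / (|pred| + |gt|).

def dice(pred, gt):
    inter = sum(p * g for p, g in zip(pred, gt))
    total = sum(pred) + sum(gt)
    return 2 * inter / total if total else 1.0  # both empty -> perfect match

pred = [1, 1, 0, 1, 0]
gt   = [1, 0, 0, 1, 1]
print(dice(pred, gt))  # 2*2 / (3+3) = 0.666...
```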
Affiliation(s)
- Dong Sui
- School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture, Beijing, China.
- Weifeng Liu
- School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture, Beijing, China
- Yue Zhang
- College of Computer Science and Technology, Harbin Engineering University, Harbin, China.
- Yang Li
- School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture, Beijing, China
- Gongning Luo
- Perceptual Computing Research Center, Harbin Institute of Technology, Harbin, China
- Kuanquan Wang
- Perceptual Computing Research Center, Harbin Institute of Technology, Harbin, China
- Maozu Guo
- School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture, Beijing, China
9
Lee GE, Cho J, Choi SI. Shallow and reverse attention network for colon polyp segmentation. Sci Rep 2023; 13:15243. [PMID: 37709828] [PMCID: PMC10502036] [DOI: 10.1038/s41598-023-42436-z]
Abstract
Polyp segmentation is challenging because the boundary between polyps and mucosa is ambiguous. Several models have considered the use of attention mechanisms to solve this problem. However, these models use only the limited information obtained from a single type of attention. We propose a new dual-attention network based on shallow and reverse attention modules for colon polyp segmentation, called SRaNet. The shallow attention mechanism removes background noise while emphasizing locality by focusing on the foreground. In contrast, reverse attention helps distinguish the boundary between polyps and mucous membranes more clearly by focusing on the background. The two attention mechanisms are adaptively fused using a "Softmax Gate". Combining the two types of attention enables the model to capture complementary foreground and boundary features; therefore, the proposed model predicts the boundaries of polyps more accurately than other models. We present the results of extensive experiments on polyp benchmarks to show that the proposed method outperforms existing models on both seen and unseen data. Furthermore, the results show that the proposed dual-attention module increases the explainability of the model.
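One plausible reading of adaptively fusing two attention maps with a "Softmax Gate" is a per-position softmax over the two maps' responses, used as fusion weights. The exact gate parametrization in SRaNet may differ; this is an illustrative sketch only:

```python
import math

# Per-position softmax gating of two attention maps (flat lists of floats):
# at each position, gate weights are softmax over the two responses, and the
# fused value is the gate-weighted sum. The stronger map dominates locally.

def softmax2(a, b):
    ea, eb = math.exp(a), math.exp(b)
    return ea / (ea + eb), eb / (ea + eb)

def fuse(shallow_att, reverse_att):
    fused = []
    for s, r in zip(shallow_att, reverse_att):
        ws, wr = softmax2(s, r)          # adaptive per-position gate
        fused.append(ws * s + wr * r)    # gate-weighted combination
    return fused

shallow = [0.9, 0.1]   # foreground-focused attention (toy values)
reverse = [0.1, 0.8]   # background/boundary-focused attention (toy values)
print([round(v, 3) for v in fuse(shallow, reverse)])
```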
Affiliation(s)
- Go-Eun Lee
- Department of Computer Science and Engineering, Dankook University, Yongin, 16890, South Korea
- Jungchan Cho
- School of Computing, Gachon University, Seongnam, 13120, South Korea.
- Sang-Il Choi
- Department of Computer Science and Engineering, Dankook University, Yongin, 16890, South Korea.
10
Liu W, Li Z, Xia J, Li C. MCSF-Net: a multi-scale channel spatial fusion network for real-time polyp segmentation. Phys Med Biol 2023; 68:175041. [PMID: 37582393] [DOI: 10.1088/1361-6560/acf090]
Abstract
Colorectal cancer is a globally prevalent cancer type that necessitates prompt screening. Colonoscopy is the established diagnostic technique for identifying colorectal polyps. However, missed polyp rates remain a concern. Early detection of polyps, while still precancerous, is vital for minimizing cancer-related mortality and economic impact. In the clinical setting, precise segmentation of polyps from colonoscopy images can provide valuable diagnostic and surgical information. Recent advances in computer-aided diagnostic systems, specifically those based on deep learning techniques, have shown promise in improving the detection rates of missed polyps, and thereby assisting gastroenterologists in improving polyp identification. In the present investigation, we introduce MCSF-Net, a real-time automatic segmentation framework that utilizes a multi-scale channel space fusion network. The proposed architecture leverages a multi-scale fusion module in conjunction with spatial and channel attention mechanisms to effectively amalgamate high-dimensional multi-scale features. Additionally, a feature complementation module is employed to extract boundary cues from low-dimensional features, facilitating enhanced representation of low-level features while keeping computational complexity to a minimum. Furthermore, we incorporate shape blocks to facilitate better model supervision for precise identification of boundary features of polyps. Our extensive evaluation of the proposed MCSF-Net on five publicly available benchmark datasets reveals that it outperforms several existing state-of-the-art approaches with respect to different evaluation metrics. The proposed approach runs at an impressive ∼45 FPS, demonstrating notable advantages in terms of scalability and real-time segmentation.
Affiliation(s)
- Weikang Liu
- School of Electronic and Information Engineering, University of Science and Technology Liaoning, Anshan, 114051, People's Republic of China
- Zhigang Li
- School of Electronic and Information Engineering, University of Science and Technology Liaoning, Anshan, 114051, People's Republic of China
- Jiaao Xia
- School of Electronic and Information Engineering, University of Science and Technology Liaoning, Anshan, 114051, People's Republic of China
- Chunyang Li
- School of Electronic and Information Engineering, University of Science and Technology Liaoning, Anshan, 114051, People's Republic of China
11
Mazumdar S, Sinha S, Jha S, Jagtap B. Computer-aided automated diminutive colonic polyp detection in colonoscopy by using deep machine learning system; first indigenous algorithm developed in India. Indian J Gastroenterol 2023; 42:226-232. [PMID: 37145230] [DOI: 10.1007/s12664-022-01331-7]
Abstract
BACKGROUND Colonic polyps can be detected and resected during a colonoscopy before cancer develops. However, about a quarter of polyps may be missed due to their small size, their location, or human error. An artificial intelligence (AI) system can improve polyp detection and reduce colorectal cancer incidence. We are developing an indigenous AI system to detect diminutive polyps in real-life scenarios that is compatible with any high-definition colonoscopy and endoscopic video-capture software. METHODS We trained a masked region-based convolutional neural network model to detect and localize colonic polyps. Three independent datasets of colonoscopy videos comprising 1,039 image frames were used and divided into a training dataset of 688 frames and a testing dataset of 351 frames. Of the 1,039 image frames, 231 were from real-life colonoscopy videos from our centre; the rest were from publicly available image frames already modified to be directly usable for developing the AI system. The image frames of the testing dataset were also augmented by rotating and zooming the images to replicate the real-life distortions of images seen during colonoscopy. The AI system was trained to localize each polyp by creating a 'bounding box' and was then applied to the testing dataset to test its accuracy in detecting polyps automatically. RESULTS The AI system achieved a mean average precision (equivalent to specificity) of 88.63% for automatic polyp detection. All polyps in the testing dataset were identified by the AI system, i.e., there were no false-negative results (sensitivity of 100%). The mean polyp size in the study was 5 (± 4) mm. The mean processing time per image frame was 96.4 minutes. CONCLUSIONS This AI system, when applied to real-life colonoscopy images with wide variations in bowel preparation and small polyp sizes, can detect colonic polyps with a high degree of accuracy.
Affiliation(s)
- Srijan Mazumdar
- Indian Institute of Liver and Digestive Sciences, Sitala (East), Jagadishpur, Sonarpur, 24 Parganas (South), Kolkata, 700 150, India.
- Saugata Sinha
- Visvesvaraya National Institute of Technology, South Ambazari Road, Nagpur, 440 010, India
- Saurabh Jha
- Visvesvaraya National Institute of Technology, South Ambazari Road, Nagpur, 440 010, India
- Balaji Jagtap
- Visvesvaraya National Institute of Technology, South Ambazari Road, Nagpur, 440 010, India
12
Colon cancer stage detection in colonoscopy images using YOLOv3 MSF deep learning architecture. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104283]
|
13
|
Sadagopan R, Ravi S, Adithya SV, Vivekanandhan S. PolyEffNetV1: A CNN based colorectal polyp detection in colonoscopy images. Proc Inst Mech Eng H 2023; 237:406-418. [PMID: 36683465 DOI: 10.1177/09544119221149233] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]
Abstract
The presence of polyps is the root cause of colorectal cancer, so identifying such polyps at an early stage enables earlier treatment and helps avoid complications for the patient. Since polyps vary in size and shape, detecting them in colonoscopy images is challenging. Hence, our work leverages deep learning algorithms for the segmentation and classification of polyps in colonoscopy images. In this work, we propose PolypEffNetV1, which combines a U-Net to segment the different pathologies present in the colonoscopy frame with an EfficientNetB5 classifier for the detected pathologies. The colonoscopy images for the segmentation process are taken from the open-source KVASIR dataset, which consists of 1000 images with "ground truth" labeling. For classification, a combination of the KVASIR and CVC datasets is used, consisting of 1612 images with 1696 polyp regions and 760 non-polyp inflamed regions. The proposed PolypEffNetV1 produced a testing accuracy of 97.1%, Jaccard index of 0.84, Dice coefficient of 0.91, and F1-score of 0.89. Subsequently, for classifying whether a segmented region is a polyp or non-polyp inflammation, the developed classifier produced a validation accuracy of 99%, specificity of 98%, and sensitivity of 99%. Hence, the proposed system could be used by gastroenterologists to identify the presence of polyps in colonoscopy images/videos, which will in turn increase healthcare quality. The developed models can either be deployed at the edge of the device to enable real-time assistance or be integrated with existing software applications for offline review and treatment planning.
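The Jaccard index and Dice coefficient reported above are both overlap measures between predicted and ground-truth masks. A small illustrative sketch (our own, not the paper's code), using flat binary masks:

```python
def dice_jaccard(pred, truth):
    """Dice coefficient and Jaccard index for flat binary masks (0/1 sequences)."""
    inter = sum(p * t for p, t in zip(pred, truth))   # true-positive pixels
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    jaccard = inter / union if union else 1.0
    return dice, jaccard

pred  = [1, 1, 1, 0, 0, 0]   # toy predicted mask
truth = [1, 1, 0, 0, 0, 1]   # toy ground-truth mask
d, j = dice_jaccard(pred, truth)
```

The two metrics are monotonically related (Dice = 2J/(1+J)), which is why papers often report both from the same confusion counts.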
Collapse
Affiliation(s)
- Rajkumar Sadagopan
- Department of Biomedical Engineering, Rajalakshmi Engineering College, Chennai, India.,Centre of Excellence in Medical Imaging, Rajalakshmi Engineering College, Chennai, India
| | - Saravanan Ravi
- Department of Biomedical Engineering, Rajalakshmi Engineering College, Chennai, India.,Centre of Excellence in Medical Imaging, Rajalakshmi Engineering College, Chennai, India
| | - Sairam Vuppala Adithya
- Department of Biomedical Engineering, Rajalakshmi Engineering College, Chennai, India.,Centre of Excellence in Medical Imaging, Rajalakshmi Engineering College, Chennai, India
| | - Sapthagirivasan Vivekanandhan
- Department of Biomedical Engineering, Rajalakshmi Engineering College, Chennai, India.,Medical and Life Sciences Department, Engineering R&D Division, IT Services Company, Bengaluru, India
| |
Collapse
|
14
|
Lewis J, Cha YJ, Kim J. Dual encoder-decoder-based deep polyp segmentation network for colonoscopy images. Sci Rep 2023; 13:1183. [PMID: 36681776 PMCID: PMC9867760 DOI: 10.1038/s41598-023-28530-2] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2022] [Accepted: 01/19/2023] [Indexed: 01/22/2023] Open
Abstract
Detection of colorectal polyps through colonoscopy is an essential practice in the prevention of colorectal cancers. However, the method itself is labor intensive and subject to human error. With the advent of deep learning-based methodologies, and specifically convolutional neural networks, an opportunity has appeared to improve the prognosis of patients suffering from colorectal cancer through automated detection and segmentation of polyps. Polyp segmentation is subject to a number of problems, such as model overfitting and poor generalization, poor definition of boundary pixels, and the model's limited ability to capture the practical range of textures, sizes, and colors. In an effort to address these challenges, we propose a dual encoder-decoder solution named Polyp Segmentation Network (PSNet). Both the dual encoder and decoder were developed by the comprehensive combination of a variety of deep learning modules, including the PS encoder, transformer encoder, PS decoder, enhanced dilated transformer decoder, partial decoder, and merge module. PSNet outperforms state-of-the-art results in an extensive comparative study across 5 existing polyp datasets, with an mDice of 0.863 and an mIoU of 0.797. With our new modified polyp dataset we obtain an mDice and mIoU of 0.941 and 0.897, respectively.
Collapse
Affiliation(s)
- John Lewis
- Department of Civil Engineering, University of Manitoba, Winnipeg, R3M 0N2, Canada
| | - Young-Jin Cha
- Department of Civil Engineering, University of Manitoba, Winnipeg, R3M 0N2, Canada.
| | - Jongho Kim
- Department of Radiology, Max Rady College of Medicine, University of Manitoba, Winnipeg, R3A 1R9, Canada
| |
Collapse
|
15
|
ELKarazle K, Raman V, Then P, Chua C. Detection of Colorectal Polyps from Colonoscopy Using Machine Learning: A Survey on Modern Techniques. SENSORS (BASEL, SWITZERLAND) 2023; 23:1225. [PMID: 36772263 PMCID: PMC9953705 DOI: 10.3390/s23031225] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/27/2022] [Revised: 01/08/2023] [Accepted: 01/17/2023] [Indexed: 06/18/2023]
Abstract
Given the increased interest in utilizing artificial intelligence as an assistive tool in the medical sector, colorectal polyp detection and classification using deep learning techniques has been an active area of research in recent years. The motivation for researching this topic is that physicians miss polyps from time to time due to fatigue and lack of experience in carrying out the procedure. Unidentified polyps can cause further complications and ultimately lead to colorectal cancer (CRC), one of the leading causes of cancer mortality. Although various techniques have been presented recently, several key issues, such as the lack of sufficient training data, white-light reflection, and blur, affect the performance of such methods. This paper presents a survey of recently proposed methods for detecting polyps in colonoscopy. The survey covers benchmark dataset analysis, evaluation metrics, common challenges, standard approaches to building polyp detectors, and a review of the latest work in the literature. We conclude by providing a precise analysis of the gaps and trends discovered in the reviewed literature to guide future work.
Collapse
Affiliation(s)
- Khaled ELKarazle
- School of Information and Communication Technologies, Swinburne University of Technology, Sarawak Campus, Kuching 93350, Malaysia
| | - Valliappan Raman
- Department of Artificial Intelligence and Data Science, Coimbatore Institute of Technology, Coimbatore 641014, India
| | - Patrick Then
- School of Information and Communication Technologies, Swinburne University of Technology, Sarawak Campus, Kuching 93350, Malaysia
| | - Caslon Chua
- Department of Computer Science and Software Engineering, Swinburne University of Technology, Melbourne 3122, Australia
| |
Collapse
|
16
|
Rajesh E, Basheer S, Dhanaraj RK, Yadav S, Kadry S, Khan MA, Kim YJ, Cha JH. Machine Learning for Online Automatic Prediction of Common Disease Attributes Using Never-Ending Image Learner. Diagnostics (Basel) 2022; 13:95. [PMID: 36611387 PMCID: PMC9818336 DOI: 10.3390/diagnostics13010095] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Revised: 11/30/2022] [Accepted: 12/10/2022] [Indexed: 12/31/2022] Open
Abstract
The rapid increase in Internet technology and machine-learning devices has opened up new avenues for online healthcare systems. Sometimes, getting medical assistance or healthcare advice online is easier than getting it in person. For mild symptoms, people frequently feel reluctant to visit the hospital or a doctor; instead, they post their questions on numerous healthcare forums. However, predictions may not always be accurate, and there is no assurance that users will always receive a reply to their posts. In addition, some posts are fabricated, which can misdirect the patient. To address these issues, online automatic prediction (OAP) is proposed. OAP employs machine learning to predict the common attributes of disease using a Never-Ending Image Learner with an intelligent analysis of disease factors. The Never-Ending Image Learner predicts disease factors by selecting from finite data images with minimum structural risk and efficiently predicting real-time images via machine-learning-enabled M-theory. The proposed multi-access edge computing platform works with machine-learning-assisted automatic prediction from multiple images using multiple-instance learning. Using the Never-Ending Image Learner, common disease attributes may be predicted online automatically. The method provides deeper storage of images, with their data stored according to isotropic positioning. The proposed method was compared with existing approaches, such as multiple-instance learning for automated image indexing and hyper-spectral image classification. With machine learning over multiple images and the application of isotropic positioning, operating efficiency is improved and results are predicted with better accuracy. In this paper, machine-learning performance metrics for online automatic prediction tools are compiled and compared, and through this comparison the proposed method is shown to achieve higher accuracy, demonstrating its efficiency relative to existing methods.
Collapse
Affiliation(s)
- E. Rajesh
- School of Computing Science and Engineering, Galgotias University, Greater Noida 203201, India
| | - Shajahan Basheer
- School of Computing Science and Engineering, Galgotias University, Greater Noida 203201, India
| | - Rajesh Kumar Dhanaraj
- School of Computing Science and Engineering, Galgotias University, Greater Noida 203201, India
| | - Soni Yadav
- School of Computing Science and Engineering, Galgotias University, Greater Noida 203201, India
| | - Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
- Artificial Intelligence Research Center (AIRC), Ajman University, Ajman 346, United Arab Emirates
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos P.O. Box 13-5053, Lebanon
| | | | - Ye Jin Kim
- Department of Computer Science, Hanyang University, Seoul 04763, Republic of Korea
| | - Jae-Hyuk Cha
- Department of Computer Science, Hanyang University, Seoul 04763, Republic of Korea
| |
Collapse
|
17
|
Wu C, Long C, Li S, Yang J, Jiang F, Zhou R. MSRAformer: Multiscale spatial reverse attention network for polyp segmentation. Comput Biol Med 2022; 151:106274. [PMID: 36375412 DOI: 10.1016/j.compbiomed.2022.106274] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2022] [Revised: 10/10/2022] [Accepted: 10/30/2022] [Indexed: 11/11/2022]
Abstract
Colon polyps are an important reference basis in the diagnosis of colorectal cancer (CRC). In routine diagnosis, the polyp area is segmented from the colorectal endoscopy image, and the obtained pathological information is used to assist in the diagnosis of the disease and in surgery. Accurate segmentation of polyps in colonoscopy images remains a challenging task: polyps of the same type can differ greatly in shape, size, color and texture, and it is difficult to distinguish the polyp region from the mucosal boundary. In recent years, convolutional neural networks (CNNs) have achieved promising results in medical image segmentation. However, CNNs focus on the extraction of local features and lack the ability to extract global feature information. This paper presents a Multiscale Spatial Reverse Attention Network, called MSRAformer, with high performance in medical segmentation. It adopts a Swin Transformer encoder with a pyramid structure to extract features at four different stages, and extracts multi-scale feature information through a multi-scale channel attention module, which enhances the global feature extraction ability and generalization of the network and preliminarily aggregates a pre-segmentation result. The paper also proposes a spatial reverse attention module that gradually supplements the edge structure and detail information of the polyp region. Extensive experiments show that MSRAformer's segmentation performance on colonoscopy polyp datasets is better than that of most state-of-the-art (SOTA) medical image segmentation methods, with better generalization. A reference implementation of MSRAformer is available at https://github.com/ChengLong1222/MSRAformer-main.
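The reverse attention idea can be illustrated schematically: features are weighted by the complement of the current prediction, so regions where the preliminary segmentation is least confident, typically polyp edges, receive the most attention. A toy sketch (our illustration, not the released MSRAformer code):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def reverse_attention(features, logits):
    """Weight each feature by 1 - sigmoid(logit): pixels the current
    prediction already covers confidently are suppressed, and weakly
    predicted regions (e.g. boundaries) are emphasized."""
    return [f * (1.0 - sigmoid(z)) for f, z in zip(features, logits)]

feats  = [1.0, 1.0, 1.0]
logits = [4.0, 0.0, -4.0]   # confident polyp / uncertain / confident background
out = reverse_attention(feats, logits)
# out increases from the confident-polyp pixel to the background pixel
```

In the actual network this reweighting is applied to 2-D feature maps at several decoder stages, progressively refining the pre-segmentation.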
Collapse
Affiliation(s)
- Cong Wu
- School of computer science, Hubei University of Technology, Wuhan, China.
| | - Cheng Long
- School of computer science, Hubei University of Technology, Wuhan, China.
| | - Shijun Li
- School of computer science, Hubei University of Technology, Wuhan, China
| | - Junjie Yang
- Union Hospital Tongji Medical College Huazhong University of Science and Technology, Wuhan, China
| | - Fagang Jiang
- Union Hospital Tongji Medical College Huazhong University of Science and Technology, Wuhan, China
| | - Ran Zhou
- School of computer science, Hubei University of Technology, Wuhan, China
| |
Collapse
|
18
|
Tjønnås MS, Guzmán-García C, Sánchez-González P, Gómez EJ, Oropesa I, Våpenstad C. Stress in surgical educational environments: a systematic review. BMC MEDICAL EDUCATION 2022; 22:791. [PMID: 36380334 PMCID: PMC9667591 DOI: 10.1186/s12909-022-03841-6] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Accepted: 10/29/2022] [Indexed: 06/16/2023]
Abstract
BACKGROUND The effects of stress on surgical residents, and how stress management training can prepare residents to effectively manage stressful situations, are relevant topics. This systematic review aimed to analyze the literature regarding (1) current stress monitoring tools and their use in surgical environments, (2) current methods in surgical stress management training, and (3) how stress affects surgical performance. METHODS A search strategy was implemented to retrieve relevant articles from Web of Science, Scopus, and PubMed. The 787 initially retrieved articles were screened for further evaluation according to the inclusion/exclusion criteria (PROSPERO registration number CRD42021252682). RESULTS Sixty-one articles were included in the review. Among the stress monitoring methods found, heart rate analysis was the most used tool for physiological parameters, while the STAI-6 scale was preferred for psychological parameters. The stress management methods found were mental-, simulation- and feedback-based training, with mental-based training showing clear positive effects on participants. Studies analyzing the effects of stress on surgical performance reported both negative and positive effects on technical and non-technical performance. CONCLUSIONS Stress responses are an important factor in surgical environments, affecting residents' training and performance. This study identified the main methods used for monitoring stress parameters in surgical educational environments. The surgical stress management training methods applied were diverse and demonstrated positive effects on surgeons' stress levels and performance. Stress had both negative and positive effects on surgical performance, although no clear collective pattern of these effects emerged.
Collapse
Affiliation(s)
- Maria Suong Tjønnås
- Department of Neuromedicine and Movement Science (INB), Faculty of Medicine and Health Sciences, NTNU, Norwegian University of Science and Technology, N-7491, Trondheim, Norway.
- SINTEF Digital, Health Department, Trondheim, Norway.
| | - Carmen Guzmán-García
- Biomedical Engineering and Telemedicine Centre (GBT), ETSI Telecomunicación, Center for Biomedical Technology, Universidad Politécnica de Madrid (UPM), Madrid, Spain
| | - Patricia Sánchez-González
- Biomedical Engineering and Telemedicine Centre (GBT), ETSI Telecomunicación, Center for Biomedical Technology, Universidad Politécnica de Madrid (UPM), Madrid, Spain
- Networking Research Center on Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Madrid, Spain
| | - Enrique Javier Gómez
- Biomedical Engineering and Telemedicine Centre (GBT), ETSI Telecomunicación, Center for Biomedical Technology, Universidad Politécnica de Madrid (UPM), Madrid, Spain
- Networking Research Center on Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Madrid, Spain
| | - Ignacio Oropesa
- Biomedical Engineering and Telemedicine Centre (GBT), ETSI Telecomunicación, Center for Biomedical Technology, Universidad Politécnica de Madrid (UPM), Madrid, Spain
| | - Cecilie Våpenstad
- SINTEF Digital, Health Department, Trondheim, Norway
- Department of Clinical and Molecular Medicine (IKOM), Faculty of Medicine and Health Sciences, NTNU, Norwegian University of Science and Technology, Trondheim, Norway
| |
Collapse
|
19
|
Turan M, Durmus F. UC-NfNet: Deep learning-enabled assessment of ulcerative colitis from colonoscopy images. Med Image Anal 2022; 82:102587. [DOI: 10.1016/j.media.2022.102587] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2022] [Revised: 07/12/2022] [Accepted: 08/17/2022] [Indexed: 10/31/2022]
|
20
|
González-Bueno Puyal J, Brandao P, Ahmad OF, Bhatia KK, Toth D, Kader R, Lovat L, Mountney P, Stoyanov D. Polyp detection on video colonoscopy using a hybrid 2D/3D CNN. Med Image Anal 2022; 82:102625. [PMID: 36209637 DOI: 10.1016/j.media.2022.102625] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2021] [Revised: 08/22/2022] [Accepted: 09/10/2022] [Indexed: 12/15/2022]
Abstract
Colonoscopy is the gold standard for early diagnosis and pre-emptive treatment of colorectal cancer by detecting and removing colonic polyps. Deep learning approaches to polyp detection have shown potential for enhancing polyp detection rates. However, the majority of these systems are developed and evaluated on static images from colonoscopies, whilst in clinical practice the treatment is performed on a real-time video feed. Non-curated video data remain a challenge, as they contain low-quality frames when compared to still, selected images often obtained from diagnostic records. Nevertheless, video also embeds temporal information that can be exploited to increase prediction stability. A hybrid 2D/3D convolutional neural network architecture for polyp segmentation is presented in this paper. The network improves polyp detection by encompassing spatial and temporal correlation of the predictions while preserving real-time detection. Extensive experiments show that the hybrid method outperforms a 2D baseline. The proposed architecture is validated on videos from 46 patients and on the publicly available SUN polyp database. Higher performance and increased generalisability indicate that real-world clinical implementations of automated polyp detection can benefit from the hybrid algorithm and the inclusion of temporal information.
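One simple way to see how temporal information stabilizes per-frame predictions is to smooth frame-level polyp scores over a short window. The paper's hybrid 2D/3D network learns spatio-temporal correlations rather than averaging, so this is only an illustrative sketch:

```python
def smooth_scores(scores, window=3):
    """Moving-average smoothing of per-frame polyp probabilities.
    Isolated single-frame spikes or dropouts are damped, so detections
    flicker less across consecutive video frames."""
    half = window // 2
    out = []
    for i in range(len(scores)):
        lo, hi = max(0, i - half), min(len(scores), i + half + 1)
        out.append(sum(scores[lo:hi]) / (hi - lo))
    return out

raw = [0.1, 0.9, 0.1, 0.1]       # a single-frame false positive
smoothed = smooth_scores(raw)    # the 0.9 spike is damped well below threshold
```

A learned 3D convolution plays an analogous role but can weight frames adaptively instead of uniformly.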
Collapse
Affiliation(s)
- Juana González-Bueno Puyal
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, W1W 7TY, UK; Odin Vision, London, W1W 7TY, UK.
| | | | - Omer F Ahmad
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, W1W 7TY, UK
| | | | | | - Rawen Kader
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, W1W 7TY, UK
| | - Laurence Lovat
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, W1W 7TY, UK
| | | | - Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, W1W 7TY, UK
| |
Collapse
|
21
|
Cui R, Yang R, Liu F, Cai C. N-Net: Lesion region segmentations using the generalized hybrid dilated convolutions for polyps in colonoscopy images. Front Bioeng Biotechnol 2022; 10:963590. [DOI: 10.3389/fbioe.2022.963590] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Accepted: 08/12/2022] [Indexed: 11/13/2022] Open
Abstract
Colorectal cancer has the second-highest incidence rate among females and the third-highest among males. Colorectal polyps are potential prognostic indicators of colorectal cancer, and colonoscopy is the gold standard for the biopsy and removal of colorectal polyps. In this scenario, one of the main concerns is ensuring the accuracy of lesion region identification, yet the miss rate of polyps under manual observation in colonoscopy can reach 14%–30%. In this paper, we focus on identifying polyps in clinical colonoscopy images and propose a new N-shaped deep neural network (N-Net) structure for lesion region segmentation. The N-Net adopts an encoder-decoder framework and implements DenseNet modules in the encoding path. Moreover, we innovatively propose a strategy for designing generalized hybrid dilated convolutions (GHDC), which enable flexible dilation rates and convolutional kernel sizes, to facilitate the transmission of multi-scale information with expanded receptive fields. Based on this strategy, we design four GHDC blocks to connect the encoding and decoding paths. Through experiments on two publicly available polyp segmentation datasets, Kvasir-SEG and CVC-ClinicDB, the rationality and superiority of the proposed GHDC blocks and the proposed N-Net are verified. Through comparative studies with state-of-the-art methods such as TransU-Net, DeepLabV3+ and CA-Net, we show that even with a small number of network parameters, N-Net achieves a Dice of 94.45%, an average symmetric surface distance (ASSD) of 0.38 pix and a mean intersection-over-union (mIoU) of 89.80% on Kvasir-SEG, and a Dice of 97.03%, an ASSD of 0.16 pix and an mIoU of 94.35% on CVC-ClinicDB.
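The core of a GHDC-style block is dilated convolution, where kernel taps are spaced by a dilation rate to widen the receptive field without adding weights. A 1-D sketch of the operation (illustrative only; the paper uses 2-D convolutions with mixed dilation rates and kernel sizes):

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """'Valid' 1-D convolution with a dilation rate: kernel taps are spaced
    `dilation` samples apart, so a k-tap kernel covers a receptive field of
    (k - 1) * dilation + 1 samples with the same number of weights."""
    span = (len(kernel) - 1) * dilation
    return [
        sum(kernel[j] * signal[i + j * dilation] for j in range(len(kernel)))
        for i in range(len(signal) - span)
    ]

x = [1, 2, 3, 4, 5, 6]
print(dilated_conv1d(x, [1, 1, 1], dilation=1))  # [6, 9, 12, 15]
print(dilated_conv1d(x, [1, 1, 1], dilation=2))  # [9, 12]
```

Mixing several dilation rates in one block, as the GHDC strategy proposes, lets the same layer aggregate information at multiple scales.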
Collapse
|
22
|
Yu T, Lin N, Zhang X, Pan Y, Hu H, Zheng W, Liu J, Hu W, Duan H, Si J. An end-to-end tracking method for polyp detectors in colonoscopy videos. Artif Intell Med 2022; 131:102363. [DOI: 10.1016/j.artmed.2022.102363] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Revised: 05/04/2022] [Accepted: 07/11/2022] [Indexed: 12/09/2022]
|
23
|
Han J, Xu C, An Z, Qian K, Tan W, Wang D, Fang Q. PRAPNet: A Parallel Residual Atrous Pyramid Network for Polyp Segmentation. SENSORS (BASEL, SWITZERLAND) 2022; 22:4658. [PMID: 35808154 PMCID: PMC9268928 DOI: 10.3390/s22134658] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/25/2022] [Revised: 06/15/2022] [Accepted: 06/15/2022] [Indexed: 02/05/2023]
Abstract
In colonoscopy, accurate computer-aided polyp detection and segmentation can help endoscopists remove abnormal tissue, reducing the chance of polyps developing into cancer, which is of great importance. In this paper, we propose a neural network based on a parallel residual atrous pyramid module, the parallel residual atrous pyramid network (PRAPNet), for intestinal polyp segmentation. We make full use of the global contextual information of the different regions through the proposed parallel residual atrous pyramid module. The experimental results showed that our proposed global prior module achieves better segmentation results in the intestinal polyp segmentation task than previously published methods. The mean intersection over union and Dice coefficient of the model on the Kvasir-SEG dataset were 90.4% and 94.2%, respectively, outperforming the scores achieved by seven classical segmentation network models (U-Net, U-Net++, ResUNet++, PraNet, CaraNet, SFFormer-L, TransFuse-L).
Collapse
Affiliation(s)
- Jubao Han
- School of Integrated Circuits, Anhui University, Hefei 230601, China; (J.H.); (Z.A.); (K.Q.); (W.T.); (D.W.); (Q.F.)
- Anhui Engineering Laboratory of Agro-Ecological Big Data, Hefei 230601, China
| | - Chao Xu
- School of Integrated Circuits, Anhui University, Hefei 230601, China; (J.H.); (Z.A.); (K.Q.); (W.T.); (D.W.); (Q.F.)
- Anhui Engineering Laboratory of Agro-Ecological Big Data, Hefei 230601, China
| | - Ziheng An
- School of Integrated Circuits, Anhui University, Hefei 230601, China; (J.H.); (Z.A.); (K.Q.); (W.T.); (D.W.); (Q.F.)
- Anhui Engineering Laboratory of Agro-Ecological Big Data, Hefei 230601, China
| | - Kai Qian
- School of Integrated Circuits, Anhui University, Hefei 230601, China; (J.H.); (Z.A.); (K.Q.); (W.T.); (D.W.); (Q.F.)
- Anhui Engineering Laboratory of Agro-Ecological Big Data, Hefei 230601, China
| | - Wei Tan
- School of Integrated Circuits, Anhui University, Hefei 230601, China; (J.H.); (Z.A.); (K.Q.); (W.T.); (D.W.); (Q.F.)
- Anhui Engineering Laboratory of Agro-Ecological Big Data, Hefei 230601, China
| | - Dou Wang
- School of Integrated Circuits, Anhui University, Hefei 230601, China; (J.H.); (Z.A.); (K.Q.); (W.T.); (D.W.); (Q.F.)
- Anhui Engineering Laboratory of Agro-Ecological Big Data, Hefei 230601, China
| | - Qianqian Fang
- School of Integrated Circuits, Anhui University, Hefei 230601, China; (J.H.); (Z.A.); (K.Q.); (W.T.); (D.W.); (Q.F.)
- Anhui Engineering Laboratory of Agro-Ecological Big Data, Hefei 230601, China
| |
Collapse
|
24
|
Karmakar R, Nooshabadi S. Mobile-PolypNet: Lightweight Colon Polyp Segmentation Network for Low-Resource Settings. J Imaging 2022; 8:jimaging8060169. [PMID: 35735968 PMCID: PMC9225047 DOI: 10.3390/jimaging8060169] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2022] [Revised: 06/08/2022] [Accepted: 06/10/2022] [Indexed: 11/16/2022] Open
Abstract
Colon polyps, small clumps of cells on the lining of the colon, can lead to colorectal cancer (CRC), one of the leading types of cancer globally. Hence, early automatic detection of these polyps is crucial in the prevention of CRC. The deep learning models proposed for the detection and segmentation of colorectal polyps are resource-intensive. This paper proposes a lightweight deep learning model for colorectal polyp segmentation that achieves state-of-the-art accuracy while significantly reducing model size and complexity. The proposed deep learning autoencoder model employs a set of state-of-the-art architectural blocks and optimization objective functions to achieve the desired efficiency. The model is trained and tested on five publicly available colorectal polyp segmentation datasets (CVC-ClinicDB, CVC-ColonDB, EndoScene, Kvasir, and ETIS). We also performed ablation testing on the model to examine various aspects of the autoencoder architecture, and evaluated the model using most of the common image-segmentation metrics. The backbone model achieved a Dice score of 0.935 on the Kvasir dataset and 0.945 on the CVC-ClinicDB dataset, improving accuracy by 4.12% and 5.12%, respectively, over the current state-of-the-art network, while using 88 times fewer parameters, 40 times less storage space, and being computationally 17 times more efficient. Our ablation study showed that the addition of ConvSkip in the autoencoder slightly improves the model's performance, but the improvement was not significant (p-value = 0.815).
Collapse
|
25
|
Simple U-net based synthetic polyp image generation: Polyp to negative and negative to polyp. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103491] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
|
26
|
Hasan MM, Islam N, Rahman MM. Gastrointestinal polyp detection through a fusion of contourlet transform and Neural features. JOURNAL OF KING SAUD UNIVERSITY - COMPUTER AND INFORMATION SCIENCES 2022. [DOI: 10.1016/j.jksuci.2019.12.013] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
|
27
|
COMMA: Propagating Complementary Multi-Level Aggregation Network for Polyp Segmentation. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12042114] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Colonoscopy is an effective method for detecting polyps to prevent colon cancer. Existing studies have achieved satisfactory polyp detection performance by aggregating low-level boundary and high-level region information in convolutional neural networks (CNNs) for precise polyp segmentation in colonoscopy images. However, multi-level aggregation provides limited polyp segmentation owing to the distribution discrepancy that occurs when integrating different layer representations. To address this problem, previous studies have employed complementary low- and high-level representations. In contrast to existing methods, we focus on propagating complementary information so that the complementary low-level explicit boundary, combined with abstracted high-level representations, diminishes the discrepancy. This study proposes COMMA, which propagates complementary multi-level aggregation to reduce distribution discrepancies. COMMA comprises a complementary masking module (CMM) and a boundary propagation module (BPM) as a multi-decoder. The CMM masks low-level boundary noise through the abstracted high-level representation and leverages the masked information at both levels. Similarly, the BPM incorporates the lowest- and highest-level representations to obtain explicit boundary information and propagates the boundary to the CMMs to improve polyp detection. Based on boundary and complementary representations, the CMMs can discriminate polyps more elaborately than prior methods. Moreover, we propose a hybrid loss function to mitigate class imbalance and noisy annotations in polyp segmentation. To evaluate COMMA's performance, we conducted experiments on five benchmark datasets using five metrics. The results prove that the proposed network outperforms state-of-the-art methods on all datasets. Specifically, COMMA improved mIoU by 0.043 on average across all datasets compared to existing state-of-the-art methods.
28
Xu J, Zhang Q, Yu Y, Zhao R, Bian X, Liu X, Wang J, Ge Z, Qian D. Deep reconstruction-recoding network for unsupervised domain adaptation and multi-center generalization in colonoscopy polyp detection. Comput Methods Programs Biomed 2022; 214:106576. [PMID: 34915425] [DOI: 10.1016/j.cmpb.2021.106576]
Abstract
BACKGROUND AND OBJECTIVE Currently, the best-performing methods in colonoscopy polyp detection are primarily based on deep neural networks (DNNs), which are usually trained on large amounts of labeled data. However, different hospitals use different endoscope models and set different imaging parameters, so the collected endoscopic images and videos vary greatly in style: there may be differences in color space, brightness, contrast, and resolution, as well as between white light endoscopy (WLE) and narrow-band imaging endoscopy (NBIE). We call these variations the domain shift. DNN performance may decrease when the training data and the testing data come from different hospitals or different endoscope models, and it is quite difficult to collect enough new labeled data and retrain a new DNN model before deploying it to a new hospital or endoscope model. METHODS To solve this problem, we propose a domain adaptation model called the Deep Reconstruction-Recoding Network (DRRN), which jointly learns a shared encoding representation for two tasks: i) a supervised object detection network for labeled source data, and ii) an unsupervised reconstruction-recoding network for unlabeled target data. Through the DRRN, the object detection network's encoder not only learns features from the labeled source domain but also encodes useful information from the unlabeled target domain, reducing the distribution difference between the two domains' feature spaces. RESULTS We evaluate the performance of the DRRN on a series of cross-domain datasets. Compared with training the polyp detection network using only source data, the performance of the DRRN on the target domain is improved. Feature statistics and visualization demonstrate that the DRRN learns the common distribution and feature invariance of the two domains. CONCLUSION The DRRN can improve cross-domain polyp detection: the generalization performance of a DNN-based polyp detection model can be improved without additional labeled data, allowing the model to be easily transferred to datasets from different hospitals or different endoscope models.
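The joint objective sketched in this abstract can be illustrated in a few lines. This is not the authors' implementation: the surrogate losses, the weighting factor `lam`, and the toy data below are all assumptions chosen only to show how a supervised source-domain term and an unsupervised target-domain term combine over a shared encoding.

```python
# Toy sketch of a DRRN-style joint objective: a supervised detection loss
# on labeled source data plus a weighted unsupervised reconstruction loss
# on unlabeled target data. All function names and losses are illustrative.

def detection_loss(pred_boxes, true_boxes):
    """Surrogate supervised loss: mean absolute coordinate error
    between predicted and ground-truth bounding boxes."""
    total = 0.0
    for p, t in zip(pred_boxes, true_boxes):
        total += sum(abs(pc - tc) for pc, tc in zip(p, t)) / len(p)
    return total / len(pred_boxes)

def reconstruction_loss(reconstructed, original):
    """Surrogate unsupervised loss: mean squared pixel error,
    requiring no labels on the target domain."""
    n = len(original)
    return sum((r - o) ** 2 for r, o in zip(reconstructed, original)) / n

def joint_loss(pred_boxes, true_boxes, recon, target_pixels, lam=0.5):
    """Joint objective over the shared encoder: supervised source term
    plus a lambda-weighted unsupervised target term."""
    return (detection_loss(pred_boxes, true_boxes)
            + lam * reconstruction_loss(recon, target_pixels))

# Example: perfect detection on source, imperfect reconstruction on target.
loss = joint_loss(
    pred_boxes=[(10, 10, 50, 50)], true_boxes=[(10, 10, 50, 50)],
    recon=[0.5, 0.5], target_pixels=[0.0, 1.0], lam=0.5)
print(round(loss, 3))  # detection term 0, reconstruction term 0.25 -> 0.125
```

Because both terms are driven by the same encoder, minimizing the sum pushes the encoder toward features useful on both domains, which is the mechanism the abstract credits for reducing the feature-space gap.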
Affiliation(s)
- Jianwei Xu, Xianzhang Bian, Jun Wang, Dahong Qian: Deepwise Healthcare Joint Research Laboratory, Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Qingwei Zhang, Ran Zhao, Zhizheng Ge: Division of Gastroenterology and Hepatology, Key Laboratory of Gastroenterology and Hepatology, Ministry of Health, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai Institute of Digestive Disease, Shanghai, China
- Yizhou Yu, Xiaoqing Liu: Deepwise Artificial Intelligence Laboratory, Beijing, China
29
Liang F, Wang S, Zhang K, Liu TJ, Li JN. Development of artificial intelligence technology in diagnosis, treatment, and prognosis of colorectal cancer. World J Gastrointest Oncol 2022; 14:124-152. [PMID: 35116107] [PMCID: PMC8790413] [DOI: 10.4251/wjgo.v14.i1.124]
Abstract
Artificial intelligence (AI) technology has advanced by leaps and bounds since its invention. AI technology can be subdivided into many techniques, such as machine learning and deep learning, whose application scopes and prospects differ considerably. Currently, AI technologies play a pivotal role in the highly complex and wide-ranging medical field, including medical image recognition, biotechnology, auxiliary diagnosis, drug research and development, and nutrition. Colorectal cancer (CRC) is a common gastrointestinal cancer with high mortality, posing a serious threat to human health. Many CRCs arise from the malignant transformation of colorectal polyps, so early diagnosis and treatment are crucial to CRC prognosis. Diagnostic methods for CRC comprise imaging diagnosis, endoscopy, and pathology diagnosis; treatment methods comprise endoscopic treatment, surgical treatment, and drug treatment. AI technology is still in the "weak AI" era and lacks genuine communication capabilities, so current AI is mainly used for image recognition and auxiliary analysis rather than in-depth communication with patients. This article reviews the application of AI in the diagnosis, treatment, and prognosis of CRC and provides prospects for its broader application.
Affiliation(s)
- Feng Liang, Kai Zhang, Tong-Jun Liu, Jian-Nan Li: Department of General Surgery, The Second Hospital of Jilin University, Changchun 130041, Jilin Province, China
- Shu Wang: Department of Radiotherapy, Jilin University Second Hospital, Changchun 130041, Jilin Province, China
30
Channel separation-based network for the automatic anatomical site recognition using endoscopic images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103167]
31
Taghiakbari M, Mori Y, von Renteln D. Artificial intelligence-assisted colonoscopy: A review of current state of practice and research. World J Gastroenterol 2021; 27:8103-8122. [PMID: 35068857] [PMCID: PMC8704267] [DOI: 10.3748/wjg.v27.i47.8103]
Abstract
Colonoscopy is an effective screening procedure in colorectal cancer prevention programs; however, colonoscopy practice can vary in terms of lesion detection, classification, and removal. Artificial intelligence (AI)-assisted decision support systems for endoscopy are an area of rapid research and development. These systems promise improved detection, classification, screening, and surveillance for colorectal polyps and cancer. Several recently developed applications for AI-assisted colonoscopy have shown promising results for the detection and classification of colorectal polyps and adenomas. However, their value for real-time application in clinical practice has yet to be determined owing to limitations in the design, validation, and testing of AI models under real-life clinical conditions. Despite these current limitations, ambitious attempts to expand the technology further by developing more complex systems capable of assisting and supporting the endoscopist throughout the entire colonoscopy examination, including polypectomy procedures, are at the concept stage. Further work is required to address the barriers and challenges of AI integration into broader colonoscopy practice, to navigate the approval process of regulatory organizations and societies, and to support physicians and patients in accepting the technology by providing strong evidence of its accuracy and safety. This article takes a closer look at the current state of AI integration in colonoscopy and offers suggestions for future research.
Affiliation(s)
- Mahsa Taghiakbari, Daniel von Renteln: Department of Gastroenterology, CRCHUM, Montreal H2X 0A9, Quebec, Canada
- Yuichi Mori: Clinical Effectiveness Research Group, University of Oslo, Oslo 0450, Norway; Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama 224-8503, Japan
32
Wang S, Yin Y, Wang D, Lv Z, Wang Y, Jin Y. An interpretable deep neural network for colorectal polyp diagnosis under colonoscopy. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.107568]
33
Wu L, He X, Liu M, Xie H, An P, Zhang J, Zhang H, Ai Y, Tong Q, Guo M, Huang M, Ge C, Yang Z, Yuan J, Liu J, Zhou W, Jiang X, Huang X, Mu G, Wan X, Li Y, Wang H, Wang Y, Zhang H, Chen D, Gong D, Wang J, Huang L, Li J, Yao L, Zhu Y, Yu H. Evaluation of the effects of an artificial intelligence system on endoscopy quality and preliminary testing of its performance in detecting early gastric cancer: a randomized controlled trial. Endoscopy 2021; 53:1199-1207. [PMID: 33429441] [DOI: 10.1055/a-1350-5583]
Abstract
BACKGROUND Esophagogastroduodenoscopy (EGD) is a prerequisite for detecting upper gastrointestinal lesions, especially early gastric cancer (EGC). An artificial intelligence system has previously been shown to monitor blind spots during EGD. In this study, we updated the system (ENDOANGEL), verified its effectiveness in improving endoscopy quality, and pretested its performance in detecting EGC in a multicenter randomized controlled trial. METHODS ENDOANGEL was developed using deep convolutional neural networks and deep reinforcement learning. Patients undergoing EGD in five hospitals were randomly assigned to the ENDOANGEL-assisted group or to a control group without use of ENDOANGEL. The primary outcome was the number of blind spots. Secondary outcomes included the performance of ENDOANGEL in predicting EGC in a clinical setting. RESULTS 1050 patients were randomized, and 498 and 504 patients in the ENDOANGEL and control groups, respectively, were analyzed. Compared with the control group, the ENDOANGEL group had fewer blind spots (mean 5.38 [standard deviation (SD) 4.32] vs. 9.82 [SD 4.98]; P < 0.001) and a longer inspection time (5.40 [SD 3.82] vs. 4.38 [SD 3.91] minutes; P < 0.001). In the ENDOANGEL group, 196 gastric lesions with pathological results were identified. ENDOANGEL correctly predicted all three EGCs (one mucosal carcinoma and two high-grade neoplasias) and two advanced gastric cancers, with a per-lesion accuracy of 84.7%, sensitivity of 100%, and specificity of 84.3% for detecting gastric cancer. CONCLUSIONS In this multicenter study, ENDOANGEL was an effective and robust system for improving the quality of EGD, with the potential to detect EGC in real time.
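The per-lesion figures quoted in this abstract follow the standard confusion-matrix definitions. A minimal sketch follows; the counts below are not taken from the paper but are chosen only to be consistent with the reported 196 lesions, 5 cancers, and the quoted accuracy, sensitivity, and specificity (the abstract does not give the full breakdown):

```python
# Standard per-lesion classification metrics from confusion-matrix counts.
# The example counts are illustrative, reverse-engineered to match the
# reported figures (196 lesions, 5 cancers, 100% sensitivity).

def classification_metrics(tp, fn, fp, tn):
    """Return (accuracy, sensitivity, specificity) from confusion counts."""
    total = tp + fn + fp + tn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)   # true-positive rate (recall)
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

# 5 cancers, all detected; some false alarms among the 191 benign lesions.
acc, sens, spec = classification_metrics(tp=5, fn=0, fp=30, tn=161)
print(f"accuracy={acc:.3f} sensitivity={sens:.3f} specificity={spec:.3f}")
# -> accuracy=0.847 sensitivity=1.000 specificity=0.843
```

Note how a 100% sensitivity over only 5 positive lesions is a weak estimate statistically, which is why the abstract frames the EGC result as preliminary.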
Affiliation(s)
- Lianlian Wu, Xinqi He, Ping An, Jun Zhang, Wei Zhou, Xiaoda Jiang, Xu Huang, Ganggang Mu, Xinyue Wan, Yanxia Li, Di Chen, Dexin Gong, Jing Wang, Li Huang, Jia Li, Liwen Yao, Yijie Zhu, Honggang Yu: Department of Gastroenterology; Key Laboratory of Hubei Province for Digestive System Disease; and Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Jun Liu: Department of Gastroenterology and Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Mei Liu, Huaping Xie: Department of Gastroenterology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Heng Zhang, Manling Huang: Department of Gastroenterology, Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yaowei Ai, Mingwen Guo: Department of Gastroenterology, The People's Hospital of China Three Gorges University/The First People's Hospital of Yichang, Yichang, China
- Qiaoyun Tong, Cunjin Ge, Zhi Yang: Department of Gastroenterology, Yichang Central People's Hospital, China Three Gorges University, Yichang, China
- Jingping Yuan: Department of Pathology, Renmin Hospital of Wuhan University, Wuhan, China
- Hongguang Wang: Department of Gastroenterology, Jilin People's Hospital, Jilin, China
- Yonggui Wang: School of Geography and Information Engineering, China University of Geosciences, Wuhan, China
- Hongfeng Zhang: Department of Pathology, Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
34
Kröner PT, Engels MML, Glicksberg BS, Johnson KW, Mzaik O, van Hooft JE, Wallace MB, El-Serag HB, Krittanawong C. Artificial intelligence in gastroenterology: A state-of-the-art review. World J Gastroenterol 2021; 27:6794-6824. [PMID: 34790008] [PMCID: PMC8567482] [DOI: 10.3748/wjg.v27.i40.6794]
Abstract
The development of artificial intelligence (AI) has increased dramatically in the last 20 years, with clinical applications progressively being explored for most of the medical specialties. The field of gastroenterology and hepatology, substantially reliant on vast amounts of imaging studies, is not an exception. The clinical applications of AI systems in this field include the identification of premalignant or malignant lesions (e.g., identification of dysplasia or esophageal adenocarcinoma in Barrett’s esophagus, pancreatic malignancies), detection of lesions (e.g., polyp identification and classification, small-bowel bleeding lesions on capsule endoscopy, pancreatic cystic lesions), development of objective scoring systems for risk stratification, prediction of disease prognosis or treatment response (e.g., determining survival in patients post-resection of hepatocellular carcinoma, or determining which patients with inflammatory bowel disease (IBD) will benefit from biologic therapy), and evaluation of metrics such as bowel preparation score or quality of endoscopic examination. The objective of this comprehensive review is to analyze the available AI-related studies pertaining to the entirety of the gastrointestinal tract, including the upper, middle, and lower tracts; IBD; the hepatobiliary system; and the pancreas, discussing the findings and clinical applications, as well as outlining the current limitations and future directions in this field.
Affiliation(s)
- Paul T Kröner, Obaie Mzaik: Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States
- Megan ML Engels: Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States; Cancer Center Amsterdam, Department of Gastroenterology and Hepatology, Amsterdam UMC, Location AMC, Amsterdam 1105, The Netherlands
- Benjamin S Glicksberg, Kipp W Johnson: The Hasso Plattner Institute for Digital Health, Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
- Jeanin E van Hooft: Department of Gastroenterology and Hepatology, Leiden University Medical Center, Amsterdam 2300, The Netherlands
- Michael B Wallace: Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States; Division of Gastroenterology and Hepatology, Sheikh Shakhbout Medical City, Abu Dhabi 11001, United Arab Emirates
- Hashem B El-Serag: Section of Gastroenterology and Hepatology and Section of Health Services Research, Michael E. DeBakey VA Medical Center and Baylor College of Medicine, Houston, TX 77030, United States
- Chayakrit Krittanawong: Section of Health Services Research, Michael E. DeBakey VA Medical Center and Baylor College of Medicine, Houston, TX 77030, United States; Section of Cardiology, Michael E. DeBakey VA Medical Center, Houston, TX 77030, United States
35
El-Nakeep S, El-Nakeep M. Artificial intelligence for cancer detection in upper gastrointestinal endoscopy, current status, and future aspirations. Artif Intell Gastroenterol 2021; 2:124-132. [DOI: 10.35712/aig.v2.i5.124]
36
Yeung M, Sala E, Schönlieb CB, Rundo L. Focus U-Net: A novel dual attention-gated CNN for polyp segmentation during colonoscopy. Comput Biol Med 2021; 137:104815. [PMID: 34507156] [PMCID: PMC8505797] [DOI: 10.1016/j.compbiomed.2021.104815]
Abstract
BACKGROUND Colonoscopy remains the gold-standard screening for colorectal cancer. However, significant miss rates for polyps have been reported, particularly when there are multiple small adenomas. This presents an opportunity to leverage computer-aided systems to support clinicians and reduce the number of polyps missed. METHOD In this work, we introduce the Focus U-Net, a novel dual attention-gated deep neural network, which combines efficient spatial and channel-based attention into a single Focus Gate module to encourage selective learning of polyp features. The Focus U-Net incorporates several further architectural modifications, including the addition of short-range skip connections and deep supervision. Furthermore, we introduce the Hybrid Focal loss, a new compound loss function based on the Focal loss and Focal Tversky loss, designed to handle class-imbalanced image segmentation. For our experiments, we selected five public datasets containing images of polyps obtained during optical colonoscopy: CVC-ClinicDB, Kvasir-SEG, CVC-ColonDB, ETIS-Larib PolypDB, and the EndoScene test set. We first perform a series of ablation studies and then evaluate the Focus U-Net on the CVC-ClinicDB and Kvasir-SEG datasets separately, and on a combined dataset of all five public datasets. To evaluate model performance, we use the Dice similarity coefficient (DSC) and Intersection over Union (IoU) metrics. RESULTS Our model achieves state-of-the-art results for both CVC-ClinicDB and Kvasir-SEG, with a mean DSC of 0.941 and 0.910, respectively. When evaluated on a combination of the five public polyp datasets, our model similarly achieves state-of-the-art results, with a mean DSC of 0.878 and mean IoU of 0.809, a 14% and 15% improvement over the previous state-of-the-art results of 0.768 and 0.702, respectively. CONCLUSIONS This study shows the potential for deep learning to provide fast and accurate polyp segmentation results for use during colonoscopy. The Focus U-Net may be adapted for future use in newer non-invasive colorectal cancer screening and, more broadly, for other biomedical image segmentation tasks that similarly involve class imbalance and require efficiency.
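The two evaluation metrics this abstract names, DSC and IoU, have simple set-overlap definitions. A minimal sketch on flat binary masks (the toy masks are purely illustrative, not data from the paper):

```python
# Dice similarity coefficient (DSC) and Intersection over Union (IoU)
# for binary segmentation masks, represented here as flat 0/1 lists.
# DSC = 2|A∩B| / (|A| + |B|);  IoU = |A∩B| / |A∪B|.

def dice_iou(pred, truth):
    """Return (DSC, IoU) for two equal-length binary masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    a, b = sum(pred), sum(truth)
    union = a + b - inter
    dsc = 2 * inter / (a + b) if (a + b) else 1.0  # both masks empty -> perfect
    iou = inter / union if union else 1.0
    return dsc, iou

pred  = [1, 1, 1, 0, 0, 0]
truth = [0, 1, 1, 1, 0, 0]
dsc, iou = dice_iou(pred, truth)   # inter=2, |A|=|B|=3, union=4
print(dsc, iou)                    # DSC = 4/6 ≈ 0.667, IoU = 2/4 = 0.5
```

The two metrics are monotonically related (DSC = 2·IoU/(1+IoU)), which is why papers such as this one report both but rank methods identically under either.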
Affiliation(s)
- Michael Yeung: Department of Radiology, University of Cambridge, Cambridge, CB2 0QQ, United Kingdom; School of Clinical Medicine, University of Cambridge, Cambridge, CB2 0SP, United Kingdom
- Evis Sala, Leonardo Rundo: Department of Radiology, University of Cambridge, Cambridge, CB2 0QQ, United Kingdom; Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, CB2 0RE, United Kingdom
- Carola-Bibiane Schönlieb: Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, CB3 0WA, United Kingdom
37
Chen BL, Wan JJ, Chen TY, Yu YT, Ji M. A self-attention based faster R-CNN for polyp detection from colonoscopy images. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.103019]
38
CMVHHO-DKMLC: A Chaotic Multi Verse Harris Hawks optimization (CMV-HHO) algorithm based deep kernel optimized machine learning classifier for medical diagnosis. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.103034]
39
Zhou J, Hu N, Huang ZY, Song B, Wu CC, Zeng FX, Wu M. Application of artificial intelligence in gastrointestinal disease: a narrative review. Ann Transl Med 2021; 9:1188. [PMID: 34430629] [PMCID: PMC8350704] [DOI: 10.21037/atm-21-3001]
Abstract
Objective We collected evidence on the application of artificial intelligence (AI) in the field of gastroenterology. The review is organized along two axes, endoscopy type and gastrointestinal disease, and briefly summarizes the challenges and future directions in this field. Background Owing to the advancement of computational power and a surge of available data, a solid foundation has been laid for the growth of AI. In particular, varied machine learning (ML) techniques have emerged in endoscopic image analysis, and AI has been widely applied to gastrointestinal endoscopy to improve the accuracy and efficiency of clinicians. Methods The PubMed electronic database was searched using keywords including “AI”, “ML”, “deep learning (DL)”, “convolutional neural network”, and “endoscopy” (such as white light endoscopy (WLE), narrow band imaging (NBI) endoscopy, magnifying endoscopy with narrow band imaging (ME-NBI), chromoendoscopy, endocytoscopy (EC), and capsule endoscopy (CE)). Search results were assessed for relevance and then used for detailed discussion. Conclusions This review describes the basic knowledge of AI, ML, and DL and summarizes the application of AI across endoscope types and gastrointestinal diseases. Finally, the challenges and directions of AI in clinical application are discussed. At present, AI has solved some clinical problems, but more remains to be done.
Collapse
Affiliation(s)
- Jun Zhou
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu, China.,Department of Clinical Research Center, Dazhou Central Hospital, Dazhou, China
| | - Na Hu
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
| | - Zhi-Yin Huang
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu, China
| | - Bin Song
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
| | - Chun-Cheng Wu
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu, China
| | - Fan-Xin Zeng
- Department of Clinical Research Center, Dazhou Central Hospital, Dazhou, China
| | - Min Wu
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu, China.,Department of Clinical Research Center, Dazhou Central Hospital, Dazhou, China
| |
Collapse
|
40
|
Parsa N, Byrne MF. Artificial intelligence for identification and characterization of colonic polyps. Ther Adv Gastrointest Endosc 2021; 14:26317745211014698. [PMID: 34263163 PMCID: PMC8252334 DOI: 10.1177/26317745211014698] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/20/2021] [Accepted: 04/07/2021] [Indexed: 12/27/2022] Open
Abstract
Colonoscopy remains the gold-standard exam for colorectal cancer screening due to its ability to detect and resect pre-cancerous lesions in the colon. However, its performance is greatly operator dependent. Studies have shown that up to one-quarter of colorectal polyps can be missed on a single colonoscopy, leading to high rates of interval colorectal cancer. In addition, the American Society for Gastrointestinal Endoscopy has proposed the “resect-and-discard” and “diagnose-and-leave” strategies for diminutive colorectal polyps to reduce the costs of unnecessary polyp resection and pathology evaluation. However, the performance of optical biopsy has been suboptimal in community practice. With recent improvements in machine-learning techniques, artificial intelligence–assisted computer-aided detection and diagnosis have been increasingly utilized by endoscopists. The application of computer-aided detection to real-time colonoscopy has been shown to increase the adenoma detection rate while decreasing withdrawal time, and to improve endoscopists’ optical biopsy accuracy while reducing the time needed to make a diagnosis. These are promising steps toward the standardization and improvement of colonoscopy quality, and the implementation of “resect-and-discard” and “diagnose-and-leave” strategies. Yet, issues such as real-world application and regulatory approval need to be addressed before artificial intelligence models can be successfully implemented in clinical practice. In this review, we summarize the recent literature on the application of artificial intelligence for the detection and characterization of colorectal polyps and review the limitations of existing artificial intelligence technologies and future directions for this field.
Collapse
Affiliation(s)
- Nasim Parsa
- Division of Gastroenterology and Hepatology, Department of Medicine, University of Missouri, Columbia, MO 65211, USA
| | - Michael F Byrne
- Division of Gastroenterology, Department of Medicine, The University of British Columbia, Vancouver, BC, Canada; Satisfai Health, Vancouver, BC, Canada
| |
Collapse
|
41
|
Shah N, Jyala A, Patel H, Makker J. Utility of artificial intelligence in colonoscopy. Artif Intell Gastrointest Endosc 2021; 2:79-88. [DOI: 10.37126/aige.v2.i3.79] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/02/2021] [Revised: 06/20/2021] [Accepted: 06/28/2021] [Indexed: 02/06/2023] Open
Abstract
Colorectal cancer is one of the major causes of death worldwide. Colonoscopy is the most important tool for identifying neoplastic lesions in early stages and resecting them in a timely manner, which helps reduce mortality related to colorectal cancer. However, the quality of colonoscopy findings depends on the expertise of the endoscopist, and thus the rate of missed adenomas or polyps cannot be controlled. It is desirable to standardize the quality of colonoscopy by reducing the number of missed adenomas/polyps. The introduction of artificial intelligence (AI) into the field of medicine has become popular among physicians. Recent studies suggest that the application of AI in colonoscopy can help reduce the miss rate and increase the colorectal cancer detection rate. Moreover, AI assistance during colonoscopy has also been utilized in patients with inflammatory bowel disease to improve diagnostic accuracy, assess disease severity, and predict clinical outcomes. We conducted a literature review of the available evidence on the use of AI in colonoscopy. In this review article, we discuss the principles, applications, limitations, and future aspects of AI in colonoscopy.
Collapse
Affiliation(s)
- Niel Shah
- Department of Internal Medicine, BronxCare Hospital Center, Bronx, NY 10457, United States
| | - Abhilasha Jyala
- Department of Internal Medicine, BronxCare Hospital Center, Bronx, NY 10457, United States
| | - Harish Patel
- Department of Internal Medicine, Gastroenterology, BronxCare Hospital Center, Bronx, NY 10457, United States
| | - Jasbir Makker
- Department of Internal Medicine, Gastroenterology, BronxCare Hospital Center, Bronx, NY 10457, United States
| |
Collapse
|
42
|
Shah N, Jyala A, Patel H, Makker J. Utility of artificial intelligence in colonoscopy. Artif Intell Gastrointest Endosc 2021. [DOI: 10.37126/aige.v2.i3.78] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/19/2022] Open
|
43
|
Bardhi O, Sierra-Sosa D, Garcia-Zapirain B, Bujanda L. Deep Learning Models for Colorectal Polyps. INFORMATION 2021; 12:245. [DOI: 10.3390/info12060245] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/19/2023] Open
Abstract
Colorectal cancer is one of the main causes of cancer incidence and cancer deaths worldwide. Undetected colon polyps, be they benign or malignant, lead to late diagnosis of colorectal cancer. Computer-aided devices have helped to decrease the polyp miss rate. The application of deep learning algorithms and techniques has escalated during the last decade. Many scientific studies have been published on detecting, localizing, and classifying colon polyps. We present here a brief review of the latest published studies. We compare the accuracy of these studies with our results obtained from training and testing three independent datasets using a convolutional neural network and autoencoder model. A train, validate, and test split was performed for each dataset (75%, 15%, and 15%, respectively). An accuracy of 0.937 was achieved for CVC-ColonDB, 0.951 for CVC-ClinicDB, and 0.967 for ETIS-LaribPolypDB. Our results suggest slight improvements compared to the algorithms used to date.
Collapse
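The train/validate/test partitioning described in this abstract can be sketched in a few lines. Note that the stated fractions (75%, 15%, and 15%) sum to more than 100%; the sketch below keeps 75% training and 15% validation and treats the test set as the remainder. Function and variable names are illustrative, not taken from the paper.

```python
import random

def split_dataset(items, train_frac=0.75, val_frac=0.15, seed=0):
    """Shuffle and split a list of samples into train/validation/test partitions.
    The test partition is whatever remains after train and validation are taken."""
    rng = random.Random(seed)          # fixed seed for a reproducible split
    shuffled = items[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

frames = list(range(1000))             # stand-in for polyp image identifiers
train, val, test = split_dataset(frames)
print(len(train), len(val), len(test))  # prints: 750 150 100
```

With 75%/15% taken for training and validation, the remaining test share is 10%, which is one plausible reading of the abstract's over-full percentages.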
Affiliation(s)
- Ornela Bardhi
- eVIDA Lab, Faculty of Engineering, University of Deusto, 48007 Bilbao, Spain
| | - Daniel Sierra-Sosa
- Department of Computer Science & Information Technology, Hood College, Frederick, MD 21701, USA
| | | | - Luis Bujanda
- Department of Gastroenterology, Instituto Biodonostia, Centro de Investigación Biomédica en Red de Enfermedades Hepáticas y Digestivas (CIBERehd), Universidad del País Vasco (UPV/EHU), 20014 San Sebastián, Spain
| |
Collapse
|
44
|
Saito H, Tanimoto T, Ozawa T, Ishihara S, Fujishiro M, Shichijo S, Hirasawa D, Matsuda T, Endo Y, Tada T. Automatic anatomical classification of colonoscopic images using deep convolutional neural networks. Gastroenterol Rep (Oxf) 2021; 9:226-233. [PMID: 34316372 PMCID: PMC8309686 DOI: 10.1093/gastro/goaa078] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/24/2020] [Revised: 03/25/2020] [Accepted: 05/13/2020] [Indexed: 01/10/2023] Open
Abstract
BACKGROUND Colonoscopy can detect colorectal diseases, including cancers, polyps, and inflammatory bowel diseases. A computer-aided diagnosis (CAD) system using deep convolutional neural networks (CNNs) that can recognize anatomical locations during a colonoscopy could efficiently assist practitioners. We aimed to construct a CAD system using a CNN to distinguish colorectal images of the cecum, ascending colon, transverse colon, descending colon, sigmoid colon, and rectum. METHOD We constructed a CNN by training it on 9,995 colonoscopy images and tested its performance on 5,121 independent colonoscopy images categorized according to seven anatomical locations: the terminal ileum, the cecum, ascending colon to transverse colon, descending colon to sigmoid colon, the rectum, the anus, and indistinguishable parts. We examined images taken during total colonoscopies performed between January 2017 and November 2017 at a single center. We evaluated the concordance between the diagnoses of endoscopists and those of the CNN. The main outcomes of the study were the sensitivity and specificity of the CNN for the anatomical categorization of colonoscopy images. RESULTS The constructed CNN recognized the anatomical locations of colonoscopy images with the following areas under the curve: 0.979 for the terminal ileum; 0.940 for the cecum; 0.875 for ascending colon to transverse colon; 0.846 for descending colon to sigmoid colon; 0.835 for the rectum; and 0.992 for the anus. During the test process, the CNN system correctly recognized 66.6% of images. CONCLUSION We constructed a new CNN system with clinically relevant performance for recognizing the anatomical locations of colonoscopy images, a first step toward a CAD system that supports endoscopists during colonoscopy and provides assurance of the quality of the procedure.
Collapse
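The study's main outcome metrics, per-location sensitivity and specificity, can be computed from labeled predictions in a one-vs-rest fashion. A minimal sketch follows; the labels and toy data are illustrative, not taken from the paper.

```python
def sensitivity_specificity(y_true, y_pred, positive_class):
    """One-vs-rest sensitivity and specificity for a single anatomical class."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == positive_class and p == positive_class for t, p in pairs)
    fn = sum(t == positive_class and p != positive_class for t, p in pairs)
    tn = sum(t != positive_class and p != positive_class for t, p in pairs)
    fp = sum(t != positive_class and p == positive_class for t, p in pairs)
    return tp / (tp + fn), tn / (tn + fp)

# Toy example using three of the seven location labels from the study.
y_true = ["cecum", "rectum", "anus", "cecum", "rectum", "cecum"]
y_pred = ["cecum", "rectum", "cecum", "cecum", "anus", "cecum"]
sens, spec = sensitivity_specificity(y_true, y_pred, "cecum")
print(round(sens, 3), round(spec, 3))  # prints: 1.0 0.667
```

Repeating this per class (and averaging) yields the kind of per-location performance table the study reports alongside its AUC values.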
Affiliation(s)
- Hiroaki Saito
- Department of Gastroenterology, Sendai Kousei Hospital, Miyagi, Japan
| | | | - Tsuyoshi Ozawa
- Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan
- Department of Surgery, Teikyo University School of Medicine, Tokyo, Japan
| | - Soichiro Ishihara
- Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan
- Department of Surgical Oncology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
| | - Mitsuhiro Fujishiro
- Department of Gastroenterology and Hepatology, Nagoya University Graduate School of Medicine, Aichi, Japan
| | - Satoki Shichijo
- Department of Gastrointestinal Oncology, Osaka International Cancer Institute, Osaka, Japan
| | - Dai Hirasawa
- Department of Gastroenterology, Sendai Kousei Hospital, Miyagi, Japan
| | - Tomoki Matsuda
- Department of Gastroenterology, Sendai Kousei Hospital, Miyagi, Japan
| | - Yuma Endo
- AI Medical Service, Inc., Tokyo, Japan
| | - Tomohiro Tada
- Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan
- Department of Surgical Oncology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- AI Medical Service, Inc., Tokyo, Japan
| |
Collapse
|
45
|
Kim GH, Sung ES, Nam KW. Automated laryngeal mass detection algorithm for home-based self-screening test based on convolutional neural network. Biomed Eng Online 2021; 20:51. [PMID: 34034766 PMCID: PMC8144695 DOI: 10.1186/s12938-021-00886-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2020] [Accepted: 05/11/2021] [Indexed: 01/10/2023] Open
Abstract
BACKGROUND Early detection of laryngeal masses without periodic hospital visits is essential for improving the likelihood of full recovery and the long-term survival rate after prompt treatment, as well as for reducing the risk of clinical infection. RESULTS We first propose a convolutional neural network model for automated laryngeal mass detection based on diagnostic images captured at hospitals. We then propose a pilot system, composed of an embedded controller, a camera module, and an LCD display, that can be utilized for a home-based self-screening test. In evaluating the model's performance, the experimental results indicated a final validation loss of 0.9152 and an F1-score of 0.8371 before post-processing. Additionally, the F1-score of the original computer algorithm on 100 randomly selected color-printed test images was 0.8534 after post-processing, while that of the embedded pilot system was 0.7672. CONCLUSIONS The proposed technique is expected to increase the rate of early detection of laryngeal masses without the risk of spreading clinical infection, which could improve convenience and ensure the safety of individuals, patients, and medical staff.
Collapse
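The F1-scores reported above combine precision and recall into a single number. A minimal helper shows the relationship; the counts in the example are invented for illustration, not the paper's figures.

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall, the metric quoted in the abstract."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 80 true positives, 10 false positives, 20 false negatives
print(round(f1_score(80, 10, 20), 4))  # prints: 0.8421
```

Because it is a harmonic mean, F1 is dragged down by whichever of precision or recall is weaker, which is why it is preferred over plain accuracy for detection tasks with imbalanced classes.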
Affiliation(s)
- Gun Ho Kim
- Interdisciplinary Program in Biomedical Engineering, School of Medicine, Pusan National University, Busan, South Korea
| | - Eui-Suk Sung
- Department of Otolaryngology-Head and Neck Surgery, Pusan National University Yangsan Hospital, Yangsan, South Korea.
- Research Institute for Convergence of Biomedical Science and Technology, Pusan National University Yangsan Hospital, Yangsan, South Korea.
| | - Kyoung Won Nam
- Research Institute for Convergence of Biomedical Science and Technology, Pusan National University Yangsan Hospital, Yangsan, South Korea.
- Department of Biomedical Engineering, Pusan National University Yangsan Hospital, Yangsan, South Korea.
- Department of Biomedical Engineering, School of Medicine, Pusan National University, 49 Busandaehak-ro, Mulgeum-eup, Yangsan, Gyeongsangnam-do, 50629, South Korea.
| |
Collapse
|
46
|
Kim KO, Kim EY. Application of Artificial Intelligence in the Detection and Characterization of Colorectal Neoplasm. Gut Liver 2021; 15:346-353. [PMID: 32773386 PMCID: PMC8129657 DOI: 10.5009/gnl20186] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/12/2020] [Accepted: 06/28/2020] [Indexed: 12/19/2022] Open
Abstract
Endoscopists have always striven for the perfect colonoscopy, and the application of artificial intelligence (AI) using deep-learning algorithms is one of the promising supportive options for the detection and characterization of colorectal polyps during colonoscopy. Many retrospective studies of the real-time application of AI using convolutional neural networks have shown improved colorectal polyp detection. Moreover, a recent randomized clinical trial reported additional polyp detection with a shorter analysis time. Studies on polyp characterization have provided additional promising results. Application of AI with narrow band imaging for real-time prediction of the pathology of diminutive polyps resulted in high diagnostic accuracy. In addition, application of AI with endocytoscopy or confocal laser endomicroscopy has been investigated for real-time cellular diagnosis, and the diagnostic accuracy in some studies was comparable to that of pathologists. With AI technology, we can expect a higher polyp detection rate with reduced time and cost by avoiding unnecessary procedures, resulting in enhanced colonoscopy efficiency. However, for AI application in daily clinical practice, more prospective studies with minimized selection bias, consensus on standardized utilization, and regulatory approval are needed. (Gut Liver 2021;15:346-353)
Collapse
Affiliation(s)
- Kyeong Ok Kim
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Yeungnam University College of Medicine, Daegu, Korea
| | - Eun Young Kim
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Daegu Catholic University School of Medicine, Daegu, Korea
| |
Collapse
|
47
|
Podlasek J, Heesch M, Podlasek R, Kilisiński W, Filip R. Real-time deep learning-based colorectal polyp localization on clinical video footage achievable with a wide array of hardware configurations. Endosc Int Open 2021; 9:E741-E748. [PMID: 33937516 PMCID: PMC8062241 DOI: 10.1055/a-1388-6735] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/01/2020] [Accepted: 12/30/2020] [Indexed: 02/08/2023] Open
Abstract
Background and study aims Several computer-assisted polyp detection systems have been proposed, but they have various limitations, from utilizing outdated neural network architectures to a requirement for multi-graphics processing unit (GPU) processing, to validating on small or non-robust datasets. To address these problems, we developed a system based on a state-of-the-art convolutional neural network architecture able to detect polyps in real time on a single GPU and tested on both public datasets and full clinical examination recordings. Methods The study comprised 165 colonoscopy procedure recordings and 2678 still photos gathered retrospectively. The system was trained on 81,962 polyp frames in total and then tested on footage from 42 colonoscopies and CVC-ClinicDB, CVC-ColonDB, Hyper-Kvasir, and ETIS-Larib public datasets. Clinical videos were evaluated for polyp detection and false-positive rates whereas the public datasets were assessed for F1 score. The system was tested for runtime performance on a wide array of hardware. Results The performance on public datasets varied from an F1 score of 0.727 to 0.942. On full examination videos, it detected 94 % of the polyps found by the endoscopist with a 3 % false-positive rate and identified additional polyps that were missed during initial video assessment. The system's runtime fits within the real-time constraints on all but one of the hardware configurations. Conclusions We have created a polyp detection system with a post-processing pipeline that works in real time on a wide array of hardware. The system does not require extensive computational power, which could help broaden the adaptation of new commercially available systems.
Collapse
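Whether a detector "fits within the real-time constraints" on a given hardware configuration reduces to comparing per-frame latency against the video frame period. A hedged sketch of that check follows; the frame rate and latency figures are illustrative, not the paper's measurements.

```python
def fits_realtime(latency_ms: float, fps: float = 25.0) -> bool:
    """True when per-frame inference plus post-processing keeps up with the stream."""
    frame_budget_ms = 1000.0 / fps  # 40 ms per frame at 25 fps
    return latency_ms <= frame_budget_ms

# Hypothetical measurements on three hardware configurations.
for name, latency in [("desktop GPU", 18.0), ("laptop GPU", 35.0), ("CPU only", 120.0)]:
    print(name, fits_realtime(latency))
```

A system that misses the budget on only one tested configuration, as the abstract describes, would fail this check for exactly that row.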
Affiliation(s)
- Jeremi Podlasek
- Department of Technology, moretho Ltd., Manchester, United Kingdom
| | - Mateusz Heesch
- Department of Technology, moretho Ltd., Manchester, United Kingdom
- Department of Robotics and Mechatronics, AGH University of Science and Technology, Kraków, Poland
| | - Robert Podlasek
- Department of Surgery with the Trauma and Orthopedic Division, District Hospital in Strzyżów, Strzyżów, Poland
| | - Wojciech Kilisiński
- Department of Gastroenterology with IBD Unit, Voivodship Hospital No 2 in Rzeszow, Rzeszów, Poland
| | - Rafał Filip
- Department of Gastroenterology with IBD Unit, Voivodship Hospital No 2 in Rzeszow, Rzeszów, Poland
- Faculty of Medicine, University of Rzeszów, Rzeszów, Poland
| |
Collapse
|
48
|
Mitsala A, Tsalikidis C, Pitiakoudis M, Simopoulos C, Tsaroucha AK. Artificial Intelligence in Colorectal Cancer Screening, Diagnosis and Treatment. A New Era. Curr Oncol 2021; 28:1581-1607. [PMID: 33922402 PMCID: PMC8161764 DOI: 10.3390/curroncol28030149] [Citation(s) in RCA: 122] [Impact Index Per Article: 30.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Revised: 04/09/2021] [Accepted: 04/20/2021] [Indexed: 12/24/2022]
Abstract
The development of artificial intelligence (AI) algorithms has permeated the medical field with great success. The widespread use of AI technology in diagnosing and treating several types of cancer, especially colorectal cancer (CRC), is now attracting substantial attention. CRC, the third most commonly diagnosed malignancy in both men and women, is considered a leading cause of cancer-related deaths globally. Our review aims to provide in-depth knowledge and analysis of AI applications in CRC screening, diagnosis, and treatment based on the current literature. We also explore the role of recent advances in AI systems in medical diagnosis and therapy, with several promising results. CRC is a highly preventable disease, and AI-assisted techniques in routine screening represent a pivotal step toward reducing the incidence of this malignancy. So far, computer-aided detection and characterization systems have been developed to increase the detection rate of adenomas. Furthermore, CRC treatment is entering a new era with robotic surgery and novel computer-assisted drug delivery techniques. At the same time, healthcare is rapidly moving toward precision or personalized medicine. Machine learning models have the potential to contribute to individualized cancer care and transform the future of medicine.
Collapse
Affiliation(s)
- Athanasia Mitsala
- Second Department of Surgery, University General Hospital of Alexandroupolis, Democritus University of Thrace Medical School, Dragana, 68100 Alexandroupolis, Greece; (C.T.); (M.P.); (C.S.)
- Correspondence: ; Tel.: +30-6986423707
| | - Christos Tsalikidis
- Second Department of Surgery, University General Hospital of Alexandroupolis, Democritus University of Thrace Medical School, Dragana, 68100 Alexandroupolis, Greece; (C.T.); (M.P.); (C.S.)
| | - Michail Pitiakoudis
- Second Department of Surgery, University General Hospital of Alexandroupolis, Democritus University of Thrace Medical School, Dragana, 68100 Alexandroupolis, Greece; (C.T.); (M.P.); (C.S.)
| | - Constantinos Simopoulos
- Second Department of Surgery, University General Hospital of Alexandroupolis, Democritus University of Thrace Medical School, Dragana, 68100 Alexandroupolis, Greece; (C.T.); (M.P.); (C.S.)
| | - Alexandra K. Tsaroucha
- Laboratory of Experimental Surgery & Surgical Research, Democritus University of Thrace Medical School, Dragana, 68100 Alexandroupolis, Greece;
| |
Collapse
|
49
|
Pang X, Zhao Z, Weng Y. The Role and Impact of Deep Learning Methods in Computer-Aided Diagnosis Using Gastrointestinal Endoscopy. Diagnostics (Basel) 2021; 11:694. [PMID: 33919669 PMCID: PMC8069844 DOI: 10.3390/diagnostics11040694] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2021] [Revised: 03/24/2021] [Accepted: 04/01/2021] [Indexed: 12/18/2022] Open
Abstract
At present, the application of artificial intelligence (AI) based on deep learning in the medical field has become more extensive and better suited to clinical practice than traditional machine learning. Applying traditional machine learning approaches in clinical practice is very challenging because medical data are usually complex and poorly characterized. However, deep learning methods with self-learning abilities can effectively exploit powerful computing resources to learn intricate and abstract features. Thus, they are promising for the classification and detection of lesions in gastrointestinal endoscopy using a computer-aided diagnosis (CAD) system based on deep learning. This study reviewed the research and development of CAD systems based on deep learning that assist doctors in classifying and detecting lesions in the stomach, intestines, and esophagus. It also summarized the limitations of current methods and presented prospects for future research.
Collapse
Affiliation(s)
- Xuejiao Pang
- School of Control Science and Engineering, Shandong University, Jinan 250061, China;
| | - Zijian Zhao
- School of Control Science and Engineering, Shandong University, Jinan 250061, China;
| | - Ying Weng
- School of Computer Science, University of Nottingham, Nottingham NG7 2RD, UK;
| |
Collapse
|
50
|
Liu X, Guo X, Liu Y, Yuan Y. Consolidated domain adaptive detection and localization framework for cross-device colonoscopic images. Med Image Anal 2021; 71:102052. [PMID: 33895616 DOI: 10.1016/j.media.2021.102052] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2020] [Revised: 02/19/2021] [Accepted: 03/22/2021] [Indexed: 12/17/2022]
Abstract
Automatic polyp detection has been proven to be crucial in improving diagnostic accuracy and reducing colorectal cancer mortality during the precancerous stage. However, the performance of deep neural networks may degrade severely when they are deployed on polyp data from a distinct domain. This domain distinction can be caused by different scanners, hospitals, or imaging protocols. In this paper, we propose a consolidated domain adaptive detection and localization framework to effectively bridge the domain gap between different colonoscopic datasets, consisting of two parts: pixel-level adaptation and hierarchical feature-level adaptation. For the pixel-level adaptation part, we propose a Gaussian Fourier Domain Adaptation (GFDA) method to sample matched source and target image pairs from Gaussian distributions and then unify their styles via low-level spectrum replacement, which can reduce the domain discrepancy of cross-device polyp datasets at the appearance level without distorting their contents. The hierarchical feature-level adaptation part comprises a Hierarchical Attentive Adaptation (HAA) module to minimize the domain discrepancy in high-level semantics and an Iconic Concentrative Adaptation (ICA) module to perform reliable instance alignment. These two modules are regularized by a Generalized Consistency Regularizer (GCR) to maintain the consistency of their domain predictions. We further extend our framework to the polyp localization task and present a Centre Besiegement (CB) loss for better location optimization. Experimental results show that our framework outperforms other domain adaptation detectors by a large margin in the detection task while achieving a state-of-the-art recall rate of 87.5% in the localization task. The source code is available at https://github.com/CityU-AIM-Group/ConsolidatedPolypDA.
Collapse
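The pixel-level GFDA step unifies image styles "via low-level spectrum replacement". The core operation, swapping the low-frequency amplitude spectrum while keeping the source phase, can be sketched in NumPy as follows. This is a minimal grayscale sketch under stated assumptions: the Gaussian pair sampling and all feature-level modules of the paper are omitted, and the function name and `beta` band-size parameter are illustrative.

```python
import numpy as np

def low_freq_spectrum_swap(src, tgt, beta=0.1):
    """Replace the low-frequency amplitude spectrum of `src` with that of `tgt`
    while keeping src's phase (its content). `beta` sets the swapped band size."""
    fs, ft = np.fft.fft2(src), np.fft.fft2(tgt)
    amp_s, pha_s = np.abs(fs), np.angle(fs)
    amp_t = np.abs(ft)
    # Centre the spectra so the low frequencies sit in the middle.
    amp_s, amp_t = np.fft.fftshift(amp_s), np.fft.fftshift(amp_t)
    h, w = src.shape
    b = int(min(h, w) * beta)
    ch, cw = h // 2, w // 2
    amp_s[ch - b:ch + b + 1, cw - b:cw + b + 1] = \
        amp_t[ch - b:ch + b + 1, cw - b:cw + b + 1]
    # Recombine the target's low-frequency amplitude with the source phase.
    out = np.fft.ifft2(np.fft.ifftshift(amp_s) * np.exp(1j * pha_s))
    return np.real(out)

rng = np.random.default_rng(0)
src = rng.random((32, 32)) + 1.0  # stand-in "source-device" frame
tgt = rng.random((32, 32)) + 3.0  # stand-in "target-device" frame (brighter style)
out = low_freq_spectrum_swap(src, tgt)
# Swapping the DC term pulls the output's mean brightness to the target's.
print(out.shape, bool(abs(out.mean() - tgt.mean()) < 1e-6))  # prints: (32, 32) True
```

Because only the low-frequency amplitudes (coarse appearance, brightness, color cast) are exchanged, the high-frequency structure that carries polyp content stays with the source image, which is the intuition behind reducing appearance-level domain discrepancy "without distorting their contents".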
Affiliation(s)
- Xinyu Liu
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China
| | - Xiaoqing Guo
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China
| | - Yajie Liu
- Department of Radiation Oncology, Peking University Shenzhen Hospital, Shenzhen, China
| | - Yixuan Yuan
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China.
| |
Collapse
|