1. Ren X, Zhou W, Yuan N, Li F, Ruan Y, Zhou H. Prompt-based polyp segmentation during endoscopy. Med Image Anal 2025; 102:103510. [PMID: 40073580] [DOI: 10.1016/j.media.2025.103510]
Abstract
Accurate judgment and identification of polyp size is crucial in endoscopic diagnosis. However, the indistinct boundaries of polyps lead to missegmentation and missed cancer diagnoses. In this paper, a prompt-based polyp segmentation method (PPSM) is proposed to assist early-stage cancer diagnosis during endoscopy, combining endoscopists' experience with artificial intelligence. First, a prompt-based polyp segmentation network (PPSN) is presented, comprising a prompt encoding module (PEM), a feature extraction encoding module (FEEM), and a mask decoding module (MDM). The PEM encodes prompts that guide the FEEM in feature extraction and the MDM in mask generation, so the PPSN can segment polyps efficiently. Second, endoscopists' ocular attention data (gazes) are used as prompts, which improve the PPSN's segmentation accuracy and can be collected effectively in real-world settings. To reinforce the PPSN's stability, non-uniform dot-matrix prompts are generated to compensate for frame loss during eye tracking. Moreover, a data augmentation method based on the segment anything model (SAM) is introduced to enrich the prompt dataset and improve the PPSN's adaptability. Experiments demonstrate the PPSM's accuracy and real-time capability, and cross-training and cross-testing on four datasets show its generalization. Based on these results, a disposable electronic endoscope with a real-time auxiliary diagnosis function for early cancer, together with an image processor, has been developed. Part of the code and the method for generating the prompt dataset are available at https://github.com/XinZhenRen/PPSM.
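The non-uniform dot-matrix fallback described above can be illustrated with a short sketch. This is a hypothetical NumPy stand-in (the authors' actual prompt-generation code is in their repository): it lays out an n × n grid of prompt points whose spacing shrinks toward the image centre, where a polyp is most often framed when gaze data are lost.

```python
import numpy as np

def dot_matrix_prompt(height, width, n=9, power=1.5):
    """Generate an n x n non-uniform dot matrix of (x, y) prompt points,
    denser near the image centre, as a stand-in for lost gaze frames."""
    u = np.linspace(-1.0, 1.0, n)
    # |u|^power with power > 1 compresses spacing around 0,
    # so dots cluster at the centre and spread out toward the edges.
    warped = np.sign(u) * np.abs(u) ** power
    xs = (warped + 1.0) / 2.0 * (width - 1)
    ys = (warped + 1.0) / 2.0 * (height - 1)
    gx, gy = np.meshgrid(xs, ys)
    return np.stack([gx.ravel(), gy.ravel()], axis=1)  # shape (n*n, 2)
```

The `power` parameter controls how strongly the dots concentrate at the centre; `power=1.0` recovers a uniform grid.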
Affiliation(s)
- Xinzhen Ren
- Shanghai Key Laboratory of Power Station Automation Technology, School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200444, China
- Wenju Zhou
- Shanghai Key Laboratory of Power Station Automation Technology, School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200444, China
- Naitong Yuan
- Shanghai Key Laboratory of Power Station Automation Technology, School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200444, China
- Fang Li
- Department of Obstetrics and Gynecology, Shanghai East Hospital, School of Medicine, Tongji University, Shanghai 200120, China
- Yetian Ruan
- Department of Obstetrics and Gynecology, Shanghai East Hospital, School of Medicine, Tongji University, Shanghai 200120, China
- Huiyu Zhou
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
2. Tawheed A, Ismail A, Amer MS, Elnahas O, Mowafy T. Capsule endoscopy: Do we still need it after 24 years of clinical use? World J Gastroenterol 2025; 31:102692. [PMID: 39926220] [PMCID: PMC11718605] [DOI: 10.3748/wjg.v31.i5.102692]
Abstract
In this letter, we comment on a recent article published in the World Journal of Gastroenterology by Xiao et al, where the authors aimed to use a deep learning model to automatically detect gastrointestinal lesions during capsule endoscopy (CE). CE was first presented in 2000 and was approved by the Food and Drug Administration in 2001. The indications of CE overlap with those of regular diagnostic endoscopy. However, in clinical practice, CE is usually used to detect lesions in areas inaccessible to standard endoscopies or in cases of bleeding that might be missed during conventional endoscopy. Since the emergence of CE, many physiological and technical challenges have been faced and addressed. In this letter, we summarize the current challenges and briefly mention the proposed methods to overcome these challenges to answer a central question: Do we still need CE?
Affiliation(s)
- Ahmed Tawheed
- Department of Endemic Medicine, Faculty of Medicine, Helwan University, Cairo 11795, Egypt
- Alaa Ismail
- Faculty of Medicine, Helwan University, Cairo 11795, Egypt
- Mohab S Amer
- Faculty of Medicine, Helwan University, Cairo 11795, Egypt
- Department of Research, SMART Company for Research Services, Cairo 11795, Egypt
- Osama Elnahas
- Faculty of Medicine, Helwan University, Cairo 11795, Egypt
- Tawhid Mowafy
- Department of Internal Medicine, Gardenia Medical Center, Doha 0000, Qatar
3. Raju ASN, Venkatesh K, Gatla RK, Konakalla EP, Eid MM, Titova N, Ghoneim SSM, Ghaly RNR. Colorectal cancer detection with enhanced precision using a hybrid supervised and unsupervised learning approach. Sci Rep 2025; 15:3180. [PMID: 39863646] [PMCID: PMC11763007] [DOI: 10.1038/s41598-025-86590-y]
Abstract
This work introduces a hybrid ensemble framework for the detection and segmentation of colorectal cancer that combines supervised classification with unsupervised clustering to produce more interpretable and accurate diagnostic results. The pipeline integrates CNN models (ADa-22 and AD-22), transformer networks, and an SVM classifier, and is evaluated on the CVC-ClinicDB dataset of 1650 colonoscopy images labelled as polyp or non-polyp. The best-performing ensemble, AD-22 + Transformer + SVM, achieved an AUC of 0.99, a training accuracy of 99.50%, and a testing accuracy of 99.00%, with per-class accuracies of 97.50% (polyps) and 99.30% (non-polyps) and recalls of 97.80% and 98.90%, respectively, identifying both cancerous and healthy regions reliably. For localization, the framework combines K-means clustering with bounding-box visualization, improving segmentation and yielding a silhouette score of 0.73 for the best cluster configuration. Hyperparameter optimization of learning and dropout rates balances performance and generalization while suppressing overfitting. The hybrid design addresses shortcomings of earlier approaches by pairing CNN-based feature extraction with transformer attention mechanisms and the fine decision boundary of the SVM, then refining the output with unsupervised clustering for clearer visualization. This holistic framework therefore improves both classification and segmentation, yields interpretable outcomes for more rigorous benchmarking of colorectal cancer detection, and moves closer to clinical feasibility.
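The unsupervised half of the pipeline can be illustrated in isolation. The sketch below is a generic NumPy k-means plus silhouette computation, not the authors' implementation; the CNN/transformer feature extraction and SVM stages that would normally precede it are omitted.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means; returns labels and centroids. Assumes no cluster
    empties out during iteration (fine for well-separated toy data)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

def silhouette(X, labels):
    """Mean silhouette score; assumes every cluster has >= 2 samples."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    n = len(X)
    scores = []
    for i in range(n):
        own = labels == labels[i]
        a = d[i, own & (np.arange(n) != i)].mean()   # intra-cluster cohesion
        b = min(d[i, labels == c].mean()             # nearest-cluster separation
                for c in np.unique(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))
```

A silhouette near 1 indicates tight, well-separated clusters; the 0.73 reported above sits in the "reasonable structure" range of the metric.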
Affiliation(s)
- Akella S Narasimha Raju
- Department of Computer Science and Engineering (Data Science), Institute of Aeronautical Engineering, Dundigul, Hyderabad, Telangana, 500043, India
- K Venkatesh
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai, Tamilnadu, 603203, India
- Ranjith Kumar Gatla
- Department of Computer Science and Engineering (Data Science), Institute of Aeronautical Engineering, Dundigul, Hyderabad, Telangana, 500043, India
- Eswara Prasad Konakalla
- Department of Physics and Electronics, B.V.Raju College, Bhimavaram, Garagaparru Road, Kovvada, Andhra Pradesh, 534202, India
- Marwa M Eid
- College of Applied Medical Science, Taif University, 21944, Taif, Saudi Arabia
- Nataliia Titova
- Biomedical Engineering Department, National University Odesa Polytechnic, Odesa, 65044, Ukraine
- Sherif S M Ghoneim
- Department of Electrical Engineering, College of Engineering, Taif University, 21944, Taif, Saudi Arabia
- Ramy N R Ghaly
- Ministry of Higher Education, Mataria Technical College, Cairo, 11718, Egypt
- Chitkara Centre for Research and Development, Chitkara University, Solan, Himachal Pradesh, 174103, India
4. Raju ASN, Venkatesh K, Rajababu M, Gatla RK, Eid MM, Ali E, Titova N, Sharaf ABA. A hybrid framework for colorectal cancer detection and U-Net segmentation using polynetDWTCADx. Sci Rep 2025; 15:847. [PMID: 39757273] [PMCID: PMC11701104] [DOI: 10.1038/s41598-025-85156-2]
Abstract
PolynetDWTCADx is a hybrid model developed to detect and classify colorectal cancer. The study introduces the CKHK-22 dataset, comprising 24 classes, and proposes a method that combines CNNs, discrete wavelet transforms (DWTs), and SVMs to improve feature extraction and classification: DWT is used to optimize and enhance two integrated CNN models, whose features are then classified with an SVM. PolynetDWTCADx was the most effective model evaluated, reaching a testing accuracy of 92.3% (training accuracy 95.0%) together with solid recall and area under the curve (AUC), demonstrating that it can distinguish cancerous from noncancerous lesions in the colon. A U-Net semantic segmentation stage additionally identifies and segments cancerous colorectal regions, achieving a maximal intersection-over-union (IoU) score of 0.93, indicating precise delineation of malignant tissue. Added to PolynetDWTCADx, these segmentation outputs give clinicians the detailed visual information needed for diagnosis and treatment planning. This study underscores the potential of PolynetDWTCADx to enhance the recognition and management of colorectal cancer.
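The paper does not publish its exact wavelet configuration, but the DWT stage can be illustrated with a single-level 2D Haar transform, a common choice, sketched here in NumPy as an assumption rather than the authors' pipeline. The four subbands (approximation plus horizontal, vertical, and diagonal detail) are what a downstream CNN/SVM stage would consume.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar DWT: returns (LL, LH, HL, HH) subbands.
    Image sides must be even. Coefficients are orthonormally scaled,
    so total energy is preserved across the four subbands."""
    a = img[0::2, 0::2].astype(float)  # top-left of each 2x2 block
    b = img[0::2, 1::2].astype(float)  # top-right
    c = img[1::2, 0::2].astype(float)  # bottom-left
    d = img[1::2, 1::2].astype(float)  # bottom-right
    ll = (a + b + c + d) / 2.0  # approximation (low-low)
    lh = (a - b + c - d) / 2.0  # horizontal detail
    hl = (a + b - c - d) / 2.0  # vertical detail
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh
```

Energy preservation is a quick sanity check: the squared coefficients of the four subbands sum to the squared pixel values of the input.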
Affiliation(s)
- Akella S Narasimha Raju
- Department of Computer Science and Engineering (Data Science), Institute of Aeronautical Engineering, Dundigal, Hyderabad, 500043, Telangana, India
- K Venkatesh
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai, 603203, Tamilnadu, India
- Makineedi Rajababu
- Department of Information Technology, Aditya University, Surampalem, 533437, Andhra Pradesh, India
- Ranjith Kumar Gatla
- Department of Computer Science and Engineering (Data Science), Institute of Aeronautical Engineering, Dundigal, Hyderabad, 500043, Telangana, India
- Marwa M Eid
- Department of Physical Therapy, College of Applied Medical Science, Taif University, Taif, 21944, Saudi Arabia
- Enas Ali
- University Centre for Research and Development, Chandigarh University, Mohali, 140413, Punjab, India
- Nataliia Titova
- Biomedical Engineering Department, National University Odesa Polytechnic, Odesa, 65044, Ukraine
- Ahmed B Abou Sharaf
- Ministry of Higher Education & Scientific Research, Industrial Technical Institute in Mataria, Cairo, 11718, Egypt
- Chitkara Centre for Research and Development, Chitkara University, Himachal Pradesh, 174103, India
5. Demirbaş AA, Üzen H, Fırat H. Spatial-attention ConvMixer architecture for classification and detection of gastrointestinal diseases using the Kvasir dataset. Health Inf Sci Syst 2024; 12:32. [PMID: 38685985] [PMCID: PMC11056348] [DOI: 10.1007/s13755-024-00290-x]
Abstract
Gastrointestinal (GI) disorders, encompassing conditions such as cancer and Crohn's disease, pose a significant threat to public health. Endoscopic examinations have become crucial for diagnosing and treating these disorders efficiently, but the subjective nature of manual evaluation by gastroenterologists can lead to errors in disease classification; the difficulty of recognizing diseased tissue and the high visual similarity between classes make the problem harder still. Automated classification systems that use artificial intelligence to address these problems have therefore gained traction, since automatic detection of disease in medical images supports diagnosis and reduces detection time. In this study, we propose a new architecture for computer-assisted diagnosis and automated disease detection in GI diseases. This architecture, called Spatial-Attention ConvMixer (SAC), extends the patch-extraction backbone of the ConvMixer architecture with a spatial attention mechanism (SAM), which lets the network concentrate selectively on the most informative areas by assigning an importance weight to each spatial location in the feature maps. We assess GI disease classification accuracy on the Kvasir dataset and compare SAC with Vanilla ViT, Swin Transformer, ConvMixer, MLPMixer, ResNet50, and SqueezeNet. SAC reaches 93.37% accuracy, against 79.52%, 74.52%, 92.48%, 63.04%, 87.44%, and 85.59% for the other architectures, respectively; the proposed spatial attention block thus improves on the base ConvMixer and outperforms these baselines on Kvasir.
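The spatial attention mechanism can be sketched in a few lines. The block below is a simplified CBAM-style stand-in, not the SAC paper's exact SAM: channel-wise average and max pooling are fused by a plain weighted sum (in place of the usual 7×7 convolution), squashed with a sigmoid, and used to reweight every spatial location; the weights `w_avg`, `w_max`, and `bias` stand in for learned parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spatial_attention(fmap, w_avg=0.5, w_max=0.5, bias=0.0):
    """Simplified spatial attention over a (C, H, W) feature map.
    Channel-wise average- and max-pooled maps are combined by a weighted
    sum and squashed with a sigmoid; the resulting (H, W) attention map
    reweights every spatial location of the input."""
    avg_map = fmap.mean(axis=0)  # (H, W): average over channels
    max_map = fmap.max(axis=0)   # (H, W): max over channels
    attn = sigmoid(w_avg * avg_map + w_max * max_map + bias)
    return fmap * attn[None, :, :], attn
```

Locations with strong activations across channels receive attention weights close to 1, while flat background regions stay near the sigmoid midpoint and are relatively suppressed.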
Affiliation(s)
- Hüseyin Üzen
- Department of Computer Engineering, Faculty of Engineering, Bingol University, Bingol, Turkey
- Hüseyin Fırat
- Department of Computer Engineering, Faculty of Engineering, Dicle University, Diyarbakır, Turkey
6. Jiang B, Dorosan M, Leong JWH, Ong MEH, Lam SSW, Ang TL. Development and validation of a deep learning system for detection of small bowel pathologies in capsule endoscopy: a pilot study in a Singapore institution. Singapore Med J 2024; 65:133-140. [PMID: 38527297] [PMCID: PMC11060635] [DOI: 10.4103/singaporemedj.smj-2023-187]
Abstract
INTRODUCTION: Deep learning models can assess the quality of images and discriminate among abnormalities in small bowel capsule endoscopy (CE), reducing fatigue and the time needed for diagnosis. They serve as a decision support system, partially automating the diagnosis process by providing probability predictions for abnormalities.

METHODS: We demonstrated the use of deep learning models in CE image analysis, specifically by piloting a bowel preparation model (BPM) and an abnormality detection model (ADM) to determine frame-level view quality and the presence of abnormal findings, respectively. We used convolutional neural network-based models pretrained on large-scale open-domain data to extract spatial features of CE images that were then used in a dense feed-forward neural network classifier. We then combined the open-source Kvasir-Capsule dataset (n = 43) and locally collected CE data (n = 29).

RESULTS: Model performance was compared using averaged five-fold and two-fold cross-validation for BPMs and ADMs, respectively. The best BPM, based on a pretrained ResNet50 architecture, had areas under the receiver operating characteristic and precision-recall curves of 0.969 ± 0.008 and 0.843 ± 0.041, respectively. The best ADM, also based on ResNet50, had top-1 and top-2 accuracies of 84.03 ± 0.051 and 94.78 ± 0.028, respectively. The models could process approximately 200-250 images per second and showed good discrimination on time-critical abnormalities such as bleeding.

CONCLUSION: Our pilot models showed the potential to improve time to diagnosis in CE workflows. To our knowledge, our approach is unique to the Singapore context. The value of our work can be further evaluated in a pragmatic manner that is sensitive to existing clinician workflows and resource constraints.
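The top-1/top-2 accuracies reported for the ADM can be computed with a small helper. This is a generic metric implementation, not the authors' code: top-k accuracy counts a prediction as correct whenever the true class appears among the k highest-scoring classes.

```python
import numpy as np

def top_k_accuracy(probs, labels, k=1):
    """Fraction of samples whose true label is among the k highest-scoring
    classes. probs: (N, C) class scores; labels: (N,) integer class ids."""
    topk = np.argsort(probs, axis=1)[:, -k:]       # k best classes per row
    hits = (topk == labels[:, None]).any(axis=1)   # true label in top k?
    return float(hits.mean())
```

By construction, top-2 accuracy is always at least as high as top-1, which matches the 94.78 versus 84.03 spread reported above.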
Affiliation(s)
- Bochao Jiang
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore
- Michael Dorosan
- Health Services Research Centre, Singapore Health Services Pte Ltd, Singapore
- Justin Wen Hao Leong
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore
- Marcus Eng Hock Ong
- Health Services and Systems Research, Duke-NUS Medical School, Singapore
- Department of Emergency Medicine, Singapore General Hospital, Singapore
- Sean Shao Wei Lam
- Health Services Research Centre, Singapore Health Services Pte Ltd, Singapore
- Tiing Leong Ang
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore
7. Mota J, Almeida MJ, Mendes F, Martins M, Ribeiro T, Afonso J, Cardoso P, Cardoso H, Andrade P, Ferreira J, Mascarenhas M, Macedo G. From Data to Insights: How Is AI Revolutionizing Small-Bowel Endoscopy? Diagnostics (Basel) 2024; 14:291. [PMID: 38337807] [PMCID: PMC10855436] [DOI: 10.3390/diagnostics14030291]
Abstract
The role of capsule endoscopy and enteroscopy in managing various small-bowel pathologies is well established, but their broader application has been hampered mainly by lengthy reading times. As a result, there is growing interest in employing artificial intelligence (AI) in these diagnostic and therapeutic procedures, driven by the prospect of overcoming major limitations and enhancing healthcare efficiency while maintaining high accuracy. Over the past two decades, the applicability of AI to gastroenterology has grown steadily, largely because of the field's strong imaging component. A multitude of studies, most using convolutional neural networks, now demonstrate the potential of AI in these endoscopic techniques, achieving remarkable results. These findings suggest ample opportunity for AI to expand its presence in the management of gastroenterological disease and, in the future, to catalyze a game-changing transformation of clinical activities. This review provides an overview of the current state of the art of AI in small-bowel study, with a particular focus on capsule endoscopy and enteroscopy.
Affiliation(s)
- Joana Mota
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Maria João Almeida
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Francisco Mendes
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Miguel Martins
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Tiago Ribeiro
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- João Afonso
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Pedro Cardoso
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Helder Cardoso
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Patrícia Andrade
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- João Ferreira
- Department of Mechanical Engineering, Faculty of Engineering, University of Porto, R. Dr. Roberto Frias, 4200-465 Porto, Portugal
- Digestive Artificial Intelligence Development, R. Alfredo Allen 455-461, 4200-135 Porto, Portugal
- Miguel Mascarenhas
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- ManopH Gastroenterology Clinic, R. de Sá da Bandeira 752, 4000-432 Porto, Portugal
- Guilherme Macedo
- Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
8. Popa SL, Stancu B, Ismaiel A, Turtoi DC, Brata VD, Duse TA, Bolchis R, Padureanu AM, Dita MO, Bashimov A, Incze V, Pinna E, Grad S, Pop AV, Dumitrascu DI, Munteanu MA, Surdea-Blaga T, Mihaileanu FV. Enteroscopy versus Video Capsule Endoscopy for Automatic Diagnosis of Small Bowel Disorders-A Comparative Analysis of Artificial Intelligence Applications. Biomedicines 2023; 11:2991. [PMID: 38001991] [PMCID: PMC10669430] [DOI: 10.3390/biomedicines11112991]
Abstract
BACKGROUND: Small bowel disorders present a diagnostic challenge because of the limited accessibility of the small intestine. Accurate diagnosis relies on dedicated procedures such as capsule endoscopy or double-balloon enteroscopy, but these are not routinely requested and are not widely available. This study aims to assess and compare the diagnostic effectiveness of enteroscopy and video capsule endoscopy (VCE) when combined with artificial intelligence (AI) algorithms for the automatic detection of small bowel diseases.

MATERIALS AND METHODS: We performed an extensive literature search for studies of AI applications capable of identifying small bowel disorders on enteroscopy or VCE, published between 2012 and 2023, using the PubMed, Cochrane Library, Google Scholar, Embase, Scopus, and ClinicalTrials.gov databases.

RESULTS: Our search identified 27 publications: 21 studies assessed VCE, and the remaining 6 analyzed enteroscopy. Both investigations, when enhanced by AI, exhibited high diagnostic accuracy. Enteroscopy offered superior diagnostic capability, providing precise identification of small bowel pathologies with the added advantage of enabling immediate therapeutic intervention. The choice between these modalities should be guided by clinical context, patient preference, and resource availability. Studies with larger sample sizes and prospective designs are warranted to validate these results and optimize the integration of AI in small bowel diagnostics.

CONCLUSIONS: Both enteroscopy and VCE with AI augmentation exhibit comparable diagnostic performance for the automatic detection of small bowel disorders.
Affiliation(s)
- Stefan Lucian Popa
- 2nd Medical Department, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania
- Bogdan Stancu
- 2nd Surgical Department, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400347 Cluj-Napoca, Romania
- Abdulrahman Ismaiel
- 2nd Medical Department, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania
- Daria Claudia Turtoi
- Faculty of Medicine, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania
- Vlad Dumitru Brata
- Faculty of Medicine, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania
- Traian Adrian Duse
- Faculty of Medicine, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania
- Roxana Bolchis
- Faculty of Medicine, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania
- Alexandru Marius Padureanu
- Faculty of Medicine, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania
- Miruna Oana Dita
- Faculty of Medicine, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania
- Atamyrat Bashimov
- Faculty of Medicine, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania
- Victor Incze
- Faculty of Medicine, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania
- Edoardo Pinna
- Faculty of Medicine, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania
- Simona Grad
- 2nd Medical Department, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania
- Andrei-Vasile Pop
- 2nd Medical Department, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania
- Dinu Iuliu Dumitrascu
- Department of Anatomy, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400006 Cluj-Napoca, Romania
- Mihai Alexandru Munteanu
- Department of Medical Disciplines, Faculty of Medicine and Pharmacy, University of Oradea, 410087 Oradea, Romania
- Teodora Surdea-Blaga
- 2nd Medical Department, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400000 Cluj-Napoca, Romania
- Florin Vasile Mihaileanu
- 2nd Surgical Department, “Iuliu Hatieganu” University of Medicine and Pharmacy, 400347 Cluj-Napoca, Romania
9. Ramzan M, Raza M, Sharif MI, Azam F, Kim J, Kadry S. Gastrointestinal tract disorders classification using ensemble of InceptionNet and proposed GITNet based deep feature with ant colony optimization. PLoS One 2023; 18:e0292601. [PMID: 37831692] [PMCID: PMC10575542] [DOI: 10.1371/journal.pone.0292601]
Abstract
Computer-aided classification of diseases of the gastrointestinal tract (GIT) has become a crucial area of research. Medical science and artificial intelligence help medical experts find GIT diseases through endoscopic procedures, and wired endoscopy is a controlled procedure that supports disease diagnosis. However, manual screening of endoscopic frames is a challenging and time-consuming task for medical experts that also increases the miss rate for GIT disease, while early diagnosis can be life-saving. An automatic deep-feature-learning-based system is therefore proposed for GIT disease classification. Adaptive gamma correction and weighting distribution (AGCWD) preprocessing forms the first stage of the proposed pipeline, enhancing the intensity of the frames. Deep features are extracted from the frames by two deep learning models, InceptionNetV3 and GITNet, optimized with an Ant Colony Optimization (ACO) procedure, and fused serially. Classification is performed by variants of the support vector machine (SVM): Cubic (CSVM), Coarse Gaussian (CGSVM), Quadratic (QSVM), and Linear (LSVM) classifiers. The model is assessed on two challenging datasets, KVASIR and NERTHUS, consisting of eight and four classes, respectively, and outperforms existing methods with accuracies of 99.32% on KVASIR and 99.89% on NERTHUS.
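The ACO feature-optimization step can be illustrated with a deliberately simplified, ACO-inspired selector. Everything here (the Fisher-score heuristic, the pheromone update, the parameter names) is an assumption for illustration, not the paper's algorithm: ants sample candidate feature subsets with probability proportional to pheromone times heuristic desirability, and the best subset found reinforces the pheromone trail.

```python
import numpy as np

def aco_feature_select(X, y, k=3, n_ants=20, n_iters=30, evap=0.2, seed=0):
    """ACO-inspired selection of k features for a binary problem."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pheromone = np.ones(n_feat)
    # Heuristic desirability: per-feature Fisher score (class separation).
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    v0, v1 = X[y == 0].var(axis=0), X[y == 1].var(axis=0)
    heur = (mu0 - mu1) ** 2 / (v0 + v1 + 1e-9)
    best_subset, best_score = None, -np.inf
    for _ in range(n_iters):
        for _ in range(n_ants):
            p = pheromone * heur
            p /= p.sum()
            subset = rng.choice(n_feat, size=k, replace=False, p=p)
            score = heur[subset].sum()  # toy subset-quality measure
            if score > best_score:
                best_score, best_subset = score, subset
        pheromone *= 1.0 - evap        # evaporation
        pheromone[best_subset] += 1.0  # deposit on the best trail
    mask = np.zeros(n_feat, dtype=bool)
    mask[best_subset] = True
    return mask
```

In the paper the subset-quality measure would be classifier performance on the fused deep features; a Fisher-score sum keeps the sketch fast and deterministic.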
Affiliation(s)
- Muhammad Ramzan: Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mudassar Raza: Department of Computer Science, HITEC University Taxila, Taxila, Pakistan
- Muhammad Irfan Sharif: Department of Information Sciences, University of Education Lahore, Jauharabad Campus, Jauharabad, Pakistan
- Faisal Azam: Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Jungeun Kim: Department of Software and CMPSI, Kongju National University, Cheonan, Korea
- Seifedine Kadry: Department of Applied Data Science, Noroff University College, Kristiansand, Norway; Artificial Intelligence Research Center (AIRC), Ajman University, Ajman, United Arab Emirates; Department of Electrical and Computer Engineering, Lebanese American University, Byblos, Lebanon; MEU Research Unit, Middle East University, Amman, Jordan
10. Sharma N, Gupta S, Reshan MSA, Sulaiman A, Alshahrani H, Shaikh A. EfficientNetB0 cum FPN Based Semantic Segmentation of Gastrointestinal Tract Organs in MRI Scans. Diagnostics (Basel) 2023; 13:2399. [PMID: 37510142 PMCID: PMC10377822 DOI: 10.3390/diagnostics13142399] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2023] [Revised: 07/09/2023] [Accepted: 07/17/2023] [Indexed: 07/30/2023] Open
Abstract
The segmentation of gastrointestinal (GI) organs is crucial in radiation therapy for treating GI cancer. It allows for developing a targeted radiation therapy plan while minimizing radiation exposure to healthy tissue, improving treatment success, and decreasing side effects. Medical diagnostics in GI tract organ segmentation is essential for accurate disease detection, precise differential diagnosis, optimal treatment planning, and efficient disease monitoring. This research presents a hybrid encoder-decoder-based model for segmenting healthy organs in the GI tract in biomedical images of cancer patients, which might help radiation oncologists treat cancer more quickly. Here, EfficientNet B0 is used as a bottom-up encoder architecture for downsampling to capture contextual information by extracting meaningful and discriminative features from input images. The performance of the EfficientNet B0 encoder is compared with that of three encoders: ResNet 50, MobileNet V2, and Timm Gernet. The Feature Pyramid Network (FPN) is a top-down decoder architecture used for upsampling to recover spatial information. The performance of the FPN decoder was compared with that of three decoders: PAN, Linknet, and MAnet. This paper thus proposes a segmentation model that pairs the FPN decoder with EfficientNet B0 as the encoder. Furthermore, the proposed hybrid model is analyzed using the Adam, Adadelta, SGD, and RMSprop optimizers. Four performance criteria are used to assess the models: the Jaccard and Dice coefficients, model loss, and processing time. The proposed model achieves Dice coefficient and Jaccard index values of 0.8975 and 0.8832, respectively. The proposed method can assist radiation oncologists in precisely targeting areas hosting cancer cells in the gastrointestinal tract, allowing for more efficient and timely cancer treatment.
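The Dice coefficient and Jaccard index quoted above are standard overlap metrics for segmentation masks. A minimal stdlib-Python sketch of both (operating on flattened binary masks; the function name is illustrative, not from the paper):

```python
def dice_and_jaccard(pred, target):
    """Dice coefficient and Jaccard index for binary masks.

    pred, target : equal-length sequences of 0/1 pixel labels.
    Dice = 2|A∩B| / (|A| + |B|);  Jaccard = |A∩B| / |A∪B|.
    """
    if len(pred) != len(target):
        raise ValueError("masks must have the same size")
    intersection = sum(1 for p, t in zip(pred, target) if p and t)
    pred_sum, target_sum = sum(pred), sum(target)
    union = pred_sum + target_sum - intersection
    # Convention: two empty masks count as a perfect match.
    dice = 2 * intersection / (pred_sum + target_sum) if pred_sum + target_sum else 1.0
    jaccard = intersection / union if union else 1.0
    return dice, jaccard
```

For example, `dice_and_jaccard([1, 1, 0, 1], [1, 0, 0, 1])` gives Dice 0.8 and Jaccard 2/3; the two metrics are monotonically related (J = D / (2 - D)), which is why papers often report both.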
Affiliation(s)
- Neha Sharma: Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, Punjab, India
- Sheifali Gupta: Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, Punjab, India
- Mana Saleh Al Reshan: Department of Information Systems, College of Computer Science and Information Systems, Najran University, Najran 61441, Saudi Arabia
- Adel Sulaiman: Department of Computer Science, College of Computer Science and Information Systems, Najran University, Najran 61441, Saudi Arabia
- Hani Alshahrani: Department of Computer Science, College of Computer Science and Information Systems, Najran University, Najran 61441, Saudi Arabia
- Asadullah Shaikh: Department of Information Systems, College of Computer Science and Information Systems, Najran University, Najran 61441, Saudi Arabia; Scientific and Engineering Research Centre, Najran University, Najran 61441, Saudi Arabia
11. Galati JS, Duve RJ, O'Mara M, Gross SA. Artificial intelligence in gastroenterology: A narrative review. Artif Intell Gastroenterol 2022; 3:117-141. [DOI: 10.35712/aig.v3.i5.117] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/09/2022] [Revised: 11/21/2022] [Accepted: 12/21/2022] [Indexed: 12/28/2022] Open
Abstract
Artificial intelligence (AI) is a complex concept, broadly defined in medicine as the development of computer systems to perform tasks that require human intelligence. It has the capacity to revolutionize medicine by increasing efficiency, expediting data and image analysis and identifying patterns, trends and associations in large datasets. Within gastroenterology, recent research efforts have focused on using AI in esophagogastroduodenoscopy, wireless capsule endoscopy (WCE) and colonoscopy to assist in diagnosis, disease monitoring, lesion detection and therapeutic intervention. The main objective of this narrative review is to provide a comprehensive overview of the research being performed within gastroenterology on AI in esophagogastroduodenoscopy, WCE and colonoscopy.
Affiliation(s)
- Jonathan S Galati: Department of Medicine, NYU Langone Health, New York, NY 10016, United States
- Robert J Duve: Department of Internal Medicine, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, Buffalo, NY 14203, United States
- Matthew O'Mara: Division of Gastroenterology, NYU Langone Health, New York, NY 10016, United States
- Seth A Gross: Division of Gastroenterology, NYU Langone Health, New York, NY 10016, United States
12. Narasimha Raju AS, Jayavel K, Rajalakshmi T. ColoRectalCADx: Expeditious Recognition of Colorectal Cancer with Integrated Convolutional Neural Networks and Visual Explanations Using Mixed Dataset Evidence. Comput Math Methods Med 2022; 2022:8723957. [PMID: 36404909 PMCID: PMC9671728 DOI: 10.1155/2022/8723957] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/27/2022] [Accepted: 10/27/2022] [Indexed: 12/07/2023]
Abstract
Colorectal cancer typically affects the gastrointestinal tract within the human body. Colonoscopy is one of the most accurate methods of detecting cancer. Current computer-assisted diagnosis (CADx) systems facilitate the identification of cancer with a limited number of deep learning methods, and they do not make use of mixed datasets. The proposed system, called ColoRectalCADx, is supported by deep learning (DL) models suitable for cancer research. The CADx system comprises five stages: convolutional neural networks (CNN), support vector machine (SVM), long short-term memory (LSTM), visual explanation such as gradient-weighted class activation mapping (Grad-CAM), and semantic segmentation phases. Here, the key components of the CADx system are equipped with 9 individual and 12 integrated CNNs, so the investigational experiments cover a total of 21 CNNs. In the subsequent phase, the CADx combines concatenated transfer-learning features from the CNNs with SVM classification. Additional classification is applied to ensure effective transfer of results from CNN to LSTM. The system takes a combination of CVC Clinic DB, Kvasir2, and Hyper Kvasir as a mixed input dataset. After the CNN and LSTM stages, malignancies are detected using an improved polyp recognition technique with Grad-CAM and semantic segmentation using U-Net. CADx results have been stored on Google Cloud for record retention. In these experiments, among all the CNNs, the individual CNN DenseNet-201 (87.1% training and 84.7% testing accuracies) and the integrated CNN ADaDR-22 (84.61% training and 82.17% testing accuracies) were the most efficient for cancer detection with the CNN+LSTM model. ColoRectalCADx thus accurately identifies cancer through the individual CNN DenseNet-201 and the integrated CNN ADaDR-22. In Grad-CAM's visual explanations, DenseNet-201 displays precise visualization of polyps, and U-Net provides precise segmentation of malignant polyps.
Affiliation(s)
- Akella S. Narasimha Raju: Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, 603203 Chennai, India
- Kayalvizhi Jayavel: Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, 603203 Chennai, India
- T. Rajalakshmi: Department of Electronics and Communication Engineering, School of Electrical and Electronics Engineering, SRM Institute of Science and Technology, Kattankulathur, 603203 Chennai, India
13. Narasimha Raju AS, Jayavel K, Rajalakshmi T. Dexterous Identification of Carcinoma through ColoRectalCADx with Dichotomous Fusion CNN and UNet Semantic Segmentation. Comput Intell Neurosci 2022; 2022:4325412. [PMID: 36262620 PMCID: PMC9576362 DOI: 10.1155/2022/4325412] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/20/2022] [Revised: 08/16/2022] [Accepted: 08/20/2022] [Indexed: 11/18/2022]
Abstract
Human colorectal disorders in the digestive tract are recognized by reference colonoscopy. The current system recognizes cancer through a three-stage system that utilizes two sets of colonoscopy data, but identifying polyps by visualization has not been addressed. The proposed system is a five-stage system called ColoRectalCADx, which takes three publicly accessible datasets as input data for cancer detection. The three main datasets are CVC Clinic DB, Kvasir2, and Hyper Kvasir. After the image preprocessing stages, system experiments were performed with seven prominent convolutional neural networks (CNNs) (end-to-end) and nine fusion CNN models to extract the spatial features. Afterwards, the end-to-end CNN and fusion features are passed through the Discrete Wavelet Transform (DWT), used to retrieve time- and spatial-frequency features, and classified with a Support Vector Machine (SVM). Experimentally, results were obtained for five stages. For each of the three datasets, from stage 1 to stage 3, the end-to-end CNN DenseNet-201 obtained the best testing accuracy (98%, 87%, 84%), ((98%, 97%), (87%, 87%), (84%, 84%)), ((99.03%, 99%), (88.45%, 88%), (83.61%, 84%)). For each of the three datasets, from stage 2, the CNN DaRD-22 fusion obtained the optimal test accuracy ((93%, 97%), (82%, 84%), (69%, 57%)). For stage 4, the ADaRDEV2-22 fusion achieved the best test accuracy ((95.73%, 94%), (81.20%, 81%), (72.56%, 58%)). For the input image segmentation datasets CVC Clinic-Seg, KvasirSeg, and Hyper Kvasir, malignant polyps were identified with the UNet CNN model. Here, the loss scores were 0.7842 for CVC Clinic DB, 0.6977 for Kvasir2, and 0.6910 for Hyper Kvasir.
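The DWT step mentioned in this abstract compresses a feature vector while retaining its low-frequency content. A single level of the Haar wavelet transform, the simplest DWT, can be sketched in a few lines of stdlib Python (an illustrative sketch, not the study's code):

```python
def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.

    Returns (approximation, detail) coefficient lists. The approximation
    halves the feature length while keeping low-frequency content, which
    is why DWT is often used to condense deep feature vectors.
    """
    signal = list(signal)
    if len(signal) % 2:
        signal.append(0.0)  # zero-pad odd-length inputs
    approx, detail = [], []
    for i in range(0, len(signal), 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / 2 ** 0.5)  # pairwise average (low-pass)
        detail.append((a - b) / 2 ** 0.5)  # pairwise difference (high-pass)
    return approx, detail
```

The transform is invertible: each input pair is recovered as `a = (A + D) / sqrt(2)`, so keeping only the approximation band is a principled 2x reduction rather than an arbitrary truncation.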
Affiliation(s)
- Akella S. Narasimha Raju: Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
- Kayalvizhi Jayavel: Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
- Thulasi Rajalakshmi: Department of Electronics and Communication Engineering, School of Electrical and Electronics Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
14. Hanscom M, Cave DR. Endoscopic capsule robot-based diagnosis, navigation and localization in the gastrointestinal tract. Front Robot AI 2022; 9:896028. [PMID: 36119725 PMCID: PMC9479458 DOI: 10.3389/frobt.2022.896028] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2022] [Accepted: 08/08/2022] [Indexed: 01/10/2023] Open
Abstract
The proliferation of video capsule endoscopy (VCE) would not have been possible without continued technological improvements in imaging and locomotion. Advancements in imaging include both software and hardware improvements but perhaps the greatest software advancement in imaging comes in the form of artificial intelligence (AI). Current research into AI in VCE includes the diagnosis of tumors, gastrointestinal bleeding, Crohn’s disease, and celiac disease. Other advancements have focused on the improvement of both camera technologies and alternative forms of imaging. Comparatively, advancements in locomotion have just started to approach clinical use and include onboard controlled locomotion, which involves miniaturizing a motor to incorporate into the video capsule, and externally controlled locomotion, which involves using an outside power source to maneuver the capsule itself. Advancements in locomotion hold promise to remove one of the major disadvantages of VCE, namely, its inability to obtain targeted diagnoses. Active capsule control could in turn unlock additional diagnostic and therapeutic potential, such as the ability to obtain targeted tissue biopsies or drug delivery. With both advancements in imaging and locomotion has come a corresponding need to be better able to process generated images and localize the capsule’s position within the gastrointestinal tract. Technological advancements in computation performance have led to improvements in image compression and transfer, as well as advancements in sensor detection and alternative methods of capsule localization. Together, these advancements have led to the expansion of VCE across a number of indications, including the evaluation of esophageal and colon pathologies including esophagitis, esophageal varices, Crohn’s disease, and polyps after incomplete colonoscopy. 
Current research has also suggested a role for VCE in acute gastrointestinal bleeding throughout the gastrointestinal tract, as well as in urgent settings such as the emergency department, and in resource-constrained settings, such as during the COVID-19 pandemic. VCE has solidified its role in the evaluation of small bowel bleeding and earned an important place in the practicing gastroenterologist’s armamentarium. In the next few decades, further improvements in imaging and locomotion promise to open up even more clinical roles for the video capsule as a tool for non-invasive diagnosis of luminal gastrointestinal pathologies.
15. Investigating the significance of color space for abnormality detection in wireless capsule endoscopy images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103624] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
16. Kim HJ, Gong EJ, Bang CS, Lee JJ, Suk KT, Baik GH. Computer-Aided Diagnosis of Gastrointestinal Protruded Lesions Using Wireless Capsule Endoscopy: A Systematic Review and Diagnostic Test Accuracy Meta-Analysis. J Pers Med 2022; 12:644. [PMID: 35455760 PMCID: PMC9029411 DOI: 10.3390/jpm12040644] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2022] [Revised: 04/14/2022] [Accepted: 04/14/2022] [Indexed: 12/13/2022] Open
Abstract
BACKGROUND Wireless capsule endoscopy allows the identification of small intestinal protruded lesions, such as polyps, tumors, or venous structures. However, reading wireless capsule endoscopy images or movies is time-consuming, and minute lesions are easy to miss. Computer-aided diagnosis (CAD) has been applied to improve the efficacy of the reading process of wireless capsule endoscopy images or movies. However, there are no studies that systematically determine the performance of CAD models in diagnosing gastrointestinal protruded lesions. OBJECTIVE The aim of this study was to evaluate the diagnostic performance of CAD models for gastrointestinal protruded lesions using wireless capsule endoscopic images. METHODS Core databases were searched for studies based on CAD models for the diagnosis of gastrointestinal protruded lesions using wireless capsule endoscopy, and data on diagnostic performance were presented. A systematic review and diagnostic test accuracy meta-analysis were performed. RESULTS Twelve studies were included. The pooled area under the curve, sensitivity, specificity, and diagnostic odds ratio of CAD models for the diagnosis of protruded lesions were 0.95 (95% confidence interval, 0.93-0.97), 0.89 (0.84-0.92), 0.91 (0.86-0.94), and 74 (43-126), respectively. Subgroup analyses showed robust results. Meta-regression found no source of heterogeneity. Publication bias was not detected. CONCLUSION CAD models showed high performance for the optical diagnosis of gastrointestinal protruded lesions based on wireless capsule endoscopy.
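The diagnostic odds ratio (DOR) reported in this meta-analysis relates sensitivity and specificity through the likelihood ratios. The pooled DOR of 74 comes from the meta-analytic model itself, so it need not match the value implied by simply plugging in the pooled sensitivity and specificity; the stdlib sketch below only shows that arithmetic relationship, and is not code from the study.

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR implied by a sensitivity/specificity pair.

    DOR = LR+ / LR-, where LR+ = sens / (1 - spec)
    and LR- = (1 - sens) / spec.
    """
    if not (0 < sensitivity < 1 and 0 < specificity < 1):
        raise ValueError("sensitivity and specificity must be in (0, 1)")
    positive_lr = sensitivity / (1 - specificity)
    negative_lr = (1 - sensitivity) / specificity
    return positive_lr / negative_lr
```

Plugging in the pooled point estimates (0.89, 0.91) gives a DOR of roughly 82, illustrating how a test with ~90% sensitivity and specificity corresponds to a DOR in the tens, the same order as the pooled estimate of 74.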
Affiliation(s)
- Hye Jin Kim: Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea; Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea; Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24253, Korea
- Eun Jeong Gong: Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea; Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea
- Chang Seok Bang: Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea; Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea; Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24253, Korea; Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Chuncheon 24253, Korea
- Jae Jun Lee: Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24253, Korea; Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, Chuncheon 24253, Korea; Department of Anesthesiology and Pain Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- Ki Tae Suk: Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea; Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea
- Gwang Ho Baik: Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea; Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea
17. Mohammad F, Al-Razgan M. Deep Feature Fusion and Optimization-Based Approach for Stomach Disease Classification. Sensors (Basel) 2022; 22:2801. [PMID: 35408415 PMCID: PMC9003289 DOI: 10.3390/s22072801] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Revised: 03/26/2022] [Accepted: 04/02/2022] [Indexed: 01/10/2023]
Abstract
Cancer is the deadliest of all diseases and a major cause of human mortality. Several types of cancer sicken the human body and affect its organs. Among all the types of cancer, stomach cancer is one of the most dangerous: it spreads rapidly and needs to be diagnosed at an early stage. The early diagnosis of stomach cancer is essential to reduce the mortality rate. The manual diagnosis process is time-consuming and requires many tests as well as the availability of an expert doctor. Therefore, automated techniques are required to diagnose stomach infections from endoscopic images. Many computerized techniques have been introduced in the literature, but owing to a few challenges (e.g., high similarity between healthy and infected regions, and irrelevant feature extraction), there is much room to improve the accuracy and reduce the computational time. In this paper, a deep-learning-based stomach disease classification method employing deep feature extraction, fusion, and optimization using WCE images is proposed. The proposed method comprises several phases: data augmentation to increase the number of dataset images, deep transfer learning for deep feature extraction, feature fusion of the deep extracted features, optimization of the fused feature matrix with a modified dragonfly optimization method, and final classification of the stomach disease. The feature extraction phase employed two pre-trained deep CNN models (Inception v3 and DenseNet-201), performing activation on feature-derivation layers. Later, parallel concatenation was performed on the deep-derived features, which were optimized using the meta-heuristic method named the dragonfly algorithm. The optimized feature matrix was classified by machine-learning algorithms and achieved an accuracy of 99.8% on the combined stomach disease dataset. A comparison conducted with state-of-the-art techniques shows improved accuracy.
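Several abstracts in this list fuse deep features either serially (end-to-end concatenation) or in parallel. A generic stdlib sketch of both operators, under the assumption that "parallel concatenation" pads the shorter vector and pairs the two feature streams index-wise; the paper's exact fusion operator may differ, and both function names are illustrative:

```python
def fuse_serial(f1, f2):
    """Serial fusion: simply concatenate the two feature vectors."""
    return list(f1) + list(f2)

def fuse_parallel(f1, f2, pad=0.0):
    """Parallel fusion (one common reading): pad to equal length and
    stack column-wise, giving one fused 2-channel feature per index."""
    n = max(len(f1), len(f2))
    a = list(f1) + [pad] * (n - len(f1))
    b = list(f2) + [pad] * (n - len(f2))
    return list(zip(a, b))
```

Serial fusion grows the feature dimension to `len(f1) + len(f2)`, while this parallel variant keeps it at `max(len(f1), len(f2))`, which is one reason parallel schemes are paired with downstream feature optimization.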
Affiliation(s)
- Farah Mohammad: Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh 11451, Saudi Arabia
- Muna Al-Razgan: Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11451, Saudi Arabia
18. Muruganantham P, Balakrishnan SM. Attention Aware Deep Learning Model for Wireless Capsule Endoscopy Lesion Classification and Localization. J Med Biol Eng 2022. [DOI: 10.1007/s40846-022-00686-8] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
19. Goel N, Kaur S, Gunjan D, Mahapatra SJ. Dilated CNN for abnormality detection in wireless capsule endoscopy images. Soft Comput 2022. [DOI: 10.1007/s00500-021-06546-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
20. Kröner PT, Engels MML, Glicksberg BS, Johnson KW, Mzaik O, van Hooft JE, Wallace MB, El-Serag HB, Krittanawong C. Artificial intelligence in gastroenterology: A state-of-the-art review. World J Gastroenterol 2021; 27:6794-6824. [PMID: 34790008 PMCID: PMC8567482 DOI: 10.3748/wjg.v27.i40.6794] [Citation(s) in RCA: 74] [Impact Index Per Article: 18.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/11/2021] [Revised: 06/15/2021] [Accepted: 09/16/2021] [Indexed: 02/06/2023] Open
Abstract
The development of artificial intelligence (AI) has increased dramatically in the last 20 years, with clinical applications progressively being explored for most of the medical specialties. The field of gastroenterology and hepatology, substantially reliant on vast amounts of imaging studies, is not an exception. The clinical applications of AI systems in this field include the identification of premalignant or malignant lesions (e.g., identification of dysplasia or esophageal adenocarcinoma in Barrett’s esophagus, pancreatic malignancies), detection of lesions (e.g., polyp identification and classification, small-bowel bleeding lesion on capsule endoscopy, pancreatic cystic lesions), development of objective scoring systems for risk stratification, predicting disease prognosis or treatment response (e.g., determining survival in patients post-resection of hepatocellular carcinoma, or determining which patients with inflammatory bowel disease (IBD) will benefit from biologic therapy), and evaluation of metrics such as bowel preparation score or quality of endoscopic examination. The objective of this comprehensive review is to analyze the available AI-related studies pertaining to the entirety of the gastrointestinal tract, including the upper, middle and lower tracts; IBD; the hepatobiliary system; and the pancreas, discussing the findings and clinical applications, as well as outlining the current limitations and future directions in this field.
Affiliation(s)
- Paul T Kröner: Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States
- Megan ML Engels: Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States; Cancer Center Amsterdam, Department of Gastroenterology and Hepatology, Amsterdam UMC, Location AMC, Amsterdam 1105, The Netherlands
- Benjamin S Glicksberg: The Hasso Plattner Institute for Digital Health, Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
- Kipp W Johnson: The Hasso Plattner Institute for Digital Health, Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
- Obaie Mzaik: Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States
- Jeanin E van Hooft: Department of Gastroenterology and Hepatology, Leiden University Medical Center, Amsterdam 2300, The Netherlands
- Michael B Wallace: Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States; Division of Gastroenterology and Hepatology, Sheikh Shakhbout Medical City, Abu Dhabi 11001, United Arab Emirates
- Hashem B El-Serag: Section of Gastroenterology and Hepatology, Michael E. DeBakey VA Medical Center and Baylor College of Medicine, Houston, TX 77030, United States; Section of Health Services Research, Michael E. DeBakey VA Medical Center and Baylor College of Medicine, Houston, TX 77030, United States
- Chayakrit Krittanawong: Section of Health Services Research, Michael E. DeBakey VA Medical Center and Baylor College of Medicine, Houston, TX 77030, United States; Section of Cardiology, Michael E. DeBakey VA Medical Center, Houston, TX 77030, United States
21. Dhal P, Azad C. A comprehensive survey on feature selection in the various fields of machine learning. Appl Intell 2021. [DOI: 10.1007/s10489-021-02550-9] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
22. Recognizing Gastrointestinal Malignancies on WCE and CCE Images by an Ensemble of Deep and Handcrafted Features with Entropy and PCA Based Features Optimization. Neural Process Lett 2021. [DOI: 10.1007/s11063-021-10481-2] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
23. Lan L, Ye C. Recurrent generative adversarial networks for unsupervised WCE video summarization. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.106971] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
24. Attallah O. CoMB-Deep: Composite Deep Learning-Based Pipeline for Classifying Childhood Medulloblastoma and Its Classes. Front Neuroinform 2021; 15:663592. [PMID: 34122031 PMCID: PMC8193683 DOI: 10.3389/fninf.2021.663592] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2021] [Accepted: 04/26/2021] [Indexed: 12/28/2022] Open
Abstract
Childhood medulloblastoma (MB) is a threatening malignant tumor affecting children all over the globe. It is believed to be the most common pediatric brain tumor causing death. Early and accurate classification of childhood MB and its classes are of great importance to help doctors choose the suitable treatment and observation plan, avoid tumor progression, and lower death rates. The current gold standard for diagnosing MB is the histopathology of biopsy samples. However, manual analysis of such images is complicated, costly, time-consuming, and highly dependent on the expertise and skills of pathologists, which might cause inaccurate results. This study aims to introduce a reliable computer-assisted pipeline called CoMB-Deep to automatically classify MB and its classes with high accuracy from histopathological images. A key challenge of the study is the lack of childhood MB datasets, especially for its four categories (defined by the WHO), and the scarcity of related studies. All relevant works were based on either deep learning (DL) or textural-analysis feature extraction, employed distinct features to accomplish the classification procedure, and mostly extracted spatial features only. CoMB-Deep, by contrast, blends the advantages of textural-analysis feature extraction techniques and DL approaches. The CoMB-Deep consists of a composite of DL techniques. Initially, it extracts deep spatial features from 10 convolutional neural networks (CNNs). It then performs a feature fusion step using discrete wavelet transform (DWT), a texture analysis method capable of reducing the dimension of fused features. Next, CoMB-Deep explores the best combination of fused features, enhancing the performance of the classification process using two search strategies. Afterward, it employs two feature selection techniques on the fused feature sets selected in the previous step. Finally, a bi-directional long short-term memory (Bi-LSTM) network, a DL-based approach, is utilized for the classification phase. CoMB-Deep maintains two classification categories: a binary category for distinguishing between abnormal and normal cases, and a multi-class category to identify the subclasses of MB. The results of CoMB-Deep for both classification categories prove that it is reliable. The results also indicate that the feature sets selected using both search strategies enhanced the performance of the Bi-LSTM compared to individual spatial deep features. CoMB-Deep is compared to related studies to verify its competitiveness, and this comparison confirmed its robustness and outperformance. Hence, CoMB-Deep can help pathologists perform accurate diagnoses, reduce the misdiagnosis risks that could occur with manual diagnosis, accelerate the classification procedure, and decrease diagnosis costs.
Affiliation(s)
- Omneya Attallah: Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
25. Sullivan P, Gupta S, Powers PD, Marya NB. Artificial Intelligence Research and Development for Application in Video Capsule Endoscopy. Gastrointest Endosc Clin N Am 2021; 31:387-397. [PMID: 33743933 DOI: 10.1016/j.giec.2020.12.009] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Artificial intelligence (AI) research for medical applications has expanded quickly. Advancements in computer processing now allow for the development of complex neural network architectures (eg, convolutional neural networks) that are capable of extracting and learning complex features from massive data sets, including large image databases. Gastroenterology and endoscopy are well suited for AI research. Video capsule endoscopy is an ideal platform for AI model research given the large amount of data produced by each capsule examination and the annotated databases that are already available. Studies have demonstrated high performance for applications of capsule-based AI models developed for various pathologic conditions.
Affiliation(s)
- Peter Sullivan
- Division of Gastroenterology, University of Massachusetts Medical School, 55 Lake Avenue North, Worcester, MA 01655, USA
- Shradha Gupta
- Division of Gastroenterology, University of Massachusetts Medical School, 55 Lake Avenue North, Worcester, MA 01655, USA
- Patrick D Powers
- Division of Gastroenterology, University of Massachusetts Medical School, 55 Lake Avenue North, Worcester, MA 01655, USA
- Neil B Marya
- Division of Gastroenterology, University of Massachusetts Medical School, 55 Lake Avenue North, Worcester, MA 01655, USA

26
Attallah O, Sharkas M. GASTRO-CADx: a three stages framework for diagnosing gastrointestinal diseases. PeerJ Comput Sci 2021; 7:e423. [PMID: 33817058 PMCID: PMC7959662 DOI: 10.7717/peerj-cs.423] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2020] [Accepted: 02/11/2021] [Indexed: 05/04/2023]
Abstract
Gastrointestinal (GI) diseases are common illnesses that affect the GI tract. Diagnosing these diseases is expensive, complicated, and challenging. A computer-aided diagnosis (CADx) system based on deep learning (DL) techniques could considerably lower examination costs and increase the speed and quality of diagnosis. Therefore, this article proposes a CADx system called Gastro-CADx to classify several GI diseases using DL techniques. Gastro-CADx involves three progressive stages. Initially, four different CNNs are used as feature extractors to extract spatial features; most related work based on DL approaches extracted spatial features only. In the second stage of Gastro-CADx, the features extracted in the first stage are passed to the discrete wavelet transform (DWT) and the discrete cosine transform (DCT), which extract temporal-frequency and spatial-frequency features; a feature reduction procedure is also performed in this stage. Finally, in the third stage, several combinations of features are fused by concatenation to inspect the effect of feature combination on the output of the CADx and to select the best-fused feature set. Two datasets, referred to as Dataset I and Dataset II, are utilized to evaluate the performance of Gastro-CADx. Results indicated that Gastro-CADx achieved accuracies of 97.3% and 99.7% for Dataset I and II, respectively. The results were compared with recent related works, and the comparison showed that the proposed approach classifies GI diseases with higher accuracy than other work. Thus, it can be used to reduce medical complications, death rates, and the cost of treatment, and it can help gastroenterologists produce more accurate diagnoses while lowering inspection time.
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communication Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
- Maha Sharkas
- Department of Electronics and Communication Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt

27
Wang S, Cong Y, Zhu H, Chen X, Qu L, Fan H, Zhang Q, Liu M. Multi-Scale Context-Guided Deep Network for Automated Lesion Segmentation With Endoscopy Images of Gastrointestinal Tract. IEEE J Biomed Health Inform 2021; 25:514-525. [PMID: 32750912 DOI: 10.1109/jbhi.2020.2997760] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Accurate lesion segmentation based on endoscopy images is a fundamental task for the automated diagnosis of gastrointestinal tract (GI Tract) diseases. Previous studies usually use hand-crafted features for representing endoscopy images, while feature definition and lesion segmentation are treated as two standalone tasks. Due to the possible heterogeneity between features and segmentation models, these methods often result in sub-optimal performance. Several fully convolutional networks have been recently developed to jointly perform feature learning and model training for GI Tract disease diagnosis. However, they generally ignore local spatial details of endoscopy images, as down-sampling operations (e.g., pooling and convolutional striding) may result in irreversible loss of image spatial information. To this end, we propose a multi-scale context-guided deep network (MCNet) for end-to-end lesion segmentation of endoscopy images in GI Tract, where both global and local contexts are captured as guidance for model training. Specifically, one global subnetwork is designed to extract the global structure and high-level semantic context of each input image. Then we further design two cascaded local subnetworks based on output feature maps of the global subnetwork, aiming to capture both local appearance information and relatively high-level semantic information in a multi-scale manner. Those feature maps learned by three subnetworks are further fused for the subsequent task of lesion segmentation. We have evaluated the proposed MCNet on 1,310 endoscopy images from the public EndoVis-Ab and CVC-ClinicDB datasets for abnormal segmentation and polyp segmentation, respectively. Experimental results demonstrate that MCNet achieves [Formula: see text] and [Formula: see text] mean intersection over union (mIoU) on two datasets, respectively, outperforming several state-of-the-art approaches in automated lesion segmentation with endoscopy images of GI Tract.
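The mean intersection-over-union (mIoU) that MCNet reports can be computed from predicted and ground-truth masks as follows. This is a minimal NumPy sketch with hypothetical two-class masks, not the paper's evaluation code.

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean intersection-over-union averaged over classes present in
    either mask (the standard segmentation metric)."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union:  # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

# Hypothetical 2x4 masks (0 = background, 1 = lesion):
pred   = np.array([[0, 0, 1, 1],
                   [0, 1, 1, 1]])
target = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]])
print(mean_iou(pred, target, num_classes=2))  # 0.775
```

Per-class IoU here is 3/4 for background and 4/5 for the lesion class, so one extra false-positive pixel already costs noticeable mIoU; this sensitivity to boundary errors is what the paper's multi-scale context is meant to address.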
28
Intelligent automated drug administration and therapy: future of healthcare. Drug Deliv Transl Res 2021; 11:1878-1902. [PMID: 33447941 DOI: 10.1007/s13346-020-00876-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/09/2020] [Indexed: 12/13/2022]
Abstract
In the twenty-first century, the collaboration of control engineering and the healthcare sector has matured to some extent; however, the future will have promising opportunities, vast applications, and some challenges. Due to advancements in processing speed, the closed-loop administration of drugs has gained popularity for critically ill patients in intensive care units and routine life such as personalized drug delivery or implantable therapeutic devices. For developing a closed-loop drug delivery system, the control system works with a group of technologies like sensors, micromachining, wireless technologies, and pharmaceuticals. Recently, the integration of artificial intelligence techniques such as fuzzy logic, neural network, and reinforcement learning with the closed-loop drug delivery systems has brought their applications closer to fully intelligent automatic healthcare systems. This review's main objectives are to discuss the current developments, possibilities, and future visions in closed-loop drug delivery systems, for providing treatment to patients suffering from chronic diseases. It summarizes the present insight of closed-loop drug delivery/therapy for diabetes, gastrointestinal tract disease, cancer, anesthesia administration, cardiac ailments, and neurological disorders, from a perspective to show the research in the area of control theory.
29
Naz J, Sharif M, Yasmin M, Raza M, Khan MA. Detection and Classification of Gastrointestinal Diseases using Machine Learning. Curr Med Imaging 2021; 17:479-490. [PMID: 32988355 DOI: 10.2174/1573405616666200928144626] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2020] [Revised: 07/07/2020] [Accepted: 07/23/2020] [Indexed: 12/22/2022]
Abstract
BACKGROUND Traditional endoscopy is an invasive and painful method of examining the gastrointestinal tract (GIT), not favored by physicians or patients. To handle this issue, video endoscopy (VE) or wireless capsule endoscopy (WCE) is recommended and utilized for GIT examination. However, manual assessment of the captured images is impractical even for an expert physician, because thoroughly analyzing thousands of images is a time-consuming task. Hence, a Computer-Aided Diagnosis (CAD) method is needed to help doctors analyze the images. Many researchers have proposed techniques for automated recognition and classification of abnormalities in captured images. METHODS In this article, existing methods for automated classification, segmentation, and detection of several GI diseases are discussed. The paper gives comprehensive detail about these state-of-the-art methods. Furthermore, the literature is divided into several subsections based on preprocessing techniques, segmentation techniques, handcrafted-feature-based techniques, and deep-learning-based techniques. Finally, issues, challenges, and limitations are also addressed. RESULTS A comparative analysis of different approaches for the detection and classification of GI infections is presented. CONCLUSION This comprehensive review article gathers information related to a number of GI disease diagnosis methods in one place. It will help researchers develop new algorithms and approaches for early detection of GI diseases with more promising results than those in the existing literature.
Affiliation(s)
- Javeria Naz
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mussarat Yasmin
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mudassar Raza
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan

30
Owais M, Arsalan M, Mahmood T, Kang JK, Park KR. Automated Diagnosis of Various Gastrointestinal Lesions Using a Deep Learning-Based Classification and Retrieval Framework With a Large Endoscopic Database: Model Development and Validation. J Med Internet Res 2020; 22:e18563. [PMID: 33242010 PMCID: PMC7728528 DOI: 10.2196/18563] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2020] [Revised: 09/16/2020] [Accepted: 11/11/2020] [Indexed: 12/14/2022] Open
Abstract
Background The early diagnosis of various gastrointestinal diseases can lead to effective treatment and reduce the risk of many life-threatening conditions. Unfortunately, various small gastrointestinal lesions are undetectable during early-stage examination by medical experts. In previous studies, various deep learning–based computer-aided diagnosis tools have been used to make a significant contribution to the effective diagnosis and treatment of gastrointestinal diseases. However, most of these methods were designed to detect a limited number of gastrointestinal diseases, such as polyps, tumors, or cancers, in a specific part of the human gastrointestinal tract. Objective This study aimed to develop a comprehensive computer-aided diagnosis tool to assist medical experts in diagnosing various types of gastrointestinal diseases. Methods Our proposed framework comprises a deep learning–based classification network followed by a retrieval method. In the first step, the classification network predicts the disease type for the current medical condition. Then, the retrieval part of the framework shows the relevant cases (endoscopic images) from the previous database. These past cases help the medical expert validate the current computer prediction subjectively, which ultimately results in better diagnosis and treatment. Results All the experiments were performed using 2 endoscopic data sets with a total of 52,471 frames and 37 different classes. The optimal performances obtained by our proposed method in accuracy, F1 score, mean average precision, and mean average recall were 96.19%, 96.99%, 98.18%, and 95.86%, respectively. The overall performance of our proposed diagnostic framework substantially outperformed state-of-the-art methods. Conclusions This study provides a comprehensive computer-aided diagnosis framework for identifying various types of gastrointestinal diseases. 
The results show the superiority of our proposed method over various other recent methods and illustrate its potential for clinical diagnosis and treatment. Our proposed network can be applicable to other classification domains in medical imaging, such as computed tomography scans, magnetic resonance imaging, and ultrasound sequences.
Affiliation(s)
- Muhammad Owais
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
- Muhammad Arsalan
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
- Tahir Mahmood
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
- Jin Kyu Kang
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
- Kang Ryoung Park
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea

31
Abstract
Artificial intelligence (AI) is now a trendy subject in clinical medicine, and especially in gastrointestinal (GI) endoscopy. AI has the potential to improve the quality of GI endoscopy at all levels. It will compensate for humans' errors and limited capabilities by bringing more accuracy, consistency, and higher speed, making endoscopic procedures more efficient and of higher quality. AI has shown strong results in diagnostic and therapeutic endoscopy in all parts of the GI tract. More studies are still needed before this new technology is introduced into daily practice and clinical guidelines, and ethical clearance and new legislation might be required. In conclusion, the introduction of AI will be a major breakthrough in the field of GI endoscopy in the upcoming years, with the potential to bring significant improvements at all levels.
Affiliation(s)
- Ahmad El Hajjar
- Department of Gastroenterology and Digestive Endoscopy, Arnault Tzanck Institute, Saint-Laurent du Var 06700, France

32
Jani KK, Srivastava R. A Survey on Medical Image Analysis in Capsule Endoscopy. Curr Med Imaging 2020; 15:622-636. [PMID: 32008510 DOI: 10.2174/1573405614666181102152434] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2018] [Revised: 10/14/2018] [Accepted: 10/22/2018] [Indexed: 02/06/2023]
Abstract
BACKGROUND AND OBJECTIVE Capsule Endoscopy (CE) is a non-invasive, patient-friendly alternative to the conventional endoscopy procedure. However, CE produces a 6- to 8-hour-long video, posing a tedious challenge to a gastroenterologist for abnormality detection. The major challenges for an expert are the lengthy videos, the need for constant concentration, and the subjectivity of the abnormality. To address these challenges with high diagnostic accuracy, the design and development of an automated abnormality detection system is a must. Machine learning and computer vision techniques are devised to develop such automated systems. METHODS The study presents a review of quality research papers published in the IEEE, Scopus, and Science Direct databases, with the search criteria capsule endoscopy, engineering, and journal papers. The initial search retrieved 144 publications; after evaluating all articles, 62 publications pertaining to image analysis were selected. RESULTS This paper presents a rigorous review covering all aspects of medical image analysis concerning capsule endoscopy, namely video summarization and redundant image elimination, image enhancement and interpretation, segmentation and region identification, computer-aided abnormality detection, and image and video compression. The study provides a comparative analysis of the various approaches, their experimental setups, performance, strengths, and limitations. CONCLUSIONS The analyzed image analysis techniques for capsule endoscopy have not yet overcome all current challenges, mainly due to the lack of datasets and the complex nature of the gastrointestinal tract.
Affiliation(s)
- Kuntesh Ketan Jani
- Computer Science and Engineering Department, Indian Institute of Technology (Banaras Hindu University) Varanasi, Varanasi, Uttar Pradesh, India
- Rajeev Srivastava
- Computer Science and Engineering Department, Indian Institute of Technology (Banaras Hindu University) Varanasi, Varanasi, Uttar Pradesh, India

33
Soffer S, Klang E, Shimon O, Nachmias N, Eliakim R, Ben-Horin S, Kopylov U, Barash Y. Deep learning for wireless capsule endoscopy: a systematic review and meta-analysis. Gastrointest Endosc 2020; 92:831-839.e8. [PMID: 32334015 DOI: 10.1016/j.gie.2020.04.039] [Citation(s) in RCA: 112] [Impact Index Per Article: 22.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/21/2019] [Accepted: 04/13/2020] [Indexed: 12/11/2022]
Abstract
BACKGROUND AND AIMS Deep learning is an innovative algorithm based on neural networks. Wireless capsule endoscopy (WCE) is considered the criterion standard for detecting small-bowel diseases. Manual examination of WCE is time-consuming and can benefit from automatic detection using artificial intelligence (AI). We aimed to perform a systematic review of the current literature pertaining to deep learning implementation in WCE. METHODS We conducted a search in PubMed for all original publications on the subject of deep learning applications in WCE published between January 1, 2016 and December 15, 2019. Evaluation of the risk of bias was performed using tailored Quality Assessment of Diagnostic Accuracy Studies-2. Pooled sensitivity and specificity were calculated. Summary receiver operating characteristic curves were plotted. RESULTS Of the 45 studies retrieved, 19 studies were included. All studies were retrospective. Deep learning applications for WCE included detection of ulcers, polyps, celiac disease, bleeding, and hookworm. Detection accuracy was above 90% for most studies and diseases. Pooled sensitivity and specificity for ulcer detection were .95 (95% confidence interval [CI], .89-.98) and .94 (95% CI, .90-.96), respectively. Pooled sensitivity and specificity for bleeding or bleeding source were .98 (95% CI, .96-.99) and .99 (95% CI, .97-.99), respectively. CONCLUSIONS Deep learning has achieved excellent performance for the detection of a range of diseases in WCE. Notwithstanding, current research is based on retrospective studies with a high risk of bias. Thus, future prospective, multicenter studies are necessary for this technology to be implemented in the clinical use of WCE.
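The sensitivity and specificity values pooled in the meta-analysis come directly from per-study confusion counts. A minimal sketch, with hypothetical counts chosen to mirror the pooled ulcer-detection values (not data from any included study):

```python
def sens_spec(tp: int, fn: int, tn: int, fp: int) -> tuple:
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical ulcer-detection counts on a WCE test set:
sensitivity, specificity = sens_spec(tp=95, fn=5, tn=94, fp=6)
print(round(sensitivity, 2), round(specificity, 2))  # 0.95 0.94
```

The meta-analysis then pools such per-study pairs (e.g. with a bivariate random-effects model) and plots summary ROC curves; the sketch shows only the per-study quantities being pooled.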
Affiliation(s)
- Shelly Soffer
- Department of Diagnostic Imaging, Sheba Medical Center, Tel Hashomer, Israel; Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; DeepVision Lab, Sheba Medical Center, Tel Hashomer, Israel
- Eyal Klang
- Department of Diagnostic Imaging, Sheba Medical Center, Tel Hashomer, Israel; Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; DeepVision Lab, Sheba Medical Center, Tel Hashomer, Israel
- Orit Shimon
- Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; Department of Anesthesia, Rabin Medical Center, Beilinson Hospital, Petach Tikvah, Israel
- Noy Nachmias
- Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; Departments of Internal Medicine D, Tel-Aviv Sourasky Medical Center, Tel Aviv, Israel
- Rami Eliakim
- Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; Department of Gastroenterology, Sheba Medical Center, Tel Hashomer, Israel
- Shomron Ben-Horin
- Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; Department of Gastroenterology, Sheba Medical Center, Tel Hashomer, Israel
- Uri Kopylov
- Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; Department of Gastroenterology, Sheba Medical Center, Tel Hashomer, Israel
- Yiftach Barash
- Department of Diagnostic Imaging, Sheba Medical Center, Tel Hashomer, Israel; Sackler Medical School, Tel Aviv University, Tel Aviv, Israel; DeepVision Lab, Sheba Medical Center, Tel Hashomer, Israel

34
Rahim T, Usman MA, Shin SY. A survey on contemporary computer-aided tumor, polyp, and ulcer detection methods in wireless capsule endoscopy imaging. Comput Med Imaging Graph 2020; 85:101767. [DOI: 10.1016/j.compmedimag.2020.101767] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2019] [Revised: 07/13/2020] [Accepted: 07/18/2020] [Indexed: 12/12/2022]
35
Yang YJ. The Future of Capsule Endoscopy: The Role of Artificial Intelligence and Other Technical Advancements. Clin Endosc 2020; 53:387-394. [PMID: 32668529 PMCID: PMC7403015 DOI: 10.5946/ce.2020.133] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/15/2020] [Accepted: 05/24/2020] [Indexed: 12/13/2022] Open
Abstract
Capsule endoscopy has revolutionized the management of small-bowel diseases owing to its convenience and noninvasiveness. Capsule endoscopy is a common method for the evaluation of obscure gastrointestinal bleeding, Crohn’s disease, small-bowel tumors, and polyposis syndrome. However, the laborious reading process, oversight of small-bowel lesions, and lack of locomotion are major obstacles to expanding its application. Along with recent advances in artificial intelligence, several studies have reported the promising performance of convolutional neural network systems for the diagnosis of various small-bowel lesions including erosion/ulcers, angioectasias, polyps, and bleeding lesions, which have reduced the time needed for capsule endoscopy interpretation. Furthermore, colon capsule endoscopy and capsule endoscopy locomotion driven by magnetic force have been investigated for clinical application, and various capsule endoscopy prototypes for active locomotion, biopsy, or therapeutic approaches have been introduced. In this review, we will discuss the recent advancements in artificial intelligence in the field of capsule endoscopy, as well as studies on other technological improvements in capsule endoscopy.
Affiliation(s)
- Young Joo Yang
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Korea; Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, Korea

36
Thambawita V, Jha D, Hammer HL, Johansen HD, Johansen D, Halvorsen P, Riegler MA. An Extensive Study on Cross-Dataset Bias and Evaluation Metrics Interpretation for Machine Learning Applied to Gastrointestinal Tract Abnormality Classification. ACTA ACUST UNITED AC 2020. [DOI: 10.1145/3386295] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Precise and efficient automated identification of gastrointestinal (GI) tract diseases can help doctors treat more patients and improve the rate of disease detection and identification. Currently, automatic analysis of diseases in the GI tract is a hot topic in both computer science and medical-related journals. Nevertheless, the evaluation of such an automatic analysis is often incomplete or simply wrong. Algorithms are often only tested on small and biased datasets, and cross-dataset evaluations are rarely performed. A clear understanding of evaluation metrics and machine learning models with cross datasets is crucial to bring research in the field to a new quality level. Toward this goal, we present comprehensive evaluations of five distinct machine learning models using global features and deep neural networks that can classify 16 different key types of GI tract conditions, including pathological findings, anatomical landmarks, polyp removal conditions, and normal findings from images captured by common GI tract examination instruments. In our evaluation, we introduce performance hexagons using six performance metrics, such as recall, precision, specificity, accuracy, F1-score, and the Matthews correlation coefficient to demonstrate how to determine the real capabilities of models rather than evaluating them shallowly. Furthermore, we perform cross-dataset evaluations using different datasets for training and testing. With these cross-dataset evaluations, we demonstrate the challenge of actually building a generalizable model that could be used across different hospitals. Our experiments clearly show that more sophisticated performance metrics and evaluation methods need to be applied to get reliable models rather than depending on evaluations of the splits of the same dataset—that is, the performance metrics should always be interpreted together rather than relying on a single metric.
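Of the six performance-hexagon metrics, the Matthews correlation coefficient is the least familiar; it can be computed from binary confusion counts as below. A minimal sketch with hypothetical counts, illustrating the abstract's point that metrics should be read together: accuracy looks high while MCC exposes the class imbalance.

```python
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation coefficient from binary confusion counts;
    ranges from -1 (total disagreement) to +1 (perfect prediction)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical imbalanced test set: accuracy is (90 + 5) / 100 = 0.95,
# yet MCC is only ~0.69 because half the minority class is misclassified.
print(round(mcc(tp=90, tn=5, fp=5, fn=0), 3))
```

This is exactly the kind of gap between single-metric and joint-metric evaluation that the cross-dataset experiments in the paper are designed to surface.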
Affiliation(s)
- Debesh Jha
- SimulaMet and UiT—The Arctic University of Norway, Tromsø, Norway
- Dag Johansen
- UiT—The Arctic University of Norway, Tromsø, Norway
- Pål Halvorsen
- SimulaMet and Oslo Metropolitan University, Oslo, Norway

37
Khan MA, Khan MA, Ahmed F, Mittal M, Goyal LM, Jude Hemanth D, Satapathy SC. Gastrointestinal diseases segmentation and classification based on duo-deep architectures. Pattern Recognit Lett 2020; 131:193-204. [DOI: 10.1016/j.patrec.2019.12.024] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
38
Color-based template selection for detection of gastric abnormalities in video endoscopy. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2019.101668] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
39
Liaqat A, Khan MA, Sharif M, Mittal M, Saba T, Manic KS, Al Attar FNH. Gastric Tract Infections Detection and Classification from Wireless Capsule Endoscopy using Computer Vision Techniques: A Review. Curr Med Imaging 2020; 16:1229-1242. [PMID: 32334504 DOI: 10.2174/1573405616666200425220513] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2019] [Revised: 01/14/2020] [Accepted: 01/30/2020] [Indexed: 11/22/2022]
Abstract
Recent facts and figures published in various US studies show that approximately 27,510 new cases of gastric infections are diagnosed, and the mortality rate in diagnosed cases is quite high. Early detection of these infections can save precious human lives. As manual diagnosis of these infections is time-consuming and expensive, automated Computer-Aided Diagnosis (CAD) systems are required to help endoscopy specialists in their clinics. Generally, an automated method for gastric infection detection using Wireless Capsule Endoscopy (WCE) comprises steps such as contrast preprocessing, feature extraction, segmentation of infected regions, and classification into the relevant categories. These steps involve various challenges that reduce detection and recognition accuracy and increase computation time. In this review, the authors focus on the importance of WCE in medical imaging, the role of endoscopy for bleeding-related infections, and the scope of endoscopy. Further, the general steps are presented, highlighting the importance of each. A detailed discussion and future directions are provided at the end.
Affiliation(s)
- Amna Liaqat
- Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
- Mamta Mittal
- Department of Computer Science & Engineering, G.B. Pant Govt. Engineering College, New Delhi, India
- Tanzila Saba
- Department of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- K Suresh Manic
- Department of Electrical & Computer Engineering, National University of Science & Technology, Muscat, Oman

40
Khan MA, Kadry S, Alhaisoni M, Nam Y, Zhang Y, Rajinikanth V, Sarfraz MS. Computer-Aided Gastrointestinal Diseases Analysis From Wireless Capsule Endoscopy: A Framework of Best Features Selection. IEEE ACCESS 2020; 8:132850-132859. [DOI: 10.1109/access.2020.3010448] [Citation(s) in RCA: 65] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/25/2024]
41
Le Berre C, Sandborn WJ, Aridhi S, Devignes MD, Fournier L, Smaïl-Tabbone M, Danese S, Peyrin-Biroulet L. Application of Artificial Intelligence to Gastroenterology and Hepatology. Gastroenterology 2020; 158:76-94.e2. [PMID: 31593701 DOI: 10.1053/j.gastro.2019.08.058] [Citation(s) in RCA: 321] [Impact Index Per Article: 64.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/20/2018] [Revised: 08/22/2019] [Accepted: 08/24/2019] [Indexed: 02/07/2023]
Abstract
Since 2010, substantial progress has been made in artificial intelligence (AI) and its application to medicine. AI is explored in gastroenterology for endoscopic analysis of lesions, in detection of cancer, and to facilitate the analysis of inflammatory lesions or gastrointestinal bleeding during wireless capsule endoscopy. AI is also tested to assess liver fibrosis and to differentiate patients with pancreatic cancer from those with pancreatitis. AI might also be used to establish prognoses of patients or predict their response to treatments, based on multiple factors. We review the ways in which AI may help physicians make a diagnosis or establish a prognosis and discuss its limitations, knowing that further randomized controlled studies will be required before the approval of AI techniques by the health authorities.
Affiliation(s)
- Catherine Le Berre
- Institut des Maladies de l'Appareil Digestif, Nantes University Hospital, France; Institut National de la Santé et de la Recherche Médicale U954 and Department of Gastroenterology, Nancy University Hospital, University of Lorraine, France
- Sabeur Aridhi
- University of Lorraine, Le Centre National de la Recherche Scientifique, Inria, Laboratoire Lorrain de Recherche en Informatique et ses Applications, Nancy, France
- Marie-Dominique Devignes
- University of Lorraine, Le Centre National de la Recherche Scientifique, Inria, Laboratoire Lorrain de Recherche en Informatique et ses Applications, Nancy, France
- Laure Fournier
- Université Paris-Descartes, Institut National de la Santé et de la Recherche Médicale, Unité Mixte De Recherché S970, Assistance Publique-Hôpitaux de Paris, Paris, France
- Malika Smaïl-Tabbone
- University of Lorraine, Le Centre National de la Recherche Scientifique, Inria, Laboratoire Lorrain de Recherche en Informatique et ses Applications, Nancy, France
- Silvio Danese
- Inflammatory Bowel Disease Center and Department of Biomedical Sciences, Humanitas Clinical and Research Center, Humanitas University, Milan, Italy
- Laurent Peyrin-Biroulet
- Institut National de la Santé et de la Recherche Médicale U954 and Department of Gastroenterology, Nancy University Hospital, University of Lorraine, France.
42
Jia X, Xing X, Yuan Y, Xing L, Meng MQH. Wireless Capsule Endoscopy: A New Tool for Cancer Screening in the Colon With Deep-Learning-Based Polyp Recognition. Proceedings of the IEEE 2020; 108:178-197. [DOI: 10.1109/jproc.2019.2950506]
43
Deeba F, Bui FM, Wahid KA. Computer-aided polyp detection based on image enhancement and saliency-based selection. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2019.04.007]
44
Khan MA, Rashid M, Sharif M, Javed K, Akram T. Classification of gastrointestinal diseases of stomach from WCE using improved saliency-based method and discriminant features selection. Multimedia Tools and Applications 2019; 78:27743-27770. [DOI: 10.1007/s11042-019-07875-9]
45
Ali H, Sharif M, Yasmin M, Rehmani MH, Riaz F. A survey of feature extraction and fusion of deep learning for detection of abnormalities in video endoscopy of gastrointestinal-tract. Artif Intell Rev 2019. [DOI: 10.1007/s10462-019-09743-2]
46
Artificial Intelligence-Based Classification of Multiple Gastrointestinal Diseases Using Endoscopy Videos for Clinical Diagnosis. J Clin Med 2019; 8:jcm8070986. [PMID: 31284687] [PMCID: PMC6678612] [DOI: 10.3390/jcm8070986]
Abstract
Various techniques using artificial intelligence (AI) have contributed significantly to the field of medical image- and video-based diagnosis, such as radiology, pathology, and endoscopy, including the classification of gastrointestinal (GI) diseases. Most previous studies on the classification of GI diseases use only spatial features, which yield low performance in the classification of multiple GI diseases. Although a few previous studies used temporal features based on a three-dimensional convolutional neural network, only a specific part of the GI tract was covered, with a limited number of classes. To overcome these problems, we propose a comprehensive AI-based framework for the classification of multiple GI diseases from endoscopic videos that can simultaneously extract both spatial and temporal features to achieve better classification performance. Two different residual networks and a long short-term memory model are integrated in a cascaded mode to extract spatial and temporal features, respectively. Experiments were conducted on a combined dataset, one of the largest collections of endoscopic videos, comprising 52,471 frames. The results demonstrate the effectiveness of the proposed classification framework for multiple GI diseases. The experimental results of the proposed model (97.057% area under the curve) demonstrate superior performance over state-of-the-art methods and indicate its potential for clinical applications.
47
Cummins G, Cox BF, Ciuti G, Anbarasan T, Desmulliez MPY, Cochran S, Steele R, Plevris JN, Koulaouzidis A. Gastrointestinal diagnosis using non-white light imaging capsule endoscopy. Nat Rev Gastroenterol Hepatol 2019; 16:429-447. [PMID: 30988520] [DOI: 10.1038/s41575-019-0140-z]
Abstract
Capsule endoscopy (CE) has proved to be a powerful tool in the diagnosis and management of small bowel disorders since its introduction in 2001. However, white light imaging (WLI) is the principal technology used in clinical CE at present, and therefore, CE is limited to mucosal inspection, with diagnosis remaining reliant on visible manifestations of disease. The introduction of WLI CE has motivated a wide range of research to improve its diagnostic capabilities through integration with other sensing modalities. These developments have the potential to overcome the limitations of WLI through enhanced detection of subtle mucosal microlesions and submucosal and/or transmural pathology, providing novel diagnostic avenues. Other research aims to utilize a range of sensors to measure physiological parameters or to discover new biomarkers to improve the sensitivity, specificity and thus the clinical utility of CE. This multidisciplinary Review summarizes research into non-WLI CE devices by organizing them into a taxonomic structure on the basis of their sensing modality. The potential of these capsules to realize clinically useful virtual biopsy and computer-aided diagnosis (CADx) is also reported.
Affiliation(s)
- Gerard Cummins
- School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, UK.
- Gastone Ciuti
- The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy
- Marc P Y Desmulliez
- School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, UK
- Sandy Cochran
- School of Engineering, University of Glasgow, Glasgow, UK
- Robert Steele
- School of Medicine, University of Dundee, Dundee, UK
- John N Plevris
- Centre for Liver and Digestive Disorders, The Royal Infirmary of Edinburgh, Edinburgh, UK
48
Application of MR morphologic, diffusion tensor, and perfusion imaging in the classification of brain tumors using machine learning scheme. Neuroradiology 2019; 61:757-765. [PMID: 30949746] [DOI: 10.1007/s00234-019-02195-z]
Abstract
PURPOSE: While MRI is the modality of choice for the assessment of patients with brain tumors, differentiation between various tumors based on their imaging characteristics can be challenging due to overlapping imaging features. The purpose of this study was to apply a machine learning scheme using basic and advanced MR sequences to distinguish different types of brain tumors.
METHODS: The study cohort included 141 patients (41 glioblastoma, 38 metastasis, 50 meningioma, and 12 primary central nervous system lymphoma [PCNSL]). A computer-assisted classification scheme combining morphologic MRI, perfusion MRI, and DTI metrics was developed and used for tumor classification. The proposed multistep scheme consists of pre-processing, ROI definition, feature extraction, feature selection, and classification. Feature subset selection was performed using support vector machines (SVMs). Classification performance was assessed by leave-one-out cross-validation. Given an ROI, the entire classification process ran automatically, without any human intervention.
RESULTS: A binary hierarchical classification tree was chosen. In the first step, selected features distinguished glioblastoma from the remaining three classes, followed by separation of meningioma from metastasis and PCNSL, and finally discrimination of PCNSL from metastasis. The binary SVM classification accuracy, sensitivity, and specificity were 95.7%, 81.6%, and 91.2% for glioblastoma; 92.7%, 95.1%, and 93.6% for metastasis; 97%, 90.8%, and 58.3% for meningioma; and 91.5%, 90%, and 96.9% for PCNSL, respectively.
CONCLUSION: A machine learning scheme using data from anatomical and advanced MRI sequences resulted in a high-performance automatic tumor classification algorithm. Such a scheme can be integrated into clinical decision support systems to optimize tumor classification.
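The leave-one-out evaluation protocol described in this abstract can be illustrated with a small self-contained sketch. A toy nearest-centroid classifier stands in for the paper's SVM, and the two-dimensional feature vectors and class labels below are synthetic stand-ins, not data from the study:

```python
# Sketch of leave-one-out cross-validation (LOOCV): each sample is held out
# once, the model is fit on the rest, and the held-out case is scored.
# The nearest-centroid rule is an illustrative stand-in for the paper's SVM.

def nearest_centroid_predict(train, labels, x):
    """Assign x to the class whose feature centroid is closest (squared Euclidean)."""
    groups = {}
    for vec, lab in zip(train, labels):
        groups.setdefault(lab, []).append(vec)
    best_lab, best_d = None, float("inf")
    for lab, vecs in groups.items():
        centroid = [sum(col) / len(vecs) for col in zip(*vecs)]
        d = sum((a - b) ** 2 for a, b in zip(centroid, x))
        if d < best_d:
            best_lab, best_d = lab, d
    return best_lab

def leave_one_out_accuracy(features, labels):
    """LOOCV: hold out sample i, train on the remainder, score the held-out case."""
    correct = 0
    for i in range(len(features)):
        train = features[:i] + features[i + 1:]
        train_labels = labels[:i] + labels[i + 1:]
        if nearest_centroid_predict(train, train_labels, features[i]) == labels[i]:
            correct += 1
    return correct / len(features)

# Two well-separated synthetic "tumor" classes in a 2-D feature space.
features = [(0.1, 0.2), (0.2, 0.1), (0.0, 0.3), (2.1, 2.0), (1.9, 2.2), (2.0, 1.8)]
labels = ["glioblastoma", "glioblastoma", "glioblastoma",
          "meningioma", "meningioma", "meningioma"]
print(leave_one_out_accuracy(features, labels))  # 1.0 on this separable toy data
```

With only 141 patients, LOOCV is a reasonable choice because it uses nearly all cases for training in every fold, at the cost of refitting the model once per patient.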
49
DINOSARC: Color Features Based on Selective Aggregation of Chromatic Image Components for Wireless Capsule Endoscopy. Comput Math Methods Med 2018; 2018:2026962. [PMID: 30250496] [PMCID: PMC6140007] [DOI: 10.1155/2018/2026962]
Abstract
Wireless Capsule Endoscopy (WCE) is a noninvasive diagnostic technique enabling inspection of the whole gastrointestinal (GI) tract by capturing and wirelessly transmitting thousands of color images. Proprietary software "stitches" the images into videos for examination by accredited readers. However, the videos produced are long, which makes the reading task harder and more prone to human error. Automating the WCE reading process could contribute both to reducing the examination time and to improving its diagnostic accuracy. In this paper, we present a novel feature extraction methodology for automated WCE image analysis. It aims at discriminating various kinds of abnormalities from the normal contents of WCE images in a machine learning-based classification framework. The extraction of the proposed features involves an unsupervised color-based saliency detection scheme which, unlike current approaches, combines both point- and region-level saliency information with the estimation of local and global image color descriptors. The salient point detection process involves estimating DIstaNces On Selective Aggregation of chRomatic image Components (DINOSARC). The descriptors are extracted from superpixels by co-evaluating both point- and region-level information. The main conclusions of the experiments performed on a publicly available dataset of WCE images are that (a) the proposed salient point detection scheme results in significantly fewer and more relevant salient points, and (b) the proposed descriptors are more discriminative than relevant state-of-the-art descriptors, promising wider adoption of the proposed approach for computer-aided diagnosis in WCE.
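The general idea behind color-based salient point detection, flagging pixels whose chromatic components deviate strongly from the image as a whole, can be illustrated loosely in a few lines. This generic toy scheme is not the paper's DINOSARC estimator, and the 4x4 RGB "image" is synthetic:

```python
# Loose illustration of color-based saliency: normalize pixels to chromatic
# coordinates (discounting brightness), then flag pixels whose chroma lies far
# from the global mean. Not the paper's DINOSARC method; data is synthetic.

def chromatic_saliency(image, threshold=0.15):
    """Return (row, col) of pixels whose chromatic coordinates are far from the mean."""
    # Chromatic coordinates (r, g) = (R, G) / (R + G + B) keep color, drop brightness.
    chroma = [[(r / max(r + g + b, 1), g / max(r + g + b, 1))
               for (r, g, b) in row] for row in image]
    n = sum(len(row) for row in chroma)
    mean_r = sum(c[0] for row in chroma for c in row) / n
    mean_g = sum(c[1] for row in chroma for c in row) / n
    salient = []
    for i, row in enumerate(chroma):
        for j, (cr, cg) in enumerate(row):
            if ((cr - mean_r) ** 2 + (cg - mean_g) ** 2) ** 0.5 > threshold:
                salient.append((i, j))
    return salient

# Mostly reddish mucosa-like pixels with one greenish outlier at (1, 2).
image = [[(200, 80, 60)] * 4 for _ in range(4)]
image[1][2] = (60, 200, 80)
print(chromatic_saliency(image))  # [(1, 2)] - only the chromatic outlier survives
```

The point of such a scheme, as the abstract argues, is that fewer, more relevant salient points make the downstream descriptors cheaper to compute and more discriminative.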
50
Yuan Y, Yao X, Han J, Guo L, Meng MQH. Discriminative Joint-Feature Topic Model With Dual Constraints for WCE Classification. IEEE Trans Cybern 2018; 48:2074-2085. [PMID: 28749365] [DOI: 10.1109/tcyb.2017.2726818]
Abstract
Wireless capsule endoscopy (WCE) enables clinicians to examine the digestive tract without any surgical operation, at the cost of a large number of images to be analyzed. The main challenge for automatic computer-aided diagnosis arises from the difficulty of robustly characterizing these images. To tackle this problem, a novel discriminative joint-feature topic model (DJTM) with dual constraints is proposed to classify multiple abnormalities in WCE images. We first propose a joint-feature probabilistic latent semantic analysis (PLSA) model, where color and texture descriptors extracted from the same image patches are jointly modeled with their conditional distributions. The proposed dual constraints, visual word importance and a local image manifold, are then embedded into the joint-feature PLSA model simultaneously to obtain discriminative latent semantic topics. The visual word importance constraint guarantees that visual words with similar importance come from close latent topics, while the local image manifold constraint enforces that images within the same category share similar latent topics. Finally, each image is characterized by its distribution over latent semantic topics instead of low-level features. The proposed DJTM achieved an excellent overall recognition accuracy of 90.78%. Comprehensive comparison results demonstrate that our method outperforms existing multiple-abnormality classification methods for WCE images.
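The PLSA core that this model extends can be sketched with a minimal EM loop. The sketch omits the paper's joint color-texture modelling and both dual constraints, and the document-word count matrix below is synthetic:

```python
# Minimal plain-PLSA EM sketch: fit P(w|z) and P(z|d) on a documents x words
# count matrix. Omits the paper's joint-feature modelling and dual constraints.
import random

def plsa(counts, n_topics, n_iter=50, seed=0):
    """Fit topic-word P(w|z) and doc-topic P(z|d) distributions by EM."""
    rng = random.Random(seed)
    n_docs, n_words = len(counts), len(counts[0])
    # Random positive initialization, normalized to valid distributions.
    p_w_z = [[rng.random() for _ in range(n_words)] for _ in range(n_topics)]
    p_z_d = [[rng.random() for _ in range(n_topics)] for _ in range(n_docs)]
    for row in p_w_z + p_z_d:
        s = sum(row)
        row[:] = [v / s for v in row]
    for _ in range(n_iter):
        new_w_z = [[0.0] * n_words for _ in range(n_topics)]
        new_z_d = [[0.0] * n_topics for _ in range(n_docs)]
        for d in range(n_docs):
            for w in range(n_words):
                if counts[d][w] == 0:
                    continue
                # E-step: topic posterior P(z|d,w) proportional to P(w|z) P(z|d).
                post = [p_w_z[z][w] * p_z_d[d][z] for z in range(n_topics)]
                s = sum(post) or 1.0
                for z in range(n_topics):
                    r = counts[d][w] * post[z] / s  # expected count for topic z
                    new_w_z[z][w] += r
                    new_z_d[d][z] += r
        # M-step: renormalize the expected counts into distributions.
        for z in range(n_topics):
            s = sum(new_w_z[z]) or 1.0
            p_w_z[z] = [v / s for v in new_w_z[z]]
        for d in range(n_docs):
            s = sum(new_z_d[d]) or 1.0
            p_z_d[d] = [v / s for v in new_z_d[d]]
    return p_w_z, p_z_d

# Two toy "image" documents with disjoint visual-word usage.
counts = [[8, 7, 0, 0], [0, 0, 9, 6]]
p_w_z, p_z_d = plsa(counts, n_topics=2)
# Each image is then represented by its topic distribution P(z|d), which is
# the kind of representation the DJTM feeds to its classifier.
```

The dual constraints in the paper act as regularizers on exactly these two distributions, pulling related visual words and same-category images toward shared topics.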