1
Wang Y, Shi T, Gao F, Tian S, Yu L. Celiac disease diagnosis from endoscopic images based on multi-scale adaptive hybrid architecture model. Phys Med Biol 2024; 69:075014. [PMID: 38306971] [DOI: 10.1088/1361-6560/ad25c1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Objective. Celiac disease (CD) has emerged as a significant global public health concern, exhibiting an estimated worldwide prevalence of approximately 1%. However, existing research pertaining to domestic occurrences of CD is confined mainly to case reports and limited case analyses. Furthermore, there is a substantial population of undiagnosed patients in the Xinjiang region. This study endeavors to create a novel, high-performance, lightweight deep learning model utilizing endoscopic images from CD patients in Xinjiang as a dataset, with the intention of enhancing the accuracy of CD diagnosis. Approach. In this study, we propose a novel CNN-Transformer hybrid architecture for deep learning, tailored to the diagnosis of CD using endoscopic images. Within this architecture, a multi-scale spatial adaptive selective kernel convolution feature attention module demonstrates remarkable efficacy in diagnosing CD. Within this module, we dynamically capture salient features within the local channel feature map that correspond to distinct manifestations of endoscopic image lesions in the CD-affected areas, such as the duodenal bulb, duodenal descending segment, and terminal ileum. This process serves to extract and fortify the spatial information specific to different lesions. This strategic approach facilitates not only the extraction of diverse lesion characteristics but also the attentive consideration of their spatial distribution. Additionally, we integrate the global representation of the feature map obtained from the Transformer with the locally extracted information via convolutional layers. This integration achieves a harmonious synergy that optimizes the diagnostic capability of the model. Main results. Overall, the accuracy, specificity, F1-score, and precision in the experimental results were 98.38%, 99.04%, 98.66% and 99.38%, respectively. Significance. This study introduces a deep learning network equipped with both global feature response and local feature extraction capabilities. This innovative architecture holds significant promise for the accurate diagnosis of CD by leveraging endoscopic images captured from diverse anatomical sites.
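The authors' module is not reproduced in this listing; the following is a minimal PyTorch sketch of a selective-kernel-style multi-scale attention block of the general kind the abstract describes (parallel convolutions at several kernel sizes fused by learned channel attention). All class and parameter names are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class MultiScaleSelectiveKernel(nn.Module):
    """Hypothetical selective-kernel block: parallel 3x3/5x5/7x7 branches
    fused by softmax channel attention (SKNet-style), not the authors' code."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=k, padding=k // 2, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for k in (3, 5, 7)
        ])
        hidden = max(channels // reduction, 8)
        self.squeeze = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.ReLU(inplace=True),
        )
        # One attention map per branch; softmax over branches selects kernels per channel.
        self.attend = nn.Conv2d(hidden, channels * len(self.branches), kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.stack([b(x) for b in self.branches], dim=1)  # (B, K, C, H, W)
        fused = feats.sum(dim=1)                                   # cross-branch summary
        attn = self.attend(self.squeeze(fused))                    # (B, K*C, 1, 1)
        attn = attn.view(x.size(0), len(self.branches), -1, 1, 1).softmax(dim=1)
        return (feats * attn).sum(dim=1)                           # attention-weighted fusion

if __name__ == "__main__":
    y = MultiScaleSelectiveKernel(64)(torch.randn(2, 64, 56, 56))
    print(y.shape)  # torch.Size([2, 64, 56, 56])
```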
Affiliation(s)
- Yilei Wang
- College of Software, Xinjiang University, Urumqi, Xinjiang, People's Republic of China
- Key Laboratory of Software Engineering Technology, College of Software, Xinjiang University, Urumqi, People's Republic of China
- Tian Shi
- Department of Gastroenterology, People's Hospital of Xinjiang Uyghur Autonomous Region, Urumqi, Xinjiang Uyghur Autonomous Region, People's Republic of China
- Xinjiang Clinical Research Center for Digestive Diseases, Urumqi, People's Republic of China
- Feng Gao
- Department of Gastroenterology, People's Hospital of Xinjiang Uyghur Autonomous Region, Urumqi, Xinjiang Uyghur Autonomous Region, People's Republic of China
- Xinjiang Clinical Research Center for Digestive Diseases, Urumqi, People's Republic of China
- Shengwei Tian
- College of Software, Xinjiang University, Urumqi, Xinjiang, People's Republic of China
- Key Laboratory of Software Engineering Technology, College of Software, Xinjiang University, Urumqi, People's Republic of China
- Long Yu
- College of Network Center, Xinjiang University, Urumqi, People's Republic of China
- Signal and Signal Processing Laboratory, College of Information Science and Engineering, Xinjiang University, Urumqi, People's Republic of China
2
Galati JS, Duve RJ, O'Mara M, Gross SA. Artificial intelligence in gastroenterology: A narrative review. Artif Intell Gastroenterol 2022; 3:117-141. [DOI: 10.35712/aig.v3.i5.117] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Artificial intelligence (AI) is a complex concept, broadly defined in medicine as the development of computer systems to perform tasks that require human intelligence. It has the capacity to revolutionize medicine by increasing efficiency, expediting data and image analysis and identifying patterns, trends and associations in large datasets. Within gastroenterology, recent research efforts have focused on using AI in esophagogastroduodenoscopy, wireless capsule endoscopy (WCE) and colonoscopy to assist in diagnosis, disease monitoring, lesion detection and therapeutic intervention. The main objective of this narrative review is to provide a comprehensive overview of the research being performed within gastroenterology on AI in esophagogastroduodenoscopy, WCE and colonoscopy.
Affiliation(s)
- Jonathan S Galati
- Department of Medicine, NYU Langone Health, New York, NY 10016, United States
- Robert J Duve
- Department of Internal Medicine, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, Buffalo, NY 14203, United States
- Matthew O'Mara
- Division of Gastroenterology, NYU Langone Health, New York, NY 10016, United States
- Seth A Gross
- Division of Gastroenterology, NYU Langone Health, New York, NY 10016, United States
3
Kociołek M, Kozłowski M, Cardone A. A Convolutional Neural Networks-Based Approach for Texture Directionality Detection. Sensors 2022; 22:562. [PMID: 35062522] [PMCID: PMC8778371] [DOI: 10.3390/s22020562] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Perceived texture directionality is an important and not fully explored image characteristic. In many applications, texture directionality detection is of fundamental importance. Several approaches have been proposed, such as the fast Fourier-based method. We recently proposed a method based on the interpolated grey-level co-occurrence matrix (iGLCM), which is robust to image blur and noise but slower than the Fourier-based method. Here we test the applicability of convolutional neural networks (CNNs) to texture directionality detection. To obtain the large amount of training data required, we built a training dataset consisting of synthetic textures with known directionality and varying perturbation levels. Subsequently, we defined and tested shallow and deep CNN architectures. We present the test results focusing on the CNN architectures and their robustness with respect to image perturbations. We identify the best-performing CNN architecture and compare it with the iGLCM, the Fourier and the local gradient orientation methods. We find that the accuracy of the CNN is lower than, yet comparable to, that of the iGLCM, and that it outperforms the other two methods. As expected, the CNN method shows the highest computing speed. Finally, we demonstrate the best-performing CNN on real-life images. Visual analysis suggests that the learned patterns generalize to real-life image data. Hence, CNNs represent a promising approach for texture directionality detection, warranting further investigation.
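No reference implementation accompanies this entry; the sketch below illustrates the kind of shallow CNN the abstract evaluates, assuming the dominant texture direction is regressed as (sin 2θ, cos 2θ) to respect its 180° periodicity. The architecture, patch size and training setup are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class DirectionalityCNN(nn.Module):
    """Shallow CNN sketch: grayscale texture patch -> (sin 2θ, cos 2θ).
    The double angle handles the 180-degree periodicity of texture direction."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)  # predicts (sin 2θ, cos 2θ)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

def angle_from_output(out: torch.Tensor) -> torch.Tensor:
    """Recover the direction estimate in [0, pi) from the network output."""
    return 0.5 * torch.atan2(out[:, 0], out[:, 1]) % torch.pi

if __name__ == "__main__":
    model = DirectionalityCNN()
    patches = torch.randn(4, 1, 64, 64)         # stand-ins for synthetic texture patches
    print(angle_from_output(model(patches)))    # four angles in radians
```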
Affiliation(s)
- Marcin Kociołek
- Institute of Electronics, Lodz University of Technology, Al. Politechniki 10, 93-590 Łódź, Poland
- Correspondence: Tel. +48-603-291-300
- Michał Kozłowski
- Department of Mechatronics, Faculty of Technical Science, University of Warmia and Mazury, Ul. Oczapowskiego 11, 10-710 Olsztyn, Poland
- Antonio Cardone
- Information Technology Laboratory, Software and Systems Division, National Institute of Standards and Technology, 100 Bureau Drive, Gaithersburg, MD 20899, USA
4
Ursuleanu TF, Luca AR, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Preda C, Grigorovici A. Deep Learning Application for Analyzing of Constituents and Their Correlations in the Interpretations of Medical Images. Diagnostics (Basel) 2021; 11:1373. [PMID: 34441307] [PMCID: PMC8393354] [DOI: 10.3390/diagnostics11081373] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5]
Abstract
The growing volume of medical data that must be interpreted and filtered for diagnostic and therapeutic purposes, and the resulting demands on the time and attention physicians can give to patients, have encouraged the development of deep learning models as constructive and effective support. Deep learning (DL) has developed exponentially in recent years, with a major impact on the interpretation of medical images. This has influenced the development, diversification and quality of scientific data, the development of knowledge construction methods, and the improvement of DL models used in medical applications. Existing research papers focus on describing, highlighting or classifying a single constituent element of DL models used in the interpretation of medical images, and do not provide a unified picture of the importance and impact of each constituent on the performance of DL models. The novelty of our paper lies primarily in its unified treatment of the constituent elements of DL models, namely the data, the tools used by DL architectures or specifically constructed combinations of DL architectures, and in highlighting their "key" features for completing tasks in current applications in the interpretation of medical images. The use of the "key" characteristics specific to each constituent of DL models, and the correct determination of their correlations, may be the subject of future research aimed at increasing the performance of DL models in the interpretation of medical images.
Affiliation(s)
- Tudor Florin Ursuleanu
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Department of Surgery I, Regional Institute of Oncology, 700483 Iasi, Romania
- Andreea Roxana Luca
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Obstetrics and Gynecology, Integrated Ambulatory of Hospital “Sf. Spiridon”, 700106 Iasi, Romania
- Liliana Gheorghe
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Radiology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Roxana Grigorovici
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Stefan Iancu
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Maria Hlusneac
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Cristina Preda
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Endocrinology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Alexandru Grigorovici
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
5
Wang X, Qian H, Ciaccio EJ, Lewis SK, Bhagat G, Green PH, Xu S, Huang L, Gao R, Liu Y. Celiac disease diagnosis from videocapsule endoscopy images with residual learning and deep feature extraction. Comput Methods Programs Biomed 2020; 187:105236. [PMID: 31786452] [DOI: 10.1016/j.cmpb.2019.105236] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8]
Abstract
BACKGROUND AND OBJECTIVE Videocapsule endoscopy (VCE) is a relatively new technique for evaluating the presence of villous atrophy in celiac disease patients. The diagnostic analysis of video frames is currently time-consuming and tedious. Recently, computer-aided diagnosis (CAD) systems have become an attractive research area for diagnosing celiac disease. However, the images captured from VCE are susceptible to alterations in light illumination, rotation direction, and intestinal secretions. Moreover, textural features of the mucosal villi obtained by VCE are difficult to characterize and extract. This work aims to find a novel deep learning feature learning module to assist in the diagnosis of celiac disease. METHODS In this manuscript, we propose a novel deep learning recalibration module which shows significant gain in diagnosing celiac disease. In this recalibration module, the block-wise recalibration component is newly employed to capture the most salient feature in the local channel feature map. This learning module was embedded into ResNet50 and Inception-v3 to diagnose celiac disease using a 10-time 10-fold cross-validation based upon analysis of VCE images. In addition, we employed the model weights to extract feature points from training and test samples before the last fully connected layer, and then input them to a support vector machine (SVM), k-nearest neighbor (KNN) classifier, and linear discriminant analysis (LDA) for differentiating celiac disease images from healthy controls. RESULTS Overall, the accuracy, sensitivity and specificity of the 10-time 10-fold cross-validation were 95.94%, 97.20% and 95.63%, respectively. CONCLUSIONS A novel deep learning recalibration module, with global response and local salient factors, is proposed, and it has high potential for utilizing deep learning networks to diagnose celiac disease using VCE images.
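The paper's recalibration module is not reproduced here; the sketch below assumes a squeeze-and-excitation-style channel recalibration appended to a ResNet50 stage, followed by the deep-feature-plus-SVM step the abstract describes (features taken before the last fully connected layer). Placement, reduction ratio and classifier settings are illustrative, not the authors' choices.

```python
import torch
import torch.nn as nn
from sklearn.svm import SVC
from torchvision.models import resnet50

class ChannelRecalibration(nn.Module):
    """SE-style recalibration sketch: reweights channels by a gated global summary."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)

# Backbone with recalibration appended after the last residual stage (illustrative placement).
backbone = resnet50(weights=None)
backbone.layer4 = nn.Sequential(backbone.layer4, ChannelRecalibration(2048))
feature_extractor = nn.Sequential(*list(backbone.children())[:-1])  # drop the final FC layer

@torch.no_grad()
def deep_features(images: torch.Tensor) -> torch.Tensor:
    """Pooled 2048-dimensional features taken before the final fully connected layer."""
    feature_extractor.eval()
    return feature_extractor(images).flatten(1)

if __name__ == "__main__":
    # Toy stand-in data; in practice these would be labeled VCE frames.
    x_train, y_train = torch.randn(8, 3, 224, 224), [0, 1] * 4
    x_test = torch.randn(2, 3, 224, 224)
    svm = SVC(kernel="linear").fit(deep_features(x_train).numpy(), y_train)
    print(svm.predict(deep_features(x_test).numpy()))
```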
Affiliation(s)
- Xinle Wang
- School of Instrument Science and Opto-electronic Engineering, Hefei University of Technology, Hefei 230009, China
- Haiyang Qian
- School of Instrument Science and Opto-electronic Engineering, Hefei University of Technology, Hefei 230009, China
- Edward J Ciaccio
- Columbia University Medical Center, Department of Medicine - Celiac Disease Center, New York, USA
- Suzanne K Lewis
- Columbia University Medical Center, Department of Medicine - Celiac Disease Center, New York, USA
- Govind Bhagat
- Columbia University Medical Center, Department of Medicine - Celiac Disease Center, New York, USA; Columbia University Medical Center, Department of Pathology and Cell Biology, New York, USA
- Peter H Green
- Columbia University Medical Center, Department of Medicine - Celiac Disease Center, New York, USA
- Shenghao Xu
- Shandong Key Laboratory of Biochemical Analysis, College of Chemistry and Molecular Engineering, Qingdao University of Science and Technology, Qingdao 266042, China
- Liang Huang
- School of Instrument Science and Opto-electronic Engineering, Hefei University of Technology, Hefei 230009, China
- Rongke Gao
- School of Instrument Science and Opto-electronic Engineering, Hefei University of Technology, Hefei 230009, China
- Yu Liu
- School of Instrument Science and Opto-electronic Engineering, Hefei University of Technology, Hefei 230009, China
6
Deep Learning for Medical Image Processing: Overview, Challenges and the Future. Lecture Notes in Computational Vision and Biomechanics 2018. [DOI: 10.1007/978-3-319-65981-7_12] [Citation(s) in RCA: 239] [Impact Index Per Article: 34.1]