1
Hansun S, Argha A, Bakhshayeshi I, Wicaksana A, Alinejad-Rokny H, Fox GJ, Liaw ST, Celler BG, Marks GB. Diagnostic Performance of Artificial Intelligence-Based Methods for Tuberculosis Detection: Systematic Review. J Med Internet Res 2025;27:e69068. PMID: 40053773; PMCID: PMC11928776; DOI: 10.2196/69068.
Abstract
BACKGROUND Tuberculosis (TB) remains a significant health concern, contributing to the highest mortality among infectious diseases worldwide. However, none of the various TB diagnostic tools introduced is deemed sufficient on its own for the diagnostic pathway, so various artificial intelligence (AI)-based methods have been developed to address this issue. OBJECTIVE We aimed to provide a comprehensive evaluation of AI-based algorithms for TB detection across various data modalities. METHODS Following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) 2020 guidelines, we conducted a systematic review to synthesize current knowledge on this topic. Our search across 3 major databases (Scopus, PubMed, Association for Computing Machinery [ACM] Digital Library) yielded 1146 records, of which we included 152 (13.3%) studies in our analysis. QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies version 2) was performed for the risk-of-bias assessment of all included studies. RESULTS Radiographic biomarkers (n=129, 84.9%) and deep learning (DL; n=122, 80.3%) approaches were predominantly used, with convolutional neural networks (CNNs) using Visual Geometry Group (VGG)-16 (n=37, 24.3%), ResNet-50 (n=33, 21.7%), and DenseNet-121 (n=19, 12.5%) architectures being the most common DL approaches. The majority of studies focused on model development (n=143, 94.1%) and used a single-modality approach (n=141, 92.8%). AI methods demonstrated good performance in all studies: mean accuracy=91.93% (SD 8.10%, 95% CI 90.52%-93.33%; median 93.59%, IQR 88.33%-98.32%), mean area under the curve (AUC)=93.48% (SD 7.51%, 95% CI 91.90%-95.06%; median 95.28%, IQR 91%-99%), mean sensitivity=92.77% (SD 7.48%, 95% CI 91.38%-94.15%; median 94.05%, IQR 89%-98.87%), and mean specificity=92.39% (SD 9.4%, 95% CI 90.30%-94.49%; median 95.38%, IQR 89.42%-99.19%).
AI performance across different biomarker types showed mean accuracies of 92.45% (SD 7.83%), 89.03% (SD 8.49%), and 84.21% (SD 0%); mean AUCs of 94.47% (SD 7.32%), 88.45% (SD 8.33%), and 88.61% (SD 5.9%); mean sensitivities of 93.8% (SD 6.27%), 88.41% (SD 10.24%), and 93% (SD 0%); and mean specificities of 94.2% (SD 6.63%), 85.89% (SD 14.66%), and 95% (SD 0%) for radiographic, molecular/biochemical, and physiological types, respectively. AI performance across various reference standards showed mean accuracies of 91.44% (SD 7.3%), 93.16% (SD 6.44%), and 88.98% (SD 9.77%); mean AUCs of 90.95% (SD 7.58%), 94.89% (SD 5.18%), and 92.61% (SD 6.01%); mean sensitivities of 91.76% (SD 7.02%), 93.73% (SD 6.67%), and 91.34% (SD 7.71%); and mean specificities of 86.56% (SD 12.8%), 93.69% (SD 8.45%), and 92.7% (SD 6.54%) for bacteriological, human reader, and combined reference standards, respectively. The transfer learning (TL) approach showed increasing popularity (n=89, 58.6%). Notably, only 1 (0.7%) study conducted domain-shift analysis for TB detection. CONCLUSIONS Findings from this review underscore the considerable promise of AI-based methods in the realm of TB detection. Future research endeavors should prioritize conducting domain-shift analyses to better simulate real-world scenarios in TB detection. TRIAL REGISTRATION PROSPERO CRD42023453611; https://www.crd.york.ac.uk/PROSPERO/view/CRD42023453611.
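For readers reproducing such pooled summaries, the mean/SD/95% CI triplets above follow the usual normal-approximation formula over per-study metrics; a minimal sketch with invented accuracy values (not the review's data):

```python
import math

def summarize(values, z=1.96):
    """Mean, sample SD, and a normal-approximation 95% CI for a list of per-study metrics."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))  # sample SD
    half = z * sd / math.sqrt(n)  # half-width of the CI for the mean
    return mean, sd, (mean - half, mean + half)

# Illustrative per-study accuracies (%); NOT the review's data
mean_acc, sd_acc, (lo, hi) = summarize([88.3, 93.6, 98.3, 91.0, 95.4])
```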
Affiliation(s)
- Seng Hansun
- School of Clinical Medicine, South West Sydney, UNSW Medicine & Health, UNSW Sydney, Sydney, Australia
- Woolcock Vietnam Research Group, Woolcock Institute of Medical Research, Sydney, Australia
- Ahmadreza Argha
- Graduate School of Biomedical Engineering, UNSW Sydney, Sydney, Australia
- Tyree Institute of Health Engineering, UNSW Sydney, Sydney, Australia
- Ageing Future Institute, UNSW Sydney, Sydney, Australia
- Ivan Bakhshayeshi
- Graduate School of Biomedical Engineering, UNSW Sydney, Sydney, Australia
- BioMedical Machine Learning Lab, Graduate School of Biomedical Engineering, UNSW Sydney, Sydney, Australia
- Arya Wicaksana
- Informatics Department, Universitas Multimedia Nusantara, Tangerang, Indonesia
- Hamid Alinejad-Rokny
- Tyree Institute of Health Engineering, UNSW Sydney, Sydney, Australia
- Ageing Future Institute, UNSW Sydney, Sydney, Australia
- BioMedical Machine Learning Lab, Graduate School of Biomedical Engineering, UNSW Sydney, Sydney, Australia
- Greg J Fox
- NHMRC Clinical Trials Centre, Faculty of Medicine and Health, University of Sydney, Sydney, Australia
- Siaw-Teng Liaw
- School of Population Health and School of Clinical Medicine, UNSW Sydney, Sydney, Australia
- Branko G Celler
- Biomedical Systems Research Laboratory, School of Electrical Engineering and Telecommunications, UNSW Sydney, Sydney, Australia
- Guy B Marks
- School of Clinical Medicine, South West Sydney, UNSW Medicine & Health, UNSW Sydney, Sydney, Australia
- Woolcock Vietnam Research Group, Woolcock Institute of Medical Research, Sydney, Australia
- Burnet Institute, Melbourne, Australia
2
Liu J, Cai X, Niranjan M. Medical image classification by incorporating clinical variables and learned features. R Soc Open Sci 2025;12:241222. PMID: 40078919; PMCID: PMC11897822; DOI: 10.1098/rsos.241222.
Abstract
Medical image classification is a core task in medical imaging. In this work, we present a novel approach that enhances deep learning models for medical image classification by incorporating clinical variables without letting them overwhelm the image information. Unlike most existing deep neural network models that consider pixel information alone, our method captures a more comprehensive view. The method contains two main steps and is effective in tackling the extra challenge raised by the scarcity of medical data. First, we employ a pre-trained deep neural network as a feature extractor to capture meaningful image features. Then, a discriminant analysis is applied to reduce the dimensionality of these features, ensuring that the small number of retained features remains optimized for the classification task and strikes a balance with the clinical-variable information. We also develop a way of obtaining class activation maps for our approach, visualizing the model's focus on specific regions within the low-dimensional feature space. Thorough experimental results demonstrate improvements of our proposed method over state-of-the-art methods on tuberculosis and dermatology problems, for example. Furthermore, a comprehensive comparison with a popular dimensionality reduction technique (principal component analysis) is also conducted.
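The two-step pipeline described here (pretrained-network features, discriminant-analysis reduction, then fusion with clinical variables) can be caricatured in a few lines. The Fisher-score ranking below merely stands in for the paper's discriminant analysis, and all feature values and the clinical variable are invented for illustration:

```python
def fisher_score(feature, labels):
    """Between-class vs. within-class separation of one feature (two classes, 0/1)."""
    a = [x for x, y in zip(feature, labels) if y == 0]
    b = [x for x, y in zip(feature, labels) if y == 1]
    mean = lambda v: sum(v) / len(v)
    var = lambda v: sum((x - mean(v)) ** 2 for x in v) / len(v)
    return (mean(a) - mean(b)) ** 2 / (var(a) + var(b) + 1e-12)

def reduce_and_fuse(deep_features, labels, clinical, k=2):
    """Keep the k most discriminative deep features, then append clinical variables."""
    scores = [(fisher_score(col, labels), i) for i, col in enumerate(deep_features)]
    keep = sorted(i for _, i in sorted(scores, reverse=True)[:k])
    n = len(labels)
    return [[deep_features[i][j] for i in keep] + clinical[j] for j in range(n)]

# Toy data: 3 deep features (as columns) over 4 samples, plus one clinical variable (e.g. age)
feats = [[0.1, 0.2, 0.9, 1.0],   # discriminative
         [0.5, 0.5, 0.5, 0.5],   # uninformative
         [0.0, 0.1, 0.8, 0.9]]   # discriminative
fused = reduce_and_fuse(feats, [0, 0, 1, 1], [[34], [51], [47], [62]], k=2)
```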
Affiliation(s)
- Jiahui Liu
- School of Electronics and Computer Science, University of Southampton, Southampton, UK
- Xiaohao Cai
- School of Electronics and Computer Science, University of Southampton, Southampton, UK
- Mahesan Niranjan
- School of Electronics and Computer Science, University of Southampton, Southampton, UK
3
Chen H, Alfred M, Brown AD, Atinga A, Cohen E. Intersection of Performance, Interpretability, and Fairness in Neural Prototype Tree for Chest X-Ray Pathology Detection: Algorithm Development and Validation Study. JMIR Form Res 2024;8:e59045. PMID: 39636692; DOI: 10.2196/59045.
Abstract
BACKGROUND While deep learning classifiers have shown remarkable results in detecting chest X-ray (CXR) pathologies, their adoption in clinical settings is often hampered by a lack of transparency. To bridge this gap, this study introduces the neural prototype tree (NPT), an interpretable image classifier that combines the diagnostic capability of deep learning models with the interpretability of the decision tree for CXR pathology detection. OBJECTIVE This study aimed to investigate the utility of the NPT classifier along 3 dimensions, namely performance, interpretability, and fairness, and subsequently examined the complex interactions between these dimensions. We highlight both local and global explanations of the NPT classifier and discuss its potential utility in clinical settings. METHODS This study used CXRs from the publicly available ChestX-ray14, CheXpert, and MIMIC-CXR datasets. We trained 6 separate classifiers for each CXR pathology in all datasets: 1 baseline residual neural network (ResNet)-152 and 5 NPT classifiers with varying levels of interpretability. Performance, interpretability, and fairness were measured using the area under the receiver operating characteristic curve (ROC AUC), interpretation complexity (IC), and mean true positive rate (TPR) disparity, respectively. Linear regression analyses were performed to investigate the relationship between IC and ROC AUC, as well as between IC and mean TPR disparity. RESULTS The performance of the NPT classifier improved as the IC level increased, surpassing that of ResNet-152 at IC level 15 for the ChestX-ray14 dataset and IC level 31 for the CheXpert and MIMIC-CXR datasets. The NPT classifier at IC level 1 exhibited the highest degree of unfairness, as indicated by the mean TPR disparity.
The magnitude of unfairness, as measured by the mean TPR disparity, was more pronounced in groups differentiated by age (ChestX-ray14: 0.112, SD 0.015; CheXpert: 0.097, SD 0.010; MIMIC-CXR: 0.093, SD 0.017) than in groups differentiated by sex (ChestX-ray14: 0.054, SD 0.012; CheXpert: 0.062, SD 0.008; MIMIC-CXR: 0.066, SD 0.013). A significant positive relationship between interpretability (ie, IC level) and performance (ie, ROC AUC) was observed across all CXR pathologies (P<.001). Furthermore, linear regression analysis revealed a significant negative relationship between interpretability and fairness (ie, mean TPR disparity) across age and sex subgroups (P<.001). CONCLUSIONS By illuminating the intricate relationship between performance, interpretability, and fairness of the NPT classifier, this research offers insightful perspectives that could guide future developments in effective, interpretable, and equitable deep learning classifiers for CXR pathology detection.
Affiliation(s)
- Hongbo Chen
- Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, ON, Canada
- Myrtede Alfred
- Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, ON, Canada
- Angela Atinga
- Sunnybrook Health Sciences Centre, Toronto, ON, Canada
- Eldan Cohen
- Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, ON, Canada
4
Fu Z, Xi J, Ji Z, Zhang R, Wang J, Shi R, Pu X, Yu J, Xue F, Liu J, Wang Y, Zhong H, Feng J, Zhang M, He Y. Analysis of anterior segment in primary angle closure suspect with deep learning models. BMC Med Inform Decis Mak 2024;24:251. PMID: 39251987; PMCID: PMC11385134; DOI: 10.1186/s12911-024-02658-1.
Abstract
OBJECTIVE To analyze the anatomical characteristics of anterior chamber configuration in primary angle closure suspect (PACS) patients, and to establish an artificial intelligence (AI)-aided diagnostic system for PACS screening. METHODS A total of 1668 scans of 839 patients were included in this cross-sectional study. The subjects were divided into two groups: a PACS group and a normal group. Using anterior segment optical coherence tomography scans, the anatomical differences between the two groups were compared, and anterior segment structural features of PACS were extracted. An AI-aided diagnostic system was then constructed based on different algorithms, namely classification and regression tree (CART), random forest (RF), logistic regression (LR), VGG-16, and AlexNet. The diagnostic efficiencies of the different algorithms were evaluated and compared with those of junior physicians and experienced ophthalmologists. RESULTS RF [sensitivity (Se) = 0.84; specificity (Sp) = 0.92; positive predictive value (PPV) = 0.82; negative predictive value (NPV) = 0.95; area under the curve (AUC) = 0.90] and CART (Se = 0.76, Sp = 0.93, PPV = 0.85, NPV = 0.92, AUC = 0.90) showed better performance than LR (Se = 0.68, Sp = 0.91, PPV = 0.79, NPV = 0.90, AUC = 0.86). Among the convolutional neural networks (CNNs), AlexNet (Se = 0.83, Sp = 0.95, PPV = 0.92, NPV = 0.87, AUC = 0.85) was better than VGG-16 (Se = 0.84, Sp = 0.90, PPV = 0.85, NPV = 0.90, AUC = 0.79). The performance of the 2 CNN algorithms was better than that of 5 junior physicians, and the mean values of their diagnostic indicators were similar to those of experienced ophthalmologists. CONCLUSION PACS patients have distinct anatomical characteristics compared with healthy controls. AI models for PACS screening are reliable and powerful, and equivalent to experienced ophthalmologists.
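The Se/Sp/PPV/NPV figures reported above derive from confusion-matrix counts in the standard way; a minimal sketch with hypothetical counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # Se: true positives among diseased
        "specificity": tn / (tn + fp),  # Sp: true negatives among healthy
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts for a screen of 200 eyes; NOT the study's data
m = diagnostic_metrics(tp=42, fp=9, tn=141, fn=8)
```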
Affiliation(s)
- Ziwei Fu
- The Second Affiliated Hospital of Xi'an Medical University, Xi'an, Shaanxi, 710038, China
- Xi'an Medical University, Xi'an, Shaanxi, 710021, China
- Xi'an Key Laboratory for the Prevention and Treatment of Eye and Brain Neurological Related Diseases, Xi'an, Shaanxi, 710038, China
- Jinwei Xi
- The Second Affiliated Hospital of Xi'an Medical University, Xi'an, Shaanxi, 710038, China
- Zhi Ji
- The Second Affiliated Hospital of Xi'an Medical University, Xi'an, Shaanxi, 710038, China
- Xi'an Medical University, Xi'an, Shaanxi, 710021, China
- Ruxue Zhang
- School of Mathematics, Northwest University, Xi'an, 710127, China
- Jianping Wang
- Shaanxi Provincial People's Hospital, Xi'an, Shaanxi, 710068, China
- Rui Shi
- Shaanxi Provincial People's Hospital, Xi'an, Shaanxi, 710068, China
- Xiaoli Pu
- Xianyang First People's Hospital, Xianyang, Shaanxi Province, 712000, China
- Jingni Yu
- Xi'an People's Hospital, Xi'an, Shaanxi, 712099, China
- Fang Xue
- Xi'an Medical University, Xi'an, Shaanxi, 710021, China
- Jianrong Liu
- Xi'an People's Hospital, Xi'an, Shaanxi, 712099, China
- Yanrong Wang
- Yan'an People's Hospital, Yan'an, Shaanxi, 716099, China
- Hua Zhong
- The First Affiliated Hospital of Kunming Medical University, Kunming, Yunnan Province, 650032, China
- Jun Feng
- School of Mathematics, Northwest University, Xi'an, 710127, China
- Min Zhang
- School of Mathematics, Northwest University, Xi'an, 710127, China
- Yuan He
- The Second Affiliated Hospital of Xi'an Medical University, Xi'an, Shaanxi, 710038, China
- Xi'an Medical University, Xi'an, Shaanxi, 710021, China
- Xi'an Key Laboratory for the Prevention and Treatment of Eye and Brain Neurological Related Diseases, Xi'an, Shaanxi, 710038, China
5
Rajaraman S, Liang Z, Xue Z, Antani S. Noise-induced modality-specific pretext learning for pediatric chest X-ray image classification. Front Artif Intell 2024;7:1419638. PMID: 39301479; PMCID: PMC11410760; DOI: 10.3389/frai.2024.1419638.
Abstract
Introduction Deep learning (DL) has significantly advanced medical image classification. However, it often relies on transfer learning (TL) from models pretrained on large, generic non-medical image datasets like ImageNet. Conversely, medical images possess unique visual characteristics that such general models may not adequately capture. Methods This study examines the effectiveness of modality-specific pretext learning strengthened by image denoising and deblurring in enhancing the classification of pediatric chest X-ray (CXR) images into those exhibiting no findings, i.e., normal lungs, or with cardiopulmonary disease manifestations. Specifically, we use a VGG-16-Sharp-U-Net architecture and leverage its encoder in conjunction with a classification head to distinguish normal from abnormal pediatric CXR findings. We benchmark this performance against the traditional TL approach, viz., the VGG-16 model pretrained only on ImageNet. Measures used for performance evaluation are balanced accuracy, sensitivity, specificity, F-score, Matthews correlation coefficient (MCC), Kappa statistic, and Youden's index. Results Our findings reveal that models developed from CXR modality-specific pretext encoders substantially outperform the ImageNet-only pretrained model, viz., Baseline, and achieve significantly higher sensitivity (p < 0.05) with marked improvements in balanced accuracy, F-score, MCC, Kappa statistic, and Youden's index. A novel attention-based fuzzy ensemble of the pretext-learned models further improves performance across these metrics (Balanced accuracy: 0.6376; Sensitivity: 0.4991; F-score: 0.5102; MCC: 0.2783; Kappa: 0.2782; Youden's index: 0.2751), compared to Baseline (Balanced accuracy: 0.5654; Sensitivity: 0.1983; F-score: 0.2977; MCC: 0.1998; Kappa: 0.1599; Youden's index: 0.1327).
Discussion The superior results of CXR modality-specific pretext learning and their ensemble underscore its potential as a viable alternative to conventional ImageNet pretraining for medical image classification. Results from this study promote further exploration of medical modality-specific TL techniques in the development of DL models for various medical imaging applications.
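Two of the less common metrics reported here, the Matthews correlation coefficient and Youden's index, are simple functions of confusion-matrix counts; a sketch with invented counts (not the paper's data):

```python
import math

def mcc_and_youden(tp, fp, tn, fn):
    """Matthews correlation coefficient and Youden's index (J = Se + Sp - 1)."""
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return mcc, se + sp - 1

# Hypothetical pediatric-CXR counts; NOT the paper's data
mcc, j = mcc_and_youden(tp=60, fp=20, tn=80, fn=40)
```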
Affiliation(s)
- Sivaramakrishnan Rajaraman
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, United States
- Zhaohui Liang
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, United States
- Zhiyun Xue
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, United States
- Sameer Antani
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, United States
6
Kaur J, Kaur P. A systematic literature analysis of multi-organ cancer diagnosis using deep learning techniques. Comput Biol Med 2024;179:108910. PMID: 39032244; DOI: 10.1016/j.compbiomed.2024.108910.
Abstract
Cancer has become one of the deadliest diseases identified among individuals worldwide. The mortality rate has been increasing rapidly every year, which drives progress in the diagnostic technologies used to manage this illness. The manual procedure for segmentation and classification across a large set of data modalities can be a challenging task. Therefore, a crucial requirement is to develop computer-assisted diagnostic systems for initial cancer identification. This article offers a systematic review of deep learning approaches using various image modalities to detect multi-organ cancers from 2012 to 2023. It emphasizes the detection of the five most predominant tumors, i.e., breast, brain, lung, skin, and liver. An extensive review was carried out by collecting research and conference articles and book chapters from reputed international databases, i.e., Springer Link, IEEE Xplore, Science Direct, PubMed, and Wiley, that fulfill the criteria for quality evaluation. This systematic review summarizes the convolutional neural network model architectures and datasets used for identifying and classifying the diverse categories of cancer. The study provides an inclusive view of ensemble deep learning models that have achieved better evaluation results in classifying images as cancerous or healthy. This paper gives research scientists in the domain of medical imaging a broad understanding of which deep learning techniques perform best on which types of dataset, approaches to feature extraction, the various challenges, and anticipated solutions for complex problems. Lastly, some challenges and issues that affect this health emergency are discussed.
Affiliation(s)
- Jaspreet Kaur
- Department of Computer Engineering & Technology, Guru Nanak Dev University, Amritsar, Punjab, India
- Prabhpreet Kaur
- Department of Computer Engineering & Technology, Guru Nanak Dev University, Amritsar, Punjab, India
7
Sufian MA, Hamzi W, Sharifi T, Zaman S, Alsadder L, Lee E, Hakim A, Hamzi B. AI-Driven Thoracic X-ray Diagnostics: Transformative Transfer Learning for Clinical Validation in Pulmonary Radiography. J Pers Med 2024;14:856. PMID: 39202047; PMCID: PMC11355475; DOI: 10.3390/jpm14080856.
Abstract
Our research evaluates advanced artificial intelligence (AI) methodologies to enhance diagnostic accuracy in pulmonary radiography. Utilizing DenseNet121 and ResNet50, we analyzed 108,948 chest X-ray images from 32,717 patients; DenseNet121 achieved an area under the curve (AUC) of 94% in identifying pneumothorax and oedema. The model's performance surpassed that of expert radiologists, though further improvements are necessary for diagnosing complex conditions such as emphysema, effusion, and hernia. Clinical validation integrating Latent Dirichlet Allocation (LDA) and Named Entity Recognition (NER) demonstrated the potential of natural language processing (NLP) in clinical workflows. The NER system achieved a precision of 92% and a recall of 88%. Sentiment analysis using DistilBERT provided a nuanced understanding of clinical notes, which is essential for refining diagnostic decisions. XGBoost and SHapley Additive exPlanations (SHAP) enhanced feature extraction and model interpretability. Local Interpretable Model-agnostic Explanations (LIME) and occlusion sensitivity analysis further enriched transparency, enabling healthcare providers to trust AI predictions. These AI techniques reduced processing times by 60% and annotation errors by 75%, setting a new benchmark for efficiency in thoracic diagnostics. The research explored the transformative potential of AI in medical imaging, advancing traditional diagnostics and accelerating medical evaluations in clinical settings.
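As a quick sanity check on the NER figures quoted above (precision 92%, recall 88%), the implied F1 is their harmonic mean, roughly 90%:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Precision/recall values as quoted in the abstract
score = f1(0.92, 0.88)
```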
Affiliation(s)
- Md Abu Sufian
- IVR Low-Carbon Research Institute, Chang’an University, Xi’an 710018, China
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Wahiba Hamzi
- Laboratoire de Biotechnologie Santé et Environnement, Department of Biology, University of Blida, Blida 09000, Algeria
- Tazkera Sharifi
- Data Science Architect-Lead Technologist, Booz Allen Hamilton, Texas City, TX 78226, USA
- Sadia Zaman
- Department of Physiology, Queen Mary University, London E1 4NS, UK
- Lujain Alsadder
- Department of Physiology, Queen Mary University, London E1 4NS, UK
- Esther Lee
- Department of Physiology, Queen Mary University, London E1 4NS, UK
- Amir Hakim
- Department of Physiology, Queen Mary University, London E1 4NS, UK
- Boumediene Hamzi
- Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA 91125, USA
- The Alan Turing Institute, London NW1 2DB, UK
- Department of Mathematics, Imperial College London, London SW7 2AZ, UK
- Department of Mathematics, Gulf University for Science and Technology (GUST), Mubarak Al-Abdullah 32093, Kuwait
8
Morís DI, Moura JD, Novo J, Ortega M. Adapted generative latent diffusion models for accurate pathological analysis in chest X-ray images. Med Biol Eng Comput 2024;62:2189-2212. PMID: 38499946; PMCID: PMC11190015; DOI: 10.1007/s11517-024-03056-5.
Abstract
Respiratory diseases have a significant global impact, and assessing these conditions is crucial for improving patient outcomes. Chest X-ray is widely used for diagnosis, but expert evaluation can be challenging. Automatic computer-aided diagnosis methods can support clinicians in these tasks. Deep learning has emerged as a set of algorithms with exceptional potential for such tasks. However, these algorithms require vast amounts of data, which are often scarce in medical imaging domains. In this work, a new data augmentation methodology based on adapted generative latent diffusion models is proposed to improve the performance of automatic pathological screening in two high-impact scenarios: tuberculosis and lung nodules. The methodology is evaluated using three publicly available datasets representative of real-world settings. An ablation study identified the highest-performing image generation configuration with respect to the number of training steps. The results demonstrate that the novel set of generated images can improve the performance of screening for these two highly relevant pathologies, obtaining accuracies of 97.09% and 92.14% on the two tuberculosis screening datasets, respectively, and 82.19% for lung nodules. The proposal notably improves on previous image generation methods for data augmentation, highlighting the importance of the contribution to these critical public health challenges.
Affiliation(s)
- Daniel I Morís
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain
- Joaquim de Moura
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain
- Jorge Novo
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain
- Marcos Ortega
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain
9
Ou CY, Chen IY, Chang HT, Wei CY, Li DY, Chen YK, Chang CY. Deep Learning-Based Classification and Semantic Segmentation of Lung Tuberculosis Lesions in Chest X-ray Images. Diagnostics (Basel) 2024;14:952. PMID: 38732366; PMCID: PMC11083603; DOI: 10.3390/diagnostics14090952.
Abstract
We present a deep learning (DL) network-based approach for detecting and semantically segmenting two specific types of tuberculosis (TB) lesions in chest X-ray (CXR) images. In the proposed method, we use a basic U-Net model and its enhanced versions to detect, classify, and segment TB lesions in CXR images. The model architectures used in this study are U-Net, Attention U-Net, U-Net++, Attention U-Net++, and pyramid spatial pooling (PSP) Attention U-Net++, which are optimized and compared based on the test results of each model to find the best parameters. Finally, we use four ensemble approaches which combine the top five models to further improve lesion classification and segmentation results. In the training stage, we use data augmentation and preprocessing methods to increase the number and strength of lesion features in CXR images, respectively. Our dataset consists of 110 training, 14 validation, and 98 test images. The experimental results show that the proposed ensemble model achieves a maximum mean intersection-over-union (MIoU) of 0.70, a mean precision rate of 0.88, a mean recall rate of 0.75, a mean F1-score of 0.81, and an accuracy of 1.0, which are all better than those of only using a single-network model. The proposed method can be used by clinicians as a diagnostic tool assisting in the examination of TB lesions in CXR images.
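The mean intersection-over-union (MIoU) reported above is the average of per-mask IoU scores; a minimal sketch on toy binary masks (values invented, not the paper's data):

```python
def iou(pred, truth):
    """Intersection-over-union of two binary masks (flat lists of 0/1)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

def mean_iou(pairs):
    """Average IoU over (prediction, ground-truth) mask pairs."""
    return sum(iou(p, t) for p, t in pairs) / len(pairs)

# Toy 2x4 masks, flattened; values are illustrative only
m = mean_iou([
    ([1, 1, 0, 0, 1, 0, 0, 0], [1, 1, 1, 0, 1, 0, 0, 0]),  # IoU = 3/4
    ([0, 1, 1, 0, 0, 0, 1, 1], [0, 1, 1, 1, 0, 0, 1, 1]),  # IoU = 4/5
])
```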
Affiliation(s)
- Chih-Ying Ou
- Division of Chest Medicine, Department of Internal Medicine, National Cheng Kung University Hospital, Douliu Branch, College of Medicine, National Cheng Kung University, Douliu City 64043, Taiwan
- I-Yen Chen
- Division of Chest Medicine, Department of Internal Medicine, National Cheng Kung University Hospital, Douliu Branch, College of Medicine, National Cheng Kung University, Douliu City 64043, Taiwan
- Hsuan-Ting Chang
- Photonics and Information Laboratory, Department of Electrical Engineering, National Yunlin University of Science and Technology, Douliu City 64002, Taiwan
- Chuan-Yi Wei
- Photonics and Information Laboratory, Department of Electrical Engineering, National Yunlin University of Science and Technology, Douliu City 64002, Taiwan
- Dian-Yu Li
- Photonics and Information Laboratory, Department of Electrical Engineering, National Yunlin University of Science and Technology, Douliu City 64002, Taiwan
- Yen-Kai Chen
- Photonics and Information Laboratory, Department of Electrical Engineering, National Yunlin University of Science and Technology, Douliu City 64002, Taiwan
- Chuan-Yu Chang
- Department of Computer Science and Information Engineering, National Yunlin University of Science and Technology, Douliu City 64002, Taiwan
10
Pan CT, Kumar R, Wen ZH, Wang CH, Chang CY, Shiue YL. Improving Respiratory Infection Diagnosis with Deep Learning and Combinatorial Fusion: A Two-Stage Approach Using Chest X-ray Imaging. Diagnostics (Basel) 2024;14:500. PMID: 38472972; DOI: 10.3390/diagnostics14050500.
Abstract
Respiratory infections persist as a global health crisis, placing substantial stress on healthcare infrastructures and necessitating ongoing investigation into effective treatment modalities. This persistent challenge, including COVID-19, underscores the critical need for enhanced diagnostic methodologies to support early treatment interventions. This study introduces a two-stage data analytics framework that leverages deep learning algorithms through a strategic combinatorial fusion technique, aimed at refining the accuracy of early-stage diagnosis of such infections. Using a comprehensive dataset compiled from publicly available lung X-ray images, the research employs advanced pre-trained deep learning models for disease classification, addressing inherent data imbalances through methodical validation processes. The core contribution of this work lies in its novel application of combinatorial fusion, integrating selected models to significantly raise diagnostic precision. This approach showcases the adaptability and strength of deep learning in medical imaging and marks a significant step forward in the utilization of artificial intelligence to improve outcomes in healthcare diagnostics. The study's findings illuminate the path toward leveraging technological advancements to enhance diagnostic accuracy, ultimately contributing to the timely and effective treatment of respiratory diseases.
Affiliation(s)
- Cheng-Tang Pan: Department of Mechanical and Electro-Mechanical Engineering, National Sun Yat-sen University, Kaohsiung 804, Taiwan; Institute of Precision Medicine, National Sun Yat-sen University, Kaohsiung 804, Taiwan; Taiwan Instrument Research Institute, National Applied Research Laboratories, Hsinchu 300, Taiwan; Institute of Advanced Semiconductor Packaging and Testing, College of Semiconductor and Advanced Technology Research, National Sun Yat-sen University, Kaohsiung 804, Taiwan
- Rahul Kumar: Department of Mechanical and Electro-Mechanical Engineering, National Sun Yat-sen University, Kaohsiung 804, Taiwan
- Zhi-Hong Wen: Department of Marine Biotechnology and Research, National Sun Yat-sen University, Kaohsiung 804, Taiwan
- Chih-Hsuan Wang: Division of Nephrology and Metabolism, Department of Internal Medicine, Kaohsiung Armed Forces General Hospital, Kaohsiung 804, Taiwan; Institute of Medical Science and Technology, National Sun Yat-sen University, Kaohsiung 80424, Taiwan
- Chun-Yung Chang: Division of Nephrology and Metabolism, Department of Internal Medicine, Kaohsiung Armed Forces General Hospital, Kaohsiung 804, Taiwan; Institute of Medical Science and Technology, National Sun Yat-sen University, Kaohsiung 80424, Taiwan
- Yow-Ling Shiue: Institute of Precision Medicine, National Sun Yat-sen University, Kaohsiung 804, Taiwan; Institute of Biomedical Sciences, National Sun Yat-sen University, Kaohsiung 80424, Taiwan
11
Guo L, Xia L, Zheng Q, Zheng B, Jaeger S, Giger ML, Fuhrman J, Li H, Lure FYM, Li H, Li L. Can AI generate diagnostic reports for radiologist approval on CXR images? A multi-reader and multi-case observer performance study. J Xray Sci Technol 2024; 32:1465-1480. [PMID: 39422982] [PMCID: PMC11787813] [DOI: 10.3233/xst-240051]
Abstract
BACKGROUND Accurately detecting a variety of lung abnormalities from heterogeneous chest X-ray (CXR) images and writing radiology reports is often difficult and time-consuming. OBJECTIVE To assess the utility of a novel artificial intelligence (AI) system (MOM-ClaSeg) in enhancing the accuracy and efficiency of radiologists in detecting heterogeneous lung abnormalities through a multi-reader and multi-case (MRMC) observer performance study. METHODS Over 36,000 CXR images were retrospectively collected from 12 hospitals over 4 months and divided into an experiment group and a control group. In the control group, a double-reading method was used in which two radiologists interpreted each CXR to generate a final report, while in the experiment group one radiologist generated the final report based on an AI-generated report. RESULTS Compared with double reading, the diagnostic accuracy and sensitivity of single reading with AI increased significantly, by 1.49% and 10.95%, respectively (P < 0.001), while the difference in specificity was small (0.22%) and not statistically significant (P = 0.255). Additionally, the average image reading and diagnostic time in the experiment group was reduced by 54.70% (P < 0.001). CONCLUSION This MRMC study demonstrates that MOM-ClaSeg can potentially serve as the first reader to generate initial diagnostic reports, with a radiologist only reviewing and making minor modifications (if needed) to arrive at the final decision. It also shows that single reading with AI can achieve higher diagnostic accuracy and efficiency than double reading.
Affiliation(s)
- Lin Guo: Shenzhen Zhiying Medical Imaging, Shenzhen, Guangdong, China
- Li Xia: Shenzhen Zhiying Medical Imaging, Shenzhen, Guangdong, China
- Qiuting Zheng: Department of Medical Imaging, Shenzhen Center for Chronic Disease Control, Shenzhen, Guangdong, China
- Bin Zheng: MS Technologies Corp, Rockville, Maryland, USA
- Stefan Jaeger: National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Jordan Fuhrman: Department of Radiology, University of Chicago, Chicago, IL, USA
- Hui Li: Department of Radiology, University of Chicago, Chicago, IL, USA
- Hongjun Li: Department of Radiology, Beijing YouAn Hospital, Capital Medical University, Beijing, China
- Li Li: Department of Radiology, Beijing YouAn Hospital, Capital Medical University, Beijing, China
12
Zia T, Wahab A, Windridge D, Tirunagari S, Bhatti NB. Visual attribution using Adversarial Latent Transformations. Comput Biol Med 2023; 166:107521. [PMID: 37778213] [DOI: 10.1016/j.compbiomed.2023.107521]
Abstract
The ability to accurately locate all indicators of disease within medical images is vital for comprehending the effects of the disease, as well as for weakly supervised segmentation and localization of the diagnostic correlators of disease. Existing methods either use classifiers to make predictions based on class-salient regions or use adversarial-learning-based image-to-image translation to capture such disease effects. However, the former does not capture all relevant features for visual attribution (VA) and is prone to data biases; the latter can generate adversarial (misleading) and inefficient solutions when operating on pixel values. To address these issues, we propose a novel approach, Visual Attribution using Adversarial Latent Transformations (VA2LT). Our method uses adversarial learning to generate counterfactual (CF) normal images from abnormal images by finding and modifying discrepancies in the latent space. We use cycle consistency between the query and CF latent representations to guide our training. We evaluate our method on three datasets: a synthetic dataset, the Alzheimer's Disease Neuroimaging Initiative dataset, and the BraTS dataset. Our method outperforms baseline and related methods on all datasets.
Affiliation(s)
- Tehseen Zia: COMSATS University Islamabad, Pakistan; Medical Imaging and Diagnostics Lab, National Center of Artificial Intelligence, Pakistan
- Abdul Wahab: COMSATS University Islamabad, Pakistan; Medical Imaging and Diagnostics Lab, National Center of Artificial Intelligence, Pakistan
13
Devasia J, Goswami H, Lakshminarayanan S, Rajaram M, Adithan S. Observer Performance Evaluation of a Deep Learning Model for Multilabel Classification of Active Tuberculosis Lung Zone-Wise Manifestations. Cureus 2023; 15:e44954. [PMID: 37818499] [PMCID: PMC10561790] [DOI: 10.7759/cureus.44954]
Abstract
Background Chest X-rays (CXRs) are widely used for cost-effective screening of active pulmonary tuberculosis despite their limitations in sensitivity and specificity when interpreted by clinicians or radiologists. To address this issue, computer-aided detection (CAD) algorithms, particularly convolution-based deep learning architectures, have been developed to automate the analysis of radiographic imaging. Deep learning algorithms have shown promise in accurately classifying lung abnormalities using chest X-ray images. In this study, we utilized the EfficientNet B4 model, pre-trained on ImageNet with 380x380 input dimensions, using its weights for transfer learning, and modified it with a series of components including global average pooling, batch normalization, dropout, and a classifier with 12 image-wise and 44 segment-wise lung zone evaluation classes using sigmoid activation. Objectives To assess the clinical usefulness of our previously created EfficientNet B4 model in identifying lung zone-specific abnormalities related to active tuberculosis through an observer performance test involving a skilled clinician operating in tuberculosis-specific environments. Methods The ground truth was established by a radiologist who examined all sample CXRs to identify lung zone-wise abnormalities. An expert clinician working in tuberculosis-specific settings independently reviewed the same CXRs, blinded to the ground truth. Simultaneously, the CXRs were classified using the EfficientNet B4 model. The clinician's assessments were then compared with the model's predictions, and the agreement between the two was measured using the kappa coefficient, evaluating the model's performance in classifying active tuberculosis manifestations across lung zones. Results Strong agreement (kappa ≥0.81) was seen for the lung zone-wise abnormalities of pneumothorax, mediastinal shift, emphysema, fibrosis, calcifications, pleural effusion, and cavity. Substantial agreement (kappa = 0.61-0.80) was seen for cavity, mediastinal shift, volume loss, and collapsed lungs. The kappa score for lung zone-wise abnormalities was moderate (0.41-0.60) in 39% of cases. In image-wise agreement, the EfficientNet B4 model's performance ranged from moderate to almost perfect across categories, while in lung zone-wise agreement it varied from fair to almost perfect. The results show strong agreement between the EfficientNet B4 model and the human reader in detecting lung zone-wise and image-wise manifestations. Conclusion The EfficientNet B4 model's ability to detect these abnormalities can aid clinicians in primary care settings in screening and triaging tuberculosis where resources are constrained or overburdened.
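The kappa coefficient used above to quantify clinician-model agreement can be computed directly from paired labels. The sketch below uses made-up lung-zone labels, not the study's data; Cohen's formula itself is standard.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters beyond chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal frequencies
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1.0 - expected)

# Hypothetical per-zone labels: 1 = abnormality present, 0 = absent
clinician = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
model     = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
kappa = cohens_kappa(clinician, model)  # ~0.58, i.e. "moderate" agreement
```

A value near 0.58 falls in the moderate band (0.41-0.60) used in the abstract's interpretation scale.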
Affiliation(s)
- James Devasia: Preventive Medicine, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, IND
- Subitha Lakshminarayanan: Preventive Medicine, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, IND
- Manju Rajaram: Pulmonary Medicine, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, IND
- Subathra Adithan: Radiodiagnosis, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, IND
14
Yang Y, Xia L, Liu P, Yang F, Wu Y, Pan H, Hou D, Liu N, Lu S. A prospective multicenter clinical research study validating the effectiveness and safety of a chest X-ray-based pulmonary tuberculosis screening software JF CXR-1 built on a convolutional neural network algorithm. Front Med (Lausanne) 2023; 10:1195451. [PMID: 37649977] [PMCID: PMC10463041] [DOI: 10.3389/fmed.2023.1195451]
Abstract
Background Chest radiography (chest X-ray or CXR) plays an important role in the early detection of active pulmonary tuberculosis (TB). In areas with a high TB burden that require urgent screening, there is often a shortage of radiologists available to interpret X-ray results. Computer-aided detection (CAD) software employing artificial intelligence (AI) may have the potential to solve this problem. Objective We validated the effectiveness and safety of pulmonary tuberculosis imaging screening software based on a convolutional neural network algorithm. Methods We conducted prospective multicenter clinical research to validate the performance of the pulmonary tuberculosis imaging screening software (JF CXR-1). Volunteers over the age of 15 years, with or without suspected pulmonary tuberculosis, were recruited for CXR photography. The software reported a probability score of TB for each participant. The results were compared with those reported by radiologists. We measured sensitivity, specificity, consistency rate, and the area under the receiver operating characteristic curve (AUC) for the diagnosis of tuberculosis. Adverse events (AEs) and severe adverse events (SAEs) were also evaluated. Results The clinical research was conducted in six general infectious disease hospitals across China. A total of 1,165 participants were enrolled, of whom 1,161 were included in the full analysis set (FAS). Men accounted for 60.0% (697/1,161). Compared to the results from the radiologist panel, the software showed a sensitivity of 94.2% (95% CI: 92.0-95.8%) and a specificity of 91.2% (95% CI: 88.5-93.2%). The consistency rate was 92.7% (91.1-94.1%), with a kappa value of 0.854 (P < 0.001). The AUC was 0.98. In the safety set (SS), which consisted of 1,161 participants, 0.3% (3/1,161) had AEs that were not related to the software, and no severe AEs were observed.
Conclusion The software for tuberculosis screening based on a convolutional neural network algorithm is effective and safe. It is a potential candidate for solving tuberculosis screening problems in high-TB-burden areas that lack radiologists.
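The headline metrics above all derive from a 2x2 table of software calls against the radiologist reference. A minimal sketch follows, with counts chosen only for illustration (they are not the trial's raw numbers, though they were picked to land in a similar range):

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and consistency (overall agreement) rate
    from a 2x2 table of software results vs. the radiologist reference."""
    sensitivity = tp / (tp + fn)        # abnormal cases correctly flagged
    specificity = tn / (tn + fp)        # normal cases correctly cleared
    consistency = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, consistency

# Hypothetical counts for a 1,161-participant cohort
sens, spec, cons = screening_metrics(tp=470, fp=55, fn=29, tn=607)
# sens ~0.942, spec ~0.917, cons ~0.928
```

With real data, confidence intervals for each proportion would be added (e.g., Wilson intervals), matching the 95% CIs reported above.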
Affiliation(s)
- Yang Yang: Department of Tuberculosis, Shanghai Public Health Clinical Center Affiliated to Fudan University, Shanghai, China
- Lu Xia: Department of Pulmonary Medicine, National Clinical Research Center for Infectious Disease, Shenzhen Third People's Hospital/The Second Affiliated Hospital, School of Medicine, Southern University of Science and Technology, Shenzhen, Guangdong, China
- Ping Liu: Department of Tuberculosis, Shanghai Public Health Clinical Center Affiliated to Fudan University, Shanghai, China
- Fuping Yang: Department of Tuberculosis, Chongqing Public Health Medical Center, Southwest University, Chongqing, China
- Yuqing Wu: Department of Tuberculosis, Jiangxi Chest Hospital, Nanchang, Jiangxi, China
- Hongqiu Pan: Department of Tuberculosis, The Third Hospital of Zhenjiang, Zhenjiang, Jiangsu, China
- Dailun Hou: Department of Radiology, Beijing Chest Hospital, Capital Medical University, Beijing, China
- Ning Liu: Department of Tuberculosis, Hebei Chest Hospital, Shijiazhuang, Hebei, China
- Shuihua Lu: Department of Pulmonary Medicine, National Clinical Research Center for Infectious Disease, Shenzhen Third People's Hospital/The Second Affiliated Hospital, School of Medicine, Southern University of Science and Technology, Shenzhen, Guangdong, China
15
Feyisa DW, Ayano YM, Debelee TG, Schwenker F. Weak Localization of Radiographic Manifestations in Pulmonary Tuberculosis from Chest X-ray: A Systematic Review. Sensors (Basel) 2023; 23:6781. [PMID: 37571564] [PMCID: PMC10422452] [DOI: 10.3390/s23156781]
Abstract
Pulmonary tuberculosis (PTB) is a bacterial infection that affects the lung. PTB remains one of the infectious diseases with the highest global mortality. Chest radiography is a technique often employed in the diagnosis of PTB. Radiologists identify the severity and stage of PTB by inspecting radiographic features in the patient's chest X-ray (CXR). The most common radiographic features seen on CXRs include cavitation, consolidation, masses, pleural effusion, calcification, and nodules. Identifying these CXR features helps physicians diagnose a patient. However, identifying these radiographic features for intricate disorders is challenging, and accuracy depends on the radiologist's experience and level of expertise. Researchers have therefore proposed deep learning (DL) techniques to detect and mark areas of tuberculosis infection in CXRs. DL models have been proposed in the literature because of their inherent capacity to detect diseases and segment the manifestation regions from medical images. However, fully supervised semantic segmentation requires many pixel-by-pixel labeled images, and annotating such a large amount of data poses several challenges: it requires a significant amount of time, hiring trained physicians is costly, and the subjectivity of medical data makes standardized annotation difficult. As a result, there is increasing interest in weak localization techniques. In this review, we identify methods employed in the weakly supervised segmentation and localization of radiographic manifestations of pulmonary tuberculosis from chest X-rays. First, we identify the most commonly used public chest X-ray datasets for tuberculosis identification. Following that, we discuss the approaches for weakly localizing tuberculosis radiographic manifestations in chest X-rays. Weakly supervised localization of PTB can highlight the region of the chest X-ray image that contributed the most to the DL model's classification output and help pinpoint the diseased area. Finally, we discuss the limitations and challenges of weakly supervised techniques in localizing TB manifestation regions in chest X-ray images.
Affiliation(s)
- Degaga Wolde Feyisa: Ethiopian Artificial Intelligence Institute, Addis Ababa P.O. Box 40782, Ethiopia
- Yehualashet Megersa Ayano: Ethiopian Artificial Intelligence Institute, Addis Ababa P.O. Box 40782, Ethiopia
- Taye Girma Debelee: Ethiopian Artificial Intelligence Institute, Addis Ababa P.O. Box 40782, Ethiopia; Department of Electrical and Computer Engineering, Addis Ababa Science and Technology University, Addis Ababa P.O. Box 120611, Ethiopia
- Friedhelm Schwenker: Institute of Neural Information Processing, Ulm University, 89069 Ulm, Germany
16
Abraham B, Mohan J, John SM, Ramachandran S. Computer-aided detection of tuberculosis from X-ray images using CNN and PatternNet classifier. J Xray Sci Technol 2023:XST230028. [PMID: 37182860] [DOI: 10.3233/xst-230028]
Abstract
BACKGROUND Tuberculosis (TB) is a highly infectious disease that mainly affects the human lungs. The gold standard for TB diagnosis is Xpert Mycobacterium tuberculosis/resistance to rifampicin (MTB/RIF) testing. X-ray, a relatively inexpensive and widely used imaging modality, can be employed as an alternative for early diagnosis of the disease. Computer-aided techniques can be used to assist radiologists in interpreting X-ray images, which can improve the ease and accuracy of diagnosis. OBJECTIVE To develop a computer-aided technique for the diagnosis of TB from X-ray images using deep learning techniques. METHODS This research paper presents a novel approach for TB diagnosis from X-rays using deep learning methods. The proposed method uses an ensemble of two pre-trained neural networks, namely EfficientnetB0 and Densenet201, for feature extraction. The features extracted using two CNNs are expected to generate more accurate and representative features than a single CNN. A custom-built artificial neural network (ANN) called PatternNet with two hidden layers is utilized to classify the extracted features. RESULTS The effectiveness of the proposed method was assessed on two publicly accessible datasets, namely the Montgomery and Shenzhen datasets. The Montgomery dataset comprises 138 X-ray images, while the Shenzhen dataset has 662 X-ray images. The method was further evaluated after combining both datasets. The method performed exceptionally well on all three datasets, achieving high Area Under the Curve (AUC) scores of 0.9978, 0.9836, and 0.9914, respectively, using a 10-fold cross-validation technique. CONCLUSION The experiments performed in this study prove the effectiveness of features extracted using EfficientnetB0 and Densenet201 in combination with the PatternNet classifier in the diagnosis of tuberculosis from X-ray images.
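The described pipeline (features extracted by two pretrained CNNs, concatenated, then classified by a small two-hidden-layer network) can be sketched as follows. The feature extractor here is a random stand-in for EfficientNetB0/DenseNet201 outputs, and the layer sizes and weights are illustrative, untrained assumptions, not the paper's PatternNet.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(image, backbone):
    """Placeholder for a pretrained-CNN feature extractor
    (EfficientNetB0 -> 1280-d, DenseNet201 -> 1920-d in typical setups)."""
    dim = {"efficientnetb0": 1280, "densenet201": 1920}[backbone]
    return rng.standard_normal(dim)

def relu(x):
    return np.maximum(0.0, x)

def two_hidden_layer_net(x, w1, w2, w3):
    """PatternNet-style classifier: two hidden layers, sigmoid output."""
    h1 = relu(w1 @ x)
    h2 = relu(w2 @ h1)
    logit = w3 @ h2
    return 1.0 / (1.0 + np.exp(-logit))  # P(TB)

image = None  # stand-in; a real pipeline would pass a preprocessed CXR
features = np.concatenate([extract_features(image, "efficientnetb0"),
                           extract_features(image, "densenet201")])
# Randomly initialized weights, for shape-checking only
w1 = rng.standard_normal((64, features.size)) * 0.01
w2 = rng.standard_normal((16, 64)) * 0.01
w3 = rng.standard_normal(16) * 0.01
p_tb = two_hidden_layer_net(features, w1, w2, w3)
```

The point of the concatenation is that the 1280-d and 1920-d vectors carry complementary views of the same image, which the small classifier can then weigh jointly.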
Affiliation(s)
- Bejoy Abraham: Department of Computer Science and Engineering, College of Engineering Muttathara, Thiruvananthapuram, Kerala, India
- Jesna Mohan: Department of Computer Science and Engineering, Mar Baselios College of Engineering and Technology, Thiruvananthapuram, Kerala, India
- Shinu Mathew John: Department of Computer Science and Engineering, St. Thomas College of Engineering and Technology, Kannur, Kerala, India
- Sivakumar Ramachandran: Department of Electronics and Communication Engineering, Government Engineering College Wayanad, Kerala, India
17
Kumar VD, Rajesh P, Geman O, Craciun MD, Arif M, Filip R. “Quo Vadis Diagnosis”: Application of Informatics in Early Detection of Pneumothorax. Diagnostics (Basel) 2023; 13:1305. [PMID: 37046523] [PMCID: PMC10093601] [DOI: 10.3390/diagnostics13071305]
Abstract
A pneumothorax is a condition that occurs when air enters the pleural space (the area between the lung and chest wall), causing the lung to collapse and making it difficult to breathe. This can happen spontaneously or as a result of an injury. The symptoms of a pneumothorax may include chest pain, shortness of breath, and rapid breathing. Although chest X-rays are commonly used to detect a pneumothorax, locating the affected area visually in X-ray images can be time-consuming and prone to errors. Existing computer technology for detecting this disease from X-rays is limited by three major issues: class disparity, which causes overfitting; difficulty in detecting dark portions of the images; and vanishing gradients. To address these issues, we propose an ensemble deep learning model called PneumoNet, which uses synthetic images from data augmentation to address the class disparity issue and a segmentation system to identify dark areas. Finally, the vanishing-gradient issue, in which gradients become very small during backpropagation, is addressed by hyperparameter optimization techniques that prevent the model from converging slowly and performing poorly. Our model achieved an accuracy of 98.41% on the Society for Imaging Informatics in Medicine pneumothorax dataset, outperforming other deep learning models and reducing the computational complexity of detecting the disease.
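The class-disparity remedy described (augmenting the minority class with synthetic images until the classes balance) can be sketched generically; a simple horizontal flip stands in for the paper's augmentation pipeline, and all names and counts are illustrative.

```python
import random

random.seed(0)

def flip_horizontal(image):
    """Toy augmentation: mirror each row of a 2-D image (list of lists)."""
    return [row[::-1] for row in image]

def balance_by_augmentation(majority, minority, augment=flip_horizontal):
    """Oversample the minority class with augmented copies
    until both classes have equal counts."""
    augmented = list(minority)
    while len(augmented) < len(majority):
        src = random.choice(minority)
        augmented.append(augment(src))
    return majority, augmented

normal = [[[0, 1], [1, 0]] for _ in range(8)]        # 8 majority samples
pneumothorax = [[[1, 0], [0, 1]] for _ in range(3)]  # 3 minority samples
normal, pneumothorax = balance_by_augmentation(normal, pneumothorax)
# both classes now contain 8 samples
```

Training on the balanced set reduces the classifier's incentive to overfit to the majority class, which is the overfitting mechanism the abstract attributes to class disparity.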
Affiliation(s)
- V. Dhilip Kumar: School of Computing, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai 600062, India
- P. Rajesh: School of Computing, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai 600062, India
- Oana Geman: Department of Computers, Electronics and Automation, Faculty of Electrical Engineering and Computer Science, Stefan cel Mare University of Suceava, 720229 Suceava, Romania
- Maria Daniela Craciun: Interdisciplinary Research Centre in Motricity Sciences and Human Health, Ştefan cel Mare University of Suceava, 720229 Suceava, Romania
- Muhammad Arif: Department of Computer Science, Superior University, Lahore 54000, Pakistan
- Roxana Filip: Faculty of Medicine and Biological Sciences, Stefan cel Mare University of Suceava, 720229 Suceava, Romania; Suceava Emergency County Hospital, 720224 Suceava, Romania
18
Rehman A, Khan A, Fatima G, Naz S, Razzak I. Review on chest pathogies detection systems using deep learning techniques. Artif Intell Rev 2023; 56:1-47. [PMID: 37362896] [PMCID: PMC10027283] [DOI: 10.1007/s10462-023-10457-9]
Abstract
Chest radiography is the standard and most affordable way to diagnose, analyze, and examine different thoracic and chest diseases. Typically, the radiograph is examined by an expert radiologist or physician to decide about a particular anomaly, if one exists. Moreover, computer-aided methods are used to assist radiologists and make the analysis process accurate, fast, and more automated. A tremendous improvement in automatic chest pathology detection and analysis can be observed with the emergence of deep learning. This survey aims to review, technically evaluate, and synthesize the different computer-aided chest pathology detection systems. The state of the art in single- and multi-pathology detection systems published in the last five years is thoroughly discussed. A taxonomy of image acquisition, dataset preprocessing, feature extraction, and deep learning models is presented. The mathematical concepts related to feature extraction model architectures are discussed. Moreover, the different articles are compared based on their contributions, datasets, methods used, and results achieved. The article ends with the main findings, current trends, challenges, and future recommendations.
Affiliation(s)
- Arshia Rehman: COMSATS University Islamabad, Abbottabad Campus, Abbottabad, Pakistan
- Ahmad Khan: COMSATS University Islamabad, Abbottabad Campus, Abbottabad, Pakistan
- Gohar Fatima: The Islamia University of Bahawalpur, Bahawal Nagar Campus, Bahawal Nagar, Pakistan
- Saeeda Naz: Govt Girls Post Graduate College No. 1, Abbottabad, Pakistan
- Imran Razzak: School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
19
Cifci MA. A Deep Learning-Based Framework for Uncertainty Quantification in Medical Imaging Using the DropWeak Technique: An Empirical Study with Baresnet. Diagnostics (Basel) 2023; 13:800. [PMID: 36832288] [PMCID: PMC9955446] [DOI: 10.3390/diagnostics13040800]
Abstract
Lung cancer is a leading cause of cancer-related deaths globally, and early detection is crucial for improving patient survival rates. Deep learning (DL) has shown promise in the medical field, but its accuracy must be evaluated, particularly in the context of lung cancer classification. In this study, we conducted uncertainty analysis on various frequently used DL architectures, including Baresnet, to assess the uncertainties in their classification results. The study presents an automatic tumor classification system for lung cancer based on CT images, which achieves a classification accuracy of 97.19% together with an uncertainty quantification. The results demonstrate the potential of deep learning in lung cancer classification and highlight the importance of uncertainty quantification in improving the reliability of classification results. The study's novelty lies in its incorporation of uncertainty quantification into deep learning for lung cancer classification, which can lead to more reliable and accurate diagnoses in clinical settings.
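The paper's DropWeak technique is not spelled out in the abstract, but the general idea of quantifying classification uncertainty from repeated stochastic forward passes (as in Monte Carlo dropout) can be sketched. The noise model and probabilities below are illustrative stand-ins, not Baresnet outputs.

```python
import math
import random

random.seed(1)

def predictive_entropy(prob_samples):
    """Entropy of the mean predicted class distribution over
    stochastic forward passes: higher = more uncertain."""
    k = len(prob_samples[0])
    mean = [sum(s[i] for s in prob_samples) / len(prob_samples) for i in range(k)]
    return -sum(p * math.log(p) for p in mean if p > 0)

def stochastic_pass(base_probs, noise=0.05):
    """Stand-in for one dropout-perturbed forward pass."""
    noisy = [max(1e-6, p + random.uniform(-noise, noise)) for p in base_probs]
    total = sum(noisy)
    return [p / total for p in noisy]

confident = [stochastic_pass([0.95, 0.05]) for _ in range(100)]
uncertain = [stochastic_pass([0.55, 0.45]) for _ in range(100)]
h_confident = predictive_entropy(confident)
h_uncertain = predictive_entropy(uncertain)
# h_uncertain > h_confident: the borderline case carries more uncertainty
```

In a clinical pipeline, cases whose predictive entropy exceeds a threshold would be flagged for human review rather than auto-classified.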
Affiliation(s)
- Mehmet Akif Cifci: The Institute of Computer Technology, TU Wien, 1040 Vienna, Austria; Department of Computer Engineering, Bandirma Onyedi Eylul University, 10200 Balikesir, Turkey; Department of Informatics, Klaipeda University, 92294 Klaipeda, Lithuania
20
A deep learning-based framework for automatic detection of drug resistance in tuberculosis patients. Egypt Inform J 2023. [DOI: 10.1016/j.eij.2023.01.002]
21
Singla S, Eslami M, Pollack B, Wallace S, Batmanghelich K. Explaining the black-box smoothly-A counterfactual approach. Med Image Anal 2023; 84:102721. [PMID: 36571975] [PMCID: PMC9835100] [DOI: 10.1016/j.media.2022.102721]
Abstract
We propose a BlackBox Counterfactual Explainer, designed to explain image classification models for medical applications. Classical approaches (e.g., saliency maps) that assess feature importance do not explain how imaging features in important anatomical regions are relevant to the classification decision. Such reasoning is crucial for transparent decision-making in healthcare applications. Our framework explains the decision for a target class by gradually exaggerating the semantic effect of the class in a query image. We adopted a Generative Adversarial Network (GAN) to generate a progressive set of perturbations to a query image, such that the classification decision changes from its original class to its negation. Our proposed loss function preserves essential details (e.g., support devices) in the generated images. We used counterfactual explanations from our framework to audit a classifier trained on a chest X-ray dataset with multiple labels. Clinical evaluation of model explanations is a challenging task. We proposed clinically relevant quantitative metrics, such as the cardiothoracic ratio and the score of a healthy costophrenic recess, to evaluate our explanations. We used these metrics to quantify the counterfactual changes between the populations with negative and positive decisions for a diagnosis by the given classifier. We conducted a human-grounded experiment with diagnostic radiology residents to compare different styles of explanations (no explanation, saliency map, cycleGAN explanation, and our counterfactual explanation) by evaluating different aspects of explanations: (1) understandability, (2) classifier's decision justification, (3) visual quality, (4) identity preservation, and (5) overall helpfulness of an explanation to the users. Our results show that our counterfactual explanation was the only explanation method that significantly improved the users' understanding of the classifier's decision compared to the no-explanation baseline. Our metrics established a benchmark for evaluating model explanation methods in medical images. Our explanations revealed that the classifier relied on clinically relevant radiographic features for its diagnostic decisions, thus making its decision-making process more transparent to the end-user.
Affiliation(s)
- Sumedha Singla: Computer Science Department, University of Pittsburgh, Pittsburgh, PA 15206, USA
- Motahhare Eslami: School of Computer Science, Human-Computer Interaction Institute, Carnegie Mellon University, USA
- Brian Pollack: Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, PA 15206, USA
- Stephen Wallace: University of Pittsburgh Medical School, Pittsburgh, PA 15206, USA
- Kayhan Batmanghelich: Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, PA 15206, USA
22
Zhang D, Wang H, Deng J, Wang T, Shen C, Feng J. CAMS-Net: An attention-guided feature selection network for rib segmentation in chest X-rays. Comput Biol Med 2023. [DOI: 10.1016/j.compbiomed.2023.106702] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/24/2023]
23
Deep learning classification of active tuberculosis lung zones wise manifestations using chest X-rays: a multi label approach. Sci Rep 2023; 13:887. [PMID: 36650270 PMCID: PMC9845381 DOI: 10.1038/s41598-023-28079-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2022] [Accepted: 01/12/2023] [Indexed: 01/19/2023] Open
Abstract
Chest X-rays are the most economically viable diagnostic imaging test for active pulmonary tuberculosis screening despite the high sensitivity and low specificity when interpreted by clinicians or radiologists. Computer-aided detection (CAD) algorithms, especially convolution-based deep learning architectures, have been proposed to facilitate the automation of radiography imaging modalities. Deep learning algorithms have found success in classifying various abnormalities in the lung using chest X-rays. We fine-tuned, validated, and tested the EfficientNetB4 architecture and utilized the transfer learning methodology in a multilabel approach to detect lung-zone-wise and image-wise manifestations of active pulmonary tuberculosis using chest X-rays. We used the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity along with 95% confidence intervals as model evaluation metrics. We also utilized the visualisation capabilities of convolutional neural networks (CNNs), applying Gradient-weighted Class Activation Mapping (Grad-CAM) as a post hoc attention method to investigate the model, visualise tuberculosis abnormalities, and discuss them from radiological perspectives. The trained EfficientNetB4 network achieved remarkable AUC, sensitivity, and specificity for various pulmonary tuberculosis manifestations in an intramural test set and an external test set from a different geographical region. The Grad-CAM visualisations and their ability to localize the abnormalities can aid clinicians at primary care settings in the screening and triaging of tuberculosis where resources are constrained or overburdened.
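As a rough illustration (not the authors' code), the sensitivity and specificity reported with 95% confidence intervals above can be computed from a 2×2 confusion table with a normal-approximation interval; all counts below are hypothetical:

```python
import math

def sens_spec_ci(tp, fn, tn, fp, z=1.96):
    """Sensitivity and specificity with normal-approximation 95% CIs."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)

    def ci(p, n):
        # half-width of the normal-approximation interval for a proportion
        half = z * math.sqrt(p * (1 - p) / n)
        return (max(0.0, p - half), min(1.0, p + half))

    return sens, ci(sens, tp + fn), spec, ci(spec, tn + fp)

# hypothetical counts: 90 true positives, 10 false negatives, 80 true negatives, 20 false positives
sens, sens_ci, spec, spec_ci = sens_spec_ci(tp=90, fn=10, tn=80, fp=20)
print(f"sensitivity={sens:.3f} CI={sens_ci}")
print(f"specificity={spec:.3f} CI={spec_ci}")
```

The normal approximation is the simplest choice; published studies often use exact (Clopper-Pearson) or Wilson intervals instead, which behave better near 0% and 100%.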
24
Akhter Y, Singh R, Vatsa M. AI-based radiodiagnosis using chest X-rays: A review. Front Big Data 2023; 6:1120989. [PMID: 37091458 PMCID: PMC10116151 DOI: 10.3389/fdata.2023.1120989] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2022] [Accepted: 01/06/2023] [Indexed: 04/25/2023] Open
Abstract
Chest radiography, or chest X-ray (CXR), is a common, fast, non-invasive, relatively cheap radiological examination method in medical sciences. CXRs can aid in diagnosing many lung ailments such as pneumonia, tuberculosis, pneumoconiosis, COVID-19, and lung cancer. Apart from other radiological examinations, 2 billion CXRs are performed worldwide every year. However, the availability of the workforce to handle this workload in hospitals is a challenge, particularly in developing and low-income nations. Recent advances in AI, particularly in computer vision, have drawn attention to solving challenging medical image analysis problems. Healthcare is one of the areas where AI/ML-based assistive screening/diagnostic aids can play a crucial part in social welfare. However, it faces multiple challenges, such as small sample spaces, data privacy, poor-quality samples, adversarial attacks, and, most importantly, model interpretability for reliance on machine intelligence. This paper provides a structured review of CXR-based analysis for different tasks and lung diseases and, in particular, the challenges faced by AI/ML-based systems for diagnosis. Further, we provide an overview of existing datasets, evaluation metrics for different tasks, and patents issued. We also present key challenges and open problems in this research domain.
25
ALCNN: Attention based lightweight convolutional neural network for pneumothorax detection in chest X-rays. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104126] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
26
Zhan Y, Wang Y, Zhang W, Ying B, Wang C. Diagnostic Accuracy of the Artificial Intelligence Methods in Medical Imaging for Pulmonary Tuberculosis: A Systematic Review and Meta-Analysis. J Clin Med 2022; 12:303. [PMID: 36615102 PMCID: PMC9820940 DOI: 10.3390/jcm12010303] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2022] [Revised: 12/21/2022] [Accepted: 12/24/2022] [Indexed: 01/03/2023] Open
Abstract
Tuberculosis (TB) remains one of the leading causes of death among infectious diseases worldwide. Early screening and diagnosis of pulmonary tuberculosis (PTB) is crucial in TB control and tends to benefit from artificial intelligence. Here, we aimed to evaluate the diagnostic efficacy of a variety of artificial intelligence methods in medical imaging for PTB. We searched MEDLINE and Embase with the OVID platform to identify trials published up to November 2022 that evaluated the effectiveness of artificial-intelligence-based software in medical imaging of patients with PTB. After data extraction, the quality of studies was assessed using the quality assessment of diagnostic accuracy studies 2 (QUADAS-2) tool. Pooled sensitivity and specificity were estimated using a bivariate random-effects model. In total, 3987 references were initially identified and 61 studies were finally included, covering a wide range of 124,959 individuals. The pooled sensitivity and specificity were 91% (95% confidence interval (CI), 89-93%) and 65% (54-75%), respectively, in clinical trials, and 94% (89-96%) and 95% (91-97%), respectively, in model-development studies. These findings demonstrate that artificial-intelligence-based software could serve as an accurate tool to diagnose PTB in medical imaging. However, standardized reporting guidance regarding AI-specific trials and multicenter clinical trials is urgently needed to truly transform this cutting-edge technology into clinical practice.
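The review pools accuracy estimates with a bivariate random-effects model; as a much simpler sketch of the underlying idea (not the authors' method), a fixed-effect inverse-variance pooling of per-study sensitivities on the logit scale looks like this, with hypothetical study values:

```python
import math

def pool_logit_fixed_effect(props, ns):
    """Fixed-effect inverse-variance pooling of proportions on the logit scale.
    Simplification: the cited review used a bivariate random-effects model,
    which additionally models between-study heterogeneity and the
    sensitivity-specificity correlation."""
    num = den = 0.0
    for p, n in zip(props, ns):
        logit = math.log(p / (1 - p))
        var = 1.0 / (n * p * (1 - p))  # delta-method variance of the logit
        w = 1.0 / var                  # inverse-variance weight
        num += w * logit
        den += w
    pooled_logit = num / den
    return 1.0 / (1.0 + math.exp(-pooled_logit))  # back-transform to a proportion

# hypothetical per-study sensitivities and sample sizes
pooled = pool_logit_fixed_effect([0.91, 0.94, 0.89], [200, 150, 300])
print(f"pooled sensitivity ≈ {pooled:.3f}")
```

The pooled estimate necessarily lies between the smallest and largest study values, weighted toward the larger, more precise studies.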
Affiliation(s)
- Yuejuan Zhan
- Department of Respiratory and Critical Care Medicine, West China Medical School/West China Hospital, Sichuan University, Chengdu 610041, China
- Yuqi Wang
- Department of Respiratory and Critical Care Medicine, West China Medical School/West China Hospital, Sichuan University, Chengdu 610041, China
- Wendi Zhang
- Department of Respiratory and Critical Care Medicine, West China Medical School/West China Hospital, Sichuan University, Chengdu 610041, China
- Binwu Ying
- Department of Laboratory Medicine, West China Medical School/West China Hospital, Sichuan University, Chengdu 610041, China
- Chengdi Wang
- Department of Respiratory and Critical Care Medicine, West China Medical School/West China Hospital, Sichuan University, Chengdu 610041, China
27
Morís DI, de Moura J, Novo J, Ortega M. Unsupervised contrastive unpaired image generation approach for improving tuberculosis screening using chest X-ray images. Pattern Recognit Lett 2022. [DOI: 10.1016/j.patrec.2022.10.026] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
28
Kotei E, Thirunavukarasu R. Ensemble Technique Coupled with Deep Transfer Learning Framework for Automatic Detection of Tuberculosis from Chest X-ray Radiographs. Healthcare (Basel) 2022; 10:2335. [PMID: 36421659 PMCID: PMC9690876 DOI: 10.3390/healthcare10112335] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2022] [Revised: 11/14/2022] [Accepted: 11/17/2022] [Indexed: 01/28/2024] Open
Abstract
Tuberculosis (TB) is an infectious disease affecting the human lungs and is currently ranked the 13th leading cause of death globally. Due to advancements in technology and the availability of medical datasets, automatic analysis and classification of chest X-rays (CXRs) into TB and non-TB can be a reliable alternative for early TB screening. We propose an automatic TB detection system using advanced deep learning (DL) models. A substantial part of a CXR image is dark, with no relevant information for diagnosis, and can potentially confuse DL models. In this work, a U-Net model extracts the region of interest from CXRs, and the segmented images are fed to the DL models for feature extraction. Eight different convolutional neural network (CNN) models are employed in our experiments, and their classification performance is compared based on three publicly available CXR datasets. The U-Net model achieves a segmentation accuracy of 98.58%, an intersection over union (IoU) of 93.10, and a Dice coefficient score of 96.50. Our proposed stacked ensemble algorithm performed better, achieving accuracy, sensitivity, and specificity values of 98.38%, 98.89%, and 98.70%, respectively. Experimental results confirm that segmented lung CXR images with ensemble learning produce better results than unsegmented lung CXR images.
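The IoU and Dice scores quoted for the U-Net segmentation can be sketched for binary masks as follows (an illustrative implementation, not the authors' code; masks are flattened 0/1 pixel sequences):

```python
def iou_dice(mask_a, mask_b):
    """Intersection over union and Dice coefficient for two binary masks,
    given as equal-length iterables of 0/1 pixel values."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total_a = sum(mask_a)
    total_b = sum(mask_b)
    union = total_a + total_b - inter
    iou = inter / union if union else 1.0           # empty masks: define as perfect match
    dice = 2 * inter / (total_a + total_b) if (total_a + total_b) else 1.0
    return iou, dice

# tiny example: one overlapping pixel, three pixels in the union
iou, dice = iou_dice([1, 1, 0, 0], [1, 0, 1, 0])
print(iou, dice)  # IoU = 1/3, Dice = 1/2
```

Note the fixed relationship Dice = 2·IoU/(1+IoU), which is why both metrics rank segmentations identically even though their absolute values differ.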
Affiliation(s)
- Ramkumar Thirunavukarasu
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, India
29
Okolo GI, Katsigiannis S, Ramzan N. IEViT: An enhanced vision transformer architecture for chest X-ray image classification. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 226:107141. [PMID: 36162246 DOI: 10.1016/j.cmpb.2022.107141] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/06/2022] [Revised: 08/02/2022] [Accepted: 09/14/2022] [Indexed: 06/16/2023]
Abstract
BACKGROUND AND OBJECTIVE Chest X-ray imaging is a relatively cheap and accessible diagnostic tool that can assist in the diagnosis of various conditions, including pneumonia, tuberculosis, COVID-19, and others. However, the requirement for expert radiologists to view and interpret chest X-ray images can be a bottleneck, especially in remote and deprived areas. Recent advances in machine learning have made possible the automated diagnosis of chest X-ray scans. In this work, we examine the use of a novel Transformer-based deep learning model for the task of chest X-ray image classification. METHODS We first examine the performance of the Vision Transformer (ViT) state-of-the-art image classification machine learning model for the task of chest X-ray image classification, and then propose and evaluate the Input Enhanced Vision Transformer (IEViT), a novel enhanced Vision Transformer model that can achieve improved performance on chest X-ray images associated with various pathologies. RESULTS Experiments on four chest X-ray image data sets containing various pathologies (tuberculosis, pneumonia, COVID-19) demonstrated that the proposed IEViT model outperformed ViT for all the data sets and variants examined, achieving an F1-score between 96.39% and 100%, and an improvement over ViT of up to +5.82% in terms of F1-score across the four examined data sets. IEViT's maximum sensitivity (recall) ranged between 93.50% and 100% across the four data sets, with an improvement over ViT of up to +3%, whereas IEViT's maximum precision ranged between 97.96% and 100% across the four data sets, with an improvement over ViT of up to +6.41%. CONCLUSIONS Results showed that the proposed IEViT model outperformed all ViT's variants for all the examined chest X-ray image data sets, demonstrating its superiority and generalisation ability. 
Given the relatively low cost and the widespread accessibility of chest X-ray imaging, the use of the proposed IEViT model can potentially offer a powerful, but relatively cheap and accessible method for assisting diagnosis using chest X-ray images.
Affiliation(s)
- Naeem Ramzan
- University of the West of Scotland, High St., Paisley, PA1 2BE, UK
30
Ghose P, Uddin MA, Acharjee UK, Sharmin S. Deep viewing for the identification of Covid-19 infection status from chest X-Ray image using CNN based architecture. INTELLIGENT SYSTEMS WITH APPLICATIONS 2022; 16. [PMCID: PMC9536212 DOI: 10.1016/j.iswa.2022.200130] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
In recent years, coronavirus (COVID-19) has evolved into one of the world's leading life-threatening severe viral illnesses. A self-executing accord system might be a better option to stop COVID-19 from spreading due to its quick diagnostic option. Many studies have already investigated various deep learning techniques, which have a significant impact on the quick and precise early detection of COVID-19. Most of the existing techniques, though, have not been trained and tested using a significant amount of data. In this paper, we propose a deep-learning-enabled convolutional neural network (CNN) to automatically diagnose COVID-19 from chest X-rays. To train and test our model, 10,293 X-rays, including 2875 X-rays of COVID-19, were collected as a dataset. The applied dataset consists of three groups of chest X-rays: COVID-19, pneumonia, and normal patients. The proposed approach achieved 98.5% accuracy, 98.9% specificity, 99.2% sensitivity, 99.2% precision, and a 98.3% F1-score. Distinguishing COVID-19 patients from pneumonia patients using chest X-rays, particularly for human eyes, is crucial since both diseases have nearly identical characteristics. To address this issue, we have categorized COVID-19 and pneumonia using X-rays, achieving a 99.60% accuracy rate. Our findings show that the proposed model might aid clinicians and researchers in rapidly detecting COVID-19 patients, hence facilitating their treatment.
Affiliation(s)
- Partho Ghose
- Department of Computer Science and Engineering, Jagannath University, Dhaka, Bangladesh (corresponding author)
- Md. Ashraf Uddin
- Department of Computer Science and Engineering, Jagannath University, Dhaka, Bangladesh
- Uzzal Kumar Acharjee
- Department of Computer Science and Engineering, Jagannath University, Dhaka, Bangladesh
- Selina Sharmin
- Department of Computer Science and Engineering, Jagannath University, Dhaka, Bangladesh
31
Bhandari M, Shahi TB, Siku B, Neupane A. Explanatory classification of CXR images into COVID-19, Pneumonia and Tuberculosis using deep learning and XAI. Comput Biol Med 2022; 150:106156. [PMID: 36228463 PMCID: PMC9549800 DOI: 10.1016/j.compbiomed.2022.106156] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Revised: 09/05/2022] [Accepted: 09/24/2022] [Indexed: 11/18/2022]
Abstract
Chest X-ray (CXR) images are considered useful to monitor and investigate a variety of pulmonary disorders such as COVID-19, pneumonia, and tuberculosis (TB). With recent technological advancements, such diseases may now be recognized more precisely using computer-assisted diagnostics. Without compromising classification accuracy or feature extraction quality, a deep learning (DL) model to predict four different categories is proposed in this study. The proposed model is validated with publicly available datasets of 7132 chest X-ray (CXR) images. Furthermore, results are interpreted and explained using Gradient-weighted Class Activation Mapping (Grad-CAM), Local Interpretable Model-agnostic Explanations (LIME), and SHapley Additive exPlanations (SHAP) for better understandability. Initially, convolution features are extracted to collect high-level object-based information. Next, Shapley values from SHAP, predictability results from LIME, and heatmaps from Grad-CAM are used to explore the black-box approach of the DL model, achieving an average test accuracy of 94.31 ± 1.01% and a validation accuracy of 94.54 ± 1.33% for 10-fold cross-validation. Finally, in order to validate the model and qualify medical risk, medical assessments of the classifications are taken into account to consolidate the explanations generated from the eXplainable Artificial Intelligence (XAI) framework. The results suggest that XAI and DL models give clinicians/medical professionals persuasive and coherent conclusions related to the detection and categorization of COVID-19, pneumonia, and TB.
Affiliation(s)
- Mohan Bhandari
- Samriddhi College, Lokanthali, Bhaktapur, Kathmandu, Nepal.
- Tej Bahadur Shahi
- School of Engineering and Technology, Central Queensland University, Norman Gardens, 4701, Rockhampton, Queensland, Australia.
- Birat Siku
- Samriddhi College, Lokanthali, Bhaktapur, Kathmandu, Nepal.
- Arjun Neupane
- School of Engineering and Technology, Central Queensland University, Norman Gardens, 4701, Rockhampton, Queensland, Australia.
32
Lee J, Liu C, Kim J, Chen Z, Sun Y, Rogers JR, Chung WK, Weng C. Deep learning for rare disease: A scoping review. J Biomed Inform 2022; 135:104227. [PMID: 36257483 DOI: 10.1016/j.jbi.2022.104227] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Revised: 08/22/2022] [Accepted: 10/07/2022] [Indexed: 10/31/2022]
Abstract
Although individually rare, collectively more than 7,000 rare diseases affect about 10% of patients. Each of the rare diseases impacts the quality of life for patients and their families, and incurs significant societal costs. The low prevalence of each rare disease causes formidable challenges in accurately diagnosing and caring for these patients and engaging participants in research to advance treatments. Deep learning has advanced many scientific fields and has been applied to many healthcare tasks. This study reviewed the current uses of deep learning to advance rare disease research. Among the 332 reviewed articles, we found that deep learning has been actively used for rare neoplastic diseases (250/332), followed by rare genetic diseases (170/332) and rare neurological diseases (127/332). Convolutional neural networks (307/332) were the most frequently used deep learning architecture, presumably because image data were the most commonly available data type in rare disease research. Diagnosis is the main focus of rare disease research using deep learning (263/332). We summarized the challenges and future research directions for leveraging deep learning to advance rare disease research.
Affiliation(s)
- Junghwan Lee
- Department of Biomedical Informatics, Columbia University, New York, NY 10032, USA
- Cong Liu
- Department of Biomedical Informatics, Columbia University, New York, NY 10032, USA
- Junyoung Kim
- Department of Biomedical Informatics, Columbia University, New York, NY 10032, USA
- Zhehuan Chen
- Department of Biomedical Informatics, Columbia University, New York, NY 10032, USA
- Yingcheng Sun
- Department of Biomedical Informatics, Columbia University, New York, NY 10032, USA
- James R Rogers
- Department of Biomedical Informatics, Columbia University, New York, NY 10032, USA
- Wendy K Chung
- Departments of Medicine and Pediatrics, Columbia University, New York, NY 10032, USA
- Chunhua Weng
- Department of Biomedical Informatics, Columbia University, New York, NY 10032, USA.
33
Benchmarking saliency methods for chest X-ray interpretation. NAT MACH INTELL 2022. [DOI: 10.1038/s42256-022-00536-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Saliency methods, which produce heat maps that highlight the areas of the medical image that influence model prediction, are often presented to clinicians as an aid in diagnostic decision-making. However, rigorous investigation of the accuracy and reliability of these strategies is necessary before they are integrated into the clinical setting. In this work, we quantitatively evaluate seven saliency methods, including Grad-CAM, across multiple neural network architectures using two evaluation metrics. We establish the first human benchmark for chest X-ray segmentation in a multilabel classification set-up, and examine under what clinical conditions saliency maps might be more prone to failure in localizing important pathologies compared with a human expert benchmark. We find that (1) while Grad-CAM generally localized pathologies better than the other evaluated saliency methods, all seven performed significantly worse compared with the human benchmark, (2) the gap in localization performance between Grad-CAM and the human benchmark was largest for pathologies that were smaller in size and had shapes that were more complex, and (3) model confidence was positively correlated with Grad-CAM localization performance. Our work demonstrates that several important limitations of saliency methods must be addressed before we can rely on them for deep learning explainability in medical imaging.
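The Grad-CAM method evaluated in this benchmark reduces to weighting a convolutional layer's feature maps by their spatially pooled gradients; a minimal NumPy sketch of that core computation (with random arrays standing in for a real network's activations and gradients):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heat map from a conv layer's activations and the gradients
    of the target class score w.r.t. those activations.
    Both inputs have shape (K, H, W): K channels over an H x W grid."""
    weights = gradients.mean(axis=(1, 2))              # global-average-pool the gradients
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum over channels -> (H, W)
    cam = np.maximum(cam, 0)                           # ReLU: keep positive class evidence
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalize to [0, 1] for display
    return cam

rng = np.random.default_rng(0)
fmaps = rng.random((8, 7, 7))   # stand-in activations (8 channels, 7x7 grid)
grads = rng.random((8, 7, 7))   # stand-in gradients
heat = grad_cam(fmaps, grads)
```

In practice the resulting low-resolution map is upsampled to the input image size before being overlaid on the chest X-ray, which is one source of the localization imprecision the benchmark measures.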
34
AI-Assisted Tuberculosis Detection and Classification from Chest X-Rays Using a Deep Learning Normalization-Free Network Model. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:2399428. [PMID: 36225551 PMCID: PMC9550434 DOI: 10.1155/2022/2399428] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/17/2022] [Revised: 08/24/2022] [Accepted: 09/01/2022] [Indexed: 11/29/2022]
Abstract
Tuberculosis (TB) is an airborne disease caused by Mycobacterium tuberculosis. It is imperative to detect cases of TB as early as possible because if left untreated, there is a 70% chance of a patient dying within 10 years. The necessity for supplementary tools has increased in mid to low-income countries due to the rise of automation in healthcare sectors. The already limited resources are being heavily allocated towards controlling other dangerous diseases. Modern digital radiography (DR) machines, used for screening chest X-rays of potential TB victims are very practical. Coupled with computer-aided detection (CAD) with the aid of artificial intelligence, radiologists working in this field can really help potential patients. In this study, progressive resizing is introduced for training models to perform automatic inference of TB using chest X-ray images. ImageNet fine-tuned Normalization-Free Networks (NFNets) are trained for classification and the Score-Cam algorithm is utilized to highlight the regions in the chest X-Rays for detailed inference on the diagnosis. The proposed method is engineered to provide accurate diagnostics for both binary and multiclass classification. The models trained with this method have achieved 96.91% accuracy, 99.38% AUC, 91.81% sensitivity, and 98.42% specificity on a multiclass classification dataset. Moreover, models have also achieved top-1 inference metrics of 96% accuracy and 98% AUC for binary classification. The results obtained demonstrate that the proposed method can be used as a secondary decision tool in a clinical setting for assisting radiologists.
35
Draelos RL, Carin L. Explainable multiple abnormality classification of chest CT volumes. Artif Intell Med 2022; 132:102372. [DOI: 10.1016/j.artmed.2022.102372] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2021] [Revised: 06/09/2022] [Accepted: 07/28/2022] [Indexed: 12/20/2022]
36
Early Diagnosis of Tuberculosis Using Deep Learning Approach for IOT Based Healthcare Applications. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:3357508. [PMID: 36211018 PMCID: PMC9534630 DOI: 10.1155/2022/3357508] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/14/2022] [Revised: 07/06/2022] [Accepted: 08/26/2022] [Indexed: 02/03/2023]
Abstract
In the modern world, tuberculosis (TB) is regarded as a serious health issue with a high rate of mortality. TB can be cured completely by early diagnosis. One tool utilized for this is the chest X-ray (CXR), which is used to screen for active TB. An enhanced deep learning (DL) model is implemented for automatic tuberculosis detection. This work comprises phases of preprocessing, segmentation, feature extraction, and optimized classification. Initially, the CXR image is preprocessed and segmented using Adaptive Fuzzy C-Means (AFCM) clustering. Then, several features are extracted. Finally, these features are given to the DL classifier, a Deep Belief Network (DBN). To improve the classification accuracy and to optimize the DBN, the metaheuristic Adaptive Monarch Butterfly Optimization (AMBO) algorithm is used. Here, the Deep Belief Network with Adaptive Monarch Butterfly Optimization (DBN-AMBO) is used for enhancing the accuracy, reducing the error function, and optimizing weighting parameters. The overall implementation is carried out on the Python platform. The overall performance evaluations of the DBN-AMBO were carried out on the MC and SC datasets and compared with other approaches on the basis of certain metrics.
37
Devnath L, Fan Z, Luo S, Summons P, Wang D. Detection and Visualisation of Pneumoconiosis Using an Ensemble of Multi-Dimensional Deep Features Learned from Chest X-rays. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:11193. [PMID: 36141457 PMCID: PMC9517617 DOI: 10.3390/ijerph191811193] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/11/2022] [Revised: 08/25/2022] [Accepted: 08/27/2022] [Indexed: 06/16/2023]
Abstract
Pneumoconiosis is a group of occupational lung diseases induced by mineral dust inhalation and subsequent lung tissue reactions. It can eventually cause irreparable lung damage, as well as gradual and permanent physical impairments. It has affected millions of workers in hazardous industries throughout the world, and it is a leading cause of occupational death. It is difficult to diagnose early pneumoconiosis because of the low sensitivity of chest radiographs, the wide variation in interpretation between and among readers, and the scarcity of B-readers, which all add to the difficulty in diagnosing these occupational illnesses. In recent years, deep machine learning algorithms have been extremely successful at classifying and localising abnormalities in medical images. In this study, we proposed an ensemble learning approach to improve pneumoconiosis detection in chest X-rays (CXRs) using nine machine learning classifiers and multi-dimensional deep features extracted using the CheXNet-121 architecture. Eight evaluation metrics were utilised for each high-level feature set of the associated cross-validation datasets in order to compare the ensemble performance with state-of-the-art techniques from the literature that used the same cross-validation datasets. It is observed that integrated ensemble learning exhibits promising results (92.68% accuracy, 85.66% Matthews correlation coefficient (MCC), and 0.9302 area under the precision-recall (PR) curve), compared to individual CheXNet-121 and other state-of-the-art techniques. Finally, Grad-CAM was used to visualise the learned behaviour of individual dense blocks within CheXNet-121 and their ensembles into three colour channels of CXRs. We compared the Grad-CAM-indicated ROI to the ground-truth ROI using intersection over union (IoU) and average precision (AP) values for each classifier and their ensemble.
Through the visualisation of the Grad-CAM within the blue channel, the average IOU passed more than 90% of the pneumoconiosis detection in chest radiographs.
Affiliation(s)
- Liton Devnath
- School of Information and Physical Sciences, The University of Newcastle, Newcastle 2308, Australia
- British Columbia Cancer Research Centre, Vancouver, BC V5Z 1L3, Canada
- Zongwen Fan
- College of Computer Science and Technology, Huaqiao University, Xiamen 361021, China
- Suhuai Luo
- School of Information and Physical Sciences, The University of Newcastle, Newcastle 2308, Australia
- Peter Summons
- School of Information and Physical Sciences, The University of Newcastle, Newcastle 2308, Australia
- Dadong Wang
- Quantitative Imaging, CSIRO Data61, Marsfield 2122, Australia
38
Priya KV, Peter JD. A federated approach for detecting the chest diseases using DenseNet for multi-label classification. COMPLEX INTELL SYST 2022. [DOI: 10.1007/s40747-021-00474-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
Multi-label disease classification algorithms help to predict various chronic diseases at an early stage. Diverse deep neural networks are applied to multi-label classification problems to foresee multiple mutually non-exclusive classes or diseases. We propose a federated approach for detecting chest diseases using DenseNets for better accuracy in the prediction of various diseases. Chest X-ray images from the Kaggle repository are used as the dataset in the proposed model. This new model is tested with both a sample and the full chest X-ray dataset, and it outperforms existing models in terms of various evaluation metrics. We adopted a transfer learning approach along with a network pre-trained from scratch to improve performance. For this, we have integrated DenseNet121 into our framework. DenseNets have several advantages: they help to overcome vanishing-gradient issues, boost feature propagation and reuse, and reduce the number of parameters. Furthermore, Grad-CAMs are used as visualization methods to visualize the affected parts on the chest X-ray. Henceforth, the proposed architecture will help in the prediction of various diseases from a single chest X-ray and furthermore guide doctors and specialists in making timely decisions.
39
Çallı E, Murphy K, Scholten ET, Schalekamp S, van Ginneken B. Explainable emphysema detection on chest radiographs with deep learning. PLoS One 2022; 17:e0267539. [PMID: 35900979 PMCID: PMC9333227 DOI: 10.1371/journal.pone.0267539] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2021] [Accepted: 04/12/2022] [Indexed: 12/02/2022] Open
Abstract
We propose a deep learning system to automatically detect four explainable emphysema signs on frontal and lateral chest radiographs. Frontal and lateral chest radiographs from 3000 studies were retrospectively collected. Two radiologists annotated these with 4 radiological signs of pulmonary emphysema identified from the literature. A patient with ≥2 of these signs present is considered emphysema positive. Using separate deep learning systems for frontal and lateral images we predict the presence of each of the four visual signs and use these to determine emphysema positivity. The ROC and AUC results on a set of 422 held-out cases, labeled by both radiologists, are reported. Comparison with a black-box model which predicts emphysema without the use of explainable visual features is made on the annotations from both radiologists, as well as the subset that they agreed on. DeLong’s test is used to compare with the black-box model ROC and McNemar’s test to compare with radiologist performance. In 422 test cases, emphysema positivity was predicted with AUCs of 0.924 and 0.946 using the reference standard from each radiologist separately. Setting model sensitivity equivalent to that of the second radiologist, our model has a comparable specificity (p = 0.880 and p = 0.143 for each radiologist respectively). Our method is comparable with the black-box model with AUCs of 0.915 (p = 0.407) and 0.935 (p = 0.291), respectively. On the 370 cases where both radiologists agreed (53 positives), our model achieves an AUC of 0.981, again comparable to the black-box model AUC of 0.972 (p = 0.289). Our proposed method can predict emphysema positivity on chest radiographs as well as a radiologist or a comparable black-box method. It additionally produces labels for four visual signs to ensure the explainability of the result. The dataset is publicly available at https://doi.org/10.5281/zenodo.6373392.
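The paper's positivity rule (a patient with at least 2 of the 4 signs present is emphysema positive) is simple enough to sketch directly; the threshold value here is an illustrative assumption, not taken from the paper:

```python
import numpy as np

def emphysema_positive(sign_probs, threshold=0.5, min_signs=2):
    """Binarize the four per-sign probabilities and call the study
    positive if at least `min_signs` radiological signs are present."""
    present = np.asarray(sign_probs) >= threshold
    return int(present.sum()) >= min_signs

emphysema_positive([0.9, 0.7, 0.2, 0.1])  # two signs present -> positive
emphysema_positive([0.9, 0.1, 0.2, 0.1])  # one sign present -> negative
```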
40
Liang S, Ma J, Wang G, Shao J, Li J, Deng H, Wang C, Li W. The Application of Artificial Intelligence in the Diagnosis and Drug Resistance Prediction of Pulmonary Tuberculosis. Front Med (Lausanne) 2022; 9:935080. [PMID: 35966878 PMCID: PMC9366014 DOI: 10.3389/fmed.2022.935080]
Abstract
With the increasing incidence and mortality of pulmonary tuberculosis, in addition to tough and controversial disease management, time-consuming and resource-intensive conventional approaches to the diagnosis and differential diagnosis of tuberculosis remain awkward issues, especially in countries with a high tuberculosis burden and limited resources. In the meantime, the climbing proportion of drug-resistant tuberculosis poses a significant hazard to public health. Thus, auxiliary diagnostic tools with higher efficiency and accuracy are urgently required. Artificial intelligence (AI), which is not new but has recently grown in popularity, provides researchers with opportunities and technical underpinnings to develop novel, precise, rapid, and automated implements for pulmonary tuberculosis care, including but not limited to tuberculosis detection. In this review, we introduce representative AI methods, focusing on deep learning and radiomics, followed by descriptions of the state-of-the-art AI models developed using medical images and genetic data to detect pulmonary tuberculosis, distinguish the infection from other pulmonary diseases, and identify drug resistance of tuberculosis, with the purpose of assisting physicians in deciding the appropriate therapeutic schedule in the early stage of the disease. We also enumerate the challenges in maximizing the impact of AI in this field, such as the generalization and clinical utility of deep learning models.
41
Nguyen NH, Nguyen HQ, Nguyen NT, Nguyen TV, Pham HH, Nguyen TNM. Deployment and validation of an AI system for detecting abnormal chest radiographs in clinical settings. Front Digit Health 2022; 4:890759. [PMID: 35966141 PMCID: PMC9367219 DOI: 10.3389/fdgth.2022.890759]
Abstract
Background The purpose of this paper is to demonstrate a mechanism for deploying and validating an AI-based system for detecting abnormalities on chest X-ray scans at the Phu Tho General Hospital, Vietnam. We aim to investigate the performance of the system in real-world clinical settings and compare its effectiveness to the in-lab performance. Method The AI system was directly integrated into the hospital's Picture Archiving and Communication System (PACS) after being trained on a fixed annotated dataset from other sources. The system's performance was prospectively measured by matching and comparing the AI results with the radiology reports of 6,285 chest X-ray examinations extracted from the Hospital Information System (HIS) over the last 2 months of 2020. The normal/abnormal status of a radiology report was determined by a set of rules and served as the ground truth. Results Our system achieves an F1 score (the harmonic mean of recall and precision) of 0.653 (95% CI 0.635, 0.671) for detecting any abnormalities on chest X-rays. This corresponds to an accuracy of 79.6%, a sensitivity of 68.6%, and a specificity of 83.9%. Conclusions Computer-aided diagnosis (CAD) systems for chest radiographs using artificial intelligence (AI) have recently shown great potential as a second opinion for radiologists. However, the performance of such systems was mostly evaluated on fixed datasets in a retrospective manner and is thus far from the real performance in clinical practice. Despite a significant drop from the in-lab performance, our result establishes a reasonable level of confidence in applying such a system in real-life situations.
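The F1 score the paper reports (the harmonic mean of recall and precision) can be sketched from raw confusion counts; the counts below are illustrative, not taken from the study:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: precision = 0.6, recall = 0.75 -> F1 = 2/3
f1_score(60, 40, 20)
```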
42
Ravi V, Acharya V, Alazab M. A multichannel EfficientNet deep learning-based stacking ensemble approach for lung disease detection using chest X-ray images. Cluster Computing 2022; 26:1181-1203. [PMID: 35874187 PMCID: PMC9295885 DOI: 10.1007/s10586-022-03664-6]
Abstract
This paper proposes a multichannel deep learning approach for lung disease detection using chest X-rays. The multichannel models used in this work are the pretrained EfficientNetB0, EfficientNetB1, and EfficientNetB2 models. The features from the EfficientNet models are fused together and passed into more than one non-linear fully connected layer. Finally, the features are passed into a stacked ensemble learning classifier for lung disease detection. The stacked ensemble contains random forest and SVM in the first stage and logistic regression in the second stage. The performance of the proposed method is studied in detail for more than one lung disease, such as pneumonia, tuberculosis (TB), and COVID-19, and compared with similar methods to show that it is robust and capable of achieving better performance. In all the experiments, the proposed method outperformed similar existing lung disease methods, indicating that it is robust and generalizable to unseen chest X-ray data samples. To ensure that the features learnt by the proposed method are optimal, t-SNE feature visualization is shown for all three lung disease models. Overall, the proposed method achieved 98% detection accuracy for pediatric pneumonia, 99% for TB, and 98% for COVID-19. The proposed method can be used as a tool for point-of-care diagnosis by healthcare radiologists.
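The two-stage ensemble described above (random forest and SVM feeding a logistic-regression meta-learner) can be sketched with scikit-learn; synthetic feature vectors stand in for the EfficientNet features, and all hyperparameters below are illustrative assumptions, not the paper's settings:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Synthetic stand-in for fused deep features extracted from chest X-rays.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    final_estimator=LogisticRegression(),  # second-stage meta-learner
)
stack.fit(X[:200], y[:200])
acc = stack.score(X[200:], y[200:])  # held-out accuracy
```

`StackingClassifier` trains the meta-learner on cross-validated base-model predictions, which is the standard way to avoid leaking the base models' training fit into the second stage.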
43
Deep and Hybrid Learning Technique for Early Detection of Tuberculosis Based on X-ray Images Using Feature Fusion. Appl Sci (Basel) 2022. [DOI: 10.3390/app12147092]
Abstract
Tuberculosis (TB) is a fatal disease in developing countries, with the infection spreading through direct contact or the air. Despite its seriousness, the early detection of tuberculosis by reliable techniques can save patients' lives. A chest X-ray is a recommended screening technique for locating pulmonary abnormalities. However, analyzing X-ray images to detect abnormalities requires highly experienced radiologists. Therefore, artificial intelligence techniques come into play to help radiologists perform an accurate diagnosis in the early stages of TB. This study applies two AI techniques, CNN and ANN, and proposes two different approaches, with two systems each, to diagnose tuberculosis from two datasets. The first approach hybridizes two CNN models, ResNet-50 and GoogLeNet. Prior to the classification stage, it applies the principal component analysis (PCA) algorithm to reduce the dimensionality of the extracted deep features; the SVM algorithm is then used for classification with high accuracy. This hybrid approach achieved superior results in diagnosing tuberculosis based on X-ray images from both datasets. In contrast, the second approach applies artificial neural networks (ANN) to the fused features extracted by the ResNet-50 and GoogLeNet models combined with the features extracted by the gray-level co-occurrence matrix (GLCM), discrete wavelet transform (DWT), and local binary pattern (LBP) algorithms. The ANN achieved superior results for both tuberculosis datasets. On the first dataset, the ANN, with ResNet-50, GLCM, DWT, and LBP features, achieved an accuracy of 99.2%, a sensitivity of 99.23%, a specificity of 99.41%, and an AUC of 99.78%; on the second dataset, it reached an accuracy of 99.8%, a sensitivity of 99.54%, a specificity of 99.68%, and an AUC of 99.82%. Thus, the proposed methods help doctors and radiologists to diagnose tuberculosis early and increase the chances of survival.
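Of the handcrafted descriptors fused above, the GLCM is the simplest to show concretely. This is a minimal sketch of co-occurrence counting for one pixel offset (libraries such as scikit-image provide optimized versions with multiple angles and normalization):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one offset (dx, dy):
    counts how often gray level i is followed by gray level j."""
    m = np.zeros((levels, levels), dtype=int)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m

patch = np.array([[0, 0, 1],
                  [1, 2, 2],
                  [2, 2, 2]])
g = glcm(patch, levels=3)  # horizontal neighbours; pair (2, 2) occurs 3 times
```

Texture statistics (contrast, homogeneity, energy) are then computed from the normalized matrix and used as classifier features.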
44
Park S, Kim G, Oh Y, Seo JB, Lee SM, Kim JH, Moon S, Lim JK, Park CM, Ye JC. Self-evolving vision transformer for chest X-ray diagnosis through knowledge distillation. Nat Commun 2022; 13:3848. [PMID: 35789159 PMCID: PMC9252561 DOI: 10.1038/s41467-022-31514-x]
Abstract
Although deep learning-based computer-aided diagnosis systems have recently achieved expert-level performance, developing a robust model requires large, high-quality data with annotations that are expensive to obtain. This situation poses a conundrum: annually collected chest X-rays cannot be utilized due to the absence of labels, especially in deprived areas. In this study, we present a framework named distillation for self-supervision and self-train learning (DISTL), inspired by the learning process of radiologists, which can improve the performance of a vision transformer simultaneously with self-supervision and self-training through knowledge distillation. In external validation from three hospitals for diagnosis of tuberculosis, pneumothorax, and COVID-19, DISTL offers gradually improved performance as the amount of unlabeled data increases, even better than the fully supervised model with the same amount of labeled data. We additionally show that the model obtained with DISTL is robust to various real-world nuisances, offering better applicability in clinical settings.
45
Chandra TB, Singh BK, Jain D. Disease Localization and Severity Assessment in Chest X-Ray Images using Multi-Stage Superpixels Classification. Comput Methods Programs Biomed 2022; 222:106947. [PMID: 35749885 PMCID: PMC9403875 DOI: 10.1016/j.cmpb.2022.106947]
Abstract
BACKGROUND AND OBJECTIVES Chest X-ray (CXR) is a non-invasive imaging modality used in the prognosis and management of chronic lung disorders such as tuberculosis (TB), pneumonia, and coronavirus disease (COVID-19). The radiomic features associated with different disease manifestations assist in detection, localization, and grading of the severity of infected lung regions. The majority of existing computer-aided diagnosis (CAD) systems use these features for the classification task, and only a few works have been dedicated to disease localization and severity scoring. Moreover, existing deep learning approaches use class activation maps and saliency maps, which generate only a rough localization. This study aims to generate a compact disease boundary and infection map and to grade the infection severity using the proposed multi-stage superpixel classification-based disease localization and severity assessment framework. METHODS The proposed method uses the simple linear iterative clustering (SLIC) technique to subdivide the lung field into small superpixels. Initially, different radiomic texture features and the proposed shape features are extracted and combined to train different benchmark classifiers in a multi-stage framework. Subsequently, the predicted class labels are used to generate an infection map, mark the disease boundary, and grade the infection severity. Performance is evaluated using the publicly available Montgomery dataset and validated using Friedman average ranking and the Holm and Nemenyi post-hoc procedures. RESULTS The proposed multi-stage classification approach achieved accuracy (ACC) = 95.52%, F-measure (FM) = 95.48%, and area under the curve (AUC) = 0.955 for Stage I and ACC = 85.35%, FM = 85.20%, and AUC = 0.853 for Stage II on the calibration dataset, and ACC = 93.41%, FM = 95.32%, and AUC = 0.936 for Stage I and ACC = 84.02%, FM = 71.01%, and AUC = 0.795 for Stage II on the validation dataset. The model also demonstrated an average Jaccard index (JI) of 0.82 and a Pearson's correlation coefficient (r) of 0.9589. CONCLUSIONS The classification results obtained on the calibration and validation datasets confirm the promising performance of the proposed framework. The average JI shows promising potential to localize the disease, and the good agreement between the radiologist score and the predicted severity score (r) confirms the robustness of the method. Finally, the statistical test justified the significance of the obtained results.
46
Kotei E, Thirunavukarasu R. Computational techniques for the automated detection of Mycobacterium tuberculosis from digitized sputum smear microscopic images: A systematic review. Prog Biophys Mol Biol 2022; 171:4-16. [PMID: 35339515 DOI: 10.1016/j.pbiomolbio.2022.03.004]
Abstract
BACKGROUND Tuberculosis is an infectious disease caused by Mycobacterium tuberculosis (MTB), which mostly affects the lungs of humans. Bright-field microscopy and fluorescence microscopy are the two major testing techniques used for tuberculosis (TB) detection. Identifying and counting TB bacilli manually from sputum under a microscope is tedious, laborious, and error prone. To eliminate this problem, traditional image processing techniques and deep learning (DL) models have been deployed to build computer-aided diagnosis (CADx) systems for TB detection. METHODS In this paper, we performed a systematic review of image processing techniques used in developing computer-aided diagnosis systems for TB detection. Articles selected for this review were retrieved from publication databases such as Science Direct, ACM, IEEE Xplore, Springer Link, and PubMed. After a rigorous pruning exercise, 42 articles were selected, of which 21 were journal articles and 21 were conference articles. RESULT Image processing techniques and deep neural networks such as CNNs and DCNNs proposed in the literature, along with clinical applications, are presented and discussed. The performance of these techniques has been evaluated on metrics such as accuracy, sensitivity, specificity, precision, and F1 score and is presented accordingly. CONCLUSION CADx systems built on DL models performed better in TB detection and classification due to their abstraction of low-level features, better generalization, and minimal or no human intervention in their operation. Research gaps identified in the literature have been highlighted and discussed for further investigation.
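The evaluation metrics named in the review all derive from the same confusion-matrix counts; a small sketch with hypothetical counts shows the relationships:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Common CADx evaluation metrics from confusion-matrix counts."""
    sens = tp / (tp + fn)                   # sensitivity / recall
    spec = tn / (tn + fp)                   # specificity
    prec = tp / (tp + fp)                   # precision
    acc = (tp + tn) / (tp + fp + tn + fn)   # accuracy
    f1 = 2 * prec * sens / (prec + sens)    # harmonic mean of prec/recall
    return {"accuracy": acc, "sensitivity": sens,
            "specificity": spec, "precision": prec, "f1": f1}

# Hypothetical counts for a 200-smear test set.
m = diagnostic_metrics(tp=90, fp=10, tn=85, fn=15)
```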
47
Rajaraman S, Zamzmi G, Yang F, Xue Z, Jaeger S, Antani SK. Uncertainty Quantification in Segmenting Tuberculosis-Consistent Findings in Frontal Chest X-rays. Biomedicines 2022; 10:1323. [PMID: 35740345 PMCID: PMC9220007 DOI: 10.3390/biomedicines10061323]
Abstract
Deep learning (DL) methods have demonstrated superior performance in medical image segmentation tasks. However, selecting a loss function that conforms to the data characteristics is critical for optimal performance. Further, the direct use of traditional DL models does not provide a measure of uncertainty in predictions. Even high-quality automated predictions for medical diagnostic applications demand uncertainty quantification to gain user trust. In this study, we aim to investigate the benefits of (i) selecting an appropriate loss function and (ii) quantifying uncertainty in predictions using a VGG16-based U-Net model with the Monte Carlo Dropout (MCD) method for segmenting tuberculosis (TB)-consistent findings in frontal chest X-rays (CXRs). We determine an optimal uncertainty threshold based on several uncertainty-related metrics. This threshold is used to select and refer highly uncertain cases to an expert. Experimental results demonstrate that (i) the model trained with a modified focal Tversky loss function delivered superior segmentation performance (mean average precision (mAP): 0.5710, 95% confidence interval (CI): (0.4021, 0.7399)), (ii) the model with 30 MC forward passes during inference further improved and stabilized performance (mAP: 0.5721, 95% CI: (0.4032, 0.7410)), and (iii) an uncertainty threshold of 0.7 is observed to be optimal for referring highly uncertain cases.
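Monte Carlo dropout keeps dropout active at inference and treats the spread of repeated stochastic passes as an uncertainty estimate. This toy sketch replaces the U-Net with a single linear layer purely to show the mechanism (all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, w, n_passes=30, p_drop=0.5):
    """Repeated stochastic forward passes of a toy linear 'network' with
    dropout kept on at inference; the std acts as an uncertainty score."""
    preds = []
    for _ in range(n_passes):
        mask = rng.random(w.shape) >= p_drop          # drop units at random
        preds.append(x @ (w * mask) / (1 - p_drop))   # inverted-dropout scaling
    preds = np.array(preds)
    return preds.mean(), preds.std()

x = np.ones(8)
w = np.full(8, 0.25)
mean, uncertainty = mc_dropout_predict(x, w)  # mean close to x @ w = 2.0
```

Cases whose uncertainty exceeds a chosen threshold would then be referred to an expert, as in the paper's triage scheme.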
48
Baghdadi N, Maklad AS, Malki A, Deif MA. Reliable Sarcoidosis Detection Using Chest X-rays with EfficientNets and Stain-Normalization Techniques. Sensors (Basel) 2022; 22:3846. [PMID: 35632254 PMCID: PMC9144943 DOI: 10.3390/s22103846]
Abstract
Sarcoidosis is frequently misdiagnosed as tuberculosis (TB) and consequently mistreated due to inherent limitations in radiological presentations. Clinically, to distinguish sarcoidosis from TB, physicians usually employ biopsy tissue diagnosis and blood tests; this approach is painful for patients, time-consuming, expensive, and relies on techniques prone to human error. This study proposes a computer-aided diagnosis method to address these issues. This method examines seven EfficientNet designs that were fine-tuned and compared for their abilities to categorize X-ray images into three categories: normal, TB-infected, and sarcoidosis-infected. Furthermore, the effects of stain normalization on performance were investigated using Reinhard's and Macenko's conventional stain normalization procedures. This procedure aids in improving diagnostic efficiency and accuracy while cutting diagnostic costs. A database of 231 sarcoidosis-infected, 563 TB-infected, and 1010 normal chest X-ray images was created using public databases and information from several national hospitals. The EfficientNet-B4 model attained accuracy, sensitivity, and precision rates of 98.56%, 98.36%, and 98.67%, respectively, when the training X-ray images were normalized by the Reinhard stain approach, and 97.21%, 96.9%, and 97.11%, respectively, when normalized by Macenko's approach. The results demonstrate that Reinhard stain normalization can improve the performance of EfficientNet-B4 X-ray image classification. The proposed framework for identifying pulmonary sarcoidosis may prove valuable in clinical use.
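The core statistic-transfer step of Reinhard normalization is matching each channel's mean and standard deviation to a reference. The sketch below shows only that step; the original method applies it in lab color space, and the RGB-to-lab conversion is omitted here for brevity:

```python
import numpy as np

def reinhard_like_normalize(img, target_mean, target_std):
    """Shift and rescale each channel so its mean/std match the target
    statistics (Reinhard-style transfer, minus the color-space step)."""
    img = img.astype(float)
    out = np.empty_like(img)
    for c in range(img.shape[-1]):
        ch = img[..., c]
        out[..., c] = ((ch - ch.mean()) / (ch.std() + 1e-8)
                       * target_std[c] + target_mean[c])
    return out

src = np.random.default_rng(1).uniform(0, 255, size=(16, 16, 3))
norm = reinhard_like_normalize(src, target_mean=[128, 128, 128],
                               target_std=[30, 30, 30])
```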
49
Prediction of future healthcare expenses of patients from chest radiographs using deep learning: a pilot study. Sci Rep 2022; 12:8344. [PMID: 35585177 PMCID: PMC9117267 DOI: 10.1038/s41598-022-12551-4]
Abstract
Our objective was to develop deep learning models with chest radiograph data to predict healthcare costs and classify top-50% spenders. 21,872 frontal chest radiographs were retrospectively collected from 19,524 patients with at least 1 year of spending data. Among these patients, 11,003 had 3 years of cost data, and 1678 had 5 years of cost data. Model performance was measured with area under the receiver operating characteristic curve (ROC-AUC) for classification of top-50% spenders and Spearman ρ for prediction of healthcare cost. The best model predicting 1-year (N = 21,872) expenditure achieved a ROC-AUC of 0.806 [95% CI 0.793–0.819] for top-50% spender classification and ρ of 0.561 [0.536–0.586] for regression. Similarly, for predicting 3-year (N = 12,395) expenditure, it achieved a ROC-AUC of 0.771 [0.750–0.794] and ρ of 0.524 [0.489–0.559], and for predicting 5-year (N = 1779) expenditure, a ROC-AUC of 0.729 [0.667–0.729] and ρ of 0.424 [0.324–0.529]. Our deep learning model demonstrated the feasibility of predicting healthcare expenditure and classifying top-50% healthcare spenders at 1, 3, and 5 years, implying the feasibility of combining deep learning with information-rich imaging data to uncover hidden associations that may elude physicians. Such a model can be a starting point for accurate budgeting in healthcare reimbursement models.
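The Spearman ρ reported for the regression task is just the Pearson correlation of the ranks; a minimal sketch (assuming no ties, which would require averaged ranks):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Assumes no ties; tied values would need averaged ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)  # ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)  # ranks of y
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

spearman_rho([1, 2, 3, 4], [10, 20, 30, 40])  # perfectly monotone -> 1.0
spearman_rho([1, 2, 3, 4], [40, 30, 20, 10])  # reversed -> -1.0
```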
50
Wong A, Lee JRH, Rahmat-Khah H, Sabri A, Alaref A, Liu H. TB-Net: A Tailored, Self-Attention Deep Convolutional Neural Network Design for Detection of Tuberculosis Cases From Chest X-Ray Images. Front Artif Intell 2022; 5:827299. [PMID: 35464996 PMCID: PMC9022489 DOI: 10.3389/frai.2022.827299]
Abstract
Tuberculosis (TB) remains a global health problem, and is the leading cause of death from an infectious disease. A crucial step in the treatment of tuberculosis is screening high risk populations and the early detection of the disease, with chest x-ray (CXR) imaging being the most widely-used imaging modality. As such, there has been significant recent interest in artificial intelligence-based TB screening solutions for use in resource-limited scenarios where there is a lack of trained healthcare workers with expertise in CXR interpretation. Motivated by this pressing need and the recent recommendation by the World Health Organization (WHO) for the use of computer-aided diagnosis of TB in place of a human reader, we introduce TB-Net, a self-attention deep convolutional neural network tailored for TB case screening. We used CXR data from a multi-national patient cohort to train and test our models. A machine-driven design exploration approach leveraging generative synthesis was used to build a highly customized deep neural network architecture with attention condensers. We conducted an explainability-driven performance validation process to validate TB-Net's decision-making behavior. Experiments on CXR data from a multi-national patient cohort showed that the proposed TB-Net is able to achieve accuracy/sensitivity/specificity of 99.86/100.0/99.71%. Radiologist validation was conducted on select cases by two board-certified radiologists with over 10 and 19 years of experience, respectively, and showed consistency between radiologist interpretation and critical factors leveraged by TB-Net for TB case detection for the case where radiologists identified anomalies. The proposed TB-Net not only achieves high tuberculosis case detection performance in terms of sensitivity and specificity, but also leverages clinically relevant critical factors in its decision making process. 
While not a production-ready solution, we hope that the open-source release of TB-Net as part of the COVID-Net initiative will support researchers, clinicians, and citizen data scientists in advancing this field in the fight against this global public health crisis.