1. Chen J, Fan X, Chen QL, Ren W, Li Q, Wang D, He J. Research status and progress of deep learning in automatic esophageal cancer detection. World J Gastrointest Oncol 2025; 17:104410. [DOI: 10.4251/wjgo.v17.i5.104410]
Abstract
Esophageal cancer (EC), a common malignant tumor of the digestive tract, requires early diagnosis and timely treatment to improve patient prognosis. Automated detection of EC using medical imaging has the potential to increase screening efficiency and diagnostic accuracy, thereby significantly improving long-term survival rates and the quality of life of patients. Recent advances in deep learning (DL), particularly convolutional neural networks, have demonstrated remarkable performance in medical imaging analysis. These techniques have shown significant progress in the automated identification of malignant tumors, quantitative analysis of lesions, and improvement of diagnostic accuracy and efficiency. This article comprehensively examines the research progress of DL in medical imaging for EC, covering imaging modalities including digital pathology, endoscopy, and computed tomography. It explores the clinical value and application prospects of DL in EC screening and diagnosis. Additionally, the article addresses several critical challenges that must be overcome for the clinical translation of DL techniques, including constructing high-quality datasets, promoting multimodal feature fusion, and optimizing the integration of artificial intelligence into clinical workflows. By providing a detailed overview of the current state of DL in EC imaging and highlighting the key challenges and future directions, this article aims to guide future research and facilitate the clinical implementation of DL technologies in EC management, ultimately contributing to better patient outcomes.
Affiliation(s)
- Jing Chen
- Department of Nuclear Medicine, Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing 210008, Jiangsu Province, China
- Xin Fan
- Department of Nuclear Medicine, Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing 210008, Jiangsu Province, China
- Qiao-Liang Chen
- Department of Nuclear Medicine, Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing 210008, Jiangsu Province, China
- Wei Ren
- The Comprehensive Cancer Center of Drum Tower Hospital, Medical School of Nanjing University & Clinical Cancer Institute of Nanjing University, Nanjing 210008, Jiangsu Province, China
- Qi Li
- Department of Pathology, Nanjing Drum Tower Hospital, Nanjing 210008, Jiangsu Province, China
- Dong Wang
- Nanjing Center for Applied Mathematics, Nanjing 211135, Jiangsu Province, China
- Jian He
- Department of Nuclear Medicine, Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing 210008, Jiangsu Province, China

2. Rokhshad R, Salehi SN, Yavari A, Shobeiri P, Esmaeili M, Manila N, Motamedian SR, Mohammad-Rahimi H. Deep learning for diagnosis of head and neck cancers through radiographic data: a systematic review and meta-analysis. Oral Radiol 2024; 40:1-20. [PMID: 37855976] [DOI: 10.1007/s11282-023-00715-5]
Abstract
PURPOSE This study aims to review deep learning applications for detecting head and neck cancer (HNC) using magnetic resonance imaging (MRI) and radiographic data. METHODS PubMed, Scopus, Embase, Google Scholar, IEEE, and arXiv were searched through January 2023. Studies were included if they applied segmentation, object detection, or classification deep learning models to head and neck medical images (computed tomography (CT), positron emission tomography (PET), MRI, planar scans, and panoramic X-ray) of human subjects with head and neck cancers. The risk of bias was rated with the quality assessment of diagnostic accuracy studies (QUADAS-2) tool. For the meta-analysis, the diagnostic odds ratio (DOR) was calculated. Deeks' funnel plot was used to assess publication bias. The MIDAS and Metandi packages were used to analyze diagnostic test accuracy in STATA. RESULTS Of 1967 studies, 32 were found eligible after the search and screening procedures. According to the QUADAS-2 tool, 7 included studies had a low risk of bias for all domains. Across the included studies, accuracy varied from 82.6% to 100%, specificity ranged from 66.6% to 90.1%, and sensitivity from 74% to 99.68%. Fourteen studies that provided sufficient data were included in the meta-analysis. The pooled sensitivity was 90% (95% CI 0.82-0.94), and the pooled specificity was 92% (95% CI 0.87-0.96). The pooled DOR was 103 (27-251). Publication bias was not detected (P = 0.75). CONCLUSION Deep learning models can enhance head and neck cancer screening with high specificity and sensitivity.
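For readers unfamiliar with the pooled statistics reported above, the sketch below shows how per-study sensitivity, specificity, and the diagnostic odds ratio are derived from 2x2 counts and crudely pooled on the log scale. The study counts are hypothetical, and real meta-analyses (such as the MIDAS/Metandi models cited) use bivariate random-effects weighting rather than this simple average.

```python
import numpy as np

def diagnostic_metrics(tp, fp, fn, tn, correction=0.5):
    """Per-study sensitivity, specificity, and diagnostic odds ratio (DOR).

    A 0.5 continuity correction is added when any cell is zero, a common
    convention in diagnostic meta-analysis.
    """
    if 0 in (tp, fp, fn, tn):
        tp, fp, fn, tn = (x + correction for x in (tp, fp, fn, tn))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    dor = (tp * tn) / (fp * fn)
    return sens, spec, dor

# Hypothetical 2x2 counts (TP, FP, FN, TN) extracted from three studies.
studies = [(45, 5, 3, 47), (88, 9, 12, 91), (30, 2, 4, 60)]

log_dors = []
for tp, fp, fn, tn in studies:
    sens, spec, dor = diagnostic_metrics(tp, fp, fn, tn)
    print(f"sens={sens:.2f} spec={spec:.2f} DOR={dor:.1f}")
    log_dors.append(np.log(dor))

# Crude summary on the log scale; proper meta-analysis weights studies by
# inverse variance and models between-study heterogeneity.
print("crude pooled DOR:", round(float(np.exp(np.mean(log_dors))), 1))
```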
Affiliation(s)
- Rata Rokhshad
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group, AI On Health, Berlin, Germany
- Seyyede Niloufar Salehi
- Executive Secretary of Research Committee, Board Director of Scientific Society, Dental Faculty, Azad University, Tehran, Iran
- Amirmohammad Yavari
- Student Research Committee, School of Dentistry, Isfahan University of Medical Sciences, Isfahan, Iran
- Parnian Shobeiri
- School of Medicine, Tehran University of Medical Science, Tehran, Iran
- Mahdieh Esmaeili
- Faculty of Dentistry, Tehran Medical Sciences, Islamic Azad University, Tehran, Iran
- Nisha Manila
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group, AI On Health, Berlin, Germany
- Department of Diagnostic Sciences, Louisiana State University Health Science Center School of Dentistry, Louisiana, USA
- Saeed Reza Motamedian
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group, AI On Health, Berlin, Germany
- Dentofacial Deformities Research Center, Research Institute of Dental Sciences & Department of Orthodontics, School of Dentistry, Shahid Beheshti University of Medical Sciences, Daneshjou Blvd, Tehran, Iran
- Hossein Mohammad-Rahimi
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group, AI On Health, Berlin, Germany

3. Liu B, Li J, Yang X, Chen F, Zhang Y, Li H. Diagnosis of primary clear cell carcinoma of the liver based on Faster region-based convolutional neural network. Chin Med J (Engl) 2023; 136:2706-2711. [PMID: 37882066] [PMCID: PMC10684187] [DOI: 10.1097/cm9.0000000000002853]
Abstract
BACKGROUND Distinguishing between primary clear cell carcinoma of the liver (PCCCL) and common hepatocellular carcinoma (CHCC) through traditional inspection methods before the operation is difficult. This study aimed to establish a Faster region-based convolutional neural network (RCNN) model for the accurate differential diagnosis of PCCCL and CHCC. METHODS In this study, we collected the data of 62 patients with PCCCL and 1079 patients with CHCC in Beijing YouAn Hospital from June 2012 to May 2020. A total of 109 patients with CHCC and 42 patients with PCCCL were randomly divided into the training validation set and the test set in a ratio of 4:1. The Faster RCNN was used for deep learning of patients' data in the training validation set to establish a convolutional neural network model that distinguishes PCCCL and CHCC. The accuracy, average precision, and recall of the model for diagnosing PCCCL and CHCC were used to evaluate the detection performance of the Faster RCNN algorithm. RESULTS A total of 4392 images of 121 patients (1032 images of 33 patients with PCCCL and 3360 images of 88 patients with CHCC) were used in the training validation set for deep learning and establishing the model, and 1072 images of 30 patients (320 images of nine patients with PCCCL and 752 images of 21 patients with CHCC) were used to test the model. The accuracy of the model for accurately diagnosing PCCCL and CHCC was 0.962 (95% confidence interval [CI]: 0.931-0.992). The average precision of the model for diagnosing PCCCL was 0.908 (95% CI: 0.823-0.993) and that for diagnosing CHCC was 0.907 (95% CI: 0.823-0.993). The recall of the model for diagnosing PCCCL was 0.951 (95% CI: 0.916-0.985) and that for diagnosing CHCC was 0.960 (95% CI: 0.854-0.962). The model took an average of 4 s per patient to make a diagnosis. CONCLUSION The Faster RCNN model can accurately distinguish PCCCL and CHCC. This model could help clinicians make appropriate treatment plans for patients with PCCCL or CHCC.
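The abstract does not include code; the following is a minimal sketch, assuming a torchvision-style setup, of how a two-class (PCCCL vs CHCC, plus background) Faster R-CNN detector can be instantiated and run on a dummy image. It is illustrative only and is not the authors' implementation; the class count and image size are assumptions.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Two lesion classes (PCCCL, CHCC) plus background.
num_classes = 3

# Start from a COCO-pretrained detector and swap in a new box predictor.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

model.eval()
with torch.no_grad():
    # One dummy 3-channel image standing in for a preprocessed CT/MR slice.
    image = torch.rand(3, 512, 512)
    prediction = model([image])[0]  # dict with boxes, labels, scores
    print(prediction["boxes"].shape, prediction["labels"], prediction["scores"])
```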
Affiliation(s)
- Bin Liu
- Department of Radiology, Beijing YouAn Hospital Capital Medical University, Beijing 100069, China
- Department of Radiology, Civil Aviation General Hospital, Beijing 100123, China
- Jianfei Li
- Extenics Specialized Committee, Chinese Association of Artificial Intelligence, Beijing 100876, China
- Xue Yang
- Department of Radiology, Beijing YouAn Hospital Capital Medical University, Beijing 100069, China
- Feng Chen
- Department of Radiology, Beijing YouAn Hospital Capital Medical University, Beijing 100069, China
- Yanyan Zhang
- Department of Radiology, Beijing YouAn Hospital Capital Medical University, Beijing 100069, China
- Hongjun Li
- Department of Radiology, Beijing YouAn Hospital Capital Medical University, Beijing 100069, China

4. Hosseini F, Asadi F, Emami H, Harari RE. Machine learning applications for early detection of esophageal cancer: a systematic review. BMC Med Inform Decis Mak 2023; 23:124. [PMID: 37460991] [PMCID: PMC10351192] [DOI: 10.1186/s12911-023-02235-y]
Abstract
INTRODUCTION Esophageal cancer (EC) is a significant global health problem, ranking an estimated seventh in incidence and sixth in mortality among cancers. Timely diagnosis and treatment are critical for improving patients' outcomes, as over 40% of patients with EC are diagnosed after metastasis. Recent advances in machine learning (ML) techniques, particularly in computer vision, have demonstrated promising applications in medical image processing, assisting clinicians in making more accurate and faster diagnostic decisions. Given the significance of early detection of EC, this systematic review aims to summarize and discuss the current state of research on ML-based methods for the early detection of EC. METHODS We conducted a comprehensive systematic search of five databases (PubMed, Scopus, Web of Science, Wiley, and IEEE) using search terms such as "ML", "Deep Learning (DL)", "Neural Networks (NN)", "Esophagus", "EC", and "Early Detection". After applying inclusion and exclusion criteria, 31 articles were retained for full review. RESULTS The results of this review highlight the potential of ML-based methods in the early detection of EC. The average accuracy of the reviewed methods in the analysis of endoscopic and computed tomography (CT) images of the esophagus was over 89%, indicating a high impact on early detection of EC. Additionally, white light imaging (WLI) accounted for the highest percentage of clinical images used for ML-based early detection of EC. Among all ML techniques, methods based on convolutional neural networks (CNN) achieved higher accuracy and sensitivity in the early detection of EC compared to other methods. CONCLUSION Our findings suggest that ML methods may improve accuracy in the early detection of EC, potentially supporting radiologists, endoscopists, and pathologists in diagnosis and treatment planning. However, the current literature is limited, and more studies are needed to investigate the clinical applications of these methods in early detection of EC. Furthermore, many studies suffer from class imbalance and biases, highlighting the need for validation of detection algorithms across organizations in longitudinal studies.
Affiliation(s)
- Farhang Hosseini
- Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Farkhondeh Asadi
- Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Hassan Emami
- Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran

5. Xu X, Jiang F, Guo Y, Chen H, Qian J, Wu L, Xie D, Chen G. Clinical-Pathological Characteristics of Adenosquamous Esophageal Carcinoma: A Propensity-Score-Matching Study. J Pers Med 2023; 13:468. [PMID: 36983650] [PMCID: PMC10057829] [DOI: 10.3390/jpm13030468]
Abstract
There are few studies on esophageal adenosquamous carcinoma (ADSC). Our study intended to investigate the clinical and survival features of ADSC. We included esophageal cancer (EC) data from the Surveillance, Epidemiology, and End Results program database to explore clinical and survival traits. Propensity score matching (PSM), the multivariate Cox regression model, and survival curves were used in this study. A total of 137 patients with ADSC were included in our analysis. The proportion of ADSC within the EC cohort declined from 2004 to 2018. The results indicated no significant difference in survival between the ADSC and squamous cell carcinoma (SCC) groups (PSM-adjusted HR = 1.249, P = 0.127). However, the survival rate of the ADSC group was significantly worse than that of the adenocarcinoma (ADC) group (PSM-adjusted HR = 1.497, P = 0.007). For the ADSC group, treatment combined with surgery yielded a higher survival rate than other treatment methods (all P < 0.001). Surgical resection, radiotherapy, and chemotherapy were independent protective prognostic factors (all P < 0.05). In summary, the proportion of ADSC declined from 2004 to 2018; the prognosis of ADSC is not significantly different from that of SCC but is worse than that of ADC; and surgery, radiotherapy, and chemotherapy could improve patient prognosis. Comprehensive treatment centered on surgery is more beneficial for some patients.
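As a rough illustration of the statistical workflow described (propensity score matching followed by a multivariate Cox model), the sketch below uses a synthetic SEER-like table with hypothetical column names; it is not the authors' analysis code, and the matching scheme (1:1 nearest neighbor) is an assumption.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from lifelines import CoxPHFitter

# Hypothetical cohort: one row per patient with histology (1 = ADSC, 0 = SCC),
# covariates, follow-up time, and a death indicator.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "adsc": rng.integers(0, 2, n),
    "age": rng.normal(65, 9, n),
    "stage": rng.integers(1, 5, n),
    "surgery": rng.integers(0, 2, n),
    "time_months": rng.exponential(30, n),
    "death": rng.integers(0, 2, n),
})

# 1) Propensity score: probability of being in the ADSC group given covariates.
covars = ["age", "stage", "surgery"]
ps_model = LogisticRegression(max_iter=1000).fit(df[covars], df["adsc"])
df["ps"] = ps_model.predict_proba(df[covars])[:, 1]

# 2) 1:1 nearest-neighbor matching on the propensity score.
treated = df[df["adsc"] == 1]
control = df[df["adsc"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched = pd.concat([treated, control.iloc[idx.ravel()]]).reset_index(drop=True)

# 3) Cox proportional hazards model on the matched cohort.
cph = CoxPHFitter()
cph.fit(matched[["adsc", "age", "stage", "surgery", "time_months", "death"]],
        duration_col="time_months", event_col="death")
print(cph.summary[["exp(coef)", "p"]])  # exp(coef) is the hazard ratio
```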
Affiliation(s)
- Xinxin Xu
- Department of Gastroenterology, The Affiliated Clinical College of Xuzhou Medical University, Xuzhou 221002, China
- Feng Jiang
- Department of Oncology, Zhongda Hospital, Southeast University, Nanjing 210009, China
- Yihan Guo
- Department of Scientific Research, Shaanxi Academy of Social Sciences, Xi’an 710061, China
- Hu Chen
- Department of Gastroenterology, The Affiliated Clinical College of Xuzhou Medical University, Xuzhou 221002, China
- Jiayi Qian
- Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University, Shanghai 200092, China
- Leilei Wu
- Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University, Shanghai 200092, China
- Dong Xie
- Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University, Shanghai 200092, China
- Guangxia Chen
- Department of Gastroenterology, The Affiliated Xuzhou Municipal Hospital of Xuzhou Medical University, Xuzhou 221002, China

6. Li M, Chen C, Cao Y, Zhou P, Deng X, Liu P, Wang Y, Lv X, Chen C. CIABNet: Category imbalance attention block network for the classification of multi-differentiated types of esophageal cancer. Med Phys 2023; 50:1507-1527. [PMID: 36272103] [DOI: 10.1002/mp.16067]
Abstract
BACKGROUND Esophageal cancer has become one of the important cancers that seriously threaten human life and health, and its incidence and mortality remain among the highest of all malignant tumors. Histopathological image analysis is the gold standard for diagnosing different differentiation types of esophageal cancer. PURPOSE The grading accuracy and interpretability of auxiliary diagnostic models for esophageal cancer are seriously affected by small interclass differences, imbalanced data distribution, and poor model interpretability. Therefore, we focused on developing the category imbalance attention block network (CIABNet) model to address these problems. METHODS First, the quantitative metrics and model visualization results are integrated to transfer knowledge from the source domain images to better identify the regions of interest (ROI) in the target domain of esophageal cancer. Second, in order to attend to the subtle interclass differences, we propose the concatenate fusion attention block, which can focus simultaneously on the contextual local feature relationships and the changes in channel attention weights among different regions. Third, we propose a category imbalance attention module, which treats each esophageal cancer differentiation class fairly by aggregating information of different intensities at multiple scales and explores more representative regional features for each class, effectively mitigating the negative impact of category imbalance. Finally, we use feature map visualization to interpret whether the ROIs are the same or similar between the model and pathologists, thus improving the interpretability of the model. RESULTS The experimental results show that the CIABNet model outperforms other state-of-the-art models, achieving the most advanced results in classifying the differentiation types of esophageal cancer with an average classification accuracy of 92.24%, an average precision of 93.52%, an average recall of 90.31%, an average F1 score of 91.73%, and an average AUC of 97.43%. In addition, the ROIs identified by the CIABNet model are essentially similar or identical to those identified by pathologists in histopathological images of esophageal cancer. CONCLUSIONS Our experimental results prove that our proposed computer-aided diagnostic algorithm shows great potential for histopathological images of multi-differentiated types of esophageal cancer.
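CIABNet's attention blocks are not reproduced here; as a hedged illustration of the underlying idea of re-weighting feature channels by learned importance, the sketch below implements a generic squeeze-and-excitation-style channel attention module in PyTorch. The channel count and reduction ratio are assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention.

    Illustrative only: CIABNet's concatenate fusion attention and category
    imbalance attention blocks are more elaborate, but they rest on the same
    idea of re-weighting feature channels by learned importance.
    """
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global context per channel
        self.fc = nn.Sequential(                 # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                              # re-weighted feature map

feats = torch.randn(2, 64, 56, 56)               # dummy histology feature map
print(ChannelAttention(64)(feats).shape)          # torch.Size([2, 64, 56, 56])
```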
Affiliation(s)
- Min Li
- College of Information Science and Engineering, Xinjiang University, Urumqi, China
- Key Laboratory of Signal Detection and Processing, Xinjiang University, Urumqi, China
- Chen Chen
- College of Information Science and Engineering, Xinjiang University, Urumqi, China
- Xinjiang Cloud Computing Application Laboratory, Karamay, China
- Yanzhen Cao
- Department of Pathology, The Affiliated Tumor Hospital of Xinjiang Medical University, Urumqi, China
- Panyun Zhou
- College of Software, Xinjiang University, Urumqi, China
- Xin Deng
- College of Software, Xinjiang University, Urumqi, China
- Pei Liu
- College of Information Science and Engineering, Xinjiang University, Urumqi, China
- Yunling Wang
- The First Affiliated Hospital of Xinjiang Medical University, Urumqi, China
- Xiaoyi Lv
- College of Information Science and Engineering, Xinjiang University, Urumqi, China
- Key Laboratory of Signal Detection and Processing, Xinjiang University, Urumqi, China
- Xinjiang Cloud Computing Application Laboratory, Karamay, China
- College of Software, Xinjiang University, Urumqi, China
- Key Laboratory of Software Engineering Technology, Xinjiang University, Urumqi, China
- Cheng Chen
- College of Software, Xinjiang University, Urumqi, China

7. Zhu R, Zou H, Li Z, Ni R. Apple-Net: A Model Based on Improved YOLOv5 to Detect the Apple Leaf Diseases. Plants (Basel) 2022; 12:169. [PMID: 36616300] [PMCID: PMC9824080] [DOI: 10.3390/plants12010169]
Abstract
Effective identification of apple leaf diseases can reduce pesticide spraying and improve apple fruit yield, which is of great significance to agriculture. However, existing apple leaf disease detection models lack consideration of disease diversity and accuracy, which hinders the application of intelligent agriculture in the apple industry. In this paper, we explore an accurate and robust detection model for apple leaf disease called Apple-Net, improving the conventional YOLOv5 network by adding the Feature Enhancement Module (FEM) and Coordinate Attention (CA) methods. The combination of the feature pyramid network and path aggregation network (PAN) in YOLOv5 can obtain richer semantic information and enhance the semantic information of low-level feature maps, but it lacks multi-scale information in its output. Thus, the FEM was adopted to improve the output of multi-scale information, and CA was used to improve detection efficiency. The experimental results show that Apple-Net achieves a higher mAP@0.5 (95.9%) and precision (93.1%) than four classic target detection models, proving that Apple-Net achieves more competitive results on apple leaf disease identification.
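The coordinate attention (CA) block added to YOLOv5 factorizes global pooling into two direction-aware pooling steps so that the learned channel weights retain positional information. The sketch below is a minimal PyTorch rendering of that idea (Hou et al.'s CA design), not the Apple-Net code; the channel count and reduction ratio are assumptions.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Minimal coordinate attention block: pool along H and W separately,
    fuse, then produce direction-aware channel weights."""
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        # Directional pooling keeps positional information along one axis.
        x_h = x.mean(dim=3, keepdim=True)                       # (b, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (b, c, w, 1)
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                   # (b, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (b, c, 1, w)
        return x * a_h * a_w

feat = torch.randn(1, 128, 40, 40)   # dummy YOLOv5 neck feature map
print(CoordinateAttention(128)(feat).shape)
```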

8. Alharbe NR, Munshi RM, Khayyat MM, Khayyat MM, Abdalaha Hamza SH, Aljohani AA. Atom Search Optimization with the Deep Transfer Learning-Driven Esophageal Cancer Classification Model. Comput Intell Neurosci 2022; 2022:4629178. [PMID: 36156959] [PMCID: PMC9507698] [DOI: 10.1155/2022/4629178]
Abstract
Esophageal cancer (EC) is a commonly occurring malignant tumor that significantly affects human health. Earlier recognition and classification of EC or premalignant lesions can enable highly effective targeted intervention. Accurate detection and classification of distinct stages of EC support effective precision therapy planning and improve the 5-year survival rate. Automated recognition of EC can aid physicians in improving diagnostic performance and accuracy. However, the classification of EC is challenging due to identical endoscopic features, like mucosal erosion, hyperemia, and roughness. Recent developments in deep learning (DL) and computer-aided diagnosis (CAD) models have been useful for designing accurate EC classification models. In this aspect, this study develops an atom search optimization (ASO) with deep transfer learning-driven EC classification (ASODTL-ECC) model. The presented ASODTL-ECC model mainly examines medical images for the existence of EC in a timely and accurate manner. To do so, the presented ASODTL-ECC model employs Gaussian filtering (GF) as a preprocessing stage to enhance image quality. In addition, a deep convolutional neural network (DCNN)-based residual network (ResNet) model is applied as a feature extraction approach. Besides, ASO with an extreme learning machine (ELM) model is utilized for identifying the presence of EC, showing the novelty of the work. The performance of the ASODTL-ECC model is assessed and compared with existing models on several medical images. The experimental results pointed out the improved performance of the ASODTL-ECC model over recent approaches.
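The pipeline described, pretrained ResNet features followed by an extreme learning machine classifier, can be sketched as below. The ASO hyperparameter search is omitted, the images and labels are dummy placeholders, and this is an assumption-laden illustration rather than the ASODTL-ECC implementation.

```python
import numpy as np
import torch
import torchvision

# --- 1) Transfer-learned deep features from a pretrained ResNet ---
resnet = torchvision.models.resnet50(weights="DEFAULT")
resnet.fc = torch.nn.Identity()              # keep the 2048-d pooled features
resnet.eval()

images = torch.rand(8, 3, 224, 224)          # dummy, already-preprocessed images
with torch.no_grad():
    feats = resnet(images).numpy()           # shape (8, 2048)

# --- 2) Extreme learning machine: random hidden layer, closed-form readout ---
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=8)          # dummy EC / non-EC labels
n_hidden = 256
W = rng.normal(size=(feats.shape[1], n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(feats @ W + b)                   # hidden-layer activations
T = np.eye(2)[labels]                        # one-hot targets
beta = np.linalg.pinv(H) @ T                 # output weights via pseudoinverse
pred = (H @ beta).argmax(axis=1)
print("training accuracy:", (pred == labels).mean())
```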
Affiliation(s)
- Raafat M. Munshi
- Department of Medical Laboratory Technology (MLT), Faculty of Applied Medical Sciences, King Abdulaziz University, Rabigh, Saudi Arabia
- Manal M. Khayyat
- Department of Information Systems, College of Computers and Information Systems, Umm Al-Qura University, Makkah, Saudi Arabia
- Mashael M. Khayyat
- Department of Information Systems and Technology, Faculty of Computer Science and Engineering, University of Jeddah, Jeddah, Saudi Arabia
- Saadia Hassan Abdalaha Hamza
- Department of Computer Science, College of Science and Humanities in Al-Sulail, Prince Sattam Bin Abdulaziz University, Saudi Arabia

9. He Y, Tan J, Han X. High-Resolution Computer Tomography Image Features of Lungs for Patients with Type 2 Diabetes under the Faster-Region Recurrent Convolutional Neural Network Algorithm. Comput Math Methods Med 2022; 2022:4147365. [PMID: 35509859] [PMCID: PMC9061003] [DOI: 10.1155/2022/4147365]
Abstract
The objective of this study was to adopt high-resolution computed tomography (HRCT) based on the faster-region recurrent convolutional neural network (Faster-RCNN) algorithm to evaluate lung infection in patients with type 2 diabetes, so as to analyze the value of imaging features in assessing pulmonary disease in type 2 diabetes. In this study, 176 patients with type 2 diabetes were selected as the study subjects, and they were divided into different groups based on gender, course of disease, age, glycosylated hemoglobin (HbA1c) level, 2-h postprandial C peptide (2 h C-P), fasting C peptide (FC-P), and complications. All subjects underwent HRCT scanning, and a Faster-RCNN algorithm model was built to obtain the imaging features. The relationships between HRCT imaging features and 2 h C-P, FC-P, HbA1c, gender, course of disease, age, and complications were analyzed comprehensively. The results showed no significant differences in HRCT scores between male and female patients, patients of various ages, or patients with different HbA1c contents (P > 0.05). As the course of disease lengthened and complications increased, the HRCT scores of patients increased significantly (P < 0.05). The HRCT score decreased significantly with increasing postprandial 2 h C-P and FC-P contents (P < 0.05). In addition, the results of the Spearman rank correlation analysis showed that the course of disease and complications were positively correlated with the HRCT scores, while the postprandial 2 h C-P and FC-P levels were negatively correlated with the HRCT scores. Receiver operating characteristic (ROC) analysis showed that the accuracy, specificity, and sensitivity of HRCT imaging based on the Faster-RCNN algorithm were 90.12%, 90.43%, and 83.64%, respectively, in diagnosing lung infection in patients with type 2 diabetes. In summary, HRCT imaging features based on the Faster-RCNN algorithm can provide effective reference information for the diagnosis and condition assessment of lung infection in patients with type 2 diabetes.
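The correlation and ROC analyses reported above can be reproduced in outline with standard scientific Python tooling; the sketch below uses synthetic values in place of the study data, and all variable names are hypothetical.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)

# Hypothetical per-patient values standing in for the study variables.
hrct_score = rng.integers(0, 20, 176)                      # HRCT severity score
disease_years = hrct_score * 0.4 + rng.normal(0, 2, 176)   # course of disease
infection = (hrct_score + rng.normal(0, 3, 176) > 10).astype(int)  # infection label

# Spearman rank correlation between disease course and HRCT score.
rho, p = spearmanr(disease_years, hrct_score)
print(f"Spearman rho={rho:.2f}, p={p:.3g}")

# ROC analysis of the HRCT score as a predictor of lung infection.
auc = roc_auc_score(infection, hrct_score)
fpr, tpr, thresholds = roc_curve(infection, hrct_score)
youden = tpr - fpr                                         # Youden index per threshold
best = np.argmax(youden)
print(f"AUC={auc:.2f}; best cutoff={thresholds[best]}, "
      f"sensitivity={tpr[best]:.2f}, specificity={1 - fpr[best]:.2f}")
```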
Affiliation(s)
- Yumei He
- Department of General Medicine, Affiliated Hospital of Yan'an University, Yan'an, 716000 Shaanxi, China
- Juan Tan
- Department of Traditional Chinese Medicine, Affiliated Hospital of Yan'an University, Yan'an, 716000 Shaanxi, China
- Xiuping Han
- Department of General Medicine, Affiliated Hospital of Yan'an University, Yan'an, 716000 Shaanxi, China