1. Yalon M, Navin PJ. Quantitative imaging biomarkers in the assessment of adrenal nodules. Abdom Radiol (NY) 2025; 50:2169-2180. PMID: 39532734. DOI: 10.1007/s00261-024-04671-4.
Abstract
Incidental adrenal nodules present a diagnostic conundrum. Current imaging techniques have demonstrated success in identifying lipid-rich adenomas; however, they are limited in detecting malignancy and in assessing functionality. Imaging biomarkers are objective characteristics derived from imaging that may measure normal or pathological processes or assess response to therapy. Recent attempts have been made to standardize the measurement of the most common imaging biomarkers used in assessing the adrenal gland, offering a path to more uniform research in this area. The aim of this review is to describe the imaging biomarkers used in adrenal imaging and to assess the evidence supporting their use.

2. Shi YH, Liu JL, Cheng CC, Li WL, Sun H, Zhou XL, Wei H, Fei SJ. Construction and validation of machine learning-based predictive model for colorectal polyp recurrence one year after endoscopic mucosal resection. World J Gastroenterol 2025; 31:102387. PMID: 40124266. PMCID: PMC11924002. DOI: 10.3748/wjg.v31.i11.102387.
Abstract
BACKGROUND Colorectal polyps are precancerous lesions that can progress to colorectal cancer. Early detection and resection of colorectal polyps can effectively reduce the mortality of colorectal cancer. Endoscopic mucosal resection (EMR) is a common polypectomy procedure in clinical practice, but it has a high postoperative recurrence rate. Currently, there is no predictive model for the recurrence of colorectal polyps after EMR. AIM To construct and validate a machine learning (ML) model for predicting the risk of colorectal polyp recurrence one year after EMR. METHODS This study retrospectively collected data from 1694 patients at three medical centers in Xuzhou. Additionally, a total of 166 patients were enrolled to form a prospective validation set. Feature variable screening was conducted using univariate and multivariate logistic regression analyses, and five ML algorithms were used to construct the predictive models. The optimal models were evaluated based on different performance metrics. Decision curve analysis (DCA) and SHapley Additive exPlanation (SHAP) analysis were performed to assess clinical applicability and predictor importance. RESULTS Multivariate logistic regression analysis identified 8 independent risk factors for colorectal polyp recurrence one year after EMR (P < 0.05). Among the models, eXtreme Gradient Boosting (XGBoost) demonstrated the highest area under the curve (AUC) in the training set, internal validation set, and prospective validation set, with AUCs of 0.909 (95%CI: 0.89-0.92), 0.921 (95%CI: 0.90-0.94), and 0.963 (95%CI: 0.94-0.99), respectively. DCA indicated favorable clinical utility for the XGBoost model. SHAP analysis identified smoking history, family history, and age as the top three most important predictors in the model. CONCLUSION The XGBoost model had the best predictive performance and can assist clinicians in providing individualized colonoscopy follow-up recommendations.
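As a rough illustration of the kind of pipeline summarized above (gradient-boosted classifier scored by AUC and interpreted with SHAP), here is a minimal sketch. The synthetic data and most predictor names are placeholders, not the authors' cohort or code; only smoking history, family history, and age are taken from the abstract.

```python
# Minimal sketch of an XGBoost + AUC + SHAP workflow of the kind described above.
# The data are synthetic placeholders, not the study's dataset.
import numpy as np
import xgboost as xgb
import shap
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
feature_names = ["smoking_history", "family_history", "age",
                 "predictor_4", "predictor_5", "predictor_6",
                 "predictor_7", "predictor_8"]          # 8 candidate risk factors
X = rng.normal(size=(1694, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=1694) > 0).astype(int)   # toy recurrence label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = xgb.XGBClassifier(n_estimators=300, max_depth=3, learning_rate=0.1,
                          eval_metric="logloss")
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"validation AUC: {auc:.3f}")

# SHAP ranks how strongly each predictor drives the model output
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
ranking = np.argsort(np.abs(shap_values).mean(axis=0))[::-1]
print("predictors by mean |SHAP|:", [feature_names[i] for i in ranking])
```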
Affiliation(s)
- Yi-Heng Shi: Department of Gastroenterology, The Affiliated Hospital of Xuzhou Medical University, Xuzhou 221002, Jiangsu Province, China; The First Clinical Medical College of Xuzhou Medical University, Xuzhou 221002, Jiangsu Province, China
- Jun-Liang Liu: Department of Gastroenterology, The Affiliated Hospital of Xuzhou Medical University, Xuzhou 221002, Jiangsu Province, China
- Cong-Cong Cheng: Department of Gastroenterology, The Affiliated Hospital of Xuzhou Medical University, Xuzhou 221002, Jiangsu Province, China; The First Clinical Medical College of Xuzhou Medical University, Xuzhou 221002, Jiangsu Province, China
- Wen-Ling Li: Department of Gastroenterology, The Affiliated Hospital of Xuzhou Medical University, Xuzhou 221002, Jiangsu Province, China; The First Clinical Medical College of Xuzhou Medical University, Xuzhou 221002, Jiangsu Province, China
- Han Sun: Department of Gastroenterology, Xuzhou Central Hospital, The Affiliated Xuzhou Hospital of Medical College of Southeast University, Xuzhou 221009, Jiangsu Province, China
- Xi-Liang Zhou: Department of Gastroenterology, Xuzhou Central Hospital, The Affiliated Xuzhou Hospital of Medical College of Southeast University, Xuzhou 221009, Jiangsu Province, China
- Hong Wei: Department of Gastroenterology, Xuzhou New Health Hospital, North Hospital of Xuzhou Cancer Hospital, Xuzhou 221007, Jiangsu Province, China
- Su-Juan Fei: Department of Gastroenterology, The Affiliated Hospital of Xuzhou Medical University, Xuzhou 221002, Jiangsu Province, China

3. Malayeri AA, Turkbey B. Unveiling the Future: A Deep Learning Model for Accurate Detection of Adrenal Nodules. Radiology 2025; 314:e250387. PMID: 40035670. PMCID: PMC11950882. DOI: 10.1148/radiol.250387.
Affiliation(s)
- Ashkan A. Malayeri: Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, 10 Center Dr, 1C352, Bethesda, MD 20892
- Baris Turkbey: Department of Artificial Intelligence Resource, National Cancer Institute Center for Cancer Research, Bethesda, MD

4. Ahn CH, Kim T, Jo K, Park SS, Kim MJ, Yoon JW, Kim TM, Kim SY, Kim JH, Choo J. Two-Stage Deep Learning Model for Adrenal Nodule Detection on CT Images: A Retrospective Study. Radiology 2025; 314:e231650. PMID: 40035671. DOI: 10.1148/radiol.231650.
Abstract
Background The detection and classification of adrenal nodules are crucial for their management. Purpose To develop and test a deep learning model to automatically depict adrenal nodules on abdominal CT images and to simulate triaging performance in combination with human interpretation. Materials and Methods This retrospective study (January 2000-December 2020) used an internal dataset enriched with adrenal nodules for model training and testing and an external dataset reflecting real-world practice for further simulated testing in combination with human interpretation. The deep learning model had a two-stage architecture, a sequential detection and segmentation model, trained separately for the right and left adrenal glands. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC) for nodule detection and intersection over union for nodule segmentation. Results Of a total of 995 patients in the internal dataset, the AUCs for detecting right and left adrenal nodules in internal test set 1 (n = 153) were 0.98 (95% CI: 0.96, 1.00; P < .001) and 0.93 (95% CI: 0.87, 0.98; P < .001), respectively. These values were 0.98 (95% CI: 0.97, 0.99; P < .001) and 0.97 (95% CI: 0.96, 0.97; P < .001) in the external test set (n = 12 080) and 0.90 (95% CI: 0.84, 0.95; P < .001) and 0.89 (95% CI: 0.85, 0.94; P < .001) in internal test set 2 (n = 1214). The median intersection over union was 0.64 (IQR, 0.43-0.71) and 0.53 (IQR, 0.40-0.64) for right and left adrenal nodules, respectively. Combining the model with human interpretation achieved high sensitivity (up to 100%) and specificity (up to 99%), with triaging performance from 0.77 to 0.98. Conclusion The deep learning model demonstrated high performance and has the potential to improve detection of incidental adrenal nodules. © RSNA, 2025 Supplemental material is available for this article. See also the editorial by Malayeri and Turkbey in this issue.
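For readers less familiar with the two evaluation metrics reported above, the snippet below shows an illustrative computation of per-gland detection AUC and segmentation intersection over union (IoU). The scores and masks are toy placeholders, not study data, and the detection/segmentation networks themselves are not reproduced.

```python
# Illustrative computation of the evaluation metrics named above: AUC for nodule
# detection and intersection over union (IoU) for nodule segmentation.
import numpy as np
from sklearn.metrics import roc_auc_score

def iou(pred_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """IoU between two binary 3D masks."""
    pred, ref = pred_mask.astype(bool), ref_mask.astype(bool)
    union = np.logical_or(pred, ref).sum()
    return float(np.logical_and(pred, ref).sum() / union) if union else 1.0

# per-gland nodule probabilities from a detection stage vs. reference labels
y_true = np.array([0, 1, 1, 0, 1, 0])
y_prob = np.array([0.10, 0.85, 0.60, 0.30, 0.95, 0.05])
print("detection AUC:", roc_auc_score(y_true, y_prob))

# toy 3D masks standing in for the segmentation output and the reference nodule
ref = np.zeros((16, 16, 16), bool); ref[4:10, 4:10, 4:10] = True
pred = np.zeros((16, 16, 16), bool); pred[5:11, 4:10, 4:10] = True
print("segmentation IoU:", round(iou(pred, ref), 2))
```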
Affiliation(s)
- Chang Ho Ahn: Department of Internal Medicine, Seoul National University Hospital, Seoul National University College of Medicine, 101 Dae-hak ro, Seoul 03080, Republic of Korea; Department of Internal Medicine, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Taewoo Kim: Graduate School of Artificial Intelligence, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Kyungmin Jo: Graduate School of Artificial Intelligence, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Seung Shin Park: Department of Internal Medicine, Seoul National University Hospital, Seoul National University College of Medicine, 101 Dae-hak ro, Seoul 03080, Republic of Korea; Department of Internal Medicine, Seoul National University Hospital, Seoul, Republic of Korea
- Min Joo Kim: Department of Internal Medicine, Seoul National University Hospital, Seoul National University College of Medicine, 101 Dae-hak ro, Seoul 03080, Republic of Korea; Division of Endocrinology, Department of Internal Medicine, Healthcare System Gangnam Center, Healthcare Research Institute, Seoul National University Hospital, Seoul, Republic of Korea
- Ji Won Yoon: Department of Internal Medicine, Seoul National University Hospital, Seoul National University College of Medicine, 101 Dae-hak ro, Seoul 03080, Republic of Korea; Division of Endocrinology, Department of Internal Medicine, Healthcare System Gangnam Center, Healthcare Research Institute, Seoul National University Hospital, Seoul, Republic of Korea
- Taek Min Kim: Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Republic of Korea
- Sang Youn Kim: Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Republic of Korea
- Jung Hee Kim: Department of Internal Medicine, Seoul National University Hospital, Seoul National University College of Medicine, 101 Dae-hak ro, Seoul 03080, Republic of Korea; Department of Internal Medicine, Seoul National University Hospital, Seoul, Republic of Korea
- Jaegul Choo: Graduate School of Artificial Intelligence, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea

5. Li Y, Zhao Y, Yang P, Li C, Liu L, Zhao X, Tang H, Mao Y. Adrenal Volume Quantitative Visualization Tool by Multiple Parameters and an nnU-Net Deep Learning Automatic Segmentation Model. J Imaging Inform Med 2025; 38:47-59. PMID: 38955963. PMCID: PMC11811328. DOI: 10.1007/s10278-024-01158-y.
Abstract
Abnormalities in adrenal gland size may be associated with various diseases. Monitoring the volume of the adrenal gland can provide a quantitative imaging indicator for such conditions as adrenal hyperplasia, adrenal adenoma, and adrenal cortical adenocarcinoma. However, current adrenal gland segmentation models have notable limitations in sample selection and imaging parameters, particularly the lack of training on low-dose imaging protocols, which limits the generalization ability of the models and restricts their widespread application in routine clinical practice. We developed a fully automated adrenal gland volume quantification and visualization tool based on the no new U-Net (nnU-Net) deep learning segmentation framework to address these issues. We established this tool by using a large dataset with multiple parameters, machine types, radiation doses, slice thicknesses, scanning modes, phases, and adrenal gland morphologies to achieve high accuracy and broad adaptability. The tool can meet clinical needs such as screening, monitoring, and preoperative visualization assistance for adrenal gland diseases. Experimental results demonstrate that our model achieves an overall dice coefficient of 0.88 on all images and 0.87 on low-dose CT scans. Compared to other deep learning models and nnU-Net model tools, our model exhibits higher accuracy and broader adaptability in adrenal gland segmentation.
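As a rough sketch of the quantities such a tool reports, the snippet below computes a Dice coefficient for a predicted adrenal mask and converts the mask to a volume using the CT voxel spacing. The masks and spacing are illustrative assumptions; the nnU-Net inference step and the study data are not reproduced here.

```python
# Sketch of post-segmentation quantities: Dice against a reference mask and
# gland volume derived from voxel spacing. Inputs are illustrative assumptions.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return float(2.0 * np.logical_and(pred, ref).sum() / denom) if denom else 1.0

def volume_mm3(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Volume = number of labeled voxels x volume of one voxel (z, y, x spacing in mm)."""
    return float(mask.astype(bool).sum() * np.prod(spacing_mm))

ref = np.zeros((40, 64, 64), bool); ref[10:30, 20:40, 20:40] = True   # reference adrenal mask
pred = np.zeros((40, 64, 64), bool); pred[11:30, 20:40, 22:42] = True # predicted adrenal mask
print(f"Dice: {dice(pred, ref):.2f}")
print(f"adrenal volume: {volume_mm3(pred, (2.5, 0.7, 0.7)):.0f} mm^3")
```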
Affiliation(s)
- Yi Li: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, China
- Ping Yang: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, China
- Caihong Li: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, China
- Liu Liu: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, China
- Xiaofang Zhao: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, China
- Huali Tang: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, China
- Yun Mao: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, China

6. Chen Y, Zhang Y, Zhang X, Wang X. Characterization of adrenal glands on computed tomography with a 3D V-Net-based model. Insights Imaging 2025; 16:17. PMID: 39808346. PMCID: PMC11732807. DOI: 10.1186/s13244-025-01898-7.
Abstract
OBJECTIVES To evaluate the performance of a 3D V-Net-based segmentation model of adrenal lesions in characterizing adrenal glands as normal or abnormal. METHODS A total of 1086 CT image series with focal adrenal lesions were retrospectively collected, annotated, and used for the training of the adrenal lesion segmentation model. The dice similarity coefficient (DSC) of the test set was used to evaluate the segmentation performance. Another cohort, consisting of 959 patients with pathologically confirmed adrenal lesions (external validation dataset 1), was included for validation of the classification performance of this model. Then, another consecutive cohort of patients with a history of malignancy (N = 479) was used for validation in the screening population (external validation dataset 2). Metrics such as sensitivity and accuracy were used, and the performance of the model was compared with the radiology report in these validation settings. RESULTS The DSC of the test set of the segmentation model was 0.900 (0.810-0.965) (median (interquartile range)). The model showed sensitivities and accuracies of 99.7%, 98.3% and 87.2%, 62.2% in external validation datasets 1 and 2, respectively. It showed no significant difference compared with radiology reports in external validation dataset 1 and in the lesion-containing group of external validation dataset 2 (p = 1.000 and p > 0.05, respectively). CONCLUSION The 3D V-Net-based segmentation model of adrenal lesions can be used for the binary classification of adrenal glands. CRITICAL RELEVANCE STATEMENT A 3D V-Net-based segmentation model of adrenal lesions can be used for the detection of abnormalities of adrenal glands, with high accuracy in the pre-surgical setting as well as high sensitivity in the screening setting. KEY POINTS Adrenal lesions may be prone to inter-observer variability in routine diagnostic workflow. The study developed a 3D V-Net-based segmentation model of adrenal lesions with DSC 0.900 in the test set. The model showed high sensitivity and accuracy for abnormality detection in different settings.
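The sketch below illustrates one plausible way a lesion-segmentation output can be reduced to the binary normal/abnormal gland call evaluated above, together with the sensitivity and accuracy metrics. The voxel threshold, the toy masks, and the decision rule are assumptions for illustration, not the authors' exact post-processing.

```python
# Hedged sketch: flag a gland abnormal if the model segments more than a small
# number of lesion voxels, then score sensitivity and accuracy.
import numpy as np

def classify_gland(lesion_mask: np.ndarray, min_voxels: int = 10) -> int:
    """1 = abnormal (lesion segmented), 0 = normal. Threshold is an assumption."""
    return int(lesion_mask.astype(bool).sum() >= min_voxels)

def sensitivity_and_accuracy(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    sensitivity = tp / max(int(np.sum(y_true == 1)), 1)
    accuracy = float(np.mean(y_true == y_pred))
    return sensitivity, accuracy

# three toy glands: the second and third truly harbor lesions
lesion_masks = [np.zeros((32, 32, 32)), np.ones((8, 8, 8)), np.ones((4, 4, 4))]
y_true = [0, 1, 1]
y_pred = [classify_gland(m) for m in lesion_masks]
print("sensitivity, accuracy:", sensitivity_and_accuracy(y_true, y_pred))
```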
Affiliation(s)
- Yuanchong Chen: Department of Radiology, Peking University First Hospital, Beijing, 100034, China
- Yaofeng Zhang: Beijing Smart Tree Medical Technology Co. Ltd., Beijing, 100011, China
- Xiaodong Zhang: Department of Radiology, Peking University First Hospital, Beijing, 100034, China
- Xiaoying Wang: Department of Radiology, Peking University First Hospital, Beijing, 100034, China

7. Tüdös Z, Veverková L, Baxa J, Hartmann I, Čtvrtlík F. The current and upcoming era of radiomics in phaeochromocytoma and paraganglioma. Best Pract Res Clin Endocrinol Metab 2025; 39:101923. PMID: 39227277. DOI: 10.1016/j.beem.2024.101923.
Abstract
The topic of the diagnosis of phaeochromocytomas remains highly relevant because of advances in laboratory diagnostics, genetics, and therapeutic options and also the development of imaging methods. Computed tomography still represents an essential tool in clinical practice, especially in incidentally discovered adrenal masses; it allows morphological evaluation, including size, shape, necrosis, and unenhanced attenuation. More advanced post-processing tools to analyse digital images, such as texture analysis and radiomics, are currently being studied. Radiomic features utilise digital image pixels to calculate parameters and relations undetectable by the human eye. On the other hand, the amount of radiomic data requires massive computer capacity. Radiomics, together with machine learning and artificial intelligence in general, has the potential to improve not only the differential diagnosis but also the prediction of complications and therapy outcomes of phaeochromocytomas in the future. Currently, the potential of radiomics and machine learning does not match expectations and awaits its fulfilment.
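The radiomic features discussed above are typically computed from the CT voxels inside a lesion mask. Below is a minimal sketch using the open-source pyradiomics package; the toolkit choice, the synthetic image, and the mask are assumptions made for illustration and are not prescribed by the review.

```python
# Minimal radiomics feature-extraction sketch with pyradiomics (an assumed toolkit).
# The image and lesion mask are synthetic stand-ins for a CT volume and an adrenal ROI.
import numpy as np
import SimpleITK as sitk
from radiomics import featureextractor

rng = np.random.default_rng(0)
image = sitk.GetImageFromArray(rng.normal(40, 15, size=(20, 40, 40)).astype(np.float32))
mask_arr = np.zeros((20, 40, 40), dtype=np.uint8)
mask_arr[5:15, 10:30, 10:30] = 1                      # lesion label = 1
mask = sitk.GetImageFromArray(mask_arr)

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("firstorder")      # intensity statistics
extractor.enableFeatureClassByName("glcm")            # texture (co-occurrence) features
features = extractor.execute(image, mask)

texture = {k: float(v) for k, v in features.items() if k.startswith("original_glcm")}
print(f"{len(texture)} GLCM texture features, e.g.:", list(texture.items())[:3])
```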
Affiliation(s)
- Zbyněk Tüdös: Department of Radiology, University Hospital and Faculty of Medicine and Dentistry, Palacky University, Olomouc, Czech Republic
- Lucia Veverková: Department of Radiology, University Hospital and Faculty of Medicine and Dentistry, Palacky University, Olomouc, Czech Republic
- Jan Baxa: Department of Imaging Methods, Faculty Hospital Pilsen and Faculty of Medicine in Pilsen, Charles University, Czech Republic
- Igor Hartmann: Department of Urology, University Hospital and Faculty of Medicine and Dentistry, Palacky University, Olomouc, Czech Republic
- Filip Čtvrtlík: Department of Radiology, University Hospital and Faculty of Medicine and Dentistry, Palacky University, Olomouc, Czech Republic

8. Liu L, Liu J, Santra B, Parnell C, Mukherjee P, Mathai T, Zhu Y, Anand A, Summers RM. Utilizing domain knowledge to improve the classification of intravenous contrast phase of CT scans. Comput Med Imaging Graph 2025; 119:102458. PMID: 39740481. DOI: 10.1016/j.compmedimag.2024.102458.
Abstract
Multiple intravenous contrast phases of CT scans are commonly used in clinical practice to facilitate disease diagnosis. However, contrast phase information is commonly missing or incorrect due to discrepancies in CT series descriptions and imaging practices. This work aims to develop a classification algorithm to automatically determine the contrast phase of a CT scan. We hypothesize that the image intensities of key organs (e.g., aorta, inferior vena cava) affected by contrast enhancement carry the inherent feature information needed to determine the contrast phase. These organs are segmented by TotalSegmentator, and intensity features are then generated for each segmented organ region. Two internal datasets and one external dataset were collected to validate the classification accuracy. In comparison with the baseline ResNet classification method that did not make use of key organ features, the proposed method achieved a comparable accuracy of 92.5% and F1 score of 92.5% in one internal dataset. The accuracy was improved from 63.9% to 79.8% and F1 score from 43.9% to 65.0% using the proposed method on the other internal dataset. The accuracy improved from 63.5% to 85.1% and the F1 score from 56.4% to 83.9% on the external dataset. Image intensity features from key organs are critical for improving the classification accuracy of contrast phases of CT scans. The classification method based on these features is robust to different scanners and imaging protocols from different institutes. Our results suggested improved classification accuracy over existing approaches, which advances the application of automatic contrast phase classification toward real clinical practice. The code for this work can be found at https://github.com/rsummers11/CT_Contrast_Phase_Classifier.
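The core idea described above can be sketched as follows: summarize CT attenuation inside key segmented organs and feed those summaries to a phase classifier. In the paper the organ masks come from TotalSegmentator; here they are toy arrays, and the random-forest classifier and the specific intensity features are assumptions, not the authors' model.

```python
# Sketch: mean HU and interquartile range inside each organ mask as features
# for a contrast-phase classifier. All data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def organ_intensity_features(ct_hu: np.ndarray, organ_masks: dict) -> list:
    feats = []
    for mask in organ_masks.values():
        vals = ct_hu[mask.astype(bool)]
        feats += [float(vals.mean()),
                  float(np.percentile(vals, 75) - np.percentile(vals, 25))]
    return feats

rng = np.random.default_rng(0)
masks = {"aorta": np.zeros((16, 16, 16), bool), "ivc": np.zeros((16, 16, 16), bool)}
masks["aorta"][4:12, 2:6, 2:6] = True
masks["ivc"][4:12, 10:14, 10:14] = True

X, y = [], []
for phase, (aorta_hu, ivc_hu) in enumerate([(45, 45), (300, 90), (140, 150)]):  # NC, arterial, PV
    for _ in range(30):
        ct = rng.normal(40, 15, size=(16, 16, 16))
        ct[masks["aorta"]] = rng.normal(aorta_hu, 20, masks["aorta"].sum())
        ct[masks["ivc"]] = rng.normal(ivc_hu, 20, masks["ivc"].sum())
        X.append(organ_intensity_features(ct, masks))
        y.append(phase)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```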
Affiliation(s)
- Liangchen Liu: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Clinical Center, National Institutes of Health, United States of America
- Jianfei Liu: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Clinical Center, National Institutes of Health, United States of America
- Bikash Santra: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Clinical Center, National Institutes of Health, United States of America; Indian Institute of Technology, Jodhpur, India
- Pritam Mukherjee: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Clinical Center, National Institutes of Health, United States of America
- Tejas Mathai: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Clinical Center, National Institutes of Health, United States of America
- Yingying Zhu: The University of Texas at Arlington, United States of America
- Akshaya Anand: The University of Maryland, United States of America
- Ronald M Summers: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Clinical Center, National Institutes of Health, United States of America

9. Chatterjee D, Kanhere A, Doo FX, Zhao J, Chan A, Welsh A, Kulkarni P, Trang A, Parekh VS, Yi PH. Children Are Not Small Adults: Addressing Limited Generalizability of an Adult Deep Learning CT Organ Segmentation Model to the Pediatric Population. J Imaging Inform Med 2024. PMID: 39299957. DOI: 10.1007/s10278-024-01273-w.
Abstract
Deep learning (DL) tools developed on adult datasets may not generalize well to pediatric patients, posing potential safety risks. We evaluated the performance of TotalSegmentator, a state-of-the-art adult-trained CT organ segmentation model, on a subset of organs in a pediatric CT dataset and explored optimization strategies to improve pediatric segmentation performance. TotalSegmentator was retrospectively evaluated on abdominal CT scans from an external adult dataset (n = 300) and an external pediatric dataset (n = 359). Generalizability was quantified by comparing Dice scores between adult and pediatric external datasets using Mann-Whitney U tests. Two DL optimization approaches were then evaluated: (1) a 3D nnU-Net model trained on only pediatric data, and (2) an adult nnU-Net model fine-tuned on the pediatric cases. Our results show TotalSegmentator had significantly lower overall mean Dice scores on pediatric vs. adult CT scans (0.73 vs. 0.81, P < .001), demonstrating limited generalizability to pediatric CT scans. Stratified by organ, mean pediatric Dice scores were lower for four organs (all P < .001): right and left adrenal glands (right adrenal, 0.41 [0.39-0.43] vs. 0.69 [0.66-0.71]; left adrenal, 0.35 [0.32-0.37] vs. 0.68 [0.65-0.71]); duodenum (0.47 [0.45-0.49] vs. 0.67 [0.64-0.69]); and pancreas (0.73 [0.72-0.74] vs. 0.79 [0.77-0.81]). Performance on pediatric CT scans improved when pediatric-specific models were developed and when the adult-trained model was fine-tuned on pediatric images; both methods significantly improved segmentation accuracy over TotalSegmentator for all organs, especially for smaller anatomical structures (e.g., > 0.2 higher mean Dice for adrenal glands; P < .001).
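A small sketch of the generalizability comparison described above: per-scan Dice scores from an adult and a pediatric external set are compared with a Mann-Whitney U test. The simulated scores roughly mirror the reported means (0.81 vs 0.73) but are not the study's measurements.

```python
# Compare per-scan Dice distributions between adult and pediatric cohorts
# with a Mann-Whitney U test. Scores are simulated placeholders.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
dice_adult = np.clip(rng.normal(0.81, 0.08, 300), 0, 1)       # n = 300 adult scans
dice_pediatric = np.clip(rng.normal(0.73, 0.10, 359), 0, 1)   # n = 359 pediatric scans

stat, p = mannwhitneyu(dice_adult, dice_pediatric, alternative="two-sided")
print(f"median Dice adult {np.median(dice_adult):.2f} vs pediatric "
      f"{np.median(dice_pediatric):.2f}; U = {stat:.0f}, P = {p:.1e}")
```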
Affiliation(s)
- Devina Chatterjee: Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
- Adway Kanhere: Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
- Florence X Doo: Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
- Jerry Zhao: Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
- Andrew Chan: Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
- Alexander Welsh: Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
- Pranav Kulkarni: Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
- Annie Trang: Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
- Vishwa S Parekh: Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
- Paul H Yi: Department of Diagnostic Imaging, St. Jude Children's Research Hospital, 262 Danny Thomas Place, Memphis, TN 38105, USA

10. Abel L, Wasserthal J, Meyer MT, Vosshenrich J, Yang S, Donners R, Obmann M, Boll D, Merkle E, Breit HC, Segeroth M. Intra-Individual Reproducibility of Automated Abdominal Organ Segmentation-Performance of TotalSegmentator Compared to Human Readers and an Independent nnU-Net Model. J Imaging Inform Med 2024. PMID: 39294417. DOI: 10.1007/s10278-024-01265-w.
Abstract
The purpose of this study is to assess the segmentation reproducibility of the artificial intelligence-based algorithm TotalSegmentator across 34 anatomical structures using multiphasic abdominal CT scans, comparing unenhanced, arterial, and portal venous phases in the same patients. A total of 1252 multiphasic abdominal CT scans acquired at our institution between January 1, 2012, and December 31, 2022, were retrospectively included. TotalSegmentator was used to derive volumetric measurements of 34 abdominal organs and structures from the total of 3756 CT series. Reproducibility was evaluated across three contrast phases per CT and compared to two human readers and an independent nnU-Net trained on the BTCV dataset. Relative deviation in segmented volumes and absolute volume deviations (AVD) were reported. Volume deviation within 5% was considered reproducible. Thus, non-inferiority testing was conducted using a 5% margin. Twenty-nine out of 34 structures had volume deviations within 5% and were considered reproducible. Volume deviations for the adrenal glands, gallbladder, spleen, and duodenum were above 5%. The highest reproducibility was observed for bones (- 0.58% [95% CI: - 0.58, - 0.57]) and muscles (- 0.33% [- 0.35, - 0.32]). Among abdominal organs, volume deviation was 1.67% (1.60, 1.74). TotalSegmentator outperformed the reproducibility of the nnU-Net trained on the BTCV dataset with an AVD of 6.50% (6.41, 6.59) vs. 10.03% (9.86, 10.20; p < 0.0001), most notably in cases with pathologic findings. Similarly, TotalSegmentator's AVD between different contrast phases was superior to the interreader AVD for the same contrast phase (p = 0.036). TotalSegmentator demonstrated high intra-individual reproducibility for most abdominal structures in multiphasic abdominal CT scans. Although reproducibility was lower in pathologic cases, it outperformed both human readers and an nnU-Net trained on the BTCV dataset.
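The snippet below sketches the reproducibility criterion described above: the relative deviation of an organ's segmented volume across the unenhanced, arterial, and portal venous series of one patient, judged against a 5% margin. The exact deviation definition used in the paper may differ, and the volumes are made-up illustrative numbers.

```python
# Relative per-phase volume deviation per organ, checked against a 5% margin.
import numpy as np

def relative_deviation_pct(volumes_ml: np.ndarray) -> float:
    """Largest relative deviation of the per-phase volumes from their mean, in percent."""
    mean = volumes_ml.mean()
    return float(np.max(np.abs(volumes_ml - mean)) / mean * 100)

per_phase_volumes = {
    "liver":          np.array([1510.0, 1498.0, 1522.0]),   # unenhanced, arterial, portal venous (mL)
    "spleen":         np.array([210.0, 225.0, 232.0]),
    "adrenal_glands": np.array([8.1, 7.2, 9.0]),
}
for organ, vols in per_phase_volumes.items():
    dev = relative_deviation_pct(vols)
    verdict = "within" if dev <= 5 else "outside"
    print(f"{organ}: {dev:.1f}% deviation ({verdict} the 5% margin)")
```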
Affiliation(s)
- Lorraine Abel: Department of Radiology, University Hospital Basel, Petersgraben 4, 4031, Basel, Switzerland
- Jakob Wasserthal: Department of Radiology, University Hospital Basel, Petersgraben 4, 4031, Basel, Switzerland
- Manfred T Meyer: Department of Radiology, University Hospital Basel, Petersgraben 4, 4031, Basel, Switzerland
- Jan Vosshenrich: Department of Radiology, University Hospital Basel, Petersgraben 4, 4031, Basel, Switzerland
- Shan Yang: Department of Radiology, University Hospital Basel, Petersgraben 4, 4031, Basel, Switzerland
- Ricardo Donners: Department of Radiology, University Hospital Basel, Petersgraben 4, 4031, Basel, Switzerland
- Markus Obmann: Department of Radiology, University Hospital Basel, Petersgraben 4, 4031, Basel, Switzerland
- Daniel Boll: Department of Radiology, University Hospital Basel, Petersgraben 4, 4031, Basel, Switzerland
- Elmar Merkle: Department of Radiology, University Hospital Basel, Petersgraben 4, 4031, Basel, Switzerland
- Hanns-Christian Breit: Department of Radiology, University Hospital Basel, Petersgraben 4, 4031, Basel, Switzerland
- Martin Segeroth: Department of Radiology, University Hospital Basel, Petersgraben 4, 4031, Basel, Switzerland

11. Chen PT, Li PY, Liu KL, Wu VC, Lin YH, Chueh JS, Chen CM, Chang CC. Machine Learning Model with Computed Tomography Radiomics and Clinicobiochemical Characteristics Predict the Subtypes of Patients with Primary Aldosteronism. Acad Radiol 2024; 31:1818-1827. PMID: 38042624. DOI: 10.1016/j.acra.2023.10.015.
Abstract
RATIONALE AND OBJECTIVES Adrenal venous sampling (AVS) is the primary method for differentiating between primary aldosteronism (PA) subtypes. The aim of this study is to develop prediction models for subtyping of patients with PA using computed tomography (CT) radiomics and clinicobiochemical characteristics associated with PA. MATERIALS AND METHODS This study retrospectively enrolled 158 patients with PA who underwent AVS between January 2014 and March 2021. Neural network machine learning models were developed using a two-stage analysis of triple-phase abdominal CT and clinicobiochemical characteristics. In the first stage, the models were constructed to classify unilateral or bilateral PA; in the second stage, they were designed to determine the predominant side in patients with unilateral PA. The final proposed model combined the best-performing models from both stages. The model's performance was evaluated using repeated stratified five-fold cross-validation. We employed paired t-tests to compare its performance with the conventional imaging evaluations made by radiologists, which categorize patients as either having bilateral PA or unilateral PA on one side. RESULTS In the first stage, the integrated model that combines CT radiomic and clinicobiochemical characteristics exhibited the highest performance, surpassing both the radiomic-alone and clinicobiochemical-alone models. It achieved an accuracy and F1 score of 80.6% ± 3.0% and 74.8% ± 5.2% (area under the receiver operating curve [AUC] = 0.778 ± 0.050). In the second stage, the accuracy and F1 score of the radiomic-based model were 88% ± 4.9% and 81.9% ± 6.2% (AUC = 0.831 ± 0.087). The proposed model achieved an accuracy and F1 score of 77.5% ± 3.9% and 70.5% ± 7.1% (AUC = 0.771 ± 0.046) in subtype diagnosis and lateralization, surpassing the accuracy and F1 score achieved by radiologists' evaluation (p < .05). CONCLUSION The proposed machine learning model can predict the subtypes and lateralization of PA. It yields superior results compared to conventional imaging evaluation and has the potential to supplement the diagnostic process in PA.
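Below is a minimal sketch of the evaluation scheme reported above: a neural-network classifier on concatenated CT radiomic and clinicobiochemical features, scored by AUC under repeated stratified five-fold cross-validation. The synthetic features, network size, and preprocessing are assumptions, not the authors' implementation.

```python
# Repeated stratified 5-fold cross-validation of an MLP on combined feature groups.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
radiomic = rng.normal(size=(158, 30))            # placeholder CT radiomic features
clinical = rng.normal(size=(158, 6))             # placeholder clinicobiochemical features
X = np.hstack([radiomic, clinical])
y = (X[:, 0] + 0.7 * X[:, 30] + rng.normal(size=158) > 0).astype(int)   # toy label: 1 = unilateral PA

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0))
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)
aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"AUC: {aucs.mean():.3f} +/- {aucs.std():.3f}")
```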
Affiliation(s)
- Po-Ting Chen: Institute of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei, Taiwan; Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan; Department of Medical Imaging, National Taiwan University Cancer Center and National Taiwan University College of Medicine, Taipei, Taiwan; Department of Medical Imaging, National Taiwan University Hospital Hsinchu Branch, Hsinchu, Taiwan
- Pei-Yan Li: Institute of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei, Taiwan
- Kao-Lang Liu: Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan; Department of Medical Imaging, National Taiwan University Cancer Center and National Taiwan University College of Medicine, Taipei, Taiwan
- Vin-Cent Wu: Division of Nephrology, Department of Internal Medicine, National Taiwan University Hospital, National Taiwan University College of Medicine, Taipei, Taiwan
- Yen-Hung Lin: Division of Cardiology, Department of Internal Medicine, National Taiwan University Hospital, National Taiwan University College of Medicine, Taipei, Taiwan
- Jeff S Chueh: Department of Urology, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
- Chung-Ming Chen: Institute of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei, Taiwan
- Chin-Chen Chang: Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan

12. Sun S, Yao W, Wang Y, Yue P, Guo F, Deng X, Zhang Y. Development and validation of machine-learning models for the difficulty of retroperitoneal laparoscopic adrenalectomy based on radiomics. Front Endocrinol (Lausanne) 2023; 14:1265790. PMID: 38034013. PMCID: PMC10687448. DOI: 10.3389/fendo.2023.1265790.
Abstract
Objective The aim is to construct machine learning (ML) prediction models for the difficulty of retroperitoneal laparoscopic adrenalectomy (RPLA) based on clinical and radiomic characteristics and to validate the models. Methods Patients who had undergone RPLA at Shanxi Bethune Hospital between August 2014 and December 2020 were retrospectively gathered. They were then randomly split into a training set and a validation set, maintaining a ratio of 7:3. The model was constructed using the training set and validated using the validation set. Furthermore, a total of 117 patients were gathered between January and December 2021 to form a prospective set for validation. Radiomic features were extracted by drawing the region of interest using the 3D Slicer image computing platform and Python. Key features were selected through LASSO, and the radiomics score (Rad-score) was calculated. Various ML models were constructed by combining Rad-score with clinical characteristics. The optimal models were selected based on precision, recall, the area under the curve, F1 score, calibration curve, receiver operating characteristic curve, and decision curve analysis in the training, validation, and prospective sets. Shapley Additive exPlanations (SHAP) was used to demonstrate the impact of each variable in the respective models. Results After comparing the performance of 7 ML models in the training, validation, and prospective sets, it was found that the random forest (RF) model had more stable predictive performance, while XGBoost offered greater potential benefit to patients. According to SHAP, the variable importance of the two models was similar, and both showed that the Rad-score had the greatest impact. At the same time, clinical characteristics such as hemoglobin, age, body mass index, gender, and diabetes mellitus also influenced the difficulty. Conclusion This study constructed ML models for predicting the difficulty of RPLA by combining clinical and radiomic characteristics. The models can help surgeons evaluate surgical difficulty, reduce risks, and improve patient benefits.
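The Rad-score construction described above can be sketched as follows: LASSO keeps the radiomic features with non-zero coefficients, and the Rad-score is the resulting linear combination. The synthetic feature matrix stands in for features extracted with 3D Slicer/Python, a plain linear LassoCV is used here, and the authors' exact LASSO setup may differ.

```python
# LASSO feature selection and a linear Rad-score on synthetic radiomic features.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))                        # 50 candidate radiomic features
signal = X[:, 0] - 0.8 * X[:, 3] + 0.5 * X[:, 7]      # toy ground-truth signal
y = (signal + rng.normal(scale=0.5, size=300) > 0).astype(float)   # 1 = difficult RPLA (toy)

Xs = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5, random_state=0).fit(Xs, y)
kept = np.flatnonzero(lasso.coef_)
print("features kept by LASSO:", kept)

# Rad-score per patient: intercept + weighted sum of the selected features
rad_score = lasso.intercept_ + Xs[:, kept] @ lasso.coef_[kept]
print("first five Rad-scores:", np.round(rad_score[:5], 3))
```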
Affiliation(s)
- Shiwei Sun: Third Hospital of Shanxi Medical University, Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Tongji Shanxi Hospital, Taiyuan, China
- Wei Yao: Third Hospital of Shanxi Medical University, Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Tongji Shanxi Hospital, Taiyuan, China
- Yue Wang: Third Hospital of Shanxi Medical University, Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Tongji Shanxi Hospital, Taiyuan, China
- Peng Yue: Third Hospital of Shanxi Medical University, Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Tongji Shanxi Hospital, Taiyuan, China
- Fuyu Guo: Third Hospital of Shanxi Medical University, Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Tongji Shanxi Hospital, Taiyuan, China
- Xiaoqian Deng: Third Hospital of Shanxi Medical University, Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Tongji Shanxi Hospital, Taiyuan, China
- Yangang Zhang: Third Hospital of Shanxi Medical University, Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Tongji Shanxi Hospital, Taiyuan, China; Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Tongji Shanxi Hospital, Third Hospital of Shanxi Medical University, Taiyuan, China; Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China

13. Weisman AJ, Huff DT, Govindan RM, Chen S, Perk TG. Multi-organ segmentation of CT via convolutional neural network: impact of training setting and scanner manufacturer. Biomed Phys Eng Express 2023; 9:065021. PMID: 37725928. DOI: 10.1088/2057-1976/acfb06.
Abstract
Objective. Automated organ segmentation on CT images can enable the clinical use of advanced quantitative software devices, but model performance sensitivities must be understood before widespread adoption can occur. The goal of this study was to investigate performance differences between Convolutional Neural Networks (CNNs) trained to segment one (single-class) versus multiple (multi-class) organs, and between CNNs trained on scans from a single manufacturer versus multiple manufacturers. Methods. The multi-class CNN was trained on CT images obtained from 455 whole-body PET/CT scans (413 for training, 42 for testing) taken with Siemens, GE, and Philips PET/CT scanners, where 16 organs were segmented. The multi-class CNN was compared to 16 smaller single-class CNNs trained using the same data, but with segmentations of only one organ per model. In addition, CNNs trained on Siemens-only (N = 186) and GE-only (N = 219) scans (manufacturer-specific) were compared with CNNs trained on data from both Siemens and GE scanners (manufacturer-mixed). Segmentation performance was quantified using five performance metrics, including the Dice Similarity Coefficient (DSC). Results. The multi-class CNN performed well compared to previous studies, even in organs usually considered difficult auto-segmentation targets (e.g., pancreas, bowel). Segmentations from the multi-class CNN were significantly superior to those from smaller single-class CNNs in most organs, and the 16 single-class models took, on average, six times longer to segment all 16 organs compared to the single multi-class model. The manufacturer-mixed approach achieved minimally higher performance than the manufacturer-specific approach. Significance. A CNN trained on contours of multiple organs and CT data from multiple manufacturers yielded high-quality segmentations. Such a model is an essential enabler of image processing in a software device that quantifies and analyzes such data to determine a patient's treatment response. To date, this activity of whole organ segmentation has not been adopted due to the intense manual workload and time required.
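One way the per-organ comparison described above can be run is sketched below: paired Dice scores for the same 42 test scans from the multi-class and single-class configurations, compared with a Wilcoxon signed-rank test. The test choice and the simulated scores are assumptions for illustration; the abstract does not state which statistical test was used.

```python
# Paired comparison of per-scan Dice scores between two model configurations.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
dice_multi = np.clip(rng.normal(0.90, 0.04, 42), 0, 1)                 # multi-class CNN (toy)
dice_single = np.clip(dice_multi - rng.normal(0.02, 0.02, 42), 0, 1)   # single-class CNN (toy)

stat, p = wilcoxon(dice_multi, dice_single)
print(f"median Dice {np.median(dice_multi):.3f} (multi-class) vs "
      f"{np.median(dice_single):.3f} (single-class), P = {p:.3g}")
```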
Affiliation(s)
- Amy J Weisman: AIQ Solutions, Madison, WI, United States of America
- Daniel T Huff: AIQ Solutions, Madison, WI, United States of America
- Song Chen: Department of Nuclear Medicine, The First Hospital of China Medical University, Shenyang, Liaoning, People's Republic of China

14. Sengun B, Iscan Y, Tataroglu Ozbulak GA, Kumbasar N, Egriboz E, Sormaz IC, Aksakal N, Deniz SM, Haklidir M, Tunca F, Giles Senyurek Y. Artificial Intelligence in Minimally Invasive Adrenalectomy: Using Deep Learning to Identify the Left Adrenal Vein. Surg Laparosc Endosc Percutan Tech 2023; 33:327-331. PMID: 37311027. DOI: 10.1097/sle.0000000000001185.
Abstract
BACKGROUND Minimally invasive adrenalectomy is the main surgical treatment option for the resection of adrenal masses. Recognition and ligation of adrenal veins are critical parts of adrenal surgery. Artificial intelligence and deep learning algorithms that identify anatomic structures during laparoscopic and robot-assisted surgery can be used to provide real-time guidance. METHODS In this experimental feasibility study, intraoperative videos of patients who underwent minimally invasive transabdominal left adrenalectomy procedures between 2011 and 2022 in a tertiary endocrine referral center were retrospectively analyzed and used to develop an artificial intelligence model. Semantic segmentation of the left adrenal vein with deep learning was performed. To train a model, 50 random images per patient were captured during the identification and dissection of the left adrenal vein. A randomly selected 70% of the data was used to train the models, while 15% was used for testing and 15% for validation, with 3 efficient stage-wise feature pyramid networks (ESFPNet). Dice similarity coefficient (DSC) and intersection over union scores were used to evaluate segmentation accuracy. RESULTS A total of 40 videos were analyzed. Annotation of the left adrenal vein was performed in 2000 images. The segmentation network trained on 1400 images was used to identify the left adrenal vein in 300 test images. The mean DSC and sensitivity for the highest-scoring ESFPNet B-2 network were 0.77 (±0.16 SD) and 0.82 (±0.15 SD), respectively, while the maximum DSC was 0.93, suggesting successful prediction of the anatomy. CONCLUSIONS Deep learning algorithms can predict the left adrenal vein anatomy with high performance and can potentially be utilized to identify critical anatomy during adrenal surgery and provide real-time guidance in the near future.
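For concreteness, the snippet below illustrates the two per-frame metrics reported above for the vein segmentation: the Dice similarity coefficient (DSC) and pixel-level sensitivity of a predicted mask against an annotation. The toy masks are placeholders for the network output and the expert annotation.

```python
# Per-frame DSC and pixel-level sensitivity for a 2D segmentation mask.
import numpy as np

def dsc_and_sensitivity(pred: np.ndarray, ref: np.ndarray):
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()
    dsc = 2 * tp / (pred.sum() + ref.sum()) if (pred.sum() + ref.sum()) else 1.0
    sensitivity = tp / ref.sum() if ref.sum() else 1.0
    return float(dsc), float(sensitivity)

ref = np.zeros((256, 256), bool); ref[100:140, 80:200] = True    # annotated adrenal vein (toy)
pred = np.zeros((256, 256), bool); pred[105:140, 90:210] = True  # predicted mask (toy)
dsc, sens = dsc_and_sensitivity(pred, ref)
print(f"DSC = {dsc:.2f}, sensitivity = {sens:.2f}")
```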
Affiliation(s)
- Berke Sengun: Department of General Surgery, Istanbul University, Istanbul Faculty of Medicine, Istanbul, Turkey
- Yalin Iscan: Department of General Surgery, Istanbul University, Istanbul Faculty of Medicine, Istanbul, Turkey
- Ismail C Sormaz: Department of General Surgery, Istanbul University, Istanbul Faculty of Medicine, Istanbul, Turkey
- Nihat Aksakal: Department of General Surgery, Istanbul University, Istanbul Faculty of Medicine, Istanbul, Turkey
- Fatih Tunca: Department of General Surgery, Istanbul University, Istanbul Faculty of Medicine, Istanbul, Turkey
- Yasemin Giles Senyurek: Department of General Surgery, Istanbul University, Istanbul Faculty of Medicine, Istanbul, Turkey

15. Chen Y, Yang J, Zhang Y, Sun Y, Zhang X, Wang X. Age-related morphometrics of normal adrenal glands based on deep learning-aided segmentation. Heliyon 2023; 9:e16810. PMID: 37346358. PMCID: PMC10279821. DOI: 10.1016/j.heliyon.2023.e16810.
Abstract
OBJECTIVE This study aims to evaluate the morphometrics of normal adrenal glands in adult patients semiautomatically using a deep learning-based segmentation model. MATERIALS AND METHODS A total of 520 abdominal CT image series with normal findings, from January 1, 2016, to March 14, 2019, were retrospectively collected for the training of the adrenal segmentation model. Then, 1043 portal venous phase image series of inpatient contrast-enhanced abdominal CT examinations with normal adrenal glands were included for analysis and grouped into 10-year age intervals. A 3D U-Net-based segmentation model was used to predict bilateral adrenal labels, followed by manual modification of the labels as appropriate. Quantitative parameters (volume, CT value, and diameters) of the bilateral adrenal glands were then analyzed. RESULTS In the study cohort aged 18-77 years old (554 males and 489 females), the left adrenal gland was significantly larger than the right adrenal gland [all patients, 2867.79 (2317.11-3499.89) mm3 vs. 2452.84 (1983.50-2935.18) mm3, P < 0.001]. Male patients showed a greater volume of bilateral adrenal glands than females in all age groups (all patients, left: 3237.83 ± 930.21 mm3 vs. 2646.49 ± 766.42 mm3, P < 0.001; right: 2731.69 ± 789.19 mm3 vs. 2266.18 ± 632.97 mm3, P = 0.001). Bilateral adrenal volumes in male patients showed an increasing and then decreasing trend with age, peaking at 38-47 years old (left: 3416.01 ± 886.21 mm3, right: 2855.04 ± 774.57 mm3). CONCLUSIONS The semiautomated measurement revealed that adrenal volume varies with age, peaking in male patients aged 38-47 years.
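The morphometric summary described above can be sketched as follows: per-patient adrenal volumes (assumed already derived from the segmentation masks) are binned into 10-year age groups and averaged by side and sex. The simulated table is a stand-in for the study cohort, not its data.

```python
# Group per-patient adrenal volumes into 10-year age bins and summarize by sex.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1043
df = pd.DataFrame({
    "age": rng.integers(18, 78, n),
    "sex": rng.choice(["M", "F"], n),
    "left_volume_mm3": rng.normal(2900, 850, n),
    "right_volume_mm3": rng.normal(2450, 700, n),
})
df["age_group"] = pd.cut(df["age"], bins=range(18, 88, 10), right=False)

summary = (df.groupby(["age_group", "sex"], observed=True)[["left_volume_mm3", "right_volume_mm3"]]
             .mean().round(0))
print(summary.head(6))
```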
Affiliation(s)
- Yuanchong Chen: Department of Radiology, Peking University First Hospital, Beijing, 100034, China
- Jiejin Yang: Department of Radiology, Peking University First Hospital, Beijing, 100034, China
- Yaofeng Zhang: Beijing Smart-imaging Technology Co. Ltd., Beijing, 100011, China
- Yumeng Sun: Beijing Smart-imaging Technology Co. Ltd., Beijing, 100011, China
- Xiaodong Zhang: Department of Radiology, Peking University First Hospital, Beijing, 100034, China
- Xiaoying Wang: Department of Radiology, Peking University First Hospital, Beijing, 100034, China

16. Sut SK, Koc M, Zorlu G, Serhatlioglu I, Barua PD, Dogan S, Baygin M, Tuncer T, Tan RS, Acharya UR. Automated Adrenal Gland Disease Classes Using Patch-Based Center Symmetric Local Binary Pattern Technique with CT Images. J Digit Imaging 2023; 36:879-892. PMID: 36658376. PMCID: PMC10287607. DOI: 10.1007/s10278-022-00759-9.
Abstract
Incidental adrenal masses are seen in 5% of abdominal computed tomography (CT) examinations. Accurate discrimination of the possible differential diagnoses has important therapeutic and prognostic significance. A new handcrafted machine learning method has been developed for the automated and accurate classification of adrenal gland CT images. A new dataset comprising 759 adrenal gland CT image slices from 96 subjects was analyzed. Experts labeled the collected images into four classes: normal, pheochromocytoma, lipid-poor adenoma, and metastasis. The images were preprocessed, resized, and the image features were extracted using the center symmetric local binary pattern (CS-LBP) method. CT images were next divided into 16 × 16 fixed-size patches, and further feature extraction using CS-LBP was performed on these patches. Next, extracted features were selected using neighborhood component analysis (NCA) to obtain the most meaningful ones for downstream classification. Finally, the selected features were classified using k-nearest neighbor (kNN), support vector machine (SVM), and neural network (NN) classifiers to obtain the optimum performing model. Our proposed method obtained an accuracy of 99.87%, 99.21%, and 98.81% with kNN, SVM, and NN classifiers, respectively. Hence, the kNN classifier yielded the highest classification results with no pathological image misclassified as normal. Our fixed-patch CS-LBP-based automatic classification of adrenal gland pathologies on CT images is highly accurate and has low time complexity [Formula: see text]. It has the potential to be used for screening of adrenal gland disease classes with CT images.
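A hedged sketch of the handcrafted pipeline described above follows: a compact center-symmetric LBP (CS-LBP) code image, 16x16 patch histograms as features, NCA, and a kNN classifier. The CS-LBP variant here is a simplification, the synthetic "slices" are placeholders for CT data, and scikit-learn's NCA learns a feature transform rather than the exact feature selection used by the authors.

```python
# Simplified CS-LBP patch features -> NCA -> kNN classification on synthetic textures.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

def cs_lbp(img: np.ndarray, thr: float = 0.0) -> np.ndarray:
    """4-bit CS-LBP code: compare the four center-symmetric neighbor pairs."""
    c = img[1:-1, 1:-1]
    pairs = [((0, 0), (2, 2)), ((0, 1), (2, 1)), ((0, 2), (2, 0)), ((1, 2), (1, 0))]
    code = np.zeros(c.shape, dtype=np.int64)
    for bit, ((y1, x1), (y2, x2)) in enumerate(pairs):
        a = img[y1:y1 + c.shape[0], x1:x1 + c.shape[1]]
        b = img[y2:y2 + c.shape[0], x2:x2 + c.shape[1]]
        code += ((a - b) > thr).astype(np.int64) * (1 << bit)
    return code

def patch_histograms(code: np.ndarray, patch: int = 16) -> np.ndarray:
    """Concatenated 16-bin histograms of the CS-LBP codes in each fixed-size patch."""
    feats = []
    for y in range(0, code.shape[0] - patch + 1, patch):
        for x in range(0, code.shape[1] - patch + 1, patch):
            hist, _ = np.histogram(code[y:y + patch, x:x + patch], bins=16, range=(0, 16))
            feats.append(hist)
    return np.concatenate(feats).astype(float)

def toy_slice(cls: int, rng) -> np.ndarray:
    """Synthetic 66x66 'CT slice' whose texture frequency depends on the class."""
    yy, xx = np.meshgrid(np.arange(66), np.arange(66), indexing="ij")
    return 5 * np.sin((0.2 + 0.3 * cls) * (yy + xx)) + rng.normal(0, 1, (66, 66))

rng = np.random.default_rng(0)
classes = (0, 1, 2, 3)   # e.g. normal, pheochromocytoma, lipid-poor adenoma, metastasis
X = np.array([patch_histograms(cs_lbp(toy_slice(cls, rng)))
              for cls in classes for _ in range(20)])
y = np.repeat(classes, 20)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(),
                    NeighborhoodComponentsAnalysis(n_components=16, random_state=0),
                    KNeighborsClassifier(n_neighbors=1))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```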
Affiliation(s)
- Suat Kamil Sut: Department of Radiology, Adiyaman Training and Research Hospital, Adiyaman, Turkey
- Mustafa Koc: Department of Radiology, Faculty of Medicine, Firat University, Elazig, Turkey
- Gokhan Zorlu: Department of Biophysics, Faculty of Medicine, Firat University, Elazig, Turkey
- Ihsan Serhatlioglu: Department of Biophysics, Faculty of Medicine, Firat University, Elazig, Turkey
- Prabal Datta Barua: School of Business (Information System), University of Southern Queensland, Toowoomba, QLD 4350, Australia; Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
- Sengul Dogan: Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Mehmet Baygin: Department of Computer Engineering, College of Engineering, Ardahan University, Ardahan, Turkey
- Turker Tuncer: Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Ru-San Tan: Department of Cardiology, National Heart Centre, Singapore, Singapore; Duke-NUS Medical School, Singapore, Singapore
- U. Rajendra Acharya: Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore, Singapore; Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan