1
Yousefzamani M, Babapour Mofrad F. Deep learning without borders: recent advances in ultrasound image classification for liver diseases diagnosis. Expert Rev Med Devices 2025:1-17. [PMID: 40445166] [DOI: 10.1080/17434440.2025.2514764]
Abstract
INTRODUCTION Liver diseases are among the top global health burdens. Non-invasive, patient-friendly diagnostics have gained increasing importance, and among these, ultrasound is the most widely used. Deep learning, in particular convolutional neural networks, has revolutionized the classification of liver diseases by automating the analysis of images that are difficult to interpret. AREAS COVERED This review summarizes progress in deep learning techniques for the classification of liver diseases using ultrasound imaging. It evaluates various models, from CNNs to hybrid versions such as CNN-Transformer architectures, for detecting fatty liver, fibrosis, and liver cancer, among other conditions. Challenges in generalizing data and models across different clinical environments are also discussed. EXPERT OPINION Deep learning holds great promise for the automatic diagnosis of liver diseases, and most models have achieved high accuracy in clinical studies. Despite this promise, challenges relating to generalization remain. Future hardware developments and access to high-quality clinical data should further improve the performance of these models and secure their role in the diagnosis of liver diseases.
Affiliation(s)
- Midya Yousefzamani
- Department of Medical Radiation Engineering SR.C., Islamic Azad University, Tehran, Iran
2
Sadr H, Nazari M, Khodaverdian Z, Farzan R, Yousefzadeh-Chabok S, Ashoobi MT, Hemmati H, Hendi A, Ashraf A, Pedram MM, Hasannejad-Bibalan M, Yamaghani MR. Unveiling the potential of artificial intelligence in revolutionizing disease diagnosis and prediction: a comprehensive review of machine learning and deep learning approaches. Eur J Med Res 2025; 30:418. [PMID: 40414894] [DOI: 10.1186/s40001-025-02680-7]
Abstract
The rapid advancement of Machine Learning (ML) and Deep Learning (DL) technologies has revolutionized healthcare, particularly in the domains of disease prediction and diagnosis. This study provides a comprehensive review of ML and DL applications across sixteen diverse diseases, synthesizing findings from research conducted between 2015 and 2024. We explore these technologies' methodologies, effectiveness, and clinical outcomes, highlighting their transformative potential in healthcare settings. Although ML and DL demonstrate remarkable accuracy and efficiency in disease prediction and diagnosis, challenges including quality of data, interpretability of models, and their integration into clinical workflows remain significant barriers. By evaluating advanced approaches and their outcomes, this review not only underscores the current capabilities of ML and DL but also identifies key areas for future research. Ultimately, this work aims to serve as a roadmap for advancing healthcare practices, enhancing clinical decision making, and strengthening patient outcomes through the effective and responsible implementation of AI-driven technologies.
Affiliation(s)
- Hossein Sadr
- Department of Artificial Intelligence in Medicine, Faculty of Advanced Technologies in Medicine, Iran University of Medical Sciences, Tehran, Iran.
- Neuroscience Research Center, Trauma Institute, Guilan University of Medical Sciences, Rasht, Iran.
- Mojdeh Nazari
- Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran.
- Cardiovascular Disease Research Center, Department of Cardiology, Heshmat Hospital, School of Medicine, Guilan University of Medical Sciences, Rasht, Iran.
- Zeinab Khodaverdian
- Department of Computer Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Ramyar Farzan
- Department of Plastic and Reconstructive Surgery, School of Medicine, Guilan University of Medical Sciences, Rasht, Iran
- Mohammad Taghi Ashoobi
- Razi Clinical Research Development Unit, Razi Hospital, Guilan University of Medical Sciences, Rasht, Iran
- Hossein Hemmati
- Razi Clinical Research Development Unit, Razi Hospital, Guilan University of Medical Sciences, Rasht, Iran
- Amirreza Hendi
- Dental Sciences Research Center, Department of Prosthodontics, School of Dentistry, Guilan University of Medical Sciences, Rasht, Iran
- Ali Ashraf
- Clinical Research Development Unit of Poursina Hospital, Guilan University of Medical Sciences, Rasht, Iran
- Mir Mohsen Pedram
- Department of Electrical and Computer Engineering, Faculty of Engineering, Kharazmi University, Tehran, Iran
- Mohammad Reza Yamaghani
- Department of Computer Engineering and Information Technology, La.C., Islamic Azad University, Lahijan, Iran
3
Jeon Y, Kim BR, Choi HI, Lee E, Kim DW, Choi B, Lee JW. Feasibility of deep learning algorithm in diagnosing lumbar central canal stenosis using abdominal CT. Skeletal Radiol 2025; 54:947-957. [PMID: 39249505] [PMCID: PMC11953181] [DOI: 10.1007/s00256-024-04796-z]
Abstract
OBJECTIVE To develop a deep learning algorithm for diagnosing lumbar central canal stenosis (LCCS) using abdominal CT (ACT) and lumbar spine CT (LCT). MATERIALS AND METHODS This retrospective study involved 109 patients undergoing LCTs and ACTs between January 2014 and July 2021. The dural sac on CT images was manually segmented and classified as normal or stenosed (dural sac cross-sectional area ≥ 100 mm² or < 100 mm², respectively). A deep learning model based on the U-Net architecture was developed to automatically segment the dural sac and classify central canal stenosis. The classification performance of the model was evaluated on a testing set (990 images from 9 patients). The accuracy, sensitivity, and specificity of automatic segmentation were quantitatively evaluated by comparing its Dice similarity coefficient (DSC) and intraclass correlation coefficient (ICC) with those of manual segmentation. RESULTS In total, 990 CT images from nine patients (mean age ± standard deviation, 77 ± 7 years; six men) were evaluated. The algorithm achieved high segmentation performance, with a DSC of 0.85 ± 0.10 and an ICC of 0.82 (95% confidence interval [CI]: 0.80, 0.85). The ICC between ACTs and LCTs on the deep learning algorithm was 0.89 (95% CI: 0.87, 0.91). The accuracy of the algorithm in diagnosing LCCS with dichotomous classification was 84% (95% CI: 0.82, 0.86). In dataset analysis, the accuracy for ACTs and LCTs was 85% (95% CI: 0.82, 0.88) and 83% (95% CI: 0.79, 0.86), respectively. The model showed better accuracy for ACT than for LCT. CONCLUSION The deep learning algorithm automatically diagnosed LCCS on LCTs and ACTs. ACT had a diagnostic performance for LCCS comparable to that of LCT.
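The Dice similarity coefficient (DSC) reported above measures the overlap between the automatic and manual segmentation masks. A minimal pure-Python sketch of the metric (an illustration only, not the authors' implementation; masks are assumed to be flattened binary arrays, and treating two empty masks as perfect agreement is just one common convention):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks.

    DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap.
    `pred` and `truth` are equal-length flat sequences of 0/1 values.
    """
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:          # both masks empty: define as perfect agreement
        return 1.0
    return 2.0 * intersection / total

# Toy example: two 3x3 masks flattened to length-9 lists
pred  = [0, 1, 1, 0, 1, 0, 0, 0, 0]
truth = [0, 1, 1, 0, 0, 0, 0, 1, 0]
print(round(dice_coefficient(pred, truth), 3))  # → 0.667
```

In practice the same formula is applied per scan to the voxel masks, and the per-patient scores are then averaged, which is how summary values such as 0.85 ± 0.10 arise.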
Affiliation(s)
- Yejin Jeon
- Department of Radiology, Seoul National University Bundang Hospital, 82 Gumi-ro, 173 Beon-Gil, Bundang-Gu, Seongnam-Si, Gyeonggi-Do, 13620, Republic of Korea
- Bo Ram Kim
- Department of Radiology, Seoul National University Bundang Hospital, 82 Gumi-ro, 173 Beon-Gil, Bundang-Gu, Seongnam-Si, Gyeonggi-Do, 13620, Republic of Korea
- Hyoung In Choi
- Department of Radiology, Seoul National University Bundang Hospital, 82 Gumi-ro, 173 Beon-Gil, Bundang-Gu, Seongnam-Si, Gyeonggi-Do, 13620, Republic of Korea
- Eugene Lee
- Department of Radiology, Seoul National University Bundang Hospital, 82 Gumi-ro, 173 Beon-Gil, Bundang-Gu, Seongnam-Si, Gyeonggi-Do, 13620, Republic of Korea
- Da-Wit Kim
- Coreline Soft Co. Ltd., World-Cup Bukro 6-Gil, Mapogu, Seoul, 03991, Korea
- Boorym Choi
- Coreline Soft Co. Ltd., World-Cup Bukro 6-Gil, Mapogu, Seoul, 03991, Korea
- Joon Woo Lee
- Department of Radiology, Seoul National University Bundang Hospital, 82 Gumi-ro, 173 Beon-Gil, Bundang-Gu, Seongnam-Si, Gyeonggi-Do, 13620, Republic of Korea.
- Department of Radiology, College of Medicine, Seoul National University, 103, Daehak-Ro, Jongno-Gu, Seoul, 03080, Republic of Korea.
4
Yasaka K, Kawamura M, Sonoda Y, Kubo T, Kiryu S, Abe O. Large multimodality model fine-tuned for detecting breast and esophageal carcinomas on CT: a preliminary study. Jpn J Radiol 2025; 43:779-786. [PMID: 39668277] [PMCID: PMC12052878] [DOI: 10.1007/s11604-024-01718-w]
Abstract
PURPOSE This study aimed to develop a large multimodality model (LMM) that can detect breast and esophageal carcinomas on chest contrast-enhanced CT. MATERIALS AND METHODS In this retrospective study, CT images of 401 (age, 62.9 ± 12.9 years; 169 males), 51 (age, 65.5 ± 11.6 years; 23 males), and 120 (age, 64.6 ± 14.2 years; 60 males) patients were used in the training, validation, and test phases, respectively. The numbers of CT images with breast carcinoma, esophageal carcinoma, and no lesion were 927, 2180, and 2087; 80, 233, and 270; and 184, 246, and 6919 for the training, validation, and test datasets, respectively. The LMM was fine-tuned using CT images as input and text data ("suspicious of breast carcinoma"/"suspicious of esophageal carcinoma"/"no lesion") as reference data on a desktop computer equipped with a single graphics processing unit. Because of the random nature of the training process, supervised learning was performed 10 times. The model that performed best on the validation dataset was then tested using the time-independent test dataset. Detection performance was evaluated by calculating the area under the receiver operating characteristic curve (AUC). RESULTS The sensitivities of the fine-tuned LMM for detecting breast and esophageal carcinomas in the test dataset were 0.929 and 0.951, respectively. The diagnostic performance of the fine-tuned LMM was high, with AUCs of 0.890 (95% CI 0.871-0.909) and 0.880 (95% CI 0.865-0.894) for breast and esophageal carcinomas, respectively. CONCLUSIONS The fine-tuned LMM could detect both breast and esophageal carcinomas on chest contrast-enhanced CT with high diagnostic performance; the usefulness of large multimodality models in chest cancer imaging had not previously been assessed.
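The AUC used as the detection metric above has a simple rank interpretation: it is the probability that a randomly chosen positive image receives a higher model score than a randomly chosen negative one (ties counting one half). A small self-contained sketch of that equivalence (illustrative only; the scores and labels here are made up, not study data):

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive outranks a random negative,
    with ties contributing 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy example: five images, three positive (label 1), two negative (label 0)
scores = [0.9, 0.8, 0.7, 0.4, 0.3]
labels = [1,   1,   0,   1,   0]
print(round(auc(scores, labels), 3))  # → 0.833
```

The O(P·N) double loop is fine for illustration; production libraries compute the same quantity from sorted ranks.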
Affiliation(s)
- Koichiro Yasaka
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan.
- Motohide Kawamura
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Yuki Sonoda
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Takatoshi Kubo
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Shigeru Kiryu
- Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
- Osamu Abe
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
5
Lou M, Ying H, Liu X, Zhou HY, Zhang Y, Yu Y. SDR-Former: A Siamese Dual-Resolution Transformer for liver lesion classification using 3D multi-phase imaging. Neural Netw 2025; 185:107228. [PMID: 39908910] [DOI: 10.1016/j.neunet.2025.107228]
Abstract
Automated classification of liver lesions in multi-phase CT and MR scans is of clinical significance but challenging. This study proposes a novel Siamese Dual-Resolution Transformer (SDR-Former) framework, specifically designed for liver lesion classification in 3D multi-phase CT and MR imaging with varying phase counts. The proposed SDR-Former utilizes a streamlined Siamese Neural Network (SNN) to process multi-phase imaging inputs, yielding robust feature representations while maintaining computational efficiency. The weight-sharing feature of the SNN is further enriched by a hybrid Dual-Resolution Transformer (DR-Former), comprising a 3D Convolutional Neural Network (CNN) and a tailored 3D Transformer for processing high- and low-resolution images, respectively. This hybrid sub-architecture excels at capturing detailed local features and understanding global contextual information, thereby boosting the SNN's feature extraction capabilities. Additionally, a novel Adaptive Phase Selection Module (APSM) is introduced, promoting phase-specific intercommunication and dynamically adjusting each phase's influence on the diagnostic outcome. The proposed SDR-Former framework has been validated through comprehensive experiments on two clinically collected datasets: a 3-phase CT dataset and an 8-phase MR dataset. The experimental results affirm the efficacy of the proposed framework. To support the scientific community, we are releasing our extensive multi-phase MR dataset for liver lesion analysis to the public. This pioneering dataset, the first publicly available multi-phase MR dataset in this field, also underpins the MICCAI LLD-MMRI Challenge. The dataset is publicly available at: https://github.com/LMMMEng/LLD-MMRI-Dataset.
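The two core ideas here, weight sharing across phases and adaptive phase weighting, can be sketched in miniature: one shared encoder embeds every phase, and a gate score per phase sets its influence on the fused representation. This is a toy pure-Python illustration of the pattern only, not the SDR-Former itself; `encode` and `gate` are hypothetical stand-ins for the trained networks:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def siamese_fuse(phases, encode, gate):
    """Embed every imaging phase with the SAME encoder (weight sharing),
    then combine the per-phase feature vectors using softmax weights
    over gate scores (the role an adaptive phase selection module plays)."""
    feats = [encode(p) for p in phases]           # one shared encoder for all phases
    weights = softmax([gate(f) for f in feats])   # each phase's influence
    dim = len(feats[0])
    return [sum(w * f[i] for w, f in zip(weights, feats)) for i in range(dim)]

# Toy stand-ins: 2-D "features" per phase; the gate favors brighter phases
encode = lambda phase: [sum(phase) / len(phase), max(phase)]
gate = lambda feat: feat[0]
fused = siamese_fuse([[1, 2, 3], [4, 5, 6]], encode, gate)
print(len(fused))  # → 2 (the shared feature dimension)
```

Because the encoder is shared, the parameter count is independent of the phase count, which is what lets such a design handle 3-phase CT and 8-phase MR inputs uniformly.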
Affiliation(s)
- Meng Lou
- School of Computing and Data Science, The University of Hong Kong, Hong Kong SAR, China; AI Lab, Deepwise Healthcare, Beijing, China.
- Hanning Ying
- Department of General Surgery, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China.
- Hong-Yu Zhou
- School of Computing and Data Science, The University of Hong Kong, Hong Kong SAR, China; Department of Biomedical Informatics, Harvard Medical School, Boston, USA.
- Yuqin Zhang
- Department of Radiology, The Affiliated LiHuiLi Hospital of Ningbo University, Ningbo, Zhejiang, China.
- Yizhou Yu
- School of Computing and Data Science, The University of Hong Kong, Hong Kong SAR, China.
6
Kim DH. Personalized Medical Approach in Gastrointestinal Surgical Oncology: Current Trends and Future Perspectives. J Pers Med 2025; 15:175. [PMID: 40423047] [DOI: 10.3390/jpm15050175]
Abstract
Advances in artificial intelligence (AI), multi-omic profiling, and sophisticated imaging technologies have significantly advanced personalized medicine in gastrointestinal surgical oncology. These technological innovations enable precise patient stratification, tailored surgical strategies, and individualized therapeutic approaches, thereby significantly enhancing clinical outcomes. Despite remarkable progress, challenges persist, including the standardization and integration of diverse data types, ethical concerns regarding patient privacy, and rigorous clinical validation of predictive models. Addressing these challenges requires establishing international standards for data interoperability, such as Fast Healthcare Interoperability Resources, and adopting advanced security methods, such as homomorphic encryption, to facilitate secure multi-institutional data sharing. Moreover, ensuring model transparency and explainability through techniques such as explainable AI is critical for fostering trust among clinicians and patients. The successful integration of these advanced technologies necessitates strong multidisciplinary collaboration among surgeons, radiologists, geneticists, pathologists, and oncologists. Ultimately, the continued development and effective implementation of these personalized medical strategies complemented by human expertise promise a transformative shift toward patient-centered care, improving long-term outcomes for patients with gastrointestinal cancer.
Affiliation(s)
- Dae Hoon Kim
- Department of Surgery, Chungbuk National University Hospital, Cheongju 28644, Republic of Korea
- Department of Surgery, Chungbuk National University College of Medicine, Cheongju 28644, Republic of Korea
7
Yin C, Zhang H, Du J, Zhu Y, Zhu H, Yue H. Artificial intelligence in imaging for liver disease diagnosis. Front Med (Lausanne) 2025; 12:1591523. [PMID: 40351457] [PMCID: PMC12062035] [DOI: 10.3389/fmed.2025.1591523]
Abstract
Liver diseases, including hepatitis, non-alcoholic fatty liver disease (NAFLD), cirrhosis, and hepatocellular carcinoma (HCC), remain a major global health concern, with early and accurate diagnosis being essential for effective management. Imaging modalities such as ultrasound (US), computed tomography (CT), and magnetic resonance imaging (MRI) play a crucial role in non-invasive diagnosis, but their sensitivity and diagnostic accuracy can be limited. Recent advancements in artificial intelligence (AI) have improved imaging-based liver disease assessment by enhancing pattern recognition, automating fibrosis and steatosis quantification, and aiding in HCC detection. AI-driven imaging techniques have shown promise in fibrosis staging through US, CT, MRI, and elastography, reducing the reliance on invasive liver biopsy. For liver steatosis, AI-assisted imaging methods have improved sensitivity and grading consistency, while in HCC detection and characterization, AI models have enhanced lesion identification, classification, and risk stratification across imaging modalities. The growing integration of AI into liver imaging is reshaping diagnostic workflows and has the potential to improve accuracy, efficiency, and clinical decision-making. This review provides an overview of AI applications in liver imaging, focusing on their clinical utility and implications for the future of liver disease diagnosis.
Affiliation(s)
- Chenglong Yin
- Department of Gastroenterology, Affiliated Hospital 6 of Nantong University, Yancheng Third People's Hospital, Yancheng, Jiangsu, China
- Affiliated Yancheng Hospital, School of Medicine, Southeast University, Yancheng, Jiangsu, China
- Jin Du
- Affiliated Yancheng Hospital, School of Medicine, Southeast University, Yancheng, Jiangsu, China
- Department of Science and Education, Affiliated Hospital 6 of Nantong University, Yancheng Third People's Hospital, Yancheng, Jiangsu, China
- Yingling Zhu
- Affiliated Yancheng Hospital, School of Medicine, Southeast University, Yancheng, Jiangsu, China
- Department of Science and Education, Affiliated Hospital 6 of Nantong University, Yancheng Third People's Hospital, Yancheng, Jiangsu, China
- Hua Zhu
- Department of Gastroenterology, Affiliated Hospital 6 of Nantong University, Yancheng Third People's Hospital, Yancheng, Jiangsu, China
- Affiliated Yancheng Hospital, School of Medicine, Southeast University, Yancheng, Jiangsu, China
- Hongqin Yue
- Department of Gastroenterology, Affiliated Hospital 6 of Nantong University, Yancheng Third People's Hospital, Yancheng, Jiangsu, China
- Affiliated Yancheng Hospital, School of Medicine, Southeast University, Yancheng, Jiangsu, China
8
Yasaka K, Asari Y, Morita Y, Kurokawa M, Tajima T, Akai H, Yoshioka N, Akahane M, Ohtomo K, Abe O, Kiryu S. Super-resolution deep learning reconstruction to evaluate lumbar spinal stenosis status on magnetic resonance myelography. Jpn J Radiol 2025. [PMID: 40266548] [DOI: 10.1007/s11604-025-01787-5]
Abstract
PURPOSE To investigate whether super-resolution deep learning reconstruction (SR-DLR) of MR myelography aids evaluation of lumbar spinal stenosis. MATERIALS AND METHODS In this retrospective study, lumbar MR myelograms of 40 patients (16 males and 24 females; mean age, 59.4 ± 31.8 years) were analyzed. From the MR imaging data, myelograms were separately reconstructed via SR-DLR, deep learning reconstruction (DLR), and conventional zero-filling interpolation (ZIP). Three radiologists, blinded to patient background data and MR reconstruction information, independently evaluated the image sets in terms of the following items: the number of levels affected by lumbar spinal stenosis; and cauda equina depiction, sharpness, noise, artifacts, and overall image quality. RESULTS The median interobserver agreement for the number of lumbar spinal stenosis levels was 0.819, 0.735, and 0.729 for SR-DLR, DLR, and ZIP images, respectively. Depiction of the cauda equina and image sharpness, noise, and overall quality on SR-DLR images were rated significantly better than on DLR and ZIP images by all readers (p < 0.001, Wilcoxon signed-rank test). No significant differences in artifacts were observed between SR-DLR and either DLR or ZIP. CONCLUSIONS SR-DLR improved the image quality of lumbar MR myelograms compared with DLR and ZIP and was associated with better interobserver agreement in the assessment of lumbar spinal stenosis status.
Affiliation(s)
- Koichiro Yasaka
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
- Yusuke Asari
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Yuichi Morita
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Mariko Kurokawa
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Taku Tajima
- Department of Radiology, International University of Health and Welfare Mita Hospital, 1-4-3 Mita, Minato-Ku, Tokyo, 108-8329, Japan
- Hiroyuki Akai
- Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
- Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-Ku, Tokyo, 108-8639, Japan
- Naoki Yoshioka
- Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
- Masaaki Akahane
- Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
- Kuni Ohtomo
- International University of Health and Welfare, 2600-1 Kitakanemaru, Ohtawara, Tochigi, 324-8501, Japan
- Osamu Abe
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Shigeru Kiryu
- Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan.
9
Zossou VBS, Rodrigue Gnangnon FH, Biaou O, de Vathaire F, Allodji RS, Ezin EC. Automatic Diagnosis of Hepatocellular Carcinoma and Metastases Based on Computed Tomography Images. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2025; 38:873-886. [PMID: 39227538] [PMCID: PMC11950545] [DOI: 10.1007/s10278-024-01192-w]
Abstract
Liver cancer, a leading cause of cancer mortality, is often diagnosed by analyzing the grayscale variations in liver tissue across different computed tomography (CT) images. However, the intensity similarity can be strong, making it difficult for radiologists to visually distinguish hepatocellular carcinoma (HCC) from metastases. Accurately differentiating between these two liver cancers is crucial for management and prevention strategies. This study proposes an automated system using a convolutional neural network (CNN) to improve diagnostic accuracy in detecting HCC, metastasis, and healthy liver tissue. The system combines automatic segmentation and classification. The liver lesion segmentation model is implemented with a residual attention U-Net, and a 9-layer CNN implements the lesion classification model, taking as input the segmentation results combined with the original images. The dataset included 300 patients: 223 were used to develop the segmentation model and 77 to test it. These 77 patients also served as inputs for the classification model, comprising 20 HCC cases, 27 with metastases, and 30 healthy. The system achieved a mean Dice score of 87.65% in segmentation and a mean accuracy of 93.97% in classification, both in the test phase. The proposed method is a preliminary study with great potential for helping radiologists diagnose liver cancers.
Affiliation(s)
- Vincent-Béni Sèna Zossou
- Université Paris-Saclay, UVSQ, Univ. Paris-Sud, CESP, Équipe Radiation Epidemiology, 94805, Villejuif, France.
- Centre de recherche en épidémiologie et santé des populations (CESP), U1018, Institut national de la santé et de la recherche médicale (INSERM), 94805, Villejuif, France.
- Department of Clinical Research, Radiation Epidemiology Team, Gustave Roussy, 94805, Villejuif, France.
- Ecole Doctorale Sciences de l'Ingénieur, Université d'Abomey-Calavi, BP 526, Abomey-Calavi, Benin.
- Olivier Biaou
- Faculté des Sciences de la Santé, Université d'Abomey-Calavi, BP 188, Cotonou, Benin
- Department of Radiology, CNHU-HKM, 1213, Cotonou, Benin
- Florent de Vathaire
- Université Paris-Saclay, UVSQ, Univ. Paris-Sud, CESP, Équipe Radiation Epidemiology, 94805, Villejuif, France
- Centre de recherche en épidémiologie et santé des populations (CESP), U1018, Institut national de la santé et de la recherche médicale (INSERM), 94805, Villejuif, France
- Department of Clinical Research, Radiation Epidemiology Team, Gustave Roussy, 94805, Villejuif, France
- Rodrigue S Allodji
- Université Paris-Saclay, UVSQ, Univ. Paris-Sud, CESP, Équipe Radiation Epidemiology, 94805, Villejuif, France
- Centre de recherche en épidémiologie et santé des populations (CESP), U1018, Institut national de la santé et de la recherche médicale (INSERM), 94805, Villejuif, France
- Department of Clinical Research, Radiation Epidemiology Team, Gustave Roussy, 94805, Villejuif, France
- Eugène C Ezin
- Institut de Formation et de Recherche en Informatique, Université d'Abomey-Calavi, BP 526, Cotonou, Benin
- Institut de Mathématiques et de Sciences Physiques, Université d'Abomey-Calavi, 613, Dangbo, Benin
10
Kanemaru N, Yasaka K, Fujita N, Kanzawa J, Abe O. The Fine-Tuned Large Language Model for Extracting the Progressive Bone Metastasis from Unstructured Radiology Reports. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2025; 38:865-872. [PMID: 39187702] [PMCID: PMC11950591] [DOI: 10.1007/s10278-024-01242-3]
Abstract
Early detection of patients with impending bone metastasis is crucial for improving prognosis. This study aimed to investigate the feasibility of a fine-tuned, locally run large language model (LLM) for identifying patients with bone metastasis in unstructured Japanese radiology reports and to compare its performance with manual annotation. This retrospective study included patients with "metastasis" in radiological reports (April 2018-January 2019, August-May 2022, and April-December 2023 for training, validation, and test datasets of 9559, 1498, and 7399 patients, respectively). Radiologists reviewed the clinical indication and diagnosis sections of the radiological reports (used as input data) and classified them into groups 0 (no bone metastasis), 1 (progressive bone metastasis), and 2 (stable or decreased bone metastasis). The data for group 0 were under-sampled in the training and test datasets due to group imbalance. The best-performing model on the validation set was subsequently tested using the test dataset. Two additional radiologists (readers 1 and 2) classified the radiological reports in the test dataset for comparison. The fine-tuned LLM, reader 1, and reader 2 demonstrated accuracies of 0.979, 0.996, and 0.993; sensitivities for groups 0/1/2 of 0.988/0.947/0.943, 1.000/1.000/0.966, and 1.000/0.982/0.954; and classification times of 105, 2312, and 3094 s in the under-sampled test dataset (n = 711), respectively. The fine-tuned LLM extracted patients with bone metastasis with satisfactory performance, comparable to or slightly lower than manual annotation by radiologists, in a markedly shorter time.
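The under-sampling of the majority class (group 0) described above can be sketched generically: shrink every class down to the size of the smallest one by random sampling. This is a toy illustration under stated assumptions, not the authors' code; the records, labels, and the `label_of` accessor are made up for the example:

```python
import random
from collections import Counter

def undersample(records, label_of, seed=0):
    """Randomly under-sample the majority classes so that every class
    is reduced to the size of the smallest class."""
    rng = random.Random(seed)   # fixed seed for reproducibility
    by_class = {}
    for r in records:
        by_class.setdefault(label_of(r), []).append(r)
    target = min(len(v) for v in by_class.values())
    sampled = []
    for items in by_class.values():
        sampled.extend(rng.sample(items, target))
    return sampled

# Toy dataset: many "no metastasis" (0) reports vs. few group 1/2 reports
reports = [("r%d" % i, 0) for i in range(10)] + \
          [("p%d" % i, 1) for i in range(3)] + \
          [("s%d" % i, 2) for i in range(3)]
balanced = undersample(reports, label_of=lambda r: r[1])
print(Counter(label for _, label in balanced))  # each class reduced to 3
```

Under-sampling only the over-represented class keeps evaluation metrics such as per-group sensitivity from being dominated by the majority group, at the cost of discarding some majority-class examples.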
Affiliation(s)
- Noriko Kanemaru
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Koichiro Yasaka
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan.
- Nana Fujita
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Jun Kanzawa
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Osamu Abe
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
11
Arribas Anta J, Moreno-Vedia J, García López J, Rios-Vives MA, Munuera J, Rodríguez-Comas J. Artificial intelligence for detection and characterization of focal hepatic lesions: a review. Abdom Radiol (NY) 2025; 50:1564-1583. [PMID: 39369107] [DOI: 10.1007/s00261-024-04597-x]
Abstract
Focal liver lesions (FLLs) are common incidental findings in abdominal imaging. While the majority of FLLs are benign and asymptomatic, some can be malignant or pre-malignant and need accurate detection and classification. Current imaging techniques, such as computed tomography (CT) and magnetic resonance imaging (MRI), play a crucial role in assessing these lesions. Artificial intelligence (AI), particularly deep learning (DL), offers potential solutions by analyzing large datasets to identify patterns and extract clinical features that aid in the early detection and classification of FLLs. This manuscript reviews the diagnostic capacity of AI-based algorithms in processing CT and MRI to detect benign and malignant FLLs, with an emphasis on the characterization and classification of these lesions, focusing on differentiating benign from pre-malignant and potentially malignant lesions. A comprehensive literature search from January 2010 to April 2024 identified 45 relevant studies. The majority of AI systems employed convolutional neural networks (CNNs), with expert radiologists providing reference standards through manual lesion delineation and histology serving as the gold standard. The studies reviewed indicate that AI-based algorithms demonstrate high accuracy, sensitivity, specificity, and AUCs in detecting and characterizing FLLs. These algorithms excel in differentiating between benign and malignant lesions, optimizing diagnostic protocols, and reducing the need for invasive procedures. Future research should concentrate on the expansion of datasets, the improvement of model explainability, and the validation of AI tools across a range of clinical settings to ensure their applicability and reliability.
Affiliation(s)
- Julia Arribas Anta
- Department of Gastroenterology, University Hospital, 12 Octubre, Madrid, Spain
| | - Juan Moreno-Vedia
- Scientific and Technical Department, Sycai Technologies S.L., Barcelona, Spain
| | - Javier García López
- Scientific and Technical Department, Sycai Technologies S.L., Barcelona, Spain
| | - Miguel Angel Rios-Vives
- Diagnostic Imaging Department, Hospital de la Santa Creu i Sant Pau, Barcelona, Spain
- Advanced Medical Imaging, Artificial Intelligence, and Imaging-Guided Therapy Research Group, Institut de Recerca Sant Pau - Centre CERCA, Barcelona, Spain
| | - Josep Munuera
- Diagnostic Imaging Department, Hospital de la Santa Creu i Sant Pau, Barcelona, Spain
- Advanced Medical Imaging, Artificial Intelligence, and Imaging-Guided Therapy Research Group, Institut de Recerca Sant Pau - Centre CERCA, Barcelona, Spain
| | | |
|
12
|
Zhou XQ, Huang S, Shi XM, Liu S, Zhang W, Shi L, Lv MH, Tang XW. Global trends in artificial intelligence applications in liver disease over seventeen years. World J Hepatol 2025; 17:101721. [PMID: 40177211 PMCID: PMC11959664 DOI: 10.4254/wjh.v17.i3.101721] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/24/2024] [Revised: 01/01/2025] [Accepted: 02/10/2025] [Indexed: 03/26/2025] Open
Abstract
BACKGROUND In recent years, the utilization of artificial intelligence (AI) technology has gained prominence in the field of liver disease. AIM To analyze AI research in the field of liver disease, summarize the current research status, and identify hot spots. METHODS We searched the Web of Science Core Collection database for all articles and reviews on hepatopathy and AI, spanning January 2007 to August 2023. We included 4051 studies for further collection of information, including authors, countries, institutions, publication years, keywords, and references. VOSviewer, CiteSpace, R 4.3.1, and Scimago Graphica were used to visualize the results. RESULTS A total of 4051 articles were analyzed. China was the leading contributor, with 1568 publications, while the United States had the most international collaborations. The most productive institution and journal were the Chinese Academy of Sciences and Frontiers in Oncology, respectively. Keyword co-occurrence analysis can be roughly summarized into four clusters: risk prediction, diagnosis, treatment, and prognosis of liver diseases. "Machine learning", "deep learning", "convolutional neural network", "CT", and "microvascular infiltration" have been popular research topics in recent years. CONCLUSION AI is widely applied in the risk assessment, diagnosis, treatment, and prognosis of liver diseases, with a shift from invasive to noninvasive approaches.
Affiliation(s)
- Xue-Qin Zhou
- Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Luzhou 646099, Sichuan Province, China
| | - Shu Huang
- Department of Gastroenterology, Lianshui People's Hospital of Kangda College Affiliated to Nanjing Medical University, Huaian 223499, Jiangsu Province, China
| | - Xia-Min Shi
- Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Luzhou 646099, Sichuan Province, China
| | - Sha Liu
- Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Luzhou 646099, Sichuan Province, China
| | - Wei Zhang
- Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Luzhou 646099, Sichuan Province, China
| | - Lei Shi
- Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Luzhou 646099, Sichuan Province, China
| | - Mu-Han Lv
- Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Luzhou 646099, Sichuan Province, China
| | - Xiao-Wei Tang
- Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Luzhou 646099, Sichuan Province, China.
| |
|
13
|
Inmutto N, Pojchamarnwiputh S, Na Chiangmai W. Multiphase Computed Tomography Scan Findings for Artificial Intelligence Training in the Differentiation of Hepatocellular Carcinoma and Intrahepatic Cholangiocarcinoma Based on Interobserver Agreement of Expert Abdominal Radiologists. Diagnostics (Basel) 2025; 15:821. [PMID: 40218171 PMCID: PMC11989188 DOI: 10.3390/diagnostics15070821] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2025] [Revised: 03/23/2025] [Accepted: 03/23/2025] [Indexed: 04/14/2025] Open
Abstract
Background/Objective: Hepatocellular carcinoma (HCC) and intrahepatic cholangiocarcinoma (ICC) are the most common primary liver cancers. Computed tomography (CT) is the imaging modality used to evaluate liver nodules and differentiate HCC from ICC. Artificial intelligence (AI), machine learning (ML), and deep learning (DL) have been used in multiple studies in the field of radiology. The purpose of this study was to identify potential CT features for the differentiation of hepatocellular carcinoma and intrahepatic cholangiocarcinoma. Methods: Patients with a radiologically and pathologically confirmed diagnosis of HCC or ICC between January 2013 and December 2015 were included in this retrospective study. Two board-certified diagnostic radiologists independently reviewed multiphase CT images on a picture archiving and communication system (PACS). Arterial hyperenhancement, portal vein thrombosis, lymph node enlargement, and cirrhosis appearance were evaluated. We then calculated the sensitivity, specificity, and likelihood ratios for the diagnosis of HCC and ICC. Interobserver agreement for categorical data was evaluated using Cohen's kappa statistic (k). Results: A total of 74 patients with a pathologically confirmed diagnosis, including 48 HCCs and 26 ICCs, were included in this study. Most HCC patients (95.8%) showed arterial hyperenhancement, with moderate interobserver agreement (k = 0.47). Arterial enhancement in ICC was less frequent, ranging from 15.4% to 26.9%, with substantial agreement between readers (k = 0.66). The two readers showed moderate agreement on cirrhosis appearance in both the HCC and ICC groups (k = 0.43 and k = 0.48, respectively). Cirrhosis appeared more frequently in the HCC group than in the ICC group. Lymph node enlargement was more common in ICC than in HCC, with almost perfect agreement between readers (k = 0.84). Portal vein invasion in HCC was seen in 14.6% by both readers, with substantial agreement (k = 0.66). Portal vein invasion in ICC was seen in 11.5% to 19.2% of patients. The diagnostic performance of the two radiologists was satisfactory, with correct diagnoses in 87.8% and 94.6% of cases. The two radiologists had high sensitivity in diagnosing HCC (95.8% to 97.9%) and high specificity in diagnosing ICC (95.8% to 97.9%). Conclusions: Cirrhosis and lymph node metastasis could serve as ancillary features and be adopted in future AI training algorithms.
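Cohen's kappa, used above to grade interobserver agreement, corrects raw agreement for the agreement two readers would reach by chance; a minimal sketch with hypothetical reader calls (not the study's data):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two readers' categorical ratings:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    p_chance = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical per-patient calls (1 = feature present, 0 = absent).
reader1 = [1, 1, 0, 1, 0, 0, 1, 0]
reader2 = [1, 1, 0, 0, 0, 1, 1, 0]
kappa = cohens_kappa(reader1, reader2)
```

On the usual interpretation scale, 0.41-0.60 is "moderate", 0.61-0.80 "substantial", and above 0.81 "almost perfect", matching the qualitative labels in the abstract.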
Affiliation(s)
| | | | - Wittanee Na Chiangmai
- Department of Radiology, Faculty of Medicine, Chiang Mai University, Chiang Mai 50200, Thailand; (N.I.); (S.P.)
| |
|
14
|
Sattari MA, Zonouri SA, Salimi A, Izadi S, Rezaei AR, Ghezelbash Z, Hayati M, Seifi M, Ekhteraei M. Liver margin segmentation in abdominal CT images using U-Net and Detectron2: annotated dataset for deep learning models. Sci Rep 2025; 15:8721. [PMID: 40082561 PMCID: PMC11906767 DOI: 10.1038/s41598-025-92423-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2024] [Accepted: 02/27/2025] [Indexed: 03/16/2025] Open
Abstract
The segmentation of liver margins in computed tomography (CT) images presents significant challenges due to the complex anatomical variability of the liver, with critical implications for medical diagnostics and treatment planning. In this study, we leverage a substantial dataset of over 4,200 abdominal CT images, meticulously annotated by expert radiologists from Taleghani Hospital in Kermanshah, Iran. Now made available to the research community, this dataset serves as a rich resource for enhancing and validating various neural network models. We employed two advanced deep neural network models, U-Net and Detectron2, for liver segmentation tasks. In terms of the Mask Intersection over Union (Mask IoU) metric, U-Net achieved a Mask IoU of 0.903, demonstrating high efficacy in simpler cases. In contrast, Detectron2 outperformed U-Net with a Mask IoU of 0.974, particularly excelling in accurately delineating liver boundaries in complex cases where the liver appears segmented into two distinct regions within the images. This highlights Detectron2's advanced potential in handling anatomical variations that pose challenges for other models. Our findings not only provide a robust comparative analysis of these models but also establish a framework for further enhancements in medical imaging segmentation tasks. The initiative aims not just to refine liver margin detection but also to facilitate the development of automated systems for diagnosing liver diseases, with potential future applications extending these methodologies to other abdominal organs, potentially transforming the landscape of computational diagnostics in healthcare.
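Mask IoU, the metric used to compare U-Net and Detectron2 above, is the ratio of overlapping foreground pixels to the union of both masks; a minimal sketch on tiny binary masks (illustrative only, not the study's evaluation code):

```python
def mask_iou(mask_a, mask_b):
    """Intersection over Union between two binary masks (2-D lists of 0/1)."""
    flat = [
        (a, b)
        for row_a, row_b in zip(mask_a, mask_b)
        for a, b in zip(row_a, row_b)
    ]
    intersection = sum(a and b for a, b in flat)  # pixels set in both masks
    union = sum(a or b for a, b in flat)          # pixels set in either mask
    return intersection / union if union else 1.0

# Two overlapping 2 x 2 squares inside a 3 x 3 grid.
pred = [[1, 1, 0],
        [1, 1, 0],
        [0, 0, 0]]
truth = [[0, 1, 1],
         [0, 1, 1],
         [0, 0, 0]]
iou = mask_iou(pred, truth)
```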
Affiliation(s)
- Mohammad Amir Sattari
- Electrical Engineering Department, Faculty of Engineering, Razi University, Kermanshah, Iran
| | - Seyed Abed Zonouri
- Electrical Engineering Department, Faculty of Engineering, Razi University, Kermanshah, Iran
| | - Ali Salimi
- Department of Computer Engineering, Faculty of Engineering, Razi University, Kermanshah, Iran
| | - Saadat Izadi
- Department of Computer Engineering, Faculty of Engineering, Razi University, Kermanshah, Iran
| | - Ali Reza Rezaei
- Department of Computer Engineering, Faculty of Engineering, Razi University, Kermanshah, Iran
| | - Zahra Ghezelbash
- Radiology Department, Clinical Research Development Center, Imam Reza Hospital, Kermanshah University of Medical Sciences, Kermanshah, Iran
| | - Mohsen Hayati
- Electrical Engineering Department, Faculty of Engineering, Razi University, Kermanshah, Iran.
| | - Mehrdad Seifi
- Clinical Research Development Centre, Taleghani and Imam Ali Hospital, Kermanshah University of Medical Sciences, Kermanshah, Iran
| | - Milad Ekhteraei
- Medical Biology Research Center, Health Technology Institute, Kermanshah University of Medical Sciences, Kermanshah, Iran
| |
|
15
|
Adusumilli P, Ravikumar N, Hall G, Scarsbrook AF. A Methodological Framework for AI-Assisted Diagnosis of Ovarian Masses Using CT and MR Imaging. J Pers Med 2025; 15:76. [PMID: 39997351 PMCID: PMC11856859 DOI: 10.3390/jpm15020076] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2025] [Revised: 02/15/2025] [Accepted: 02/17/2025] [Indexed: 02/26/2025] Open
Abstract
Background: Ovarian cancer encompasses a diverse range of neoplasms originating in the ovaries, fallopian tubes, and peritoneum. Although it is one of the commonest gynaecological malignancies, there are no validated screening strategies for early detection. Diagnosis typically relies on imaging, biomarkers, and multidisciplinary team discussions. Accurate interpretation of CT and MRI may be challenging, especially in borderline cases. This study proposes a methodological pipeline to develop and evaluate deep learning (DL) models that can assist in classifying ovarian masses from CT and MRI data, potentially improving diagnostic confidence and patient outcomes. Methods: A multi-institutional retrospective dataset was compiled, supplemented by external data from The Cancer Genome Atlas. Two classification workflows were examined: (1) whole-volume input and (2) a lesion-focused region of interest. Multiple DL architectures, including ResNet, DenseNet, the transformer-based UNeST, and attention-based multiple-instance learning (MIL), were implemented within the PyTorch-based MONAI framework. Class imbalance was mitigated using focal loss, oversampling, and dynamic class weighting. Hyperparameters were optimised with Optuna, and balanced accuracy was the primary metric. Results: On a preliminary dataset, the proposed framework demonstrated the feasibility of multi-class classification of ovarian masses. Initial experiments highlighted the potential of transformers and MIL for identifying relevant imaging features. Conclusions: A reproducible methodological pipeline for DL-based ovarian mass classification using CT and MRI scans has been established. Future work will leverage a multi-institutional dataset to refine these models, aiming to enhance clinical workflows and improve patient outcomes.
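Focal loss, one of the class-imbalance remedies mentioned, down-weights well-classified examples so training gradient is dominated by hard cases; a minimal binary sketch (the study's multi-class setup and chosen weights are not reproduced here):

```python
import math

def binary_focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Focal loss for one predicted positive-class probability p and a
    label y in {0, 1}: -alpha_t * (1 - p_t)**gamma * log(p_t).
    With gamma = 0 and alpha = 1 it reduces to plain cross-entropy."""
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

easy = binary_focal_loss(0.9, 1)  # confident and correct: heavily down-weighted
hard = binary_focal_loss(0.1, 1)  # confident and wrong: dominates the loss
```

The `(1 - p_t) ** gamma` modulating factor is what lets abundant easy negatives contribute almost nothing, which is the point of using it on imbalanced mass categories.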
Affiliation(s)
- Pratik Adusumilli
- Department of Clinical Radiology, Leeds Teaching Hospitals NHS Trust, Leeds LS9 7TF, UK
- Leeds Institute of Medical Research, University of Leeds, Leeds LS2 9NL, UK
| | | | - Geoff Hall
- Department of Medical Oncology, Leeds Teaching Hospitals NHS Trust, Leeds LS2 9JT, UK
- Leeds Institute for Data Analytics, University of Leeds, Leeds LS2 9NL, UK
| | - Andrew F. Scarsbrook
- Department of Clinical Radiology, Leeds Teaching Hospitals NHS Trust, Leeds LS9 7TF, UK
- Leeds Institute of Medical Research, University of Leeds, Leeds LS2 9NL, UK
| |
|
16
|
Yasaka K, Kanzawa J, Kanemaru N, Koshino S, Abe O. Fine-Tuned Large Language Model for Extracting Patients on Pretreatment for Lung Cancer from a Picture Archiving and Communication System Based on Radiological Reports. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2025; 38:327-334. [PMID: 38955964 PMCID: PMC11811339 DOI: 10.1007/s10278-024-01186-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/20/2024] [Revised: 06/17/2024] [Accepted: 06/19/2024] [Indexed: 07/04/2024]
Abstract
This study aimed to investigate the performance of a fine-tuned large language model (LLM) in extracting patients on pretreatment for lung cancer from a picture archiving and communication system (PACS) and to compare it with that of radiologists. Patients whose radiological reports contained the term "lung cancer" (3111 for training, 124 for validation, and 288 for testing) were included in this retrospective study. Based on the clinical indication and diagnosis sections of the radiological report (used as input data), they were classified into four groups (used as reference data): group 0 (no lung cancer), group 1 (pretreatment lung cancer present), group 2 (after treatment for lung cancer), and group 3 (planning radiation therapy). Using the training and validation datasets, fine-tuning of the pretrained LLM was conducted ten times. Due to group imbalance, group 2 data were undersampled in training. The performance of the best-performing model in the validation dataset was assessed on the independent test dataset. For testing purposes, two other radiologists (readers 1 and 2) also classified the radiological reports. The overall accuracy of the fine-tuned LLM, reader 1, and reader 2 was 0.983, 0.969, and 0.969, respectively. The sensitivity for differentiating groups 0/1/2/3 by the LLM, reader 1, and reader 2 was 1.000/0.948/0.991/1.000, 0.750/0.879/0.996/1.000, and 1.000/0.931/0.978/1.000, respectively. The time required for classification by the LLM, reader 1, and reader 2 was 46 s, 2539 s, and 1538 s, respectively. The fine-tuned LLM effectively extracted patients on pretreatment for lung cancer from the PACS, with performance comparable to radiologists in a shorter time.
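Undersampling the over-represented group before fine-tuning, as done here for group 2, amounts to capping the majority class at roughly the size of the largest minority class; a minimal sketch (record fields and the helper name are illustrative, not the study's code):

```python
import random

def undersample_majority(examples, majority_label, ratio=1.0, seed=42):
    """Keep every minority-class example; randomly keep at most
    ratio * (largest minority class size) majority-class examples."""
    rng = random.Random(seed)
    minority = [e for e in examples if e["label"] != majority_label]
    majority = [e for e in examples if e["label"] == majority_label]
    counts = {}
    for e in minority:
        counts[e["label"]] = counts.get(e["label"], 0) + 1
    cap = int(ratio * max(counts.values())) if counts else 0
    rng.shuffle(majority)
    return minority + majority[:cap]

# Hypothetical report labels: group 2 dominates before balancing.
reports = (
    [{"text": "...", "label": 2}] * 10
    + [{"text": "...", "label": 1}] * 3
    + [{"text": "...", "label": 0}] * 2
)
balanced = undersample_majority(reports, majority_label=2)
```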
Affiliation(s)
- Koichiro Yasaka
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan.
| | - Jun Kanzawa
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Noriko Kanemaru
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Saori Koshino
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Osamu Abe
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| |
|
17
|
Ma J, Yang H, Chou Y, Yoon J, Allison T, Komandur R, McDunn J, Tasneem A, Do RK, Schwartz LH, Zhao B. Generalizability of lesion detection and segmentation when ScaleNAS is trained on a large multi-organ dataset and validated in the liver. Med Phys 2025; 52:1005-1018. [PMID: 39576046 DOI: 10.1002/mp.17504] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2024] [Revised: 09/25/2024] [Accepted: 10/05/2024] [Indexed: 02/04/2025] Open
Abstract
BACKGROUND Tumor assessment through imaging is crucial for diagnosing and treating cancer. Lesions in the liver, a common site for metastatic disease, are particularly challenging to accurately detect and segment. This labor-intensive task is subject to individual variation, which drives interest in automation using artificial intelligence (AI). PURPOSE Evaluate AI for lesion detection and lesion segmentation using CT in the context of human performance on the same task. Use internal testing to determine how an AI-developed model (ScaleNAS) trained on lesions in multiple organs performs when tested specifically on liver lesions in a dataset integrating real-world and clinical trial data. Use external testing to evaluate whether ScaleNAS's performance generalizes to publicly available colorectal liver metastases (CRLM) from The Cancer Imaging Archive (TCIA). METHODS The CUPA study dataset included patients whose CT scan of chest, abdomen, or pelvis at Columbia University between 2010-2020 indicated solid tumors (CUIMC, n = 5011) and from two clinical trials in metastatic colorectal cancer, PRIME (n = 1183) and Amgen (n = 463). Inclusion required ≥1 measurable lesion; exclusion criteria eliminated 1566 patients. Data were divided at the patient level into training (n = 3996), validation (n = 570), and testing (n = 1529) sets. To create the reference standard for training and validation, each case was annotated by one of six radiologists, randomly assigned, who marked the CUPA lesions without access to any previous annotations. For internal testing we refined the CUPA test set to contain only patients who had liver lesions (n = 525) and formed an enhanced reference standard through expert consensus reviewing prior annotations. For external testing, TCIA-CRLM (n = 197) formed the test set. The reference standard for TCIA-CRLM was formed by consensus review of the original annotation and contours by two new radiologists. 
Metrics for lesion detection were sensitivity and false positives. Lesion segmentation was assessed with the median Dice coefficient, under-segmentation ratio (USR), and over-segmentation ratio (OSR). Subgroup analysis examined the influence of lesion size ≥ 10 mm (measurable by RECIST 1.1) versus all lesions (important for early identification of disease progression). RESULTS ScaleNAS trained on all lesions achieved a sensitivity of 71.4% and Dice of 70.2% for liver lesions in the CUPA internal test set (3495 lesions) and a sensitivity of 68.2% and Dice of 64.2% in the TCIA-CRLM external test set (638 lesions). Human radiologists had a mean sensitivity of 53.5% and Dice of 73.9% in CUPA and a sensitivity of 84.1% and Dice of 88.4% in TCIA-CRLM. Performance improved for both ScaleNAS and the radiologists in the subgroup that excluded sub-centimeter lesions. CONCLUSIONS Our study presents the first evaluation of ScaleNAS in medical imaging, demonstrating its liver lesion detection and segmentation performance across diverse datasets. Using consensus reference standards from multiple radiologists, we addressed inter-observer variability and contributed to consistency in lesion annotation. While ScaleNAS does not surpass radiologists in performance, it offers fast and reliable results with potential utility in providing initial contours for radiologists. Future work will extend this model to lung and lymph node lesions, ultimately aiming to enhance clinical applications by generalizing detection and segmentation across tissue types.
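The segmentation metrics above are set-overlap quantities over voxels; a minimal sketch of Dice plus under-/over-segmentation ratios, assuming the common convention of normalizing both ratios by the reference volume (the paper's exact definition may differ):

```python
def dice_coefficient(pred, ref):
    """Dice overlap between two sets of voxel indices."""
    if not pred and not ref:
        return 1.0
    return 2 * len(pred & ref) / (len(pred) + len(ref))

def under_over_segmentation(pred, ref):
    """Under-/over-segmentation ratios, normalized by reference volume
    (one common convention, assumed here)."""
    usr = len(ref - pred) / len(ref)   # reference voxels the model missed
    osr = len(pred - ref) / len(ref)   # extra voxels the model added
    return usr, osr

ref = set(range(10))       # 10 reference voxels
pred = set(range(5, 13))   # model misses 5 of them and adds 3 extras
d = dice_coefficient(pred, ref)
usr, osr = under_over_segmentation(pred, ref)
```

Reporting USR and OSR alongside Dice separates the two failure modes (missed tumor versus spill into healthy tissue) that a single overlap score conflates.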
Affiliation(s)
- Jingchen Ma
- Department of Radiology, Columbia University Irving Medical Center, New York, New York, USA
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| | - Hao Yang
- Department of Radiology, Columbia University Irving Medical Center, New York, New York, USA
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| | - Yen Chou
- Department of Radiology, Columbia University Irving Medical Center, New York, New York, USA
- Fu Jen Catholic University Hospital, Department of Medical Imaging and Fu Jen Catholic University, School of Medicine, New Taipei City, Taiwan
| | - Jin Yoon
- Department of Radiology, Columbia University Irving Medical Center, New York, New York, USA
| | - Tavis Allison
- Department of Radiology, Columbia University Irving Medical Center, New York, New York, USA
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| | | | - Jon McDunn
- Project Data Sphere, Cary, North Carolina, USA
| | | | - Richard K Do
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| | - Lawrence H Schwartz
- Department of Radiology, Columbia University Irving Medical Center, New York, New York, USA
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| | - Binsheng Zhao
- Department of Radiology, Columbia University Irving Medical Center, New York, New York, USA
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| |
|
18
|
Bhange M, Telange D. Convergence of nanotechnology and artificial intelligence in the fight against liver cancer: a comprehensive review. Discov Oncol 2025; 16:77. [PMID: 39841330 PMCID: PMC11754566 DOI: 10.1007/s12672-025-01821-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/16/2024] [Accepted: 01/15/2025] [Indexed: 01/23/2025] Open
Abstract
Liver cancer is one of the most challenging malignancies, often associated with poor prognosis and limited treatment options. Recent advancements in nanotechnology and artificial intelligence (AI) have opened new frontiers in the fight against this disease. Nanotechnology enables precise, targeted drug delivery, enhancing the efficacy of therapeutics while minimizing off-target effects. Simultaneously, AI contributes to improved diagnostic accuracy, predictive modeling, and the development of personalized treatment strategies. This review explores the convergence of nanotechnology and AI in liver cancer treatment, evaluating current progress, identifying existing research gaps, and discussing future directions. We highlight how AI-powered algorithms can optimize nanocarrier design, facilitate real-time monitoring of treatment efficacy, and enhance clinical decision-making. By integrating AI with nanotechnology, clinicians can achieve more accurate patient stratification and treatment personalization, ultimately improving patient outcomes. This convergence holds significant promise for transforming liver cancer therapy into a more precise, individualized, and efficient process. However, challenges remain, including data privacy, regulatory hurdles, and the need for large-scale clinical validation. Addressing these issues will be essential to fully realizing the potential of these technologies in oncology.
Affiliation(s)
- Manjusha Bhange
- Department of Pharmaceutics, Datta Meghe College of Pharmacy, Datta Meghe Institute of Higher Education and Research (DU), Sawangi Meghe, Wardha, Maharashtra, 442001, India.
| | - Darshan Telange
- Department of Pharmaceutics, Datta Meghe College of Pharmacy, Datta Meghe Institute of Higher Education and Research (DU), Sawangi Meghe, Wardha, Maharashtra, 442001, India
| |
|
19
|
Alshamrani K, Alshamrani HA. An Efficient Dual-Sampling Approach for Chest CT Diagnosis. J Multidiscip Healthc 2025; 18:239-253. [PMID: 39839996 PMCID: PMC11748922 DOI: 10.2147/jmdh.s472170] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2024] [Accepted: 08/12/2024] [Indexed: 01/23/2025] Open
Abstract
Background This paper aimed to enhance the diagnostic process for lung abnormalities in computed tomography (CT) images, particularly in distinguishing cancer cells from normal chest tissue. The rapid and uneven growth of cancer cells, presenting with variable symptoms, necessitates an advanced approach for accurate identification. Objective To develop a dual-sampling network targeting lung infection regions to address the diagnostic challenge. The network was designed to adapt to the uneven distribution of infection areas, which could be predominantly minor or major in different regions. Methods A total of 150 CT images were analyzed using the dual-sampling network. Two sampling approaches were compared: the proposed dual-sampling technique and a uniform sampling method. Results The dual-sampling network demonstrated superior performance in detecting lung abnormalities compared to uniform sampling. With the uniform sampling method, the network achieved an F1-score of 94.2%, accuracy of 94.5%, sensitivity of 93.5%, specificity of 95.4%, and an area under the curve (AUC) of 98.4%. With the proposed dual-sampling method, the network reached an F1-score of 94.9%, accuracy of 95.2%, specificity of 96.1%, sensitivity of 94.2%, and an AUC of 95.5%. Conclusion This study suggests that the proposed dual-sampling network significantly improves the precision of lung abnormality diagnosis in CT images. This advancement has the potential to aid radiologists in making more accurate diagnoses, ultimately benefiting patient treatment and contributing to better overall population health. The efficiency and effectiveness of the dual-sampling approach in managing the uneven distribution of lung infection areas are key to its success.
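The F1-score, accuracy, sensitivity, and specificity quoted above all derive from the four confusion-matrix counts; a minimal sketch with hypothetical counts for 100 scans (not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # recall on abnormal scans
    specificity = tn / (tn + fp)   # recall on normal scans
    precision = tp / (tp + fp)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "f1": 2 * precision * sensitivity / (precision + sensitivity),
    }

# Hypothetical: 50 abnormal and 50 normal scans.
m = diagnostic_metrics(tp=47, fp=2, tn=48, fn=3)
```

Note that F1 ignores true negatives entirely, which is why it can move in a different direction from AUC when the sampling scheme changes.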
Affiliation(s)
- Khalaf Alshamrani
- Radiology Sciences Department, College of Medical Sciences, Najran University, Najran, Saudi Arabia
- School of Medicine and Population Health, University of Sheffield, Sheffield, UK
| | - Hassan A Alshamrani
- Radiology Sciences Department, College of Medical Sciences, Najran University, Najran, Saudi Arabia
| |
|
20
|
Luan A, von Rabenau L, Serebrakian AT, Crowe CS, Do BH, Eberlin KR, Chang J, Pridgen BC. Machine Learning-Aided Diagnosis Enhances Human Detection of Perilunate Dislocations. Hand (N Y) 2025:15589447241308603. [PMID: 39815415 PMCID: PMC11736725 DOI: 10.1177/15589447241308603] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/18/2025]
Abstract
BACKGROUND Perilunate/lunate injuries are frequently misdiagnosed. We hypothesize that utilization of a machine learning algorithm can improve human detection of perilunate/lunate dislocations. METHODS Participants from emergency medicine, hand surgery, and radiology were asked to evaluate 30 lateral wrist radiographs for the presence of a perilunate/lunate dislocation with and without the use of a machine learning algorithm, which was used to label the lunate. Human performance with and without the machine learning tool was evaluated using sensitivity, specificity, accuracy, and F1 score. RESULTS A total of 137 participants were recruited, with 55 respondents from emergency medicine, 33 from radiology, and 49 from hand surgery. Thirty-nine participants were attending physicians or fellows, and 98 were residents. Use of the machine learning tool improved specificity from 88% to 94%, accuracy from 89% to 93%, and F1 score from 0.89 to 0.92. When stratified by training level, attending physicians and fellows had an improvement in specificity from 93% to 97%. For residents, use of the machine learning tool improved accuracy from 86% to 91% and specificity from 86% to 93%. The performance of surgery and radiology residents improved when assisted by the tool, achieving accuracy similar to that of attendings, and their assisted diagnostic performance reached levels similar to that of the fully automated artificial intelligence tool. CONCLUSIONS Use of a machine learning tool improves resident accuracy for radiographic detection of perilunate dislocations and improves specificity for all training levels. This may help to decrease misdiagnosis of perilunate dislocations, particularly when subspecialist evaluation is delayed.
Affiliation(s)
- Anna Luan
- Stanford University, CA, USA
- Massachusetts General Hospital, Boston, USA
| | | | | | | | | | | | | | - Brian C. Pridgen
- University of Washington, Seattle, USA
- The Buncke Clinic, San Francisco, CA, USA
| |
|
21
|
Nadeem A, Ashraf R, Mahmood T, Parveen S. Automated CAD system for early detection and classification of pancreatic cancer using deep learning model. PLoS One 2025; 20:e0307900. [PMID: 39752442 PMCID: PMC11698441 DOI: 10.1371/journal.pone.0307900] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2024] [Accepted: 07/10/2024] [Indexed: 01/06/2025] Open
Abstract
Accurate diagnosis of pancreatic cancer using CT scan images is critical for early detection and treatment, potentially saving numerous lives globally. Manual identification of pancreatic tumors by radiologists is challenging and time-consuming owing to the complex nature of CT scan images, and variations in tumor shape, size, and location further complicate detection and classification. To address this challenge, we propose a four-stage computer-aided diagnosis framework. In the preprocessing stage, the input image is resized to 227 × 227, the RGB image is converted to grayscale, and noise is removed without blurring edges by applying anisotropic diffusion filtering. In the segmentation stage, a binary image is created from the preprocessed grayscale image by thresholding, edges are highlighted by Sobel filtering, and watershed segmentation isolates the tumor region; a U-Net method is also implemented for segmentation. The geometric structure of the image is then refined using morphological operations, and texture features are extracted with a gray-level co-occurrence matrix, computed by analyzing the spatial relationships of pixel intensities in the refined image and counting the occurrences of pixel pairs with specific intensity values and spatial relationships. The detection stage analyzes the extracted features of the tumor region by labeling connected components and selecting the region with the highest density to locate the tumor area, achieving an accuracy of 99.64%. In the classification stage, the system first classifies the detected region as normal or pancreatic tumor and then grades tumors as benign, pre-malignant, or malignant using a proposed reduced 11-layer AlexNet model.
The classification stage attained an accuracy of 98.72%, an AUC of 0.9979, and an overall average processing time of 1.51 seconds, demonstrating the system's ability to identify and classify pancreatic cancers effectively and efficiently.
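The texture step above counts co-occurring pixel-intensity pairs at a fixed spatial offset. A minimal pure-NumPy sketch of a gray-level co-occurrence matrix and two common features derived from it (this is a simplified illustration, not the authors' pipeline, which also uses wavelet preprocessing and more feature channels):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix: counts pixel pairs (i, j)
    separated by the offset (dy, dx) in a quantized grayscale image."""
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=np.int64)
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m

def glcm_features(m):
    """Contrast and homogeneity from a normalized GLCM."""
    p = m / m.sum()
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return contrast, homogeneity
```

In practice a library routine (e.g. scikit-image's `graycomatrix`) would replace the explicit loops, but the counting logic is the same.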
Collapse
Affiliation(s)
- Abubakar Nadeem
- Department of Computer Science, National Textile University, Faisalabad, Pakistan
| | - Rahan Ashraf
- Department of Computer Science, National Textile University, Faisalabad, Pakistan
| | - Toqeer Mahmood
- Department of Computer Science, National Textile University, Faisalabad, Pakistan
| | - Sajida Parveen
- Department of Computer Science, National Textile University, Faisalabad, Pakistan
| |
Collapse
|
22
|
Gupta P, Hsu Y, Liang L, Chu Y, Chu C, Wu J, Chen J, Tseng W, Yang Y, Lee T, Hung C, Wu C. Automatic localization and deep convolutional generative adversarial network-based classification of focal liver lesions in computed tomography images: A preliminary study. J Gastroenterol Hepatol 2025; 40:166-176. [PMID: 39542428 PMCID: PMC11771580 DOI: 10.1111/jgh.16803] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/19/2024] [Revised: 10/02/2024] [Accepted: 10/24/2024] [Indexed: 11/17/2024]
Abstract
BACKGROUND AND AIM Computed tomography of the abdomen exhibits subtle and complex features of liver lesions, subjectively interpreted by physicians. We developed a deep learning-based localization and classification (DLLC) system for focal liver lesions (FLLs) in computed tomography imaging that could assist physicians in more robust clinical decision-making. METHODS We conducted a retrospective study (approval no. EMRP-109-058) on 1589 patients with 17 335 slices containing 3195 FLLs, using data from January 2004 to December 2020. The training set included 1272 patients (male: 776, mean age 62 ± 10.9), and the test set included 317 patients (male: 228, mean age 57 ± 11.8). The slices were annotated by annotators with different experience levels, and the DLLC system was developed using generative adversarial networks for data augmentation. A comparative analysis was performed for the DLLC system versus physicians using external data. RESULTS Our DLLC system demonstrated a mean average precision of 0.81 for localization. The system's overall accuracy for multiclass classification was 0.97 (95% confidence interval [CI]: 0.95-0.99). For FLLs ≤ 3 cm, the system achieved a localization accuracy of 0.83 (95% CI: 0.68-0.98), and for FLLs > 3 cm, a localization accuracy of 0.87 (95% CI: 0.77-0.97). For classification, the accuracy was 0.95 (95% CI: 0.92-0.98) for FLLs ≤ 3 cm and 0.97 (95% CI: 0.94-1.00) for FLLs > 3 cm. CONCLUSION This system can provide an accurate and non-invasive method for diagnosing liver conditions, making it a valuable tool for hepatologists and radiologists.
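The accuracies above are reported with 95% confidence intervals. As a hedged illustration of one standard way such an interval is obtained for a proportion (the normal-approximation, or Wald, interval; the authors may have used a different method), clipped to the valid [0, 1] range:

```python
import math

def wald_ci(successes, n, z=1.96):
    """95% normal-approximation (Wald) confidence interval for a
    proportion, clipped to [0, 1]."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)
```

With 97 correct calls out of 100 this gives roughly (0.94, 1.00), matching the shape of the intervals quoted in the abstract.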
Collapse
Affiliation(s)
- Pushpanjali Gupta
- Division of Translational ResearchTaipei Veterans General HospitalTaipeiTaiwan
- Health Innovation CenterNational Yang Ming Chiao Tung UniversityTaipeiTaiwan
- Institute of Biomedical InformaticsNational Yang Ming Chiao Tung UniversityTaipeiTaiwan
- Institute of Public HealthNational Yang Ming Chiao Tung UniversityTaipeiTaiwan
| | - Yao‐Chun Hsu
- Division of Gastroenterology and HepatologyE‐DA HospitalKaohsiungTaiwan
- School of MedicineI‐Shou UniversityKaohsiungTaiwan
| | - Li‐Lin Liang
- Health Innovation CenterNational Yang Ming Chiao Tung UniversityTaipeiTaiwan
- Institute of Public HealthNational Yang Ming Chiao Tung UniversityTaipeiTaiwan
| | - Yuan‐Chia Chu
- Information Management OfficeTaipei Veterans General HospitalTaipeiTaiwan
- Big Data CenterTaipei Veterans General HospitalTaipeiTaiwan
- Department of Information ManagementNational Taipei University of Nursing and Health SciencesTaipeiTaiwan
| | - Chia‐Sheng Chu
- Ph.D. Program of Interdisciplinary MedicineNational Yang Ming Chiao Tung UniversityTaipeiTaiwan
- Division of Gastroenterology and HepatologyTaipei City Hospital Yang Ming BranchTaipeiTaiwan
| | - Jaw‐Liang Wu
- School of MedicineNational Yang Ming Chiao Tung UniversityTaipeiTaiwan
| | - Jian‐An Chen
- Institute of Biomedical InformaticsNational Yang Ming Chiao Tung UniversityTaipeiTaiwan
| | - Wei‐Hsiu Tseng
- Institute of Biomedical InformaticsNational Yang Ming Chiao Tung UniversityTaipeiTaiwan
| | - Ya‐Ching Yang
- Institute of Biomedical InformaticsNational Yang Ming Chiao Tung UniversityTaipeiTaiwan
| | - Teng‐Yu Lee
- Division of Gastroenterology and HepatologyTaichung Veterans General HospitalTaichungTaiwan
- School of MedicineChung Shan Medical UniversityTaichungTaiwan
| | - Che‐Lun Hung
- Health Innovation CenterNational Yang Ming Chiao Tung UniversityTaipeiTaiwan
- Institute of Biomedical InformaticsNational Yang Ming Chiao Tung UniversityTaipeiTaiwan
| | - Chun‐Ying Wu
- Division of Translational ResearchTaipei Veterans General HospitalTaipeiTaiwan
- Health Innovation CenterNational Yang Ming Chiao Tung UniversityTaipeiTaiwan
- Institute of Biomedical InformaticsNational Yang Ming Chiao Tung UniversityTaipeiTaiwan
- Institute of Public HealthNational Yang Ming Chiao Tung UniversityTaipeiTaiwan
- Ph.D. Program of Interdisciplinary MedicineNational Yang Ming Chiao Tung UniversityTaipeiTaiwan
- Department of Public HealthChina Medical UniversityTaichungTaiwan
| |
Collapse
|
23
|
Yasaka K, Nomura T, Kamohara J, Hirakawa H, Kubo T, Kiryu S, Abe O. Classification of Interventional Radiology Reports into Technique Categories with a Fine-Tuned Large Language Model. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2024:10.1007/s10278-024-01370-w. [PMID: 39673010 DOI: 10.1007/s10278-024-01370-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/13/2024] [Revised: 11/29/2024] [Accepted: 12/02/2024] [Indexed: 12/15/2024]
Abstract
The aim of this study was to develop a fine-tuned large language model that classifies interventional radiology reports into technique categories and to compare its performance with readers. This retrospective study included 3198 patients (1758 males and 1440 females; age, 62.8 ± 16.8 years) who underwent interventional radiology from January 2018 to July 2024. The training, validation, and test datasets involved 2292, 250, and 656 patients, respectively. Input data comprised the texts of the clinical indication, imaging diagnosis, and image-finding sections of interventional radiology reports. Manually classified technique categories (15 categories in total) were utilized as reference data. Fine-tuning of a Bidirectional Encoder Representations from Transformers (BERT) model was performed using the training and validation datasets. This process was repeated 15 times due to the randomness of the learning process. The best-performing model, which showed the highest accuracy among the 15 trials, was selected for further evaluation on the independent test dataset. The report classification involved one radiologist (reader 1) and two radiology residents (readers 2 and 3). The accuracy and macrosensitivity (average of each category's sensitivity) of the best-performing model in the validation dataset were 0.996 and 0.994, respectively. For the test dataset, the accuracy/macrosensitivity were 0.988/0.980, 0.986/0.977, 0.989/0.979, and 0.988/0.980 for the best model, reader 1, reader 2, and reader 3, respectively. The model required 0.178 s for classification per patient, which was 17.5-19.9 times faster than the readers. In conclusion, the fine-tuned large language model classified interventional radiology reports into technique categories with high accuracy, similar to the readers, within a remarkably shorter time.
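Macrosensitivity is defined in the abstract as the average of each category's sensitivity (i.e., macro-averaged recall). A minimal sketch of that computation over the 15 technique categories (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def macrosensitivity(y_true, y_pred, n_classes):
    """Average per-class sensitivity (recall), over classes that occur
    in the reference labels."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    per_class = []
    for c in range(n_classes):
        mask = y_true == c
        if mask.any():
            # Fraction of this class's cases that were labeled correctly.
            per_class.append(np.mean(y_pred[mask] == c))
    return float(np.mean(per_class))
```

Unlike plain accuracy, this weights rare technique categories equally with common ones, which is why the paper reports both.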
Collapse
Affiliation(s)
- Koichiro Yasaka
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan.
| | - Takuto Nomura
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
| | - Jun Kamohara
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
| | - Hiroshi Hirakawa
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
| | - Takatoshi Kubo
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
| | - Shigeru Kiryu
- Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
| | - Osamu Abe
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
| |
Collapse
|
24
|
Vadlamudi S, Kumar V, Ghosh D, Abraham A. Artificial intelligence-powered precision: Unveiling the landscape of liver disease diagnosis—A comprehensive review. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE 2024; 138:109452. [DOI: 10.1016/j.engappai.2024.109452] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/07/2025]
|
25
|
Kanzawa J, Yasaka K, Ohizumi Y, Morita Y, Kurokawa M, Abe O. Effect of deep learning reconstruction on the assessment of pancreatic cystic lesions using computed tomography. Radiol Phys Technol 2024; 17:827-833. [PMID: 39147953 PMCID: PMC11579065 DOI: 10.1007/s12194-024-00834-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2024] [Revised: 07/30/2024] [Accepted: 08/09/2024] [Indexed: 08/17/2024]
Abstract
This study aimed to compare the image quality and detection performance of pancreatic cystic lesions between computed tomography (CT) images reconstructed by deep learning reconstruction (DLR) and filtered back projection (FBP). This retrospective study included 54 patients (mean age: 67.7 ± 13.1) who underwent contrast-enhanced CT from May 2023 to August 2023. Among eligible patients, 30 and 24 were positive and negative for pancreatic cystic lesions, respectively. DLR and FBP were used to reconstruct portal venous phase images. Objective image quality analyses calculated quantitative image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) using regions of interest on the abdominal aorta, pancreatic lesion, and pancreatic parenchyma. Three blinded radiologists performed subjective image quality assessment and lesion detection tests. Lesion depiction, normal structure illustration, subjective image noise, and overall image quality were utilized as subjective image quality indicators. DLR significantly reduced quantitative image noise compared with FBP (p < 0.001). SNR and CNR were significantly improved with DLR compared with FBP (p < 0.001). The three radiologists rated significantly higher scores for DLR in all subjective image quality indicators (p ≤ 0.029). The performance of DLR and FBP was comparable in lesion detection, with no statistically significant differences in the area under the receiver operating characteristic curve, sensitivity, specificity, and accuracy. DLR reduced image noise and improved image quality with a clearer depiction of pancreatic structures. These improvements may have a positive effect on evaluating pancreatic cystic lesions, which can contribute to appropriate management of these lesions.
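SNR and CNR conventions vary between studies; one common definition, sketched here as an assumption rather than the authors' exact formulas, divides the ROI mean (or the lesion-background contrast) by the image noise (standard deviation in a reference ROI):

```python
import numpy as np

def snr(roi, noise_sd):
    """Signal-to-noise ratio: mean ROI attenuation over image noise."""
    return float(np.mean(roi)) / noise_sd

def cnr(lesion_roi, background_roi, noise_sd):
    """Contrast-to-noise ratio: absolute lesion-vs-parenchyma contrast
    over image noise."""
    return abs(float(np.mean(lesion_roi)) - float(np.mean(background_roi))) / noise_sd
```

A lesion ROI averaging 120 HU against parenchyma at 90 HU with noise SD 10 HU gives a CNR of 3.0; lowering the noise (as DLR does) raises both metrics even when the underlying contrast is unchanged.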
Collapse
Affiliation(s)
- Jun Kanzawa
- Department of Radiology, University of Tokyo Hospital, Bunkyo-ku, Tokyo, Japan
| | - Koichiro Yasaka
- Department of Radiology, University of Tokyo Hospital, Bunkyo-ku, Tokyo, Japan.
| | - Yuji Ohizumi
- Department of Radiology, University of Tokyo Hospital, Bunkyo-ku, Tokyo, Japan
| | - Yuichi Morita
- Department of Radiology, University of Tokyo Hospital, Bunkyo-ku, Tokyo, Japan
| | - Mariko Kurokawa
- Department of Radiology, University of Tokyo Hospital, Bunkyo-ku, Tokyo, Japan
| | - Osamu Abe
- Department of Radiology, University of Tokyo Hospital, Bunkyo-ku, Tokyo, Japan
| |
Collapse
|
26
|
Long QY, Wang FY, Hu Y, Gao B, Zhang C, Ban BH, Tian XB. Development of the interpretable typing prediction model for osteosarcoma and chondrosarcoma based on machine learning and radiomics: a multicenter retrospective study. Front Med (Lausanne) 2024; 11:1497309. [PMID: 39635595 PMCID: PMC11614641 DOI: 10.3389/fmed.2024.1497309] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2024] [Accepted: 10/30/2024] [Indexed: 12/07/2024] Open
Abstract
Background Osteosarcoma and chondrosarcoma are common malignant bone tumors, and accurate differentiation between these two tumors is crucial for treatment strategies and prognosis assessment. However, traditional radiological methods face diagnostic challenges due to the similarity in imaging between the two. Methods Clinical CT images and pathological data of 76 patients confirmed by pathology from January 2018 to January 2024 were retrospectively collected from Guizhou Medical University Affiliated Hospital and Guizhou Medical University Second Affiliated Hospital. A total of 788 radiomic features, including shape, texture, and first-order statistics, were extracted in this study. Six machine learning models, including Random Forest (RF), Extra Trees (ET), AdaBoost, Gradient Boosting Tree (GB), Linear Discriminant Analysis (LDA), and XGBoost (XGB), were trained and validated. Additionally, the importance of features and the interpretability of the models were evaluated through SHAP value analysis. Results The RF model performed best in distinguishing between these two tumor types, with a near-perfect AUC of 1.00. The ET and AdaBoost models also demonstrated high performance, with AUC values of 0.98 and 0.93, respectively. SHAP value analysis revealed significant influences of wavelet-transformed GLCM and first-order features on model predictions, further enhancing diagnostic interpretability. Conclusion This study confirms the effectiveness of combining machine learning with radiomic features in improving the accuracy and interpretability of osteosarcoma and chondrosarcoma diagnosis. The excellent performance of the RF model is particularly suitable for complex imaging data processing, providing valuable insights for the future.
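Among the 788 radiomic features above, the first-order statistics are simple distributional summaries of the voxel intensities in a region of interest. A hedged sketch of a few representative ones (the actual feature set and binning scheme are not specified in the abstract):

```python
import numpy as np

def first_order_features(roi, bins=32):
    """A few first-order radiomic statistics of an intensity region."""
    x = np.asarray(roi, dtype=float).ravel()
    mean = x.mean()
    var = x.var()
    # Skewness: third standardized moment (0 for a symmetric distribution).
    skew = 0.0 if var == 0 else np.mean((x - mean) ** 3) / var ** 1.5
    # Shannon entropy of the discretized intensity histogram.
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))
    return {"mean": mean, "variance": var, "skewness": skew, "entropy": entropy}
```

Feature vectors like this, stacked per patient, are what the six classifiers in the study consume; in production work a dedicated package such as PyRadiomics typically computes the full standardized set.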
Collapse
Affiliation(s)
- Qing-Yuan Long
- The Second Affiliated Hospital of Guizhou Medical University, Kaili, China
- School of Clinical Medicine, Guizhou Medical University, Guiyang, China
| | - Feng-Yan Wang
- School of Clinical Medicine, Guizhou Medical University, Guiyang, China
| | - Yue Hu
- Guang’anmen Hospital, China Academy of Chinese Medical Sciences, Beijing, China
| | - Bo Gao
- School of Clinical Medicine, Guizhou Medical University, Guiyang, China
| | - Chuan Zhang
- The Second Affiliated Hospital of Guizhou Medical University, Kaili, China
| | - Bo-Heng Ban
- Qiannan State Hospital of Traditional Chinese Medicine, Duyun, China
| | - Xiao-Bin Tian
- School of Clinical Medicine, Guizhou Medical University, Guiyang, China
| |
Collapse
|
27
|
Lu MY, Chuang WL, Yu ML. The role of artificial intelligence in the management of liver diseases. Kaohsiung J Med Sci 2024; 40:962-971. [PMID: 39440678 DOI: 10.1002/kjm2.12901] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2024] [Revised: 09/24/2024] [Accepted: 09/24/2024] [Indexed: 10/25/2024] Open
Abstract
Universal neonatal hepatitis B virus (HBV) vaccination and the advent of direct-acting antivirals (DAA) against hepatitis C virus (HCV) have reshaped the epidemiology of chronic liver diseases. However, some aspects of the management of chronic liver diseases remain unresolved. Nucleotide analogs can achieve sustained HBV DNA suppression but rarely lead to a functional cure. Despite the high efficacy of DAAs, successful antiviral therapy does not eliminate the risk of hepatocellular carcinoma (HCC), highlighting the need for cost-effective identification of high-risk populations for HCC surveillance and tailored HCC treatment strategies for these populations. The accessibility of high-throughput genomic data has accelerated the development of precision medicine, and the emergence of artificial intelligence (AI) has led to a new era of precision medicine. AI can learn from complex, non-linear data and identify hidden patterns within real-world datasets. The combination of AI and multi-omics approaches can facilitate disease diagnosis, biomarker discovery, and the prediction of treatment efficacy and prognosis. AI algorithms have been implemented in various aspects, including non-invasive tests, predictive models, image diagnosis, and the interpretation of histopathology findings. AI can support clinicians in decision-making, alleviate clinical burdens, and curtail healthcare expenses. In this review, we introduce the fundamental concepts of machine learning and review the role of AI in the management of chronic liver diseases.
Collapse
Affiliation(s)
- Ming-Ying Lu
- Division of Hepatobiliary, Department of Internal Medicine, Kaohsiung Medical University Hospital, Kaohsiung Medical University, Kaohsiung, Taiwan
- School of Medicine and Hepatitis Research Center, College of Medicine and Center for Liquid Biopsy and Cohort Research, Kaohsiung Medical University, Kaohsiung, Taiwan
- School of Medicine and Doctoral Program of Clinical and Experimental Medicine, College of Medicine and Center of Excellence for Metabolic Associated Fatty Liver Disease, National Sun Yat-sen University, Kaohsiung, Taiwan
| | - Wan-Long Chuang
- Division of Hepatobiliary, Department of Internal Medicine, Kaohsiung Medical University Hospital, Kaohsiung Medical University, Kaohsiung, Taiwan
- School of Medicine and Hepatitis Research Center, College of Medicine and Center for Liquid Biopsy and Cohort Research, Kaohsiung Medical University, Kaohsiung, Taiwan
| | - Ming-Lung Yu
- Division of Hepatobiliary, Department of Internal Medicine, Kaohsiung Medical University Hospital, Kaohsiung Medical University, Kaohsiung, Taiwan
- School of Medicine and Hepatitis Research Center, College of Medicine and Center for Liquid Biopsy and Cohort Research, Kaohsiung Medical University, Kaohsiung, Taiwan
- School of Medicine and Doctoral Program of Clinical and Experimental Medicine, College of Medicine and Center of Excellence for Metabolic Associated Fatty Liver Disease, National Sun Yat-sen University, Kaohsiung, Taiwan
| |
Collapse
|
28
|
Giraldo-Roldán D, Dos Santos GC, Araújo ALD, Nakamura TCR, Pulido-Díaz K, Lopes MA, Santos-Silva AR, Kowalski LP, Moraes MC, Vargas PA. Deep Convolutional Neural Network for Accurate Classification of Myofibroblastic Lesions on Patch-Based Images. Head Neck Pathol 2024; 18:117. [PMID: 39466448 PMCID: PMC11519240 DOI: 10.1007/s12105-024-01723-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/02/2024] [Accepted: 10/17/2024] [Indexed: 10/30/2024]
Abstract
OBJECTIVE This study aimed to implement and evaluate a Deep Convolutional Neural Network for classifying myofibroblastic lesions into benign and malignant categories based on patch-based images. METHODS A Residual Neural Network (ResNet50) model, pre-trained with weights from ImageNet, was fine-tuned to classify a cohort of 20 patients (11 benign and 9 malignant cases). Following annotation of tumor regions, the whole-slide images (WSIs) were fragmented into smaller patches (224 × 224 pixels). These patches were non-randomly divided into training (308,843 patches), validation (43,268 patches), and test (42,061 patches) subsets, maintaining a 78:11:11 ratio. The CNN training was carried out for 75 epochs utilizing a batch size of 4, the Adam optimizer, and a learning rate of 0.00001. RESULTS ResNet50 achieved an accuracy of 98.97%, precision of 99.91%, sensitivity of 97.98%, specificity of 99.91%, F1 score of 98.94%, and AUC of 0.99. CONCLUSIONS The developed ResNet50 model exhibited high accuracy during training and robust generalization on unseen data, indicating nearly flawless performance in distinguishing between benign and malignant myofibroblastic tumors, despite the small sample size. The excellent performance of the AI model in separating such histologically similar classes could be attributed to its ability to identify hidden discriminative features, as well as to use a wide range of features and benefit from proper data preprocessing.
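The fragmentation of whole-slide images into fixed 224 × 224 patches is a routine preprocessing step for ResNet-style classifiers. A minimal non-overlapping tiling sketch (illustrative; the authors' tiling stride and border handling are not specified in the abstract):

```python
import numpy as np

def extract_patches(image, size=224):
    """Tile an H x W (x C) image array into non-overlapping
    size x size patches, discarding incomplete border tiles."""
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            patches.append(image[y:y + size, x:x + size])
    return patches
```

Tiling at this granularity is what turns 20 annotated cases into the roughly 394,000 patches used for training, validation, and testing above.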
Collapse
Affiliation(s)
- Daniela Giraldo-Roldán
- Faculdade de Odontologia de Piracicaba, Universidade de Campinas (FOP-UNICAMP), Piracicaba, São Paulo, Brazil.
- Department of Oral Diagnosis, Oral Pathology Area Piracicaba Dental School, University of Campinas (UNICAMP), Av. Limeira, 901, 13.414-903, Piracicaba, São Paulo, Brazil.
| | - Giovanna Calabrese Dos Santos
- Institute of Science and Technology, Federal University of São Paulo (ICT-Unifesp), São José dos Campos, São Paulo, Brazil
| | | | - Thaís Cerqueira Reis Nakamura
- Institute of Science and Technology, Federal University of São Paulo (ICT-Unifesp), São José dos Campos, São Paulo, Brazil
| | - Katya Pulido-Díaz
- Health Care Department, Oral Pathology and Medicine Master, Autonomous Metropolitan University, Mexico City, Mexico
| | - Marcio Ajudarte Lopes
- Faculdade de Odontologia de Piracicaba, Universidade de Campinas (FOP-UNICAMP), Piracicaba, São Paulo, Brazil
| | - Alan Roger Santos-Silva
- Faculdade de Odontologia de Piracicaba, Universidade de Campinas (FOP-UNICAMP), Piracicaba, São Paulo, Brazil
| | - Luiz Paulo Kowalski
- Head and Neck Surgery Department, University of São Paulo Medical School (FMUSP), São Paulo, Brazil
- Department of Head and Neck Surgery and Otorhinolaryngology, A.C. Camargo Cancer Center, São Paulo, Brazil
| | - Matheus Cardoso Moraes
- Institute of Science and Technology, Federal University of São Paulo (ICT-Unifesp), São José dos Campos, São Paulo, Brazil
| | - Pablo Agustin Vargas
- Faculdade de Odontologia de Piracicaba, Universidade de Campinas (FOP-UNICAMP), Piracicaba, São Paulo, Brazil
| |
Collapse
|
29
|
Qiao S, Xue M, Zuo Y, Zheng J, Jiang H, Zeng X, Peng D. Four-phase CT lesion recognition based on multi-phase information fusion framework and spatiotemporal prediction module. Biomed Eng Online 2024; 23:103. [PMID: 39434126 PMCID: PMC11492744 DOI: 10.1186/s12938-024-01297-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2024] [Accepted: 10/02/2024] [Indexed: 10/23/2024] Open
Abstract
Multiphase information fusion and spatiotemporal feature modeling play a crucial role in the task of four-phase CT lesion recognition. In this paper, we propose a four-phase CT lesion recognition algorithm based on a multiphase information fusion framework and a spatiotemporal prediction module. Specifically, the multiphase information fusion framework uses an interactive perception mechanism to perform interactive channel-spatial weighting between multiphase features. In the spatiotemporal prediction module, we design a 1D deep residual network to integrate multiphase feature vectors and use a GRU architecture to model the temporal enhancement information between CT slices. In addition, we employ CT image pseudo-color processing for data augmentation and train the whole network within a multi-task learning framework. We verify the proposed network on a four-phase CT dataset. The experimental results show that the proposed network can effectively fuse the multiphase information and model the temporal enhancement information between CT slices, showing excellent performance in lesion recognition.
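The GRU used above to model enhancement over successive CT slices maintains a hidden state updated by reset and update gates. A minimal single-step GRU cell in NumPy, with illustrative random weights (a shape-level sketch of the mechanism, not the paper's trained module):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell: one recurrent step per CT slice feature vector."""
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(hidden_dim)
        shape = (hidden_dim, input_dim + hidden_dim)
        self.Wz = rng.uniform(-scale, scale, shape)  # update gate weights
        self.Wr = rng.uniform(-scale, scale, shape)  # reset gate weights
        self.Wh = rng.uniform(-scale, scale, shape)  # candidate weights

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)  # how much of the state to rewrite
        r = sigmoid(self.Wr @ xh)  # how much past state feeds the candidate
        h_cand = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_cand
```

Iterating `step` over the per-slice feature vectors produced by the 1D residual network yields a single hidden state summarizing the enhancement pattern across slices, which a classifier head can then consume.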
Collapse
Affiliation(s)
- Shaohua Qiao
- HDU-ITMO Joint Institute, Hangzhou Dianzi University, Hangzhou, 310018, Zhejiang, China
| | - Mengfan Xue
- School of Automation, Hangzhou Dianzi University, Hangzhou, 310018, Zhejiang, China
| | - Yan Zuo
- School of Automation, Hangzhou Dianzi University, Hangzhou, 310018, Zhejiang, China
| | - Jiannan Zheng
- School of Automation, Hangzhou Dianzi University, Hangzhou, 310018, Zhejiang, China
| | - Haodong Jiang
- School of Automation, Hangzhou Dianzi University, Hangzhou, 310018, Zhejiang, China
| | - Xiangai Zeng
- School of Automation, Hangzhou Dianzi University, Hangzhou, 310018, Zhejiang, China
| | - Dongliang Peng
- School of Automation, Hangzhou Dianzi University, Hangzhou, 310018, Zhejiang, China.
| |
Collapse
|
30
|
Huang S, Nie X, Pu K, Wan X, Luo J. A flexible deep learning framework for liver tumor diagnosis using variable multi-phase contrast-enhanced CT scans. J Cancer Res Clin Oncol 2024; 150:443. [PMID: 39361193 PMCID: PMC11450020 DOI: 10.1007/s00432-024-05977-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2024] [Accepted: 09/27/2024] [Indexed: 10/05/2024]
Abstract
BACKGROUND Liver cancer is a significant cause of cancer-related mortality worldwide and requires tailored treatment strategies for different types. However, preoperative accurate diagnosis of the type presents a challenge. This study aims to develop an automatic diagnostic model based on multi-phase contrast-enhanced CT (CECT) images to distinguish between hepatocellular carcinoma (HCC), intrahepatic cholangiocarcinoma (ICC), and normal individuals. METHODS We designed a Hierarchical Long Short-Term Memory (H-LSTM) model, whose core components consist of a shared image feature extractor across phases, an internal LSTM for each phase, and an external LSTM across phases. The internal LSTM aggregates features from different layers of 2D CECT images, while the external LSTM aggregates features across different phases. H-LSTM can handle incomplete phases and varying numbers of CECT image layers, making it suitable for real-world decision support scenarios. Additionally, we applied phase augmentation techniques to process multi-phase CECT images, improving the model's robustness. RESULTS The H-LSTM model achieved an overall average AUROC of 0.93 (0.90, 1.00) on the test dataset, with AUROC for HCC classification reaching 0.97 (0.93, 1.00) and for ICC classification reaching 0.90 (0.78, 1.00). Comprehensive validation in scenarios with incomplete phases was performed, with the H-LSTM model consistently achieving AUROC values over 0.9. CONCLUSION The proposed H-LSTM model can be employed for classification tasks involving incomplete phases of CECT images in real-world scenarios, demonstrating high performance. This highlights the potential of AI-assisted systems in achieving accurate diagnosis and treatment of liver cancer. H-LSTM offers an effective solution for processing multi-phase data and provides practical value for clinical diagnostics.
Collapse
Affiliation(s)
- Shixin Huang
- Department of Scientific Research, The People's Hospital of Yubei District of Chongqing city, Chongqing, 401120, China
- School of Communications and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
| | - Xixi Nie
- School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
| | - Kexue Pu
- School of Medical Informatics, Chongqing Medical University, Chongqing, 400016, China
| | - Xiaoyu Wan
- School of Communications and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China.
| | - Jiawei Luo
- West China Biomedical Big Data Center, Med-X Center for Informatics, West China Hospital, Sichuan University, Chengdu, 610044, China.
| |
Collapse
|
31
|
Nault JC, Calderaro J, Ronot M. Integration of new technologies in the multidisciplinary approach to primary liver tumours: The next-generation tumour board. J Hepatol 2024; 81:756-762. [PMID: 38871125 DOI: 10.1016/j.jhep.2024.05.041] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/04/2024] [Revised: 05/27/2024] [Accepted: 05/28/2024] [Indexed: 06/15/2024]
Abstract
Primary liver tumours, including benign liver tumours, hepatocellular carcinoma and cholangiocarcinoma, present a multifaceted challenge, necessitating a collaborative approach, as evidenced by the role of the multidisciplinary tumour board (MDTB). The approach to managing primary liver tumours involves specialised teams, including surgeons, radiologists, oncologists, pathologists, hepatologists, and radiation oncologists, coming together to propose individualised treatment plans. The evolving landscape of primary liver cancer treatment introduces complexities, particularly with the expanding array of systemic and locoregional therapies, alongside the potential integration of molecular biology and artificial intelligence (AI) into MDTBs in the future. Precision medicine demands collaboration across disciplines, challenging traditional frameworks. In the next decade, we anticipate the convergence of AI, molecular biology, pathology, and advanced imaging, requiring adaptability in MDTB structure to incorporate these cutting-edge technologies. Navigating this evolution also requires a focus on enhancing basic, translational, and clinical research, as well as boosting clinical trials through an upgraded use of MDTBs as hubs for scientific collaboration and raising literacy about AI and new technologies. In this review, we will delineate the current unmet needs in the clinical management of primary liver cancers, discuss our perspective on the future role of MDTBs in primary liver cancers ("next generation" MDTBs), and unravel the potential power and limitations of novel technologies that may shape the multidisciplinary care landscape for primary liver cancers in the coming decade.
Affiliation(s)
- Jean-Charles Nault: Liver unit, Hôpital Avicenne, Hôpitaux Universitaires Paris-Seine-Saint-Denis, Assistance-Publique Hôpitaux de Paris, Bobigny, France; Unité de Formation et de Recherche Santé Médecine et Biologie Humaine, Université Paris 13, Communauté d'Universités et Etablissements Sorbonne Paris Cité, Paris, France; Centre de Recherche des Cordeliers, Sorbonne Université, Inserm, Université de Paris, team « Functional Genomics of Solid Tumors », F-75006 Paris, France
- Julien Calderaro: Université Paris Est Créteil, INSERM, IMRB, F-94010, Créteil, France; Assistance Publique-Hôpitaux de Paris, Henri Mondor-Albert Chenevier University Hospital, Department of Pathology, Créteil, France; MINT-Hep, Mondor Integrative Hepatology, Créteil, France
- Maxime Ronot: Université de Paris, INSERM U1149 "Centre de Recherche sur l'inflammation", CRI, Paris, France; Department of Radiology, AP-HP, Hôpital Beaujon APHP.Nord, Clichy, France
|
32
|
Afyouni S, Zandieh G, Nia IY, Pawlik TM, Kamel IR. State-of-the-art imaging of hepatocellular carcinoma. J Gastrointest Surg 2024; 28:1717-1725. [PMID: 39117267 DOI: 10.1016/j.gassur.2024.08.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/07/2024] [Revised: 07/20/2024] [Accepted: 08/01/2024] [Indexed: 08/10/2024]
Abstract
Hepatocellular carcinoma (HCC) is the third most fatal and fifth most common cancer worldwide, with rising incidence due to obesity and nonalcoholic fatty liver disease. Imaging modalities, including ultrasound (US), multidetector computed tomography (MDCT), and magnetic resonance imaging (MRI), play a vital role in detecting HCC characteristics, aiding in early detection, detailed visualization, and accurate differentiation of liver lesions. Liver-specific contrast agents, the Liver Imaging Reporting and Data System, and advanced techniques, including diffusion-weighted imaging and artificial intelligence, further enhance diagnostic accuracy. This review emphasizes the significant role of imaging in managing HCC, from diagnosis to treatment assessment, without the need for invasive biopsies.
Affiliation(s)
- Shadi Afyouni: Russell H. Morgan Department of Radiology and Radiological Sciences, Johns Hopkins Medicine, Johns Hopkins University, Baltimore, MD, United States
- Ghazal Zandieh: Russell H. Morgan Department of Radiology and Radiological Sciences, Johns Hopkins Medicine, Johns Hopkins University, Baltimore, MD, United States
- Iman Yazdani Nia: Department of Radiology, University of Pennsylvania, Philadelphia, PA, United States
- Timothy M Pawlik: Department of Surgery, The Ohio State University, Wexner Medical Center, The James Comprehensive Cancer Center, Columbus, OH, United States
- Ihab R Kamel: Department of Radiology, University of Colorado School of Medicine, Aurora, CO, United States
|
33
|
Chatzipanagiotou OP, Loukas C, Vailas M, Machairas N, Kykalos S, Charalampopoulos G, Filippiadis D, Felekouras E, Schizas D. Artificial intelligence in hepatocellular carcinoma diagnosis: a comprehensive review of current literature. J Gastroenterol Hepatol 2024; 39:1994-2005. [PMID: 38923550 DOI: 10.1111/jgh.16663] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/05/2023] [Revised: 04/26/2024] [Accepted: 06/07/2024] [Indexed: 06/28/2024]
Abstract
BACKGROUND AND AIM Hepatocellular carcinoma (HCC) diagnosis mainly relies on its pathognomonic radiological profile, obviating the need for biopsy. The project of incorporating artificial intelligence (AI) techniques in HCC aims to improve the performance of image recognition. Herein, we thoroughly analyze and evaluate proposed AI models in the field of HCC diagnosis. METHODS A comprehensive review of the literature was performed utilizing the MEDLINE/PubMed and Web of Science databases, with a search end date of 30 September 2023. The MeSH terms "Artificial Intelligence," "Liver Cancer," "Hepatocellular Carcinoma," "Machine Learning," and "Deep Learning" were searched in the title and/or abstract. All references of the obtained articles were also evaluated for any additional information. RESULTS Our search resulted in 183 studies meeting our inclusion criteria. Across all diagnostic modalities, the reported area under the curve (AUC) of most developed models surpassed 0.900. A B-mode US model and a contrast-enhanced US model achieved AUCs of 0.947 and 0.957, respectively. Regarding the more challenging task of HCC diagnosis, a 2021 deep learning model, trained with CT scans, classified hepatic malignant lesions with an AUC of 0.986. Finally, an MRI machine learning model developed in 2021 displayed an AUC of 0.975 when differentiating small HCCs from benign lesions, while another MRI-based model achieved HCC diagnosis with an AUC of 0.970. CONCLUSIONS AI tools may lead to significant improvement in the diagnostic management of HCC. Many models performed better than or comparably to experienced radiologists, while also proving capable of elevating radiologists' accuracy, demonstrating promising results for AI implementation in HCC-related diagnostic tasks.
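Each AUC quoted above is equivalent to a rank statistic: the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one. A minimal sketch of this Mann-Whitney computation, using made-up scores rather than data from any reviewed study:

```python
import numpy as np

def auc(scores_pos: np.ndarray, scores_neg: np.ndarray) -> float:
    """AUC via the Mann-Whitney identity: P(pos > neg) + 0.5 * P(tie)."""
    pos = scores_pos[:, None]           # shape (n_pos, 1)
    neg = scores_neg[None, :]           # shape (1, n_neg)
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return float(wins / (len(scores_pos) * len(scores_neg)))

# Hypothetical model scores for HCC-positive and benign lesions (illustrative only).
hcc = np.array([0.91, 0.85, 0.78, 0.95, 0.60])
benign = np.array([0.30, 0.45, 0.62, 0.20])

result = auc(hcc, benign)  # 0.95: positives outrank negatives in 19 of 20 pairs
```

Pairwise comparison is O(n_pos * n_neg); rank-based implementations scale better, but the identity is the same.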
Affiliation(s)
- Odysseas P Chatzipanagiotou: First Department of Surgery, National and Kapodistrian University of Athens, Laikon General Hospital, Athens, Greece
- Constantinos Loukas: Laboratory of Medical Physics, Medical School, National and Kapodistrian University of Athens, Athens, Greece
- Michail Vailas: First Department of Surgery, National and Kapodistrian University of Athens, Laikon General Hospital, Athens, Greece
- Nikolaos Machairas: Second Department of Propaedeutic Surgery, National and Kapodistrian University of Athens, Laikon General Hospital, Athens, Greece
- Stylianos Kykalos: Second Department of Propaedeutic Surgery, National and Kapodistrian University of Athens, Laikon General Hospital, Athens, Greece
- Georgios Charalampopoulos: Second Department of Radiology, National and Kapodistrian University of Athens, Attikon University Hospital, Athens, Greece
- Dimitrios Filippiadis: Second Department of Radiology, National and Kapodistrian University of Athens, Attikon University Hospital, Athens, Greece
- Evangellos Felekouras: First Department of Surgery, National and Kapodistrian University of Athens, Laikon General Hospital, Athens, Greece
- Dimitrios Schizas: First Department of Surgery, National and Kapodistrian University of Athens, Laikon General Hospital, Athens, Greece
|
34
|
Wang L, Fatemi M, Alizad A. Artificial intelligence techniques in liver cancer. Front Oncol 2024; 14:1415859. [PMID: 39290245 PMCID: PMC11405163 DOI: 10.3389/fonc.2024.1415859] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2024] [Accepted: 08/15/2024] [Indexed: 09/19/2024] Open
Abstract
Hepatocellular Carcinoma (HCC), the most common primary liver cancer, is a significant contributor to worldwide cancer-related deaths. Various medical imaging techniques, including computed tomography, magnetic resonance imaging, and ultrasound, play a crucial role in accurately evaluating HCC and formulating effective treatment plans. Artificial Intelligence (AI) technologies have demonstrated potential in supporting physicians by providing more accurate and consistent medical diagnoses. Recent advancements have led to the development of AI-based multi-modal prediction systems. These systems integrate medical imaging with other modalities, such as electronic health record reports and clinical parameters, to enhance the accuracy of predicting biological characteristics and prognosis, including those associated with HCC. These multi-modal prediction systems pave the way for predicting the response to transarterial chemoembolization and the presence of microvascular invasion, and can assist clinicians in identifying patients with HCC who are most likely to benefit from interventional therapy. This paper provides an overview of the latest AI-based medical imaging models developed for diagnosing and predicting HCC. It also explores the challenges and potential future directions related to the clinical application of AI techniques.
Affiliation(s)
- Lulu Wang: Department of Engineering, School of Technology, Reykjavík University, Reykjavík, Iceland; Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN, United States
- Mostafa Fatemi: Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN, United States
- Azra Alizad: Department of Radiology, Mayo Clinic College of Medicine and Science, Rochester, MN, United States
|
35
|
Zhang G, Gao Q, Zhan Q, Wang L, Song B, Chen Y, Bian Y, Ma C, Lu J, Shao C. Label-free differentiation of pancreatic pathologies from normal pancreas utilizing end-to-end three-dimensional multimodal networks on CT. Clin Radiol 2024; 79:e1159-e1166. [PMID: 38969545 DOI: 10.1016/j.crad.2024.06.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2023] [Revised: 05/10/2024] [Accepted: 06/05/2024] [Indexed: 07/07/2024]
Abstract
AIMS To investigate the utilization of an end-to-end multimodal convolutional model in the rapid and accurate diagnosis of pancreatic diseases using abdominal CT images. MATERIALS AND METHODS In this study, a novel lightweight label-free end-to-end multimodal network (eeMulNet) model was proposed for the rapid and precise diagnosis of pancreatic abnormalities. The eeMulNet consists of two steps: pancreatic region localization and multimodal CT diagnosis integrating textual and image data. A research dataset comprising 715 CT scans with various types of pancreatic disease and 228 CT scans from a control group was collected. The training and independent test sets for the multimodal classification network were randomly split in an 8:2 ratio (755 for training and 188 for testing). RESULTS The eeMulNet model demonstrated outstanding performance on an independent test set of 188 CT scans (Normal: 45, Abnormal: 143), with an area under the curve (AUC) of 1.0, accuracy of 100%, and sensitivity of 100%. The average testing duration per patient was 41.04 seconds, while the classification network took only 0.04 seconds. CONCLUSIONS The proposed eeMulNet model offers a promising approach for the diagnosis of pancreatic diseases. It can support the identification of suspicious cases during daily radiology work and enhance the accuracy of pancreatic disease diagnosis. The codes and models of eeMulNet are publicly available at Rudeguy1/eeMulNet (github.com).
Affiliation(s)
- G Zhang: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China; Department of Radiology, Changhai Hospital of Shanghai, Naval Medical University, Shanghai 200433, China
- Q Gao: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China; Department of Radiology, Changhai Hospital of Shanghai, Naval Medical University, Shanghai 200433, China
- Q Zhan: Department of Radiology, Changhai Hospital of Shanghai, Naval Medical University, Shanghai 200433, China
- L Wang: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- B Song: Department of Pancreatic Surgery, Changhai Hospital of Shanghai, Naval Medical University, Shanghai 200433, China
- Y Chen: College of Electronic and Information Engineering, Tongji University, Shanghai 201804, China
- Y Bian: Department of Radiology, Changhai Hospital of Shanghai, Naval Medical University, Shanghai 200433, China
- C Ma: Department of Radiology, Changhai Hospital of Shanghai, Naval Medical University, Shanghai 200433, China; College of Electronic and Information Engineering, Tongji University, Shanghai 201804, China
- J Lu: Department of Radiology, Changhai Hospital of Shanghai, Naval Medical University, Shanghai 200433, China
- C Shao: Department of Radiology, Changhai Hospital of Shanghai, Naval Medical University, Shanghai 200433, China
|
36
|
Okimoto N, Yasaka K, Cho S, Koshino S, Kanzawa J, Asari Y, Fujita N, Kubo T, Suzuki Y, Abe O. New liver window width in detecting hepatocellular carcinoma on dynamic contrast-enhanced computed tomography with deep learning reconstruction. Radiol Phys Technol 2024; 17:658-665. [PMID: 38837119 PMCID: PMC11341740 DOI: 10.1007/s12194-024-00817-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2024] [Revised: 05/12/2024] [Accepted: 05/30/2024] [Indexed: 06/06/2024]
Abstract
Changing the window width (WW) alters the appearance of noise and contrast in CT images. The aim of this study was to investigate the impact of an adjusted WW on the detection of hepatocellular carcinomas (HCCs) on CT with deep learning reconstruction (DLR). This retrospective study included thirty-five patients who underwent abdominal dynamic contrast-enhanced CT. DLR was used to reconstruct arterial, portal, and delayed phase images. The investigation of the optimal WW involved two blinded readers. Five other blinded readers then independently read the image sets for detection of HCCs and evaluation of image quality with the optimal or conventional liver WW. The optimal WW for detection of HCC was 119 Hounsfield units (HU) (rounded to 120 in the subsequent analyses), the average of the adjusted WWs in the arterial, portal, and delayed phases. The average figures of merit for the readers in the jackknife alternative free-response receiver operating characteristic analysis for HCC detection were 0.809 (readers 1/2/3/4/5: 0.765/0.798/0.892/0.764/0.827) with the optimal WW (120 HU) and 0.765 (readers 1/2/3/4/5: 0.707/0.769/0.838/0.720/0.791) with the conventional WW (150 HU), a statistically significant difference (p < 0.001). Image quality with the optimal WW was superior to that with the conventional WW, and the difference was significant for some readers (p < 0.041). The optimal WW for detection of HCC was narrower than the conventional WW on dynamic contrast-enhanced CT with DLR. Compared with the conventional liver WW, the optimal liver WW significantly improved the detection performance for HCC.
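The windowing operation this study tunes can be shown in a minimal sketch: a window level/width pair maps Hounsfield units onto display gray levels, and a narrower width spreads the same HU range over more gray levels. The voxel values and the window level of 60 HU below are illustrative assumptions, not values from the study; only the two window widths (120 vs. 150 HU) come from the abstract.

```python
import numpy as np

def apply_window(hu: np.ndarray, level: float, width: float) -> np.ndarray:
    """Map Hounsfield units to 8-bit display values for a given window."""
    lo, hi = level - width / 2, level + width / 2
    clipped = np.clip(hu, lo, hi)                    # values outside the window saturate
    return np.round((clipped - lo) / (hi - lo) * 255).astype(np.uint8)

hu = np.array([-10.0, 50.0, 110.0, 170.0])           # sample voxel values in HU (hypothetical)

# Conventional liver window (WW 150 HU) vs. the narrower optimal window (WW 120 HU),
# both at an illustrative window level of 60 HU.
conventional = apply_window(hu, level=60, width=150)
optimal = apply_window(hu, level=60, width=120)

# The narrower window assigns a larger gray-level difference to the same HU difference,
# i.e. higher displayed lesion-to-liver contrast.
```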
Affiliation(s)
- Naomasa Okimoto: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Koichiro Yasaka: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Shinichi Cho: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Saori Koshino: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Jun Kanzawa: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Yusuke Asari: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Nana Fujita: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Takatoshi Kubo: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Yuichi Suzuki: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Osamu Abe: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
|
37
|
Obimba DC, Esteva C, Nzouatcham Tsicheu EN, Wong R. Effectiveness of Artificial Intelligence Technologies in Cancer Treatment for Older Adults: A Systematic Review. J Clin Med 2024; 13:4979. [PMID: 39274201 PMCID: PMC11396550 DOI: 10.3390/jcm13174979] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2024] [Revised: 07/29/2024] [Accepted: 08/21/2024] [Indexed: 09/16/2024] Open
Abstract
Background: Aging is a multifaceted process that may lead to an increased risk of developing cancer. Artificial intelligence (AI) applications in clinical cancer research may optimize cancer treatments, improve patient care, and minimize risks, prompting AI to receive high levels of attention in clinical medicine. This systematic review aims to synthesize current articles about the effectiveness of artificial intelligence in cancer treatments for older adults. Methods: We conducted a systematic review by searching CINAHL, PsycINFO, and MEDLINE via EBSCO. We also conducted forward and backward hand searching for a comprehensive search. Eligible studies included a study population of older adults (60 and older) with cancer, used AI technology to treat cancer, and were published in a peer-reviewed journal in English. This study was registered on PROSPERO (CRD42024529270). Results: This systematic review identified seven articles focusing on lung, breast, and gastrointestinal cancers. They were predominantly conducted in the USA (42.9%), with others from India, China, and Germany. The measures of overall and progression-free survival, local control, and treatment plan concordance suggested that AI interventions were as effective as, or less effective than, standard care in treating older adult cancer patients. Conclusions: Despite promising initial findings, the utility of AI technologies in cancer treatment for older adults remains in its early stages, as further developments are necessary to enhance accuracy, consistency, and reliability for broader clinical use.
Affiliation(s)
- Doris C Obimba: Department of Public Health and Preventive Medicine, Norton College of Medicine, SUNY Upstate Medical University, Syracuse, NY 13210, USA
- Charlene Esteva: Department of Public Health and Preventive Medicine, Norton College of Medicine, SUNY Upstate Medical University, Syracuse, NY 13210, USA
- Eurika N Nzouatcham Tsicheu: Department of Public Health and Preventive Medicine, Norton College of Medicine, SUNY Upstate Medical University, Syracuse, NY 13210, USA
- Roger Wong: Department of Public Health and Preventive Medicine, Norton College of Medicine, SUNY Upstate Medical University, Syracuse, NY 13210, USA; Department of Geriatrics, SUNY Upstate Medical University, Syracuse, NY 13210, USA
|
38
|
Al-Obeidat F, Hafez W, Gador M, Ahmed N, Abdeljawad MM, Yadav A, Rashed A. Diagnostic performance of AI-based models versus physicians among patients with hepatocellular carcinoma: a systematic review and meta-analysis. Front Artif Intell 2024; 7:1398205. [PMID: 39224209 PMCID: PMC11368160 DOI: 10.3389/frai.2024.1398205] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2024] [Accepted: 07/26/2024] [Indexed: 09/04/2024] Open
Abstract
Background Hepatocellular carcinoma (HCC) is a common primary liver cancer that requires early diagnosis due to its poor prognosis. Recent advances in artificial intelligence (AI) have facilitated hepatocellular carcinoma detection using multiple AI models; however, their performance is still uncertain. Aim This meta-analysis aimed to compare the diagnostic performance of different AI models with that of clinicians in the detection of hepatocellular carcinoma. Methods We searched the PubMed, Scopus, Cochrane Library, and Web of Science databases for eligible studies. The R package was used to synthesize the results. The outcomes of various studies were aggregated using fixed-effect and random-effects models. Statistical heterogeneity was evaluated using I-squared (I2) and chi-square statistics. Results We included seven studies in our meta-analysis. Both physicians and AI-based models scored an average sensitivity of 93%. Great variation in sensitivity, accuracy, and specificity was observed depending on the model and diagnostic technique used. The region-based convolutional neural network (RCNN) model showed high sensitivity (96%). Physicians had the highest specificity in diagnosing hepatocellular carcinoma (100%); furthermore, models based on convolutional neural networks achieved high sensitivity. Models based on AI-assisted contrast-enhanced ultrasound (CEUS) showed poor accuracy (69.9%) compared to physicians and other models. The leave-one-out sensitivity analysis revealed high heterogeneity among studies, representing true between-study differences. Conclusion Models based on Faster R-CNN excel in image classification and data extraction, while both CNN-based models and models combining contrast-enhanced ultrasound (CEUS) with artificial intelligence (AI) had good sensitivity. Although AI models outperform physicians in diagnosing HCC, they should be utilized as supportive tools to help make more accurate and timely decisions.
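The heterogeneity statistics this meta-analysis cites (the chi-square statistic Q and I-squared) follow standard fixed-effect definitions. A minimal sketch using made-up effect sizes and variances, not the review's data:

```python
import numpy as np

def cochran_q_and_i2(effects: np.ndarray, variances: np.ndarray) -> tuple:
    """Fixed-effect Cochran's Q and the I-squared heterogeneity statistic (%)."""
    weights = 1.0 / variances                                  # inverse-variance weights
    pooled = np.sum(weights * effects) / np.sum(weights)       # fixed-effect pooled estimate
    q = float(np.sum(weights * (effects - pooled) ** 2))       # Cochran's Q (chi-square, df = k - 1)
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0        # I^2 = (Q - df) / Q, floored at 0
    return q, i2

# Hypothetical effect sizes (e.g. log odds ratios) from seven studies; illustrative only.
effects = np.array([0.8, 1.1, 0.5, 1.4, 0.9, 0.3, 1.2])
variances = np.array([0.04, 0.06, 0.05, 0.08, 0.03, 0.07, 0.05])

q, i2 = cochran_q_and_i2(effects, variances)
# I^2 near 0% suggests homogeneity; values above roughly 75% indicate high heterogeneity.
```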
Affiliation(s)
- Feras Al-Obeidat: College of Technological Innovation, Zayed University, Abu Dubai, United Arab Emirates
- Wael Hafez: NMC Royal Hospital, Khalifa City, United Arab Emirates; Internal Medicine Department, Medical Research and Clinical Studies Institute, The National Research Centre, Cairo, Egypt
- Muneir Gador: Internal Medicine Department, Medical Research and Clinical Studies Institute, The National Research Centre, Cairo, Egypt
- Antesh Yadav: NMC Royal Hospital, Khalifa City, United Arab Emirates
- Asrar Rashed: NMC Royal Hospital, Khalifa City, United Arab Emirates; Department of Computer Science, Edinburgh Napier University, Merchiston Campus, Edinburgh, United Kingdom
|
39
|
Wei Y, Yang M, Zhang M, Gao F, Zhang N, Hu F, Zhang X, Zhang S, Huang Z, Xu L, Zhang F, Liu M, Deng J, Cheng X, Xie T, Wang X, Liu N, Gong H, Zhu S, Song B, Liu M. Focal liver lesion diagnosis with deep learning and multistage CT imaging. Nat Commun 2024; 15:7040. [PMID: 39147767 PMCID: PMC11327344 DOI: 10.1038/s41467-024-51260-6] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2023] [Accepted: 08/02/2024] [Indexed: 08/17/2024] Open
Abstract
Diagnosing liver lesions is crucial for treatment choices and patient outcomes. This study develops an automatic diagnosis system for liver lesions using multiphase enhanced computed tomography (CT). A total of 4039 patients from six data centers are enrolled to develop Liver Lesion Network (LiLNet). LiLNet identifies focal liver lesions, including hepatocellular carcinoma (HCC), intrahepatic cholangiocarcinoma (ICC), metastatic tumors (MET), focal nodular hyperplasia (FNH), hemangioma (HEM), and cysts (CYST). Validated in four external centers and clinically verified in two hospitals, LiLNet achieves an accuracy (ACC) of 94.7% and an area under the curve (AUC) of 97.2% for benign and malignant tumors. For HCC, ICC, and MET, the ACC is 88.7% with an AUC of 95.6%. For FNH, HEM, and CYST, the ACC is 88.6% with an AUC of 95.9%. LiLNet can aid in clinical diagnosis, especially in regions with a shortage of radiologists.
Affiliation(s)
- Yi Wei: Department of Radiology, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Meiyi Yang: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
- Meng Zhang: Department of Radiology, Sanya People's Hospital, Sanya, Hainan, China
- Feifei Gao: Department of Radiology, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Ning Zhang: Department of Radiology, Henan Provincial People's Hospital, Zhengzhou, Henan, China
- Fubi Hu: Department of Radiology, The First Affiliated Hospital of Chengdu Medical College, Chengdu, Sichuan, China
- Xiao Zhang: Department of Radiology, Leshan People's Hospital, Leshan, Sichuan, China
- Shasha Zhang: Department of Radiology, Guizhou Provincial People's Hospital, Guiyang, Guizhou, China
- Zixing Huang: Department of Radiology, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Lifeng Xu: Quzhou Affiliated Hospital of Wenzhou Medical University, Quzhou People's Hospital, Quzhou, Zhejiang, China
- Feng Zhang: Quzhou Affiliated Hospital of Wenzhou Medical University, Quzhou People's Hospital, Quzhou, Zhejiang, China
- Minghui Liu: Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou, Zhejiang, China
- Jiali Deng: Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou, Zhejiang, China
- Xuan Cheng: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
- Tianshu Xie: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
- Xiaomin Wang: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
- Nianbo Liu: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
- Haigang Gong: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
- Shaocheng Zhu: Department of Radiology, Henan Provincial People's Hospital, Zhengzhou, Henan, China
- Bin Song: Department of Radiology, West China Hospital, Sichuan University, Chengdu, Sichuan, China; Department of Radiology, Sanya People's Hospital, Sanya, Hainan, China
- Ming Liu: Quzhou Affiliated Hospital of Wenzhou Medical University, Quzhou People's Hospital, Quzhou, Zhejiang, China; Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou, Zhejiang, China
|
40
|
Shang Z, Chauhan V, Devi K, Patil S. Artificial Intelligence, the Digital Surgeon: Unravelling Its Emerging Footprint in Healthcare - The Narrative Review. J Multidiscip Healthc 2024; 17:4011-4022. [PMID: 39165254 PMCID: PMC11333562 DOI: 10.2147/jmdh.s482757] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2024] [Accepted: 08/09/2024] [Indexed: 08/22/2024] Open
Abstract
Background Artificial Intelligence (AI) holds transformative potential for the healthcare industry, offering innovative solutions for diagnosis, treatment planning, and improving patient outcomes. As AI continues to be integrated into healthcare systems, it promises advancements across various domains. This review explores the diverse applications of AI in healthcare, along with the challenges and limitations that need to be addressed. The aim is to provide a comprehensive overview of AI's impact on healthcare and to identify areas for further development and focus. Main Applications The review discusses the broad range of AI applications in healthcare. In medical imaging and diagnostics, AI enhances the accuracy and efficiency of diagnostic processes, aiding in early disease detection. AI-powered clinical decision support systems assist healthcare professionals in patient management and decision-making. Predictive analytics using AI enables the prediction of patient outcomes and identification of potential health risks. AI-driven robotic systems have revolutionized surgical procedures, improving precision and outcomes. Virtual assistants and chatbots enhance patient interaction and support, providing timely information and assistance. In the pharmaceutical industry, AI accelerates drug discovery and development by identifying potential drug candidates and predicting their efficacy. Additionally, AI improves administrative efficiency and operational workflows in healthcare, streamlining processes and reducing costs. AI-powered remote monitoring and telehealth solutions expand access to healthcare, particularly in underserved areas. Challenges and Limitations Despite the significant promise of AI in healthcare, several challenges persist. Ensuring the reliability and consistency of AI-driven outcomes is crucial. Privacy and security concerns must be navigated carefully, particularly in handling sensitive patient data. Ethical considerations, including bias and fairness in AI algorithms, need to be addressed to prevent unintended consequences. Overcoming these challenges is critical for the ethical and successful integration of AI in healthcare. Conclusion The integration of AI into healthcare is advancing rapidly, offering substantial benefits in improving patient care and operational efficiency. However, addressing the associated challenges is essential to fully realize the transformative potential of AI in healthcare. Future efforts should focus on enhancing the reliability, transparency, and ethical standards of AI technologies to ensure they contribute positively to global health outcomes.
Affiliation(s)
- Zifang Shang: Guangdong Engineering Technological Research Centre of Clinical Molecular Diagnosis and Antibody Drugs, Meizhou People’s Hospital (Huangtang Hospital), Meizhou Academy of Medical Sciences, Meizhou, People’s Republic of China
- Varun Chauhan: Multi-Disciplinary Research Unit, Government Institute of Medical Sciences, Greater Noida, India
- Kirti Devi: Department of Medicine, Government Institute of Medical Sciences, Greater Noida, India
- Sandip Patil: Department of Haematology and Oncology, Shenzhen Children’s Hospital, Shenzhen, People’s Republic of China
|
41
|
Yang Y, Chen Q, Li Y, Wang F, Han XH, Iwamoto Y, Liu J, Lin L, Hu H, Chen YW. Segmentation Guided Crossing Dual Decoding Generative Adversarial Network for Synthesizing Contrast-Enhanced Computed Tomography Images. IEEE J Biomed Health Inform 2024; 28:4737-4750. [PMID: 38768004 DOI: 10.1109/jbhi.2024.3403199] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/22/2024]
Abstract
Although contrast-enhanced computed tomography (CE-CT) images significantly improve the accuracy of diagnosing focal liver lesions (FLLs), the administration of contrast agents imposes a considerable physical burden on patients. The utilization of generative models to synthesize CE-CT images from non-contrasted CT images offers a promising solution. However, existing image synthesis models tend to overlook the importance of critical regions, inevitably reducing their effectiveness in downstream tasks. To overcome this challenge, we propose an innovative CE-CT image synthesis model called Segmentation Guided Crossing Dual Decoding Generative Adversarial Network (SGCDD-GAN). Specifically, the SGCDD-GAN involves a crossing dual decoding generator including an attention decoder and an improved transformation decoder. The attention decoder is designed to highlight some critical regions within the abdominal cavity, while the improved transformation decoder is responsible for synthesizing CE-CT images. These two decoders are interconnected using a crossing technique to enhance each other's capabilities. Furthermore, we employ a multi-task learning strategy to guide the generator to focus more on the lesion area. To evaluate the performance of the proposed SGCDD-GAN, we test it on an in-house CE-CT dataset. In both CE-CT image synthesis tasks, namely synthesizing arterial (ART) phase images and portal venous (PV) phase images, the proposed SGCDD-GAN demonstrates superior performance metrics across the entire image and liver region, including SSIM, PSNR, MSE, and PCC scores. Furthermore, CE-CT images synthesized from our SGCDD-GAN achieve remarkable accuracy rates of 82.68%, 94.11%, and 94.11% in a deep learning-based FLLs classification task, along with a pilot assessment conducted by two radiologists.
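Two of the image-quality metrics reported above, MSE and PSNR, have standard definitions that can be sketched in a few lines. The arrays below are synthetic stand-ins, not the paper's data or model outputs:

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between two images."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a: np.ndarray, b: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means the images are closer."""
    err = mse(a, b)
    return float("inf") if err == 0 else float(10.0 * np.log10(max_val ** 2 / err))

rng = np.random.default_rng(0)
# Stand-in "real" CE-CT slice and a noisy stand-in for a synthesized one.
reference = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
synthetic = np.clip(reference + rng.normal(0, 5, size=(64, 64)), 0, 255)

score = psnr(reference, synthetic)  # roughly 34 dB for noise with sigma of about 5
```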
Collapse
|
42
|
Lee JM, Bae JS. Enhancing diagnostic precision in liver lesion analysis using a deep learning-based system: opportunities and challenges. Nat Rev Clin Oncol 2024; 21:485-486. [PMID: 38519602 DOI: 10.1038/s41571-024-00887-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/25/2024]
Affiliation(s)
- Jeong Min Lee
- Department of Radiology, Seoul National University Hospital, Seoul, South Korea.
- Department of Radiology, Seoul National University College of Medicine, Seoul, South Korea.
- Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, South Korea.
| | - Jae Seok Bae
- Department of Radiology, Seoul National University Hospital, Seoul, South Korea
| |
Collapse
|
43
|
Yang Y, Liu J, Chen Q, Li Y, Han XH, Hu H, Lin L, Chen YW. GANs-guided Conditional Diffusion Model for Synthesizing Contrast-enhanced Computed Tomography Images. Annu Int Conf IEEE Eng Med Biol Soc 2024; 2024:1-4. [PMID: 40031463 DOI: 10.1109/embc53108.2024.10781923] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/05/2025]
Abstract
In contrast to non-contrast computed tomography (NC-CT) scans, contrast-enhanced (CE) CT scans can highlight discrepancies between abnormal and normal areas and are commonly used in the clinical diagnosis of focal liver lesions. However, the use of contrast agents in CE-CT scans imposes significant physical and economic burdens on patients in clinical practice. Recently, Generative Adversarial Network (GAN)-based synthesis models have offered an alternative approach that obtains CE-CT images from NC-CT images. However, poor coverage and mode collapse greatly limit their performance. Diffusion model (DM)-based methods have demonstrated superior performance in natural image synthesis tasks. Nevertheless, our experiments show that CE-CT images synthesized by DM-based methods exhibit higher overall quality but lower local quality. The quality of local areas, particularly those related to lesions, is crucial in medical image synthesis tasks. Hence, we propose a GANs-guided conditional diffusion model (GANs-CDM), combining GANs and a conditional diffusion model (CDM), to generate CE-CT images. In the proposed GANs-CDM, the GAN generates a preliminary CE-CT image that serves as the conditional input guiding the subsequent CDM to produce refined CE-CT images. Qualitative and quantitative evaluation on arterial and portal venous phase synthesis tasks demonstrates that the proposed GANs-CDM can significantly improve both the local and global quality of synthetic images.
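The two ingredients described above, a closed-form DDPM forward process and conditioning on the GAN's coarse output, can be sketched in a few lines of numpy. This is a toy illustration of the general technique under the standard DDPM parameterization, not the paper's implementation; all names are illustrative:

```python
import numpy as np

def forward_noise(x0, alpha_bar_t, eps):
    """Closed-form DDPM forward process:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    return np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps

def denoiser_input(x_t, coarse):
    """Conditioning by channel concatenation: the GAN's coarse CE-CT image
    is stacked with the noisy sample before entering the denoising network."""
    return np.stack([x_t, coarse], axis=0)

rng = np.random.default_rng(1)
x0 = rng.random((8, 8))                       # target CE-CT patch
coarse = x0 + rng.normal(0, 0.1, x0.shape)    # GAN's preliminary synthesis
eps = rng.standard_normal(x0.shape)
x_t = forward_noise(x0, alpha_bar_t=0.5, eps=eps)
print(denoiser_input(x_t, coarse).shape)  # (2, 8, 8)
```

The denoiser then learns to predict `eps` from this stacked input, so the refinement stage always sees the coarse GAN image alongside the noisy sample.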
Collapse
|
44
|
Ma L, Li C, Li H, Zhang C, Deng K, Zhang W, Xie C. Deep learning model based on contrast-enhanced MRI for predicting post-surgical survival in patients with hepatocellular carcinoma. Heliyon 2024; 10:e31451. [PMID: 38868019 PMCID: PMC11167253 DOI: 10.1016/j.heliyon.2024.e31451] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2023] [Revised: 05/15/2024] [Accepted: 05/16/2024] [Indexed: 06/14/2024] Open
Abstract
Objective To develop a deep learning model based on contrast-enhanced magnetic resonance imaging (MRI) data to predict post-surgical overall survival (OS) in patients with hepatocellular carcinoma (HCC). Methods This bi-center retrospective study included 564 surgically resected patients with HCC, divided into training (n = 326), testing (n = 143), and external validation (n = 95) cohorts. A three-dimensional convolutional neural network (3D-CNN) ResNet was used to learn features from the pretreatment MR images (pre-contrast T1WI, late arterial phase, and portal venous phase) and to obtain a deep learning score (DL score). Three Cox regression models were established separately using the DL score (3D-CNN model), clinical features (clinical model), and a combination of both (combined model). The concordance index (C-index) was used to evaluate model performance. Results We trained a 3D-CNN model to obtain the DL score from samples. The C-index of the 3D-CNN model in predicting 5-year OS for the training, testing, and external validation cohorts was 0.746, 0.714, and 0.698, respectively, higher than that of the clinical model (0.675, 0.674, and 0.631, respectively; P = 0.009, P = 0.204, and P = 0.092). The C-index of the combined model for the testing and external validation cohorts was 0.750 and 0.723, respectively, significantly higher than that of the clinical model (P = 0.017, P = 0.016) and the 3D-CNN model (P = 0.029, P = 0.036). Conclusions The combined model integrating the DL score and clinical factors showed a higher predictive value than the clinical and 3D-CNN models and may be more useful in guiding clinical treatment decisions to improve the prognosis of patients with HCC.
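The concordance index used throughout this abstract is the fraction of comparable patient pairs in which the model assigns the higher risk to the patient who fails first. A minimal pure-Python sketch (Harrell's C without censoring weights; the study's actual evaluation pipeline is not reproduced here):

```python
def concordance_index(times, events, risks):
    """Harrell's C-index: among comparable pairs (the earlier time is an
    observed event, event flag 1), count pairs where the higher predicted
    risk failed first. Ties in risk count as 0.5."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:  # pair is comparable
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Perfectly ranked toy cohort: shorter survival gets higher predicted risk.
print(concordance_index([2, 4, 6], [1, 1, 0], [0.9, 0.5, 0.1]))  # 1.0
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is the scale on which the reported 0.698 to 0.750 values are read.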
Collapse
Affiliation(s)
- Lidi Ma
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-Sen University Cancer Center, Guangzhou, 510060, PR China
| | - Congrui Li
- Department of Diagnostic Radiology, Hunan Cancer Hospital, Central South University, Changsha, PR China
| | - Haixia Li
- Bayer, Guangzhou, Guangdong, PR China
| | - Cheng Zhang
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-Sen University Cancer Center, Guangzhou, 510060, PR China
| | - Kan Deng
- Clinical Science, Philips Healthcare, Guangzhou, PR China
| | - Weijing Zhang
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-Sen University Cancer Center, Guangzhou, 510060, PR China
| | - Chuanmiao Xie
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-Sen University Cancer Center, Guangzhou, 510060, PR China
| |
Collapse
|
45
|
Yuan N, Zhang Y, Lv K, Liu Y, Yang A, Hu P, Yu H, Han X, Guo X, Li J, Wang T, Lei B, Ma G. HCA-DAN: hierarchical class-aware domain adaptive network for gastric tumor segmentation in 3D CT images. Cancer Imaging 2024; 24:63. [PMID: 38773670 PMCID: PMC11107051 DOI: 10.1186/s40644-024-00711-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2023] [Accepted: 05/11/2024] [Indexed: 05/24/2024] Open
Abstract
BACKGROUND Accurate segmentation of gastric tumors from CT scans provides useful image information for guiding the diagnosis and treatment of gastric cancer. However, automated gastric tumor segmentation from 3D CT images faces several challenges. The large variation in anisotropic spatial resolution limits the ability of 3D convolutional neural networks (CNNs) to learn features from different views. The background texture of gastric tumors is complex, and their size, shape, and intensity distribution are highly variable, which makes it more difficult for deep learning methods to capture tumor boundaries. In particular, while multi-center datasets increase sample size and representation ability, they suffer from inter-center heterogeneity. METHODS In this study, we propose a new cross-center 3D tumor segmentation method named Hierarchical Class-Aware Domain Adaptive Network (HCA-DAN), which includes a new 3D neural network that efficiently bridges an anisotropic neural network and a Transformer (AsTr) for extracting multi-scale context features from CT images with anisotropic resolution, and a hierarchical class-aware domain alignment (HCADA) module for adaptively aligning multi-scale context features across two domains by integrating a class attention map with class-specific information. We evaluate the proposed method on an in-house CT image dataset collected from four medical centers and validate its segmentation performance in both in-center and cross-center test scenarios. RESULTS Our baseline segmentation network (i.e., AsTr) achieves the best results compared to other 3D segmentation models, with a mean Dice similarity coefficient (DSC) of 59.26%, 55.97%, 48.83%, and 67.28% in four in-center test tasks, and a DSC of 56.42%, 55.94%, 46.54%, and 60.62% in four cross-center test tasks. In addition, the proposed cross-center segmentation network (i.e., HCA-DAN) obtains excellent results compared to other unsupervised domain adaptation methods, with a DSC of 58.36%, 56.72%, 49.25%, and 62.20% in the four cross-center test tasks. CONCLUSIONS Comprehensive experimental results demonstrate that the proposed method outperforms the compared methods on this multi-center database and is promising for routine clinical workflows.
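The Dice similarity coefficient used to score all of these segmentation results is a simple overlap ratio between predicted and reference masks; a minimal numpy sketch with toy binary masks:

```python
import numpy as np

def dice(pred, target):
    """Dice similarity coefficient for binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, target).sum() / denom

a = np.array([[1, 1, 0, 0]])
b = np.array([[1, 0, 0, 0]])
print(dice(a, b))  # 2*1 / (2+1) = 0.666...
```

DSC ranges from 0 (no overlap) to 1 (identical masks); the percentages in the abstract are this score averaged over test cases.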
Collapse
Affiliation(s)
- Ning Yuan
- Department of Medical Imaging, Heping Hospital Affiliated to Changzhi Medical College, Changzhi, China
| | - Yongtao Zhang
- School of Biomedical Engineering, Health Science Centers, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
| | - Kuan Lv
- Peking University China-Japan Friendship School of Clinical Medicine, Beijing, China
| | - Yiyao Liu
- School of Biomedical Engineering, Health Science Centers, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
| | - Aocai Yang
- Department of Radiology, China-Japan Friendship Hospital, No. 2 East Yinghua Road, Chaoyang District, Beijing, 100029, China
| | - Pianpian Hu
- Department of Radiology, China-Japan Friendship Hospital, No. 2 East Yinghua Road, Chaoyang District, Beijing, 100029, China
| | - Hongwei Yu
- Department of Radiology, China-Japan Friendship Hospital, No. 2 East Yinghua Road, Chaoyang District, Beijing, 100029, China
| | - Xiaowei Han
- Department of Radiology, The Affiliated Drum Tower Hospital of Nanjing University Medical School, Nanjing, China
| | - Xing Guo
- Department of Medical Imaging, Heping Hospital Affiliated to Changzhi Medical College, Changzhi, China
| | - Junfeng Li
- Department of Medical Imaging, Heping Hospital Affiliated to Changzhi Medical College, Changzhi, China
| | - Tianfu Wang
- School of Biomedical Engineering, Health Science Centers, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
| | - Baiying Lei
- School of Biomedical Engineering, Health Science Centers, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- AI Research Center for Medical Image Analysis and Diagnosis, Shenzhen University, Guangdong, China
| | - Guolin Ma
- Department of Radiology, China-Japan Friendship Hospital, No. 2 East Yinghua Road, Chaoyang District, Beijing, 100029, China.
| |
Collapse
|
46
|
Saraiva MM, Spindler L, Manzione T, Ribeiro T, Fathallah N, Martins M, Cardoso P, Mendes F, Fernandes J, Ferreira J, Macedo G, Nadal S, de Parades V. Deep Learning and High-Resolution Anoscopy: Development of an Interoperable Algorithm for the Detection and Differentiation of Anal Squamous Cell Carcinoma Precursors-A Multicentric Study. Cancers (Basel) 2024; 16:1909. [PMID: 38791987 PMCID: PMC11119426 DOI: 10.3390/cancers16101909] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2024] [Revised: 05/04/2024] [Accepted: 05/06/2024] [Indexed: 05/26/2024] Open
Abstract
High-resolution anoscopy (HRA) plays a central role in the detection and treatment of precursors of anal squamous cell carcinoma (ASCC). Artificial intelligence (AI) algorithms have shown high levels of efficiency in detecting and differentiating high-grade squamous intraepithelial lesions (HSIL) from low-grade squamous intraepithelial lesions (LSIL) in HRA images. Our aim was to develop a deep learning system for the automatic detection and differentiation of HSIL versus LSIL using HRA images from both conventional and digital proctoscopes. A convolutional neural network (CNN) was developed based on 151 HRA exams performed at two high-volume centers using conventional and digital HRA systems. A total of 57,822 images were included, 28,874 containing HSIL and 28,948 containing LSIL. Partial subanalyses were performed to evaluate the performance of the CNN on the subsets of images with acetic acid and Lugol iodine staining, and after treatment of the anal canal. The overall accuracy of the CNN in distinguishing HSIL from LSIL during the testing stage was 94.6%. The algorithm had an overall sensitivity and specificity of 93.6% and 95.7%, respectively (AUC 0.97). For staining with acetic acid, HSIL was differentiated from LSIL with an overall accuracy of 96.4%, while for Lugol staining and after therapeutic manipulation, these values were 96.6% and 99.3%, respectively. The introduction of AI algorithms to HRA may enhance the early diagnosis of ASCC precursors, and this system was shown to perform adequately across conventional and digital HRA interfaces.
Collapse
Affiliation(s)
- Miguel Mascarenhas Saraiva
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (T.R.); (M.M.); (P.C.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
| | - Lucas Spindler
- Department of Proctology, GH Paris Saint-Joseph, 185, Rue Raymond Losserand, 75014 Paris, France
| | - Thiago Manzione
- Department of Surgery, Instituto de Infectologia Emílio Ribas, São Paulo 01246-900, Brazil
| | - Tiago Ribeiro
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (T.R.); (M.M.); (P.C.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
| | - Nadia Fathallah
- Department of Proctology, GH Paris Saint-Joseph, 185, Rue Raymond Losserand, 75014 Paris, France
| | - Miguel Martins
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (T.R.); (M.M.); (P.C.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
| | - Pedro Cardoso
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (T.R.); (M.M.); (P.C.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
| | - Francisco Mendes
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (T.R.); (M.M.); (P.C.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
| | - Joana Fernandes
- Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
- DigestAID—Artificial Intelligence Development, Rua Alfredo Allen, 4200-135 Porto, Portugal
| | - João Ferreira
- Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
- DigestAID—Artificial Intelligence Development, Rua Alfredo Allen, 4200-135 Porto, Portugal
| | - Guilherme Macedo
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (T.R.); (M.M.); (P.C.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
| | - Sidney Nadal
- Department of Surgery, Instituto de Infectologia Emílio Ribas, São Paulo 01246-900, Brazil
| | - Vincent de Parades
- Department of Proctology, GH Paris Saint-Joseph, 185, Rue Raymond Losserand, 75014 Paris, France
| |
Collapse
|
47
|
Lei Y, Feng B, Wan M, Xu K, Cui J, Ma C, Sun J, Yao C, Gan S, Shi J, Cui E. Predicting microvascular invasion in hepatocellular carcinoma with a CT- and MRI-based multimodal deep learning model. Abdom Radiol (NY) 2024; 49:1397-1410. [PMID: 38433144 DOI: 10.1007/s00261-024-04202-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2023] [Revised: 01/04/2024] [Accepted: 01/12/2024] [Indexed: 03/05/2024]
Abstract
PURPOSE To investigate the value of a multimodal deep learning (MDL) model based on computed tomography (CT) and magnetic resonance imaging (MRI) for predicting microvascular invasion (MVI) in hepatocellular carcinoma (HCC). METHODS A total of 287 patients with HCC from our institution and 58 patients from another institution were included. Among these, 119 patients with only CT data and 116 patients with only MRI data were selected for single-modality deep learning model development, after which selected parameters were migrated for MDL model development with transfer learning (TL). In addition, 110 patients with simultaneous CT and MRI data were divided into a training cohort (n = 66) and a validation cohort (n = 44). Features extracted by DenseNet121 were input into an extreme learning machine (ELM) classifier to construct the classification model. RESULTS The area under the curve (AUC) of the MDL model was 0.844, superior to that of the single-phase CT (AUC = 0.706-0.776, P < 0.05), single-sequence MRI (AUC = 0.706-0.717, P < 0.05), single-modality DL (AUC all-phase CT = 0.722, AUC all-sequence MRI = 0.731; P < 0.05), and clinical (AUC = 0.648, P < 0.05) models, but not to that of the delayed phase (DP) and in-phase (IP) MRI and portal venous phase (PVP) CT models. The MDL model achieved better performance than the models described above (P < 0.05). When combined with clinical features, the AUC of the MDL model increased from 0.844 to 0.871. A nomogram combining deep learning signatures (DLS) and clinical indicators demonstrated a greater overall net gain than the MDL models (P < 0.05). CONCLUSION The MDL model is a valuable noninvasive technique for preoperatively predicting MVI in HCC.
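The extreme learning machine mentioned above is a single-hidden-layer network whose hidden weights stay random and fixed, so only the output weights are solved, in closed form, by a pseudoinverse. A minimal numpy sketch, with Gaussian toy features standing in for the DenseNet121 feature vectors (the study's actual features and hyperparameters are not reproduced):

```python
import numpy as np

def train_elm(X, y_onehot, n_hidden=64, seed=0):
    """Extreme learning machine: random, fixed hidden layer; output weights
    solved in closed form with the Moore-Penrose pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)               # hidden activations
    beta = np.linalg.pinv(H) @ y_onehot  # least-squares output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy 2-class problem standing in for extracted deep-feature vectors.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 8)), rng.normal(3, 1, (30, 8))])
y = np.array([0] * 30 + [1] * 30)
W, b, beta = train_elm(X, np.eye(2)[y])
acc = (predict_elm(X, W, b, beta) == y).mean()
print(acc)
```

Because training reduces to one least-squares solve, an ELM head is cheap to fit on top of frozen CNN features, which is presumably why it was chosen here.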
Collapse
Affiliation(s)
- Yan Lei
- Department of Radiology, Jiangmen Central Hospital, 23 Beijie Haibang Street, Jiangmen, People's Republic of China
- Zunyi Medical University, 1 Xiaoyuan Road, Zunyi, People's Republic of China
| | - Bao Feng
- Laboratory of Intelligent Detection and Information Processing, School of Electronic Information and Automation, Guilin University of Aerospace Technology, 2 Jinji Road, Guilin, People's Republic of China
| | - Meiqi Wan
- Department of Radiology, Jiangmen Central Hospital, 23 Beijie Haibang Street, Jiangmen, People's Republic of China
- Zunyi Medical University, 1 Xiaoyuan Road, Zunyi, People's Republic of China
| | - Kuncai Xu
- Laboratory of Intelligent Detection and Information Processing, School of Electronic Information and Automation, Guilin University of Aerospace Technology, 2 Jinji Road, Guilin, People's Republic of China
| | - Jin Cui
- Department of Radiology, Jiangmen Central Hospital, 23 Beijie Haibang Street, Jiangmen, People's Republic of China
| | - Changyi Ma
- Department of Radiology, Jiangmen Central Hospital, 23 Beijie Haibang Street, Jiangmen, People's Republic of China
| | - Junqi Sun
- Department of Radiology, Yuebei People's Hospital, 133 Huimin Street, Shaoguan, People's Republic of China
| | - Changyin Yao
- Department of Radiology, Jiangmen Central Hospital, 23 Beijie Haibang Street, Jiangmen, People's Republic of China
- Guangdong Medical University, 2 Wenming East Road, Zhanjiang, People's Republic of China
| | - Shiman Gan
- Department of Radiology, Jiangmen Central Hospital, 23 Beijie Haibang Street, Jiangmen, People's Republic of China
- Guangdong Medical University, 2 Wenming East Road, Zhanjiang, People's Republic of China
| | - Jiangfeng Shi
- Laboratory of Intelligent Detection and Information Processing, School of Electronic Information and Automation, Guilin University of Aerospace Technology, 2 Jinji Road, Guilin, People's Republic of China
| | - Enming Cui
- Department of Radiology, Jiangmen Central Hospital, 23 Beijie Haibang Street, Jiangmen, People's Republic of China.
- Zunyi Medical University, 1 Xiaoyuan Road, Zunyi, People's Republic of China.
- Guangdong Medical University, 2 Wenming East Road, Zhanjiang, People's Republic of China.
- Jiangmen Key Laboratory of Artificial Intelligence in Medical Image Computation and Application, 23 Beijie Haibang Street, Jiangmen, People's Republic of China.
| |
Collapse
|
48
|
Veiga-Canuto D, Cerdá Alberich L, Fernández-Patón M, Jiménez Pastor A, Lozano-Montoya J, Miguel Blanco A, Martínez de Las Heras B, Sangüesa Nebot C, Martí-Bonmatí L. Imaging biomarkers and radiomics in pediatric oncology: a view from the PRIMAGE (PRedictive In silico Multiscale Analytics to support cancer personalized diaGnosis and prognosis, Empowered by imaging biomarkers) project. Pediatr Radiol 2024; 54:562-570. [PMID: 37747582 DOI: 10.1007/s00247-023-05770-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/01/2023] [Revised: 09/01/2023] [Accepted: 09/03/2023] [Indexed: 09/26/2023]
Abstract
This review paper presents the practical development of imaging biomarkers within the scope of the PRIMAGE (PRedictive In silico Multiscale Analytics to support cancer personalized diaGnosis and prognosis, Empowered by imaging biomarkers) project, as a noninvasive and reliable way to improve diagnosis and prognosis in pediatric oncology. The PRIMAGE project is a European multi-center research initiative that focuses on developing medical imaging-derived artificial intelligence (AI) solutions designed to enhance overall management and decision-making for two types of pediatric cancer: neuroblastoma and diffuse intrinsic pontine glioma. To allow this, the PRIMAGE project has created an open-cloud platform that combines imaging, clinical, and molecular data with AI models developed from these data, creating a comprehensive decision-support environment for clinicians managing patients with these two cancers. To achieve this, a standardized data processing and analysis workflow was implemented to generate robust and reliable predictions for different clinical endpoints. Magnetic resonance (MR) image harmonization and registration were performed as part of the workflow. Subsequently, an automated tool for the detection and segmentation of tumors was trained and internally validated. The Dice similarity coefficient obtained for the independent validation dataset was 0.997, indicating compatibility with the manual segmentation variability. Following this, radiomics and deep features were extracted and correlated with clinical endpoints. Finally, reproducible and relevant quantitative imaging features were integrated with clinical and molecular data to enrich both the predictive models and a set of visual analytics tools, making the PRIMAGE platform a complete clinical decision aid system. To ensure the advancement of research in this field and to foster engagement with the wider research community, the PRIMAGE data repository and platform are currently being integrated into the European Federation for Cancer Images (EUCAIM), the largest European cancer imaging research infrastructure created to date.
Collapse
Affiliation(s)
- Diana Veiga-Canuto
- Grupo de Investigación Biomédica en Imagen, Instituto de Investigación Sanitaria La Fe, Avenida Fernando Abril Martorell, 106 Torre A planta 7, 46026, Valencia, Spain.
- Área Clínica de Imagen Médica, Área Clínica de Imagen Médica, Hospital Universitari i Politècnic La Fe, Avinguda Fernando Abril Martorell, 106 Torre E planta 0, 46026, València, Spain.
| | - Leonor Cerdá Alberich
- Grupo de Investigación Biomédica en Imagen, Instituto de Investigación Sanitaria La Fe, Avenida Fernando Abril Martorell, 106 Torre A planta 7, 46026, Valencia, Spain
| | - Matías Fernández-Patón
- Grupo de Investigación Biomédica en Imagen, Instituto de Investigación Sanitaria La Fe, Avenida Fernando Abril Martorell, 106 Torre A planta 7, 46026, Valencia, Spain
| | | | | | - Ana Miguel Blanco
- Grupo de Investigación Biomédica en Imagen, Instituto de Investigación Sanitaria La Fe, Avenida Fernando Abril Martorell, 106 Torre A planta 7, 46026, Valencia, Spain
| | - Blanca Martínez de Las Heras
- Pediatric Oncology Department, Hospital Universitario y Politécnico La Fe, Avenida Fernando Abril Martorell, 106 Torre G planta 2, 46026, Valencia, Spain
| | - Cinta Sangüesa Nebot
- Área Clínica de Imagen Médica, Área Clínica de Imagen Médica, Hospital Universitari i Politècnic La Fe, Avinguda Fernando Abril Martorell, 106 Torre E planta 0, 46026, València, Spain
| | - Luis Martí-Bonmatí
- Grupo de Investigación Biomédica en Imagen, Instituto de Investigación Sanitaria La Fe, Avenida Fernando Abril Martorell, 106 Torre A planta 7, 46026, Valencia, Spain
- Área Clínica de Imagen Médica, Área Clínica de Imagen Médica, Hospital Universitari i Politècnic La Fe, Avinguda Fernando Abril Martorell, 106 Torre E planta 0, 46026, València, Spain
| |
Collapse
|
49
|
Zhan F, Wang W, Chen Q, Guo Y, He L, Wang L. Three-Direction Fusion for Accurate Volumetric Liver and Tumor Segmentation. IEEE J Biomed Health Inform 2024; 28:2175-2186. [PMID: 38109246 DOI: 10.1109/jbhi.2023.3344392] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2023]
Abstract
Biomedical image segmentation of organs, tissues, and lesions has gained increasing attention in clinical treatment planning and navigation, which involves the exploration of two-dimensional (2D) and three-dimensional (3D) contexts in biomedical images. Compared to 2D methods, 3D methods pay more attention to inter-slice correlations, which offer additional spatial information for image segmentation. An organ or tumor has a 3D structure that can be observed from three directions. Previous studies focus only on the vertical axis, limiting the understanding of the relationship between a tumor and its surrounding tissues. Important information can also be obtained from the sagittal and coronal axes. Therefore, spatial information about organs and tumors can be obtained from three directions, i.e., the sagittal, coronal, and vertical axes, to better understand the invasion depth of a tumor and its relationship with the surrounding tissues. Moreover, the edges of organs and tumors in biomedical images may be blurred. To address these problems, we propose a three-direction fusion volumetric segmentation (TFVS) model for segmenting 3D biomedical images from three perspectives in the sagittal, coronal, and transverse planes, respectively. We use the dataset of the liver task provided by the Medical Segmentation Decathlon challenge to train our model. The TFVS method demonstrates competitive performance on the 3D-IRCADB dataset. In addition, the t-test and Wilcoxon signed-rank test are performed to show the statistical significance of the improvement achieved by the proposed method over the baseline methods. The proposed method is expected to be beneficial in guiding and facilitating clinical diagnosis and treatment.
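The three-direction idea, running a slice-level predictor along each anatomical axis and fusing the resulting volumetric probability maps, can be sketched generically in numpy. This illustrates the fusion pattern only, with a voxelwise sigmoid standing in for a trained 2D segmentation network; it is not the TFVS architecture itself:

```python
import numpy as np

def predict_slicewise(volume, axis, slice_model):
    """Apply a 2D slice-level model along one axis and restack into a volume."""
    moved = np.moveaxis(volume, axis, 0)
    probs = np.stack([slice_model(s) for s in moved], axis=0)
    return np.moveaxis(probs, 0, axis)

def three_direction_fusion(volume, slice_model):
    """Average slice-wise probability maps from the sagittal, coronal, and
    transverse views into one volumetric prediction."""
    maps = [predict_slicewise(volume, ax, slice_model) for ax in range(3)]
    return np.mean(maps, axis=0)

# Toy per-slice "model": a soft threshold standing in for a 2D segmentation CNN.
toy_model = lambda s: 1.0 / (1.0 + np.exp(-10.0 * (s - 0.5)))
vol = np.random.default_rng(0).random((4, 5, 6))
fused = three_direction_fusion(vol, toy_model)
print(fused.shape)  # (4, 5, 6)
```

With a real 2D network, the three per-axis maps differ, and averaging (or a learned fusion) lets each view contribute the inter-slice context the others miss.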
Collapse
|
50
|
Nikzad N, Fuentes DT, Roach M, Chowdhury T, Cagley M, Badawy M, Elkhesen A, Hassan M, Elsayes KM, Beretta L, Koay EJ, Jalal PK. Enhancement Pattern Mapping for Early Detection of Hepatocellular Carcinoma in Patients with Cirrhosis. J Hepatocell Carcinoma 2024; 11:595-606. [PMID: 38525156 PMCID: PMC10961013 DOI: 10.2147/jhc.s449996] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2023] [Accepted: 03/07/2024] [Indexed: 03/26/2024] Open
Abstract
Background and Aims Limited methods exist to accurately characterize the risk of malignant progression of liver lesions. Enhancement pattern mapping (EPM) measures the voxel-based root mean square deviation (RMSD) of parenchyma, and the contrast-to-noise ratio (CNR) is increased in malignant lesions. This study investigates the use of EPM to differentiate HCC from cirrhotic parenchyma with and without benign lesions. Methods Patients with cirrhosis undergoing MRI surveillance were studied prospectively. Cases (n=48) were defined as patients with LI-RADS 3 and 4 lesions who developed HCC during surveillance. Controls (n=99) were patients with and without LI-RADS 3 and 4 lesions who did not develop HCC. Manual and automated EPM signals of liver parenchyma in cases and controls were quantitatively validated on an independent patient set using cross-validation, with manual methods avoiding parenchyma containing artifacts or blood vessels. Results With manual EPM, an RMSD of 0.37 was identified as a cutoff for distinguishing lesions that progressed to HCC from background parenchyma with and without lesions on pre-diagnostic scans (median time interval 6.8 months), with an area under the curve (AUC) of 0.83 (CI: 0.73-0.94) and a sensitivity, specificity, and accuracy of 0.65, 0.97, and 0.89, respectively. At the time of the diagnostic scans, a sensitivity, specificity, and accuracy of 0.79, 0.93, and 0.88 were achieved with manual EPM, with an AUC of 0.89 (CI: 0.82-0.96). EPM RMSD signals of background parenchyma that did not progress to HCC were similar in cases and controls (case EPM: 0.22 ± 0.08, control EPM: 0.22 ± 0.09, p=0.8). Automated EPM produced similar quantitative results and performance. Conclusion With manual EPM, a cutoff of 0.37 identifies quantifiable differences between HCC cases and controls approximately six months prior to the diagnosis of HCC, with an accuracy of 89%.
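The two quantities named above have simple voxelwise forms. A numpy sketch under one plausible reading, RMSD as the root mean square deviation between post-contrast and baseline parenchymal intensities, and CNR as the lesion/background mean difference over background noise; the study's exact EPM computation and region definitions are not reproduced here:

```python
import numpy as np

def voxel_rmsd(enhanced, baseline):
    """Root mean square deviation between post-contrast and baseline
    voxel intensities (illustrative reading of EPM's RMSD signal)."""
    return float(np.sqrt(np.mean((enhanced - baseline) ** 2)))

def cnr(lesion, background):
    """Contrast-to-noise ratio: lesion-vs-background mean intensity
    difference divided by background noise."""
    return float((lesion.mean() - background.mean()) / background.std())

rng = np.random.default_rng(0)
background = rng.normal(1.0, 0.1, 1000)  # toy cirrhotic parenchyma voxels
lesion = rng.normal(1.5, 0.1, 200)       # toy hyperenhancing lesion voxels
print(round(cnr(lesion, background), 1))
```

On this reading, applying a threshold (the abstract's 0.37 cutoff) to the per-region RMSD is what separates lesions that progress to HCC from background parenchyma.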
Collapse
Affiliation(s)
- Newsha Nikzad
- Department of Medicine and Surgery, Baylor College of Medicine, Houston, TX, USA
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Department of Internal Medicine, The University of Chicago Medical Center, Chicago, IL, USA
| | - David Thomas Fuentes
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Millicent Roach
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Tasadduk Chowdhury
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Matthew Cagley
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Mohamed Badawy
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Ahmed Elkhesen
- Department of Internal Medicine, Texas Tech University Health Sciences Center, Lubbock, TX, USA
| | - Manal Hassan
- Department of Epidemiology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Khaled M Elsayes
- Department of Abdominal Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Laura Beretta
- Department of Molecular and Cellular Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Eugene Jon Koay
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Prasun Kumar Jalal
- Department of Medicine and Surgery, Baylor College of Medicine, Houston, TX, USA
| |
Collapse
|