1. Huang R, Ye Y, Chang A, Huang H, Zheng Z, Tan L, Tang G, Luo M, Yi X, Liu P, Wu J, Luo B, Ni D. Subtyping breast lesions via collective intelligence based long-tailed recognition in ultrasound. Med Image Anal 2025;102:103548. PMID: 40121808. DOI: 10.1016/j.media.2025.103548.
Abstract
Breast lesions display a wide spectrum of histological subtypes. Recognizing these subtypes is vital for optimizing patient care and facilitating tailored treatment strategies, compared with a simplistic binary classification of malignancy. However, this task relies on invasive biopsy tests, which carry inherent risks and can lead to over-diagnosis, unnecessary expenses, and pain for patients. To avoid this, we propose to infer lesion subtypes from ultrasound images directly. Meanwhile, the incidence rates of different subtypes exhibit a skewed long-tailed distribution that presents substantial challenges for effective recognition. Inspired by the collective intelligence used in clinical diagnosis to handle complex or rare cases, we propose a framework, CoDE, that amalgamates the diverse expertise of different backbones to bolster robustness across varying scenarios for automated lesion subtyping. It utilizes dual-level balanced individual supervision to fully exploit prior knowledge while accounting for class imbalance. It is also equipped with a batch-based online competitive distillation module to stimulate dynamic knowledge exchange. Experimental results demonstrate that the model surpassed state-of-the-art approaches by more than 7.22% in F1-score on a challenging breast dataset with an imbalance ratio as high as 47.9:1.
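CoDE itself has not been released, but the core difficulty it targets, training under a 47.9:1 long-tailed class distribution, can be illustrated with a standard counter-measure. The sketch below (PyTorch; the subtype counts are hypothetical) applies inverse-frequency class weighting to the cross-entropy loss so that rare subtypes contribute more to the gradient; it is a minimal stand-in, not the paper's dual-level balanced supervision.

```python
import torch
import torch.nn as nn

# Hypothetical subtype counts with a long tail: the head class is about
# 48x the rarest class, mirroring the paper's 47.9:1 imbalance ratio.
counts = torch.tensor([4790.0, 2100.0, 650.0, 300.0, 100.0])

# Inverse-frequency weights, rescaled so the mean weight stays near 1.
weights = counts.sum() / (len(counts) * counts)

criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 5)          # a batch of 8 samples, 5 subtypes
labels = torch.randint(0, 5, (8,))
loss = criterion(logits, labels)    # tail classes now weigh more heavily
print(loss.item())
```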
Affiliation(s)
- Ruobing Huang, Yinyu Ye, Ao Chang, Han Huang, Dong Ni: Medical Ultrasound Image Computing (MUSIC) Lab, Guangdong Key Laboratory of Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, China
- Zijie Zheng, Long Tan, Guoxue Tang, Man Luo, Xiuwen Yi, Pan Liu, Jiayi Wu, Baoming Luo: Department of Ultrasound, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
2. Li P, Yin M, Guerrini S, Gao W. Roles of artificial intelligence and high frame-rate contrast-enhanced ultrasound in the differential diagnosis of Breast Imaging Reporting and Data System 4 breast nodules. Gland Surg 2025;14:462-478. PMID: 40256461. PMCID: PMC12004330. DOI: 10.21037/gs-24-187.
Abstract
Background Breast cancer prevalence and mortality are rising, emphasizing the need for early, accurate diagnosis. Contrast-enhanced ultrasound (CEUS) and artificial intelligence (AI) show promise in distinguishing benign from malignant breast nodules. We compared the diagnostic values of AI, high frame-rate CEUS (HiFR-CEUS), and their combination in Breast Imaging Reporting and Data System (BI-RADS) 4 nodules, using pathology as the gold standard. Methods Patients with BI-RADS 4 breast nodules who were hospitalized at the Department of Thyroid and Breast Surgery, Taizhou People's Hospital from December 2021 to June 2022 were enrolled in the study. In total, 80 female patients (80 lesions) underwent preoperative AI and/or HiFR-CEUS. We assessed the diagnostic outcomes of AI, HiFR-CEUS, and their combination, calculating sensitivity (SE), specificity (SP), accuracy (ACC), and positive/negative predictive values (PPV/NPV). Reliability was compared using Kappa statistics, and the AI-HiFR-CEUS correlation was analyzed with Pearson's test. Receiver operating characteristic curves were plotted to compare the diagnostic accuracy of AI, HiFR-CEUS, and their combined approach in differentiating BI-RADS 4 lesions. Results Of the 80 lesions, 18 were pathologically confirmed to be benign, while the remaining 62 were malignant. The SE, SP, ACC, PPV, and NPV were 75.81%, 94.44%, 80.00%, 97.92%, and 53.13% in the AI group; 74.20%, 94.44%, 78.75%, 97.91%, and 51.51% in the HiFR-CEUS group; and 98.39%, 88.89%, 96.25%, 96.83%, and 94.12% in the combination group, respectively. Thus, the SE, ACC, and NPV of the combination group were significantly higher than those of the AI and HiFR-CEUS groups, and the SP of the combination group was lower (all P<0.05); however, no significant difference was found between the groups in terms of the PPV (P>0.05). No statistically significant difference was observed in the diagnostic performance of the AI and HiFR-CEUS groups (all P>0.05). The AI and HiFR-CEUS groups had moderate agreement with the gold standard (Kappa=0.551 and Kappa=0.530, respectively), while the combination group had high agreement (Kappa=0.890). AI was positively correlated with HiFR-CEUS (r=0.249, P<0.05). The areas under the curve (AUCs) of AI, HiFR-CEUS, and both in combination were 0.851±0.039, 0.815±0.047, and 0.936±0.039, respectively. Thus, the AUC of the combination group was significantly higher than those of the AI and HiFR-CEUS groups (Z1=2.207, Z2=2.477, respectively; both P<0.05). The AI group had a higher AUC than the HiFR-CEUS group, but the difference was not statistically significant (Z3=0.554, P>0.05). Conclusions Compared with AI alone or HiFR-CEUS alone, the combined use of these two methods had higher diagnostic performance in distinguishing between benign and malignant BI-RADS 4 breast nodules. Thus, our combination method could further improve diagnostic accuracy and guide clinical decision making.
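The AI group's headline metrics can be reproduced from a 2x2 confusion table. In the sketch below (plain Python), the table is reconstructed from the reported prevalence (62 malignant, 18 benign) and the reported sensitivity and specificity, so the cell counts are inferred rather than taken from the paper's raw data; note that the Kappa of 0.551 against pathology falls out of the same table.

```python
# Reconstructed 2x2 table for the AI group: 62 malignant lesions with
# SE 75.81% gives TP=47/FN=15; 18 benign with SP 94.44% gives TN=17/FP=1.
TP, FN, TN, FP = 47, 15, 17, 1
N = TP + FN + TN + FP

se  = TP / (TP + FN)                  # sensitivity: 0.7581
sp  = TN / (TN + FP)                  # specificity: 0.9444
acc = (TP + TN) / N                   # accuracy:    0.8000
ppv = TP / (TP + FP)                  # PPV:         0.9792
npv = TN / (TN + FN)                  # NPV:         0.5313

# Cohen's Kappa vs. pathology from the same marginals.
po = acc
pe = ((TP + FP) * (TP + FN) + (FN + TN) * (FP + TN)) / N**2
kappa = (po - pe) / (1 - pe)          # ~0.551, matching the abstract
print(se, sp, acc, ppv, npv, round(kappa, 3))
```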
Affiliation(s)
- Ping Li, Ming Yin, Wenxiang Gao: Ultrasound Medicine Department, The Affiliated Taizhou People’s Hospital of Nanjing Medical University, Taizhou, China
- Susanna Guerrini: Unit of Diagnostic Imaging, Department of Medical Sciences, Azienda Ospedaliero-Universitaria Senese, University of Siena, Siena, Italy
3. Liu Y, Li J, Zhao C, Zhang Y, Chen Q, Qin J, Dong L, Wang T, Jiang W, Lei B. FAMF-Net: Feature Alignment Mutual Attention Fusion With Region Awareness for Breast Cancer Diagnosis via Imbalanced Data. IEEE Trans Med Imaging 2025;44:1153-1167. PMID: 39499601. DOI: 10.1109/tmi.2024.3485612.
Abstract
Automatic and accurate classification of breast cancer in multimodal ultrasound images is crucial to improving patients' diagnosis and treatment and saving medical resources. Methodologically, the fusion of multimodal ultrasound images often encounters challenges such as misalignment, limited utilization of complementary information, poor interpretability in feature fusion, and imbalances in sample categories. To solve these problems, we propose a feature alignment mutual attention fusion method (FAMF-Net), which consists of a region awareness alignment (RAA) block, a mutual attention fusion (MAF) block, and a reinforcement learning-based dynamic optimization strategy (RDO). Specifically, RAA achieves region awareness through class activation mapping and performs translation transformation to achieve feature alignment. MAF utilizes a mutual attention mechanism for feature interaction and fusion, mining edge and color features separately in B-mode and shear wave elastography images, which enhances the complementarity of features and improves interpretability. Finally, RDO uses the distribution of samples and prediction probabilities during training as the state of reinforcement learning to dynamically optimize the weights of the loss function, thereby addressing class imbalance. Experimental results based on our clinically obtained dataset demonstrate the effectiveness of the proposed method. Our code will be available at: https://github.com/Magnety/Multi_modal_Image.
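RDO's exact reinforcement-learning formulation is in the paper and repository; as a simplified stand-in for the idea of adjusting loss weights from training statistics, the sketch below (PyTorch; the update rule and running counts are illustrative assumptions, not the published RDO policy) up-weights classes the model currently handles poorly.

```python
import torch
import torch.nn.functional as F

def dynamic_weights(correct, total, eps=1e-6):
    """Give harder (lower-accuracy) classes larger loss weights.
    A hand-crafted heuristic standing in for RDO's learned policy."""
    acc = correct / (total + eps)
    w = 1.0 - acc + eps
    return w / w.mean()              # keep the average weight near 1

# Hypothetical running per-class statistics gathered during training.
correct = torch.tensor([90.0, 30.0])   # benign vs. malignant hits
total   = torch.tensor([100.0, 40.0])

logits = torch.randn(16, 2)
labels = torch.randint(0, 2, (16,))
loss = F.cross_entropy(logits, labels, weight=dynamic_weights(correct, total))
print(loss.item())
```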
4. Uwimana A, Gnecco G, Riccaboni M. Artificial intelligence for breast cancer detection and its health technology assessment: A scoping review. Comput Biol Med 2025;184:109391. PMID: 39579663. DOI: 10.1016/j.compbiomed.2024.109391.
Abstract
BACKGROUND Recent healthcare advancements highlight the potential of Artificial Intelligence (AI) - and especially, among its subfields, Machine Learning (ML) - in enhancing Breast Cancer (BC) clinical care, leading to improved patient outcomes and increased radiologists' efficiency. While medical imaging techniques have significantly contributed to BC detection and diagnosis, their synergy with AI algorithms has consistently demonstrated superior diagnostic accuracy, reduced False Positives (FPs), and enabled personalized treatment strategies. Despite the burgeoning enthusiasm for leveraging AI for early and effective BC clinical care, its widespread integration into clinical practice is yet to be realized, and the evaluation of AI-based health technologies in terms of health and economic outcomes remains an ongoing endeavor. OBJECTIVES This scoping review aims to investigate AI (and especially ML) applications that have been implemented and evaluated across diverse clinical tasks or decisions in breast imaging and to explore the current state of evidence concerning the assessment of AI-based technologies for BC clinical care within the context of Health Technology Assessment (HTA). METHODS We conducted a systematic literature search following the Preferred Reporting Items for Systematic review and Meta-Analysis Protocols (PRISMA-P) checklist in PubMed and Scopus to identify relevant studies on AI (and particularly ML) applications in BC detection and diagnosis. We limited our search to studies published from January 2015 to October 2023. The Minimum Information about CLinical Artificial Intelligence Modeling (MI-CLAIM) checklist was used to assess the quality of AI algorithm development, evaluation, and reporting in the reviewed articles. The HTA Core Model® was also used to analyze the comprehensiveness, robustness, and reliability of the reported results and evidence in AI-system evaluations to ensure rigorous assessment of AI systems' utility and cost-effectiveness in clinical practice. RESULTS Of the 1652 initially identified articles, 104 were deemed eligible for inclusion in the review. Most studies examined the clinical effectiveness of AI-based systems (78.84%, n=82), with one study focusing on safety in clinical settings and 13.46% (n=14) focusing on patient benefits. Of the studies, 31.73% (n=33) were ethically approved to be carried out in clinical practice, whereas 25% (n=26) evaluated AI systems legally approved for clinical use. Notably, none of the studies addressed the organizational implications of AI systems in clinical practice. Of the 104 studies, only two focused on cost-effectiveness analysis; these were analyzed separately. The average percentage scores for the first 102 AI-based studies' quality assessment based on the MI-CLAIM checklist criteria were 84.12%, 83.92%, 83.98%, 74.51%, and 14.7% for study design, data and optimization, model performance, model examination, and reproducibility, respectively. Notably, 20.59% (n=21) of these studies relied on large-scale representative real-world breast screening datasets, with only 10.78% (n=11) of studies demonstrating the robustness and generalizability of the evaluated AI systems. CONCLUSION In bridging the gap between cutting-edge developments and seamless integration of AI systems into clinical workflows, persistent challenges encompass data quality and availability, ethical and legal considerations, robustness and trustworthiness, scalability, and alignment with existing radiologists' workflows. These hurdles impede the synthesis of comprehensive, robust, and reliable evidence to substantiate these systems' clinical utility, relevance, and cost-effectiveness in real-world clinical workflows. Consequently, evaluating AI-based health technologies through established HTA methodologies becomes complicated. We also highlight the potentially significant influence on AI systems' effectiveness of various factors, such as operational dynamics, organizational structure, the application context of AI systems, and practices in breast screening or examination reading with AI support tools in radiology. Furthermore, we emphasize substantial reciprocal influences on decision-making processes between AI systems and radiologists. Thus, we advocate for an adapted assessment framework specifically designed to address these potential influences on AI systems' effectiveness, mainly addressing system-level transformative implications for AI systems rather than focusing solely on technical performance and task-level evaluations.
Affiliation(s)
- Massimo Riccaboni: IMT School for Advanced Studies, Lucca, Italy; IUSS University School for Advanced Studies, Pavia, Italy
5. Singh S, Healy NA. The top 100 most-cited articles on artificial intelligence in breast radiology: a bibliometric analysis. Insights Imaging 2024;15:297. PMID: 39666106. PMCID: PMC11638451. DOI: 10.1186/s13244-024-01869-4.
Abstract
INTRODUCTION Artificial intelligence (AI) in radiology is a rapidly evolving field. In breast imaging, AI has already been applied in real-world settings, and multiple studies have been conducted in the area. The aim of this analysis is to identify the most influential publications on the topic of artificial intelligence in breast imaging. METHODS A retrospective bibliometric analysis was conducted on artificial intelligence in breast radiology using the Web of Science database. The search strategy involved searching for the keywords 'breast radiology' or 'breast imaging' and the various keywords associated with AI, such as 'deep learning', 'machine learning', and 'neural networks'. RESULTS From the top 100 list, the number of citations per article ranged from 30 to 346 (average 85). The most-cited article, titled 'Artificial Neural Networks In Mammography-Application To Decision-Making In The Diagnosis Of Breast-Cancer', was published in Radiology in 1993. Eighty-three of the articles were published in the last 10 years. The journal with the greatest number of articles was Radiology (n=22). The most common country of origin was the United States (n=51). Commonly occurring topics were the use of deep learning models for breast cancer detection in mammography or ultrasound, radiomics in breast cancer, and the use of AI for breast cancer risk prediction. CONCLUSION This study provides a comprehensive analysis of the top 100 most-cited papers on the subject of artificial intelligence in breast radiology and discusses the current most influential papers in the field. CLINICAL RELEVANCE STATEMENT This article provides a concise summary of the top 100 most-cited articles in the field of artificial intelligence in breast radiology. It discusses the most impactful articles and explores the recent trends and topics of research in the field. KEY POINTS Multiple studies have been conducted on AI in breast radiology. The most-cited article was published in the journal Radiology in 1993. This study highlights influential articles and topics on AI in breast radiology.
Affiliation(s)
- Sneha Singh: Department of Radiology, Royal College of Surgeons in Ireland, Dublin, Ireland; Beaumont Breast Centre, Beaumont Hospital, Dublin, Ireland
- Nuala A Healy: Department of Radiology, Royal College of Surgeons in Ireland, Dublin, Ireland; Beaumont Breast Centre, Beaumont Hospital, Dublin, Ireland; Department of Radiology, University of Cambridge, Cambridge, United Kingdom
6. Wei TR, Hell M, Vierra A, Pang R, Kang Y, Patel M, Yan Y. Breast Cancer Detection on Dual-View Sonography via Data-Centric Deep Learning. IEEE Open J Eng Med Biol 2024;6:100-106. PMID: 39564554. PMCID: PMC11573408. DOI: 10.1109/ojemb.2024.3454958.
Abstract
Goal: This study aims to enhance AI-assisted breast cancer diagnosis through dual-view sonography using a data-centric approach. Methods: We customize a DenseNet-based model on our exclusive dual-view breast ultrasound dataset to enhance the model's ability to differentiate between malignant and benign masses. Various assembly strategies are designed to integrate the dual views into the model input, in contrast with the use of single views alone, with the goal of maximizing performance. Subsequently, we compare the model against the radiologist and quantify the improvement in key performance metrics. We further assess how the radiologist's diagnostic accuracy is enhanced with the assistance of the model. Results: Our experiments consistently found that optimal outcomes were achieved by a channel-wise stacking approach incorporating both views, with one duplicated as the third channel. This configuration yielded remarkable model performance, with an area under the receiver operating characteristic curve (AUC) of 0.9754, specificity of 0.96, and sensitivity of 0.9263, outperforming the radiologist by 50% in specificity. With the model's guidance, the radiologist's performance improved across key metrics: accuracy by 17%, precision by 26%, and specificity by 29%. Conclusions: Our customized model, with an optimal configuration for dual-view image input, surpassed both radiologists and existing model results in the literature. Integrating the model as a standalone tool or as an assistive aid for radiologists can greatly enhance specificity and reduce false positives, thereby minimizing unnecessary biopsies and alleviating radiologists' workload.
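The winning input configuration, two grayscale views stacked channel-wise with one view duplicated as the third channel, is easy to reproduce. The sketch below (PyTorch) builds such a 3-channel tensor for a standard ImageNet-style backbone; which of the two views is duplicated is an assumption here, as the abstract does not say.

```python
import torch

def stack_dual_view(view_a: torch.Tensor, view_b: torch.Tensor) -> torch.Tensor:
    """Stack two single-channel B-mode views into a 3-channel input,
    duplicating the first view as the third channel (assumed choice)."""
    return torch.stack([view_a, view_b, view_a], dim=0)   # (3, H, W)

transverse   = torch.rand(224, 224)    # placeholder grayscale views
longitudinal = torch.rand(224, 224)
x = stack_dual_view(transverse, longitudinal).unsqueeze(0)  # (1, 3, 224, 224)
# x now matches the input shape expected by a DenseNet-style stem.
print(x.shape)
```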
Affiliation(s)
- Aren Vierra, Ran Pang, Young Kang, Mahesh Patel: Santa Clara Valley Medical Center, San Jose, CA 95128, USA
- Yuling Yan: Santa Clara University, Santa Clara, CA 95053, USA
7. Arslan M, Asim M, Sattar H, Khan A, Thoppil Ali F, Zehra M, Talluri K. Role of Radiology in the Diagnosis and Treatment of Breast Cancer in Women: A Comprehensive Review. Cureus 2024;16:e70097. PMID: 39449897. PMCID: PMC11500669. DOI: 10.7759/cureus.70097.
Abstract
Breast cancer remains a leading cause of morbidity and mortality among women worldwide. Early detection and precise diagnosis are critical for effective treatment and improved patient outcomes. This review explores the evolving role of radiology in the diagnosis and treatment of breast cancer, highlighting advancements in imaging technologies and the integration of artificial intelligence (AI). Traditional imaging modalities such as mammography, ultrasound, and magnetic resonance imaging have been the cornerstone of breast cancer diagnostics, with each modality offering unique advantages. The advent of radiomics, which involves extracting quantitative data from medical images, has further augmented the diagnostic capabilities of these modalities. AI, particularly deep learning algorithms, has shown potential in improving diagnostic accuracy and reducing observer variability across imaging modalities. AI-driven tools are increasingly being integrated into clinical workflows to assist in image interpretation, lesion classification, and treatment planning. Additionally, radiology plays a crucial role in guiding treatment decisions, particularly in the context of image-guided radiotherapy and monitoring response to neoadjuvant chemotherapy. The review also discusses the emerging field of theranostics, where diagnostic imaging is combined with therapeutic interventions to provide personalized cancer care. Despite these advancements, challenges such as the need for large annotated datasets and the integration of AI into clinical practice remain. The review concludes that while the role of radiology in breast cancer management is rapidly evolving, further research is required to fully realize the potential of these technologies in improving patient outcomes.
Affiliation(s)
- Muhammad Asim: Emergency Medicine, Royal Free Hospital, London, GBR
- Hina Sattar: Medicine, Dow University of Health Sciences, Karachi, PAK
- Anita Khan: Medicine, Khyber Girls Medical College, Peshawar, PAK
- Muneeza Zehra: Internal Medicine, Karachi Medical and Dental College, Karachi, PAK
- Keerthi Talluri: General Medicine, GSL (Ganni Subba Lakshmi garu) Medical College, Rajahmundry, IND
8. Liang F, Song Y, Huang X, Ren T, Ji Q, Guo Y, Li X, Sui Y, Xie X, Han L, Li Y, Ren Y, Xu Z. Assessing breast disease with deep learning model using bimodal bi-view ultrasound images and clinical information. iScience 2024;27:110279. PMID: 39045104. PMCID: PMC11263717. DOI: 10.1016/j.isci.2024.110279.
Abstract
Breast cancer is the second leading cause of carcinoma-linked death in women. We developed a multi-modal deep-learning model (BreNet) to differentiate breast cancer from benign lesions. BreNet was constructed and trained on 10,108 images from one center and tested on 3,762 images from two centers in three steps. The diagnostic ability of BreNet was first compared with that of six radiologists; a BreNet-aided scheme was then constructed to improve the radiologists' diagnostic ability; and the radiologists' real-world diagnostic scheme was finally compared with the BreNet-aided scheme. The diagnostic performance of BreNet was superior to that of the radiologists (area under the curve [AUC]: 0.996 vs. 0.841). The BreNet-aided scheme increased the pooled AUC of the radiologists from 0.841 to 0.934 for reviewing images, and from 0.892 to 0.934 in the real-world test. The use of BreNet significantly enhances the diagnostic ability of radiologists in the detection of breast cancer.
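BreNet's architecture is not spelled out in the abstract; as a generic sketch of the bimodal idea, combining image-derived features with tabular clinical information, the PyTorch snippet below concatenates the two feature streams before a small classification head. All layer sizes are illustrative assumptions, not BreNet's published design.

```python
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    """Minimal late-fusion head: image embeddings + clinical features.
    Dimensions are assumptions, not BreNet's published design."""
    def __init__(self, img_dim=512, clin_dim=8, hidden=64):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(img_dim + clin_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),          # benign vs. malignant logits
        )

    def forward(self, img_feat, clin_feat):
        return self.head(torch.cat([img_feat, clin_feat], dim=1))

model = LateFusion()
logits = model(torch.randn(4, 512), torch.randn(4, 8))  # batch of 4
print(logits.shape)   # torch.Size([4, 2])
```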
Affiliation(s)
- Fengping Liang, Yihua Song, Tong Ren, Qiao Ji, Yanan Guo, Xiang Li, Yajuan Sui, Zuofeng Xu: Department of Medical Ultrasound, The Seventh Affiliated Hospital, Sun Yat-sen University, 628 Zhenyuan Road, Shenzhen, China
- Xiaoping Huang: Department of Ultrasound, Dongguan Songshan Lake Tungwah Hospital, No. 1, Kefa Seventh Road, Songshan Lake Park, Dongguan, China
- Xiaohui Xie: Section of Epidemiology and Population Science, Department of Medicine, Baylor College of Medicine, Houston, TX, USA
- Lanqing Han: Center for Artificial Intelligence in Medicine, Research Institute of Tsinghua, Pearl River Delta, Guangzhou, China
- Yuanqing Li: School of Automation Science and Engineering, South China University of Technology, Guangzhou, China; Research Center for Brain-Computer Interface, Pazhou Lab, Guangzhou, China
- Yong Ren: Artificial Intelligence and Digital Economy Laboratory (Guangzhou), Pazhou Lab, No. 70 Yuean Road, Haizhu District, Guangzhou, China; Shensi Lab, Shenzhen Institute for Advanced Study, UESTC, Shenzhen, China; The Seventh Affiliated Hospital of Sun Yat-sen University, Shenzhen, China
9. Al-Karawi D, Al-Zaidi S, Helael KA, Obeidat N, Mouhsen AM, Ajam T, Alshalabi BA, Salman M, Ahmed MH. A Review of Artificial Intelligence in Breast Imaging. Tomography 2024;10:705-726. PMID: 38787015. PMCID: PMC11125819. DOI: 10.3390/tomography10050055.
Abstract
As artificial intelligence (AI) techniques become increasingly dominant, their prospective applications have extended to various medical fields, including domains such as in vitro diagnosis, intelligent rehabilitation, medical imaging, and prognosis. Breast cancer is a common malignancy that critically affects women's physical and mental health. Early breast cancer screening, through mammography, ultrasound, or magnetic resonance imaging (MRI), can substantially improve the prognosis for breast cancer patients. AI applications have shown excellent performance in various image recognition tasks, and their use in breast cancer screening has been explored in numerous studies. This paper introduces relevant AI techniques and their applications in the field of medical imaging of the breast (mammography and ultrasound), specifically in terms of identifying, segmenting, and classifying lesions; assessing breast cancer risk; and improving image quality. Focusing on medical imaging for breast cancer, this paper also reviews related challenges and prospects for AI.
Affiliation(s)
- Dhurgham Al-Karawi, Shakir Al-Zaidi: Medical Analytica Ltd., 26a Castle Park Industrial Park, Flint CH6 5XA, UK
- Khaled Ahmad Helael: Royal Medical Services, King Hussein Medical Hospital, King Abdullah II Ben Al-Hussein Street, Amman 11855, Jordan
- Naser Obeidat, Abdulmajeed Mounzer Mouhsen, Tarek Ajam, Bashar A. Alshalabi, Mohamed Salman: Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Mohammed H. Ahmed: School of Computing, Coventry University, 3 Gulson Road, Coventry CV1 5FB, UK
10. Pan H, Shi C, Zhang Y, Zhong Z. Artificial intelligence-based classification of breast nodules: a quantitative morphological analysis of ultrasound images. Quant Imaging Med Surg 2024;14:3381-3392. PMID: 38720871. PMCID: PMC11074741. DOI: 10.21037/qims-23-1652.
Abstract
Background Accurate classification of breast nodules into benign and malignant types is critical for the successful treatment of breast cancer. Traditional methods rely on subjective interpretation, which can potentially lead to diagnostic errors. Artificial intelligence (AI)-based methods using quantitative morphological analysis of ultrasound images have been explored for the automated and reliable classification of breast cancer. This study aimed to investigate the effectiveness of AI-based approaches for improving diagnostic accuracy and patient outcomes. Methods In this study, a quantitative analysis approach was adopted, focusing on critical features for evaluation, including degree of boundary regularity, clarity of boundaries, echo intensity, and uniformity of echoes. Furthermore, the classification results were assessed using five machine learning methods: logistic regression (LR), support vector machine (SVM), decision tree (DT), naive Bayes, and K-nearest neighbor (KNN). Based on these assessments, a multifeature combined prediction model was established. Results We evaluated the performance of our classification model by quantifying various features of the ultrasound images and using the area under the receiver operating characteristic (ROC) curve (AUC). The moment of inertia achieved an AUC value of 0.793, while the variance and mean of breast nodule areas achieved AUC values of 0.725 and 0.772, respectively. The convexity and concavity achieved AUC values of 0.988 and 0.987, respectively. Additionally, we conducted a joint analysis of multiple features after normalization, achieving a recall value of 0.98, which surpasses most medical evaluation indices in current use. To ensure experimental rigor, we conducted cross-validation experiments, which yielded no significant differences among the classifiers under 5-, 8-, and 10-fold cross-validation (P>0.05). Conclusions Quantitative analysis can accurately differentiate between benign and malignant breast nodules.
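Convexity, the strongest single feature reported (AUC 0.988), is commonly computed as the ratio of the lesion area to the area of its convex hull; the paper's exact definition is not given, so that formula is an assumption in the sketch below (Python with OpenCV, on a toy mask).

```python
import cv2
import numpy as np

def convexity(mask: np.ndarray) -> float:
    """Lesion area divided by convex-hull area (assumed definition);
    values near 1 indicate a smooth, convex margin."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)          # largest blob
    hull = cv2.convexHull(c)
    return cv2.contourArea(c) / max(cv2.contourArea(hull), 1e-6)

# Toy lesion mask: an ellipse with a notch cut out, so convexity < 1.
mask = np.zeros((128, 128), np.uint8)
cv2.ellipse(mask, (64, 64), (40, 25), 0, 0, 360, 1, -1)
cv2.rectangle(mask, (60, 30), (70, 64), 0, -1)
print(convexity(mask))
```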
Affiliation(s)
- Hao Pan, Zijian Zhong: School of Electronic Information, Xijing University, Xi’an, China
- Changbei Shi: Department of Nuclear Medicine, Shaanxi Provincial Cancer Hospital, Xi’an, China
- Yuxing Zhang: School of Electronic Information, Xijing University, Xi’an, China; School of Medicine, Xijing University, Xi’an, China
11. Seth I, Lim B, Joseph K, Gracias D, Xie Y, Ross RJ, Rozen WM. Use of artificial intelligence in breast surgery: a narrative review. Gland Surg 2024;13:395-411. PMID: 38601286. PMCID: PMC11002485. DOI: 10.21037/gs-23-414.
Abstract
Background and Objective We have witnessed tremendous advances in artificial intelligence (AI) technologies. Breast surgery, a subspecialty of general surgery, has notably benefited from AI technologies. This review aims to evaluate how AI has been integrated into breast surgery practices, to assess its effectiveness in improving surgical outcomes and operational efficiency, and to identify potential areas for future research and application. Methods Two authors independently conducted a comprehensive search of PubMed, Google Scholar, EMBASE, and Cochrane CENTRAL databases from January 1, 1950, to September 4, 2023, employing keywords pertinent to AI in conjunction with breast surgery or cancer. The search focused on English-language publications, with relevance determined through meticulous screening of titles, abstracts, and full texts, followed by an additional review of references within these articles. The review covered a range of studies illustrating the applications of AI in breast surgery, from lesion diagnosis to postoperative follow-up. Publications focusing specifically on breast reconstruction were excluded. Key Content and Findings AI models have preoperative, intraoperative, and postoperative applications in the field of breast surgery. Using breast imaging scans and patient data, AI models have been designed to predict the risk of breast cancer and determine the need for breast cancer surgery. In addition, using breast imaging scans and histopathological slides, models were used for detecting, classifying, segmenting, grading, and staging breast tumors. Preoperative applications included patient education and the display of expected aesthetic outcomes. Models were also designed to provide intraoperative assistance for precise tumor resection and margin status assessment. AI was also used to predict postoperative complications, survival, and cancer recurrence. Conclusions Further research is required to move AI models from the experimental stage to actual implementation in healthcare. With the rapid evolution of AI, further applications are expected in the coming years, including the direct performance of breast surgery. Breast surgeons should stay updated on advances in AI applications in breast surgery to provide the best care for their patients.
Affiliation(s)
- Ishith Seth, Bryan Lim, Richard J. Ross, Warren M. Rozen: Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, Australia; Central Clinical School at Monash University, The Alfred Centre, Melbourne, Victoria, Australia
- Konrad Joseph: Department of Surgery, Port Macquarie Base Hospital, New South Wales, Australia
- Dylan Gracias: Department of Surgery, Townsville Hospital, Queensland, Australia
- Yi Xie: Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, Australia
12. Togawa R, Pfob A, Büsch C, Fastner S, Gomez C, Goncalo M, Hennigs A, Killinger K, Nees J, Riedel F, Schäfgen B, Stieber A, Tozaki M, Heil J, Barr R, Golatta M. Intra- and Interobserver Reliability of Shear Wave Elastography in Breast Cancer Diagnosis. J Ultrasound Med 2024;43:109-114. PMID: 37772458. DOI: 10.1002/jum.16344.
Abstract
OBJECTIVES Shear wave elastography (SWE) is increasingly used in breast cancer diagnostics. However, large, prospective, multicenter data evaluating the reliability of SWE are missing. We evaluated the intra- and interobserver reliability of SWE in patients with breast lesions categorized as BI-RADS 3 or 4. METHODS We used data from 1288 women at 12 institutions in 7 countries with breast lesions categorized as BI-RADS 3 or 4 who underwent conventional B-mode ultrasound and SWE. In total, 1243 (96.5%) women had three repeated conventional B-mode ultrasound and SWE measurements performed by a board-certified senior physician, and 375 of 1288 (29.1%) women received an additional ultrasound examination with B-mode and SWE by a second physician. Intraclass correlation coefficients (ICCs) were calculated to examine intra- and interobserver reliability. RESULTS Intraobserver reliability was excellent (ICC >0.9), while interobserver reliability was moderate (ICC of 0.7). There were no clinically significant differences in intraobserver reliability when SWE was performed in lesions categorized as BI-RADS 3 or 4, or in histopathologically benign or malignant lesions. CONCLUSION The reliability of additional SWE was evaluated in a study cohort of 1288 breast lesions categorized as BI-RADS 3 and 4. SWE shows excellent intraobserver reliability and moderate interobserver reliability in the evaluation of solid breast masses.
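ICCs of the kind reported here can be computed from a long-format table of repeated measurements; the sketch below uses the third-party pingouin package (an assumption, not the study's statistical software) on hypothetical stiffness readings, where the three repeated readings per lesion play the role of "raters" for an intraobserver ICC.

```python
import pandas as pd
import pingouin as pg   # assumed tooling; pip install pingouin

# Hypothetical repeated SWE stiffness readings (kPa): three readings
# of each of three lesions by the same observer.
df = pd.DataFrame({
    "lesion":  [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "reading": ["r1", "r2", "r3"] * 3,
    "kpa":     [34.1, 35.0, 33.8, 88.2, 86.5, 90.1, 12.4, 12.9, 12.1],
})

icc = pg.intraclass_corr(data=df, targets="lesion",
                         raters="reading", ratings="kpa")
print(icc[["Type", "ICC"]])   # ICC2/ICC3 rows cover the two-way models
```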
Affiliation(s)
- Riku Togawa, André Pfob, Kristina Killinger, Juliane Nees, Fabian Riedel, Benedikt Schäfgen: University Breast Unit, Department of Obstetrics and Gynecology, Heidelberg University Hospital, Heidelberg, Germany
- Christopher Büsch: Institute of Medical Biometry (IMBI), University of Heidelberg, Heidelberg, Germany
- Sarah Fastner, André Hennigs: Breast Unit, Sankt Elisabeth Hospital, Heidelberg, Germany
- Manuela Goncalo: Department of Radiology, University of Coimbra, Coimbra, Portugal
- Anne Stieber: Department of Diagnostic and Interventional Radiology, Heidelberg University Hospital, Heidelberg, Germany
- Jörg Heil, Michael Golatta: University Breast Unit, Department of Obstetrics and Gynecology, Heidelberg University Hospital, Heidelberg, Germany; Breast Unit, Sankt Elisabeth Hospital, Heidelberg, Germany
- Richard Barr: Department of Radiology, Northeast Ohio Medical University, Ravenna, USA
13. Irmici G, Cè M, Della Pepa G, D'Ascoli E, De Berardinis C, Giambersio E, Rabiolo L, La Rocca L, Carriero S, Depretto C, Scaperrotta G, Cellina M. Exploring the Potential of Artificial Intelligence in Breast Ultrasound. Crit Rev Oncog 2024;29:15-28. PMID: 38505878. DOI: 10.1615/critrevoncog.2023048873.
Abstract
Breast ultrasound has emerged as a valuable imaging modality in the detection and characterization of breast lesions, particularly in women with dense breast tissue or contraindications for mammography. Within this framework, artificial intelligence (AI) has garnered significant attention for its potential to improve diagnostic accuracy in breast ultrasound and revolutionize the workflow. This review article aims to comprehensively explore the current state of research and development in harnessing AI's capabilities for breast ultrasound. We delve into various AI techniques, including machine learning and deep learning, and their applications in automating lesion detection, segmentation, and classification tasks. Furthermore, the review addresses the challenges and hurdles faced in implementing AI systems in breast ultrasound diagnostics, such as data privacy, interpretability, and regulatory approval. Ethical considerations pertaining to the integration of AI into clinical practice are also discussed, emphasizing the importance of maintaining a patient-centered approach. The integration of AI into breast ultrasound holds great promise for improving diagnostic accuracy, enhancing efficiency, and ultimately advancing patient care. By examining the current state of research and identifying future opportunities, this review aims to contribute to the understanding and utilization of AI in breast ultrasound and to encourage further interdisciplinary collaboration to maximize its potential in clinical practice.
Affiliation(s)
- Giovanni Irmici, Maurizio Cè, Gianmarco Della Pepa, Elisa D'Ascoli, Claudia De Berardinis, Emilia Giambersio, Serena Carriero: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Lidia Rabiolo: Dipartimento di Biomedicina, Neuroscienze e Diagnostica Avanzata, Policlinico Università di Palermo, Palermo, Italy
- Ludovica La Rocca: Postgraduation School in Radiodiagnostics, Università degli Studi di Napoli
- Catherine Depretto: Breast Radiology Unit, Fondazione IRCCS, Istituto Nazionale Tumori, Milano, Italy
- Michaela Cellina: Radiology Department, Fatebenefratelli Hospital, ASST Fatebenefratelli Sacco, Piazza Principessa Clotilde 3, 20121 Milan, Italy
14. Rawlani P, Ghosh NK, Kumar A. Role of artificial intelligence in the characterization of indeterminate pancreatic head mass and its usefulness in preoperative diagnosis. Artif Intell Gastroenterol 2023;4:48-63. DOI: 10.35712/aig.v4.i3.48.
Abstract
Artificial intelligence (AI) has been used in various fields of day-to-day life, and its role in medicine is immense. The understanding of oncology has improved with the introduction of AI, which helps in diagnosis, treatment planning, management, prognosis, and follow-up. It also helps to identify high-risk groups who can be subjected to timely screening for early detection of malignant conditions. This is especially important in pancreatic cancer, as it is one of the major causes of cancer-related deaths worldwide and there are no specific early features (clinical and radiological) for diagnosis. Even with improvements in imaging modalities (computed tomography, magnetic resonance imaging, endoscopic ultrasound), clinicians are often challenged by lesions that are difficult to diagnose with human competence alone. AI has been used in various other branches of medicine to differentiate such indeterminate lesions, including those of the thyroid gland, breast, lungs, liver, adrenal gland, and kidney. In the case of pancreatic cancer, the role of AI has been explored, and exploration is still ongoing. This review article focuses on how AI can be used to diagnose pancreatic cancer early or differentiate it from benign pancreatic lesions, so that management can be planned at an earlier stage.
Affiliation(s)
- Palash Rawlani, Nalini Kanta Ghosh, Ashok Kumar: Department of Surgical Gastroenterology, Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow 226014, Uttar Pradesh, India
15. Qiu S, Zhuang S, Li B, Wang J, Zhuang Z. Prospective assessment of breast lesions AI classification model based on ultrasound dynamic videos and ACR BI-RADS characteristics. Front Oncol 2023;13:1274557. PMID: 38023255. PMCID: PMC10656688. DOI: 10.3389/fonc.2023.1274557.
Abstract
Introduction AI-assisted ultrasound diagnosis is considered a fast and accurate new method that can reduce the subjective and experience-dependent nature of handheld ultrasound. To better meet clinical diagnostic needs, we first proposed an AI classification model for breast lesions based on ultrasound dynamic videos and ACR BI-RADS characteristics (hereafter, Auto BI-RADS). In this study, we prospectively verify its performance. Methods Model development was based on retrospective data comprising 480 ultrasound dynamic videos, equivalent to 18,122 static images, of pathologically proven breast lesions from 420 patients. A total of 292 ultrasound dynamic videos of breast lesions from internal and external hospitals were prospectively tested by Auto BI-RADS. The performance of Auto BI-RADS was compared with both experienced and junior radiologists using the DeLong method, the Kappa test, and the McNemar test. Results Auto BI-RADS achieved an accuracy, sensitivity, and specificity of 0.87, 0.93, and 0.81, respectively. The consistency of the BI-RADS category between Auto BI-RADS and the experienced group (Kappa: 0.82) was higher than that with the junior group (Kappa: 0.60). The consistency rates between Auto BI-RADS and the experienced group were higher than those between Auto BI-RADS and the junior group for shape (93% vs. 80%; P=.01), orientation (90% vs. 84%; P=.02), margin (84% vs. 71%; P=.01), echo pattern (69% vs. 56%; P=.001), and posterior features (76% vs. 71%; P=.0046), while the difference for calcification was not statistically significant. Discussion In this study, we aimed to prospectively verify a novel AI tool based on ultrasound dynamic videos and ACR BI-RADS characteristics. The prospective assessment suggested that the AI tool not only meets clinical needs but also reaches the diagnostic efficiency of experienced radiologists.
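Agreement figures such as the Kappa of 0.82 between Auto BI-RADS and experienced readers are chance-corrected; the sketch below computes Cohen's Kappa with scikit-learn on hypothetical BI-RADS category assignments (the study's per-lesion ratings are not public).

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical BI-RADS categories assigned to the same ten lesions
# by the AI model and by an experienced reader.
auto_birads = ["4a", "4b", "3", "5", "4c", "4a", "3", "4b", "5", "4a"]
expert      = ["4a", "4b", "3", "5", "4b", "4a", "3", "4b", "5", "4c"]

# Kappa = (observed agreement - chance agreement) / (1 - chance agreement)
print(cohen_kappa_score(auto_birads, expert))
```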
Affiliation(s)
- Shunmin Qiu: Department of Ultrasound, First Affiliated Hospital of Shantou University Medical College, Shantou, Guangdong, China
- Shuxin Zhuang: School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, Guangdong, China
- Bin Li: Product Development Department, Shantou Institute of Ultrasonic Instruments, Shantou, Guangdong, China
- Jinhong Wang: Department of Ultrasound, Shantou Chaonan Minsheng Hospital, Shantou, Guangdong, China
- Zhemin Zhuang: Engineering College, Shantou University, Shantou, Guangdong, China
16. Wanderley MC, Soares CMA, Morais MMM, Cruz RM, Lima IRM, Chojniak R, Bitencourt AGV. Application of artificial intelligence in predicting malignancy risk in breast masses on ultrasound. Radiol Bras 2023;56:229-234. PMID: 38204896. PMCID: PMC10775818. DOI: 10.1590/0100-3984.2023.0034.
Abstract
Objective To evaluate the results obtained with an artificial intelligence-based software for predicting the risk of malignancy in breast masses from ultrasound images. Materials and Methods This was a retrospective, single-center study evaluating 555 breast masses submitted to percutaneous biopsy at a cancer referral center. Ultrasonographic findings were classified in accordance with the BI-RADS lexicon. The images were analyzed by using Koios DS Breast software and classified as benign, probably benign, low to intermediate suspicion, high suspicion, or probably malignant. The histological classification was considered the reference standard. Results The mean age of the patients was 51 years, and the mean mass size was 16 mm. The radiologist evaluation had a sensitivity and specificity of 99.1% and 34.0%, respectively, compared with 98.2% and 39.0%, respectively, for the software evaluation. The positive predictive value for malignancy for the BI-RADS categories was similar between the radiologist and software evaluations. Two false-negative results were identified in the radiologist evaluation, the masses in question being classified as suspicious by the software, whereas four false-negative results were identified in the software evaluation, the masses in question being classified as suspicious by the radiologist. Conclusion In our sample, the performance of artificial intelligence-based software was comparable to that of a radiologist.
Affiliation(s)
- Rubens Chojniak: Department of Imaging, A.C.Camargo Cancer Center, São Paulo, SP, Brazil
17. Yu KL, Tseng YS, Yang HC, Liu CJ, Kuo PC, Lee MR, Huang CT, Kuo LC, Wang JY, Ho CC, Shih JY, Yu CJ. Deep learning with test-time augmentation for radial endobronchial ultrasound image differentiation: a multicentre verification study. BMJ Open Respir Res 2023;10:e001602. PMID: 37532473. PMCID: PMC10401203. DOI: 10.1136/bmjresp-2022-001602.
Abstract
PURPOSE Despite the importance of radial endobronchial ultrasound (rEBUS) in transbronchial biopsy, researchers have yet to apply artificial intelligence to the analysis of rEBUS images. MATERIALS AND METHODS This study developed a convolutional neural network (CNN) to differentiate between malignant and benign tumours in rEBUS images. The study retrospectively collected rEBUS images from medical centres in Taiwan: 769 images from National Taiwan University Hospital Hsin-Chu Branch, Hsinchu Hospital, for model training (615 images) and internal validation (154 images), as well as 300 images from National Taiwan University Hospital (NTUH-TPE) and 92 images from National Taiwan University Hospital Hsin-Chu Branch, Biomedical Park Hospital (NTUH-BIO), for external validation. Further assessments of the model were performed using image augmentation in the training phase and test-time augmentation (TTA). RESULTS Using the internal validation dataset, the results were as follows: AUC 0.88 (95% CI 0.83 to 0.92), sensitivity 0.80 (95% CI 0.73 to 0.88), specificity 0.75 (95% CI 0.66 to 0.83). Using the NTUH-TPE external validation dataset, the results were as follows: AUC 0.76 (95% CI 0.71 to 0.80), sensitivity 0.58 (95% CI 0.50 to 0.65), specificity 0.92 (95% CI 0.88 to 0.97). Using the NTUH-BIO external validation dataset, the results were as follows: AUC 0.72 (95% CI 0.64 to 0.82), sensitivity 0.71 (95% CI 0.55 to 0.86), specificity 0.76 (95% CI 0.64 to 0.87). After fine-tuning, the AUC values for the external validation cohorts were as follows: NTUH-TPE 0.78 and NTUH-BIO 0.82. Our findings also demonstrated the feasibility of the model in differentiating between lung cancer subtypes, as indicated by the following AUC values: adenocarcinoma 0.70 (95% CI 0.64 to 0.76), squamous cell carcinoma 0.64 (95% CI 0.54 to 0.74), and small cell lung cancer 0.52 (95% CI 0.32 to 0.72). CONCLUSIONS Our results demonstrate the feasibility of the proposed CNN-based algorithm in differentiating between malignant and benign lesions in rEBUS images.
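Test-time augmentation simply averages a model's predictions over several transformed copies of the same image. The sketch below (PyTorch; the flip-based augmentation set and the dummy model are illustrative assumptions, not the paper's exact setup) shows the pattern.

```python
import torch

@torch.no_grad()
def predict_with_tta(model, image: torch.Tensor) -> torch.Tensor:
    """Average softmax outputs over the original image and two flips."""
    views = [image,
             torch.flip(image, dims=[-1]),   # horizontal flip
             torch.flip(image, dims=[-2])]   # vertical flip
    probs = [model(v.unsqueeze(0)).softmax(dim=1) for v in views]
    return torch.stack(probs).mean(dim=0)    # averaged class probabilities

# Dummy stand-in classifier, just to make the sketch runnable.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(224 * 224, 2))
image = torch.rand(1, 224, 224)              # one grayscale rEBUS frame
print(predict_with_tta(model, image))
```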
Affiliation(s)
- Kai-Lun Yu: Department of Internal Medicine, National Taiwan University Hospital Hsin-Chu Branch, Hsinchu, Taiwan; Graduate Institute of Clinical Medicine, National Taiwan University College of Medicine, Taipei, Taiwan
- Yi-Shiuan Tseng, Po-Chih Kuo: Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan
- Han-Ching Yang, Chia-Jung Liu: Department of Internal Medicine, National Taiwan University Hospital Hsin-Chu Branch, Hsinchu, Taiwan
- Meng-Rui Lee, Chun-Ta Huang, Lu-Cheng Kuo, Jann-Yuan Wang, Chao-Chi Ho: Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
- Jin-Yuan Shih: Graduate Institute of Clinical Medicine, National Taiwan University College of Medicine, Taipei, Taiwan; Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
- Chong-Jen Yu: Department of Internal Medicine, National Taiwan University Hospital Hsin-Chu Branch, Hsinchu, Taiwan; Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
Collapse
|
18
|
Alsharif WM. The utilization of artificial intelligence applications to improve breast cancer detection and prognosis. Saudi Med J 2023; 44:119-127. [PMID: 36773967 PMCID: PMC9987701 DOI: 10.15537/smj.2023.44.2.20220611] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/13/2023] Open
Abstract
Breast imaging faces challenges with the current increase in medical imaging requests and lesions that breast screening programs can miss. Solutions to these challenges are being sought through the recent advancement and adoption of artificial intelligence (AI)-based applications to enhance workflow efficiency as well as patient-healthcare outcomes. Artificial intelligence tools have been proposed and used to analyze different modes of breast imaging, in most of the published studies, mainly for the detection and classification of breast lesions, breast lesion segmentation, breast density evaluation, and breast cancer risk assessment. This article reviews the background of conventional computer-aided detection systems and AI, along with AI-based applications in breast medical imaging for the identification, segmentation, and categorization of lesions, breast density evaluation, and cancer risk evaluation. In addition, the challenges and limitations of AI-based applications in breast imaging are discussed.
Collapse
Affiliation(s)
- Walaa M. Alsharif
- From the Diagnostic Radiology Technology Department, College of Applied Medical Sciences, Taibah University, Al Madinah Al Munawwarah; and from the Society of Artificial Intelligence in Healthcare, Riyadh, Kingdom of Saudi Arabia.
| |
Collapse
|
19
|
Baek J, O’Connell AM, Parker KJ. Improving breast cancer diagnosis by incorporating raw ultrasound parameters into machine learning. MACHINE LEARNING: SCIENCE AND TECHNOLOGY 2022; 3:045013. [PMID: 36698865 PMCID: PMC9855672 DOI: 10.1088/2632-2153/ac9bcc] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Revised: 10/15/2022] [Accepted: 10/19/2022] [Indexed: 01/28/2023] Open
Abstract
The improved diagnostic accuracy of ultrasound breast examinations remains an important goal. In this study, we propose a biophysical feature-based machine learning method for breast cancer detection to improve the performance beyond a benchmark deep learning algorithm and, furthermore, to provide a color overlay visual map of the probability of malignancy within a lesion. This overall framework is termed disease-specific imaging. Previously, 150 breast lesions were segmented and classified utilizing a modified fully convolutional network and a modified GoogLeNet, respectively. In this study, multiparametric analysis was performed within the contoured lesions. Features were extracted from ultrasound radiofrequency, envelope, and log-compressed data based on biophysical and morphological models. The support vector machine with a Gaussian kernel constructed a nonlinear hyperplane, and we calculated the distance between the hyperplane and each feature's data point in multiparametric space. The distance can quantitatively assess a lesion and suggest the probability of malignancy, which is color-coded and overlaid onto B-mode images. Training and evaluation were performed on in vivo patient data. For the most common types and sizes of breast lesions in our study, the overall classification accuracy exceeded 98.0% and the area under the receiver operating characteristic curve exceeded 0.98, which is more precise than the performance of radiologists and a deep learning system. Further, the correlation between the probability and the Breast Imaging Reporting and Data System enables a quantitative guideline to predict breast cancer. Therefore, we anticipate that the proposed framework can help radiologists achieve more accurate and convenient breast cancer classification and detection.
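The overlay idea above can be sketched with a Gaussian-kernel support vector machine: the signed distance to the decision boundary is squashed to [0, 1] and color-coded. Everything below (synthetic features, logistic squashing) is an assumption standing in for the paper's multiparametric pipeline.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins: X holds per-lesion biophysical/morphological features,
# y holds biopsy labels (0 = benign, 1 = malignant).
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
clf.fit(X, y)

# Signed distance to the RBF decision boundary; larger values lean malignant.
dist = clf.decision_function(X)
malignancy_score = 1.0 / (1.0 + np.exp(-dist))  # squash for a color overlay
```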
Collapse
Affiliation(s)
- Jihye Baek
- Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, United States of America
| | - Avice M O’Connell
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, United States of America
| | - Kevin J Parker
- Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, United States of America
| |
Collapse
|
20
|
Madani M, Behzadi MM, Nabavi S. The Role of Deep Learning in Advancing Breast Cancer Detection Using Different Imaging Modalities: A Systematic Review. Cancers (Basel) 2022; 14:5334. [PMID: 36358753 PMCID: PMC9655692 DOI: 10.3390/cancers14215334] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2022] [Revised: 10/23/2022] [Accepted: 10/25/2022] [Indexed: 12/02/2022] Open
Abstract
Breast cancer is among the most common and fatal diseases for women, and no permanent treatment has been discovered. Thus, early detection is a crucial step in controlling and curing breast cancer and can save the lives of millions of women. For example, in 2020, more than 65% of breast cancer patients were diagnosed at an early stage of cancer, all of whom survived. Although early detection is the most effective approach for cancer treatment, breast cancer screening conducted by radiologists is very expensive and time-consuming. More importantly, conventional methods of analyzing breast cancer images suffer from high false-detection rates. Different breast cancer imaging modalities are used to extract and analyze the key features affecting the diagnosis and treatment of breast cancer. These imaging modalities can be divided into subgroups such as mammograms, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. Radiologists or pathologists analyze images produced by these methods manually, which increases the risk of wrong decisions in cancer detection. Thus, new automatic methods are required to analyze all kinds of breast screening images and assist radiologists in interpreting them. Recently, artificial intelligence (AI) has been widely utilized to automatically improve the early detection and treatment of different types of cancer, specifically breast cancer, thereby enhancing patients' chances of survival. Advances in AI algorithms, such as deep learning, and the availability of datasets obtained from various imaging modalities have opened an opportunity to surpass the limitations of current breast cancer analysis methods. In this article, we first review breast cancer imaging modalities and their strengths and limitations. Then, we explore and summarize the most recent studies that employed AI in breast cancer detection using various breast imaging modalities. In addition, we report available datasets of breast-cancer imaging modalities, which are important in developing AI-based algorithms and training deep learning models. In conclusion, this review paper aims to provide a comprehensive resource to help researchers working in breast cancer imaging analysis.
Collapse
Affiliation(s)
- Mohammad Madani
- Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
| | - Mohammad Mahdi Behzadi
- Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
| | - Sheida Nabavi
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
| |
Collapse
|
21
|
Syed AH, Khan T. Evolution of research trends in artificial intelligence for breast cancer diagnosis and prognosis over the past two decades: A bibliometric analysis. Front Oncol 2022; 12:854927. [PMID: 36267967 PMCID: PMC9578338 DOI: 10.3389/fonc.2022.854927] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Accepted: 08/30/2022] [Indexed: 01/27/2023] Open
Abstract
Objective In recent years, among the available tools, the concurrent application of Artificial Intelligence (AI) has improved the diagnostic performance of breast cancer screening. In this context, the present study provides a comprehensive overview of the evolution of AI research for breast cancer diagnosis and prognosis using bibliometric analysis. Methodology Relevant peer-reviewed research articles published from 2000 to 2021 were downloaded from the Scopus and Web of Science (WOS) databases and later quantitatively analyzed and visualized using Bibliometrix (R package). Finally, open challenge areas were identified for future research work. Results The present study revealed that the number of studies published on AI for breast cancer detection and survival prediction increased from 12 to 546 between 2000 and 2021. The United States of America (USA), the Republic of China, and India are the most productive in terms of publications in this field. Furthermore, the USA leads in total citations; however, Hungary and Holland hold the lead positions in average citations per year. Wang J is the most productive author, and Zhan J is the most relevant author in this field. Stanford University in the USA is the most relevant affiliation by number of published articles. The top 10 most relevant sources are Q1 journals, with PLOS ONE and Computers in Biology and Medicine being the leading journals in this field. The most trending topics related to our study, transfer learning and deep learning, were identified. Conclusion The present findings provide insight and research directions for policymakers and academic researchers for future collaboration and research in AI for breast cancer patients.
Collapse
Affiliation(s)
- Asif Hassan Syed
- Department of Computer Science, Faculty of Computing and Information Technology Rabigh (FCITR), King Abdulaziz University, Jeddah, Saudi Arabia
| | - Tabrej Khan
- Department of Information Systems, Faculty of Computing and Information Technology Rabigh (FCITR), King Abdulaziz University, Jeddah, Saudi Arabia
| |
Collapse
|
22
|
Mahant SS, Varma AR. Artificial Intelligence in Breast Ultrasound: The Emerging Future of Modern Medicine. Cureus 2022; 14:e28945. [PMID: 36237807 PMCID: PMC9547651 DOI: 10.7759/cureus.28945] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Accepted: 09/08/2022] [Indexed: 11/25/2022] Open
Abstract
Artificial intelligence (AI) enjoys enormous and growing popularity in today's world. AI is gaining traction in the identification of various types of images and has therefore been widely used in ultrasound of the breast. Furthermore, AI can perform quantitative evaluation, which further helps maintain diagnostic accuracy. Moreover, breast cancer is the most common cancer in women, posing a severe threat to women's health, and its early detection is strongly associated with patient prognosis. As a result, using AI in breast cancer screening and detection is highly important. This brief review article highlights the concept of AI from the perspective of breast ultrasound. It focuses on early AI, i.e., traditional machine learning, and on deep learning algorithms. The use of AI in ultrasound, as well as in mammography, magnetic resonance imaging, nuclear medicine imaging, and the classification of breast lesions, is broadly explained, along with the challenges faced in bringing AI into daily practice.
Collapse
|
23
|
Hotta T, Kurimoto N, Shiratsuki Y, Amano Y, Hamaguchi M, Tanino A, Tsubata Y, Isobe T. Deep learning-based diagnosis from endobronchial ultrasonography images of pulmonary lesions. Sci Rep 2022; 12:13710. [PMID: 35962181 PMCID: PMC9374687 DOI: 10.1038/s41598-022-17976-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2022] [Accepted: 08/03/2022] [Indexed: 11/09/2022] Open
Abstract
Endobronchial ultrasonography with a guide sheath (EBUS-GS) improves the accuracy of bronchoscopy. The possibility of differentiating benign from malignant lesions based on EBUS findings may be useful in making the correct diagnosis. We investigated whether a convolutional neural network (CNN) model could predict benign or malignant (lung cancer) lesions based on EBUS findings. This was an observational, single-center cohort study. Using medical records, patients were divided into benign and malignant groups. We acquired EBUS data from 213 participants. A total of 2,421,360 images were extracted from the learning dataset. We trained and externally validated a CNN algorithm to predict benign or malignant lung lesions. Testing was performed using 26,674 images. The dataset was interpreted by four bronchoscopists. The accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of the CNN model for distinguishing benign and malignant lesions were 83.4%, 95.3%, 53.6%, 83.8%, and 82.0%, respectively. For the four bronchoscopists, the accuracy was 68.4%, sensitivity was 80%, specificity was 39.6%, PPV was 76.8%, and NPV was 44.2%. The developed EBUS computer-aided diagnosis system is expected to interpret EBUS findings that are difficult for clinicians to judge with precision and to help differentiate between benign lesions and lung cancers.
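The figures above follow directly from confusion-matrix counts; a hypothetical helper (not from the paper) makes the arithmetic explicit.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from confusion-matrix counts."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),  # recall on malignant cases
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),  # positive predictive value
        "npv":         tn / (tn + fn),  # negative predictive value
    }
```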
Collapse
Affiliation(s)
- Takamasa Hotta
- Department of Internal Medicine, Division of Medical Oncology and Respiratory Medicine, Shimane University, 89-1 Enya-cho, Izumo, Shimane, 693-8501, Japan
| | - Noriaki Kurimoto
- Department of Internal Medicine, Division of Medical Oncology and Respiratory Medicine, Shimane University, 89-1 Enya-cho, Izumo, Shimane, 693-8501, Japan
| | - Yohei Shiratsuki
- Department of Internal Medicine, Division of Medical Oncology and Respiratory Medicine, Shimane University, 89-1 Enya-cho, Izumo, Shimane, 693-8501, Japan
| | - Yoshihiro Amano
- Department of Internal Medicine, Division of Medical Oncology and Respiratory Medicine, Shimane University, 89-1 Enya-cho, Izumo, Shimane, 693-8501, Japan
| | - Megumi Hamaguchi
- Department of Internal Medicine, Division of Medical Oncology and Respiratory Medicine, Shimane University, 89-1 Enya-cho, Izumo, Shimane, 693-8501, Japan
| | - Akari Tanino
- Department of Internal Medicine, Division of Medical Oncology and Respiratory Medicine, Shimane University, 89-1 Enya-cho, Izumo, Shimane, 693-8501, Japan
| | - Yukari Tsubata
- Department of Internal Medicine, Division of Medical Oncology and Respiratory Medicine, Shimane University, 89-1 Enya-cho, Izumo, Shimane, 693-8501, Japan.
| | - Takeshi Isobe
- Department of Internal Medicine, Division of Medical Oncology and Respiratory Medicine, Shimane University, 89-1 Enya-cho, Izumo, Shimane, 693-8501, Japan
| |
Collapse
|
24
|
Prediction of Nodal Metastasis in Lung Cancer Using Deep Learning of Endobronchial Ultrasound Images. Cancers (Basel) 2022; 14:cancers14143334. [PMID: 35884395 PMCID: PMC9321716 DOI: 10.3390/cancers14143334] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2022] [Revised: 06/25/2022] [Accepted: 07/05/2022] [Indexed: 11/16/2022] Open
Abstract
Simple Summary Endobronchial ultrasound-guided transbronchial aspiration is a minimally invasive and highly accurate modality for the diagnosis of lymph node metastasis and is useful for pre-treatment biomarker test sampling in patients with lung cancer. Endobronchial ultrasound image analysis is useful for predicting nodal metastasis; however, it can only be used as a supplemental method to tissue sampling. In recent years, deep learning-based computer-aided diagnosis using artificial intelligence technology has been introduced in research and clinical medicine. This study investigated the feasibility of computer-aided diagnosis for the prediction of nodal metastasis in lung cancer using endobronchial ultrasound images. The outcome of this study may help improve diagnostic efficiency and reduce the invasiveness of the procedure. Abstract Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) is a valid modality for nodal lung cancer staging. The sonographic features of EBUS help identify suspicious lymph nodes (LNs). To facilitate the use of this method, machine-learning-based computer-aided diagnosis (CAD) of medical imaging has been introduced in clinical practice. This study investigated the feasibility of CAD for the prediction of nodal metastasis in lung cancer using endobronchial ultrasound images. Image data of patients who underwent EBUS-TBNA were collected from video clips. Xception was used as the convolutional neural network to predict nodal metastasis of lung cancer. The prediction accuracy of nodal metastasis through deep learning (DL) was evaluated using both the five-fold cross-validation and hold-out methods. Eighty percent of the collected images were used in five-fold cross-validation, and all the images were used for the hold-out method. Ninety-one patients (166 LNs) were enrolled in this study. A total of 5255 and 6444 extracted images from the video clips were analyzed using the five-fold cross-validation and hold-out methods, respectively. The prediction of LN metastasis by CAD using EBUS images showed high diagnostic accuracy with high specificity. CAD during EBUS-TBNA may help improve diagnostic efficiency and reduce the invasiveness of the procedure.
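A minimal sketch of the five-fold cross-validation protocol described above; the `fit_fn` callable is a hypothetical stand-in for training the Xception network on each fold.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

def crossval_auc(fit_fn, X, y, n_splits=5, seed=0):
    """Per-fold AUCs under stratified k-fold cross-validation.

    `fit_fn(X_train, y_train)` must return a fitted model exposing
    `predict_proba`; X and y are numpy arrays of images and labels.
    """
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    aucs = []
    for train_idx, test_idx in skf.split(X, y):
        model = fit_fn(X[train_idx], y[train_idx])
        scores = model.predict_proba(X[test_idx])[:, 1]
        aucs.append(roc_auc_score(y[test_idx], scores))
    return np.array(aucs)  # mean/std summarize cross-validated performance
```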
Collapse
|
25
|
Pfob A, Sidey-Gibbons C, Barr RG, Duda V, Alwafai Z, Balleyguier C, Clevert DA, Fastner S, Gomez C, Goncalo M, Gruber I, Hahn M, Hennigs A, Kapetas P, Lu SC, Nees J, Ohlinger R, Riedel F, Rutten M, Schaefgen B, Schuessler M, Stieber A, Togawa R, Tozaki M, Wojcinski S, Xu C, Rauch G, Heil J, Golatta M. The importance of multi-modal imaging and clinical information for humans and AI-based algorithms to classify breast masses (INSPiRED 003): an international, multicenter analysis. Eur Radiol 2022; 32:4101-4115. [PMID: 35175381 PMCID: PMC9123064 DOI: 10.1007/s00330-021-08519-z] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2021] [Revised: 09/14/2021] [Accepted: 10/17/2021] [Indexed: 01/23/2023]
Abstract
OBJECTIVES AI-based algorithms for medical image analysis have shown performance comparable to that of human image readers. However, in practice, diagnoses are made using multiple imaging modalities alongside other data sources. We determined the importance of this multi-modal information and compared the diagnostic performance of routine breast cancer diagnosis to breast ultrasound interpretations by humans or AI-based algorithms. METHODS Patients were recruited as part of a multicenter trial (NCT02638935). The trial enrolled 1288 women undergoing routine breast cancer diagnosis (multi-modal imaging, demographic, and clinical information). Three physicians specialized in ultrasound diagnosis performed a second read of all ultrasound images. We used data from 11 of 12 study sites to develop two machine learning (ML) algorithms using unimodal information (ultrasound features generated by the ultrasound experts) to classify breast masses; these algorithms were validated on the remaining study site. The same ML algorithms were subsequently developed and validated on multi-modal information (clinical and demographic information plus ultrasound features). We assessed performance using the area under the curve (AUC). RESULTS Of 1288 breast masses, 368 (28.6%) were histopathologically malignant. In the external validation set (n = 373), the performance of the two unimodal ultrasound ML algorithms (AUC 0.83 and 0.82) was commensurate with the performance of the human ultrasound experts (AUC 0.82 to 0.84; p for all comparisons > 0.05). The multi-modal ultrasound ML algorithms performed significantly better (AUC 0.90 and 0.89) but were statistically inferior to routine breast cancer diagnosis (AUC 0.95, p for all comparisons ≤ 0.05). CONCLUSIONS The performance of humans and AI-based algorithms improves with multi-modal information. KEY POINTS • The performance of humans and AI-based algorithms improves with multi-modal information. • Multimodal AI-based algorithms do not necessarily outperform expert humans. • Unimodal AI-based algorithms do not represent optimal performance to classify breast masses.
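To make the unimodal-versus-multimodal comparison concrete, the sketch below fits the same simple classifier on ultrasound features alone and on their concatenation with clinical variables, then compares AUCs. Logistic regression and all array names are illustrative assumptions, not the study's algorithms.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def auc_for_features(X_tr, y_tr, X_te, y_te):
    """AUC of a simple classifier trained on one feature set."""
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

# Hypothetical arrays: X_us_* hold expert ultrasound features, X_cl_* hold
# clinical/demographic variables with matching rows.
# auc_uni   = auc_for_features(X_us_tr, y_tr, X_us_te, y_te)
# auc_multi = auc_for_features(np.hstack([X_us_tr, X_cl_tr]), y_tr,
#                              np.hstack([X_us_te, X_cl_te]), y_te)
```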
Collapse
Affiliation(s)
- André Pfob
- University Breast Unit, Department of Obstetrics and Gynecology, Heidelberg University Hospital, Im Neuenheimer Feld 440, 69120 Heidelberg, Germany
- MD Anderson Center for INSPiRED Cancer Care (Integrated Systems for Patient-Reported Data), The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Chris Sidey-Gibbons
- MD Anderson Center for INSPiRED Cancer Care (Integrated Systems for Patient-Reported Data), The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Department of Symptom Research, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Richard G. Barr
- Department of Radiology, Northeast Ohio Medical University, Ravenna, OH, USA
| | - Volker Duda
- Department of Gynecology and Obstetrics, University of Marburg, Marburg, Germany
| | - Zaher Alwafai
- Department of Gynecology and Obstetrics, University of Greifswald, Greifswald, Germany
| | - Corinne Balleyguier
- Department of Radiology, Institut Gustave Roussy, Villejuif Cedex, France
| | - Dirk-André Clevert
- Department of Radiology, University Hospital Munich-Grosshadern, Munich, Germany
| | - Sarah Fastner
- University Breast Unit, Department of Obstetrics and Gynecology, Heidelberg University Hospital, Im Neuenheimer Feld 440, 69120 Heidelberg, Germany
| | - Christina Gomez
- University Breast Unit, Department of Obstetrics and Gynecology, Heidelberg University Hospital, Im Neuenheimer Feld 440, 69120 Heidelberg, Germany
| | - Manuela Goncalo
- Department of Radiology, University of Coimbra, Coimbra, Portugal
| | - Ines Gruber
- Department of Gynecology and Obstetrics, University of Tuebingen, Tuebingen, Germany
| | - Markus Hahn
- Department of Gynecology and Obstetrics, University of Tuebingen, Tuebingen, Germany
| | - André Hennigs
- University Breast Unit, Department of Obstetrics and Gynecology, Heidelberg University Hospital, Im Neuenheimer Feld 440, 69120 Heidelberg, Germany
| | - Panagiotis Kapetas
- Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria
| | - Sheng-Chieh Lu
- MD Anderson Center for INSPiRED Cancer Care (Integrated Systems for Patient-Reported Data), The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Department of Symptom Research, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Juliane Nees
- University Breast Unit, Department of Obstetrics and Gynecology, Heidelberg University Hospital, Im Neuenheimer Feld 440, 69120 Heidelberg, Germany
| | - Ralf Ohlinger
- Department of Gynecology and Obstetrics, University of Greifswald, Greifswald, Germany
| | - Fabian Riedel
- University Breast Unit, Department of Obstetrics and Gynecology, Heidelberg University Hospital, Im Neuenheimer Feld 440, 69120 Heidelberg, Germany
| | - Matthieu Rutten
- Department of Radiology, Jeroen Bosch Hospital, 's-Hertogenbosch, The Netherlands
- Radboud University Medical Center, Nijmegen, The Netherlands
| | - Benedikt Schaefgen
- University Breast Unit, Department of Obstetrics and Gynecology, Heidelberg University Hospital, Im Neuenheimer Feld 440, 69120 Heidelberg, Germany
| | - Maximilian Schuessler
- National Center for Tumor Diseases, Heidelberg University Hospital, Heidelberg, Germany
| | - Anne Stieber
- University Breast Unit, Department of Obstetrics and Gynecology, Heidelberg University Hospital, Im Neuenheimer Feld 440, 69120 Heidelberg, Germany
| | - Riku Togawa
- University Breast Unit, Department of Obstetrics and Gynecology, Heidelberg University Hospital, Im Neuenheimer Feld 440, 69120 Heidelberg, Germany
| | | | - Sebastian Wojcinski
- Department of Gynecology and Obstetrics, Breast Cancer Center, Klinikum Bielefeld Mitte GmbH, Bielefeld, Germany
| | - Cai Xu
- MD Anderson Center for INSPiRED Cancer Care (Integrated Systems for Patient-Reported Data), The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Department of Symptom Research, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Geraldine Rauch
- Institute of Biometry and Clinical Epidemiology, Charité – Universitätsmedizin Berlin, Freie Universität Berlin, Humboldt-Universität zu Berlin, Berlin, Germany
| | - Joerg Heil
- University Breast Unit, Department of Obstetrics and Gynecology, Heidelberg University Hospital, Im Neuenheimer Feld 440, 69120 Heidelberg, Germany
| | - Michael Golatta
- University Breast Unit, Department of Obstetrics and Gynecology, Heidelberg University Hospital, Im Neuenheimer Feld 440, 69120 Heidelberg, Germany
| |
Collapse
|
26
|
Hsieh YH, Hsu FR, Dai ST, Huang HY, Chen DR, Shia WC. Incorporating the Breast Imaging Reporting and Data System Lexicon with a Fully Convolutional Network for Malignancy Detection on Breast Ultrasound. Diagnostics (Basel) 2021; 12:66. [PMID: 35054233 PMCID: PMC8774546 DOI: 10.3390/diagnostics12010066] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2021] [Revised: 12/21/2021] [Accepted: 12/25/2021] [Indexed: 11/16/2022] Open
Abstract
In this study, we applied semantic segmentation using a fully convolutional deep learning network to identify characteristics of the Breast Imaging Reporting and Data System (BI-RADS) lexicon from breast ultrasound images to facilitate clinical malignancy tumor classification. Among 378 images (204 benign and 174 malignant images) from 189 patients (102 benign breast tumor patients and 87 malignant patients), we identified seven malignant characteristics related to the BI-RADS lexicon in breast ultrasound. The mean accuracy and mean IU of the semantic segmentation were 32.82% and 28.88, respectively. The weighted intersection over union was 85.35%, and the area under the curve was 89.47%, showing better performance than similar semantic segmentation networks, SegNet and U-Net, in the same dataset. Our results suggest that the utilization of a deep learning network in combination with the BI-RADS lexicon can be an important supplemental tool when using ultrasound to diagnose breast malignancy.
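The intersection-over-union figures quoted above can be computed from predicted and reference label maps as in this minimal sketch; the averaging convention is assumed, not taken from the paper.

```python
import numpy as np

def mean_iou(pred, target, n_classes):
    """Per-class intersection-over-union and its mean for label maps.

    `pred` and `target` are integer class maps of equal shape; classes
    absent from both maps are skipped, a common mean-IU convention.
    """
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious)), ious
```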
Collapse
Affiliation(s)
- Yung-Hsien Hsieh
- Department of Information Engineering and Computer Science, Feng Chia University, Taichung 40724, Taiwan; (Y.-H.H.); (F.-R.H.); (S.-T.D.)
| | - Fang-Rong Hsu
- Department of Information Engineering and Computer Science, Feng Chia University, Taichung 40724, Taiwan; (Y.-H.H.); (F.-R.H.); (S.-T.D.)
| | - Seng-Tong Dai
- Department of Information Engineering and Computer Science, Feng Chia University, Taichung 40724, Taiwan; (Y.-H.H.); (F.-R.H.); (S.-T.D.)
| | - Hsin-Ya Huang
- Comprehensive Breast Cancer Center, Changhua Christian Hospital, Changhua 500, Taiwan;
| | - Dar-Ren Chen
- Comprehensive Breast Cancer Center, Changhua Christian Hospital, Changhua 500, Taiwan;
- School of Medicine, Chung Shan Medical University, Taichung 40201, Taiwan
| | - Wei-Chung Shia
- Department of Information Engineering and Computer Science, Feng Chia University, Taichung 40724, Taiwan; (Y.-H.H.); (F.-R.H.); (S.-T.D.)
- Molecular Medicine Laboratory, Department of Research, Changhua Christian Hospital, Changhua 500, Taiwan
| |
Collapse
|
27
|
Kim J, Kim HJ, Kim C, Lee JH, Kim KW, Park YM, Kim HW, Ki SY, Kim YM, Kim WH. Weakly-supervised deep learning for ultrasound diagnosis of breast cancer. Sci Rep 2021; 11:24382. [PMID: 34934144 PMCID: PMC8692405 DOI: 10.1038/s41598-021-03806-7] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Accepted: 11/30/2021] [Indexed: 11/21/2022] Open
Abstract
Conventional deep learning (DL) algorithms require full supervision in the form of annotated regions of interest (ROIs), which is laborious and often biased. We aimed to develop a weakly-supervised DL algorithm that diagnoses breast cancer on ultrasound without image annotation. Weakly-supervised DL algorithms were implemented with three networks (VGG16, ResNet34, and GoogLeNet) and trained using 1000 unannotated US images (500 benign and 500 malignant masses). Two sets of 200 images (100 benign and 100 malignant masses) were used as internal and external validation sets. For comparison with fully-supervised algorithms, ROI annotation was performed manually and automatically. Diagnostic performance was calculated as the area under the receiver operating characteristic curve (AUC). Using the class activation map, we determined how accurately the weakly-supervised DL algorithms localized the breast masses. For the internal validation sets, the weakly-supervised DL algorithms achieved excellent diagnostic performance, with AUC values of 0.92–0.96, which were not statistically different (all Ps > 0.05) from those of fully-supervised DL algorithms with either manual or automated ROI annotation (AUC, 0.92–0.96). For the external validation sets, the weakly-supervised DL algorithms achieved AUC values of 0.86–0.90, which were not statistically different (Ps > 0.05) or were higher (P = 0.04, VGG16 with automated ROI annotation) than those of fully-supervised DL algorithms (AUC, 0.84–0.92). In the internal and external validation sets, the weakly-supervised algorithms could localize 100% of malignant masses, except for ResNet34 (98%). The weakly-supervised DL algorithms developed in the present study were feasible for US diagnosis of breast cancer, with well-performing localization and differential diagnosis.
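The class activation map used above for localization can, in networks with global average pooling such as GoogLeNet, be formed as a weighted sum of the last convolutional feature maps; this is a generic sketch of that construction, not the authors' code.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Class activation map for a GAP-based CNN.

    `feature_maps`: (C, H, W) activations from the last conv layer.
    `class_weights`: (C,) classifier weights for the predicted class.
    The weighted sum highlights the regions driving the prediction.
    """
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # -> (H, W)
    cam = np.maximum(cam, 0)              # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()             # normalize to [0, 1] for overlay
    return cam
```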
Collapse
Affiliation(s)
- Jaeil Kim
- School of Computer Science and Engineering, Kyungpook National University, Daegu, Republic of Korea
| | - Hye Jung Kim
- Department of Radiology, School of Medicine, Kyungpook National University, Kyungpook National University Chilgok Hospital, Daegu, Republic of Korea
| | - Chanho Kim
- School of Computer Science and Engineering, Kyungpook National University, Daegu, Republic of Korea
| | - Jin Hwa Lee
- Department of Radiology, Dong-A University College of Medicine, Busan, Republic of Korea
| | - Keum Won Kim
- Departments of Radiology, School of Medicine, Konyang University, Konyang Univeristy Hospital, Daejeon, Republic of Korea
| | - Young Mi Park
- Department of Radiology, School of Medicine, Inje University, Busan Paik Hospital, Busan, Republic of Korea
| | - Hye Won Kim
- Department of Radiology, Wonkwang University Hospital, Wonkwang University School of Medicine, Iksan, Republic of Korea
| | - So Yeon Ki
- Department of Radiology, School of Medicine, Chonnam National University, Chonnam National University Hwasun Hospital, Hwasun, Republic of Korea
| | - You Me Kim
- Department of Radiology, School of Medicine, Dankook University, Dankook University Hospital, Cheonan, Republic of Korea
| | - Won Hwa Kim
- Department of Radiology, School of Medicine, Kyungpook National University, Kyungpook National University Chilgok Hospital, Daegu, Republic of Korea.
| |
Collapse
|
28
|
A beneficial role of computer-aided diagnosis system for less experienced physicians in the diagnosis of thyroid nodule on ultrasound. Sci Rep 2021; 11:20448. [PMID: 34650185 PMCID: PMC8516898 DOI: 10.1038/s41598-021-99983-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2021] [Accepted: 09/28/2021] [Indexed: 01/25/2023] Open
Abstract
Ultrasonography (US) is the primary diagnostic tool for thyroid nodules, but its accuracy is operator-dependent. It is widely used not only by radiologists but also by physicians with different levels of experience. The aim of this study was to investigate whether US with computer-aided diagnosis (CAD) can assist physicians in the diagnosis of thyroid nodules. A total of 451 thyroid nodules evaluated by fine-needle aspiration cytology following surgery were included, of which 300 (66.5%) were diagnosed as malignant. Physicians with US experience of less than 1 year (inexperienced, n = 10) or more than 5 years (experienced, n = 3) reviewed the US images of thyroid nodules with or without CAD assistance. The diagnostic performance of CAD was comparable to that of the experienced group and better than that of the inexperienced group. The AUC of the CAD for conventional PTC was higher than that for FTC and follicular variant PTC (0.925 vs. 0.499), independent of tumor size. CAD assistance significantly improved diagnostic performance in the inexperienced group, but not in the experienced group. In conclusion, the CAD system showed good performance in the diagnosis of conventional PTC. CAD assistance improved the diagnostic performance of less experienced physicians in US, especially in the diagnosis of conventional PTC.
Collapse
|
29
|
Preliminary Study on the Diagnostic Performance of a Deep Learning System for Submandibular Gland Inflammation Using Ultrasonography Images. J Clin Med 2021; 10:jcm10194508. [PMID: 34640523 PMCID: PMC8509623 DOI: 10.3390/jcm10194508] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2021] [Revised: 09/24/2021] [Accepted: 09/28/2021] [Indexed: 11/24/2022] Open
Abstract
This study was performed to evaluate the diagnostic performance of deep learning systems using ultrasonography (USG) images of the submandibular glands (SMGs) in three different conditions: obstructive sialoadenitis, Sjögren’s syndrome (SjS), and normal glands. Fifty USG images with a confirmed diagnosis of obstructive sialoadenitis, 50 USG images with a confirmed diagnosis of SjS, and 50 USG images with no SMG abnormalities were included in the study. The training group comprised 40 obstructive sialoadenitis images, 40 SjS images, and 40 control images, and the test group comprised 10 obstructive sialoadenitis images, 10 SjS images, and 10 control images for deep learning analysis. The performance of the deep learning system was calculated and compared with that of two experienced radiologists. The sensitivity of the deep learning system in the obstructive sialoadenitis group, SjS group, and control group was 55.0%, 83.0%, and 73.0%, respectively, and the total accuracy was 70.3%. The sensitivity of the two radiologists was 64.0%, 72.0%, and 86.0%, respectively, and the total accuracy was 74.0%. This study revealed that, across USG images from the two disease groups and the healthy-control group, the deep learning system was more sensitive than experienced radiologists in diagnosing SjS of the SMGs.
Collapse
|
30
|
Shen Y, Shamout FE, Oliver JR, Witowski J, Kannan K, Park J, Wu N, Huddleston C, Wolfson S, Millet A, Ehrenpreis R, Awal D, Tyma C, Samreen N, Gao Y, Chhor C, Gandhi S, Lee C, Kumari-Subaiya S, Leonard C, Mohammed R, Moczulski C, Altabet J, Babb J, Lewin A, Reig B, Moy L, Heacock L, Geras KJ. Artificial intelligence system reduces false-positive findings in the interpretation of breast ultrasound exams. Nat Commun 2021; 12:5645. [PMID: 34561440 PMCID: PMC8463596 DOI: 10.1038/s41467-021-26023-2] [Citation(s) in RCA: 105] [Impact Index Per Article: 26.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2021] [Accepted: 09/14/2021] [Indexed: 02/08/2023] Open
Abstract
Though consistently shown to detect mammographically occult cancers, breast ultrasound has been noted to have high false-positive rates. In this work, we present an AI system that achieves radiologist-level accuracy in identifying breast cancer in ultrasound images. Developed on 288,767 exams, consisting of 5,442,907 B-mode and Color Doppler images, the AI achieves an area under the receiver operating characteristic curve (AUROC) of 0.976 on a test set consisting of 44,755 exams. In a retrospective reader study, the AI achieves a higher AUROC than the average of ten board-certified breast radiologists (AUROC: 0.962 AI, 0.924 ± 0.02 radiologists). With the help of the AI, radiologists decrease their false positive rates by 37.3% and reduce requested biopsies by 27.8%, while maintaining the same level of sensitivity. This highlights the potential of AI in improving the accuracy, consistency, and efficiency of breast ultrasound diagnosis.
Collapse
Affiliation(s)
- Yiqiu Shen
- Center for Data Science, New York University, New York, NY, USA
| | - Farah E. Shamout
- Engineering Division, NYU Abu Dhabi, Abu Dhabi, UAE
| | - Jamie R. Oliver
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Jan Witowski
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Kawshik Kannan
- Department of Computer Science, Courant Institute, New York University, New York, NY, USA
| | - Jungkyu Park
- Vilcek Institute of Graduate Biomedical Sciences, NYU Grossman School of Medicine, New York, NY, USA
| | - Nan Wu
- Center for Data Science, New York University, New York, NY, USA
| | - Connor Huddleston
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Stacey Wolfson
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Alexandra Millet
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Robin Ehrenpreis
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Divya Awal
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Cathy Tyma
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Naziya Samreen
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Yiming Gao
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Chloe Chhor
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Stacey Gandhi
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Cindy Lee
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Sheila Kumari-Subaiya
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Cindy Leonard
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Reyhan Mohammed
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Christopher Moczulski
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Jaime Altabet
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - James Babb
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Alana Lewin
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Beatriu Reig
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Linda Moy
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Vilcek Institute of Graduate Biomedical Sciences, NYU Grossman School of Medicine, New York, NY, USA
| | - Laura Heacock
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Krzysztof J. Geras
- Center for Data Science, New York University, New York, NY, USA
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Vilcek Institute of Graduate Biomedical Sciences, NYU Grossman School of Medicine, New York, NY, USA
| |
Collapse
|
31
|
Bitencourt A, Daimiel Naranjo I, Lo Gullo R, Rossi Saccarelli C, Pinker K. AI-enhanced breast imaging: Where are we and where are we heading? Eur J Radiol 2021; 142:109882. [PMID: 34392105 PMCID: PMC8387447 DOI: 10.1016/j.ejrad.2021.109882] [Citation(s) in RCA: 40] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2021] [Revised: 07/15/2021] [Accepted: 07/26/2021] [Indexed: 12/22/2022]
Abstract
Significant advances in imaging analysis and the development of high-throughput methods that can extract and correlate multiple imaging parameters with different clinical outcomes have led to a new direction in medical research. Radiomics and artificial intelligence (AI) studies are rapidly evolving and have many potential applications in breast imaging, such as breast cancer risk prediction, lesion detection and classification, radiogenomics, and prediction of treatment response and clinical outcomes. AI has been applied to different breast imaging modalities, including mammography, ultrasound, and magnetic resonance imaging, in different clinical scenarios. The application of AI tools in breast imaging has an unprecedented opportunity to better derive clinical value from imaging data and reshape the way we care for our patients. The aim of this study is to review the current knowledge and future applications of AI-enhanced breast imaging in clinical practice.
Collapse
Affiliation(s)
- Almir Bitencourt
- Department of Imaging, A.C.Camargo Cancer Center, Sao Paulo, SP, Brazil; Dasa, Sao Paulo, SP, Brazil
| | - Isaac Daimiel Naranjo
- Department of Radiology, Breast Imaging Service, Guy's and St. Thomas' NHS Trust, Great Maze Pond, London, UK
| | - Roberto Lo Gullo
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
| | | | - Katja Pinker
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA.
| |
Collapse
|
32
|
Zhang H, Han L, Chen K, Peng Y, Lin J. Diagnostic Efficiency of the Breast Ultrasound Computer-Aided Prediction Model Based on Convolutional Neural Network in Breast Cancer. J Digit Imaging 2021; 33:1218-1223. [PMID: 32519253 DOI: 10.1007/s10278-020-00357-7] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023] Open
Abstract
This study aimed to construct a breast ultrasound computer-aided prediction model based on the convolutional neural network (CNN) and investigate its diagnostic efficiency in breast cancer. A retrospective analysis was carried out, including 5000 breast ultrasound images (benign: 2500; malignant: 2500) as the training group. Different prediction models were constructed using CNN (based on InceptionV3, VGG16, ResNet50, and VGG19). Additionally, the constructed prediction models were tested using 1007 images of the test group (benign: 788; malignant: 219). The receiver operating characteristic curves were drawn, and the corresponding areas under the curve (AUCs) were obtained. The model with the highest AUC was selected, and its diagnostic accuracy was compared with that obtained by sonographers who performed and interpreted ultrasonographic examinations using 683 images of the comparison group (benign: 493; malignant: 190). In the model test with the test group images, the AUCs of the constructed InceptionV3, VGG16, ResNet50, and VGG19 models were 0.905, 0.866, 0.851, and 0.847, respectively. The InceptionV3 model showed the largest AUC, with statistically significant differences compared with the other models (P < 0.05). In the classification of the comparison group images, the AUC (0.913) of the InceptionV3 model was larger than that (0.846) obtained by sonographers, showing a statistically significant difference (P < 0.05). The breast ultrasound computer-aided prediction model based on CNN showed high accuracy in the prediction of breast cancer.
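A minimal Keras sketch of the kind of pretrained-backbone binary classifier the study compares; the classification head, input size, and training details here are assumptions, not the paper's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# ImageNet-pretrained InceptionV3 as a frozen feature extractor.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # optionally unfreeze later for fine-tuning

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),  # benign vs. malignant
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
# Swapping the base for VGG16, ResNet50, or VGG19 yields the other variants.
```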
Collapse
Affiliation(s)
- Heqing Zhang
- Department of Ultrasound, West China Hospital, Sichuan University, Chengdu, China
| | - Lin Han
- Haihong Intellimage Medical Technology (Tianjin) Co., Ltd., Tianjin, China
| | - Ke Chen
- Department of Biomedical Engineering, College of Materials Science and Engineering, Sichuan University, Chengdu, China
| | - Yulan Peng
- Department of Ultrasound, West China Hospital, Sichuan University, Chengdu, China.
| | - Jiangli Lin
- Department of Biomedical Engineering, College of Materials Science and Engineering, Sichuan University, Chengdu, China
| |
Collapse
|
33
|
Lei YM, Yin M, Yu MH, Yu J, Zeng SE, Lv WZ, Li J, Ye HR, Cui XW, Dietrich CF. Artificial Intelligence in Medical Imaging of the Breast. Front Oncol 2021; 11:600557. [PMID: 34367938 PMCID: PMC8339920 DOI: 10.3389/fonc.2021.600557] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2020] [Accepted: 07/07/2021] [Indexed: 12/24/2022] Open
Abstract
Artificial intelligence (AI) has invaded our daily lives, and in the last decade, there have been very promising applications of AI in the field of medicine, including medical imaging, in vitro diagnosis, intelligent rehabilitation, and prognosis. Breast cancer is one of the common malignant tumors in women and seriously threatens women’s physical and mental health. Early screening for breast cancer via mammography, ultrasound and magnetic resonance imaging (MRI) can significantly improve the prognosis of patients. AI has shown excellent performance in image recognition tasks and has been widely studied in breast cancer screening. This paper introduces the background of AI and its application in breast medical imaging (mammography, ultrasound and MRI), such as in the identification, segmentation and classification of lesions; breast density assessment; and breast cancer risk assessment. In addition, we also discuss the challenges and future perspectives of the application of AI in medical imaging of the breast.
Collapse
Affiliation(s)
- Yu-Meng Lei
- Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
| | - Miao Yin
- Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
| | - Mei-Hui Yu
- Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
| | - Jing Yu
- Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
| | - Shu-E Zeng
- Department of Medical Ultrasound, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Wen-Zhi Lv
- Department of Artificial Intelligence, Julei Technology, Wuhan, China
| | - Jun Li
- Department of Medical Ultrasound, The First Affiliated Hospital of Medical College, Shihezi University, Xinjiang, China
| | - Hua-Rong Ye
- Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
| | - Xin-Wu Cui
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Christoph F Dietrich
- Department Allgemeine Innere Medizin (DAIM), Kliniken Beau Site, Salem und Permanence, Bern, Switzerland
| |
Collapse
|
34
|
Le D, Son T, Yao X. Machine learning in optical coherence tomography angiography. Exp Biol Med (Maywood) 2021; 246:2170-2183. [PMID: 34279136 DOI: 10.1177/15353702211026581] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022] Open
Abstract
Optical coherence tomography angiography (OCTA) offers a noninvasive label-free solution for imaging retinal vasculatures at the capillary level resolution. In principle, improved resolution implies a better chance to reveal subtle microvascular distortions associated with eye diseases that are asymptomatic in early stages. However, massive screening requires experienced clinicians to manually examine retinal images, which may result in human error and hinder objective screening. Recently, quantitative OCTA features have been developed to standardize and document retinal vascular changes. The feasibility of using quantitative OCTA features for machine learning classification of different retinopathies has been demonstrated. Deep learning-based applications have also been explored for automatic OCTA image analysis and disease classification. In this article, we summarize recent developments of quantitative OCTA features, machine learning image analysis, and classification.
Collapse
Affiliation(s)
- David Le
- Department of Bioengineering, University of Illinois at Chicago, Chicago, IL 60607, USA
| | - Taeyoon Son
- Department of Bioengineering, University of Illinois at Chicago, Chicago, IL 60607, USA
| | - Xincheng Yao
- Department of Bioengineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
| |
Collapse
|
35
|
Qian X, Pei J, Zheng H, Xie X, Yan L, Zhang H, Han C, Gao X, Zhang H, Zheng W, Sun Q, Lu L, Shung KK. Prospective assessment of breast cancer risk from multimodal multiview ultrasound images via clinically applicable deep learning. Nat Biomed Eng 2021; 5:522-532. [PMID: 33875840 DOI: 10.1038/s41551-021-00711-2] [Citation(s) in RCA: 107] [Impact Index Per Article: 26.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2019] [Accepted: 03/08/2021] [Indexed: 02/02/2023]
Abstract
The clinical application of breast ultrasound for the assessment of cancer risk and of deep learning for the classification of breast-ultrasound images has been hindered by inter-grader variability and high false positive rates and by deep-learning models that do not follow Breast Imaging Reporting and Data System (BI-RADS) standards, lack explainability features and have not been tested prospectively. Here, we show that an explainable deep-learning system trained on 10,815 multimodal breast-ultrasound images of 721 biopsy-confirmed lesions from 634 patients across two hospitals and prospectively tested on 912 additional images of 152 lesions from 141 patients predicts BI-RADS scores for breast cancer as accurately as experienced radiologists, with areas under the receiver operating curve of 0.922 (95% confidence interval (CI) = 0.868-0.959) for bimodal images and 0.955 (95% CI = 0.909-0.982) for multimodal images. Multimodal multiview breast-ultrasound images augmented with heatmaps for malignancy risk predicted via deep learning may facilitate the adoption of ultrasound imaging in screening mammography workflows.
Collapse
Affiliation(s)
- Xuejun Qian
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, USA
- Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
| | - Jing Pei
- Department of Breast Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Department of General Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China
| | - Hui Zheng
- Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, China
| | - Xinxin Xie
- Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, China
| | - Lin Yan
- School of Computer Science and Technology, Xidian University, Xi'an, China
| | - Hao Zhang
- Department of Neurosurgery, University Hospital Heidelberg, Heidelberg, Germany
| | - Chunguang Han
- Department of Breast Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Department of General Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China
| | - Xiang Gao
- Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
| | - Hanqi Zhang
- Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, China
| | - Weiwei Zheng
- Department of Ultrasound, Xuancheng People's Hospital, Xuancheng, China
| | - Qiang Sun
- Department of Breast Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Department of General Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China
| | - Lu Lu
- Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
| | - K Kirk Shung
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, USA
| |
|
36
|
Romeo V, Cuocolo R, Apolito R, Stanzione A, Ventimiglia A, Vitale A, Verde F, Accurso A, Amitrano M, Insabato L, Gencarelli A, Buonocore R, Argenzio MR, Cascone AM, Imbriaco M, Maurea S, Brunetti A. Clinical value of radiomics and machine learning in breast ultrasound: a multicenter study for differential diagnosis of benign and malignant lesions. Eur Radiol 2021; 31:9511-9519. [PMID: 34018057 PMCID: PMC8589755 DOI: 10.1007/s00330-021-08009-2] [Citation(s) in RCA: 57] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2020] [Revised: 04/06/2021] [Accepted: 04/22/2021] [Indexed: 01/28/2023]
Abstract
Objectives We aimed to assess the performance of radiomics and machine learning (ML) for classification of non-cystic benign and malignant breast lesions on ultrasound images, compare ML’s accuracy with that of a breast radiologist, and verify whether the radiologist’s performance improves with the aid of ML. Methods Our retrospective study included patients from two institutions. A total of 135 lesions from Institution 1 were used to train and test the ML model with cross-validation. Radiomic features were extracted from manually annotated images and underwent a multistep feature selection process. Non-reproducible, low-variance, and highly intercorrelated features were removed from the dataset. Then, 66 lesions from Institution 2 were used as an external test set for ML and to assess the performance of a radiologist without and with the aid of ML, using McNemar’s test. Results After feature selection, 10 of the 520 features extracted were employed to train a random forest algorithm. Its accuracy in the training set was 82% (standard deviation, SD, ± 6%), with an AUC of 0.90 (SD ± 0.06), while the accuracy on the test set was 82% (95% confidence interval (CI) = 70–90%) with an AUC of 0.82 (95% CI = 0.70–0.93). This was significantly better than the baseline reference (p = 0.0098), but not different from the radiologist (79.4%, p = 0.815). The radiologist’s performance improved when using ML (80.2%), but not significantly (p = 0.508). Conclusions A radiomic analysis combined with ML showed promising results for differentiating benign from malignant breast lesions on ultrasound images. Key Points • Machine learning showed good accuracy in discriminating benign from malignant breast lesions • The machine learning classifier’s performance was comparable to that of a breast radiologist • The radiologist’s accuracy improved with machine learning, but not significantly Supplementary Information The online version contains supplementary material available at 10.1007/s00330-021-08009-2.
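The multistep feature selection described here (dropping near-constant and highly intercorrelated radiomic features before fitting a random forest) follows a common pattern. Below is a minimal sketch of that pipeline; the thresholds (variance 0.01, |r| > 0.8) and the synthetic feature matrix are assumptions, not the authors' settings.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import VarianceThreshold
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# 135 lesions, 520 radiomic features (synthetic stand-ins)
X = pd.DataFrame(rng.normal(size=(135, 520)),
                 columns=[f"feat_{i}" for i in range(520)])
y = rng.integers(0, 2, 135)

# 1) drop near-constant features
vt = VarianceThreshold(threshold=0.01)
X_var = pd.DataFrame(vt.fit_transform(X), columns=X.columns[vt.get_support()])

# 2) drop one feature from every highly intercorrelated pair (|r| > 0.8)
corr = X_var.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [c for c in upper.columns if (upper[c] > 0.8).any()]
X_sel = X_var.drop(columns=to_drop)

# 3) random forest with cross-validation on the retained features
clf = RandomForestClassifier(n_estimators=500, random_state=0)
print("CV accuracy:", cross_val_score(clf, X_sel, y, cv=5).mean())
```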
Affiliation(s)
- Valeria Romeo
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via S. Pansini, 5, 80131, Naples, Italy
| | - Renato Cuocolo
- Department of Clinical Medicine and Surgery, University of Naples "Federico II", Naples, Italy.,Laboratory of Augmented Reality for Health Monitoring (ARHeMLab), Department of Electrical Engineering and Information Technology, University of Naples "Federico II", Naples, Italy
| | - Roberta Apolito
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via S. Pansini, 5, 80131, Naples, Italy
| | - Arnaldo Stanzione
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via S. Pansini, 5, 80131, Naples, Italy.
| | - Antonio Ventimiglia
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via S. Pansini, 5, 80131, Naples, Italy
| | - Annalisa Vitale
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via S. Pansini, 5, 80131, Naples, Italy
| | - Francesco Verde
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via S. Pansini, 5, 80131, Naples, Italy
| | - Antonello Accurso
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via S. Pansini, 5, 80131, Naples, Italy
| | - Michele Amitrano
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via S. Pansini, 5, 80131, Naples, Italy
| | - Luigi Insabato
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via S. Pansini, 5, 80131, Naples, Italy
| | - Annarita Gencarelli
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via S. Pansini, 5, 80131, Naples, Italy
| | - Roberta Buonocore
- Department of Radiology, A.O.U. San Giovanni di Dio e Ruggi d'Aragona, Salerno, Italy
| | | | - Anna Maria Cascone
- Department of Radiology, A.O.U. San Giovanni di Dio e Ruggi d'Aragona, Salerno, Italy
| | - Massimo Imbriaco
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via S. Pansini, 5, 80131, Naples, Italy
| | - Simone Maurea
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via S. Pansini, 5, 80131, Naples, Italy
| | - Arturo Brunetti
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via S. Pansini, 5, 80131, Naples, Italy
| |
|
37
|
Aggarwal R, Sounderajah V, Martin G, Ting DSW, Karthikesalingam A, King D, Ashrafian H, Darzi A. Diagnostic accuracy of deep learning in medical imaging: a systematic review and meta-analysis. NPJ Digit Med 2021; 4:65. [PMID: 33828217 PMCID: PMC8027892 DOI: 10.1038/s41746-021-00438-z] [Citation(s) in RCA: 295] [Impact Index Per Article: 73.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2020] [Accepted: 02/25/2021] [Indexed: 12/19/2022] Open
Abstract
Deep learning (DL) has the potential to transform medical diagnostics. However, the diagnostic accuracy of DL is uncertain. Our aim was to evaluate the diagnostic accuracy of DL algorithms to identify pathology in medical imaging. Searches were conducted in Medline and EMBASE up to January 2020. We identified 11,921 studies, of which 503 were included in the systematic review. Eighty-two studies in ophthalmology, 82 in breast disease and 115 in respiratory disease were included for meta-analysis. Two hundred twenty-four studies in other specialities were included for qualitative review. Peer-reviewed studies that reported on the diagnostic accuracy of DL algorithms to identify pathology using medical imaging were included. Primary outcomes were measures of diagnostic accuracy, study design and reporting standards in the literature. Estimates were pooled using random-effects meta-analysis. In ophthalmology, AUCs ranged between 0.933 and 1 for diagnosing diabetic retinopathy, age-related macular degeneration and glaucoma on retinal fundus photographs and optical coherence tomography. In respiratory imaging, AUCs ranged between 0.864 and 0.937 for diagnosing lung nodules or lung cancer on chest X-ray or CT scan. For breast imaging, AUCs ranged between 0.868 and 0.909 for diagnosing breast cancer on mammogram, ultrasound, MRI and digital breast tomosynthesis. Heterogeneity was high between studies, and extensive variation in methodology, terminology and outcome measures was noted; this can lead to an overestimation of the diagnostic accuracy of DL algorithms on medical imaging. There is an immediate need for the development of artificial intelligence-specific EQUATOR guidelines, particularly STARD, in order to provide guidance around key issues in this field.
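Random-effects pooling of per-study estimates, as used in this meta-analysis, is commonly done with the DerSimonian–Laird estimator. A compact sketch follows; the per-study AUCs and variances below are invented for illustration.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird method."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1.0 / variances                          # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)       # Cochran's Q heterogeneity statistic
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                # between-study variance
    w_star = 1.0 / (variances + tau2)            # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study AUCs and their variances
print(dersimonian_laird([0.87, 0.91, 0.89, 0.93],
                        [0.0004, 0.0009, 0.0006, 0.0012]))
```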
Affiliation(s)
- Ravi Aggarwal
- Institute of Global Health Innovation, Imperial College London, London, UK
| | | | - Guy Martin
- Institute of Global Health Innovation, Imperial College London, London, UK
| | - Daniel S W Ting
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
| | | | - Dominic King
- Institute of Global Health Innovation, Imperial College London, London, UK
| | - Hutan Ashrafian
- Institute of Global Health Innovation, Imperial College London, London, UK.
| | - Ara Darzi
- Institute of Global Health Innovation, Imperial College London, London, UK
| |
|
38
|
Muller S, Abildsnes H, Østvik A, Kragset O, Gangås I, Birke H, Langø T, Arum CJ. Can a Dinosaur Think? Implementation of Artificial Intelligence in Extracorporeal Shock Wave Lithotripsy. EUR UROL SUPPL 2021; 27:33-42. [PMID: 34337515 PMCID: PMC8317850 DOI: 10.1016/j.euros.2021.02.007] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/25/2021] [Indexed: 01/15/2023] Open
Abstract
Background Extracorporeal shock wave lithotripsy (ESWL) of kidney stones is losing ground to more expensive and invasive endoscopic treatments. Objective This proof-of-concept project was initiated to develop artificial intelligence (AI)-augmented ESWL and to investigate the potential for machine learning to improve the efficacy of ESWL. Design, setting, and participants Two-dimensional ultrasound videos were captured during ESWL treatments from an inline ultrasound device with a video grabber. An observer annotated 23 212 images from 11 patients as either in or out of focus. The median hit rate was calculated on a patient level via bootstrapping. A convolutional neural network with U-Net architecture was trained on 57 ultrasound images with delineated kidney stones from the same patients annotated by a second observer. We tested U-Net on the ultrasound images annotated by the first observer. Cross-validation with a training set of nine patients, a validation set of one patient, and a test set of one patient was performed. Outcome measurements and statistical analysis Classical metrics describing classifier performance were calculated, together with an estimation of how the algorithm would affect shock wave hit rate. Results and limitations The median hit rate for standard ESWL was 55.2% (95% confidence interval [CI] 43.2–67.3%). The performance metrics for U-Net were accuracy 63.9%, sensitivity 56.0%, specificity 74.7%, positive predictive value 75.3%, negative predictive value 55.2%, Youden’s J statistic 30.7%, no-information rate 58.0%, and Cohen’s κ 0.2931. The algorithm reduced total mishits by 67.1%. The main limitation is that this is a proof-of-concept study involving only 11 patients. Conclusions Our calculated ESWL hit rate of 55.2% (95% CI 43.2–67.3%) supports findings from earlier research. We have demonstrated that a machine learning algorithm trained on just 11 patients increases the hit rate to 75.3% and reduces mishits by 67.1%. When U-Net is trained on more and higher-quality annotations, even better results can be expected. Patient summary Kidney stones can be treated by applying shockwaves to the outside of the body. Ultrasound scans of the kidney are used to guide the machine delivering the shockwaves, but the shockwaves can still miss the stone. We used artificial intelligence to improve the accuracy in hitting the stone being treated.
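All of the classifier metrics reported here (sensitivity, specificity, PPV, NPV, Youden's J, Cohen's κ) derive from one 2×2 confusion matrix. A minimal sketch follows, with synthetic in-focus/out-of-focus labels standing in for the study's annotations.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Synthetic stand-ins for observer annotations vs. U-Net predictions
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
y_pred = np.where(rng.random(1000) < 0.7, y_true, 1 - y_true)  # ~70% agreement

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
youden_j = sensitivity + specificity - 1
kappa = cohen_kappa_score(y_true, y_pred)
print(f"Se={sensitivity:.3f} Sp={specificity:.3f} PPV={ppv:.3f} "
      f"NPV={npv:.3f} J={youden_j:.3f} kappa={kappa:.3f}")
```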
Affiliation(s)
- Sebastien Muller
- Department of Health Research, SINTEF Digital, Trondheim, Norway.,Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
| | - Håkon Abildsnes
- Medical School, Norwegian University of Science and Technology, Trondheim, Norway
| | - Andreas Østvik
- Department of Health Research, SINTEF Digital, Trondheim, Norway.,Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
| | - Oda Kragset
- Medical School, Norwegian University of Science and Technology, Trondheim, Norway
| | - Inger Gangås
- Department of Radiology, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
| | - Harriet Birke
- Department of Surgery, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
| | - Thomas Langø
- Department of Health Research, SINTEF Digital, Trondheim, Norway.,Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
| | - Carl-Jørgen Arum
- Department of Surgery, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway.,Department of Clinical and Molecular Medicine, Norwegian University of Science and Technology, Trondheim, Norway.,Department of Urology, Skane University Hospital, Malmö, Sweden.,Department of Translational Medicine, Lund University, Malmö, Sweden
| |
|
39
|
Barragán-Montero A, Javaid U, Valdés G, Nguyen D, Desbordes P, Macq B, Willems S, Vandewinckele L, Holmström M, Löfman F, Michiels S, Souris K, Sterpin E, Lee JA. Artificial intelligence and machine learning for medical imaging: A technology review. Phys Med 2021; 83:242-256. [PMID: 33979715 PMCID: PMC8184621 DOI: 10.1016/j.ejmp.2021.04.016] [Citation(s) in RCA: 130] [Impact Index Per Article: 32.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/06/2020] [Revised: 04/15/2021] [Accepted: 04/18/2021] [Indexed: 02/08/2023] Open
Abstract
Artificial intelligence (AI) has recently become a very popular buzzword, as a consequence of disruptive technical advances and impressive experimental results, notably in the field of image analysis and processing. In medicine, specialties where images are central, like radiology, pathology or oncology, have seized the opportunity, and considerable research and development efforts have been deployed to transfer the potential of AI to clinical applications. With AI becoming a more mainstream tool for typical medical imaging analysis tasks, such as diagnosis, segmentation, or classification, the key to safe and efficient use of clinical AI applications lies, in part, with informed practitioners. The aim of this review is to present the basic technological pillars of AI, together with the state-of-the-art machine learning methods and their application to medical imaging. In addition, we discuss new trends and future research directions. This will help the reader to understand how AI methods are becoming a ubiquitous tool in any medical image analysis workflow and pave the way for the clinical implementation of AI-based solutions.
Affiliation(s)
- Ana Barragán-Montero
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium.
| | - Umair Javaid
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
| | - Gilmer Valdés
- Department of Radiation Oncology, Department of Epidemiology and Biostatistics, University of California, San Francisco, USA
| | - Dan Nguyen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, USA
| | - Paul Desbordes
- Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), UCLouvain, Belgium
| | - Benoit Macq
- Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), UCLouvain, Belgium
| | - Siri Willems
- ESAT/PSI, KU Leuven Belgium & MIRC, UZ Leuven, Belgium
| | | | | | | | - Steven Michiels
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
| | - Kevin Souris
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
| | - Edmond Sterpin
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium; KU Leuven, Department of Oncology, Laboratory of Experimental Radiotherapy, Belgium
| | - John A Lee
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
| |
|
40
|
Kise Y, Møystad A, Bjørnland T, Shimizu M, Ariji Y, Kuwada C, Nishiyama M, Funakoshi T, Yoshiura K, Ariji E. Effects of 1 year of training on the performance of ultrasonographic image interpretation: A preliminary evaluation using images of Sjögren syndrome patients. Imaging Sci Dent 2021; 51:129-136. [PMID: 34235058 PMCID: PMC8219445 DOI: 10.5624/isd.20200294] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2020] [Revised: 11/26/2020] [Accepted: 12/05/2020] [Indexed: 12/13/2022] Open
Abstract
Purpose This study investigated the effects of 1 year of training on imaging diagnosis, using static ultrasonography (US) salivary gland images of Sjögren syndrome patients. Materials and Methods This study involved 3 inexperienced radiologists with different levels of experience, who received training 1 or 2 days a week under the supervision of experienced radiologists. The training program included collecting patient histories and performing physical and imaging examinations for various maxillofacial diseases. The 3 radiologists (observers A, B, and C) evaluated 400 static US images of salivary glands twice at a 1-year interval. To compare their performance, 2 experienced radiologists evaluated the same images. Diagnostic performance was compared between the 2 evaluations using the area under the receiver operating characteristic curve (AUC). Results Observer A, who was participating in the training program for the second year, exhibited no significant difference in AUC between the first and second evaluations, with results consistently comparable to those of experienced radiologists. After 1 year of training, observer B showed significantly higher AUCs than before training. The diagnostic performance of observer B reached the level of experienced radiologists for parotid gland assessment, but differed for submandibular gland assessment. For observer C, who did not complete the training, there was no significant difference in the AUC between the first and second evaluations, both of which showed significant differences from those of the experienced radiologists. Conclusion These preliminary results suggest that the training program effectively helped inexperienced radiologists reach the level of experienced radiologists for US examinations.
Affiliation(s)
- Yoshitaka Kise
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
| | - Anne Møystad
- Institute of Clinical Dentistry, Faculty of Dentistry, University of Oslo, Oslo, Norway
| | - Tore Bjørnland
- Department of Oral Surgery and Oral Medicine, Institute of Clinical Dentistry, Faculty of Dentistry, University of Oslo, Oslo, Norway
| | - Mayumi Shimizu
- Department of Oral and Maxillofacial Radiology, Kyushu University Hospital, Fukuoka, Japan
| | - Yoshiko Ariji
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
| | - Chiaki Kuwada
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
| | - Masako Nishiyama
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
| | - Takuma Funakoshi
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
| | - Kazunori Yoshiura
- Department of Oral and Maxillofacial Radiology, Faculty of Dental Science, Kyushu University, Fukuoka, Japan
| | - Eiichiro Ariji
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
| |
|
41
|
Ou WC, Polat D, Dogan BE. Deep learning in breast radiology: current progress and future directions. Eur Radiol 2021; 31:4872-4885. [PMID: 33449174 DOI: 10.1007/s00330-020-07640-9] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2020] [Revised: 10/30/2020] [Accepted: 12/17/2020] [Indexed: 12/13/2022]
Abstract
This review provides an overview of current applications of deep learning methods within breast radiology. The diagnostic capabilities of deep learning in breast radiology continue to improve, giving rise to the prospect that these methods may be integrated not only into detection and classification of breast lesions, but also into areas such as risk estimation and prediction of tumor responses to therapy. Remaining challenges include limited availability of high-quality data with expert annotations and ground truth determinations, the need for further validation of initial results, and unresolved medicolegal considerations. KEY POINTS: • Deep learning (DL) continues to push the boundaries of what can be accomplished by artificial intelligence (AI) in breast imaging with distinct advantages over conventional computer-aided detection. • DL-based AI has the potential to augment the capabilities of breast radiologists by improving diagnostic accuracy, increasing efficiency, and supporting clinical decision-making through prediction of prognosis and therapeutic response. • Remaining challenges to DL implementation include a paucity of prospective data on DL utilization and yet unresolved medicolegal questions regarding increasing AI utilization.
Affiliation(s)
- William C Ou
- Department of Radiology, Seay Biomedical Building, University of Texas Southwestern Medical Center, 2201 Inwood Road, Dallas, TX, 75390, USA.
| | - Dogan Polat
- Department of Radiology, Seay Biomedical Building, University of Texas Southwestern Medical Center, 2201 Inwood Road, Dallas, TX, 75390, USA
| | - Basak E Dogan
- Department of Radiology, Seay Biomedical Building, University of Texas Southwestern Medical Center, 2201 Inwood Road, Dallas, TX, 75390, USA
| |
|
42
|
Shia WC, Lin LS, Chen DR. Classification of malignant tumours in breast ultrasound using unsupervised machine learning approaches. Sci Rep 2021; 11:1418. [PMID: 33446841 PMCID: PMC7809485 DOI: 10.1038/s41598-021-81008-x] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2020] [Accepted: 12/07/2020] [Indexed: 12/22/2022] Open
Abstract
Traditional computer-aided diagnosis (CAD) processes include feature extraction, selection, and classification. Effective feature extraction in CAD is important for improving classification performance. We introduce a machine-learning method and have designed an analysis procedure for benign and malignant breast tumour classification in ultrasound (US) images that needs no a priori tumour region-selection processing, thereby decreasing clinical diagnosis effort while maintaining high classification performance. Our dataset comprised 677 US images (benign: 312, malignant: 365). From the two-dimensional US images, a pyramid of histogram-of-oriented-gradients descriptors was extracted and used to obtain feature vectors. The correlation-based feature selection method was used to evaluate and select significant feature sets for further classification. Sequential minimal optimisation, combined with local weight learning, was used for classification and performance enhancement. The image dataset’s classification performance showed an 81.64% sensitivity and 87.76% specificity for malignant images (area under the curve = 0.847). The positive and negative predictive values were 84.1% and 85.8%, respectively. Here, a new workflow utilising machine learning to recognise malignant US images was proposed. Comparison of physician diagnoses with the automatic classifications made using machine learning yielded similar outcomes. This indicates the potential applicability of machine learning in clinical diagnoses.
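A rough sketch of the feature-and-classifier idea described here: histogram-of-oriented-gradients descriptors pooled over a spatial pyramid, then an SVM (scikit-learn's SVC uses the libsvm backend, whose solver is sequential minimal optimisation). The images, pyramid levels and HOG parameters are placeholders rather than the paper's configuration, and the local-weight-learning step is omitted.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import SVC

def pyramid_hog(image, levels=(1, 2, 4)):
    """Concatenate HOG descriptors computed over a spatial pyramid."""
    feats = []
    for level in levels:
        step = image.shape[0] // level
        for i in range(level):
            for j in range(level):
                cell = image[i * step:(i + 1) * step, j * step:(j + 1) * step]
                cell = resize(cell, (64, 64), anti_aliasing=True)
                feats.append(hog(cell, orientations=9,
                                 pixels_per_cell=(16, 16), cells_per_block=(2, 2)))
    return np.concatenate(feats)

# Synthetic stand-ins for grayscale ultrasound crops
rng = np.random.default_rng(0)
images = rng.random((40, 128, 128))
y = rng.integers(0, 2, 40)
X = np.stack([pyramid_hog(im) for im in images])

clf = SVC(kernel="rbf")  # libsvm backend solves the SVM dual with SMO
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```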
Affiliation(s)
- Wei-Chung Shia
- Molecular Medicine Laboratory, Department of Research, Changhua Christian Hospital, Changhua, Taiwan
| | - Li-Sheng Lin
- Department of Breast Surgery, The Affiliated Hospital (Group) of Putian University, Putian, Fujian, China
| | - Dar-Ren Chen
- Comprehensive Breast Cancer Center, Changhua Christian Hospital, Changhua, Taiwan.
| |
|
43
|
Kim J, Kim HJ, Kim C, Kim WH. Artificial intelligence in breast ultrasonography. Ultrasonography 2020; 40:183-190. [PMID: 33430577 PMCID: PMC7994743 DOI: 10.14366/usg.20117] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2020] [Accepted: 11/12/2020] [Indexed: 12/13/2022] Open
Abstract
Although breast ultrasonography is the mainstay modality for differentiating between benign and malignant breast masses, it has intrinsic problems with false positives and substantial interobserver variability. Artificial intelligence (AI), particularly with deep learning models, is expected to improve workflow efficiency and serve as a second opinion. AI is highly useful for performing three main clinical tasks in breast ultrasonography: detection (localization/segmentation), differential diagnosis (classification), and prognostication (prediction). This article provides a current overview of AI applications in breast ultrasonography, with a discussion of methodological considerations in the development of AI models and an up-to-date literature review of potential clinical applications.
Affiliation(s)
- Jaeil Kim
- School of Computer Science and Engineering, Kyungpook National University, Daegu, Korea
| | - Hye Jung Kim
- Department of Radiology, School of Medicine, Kyungpook National University, Kyungpook National University Chilgok Hospital, Daegu, Korea
| | - Chanho Kim
- School of Computer Science and Engineering, Kyungpook National University, Daegu, Korea
| | - Won Hwa Kim
- Department of Radiology, School of Medicine, Kyungpook National University, Kyungpook National University Chilgok Hospital, Daegu, Korea
| |
|
44
|
Niu S, Huang J, Li J, Liu X, Wang D, Zhang R, Wang Y, Shen H, Qi M, Xiao Y, Guan M, Liu H, Li D, Liu F, Wang X, Xiong Y, Gao S, Wang X, Zhu J. Application of ultrasound artificial intelligence in the differential diagnosis between benign and malignant breast lesions of BI-RADS 4A. BMC Cancer 2020; 20:959. [PMID: 33008320 PMCID: PMC7532640 DOI: 10.1186/s12885-020-07413-z] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2020] [Accepted: 09/15/2020] [Indexed: 12/14/2022] Open
Abstract
BACKGROUND The classification of Breast Imaging Reporting and Data System 4A (BI-RADS 4A) lesions is mostly based on the personal experience of doctors and lacks specific and clear classification standards. The development of artificial intelligence (AI) provides a new method for BI-RADS categorisation. We analysed the ultrasonic morphological and texture characteristics of benign and malignant BI-RADS 4A lesions using AI, and compared these characteristics to examine the value of AI in the differential diagnosis of BI-RADS 4A benign and malignant lesions. METHODS A total of 206 BI-RADS 4A lesions examined using ultrasonography were analysed retrospectively, including 174 benign lesions and 32 malignant lesions. All of the lesions were contoured manually, and the ultrasonic morphological and texture features of the lesions, such as circularity, height-to-width ratio, margin spicules, margin coarseness, margin indistinctness, margin lobulation, energy, entropy, grey mean, internal calcification and the angle between the long axis of the lesion and the skin (ALS), were calculated using grey level gradient co-occurrence matrix analysis. Differences between benign and malignant BI-RADS 4A lesions were analysed. RESULTS Significant differences in margin lobulation, entropy, internal calcification and ALS were noted between the benign group and the malignant group (P = 0.013, 0.045, 0.045, and 0.002, respectively). The malignant group had more margin lobulations and lower entropy than the benign group, and the benign group had more internal calcifications and a greater ALS than the malignant group. No significant differences in circularity, height-to-width ratio, margin spicules, margin coarseness, margin indistinctness, energy, or grey mean were noted between benign and malignant lesions. CONCLUSIONS Compared with the naked eye, AI can reveal more subtle differences between benign and malignant BI-RADS 4A lesions. These results remind us that careful observation of the margin and the internal echo is of great significance. With the help of the morphological and texture information provided by AI, doctors can make more accurate judgments on such atypical benign and malignant lesions.
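The entropy and energy features mentioned here come from co-occurrence statistics of the lesion ROI. The paper uses a grey level gradient co-occurrence matrix; the sketch below substitutes the standard grey level co-occurrence matrix available in scikit-image as a stand-in, and the ROIs and group comparison are synthetic.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from skimage.feature import graycomatrix, graycoprops

def texture_features(roi):
    """Energy and entropy from a grey level co-occurrence matrix.
    (A stand-in for the paper's grey level *gradient* co-occurrence matrix.)"""
    glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    energy = graycoprops(glcm, "energy")[0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return energy, entropy

# Synthetic 8-bit lesion ROIs for the benign and malignant groups
rng = np.random.default_rng(0)
benign = [rng.integers(0, 256, (48, 48), dtype=np.uint8) for _ in range(174)]
malignant = [rng.integers(0, 256, (48, 48), dtype=np.uint8) for _ in range(32)]

ent_b = [texture_features(r)[1] for r in benign]
ent_m = [texture_features(r)[1] for r in malignant]
print("entropy difference p-value:", mannwhitneyu(ent_b, ent_m).pvalue)
```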
Affiliation(s)
- Sihua Niu
- Department of Ultrasound, Peking University People's Hospital, Beijing, 100044, China
| | - Jianhua Huang
- School of Computer Science and Technology, Harbin Institute of Technology, Harbin, 150001, Heilongjiang Province, China
| | - Jia Li
- Department of Ultrasound, Southeast University Zhongda Hospital, Nanjing, 210009, Jiangsu Province, China
| | - Xueling Liu
- Department of Ultrasound, The First Affiliated Hospital of Guangxi University of Chinese Medicine, Nanning, 530023, Guangxi Zhuang Autonomous Region, China
| | - Dan Wang
- Department of Ultrasound, The First Affiliated Hospital of Guangxi University of Chinese Medicine, Nanning, 530023, Guangxi Zhuang Autonomous Region, China
| | - Ruifang Zhang
- Department of Ultrasound, Zhengzhou University First Affiliated Hospital, Zhengzhou, 450052, Henan Province, China
| | - Yingyan Wang
- Department of Ultrasound, Southeast University Zhongda Hospital, Nanjing, 210009, Jiangsu Province, China
| | - Huiming Shen
- Department of Ultrasound, Southeast University Zhongda Hospital, Nanjing, 210009, Jiangsu Province, China
| | - Min Qi
- Department of Ultrasound, Southeast University Zhongda Hospital, Nanjing, 210009, Jiangsu Province, China
| | - Yi Xiao
- School of Computer Science and Technology, Harbin Institute of Technology, Harbin, 150001, Heilongjiang Province, China
| | - Mengyao Guan
- School of Computer Science and Technology, Harbin Institute of Technology, Harbin, 150001, Heilongjiang Province, China
| | - Haiyan Liu
- Department of Ultrasound, Zhengzhou University First Affiliated Hospital, Zhengzhou, 450052, Henan Province, China
| | - Diancheng Li
- Department of Ultrasound, Peking University People's Hospital, Beijing, 100044, China
| | - Feifei Liu
- Department of Ultrasound, Peking University People's Hospital, Beijing, 100044, China
| | - Xiuming Wang
- Department of Ultrasound, Peking University People's Hospital, Beijing, 100044, China
| | - Yu Xiong
- Department of Ultrasound, Peking University People's Hospital, Beijing, 100044, China
| | - Siqi Gao
- Department of Ultrasound, Peking University People's Hospital, Beijing, 100044, China
| | - Xue Wang
- Department of Ultrasound, Peking University People's Hospital, Beijing, 100044, China
| | - Jiaan Zhu
- Department of Ultrasound, Peking University People's Hospital, Beijing, 100044, China.
| |
|
45
|
Hainc N, Mannil M, Anagnostakou V, Alkadhi H, Blüthgen C, Wacht L, Bink A, Husain S, Kulcsár Z, Winklhofer S. Deep learning based detection of intracranial aneurysms on digital subtraction angiography: A feasibility study. Neuroradiol J 2020; 33:311-317. [PMID: 32633602 PMCID: PMC7416354 DOI: 10.1177/1971400920937647] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022] Open
Abstract
BACKGROUND Digital subtraction angiography is the gold standard for detecting and characterising aneurysms. Here, we assess the feasibility of commercial-grade deep learning software for the detection of intracranial aneurysms on whole-brain anteroposterior and lateral 2D digital subtraction angiography images. MATERIAL AND METHODS Seven hundred and six digital subtraction angiography images were included from a cohort of 240 patients (157 female, mean age 59 years, range 20-92; 83 male, mean age 55 years, range 19-83). Three hundred and thirty-five (47%) single frame anteroposterior and lateral images of a digital subtraction angiography series of 187 aneurysms (41 ruptured, 146 unruptured; average size 7±5.3 mm, range 1-5 mm; total 372 depicted aneurysms) and 371 (53%) aneurysm-negative study images were retrospectively analysed regarding the presence of intracranial aneurysms. The 2D data was split into testing and training sets in a ratio of 4:1 with 3D rotational digital subtraction angiography as gold standard. Supervised deep learning was performed using commercial-grade machine learning software (Cognex, ViDi Suite 2.0). Monte Carlo cross validation was performed. RESULTS Intracranial aneurysms were detected with a sensitivity of 79%, a specificity of 79%, a precision of 0.75, a F1 score of 0.77, and a mean area-under-the-curve of 0.76 (range 0.68-0.86) after Monte Carlo cross-validation, run 45 times. CONCLUSION The commercial-grade deep learning software allows for detection of intracranial aneurysms on whole-brain, 2D anteroposterior and lateral digital subtraction angiography images, with results being comparable to more specifically engineered deep learning techniques.
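Monte Carlo cross-validation, as used here, repeats a random train/test split many times and aggregates the metric. Below is a minimal sketch with a generic classifier on synthetic features; the study itself used commercial-grade software, 45 runs, and an approximately 4:1 split, so everything else in the sketch is an assumption.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import ShuffleSplit

# Synthetic features standing in for image-level predictors
rng = np.random.default_rng(0)
X = rng.normal(size=(706, 32))
y = rng.integers(0, 2, 706)

# Monte Carlo cross-validation: 45 repeated random ~4:1 train/test splits
mc_cv = ShuffleSplit(n_splits=45, test_size=0.2, random_state=0)
aucs = []
for train_idx, test_idx in mc_cv.split(X):
    clf = GradientBoostingClassifier().fit(X[train_idx], y[train_idx])
    aucs.append(roc_auc_score(y[test_idx], clf.predict_proba(X[test_idx])[:, 1]))
print(f"mean AUC {np.mean(aucs):.2f} (range {min(aucs):.2f}-{max(aucs):.2f})")
```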
Affiliation(s)
- Nicolin Hainc
- Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Switzerland
| | - Manoj Mannil
- Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Switzerland
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, University of Zurich, Switzerland
| | - Vaia Anagnostakou
- Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Switzerland
| | - Hatem Alkadhi
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, University of Zurich, Switzerland
| | - Christian Blüthgen
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, University of Zurich, Switzerland
| | - Lorenz Wacht
- Department of Radiology, City Hospital Triemli, Zurich, Switzerland
| | - Andrea Bink
- Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Switzerland
| | - Shakir Husain
- Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Switzerland
| | - Zsolt Kulcsár
- Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Switzerland
| | - Sebastian Winklhofer
- Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Switzerland
| |
|
46
|
Zhao C, Xiao M, Liu H, Wang M, Wang H, Zhang J, Jiang Y, Zhu Q. Reducing the number of unnecessary biopsies of US-BI-RADS 4a lesions through a deep learning method for residents-in-training: a cross-sectional study. BMJ Open 2020; 10:e035757. [PMID: 32513885 PMCID: PMC7282415 DOI: 10.1136/bmjopen-2019-035757] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/21/2022] Open
Abstract
OBJECTIVE The aim of the study is to explore the potential value for residents-in-training of S-Detect, a computer-assisted diagnosis system based on a deep learning (DL) algorithm. METHODS The study was designed as a cross-sectional study. Routine breast ultrasound examinations were conducted by an experienced radiologist. The ultrasonic images of the lesions were retrospectively assessed by five residents-in-training according to the Breast Imaging Report and Data System (BI-RADS) lexicon, and a dichotomous classification of the lesions was provided by S-Detect. The diagnostic performances of S-Detect and the five residents were measured and compared using the pathological results as the gold standard. The category 4a lesions assessed by the residents were downgraded to possibly benign when classified as benign by S-Detect. The diagnostic performance of the integrated results was compared with the original results of the residents. PARTICIPANTS A total of 195 focal breast lesions were consecutively enrolled, including 82 malignant lesions and 113 benign lesions. RESULTS S-Detect presented higher specificity (77.88%) and area under the curve (AUC) (0.82) than the residents (specificity: 19.47%-48.67%, AUC: 0.62-0.74). A total of 24, 31, 38, 32 and 42 lesions identified as BI-RADS 4a by residents 1, 2, 3, 4 and 5, respectively, were downgraded to possibly benign by S-Detect. Among these downgraded lesions, 24, 28, 35, 30 and 40, respectively, were proven to be pathologically benign. After combining the residents' results with the results of the software for category 4a lesions, the specificity and AUC of the five residents significantly improved (specificity: 46.02%-76.11%, AUC: 0.71-0.85, p<0.001). The intraclass correlation coefficient of the five residents also increased after integration (from 0.480 to 0.643). CONCLUSIONS With the help of the DL software, the specificity, overall diagnostic performance and interobserver agreement of the residents improved markedly. The software can be used as an adjunctive tool for residents-in-training, downgrading 4a lesions to possibly benign and reducing unnecessary biopsies.
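The integration rule evaluated here is simple: a resident's category 4a call is downgraded to possibly benign whenever the software classifies the lesion as benign. A minimal sketch of that rule on toy labels follows; the BI-RADS codes and the use of category 3 for "possibly benign" are illustrative assumptions.

```python
import numpy as np

def integrate_with_software(resident_birads, software_benign):
    """Downgrade a resident's BI-RADS 4a call to 'possibly benign' whenever
    the deep-learning software classifies the lesion as benign."""
    resident_birads = np.asarray(resident_birads, dtype=object)
    integrated = resident_birads.copy()
    mask = (resident_birads == "4a") & np.asarray(software_benign)
    integrated[mask] = "3"  # hypothetical code for 'possibly benign'
    return integrated

# Toy example: four lesions rated by a resident, one softened by the software
resident = ["4a", "4b", "4a", "3"]
software_says_benign = [True, False, False, False]
print(integrate_with_software(resident, software_says_benign))
# -> ['3' '4b' '4a' '3']
```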
Affiliation(s)
- Chenyang Zhao
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Mengsu Xiao
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - He Liu
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Ming Wang
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Hongyan Wang
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Jing Zhang
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Yuxin Jiang
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Qingli Zhu
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| |
|
47
|
Tomita N, Jiang S, Maeder ME, Hassanpour S. Automatic post-stroke lesion segmentation on MR images using 3D residual convolutional neural network. Neuroimage Clin 2020; 27:102276. [PMID: 32512401 PMCID: PMC7281812 DOI: 10.1016/j.nicl.2020.102276] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2019] [Revised: 03/31/2020] [Accepted: 05/07/2020] [Indexed: 01/21/2023]
Abstract
In this paper, we demonstrate the feasibility and performance of deep residual neural networks for volumetric segmentation of irreversibly damaged brain tissue lesions on T1-weighted MRI scans for chronic stroke patients. A total of 239 T1-weighted MRI scans of chronic ischemic stroke patients from a public dataset were retrospectively analyzed by 3D deep convolutional segmentation models with residual learning, using a novel zoom-in&out strategy. Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), and Hausdorff distance (HD) of the identified lesions were measured by using manual tracing of lesions as the reference standard. Bootstrapping was employed for all metrics to estimate 95% confidence intervals. The models were assessed on a test set of 31 scans. The average DSC was 0.64 (0.51-0.76) with a median of 0.78. ASSD and HD were 3.6 mm (1.7-6.2 mm) and 20.4 mm (10.0-33.3 mm), respectively. The latest deep learning architecture and techniques were applied with 3D segmentation on MRI scans and demonstrated effectiveness for volumetric segmentation of chronic ischemic stroke lesions.
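The reported DSC and HD are standard overlap and surface-distance metrics, and the confidence intervals come from bootstrapping per-scan results. A minimal sketch with toy 3D masks and an invented set of per-scan Dice scores follows (ASSD is omitted for brevity).

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hausdorff(pred, gt):
    """Symmetric Hausdorff distance between mask voxel sets (voxel units)."""
    p, g = np.argwhere(pred), np.argwhere(gt)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

# Two toy 3D masks standing in for predicted and manual lesion tracings
rng = np.random.default_rng(0)
gt = np.zeros((32, 32, 32), bool); gt[10:20, 10:20, 10:20] = True
pred = np.zeros_like(gt); pred[11:21, 9:19, 10:20] = True

# Percentile bootstrap over a hypothetical set of per-scan Dice scores
per_scan_dsc = rng.beta(5, 2, size=31)  # stand-in for 31 test scans
boots = [np.mean(rng.choice(per_scan_dsc, 31)) for _ in range(2000)]
print(f"DSC={dice(pred, gt):.2f}, HD={hausdorff(pred, gt):.1f} voxels, "
      f"mean DSC 95% CI=({np.percentile(boots, 2.5):.2f}, "
      f"{np.percentile(boots, 97.5):.2f})")
```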
Affiliation(s)
- Naofumi Tomita
- Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, NH 03755, USA
| | - Steven Jiang
- Department of Computer Science, Dartmouth College, Hanover, NH 03755, USA
| | - Matthew E Maeder
- Department of Radiology, Dartmouth-Hitchcock Medical Center, Lebanon, NH 03756, USA
| | - Saeed Hassanpour
- Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, NH 03755, USA; Department of Computer Science, Dartmouth College, Hanover, NH 03755, USA; Department of Epidemiology, Geisel School of Medicine at Dartmouth, Hanover, NH 03755, USA.
| |
|
48
|
Pesapane F, Suter MB, Rotili A, Penco S, Nigro O, Cremonesi M, Bellomi M, Jereczek-Fossa BA, Pinotti G, Cassano E. Will traditional biopsy be substituted by radiomics and liquid biopsy for breast cancer diagnosis and characterisation? Med Oncol 2020; 37:29. [PMID: 32180032 DOI: 10.1007/s12032-020-01353-1] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2020] [Accepted: 02/26/2020] [Indexed: 02/06/2023]
Abstract
The diagnosis of breast cancer currently relies on radiological and clinical evaluation, confirmed by histopathological examination. However, such an approach has limitations, such as suboptimal sensitivity, the long turnaround time for recall tests, the invasiveness of the procedure and the risk that some features of target lesions may remain undetected, making re-biopsy a necessity. Recent technological advances in the field of artificial intelligence hold promise in addressing such medical challenges, not only in cancer diagnosis, but also in treatment assessment and monitoring of disease progression. In the perspective of truly personalised medicine, based on early diagnosis and individually tailored treatments, two new technologies, namely radiomics and liquid biopsy, are rising as means to obtain information from diagnosis to molecular profiling and response assessment, without the need for a biopsied tissue sample. Radiomics works through the extraction of quantitative features of cancer from radiological data, while liquid biopsy captures the whole of the malignancy's biology from something as simple as a blood sample. Both techniques will hopefully identify diagnostic and prognostic information for breast cancer, potentially reducing the need for invasive (and often difficult to perform) biopsies and favouring an approach that is as personalised as possible for each patient. Nevertheless, such techniques will not substitute for tissue biopsy in the near future, and even then they will require the aid of other parameters to be correctly interpreted and acted upon.
Affiliation(s)
- Filippo Pesapane
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, Via Giuseppe Ripamonti, 435, 20141, Milan, MI, Italy.
| | | | - Anna Rotili
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, Via Giuseppe Ripamonti, 435, 20141, Milan, MI, Italy
| | - Silvia Penco
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, Via Giuseppe Ripamonti, 435, 20141, Milan, MI, Italy
| | - Olga Nigro
- Medical Oncology, ASST Sette Laghi, Viale Borri 57, 21100, Varese, VA, Italy
| | - Marta Cremonesi
- Radiation Research Unit, IEO European Institute of Oncology IRCCS, Via Giuseppe Ripamonti, 435, 20141, Milan, MI, Italy
| | - Massimo Bellomi
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Department of Radiology, IEO European Institute of Oncology IRCCS, Via Giuseppe Ripamonti, 435, 20141, Milan, MI, Italy
| | - Barbara Alicja Jereczek-Fossa
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Department of Radiation Oncology, IEO European Institute of Oncology IRCCS, Via Giuseppe Ripamonti, 435, 20141, Milan, MI, Italy
| | - Graziella Pinotti
- Medical Oncology, ASST Sette Laghi, Viale Borri 57, 21100, Varese, VA, Italy
| | - Enrico Cassano
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, Via Giuseppe Ripamonti, 435, 20141, Milan, MI, Italy
| |
|
49
|
Kise Y, Shimizu M, Ikeda H, Fujii T, Kuwada C, Nishiyama M, Funakoshi T, Ariji Y, Fujita H, Katsumata A, Yoshiura K, Ariji E. Usefulness of a deep learning system for diagnosing Sjögren's syndrome using ultrasonography images. Dentomaxillofac Radiol 2020; 49:20190348. [PMID: 31804146 PMCID: PMC7068075 DOI: 10.1259/dmfr.20190348] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2019] [Revised: 11/28/2019] [Accepted: 11/29/2019] [Indexed: 12/18/2022] Open
Abstract
OBJECTIVES We evaluated the diagnostic performance of a deep learning system for the detection of Sjögren's syndrome (SjS) in ultrasonography (US) images, and compared it with the performance of inexperienced radiologists. METHODS 100 patients with a confirmed diagnosis of SjS according to both the Japanese criteria and the American-European Consensus Group criteria, and 100 non-SjS patients who had dry mouth and suspected SjS but were definitively diagnosed as non-SjS, were enrolled in this study. All of the patients underwent US scans of both the parotid glands (PG) and submandibular glands (SMG). For the deep learning analysis, the training group consisted of 80 SjS patients and 80 non-SjS patients, and the test group consisted of 20 SjS patients and 20 non-SjS patients. The performance of the deep learning system for diagnosing SjS from the US images was compared with the diagnoses made by three inexperienced radiologists. RESULTS The accuracy, sensitivity and specificity of the deep learning system for the PG were 89.5%, 90.0% and 89.0%, respectively, and those of the inexperienced radiologists were 76.7%, 67.0% and 86.3%, respectively. The corresponding values of the deep learning system for the SMG were 84.0%, 81.0% and 87.0%, and those of the inexperienced radiologists were 72.0%, 78.0% and 66.0%. The AUC of the inexperienced radiologists was significantly different from that of the deep learning system. CONCLUSIONS The deep learning system had a high diagnostic ability for SjS. This suggests that deep learning could be used for diagnostic support when interpreting US images.
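The study's exact network is not described in this abstract, so the sketch below shows only a generic transfer-learning setup for the binary SjS vs. non-SjS task in PyTorch; the VGG16 backbone, hyperparameters and random tensors are all assumptions, not the published configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Generic transfer-learning setup for binary SjS vs. non-SjS classification of
# salivary-gland US images. weights=None keeps the sketch offline; in practice
# pretrained ImageNet weights would normally be loaded and fine-tuned.
model = models.vgg16(weights=None)
model.classifier[6] = nn.Linear(4096, 2)  # replace the 1000-way ImageNet head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)

# One illustrative training step on a random batch of 224x224 "US images"
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```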
Affiliation(s)
- Yoshitaka Kise
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University, Nagoya, Japan
| | - Mayumi Shimizu
- Department of Oral and Maxillofacial Radiology, Kyushu University Hospital, Fukuoka, Japan
| | - Haruka Ikeda
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University, Nagoya, Japan
| | - Takeshi Fujii
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University, Nagoya, Japan
| | - Chiaki Kuwada
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University, Nagoya, Japan
| | - Masako Nishiyama
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University, Nagoya, Japan
| | - Takuma Funakoshi
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University, Nagoya, Japan
| | - Yoshiko Ariji
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University, Nagoya, Japan
| | - Hiroshi Fujita
- Department of Electrical, Electronic and Computer Faculty of Engineering, Gifu University, Gifu, Japan
| | - Akitoshi Katsumata
- Department of Oral Radiology, Asahi University School of Dentistry, Mizuho, Japan
| | - Kazunori Yoshiura
- Department of Oral and Maxillofacial Radiology, Faculty of Dental Science, Kyushu University, Fukuoka, Japan
| | - Eiichiro Ariji
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University, Nagoya, Japan
| |
|
50
|
Faes L, Liu X, Wagner SK, Fu DJ, Balaskas K, Sim DA, Bachmann LM, Keane PA, Denniston AK. A Clinician's Guide to Artificial Intelligence: How to Critically Appraise Machine Learning Studies. Transl Vis Sci Technol 2020; 9:7. [PMID: 32704413 PMCID: PMC7346877 DOI: 10.1167/tvst.9.2.7] [Citation(s) in RCA: 100] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2019] [Accepted: 10/04/2019] [Indexed: 12/16/2022] Open
Abstract
In recent years, there has been considerable interest in the prospect of machine learning models demonstrating expert-level diagnosis in multiple disease contexts. However, there is concern that the excitement around this field may be associated with inadequate scrutiny of methodology and insufficient adoption of scientific good practice in the studies involving artificial intelligence in health care. This article aims to empower clinicians and researchers to critically appraise studies of clinical applications of machine learning, through: (1) introducing basic machine learning concepts and nomenclature; (2) outlining key applicable principles of evidence-based medicine; and (3) highlighting some of the potential pitfalls in the design and reporting of these studies.
Affiliation(s)
- Livia Faes
- Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Eye Clinic, Cantonal Hospital of Lucerne, Lucerne, Switzerland
| | - Xiaoxuan Liu
- Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Department of Ophthalmology, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- Academic Unit of Ophthalmology, Institute of Inflammation & Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- Health Data Research UK, London, UK
| | - Siegfried K. Wagner
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
| | - Dun Jack Fu
- Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
| | - Konstantinos Balaskas
- Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
| | - Dawn A. Sim
- Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
| | | | - Pearse A. Keane
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
| | - Alastair K. Denniston
- Department of Ophthalmology, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- Academic Unit of Ophthalmology, Institute of Inflammation & Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- Health Data Research UK, London, UK
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Centre for Patient Reported Outcome Research, Institute of Applied Health Research, University of Birmingham, Birmingham, UK
| |
Collapse
|