1
Alhejaily AMG. Artificial intelligence in healthcare (Review). Biomed Rep 2025; 22:11. [PMID: 39583770] [PMCID: PMC11582508] [DOI: 10.3892/br.2024.1889]
Abstract
The potential of artificial intelligence (AI) to significantly transform numerous aspects of contemporary civilization is substantial. Advancements in research show an increasing interest in creating AI solutions for the healthcare sector. This interest is driven by the broad spectrum and extensive nature of readily accessible patient data, including medical imaging, digitized data collection, and electronic health records, and by the ability of AI to analyze and interpret complex data, facilitating more accurate and timely diagnoses. The goal of this review is to provide a comprehensive overview of the advancements achieved by AI in healthcare, to elucidate the present state of AI in enhancing the healthcare system and improving the quality and efficiency of healthcare decision-making, and to discuss selected medical applications of AI. Furthermore, the barriers and constraints that may impede the use of AI in healthcare are outlined, and potential future directions of AI-augmented healthcare systems are discussed.
Affiliation(s)
- Abdul-Mohsen G. Alhejaily
- Academic Operations Administration, King Fahad Medical City, Riyadh Second Health Cluster, Riyadh 11525, Kingdom of Saudi Arabia
2
Udriștoiu AL, Podină N, Ungureanu BS, Constantin A, Georgescu CV, Bejinariu N, Pirici D, Burtea DE, Gruionu L, Udriștoiu S, Săftoiu A. Deep learning segmentation architectures for automatic detection of pancreatic ductal adenocarcinoma in EUS-guided fine-needle biopsy samples based on whole-slide imaging. Endosc Ultrasound 2024; 13:335-344. [PMID: 39802107] [PMCID: PMC11723688] [DOI: 10.1097/eus.0000000000000094]
Abstract
Background: EUS-guided fine-needle biopsy is the procedure of choice for the diagnosis of pancreatic ductal adenocarcinoma (PDAC). Nevertheless, the samples obtained are small and require expertise in pathology, and the diagnosis is difficult in view of the scarcity of malignant cells and the marked desmoplastic reaction of these tumors. Deep learning architectures offer a fast, accurate, and automated approach to PDAC image segmentation based on whole-slide imaging. Given the effectiveness of U-Net in semantic segmentation, numerous variants and improvements have emerged, specifically for whole-slide imaging segmentation. Methods: In this study, 7 U-Net architecture variants were compared on 2 different datasets of EUS-guided fine-needle biopsy samples from 2 medical centers (31 and 33 whole-slide images, respectively) with different parameters and acquisition tools. The U-Net variants evaluated included some that had not previously been explored for PDAC whole-slide image segmentation. Performance was evaluated using the mean Dice coefficient and mean intersection over union (IoU). Results: The highest segmentation accuracies were obtained with the Inception U-Net architecture on both datasets. PDAC tissue was segmented with an overall average Dice coefficient of 97.82% and IoU of 0.87 for Dataset 1, and an overall average Dice coefficient of 95.70% and IoU of 0.79 for Dataset 2. The trained segmentation models were also tested externally by cross-evaluation between the 2 datasets: the Inception U-Net model trained on Train Dataset 1 achieved an overall average Dice coefficient of 93.12% and IoU of 0.74 on Test Dataset 2, and the model trained on Train Dataset 2 achieved an overall average Dice coefficient of 92.09% and IoU of 0.81 on Test Dataset 1.
Conclusions: The findings of this study demonstrate the feasibility of utilizing artificial intelligence for PDAC segmentation in whole-slide imaging, supported by promising scores.
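The two evaluation metrics this study reports, the Dice coefficient and intersection over union (IoU), can be computed directly from a predicted and a ground-truth binary mask. A minimal illustrative sketch follows; the function name and toy masks are assumptions for illustration, not the authors' code:

```python
def dice_and_iou(pred, truth):
    """Dice coefficient and intersection-over-union for two binary masks.

    `pred` and `truth` are flat, equal-length sequences of 0/1 pixel labels.
    """
    intersection = sum(p and t for p, t in zip(pred, truth))
    pred_area = sum(pred)
    truth_area = sum(truth)
    union = pred_area + truth_area - intersection
    dice = 2 * intersection / (pred_area + truth_area) if (pred_area + truth_area) else 1.0
    iou = intersection / union if union else 1.0
    return dice, iou

# A predicted mask that overlaps the ground truth on 3 of the labelled pixels:
pred  = [1, 1, 1, 0, 0, 1]
truth = [1, 1, 1, 1, 0, 0]
dice, iou = dice_and_iou(pred, truth)
```

The two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why papers often report both on the same segmentations.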
Affiliation(s)
- Nicoleta Podină
- Department of Gastroenterology, Ponderas Academic Hospital, Bucharest, Romania
- Faculty of Medicine, Carol Davila University of Medicine and Pharmacy, Bucharest, Romania
- Bogdan Silviu Ungureanu
- Department of Gastroenterology, University of Medicine and Pharmacy of Craiova, Craiova, Romania
- Research Center of Gastroenterology and Hepatology, University of Medicine and Pharmacy Craiova, Craiova, Romania
- Alina Constantin
- Department of Gastroenterology, Ponderas Academic Hospital, Bucharest, Romania
- Nona Bejinariu
- REGINA MARIA Regional Laboratory, Pathological Anatomy Division, Cluj-Napoca, Romania
- Daniel Pirici
- Department of Histology, University of Medicine and Pharmacy of Craiova, Craiova, Romania
- Daniela Elena Burtea
- Research Center of Gastroenterology and Hepatology, University of Medicine and Pharmacy Craiova, Craiova, Romania
- Lucian Gruionu
- Faculty of Mechanics, University of Craiova, Craiova, Romania
- Stefan Udriștoiu
- Faculty of Automation, Computers and Electronics, University of Craiova, Craiova, Romania
- Adrian Săftoiu
- Department of Gastroenterology, Ponderas Academic Hospital, Bucharest, Romania
- Department of Gastroenterology and Hepatology, Elias University Emergency Hospital, Carol Davila University of Medicine and Pharmacy, Bucharest, Romania
3
Rawlani P, Ghosh NK, Kumar A. Role of artificial intelligence in the characterization of indeterminate pancreatic head mass and its usefulness in preoperative diagnosis. Artif Intell Gastroenterol 2023; 4:48-63. [DOI: 10.35712/aig.v4.i3.48]
Abstract
Artificial intelligence (AI) has been used in various fields of day-to-day life, and its role in medicine is immense. The understanding of oncology has improved with the introduction of AI, which helps in diagnosis, treatment planning, management, prognosis, and follow-up. It also helps to identify high-risk groups who can undergo timely screening for early detection of malignant conditions. This is especially important in pancreatic cancer, as it is one of the major causes of cancer-related deaths worldwide and there are no specific early clinical or radiological features for diagnosis. Even with improvements in imaging modalities (computed tomography, magnetic resonance imaging, endoscopic ultrasound), clinicians are often challenged by lesions that are difficult to diagnose with human competence. AI has been used in various other branches of medicine to differentiate such indeterminate lesions, including those of the thyroid gland, breast, lungs, liver, adrenal gland, and kidney. In pancreatic cancer, the role of AI has been explored and is still being investigated. This review article focuses on how AI can be used to diagnose pancreatic cancer early or differentiate it from benign pancreatic lesions, so that management can be planned at an earlier stage.
Affiliation(s)
- Palash Rawlani
- Department of Surgical Gastroenterology, Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow 226014, Uttar Pradesh, India
- Nalini Kanta Ghosh
- Department of Surgical Gastroenterology, Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow 226014, Uttar Pradesh, India
- Ashok Kumar
- Department of Surgical Gastroenterology, Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow 226014, Uttar Pradesh, India
4
Nadeem S, Hanna MG, Viswanathan K, Marino J, Ahadi M, Alzumaili B, Bani MA, Chiarucci F, Chou A, De Leo A, Fuchs TL, Lubin DJ, Luxford C, Magliocca K, Martinez G, Shi Q, Sidhu S, Ghuzlan AA, Gill AJ, Tallini G, Ghossein R, Xu B. Ki67 proliferation index in medullary thyroid carcinoma: a comparative study of multiple counting methods and validation of image analysis and deep learning platforms. Histopathology 2023; 83:981-988. [PMID: 37706239] [PMCID: PMC10840805] [DOI: 10.1111/his.15048]
Abstract
AIMS: The International Medullary Thyroid Carcinoma Grading System, introduced in 2022, mandates evaluation of the Ki67 proliferation index to assign a histological grade for medullary thyroid carcinoma. However, manual counting remains a tedious and time-consuming task. METHODS AND RESULTS: We aimed to evaluate the performance of three other counting techniques for the Ki67 index: eyeballing by a trained experienced investigator, a machine learning-based deep learning algorithm (DeepLIIF), and an image analysis software with internal thresholding, compared with the gold standard of manual counting, in a large cohort of 260 primarily resected medullary thyroid carcinomas. The Ki67 proliferation indices generated by all three methods correlated near-perfectly with the manual Ki67 index, with kappa values ranging from 0.884 to 0.979 and intraclass correlation coefficients ranging from 0.969 to 0.983. Discrepant Ki67 results were only observed in cases with borderline manual Ki67 readings, ranging from 3 to 7%. Medullary thyroid carcinomas with a high Ki67 index (≥5%) determined using any of the four methods were associated with significantly decreased disease-specific survival and distant metastasis-free survival. CONCLUSIONS: We herein validate a machine learning-based deep learning platform and an image analysis software with internal thresholding to generate accurate automatic Ki67 proliferation indices in medullary thyroid carcinoma. A manual Ki67 count remains useful when facing a tumour with a borderline Ki67 proliferation index of 3-7%. In daily practice, validation of alternative evaluation methods for the Ki67 index in MTC is required prior to implementation.
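The agreement statistic quoted above, Cohen's kappa, measures chance-corrected agreement between two sets of categorical calls (e.g., manual vs. automated high/low-grade assignments). A minimal illustrative sketch; the data and function name are assumptions for illustration, not from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two equal-length label sequences."""
    n = len(rater_a)
    # Observed agreement: fraction of cases where the two raters match.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from the raters' marginal label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical grade calls (1 = Ki67 >= 5%, 0 = Ki67 < 5%):
manual    = [0, 0, 0, 0, 1, 1, 1, 1, 0, 0]
automated = [0, 0, 0, 0, 1, 1, 1, 0, 0, 0]
kappa = cohens_kappa(manual, automated)
```

Here the raters agree on 9 of 10 cases (observed 0.90) against a chance expectation of 0.54, giving kappa ≈ 0.78, well below the 0.884-0.979 range the study reports.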
Affiliation(s)
- Saad Nadeem
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Matthew G. Hanna
- Department of Pathology and Laboratory Medicine, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Kartik Viswanathan
- Department of Pathology, Emory University Hospital Midtown, Atlanta, GA, USA
- Joseph Marino
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Mahsa Ahadi
- Royal North Shore Hospital and Northern Clinical School, Sydney Medical School, University of Sydney, Sydney, Australia
- Cancer Diagnosis and Pathology Group, Kolling Institute of Medical Research, Royal North Shore Hospital, St Leonards, Australia
- NSW Health Pathology, Department of Anatomical Pathology, Royal North Shore Hospital, St Leonard, Australia
- Bayan Alzumaili
- Department of Pathology and Laboratory Medicine, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Mohamed-Amine Bani
- Medical Pathology and Biology Department, Gustave Roussy Campus Cancer, Villejuif, France
- Federico Chiarucci
- Department of Medical and Surgical Sciences (DIMEC), University of Bologna Medical Center; IRCCS Azienda Ospedaliero-Universitaria di Bologna, Bologna, Italy
- Angela Chou
- Royal North Shore Hospital and Northern Clinical School, Sydney Medical School, University of Sydney, Sydney, Australia
- Cancer Diagnosis and Pathology Group, Kolling Institute of Medical Research, Royal North Shore Hospital, St Leonards, Australia
- NSW Health Pathology, Department of Anatomical Pathology, Royal North Shore Hospital, St Leonard, Australia
- Antonio De Leo
- Department of Medical and Surgical Sciences (DIMEC), University of Bologna Medical Center; IRCCS Azienda Ospedaliero-Universitaria di Bologna, Bologna, Italy
- Talia L. Fuchs
- Royal North Shore Hospital and Northern Clinical School, Sydney Medical School, University of Sydney, Sydney, Australia
- Cancer Diagnosis and Pathology Group, Kolling Institute of Medical Research, Royal North Shore Hospital, St Leonards, Australia
- NSW Health Pathology, Department of Anatomical Pathology, Royal North Shore Hospital, St Leonard, Australia
- Daniel J Lubin
- Department of Pathology, Emory University Hospital Midtown, Atlanta, GA, USA
- Catherine Luxford
- Royal North Shore Hospital and Northern Clinical School, Sydney Medical School, University of Sydney, Sydney, Australia
- Cancer Diagnosis and Pathology Group, Kolling Institute of Medical Research, Royal North Shore Hospital, St Leonards, Australia
- NSW Health Pathology, Department of Anatomical Pathology, Royal North Shore Hospital, St Leonard, Australia
- Kelly Magliocca
- Department of Pathology, Emory University Hospital Midtown, Atlanta, GA, USA
- Germán Martinez
- Department of Pathology and Laboratory Medicine, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Qiuying Shi
- Department of Pathology, Emory University Hospital Midtown, Atlanta, GA, USA
- Stan Sidhu
- Royal North Shore Hospital and Northern Clinical School, Sydney Medical School, University of Sydney, Sydney, Australia
- Cancer Diagnosis and Pathology Group, Kolling Institute of Medical Research, Royal North Shore Hospital, St Leonards, Australia
- NSW Health Pathology, Department of Anatomical Pathology, Royal North Shore Hospital, St Leonard, Australia
- Abir Al Ghuzlan
- Medical Pathology and Biology Department, Gustave Roussy Campus Cancer, Villejuif, France
- Anthony J. Gill
- Royal North Shore Hospital and Northern Clinical School, Sydney Medical School, University of Sydney, Sydney, Australia
- Cancer Diagnosis and Pathology Group, Kolling Institute of Medical Research, Royal North Shore Hospital, St Leonards, Australia
- NSW Health Pathology, Department of Anatomical Pathology, Royal North Shore Hospital, St Leonard, Australia
- Giovanni Tallini
- Department of Medical and Surgical Sciences (DIMEC), University of Bologna Medical Center; IRCCS Azienda Ospedaliero-Universitaria di Bologna, Bologna, Italy
- Ronald Ghossein
- Department of Pathology and Laboratory Medicine, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Bin Xu
- Department of Pathology and Laboratory Medicine, Memorial Sloan Kettering Cancer Center, New York, NY, USA
5
Pavel M, Dromain C, Ronot M, Schaefer N, Mandair D, Gueguen D, Elvira D, Jégou S, Balazard F, Dehaene O, Schutte K. The use of deep learning models to predict progression-free survival in patients with neuroendocrine tumors. Future Oncol 2023; 19:2185-2199. [PMID: 37497644] [DOI: 10.2217/fon-2022-1136]
Abstract
Aim: The RAISE project assessed whether deep learning could improve early progression-free survival (PFS) prediction in patients with neuroendocrine tumors. Patients & methods: Deep learning models extracted features from CT scans from patients in CLARINET (NCT00353496) (n = 138/204). A Cox model assessed PFS prediction when combining deep learning with the sum of longest diameter ratio (SLDr) and logarithmically transformed CgA concentration (logCgA), versus SLDr and logCgA alone. Results: Deep learning models extracted features other than lesion shape to predict PFS at week 72. No increase in performance was achieved with deep learning versus SLDr and logCgA models alone. Conclusion: Deep learning models extracted relevant features to predict PFS, but did not improve early prediction based on SLDr and logCgA.
Affiliation(s)
- Marianne Pavel
- Department of Medicine 1, Friedrich-Alexander-University of Erlangen-Nürnberg, Erlangen, Germany
6
Abu-Khudir R, Hafsa N, Badr BE. Identifying Effective Biomarkers for Accurate Pancreatic Cancer Prognosis Using Statistical Machine Learning. Diagnostics (Basel) 2023; 13:3091. [PMID: 37835833] [PMCID: PMC10572229] [DOI: 10.3390/diagnostics13193091]
Abstract
Pancreatic cancer (PC) has one of the lowest survival rates among all major types of cancer. Consequently, it is one of the leading causes of mortality worldwide. Serum biomarkers historically correlate well with the early prognosis of post-surgical complications of PC. However, attempts to identify an effective biomarker panel for the successful prognosis of PC were almost non-existent in the current literature. The current study investigated the roles of various serum biomarkers including carbohydrate antigen 19-9 (CA19-9), chemokine (C-X-C motif) ligand 8 (CXCL-8), procalcitonin (PCT), and other relevant clinical data for identifying PC progression, classified into sepsis, recurrence, and other post-surgical complications, among PC patients. The most relevant biochemical and clinical markers for PC prognosis were identified using a random-forest-powered feature elimination method. Using this informative biomarker panel, the selected machine-learning (ML) classification models demonstrated highly accurate results for classifying PC patients into three complication groups on independent test data. The superiority of the combined biomarker panel (Max AUC-ROC = 100%) was further established over using CA19-9 features exclusively (Max AUC-ROC = 75%) for the task of classifying PC progression. This novel study demonstrates the effectiveness of the combined biomarker panel in successfully diagnosing PC progression and other relevant complications among Egyptian PC survivors.
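The AUC-ROC figures reported above can be read via the Mann-Whitney interpretation of AUC: the probability that a randomly chosen positive case receives a higher classifier score than a randomly chosen negative one, with ties counted as half. A small stdlib-only sketch with hypothetical scores (not the study's data or code):

```python
def auc_roc(scores, labels):
    """AUC-ROC via the Mann-Whitney U statistic.

    `labels` are 0/1; returns the probability that a random positive
    outscores a random negative, counting tied scores as 0.5.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical classifier scores for 4 complicated (1) and 4 uncomplicated (0) cases:
scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.5, 0.3, 0.2]
labels = [1,   1,   1,   1,   0,   0,   0,   0]
auc = auc_roc(scores, labels)
```

In this toy example the classifier ranks 14 of the 16 positive-negative pairs correctly, giving an AUC of 0.875; an AUC of 1.0 (100%), as reported for the combined panel, means every positive case outscored every negative case.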
Affiliation(s)
- Rasha Abu-Khudir
- Chemistry Department, College of Science, King Faisal University, P.O. Box 380, Hofuf 31982, Al-Ahsa, Saudi Arabia
- Chemistry Department, Biochemistry Branch, Faculty of Science, Tanta University, Tanta 31527, Egypt
- Noor Hafsa
- Computer Science Department, College of Computer Science and Information Technology, King Faisal University, P.O. Box 400, Hofuf 31982, Al-Ahsa, Saudi Arabia
- Badr E. Badr
- Egyptian Ministry of Labor, Training and Research Department, Tanta 31512, Egypt
- Botany Department, Microbiology Unit, Faculty of Science, Tanta University, Tanta 31527, Egypt
7
Luchian A, Cepeda KT, Harwood R, Murray P, Wilm B, Kenny S, Pregel P, Ressel L. Quantifying acute kidney injury in an Ischaemia-Reperfusion Injury mouse model using deep-learning-based semantic segmentation in histology. Biol Open 2023; 12:bio059988. [PMID: 37642317] [PMCID: PMC10537956] [DOI: 10.1242/bio.059988]
Abstract
This study focuses on ischaemia-reperfusion injury (IRI) in kidneys, a cause of acute kidney injury (AKI) and end-stage kidney disease (ESKD). Traditional kidney damage assessment methods are semi-quantitative and subjective. This study aims to use a convolutional neural network (CNN) to segment murine kidney structures after IRI, quantify damage via CNN-generated pathological measurements, and compare this to conventional scoring. The CNN was able to accurately segment the different pathological classes, such as Intratubular casts and Tubular necrosis, with an F1 score of over 0.75. Some classes, such as Glomeruli and Proximal tubules, reached even higher values, with F1 scores over 0.90. The scoring generated from the segmentation approach correlated statistically with the semiquantitative assessment (Spearman's rank correlation coefficient = 0.94). The heatmap approach localised the intratubular necrosis mainly in the outer stripe of the outer medulla, while the tubular casts were also present in more superficial or deeper portions of the cortex and medullary areas. This study presents a CNN model capable of segmenting multiple classes of interest, including acute IRI-specific pathological changes, in a whole mouse kidney section, and can provide insights into the distribution of pathological classes within that section.
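The Spearman's rank correlation used above to compare CNN-derived scoring with the pathologist's semiquantitative assessment is simply the Pearson correlation of the two rank vectors, with tied values sharing their average rank. A stdlib-only sketch with hypothetical scores (the data and helper names are assumptions, not the study's code):

```python
def _ranks(values):
    """1-based average ranks; tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of values tied with values[order[i]].
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation applied to the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical CNN damage scores vs. pathologist semiquantitative grades:
cnn_score = [0.12, 0.35, 0.30, 0.70, 0.55]
grade     = [1,    2,    2,    4,    3]
rho = spearman_rho(cnn_score, grade)
```

Because it works on ranks, rho rewards any monotonic agreement between the continuous CNN score and the ordinal grade, which is why it suits this comparison better than plain Pearson correlation.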
Affiliation(s)
- Andreea Luchian
- Department of Veterinary Anatomy Physiology and Pathology, Institute of Infection, Veterinary and Ecological Sciences, Faculty of Health & Life Sciences, University of Liverpool, Liverpool, CH64 7TE, UK
- Katherine Trivino Cepeda
- Department of Molecular Physiology and Cell Signalling, Institute of Systems, Molecular and Integrative Biology, University of Liverpool, Liverpool, L69 7BE, UK
- Centre for Pre-clinical Imaging, Institute of Systems, Molecular and Integrative Biology, University of Liverpool, Liverpool, L69 7TX, UK
- Rachel Harwood
- Department of Paediatric Surgery, Alder Hey in the Park, Liverpool, L14 5AB, UK
- Patricia Murray
- Department of Molecular Physiology and Cell Signalling, Institute of Systems, Molecular and Integrative Biology, University of Liverpool, Liverpool, L69 7BE, UK
- Centre for Pre-clinical Imaging, Institute of Systems, Molecular and Integrative Biology, University of Liverpool, Liverpool, L69 7TX, UK
- Bettina Wilm
- Department of Molecular Physiology and Cell Signalling, Institute of Systems, Molecular and Integrative Biology, University of Liverpool, Liverpool, L69 7BE, UK
- Centre for Pre-clinical Imaging, Institute of Systems, Molecular and Integrative Biology, University of Liverpool, Liverpool, L69 7TX, UK
- Simon Kenny
- Department of Paediatric Surgery, Alder Hey in the Park, Liverpool, L14 5AB, UK
- Paola Pregel
- Department of Veterinary Sciences, University of Turin, Turin, 8-10124, Italy
- Lorenzo Ressel
- Department of Veterinary Anatomy Physiology and Pathology, Institute of Infection, Veterinary and Ecological Sciences, Faculty of Health & Life Sciences, University of Liverpool, Liverpool, CH64 7TE, UK
8
Tong Y, Udupa JK, Chong E, Winchell N, Sun C, Zou Y, Schuster SJ, Torigian DA. Prediction of lymphoma response to CAR T cells by deep learning-based image analysis. PLoS One 2023; 18:e0282573. [PMID: 37478073] [PMCID: PMC10361488] [DOI: 10.1371/journal.pone.0282573]
Abstract
Clinical prognostic scoring systems have limited utility for predicting treatment outcomes in lymphomas. We therefore tested the feasibility of a deep-learning (DL)-based image analysis methodology on pre-treatment diagnostic computed tomography (dCT), low-dose CT (lCT), and 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) images and rule-based reasoning to predict treatment response to chimeric antigen receptor (CAR) T-cell therapy in B-cell lymphomas. Pre-treatment images of 770 lymph node lesions from 39 adult patients with B-cell lymphomas treated with CD19-directed CAR T-cells were analyzed. Transfer learning using a pre-trained neural network model, then retrained for a specific task, was used to predict lesion-level treatment responses from separate dCT, lCT, and FDG-PET images. Patient-level response analysis was performed by applying rule-based reasoning to lesion-level prediction results. Patient-level response prediction was also compared to prediction based on the international prognostic index (IPI) for diffuse large B-cell lymphoma. The average accuracy of lesion-level response prediction based on single whole dCT slice-based input was 0.82 ± 0.05 with sensitivity 0.87 ± 0.07, specificity 0.77 ± 0.12, and AUC 0.91 ± 0.03. Patient-level response prediction from dCT, using the "Majority 60%" rule, had accuracy 0.81, sensitivity 0.75, and specificity 0.88 using 12-month post-treatment patient response as the reference standard, and outperformed response prediction based on IPI risk factors (accuracy 0.54, sensitivity 0.38, and specificity 0.61; p = 0.046). Prediction of treatment outcome in B-cell lymphomas from pre-treatment medical images using DL-based image analysis and rule-based reasoning is feasible. This approach can potentially provide clinically useful prognostic information for decision-making in advance of initiating CAR T-cell therapy.
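The patient-level aggregation described above applies a "Majority 60%" style rule to lesion-level predictions. A minimal sketch of how such rule-based reasoning might look; the exact rule details, helper names, and toy data are assumptions for illustration, not the authors' implementation:

```python
def patient_response(lesion_predictions, threshold=0.60):
    """Call a patient a responder (1) when at least `threshold` of that
    patient's lesion-level predictions (1 = respond) are positive."""
    responding = sum(lesion_predictions)
    return 1 if responding / len(lesion_predictions) >= threshold else 0

def sensitivity_specificity(predicted, actual):
    """Sensitivity and specificity of binary patient-level calls."""
    tp = sum(p and a for p, a in zip(predicted, actual))
    tn = sum(not p and not a for p, a in zip(predicted, actual))
    pos = sum(actual)
    neg = len(actual) - pos
    return tp / pos, tn / neg

# Hypothetical lesion-level calls for five patients, and their true responses:
patients = [[1, 1, 1, 0], [1, 1, 0], [1, 1], [0, 0, 1], [1, 1, 0, 1, 1]]
actual   = [1, 0, 1, 0, 1]
predicted = [patient_response(p) for p in patients]
sens, spec = sensitivity_specificity(predicted, actual)
```

In this toy cohort the second patient (2 of 3 lesions positive) crosses the 60% threshold but did not actually respond, so the rule yields perfect sensitivity but only 0.5 specificity, illustrating the trade-off the threshold controls.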
Affiliation(s)
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Emeline Chong
- Lymphoma Program, Abramson Cancer Center, Perelman Center for Advanced Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Nicole Winchell
- Lymphoma Program, Abramson Cancer Center, Perelman Center for Advanced Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Changjian Sun
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Yongning Zou
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Stephen J Schuster
- Lymphoma Program, Abramson Cancer Center, Perelman Center for Advanced Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Lymphoma Program, Abramson Cancer Center, Perelman Center for Advanced Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
9
Moscalu M, Moscalu R, Dascălu CG, Țarcă V, Cojocaru E, Costin IM, Țarcă E, Șerban IL. Histopathological Images Analysis and Predictive Modeling Implemented in Digital Pathology-Current Affairs and Perspectives. Diagnostics (Basel) 2023; 13:2379. [PMID: 37510122] [PMCID: PMC10378281] [DOI: 10.3390/diagnostics13142379]
Abstract
In modern clinical practice, digital pathology has an essential role, being a technological necessity for the activity of pathological anatomy laboratories. The development of information technology has greatly facilitated the management of digital images and their sharing for clinical use; methods for analyzing digital histopathological images, based on artificial intelligence techniques and specific models, quantify the required information with significantly higher consistency and precision than optical microscopy. In parallel, unprecedented advances in machine learning facilitate, through the synergy of artificial intelligence and digital pathology, the possibility of diagnosis based on image analysis, previously limited to only certain specialties. Therefore, the integration of digital images into the study of pathology, combined with advanced algorithms and computer-assisted diagnostic techniques, extends the boundaries of the pathologist's vision beyond the microscopic image and allows the specialist to use and integrate their knowledge and experience adequately. We conducted a search in PubMed on the topic of digital pathology and its applications to quantify the current state of knowledge. We found that computer-aided image analysis has a superior potential to identify, extract, and quantify features in greater detail than a human pathologist's evaluation; it performs tasks that exceed manual capacity and can produce new diagnostic algorithms and prediction models, applicable in translational research, that are able to identify new characteristics of diseases based on changes at the cellular and molecular level.
Affiliation(s)
- Mihaela Moscalu
- Department of Preventive Medicine and Interdisciplinarity, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
- Roxana Moscalu
- Wythenshawe Hospital, Manchester University NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester M139PT, UK
- Cristina Gena Dascălu
- Department of Preventive Medicine and Interdisciplinarity, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
- Viorel Țarcă
- Department of Preventive Medicine and Interdisciplinarity, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
- Elena Cojocaru
- Department of Morphofunctional Sciences I, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
- Ioana Mădălina Costin
- Faculty of Medicine, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
- Elena Țarcă
- Department of Surgery II-Pediatric Surgery, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
- Ionela Lăcrămioara Șerban
- Department of Morpho-Functional Sciences II, Faculty of Medicine, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
10
Mohamad Sehmi MN, Ahmad Fauzi MF, Wan Ahmad WSHM, Wan Ling Chan E. Pancreatic cancer grading in pathological images using deep learning convolutional neural networks. F1000Res 2022; 10:1057. [PMID: 37767358] [PMCID: PMC10521057] [DOI: 10.12688/f1000research.73161.2]
Abstract
Background: Pancreatic cancer is one of the deadliest forms of cancer. The cancer grade defines how aggressively the cancer will spread and gives doctors an indication for proper prognosis and treatment. The current method of pancreatic cancer grading, by means of manual examination of the cancerous tissue following a biopsy, is time-consuming and often results in misdiagnosis and thus incorrect treatment. This paper presents an automated grading system for pancreatic cancer from pathology images, developed by comparing deep learning models on two different pathological stains. Methods: A transfer-learning technique was adopted by testing the method on 14 different ImageNet pre-trained models. The models were fine-tuned to be trained with our dataset. Results: From the experiments, DenseNet models appeared to be the best at classifying the validation set, with up to 95.61% accuracy in grading pancreatic cancer despite the small sample set. Conclusions: To the best of our knowledge, this is the first work on grading pancreatic cancer based on pathology images. Previous works have focused either on detection only (benign or malignant) or on radiology images (computerized tomography [CT], magnetic resonance imaging [MRI], etc.). The proposed system can be very useful to pathologists in facilitating an automated or semi-automated cancer grading system, which can address the problems found in manual grading.
11. Ahmed AA, Abouzid M, Kaczmarek E. Deep Learning Approaches in Histopathology. Cancers (Basel) 2022; 14:5264. PMID: 36358683; PMCID: PMC9654172; DOI: 10.3390/cancers14215264.
Abstract
The revolution in artificial intelligence and its impact on our daily lives have generated tremendous interest in the field and its subtypes, machine learning and deep learning. Scientists and developers have designed machine learning- and deep learning-based algorithms for various tasks in tumor pathology, such as tumor detection, classification, grading by stage, diagnostic forecasting, recognition of pathological attributes, pathogenesis, and genomic mutations. Pathologists are interested in artificial intelligence to improve diagnostic precision and impartiality and to reduce the workload and time consumed, both of which affect the accuracy of decisions. Regrettably, certain obstacles to deploying artificial intelligence remain, such as the applicability and validation of algorithms and computational technologies, the ability to train pathologists and doctors to use these tools, and their willingness to accept the results. This review surveys how machine learning and deep learning methods could be integrated into health care providers' routine tasks and the obstacles and opportunities for artificial intelligence in tumor morphology.
Affiliation(s)
- Alhassan Ali Ahmed: Department of Bioinformatics and Computational Biology, Poznan University of Medical Sciences, 60-812 Poznan, Poland; Doctoral School, Poznan University of Medical Sciences, 60-812 Poznan, Poland
- Mohamed Abouzid: Doctoral School, Poznan University of Medical Sciences, 60-812 Poznan, Poland; Department of Physical Pharmacy and Pharmacokinetics, Faculty of Pharmacy, Poznan University of Medical Sciences, Rokietnicka 3 St., 60-806 Poznan, Poland
- Elżbieta Kaczmarek: Department of Bioinformatics and Computational Biology, Poznan University of Medical Sciences, 60-812 Poznan, Poland

12. Huang B, Huang H, Zhang S, Zhang D, Shi Q, Liu J, Guo J. Artificial intelligence in pancreatic cancer. Theranostics 2022; 12:6931-6954. PMID: 36276650; PMCID: PMC9576619; DOI: 10.7150/thno.77949.
Abstract
Pancreatic cancer is among the deadliest diseases, with a five-year overall survival rate of just 11%. Patients diagnosed through early screening have a median overall survival of nearly ten years, compared with 1.5 years for those diagnosed without it, so early diagnosis and treatment of pancreatic cancer are particularly critical. However, general screening for this relatively rare disease is expensive, existing tumor markers are insufficiently accurate, and the efficacy of available treatments is uncertain. For early diagnosis, artificial intelligence can rapidly identify high-risk groups from medical images, pathological examinations, biomarkers, and other data, enabling early detection of pancreatic cancer lesions. Artificial intelligence algorithms can also predict survival time, recurrence risk, metastasis, and therapy response, all of which affect prognosis. In addition, artificial intelligence is widely used for pancreatic cancer health records, estimating medical imaging parameters, developing computer-aided diagnosis systems, and more. Advances in AI applications for pancreatic cancer will require a concerted effort among clinicians, basic scientists, statisticians, and engineers. Despite its limitations, AI's computing power will give it an essential role in overcoming pancreatic cancer in the foreseeable future.
Affiliation(s)
- Bowen Huang: Department of General Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China; School of Medicine, Tsinghua University, Beijing 100084, China
- Haoran Huang: Department of General Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China; School of Medicine, Tsinghua University, Beijing 100084, China
- Shuting Zhang: Department of General Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China; School of Medicine, Tsinghua University, Beijing 100084, China
- Dingyue Zhang: Department of General Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China; School of Medicine, Tsinghua University, Beijing 100084, China
- Qingya Shi: School of Medicine, Tsinghua University, Beijing 100084, China
- Jianzhou Liu: Department of General Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China
- Junchao Guo: Department of General Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China

13. Luchini C, Pantanowitz L, Adsay V, Asa SL, Antonini P, Girolami I, Veronese N, Nottegar A, Cingarlini S, Landoni L, Brosens LA, Verschuur AV, Mattiolo P, Pea A, Mafficini A, Milella M, Niazi MK, Gurcan MN, Eccher A, Cree IA, Scarpa A. Ki-67 assessment of pancreatic neuroendocrine neoplasms: Systematic review and meta-analysis of manual vs. digital pathology scoring. Mod Pathol 2022; 35:712-720. PMID: 35249100; PMCID: PMC9174054; DOI: 10.1038/s41379-022-01055-1.
Abstract
Ki-67 assessment is a key step in the diagnosis of neuroendocrine neoplasms (NENs) from all anatomic locations. Several challenges exist in quantifying the Ki-67 proliferation index, owing to the lack of method standardization and to inter-reader variability. Digital pathology coupled with machine learning has been shown to be highly accurate and reproducible for evaluating Ki-67 in NENs. We systematically reviewed all published studies on Ki-67 assessment in pancreatic NENs (PanNENs) employing digital image analysis (DIA). The most commonly reported advantages of DIA were improved standardization and reliability of Ki-67 evaluation, as well as speed and practicality, compared with the current gold-standard approach of manual counts from captured images, which is cumbersome and time consuming. The main limitations were higher costs, lack of widespread availability (as of yet), operator qualification and training issues (when not performed by pathologists), and, most importantly, the tendency of image algorithms to count contaminating non-neoplastic cells and other signals such as hemosiderin. However, solutions to all of these challenges are developing rapidly. A comparative meta-analysis of DIA versus manual counting shows very high concordance (global coefficient of concordance: 0.94, 95% CI: 0.83-0.98) between the two modalities. These findings support the widespread adoption of validated DIA methods for Ki-67 assessment in PanNENs, provided that measures are in place to ensure counting of tumor cells only, either through software modifications or education of non-pathologist operators, as well as selection of standard regions of interest for analysis. NENs, being cellular and monotonous neoplasms, are naturally amenable to Ki-67 assessment. However, the lessons of this review may apply to other neoplasms in which proliferative activity has become an integral part of theranostic evaluation, including breast, brain, and hematolymphoid neoplasms.
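The comparison in this abstract rests on two computations: the Ki-67 proliferation index itself and a concordance statistic between the two scoring modalities. A minimal sketch follows, with invented nucleus counts and Lin's concordance correlation coefficient standing in for the review's meta-analytic estimate (the actual pooled statistic may have been computed differently):

```python
# Ki-67 index per case: immunopositive tumor nuclei as a percentage
# of all tumor nuclei counted in the region of interest.
def ki67_index(positive, total):
    return 100.0 * positive / total

# Lin's concordance correlation coefficient between two paired raters:
# penalizes both poor correlation and systematic shift between methods.
def concordance_ccc(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical paired Ki-67 scores (%) for the same six cases, each
# based on 500 counted tumor nuclei.
manual = [ki67_index(p, 500) for p in (4, 12, 28, 55, 150, 310)]
digital = [ki67_index(p, 500) for p in (5, 11, 30, 52, 155, 300)]
ccc = concordance_ccc(manual, digital)
```

A CCC near 1 indicates that digital image analysis reproduces the manual counts both in rank and in absolute level, which is the sense of "concordance" reported above.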
Affiliation(s)
- Claudio Luchini: Department of Diagnostics and Public Health, Section of Pathology, University of Verona, Verona, Italy; ARC-Net Research Center, University and Hospital Trust of Verona, Verona, Italy
- Liron Pantanowitz: Department of Pathology & Clinical Labs, University of Michigan, Ann Arbor, MI, USA
- Volkan Adsay: Department of Pathology, Koç University Hospital and Koç University Research Center for Translational Medicine (KUTTAM), Istanbul, Turkey
- Sylvia L Asa: University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, OH, USA
- Pietro Antonini: Department of Diagnostics and Public Health, Section of Pathology, University of Verona, Verona, Italy
- Ilaria Girolami: Division of Pathology, San Maurizio Central Hospital, Bolzano, Italy
- Nicola Veronese: Department of Internal Medicine and Geriatrics, University of Palermo, Palermo, Italy
- Alessia Nottegar: Pathology Unit, Azienda Ospedaliera Universitaria Integrata (AOUI), Verona, Italy
- Sara Cingarlini: Department of Medicine, Section of Oncology, University and Hospital Trust of Verona, Verona, Italy
- Luca Landoni: Department of Surgery, The Pancreas Institute, University and Hospital Trust of Verona, Verona, Italy
- Lodewijk A Brosens: Department of Pathology, University Medical Center Utrecht, Utrecht, The Netherlands
- Anna V Verschuur: Department of Pathology, University Medical Center Utrecht, Utrecht, The Netherlands
- Paola Mattiolo: Department of Diagnostics and Public Health, Section of Pathology, University of Verona, Verona, Italy
- Antonio Pea: Department of Surgery, The Pancreas Institute, University and Hospital Trust of Verona, Verona, Italy
- Andrea Mafficini: Department of Diagnostics and Public Health, Section of Pathology, University of Verona, Verona, Italy
- Michele Milella: Department of Medicine, Section of Oncology, University and Hospital Trust of Verona, Verona, Italy
- Muhammad K Niazi: Center for Biomedical Informatics, Wake Forest School of Medicine, Winston Salem, NC, USA
- Metin N Gurcan: Center for Biomedical Informatics, Wake Forest School of Medicine, Winston Salem, NC, USA
- Albino Eccher: Pathology Unit, Azienda Ospedaliera Universitaria Integrata (AOUI), Verona, Italy
- Ian A Cree: International Agency for Research on Cancer, IARC, Lyon, France
- Aldo Scarpa: Department of Diagnostics and Public Health, Section of Pathology, University of Verona, Verona, Italy; ARC-Net Research Center, University and Hospital Trust of Verona, Verona, Italy

14. Pantelis AG, Panagopoulou PA, Lapatsanis DP. Artificial Intelligence and Machine Learning in the Diagnosis and Management of Gastroenteropancreatic Neuroendocrine Neoplasms-A Scoping Review. Diagnostics (Basel) 2022; 12:874. PMID: 35453922; PMCID: PMC9027316; DOI: 10.3390/diagnostics12040874.
Abstract
Neuroendocrine neoplasms (NENs) and tumors (NETs) are rare neoplasms that may affect any part of the gastrointestinal system. In this scoping review, we map existing evidence on the role of artificial intelligence, machine learning, and deep learning in the diagnosis and management of NENs of the gastrointestinal system. After applying inclusion and exclusion criteria, we retrieved 44 studies with 53 outcome analyses. We classified the papers according to the type of NET studied (26 pancreatic NETs, 59.1%; 3 metastatic liver NETs, 6.8%; 2 small-intestinal NETs, 4.5%; colorectal, rectal, non-specified gastroenteropancreatic, and non-specified gastrointestinal NETs with 1 study each, 2.3%). The most frequently used AI algorithms were Support Vector Classification/Machine (14 analyses, 29.8%), Convolutional Neural Network (10 analyses, 21.3%), Random Forest (9 analyses, 19.1%), Logistic Regression (8 analyses, 17.0%), and Decision Tree (6 analyses, 12.8%). There was high heterogeneity in the description of prediction models, the structure of datasets, and performance metrics, and most studies did not report any external validation set. Future studies should adopt a uniform structure in accordance with existing guidelines, for the sake of reproducibility and research quality, which are prerequisites for integration into clinical practice.
Affiliation(s)
- Athanasios G. Pantelis: 4th Department of Surgery, Evaggelismos General Hospital of Athens, 10676 Athens, Greece
- Dimitris P. Lapatsanis: 4th Department of Surgery, Evaggelismos General Hospital of Athens, 10676 Athens, Greece

15. Binol H, Niazi MKK, Elmaraghy C, Moberly AC, Gurcan MN. OtoXNet: automated identification of eardrum diseases from otoscope videos: a deep learning study for video-representing images. Neural Comput Appl 2022. DOI: 10.1007/s00521-022-07107-6.
16. Ida A, Okubo Y, Kasajima R, Washimi K, Sato S, Yoshioka E, Osaka K, Suzuki T, Yamamoto Y, Yokose T, Kishida T, Miyagi Y. Clinicopathological and genetic analyses of small cell neuroendocrine carcinoma of the prostate: Histological features for accurate diagnosis and toward future novel therapies. Pathol Res Pract 2022; 229:153731. DOI: 10.1016/j.prp.2021.153731.
17. Zhang X, Cornish TC, Yang L, Bennett TD, Ghosh D, Xing F. Generative Adversarial Domain Adaptation for Nucleus Quantification in Images of Tissue Immunohistochemically Stained for Ki-67. JCO Clin Cancer Inform 2021; 4:666-679. PMID: 32730116; PMCID: PMC7397778; DOI: 10.1200/cci.19.00108.
Abstract
PURPOSE: We focus on the scarcity of annotated training data for nucleus recognition in Ki-67 immunohistochemistry (IHC)-stained pancreatic neuroendocrine tumor (NET) images. We hypothesize that deep learning-based domain adaptation is helpful for nucleus recognition when image annotations are unavailable in target data sets. METHODS: We considered 2 institutional pancreatic NET data sets: one (the source) containing 38 cases with 114 annotated images and the other (the target) containing 72 cases with 20 annotated images. The gold standards were manually annotated by 1 pathologist. We developed a novel deep learning-based domain adaptation framework to count different types of nuclei (immunopositive tumor, immunonegative tumor, and nontumor nuclei) and compared it with several recent fully supervised deep learning models: fully convolutional network-8s (FCN-8s), U-Net, fully convolutional regression networks A and B (FCRN-A, FCRN-B), and the fully residual convolutional network (FRCN). We also evaluated the proposed method by learning with a mixture of converted source images and real target annotations. RESULTS: Our method achieved F1 scores of 81.3% and 62.3% for nucleus detection and classification in the target data set, respectively, outperforming FCN-8s (53.6% and 43.6%), U-Net (61.1% and 47.6%), FCRN-A (63.4% and 55.8%), and FCRN-B (68.2% and 60.6%), and was competitive with FRCN (81.7% and 70.7%). In addition, learning with a mixture of converted source images and only a small set of real target labels further boosted performance. CONCLUSION: This study demonstrates that deep learning-based domain adaptation helps nucleus recognition in Ki-67 IHC-stained images when target data annotations are unavailable, improving the applicability of deep learning models designed for downstream supervised learning tasks on different data sets.
Affiliation(s)
- Xuhong Zhang: Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, Aurora, CO
- Toby C Cornish: Department of Pathology, University of Colorado Anschutz Medical Campus, Aurora, CO
- Lin Yang: Department of Electrical and Computer Engineering, Department of Computer and Information Science, Department of Biomedical Engineering, University of Florida, Gainesville, FL
- Tellen D Bennett: Department of Pediatrics, University of Colorado Anschutz Medical Campus, Aurora, CO; The Data Science to Patient Value Initiative, University of Colorado Anschutz Medical Campus, Aurora, CO
- Debashis Ghosh: Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, Aurora, CO; The Data Science to Patient Value Initiative, University of Colorado Anschutz Medical Campus, Aurora, CO
- Fuyong Xing: Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, Aurora, CO; The Data Science to Patient Value Initiative, University of Colorado Anschutz Medical Campus, Aurora, CO

18. Tavolara TE, Gurcan MN, Segal S, Niazi MKK. Identification of difficult to intubate patients from frontal face images using an ensemble of deep learning models. Comput Biol Med 2021; 136:104737. PMID: 34391000; DOI: 10.1016/j.compbiomed.2021.104737.
Abstract
Failure to identify difficult intubation is the leading cause of anesthesia-related death and morbidity. Despite preoperative airway assessment, 75-93% of difficult intubations are unanticipated, and airway examination methods underperform, with sensitivities of 20-62% and specificities of 82-97%. To overcome these impediments, we developed a deep learning model to identify difficult-to-intubate patients from frontal face images. We propose an ensemble of convolutional neural networks that leverages a database of celebrity facial images to learn robust features of multiple face regions. This ensemble extracts features from patient images (n = 152), which are then classified by a corresponding ensemble of attention-based multiple instance learning models; through majority voting, each patient is classified as difficult or easy to intubate. Whereas two conventional bedside tests yielded AUCs of 0.6042 and 0.4661, the proposed method achieved an AUC of 0.7105 on a cohort of 76 difficult- and 76 easy-to-intubate patients; generic features yielded AUCs of 0.4654-0.6278. The model can operate at high sensitivity and low specificity (0.9079 and 0.4474) or at low sensitivity and high specificity (0.3684 and 0.9605). The proposed ensemble thus outperforms both conventional bedside tests and generic deep learning features, and side facial images may improve its performance further. We expect our model to play an important role in developing deep learning methods for tasks in which frontal face features are informative.
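The patient-level decision rule this abstract describes (each region-specific model casts a vote, and the majority wins) can be sketched as follows. The region models here are hypothetical stubs keyed on made-up feature names, not the attention-based multiple-instance-learning models of the paper:

```python
from collections import Counter

# Majority vote over binary predictions: 1 = difficult, 0 = easy.
def majority_vote(votes):
    return Counter(votes).most_common(1)[0][0]

# Hypothetical per-region classifiers; an odd count avoids ties.
region_models = [
    lambda feats: 1 if feats["jaw"] > 0.5 else 0,
    lambda feats: 1 if feats["neck"] > 0.6 else 0,
    lambda feats: 1 if feats["mouth"] > 0.4 else 0,
]

# Patient-level prediction: collect each region model's vote, then
# take the majority as the final classification.
def classify_patient(feats):
    return majority_vote([m(feats) for m in region_models])

prediction = classify_patient({"jaw": 0.7, "neck": 0.3, "mouth": 0.9})
```

Using an odd number of voters is a common design choice for binary ensembles, since it guarantees a strict majority.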
Affiliation(s)
- Thomas E Tavolara: Center for Biomedical Informatics, Wake Forest School of Medicine, Winston-Salem, NC, USA
- Metin N Gurcan: Center for Biomedical Informatics, Wake Forest School of Medicine, Winston-Salem, NC, USA
- Scott Segal: Department of Anesthesiology, Wake Forest School of Medicine, Winston-Salem, NC, USA
- M K K Niazi: Center for Biomedical Informatics, Wake Forest School of Medicine, Winston-Salem, NC, USA

19. Kobayashi S, Saltz JH, Yang VW. State of machine and deep learning in histopathological applications in digestive diseases. World J Gastroenterol 2021; 27:2545-2575. PMID: 34092975; PMCID: PMC8160628; DOI: 10.3748/wjg.v27.i20.2545.
Abstract
Machine learning (ML)- and deep learning (DL)-based imaging modalities can handle extremely high-dimensional data for a number of computer vision tasks. While these approaches have been applied to numerous data types, they are especially well suited to histopathological images, whose high-resolution, microscopic perspective captures cellular and structural features. These methodologies have already demonstrated promising performance in applications such as disease classification, cancer grading, structural and cellular localization, and prognostic prediction. Gastroenterology and hepatology encompass a wide range of pathologies requiring histopathological evaluation, making them disciplines ripe for integration of these technologies. Gastroenterologists have also been primed to consider the impact of these algorithms, as real-time endoscopic video analysis software has been an active and popular field of research. This heightened clinical awareness will likely be important for future integration of these methods and for driving interdisciplinary collaboration on emerging studies. To provide an overview of these methodologies as applied to gastrointestinal and hepatological histopathology slides, this review discusses general ML and DL concepts, introduces recent and emerging literature using these methods, and covers the challenges to further advancing the field.
Affiliation(s)
- Soma Kobayashi: Department of Biomedical Informatics, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, United States
- Joel H Saltz: Department of Biomedical Informatics, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, United States
- Vincent W Yang: Department of Medicine, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, United States; Department of Physiology and Biophysics, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, United States

20. Song C, Wang M, Luo Y, Chen J, Peng Z, Wang Y, Zhang H, Li ZP, Shen J, Huang B, Feng ST. Predicting the recurrence risk of pancreatic neuroendocrine neoplasms after radical resection using deep learning radiomics with preoperative computed tomography images. Ann Transl Med 2021; 9:833. PMID: 34164467; PMCID: PMC8184461; DOI: 10.21037/atm-21-25.
Abstract
Background: To establish and validate a prediction model for recurrence of pancreatic neuroendocrine neoplasms (pNENs) after radical surgery using preoperative computed tomography (CT) images. Methods: We retrospectively collected data from 74 patients with pathologically confirmed pNENs (internal group: 56 patients, Hospital I; external validation group: 18 patients, Hospital II). Using the internal group, models were trained to predict 5-year pNEN recurrence from CT findings evaluated by radiologists, from radiomics, and from deep learning radiomics (DLR). Radiomics and DLR models were established for arterial (A), venous (V), and combined arterial and venous (A&V) contrast phases. The best-performing model was further combined with clinical information, and all patients were divided into high- and low-risk groups for survival analysis with the Kaplan-Meier method. Results: In the internal group, the areas under the curve (AUCs) of the DLR-A, DLR-V, and DLR-A&V models were 0.80, 0.58, and 0.72, respectively; the corresponding radiomics AUCs were 0.74, 0.68, and 0.70, and the AUC of the CT findings model was 0.53. DLR-A was the optimal model, and adding clinical information improved its AUC from 0.80 to 0.83. In the validation group, the AUCs of the DLR-A, DLR-V, and DLR-A&V models were 0.77, 0.48, and 0.64, respectively, and those of the radiomics-A, radiomics-V, and radiomics-A&V models were 0.56, 0.52, and 0.56, respectively; the AUC of the CT findings model was 0.52. In the validation group, the comparison between the DLR-A and random models showed a trend toward significance (P=0.058). Recurrence-free survival differed significantly between high- and low-risk groups (P=0.003). Conclusions: Using DLR, we established a preoperative model that predicts recurrence after radical surgery in pNEN patients, enabling risk evaluation of pNEN recurrence and better-informed clinical decision-making.
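The AUCs reported in this abstract can be computed for any scored cohort via the rank-based (Mann-Whitney) formulation of the area under the ROC curve: the probability that a randomly chosen positive case scores above a randomly chosen negative one, counting ties as one half. The patient scores below are invented for illustration:

```python
# AUC via the rank-sum identity: fraction of positive/negative pairs
# in which the positive case receives the higher score.
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model scores for 8 patients
# (label 1 = recurrence within 5 years, label 0 = no recurrence).
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.45, 0.7, 0.3, 0.5, 0.2, 0.1]
model_auc = auc(scores, labels)  # 15 of 16 pairs ranked correctly
```

An AUC of 0.5 corresponds to random ranking, which is why the paper compares its DLR-A model against a "random model" baseline.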
Affiliation(s)
- Chenyu Song: Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Mingyu Wang: Medical AI Lab, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Yanji Luo: Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Jie Chen: Department of Gastroenterology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Zhenpeng Peng: Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Yangdi Wang: Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Hongyuan Zhang: Medical AI Lab, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Zi-Ping Li: Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Jingxian Shen: Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, China
- Bingsheng Huang: Medical AI Lab, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Shi-Ting Feng: Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China

21. Dlamini Z, Francies FZ, Hull R, Marima R. Artificial intelligence (AI) and big data in cancer and precision oncology. Comput Struct Biotechnol J 2020; 18:2300-2311. PMID: 32994889; PMCID: PMC7490765; DOI: 10.1016/j.csbj.2020.08.019.
Abstract
Artificial intelligence (AI) and machine learning have significantly influenced many facets of the healthcare sector. Advances in technology have paved the way for analysis of big datasets in a cost- and time-effective manner, and clinical oncology and research are reaping the benefits. The burden of cancer is a global phenomenon, and efforts to reduce mortality require early diagnosis for effective therapeutic intervention. However, metastatic and recurrent cancers evolve and acquire drug resistance, so it is imperative to detect the novel biomarkers that induce drug resistance and to identify therapeutic targets that enhance treatment regimens. The introduction of next-generation sequencing (NGS) platforms addresses these demands and has revolutionised the future of precision oncology. NGS offers several clinical applications important for risk prediction, early detection of disease, diagnosis by sequencing and medical imaging, accurate prognosis, biomarker identification, and identification of therapeutic targets for novel drug discovery. NGS generates large datasets that demand specialised bioinformatics resources to extract what is relevant and clinically significant. Through these applications of AI, cancer diagnostics and prognostic prediction are enhanced with NGS and with medical imaging that delivers high-resolution images. Regardless of improvements in technology, AI faces challenges and limitations, and the clinical application of NGS remains to be validated. With continued innovation, the future of AI and precision oncology shows great promise.
Affiliation(s)
- Zodwa Dlamini: SAMRC/UP Precision Prevention & Novel Drug Targets for HIV-Associated Cancers (PPNDTHAC) Extramural Unit, Pan African Cancer Research Institute (PACRI), University of Pretoria, Faculty of Health Sciences, Hatfield 0028, South Africa
- Flavia Zita Francies: SAMRC/UP Precision Prevention & Novel Drug Targets for HIV-Associated Cancers (PPNDTHAC) Extramural Unit, Pan African Cancer Research Institute (PACRI), University of Pretoria, Faculty of Health Sciences, Hatfield 0028, South Africa
- Rodney Hull: SAMRC/UP Precision Prevention & Novel Drug Targets for HIV-Associated Cancers (PPNDTHAC) Extramural Unit, Pan African Cancer Research Institute (PACRI), University of Pretoria, Faculty of Health Sciences, Hatfield 0028, South Africa
- Rahaba Marima: SAMRC/UP Precision Prevention & Novel Drug Targets for HIV-Associated Cancers (PPNDTHAC) Extramural Unit, Pan African Cancer Research Institute (PACRI), University of Pretoria, Faculty of Health Sciences, Hatfield 0028, South Africa

22. Acs B, Rantalainen M, Hartman J. Artificial intelligence as the next step towards precision pathology. J Intern Med 2020; 288:62-81. PMID: 32128929; DOI: 10.1111/joim.13030.
Abstract
Pathology is the cornerstone of cancer care, and the need for accuracy in the histopathologic diagnosis of cancer is increasing as personalized cancer therapy requires accurate biomarker assessment. Digital image analysis holds promise to improve both the volume and precision of histomorphological evaluation. Recently, machine learning, and particularly deep learning, has enabled rapid advances in computational pathology. The integration of machine learning into routine care will be a milestone for the healthcare sector in the next decade, and histopathology is at the centre of this revolution. Examples of potential high-value machine learning applications include model-based assessment of routine diagnostic features in pathology and the extraction and identification of novel features that provide insight into a disease. Recent groundbreaking results have demonstrated that applications of machine learning in pathology significantly improve metastasis detection in lymph nodes, Ki67 scoring in breast cancer, Gleason grading in prostate cancer, and tumour-infiltrating lymphocyte (TIL) scoring in melanoma. Deep learning models have also been shown to predict the status of some molecular markers in lung, prostate, gastric, and colorectal cancer based on standard H&E slides, and prognostic (survival outcome) deep neural network models based on digitized H&E slides have been demonstrated in several diseases, including lung cancer, melanoma, and glioma. In this review, we present and summarize the latest developments in digital image analysis and in the application of artificial intelligence in diagnostic pathology.
Affiliation(s)
- B Acs
- Department of Oncology and Pathology, Karolinska Institutet, Stockholm, Sweden
- M Rantalainen
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- J Hartman
- Department of Oncology and Pathology, Karolinska Institutet, Stockholm, Sweden
23
Camalan S, Niazi MKK, Moberly AC, Teknos T, Essig G, Elmaraghy C, Taj-Schaal N, Gurcan MN. OtoMatch: Content-based eardrum image retrieval using deep learning. PLoS One 2020; 15:e0232776. [PMID: 32413096] [PMCID: PMC7228122] [DOI: 10.1371/journal.pone.0232776] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Received: 01/03/2020] [Accepted: 04/21/2020] [Indexed: 12/29/2022]
Abstract
Acute infections of the middle ear are the most commonly treated childhood diseases. Because complications affect children's language learning and cognitive processes, it is essential to diagnose these diseases in a timely and accurate manner. The prevailing literature suggests that it is difficult to accurately diagnose these infections, even for experienced ear, nose, and throat (ENT) physicians. Advanced care practitioners (e.g., nurse practitioners, physician assistants) serve as first-line providers in many primary care settings and may benefit from additional guidance to appropriately determine the diagnosis and treatment of ear diseases. For this purpose, we designed a content-based image retrieval (CBIR) system (called OtoMatch) for normal, middle ear effusion, and tympanostomy tube conditions, operating on eardrum images captured with a digital otoscope. We present a method that enables the conversion of any convolutional neural network (trained for classification) into an image retrieval model. As a proof of concept, we converted a pre-trained deep learning model into an image retrieval system. We accomplished this by changing the fully connected layers into lookup tables. A database of 454 labeled eardrum images (179 normal, 179 effusion, and 96 tube cases) was used to train and test the system. On a 10-fold cross validation, the proposed method resulted in an average accuracy of 80.58% (SD 5.37%), and maximum F1 score of 0.90 while retrieving the most similar image from the database. These are promising results for the first study to demonstrate the feasibility of developing a CBIR system for eardrum images using the newly proposed methodology.
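The abstract's central idea, repurposing a classification network as a retrieval model so that the most similar database image can be looked up for a query, can be illustrated with a minimal nearest-neighbour sketch. This is a hedged illustration, not the paper's implementation: the feature vectors stand in for CNN activations, and the tiny database stands in for the 454 labeled eardrum images.

```python
import numpy as np

def build_index(features):
    """L2-normalize database feature vectors so that a dot product
    with a normalized query equals cosine similarity."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    return features / np.clip(norms, 1e-12, None)

def retrieve(index, query, k=3):
    """Return indices of the k database images most similar to the query,
    best match first."""
    q = query / max(np.linalg.norm(query), 1e-12)
    sims = index @ q                # cosine similarities against all entries
    return np.argsort(-sims)[:k]
```

In this sketch the "lookup table" the abstract mentions is simply the matrix of normalized feature vectors; in practice those vectors would come from the penultimate layer of the trained classifier.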
Affiliation(s)
- Seda Camalan
- Center for Biomedical Informatics, Wake Forest School of Medicine, Winston-Salem, North Carolina, United States of America
- Muhammad Khalid Khan Niazi
- Center for Biomedical Informatics, Wake Forest School of Medicine, Winston-Salem, North Carolina, United States of America
- Aaron C. Moberly
- Department of Otolaryngology, Ohio State University, Columbus, Ohio, United States of America
- Theodoros Teknos
- UH Seidman Cancer Center, Cleveland, Ohio, United States of America
- Garth Essig
- Department of Otolaryngology, Ohio State University, Columbus, Ohio, United States of America
- Charles Elmaraghy
- Department of Otolaryngology, Ohio State University, Columbus, Ohio, United States of America
- Nazhat Taj-Schaal
- Department of Internal Medicine, Ohio State University, Columbus, Ohio, United States of America
- Metin N. Gurcan
- Center for Biomedical Informatics, Wake Forest School of Medicine, Winston-Salem, North Carolina, United States of America
24
Satturwar SP, Pantanowitz JL, Manko CD, Seigh L, Monaco SE, Pantanowitz L. Ki-67 proliferation index in neuroendocrine tumors: Can augmented reality microscopy with image analysis improve scoring? Cancer Cytopathol 2020; 128:535-544. [PMID: 32401429] [DOI: 10.1002/cncy.22272] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Received: 01/13/2020] [Revised: 02/18/2020] [Accepted: 03/11/2020] [Indexed: 12/27/2022]
Abstract
BACKGROUND The Ki-67 index is important for grading neuroendocrine tumors (NETs) in cytology. However, different counting methods exist. Recently, augmented reality microscopy (ARM) has enabled real-time image analysis using glass slides. The objective of the current study was to compare traditional Ki-67 scoring methods in cell block material with newer methods such as ARM. METHODS Ki-67 immunostained slides from 50 NETs of varying grades were retrieved (39 from the pancreas and 11 metastases). Methods used to quantify the Ki-67 index in up to 3 hot spots included: 1) "eyeball" estimation (EE); 2) printed image manual counting (PIMC); 3) ARM with live image analysis; and 4) image analysis using whole-slide images (WSI) (field of view [FOV] and the entire slide). RESULTS The Ki-67 index obtained using the different methods varied. The pairwise kappa results ranged from no agreement (between WSI image analysis [FOV] and histology) to near-perfect agreement (between ARM and PIMC). Using surgical pathology as the gold standard, the EE method had the highest concordance rate (84.2%), followed by WSI analysis of the entire slide (73.7%) and then both the ARM and PIMC methods (63.2% for both). The PIMC method was the most time-consuming, whereas image analysis using WSI (FOV) was the fastest method, followed by ARM. CONCLUSIONS The Ki-67 index for NETs in cell block material varied by the scoring method used, which may affect grade. PIMC was the most time-consuming method, and EE had the highest concordance rate. Although real-time automated counting using image analysis demonstrated inaccuracies, ARM streamlined and hastened the task of Ki-67 quantification in NETs.
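For orientation, the quantity all of the compared methods estimate, the Ki-67 proliferation index, is a simple ratio: positively staining tumor nuclei over total tumor nuclei, typically evaluated in hot spots. A minimal sketch (the counts and the max-over-hot-spots aggregation are illustrative assumptions, not this study's protocol):

```python
def ki67_index(positive_nuclei, total_nuclei):
    """Ki-67 proliferation index as a percentage of tumor nuclei
    that stain positive for Ki-67."""
    if total_nuclei <= 0:
        raise ValueError("total_nuclei must be positive")
    return 100.0 * positive_nuclei / total_nuclei

def hot_spot_index(counts):
    """Highest Ki-67 index across (positive, total) counts from
    several hot spots."""
    return max(ki67_index(p, t) for p, t in counts)
```

The disagreements the study reports arise not from this arithmetic but from how the nuclei are counted, by eye, on a printout, or by an image-analysis algorithm.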
Affiliation(s)
- Swati P Satturwar
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania, USA
- Christopher D Manko
- Department of Biology, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Lindsey Seigh
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania, USA
- Sara E Monaco
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania, USA
- Liron Pantanowitz
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania, USA
25
Jiang Y, Yang M, Wang S, Li X, Sun Y. Emerging role of deep learning-based artificial intelligence in tumor pathology. Cancer Commun (Lond) 2020; 40:154-166. [PMID: 32277744] [PMCID: PMC7170661] [DOI: 10.1002/cac2.12012] [Citation(s) in RCA: 201] [Impact Index Per Article: 40.2] [Received: 12/01/2020] [Accepted: 02/06/2020] [Indexed: 12/11/2022]
Abstract
The development of digital pathology and progression of state-of-the-art algorithms for computer vision have led to increasing interest in the use of artificial intelligence (AI), especially deep learning (DL)-based AI, in tumor pathology. DL-based algorithms have been developed to conduct all kinds of work involved in tumor pathology, including tumor diagnosis, subtyping, grading, staging, and prognostic prediction, as well as the identification of pathological features, biomarkers, and genetic changes. The applications of AI in pathology not only help improve diagnostic accuracy and objectivity but also reduce the workload of pathologists, enabling them to spend additional time on high-level decision-making tasks. In addition, AI can help pathologists meet the requirements of precision oncology. However, there are still challenges to the implementation of AI, including issues of algorithm validation and interpretability, computing systems, skepticism among pathologists, clinicians, and patients, as well as regulatory and reimbursement hurdles. Herein, we present an overview of how AI-based approaches could be integrated into the workflow of pathologists and discuss the challenges and perspectives of the implementation of AI in tumor pathology.
Affiliation(s)
- Yahui Jiang
- Department of Pathology, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, National Clinical Research Center for Cancer, Tianjin Cancer Institute and Hospital, Tianjin Medical University, Tianjin 300060, P. R. China
- Meng Yang
- Department of Epidemiology and Biostatistics, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, National Clinical Research Center for Cancer, Tianjin Cancer Institute and Hospital, Tianjin Medical University, Tianjin 300060, P. R. China
- Shuhao Wang
- Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, P. R. China
- Xiangchun Li
- Department of Epidemiology and Biostatistics, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, National Clinical Research Center for Cancer, Tianjin Cancer Institute and Hospital, Tianjin Medical University, Tianjin 300060, P. R. China
- Yan Sun
- Department of Pathology, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, National Clinical Research Center for Cancer, Tianjin Cancer Institute and Hospital, Tianjin Medical University, Tianjin 300060, P. R. China
26
Binol H, Plotner A, Sopkovich J, Kaffenberger B, Niazi MKK, Gurcan MN. Ros-NET: A deep convolutional neural network for automatic identification of rosacea lesions. Skin Res Technol 2019; 26:413-421. [PMID: 31849118] [DOI: 10.1111/srt.12817] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Received: 09/05/2019] [Accepted: 11/09/2019] [Indexed: 12/27/2022]
Abstract
BACKGROUND Rosacea is one of the most common cutaneous disorders, characterized primarily by facial flushing, erythema, papules, pustules, telangiectases, and nasal swelling. Diagnosis of rosacea is made principally by physical examination and a consistent patient history. However, qualitative human assessment is often subjective and suffers from relatively high intra- and inter-observer variability in evaluating patient outcomes. MATERIALS AND METHODS To overcome these problems, we propose a quantitative and reproducible computer-aided diagnosis system, Ros-NET, which integrates information from different image scales and resolutions in order to identify rosacea lesions. This involves adaptation of Inception-ResNet-v2 and ResNet-101 to extract rosacea features from facial images. Additionally, we propose to refine the detection results by means of facial-landmark-based zones (ie, anthropometric landmarks) as regions of interest (ROIs), which focus on typical areas of rosacea occurrence on the face. RESULTS Using a leave-one-patient-out cross-validation scheme, the weighted average Dice coefficients across all patients (N = 41) with 256 × 256 image patches were 89.8 ± 2.6% and 87.8 ± 2.4% with Inception-ResNet-v2 and ResNet-101, respectively. CONCLUSION The findings from this study support that pre-trained networks adapted via transfer learning can be beneficial in identifying rosacea lesions. Our future work will involve expanding to a larger database of cases with varying degrees of disease characteristics.
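The Dice coefficients quoted in the results measure overlap between a predicted lesion mask and a reference annotation; for binary masks the computation reduces to a few lines (a generic sketch assuming 0/1 NumPy arrays, not the paper's exact weighted-averaging code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|).
    eps guards against division by zero when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

A Dice of 1.0 means perfect overlap and 0.0 means none, so the ~0.88-0.90 averages reported correspond to strong but imperfect agreement with the reference masks.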
Affiliation(s)
- Hamidullah Binol
- Center for Biomedical Informatics, Wake Forest School of Medicine, Winston-Salem, NC, USA
- Alisha Plotner
- Department of Dermatology, The Ohio State University, Columbus, OH, USA
- Metin N Gurcan
- Center for Biomedical Informatics, Wake Forest School of Medicine, Winston-Salem, NC, USA
27
Tavolara TE, Niazi MKK, Arole V, Chen W, Frankel W, Gurcan MN. A modular cGAN classification framework: Application to colorectal tumor detection. Sci Rep 2019; 9:18969. [PMID: 31831792] [PMCID: PMC6908583] [DOI: 10.1038/s41598-019-55257-w] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Received: 06/19/2019] [Accepted: 11/11/2019] [Indexed: 01/24/2023]
Abstract
Automatic identification of tissue structures in the analysis of digital tissue biopsies remains an ongoing problem in digital pathology. Common barriers include lack of reliable ground truth due to inter- and intra-reader variability, class imbalances, and inflexibility of discriminative models. To overcome these barriers, we are developing a framework that benefits from a reliable immunohistochemistry ground truth during labeling, overcomes class imbalances through single-task learning, and accommodates any number of classes through a minimally supervised, modular model-per-class paradigm. This study explores an initial application of this framework, based on conditional generative adversarial networks, to automatically identify tumor from non-tumor regions in colorectal H&E slides. The average precision, sensitivity, and F1 score during validation were 95.13 ± 4.44%, 93.05 ± 3.46%, and 94.02 ± 3.23%, respectively; for an external test dataset they were 98.75 ± 2.43%, 88.53 ± 5.39%, and 93.31 ± 3.07%. With accurate identification of tumor regions, we plan to further develop our framework to establish a tumor front, from which tumor buds can be detected in a restricted region. This model will be integrated into a larger system which will quantitatively determine the prognostic significance of tumor budding.
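The precision, sensitivity, and F1 figures reported above are standard confusion-matrix summaries. A small helper shows how the three relate (the counts in the test are hypothetical, not the study's data):

```python
def classification_metrics(tp, fp, fn):
    """Precision, sensitivity (recall), and F1 from confusion-matrix
    counts of true positives, false positives, and false negatives."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    denom = precision + sensitivity
    # F1 is the harmonic mean of precision and sensitivity
    f1 = 2 * precision * sensitivity / denom if denom else 0.0
    return precision, sensitivity, f1
```

The external-test pattern above, higher precision but lower sensitivity than validation, is typical when a model becomes more conservative on out-of-distribution data: fewer false positives at the cost of more missed tumor regions.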
Affiliation(s)
- Thomas E Tavolara
- Center for Biomedical Informatics, Wake Forest School of Medicine, Winston-Salem, USA
- M Khalid Khan Niazi
- Center for Biomedical Informatics, Wake Forest School of Medicine, Winston-Salem, USA
- Vidya Arole
- Department of Pathology, The Ohio State University, Columbus, USA
- Wei Chen
- Department of Pathology, The Ohio State University, Columbus, USA
- Wendy Frankel
- Department of Pathology, The Ohio State University, Columbus, USA
- Metin N Gurcan
- Center for Biomedical Informatics, Wake Forest School of Medicine, Winston-Salem, USA
28
Xu J, Jing M, Wang S, Yang C, Chen X. A review of medical image detection for cancers in digestive system based on artificial intelligence. Expert Rev Med Devices 2019; 16:877-889. [PMID: 31530047] [DOI: 10.1080/17434440.2019.1669447] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Indexed: 12/24/2022]
Abstract
Introduction: At present, cancer imaging examination relies mainly on manual reading by doctors, which demands a high standard of professional skill, clinical experience, and concentration. However, the increasing amount of medical imaging data has brought more and more challenges to radiologists. The detection of digestive system cancer (DSC) based on artificial intelligence (AI) can provide a solution for automatic analysis of medical images and assist doctors in achieving high-precision intelligent diagnosis of cancers. Areas covered: The main goal of this paper is to introduce the main research methods of AI-based detection of DSC and provide a relevant reference for researchers. Meanwhile, it summarizes the main problems existing in these methods and offers guidance for future research. Expert commentary: The automatic classification, recognition, and segmentation of DSC can be realized through machine learning and deep learning methods, which can mine information within images that is difficult for humans to discern. In the diagnosis of DSC, using AI to assist radiologists can achieve rapid and effective cancer detection and save doctors' diagnosis time. These advances can lay the foundation for better clinical diagnosis, treatment planning, and accurate quantitative evaluation of DSC.
Affiliation(s)
- Jiangchang Xu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Mengjie Jing
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Shiming Wang
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Cuiping Yang
- Department of Gastroenterology, Ruijin North Hospital of Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
29
Niazi MKK, Parwani AV, Gurcan MN. Digital pathology and artificial intelligence. Lancet Oncol 2019; 20:e253-e261. [PMID: 31044723] [PMCID: PMC8711251] [DOI: 10.1016/s1470-2045(19)30154-8] [Citation(s) in RCA: 581] [Impact Index Per Article: 96.8] [Received: 01/29/2019] [Revised: 02/28/2019] [Accepted: 03/13/2019] [Indexed: 02/06/2023]
Abstract
In modern clinical practice, digital pathology has a crucial role and is increasingly a technological requirement in the scientific laboratory environment. The advent of whole-slide imaging, availability of faster networks, and cheaper storage solutions have made it easier for pathologists to manage digital slide images and share them for clinical use. In parallel, unprecedented advances in machine learning have enabled the synergy of artificial intelligence and digital pathology, which offers image-based diagnosis possibilities that were once limited to radiology and cardiology. Integration of digital slides into the pathology workflow, advanced algorithms, and computer-aided diagnostic techniques extend the frontiers of the pathologist's view beyond a microscopic slide and enable true utilisation and integration of knowledge beyond human limits. We believe there is clear potential for artificial intelligence breakthroughs in the pathology setting. In this Review, we discuss advancements in digital slide-based image diagnosis for cancer, along with challenges and opportunities for artificial intelligence in digital pathology.
Affiliation(s)
- Anil V Parwani
- Department of Pathology, The Ohio State University, Columbus, OH, USA
- Metin N Gurcan
- Center for Biomedical Informatics, Wake Forest School of Medicine, Winston-Salem, NC, USA