1. Hölscher DL, Bülow RD. Decoding pathology: the role of computational pathology in research and diagnostics. Pflugers Arch 2025;477:555-570. PMID: 39095655; PMCID: PMC11958429; DOI: 10.1007/s00424-024-03002-2. Received 2024-04-18; revised 2024-07-25; accepted 2024-07-25.
Abstract
Traditional histopathology, characterized by manual quantification and assessment, faces challenges such as low throughput and inter-observer variability that hinder the introduction of precision medicine in pathology diagnostics and research. The advent of digital pathology enabled computational pathology, a discipline that leverages computational methods, especially deep learning (DL) techniques, to analyze histopathology specimens. A growing body of research shows impressive performance of DL-based models in pathology across a multitude of tasks, such as mutation prediction, large-scale pathomics analyses, and prognosis prediction. Newer approaches integrate multimodal data sources and increasingly rely on multi-purpose foundation models. This review provides an introductory overview of advancements in computational pathology and discusses their implications for the future of histopathology in research and diagnostics.
Affiliation(s)
- David L Hölscher
- Department for Nephrology and Clinical Immunology, RWTH Aachen University Hospital, Pauwelsstraße 30, 52074, Aachen, Germany
- Institute for Pathology, RWTH Aachen University Hospital, Pauwelsstraße 30, 52074, Aachen, Germany
- Roman D Bülow
- Institute for Pathology, RWTH Aachen University Hospital, Pauwelsstraße 30, 52074, Aachen, Germany.
2. Maity A, Maidantchik VD, Weidenfeld K, Larisch S, Barkan D, Haick H. Chemical Tomography of Cancer Organoids and Cyto-Proteo-Genomic Development Stages Through Chemical Communication Signals. Adv Mater 2025;37:e2413017. PMID: 39935131; PMCID: PMC11938034; DOI: 10.1002/adma.202413017. Received 2024-08-31; revised 2024-12-13.
Abstract
Organoids mimic human organ function, offering insights into development and disease. However, non-destructive, real-time monitoring is lacking, as traditional methods are often costly, destructive, and low-throughput. In this article, a non-destructive chemical tomographic strategy is presented for decoding the cyto-proteo-genomics of organoids using volatile signaling molecules, namely volatile organic compounds (VOCs), as indicators of organoid metabolic activity and development. Combining a hierarchical design of graphene-based sensor arrays with AI-driven analysis, this method maps the spatiotemporal distribution of VOCs and generates detailed digital profiles of organoid morphology and proteo-genomic features. Lens- and label-free, it avoids phototoxicity, distortion, and environmental disruption. Results from testing organoids with the reported chemical tomography approach demonstrate effective differentiation between the cyto-proteo-genomic profiles of normal and diseased states, particularly during dynamic transitions such as the epithelial-mesenchymal transition (EMT). Additionally, the reported approach identifies key VOC-related biochemical pathways, metabolic markers, and pathways associated with cancerous transformation, such as aromatic acid degradation and lipid metabolism. This real-time, non-destructive approach captures subtle genetic and structural variations with high sensitivity and specificity, providing a robust platform for multi-omics integration and advancing cancer biomarker discovery.
Affiliation(s)
- Arnab Maity
- Department of Chemical Engineering and Russell Berrie Nanotechnology Institute, Technion – Israel Institute of Technology, Haifa 3200003, Israel
- Vivian Darsa Maidantchik
- Department of Chemical Engineering and Russell Berrie Nanotechnology Institute, Technion – Israel Institute of Technology, Haifa 3200003, Israel
- Keren Weidenfeld
- Department of Human Biology and Medical Sciences, University of Haifa, Haifa 3498838, Israel
- Sarit Larisch
- Department of Human Biology and Medical Sciences, University of Haifa, Haifa 3498838, Israel
- Dalit Barkan
- Department of Human Biology and Medical Sciences, University of Haifa, Haifa 3498838, Israel
- Hossam Haick
- Department of Chemical Engineering and Russell Berrie Nanotechnology Institute, Technion – Israel Institute of Technology, Haifa 3200003, Israel
- Life Science Technology (LiST) Group, Danube Private University, Fakultät Medizin/Zahnmedizin, Steiner Landstraße 124, Krems-Stein 3500, Austria
3. Zhou H, Zhou F, Chen H. Cohort-Individual Cooperative Learning for Multimodal Cancer Survival Analysis. IEEE Trans Med Imaging 2025;44:656-667. PMID: 39240739; DOI: 10.1109/tmi.2024.3455931.
Abstract
Recently, we have witnessed impressive achievements in cancer survival analysis by integrating multimodal data, e.g., pathology images and genomic profiles. However, the heterogeneity and high dimensionality of these modalities pose significant challenges in extracting discriminative representations while maintaining good generalization. In this paper, we propose a Cohort-individual Cooperative Learning (CCL) framework to advance cancer survival analysis by combining knowledge decomposition with cohort guidance. Specifically, we first propose a Multimodal Knowledge Decomposition (MKD) module to explicitly decompose multimodal knowledge into four distinct components: the redundancy, the synergy, and the uniqueness of each of the two modalities. Such a comprehensive decomposition helps the model perceive easily overlooked yet important information, facilitating effective multimodal fusion. Second, we propose Cohort Guidance Modeling (CGM) to mitigate the risk of overfitting to task-irrelevant information. It promotes a more comprehensive and robust understanding of the underlying multimodal data while avoiding the pitfalls of overfitting and enhancing the generalization ability of the model. By combining the knowledge decomposition and cohort guidance methods, we develop a robust multimodal survival analysis model with enhanced discrimination and generalization abilities. Extensive experimental results on five cancer datasets demonstrate the effectiveness of our model in integrating multimodal data for survival analysis. Our code is available at https://github.com/moothes/CCL-survival.
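As a rough illustration of the decomposition idea, the sketch below splits two modality feature vectors into a shared ("redundant") component plus one residual per modality. The function names and the simple averaging rule are assumptions for demonstration only; the paper's MKD module is a learned neural component, not this arithmetic.

```python
# Toy decomposition of two modality embeddings into a shared component
# and per-modality unique residuals. Illustrative only, NOT the MKD module.

def decompose(path_vec, gene_vec):
    """Split two equal-length feature vectors into shared + unique parts."""
    shared = [(p + g) / 2 for p, g in zip(path_vec, gene_vec)]  # redundancy
    unique_path = [p - s for p, s in zip(path_vec, shared)]     # pathology-only signal
    unique_gene = [g - s for g, s in zip(gene_vec, shared)]     # genomics-only signal
    return shared, unique_path, unique_gene

def fuse(shared, unique_path, unique_gene):
    """Concatenate components for a downstream head; a learned model
    would instead weight each component's contribution."""
    return shared + unique_path + unique_gene

path_vec = [0.9, 0.1, 0.4]
gene_vec = [0.5, 0.3, 0.4]
shared, u_p, u_g = decompose(path_vec, gene_vec)
# Under this toy rule the residuals are symmetric: u_p == [-x for x in u_g].
```

The point of the exercise is the one the abstract makes: once the shared and unique parts are separated, a fusion step can attend to easily overlooked modality-specific signal instead of double-counting the redundant part.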
4. Borazjani K, Khosravan N, Ying L, Hosseinalipour S. Multi-Modal Federated Learning for Cancer Staging Over Non-IID Datasets With Unbalanced Modalities. IEEE Trans Med Imaging 2025;44:556-573. PMID: 39196746; DOI: 10.1109/tmi.2024.3450855.
Abstract
The use of machine learning (ML) for cancer staging through medical image analysis has gained substantial interest across medical disciplines. When accompanied by the federated learning (FL) framework, ML techniques can further overcome privacy concerns related to patient data exposure. Given the frequent presence of diverse data modalities within patient records, leveraging FL in a multi-modal learning framework holds considerable promise for cancer staging. However, existing works on multi-modal FL often presume that all data-collecting institutions have access to all data modalities. This oversimplified assumption neglects institutions that have access to only a portion of the data modalities within the system. In this work, we introduce a novel FL architecture designed to accommodate not only the heterogeneity of data samples but also the inherent heterogeneity/non-uniformity of data modalities across institutions. We shed light on the challenges associated with the varying convergence speeds observed across different data modalities within our FL system. Subsequently, we propose a solution to tackle these challenges by devising a distributed gradient blending and proximity-aware client weighting strategy tailored for multi-modal FL. To show the superiority of our method, we conduct experiments on The Cancer Genome Atlas (TCGA) data lake, considering different cancer types and three data modalities: mRNA sequences, histopathological images, and clinical information. Our results further unveil the impact and severity of class-based vs. type-based heterogeneity across institutions on model performance, which broadens the notion of data heterogeneity in the multi-modal FL literature.
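The aggregation idea can be sketched as a weighted federated average in which a client's weight shrinks with its parameter distance from the current global model. The softmax-over-distance rule, the `temperature` parameter, and all names below are illustrative assumptions, not the paper's exact gradient-blending scheme.

```python
# Illustrative proximity-aware federated averaging: clients whose updates
# drift far from the global model are down-weighted. Toy sketch only.
import math

def proximity_weights(global_params, client_params_list, temperature=1.0):
    """Softmax over negative client-to-global L2 distances."""
    dists = [math.sqrt(sum((c - g) ** 2 for c, g in zip(client, global_params)))
             for client in client_params_list]
    exps = [math.exp(-d / temperature) for d in dists]
    total = sum(exps)
    return [e / total for e in exps]

def aggregate(global_params, client_params_list):
    """Weighted average of client parameter vectors."""
    w = proximity_weights(global_params, client_params_list)
    return [sum(wi * client[j] for wi, client in zip(w, client_params_list))
            for j in range(len(global_params))]

global_params = [0.0, 0.0]
clients = [[0.1, 0.1], [0.2, 0.0], [5.0, 5.0]]  # third client is an outlier
new_global = aggregate(global_params, clients)
```

In a modality-unbalanced setting, a client training on a subset of modalities tends to converge at a different speed and drift differently, which is exactly the situation such distance-aware weighting is meant to temper.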
5. Elforaici MEA, Montagnon E, Romero FP, Le WT, Azzi F, Trudel D, Nguyen B, Turcotte S, Tang A, Kadoury S. Semi-supervised ViT knowledge distillation network with style transfer normalization for colorectal liver metastases survival prediction. Med Image Anal 2025;99:103346. PMID: 39423564; DOI: 10.1016/j.media.2024.103346. Received 2023-05-31; revised 2024-09-05; accepted 2024-09-10.
Abstract
Colorectal liver metastases (CLM) affect almost half of all colon cancer patients, and the response to systemic chemotherapy plays a crucial role in patient survival. While oncologists typically use tumor grading scores, such as the tumor regression grade (TRG), to establish an accurate prognosis of patient outcomes, including overall survival (OS) and time-to-recurrence (TTR), these traditional methods have several limitations: they are subjective, time-consuming, and require extensive expertise, which limits their scalability and reliability. Additionally, existing machine learning approaches for prognosis prediction mostly rely on radiological imaging data, but histological images have recently been shown to be relevant for survival prediction because they fully capture the complex microenvironmental and cellular characteristics of the tumor. To address these limitations, we propose an end-to-end approach for automated prognosis prediction using histology slides stained with Hematoxylin and Eosin (H&E) and Hematoxylin Phloxine Saffron (HPS). We first employ a Generative Adversarial Network (GAN) for slide normalization to reduce staining variations and improve the overall quality of the images that are used as input to our prediction pipeline. We then propose a semi-supervised model to perform tissue classification from sparse annotations, producing segmentation and feature maps. Specifically, we use an attention-based approach that weighs the importance of different slide regions in producing the final classification results. Finally, we exploit the features extracted from the metastatic nodules and surrounding tissue to train a prognosis model. In parallel, we train a vision transformer model in a knowledge distillation framework to replicate and enhance the performance of the prognosis prediction.
We evaluate our approach on an in-house clinical dataset of 258 CLM patients, achieving superior performance compared to other comparative models with a c-index of 0.804 (0.014) for OS and 0.735 (0.016) for TTR, as well as on two public datasets. The proposed approach achieves an accuracy of 86.9% to 90.3% in predicting TRG dichotomization. For the 3-class TRG classification task, the proposed approach yields an accuracy of 78.5% to 82.1%, outperforming the comparative methods. Our proposed pipeline can provide automated prognosis for pathologists and oncologists, and can greatly promote precision medicine progress in managing CLM patients.
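The c-index figures quoted above are concordance statistics for survival models; a minimal stdlib sketch of Harrell's concordance index follows. This is the standard metric definition, not the authors' evaluation code, and it handles ties in predicted risk only crudely (half-credit) while ignoring tied event times.

```python
# Minimal Harrell's concordance index for right-censored survival data.
# A pair (i, j) is comparable when i's event time is observed and strictly
# earlier than j's time; it is concordant when i also has the higher risk.

def concordance_index(times, events, risks):
    """times: survival/censoring times; events: 1=event, 0=censored;
    risks: higher value means predicted worse outcome."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        if events[i] != 1:
            continue  # censored subjects cannot anchor a comparable pair
        for j in range(n):
            if times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5  # tied risks count as half
    return concordant / comparable

times  = [2, 4, 6, 8]
events = [1, 1, 0, 1]
risks  = [0.9, 0.7, 0.4, 0.1]  # perfectly anti-ordered with survival time
# a perfect ranking yields a c-index of 1.0; random risks hover around 0.5
```

A c-index of 0.804 for OS, as reported above, therefore means that in roughly 80% of comparable patient pairs the model assigned the higher risk to the patient who died earlier.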
Affiliation(s)
- Mohamed El Amine Elforaici
- MedICAL Laboratory, Polytechnique Montréal, Montreal, Canada; Centre de recherche du CHUM (CRCHUM), Montreal, Canada
- Francisco Perdigón Romero
- MedICAL Laboratory, Polytechnique Montréal, Montreal, Canada; Centre de recherche du CHUM (CRCHUM), Montreal, Canada
- William Trung Le
- MedICAL Laboratory, Polytechnique Montréal, Montreal, Canada; Centre de recherche du CHUM (CRCHUM), Montreal, Canada
- Feryel Azzi
- Centre de recherche du CHUM (CRCHUM), Montreal, Canada
- Dominique Trudel
- Centre de recherche du CHUM (CRCHUM), Montreal, Canada; Université de Montréal, Montreal, Canada
- Simon Turcotte
- Centre de recherche du CHUM (CRCHUM), Montreal, Canada; Department of Surgery, Université de Montréal, Montreal, Canada
- An Tang
- Centre de recherche du CHUM (CRCHUM), Montreal, Canada; Department of Radiology, Radiation Oncology and Nuclear Medicine, Université de Montréal, Montreal, Canada
- Samuel Kadoury
- MedICAL Laboratory, Polytechnique Montréal, Montreal, Canada; Centre de recherche du CHUM (CRCHUM), Montreal, Canada; Université de Montréal, Montreal, Canada
6. Tafavvoghi M, Bongo LA, Shvetsov N, Busund LTR, Møllersen K. Publicly available datasets of breast histopathology H&E whole-slide images: A scoping review. J Pathol Inform 2024;15:100363. PMID: 38405160; PMCID: PMC10884505; DOI: 10.1016/j.jpi.2024.100363. Received 2023-09-14; revised 2023-11-24; accepted 2024-01-23.
Abstract
Advancements in digital pathology and computing resources have made a significant impact in the field of computational pathology for breast cancer diagnosis and treatment. However, access to high-quality labeled histopathological images of breast cancer remains a major challenge that limits the development of accurate and robust deep learning models. In this scoping review, we identified the publicly available datasets of breast H&E-stained whole-slide images (WSIs) that can be used to develop deep learning algorithms. We systematically searched 9 scientific literature databases and 9 research data repositories and found 17 publicly available datasets containing 10,385 H&E WSIs of breast cancer. Moreover, we reported image metadata and characteristics for each dataset to assist researchers in selecting proper datasets for specific tasks in breast cancer computational pathology. In addition, we compiled 2 lists of breast H&E patch datasets and private datasets as supplementary resources for researchers. Notably, only 28% of the included articles utilized multiple datasets, and only 14% used an external validation set, suggesting that the performance of the other developed models may be susceptible to overestimation. The TCGA-BRCA dataset was used in 52% of the selected studies; it has a considerable selection bias that can impact the robustness and generalizability of trained algorithms. There is also a lack of consistent metadata reporting for breast WSI datasets, which hampers the development of accurate deep learning models and indicates the need for explicit guidelines for documenting breast WSI dataset characteristics and metadata.
Affiliation(s)
- Masoud Tafavvoghi
- Department of Community Medicine, UiT The Arctic University of Norway, Tromsø, Norway
- Lars Ailo Bongo
- Department of Computer Science, UiT The Arctic University of Norway, Tromsø, Norway
- Nikita Shvetsov
- Department of Computer Science, UiT The Arctic University of Norway, Tromsø, Norway
- Kajsa Møllersen
- Department of Community Medicine, UiT The Arctic University of Norway, Tromsø, Norway
7. Oghbaie M, Araújo T, Schmidt-Erfurth U, Bogunović H. VLFATRollout: Fully transformer-based classifier for retinal OCT volumes. Comput Med Imaging Graph 2024;118:102452. PMID: 39489098; DOI: 10.1016/j.compmedimag.2024.102452. Received 2024-06-28; revised 2024-09-20; accepted 2024-10-12.
Abstract
BACKGROUND AND OBJECTIVE Despite the promising capabilities of 3D transformer architectures in video analysis, their application to high-resolution 3D medical volumes encounters several challenges. One major limitation is the high number of 3D patches, which reduces the efficiency of the global self-attention mechanism of transformers. Additionally, background information can distract vision transformers from focusing on crucial areas of the input image, thereby introducing noise into the final representation. Moreover, the variability in the number of slices per volume complicates the development of models capable of processing input volumes of any resolution, while simple solutions such as subsampling risk losing essential diagnostic details. METHODS To address these challenges, we introduce an end-to-end transformer-based framework, Variable Length Feature Aggregator Transformer Rollout (VLFATRollout), to classify volumetric data. The proposed VLFATRollout has several merits. First, it can effectively mine slice-level fore-background information with the help of the transformer's attention matrices. Second, randomizing the volume-wise resolution (i.e., the number of slices) during training enhances the learning capacity of the learnable positional embedding (PE) assigned to each volume slice. This technique allows the PEs to generalize across neighboring slices, facilitating the handling of high-resolution volumes at test time. RESULTS VLFATRollout was thoroughly tested on the retinal optical coherence tomography (OCT) volume classification task, demonstrating a notable average improvement of 5.47% in balanced accuracy over the leading convolutional models on a 5-class diagnostic task. These results emphasize the effectiveness of our framework in enhancing slice-level representation and its adaptability across different volume resolutions, paving the way for advanced transformer applications in medical image analysis.
The code is available at https://github.com/marziehoghbaie/VLFATRollout/.
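The "rollout" in the framework's name refers to attention rollout, a generic way of tracing attention through a transformer by multiplying per-layer attention matrices with an identity term added for the residual stream. The toy sketch below shows only that generic computation on a 2-token, 2-layer example, not the paper's slice-aggregation pipeline.

```python
# Generic attention rollout: add the identity to each row-stochastic
# attention map (residual connections), renormalize, and multiply
# through the layers. Toy 2-token, 2-layer example.

def matmul(a, b):
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def rollout(attentions):
    """attentions: list of row-stochastic per-layer attention matrices."""
    n = len(attentions[0])
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for att in attentions:
        # average with identity to model the residual path, keeping rows stochastic
        mixed = [[(att[i][j] + (1.0 if i == j else 0.0)) / 2.0
                  for j in range(n)] for i in range(n)]
        result = matmul(mixed, result)
    return result

layer1 = [[0.9, 0.1], [0.2, 0.8]]
layer2 = [[0.6, 0.4], [0.5, 0.5]]
R = rollout([layer1, layer2])
# each row of R remains a valid attribution distribution (sums to 1)
```

Because each factor is row-stochastic, the rolled-out matrix stays row-stochastic, which is what lets it be read as "how much each input slice contributed" when mining slice-level fore-background information.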
Affiliation(s)
- Marzieh Oghbaie
- Christian Doppler Laboratory for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria; Institute of Artificial Intelligence, Center for Medical Data Science, Medical University of Vienna, Austria
- Teresa Araújo
- Christian Doppler Laboratory for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria; Institute of Artificial Intelligence, Center for Medical Data Science, Medical University of Vienna, Austria
- Hrvoje Bogunović
- Christian Doppler Laboratory for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria; Institute of Artificial Intelligence, Center for Medical Data Science, Medical University of Vienna, Austria
8. Jiang S, Hondelink L, Suriawinata AA, Hassanpour S. Masked pre-training of transformers for histology image analysis. J Pathol Inform 2024;15:100386. PMID: 39006998; PMCID: PMC11246055; DOI: 10.1016/j.jpi.2024.100386. Received 2024-01-31; revised 2024-04-02; accepted 2024-05-28.
Abstract
In digital pathology, whole-slide images (WSIs) are widely used for applications such as cancer diagnosis and prognosis prediction. Vision transformer (ViT) models have recently emerged as a promising method for encoding large regions of WSIs while preserving spatial relationships among patches. However, due to the large number of model parameters and limited labeled data, applying transformer models to WSIs remains challenging. In this study, we propose a pretext task to train the transformer model in a self-supervised manner. Our model, MaskHIT, uses the transformer output to reconstruct masked patches, measured by a contrastive loss. We pre-trained the MaskHIT model using over 7000 WSIs from TCGA and extensively evaluated its performance in multiple experiments, covering survival prediction, cancer subtype classification, and grade prediction tasks. Our experiments demonstrate that the pre-training procedure enables context-aware understanding of WSIs, facilitates the learning of representative histological features based on patch positions and visual patterns, and is essential for the ViT model to achieve optimal results on WSI-level tasks. The pre-trained MaskHIT surpasses various multiple instance learning approaches by 3% and 2% on the survival prediction and cancer subtype classification tasks, respectively, and also outperforms recent state-of-the-art transformer-based methods. Finally, a comparison between the attention maps generated by the MaskHIT model and pathologists' annotations indicates that the model can accurately identify clinically relevant histological structures on the whole slide for each task.
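The pretext task amounts to hiding a random subset of patch embeddings and asking the transformer to reconstruct them. The stdlib sketch below shows only the masking step; the mask ratio, the zero mask token, and all names are illustrative assumptions, not MaskHIT's actual configuration (where the mask embedding is learned and reconstruction is scored by a contrastive loss).

```python
# Toy masking step for masked patch pre-training: replace a random subset
# of patch embeddings with a [MASK] stand-in and record which positions
# the model must reconstruct. Illustrative only.
import random

MASK_TOKEN = [0.0, 0.0, 0.0, 0.0]  # stand-in for a learned mask embedding

def mask_patches(patches, mask_ratio=0.5, seed=0):
    rng = random.Random(seed)
    n_mask = int(len(patches) * mask_ratio)
    masked_idx = sorted(rng.sample(range(len(patches)), n_mask))
    corrupted = [MASK_TOKEN if i in masked_idx else p
                 for i, p in enumerate(patches)]
    targets = [patches[i] for i in masked_idx]  # what must be recovered
    return corrupted, masked_idx, targets

patches = [[float(i)] * 4 for i in range(8)]   # 8 fake patch embeddings
corrupted, idx, targets = mask_patches(patches)
```

The transformer then sees `corrupted` (with positional embeddings intact), so the only way to reconstruct the targets is to exploit the surrounding patches, which is what forces the context-aware representations described above.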
Affiliation(s)
- Shuai Jiang
- Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, NH 03755, USA
- Liesbeth Hondelink
- Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, NH 03755, USA
- Arief A. Suriawinata
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, NH 03756, USA
- Saeed Hassanpour
- Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, NH 03755, USA
- Department of Epidemiology, Geisel School of Medicine at Dartmouth and the Department of Computer Science, Dartmouth College, Hanover, NH 03755, USA
9. Raza M, Awan R, Bashir RMS, Qaiser T, Rajpoot NM. Dual attention model with reinforcement learning for classification of histology whole-slide images. Comput Med Imaging Graph 2024;118:102466. PMID: 39579453; DOI: 10.1016/j.compmedimag.2024.102466. Received 2024-07-28; revised 2024-11-05; accepted 2024-11-05.
Abstract
Digital whole slide images (WSIs) are generally captured at microscopic resolution and encompass extensive spatial data (several billion pixels per image). Directly feeding these images to deep learning models is computationally intractable due to memory constraints, while downsampling the WSIs risks incurring information loss. Alternatively, splitting the WSIs into smaller patches (or tiles) may result in a loss of important contextual information. In this paper, we propose a novel dual attention approach consisting of two main components, both inspired by the visual examination process of a pathologist: the first, a soft attention model, processes a low magnification view of the WSI to identify relevant regions of interest (ROIs), followed by a custom sampling method to extract diverse and spatially distinct image tiles from the selected ROIs. The second component, a hard attention classification model, further extracts a sequence of multi-resolution glimpses from each tile for classification. Since hard attention is non-differentiable, we train this component using reinforcement learning to predict the locations of the glimpses. This approach allows the model to focus on essential regions instead of processing the entire tile, thereby aligning with a pathologist's way of diagnosis. The two components are trained in an end-to-end fashion using a joint loss function to demonstrate the efficacy of the model. The proposed model was evaluated on two WSI-level classification problems: human epidermal growth factor receptor 2 (HER2) scoring on breast cancer histology images, and prediction of Intact/Loss status of two mismatch repair (MMR) biomarkers from colorectal cancer histology images. We show that the proposed model achieves performance better than or comparable to the state-of-the-art methods while processing less than 10% of the WSI at the highest magnification and reducing the time required to infer the WSI-level label by more than 75%.
The code is available at github.
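Choosing a glimpse location is a discrete, non-differentiable action, which is why it is trained with a policy gradient such as REINFORCE. The toy stdlib sketch below shows that generic idea only: a softmax policy over four candidate glimpse positions, rewarded when the chosen position is the informative one. Everything here (the bandit setup, learning rate, step count) is an assumption for illustration, not the paper's model.

```python
# Toy REINFORCE: learn a softmax policy over 4 candidate glimpse positions,
# where only one position yields a classification reward. Illustrates the
# policy-gradient idea behind hard attention; not the paper's architecture.
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def train_glimpse_policy(informative=2, steps=500, lr=0.5, seed=0):
    rng = random.Random(seed)
    logits = [0.0, 0.0, 0.0, 0.0]
    for _ in range(steps):
        probs = softmax(logits)
        action = rng.choices(range(4), weights=probs)[0]  # sample a glimpse
        reward = 1.0 if action == informative else 0.0
        # REINFORCE: grad of log pi(action) wrt logits is one_hot(action) - probs
        for k in range(4):
            grad = (1.0 if k == action else 0.0) - probs[k]
            logits[k] += lr * reward * grad
    return softmax(logits)

probs = train_glimpse_policy()
# after training, the policy concentrates on the informative position
```

In the full model the "reward" is tied to the classification objective through a joint loss, so the glimpse policy learns to land on tile regions that actually change the prediction.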
Affiliation(s)
- Manahil Raza
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom
- Ruqayya Awan
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom
- Talha Qaiser
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom
- Nasir M Rajpoot
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom; The Alan Turing Institute, London, United Kingdom
10. Deng B, Tian Y, Zhang Q, Wang Y, Chai Z, Ye Q, Yao S, Liang T, Li J. NecroGlobalGCN: Integrating micronecrosis information in HCC prognosis prediction via graph convolutional neural networks. Comput Methods Programs Biomed 2024;257:108435. PMID: 39357091; DOI: 10.1016/j.cmpb.2024.108435. Received 2024-06-05; revised 2024-09-13; accepted 2024-09-19.
Abstract
BACKGROUND AND OBJECTIVE Hepatocellular carcinoma (HCC) ranks fourth in cancer mortality, underscoring the importance of accurate prognostic predictions for improving postoperative survival rates. Although micronecrosis has been shown to have high prognostic value in HCC, its application in clinical prognosis prediction requires specialized knowledge and complex calculations, which poses challenges for clinicians. A model that helps clinicians make full use of micronecrosis when assessing patient survival would therefore be valuable. METHODS To address these challenges, we propose an HCC prognosis prediction model that integrates pathological micronecrosis information through graph convolutional neural networks (GCNs). This approach enables the GCN to exploit micronecrosis, a feature highly correlated with prognosis, thereby significantly enhancing the quality of prognostic stratification. We developed our model using 3622 slides from 752 patients with primary HCC from the FAH-ZJUMS dataset and conducted internal and external validations on the FAH-ZJUMS and TCGA-LIHC datasets, respectively. RESULTS Our method outperformed the baseline by 8.18% in internal validation and 9.02% in external validation. Overall, this paper presents a deep learning research paradigm that integrates HCC micronecrosis, enhancing both the accuracy and interpretability of prognostic predictions, with potential applicability to other pathological prognostic markers. CONCLUSIONS This study proposes a composite GCN prognostic model that integrates information on HCC micronecrosis, trained on a large dataset of HCC histopathological images. The approach could assist clinicians in analyzing HCC patient survival and in precisely locating and visualizing the necrotic tissue that affects prognosis. Following the research paradigm outlined in this paper, integration models for other prognostic biomarkers could be developed with GCNs, significantly enhancing the predictive performance and interpretability of prognostic models.
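At the core of any GCN-based prognosis model is the graph convolution itself, which mixes each node's features with those of its neighbors. Below is a minimal stdlib sketch of one symmetric-normalized layer (the textbook Kipf-Welling form); the actual NecroGlobalGCN architecture is more involved, and the tissue-patch graph here is a made-up example.

```python
# One graph-convolution layer: H' = D^{-1/2} (A + I) D^{-1/2} H W,
# with self-loops added so each node keeps its own features.
import math

def gcn_layer(adj, feats, weight):
    n = len(adj)
    # add self-loops
    a = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    deg = [sum(row) for row in a]
    # symmetric normalization
    a_norm = [[a[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
              for i in range(n)]
    # neighborhood aggregation: A_norm @ H
    agg = [[sum(a_norm[i][k] * feats[k][j] for k in range(n))
            for j in range(len(feats[0]))] for i in range(n)]
    # linear transform: agg @ W
    return [[sum(agg[i][k] * weight[k][j] for k in range(len(weight)))
             for j in range(len(weight[0]))] for i in range(n)]

# 3 tissue-patch nodes: patches 0 and 1 are adjacent, patch 2 is isolated
adj = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
identity_w = [[1.0, 0.0], [0.0, 1.0]]
out = gcn_layer(adj, feats, identity_w)
# the isolated node's output equals its input; neighbors blend 50/50
```

Stacking such layers is what lets necrotic-region nodes propagate their signal into the surrounding tissue representation before WSI-level pooling for the survival head.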
Affiliation(s)
- Boyang Deng
- Engineering Research Center of EMR and Intelligent Expert System, Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, No. 38 Zheda Road, Hangzhou 310027, China
- Yu Tian
- Engineering Research Center of EMR and Intelligent Expert System, Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, No. 38 Zheda Road, Hangzhou 310027, China
- Qi Zhang
- Department of Hepatobiliary and Pancreatic Surgery, the First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou 310003, China; MOE Joint International Research Laboratory of Pancreatic Diseases, Hangzhou, China; Zhejiang Provincial Key Laboratory of Pancreatic Disease, the First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China; Zhejiang University Cancer Center and Zhejiang Clinical Research Center of Hepatobiliary and Pancreatic Diseases, Hangzhou, China
- Yangyang Wang
- Department of Hepatobiliary and Pancreatic Surgery, the First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou 310003, China; MOE Joint International Research Laboratory of Pancreatic Diseases, Hangzhou, China; Zhejiang Provincial Key Laboratory of Pancreatic Disease, the First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Zhenxin Chai
- Engineering Research Center of EMR and Intelligent Expert System, Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, No. 38 Zheda Road, Hangzhou 310027, China
- Qiancheng Ye
- Engineering Research Center of EMR and Intelligent Expert System, Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, No. 38 Zheda Road, Hangzhou 310027, China
- Shang Yao
- Engineering Research Center of EMR and Intelligent Expert System, Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, No. 38 Zheda Road, Hangzhou 310027, China
- Tingbo Liang
- Department of Hepatobiliary and Pancreatic Surgery, the First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou 310003, China; MOE Joint International Research Laboratory of Pancreatic Diseases, Hangzhou, China; Zhejiang Provincial Key Laboratory of Pancreatic Disease, the First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China; Zhejiang University Cancer Center and Zhejiang Clinical Research Center of Hepatobiliary and Pancreatic Diseases, Hangzhou, China
- Jingsong Li
- Engineering Research Center of EMR and Intelligent Expert System, Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, No. 38 Zheda Road, Hangzhou 310027, China; Research Center for Data Hub and Security, Zhejiang Lab, Hangzhou 311100, China
11. Shen M, Jiang Z. Artificial Intelligence Applications in Lymphoma Diagnosis and Management: Opportunities, Challenges, and Future Directions. J Multidiscip Healthc 2024;17:5329-5339. PMID: 39582879; PMCID: PMC11583773; DOI: 10.2147/jmdh.s485724. Received 2024-07-05; accepted 2024-10-09.
Abstract
Lymphoma, a heterogeneous group of blood cancers, presents significant diagnostic and therapeutic challenges due to its complex subtypes and variable clinical outcomes. Artificial intelligence (AI) has emerged as a promising tool to enhance the accuracy and efficiency of lymphoma pathology. This review explores the potential of AI in lymphoma diagnosis, classification, prognosis prediction, and treatment planning, as well as addressing the challenges and future directions in this rapidly evolving field.
Affiliation(s)
- Miao Shen
- Department of Pathology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou City, Zhejiang Province, 310000, People’s Republic of China
- Department of Pathology, Deqing People’s Hospital, Huzhou City, Zhejiang Province, 313200, People’s Republic of China
| | - Zhinong Jiang
- Department of Pathology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou City, Zhejiang Province, 310000, People’s Republic of China
| |
12
Mezei T, Kolcsár M, Joó A, Gurzu S. Image Analysis in Histopathology and Cytopathology: From Early Days to Current Perspectives. J Imaging 2024; 10:252. [PMID: 39452415 PMCID: PMC11508754 DOI: 10.3390/jimaging10100252] [Received: 09/02/2024] [Revised: 10/03/2024] [Accepted: 10/12/2024] [Indexed: 10/26/2024]
Abstract
Both pathology and cytopathology still rely on recognizing microscopic morphologic features, and image analysis plays a crucial role, enabling the identification, categorization, and characterization of different tissue types, cell populations, and disease states within microscopic images. Historically, manual methods were the primary approach, relying on the expert knowledge and experience of pathologists to interpret microscopic tissue samples. Early image analysis methods were often constrained by limited computational power and the complexity of biological samples. The advent of computers and digital imaging technologies challenged the exclusivity of the human eye and brain in this task, transforming the diagnostic process in these fields. The increasing digitization of pathological images has led to the application of more objective and efficient computer-aided analysis techniques, with significant advances brought about by the integration of digital pathology, machine learning, and advanced imaging technologies. Artificial intelligence has further transformed the field, enabling predictive models that assist in diagnostic decision making. The future of pathology and cytopathology will likely be marked by continued progress in computer-aided image analysis: the increasing availability of digital pathology data should lead to enhanced diagnostic accuracy and improved prognostic predictions that shape personalized treatment strategies, ultimately leading to better patient outcomes.
Affiliation(s)
- Tibor Mezei
- Department of Pathology, George Emil Palade University of Medicine, Pharmacy, Science, and Technology of Targu Mures, 540139 Targu Mures, Romania
| | - Melinda Kolcsár
- Department of Pharmacology and Clinical Pharmacy, George Emil Palade University of Medicine, Pharmacy, Science, and Technology of Targu Mures, 540142 Targu Mures, Romania
| | - András Joó
- Accenture Romania, 540035 Targu Mures, Romania
| | - Simona Gurzu
- Department of Pathology, George Emil Palade University of Medicine, Pharmacy, Science, and Technology of Targu Mures, 540139 Targu Mures, Romania
| |
13
Zhang Z, Yin W, Wang S, Zheng X, Dong S. MBFusion: Multi-modal balanced fusion and multi-task learning for cancer diagnosis and prognosis. Comput Biol Med 2024; 181:109042. [PMID: 39180856 DOI: 10.1016/j.compbiomed.2024.109042] [Received: 01/27/2024] [Revised: 07/11/2024] [Accepted: 08/17/2024] [Indexed: 08/27/2024]
Abstract
Pathological images and molecular omics provide important information for predicting diagnosis and prognosis. These two heterogeneous modalities contain complementary information, and fusing them effectively can better reveal the complex mechanisms of cancer. However, because of their different representation learning methods, the expressive strength of each modality varies greatly across tasks, so many multimodal fusion approaches fail to achieve the best results. In this paper, MBFusion is proposed to support multiple tasks, such as prediction of diagnosis and prognosis, through multi-modal balanced fusion. The MBFusion framework uses two specially constructed graph convolutional networks to extract features from molecular omics data, and uses ResNet to extract features from pathological image data while retaining important deep features through attention and clustering; this effectively improves both feature representations, making their expressive abilities balanced and comparable. The features of the two modalities are then fused through a cross-attention Transformer, and the fused features are used to learn both cancer subtype classification and survival analysis via multi-task learning. MBFusion is compared with other state-of-the-art methods on two public cancer datasets and shows an improvement of up to 10.1% across three evaluation metrics. In an ablation experiment, the contribution of each modality and each framework module to performance is explored. Furthermore, the interpretability of MBFusion is explained in detail to show its application value.
Affiliation(s)
- Ziye Zhang
- Guangdong Provincial Key Laboratory of Multimodal Big Data Intelligent Analysis, School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510641, Guangdong, China
| | - Wendong Yin
- Guangdong Provincial Key Laboratory of Multimodal Big Data Intelligent Analysis, School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510641, Guangdong, China
| | - Shijin Wang
- Guangdong Provincial Key Laboratory of Multimodal Big Data Intelligent Analysis, School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510641, Guangdong, China
| | - Xiaorou Zheng
- Guangdong Provincial Key Laboratory of Multimodal Big Data Intelligent Analysis, School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510641, Guangdong, China
| | - Shoubin Dong
- Guangdong Provincial Key Laboratory of Multimodal Big Data Intelligent Analysis, School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510641, Guangdong, China.
| |
14
Sokouti M, Sokouti B. Cancer genetics and deep learning applications for diagnosis, prognosis, and categorization. J Biol Methods 2024; 11:e99010017. [PMID: 39544183 PMCID: PMC11557296 DOI: 10.14440/jbm.2024.0016] [Received: 06/17/2024] [Accepted: 07/22/2024] [Indexed: 11/17/2024]
Abstract
Gene expression data are used to discover meaningful hidden information in gene datasets. Cancer and other disorders may be diagnosed based on differences in gene expression profiles, and this information can be gleaned by gene sequencing. Thanks to the power of artificial intelligence (AI), healthcare has become a significant user of deep learning (DL) for predicting cancer diseases and categorizing gene expression, and gene expression microarrays have proved effective for these tasks. Gene expression datasets contain only limited samples, whereas the features of cancer are diverse and complex; to overcome this high dimensionality, gene expression datasets must be enhanced. By learning and analyzing features of the input data, it is possible to extract features, as multidimensional arrays, from the data. Synthetic samples are needed to strengthen the range of information. DL strategies may be used when gene expression data are used to diagnose and classify cancer diseases.
Affiliation(s)
- Massoud Sokouti
- Research Center of Evidence-Based Medicine, Tabriz University of Medical Sciences, Tabriz, Iran
- Health Promotion Research Center, Tabriz Medical Sciences, Islamic Azad University, Tabriz, Iran
- Department of Physiology, Faculty of Medicine, Tabriz Medical Sciences, Islamic Azad University, Tabriz, Iran
| | - Babak Sokouti
- Biotechnology Research Center, Tabriz University of Medical Sciences, Tabriz, Iran
| |
15
Parvaiz A, Nasir ES, Fraz MM. From Pixels to Prognosis: A Survey on AI-Driven Cancer Patient Survival Prediction Using Digital Histology Images. Journal of Imaging Informatics in Medicine 2024; 37:1728-1751. [PMID: 38429563 PMCID: PMC11300721 DOI: 10.1007/s10278-024-01049-2] [Received: 09/14/2023] [Revised: 11/30/2023] [Accepted: 12/20/2023] [Indexed: 03/03/2024]
Abstract
Survival analysis is an integral part of medical statistics that is extensively utilized to establish prognostic indices for mortality or disease recurrence, assess treatment efficacy, and tailor effective treatment plans. The identification of prognostic biomarkers capable of predicting patient survival is a primary objective in the field of cancer research. With the recent integration of digital histology images into routine clinical practice, a plethora of Artificial Intelligence (AI)-based methods for digital pathology has emerged in scholarly literature, facilitating patient survival prediction. These methods have demonstrated remarkable proficiency in analyzing and interpreting whole slide images, yielding results comparable to those of expert pathologists. The complexity of AI-driven techniques is magnified by the distinctive characteristics of digital histology images, including their gigapixel size and diverse tissue appearances. Consequently, advanced patch-based methods are employed to effectively extract features that correlate with patient survival. These computational methods significantly enhance survival prediction accuracy and augment prognostic capabilities in cancer patients. The review discusses the methodologies employed in the literature, their performance metrics, ongoing challenges, and potential solutions for future advancements. This paper explains survival analysis and feature extraction methods for analyzing cancer patients. It also compiles essential acronyms related to cancer precision medicine. Furthermore, it is noteworthy that this is the inaugural review paper in the field. The target audience for this interdisciplinary review comprises AI practitioners, medical statisticians, and progressive oncologists who are enthusiastic about translating AI-driven solutions into clinical practice. We expect this comprehensive review article to guide future research directions in the field of cancer research.
Affiliation(s)
- Arshi Parvaiz
- National University of Sciences and Technology (NUST), Islamabad, Pakistan
| | - Esha Sadia Nasir
- National University of Sciences and Technology (NUST), Islamabad, Pakistan
| | | |
16
Claudio Quiros A, Coudray N, Yeaton A, Yang X, Liu B, Le H, Chiriboga L, Karimkhan A, Narula N, Moore DA, Park CY, Pass H, Moreira AL, Le Quesne J, Tsirigos A, Yuan K. Mapping the landscape of histomorphological cancer phenotypes using self-supervised learning on unannotated pathology slides. Nat Commun 2024; 15:4596. [PMID: 38862472 PMCID: PMC11525555 DOI: 10.1038/s41467-024-48666-7] [Received: 08/11/2023] [Accepted: 05/08/2024] [Indexed: 06/13/2024]
Abstract
Cancer diagnosis and management depend upon the extraction of complex information from microscopy images by pathologists, which requires time-consuming expert interpretation prone to human bias. Supervised deep learning approaches have proven powerful, but are inherently limited by the cost and quality of annotations used for training. Therefore, we present Histomorphological Phenotype Learning, a self-supervised methodology requiring no labels and operating via the automatic discovery of discriminatory features in image tiles. Tiles are grouped into morphologically similar clusters which constitute an atlas of histomorphological phenotypes (HP-Atlas), revealing trajectories from benign to malignant tissue via inflammatory and reactive phenotypes. These clusters have distinct features which can be identified using orthogonal methods, linking histologic, molecular and clinical phenotypes. Applied to lung cancer, we show that they align closely with patient survival, with histopathologically recognised tumor types and growth patterns, and with transcriptomic measures of immunophenotype. These properties are maintained in a multi-cancer study.
Affiliation(s)
- Adalberto Claudio Quiros
- School of Computing Science, University of Glasgow, Glasgow, Scotland, UK
- School of Cancer Sciences, University of Glasgow, Glasgow, Scotland, UK
| | - Nicolas Coudray
- Applied Bioinformatics Laboratories, NYU Grossman School of Medicine, New York, NY, USA
- Department of Cell Biology, NYU Grossman School of Medicine, New York, NY, USA
- Department of Medicine, Division of Precision Medicine, NYU Grossman School of Medicine, New York, USA
| | - Anna Yeaton
- Department of Pathology, NYU Grossman School of Medicine, New York, NY, USA
| | - Xinyu Yang
- School of Computing Science, University of Glasgow, Glasgow, Scotland, UK
| | - Bojing Liu
- Department of Pathology, NYU Grossman School of Medicine, New York, NY, USA
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Solna, Sweden
| | - Hortense Le
- Department of Medicine, Division of Precision Medicine, NYU Grossman School of Medicine, New York, USA
- Department of Pathology, NYU Grossman School of Medicine, New York, NY, USA
| | - Luis Chiriboga
- Department of Pathology, NYU Grossman School of Medicine, New York, NY, USA
| | - Afreen Karimkhan
- Department of Pathology, NYU Grossman School of Medicine, New York, NY, USA
| | - Navneet Narula
- Department of Pathology, NYU Grossman School of Medicine, New York, NY, USA
| | - David A Moore
- Department of Cellular Pathology, University College London Hospital, London, UK
- Cancer Research UK Lung Cancer Centre of Excellence, University College London Cancer Institute, London, UK
| | - Christopher Y Park
- Department of Medicine, Division of Precision Medicine, NYU Grossman School of Medicine, New York, USA
| | - Harvey Pass
- Department of Cardiothoracic Surgery, NYU Grossman School of Medicine, New York, NY, USA
| | - Andre L Moreira
- Department of Pathology, NYU Grossman School of Medicine, New York, NY, USA
| | - John Le Quesne
- School of Cancer Sciences, University of Glasgow, Glasgow, Scotland, UK.
- Cancer Research UK Scotland Institute, Glasgow, Scotland, UK.
- Queen Elizabeth University Hospital, Greater Glasgow and Clyde NHS Trust, Glasgow, Scotland, UK.
| | - Aristotelis Tsirigos
- Applied Bioinformatics Laboratories, NYU Grossman School of Medicine, New York, NY, USA.
- Department of Medicine, Division of Precision Medicine, NYU Grossman School of Medicine, New York, USA.
- Department of Pathology, NYU Grossman School of Medicine, New York, NY, USA.
| | - Ke Yuan
- School of Computing Science, University of Glasgow, Glasgow, Scotland, UK.
- School of Cancer Sciences, University of Glasgow, Glasgow, Scotland, UK.
- Cancer Research UK Scotland Institute, Glasgow, Scotland, UK.
| |
17
Chen X, Lin J, Wang Y, Zhang W, Xie W, Zheng Z, Wong KC. HE2Gene: image-to-RNA translation via multi-task learning for spatial transcriptomics data. Bioinformatics 2024; 40:btae343. [PMID: 38837395 PMCID: PMC11164830 DOI: 10.1093/bioinformatics/btae343] [Received: 09/22/2023] [Revised: 05/06/2024] [Accepted: 05/25/2024] [Indexed: 06/07/2024]
Abstract
MOTIVATION Tissue context and molecular profiling are commonly used measures in understanding normal development and disease pathology. In recent years, the development of spatial molecular profiling technologies (e.g. spatially resolved transcriptomics) has enabled the exploration of quantitative links between tissue morphology and gene expression. However, these technologies remain expensive and time-consuming, with subsequent analyses necessitating high-throughput pathological annotations. On the other hand, existing computational tools are limited to predicting only a few dozen to several hundred genes, and the majority of the methods are designed for bulk RNA-seq. RESULTS In this context, we propose HE2Gene, the first multi-task learning-based method capable of predicting tens of thousands of spot-level gene expressions along with pathological annotations from H&E-stained images. Experimental results demonstrate that HE2Gene is comparable to state-of-the-art methods and generalizes well on an external dataset without the need for re-training. Moreover, HE2Gene preserves the annotated spatial domains and has the potential to identify biomarkers. This capability facilitates cancer diagnosis and broadens its applicability to investigating gene-disease associations. AVAILABILITY AND IMPLEMENTATION The source code and data information have been deposited at https://github.com/Microbiods/HE2Gene.
Affiliation(s)
- Xingjian Chen
- Cutaneous Biology Research Center, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02129, USA
- Department of Computer Science, City University of Hong Kong, Kowloon Tong 999077, Hong Kong SAR
| | - Jiecong Lin
- Molecular Pathology Unit, Center for Cancer Research, Massachusetts General Hospital, Department of Pathology, Harvard Medical School, Boston, MA 02129, USA
- Department of Computer Science, The University of Hong Kong, Pokfulam 999077, Hong Kong SAR
| | - Yuchen Wang
- Department of Computer Science, City University of Hong Kong, Kowloon Tong 999077, Hong Kong SAR
| | - Weitong Zhang
- Department of Computer Science, City University of Hong Kong, Kowloon Tong 999077, Hong Kong SAR
| | - Weidun Xie
- Department of Computer Science, City University of Hong Kong, Kowloon Tong 999077, Hong Kong SAR
| | - Zetian Zheng
- Department of Computer Science, City University of Hong Kong, Kowloon Tong 999077, Hong Kong SAR
| | - Ka-Chun Wong
- Department of Computer Science, City University of Hong Kong, Kowloon Tong 999077, Hong Kong SAR
- Shenzhen Research Institute, City University of Hong Kong, Shenzhen 518057, China
| |
18
Islam J, Turgeon M, Sladek R, Bhatnagar S. Case-Base Neural Network: Survival analysis with time-varying, higher-order interactions. Machine Learning with Applications 2024; 16:100535. [PMID: 39802089 PMCID: PMC11720922 DOI: 10.1016/j.mlwa.2024.100535] [Indexed: 01/16/2025]
Abstract
In the context of survival analysis, data-driven neural network-based methods have been developed to model complex covariate effects. While these methods may provide better predictive performance than regression-based approaches, not all can model time-varying interactions and complex baseline hazards. To address this, we propose Case-Base Neural Networks (CBNNs) as a new approach that combines the case-base sampling framework with flexible neural network architectures. Using a novel sampling scheme and data augmentation to naturally account for censoring, we construct a feed-forward neural network that includes time as an input. CBNNs predict the probability of an event occurring at a given moment to estimate the full hazard function. We compare the performance of CBNNs to regression and neural network-based survival methods in a simulation and three case studies using two time-dependent metrics. First, we examine performance on a simulation involving a complex baseline hazard and time-varying interactions to assess all methods, with CBNN outperforming competitors. Then, we apply all methods to three real data applications, with CBNNs outperforming the competing models in two studies and showing similar performance in the third. Our results highlight the benefit of combining case-base sampling with deep learning to provide a simple and flexible framework for data-driven modeling of single event survival outcomes that estimates time-varying effects and a complex baseline hazard by design. An R package is available at https://github.com/Jesse-Islam/cbnn.
Affiliation(s)
- Jesse Islam
- McGill University Department of Quantitative Life Sciences, 805 rue Sherbrooke O, Montréal, H3A 0B9, Quebec, Canada
| | - Maxime Turgeon
- University of Manitoba Department of Statistics, 50 Sifton Rd, Winnipeg, R3T2N2, Manitoba, Canada
| | - Robert Sladek
- McGill University Department of Quantitative Life Sciences, 805 rue Sherbrooke O, Montréal, H3A 0B9, Quebec, Canada
- McGill University Department of Human Genetics, 805 rue Sherbrooke O, Montréal, H3A 0B9, Quebec, Canada
| | - Sahir Bhatnagar
- McGill University Department of Biostatistics, 805 rue Sherbrooke O, Montréal, H3A 0B9, Quebec, Canada
| |
19
Duwe G, Mercier D, Wiesmann C, Kauth V, Moench K, Junker M, Neumann CCM, Haferkamp A, Dengel A, Höfner T. Challenges and perspectives in use of artificial intelligence to support treatment recommendations in clinical oncology. Cancer Med 2024; 13:e7398. [PMID: 38923826 PMCID: PMC11196383 DOI: 10.1002/cam4.7398] [Received: 01/24/2024] [Revised: 05/31/2024] [Accepted: 06/06/2024] [Indexed: 06/28/2024]
Abstract
Artificial intelligence (AI) promises to be the next revolutionary step in modern society, yet its role in all fields of industry and science remains to be determined. One very promising field is AI-based decision-making tools in clinical oncology, which could lead to more comprehensive, personalized therapy approaches. In this review, the authors provide an overview of the technical applications of AI in oncology that are required to understand the future challenges and realistic perspectives for decision-making tools. In recent years, various applications of AI in medicine have been developed, focusing on the analysis of radiological and pathological images. AI applications can process large amounts of complex data, supporting clinical decision-making and reducing errors by objectively quantifying all aspects of the collected data. In clinical oncology, almost all patients receive a treatment recommendation from a multidisciplinary cancer conference at the beginning of and during their treatment. These highly complex decisions are based on a large amount of information (about the patients and the various treatment options) that must be analyzed and correctly classified in a short time. In this review, the authors describe the technical and medical requirements of AI to address these scientific challenges in a multidisciplinary manner. Major challenges for the use of AI in oncology and decision-making tools are data security, data representation, and the explainability of AI-based outcome predictions, in particular for decision-making processes in multidisciplinary cancer conferences. Finally, limitations and potential solutions are described and compared for current and future research efforts.
Affiliation(s)
- Gregor Duwe
- Department of Urology and Pediatric Urology, University Medical Center, Johannes Gutenberg University, Mainz, Germany
| | - Dominique Mercier
- Research Unit Smart Data and Knowledge Services, German Research Center for Artificial Intelligence, Kaiserslautern, Germany
| | - Crispin Wiesmann
- Department of Urology and Pediatric Urology, University Medical Center, Johannes Gutenberg University, Mainz, Germany
| | - Verena Kauth
- Department of Urology and Pediatric Urology, University Medical Center, Johannes Gutenberg University, Mainz, Germany
| | - Kerstin Moench
- Department of Urology and Pediatric Urology, University Medical Center, Johannes Gutenberg University, Mainz, Germany
| | - Markus Junker
- Research Unit Smart Data and Knowledge Services, German Research Center for Artificial Intelligence, Kaiserslautern, Germany
| | - Christopher C. M. Neumann
- Department of Hematology, Oncology and Tumor Immunology, Charité-Universitätsmedizin Berlin, Freie Universität Berlin, Humboldt-Universität zu Berlin, Berlin, Germany
| | - Axel Haferkamp
- Department of Urology and Pediatric Urology, University Medical Center, Johannes Gutenberg University, Mainz, Germany
| | - Andreas Dengel
- Research Unit Smart Data and Knowledge Services, German Research Center for Artificial Intelligence, Kaiserslautern, Germany
| | - Thomas Höfner
- Department of Urology and Pediatric Urology, University Medical Center, Johannes Gutenberg University, Mainz, Germany
- Department of Urology, Ordensklinikum Linz Elisabethinen, Linz, Austria
| |
20
Chen Y, Liu J, Jiang P, Jin Y. A novel multilevel iterative training strategy for the ResNet50 based mitotic cell classifier. Comput Biol Chem 2024; 110:108092. [PMID: 38754259 DOI: 10.1016/j.compbiolchem.2024.108092] [Received: 08/17/2023] [Revised: 03/21/2024] [Accepted: 04/02/2024] [Indexed: 05/18/2024]
Abstract
The number of mitotic cells is an important indicator for grading invasive breast cancer. It is very challenging for pathologists to identify and count mitotic cells in pathological sections with the naked eye under the microscope. Therefore, many computational models for the automatic identification of mitotic cells based on machine learning, especially deep learning, have been proposed. However, convergence to a local optimum is one of the main problems in model training. In this paper, we propose a novel multilevel iterative training strategy to address this problem. To evaluate the proposed training strategy, we constructed a mitotic cell classification model with ResNet50 and trained it with different training strategies. The results showed that models trained with the proposed strategy performed better on the independent test set than those trained with the conventional strategy, illustrating the effectiveness of the new training strategy. Furthermore, after training with our proposed strategy, the ResNet50 model with the Adam optimizer achieved an 89.26% F1 score on the public MITOSI14 dataset, which is higher than that of the state-of-the-art methods reported in the literature.
Affiliation(s)
- Yuqi Chen
- Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072, China
| | - Juan Liu
- Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072, China.
| | - Peng Jiang
- Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072, China
| | - Yu Jin
- Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072, China
| |
21
Omar M, Xu Z, Rand SB, Alexanderani MK, Salles DC, Valencia I, Schaeffer EM, Robinson BD, Lotan TL, Loda M, Marchionni L. Semi-Supervised, Attention-Based Deep Learning for Predicting TMPRSS2:ERG Fusion Status in Prostate Cancer Using Whole Slide Images. Mol Cancer Res 2024; 22:347-359. [PMID: 38284821 PMCID: PMC10985477 DOI: 10.1158/1541-7786.mcr-23-0639] [Received: 10/02/2023] [Revised: 12/26/2023] [Accepted: 01/22/2024] [Indexed: 01/30/2024]
Abstract
IMPLICATIONS Our study illuminates the potential of deep learning in effectively inferring key prostate cancer genetic alterations from the tissue morphology depicted in routinely available histology slides, offering a cost-effective method that could revolutionize diagnostic strategies in oncology.
Affiliation(s)
- Mohamed Omar
- Department of Pathology and Laboratory Medicine, Weill Cornell Medicine, New York, New York
- Dana-Farber Cancer Institute, Boston, Massachusetts
| | - Zhuoran Xu
- Department of Pathology and Laboratory Medicine, Weill Cornell Medicine, New York, New York
- Dana-Farber Cancer Institute, Boston, Massachusetts
| | - Sophie B. Rand
- Department of Pathology and Laboratory Medicine, Weill Cornell Medicine, New York, New York
- Dana-Farber Cancer Institute, Boston, Massachusetts
| | | | - Daniela C. Salles
- Department of Pathology, Johns Hopkins University, Baltimore, Maryland
| | - Itzel Valencia
- Department of Pathology and Laboratory Medicine, Weill Cornell Medicine, New York, New York
| | | | - Brian D. Robinson
- Department of Pathology and Laboratory Medicine, Weill Cornell Medicine, New York, New York
| | - Tamara L. Lotan
- Department of Pathology, Johns Hopkins University, Baltimore, Maryland
| | - Massimo Loda
- Department of Pathology and Laboratory Medicine, Weill Cornell Medicine, New York, New York
| | - Luigi Marchionni
- Department of Pathology and Laboratory Medicine, Weill Cornell Medicine, New York, New York
| |
22
Sajithkumar A, Thomas J, Saji AM, Ali F, E K HH, Adampulan HAG, Sarathchand S. Artificial Intelligence in pathology: current applications, limitations, and future directions. Ir J Med Sci 2024; 193:1117-1121. [PMID: 37542634 DOI: 10.1007/s11845-023-03479-3] [Received: 06/22/2023] [Accepted: 07/26/2023] [Indexed: 08/07/2023]
Abstract
PURPOSE Given AI's recent success in computer vision applications, the majority of pathologists anticipate that it will be able to assist them with a variety of digital pathology activities. Massive improvements in deep learning have enabled image-based diagnosis against the backdrop of digital pathology, and AI-based solutions are being developed to eliminate errors and save pathologists time. AIMS In this paper, we discuss the components that underpin the use of artificial intelligence (AI) in pathology, its use in the medical profession, the obstacles and constraints it encounters, and its future possibilities in the medical field. CONCLUSIONS Based on these factors, we elaborate on the use of AI in medical pathology and provide recommendations for its successful implementation in this field.
Affiliation(s)
- Akhil Sajithkumar
- Department of Oral Pathology and Microbiology, Malabar Dental College and Research Centre, Manoor Chekanoor Road, Mudur PO, Edappal, Malappuram Dist, 679578, India.
| | - Jubin Thomas
- Department of Oral Pathology and Microbiology, Malabar Dental College and Research Centre, Manoor Chekanoor Road, Mudur PO, Edappal, Malappuram Dist, 679578, India
| | - Ajish Meprathumalil Saji
- Department of Oral Pathology and Microbiology, Malabar Dental College and Research Centre, Manoor Chekanoor Road, Mudur PO, Edappal, Malappuram Dist, 679578, India
| | - Fousiya Ali
- Department of Oral Pathology and Microbiology, Malabar Dental College and Research Centre, Manoor Chekanoor Road, Mudur PO, Edappal, Malappuram Dist, 679578, India
| | - Haneena Hasin E K
- Department of Oral Pathology and Microbiology, Malabar Dental College and Research Centre, Manoor Chekanoor Road, Mudur PO, Edappal, Malappuram Dist, 679578, India
| | - Hannan Abdul Gafoor Adampulan
- Department of Oral Pathology and Microbiology, Malabar Dental College and Research Centre, Manoor Chekanoor Road, Mudur PO, Edappal, Malappuram Dist, 679578, India
| | - Swathy Sarathchand
- Sree Narayana Institute of Medical Sciences, Chalakka - Kuthiathode Rd, North Kuthiathode, Kunnukara, Kerala, 683594, India
| |
Collapse
|
23
|
Arslan S, Schmidt J, Bass C, Mehrotra D, Geraldes A, Singhal S, Hense J, Li X, Raharja-Liu P, Maiques O, Kather JN, Pandya P. A systematic pan-cancer study on deep learning-based prediction of multi-omic biomarkers from routine pathology images. COMMUNICATIONS MEDICINE 2024; 4:48. [PMID: 38491101 PMCID: PMC10942985 DOI: 10.1038/s43856-024-00471-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Accepted: 02/29/2024] [Indexed: 03/18/2024] Open
Abstract
BACKGROUND The objective of this comprehensive pan-cancer study is to evaluate the potential of deep learning (DL) for molecular profiling of multi-omic biomarkers directly from hematoxylin and eosin (H&E)-stained whole slide images. METHODS A total of 12,093 DL models predicting 4031 multi-omic biomarkers across 32 cancer types were trained and validated. The study included a broad range of genetic, transcriptomic, and proteomic biomarkers, as well as established prognostic markers, molecular subtypes, and clinical outcomes. RESULTS Here we show that 50% of the models achieve an area under the curve (AUC) of 0.644 or higher. The observed AUC for 25% of the models is at least 0.719 and exceeds 0.834 for the top 5%. Molecular profiling with image-based histomorphological features is generally considered feasible for most of the investigated biomarkers and across different cancer types. The performance appears to be independent of tumor purity, sample size, and class ratio (prevalence), suggesting a degree of inherent predictability in histomorphology. CONCLUSIONS The results demonstrate that DL holds promise to predict a wide range of biomarkers across the omics spectrum using only H&E-stained histological slides of solid tumors. This paves the way for accelerating diagnosis and developing more precise treatments for cancer patients.
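The AUC figures quoted above can be read as the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. A minimal sketch of that rank-based interpretation, using simulated scores (all values here are hypothetical assumptions, not data from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scores from one biomarker model: 500 biomarker-positive and
# 500 biomarker-negative slides, with positives scored slightly higher on average.
pos = rng.normal(0.55, 0.20, 500)
neg = rng.normal(0.45, 0.20, 500)

# AUC as the normalized Mann-Whitney U statistic: the fraction of
# positive/negative pairs that are ranked correctly.
auc = (pos[:, None] > neg[None, :]).mean()
print(f"AUC: {auc:.3f}")  # a weak-to-moderate predictor for this setup
```

This pairwise-ranking view is why AUC is insensitive to class ratio (prevalence), consistent with the study's observation.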
Collapse
Affiliation(s)
| | | | | | - Debapriya Mehrotra
- Panakeia Technologies, London, UK
- Department of Pathology, Barking, Havering and Redbridge University NHS Trust, Romford, UK
| | | | - Shikha Singhal
- Panakeia Technologies, London, UK
- Department of Pathology, The Royal Wolverhampton NHS Trust, Wolverhampton, UK
| | | | - Xiusi Li
- Panakeia Technologies, London, UK
| | | | - Oscar Maiques
- Cytoskeleton and Cancer Metastasis Group, Breast Cancer Now Toby Robins Breast Cancer Research Centre, The Institute of Cancer Research, London, UK
- Cancer Biomarkers & Biotherapeutics, Barts Cancer Institute, Queen Mary University of London, John Vane Science Building, London, UK
| | - Jakob Nikolas Kather
- Medical Oncology, National Center for Tumor Diseases, University Hospital Heidelberg, Heidelberg, Germany
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
| | | |
Collapse
|
24
|
Thompson N, Morley-Bunker A, McLauchlan J, Glyn T, Eglinton T. Use of artificial intelligence for the prediction of lymph node metastases in early-stage colorectal cancer: systematic review. BJS Open 2024; 8:zrae033. [PMID: 38637299 PMCID: PMC11026097 DOI: 10.1093/bjsopen/zrae033] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2023] [Revised: 03/02/2024] [Accepted: 03/04/2024] [Indexed: 04/20/2024] Open
Abstract
BACKGROUND Risk evaluation of lymph node metastasis for early-stage (T1 and T2) colorectal cancers is critical for determining therapeutic strategies. Traditional methods of lymph node metastasis prediction have limited accuracy. This systematic review aimed to assess the potential of artificial intelligence in predicting lymph node metastasis in early-stage colorectal cancers. METHODS A comprehensive search was performed for papers that evaluated the potential of artificial intelligence in predicting lymph node metastasis in early-stage colorectal cancers. Studies were appraised using the Joanna Briggs Institute tools. The primary outcome was summarizing artificial intelligence models and their accuracy. Secondary outcomes included influential variables and strategies to address challenges. RESULTS Of 3190 screened manuscripts, 11 were included, involving 8648 patients from 1996 to 2023. Due to diverse artificial intelligence models and varied metrics, no data synthesis was performed. Models included random forest algorithms, support vector machine, deep learning, artificial neural network, convolutional neural network and least absolute shrinkage and selection operator regression. Artificial intelligence models' area under the curve values ranged from 0.74 to 0.9993 (slide level) and 0.9476 to 0.9956 (single-node level), outperforming traditional clinical guidelines. CONCLUSION Artificial intelligence models show promise in predicting lymph node metastasis in early-stage colorectal cancers, potentially refining clinical decisions and improving outcomes. PROSPERO REGISTRATION NUMBER CRD42023409094.
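As a sketch of the kind of tabular model the review covers, the snippet below fits a random forest to simulated clinicopathological features and scores it with AUC. The features, labels, and resulting performance are entirely synthetic assumptions for illustration, not data from any included study:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

# Hypothetical tabular features for early-stage tumours (stand-ins for e.g.
# depth of invasion, grade, lymphovascular invasion); label = nodal metastasis.
n = 1000
X = rng.normal(0, 1, (n, 6))
signal = X[:, 0] + 0.8 * X[:, 1]            # two informative features
y = (signal + rng.normal(0, 1, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"held-out AUC: {auc:.2f}")
```

Real studies in the review instead trained on curated patient cohorts and, in some cases, on whole-slide image features rather than tabular variables.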
Collapse
Affiliation(s)
- Nasya Thompson
- Department of Surgery, University of Otago, Christchurch, New Zealand
| | - Arthur Morley-Bunker
- Department of Pathology and Biomedical Science, University of Otago, Christchurch, New Zealand
| | - Jared McLauchlan
- Department of Surgery, Te Whatu Ora – Health New Zealand Waitaha Canterbury, Christchurch, New Zealand
| | - Tamara Glyn
- Department of Surgery, University of Otago, Christchurch, New Zealand
- Department of Surgery, Te Whatu Ora – Health New Zealand Waitaha Canterbury, Christchurch, New Zealand
| | - Tim Eglinton
- Department of Surgery, University of Otago, Christchurch, New Zealand
- Department of Surgery, Te Whatu Ora – Health New Zealand Waitaha Canterbury, Christchurch, New Zealand
| |
Collapse
|
25
|
Chen RJ, Ding T, Lu MY, Williamson DFK, Jaume G, Song AH, Chen B, Zhang A, Shao D, Shaban M, Williams M, Oldenburg L, Weishaupt LL, Wang JJ, Vaidya A, Le LP, Gerber G, Sahai S, Williams W, Mahmood F. Towards a general-purpose foundation model for computational pathology. Nat Med 2024; 30:850-862. [PMID: 38504018 PMCID: PMC11403354 DOI: 10.1038/s41591-024-02857-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2023] [Accepted: 02/05/2024] [Indexed: 03/21/2024]
Abstract
Quantitative evaluation of tissue images is crucial for computational pathology (CPath) tasks, requiring the objective characterization of histopathological entities from whole-slide images (WSIs). The high resolution of WSIs and the variability of morphological features present significant challenges, complicating the large-scale annotation of data for high-performance applications. To address this challenge, current efforts have proposed the use of pretrained image encoders through transfer learning from natural image datasets or self-supervised learning on publicly available histopathology datasets, but have not been extensively developed and evaluated across diverse tissue types at scale. We introduce UNI, a general-purpose self-supervised model for pathology, pretrained using more than 100 million images from over 100,000 diagnostic H&E-stained WSIs (>77 TB of data) across 20 major tissue types. The model was evaluated on 34 representative CPath tasks of varying diagnostic difficulty. In addition to outperforming previous state-of-the-art models, we demonstrate new modeling capabilities in CPath such as resolution-agnostic tissue classification, slide classification using few-shot class prototypes, and disease subtyping generalization in classifying up to 108 cancer types in the OncoTree classification system. UNI advances unsupervised representation learning at scale in CPath in terms of both pretraining data and downstream evaluation, enabling data-efficient artificial intelligence models that can generalize and transfer to a wide range of diagnostically challenging tasks and clinical workflows in anatomic pathology.
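The few-shot class-prototype idea mentioned above can be sketched with plain NumPy: average the embeddings of a handful of labeled slides per class, then assign queries to the nearest prototype. The encoder is stubbed out with synthetic Gaussian embeddings; the dimensions and shot counts are arbitrary assumptions, not UNI's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend a frozen encoder (e.g. a pathology foundation model) maps each
# slide to a 64-d embedding; simulate 3 classes clustered around centers.
dim, n_classes, shots = 64, 3, 16
centers = rng.normal(0, 1, (n_classes, dim))

def embed(cls, n):
    """Synthetic stand-in for encoder output: center plus small noise."""
    return centers[cls] + 0.3 * rng.normal(0, 1, (n, dim))

# Few-shot class prototypes: mean embedding of the labeled support examples.
prototypes = np.stack([embed(c, shots).mean(axis=0) for c in range(n_classes)])

# Classify query slides by nearest prototype (Euclidean distance).
queries = np.concatenate([embed(c, 20) for c in range(n_classes)])
labels = np.repeat(np.arange(n_classes), 20)
dists = np.linalg.norm(queries[:, None, :] - prototypes[None, :, :], axis=-1)
accuracy = (dists.argmin(axis=1) == labels).mean()
print(f"few-shot accuracy: {accuracy:.2f}")
```

The appeal is data efficiency: no gradient updates are needed once the encoder is pretrained, only a mean per class.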
Collapse
Affiliation(s)
- Richard J Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
| | - Tong Ding
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
| | - Ming Y Lu
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Electrical Engineering and Computer Science, Massachusetts Institute of Technology (MIT), Cambridge, MA, USA
| | - Drew F K Williamson
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
| | - Guillaume Jaume
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
| | - Andrew H Song
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
| | - Bowen Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
| | - Andrew Zhang
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Health Sciences and Technology, Harvard-MIT, Cambridge, MA, USA
| | - Daniel Shao
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Health Sciences and Technology, Harvard-MIT, Cambridge, MA, USA
| | - Muhammad Shaban
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
| | - Mane Williams
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
| | - Lukas Oldenburg
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
| | - Luca L Weishaupt
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Health Sciences and Technology, Harvard-MIT, Cambridge, MA, USA
| | - Judy J Wang
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
| | - Anurag Vaidya
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Health Sciences and Technology, Harvard-MIT, Cambridge, MA, USA
| | - Long Phi Le
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Health Sciences and Technology, Harvard-MIT, Cambridge, MA, USA
| | - Georg Gerber
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
| | - Sharifa Sahai
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Systems Biology, Harvard University, Cambridge, MA, USA
| | - Walt Williams
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
| | - Faisal Mahmood
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA.
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA.
- Harvard Data Science Initiative, Harvard University, Cambridge, MA, USA.
| |
Collapse
|
26
|
Trettner KJ, Hsieh J, Xiao W, Lee JSH, Armani AM. Nondestructive, quantitative viability analysis of 3D tissue cultures using machine learning image segmentation. APL Bioeng 2024; 8:016121. [PMID: 38566822 PMCID: PMC10985731 DOI: 10.1063/5.0189222] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2023] [Accepted: 03/04/2024] [Indexed: 04/04/2024] Open
Abstract
Ascertaining the collective viability of cells in different cell culture conditions has typically relied on averaging colorimetric indicators, with results often reported as simple binary readouts. Recent research has combined viability assessment techniques with image-based deep-learning models to automate the characterization of cellular properties. However, further development of viability measurements to assess the continuity of possible cellular states and responses to perturbation across cell culture conditions is needed. In this work, we demonstrate an image processing algorithm for quantifying features associated with cellular viability in 3D cultures without the need for assay-based indicators. We show that our algorithm performs similarly to a pair of human experts in whole-well images over a range of days and culture matrix compositions. To demonstrate potential utility, we perform a longitudinal study investigating the impact of a known therapeutic on pancreatic cancer spheroids. Using images taken with a high content imaging system, the algorithm successfully tracks viability at the individual spheroid and whole-well level. The method we propose reduces analysis time by 97% in comparison with the experts. Because the method is independent of the microscope or imaging system used, this approach lays the foundation for accelerating progress in and for improving the robustness and reproducibility of 3D culture analysis across biological and clinical research.
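An assay-free viability readout of the sort described can be approximated in a few lines: threshold a grayscale image and measure the viable-pixel fraction inside the spheroid region. The synthetic image, threshold, and radii below are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical grayscale spheroid image: a dark (necrotic) core inside a
# brighter (viable) rim, on a bright background.
yy, xx = np.mgrid[0:64, 0:64]
r = np.hypot(yy - 32, xx - 32)
img = np.where(r < 10, 0.2, np.where(r < 25, 0.7, 1.0))  # core / rim / background
img = img + 0.02 * rng.normal(0, 1, img.shape)           # imaging noise

spheroid = r < 25                        # spheroid footprint
viable = (img > 0.45) & spheroid         # bright pixels within the spheroid
viability = viable.sum() / spheroid.sum()
print(f"viable fraction: {viability:.2f}")
```

The paper's machine-learning segmentation replaces the hand-set radius and intensity threshold with learned region masks, which is what makes it robust across imaging systems.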
Collapse
Affiliation(s)
| | - Jeremy Hsieh
- Pasadena Polytechnic High School, Pasadena, California 91106, USA
| | - Weikun Xiao
- Ellison Institute of Technology, Los Angeles, California 90064, USA
| | | | | |
Collapse
|
27
|
Subramanian V, Syeda-Mahmood T, Do MN. Modelling-based joint embedding of histology and genomics using canonical correlation analysis for breast cancer survival prediction. Artif Intell Med 2024; 149:102787. [PMID: 38462287 DOI: 10.1016/j.artmed.2024.102787] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2023] [Revised: 01/23/2024] [Accepted: 01/24/2024] [Indexed: 03/12/2024]
Abstract
Traditional approaches to predicting breast cancer patients' survival outcomes were based on clinical subgroups, the PAM50 genes, or the histological tissue's evaluation. With the growth of multi-modality datasets capturing diverse information (such as genomics, histology, radiology and clinical data) about the same cancer, this information can be integrated using advanced tools to improve survival prediction. These methods implicitly exploit the key observation that different modalities originate from the same cancer source and jointly provide a complete picture of the cancer. In this work, we investigate the benefits of explicitly modelling multi-modality data as originating from the same cancer under a probabilistic framework. Specifically, we consider histology and genomics as two modalities originating from the same breast cancer under a probabilistic graphical model (PGM). We construct maximum likelihood estimates of the PGM parameters based on canonical correlation analysis (CCA) and then infer the underlying properties of the cancer patient, such as survival. Equivalently, we construct CCA-based joint embeddings of the two modalities and input them to a learnable predictor. Real-world properties of sparsity and graph-structures are captured in the penalized variants of CCA (pCCA) and are better suited for cancer applications. For generating richer multi-dimensional embeddings with pCCA, we introduce two novel embedding schemes that encourage orthogonality to generate more informative embeddings. The efficacy of our proposed prediction pipeline is first demonstrated via low prediction errors of the hidden variable and the generation of informative embeddings on simulated data. When applied to breast cancer histology and RNA-sequencing expression data from The Cancer Genome Atlas (TCGA), our model can provide survival predictions with average concordance indices of up to 68.32% along with interpretability.
We also illustrate how the pCCA embeddings can be used for survival analysis through Kaplan-Meier curves.
Collapse
Affiliation(s)
- Vaishnavi Subramanian
- Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, 61801, IL, USA.
| | | | - Minh N Do
- Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, 61801, IL, USA
| |
Collapse
|
28
|
Feng X, Shu W, Li M, Li J, Xu J, He M. Pathogenomics for accurate diagnosis, treatment, prognosis of oncology: a cutting edge overview. J Transl Med 2024; 22:131. [PMID: 38310237 PMCID: PMC10837897 DOI: 10.1186/s12967-024-04915-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2023] [Accepted: 01/20/2024] [Indexed: 02/05/2024] Open
Abstract
The capability to gather heterogeneous data, alongside the increasing power of artificial intelligence to examine it, is leading a revolution in harnessing multimodal data in the life sciences. However, most approaches are limited to unimodal data, leaving integrated approaches across modalities relatively underdeveloped in computational pathology. Pathogenomics, as an integrative method combining advanced molecular diagnostics from genomic data, morphological information from histopathological imaging, and codified clinical data, enables the discovery of new multimodal cancer biomarkers to propel the field of precision oncology in the coming decade. In this perspective, we offer our opinions on synthesizing complementary modalities of data with emerging multimodal artificial intelligence methods in pathogenomics. These include correlation between the pathological and genomic profiles of cancer and fusion of the histology and genomics profiles of cancer. We also present challenges, opportunities, and avenues for future work.
Collapse
Affiliation(s)
- Xiaobing Feng
- College of Electrical and Information Engineering, Hunan University, Changsha, China
- Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, Zhejiang, China
| | - Wen Shu
- College of Electrical and Information Engineering, Hunan University, Changsha, China
- Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, Zhejiang, China
| | - Mingya Li
- College of Electrical and Information Engineering, Hunan University, Changsha, China
- Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, Zhejiang, China
| | - Junyu Li
- College of Electrical and Information Engineering, Hunan University, Changsha, China
- Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, Zhejiang, China
| | - Junyao Xu
- Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, Zhejiang, China
| | - Min He
- College of Electrical and Information Engineering, Hunan University, Changsha, China.
- Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, Zhejiang, China.
| |
Collapse
|
29
|
Graham S, Vu QD, Jahanifar M, Weigert M, Schmidt U, Zhang W, Zhang J, Yang S, Xiang J, Wang X, Rumberger JL, Baumann E, Hirsch P, Liu L, Hong C, Aviles-Rivero AI, Jain A, Ahn H, Hong Y, Azzuni H, Xu M, Yaqub M, Blache MC, Piégu B, Vernay B, Scherr T, Böhland M, Löffler K, Li J, Ying W, Wang C, Snead D, Raza SEA, Minhas F, Rajpoot NM. CoNIC Challenge: Pushing the frontiers of nuclear detection, segmentation, classification and counting. Med Image Anal 2024; 92:103047. [PMID: 38157647 DOI: 10.1016/j.media.2023.103047] [Citation(s) in RCA: 11] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2023] [Revised: 09/19/2023] [Accepted: 11/29/2023] [Indexed: 01/03/2024]
Abstract
Nuclear detection, segmentation and morphometric profiling are essential in helping us further understand the relationship between histology and patient outcome. To drive innovation in this area, we set up a community-wide challenge using the largest available dataset of its kind to assess nuclear segmentation and cellular composition. Our challenge, named CoNIC, stimulated the development of reproducible algorithms for cellular recognition with real-time result inspection on public leaderboards. We conducted an extensive post-challenge analysis based on the top-performing models using 1,658 whole-slide images of colon tissue. With around 700 million detected nuclei per model, associated features were used for dysplasia grading and survival analysis, where we demonstrated that the challenge's improvement over the previous state-of-the-art led to significant boosts in downstream performance. Our findings also suggest that eosinophils and neutrophils play an important role in the tumour microenvironment. We release challenge models and WSI-level results to foster the development of further methods for biomarker discovery.
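Instance counting of the kind the challenge evaluates reduces, in its simplest form, to connected-component labelling of a binary nuclear mask. A minimal sketch with SciPy (the toy mask below is a stand-in for a real segmentation output):

```python
import numpy as np
from scipy import ndimage

# Hypothetical binary nuclear mask (1 = nucleus pixel). Connected-component
# labelling turns it into per-nucleus instances, the basis for counting and
# per-class composition statistics.
mask = np.zeros((20, 20), dtype=int)
mask[2:5, 2:5] = 1       # nucleus A (3x3 = 9 px)
mask[10:14, 10:13] = 1   # nucleus B (4x3 = 12 px)
mask[16:18, 3:6] = 1     # nucleus C (2x3 = 6 px)

labeled, n_nuclei = ndimage.label(mask)
sizes = ndimage.sum(mask, labeled, range(1, n_nuclei + 1))
print(n_nuclei, sizes)   # 3 nuclei with their pixel areas
```

Challenge-grade methods additionally separate touching nuclei and assign each instance a cell type, which simple labelling cannot do.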
Collapse
Affiliation(s)
- Simon Graham
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom; Histofy Ltd, Birmingham, United Kingdom.
| | - Quoc Dang Vu
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom; Histofy Ltd, Birmingham, United Kingdom
| | - Mostafa Jahanifar
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom
| | - Martin Weigert
- Institute of Bioengineering, School of Life Sciences, EPFL, Lausanne, Switzerland
| | | | - Wenhua Zhang
- The Department of Computer Science, The University of Hong Kong, Hong Kong
| | | | - Sen Yang
- College of Biomedical Engineering, Sichuan University, Chengdu, China
| | - Jinxi Xiang
- Department of Precision Instruments, Tsinghua University, Beijing, China
| | - Xiyue Wang
- College of Computer Science, Sichuan University, Chengdu, China
| | - Josef Lorenz Rumberger
- Max-Delbrueck-Center for Molecular Medicine in the Helmholtz Association, Berlin, Germany; Humboldt University of Berlin, Faculty of Mathematics and Natural Sciences, Berlin, Germany; Charité University Medicine, Berlin, Germany
| | | | - Peter Hirsch
- Max-Delbrueck-Center for Molecular Medicine in the Helmholtz Association, Berlin, Germany; Humboldt University of Berlin, Faculty of Mathematics and Natural Sciences, Berlin, Germany
| | - Lihao Liu
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, United Kingdom
| | - Chenyang Hong
- Department of Computer Science and Engineering, Chinese University of Hong Kong, Hong Kong
| | - Angelica I Aviles-Rivero
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, United Kingdom
| | - Ayushi Jain
- Softsensor.ai, Bridgewater, NJ, United States of America; PRR.ai, TX, United States of America
| | - Heeyoung Ahn
- Department of R&D Center, Arontier Co. Ltd, Seoul, Republic of Korea
| | - Yiyu Hong
- Department of R&D Center, Arontier Co. Ltd, Seoul, Republic of Korea
| | - Hussam Azzuni
- Computer Vision Department, Mohamed Bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
| | - Min Xu
- Computer Vision Department, Mohamed Bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
| | - Mohammad Yaqub
- Computer Vision Department, Mohamed Bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
| | | | - Benoît Piégu
- CNRS, IFCE, INRAE, Université de Tours, PRC, 3780, Nouzilly, France
| | - Bertrand Vernay
- Institut de Génétique et de Biologie Moléculaire et Cellulaire, Illkirch, France; Centre National de la Recherche Scientifique, UMR7104, Illkirch, France; Institut National de la Santé et de la Recherche Médicale, INSERM, U1258, Illkirch, France; Université de Strasbourg, Strasbourg, France
| | - Tim Scherr
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
| | - Moritz Böhland
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
| | - Katharina Löffler
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
| | - Jiachen Li
- School of Software Engineering, South China University of Technology, Guangzhou, China
| | - Weiqin Ying
- School of Software Engineering, South China University of Technology, Guangzhou, China
| | - Chixin Wang
- School of Software Engineering, South China University of Technology, Guangzhou, China
| | - David Snead
- Histofy Ltd, Birmingham, United Kingdom; Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, United Kingdom; Division of Biomedical Sciences, Warwick Medical School, University of Warwick, Coventry, United Kingdom
| | - Shan E Ahmed Raza
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom
| | - Fayyaz Minhas
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom
| | - Nasir M Rajpoot
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom; Histofy Ltd, Birmingham, United Kingdom; Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, United Kingdom
| |
Collapse
|
30
|
Tak S, Han G, Leem SH, Lee SY, Paek K, Kim JA. Prediction of anticancer drug resistance using a 3D microfluidic bladder cancer model combined with convolutional neural network-based image analysis. Front Bioeng Biotechnol 2024; 11:1302983. [PMID: 38268938 PMCID: PMC10806080 DOI: 10.3389/fbioe.2023.1302983] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2023] [Accepted: 12/28/2023] [Indexed: 01/26/2024] Open
Abstract
Bladder cancer is the most common urological malignancy worldwide, and its high recurrence rate leads to poor survival outcomes. The effect of anticancer drug treatment varies significantly depending on individual patients and the extent of drug resistance. In this study, we developed a validation system based on an organ-on-a-chip integrated with artificial intelligence technologies to predict resistance to anticancer drugs in bladder cancer. As a proof-of-concept, we utilized the gemcitabine-resistant bladder cancer cell line T24 with four distinct levels of drug resistance (parental, early, intermediate, and late). These cells were co-cultured with endothelial cells in a 3D microfluidic chip. A dataset comprising 2,674 cell images from the chips was analyzed using a convolutional neural network (CNN) to distinguish the extent of drug resistance among the four cell groups. The CNN achieved 95.2% accuracy upon employing data augmentation and a step decay learning rate with an initial value of 0.001. The average diagnostic sensitivity and specificity were 90.5% and 96.8%, respectively, and all area under the curve (AUC) values were over 0.988. Our proposed method demonstrated excellent performance in accurately identifying the extent of drug resistance, which can assist in the prediction of drug responses and in determining the appropriate treatment for bladder cancer patients.
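The "step decay learning rate with an initial value of 0.001" can be written as a one-line schedule. The drop factor and interval below are generic assumptions, since the paper's exact values (beyond the initial rate) are not given here:

```python
def step_decay(epoch, initial_lr=0.001, drop=0.5, drop_every=10):
    """Step decay: multiply the learning rate by `drop` every `drop_every` epochs."""
    return initial_lr * drop ** (epoch // drop_every)

# The schedule is piecewise constant: epochs 0-9 use 0.001, 10-19 use 0.0005, ...
schedule = [step_decay(e) for e in range(0, 30, 10)]
print(schedule)
```

A function of this shape plugs directly into framework callbacks such as Keras's `LearningRateScheduler`.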
Affiliation(s)
- Sungho Tak: Research Center for Bioconvergence Analysis, Korea Basic Science Institute, Cheongju, Republic of Korea; Graduate School of Analytical Science and Technology, Chungnam National University, Daejeon, Republic of Korea
- Gyeongjin Han: Research Center for Bioconvergence Analysis, Korea Basic Science Institute, Cheongju, Republic of Korea
- Sun-Hee Leem: Department of Biomedical Sciences, Dong-A University, Busan, Republic of Korea; Department of Health Sciences, The Graduate School of Dong-A University, Busan, Republic of Korea
- Sang-Yeop Lee: Research Center for Bioconvergence Analysis, Korea Basic Science Institute, Cheongju, Republic of Korea
- Kyurim Paek: Center for Scientific Instrumentation, Korea Basic Science Institute, Daejeon, Republic of Korea
- Jeong Ah Kim: Center for Scientific Instrumentation, Korea Basic Science Institute, Daejeon, Republic of Korea; Department of Bio-Analytical Science, University of Science and Technology, Daejeon, Republic of Korea; Chung-Ang University Hospital, Chung-Ang University College of Medicine, Seoul, Republic of Korea
31
Tavolara TE, Su Z, Gurcan MN, Niazi MKK. One label is all you need: Interpretable AI-enhanced histopathology for oncology. Semin Cancer Biol 2023; 97:70-85. [PMID: 37832751 DOI: 10.1016/j.semcancer.2023.09.006]
Abstract
Artificial Intelligence (AI)-enhanced histopathology presents unprecedented opportunities to benefit oncology through interpretable methods that require only one overall label per hematoxylin and eosin (H&E) slide, with no tissue-level annotations. We present a structured review of these methods, organized by their degree of verifiability and by commonly recurring application areas in oncological characterization. First, we discuss morphological markers (tumor presence/absence, metastases, subtypes, grades), for which AI-identified regions of interest (ROIs) within whole slide images (WSIs) verifiably overlap with pathologist-identified ROIs. Second, we discuss molecular markers (gene expression, molecular subtyping), which are not verified on H&E itself but rather through overlap with positive regions on adjacent tissue. Third, we discuss genetic markers (mutations, mutational burden, microsatellite instability, chromosomal instability), for which current technologies cannot verify whether AI methods spatially resolve specific genetic alterations. Fourth, we discuss the direct prediction of survival, with which AI-identified histopathological features quantitatively correlate but which is nonetheless not mechanistically verifiable. Finally, we discuss in detail several opportunities and challenges for these one-label-per-slide methods within oncology. Opportunities include reducing the cost of research and clinical care, reducing the workload of clinicians, personalized medicine, and unlocking the full potential of histopathology through new imaging-based biomarkers. Current challenges include explainability and interpretability, validation via adjacent tissue sections, reproducibility, data availability, computational needs, data requirements, domain adaptability, external validation, dataset imbalances, and finally commercialization and clinical potential. Ultimately, the relative ease and minimal upfront cost with which relevant data can be collected, together with the plethora of available AI methods for outcome-driven analysis, will surmount these current limitations and unlock the innumerable opportunities associated with AI-driven histopathology for the benefit of oncology.
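One-label-per-slide training is commonly framed as multiple instance learning (MIL): a slide is a bag of tile embeddings aggregated into a single slide-level prediction. Below is a minimal sketch of attention-based MIL pooling, a widely used aggregation strategy; the random features and parameters are purely illustrative, and this is not the implementation of any specific method reviewed here:

```python
import numpy as np

def attention_mil_pool(instance_feats, V, w):
    """Attention-based MIL pooling: score each tile, softmax over the bag,
    then aggregate into one slide-level embedding.

    instance_feats: (n_tiles, d) tile embeddings from one WSI.
    V: (d, h) and w: (h,) are learned attention parameters (random here).
    Returns the bag embedding (d,) and per-tile attention weights (n_tiles,)."""
    scores = np.tanh(instance_feats @ V) @ w        # (n_tiles,) raw scores
    scores = scores - scores.max()                  # numerical stability
    attn = np.exp(scores) / np.exp(scores).sum()    # softmax over tiles
    bag = attn @ instance_feats                     # weighted average embedding
    return bag, attn

rng = np.random.default_rng(0)
feats = rng.normal(size=(50, 16))                   # 50 tiles, 16-d features
V, w = rng.normal(size=(16, 8)), rng.normal(size=8)
bag, attn = attention_mil_pool(feats, V, w)
```

The attention weights double as a tile-level heatmap, which is one source of the interpretability these methods offer.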
Affiliation(s)
- Thomas E Tavolara: Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA
- Ziyu Su: Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA
- Metin N Gurcan: Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA
- M Khalid Khan Niazi: Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA
32
Shi Y, Olsson LT, Hoadley KA, Calhoun BC, Marron JS, Geradts J, Niethammer M, Troester MA. Predicting early breast cancer recurrence from histopathological images in the Carolina Breast Cancer Study. NPJ Breast Cancer 2023; 9:92. [PMID: 37952058 PMCID: PMC10640636 DOI: 10.1038/s41523-023-00597-0]
Abstract
Approaches for rapidly identifying patients at high risk of early breast cancer recurrence are needed. Image-based methods for prescreening hematoxylin and eosin (H&E) stained tumor slides could offer temporal and financial efficiency. We evaluated a data set of 704 1-mm tumor core H&E images (2-4 cores per case), corresponding to 202 participants (101 who recurred; 101 non-recurrent, matched on age and follow-up time) from breast cancers diagnosed between 2008 and 2012 in the Carolina Breast Cancer Study. We leveraged deep learning to extract image information and trained a model to identify recurrence. Cross-validation accuracy for predicting recurrence was 62.4% [95% CI: 55.7, 69.1], similar to grade (65.8% [95% CI: 59.3, 72.3]) and ER status (66.3% [95% CI: 59.8, 72.8]). Notably, 70% (19/27) of early-recurrent low-intermediate grade tumors were identified by our image model. Relative to existing markers, image-based analyses provide complementary information for predicting early recurrence.
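The reported interval for the image model (62.4% [55.7, 69.1]) matches a simple normal-approximation (Wald) confidence interval over the 202 participants; whether the authors used this or another method is an assumption here. A minimal sketch:

```python
from math import sqrt

def wald_ci(p_hat, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion
    (z = 1.96 for two-sided 95% coverage)."""
    se = sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# Cross-validation accuracy of 62.4% over 202 participants:
lo, hi = wald_ci(0.624, 202)
```

Here `(lo, hi)` rounds to (0.557, 0.691), i.e. the [55.7, 69.1] interval quoted in the abstract.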
Affiliation(s)
- Yifeng Shi: Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Linnea T Olsson: Department of Epidemiology, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Katherine A Hoadley: Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Department of Genetics, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Benjamin C Calhoun: Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Department of Pathology and Laboratory Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- J S Marron: Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Department of Statistics and Operations Research, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Joseph Geradts: Department of Pathology, East Carolina University, Greenville, NC, USA
- Marc Niethammer: Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Melissa A Troester: Department of Epidemiology, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Department of Pathology and Laboratory Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
33
Fazelpour S, Vejdani-Jahromi M, Kaliaev A, Qiu E, Goodman D, Andreu-Arasa VC, Fujima N, Sakai O. Multiparametric machine learning algorithm for human papillomavirus status and survival prediction in oropharyngeal cancer patients. Head Neck 2023; 45:2882-2892. [PMID: 37740534 DOI: 10.1002/hed.27519]
Abstract
BACKGROUND Human papillomavirus (HPV) status influences prognosis in oropharyngeal cancer (OPC), and identifying high-risk patients is critical to improving treatment. We aim to provide a noninvasive option for managing OPC patients by training multiple machine learning pipelines and determining the best model for characterizing HPV status and survival. METHODS Multiparametric algorithms were designed using a database of 492 OPC patients. The HPV status model incorporated age, sex, smoking/drinking habits, cancer subsite, TNM, and AJCC 7th edition staging; the survival model used the same inputs plus HPV status. Patients were split 4:1 into training and testing sets. Algorithm efficacy was assessed through accuracy and area under the receiver operating characteristic curve (AUC). RESULTS Of 31 HPV status models, an ensemble yielded 0.83 AUC and 78.7% accuracy. Of 38 survival models, an ensemble yielded 0.91 AUC and 87.7% accuracy. CONCLUSION The results reinforce artificial intelligence's potential to use tumor imaging and patient characteristics for HPV status and outcome prediction. Such algorithms could noninvasively support clinical guidance and patient care.
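The AUC values reported above have a simple rank interpretation: the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case. A minimal sketch using toy scores (not the study's data):

```python
def auc_from_scores(pos_scores, neg_scores):
    """AUC as the normalized Mann-Whitney U statistic: the fraction of
    positive/negative pairs ranked correctly, counting ties as 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy example: model scores for HPV-positive vs HPV-negative patients
auc = auc_from_scores([0.9, 0.8, 0.55], [0.6, 0.3, 0.2])  # 8 of 9 pairs correct
```

An AUC of 0.83 thus means the model ranks a random HPV-positive patient above a random HPV-negative patient 83% of the time.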
Affiliation(s)
- Sherwin Fazelpour: Boston University Chobanian & Avedisian School of Medicine, Boston, Massachusetts, USA
- Maryam Vejdani-Jahromi: Department of Radiology, Boston Medical Center, Boston University Chobanian & Avedisian School of Medicine, Boston, Massachusetts, USA
- Artem Kaliaev: Department of Radiology, Boston Medical Center, Boston University Chobanian & Avedisian School of Medicine, Boston, Massachusetts, USA
- Edwin Qiu: Boston University Chobanian & Avedisian School of Medicine, Boston, Massachusetts, USA
- Deniz Goodman: Boston University Chobanian & Avedisian School of Medicine, Boston, Massachusetts, USA
- V Carlota Andreu-Arasa: Department of Radiology, Boston Medical Center, Boston University Chobanian & Avedisian School of Medicine, Boston, Massachusetts, USA; Department of Radiology, VA Boston Healthcare System, Boston, Massachusetts, USA
- Noriyuki Fujima: Department of Radiology, Boston Medical Center, Boston University Chobanian & Avedisian School of Medicine, Boston, Massachusetts, USA; Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Japan
- Osamu Sakai: Department of Radiology, Boston Medical Center, Boston University Chobanian & Avedisian School of Medicine, Boston, Massachusetts, USA; Department of Otolaryngology-Head and Neck Surgery, Boston Medical Center, Boston University Chobanian & Avedisian School of Medicine, Boston, Massachusetts, USA; Department of Radiation Oncology, Boston Medical Center, Boston University Chobanian & Avedisian School of Medicine, Boston, Massachusetts, USA
34
Hanna MG, Brogi E. Future Practices of Breast Pathology Using Digital and Computational Pathology. Adv Anat Pathol 2023; 30:421-433. [PMID: 37737690 DOI: 10.1097/pap.0000000000000414]
Abstract
Pathology clinical practice has evolved by adopting technological advancements initially regarded as potentially disruptive, such as electron microscopy, immunohistochemistry, and genomic sequencing. Breast pathology is a critical medical domain in which the patient's pathology diagnosis has significant implications for prognostication and treatment of disease. The advent of digital and computational pathology has brought significant advancements to the field, offering new possibilities for enhancing diagnostic accuracy and improving patient care. Digital slide scanning enables the conversion of glass slides into high-fidelity digital images, supporting the review of cases in a digital workflow. Digitization supports rendering specimen diagnoses, digital archiving of patient specimens, collaboration, and telepathology. Image analysis and machine learning-based systems layered atop the high-resolution digital images offer novel workflows to assist breast pathologists in their clinical, educational, and research endeavors. Decision support tools may improve the detection and classification of breast lesions and the quantification of immunohistochemical studies. Computational biomarkers may contribute to patient management or outcomes. Furthermore, digital and computational pathology may increase standardization and quality assurance, especially in areas with high interobserver variability. This review explores the current landscape and possible future applications of digital and computational techniques in the field of breast pathology.
Affiliation(s)
- Matthew G Hanna: Department of Pathology and Laboratory Medicine, Memorial Sloan Kettering Cancer Center, New York, NY
35
Chung Y, Lee H. Joint triplet loss with semi-hard constraint for data augmentation and disease prediction using gene expression data. Sci Rep 2023; 13:18178. [PMID: 37875602 PMCID: PMC10598120 DOI: 10.1038/s41598-023-45467-8]
Abstract
The accurate prediction of patients with complex diseases, such as Alzheimer's disease (AD), as well as disease stages, including early- and late-stage cancer, is challenging owing to substantial variability among patients and limited availability of clinical data. Deep metric learning has emerged as a promising approach for addressing these challenges by improving data representation. In this study, we propose a joint triplet loss model with a semi-hard constraint (JTSC) to represent data in a small number of samples. JTSC strictly selects semi-hard samples by switching anchors and positive samples during the learning process in triplet embedding and combines a triplet loss function with an angular loss function. Our results indicate that JTSC significantly improves the number of appropriately represented samples during training when applied to the gene expression data of AD and to cancer stage prediction tasks. Furthermore, we demonstrate that using an embedding vector from JTSC as an input to the classifiers for AD and cancer stage prediction significantly improves classification performance by extracting more accurate features. In conclusion, we show that feature embedding through JTSC can aid in classification when there are a small number of samples compared to a larger number of features.
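Semi-hard triplets are those whose negative lies farther from the anchor than the positive but still within the margin, so the loss is positive yet not dominated by outliers. A minimal sketch of semi-hard selection and the standard triplet loss follows (JTSC additionally switches anchors and positives and adds an angular loss term, which this sketch omits; the margin and example points are illustrative):

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

def semi_hard_negatives(anchor, positive, negatives, margin=0.2):
    """Keep negatives that are farther from the anchor than the positive,
    but still inside the margin: d(a,p) < d(a,n) < d(a,p) + margin."""
    d_ap = dist(anchor, positive)
    return [n for n in negatives if d_ap < dist(anchor, n) < d_ap + margin]

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss: max(0, d(a,p) - d(a,n) + margin)."""
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)

a, p = (0.0, 0.0), (1.0, 0.0)                 # d(a,p) = 1.0
negs = [(0.5, 0.0), (1.1, 0.0), (3.0, 0.0)]   # too close (hard), semi-hard, too far (easy)
selected = semi_hard_negatives(a, p, negs)    # only the semi-hard negative survives
```

Training on such triplets pulls same-class samples together without the instability of the hardest negatives, which matters when, as here, sample sizes are small.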
Affiliation(s)
- Yeonwoo Chung: School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Gwangju, 61005, Republic of Korea
- Hyunju Lee: School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Gwangju, 61005, Republic of Korea; Artificial Intelligence Graduate School, Gwangju Institute of Science and Technology, Gwangju, 61005, Republic of Korea
36
Shafi S, Parwani AV. Artificial intelligence in diagnostic pathology. Diagn Pathol 2023; 18:109. [PMID: 37784122 PMCID: PMC10546747 DOI: 10.1186/s13000-023-01375-z]
Abstract
Digital pathology (DP) is increasingly employed in cancer diagnostics, providing additional tools for faster, higher-quality, more accurate diagnosis. The practice of diagnostic pathology has gone through a staggering transformation in which new tools such as digital imaging, advanced artificial intelligence (AI) algorithms, and computer-aided diagnostic techniques are being used to assist, augment, and empower computational histopathology and AI-enabled diagnostics. This is paving the way for advances in precision medicine in cancer. Automated whole slide imaging (WSI) scanners now render diagnostic-quality, high-resolution images of entire glass slides, and combining these images with innovative digital pathology tools is making it possible to integrate imaging into all aspects of pathology reporting, including anatomical, clinical, and molecular pathology. The recent FDA approvals of WSI scanners for primary diagnosis, as well as the approval of a prostate AI algorithm, have paved the way for incorporating this technology into primary diagnosis. AI tools can provide a unique platform for innovation and advances in anatomical and clinical pathology workflows. In this review, we describe the milestones and landmark trials in the use of AI in clinical pathology, with emphasis on future directions.
Affiliation(s)
- Saba Shafi: Department of Pathology, The Ohio State University Wexner Medical Center, E409 Doan Hall, 410 West 10th Ave, Columbus, OH, 43210, USA
- Anil V Parwani: Department of Pathology, The Ohio State University Wexner Medical Center, E409 Doan Hall, 410 West 10th Ave, Columbus, OH, 43210, USA
37
Ellen JG, Jacob E, Nikolaou N, Markuzon N. Autoencoder-based multimodal prediction of non-small cell lung cancer survival. Sci Rep 2023; 13:15761. [PMID: 37737469 PMCID: PMC10517020 DOI: 10.1038/s41598-023-42365-x]
Abstract
The ability to accurately predict non-small cell lung cancer (NSCLC) patient survival is crucial for informing physician decision-making, and the increasing availability of multi-omics data offers the promise of enhancing prognosis predictions. We present a multimodal integration approach that leverages microRNA, mRNA, DNA methylation, long non-coding RNA (lncRNA) and clinical data to predict NSCLC survival and identify patient subtypes, utilizing denoising autoencoders for data compression and integration. Survival performance for patients with lung adenocarcinoma (LUAD) and squamous cell carcinoma (LUSC) was compared across modality combinations and data integration methods. Using The Cancer Genome Atlas data, our results demonstrate that survival prediction models combining multiple modalities outperform single modality models. The highest performance was achieved with a combination of only two modalities, lncRNA and clinical, at concordance indices (C-indices) of 0.69 ± 0.03 for LUAD and 0.62 ± 0.03 for LUSC. Models utilizing all five modalities achieved mean C-indices of 0.67 ± 0.04 and 0.63 ± 0.02 for LUAD and LUSC, respectively, while the best individual modality performance reached C-indices of 0.64 ± 0.03 for LUAD and 0.59 ± 0.03 for LUSC. Analysis of biological differences revealed two distinct survival subtypes with over 900 differentially expressed transcripts.
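Survival models in this study are compared by the concordance index (C-index): among comparable patient pairs, the fraction in which the model assigns the higher risk to the patient who failed earlier. A minimal sketch of Harrell's C-index on toy data (not the study's cohort):

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index. A pair (i, j) is comparable when patient i has the
    shorter follow-up time and an observed event; it is concordant when the
    model gives patient i the higher risk score (ties in risk count 0.5)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy cohort: follow-up times (months), event indicators (1 = death), predicted risks
c = concordance_index([5, 10, 20, 30], [1, 1, 0, 1], [0.9, 0.2, 0.6, 0.7])
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is the scale on which the reported values (0.59 to 0.69) should be read.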
Affiliation(s)
- Jacob G Ellen: Institute of Health Informatics, University College London, London, UK
- Etai Jacob: AstraZeneca, Oncology Data Science, Waltham, MA, USA
38
Li Z, Jiang Y, Lu M, Li R, Xia Y. Survival Prediction via Hierarchical Multimodal Co-Attention Transformer: A Computational Histology-Radiology Solution. IEEE Trans Med Imaging 2023; 42:2678-2689. [PMID: 37030860 DOI: 10.1109/tmi.2023.3263010]
Abstract
The rapid advances in deep learning-based computational pathology and radiology have demonstrated the promise of using whole slide images (WSIs) and radiology images for survival prediction in cancer patients. However, most image-based survival prediction methods are limited to using either histology or radiology alone, leaving integrated approaches across histology and radiology relatively underdeveloped. There are two main challenges in integrating WSIs and radiology images: (1) the gigapixel nature of WSIs and (2) the vast difference in spatial scales between WSIs and radiology images. To address these challenges, in this work, we propose an interpretable, weakly-supervised, multimodal learning framework, called Hierarchical Multimodal Co-Attention Transformer (HMCAT), to integrate WSIs and radiology images for survival prediction. Our approach first uses hierarchical feature extractors to capture various information including cellular features, cellular organization, and tissue phenotypes in WSIs. Then the hierarchical radiology-guided co-attention (HRCA) in HMCAT characterizes the multimodal interactions between hierarchical histology-based visual concepts and radiology features and learns hierarchical co-attention mappings for two modalities. Finally, HMCAT combines their complementary information into a multimodal risk score and discovers prognostic features from two modalities by multimodal interpretability. We apply our approach to two cancer datasets (365 WSIs with matched magnetic resonance [MR] images and 213 WSIs with matched computed tomography [CT] images). Our results demonstrate that the proposed HMCAT consistently achieves superior performance over the unimodal approaches trained on either histology or radiology data alone, as well as other state-of-the-art methods.
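At its core, co-attention lets features of one modality select relevant features of the other. A minimal sketch of scaled dot-product cross-attention in which radiology tokens attend over histology tile embeddings (the random features are illustrative, and HMCAT's hierarchical co-attention is considerably more elaborate than this single layer):

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: each query (radiology token)
    forms a softmax distribution over the keys (histology tiles) and
    returns the corresponding weighted sum of values."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)           # (n_q, n_k) similarities
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn = attn / attn.sum(axis=-1, keepdims=True)   # softmax over tiles
    return attn @ values, attn                        # radiology-guided histology summary

rng = np.random.default_rng(1)
radiology = rng.normal(size=(4, 32))     # 4 radiology feature tokens, 32-d
histology = rng.normal(size=(200, 32))   # 200 WSI tile embeddings, 32-d
summary, attn = cross_attention(radiology, histology, histology)
```

Each row of `attn` is a distribution over tiles, which is also what makes such models spatially interpretable.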
39
Cooper M, Ji Z, Krishnan RG. Machine learning in computational histopathology: Challenges and opportunities. Genes Chromosomes Cancer 2023; 62:540-556. [PMID: 37314068 DOI: 10.1002/gcc.23177]
Abstract
Digital histopathological images, high-resolution images of stained tissue samples, are a vital tool for clinicians to diagnose and stage cancers. The visual analysis of patient state based on these images is an important part of the oncology workflow. Although pathology workflows have historically been conducted in laboratories under a microscope, the increasing digitization of histopathological images has led to their analysis on computers in the clinic. The last decade has seen the emergence of machine learning, and deep learning in particular, as a powerful set of tools for the analysis of histopathological images. Machine learning models trained on large datasets of digitized histopathology slides have resulted in automated models for prediction and stratification of patient risk. In this review, we provide context for the rise of such models in computational histopathology, highlight the clinical tasks they have found success in automating, discuss the various machine learning techniques that have been applied to this domain, and underscore open problems and opportunities.
Affiliation(s)
- Michael Cooper: Department of Computer Science, University of Toronto, Toronto, Ontario, Canada; University Health Network, Toronto, Ontario, Canada; Vector Institute, Toronto, Ontario, Canada
- Zongliang Ji: Department of Computer Science, University of Toronto, Toronto, Ontario, Canada; Vector Institute, Toronto, Ontario, Canada
- Rahul G Krishnan: Department of Computer Science, University of Toronto, Toronto, Ontario, Canada; Vector Institute, Toronto, Ontario, Canada; Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
40
Wessels F, Schmitt M, Krieghoff-Henning E, Nientiedt M, Waldbillig F, Neuberger M, Kriegmair MC, Kowalewski KF, Worst TS, Steeg M, Popovic ZV, Gaiser T, von Kalle C, Utikal JS, Fröhling S, Michel MS, Nuhn P, Brinker TJ. A self-supervised vision transformer to predict survival from histopathology in renal cell carcinoma. World J Urol 2023; 41:2233-2241. [PMID: 37382622 PMCID: PMC10415487 DOI: 10.1007/s00345-023-04489-7]
Abstract
PURPOSE To develop and validate an interpretable deep learning model to predict overall and disease-specific survival (OS/DSS) in clear cell renal cell carcinoma (ccRCC). METHODS Digitised haematoxylin and eosin-stained slides from The Cancer Genome Atlas were used as a training set for a vision transformer (ViT) to extract image features with a self-supervised model called DINO (self-distillation with no labels). Extracted features were used in Cox regression models to prognosticate OS and DSS. Kaplan-Meier analyses (univariable) and Cox regression analyses (multivariable) of the DINO-ViT risk groups were performed for prediction of OS and DSS. For validation, a cohort from a tertiary care centre was used. RESULTS A significant risk stratification was achieved in univariable analysis for OS and DSS in the training (n = 443, log-rank test, p < 0.01) and validation sets (n = 266, p < 0.01). In multivariable analysis, including age, metastatic status, tumour size, and grading, the DINO-ViT risk stratification was a significant predictor for OS (hazard ratio [HR] 3.03; 95% confidence interval [95% CI] 2.11-4.35; p < 0.01) and DSS (HR 4.90; 95% CI 2.78-8.64; p < 0.01) in the training set, but only for DSS in the validation set (HR 2.31; 95% CI 1.15-4.65; p = 0.02). DINO-ViT visualisation showed that features were mainly extracted from nuclei, cytoplasm, and peritumoural stroma, demonstrating good interpretability. CONCLUSION The DINO-ViT can identify high-risk patients using histological images of ccRCC. This model might improve individual risk-adapted renal cancer therapy in the future.
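The risk groups derived from the DINO-ViT features are evaluated with Kaplan-Meier curves and log-rank tests. A minimal sketch of the Kaplan-Meier estimator on toy follow-up data (the values are illustrative, not from the study):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate S(t).

    times:  observed follow-up times for one risk group.
    events: 1 = death observed at that time, 0 = censored.
    Returns (time, S(t)) pairs at each distinct event time."""
    at_risk = len(times)
    s, curve = 1.0, []
    for t in sorted(set(times)):
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei)
        if deaths:
            s *= 1 - deaths / at_risk   # multiply in the conditional survival
            curve.append((t, s))
        at_risk -= sum(1 for ti in times if ti == t)  # remove deaths and censored
    return curve

# Toy risk group: deaths at t=2 and t=4, censoring at t=3 and t=5
curve = kaplan_meier([2, 3, 4, 5], [1, 0, 1, 0])
```

Comparing such curves between model-defined risk groups, via the log-rank test, is exactly the univariable evaluation described above.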
Affiliation(s)
- Frederik Wessels: Digital Biomarkers for Oncology Group, National Centre for Tumour Diseases (NCT), German Cancer Research Centre (DKFZ), Im Neuenheimer Feld 280, 69120, Heidelberg, Germany; Department of Urology and Urological Surgery, Medical Faculty Mannheim of Heidelberg University, University Medical Centre Mannheim, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Max Schmitt: Digital Biomarkers for Oncology Group, National Centre for Tumour Diseases (NCT), German Cancer Research Centre (DKFZ), Im Neuenheimer Feld 280, 69120, Heidelberg, Germany
- Eva Krieghoff-Henning: Digital Biomarkers for Oncology Group, National Centre for Tumour Diseases (NCT), German Cancer Research Centre (DKFZ), Im Neuenheimer Feld 280, 69120, Heidelberg, Germany
- Malin Nientiedt: Department of Urology and Urological Surgery, Medical Faculty Mannheim of Heidelberg University, University Medical Centre Mannheim, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Frank Waldbillig: Department of Urology and Urological Surgery, Medical Faculty Mannheim of Heidelberg University, University Medical Centre Mannheim, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Manuel Neuberger: Department of Urology and Urological Surgery, Medical Faculty Mannheim of Heidelberg University, University Medical Centre Mannheim, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Maximilian C Kriegmair: Department of Urology and Urological Surgery, Medical Faculty Mannheim of Heidelberg University, University Medical Centre Mannheim, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Karl-Friedrich Kowalewski: Department of Urology and Urological Surgery, Medical Faculty Mannheim of Heidelberg University, University Medical Centre Mannheim, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Thomas S Worst: Department of Urology and Urological Surgery, Medical Faculty Mannheim of Heidelberg University, University Medical Centre Mannheim, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Matthias Steeg: Institute of Pathology, Medical Faculty Mannheim of Heidelberg University, University Medical Centre Mannheim, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Zoran V Popovic: Institute of Pathology, Medical Faculty Mannheim of Heidelberg University, University Medical Centre Mannheim, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Timo Gaiser: Institute of Pathology, Medical Faculty Mannheim of Heidelberg University, University Medical Centre Mannheim, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Christof von Kalle: Department of Clinical-Translational Sciences, Berlin Institute of Health (BIH), Charité University Medicine, Berlin, Germany
- Jochen S Utikal: Skin Cancer Unit, German Cancer Research Centre (DKFZ), Heidelberg, Germany; Department of Dermatology, Venereology and Allergology, University Medical Centre Mannheim, University of Heidelberg, Heidelberg, Germany
- Stefan Fröhling: National Centre for Tumour Diseases, German Cancer Research Centre, Heidelberg, Germany
- Maurice S Michel: Department of Urology and Urological Surgery, Medical Faculty Mannheim of Heidelberg University, University Medical Centre Mannheim, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Philipp Nuhn: Department of Urology and Urological Surgery, Medical Faculty Mannheim of Heidelberg University, University Medical Centre Mannheim, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Titus J Brinker: Digital Biomarkers for Oncology Group, National Centre for Tumour Diseases (NCT), German Cancer Research Centre (DKFZ), Im Neuenheimer Feld 280, 69120, Heidelberg, Germany
41
Zhou J, Foroughi Pour A, Deirawan H, Daaboul F, Aung TN, Beydoun R, Ahmed FS, Chuang JH. Integrative deep learning analysis improves colon adenocarcinoma patient stratification at risk for mortality. EBioMedicine 2023; 94:104726. [PMID: 37499603 PMCID: PMC10388166 DOI: 10.1016/j.ebiom.2023.104726]
Abstract
BACKGROUND Colorectal cancers are the fourth most diagnosed cancer and the second leading cancer in number of deaths. Many clinical variables, pathological features, and genomic signatures are associated with patient risk, but reliable patient stratification in the clinic remains a challenging task. Here we assess how image, clinical, and genomic features can be combined to predict risk. METHODS We developed and evaluated integrative deep learning models combining formalin-fixed, paraffin-embedded (FFPE) whole slide images (WSIs), clinical variables, and mutation signatures to stratify colon adenocarcinoma (COAD) patients based on their risk of mortality. Our models were trained using a dataset of 108 patients from The Cancer Genome Atlas (TCGA) and were externally validated on a newly generated dataset from Wayne State University (WSU) of 123 COAD patients and on rectal adenocarcinoma (READ) patients in TCGA (N = 52). FINDINGS We first observe that deep learning models trained on FFPE WSIs of TCGA-COAD separate high-risk (OS < 3 years, N = 38) and low-risk (OS > 5 years, N = 25) patients (AUC = 0.81 ± 0.08, 5-year survival p < 0.0001, 5-year relative risk = 1.83 ± 0.04), though such models are less effective at predicting overall survival (OS) for moderate-risk (3 years < OS < 5 years, N = 45) patients (5-year survival p = 0.5, 5-year relative risk = 1.05 ± 0.09). We find that our integrative models combining WSIs, clinical variables, and mutation signatures can improve patient stratification for moderate-risk patients (5-year survival p < 0.0001, 5-year relative risk = 1.87 ± 0.07). Our integrative model combining image and clinical variables is also effective on an independent pathology dataset (WSU-COAD, N = 123) generated by our team (5-year survival p < 0.0001, 5-year relative risk = 1.52 ± 0.08) and on the TCGA-READ data (5-year survival p < 0.0001, 5-year relative risk = 1.18 ± 0.17). Our multicenter integrative image and clinical model trained on combined TCGA-COAD and WSU-COAD data is effective in predicting risk on TCGA-READ (5-year survival p < 0.0001, 5-year relative risk = 1.82 ± 0.13). Pathologist review of image-based heatmaps suggests that nuclear size pleomorphism, intense cellularity, and abnormal structures are associated with high risk, while low-risk regions have more regular and smaller cells. Quantitative analysis shows that high cellularity, high ratios of tumor cells, large tumor nuclei, and low immune infiltration are indicators of high-risk tiles. INTERPRETATION The improved stratification of colorectal cancer patients from our computational methods can be beneficial for treatment plans and enrollment of patients in clinical trials. FUNDING This study was supported by the National Cancer Institute (Grant No. R01CA230031 and P30CA034196). The funders had no roles in study design, data collection and analysis, or preparation of the manuscript.
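The 5-year relative risk figures quoted in this abstract are ratios of group-wise death proportions within five years; a minimal sketch of that arithmetic, with made-up counts (not the study's data):

```python
def five_year_relative_risk(deaths_high, n_high, deaths_low, n_low):
    """Relative risk of death within 5 years: high-risk vs. low-risk group."""
    risk_high = deaths_high / n_high
    risk_low = deaths_low / n_low
    return risk_high / risk_low

# Hypothetical counts for illustration only (not from the study):
rr = five_year_relative_risk(deaths_high=22, n_high=40, deaths_low=12, n_low=40)
print(round(rr, 2))  # 0.55 / 0.30 -> 1.83
```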
Affiliation(s)
- Jie Zhou: The Jackson Laboratory for Genomic Medicine, Farmington, CT, USA; Department of Genetics and Genome Sciences, UCONN Health, Farmington, CT, USA
- Hany Deirawan: Department of Pathology, Wayne State University, Detroit, MI, USA; Department of Dermatology, Wayne State University, Detroit, MI, USA
- Fayez Daaboul: Department of Pathology, Wayne State University, Detroit, MI, USA
- Thazin Nwe Aung: Department of Pathology, Yale University, New Haven, CT, USA
- Rafic Beydoun: Department of Pathology, Wayne State University, Detroit, MI, USA
- Jeffrey H Chuang: The Jackson Laboratory for Genomic Medicine, Farmington, CT, USA; Department of Genetics and Genome Sciences, UCONN Health, Farmington, CT, USA
42
Sun B, Chen L. Interpretable deep learning for improving cancer patient survival based on personal transcriptomes. Sci Rep 2023; 13:11344. [PMID: 37443344; PMCID: PMC10344908; DOI: 10.1038/s41598-023-38429-7]
Abstract
Precision medicine chooses the optimal drug for a patient by considering individual differences. With the tremendous amount of data accumulated for cancers, we develop an interpretable neural network to predict cancer patient survival based on drug prescriptions and personal transcriptomes (CancerIDP). The deep learning model achieves 96% classification accuracy in distinguishing short-lived from long-lived patients. The Pearson correlation between predicted and actual months-to-death values is as high as 0.937. About 27.4% of patients may survive longer with an alternative medicine chosen by our deep learning model. The median survival time of all patients can increase by 3.9 months. Our interpretable neural network model reveals the most discriminating pathways in the decision-making process, which will further facilitate mechanistic studies of drug development for cancers.
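The reported Pearson correlation of 0.937 between predicted and actual months-to-death follows the standard formula; a dependency-free sketch with illustrative toy values (not the paper's predictions):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy predicted vs. actual months-to-death (illustrative values only):
predicted = [10.0, 24.0, 36.0, 60.0]
actual = [12.0, 20.0, 40.0, 55.0]
print(round(pearson_r(predicted, actual), 3))
```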
Affiliation(s)
- Bo Sun: Department of Quantitative and Computational Biology, University of Southern California, 1050 Childs Way, Los Angeles, CA, 90089, USA
- Liang Chen: Department of Quantitative and Computational Biology, University of Southern California, 1050 Childs Way, Los Angeles, CA, 90089, USA
43
Abousamra S, Gupta R, Kurc T, Samaras D, Saltz J, Chen C. Topology-Guided Multi-Class Cell Context Generation for Digital Pathology. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit 2023; 2023:3323-3333. [PMID: 38741683; PMCID: PMC11090253; DOI: 10.1109/cvpr52729.2023.00324]
Abstract
In digital pathology, the spatial context of cells is important for cell classification, cancer diagnosis and prognosis. To model such complex cell context, however, is challenging. Cells form different mixtures, lineages, clusters and holes. To model such structural patterns in a learnable fashion, we introduce several mathematical tools from spatial statistics and topological data analysis. We incorporate such structural descriptors into a deep generative model as both conditional inputs and a differentiable loss. This way, we are able to generate high quality multi-class cell layouts for the first time. We show that the topology-rich cell layouts can be used for data augmentation and improve the performance of downstream tasks such as cell classification.
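Among the spatial-statistics descriptors this abstract alludes to, Ripley's K function is a common choice for characterizing cell layouts; a naive, edge-correction-free sketch on a toy point pattern (the paper's actual descriptors and differentiable loss are not reproduced here):

```python
import math

def ripley_k(points, r, area):
    """Naive Ripley's K estimate (no edge correction):
    K(r) = area / (n * (n - 1)) * #{ordered pairs (i, j), i != j, dist <= r}."""
    n = len(points)
    pairs = 0
    for i, (xi, yi) in enumerate(points):
        for j, (xj, yj) in enumerate(points):
            if i != j and math.hypot(xi - xj, yi - yj) <= r:
                pairs += 1
    return area * pairs / (n * (n - 1))

# Four cells at the corners of a unit square (toy layout):
pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(ripley_k(pts, r=1.1, area=1.0))  # counts 8 of 12 ordered pairs
```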
Affiliation(s)
- Rajarsi Gupta: Stony Brook University, Department of Biomedical Informatics, USA
- Tahsin Kurc: Stony Brook University, Department of Biomedical Informatics, USA
- Joel Saltz: Stony Brook University, Department of Biomedical Informatics, USA
- Chao Chen: Stony Brook University, Department of Biomedical Informatics, USA
44
Dehkharghanian T, Bidgoli AA, Riasatian A, Mazaheri P, Campbell CJV, Pantanowitz L, Tizhoosh HR, Rahnamayan S. Biased data, biased AI: deep networks predict the acquisition site of TCGA images. Diagn Pathol 2023; 18:67. [PMID: 37198691; DOI: 10.1186/s13000-023-01355-3]
Abstract
BACKGROUND Deep learning models applied to healthcare applications, including digital pathology, have been increasing in scope and importance in recent years. Many of these models have been trained on The Cancer Genome Atlas (TCGA) atlas of digital images or use it as a validation source. One crucial factor that seems to have been widely ignored is the internal bias that originates from the institutions that contributed WSIs to the TCGA dataset, and its effects on models trained on this dataset. METHODS 8,579 paraffin-embedded, hematoxylin and eosin stained digital slides were selected from the TCGA dataset. More than 140 medical institutions (acquisition sites) contributed to this dataset. Two deep neural networks (DenseNet121 and KimiaNet) were used to extract deep features at 20× magnification. DenseNet was pre-trained on non-medical objects; KimiaNet has the same structure but was trained for cancer type classification on TCGA images. The extracted deep features were later used to detect each slide's acquisition site and for slide representation in image search. RESULTS DenseNet's deep features could distinguish acquisition sites with 70% accuracy, whereas KimiaNet's deep features could reveal acquisition sites with more than 86% accuracy. These findings suggest that there are acquisition-site-specific patterns that can be picked up by deep neural networks. It has also been shown that these medically irrelevant patterns can interfere with other applications of deep learning in digital pathology, namely image search. This study shows that there are acquisition-site-specific patterns that can be used to identify tissue acquisition sites without any explicit training. Furthermore, it was observed that a model trained for cancer subtype classification has exploited such medically irrelevant patterns to classify cancer types.
Digital scanner configuration and noise, tissue stain variation and artifacts, and source site patient demographics are among factors that likely account for the observed bias. Therefore, researchers should be cautious of such bias when using histopathology datasets for developing and training deep networks.
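The kind of site signature described above can be probed even with a trivial classifier: if each acquisition site shifts feature vectors by a consistent offset (e.g., a stain or scanner fingerprint), a nearest-centroid model recovers the site almost perfectly. A synthetic sketch, where the offsets, noise level, and site names are all invented and nothing of the study's actual pipeline is reproduced:

```python
import random

random.seed(0)
DIM = 8
# Hypothetical per-site "signatures" (e.g., stain/scanner offsets), plus noise:
site_offsets = {"site_A": 0.0, "site_B": 1.5, "site_C": -1.5}

def make_feature(site):
    return [site_offsets[site] + random.gauss(0, 0.3) for _ in range(DIM)]

train = [(make_feature(s), s) for s in site_offsets for _ in range(20)]
test_set = [(make_feature(s), s) for s in site_offsets for _ in range(10)]

# Nearest-centroid classifier over the training features:
centroids = {}
for s in site_offsets:
    feats = [f for f, lab in train if lab == s]
    centroids[s] = [sum(col) / len(feats) for col in zip(*feats)]

def predict(f):
    return min(centroids,
               key=lambda s: sum((a - b) ** 2 for a, b in zip(f, centroids[s])))

accuracy = sum(predict(f) == lab for f, lab in test_set) / len(test_set)
print(accuracy)  # well-separated site signatures are recovered almost perfectly
```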
Affiliation(s)
- Taher Dehkharghanian: University Health Network, Toronto, ON, Canada; Department of Pathology and Molecular Medicine, Faculty of Health Science, McMaster University, Hamilton, ON, Canada
- Azam Asilian Bidgoli: Nature Inspired Computational Intelligence (NICI), Ontario Tech University, Oshawa, ON, Canada; Nature Inspired Computational Intelligence (NICI) Lab, Department of Engineering, Brock University, 1812 Sir Isaac Brock Way, St. Catharines, ON, L2S 3A1, Canada; Bharti School of Engineering and Computer Science, Laurentian University, Sudbury, ON, Canada
- Pooria Mazaheri: Nature Inspired Computational Intelligence (NICI), Ontario Tech University, Oshawa, ON, Canada
- Clinton J V Campbell: Department of Pathology and Molecular Medicine, Faculty of Health Science, McMaster University, Hamilton, ON, Canada; William Osler Health System, Brampton, ON, Canada
- H R Tizhoosh: KIMIA Lab, University of Waterloo, Waterloo, ON, Canada; Rhazes Lab, Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, MN, USA
- Shahryar Rahnamayan: Nature Inspired Computational Intelligence (NICI), Ontario Tech University, Oshawa, ON, Canada; Nature Inspired Computational Intelligence (NICI) Lab, Department of Engineering, Brock University, 1812 Sir Isaac Brock Way, St. Catharines, ON, L2S 3A1, Canada
45
Jiang S, Suriawinata AA, Hassanpour S. MHAttnSurv: Multi-head attention for survival prediction using whole-slide pathology images. Comput Biol Med 2023; 158:106883. [PMID: 37031509; PMCID: PMC10148238; DOI: 10.1016/j.compbiomed.2023.106883]
Abstract
Whole slide image (WSI)-based survival prediction has attracted increasing interest in pathology. Despite this, extracting prognostic information from WSIs remains a challenging task due to their enormous size and the scarcity of pathologist annotations. Previous studies have utilized a multiple instance learning approach to combine information from several randomly sampled patches, but this approach may not be adequate, as different visual patterns may contribute unequally to prognosis prediction. In this study, we introduce a multi-head attention mechanism that allows each attention head to independently explore the utility of various visual patterns on a tumor slide, thereby enabling more comprehensive information extraction from WSIs. We evaluated our approach on four cancer types from The Cancer Genome Atlas database. Our model achieved an average c-index of 0.640, outperforming three existing state-of-the-art approaches for WSI-based survival prediction on these datasets. Visualization of attention maps reveals that the attention heads synergistically focus on different morphological patterns, providing additional evidence for the effectiveness of multi-head attention in survival prediction.
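The c-index reported above (Harrell's concordance index) measures how often the model ranks patients' risks in the same order as their observed survival; a minimal sketch on a toy cohort (values are illustrative, not from the paper):

```python
def concordance_index(times, events, risks):
    """Harrell's c-index: fraction of comparable pairs where the patient who
    died earlier was assigned the higher risk score. Ties in risk count 0.5."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # Pair is comparable if patient i had an observed event before time j.
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy cohort: survival times (months), event indicators, model risk scores:
times = [5, 10, 20, 30]
events = [1, 1, 0, 1]
risks = [0.9, 0.7, 0.4, 0.2]
print(concordance_index(times, events, risks))  # perfectly concordant -> 1.0
```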
Affiliation(s)
- Shuai Jiang: Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, NH, 03755, USA
- Arief A Suriawinata: Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, NH, 03756, USA
- Saeed Hassanpour: Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, NH, 03755, USA; Department of Computer Science, Dartmouth College, Hanover, NH, 03755, USA; Department of Epidemiology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03755, USA
46
Song Q, Muller KE, Hondelink LM, diFlorio-Alexander RM, Karagas M, Hassanpour S. Non-Metastatic Axillary Lymph Nodes Have Distinct Morphology and Immunophenotype in Obese Breast Cancer Patients at Risk for Metastasis. medRxiv 2023:2023.04.14.23288545. [PMID: 37131732; PMCID: PMC10153305; DOI: 10.1101/2023.04.14.23288545]
Abstract
Obese breast cancer patients have worse outcomes than normal-weight patients, including a 50% to 80% increased rate of axillary nodal metastasis. Recent studies have shown a potential link between increased lymph node adipose tissue and breast cancer nodal metastasis. Further investigation into potential mechanisms underlying this link may reveal the prognostic utility of fat-enlarged lymph nodes in breast cancer patients. In this study, a deep learning framework was developed to identify morphological differences in non-metastatic axillary nodes between node-positive and node-negative obese breast cancer patients. Pathology review of the model-selected patches found an increase in the average size of adipocytes (p = 0.004), an increased amount of white space between lymphocytes (p < 0.0001), and an increased amount of red blood cells (p < 0.001) in non-metastatic lymph nodes of node-positive breast cancer patients. Our downstream immunohistochemistry (IHC) analysis showed decreased CD3 expression and increased leptin expression in fat-replaced axillary lymph nodes of obese node-positive patients. In summary, our findings suggest a novel direction for further investigating the crosstalk between lymph node adiposity, lymphatic dysfunction, and breast cancer nodal metastasis.
47
Mandair D, Reis-Filho JS, Ashworth A. Biological insights and novel biomarker discovery through deep learning approaches in breast cancer histopathology. NPJ Breast Cancer 2023; 9:21. [PMID: 37024522; PMCID: PMC10079681; DOI: 10.1038/s41523-023-00518-1]
Abstract
Breast cancer remains a highly prevalent disease with considerable inter- and intra-tumoral heterogeneity, complicating prognostication and treatment decisions. The utilization and depth of genomic, transcriptomic, and proteomic data for cancer have exploded in recent times, and the addition of spatial context to this information, by understanding the correlated morphologic and spatial patterns of cells in tissue samples, has created an exciting frontier of research, histo-genomics. At the same time, deep learning (DL), a class of machine learning algorithms employing artificial neural networks, has progressed rapidly in the last decade through a confluence of technical developments: the advent of modern graphics processing units (GPUs), allowing efficient implementation of increasingly complex architectures at scale; advances in the theoretical and practical design of network architectures; and access to larger datasets for training, all leading to sweeping advances in image classification and object detection. In this review, we examine recent developments in the application of DL to breast cancer histology, with particular emphasis on those producing biological insights or novel biomarkers, spanning the extraction of genomic information to the use of stroma to predict cancer recurrence, with the aim of suggesting avenues for further advancing this exciting field.
Affiliation(s)
- Divneet Mandair: UCSF Helen Diller Family Comprehensive Cancer Center, San Francisco, CA, 94158, USA
- Alan Ashworth: UCSF Helen Diller Family Comprehensive Cancer Center, San Francisco, CA, 94158, USA
48
Dai J, Wang H, Xu Y, Chen X, Tian R. Clinical application of AI-based PET images in oncological patients. Semin Cancer Biol 2023; 91:124-142. [PMID: 36906112; DOI: 10.1016/j.semcancer.2023.03.005]
Abstract
Based on the advantages of revealing the functional status and molecular expression of tumor cells, positron emission tomography (PET) imaging has been performed in numerous types of malignant diseases for diagnosis and monitoring. However, insufficient image quality, the lack of a convincing evaluation tool and intra- and interobserver variation in human work are well-known limitations of nuclear medicine imaging and restrict its clinical application. Artificial intelligence (AI) has gained increasing interest in the field of medical imaging due to its powerful information collection and interpretation ability. The combination of AI and PET imaging potentially provides great assistance to physicians managing patients. Radiomics, an important branch of AI applied in medical imaging, can extract hundreds of abstract mathematical features of images for further analysis. In this review, an overview of the applications of AI in PET imaging is provided, focusing on image enhancement, tumor detection, response and prognosis prediction and correlation analyses with pathology or specific gene mutations in several types of tumors. Our aim is to describe recent clinical applications of AI-based PET imaging in malignant diseases and to focus on the description of possible future developments.
Affiliation(s)
- Jiaona Dai: Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China
- Hui Wang: Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China
- Yuchao Xu: School of Nuclear Science and Technology, University of South China, Hengyang City 421001, China
- Xiyang Chen: Division of Vascular Surgery, Department of General Surgery, West China Hospital, Sichuan University, Chengdu 610041, China
- Rong Tian: Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China
49
Nanoscale Prognosis of Colorectal Cancer Metastasis from AFM Image Processing of Histological Sections. Cancers (Basel) 2023; 15:1220. [PMID: 36831563; PMCID: PMC9953928; DOI: 10.3390/cancers15041220]
Abstract
Early ascertainment of metastatic tumour phases is crucial to improve cancer survival, formulate an accurate prognostic report of disease advancement, and, most importantly, quantify the metastatic progression and malignancy state of primary cancer cells with a universal numerical indexing system. This work proposes an early improvement to metastatic cancer detection with 97.7 nm spatial resolution by indexing the metastatic cancer phases from the analysis of atomic force microscopy images of human colorectal cancer histological sections. The procedure applies variograms of residuals of Gaussian filtering and theta statistics of colorectal cancer tissue image settings. This methodology elucidates the early metastatic progression at the nanoscale level by setting metastatic indexes and critical thresholds based on relatively large histological sections and categorising the malignancy state of a few suspicious cells not identified with optical image analysis. In addition, we sought to detect early tiny morphological differentiations indicating potential cell transition from epithelial cell phenotypes of low metastatic potential to those of high metastatic potential. This metastatic differentiation, which is also identified in higher moments of variograms, sets different hierarchical levels for metastatic progression dynamics.
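The variograms of Gaussian-filter residuals used above build on the empirical semivariogram; a minimal 1-D sketch on an invented height profile (not the paper's AFM data, filtering, or parameters):

```python
def semivariogram(values, lag):
    """Empirical semivariogram at a given lag for a 1-D profile:
    gamma(h) = 1 / (2 * N(h)) * sum_i (z[i + h] - z[i])^2."""
    diffs = [(values[i + lag] - values[i]) ** 2 for i in range(len(values) - lag)]
    return sum(diffs) / (2 * len(diffs))

# Toy height profile (e.g., one AFM scan line), illustrative values only:
profile = [0.0, 1.0, 0.0, 1.0, 0.0]
print(semivariogram(profile, lag=1))  # 4 squared diffs of 1 -> 4 / 8 = 0.5
print(semivariogram(profile, lag=2))  # identical values at lag 2 -> 0.0
```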
50
Takagi Y, Hashimoto N, Masuda H, Miyoshi H, Ohshima K, Hontani H, Takeuchi I. Transformer-based personalized attention mechanism for medical images with clinical records. J Pathol Inform 2023; 14:100185. [PMID: 36691660; PMCID: PMC9860154; DOI: 10.1016/j.jpi.2022.100185]
Abstract
In medical image diagnosis, identifying the attention region, i.e., the region of interest on which the diagnosis is based, is an important task. Various methods have been developed to automatically identify target regions from given medical images. However, in actual medical practice, the diagnosis is made based on both the images and various clinical records. Consequently, pathologists examine medical images with prior knowledge of the patients, and the attention regions may change depending on the clinical records. In this study, we propose a method, called the Personalized Attention Mechanism (PersAM), by which the attention regions in medical images are adaptively determined according to the clinical records. The primary idea underlying the PersAM method is the encoding of the relationships between medical images and clinical records using a variant of the Transformer architecture. To demonstrate the effectiveness of the PersAM method, we applied it to a large-scale digital pathology problem: identifying the subtypes of 842 malignant lymphoma patients based on their gigapixel whole-slide images and clinical records.
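PersAM builds on the Transformer's attention mechanism; a dependency-free sketch of single-head scaled dot-product attention, softmax(QK^T / sqrt(d)) V, with toy numbers (illustrative only, not the authors' implementation):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[c] for w, v in zip(weights, values))
                    for c in range(len(values[0]))])
    return out

# One query attending over two key/value slots (toy numbers):
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
print(attention(Q, K, V))  # attention weight tilts toward the first slot
```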
Affiliation(s)
- Yusuke Takagi: Department of Computer Science, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 4668555, Japan
- Noriaki Hashimoto: RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 1030027, Japan
- Hiroki Masuda: Department of Computer Science, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 4668555, Japan
- Hiroaki Miyoshi: Department of Pathology, Kurume University School of Medicine, 67 Asahi-machi, Kurume 8300011, Japan
- Koichi Ohshima: Department of Pathology, Kurume University School of Medicine, 67 Asahi-machi, Kurume 8300011, Japan
- Hidekata Hontani: Department of Computer Science, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 4668555, Japan
- Ichiro Takeuchi: RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 1030027, Japan; Department of Mechanical Systems Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 4648603, Japan