Review | Open Access
Copyright ©The Author(s) 2026. Published by Baishideng Publishing Group Inc. All rights reserved.
Artif Intell Gastroenterol. Jan 8, 2026; 7(1): 115498
Published online Jan 8, 2026. doi: 10.35712/aig.v7.i1.115498
Multimodal artificial intelligence integrates imaging, endoscopic, and omics data for intelligent decision-making in individualized gastrointestinal tumor treatment
Hui Nian, Zhi-Long Zhang, Qian-Cheng Du, Department of Thoracic Surgery, Shanghai Xuhui Central Hospital, Shanghai 200031, China
Yi-Bin Wu, Yu Bai, Department of Intensive Care Unit, Shanghai Xuhui Central Hospital, Shanghai 200031, China
Xiao-Huang Tu, Qi-Zhi Liu, De-Hua Zhou, Department of Gastrointestinal Surgery, Shanghai Fourth People’s Hospital Affiliated to Tongji University School of Medicine, Shanghai 200434, China
ORCID number: Hui Nian (0009-0001-1152-7073); Yi-Bin Wu (0009-0004-0527-8017); Yu Bai (0009-0001-4273-8383); Zhi-Long Zhang (0009-0009-2700-622X); Xiao-Huang Tu (0009-0001-0880-8456); Qi-Zhi Liu (0009-0005-5155-6433); De-Hua Zhou (0000-0003-2877-7746); Qian-Cheng Du (0000-0002-0154-2210).
Co-corresponding authors: De-Hua Zhou and Qian-Cheng Du.
Author contributions: Nian H contributed to project conception, research design, drafting the initial manuscript, and project administration; Bai Y contributed to methodology design, formal analysis, literature curation, and critical review and revision of the manuscript; Zhang ZL and Wu YB conducted literature investigation and visualization, including figure preparation; Tu XH and Liu QZ provided resources, software support, and performed experimental validation; Zhou DH oversaw the entire research process and contributed to manuscript revision and finalization; Du QC served as the corresponding author, providing overall supervision, performing key revisions, and giving final approval of the manuscript; all authors have read and approved the final version of the manuscript. In this study, Zhou DH and Du QC are designated as co-corresponding authors for the following reasons. First, Zhou DH oversaw the entire research process, including project conception and design, data integration, and manuscript revision and finalization, thereby ensuring the methodological rigor and scientific validity of the work. Du QC also provided comprehensive oversight, contributed critical intellectual revisions, and gave final approval of the version to be published, playing a pivotal role in maintaining the manuscript’s academic quality and guiding it through the publication process. Second, the author contribution statement explicitly indicates that both "contributed equally," reflecting their parallel engagement in leadership, cross-departmental coordination-particularly between thoracic surgery and intensive care-and the integration of multi-center data. This co-corresponding arrangement not only reinforces shared accountability but also enhances transparency in interdisciplinary collaboration, aligns with international journal standards regarding corresponding authorship, and supports the credibility and reproducibility of the findings. 
In conclusion, the joint corresponding authorship reflects their indispensable academic leadership and equivalent scholarly contributions, thereby upholding the integrity and ethical standards of the research.
Supported by Xuhui District Health Commission, No. SHXH202214.
Conflict-of-interest statement: All the authors report no relevant conflicts of interest for this article.
Open Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: https://creativecommons.org/Licenses/by-nc/4.0/
Corresponding author: Qian-Cheng Du, MD, Department of Thoracic Surgery, Shanghai Xuhui Central Hospital, No. 366 Longchuan North Road, Xuhui District, Shanghai 200031, China. duqc1991106@sina.com
Received: October 20, 2025
Revised: November 4, 2025
Accepted: December 18, 2025
Published online: January 8, 2026
Processing time: 80 Days and 18.1 Hours

Abstract

Gastrointestinal tumors require personalized treatment strategies due to their heterogeneity and complexity. Multimodal artificial intelligence (AI) addresses this challenge by integrating diverse data sources-including computed tomography (CT), magnetic resonance imaging (MRI), endoscopic imaging, and genomic profiles-to enable intelligent decision-making for individualized therapy. This approach leverages AI algorithms to fuse imaging, endoscopic, and omics data, facilitating comprehensive characterization of tumor biology, prediction of treatment response, and optimization of therapeutic strategies. By combining CT and MRI for structural assessment, endoscopic data for real-time visual inspection, and genomic information for molecular profiling, multimodal AI enhances the accuracy of patient stratification and treatment personalization. The clinical implementation of this technology demonstrates potential for improving patient outcomes, advancing precision oncology, and supporting individualized care in gastrointestinal cancers. Ultimately, multimodal AI serves as a transformative tool in oncology, bridging data integration with clinical application to effectively tailor therapies.

Key Words: Multimodal artificial intelligence; Gastrointestinal tumors; Individualized therapy; Intelligent diagnosis; Treatment optimization; Prognostic prediction; Data fusion; Deep learning; Precision medicine

Core Tip: This review highlights that multimodal artificial intelligence (AI), by integrating imaging, endoscopic, and multi-omics data, is revolutionizing the intelligent decision-making process for individualized gastrointestinal tumor therapy. It enhances precision across the entire clinical spectrum-from improving early detection and accurate staging, to optimizing treatment planning and prognostic assessment. The key to its success lies in effectively addressing challenges related to data fusion, model interpretability, and multicenter validation. Ultimately, multimodal AI serves as a pivotal translational bridge, connecting complex data analysis with actionable clinical insights to advance precision oncology.



INTRODUCTION

Gastrointestinal tumors are among the most common malignant neoplasms encountered in clinical practice, including major pathological types such as gastric cancer, colorectal cancer, gastrointestinal stromal tumors (GIST), and neuroendocrine tumors (NETs). These tumors exhibit marked heterogeneity and complex pathobiological characteristics, manifested not only by diverse histological morphologies but also by substantial molecular and genetic alterations. For instance, GISTs originate from interstitial cells of Cajal or related precursor cells in the gastrointestinal tract. The majority of cases harbor activating mutations in KIT or PDGFRA genes, while a subset of wild-type GISTs lacks these driver mutations and displays distinct, more heterogeneous molecular mechanisms. These genetic differences profoundly influence tumor biology and response to targeted therapies, thereby shaping clinical outcomes and guiding therapeutic decisions[1,2]. Furthermore, NETs display variable differentiation grades and proliferation indices (e.g., Ki-67), leading to considerable variability in clinical presentation and treatment efficacy[3,4]. This extensive heterogeneity challenges the conventional, single-modality diagnostic and treatment approaches, rendering them insufficient for achieving the goals of precision medicine in gastrointestinal oncology.

In recent years, the rapid advancement of medical imaging technologies [e.g., computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET)/CT], endoscopic techniques, and omics approaches-including genomics, transcriptomics, and proteomics-has led to an exponential increase in gastrointestinal tumor-related data. Conventional data analysis methods are limited in their ability to effectively integrate and extract meaningful insights from these heterogeneous, multi-source, and high-dimensional datasets, thereby constraining a comprehensive understanding of tumor biology and impeding the optimization of clinical decision-making[5,6]. In this context, multimodal artificial intelligence (AI) has emerged as a transformative solution, capable of integrating diverse data modalities such as imaging, histopathology, genomic profiles, and clinical records to enable deep tumor phenotyping and intelligent analysis, thus offering robust support for personalized treatment planning. For instance, multimodal AI models have demonstrated superior accuracy (ACC) and reliability in diagnosing gastrointestinal tumors, determining disease stage, predicting therapeutic response, and assessing prognosis, facilitating precise therapeutic targeting and real-time treatment adaptation[7,8].

Furthermore, treatment strategies for gastrointestinal tumors are increasingly transitioning toward precision medicine, with a strong emphasis on developing personalized therapeutic approaches tailored to individual patients' molecular profiles and the underlying tumor heterogeneity. Multimodal AI serves as a critical enabler in this transformation: By integrating diverse data sources, it not only uncovers the intrinsic biological mechanisms of tumors but also supports clinicians in making more accurate and evidence-based decisions within complex clinical settings. For instance, the integration of circulating tumor DNA (ctDNA) detection from liquid biopsies with imaging and clinical data has become essential for assessing recurrence risk and monitoring treatment response-applications that depend fundamentally on the efficient fusion and intelligent interpretation of multimodal information[9,10]. Moreover, AI-driven analytics have accelerated the identification of novel diagnostic biomarkers and therapeutic targets, thereby enhancing the precise deployment of emerging treatment modalities, including immunotherapy[11].

In conclusion, the complex heterogeneity and multi-dimensional pathological features of gastrointestinal tumors render traditional single-modality diagnostic approaches insufficient for achieving the goals of precision medicine in clinical practice. Multimodal AI, through the integration of imaging, endoscopic, omics, and clinical data, enables a comprehensive and in-depth characterization of tumor biology and clinical behavior, thereby facilitating the development and implementation of personalized treatment strategies. With ongoing technological advancements and the growing availability of large-scale biomedical data, the effective integration and utilization of multi-dimensional information to enhance the ACC, reliability, and clinical applicability of intelligent decision-support systems have emerged as key challenges and focal points in gastrointestinal oncology research and translation. This review aims to provide a systematic summary of the current applications, core methodologies, and future trajectories of multimodal AI in intelligent decision-making for gastrointestinal tumors, with the goal of accelerating its adoption in routine clinical practice and advancing the standard of precision care.

MULTIMODAL DATA TYPES AND THEIR APPLICATIONS IN GASTROINTESTINAL TUMORS
Imaging data

Imaging techniques play a critical role in the diagnosis, staging, and treatment assessment of gastrointestinal tumors, with key modalities including CT, MRI, and PET. Each modality offers distinct advantages and provides complementary tumor information, thereby establishing a robust foundation for personalized treatment planning. CT, owing to its high spatial resolution and rapid acquisition speed, is widely employed for initial tumor localization and extent evaluation, particularly excelling in morphological characterization and the assessment of relationships with adjacent anatomical structures. MRI, by virtue of its superior soft tissue contrast and multi-parametric imaging capabilities-especially diffusion-weighted imaging and dynamic contrast-enhanced imaging-is particularly well-suited for evaluating tumor histological features and the tumor microenvironment, supporting accurate staging and pre-therapeutic evaluation. PET, particularly when integrated with CT (PET-CT), enables sensitive detection of tumor biological activity through metabolic imaging, facilitating the identification of early metastases and monitoring of therapeutic response, thus playing a vital role in the systemic management of gastrointestinal tumors[12].

The integration of high spatial resolution and functional information in imaging data is essential for achieving precise tumor localization, accurate staging, and dynamic monitoring of treatment response. High-resolution imaging enables detailed visualization of tumor margins and internal architecture, allowing for accurate quantification of tumor size and extent of invasion, which directly informs surgical resection planning and radiotherapy target volume delineation. Concurrently, functional imaging parameters-such as the apparent diffusion coefficient derived from MRI and the standardized uptake value from PET-provide insights into tumor cellularity and metabolic activity, facilitating early detection of treatment sensitivity or resistance and supporting timely therapeutic adjustments. For example, diffusion-weighted MRI has demonstrated high sensitivity in prognostic evaluation of GISTs, whereas PET-CT exhibits superior performance in identifying small metastatic lesions and assessing postoperative recurrence risk[12,13].

In recent years, driven by the rapid advancement of AI technologies, deep learning-based image analysis algorithms have been increasingly adopted in the processing of gastrointestinal tumor imaging data. These AI-driven approaches enable efficient and standardized image interpretation through automated tumor segmentation, substantially reducing inter-observer variability and subjective bias. For example, convolutional neural network (CNN)-based models can accurately identify and delineate gastrointestinal tumor boundaries, thereby enhancing diagnostic ACC and reproducibility[14]. Furthermore, AI exhibits distinct advantages in image feature extraction and quantitative analysis, capable of identifying radiomic features that are imperceptible to human vision from vast imaging datasets. Such features have been shown to correlate strongly with tumor molecular profiles, treatment response, and clinical outcomes. By integrating radiomics with clinical data into multimodal AI frameworks, these models facilitate a deeper understanding of tumor heterogeneity and enable more precise outcome prediction. For instance, MRI-based radiomic signatures combined with machine learning techniques have demonstrated potential in predicting liver metastasis risk in rectal cancer, paving the way for early intervention and personalized therapeutic strategies[15,16].
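To make the quantitative analysis described above concrete, the sketch below computes a few first-order radiomic-style features (region size, mean intensity, heterogeneity, skewness, and histogram entropy) over a hypothetical AI-segmented tumor mask. The function name, the 16-bin histogram, and the epsilon guard are illustrative choices, not part of any cited pipeline.

```python
import numpy as np

def first_order_features(image, mask):
    """Simple first-order radiomic features over a segmented region (illustrative)."""
    voxels = image[mask].astype(float)        # intensities inside the tumor mask
    mean, std = voxels.mean(), voxels.std()
    counts, _ = np.histogram(voxels, bins=16)
    p = counts[counts > 0] / voxels.size      # non-empty histogram bin probabilities
    return {
        "volume": int(mask.sum()),            # region size in voxels
        "mean": float(mean),                  # average intensity
        "std": float(std),                    # intensity heterogeneity
        "skewness": float(((voxels - mean) ** 3).mean() / (std ** 3 + 1e-12)),
        "entropy": float(-(p * np.log2(p)).sum()),  # histogram entropy
    }
```

In a real radiomics workflow such features would be computed per lesion and fed, together with clinical variables, into the downstream predictive model.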

Overall, imaging modalities such as CT, MRI, and PET serve complementary roles in the diagnosis and management of gastrointestinal tumors, delivering comprehensive structural and functional insights (Table 1). The integration of AI technology has significantly improved both the efficiency and depth of imaging data analysis, while also enabling seamless integration of imaging features with clinical and genomic data, thereby providing robust technical support for individualized, data-driven decision-making in gastrointestinal oncology. Looking ahead, as multimodal imaging data fusion advances and large-scale clinical validation efforts expand, AI-powered imaging analysis is poised to assume a central role in precision diagnosis and treatment, driving the evolution of personalized medicine toward greater sophistication and clinical impact[16,17].

Table 1 Application of multimodal data in gastrointestinal tumors.

Data type | Core characteristics and key technologies | Main clinical application scenarios | AI empowerment and value
Imaging data | CT: High spatial resolution, rapid imaging, morphological analysis. MRI: Excellent soft tissue contrast (DWI, DCE), microenvironment assessment. PET: High metabolic sensitivity (SUV value), assessment of biological activity | Tumor localization, staging, efficacy evaluation, recurrence monitoring | AI application: Automatic segmentation based on CNNs; radiomics feature mining. Value: Improves diagnostic consistency, predicts efficacy and metastasis risk
Endoscopic data | Provides HD real-time visualization of the mucosal layer; chromo/electronic staining enhances contrast | Early screening and diagnosis (e.g., early gastric cancer, colorectal polyp detection) | AI application: CNN models for automatic lesion identification, classification, and depth assessment. Value: Increases early detection rate, assists treatment decisions
Omics data | Genomics: Reveals driver mutations (e.g., HER2). Transcriptomics/proteomics/metabolomics: Reflects gene expression, protein function, metabolic status | Deciphering tumor heterogeneity, predicting treatment response and prognosis, facilitating personalized therapy | AI application: Feature selection and dimensionality reduction; multimodal fusion (e.g., GNN model StereoMM, drug response prediction model DROEG). Value: Mines molecular mechanisms, enables precise typing, predicts drug sensitivity
Endoscopic data

Endoscopic data, primarily comprising endoscopic images and video recordings, serve as the core information source for early detection and diagnosis of gastrointestinal tumors. Their unique strength lies in enabling high-resolution, real-time visualization of the gastrointestinal mucosa, allowing direct assessment of lesion morphology, color, texture, and margin characteristics. Compared to other imaging modalities, endoscopic imaging offers superior spatial resolution and dynamic observational capabilities, facilitating the identification of subtle lesions that are often imperceptible to the naked eye and thereby significantly enhancing the detection rate of early-stage gastrointestinal tumors. For instance, flat or superficial lesions associated with early gastric, esophageal, and colorectal cancers are frequently challenging to identify using conventional methods. However, high-resolution endoscopy combined with chromoendoscopy or virtual chromoendoscopy (e.g., narrow-band imaging) can enhance mucosal contrast and improve diagnostic ACC[18,19]. Furthermore, the sequential frame structure of endoscopic video data not only supports dynamic evaluation of lesion behavior during examination but also provides rich temporal features essential for subsequent AI-driven analysis.

Deep learning-based endoscopic image analysis, particularly with CNNs, has demonstrated substantial advantages in lesion identification, boundary delineation, and pathological prediction. CNNs can automatically learn high-dimensional features of lesions from large volumes of annotated data, enabling efficient detection and classification of various abnormalities, including polyps, ulcers, and tumors. In the context of early gastrointestinal tumor screening, AI systems have been shown to surpass conventional endoscopists in key performance metrics such as polyp detection ACC, precise lesion boundary localization, and assessment of invasion depth. For instance, a meta-analysis revealed that CNN-based models achieved 84% sensitivity and 91% specificity in predicting the invasion depth of gastrointestinal tumors, with an area under the ROC curve (AUC) of 0.93, significantly outperforming manual interpretation[20]. Furthermore, deep learning models can integrate endoscopic imaging features to predict histopathological grades, thereby supporting endoscopic treatment decisions and reducing unnecessary biopsies and surgical interventions[19].
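The performance metrics quoted above can be computed directly from model outputs: sensitivity and specificity follow from a thresholded confusion matrix, and AUC can be obtained via the rank-based (Mann-Whitney) formulation. The function name and default threshold below are hypothetical.

```python
import numpy as np

def diagnostic_metrics(y_true, y_score, threshold=0.5):
    """Sensitivity, specificity, and AUC for a binary lesion classifier.

    y_true: 0/1 ground-truth labels; y_score: predicted probabilities.
    """
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score, dtype=float)
    y_pred = y_score >= threshold
    tp = np.sum((y_true == 1) & y_pred)       # lesions correctly flagged
    fn = np.sum((y_true == 1) & ~y_pred)      # lesions missed
    tn = np.sum((y_true == 0) & ~y_pred)      # normal correctly cleared
    fp = np.sum((y_true == 0) & y_pred)       # false alarms
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # AUC: probability a random positive outranks a random negative (ties count half)
    pos, neg = y_score[y_true == 1], y_score[y_true == 0]
    auc = (np.mean(pos[:, None] > neg[None, :])
           + 0.5 * np.mean(pos[:, None] == neg[None, :]))
    return sensitivity, specificity, auc
```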

The integration of endoscopic data with other modalities-such as histopathological, genomic, and molecular biomarker data-holds substantial promise, yet it also presents significant challenges (Table 1). Multimodal data fusion enables a comprehensive characterization of gastrointestinal tumor heterogeneity across biological scales, facilitating more accurate risk stratification and the development of individualized treatment strategies. For example, integrating endoscopic imaging with molecular diagnostic information allows for real-time assessment of tumor molecular profiles and therapeutic responses, thereby enhancing the scientific rigor and personalization of clinical decision-making[21,22]. However, high data heterogeneity, inconsistent annotation standards, limited data availability, and concerns regarding patient privacy continue to hinder the broad implementation of multimodal integration approaches. Moreover, the computational intensity required for real-time data processing and the complexity of embedding these systems into existing clinical workflows remain key technical barriers[23,24]. Looking ahead, AI-empowered multimodal data fusion is poised to elevate the early diagnosis and precision management of gastrointestinal tumors, paving the way for truly intelligent and integrated clinical decision support systems (CDSS).

Omics data

Omics data encompass multiple layers, including genomics, transcriptomics, proteomics, and metabolomics, and are pivotal in elucidating the molecular mechanisms underlying tumorigenesis. Genomic data reveal driver mutations and tumor heterogeneity by identifying genetic alterations and copy number variations in tumor cells. For instance, multi-omics analysis of the HER2 gene in gastrointestinal tumors has demonstrated that alterations in copy number and expression patterns enable precise patient stratification for HER2-targeted therapies[25]. Transcriptomics captures the dynamic regulation of gene expression and enables the identification of molecular subtypes and immune microenvironment features through RNA sequencing[26]. Proteomics delivers functional insights at the protein level, particularly regarding the expression of immune-related proteins within the tumor microenvironment, thereby accelerating the discovery of immunotherapeutic targets[26]. Metabolomics, as a direct readout of cellular metabolic status, uncovers tumor metabolic reprogramming. For example, sphingosine-1-phosphate promotes angiogenesis and modulates immune cell polarization in the colorectal cancer microenvironment, offering novel avenues for metabolism-directed therapeutic strategies[27]. Furthermore, epigenomic data-including DNA methylation and histone modifications-provide critical insights into the regulatory mechanisms of tumor gene expression and immune evasion[28,29].

However, the high dimensionality and complexity of omics data present substantial challenges for AI model development. On one hand, omics datasets are typically characterized by extremely high feature dimensions that far exceed sample sizes, increasing the risk of overfitting and limiting model generalizability. To mitigate this issue, AI researchers employ feature selection, dimensionality reduction techniques, and regularization strategies to identify the most informative biomarkers and critical biological pathways[30,31]. On the other hand, different omics data modalities exhibit marked heterogeneity in data structure, noise levels, and missing value patterns, necessitating AI models with robust multimodal integration capabilities. Current mainstream fusion approaches include concatenation-based, transformation-based, and network-based methods, which enable the capture of complex cross-omics associations and underlying biological mechanisms[32,33]. For instance, the graph neural network (GNN)-based fusion model StereoMM leverages self-supervised learning (SSL) to effectively integrate spatial transcriptomic and histopathological image data, thereby enhancing the identification of tumor progression regions[34]. Furthermore, the integration of advanced AI paradigms-such as large language models (LLMs)-has strengthened cross-omics semantic interpretation and inferential reasoning, positioning them as potential central hubs for future omics data integration[32,35].
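A minimal sketch of the feature-selection-plus-dimensionality-reduction step described above, assuming a samples-by-features omics matrix: variance-based filtering followed by PCA via SVD. The function and its parameters are illustrative, not drawn from any cited model.

```python
import numpy as np

def reduce_omics(X, top_k=100, n_components=10):
    """Reduce a (samples x features) omics matrix: variance filter, then PCA."""
    X = np.asarray(X, dtype=float)
    variances = X.var(axis=0)
    # keep the top_k most variable features (crude biomarker pre-selection)
    keep = np.argsort(variances)[::-1][:min(top_k, X.shape[1])]
    Xk = X[:, keep]
    Xc = Xk - Xk.mean(axis=0)                  # center before PCA
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    n = min(n_components, Vt.shape[0])
    return Xc @ Vt[:n].T                       # sample scores on top components
```

In practice the retained components (or selected features) would feed a downstream classifier, with the number of components tuned by cross-validation.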

Omics data have demonstrated a wide range of successful applications in predicting tumor biological behavior and treatment response. The integration of multi-omics data with AI models enables accurate prediction of tumor molecular subtypes, recurrence risk, and clinical prognosis. For example, a multi-omics model integrating genomic, transcriptomic, and methylation profiles successfully identified metastasis-associated fibroblast subpopulations and their regulatory signaling pathways in colorectal cancer, elucidating their role in promoting tumor progression and revealing potential therapeutic targets[36]. Similarly, drug response prediction frameworks such as DROEG integrate genomic, transcriptomic, and methylation data with functional annotations of key genes, substantially improving the ACC of chemosensitivity predictions in tumor cell lines and providing robust support for personalized drug selection[37]. Furthermore, AI-powered multi-omics liquid biopsy analyses-incorporating ctDNA, exosomal RNA, and other multidimensional biomarkers-have enabled non-invasive early detection and dynamic monitoring of therapeutic efficacy in gastrointestinal tumors[38]. Collectively, these studies highlight how deep mining of omics data through AI technologies not only advances our understanding of tumor molecular mechanisms but also facilitates the development of precision oncology strategies.

In conclusion, omics data serve as a central resource for multimodal AI in enabling intelligent decision-making for individualized gastrointestinal cancer therapy, encompassing comprehensive, multi-layered insights into tumor biology (Table 1). In response to the challenges posed by high dimensionality and biological complexity, advanced AI-driven fusion models and algorithms continue to emerge, establishing a robust foundation for elucidating tumor molecular mechanisms, predicting treatment responses, and achieving precision therapeutics. These advancements are accelerating the translation of precision medicine into clinical practice.

MULTIMODAL DATA FUSION TECHNOLOGY
Data preprocessing and standardization

Data preprocessing and standardization are critical steps in building multimodal AI systems, particularly in personalized gastrointestinal cancer treatment. Given the substantial heterogeneity in data formats across modalities-such as medical imaging, genomic profiles, and clinical electronic health records (EHR)-a robust preprocessing pipeline is essential to enable effective data integration and enhance model performance. These diverse data types exhibit distinct structural and statistical properties: Medical imaging data typically consists of high-dimensional matrices prone to noise and intensity inhomogeneities; clinical text data is unstructured and requires transformation into standardized representations using natural language processing techniques, including tokenization and lemmatization; genomic data may include sequencing reads, somatic mutations, or copy number variations, each with unique formatting requirements. For imaging data, standard preprocessing workflows encompass image enhancement, denoising, intensity normalization, and bias field correction. Notably, applying N4 bias field correction to MRI scans, combined with AI-powered super-resolution techniques such as Self-supervised Multi-directional Resolution Enhancement, significantly improves image quality and boosts the predictive ACC of downstream radiomics models[39]. Furthermore, image enhancement techniques such as Contrast Limited Adaptive Histogram Equalization enhance fine detail visibility in chest X-rays for lung disease identification, thereby facilitating more effective feature extraction by deep learning models[40].
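As a simplified illustration of the contrast-enhancement step mentioned above, the sketch below implements global histogram equalization in plain NumPy. CLAHE additionally operates on local tiles with a clip limit; this sketch shows only the underlying remap-through-the-CDF idea, and the function name and bin count are assumptions.

```python
import numpy as np

def equalize_histogram(image, n_bins=256):
    """Global histogram equalization: spread intensities over the full range."""
    img = np.asarray(image, dtype=float)
    lo, hi = img.min(), img.max()
    scaled = (img - lo) / (hi - lo + 1e-12)          # map intensities to [0, 1]
    hist, _ = np.histogram(scaled, bins=n_bins, range=(0.0, 1.0))
    cdf = hist.cumsum() / img.size                   # cumulative distribution
    # remap each pixel through the CDF of its histogram bin
    idx = np.minimum((scaled * n_bins).astype(int), n_bins - 1)
    return cdf[idx]
```

Equalized intensities then enter the same normalization pipeline as the rest of the imaging data before model training.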

In text data processing, the integration of structured EHRs with natural language processing techniques-such as the bag-of-words model and word embeddings-enables effective extraction of clinical information and enhances the ACC of clinical event prediction. For example, combining unstructured textual data, such as preoperative surgical notes, with structured clinical variables significantly improves the performance of perioperative risk prediction models in spinal surgery[41]. LLMs (such as BioBERT and GPT-4o) demonstrate strong performance in tumor diagnosis classification tasks, particularly in processing free-text diagnostic descriptions. Proper standardization of the preprocessing pipeline significantly influences model ACC and generalizability[42].
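The bag-of-words representation mentioned above can be sketched in a few lines. The function name and the naive whitespace tokenizer are illustrative; real clinical-NLP pipelines add proper tokenization rules, stop-word handling, and weighting schemes such as TF-IDF.

```python
from collections import Counter

def bag_of_words(notes):
    """Bag-of-words vectorization of free-text clinical notes (illustrative).

    Returns the shared vocabulary and one term-count vector per note.
    """
    tokenized = [note.lower().split() for note in notes]     # naive tokenizer
    vocab = sorted({tok for doc in tokenized for tok in doc})
    vectors = []
    for doc in tokenized:
        counts = Counter(doc)
        vectors.append([counts.get(tok, 0) for tok in vocab])
    return vocab, vectors
```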

Data standardization serves as a fundamental prerequisite for enabling effective integration of multimodal data during the fusion process. Common standardization techniques-such as min-max normalization, Z-score standardization, and batch normalization-are employed to eliminate disparities in data scale and distribution, thereby preventing any single modality from disproportionately influencing the training process. Notably, Z-score standardization is widely adopted in diverse domains including cardiovascular disease prediction and cryptocurrency price forecasting, where it contributes to faster model convergence and enhanced generalization performance[43,44]. Furthermore, the adoption of the Fast Healthcare Interoperability Resources standard for structuring and standardized encoding of medical data, when combined with LLMs, enables seamless integration of multimodal data and enhances both the applicability and interoperability of AI models in clinical settings[45].
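The two standardization schemes named above can be written directly; the small epsilon guarding division by zero is an implementation convenience, not part of the formal definitions.

```python
import numpy as np

def min_max_scale(x):
    """Min-max normalization: rescale a feature vector to [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

def z_score(x):
    """Z-score standardization: zero mean and unit variance."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / (x.std() + 1e-12)
```

Applying one of these per feature, per modality, keeps any single data source from dominating training purely through its numeric scale.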

In conclusion, preprocessing and standardization of multimodal data for gastrointestinal tumors should adhere to the following principles: Domain-specific preprocessing techniques-such as denoising and enhancement for imaging data, tokenization and vectorization for textual data, and sequence normalization for genomic data-should be applied according to data type, while data scales should be harmonized using standardization methods including normalization and batch normalization (Table 2). This process not only enhances multimodal data fusion but also establishes a robust foundation for training deep learning models. Nevertheless, current preprocessing and standardization pipelines face significant challenges, including data heterogeneity, missing value imputation, privacy preservation, and algorithmic bias. There is an urgent need to establish unified standards and open frameworks to facilitate broader clinical adoption[46,47]. In the future, integrating AI-driven automated preprocessing tools with open data standards will further strengthen the data infrastructure of intelligent decision-support systems for gastrointestinal tumors, thereby improving the ACC and reliability of personalized treatment.

Table 2 Core framework of multimodal data fusion technologies.

Core stage | Key methods/technologies | Core challenges and solutions | Primary application value
Data preprocessing and standardization | Imaging data: N4 bias field correction, CLAHE, SMORE. Text data: Tokenization, word embedding, LLMs (e.g., BioBERT, GPT-4o). Standardization: Z-score, batch normalization, FHIR standard | Challenges: Data heterogeneity, missing values, noise, privacy. Solutions: Dedicated preprocessing, automated tools, unified standards (e.g., FHIR) | Improves data quality and consistency; lays the foundation for fusion
Fusion strategy | Early fusion (data-level): Directly concatenates raw data. Middle fusion (feature-level): Multi-stream CNNs, attention mechanisms, GNNs. Late fusion (decision-level): Weighted averaging, voting, meta-learning | Challenges: Data heterogeneity, inter-modal relationships, information loss. Solutions: Select/combine strategies based on data traits and task goals (e.g., using attention to capture cross-modal dependencies) | Integrates multi-source complementary information; enhances model robustness and prediction accuracy
Model training and validation | Training techniques: Data augmentation, handling of missing values, regularization, early stopping. Validation methods: K-fold cross-validation, external validation, multi-center validation. Evaluation metrics: ACC, AUC, sensitivity, specificity, F1-score | Challenges: Data imbalance, overfitting, generalization. Solutions: Rigorous internal/external validation; explainable AI (e.g., SHAP) to enhance trust | Ensures model reliability, stability, and clinical applicability; promotes clinical translation
Integration strategy

Multimodal AI systems enhance the understanding of complex medical problems and enable more accurate decision-making by integrating information from diverse data sources and modalities. In individualized gastrointestinal tumor treatment, fusion strategies are central to multimodal AI and are primarily categorized into three types: Early fusion (data-level fusion), mid-level fusion (feature-level fusion), and late fusion (decision-level fusion). Early fusion involves combining raw data from different modalities at the model input stage to create a unified representation. This approach preserves fine-grained details from the original data, thereby maximizing information retention; however, it is susceptible to challenges such as data heterogeneity, the curse of dimensionality, and noise interference, necessitating rigorous preprocessing and standardization. Mid-level fusion operates after feature extraction, where features are first independently derived from each modality and subsequently integrated. This strategy effectively balances modality-specific diversity with comprehensive feature expression, enhancing model generalization and robustness. Nevertheless, it demands carefully designed feature representations and fusion architectures to prevent information redundancy or loss. Late fusion integrates predictions from separately trained modality-specific models at the decision level, using techniques such as weighted averaging, voting mechanisms, or meta-learning. Given the high demands for deep integration and interpretability in personalized treatment of gastrointestinal tumors, mid-level fusion (feature-level fusion) is widely considered the most promising architectural framework. Figure 1 conceptually illustrates the workflows of the three fusion strategies. 
Early fusion preserves the integrity of raw data but imposes stringent requirements on preprocessing and standardization of heterogeneous inputs, and it struggles to handle asynchronously acquired or partially missing data. Late fusion, by contrast, is simple and flexible to implement, but because each modality's model is trained independently, it fails to capture the intricate cross-modal interactions essential for understanding tumor biology, potentially limiting the overall effectiveness of the fused output[48].
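The contrast between the three strategies can be sketched in a few lines of plain Python, with toy vectors standing in for real modality data; all function names, weights, and the stand-in feature extractor are illustrative.

```python
def early_fusion(imaging_vec, genomic_vec):
    """Data-level fusion: concatenate raw feature vectors into one input."""
    return imaging_vec + genomic_vec

def mid_fusion(imaging_vec, genomic_vec, extract):
    """Feature-level fusion: extract modality-specific features first, then join."""
    return extract(imaging_vec) + extract(genomic_vec)

def late_fusion(pred_imaging, pred_genomic, w_imaging=0.6, w_genomic=0.4):
    """Decision-level fusion: weighted average of per-modality predictions."""
    return w_imaging * pred_imaging + w_genomic * pred_genomic

# Toy usage: a stand-in feature extractor reduces each modality to one value
img, gen = [0.2, 0.8], [0.5, 0.1, 0.9]
extract = lambda v: [sum(v) / len(v)]
combined_early = early_fusion(img, gen)        # raw 5-dimensional vector
combined_mid = mid_fusion(img, gen, extract)   # one feature per modality
fused_risk = late_fusion(0.7, 0.9)             # weighted decision score
```

Note how only the mid-level variant gives the downstream model access to learned features from both modalities at once, which is why it lends itself to modeling cross-modal interactions.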

Figure 1
Figure 1 Schematic diagram of multimodal artificial intelligence fusion strategies for gastrointestinal tumors. The model architectures for early, intermediate, and late fusion are compared. Intermediate (feature-level) fusion (highlighted in blue), the predominant approach, involves processing each data modality through specialized networks, followed by integrating the extracted features to model cross-modal interactions for joint prediction. This architecture balances flexibility and the ability to capture complex, biologically meaningful interactions between data types, making it particularly suitable for personalized therapy applications. Early fusion combines raw data at the input stage, while late fusion aggregates decisions from separate models. CT: Computed tomography; MRI: Magnetic resonance imaging; EHR: Electronic health record; GNN: Graph neural network.

In the domain of deep learning, a range of advanced techniques has been developed to enable multimodal fusion, thereby improving both model performance and interpretability. Multi-stream CNNs capture modality-specific heterogeneity by constructing separate convolutional pathways for each data modality, independently extracting features before integrating them. This architecture effectively preserves distinct characteristics across modalities and is widely applied in the integration of medical imaging and clinical data. Attention mechanisms enhance feature selectivity by dynamically assigning weights to different modalities or regions, emphasizing informative components while suppressing noise. By modeling cross-modal dependencies during fusion, attention mechanisms significantly improve diagnostic and predictive ACC. GNNs, known for their ability to process non-Euclidean structured data, represent multimodal information as graph-structured representations, where nodes and edges encode entities and their complex interactions. This makes GNNs particularly suitable for integrating biological data-such as genomic profiles and protein-protein interaction networks-with imaging data, facilitating multi-scale analysis from molecular mechanisms to macroscopic phenotypes[49,50].
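As a schematic illustration of attention-weighted fusion (a minimal sketch, not a reproduction of any cited architecture), the following code softmax-normalizes per-modality relevance scores and uses the resulting weights to combine modality features; the scores and feature vectors are invented for illustration.

```python
import math

def attention_fusion(modality_features, scores):
    """Weight each modality's feature vector by softmax-normalized scores,
    then sum, so that informative modalities dominate the fused representation."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(modality_features[0])
    fused = [0.0] * dim
    for w, feat in zip(weights, modality_features):
        for i in range(dim):
            fused[i] += w * feat[i]
    return fused, weights

# Two modalities with one learned relevance score each (scores are illustrative);
# the higher-scoring CT stream receives the larger softmax weight.
ct_feat, path_feat = [0.9, 0.1], [0.2, 0.8]
fused, weights = attention_fusion([ct_feat, path_feat], scores=[2.0, 0.5])
```

In a trained network the scores would themselves be computed from the features, letting the weighting adapt per patient.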

The integration strategy plays a pivotal role in improving model performance and generalization capability. Multimodal integration not only leverages complementary information across diverse modalities to overcome the inherent limitations of single-modal data, but also strengthens the model's robustness against noise and missing data. In the context of gastrointestinal tumors, the fusion of multi-source data-including medical imaging, genomic profiles, pathological slides, and EHRs-enables a more precise characterization of tumor heterogeneity and the tumor microenvironment, thereby enhancing the ACC of treatment response assessment and prognosis prediction. For example, multimodal AI models based on integration strategies have shown significant improvements in predicting clinical benefits from adjuvant chemotherapy in colorectal cancer and in classifying tumor molecular subtypes, demonstrating their translational potential in clinical practice. Furthermore, by improving model generalization, multimodal integration supports reliable deployment across heterogeneous clinical settings, mitigates overfitting risks, and advances the realization of precision medicine[6,51,52].

In conclusion, the integration strategy serves as a central component of multimodal AI systems, and its thoughtful design and effective implementation directly influence model performance and clinical applicability (Table 2). As deep learning technologies continue to advance, integration mechanisms are expected to become increasingly intelligent and interpretable, thereby supporting intelligent decision-making in individualized gastrointestinal tumor treatment and offering robust support for the advancement of precision medicine.

Training and validation of the fusion model

In the application of multimodal fusion models to intelligent decision-making for individualized gastrointestinal tumor treatment, training and validation are critical steps for ensuring model performance and clinical utility. By integrating diverse data modalities-such as medical imaging, clinical records, pathological slides, and genomic profiles-the fusion model can more comprehensively capture the complex biological characteristics of tumors. However, this integration also introduces significant challenges in model training, validation complexity, and performance assessment.

The training of multimodal fusion models faces several key challenges. Data imbalance is particularly prominent, as certain tumor stages or molecular subtypes are underrepresented in clinical datasets, leading to biased model learning and compromised generalization. Missing data further complicates the training process, given the heterogeneous acquisition conditions and variable completeness across modalities; missing or low-quality inputs can significantly degrade fusion performance. To mitigate these issues, researchers commonly employ data augmentation, imputation techniques, and feature selection to address data imbalance and incomplete data. Concurrently, regularization methods and early stopping strategies are implemented to prevent overfitting. For instance, in a multimodal fusion model for esophageal cancer, principal component analysis was applied for dimensionality reduction and integrated with deep learning networks and traditional machine learning algorithms, effectively reducing overfitting and enhancing model generalization[53]. Furthermore, the adoption of a multimodal deep neural network architecture, combined with a dynamic attention mechanism to strengthen the learning of critical features, enhances model robustness[54].
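The early-stopping safeguard mentioned above can be sketched as a simple loop over validation losses; the patience value and loss trajectory are illustrative only.

```python
def train_with_early_stopping(val_losses, patience=3):
    """Stop when validation loss fails to improve for `patience` consecutive
    epochs, a common guard against overfitting multimodal fusion models."""
    best, best_epoch, waited = float("inf"), 0, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch, waited = loss, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                break
    return best_epoch, best

# Validation loss plateaus after epoch 3, so training halts early at that point
losses = [0.9, 0.7, 0.6, 0.55, 0.56, 0.58, 0.57]
stop_epoch, best_loss = train_with_early_stopping(losses)
```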

Second, the model evaluation phase employs multiple validation strategies to ensure both model stability and clinical applicability. Cross-validation techniques-such as 5-fold and 10-fold-are widely used to maximize the utilization of limited datasets, enabling robust performance assessment and reducing random bias. For instance, in prognostic prediction for cervical spinal cord injury, a multimodal model was evaluated using 5-fold cross-validation, demonstrating high ACC (90%) and an AUC of 0.94, indicating strong predictive reliability[55]. External validation and multi-center dataset evaluation more effectively reflect the model's generalization capability and clinical translational potential. Multi-center data typically encompass diverse imaging devices, patient populations, and clinical settings, thereby providing a rigorous assessment of model performance and robustness across heterogeneous data sources. For instance, in hepatocellular carcinoma detection, a multimodal fusion model validated across 16 centers achieved internal and external AUCs of 0.985 and 0.915, respectively, demonstrating strong generalization ability[56]. Furthermore, a multimodal AI diagnostic model for lung cancer has consistently achieved high diagnostic ACC across independent test sets from multiple institutions, underscoring its potential for clinical translation[57].
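The k-fold splitting underlying these validation schemes can be sketched as follows; the cohort size is illustrative, and a real pipeline would also stratify by outcome and shuffle patients before splitting.

```python
def k_fold_splits(n_samples, k=5):
    """Partition sample indices into k folds for cross-validation;
    each fold serves exactly once as the held-out validation set."""
    indices = list(range(n_samples))
    fold_size = n_samples // k
    splits = []
    for i in range(k):
        start = i * fold_size
        end = start + fold_size if i < k - 1 else n_samples
        val = indices[start:end]
        train = indices[:start] + indices[end:]
        splits.append((train, val))
    return splits

# 5-fold split of a 100-patient cohort: 80 train / 20 validation per fold
splits = k_fold_splits(100, k=5)
```

External and multi-center validation go further: instead of resampling one dataset, entire institutions are held out, which is a stricter test of generalization.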

Finally, recent studies have increasingly adopted a comprehensive set of performance metrics to evaluate the effectiveness of fusion models, including ACC, AUC, sensitivity, specificity, and F1 score. For instance, in multimodal fusion models for esophageal cancer-covering pathological classification, T staging, and N staging-the logistic regression-based fusion model achieved a training ACC of 91.9% and a validation ACC of 88.4% in pathological classification, significantly outperforming single-modal models[53]. In terms of clinical relevance, the fusion model not only improves diagnostic ACC but also facilitates clinical decision-making by enabling recurrence risk prediction and guiding individualized treatment strategies, thereby significantly enhancing treatment efficacy and patient prognosis[58,59]. Simultaneously, the application of interpretable AI techniques-such as SHapley Additive exPlanations (SHAP) and Grad-CAM-to decipher the model's decision-making process enhances clinicians' trust and understanding of its outputs, which is crucial for the clinical adoption of fusion models[56,60].
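These metrics all derive from the binary confusion matrix, as the following sketch shows; the confusion counts are invented for illustration.

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute the standard evaluation metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)          # recall / true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "f1": f1}

# Illustrative confusion counts for a binary staging classifier
metrics = classification_metrics(tp=80, fp=10, tn=95, fn=15)
```

AUC is the exception: it is threshold-free, summarizing sensitivity and specificity across all possible decision cutoffs rather than at a single operating point.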

In conclusion, the training and validation of multimodal fusion models necessitate a comprehensive integration of high-quality data, robust model design, and multi-level evaluation strategies (Table 2). The adoption of advanced data processing techniques, together with multi-center and multi-modal validation frameworks-integrated with performance metrics and clinical relevance assessment-constitutes a critical pathway toward enabling intelligent decision-making in individualized gastrointestinal cancer therapy. As dataset scales continue to expand and algorithms undergo further optimization, the stability and clinical utility of fusion models are expected to improve progressively, offering substantial technical support for the advancement of precision medicine.

THE CLINICAL APPLICATION OF MULTIMODAL AI IN INDIVIDUALIZED GASTROINTESTINAL TUMOR THERAPY
Intelligent diagnosis and tumor staging

Multimodal AI has demonstrated substantial clinical value in the early detection and precise staging of gastrointestinal tumors. Conventional imaging modalities face persistent challenges, including limited sensitivity, high subjectivity, and insufficient characterization of tumor heterogeneity. The integration of AI-particularly deep learning and radiomics-enables the extraction of microenvironmental and heterogeneity features from medical images, thereby improving tumor identification and classification ACC. For instance, in diagnosing peritoneal metastasis in gastrointestinal cancers, AI-driven analysis of imaging data facilitates the construction of highly accurate predictive models. These models not only estimate the risk of peritoneal metastasis but also assist in intraoperative detection of microscopic metastatic lesions, thereby enhancing prognostic evaluation and guiding personalized treatment decisions[13]. Furthermore, endoscopic ultrasound (EUS) integrated with AI demonstrates outstanding performance in risk stratification and prediction of malignant potential in GIST. A deep learning model trained on EUS imaging enables the AI system to accurately differentiate GIST from benign lesions, thereby supporting clinicians in early diagnosis and optimal treatment planning. The diagnostic ACC has consistently remained high in multi-center validation studies[61,62]. In the field of radiomics, AI models leveraging CT imaging data have enabled accurate prediction of risk stratification in GIST through the extraction of morphological and textural features, highlighting the critical role of multimodal data fusion in achieving precise tumor staging[63].

Multimodal AI significantly enhances diagnostic ACC in gastrointestinal tumors by integrating imaging, omics, and clinical data, demonstrating superior performance compared to single-modal approaches. Through the synergistic integration of genomic and other omics data with radiomics, AI enables the construction of multidimensional tumor feature models that more comprehensively capture tumor biological behavior and disease progression. In advanced gastric cancer, for instance, AI systems integrating clinical parameters, radiomic features, and digital pathology images have achieved improved precision in tumor staging and treatment response prediction, thereby advancing personalized therapeutic strategies[64]. Moreover, AI-assisted endoscopic techniques have been validated to enhance the detection of early gastrointestinal neoplasms, reduce miss rates, facilitate timely intervention, and ultimately improve patient outcomes[65,66]. As data fusion methodologies continue to evolve, multimodal AI is poised to play an increasingly pivotal role in intelligent diagnosis and precise staging, driving the transformation of gastrointestinal cancer care toward greater intelligence and personalization[17].

In conclusion, multimodal AI significantly enhances the efficiency of early screening and the ACC of staging for gastrointestinal tumors through deep extraction and integration of imaging and omics data, thereby advancing the realization of precision medicine (Table 3). As big data accumulates and algorithms continue to evolve, coupled with rigorous multi-center clinical validation, multimodal AI is poised to become a cornerstone technology for intelligent diagnosis and staging in gastrointestinal oncology, enabling individualized treatment and accurate outcome prediction.

Table 3 Clinical applications of multimodal artificial intelligence in personalized gastrointestinal cancer therapy.
Application area | Core function | Key technologies/data | Primary value
Intelligent diagnosis & staging | Early screening & precise staging: Enhances tumor identification and classification, predicts metastasis risk | Imaging data: CT, EUS, PET/CT; Omics data: Radiomics, genomics; Clinical data: EHR | Increases early detection rates, reduces missed diagnoses; enables more accurate preoperative staging to inform treatment decisions
Treatment optimization | Treatment response prediction: Guides the selection of surgery, radiotherapy, chemotherapy, and targeted/immunotherapy regimens | Multimodal fusion models: e.g., MuMo model; Data integration: Radiomics, genomics, immunomics, tumor microbiome | Accurately predicts efficacy, avoids unnecessary treatments; guides personalized medication (e.g., targeted drug combinations) to overcome drug resistance and improve response rates
Prognostic assessment & follow-up management | Risk stratification & recurrence prediction: Precisely assesses patient survival and recurrence risk. Dynamic follow-up management: Enables personalized long-term monitoring | Prognostic models: Integrate clinical, imaging, genomic data. Intelligent systems: Clinical decision support systems, EHR analysis | Enables precise risk stratification to guide adjuvant therapy; improves follow-up efficiency, provides timely recurrence alerts, and optimizes resource allocation
Optimization of treatment planning

Multimodal AI integrates diverse data sources-including imaging, genomics, and immunomics-to construct treatment response prediction models, substantially improving the optimization of therapeutic strategies for gastrointestinal tumors. These multimodal-based models not only accurately characterize tumor heterogeneity and the tumor microenvironment but also effectively guide decision-making in surgery, radiotherapy, and chemotherapy. For instance, by applying radiomics and deep learning to analyze medical images, AI can identify imaging biomarkers of peritoneal metastasis, predict metastatic risk, and assist in detecting small metastatic lesions during surgical procedures, thereby providing a robust scientific foundation for surgical planning[13]. In neoadjuvant therapy, AI-driven multimodal analysis enables assessment of tumor biological features, supports prediction of tumor downstaging and postoperative survival, and facilitates individualized adjustments in timing and extent of surgery-accelerating the shift from a traditional surgery-first paradigm to a biology-guided, multimodal precision oncology approach[67]. Furthermore, for clinical challenges such as targeted therapy in GIST, AI-assisted molecular profiling can analyze patients’ multi-gene mutation landscapes, design personalized combination regimens, and achieve sustained control of resistant tumors[68,69]. Multimodal AI also incorporates tumor microbiome data to build prognostic models based on microbial abundance, enabling evaluation of patient responsiveness to chemotherapy and immunotherapy and informing dynamic treatment modifications[70]. Deep learning models that integrate multi-omics data can more precisely identify tumor molecular subtypes and cancer stem cell signatures, supporting accurate selection of targeted therapies and optimization of radiotherapy dosing[26,71]. 
In summary, multimodal treatment response prediction models enable high-fidelity characterization of tumor biology, significantly enhancing the efficacy and efficiency of surgical, radiotherapeutic, and chemotherapeutic planning in gastrointestinal oncology, and driving the transformation of clinical practice toward individualized and precision medicine.

In the fields of targeted and immunotherapy, multimodal AI-enabled individualized decision support systems demonstrate substantial potential. In immunotherapy, AI integrates single-cell transcriptomic data from tumor microenvironment immune cells to uncover the distinct transcriptional signatures of tumor-infiltrating neoantigen-reactive T cells, offering actionable targets for personalized immunotherapeutic strategies[72]. To address variability in patient responses to immune checkpoint inhibitors-such as PD-1, PD-L1, and CTLA-4-AI leverages multimodal data analysis to predict immunotherapy sensitivity, guiding clinicians in designing combination regimens that enhance therapeutic efficacy while minimizing toxicity[73]. Furthermore, the integration of nanotechnology with natural killer (NK) cell-based immunotherapy enables enhanced NK cell targeting and activation via smart nanocarriers; AI-designed nanomedicine systems can precisely modulate the tumor immune microenvironment, thereby amplifying immunotherapeutic effects[74]. In targeted therapy, AI systems analyze multi-gene sequencing data to accurately identify key driver mutations-including C-KIT and PDGFRA-and integrate these with clinical profiles to recommend personalized drug combinations, significantly improving treatment response rates[75,76]. AI-driven multimodal decision support allows for dynamic adaptation of therapeutic strategies to overcome challenges posed by tumor resistance and heterogeneity, enabling optimal integration of targeted and immunotherapies and advancing gastrointestinal cancer treatment into a new era of true precision and personalization. As AI models continue to evolve and multi-omics data become increasingly integrated, intelligent decision-making for individualized therapies will grow more efficient, safe, and accurate, substantially enhancing quality of life and clinical outcomes for patients with gastrointestinal tumors.

Prognostic assessment and follow-up management

Multimodal AI technology demonstrates significant advantages in prognostic evaluation and follow-up management for gastrointestinal tumors, particularly through enhanced precision in risk stratification and recurrence prediction, as well as dynamic, individualized monitoring via intelligent follow-up systems. By integrating imaging, genomic, and clinical data, multimodal AI leverages deep learning and machine learning algorithms to construct refined risk stratification models that accurately predict patient survival outcomes and recurrence likelihood. For instance, in lung cancer and clear cell renal cell carcinoma, AI models that combine multidimensional clinical variables with radiomic features have achieved accurate predictions of 3- and 5-year overall survival rates, with AUC values ranging from 0.77 to 0.79-outperforming conventional prognostic scoring systems and substantially improving the ACC and personalization of clinical decision-making[77,78]. Furthermore, in brain tumors and other neuro-oncological malignancies, AI-driven integration of imaging, histopathological, and genetic data enables automated and precise tumor classification and outcome prediction, demonstrating potential to reduce reliance on invasive biopsies and accelerate molecular subtyping. These advancements offer valuable insights and a translational framework for improving prognostic assessment in gastrointestinal cancers[79].

Secondly, intelligent follow-up management systems leverage multimodal AI platforms that integrate EHR, laboratory test results, and patient behavioral data to enable dynamic updates and continuous risk monitoring throughout the follow-up process. For example, clinical decision support systems (CDSS) for prostate cancer patients can automatically compute and visualize biochemical recurrence markers-such as prostate-specific antigen doubling time-assisting clinicians in timely adjustments of treatment regimens and follow-up schedules[80]. Similarly, AI-powered follow-up systems in chronic disease management, including cardiovascular diseases and diabetes, utilize real-time data analysis to predict risks of adverse events, guide personalized interventions, and significantly reduce mortality and recurrence rates[81,82]. These systems facilitate automated data synchronization and intelligent alerting mechanisms, optimize resource utilization, and enhance both follow-up efficiency and patient adherence.

Finally, multimodal AI in prognostic assessment and follow-up management emphasizes interdisciplinary collaboration, multi-center data integration, and ongoing model refinement. By incorporating cognitive and emotional support functions from pathology explanation clinics, these systems can further improve patients’ understanding of their prognosis and psychological adaptation, thereby strengthening the patient-centered nature of follow-up care[83]. Moreover, the establishment of regional management centers to consolidate clinical expertise, combined with AI-assisted decision-making tools, enables end-to-end integrated care-from diagnosis and treatment to long-term follow-up-making it particularly well-suited for the longitudinal monitoring of complex gastrointestinal tumors[84]. Overall, multimodal AI technology, through deep fusion of heterogeneous data sources and advanced analytical capabilities, delivers precise prognostic risk stratification and individualized, adaptive follow-up management for gastrointestinal cancer patients, driving the evolution of intelligent healthcare toward precision medicine.

CHALLENGES AND FUTURE PROSPECTS OF MULTIMODAL AI
Data quality and privacy protection

Multimodal AI in personalized gastrointestinal cancer treatment relies on diverse data sources, including medical imaging, electronic clinical records, and genomic information. During multimodal data acquisition, heterogeneity and noise emerge as critical challenges that compromise data quality. First, variations in data formats and lack of standardized collection protocols across modalities lead to structural and semantic inconsistencies, hindering effective data integration and analysis. For instance, medical images are typically stored in DICOM format, whereas clinical data often exist as structured or semi-structured text entries. To address these issues, standardized frameworks such as the OMOP Common Data Model and the Medical Imaging Common Data Model have been developed, enabling syntactic and semantic interoperability across institutions and regions, thereby supporting seamless integration and sharing of multimodal health data[85].

Second, data noise arises from multiple sources, including differences in imaging equipment, variability in operator expertise, and biological heterogeneity among patients. These factors contribute to inconsistent data quality, which directly impairs the training performance and generalization capacity of AI models. The application of multimodal LLMs in medical image interpretation underscores the pivotal role of data quality-model ACC and interpretability are fundamentally limited by the reliability and consistency of multi-source, heterogeneous datasets[86]. Consequently, enhancing data standardization and implementing robust quality control mechanisms are essential for developing high-performance multimodal AI systems in clinical oncology.

The extensive collection and cross-institutional sharing of multimodal data in personalized gastrointestinal cancer treatment pose significant legal and ethical challenges to patient privacy. As data volume and diversity increase-particularly with sensitive genomic and imaging data-the risk of privacy breaches escalates accordingly. Federated learning (FL) has emerged as a promising solution by enabling model training on local devices without centralizing raw data. By exchanging only model parameters, FL mitigates both data silos and privacy exposure. However, conventional FL approaches often fail to account for individual users’ varying privacy requirements, leading to either inadequate or overly restrictive protection[87].
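The parameter-averaging step at the heart of FL can be sketched as follows (a minimal FedAvg-style aggregation over toy parameter vectors; hospital counts and weights are invented for illustration).

```python
def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: each site trains locally and shares only its
    model parameters; the server averages them weighted by local dataset size,
    so raw patient records never leave the institution."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    averaged = [0.0] * dim
    for weights, size in zip(client_weights, client_sizes):
        for i in range(dim):
            averaged[i] += (size / total) * weights[i]
    return averaged

# Three hospitals contribute locally trained parameters without sharing raw data
global_model = federated_average(
    client_weights=[[0.2, 0.4], [0.6, 0.8], [0.1, 0.3]],
    client_sizes=[100, 300, 100],
)
```

In practice this round repeats: the averaged model is broadcast back to the sites, which continue local training on their own data.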

Differential privacy (DP) is another key privacy-preserving technique that safeguards individual information by injecting calibrated noise into data or model outputs. It is widely recognized as a robust method for ensuring privacy in data-driven systems. For instance, DP mechanisms that integrate semantic sensitivity and location prediction have demonstrated notable success in protecting trajectory data, effectively preserving privacy while maintaining high data utility[88]. Similarly, attribute-based encryption with multi-level privacy protection introduces layered dummy information to achieve reversible and fine-grained control over data access, thereby balancing privacy preservation with service quality[89].

In intelligent healthcare systems, a personalized local DP framework integrated with FL allows dynamic adjustment of privacy budgets based on data sensitivity, accommodating diverse user needs while preserving model performance[87]. Furthermore, combining blockchain technology with local DP enables the design of fine-grained, dynamic access control mechanisms, ensuring secure, auditable, and transparent sharing of electronic medical records[90].

From a regulatory standpoint, frameworks such as the General Data Protection Regulation (GDPR) impose stringent requirements on medical data processing, mandating transparent data handling practices and explicit user consent. Empirical studies on mobile health applications indicate a positive correlation between privacy policy compliance and app quality, underscoring that transparency and adherence to regulations are critical for building user trust[91,92]. Moreover, healthcare institutions must strengthen internal governance by enhancing staff awareness and training on privacy laws, standardizing data handling protocols, and minimizing privacy incidents caused by human error[93,94].

In summary, ensuring high-quality data acquisition and effective multimodal integration-combined with advanced privacy-enhancing technologies such as FL, DP, and blockchain, supported by comprehensive legal-ethical frameworks and institutional management-is essential for establishing a secure and trustworthy data ecosystem (Table 4). This foundation is crucial for advancing AI-driven decision-making in individualized gastrointestinal cancer care.

Table 4 Challenges and future directions of multimodal artificial intelligence in gastrointestinal cancer therapy.
Core challenges | Key technologies/methods | Future directions
Data quality & privacy protection: Data heterogeneity (divergent formats/standards); data noise (equipment/operator variations); patient privacy risks (esp. genomic/imaging data) | Data standardization: Common data models (e.g., OMOP CDM, medical imaging CDM); Privacy-preserving techniques: FL, DP, blockchain; Legal compliance: Frameworks like GDPR to enhance policy transparency | To build a more secure and reliable data environment, promoting seamless integration and controlled sharing of high-quality data
Model interpretability & clinical acceptability: "Black-box" problem erodes clinical trust; opaque decision-making hinders regulatory approval & integration | Explainable AI: Attention mechanisms, prototype networks (ProtoPNet), counterfactual explanations; Interpretability tools: LIME, SHAP, Grad-CAM for visualization & feature importance ranking; Clinical integration: Displaying model uncertainty & key decision factors in CDSS | To develop transparent and trustworthy AI systems, enhance clinician trust, and promote deep integration of AI into clinical workflows
Multi-center collaboration & standardization: Significant data heterogeneity across centers (equipment, protocols, populations); poor model generalizability, hindering cross-institutional application | Multi-center data sharing & standardization: Unified data formats and acquisition standards; Privacy-preserving collaborative training: Federated learning for joint modeling; Standardized multimodal databases: Integrating genomics, radiomics, and other multidimensional data | To promote large-scale, high-quality multi-center collaboration, establish industry standards, and improve model generalizability and clinical applicability
Technical integration & clinical translation: Reliance on large annotated datasets limits generalizability; barriers in translating research findings to clinical application | Emerging ML paradigms: RL for dynamic treatment optimization; SSL to reduce annotation dependency; Integrating novel data types: e.g., digital pathology, patient behavior data; Robust clinical validation: Validating model efficacy and robustness through clinical trials and RWD | To integrate multimodal AI with cutting-edge technologies and validate it through rigorous clinical trials, ultimately enabling its routine use in personalized therapy
Model interpretability and clinical acceptability

In clinical applications, particularly in multimodal AI-driven decision support for personalized gastrointestinal cancer treatment, AI models are playing an increasingly critical role. However, most AI systems suffer from the "black box" problem-characterized by complex and opaque internal decision-making mechanisms-which undermines trust and acceptance among clinicians and patients, thereby limiting their adoption in real-world healthcare settings[95]. To address the "black box" challenge and render AI's reasoning process transparent and actionable for oncologists and surgeons, the field of Explainable AI (XAI) has advanced a range of key techniques. These methods offer interpretable insights at both the local and global levels and are designed to be seamlessly integrated into clinical workflows.

First, attention-based models leverage visualization techniques to reveal the critical regions that the model attends to during decision-making. For example, in CT image analysis, attention maps can highlight pixel areas most influential for malignant tumor classification-such as irregular tumor margins and heterogeneous internal textures-a pattern that closely aligns with radiologists’ visual assessment criteria. Similarly, when processing whole-slide pathological images, the attention mechanism can pinpoint regions exhibiting the highest cellular atypia, offering pathologists precise guidance for targeted review[96]. This form of visual interpretability closely mirrors clinical reasoning patterns, thereby substantially improving clinician trust in AI-generated diagnoses.
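As a minimal sketch of this idea (not a real attention mechanism or Grad-CAM implementation: the 4 x 4 "CT patch" and the weight grid below are invented for illustration), the snippet normalizes elementwise importance scores into a heatmap and reports the most-attended location for targeted review:

```python
# Toy attention-style saliency map; a real system would derive weights
# from a trained CNN rather than a hand-made grid like this one.
def attention_map(image, weights):
    """Elementwise importance = |pixel * weight|, min-max normalized to [0, 1]."""
    raw = [[abs(p * w) for p, w in zip(img_row, w_row)]
           for img_row, w_row in zip(image, weights)]
    flat = [v for row in raw for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0
    return [[(v - lo) / span for v in row] for row in raw]

def top_region(amap):
    """Coordinates of the most-attended pixel, for targeted clinician review."""
    best = max((v, (r, c)) for r, row in enumerate(amap)
               for c, v in enumerate(row))
    return best[1]

image   = [[0.1, 0.2, 0.1, 0.0],
           [0.2, 0.9, 0.8, 0.1],   # bright, heterogeneous core
           [0.1, 0.8, 0.7, 0.1],
           [0.0, 0.1, 0.1, 0.0]]
weights = [[0.0, 0.1, 0.1, 0.0],
           [0.1, 1.0, 0.9, 0.1],   # model "attends" to the lesion core/margin
           [0.1, 0.9, 0.8, 0.1],
           [0.0, 0.1, 0.1, 0.0]]

amap = attention_map(image, weights)
print(top_region(amap))  # most influential pixel location
```

The normalized map plays the role of the overlay heatmap a radiologist would see; in practice it would be rendered on top of the original image rather than printed.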

Second, model-agnostic post-hoc explanation methods-such as Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP)-offer robust tools for interpreting individual predictions. SHAP values, grounded in cooperative game theory, enable a fair and mathematically rigorous allocation of each input feature’s contribution to the final prediction. In clinical decision support systems (CDSSs), this translates into actionable insights. For instance, when predicting postoperative recurrence risk in colorectal cancer, the system might inform the clinician: “This patient is classified as high-risk, primarily driven by elevated ctDNA levels (+35% contribution), imaging evidence of suspected liver micro-metastases (+25%), and a high-risk gene mutation profile (+15%).” Such quantitative, feature-specific explanations directly identify key risk factors, enabling clinicians to prioritize confirmatory tests or intensify adjuvant therapy[56,97].
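The game-theoretic allocation behind SHAP can be made concrete with an exact Shapley computation over all feature orderings. The recurrence-risk scorer below is a hypothetical additive toy whose weights are chosen to mirror the worked example above; it is not a validated model, and the exact enumeration is tractable only for a handful of features (the SHAP library uses efficient approximations instead):

```python
from itertools import permutations

def shapley_values(features, model, baseline):
    """Exact Shapley values: average marginal contribution of each feature
    over all orderings in which features are revealed."""
    names = list(features)
    values = {n: 0.0 for n in names}
    perms = list(permutations(names))
    for order in perms:
        present = dict(baseline)            # start from the population baseline
        prev = model(present)
        for name in order:
            present[name] = features[name]  # reveal this feature's true value
            curr = model(present)
            values[name] += curr - prev
            prev = curr
    return {n: v / len(perms) for n, v in values.items()}

# Hypothetical recurrence-risk scorer (illustrative only, not a validated model).
def risk(f):
    return (0.2 + 0.35 * f["ctDNA_elevated"]
                + 0.25 * f["liver_micromets"]
                + 0.15 * f["high_risk_mutation"])

patient  = {"ctDNA_elevated": 1, "liver_micromets": 1, "high_risk_mutation": 1}
baseline = {"ctDNA_elevated": 0, "liver_micromets": 0, "high_risk_mutation": 0}
print(shapley_values(patient, risk, baseline))
# For an additive model the Shapley values recover the coefficients:
# ctDNA +0.35, imaging +0.25, mutation +0.15 -- matching the narrative example.
```

Because the attributions always sum to the difference between the patient's prediction and the baseline, they can be displayed directly as the percentage contributions quoted in the clinician-facing message.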

Moreover, higher-level interpretability approaches-including prototype learning and counterfactual explanations-further deepen clinical understanding. Prototype learning frameworks like ProtoPNet base diagnoses on learned "prototypical" patterns of disease, allowing clinicians to compare a new patient’s data against representative examples and assess diagnostic similarity through intuitive analogical reasoning[98]. Counterfactual explanations address clinically relevant “what-if” scenarios-for example: “Had this patient’s KIT mutation been benign instead of pathogenic, the model’s confidence in diagnosing benign GIST would rise from 10% to 92%”. This capability enables clinicians to explore the sensitivity of predictions to specific variables and understand which factors are pivotal in altering diagnostic or therapeutic outcomes[99].
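A minimal counterfactual search over binary features illustrates the "what-if" mechanics: find the smallest set of flipped inputs that moves the prediction across the decision threshold. The GIST malignancy scorer, its feature names, and its weights below are invented assumptions for illustration only:

```python
from itertools import combinations

def counterfactuals(features, predict, threshold=0.5):
    """Return the smallest sets of flipped binary features that move the
    prediction across the decision threshold ("what would have to differ?")."""
    names = list(features)
    found = []
    for k in range(1, len(names) + 1):
        for subset in combinations(names, k):
            flipped = dict(features)
            for n in subset:
                flipped[n] = 1 - flipped[n]
            if (predict(flipped) >= threshold) != (predict(features) >= threshold):
                found.append((subset, predict(flipped)))
        if found:   # stop at the smallest change that flips the label
            break
    return found

# Hypothetical GIST malignancy scorer; the KIT-pathogenicity weight dominates.
def malignancy(f):
    return (0.05 + 0.6 * f["KIT_pathogenic"]
                 + 0.2 * f["large_tumor"]
                 + 0.1 * f["high_mitotic_index"])

patient = {"KIT_pathogenic": 1, "large_tumor": 1, "high_mitotic_index": 0}
print(counterfactuals(patient, malignancy))
```

Here the single decisive flip is the KIT pathogenicity flag, echoing the narrative example: no other one-feature change crosses the threshold, so the explanation pinpoints the variable that pivots the diagnosis.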

In clinical practice, these XAI techniques are increasingly integrated into the user interfaces of CDSS. Rather than presenting a standalone label such as “high risk” or “recommend Drug A”, AI systems now generate interactive reports augmented with an evidence panel. This panel may display attention heatmaps, SHAP value waterfall plots for key features, and counterfactual analyses. As a result, AI evolves from a passive “black box” into an active “intelligent collaborator” that presents multifaceted, interpretable evidence-supporting, rather than replacing, clinical judgment and paving the way for truly human-AI collaborative precision medicine[100,101].

In summary, addressing the "black box" nature of AI models is fundamental to the broad adoption of multimodal AI in intelligent decision-making for personalized gastrointestinal cancer treatment (Table 4). By continuously advancing interpretability techniques and developing transparent, trustworthy AI systems, we can foster greater clinician confidence in AI-assisted diagnosis and therapy, enhance the scientific rigor and personalization of clinical decisions, and ultimately advance the goals of precision medicine[95,101]. Future research should align more closely with clinical workflows by optimizing the usability and interactivity of interpretability algorithms, validating models in real-world clinical settings, promoting multidisciplinary collaboration, and improving the ethical compliance and regulatory approval prospects of AI-driven clinical systems.

Multicenter collaboration and standardization

In the application of multimodal AI to individualized gastrointestinal cancer treatment, multicenter collaboration and data standardization are critical for enabling broad model deployment and precise clinical implementation. Variations in hardware, imaging protocols, data collection formats, and patient demographics across medical institutions lead to significant heterogeneity in clinical data, which directly undermines the transferability and generalization capability of AI models. Specifically, features learned by a model during training at one institution often fail to align with the data distributions of other centers, resulting in degraded performance or even model failure-particularly in tasks such as gastrointestinal tumor imaging diagnosis and prognosis prediction. For instance, a high-precision model for predicting peritoneal metastasis may perform poorly when applied across institutions if it is not trained on diverse, multicenter data, due to its inability to accommodate differences in imaging modalities and scanning parameters, thereby limiting its real-world clinical utility[13].

Therefore, promoting large-scale data sharing and establishing standardized frameworks across institutions are essential. First, harmonizing data formats and acquisition protocols significantly reduces preprocessing complexity and enhances the consistency and stability of model training. Second, integrating multicenter datasets not only increases sample size and improves model training quality but also strengthens the model’s adaptability to diverse patient populations and clinical environments, thereby improving the accuracy and reliability of personalized treatment recommendations. Furthermore, multicenter collaboration facilitates cross-institutional resource integration, enabling the construction and continuous updating of high-quality, curated databases that serve as a foundation for iterative refinement of AI algorithms. For example, although AI models based on multimodal chest imaging have demonstrated strong predictive performance in lung function assessment, their clinical translation remains challenging due to the absence of unified data standards and rigorous multicenter validation-highlighting the imperative for coordinated standardization efforts[95].
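As a sketch of what format harmonization involves in code, the snippet below maps two hypothetical center-specific record layouts onto one shared schema; the field names, units, and the schema itself are invented for illustration and are not the OMOP CDM:

```python
# Minimal harmonization sketch: rename center-specific fields and rescale
# units into a shared, canonical record layout before pooled model training.
def harmonize(record, field_map, unit_scale=1.0):
    """Map local field names to canonical ones and normalize tumor size to mm."""
    out = {canon: record[local] for local, canon in field_map.items()}
    out["tumor_size_mm"] = round(out["tumor_size_mm"] * unit_scale, 2)
    return out

# Center A reports lesion size in cm; center B already reports mm.
center_a = {"pid": "A-001", "size_cm": 3.2, "scan": "CT"}
center_b = {"id": "B-417", "lesion_mm": 28.0, "imaging": "MRI"}

rows = [
    harmonize(center_a,
              {"pid": "patient_id", "size_cm": "tumor_size_mm", "scan": "modality"},
              unit_scale=10.0),
    harmonize(center_b,
              {"id": "patient_id", "lesion_mm": "tumor_size_mm", "imaging": "modality"}),
]
print(rows)
```

Once every center emits the same canonical rows, downstream preprocessing and model code no longer need per-site branches, which is the practical payoff of the common-data-model efforts described above.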

Additionally, privacy protection and data security must be addressed within multicenter collaborations. Privacy-preserving technologies such as FL allow collaborative model training without transferring raw patient data beyond local sites, thus safeguarding patient confidentiality while enabling data synergy. Looking ahead, integrating multimodal data fusion techniques to build standardized, comprehensive databases encompassing genomics, radiomics, immunomics, and clinical metadata will substantially enhance AI-driven decision-making in precision oncology for gastrointestinal tumors and advance the level of individualized care. In conclusion, multicenter collaboration and robust data standardization are not only essential for overcoming model transfer challenges but also constitute the foundational infrastructure for the efficient, reliable, and scalable deployment of AI in personalized gastrointestinal cancer therapy (Table 4)[102,103].
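The FL idea, in which model updates travel while raw patient data stay on site, can be sketched with a scalar federated-averaging (FedAvg) toy; the two "centers" and their least-squares task below are invented for illustration and stand in for real local training:

```python
# Minimal FedAvg sketch: each center computes a local update on its private
# data, and only model weights -- never raw records -- reach the server.
def local_update(w, data, lr=0.1):
    """One gradient-descent step of least-squares fitting y ~ w * x on-site."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fedavg(global_w, center_datasets, rounds=50):
    for _ in range(rounds):
        local = [local_update(global_w, d) for d in center_datasets]  # on-site
        sizes = [len(d) for d in center_datasets]
        # Server aggregates updates weighted by sample count.
        global_w = sum(w * n for w, n in zip(local, sizes)) / sum(sizes)
    return global_w

# Two hypothetical centers whose (x, y) pairs follow y ~ 2x with local noise.
center_1 = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
center_2 = [(1.5, 2.9), (2.5, 5.1)]
w = fedavg(0.0, [center_1, center_2])
print(round(w, 2))  # converges close to the shared slope of ~2
```

The same weighted-averaging loop scales conceptually to deep networks: only the aggregation payload grows, while the privacy property, that no site ever exports patient-level data, is unchanged.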

FUTURE DIRECTIONS

Expanding the integration of emerging data modalities-such as digital pathology and patient behavioral data-is essential for advancing multimodal AI in personalized gastrointestinal cancer treatment. As AI applications deepen in clinical oncology, future progress hinges on broadening both the diversity and granularity of data sources. While medical imaging has been widely adopted, digital pathology offers high-resolution insights at the cellular and tissue levels, enabling the characterization of tumor microarchitecture and molecular profiles that conventional imaging may overlook. Furthermore, patient behavioral data-including lifestyle patterns, medication adherence, and psychological well-being-are increasingly recognized as critical determinants of therapeutic outcomes. Integrating these novel data types into multimodal AI frameworks not only improves the model’s capacity to capture tumor heterogeneity and inter-patient variability but also strengthens the comprehensiveness of clinical decision support. For instance, recent studies have demonstrated the feasibility of predicting survival rates and disease progression in gastrointestinal cancer patients by jointly analyzing gene expression profiles, immune cell dynamics, and lifestyle factors, highlighting the transformative potential of multi-modal data fusion[102]. Moving forward, the incorporation of digital pathology and behavioral data will enable AI models to more accurately reflect underlying tumor biology and holistic patient states, thereby supporting the refinement and personalization of therapeutic strategies.

The integration of reinforcement learning (RL) and self-supervised learning (SSL) presents a promising approach to enhancing the adaptability of AI models in complex clinical domains such as gastrointestinal oncology. Conventional multimodal AI systems predominantly rely on large-scale annotated datasets for supervised training; however, in the context of gastrointestinal tumors, labeled data are often costly to acquire and prone to bias, thereby constraining model generalization. RL and SSL-two advanced machine learning paradigms-offer complementary advantages in improving model adaptability and data efficiency. Specifically, RL enables iterative optimization of decision-making policies through environmental interaction, making it particularly suitable for modeling dynamic treatment processes, including the adjustment and refinement of multi-stage therapeutic strategies. SSL, on the other hand, leverages inherent data structures to facilitate representation learning without extensive human annotation, thus reducing dependency on labeled data and enhancing the model’s capacity to interpret heterogeneous multimodal inputs. By synergizing these methodologies, future multimodal AI systems may achieve continuous learning and autonomous improvement, enabling more effective adaptation to evolving patient conditions and individualized treatment requirements in real-world clinical environments. For instance, in optimizing radiotherapy and chemotherapy regimens for gastrointestinal tumors, RL can guide the identification of optimal treatment trajectories within simulated clinical scenarios, while SSL enhances the extraction and integration of salient features from imaging and genomic data, collectively improving predictive accuracy and clinical decision support.
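A tabular Q-learning toy can make the RL half of this pairing concrete. The three "disease states", two "dose" actions, and the deterministic reward table below are invented assumptions standing in for a validated clinical simulator; a deployable system would learn inside such a simulator, never from a hand-made table:

```python
import random

STATES  = ("stable", "progressing", "remission")
ACTIONS = ("standard_dose", "escalated_dose")

def step(state, action):
    """Hypothetical transition/reward model of response to therapy."""
    if state == "progressing" and action == "escalated_dose":
        return "stable", 1.0
    if state == "stable" and action == "standard_dose":
        return "remission", 2.0
    return "progressing", -1.0          # every other choice worsens the course

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = "progressing"
        for _ in range(4):              # short multi-stage treatment horizon
            a = (rng.choice(ACTIONS) if rng.random() < eps
                 else max(ACTIONS, key=lambda a: q[(s, a)]))
            s2, r = step(s, a)
            # Standard Q-learning update toward the bootstrapped target.
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS)
                                  - q[(s, a)])
            s = s2
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)  # learned to escalate on progression, then de-escalate when stable
```

The learned policy (escalate while progressing, return to standard dosing once stable) emerges purely from trial-and-error interaction with the simulator, which is the mechanism the paragraph above describes for multi-stage regimen refinement; SSL would complement this by pre-training the state representations that feed such a policy.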

Advancing the translation of multimodal AI into clinical trials and real-world applications is critical for realizing its full potential in patient care. Despite significant progress in research, the clinical adoption of multimodal AI in gastrointestinal tumor management remains hindered by practical and regulatory challenges. To bridge this gap, increased emphasis should be placed on integrating multimodal AI systems into prospective clinical trials to rigorously evaluate their impact on treatment planning, outcome prediction, and prognostic assessment. Clinical trials offer standardized data collection protocols and robust evaluation frameworks, serving as essential pathways for validating the safety, efficacy, and reliability of AI-driven tools. Concurrently, the incorporation of real-world data captures the heterogeneity and complexity inherent in routine clinical practice, supporting external validation across diverse populations and healthcare settings. A combined strategy leveraging both clinical trial evidence and real-world evidence can accelerate the regulatory approval, clinical implementation, and policy development necessary for scalable deployment. For example, AI models designed to predict peritoneal metastasis in gastrointestinal cancers have demonstrated utility in intraoperative detection of microscopic lesions and recurrence risk stratification. Once validated through well-designed clinical trials, such models could become integral components of surgical decision support systems[13]. Ultimately, fostering standardization, interoperability, and seamless integration of multimodal AI technologies into clinical workflows is essential for advancing personalized medicine and achieving widespread adoption in individualized cancer care (Table 4).

To systematically translate the integrated RL and SSL framework from research into clinical practice in gastrointestinal oncology, a structured, phased roadmap is essential. This roadmap delineates clear short-term (1-3 years), mid-term (3-5 years), and long-term (5+ years) objectives centered on technical validation, limited clinical integration, and broad deployment of adaptive learning systems. It is designed to sequentially address critical challenges-including algorithm development, feasibility assessment, regulatory approval, and sustained clinical adoption-ensuring that these advanced AI paradigms mature into reliable tools that enhance personalized treatment decisions and improve patient outcomes. Specific milestones across technical, clinical, and regulatory domains are summarized in Table 5.

Table 5 Translation roadmap for clinical application of reinforcement learning and self-supervised learning in gastrointestinal tumors.
Phase | Timeframe | Core objective | Key technical milestones | Clinical & regulatory milestones
Short-term | 1-3 years | Foundational development & algorithmic validation | (1) Complete SSL model pre-training using large-scale historical data; (2) Construct RL simulation environments based on historical outcomes; and (3) Validate superior predictive accuracy of integrated models vs baselines on retrospective data | (1) Publication of proof-of-concept studies; and (2) Establishment of open-source benchmark datasets and simulation platforms
Mid-term | 3-5 years | Clinical trials in limited settings & system integration | (1) Develop interpretable, human-in-the-loop CDSS; (2) Model outputs serve as assistive decision aids for clinicians; and (3) Validate system usability and clinician acceptance in prospective observational studies | (1) Obtain initial regulatory approval (e.g., as Class II medical device software); and (2) Develop clinical workflow integration guidelines
Long-term | 5+ years | Widespread integration & adaptive learning systems | (1) Achieve multi-center deployment using privacy-preserving techniques (e.g., Federated Learning); (2) Explore regulated continuous learning and model adaptation; and (3) Conduct large-scale RCTs with OS as a primary endpoint | (1) Confirm clinical benefit through high-level evidence; (2) Establish new standards for individualized care; and (3) Advocate for healthcare reimbursement policy coverage
CONCLUSION

Multimodal AI enhances the precision of individualized diagnosis and treatment for gastrointestinal tumors by integrating diverse data sources-including medical imaging, endoscopy, and multi-omics-demonstrating substantial potential in early detection, therapeutic strategy optimization, and prognostic evaluation. Nevertheless, critical challenges remain, including inconsistent data quality, lack of standardization, limited model interpretability, and barriers to clinical translation. To enable the standardized implementation and broad adoption of multimodal AI in gastrointestinal oncology, future efforts must prioritize large-scale multicenter collaboration, the development of high-quality, harmonized datasets, and the advancement of transparent, clinically validated algorithms.

ACKNOWLEDGEMENTS

We sincerely thank our colleagues at Shanghai Xuhui Central Hospital and Shanghai Fourth People's Hospital Affiliated to Tongji University School of Medicine for their insightful discussions and continuous support.

Footnotes

Provenance and peer review: Invited article; Externally peer reviewed.

Peer-review model: Single blind

Specialty type: Gastroenterology and hepatology

Country of origin: China

Peer-review report’s classification

Scientific Quality: Grade A, Grade A

Novelty: Grade A, Grade B

Creativity or Innovation: Grade A, Grade A

Scientific Significance: Grade A, Grade A

P-Reviewer: Inam S, PhD, Assistant Professor, Researcher, Pakistan S-Editor: Qu XL L-Editor: A P-Editor: Lei YY

References
1.  Brčić I, Argyropoulos A, Liegl-Atzwanger B. Update on Molecular Genetics of Gastrointestinal Stromal Tumors. Diagnostics (Basel). 2021;11:194.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Full Text (PDF)]  [Cited by in Crossref: 45]  [Cited by in RCA: 42]  [Article Influence: 8.4]  [Reference Citation Analysis (0)]
2.  Florou V, Trent JC, Wilky BA. Precision medicine in gastrointestinal stromal tumors. Discov Med. 2019;28:267-276.  [PubMed]  [DOI]
3.  Nikiforchin A, Peng R, Sittig M, Kotiah S. A Rare Case of Metastatic Heterogeneous Poorly Differentiated Neuroendocrine Carcinoma of Ileum: A Case Report and Literature Review. J Med Cases. 2020;11:6-11.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Full Text (PDF)]  [Cited by in Crossref: 1]  [Cited by in RCA: 3]  [Article Influence: 0.5]  [Reference Citation Analysis (0)]
4.  Zhang XB, Fan YB, Jing R, Getu MA, Chen WY, Zhang W, Dong HX, Dakal TC, Hayat A, Cai HJ, Ashrafizadeh M, Abd El-Aty AM, Hacimuftuoglu A, Liu P, Li TF, Sethi G, Ahn KS, Ertas YN, Chen MJ, Ji JS, Ma L, Gong P. Gastroenteropancreatic neuroendocrine neoplasms: current development, challenges, and clinical perspectives. Mil Med Res. 2024;11:35.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Cited by in RCA: 17]  [Reference Citation Analysis (0)]
5.  Kumar R, Shalaby A, Narra LR, Gokhale S, Deek MP, Jabbour SK. Updates in the Role of Positron Emission Tomography/Computed Tomography in Radiation Oncology in Gastrointestinal Malignancies. PET Clin. 2025;20:219-229.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Cited by in Crossref: 1]  [Cited by in RCA: 1]  [Article Influence: 1.0]  [Reference Citation Analysis (0)]
6.  Shao J, Ma J, Zhang Q, Li W, Wang C. Predicting gene mutation status via artificial intelligence technologies based on multimodal integration (MMI) to advance precision oncology. Semin Cancer Biol. 2023;91:1-15.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Cited by in RCA: 40]  [Reference Citation Analysis (0)]
7.  Zheng L, Jin DW, Yu HW, Yu Z, Qian LY. Application of AI in the identification of gastrointestinal stromal tumors: a comprehensive analysis based on pathological, radiological, and genetic variation features. Front Genet. 2025;16:1555744.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Full Text (PDF)]  [Cited by in RCA: 2]  [Reference Citation Analysis (0)]
8.  Keller RB, Mazor T, Sholl L, Aguirre AJ, Singh H, Sethi N, Bass A, Nagaraja AK, Brais LK, Hill E, Hennessey C, Cusick M, Del Vecchio Fitz C, Zwiesler Z, Siegel E, Ovalle A, Trukhanov P, Hansel J, Shapiro GI, Abrams TA, Biller LH, Chan JA, Cleary JM, Corsello SM, Enzinger AC, Enzinger PC, Mayer RJ, McCleary NJ, Meyerhardt JA, Ng K, Patel AK, Perez KJ, Rahma OE, Rubinson DA, Wisch JS, Yurgelun MB, Hassett MJ, MacConaill L, Schrag D, Cerami E, Wolpin BM, Nowak JA, Giannakis M. Programmatic Precision Oncology Decision Support for Patients With Gastrointestinal Cancer. JCO Precis Oncol. 2023;7:e2200342.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Full Text (PDF)]  [Cited by in Crossref: 4]  [Cited by in RCA: 8]  [Article Influence: 2.7]  [Reference Citation Analysis (0)]
9.  Nakamura Y, Taniguchi H, Ikeda M, Bando H, Kato K, Morizane C, Esaki T, Komatsu Y, Kawamoto Y, Takahashi N, Ueno M, Kagawa Y, Nishina T, Kato T, Yamamoto Y, Furuse J, Denda T, Kawakami H, Oki E, Nakajima T, Nishida N, Yamaguchi K, Yasui H, Goto M, Matsuhashi N, Ohtsubo K, Yamazaki K, Tsuji A, Okamoto W, Tsuchihara K, Yamanaka T, Miki I, Sakamoto Y, Ichiki H, Hata M, Yamashita R, Ohtsu A, Odegaard JI, Yoshino T. Clinical utility of circulating tumor DNA sequencing in advanced gastrointestinal cancer: SCRUM-Japan GI-SCREEN and GOZILA studies. Nat Med. 2020;26:1859-1864.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Cited by in Crossref: 266]  [Cited by in RCA: 263]  [Article Influence: 43.8]  [Reference Citation Analysis (0)]
10.  Pretta A, Lai E, Donisi C, Spanu D, Ziranu P, Pusceddu V, Puzzoni M, Massa E, Scartozzi M. Circulating tumour DNA in gastrointestinal cancer in clinical practice: Just a dream or maybe not? World J Clin Oncol. 2022;13:980-983.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Full Text (PDF)]  [Cited by in Crossref: 1]  [Cited by in RCA: 5]  [Article Influence: 1.3]  [Reference Citation Analysis (0)]
11.  Liao X, Li G, Cai R, Chen R. A Review of Emerging Biomarkers for Immune Checkpoint Inhibitors in Tumors of the Gastrointestinal Tract. Med Sci Monit. 2022;28:e935348.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Full Text (PDF)]  [Cited by in Crossref: 2]  [Cited by in RCA: 4]  [Article Influence: 1.0]  [Reference Citation Analysis (0)]
12.  Yazdanpanah F, Hunt SJ. PET-Computed Tomography in the Management of Sarcoma by Interventional Oncology. PET Clin. 2025;S1556-8598(25)00071.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Cited by in RCA: 1]  [Reference Citation Analysis (0)]
13.  Barat M, Pellat A, Dohan A, Hoeffel C, Coriat R, Soyer P. CT and MRI of Gastrointestinal Stromal Tumors: New Trends and Perspectives. Can Assoc Radiol J. 2024;75:107-117.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Cited by in Crossref: 10]  [Cited by in RCA: 16]  [Article Influence: 8.0]  [Reference Citation Analysis (0)]
14.  Wang YY, Liu B, Wang JH. Application of deep learning-based convolutional neural networks in gastrointestinal disease endoscopic examination. World J Gastroenterol. 2025;31:111137.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Full Text (PDF)]  [Cited by in RCA: 3]  [Reference Citation Analysis (0)]
15.  Zhang Y. Enhancing rectal cancer liver metastasis prediction: Magnetic resonance imaging-based radiomics, bias mitigation, and regulatory considerations. World J Gastrointest Oncol. 2025;17:102151.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Full Text (PDF)]  [Cited by in RCA: 1]  [Reference Citation Analysis (0)]
16.  Wojtulewski A, Sikora A, Dineen S, Raoof M, Karolak A. Using artificial intelligence and statistics for managing peritoneal metastases from gastrointestinal cancers. Brief Funct Genomics. 2025;24:elae049.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Cited by in RCA: 1]  [Reference Citation Analysis (0)]
17.  Ren SQ, Chen JM, Cai C. Translational artificial intelligence in gastrointestinal and hepatic disorders: Advancing intelligent clinical decision-making for diagnosis, treatment, and prognosis. World J Gastroenterol. 2025;31:110742.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Full Text (PDF)]  [Cited by in RCA: 1]  [Reference Citation Analysis (0)]
18.  Pannala R, Krishnan K, Melson J, Parsi MA, Schulman AR, Sullivan S, Trikudanathan G, Trindade AJ, Watson RR, Maple JT, Lichtenstein DR. Artificial intelligence in gastrointestinal endoscopy. VideoGIE. 2020;5:598-613.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Full Text (PDF)]  [Cited by in Crossref: 61]  [Cited by in RCA: 59]  [Article Influence: 9.8]  [Reference Citation Analysis (0)]
19.  Quek SXZ, Ho KY. Artificial Intelligence in Upper Gastrointestinal Diagnosis. Korean J Helicobacter Up Gastrointest Res. 2025;25:251-260.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Full Text (PDF)]  [Cited by in RCA: 1]  [Reference Citation Analysis (0)]
20.  Wu R, Qin K, Fang Y, Xu Y, Zhang H, Li W, Luo X, Han Z, Liu S, Li Q. Application of the convolution neural network in determining the depth of invasion of gastrointestinal cancer: a systematic review and meta-analysis. J Gastrointest Surg. 2024;28:538-547.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Cited by in RCA: 7]  [Reference Citation Analysis (0)]
21.  Iacucci M, Santacroce G, Yasuharu M, Ghosh S. Artificial Intelligence-Driven Personalized Medicine: Transforming Clinical Practice in Inflammatory Bowel Disease. Gastroenterology. 2025;169:416-431.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Cited by in Crossref: 4]  [Cited by in RCA: 11]  [Article Influence: 11.0]  [Reference Citation Analysis (0)]
22.  Iacucci M, Nardone OM, Ditonno I, Capobianco I, Pugliano CL, Maeda Y, Majumder S, Zammarchi I, Santacroce G, Ghosh S. Advancing Inflammatory Bowel Disease-Driven Colorectal Cancer Management: Molecular Insights and Endoscopic Breakthroughs Towards Precision Medicine. Clin Gastroenterol Hepatol. 2025;23:2361-2373.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Cited by in Crossref: 2]  [Cited by in RCA: 6]  [Article Influence: 6.0]  [Reference Citation Analysis (0)]
23.  Clement David-Olawade A, Aderinto N, Egbon E, Olatunji GD, Kokori E, Olawade DB. Enhancing endoscopic precision: the role of artificial intelligence in modern gastroenterology. J Gastrointest Surg. 2025;29:102195.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Cited by in RCA: 2]  [Reference Citation Analysis (0)]
24.  Araújo CC, Frias J, Mendes F, Martins M, Mota J, Almeida MJ, Ribeiro T, Macedo G, Mascarenhas M. Unlocking the Potential of AI in EUS and ERCP: A Narrative Review for Pancreaticobiliary Disease. Cancers (Basel). 2025;17:1132.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Cited by in RCA: 2]  [Reference Citation Analysis (0)]
25.  Li Z, Chen S, Feng W, Luo Y, Lai H, Li Q, Xiu B, Li Y, Li Y, Huang S, Zhu X. A pan-cancer analysis of HER2 index revealed transcriptional pattern for precise selection of HER2-targeted therapy. EBioMedicine. 2020;62:103074.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Full Text (PDF)]  [Cited by in Crossref: 10]  [Cited by in RCA: 42]  [Article Influence: 7.0]  [Reference Citation Analysis (0)]
26.  Liu Y, Gao F, Cheng Y, Qi L, Yu H. Applications and advances of multi-omics technologies in gastrointestinal tumors. Front Med (Lausanne). 2025;12:1630788.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Full Text (PDF)]  [Cited by in RCA: 3]  [Reference Citation Analysis (0)]
27.  Wu F, Feng Z, Wang X, Guo Y, Wu B, Bai S, Lan N, Chen M, Ren J. Sphingosine-1-phosphate stimulates colorectal cancer tumor microenvironment angiogenesis and induces macrophage polarization via macrophage migration inhibitory factor. Front Immunol. 2025;16:1564213.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Full Text (PDF)]  [Cited by in RCA: 5]  [Reference Citation Analysis (0)]
28.  Hamamoto R, Komatsu M, Takasawa K, Asada K, Kaneko S. Epigenetics Analysis and Integrated Analysis of Multiomics Data, Including Epigenetic Data, Using Artificial Intelligence in the Era of Precision Medicine. Biomolecules. 2019;10:62.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Full Text (PDF)]  [Cited by in Crossref: 73]  [Cited by in RCA: 64]  [Article Influence: 9.1]  [Reference Citation Analysis (0)]
29.  Niinuma T, Kitajima H, Yamamoto E, Maruyama R, Aoki H, Harada T, Ishiguro K, Sudo G, Toyota M, Yoshido A, Kai M, Nakase H, Sugai T, Suzuki H. An Integrated Epigenome and Transcriptome Analysis to Clarify the Effect of Epigenetic Inhibitors on GIST. Anticancer Res. 2021;41:2817-2828.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Cited by in RCA: 2]  [Reference Citation Analysis (0)]
30.  Corti C, Cobanaj M, Dee EC, Criscitiello C, Tolaney SM, Celi LA, Curigliano G. Artificial intelligence in cancer research and precision medicine: Applications, limitations and priorities to drive transformation in the delivery of equitable and unbiased care. Cancer Treat Rev. 2023;112:102498.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Cited by in RCA: 38]  [Reference Citation Analysis (0)]
31.  Lyu H, Wang S, Guo G, Lin W, Huang C, Chen H, Xu C, Liu L, Huang Q, Xue F. A machine learning-derived immune-related prognostic model identifies PLXNA3 as a functional risk gene in colorectal cancer. Front Immunol. 2025;16:1653794.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Full Text (PDF)]  [Cited by in RCA: 1]  [Reference Citation Analysis (0)]
32.  Nam Y, Kim J, Jung SH, Woerner J, Suh EH, Lee DG, Shivakumar M, Lee ME, Kim D. Harnessing Artificial Intelligence in Multimodal Omics Data Integration: Paving the Path for the Next Frontier in Precision Medicine. Annu Rev Biomed Data Sci. 2024;7:225-250.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Full Text (PDF)]  [Cited by in Crossref: 29]  [Cited by in RCA: 31]  [Article Influence: 15.5]  [Reference Citation Analysis (0)]
33.  Zuo C, Zhu J, Zou J, Chen L. Unravelling tumour spatiotemporal heterogeneity using spatial multimodal data. Clin Transl Med. 2025;15:e70331.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Full Text (PDF)]  [Cited by in RCA: 4]  [Reference Citation Analysis (0)]
34.  Luo B, Teng F, Tang G, Cen W, Liu X, Chen J, Qu C, Liu X, Liu X, Jiang W, Huang H, Feng Y, Zhang X, Jian M, Li M, Xi F, Li G, Liao S, Chen A, Yu W, Xu X, Zhang J. StereoMM: a graph fusion model for integrating spatial transcriptomic data and pathological images. Brief Bioinform. 2025;26:bbaf210.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Full Text (PDF)]  [Cited by in RCA: 1]  [Reference Citation Analysis (0)]
35.  Kehl KL. Use of Large Language Models in Clinical Cancer Research. JCO Clin Cancer Inform. 2025;9:e2500027.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Cited by in RCA: 2]  [Reference Citation Analysis (0)]
36. Hu S, Ding M, Lou J, Qin J, Chen Y, Liu Z, Li Y, Nie J, Xu M, Sun H, Gu X, Xu T, Wang S, Wang S, Pan Y. COL10A1(+) fibroblasts promote colorectal cancer metastasis and M2 macrophage polarization with pan-cancer relevance. J Exp Clin Cancer Res. 2025;44:243.
37. Wu P, Sun R, Fahira A, Chen Y, Jiangzhou H, Wang K, Yang Q, Dai Y, Pan D, Shi Y, Wang Z. DROEG: a method for cancer drug response prediction based on omics and essential genes integration. Brief Bioinform. 2023;24:bbad003.
38. Hussain MS, Rejili M, Khan A, Alshammari SO, Tan CS, Haouala F, Ashique S, Alshammari QA. AI-powered liquid biopsy for early detection of gastrointestinal cancers. Clin Chim Acta. 2025;577:120484.
39. Ibrahim A, Paudyal R, Shah A, Katabi N, Hatzoglou V, Zhao B, Wong RJ, Shaha AR, Tuttle RM, Schwartz LH, Shukla-Dave A, Apte A. Impact of artificial intelligence-based and traditional image preprocessing and resampling on MRI-based radiomics for classification of papillary thyroid carcinoma. BJR Artif Intell. 2025;2:ubaf006.
40. Khokan MIP, Tonni TJ, Rony MAH, Fatema K, Hasan MZ. Framework for enhanced respiratory disease identification with clinical handcrafted features. Comput Biol Med. 2025;195:110588.
41. Mani KA, Terraciano AP, Goldman SN, Bhatta M, Shankar V, De La Garza Ramos R, Fourman MS, Eleswarapu AS. Assessment of Multimodal Natural Language Processing in Ascertaining Perioperative Safety Indicators From Preoperative Notes in Spine Surgery. J Am Acad Orthop Surg. 2025.
42. Hashtarkhani S, Rashid R, Brett CL, Chinthala L, Kumsa FA, Zink JA, Davis RL, Schwartz DL, Shaban-Nejad A. Cancer Diagnosis Categorization in Electronic Health Records Using Large Language Models and BioBERT: Model Performance Evaluation Study. JMIR Cancer. 2025;11:e72005.
43. Marengo A, Pagano A, Santamato V. An efficient cardiovascular disease prediction model through AI-driven IoT technology. Comput Biol Med. 2024;183:109330.
44. Ragab M. An empirical evaluation of fuzzy bidirectional long short-term memory with soft computing based decision-making model for predicting volatility of cryptocurrencies. Sci Rep. 2025;15:8592.
45. Engelke M, Baldini G, Kleesiek J, Nensa F, Dada A. FHIR-Former: enhancing clinical predictions through Fast Healthcare Interoperability Resources and large language models. J Am Med Inform Assoc. 2025;32:1793-1801.
46. Jasodanand VH, Bellitti M, Kolachalama VB. An AI-first framework for multimodal data in Alzheimer's disease and related dementias. Alzheimers Dement. 2025;21:e70719.
47. Tran Z, Byun J, Lee HY, Boggs H, Tomihama EY, Kiang SC. Bias in artificial intelligence in vascular surgery. Semin Vasc Surg. 2023;36:430-434.
48. Soenksen LR, Ma Y, Zeng C, Boussioux L, Villalobos Carballo K, Na L, Wiberg HM, Li ML, Fuentes I, Bertsimas D. Integrated multimodal artificial intelligence framework for healthcare applications. NPJ Digit Med. 2022;5:149.
49. Lipkova J, Chen RJ, Chen B, Lu MY, Barbieri M, Shao D, Vaidya AJ, Chen C, Zhuang L, Williamson DFK, Shaban M, Chen TY, Mahmood F. Artificial intelligence for multimodal data integration in oncology. Cancer Cell. 2022;40:1095-1110.
50. Simon BD, Ozyoruk KB, Gelikman DG, Harmon SA, Türkbey B. The future of multimodal artificial intelligence models for integrating imaging and clinical metadata: a narrative review. Diagn Interv Radiol. 2025;31:303-312.
51. Xie C, Ning Z, Guo T, Yao L, Chen X, Huang W, Li S, Chen J, Zhao K, Bian X, Li Z, Huang Y, Liang C, Zhang Q, Liu Z. Multimodal data integration for biologically-relevant artificial intelligence to guide adjuvant chemotherapy in stage II colorectal cancer. EBioMedicine. 2025;117:105789.
52. Yang Z, Guo C, Li J, Li Y, Zhong L, Pu P, Shang T, Cong L, Zhou Y, Qiao G, Jia Z, Xu H, Cao H, Huang Y, Liu T, Liang J, Wu J, Ma D, Liu Y, Zhou R, Wang X, Ying J, Zhou M, Liu J. An Explainable Multimodal Artificial Intelligence Model Integrating Histopathological Microenvironment and EHR Phenotypes for Germline Genetic Testing in Breast Cancer. Adv Sci (Weinh). 2025;12:e02833.
53. Hong Y, Wang H, Zhang Q, Zhang P, Cheng K, Cao G, Zhang R, Chen B. Machine Learning and Deep Learning Hybrid Approach Based on Muscle Imaging Features for Diagnosis of Esophageal Cancer. Diagnostics (Basel). 2025;15:1730.
54. Lyu J, Chen X, Hossain MS, Al-Hazzaa SAF, Wang C. Dual-MFNet: AI-Driven Dual-Scale Multimodal Fusion With State Space Networks for Personalized MRI Synthesis. IEEE J Biomed Health Inform. 2025;PP.
55. Shimizu T. Answer to the Letter to the Editor of H. Daungsupawong, et al. concerning "A multimodal machine learning model integrating clinical and MRI data for predicting neurological outcomes following surgical treatment for cervical spinal cord injury" by Shimizu T, et al. (Eur Spine J [2025]: doi: 10.1007/s00586-025-08873-2). Eur Spine J. 2025;34:3069.
56. Wang Y, Chi S, Tian Y, Li X, Zhang H, Xu Y, Huang C, Gao Y, Jin G, Fu Q, Cao W, Chen C, Ding H, Zhang Y, Hong Y, Li J, Sun X, Li E, Zhang Y, Yao W, Liu R, Hua Y, Huang H, Xu M, Zhang B, Tao W, Yang T, Gao Y, Wang X, Lin C, Li J, Zhang Q, Liang T. Construction of an artificially intelligent model for accurate detection of HCC by integrating clinical, radiological, and peripheral immunological features. Int J Surg. 2025;111:2942-2952.
57. Oncu E, Ciftci F. Multimodal AI framework for lung cancer diagnosis: Integrating CNN and ANN models for imaging and clinical data analysis. Comput Biol Med. 2025;193:110488.
58. Yu Y, Ren W, Mao L, Ouyang W, Hu Q, Yao Q, Tan Y, He Z, Ban X, Hu H, Lin R, Wang Z, Chen Y, Wu Z, Chen K, Ouyang J, Li T, Zhang Z, Liu G, Chen X, Li Z, Duan X, Wang J, Yao H. MRI-based multimodal AI model enables prediction of recurrence risk and adjuvant therapy in breast cancer. Pharmacol Res. 2025;216:107765.
59. Ding P, Yang J, Guo H, Wu J, Wu H, Li T, Gu R, Zhang L, He J, Yang P, Tian Y, Meng N, Li X, Guo Z, Meng L, Zhao Q. Multimodal Artificial Intelligence-Based Virtual Biopsy for Diagnosing Abdominal Lavage Cytology-Positive Gastric Cancer. Adv Sci (Weinh). 2025;12:e2411490.
60. Tan TE, Ng YP, Calhoun C, Chaung JQ, Yao J, Wang Y, Zhen L, Xu X, Liu Y, Goh RSM, Piccoli G, Vujosevic S, Tan GSW, Sun JK, Ting DSW. Detection of Center-Involved Diabetic Macular Edema With Visual Impairment Using Multimodal Artificial Intelligence Algorithms. Ophthalmol Retina. 2025;9:955-963.
61. Lu Y, Chen L, Wu J, Er L, Shi H, Cheng W, Chen K, Liu Y, Qiu B, Xu Q, Feng Y, Tang N, Wan F, Sun J, Zhi M. Artificial intelligence in endoscopic ultrasonography: risk stratification of gastric gastrointestinal stromal tumors. Therap Adv Gastroenterol. 2023;16:17562848231177156.
62. Ye XH, Zhao LL, Wang L. Diagnostic accuracy of endoscopic ultrasound with artificial intelligence for gastrointestinal stromal tumors: A meta-analysis. J Dig Dis. 2022;23:253-261.
63. Rengo M, Onori A, Caruso D, Bellini D, Carbonetti F, De Santis D, Vicini S, Zerunian M, Iannicelli E, Carbone I, Laghi A. Development and Validation of Artificial-Intelligence-Based Radiomics Model Using Computed Tomography Features for Preoperative Risk Stratification of Gastrointestinal Stromal Tumors. J Pers Med. 2023;13:717.
64. Fu M, Xu J, Lv Y, Jin B. Artificial intelligence in advanced gastric cancer: a comprehensive review of applications in precision oncology. Front Oncol. 2025;15:1630628.
65. Xin Y, Zhang Q, Liu X, Li B, Mao T, Li X. Application of artificial intelligence in endoscopic gastrointestinal tumors. Front Oncol. 2023;13:1239788.
66. Sibomana O, Saka SA, Grace Uwizeyimana M, Mwangi Kihunyu A, Obianke A, Oluwo Damilare S, Bueh LT, Agbelemoge BOG, Omoefe Oveh R. Artificial Intelligence-Assisted Endoscopy in Diagnosis of Gastrointestinal Tumors: A Review of Systematic Reviews and Meta-Analyses. Gastro Hep Adv. 2025;4:100754.
67. Odunsi DI, Sherief HM, Alhajeri S, Rochill K, Mahjoor K, Navarro G, Marra D, Abourdan J, Ortiz JB, Rolse AM, Rai M. Role of Neoadjuvant Therapy in Remodeling Surgical Approaches for Gastrointestinal Malignancies. Curr Gastroenterol Rep. 2025;27:55.
68. Noh S, Sharma AK, Fanta PT, Kato S, Kurzrock R, Sicklick JK. Personalized N-of-1 Combination Therapies for Advanced Gastrointestinal Stromal Tumors. JCO Precis Oncol. 2025;9:e2500066.
69. Cui S, Fan L, Bai Y, Sun X, Cai Y, Dai J, Wang T, Sun C, Wang R, Liu L. A case report of advanced small intestinal stromal tumor with KIT gene mutation and BRCA2 deletion after multi-line treatments. Front Oncol. 2025;15:1630699.
70. Liu J, Wei D, Chen Y, Liu X. Intratumoral core microbiota predicts prognosis and therapeutic response in gastrointestinal cancers. Microbiol Spectr. 2025;13:e0039025.
71. Li MM, Yuan J, Guan XY, Ma NF, Liu M. Molecular subclassification of gastrointestinal cancers based on cancer stem cell traits. Exp Hematol Oncol. 2021;10:53.
72. Zheng C, Fass JN, Shih YP, Gunderson AJ, Sanjuan Silva N, Huang H, Bernard BM, Rajamanickam V, Slagel J, Bifulco CB, Piening B, Newell PHA, Hansen PD, Tran E. Transcriptomic profiles of neoantigen-reactive T cells in human gastrointestinal cancers. Cancer Cell. 2022;40:410-423.e7.
73. Jiang X, Zhan Y, Yang DH, Bao L. Immunotherapy in Gastrointestinal Cancers: Current Insights. Clin Pharmacol. 2025;17:167-183.
74. Kang X, Li D, Sun R. Nanotechnology and natural killer cell immunotherapy: synergistic approaches for precise immune system adjustment and targeted cancer treatment in gastrointestinal tumors. Front Med (Lausanne). 2025;12:1647737.
75. Dermawan JK, Rubin BP. Molecular Pathogenesis of Gastrointestinal Stromal Tumor: A Paradigm for Personalized Medicine. Annu Rev Pathol. 2022;17:323-344.
76. Madhala D, Sundaram S, Chinambedudandapani M, Balasubramanian A. Analysis of C-Kit Exon 9, Exon 11 and BRAFV600E Mutations Using Sangers Sequencing in Gastrointestinal Stromal Tumours. Cureus. 2020;12:e7369.
77. Barkan E, Porta C, Rabinovici-Cohen S, Tibollo V, Quaglini S, Rizzo M. Artificial intelligence-based prediction of overall survival in metastatic renal cell carcinoma. Front Oncol. 2023;13:1021684.
78. Pontes B, Núñez F, Rubio C, Moreno A, Nepomuceno I, Moreno J, Cacicedo J, Praena-Fernandez JM, Rodriguez GAE, Parra C, León BDD, Del Campo ER, Couñago F, Riquelme J, Guerra JLL. A data mining based clinical decision support system for survival in lung cancer. Rep Pract Oncol Radiother. 2021;26:839-848.
79. Khalighi S, Reddy K, Midya A, Pandav KB, Madabhushi A, Abedalthagafi M. Artificial intelligence in neuro-oncology: advances and challenges in brain tumor diagnosis, prognosis, and precision treatment. NPJ Precis Oncol. 2024;8:80.
80. Park J, Rho MJ, Moon HW, Park YH, Kim CS, Jeon SS, Kang M, Lee JY. Prostate cancer trajectory-map: clinical decision support system for prognosis management of radical prostatectomy. Prostate Int. 2021;9:25-30.
81. Mehrpour O, Saeedi F, Hoyte C, Goss F, Shirazi FM. Correction: Utility of support vector machine and decision tree to identify the prognosis of metformin poisoning in the United States: analysis of National Poisoning Data System. BMC Pharmacol Toxicol. 2022;23:68.
82. Shi B, Chen L, Pang S, Wang Y, Wang S, Li F, Zhao W, Guo P, Zhang L, Fan C, Zou Y, Wu X. Large Language Models and Artificial Neural Networks for Assessing 1-Year Mortality in Patients With Myocardial Infarction: Analysis From the Medical Information Mart for Intensive Care IV (MIMIC-IV) Database. J Med Internet Res. 2025;27:e67253.
83. Bergholtz SE, Kurnot SR, Elahi E, DeJonckheere M, Hawley ST, Owens SR, Salami S, Morgan TM, Lapedis CJ. A longitudinal mixed-methods study of pathology explanation clinics in patients with newly diagnosed localized prostate cancer. Am J Clin Pathol. 2024;162:62-74.
84. Welte T, Dinkel J, Maurer F, Richter E, Rohde G, Schwarz C, Taube C, Diel R. [Patients with lung disease caused by non-tuberculous mycobacteria in Germany: a trans-sectoral patient-oriented care concept]. Pneumologie. 2022;76:534-546.
85. Jeon K, Park WY, Kahn CE Jr, Nagy P, You SC, Yoon SH. Advancing Medical Imaging Research Through Standardization: The Path to Rapid Development, Rigorous Validation, and Robust Reproducibility. Invest Radiol. 2025;60:1-10.
86. Zhang A, Zhao E, Wang R, Zhang X, Wang J, Chen E. Multimodal large language models for medical image diagnosis: Challenges and opportunities. J Biomed Inform. 2025;169:104895.
87. Shen X, Jiang H, Chen Y, Wang B, Gao L. PLDP-FL: Federated Learning with Personalized Local Differential Privacy. Entropy (Basel). 2023;25:485.
88. Zhang J, Li Y, Ding Q, Lin L, Ye X. Successive Trajectory Privacy Protection with Semantics Prediction Differential Privacy. Entropy (Basel). 2022;24:1172.
89. Hu Z, Hu K, Hasan MMK. A bidirectional reversible and multilevel location privacy protection method based on attribute encryption. PLoS One. 2024;19:e0309990.
90. Wu G, Wang S, Ning Z, Zhu B. Privacy-Preserved Electronic Medical Record Exchanging and Sharing: A Blockchain-Based Smart Healthcare System. IEEE J Biomed Health Inform. 2022;26:1917-1927.
91. Benjumea J, Ropero J, Dorronzoro-Zubiete E, Rivera-Romero O, Carrasco A. A Proposal for a Robust Validated Weighted General Data Protection Regulation-Based Scale to Assess the Quality of Privacy Policies of Mobile Health Applications: An eDelphi Study. Methods Inf Med. 2023;62:154-164.
92. Lin X, Wu X, Zhu Z, Chen D, Li H, Lin R. Quality and Privacy Policy Compliance of Mental Health Care Apps in China: Cross-Sectional Evaluation Study. J Med Internet Res. 2025;27:e66762.
93. Xia Y, Chen Q, Zeng L, Guo Q, Liu H, Fan S, Huang H. Factors associated with the patient privacy protection behaviours of nursing interns in China: A cross-sectional study. Nurse Educ Pract. 2022;65:103479.
94. Qashqari AA, Almutairi DS, Ennaceur SA, Farhah NS, Almohaithef MA. Healthcare professionals' perceptions of electronic medical record privacy and its impact on work quality in Riyadh hospitals. Saudi Med J. 2025;46:299-306.
95. Aravazhi PS, Gunasekaran P, Benjamin NZY, Thai A, Chandrasekar KK, Kolanu ND, Prajjwal P, Tekuru Y, Brito LV, Inban P. The integration of artificial intelligence into clinical medicine: Trends, challenges, and future directions. Dis Mon. 2025;71:101882.
96. Cottin A, Zulian M, Pécuchet N, Guilloux A, Katsahian S. MS-CPFI: A model-agnostic Counterfactual Perturbation Feature Importance algorithm for interpreting black-box Multi-State models. Artif Intell Med. 2024;147:102741.
97. Rajpoot R, Gour M, Jain S, Semwal VB. Integrated ensemble CNN and explainable AI for COVID-19 diagnosis from CT scan and X-ray images. Sci Rep. 2024;14:24985.
98. Choukali MA, Amirani MC, Valizadeh M, Abbasi A, Komeili M. Pseudo-class part prototype networks for interpretable breast cancer classification. Sci Rep. 2024;14:10341.
99. Zhou S, Pfeiffer N, Islam UJ, Banerjee I, Patel BK, Iquebal AS. Generating Counterfactual Explanations For Causal Inference in Breast Cancer Treatment Response. IEEE Int Conf Automation Sci Eng (CASE). 2022;2022:955-960.
100. Moss L, Corsar D, Shaw M, Piper I, Hawthorne C. Demystifying the Black Box: The Importance of Interpretability of Predictive Models in Neurocritical Care. Neurocrit Care. 2022;37:185-191.
101. Ardic N, Dinc R. Emerging trends in multi-modal artificial intelligence for clinical decision support: A narrative review. Health Informatics J. 2025;31:14604582251366141.
102. Li L, Jiang J, Guo L, Santos J, González AM, Li S, Qin Y. Clinical Diagnosis and Treatment System for Neurological Psychological Gastrointestinal Diseases Based on Multimodal Artificial Intelligence and Immunology. Curr Pharm Biotechnol. 2025.
103. Papageorgiou PS, Christodoulou R, Korfiatis P, Papagelopoulos DP, Papakonstantinou O, Pham N, Woodward A, Papagelopoulos PJ. Artificial Intelligence in Primary Malignant Bone Tumor Imaging: A Narrative Review. Diagnostics (Basel). 2025;15:1714.