Minireviews Open Access
Copyright ©The Author(s) 2025. Published by Baishideng Publishing Group Inc. All rights reserved.
World J Gastroenterol. Sep 21, 2025; 31(35): 111033
Published online Sep 21, 2025. doi: 10.3748/wjg.v31.i35.111033
Explainable artificial intelligence for personalized management of inflammatory bowel disease: A minireview of recent advances
Uchenna E Okpete, Department of Digital Anti-aging Healthcare, Inje University, Gimhae 50834, South Korea
Haewon Byeon, Worker’s Care and Digital Health Lab, Department of Future Technology, Korea University of Technology and Education, Cheonan 31253, South Korea
ORCID number: Uchenna E Okpete (0000-0003-3803-4583); Haewon Byeon (0000-0002-3363-390X).
Author contributions: Okpete UE and Byeon H contributed to writing the article; Byeon H designed the study; Okpete UE was involved in data interpretation and developed the methodology; all authors thoroughly reviewed and endorsed the final manuscript.
Supported by National Research Foundation of Korea, No. RS-2023-00237287.
Conflict-of-interest statement: All the authors report no relevant conflicts of interest for this article.
Open Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: https://creativecommons.org/Licenses/by-nc/4.0/
Corresponding author: Haewon Byeon, PhD, Associate Professor, Director, Worker’s Care and Digital Health Lab, Department of Future Technology, Korea University of Technology and Education, 1600, Chungjeol-ro, Cheonan 31253, South Korea. bhwpuma@naver.com
Received: June 23, 2025
Revised: July 21, 2025
Accepted: August 19, 2025
Published online: September 21, 2025
Processing time: 89 Days and 7.1 Hours

Abstract

Personalized management of inflammatory bowel disease (IBD) is crucial due to the heterogeneity in disease presentation, variable therapeutic response, and the unpredictable nature of disease progression. Although artificial intelligence (AI) and machine learning algorithms offer promising solutions by analyzing complex, multidimensional patient data, the “black-box” nature of many AI models limits their clinical adoption. Explainable AI (XAI) addresses this challenge by making data-driven predictions more transparent and clinically actionable. This minireview focuses on recent advancements and clinical relevance of integrating XAI for personalized IBD management. We explore the importance of XAI in prioritizing treatment and highlight how XAI techniques, such as feature-attribution explanations and interpretable model architectures, enhance transparency in AI models. In recent years, XAI models have been applied to diagnose IBD anomalies by prioritizing the predictive features for gastrointestinal bleeding and dietary intake patterns. Furthermore, studies have revealed that XAI application enhances IBD risk stratification and improves the prediction of drug efficacy and patient responses with high accuracy. By transforming opaque AI models into interpretable tools, XAI fosters clinician trust, supports personalized decision-making, and enables the safe deployment of AI systems in sensitive, individualized IBD care pathways.

Key Words: Precision medicine; Ulcerative colitis; Crohn’s disease; Heterogeneous population; Machine learning; Interpretability; Feature attribution; Clinical decision-making

Core Tip: Personalized management of inflammatory bowel disease is essential because of its heterogeneous clinical presentations and variable treatment responses. While artificial intelligence (AI) offers powerful tools for analyzing patient data and guiding treatment, many AI models lack transparency, which limits their clinical adoption. Explainable AI addresses this issue by making AI predictions more interpretable and trustworthy. This minireview highlights recent advancements in the application of explainable AI to inflammatory bowel disease management, including its use in predicting disease progression, selecting therapies, and monitoring treatment response.



INTRODUCTION

Inflammatory bowel disease (IBD) is a chronic, immune-mediated disorder of the gastrointestinal tract, comprising Crohn’s disease (CD), ulcerative colitis (UC), and indeterminate colitis. IBD occurs in genetically predisposed individuals due to complex interactions arising from dysregulated immune response to the gut microbiota and environmental factors such as diet, geography, and smoking. IBD imposes a significant health burden on millions of people globally, with growing case numbers driven by aging populations and improved survival[1,2]. The heterogeneity in IBD’s clinical presentation, including variable disease location, histopathology, severity, and treatment response, makes personalized care essential but challenging[3].

Personalized management of IBD aims to integrate clinical, genetic, serological, and microbial data to better stratify patients, predict outcomes, and optimize treatment. It considers the heterogeneity of IBD clinical presentations and the underlying disease mechanisms in order to provide several opportunities to improve patient care by maximizing treatment efficacy while minimizing adverse events[4]. Despite notable advances in diagnostics (e.g., colonoscopy, ultrasound, magnetic resonance imaging, histopathology)[5] and therapeutics, clinicians still face major challenges in determining which patients will respond to which treatment, when to escalate therapy, and how to predict the long-term outcomes. These uncertainties hinder timely, targeted intervention and may contribute to suboptimal care or overtreatment.

Studies have demonstrated that approximately one-third of patients requiring corticosteroids do well without needing advanced therapy, indicating a heterogeneous disease course[4]. Ethnic and geographic variability in IBD clinical presentation further influences care strategy. For example, Asian populations show different genetic risk profiles and disease phenotypes compared to Western populations, warranting context-specific diagnostic pathways and regionally adapted treatment protocols[4]. Prognostic models that incorporate clinical variables [e.g., albumin levels, prior surgery, tumor necrosis factor (TNF) exposure] are under development but require broader validation[6,7]. Therefore, future care in IBD requires identifying limitations to current treatment and defining treatment targets that encompass clinical symptoms, biomarkers, endoscopy, cross-sectional imaging, and histopathology.

Artificial intelligence (AI) and machine learning (ML) are increasingly utilized to transform IBD management by enabling the identification of predictive biomarkers and therapeutic response patterns, thereby informing personalized management paradigms in IBD care. These computational approaches have been developed to identify subtle patterns in patient datasets and generate predictive models that assist in clinical decision-making[8]. Notably, AI models have been developed to predict treatment response to biologics, assess disease severity using medical imaging and histopathology, and estimate the risk of relapse[9,10]. However, the widespread clinical adoption of AI in IBD remains limited by the opacity of many AI-driven models, commonly referred to as “black-box” AI: their internal processes are hidden from users, and they lack interpretability and transparency. Such black-box algorithms often fail to provide insight into the rationale behind predictions, making it difficult for clinicians to interpret or trust decision-making processes, especially in high-risk patients. Additional challenges in implementing personalized medicine for IBD include developing new technologies for individualized patient management and data analysis, validating existing biomarkers, and overcoming regulatory hurdles[11].

This is where explainable AI (XAI) becomes critical. Unlike traditional “black-box” AI systems, XAI enhances model interpretability by offering human-understandable explanations for predictions, allowing clinicians to identify which features, such as inflammatory biomarkers, imaging scores, or genetic variants, most influence a model’s output. By making complex model outputs transparent, XAI equips clinicians with actionable insights into how and why decisions are made, using techniques such as feature attribution, decision trees, attention mechanisms, and inherently interpretable model architectures[12,13]. In IBD management, XAI has been applied to prioritize plasma and fecal metabolites and analyze their dietary associations[14]; fecal metabolites were found to be more robust predictors of IBD than plasma metabolites, with stronger diet-metabolite associations. XAI models could, for example, elucidate why a patient is predicted to respond poorly to anti-TNF therapy or why early surgical intervention is recommended. This interpretability facilitates shared decision-making between clinicians and patients, supporting more precise and personalized care strategies. With the increasing global burden of age-related mortality in IBD and the growing demand for cost-effective, personalized treatment strategies, the integration of XAI into clinical workflows offers significant advantages by fostering clinician trust, regulatory acceptance, and ethical deployment. In doing so, XAI facilitates the translation of innovative algorithms into practical clinical applications.

The objective of this minireview is to provide an overview of XAI techniques relevant to IBD management and highlight recent advances in XAI application in IBD management, with a focus on their capacity to improve interpretability and optimize patient outcomes. In contrast to previous reviews that broadly cover AI applications in gastroenterology, our minireview specifically centers on XAI-based approaches, their relevance to IBD care, and the current challenges to their real-world implementation.

XAI TECHNIQUES

XAI encompasses a set of methods and approaches designed to provide transparent, interpretable reasoning behind AI-generated decisions and predictions from ML algorithms. It allows clinicians to understand why a particular treatment is recommended or why a patient is classified as high risk. Before examining how XAI enhances IBD treatment, we outline key interpretability frameworks and classification approaches that underpin explainable modeling in clinical AI systems. Various classification criteria have been proposed in recent literature[15,16], and Figure 1 summarizes these criteria and their respective categories. Below, we elaborate on the categories and characteristics of XAI techniques.

Figure 1
Figure 1 Summary of explainable artificial intelligence methods categorized in this study. AI: Artificial intelligence.
Explanation scope

The scope of model interpretability can be categorized into local and global explanations, each serving distinct yet complementary purposes.

Local explanations: Local explanations focus on the rationale behind individual model predictions, treating the model as a black box and identifying the features that influenced a specific decision. Consider a scenario in which a doctor uses a classifier to guide treatment in a patient with complex clinical features. To build trust and ensure accountability, the doctor must understand why the model made that recommendation. This is where locally explainable methods, such as local interpretable model-agnostic explanations (LIME), SHapley Additive exPlanations (SHAP), and integrated gradients, are critical to providing case-level interpretability. These approaches generate explanations centered on a single input instance by analyzing how individual features contribute to the prediction, thus enabling end users to evaluate and validate the model’s reasoning[17,18].
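As a concrete illustration of a local, instance-level explanation, the following minimal sketch implements integrated gradients for a single prediction from a toy PyTorch network. The model, feature values, and baseline are illustrative assumptions, not a validated clinical tool.

```python
# Minimal sketch of integrated gradients for one input instance.
import torch

# Toy classifier over four hypothetical clinical features.
model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(),
                            torch.nn.Linear(8, 1), torch.nn.Sigmoid())

def integrated_gradients(model, x, baseline, steps=50):
    # Average gradients along a straight path from baseline to input,
    # then scale by (x - baseline), as in Sundararajan et al. (2017).
    alphas = torch.linspace(0, 1, steps).view(-1, 1)
    path = baseline + alphas * (x - baseline)       # (steps, n_features)
    path.requires_grad_(True)
    grads = torch.autograd.grad(model(path).sum(), path)[0]
    return (x - baseline) * grads.mean(dim=0)

x = torch.tensor([[1.2, -0.3, 0.8, 0.0]])           # one synthetic case
print(integrated_gradients(model, x, torch.zeros_like(x)))
```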

Global explanations: In contrast, global explainability methods aim to provide insights into the overall behavior of an AI model across a broad set of input data points. These approaches help uncover how the model processes patterns, assigns importance to features, and makes predictions[19]. Global methods often involve simplifying complex, non-linear models into more interpretable forms, such as linear approximations or rule-based representations. For instance, decision trees and other tree-based algorithms are inherently interpretable at a global level because their hierarchical structure allows each decision path to be traced back to specific input features. Typically, global explanation techniques operate on an array of input instances to address the question: “How does the model generally make decisions?” They often rely on the model’s internal parameters and performance, enabling users to understand the interactions among variables and the general decision-making mechanism.
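The brief sketch below, on synthetic data with illustrative feature names, shows the global interpretability of a shallow decision tree: its entire rule set and overall feature importances can be printed directly.

```python
# Sketch: a shallow decision tree is globally interpretable because
# every decision path can be exported as explicit if-then rules.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=300, n_features=4, random_state=0)
names = ["CRP", "calprotectin", "albumin", "age"]   # illustrative labels

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=names))        # full global rule set
print(dict(zip(names, tree.feature_importances_)))   # overall feature weights
```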

Importantly, under certain conditions, local explanations can be aggregated to construct meaningful global interpretations of a model’s behavior[20]. Hybrid approaches that combine local and global methods are increasingly being used to provide more holistic and concept-based explanations. For example, integrated gradients can provide fine-grained local feature attributions, whereas testing with concept activation vectors enables global, concept-level insights[19]. Similarly, a rule-based model-agnostic method generalizes global explanations from local ones using a scoring system based on rule relevance[20]. These approaches aim to provide comprehensive, human-understandable insights into the decision-making processes of ML models.

Methodology types

XAI techniques can also be classified based on their compatibility with different model types. The key methodological difference lies in how explanations are computed. Model-specific methods require a forward and backward pass through the model to compute gradients or internal activations. Model-agnostic methods typically require only a forward pass, using repeated input perturbations and response evaluations to infer explanatory patterns.

Model-specific explanations: Model-specific methods are tailored to specific model architectures. They rely on the internal structure of the model, such as weights, gradients, or layer activations, to generate explanations. For example, backpropagation-based techniques such as saliency maps and class activation maps require access to gradients and can only be applied to models like neural networks. Although they offer detailed insights, their use is limited to specific models[21].

Model-agnostic explanations: In contrast, model-agnostic methods treat the model as a black box and do not depend on its internal workings. Techniques such as LIME and SHAP perturb input features and analyze output changes to infer the feature importance. These methods are more flexible and applicable across a wide range of models, although they may be less precise and computationally intensive[21,22].
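As a minimal example of the perturbation-based, forward-pass-only idea behind model-agnostic methods, the sketch below computes permutation importance for an arbitrary fitted classifier; the data are synthetic and the setup is purely illustrative.

```python
# Sketch: permutation importance treats the model as a black box and
# needs only repeated forward passes over perturbed inputs.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=400, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Shuffle one feature at a time and measure the drop in accuracy;
# a larger drop means the model relies more on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)
```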

Interpretation types

XAI methods are further categorized into intrinsic and post hoc approaches based on when and how interpretability is introduced.

Intrinsic (model-based) explainability: This refers to models that are inherently interpretable due to their transparent structure. Examples include linear regression, decision trees, rule lists, and generalized additive models[23]. These models embed explainability into their architecture, allowing predictions and rationales to be derived directly from the model’s internal rules or coefficients. However, intrinsic methods are often model-specific and may sacrifice predictive performance in complex tasks[24].
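To make the notion of intrinsic explainability concrete, the sketch below fits a logistic regression on synthetic data; each learned coefficient converts directly into an odds ratio, so the rationale can be read straight from the model itself. The feature names are hypothetical.

```python
# Sketch of an intrinsically interpretable model: logistic regression
# coefficients translate directly into per-feature odds ratios.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)
clf = LogisticRegression().fit(X, y)

# exp(coefficient) = multiplicative change in odds per unit increase.
for name, coef in zip(["biomarker_1", "biomarker_2", "biomarker_3"],
                      clf.coef_[0]):
    print(f"{name}: odds ratio = {np.exp(coef):.2f}")
```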

Post hoc explainability: This involves the application of interpretation techniques after the model is trained. These methods treat models as black or white boxes, offering explanations without altering the original architecture. Techniques such as SHAP, LIME, saliency maps, and class activation mapping (CAM) fall into this category. Post hoc methods are typically model-agnostic, providing flexibility and allowing the integration of explainability into high-performing but opaque models such as deep neural networks[25,26]. While intrinsic methods offer transparency by design, post hoc techniques provide broad applicability and are crucial for interpreting complex, pre-trained models without compromising accuracy.

WHY IS XAI IMPORTANT FOR PRIORITIZING TREATMENT IN IBD

With foundational XAI concepts in place, the following section discusses their relevance and benefits in IBD management. The diagram in Figure 2 illustrates three levels of AI interpretability in diagnosing IBD from endoscopic images. At the top, a black-box AI model simply labels the image as “IBD” without context. This leaves the clinician questioning the reliability and clinical validity of the output, a critical challenge that influences clinical decisions such as initiating or de-escalating biologics.

Figure 2
Figure 2 Visual comparison of conventional artificial intelligence (a black-box model) with two interpretation types of explainable artificial intelligence models (described in the second section under “Interpretation types”), and the implications for user feedback on endoscopic images from inflammatory bowel disease. The black-box model (arrow above the broken line) provides limited information for classification results, such as distinguishing between inflammatory bowel disease and non-inflammatory bowel disease endoscopic images. The middle and bottom branched arrows represent post hoc and intrinsic explainable artificial intelligence methods, respectively. IBD: Inflammatory bowel diseases; AI: Artificial intelligence; XAI: Explainable artificial intelligence.

In contrast, the middle layer adds explainability by highlighting regions of the image that contributed to the diagnosis [e.g., via saliency maps or gradient-weighted CAM (Grad-CAM)]. This allows the physician to understand the model’s reasoning, thereby increasing confidence in applying the result to treatment planning. The bottom layer represents an even more transparent model, which not only points to important areas but also explains why those features indicate IBD (e.g., pattern recognition similar to known clinical indicators). This level of interpretability supports shared decision-making, aligns with clinical reasoning, and helps guide nuanced treatment strategies, such as recommending early biologic intervention in severe endoscopic patterns or tapering therapy in stable UC. Therefore, integrating XAI frameworks into clinical pipelines for IBD management would serve essential functions that collectively enhance patient safety, regulatory alignment, and physician adoption.

Clinical accountability and decision justification

Treatment decisions in IBD, ranging from immunosuppressive therapy to surgical interventions, carry substantial risk. The risk profile varies according to drug class, patient age, and gender, necessitating personalized benefit-risk assessments[27]. Given this complexity, shared decision-making is essential, requiring patients to understand the potential benefits, adverse effects, and the implications of non-treatment[28]. For example, models that forecast the response to anti-TNF therapy can be paired with feature-attribution explanations (e.g., SHAP values) that reveal which clinical or molecular factors drove the prediction. This transparency promotes clinician accountability and justifies treatment choices in ethically and legally sensitive contexts. Furthermore, when misclassifications occur, due to data quality issues or atypical cases, XAI supports root-cause analysis and refinement, ensuring the system evolves safely over time. By adopting a personalized approach and optimizing drug safety profiles, clinicians can enhance treatment outcomes while minimizing risks in IBD[27].

Building trust among clinicians and patients

Trust is foundational in medical management. AI-driven decision support must be sufficiently transparent to be interpretable by domain experts, particularly in clinical decision support systems[29]. XAI allows gastroenterologists and radiologists to understand the rationale behind AI predictions (e.g., why a certain patient would require early biological intervention), thereby fostering confidence and facilitating shared decision-making between clinicians and patients[29,30].

Regulatory compliance and ethical mandates

International regulations, including the General Data Protection Regulation in Europe, have increasingly demanded that automated systems provide meaningful explanations for their decisions[31]. XAI supports these regulatory requirements by making the inner workings of the AI models more transparent and auditable. The General Data Protection Regulation’s “right to explanation” has significant implications for the design and deployment of automated data processing systems, although the exact nature of this right is debated. Data protection authorities have enhanced powers to enforce and interpret this right, potentially leading to the adoption of algorithmic auditing and “data protection by design” methodologies as new industry standards[32]. These developments support regulatory requirements for meaningful explanations of AI decisions.

Enhancing model robustness and clinical performance

Explainability facilitates model optimization. By revealing which input features (e.g., specific imaging markers, C-reactive protein levels, or patient history) disproportionately influence outcomes, data scientists and clinicians can collaboratively fine-tune models for better generalization. These models can predict significant outcomes such as hospitalizations and treatment response, potentially reducing healthcare costs[33]. This iterative refinement is vital for IBD, where heterogeneity in disease presentation requires adaptable, context-sensitive algorithms.

Detecting bias and ensuring equitable care

AI in healthcare presents both opportunities and risks for equitable care delivery. Although AI can improve clinical decision-making, it may perpetuate or exacerbate biases and health disparities if not carefully developed and implemented[34,35]. Algorithmic biases can arise from various sources, including data acquisition, genetic variation, and labeling variability, leading to unequal outcomes across patient subpopulations[35]. To address these challenges, researchers propose interdisciplinary approaches to advance equitable XAI, leveraging expertise from social epidemiology and health equity to critically assess model explanations and identify potential biases[36]. Emerging technologies such as federated learning, disentanglement, and model explainability offer promising avenues for mitigating bias in AI systems[36].

XAI METHODS AND APPLICATIONS IN IBD TREATMENT

This section provides an overview of prominent XAI methods and their specific contributions to personalized treatment of IBD, including CD and UC. Table 1 outlines the most commonly used XAI techniques in IBD management, the studies employing these methods, and their significant contributions.

Table 1 Overview of studies selected for the review of explainable artificial intelligence techniques in inflammatory bowel disease care.
Ref. | XAI technique | IBD type | Modality | Classifier | Contribution | Evaluation metrics (highest)
Onwuka et al[14] | SHAP | CD and UC | Multiomics dataset | LGBM | This study applies SHAP-based XAI to identify key fecal metabolites that robustly predict IBD and their diet associations. It demonstrates that XAI not only enhances metabolite prioritization across datasets but also clarifies how diet-driven microbial metabolites distinctly influence IBD pathology | AUC = 0.93 (distinguishing between IBD and non-IBD cases)
Patel et al[26] | Grad-CAM, saliency maps, integrated gradients, and LIME | UC | Endoscopic images | ResNet50 and MobileNetV2 | This study evaluates the predictions of four XAI techniques in IBD-related endoscopic image classification. The analysis reveals that Grad-CAM most consistently highlights relevant regions, supporting its use in visualizing model focus and validating TeleXGI’s multi-XAI strategy for clinical interpretability | ACC = 98.8% (remote health-risk prediction from UC endoscopic images)
Chierici et al[37] | Saliency and gradient-based techniques | CD and UC | Endoscopic images | DenseNet121 | This study applied saliency and guided backpropagation to a ResNet50 model for IBD endoscopic image classification. The attribution maps revealed clinically relevant features, with guided backpropagation offering clearer, less noisy insights | MCC = 0.94 (classification of healthy controls and IBD patients)
Sutton et al[38] | Grad-CAM | UC | Endoscopic images | DenseNet121 | This study uses explainable deep learning on HyperKvasir endoscopic images to accurately distinguish UC from non-UC conditions and stratify disease severity based on Mayo scores. The DenseNet121 model achieved the highest performance, with Grad-CAM enhancing interpretability through visual explanations | AUC = 0.90 (classification of UC and non-UC images)
Tsai and Lee[39] | Grad-CAM | UC | Endoscopic images | DenseNet201, InceptionV3, and VGG19 | This study leverages Grad-CAM to interpret deep learning model predictions for UC classification, comparing individual CNNs and ensemble models. The triplet ensemble produced the most focused and clinically relevant heatmaps | ACC = 91%
Ma et al[40] | Grad-CAM | CD | Ultrasound images and clinical data | ResNet50 | This study developed a deep learning model combining intestinal ultrasound images and clinical data to predict mucosal healing in CD after one year of treatment. Using Grad-CAM for interpretability, the model highlighted key features such as the bowel wall and mesentery, achieving a high PPV | AUC = 0.73 (classification of mucosal healing and non-mucosal healing)
Maurício and Domingues[41] | Grad-CAM, LIME, SHAP, and occlusion sensitivity | CD and UC | Endoscopic images | CNN + LSTM | This study develops CNN and ViT models to classify CD and UC from endoscopic images, creating lighter versions via knowledge distillation for clinical use. The ViT-S/16 model achieved the best performance, accurately identifying IBD features while ignoring irrelevant elements. Multi-XAI interpretability analyses confirmed its reliability, with minimal misclassifications after temperature-based distillation | ACC = 95% (distinguishing between active and non-active inflammation)
Weng et al[42] | SHAP and LIME | CD | Clinical data | Extreme gradient boosting | This study integrates SHAP and LIME to interpret an extreme gradient boosting model for differentiating intestinal tuberculosis from CD. These XAI methods identified and visualized the key clinical features influencing the machine learning model’s predictions | MCC = 96.9%
Zhen et al[43] | LIME | CD and UC | Clinical data | SVC | This study applied LIME to interpret an SVM model predicting quality-of-life impairment in IBD patients, identifying key modifiable risk factors such as anxiety, abdominal pain, and glucocorticoid use | AUC = 80% (evaluating IBD-related quality-of-life impairments)
Deng et al[44] | Trainable attention mechanisms | CD | Pathological images | RFC and GNN | This study introduces a cross-scale attention mechanism for multi-instance learning that captures inter-scale interactions in whole slide images for CD diagnosis. Trained on approximately 250000 hematoxylin and eosin-stained patches, its cross-scale attention visualizations localize lesion patterns at different magnifications, enhancing both diagnostic accuracy and model interpretability | AUC = 0.89 (distinguishing healthy controls from CD patients)
de Maissin et al[45] | Trainable attention mechanisms | CD | Endoscopic images | ResNet34 | This study introduces a recurrent attention neural network trained on a multi-expert annotated CD capsule endoscopy dataset. It demonstrates that higher annotation quality significantly boosts diagnostic accuracy (up to 93.7% precision). The network mimics human visual focus, enabling interpretable lesion localization and outperforming standard CNNs as annotations improve | ACC = 94.6% (detection of pathological and non-pathological images)
Wu et al[46] | Trainable attention mechanisms | UC | Endoscopic images | FLATer | This study presents FLATer, an explainable transformer-based model combining CNN and ViT for gastrointestinal disease classification. An ablation study reveals the crucial role of the residual block and spatial attention in boosting performance and interpretability, with saliency maps confirming enhanced localization of pathological regions | ACC = 99.7% (multi-class classification of images into specific diseases)
Sucipto et al[47] | Trainable attention mechanisms | CD | Pathological images | RFC and GNN | This study introduces a cross-scale attention mechanism for multi-instance learning that captures inter-scale interactions in whole slide images for CD diagnosis. Trained on approximately 250000 hematoxylin and eosin-stained patches, it achieved an AUC of 0.8924. Cross-scale attention visualizations localize lesion patterns at different magnifications, enhancing both diagnostic accuracy and model interpretability | ACC = 87% and 85% (prediction of histologic remission)
Ahamed et al[48] | SHAP, heatmaps, Grad-CAM, and saliency maps | UC | Endoscopic images | EELM | This study introduces a lightweight, parallel-depth CNN optimized for gastrointestinal image classification using the GastroVision dataset, including IBD cases. Integrated with multiple XAI techniques (Grad-CAM, SHAP, saliency maps), the model enhances interpretability and diagnostic transparency. Using EELM for final classification, the system achieved high accuracy and robust generalizability | AUC = 0.987 (distinguishing between normal and benign ulcer)
Elmagzoub et al[49] | Trainable attention mechanisms | UC | Endoscopic images | ResNet101 | This study presents a grid search-optimized ResNet101 model with an integrated attention mechanism for classifying gastrointestinal diseases, including UC, from endoscopic images | ACC = 93.5%
Saliency and gradient-based techniques

Saliency maps and guided backpropagation have been applied to deep learning models such as ResNet50 for classifying endoscopic images in IBD. These methods highlight relevant image features that influence the predictions. Guided backpropagation, in particular, provided clearer, less noisy attribution maps, assisting clinicians in verifying AI outputs against visible pathological features[37].
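A minimal sketch of a vanilla gradient saliency map is shown below, using an untrained torchvision ResNet50 and a random tensor as stand-ins for a trained endoscopy classifier and a real image.

```python
# Sketch: vanilla gradient saliency, i.e., |d(score)/d(pixel)|.
import torch
from torchvision.models import resnet50

model = resnet50(weights=None).eval()        # placeholder, untrained
image = torch.rand(1, 3, 224, 224, requires_grad=True)

score = model(image)[0].max()                # top-class score
score.backward()                             # gradients w.r.t. pixels

# Saliency = max absolute gradient across color channels per pixel.
saliency = image.grad.abs().max(dim=1)[0]    # shape (1, 224, 224)
print(saliency.shape)
```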

Grad-CAM

Grad-CAM has been widely adopted for IBD-related image interpretation. Studies using DenseNet121, ResNet variants, and ensemble convolutional neural networks (CNNs) have demonstrated the effectiveness of Grad-CAM in localizing clinically relevant areas in endoscopic and ultrasound images. It enabled visual validation of model attention, with several works showing heatmap alignment with expert annotations and highlighting critical regions such as mucosal surfaces and bowel walls[38-41].
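The following hedged sketch shows the core Grad-CAM computation on a ResNet backbone: hooks capture the last convolutional feature maps and their gradients, which are combined into a coarse class-discriminative heatmap. The model and input are placeholders, not the cited clinical systems.

```python
# Sketch of Grad-CAM: gradient-weighted sum of conv feature maps.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(weights=None).eval()        # placeholder, untrained
acts, grads = {}, {}
layer = model.layer4                         # last convolutional block
layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

image = torch.rand(1, 3, 224, 224)
model(image)[0].max().backward()             # backprop top-class score

weights = grads["v"].mean(dim=(2, 3), keepdim=True)  # GAP of gradients
cam = F.relu((weights * acts["v"]).sum(dim=1))       # weighted sum + ReLU
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224), mode="bilinear")
print(cam.shape)                             # (1, 1, 224, 224) heatmap
```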

Occlusion sensitivity

Occlusion-based techniques evaluate the effect of masking specific regions of an input image on a model’s prediction, thereby identifying the visual areas most critical to its decision-making process. In the context of IBD classification using vision transformers, occlusion sensitivity, combined with SHAP and Grad-CAM, was used to compare different transformer distillation approaches. The analysis revealed that the distilled model based on data-efficient image transformers occasionally relied on irrelevant image elements, suggesting a lack of precise disease focus. Conversely, the temperature-distilled model demonstrated greater interpretability, consistently attending to clinically meaningful regions while ignoring visual noise[41].
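A simple occlusion-sensitivity loop is sketched below: a gray patch slides over the image, and the drop in the target-class probability at each position forms the sensitivity heatmap. The model and image are placeholders.

```python
# Sketch: occlusion sensitivity via a sliding gray patch.
import torch
from torchvision.models import resnet50

model = resnet50(weights=None).eval()        # placeholder, untrained
image = torch.rand(1, 3, 224, 224)

with torch.no_grad():
    probs = model(image).softmax(dim=1)[0]
    target, base_prob = probs.argmax(), probs.max()

    heatmap = torch.zeros(7, 7)              # 7 x 7 grid of 32-px patches
    for i in range(7):
        for j in range(7):
            occluded = image.clone()
            occluded[:, :, i*32:(i+1)*32, j*32:(j+1)*32] = 0.5
            prob = model(occluded).softmax(dim=1)[0, target]
            heatmap[i, j] = base_prob - prob # large drop = important area
print(heatmap)
```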

SHAP

SHAP is an interpretability method designed to attribute a model’s prediction to its input features. SHAP quantifies the marginal contribution of each feature toward a particular output, offering both local and global explanations of model behavior. One of SHAP’s major advantages is its model-agnostic nature: it can be applied across a wide range of ML algorithms while providing consistent, theoretically grounded explanations. SHAP has proven valuable in both imaging and non-imaging modalities of IBD research[14,40,41]. In endoscopic image classification and multiomics analyses, SHAP provides insight into which features (e.g., specific image regions or fecal metabolites) most influence predictions. In one study, SHAP clarified the role of diet-driven microbial metabolites in IBD pathology, aiding biomarker discovery[14]. Another study used SHAP to differentiate CD from intestinal tuberculosis using clinical data[42].
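The hedged sketch below applies SHAP to a gradient-boosted model on synthetic tabular data; the feature names are illustrative stand-ins for clinical variables, not variables from the cited studies.

```python
# Sketch: local and global SHAP attributions for a tabular model.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=300, n_features=4, random_state=0)
names = ["CRP", "fecal_calprotectin", "albumin", "age"]  # illustrative

model = GradientBoostingClassifier(random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)   # (n, features)

# Local view: each feature's contribution to one patient's prediction.
print(dict(zip(names, np.round(shap_values[0], 3))))
# Global view: mean |SHAP| ranks features across the whole cohort.
print(dict(zip(names, np.round(np.abs(shap_values).mean(axis=0), 3))))
```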

LIME

LIME is an XAI technique that approximates the behavior of complex ML models by creating simple, interpretable models around individual predictions. LIME perturbs the input data and observes how these changes affect the model’s output. By evaluating prediction changes across a range of perturbed samples, LIME constructs a linear surrogate model that mirrors the behavior of a black-box model in a small, localized region around the input. This approach is particularly valuable when clinicians or researchers need to understand the rationale behind a specific decision rather than general model trends.

LIME has been successfully applied in IBD to interpret clinical prediction models. Notably, in a multicenter study predicting quality-of-life impairment, LIME identified modifiable risk factors such as anxiety, glucocorticoid use, and abdominal pain that predicted quality-of-life impairments among patients with IBD[43]. It has also been combined with SHAP to distinguish CD from intestinal tuberculosis by highlighting influential clinical features in an extreme gradient boosting model[42].
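A minimal LIME example for a tabular clinical classifier is sketched below; the data are synthetic, and the feature names merely echo the kinds of predictors reported in the cited quality-of-life study.

```python
# Sketch: LIME fits a local linear surrogate around one prediction.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=4, random_state=0)
names = ["anxiety_score", "abdominal_pain", "steroid_use", "age"]

model = RandomForestClassifier(random_state=0).fit(X, y)
explainer = LimeTabularExplainer(
    X, feature_names=names, class_names=["no_impairment", "impairment"])

# Perturb around one patient and list the local feature effects.
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(exp.as_list())
```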

Trainable attention mechanisms

Trainable attention mechanisms allow a neural network to focus on the most relevant parts of its input by assigning learned weights to input elements, providing a built-in layer of interpretability. Whereas occlusion and saliency-based techniques examine how masking or modifying input regions affects predictions, attention mechanisms internally learn which regions to prioritize during prediction. In CD diagnosis, trainable attention has shown significant promise in endoscopic and histopathological image analysis[44-47]. For example, attention-guided models have been trained on capsule endoscopy datasets annotated by multiple experts to diagnose CD[45]. These networks outperformed traditional CNNs as annotation quality improved, achieving high diagnostic precision. Similarly, cross-scale attention mechanisms applied to whole slide pathological images have successfully localized lesion patterns at different magnification levels, achieving an area under the curve (AUC) of 0.8924 and aiding in the diagnosis of CD[44]. Another approach, FLATer (a hybrid transformer-CNN model optimized with attention), demonstrated that spatial attention significantly enhances both classification accuracy and pathological region localization in UC[46].
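The sketch below implements a simple trainable attention-pooling layer in the spirit of attention-based multiple-instance learning over whole-slide patches; the dimensions, data, and module name are placeholders, not the cited CD models.

```python
# Sketch: attention pooling over instance embeddings; the learned
# weights double as a built-in, per-patch explanation.
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 128), nn.Tanh(),
                                   nn.Linear(128, 1))

    def forward(self, patches):                  # (n_patches, dim)
        weights = torch.softmax(self.score(patches), dim=0)
        pooled = (weights * patches).sum(dim=0)  # weighted bag embedding
        return pooled, weights

pool = AttentionPooling()
patches = torch.rand(100, 512)                   # e.g., 100 tissue patches
slide_repr, attn = pool(patches)
print(attn.topk(3, dim=0).indices.squeeze())     # most attended patches
```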

Multi-XAI frameworks (saliency, Grad-CAM, LIME, SHAP, occlusion sensitivity)

Some studies evaluated multiple XAI techniques simultaneously, such as Grad-CAM, saliency maps, integrated gradients, LIME, and SHAP, to enhance model interpretability from diverse perspectives. These approaches were especially useful in evaluating classification models for IBD (Table 1)[14,26,37-49].

DISCUSSION ON CLINICAL USE CASES OF XAI IN IBD MANAGEMENT

Numerous studies have applied XAI techniques to develop interpretable models for classification and for identifying key features in IBD. These approaches have demonstrated high accuracy in distinguishing IBD from non-IBD cases[26,48]. This section explores studies applying XAI and ML techniques in clinical settings for diagnosis, risk stratification, treatment response prediction, and clinical trial design, with the aim of predicting intervention outcomes.

Diagnosis of IBD anomalies

XAI techniques are increasingly being applied in the diagnosis and management of IBD, including the prioritization of variables for predicting gastrointestinal bleeding and dietary intake patterns. ML models, particularly those using SHAP, have been employed to identify discriminative metabolites in plasma and feces for IBD diagnosis[14]. These models have shown high accuracy, with fecal metabolites providing more robust predictors (AUC = 0.93) than plasma metabolites (AUC = 0.74). In IBD endoscopy, AI techniques such as support vector machines and CNNs have improved diagnostic reliability and image processing[50]. AI-enhanced endoscopy has been applied in various aspects of IBD care, including diagnosis, assessment of mucosal activity, and prediction of recurrence and complications[51]. AI models have also demonstrated potential in predicting treatment response and supporting shared decision-making between clinicians and patients[52].

Risk stratification

XAI offers potential for improved risk stratification and personalized care[53]. Class-contrastive and feature-attribution techniques, including a modified kernel SHAP algorithm, have been used to identify potential target genes and modules for patient stratification in CD[54]. AI models, particularly random forest and least absolute shrinkage and selection operator, have demonstrated high accuracy in predicting adverse outcomes such as hospitalization, surgery, and long-term steroid use in IBD patients[55]. These models can be applied for risk stratification and the implementation of preemptive measures in clinical settings. While risk stratification is integral to IBD management according to care pathways and guidelines, its explicit use and influence on patient management in community-based practices require further examination[56].

Drug response prediction

Recent studies have demonstrated the value of XAI in predicting treatment response and identifying patient-specific biomarkers in IBD. The integration of multi-modal data, including genomic, transcriptomic, and demographic information, improved prediction of drug efficacy and patient responses with high accuracy. Multi-omics studies using RNA-sequencing and ML models have linked specific genetic variants to differential drug efficacy[57]. For example, in patients treated with the p38 mitogen-activated protein kinase inhibitor BIRB796 (doramapimod), the presence of the alternate allele at rs2240467 in ADAM22 was associated with reduced transcript length and lower predicted TNFα levels, suggesting enhanced drug responsiveness. Similarly, a variant in PRDM1 (rs811925) was associated with both lower gene expression and a higher predicted TNFα level, indicating reduced treatment efficacy. XAI techniques, such as SHAP values, further clarify how these polymorphisms contribute to model predictions, offering clinicians mechanistic insights into patient-specific drug responsiveness[57]. In another study, ML models incorporating early treatment data, such as fecal calprotectin levels and drug concentrations, have successfully predicted long-term response to biologic agents like vedolizumab. Patients with high baseline fecal calprotectin were more likely to fail vedolizumab therapy, while combining fecal calprotectin and drug levels after six weeks yielded an area under the receiver operating characteristic curve of 0.73 (95% confidence interval: 0.65-0.82) for predicting remission at one year[14]. These approaches aim to identify important features associated with treatment efficacy, potentially linking genetic polymorphisms or metabolite concentration to variations in drug responses[1,57]. Additionally, computational models and digital twins have been developed to simulate patient-specific responses and optimize treatment strategies[52].

Clinical trial design

AI is improving clinical trial design in IBD by enhancing diagnostic precision, improving recruitment strategies, and enabling adaptive trial frameworks. By leveraging electronic health records, AI can more efficiently identify eligible patients while enhancing cohort diversity. Automated analysis of endoscopic and histologic images improves diagnostic accuracy and standardization across trial sites, reducing inter-observer variability. Additionally, AI-powered predictive modeling informs adaptive trial designs by forecasting patient responses and disease trajectories, helping tailor interventions in real time. Wearable sensors and mobile applications further facilitate continuous disease monitoring, allowing timely treatment adjustments and more dynamic endpoint assessments[56,57].

Recent clinical trials have explored AI’s predictive potential in real-world settings. For instance, Telesco et al[58] conducted a phase 2a study in UC patients using mucosal gene expression data to predict response to golimumab. While their model achieved AUCs of 0.688 at week six and 0.671 at week thirty, its lower-than-expected performance highlighted challenges such as variability between training and validation cohorts. Similarly, Noor et al[59] evaluated a prognostic assay in a randomized CD trial aimed at distinguishing between step-up and top-down therapy strategies. However, the assay failed to accurately predict disease progression, underlining the need for more robust and well-validated predictive tools[60].

CHALLENGES AND FUTURE DIRECTIONS

A critical challenge highlighted in this review of XAI in IBD management is the difficulty of effectively integrating local and global explanations in a clinically meaningful manner. In IBD care, especially when personalizing treatment strategies, clinicians require case-specific interpretability to make informed decisions regarding individual patients[59]. Local explanation methods such as LIME, SHAP, and integrated gradients are essential for this purpose[18]. They help clinicians understand why a model makes a particular prediction in a given case, such as identifying anxiety or abdominal pain as predictors of reduced quality of life. This form of case-level transparency supports trust and accountability in AI-assisted decision-making. The challenge arises because local and global explanations often yield different perspectives, and synthesizing them into a unified, clinically actionable insight remains technically and conceptually complex. Hybrid methods, such as combining integrated gradients[17] with concept activation vectors or using rule-based aggregation of local SHAP scores, attempt to bridge this gap; however, they remain methodologically underdeveloped in IBD management, particularly in real-time clinical workflows. Furthermore, while broadly applicable, model-agnostic techniques tend to be computationally expensive and may oversimplify relationships by ignoring model-specific internal dynamics[20]. Conversely, model-specific methods, like saliency maps or CAM, are more precise but restricted to certain architectures (e.g., deep neural networks), limiting their generalizability across diverse clinical tools.

Another issue is the lack of accessible and comprehensive information resources that clearly communicate XAI concepts to end-users. Clinicians and healthcare stakeholders often lack the tools or guidance required to understand how different explanation methods work, how to interpret their outputs, and when each method is most appropriate. Challenges related to data quality, privacy, and ethical concerns need to be addressed for widespread clinical implementation. Moreover, current XAI systems often fall short in addressing the cognitive and contextual needs of end-users. There is a notable absence of robust requirements elicitation frameworks tailored specifically for healthcare professionals, which would ensure that explanation mechanisms align with how clinicians think, reason, and make decisions. Without this alignment, even technically sound explanations may fail to support meaningful clinical judgments.

In addition, few XAI systems are designed from a user-centric perspective. Algorithmic performance is often prioritized over usability, leading to interfaces or outputs that are difficult for non-technical users to interpret or apply. The limited deployment of explanation systems that are both clinically intuitive and responsive to end-user priorities continues to hinder real-world implementation.

Lastly, despite the promising advancements of XAI in adult IBD care, its application in pediatric populations presents several unique challenges. Children and adolescents with IBD often present atypically, manifesting symptoms like poor growth, anemia, or extraintestinal manifestations instead of classic gastrointestinal complaints[61]. These subtleties complicate early diagnosis and risk stratification, areas where AI and ML could provide critical support. However, the complexity and variability in pediatric disease expression mean that models trained predominantly on adult data may fail to capture the nuanced clinical presentations seen in younger patients. XAI tools could bridge this gap by offering transparent, clinician-interpretable outputs. However, the pediatric context demands heightened caution. Misinterpretation of XAI output, especially when models fail to generalize to younger patients, may lead to diagnostic delays or inappropriate treatment adjustments.

The future of IBD care will be increasingly shaped by intelligent, explainable technologies that extend support beyond clinical settings and empower both patients and clinicians. One promising direction is the development of AI-powered tools trained on colonoscopy images and biomarker data (e.g., calprotectin levels, sweat biosensors) to automatically grade disease severity and provide transparent reasoning behind their assessments[20,62]. Unlike opaque “black-box” models, XAI systems can communicate the reasoning behind their outputs, helping gastroenterologists, clinicians, and patients alike understand why a flare is predicted or a treatment is recommended. For example, activity data from wearables (like step trackers or heart monitors) can be combined with biomarkers (e.g., sweat-based markers) to track subtle shifts in health status. XAI can then correlate these patterns with flare risk, enabling early warnings and personalized alerts.

At-home ultrasound systems, guided live by remote experts via tablets or iPads, could further democratize access to care for patients with chronic, debilitating conditions who struggle to access in-person care. Coupled with XAI, these systems could provide interpretable, annotated feedback to patients and clinicians, bridging gaps in digital literacy and ensuring that clinical insights are both accessible and actionable.

Additionally, the integration of gut microbiome analysis, especially in combination with conventional IBD therapies, has demonstrated promising outcomes[60]. By analyzing microbiota shifts alongside metabolic and inflammatory profiles, XAI can help identify which patient subgroups are most likely to benefit from specific therapies. This is especially crucial in light of variable clinical response rates (e.g., 30%-40% for anti-TNF therapies like adalimumab)[63]. Furthermore, early intervention strategies supported by XAI, as seen in the SONIC trial, can aid in predicting and reducing time to first hospitalization, surgery, or complication[64,65]. By learning from historical patient trajectories and explaining risk predictions clearly, XAI enhances clinical decision-making while fostering patient trust. In summary, XAI has the capacity to reshape IBD care, from early detection and personalized therapy selection to at-home monitoring and long-term disease management, provided it remains transparent, inclusive, and grounded in real patient needs.

CONCLUSION

XAI holds transformative potential in the personalized management of IBD by making complex AI-driven predictions transparent and clinically interpretable. From improving diagnostic accuracy and risk stratification to predicting drug response and optimizing clinical trials, XAI enhances trust and usability in high-stakes decision-making. Despite current challenges in integration, interpretability, and real-world implementation, particularly in pediatric IBD care, XAI offers a promising path toward more precise, equitable, and patient-centered IBD care. Future efforts must focus on user-centric design, multi-modal integration, and context-aware deployment to fully realize the potential of XAI in clinical practice.

Footnotes

Provenance and peer review: Invited article; Externally peer reviewed.

Peer-review model: Single blind

Specialty type: Gastroenterology and hepatology

Country of origin: South Korea

Peer-review report’s classification

Scientific Quality: Grade A, Grade B

Novelty: Grade A, Grade B

Creativity or Innovation: Grade A, Grade B

Scientific Significance: Grade B, Grade B

P-Reviewer: Khajavian MN, PhD, Postdoctoral Fellow, Malaysia; Wang X, PhD, China S-Editor: Wu S L-Editor: A P-Editor: Zhao S

References
1.  Jairath V, Feagan BG. Global burden of inflammatory bowel disease. Lancet Gastroenterol Hepatol. 2020;5:2-3.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Cited by in Crossref: 96]  [Cited by in RCA: 220]  [Article Influence: 44.0]  [Reference Citation Analysis (0)]
2.  Wang R, Li Z, Liu S, Zhang D. Global, regional and national burden of inflammatory bowel disease in 204 countries and territories from 1990 to 2019: a systematic analysis based on the Global Burden of Disease Study 2019. BMJ Open. 2023;13:e065186.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Full Text (PDF)]  [Cited by in RCA: 262]  [Reference Citation Analysis (33)]
3.  McDowell C, Farooq U, Haseeb M.   Inflammatory Bowel Disease. 2023 Aug 4. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2025 Jan.  [PubMed]  [DOI]
4.  Park SH, Park SH. Personalized medicine in inflammatory bowel disease: Perspectives on Asia. J Gastroenterol Hepatol. 2022;37:1434-1445.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Cited by in RCA: 8]  [Reference Citation Analysis (0)]
5.  Panes J, Bouhnik Y, Reinisch W, Stoker J, Taylor SA, Baumgart DC, Danese S, Halligan S, Marincek B, Matos C, Peyrin-Biroulet L, Rimola J, Rogler G, van Assche G, Ardizzone S, Ba-Ssalamah A, Bali MA, Bellini D, Biancone L, Castiglione F, Ehehalt R, Grassi R, Kucharzik T, Maccioni F, Maconi G, Magro F, Martín-Comín J, Morana G, Pendsé D, Sebastian S, Signore A, Tolan D, Tielbeek JA, Weishaupt D, Wiarda B, Laghi A. Imaging techniques for assessment of inflammatory bowel disease: joint ECCO and ESGAR evidence-based consensus guidelines. J Crohns Colitis. 2013;7:556-585.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Cited by in Crossref: 539]  [Cited by in RCA: 480]  [Article Influence: 40.0]  [Reference Citation Analysis (0)]
6.  Cameron K, Nguyen AL, Gibson DJ, Ward MG, Sparrow MP, Gibson PR. Review Article: Albumin and Its Role in Inflammatory Bowel Disease: The Old, the New, and the Future. J Gastroenterol Hepatol. 2025;40:808-820.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Cited by in RCA: 2]  [Reference Citation Analysis (0)]
7.  Dulai PS, Boland BS, Singh S, Chaudrey K, Koliani-Pace JL, Kochhar G, Parikh MP, Shmidt E, Hartke J, Chilukuri P, Meserve J, Whitehead D, Hirten R, Winters AC, Katta LG, Peerani F, Narula N, Sultan K, Swaminath A, Bohm M, Lukin D, Hudesman D, Chang JT, Rivera-Nieves J, Jairath V, Zou GY, Feagan BG, Shen B, Siegel CA, Loftus EV Jr, Kane S, Sands BE, Colombel JF, Sandborn WJ, Lasch K, Cao C. Development and Validation of a Scoring System to Predict Outcomes of Vedolizumab Treatment in Patients With Crohn's Disease. Gastroenterology. 2018;155:687-695.e10.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Cited by in Crossref: 108]  [Cited by in RCA: 112]  [Article Influence: 16.0]  [Reference Citation Analysis (0)]
8.  Stafford IS, Gosink MM, Mossotto E, Ennis S, Hauben M. A Systematic Review of Artificial Intelligence and Machine Learning Applications to Inflammatory Bowel Disease, with Practical Guidelines for Interpretation. Inflamm Bowel Dis. 2022;28:1573-1583.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Full Text (PDF)]  [Cited by in Crossref: 15]  [Cited by in RCA: 32]  [Article Influence: 10.7]  [Reference Citation Analysis (0)]
9.  Ahmad HA, East JE, Panaccione R, Travis S, Canavan JB, Usiskin K, Byrne MF. Artificial intelligence in inflammatory bowel disease: implications for clinical practice and future directions. Intest Res. 2023;21:283-294.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Full Text (PDF)]  [Cited by in RCA: 23]  [Reference Citation Analysis (0)]
10.  Silverman AL, Shung D, Stidham RW, Kochhar GS, Iacucci M. How Artificial Intelligence Will Transform Clinical Care, Research, and Trials for Inflammatory Bowel Disease. Clin Gastroenterol Hepatol. 2025;23:428-439.e4.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Cited by in Crossref: 3]  [Cited by in RCA: 10]  [Article Influence: 10.0]  [Reference Citation Analysis (0)]
11.  Syed S, Boland BS, Bourke LT, Chen LA, Churchill L, Dobes A, Greene A, Heller C, Jayson C, Kostiuk B, Moss A, Najdawi F, Plung L, Rioux JD, Rosen MJ, Torres J, Zulqarnain F, Satsangi J. Challenges in IBD Research 2024: Precision Medicine. Inflamm Bowel Dis. 2024;30:S39-S54.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Cited by in Crossref: 6]  [Cited by in RCA: 13]  [Article Influence: 13.0]  [Reference Citation Analysis (0)]
12.  Chaddad A, Peng J, Xu J, Bouridane A. Survey of Explainable AI Techniques in Healthcare. Sensors (Basel). 2023;23:634.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Full Text (PDF)]  [Cited by in Crossref: 142]  [Cited by in RCA: 110]  [Article Influence: 55.0]  [Reference Citation Analysis (0)]
13.  Beaugerie L, Kirchgesner J. Balancing Benefit vs Risk of Immunosuppressive Therapy for Individual Patients With Inflammatory Bowel Diseases. Clin Gastroenterol Hepatol. 2019;17:370-379.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Cited by in Crossref: 51]  [Cited by in RCA: 63]  [Article Influence: 10.5]  [Reference Citation Analysis (0)]
14.  Onwuka S, Bravo-Merodio L, Gkoutos GV, Acharjee A. Explainable AI-prioritized plasma and fecal metabolites in inflammatory bowel disease and their dietary associations. iScience. 2024;27:110298.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Cited by in RCA: 7]  [Reference Citation Analysis (0)]
15.  Boppiniti ST. A survey on explainable AI: Techniques and challenges. Int Res J Innov Eng Technol. 2020;7:57-66.  [PubMed]  [DOI]  [Full Text]
16.  Das A, Rad P.   Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey. Available from: arXiv:2006.11371.  [PubMed]  [DOI]  [Full Text]
17.  Falvo FR, Cannataro M.   Explainability techniques for Artificial Intelligence models in medical diagnostic. 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM); 2024 Dec 3-6; Lisbon, Portugal. New York: IEEE, 2024: 6907-6913.  [PubMed]  [DOI]  [Full Text]
18.  Schrouff J, Baur S, Hou S, Mincu D, Loreaux E, Blanes R, Wexler J, Karthikesalingam A, Kim B.   Best of both worlds: local and global explanations with human-understandable concepts. Available from: arXiv:2106.08641.  [PubMed]  [DOI]  [Full Text]
19.  Setzu M, Guidotti R, Monreale A, Turini F.   Global Explanations with Local Scoring. In: Cellier P, Driessens K, editors. Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2019: Proceedings of Machine Learning and Knowledge Discovery in Databases - European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases; 2019 Sep 16-20; Germany, Würzburg. Berlin: Springer, 2020: 159-171.  [PubMed]  [DOI]  [Full Text]
20.  Devireddy K  A Comparative Study of Explainable AI Methods: Model-Agnostic vs. Model-Specific Approaches. Available from: arXiv:2504.04276.  [PubMed]  [DOI]  [Full Text]
21.  Zafar MR, Khan N. Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability. Mach Learn Knowl Extr. 2021;3:525-541.  [RCA]  [PubMed]  [DOI]  [Full Text]  [Cited by in Crossref: 12]  [Cited by in RCA: 48]  [Article Influence: 12.0]  [Reference Citation Analysis (0)]
22.  Zschech P, Weinzierl S, Hambauer N, Zilker S, Kraus M. GAM(e) changer or not? An evaluation of interpretable machine learning models based on additive model constraints. Available from: arXiv:2204.09123.
23.  Sudjianto A, Zhang AJ. Designing Inherently Interpretable Machine Learning Models. Available from: arXiv:2111.01743.
24.  Kamath U, Liu J. Post-Hoc Interpretability and Explanations. In: Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning. New York: Springer, 2021: 167-216.
25.  Narkhede J. Comparative Evaluation of Post-Hoc Explainability Methods in AI: LIME, SHAP, and Grad-CAM. 2024 4th International Conference on Sustainable Expert Systems (ICSES); 2024 Oct 15-17; Kaski, Nepal. New York: IEEE, 2024: 826-830.
26.  Patel M, Gohil K, Gohil A, Ramoliya F, Gupta R, Tanwar S, Polkowski Z, Alqahtani F, Tolba A. Explainable AI for gastrointestinal disease diagnosis in telesurgery Healthcare 4.0. Comput Electr Eng. 2024;118:109414.
27.  Dulai PS, Siegel CA. Optimization of Drug Safety Profile in Inflammatory Bowel Disease Through a Personalized Approach. Curr Drug Targets. 2018;19:740-747.
28.  Kyriakos N, Papaefthymiou A, Giakoumis M, Iatropoulos G, Mantzaris G, Liatsos C. Informed consent in inflammatory bowel disease: a necessity in real-world clinical practice. Ann Gastroenterol. 2021;34:466-475.
29.  Nivedhaa N. Explainable Artificial Intelligence (XAI) in healthcare: Interpretable Models for Clinical Decision Support. Int J Comput Sci Eng Inf Technol Res. 2024;5:33-40.
30.  Lötsch J, Kringel D, Ultsch A. Explainable Artificial Intelligence (XAI) in Biomedicine: Making AI Decisions Trustworthy for Physicians and Patients. BioMedInformatics. 2022;2:1-17.
31.  Ebers M. Regulating Explainable AI in the European Union. An Overview of the Current Legal Framework(s). Available from: https://irilaw.org/wp-content/uploads/2021/04/ebers.pdf.
32.  Farhangi A, Vogl R. Rethinking Explainable Machines: The GDPR's 'Right to Explanation' Debate and the Rise of Algorithmic Audits in Enterprise. CA, United States: Berkeley Technology Law Journal, 2019.
33.  Javaid A, Shahab O, Adorno W, Fernandes P, May E, Syed S. Machine Learning Predictive Outcomes Modeling in Inflammatory Bowel Diseases. Inflamm Bowel Dis. 2022;28:819-829.
34.  Uche-Anya E, Anyane-Yeboa A, Berzin TM, Ghassemi M, May FP. Artificial intelligence in gastroenterology and hepatology: how to advance clinical practice while ensuring health equity. Gut. 2022;71:1909-1915.
35.  Chen RJ, Wang JJ, Williamson DFK, Chen TY, Lipkova J, Lu MY, Sahai S, Mahmood F. Algorithmic fairness in artificial intelligence for medicine and healthcare. Nat Biomed Eng. 2023;7:719-742.
36.  Bennett CR, Cole-Lewis H, Farquhar S, Hammel N, Babenko B, Lang O, Fleck M, Traynis I, Lau C, Horn I, Lyles C. Interdisciplinary Expertise to Advance Equitable Explainable AI. Available from: arXiv:2406.18563.
37.  Chierici M, Puica N, Pozzi M, Capistrano A, Donzella MD, Colangelo A, Osmani V, Jurman G. Automatically detecting Crohn's disease and Ulcerative Colitis from endoscopic imaging. BMC Med Inform Decis Mak. 2022;22:300.
38.  Sutton RT, Zaïane OR, Goebel R, Baumgart DC. Artificial intelligence enabled automated diagnosis and grading of ulcerative colitis endoscopy images. Sci Rep. 2022;12:2748.
39.  Tsai CM, Lee JD. Dynamic Ensemble Learning with Gradient-Weighted Class Activation Mapping for Enhanced Gastrointestinal Disease Classification. Electronics. 2025;14:305.
40.  Ma L, Chen Y, Fu X, Qin J, Luo Y, Gao Y, Li W, Xiao M, Cao Z, Shi J, Zhu Q, Guo C, Wu J. Predicting mucosal healing in Crohn's disease: development of a deep-learning model based on intestinal ultrasound images. Insights Imaging. 2025;16:125.
41.  Maurício J, Domingues I. Distinguishing between Crohn’s disease and ulcerative colitis using deep learning models with interpretability. Pattern Anal Applic. 2024;27:1.
42.  Weng F, Meng Y, Lu F, Wang Y, Wang W, Xu L, Cheng D, Zhu J. Differentiation of intestinal tuberculosis and Crohn's disease through an explainable machine learning method. Sci Rep. 2022;12:1714.
43.  Zhen J, Liu C, Zhang J, Liao F, Xie H, Tan C, An P, Liu Z, Jiang C, Shi J, Wu K, Dong W. Evaluating Inflammatory Bowel Disease-Related Quality of Life Using an Interpretable Machine Learning Approach: A Multicenter Study in China. J Inflamm Res. 2024;17:5271-5283.
44.  Deng R, Cui C, Remedios LW, Bao S, Womick RM, Chiron S, Li J, Roland JT, Lau KS, Liu Q, Wilson KT, Wang Y, Coburn LA, Landman BA, Huo Y. Cross-scale Attention Guided Multi-instance Learning for Crohn's Disease Diagnosis with Pathological Images. Multiscale Multimodal Med Imaging (2022). 2022;13594:24-33.
45.  de Maissin A, Vallée R, Flamant M, Fondain-Bossiere M, Le Berre C, Coutrot A, Normand N, Mouchère H, Coudol S, Trang C, Bourreille A. Multi-expert annotation of Crohn's disease images of the small bowel for automatic detection using a convolutional recurrent attention neural network. Endosc Int Open. 2021;9:E1136-E1144.
46.  Wu S, Zhang R, Yan J, Li C, Liu Q, Wang L, Wang H. High-Speed and Accurate Diagnosis of Gastrointestinal Disease: Learning on Endoscopy Images Using Lightweight Transformer with Local Feature Attention. Bioengineering (Basel). 2023;10:1416.
47.  Sucipto K, Khosla A, Drage M, Wang Y, Fahy D, Lin M, Resnick M, Montalto M, Beck A, Wapinski I, Hennek S, Jayson C, Najdawi F. Quantitative and explainable artificial intelligence (AI)-powered approaches to predict ulcerative colitis disease activity from hematoxylin and eosin (H&E)-stained whole slide images (WSI). Inflamm Bowel Dis. 2023;29:S22-S23.
48.  Ahamed MF, Nahiduzzaman M, Islam MR, Naznine M, Arselene Ayari M, Khandakar A, Haider J. Detection of various gastrointestinal tract diseases through a deep learning method with ensemble ELM and explainable AI. Expert Syst Appl. 2024;256:124908.
49.  Elmagzoub MA, Kaur S, Gupta S, Rajab A, Rajab KD, Reshan MSA, Alshahrani H, Shaikh A. Improving Endoscopic Image Analysis: Attention Mechanism Integration in Grid Search Fine-Tuned Transfer Learning Model for Multi-Class Gastrointestinal Disease Classification. IEEE Access. 2024;12:80345-80358.
50.  Chen G, Shen J. Artificial Intelligence Enhances Studies on Inflammatory Bowel Disease. Front Bioeng Biotechnol. 2021;9:635764.
51.  Sundaram S, Choden T, Mattar MC, Desai S, Desai M. Artificial intelligence in inflammatory bowel disease endoscopy: current landscape and the road ahead. Ther Adv Gastrointest Endosc. 2021;14:26317745211017809.
52.  Pinton P. Impact of artificial intelligence on prognosis, shared decision-making, and precision medicine for patients with inflammatory bowel disease: a perspective and expert opinion. Ann Med. 2023;55:2300670.
53.  Brooks-Warburton J, Ashton J, Dhar A, Tham T, Allen PB, Hoque S, Lovat LB, Sebastian S. Artificial intelligence and inflammatory bowel disease: practicalities and future prospects. Frontline Gastroenterol. 2022;13:325-331.
54.  Olowu S, Lawrence N, Banerjee S. Enhancing patient stratification and interpretability through class-contrastive and feature attribution techniques. 2025 IEEE Symposium on Trustworthy, Explainable and Responsible Computational Intelligence (CITREx); 2025 Mar 17-20; Trondheim, Norway. New York: IEEE, 2025: 1-7.
55.  Zand A, Stokes Z, Sharma A, van Deen WK, Hommes D. Artificial Intelligence for Inflammatory Bowel Diseases (IBD); Accurately Predicting Adverse Outcomes Using Machine Learning. Dig Dis Sci. 2022;67:4874-4885.
56.  Collins A, Swift S, Recht A, Prime R, Topp R, Beltyukova S, Goldklang R, Gaylis F. Inflammatory bowel disease risk stratification and management in a community practice setting. Inflamm Bowel Dis. 2022;28:S33-S34.
57.  Gardiner LJ, Carrieri AP, Bingham K, Macluskie G, Bunton D, McNeil M, Pyzer-Knapp EO. Combining explainable machine learning, demographic and multi-omic data to inform precision medicine strategies for inflammatory bowel disease. PLoS One. 2022;17:e0263248.
58.  Telesco SE, Brodmerkel C, Zhang H, Kim LL, Johanns J, Mazumder A, Li K, Baribaud F, Curran M, Strauss R, Paxson B, Plevy S, Davison T, Knight L, Dibben S, Schreiber S, Sandborn W, Rutgeerts P, Siegel CA, Reinisch W, Greenbaum LE. Gene Expression Signature for Prediction of Golimumab Response in a Phase 2a Open-Label Trial of Patients With Ulcerative Colitis. Gastroenterology. 2018;155:1008-1011.e8.
59.  Noor NM, Lee JC, Bond S, Dowling F, Brezina B, Patel KV, Ahmad T, Banim PJ, Berrill JW, Cooney R, De La Revilla Negro J, de Silva S, Din S, Durai D, Gordon JN, Irving PM, Johnson M, Kent AJ, Kok KB, Moran GW, Mowat C, Patel P, Probert CS, Raine T, Saich R, Seward A, Sharpstone D, Smith MA, Subramanian S, Upponi SS, Wiles A, Williams HRT, van den Brink GR, Vermeire S, Jairath V, D'Haens GR, McKinney EF, Lyons PA, Lindsay JO, Kennedy NA, Smith KGC, Parkes M; PROFILE Study Group. A biomarker-stratified comparison of top-down versus accelerated step-up treatment strategies for patients with newly diagnosed Crohn's disease (PROFILE): a multicentre, open-label randomised controlled trial. Lancet Gastroenterol Hepatol. 2024;9:415-427.
60.  Njoku K, Yadete T, Bettencourt-Silva J, Liu MT, Anand V, Purpura A, Kartoun U, Koski E, Mulligan N, Claggett B, Stappenbeck T, Liu JJ. S1798 Clinical Trial Failures in Inflammatory Bowel Disease: An Artificial Intelligence-Assisted Review. Am J Gastroenterol. 2023;118:S1334.
61.  Rosen MJ, Dhawan A, Saeed SA. Inflammatory Bowel Disease in Children and Adolescents. JAMA Pediatr. 2015;169:1053-1060.
62.  Bressler B, Panaccione R, Fedorak RN, Seidman EG. Clinicians' guide to the use of fecal calprotectin to identify and monitor disease activity in inflammatory bowel disease. Can J Gastroenterol Hepatol. 2015;29:369-372.
63.  Solitano V, Hanžel J, Estevinho MM, Sedano R, Massimino L, Ungaro F, Jairath V. Reaching the therapeutic ceiling in IBD: Can Advanced Combination Treatment (ACT) offer a solution? Best Pract Res Clin Gastroenterol. 2025;77:101981.
64.  Narula N, Peyrin-Biroulet L, Colombel JF. Combination therapy with methotrexate in inflammatory bowel disease: time to COMMIT? Gastroenterology. 2014;146:608-611.
65.  Solitano V, D'Amico F, Zacharopoulou E, Peyrin-Biroulet L, Danese S. Early Intervention in Ulcerative Colitis: Ready for Prime Time? J Clin Med. 2020;9:2646.