Published online Nov 27, 2025. doi: 10.4254/wjh.v17.i11.109494
Revised: June 13, 2025
Accepted: October 10, 2025
Processing time: 198 days and 11.2 hours
Core Tip: Explainable artificial intelligence (XAI) seeks to improve the interpretability and transparency of machine learning models in healthcare settings. In this context, explainable ensemble learning, a key strategy within XAI, integrates multiple models, including Random Forest, Extreme Gradient Boosting, and Stacking, to improve classification performance in hepatocellular carcinoma (HCC). Despite their high predictive accuracy, the inherent "black-box" nature of ensemble methods remains a barrier to clinical adoption. XAI techniques such as SHapley Additive exPlanations, Local Interpretable Model-agnostic Explanations, and Gradient-weighted Class Activation Mapping clarify how models arrive at their predictions, fostering clinician trust. By combining clinical, genetic, and imaging data within XAI frameworks, the diagnosis, staging, and prognosis of HCC can be improved, ultimately supporting transparent and reliable decision-making in healthcare. Future research should focus on model interpretability, multimodal data integration, and user-friendly clinical interfaces.
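To illustrate the core idea behind model-agnostic explanations of an ensemble classifier, the following minimal sketch implements a LIME-style local surrogate using only NumPy and scikit-learn. All names and data here are hypothetical (a synthetic dataset stands in for clinical HCC features); a real study would apply the dedicated `lime` or `shap` packages to validated patient data.

```python
# Minimal LIME-style local explanation of a Random Forest "black box".
# Assumption: synthetic tabular data as a stand-in for clinical features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-in for tabular clinical features (e.g., lab values).
X, y = make_classification(n_samples=500, n_features=6, n_informative=4,
                           random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def explain_locally(model, x, n_samples=1000, kernel_width=1.0):
    """Fit a proximity-weighted linear surrogate around instance x."""
    # 1. Perturb the instance with Gaussian noise scaled per feature.
    Z = x + rng.normal(scale=X.std(axis=0), size=(n_samples, x.size))
    # 2. Query the black-box ensemble for class-1 probabilities.
    p = model.predict_proba(Z)[:, 1]
    # 3. Weight perturbed points by their proximity to x.
    d = np.linalg.norm((Z - x) / X.std(axis=0), axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 4. Fit an interpretable linear surrogate; its coefficients act
    #    as local feature attributions for this single prediction.
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
    return surrogate.coef_

attributions = explain_locally(model, X[0])
for i, a in enumerate(attributions):
    print(f"feature_{i}: {a:+.4f}")
```

The surrogate's coefficients indicate which features push the ensemble's probability up or down near the chosen patient instance, which is the kind of per-case transparency the Core Tip argues is needed for clinical trust.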
