Minireviews
Copyright ©The Author(s) 2025.
World J Gastroenterol. Nov 14, 2025; 31(42): 112196
Published online Nov 14, 2025. doi: 10.3748/wjg.v31.i42.112196
Table 1 Artificial intelligence terminology in contrast-enhanced ultrasound for liver lesion assessment
Term | Definition | CEUS application
CNN | Deep learning model using convolutional layers to extract image features | Lesion detection, classification, segmentation
Radiomics | Extraction of quantitative handcrafted features from medical images | Supports AI-based lesion characterization
Ultrasomics | Radiomics applied specifically to ultrasound images | CEUS feature analysis for HCC risk stratification
Transformers | AI models using self-attention to learn feature relationships | Modeling CEUS video sequences for enhancement pattern recognition
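As a minimal illustration of the CNN entry in Table 1, the sketch below (assuming PyTorch; the layer sizes and class names are illustrative and not taken from any model cited here) maps a single grayscale CEUS frame to benign vs malignant logits.

```python
# Minimal sketch, assuming PyTorch: a small CNN mapping one grayscale
# CEUS frame to benign/malignant logits. Layer sizes are illustrative.
import torch
import torch.nn as nn

class CeusCnn(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolutional layers extract local image features
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # pool to a fixed-size feature vector
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.features(x)                  # (B, 32, 1, 1)
        return self.classifier(f.flatten(1))  # (B, n_classes) logits

logits = CeusCnn()(torch.randn(4, 1, 224, 224))  # 4 dummy 224 x 224 frames
print(logits.shape)  # torch.Size([4, 2])
```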
Table 2 Liver Imaging Reporting and Data System classification system for hepatocellular carcinoma[24]
LI-RADS category | Interpretation
LR-1 | Definitely benign
LR-2 | Probably benign
LR-3 | Intermediate probability of malignancy
LR-4 | Probably HCC
LR-5 | Definitely HCC
LR-M | Probably or definitely malignant, not HCC-specific
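Automated LI-RADS scoring pipelines need an internal representation of these categories; the trivial sketch below (a hypothetical Python mapping, not part of any cited system) mirrors Table 2.

```python
# Hypothetical helper mirroring Table 2: a mapping from LI-RADS
# category to its interpretation, as an automated classifier might emit.
from enum import Enum

class LiRads(Enum):
    LR_1 = "Definitely benign"
    LR_2 = "Probably benign"
    LR_3 = "Intermediate probability of malignancy"
    LR_4 = "Probably HCC"
    LR_5 = "Definitely HCC"
    LR_M = "Probably or definitely malignant, not HCC-specific"

print(LiRads.LR_5.value)  # "Definitely HCC"
```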
Table 3 Performance comparison of clinicians vs automated Liver Imaging Reporting and Data System classification
Ref. | Comparison | Main findings
Urhuț et al[21] | Clinicians vs AI model (CEUS) | For differentiating benign from malignant liver tumors, the AI system showed higher specificity than both experienced readers (blinded and unblinded) but lower sensitivity; it was less accurate for HCC and metastases, yet may assist less-experienced clinicians
Hu et al[15] | Senior radiologists vs DL model (CEUS) | The AI outperformed residents (accuracy 82.9%-84.4%, P = 0.038), matched experts (87.2%-88.2%, P = 0.438), improved resident performance, and reduced CEUS interobserver variability in differentiating benign from malignant lesions
Zhou et al[34] | 3D-CNN vs CNN + LSTM (CEUS cine-loops) | The CNN + LSTM model achieved a high overall AUC (approximately 0.91), outperforming TIC analysis and the 3D-CNN by balancing sensitivity and specificity (3D-CNN: 0.96/0.55); for benign vs malignant differentiation (n = 210 lesions), AI assistance raised less-experienced radiologists' accuracy from 0.82 to 0.87, narrowing the gap to more-experienced readers
Oezsoy et al[32] | Weakly supervised DL vs manual LI-RADS scoring | The model matched expert performance using only case-level labels, with high accuracy (AUC 0.94)
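The CNN + LSTM design compared by Zhou et al[34] pairs a per-frame CNN encoder with a recurrent layer that aggregates enhancement dynamics across the cine-loop. The sketch below illustrates that general pattern only, assuming PyTorch; all dimensions and names are illustrative, not the published architecture.

```python
# Sketch of the CNN + LSTM pattern for CEUS cine-loops: a shared
# per-frame CNN encoder feeds an LSTM that aggregates enhancement
# dynamics over time. Details are illustrative, not the cited model.
import torch
import torch.nn as nn

class CnnLstm(nn.Module):
    def __init__(self, feat_dim: int = 32, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(             # shared CNN applied to every frame
            nn.Conv2d(1, feat_dim, 3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),                          # -> (B*T, feat_dim)
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = clip.shape                 # (batch, frames, 1, H, W)
        feats = self.encoder(clip.view(b * t, c, h, w)).view(b, t, -1)
        out, _ = self.lstm(feats)                  # temporal aggregation across frames
        return self.head(out[:, -1])               # classify from the last time step

logits = CnnLstm()(torch.randn(2, 16, 1, 96, 96))  # 2 dummy clips of 16 frames
print(logits.shape)  # torch.Size([2, 2])
```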
Table 4 Key contributions of artificial intelligence-based contrast-enhanced ultrasound in clinical practice
Objective | Clinical impact
Reduction in interpretation time | AI-assisted models provide results in approximately 10 seconds, faster than manual reading (23-29 seconds)[33]
Improved diagnostic accuracy | Deep learning models achieve AUCs of 0.96-0.97 for benign vs malignant lesions[31,33]
Fully automated workflows | End-to-end segmentation and classification eliminate manual intervention[34]
Integration into ultrasound systems | Real-time AI implementation is feasible within existing CEUS devices[28]
Reduction of annotation workload | Weakly supervised learning reduces dependence on manually labeled training data[32]
Enhanced LI-RADS standardization | AI models align closely with LI-RADS criteria, improving consistency and objectivity[15,24]
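To make the fully automated workflow entry in Table 4 concrete, the hedged sketch below chains a placeholder segmentation network into a placeholder classifier with no manual step in between; both networks are stand-ins assuming PyTorch, not the pipeline of any cited study.

```python
# Hedged sketch of an end-to-end workflow: automatic lesion segmentation
# followed by classification of the segmented region. Both stages are
# illustrative placeholders.
import torch
import torch.nn as nn

segment = nn.Sequential(                  # stand-in segmentation network
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 1), nn.Sigmoid(),     # per-pixel lesion probability mask
)
classify = nn.Sequential(                 # stand-in classification network
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),
)

frame = torch.randn(1, 1, 128, 128)       # one dummy CEUS frame
mask = segment(frame)                     # stage 1: automatic segmentation
logits = classify(frame * mask)           # stage 2: classify the masked lesion
print(mask.shape, logits.shape)           # (1, 1, 128, 128) and (1, 2)
```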