Review Open Access
Copyright ©The Author(s) 2023. Published by Baishideng Publishing Group Inc. All rights reserved.
World J Gastroenterol. Mar 7, 2023; 29(9): 1427-1445
Published online Mar 7, 2023. doi: 10.3748/wjg.v29.i9.1427
Clinical impact of artificial intelligence-based solutions on imaging of the pancreas and liver
M Alvaro Berbís, Javier Royuela del Val, Department of Radiology, HT Médica, San Juan de Dios Hospital, Córdoba 14960, Spain
M Alvaro Berbís, Faculty of Medicine, Autonomous University of Madrid, Madrid 28049, Spain
Felix Paulano Godino, Lidia Alcalá Mata, Antonio Luna, Department of Radiology, HT Médica, Clínica las Nieves, Jaén 23007, Spain
ORCID number: M Alvaro Berbís (0000-0002-0331-7762); Felix Paulano Godino (0000-0001-9712-3952); Javier Royuela del Val (0000-0002-4347-2783); Lidia Alcalá Mata (0000-0001-6461-3891); Antonio Luna (0000-0001-9358-3396).
Author contributions: Berbís MA, Paulano Godino F, Royuela del Val J, and Alcalá Mata L performed information compilation and manuscript writing; Luna A performed information compilation and critical reading of the manuscript.
Conflict-of-interest statement: Berbís MA is a board member of Cells IA Technologies; Luna A received institutional royalties and institutional payments for lectures, presentations, speaker bureaus, manuscript writing or educational events from Canon, Bracco, Siemens Healthineers, and Philips Healthcare and is a board member of Cells IA Technologies; the remaining authors declare no competing interests.
Open-Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: https://creativecommons.org/Licenses/by-nc/4.0/
Corresponding author: Antonio Luna, MD, PhD, Director, Department of Radiology, HT Médica, Clínica las Nieves, MRI Unit, 2 Carmelo Torres, Jaén 23007, Spain. aluna70@htmedica.com
Received: September 28, 2022
Peer-review started: September 28, 2022
First decision: January 3, 2023
Revised: January 13, 2023
Accepted: February 27, 2023
Article in press: February 27, 2023
Published online: March 7, 2023
Processing time: 160 Days and 6 Hours

Abstract

Artificial intelligence (AI) has experienced substantial progress over the last ten years in many fields of application, including healthcare. In hepatology and pancreatology, major attention to date has been paid to its application to the assisted or even automated interpretation of radiological images, where AI can generate accurate and reproducible imaging diagnoses, reducing the physicians’ workload. AI can provide automatic or semi-automatic segmentation and registration of the liver and pancreatic glands and lesions. Furthermore, using radiomics, AI can introduce into radiological reports new quantitative information that is not visible to the human eye. AI has been applied in the detection and characterization of focal lesions and diffuse diseases of the liver and pancreas, such as neoplasms, chronic hepatic disease, and acute or chronic pancreatitis, among others. These solutions have been applied to the different imaging techniques commonly used to diagnose liver and pancreatic diseases, such as ultrasound, endoscopic ultrasonography, computerized tomography (CT), magnetic resonance imaging, and positron emission tomography/CT. However, AI is also applied in this context to many other relevant steps involved in the comprehensive clinical management of a gastroenterological patient: AI can be used to select the most appropriate test, to improve image quality or accelerate its acquisition, and to predict patient prognosis and treatment response. In this review, we summarize the current evidence on the application of AI to hepatic and pancreatic radiology, not only in regard to the interpretation of images, but also to all the steps involved in the radiological workflow in a broader sense. Lastly, we discuss the challenges and future directions of the clinical application of AI methods.

Key Words: Artificial intelligence; Machine learning; Deep learning; Imaging; Liver; Pancreas

Core Tip: The gastroenterology field is changing with the application of artificial intelligence (AI) solutions capable of assisting and even automating the interpretation of radiological images (ultrasound, endoscopic ultrasound, computerized tomography, magnetic resonance imaging, and positron emission tomography), generating accurate and reproducible diagnoses. AI can further be applied to other steps of the radiological workflow beyond image interpretation, including test selection, image quality improvement, acceleration of image acquisition, and prediction of patient prognosis and outcome. We herein discuss the current evidence, challenges, and future directions on the application of AI to hepatic and pancreatic radiology.



INTRODUCTION

Malignant tumors of the liver and pancreas are among the most common and lethal types of cancer. According to the recent GLOBOCAN 2020 data[1], the liver and pancreas are the 6th and 12th most common sites for primary cancer, with 905677 and 495773 new cases in 2020, respectively. However, they also represent the neoplasms with the 3rd and 7th highest mortality, causing 830180 and 466003 deaths worldwide in 2020, respectively. Taken together, liver and pancreatic cancers thus represent the 5th most incident and the 2nd most lethal malignancy.

Cancers at these locations account for almost as many deaths as new cases. Five-year survival rates are 20% for liver cancer[2] and as low as 11% for pancreatic cancer[3], making them two of the cancers with the poorest prognosis. Other, non-oncologic diseases affecting these organs are also highly prevalent; chronic liver disease, for example, affects tens of millions of people globally and represents a substantial socioeconomic burden[4].

Clinical outcomes of patients with these types of disease depend on a variety of factors, including stage and disease extension as assessed by imaging, and the correct choice of treatment. Thus, there is an unmet need for new tools capable of assisting specialists in the early detection, characterization, and management of these diseases.

In recent years, artificial intelligence (AI) has shown promise in different areas of healthcare. The evaluation of medical images by machine learning (ML) approaches is a leading research field which, in gastroenterology, has applications in the automatic analysis of different types of images, including radiology, pathology, and endoscopy studies[5].

The first applications of AI to radiology were dominated by anatomic locations such as the brain or the breast. Image analysis of abdominal organs, such as the liver and pancreas, is more challenging. Magnetic resonance imaging (MRI) in these locations, especially at 3 T, is prone to motion and field inhomogeneity artifacts, which are aggravated by larger fields of view[6]. As a result, advances in the automatic analysis of abdominal images have gathered comparatively less attention. Nonetheless, the application of AI in liver and pancreas imaging is gaining increasing interest (Figure 1). The goal of this review is to summarize the current experience on the use of AI to assist radiologists in their workflow and in the acquisition and interpretation of medical images of the liver and pancreas.

Figure 1
Figure 1 PubMed results by year using the search terms. A: “artificial intelligence radiology”; B: “artificial intelligence AND (liver OR pancreas)”.
AI IN RADIOLOGY: BASIC PRINCIPLES

Artificial intelligence is expected to revolutionize the medical field, deeply impacting the hospital and clinical settings by potentially improving diagnostic accuracy, treatment delivery, and allowing a more personalized medical care[7]. Radiology will arguably be one of the most changed areas of medicine because of AI implementation in its workflows, as the information-rich images generated in this field are an excellent source of data for the development of AI algorithms. Broadly, the term AI refers to a wide range of technologies and computing processes capable of imitating human intelligence to extract information from input data and solve a problem. This rapidly evolving area has a vocabulary of its own (Figure 2) that can be daunting to those not familiar with the field, including terms, such as ML, that are oftentimes used as synonyms of AI.

Figure 2
Figure 2 Relation between artificial intelligence and related subdisciplines, neural network architectures, and/or techniques. ANN: Artificial neural network; FCN: Fully convolutional network; CNN: Convolutional neural network; GAN: Generative adversarial network.

ML is actually a subset of AI consisting of those methods capable of training a computer system to perform a given task based on provided information or experience without explicit programming, thus conferring machines the ability to learn[8]. The aim of ML is to learn from a training dataset how to predict an output for a given input. Common ML applications in radiology include classification, image segmentation, regression, and clustering[9]. ML can be subdivided into supervised and unsupervised learning[10]. In supervised learning, the most common type used in medical research, the algorithm is trained with labeled examples (i.e., the correct output for these training data, known as the ground truth, is already known). Among the methods employed in supervised learning, random forest (RF) and, especially, support vector machine (SVM) are powerful algorithms frequently used for the classification of images[7], including image segmentation. Conversely, in unsupervised learning, the ground truth is not known, as the algorithm is trained with unlabeled data that must be classified by the algorithm itself.
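As a minimal illustration of the supervised workflow just described (labeled examples, training, and held-out evaluation), the following Python sketch trains an SVM and an RF with scikit-learn. The feature matrix and labels are synthetic stand-ins invented for the example, not data from any cited study.

```python
# Illustrative sketch: supervised classification of lesion feature vectors
# with an SVM and a random forest (synthetic data, for demonstration only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for radiomics features: 200 lesions x 10 features,
# labeled 0/1 -- the labels play the role of the "ground truth".
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

svm = SVC(kernel="rbf").fit(X_train, y_train)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print(f"SVM accuracy: {svm.score(X_test, y_test):.2f}")
print(f"RF accuracy:  {rf.score(X_test, y_test):.2f}")
```

In a real radiological application, the rows of `X` would be features extracted from segmented lesions and `y` the histopathological or radiological ground truth.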

Artificial neural networks (ANNs), named after their brain-inspired structure and functioning, can be trained via both supervised and unsupervised ML. In an ANN, input information flows through a variable number of layers composed of artificial neurons, joined by weighted connections, that process the data to obtain an output that matches the ground truth as closely as possible. Generative adversarial networks (GANs) are an example of an ANN trained via unsupervised learning. GANs include two networks: One which creates new data based on input examples (i.e., the generator), and one which distinguishes between different types of data (i.e., the discriminator)[11]. These networks can be used to produce realistic, synthetic images as a strategy for data augmentation[12]. Similarly, the structure of convolutional neural networks (CNNs), a type of ANN specifically designed for computer vision tasks, is based on that of the animal visual cortex. Typically used in image recognition and classification, CNNs filter and analyze the input information through a convolutional layer, and the size of the resulting feature map is subsequently reduced by a pooling layer. This two-step process is repeated for as many convolutional layers as the CNN contains, with a final step in which an ANN classifies the image (Figure 3). Fully convolutional networks (FCNs, a type of ANN that only performs the convolution step) are the basis for U-net, a modified architecture that consists of a contracting path, including several convolutional and pooling layers to capture context, followed by a symmetric expanding path, including a number of up-sampling and convolutional layers to enable accurate localization. U-net is a popular network for the development of automatic segmentation algorithms, as it requires relatively small datasets for algorithm training[13].
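The convolution-then-pooling step at the heart of a CNN can be sketched in a few lines of plain numpy. This is a deliberately simplified toy (one hand-made filter, no learned weights), meant only to show how a feature map is produced and then spatially reduced.

```python
# Toy numpy illustration of the convolution + pooling steps of a CNN layer.
# The 8x8 "image" and the edge filter are assumptions made for the example.
import numpy as np

def conv2d(img, kernel):
    """Valid 2D convolution (cross-correlation) of img with a small kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling that roughly halves each dimension."""
    h, w = fmap.shape
    return fmap[:h - h % size, :w - w % size].reshape(
        h // size, size, w // size, size).max(axis=(1, 3))

img = np.random.default_rng(0).random((8, 8))   # stand-in for an image patch
edge_kernel = np.array([[1., -1.], [1., -1.]])  # crude vertical-edge filter

# conv -> ReLU -> pool: 8x8 input -> 7x7 feature map -> 3x3 pooled map
features = max_pool(np.maximum(conv2d(img, edge_kernel), 0))
print(features.shape)
```

In a trained CNN the kernels are learned from data rather than fixed, and many such filters run in parallel at every layer.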

Figure 3
Figure 3 Diagram of a convolutional neural network used for the classification of a focal liver lesion in a computerized tomography image. HCC: Hepatocellular carcinoma; CT: Computed tomography.

Deep learning (DL) is a subset of ML that utilizes multi-layered ANNs, referred to as deep neural networks (DNNs), allowing the exploration of more complex data[14]. DL algorithms are gaining attention and raising considerable enthusiasm thanks to their scalability, easy accessibility, and ability to extract relevant information without guidance beyond the input data. The recently developed nnU-Net, a publicly available DL-based segmentation tool capable of automatically configuring itself, has set a new state-of-the-art standard thanks to the systematization of the configuration process, which used to be a manual, complicated, and oftentimes limiting task in previous approaches[15]. Improvements in computational resources and the development of cloud technologies are also contributing to the application of DL architectures in a wide variety of research fields beyond medicine[14].

Closely related to the development of AI, the term radiomics refers to the computational extraction (via ML and DL algorithms) of quantitative data from radiological image features[16]. A particularly valuable application of radiomics is the analysis of radiologic textures, defined as the differences in the grayscale intensities in the area of interest, which have been associated with intratumor heterogeneity[17] and can potentially provide clinically relevant information that would otherwise remain unknown.
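The idea of quantifying grayscale heterogeneity can be illustrated with two simple first-order descriptors computed from the intensity histogram of a region of interest (ROI). The function name, bin settings, and synthetic ROIs below are assumptions made for the example, not the feature definitions of any cited radiomics study.

```python
# Illustrative sketch: first-order "texture" descriptors from the intensity
# histogram of an ROI (synthetic data; higher entropy ~ more heterogeneous).
import numpy as np

def first_order_features(roi, bins=32, value_range=(0, 255)):
    """Mean intensity and histogram entropy of a region of interest."""
    hist, _ = np.histogram(roi, bins=bins, range=value_range)
    p = hist / hist.sum()
    p = p[p > 0]
    return {"mean": float(roi.mean()), "entropy": float(-np.sum(p * np.log2(p)))}

rng = np.random.default_rng(0)
homogeneous = rng.normal(100, 2, size=(64, 64))     # visually uniform region
heterogeneous = rng.normal(100, 30, size=(64, 64))  # intratumor-like heterogeneity

f_hom = first_order_features(homogeneous)
f_het = first_order_features(heterogeneous)
print(f_hom["entropy"], f_het["entropy"])
```

The heterogeneous region spreads its intensities over many more histogram bins and therefore yields a higher entropy, even though both regions share the same mean intensity, which is precisely the kind of information a visual read can miss.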

IMAGE ACQUISITION

The ultimate aim of computerized tomography (CT) and MRI is to unveil clinically relevant information; thus, the value of this information relies heavily on the quality of the image. For CT, radiation dose is a parameter as important as image quality, and both are closely related to acquisition and reconstruction times. Iterative reconstruction (IR) algorithms[18] are the current technique of choice to transform the raw data into a 3D volume presented as an anatomical image. These algorithms generate an image estimate that is projected forward into a synthetic sinogram; this image estimate is then iteratively corrected by comparison with the real raw-data sinogram until the algorithm’s predefined endpoint condition is met, resulting in enhanced image quality and thus allowing an important dose reduction[19]. DL reconstruction (DLR) algorithms are currently being developed with the aim of further improving image quality and thereby further reducing radiation doses. Compared to IR algorithms, DLR algorithms trained with low-dose data offer an improved signal-to-noise ratio (SNR), as demonstrated by the U-net-based CNN developed by Jin et al[20], thus facilitating the detection of lesions of any kind and the increased use of low-dose imaging. Currently, there are two commercially available DLRs: TrueFidelity (GE Healthcare, Chicago, IL, United States) and AiCE (Canon Medical Systems, Otawara, Japan). Akagi et al[21] employed AiCE in their study and reported improved contrast-to-noise ratio and image quality in CT images compared to images created with a hybrid IR algorithm. Although these preliminary results are exciting, further validation of DLR algorithms is required, and real dose reduction in the clinical setting has yet to be demonstrated.
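The estimate-project-compare-correct loop of IR can be sketched on a toy linear model. The matrix `A` below merely stands in for a scanner's forward projection (a real CT system model is far more complex), and the Landweber-style update is an assumption chosen for simplicity, not the proprietary algorithm of any vendor.

```python
# Toy sketch of an iterative-reconstruction loop on a small linear model:
# forward-project the current estimate, compare with the measured sinogram,
# and correct the estimate. Purely illustrative; not a real CT model.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(60, 20))   # stands in for the scanner's forward projector
x_true = rng.normal(size=20)    # "true" image (flattened)
b = A @ x_true                  # measured raw-data sinogram

x = np.zeros(20)                          # initial image estimate
lr = 1.0 / np.linalg.norm(A, 2) ** 2      # step size chosen for stable updates
for _ in range(1000):
    residual = A @ x - b                  # synthetic sinogram vs measured data
    x -= lr * (A.T @ residual)            # correct the image estimate

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative error after 1000 iterations: {rel_err:.2e}")
```

Practical IR algorithms add noise models and regularization to this basic loop, which is what enables the dose reductions discussed above.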

An important drawback of MRI is the long acquisition time, forcing the patient to lie still for a relatively long period, with any movement affecting the quality of the image. One way to reduce acquisition time is compressed sensing, based on the idea that if signal information is only present in a small portion of pixels, that sparsity can be used to reconstruct a high-definition image from considerably less collected data (undersampling). Kaga et al[22] evaluated the usefulness of the Compressed SENSE algorithm (Philips, Amsterdam, The Netherlands) in MRI of the abdomen using diffusion-weighted images (DWIs) and reported significantly reduced image noise, better delineation of the contours of the liver and pancreas, and higher apparent diffusion coefficient values, thus offering superior image quality compared to parallel imaging (PI)-DWI[22].
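The core compressed-sensing idea, recovering a sparse signal from fewer measurements than unknowns, can be demonstrated in one dimension with iterative soft-thresholding (ISTA). The dimensions, random sensing matrix, and regularization weight below are illustrative assumptions; clinical compressed sensing operates on undersampled k-space data with wavelet-type sparsity.

```python
# Minimal numpy sketch of compressed sensing: recover a k-sparse signal of
# length n from only m < n random linear measurements via ISTA.
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 100, 40, 4                 # signal length, measurements, nonzeros

x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 5 * rng.normal(size=k)

Phi = rng.normal(size=(m, n)) / np.sqrt(m)  # undersampled measurement operator
y = Phi @ x_true                            # fewer measurements than unknowns

step = 1.0 / np.linalg.norm(Phi, 2) ** 2    # step size for stable iterations
lam = 0.05                                  # sparsity weight (assumed)
x = np.zeros(n)
for _ in range(3000):
    z = x - step * Phi.T @ (Phi @ x - y)                    # data-fit step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)  # soft-threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {rel_err:.3f}")
```

Despite having only 40 measurements for 100 unknowns, the sparsity prior allows an accurate reconstruction, which is the same principle that lets undersampled MRI acquisitions be completed faster.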

AI applications have also been designed to automate MRI and CT protocol selection with the aim of standardizing workflows and increasing effectiveness in the radiology setting. The selection of an appropriate imaging protocol requires taking into account factors including the type of procedure, the clinical indication, and the patient’s medical history. The increasing adoption of electronic medical records and other digital content has opened opportunities for the application of natural language processing (NLP) methods to extract structured data from unstructured radiology reports. López-Úbeda et al[23] developed an NLP-based classification system for automated protocol assignment that achieved an overall accuracy of 92.25% on the CT and 86.91% on the MRI datasets. This system has already been successfully implemented and is currently in use at the HT Médica centers.
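A common baseline for this kind of text-to-protocol classification is TF-IDF features feeding a linear classifier. The tiny corpus and protocol labels below are invented for illustration and are not taken from López-Úbeda et al.

```python
# Hedged sketch of NLP-based protocol assignment: TF-IDF features plus a
# logistic-regression classifier over free-text imaging requests.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented example requests and protocol labels (not real clinical data).
orders = [
    "rule out hcc, cirrhotic patient, contrast study",
    "follow-up liver metastases after chemotherapy",
    "suspected pancreatic mass, epigastric pain",
    "ipmn surveillance, cystic pancreatic lesion",
]
protocols = ["liver CT multiphase", "liver CT multiphase",
             "pancreas CT", "pancreas CT"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(orders, protocols)
print(clf.predict(["cystic lesion in pancreas, surveillance"]))
```

A production system would of course be trained on thousands of labeled requests and validated against radiologist assignments, but the pipeline structure is the same.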

Information about the patient’s respiration can be used for functional studies, overall monitoring, or motion compensation during an MRI examination. Typically, breathing is measured via belts or nasal sensors that can potentially alter the raw MRI data. Using adaptive intelligence, the laser-based VitalEye system (Philips) registers a contactless, continuous respiratory signal, with up to 50 body locations analyzed simultaneously and in real time, thus producing a more robust respiratory trace compared to traditional respiratory belts[24]. Moreover, as soon as the patient is lying on the table, the BioMatrix Respiratory Sensors (Siemens AG, Munich, Germany) embedded in the spinal coil produce a local magnetic field that changes with the variation of lung volume during breathing. These changes are registered, and the breathing pattern is integrated to optimize image quality[25]. By standardizing and accelerating the workflow, these advances allow technicians and radiologists to concentrate on the patient.

IMAGE ANALYSIS
Segmentation of liver and pancreas

Image analysis has experienced enormous progress with the advent of AI, and especially of DL, reaching state-of-the-art performance in many biomedical image analysis tasks[26-28] (Table 1). Among these tasks, segmentation is one of the most important in radiology. For instance, accurate pancreas segmentation has applications in surgical planning, assessment of diabetes, and detection and analysis of pancreatic tumors[29]. Another key application of organ and lesion contouring is treatment volume calculation for radiotherapy planning. However, boundary delimitation of anatomical structures in medical images remains a challenge due to their complexity, particularly in the upper abdominal cavity, where the positions of the different organs constantly change with the respiratory cycle and anatomical variants and pathological changes of organs occur[30].

Table 1 Works proposed for automated image analysis.

Image analysis | Anatomical area | Modality | AI model           | Ref.
Segmentation   | Pancreas        | MRI      | CNN                | [33,34,110]
               |                 |          | UDCGAN             | [111]
               |                 |          | 3D-Unet            | [112]
               | Liver           | CT       | SSC (no AI)        | [36]
               |                 |          | PA (Atlas, no AI)  | [39]
               |                 | MRI      | CNN                | [37,38,42,113]
               |                 |          | GAN                | [43]
Registration   | Liver           | CT, MRI  | CNN                | [47]
               |                 |          | SG-DIR (no AI)     | [48]
               |                 |          | Cycle-GAN + UR-Net | [46]
               |                 | 4D-MRI   | Non-rigid          | [49]

The intersubject variability and complexity of the pancreas make segmentation of this organ a demanding task. Segmentation of pancreatic cancer lesions is particularly challenging because of their limited contrast and blurred boundaries against the background pancreatic parenchyma in CT and MR images[31]. In addition, other factors such as body mass index, visceral abdominal fat, volume of the pancreas, standard deviation of CT attenuation within the pancreas, and median and average CT attenuation in the immediate neighborhood of the pancreas may affect segmentation accuracy[29,32].

These problems lead to high segmentation uncertainty and inaccurate results. To tackle them, Zheng et al[33] proposed a 2D, DL-based method that describes the uncertain regions of pancreatic MR images based on shadowed sets theory. It demonstrated high accuracy, with a dice similarity coefficient (DSC) of 73.88% on a cancer MRI dataset and 84.37% on the National Institutes of Health (NIH) Pancreas dataset (which contains 82 CT scans of healthy pancreases). The same authors later reported[34] a more sophisticated 2.5D network that benefits from multi-level slice interaction, surpassing state-of-the-art performance on the NIH dataset with a DSC of 86.21% ± 4.37%, a sensitivity of 87.49% ± 6.38%, and a specificity of 85.11% ± 6.49%.
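The DSC quoted throughout this section has a simple definition for binary masks: twice the overlap divided by the total size of the two segmentations. The numpy sketch below (with toy 10 × 10 masks invented for the example) makes it concrete.

```python
# Dice similarity coefficient (DSC) for binary segmentation masks:
# DSC = 2|P intersect T| / (|P| + |T|), ranging from 0 (no overlap) to 1.
import numpy as np

def dice(pred, truth):
    """DSC between a predicted and a ground-truth binary mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

# Toy example: a 6x6 "organ" and a prediction shifted by one pixel.
truth = np.zeros((10, 10), dtype=int); truth[2:8, 2:8] = 1
pred = np.zeros((10, 10), dtype=int); pred[3:9, 3:9] = 1

print(f"DSC = {dice(pred, truth):.3f}")
```

Note how sensitive the score is to small misalignments: a one-pixel shift of a 6 × 6 region already drops the DSC to about 0.69, which helps put the reported values above 0.85-0.95 in perspective.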

The liver is also a popular target for automated segmentation algorithms. Automatic segmentation of this organ is regarded as somewhat less challenging than that of the pancreas, with reported DSC scores typically in the > 0.90 range[35].

Li et al[36] presented a liver segmentation method for abdominal CT volumes, valid for both healthy and pathological tissues, based on the level set and sparse shape composition (SSC) methods. The experiments, performed using the public SLIVER07 and 3Dircadb databases, showed good results, with mean ASD, RMSD, MSD, VOE, and RVD of 0.9 mm, 1.8 mm, 19.4 mm, 5.1%, and 0.1%, respectively. Moreover, Winther et al[37] used a 3D DNN for automatic liver segmentation on a dataset of Gd-EOB-DTPA-enhanced liver MR images. Results showed an intraclass correlation coefficient (ICC) of 0.987, a DSC of 96.7% ± 1.9%, and a Hausdorff distance of 24.9 mm ± 14.7 mm, compared with an ICC of 0.973 and a DSC of 95.2% ± 2.8% between two expert readers. Finally, Mohagheghi et al[38] used a CNN but further incorporated prior knowledge. The model learnt the global shape information as prior knowledge by using a convolutional denoising auto-encoder; this knowledge was then used to define a loss function that was combined with the Dice loss in the main segmentation model. This model with prior knowledge improved the performance of the 3D U-Net model and reached a DSC of 97.62% segmenting CT images of the SLIVER07 liver dataset.

Organ segmentation is even more challenging in pediatric patients studied with CT, as scans are acquired at a low dose to minimize harmful radiation to children and thus have a lower SNR. Nakayama et al[39] proposed a liver segmentation algorithm for pediatric CT scans using a patient-specific level set distribution model to generate a probabilistic atlas, obtaining a DSC of 88.21%. This approach may be useful for low-dose studies in general, i.e., also in the adult population.

Algorithms for automatic segmentation of the liver using MR images have proven equally efficient. For instance, Bobo et al[40] used a 2D FCN architecture to segment livers on T2-weighted MR images with a DSC score of 0.913. In a recent paper, Saunders et al[41] systematically analyzed the performance of different types of MR images in the training of CNN for liver segmentation, using a 3D U-net architecture. Water and fat images outperformed other modalities, such as T2* images, with a DSC of 0.94.

Conversely, high-quality automatic segmentation of liver lesions is not an easy task, since the low contrast between tumors and healthy liver parenchyma in CT images, together with tumor inhomogeneity and complexity, poses a challenge for liver tumor segmentation. In addition, motion-induced phase errors due to peristaltic and respiratory movements negatively affect image quality and the assessment of liver lesions in MR images. Meng et al[42] designed a special three-dimensional dual-path multiscale convolutional neural network (TDP-CNN) for liver tumor segmentation. Results achieved on the LiTS public dataset were a DSC of 68.9%, a Hausdorff distance of 7.96 mm, and an average distance of 1.07 mm for liver tumor segmentation, and a DSC of 96.5%, a Hausdorff distance of 29.162 mm, and an average distance of 0.197 mm for liver segmentation. A different approach for liver tumor segmentation was proposed by Chen et al[43]. In this work, an adversarial densely connected network algorithm was trained and evaluated using the Liver Tumor Segmentation challenge dataset. Results revealed an average Dice score of 68.4% and ASD, MSD, VOE, and RVD of 21 mm, 124 mm, 0.46%, and 0.73%, respectively.
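Alongside the DSC, the Hausdorff distance reported above measures the worst-case disagreement between two contours: the largest distance from a point on one contour to the nearest point on the other, taken symmetrically. A small sketch with scipy and two toy point sets (invented for the example) illustrates the computation.

```python
# Symmetric Hausdorff distance between two contours, using scipy's
# directed_hausdorff on illustrative 2D boundary point sets.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Toy boundary points of a ground-truth and a predicted contour.
truth = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
pred = np.array([[0.1, 0.0], [0.0, 1.2], [1.0, 0.0], [1.3, 1.0]])

d = max(directed_hausdorff(pred, truth)[0], directed_hausdorff(truth, pred)[0])
print(f"Hausdorff distance = {d:.2f}")
```

Because it is a worst-case metric, a single badly placed contour point can dominate the Hausdorff distance even when the DSC is high, which explains why papers report both.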

Automatic contouring of hepatic tumor volumes has also been reported, for example, using a modified SegNet CNN on CT scans[44] and a U-net-like architecture on dynamic contrast-enhanced (DCE)-MRI images[45].

Some medical imaging vendors incorporate solutions for liver segmentation and hepatic lesion characterization integrated in the proprietary radiologist’s workflow. For instance, the Liver Analysis research application from Siemens Healthcare (Erlangen, Germany) aims to provide AI support for liver MRI and CT reading. The tool includes DL-based algorithms for automatic segmentation of the whole liver, functional liver segments, and other abdominal organs like the spleen and kidneys (Figure 4A). It also features an AI method to automatically detect and segment focal liver lesions, providing lesion diameter, volume, and 3D contours (Figure 4B).

Figure 4
Figure 4 In-house experience on liver assessment with artificial intelligence. Magnetic resonance studies of a patient with liver focal lesions (liver hemangiomas), processed with the Liver Analysis research application from Siemens Healthcare. A: Automatic segmentation of the whole liver, liver segments, and other abdominal organs; B: Automatic detection, segmentation, and measurement of the two liver hemangiomas.
Registration

Medical image registration seeks to find the optimal spatial transformation that best aligns the underlying anatomical structures. It is used in many clinical applications, such as image guidance systems (IGS), motion tracking, segmentation, dose accumulation, and image reconstruction[28]. In clinical practice, image registration is a major problem in image-guided liver interventions, especially for soft tissues, where organ shape changes occurring between pre-procedural and intra-procedural imaging pose significant challenges[46]. Schneider et al[47] showed how semi-automatic registration in IGS may improve patient safety by enabling 3D visualization of critical intra- and extra-hepatic structures. A novel IGS (SmartLiver) offering augmented reality visualization was developed to provide intuitive visualization by using DL algorithms for semi-automatic image registration. Results showed a mean registration accuracy of 10.9 mm ± 4.2 mm for manual vs 13.9 mm ± 4.4 mm for semi-automatic registration, the semi-automatic approach thus nearing manual performance while reducing operator dependence. Kuznetsova et al[48] assessed the performance of structure-guided deformable image registration (SG-DIR) relative to rigid registration and DIR, following TG-132 recommendations, in 14 patients with liver tumors treated with stereotactic body radiation therapy (SBRT). The median DSC was 88% for rigid registration, 89% for DIR, and 90% for SG-DIR, both when using liver contours only and when using liver contours along with anatomical landmarks. However, most existing volumetric registration algorithms are not suitable for the intra-procedural stage, as they involve time-consuming optimization. In the report by Wei et al[46], a fast MR-CT image registration method was proposed for overlaying pre-procedural MR (pMR) and pre-procedural CT (pCT) images onto an intra-procedural CT (iCT) image to guide thermal ablation of liver tumors. This method, consisting of four DL-based modules and one conventional ANTs registration module, showed higher Dice ratios (around 7% improvement) over tumors and comparable Dice ratios over livers. However, its main advantage was a computational cost of around 7 s in the intra-procedural stage, roughly 0.1% of the runtime of the conventional approach (i.e., ANTs).
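At its simplest, intensity-based registration searches for the transformation that maximizes a similarity measure between the fixed and moving images. The numpy toy below (an invented example, restricted to integer translations and normalized cross-correlation; real pipelines such as ANTs optimize far richer deformations) recovers a known shift between two images.

```python
# Toy sketch of intensity-based rigid registration: recover a known integer
# translation by maximizing normalized cross-correlation over a search window.
import numpy as np

rng = np.random.default_rng(0)
fixed = rng.random((32, 32))                          # reference image
moving = np.roll(fixed, shift=(3, -2), axis=(0, 1))   # shifted copy to align

def ncc(a, b):
    """Normalized cross-correlation between two same-sized images."""
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

# Exhaustive search over candidate shifts; keep the one with the highest NCC.
best = max(
    ((dy, dx) for dy in range(-5, 6) for dx in range(-5, 6)),
    key=lambda s: ncc(fixed, np.roll(moving, shift=(-s[0], -s[1]), axis=(0, 1))),
)
print(f"estimated shift: {best}")
```

Deformable registration replaces the two-parameter shift with a dense displacement field and gradient-based optimization, which is exactly the step that DL modules accelerate in the intra-procedural setting described above.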

Treatment planning concepts using the mid-ventilation and internal-target-volume approaches are based on the extent of tumor motion between expiration and inspiration. Therefore, four-dimensional (4D) imaging is required to provide the necessary information about the individual respiration-associated motion pattern. Weick et al[49] proposed a method to increase the image quality of the end-expiratory and end-inspiratory phases of retrospective respiratory self-gated 4D MRI data sets using two different non-rigid image registration schemes for improved target delineation of moving liver tumors. In the first scheme, all phases were registered directly (dir-Reg), while in the second, next neighbors were successively registered until the target was reached (nn-Reg). The median dir-Reg coefficient of variation over all regions of interest (ROIs) was 5.6% lower for expiration and 7.0% lower for inspiration compared with nn-Reg, with statistically significant differences in all comparisons.

DIAGNOSIS

Two decades ago, the methods proposed for ML-based diagnosis required manually extracting features from the images. This tedious step has been partially relieved by the advent of CNNs. However, techniques such as radiomics are still in use to try to improve the performance of novel AI methods for medical diagnosis. Radiomics concerns the high-throughput extraction of comprehensible features from radiological images that can be further analyzed by ML algorithms for classification or regression tasks. In this section, different methods proposed for liver and pancreas imaging diagnosis are reviewed (Table 2).

Table 2 Summary of works based on artificial intelligence for automated diagnosis of pancreas and hepatobiliary system diseases.

Anatomical area | Modality   | AI model                   | What is diagnosed?                                                                  | Ref.
Liver           | Scintiscan | ANN                        | Chronic hepatitis and cirrhosis                                                     | [114]
                | CT         | ANN                        | HCC, intra-hepatic peripheral cholangiocarcinoma, hemangioma, metastases            | [52]
                |            | CNN                        | HCC, malignant liver tumors, indeterminate masses, hemangiomas, cysts               | [53]
                |            |                            | Liver fibrosis                                                                      | [50,115]
                |            | SVM                        | Cirrhosis and HCC                                                                   | [51]
                |            |                            | Malignant liver tumors                                                              | [54]
                |            | KNN, SVM, RF               | HCC                                                                                 | [116]
                | MRI        | CNN                        | HCC                                                                                 | [55]
                |            |                            | Simple cyst, cavernous hemangioma, FNH, HCC, ICC                                    | [56,57]
                |            | Extremely randomized trees | Adenomas, cysts, hemangiomas, HCC, metastases                                       | [58]
                | US         | PNN                        | Benign and malignant focal liver lesions                                            | [65]
                |            | SVM                        | Fatty liver                                                                         | [68]
                |            |                            | HCC                                                                                 | [66]
                |            | CNN                        | Focal liver lesions: Angioma, metastasis, HCC, cyst, FNH                            | [67]
                |            |                            | Liver fibrosis stages                                                               | [69]
Biliary system  | MRI        | ANN                        | Cholangiocarcinoma                                                                  | [59,60]
                |            |                            | Lymph node status in ICC                                                            | [117]
Pancreas        | CT         | Hybrid SVM-RF              | Pancreas cancer                                                                     | [76]
                |            | SVM                        | Serous cystic neoplasms                                                             | [72]
                |            | CNN                        | IPMN, mucinous cystic neoplasm, serous cystic neoplasm, solid pseudopapillary tumor | [73]
                | MRI        | SVM                        | IPMN                                                                                | [78]
                | US         | ANN                        | Chronic pancreatitis, pancreatic adenocarcinoma                                     | [81]
                |            | CNN                        | Malignancy in IPMN                                                                  | [82]
                |            |                            | Autoimmune pancreatitis, pancreatic ductal adenocarcinoma, chronic pancreatitis     | [83]
Liver CT

Starting with chronic liver disease, Choi et al[50] presented a CNN model for staging liver fibrosis from contrast-enhanced CT images. Before the CT image is used as input for the CNN, the liver is segmented. The testing dataset included 891 patients, and the CNN achieved a staging accuracy of 79.4% and AUCs of 96%, 97%, and 95% for diagnosing significant fibrosis, advanced fibrosis, and cirrhosis, respectively. A different approach was proposed by Nayak et al[51], who used an SVM instead of a CNN to aid in the diagnosis of cirrhosis and hepatocellular carcinoma (HCC) from multi-phase abdominal CT. Features were extracted from the segmented liver in all phases, which were previously registered. Using 5-fold cross-validation, they reported accuracies of 86.9% and 81% for the detection of cirrhosis and HCC, respectively.

There are also several reports exploring the role of DL in the characterization of focal liver lesions (Figure 5). In this sense, Matake et al[52] applied an ANN to assist in the diagnosis of hepatic masses using clinical and radiological parameters extracted from CT images. The authors used 120 cases of liver disease and implemented a leave-one-out cross-validation method for training and testing the ANN, reporting an AUC of 96.1%. Also using CT images, Yasaka et al[53] used a CNN for the differentiation of five different types of liver masses on contrast-enhanced CT. For testing, they used 100 liver mass images, reporting an accuracy of 84%. Similarly, Khan and Narejo[54] proposed a Fuzzy Linguistic Constant (FLC) to enhance low-contrast CT images of the liver before training an SVM to distinguish between cancerous and non-cancerous lesions. The reported classification accuracy was 98.3%. The proposed method also showed the ability to automatically segment the tumor, with an improved detection rate of 78% and a precision of 60%.

Figure 5
Figure 5 Computerized tomography scan of a 61-year-old male patient with colon carcinoma and liver metastases. The intensity histograms of regions with and without metastases are different; hence, the first-order radiomics features[109], which are based on the intensity histogram, will potentially differ as well.
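To illustrate the caption's point, the hypothetical Python sketch below computes a few first-order radiomics features (mean, variance, skewness, histogram entropy) from the intensities of an ROI; it follows the general definitions of first-order features rather than any specific implementation from the cited works.

```python
import numpy as np

def first_order_features(roi, bins=32):
    """First-order radiomics features from an ROI's intensity histogram."""
    x = np.asarray(roi, dtype=float).ravel()
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()          # discrete intensity probabilities
    nz = p[p > 0]                  # drop empty bins before taking logs
    mu, sd = x.mean(), x.std()
    return {
        "mean": float(mu),
        "variance": float(x.var()),
        "skewness": float(((x - mu) ** 3).mean() / sd ** 3) if sd > 0 else 0.0,
        "entropy": float(-(nz * np.log2(nz)).sum()),  # histogram entropy in bits
    }
```

A homogeneous region concentrates its histogram in one bin (entropy near zero), while a metastasis-containing region spreads intensities over many bins, which is exactly the difference these features are designed to capture.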
Liver and biliary system MRI

Techniques concerning MR images have also been developed for the diagnosis and classification of focal liver lesions (Figure 6). Zhou et al[55] proposed a method using a novel CNN to grade HCC from DWIs. They applied a 2D CNN to log maps generated from different b-value images. In their work, they reported a validation AUC of 83% using 40 cases. A CNN was also trained by Hamm et al[56] and Wang et al[57] to classify six different focal hepatic lesions from T1-weighted MR images in the postcontrast phase. They used 60 cases for testing and reported a sensitivity and specificity of 90% and 98%, respectively. In the second part of their study, they transformed it into an “interpretable” DL system by analyzing the relative contributions of specific imaging features to its predictions in order to shed light on the factors involved in the network’s decision-making process. Finally, DCE-MRI and T2-weighted MRI, together with risk factor features, were applied to build an extremely randomized trees classifier for focal liver lesions[58], achieving an overall accuracy of 77%.

Figure 6
Figure 6 Sixty-seven-year-old patient with pancreatic carcinoma and liver metastases treated with chemotherapy. The Digital Oncology Companion (Siemens Healthineers, Germany) artificial intelligence-based prototype automatically segments liver, portal and hepatic vessels, lesions, and surrounding anatomical structures. From left to right: screenshots of the segmented liver, vessels, and lesions; and generated 3D models.

Some advancements have also been reached in the automatic diagnosis of lesions in the biliary system from MR cholangiopancreatography (MRCP) sequences. Logeswaran[59,60] trained an ANN classifier for assisting in the diagnosis of cholangiocarcinoma. He utilized 55 MRCP studies for testing and reported an accuracy of 94% when differentiating healthy and tumor images and of 88% in multi-disease tests.

MRI is superior to CT in the evaluation of chronic liver disease, but making the most of it requires considerable skill and optimization at the acquisition, post-processing, and interpretation phases[61]. AI has proved useful in assisting radiologists in the MR-based diagnosis and grading of these diseases, including liver fibrosis and non-alcoholic fatty liver disease[62].

Radiomics studies have been proposed to aid in the diagnosis of liver fibrosis. Kato et al[63] performed texture analysis of the liver parenchyma processed by an ANN to detect and grade hepatic fibrosis, with varying success depending on the type of MR sequence used (AUC of 0.801, 0.597, and 0.525 for gadolinium-enhanced equilibrium phase, T1-weighted, and T2-weighted images, respectively).

Later, Hectors et al[64] developed a DL algorithm for liver fibrosis staging using gadolinium enhancement sequences acquired in the hepatobiliary phase, which showed good to excellent diagnostic performance, comparable to that of MR elastography.

Liver-US

Ultrasound (US) and endoscopic ultrasonography (EUS) are commonly used in the diagnostic work-up of several pancreatic and liver lesions. AI-based solutions have also been applied to US images in the assessment of focal and diffuse liver diseases in order to enhance their diagnostic capabilities. Acharya et al[65] suggested a method to aid in the diagnosis of focal liver lesions from liver US images. The authors extracted features from US images and trained several classifiers, obtaining the highest AUC (94.1%) with a probabilistic neural network (PNN) classifier. Another approach was presented by Yao et al[66], who established a radiomics analysis for the diagnosis and clinical behavior prediction of HCC, showing an AUC of 94% for benign versus malignant classification. Naturally, CNN architectures have also been developed for US images, as in the report by Schmauch et al[67], where a CNN was employed to help in the diagnosis of focal liver lesions from US images. The authors used a dataset composed of 367 2D US images for training and another dataset from 177 patients for testing, reporting a mean score of 89.1%.

There is limited experience in the use of AI with US images with regard to diffuse liver disease. Li et al[68] used an SVM classifier to help in the diagnosis of fatty liver from US images. Input features were computed from ROIs selected by examiners. A total of 93 images were used for training and testing with leave-one-out cross-validation. The authors reported an accuracy of 84% for normal livers and 97.1% for fatty livers. Moreover, a mix of radiomics features and DL techniques was used with two-dimensional shear wave elastography (2D-SWE) to assess liver fibrosis stages in Wang et al[69]. The results reached AUCs of 97% for cirrhosis, 98% for advanced fibrosis, and 85% for significant fibrosis.

Pancreas CT and PET/CT

The role of AI in the detection of pancreatic lesions from CT has been extensively investigated. Pancreatic cancer detection is a challenging task for radiologists, and its improvement is a hot research topic. Chen et al[70] developed a DL-based tool including a segmentation CNN and a 5-CNN classifier for the detection of pancreatic cancer lesions, with a special focus on lesions smaller than 2 cm, in abdominal CT scans. Their model was able to distinguish between cancer and control scans with an AUC of 0.95, 89.7% sensitivity, and 92.8% specificity. Sensitivity for the detection of lesions smaller than 2 cm was 74.7%[70]. Still focused on the identification of lesions smaller than 2 cm, Alves et al[71] proposed an automatic framework for pancreatic ductal adenocarcinoma (PDAC) detection based on state-of-the-art DL models. They trained an nnU-Net (nnUnet_T) on a dataset including contrast-enhanced CT scans from 119 PDAC patients and 123 healthy individuals for automatic lesion detection and segmentation. Additionally, two other nnU-Nets were trained to investigate the impact of anatomy integration, with nnUnet_TP segmenting both the pancreas and the tumor and nnUnet_MS segmenting the pancreas, tumor, and adjacent anatomical structures. All three networks were compared on an open-access external dataset, with nnUnet_MS offering the best results, with an AUC of 0.91 for the entire dataset and of 0.88 for lesions smaller than 2 cm[71]. Several studies have focused on the role of AI-based solutions in the detection of pancreatic cystic lesions. Wei et al[72] presented an ML-based computer-aided diagnosis system to help in the diagnosis of pancreatic serous cystic neoplasms from CT images. They extracted radiomic features from manual ROIs outlining the peripheral margin of each neoplasm. After selecting the most important features using least absolute shrinkage and selection operator (LASSO) regression, they trained an SVM classifier with 5-fold cross-validation on 200 patients.
The authors used a validation cohort of 60 patients and reported an AUC of 83.7%, a sensitivity of 66.7%, and a specificity of 81.8%. Along the same lines, Li et al[73] proposed a computer-aided framework for the early differential diagnosis of pancreatic cysts without pre-segmenting the lesions, using densely connected convolutional networks (DenseNet). In this approach, saliency maps were integrated into the framework to help physicians understand the decisions of the DL methods. The accuracy reported on a cohort of 206 patients with four pathologically confirmed subtypes of pancreatic cysts was 72.8%, significantly higher than the baseline of 48.1% according to the authors. Park et al[74] developed a 3D nnU-Net-based model for the automatic diagnosis of solid and cystic pancreatic neoplasms on abdominal CT scans. The model was trained on CT scans (852 patients) from both patients who underwent resection for pancreatic lesions and subjects without any pancreatic abnormalities, and performance was evaluated using receiver operating characteristic analysis in a temporally independent cohort (test set 1, 603 patients) and a temporally and spatially independent cohort (test set 2, 589 patients). This approach showed a remarkable capacity to identify solid and cystic pancreatic lesions on CT, with an AUC of 0.91 for test set 1 and 0.87 for test set 2. Furthermore, it offered a high sensitivity in the identification of solid lesions of any size (98%-100%) and cystic lesions of at least 1 cm (92%-93%)[74].
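LASSO regression, used by Wei et al[72] for feature selection, shrinks the coefficients of uninformative radiomic features exactly to zero via L1 regularization, so only the surviving features reach the downstream classifier. The following is a minimal, illustrative coordinate-descent sketch of this idea (not the authors' implementation); `lasso_cd` is a hypothetical helper that assumes roughly standardized feature columns.

```python
import numpy as np

def lasso_cd(X, y, alpha=0.1, n_iter=200):
    """Minimize (1/2n)||y - Xw||^2 + alpha*||w||_1 by cyclic coordinate descent.

    Features whose correlation with the residual stays below alpha are
    soft-thresholded to exactly zero, i.e. deselected.
    """
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0) / n  # per-feature scale (≈1 if standardized)
    for _ in range(n_iter):
        for j in range(d):
            r = y - X @ w + X[:, j] * w[j]           # residual excluding feature j
            rho = X[:, j] @ r / n
            # soft-thresholding operator: drives small coefficients to zero
            w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / col_sq[j]
    return w
```

Radiomics pipelines often extract hundreds of features from a few hundred patients, so this sparsity is what keeps the subsequent SVM from overfitting.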

In the pursuit of more accurate models, some authors have combined CT images with other biomarkers, such as molecular markers or multimodal images. For example, Qiao et al[75] used CT scans and serum tumor markers (including serum carbohydrate antigens 50, 199, and 242) to train different types of networks (CNN, FCN, and U-Net) to diagnose pancreatic cancer with high sensitivity and specificity. Li et al[76] used a hybrid SVM-RF model to distinguish normal pancreas from pancreatic cancer in PET/CT images. First, they segmented the pancreas from the CT images and registered the CT and PET series; then they extracted features from the segmented ROI in both types of studies. The authors tested the model using 10-fold cross-validation with 80 cases and achieved 96.47% accuracy, 95.23% sensitivity, and 97.51% specificity.
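The accuracy, sensitivity, and specificity figures quoted throughout this section all derive from the confusion matrix in the standard way. A minimal sketch (the function name is ours, not from any cited work):

```python
def diagnostic_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on positives), specificity (recall on negatives)
    for binary labels, where 1 = diseased and 0 = control."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,  # missed cancers lower this
        "specificity": tn / (tn + fp) if tn + fp else 0.0,  # false alarms lower this
    }
```

Reporting sensitivity and specificity separately matters clinically: a model can reach high accuracy on an imbalanced cohort while still missing most true lesions.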

Pancreas-MRI

MR is the technique of choice for the assessment of complex pancreatic conditions. Thus, its combination with AI is regarded as a promising way to help radiologists with diagnostic dilemmas involving this organ. For instance, radiomics has been proposed as a way to predict the malignant potential of pancreatic cystic lesions, differentiating benign cysts from those likely to transform into pancreatic cancer[77].

There is limited experience with the use of AI in the detection of focal lesions in pancreatic MR studies. Corral et al[78] proposed the use of an SVM to classify intraductal papillary mucinous neoplasms (IPMN). First, features were extracted with a CNN from T2-weighted and post-contrast T1-weighted MR images. For validation, the authors used 10-fold cross-validation with 139 cases, achieving an AUC of 78%. Kaissis et al[79] also developed a supervised ML algorithm that predicted above-versus-below-median overall survival of patients with pancreatic ductal adenocarcinoma, with 87% sensitivity and 80% specificity, using preoperative DWIs.

Lastly, the generation of synthetic MR images of pancreatic neuroendocrine tumors (PNET) has been explored using GANs. This data augmentation technique can alleviate the relatively low abundance of this type of pancreatic tumor when training AI models. Gao and Wang[80] used the synthetic images to evaluate the performance of a CNN in the prediction of PNET grade on contrast-enhanced images.

Pancreas-EUS

The application of AI to EUS has focused on the differentiation of focal pancreatic lesions. In this sense, Săftoiu et al[81] developed an ANN to help in the difficult differentiation between PDAC and focal chronic pancreatitis (CP) with EUS elastography. They included 258 patients in the study and reported 84.27% testing accuracy using 10-fold cross-validation. In addition, Kuwahara et al[82] used a CNN to assist in the distinction between benign and malignant IPMNs of the pancreas from EUS images. For testing, the authors used images from 50 patients, obtaining an AUC of 98% and sensitivity, specificity, and accuracy values of 95.7%, 92.6%, and 94%, respectively. Finally, in the report by Marya et al[83], an EUS-based CNN model was trained to differentiate autoimmune pancreatitis (AIP) from PDAC, CP, and normal pancreas (NP). Results obtained from 583 patients (146 AIP, 292 PDAC, 72 CP, and 73 NP) demonstrated a sensitivity of 99% and a specificity of 98% for distinguishing between AIP and NP, 94% and 71% for AIP and CP, and 90% and 93% for AIP and PDAC. Furthermore, the sensitivity and specificity for distinguishing AIP from all other study conditions (i.e., PDAC, CP, and NP) were 90% and 85%, respectively. In view of these results, the application of AI to EUS in the assessment of focal pancreatic lesions is promising, although limited by the small number of databases available for algorithm training and validation[84].

TREATMENT PREDICTION

Prediction of treatment response and patient outcome based on AI is a very appealing idea which has been explored in a number of liver and pancreatic diseases, particularly in patients with HCC (Table 3).

Table 3 Summary of the works proposed to predict patient prognosis using artificial intelligence.
Anatomical area | Pathology | Modality | AI model | What is prognosed? | Ref.
Liver | HCC | CT | ANN | Progression of hepatectomized patients with HCC | [85]
Liver | HCC | CT | CNN | Early recurrence of HCC | [88]
Liver | HCC | CT | CNN | Response to transarterial chemoembolization for patients with intermediate-stage HCC | [89]
Liver | HCC | CT | LASSO Cox regression | Early recurrence of HCC | [90]
Liver | HCC | CT | LASSO Cox regression | Recurrence of HCC after liver transplantation | [91]
Liver | HCC | CT | LASSO Cox regression | Recurrence of HCC after resection | [118]
Liver | HCC | MRI | LR, RF | Response to intra-arterial treatment of HCC | [86,87]
Liver | HCC | US | CNN, SVM | Response to transarterial chemoembolization for patients with HCC | [119]
Biliary system | Liver metastases, HCC, cholangiocarcinoma | CT | CNN | Prediction of hepatobiliary toxicity after liver SBRT | [92]
Pancreas | Postoperative pancreatic fistula | CT | REPTree | Prediction of postoperative pancreatic fistulas after pancreatoduodenectomy | [93]

The idea of using ML to predict the prognosis of patients with HCC emerged decades ago. As early as 1995, the progression of hepatectomized patients with HCC was analyzed using an ANN[85]. Liver volume, measured in CT studies, was used as one of the input parameters. Fifty-four example cases were used to train an ANN composed of three layers, and the model was successfully used to predict the prognosis of 11 patients. Nevertheless, the model was not tested with enough cases to determine its usefulness in actual clinical activity. The rise of AI, however, has prompted many more works in the last few years. The response to intra-arterial treatment of HCC prior to intervention has been predicted using ML[86,87]. Specifically, logistic regression (LR) and RF models were trained with 35 patients using features extracted from clinical data and the segmentations of the liver and liver lesions in a contrast-enhanced 3D fat-suppressed spoiled gradient-echo T1-weighted sequence in the arterial phase. Both trained models predicted treatment response with an overall accuracy of 78% (62.5% sensitivity, 82.1% specificity). Other authors tried to predict the early recurrence of HCC employing a CNN model based on the combination of CT images and clinical data[88]. They used 10-fold cross-validation with data from 167 patients and reported an AUC of 0.825. A ResNet CNN model was also trained for preoperative response prediction in patients with intermediate-stage HCC undergoing transarterial chemoembolization[89]. The model used the segmented ROI of the tumor area in a CT study as input. The training cohort included 162 patients, and the two validation cohorts included 89 and 138 patients, respectively. The authors reported accuracies of 85.1% and 82.8% in the two evaluation datasets.
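The AUC values reported by these studies can be computed without tracing an explicit ROC curve, because the AUC equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one (the Mann-Whitney statistic). A minimal illustrative sketch:

```python
def auc_score(labels, scores):
    """AUC as the Mann-Whitney statistic: fraction of positive/negative pairs
    where the positive case is scored higher, counting ties as 1/2."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This pairwise view also explains why AUC is insensitive to class imbalance and to any monotonic rescaling of the model's output scores.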

Radiomics has also been applied to predict the treatment response of HCC to different therapies based on studies of several imaging modalities. The early recurrence of HCC after curative treatment was evaluated using an LR model based on radiomics features[90], which were extracted from manually delineated peritumoral areas in CT images. The authors used 109 patients for training and 47 patients for validation, reporting an AUC of 0.79 with the validation dataset. Guo et al[91] also predicted the recurrence of HCC after liver transplantation. For that purpose, the authors extracted radiomic features from ROIs delineated around the lesion in arterial-phase CT images. They then combined clinical risk factors and radiomic features to build a multivariable Cox regression model. Using a training dataset of 93 patients and a validation dataset of 40 patients, they reported a C-index of 0.789 in the validation dataset.
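The C-index reported by Guo et al[91] is Harrell's concordance index, which generalizes the AUC to time-to-event data: among comparable patient pairs, it measures how often the patient assigned the higher predicted risk actually experiences the event first. A minimal sketch for right-censored data (our own illustrative helper, not the authors' code):

```python
def c_index(times, events, risks):
    """Harrell's concordance index.

    times  : observed follow-up times
    events : 1 if the event (e.g., recurrence) was observed, 0 if censored
    risks  : model-predicted risk scores (higher = worse prognosis)
    """
    conc, total = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is comparable only if i's event was observed
            # strictly before j's follow-up time
            if events[i] == 1 and times[i] < times[j]:
                total += 1
                if risks[i] > risks[j]:
                    conc += 1.0
                elif risks[i] == risks[j]:
                    conc += 0.5
    return conc / total
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfectly ordered risks, which is why values around 0.79, as in the validation cohort above, indicate useful but imperfect discrimination.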

ML models have also been used to predict hepatobiliary toxicity after liver SBRT[92]. The authors built a CNN model that had been pretrained using CT images of human organs. Then, using transfer learning, the model was trained on liver SBRT cases. They used 125 patients for training and validation with a 20-fold cross-validation approach, reporting an AUC of 0.79.

Regarding the pancreas, postoperative pancreatic fistulas have been predicted using ML-based texture analysis[93], which was performed to extract features from ROIs segmented in non-contrast CT images. After dimension reduction, several ML classifiers were built using Auto-WEKA 2.0, with the best results obtained by a REPTree classifier. The authors used 10-fold cross-validation with data from 110 patients and reported an AUC of 0.95, a sensitivity of 96%, and a specificity of 98%.

DISCUSSION

In recent years, a large number of AI-based solutions have been developed with the aim of easing and streamlining the radiologist's workflow. Many of these tools are focused on imaging of the liver, biliary system, and pancreas, and they range from improving image quality to predicting the patient's prognosis after treatment. The literature shows that many AI-based solutions targeting liver and pancreas imaging allow for improved disease detection and characterization, lower inter-reader variability, and increased diagnostic efficiency. A key factor for their success in the clinical setting is seamless integration into the radiologist's workflow, requiring minimal additional effort while adding significant value to the radiologist's work. In this sense, fluid collaboration between the radiologists, technicians, and bioengineers in charge of these tools is crucial.

Image analysis and processing are transversal parts of most AI methods described in this review, so improving their performance is a key task. Unfortunately, some image processing techniques, such as registration, are still time-consuming, making the incorporation of some of these procedures into clinical practice unfeasible. New methods are arising to minimize this impact[94], especially in critical applications such as image-guided surgery (IGS). Semi-automatic or even fully automatic segmentation is another important step that some AI tools may incorporate for diagnosis or prognosis purposes[95]. It is therefore of paramount importance for these algorithms to achieve a high level of performance.

The literature reports many applications of AI to aid in the detection and characterization of pancreatic and liver focal lesions using a variety of imaging modalities as input, either single (e.g., T1-weighted MRI) or in combination with other techniques and data (e.g., T2-weighted and DCE-MRI plus risk factors). In chronic liver disease, radiomics-based tools have been developed to assist in the diagnosis and grading of hepatic fibrosis, among others. These models have been built using different imaging modalities, such as MRI or US.

With regard to the prognosis of liver, biliary, or pancreatic diseases, relatively few tools based on radiological information have been developed. Most of them focus on the prognosis of HCC based on information extracted from CT[96]. In this field of research, the literature shows a clear trend toward integrating genetic information[97-101]. There are also studies that try to include variables extracted from clinical data and laboratory values[102,103]. In a scenario that is advancing toward integrated diagnosis, increasing volumes of data of different natures are becoming available, which should allow for the generation of more accurate predictive models of clinical prognosis using information from many sources.

For the AI-based tools developed to be used in daily clinical practice, they must obtain regulatory clearance, such as Food and Drug Administration (FDA) approval in the United States or CE marking in Europe. Despite the explosive production of such tools in recent years, to date only a small fraction of them have obtained this approval. One of the main problems is the lack of appropriately annotated data: without large datasets of properly labeled studies, the performance of data-hungry algorithms like CNNs will not be sufficient for massive deployment in clinical environments. Furthermore, algorithms demand diverse (e.g., multi-center and multi-vendor) data to avoid selection biases that would challenge their implementation in a real-world environment[104]. Another limitation of most AI-based tools found today is that they are aimed at a very specific application (narrow AI) within a single imaging modality, rather than being valid for a wide range of tasks in the radiologist's daily practice.

Yet, the general attitude of radiology staff toward AI is positive. In a recent survey, 83% of European radiographers declared excitement about AI, although only 8% had received training on the matter in their qualification studies[105].

In another survey, European radiologists regarded the outcomes of AI algorithms for diagnostic purposes as generally reliable (75.7%), and algorithms for workload prioritization as very helpful (23.4%) or moderately helpful (62.2%) to reduce the workload of the medical staff[106].

The sentiment of gastroenterologists toward AI is also generally favorable, with a wide majority of United Kingdom[107] and European[108] specialists perceiving it as beneficial to key aspects of their clinical practice. According to these studies, their main concerns relate to algorithm bias, the lack of guidelines, and a potential increase in procedural times and operator dependence.

CONCLUSION

The rapid advance of AI is already transforming the gastrointestinal field with the development of applications aimed at assisting and streamlining image diagnosis. Traditional diagnostic imaging techniques such as US, EUS, CT, MRI, and PET/CT are already benefitting from a variety of AI algorithms that can perform automatic or semi-automatic segmentation and registration of the liver and pancreas and their lesions, aid the diagnosis and characterization of pancreatic and liver focal lesions and diffuse diseases, improve image quality, accelerate image acquisition, and anticipate treatment response and patient prognosis. Moreover, with the use of radiomics, AI can add quantitative information previously undetected by radiologists to radiological reports. The massive adoption of AI in the radiology of pancreatic and liver diseases is still incipient, but irreversible, and the sector is clearly moving in this direction. Advances in the field, such as the availability of regulatory-cleared, robust algorithms trained and validated across multiple centers, increased awareness of AI among medical staff, and access to products that seamlessly integrate with their workflow, should pave the way for the rapid adoption of AI in clinical practice, impacting the outcomes of hepatic and pancreatic patients for the better.

Footnotes

Provenance and peer review: Invited article; Externally peer reviewed.

Peer-review model: Single blind

Specialty type: Gastroenterology and hepatology

Country/Territory of origin: Spain

Peer-review report’s scientific quality classification

Grade A (Excellent): 0

Grade B (Very good): B, B

Grade C (Good): C

Grade D (Fair): 0

Grade E (Poor): 0

P-Reviewer: Alves N, Portugal; Ma C, China; Xiao B, China S-Editor: Chen YL L-Editor: A P-Editor: Chen YL

References
1.  Sung H, Ferlay J, Siegel RL, Laversanne M, Soerjomataram I, Jemal A, Bray F. Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries. CA Cancer J Clin. 2021;71:209-249.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 50630]  [Cited by in F6Publishing: 53385]  [Article Influence: 17795.0]  [Reference Citation Analysis (123)]
2.  American Cancer Society  Liver Cancer Early Detection, Diagnosis, and Staging. [cited 3 January 2023]. Available from: https://www.cancer.org/cancer/Liver-cancer/detection-diagnosis-staging/survival-rates.html.  [PubMed]  [DOI]  [Cited in This Article: ]
3.  American Cancer Society  Survival Rates for Pancreatic Cancer. [cited 3 January 2023]. Available from: https://www.cancer.org/cancer/pancreatic-cancer/detection-diagnosis-staging/survival-rates.html.  [PubMed]  [DOI]  [Cited in This Article: ]
4.  Hirode G, Saab S, Wong RJ. Trends in the Burden of Chronic Liver Disease Among Hospitalized US Adults. JAMA Netw Open. 2020;3:e201997.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 107]  [Cited by in F6Publishing: 147]  [Article Influence: 36.8]  [Reference Citation Analysis (0)]
5.  Berbís MA, Aneiros-Fernández J, Mendoza Olivares FJ, Nava E, Luna A. Role of artificial intelligence in multidisciplinary imaging diagnosis of gastrointestinal diseases. World J Gastroenterol. 2021;27:4395-4412.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in CrossRef: 8]  [Cited by in F6Publishing: 5]  [Article Influence: 1.7]  [Reference Citation Analysis (71)]
6.  Chang KJ, Kamel IR, Macura KJ, Bluemke DA. 3.0-T MR imaging of the abdomen: comparison with 1.5 T. Radiographics. 2008;28:1983-1998.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 158]  [Cited by in F6Publishing: 162]  [Article Influence: 10.8]  [Reference Citation Analysis (0)]
7.  Jiang F, Jiang Y, Zhi H, Dong Y, Li H, Ma S, Wang Y, Dong Q, Shen H. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017;2:230-243.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 1189]  [Cited by in F6Publishing: 1142]  [Article Influence: 163.1]  [Reference Citation Analysis (0)]
8.  Samuel AL. Some Studies in Machine Learning Using the Game of Checkers. IBM J Res Dev. 1959;3:210-229.  [PubMed]  [DOI]  [Cited in This Article: ]
9.  Galbusera F, Casaroli G, Bassani T. Artificial intelligence and machine learning in spine research. JOR Spine. 2019;2:e1044.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 160]  [Cited by in F6Publishing: 123]  [Article Influence: 24.6]  [Reference Citation Analysis (0)]
10.  Alloghani M, Al-Jumeily D, Mustafina J, Hussain A, Aljaaf AJ.   A Systematic Review on Supervised and Unsupervised Machine Learning Algorithms for Data Science. In: Berry M, Mohamed A, Yap B. Supervised and Unsupervised Learning for Data Science. Unsupervised and Semi-Supervised Learning. Cham: Springer, 2019: 3-21.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 92]  [Cited by in F6Publishing: 88]  [Article Influence: 22.0]  [Reference Citation Analysis (0)]
11.  Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y. Generative adversarial networks. Commun ACM. 2014;63:139-144.  [PubMed]  [DOI]  [Cited in This Article: ]
12.  Frid-Adar M, Diamant I, Klang E, Amitai M, Goldberger J, Greenspan H. GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification. Neurocomputing. 2018;321:321-331.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 703]  [Cited by in F6Publishing: 309]  [Article Influence: 51.5]  [Reference Citation Analysis (0)]
13.  Ronneberger O, Fischer P, Brox T.   U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Navab N, Hornegger J, Wells W, Frangi A. Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015. MICCAI 2015. Lecture Notes in Computer Science. Cham: Springer, 2015; 1-8.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 13000]  [Cited by in F6Publishing: 13658]  [Article Influence: 1517.6]  [Reference Citation Analysis (0)]
14.  LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436-444.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 36149]  [Cited by in F6Publishing: 18330]  [Article Influence: 2036.7]  [Reference Citation Analysis (0)]
15.  Isensee F, Jaeger PF, Kohl SAA, Petersen J, Maier-Hein KH. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods. 2021;18:203-211.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 672]  [Cited by in F6Publishing: 1819]  [Article Influence: 454.8]  [Reference Citation Analysis (0)]
16.  Li Y, Beck M, Päßler T, Lili C, Hua W, Mai HD, Amthauer H, Biebl M, Thuss-Patience PC, Berger J, Stromberger C, Tinhofer I, Kruppa J, Budach V, Hofheinz F, Lin Q, Zschaeck S. A FDG-PET radiomics signature detects esophageal squamous cell carcinoma patients who do not benefit from chemoradiation. Sci Rep. 2020;10:17671.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 10]  [Cited by in F6Publishing: 10]  [Article Influence: 2.5]  [Reference Citation Analysis (0)]
17.  Varghese BA, Cen SY, Hwang DH, Duddalwar VA. Texture Analysis of Imaging: What Radiologists Need to Know. AJR Am J Roentgenol. 2019;212:520-528.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 104]  [Cited by in F6Publishing: 142]  [Article Influence: 28.4]  [Reference Citation Analysis (0)]
18.  Fleischmann D, Boas FE. Computed tomography--old ideas and new technology. Eur Radiol. 2011;21:510-517.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 203]  [Cited by in F6Publishing: 197]  [Article Influence: 15.2]  [Reference Citation Analysis (0)]
19.  den Harder AM, Willemink MJ, de Ruiter QM, Schilham AM, Krestin GP, Leiner T, de Jong PA, Budde RP. Achievable dose reduction using iterative reconstruction for chest computed tomography: A systematic review. Eur J Radiol. 2015;84:2307-2313.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 46]  [Cited by in F6Publishing: 49]  [Article Influence: 5.4]  [Reference Citation Analysis (0)]
20.  Kyong Hwan Jin, McCann MT, Froustey E, Unser M. Deep Convolutional Neural Network for Inverse Problems in Imaging. IEEE Trans Image Process. 2017;26:4509-4522.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 1201]  [Cited by in F6Publishing: 645]  [Article Influence: 92.1]  [Reference Citation Analysis (0)]
21.  Akagi M, Nakamura Y, Higaki T, Narita K, Honda Y, Zhou J, Yu Z, Akino N, Awai K. Deep learning reconstruction improves image quality of abdominal ultra-high-resolution CT. Eur Radiol. 2019;29:6163-6171.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 149]  [Cited by in F6Publishing: 218]  [Article Influence: 43.6]  [Reference Citation Analysis (0)]
22.  Kaga T, Noda Y, Mori T, Kawai N, Takano H, Kajita K, Yoneyama M, Akamine Y, Kato H, Hyodo F, Matsuo M. Diffusion-weighted imaging of the abdomen using echo planar imaging with compressed SENSE: Feasibility, image quality, and ADC value evaluation. Eur J Radiol. 2021;142:109889.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 5]  [Cited by in F6Publishing: 16]  [Article Influence: 5.3]  [Reference Citation Analysis (0)]
23.  López-Úbeda P, Díaz-Galiano MC, Martín-Noguerol T, Luna A, Ureña-López LA, Martín-Valdivia MT. Automatic medical protocol classification using machine learning approaches. Comput Methods Programs Biomed. 2021;200:105939.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 10]  [Cited by in F6Publishing: 3]  [Article Influence: 1.0]  [Reference Citation Analysis (0)]
24.  Professional Foundation  The Next MR Wave|Philips Healthcare. [cited 1 January 2023]. Available from: https://www.philips-foundation.com/healthcare/resources/Landing/the-next-mr-wave#triggername=close_vitaleye.  [PubMed]  [DOI]  [Cited in This Article: ]
25.  Siemens Healthineers España. BioMatrix Technology. [cited 1 January 2023]. Available from: https://www.siemens-healthineers.com/es/magnetic-resonance-imaging/technologies-and-innovations/biomatrix-technology.
26.  Seo H, Badiei Khuzani M, Vasudevan V, Huang C, Ren H, Xiao R, Jia X, Xing L. Machine learning techniques for biomedical image segmentation: An overview of technical aspects and introduction to state-of-art applications. Med Phys. 2020;47:e148-e167.
27.  Song J, Patel M, Girgensohn A, Kim C. Combining Deep Learning with Geometric Features for Image based Localization in the Gastrointestinal Tract. Expert Syst Appl. 2021;185:115631.
28.  Fu Y, Lei Y, Wang T, Curran WJ, Liu T, Yang X. Deep learning in medical image registration: a review. Phys Med Biol. 2020;65:20TR01.
29.  Bagheri MH, Roth H, Kovacs W, Yao J, Farhadi F, Li X, Summers RM. Technical and Clinical Factors Affecting Success Rate of a Deep Learning Method for Pancreas Segmentation on CT. Acad Radiol. 2020;27:689-695.
30.  Krasoń A, Woloshuk A, Spinczyk D. Segmentation of abdominal organs in computed tomography using a generalized statistical shape model. Comput Med Imaging Graph. 2019;78:101672.
31.  Antwi K, Wiesner P, Merkle EM, Zech CJ, Boll DT, Wild D, Christ E, Heye T. Investigating difficult to detect pancreatic lesions: Characterization of benign pancreatic islet cell tumors using multiparametric pancreatic 3-T MRI. PLoS One. 2021;16:e0253078.
32.  Kumar H, DeSouza SV, Petrov MS. Automated pancreas segmentation from computed tomography and magnetic resonance images: A systematic review. Comput Methods Programs Biomed. 2019;178:319-328.
33.  Zheng H, Chen Y, Yue X, Ma C, Liu X, Yang P, Lu J. Deep pancreas segmentation with uncertain regions of shadowed sets. Magn Reson Imaging. 2020;68:45-52.
34.  Zheng H, Qian L, Qin Y, Gu Y, Yang J. Improving the slice interaction of 2.5D CNN for automatic pancreas segmentation. Med Phys. 2020;47:5543-5554.
35.  Cardobi N, Dal Palù A, Pedrini F, Beleù A, Nocini R, De Robertis R, Ruzzenente A, Salvia R, Montemezzi S, D'Onofrio M. An Overview of Artificial Intelligence Applications in Liver and Pancreatic Imaging. Cancers (Basel). 2021;13.
36.  Li Y, Zhao YQ, Zhang F, Liao M, Yu LL, Chen BF, Wang YJ. Liver segmentation from abdominal CT volumes based on level set and sparse shape composition. Comput Methods Programs Biomed. 2020;195:105533.
37.  Winther H, Hundt C, Ringe KI, Wacker FK, Schmidt B, Jürgens J, Haimerl M, Beyer LP, Stroszczynski C, Wiggermann P, Verloh N. A 3D Deep Neural Network for Liver Volumetry in 3T Contrast-Enhanced MRI. Rofo. 2021;193:305-314.
38.  Mohagheghi S, Foruzan AH. Incorporating prior shape knowledge via data-driven loss model to improve 3D liver segmentation in deep CNNs. Int J Comput Assist Radiol Surg. 2020;15:249-257.
39.  Nakayama K, Saito A, Biggs E, Linguraru MG, Shimizu A. Liver segmentation from low-radiation-dose pediatric computed tomography using patient-specific, statistical modeling. Int J Comput Assist Radiol Surg. 2019;14:2057-2068.
40.  Bobo MF, Bao S, Huo Y, Yao Y, Virostko J, Plassard AJ, Lyu I, Assad A, Abramson RG, Hilmes MA, Landman BA. Fully Convolutional Neural Networks Improve Abdominal Organ Segmentation. Proc SPIE Int Soc Opt Eng. 2018;10574.
41.  Saunders SL, Clark JM, Rudser K, Chauhan A, Ryder JR, Bolan PJ. Comparison of automatic liver volumetry performance using different types of magnetic resonance images. Magn Reson Imaging. 2022;91:16-23.
42.  Meng L, Tian Y, Bu S. Liver tumor segmentation based on 3D convolutional neural network with dual scale. J Appl Clin Med Phys. 2020;21:144-157.
43.  Chen L, Song H, Wang C, Cui Y, Yang J, Hu X, Zhang L. Liver tumor segmentation in CT volumes using an adversarial densely connected network. BMC Bioinformatics. 2019;20:587.
44.  Almotairi S, Kareem G, Aouf M, Almutairi B, Salem MA. Liver Tumor Segmentation in CT Scans Using Modified SegNet. Sensors (Basel). 2020;20.
45.  Chlebus G, Meine H, Thoduka S, Abolmaali N, van Ginneken B, Hahn HK, Schenk A. Reducing inter-observer variability and interaction time of MR liver volumetry by combining automatic CNN-based liver segmentation and manual corrections. PLoS One. 2019;14:e0217228.
46.  Wei D, Ahmad S, Huo J, Huang P, Yap PT, Xue Z, Sun J, Li W, Shen D, Wang Q. SLIR: Synthesis, localization, inpainting, and registration for image-guided thermal ablation of liver tumors. Med Image Anal. 2020;65:101763.
47.  Schneider C, Thompson S, Totz J, Song Y, Allam M, Sodergren MH, Desjardins AE, Barratt D, Ourselin S, Gurusamy K, Stoyanov D, Clarkson MJ, Hawkes DJ, Davidson BR. Comparison of manual and semi-automatic registration in augmented reality image-guided liver surgery: a clinical feasibility study. Surg Endosc. 2020;34:4702-4711.
48.  Kuznetsova S, Grendarova P, Roy S, Sinha R, Thind K, Ploquin N. Structure guided deformable image registration for treatment planning CT and post stereotactic body radiation therapy (SBRT) Primovist(®) (Gd-EOB-DTPA) enhanced MRI. J Appl Clin Med Phys. 2019;20:109-118.
49.  Weick S, Breuer K, Richter A, Exner F, Ströhle SP, Lutyj P, Tamihardja J, Veldhoen S, Flentje M, Polat B. Non-rigid image registration of 4D-MRI data for improved delineation of moving tumors. BMC Med Imaging. 2020;20:41.
50.  Choi KJ, Jang JK, Lee SS, Sung YS, Shim WH, Kim HS, Yun J, Choi JY, Lee Y, Kang BK, Kim JH, Kim SY, Yu ES. Development and Validation of a Deep Learning System for Staging Liver Fibrosis by Using Contrast Agent-enhanced CT Images in the Liver. Radiology. 2018;289:688-697.
51.  Nayak A, Baidya Kayal E, Arya M, Culli J, Krishan S, Agarwal S, Mehndiratta A. Computer-aided diagnosis of cirrhosis and hepatocellular carcinoma using multi-phase abdomen CT. Int J Comput Assist Radiol Surg. 2019;14:1341-1352.
52.  Matake K, Yoshimitsu K, Kumazawa S, Higashida Y, Irie H, Asayama Y, Nakayama T, Kakihara D, Katsuragawa S, Doi K, Honda H. Usefulness of artificial neural network for differential diagnosis of hepatic masses on CT images. Acad Radiol. 2006;13:951-962.
53.  Yasaka K, Akai H, Abe O, Kiryu S. Deep Learning with Convolutional Neural Network for Differentiation of Liver Masses at Dynamic Contrast-enhanced CT: A Preliminary Study. Radiology. 2018;286:887-896.
54.  Khan AA, Narejo GB. Analysis of Abdominal Computed Tomography Images for Automatic Liver Cancer Diagnosis Using Image Processing Algorithm. Curr Med Imaging Rev. 2019;15:972-982.
55.  Zhou W, Wang G, Xie G, Zhang L. Grading of hepatocellular carcinoma based on diffusion weighted images with multiple b-values using convolutional neural networks. Med Phys. 2019;46:3951-3960.
56.  Hamm CA, Wang CJ, Savic LJ, Ferrante M, Schobert I, Schlachter T, Lin M, Duncan JS, Weinreb JC, Chapiro J, Letzen B. Deep learning for liver tumor diagnosis part I: development of a convolutional neural network classifier for multi-phasic MRI. Eur Radiol. 2019;29:3338-3347.
57.  Wang CJ, Hamm CA, Savic LJ, Ferrante M, Schobert I, Schlachter T, Lin M, Weinreb JC, Duncan JS, Chapiro J, Letzen B. Deep learning for liver tumor diagnosis part II: convolutional neural network interpretation using radiologic imaging features. Eur Radiol. 2019;29:3348-3357.
58.  Jansen MJA, Kuijf HJ, Veldhuis WB, Wessels FJ, Viergever MA, Pluim JPW. Automatic classification of focal liver lesions based on MRI and risk factors. PLoS One. 2019;14:e0217053.
59.  Logeswaran R. Cholangiocarcinoma--an automated preliminary detection system using MLP. J Med Syst. 2009;33:413-421.
60.  Logeswaran R. Improved biliary detection and diagnosis through intelligent machine analysis. Comput Methods Programs Biomed. 2012;107:404-412.
61.  Chundru S, Kalb B, Arif-Tiwari H, Sharma P, Costello J, Martin DR. MRI of diffuse liver disease: characteristics of acute and chronic diseases. Diagn Interv Radiol. 2014;20:200-208.
62.  Decharatanachart P, Chaiteerakij R, Tiyarattanachai T, Treeprasertsuk S. Application of artificial intelligence in chronic liver diseases: a systematic review and meta-analysis. BMC Gastroenterol. 2021;21:10.
63.  Kato H, Kanematsu M, Zhang X, Saio M, Kondo H, Goshima S, Fujita H. Computer-aided diagnosis of hepatic fibrosis: preliminary evaluation of MRI texture analysis using the finite difference method and an artificial neural network. AJR Am J Roentgenol. 2007;189:117-122.
64.  Hectors SJ, Kennedy P, Huang KH, Stocker D, Carbonell G, Greenspan H, Friedman S, Taouli B. Fully automated prediction of liver fibrosis using deep learning analysis of gadoxetic acid-enhanced MRI. Eur Radiol. 2021;31:3805-3814.
65.  Acharya UR, Koh JEW, Hagiwara Y, Tan JH, Gertych A, Vijayananthan A, Yaakup NA, Abdullah BJJ, Bin Mohd Fabell MK, Yeong CH. Automated diagnosis of focal liver lesions using bidirectional empirical mode decomposition features. Comput Biol Med. 2018;94:11-18.
66.  Yao Z, Dong Y, Wu G, Zhang Q, Yang D, Yu JH, Wang WP. Preoperative diagnosis and prediction of hepatocellular carcinoma: Radiomics analysis based on multi-modal ultrasound images. BMC Cancer. 2018;18:1089.
67.  Schmauch B, Herent P, Jehanno P, Dehaene O, Saillard C, Aubé C, Luciani A, Lassau N, Jégou S. Diagnosis of focal liver lesions from ultrasound using deep learning. Diagn Interv Imaging. 2019;100:227-233.
68.  Li G, Luo Y, Deng W, Xu X, Liu A, Song E. Computer aided diagnosis of fatty liver ultrasonic images based on support vector machine. Annu Int Conf IEEE Eng Med Biol Soc. 2008;2008:4768-4771.
69.  Wang K, Lu X, Zhou H, Gao Y, Zheng J, Tong M, Wu C, Liu C, Huang L, Jiang T, Meng F, Lu Y, Ai H, Xie XY, Yin LP, Liang P, Tian J, Zheng R. Deep learning Radiomics of shear wave elastography significantly improved diagnostic performance for assessing liver fibrosis in chronic hepatitis B: a prospective multicentre study. Gut. 2019;68:729-741.
70.  Chen PT, Wu T, Wang P, Chang D, Liu KL, Wu MS, Roth HR, Lee PC, Liao WC, Wang W. Pancreatic Cancer Detection on CT Scans with Deep Learning: A Nationwide Population-based Study. Radiology. 2023;306:172-182.
71.  Alves N, Schuurmans M, Litjens G, Bosma JS, Hermans J, Huisman H. Fully Automatic Deep Learning Framework for Pancreatic Ductal Adenocarcinoma Detection on Computed Tomography. Cancers (Basel). 2022;14.
72.  Wei R, Lin K, Yan W, Guo Y, Wang Y, Li J, Zhu J. Computer-Aided Diagnosis of Pancreas Serous Cystic Neoplasms: A Radiomics Method on Preoperative MDCT Images. Technol Cancer Res Treat. 2019;18:1533033818824339.
73.  Li H, Shi K, Reichert M, Lin K, Tselousov N, Braren R, Fu D, Schmid R, Li J, Menze B. Differential Diagnosis for Pancreatic Cysts in CT Scans Using Densely-Connected Convolutional Networks. Annu Int Conf IEEE Eng Med Biol Soc. 2019;2019:2095-2098.
74.  Park HJ, Shin K, You MW, Kyung SG, Kim SY, Park SH, Byun JH, Kim N, Kim HJ. Deep Learning-based Detection of Solid and Cystic Pancreatic Neoplasms at Contrast-enhanced CT. Radiology. 2023;306:140-149.
75.  Qiao Z, Ge J, He W, Xu X, He J. Artificial Intelligence Algorithm-Based Computerized Tomography Image Features Combined with Serum Tumor Markers for Diagnosis of Pancreatic Cancer. Comput Math Methods Med. 2022;2022:8979404.
76.  Li S, Jiang H, Wang Z, Zhang G, Yao YD. An effective computer aided diagnosis model for pancreas cancer on PET/CT images. Comput Methods Programs Biomed. 2018;165:205-214.
77.  Dalal V, Carmicheal J, Dhaliwal A, Jain M, Kaur S, Batra SK. Radiomics in stratification of pancreatic cystic lesions: Machine learning in action. Cancer Lett. 2020;469:228-237.
78.  Corral JE, Hussein S, Kandel P, Bolan CW, Bagci U, Wallace MB. Deep Learning to Classify Intraductal Papillary Mucinous Neoplasms Using Magnetic Resonance Imaging. Pancreas. 2019;48:805-810.
79.  Kaissis G, Ziegelmayer S, Lohöfer F, Algül H, Eiber M, Weichert W, Schmid R, Friess H, Rummeny E, Ankerst D, Siveke J, Braren R. A machine learning model for the prediction of survival and tumor subtype in pancreatic ductal adenocarcinoma from preoperative diffusion-weighted imaging. Eur Radiol Exp. 2019;3:41.
80.  Gao X, Wang X. Deep learning for World Health Organization grades of pancreatic neuroendocrine tumors on contrast-enhanced magnetic resonance images: a preliminary study. Int J Comput Assist Radiol Surg. 2019;14:1981-1991.
81.  Săftoiu A, Vilmann P, Gorunescu F, Janssen J, Hocke M, Larsen M, Iglesias-Garcia J, Arcidiacono P, Will U, Giovannini M, Dietrich CF, Havre R, Gheorghe C, McKay C, Gheonea DI, Ciurea T; European EUS Elastography Multicentric Study Group. Efficacy of an artificial neural network-based approach to endoscopic ultrasound elastography in diagnosis of focal pancreatic masses. Clin Gastroenterol Hepatol. 2012;10:84-90.e1.
82.  Kuwahara T, Hara K, Mizuno N, Okuno N, Matsumoto S, Obata M, Kurita Y, Koda H, Toriyama K, Onishi S, Ishihara M, Tanaka T, Tajika M, Niwa Y. Usefulness of Deep Learning Analysis for the Diagnosis of Malignancy in Intraductal Papillary Mucinous Neoplasms of the Pancreas. Clin Transl Gastroenterol. 2019;10:1-8.
83.  Marya NB, Powers PD, Chari ST, Gleeson FC, Leggett CL, Abu Dayyeh BK, Chandrasekhara V, Iyer PG, Majumder S, Pearson RK, Petersen BT, Rajan E, Sawas T, Storm AC, Vege SS, Chen S, Long Z, Hough DM, Mara K, Levy MJ. Utilisation of artificial intelligence for the development of an EUS-convolutional neural network model trained to enhance the diagnosis of autoimmune pancreatitis. Gut. 2021;70:1335-1344.
84.  Tonozuka R, Mukai S, Itoi T. The Role of Artificial Intelligence in Endoscopic Ultrasound for Pancreatic Disorders. Diagnostics (Basel). 2020;11.
85.  Hamamoto I, Okada S, Hashimoto T, Wakabayashi H, Maeba T, Maeta H. Prediction of the early prognosis of the hepatectomized patient with hepatocellular carcinoma with a neural network. Comput Biol Med. 1995;25:49-59.
86.  Abajian A, Murali N, Savic LJ, Laage-Gaupp FM, Nezami N, Duncan JS, Schlachter T, Lin M, Geschwind JF, Chapiro J. Predicting Treatment Response to Image-Guided Therapies Using Machine Learning: An Example for Trans-Arterial Treatment of Hepatocellular Carcinoma. J Vis Exp. 2018.
87.  Abajian A, Murali N, Savic LJ, Laage-Gaupp FM, Nezami N, Duncan JS, Schlachter T, Lin M, Geschwind JF, Chapiro J. Predicting Treatment Response to Intra-arterial Therapies for Hepatocellular Carcinoma with the Use of Supervised Machine Learning-An Artificial Intelligence Concept. J Vasc Interv Radiol. 2018;29:850-857.e1.
88.  Wang W, Chen Q, Iwamoto Y, Han X, Zhang Q, Hu H, Lin L, Chen YW. Deep Learning-Based Radiomics Models for Early Recurrence Prediction of Hepatocellular Carcinoma with Multi-phase CT Images and Clinical Data. Annu Int Conf IEEE Eng Med Biol Soc. 2019;2019:4881-4884.
89.  Peng J, Kang S, Ning Z, Deng H, Shen J, Xu Y, Zhang J, Zhao W, Li X, Gong W, Huang J, Liu L. Residual convolutional neural network for predicting response of transarterial chemoembolization in hepatocellular carcinoma from CT imaging. Eur Radiol. 2020;30:413-424.
90.  Shan QY, Hu HT, Feng ST, Peng ZP, Chen SL, Zhou Q, Li X, Xie XY, Lu MD, Wang W, Kuang M. CT-based peritumoral radiomics signatures to predict early recurrence in hepatocellular carcinoma after curative tumor resection or ablation. Cancer Imaging. 2019;19:11.
91.  Guo D, Gu D, Wang H, Wei J, Wang Z, Hao X, Ji Q, Cao S, Song Z, Jiang J, Shen Z, Tian J, Zheng H. Radiomics analysis enables recurrence prediction for hepatocellular carcinoma after liver transplantation. Eur J Radiol. 2019;117:33-40.
92.  Ibragimov B, Toesca D, Chang D, Yuan Y, Koong A, Xing L. Development of deep neural network for individualized hepatobiliary toxicity prediction after liver SBRT. Med Phys. 2018;45:4763-4774.
93.  Kambakamba P, Mannil M, Herrera PE, Müller PC, Kuemmerli C, Linecker M, von Spiczak J, Hüllner MW, Raptis DA, Petrowsky H, Clavien PA, Alkadhi H. The potential of machine learning to predict postoperative pancreatic fistula based on preoperative, non-contrast-enhanced CT: A proof-of-principle study. Surgery. 2020;167:448-454.
94.  Tahmasebi N, Boulanger P, Yun J, Fallone G, Noga M, Punithakumar K. Real-Time Lung Tumor Tracking Using a CUDA Enabled Nonrigid Registration Algorithm for MRI. IEEE J Transl Eng Health Med. 2020;8:4300308.
95.  Ahn Y, Yoon JS, Lee SS, Suk HI, Son JH, Sung YS, Lee Y, Kang BK, Kim HS. Deep Learning Algorithm for Automated Segmentation and Volume Measurement of the Liver and Spleen Using Portal Venous Phase Computed Tomography Images. Korean J Radiol. 2020;21:987-997.
96.  Hu W, Yang H, Xu H, Mao Y. Radiomics based on artificial intelligence in liver diseases: where we are? Gastroenterol Rep (Oxf). 2020;8:90-97.
97.  Shen S, Kong J, Qiu Y, Yang X, Wang W, Yan L. Identification of core genes and outcomes in hepatocellular carcinoma by bioinformatics analysis. J Cell Biochem. 2019;120:10069-10081.
98.  Zhang L, Huang Y, Ling J, Zhuo W, Yu Z, Shao M, Luo Y, Zhu Y. Screening and function analysis of hub genes and pathways in hepatocellular carcinoma via bioinformatics approaches. Cancer Biomark. 2018;22:511-521.
99.  Bai Y, Long J, Liu Z, Lin J, Huang H, Wang D, Yang X, Miao F, Mao Y, Sang X, Zhao H. Comprehensive analysis of a ceRNA network reveals potential prognostic cytoplasmic lncRNAs involved in HCC progression. J Cell Physiol. 2019;234:18837-18848.
100.  Chaudhary K, Poirion OB, Lu L, Garmire LX. Deep Learning-Based Multi-Omics Integration Robustly Predicts Survival in Liver Cancer. Clin Cancer Res. 2018;24:1248-1259.
101.  Zhou Z, Li Y, Hao H, Wang Y, Zhou Z, Wang Z, Chu X. Screening Hub Genes as Prognostic Biomarkers of Hepatocellular Carcinoma by Bioinformatics Analysis. Cell Transplant. 2019;28:76S-86S.
102.  Cucchetti A, Vivarelli M, Heaton ND, Phillips S, Piscaglia F, Bolondi L, La Barba G, Foxton MR, Rela M, O'Grady J, Pinna AD. Artificial neural network is superior to MELD in predicting mortality of patients with end-stage liver disease. Gut. 2007;56:253-258.
103.  Cristoferi L, Nardi A, Ronca V, Invernizzi P, Mells G, Carbone M. Prognostic models in primary biliary cholangitis. J Autoimmun. 2018;95:171-178.
104.  Sun W, Nasraoui O, Shafto P. Evolution and impact of bias in human and machine learning algorithm interaction. PLoS One. 2020;15:e0235502.
105.  Coakley S, Young R, Moore N, England A, O'Mahony A, O'Connor OJ, Maher M, McEntee MF. Radiographers' knowledge, attitudes and expectations of artificial intelligence in medical imaging. Radiography (Lond). 2022;28:943-948.
106.  European Society of Radiology (ESR). Current practical experience with artificial intelligence in clinical radiology: a survey of the European Society of Radiology. Insights Imaging. 2022;13:107.
107.  Kader R, Baggaley RF, Hussein M, Ahmad OF, Patel N, Corbett G, Dolwani S, Stoyanov D, Lovat LB. Survey on the perceptions of UK gastroenterologists and endoscopists to artificial intelligence. Frontline Gastroenterol. 2022;13:423-429.
108.  Wadhwa V, Alagappan M, Gonzalez A, Gupta K, Brown JRG, Cohen J, Sawhney M, Pleskow D, Berzin TM. Physician sentiment toward artificial intelligence (AI) in colonoscopic practice: a survey of US gastroenterologists. Endosc Int Open. 2020;8:E1379-E1384.
109.  Zwanenburg A, Vallières M, Abdalah MA, Aerts HJWL, Andrearczyk V, Apte A, Ashrafinia S, Bakas S, Beukinga RJ, Boellaard R, Bogowicz M, Boldrini L, Buvat I, Cook GJR, Davatzikos C, Depeursinge A, Desseroit MC, Dinapoli N, Dinh CV, Echegaray S, El Naqa I, Fedorov AY, Gatta R, Gillies RJ, Goh V, Götz M, Guckenberger M, Ha SM, Hatt M, Isensee F, Lambin P, Leger S, Leijenaar RTH, Lenkowicz J, Lippert F, Losnegård A, Maier-Hein KH, Morin O, Müller H, Napel S, Nioche C, Orlhac F, Pati S, Pfaehler EAG, Rahmim A, Rao AUK, Scherer J, Siddique MM, Sijtsema NM, Socarras Fernandez J, Spezi E, Steenbakkers RJHM, Tanadini-Lang S, Thorwarth D, Troost EGC, Upadhaya T, Valentini V, van Dijk LV, van Griethuysen J, van Velden FHP, Whybra P, Richter C, Löck S. The Image Biomarker Standardization Initiative: Standardized Quantitative Radiomics for High-Throughput Image-based Phenotyping. Radiology. 2020;295:328-338.
110.  Cai J, Lu L, Zhang Z, Xing F, Yang L, Yin Q. Pancreas Segmentation in MRI using Graph-Based Decision Fusion on Convolutional Neural Networks. Med Image Comput Comput Assist Interv. 2016;9901:442-450.
111.  Li M, Lian F, Guo S. Pancreas segmentation based on an adversarial model under two-tier constraints. Phys Med Biol. 2020;65:225021.
112.  Boers TGW, Hu Y, Gibson E, Barratt DC, Bonmati E, Krdzalic J, van der Heijden F, Hermans JJ, Huisman HJ. Interactive 3D U-net for the segmentation of the pancreas in computed tomography scans. Phys Med Biol. 2020;65:065002.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 27]  [Cited by in F6Publishing: 17]  [Article Influence: 4.3]  [Reference Citation Analysis (0)]
113.  Tang X, Jafargholi Rangraz E, Coudyzer W, Bertels J, Robben D, Schramm G, Deckers W, Maleux G, Baete K, Verslype C, Gooding MJ, Deroose CM, Nuyts J. Whole liver segmentation based on deep learning and manual adjustment for clinical use in SIRT. Eur J Nucl Med Mol Imaging. 2020;47:2742-2752.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 16]  [Cited by in F6Publishing: 13]  [Article Influence: 3.3]  [Reference Citation Analysis (0)]
114.  Shiomi S, Kuroki T, Kuriyama M, Morikawa H, Masaki K, Ikeoka N, Tanaka T, Ikeda H, Ochi H. Diagnosis of chronic liver disease from liver scintiscans by artificial neural networks. Ann Nucl Med. 1997;11:75-80.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 4]  [Cited by in F6Publishing: 3]  [Article Influence: 0.1]  [Reference Citation Analysis (0)]
115.  Yasaka K, Akai H, Kunimatsu A, Abe O, Kiryu S. Deep learning for staging liver fibrosis on CT: a pilot study. Eur Radiol. 2018;28:4578-4585.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 54]  [Cited by in F6Publishing: 61]  [Article Influence: 10.2]  [Reference Citation Analysis (0)]
116.  Mokrane FZ, Lu L, Vavasseur A, Otal P, Peron JM, Luk L, Yang H, Ammari S, Saenger Y, Rousseau H, Zhao B, Schwartz LH, Dercle L. Radiomics machine-learning signature for diagnosis of hepatocellular carcinoma in cirrhotic patients with indeterminate liver nodules. Eur Radiol. 2020;30:558-570.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 80]  [Cited by in F6Publishing: 102]  [Article Influence: 25.5]  [Reference Citation Analysis (0)]
117.  Xu L, Yang P, Liang W, Liu W, Wang W, Luo C, Wang J, Peng Z, Xing L, Huang M, Zheng S, Niu T. A radiomics approach based on support vector machine using MR images for preoperative lymph node status evaluation in intrahepatic cholangiocarcinoma. Theranostics. 2019;9:5374-5385.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 59]  [Cited by in F6Publishing: 107]  [Article Influence: 21.4]  [Reference Citation Analysis (0)]
118.  Ji GW, Zhu FP, Xu Q, Wang K, Wu MY, Tang WW, Li XC, Wang XH. Machine-learning analysis of contrast-enhanced CT radiomics predicts recurrence of hepatocellular carcinoma after resection: A multi-institutional study. EBioMedicine. 2019;50:156-165.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 62]  [Cited by in F6Publishing: 124]  [Article Influence: 24.8]  [Reference Citation Analysis (0)]
119.  Liu D, Liu F, Xie X, Su L, Liu M, Kuang M, Huang G, Wang Y, Zhou H, Wang K, Lin M, Tian J. Accurate prediction of responses to transarterial chemoembolization for patients with hepatocellular carcinoma by using artificial intelligence in contrast-enhanced ultrasound. Eur Radiol. 2020;30:2365-2376.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 51]  [Cited by in F6Publishing: 76]  [Article Influence: 19.0]  [Reference Citation Analysis (0)]