Minireviews Open Access
Copyright ©The Author(s) 2020. Published by Baishideng Publishing Group Inc. All rights reserved.
Artif Intell Gastroenterol. Nov 28, 2020; 1(4): 71-85
Published online Nov 28, 2020. doi: 10.35712/aig.v1.i4.71
Artificial intelligence in gastrointestinal cancer: Recent advances and future perspectives
Michihiro Kudou, Toshiyuki Kosuga, Eigo Otsuji, Division of Digestive Surgery, Department of Surgery, Kyoto Prefectural University of Medicine, Kyoto 602-8566, Japan
Michihiro Kudou, Department of Surgery, Kyoto Okamoto Memorial Hospital, Kyoto 613-0034, Japan
Toshiyuki Kosuga, Department of Surgery, Saiseikai Shiga Hospital, Ritto 520-3046, Japan
ORCID number: Michihiro Kudou (0000-0003-3518-528X); Toshiyuki Kosuga (0000-0002-1657-7272); Eigo Otsuji (0000-0002-3260-8155).
Author contributions: Kudou M performed the research, analyzed the data, and wrote the manuscript; Kosuga T made contributions to conception and supervision of the study; Otsuji E critically revised the article; and all authors have read and approved the final manuscript.
Conflict-of-interest statement: The authors declare no conflicts of interests for this article.
Open-Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/Licenses/by-nc/4.0/
Corresponding author: Toshiyuki Kosuga, MD, PhD, Assistant Professor, Division of Digestive Surgery, Department of Surgery, Kyoto Prefectural University of Medicine, 465 Kawaramachi-hirokoji, Kamigyo-ku, Kyoto 602-8566, Japan. toti-k@koto.kpu-m.ac.jp
Received: September 19, 2020
Peer-review started: September 19, 2020
First decision: October 17, 2020
Revised: October 28, 2020
Accepted: November 13, 2020
Article in press: November 13, 2020
Published online: November 28, 2020
Processing time: 72 Days and 23.9 Hours

Abstract

Artificial intelligence (AI) using machine or deep learning algorithms is attracting increasing attention because its image recognition and prediction performance can exceed those of human-aided analyses. The application of AI models to gastrointestinal (GI) clinical oncology has been investigated over the past decade. AI can automatically detect and diagnose GI tumors with diagnostic accuracy similar to that of expert clinicians. AI may also predict malignant potential, such as tumor histology, metastasis, patient survival, resistance to cancer treatments, and the molecular biology of tumors, through analyses of radiological or pathological images using complex deep learning models beyond human cognition. The introduction of AI-assisted diagnostic systems into clinical settings is expected in the near future. However, limitations associated with the evaluation of GI tumors by AI models have yet to be resolved. Recent studies on AI-assisted diagnostic models of gastric and colorectal cancers in the endoscopic, pathological, and radiological fields are herein reviewed. The limitations of and future perspectives for the application of AI systems in clinical settings are also discussed. With the establishment of prospective studies and multidisciplinary teams that include AI experts in each medical institution, AI-assisted medical systems will become a promising tool for GI cancer.

Key Words: Artificial intelligence; Gastric cancer; Colorectal cancer; Endoscopy; Pathology; Radiology

Core Tip: Artificial intelligence (AI) is attracting increasing attention because its image recognition and prediction performance can exceed those of human-aided analyses. The application of AI models to gastrointestinal clinical oncology has been investigated, and the findings obtained indicate its capacity for automatic diagnoses with accuracy similar to that of expert clinicians and for the prediction of malignant potential. However, limitations in the evaluation of gastrointestinal tumors by current AI models have yet to be resolved. The limitations of and future perspectives for the application of AI-assisted systems to clinical settings are discussed herein.



INTRODUCTION

Recent advances in diagnostic technology and treatment strategies for gastrointestinal cancer have improved clinical outcomes. However, even with the development of novel imaging modalities with high accuracy and resolution, advances in image reading, and novel biomarkers, such as the genetic screening of tumors, circulating tumor DNA, and microRNA, the diversity and quantity of data on tumor malignant potential are beyond the limits of human interpretation[1-8]. Therefore, more accurate and objective diagnostic methods using computer-aided diagnosis (CAD) systems, such as technologies involving artificial intelligence (AI), are needed in clinical settings[9-11].

AI refers to the intelligence of machines, in contrast to the natural intelligence of humans, and the term is generally applied when a machine mimics human cognitive functions, such as learning and problem solving[12]. The concept of AI was initially advocated in 1956 by McCarthy et al[13], who anticipated the development of machines with the ability to think like humans. However, machines or computer programs that function as classifiers or detectors, such as those performing image classification and recognition or the prediction of characteristics in populations, are currently regarded as AI.

Recent AI technologies were developed due to technical advances in machine learning and deep neural network algorithms[14-17]. Convolutional neural networks (CNNs) are a class of deep neural networks that are particularly useful for image analyses. Algorithms using CNN models have been applied to many research fields in gastrointestinal cancer, such as the automatic endoscopic detection of tumors, the automatic diagnosis of cancer in pathological specimens, and image analyses of radiological modalities[10,18]. In endoscopic research, CNNs are trained using thousands of endoscopic images to detect tumors, differentiate between benign and malignant tumors, and predict tumor invasion depth[9,19-22]. In recent years, real-time CAD endoscopic systems have been developed using trained CNNs. In the area of pathology, deep learning has been performed using non-cancerous and cancerous images to automatically identify and segment the cytoplasm, nuclei, and stromal cells; CNN and machine learning models with image analyses, such as texture analyses, were subsequently built to identify cancerous regions or diagnose cancer[23]. In the field of radiology, CAD systems for imaging modalities, such as X-ray, computed tomography (CT), and magnetic resonance imaging (MRI), were developed using deep learning models constructed from cancer and non-cancer images to recognize anatomy and to detect and segment tumors[24]. The malignant potential of tumors has also been analyzed using a radiomics approach, which aims to quantitatively assess tumor heterogeneity through the deep or machine learning analysis of histograms, textures, and shapes extracted from medical images[25-27]. AI models of gastrointestinal cancer are summarized in Figure 1.

Figure 1 Clinical research using artificial intelligence in gastrointestinal cancer. Deep learning based on convolutional neural networks: the input layer receives raw image data, such as endoscopic, pathological, and radiological images; the hidden layers compute a series of convolutions; and the output layer provides the classification of the image, the prediction of malignant potential, and the segmentation of the tumor.
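
As a rough illustration of the workflow in Figure 1, the sketch below defines a minimal CNN classifier in Python (PyTorch). The architecture, layer sizes, and two-class cancer vs non-cancer task are generic assumptions for illustration only and do not correspond to the networks used in the studies cited below.

```python
# Minimal illustrative CNN, assuming 224x224 RGB input and a binary
# cancer vs non-cancer label; the architecture is a generic example,
# not the network used in any study cited here.
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # "Hidden layers": a series of convolutions and poolings.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 224 -> 112
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 112 -> 56
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global pooling
        )
        # "Output layer": classification of the image.
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = SimpleCNN()
dummy_batch = torch.randn(4, 3, 224, 224)   # 4 endoscopic-style images
logits = model(dummy_batch)                  # shape: (4, 2)
```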

AI with strong analytical power has attracted the attention of many researchers; therefore, the number of studies on diagnostic AI systems in gastrointestinal cancer has rapidly increased in the past decade. We herein investigate recent advances and future perspectives through a review of the literature.

In this minireview, a bibliographic search was performed in MEDLINE (through PubMed) to identify studies on AI technology in the endoscopy, pathology, and radiology of gastric and colorectal cancer published between 2016 and 2020. We summarized the application of AI in each area based on the 49 studies extracted, and then discussed the current issues and future perspectives of AI in gastrointestinal cancer with reference to additional literature.

APPLICATION OF AI TO ENDOSCOPY IN GASTROINTESTINAL CANCER

Previous studies on the endoscopic diagnosis of gastric cancer (GC) and colorectal cancer (CRC) using AI between 2016 and 2020 are summarized in Tables 1 and 2.

Table 1 Previous studies on upper endoscopy of gastric cancer using artificial intelligence
Ref. | Targets | Sample sizes | Inputs | Tasks | Analysis method | Diagnostic performance
Yoon et al[28] | GC (ESD/surgery) | 800 cases | GC/non-GC images in close-up and distant views | Detection and invasion depth prediction | CNN | AUC: detection, 0.981; depth, 0.851
Zhu et al[29] | GC | 993 images | GC images | Diagnosis of invasion depth | CNN | Sensitivity: 76.4%, PPV: 89.6%
Li et al[30] | GC and healthy | 386 GC and 1702 NC images | NBI images | Diagnosis of GC | CNN | Sensitivity: 91.1%, PPV: 90.6%
Hirasawa et al[31] | GC | 13584 training and 2296 test images | GC images | Diagnosis of GC | CNN | Sensitivity: 92.2%, PPV: 30.6%
Ishioka et al[32] | EGC | 62 cases | Real-time images | Detection | CNN | Detection rate: 94.1%
Luo et al[33] | GC | 1036496 images | GC images | Detection | CNN | PPV: 0.814, NPV: 0.978
Horiuchi et al[34] | GC and gastritis | 1492 GC and 1078 gastritis images | NBI images | Detection | CNN | Sensitivity: 95.4%, PPV: 82.3%
Table 2 Previous studies on colonoscopy using artificial intelligence
Ref. | Targets | Sample sizes | Inputs | Tasks | Analysis method | Diagnostic performance
Akbari et al[35] | Screening endoscopy | 300 polyp images | Polyp images | Auto segmentation of polyps | CNN | Accuracy: 0.977, Sensitivity: 74.8%
Jin et al[36] | Screening endoscopy | Training: 2150 polyps, test: 300 polyps | NBI images | Differentiation of adenoma and hyperplastic polyps | CNN | The model reduced the time of endoscopy and increased accuracy by novice endoscopists
Urban et al[37] | Screening endoscopy | 8641 polyp images and 20 colonoscopy videos | Polyp images | Detection of polyps | CNN | AUC: 0.991, Accuracy: 96.4%
Yamada et al[38] | Screening endoscopy | 4840 images, 77 colonoscopy videos | Real-time images | Differentiation of the early signs of CRC | CNN | Sensitivity: 97.3%, Specificity: 99.0%
Gastric cancer

The purposes of the studies reviewed on AI for GC were (1) tumor detection; (2) the diagnosis of malignancy; (3) real-time detection; and (4) the prediction of tumor invasion depth. The basic method of these studies was as follows: endoscopic images of GC, gastritis, and non-cancerous mucosae, diagnosed pathologically or by an expert endoscopist, were collected, and a CNN was subsequently trained using these images. Diagnostic and detection accuracy were then assessed using the constructed CNN models.

Yoon et al[28] attempted to develop CNN models with the ability to detect early GC and predict invasion depth. The areas under the receiver operating characteristic curves (AUC) for early GC detection and depth prediction were 0.981 and 0.851, respectively. Moreover, the diagnostic accuracy of invasion depth was lower for undifferentiated GC than for differentiated GC[28]. Zhu et al[29] also trained a CNN model to predict the invasion depth of GC. The AUC, positive predictive value (PPV), and negative predictive value (NPV) of their model were 0.94, 89.6%, and 88.9%, respectively, and the CNN-CAD system achieved significantly higher accuracy and specificity than a human endoscopist. Li et al[30] also developed CNN models for the detection of GC with high diagnostic accuracy (sensitivity: 91.1%, specificity: 90.6%, and PPV: 90.9%). Hirasawa et al[31] reported that CNN models exhibited difficulties distinguishing differentiated-type intramucosal cancers with a diameter of 6 mm or less from gastritis. Ishioka et al[32] examined the detection accuracy of a real-time endoscopic diagnosis of GC using CNN models that they had constructed; the detection rate of GC using these models was 94.1%, and the CNN identified regions of GC that were difficult to distinguish from background gastritis, even by experienced endoscopists. Luo et al[33] developed a gastrointestinal AI diagnostic system (GRAIDS) and compared its diagnostic accuracy with that of expert and trainee endoscopists. PPV was 0.814 for GRAIDS, 0.932 for the expert endoscopist, and 0.824 for the trainee endoscopist, while NPV was 0.978 for GRAIDS, 0.980 for the expert endoscopist, and 0.904 for the trainee endoscopist. These findings demonstrated that the diagnostic accuracy of GRAIDS for the detection of GC was similar to that of the expert endoscopist and superior to that of the trainee endoscopist. CNN models based on narrow-band imaging (NBI) for GC have also been reported, with sensitivity and PPV of 91.1-95.4% and 82.3-90.6%, respectively[34].
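
The endoscopic studies above report their performance as AUC, sensitivity, specificity, PPV, and NPV. The short sketch below shows how these metrics can be computed from predicted probabilities with scikit-learn and NumPy; the labels, scores, and 0.5 threshold are synthetic placeholders, not data from any cited study.

```python
# Illustrative computation of the metrics reported above (AUC, sensitivity,
# specificity, PPV, NPV) from hypothetical labels and CNN output scores.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                  # 1 = cancer (pathology)
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.3, 0.8, 0.6])  # CNN probabilities

auc = roc_auc_score(y_true, y_prob)
y_pred = (y_prob >= 0.5).astype(int)                          # example threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
print(f"AUC={auc:.3f} Se={sensitivity:.3f} Sp={specificity:.3f} "
      f"PPV={ppv:.3f} NPV={npv:.3f}")
```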

Colorectal cancer

The purposes of the studies reviewed on AI for CRC were (1) the segmentation and detection of polyps; and (2) the diagnosis of polyp pathology. To develop efficient automatic diagnostic models, models need to automatically segment polyps and extract their features. Akbari et al[35] attempted to construct CNN models of colonoscopy for automatic segmentation and feature extraction. The accuracy, specificity, and sensitivity of the model for automatic segmentation were 0.977, 0.993, and 0.758, respectively. An ideal CAD system for colonoscopy needs to have the ability to predict the pathological diagnosis of an automatically detected tumor and subsequently recommend appropriate treatment strategies for lesions. Jin et al[36] reported a CNN model for predicting the pathological diagnosis of small lesions (≤ 5 mm) using NBI data from colonoscopy. The accuracy, sensitivity, specificity, PPV, and NPV of their model for predicting the pathological diagnosis of polyps (adenoma vs hyperplasia) were 86.7%, 83.3%, 91.7%, 93.8%, and 78.6%, respectively. By comparison, the accuracies of polyp diagnoses by novice, expert, and NBI-trained expert endoscopists were 73.8%, 83.8%, and 87.6%, respectively; using the CNN-processed results, the overall accuracy of novice endoscopists significantly increased to 85.6%. Real-time diagnostic systems for colonoscopy have also been developed using CNN models. Urban et al[37] constructed CNN models to identify polyps, which were subsequently applied to colonoscopy videos; these models detected either type of polyp equally well and identified polyps with an AUC of 0.991 and an accuracy of 96.4%. Yamada et al[38] applied their CNN model, which was developed to detect early signs of CRC, to colonoscopic videos. The sensitivity and specificity of their AI system for detecting the regions of CRC were 97.3% and 99.0%, respectively, while the sensitivity and specificity of endoscopists were 87.4% and 96.4%, respectively. Therefore, the AI system may be used to alert endoscopists in real time to overlooked abnormalities, such as non-polypoid lesions, during colonoscopy, thereby increasing the early detection of this disease.
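
Real-time CAD systems of the kind described by Urban et al[37] and Yamada et al[38] essentially run a trained classifier frame by frame over the video stream. The sketch below illustrates this loop with OpenCV and PyTorch; the model file, video path, and alert threshold are hypothetical placeholders, not components of the cited systems.

```python
# Sketch of a real-time-style CAD loop: run a trained polyp classifier on
# each colonoscopy video frame and flag suspicious frames. The model,
# threshold, and video path are placeholders for illustration only.
import cv2
import torch
import torchvision.transforms as T

model = torch.jit.load("polyp_classifier.pt").eval()   # hypothetical trained model
preprocess = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor()])

cap = cv2.VideoCapture("colonoscopy.mp4")               # hypothetical video file
while True:
    ok, frame_bgr = cap.read()
    if not ok:
        break
    frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    x = preprocess(frame_rgb).unsqueeze(0)               # (1, 3, 224, 224)
    with torch.no_grad():
        prob_polyp = torch.softmax(model(x), dim=1)[0, 1].item()
    if prob_polyp > 0.5:                                  # alert threshold
        print(f"possible polyp (p={prob_polyp:.2f})")
cap.release()
```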

APPLICATIONS OF AI TO THE PATHOLOGICAL DIAGNOSIS OF GASTROINTESTINAL CANCER

Previous studies on the pathological diagnosis of GC and CRC using AI between 2016 and 2020 are summarized in Tables 3 and 4. An automatic pathological diagnosis of gastrointestinal cancer generally involves the following processes: (1) Automatic segmentation: distinguishing various structures, such as the cytoplasm, nuclei, and stroma, and recognizing atypia; (2) The diagnosis and grading of carcinoma; (3) The diagnosis of malignant potential, such as invasion depth and lymphovascular invasion; and (4) The prediction of survival. Therefore, previous studies aimed to develop CAD systems with the ability to perform these processes.

Table 3 Previous studies on the pathology of gastric cancer using artificial intelligence
Ref. | Targets | Sample size | Input | Task | Analysis method | Diagnostic performance
Qu et al[39] | GC | 15000 images | Pathological images | Evaluation of stepwise methods | CNN | AUC: 0.828-0.920
Yoshida et al[40] | GC | 3062 biopsy samples | Pathological images stained by H&E | Automatic segmentation, diagnosis of carcinoma | CNN | Sensitivity: 89.5%, specificity: 50.7%
Mori et al[41] | GC (surgery) | 516 images from 10 GC cases | Pathological images stained by H&E | Diagnosis of invasion depth in signet cell carcinoma | CNN | Sensitivity: 90%, specificity: 81%
Jiang et al[42] | GC (surgery) | 786 cases | IHC (CD3, CD8, CD45RO, CD45RA, CD57, CD68, CD66b, and CD34) | Prediction of survival | SVM | The immunomarker SVM was useful for predicting survival
Table 4 Previous studies on the pathology of colorectal cancer using artificial intelligence
Ref. | Targets | Sample size | Input | Task | Analysis method | Diagnostic performance
Van Eycke et al[43] | CRC | – | H&E staining, IHC image | Segmentation of the glandular epithelium | TMA, CNN | F1 value: 0.912
Graham et al[44] | CRC | – | H&E staining | Differentiation of intratumor glands | CNN | F1 value: 0.90
Abdelsamea et al[45] | CRC | 333 samples | H&E staining, IHC (CD3) | Differentiation of the tumor epithelium | TMA, CNN | Accuracy: 0.93-0.94
Yan et al[46] | CRC | – | H&E staining | Tumor classification, segmentation of tumors | CNN | Accuracy: classification, 97.8%; segmentation, 84%
Haj-Hassan et al[47] | CRC | – | Multispectral images | Segmentation of carcinoma | CNN | Accuracy: 99.1%
Rathore et al[48] | CRC | Biopsy samples | H&E staining | Detection and grading of tumors | Texture and morphology patterns, SVM | Recognition rate: detection, 95.4%; grading, 93.4%
Yang et al[49] | CRC | 180 samples | H&E staining | Diagnosis of benign tumors, neoplasms, and carcinoma | SVM, histogram, texture | AUC: 0.852
Chaddad et al[50] | CRC | 30 cases | H&E staining | Diagnosis of carcinoma, adenoma, and benign tumors | Automatic segmentation, texture | Accuracy: 98.9%
Yoshida et al[51] | CRC | 1328 samples | H&E staining | Diagnosis of benign tumors, neoplasms, and carcinoma | CNN, automatic analysis of structure | Undetected rate of carcinoma and adenoma: 0-9.3% and 0-9.9%, respectively
Takamatsu et al[52] | CRC (surgery) | 397 samples | H&E staining | Prediction of lymph node metastasis | LR, shape analysis | AUC: 0.94
Weis et al[53] | CRC | 596 cases | IHC (AE1/AE3) | Automatic evaluation of tumor budding | TMA, CNN | Correlation; R2 value: 0.86
Bychkov et al[54] | CRC (surgery) | 420 cases | H&E staining | Prediction of survival | TMA, CNN | Good biomarker for predicting survival
Kather et al[55] | CRC | 973 slides | H&E staining | Prediction of survival | Stromal pattern, CNN | Good biomarker for predicting survival
Reichling et al[56] | CRC (surgery) | 1018 cases | H&E, IHC (CD3, CD8) | Prediction of survival | RF, nomogram | Good biomarker for predicting survival
Gastric cancer

Qu et al[39] attempted to develop CNN models for (1) and (2); they proposed a novel stepwise fine-tuning-based deep learning scheme for gastric pathology image classification and established a protocol to further boost the performance of state-of-the-art deep neural networks and overcome the insufficiency of well-annotated data. In their proposed two-stage method, the CNN was initially trained using tissue-wise data on the background, epithelium, and stroma as well as cell-wise data on nuclei and the cytoplasm, and was then fine-tuned using well-annotated benign or malignant data sets. The diagnostic accuracy of their two-stage CNN models was higher than that of one-stage models. Yoshida et al[40] attempted to develop CNN models for (1) and (2) with the ability to automatically segment malignant regions in whole-slide images of biopsy samples and subsequently assign histological classifications through a nuclear analysis at high magnification. For negative biopsy specimens, the concordance rate between their AI system and expert pathologists was 90.6%; however, the concordance rate for positive biopsy specimens was less than 50%. Mori et al[41] trained CNN models for (3) to discriminate the tumor invasion depth of gastric signet-ring cell carcinoma. Their models diagnosed intramucosal or advanced histological characteristics with an accuracy of 85%, sensitivity of 90%, specificity of 81%, and AUC of 0.91. The prediction of survival in GC patients using machine learning methods has also been examined. Jiang et al[42] investigated the efficacy of models for (4) using a support vector machine (SVM). They classified GC patients into two groups with the SVM based on patient characteristics and immunohistochemistry (IHC) data on the following immunomarkers: CD3, CD8, CD45RO, CD45RA, CD57, CD68, CD66b, and CD34. The findings obtained revealed that the SVM classifier was a stronger prognostic factor than the TNM stage or CA19-9.
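
As a rough sketch of an immunomarker-based classifier in the spirit of the SVM approach described above, the example below fits an SVM to per-patient marker densities with scikit-learn. The feature values, labels, and train/test split are synthetic placeholders, not data from the cited study.

```python
# Sketch of an immunomarker-based SVM classifier; all values are synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

markers = ["CD3", "CD8", "CD45RO", "CD45RA", "CD57", "CD68", "CD66b", "CD34"]
rng = np.random.default_rng(0)
X = rng.random((100, len(markers)))        # per-patient marker densities (synthetic)
y = rng.integers(0, 2, size=100)           # 0 = low risk, 1 = high risk (synthetic)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X[:80], y[:80])                    # train on 80 patients
risk_scores = clf.predict_proba(X[80:])[:, 1]   # predicted risk for the rest
print(risk_scores.round(2))
```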

Colorectal cancer

Compared with GC, numerous studies on the pathology of CRC using AI have been reported; they are classified as follows.

Studies on AI models for automatic segmentation: Van Eycke et al[43] and Graham et al[44] developed CNN models to segment the glandular epithelium. The F1 values of these models ranged between 0.90 and 0.912. Abdelsamea et al[45] developed tumor parcellation and quantification (TuPaQ), a tool for refining biomarker analyses through the rapid and automated segmentation of the tumor epithelium. Tissue microarray (TMA) cores from CRC were manually annotated and analyzed to provide the ground truth (epithelial or non-epithelial tissue), and the CNN (TuPaQ) was trained using these data. The accuracy, sensitivity, and specificity of TuPaQ were 0.939, 0.779, and 0.946, respectively. Yan et al[46] examined the diagnostic accuracy of their AI models for the classification, segmentation, and visualization of large-scale tissue histopathology images. The accuracies of their models ranged between 81.3% and 93.2%. Haj-Hassan et al[47] attempted to develop CNN models for the automatic segmentation of benign hyperplasia, intra-epithelial neoplasms, and carcinoma, and the findings obtained showed that the models segmented tumors with a high accuracy of 99.1%.

Diagnosis and grading of carcinoma: Rathore et al[48] reported machine learning models for cancer detection and grading. The features of CRC biopsy samples were extracted based on pink-colored connective tissues, purple-colored nuclei, and white-colored epithelial cells and lumina. The extracted features, particularly white-colored epithelial cells and lumina, were classified using SVM, and classification performance was subsequently assessed. The accuracies of cancer detection and grading by their model were 95.4% and 93.4%, respectively. Yang et al[49] proposed a combination of SVM and color histograms to classify pathological images; the AUC of the model for diagnosing carcinoma was 0.891. Chaddad et al[50] reported that the classification of images using a texture analysis effectively diagnosed carcinoma (accuracy: 98.9%). Yoshida et al[51] showed that a CAD system using a previously described CNN model for GC was useful for diagnosing adenoma and carcinoma (undetected rates of carcinoma and adenoma: 0-9.3% and 0-9.9%, respectively).
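
Several of the studies above rely on hand-crafted texture features rather than end-to-end deep learning. The sketch below extracts grey-level co-occurrence matrix (GLCM) statistics from an image patch with scikit-image (spelled greycomatrix/greycoprops in older versions); the image file is a hypothetical placeholder, and the resulting features could then be passed to a classifier such as an SVM.

```python
# Sketch of texture-feature extraction: GLCM statistics from one H&E patch.
import numpy as np
from skimage import io, color, img_as_ubyte
from skimage.feature import graycomatrix, graycoprops

patch = img_as_ubyte(color.rgb2gray(io.imread("he_patch.png")))  # hypothetical patch
glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)   # texture descriptors for one patch
```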

Diagnosis of malignant potential: Takamatsu et al[52] reported the prediction of lymph node metastasis using a machine learning analysis of morphological parameters (such as shape and roundness) in cytokeratin-stained T1 CRC images; the AUC of the model was 0.94. The automatic evaluation of tumor budding on IHC images using CNN and machine learning has also been performed[53]. Models were constructed to assess tumor budding using TMA of pan-cytokeratin-stained tumors, and the R2 value for the correlation between the models and manual counting in the diagnosis of tumor budding was 0.86.
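
A morphological (shape) analysis of this kind can be illustrated with scikit-image region properties, as in the sketch below; the binary gland mask is a hypothetical placeholder, and the roundness formula (4*pi*area/perimeter^2) is one common definition, not necessarily the parameter set used in the cited study.

```python
# Sketch of extracting simple morphological parameters (area, roundness)
# from a binary, single-channel tumor-gland mask; the file is a placeholder.
import numpy as np
from skimage import io, measure

mask = io.imread("gland_mask.png") > 0            # hypothetical binary mask
labels = measure.label(mask)
for region in measure.regionprops(labels):
    if region.perimeter == 0:
        continue                                   # skip degenerate regions
    roundness = 4 * np.pi * region.area / region.perimeter ** 2
    print(region.label, region.area, round(roundness, 3))
```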

Prediction of survival: Bychkov et al[54] proposed AI models for the automatic prediction of survival in CRC patients using TMA images of CRC pathology. The automatic detection of tumors was initially achieved using a CNN, and cases were subsequently classified by a recurrent neural network. Survival predicted by their model correlated with actual clinical outcomes. Kather et al[55] reported automatic models for discriminating structures in tissue samples and then predicting survival. Their models predicted the survival of CRC patients more accurately than the TNM stage or manual evaluations of cancer-associated fibroblasts. Moreover, machine learning models for survival prediction using immunomarkers evaluated by IHC, such as CD3 and CD8, have been developed[56], and the resulting classifier correlated with patient survival.

APPLICATIONS OF AI TO A RADIOLOGICAL DIAGNOSIS OF GASTROINTESTINAL CANCER

Previous studies on the radiological diagnosis of GC and CRC using AI between 2016 and 2020 are summarized in Tables 5 and 6.

Table 5 Previous studies on the radiological diagnosis of gastric cancer using radiomics or artificial intelligence
Ref. | Targets | Sample size | Input | Task | Analysis method | Diagnostic performance
Li et al[57] | GC, radical surgery | 181 cases | Primary tumor, preoperative CT | Prediction of survival | Manual segmentation, radiomics, nomograms | The TNM stage and radiomics signature were good biomarkers
Zhang et al[58] | GC, radical surgery | 669 cases | Primary tumor, preoperative CT | Prediction of early recurrence | Manual segmentation, radiomics, nomograms | AUC: 0.806-0.831
Li et al[59] | GC, radical surgery | 204 cases | Primary tumor, preoperative dual-energy CT | Preoperative diagnosis of LNM | Manual segmentation, radiomics, nomogram | AUC: 0.82-0.84
Li et al[60] | GC, radical surgery | 554 cases | Primary tumor, preoperative CT | Prediction of pathological status and survival | Semi-automatic segmentation, radiomics | AUC for prediction of the pathological status: 0.77; the TNM stage and radiomics signature were good biomarkers
Wang et al[61] | GC, radical surgery | 187 cases | Primary tumor, preoperative dynamic CT | Preoperative prediction of intestinal-type GC | Manual segmentation, radiomics, nomograms | AUC: 0.904
Jiang et al[62] | GC, surgery | 214 cases | Primary tumor, preoperative PET-CT | Prediction of survival | Manual segmentation, radiomics, nomograms | C-index: DFS, 0.800; OS, 0.786
Chen et al[63] | GC, surgery | 146 cases | Primary tumor, preoperative MRI | Preoperative diagnosis of lymph node metastasis | Manual segmentation, radiomics analysis | AUC: 0.878
Gao et al[64] | GC, surgery | 627 cases, 17340 images | Lymph nodes, preoperative CT | Preoperative diagnosis of lymph node metastasis | Manual segmentation, deep learning | AUC: 0.9541
Huang et al[65] | GC, surgery | – | Primary tumor, preoperative CT | Preoperative diagnosis of peritoneal metastasis | Manual segmentation, CNN | Ongoing, retrospective cross-sectional study
Table 6 Previous studies on the radiological diagnosis of colorectal cancer using radiomics or artificial intelligence
Ref. | Targets | Sample size | Input | Task | Analysis method | Diagnostic performance
Trebeschi et al[66] | LRC | 140 cases | Primary tumor, MRI | Automatic detection, segmentation | CNN | DSC: 0.68-0.70, AUC: 0.99
Wang et al[67] | LRC | 568 cases | Primary tumor, MRI | Automatic segmentation | CNN | DSC: 0.82
Wang et al[68] | LRC | 93 cases | Primary tumor, MRI | Automatic segmentation | Deep learning | DSC: 0.74
Men et al[69] | LRC | 278 cases | Primary tumor, CT | Automatic segmentation | CNN | DSC: 0.87
Shayesteh et al[70] | LRC, NCRT followed by surgery | 98 cases | Primary tumor, pre-treatment MRI | Prediction of CRT responses | Manual segmentation, radiomics, machine learning | AUC: 0.90
Shi et al[71] | LRC, NCRT followed by surgery | 45 cases | Primary tumor, pre-treatment MRI, mid-radiation MRI | Prediction of CRT responses | Manual segmentation, CNN | AUC: CR, 0.83; good response, 0.93
Ferrari et al[72] | LRC, NCRT followed by surgery | 55 cases | Primary tumor, MRI before, during, and after CRT | Prediction of CRT responses | Manual segmentation, radiomics, RF | AUC: CR, 0.86; non-response, 0.83
Bibault et al[73] | LRC, NCRT followed by surgery | 95 cases | Primary tumor, pre-operative CT | Prediction of CRT responses | Manual segmentation, radiomics, CNN | 80% accuracy
Dercle et al[74] | CRC, FOLFIRI with/without cetuximab | 667 cases | Metastatic tumor, CT | Prediction of tumor sensitivity to chemotherapy | Manual segmentation, radiomics, machine learning | AUC: 0.72-0.80
Ding et al[75] | LRC, radical surgery | 414 cases | Lymph nodes, pre-operative MRI | Pre-operative diagnosis of lymph node metastasis | Manual segmentation, CNN | AI system > radiologist
Taguchi et al[76] | CRC | 40 cases | Primary tumor, CT | Prediction of the KRAS status | Manual segmentation, radiomics | AUC: 0.82
Gastric cancer

Regarding GC, many researchers have attempted to develop AI models, using either (1) a radiomics approach; or (2) CNN models, to predict malignant potential, such as survival, lymph node metastasis, and post-operative recurrence, through analyses of the radiological image features of GC.

Radiomics approach: Li et al[57] developed a survival prediction model involving a general radiomics analysis of CT. The region of interest was manually drawn along the margin of the tumor on CT images, and radiological features were extracted. After manual image segmentation, the heterogeneity of the extracted features was quantified using image analyses, such as texture and histogram analyses. Analyzed cases were then classified based on a risk score (R-signature) derived using the least absolute shrinkage and selection operator (LASSO) method, and the performance of a radiomics nomogram, including factors correlating with survival, was evaluated. The findings obtained showed that the R-signature correlated with the survival of GC patients. Furthermore, the prediction of survival by the radiomics nomogram including the R-signature was more accurate than that by conventional nomograms (T and N stages and differentiation). Other studies have investigated the prediction of malignant potential using a radiomics approach. Zhang et al[58] evaluated the diagnostic accuracy of CT radiomics models for predicting post-operative recurrence in GC patients, and the AUCs of the models were 0.806-0.831. Li et al[59] reported CT radiomics models for predicting lymph node metastasis, with an AUC of 0.82-0.84. Li et al[60] also developed CT radiomics models with the ability to predict the pathological status and survival with high accuracy. Wang et al[61] analyzed primary tumors on CT images of the arterial, portal, and delayed phases for the discrimination of intestinal-type GC by a radiomics approach; the AUC of their model was 0.904. Jiang et al[62] described a radiomics model of PET-CT for predicting survival. The C-indexes of this model for overall survival and disease-free survival were 0.786 and 0.800, respectively. A radiomics analysis of MRI for GC has also been conducted: Chen et al[63] examined the heterogeneity of primary tumors on MRI using a radiomics approach and showed that the model was useful for predicting the N stage.
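
The LASSO-based risk-signature step in this workflow can be sketched as follows, using an L1-penalized logistic regression from scikit-learn as a stand-in for the LASSO-Cox models typically fitted to survival endpoints; the feature matrix, outcome labels, and penalty strength are synthetic placeholders.

```python
# Sketch of building a sparse radiomics risk score ("R-signature"-style)
# from a feature table; all data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 300))        # 150 patients x 300 radiomic features
y = rng.integers(0, 2, size=150)       # e.g., 1 = recurrence during follow-up

# The L1 (LASSO-type) penalty shrinks most feature weights to exactly zero,
# leaving a sparse signature; C controls the penalty strength.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
model.fit(X, y)
coefs = model.named_steps["logisticregression"].coef_.ravel()
selected = np.flatnonzero(coefs)              # indices of retained features
risk_score = model.predict_proba(X)[:, 1]     # per-patient risk score
print(f"{selected.size} features retained")
```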

CNN model: Gao et al[64] developed a CNN model of CT for predicting lymph node metastasis. Radiologists initially labeled upper abdominal-enhanced CT images of metastatic lymph nodes. CNN models were then constructed using the labeled image data, and the AUC of the model was 0.954. Huang et al[65] described a protocol for predicting peritoneal metastasis using CNN models, and this research is ongoing.

Colorectal cancer

Treatment strategies for lower rectal cancer (LRC) have recently been attracting increasing attention because of the difficulties associated with achieving curative treatment. Therefore, many researchers have targeted LRC patients in the development of AI models for radiological diagnoses. The aims of recent AI studies on CRC were (1) the automatic detection or segmentation of primary tumors; (2) the prediction of treatment responses; and (3) the prediction of malignant potential.

Automatic detection or segmentation of primary tumors: Trebeschi et al[66] reported a CNN model for the automatic segmentation of primary tumors on MRI. CNN models were trained using T2-weighted images (T2WI) and diffusion-weighted images with primary tumor labeling by expert radiologists. The CNN model showed high segmentation accuracy, with a Dice similarity coefficient (DSC) of 0.68-0.70, and the AUC of the resulting probability maps was 0.99. Two other CNN models were also developed for the automatic segmentation of primary tumors on T2WI, with DSCs of 0.82 and 0.74, respectively[67,68]. Men et al[69] attempted to develop CNN models for automatic segmentation on CT images with an application to the delineation of the clinical target volume (CTV) and surrounding organs for radiotherapy. The mean DSC values of the models were 87.7% for the CTV, 93.4% for the bladder, 92.1% for the left femoral head, 92.3% for the right femoral head, 65.3% for the intestines, and 61.8% for the colon.
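
The DSC values quoted above compare a predicted mask with a manual delineation. A minimal computation is sketched below; the two small binary arrays are synthetic examples.

```python
# Dice similarity coefficient (DSC) between a predicted and a reference mask.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

pred  = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])   # model output (synthetic)
truth = np.array([[0, 1, 1], [1, 1, 0], [0, 0, 0]])   # manual delineation (synthetic)
print(f"DSC = {dice(pred, truth):.2f}")                # 2*3/(3+4) ≈ 0.86
```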

Prediction of treatment responses: Shayesteh et al[70] reported radiomics models for predicting treatment responses to neo-adjuvant chemoradiotherapy (CRT). Primary tumors on T2WI MRI were manually segmented, and shape, texture, and histogram analyses of the image data were performed. The relationship between these features and treatment responses to CRT was assessed by a machine learning approach, which revealed that the AUC and accuracy of the model were 95% and 90%, respectively. Shi et al[71] and Ferrari et al[72] also described the efficacy of radiomics models for predicting CRT responses using pre-treatment, mid-radiation, and post-treatment MRI (AUC for predicting a complete response (CR): 0.83 and 0.86, respectively). Bibault et al[73] compared the diagnostic accuracy of several models (Cox regression, CNN, and SVM) for predicting CR to pre-operative CRT using CT data; the CNN predicted CR with the highest accuracy (80%). A radiomics model for predicting chemotherapeutic responses has also been reported: Dercle et al[74] demonstrated that their radiomics model using CT images successfully predicted sensitivity to anti-EGFR therapy (AUC: 0.80).

Prediction of malignant potential: Ding et al[75] developed AI models to predict lymph node metastasis using pre-operative MRI. CNN models were constructed using MRI lymph node images manually labeled by radiologists, and the diagnostic accuracy of the CNN was compared with that of a radiologist for predicting lymph node metastasis. As a result, the CNN was more accurate than radiologists in identifying pelvic metastatic lymph nodes. A model for predicting gene profiles has also been reported; such research methods are generally called radiogenomics. Taguchi et al[76] showed that a machine learning model using a texture analysis of CT images and SUV values from PET-CT predicted KRAS mutations with high accuracy (AUC: 0.82).

CURRENT ISSUES AND FUTURE PERSPECTIVES
AI research for endoscopy

The majority of previous studies reported that CAD systems using AI for endoscopy had the ability to diagnose gastrointestinal tumors with high accuracy; however, there were many limitations. Researchers tended to use high-quality endoscopic images, which cannot always be acquired in clinical settings, to construct AI models[9]. Furthermore, outcome indicators for clinical applications have not yet been defined; therefore, parameters to assess the functional performance of AI models need to be established[19]. In addition, the majority of studies have been retrospective in nature, using still images from non-clinical settings. These conditions do not mimic real-time clinical settings, in which endoscopists often encounter difficult-to-analyze images in daily practice. Moreover, it currently remains unclear whether AI models will enhance medical performance, reduce medical costs, and increase the satisfaction of patients and medical staff in clinical settings. Another limitation is that many clinicians and clinical researchers do not have sufficient knowledge to understand AI systems; therefore, non-AI experts, as well as medical journal reviewers, may encounter difficulties when assessing research on AI and its applications. Furthermore, the number of medical staff with the skills to educate physicians on AI is very limited[19].

Nevertheless, once these limitations are resolved, CAD systems using AI will markedly improve diagnostic quality in endoscopic examinations. CAD systems for endoscopy are expected to serve as a second observer during real-time endoscopy, facilitating the detection of more neoplasms by endoscopists. Some CAD systems may also provide “optical biopsies” to differentiate the types of colon polyps[9]. Therefore, CAD systems have a promising future in the effective training of junior endoscopists as assistant observers.

AI research for pathology

Previous studies reported that AI models can distinguish structures in tissues and detect cancerous regions with high accuracy. Furthermore, survival may be predicted using image analyses by AI. However, there are also a number of limitations to this research. AI models are trained using pathological images of cancer tissue labeled by pathologists; however, interobserver disagreement in pathological diagnoses commonly occurs between pathologists[77,78]. Therefore, the quality of the training data varied in each study. Furthermore, the majority of AI models were constructed using small cohorts, raising the possibility of non-reproducible, laboratory-specific machine learning methods. In addition, the clinical use of AI models requires a digital slide scanner, image storage, maintenance contracts, image analysis software, and IT support systems, which may be expensive in clinical settings. Moreover, many pathologists and technicians do not have sufficient knowledge to understand AI systems. Therefore, the recruitment of AI experts is needed to introduce AI systems into clinical settings, provide education, and adjust the systems to different clinical settings.

Despite these limitations, whole-slide scanning using AI models, such as the TMA method, is advantageous for pathologists and clinicians. This method may serve as a second observer for the prevention of false diagnoses by pathologists and for the teaching of trainees. Furthermore, the heterogeneity of cancer tissue cannot be precisely evaluated by the human eye. Therefore, the assessment of cancer tissue using AI models is a novel research method beyond human cognition that is expected to predict the proteomics, genomics, and molecular signaling pathways of tumors, complementing precision medicine based on cancer genome sequencing.

AI research for radiology

Previous studies reported the efficacy of automatic segmentation or diagnosis in solid malignant tumors[77-79]. However, difficulties are associated with automatic segmentation by AI models in the field of gastrointestinal cancer because of large individual differences in the imaging features of the gastrointestinal tract, except for the rectum. The radiomics approach represents an attractive method for detecting malignant potential and imaging biomarkers for precision medicine through image analyses of intratumor heterogeneity. However, a number of limitations need to be considered. The manual or semi-automatic segmentation of tumors is generally needed in the radiomics approach; interobserver variability in manual segmentation often occurs in this process, resulting in the poor reproducibility of radiomics models. Furthermore, previous studies demonstrated that radiomic features may be affected by a number of parameters, such as the scanning equipment[80], image pre-processing[81], acquisition protocols[82,83], image reconstruction algorithms[84,85], and delineation. In addition, although radiology researchers and AI experts are knowledgeable about radiomics and AI models, they often cannot identify the clinical task that needs to be improved for clinicians or patients in clinical settings. Conversely, clinicians are not sufficiently familiar with AI, and few reviewers of the clinical medical literature develop AI models themselves or are able to judge research involving AI. Therefore, multidisciplinary teams need to be introduced into research and medical teams to promote AI-supported medicine.

Despite these limitations, radiomics models for image diagnosis or the prediction of malignancy have the potential to support clinical teams in making more accurate and rapid diagnoses. These models may increase patient satisfaction by homogenizing diagnostic accuracy. Moreover, radiogenomics may have a major impact on precision medicine. Non-invasive assessments of the entire tumor tissue may be possible, without having to rely on a single biopsy to represent all cancer lesions within a patient. As further information becomes available on these imaging markers, the characteristics of cancers will be elucidated in more detail. Therefore, the radiomics approach will enhance the treatment effects of molecular biological approaches for oncological precision medicine.

DISCUSSION

AI will be an important component of diagnostic methods used to diagnose disease, determine the most appropriate treatments, and predict prognosis and drug resistance. Many research methods have been developed with these aims and found to have varying levels of performance. For the clinical diagnosis of disease, AI appears to be particularly valuable in endoscopy, where it may increase the detection of benign polyps and malignant tumors. Meanwhile, AI may be useful for analyzing the intratumor heterogeneity of radiological and pathological images in order to predict malignant potential, such as patient prognosis and therapeutic effects. Our minireview covered only articles listed in MEDLINE and may have missed some literature published in medical image analysis and computer science journals. Despite this limitation, AI has become an important part of clinical cancer research in recent years.

There is no turning back for the development of AI in gastrointestinal cancer, and its future implications are large. However, some limitations that require caution should be recognized. Most studies were performed using low-quality datasets from pre-clinical studies. Furthermore, AI algorithms are often considered to be black-box models; the difficulty in understanding how AI reaches a decision may prevent physicians from identifying potential confounding factors. Ethical challenges also need to be considered. Current AI systems are not aware of human preferences or legal liabilities; therefore, medical staff will have to make decisions for patients according to their preferences, environment, and ethics. AI will not completely replace doctors, and computer technology and medical staff will always have to work together. However, the diagnostic accuracy of AI systems has markedly increased, and these systems may detect novel biomarkers that cannot be identified by the human eye or in human-aided analyses. AI systems will be introduced into general hospitals in the near future under the management of multidisciplinary teams consisting of medical staff and AI experts.

CONCLUSION

We reviewed the recently published literature on AI in gastrointestinal cancer, which suggests that AI may be used to accurately diagnose clinical images, identify new therapeutic targets, and process clinical data from large patient datasets. Although physicians must recognize the limitations of AI diagnostic systems, AI-assisted medical systems will become a promising tool for gastrointestinal cancer.

Footnotes

Manuscript source: Invited manuscript

Specialty type: Gastroenterology and Hepatology

Country/Territory of origin: Japan

Peer-review report’s scientific quality classification

Grade A (Excellent): 0

Grade B (Very good): 0

Grade C (Good): C, C

Grade D (Fair): 0

Grade E (Poor): 0

P-Reviewer: Cabezuelo AS, Gao F S-Editor: Wang JL L-Editor: A P-Editor: Ma YJ

References
1. Huynh JC, Schwab E, Ji J, Kim E, Joseph A, Hendifar A, Cho M, Gong J. Recent Advances in Targeted Therapies for Advanced Gastrointestinal Malignancies. Cancers (Basel). 2020;12.
2. Huang RJ, Choi AY, Truong CD, Yeh MM, Hwang JH. Diagnosis and Management of Gastric Intestinal Metaplasia: Current Status and Future Directions. Gut Liver. 2019;13:596-603.
3. Tsujiura M, Ichikawa D, Konishi H, Komatsu S, Shiozaki A, Otsuji E. Liquid biopsy of gastric cancer patients: circulating tumor cells and cell-free nucleic acids. World J Gastroenterol. 2014;20:3265-3286.
4. Okajima W, Komatsu S, Ichikawa D, Miyamae M, Ohashi T, Imamura T, Kiuchi J, Nishibeppu K, Arita T, Konishi H, Shiozaki A, Morimura R, Ikoma H, Okamoto K, Otsuji E. Liquid biopsy in patients with hepatocellular carcinoma: Circulating tumor cells and cell-free nucleic acids. World J Gastroenterol. 2017;23:5650-5668.
5. Imamura T, Komatsu S, Ichikawa D, Kawaguchi T, Miyamae M, Okajima W, Ohashi T, Arita T, Konishi H, Shiozaki A, Morimura R, Ikoma H, Okamoto K, Otsuji E. Liquid biopsy in patients with pancreatic cancer: Circulating tumor cells and cell-free nucleic acids. World J Gastroenterol. 2016;22:5627-5641.
6. Lech G, Słotwiński R, Słodkowski M, Krasnodębski IW. Colorectal cancer tumour markers and biomarkers: Recent therapeutic advances. World J Gastroenterol. 2016;22:1745-1755.
7. Li TT, Liu H, Yu J, Shi GY, Zhao LY, Li GX. Prognostic and predictive blood biomarkers in gastric cancer and the potential application of circulating tumor cells. World J Gastroenterol. 2018;24:2236-2246.
8. Van Cutsem E, Verheul HM, Flamen P, Rougier P, Beets-Tan R, Glynne-Jones R, Seufferlein T. Imaging in Colorectal Cancer: Progress and Challenges for the Clinicians. Cancers (Basel). 2016;8.
9. He YS, Su JR, Li Z, Zuo XL, Li YQ. Application of artificial intelligence in gastrointestinal endoscopy. J Dig Dis. 2019;20:623-630.
10. Le Berre C, Sandborn WJ, Aridhi S, Devignes MD, Fournier L, Smaïl-Tabbone M, Danese S, Peyrin-Biroulet L. Application of Artificial Intelligence to Gastroenterology and Hepatology. Gastroenterology. 2020;158:76-94.e2.
11. Que SJ, Chen QY, Qing-Zhong, Liu ZY, Wang JB, Lin JX, Lu J, Cao LL, Lin M, Tu RH, Huang ZN, Lin JL, Zheng HL, Li P, Zheng CH, Huang CM, Xie JW. Application of preoperative artificial neural network based on blood biomarkers and clinicopathological parameters for predicting long-term survival of patients with gastric cancer. World J Gastroenterol. 2019;25:6451-6464.
12. Russell SJ, Norvig P. Artificial Intelligence: A Modern Approach. 3rd ed. Pearson Education, 2009.
13.  McCarthy J, Minsky ML, Rochester N, Shannon CE. A proposal for the dartmouth summer research project on artificial intelligence. Dartmouth Proposal. 1955;1-13.  [PubMed]  [DOI]  [Cited in This Article: ]
14.  Deo RC. Machine Learning in Medicine. Circulation. 2015;132:1920-1930.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 1155]  [Cited by in F6Publishing: 1675]  [Article Influence: 209.4]  [Reference Citation Analysis (6)]
15.  LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436-444.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 36149]  [Cited by in F6Publishing: 18443]  [Article Influence: 2049.2]  [Reference Citation Analysis (0)]
16.  Erickson BJ, Korfiatis P, Akkus Z, Kline TL. Machine Learning for Medical Imaging. Radiographics. 2017;37:505-515.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 647]  [Cited by in F6Publishing: 708]  [Article Influence: 101.1]  [Reference Citation Analysis (0)]
17.  McBee MP, Awan OA, Colucci AT, Ghobadi CW, Kadom N, Kansagra AP, Tridandapani S, Auffermann WF. Deep Learning in Radiology. Acad Radiol. 2018;25:1472-1480.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 186]  [Cited by in F6Publishing: 233]  [Article Influence: 38.8]  [Reference Citation Analysis (0)]
18.  Yang YJ, Bang CS. Application of artificial intelligence in gastroenterology. World J Gastroenterol. 2019;25:1666-1683.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in CrossRef: 166]  [Cited by in F6Publishing: 145]  [Article Influence: 29.0]  [Reference Citation Analysis (4)]
19.  Min JK, Kwak MS, Cha JM. Overview of Deep Learning in Gastrointestinal Endoscopy. Gut Liver. 2019;13:388-393.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 74]  [Cited by in F6Publishing: 98]  [Article Influence: 24.5]  [Reference Citation Analysis (0)]
20.  Mori Y, Kudo SE, Mohmed HEN, Misawa M, Ogata N, Itoh H, Oda M, Mori K. Artificial intelligence and upper gastrointestinal endoscopy: Current status and future perspective. Dig Endosc. 2019;31:378-388.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 84]  [Cited by in F6Publishing: 77]  [Article Influence: 15.4]  [Reference Citation Analysis (0)]
21.  Ebigbo A, Palm C, Probst A, Mendel R, Manzeneder J, Prinz F, de Souza LA, Papa JP, Siersema P, Messmann H. A technical review of artificial intelligence as applied to gastrointestinal endoscopy: clarifying the terminology. Endosc Int Open. 2019;7:E1616-E1623.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 27]  [Cited by in F6Publishing: 28]  [Article Influence: 5.6]  [Reference Citation Analysis (0)]
22.  Ahmad OF, Soares AS, Mazomenos E, Brandao P, Vega R, Seward E, Stoyanov D, Chand M, Lovat LB. Artificial intelligence and computer-aided diagnosis in colonoscopy: current evidence and future directions. Lancet Gastroenterol Hepatol. 2019;4:71-80.  [PubMed]  [DOI]  [Cited in This Article: ]
23.  Acs B, Rantalainen M, Hartman J. Artificial intelligence as the next step towards precision pathology. J Intern Med. 2020;288:62-81.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 135]  [Cited by in F6Publishing: 191]  [Article Influence: 47.8]  [Reference Citation Analysis (0)]
24.  Bi WL, Hosny A, Schabath MB, Giger ML, Birkbak NJ, Mehrtash A, Allison T, Arnaout O, Abbosh C, Dunn IF, Mak RH, Tamimi RM, Tempany CM, Swanton C, Hoffmann U, Schwartz LH, Gillies RJ, Huang RY, Aerts HJWL. Artificial intelligence in cancer imaging: Clinical challenges and applications. CA Cancer J Clin. 2019;69:127-157.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 848]  [Cited by in F6Publishing: 661]  [Article Influence: 132.2]  [Reference Citation Analysis (3)]
25.  Mayerhoefer ME, Materka A, Langs G, Häggström I, Szczypiński P, Gibbs P, Cook G. Introduction to Radiomics. J Nucl Med. 2020;61:488-495.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 211]  [Cited by in F6Publishing: 743]  [Article Influence: 185.8]  [Reference Citation Analysis (0)]
26.  Forghani R, Savadjiev P, Chatterjee A, Muthukrishnan N, Reinhold C, Forghani B. Radiomics and Artificial Intelligence for Biomarker and Prediction Model Development in Oncology. Comput Struct Biotechnol J. 2019;17:995-1008.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 80]  [Cited by in F6Publishing: 107]  [Article Influence: 21.4]  [Reference Citation Analysis (0)]
27.  Bibault JE, Xing L, Giraud P, El Ayachy R, Giraud N, Decazes P, Burgun A, Giraud P. Radiomics: A primer for the radiation oncologist. Cancer Radiother. 2020;24:403-410.  [PubMed]  [DOI]  [Cited in This Article: ]  [Cited by in Crossref: 32]  [Cited by in F6Publishing: 41]  [Article Influence: 10.3]  [Reference Citation Analysis (0)]
28. Yoon HJ, Kim S, Kim JH, Keum JS, Oh SI, Jo J, Chun J, Youn YH, Park H, Kwon IG, Choi SH, Noh SH. A Lesion-Based Convolutional Neural Network Improves Endoscopic Detection and Depth Prediction of Early Gastric Cancer. J Clin Med. 2019;8.
29. Zhu Y, Wang QC, Xu MD, Zhang Z, Cheng J, Zhong YS, Zhang YQ, Chen WF, Yao LQ, Zhou PH, Li QL. Application of convolutional neural network in the diagnosis of the invasion depth of gastric cancer based on conventional endoscopy. Gastrointest Endosc. 2019;89:806-815.e1.
30. Li L, Chen Y, Shen Z, Zhang X, Sang J, Ding Y, Yang X, Li J, Chen M, Jin C, Chen C, Yu C. Convolutional neural network for the diagnosis of early gastric cancer based on magnifying narrow band imaging. Gastric Cancer. 2020;23:126-132.
31. Hirasawa T, Aoyama K, Tanimoto T, Ishihara S, Shichijo S, Ozawa T, Ohnishi T, Fujishiro M, Matsuo K, Fujisaki J, Tada T. Application of artificial intelligence using a convolutional neural network for detecting gastric cancer in endoscopic images. Gastric Cancer. 2018;21:653-660.
32. Ishioka M, Hirasawa T, Tada T. Detecting gastric cancer from video images using convolutional neural networks. Dig Endosc. 2019;31:e34-e35.
33. Luo H, Xu G, Li C, He L, Luo L, Wang Z, Jing B, Deng Y, Jin Y, Li Y, Li B, Tan W, He C, Seeruttun SR, Wu Q, Huang J, Huang DW, Chen B, Lin SB, Chen QM, Yuan CM, Chen HX, Pu HY, Zhou F, He Y, Xu RH. Real-time artificial intelligence for detection of upper gastrointestinal cancer by endoscopy: a multicentre, case-control, diagnostic study. Lancet Oncol. 2019;20:1645-1654.
34. Horiuchi Y, Aoyama K, Tokai Y, Hirasawa T, Yoshimizu S, Ishiyama A, Yoshio T, Tsuchida T, Fujisaki J, Tada T. Convolutional Neural Network for Differentiating Gastric Cancer from Gastritis Using Magnified Endoscopy with Narrow Band Imaging. Dig Dis Sci. 2020;65:1355-1363.
35. Akbari M, Mohrekesh M, Nasr-Esfahani E, Soroushmehr SMR, Karimi N, Samavi S, Najarian K. Polyp Segmentation in Colonoscopy Images Using Fully Convolutional Network. Annu Int Conf IEEE Eng Med Biol Soc. 2018;2018:69-72.
36. Jin EH, Lee D, Bae JH, Kang HY, Kwak MS, Seo JY, Yang JI, Yang SY, Lim SH, Yim JY, Lim JH, Chung GE, Chung SJ, Choi JM, Han YM, Kang SJ, Lee J, Chan Kim H, Kim JS. Improved Accuracy in Optical Diagnosis of Colorectal Polyps Using Convolutional Neural Networks with Visual Explanations. Gastroenterology. 2020;158:2169-2179.e8.
37. Urban G, Tripathi P, Alkayali T, Mittal M, Jalali F, Karnes W, Baldi P. Deep Learning Localizes and Identifies Polyps in Real Time With 96% Accuracy in Screening Colonoscopy. Gastroenterology. 2018;155:1069-1078.e8.
38. Yamada M, Saito Y, Imaoka H, Saiko M, Yamada S, Kondo H, Takamaru H, Sakamoto T, Sese J, Kuchiba A, Shibata T, Hamamoto R. Development of a real-time endoscopic image diagnosis support system using deep learning technology in colonoscopy. Sci Rep. 2019;9:14465.
39. Qu J, Hiruta N, Terai K, Nosato H, Murakawa M, Sakanashi H. Gastric Pathology Image Classification Using Stepwise Fine-Tuning for Deep Neural Networks. J Healthc Eng. 2018;2018:8961781.
40. Yoshida H, Shimazu T, Kiyuna T, Marugame A, Yamashita Y, Cosatto E, Taniguchi H, Sekine S, Ochiai A. Automated histological classification of whole-slide images of gastric biopsy specimens. Gastric Cancer. 2018;21:249-257.
41. Mori H, Miwa H. A histopathologic feature of the behavior of gastric signet-ring cell carcinoma; an image analysis study with deep learning. Pathol Int. 2019;69:437-439.
42. Jiang Y, Xie J, Han Z, Liu W, Xi S, Huang L, Huang W, Lin T, Zhao L, Hu Y, Yu J, Zhang Q, Li T, Cai S, Li G. Immunomarker Support Vector Machine Classifier for Prediction of Gastric Cancer Survival and Adjuvant Chemotherapeutic Benefit. Clin Cancer Res. 2018;24:5574-5584.
43. Van Eycke YR, Balsat C, Verset L, Debeir O, Salmon I, Decaestecker C. Segmentation of glandular epithelium in colorectal tumours to automatically compartmentalise IHC biomarker quantification: A deep learning approach. Med Image Anal. 2018;49:35-45.
44. Graham S, Chen H, Gamper J, Dou Q, Heng PA, Snead D, Tsang YW, Rajpoot N. MILD-Net: Minimal information loss dilated network for gland instance segmentation in colon histology images. Med Image Anal. 2019;52:199-211.
45. Abdelsamea MM, Grineviciute RB, Besusparis J, Cham S, Pitiot A, Laurinavicius A, Ilyas M. Tumour parcellation and quantification (TuPaQ): a tool for refining biomarker analysis through rapid and automated segmentation of tumour epithelium. Histopathology. 2019;74:1045-1054.
46. Xu Y, Jia Z, Wang LB, Ai Y, Zhang F, Lai M, Chang EI. Large scale tissue histopathology image classification, segmentation, and visualization via deep convolutional activation features. BMC Bioinformatics. 2017;18:281.
47. Haj-Hassan H, Chaddad A, Harkouss Y, Desrosiers C, Toews M, Tanougast C. Classifications of Multispectral Colorectal Cancer Tissues Using Convolution Neural Network. J Pathol Inform. 2017;8:1.
48. Rathore S, Hussain M, Aksam Iftikhar M, Jalil A. Novel structural descriptors for automated colon cancer detection and grading. Comput Methods Programs Biomed. 2015;121:92-108.
49. Yang K, Zhou B, Yi F, Chen Y, Chen Y. Colorectal Cancer Diagnostic Algorithm Based on Sub-Patch Weight Color Histogram in Combination of Improved Least Squares Support Vector Machine for Pathological Image. J Med Syst. 2019;43:306.
50. Chaddad A, Desrosiers C, Bouridane A, Toews M, Hassan L, Tanougast C. Multi Texture Analysis of Colorectal Cancer Continuum Using Multispectral Imagery. PLoS One. 2016;11:e0149893.
51. Yoshida H, Yamashita Y, Shimazu T, Cosatto E, Kiyuna T, Taniguchi H, Sekine S, Ochiai A. Automated histological classification of whole slide images of colorectal biopsy specimens. Oncotarget. 2017;8:90719-90729.
52. Takamatsu M, Yamamoto N, Kawachi H, Chino A, Saito S, Ueno M, Ishikawa Y, Takazawa Y, Takeuchi K. Prediction of early colorectal cancer metastasis by machine learning using digital slide images. Comput Methods Programs Biomed. 2019;178:155-161.
53. Weis CA, Kather JN, Melchers S, Al-Ahmdi H, Pollheimer MJ, Langner C, Gaiser T. Automatic evaluation of tumor budding in immunohistochemically stained colorectal carcinomas and correlation to clinical outcome. Diagn Pathol. 2018;13:64.
54. Bychkov D, Linder N, Turkki R, Nordling S, Kovanen PE, Verrill C, Walliander M, Lundin M, Haglund C, Lundin J. Deep learning based tissue analysis predicts outcome in colorectal cancer. Sci Rep. 2018;8:3395.
55. Kather JN, Krisam J, Charoentong P, Luedde T, Herpel E, Weis CA, Gaiser T, Marx A, Valous NA, Ferber D, Jansen L, Reyes-Aldasoro CC, Zörnig I, Jäger D, Brenner H, Chang-Claude J, Hoffmeister M, Halama N. Predicting survival from colorectal cancer histology slides using deep learning: A retrospective multicenter study. PLoS Med. 2019;16:e1002730.
56. Reichling C, Taieb J, Derangere V, Klopfenstein Q, Le Malicot K, Gornet JM, Becheur H, Fein F, Cojocarasu O, Kaminsky MC, Lagasse JP, Luet D, Nguyen S, Etienne PL, Gasmi M, Vanoli A, Perrier H, Puig PL, Emile JF, Lepage C, Ghiringhelli F. Artificial intelligence-guided tissue analysis combined with immune infiltrate assessment predicts stage III colon cancer outcomes in PETACC08 study. Gut. 2020;69:681-690.
57. Li W, Zhang L, Tian C, Song H, Fang M, Hu C, Zang Y, Cao Y, Dai S, Wang F, Dong D, Wang R, Tian J. Prognostic value of computed tomography radiomics features in patients with gastric cancer following curative resection. Eur Radiol. 2019;29:3079-3089.
58. Zhang W, Fang M, Dong D, Wang X, Ke X, Zhang L, Hu C, Guo L, Guan X, Zhou J, Shan X, Tian J. Development and validation of a CT-based radiomic nomogram for preoperative prediction of early recurrence in advanced gastric cancer. Radiother Oncol. 2020;145:13-20.
59. Li J, Dong D, Fang M, Wang R, Tian J, Li H, Gao J. Dual-energy CT-based deep learning radiomics can improve lymph node metastasis risk prediction for gastric cancer. Eur Radiol. 2020;30:2324-2333.
60. Li Q, Qi L, Feng QX, Liu C, Sun SW, Zhang J, Yang G, Ge YQ, Zhang YD, Liu XS. Machine Learning-Based Computational Models Derived From Large-Scale Radiographic-Radiomic Images Can Help Predict Adverse Histopathological Status of Gastric Cancer. Clin Transl Gastroenterol. 2019;10:e00079.
61. Wang Y, Liu W, Yu Y, Han W, Liu JJ, Xue HD, Lei J, Jin ZY, Yu JC. Potential value of CT radiomics in the distinction of intestinal-type gastric adenocarcinomas. Eur Radiol. 2020;30:2934-2944.
62. Jiang Y, Yuan Q, Lv W, Xi S, Huang W, Sun Z, Chen H, Zhao L, Liu W, Hu Y, Lu L, Ma J, Li T, Yu J, Wang Q, Li G. Radiomic signature of 18F fluorodeoxyglucose PET/CT for prediction of gastric cancer survival and chemotherapeutic benefits. Theranostics. 2018;8:5915-5928.
63. Chen W, Wang S, Dong D, Gao X, Zhou K, Li J, Lv B, Li H, Wu X, Fang M, Tian J, Xu M. Evaluation of Lymph Node Metastasis in Advanced Gastric Cancer Using Magnetic Resonance Imaging-Based Radiomics. Front Oncol. 2019;9:1265.
64. Gao Y, Zhang ZD, Li S, Guo YT, Wu QY, Liu SH, Yang SJ, Ding L, Zhao BC, Li S, Lu Y. Deep neural network-assisted computed tomography diagnosis of metastatic lymph nodes from gastric cancer. Chin Med J (Engl). 2019;132:2804-2811.
65. Huang Z, Liu D, Chen X, Yu P, Wu J, Song B, Hu J, Wu B. Retrospective imaging studies of gastric cancer: Study protocol clinical trial (SPIRIT Compliant). Medicine (Baltimore). 2020;99:e19157.
66. Trebeschi S, van Griethuysen JJM, Lambregts DMJ, Lahaye MJ, Parmar C, Bakers FCH, Peters NHGM, Beets-Tan RGH, Aerts HJWL. Deep Learning for Fully-Automated Localization and Segmentation of Rectal Cancer on Multiparametric MR. Sci Rep. 2017;7:5301.
67. Wang M, Xie P, Ran Z, Jian J, Zhang R, Xia W, Yu T, Ni C, Gu J, Gao X, Meng X. Full convolutional network based multiple side-output fusion architecture for the segmentation of rectal tumors in magnetic resonance images: A multi-vendor study. Med Phys. 2019;46:2659-2668.
68. Wang J, Lu J, Qin G, Shen L, Sun Y, Ying H, Zhang Z, Hu W. Technical Note: A deep learning-based autosegmentation of rectal tumors in MR images. Med Phys. 2018;45:2560-2564.
69. Men K, Dai J, Li Y. Automatic segmentation of the clinical target volume and organs at risk in the planning CT for rectal cancer using deep dilated convolutional neural networks. Med Phys. 2017;44:6377-6389.
70. Shayesteh SP, Alikhassi A, Fard Esfahani A, Miraie M, Geramifar P, Bitarafan-Rajabi A, Haddad P. Neo-adjuvant chemoradiotherapy response prediction using MRI based ensemble learning method in rectal cancer patients. Phys Med. 2019;62:111-119.
71. Shi L, Zhang Y, Nie K, Sun X, Niu T, Yue N, Kwong T, Chang P, Chow D, Chen JH, Su MY. Machine learning for prediction of chemoradiation therapy response in rectal cancer using pre-treatment and mid-radiation multi-parametric MRI. Magn Reson Imaging. 2019;61:33-40.
72. Ferrari R, Mancini-Terracciano C, Voena C, Rengo M, Zerunian M, Ciardiello A, Grasso S, Mare' V, Paramatti R, Russomando A, Santacesaria R, Satta A, Solfaroli Camillocci E, Faccini R, Laghi A. MR-based artificial intelligence model to assess response to therapy in locally advanced rectal cancer. Eur J Radiol. 2019;118:1-9.
73. Bibault JE, Giraud P, Housset M, Durdux C, Taieb J, Berger A, Coriat R, Chaussade S, Dousset B, Nordlinger B, Burgun A. Deep Learning and Radiomics predict complete response after neo-adjuvant chemoradiation for locally advanced rectal cancer. Sci Rep. 2018;8:12611.
74. Dercle L, Lu L, Schwartz LH, Qian M, Tejpar S, Eggleton P, Zhao B, Piessevaux H. Radiomics Response Signature for Identification of Metastatic Colorectal Cancer Sensitive to Therapies Targeting EGFR Pathway. J Natl Cancer Inst. 2020;112:902-912.
75. Ding L, Liu GW, Zhao BC, Zhou YP, Li S, Zhang ZD, Guo YT, Li AQ, Lu Y, Yao HW, Yuan WT, Wang GY, Zhang DL, Wang L. Artificial intelligence system of faster region-based convolutional neural network surpassing senior radiologists in evaluation of metastatic lymph nodes of rectal cancer. Chin Med J (Engl). 2019;132:379-387.
76. Taguchi N, Oda S, Yokota Y, Yamamura S, Imuta M, Tsuchigame T, Nagayama Y, Kidoh M, Nakaura T, Shiraishi S, Funama Y, Shinriki S, Miyamoto Y, Baba H, Yamashita Y. CT texture analysis for the prediction of KRAS mutation status in colorectal cancer via a machine learning approach. Eur J Radiol. 2019;118:38-43.
77. Chassagnon G, Vakalopoulou M, Paragios N, Revel MP. Artificial intelligence applications for thoracic imaging. Eur J Radiol. 2020;123:108774.
78. Rudie JD, Rauschecker AM, Bryan RN, Davatzikos C, Mohan S. Emerging Applications of Artificial Intelligence in Neuro-Oncology. Radiology. 2019;290:607-618.
79. Zhou LQ, Wang JY, Yu SY, Wu GG, Wei Q, Deng YB, Wu XL, Cui XW, Dietrich CF. Artificial intelligence in medical imaging of the liver. World J Gastroenterol. 2019;25:672-682.
80. Fave X, Mackin D, Yang J, Zhang J, Fried D, Balter P, Followill D, Gomez D, Jones AK, Stingo F, Fontenot J, Court L. Can radiomics features be reproducibly measured from CBCT images for patients with non-small cell lung cancer? Med Phys. 2015;42:6784-6797.
81. Shafiq-Ul-Hassan M, Zhang GG, Latifi K, Ullah G, Hunt DC, Balagurunathan Y, Abdalah MA, Schabath MB, Goldgof DG, Mackin D, Court LE, Gillies RJ, Moros EG. Intrinsic dependencies of CT radiomic features on voxel size and number of gray levels. Med Phys. 2017;44:1050-1062.
82. Berenguer R, Pastor-Juan MDR, Canales-Vázquez J, Castro-García M, Villas MV, Mansilla Legorburo F, Sabater S. Radiomics of CT Features May Be Nonreproducible and Redundant: Influence of CT Acquisition Parameters. Radiology. 2018;288:407-415.
83. Lecler A, Duron L, Balvay D, Savatovsky J, Bergès O, Zmuda M, Farah E, Galatoire O, Bouchouicha A, Fournier LS. Combining Multiple Magnetic Resonance Imaging Sequences Provides Independent Reproducible Radiomics Features. Sci Rep. 2019;9:2068.
84. Shiri I, Rahmim A, Ghaffarian P, Geramifar P, Abdollahi H, Bitarafan-Rajabi A. The impact of image reconstruction settings on 18F-FDG PET radiomic features: multi-scanner phantom and patient studies. Eur Radiol. 2017;27:4498-4509.
85. Zhao B, Tan Y, Tsai WY, Qi J, Xie C, Lu L, Schwartz LH. Reproducibility of radiomics for deciphering tumor phenotype with imaging. Sci Rep. 2016;6:23428.