Taghiakbari M, Mori Y, von Renteln D. Artificial intelligence-assisted colonoscopy: A review of current state of practice and research. World J Gastroenterol 2021; 27(47): 8103-8122 [PMID: 35068857 DOI: 10.3748/wjg.v27.i47.8103]
Corresponding Author of This Article
Daniel von Renteln, MD, Associate Professor, Department of Gastroenterology, CRCHUM, 900 Rue Saint Denis, Montreal H2X 0A9, Quebec, Canada. danielrenteln@gmail.com
Research Domain of This Article
Gastroenterology & Hepatology
Article-Type of This Article
Minireviews
Open-Access Policy of This Article
This article is an open-access article which was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/
Author contributions: Taghiakbari M drafted the manuscript under the supervision of von Renteln D; Mori Y and von Renteln D performed the critical revision of the manuscript for important intellectual content.
Conflict-of-interest statement: Mahsa Taghiakbari has no conflicts of interest relevant to this paper to disclose. Yuichi Mori is a consultant for the Olympus corporation and receives a speaking fee from the same corporation. Yuichi Mori has an ownership interest in Cybernet System Corporation. Daniel von Renteln is supported by the “Fonds de Recherche du Québec Santé” career development award and has received research funding from ERBE, Ventage, Pendopharm, and Pentax; he is also a consultant for Boston Scientific and Pendopharm.
Received: March 19, 2021 Peer-review started: March 19, 2021 First decision: August 9, 2021 Revised: August 22, 2021 Accepted: December 3, 2021 Article in press: December 7, 2021 Published online: December 21, 2021 Processing time: 272 Days and 18 Hours
Abstract
Colonoscopy is an effective screening procedure in colorectal cancer prevention programs; however, colonoscopy practice can vary in terms of lesion detection, classification, and removal. Artificial intelligence (AI)-assisted decision support systems for endoscopy are an area of rapid research and development. These systems promise improved detection, classification, screening, and surveillance for colorectal polyps and cancer. Several recently developed applications for AI-assisted colonoscopy have shown promising results for the detection and classification of colorectal polyps and adenomas. However, their value for real-time application in clinical practice has yet to be determined owing to limitations in the design, validation, and testing of AI models under real-life clinical conditions. Despite these current limitations, ambitious attempts to expand the technology further by developing more complex systems capable of assisting and supporting the endoscopist throughout the entire colonoscopy examination, including polypectomy procedures, are at the concept stage. However, further work is required to address the barriers and challenges of AI integration into broader colonoscopy practice, to navigate the approval process from regulatory organizations and societies, and to support physicians and patients on their journey to accepting the technology by providing strong evidence of its accuracy and safety. This article takes a closer look at the current state of AI integration into the field of colonoscopy and offers suggestions for future research.
Core Tip: Artificial intelligence (AI)-assisted decision support systems for endoscopy have shown promising results for the detection and classification of colorectal lesions. However, their integration into clinical practice is currently limited because most AI models have not been designed, validated, and tested under real-life clinical conditions. Further work is required to address the challenges of AI integration, to navigate the regulatory approval process, and to support physicians and patients on their journey to accepting the technology by providing strong evidence of accuracy and safety. This article describes the current state of AI integration into colonoscopy practice and offers suggestions for future research.
Citation: Taghiakbari M, Mori Y, von Renteln D. Artificial intelligence-assisted colonoscopy: A review of current state of practice and research. World J Gastroenterol 2021; 27(47): 8103-8122
INTRODUCTION
Colorectal cancer (CRC) was the fourth most commonly diagnosed and the third most fatal cancer worldwide in 2018[1]. The cost of CRC care in the United States was estimated at $14.1 billion in 2010[2]. Over the past decade, CRC incidence and mortality have declined as a result of the increase in CRC screening and prevention examinations[3]. Colonoscopy is a screening tool with high sensitivity for the detection of precancerous and cancerous lesions, and may contribute to reductions of approximately 80% in CRC incidence and up to 60% in CRC mortality[4-8]. Colonoscopy prevents CRC by interrupting the adenoma-carcinoma sequence through the detection and removal of premalignant colorectal polyps[3]. Furthermore, it is a cost-effective procedure that often allows surgery to be avoided in patients with adenomas or CRCs that do not invade deeper than the superficial submucosa[9]. However, the quality of colonoscopy procedures depends on the experience of the endoscopist and the techniques and technology used[10]. A suboptimal colonoscopy examination can result in interval cancers, which are CRCs that occur after a colonoscopy and before the next surveillance examination, and are usually due to non-detection and/or incomplete resection of premalignant polyps. Recent research has shown that CRC precursor lesions are incompletely resected in about 14% of colonoscopy procedures[11]. Quality indicators have been established to describe and measure the quality of colonoscopy examinations[12], and the use of pre- and intraprocedural quality metrics has been shown to increase both colonoscopy quality and the standardization of procedures[12,13]. One of the most recognized quality metrics is the adenoma detection rate (ADR), the proportion of an endoscopist's patients undergoing screening colonoscopy who have at least one adenoma detected; every 1% increase in ADR has been shown to result in a 3% decrease in the risk of post-colonoscopy CRC[10].
Over 90% of colorectal polyps are diminutive (≤ 5 mm) or small (≤ 10 mm), and most of these polyps are non-neoplastic[10]. Recent advances in image-enhanced endoscopy [IEE; e.g., blue-light imaging, narrow-band imaging (NBI), and i-Scan] have enhanced visualization of the polyp surface pattern. IEE can be employed for the optical classification of colorectal polyps during colonoscopy, obviating the need for histopathology[14,15]. The American Society for Gastrointestinal Endoscopy (ASGE) Technology Committee, in its Preservation and Incorporation of Valuable endoscopic Innovations (PIVI) statement, has recommended optical evaluation of diminutive polyps, with a "resect and discard" strategy for all diminutive colorectal polyps and a "diagnose and leave" strategy for diminutive rectosigmoid polyps, provided the endoscopist can reach the recommended thresholds of ≥ 90% agreement with histopathology for surveillance interval assignment and ≥ 90% negative predictive value (NPV) for the diagnosis of adenomatous histology, respectively[14,15]. Optical diagnosis can distinguish between neoplastic and non-neoplastic polyps and can therefore deliver clinical and cost benefits by reducing the number of unnecessary histopathology examinations and providing immediate surveillance interval recommendations to patients. However, despite the demonstrated high accuracy of optical diagnosis for diminutive polyps, endoscopists have been reluctant to support its broad implementation because of concerns about incorrect diagnoses, assignment of inappropriate surveillance intervals, and related medicolegal issues[16].
To address the shortcomings in current colonoscopy practice, research has been directed at standardizing colonoscopy procedures among endoscopists through the integration of artificial intelligence (AI) into colonoscopy practice. AI could provide real-time support to physicians by automatically recognizing specific polyp patterns in colonoscopy images and/or videos, as well as suggesting the most probable histology and providing a confidence level for the predicted histology. The use of such technology would help to mitigate the effects of endoscopist experience in optical diagnosis. Computer-assisted, or most recently, AI-assisted colonoscopy diagnostic systems (CAD) for detection (CADe) and classification (CADx) of colorectal polyps are currently the two main areas of research and implementation of AI in clinical practice. AI-assisted colonoscopy improves ADR and allows for reliable, operator-independent pathology prediction of colorectal polyps. However, there is still a substantial communication gap between computer and medical fields, with scientists in these two disciplines divided in terms of background knowledge, available resources, research typology, and awareness of unmet needs in clinical practice. In this review, we summarize the most important aspects of the application of CADe and CADx in routine colonoscopy practice.
DEVELOPMENT OF COMPUTER-ASSISTED DIAGNOSTIC SYSTEMS
Pairing colonoscopy devices with image-enhanced technology (i.e., white-light endoscopy and chromoendoscopy) has improved the quality of care for patients by increasing the precision of colonoscopy procedures[4]. Recently, research efforts have focused on integrating computational power and previously collected data to enhance the simultaneous detection and classification of colonoscopy images or videos and to support endoscopists in their decisions about the presence and/or histology of a polyp.
Machine learning is a subset of AI in which mathematical methods are used to develop an algorithm from given data (e.g., polyp images or videos) so that it can recognize the same pattern, or perform a specific task, in unseen or unknown data[17]. The final output of these systems (e.g., detection or classification of polyps) is based on pre-defined features or on the extraction of the most relevant image features (e.g., polyps), which may help in the specification, detection, or classification of a new image. In conventional machine learning (i.e., handcrafted models), a researcher manually introduces the clinically relevant polyp features to the machine learning algorithm. In contrast, in the most advanced machine learning method, called deep learning, polyp features, whether clinically relevant or not, are automatically extracted by the algorithm without prior introduction by a researcher. As a result, the output is based on the capture and summary of complex polyp characteristics, either for detection (i.e., discrimination of a polyp from the background mucosa) or for prediction of histopathology (i.e., neoplastic or non-neoplastic)[17]. Deep learning employs deep neural networks (DNNs), which imitate the complex interconnected neural networks of the human brain. The artificial neurons are arranged in several detection (convolution) and pooling layers, each taking weighted data from the preceding layer, processing it, and passing the output to the next layer. Each layer acts as a "step of abstraction"[17], forming a hierarchy of common features that grow in complexity throughout the layers (i.e., edge, basic shape, object, class prediction). In other words, each layer extracts useful and relevant features from the given data that facilitate classification of the images. When data are presented, the DNN performs repeated iterations of a previously chosen model (e.g., support vector machines, random forests, or neural networks) throughout the deeper layers, so-called hierarchical feature learning[17]. For computer-assisted colonoscopy, development of the AI model is primarily based on supervised data, i.e., data retrospectively labeled by one or a group of expert endoscopists. For example, in CADx, colonoscopy images or videos are labeled as neoplastic or non-neoplastic based on the reference standard of pathology results (Figure 1), which have been reviewed and finalized by consensus among several pathologists. In CADe, polyp images or videos are reviewed by experienced endoscopists, and polyp borders are delineated by consensus. Ultimately, the trained algorithm can identify the presence of a polyp or discriminate between a neoplastic and a non-neoplastic polyp (Figure 2)[17]. However, there are some shortcomings and barriers to the development and implementation of CAD systems in real-time endoscopy practice, as discussed below.
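To make the layered feature hierarchy concrete, the following minimal sketch (Python/PyTorch) shows how a small convolutional network could map a polyp image to a neoplastic vs non-neoplastic prediction. The network, its dimensions, and the synthetic input are illustrative assumptions only and do not correspond to any of the CAD systems discussed in this review.

```python
# Minimal illustrative sketch (not any published CADx system): a small
# convolutional network mapping an RGB polyp image to a neoplastic /
# non-neoplastic probability, mirroring the layered feature hierarchy above.
import torch
import torch.nn as nn

class TinyPolypNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(  # detection (convolution) + pooling layers
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # edges
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # basic shapes
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),  # object-level features
        )
        self.classifier = nn.Linear(64, 2)  # class prediction: non-neoplastic vs neoplastic

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyPolypNet()
dummy_frame = torch.randn(1, 3, 224, 224)        # one synthetic 224 x 224 RGB frame
probs = torch.softmax(model(dummy_frame), dim=1) # untrained network, so probabilities are arbitrary
print(probs)
```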
Figure 2 Detection of a colorectal polyp by the ENDOAID computer-aided detection system for colonoscopy.
The green box delineates the area containing a polyp.
Datasets
The data used to develop a CAD system will be divided into three or more datasets: One training dataset to build the AI model, one validation dataset to check the generalizability of the model, and at least one test dataset from another source of data to test the performance of the model[17]. Commonly, training and validation data are derived from the same source (i.e., colonoscopies performed at a single center); however, it is crucial to avoid overlap of data; otherwise, evaluation of the model hyperparameters would be flawed and would lead to “model overfitting.” Model overfitting is an error in modeling that occurs when the model is too tightly fitted to the training data and random fluctuations in the training data are learned as concepts by the model. The problem is that the fitted model does not generalize to new data due to its low bias and high variance. Overfitting can be avoided by tight monitoring of the model during the training by constantly evaluating the model performance in the training and validation data[17].
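As an illustration of the principle that training, validation, and test data must not overlap, the sketch below splits a hypothetical annotation table at the patient level so that frames from one patient never appear in more than one dataset. The table, its column names, and its values are assumptions made for the example, not data from any cited study.

```python
# Illustrative sketch: splitting annotated colonoscopy frames into training,
# validation, and test sets at the patient level, so that frames from the same
# patient never appear in more than one set (avoiding the overlap described above).
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical annotation table; column names and values are assumptions.
df = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 3, 4, 5, 6, 7, 8],
    "image_path": [f"frame_{i}.png" for i in range(10)],
    "label":      [1, 1, 0, 1, 0, 0, 1, 0, 1, 0],  # 1 = neoplastic, 0 = non-neoplastic
})

# First split: hold out ~20% of patients as the test set.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_val_idx, test_idx = next(splitter.split(df, groups=df["patient_id"]))
train_val, test = df.iloc[train_val_idx], df.iloc[test_idx]

# Second split: divide the remaining patients into training and validation sets.
splitter2 = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, val_idx = next(splitter2.split(train_val, groups=train_val["patient_id"]))
train, val = train_val.iloc[train_idx], train_val.iloc[val_idx]

# No patient may contribute frames to more than one dataset.
assert set(train["patient_id"]).isdisjoint(val["patient_id"])
assert set(train["patient_id"]).isdisjoint(test["patient_id"])
assert set(val["patient_id"]).isdisjoint(test["patient_id"])
```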
Researchers should use large and heterogeneous data, including normal and abnormal colonoscopies. A sufficient number of colonoscopy images or video frames would ensure a robust evaluation of model performance. Data should ideally be collected from multiple centers and diverse patients in terms of race, age, sex, and medical issues.
A lack of ground truth data or reliable annotated "big data" for generating effective and high-performance AI models could limit the broad application of CAD systems in clinical settings[18]. This is a challenging goal to achieve, as it requires millions of colonoscopy images and videos to be annotated by multiple highly experienced experts to ensure consensus on ambiguous images. Annotation and data labeling by experts should follow a uniform and standardized protocol; otherwise, the generalizability and performance evaluation of the model will be unreliable.
Gold standard comparison
The absence of a “gold standard” for diagnosing polyp histology would affect the accuracy of CAD performance. Although pathology results are currently regarded as the reference standard, the interobserver agreement among pathologists is not 100%; polyp histology determined by one pathologist might be different from that of another pathologist when reassessing the same specimen slides[19-22]. Therefore, the pathology data used for AI models must be re-evaluated by several pathologists prior to inclusion to ensure agreement on polyp pathology.
Technical transparency
The application of CAD in routine practice is the product of interdisciplinary collaboration between medical and AI researchers. A recent review demonstrated that researchers often fail to report AI model characteristics effectively[23]. Researchers should clearly define and report the AI model architecture and hyperparameters, including the number of deep layers and the learning rate. The definition and testing of hyperparameters are crucial to the validation process owing to their direct effect on the model's performance; optimal model generalizability in the validation step implies the correct choice of hyperparameters. Researchers should also briefly describe the source of data and the process of data selection, as well as the number of patients, image/video frames, normal colonoscopies (i.e., without polyp identification), colonoscopy centers, and participating endoscopists together with their level of expertise[17].
Furthermore, researchers should adopt appropriate techniques to prevent model overfitting. Data leakage may occur when results on the test dataset, rather than the validation dataset, are used to tune the model parameters; the model may then overfit to supposedly unseen data, resulting in a biased estimate of model performance. Restricting training to high-quality still images, rather than videos that capture the large variability of real colonoscopy imagery, may also increase the risk of overfitting.
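The following self-contained sketch illustrates validation-based early stopping, one common way to monitor overfitting during training. The simulated validation-loss curve is an assumption used only to make the loop runnable; in a real pipeline the loss would come from the model evaluated on the validation dataset, and the test dataset would be evaluated only once at the end.

```python
# Illustrative early-stopping loop: training is stopped once the validation
# loss has not improved for a fixed number of epochs, guarding against
# overfitting. The loss values are simulated so the example runs on its own.
import numpy as np

rng = np.random.default_rng(0)

def simulated_validation_loss(epoch):
    """Hypothetical stand-in for evaluating the model on the validation set."""
    drift = 0.02 * epoch if epoch > 10 else 0.0   # loss starts rising after epoch 10
    return 1.0 / (epoch + 1) + drift + rng.normal(0, 0.01)

best_val_loss, best_epoch, patience, stale = float("inf"), 0, 5, 0
for epoch in range(100):
    val_loss = simulated_validation_loss(epoch)   # validation data only; test data is never consulted here
    if val_loss < best_val_loss:
        best_val_loss, best_epoch, stale = val_loss, epoch, 0   # keep the best model so far
    else:
        stale += 1
        if stale >= patience:
            break                                 # stop before the model overfits further

print(f"stopped at epoch {epoch}; best validation loss {best_val_loss:.3f} at epoch {best_epoch}")
```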
Computer-assisted polyp detection system
In the context of CAD, although the shift from separate engineering and medical disciplines to combined medical and engineering research has gained momentum over the last decade, pilot studies established the idea of CADe as early as 2003[24,25]. The early hand-crafted AI models used pre-defined polyp features (e.g., color- and/or texture-based features) and annotated colonoscopy videos for the detection of colorectal polyps[25-29]. Other studies used the same approach and developed several AI models that achieved up to 90% sensitivity[30-32]. However, these studies used small and homogeneous datasets to develop and validate the AI models, raising doubts about whether the models' performance was optimal. The hand-crafted features used to build the models led to suboptimal performance, probably because of impaired feature recognition and description, and a high rate of false-positive detections owing to the presence of colonic folds, blood vessels, and feces in the lateral view.
After the invention of DNNs, important polyp features could be recognized automatically. Subsequently, the accuracy and sensitivity of models improved, signaling the great potential of CADe applications. Recently, Yamada et al[33] developed a CADe system using a supervised DNN and validated the system using a dataset of 705 still images of 752 lesions and 4135 still images of noncancerous tissue. This system performed well, with a sensitivity and specificity of 97.3% and 99.0%, respectively, and an area under the curve (AUC) of 0.975 in the validation set. Misawa et al[34] developed a model based on 546 short colonoscopy videos, comprising 155 polyp-positive and 391 polyp-negative videos. Two experts retrospectively annotated the videos for polyp presence to provide a gold standard for comparison. The model achieved sensitivity, specificity, and accuracy of 90.0%, 63.3%, and 76.5%, respectively; the polyp detection rate and false-positive detection rate were 95% and 60%, respectively. Other significant research used a large dataset for training an AI model, comprising 8641 annotated images from over 2000 colonoscopies[35]. The model showed excellent detection capability, with an AUC of 99% and an accuracy of 96.4%, and its performance was superior to that of experts. The authors also tested model performance in 20 colonoscopy videos with a total duration of 5 h, during which colonoscopists removed 28 polyps. After the videos were reviewed by four independent experts, eight additional polyps were identified without AI assistance (36 polyps in total) and 17 additional polyps were detected with AI assistance (45 polyps in total). The model had a false-positive rate of 7%.
Research with a prospective design focusing on the evaluation of the real-time performance of CADe is scarce. Wang et al[36] conducted a prospective non-blinded clinical trial to measure ADR with and without the application of CADe. With 522 and 536 colonoscopies in the control and intervention arms, respectively, the authors found a statistically significant increase in ADR (29.1% vs 20.3%) and in the number of adenomas per patient (0.53 vs 0.31) when CADe was used. The false-positive rate was 7.5% per colonoscopy, and there was no significant difference in procedure time. CADe detected a higher number of diminutive adenomas and hyperplastic polyps, which carries a risk of unnecessary polypectomies, additional pathology examinations, and longer procedure times. To date, the generalizability of this system has not been tested in Western clinical settings.
In contrast to the results of the latter study, Klare et al[37] prospectively evaluated endoscopist performance with CADe assistance during real-time colonoscopy procedures in 55 patients. However, the endoscopists observed only the regular monitor, while an independent investigator in a separate room observed a monitor displaying the real-time outputs of the CADe system; the endoscopists were therefore blinded to the CADe outputs. This system did not increase the precision of polyp detection in real-time practice: in the per-patient analysis, the application of CADe resulted in endoscopists achieving a lower ADR (29.1% vs 30.9%); in the per-polyp analysis, CADe detected only 55 of the 73 polyps previously detected by endoscopists. Tables 1 and 2 show a summary of recent studies evaluating CADe systems.
Table 1 Summary of the randomized controlled trials involving computer-aided detection for colonoscopy.
Columns: Ref.; Year; Study design; Study aim; CADe system; Image modality; Number of patients in the CADe group; Number of patients in the control group; Number of polyps (CADe vs control group); Adenoma detection rate (%) (CADe vs control group); Polyp detection rate (%) (CADe vs control group); False-positive rate (%) (CADe vs control group); Withdrawal time (CADe vs control group), min ± SD.
Entry: Study aim: to assess the effectiveness of a CADe system for improving the detection of colon adenomas and polyps, and to analyse the characteristics of polyps missed by endoscopists | CADe system: real-time automatic polyp detection system (Shanghai Wision AI Co., Ltd.) based on an artificial neural network (SegNet architecture) | Image modality: real-time video stream | Patients: 484 (CADe) vs 478 (control) | Polyps: 809 (501 vs 308) | ADR: 34.0 vs 28.0, P = 0.030, OR = 1.36, 95%CI: 1.03-1.79 | Polyp detection rate: 52.0 vs 37.0, P < 0.0001, OR = 1.86, 95%CI: 1.44-2.41.
Entry: Study aim: to develop an automatic quality control system and to investigate whether it could increase the detection of polyps and adenomas in real clinical practice | CADe system: five deep learning convolutional neural networks (DCNNs) based on AlexNet, ZFNet, and YOLO V2 | Image modality: real-time video stream | Patients: 308 (CADe) vs 315 (control) | Polyps: 273 (177 vs 96) | ADR: 28.9 vs 16.5, P < 0.001, OR = 2.055, 95%CI: 1.397-3.024 | Polyp detection rate: 38.3 vs 25.4, P = 0.00, OR = 1.824, 95%CI: 1.296-2.569.
Table 2 Summary of the non-controlled studies involving computer-aided detection for colonoscopy.
Entry: Patients/colonoscopies: NA/24 colonoscopy videos containing 31 different polyps | Images/frames: NA/Experiment A: 612 polyp images from all 24 videos; Experiment B: 47886 frames from the 24 videos | Performance: Experiment A accuracy, small vs all polyps: 77.5% (95%CI: 71.5%-82.6%) vs 66.2% (95%CI: 61.4%-70.7%), P < 0.01; Experiment B AUC, high-quality frames vs all frames: 0.79 (95%CI: 0.70-0.87) vs 0.75 (95%CI: 0.66-0.83).
Entry: Datasets: several training and validation sets: (1) Cross-validation on the 8641 images; (2) Training on the 8641 images and testing on the 9 videos, 11 videos, and independent dataset; and (3) Training on the 8641 images and 9 videos and testing on the 11 videos and independent dataset.
Computer-assisted polyp diagnosis system
Computer-assisted diagnosis of the histopathology of colorectal polyps has become an area of significant research interest because of its potential to prevent the resection of low-risk polyps and reduce the number of unnecessary histopathology examinations. Many studies have successfully developed and validated CADx models, the use of which would allow the "diagnose and leave" strategy to be implemented. In a prospective pilot study, data from 128 patients undergoing colonoscopy with NBI were used to test a CADx system (209 polyps detected and removed), and three polyp features were used to build the AI model: Mean vessel length, vessel circumference, and mean brightness within detected blood vessels[38]. The results showed that the endoscopists' ability to predict polyp histology was superior to that of CADx, which had a sensitivity of 90% and specificity of 70.2% for differentiating neoplastic from non-neoplastic images, with histopathology as the gold standard. The system's diagnostic performance was compared with that of endoscopists, who were blinded to the histopathology reference standard. Endoscopists accurately predicted polyp histology with a sensitivity of 93.8% and specificity of 85.7% when there was interobserver agreement. In cases of disagreement between endoscopists, the suggested safe prediction of polyp histology (i.e., classification as neoplastic) produced a sensitivity of 96.9% and specificity of 71.4%. Overall, CADx could predict polyp histology with approximate sensitivity and specificity of 90% and 70%, respectively; however, the overall correct classification rate was moderate (85.3%). Notably, this AI algorithm was not fully automated; thus, its real-time performance in a clinical setting remains to be determined. Another limitation of this study was the use of data from NBI colonoscopies. Although NBI may assist polyp classification, its use may cast doubt on the generalizability of the model, especially in clinical settings where NBI is not available.
The real-time evaluation of CADx is important if the technology is to be integrated into clinical practice. Some studies have used the real-time decision outputs from support vector machines for building CADx algorithms, with promising results[39-43]. Moreover, Chen et al[44] demonstrated that an AI model could accurately predict the histopathology of 284 diminutive polyps, comprising 96 hyperplastic and 188 neoplastic polyps diagnosed using NBI, with 96.3% sensitivity, 78.1% specificity, 91.5% NPV, and 89.6% positive predictive value (PPV). This study, and the study by Byrne et al[45] that used a combination of CADe and CADx systems (described below), are remarkable in that they achieved the NPV threshold of ≥ 90% recommended by the ASGE PIVI statement, favoring the implementation of the "diagnose and leave" strategy for diminutive rectosigmoid polyps[46]. However, the results of the former study need to be confirmed in a prospective study, ideally a controlled trial, in which the probability of selection bias is lower and the AI model can be compared with a conventional setting (without AI).
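As a worked example of the PIVI NPV benchmark, the following calculation uses hypothetical confusion-matrix counts chosen to approximately reproduce the figures quoted above for Chen et al[44]; the exact counts are assumptions for illustration only and are not taken from the published study.

```python
# Worked example (hypothetical counts): checking a CADx model against the
# ASGE PIVI benchmark of >= 90% NPV for adenomatous histology in diminutive
# rectosigmoid polyps. Adenoma is treated as the positive class.
tp, fp, tn, fn = 181, 21, 75, 7   # assumed counts for 284 polyps

sensitivity = tp / (tp + fn)      # ~0.96
specificity = tn / (tn + fp)      # ~0.78
ppv         = tp / (tp + fp)      # ~0.90
npv         = tn / (tn + fn)      # ~0.91

print(f"sensitivity={sensitivity:.1%} specificity={specificity:.1%} PPV={ppv:.1%} NPV={npv:.1%}")
print(f"PIVI 'diagnose-and-leave' NPV threshold met: {npv >= 0.90}")
```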
More prospective studies assessing CADx are required to support its integration into clinical practice. The existing prospective studies reported high and favorable diagnostic performance, providing strong evidence to support the real-time application of CADx[47,48]. In contrast, the AI models developed and tested in a prospective trial by Kuiper et al[49] did not show sufficient power for differentiating adenomatous from non-adenomatous lesions. Another CADx model, evaluated in a prospective study by Rath et al[50], produced only moderate accuracy, sensitivity, and specificity (84.7%, 81.8%, and 85.2%, respectively), although the NPV was relatively high at 96.1%. This model would therefore allow diminutive rectosigmoid polyps to be diagnosed and left in situ without resection. The authors suggested that the low prevalence of neoplastic polyps relative to hyperplastic polyps in their dataset could explain the model's moderate diagnostic performance, and might proportionately result in an overestimation of the NPV and an underestimation of the accuracy and PPV of the model. Table 3 shows a summary of recent studies evaluating CADx systems.
Table 3 Summary of the non-controlled studies involving computer-aided diagnosis for colonoscopy, including studies with combined detection and diagnosis systems.
Columns: Ref.; Year; Study design; Study aim; System; Number of patients/colonoscopies used for training/test datasets (total); Number of colonoscopy/polyp images/videos used in training/test datasets.
Entry: Study aim: distinguishing neoplastic from non-neoplastic lesions | System: CADx based on support vector machines (SVMs) | Patients/colonoscopies: NA | Data: 979 images containing 381 non-neoplasms and 598 neoplasms in the training dataset; 100 images containing 50 non-neoplasms and 50 neoplasms in the test dataset.
Entry: Study aim: distinguishing diminutive (≤ 5 mm) neoplastic from non-neoplastic lesions | System: CADx based on a deep convolutional neural network (DCNN) | Data: training dataset of 60089 frames from 223 polyp videos (29% NICE type 1, 53% NICE type 2, and 18% normal mucosa with no polyp); validation dataset of 40 videos (NICE type 1, NICE type 2, and two videos of normal mucosa); test dataset of 125 consecutively identified diminutive polyps, comprising 51 hyperplastic polyps and 74 adenomas.
The ideal CAD system would support the simultaneous detection and classification of polyps to optimize colonoscopy outcomes and achieve the best level of CRC prevention. A recent study evaluated the real-time application of CADx in combination with CADe[45]. The validated model was tested on a series of 125 diminutive polyps, comprising 51 hyperplastic polyps and 74 adenomas. The combined model could not predict histopathology in 15% of polyps. For the remaining 106 polyps, which were histologically predicted with high confidence, the AI model demonstrated an accuracy of 94%, sensitivity of 98%, specificity of 83%, NPV of 97%, and PPV of 90%. In a notable study, Byrne et al[51] developed a new platform using three distinct AI CADe and CADx algorithms to provide endoscopists with a full workflow from detection to classification: an NBI light detector, a polyp detector, and an optical biopsy model. The NBI light detector runs throughout the colonoscopy procedure to ensure the detection of all colorectal polyps under white-light imaging, and the optical biopsy model provides polyp classification using NBI. The NBI light model achieved an excellent accuracy of 99.94% when tested on 21804 unseen colonoscopy video frames. However, the detection mode using white light resulted in a sensitivity of only 79%. The optical biopsy model accurately classified 97.6% of polyps, which was significantly higher than a previous CADx model tested by the same research team[45], and had a sensitivity of 95.95%, specificity of 91.66%, and NPV of 93.6% for polyp classification.
QUALITY ASSESSMENT OF COLONOSCOPY BY COMPUTER
Few studies have evaluated the ability of AI-assisted systems to accurately and automatically assess the quality of a colonoscopy procedure, including the identification of critical anatomical landmarks, especially when the endoscopic field is blurry[52,53]. Filip et al[53] developed a "Colometer" system that rates colonoscopy quality based on the percentage of the withdrawal time with adequate visualization. This system could detect the factors associated with optimal real-time visualization of the mucosa, including image clarity, withdrawal velocity, and level of bowel cleanliness. A dataset of expert-annotated images and videos was used to train the AI model, and the authors compared the quality rated by this system with that rated by three independent experts; there was a strong correlation between the AI and expert quality ratings (Spearman ρ = 0.65, P = 0.01). In another study, a system comprising two AI algorithms was designed to automatically detect the appendiceal orifice in colon images or videos[54]. The first algorithm was developed to detect the appendiceal orifice in endoscopic images based on local shape, lighting, and intensity differences from a normal edge direction. The second algorithm was designed to detect the appendiceal orifice in colonoscopy videos using a frame intensity histogram. The system detected the orifice in images with an average sensitivity and specificity of 96.86% and 90.47%, respectively, and correctly classified 21 of 23 colonoscopy videos (accuracy 91.30%).
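As a simple illustration of how agreement between automated and expert quality ratings can be quantified, the sketch below computes a Spearman rank correlation on hypothetical withdrawal-quality scores; the values are assumptions for the example and are not data from the cited studies.

```python
# Illustrative agreement check between automated and expert quality ratings.
# Scores are hypothetical; the Colometer study reported a Spearman rho of 0.65.
from scipy.stats import spearmanr

ai_score     = [0.82, 0.55, 0.91, 0.40, 0.73, 0.66, 0.88, 0.35]  # assumed AI withdrawal-quality scores
expert_score = [0.80, 0.60, 0.85, 0.50, 0.70, 0.75, 0.90, 0.45]  # assumed mean expert ratings

rho, p_value = spearmanr(ai_score, expert_score)
print(f"Spearman rho = {rho:.2f}, P = {p_value:.3f}")
```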
RECOMMENDATIONS FOR FUTURE RESEARCH
Despite the potential benefits of AI in colonoscopy, regulatory approval and standardization of AI models are difficult goals to achieve for a number of reasons, as described below.
Polyp morphology
Datasets might underrepresent particular polyp morphologies that are not common findings during colonoscopy. For example, non-polypoid lesions with a flat and/or depressed morphology according to the Paris classification are more likely to harbor advanced histology or malignancy but are not a common finding during colonoscopy[55]. The endoscopic detection of non-polypoid lesions is problematic because their surface pattern resembles that of normal mucosa[56]. Moreover, serrated polyps comprise about 30% of colon polyps, with a sessile serrated polyp/adenoma (SSA/P) prevalence of less than 10%[57]. It has been shown that SSA/Ps can give rise to CRC through the serrated pathway (hyperplastic polyp-SSA/P-CRC)[58]. However, SSA/Ps are difficult to distinguish from normal mucosa or hyperplastic polyps on the basis of crypt distortion features. Research has shown that previously diagnosed hyperplastic polyps may be reclassified as SSA/Ps after pathological reassessment[19-22], particularly for larger (> 5 mm) or right-sided polyps, or when co-existing adenomas with advanced histology are present[19,21,59]. A recent meta-analysis showed that pathological reassessment of resected polyps led to a significant change in diagnosis from hyperplastic polyp to SSA/P for polyps in the right colon and polyps ≥ 5 mm (odds ratio 4.401 and 8.336, respectively)[59]. Moreover, there is poor agreement among pathologists in the determination of high-risk polyp features, owing to variation in the approaches used to prepare biopsy specimens and in levels of expertise[19,60]. Therefore, the development of an AI platform capable of detecting and distinguishing subtle adenomatous features from normal mucosa with a high level of accuracy would be a valuable clinical tool.
Metadata
Most studies have failed to assess the performance and accuracy of AI models according to polyp size, polyp location, bowel preparation score, or withdrawal time[18]. Patient information, including demographic and clinical characteristics (e.g., colonoscopy indication, disease status), procedure-related quality characteristics (e.g., bowel preparation level, withdrawal time), procedure time and room, and endoscopist fatigue (e.g., whether the procedure was performed in the morning or afternoon), comprises important factors linked to the long-term, non-endoscopic outcomes of interest. In other words, the detection and classification of colorectal polyps are intermediate outcomes of colonoscopy; the ultimate goals, depending on the primary indication of the procedure, are the prevention of interval cancer during the surveillance period or the evaluation of the effectiveness of medical therapy and the need for surgical treatment in patients with inflammatory bowel disease. As noted by Kudo et al[61], metadata are a critical component in establishing optimal AI platforms that can perform well in real-world practice under suboptimal conditions. For example, SSA/Ps are mainly located in the right colon, where endoscopic access and complete inspection of the mucosa are challenging[58]. Collecting a large number of colonoscopy videos containing SSA/Ps and cross-linking them with patient data would increase the accuracy and effectiveness of colonoscopy. Future AI models must incorporate information on polyp size and location, as well as clinical, pre-procedural, and polyp morphological characteristics, rather than focusing on polyp images and videos alone.
Prospective real-time studies
The robustness of AI platforms has not been widely evaluated in real-time clinical settings through prospective studies. Most studies have been retrospective in design and subject to selection bias. Therefore, comparisons of accuracy between models and endoscopists may falsely deviate in favor of CAD. For example, in CADe studies, researchers might exclude unclear colonoscopy or polyp images/videos; a fuzzy or blurred endoscopic view may occur when water or blood obscures the field, or when feces cover the bowel surface and prevent a complete examination. There should also be a mixture of polyp-positive and polyp-negative images from abnormal and normal colonoscopies in all training, validation, and test datasets. The development of AI models must be rigorously based on a training dataset that is preferably gathered during real-time colonoscopies. Data should be collected prospectively by both experienced and novice endoscopists to represent the actual state of practice when the model is assessed. The elimination of selection bias is most relevant to CADe systems and less so to CADx systems. Studies should be based in several centers to ensure the reproducibility of the results at the testing level. Testing CAD systems in non-academic settings will demonstrate whether the model represents actual real-world practice, where more polyps are missed and/or there is no access to advanced technologies such as NBI. In addition, real-time and multicenter studies may help to clarify the place of AI in the diagnostic process. Prospective studies would provide robust evidence to support the application of CAD and enhance endoscopists' trust in optical polyp classification[62]. Nevertheless, CAD remains an operator-dependent technology: experienced endoscopists must provide the annotated datasets used to develop the system, and the accuracy of the AI output relies on the endoscopist presenting a clear endoscopic field to the system. Challenges such as prolonged procedure times, high false-positive rates, and the inability to predict histology in the presence of feces or blood in the visual field should be mitigated to prevent suboptimal diagnosis. Physicians should continue to follow the recommended procedural measures, including sufficient bowel preparation and photo documentation, to avoid legal and insurance issues.
Researchers should prioritize prospective controlled trials to allow a precise comparison between settings that use and do not use AI platforms; otherwise, the real benefits of the AI system cannot be determined. Crossover studies, in which patients act as their own controls and undergo colonoscopy both with and without AI support, would be useful because fewer patients would be needed. In practice, the endoscopist would first detect and classify a polyp before using the AI support system to check the accuracy of the classification. This process should be performed in a time-efficient manner, as the benefit of AI assistance would be irrelevant if the procedure were significantly prolonged.
Standardization of endpoints
All research evaluating the diagnostic accuracy of CAD systems should use standardized research endpoints derived from the latest guidelines. As in other diagnostic accuracy studies, sensitivity, specificity, PPV, NPV, and AUC must be reported, as well as confusion matrices and mean average precision for multiclass classification and, where appropriate, intersection over union (IoU) or the Dice coefficient for segmentation (i.e., delineation)[63,64]. The use of such a comprehensive set of metrics would provide convincing evidence and reassure physicians about the reliability of AI tools. For example, ADR must be reported in all research evaluating CADe systems, as such systems aim to achieve complete detection of all colorectal lesions. Similarly, the NPV of CADx systems must be reported to confirm that the recommended benchmark of ≥ 90% NPV in the PIVI statement is achieved[46]. In addition, for surveillance interval assignment, the agreement between AI-based assignment and the histopathology reference standard must reach the ≥ 90% threshold recommended by the PIVI statement[46].
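For the segmentation endpoints mentioned above, the following sketch shows how IoU and the Dice coefficient could be computed from a predicted and an expert-annotated polyp mask; both masks are synthetic assumptions used only to make the calculation concrete.

```python
# Sketch of the segmentation endpoints listed above, using synthetic masks.
import numpy as np

pred = np.zeros((100, 100), dtype=bool)
pred[20:60, 20:60] = True    # hypothetical predicted polyp mask
truth = np.zeros((100, 100), dtype=bool)
truth[30:70, 30:70] = True   # hypothetical expert-annotated (ground truth) mask

intersection = np.logical_and(pred, truth).sum()
union = np.logical_or(pred, truth).sum()

iou = intersection / union                               # intersection over union
dice = 2 * intersection / (pred.sum() + truth.sum())     # Dice coefficient

print(f"IoU = {iou:.2f}, Dice = {dice:.2f}")
```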
Transparency of AI analyses
The black-box phenomenon, in which the decision-making process of a convolutional neural network cannot be deconvoluted because of its complexity, should be avoided[65,66]. An important aspect of the wide application of AI platforms is the trust that physicians and the responsible regulatory officials place in AI analyses. Research should move toward maximal transparency in the generation and validation of AI models to avoid hesitancy about their implementation in practice.
Safety and cost-effectiveness
Finally, in addition to CADe and CADx systems, a computer-based support system that aids endoscopists in selecting the most appropriate polypectomy technique is needed. Current practice involves the use of forceps to remove diminutive polyps, especially polyps up to 2 mm[67]; however, the rate of incomplete resection is lower when a snare is used for polyps ≥ 3 mm[68]. In addition to suggesting an appropriate polypectomy device, AI could also help to estimate polyp size, delineate the extent of the lesion and a safe polypectomy margin, and identify post-resection lesion remnants that indicate an incomplete resection and the need for further tissue removal at follow-up colonoscopy. The goal of such a system would be to support complete polypectomy and thereby reduce the risk of interval cancer, as about 30% of all interval cancers are thought to be caused by incomplete resection of CRC precursors[11,69,70].
In addition to addressing the challenges associated with developing reliable AI models that can be confidently employed in routine practice with high efficacy, research is needed to assess the cost-effectiveness of these systems in terms of the reduction in the number of patients diagnosed with interval cancer, the reduction in the number of unnecessary pathology evaluations for low-confidence optical predictions of polyp histology, and the facilitation of efficient physician-patient communication concerning future clinical arrangements.
Adopting newly developed AI-based techniques into routine practice and enhancing endoscopists' trust in the new devices is only possible through a symbiotic relationship between academia and industry. Such a relationship would facilitate obtaining regulatory approval from health authorities for research involving human subjects, constructing large "ground truth" datasets for developing AI models, and transferring knowledge and technology to ultimately reach the market[71]. Several manufacturers have obtained regulatory approval to launch and commercialize their AI-based colonoscopy devices around the world (Table 4); however, many of them have not provided a detailed report of their devices' performance. Further research should compare the performance of different AI-based systems in real-time settings by conducting prospective controlled trials with multiple intervention arms using different commercially available AI-based colonoscopy systems. Because of the time-consuming and costly nature of such studies, an alternative way to accelerate research is to benchmark systems using publicly available datasets such as the ASU-Mayo colonoscopy video database[29], the CVC-ClinicDB database[28], the Kvasir dataset[72], and the ETIS-Larib Polyp database. Nonetheless, these datasets contain a limited number of colonoscopy videos and images and may not reflect the true performance of an AI-based system.
Table 4 Commercially available computer-assisted colonoscopy tools that have cleared regulatory approval.
Computer-assisted system | Product | Manufacturer | Year of regulatory approval | Place of regulatory approval
CADx | EndoBRAIN | Cybernet System Corp./Olympus Corp. | 2018 | Japan
CADe | GI Genius | Medtronic Corp. | 2019 in Europe; 2021 in United States | Europe/United States
CADe | ENDO-AID | Olympus Corp. | 2020 | Europe
CADe/CADx | CAD EYE | Fujifilm Corp. | 2020 | Europe/Japan
CADe | DISCOVERY | Pentax Corp. | 2020 | Europe
CADe | EndoBRAIN-EYE | Cybernet System Corp./Olympus Corp. | 2020 | Japan
CADe | EndoAngel | Wuhan EndoAngel Medical Technology Company | 2020 | China
CADe | EndoScreener | WISION A.I. | 2020 | China
CADx | EndoBRAIN-PLUS | Cybernet System Corp./Olympus Corp. | 2020 | Japan
CADx | EndoBRAIN-UC | Cybernet System Corp./Olympus Corp. | 2020 | Japan
CADe | WISE VISION | NEC Corp. | 2021 | Europe/Japan
CADe | ME-APDS | Magentiq Eye | 2021 | Europe
CADe | CADDIE | Odin Vision | 2021 | Europe
CONCLUSION
AI research is a rapidly evolving discipline that promises to enhance physicians' performance. AI models have demonstrated the ability to compete with and even outperform endoscopists, suggesting that all endoscopists would benefit from becoming familiar with CAD technology and comfortable with the integration of AI-assisted devices into colonoscopy practice. Decision support systems are being offered as reliable tools for the detection and classification of colorectal polyps, with the primary aim of outperforming endoscopists by detecting all CRC precursors; meanwhile, the new era of AI platforms has seen attempts to build considerably more complex systems that go beyond the detection and classification of polyps. Despite recent achievements in designing and validating such systems, the current lack of AI-assisted systems that support endoscopists in monitoring colonoscopy quality, automatically annotate colonoscopy videos, suggest appropriate polypectomy devices, and indicate the completeness of polypectomy limits the role of AI in colonoscopy practice. Through the integration of the most recent advances in computer science into colonoscopy practice, it appears possible to improve the quality of diagnosis, treatment, and screening for patients. However, AI platforms are still in their infancy in terms of clinical establishment and require much more exploration and innovation. They must be trusted by all physicians, by the regulatory organizations responsible for approval for clinical use, and by patients. AI-assisted colonoscopy remains highly dependent on the endoscopist, who must attempt to present the clearest possible image or video to the AI model for analysis and then take account of other concurrent patient factors, such as a family history of CRC or the results of previous colonoscopies. The human qualities of respect and empathy must be apparent when communicating with patients to overcome any mistrust or reservations patients may have toward the new technology. Therefore, at the current stage of AI development, AI models can only "serve as a second observer, or a concurrent observer, but not an independent decision-maker"[73].
Footnotes
Provenance and peer review: Invited article; Externally peer reviewed.
REFERENCES
Rex DK, Boland CR, Dominitz JA, Giardiello FM, Johnson DA, Kaltenbach T, Levin TR, Lieberman D, Robertson DJ. Colorectal Cancer Screening: Recommendations for Physicians and Patients From the U.S. Multi-Society Task Force on Colorectal Cancer. Gastroenterology 2017; 153: 307-323
Brenner H, Chang-Claude J, Jansen L, Knebel P, Stock C, Hoffmeister M. Reduced risk of colorectal cancer up to 10 years after screening, surveillance, or diagnostic colonoscopy. Gastroenterology 2014; 146: 709-717
Doubeni CA, Corley DA, Quinn VP, Jensen CD, Zauber AG, Goodman M, Johnson JR, Mehta SJ, Becerra TA, Zhao WK, Schottinger J, Doria-Rose VP, Levin TR, Weiss NS, Fletcher RH. Effectiveness of screening colonoscopy in reducing the risk of death from right and left colon cancer: a large community-based study. Gut 2018; 67: 291-298
ASGE Technology Committee, Abu Dayyeh BK, Thosani N, Konda V, Wallace MB, Rex DK, Chauhan SS, Hwang JH, Komanduri S, Manfredi M, Maple JT, Murad FM, Siddiqui UD, Banerjee S. ASGE Technology Committee systematic review and meta-analysis assessing the ASGE PIVI thresholds for adopting real-time endoscopic assessment of the histology of diminutive colorectal polyps. Gastrointest Endosc 2015; 81: 502.e1-502.e16
van der Sommen F, de Groof J, Struyvenberg M, van der Putten J, Boers T, Fockens K, Schoon EJ, Curvers W, de With P, Mori Y, Byrne M, Bergman JJGHM. Machine learning in GI endoscopy: practical guidance in how to interpret a novel field. Gut 2020; 69: 2035-2045
Lipton ZC, Steinhardt J. Troubling trends in machine learning scholarship. 2018 Preprint. Available from: 10.1145/3316774
Hwang S, Oh J, Tavanapong W, Wong J, Groen PCd. Polyp Detection in Colonoscopy Video using Elliptical Shape Feature. 2007 IEEE International Conference on Image Processing; 2007: 465-468
Tajbakhsh N, Gurudu SR, Liang J. Automated polyp detection in colonoscopy videos using shape and context information. IEEE Trans Med Imaging 2015; 35: 630-644
Yamada M, Saito Y, Imaoka H, Saiko M, Yamada S, Kondo H, Takamaru H, Sakamoto T, Sese J, Kuchiba A, Shibata T, Hamamoto R. Development of a real-time endoscopic image diagnosis support system using deep learning technology in colonoscopy. Sci Rep 2019; 9: 14465
Misawa M, Kudo SE, Mori Y, Cho T, Kataoka S, Yamauchi A, Ogawa Y, Maeda Y, Takeda K, Ichimasa K, Nakamura H, Yagawa Y, Toyoshima N, Ogata N, Kudo T, Hisayuki T, Hayashi T, Wakamura K, Baba T, Ishida F, Itoh H, Roth H, Oda M, Mori K. Artificial Intelligence-Assisted Polyp Detection for Colonoscopy: Initial Experience. Gastroenterology 2018; 154: 2027-2029.e3
Wang P, Berzin TM, Glissen Brown JR, Bharadwaj S, Becq A, Xiao X, Liu P, Li L, Song Y, Zhang D, Li Y, Xu G, Tu M, Liu X. Real-time automatic detection system increases colonoscopic polyp and adenoma detection rates: a prospective randomised controlled study. Gut 2019; 68: 1813-1819
Byrne MF, Chapados N, Soudan F, Oertel C, Linares Pérez M, Kelly R, Iqbal N, Chandelier F, Rex DK. Real-time differentiation of adenomatous and hyperplastic diminutive colorectal polyps during analysis of unaltered videos of standard colonoscopy using a deep learning model. Gut 2019; 68: 94-100
Rex DK, Kahi C, O'Brien M, Levin TR, Pohl H, Rastogi A, Burgart L, Imperiale T, Ladabaum U, Cohen J, Lieberman DA. The American Society for Gastrointestinal Endoscopy PIVI (Preservation and Incorporation of Valuable Endoscopic Innovations) on real-time endoscopic assessment of the histology of diminutive colorectal polyps. Gastrointest Endosc 2011; 73: 419-422
Mori Y, Kudo SE, Misawa M, Saito Y, Ikematsu H, Hotta K, Ohtsuka K, Urushibara F, Kataoka S, Ogawa Y, Maeda Y, Takeda K, Nakamura H, Ichimasa K, Kudo T, Hayashi T, Wakamura K, Ishida F, Inoue H, Itoh H, Oda M, Mori K. Real-Time Use of Artificial Intelligence in Identification of Diminutive Polyps During Colonoscopy: A Prospective Study. Ann Intern Med 2018; 169: 357-366
Byrne MF, Soudan F, Henkel M, Oertel C, Chapados N, Echagüe FJ, Ghalehjegh SH, Guizard N, Giguère S, MacPhail ME. Mo1679 Real-time artificial intelligence "full colonoscopy workflow" for automatic detection followed by optical biopsy of colorectal polyps. Gastrointest Endosc 2018; 87: AB475
IJspeert JE, de Wit K, van der Vlugt M, Bastiaansen BA, Fockens P, Dekker E. Prevalence, distribution and risk of sessile serrated adenomas/polyps at a center with a high adenoma detection rate and experienced pathologists. Endoscopy 2016; 48: 740-746
Rex DK, Ahnen DJ, Baron JA, Batts KP, Burke CA, Burt RW, Goldblum JR, Guillem JG, Kahi CJ, Kalady MF, O'Brien MJ, Odze RD, Ogino S, Parry S, Snover DC, Torlakovic EE, Wise PE, Young J, Church J. Serrated lesions of the colorectum: review and recommendations from an expert panel. Am J Gastroenterol 2012; 107: 1315-1329
Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, Irwig L, Lijmer JG, Moher D, Rennie D, de Vet HC, Kressel HY, Rifai N, Golub RM, Altman DG, Hooft L, Korevaar DA, Cohen JF; STARD Group. STARD 2015: an updated list of essential items for reporting diagnostic accuracy studies. BMJ 2015; 351: h5527
Montavon G, Samek W, Müller KR. Methods for interpreting and understanding deep neural networks. Digit Signal Process 2018; 73: 1-15
Japkowicz N, Shah M. Evaluating Learning Algorithms: A Classification Perspective. Cambridge: Cambridge University Press; 2011
Shaukat A, Kaltenbach T, Dominitz JA, Robertson DJ, Anderson JC, Cruise M, Burke CA, Gupta S, Lieberman D, Syngal S, Rex DK. Endoscopic Recognition and Management Strategies for Malignant Colorectal Polyps: Recommendations of the US Multi-Society Task Force on Colorectal Cancer. Gastroenterology 2020; 159: 1916-1934.e2
Pogorelov K, Randel KR, Griwodz C, Eskeland SL, de Lange T, Johansen D, Spampinato C, Dang-Nguyen DT, Lux M, Schmidt PT. Kvasir: A multi-class image dataset for computer aided gastrointestinal disease detection. Proceedings of the 8th ACM on Multimedia Systems Conference; 2017: 164-169
Fernández-Esparrach G, Bernal J, López-Cerón M, Córdova H, Sánchez-Montes C, Rodríguez de Miguel C, Sánchez FJ. Exploring the clinical potential of an automatic colonic polyp detection method based on the creation of energy maps. Endoscopy 2016; 48: 837-842
Gong D, Wu L, Zhang J, Mu G, Shen L, Liu J, Wang Z, Zhou W, An P, Huang X, Jiang X, Li Y, Wan X, Hu S, Chen Y, Hu X, Xu Y, Zhu X, Li S, Yao L, He X, Chen D, Huang L, Wei X, Wang X, Yu H. Detection of colorectal adenomas with a real-time computer-aided system (ENDOANGEL): a randomised controlled study. Lancet Gastroenterol Hepatol 2020; 5: 352-361
Repici A, Badalamenti M, Maselli R, Correale L, Radaelli F, Rondonotti E, Ferrara E, Spadaccini M, Alkandari A, Fugazza A, Anderloni A, Galtieri PA, Pellegatta G, Carrara S, Di Leo M, Craviotto V, Lamonaca L, Lorenzetti R, Andrealli A, Antonelli G, Wallace M, Sharma P, Rosch T, Hassan C. Efficacy of Real-Time Computer-Aided Detection of Colorectal Neoplasia in a Randomized Trial. Gastroenterology 2020; 159: 512-520.e7
Kudo SE, Misawa M, Mori Y, Hotta K, Ohtsuka K, Ikematsu H, Saito Y, Takeda K, Nakamura H, Ichimasa K, Ishigaki T, Toyoshima N, Kudo T, Hayashi T, Wakamura K, Baba T, Ishida F, Inoue H, Itoh H, Oda M, Mori K. Artificial Intelligence-assisted System Improves Endoscopic Identification of Colorectal Neoplasms. Clin Gastroenterol Hepatol 2020; 18: 1874-1881.e2