Minireviews Open Access
Copyright ©The Author(s) 2021. Published by Baishideng Publishing Group Inc. All rights reserved.
Artif Intell Gastroenterol. Jun 28, 2021; 2(3): 77-84
Published online Jun 28, 2021. doi: 10.35712/aig.v2.i3.77
Biophysics inspired artificial intelligence for colorectal cancer characterization
Niall P Hardy, Jeffrey Dalli, Ronan A Cahill, UCD Centre for Precision Surgery, Dublin 7 D07 Y9AW, Ireland
Pól Mac Aonghusa, IBM Research, IBM Research Ireland, Dublin 15 D15 HN66, Ireland
Peter M Neary, Department of Surgery, University Hospital Waterford, University College Cork, Waterford X91 ER8E, Ireland
Ronan A Cahill, Department of Surgery, Mater Misericordiae University Hospital (MMUH), Dublin 7, Ireland
ORCID number: Niall P Hardy (0000-0002-7036-3910); Jeffrey Dalli (0001-0001-0001-0001); Pól Mac Aonghusa (0001-0001-0001-0002); Peter M Neary (0000-0002-9319-286X); Ronan A Cahill (0000-0002-1270-4000).
Author contributions: Hardy NP, Dalli J, Mac Aonghusa P, Neary PM and Cahill RA were all involved in the research, planning and construction of this piece.
Conflict-of-interest statement: Cahill RA receives speaker fees from Stryker Corp, Johnson and Johnson/Ethicon and Olympus, consultancy fees from Touch Surgery and DistalMotion, and research funding from Intuitive Surgical, and holds research funding from the Irish Government in collaboration with IBM Research in Ireland and Deciphex and from EU Horizon 2020 with Palliare. Hardy NP and Dalli J are employed as researchers in this collaboration.
Open-Access: This article is an open-access article that was selected by an in-house editor and fully peer-reviewed by external reviewers. It is distributed in accordance with the Creative Commons Attribution NonCommercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/Licenses/by-nc/4.0/
Corresponding author: Ronan A Cahill, FRCS, MBChB, MD, Professor, Department of Surgery, Mater Misericordiae University Hospital (MMUH), 47 Eccles Street, Dublin 7, Ireland. ronan.cahill@ucd.ie
Received: January 28, 2021
Peer-review started: January 29, 2021
First decision: May 2, 2021
Revised: May 21, 2021
Accepted: June 18, 2021
Article in press: June 18, 2021
Published online: June 28, 2021
Processing time: 157 Days and 6.6 Hours

Abstract

Over the last ten years, artificial intelligence (AI) methods have begun to pervade even the most common everyday tasks such as email filtering and mobile banking. While the necessary quality and safety standards may have understandably slowed the introduction of AI to healthcare when compared with other industries, we are now beginning to see AI methods becoming more available to the clinician in select settings. In this paper we discuss current AI methods as they pertain to gastrointestinal procedures, including both gastroenterology and gastrointestinal surgery. The current state of the art for polyp detection in gastroenterology is explored, with a particular focus on deep learning: its strengths as well as some of the factors that may limit its application to the field of surgery. The use of biophysics (utilizing physics to study and explain biological phenomena) in combination with more traditional machine learning is also discussed and proposed as an alternative approach that may solve some of the challenges associated with deep learning. Past and present uses of biophysics-inspired AI methods, such as the use of fluorescence-guided surgery to aid in the characterization of colorectal lesions, are used to illustrate the role biophysics-inspired AI can play in the exciting future of the gastrointestinal proceduralist.

Key Words: Gastroenterology; Artificial intelligence; Gastrointestinal surgery; Deep learning; Biophysics; Machine learning

Core Tip: In this piece we provide an overview of the current state of the art in gastroenterology and gastrointestinal surgery. We discuss current deep learning artificial intelligence methods for colorectal lesion detection and characterization, and explore biophysics-inspired artificial intelligence methods and the potential role they can play in the future of gastroenterological practice.



INTRODUCTION

One of the most fulfilling yet challenging aspects of medical practice revolves around the art of correct decision-making. Current training models within medicine address this through experiential learning and graded autonomy over time, along with subspecialization, in order for an individual to reach competency and, ideally, mastery within their chosen field. Despite the now widespread use of decision support systems in areas such as manufacturing and business, automated decision support for the modern clinician in clinical practice remains in its infancy. The last ten years have seen all medical specialties introduce artificial intelligence (AI) methods as a topic for research, and AI is increasingly beginning to impact clinical practice as supportive data accrue. Success rates have varied, however, with areas such as radiology (chest X-ray and mammogram interpretation) and ophthalmology (retinal disease progression) emerging as early beneficiaries[1-3]. Increasing interest is developing regarding the application of these principles to gastrointestinal disease and its interventions.

GASTROINTESTINAL INTERVENTION

The practice of gastroenterological endoscopy has also seen promising developments regarding in situ determination of colonic lesions through AI methods, culminating in the recent launch of commercially approved AI software (GI-Genius, Medtronic, MN, United States) to aid in the detection of colorectal polyps at colonoscopy[4]. Commencing in 2003, initial endeavour in this domain involved early computer-aided detection software performing post hoc analysis on static images (“The Colorectal Lesion Detector System” by Maroulis et al[5] and Karkanis et al[6]). The state of the art thereafter quickly progressed to post hoc video, and subsequently real-time video, analysis. Recently published trials have shown significantly improved polyp detection rates with these technologies, along with indicators of a potential ability to characterize lesions (hyperplastic vs adenoma) in some cases[7,8]. The aforementioned studies all employ deep learning (DL), a subset of machine learning within AI that rose to prominence in the mid-2010s, as their modus operandi[9]. DL capitalizes upon recent advances in computing capabilities to implement learning algorithms consisting of many networked layers of interconnected processing units, known as neurons, arranged as neural networks. For colonoscopy, DL architectures best suited to image recognition, such as convolutional neural networks, are most applicable, as sketched below.
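
By way of illustration, the following is a minimal sketch of a convolutional neural network for frame-level polyp classification, written in Python with PyTorch. The architecture, input size and two-class output are our own assumptions for demonstration; they do not represent the design of any commercial system.

```python
# Minimal sketch of a CNN frame classifier (assumed architecture, not any
# commercial system): stacked convolution + pooling layers learn spatial
# filters, and a final linear layer emits per-frame class scores.
import torch
import torch.nn as nn

class PolypFrameCNN(nn.Module):
    def __init__(self, num_classes: int = 2):  # polyp vs no-polyp (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                  # 224 -> 112
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                  # 112 -> 56
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                  # learned spatial features
        return self.classifier(x.flatten(1))  # per-frame class scores

# One synthetic 224x224 RGB frame in place of a real colonoscopy frame.
scores = PolypFrameCNN()(torch.randn(1, 3, 224, 224))
```

In a real detection system, a model of this kind would be applied to every video frame in sequence, which is precisely why per-frame false positive rates matter so much at colonoscopy frame counts.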

The GI-Genius currently represents the “state of the art” accessible to the practicing clinician today. This “intelligent endoscopy module” acts as an adjunct to the gastroenterologist during colonoscopy, highlighting regions with visual characteristics consistent with different types of mucosal abnormality; it thus stops short of being an autonomous polyp detection tool and provides no characterization. Powered by closed, selective datasets that may not be representative of general practice (e.g., in terms of bowel prep, withdrawal times, etc.), the module neither records nor reports its findings but instead presents areas of the screen for the endoscopist to interpret, including whether to biopsy, resect or disregard. Surface feature detection learned from these datasets has yet to prove its performance in real-world practice, in particular regarding accuracy (a typical colonoscopy comprises some 50,000 frames, so even a tiny false positive frame rate could generate significant distraction: a 0.1% per-frame rate would still produce around 50 spurious alerts per procedure), explainability (“telling you the settings used in the machine”) and interpretability (“why should I believe it?”). While pertinent to Food and Drug Administration approval, these considerations are particularly important when expanding into the concept of lesion characterization and, even more so, true decision support. Like all DL, system performance depends on the assumption that all possible future polyps are represented by the previously encountered polyps upon which the system learned. Nevertheless, this system is truly groundbreaking in its existence as a commercial product and opens up the possibilities for AI integration at scale, including articulation of the value proposition of digital assistance as the norm.

Gastrointestinal surgery, an allied field, also comprises sequential steps each requiring numerous operator-led decisions, but it has unique decision-support challenges and unique barriers to AI implementation (although it too is increasingly delivered via image-driven minimally invasive approaches, whether by standard or robotic-assisted laparoscopy). DL as an AI method, while highly efficient at tasks such as image recognition, does not as readily lend itself to surgical video, where the landscape is more complex with fewer hallmarks available to exploit during structure differentiation (for example, ureter and vascular identification during dissection vs lesion detection on plain film X-ray). Current AI methods within surgical practice are limited to tasks such as instrument detection or segmentation of procedures into their procedural phases, with little AI assistance currently available for the more intricate components of surgery such as progressive tissue identification or classification during dissection[10,11]. In general, surgical procedures are not as easily represented by individual static images as in the other specialties mentioned thus far, and while video provides a deeper situational understanding for the experienced operator, it makes artificially intelligent interpretation much more challenging. Compounding this increased complexity, the datasets required to train current DL systems for surgery (i.e., many thousands of recorded surgical cases) do not currently exist in volumes comparable to endoscopic polyp images, retinal photographs, or mammograms.

BIOPHYSICS

Biophysics-inspired approaches to AI in surgery may present an alternative, or perhaps even better, a complementary/synergistic approach to current DL strategies in both gastrointestinal endoscopy and surgery. The term “biophysics” was first proposed by Karl Pearson in 1892 as an all-encompassing term for the application of physics principles to describe biological phenomena[12]. It was not until the 1950s, however, following significant advancements in physical measurement techniques, that the potential contributions of biophysics within the field of medicine could be realized. Initial endeavours sought to understand and describe biological phenomena such as haemoglobin dissociation and cell-cell interactions and structure[13,14]. The field then progressed to more complex tasks such as computerized simulation of blood flow and tissue perfusion using biological compartment models contingent on vascular parameters such as vascular density, perfusion rate and permeability[15]. More recently still, the combination of AI methods with biophysics principles has resulted in paradigm shifts in areas such as the study of protein folding and structure, and promises to modify drug research processes[16,17]. It is now possible to study and predict protein-protein interactions by combining existing knowledge of protein structural biology and biophysics with machine learning in order to make predictions about the behaviour of previously undocumented proteins[18]. This technology has many potential uses, including advancing the understanding of inflammatory signaling processes, the search for cancer-driving mutations and new drug discovery. Current mechanisms of drug development include processes such as “target deconvolution”, whereby the potential new agent, once identified, must be screened against all the known proteins in the human body. This laborious and resource-intensive task aims to identify potential drug benefits and, importantly, any off-target effects that may be undesirable. Harnessing the power of AI in conjunction with biophysics, researchers are now able to use computational modelling to simulate the physical interactions between molecules and potential target proteins[17]. Furthermore, comprehensive databanks of human proteins (the proteome) now exist with which to evaluate any new drugs. This permits in silico creation of a full pharmacological profile of any given drug molecule.

While biophysics-inspired approaches such as those mentioned have numerous benefits, such methodology can only be employed where an in-depth mechanistic understanding of all involved elements has been achieved. Utilization is therefore limited to fields with a strong human understanding of the relevant biological and physico-chemical components, and efforts may be derailed where incorrect perceptions of what is true exist.

NEAR-INFRARED ENDOLAPAROSCOPY

In contemporary gastrointestinal surgical practice, the advent of near-infrared (NIR) endolaparoscopy (combining conventional endoscopic and/or minimally invasive laparoscopic techniques with NIR imaging) provides great scope for the development of such AI algorithms. This technology utilizes an extended electromagnetic illumination wavelength (up to 800 nm) to detect exogenous agents capable of fluorescence (there is no background biological fluorescence at these wavelengths)[19]. Such agents can be profiled dynamically as well as absolutely (presence/absence) to garner information regarding the biological features of the tissue, including disease. This has already proven useful clinically in visual determination of intestinal perfusion during surgery, where the NIR imagery is presented alongside the white light appearances, but interpretation remains qualitative by the surgeon. Via AI methods, however, the added information provided by NIR tissue assessment over standard white light viewing can be combined with our existing understanding of tissue biology to enhance the proceduralist's understanding of the field in front of them and to assist them in their task. To date, indocyanine green (ICG) represents the most successful NIR agent upon which AI recommender systems have been based (and indeed it remains the only currently approved NIR fluorophore, although others are in development), and it is likely that development of such decision support systems will in turn enable broader AI development across more standard surgical imagery[20]. For now, however, ICG in combination with NIR provides an excellent test case to describe the application of biophysics-inspired AI for gastrointestinal interventions (both endoscopy and image-guided surgery).

NIR-ICG TISSUE PERFUSION

ICG is a fluorescent dye used extensively within the now established field of fluorescence-guided surgery[21]. When given intravenously it remains within the vasculature, with a half-life of 2.5-3 min, and can be seen using a near-infrared camera[22]. It is currently used as a subjective decision-making adjunct in tasks such as anatomical delineation (biliary anatomy) and tissue physiology assessment (colorectal anastomosis formation and gastric conduit formation post oesophagectomy)[23-25]. It has also been used to assist in the identification of solid organ tumours; however, the non-selective nature of the dye leaves this staining method vulnerable to false positives secondary to accumulation in other areas of pathology such as inflammation[26,27].
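
To illustrate what this intravascular half-life implies for imaging windows, the brief sketch below models first-order plasma clearance of an ICG bolus. The single-compartment assumption and the 2.75 min half-life value are simplifications for demonstration only, not a full pharmacokinetic model.

```python
# Hedged sketch of first-order ICG plasma clearance implied by the quoted
# 2.5-3 min intravascular half-life (single-compartment simplification).
import math

def icg_fraction_remaining(t_min: float, t_half_min: float = 2.75) -> float:
    """Fraction of an intravenous ICG bolus still in plasma after t_min."""
    return math.exp(-math.log(2) * t_min / t_half_min)

print(icg_fraction_remaining(5.0))  # ~0.28 of the bolus remains at 5 min
```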

Intra-operative ICG perfusion angiography to assist operator decision making during colon transection and anastomosis represents the most successful utilization of ICG in gastrointestinal surgery to date. However, trials assessing subjective surgeon interpretation of these angiograms have been equivocal in their conclusions with respect to reduced complication rates and overall patient benefit[28-30]. Numerous groups have set about quantifying these perfusion angiograms using time-fluorescence curves, with the aim of reducing subjectivity of interpretation and, ideally, automating it entirely through computer vision and AI[31-35]. Son et al[36], in their landmark paper, demonstrated the perfusion patterns seen during quantitative tracking of ICG colonic angiograms and subsequently analyzed these curves. They concluded that measurable parameters within these time-fluorescence curves, such as the fluorescence slope, the time from first fluorescence increase to half maximum value (T1/2max) and the time ratio (T1/2max/Tmax), could be used to detect areas of insufficient perfusion and reduce anastomotic complications[36]. Building on this work, Park et al[34] recently described an AI-based real-time microcirculation analysis system (AIRAM) capable of generating more accurate and consistent perfusion assessment results when compared with the original parameter-based method described above. Using a corpus of 50 training videos, the authors developed an unsupervised learning algorithm that identified 25 distinct colonic perfusion patterns during ICG inflow and outflow. Each perfusion pattern was then assigned a “risk level” or “assessment of adequacy of perfusion” (safe, intermediate, and dangerous) based on each pattern's performance in a simulator of colonic circulation. Subsequent testing on 15 unseen videos demonstrated results comparable between the original parameter-based methodology and the AI algorithm, with a computer processing time of less than 50 s.
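
To make these curve parameters concrete, the sketch below computes the fluorescence slope, Tmax, T1/2max and the time ratio from a region-of-interest time-fluorescence curve. It is an illustrative reconstruction in Python/NumPy based on the parameter definitions above, not the code used in the cited studies; the synthetic mono-exponential inflow curve and variable names are our assumptions.

```python
# Illustrative reconstruction of the curve parameters described by Son et al:
# fluorescence slope, Tmax, T1/2max and the ratio T1/2max/Tmax.
import numpy as np

def perfusion_parameters(t: np.ndarray, f: np.ndarray) -> dict:
    """t: time (s); f: ROI fluorescence intensity sampled over time."""
    f0 = f[0]                                   # baseline before ICG inflow
    i_max = int(np.argmax(f))
    t_max = t[i_max]                            # time to peak fluorescence
    half = f0 + (f[i_max] - f0) / 2.0
    i_half = int(np.argmax(f >= half))          # first crossing of half-max
    t_half_max = t[i_half]
    slope = (f[i_max] - f0) / (t_max - t[0])    # mean inflow slope
    return {"Tmax": t_max, "T1/2max": t_half_max,
            "slope": slope, "time_ratio": t_half_max / t_max}

t = np.linspace(0, 60, 601)                     # 60 s sampled at 10 Hz
f = 100 * (1 - np.exp(-t / 12))                 # synthetic inflow curve
print(perfusion_parameters(t, f))
```

On such curves, sluggish inflow (a shallow slope and a long T1/2max relative to Tmax) is what flags a region as potentially under-perfused.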

Capitalizing on the well-described biophysical differences in perfusion characteristics between malignant and benign tissues (abnormal angiogenesis such as capillary sprouting and increased interstitial pressures), we have recently demonstrated these differences in colorectal lesions using ICG to create unique signatures, and shown that these signatures can be used to discriminate tissue accurately using traditional machine learning techniques, without the need for large volumes of data and DL[37,38]. Blood flow, as well as the active and passive uptake of substances, is altered in malignant tissues. While these differences in fluorescence appearance can at times be appreciable to the human eye on screen, they are subtle, occur at different rates across the full field of view and transpire over several minutes. This complexity, along with the known variability between individuals in interpreting intra-operative fluorescence footage, necessitates computer vision to interpret these differences[39].

To develop this biophysics-inspired AI recommender, a commercially available Pinpoint Endoscopic Fluorescence Imaging System (Stryker Corp, Kalamazoo, MI, United States) was used to interrogate lesions within the distal colon transanally following intravenous administration of ICG. A bespoke fluorescence intensity tracker was then used to map intensity changes (representing blood inflow and outflow through tissue) on the multi-spectral intra-operative videos obtained. Tissues in these training videos with known pathology (healthy tissue, benign tissue, malignant tissue) were chosen as “regions of interest” and the fluorescence intensity changes tracked within these tissues to create “perfusion signatures” for each tissue type. The data from these training videos were then fitted to a parametric curve derived from a biophysical model of in vivo perfusion. A supervised machine learning classification model was designed using these training perfusion signatures and then applied to tissue signatures of previously unseen videos in real time. Using this method, the algorithm was able to successfully discriminate between healthy, cancerous and benign tissue in a pilot study of 20 patients with 95% accuracy[37,38]. Such systems, employed clinically and providing objective feedback to the endoscopic operator, would permit either immediate local resection in the case of early disease or prompt appropriate, expedited referral for definitive surgical management in the case of more advanced disease. We are currently exploring the validity of this approach in non-colonic tumours, with early results demonstrating generalizability of the principle across tissue types, and are also applying this working prototype to flexible endoscopic systems for more proximal colonic lesions.
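
The sketch below outlines the general shape of such a pipeline: fit a parametric perfusion model to each region-of-interest signature, then classify tissue from the fitted parameters. It is schematic only; the bi-exponential wash-in/wash-out model form, the random forest classifier and the synthetic training signatures are illustrative assumptions rather than the published implementation[37,38].

```python
# Schematic sketch of the described approach (assumed model form and
# classifier, not the authors' exact published method): fitted biophysical
# parameters, rather than raw pixels, are the learned features.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.ensemble import RandomForestClassifier

def washin_washout(t, a, k_in, k_out):
    """Simple ICG inflow/outflow model: rise with rate k_in, decay k_out."""
    return a * (1 - np.exp(-k_in * t)) * np.exp(-k_out * t)

def signature_features(t, f):
    """Fit the model to one ROI curve; fitted parameters become features."""
    params, _ = curve_fit(washin_washout, t, f,
                          p0=[f.max(), 0.1, 0.01], maxfev=5000)
    return params                               # (a, k_in, k_out)

# Synthetic training signatures: labels 0 = healthy, 1 = benign,
# 2 = malignant, distinguished here by an assumed outflow rate k_out.
t = np.linspace(0, 180, 361)
rng = np.random.default_rng(0)
X, y = [], []
for label, k_out in [(0, 0.004), (1, 0.008), (2, 0.015)]:
    for _ in range(20):
        f = washin_washout(t, 100, 0.08, k_out) + rng.normal(0, 1, t.size)
        X.append(signature_features(t, f)); y.append(label)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Classify one previously unseen signature from its fitted parameters.
f_new = washin_washout(t, 95, 0.07, 0.014) + rng.normal(0, 1, t.size)
print(clf.predict([signature_features(t, f_new)]))  # expect malignant (2)
```

Because each signature is reduced to a handful of biophysically meaningful parameters before learning, only small training cohorts are needed and the classifier's decision can be traced back to interpretable quantities such as inflow and outflow rates.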

CONCLUSION

Biophysics-inspired AI has numerous strengths over DL approaches when it comes to healthcare (Table 1). The transmutability of biophysics-inspired AI is not seen in DL methods, where, for example, datasets collected to train colon cancer identification algorithms likely will not translate to other cancer types. In addition to solving the issue of training data volumes, biophysics-inspired principles also provide answers to the “black box” concerns of DL use in medicine. Using DL methods, conclusions drawn by the system, while potentially accurate, are not “explainable” (the ability of the parameters used to justify the result) given the complexity of the algorithms used. This raises ethical dilemmas and accountability concerns where AI is used to direct patient treatment, such as the decision to remove tissue or leave it in situ. Along with increased explainability, biophysics-inspired modelling facilitates better system interpretability (the ability to associate a cause to an effect). The interrogation of tissue through fluorescence-guided surgery allows artificially intelligent analysis of the fundamental properties of the tissue itself, rather than pattern recognition within segmented images[40]. Furthermore, the detection and interpretation of these discrete tissue signals is likely less prone to the bias seen with other AI methods, where deficiencies in the training data used, such as under-representation of particular conditions or people, negatively impact the pattern detection capabilities of the AI method[41].

Table 1 Artificial intelligence methods in healthcare: A comparison of biophysics inspired machine learning and deep learning methods.
Criteria | Biophysics inspired machine learning | Deep learning
Principle | Identification of discriminating features within the data set prior to system training, based on already proven biophysical properties | Discriminating features/patterns in the data discovered through analysis of large databanks
Training corpus required for the system to accurately assess unseen cases | Small to moderate data cohorts | Large training data corpora required
Explainability | Settings (e.g., parameter description and number) used in algorithms are easily described | Complex algorithms utilizing numerous parameters and hyperparameters to control the learning process mean such algorithms are often poorly understood
Interpretability | Conclusions reached are easily appreciated and can be explained logically by an appropriately trained individual | Human comprehension of sophisticated algorithm predictions/results may be difficult (including for experts in the field)
Generalizability | Accurate extrapolation of results to unseen cases, as well as adaptation of such systems to other similar uses | High degree of specialization within DL systems makes adaptation to other similar uses difficult
Bias | Well described, transparent and biophysics-based features help reduce or identify bias within such systems | Bias within training datasets may be perpetuated by DL systems through subtle mechanisms that may even be imperceptible to humans

For these reasons, biophysics represents a core, although as yet underutilized, element of the next AI move in gastroenterology. This may be as a means to compensate for the apparent lack of video data that exists to train DL models, or to augment DL methods by unlocking another realm of tissue-specific information imparted by analysis of ICG behaviour within tissues. The extra information gleaned from the tissue's biology, combined with AI methods, lays the blueprint for the creation of full field-of-view topographic maps that are biologically representative of each individual lesion, and potentially even facilitates automation of procedures using fluorescent signal guidance.

Footnotes

Manuscript source: Invited manuscript

Specialty type: Gastroenterology and hepatology

Country/Territory of origin: Ireland

Peer-review report’s scientific quality classification

Grade A (Excellent): 0

Grade B (Very good): 0

Grade C (Good): C, C

Grade D (Fair): 0

Grade E (Poor): 0

P-Reviewer: Ding L, Panteris V S-Editor: Gao CC L-Editor: A P-Editor: Li JH

References
1. Wu JT, Wong KCL, Gur Y, Ansari N, Karargyris A, Sharma A, Morris M, Saboury B, Ahmad H, Boyko O, Syed A, Jadhav A, Wang H, Pillai A, Kashyap S, Moradi M, Syeda-Mahmood T. Comparison of Chest Radiograph Interpretations by Artificial Intelligence Algorithm vs Radiology Residents. JAMA Netw Open. 2020;3:e2022779.
2. Kim HE, Kim HH, Han BK, Kim KH, Han K, Nam H, Lee EH, Kim EK. Changes in cancer detection and false-positive recall in mammography using artificial intelligence: a retrospective, multireader study. Lancet Digit Health. 2020;2:e138-e148.
3. Yim J, Chopra R, Spitz T, Winkens J, Obika A, Kelly C, Askham H, Lukic M, Huemer J, Fasler K, Moraes G, Meyer C, Wilson M, Dixon J, Hughes C, Rees G, Khaw PT, Karthikesalingam A, King D, Hassabis D, Suleyman M, Back T, Ledsam JR, Keane PA, De Fauw J. Predicting conversion to wet age-related macular degeneration using deep learning. Nat Med. 2020;26:892-899.
4. Repici A, Badalamenti M, Maselli R, Correale L, Radaelli F, Rondonotti E, Ferrara E, Spadaccini M, Alkandari A, Fugazza A, Anderloni A, Galtieri PA, Pellegatta G, Carrara S, Di Leo M, Craviotto V, Lamonaca L, Lorenzetti R, Andrealli A, Antonelli G, Wallace M, Sharma P, Rosch T, Hassan C. Efficacy of Real-Time Computer-Aided Detection of Colorectal Neoplasia in a Randomized Trial. Gastroenterology. 2020;159:512-520.e7.
5. Maroulis DE, Iakovidis DK, Karkanis SA, Karras DA. CoLD: a versatile detection system for colorectal lesions in endoscopy video-frames. Comput Methods Programs Biomed. 2003;70:151-166.
6. Karkanis SA, Iakovidis DK, Maroulis DE, Karras DA, Tzivras M. Computer-aided tumor detection in endoscopic video using color wavelet features. IEEE Trans Inf Technol Biomed. 2003;7:141-152.
7. Mori Y, Kudo SE, Misawa M, Mori K. Simultaneous detection and characterization of diminutive polyps with the use of artificial intelligence during colonoscopy. VideoGIE. 2019;4:7-10.
8. Su JR, Li Z, Shao XJ, Ji CR, Ji R, Zhou RC, Li GC, Liu GQ, He YS, Zuo XL, Li YQ. Impact of a real-time automatic quality control system on colorectal polyp and adenoma detection: a prospective randomized controlled study (with videos). Gastrointest Endosc. 2020;91:415-424.e4.
9. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436-444.
10. Hashimoto DA, Rosman G, Witkowski ER, Stafford C, Navarette-Welton AJ, Rattner DW, Lillemoe KD, Rus DL, Meireles OR. Computer Vision Analysis of Intraoperative Video: Automated Recognition of Operative Steps in Laparoscopic Sleeve Gastrectomy. Ann Surg. 2019;270:414-421.
11. Cai T, Zhao Z. Convolutional neural network-based surgical instrument detection. Technol Health Care. 2020;28:81-88.
12. Glaser R. Biophysics: An Introduction. Springer, 2012.
13. Martin JL, Houde D, Petrich J, Migus A, Antonetti A, Poyart C. Biophysics of hemoglobin. International Quantum Electronics Conference; 1986 June 9; San Francisco, California, USA. Optical Society of America.
14. Waugh DF. Basic contributions to medicine by research in biophysics. JAMA. 1961;177:836-840.
15. Choi M, Choi K, Ryu SW, Lee J, Choi C. Dynamic fluorescence imaging for multiparametric measurement of tumor vasculature. J Biomed Opt. 2011;16:046008.
16. Senior AW, Evans R, Jumper J, Kirkpatrick J, Sifre L, Green T, Qin C, Žídek A, Nelson AWR, Bridgland A, Penedones H, Petersen S, Simonyan K, Crossan S, Kohli P, Jones DT, Silver D, Kavukcuoglu K, Hassabis D. Improved protein structure prediction using potentials from deep learning. Nature. 2020;577:706-710.
17. Batool M, Ahmad B, Choi S. A Structure-Based Drug Discovery Paradigm. Int J Mol Sci. 2019;20.
18. Cunningham JM, Koytiger G, Sorger PK, AlQuraishi M. Biophysical prediction of protein-peptide interactions and signaling networks using machine learning. Nat Methods. 2020;17:175-183.
19. Boni L, David G, Mangano A, Dionigi G, Rausei S, Spampatti S, Cassinotti E, Fingerhut A. Clinical applications of indocyanine green (ICG) enhanced fluorescence in laparoscopic surgery. Surg Endosc. 2015;29:2046-2055.
20. Wu D, Daly HC, Conroy E, Li B, Gallagher WM, Cahill RA, O'Shea DF. PEGylated BF2-Azadipyrromethene (NIR-AZA) fluorophores, for intraoperative imaging. Eur J Med Chem. 2019;161:343-353.
21. Dip F, Boni L, Bouvet M, Carus T, Diana M, Falco J, Gurtner GC, Ishizawa T, Kokudo N, Lo Menzo E, Low PS, Masia J, Muehrcke D, Papay FA, Pulitano C, Schneider-Koraith S, Sherwinter D, Spinoglio G, Stassen L, Urano Y, Vahrmeijer A, Vibert E, Warram J, Wexner SD, White K, Rosenthal RJ. Consensus Conference Statement on the General Use of Near-Infrared Fluorescence Imaging and Indocyanine Green Guided Surgery: Results of a Modified Delphi Study. Ann Surg. 2020.
22. Hollins B, Noe B, Henderson JM. Fluorometric determination of indocyanine green in plasma. Clin Chem. 1987;33:765-768.
23. Dip F, LoMenzo E, Sarotto L, Phillips E, Todeschini H, Nahmod M, Alle L, Schneider S, Kaja L, Boni L, Ferraina P, Carus T, Kokudo N, Ishizawa T, Walsh M, Simpfendorfer C, Mayank R, White K, Rosenthal RJ. Randomized Trial of Near-infrared Incisionless Fluorescent Cholangiography. Ann Surg. 2019;270:992-999.
24. Slooter MD, de Bruin DM, Eshuis WJ, Veelo DP, van Dieren S, Gisbertz SS, van Berge Henegouwen MI. Quantitative fluorescence-guided perfusion assessment of the gastric conduit to predict anastomotic complications after esophagectomy. Dis Esophagus. 2021;34.
25. Slooter MD, Mansvelders MSE, Bloemen PR, Gisbertz SS, Bemelman WA, Tanis PJ, Hompes R, van Berge Henegouwen MI, de Bruin DM. Defining indocyanine green fluorescence to assess anastomotic perfusion during gastrointestinal surgery: systematic review. BJS Open. 2021;5.
26. Holt D, Okusanya O, Judy R, Venegas O, Jiang J, DeJesus E, Eruslanov E, Quatromoni J, Bhojnagarwala P, Deshpande C, Albelda S, Nie S, Singhal S. Intraoperative near-infrared imaging can distinguish cancer from normal tissue but not inflammation. PLoS One. 2014;9:e103342.
27. Ishizawa T, Fukushima N, Shibahara J, Masuda K, Tamura S, Aoki T, Hasegawa K, Beck Y, Fukayama M, Kokudo N. Real-time identification of liver cancers by using indocyanine green fluorescent imaging. Cancer. 2009;115:2491-2504.
28. Alekseev M, Rybakov E, Shelygin Y, Chernyshov S, Zarodnyuk I. A study investigating the perfusion of colorectal anastomoses using fluorescence angiography: results of the FLAG randomized trial. Colorectal Dis. 2020;22:1147-1153.
29. De Nardi P, Elmore U, Maggi G, Maggiore R, Boni L, Cassinotti E, Fumagalli U, Gardani M, De Pascale S, Parise P, Vignali A, Rosati R. Intraoperative angiography with indocyanine green to assess anastomosis perfusion in patients undergoing laparoscopic colorectal resection: results of a multicenter randomized controlled trial. Surg Endosc. 2020;34:53-60.
30. Trastulli S, Munzi G, Desiderio J, Cirocchi R, Rossi M, Parisi A. Indocyanine green fluorescence angiography vs standard intraoperative methods for prevention of anastomotic leak in colorectal surgery: meta-analysis. Br J Surg. 2021;108:359-372.
31. D'Urso A, Agnus V, Barberio M, Seeliger B, Marchegiani F, Charles AL, Geny B, Marescaux J, Mutter D, Diana M. Computer-assisted quantification and visualization of bowel perfusion using fluorescence-based enhanced reality in left-sided colonic resections. Surg Endosc. 2020.
32. Lütken CD, Achiam MP, Osterkamp J, Svendsen MB, Nerup N. Quantification of fluorescence angiography: Toward a reliable intraoperative assessment of tissue perfusion - A narrative review. Langenbecks Arch Surg. 2021;406:251-259.
33. Nerup N, Svendsen MBS, Svendsen LB, Achiam MP. Feasibility and usability of real-time intraoperative quantitative fluorescent-guided perfusion assessment during resection of gastroesophageal junction cancer. Langenbecks Arch Surg. 2020;405:215-222.
34. Park SH, Park HM, Baek KR, Ahn HM, Lee IY, Son GM. Artificial intelligence based real-time microcirculation analysis system for laparoscopic colorectal surgery. World J Gastroenterol. 2020;26:6945-6962.
35. Wada T, Kawada K, Takahashi R, Yoshitomi M, Hida K, Hasegawa S, Sakai Y. ICG fluorescence imaging for quantitative evaluation of colonic perfusion in laparoscopic colorectal surgery. Surg Endosc. 2017;31:4184-4193.
36. Son GM, Kwon MS, Kim Y, Kim J, Kim SH, Lee JW. Quantitative analysis of colon perfusion pattern using indocyanine green (ICG) angiography in laparoscopic colorectal surgery. Surg Endosc. 2019;33:1640-1649.
37. Zhuk S, Epperlein JP, Nair R, Thirupati S, Mac Aonghusa P, Cahill R, O'Shea D. Perfusion Quantification from Endoscopic Videos: Learning to Read Tumor Signatures. Med Image Comput Comput Assist Interv. 2020;711-721.
38. Cahill RA, O'Shea DF, Khan MF, Khokhar HA, Epperlein JP, Mac Aonghusa PG, Nair R, Zhuk SM. Artificial intelligence indocyanine green (ICG) perfusion for colorectal cancer intra-operative tissue classification. Br J Surg. 2021;108:5-9.
39. Hardy NP, Dalli J, Khan MF, Andrejevic P, Neary PM, Cahill RA. Inter-user variation in the interpretation of near infrared perfusion imaging using indocyanine green in colorectal surgery. Surg Endosc. 2021.
40. Horwath JP, Zakharov DN, Mégret R, Stach EA. Understanding important features of deep learning models for segmentation of high-resolution transmission electron microscopy images. NPJ Comput Mater. 2020;6:108.
41. Meskó B, Görög M. A short guide for medical professionals in the era of artificial intelligence. NPJ Digit Med. 2020;3:126.