Review

The Impact of Artificial Intelligence in the Endoscopic Assessment of Premalignant and Malignant Esophageal Lesions: Present and Future

by Daniela Cornelia Lazăr 1,†, Mihaela Flavia Avram 2,*,†, Alexandra Corina Faur 3,†, Adrian Goldiş 4,†, Ioan Romoşan 1,†, Sorina Tăban 5,† and Mărioara Cornianu 5,†
1 Department V of Internal Medicine I, Discipline of Internal Medicine IV, “Victor Babeș” University of Medicine and Pharmacy Timișoara, Eftimie Murgu Sq. no. 2, 300041 Timișoara, Romania
2 Department of Surgery X, 1st Surgery Discipline, “Victor Babeș” University of Medicine and Pharmacy Timișoara, Eftimie Murgu Sq. no. 2, 300041 Timișoara, Romania
3 Department I, Discipline of Anatomy and Embryology, “Victor Babeș” University of Medicine and Pharmacy Timișoara, Eftimie Murgu Sq. no. 2, 300041 Timișoara, Romania
4 Department VII of Internal Medicine II, Discipline of Gastroenterology and Hepatology, “Victor Babeș” University of Medicine and Pharmacy Timișoara, Eftimie Murgu Sq. no. 2, 300041 Timișoara, Romania
5 Department II of Microscopic Morphology, Discipline of Pathology, “Victor Babeș” University of Medicine and Pharmacy Timișoara, Eftimie Murgu Sq. no. 2, 300041 Timișoara, Romania
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Medicina 2020, 56(7), 364; https://doi.org/10.3390/medicina56070364
Submission received: 13 June 2020 / Revised: 13 July 2020 / Accepted: 16 July 2020 / Published: 21 July 2020

Abstract: In the gastroenterology field, the impact of artificial intelligence has been investigated for the purposes of diagnostics, risk stratification of patients, improvement in the quality of endoscopic procedures and early detection of neoplastic diseases, implementation of the best treatment strategy, and optimization of patient prognosis. Computer-assisted diagnostic systems for evaluating upper endoscopy images have recently emerged as a supporting tool in endoscopy due to the risks of misdiagnosis related to standard endoscopy, differing expertise levels among endoscopists, time-consuming procedures, lack of availability of advanced procedures, increasing workloads, and the development of endoscopic mass screening programs. Recent research has tended toward computerized, automatic, and real-time detection of lesions, approaches that offer utility in daily practice. Despite promising results, certain studies might overstate the diagnostic accuracy of artificial systems, and several limitations remain to be overcome in the future. Therefore, additional multicenter randomized trials and the expansion of existing database platforms are needed to certify clinical implementation. This paper presents an overview of the literature and the current knowledge on the usefulness of different types of machine learning systems in the assessment of premalignant and malignant esophageal lesions via conventional and advanced endoscopic procedures. It introduces artificial intelligence terminology and reviews the most prominent recent research on computer-assisted diagnosis of neoplasia in Barrett’s esophagus and early esophageal squamous cell carcinoma, as well as the prediction of invasion depth in esophageal neoplasms. Furthermore, this review highlights the main directions of future doctor–computer collaborations in which machines are expected to improve the quality of medical action and routine clinical workflow, thus reducing the burden on physicians.

1. Introduction

Over time, machine learning (ML), a component of artificial intelligence (AI), has been implemented in a variety of medical specialties, such as radiology, pathology, gastroenterology, neurology, obstetrics and gynecology, ophthalmology, and orthopedics, with the goal of improving the quality of healthcare and medical diagnosis [1].
In clinical gastroenterology practice, technological developments suggest that AI could create predictive models; for instance, an ML model could stratify the risk in patients with upper gastrointestinal bleeding [2,3], establish the existence of a specific gastrointestinal disease, define the best treatment, and offer prognosis and prediction of the therapeutic response [4,5,6]. In this context, by applying ML or deep learning (DL) (AI using neural networks), clinical management in gastroenterology can begin to focus on more personalized treatment centered on the patient and based on making the best individual decisions, instead of relying mostly on guidelines developed for a specific condition. Moreover, the goal of implementing these AI-based algorithms is to increase the possibility of diagnosing a gastrointestinal disease at an early stage or the ability to predict the development of a particular condition in advance [7].
Because both AI and gastroenterology encompass many subdomains, the interaction between them might take on various forms. In recent years, we have witnessed an explosion of research attempting to improve various fields of gastroenterology, such as endoscopy, hepatology, inflammatory bowel diseases, and many others, with the aid of ML. We also note that, because of the requirement to diagnose more patients with gastrointestinal cancers at an early stage of the disease, which is associated with curative treatment and better prognosis, many studies have addressed improving the detection of these tumors with the aid of AI.
Numerous studies have used AI to improve the detection of early neoplasia developed on the background of Barrett’s esophagus [8,9] and of early esophageal squamous cell carcinoma [10].
This paper offers an overview of the most prominent research data on endoscopic assessment of premalignant and malignant esophageal lesions with the aid of AI. Our review highlights the advantages and drawbacks of these new algorithms based on ML in the field of gastroenterology and supplies insight into new perspectives on collaboration between physicians and computers and future applications of this technology in gastroenterology practice.

2. Methods

A literature search was performed of all English-language literature published in the last 15 years, up to June 2020, using the PubMed electronic database. The keywords used for our research purposes were “esophageal cancer”, “esophageal neoplasm”, “Barrett esophagus”, “artificial intelligence”, “machine learning”, “deep learning”, “convolutional neural network”, “detection”, and “diagnosis”. A specific search was also performed to identify clinical studies involving AI in the endoscopic evaluation of Barrett esophagus/esophageal cancer, using ClinicalTrials.gov and the University Hospital Medical Information Network Clinical Trials Registry (UMIN-CTR).

3. Definitions of Artificial Intelligence Terminology

3.1. Artificial Intelligence (AI)

The concept of AI was first mentioned in the 1950s [11] and refers to the capacity of a computer to perform tasks that might mimic the human mind, including “cognitive” functions such as the abilities of “learning” and “problem solving” [12].

3.2. Machine Learning (ML)

The term ML, introduced for the first time in 1959 by Arthur Samuel from the IBM company, refers to an IT domain whereby a computer system can acquire the ability to “learn” by using data without specific programming and can therefore develop a predictive mathematical algorithm based on input data, using recognition of “features”. The ML “model” is subsequently able to adapt to new situations in which it becomes able to predict and make decisions.
Three main types of learning methodologies are recognized, namely, supervised learning, in which the computer learns from familiar patterns; unsupervised learning, in which the computer discovers common aspects in unknown patterns; and, finally, reinforcement learning, in which the computer learns from trial and error [13,14] (Figure 1). Clustering algorithms are based on unsupervised learning, in which unlabeled data self-organize into groups. Classification and regression algorithms are based on supervised learning, in which prelabeled data train a model to predict new outcomes. Reward-and-recommendation algorithms are based on reinforcement learning, which gives feedback to an algorithm when it does something right or wrong.
The predictive models encompass the key elements of the “training”, “validation”, and “testing” datasets. Approximately 70% of samples are commonly used in the initial training set to develop the model, and the remaining 30% are used for model validation and testing, although these percentages may vary with the application [7].
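As a concrete illustration (a minimal sketch with synthetic data, not drawn from any cited study), such a 70/30 partition can be expressed in a few lines of Python with scikit-learn; the feature matrix X and label vector y here are hypothetical stand-ins for extracted image features and their diagnoses:

```python
# Hypothetical 70/30 split of a labeled dataset into training and held-out data.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 64)        # 1000 samples, 64 features each (synthetic)
y = np.random.randint(0, 2, 1000)   # binary labels, e.g., neoplastic vs. not

# ~70% of samples train the model; ~30% are held out for validation/testing.
X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
# The held-out portion is often split again into validation and test halves.
X_val, X_test, y_val, y_test = train_test_split(
    X_hold, y_hold, test_size=0.5, random_state=0, stratify=y_hold)
```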
AI has been implemented in the medical field using different types of ML, such as binary classifiers, Bayesian inference, decision trees, ensemble trees, linear discriminants, support vector machines (SVM), k-nearest neighbors, logistic regression, and artificial neural networks (ANNs) [15,16].
The SVM, invented in 1963, before the development of DL [17], is a supervised learning model, a discriminative algorithm that separates classes with a dividing hyperplane. SVMs have demonstrated high accuracy in classification and regression analysis.
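As a minimal, self-contained sketch (synthetic data, not from any cited study), an SVM classifier can be trained and evaluated as follows:

```python
# Toy SVM example: fit a separating hyperplane between two synthetic classes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X = np.random.rand(200, 16)                  # 200 samples, 16 features
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)    # toy binary labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", C=1.0)   # kernelized SVM; a linear kernel is also common
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```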

3.2.1. ML Using Hand-Crafted Features (Conventional Algorithms)

For a long time, ML on images (endoscopic images, in the field of gastroenterology) relied primarily on hand-crafted features. In this context, IT specialists coded mathematical descriptions of specific patterns, such as color and texture. The researchers manually indicated the potential features of the images based on clinical expertise. A classifier was trained to distinguish between different classes of features, and eventually, the model was able to use this knowledge to recognize the class in a new set of images [14].
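A hedged sketch of this hand-crafted pipeline follows; the color and texture descriptors are illustrative choices, not those used in any cited study:

```python
# Hand-crafted feature extraction: simple color histograms plus gradient-based
# texture statistics, producing a fixed-length vector for a conventional classifier.
import numpy as np

def handcrafted_features(img):
    """img: H x W x 3 uint8 RGB array -> 1-D feature vector."""
    # Color: coarse 8-bin histogram per channel.
    color = [np.histogram(img[..., c], bins=8, range=(0, 256))[0] for c in range(3)]
    # Texture: mean and spread of gradient magnitude on the grayscale image.
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    return np.concatenate([np.concatenate(color), [mag.mean(), mag.std()]])

# Feature vectors from labeled images would then train a classifier
# (e.g., the SVM sketched above) to recognize the class of new images.
```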

3.2.2. ML Using Deep Learning (DL)

DL refers to a subset of ML techniques built from multilayered neural network algorithms. These algorithms use layers of nonlinear processing for “feature extraction”, which is the selection of powerful predictive variables, and “transformation”, which refers to changing the data for more efficient construction of the model [18].

Neural Networks

Neural networks represent a specific area of ML that shows similarities with the human brain, namely densely interconnected neurons, with the aim of recognizing specific patterns, extracting features, or learning different characteristics of the training dataset to elaborate a concrete result [7,18].
In an artificial neural network (ANN), the network is typically fully connected: the outputs of the neurons of one layer serve as inputs for the neurons of the next layer. Each connection has a specific weight that is learned during the training process, and each neuron applies a nonlinear activation function, classically a sigmoid.
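A minimal PyTorch sketch of such a fully connected network follows; the layer sizes are arbitrary illustrations:

```python
# Fully connected ANN: each layer's outputs feed the next layer's inputs,
# with learned connection weights and sigmoidal nonlinearities.
import torch.nn as nn

ann = nn.Sequential(
    nn.Linear(64, 32),   # input features -> first hidden layer
    nn.Sigmoid(),        # classic sigmoidal activation
    nn.Linear(32, 16),   # hidden -> hidden
    nn.Sigmoid(),
    nn.Linear(16, 1),    # hidden -> single output (e.g., a lesion probability)
    nn.Sigmoid(),
)
```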
A deep neural network is an ANN containing several hidden layers between the input and output layers. This technology has proven to possess excellent accuracy for establishing diagnoses and predicting prognosis in the medical area. In most cases, DL outperforms hand-crafted algorithms, but it requires a larger quantity of data for learning [19,20]. Fortunately, most of the initial weaknesses and limits of deep neural networks have been overcome by the recent availability of big data for training and major progress in computing power [15,16].
One drawback of DL is its “black-box” nature, meaning that the system cannot explain the reasoning behind a machine-generated decision, which can be confusing for the endoscopist. However, a new research area known as “interpretable DL” has attracted recent attention through its attempt to present an argument-based framework for decision-making [21]. Although DL models have proven to be the most performant algorithms in fitting data, one of their limits is their dependency on the training dataset. “Overfitting” appears if the training database is not sufficiently diverse or contains bias; in that case, the results might not be validated and implemented in real-life circumstances. To enlarge the training datasets, these approaches might include both images showing normal aspects and images containing pathologic lesions. Additionally, most recent studies augment the image-based data by resizing and cropping the frames, with subsequent flipping along an axis [4,16].
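The augmentation step mentioned above can be sketched with torchvision transforms; the parameters are illustrative, not those of any cited study:

```python
# Data augmentation: resize, randomly crop, and randomly flip training frames
# so the model sees more varied views of each endoscopic image.
from torchvision import transforms

augment = transforms.Compose([
    transforms.Resize(256),             # resize the endoscopic frame
    transforms.RandomCrop(224),         # crop a random training patch
    transforms.RandomHorizontalFlip(),  # flip along the vertical axis
    transforms.ToTensor(),              # convert to a tensor for training
])
```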
Convolutional neural networks (CNNs) represent a specific class of ANN composed of convolutional and pooling layers, which extract specific features, and fully connected layers, which produce the definitive classification.
The input images are subjected to a preprocessing procedure of filtering (convolution) to extract specific features. Subsequently, the convolution filters undergo a learning process to elaborate performant feature maps, which are compressed to smaller sizes. At the end, the fully connected layers combine the selected features to produce the final classification. In a CNN, the number of weights is significantly lower than in a fully connected network. CNNs have demonstrated excellent performance in image analysis (Figure 2).
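A minimal sketch of this structure in PyTorch (sizes assume a 3-channel 224 x 224 input and are purely illustrative):

```python
# Small CNN: convolutions extract feature maps, pooling compresses them,
# and a fully connected layer produces the final two-class output.
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolution: learn feature maps
    nn.ReLU(),
    nn.MaxPool2d(2),                             # pooling: 224 -> 112
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                             # pooling: 112 -> 56
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 2),                  # fully connected: final classes
)
```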
The concept of the CNN was developed independently by several different groups during the 1970s and 1980s. The proof of concept emerged in the late 1980s, when Bengio, Hinton, and LeCun started to exchange ideas in this field, and the first paper on the backpropagation procedure for CNNs was published in 1990 [22]. In 1998, LeCun wrote an overview paper on the principles of training deep neural networks using gradient-based optimization, showing that CNNs can be combined with search or inference mechanisms to model interdependent complex outputs [23]. In 2006, the Canadian Institute for Advanced Research (CIFAR) revived interest in deep feedforward networks by connecting a group of researchers who introduced unsupervised learning procedures. The first major application of this approach was in speech recognition, which was developed during the following years; by 2012, new versions were already being deployed in Android phones. Since the 2000s, CNNs have been successfully applied to the detection, recognition, and segmentation of objects/regions in images; a major recent practical success of CNNs is face recognition. Despite these advances, CNNs were neglected by most of the computer-vision community until the ImageNet competition (2012) [24].
In recent years, we have observed the impressive emergence of complex CNNs constructed from more than 100 layers, mostly due to increased interest in this field and the initiation of a great number of scientific activities. Due to the annual software contest known as the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a great variety of network architectures have been developed, such as AlexNet and GoogLeNet, residual networks (e.g., Microsoft’s ResNet and variants such as InceptionResNet), fully convolutional networks (FCN), U-Net (which is based on an encoder–decoder mechanism for pixelwise classification and is mostly used in segmentation processing on test images [25,26,27]), and others. As foreseen by LeCun [24], human vision, natural language understanding, and major progress in AI will materialize through systems that combine representation learning with complex reasoning.
In the area of gastroenterology, which is overwhelmed by a notably large amount of clinical data and endoscopic or ultrasound images, this technology has been applied to aid clinicians in establishing diagnosis, estimating prognosis, and analyzing images.

Computer Vision

Computer vision refers to the use of computer systems to process images/videos and acquire information from this processing; a multitude of technological advances have recently been demonstrated in this domain. In the medical field, clinicians work with large amounts of visual data that must be analyzed to elaborate the proper diagnosis and choose the best treatment, especially in domains such as radiology or endoscopy. In the latter domain, CNNs have been elaborated for different purposes, such as esophageal/gastric cancer detection [28,29] and “real-time” polyp detection/differentiation between polyp types [30,31], among others.

3.3. Automated Endoscopy Report Writing

Due to the high number of endoscopic examinations and findings, the limited storage of images during procedures, and the need for standardized databases for epidemiological studies, quality control, surveillance programs, and research, endoscopy reports are especially suitable for automation and electronic storage. For that purpose, computer vision AI algorithms can be used to analyze the technical aspects of the investigation, document the activity, and enable the transfer and comparison of images and findings (coded automatically) between different hospitals or consultants. Standardized computerized report systems should be accessible, fast, and accurate, so that they can be used in daily practice by any endoscopist. In this regard, several endoscopy software systems, such as Endobase from Olympus, have been developed in recent years to record and store endoscopy findings and images and to elaborate reports, with the goal of developing a single documentation system for the whole endoscopy workflow. These endoscopy software systems are essential for modern gastroenterology; therefore, they must be fast, informative, and comprehensive in recording and storing endoscopy findings, allow automatic data transfer for quality and research purposes, and enable easy data retrieval in a universal format. Furthermore, they should enable the inclusion of other crucial information, such as histopathology of detected lesions, patient satisfaction, adverse events, and follow-up recommendations. Moreover, they should allow database handling for many other purposes, such as safety, quality control, maintenance of equipment, management of supply, billing, and others [32,33,34].

4. Principal Applications of AI for Assessment of Precancerous and Cancerous Esophageal Lesions

The most important advances delivered by AI in the assessment of esophageal pathology consist of screening for early esophageal carcinoma, both dysplasia/adenocarcinoma developed on Barrett’s esophagus and squamous cell carcinoma.
Esophageal carcinoma ranks seventh in terms of incidence (almost 600,000 new cases) and was responsible for an estimated 1 in every 20 cancer deaths in 2018. In developing countries, the histologic subtype of squamous cell carcinoma (SCC) predominates, and esophageal cancer is commonly diagnosed at an advanced stage, which accounts for most of the deaths. By contrast, adenocarcinoma (AC) represents the major histologic subtype in high-income countries, with obesity and gastroesophageal reflux disease (GERD) among the major risk factors. While the incidence of esophageal SCC has broadly declined, incidence rates of AC have risen sharply, partially because of the increasing frequency of the abovementioned risk factors and perhaps also due to the eradication of Helicobacter pylori infection. Additionally, the overall prognosis of the AC subtype remains poor, with an overall five-year survival of approximately 15% [35,36].

4.1. Identification of Dysplasia/Early Neoplasia in Barrett’s Esophagus (BE)

BE represents a major risk factor for the development of esophageal AC, mostly in patients with long-segment BE and in the presence of intraepithelial neoplasia [37]. Endoscopic assessment of this condition can be highly difficult, especially for nonexperts, because they must differentiate between all sequences of the carcinogenic process, namely nonneoplastic BE, low-grade/high-grade dysplasia (LGD/HGD) BE, and early adenocarcinoma (EAC). Moreover, by detecting AC at an early stage suitable for endoscopic treatment, patient prognosis might be fundamentally improved [38,39,40].
In this context, elaboration of computer-assisted diagnosis (CAD) systems to detect early neoplastic areas in BE during regular endoscopic surveillance represents a priority task due to the imperative need to help endoscopists perform accurate targeted biopsies, instead of taking biopsies of any visible lesions plus random biopsies every 1–2 cm along the length of the BE in a four-quadrant fashion (the Seattle protocol), which has proven to be a laborious, time-consuming procedure associated with lower sensitivity [41,42,43]. To overcome the limitations of the procedures currently used in endoscopic surveillance of BE, the American Society for Gastrointestinal Endoscopy (ASGE) concluded that, to exclude the need for random mucosal biopsies in BE, the use of any imaging technology plus targeted biopsies should have a sensitivity of more than 90%, a specificity of 80%, and a negative predictive value of at least 98%. However, these performance scores could be achieved only by expert endoscopists [44]. Although a large spectrum of advanced endoscopic technologies, such as magnification endoscopy (ME), chromoendoscopy (CE), probe-based confocal laser-induced endomicroscopy, endocytoscopy, volumetric laser endomicroscopy (VLE), and wide-area transepithelial sampling (WATS) with computer-assisted three-dimensional analysis, have been studied to improve BE assessment, most are expensive, time-consuming, and have a long learning curve [45,46]. Therefore, ML assistance is required for nonexpert endoscopists to perform optical diagnosis in their routine practice [44,47,48,49].

4.1.1. CAD Using White-Light Endoscopy/Narrow-Band Imaging (WLE/NBI)

Van der Sommen et al. [8] constructed a CAD system for detection of early neoplasia in BE, using color filters, specific texture, and ML for WLE images, and this system was evaluated on a dataset of 100 images from 44 patients with BE. The system identified neoplastic lesions (per-image analysis) with a sensitivity/specificity of 83%. Mendel et al. [50] performed a CNN analysis of BE including 50 endoscopic WLE images of neoplastic BE and 50 noncancer images from an open access database (Endoscopic Vision Challenge MICCAI 2015), reaching a sensitivity of 94% and a specificity of 88%. From the same study group, Ebigbo et al. [51] continued research on CNN in early Barrett’s AC, using 71 high-definition WLE and NBI images of early neoplastic BE (T1a) and nondysplastic BE, achieving a sensitivity/specificity of 97%/88% (WLE) and 94%/80% (NBI) in classification of endoscopic images into cancer/noncancer types. For the MICCAI database (WLE), the results increased to a sensitivity of 92% and specificity of 100%. The model proved to be significantly more accurate than nonexpert endoscopists. In addition, Ghatwary et al. used a DL algorithm on the same open-access dataset (100 images) and achieved a sensitivity of 96% and a specificity of 92% [52].
In their pilot study, the Hashimoto group [53] constructed an AI algorithm based on a dataset of 916 WLE/virtual NBI images from 70 patients with histology-proven neoplastic BE (high-grade dysplasia/T1 cancer) in which the software masked the areas of neoplasia. Another 916 control images were collected from 30 patients with histology-proven or confocal-laser-endomicroscopy-proven normal BE. A CNN algorithm was pretrained (on ImageNet) and subsequently fine-tuned to perform binary classification of “dysplastic” or “nondysplastic”. Moreover, the researchers elaborated an object detection algorithm with the goal of drawing localization boxes around the dysplastic regions. The CAD system evaluated 458 test images, including 225 dysplastic and 233 nondysplastic images, and was able to detect early neoplasia in BE with a high accuracy of 95.4%, a sensitivity of 96.4%, and a specificity of 94.2%. Finally, the object detection algorithm for the validation set was able to localize the areas of dysplasia with high precision (mean average precision of 0.7533) and at a speed that allows implementation of the model in a real-time setting.
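The pretrain-then-fine-tune recipe used in such studies can be sketched as follows; the ResNet-50 backbone here is an illustrative assumption, not the architecture of the cited work:

```python
# Transfer learning: load an ImageNet-pretrained backbone and replace its
# final layer with a new two-class head ("dysplastic" vs. "nondysplastic").
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # new binary classification head
# Fine-tuning then proceeds with an ordinary training loop on labeled
# endoscopic images, typically with a small learning rate for pretrained layers.
```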

4.1.2. CAD Using Wide-Area Transepithelial Sampling (WATS)

Wide-area transepithelial sampling (WATS), associated with computer-assisted three-dimensional analysis, consists of abrasive brushing of the BE mucosa, followed by neural network analysis to identify abnormal cells. In addition to ensuring the sampling of a wide area of the BE segment, the abrasive brush supplies a deep transepithelial specimen, and the technique appears to be associated with a high rate of interobserver agreement (overall kappa value of 0.86) [54]. The WATS3D specimen is then sent to the laboratory for computer analysis that uses AI and proprietary 3D imaging to help pathologists reliably identify precancerous cells.
Previous trials have shown that, when associated with classical biopsy, WATS leads to an increase in the detection of intestinal metaplasia and dysplasia. The multicenter prospective trial of Johanson et al. [55], performed on 1266 patients, demonstrated an increased overall detection of intestinal metaplasia by 39.8%. Anandasabapathy et al. [56], in a study of 151 patients with high-risk BE, showed that adding WATS to biopsy increased the yield of esophageal dysplasia detection by 42% (16 additional cases). We must mention that these two clinical trials were not focused on HGD/EAC and did not comply entirely with the Seattle protocol.
A multicenter, prospective, randomized trial (www.clinicaltrials.gov, clinical trial number NCT03008980) [57] was performed on 160 referral BE patients undergoing endoscopic surveillance (using HD-WLE), with the primary aim of evaluating the use of WATS in addition to biopsy for the detection of HGD/EAC. Patients received either biopsy (according to the Seattle protocol) followed by WATS or vice versa. The rate of detection of HGD/EAC was 4.1 times higher using WATS alone compared with biopsy alone (29 cases vs. 7 cases), and only one case with positive biopsy was missed by WATS. The addition of WATS to biopsy led to a 14.4% increase in the detection rate of HGD/EAC, meaning 23 additional cases. Among these new cases, 11 were classified by biopsy as normal BE and 12 as LGD/indeterminate for dysplasia. The majority of these patients had prior histories of dysplasia, and thus they represent a high-risk BE surveillance population. The order of procedure randomization did not influence the performance scores. The addition of WATS prolonged the procedure by an average of 4.5 min. These results demonstrate the promising role of this procedure in surveillance programs for BE.

4.1.3. CAD Using Volumetric Laser Endomicroscopy (VLE)

Volumetric laser endomicroscopy (VLE) is based on optical coherence tomography (OCT) to generate real-time microscopic cross-sectional imaging. VLE with laser marking represents an advanced imaging technology that enables a circumferential scan of the esophageal wall layers with the aim of detecting dysplasia, and it has been commercially available in the United States since 2013. The technique supplies direct in vivo marking of areas suspicious for neoplastic transformation, from which the endoscopist can collect targeted biopsies [58]. A multicenter US trial of one thousand patients reported that VLE improved the neoplasia diagnostic yield in BE by 55% [59]. Because the endoscopist must analyze an enormous quantity of complex data during this examination, computer assistance might be helpful in recognizing abnormalities. For this purpose, AI software known as intelligent real-time image segmentation (IRIS) was developed that identifies three VLE features previously associated with histology-proven dysplasia [60,61], namely a hyperreflective surface (a marker of cellular crowding and increased nuclear-to-cytoplasmic ratio), hyporeflective structures (atypical BE epithelial glands), and the lack of a layered architecture (which differentiates between squamous epithelium and BE). This algorithm proved to be a helpful tool for detection of dysplasia during endoscopic surveillance of BE in a less burdensome manner. Swager et al. assessed the CAD detection rate of early neoplastic lesions in BE using 60 ex vivo VLE images and obtained good performance scores [58], with a sensitivity of 90% and specificity of 93%. Therefore, a multicenter randomized controlled trial is currently in progress to further explore the accuracy of VLE using the IRIS program compared with VLE without IRIS (NCT03814824) [9].
Another study, by Struyvenberg et al. [62] (NCT01862666), evaluated the performance of automatic data extraction followed by CAD analysis using a VLE multi-frame approach for detection of BE neoplasia. Ex vivo VLE images from 29 BE patients (nondysplastic or HGD/EAC) were retrospectively analyzed, followed by assessment of 60 histopathology-correlated regions of interest (30 nondysplastic vs. 30 neoplastic) by means of different CAD systems. Furthermore, multiple neighboring VLE frames were evaluated (including areas 1.25 mm proximal and distal), and in total, the AI analysis included 3060 VLE frames. Multi-frame analysis resulted in a significantly higher median AUC compared with the single-frame analysis used in the abovementioned study (0.91 vs. 0.83). The multi-frame approach reached a maximum AUC of 0.94 when including 22 frames on each side of the region of interest. These data revealed rapid and accurate image interpretation and improved BE neoplasia detection using multi-frame vs. single-frame VLE image analysis.

4.1.4. CAD Using I-SCAN

In their study, Sehgal et al. [63] collected video recordings from patients with nondysplastic and dysplastic BE assessed by high-definition endoscopy using i-Scan enhancement. i-Scan is an endoscopic postprocessing light-filtering technology that uses software algorithms with real-time image mapping, embedded in the video processor. This equipment increases the resolution above the standard high-definition level, thus offering additional features for supplementary analysis. Three image-enhancement modes are available: surface, contrast, and surface plus tone enhancement. According to the protocol, the areas of interest were recorded, and the diagnosis was histologically confirmed. The images were interpreted by three blinded experts, based on mucosal and microvasculature patterns, identification of nodularity/ulceration, and overall suspected diagnosis. These data formed the basis of a decision tree for dysplasia prediction. Subsequently, nonexpert endoscopists interpreted the same videos both before and after computer-assisted training using the previously mentioned decision tree. By assessing videos collected from 40 patients, which in 12 cases covered both before and after acetic acid application, experts obtained an average accuracy for dysplasia prediction of 88%. By entering their responses into the decision tree, the accuracy of the resultant model increased to 92%, with sensitivity and specificity of 97% and 88%, respectively. No additional improvement was obtained using acetic acid. Dysplasia detection improved significantly in the nonexpert group after formal web-based training, reaching the accuracy obtained by experts, and sensitivity rose significantly from 71% to 83%.
We mention a similar web-based program, previously published as an abstract, designed to improve the detection of Barrett’s esophagus-related neoplasia [64]. In this study, endoscopy recordings from patients with neoplastic and nondysplastic BE were assessed by three expert endoscopists who used specialist software to delineate Barrett’s esophagus-related neoplasia (BORN) lesions. Subsequently, 68 endoscopists from the USA and Europe with different degrees of expertise were tasked with recognizing and delineating these features in four sets of 20 videos (including 48 neoplastic and 32 normal BE) with online training. A significant increase in the detection and delineation scores was observed over the four sets. After removal of 55 inadequate videos, the results of the study were validated using a new group of 121 endoscopists across the USA, Canada, and Europe, reaching similar outcomes across all levels of expertise.
These studies highlight the usefulness of AI algorithms in accurate prediction of dysplasia and significant improvement in detection rates, as well as shortening of the learning curve once taught to nonexperts.

4.1.5. Novel Research Toward Real-Time Recognition of BE

Most of the previous results related to ML-assisted evaluation of cancer in BE were achieved using optimal endoscopic images, which might not accurately reflect the real-life context. To enable integration of CNN-based image classification into clinical practice, Ebigbo and colleagues developed a system to further increase the speed of image analysis for classification and the resolution of dense prediction, which relies on the color-coded spatial distribution of cancer probabilities. This system is an encoder–decoder artificial neural network, pretrained on ImageNet and based on a ResNet containing 101 layers. The CAD system extracts random images from the real-time camera livestream during endoscopic assessment of BE performed by an expert endoscopist, thus supplying an accurate differentiation between normal BE and early esophageal AC through classification and segmentation procedures. The model was trained using a total of 129 endoscopic images from a hospital image database. For validation, additional images (including 36 of early AC and 26 of normal BE) from 14 patients were assessed by the system during endoscopic examination. All of these images were pathologically confirmed either on resection specimens (AC) or forceps biopsies (normal BE). This CAD system, although applied to a low number of patients, offered successful real-time implementation of AI in the detection of early esophageal AC in BE in a real-life setting and demonstrated excellent performance scores, with a sensitivity of 83.7%, a specificity of 100.0%, and an overall accuracy of 89.9% [28].
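For illustration only (this is not the authors' implementation), per-frame scoring of a live video feed might be structured as below; the untrained placeholder model and the capture-source index are assumptions:

```python
# Real-time sketch: grab frames from a video stream, preprocess each one,
# and score it with a binary classifier (normal BE vs. early AC).
import cv2
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

model = models.resnet18(weights=None)          # placeholder, untrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)  # 2 classes: normal BE / early AC
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

cap = cv2.VideoCapture(0)  # hypothetical index of the endoscopy video feed
with torch.no_grad():
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        x = preprocess(Image.fromarray(rgb)).unsqueeze(0)  # 1 x 3 x 224 x 224
        prob_ac = torch.softmax(model(x), dim=1)[0, 1].item()
        # A deployed system would overlay prob_ac (or a segmentation map) on the frame.
cap.release()
```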
The ARGOS consortium, supported by the Dutch Cancer Society and Technology Foundation STW, includes three international referral centers for detection of early neoplasia in BE, a leading academic group involved in image analysis, and two commercial enterprises that collaborate within the strategic research program known as “Technology for Oncology”. This project consisted of designing an improved CAD system based on high-quality endoscopic images, with the goal of improving endoscopic detection of early neoplastic BE. The prospectively collected training dataset included WLE overview images of 40 neoplastic BE and 20 nondysplastic BE patients. Neoplastic images were delineated by expert endoscopists, who defined the overlap area of at least four delineations as a “sweet spot” and the area with a minimum of one delineation as a “soft spot”. The model was trained on color and texture features and assessed using leave-one-out cross-validation. Positive features were extracted from the sweet spots, and negative features were extracted from nondysplastic BE images. The system obtained an accuracy of 92% for neoplasia detection, with a sensitivity of 95% and a specificity of 85%. The system delineated the soft spot and indicated the preferred biopsy location (red-flagged the area) in 100% and 90% of cases, respectively. The total time needed by the algorithm to analyze all images and delineate lesions was 61.8 s [65]. This research adds new advances toward real-time automated recognition of Barrett’s neoplasia.

4.2. Esophageal Squamous Cell Carcinoma

4.2.1. Identification of Premalignant Lesions/Early Esophageal Squamous Cell Carcinoma (ESCC)

CAD Using Narrow Band Imaging (NBI)

A group of four institutions from three different countries developed a CAD system for real-time diagnosis of precancerous esophageal lesions and ESCC [10]. This model was trained using a total of 6473 NBI images (including noncancerous lesions, precancerous lesions, and ESCC) and was validated with the aid of still endoscopic images and video images. The AI system produced a probability heat map that indicated suspected areas of neoplasia in yellow and noncancerous areas in blue, and the identified neoplastic areas were masked with color. For the image datasets containing 1480 malignant NBI images (59 consecutive cases), the CAD system obtained a sensitivity of 98.04%, whereas for the 5191 noncancerous NBI images (2004 cases), it obtained a specificity of 95.03%, and the area under the curve (AUC) was 0.989. For the video datasets of neoplastic lesions, the system obtained the following performance scores. For the 27 non-magnifying videos, the per-frame sensitivity was 60.8% and the per-lesion sensitivity was 100%. For the 20 magnifying videos, the per-frame sensitivity was 96.1% and the per-lesion sensitivity was 100%. For the normal esophagus videos (33 videos), the model obtained a per-frame specificity of 99.9% and a per-case specificity of 90.9%. Due to the high sensitivity and specificity in recognizing precancerous lesions and ESCC in both endoscopic still images and video datasets, the DL model appears to be a helpful tool to assist endoscopists.
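For reference, the per-image scores reported in such studies can be reproduced from model outputs with standard tooling; a small sketch with hypothetical labels and predicted probabilities:

```python
# Computing sensitivity, specificity, and AUC from model predictions.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([1, 1, 0, 0, 1, 0])              # 1 = neoplastic, 0 = noncancerous
y_prob = np.array([0.9, 0.8, 0.2, 0.4, 0.7, 0.1])  # model probabilities (hypothetical)
y_pred = (y_prob >= 0.5).astype(int)               # threshold at 0.5

tp = ((y_pred == 1) & (y_true == 1)).sum()
tn = ((y_pred == 0) & (y_true == 0)).sum()
fp = ((y_pred == 1) & (y_true == 0)).sum()
fn = ((y_pred == 0) & (y_true == 1)).sum()
print("sensitivity:", tp / (tp + fn))  # true-positive rate
print("specificity:", tn / (tn + fp))  # true-negative rate
print("AUC:", roc_auc_score(y_true, y_prob))
```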

CAD Using the LASEREO System

Recently, an image-enhanced endoscopy (IEE) system known as LASEREO (FUJIFILM Co., Japan) has emerged. This system is equipped with two laser light sources and four operating modes: white light (WLE), blue laser imaging (BLI), BLI-bright, and linked color imaging (LCI) [66,67]. Similar to NBI examination, BLI mode visualizes the microvascular and microsurface architecture of the digestive tract mucosa [68], and LCI enhances the ability to detect slight differences in mucosal color.
Non-ME endoscopic techniques are associated with a high sensitivity in identification of all lesions suspicious for esophageal SCC, whereas ME devices have a high accuracy in differentiating between cancerous and noncancerous esophageal lesions, thus offering the ability to perform a noninvasive, real-time endoscopic diagnosis known as “optical biopsy”, which reduces the need for real biopsies [69,70].
The Japanese group of Ohmori [71] developed a CAD system to detect and differentiate superficial esophageal SCC. The AI algorithm used a Single-Shot MultiBox Detector (SSD) containing 16 layers. After training, the CNN was validated by using the Caffe deep learning framework. Moreover, all layers of the model were fine-tuned by using weights from ImageNet. The training dataset consisted of 9591 endoscopic non-magnified/7844 ME images of 804 histology-proven superficial esophageal SCC, plus 1692 non-ME/3435 ME images from noncancerous lesions or normal esophagus. The validation dataset included 255 non-ME WLE, 268 non-ME narrow-band images/blue-laser images (NBI/BLI), and 204 ME-NBI/BLI endoscopic images collected from 135 patients. The performance of the CNN was compared with the diagnostic ability of 15 experienced endoscopists.
The AI system achieved the following performance scores for diagnosing superficial SCC. Using non-ME with WLE, the sensitivity, specificity, and accuracy were 90%, 76%, and 81%, respectively. Using non-ME with NBI/BLI, the sensitivity, specificity, and accuracy were 100%, 63%, and 77%, respectively. Using ME, the sensitivity, specificity, and accuracy were 98%, 56%, and 77%, respectively. By assessing the performance parameters together with the diagnostic flow, the CNN achieved a sensitivity, specificity, and accuracy of 98%, 68%, and 83%, respectively—results superior to those of the experienced endoscopists. Due to the high-speed analysis capacity of the system, real-time accurate diagnosis of SCC by using video images should be possible very soon.

Detection of Early Squamous Cell Carcinoma (ESCC) Plus ESCC Invasion Depth

According to the Japan Esophageal Society guidelines, endoscopic resection represents a definitive indication for treating intraepithelial (EP)/lamina propria (LPM) esophageal lesions and a relative indication for muscularis mucosa (MM) lesions/cancer invading the submucosa to a depth less than 200 µm (SM1). Surgical resection/chemoradiotherapy is recommended in the case of cancer invasion of the submucosa to a depth greater than 200 µm (SM2). Therefore, accurate identification of the invasion depth is essential to avoid overtreatment and thereby improve quality of life [72].
The Japanese single-center retrospective study of Tokai et al. [73] assessed the capacity of an AI system to measure ESCC invasion depth. The authors used the previously developed CNN, which was initially trained on 8428 WLE/NBI images for the detection of ESCC [74]. Additionally, this preexisting system was trained using a total of 1751 new images of ESCC with information on invasion depth collected from the hospital database. Furthermore, to assess diagnostic accuracy, 291 test images were obtained from 55 consecutive patients, 42 with EP-SM1 ESCC and 13 with SM2 ESCC. These images were subsequently reviewed by both the CNN system and 13 board-certified endoscopists. The system diagnosed 95.5% (279/291) of the ESCC and predicted the invasion depth with an accuracy of 80.9% and a sensitivity of 84.1% within an interval of only several seconds. Because the CAD system presented a higher diagnostic accuracy for ESCC invasion depth than the expert endoscopists, it could be used as an adjunctive tool in the assessment of ESCC.

CAD using Esophageal Intrapapillary Capillary Loops (IPCLs)

Esophageal intrapapillary capillary loops (IPCLs) are microvessels first described using magnification endoscopy (ME) [75] and represent a marker of ESCC. Changes in the morphology of IPCLs have been demonstrated to correlate with neoplastic invasion depth, a major factor in the decision for curative endoscopic therapy [76,77]. Normal IPCLs are fine-caliber looped capillaries arising from the subepithelial network. During ESCC progression and destruction of the normal architecture of the esophageal wall, IPCLs initially become more tortuous and dilated and subsequently form linear dilated vascular structures associated with the appearance of avascular areas corresponding to cancer invasion deep in the mucosal layer. In the stage of deeper submucosal invasion, these structures obliterate and are replaced by neovascularization composed of tortuous, dilated, and nonlooped capillaries [78,79]. ME-NBI allows visualization of mucosal microvascular patterns in patients with ESCC [80]. Several classifications have been developed to categorize the abnormal changes of IPCLs that correlate with histological invasion depth [76,81]. The recent Japanese Endoscopic Society (JES) IPCL classification is a simplified system [82,83] that has become popular in areas of high prevalence. Each IPCL category corresponds to a specific histological grade and invasion depth with a high accuracy of more than 90%. Additionally, the JES classification is associated with excellent interobserver agreement [82,84].
Zhao et al. developed a CAD model to classify IPCLs for detection/classification of SCC based on a total of 1383 lesions assessed with high-resolution endoscopes using the ME-NBI technique [85]. The model used a double-labeling fully convolutional network and achieved mean diagnostic accuracies of 89.2% and 93% at the lesion and pixel levels, respectively, superior to those of endoscopists. The group of Everson [86] developed a modern AI system capable of real-time classification of IPCL morphologies as neoplastic or nonneoplastic using ME-NBI endoscopic images. The CNN was trained using a total of 7046 sequential high-definition ME-NBI images collected from 17 patients, including 10 with ESCC and 7 with normal esophagus. JES IPCL classification was performed by three expert endoscopists. The normal IPCL pattern was classified as type A and the abnormal pattern as types B1–3, and for all visualized areas, histopathological assessment was obtained by two expert gastrointestinal pathologists. This CNN obtained 93.7% accuracy (86.2% to 98.3%), 89.3% sensitivity (78.1% to 100%), and 98% specificity (92% to 99.7%) for classification of IPCL patterns as normal/abnormal. The developed model is not yet able to categorize all of the specific subtypes with sufficient accuracy for clinical implementation. The system operates in real time, with diagnostic prediction times between 26.17 and 37.48 ms.
The group continued their work [87] by collecting a new dataset containing 68,000 binary-labeled frames selected from 114 patient videos (45 normal and 69 abnormal), which were correlated to histopathology. The novel CNN algorithm fulfilled the binary classification task and explained the input features that drive the decision-making process. The method achieved an accuracy of 91.7% vs. the 94.7% reached by a group of 12 senior endoscopists, below the clinicians' average but still better than the performance of some individual endoscopists.
In the future, these novel applications could be improved and used as an in vivo clinical support tool for endoscopists in the evaluation of suspected ESCC and decision on endoscopic treatment.

CAD Using the Endocytoscopic System (ECS)

The endocytoscopic system (ECS) represents a magnifying endoscopic method that allows in vivo assessment of surface epithelial cells in a real-time setting, using vital staining (e.g., methylene blue) [88,89,90]. In 2003, the first clinical trial was performed that described the characteristics of the normal surface squamous epithelium and neoplastic tissue of the esophagus [91]. Later, the characteristics of selected esophageal benign lesions were also analyzed, such as esophagitis, which is a differential diagnosis for esophageal cancer [92,93]. The ECS enables the realization of virtual histology and in vivo confirmation of histopathological diagnosis.
Several studies evaluated the use of CAD to better discriminate neoplastic from nonneoplastic lesions of the esophagus by using ECS [94], including the model developed by Kodashima et al. [95], which enabled microscopic visualization of the mucosa, and the one developed by Shin et al. [96], which obtained a sensitivity and specificity of 87% and 97%, respectively. This model was subsequently improved by Quang et al. [97] by incorporating full automation with real-time analysis (tablet-interfaced high-resolution endomicroscopy). The new model achieved a sensitivity and specificity of 95% and 91%, respectively, and lower cost compared with laptop-interfaced systems. This software was also tested in vivo for three patients, resulting in 100% concordance with histopathological examination.
Kumagai et al. [98] proposed the use of ECS instead of biopsy-based assessment for SCC, with the aid of AI. For this purpose, the researchers designed a CNN (based on GoogLeNet) trained by using 4715 esophageal ECS images, including 1141 malignant and 3574 nonmalignant lesions. Subsequently, to evaluate the performance of the system, the group used an independent test set of 1520 ECS images from 55 consecutive patients, including 27 esophageal SCC and 28 benign lesions. The areas under the curve obtained by the CNN were 0.85 for the total images, 0.90 for higher magnification images, and 0.72 for lower magnification images. CAD obtained an overall sensitivity of 92.6% in diagnosing 25/27 SCC cases, and 25/28 benign lesions were recognized as nonmalignant, reaching a specificity of 89.3% and an accuracy of 90.9%. Two neoplastic lesions were misdiagnosed as nonmalignant by the AI but were correctly diagnosed by the endoscopist. The three cases of benign lesions diagnosed as malignant by the AI consisted of images of radiation-related esophagitis and of gastroesophageal reflux disease. Based on these promising results, CAD is expected to aid endoscopists in diagnosing SCC based on “optical biopsy” with the aid of ECS images, preferably using higher magnification pictures.

4.3. Esophageal Cancer Detection (SCC or AC)

The CNN developed by Horie et al. [74] used as its training dataset 8428 retrospectively collected WLE/NBI images of histologically proven esophageal cancer (SCC or AC), including both superficial and advanced cancers, from 384 patients. To assess diagnostic accuracy, the authors used a test dataset of 1118 images from 47 patients with 49 esophageal cancers and 50 patients without esophageal cancer. The algorithm reached a sensitivity of 98% for detection of esophageal cancer and was able to detect all esophageal cancer lesions less than 10 mm in size. Although the positive predictive value for test images was 40%, owing to misdiagnosis of shadows and normal structures, the negative predictive value was 95%. The CNN reached an accuracy of 98% in distinguishing superficial vs. advanced esophageal cancer. For the two histologic subtypes, the diagnostic accuracy was 99% for ESCC and 90% for EAC. These results demonstrate the ability of the system to rapidly analyze a large number of stored endoscopic images with high sensitivity, which may lead to improved early esophageal cancer detection in clinical practice in the near future (Table 1).
Numerous studies regarding the role of AI in digestive endoscopy are still preclinical and engineer-driven. Many of the presented studies are retrospective and single-center, frequently showing better results than those achievable in real settings (selection bias), and cannot analyze low-quality images. Some of them are based on traditional ML models, while the most recent ones mainly use complex DL algorithms. Moreover, in an attempt to achieve the best results, researchers used different endoscopic methods, from widely available standard endoscopy to the most advanced endoscopic techniques, which are available only in expert centers and skilled hands. In the evaluation of clinical studies, it should be mentioned that most were based on endoscopic still images, although the most recent efforts are directed toward using more complex video sequences. We would like to point out that, in the last few years, several real-life clinical studies have also been published. For the moment, many of the studies using video images are pilot studies based on a limited number of patients. The majority of the studies presented in this paper, although showing promising results in the detection of premalignant and malignant esophageal lesions, need further validation in prospective randomized clinical trials. Table 2 describes the existing and ongoing clinical trials using AI for diagnosing early neoplasia in Barrett’s esophagus and esophageal carcinoma. Their results, along with the development of other real-life clinical trials, will probably help us take a step forward in defining the best CAD strategy to improve esophageal-malignancy detection.

5. Future Perspectives and Challenges

Computer-assisted monitoring can ensure cost-effective real-time quality control and performance of gastroscopy, and AI can offer a helpful tool for training junior endoscopists in the future [99,100]. Another strength of such methodologies is their usefulness in risk stratification of patients [2]. AI systems can aid in the detection of discrete lesions and classify suspicious lesions, thus increasing the detection rate and diagnostic accuracy of gastrointestinal lesions, especially digestive neoplasms [101]. AI algorithms reduce the workload of gastroenterologists and lead to quick and accurate diagnosis within seconds or minutes. Therefore, it is estimated that AI will assist doctors in making clinical decisions and that artificial technologies will be incorporated into routine endoscopic practice. Additionally, in clinical gastroenterology, new fields of exploration might open in the future with the support of computer-assisted diagnostics [19].
Despite the true benefits of AI systems, several limitations still exist that must be overcome in the future [19]. First, most of the previous studies collected only high-quality endoscopic images for the training datasets, whereas low-quality images (in which the area of interest is covered by mucus or bile, is only partially visible, etc.) were excluded. This practice could cause overfitting of the models [74,102,103] and an exaggeration of the detection accuracy. For correct assessment during endoscopy, unprocessed videos should be used in training and testing of the CNN [99,104,105]. Because most of the datasets are retrospective and usually show the most typical features of the lesion, inclusion of more atypical lesions and indicators of anatomical structures [106] might improve the performance of the DL models. AI models should be assessed using adequately developed datasets and test sets that are completely independent at the patient level [16,107].
A highly important issue concerns the performance indicators for different AI algorithms, which, in previous research, have focused primarily on accuracy, sensitivity, specificity, and positive/negative predictive values; these can be influenced by the distribution of test datasets, selection bias, overfitting, or spectrum bias [99,108,109]. These confounding elements might lead to overestimation of model performance and overgeneralization of results; therefore, external validation using independent datasets to minimize this bias is compulsory. Many of the studies on the impact of AI in gastroenterology were experimental, single-center, or retrospective, or used specific endoscopic images available only in selected referral units. In the future, prospective multicenter randomized controlled trials with well-defined inclusion and exclusion criteria for the target population will be mandatory to demonstrate whether DL models truly deliver improved accuracy of detection for gastrointestinal lesions in the real clinical context and to assess the magnitude of the impact of computer-assisted diagnosis systems on the routine workflow of endoscopists [110]. To diminish the “black box” nature of these models, i.e., their lack of explainability, and to avoid bias and achieve human acceptance, several methods, such as saliency regions and attention maps, are already under development [111]. Additionally, by constructing a large number of distinct DL algorithms, all of which predict a different diagnosis, we might be confronted with a differential diagnostic dilemma. Because increased accuracy requires a large amount of data, which is difficult to obtain due to the paucity of available medical records as a result of privacy issues, data augmentation modalities have been developed [112]. Moreover, more powerful and advanced computer algorithms, such as spiking neural networks that mimic the human brain, might become a novel scientific base for research [16,113]. The future involvement of AI in diagnostic medical procedures might also have an impact on the doctor–patient relationship, with ethical implications related to the assumption of responsibility [16]. These legal issues must be further clarified.
Bearing in mind the enormous advances in novel endoscopic devices, the increasing workload of clinicians due to the high number of patients, and the implementation of endoscopic mass screening programs in certain areas at high risk for gastrointestinal malignancies, strong collaboration among physicians, computer-science researchers, medical companies, and industry might become mandatory in the near future for the integration of powerful and advanced AI systems and algorithms into endoscopic devices and daily clinical practice, with the aim of improving medical actions. Other problems to be solved include finding reimbursement modalities to support such advanced technologies and developing a doctor-friendly interface for these AI systems. We should keep in mind that translating encouraging experimental research into clinical practice will likely be a challenging task. However, the first steps toward publicly available large databases/platforms for further AI algorithm development and improvement have already been made [114]. As such, the future is already here.

6. Conclusions

Promising results show good accuracy of CAD algorithms combined with advanced endoscopic techniques for diagnosing esophageal carcinomas at early, endoscopically treatable stages, which is associated with improved quality of life and better survival. Computer-assisted diagnostics quite often outperform clinician skills and might in the future support “optical biopsies” in place of difficult, time-consuming, and invasive biopsies or polypectomies. Furthermore, novel artificial models are beginning to predict the depth of invasion of esophageal neoplasms with high precision and can thus help in selecting the best management of tumors, such as endoscopic versus surgical resection. Such methodologies therefore support adoption of the best therapeutic strategy while reducing costs for healthcare systems. The link between human intelligence and artificial intelligence should evolve toward personalized medicine and, in particular, personalized gastroenterology healthcare.

Author Contributions

Conceptualization and design: D.C.L., M.F.A., A.C.F., A.G., I.R., M.C.; literature search: D.C.L., M.F.A., I.R., S.T.; writing—original draft: D.C.L., M.F.A., A.C.F., A.G., S.T.; writing—review and editing: D.C.L., M.F.A., I.R., M.C.; supervision: D.C.L., M.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Topol, E.J. High-performance medicine: The convergence of human and artificial intelligence. Nat. Med. 2019, 25, 44–56.
2. Shung, D.L.; Au, B.; Taylor, R.A.; Tay, J.K.; Laursen, S.B.; Stanley, A.J.; Dalton, H.R.; Ngu, J.; Schultz, M.; Laine, L. Validation of a Machine Learning Model That Outperforms Clinical Risk Scoring Systems for Upper Gastrointestinal Bleeding. Gastroenterology 2020, 158, 160–167.
3. Neil, S.; Leiman, D.A. Improving Acute GI Bleeding Management Through Artificial Intelligence: Unnatural Selection? Dig. Dis. Sci. 2019, 64, 2061–2064.
4. Le Berre, C.; Sandborn, W.J.; Aridhi, S.; Devignes, M.-D.; Fournier, L.; Smaïl-Tabbone, M.; Danese, S.; Peyrin-Biroulet, L. Application of artificial intelligence to gastroenterology and hepatology. Gastroenterology 2020, 158, 76–94.
5. Kangi, A.K.; Bahrampour, A. Predicting the survival of gastric cancer patients using artificial and Bayesian neural networks. Asian Pac. J. Cancer Prev. 2018, 19, 487–490.
6. Oh, S.E.; Seo, S.W.; Choi, M.G.; Sohn, T.S.; Bae, J.M.; Kim, S. Prediction of overall survival and novel classification of patients with gastric cancer using the survival recurrent network. Ann. Surg. Oncol. 2018, 25, 1153–1159.
7. Ruffle, J.K.; Farmer, A.D.; Aziz, Q. Artificial intelligence-assisted gastroenterology—promises and pitfalls. Am. J. Gastroenterol. 2019, 114, 422–428.
8. Van der Sommen, F.; Zinger, S.; Curvers, W.L.; Bisschops, R.; Pech, O.; Weusten, B.L.; Bergman, J.J.; de With, P.H.; Schoon, E.J. Computer aided detection of early neoplastic lesions in Barrett’s esophagus. Endoscopy 2016, 48, 617–624.
9. Trindade, A.J.; McKinley, M.J.; Fan, C.; Leggett, C.L.; Kahn, A.; Pleskow, D.K. Endoscopic surveillance of Barrett’s esophagus using volumetric laser endomicroscopy with artificial intelligence image enhancement. Gastroenterology 2019, 157, 303–305.
10. Guo, L.; Xiao, X.; Wu, C.; Zeng, X.; Zhang, Y.; Du, J.; Bai, S.; Xie, J.; Zhang, Z.; Li, Y.; et al. Real-time automated diagnosis of precancerous lesions and early esophageal squamous cell carcinoma using a deep learning model (with videos). Gastrointest. Endosc. 2020, 91, 41–51.
11. Turing, A.M. Computing machinery and intelligence. Mind 1950, 59, 433–460.
12. Russell, S.J.; Norvig, P. Artificial Intelligence, a Modern Approach, 3rd ed.; Pearson Education: Upper Saddle River, NJ, USA, 2009; pp. 1–5.
13. Topol, E. Deep Medicine; Hachette Book Group: New York, NY, USA, 2019; pp. 17–24.
14. Ebigbo, A.; Palm, C.; Probst, A.; Mendel, R.; Manzeneder, J.; Prinz, F.; de Souza, L.A.; Papa, J.P.; Siersema, P.; Messmann, H. A technical review of artificial intelligence as applied to gastrointestinal endoscopy: Clarifying the terminology. Endosc. Int. Open 2019, 7, E1616–E1623.
15. Dey, A. Machine learning algorithms: A review. Int. J. Comput. Sci. Inf. Technol. 2016, 7, 1174–1179.
16. Yang, Y.J.; Bang, C.S. Application of artificial intelligence in gastroenterology. World J. Gastroenterol. 2019, 25, 1666–1683.
17. Boser, B.E.; Guyon, I.M.; Vapnik, V.N. A training algorithm for optimal margin classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory (COLT’92), Pittsburgh, PA, USA, 27–29 July 1992; pp. 144–152.
18. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117.
19. Mori, Y.; Kudo, S.; Mohmed, H.; Misawa, M.; Ogata, N.; Itoh, H.; Oda, M.; Mori, K. Artificial intelligence and upper gastrointestinal endoscopy: Current status and future perspective. Dig. Endosc. 2019, 31, 378–388.
20. Khan, S.; Yong, S. A comparison of deep learning and handcrafted features in medical image modality classification. In Proceedings of the 2016 3rd International Conference on Computer and Information Sciences (ICCOINS), Kuala Lumpur, Malaysia, 15–17 August 2016; pp. 633–638.
21. Philbrick, K.A.; Yoshida, K.; Inoue, D.; Akkus, Z.; Kline, T.L.; Weston, A.D.; Korfiatis, P.; Takahashi, N.; Erickson, B.J. What does deep learning see? Insights from a classifier trained to predict contrast enhancement phase from CT images. Am. J. Roentgenol. 2018, 211, 1184–1193.
22. LeCun, Y.; Boser, B.; Denker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W.; Jackel, L.D. Handwritten digit recognition with a back-propagation network. In Proceedings of the Advances in Neural Information Processing Systems 1990, Denver, CO, USA, 27–30 November 1989; pp. 396–404.
23. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
24. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
25. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778.
26. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
27. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241.
28. Ebigbo, A.; Mendel, R.; Probst, A.; Manzeneder, J.; Prinz, F.; de Souza, L., Jr.; Papa, J.; Palm, C.; Messmann, H. Real-time use of artificial intelligence in the evaluation of cancer in Barrett’s oesophagus. Gut 2020, 69, 615–616.
29. Hirasawa, T.; Aoyama, K.; Tanimoto, T.; Ishihara, S.; Shichijo, S.; Ozawa, T.; Ohnishi, T.; Fujishiro, M.; Matsuo, K.; Fujisaki, J.; et al. Application of artificial intelligence using a convolutional neural network for detecting gastric cancer in endoscopic images. Gastric Cancer 2018, 21, 653–660.
30. East, J.E.; Rees, C.J. Making optical biopsy a clinical reality in colonoscopy. Lancet Gastroenterol. Hepatol. 2018, 3, 10–12.
31. Yuan, Y.; Meng, M.Q. Deep learning for polyp recognition in wireless capsule endoscopy images. Med. Phys. 2017, 44, 1379–1389.
32. Groenen, M.J.M.; Kuipers, E.J.; van Berge Henegouwen, G.P.; Fockens, P.; Ouwendijk, R.J. Computerisation of endoscopy reports using standard reports and text blocks. Neth. J. Med. 2006, 64, 78–83.
33. Abadir, A.P.; Ali, M.F.; Karnes, W.; Samarasena, J.B. Artificial Intelligence in Gastrointestinal Endoscopy. Clin. Endosc. 2020, 53, 132–141.
34. Bretthauer, M.; Aabakken, L.; Dekker, E.; Kaminski, M.F.; Rösch, T.; Hultcrantz, R.; Suchanek, S.; Jover, R.; Kuipers, E.J.; Bisschops, R.; et al.; ESGE Quality Improvement Committee. Requirements and standards facilitating quality improvement for reporting systems in gastrointestinal endoscopy: European Society of Gastrointestinal Endoscopy (ESGE) Position Statement. United Eur. Gastroenterol. J. 2016, 4, 172–176.
35. Bray, F.; Ferlay, J.; Soerjomataram, I.; Siegel, R.L.; Torre, L.A.; Jemal, A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 2018, 68, 394–424.
36. Coleman, H.G.; Xie, S.-H.; Lagergren, J. The epidemiology of esophageal adenocarcinoma. Gastroenterology 2018, 154, 390–405.
37. Weismüller, J.; Thieme, R.; Hoffmeister, A.; Weismüller, T.; Gockel, I. Barrett-Screening: Rationale, aktuelle Konzepte und Perspektiven [Barrett screening: Rationale, current concepts and perspectives]. Z. Gastroenterol. 2019, 57, 317–326.
38. American Gastroenterological Association; Spechler, S.J.; Sharma, P.; Souza, R.F.; Inadomi, J.M.; Shaheen, N.J. American Gastroenterological Association medical position statement on the management of Barrett’s esophagus. Gastroenterology 2011, 140, 1084–1091.
39. Pech, O.; May, A.; Manner, H.; Behrens, A.; Pohl, J.; Weferling, M.; Hartmann, U.; Manner, N.; Huijsmans, J.; Gossner, L.; et al. Long-term efficacy and safety of endoscopic resection for patients with mucosal adenocarcinoma of the esophagus. Gastroenterology 2014, 146, 652–660.
40. Weusten, B.; Bisschops, R.; Coron, E.; Dinis-Ribeiro, M.; Dumonceau, J.M.; Esteban, J.M.; Hassan, C.; Pech, O.; Repici, A.; Bergman, J.; et al. Endoscopic management of Barrett’s esophagus: European Society of Gastrointestinal Endoscopy (ESGE) Position Statement. Endoscopy 2017, 49, 191–198.
41. Choi, S.E.; Hur, C. Screening and surveillance for Barrett’s esophagus: Current issues and future directions. Curr. Opin. Gastroenterol. 2012, 28, 377–381.
42. Sharma, P. Review article: Emerging techniques for screening and surveillance in Barrett’s oesophagus. Aliment. Pharmacol. Ther. 2004, 20, 63–96.
43. Spechler, S.J. Clinical practice. Barrett’s Esophagus. N. Engl. J. Med. 2002, 346, 836–842.
44. Sharma, P.; Savides, T.J.; Canto, M.I.; Corley, D.A.; Falk, G.W.; Goldblum, J.R.; Wang, K.K.; Wallace, M.B.; Wolfsen, H.C. The American Society for Gastrointestinal Endoscopy PIVI (Preservation and Incorporation of Valuable Endoscopic Innovations) on imaging in Barrett’s Esophagus. Gastrointest. Endosc. 2012, 76, 252–254.
45. ASGE Technology Committee; Thosani, N.; Dayyeh, B.K.A.; Sharma, P.; Aslanian, H.R.; Enestvedt, B.K.; Komanduri, S.; Manfredi, M.; Navaneethan, U.; Maple, J.T.; et al. ASGE Technology Committee systematic review and meta-analysis assessing the ASGE Preservation and Incorporation of Valuable Endoscopic Innovations thresholds for adopting real-time imaging-assisted endoscopic targeted biopsy during endoscopic surveillance of Barrett’s esophagus. Gastrointest. Endosc. 2016, 83, 684–698.
46. Sami, S.S.; Iyer, P.G. Recent Advances in Screening for Barrett’s Esophagus. Curr. Treat. Options Gastroenterol. 2018, 16, 1–14.
47. Sharma, P.; Hawes, R.H.; Bansal, A.; Gupta, N.; Curvers, W.; Rastogi, A.; Singh, M.; Hall, M.; Mathur, S.C.; Wani, S.B.; et al. Standard endoscopy with random biopsies versus narrow band imaging targeted biopsies in Barrett’s oesophagus: A prospective, international, randomised controlled trial. Gut 2013, 62, 15–21.
48. Curvers, W.L.; van Vilsteren, F.G.; Baak, L.C.; Böhmer, C.; Mallant-Hent, R.C.; Naber, A.H.; van Oijen, A.; Ponsioen, C.Y.; Scholten, P.; Schenk, E.; et al. Endoscopic trimodal imaging versus standard video endoscopy for detection of early Barrett’s neoplasia: A multicenter, randomized, crossover study in general practice. Gastrointest. Endosc. 2011, 73, 195–203.
49. Kara, M.A.; Smits, M.E.; Rosmolen, W.D.; Bultje, A.C.; Ten Kate, F.J.; Fockens, P.; Tytgat, G.N.; Bergman, J.J. A randomized crossover study comparing light-induced fluorescence endoscopy with standard videoendoscopy for the detection of early neoplasia in Barrett’s esophagus. Gastrointest. Endosc. 2005, 61, 671–678.
50. Mendel, R.; Ebigbo, A.; Probst, A.; Messmann, H.; Palm, C. Barrett’s Esophagus Analysis Using Convolutional Neural Networks. In Bildverarbeitung für die Medizin; Informatik Aktuell; Maier-Hein, K.H., Deserno, T.M., Handels, H., Tolxdorff, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2017; pp. 80–85.
51. Ebigbo, A.; Mendel, R.; Probst, A.; Manzeneder, J.; Souza, L.A., Jr.; Papa, J.P.; Palm, C.; Messmann, H. Computer-aided diagnosis using deep learning in the evaluation of early oesophageal adenocarcinoma. Gut 2019, 68, 1143–1145.
52. Ghatwary, N.; Zolgharni, M.; Ye, X. Early esophageal adenocarcinoma detection using deep learning methods. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 611–621.
53. Hashimoto, R.; Requa, J.; Dao, T.; Ninh, A.; Tran, E.; Mai, D.; Lugo, M.; Chehade, N.E.-H.; Chang, K.J.; Karnes, W.E.; et al. Artificial intelligence using convolutional neural networks for real-time detection of early esophageal neoplasia in Barrett’s esophagus (with video). Gastrointest. Endosc. 2020, 91, 1264–1271.
54. Vennalaganti, P.R.; Kanakadandi, V.N.; Gross, S.A.; Parasa, S.; Wang, K.K.; Gupta, N.; Sharma, P. Inter-observer agreement among pathologists using wide-area transepithelial sampling with computer-assisted analysis in patients with Barrett’s esophagus. Am. J. Gastroenterol. 2015, 110, 1257–1260.
55. Johanson, J.F.; Frakes, J.; Eisen, D.; EndoCDx Collaborative Group. Computer-assisted analysis of abrasive transepithelial brush biopsies increases the effectiveness of esophageal screening: A multicenter prospective clinical trial by the EndoCDx Collaborative Group. Dig. Dis. Sci. 2011, 56, 767–772.
56. Anandasabapathy, S.; Sontag, S.; Graham, D.Y.; Frist, S.; Bratton, J.; Harpaz, N.; Waye, J.D. Computer-assisted brush-biopsy analysis for the detection of dysplasia in a high-risk Barrett’s esophagus surveillance population. Dig. Dis. Sci. 2011, 56, 761–766.
57. Vennalaganti, P.R.; Kaul, V.; Wang, K.K.; Falk, G.W.; Shaheen, N.J.; Infantolino, A.; Johnson, D.A.; Eisen, G.; Gerson, L.B.; Smith, M.S.; et al. Increased detection of Barrett’s esophagus-associated neoplasia using wide-area trans-epithelial sampling: A multicenter, prospective, randomized trial. Gastrointest. Endosc. 2018, 87, 348–355.
58. Swager, A.F.; de Groof, A.J.; Meijer, S.L.; Weusten, B.L.; Curvers, W.L.; Bergman, J.J. Feasibility of laser marking in Barrett’s esophagus with volumetric laser endomicroscopy: First-in-man pilot study. Gastrointest. Endosc. 2017, 86, 464–472.
59. Smith, M.S.; Cash, B.; Konda, V.; Trindade, A.J.; Gordon, S.; DeMeester, S.; Joshi, V.; Diehl, D.; Ganguly, E.; Mashimo, H.; et al. Volumetric laser endomicroscopy and its application to Barrett’s esophagus: Results from a 1000 patient registry. Dis. Esophagus 2019, 32, doz029.
60. Evans, J.A.; Poneros, J.M.; Bouma, B.E.; Bressner, J.; Halpern, E.F.; Shishkov, M.; Lauwers, G.Y.; Mino-Kenudson, M.; Nishioka, N.S.; Tearney, G.J. Optical coherence tomography to identify intramucosal carcinoma and high-grade dysplasia in Barrett’s esophagus. Clin. Gastroenterol. Hepatol. 2006, 4, 38–43.
61. Leggett, C.L.; Gorospe, E.C.; Chan, D.K.; Muppa, P.; Owens, V.; Smyrk, T.C.; Anderson, M.; Lutzke, L.S.; Tearney, G.; Wang, K.K. Comparative diagnostic performance of volumetric laser endomicroscopy and confocal laser endomicroscopy in the detection of dysplasia associated with Barrett’s esophagus. Gastrointest. Endosc. 2016, 83, 880–888.
62. Struyvenberg, M.R.; van der Sommen, F.; Swager, A.F.; de Groof, A.J.; Rikos, A.; Schoon, E.J.; Bergman, J.J.; de With, P.H.N.; Curvers, W.L. Improved Barrett’s neoplasia detection using computer-assisted multiframe analysis of volumetric laser endomicroscopy. Dis. Esophagus 2020, 33, doz065.
63. Sehgal, V.; Rosenfeld, A.; Graham, D.G.; Lipman, G.; Bisschops, R.; Ragunath, K.; Rodriguez-Justo, M.; Novelli, M.; Banks, M.R.; Haidry, R.J.; et al. Machine Learning Creates a Simple Endoscopic Classification System that Improves Dysplasia Detection in Barrett’s Oesophagus amongst Non-expert Endoscopists. Gastroenterol. Res. Pract. 2018, 2018, 1872437.
64. Bergman, J.J.G.H.M.; de Groof, A.J.; Pech, O.; Ragunath, K.; Armstrong, D.; Mostafavi, N.; Lundell, L.; Dent, J.; Vieth, M.; Tytgat, G.N.; et al. An interactive web-based educational tool improves detection and delineation of Barrett’s esophagus-related neoplasia. Gastroenterology 2019, 156, 1299–1308.
65. De Groof, J.; van der Sommen, F.; van der Putten, J.; Struyvenberg, M.R.; Zinger, S.; Curvers, W.L.; Pech, O.; Meining, A.; Neuhaus, H.; Bisschops, R.; et al. The Argos project: The development of a computer-aided detection system to improve detection of Barrett’s neoplasia on white light endoscopy. United Eur. Gastroenterol. J. 2019, 7, 538–547.
66. Kuramoto, M.; Kubo, M. Principles of NBI and BLI-blue laser imaging. In NBI/BLI Atlas: New Image-Enhanced Endoscopy; Tajiri, H., Kato, M., Tanaka, S., Saito, Y., Eds.; Nihon Medical Center Inc.: Tokyo, Japan, 2014; pp. 16–21.
67. Kaneko, K.; Oono, Y.; Yano, T.; Ikematsu, H.; Odagaki, T.; Yoda, Y.; Yagishita, A.; Sato, A.; Nomura, S. Effect of novel bright image enhanced endoscopy using blue laser imaging (BLI). Endosc. Int. Open 2014, 2, E212–E219.
68. Togashi, K.; Nemoto, D.; Utano, K.; Isohata, N.; Kumamoto, K.; Endo, S.; Lefor, A.K. Blue laser imaging endoscopy system for the early detection and characterization of colorectal lesions: A guide for the endoscopist. Therap. Adv. Gastroenterol. 2016, 9, 50–56.
69. Ishihara, R.; Takeuchi, Y.; Chatani, R.; Kidu, T.; Inoue, T.; Hanaoka, N.; Yamamoto, S.; Higashino, K.; Uedo, N.; Iishi, H.; et al. Prospective evaluation of narrow-band imaging endoscopy for screening of esophageal squamous mucosal high-grade neoplasia in experienced and less experienced endoscopists. Dis. Esophagus 2010, 23, 480–486.
70. Nagami, Y.; Tominaga, K.; Machida, H.; Nakatani, M.; Kameda, N.; Sugimori, S.; Okazaki, H.; Tanigawa, T.; Yamagami, H.; Kubo, N.; et al. Usefulness of non-magnifying narrow-band imaging in screening of early esophageal squamous cell carcinoma: A prospective comparative study using propensity score matching. Am. J. Gastroenterol. 2014, 109, 845–854.
71. Ohmori, M.; Ishihara, R.; Aoyama, K.; Nakagawa, K.; Iwagami, H.; Matsuura, N.; Shichijo, S.; Yamamoto, K.; Nagaike, K.; Nakahara, M.; et al. Endoscopic detection and differentiation of esophageal lesions using a deep neural network. Gastrointest. Endosc. 2020, 91, 301–309.
72. Kuwano, H.; Nishimura, Y.; Oyama, T.; Kato, H.; Kitagawa, Y.; Kusano, M.; Shimada, H.; Takiuchi, H.; Toh, Y.; Doki, Y.; et al. Guidelines for Diagnosis and Treatment of Carcinoma of the Esophagus April 2012 edited by the Japan Esophageal Society. Esophagus 2015, 12, 1–30.
73. Tokai, Y.; Yoshio, T.; Aoyama, K.; Horie, Y.; Yoshimizu, S.; Horiuchi, Y.; Ishiyama, A.; Tsuchida, T.; Hirasawa, T.; Sakakibara, Y.; et al. Application of artificial intelligence using convolutional neural networks in determining the invasion depth of esophageal squamous cell carcinoma. Esophagus 2020, 17, 250–256.
74. Horie, Y.; Yoshio, T.; Aoyama, K.; Yoshimizu, S.; Horiuchi, Y.; Ishiyama, A.; Hirasawa, T.; Tsuchida, T.; Ozawa, T.; Ishihara, S.; et al. Diagnostic outcomes of esophageal cancer by artificial intelligence using convolutional neural networks. Gastrointest. Endosc. 2019, 89, 25–32.
75. Kumagai, Y.; Kawada, K.; Yamazaki, S.; Iida, M.; Ochiai, T.; Kawano, T.; Takubo, K. Prospective replacement of magnifying endoscopy by a newly developed endocytoscope, the ‘GIF-Y0002’. Dis. Esophagus 2010, 23, 627–632.
76. Inoue, H.; Honda, T.; Nagai, K.; Kawano, T.; Yoshino, K.; Takeshita, K.; Endo, M. Ultra-high magnification endoscopic observation of carcinoma in situ of the esophagus. Dig. Endosc. 1997, 9, 16–18.
77. Sato, H.; Inoue, H.; Ikeda, H.; Sato, C.; Onimaru, M.; Hayee, B.; Phlanusi, C.; Santi, E.G.; Kobayashi, Y.; Kudo, S.E. Utility of intrapapillary capillary loops seen on magnifying narrow-band imaging in estimating invasive depth of esophageal squamous cell carcinoma. Endoscopy 2015, 47, 122–128.
78. Kumagai, Y.; Toi, M.; Kawada, K.; Kawano, T. Angiogenesis in superficial esophageal squamous cell carcinoma: Magnifying endoscopic observation and molecular analysis. Dig. Endosc. 2010, 22, 259–267.
79. Inoue, H.; Kaga, M.; Ikeda, H.; Sato, C.; Sato, H.; Minami, H.; Santi, E.G.; Hayee, B.; Eleftheriadis, N. Magnification endoscopy in esophageal squamous cell carcinoma: A review of the intrapapillary capillary loop classification. Ann. Gastroenterol. 2015, 28, 41–48.
80. Gono, K.; Obi, T.; Yamaguchi, M.; Ohyama, N.; Machida, H.; Sano, Y.; Yoshida, S.; Hamamoto, Y.; Endo, T. Appearance of enhanced tissue features in narrow-band endoscopic imaging. J. Biomed. Opt. 2004, 9, 568–577.
81. Arima, M.; Tada, M.; Arima, H. Evaluation of micro-vascular patterns of superficial esophageal cancers by magnifying endoscopy. Esophagus 2005, 2, 191–197.
82. Oyama, T.; Inoue, H.; Arima, M.; Momma, K.; Omori, T.; Ishihara, R.; Hirasawa, D.; Takeuchi, M.; Tomori, A.; Goda, K. Prediction of the invasion depth of superficial squamous cell carcinoma based on microvessel morphology: Magnifying endoscopic classification of the Japan Esophageal Society. Esophagus 2017, 14, 105–112.
83. Oyama, T.; Momma, K. A new classification of magnified endoscopy for superficial esophageal squamous cell carcinoma. Esophagus 2011, 8, 247–251.
84. Kim, S.J.; Kim, G.H.; Lee, M.W.; Jeon, H.K.; Baek, D.H.; Lee, B.E.; Song, G.A. New magnifying endoscopic classification for superficial esophageal squamous cell carcinoma. World J. Gastroenterol. 2017, 23, 4416–4421.
85. Zhao, Y.Y.; Xue, D.X.; Wang, Y.L.; Zhang, R.; Sun, B.; Cai, Y.P.; Feng, H.; Cai, Y.; Xu, J.M. Computer-assisted diagnosis of early esophageal squamous cell carcinoma using narrow-band imaging magnifying endoscopy. Endoscopy 2019, 51, 333–341.
86. Everson, M.; Herrera, L.C.G.P.; Li, W.; Muntion Luengo, I.; Ahmad, O.; Banks, M.; Magee, C.; Alzoubaidi, D.; Hsu, H.M.; Graham, D.; et al. Artificial intelligence for the real-time classification of intrapapillary capillary loop patterns in the endoscopic diagnosis of early oesophageal squamous cell carcinoma: A proof-of-concept study. United Eur. Gastroenterol. J. 2019, 7, 297–306.
87. García-Peraza-Herrera, L.C.; Everson, M.; Lovat, L.; Wang, H.P.; Wang, W.L.; Haidry, R.; Stoyanov, D.; Ourselin, S.; Vercauteren, T. Intrapapillary capillary loop classification in magnification endoscopy: Open dataset and baseline methodology. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 651–659.
88. Inoue, H.; Kazawa, T.; Sato, Y.; Satodate, H.; Sasajima, K.; Kudo, S.E.; Shiokawa, A. In vivo observation of living cancer cells in the esophagus, stomach, and colon using catheter-type contact endoscope, “Endo-Cytoscopy system”. Gastrointest. Endosc. Clin. N. Am. 2004, 14, 589–594.
89. Kumagai, Y.; Kawada, K.; Yamazaki, S.; Iida, M.; Momma, K.; Odajima, H.; Kawachi, H.; Nemoto, T.; Kawano, T.; Takubo, K. Endocytoscopic observation for esophageal squamous cell carcinoma: Can biopsy histology be omitted? Dis. Esophagus 2009, 22, 505–512.
90. Kumagai, Y.; Takubo, K.; Kawada, K.; Higashi, M.; Ishiguro, T.; Sobajima, J.; Fukuchi, M.; Ishibashi, K.I.; et al. A newly developed continuous zoom-focus endocytoscope. Endoscopy 2017, 49, 176–180.
91. Kumagai, Y.; Monma, K.; Kawada, K. Magnifying chromoendoscopy of the esophagus: In vivo pathological diagnosis using an endocytoscopy system. Endoscopy 2004, 36, 590–594.
92. Kumagai, Y.; Kawada, K.; Higashi, M.; Ishiguro, T.; Sobajima, J.; Fukuchi, M.; Ishibashi, K.; Baba, H.; Mochiki, E.; Aida, J.; et al. Endocytoscopic observation of various esophageal lesions at ×600: Can nuclear abnormality be recognized? Dis. Esophagus 2015, 28, 269–275.
93. Kumagai, Y.; Takubo, K.; Kawada, K.; Higashi, M.; Ishiguro, T.; Sobajima, J.; Fukuchi, M.; Ishibashi, K.; Mochiki, E.; Aida, J.; et al. Endocytoscopic observation of various types of esophagitis. Esophagus 2016, 13, 200–207.
94. Thakkar, S.J.; Kochhar, G.S. Artificial intelligence for real-time detection of early esophageal cancer: Another set of eyes to better visualize. Gastrointest. Endosc. 2020, 91, 52–54.
95. Kodashima, S.; Fujishiro, M.; Takubo, K.; Kammori, M.; Nomura, S.; Kakushima, N.; Muraki, Y.; Goto, O.; Ono, S.; Kaminishi, M.; et al. Ex vivo pilot study using computed analysis of endo-cytoscopic images to differentiate normal and malignant squamous cell epithelia in the oesophagus. Dig. Liver Dis. 2007, 39, 762–766.
96. Shin, D.; Protano, M.A.; Polydorides, A.D.; Dawsey, S.M.; Pierce, M.C.; Kim, M.K.; Schwarz, R.A.; Quang, T.; Parikh, N.; Bhutani, M.S.; et al. Quantitative analysis of high-resolution microendoscopic images for diagnosis of esophageal squamous cell carcinoma. Clin. Gastroenterol. Hepatol. 2015, 13, 272–279.
97. Quang, T.; Schwarz, R.A.; Dawsey, S.M.; Tan, M.C.; Patel, K.; Yu, X.; Wang, G.; Zhang, F.; Xu, H.; Anandasabapathy, S.; et al. A tablet-interfaced high-resolution microendoscope with automated image interpretation for real-time evaluation of esophageal squamous cell neoplasia. Gastrointest. Endosc. 2016, 84, 834–841.
98. Kumagai, Y.; Takubo, K.; Kawada, K.; Aoyama, K.; Endo, Y.; Ozawa, T.; Hirasawa, T.; Yoshio, T.; Ishihara, S.; Fujishiro, M.; et al. Diagnosis using deep-learning artificial intelligence based on the endocytoscopic observation of the esophagus. Esophagus 2019, 16, 180–187.
99. Wu, L.; Zhang, J.; Zhou, W.; An, P.; Shen, L.; Liu, J.; Jiang, X.; Huang, X.; Mu, G.; Wan, X.; et al. Randomised controlled trial of WISENSE, a real-time quality improving system for monitoring blind spots during esophagogastroduodenoscopy. Gut 2019, 68, 2161–2169.
100. Su, J.R.; Li, Z.; Shao, X.J.; Ji, C.R.; Ji, R.; Zhou, R.C.; Li, G.C.; Liu, G.Q.; He, Y.S.; Zuo, X.L.; et al. Impact of real-time automatic quality control system on colorectal polyp and adenoma detection: A prospective randomized controlled study (with videos). Gastrointest. Endosc. 2020, 91, 415–424.
101. Xu, J.; Jing, M.; Wang, S.; Yang, C.; Chen, X. A review of medical image detection for cancers in digestive system based on artificial intelligence. Expert Rev. Med. Devices 2019, 16, 877–889.
102. Kanesaka, T.; Lee, T.C.; Uedo, N.; Lin, K.P.; Chen, H.Z.; Lee, J.Y.; Wang, H.P.; Chang, H.T. Computer-aided diagnosis for identifying and delineating early gastric cancers in magnifying narrow-band imaging. Gastrointest. Endosc. 2018, 87, 1339–1344.
103. Yoon, H.J.; Kim, S.; Kim, J.H.; Keum, J.S.; Oh, S.I.; Jo, J.; Chun, J.; Youn, Y.H.; Park, H.; Kwon, I.G.; et al. A Lesion-Based Convolutional Neural Network Improves Endoscopic Detection and Depth Prediction of Early Gastric Cancer. J. Clin. Med. 2019, 8, 1310.
104. Byrne, M.F.; Chapados, N.; Soudan, F.; Oertel, C.; Pérez, M.L.; Kelly, R.; Iqbal, N.; Chandelier, F.; Rex, D.K. Real-time differentiation of adenomatous and hyperplastic diminutive colorectal polyps during analysis of unaltered videos of standard colonoscopy using a deep learning model. Gut 2019, 68, 94–100.
105. Wang, P.; Xiao, X.; Glissen Brown, J.R.; Berzin, T.M.; Tu, M.; Xiong, F.; Hu, X.; Liu, P.; Song, Y.; Zhang, D.; et al. Development and validation of a deep-learning algorithm for the detection of polyps during colonoscopy. Nat. Biomed. Eng. 2018, 2, 741–748.
106. Ding, Z.; Shi, H.; Zhang, H.; Meng, L.; Fan, M.; Han, C.; Zhang, K.; Ming, F.; Xie, X.; Liu, H.; et al. Gastroenterologist-Level Identification of Small-Bowel Diseases and Normal Variants by Capsule Endoscopy Using a Deep-Learning Model. Gastroenterology 2019, 157, 1044–1054.
107. Park, S.H.; Han, K. Methodologic Guide for Evaluating Clinical Performance and Effect of Artificial Intelligence Technology for Medical Diagnosis and Prediction. Radiology 2018, 286, 800–809.
108. Wang, P.; Berzin, T.M.; Glissen Brown, J.R.; Bharadwaj, S.; Becq, A.; Xiao, X.; Liu, P.; Li, L.; Song, Y.; Zhang, D.; et al. Real-time automatic detection system increases colonoscopic polyp and adenoma detection rates: A prospective randomised controlled study. Gut 2019, 68, 1813–1819.
109. England, J.R.; Cheng, P.M. Artificial Intelligence for Medical Image Analysis: A Guide for Authors and Reviewers. Am. J. Roentgenol. 2019, 212, 513–519.
110. He, Y.S.; Su, J.R.; Li, Z.; Zuo, X.L.; Li, Y.Q. Application of artificial intelligence in gastrointestinal endoscopy. J. Dig. Dis. 2019, 20, 623–630.
111. Park, S.H. Artificial intelligence in medicine: Beginner’s guide. J. Korean Soc. Radiol. 2018, 78, 301–308.
112. Bae, H.J.; Kim, C.W.; Kim, N.; Park, B.; Kim, N.; Seo, J.B.; Lee, S.M. A Perlin Noise-Based Augmentation Strategy for Deep Learning with Small Data Samples of HRCT Images. Sci. Rep. 2018, 8, 17687.
113. Tavanaei, A.; Ghodrati, M.; Kheradpisheh, S.R.; Masquelier, T.; Maida, A. Deep learning in spiking neural networks. Neural Netw. 2019, 111, 47–63.
114. Luo, H.; Xu, G.; Li, C.; He, L.; Luo, L.; Wang, Z.; Jing, B.; Deng, Y.; Jin, Y.; Li, Y.; et al. Real-time artificial intelligence for detection of upper gastrointestinal cancer by endoscopy: A multicentre, case-control, diagnostic study. Lancet Oncol. 2019, 20, 1645–1654.
Figure 1. Types of machine learning algorithms: supervised learning—task driven (classification); unsupervised learning—data driven (clustering); and reinforcement learning—algorithm learns from trial and error.
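The distinction drawn in Figure 1 can be made tangible with a toy sketch (Python/scikit-learn on synthetic data; an illustration only, not any system from this review): the supervised classifier is given labels to learn from, while the unsupervised clustering algorithm must discover structure on its own. Reinforcement learning, the third paradigm, requires an interactive environment and is omitted here.

```python
# Minimal sketch contrasting two of the paradigms in Figure 1 on toy data:
# supervised learning (task driven: labels guide a classifier) vs.
# unsupervised learning (data driven: clusters found without labels).
# The data are synthetic; nothing here comes from the reviewed studies.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

X, y = make_blobs(n_samples=200, centers=2, random_state=0)  # toy 2-D points

clf = LogisticRegression().fit(X, y)            # supervised: uses the labels y
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)  # unsupervised: labels unseen

print("supervised accuracy:", clf.score(X, y))
print("first 10 cluster assignments:", clusters[:10])
```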
Figure 2. Convolutional neural network (CNN) system: input layer with raw data of the endoscopic image, multiple layers with the role of extracting specific features, and elaborating image classification in the output layer.
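For readers less familiar with the architecture sketched in Figure 2, the snippet below is a minimal, generic illustration of that structure in Python/PyTorch (an assumed framework; it does not reproduce any of the reviewed systems): convolutional layers extract features from the raw image, and a final fully connected layer produces the classification.

```python
# Minimal sketch of the CNN structure described in Figure 2 (illustrative only).
import torch
import torch.nn as nn

class TinyEndoscopyCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(   # feature-extraction layers
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(  # output layer producing class scores
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

logits = TinyEndoscopyCNN()(torch.randn(1, 3, 224, 224))  # one RGB frame
print(logits.shape)  # torch.Size([1, 2]) -> e.g., neoplastic vs. non-neoplastic
```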
Table 1. Current studies applying AI in detection of esophageal cancer.
Each entry lists: reference and publication year; aim of study; design; type of AI (classifier); AI validation methods; training and test datasets (number of cases negative/positive, number of images negative/positive, endoscopic procedure); and performance (accuracy %, sensitivity/specificity %, AUC), where reported.

Van der Sommen et al. [8], 2016. Aim: detection of early neoplasia in BE. Design: R. AI system: color filters, specific texture features, and ML (“filter with Gabor bank”, SVM). Validation: leave-one-out CV on a per-patient basis. Dataset: 44 pts with BE (23/21); 100 EGD images; WLE. Performance (sensitivity/specificity, %): 83 (per image); 86/87 (per patient).

Mendel et al. [50], 2017. Aim: detection of early neoplasia in BE. Design: R. AI system: CNN. Dataset: 50/50 EGD images (Endoscopic Vision Challenge MICCAI 2015); HD-WLE. Performance (sensitivity/specificity, %): 94/88.

Ebigbo et al. [51], 2019. Aim: detection of early Barrett AC. Design: R. AI system: deep CNN (ResNet). Validation: leave-one-patient-out CV. Datasets: local dataset, 41/33 pts, 148 HD-WLE/NBI images; MICCAI 2015 dataset, 22/17 pts, 100 HD-WLE images. Performance (sensitivity/specificity, %): local dataset, 97/88 (WLE) and 94/80 (NBI); MICCAI dataset, 92/100 (WLE).

Ghatwary et al. [52], 2019. Aim: detection of early Barrett AC. Design: R. AI systems: R-CNN, Fast R-CNN, Faster R-CNN, SSD. Validation: 2- and 5-fold CV, leave-one-patient-out CV. Training dataset: MICCAI dataset, 21 pts (9/12), 60 (30/30) EGD images, HD-WLE. Test dataset: MICCAI dataset, 9 pts (4/5) for validation and 9 pts (4/5) for testing, 40 (20/20) EGD images, HD-WLE. Performance: accuracy 83 (ARR for Faster R-CNN); sensitivity/specificity 96/92 (SSD).

Hashimoto et al. [53], 2020. Aim: detection of early esophageal neoplasia on BE. Design: R. AI system: CNN based on the Xception architecture, YOLO v2. Validation: internal validation. Training dataset: 100 pts (30/70), 1832 (916/916) EGD images, WLE/NBI. Test dataset: 39 pts (13/26), 458 (233/225) EGD images, WLE/NBI. Performance: accuracy 95.4; sensitivity/specificity 96.4/94.2.

Vennalaganti et al. [57] (NCT03008980), 2017. Aim: detection of early esophageal neoplasia on BE. Design: P. AI system: neural network-based high-speed computer scan. Dataset: 160 pts (134 ND/LGD, 26 HGD/EAC) randomized to biopsy → WATS (76 pts) or WATS → biopsy (84 pts); WATS. Performance: the addition of WATS increased the absolute detection rate by 14.4%.

Swager et al. [58], 2017. Aim: detection of early BE neoplasia. Design: R. AI systems: ML methods (SVM, discriminant analysis, AdaBoost, random forest, k-nearest neighbors, etc.). Validation: leave-one-out CV. Dataset: 19 BE pts; 60 (30/30) ex vivo VLE images. Performance: sensitivity/specificity 90/93; AUC 0.95.

Struyvenberg et al. [62] (NCT01862666), 2019. Aim: detection of Barrett’s neoplasia. Design: P. AI systems: 8 predictive models (e.g., SVM, random forest, naive Bayes); the best was CAD multi-frame image analysis. Validation: leave-one-out CV. Dataset: 52 endoscopic resection specimens from 29 BE pts; 60 (30/30) regions of interest plus 25 neighboring frames → 3060 ex vivo VLE frames. Performance: AUC 0.94.

Sehgal et al. [63] (UK national clinical trial, REC reference 08/H0808/8, study no. 08/0018), 2018. Aim: detection of dysplasia arising in BE. Design: P. AI system: ML algorithm, decision tree (WEKA package). Dataset: 40 pts with BE ± dysplasia; video HD-EGD, i-Scan. Performance: accuracy 92; sensitivity/specificity 97/88.

Ebigbo et al. [28], 2020. Aim: real-time detection of early neoplasia in BE. Design: R/P. AI system: DeepLab V3+, an encoder–decoder ANN (ResNet, 101 layers); classification (global prediction) and segmentation (dense prediction). Training dataset: 129 EGD images, HD-WLE/gNBI. Test dataset: 14 BE pts, 26/36 images, random images from a real-time camera livestream. Performance: accuracy 89.9; sensitivity/specificity 83.7/100.0.

De Groof et al. [65] (the ARGOS project), 2019. Aim: recognition of Barrett’s neoplasia. Design: P. AI system: supervised ML models (trained on color/texture features), SVM. Validation: leave-one-out CV. Dataset: 60 pts (20/40); 60 EGD images; HD-WLE. Performance: accuracy 92; sensitivity/specificity 95/85; AUC 0.92.

Guo et al. [10], 2020. Aim: real-time automated diagnosis of precancerous lesions and ESCCs. Design: R/P. AI system: DL model, SegNet (deep encoder–decoder architecture for multi-class pixelwise segmentation); an AI probability heat map is generated for each input EGD image. Training dataset: 358/191 pts; 6473 (3703/2770) NBI images. Validation datasets: 59 consecutive cc cases, 1480 cc NBI images (dataset A); 2004 consecutive non-cc cases, 5191 non-cc NBI images (dataset B); 27 non-ME plus 20 ME cc cases, NBI video EGD (dataset C); 33 normal cases, NBI video EGD (dataset D). Performance: sensitivity/specificity 98.04/95.03 (datasets A, B); per-frame/per-lesion sensitivity 60.8/100 (non-ME video, dataset C) and 96.1/100 (ME video, dataset C); per-frame/per-lesion specificity 99.9/90.9 (video, dataset D); AUC 0.989 (datasets A, B).

Ohmori et al. [71], 2020. Aim: detection and differentiation of esophageal SCC. Design: R. AI system: deep neural network (SSD), Caffe deep learning framework. Training dataset: 804 SCC pts; 9591 non-ME and 7844 ME SCC images; 1692 non-ME and 3435 ME non-cc images; ME/non-ME EGD images. Test dataset: 135 pts; 255 non-ME WLE, 268 non-ME NBI/BLI, and 204 ME NBI/BLI EGD images. Performance: accuracy 83; sensitivity/specificity 98/68.

Tokai et al. [73], 2020. Aim: diagnostic ability of AI to measure ESCC invasion depth. Design: R. AI system: deep neural network (SSD), GoogLeNet, Caffe deep learning framework. Training dataset: 8428 images for pre-training; 1751 EGD training images; WLE/NBI. Test dataset: 55 consecutive pts (42 with EP-SM1 ESCC, 13 with SM2 ESCC); 291 WLE/NBI images. Performance: accuracy 95.5 (SCC diagnosis) and 80.9 (invasion depth); sensitivity 84.1 (invasion depth).

Zhao et al. [85], 2019. Aim: automated classification of IPCLs to improve the detection of esophageal SCC. Design: P. AI system: double-labelling FCN with self-transfer learning, VGG16 architecture. Validation: 3-fold CV. Dataset: 219 pts (30 inflammation, 24 LGD, 165 ESCC); 1350 ME-NBI images → 1383 lesions (207 type A, 970 type B1, 206 type B2). Performance: accuracy 89.2 (lesion level) and 93 (pixel level); sensitivity/specificity 87.0/84.1 (lesion level).

Everson et al. [86], 2019. Aim: real-time classification of IPCL patterns in the diagnosis of ESCC. Design: P. AI system: CNN with eCAMs (discriminative areas, normal/abnormal). Validation: 5-fold CV. Dataset: 17 pts (7 normal, 10 ESCC); 7046 sequential HD ME-NBI images (video EGD). Performance: accuracy 93.7 (normal/abnormal IPCL); sensitivity/specificity 89.3/98.

García-Peraza-Herrera et al. [87], 2020. Aim: classification of still images or video frames as normal or abnormal IPCL patterns (esophageal SCC detection). Design: P. AI system: CNN architecture for the binary classification task (with explainability), ResNet18, CAM-DS. Dataset: 114 pts (45/69); 67,742 annotated ME-NBI video frames (28,078/39,662), an average of 593 frames per patient. Performance: accuracy 91.7; sensitivity/specificity 93.7/92.4.

Kodashima et al. [95], 2007. Aim: discrimination of normal and malignant esophageal tissue at the cellular level. Design: P, ex vivo pilot. AI system: ImageJ program. Dataset: 10 pts; endocytoscopy. Performance: difference in the mean ratio of total nuclei, 6.4 ± 1.9% in normal vs. 25.3 ± 3.8% in malignant samples.

Shin et al. [96], 2015. Aim: diagnosis of esophageal squamous dysplasia. Design: P. AI system: linear discriminant analysis. Dataset: 177 pts; 375 sites (training set, 104 sites; test set, 104 sites; validation set, 167 sites); laptop-interfaced HRME. Performance: sensitivity/specificity 87/97.

Quang et al. [97], 2016. Aim: diagnosis of esophageal SCC. Design: R. AI system: linear discriminant analysis. Dataset: identical to that of Shin et al. [96]; tablet-interfaced HRME. Performance: sensitivity/specificity 95/91.

Kumagai et al. [98], 2019. Aim: diagnosis of ESCC based on ECS images (“optical biopsy”). Design: R/P. AI system: CNN based on GoogLeNet (22 layers, backpropagation), Caffe deep learning framework. Training dataset: 240 pts (114/126) → 308 ECS; 4715 (3574/1141) ECS images. Test dataset: 55 consecutive pts (28/27); 1520 ECS images. Performance: accuracy 90.9; sensitivity/specificity 92.6/89.3; AUC 0.85 overall, 0.90 (HMP), 0.72 (LMP).

Horie et al. [74], 2019. Aim: detection of esophageal cancer (SCC and AC). Design: R. AI system: deep CNN (SSD), Caffe deep learning framework. Training dataset: 384 pts with esophageal cancer (397 ESCC lesions, 32 EAC lesions); 8428 WLE/NBI images. Test dataset: 50/47 pts (49 lesions: 41 ESCC, 8 EAC); 1118 WLE/NBI images. Performance: accuracy 98 (superficial/advanced cancer), 99 for ESCC, 90 for EAC; sensitivity 98.

Luo et al. [114], 2019. Aim: AI for the diagnosis of upper gastrointestinal cancers. Design: R/P. AI system: GRAIDS, a DL semantic segmentation model (encoder–decoder DeepLab V3+ algorithm). Validation: internal validation, external validation (5 hospitals), and prospective validation. Dataset: 1,036,496 endoscopy images from 84,424 individuals used to develop and test GRAIDS; HD-WLE EGD. Performance: accuracy 95.5 (internal validation set), 92.7 (prospective set), and 91.5–97.7 (five external validation sets); sensitivity/specificity 94.2/92.3 (prospective set); AUC 0.966–0.990 (five external validation sets).

EGD—esophagogastroduodenoscopy; AI—artificial intelligence; R—retrospective; P—prospective; WLE—white-light endoscopy; NBI—narrow-band imaging; HD—high definition; ME—magnifying endoscopy; VLE—volumetric laser endomicroscopy; WATS—wide-area transepithelial sampling; BLI—blue laser imaging; ECS—endocytoscopic system; CV—cross-validation; SVM—support vector machine; ANN—artificial neural network; CNN—convolutional neural network; R-CNN—region-based convolutional neural network; SSD—Single-Shot MultiBox Detector; FCN—fully convolutional network; DT—decision tree; ARR—average recall rate; cc—cancerous; ND—nondysplastic; LGD—low-grade dysplasia; HGD—high-grade dysplasia; EAC—early adenocarcinoma; ESCC—early squamous cell carcinoma; IPCL—intrapapillary capillary loop; eCAMs—explicit class activation maps; HRME—high-resolution microscopic endoscopy; HMP—higher-magnification picture; LMP—lower-magnification picture.
Table 2. Clinical trials using AI for diagnosing early neoplasia in Barrett’s esophagus and esophageal carcinoma.
Each entry lists: status; study title; registration number/acronym; study type; conditions; design/interventions; outcomes; target sample size (number of participants); and region.

Recruiting. “The analysis of WATS3D increased yield of Barrett’s esophagus and esophageal dysplasia” (NCT03008980); observational. Conditions: GERD; Barrett’s esophagus; esophageal dysplasia; esophageal adenocarcinoma. Design/interventions: diagnostic test—patients undergo routine-care EGD with WATS3D brush samples and forceps biopsies; collection of cytology/pathology results. Outcomes: primary—outcomes of patients undergoing WATS sampling, specifically the incremental yield for Barrett’s esophagus and esophageal dysplasia from WATS sampling above that obtained from routine forceps biopsies in various clinical settings. Target sample size: 75,000. Region: US.

Recruiting. “Volumetric laser endomicroscopy with intelligent real-time image segmentation (IRIS)” (NCT03814824); interventional. Conditions: Barrett’s esophagus with/without dysplasia; Barrett’s esophagus with low-/high-grade dysplasia. Design/interventions: diagnostic tests—IRIS and VLE; patients undergo a VLE exam ± IRIS per the standard of care and are randomized to VLE without IRIS first vs. VLE with IRIS first. Outcomes: primary—time for image interpretation, biopsy yield, and number of biopsies. Target sample size: 200. Region: US.

Completed. “A comparison of volumetric laser endomicroscopy and endoscopic mucosal resection in patients with Barrett’s dysplasia or intramucosal adenocarcinoma” (NCT01862666); observational. Conditions: Barrett’s-associated dysplasia; intramucosal adenocarcinoma; CAD image analysis. Design/interventions: to evaluate the ability of physicians to use VLE to visualize HGIN/IMC in both the ex vivo and in vivo settings and to correlate those images with standard histology of EMR specimens as the gold standard. Outcomes: primary—correlation of features seen on VLE images with those seen on histopathology from EMR specimens; secondary—creation of an image atlas and determination of intra- and inter-observer agreement on VLE images in correlation with histopathology, leading to refinement of the existing VLE image interpretation criteria and validation of the VLE classification. Target sample size: 30. Region: The Netherlands.

Preinitiation. “The additional effect of AI support system to detect esophageal cancer—exploratory randomized control trial” (UMIN000039924/AIDEC); interventional. Conditions: esophageal neoplasm; AI. Design/interventions: to investigate the efficacy of AI for the diagnosis of esophageal cancer. Outcomes: primary—improvement of the detection rate with the AI support system among less experienced endoscopists; secondary—improvement of the detection rate with the AI support system among experienced endoscopists. Target sample size: 300. Region: Japan.

Recruiting. “Automatic diagnosis of early esophageal squamous neoplasia using pCLE with AI” (NCT04136236); observational. Conditions: esophageal neoplasm; AI; confocal laser endomicroscopy. Design/interventions: diagnostic test—diagnosis by the AI system and by endoscopists. Outcomes: primary—diagnostic efficiency of AI for esophageal mucosal disease on real-time pCLE examination; secondary—comparison of the diagnostic efficiency of AI with that of endoscopists. Target sample size: 60. Region: China.

Recruiting. “Research on development of AI for detection and classification of upper gastrointestinal cancers in endoscopic images” (UMIN000039597); observational. Conditions: esophageal neoplasm; AI. Design/interventions: collection of endoscopic images of upper GI cancer, development of an AI system for the detection of upper GI cancer, and assessment of the AI system’s performance by expert endoscopists. Outcomes: primary—accuracy of the AI system for the detection of upper GI cancers in endoscopic images; secondary—accuracy of the AI system for the classification of upper GI cancers in endoscopic images. Target sample size: 200. Region: Japan.

Completed (April 2020). “AI for early diagnosis of esophageal squamous cell carcinoma during optical enhancement magnifying endoscopy” (NCT03759756); observational. Conditions: AI; optical enhancement endoscopy; magnifying endoscopy. Design/interventions: arm groups—AI visible/invisible; the endoscopic novices analyzing the images can/cannot see the automatic diagnosis. Outcomes: primary—diagnostic efficiency (sensitivity, specificity, and accuracy) of the AI model. Target sample size: 119. Region: China.

GI—gastrointestinal; AI—artificial intelligence; GERD—gastroesophageal reflux disease; EGD—esophagogastroduodenoscopy; pCLE—probe-based confocal laser endomicroscopy; VLE—volumetric laser endomicroscopy; WATS3D—wide-area transepithelial sampling associated with computer-assisted three-dimensional analysis; IRIS—intelligent real-time image segmentation; EMR—endoscopic mucosal resection; HGIN—high-grade intraepithelial neoplasia; IMC—intramucosal adenocarcinoma; CAD—computer-assisted diagnosis.

Back to TopTop