Review

An Extra Set of Intelligent Eyes: Application of Artificial Intelligence in Imaging of Abdominopelvic Pathologies in Emergency Radiology

Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Diagnostics 2022, 12(6), 1351; https://doi.org/10.3390/diagnostics12061351
Submission received: 15 February 2022 / Revised: 19 May 2022 / Accepted: 26 May 2022 / Published: 30 May 2022

Abstract

Imaging in the emergent setting carries high stakes. With increased demand for dedicated on-site service, emergency radiologists face increasingly large image volumes that require rapid turnaround times. However, novel artificial intelligence (AI) algorithms may assist trauma and emergency radiologists with efficient and accurate medical image analysis, providing an opportunity to augment human decision making, including outcome prediction and treatment planning. While traditional radiology practice involves visual assessment of medical images for detection and characterization of pathologies, AI algorithms can automatically identify subtle disease states and provide quantitative characterization of disease severity based on morphologic image details, such as geometry and fluid flow. Taken together, the benefits provided by implementing AI in radiology have the potential to improve workflow efficiency, engender faster turnaround results for complex cases, and reduce heavy workloads. Although analysis of AI applications within abdominopelvic imaging has primarily focused on oncologic detection, localization, and treatment response, several promising algorithms have been developed for use in the emergency setting. This article aims to establish a general understanding of the AI algorithms used in emergent image-based tasks and to discuss the challenges associated with the implementation of AI into the clinical workflow.

1. Introduction

Spawned by the age of digital medical imaging and high-throughput computing, artificial intelligence (AI) is one of the major innovations in the healthcare sector, particularly in radiology [1,2,3]. At a basic level, AI can be defined as a field of science concerned with creating systems that perform problem-solving tasks normally requiring human intelligence [4]. Machine learning is a subset of AI that builds algorithms and statistical models enabling systems to perform specific user-defined tasks without explicit instructions. These methods rely on predefined, engineered features derived from expert knowledge that quantify radiographic characteristics, including volume, shape, size, texture, intensity, and location. The most robust features are then selected and fed into statistical machine learning models to identify imaging biomarkers. In short, machine learning methods rely on patterns and inferences derived from the human-engineered data extracted from images.
In recent years, improvements in AI applications have focused on deep learning systems that scale with data, rather than traditional machine learning methods based on predefined radiologic features. Deep learning is a specific machine learning approach that relies on artificial neural network (ANN) algorithms containing hidden layers, such as convolutional neural networks (CNNs), to process raw data and perform classification or detection tasks. By design, deep learning is a representation-learning method with multiple levels of representation. Deep learning methods therefore do not require human-engineered features as inputs; they automatically extract features from images. This in turn reduces the need for manual image processing, a time-consuming endeavor that is often subject to operator bias [5]. Yet many clinicians are unaware of the complex relationships inherent in deep learning algorithms, rendering the approach difficult to accept in a clinical setting. While machine learning models may be more efficient in some cases (e.g., when the input data are single-number metrics), deep learning models, although requiring substantially more input data, may outperform machine learning approaches on more complex data.
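The automatic feature extraction that distinguishes deep learning can be illustrated with the convolution operation at the heart of a CNN. The sketch below is a toy, pure-Python illustration (the image and kernel values are invented), not a clinical model:

```python
# Toy illustration: a single convolution with a ReLU non-linearity, the basic
# building block a CNN stacks in its hidden layers to learn features directly
# from pixel data rather than from hand-engineered inputs.

def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(max(s, 0))  # ReLU non-linearity
        out.append(row)
    return out

# A vertical-edge kernel responds strongly at the boundary of a bright region,
# mimicking a learned low-level feature such as a tissue or fluid interface.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge_kernel = [[-1, 1],
               [-1, 1]]
feature_map = conv2d(image, edge_kernel)  # peaks along the brightness edge
```

Stacking many such learned kernels, interleaved with non-linearities and pooling, is what lets a deep network build progressively more abstract representations of the image.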
As emergency radiology confronts mounting expectations to deliver dedicated on-site service and the demand of increased imaging load and rapid report turnaround time, this field represents a promising avenue of entry for AI into radiology departments globally. Yet, there is a paucity of literature describing AI applications in this field, particularly with regard to the imaging of emergent abdominopelvic pathologies and the practical implications. Deep learning systems provide an opportunity to augment human decision making and improve efficiency and effectiveness, thus achieving a true enhancement in the quality of care [6,7]. Ultimately, this review will provide an overview of the AI applications for abdominopelvic imaging in emergency radiology according to pathologic classification: (1) diseases of the digestive tract, (2) trauma, and (3) abdominal aortic aneurysms.

2. Overview

In recent years, several promising algorithms across a variety of fields have been developed for use in the emergency setting (Table 1). For instance, AI algorithms have demonstrated clinical utility in non-contrast head computed tomography (CT) scans for detecting hemorrhage, mass effect, hydrocephalus, and suspected acute infarct. Additional promising AI applications include the detection and classification of chest abnormalities on chest radiographs and CT scans; the identification and quantification of coronary artery calcification on CT scans; and fracture detection in orthopedic trauma. Several studies have also described the utility of AI in oncologic detection, localization, and treatment response. Yet, irrespective of the application, the primary advantage of AI implementation in radiology is its ability to act as a second opinion [8]. This has the potential to improve diagnostic accuracy, particularly in resource-limited settings. AI also has great utility in triaging patients: an algorithm can sort abnormal cases according to predefined criteria, and the radiologist can then work from the resulting priority list. This has the potential to improve workflow efficiency, with faster turnaround for complex cases and reduced overall workloads.
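The triage workflow described above can be sketched as a simple priority queue; the study identifiers and AI scores below are hypothetical, not from any real system:

```python
# Hypothetical sketch of AI-assisted triage: flagged studies are ordered by
# the algorithm's predicted probability of a critical finding, so the
# radiologist reads the highest-risk cases first.
import heapq

def build_worklist(cases):
    """Return study IDs sorted most-urgent-first by AI score (0-1)."""
    heap = [(-score, study_id) for study_id, score in cases]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

cases = [("CT-ABD-103", 0.12),   # likely normal
         ("CT-ABD-101", 0.94),   # suspected critical finding
         ("XR-ABD-102", 0.57)]   # indeterminate
worklist = build_worklist(cases)
# -> ["CT-ABD-101", "XR-ABD-102", "CT-ABD-103"]
```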

3. Diseases of the Digestive Tract

3.1. Small Bowel Obstruction

As small bowel obstruction (SBO) is a common cause of acute abdominal pain in the emergency setting, patients with high clinical suspicion often undergo abdominal radiography as a first-line screening test. Three radiologic signs are considered pathognomonic for SBO: two or more air-fluid levels, air-fluid levels wider than 2.5 cm, and air-fluid levels differing by more than 5 mm within the same loop of bowel [12]. Yet recent studies indicate a direct correlation between SBO detection and radiologist experience [11]. Thus, AI-assisted detection may aid non-radiologists or junior radiology staff in diagnosing this pathology [11].
Although difficult to interpret, the mechanisms behind deep learning architectures can be investigated with occlusion maps. Occlusion maps are generated by systematically occluding regions of the original image and measuring the resulting change in the model's output probability; regions whose occlusion substantially lowers the probability are those the network relies on. When overlaid on the original image, these maps make it easier to parse out the image details relevant to classification. The application of occlusion maps to neural network models of SBO detection appears to demonstrate the significance of dilated small bowel segments, though other SBO-specific features may be distributed throughout the image [13]. In a single-institution pilot study, transfer learning from a pre-trained neural network was conducted on a set of 3663 clinical supine abdominal radiographs. Four hundred and fifty-two images were classified by the transferred neural network as false positives, of which 94 (21%) were considered ileus and 50 (11%) low-grade bowel obstruction [13]. Following training, the neural network achieved an AUC of 0.849, with an observed sensitivity and specificity of 84% and 68%, respectively [13]. Follow-up studies using full CNN models with larger training sets demonstrated a marked improvement in the AUC to 0.97, with a sensitivity and specificity of 91% and 92%, respectively [26]. More recently, ensemble models built from a variety of CNN architectures on 990 plain abdominal radiographs showed an AUC of 0.96, corresponding to a sensitivity and specificity of 91% and 93%, respectively, in identifying SBO [20]. Considering that abdominal radiographs are less sensitive than CT for the diagnosis of SBO, similar studies using CT images are warranted.
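A minimal sketch of the occlusion-map idea follows, with a stand-in scoring function in place of a trained network (all values are illustrative):

```python
# Toy occlusion-map sketch: slide an occluding patch over the image, re-score
# with the model, and record how much the output probability drops. Large
# drops mark regions the model relies on for its prediction.

def toy_model(image):
    """Stand-in classifier: probability rises with the mean brightness of the
    central 2x2 region (e.g., a dilated, fluid-filled bowel loop)."""
    center = [image[i][j] for i in range(1, 3) for j in range(1, 3)]
    return sum(center) / len(center)

def occlusion_map(image, model, patch=2):
    h, w = len(image), len(image[0])
    base = model(image)
    heat = [[0.0] * (w - patch + 1) for _ in range(h - patch + 1)]
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            occluded = [row[:] for row in image]
            for di in range(patch):
                for dj in range(patch):
                    occluded[i + di][j + dj] = 0.0  # mask the patch
            heat[i][j] = base - model(occluded)  # probability drop
    return heat

image = [[0.1, 0.1, 0.1, 0.1],
         [0.1, 0.9, 0.9, 0.1],
         [0.1, 0.9, 0.9, 0.1],
         [0.1, 0.1, 0.1, 0.1]]
heat = occlusion_map(image, toy_model)
# The largest drop occurs when the bright central region is occluded,
# i.e., the map localizes the feature driving the prediction.
```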
Post-validation, the development of such AI-driven systems could alert clinicians to the presence of critical clinical features warranting expedited clinical review and thereby improve patient outcomes.

3.2. Intussusception

In children aged 3 to 36 months old, intussusception remains the most common cause of intestinal obstruction [27]. While roughly 84% of patients experience alleviation of symptoms when diagnosed and treated with an air enema within 24 h of onset, delays in treatment can result in complications such as ischemia, necrosis, and perforation [27]. Traditionally, abdominal radiographs have markedly low sensitivity for detection of intussusception (<50%) and a poor rate of interobserver reliability [28]. In recent years, however, studies investigating risk stratification for intussusception in children have demonstrated the utility of abdominal X-rays as an initial diagnostic modality, with one study reporting sensitivity and specificity values of 0.77 and 0.79, respectively [29]. Unlike ultrasound (US), plain radiography is unaffected by operator skill and equipment variability and remains an inexpensive option for a first-line screening test [19]. As such, the implementation of AI algorithms in abdominal radiography may have a broad patient impact, and it shows promise as an initial point of entry.
Recent studies of AI applications in the detection of intussusception have focused on deep learning algorithms for abdominal radiographs and indicate that the technique may add value to the field. In one retrospective study of 681 pediatric patients, including 242 children diagnosed with intussusception, the authors used a You Only Look Once, Version 3 (YOLOv3) deep learning algorithm to validate automated detection [30]. The sensitivity of the algorithm was higher than that of radiologist interpretation alone (0.76 vs. 0.46), with no significant difference in specificity (0.96 vs. 0.92) [30]. More recent studies with larger sample sizes have demonstrated improved detection, with sensitivities and specificities ranging between 0.91–0.94 and 0.85–0.91, respectively [19]. Other authors have described similar findings, with AUC values of 0.95 and 0.97 and accuracies of 0.93 and 0.95 [19]. Thus, as more data are gathered, hospitals may train these algorithms and institute them for routine use in emergency radiology.

3.3. Acute Appendicitis

Acute appendicitis is one of the most common causes of acute abdominal pain in the emergency department [31]. However, many patient-specific factors make detection of the appendix and diagnosis of appendicitis difficult, such as unusual appendix location, scanty intraabdominal fat, prominent cecal wall thickening, and abscess formation adjacent to the adnexa [31]. While both US and CT are important in the diagnosis of acute appendicitis, CT is considered the gold-standard diagnostic tool as it circumvents the issues of operator dependency, abundant bowel gas, and obesity that are prevalent in ultrasound techniques [31]. As for AI applications for the detection of acute appendicitis, the literature remains sparse, with only one study investigating the performance of the CNN-based diagnosis algorithm for abdominopelvic CT imaging [18]. In this retrospective multicenter study, the authors obtained a total of 667 image sets from 215 acute appendicitis patients and 452 controls for the algorithm training [18]. Following training, the CNN algorithm achieved a diagnostic accuracy of 91.5% for all image sets, with a reported sensitivity and specificity of 90.2% and 92.0%, respectively [18].
Although the diagnostic performance of the CNN algorithm was excellent, a number of false negatives were reported, as the algorithm often misinterpreted early-phase acute appendicitis, appendiceal perforation with abscess, and small mesenteric fat [18]. Some false negatives were difficult to explain, as trained readers never deemed these cases normal [18]. Thus, a CNN-based diagnosis algorithm for CT imaging may be most useful in conjunction with a trained radiologist. Along with a thorough examination for false negatives, CNN-based acute appendicitis detection could potentially be implemented as a second opinion to improve diagnostic accuracy in the acute setting.
More recently, a random forest-based predictive model of pediatric appendicitis was created and validated on a dataset obtained from 430 children and adolescents. The model used information extracted from patient history, clinical examination, laboratory parameters, and abdominal ultrasonography and reported areas under the precision-recall curve of 0.94, 0.92, and 0.70, respectively, for the diagnosis, management, and severity of appendicitis [21]. External validation using large sample sizes can increase the impact of such findings and help to identify and manage patients with potential appendicitis and its heterogeneous presentation in the pediatric population.

3.4. Colitis

Colitis is a chronic disease resulting from inflammation of the inner lining of the colon that can be caused by multiple etiologies, including ischemia, infection, neutropenia, and inflammatory bowel diseases (Crohn’s disease and ulcerative colitis) [32]. In the acute setting, patients present with diarrhea and abdominal pain, and CT is frequently utilized to evaluate patients for the presence of this disease [33]. Certain CT findings, such as wall thicknesses greater than 3 mm and the presence of an “accordion sign” (due to trapping of oral contrast between thickened haustral folds and mucosal ridges), are considered to be representative of colitis [33]. In addition, both of these radiographic findings can serve as important imaging markers for the AI-based detection of colitis. While older studies investigated the use of traditional AI methods, such as hand-crafted features (Gabor filters) and support vector machines to detect and classify colitis, these methods rely on expert knowledge and the segmentation of muscle, kidneys, and liver to reduce false-positive classification [33]. Other strategies such as high-capacity, region-based CNN have also demonstrated utility in colitis screening, with some studies reporting sensitivities and specificities as high as 94% and 95%, respectively [17,34]. These models have observed AUC values as high as 0.99 and are encouraging for potential clinical application [34].
In a multicenter diagnostic study involving five hospitals in China, deep learning models were constructed from 49,154 colonoscopy images collected from 1772 participants with inflammatory bowel disease (IBD) and normal controls; the identification accuracy of the deep learning model was superior to that of trainee endoscopists both per patient (99.1% vs. 78.0%) and per lesion (90.4% vs. 59.7%) [22]. While the gap narrowed when an experienced endoscopist was included, the deep learning model still performed significantly better (p < 0.001) than its visual assessment-based counterpart.

4. Trauma

4.1. Hemoperitoneum

In the setting of trauma, point of care ultrasound (POCUS), particularly the Focused Assessment with Sonography for Trauma (FAST) examination, is the gold standard for rapid detection of hemoperitoneum [18]. Certain sonographic findings, such as free fluid in the right upper quadrant (RUQ), are the most important independent predictors of therapeutic laparotomy in trauma [15,35,36,37]. Positive free fluid findings on US imaging can also narrow the differential diagnosis and aid decision making for antibiotic administration, surgery, or transfer of care to tertiary referral hospitals [15,38,39,40].
As the demand for on-call imaging expands, the need for efficient and accurate POCUS interpretation has driven research into the feasibility of automated detection systems. In one retrospective pilot study, Gwin et al. employed cross-sectional RUQ views from FAST examinations to investigate the feasibility of automated free fluid detection [15]. A traditional AI algorithm was developed using features related to geometric properties (i.e., linearity, curvilinearity, radius angle covariance, roundness, position, and area), grayscale properties of shape (i.e., echogenicity, echo variability, medial/lateral neighborhood echogenicity, and medial/lateral neighborhood variability), edge sharpness, and pixelation [15]. The features were then fed into a support vector machine to classify hypoechoic regions of interest as 'free fluid' or 'not free fluid'. The study reported a sensitivity and specificity of 100% and 90%, respectively, in detecting free fluid on FAST examination for trauma; these values are within the range reported in studies evaluating human interpretation. The authors also concluded that AI applications may allow expedited identification of abdominal free fluid in acutely ill non-trauma patients [15]. Ultimately, these results warrant further investigation and application to other disease states, as well as extension of the approach to all quadrants, for true improvements in clinical utility. Furthermore, automated detection systems may help reduce unnecessary patient transfers to tertiary care centers and could serve as an ideal triage tool. Taken together, such systems may be vital in reducing the imaging interpretation burden on the on-call radiologist.
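The feature-engineering step in such a pipeline might look like the following sketch, which computes two of the cited features (roundness and echogenicity) for a candidate region. A simple decision rule stands in for the support vector machine, and the thresholds are assumptions for illustration, not values from the study:

```python
# Hedged sketch of engineered-feature classification of a hypoechoic ROI.
import math

def roi_features(pixels, area, perimeter):
    """pixels: grayscale values inside the ROI (0 = anechoic, 1 = bright)."""
    echogenicity = sum(pixels) / len(pixels)          # mean gray level
    roundness = 4 * math.pi * area / perimeter ** 2   # 1.0 = perfect circle
    return echogenicity, roundness

def classify_roi(pixels, area, perimeter):
    echo, roundness = roi_features(pixels, area, perimeter)
    # Free fluid tends to be hypoechoic and irregular (non-round); this
    # threshold rule is an illustrative stand-in for the trained SVM.
    return "free fluid" if echo < 0.2 and roundness < 0.6 else "not free fluid"

# Dark, irregular region (e.g., fluid tracking along Morison's pouch)
label = classify_roi(pixels=[0.05, 0.1, 0.08, 0.12], area=120.0, perimeter=60.0)
```

In the study itself, many more such features were computed per region and the decision boundary was learned from labeled examples rather than set by hand.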
More recently, a multiscale deep learning approach designed for the quantitative visualization of traumatic hemoperitoneum on CT images showed significantly improved performance over conventional volume estimation methods (accuracy of 84%, sensitivity of 82%, specificity of 93%, positive predictive value of 86%, and negative predictive value of 83%) for predicting a composite outcome of surgical or angiographic hemostatic intervention, massive transfusion, and mortality [23]. Similar studies using larger sample sizes, multicenter data, and negative controls could strengthen these findings and support the development of clinical aids to rapidly and objectively quantify hemoperitoneum.

4.2. Traumatic Pelvic Injuries

These automated detection techniques may also be valuable in the identification of traumatic pelvic injuries as rapid detection remains crucial to the timely delivery of life-saving interventions [41]. Recent studies indicate that approximately 22% of patients with pelvic injuries have concomitant abdominal trauma [42]. Of special significance is the fact that pelvic fractures are a marker of injury from major force and are associated with morbidity and mortality from bleeding and abdominal compartment syndrome, as well as intraabdominal abscesses [9,43,44].
Supervised learning has commonly been used to detect fractures in local regions and has demonstrated accuracy comparable to that of physicians [45]. Deep learning studies report accuracies upwards of 90% for detecting hip fractures in various settings [46,47]. Recently, Cheng et al. reported a scalable, physician-level deep learning algorithm (PelviXNet) that detects trauma-related findings across the pelvic radiograph. Using data from 5204 pelvic radiographs, PelviXNet yielded an AUC of 0.97 (95% CI, 0.96–0.98) [24]. While these results are valuable, most of the conditions analyzed in the study are rarely missed by physicians, limiting its impact [48]. Multicenter studies with large sample sizes, particularly those including more complex injuries that are visually difficult to discern, would be far more impactful to the clinical community.
In severe pelvic fractures, injuries to the bladder are most common (15%) followed by the liver (10%) [9]. In milder pelvic fractures, the most commonly injured organ is the liver (6%) [9]. While contrast-enhanced CT is the gold-standard diagnostic test for pelvic trauma, the sensitivity of this technique in evaluating both mild and severe pelvic fractures is only 66% [49]. This can be attributed to a multitude of imaging complexities, including low resolution, noise, partial volume effects, and inhomogeneities, which are particularly relevant in identifying mild/small bone fractures [41]. These irregularities render image labeling difficult, often requiring multiple reads to confirm the existence and details of a fracture. Thus, computer-assisted support may have a potential niche in assisting emergency radiologists in making accurate diagnoses and assessing the severity of pelvic fractures with shorter turnaround times.
Unfortunately, the literature surrounding AI algorithms for the CT detection of pelvic fractures remains sparse. One retrospective study investigated the feasibility of automated fracture detection in 12 patients, including 8 patients presenting with mild and small fractures [41]. The authors developed a traditional AI algorithm involving pelvic bone segmentation through registered active shape models, adaptive window creations, 2D stationary wavelet transformations, masking, and boundary tracing [41]. The proposed model reduced the overall processing speed and achieved a 92% accuracy, 93% sensitivity, and 89% specificity in detecting pelvic bone fractures [41]. Furthermore, this model quantified certain fracture features, such as separation distance and angle, that are not visible to the human eye.
Computer-assisted decision support for CT can also be implemented for the automated segmentation and measurement of traumatic pelvic hematomas. While pelvic hematoma volume is the strongest independent predictor of arterial injury requiring angioembolization in trauma patients with pelvic fractures, current measurement methods (e.g., semiautomated seeded region growing) are time-consuming [14]. In addition, pelvic hematomas vary in shape and location and often have poorly defined margins, further complicating detection. Thus, hospitals may benefit from more efficient automated approaches. In a retrospective study of 253 trauma patients, Dreizin et al. assessed the performance of a deep learning algorithm, built on a recurrent saliency transformation network, for the automated segmentation and measurement of pelvic hematoma volume and the prediction of arterial injury requiring angioembolization [14]. The aggregate measure of performance for the model achieved an area under the curve (AUC) of 0.81, comparable to manual measurements of pelvic hematoma volume (AUC of 0.80) [14]. Other studies have reported similar findings and noted that deep learning-based hematoma measurements improved prediction of the need for pelvic packing, massive transfusion, and in-hospital mortality compared with subjective measurements [50]. Thus, the optimization of hematoma measurement through AI could augment outcome prediction for trauma patients and may guide treatment planning for emergency radiologists.
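Once a segmentation mask exists, the volumetric measurement itself reduces to counting labeled voxels and scaling by the voxel dimensions. A minimal sketch (mask and voxel spacing invented for illustration):

```python
# Minimal volumetry sketch: hematoma volume from a binary segmentation mask.

def mask_volume_ml(mask, spacing_mm=(0.8, 0.8, 5.0)):
    """mask: 3D nested lists of 0/1; spacing_mm: voxel size in mm (x, y, z)."""
    voxels = sum(v for slc in mask for row in slc for v in row)
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return voxels * voxel_mm3 / 1000.0  # convert mm^3 to mL

# Two 3x3 slices with a small segmented region (illustrative)
mask = [[[0, 1, 1], [0, 1, 1], [0, 0, 0]],
        [[0, 1, 0], [0, 1, 0], [0, 0, 0]]]
volume_ml = mask_volume_ml(mask)  # 6 voxels * 3.2 mm^3 each = 0.0192 mL
```

The hard part, and the contribution of the deep learning work cited above, is producing an accurate mask in the first place given the variable shape and poorly defined margins of pelvic hematomas.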

5. Abdominal Aortic Aneurysms

Abdominal aortic aneurysms (AAAs) are a life-threatening disease characterized by segmental weakening and ballooning of the aorta [51]. While the only curative treatment of AAA is open or endovascular repair, the decision to proceed with surgical repair requires careful consideration of the surgical risks and the risk of aneurysm rupture. Thus, CT imaging is often utilized for operative planning as it allows visualization of the aorta, access vessels, aneurysm morphology, and coexistent occlusive disease [51]. In recent years, AI methods have been proposed to improve the efficiency of image segmentation, the detection of AAA, and the characterization of AAA geometry and fluid dynamics.
A recent systematic review described 15 studies of AI methods for segmentation of the abdominal aorta [51]. Manual segmentation is time-consuming, often requiring 30 min, and is operator-dependent [51]. To reduce segmentation time and improve reliability, one approach utilized an active shape model (ASM) segmentation scheme for CT angiography (CTA) images [52]. This technique develops a statistical shape model from labeled landmark points and iteratively fits it to an image. Following manual segmentation of the first slice, a shape model of the contours in adjacent slices is iteratively fitted over the entire volume of the AAA, reducing the time required for expert segmentation by a factor of six. Other potential techniques include semiautomatic segmentation approaches using a 3D deformable model and level-set algorithms [53].
AI methods have also been proposed to quantify the morphologic aspects of AAA. In a study by Zhuge et al., predefined features of intensity, volume, and aorta shape from 20 CTA scans of AAA patients were utilized to train a support vector machine classifier [54]. Following preprocessing, global region analysis, surface initialization, local feature analysis, and level set segmentation, the authors observed the mean and worst-case values of the volume overlap at 95% and 93% [54]. The mean segmentation time was also reduced from 30 min to 7.4 min. Other studies have employed finite-element, analysis-based approaches to automate the analysis of CT and magnetic resonance imaging (MRI) images [55]. These applications have been extended to multimodal imaging using neural network fusion models [56]. In this setting, AI models allow a shared representation of the aorta in both the CT and the MRI images. In addition to aneurysm shape, both intraluminal thrombus and calcifications contribute to the development of AAA and the risk of rupture [57]. Recent studies have employed fully automated pipelines to detect the aortic lumen and characterize the intraluminal thrombus and calcifications with computational times of <1 min [58].
As the precise characterization of AAA geometry and arterial wall thickness is vital for the assessment of the rupture risk, several studies have investigated the development of neural network algorithms for accurate measurement [16,59]. One study reported an association between AI performance and the manual assessment performed by vascular surgeons, with coefficients of variation of 11% for ruptured AAA and 13% for non-ruptured AAA [59]. In another study, the authors developed a decision tree algorithm from 76 contrast-enhanced CT scans to characterize AAA geometry into 25 sizes and shapes [16]. Ultimately, this model yielded a prediction accuracy of 87% [16].
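One such geometric measurement, the maximal axial diameter, can be sketched from a per-slice segmentation mask as follows. The mask and pixel spacing are illustrative, and a clinical pipeline would measure perpendicular to the vessel centerline rather than along image rows:

```python
# Illustrative sketch: maximal axial extent of a segmented aortic lumen,
# the kind of geometric quantity fed into rupture-risk assessment.

def max_diameter_mm(slices, pixel_mm=0.7):
    """slices: list of 2D binary masks; returns the widest row extent in mm."""
    widest = 0
    for mask in slices:
        for row in mask:
            cols = [j for j, v in enumerate(row) if v]
            if cols:
                widest = max(widest, cols[-1] - cols[0] + 1)
    return widest * pixel_mm

slices = [[[0, 1, 1, 1, 0],
           [1, 1, 1, 1, 1],
           [0, 1, 1, 1, 0]],
          [[0, 0, 1, 1, 0],
           [0, 1, 1, 1, 0]]]
diameter = max_diameter_mm(slices)  # 5 pixels * 0.7 mm = 3.5 mm
```

Automating the upstream segmentation is what makes measurements like this reproducible and fast enough for routine use.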
AI techniques have also been utilized in the characterization of AAA fluid dynamics as wall shear stress also accounts for AAA rupture risk [60]. While some studies have measured computational fluid dynamics and estimated wall shear stress from geometric parameters, other authors have utilized machine learning to calculate wall shear stress and predict wall shear stress distribution in carotid bifurcation models [60,61,62]. These studies demonstrate the potential clinical utility of AI in distinguishing AAA morphology and may be effective in reducing the costs associated with image analysis.
Using multicenter, multi-scanner, multiphase CT data, a 3D ResNet model demonstrated high performance (AUC of 0.95) for fully automated abdominal aortic aneurysm detection on abdominal CT [25]. While promising, the study's training dataset comprised only 187 CT scans, potentially limiting dataset variability and, thus, generalizability. Validation of similar approaches in larger cohorts can increase the robustness of these findings and ultimately aid the transition of such AI-driven workflows into clinical practice.

6. Practical Applications

In recent years, a number of promising AI algorithms have been developed for use in the emergency setting. In a study by Kim et al., the authors tested the accuracy of deep learning-based algorithms in diagnosing ileocolic intussusception on abdominal radiographs in the pediatric population [63]. The deep learning-based algorithms provided higher sensitivity than clinical radiologists in diagnosing intussusception in children under five years old (0.76 vs. 0.46, p = 0.013), with no statistical difference in specificity (0.96 vs. 0.92, p = 0.32) [30]. Pang et al. also applied the Yolov3-arch neural network in the clinical setting, identifying cholelithiasis and classifying gallstones on CT images [63]. The algorithm was applied to a medical image dataset comprising 223,846 CT images, with gallstones present in 1369 patients. Its diagnostic accuracy was reported to be 86.5%, indicating the practical utility of AI in assisting radiologists with gallstone detection [63].

7. Discussion

Despite these strides, several barriers remain that prevent the clinical translation of AI techniques into daily workflow [5,64,65]. These include both ethical and medicolegal challenges, such as standardization difficulties across multiple centers, potential disagreement between radiologists and AI, and gaining trust in the black-box deep learning approach [12]. Some of the implementation-related challenges include incorporating AI within PACS and EMR systems, determining the level of AI–human interaction, and packaging these algorithms into a widely acceptable product [66].
Beyond the implementation barriers, there are deep-rooted issues with artificial intelligence at its core. First, most machine learning technologies have high sensitivity but low to moderate specificity [2]. Thus, AI can be highly beneficial as a screening tool but often falls short when ruling on a diagnosis, particularly when dealing with overlapping structures. For instance, an algorithm may identify a micro-calcification too small for the human eye to detect as nephrolithiasis, when it is actually an early atherosclerotic plaque in a vessel running posterior to the organ of interest. These issues are further compounded by the fact that this novel technology does not consider the full clinical picture when making a diagnosis. As many gastrointestinal pathologies can present similarly on imaging, it is imperative to consider patient demographics and history. For example, while AI-based imaging may correctly identify an adrenal nodule, a clinical context of episodic hypertension and tachycardia would favor a diagnosis of pheochromocytoma, whereas a patient with new-onset truncal obesity, insulin resistance, and hirsutism most likely has an adrenocortical adenoma [67]. In addition, a renal hyperdensity can be interpreted as active extravasation in the context of trauma or as nephrolithiasis in the context of unilateral flank pain and colicky pain radiating to the groin. Thus, machine learning techniques should not be used as a stand-alone technology but applied under the supervision of a trained radiologist.
For successful implementation of deep learning systems in radiology, large, well-annotated datasets of medical images are needed to detect subtle differences in disease states [4]. Yet such large-scale data remain scarce [68]. For medical image datasets too small to train large networks from scratch, networks pre-trained on large-scale natural-image datasets can be repurposed, a process known as transfer learning. However, these techniques are fraught with limitations. Among the datasets that have been generated, many labeling inaccuracies have been identified, particularly in patients who had undergone long periods of hospitalization. In a quality-control study of various AI datasets, Behzadi-Khormouji et al. noted that certain CXRs reported as "no interval change" were incorrectly coded as "no finding" within the dataset and were thus being utilized as a standardized normal [2]. Consequently, the "high accuracy rate" associated with AI models may partly reflect inaccurate training labels, yielding unforeseen errors [2]. Machine learning technologies therefore require continual retraining and evaluation to ensure their accuracy and precision and to keep pace with the constantly evolving standards of medicine.
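The transfer learning idea above can be sketched in a few lines. This is a deliberately minimal numpy illustration, not a clinical pipeline: a random frozen projection stands in for a backbone pre-trained on natural images (in practice, e.g., ImageNet-trained convolutional layers), and only a new task-specific head is trained on the small "medical" dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained backbone: a frozen feature extractor
# whose weights are applied but never updated during training.
W_frozen = rng.normal(size=(64, 16)) / np.sqrt(64)

def extract_features(x):
    # Frozen layer: fixed projection + ReLU.
    return np.maximum(x @ W_frozen, 0.0)

# Small synthetic "medical" dataset: 40 cases, 64 inputs, binary labels.
X = rng.normal(size=(40, 64))
y = (X @ rng.normal(size=64) > 0).astype(float)

# Transfer learning step: train only a new head (logistic regression
# by gradient descent) on top of the frozen features.
F = extract_features(X)
head = np.zeros(16)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(F @ head)))          # predicted probabilities
    head -= 0.5 * F.T @ (p - y) / len(y)           # update head only

train_accuracy = np.mean(((F @ head) > 0) == (y == 1))
```

Because only the small head is fitted, far less labeled medical data is needed than training the whole network; the limitation noted above is that the frozen features were learned for a different domain and may transfer imperfectly.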
Additional challenges associated with this technology include the poor generalizability of models trained on a single-institution dataset to outside data [69,70]. Given the high-risk nature of translating AI technologies developed at single institutions into widespread clinical practice, governing bodies such as the US Food and Drug Administration (FDA) have attempted to adopt specific regulatory frameworks to ensure effective safeguards. To date, these frameworks have cleared medical devices utilizing "locked" algorithms, i.e., those that provide reproducible results for the same inputs; changes beyond the original market authorization require FDA premarket review [71]. However, artificial intelligence/machine learning (AI/ML)-based medical devices increasingly utilize deep learning networks that adapt over time, where the adaptation or change may only be recognized after distribution. Current regulatory frameworks were not designed for medical devices using these adaptive algorithms.
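One lightweight way to make the locked-versus-adaptive distinction operational is to fingerprint a model's distributed parameters so that any post-clearance change is detectable. The sketch below is purely illustrative (toy weight values and hypothetical names, not an FDA-prescribed mechanism):

```python
import hashlib
import json

def model_fingerprint(weights):
    """SHA-256 over a canonical serialization of the model parameters."""
    blob = json.dumps(weights, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

# Toy stand-in for a cleared model's weights (hypothetical values).
deployed = {"conv1": [0.12, -0.5], "fc": [1.0, 2.0, 3.0]}
baseline = model_fingerprint(deployed)   # recorded at market authorization

# A locked algorithm reproduces this fingerprint exactly at every site.
fp_unchanged = model_fingerprint(deployed)

# An adaptive algorithm that retrains in the field silently changes its
# weights; the mismatched fingerprint flags a post-clearance change.
deployed["fc"][0] += 0.01
fp_adapted = model_fingerprint(deployed)
```

For a locked algorithm the fingerprint never changes, so same inputs yield same outputs; an adaptive system would invalidate its own fingerprint with every update, which is precisely why it does not fit the existing clearance model.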
Distributional shift can also greatly impact AI technology and lead to erroneous predictions [72]. Models can appear to perform with high accuracy yet fail when the data distribution shifts: as disease patterns are constantly changing, a mismatch can develop between the training data and the operational data [72]. To combat this, the FDA has proposed allowing manufacturers to submit periodic updates and real-world performance monitoring as part of an algorithm-change protocol [71]. This method falls under a total product lifecycle regulatory approach, allowing the integration of pre-market and post-market surveillance data for medical devices using AI/ML-based technologies. Within the field of radiology, 21 AI/ML-based algorithms are FDA-approved as medical devices: 3 are used for CT-based lesion detection (Arterys Oncology DL, Arterys MICA, and QuantX), 3 for stroke and hemorrhage detection (ContaCT, Accipiolx, and Icobrain), 5 are deep learning algorithms used to improve image processing (SubtlePET, Deep Learning Image Reconstruction, Advanced Intelligent Clear-IQ Engine, SubtleMR, and AI-Rad Companion), and 4 are focused on acute care for pneumothorax, wrist fracture diagnosis, and triage of head, spine, and chest injuries (Health PNX, Critical Care Suite, OsteoDetect, and Aidoc Medical BriefCase System) [72]. Deep learning algorithms developed at single institutions will require approval under these regulatory pathways for widespread clinical application. Ultimately, this approach can help the FDA embrace the iterative improvement power of AI/ML-based technologies as medical devices, while simultaneously ensuring patient safety.
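As one illustration of the kind of real-world performance monitoring such a protocol could include, the population stability index (PSI) is a simple, generic statistic for quantifying shift between training-time and operational model-output distributions. The sketch below uses synthetic data and is not part of any cited FDA framework; the commonly quoted cut-offs in the comment are rules of thumb, not regulatory standards.

```python
import numpy as np

def population_stability_index(expected, observed, bins=10):
    """PSI between a reference (training-time) sample and live data.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift,
    > 0.25 major shift worth investigating."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0] = min(edges[0], observed.min()) - 1e-9   # cover all values
    edges[-1] = max(edges[-1], observed.max()) + 1e-9
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    o_frac = np.histogram(observed, edges)[0] / len(observed)
    e_frac = np.clip(e_frac, 1e-6, None)              # avoid log(0)
    o_frac = np.clip(o_frac, 1e-6, None)
    return float(np.sum((o_frac - e_frac) * np.log(o_frac / e_frac)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 5000)     # scores seen during training
same = rng.normal(0.0, 1.0, 5000)             # operational data, no shift
shifted = rng.normal(0.8, 1.0, 5000)          # operational data after a shift

psi_same = population_stability_index(train_scores, same)       # small
psi_shift = population_stability_index(train_scores, shifted)   # large
```

Tracking such a statistic on live data is one concrete form the periodic real-world performance reports envisioned by an algorithm-change protocol could take.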

8. Conclusions

The field of emergency radiology can greatly benefit from AI applications in image segmentation, automated detection, and outcome prediction for a variety of abdominopelvic pathologies. Not only can AI algorithms automatically identify subtle disease states and provide quantitative characterization of disease severity, but they also have the potential to improve workflow efficiency and reduce overall workloads. In addition, AI can help augment human decision making and serve as a second opinion in complicated cases. As most AI methods are trained on one specific task, it remains to be seen whether AI will be broadly implemented in the detection of multiple abdominopelvic pathologies, as outlined here. While the field of AI in emergency radiology is expanding rapidly, many challenges still hinder the clinical translation of these technologies.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Arora, A. Conceptualising Artificial Intelligence as a Digital Healthcare Innovation: An Introductory Review. Med. Devices Evid. Res. 2020, 13, 223–230. [Google Scholar] [CrossRef] [PubMed]
  2. Behzadi-Khormouji, H.; Rostami, H.; Salehi, S.; Derakhshande-Rishehri, T.; Masoumi, M.; Salemi, S.; Keshavarz, A.; Gholamrezanezhad, A.; Assadi, M.; Batouli, A. Deep learning, reusable and problem-based architectures for detection of consolidation on chest X-ray images. Comput. Methods Programs Biomed. 2019, 185, 105162. [Google Scholar] [CrossRef] [PubMed]
  3. Varghese, B.A.; Shin, H.; Desai, B.; Gholamrezanezhad, A.; Lei, X.; Perkins, M.; Oberai, A.; Nanda, N.; Cen, S.; Duddalwar, V. Predicting clinical outcomes in COVID-19 using radiomics on chest radiographs. Br. J. Radiol. 2021, 94, 20210221. [Google Scholar] [CrossRef] [PubMed]
  4. Lee, J.-G.; Jun, S.; Cho, Y.-W.; Lee, H.; Kim, G.B.; Seo, J.B.; Kim, N. Deep Learning in Medical Imaging: General Overview. Korean J. Radiol. 2017, 18, 570–584. [Google Scholar] [CrossRef] [Green Version]
  5. Hazarika, I. Artificial intelligence: Opportunities and implications for the health workforce. Int. Health 2020, 12, 241–245. [Google Scholar] [CrossRef]
  6. Giger, M.L. Machine Learning in Medical Imaging. J. Am. Coll. Radiol. 2018, 15, 512–520. [Google Scholar] [CrossRef]
  7. Jalal, S.; Parker, W.; Ferguson, D.; Nicolaou, S. Exploring the Role of Artificial Intelligence in an Emergency and Trauma Radiology Department. Can. Assoc. Radiol. J. 2020, 72, 167–174. [Google Scholar] [CrossRef] [Green Version]
  8. Langlotz, C.P.; Allen, B.; Erickson, B.J.; Kalpathy-Cramer, J.; Bigelow, K.; Cook, T.S.; Flanders, A.E.; Lungren, M.P.; Mendelson, D.S.; Rudie, J.D.; et al. A Roadmap for Foundational Research on Artificial Intelligence in Medical Imaging: From the 2018 NIH/RSNA/ACR/The Academy Workshop. Radiology 2019, 291, 781–791. [Google Scholar] [CrossRef]
  9. Demetriades, D.; Karaiskakis, M.; Toutouzas, K.; Alo, K.; Velmahos, G.; Chan, L. Pelvic fractures: Epidemiology and predictors of associated abdominal injuries and outcomes. J. Am. Coll. Surg. 2002, 195, 1–10. [Google Scholar] [CrossRef]
  10. Ukai, K.; Rahman, R.; Yagi, N.; Hayashi, K.; Maruo, A.; Muratsu, H.; Kobashi, S. Detecting pelvic fracture on 3D-CT using deep convolutional neural networks with multi-orientated slab images. Sci. Rep. 2021, 11, 11716. [Google Scholar] [CrossRef]
  11. Thompson, W.M.; Kilani, R.K.; Smith, B.B.; Thomas, J.; Jaffe, T.A.; Delong, D.M.; Paulson, E.K. Accuracy of Abdominal Radiography in Acute Small-Bowel Obstruction: Does Reviewer Experience Matter? Am. J. Roentgenol. 2007, 188, W233–W238. [Google Scholar] [CrossRef]
  12. Lappas, J.C.; Reyes, B.L.; Maglinte, D.D. Abdominal radiography findings in small bowel obstruction: Relevance to triage for additional diagnostic imaging. AJR 2001, 176, 167–174. [Google Scholar] [CrossRef] [PubMed]
  13. Cheng, P.M.; Tejura, T.K.; Tran, K.N.; Whang, G. Detection of high-grade small bowel obstruction on conventional radiography with convolutional neural networks. Abdom. Radiol. 2018, 43, 1120–1127. [Google Scholar] [CrossRef] [PubMed]
  14. Dreizin, D.; Zhou, Y.; Zhang, Y.; Tirada, N.; Yuille, A.L. Performance of a Deep Learning Algorithm for Automated Segmentation and Quantification of Traumatic Pelvic Hematomas on CT. J. Digit. Imaging 2019, 33, 243–251. [Google Scholar] [CrossRef]
  15. Sjogren, A.R.; Leo, M.M.; Feldman, J.; Gwin, J.T. Image Segmentation and Machine Learning for Detection of Abdominal Free Fluid in Focused Assessment With Sonography for Trauma Examinations: A Pilot Study. J. Ultrasound Med. Off. J. Am. Inst. Ultrasound Med. 2016, 35, 2501–2509. [Google Scholar] [CrossRef] [PubMed]
  16. Shum, J.; Martufi, G.; Di Martino, E.; Washington, C.B.; Grisafi, J.; Muluk, S.C.; Finol, E. Quantitative Assessment of Abdominal Aortic Aneurysm Geometry. Ann. Biomed. Eng. 2010, 39, 277–286. [Google Scholar] [CrossRef] [Green Version]
  17. Liu, J.; Wang, D.; Lu, L.; Wei, Z.; Kim, L.; Turkbey, E.B.; Sahiner, B.; Petrick, N.A.; Summers, R.M. Detection and diagnosis of colitis on computed tomography using deep convolutional neural networks. Med. Phys. 2017, 44, 4630–4642. [Google Scholar] [CrossRef]
  18. Park, J.J.; Kim, K.A.; Nam, Y.; Choi, M.H.; Choi, S.Y.; Rhie, J. Convolutional-neural-network-based diagnosis of appendicitis via CT scans in patients with acute abdominal pain presenting in the emergency department. Sci. Rep. 2020, 10, 9556. [Google Scholar] [CrossRef]
  19. Kwon, G.; Ryu, J.; Oh, J.; Lim, J.; Kang, B.-K.; Ahn, C.; Bae, J.; Lee, D.K. Deep learning algorithms for detecting and visualising intussusception on plain abdominal radiography in children: A retrospective multicenter study. Sci. Rep. 2020, 10, 17582. [Google Scholar] [CrossRef]
  20. Kim, D.; Wit, H.; Thurston, M.; Long, M.; Maskell, G.; Strugnell, M.; Shetty, D.; Smith, I.; Hollings, N. An artificial intelligence deep learning model for identification of small bowel obstruction on plain abdominal radiographs. Br. J. Radiol. 2021, 94, 20201407. [Google Scholar] [CrossRef]
  21. Marcinkevics, R.; Wolfertstetter, P.R.; Wellmann, S.; Knorr, C.; Vogt, J.E. Using Machine Learning to Predict the Diagnosis, Management and Severity of Pediatric Appendicitis. Front. Pediatr. 2021, 9, 360. [Google Scholar] [CrossRef] [PubMed]
  22. Ruan, G.; Qi, J.; Cheng, Y.; Liu, R.; Zhang, B.; Zhi, M.; Chen, J.; Xiao, F.; Shen, X.; Fan, L.; et al. Development and Validation of a Deep Neural Network for Accurate Identification of Endoscopic Images from Patients With Ulcerative Colitis and Crohn’s Disease. Front. Med. 2022, 9, 854677. [Google Scholar] [CrossRef]
  23. Dreizin, D.; Zhou, Y.; Fu, S.; Wang, Y.; Li, G.; Champ, K.; Siegel, E.; Wang, Z.; Chen, T.; Yuille, A.L. A Multiscale Deep Learning Method for Quantitative Visualization of Traumatic Hemoperitoneum at CT: Assessment of Feasibility and Comparison with Subjective Categorical Estimation. Radiol. Artif. Intell. 2020, 2, e190220. [Google Scholar] [CrossRef] [PubMed]
  24. Cheng, C.-T.; Wang, Y.; Chen, H.-W.; Hsiao, P.-M.; Yeh, C.-N.; Hsieh, C.-H.; Miao, S.; Xiao, J.; Liao, C.-H.; Lu, L. A scalable physician-level deep learning algorithm detects universal trauma on pelvic radiographs. Nat. Commun. 2021, 12, 1066. [Google Scholar] [CrossRef]
  25. Golla, A.-K.; Tönnes, C.; Russ, T.; Bauer, D.F.; Froelich, M.F.; Diehl, S.J.; Schoenberg, S.O.; Keese, M.; Schad, L.R.; Zöllner, F.G.; et al. Automated Screening for Abdominal Aortic Aneurysm in CT Scans under Clinical Conditions Using Deep Learning. Diagnostics 2021, 11, 2131. [Google Scholar] [CrossRef]
  26. Cheng, P.M.; Tran, K.N.; Whang, G.; Tejura, T.K. Refining Convolutional Neural Network Detection of Small-Bowel Obstruction in Conventional Radiography. Am. J. Roentgenol. 2019, 212, 342–350. [Google Scholar] [CrossRef] [PubMed]
  27. Mandeville, K.; Chien, M.; Willyerd, F.A.; Mandell, G.; Hostetler, M.A.; Bulloch, B. Intussusception. Pediatr. Emerg. Care 2012, 28, 842–844. [Google Scholar] [CrossRef] [PubMed]
  28. Hom, J.; Kaplan, C.; Fowler, S.; Messina, C.; Chandran, L.; Kunkov, S. Evidence-Based Diagnostic Test Accuracy of History, Physical Examination, and Imaging for Intussusception. Pediatr. Emerg. Care 2020, 38, e225–e230. [Google Scholar] [CrossRef]
  29. Weihmiller, S.N.; Buonomo, C.; Bachur, R. Risk Stratification of Children Being Evaluated for Intussusception. Pediatrics 2011, 127, e296–e303. [Google Scholar] [CrossRef]
  30. Kim, S.; Yoon, H.; Lee, M.-J.; Kim, M.-J.; Han, K.; Yoon, J.K.; Kim, H.C.; Shin, J.; Shin, H.J. Performance of deep learning-based algorithm for detection of ileocolic intussusception on abdominal radiographs of young children. Sci. Rep. 2019, 9, 19420. [Google Scholar] [CrossRef]
  31. Kim, H.C.; Yang, D.M.; Jin, W.; Park, S.J. Added Diagnostic Value of Multiplanar Reformation of Multidetector CT Data in Patients with Suspected Appendicitis. RadioGraphics 2008, 28, 393–405. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  32. Samadder, N.J.; Gornick, M.; Everett, J.; Greenson, J.K.; Gruber, S.B. Inflammatory bowel disease and familial adenomatous polyposis. J. Crohn’s Colitis 2013, 7, e103–e107. [Google Scholar] [CrossRef] [PubMed]
  33. Wei, Z.S.; Zhang, W.D.; Liu, J.F.; Wang, S.J.; Yao, J.H.; Summers, R.M. Computer-Aided Detection of Colitis on Computed Tomography Using a Visual Codebook. In Proceedings of the 2013 IEEE 10th International Symposium on Biomedical Imaging, San Francisco, CA, USA, 7–11 April 2013; pp. 141–144. [Google Scholar]
  34. Liu, J.; Wang, D.; Wei, Z.; Lu, L.; Kim, L.; Turkbey, E.; Summers, R.M. Colitis detection on computed tomography using regional convolutional neural networks. In Proceedings of the 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), Prague, Czech Republic, 13–16 April 2016; pp. 863–866. [Google Scholar] [CrossRef]
  35. Rose, J.S.; Richards, J.R.; Battistella, F.; Bair, A.E.; McGahan, J.P.; Kuppermann, N. The FAST is positive, now what? Derivation of a clinical decision rule to determine the need for therapeutic laparotomy in adults with blunt torso trauma and a positive trauma ultrasound. J. Emerg. Med. 2005, 29, 15–21. [Google Scholar] [CrossRef] [PubMed]
  36. Moylan, M.; Newgard, C.D.; Ma, O.J.; Sabbaj, A.; Rogers, T.; Douglass, R. Association Between a Positive ED FAST Examination and Therapeutic Laparotomy in Normotensive Blunt Trauma Patients. J. Emerg. Med. 2007, 33, 265–271. [Google Scholar] [CrossRef]
  37. Helling, T.S.; Wilson, J.; Augustosky, K. The utility of focused abdominal ultrasound in blunt abdominal trauma: A reappraisal. Am. J. Surg. 2007, 194, 728–733. [Google Scholar] [CrossRef]
  38. Moore, C.; Todd, W.M.; O’Brien, E.; Lin, H. Free fluid in Morison’s pouch on bedside ultrasound predicts need for operative intervention in suspected ectopic pregnancy. Acad. Emerg. Med. 2007, 14, 755–758. [Google Scholar] [CrossRef] [Green Version]
  39. Volpicelli, G.; Lamorte, A.; Tullio, M.; Cardinale, L.; Giraudo, M.; Stefanone, V.; Boero, E.; Nazerian, P.; Pozzi, R.; Frascisco, M.F. Point-of-care multiorgan ultrasonography for the evaluation of undifferentiated hypotension in the emergency department. Intensiv. Care Med. 2013, 39, 1290–1298. [Google Scholar] [CrossRef] [Green Version]
  40. Maitra, S.; Jarman, R.D.; Halford, N.W.; Richards, S.P. When FAST is a FAFF: Is FAST scanning useful in nontrauma patients? Ultrasound 2008, 16, 165–168. [Google Scholar] [CrossRef]
  41. Wu, J.; Davuluri, P.; Ward, K.R.; Cockrell, C.; Hobson, R.; Najarian, K. Fracture Detection in Traumatic Pelvic CT Images. Int. J. Biomed. Imaging 2012, 2012, 327198. [Google Scholar] [CrossRef]
  42. Küper, M.A.; Working Group on Pelvic Fractures of the German Trauma Society; Bachmann, R.; Wenig, G.F.; Ziegler, P.; Trulson, A.; Trulson, I.M.; Minarski, C.; Ladurner, R.; Stöckle, U.; et al. Associated abdominal injuries do not influence quality of care in pelvic fractures—a multicenter cohort study from the German Pelvic Registry. World J. Emerg. Surg. 2020, 15, 8. [Google Scholar] [CrossRef] [Green Version]
  43. Mohammad, H.; Sara, S.; Ali, S.R. Evaluation of the relationship between pelvic fracture and abdominal compartment syndrome in traumatic patients. J. Emerg. Trauma Shock 2013, 6, 176–179. [Google Scholar] [CrossRef] [PubMed]
  44. Rezende-Neto, J.B.; De Abreu, R.N.E.S.; Gomez, D.; Tanoli, O.S.; Campos, V.M.; Aguiar, R.S.T.; Paskar, D.D.; Nauth, A. Extra-Articular Pelvic Fractures with Concomitant Gastrointestinal Injury Caused by Ballistic Trauma are Harbingers of Intra-Abdominal and Retroperitoneal Abscesses. J. Emerg. Med. Trauma Surg. Care 2019, 6, 27. [Google Scholar] [CrossRef]
  45. Chea, P.; Mandell, J.C. Current applications and future directions of deep learning in musculoskeletal radiology. Skelet. Radiol. 2019, 49, 183–197. [Google Scholar] [CrossRef] [PubMed]
  46. Krogue, J.D.; Cheng, K.V.; Hwang, K.M.; Toogood, P.; Meinberg, E.G.; Geiger, E.J.; Zaid, M.; McGill, K.C.; Patel, R.; Sohn, J.H.; et al. Automatic Hip Fracture Identification and Functional Subclassification with Deep Learning. Radiol. Artif. Intell. 2020, 2, e190023. [Google Scholar] [CrossRef] [Green Version]
  47. Sato, Y.; Asamoto, T.; Ono, Y.; Goto, R.; Kitamura, A.; Honda, S. A computer-aided diagnosis system using artificial intelligence for proximal femoral fractures enables residents to achieve a diagnostic rate equivalent to orthopedic surgeons—Multi-Institutional Joint Development Research. arXiv 2020, arXiv:2003.12443. Available online: https://arxiv.org/abs/2003.12443 (accessed on 19 April 2022).
  48. Hallas, P.; Ellingsen, T. Errors in fracture diagnoses in the emergency department—Characteristics of patients and diurnal variation. BMC Emerg. Med. 2006, 6, 4. [Google Scholar] [CrossRef] [Green Version]
  49. Henes, F.; Nüchtern, J.; Groth, M.; Habermann, C.; Regier, M.; Rueger, J.; Adam, G.; Großterlinden, L. Comparison of diagnostic accuracy of Magnetic Resonance Imaging and Multidetector Computed Tomography in the detection of pelvic fractures. Eur. J. Radiol. 2012, 81, 2337–2342. [Google Scholar] [CrossRef]
  50. Davuluri, P.; Wu, J.; Tang, Y.; Cockrell, C.H.; Ward, K.R.; Najarian, K.; Hargraves, R.H. Hemorrhage Detection and Segmentation in Traumatic Pelvic Injuries. Comput. Math. Methods Med. 2012, 2012, 898430. [Google Scholar] [CrossRef] [Green Version]
  51. Raffort, J.; Adam, C.; Carrier, M.; Ballaith, A.; Coscas, R.; Jean-Baptiste, E.; Hassen-Khodja, R.; Chakfé, N.; Lareyre, F. Artificial intelligence in abdominal aortic aneurysm. J. Vasc. Surg. 2020, 72, 321–333.e1. [Google Scholar] [CrossRef]
  52. de Bruijne, M.; van Ginneken, B.; Viergever, A.M.; Niessen, W.J. Interactive segmentation of abdominal aortic aneurysms in CTA images. Med. Image Anal. 2004, 8, 127–138. [Google Scholar] [CrossRef] [Green Version]
  53. Subasic, M.; Loncaric, S.; Sorantin, E. 3-D image analysis of abdominal aortic aneurysm. Stud. Health Technol. Inform. 2000, 77, 1195–1200. [Google Scholar] [PubMed]
  54. Zhuge, F.; Rubin, G.; Sun, S.; Napel, S. An abdominal aortic aneurysm segmentation method: Level set with region and statistical information. Med. Phys. 2006, 33, 1440–1453. [Google Scholar] [CrossRef] [PubMed]
  55. Joldes, G.R.; Miller, K.; Wittek, A.; Forsythe, R.O.; Newby, D.E.; Doyle, B. BioPARR: A software system for estimating the rupture potential index for abdominal aortic aneurysms. Sci. Rep. 2017, 7, 4641. [Google Scholar] [CrossRef] [PubMed]
  56. Wang, D.; Zhang, R.; Teng, Z.; Huang, Y.; Spiga, F.; Du, M.H.-F.; Gillard, J.H.; Lu, Q.; Lio, P.; Zhu, J. Neural network fusion: A novel CT-MR aortic aneurysm image segmentation method. In Medical Imaging 2018: Image Processing; ISOP: London, UK, 2018; Volume 10574, p. 1057424. [Google Scholar] [CrossRef]
  57. Sakalihasan, N.; Limet, R.; Defawe, O. Abdominal aortic aneurysm. Lancet 2005, 365, 1577–1589. [Google Scholar] [CrossRef]
  58. Chaikof, E.L.; Dalman, R.L.; Eskandari, M.K.; Jackson, B.M.; Lee, W.A.; Mansour, M.A.; Mastracci, T.M.; Mell, M.; Murad, M.H.; Nguyen, L.L.; et al. The Society for Vascular Surgery practice guidelines on the care of patients with an abdominal aortic aneurysm. J. Vasc. Surg. 2018, 67, 2–77.e2. [Google Scholar] [CrossRef] [Green Version]
  59. Shum, J.; DiMartino, E.S.; Goldhammer, A.; Goldman, D.H.; Acker, L.C.; Patel, G.; Ng, J.H.; Martufi, G.; Finol, E.A. Semiautomatic vessel wall detection and quantification of wall thickness in computed tomography images of human abdominal aortic aneurysms. Med. Phys. 2010, 37, 638–648. [Google Scholar] [CrossRef]
  60. Filipovic, N.; Ivanović, M.; Krstajic, D.; Kojic, M. Hemodynamic Flow Modeling Through an Abdominal Aorta Aneurysm Using Data Mining Tools. IEEE Trans. Inf. Technol. Biomed. 2010, 15, 189–194. [Google Scholar] [CrossRef]
  61. Canchi, T.; Kumar, S.D.; Ng, E.Y.K.; Narayanan, S. A Review of Computational Methods to Predict the Risk of Rupture of Abdominal Aortic Aneurysms. BioMed Res. Int. 2015, 2015, 861627. [Google Scholar] [CrossRef] [Green Version]
  62. Jordanski, M.; Radovic, M.; Milosevic, Z.; Filipovic, N.; Obradovic, Z. Machine Learning Approach for Predicting Wall Shear Distribution for Abdominal Aortic Aneurysm and Carotid Bifurcation Models. IEEE J. Biomed. Health Inform. 2018, 22, 537–544. [Google Scholar] [CrossRef]
  63. Pang, S.; Ding, T.; Qiao, S.; Meng, F.; Wang, S.; Li, P.; Wang, X. A novel YOLOv3-arch model for identifying cholelithiasis and classifying gallstones on CT images. PLoS ONE 2019, 14, e0217647. [Google Scholar] [CrossRef]
  64. Rubin, D.L. Artificial Intelligence in Imaging: The Radiologist’s Role. J. Am. Coll. Radiol. 2019, 16, 1309–1317. [Google Scholar] [CrossRef] [PubMed]
  65. Kelly, C.J.; Karthikesalingam, A.; Suleyman, M.; Corrado, G.; King, D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. 2019, 17, 195. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  66. Brady, A.P.; Neri, E. Artificial Intelligence in Radiology—Ethical Considerations. Diagnostics 2020, 10, 231. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  67. Oren, O.; Gersh, B.J.; Bhatt, D.L. Artificial intelligence in medical imaging: Switching from radiographic pathological data to clinically meaningful endpoints. Lancet Digit. Health 2020, 2, e486–e488. [Google Scholar] [CrossRef]
  68. Jin, Y.; Pepe, A.; Li, J.; Gsaxner, C.; Zhao, F.H.; Kleesiek, J.; Frangi, A.F.; Egger, J. AI-based aortic vessel tree segmentation for cardiovascular diseases treatment: Status quo. arXiv 2021, arXiv:2108.02998. [Google Scholar]
  69. Recht, M.P.; Dewey, M.; Dreyer, K.; Langlotz, C.; Niessen, W.; Prainsack, B.; Smith, J.J. Integrating artificial intelligence into the clinical practice of radiology: Challenges and recommendations. Eur. Radiol. 2020, 30, 3576–3584. [Google Scholar] [CrossRef] [Green Version]
  70. Dexter, G.P.; Grannis, S.J.; E Dixon, B.; Kasthurirathne, S.N. Generalization of Machine Learning Approaches to Identify Notifiable Conditions from a Statewide Health Information Exchange. AMIA Jt. Summits Transl. Sci. Proc. AMIA Jt. Summits Transl. Sci. 2020, 2020, 152–161. [Google Scholar]
  71. Benjamens, S.; Dhunnoo, P.; Meskó, B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: An online database. NPJ Digit. Med. 2020, 3, 118. [Google Scholar] [CrossRef]
  72. Challen, R.; Denny, J.; Pitt, M.; Gompels, L.; Edwards, T.; Tsaneva-Atanasova, K. Artificial intelligence, bias and clinical safety. BMJ Qual. Saf. 2019, 28, 231–237. [Google Scholar] [CrossRef]
Table 1. Overview of Deep Learning Algorithms Developed For Use in the Emergency and Clinical Setting.
Each entry below lists the study title and authors, journal/year/type, data, data processing, application, model, and performance; full citations appear in the reference list.
Pelvic Fractures: Epidemiology and Predictors of Associated Abdominal Injuries and Outcomes
Demetriades et al. [9]; J. Am. Coll. Surg., 2002; Original. No DL.
Detecting pelvic fracture on 3D-CT using deep convolutional neural networks with multi-orientated slab images
Ukai et al. [10]; Sci. Rep., 2021; Original.
Data: multisource CT images from 93 subjects with one or more pelvic fractures and 112 subjects identified by orthopedic surgeons as fracture-free.
Processing: voxel-size and intensity-range harmonization.
Application: automatic detection of pelvic fractures on pelvic CT.
Model: DCNN (YOLOv3).
Performance: AUC 0.824, with recall 0.805 and precision 0.907.
Accuracy of Abdominal Radiography in Acute Small-Bowel Obstruction: Does Reviewer Experience Matter?
Thompson et al. [11]; Am. J. Roentgenol., 2007; Original. No DL.
Abdominal Radiography Findings in Small-Bowel Obstruction: Relevance to Triage for Additional Diagnostic Imaging
Lappas et al. [12]; AJR, 2001; Original. No DL.
Detection of high-grade small bowel obstruction on conventional radiography with convolutional neural networks
Cheng et al. [13]; Abdom. Radiol., 2018; Original.
Data: 3663 supine abdominal radiographs.
Processing: pixel-size and intensity-range harmonization.
Application: determine whether a deep CNN can be trained with limited image data to detect high-grade small-bowel obstruction patterns on supine abdominal radiographs.
Model: Inception v3 CNN.
Performance: AUC 0.84 on the test set (95% CI 0.78–0.89); at the maximum Youden index (sensitivity + specificity − 1), sensitivity 83.8% and specificity 68.1%.
Performance of a Deep Learning Algorithm for Automated Segmentation and Quantification of Traumatic Pelvic Hematomas on CT
Dreizin et al. [14]; J. Digit. Imaging, 2020; Original.
Data: 253 chest/abdomen/pelvis admission trauma CTs.
Processing: pixel-size and intensity-range harmonization.
Application: determine whether a Recurrent Saliency Transformation Network (RSTN) yields Dice similarity coefficients high enough for accurate, objective volumetric measurement and outcome prediction (arterial injury requiring angioembolization).
Model: RSTN vs. 3D U-Net.
Performance: test-set Dice 0.71 (SD ± 0.10) with RSTN, compared to 0.49 (SD ± 0.16) with a baseline Deep Learning Tool Kit (DLTK) 3D U-Net architecture.
Image Segmentation and Machine Learning for Detection of Abdominal Free Fluid in Focused Assessment With Sonography for Trauma Examinations: A Pilot Study
Sjogren et al. [15]; J. Ultrasound Med., 2016; Original.
Data: 20 cross-sectional abdominal US videos (FAST).
Processing: none.
Application: test the feasibility of automating the detection of abdominal free fluid in focused assessment with sonography for trauma (FAST) examinations.
Model: ML (SVM).
Performance: sensitivity 100% (95% CI 69.2–100%), specificity 90.0% (95% CI 55.5–99.8%).
Quantitative Assessment of Abdominal Aortic Aneurysm Geometry
Shum et al. [16]; Ann. Biomed. Eng., 2011; Original.
Data: 76 CTs of patients with aneurysms.
Processing: none.
Application: test whether aneurysm morphology and wall thickness are more predictive of rupture risk and can be deciding factors in clinical management.
Model: ML (decision tree).
Performance: correctly classified 65 datasets, with an average prediction accuracy of 86.6% (κ = 0.37).
Detection and Diagnosis of Colitis on Computed Tomography Using Deep Convolutional Neural Networks
Liu et al. [17]; Med. Phys., 2017; Original.
Data: CT images of 80 patients with colitis.
Processing: none.
Application: develop deep CNN methods for lesion-level colitis detection and a support vector machine (SVM) classifier for patient-level colitis diagnosis on routine abdominal CT.
Model: Faster Region-based Convolutional Neural Network (Faster RCNN) with ZF net.
Performance: for patient-level diagnosis with ZF net, average AUC 0.978 ± 0.009 (RCNN) and 0.984 ± 0.008 (Faster RCNN).
Convolutional-neural-network-based diagnosis of appendicitis via CT scans in patients with acute abdominal pain presenting in the emergency department
Park et al. [18]; Sci. Rep., 2020; Original.
Data: 667 CT image sets from 215 patients with acute appendicitis and 452 patients with a normal appendix.
Processing: data augmentation to prevent overfitting.
Application: test the feasibility of a neural-network-based diagnostic algorithm for appendicitis on CT in emergency room (ER) patients with acute abdominal pain.
Model: deep CNN.
Performance: accuracy above 90% in both internal and external validation.
Deep learning algorithms for detecting and visualising intussusception on plain abdominal radiography in children: a retrospective multicenter study
Kwon et al. [19]; Sci. Rep., 2020; Original.
Data: 9935 X-rays.
Processing: none.
Application: verify a deep CNN algorithm for detecting intussusception in children using a human-annotated dataset of plain abdominal X-rays.
Model: Single Shot MultiBox Detector and ResNet.
Performance: after training with two hospital datasets, internal test AUC 0.946–0.971, highest accuracy 0.927–0.952, and highest Youden index 0.764–0.848.
An artificial intelligence deep learning model for identification of small bowel obstruction on plain abdominal radiographs
Kim et al. [20]; Br. J. Radiol., 2021; Original.
Data: 990 plain abdominal radiographs.
Processing: none.
Application: detect small-bowel obstruction on plain abdominal X-rays.
Model: VGG16, DenseNet121, NASNetLarge, InceptionV3, and Xception.
Performance: AUC 0.961, corresponding to sensitivity 91% and specificity 93%.
Study: Performance of deep learning-based algorithm for detection of ileocolic intussusception on abdominal radiographs of young children
Authors: Kim et al. [19]
Journal/Year: Scientific Reports, 2019
Type: Original
Dataset: Abdominal radiographs of 681 children
Pre-processing: Intensity normalization using z-scores
Aim: Detect ileocolic intussusception on abdominal radiographs of young children.
AI model: YOLOv3
Key findings: The sensitivity of the algorithm was higher than that of the radiologists (0.76 vs. 0.46, p = 0.013), while specificity did not differ between the algorithm and the radiologists (0.96 vs. 0.92, p = 0.32).
Reference: Kim S, et al. Performance of deep learning-based algorithm for detection of ileocolic intussusception on abdominal radiographs of young children. Sci Rep. 2019;9(1):19420. doi:10.1038/s41598-019-55536-6.
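The z-score intensity normalization listed as pre-processing here subtracts the mean intensity and divides by the standard deviation, commonly computed per image so that radiographs from different machines land on a comparable scale. A minimal sketch under that per-image assumption:

```python
import numpy as np

def zscore_normalize(image):
    """Per-image z-score: zero mean, unit standard deviation (sketch)."""
    img = np.asarray(image, dtype=np.float64)
    std = img.std()
    if std == 0:                      # constant image: avoid division by zero
        return np.zeros_like(img)
    return (img - img.mean()) / std
```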
Study: Using Machine Learning to Predict the Diagnosis, Management and Severity of Pediatric Appendicitis
Authors: Marcinkevics et al. [21]
Journal/Year: Frontiers in Pediatrics, 2021
Type: Original
Dataset: 430 children and adolescents
Pre-processing: None
Aim: Predict the diagnosis, management, and severity of pediatric appendicitis.
AI models: Logistic regression, random forests, and gradient boosting machines
Key findings: A random forest classifier achieved areas under the precision-recall curve of 0.94, 0.92, and 0.70 for the diagnosis, management, and severity of appendicitis, respectively.
Reference: Marcinkevics R, et al. Using Machine Learning to Predict the Diagnosis, Management and Severity of Pediatric Appendicitis. Front Pediatr. 2021;9:662183. doi:10.3389/fped.2021.662183.
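The areas under the precision-recall curve reported by Marcinkevics et al. can be approximated by average precision: rank cases by score and sum the precision attained at each newly recovered positive. A hand-rolled sketch for intuition (in practice one would use a library routine such as scikit-learn's `average_precision_score`; ties in scores are ignored here):

```python
def average_precision(scores, labels):
    """Approximate AUPRC as the mean precision at each true positive."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    n_pos = sum(labels)
    tp = fp = 0
    ap = 0.0
    for i in order:
        if labels[i] == 1:
            tp += 1
            ap += tp / (tp + fp)      # precision at this recall step
        else:
            fp += 1
    return ap / n_pos
```

Unlike ROC AUC, this metric is sensitive to class imbalance, which is why it is a common choice for clinical cohorts where positives are the minority.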
Study: Development and Validation of a Deep Neural Network for Accurate Identification of Endoscopic Images From Patients With Ulcerative Colitis and Crohn’s Disease
Authors: Ruan et al. [22]
Journal/Year: Frontiers in Medicine, 2022
Type: Original
Dataset: 49,154 colonoscopy images from 1772 patients
Pre-processing: Data augmentation (horizontal and vertical flipping, random cropping, random rotation, and brightness, contrast, and saturation adjustment) and the CutMix algorithm
Aim: Identify ulcerative colitis and Crohn’s disease on endoscopic images.
AI model: ResNet50
Key findings: The deep learning model was more accurate than trainee endoscopists both per patient (99.1% vs. 78.0%) and per lesion (90.4% vs. 59.7%). The gap narrowed against experienced endoscopists, but the deep learning model still performed significantly better (p < 0.001).
Reference: Ruan G, et al. Development and Validation of a Deep Neural Network for Accurate Identification of Endoscopic Images From Patients With Ulcerative Colitis and Crohn’s Disease. Front Med. 2022;9:854677. doi:10.3389/fmed.2022.854677.
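Among the augmentations Ruan et al. use is CutMix, which pastes a random box from one training image into another and mixes the two labels in proportion to the pasted area. A numpy sketch under simplified box sampling (not the authors' implementation; `lam` is the target fraction of the base image to keep, as in the original CutMix formulation):

```python
import numpy as np

def cutmix(img_a, img_b, label_a, label_b, lam, rng):
    """Paste a random box from img_b into img_a; mix labels by pasted area."""
    h, w = img_a.shape[:2]
    cut_h = int(h * np.sqrt(1 - lam))     # box sized so area ~ (1 - lam)
    cut_w = int(w * np.sqrt(1 - lam))
    cy, cx = int(rng.integers(h)), int(rng.integers(w))
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    mixed = img_a.copy()
    mixed[y1:y2, x1:x2] = img_b[y1:y2, x1:x2]
    lam_adj = 1 - (y2 - y1) * (x2 - x1) / (h * w)  # actual kept fraction
    label = lam_adj * np.asarray(label_a) + (1 - lam_adj) * np.asarray(label_b)
    return mixed, label
```

Because the box is clipped at the image border, the label weight is recomputed from the actual pasted area rather than the nominal `lam`.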
Study: A Multiscale Deep Learning Method for Quantitative Visualization of Traumatic Hemoperitoneum at CT: Assessment of Feasibility and Comparison with Subjective Categorical Estimation
Authors: Dreizin et al. [23]
Journal/Year: Radiology: Artificial Intelligence, 2020
Type: Original
Dataset: CT images of 130 patients
Pre-processing: Pixel size and intensity range harmonization
Aim: Evaluate the feasibility of a multiscale deep learning algorithm for quantitative visualization and measurement of traumatic hemoperitoneum, and compare its diagnostic performance for relevant outcomes with subjective categorical estimation.
AI model: MSAN (TensorFlow)
Key findings: AUCs for automated volume measurement and categorical estimation were 0.86 and 0.77, respectively (p = 0.004). An optimal cutoff of 278.9 mL yielded an accuracy of 84%, sensitivity of 82%, specificity of 93%, positive predictive value of 86%, and negative predictive value of 83%.
Reference: Dreizin D, et al. A Multiscale Deep Learning Method for Quantitative Visualization of Traumatic Hemoperitoneum at CT: Assessment of Feasibility and Comparison with Subjective Categorical Estimation. Radiol Artif Intell. 2020;2(6):e190220. doi:10.1148/ryai.2020190220.
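Dreizin et al.'s 278.9 mL operating point is applied to a volume obtained by converting the segmented voxel count to millilitres via the scan's voxel spacing. A sketch of that conversion (the cutoff is theirs; the function names and spacing values below are illustrative):

```python
CUTOFF_ML = 278.9  # operating point reported by Dreizin et al.

def segmentation_volume_ml(mask_voxels, spacing_mm):
    """Convert a segmented voxel count to millilitres given (x, y, z) spacing in mm."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return mask_voxels * voxel_mm3 / 1000.0   # 1 mL = 1000 mm^3

def exceeds_cutoff(volume_ml, cutoff_ml=CUTOFF_ML):
    """Flag cases at or above the volumetric operating point."""
    return volume_ml >= cutoff_ml
```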
Study: A scalable physician-level deep learning algorithm detects universal trauma on pelvic radiographs
Authors: Cheng et al. [24]
Journal/Year: Nature Communications, 2021
Type: Original
Dataset: 5204 pelvic radiographs
Pre-processing: Zero-padding and resizing; data augmentation (translation, flipping, scaling, rotation, brightness, and contrast)
Aim: Detect most types of trauma-related radiographic findings on pelvic radiographs.
AI model: PelviXNet
Key findings: PelviXNet yielded an area under the receiver operating characteristic curve (AUROC) of 0.973 (95% CI, 0.960–0.983) and an area under the precision-recall curve (AUPRC) of 0.963 (95% CI, 0.948–0.974) in the clinical population test set of 1888 pelvic radiographs (PXRs). The accuracy, sensitivity, and specificity at the cutoff value were 0.924 (95% CI, 0.912–0.936), 0.908 (95% CI, 0.885–0.908), and 0.932 (95% CI, 0.919–0.946), respectively.
Reference: Cheng CT, et al. A scalable physician-level deep learning algorithm detects universal trauma on pelvic radiographs. Nat Commun. 2021;12:1066. doi:10.1038/s41467-021-21311-3.
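Confidence intervals like the 95% CIs Cheng et al. attach to AUROC are commonly obtained by bootstrapping the test set: resample cases with replacement, recompute the statistic on each resample, and take the percentiles of the resulting distribution. A plain-Python sketch of that generic recipe (an assumption about the methodology, not a description of the authors' code):

```python
import random

def auroc(scores, labels):
    """Rank-based AUROC (Mann-Whitney), ties counted half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auroc_ci(scores, labels, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for AUROC over case resamples."""
    rng = random.Random(seed)
    n = len(scores)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in idx]
        if len(set(ys)) < 2:
            continue  # a resample needs both classes for AUROC to be defined
        stats.append(auroc([scores[i] for i in idx], ys))
    stats.sort()
    lo = stats[int(len(stats) * alpha / 2)]
    hi = stats[int(len(stats) * (1 - alpha / 2)) - 1]
    return lo, hi
```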
Study: Automated Screening for Abdominal Aortic Aneurysm in CT Scans under Clinical Conditions Using Deep Learning
Authors: Golla et al. [25]
Journal/Year: Diagnostics, 2021
Type: Original
Dataset: 187 heterogeneous CT scans
Pre-processing: Pixel size and intensity range harmonization; data augmentation
Aim: Develop and validate an easily trainable, fully automated 3D deep learning AAA screening algorithm that can run as a background process in the clinical workflow.
AI models: ResNet, VGG-16, and AlexNet
Key findings: The 3D ResNet outperformed the other two networks, achieving an accuracy of 0.953 and an AUC of 0.971 on the validation dataset.
Reference: Golla AK, et al. Automated Screening for Abdominal Aortic Aneurysm in CT Scans under Clinical Conditions Using Deep Learning. Diagnostics (Basel). 2021;11(11):2131. doi:10.3390/diagnostics11112131.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
