Article

Toward Smart, Automated Junctional Tourniquets—AI Models to Interpret Vessel Occlusion at Physiological Pressure Points

1 U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
2 Israel Defense Forces Medical Corps, Ramat Gan 52620, Israel
3 Division of Anesthesia, Intensive Care, and Pain Management, Tel-Aviv Medical Center, Affiliated with the Faculty of Medicine, Tel Aviv University, Tel Aviv 64239, Israel
* Author to whom correspondence should be addressed.
Bioengineering 2024, 11(2), 109; https://doi.org/10.3390/bioengineering11020109
Submission received: 15 December 2023 / Revised: 5 January 2024 / Accepted: 18 January 2024 / Published: 24 January 2024
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Imaging)

Abstract

Hemorrhage is the leading cause of preventable death in both civilian and military medicine. Junctional hemorrhages are especially difficult to manage since traditional tourniquet placement is often not possible. Ultrasound can be used to visualize and guide the caretaker to apply pressure at physiological pressure points to stop hemorrhage. However, this process is technically challenging, requiring that the vessel be properly positioned over rigid bony surfaces and that sufficient pressure be applied to maintain proper occlusion. As a first step toward automating this life-saving intervention, we demonstrate an artificial intelligence algorithm that classifies a vessel as patent or occluded, which can guide a user to apply the appropriate pressure required to stop flow. Neural network models were trained using images captured from a custom tissue-mimicking phantom and an ex vivo swine model of the inguinal region, as pressure was applied using an ultrasound probe with and without color Doppler overlays. Using these images, we developed an image classification algorithm suitable for the determination of patency or occlusion in an ultrasound image containing a color Doppler overlay. Separate AI models for the two test platforms detected occlusion status in held-out test-image sets with more than 93% accuracy. In conclusion, this methodology can be utilized for guiding and monitoring proper vessel occlusion, which, when combined with automated actuation and other AI models, can allow for automated junctional tourniquet application.

1. Introduction

Hemorrhage is the leading cause of preventable death in trauma casualties, in both civilian [1] and combat [2] settings. Through public health programs such as the “Stop the Bleed” initiative [3] and the abundant use of tourniquets, mortality from extremity hemorrhage has greatly diminished [4]. However, junctional hemorrhage remains a largely unsolved problem. Junctional hemorrhage is defined as hemorrhage from the areas connecting the extremities to the torso—axillae, shoulders, groin, buttocks, and proximal thighs—as well as from the neck [5]. As these areas are not amenable to traditional extremity tourniquet placement, several alternative solutions are in use.
The first approach is wound packing, which creates pressure inside the bleeding wound in order to press against the bleeding vessel, with materials ranging from gauze supplemented with pro-coagulant substances [6] to expanding sponges inserted by a syringe into the wound [7]. While these techniques are effective against hemorrhage from a venous source, they lack efficacy against hemorrhage from a major artery, such as the femoral artery [8]. Another approach is the use of manual pressure points, wherein a major artery, proximal to the hemorrhage source, is pressed against a bony surface to stop the blood flow to the wound and beyond. While a single study [9] described this practice as ineffective, leading to its elimination from most clinical practice guidelines, more recent studies show promising results [10,11]. However, this technique requires continuous monitoring and pressure by the provider, which may limit its effectiveness in prolonged care situations. Therefore, this approach provides only a short temporizing solution and severely limits the ability to transport and evacuate the patient.
Several junctional tourniquets have been developed utilizing the pressure points concept. These devices are designed to maintain ongoing pressure on the vessel, pending a definitive surgical solution. Currently, four junctional tourniquets are approved by the Food and Drug Administration: the abdominal aortic junctional tourniquet (AAJT), the junctional emergency treatment tool (JETT), the SAM® junctional tourniquet, and the Combat Ready Clamp (CRoC) [12,13,14]. The AAJT consists of a windlass mechanism to stabilize the device on the abdomen, axilla, or groin, after which a pneumatic bladder is inflated to occlude the artery. The CRoC is a compression clamp that can be placed over the axilla or groin, where it is then tightened by a hand crank. The JETT is a belt that can be placed over the pelvis, where two pads can be mechanically extended by turning handles. The SAM junctional tourniquet consists of inflatable bladders that are placed over the pelvis with a belt. Each of these tourniquets is recommended for use of less than four hours for axilla and groin placements, and less than one hour for the abdomen [5,12]. However, studies have shown their utilization to be cumbersome and time-consuming [15,16], and they have a high failure rate in training [17] and real combat scenarios [18]. Moreover, their effectiveness drops dramatically during patient transport [19].
A few reports on the use of ultrasound to guide pressure against an artery have been published. Garrick et al. reported a 93% initial success rate in occluding the femoral artery using an ultrasound probe [20]. A case report has described the use of an ultrasound probe against the abdominal aorta to mitigate an iliac artery hemorrhage [21]. However, sonography requires skill, precluding its widespread use compared to an extremity or junctional tourniquet.
We hypothesized that the skill threshold could be overcome through the use of artificial intelligence (AI) models that can detect the appropriate vessel requiring occlusion from real-time ultrasound video feeds. AI has begun to revolutionize medicine through smart, precision-medicine applications such as categorizing a wide range of abnormal states from optical coherence tomography scans [22], compiling large diverse data sets for making medical decisions [23], and using predictive text AI models to provide medical recommendations during telemedicine [24]. Smart medicine applications have been extensively reviewed elsewhere [25,26,27,28,29,30]. Focusing on AI for interpreting ultrasound images [31], applications include the identification of tumors [32], diagnosing infectious disease [33,34], and determining eFAST scan outcomes [35,36], among others. These applications often rely on deep convolutional neural networks, which extract image features and learn parameter weights to distinguish between image classes. Alternatively, object detection AI models can produce a bounding box overlay [37] around features for tracking regions of interest in an ultrasound image, such as tumors [38] and vessels [39]. Each of these approaches may be applicable to this use case, as AI models can potentially be trained to guide a user to the proper pressure point and ensure that enough force is applied for occlusion. In this research effort, we describe a deep-learning image classification algorithm to analyze sonographic images and provide the occlusion status of a vessel of interest. This can be used to aid the user in pressing the artery until occlusion and in monitoring the effectiveness of this pressure. This algorithm is meant to be integrated into an ultrasound probe, making the probe an effective pressure head as part of a junctional tourniquet, without the need for expertise.
The constant monitoring of effectiveness will allow for a rapid or automated response to displacement, overcoming this issue when transporting the patient. This work is presented by first describing the identification of the occlusion threshold in the phantom and capturing images for AI training. Then, we compare the performance metrics of two separate AI model architectures. Lastly, we validate the best performing model using an ex vivo swine model.

2. Materials and Methods

2.1. Tissue Phantom Setup

An ultrasonically compatible basic phantom was developed for image collection to train and test a classification algorithm. This phantom was created with 10% clear ballistic gel (CBG) (Clear Ballistics, Greenville, SC, USA) using a 3D printed mold. Due to the high temperature needed to melt the CBG, the mold was printed using either a co-polyester filament with a fused deposition 3D printer (Ultimaker, Utrecht, The Netherlands) or high-temperature resin with a stereolithography 3D printer (FormLabs, Somerville, MA, USA). The phantom was fashioned as a 3″ length × 3″ width × 2.5″ depth box around a 3″ length × 2″ width × 1″ depth wax block (McMaster-Carr, Elmhurst, IL, USA), which acted as a bone to allow for vessel occlusion.
The CBG was then cut into small pieces, placed into a 500 mL beaker, and melted at 130 °C using a laboratory oven (Thermo Fisher Scientific, Waltham, MA, USA) for approximately 2 h or until the gel was fully melted and de-bubbled. Using a ¼” OD biopsy punch (McMaster-Carr, Elmhurst, IL, USA) to hold the place of a vessel, the CBG was slowly poured into the mold lined with silicone oil (Sigma-Aldrich, St Louis, MO, USA) and left to cool at room temperature. Once cooled, the phantom was removed from the mold and placed over the wax block.

2.2. Tissue Phantom Imaging

The phantom was fitted with a ¼” diameter thin-walled latex tubing (GF Health Products, Atlanta, GA, USA) to act as a vessel, which was connected to a simple flow loop. This loop consisted of a simple peristaltic pump (Masterflex, Gelsenkirchen, Germany) driving Doppler-compliant fluid (CIRS Tissue Simulation Technology, Norfolk, VA, USA) from a reservoir through the phantom and a pressure sensor (ADInstruments, Sydney, NSW, Australia) that connected directly to a data acquisition unit (ADInstruments, Sydney, NSW, Australia). Between the pump and the phantom there was a bypass line so that flow could be diverted during occlusion while maintaining physiologically relevant pressure in the system. The phantom was kept underwater for imaging. Ultrasound imaging was performed using a 15L4A probe (Terason, Burlington, MA, USA) from a Terason 3200T Plus US imaging system (Terason, Burlington, MA, USA). Real-time video feed from the US screen was recorded with the LabChart (ADInstruments, Sydney, NSW, Australia) software using a video capture box (Amazon, Seattle, WA, USA). US images were recorded with and without color Doppler overlay. The Doppler modality helped to confirm the presence or absence of flow in the vessel. Pressure distal to the phantom was captured in real time to determine the pressure reduction achieved by tourniquet application, which is analogous to flow reduction. With the vessel in view and the phantom placed on top of the wax block, the ultrasound probe was used to compress the vessel until flow was stopped or reduced by at least 90% of its initial rate.

2.3. Ex Vivo Swine Model Setup

Euthanized swine tissue was procured from a commercial vendor (Animal Technologies, Tyler, TX, USA) from the lumbar area to the knees, as this section allowed enough area for the experimental setup. Swine tissue was utilized due to the similarities between human and swine femoral vessels [40,41,42]. Using previously developed methodologies [43], we cannulated the distal superficial femoral artery using a 14G catheter (MedOfficeDirect, Naples, FL, USA). Proximal cannulations for both vessels were made using 8Fr PCI introducers (Argon Medical Devices, Athens, TX, USA) and secured using silk ligatures. An arteriovenous shunt was fashioned from tubing, and a 3-way stopcock was used to connect the vessels distally and allow flow going through the artery to return through the vein or be hemorrhaged from the system. The hemorrhage site was connected to a pressure sensor for measuring pressure distal to the occlusion site. The proximal catheters were connected to a heart-mimicking pump (SuperPump AR Series, Vivitro Labs, Victoria, Canada) to create a flow loop. Doppler-compliant fluid was pumped through the loop to enable Doppler modality for ultrasound imaging. A Draeger patient monitor (Delta XL, Lübeck, Germany) was used to monitor the pressure in the flow loop so that flowrate adjustments to the pump could alter pressure in the system to achieve systolic and diastolic pressures of approximately 110 and 80 mmHg, respectively. A pressure sensor (ICU Medical, San Clemente, CA, USA) connected to a data acquisition unit was connected downstream in the model, between the artery and the arteriovenous shunt. Similar to the phantom model, a Terason or Sonosite Edge (Fujifilm Sonosite, Bothell, WA, USA) US system was used to capture live ultrasound feed using LabChart software and a capture box. Once the vessels were in view, pressure was applied on the inguinal crease, compressing the vessels until 90% occlusion was achieved.

2.4. Ultrasound Image Processing

After data were collected, videos were exported from LabChart as MP4 files and mean distal pressure readings were downsampled to 10 Hz to match the frame rate of the recorded video. Using MATLAB (v2022a, Mathworks, Natick, MA, USA), mean pressure vs. time data were plotted for each recording, and three regions were identified: (i) the start and (ii) the end of the unobstructed pressure measurement, and (iii) the end of probe occlusion of the vessel. A mean unobstructed pressure was measured for the full-flow region, which was then used to create gates for the classification categories. For the two-class scenario, full-flow and no-flow categories were separated by a percent reduction in the distal pressure, with thresholds ranging from 50 to 90% in different experimental setups. This was used to determine the best threshold for distinguishing between full-flow and no-flow classes. For the three-class scenarios, full flow was characterized as unobstructed flow up to a 10% reduction in mean pressure, partial flow spanned a 10% reduction to the determined no-flow marker (50 to 90% reduction, depending on the experimental setup), and no flow was any reduction beyond that marker. During data processing, images were cropped to remove ultrasound user interface information and then resized to 512 × 512 × 3. This process was repeated for each recorded ultrasound video for the tissue phantom and swine.
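The gating logic described above can be sketched as a small labeling function. This Python fragment is illustrative only (the study's processing was performed in MATLAB), and the function name and default threshold are hypothetical; the 70% default reflects the threshold ultimately selected in the Results.

```python
def classify_frame(pressure, baseline, no_flow_reduction=0.70, partial=False):
    """Label an ultrasound frame from its mean distal pressure.

    pressure: mean distal pressure for the frame (mmHg)
    baseline: mean unobstructed (full-flow) pressure (mmHg)
    no_flow_reduction: fractional pressure drop defining occlusion
        (thresholds from 0.50 to 0.90 were evaluated in the study)
    partial: if True, use the three-class scheme, with a partial-flow
        band between a 10% reduction and the no-flow threshold
    """
    reduction = 1.0 - pressure / baseline  # fractional pressure drop
    if reduction >= no_flow_reduction:
        return "no_flow"
    if partial and reduction >= 0.10:
        return "partial_flow"
    return "full_flow"
```

In the two-class scheme, any frame below the no-flow threshold is labeled full flow; the three-class scheme carves out the intermediate band instead.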

2.5. Neural Network Model Training

All neural network model development and evaluation were performed using MATLAB v2022a on a computer system (Lenovo, Morrisville, NC, USA) with an AMD Ryzen 9 5900HX 3.3 GHz processor, 32 GB RAM, and an NVIDIA RTX 3080 GPU with 16 GB VRAM. Two neural network architectures were used: (1) a previously developed custom classification network, ShrapML, optimized for ultrasound image interpretation [44,45]; and (2) MobileNetV2, a conventional neural network model that performed best for interpretation of ultrasound images with small datasets [46]. ShrapML was Bayesian-optimized for ultrasound applications. The optimized architecture comprised 6 blocks, each containing a convolutional layer, rectified linear unit activation, and max pooling layer, followed by a flattening layer, fully connected layer, and 36% dropout layer leading to the classification output [45]. During the optimization process, the Root Mean Squared Propagation (RMSProp) optimizer, a widely used optimizer for classification tasks, was selected as optimal. Each model was fitted with a 512 × 512 × 3 image input layer and a two- or three-category classification output layer, depending on the image sets used.
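To give a sense of scale for the six-block architecture, assuming each block ends in a 2 × 2 max-pooling step with stride 2 and 'same'-padded convolutions (a common configuration; the exact ShrapML hyperparameters are given in [45]), the 512 × 512 input is halved six times before flattening. A hypothetical Python sketch of that arithmetic:

```python
def feature_map_sizes(input_size=512, blocks=6):
    """Spatial size after each conv block, assuming 'same'-padded
    convolutions and 2x2 max pooling with stride 2 per block.
    This is an assumption for illustration, not the published spec."""
    sizes = []
    size = input_size
    for _ in range(blocks):
        size //= 2  # each pooling step halves height and width
        sizes.append(size)
    return sizes

# Under these assumptions, a 512 x 512 input shrinks to an 8 x 8
# feature map entering the flattening and fully connected layers.
```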
For phantom training, image sets were loaded and randomly split 80:20 for training and validation, while a phantom image set was completely held out for blind testing. For swine training, a single image set was loaded and randomly split 60:20:20 for training, validation, and testing, respectively. In some training cases, data augmentation in the form of affine transformations was randomly introduced to training images. Specifically, reflections and translation in the X- (−128 to 128 pixels) or Y-direction (−64 to 64 pixels) were introduced randomly in these data augmentation training scenarios.
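The splitting and augmentation steps above can be sketched as follows. This Python fragment is illustrative only (the study used MATLAB), and the function names are hypothetical; the translation ranges match those stated in the text.

```python
import random

def split_indices(n_images, fractions=(0.6, 0.2, 0.2), seed=0):
    """Randomly split image indices into train/validation/test sets.
    The swine data used 60:20:20; the phantom data used 80:20 for
    training/validation with a separate held-out set for blind testing."""
    rng = random.Random(seed)
    idx = list(range(n_images))
    rng.shuffle(idx)
    n_train = int(fractions[0] * n_images)
    n_val = int(fractions[1] * n_images)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

def random_affine_params(rng):
    """Sample the augmentation parameters described in the text:
    a random reflection, an X-translation in [-128, 128] pixels,
    and a Y-translation in [-64, 64] pixels."""
    return {
        "reflect": rng.random() < 0.5,
        "tx": rng.uniform(-128, 128),
        "ty": rng.uniform(-64, 64),
    }
```

Each training image would be transformed with freshly sampled parameters, so the model rarely sees the exact same frame twice.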
Model training was performed in all instances for up to 100 epochs using an RMSProp optimizer with a learning rate of 0.001. A batch size of 32 was used throughout, with validation loss evaluated at the end of each epoch. A validation patience of five was used, meaning that if validation loss did not improve within five epochs, training ended early and the model with the lowest validation loss was selected as optimal. Training was repeated three times with different random image splits for each training strategy, and each model was independently evaluated to determine overall performance.
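The early-stopping rule described above can be sketched as a simple loop over per-epoch validation losses. This Python function is an illustrative reconstruction, not the study's MATLAB code.

```python
def train_with_patience(val_losses, patience=5):
    """Simulate early stopping: return (best_epoch, stop_epoch).
    Training ends once validation loss has not improved for
    `patience` consecutive epochs; the lowest-loss epoch is kept."""
    best_loss = float("inf")
    best_epoch = 0
    waited = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best_loss:
            best_loss, best_epoch, waited = loss, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                return best_epoch, epoch  # stop early
    return best_epoch, len(val_losses)    # ran to the final epoch
```

For example, a loss curve that bottoms out at epoch 3 and then fails to improve for five straight epochs would stop at epoch 8, returning the epoch-3 model.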

2.6. Evaluation of Neural Network Model Performance

Model performance was evaluated with test images held out from the training and validation process. Predictions and confidences were calculated for each test image and compared to ground truth labels to build a confusion matrix using GraphPad Prism 9 (San Diego, CA, USA). For two-category models, positive predictions were no-flow or occlusion images, while negative predictions were full-flow images. Using these identifications, accuracy, precision, recall, specificity, and F1 scores were calculated. Confidences were used to construct a receiver operating characteristic (ROC) curve and to measure the area under the ROC curve (AUROC). Performance metrics were computed for triplicate models and are reported as average values throughout.
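The reported metrics follow directly from the 2 × 2 confusion matrix, with no flow (occlusion) as the positive class. The sketch below shows the standard definitions in Python; it is purely illustrative, since the study's analysis was performed in MATLAB and GraphPad Prism.

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix
    counts, with no-flow/occluded treated as the positive class."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # sensitivity to occlusion
    specificity = tn / (tn + fp)     # correct full-flow detection
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "specificity": specificity, "f1": f1}
```

Under this convention, a false positive is a full-flow image predicted as no flow, so the MobileNetV2 bias toward no-flow predictions described in the Results depresses recall while leaving specificity high.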
In addition, Gradient-weighted Class Activation Mapping (GradCAM) overlays were created for 1/24th of the test images for each model using a built-in MATLAB command and saved according to the ground truth and prediction labels. GradCAM overlays produce an approximate localization heat map identifying “hot spots” for regions important to the model prediction, making models more explainable and confirming that irrelevant image artifacts are not being tracked [47]. This technique is widely used to show that AI models for medical image interpretation track the same areas of interest as an expert would [48,49,50]. Representative images were selected to highlight the regions the models identified when making a classification prediction.

3. Results

3.1. Determination of the Optimal Threshold for Occlusion

To develop a machine learning model for monitoring junctional flow occlusion, we first identified the occlusion threshold most suitable for distinguishing flow (negative) and no-flow (positive) conditions. Using a tissue phantom model, training performance was compared with thresholds set at 50, 60, 70, 80, or 90% distal pressure reduction for occlusion (Table 1). Lower threshold values yielded higher accuracy and improved most performance metrics, while performance at the 80 and 90% thresholds was reduced. Although the highest feasible occlusion threshold is ideal, 70% was selected for subsequent testing: its differences from the lower thresholds were minimal, and it avoided the performance reduction observed at the higher thresholds.

3.2. ShrapML and MobileNetV2 Performance for Tracking Tissue Phantom Vessel Occlusion

Next, various deep learning model setups were used for classifying junctional ultrasound images for flow or no flow, following tourniquet application (Figure 1). Two different model architectures were used—ShrapML and MobileNetV2—each without and with affine transformations for data augmentation. MobileNetV2 models trended toward no-flow (false positive) predictions, resulting in recall metrics of 0.675 and 0.646 without and with data augmentation, respectively (Table 2). However, MobileNetV2 was strong at identifying full-flow conditions, with specificity reaching 0.990 and 0.996 without and with data augmentation, respectively. In contrast, augmentation had a more pronounced effect on ShrapML training. Without augmentation, ShrapML models had a high false negative (full-flow prediction) rate, with a specificity of 0.683. Utilizing data augmentation for ShrapML training solved this false negative bias, increasing specificity to 0.991 without impacting the false positive rate. Overall, ShrapML models with augmentation had the strongest accuracy (0.934) and F1 score (0.918) metrics and were selected as the optimal configuration for this application.
To further understand model performance, we created GradCAM overlays to highlight the regions of the ultrasound images most critical to the model prediction (Figure 2). When looking at full-flow ultrasound images, most of the models accurately tracked the vessel patency as the key feature, except for the ShrapML model trained without augmentation. The extent of vessel tracking was reduced in images where little or no Doppler signal was present. For the no-flow image class, model trends were less consistent. MobileNetV2 models more frequently tracked features at the edges of the image (no augmentation) or below the tissue phantom (with augmentation). ShrapML without augmentation had no strong feature correlation, indicating no flow was identified as an absence of key features. ShrapML with data augmentation successfully tracked the compression of the tissue phantom, but the precise feature being tracked was not obvious.

3.3. Effect of Three Classes on ShrapML Model Performance for Tracking Junctional Vessel Occlusion

An alternative model design was assessed that added a third, partial-flow category representing a 10% to 70% distal pressure reduction. This improved the full-flow and no-flow true prediction rates (Figure 3). However, this came at the expense of the partial-flow category, as over 75% of its predictions were incorrect. This was further highlighted through GradCAM overlays. The full-flow and no-flow identified images still tracked the vessel placement and phantom compression, respectively (Figure 3). The partial-flow class did not identify any obvious trends in the ultrasound image, frequently tracking features outside of the tissue phantom. As a result, the two-category methodology was identified as most suitable for this application.

3.4. Performance of ShrapML Model for Tracking Junctional Vessel Occlusion in an Ex Vivo Swine Model

Lastly, the ShrapML binary classification network design was retrained for use with junctional vessel occlusion datasets collected from the ex vivo swine model. Multiple ultrasound clips were collected from a single ex vivo swine subject, and 20% of the images were held out for testing model performance. Generally, the models performed similarly to the tissue phantom models, with a slight bias toward false positive, no-flow predictions (Figure 4). Results across the three replicate trained models were consistent, each with a similar AUROC (Figure 4) and with low standard deviations for each performance metric (Table 3). Accuracy was over 90% for swine image sets, similar to the tissue phantom performance. For comparison, the models were also trained using the more aggressive 90% distal pressure reduction threshold for occlusion. Overall, performance was minimally impacted when using swine images with this higher occlusion threshold (Figure 4B,C). GradCAM overlays were used to assess whether the features identified by the model tracked the vessel as it occluded (as observed in the tissue phantom) or overfit to noise in the images (Figure 5). In full-flow images, the artery and vein were sometimes tracked together, while at other times the artery was the primary feature responsible for model predictions. In the no-flow image class, the trends were less obvious, but in general the bottom tissue features were tracked when they appeared higher in the ultrasound image due to tissue compression. This provided proof of concept that the model can work in animal tissue, but additional images and subject variability will be needed for more robust performance.

4. Discussion

Junctional and pelvic hemorrhage continue to be a significant cause of early preventable death among trauma casualties, and there is a need for a solution that can provide rapid and reliable hemostasis without requiring advanced expertise. Here, we have demonstrated the first part of the development of an artificial intelligence algorithm that can serve to guide such a device, confirm appropriate pressure, and continuously monitor the effectiveness of that pressure.
An important step in setting up the training datasets for this application was defining a threshold for flow or distal pressure decrement. We approached this from an AI training perspective, and a threshold of 70% proved optimal for model performance; however, a modest reduction in performance may be an acceptable tradeoff if a higher 90% occlusion threshold could be tracked. Both were evaluated in the swine image sets at more than 90% accuracy, and more tuning is needed to settle on the optimal threshold. The threshold decision may also impact the output category design of the model. We evaluated two-category (full flow and no flow) and three-category (additional partial-flow category) classifier models, and the partial-flow classification had poor prediction performance. However, this is likely due to the low quantity of images in the partial occlusion window. More data may help with training this intermediate category, which may be preferred in a junctional tourniquet design so that progress toward the goal can be tracked. This may also be accomplished by using a regression deep-learning output in place of categorical outputs, allowing the percent occlusion to be tracked directly rather than relying on arbitrary categories defining proper occlusion.
The use of AI for ultrasound guidance of medical interventions is not unique—it has been used, for instance, to guide central vascular access. However, to the best of our knowledge, its use for hemorrhage control has never been described before, potentially due to the relatively high cost of ultrasound machines. With the gradual decrease in their cost and size, the use of ultrasound for hemorrhage control is beginning to appear feasible, making expertise the limiting factor for use. AI offers a pathway to overcome this limit, allowing healthcare providers, or even laypersons, not trained in sonography to utilize ultrasound technology for this application. The AI models in this work were successfully developed for tracking occlusion, a critical first step in automating a junctional tourniquet.
Another significant advantage of AI over “standard” use of ultrasound for this purpose is its ability for continuous monitoring—the ultrasound system maintains “visualization” of the obstructed vessel and can raise an alarm if this obstruction is no longer effective. In the absence of such an alarm, the first sign of failure might be a pool of blood forming under the casualty, or clinical deterioration in the casualty’s mental status or vital signs, all signifying loss of a substantial amount of precious blood.
This work has several limitations. First, we show an algorithm that can identify appropriate pressure on the artery, but it does not yet guide the user to position the probe in the correct location; that work is already in progress and will be presented in future studies. Second, the algorithm is based on a simple phantom model without complex anatomical features. This approach of initial training on phantom data has been shown to be effective in reducing the requirement for animal or human data [51], and the performance demonstrated here on the ex vivo model supports that. However, acquisition of human data will be necessary for further development. Lastly, the AI models currently rely on color Doppler overlays, which may not be available on smaller, more portable ultrasound devices. Prototypes will need to ensure that this feature is present, or AI models can potentially be trained to bypass this dependency.

5. Conclusions

Controlling junctional hemorrhage beyond current technology is a critical need in trauma care for both military and civilian situations. The technology demonstrated in this work highlights how an AI algorithm for monitoring vessel occlusion has the potential to improve junctional tourniquet application. AI models were developed to track vessel occlusion in phantom and ex vivo swine models with more than 90% accuracy. Combining this AI model with an engineered prototype for actuating compression will allow for automated vessel compression and real-time monitoring of tourniquet efficacy. Future studies will evaluate effectiveness in animal models and/or human volunteers. Further advancement of this technology will simplify junctional tourniquet use and help reduce the high mortality associated with junctional hemorrhage.

6. Patents

G.A. and E.J.S. are inventors on a filed provisional patent owned by the U.S. Army related to the automated junctional tourniquet (filed 27 October 2023).

Author Contributions

Conceptualization, G.A. and E.J.S.; tissue phantom development, S.I.H.T. and Z.J.K.; ex vivo testing, G.A. and C.B.; AI model development, S.I.H.T. and E.J.S.; formal analysis, E.J.S.; data curation, S.I.H.T.; writing—original draft preparation, G.A., S.I.H.T., Z.J.K., C.B. and E.J.S.; writing—review and editing, G.A., S.I.H.T., Z.J.K., C.B., J.S. and E.J.S.; visualization, S.I.H.T. and E.J.S.; funding acquisition, E.J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the U.S. Army Medical Research and Development Command (IS220007). This project was supported in part by an appointment to the Science Education Programs at National Institutes of Health (NIH), administered by ORAU through the U.S. Department of Energy Oak Ridge Institute for Science and Education (Z.K. and C.B.).

DoD Disclaimer

The views expressed in this article are those of the authors and do not reflect the official policy or position of the U.S. Army Medical Department, Department of the Army, DoD, or the U.S. Government, nor those of the Israel Defense Forces, Israeli Ministry of Defense or Government of Israel.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets presented in this article are not readily available because they have been collected and maintained in a government-controlled database that is located at the US Army Institute of Surgical Research. As such, these data can be made available through the development of a Cooperative Research and Development Agreement (CRADA) with the corresponding author. Requests to access the datasets should be directed to Eric Snider, [email protected].

Acknowledgments

This work is dedicated to the memory of SOF Paramedic First Sergeant Major Lior Arzi.

Conflicts of Interest

G.A. and E.J.S. are inventors on a filed provisional patent owned by the U.S. Army related to the automated junctional tourniquet (filed 27 October 2023). Author Guy Avital is an active duty physician in the Israel Defense Forces Medical Corps, Ramat Gan, Israel. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figure 1. Confusion matrices for MobileNetV2 and ShrapML models tracking junctional vessel occlusion. Average confusion matrices (n = 3 trained models) are shown for (A,C) MobileNetV2 and (B,D) ShrapML models, (A,B) without data augmentation and (C,D) with data augmentation. Confusion matrix values are expressed as percentages across each ground truth category.
Figure 2. Gradient-weighted class activation maps (GradCAM) for trained binary classifier models for ultrasound tracking of junctional vessel occlusion. (Column 1) Base ultrasound images are shown for reference, as well as (left to right) GradCAMs for MobileNetV2 without and with data augmentation, and ShrapML without and with data augmentation. Representative ultrasound images are shown for the full-flow and no-flow categories. Areas with high relevance to model predictions are highlighted by red-yellow overlays, while lower-relevance regions are highlighted in blue-green.
Figure 3. Three-category ShrapML performance for tracking junctional vessel occlusion. (A) Confusion matrix for the three-category ShrapML model (no flow, partial flow, and full flow) trained with affine transformations for data augmentation. (B) GradCAM overlays for representative ultrasound images from each of the three categories. Areas with high relevance to model predictions are highlighted by red-yellow overlays, while lower-relevance regions are highlighted in blue-green.
Figure 4. Confusion matrix and receiver operating characteristic (ROC) curve for ShrapML models tracking junctional vessel occlusion in swine image sets. Results are shown for three replicate trained models, shown as average values for the (A,C) confusion matrix and (B,D) individual ROC curves for a (A,B) 70% or (C,D) 90% occlusion threshold.
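The ROC curves in Figure 4 are produced by sweeping the classifier's confidence threshold and tracking the true-positive rate against the false-positive rate at each setting. A minimal sketch of that computation follows; the example labels and scores are illustrative, not study data.

```python
import numpy as np

def roc_curve_points(labels, scores):
    """Sweep thresholds over predicted scores to get (FPR, TPR) pairs."""
    order = np.argsort(-np.asarray(scores, dtype=float))  # highest score first
    labels = np.asarray(labels)[order]
    tps = np.cumsum(labels)        # true positives accepted so far
    fps = np.cumsum(1 - labels)    # false positives accepted so far
    tpr = np.concatenate(([0.0], tps / labels.sum()))
    fpr = np.concatenate(([0.0], fps / (1 - labels).sum()))
    return fpr, tpr

def auc(fpr, tpr):
    """Trapezoidal area under the ROC curve."""
    fpr, tpr = np.asarray(fpr), np.asarray(tpr)
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))

# Example: scores that perfectly separate occluded (1) from patent (0).
labels = [1, 1, 1, 0, 0, 0]
scores = [0.95, 0.90, 0.80, 0.30, 0.20, 0.10]
fpr, tpr = roc_curve_points(labels, scores)
print(auc(fpr, tpr))  # approximately 1.0 for perfect separation
```

Libraries such as scikit-learn provide equivalent `roc_curve` and `roc_auc_score` routines; the manual version is shown only to make the threshold sweep explicit.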
Figure 5. GradCAM overlays for ShrapML models trained with swine image sets. Representative images are shown for (AD) a 70% or (EH) a 90% occlusion threshold. Full-flow and no-flow ultrasound representative image categories are shown without and with GradCAM overlays. Areas with high relevance to model predictions are highlighted by red-yellow overlays, while lower relevance regions are highlighted in blue-green. Color Doppler overlay region is shown as a green box for images captured with Sonosite Edge.
Table 1. Performance metric summary for ShrapML models with training sets split at different pressure reduction thresholds. Models consisted of two categories—full flow and no flow—with affine transformations randomly applied for data augmentation. Metrics are shown as average results for n = 3 trained models. Color map is overlayed on each row to highlight the higher performance metrics from minimum (no color) to maximum (green).
                Distal Pressure Reduction Percent Threshold from Full Flow
                50%      60%      70%      80%      90%
Accuracy        0.937    0.924    0.935    0.885    0.904
Precision       0.940    0.889    0.897    0.833    0.861
Recall          0.958    0.996    0.994    0.997    0.978
Specificity     0.905    0.823    0.864    0.744    0.821
F1 Score        0.948    0.939    0.943    0.907    0.916
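The five metrics reported in Tables 1–3 follow the standard confusion-matrix definitions. A short sketch of how they are derived from true/false positive and negative counts is shown below; the function name and example counts are hypothetical, not taken from the study data.

```python
def classification_metrics(tp, fp, fn, tn):
    """Derive the five reported metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # sensitivity / true-positive rate
    specificity = tn / (tn + fp)     # true-negative rate
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "specificity": specificity, "f1": f1}

# Hypothetical counts for a two-class (flow vs. no-flow) test set.
m = classification_metrics(tp=8, fp=2, fn=1, tn=9)
print({k: round(v, 3) for k, v in m.items()})
# → {'accuracy': 0.85, 'precision': 0.8, 'recall': 0.889,
#    'specificity': 0.818, 'f1': 0.842}
```

Note how a high recall with lower specificity (as in Table 1 at the 80% threshold) indicates the model rarely misses an occlusion but sometimes flags a patent vessel as occluded.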
Table 2. Performance metrics for MobileNetV2 and ShrapML models for ultrasound tracking of junctional vessel occlusion. Average (n = 3 trained models) performance metrics and standard deviations are shown for MobileNetV2 and ShrapML models without and with data augmentation.
                MobileNetV2                              ShrapML
                Without Augmentation  With Augmentation  Without Augmentation  With Augmentation
Accuracy        0.853 ± 0.072         0.845 ± 0.117      0.759 ± 0.172         0.934 ± 0.010
Precision       0.976 ± 0.042         0.993 ± 0.006      0.718 ± 0.197         0.986 ± 0.012
Recall          0.675 ± 0.144         0.646 ± 0.275      0.859 ± 0.017         0.859 ± 0.032
Specificity     0.990 ± 0.018         0.996 ± 0.004      0.683 ± 0.316         0.991 ± 0.008
F1 Score        0.794 ± 0.114         0.757 ± 0.223      0.771 ± 0.120         0.918 ± 0.014
Table 3. Summary of performance metrics for the ShrapML trained with swine image sets for tracking junctional vessel occlusion. Results are shown as average and standard deviations across three replicate trained models for a 70% or 90% occlusion threshold.
                70% Occlusion Threshold    90% Occlusion Threshold
Accuracy        0.909 ± 0.015              0.942 ± 0.029
Precision       0.971 ± 0.025              0.980 ± 0.014
Recall          0.857 ± 0.040              0.933 ± 0.032
Specificity     0.970 ± 0.027              0.961 ± 0.027
F1 Score        0.910 ± 0.017              0.956 ± 0.023
