Review

Applications of Artificial Intelligence in Minimally Invasive Surgery Training: A Scoping Review

by Daniel Caballero 1, Juan A. Sánchez-Margallo 1,*, Manuel J. Pérez-Salazar 1 and Francisco M. Sánchez-Margallo 2

1 Bioengineering and Health Technologies Unit, Jesús Usón Minimally Invasive Surgery Centre, ES-10071 Cáceres, Spain
2 Scientific Direction, Jesús Usón Minimally Invasive Surgery Centre, ES-10071 Cáceres, Spain
* Author to whom correspondence should be addressed.
Surgeries 2025, 6(1), 7; https://doi.org/10.3390/surgeries6010007
Submission received: 20 December 2024 / Revised: 22 January 2025 / Accepted: 23 January 2025 / Published: 30 January 2025

Abstract
Background/Objectives: The scientific literature highlights the significant potential of artificial intelligence (AI) applications in minimally invasive surgery (MIS). The aim of this study is to provide a comprehensive review of the scientific literature on AI applications in MIS training, identifying the main applications, limitations, opportunities and challenges in this field of research. Methods/Design: A literature search was conducted in scientific databases. The search was performed on titles and abstracts using keywords. First, studies unrelated to the topic were eliminated. Next, the selection was limited to articles in English. Reviews, letters, case reports, industrial articles and conference abstracts were excluded. Only studies published in the last ten years (2014–2024) were evaluated, with priority given to publications from the last five years (2019–2024) on AI and surgical training in MIS. Finally, the full text was reviewed to include or exclude each study from this review. Results: Of the 54 studies included in this review, 18 were related to skills assessment, 30 analyzed aspects of surgical training itself, 12 were related to learning aspects of surgical planning, 7 were based on gesture recognition and 3 were based on surgical action recognition to measure surgical performance during MIS training. A brief description of the main AI techniques is included in this review. Conclusions: The application of AI in MIS surgical training is still a developing field of research, which presents great potential for exploring future applications, challenges, opportunities and drawbacks, as well as synergies between the technical and clinical research fields.

1. Introduction

Minimally invasive surgery (MIS) has grown rapidly over the past decades, and numerous minimally invasive procedures have become standard surgical techniques in some specialties [1]. The advantages of MIS for patients are widely known and have been clearly presented in the scientific literature [2]. These include reduced tissue trauma, less postoperative pain, faster recovery times and fewer complications derived from surgical procedures [2]. However, MIS procedures have some limitations for surgeons that need to be addressed, such as potential ergonomic deficiencies during long surgeries, high stress levels in certain procedures or a very demanding and extensive learning curve [3], which are hazardous to the surgeon’s health and have a great impact on the quality of surgical procedures and patient care [4]. Thus, laparoscopic procedures are often technically demanding, with an extensive learning curve [5]. Consequently, additional and in-depth training is required for this type of surgical technique [6].
Traditionally, novice surgeons follow a training process under the supervision of expert surgeons, which allows them to evaluate their learning [7]. The first steps consist of the visualization of the different MIS techniques performed by expert surgeons. Subsequently, novice surgeons are assigned to train on simulators with different basic tasks, such as cutting, dissection, needle handling or suturing. For this purpose, several physical, virtual and hybrid laparoscopic training simulators have been developed. The difficulty of these tasks is gradually increased until the novice surgeon acquires an adequate level of basic surgical skills [8]. Then, novice surgeons begin to participate in different surgical procedures until they reach a certain level of adequate surgical competence [9].
On the other hand, the use and development of artificial intelligence (AI) has grown exponentially. These algorithms are based on non-trivial processes for discovering potentially useful knowledge initially hidden in the data [10]. Among the different AI techniques, there are several algorithms that allow the development of predictive models, which can be linear or nonlinear [11], and classification models, which can be supervised or unsupervised [12]. These models should not be confused with machine learning models linked to artificial neural networks (ANNs) and convolutional neural networks (CNNs), or with deep learning models (DLMs), which can evaluate the results of previous predictive models and learn from them [13]. In recent years, the emergence of a new AI technology, generative AI based on large language models (LLMs), has allowed the development of several new AI applications across various industries [14], ushering in a new era of AI with remarkable potential for establishing new opportunities, limitations and challenges as generative AI continues to evolve.
Consequently, there are many studies on AI applications in numerous research fields such as food technology [15], mathematics [16], civil engineering [17], military applications [18] or architecture [19]. Specifically, MIS is one of the medical specialties that generates very large datasets that can be processed in detail and in depth by AI. These datasets can include pre-operative data (clinical history, laboratory tests and imaging tests of the patients), intra-operative data (based on video recordings and wearable devices) and post-operative data (mortality, recovery process and patient outcomes). Although there is substantial interest in AI applied to clinical data, its clinical implementation still faces several obstacles: the performance of AI models is still imperfect, since it depends on the quantity and variety of the available data [20]; the lack of external validation in most of the reported studies introduces additional unexpected errors [21]; and the interpretability of DLM algorithms and their working principles is very difficult for non-experts, especially surgeons, to understand, which generates distrust [22]. In this regard, the use of explainable AI (xAI) has grown in order to make these models interpretable to surgeons [22].
To our knowledge, there are only a few reviews focused on the applications of AI in MIS [23,24,25,26,27,28,29]. However, there is no general review focusing on the application of AI during surgical training in MIS. For this reason, a scoping review was conducted to systematically map the research conducted in this area. Accordingly, the following research question was formulated: what are the main applications, limitations, opportunities and challenges of AI for assistance in MIS training?
Therefore, the main objective of this study is to provide a comprehensive review to analyze the published literature on AI applications in MIS training, selecting the main applications, limitations, opportunities, challenges and existing gaps in the knowledge in this field of research.

2. Materials and Methods

2.1. Search Strategy

In November 2024, a search of the scientific literature was conducted in the PubMed, Web of Science, Scopus and IEEE Xplore databases. The search included observational, randomized and non-randomized studies (3486 studies). The search was performed on titles and abstracts, using keywords and Boolean operators as follows: (artificial intelligence OR AI OR deep learning OR machine learning OR CNN OR neural networks) AND (minimally invasive surgery OR MIS OR robot-assisted surgery OR RAS) AND (surgical training OR skill assessment OR gesture recognition). In a first screening, studies outside the study topic were eliminated (2426 studies). The selection was then limited to English-language articles (2327 studies) with an abstract, published in peer-reviewed journals. The search was complemented by checking published reviews and their references (2327 studies). Reviews, letters, case reports, industrial papers and conference abstracts were excluded (727 studies). Next, only studies published in the last ten years (2014–2024) were evaluated, with priority given to publications from the last five years (2019–2024) on artificial intelligence and surgical training (613 studies).
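As an illustration only (not part of the review protocol), the minimal sketch below shows how the Boolean query above could be assembled programmatically so that the same title/abstract search string can be reused verbatim across the four databases; the term lists simply mirror those stated in the text.

```python
# Illustrative sketch: assembling the Boolean title/abstract query described above.

ai_terms = ["artificial intelligence", "AI", "deep learning",
            "machine learning", "CNN", "neural networks"]
mis_terms = ["minimally invasive surgery", "MIS",
             "robot-assisted surgery", "RAS"]
training_terms = ["surgical training", "skill assessment", "gesture recognition"]

def or_block(terms):
    """Join a list of keywords into a parenthesised OR block."""
    return "(" + " OR ".join(terms) + ")"

query = " AND ".join(or_block(t) for t in (ai_terms, mis_terms, training_terms))
print(query)
# (artificial intelligence OR AI OR ...) AND (minimally invasive surgery OR ...) AND (surgical training OR ...)
```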
Once the search was completed, we screened by title and abstract (64 studies). Subsequently, the full text was reviewed to include or exclude each study from this review (54 studies). Once the full texts were included, data from the studies were extracted and analyzed. To increase consistency among reviewers, two authors (D.C. and M.J.P.-S.) screened the full text of all the studies, and the remaining two authors (J.A.S.-M. and F.M.S.-M.) reviewed and amended the screening and data extraction manually for this review. Possible disagreements on study selection and data extraction were resolved by all authors through consensus and discussion where needed. This method of data identification and assessment conformed to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 statement and checklist, shown in Table S1 [30]. The final protocol was registered prospectively with the Open Science Framework on 20 December 2024 (osf.io/mca3y). Figure 1 shows the flow diagram of the study selection.

2.2. Analysis of Studies

Since the studies focused on different application domains in MIS training, five application groups were created: (i) skills assessment, (ii) surgical training, (iii) surgical planning, (iv) surgical gesture recognition, and (v) surgical action detection. Each study was assigned to the corresponding application group(s), and each study could belong to one or more groups. For each group, a table was prepared to visually analyze the data of the studies. For this review, all sources of information included in the synthesis phase were considered equally.
For each table, the Sample, Phenomenon of Interest, Design, Evaluation and Research type (SPIDER) tool [31] was applied. The SPIDER tool reports the dataset with the number of surgeons and procedures (Sample), the type of tasks or procedures (Phenomenon of Interest), the application, robotic platform, AI algorithms and input data (Design), the results obtained (Evaluation) and the objectives of the studies (Research type). For this purpose, a data extraction form was developed by all authors by consensus to determine which variables to extract. Two authors (D.C. and M.J.P.-S.) extracted the data for all studies independently, and the remaining two authors (J.A.S.-M. and F.M.S.-M.) reviewed and amended the data extraction manually, discussed the results and continuously updated the data extraction form in an iterative process for this review.
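Purely as an illustration, one possible way to encode such an extraction record in code is sketched below; the class and the example values are hypothetical and simply mirror the SPIDER fields described above.

```python
# Illustrative sketch: one way to structure a per-study SPIDER extraction record.

from dataclasses import dataclass, field
from typing import List

@dataclass
class SpiderRecord:
    sample: str                  # dataset: number of surgeons and procedures
    phenomenon_of_interest: str  # type of tasks or procedures
    design: str                  # application, platform, AI algorithms, input data
    evaluation: str              # results obtained
    research: str                # objectives of the study
    groups: List[str] = field(default_factory=list)  # application groups (i)-(v)

# Hypothetical example values, not taken from any specific study.
example = SpiderRecord(
    sample="8 surgeons, 72 procedures",
    phenomenon_of_interest="knot tying, suturing, needle passing",
    design="da Vinci system; kinematic and video data; CNN",
    evaluation="classification accuracy 0.95",
    research="automatic skills assessment",
    groups=["skills assessment", "surgical training"],
)
print(example)
```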

2.3. Evaluation of the Scope Throughout the Years

Analyzing the studies published in this field, an increasing number of publications has been observed in recent years, from about 10 publications between 2000 and 2019 to 36 publications in 2020, 70 publications in 2021, about 100 publications in 2022 and 2023, and 260 publications in 2024. This fact demonstrates a growing interest in AI applications, specifically in surgical training. Figure 2 shows the publications per year between 2000 and 2024, indicating an increasing number of publications from 2020 onwards.
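As a purely illustrative sketch, the approximate counts quoted above can be replotted in the spirit of Figure 2; the values below are the rounded figures stated in the text, not the exact data behind the original figure.

```python
# Sketch reproducing the approximate publication counts cited in the text.

import matplotlib.pyplot as plt

periods = ["2000-2019", "2020", "2021", "2022", "2023", "2024"]
counts = [10, 36, 70, 100, 100, 260]   # approximate values as stated above

plt.bar(periods, counts)
plt.ylabel("Publications")
plt.title("AI in MIS training: publications per period")
plt.tight_layout()
plt.show()
```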

3. Results and Discussion

The 54 studies included in this review comprise 18 studies related to skills assessment during surgical training, 30 studies related to aspects of surgical training itself, 12 studies related to aspects of learning surgical planning, 7 studies on gesture recognition and its improvement during surgical training, and 3 studies on recognition of surgical actions to measure surgical performance during surgical training. This set of studies is presented in different Tables (Table 1 for skills assessment, Table 2 for surgical training, Table 3 for surgical planning, Table 4 for gesture recognition and Table 5 for surgical action recognition). In addition, a summary of the AI algorithms applied in the studies included in this scoping review has been added.

3.1. Artificial Intelligence Techniques

In this section, a summary of the AI algorithms used in the studies analyzed in this scoping review is provided to aid understanding of their main purposes; a brief illustrative code sketch follows the list.
  • AdaBoost: this technique is a supervised boosting algorithm for statistical classification that can be used in conjunction with other types of learning algorithms to improve their performance by producing a boosted classifier [32].
  • ANN: This is an AI technique that attempts to simulate the performance of a human brain. In this way, the generated predictive model will continue to learn based on the values of the signals generated in the neural network [10].
  • Autoencoder: This algorithm is a type of ANN that is applied to unsupervised learning. This algorithm is based on the principle of the encoding and decoding process. It is usually applied for the classification of unlabeled data [33].
  • Clustering: this is an algorithm that partitions the data into several non-overlapping clusters or groups so that each data point lies at the smallest possible Euclidean distance from the centroid of its cluster [34].
  • CNN: This AI algorithm is a regularized type of feed-forward ANN that learns from the features by itself using an optimization kernel. This type of ANN has been applied to process predictive models from many different types of multimedia data [35].
  • Decision tree (DT): This is a decision modeling tool that graphically displays the classification process of a given input for a given output class label. This method is one of the learning algorithms that generate classification models in the form of tree structures [36].
  • DLM: These methods are an evolution of machine learning, which focuses on learning from the results extracted from an ANN. This type of AI technique is performed in tasks such as classification (supervised, semi-supervised and unsupervised), prediction and learning representation [35].
  • Edge detection: It is a computer vision method to identify edges, curves in digital images or discontinuities in signal processing. It is a fundamental tool for research in fields such as image processing, computer vision or feature detection and extraction [37]. This technique has been enhanced by AI.
  • Fuzzy rules: this method is used in fuzzy logic systems to infer an output from independent input variables [38].
  • Generative adversarial network (GAN): This method is a class of ANNs or DLMs, generally used as a framework for generative AI. It is composed of two ANNs trained against each other in a zero-sum game, in which one network’s gain is the other network’s loss [39].
  • Image segmentation: This is a computer vision technique to discriminate discrete groups of pixels in a digital image to inform object detection. Complex visual data are being analyzed by applying AI to image segmentation [37].
  • K-nearest neighbors (KNN): This is a supervised learning method typically used for classification purposes. The result is a group of classes with components of similar characteristics [40].
  • LLM: This is a type of AI computational model designed for natural language processing tasks, closely related to generative AI. This model acquires its capabilities by learning statistical relationships from large amounts of text through a self-supervised training process [41].
  • Linear regression (LR): This is a predictive technique that allows future values to be predicted from current data through trend analysis. This method models a linear relationship between a dependent variable and one or more independent variables [42].
  • Naive Bayes: This is an AI algorithm for supervised classification based on Bayesian principles and probability theory [43]. This algorithm can be fed by an ANN.
  • Partial least squares (PLS): this method is a predictive technique based on the statistical relationship between principal component analysis and a reduced rank regression, finding a linear regression model that projects the predicted and observable variables to a new space with maximum covariance [44].
  • Principal component analysis (PCA): This is an exploratory data analysis technique used for data visualization and processing based on linear dimensionality reduction. It can also be used for supervised classification [45].
  • Random forest (RF): This is an ensemble learning technique used for classification and regression. This technique is based on the use of multiple DTs to obtain a correct result. This method is one of the learning algorithms that generate classification models in the form of tree structures [36].
  • Support vector machine (SVM): this method fits a decision boundary (or regression function) that maximizes the margin between classes, using kernel functions to capture nonlinear relationships where needed and tolerating a small error in fitting the data in the transformed space [46].
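The following minimal sketch, offered purely for illustration, shows how several of the classifiers listed above (AdaBoost, KNN, random forest and SVM) can be trained and compared through the common scikit-learn interface; the synthetic features stand in for the kinematic or video-derived features used in the reviewed studies.

```python
# Illustrative sketch: comparing several of the listed classifiers on synthetic data.

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for surgical-training features (e.g., motion metrics).
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

models = {
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Random forest": RandomForestClassifier(random_state=0),
    "SVM": SVC(kernel="rbf"),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```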

3.2. Skills Assessment Applications

In this section, 18 studies focused on the evaluation of surgeon skills during surgical training were analyzed (Table 1). Regarding the surgical tasks used, nine of them referred to basic surgical training tasks such as knot tying, needle passing and suturing, seven to training using virtual reality (VR) simulators and two to simulations of colorectal surgeries in phantom models. As for the input data, twelve of them used video frames, nine applied data from wearable devices, one used image data, and one used eye tracking and a subjective survey. Regarding the platform used for surgical training, eleven studies used the da VinciTM surgical robotic system (Intuitive Surgical, Sunnyvale, CA, USA), three studies included different simulators, two studies analyzed the data from conventional laparoscopic surgeries, one study used an experimental robotic surgical system and one a percutaneous guided system. Several AI algorithms were applied in these 18 studies: ANN, autoencoder, CNN, DT, fuzzy rules, KNN, LR, Naive Bayes, PCA and SVM; all these algorithms are described in Section 3.1.
Table 1. Studies on the application of AI in surgical training for competency assessment.
Authors and Year | Number of Surgeons | Number of Surgical Procedures | Type of Surgical Tasks | Input Data | Robotic Platform/Conventional Surgery | AI Algorithms
Alonso-Silveiro et al., 2018 [47] | 16 | 400 | VR simulator | Image data | Conventional laparoscopy surgery | ANN 1
Azimi et al., 2018 [48] | 3 | 3 | VR simulator | Wearable device data | Neurosurgery simulator | CNN 1
Dubin et al., 2018 [49] | 14 | 28 | VR simulator | Wearable device data | da VinciTM surgical system | LR 1
Fard et al., 2018 [50] | 8 | 48 | Knot tying and suturing | Wearable device data and video frames | da VinciTM surgical system | KNN 1, LR 1, and SVM 1
Wang and Fey, 2018 [51] | 8 | 72 | Knot tying, suturing, and needle passing | Wearable device data and video frames | da VinciTM surgical system | CNN 1
Zia and Essa, 2018 [52] | 8 | 72 | Knot tying, suturing, and needle passing | Wearable device data and video frames | da VinciTM surgical system | KNN 1, PCA 1, and SVM 1
Ershad et al., 2019 [53] | 14 | 28 | VR simulator | Wearable device data | da VinciTM surgical system | SVM 1
Fawaz et al., 2019 [54] | 11 | 99 | Knot tying, suturing, and needle passing | Wearable device data and video frames | da VinciTM surgical system | CNN 1
Funke et al., 2019 [55] | 9 | 81 | Knot tying, suturing, and needle passing | Video frames | da VinciTM surgical system | CNN 1
Holden et al., 2019 [56] | 24 | 43 | Needle passing and US probe motion | Video frames | Percutaneous guided interventions | DT 1 and fuzzy rules
Tan et al., 2019 [57] | 16 | 27 | VR simulator | Video frames | Simulator VREP | CNN 1
Khalid et al., 2020 [58] | 8 | 120 | Knot tying, suturing, and needle passing | Video frames | da VinciTM surgical system | Autoencoder
Wu et al., 2020 [59] | 8 | 180 | EndoWrist manipulation, clutching, needle control, needle driving, peg transfer and suturing | Eye-tracking metrics and subjective surveys | da VinciTM surgical system | Naive Bayes
Zhang et al., 2020 [60] | 8 | 552 | Knot tying, suturing, and needle passing | Wearable device data and video frames | Experimental microsurgical robot research platform | CNN 1
Shedage et al., 2021 [9] | 33 | 66 | VR simulator | Wearable device data | VATDEP Simulator | KNN 1, LR 1, and SVM 1
Reich et al., 2022 [61] | 21 | 83 | VR simulator | Video frames | Conventional laparoscopy surgery | ANN 1
Gillani et al., 2024 [62] | 10 | 461 | Colorectal surgeries | Video frames | da VinciTM surgical system | LR 1
Gillani et al., 2024 [63] | 10 | 461 | Colorectal surgeries | Video frames | da VinciTM surgical system | LR 1
1 ANN: artificial neural network. CNN: convolutional neural network. DT: decision tree. KNN: k-nearest neighbors. LR: linear regression. PCA: principal component analysis. SVM: support vector machine.
Several of these studies used the Johns Hopkins University–Intuitive Surgical (JHU-ISI) Gesture and Skill Assessment Working Set (JIGSAWS) [64] to conduct skills assessments during surgical training. JIGSAWS includes three basic surgical tasks, knot tying, needle passing and suturing, each completed three to five times by at least eight surgeons [64]. This generates an evident homogeneity in some results. Thus, studies using JIGSAWS achieved an accuracy of between 0.774 and 1 using AI techniques such as KNN and SVM [50,52]. On the other hand, the results varied between 0.992 and 1 when the AI technique applied was a CNN [51,54,55,58,60]. However, although the results obtained are very positive, JIGSAWS has some drawbacks: it is limited to only three basic surgical tasks, not including other fundamental tasks such as dissection; the number of surgeons is limited to between eight and fourteen; and surgeons can repeat each task at most five times, which is strictly limiting for a novice surgeon in the learning phase. In contrast to these limitations, Wu et al. [59] analyzed 15 robotic skill sessions for a total of 180 surgical tasks performed by 8 surgeons. To assess the surgeons’ skill during training, they used eye tracking, achieving a correlation of 0.51 and predicting workload levels with an accuracy of 0.847 [59].
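As a hedged illustration of the kind of pipeline these JIGSAWS-based studies describe, the sketch below classifies skill level from synthetic kinematic features with KNN and SVM under a leave-one-surgeon-out split; the data, feature dimensions and group structure are invented for the example and do not reproduce any cited result.

```python
# Illustrative sketch: skill classification with a leave-one-surgeon-out split.

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_features = 120, 20
X = rng.normal(size=(n_trials, n_features))   # synthetic motion-derived features
y = rng.integers(0, 3, size=n_trials)         # novice / intermediate / expert label
groups = rng.integers(0, 8, size=n_trials)    # surgeon identifier per trial

logo = LeaveOneGroupOut()                      # no surgeon in both train and test
for name, clf in [("KNN", KNeighborsClassifier()), ("SVM", SVC())]:
    pipe = make_pipeline(StandardScaler(), clf)
    acc = cross_val_score(pipe, X, y, cv=logo, groups=groups).mean()
    print(f"{name}: leave-one-surgeon-out accuracy {acc:.3f}")
```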
In the case of the use of VR simulators, some studies [47,61] achieved an accuracy of between 0.83 and 0.93 in the assessment of surgical skills using ANNs, showing their potential to be applied in future surgical training programs. Similarly, Dubin et al. [49] obtained results comparable to those of the ANN studies. In this case, they employed LR algorithms with lower complexity, improving the quality of the training process and preparing novice surgeons for the evaluation of new and difficult skills. Other studies using VR for surgical training [48,53,57] presented less favorable results using CNN and SVM but offered powerful audio-visual feedback and assistance during training. In addition, these training systems can enhance the real-time view of VR and allow augmentation of the training process for surgeons. Finally, these systems allow high versatility in customizing the training style, thus adapting to the trainee’s learning curve.
Holden et al. [56], who presented a study for surgical training assistance in percutaneous guided surgery, obtained excellent results applying fuzzy rules and DTs. They achieved more useful feedback for the trainees using DTs than with fuzzy rules. This feedback proved useful in supporting self-guided training in interventions using these two AI techniques.
Gillani et al. [62,63] evaluated the skills of novice (<50 h of MIS experience), intermediate (50 to 100 h of MIS experience) and expert (>100 h of MIS experience) surgeons in 461 specific MIS steps with the aim of objectively and automatically classifying their level of experience. In both studies, novice surgeons showed lower speed and acceleration in surgical movements, which allowed the group of expert surgeons to be differentiated from intermediate and novice surgeons. The automation of this evaluation process could objectively provide self-feedback to surgeons in training and yield scalable metrics on the training process until a certain level of surgical proficiency is reached, regardless of the hours of surgery performed.

3.3. Surgical Training Applications

In this section, thirty studies were analyzed (Table 2). Regarding surgical tasks, eight of them included knot tying, needle passing and suturing. Regarding surgical specialty, seven studies focused on urology, six studies used VR simulators and four used colorectal surgery simulators in phantom models. Regarding input data, eighteen studies used video frames, eleven used data from wearable devices, six used image data, and one used eye tracking and a subjective survey. Regarding the platform used for surgical training, seventeen used the da VinciTM surgical system, six used conventional laparoscopic surgery, three used the VersiusTM surgical system, three performed conventional MIS in different specialties (pulmonary, neurology and arthroscopy), two used different simulators, one used the MICS-MVR surgical system and one used a percutaneous guided system. Several AI algorithms were applied in these studies: AdaBoost, ANN, clustering, CNN, DLM, DT, fuzzy rules, GAN, image segmentation, KNN, LR, Naive Bayes, PCA, PLS, RF and SVM; all these algorithms are described in Section 3.1.
Table 2. Studies about AI and surgical training with surgical training application.
Authors and Year | Number of Surgeons | Number of Surgical Procedures | Type of Surgical Tasks | Input Data | Robotic Platform/Conventional Surgery | AI Algorithms
Nosrati et al., 2016 [65] | 9 | 73,096 | Urology surgeries | Image data | da VinciTM surgical system | ANN 1, LR 1 and RF 1
Krishnan et al., 2017 [66] | 5 | 77 | Circle cutting, needle passing and peg transfer | Wearable device data and video frames | da VinciTM surgical system | Clustering
Sarikaya et al., 2017 [67] | 10 | 2455 | Ball placement, peg transfer, suturing, knot tying, needle passing, labyrinth, and urology and colorectal surgery | Video frames | da VinciTM surgical system | CNN 1
Zia et al., 2017 [68] | 9 | 225 | Two-hand suturing, uterine horn dissection, suspensory ligament dissection, rectal artery skeletonization and rectal artery clipping | Wearable device data and video frames | da VinciTM surgical system | ANN 1 and clustering
Dubin et al., 2018 [49] | 14 | 28 | VR simulator | Wearable device data | da VinciTM surgical system | LR 1
Ross et al., 2018 [69] | 21 | 23,000 | Urology and colorectal surgery | Image data and video frames | da VinciTM surgical system | CNN 1 and GAN 1
Shafiei et al., 2018 [70] | 3 | 170 | Urology surgery | Image data | da VinciTM surgical system | PCA 1 and SVM 1
Colleoni et al., 2019 [71] | 18 | 500 | Urology and colorectal surgery | Video frames | da VinciTM surgical system | CNN 1
Engelhardt et al., 2019 [72] | 9 | 90 | Surgeries on silicone model | Video frames | MICS-MVR surgical system | GAN 1
Ershad et al., 2019 [53] | 14 | 28 | VR simulator | Wearable device data | da VinciTM surgical system | SVM 1
Holden et al., 2019 [56] | 24 | 43 | Needle passing and US probe motion | Video frames | Percutaneous guided interventions | DT 1 and fuzzy rules
Islam et al., 2019 [73] | 8 | 225 | Surgeries on porcine model | Video frames | da VinciTM surgical system | GAN 1
Tan et al., 2019 [57] | 16 | 27 | VR simulator | Video frames | Simulator VREP | CNN 1
Attanasio et al., 2020 [74] | 6 | 1080 | Neurology surgery | Video frames | da VinciTM surgical system | CNN 1
Liu et al., 2020 [75] | 10 | 2455 | Ball placement, peg transfer, suturing, knot tying, needle passing, labyrinth, and urology and colorectal surgery | Video frames | da VinciTM surgical system | CNN 1
Wu et al., 2020 [59] | 8 | 180 | EndoWrist manipulation, clutching, needle control, needle driving, peg transfer and suturing | Eye-tracking metrics and subjective surveys | da VinciTM surgical system | Naive Bayes
De Boer et al., 2021 [76] | 9 | 32 | Urology surgeries | Wearable device data | Conventional urology surgeries | LR 1 and PLS 1
Chen et al., 2021 [77] | 17 | 68 | Urology surgery | Wearable device data | da VinciTM surgical system | AdaBoost and RF 1
Shedage et al., 2021 [9] | 33 | 66 | VR simulator | Wearable device data | VATDEP Simulator | KNN 1, LR 1 and SVM 1
Reich et al., 2022 [61] | 21 | 83 | VR simulator | Video frames | Conventional laparoscopic surgery | ANN 1
Ayuso et al., 2023 [78] | 16 | 10,514 | Colorectal surgeries | Image data and video frames | Conventional laparoscopic surgery | CNN 1, DLM 1, and GAN 1
Moglia et al., 2023 [79] | 176 | 352 | VR simulator | Wearable device data | da VinciTM surgical system | CNN 1
Mohamadipanah et al., 2023 [80] | 6 | 4997 | Pulmonary surgeries | Video frames | Conventional pulmonary surgeries | GAN 1
Caballero et al., 2024 [81] | 11 | 27 | Suturing, needle passing, dissection, labyrinth, general surgery, urology and gynecology | Wearable device data | VersiusTM surgical system and conventional laparoscopic surgery | ANN 1, LR 1 and SVM 1
Gillani et al., 2024 [62] | 10 | 461 | Colorectal surgeries | Video frames | da VinciTM surgical system | LR 1
Gillani et al., 2024 [63] | 10 | 461 | Colorectal surgeries | Video frames | da VinciTM surgical system | LR 1
Gruter et al., 2024 [82] | 36 | 216 | Colorectal surgeries | Video frames | Conventional laparoscopic surgery | Image segmentation
Orgiu et al., 2024 [83] | 6 | 20 | Arthroscopy surgeries | Image data and video frames | Conventional arthroscopy surgeries | CNN 1
Pérez-Salazar et al., 2024 [8] | 6 | 18 | Suturing | Wearable device data | VersiusTM surgical system and conventional laparoscopy surgery | LR 1
Pérez-Salazar et al., 2024 [84] | 7 | 42 | Peg transfer, cutting, needle passing and suturing | Wearable device data | VersiusTM surgical system and conventional laparoscopy surgery | ANN 1 and LR 1
1 ANN: artificial neural network. CNN: convolutional neural network. DLM: deep learning method. DT: decision tree. GAN: Generative adversarial network. KNN: k-nearest neighbors. LR: linear regression. PCA: principal component analysis. PLS: partial least square. RF: random forest. SVM: support vector machine.
Nosrati et al. [65] developed a new preoperative endoscopic technique to detect visible and occluded structures in surgical training. The training of the AI techniques (ANN, LR and RF) was performed with only 15 clinical cases, and an improvement of over 45% compared with the mean state-of-the-art results was obtained. This study demonstrated the potential of VR for MIS surgical training. Other studies [69,71,75] used CNNs on video frames from urology and colorectal surgery training to reduce annotation effort, improving performance by 75% and reaching accuracies between 0.851 and 0.954. Attanasio et al. [74] applied a CNN to improve training in neurological surgery, improving on the mean of the state of the art by at least 25% in experimental validation. The framework proposed by Attanasio et al. [74] could be applied to a wide range of surgical tasks. On the other hand, using VR, some studies [49,53] demonstrated better LR performance than the multivariate model, improving the surgical training process, reducing time and costs and completing more surgical tasks. Tan et al. [57] applied deep reinforcement learning to demonstrate high-quality learning in surgical training. Reich et al. and Moglia et al. [61,79] applied ANN and CNN models and obtained an accuracy of 0.833, demonstrating the potential of ANNs and CNNs to capture the level of surgical expertise and contributing to the shift towards advanced task-based competence in surgical training [61].
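For orientation only, the sketch below shows a deliberately small CNN of the general kind applied to video frames in the studies above (e.g., for task or tool recognition); the architecture, class count and input size are illustrative assumptions and are not taken from any cited paper.

```python
# Illustrative sketch: a minimal CNN for classifying single endoscopic video frames.

import torch
import torch.nn as nn

class FrameCNN(nn.Module):
    def __init__(self, n_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):           # x: (batch, 3, H, W) video frames
        return self.head(self.features(x))

model = FrameCNN()
frames = torch.randn(4, 3, 224, 224)   # dummy batch standing in for endoscopic frames
logits = model(frames)                  # (4, 6) class scores, one row per frame
print(logits.shape)
```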
Holden et al. [56] applied AI-based solutions for surgical training on percutaneous guided surgery and obtained excellent results applying fuzzy rules and DT. They achieved more useful feedback using DTs than fuzzy rules. Similarly, some studies tested the evaluation of self-guided percutaneous interventions with unsupervised algorithms [66,76], highlighting the use of LR, PLS and clustering as AI techniques.
Sarikaya et al. [67] studied the detection of surgical tools in endoscopic videos during surgical training. In this study, the average accuracy was 91% for six different surgical tasks, improving on the results obtained for JIGSAWS [64]. For the detection of surgical activities, Zia et al. [68] applied clustering to their recognition, analyzing training procedures to improve the most relevant steps of the different surgical tasks in MIS training.
Shafiei et al. [70] measured mutual trust in MIS procedures among the surgical team, and between the surgical team and the patient. Ayuso et al. [78] showed the potential of GANs and DLMs to improve surgical training tasks in the specialty of urology. In this study, the GAN achieved an accuracy higher than 0.59 with an error lower than 0.056 [78].
Some studies [72,73] obtained an improvement in visualization on a 3D monitor over the baseline method (a 93% improvement in depth perception and realism) during surgical training. This model improved on existing algorithms for surgical tool segmentation in both prediction and segmentation accuracy. On the other hand, Gruter et al. [82] achieved a high accuracy of between 0.885 and 0.945 in their study, highlighting the potential of evaluating colorectal surgery training using video endoscope analysis.
Wu et al. [59] used eye-tracking metrics to detect workload during MIS tasks in order to prevent excessive workload during the performance of surgical tasks. These metrics can potentially be used to identify tasks with high workloads and provide measures for assisting surgical training in MIS, avoiding high stress levels and improving the quality of surgical procedures.
Mohamadipanah et al. [80] evaluated more than 4900 video frames in order to improve the evaluation and training process in pulmonary MIS. For this purpose, a GAN was applied as an AI technique that makes up for the lack of data on events that may occur during lung surgery, improving the quality of training and identifying new surgical strategies that had not been previously evaluated.
Orgiu et al. [83] evaluated CNN applications for the recognition of bony structures in assistance during wrist arthroscopy training. This study presented some limitations, as some bones were detected more accurately than others. This fact suggests the importance of proper training of the algorithms to improve bone detection.
Some studies [8,9,77,81,84] offer a comparison between the skill levels of novice and expert surgeons during the performance of surgical training tasks such as suturing, needle passing or dissection. In particular, one study provided feedback that allows surgeons to self-assess their level of surgical competence [77] by applying AdaBoost and RF in urological surgery training. Shedage et al. [9] evaluated surgical tasks in a VR simulator with three AI techniques (KNN, LR and SVM), obtaining an accuracy higher than 0.74. Caballero et al. [81] measured stress during surgical training, based on data from a wearable EDA device, to try to avoid high stress levels, applying an ANN, LR and SVM and obtaining an accuracy higher than 0.75. On the other hand, Pérez-Salazar et al. [8,84] compared the performance of novice and expert surgeons during MIS surgical training using motion analysis, muscle activity and localized muscle fatigue prediction by applying an ANN and LR.
Finally, in terms of muscle activity, several studies [8,62,63,84] measured muscle activity during surgical training with the aim of predicting and avoiding musculoskeletal problems during surgery. These studies showed better performance with the dominant side than with the non-dominant side for expert surgeons, and similar performance with both sides for novice surgeons [62,63]. In particular, the predictive models (ANN, LR) obtained a high accuracy (>0.75) for the cutting, labyrinth and peg transfer tasks, indicating that they can be an innovative solution for the prevention of ergonomic risk situations and for improving the quality of surgical training and future patient care [8,84].
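As a purely illustrative sketch of this last idea, the code below trains the two model families mentioned, a logistic-regression analogue of LR (used here because the label is binary) and a small ANN, to predict an ergonomic-risk label from synthetic sEMG summary features; all features, labels and thresholds are invented for the example.

```python
# Illustrative sketch: predicting an ergonomic-risk label from synthetic sEMG features.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 6))    # per-task summary features (e.g., RMS amplitude, median frequency drop)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=150) > 0).astype(int)  # synthetic risk label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
models = [
    ("Logistic regression", LogisticRegression()),
    ("ANN (MLP)", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1)),
]
for name, clf in models:
    clf.fit(X_tr, y_tr)
    print(f"{name}: test accuracy {clf.score(X_te, y_te):.3f}")
```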

3.4. Surgical Planning Applications

In this section, 12 studies on surgical planning training were analyzed (Table 3). Regarding the surgical tasks used, ten studies involved urological surgery planning tasks, one study involved tasks for arthroscopic surgery training, and the remaining study addressed surgical planning in general. In terms of input data, eight studies used image data, five applied video frames, one applied data from wearable devices, and one used questions and answers via ChatGPT prompts. Regarding the platforms used for surgical training, eleven of the studies used the da VinciTM surgical system, and the remaining study used different robotic surgical systems. Several AI algorithms were applied in these 12 studies: AdaBoost, ANN, CNN, DT, edge detection, image segmentation, KNN, LLM, LR, Naive Bayes, RF and SVM. All these algorithms are described in Section 3.1.
Table 3. Studies about AI and surgical training with surgical planning application.
Authors and Year | Number of Surgeons | Number of Surgical Procedures | Type of Surgical Tasks | Input Data | Robotic Platform/Conventional Surgery | AI Algorithms
Malpani et al., 2016 [85] | 6 | 24 | Urology surgeries | Image data | da VinciTM surgical system | CNN 1, RF 1 and SVM 1
Hung et al., 2018 [86] | 9 | 78 | Urology surgeries | Image data | da VinciTM surgical system | KNN 1, LR 1, RF 1 and SVM 1
Baghdadi et al., 2019 [87] | 20 | 20 | Surgical planning | Video frames | da VinciTM surgical system | Edge detection and image segmentation
Hung et al., 2019 [20] | 8 | 100 | Urology surgeries | Image data | da VinciTM surgical system | CNN 1 and RF 1
Nakawala et al., 2019 [88] | 3 | 9 | Urology surgeries | Video frames | da VinciTM surgical system | CNN 1
Wong et al., 2019 [89] | 7 | 338 | Urology surgeries | Image data | da VinciTM surgical system | KNN 1, LR 1 and RF 1
Zhao et al., 2019 [90] | 14 | 424 | Urology surgeries | Image data | da VinciTM surgical system | AdaBoost, ANN 1, CNN 1, DT 1, LR 1 and RF 1
Zia et al., 2019 [91] | 12 | 100 | Urology surgeries | Image data and video frames | da VinciTM surgical system | CNN 1
Luongo et al., 2020 [92] | 12 | 3002 | Urology surgeries | Image data and video frames | da VinciTM surgical system | CNN 1
Sumitomo et al., 2020 [93] | 9 | 400 | Urology surgeries | Image data | da VinciTM surgical system | ANN 1, CNN 1, image segmentation, Naive Bayes, RF 1 and SVM 1
Wu et al., 2021 [94] | 7 | 168 | Urology surgeries | Wearable device data and video frames | da VinciTM surgical system | LR 1, Naive Bayes and SVM 1
Li et al., 2024 [95] | 3 | 10 | Arthroscopy surgeries | ChatGPT prompts | Robotic surgical systems | LLM 1
1 ANN: artificial neural network. CNN: convolutional neural network. DT: decision tree. KNN: k-nearest neighbors. LLM: large language models. LR: linear regression. RF: random forest. SVM: support vector machine.
Some studies [85,88,91,92] planned and predicted the main steps of surgical training in urology. For prediction, a CNN, RF and SVM were applied for the detection of the main steps during surgical planning, achieving an accuracy between 0.74 and 0.88 and a recall parameter between 0.83 and 0.98. The highest accuracy was achieved by applying the CNN. Future steps could combine this planning with monitoring of the execution of the surgical procedure during training [8,62,63,84]. These results could be compared with those obtained in a recently published study by the authors regarding the design of a planning system in percutaneous ultrasound-guided surgery [96].
Hung et al. [86] tested a surgical planner with automated surgeon performance metrics in MIS surgical training. This development addresses one of the main limitations of traditional surgical training: the ability to customize the learning curve of surgeons. Similarly, Baghdadi et al. [87] developed an application to measure surgeon performance during the surgical planning process based on computer vision and video frames, with scores above 0.833. Wu et al. [94] generated new metrics to measure surgeon performance during surgical training. These metrics can complement the feedback given to surgeons in training by adjusting to their learning curve. Zhao et al. [90] predicted surgical duration by applying different AI techniques (AdaBoost, ANN, CNN, DT, LR and RF) to the durations of different surgical training tasks, obtaining an accuracy of 0.775.
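In the spirit of the duration-prediction task described for Zhao et al. [90], the following sketch regresses total procedure duration on per-step durations with linear regression and a random forest; the data are synthetic and the feature set is an assumption made only for illustration.

```python
# Illustrative sketch: predicting total procedure duration from per-step durations.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
task_durations = rng.uniform(2, 15, size=(200, 5))                   # minutes per surgical step
total_duration = task_durations.sum(axis=1) + rng.normal(scale=2.0, size=200)  # noisy total

for name, reg in [("Linear regression", LinearRegression()),
                  ("Random forest", RandomForestRegressor(random_state=2))]:
    r2 = cross_val_score(reg, task_durations, total_duration, cv=5, scoring="r2").mean()
    print(f"{name}: cross-validated R^2 {r2:.3f}")
```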
Some studies [20,89,93] described the surgical planning process during prostatectomy training using MIS and urinary continence assessment. The accuracy achieved in these studies ranged between 0.859 and 0.976 when applying different AI techniques (ANN, CNN, image segmentation, KNN, LR, Naive Bayes, RF and SVM), which is considered good performance. To measure urinary continence during surgical training, surgical planners covering 3 to 6 months [20] and 1 to 6 months [93] were developed, showing similar results in the preoperative stage.
Li et al. [95] used ChatGPT indications to evaluate anterior cruciate ligament (ACL) reconstruction planning. This study showed good performance (three satisfactory responses, two unsatisfactory responses and one excellent response) in ChatGPT indications during training, providing generalized information about ACL tears and reconstruction.

3.5. Recognition of Surgical Gestures Applications

In this section, seven studies on surgical gesture recognition for training applications were analyzed (Table 4). Four studies focused on the specialty of urology and three on surgical gesture recognition during surgical training. Regarding input data, five studies used video frames, three used image data and three used wearable device data. As for the platforms used for surgical training, six of the studies used the da VinciTM surgical system and the other study used the Raven II surgical system. Several AI algorithms were applied in these seven studies: ANN, clustering, CNN, KNN, PCA, RF and SVM. All these algorithms are described in Section 3.1.
Table 4. Studies about AI and surgical training with recognition of surgical gesture application.
Authors and Year | Number of Surgeons | Number of Surgical Procedures | Type of Surgical Tasks | Input Data | Robotic Platform/Conventional Surgery | AI Algorithms
Despinoy et al., 2016 [97] | 3 | 12 | Drawing R letter and peg transfer | Wearable device data and video frames | Raven II surgical system | KNN 1 and SVM 1
Malpani et al., 2016 [85] | 6 | 24 | Urology surgeries | Image data | da VinciTM surgical system | CNN 1, RF 1 and SVM 1
Fard et al., 2017 [98] | 8 | 72 | Knot tying, needle passing and suturing | Wearable device data and video frames | da VinciTM surgical system | Clustering and PCA 1
Di Petro et al., 2019 [99] | 8 | 80 | Suturing and wound closure | Wearable device data and video frames | da VinciTM surgical system | ANN 1 and CNN 1
Nakawala et al., 2019 [88] | 3 | 9 | Urology surgeries | Video frames | da VinciTM surgical system | CNN 1
Zia et al., 2019 [91] | 12 | 100 | Urology surgeries | Image data and video frames | da VinciTM surgical system | CNN 1
Luongo et al., 2020 [92] | 12 | 3002 | Urology surgeries | Image data and video frames | da VinciTM surgical system | CNN 1
1 ANN: artificial neural network. CNN: convolutional neural network. KNN: k-nearest neighbors. PCA: principal component analysis. RF: random forest. SVM: support vector machine.
Despinoy et al. [97] compared their new automatic method with the manual annotation of surgical gestures, obtaining an accuracy of 0.974, and, during the learning process, a mean match score of 0.819 was achieved for the fully automated gesture recognition process. This system allows improvements in the efficiency of surgical training, minimizing the learning curve. Similarly, Fard et al. [98] achieved an accuracy of 0.83 and during the learning process obtained a mean matching score of 0.77. This approach demonstrates that the proposed method can provide more information about surgical training and can contribute to the automation of MIS surgical training. Di Petro et al. [99] automated the segmentation and classification of 10 gestures by applying an ANN and CNN to provide automated and targeted evaluation and feedback during surgical training. This demonstrated the potential of ANNs and CNNs for gesture recognition during surgical training. Some studies [91,92] showed the potential of CNNs to recognize 12 surgical gestures with an accuracy of 0.85 to 0.88, demonstrating the potential of gesture recognition for the future automation of robotic surgery.
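A minimal sketch of the unsupervised route described above (sliding-window features, PCA and clustering, the combination used in Fard et al. [98]) is given below on a synthetic kinematic signal; the window length and cluster count are arbitrary illustrative choices, not values from any cited study.

```python
# Illustrative sketch: unsupervised gesture segmentation of a synthetic kinematic signal.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
signal = rng.normal(size=(3000, 6))     # synthetic instrument-tip kinematics (positions, angles)

win = 30                                 # samples per sliding window
usable = (len(signal) // win) * win
windows = signal[:usable].reshape(-1, win * signal.shape[1])   # one feature vector per window

embedded = PCA(n_components=5).fit_transform(windows)          # low-dimensional window features
labels = KMeans(n_clusters=4, n_init=10, random_state=3).fit_predict(embedded)
print(labels[:20])                       # candidate gesture label per window
```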
Malpani et al. [85] used advanced computer vision algorithms to extract information from images for gesture recognition, achieving an accuracy of 0.75 with respect to the ground truth. Similarly, Nakawala et al. [88] recognized some gestures during surgical workflow training in urology. This study found that the combined use of CNNs and knowledge representations of surgical training is a promising approach for multilevel gesture recognition during surgical training.

3.6. Detection of Surgical Actions Applications

In this section, three studies on surgical action recognition were analyzed, of which two dealt with surgical action recognition during surgical training and one with colorectal surgical training with MIS (Table 5). Regarding the input data, three of the studies used video frames and one used data from wearable devices. Regarding the platform used for surgical training, the da VinciTM surgical system was used in all of them. Several AI algorithms were applied in these studies: ANN, autoencoder, clustering and LR. All these algorithms are described in Section 3.1.
Table 5. Studies about AI and surgical training with detection of surgical actions application.
Authors and Year | Number of Surgeons | Number of Surgical Procedures | Type of Surgical Tasks | Input Data | Robotic Platform/Conventional Surgery | AI Algorithms
Zia et al., 2017 [68] | 9 | 225 | Two-hand suturing, uterine horn dissection, suspensory ligament dissection, rectal artery skeletonization and rectal artery clipping | Wearable device data and video frames | da VinciTM surgical system | ANN 1 and clustering
Khalid et al., 2020 [58] | 8 | 120 | Knot tying, suturing and needle passing | Video frames | da VinciTM surgical system | Autoencoder
Gillani et al., 2024 [62] | 10 | 461 | Colorectal surgeries | Video frames | da VinciTM surgical system | LR 1
1 ANN: artificial neural network. LR: linear regression.
Zia et al. [68] recognized surgical actions related to colorectal surgeries by applying an ANN and clustering as AI techniques to provide feedback on individually recognized surgical tasks and improve MIS surgical training. Khalid et al. [58] recognized several surgical actions with high accuracy (0.99 for suturing, 0.99 for knot tying and 0.91 for needle passing, with an accuracy of 0.85 for novice surgeons and 0.80 for expert surgeons) by applying an autoencoder as the AI technique. This study is a first step towards an automatic feedback mechanism that helps surgeons learn and refine their learning curve. Gillani et al. [62] generated new LR-based objective indicators to recognize some surgical actions during surgical training. These indicators provide automatic, objective and scalable metrics to give feedback to novice surgeons during surgical training.
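To make the autoencoder idea concrete, the sketch below compresses synthetic per-clip feature vectors into a low-dimensional code via reconstruction training, in the spirit of (but not reproducing) the approach of Khalid et al. [58]; dimensions, training schedule and data are illustrative assumptions.

```python
# Illustrative sketch: a minimal autoencoder for per-clip surgical action representations.

import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_features: int = 64, n_latent: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

clips = torch.randn(256, 64)             # dummy per-clip feature vectors
for _ in range(5):                        # a few reconstruction-training steps
    optimiser.zero_grad()
    loss = loss_fn(model(clips), clips)
    loss.backward()
    optimiser.step()

codes = model.encoder(clips).detach()     # (256, 8) learned low-dimensional representations
print(codes.shape)
```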

4. Current Limitations, Future Challenges and Opportunities

In this scoping review, we identified 54 primary studies addressing AI applications in MIS surgical training, published between 2014 and 2024. Our findings identify the main advantages and the main limitations of AI applications in MIS surgical training. These findings were identified individually from AI and MIS points of view. Moreover, future challenges and opportunities in this field of research were also identified.
It is noteworthy that technological solutions for assisting the acquisition of skills traditionally acquired by conventional methods are booming. In this review, the scientific literature on AI applications for assisting surgical training in MIS has been evaluated. From the MIS point of view, the main drawbacks are possible ergonomic deficiencies during long surgeries, high stress levels in certain procedures and a long and very demanding learning curve. From the AI point of view, the main drawbacks of the available studies are the small number of participating surgeons, the small datasets of some studies and the lack of validation in most of them. Another drawback highlighted in this review is the incompatibility between some AI processes and MIS surgical training, which prevents the use of otherwise possible solutions. Finally, AI-based solutions may lead to potential technical failures as a consequence of the lack of properly validated solutions, which could disrupt the learning process.
MIS surgical training generates large amounts of data, both from wearable devices and from video frames or images, which require costly processing before they can be used by AI techniques, such as image segmentation and processing, manual annotation and supervision by clinical experts, generating additional costs. The automation of these processes is key to achieving better surgical training models in the coming years, optimizing costs and correcting incorrect surgical gestures, as well as postures, in real time.
It should be noted that most of the studies present in this review have a sufficient number of samples to adequately train the predictive models and provide robustness to these models to improve the quality of surgical MIS training. Another innovative solution put forward in these studies is the feedback offered to surgeons during their training process. Through these solutions, it is possible to adapt and optimize the learning curve and MIS surgical training tasks to the surgeons’ needs, self-adjusting their progress to their level of experience, automating the correction of common errors and correcting the weaknesses and enhancing the strengths of these surgeons in training.
The application of innovative AI techniques to assist and improve surgical procedures during MIS learning is growing exponentially, especially through the use of virtual assistants with useful recommendations. An example of such an assistant could be new LLM-based methods such as ChatGPT 5.0, Microsoft Copilot 365, Google Gemini 1.5 Pro, Amazon Alexa 2.2.548660.0, Apple Siri 18.1, or Yahoo Artifact 1.0, which have the ability to point out common errors that occur during surgical MIS training.
The main challenges that can be extracted from this scoping review are as follows: (i) to automatically plan optimal trajectories and steps during MIS learning, aiding more accurate and personalized training; (ii) to provide intelligent solutions for training assistance that are comprehensively validated; and (iii) to increase the realism and quality of virtual training environments and, consequently, improve the quality of surgical training.

5. Conclusions

The application of AI in MIS surgical training remains a developing field of research, which presents great potential for exploring future applications and synergies between the fields of technical and clinical research. Thus, the present scoping review found some limitations in some studies on the application of AI in MIS surgical training, due to small numbers of surgeons, lack of external validation, small datasets, and lack of clinical oversight by expert surgeons and expert data analysts. Some advances have also been completed, or are in the process of being successfully completed, such as automation of gesture and surgical action recognition, improved feedback to surgeons, or self-tuning and customization of the learning curve for surgeons. To this end, the application of innovative AI techniques could complete these tasks. In addition, AI has the potential to give rise to a new class of medical devices specifically for surgical training to improve safety and learning efficiency. Finally, future challenges such as automating optimal trajectories and steps during the training process and increasing realism in simulations need to be solved in the coming years.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/surgeries6010007/s1. Table S1: Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) Checklist.

Author Contributions

Conceptualization, D.C., J.A.S.-M., M.J.P.-S. and F.M.S.-M.; methodology, D.C., J.A.S.-M., M.J.P.-S. and F.M.S.-M.; software, D.C., J.A.S.-M. and M.J.P.-S.; validation, D.C., J.A.S.-M., M.J.P.-S. and F.M.S.-M.; formal analysis, D.C., J.A.S.-M., M.J.P.-S. and F.M.S.-M.; investigation, D.C., J.A.S.-M., M.J.P.-S. and F.M.S.-M.; resources, D.C., J.A.S.-M., M.J.P.-S. and F.M.S.-M.; data curation, D.C., J.A.S.-M. and M.J.P.-S.; writing—original draft preparation, D.C., J.A.S.-M. and M.J.P.-S.; writing—review and editing, D.C., J.A.S.-M., M.J.P.-S. and F.M.S.-M.; visualization, D.C., J.A.S.-M. and M.J.P.-S.; supervision, J.A.S.-M. and F.M.S.-M.; project administration, D.C., J.A.S.-M., M.J.P.-S. and F.M.S.-M.; funding acquisition, J.A.S.-M. and F.M.S.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been financed by the Ministry of Science and Innovation with funds from the European Union Next Generation EU, from the Recovery, Transformation and Resilience Plan (PRTR-C17.I1) and the European Regional Development Fund (ERDF) of Extremadura Operational Program 2021–2027.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available on request due to privacy or ethical restrictions.

Acknowledgments

The authors would like to thank the colleagues who collaborated in this study (Francisco Manuel González-Nuño and Carlos Plaza de Miguel).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hurley, A.M.; Kennedy, P.J.; O’Connor, L.; Dinan, T.G.; Cryan, J.F.; Boylan, G.; O’Reilly, B. SOS save our surgeons: Stress levels reduced by robotic surgery. Gynecol. Surg. 2015, 12, 197–206. [Google Scholar] [CrossRef]
  2. Williamson, T.; Song, S.-E. Robotic surgery techniques to improve traditional laparoscopy. JSLS J. Soc. Laparosc. Robot. Surg. 2022, 26, e2022.00002. [Google Scholar] [CrossRef] [PubMed]
  3. Lee, G.I.; Lee, M.R.; Green, I.; Allaf, M.; Marohn, M.R. Surgeon’s physical discomfort and symptoms during robotic surgery: A comprehensive ergonomic survey study. Surg. Endosc. 2017, 31, 1697–1706. [Google Scholar] [CrossRef] [PubMed]
  4. Kaplan, J.R.; Lee, Z.; Eun, D.D.; Reese, A.C. Complications of Minimally invasive surgery and their management. Curr. Urol. Rep. 2016, 17, 47. [Google Scholar] [CrossRef]
  5. Subramonian, K.; DeSylva, S.; Bishai, P.; Thompson, P.; Muir, G. Acquiring surgical skills: A comparative study of open vs. laparoscopic surgery. Eur. Urol. 2004, 45, 346–351. [Google Scholar] [CrossRef]
  6. Atesok, K.; Satava, R.M.; Marsh, J.L.; Hurwitz, S.R. Measuring surgical skills in simulation-based training. J. Am. Acad. Orthop. Surg. 2017, 25, 665–672. [Google Scholar] [CrossRef]
  7. Diego-Mas, J.A.; Poveda-Bautista, R.; Garzon-Leal, D.C. Influences on the use of observational methods by practitioners when identifying risk factors in physical work. Ergonomics 2015, 58, 1660–1670. [Google Scholar] [CrossRef]
  8. Pérez-Salazar, M.J.; Caballero, D.; Sánchez-Margallo, J.A.; Sánchez-Margallo, F.M. Comparative study of ergonomics in conventional and robotic-assisted laparoscopic surgery. Sensors 2024, 24, 3840. [Google Scholar] [CrossRef]
  9. Shedage, S.; Farmer, J.; Demirel, D.; Halic, T.; Kockara, S.; Arikatia, V.; Sexton, K.; Ahmadi, S. Development of virtual skill trainers and their validation study analysis using machine learning. In Proceedings of the International Conference on Information System and Data Mining (ICISDM 21), Silicon Valley, CA, USA, 27–29 May 2021. [Google Scholar]
  10. Ávila-Tomás, J.F.; Mayer-Pujadas, M.A.; Quesada-Varela, V.J. La inteligencia artificial y sus aplicaciones en medicina I: Introducción y antecedentes a la IA y robótica. Aten. Primaria 2020, 52, 778–784. [Google Scholar] [CrossRef]
  11. Lundberg, S.M.; Erion, G.; Chen, H.; DeGrave, A.; Prutkin, J.M.; Nair, B.; Katz, R.; Himmelfarb, J.; Bansal, N.; Lee, S.-I. From local explanations to global understanding with explainable AI for trees. Nat. Mach. Intell. 2020, 2, 56–67. [Google Scholar] [CrossRef]
  12. Czimmermann, T.; Ciuti, G.; Milazzo, M.; Chiurazzi, M.; Roccella, S.; Oddo, C.M.; Dario, P. Visual-based defect detection and classification approaches for industrial applications—A survey. Sensors 2020, 20, 1459. [Google Scholar] [CrossRef] [PubMed]
  13. Janiesch, C.; Zschech, P.; Heinrich, K. Machine learning and deep learning. Electron. Mark. 2021, 31, 685–695. [Google Scholar] [CrossRef]
  14. Lia, H.; Atkinson, A.G.; Navarro, S.M. Cross-industry thematic analysis of generative AI best practices: Applications and implications for surgical education and training. J. Surg. Educ. 2024, 3, 61. [Google Scholar] [CrossRef]
  15. Caballero, D.; Pérez-Palacios, T.; Caro, A.; Antequera, T. Use of magnetic resonance imaging to analyse meat and meat products non-destructively. Food Rev. Int. 2023, 39, 424–440. [Google Scholar] [CrossRef]
  16. Molano, R.; Caballero, D.; Rodríguez, P.G.; Ávila, M.M.; Torres, J.P.; Durán, M.L.; Sancho, J.C.; Caro, A. Finding the largest volume parallelepipedon of arbitrary orientation in a solid. IEEE Access 2021, 9, 103600–103609. [Google Scholar] [CrossRef]
  17. Hendawy, M.; Ghoz, L. A starting framework for urban AI applications. Ain Shams Eng. J. 2024, 15, 102987. [Google Scholar] [CrossRef]
  18. Jurado, R.D.-A.; Ye, X.; Plaza, V.O.; Suarez, M.Z.; Moreno, F.P.; Valdes, R.M.A. An introduction to the current state of standardization and certification on military AI applications. J. Air Transp. Manag. 2024, 121, 102685. [Google Scholar] [CrossRef]
  19. Ahmad, A.; Bande, L.; Ahmed, W.; Young, K.; Jha, M. AI applications in architecture in UAE: Application of an advanced optimized shading structure as a retrofit strategy of a midrise residential building façade in downtown Abu Dhabi. Energy Build. 2024, 325, 114995. [Google Scholar] [CrossRef]
  20. Hung, A.J. Can machine learning algorithms replace the conventional statistics? BJU Int. 2019, 123, 1. [Google Scholar] [CrossRef]
  21. Chen, J.; Remulla, D.; Nguyen, J.H.; Dua, A.; Liu, Y.; Dagupta, P.; Hung, A.J. Current status of artificial intelligence applications in urology and their potential to influence clinical practice. BJU Int. 2019, 124, 567–577. [Google Scholar] [CrossRef]
  22. Ribeiro, M.T.; Singh, S.; Guestrin, C. Why should I trust you?: Explaining the predictions of any classifier. In Proceedings of the International Conference on Knowledge Discovery and Data Mining (ACM SIGKDD 16), San Francisco, CA, USA, 13–17 August 2016. [Google Scholar]
  23. Moglia, A.; Georgiou, K.; Georgiou, E.; Satava, R.M.; Cuschieri, A. A systematic review on artificial intelligence in robot-assisted surgery. Int. J. Surg. 2021, 95, 106151. [Google Scholar] [CrossRef] [PubMed]
  24. Rimmer, L.; Howard, C.; Picca, L.; Bashir, M. The automaton as a surgeon: The future of artificial intelligence in emergency and general surgery. Eur. J. Trauma Emerg. Surg. 2021, 47, 757–762. [Google Scholar] [CrossRef] [PubMed]
  25. Chang, T.C.; Seufert, C.; Eminaga, O.; Shkolyar, E.; Hu, J.C.; Liao, J.C. Current trends in artificial intelligence application for endourology and robotic surgery. Urol. Clin. N. Am. 2021, 48, 151–160. [Google Scholar] [CrossRef] [PubMed]
  26. Pakkasjärvi, N.; Luthra, T.; Anand, S. Artificial intelligence in surgical learning. Surgeries 2023, 4, 86–97. [Google Scholar] [CrossRef]
  27. Nassani, L.M.; Javed, K.; Amer, R.S.; Pun, M.H.J.; Abdelkarim, A.Z.; Fernandes, G.V.O. Technology readiness level of robotic technology and artificial intelligence in dentistry: A comprehensive review. Surgeries 2024, 5, 273–287. [Google Scholar] [CrossRef]
  28. Ma, R.; Vanstrum, E.B.; Lee, R.; Chen, J.; Hung, A.J. Machine learning in the optimization of robotics in the operative field. Curr. Opin. Urol. 2020, 30, 808–816. [Google Scholar] [CrossRef]
  29. Andras, I.; Mazzone, E.; Van Leeuwen, F.W.B.; De Naeyer, G.; Van Oosterom, M.N.; Beato, S.; Buckle, T.; O’Sullivan, S.; Van Leeuwen, P.J.; Beulens, A.; et al. Artificial intelligence and robotics: A combination that is changing the operation room. World J. Urol. 2020, 38, 2359–2366. [Google Scholar] [CrossRef]
  30. Tricco, A.C.; Lillie, E.; Zarin, W.; O’Brien, K.K.; Colquhoun, H.; Levac, D.; Moher, D.; Peters, M.D.; Horsley, T.; Weeks, L.; et al. PRISMA extension for Scoping Reviews (PRISMA-ScR): Checklist and explanation. Ann. Intern. Med. 2018, 169, 467–473. [Google Scholar] [CrossRef]
  31. Cooke, A.; Smith, D.; Booth, A. Beyond PICO: The SPIDER tool for qualitative evidence synthesis. Qual. Health Res. 2012, 22, 1435–1443. [Google Scholar] [CrossRef]
  32. Freund, Y.; Schapire, R.E. A decision theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci. 1997, 55, 119–139. [Google Scholar]
  33. Kingma, D.P.; Welling, M. An introduction to variational autoencoders. Found. Trends Mach. Learn. 2019, 12, 307–392. [Google Scholar] [CrossRef]
  34. Forgy, E.W. Cluster analysis of multivariate data: Efficiency versus interpretability of classifications. Biometrics 1965, 21, 768–769. [Google Scholar]
  35. Nirthika, R.; Manivannan, S.; Ramanan, A.; Wang, R. Pooling in convolutional neural networks for medical image analysis: A survey and an empirical study. Neural Comput. Appl. 2022, 34, 5321–5347. [Google Scholar] [CrossRef] [PubMed]
  36. Safavian, R.; Landgrebe, D. A survey of decision tree classifier methodology. IEEE Trans. Syst. Man. Cybern. 1991, 21, 660–674. [Google Scholar] [CrossRef]
  37. Barrow, H.G.; Tenenbaum, J.M. Interpreting line drawings as three-dimensional surfaces. Artif. Intell. 1981, 17, 75–116. [Google Scholar] [CrossRef]
  38. Larsen, P.M. Industrial applications of fuzzy logic control. Int. J. Man. Mach. Stud. 1980, 12, 3–10. [Google Scholar] [CrossRef]
  39. Fukami, K.; Fukagata, K.; Taira, K. Assessment of supervised machine learning methods for fluid flows. Theor. Comput. Fluid Dyn. 2020, 34, 497–519. [Google Scholar] [CrossRef]
  40. Samworth, R.J. Optimal weighted nearest neighbour classifiers. Ann. Stat. 2012, 40, 2733–2763. [Google Scholar] [CrossRef]
  41. Manning, C.D. Human language understanding and reasoning. Daedalus 2022, 151, 127–138. [Google Scholar] [CrossRef]
  42. Fayyad, U.; Piatetsky-Shapiro, G.; Smyth, P. From data mining to knowledge discovery in databases. Am. Assoc. Artif. Intell. 1996, 17, 37–54. [Google Scholar]
  43. Nigam, K.; McCallum, A.; Thrun, S.; Mitchell, T. Learning to classify text from labeled and unlabeled documents using EM. Mach. Learn. 2000, 39, 103–134. [Google Scholar] [CrossRef]
  44. Bro, R. Multiway calibration. Multilinear PLS. J. Chemom. 1996, 10, 47–61. [Google Scholar] [CrossRef]
  45. Bro, R.; Smilde, A.K. Principal Component Analysis. Anal. Methods 2014, 6, 2812–2831. [Google Scholar] [CrossRef]
  46. Vapnik, V.N.; Chervonenkis, Y.A. On the uniform convergence of relative frequencies of events to their probabilities. Theory Probab. 1971, 16, 264–280. [Google Scholar] [CrossRef]
  47. Alonso-Silverio, G.A.; Pérez-Escamirosa, F.; Bruno-Sanchez, R.; Ortiz-Simon, J.L.; Muñoz-Guerrero, R.; Minor-Martinez, A.; Alarcón-Paredes, A. Development of a laparoscopic box trainer based on open-source hardware and artificial intelligence for objective assessment of surgical psychomotor skills. Surg. Innov. 2018, 25, 380–388. [Google Scholar] [CrossRef]
  48. Azimi, E.; Molina, C.; Chang, A.; Huang, J.; Huang, C.-M.; Kazanzides, P. Interactive training and operation ecosystem for surgical tasks in mixed reality. In Proceedings of the International Workshop on Computer-Assisted and Robotic Endoscopy (CARE 2018), Granada, Spain, 16–20 September 2018. [Google Scholar]
  49. Dubin, A.K.; Julian, D.; Tanaka, A.; Mattingly, P.; Smith, R. A model for predicting the GEARS score from virtual reality surgical simulator metrics. Surg. Endosc. 2018, 32, 3576–3581. [Google Scholar] [CrossRef]
  50. Fard, M.J.; Ameri, S.; Ellis, R.D.; Chinnam, R.B.; Pandya, A.K.; Klein, M.D. Automated robot-assisted surgical skill evaluation: Predictive analysis approach. Int. J. Med. Robot. 2018, 14, e1850. [Google Scholar] [CrossRef]
  51. Wang, Z.; Fey, A.M. Deep learning with convolutional neural network for objective skill evaluation in robot-assisted surgery. Int. J. Comput. Assist. Radiol. Surg. 2018, 13, 1959–1970. [Google Scholar] [CrossRef]
  52. Zia, A.; Essa, I. Automated surgical skill assessment in RMIS training. Int. J. Comput. Assist. Radiol. Surg. 2018, 13, 731–739. [Google Scholar] [CrossRef]
  53. Ershad, M.; Rege, R.; Fey, A.M. Automatic and near real-time stylistic behavior assessment in robotic surgery. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 635–643. [Google Scholar] [CrossRef]
  54. Fawaz, H.I.; Forestier, G.; Weber, J.; Idoumghar, L.; Muller, P.A. Accurate and interpretable evaluation of surgical skills from kinematic data using fully convolutional neural network. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 1611–1617. [Google Scholar] [CrossRef] [PubMed]
  55. Funke, I.; Mees, S.T.; Weitz, J.; Speidel, S. Video-based surgical skill assessment using 3D convolutional neural networks. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 1217–1225. [Google Scholar] [CrossRef] [PubMed]
  56. Holden, M.S.; Xia, S.; Lia, H.; Keri, Z.; Bell, C.; Patterson, L.; Ungi, T.; Fichtinger, G. Machine learning methods for automated technical skills assessment with instructional feedback in ultrasound guided interventions. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 1993–2003. [Google Scholar] [CrossRef] [PubMed]
  57. Tan, X.; Chang, C.-B.; Su, Y.; Lim, K.-B.; Chui, C.-K. Robot-Assisted Training in Laparoscopy Using Deep Reinforcement Learning. IEEE Robot. Autom. Lett. 2019, 4, 485–492. [Google Scholar] [CrossRef]
  58. Khalid, S.; Goldenberg, M.; Grantcharov, T.; Taati, B.; Rudzicz, F. Evaluation of deep learning models for identifying surgical actions and measuring performance. JAMA Netw. Open 2020, 3, e201664. [Google Scholar] [CrossRef]
  59. Wu, C.; Cha, J.; Sulek, J.; Zhou, T.; Sundaram, C.P.; Wachs, J.; Yu, D. Eye-tracking metrics predict perceived workload in robotic surgical skills training. Hum. Factors 2020, 62, 1365–1386. [Google Scholar] [CrossRef]
  60. Zhang, D.; Wu, Z.; Chen, J.; Gao, A.; Chen, X.; Li, P. Automatic microsurgical skill assessment based on cross-domain transfer learning. IEEE Robot. Autom. Lett. 2020, 5, 4148–4155. [Google Scholar] [CrossRef]
  61. Reich, A.; Mirchi, N.; Yilmaz, R.; Ledwos, N.; Bissonnette, V.; Tan, D.H.; Winkler-Schwartz, A.; Karlik, B.; Del Maestro, R.F. Artificial neural network approach to competency based training using a virtual reality neurosurgical simulation. Open Neurosurg. 2022, 23, 31–39. [Google Scholar] [CrossRef]
  62. Giliani, M.; Rupji, M.; Olson, T.J.P.; Blach, G.C.; Shields, M.C.; Liu, Y.; Rosen, S.A. Objective performance indicators during specific steps of robotic right colectomy can differentiate surgeon expertise. Surgery 2024, 176, 1036–1043. [Google Scholar] [CrossRef]
  63. Gilliani, M.; Rupji, M.; Olson, T.J.P.; Sullivan, P.; Shaffer, V.; Balch, G.C.; Shields, M.C.; Liu, Y.; Rosen, S.A. Objective Performance Indicators During Robotic Right Colectomy Differ According to Surgeon Skill. J. Surg. Res. 2024, 302, 836–844. [Google Scholar] [CrossRef]
  64. Ahmidi, N.; Tao, L.; Sefati, S.; Gao, Y.; Lea, C.; Haro, B.B.; Zappella, L.; Khudanpur, S.; Vidal, R.; Hager, G.D. A dataset and benchmarks for segmentation and recognition of gestures in robotic surgery. IEEE Trans. Biomed. Eng. 2017, 64, 2025–2041. [Google Scholar] [CrossRef] [PubMed]
  65. Nosrati, M.S.; Amir-Khalili, A.; Peyrat, J.-M.; Abinahed, J.; Al-Alao, O.; Al-Ansari, A.; Abugharbieh, R.; Hamarneh, G. Endoscopic scene labelling and augmentation using intraoperative pulsatile motion and colour appearance cues with preoperative. Int. J. Comput. Assist. Radiol. Surg. 2016, 11, 1409–1418. [Google Scholar] [CrossRef] [PubMed]
  66. Krishnan, S.; Garg, A.; Patil, S.; Lea, C.; Hager, G.; Abbeel, P.; Goldberg, K. Transition state clustering: Unsupervised surgical trajectory segmentation for robot learning. Int. J. Robot. Res. 2017, 36, 1595–1618. [Google Scholar] [CrossRef]
  67. Sarikaya, D.; Corso, J.J.; Guru, K.A. Detection and localization of robotic tools in robot-assisted surgery videos using deep neural networks for region proposal and detection. IEEE Trans. Med. Imag. 2017, 36, 1542–1549. [Google Scholar] [CrossRef] [PubMed]
  68. Zia, A.; Zhang, C.; Xiong, X.; Jarc, A.M. Temporal clustering of surgical activities in robot-assisted surgery. Int. J. Comput. Assist. Radiol. Surg. 2017, 12, 1171–1178. [Google Scholar] [CrossRef]
  69. Ross, T.; Zimmerer, D.; Vemuri, A.; Isensee, F.; Wiesenfarth, M.; Bodenstedt, S.; Both, F.; Kessler, P.; Wagner, M.; Muller, B.; et al. Exploiting the potential of unlabeled endoscopic video data with self-supervised learning. Int. J. Comput. Assist. Radiol. Surg. 2018, 13, 925–933. [Google Scholar] [CrossRef]
  70. Shafiei, S.B.; Hussein, A.A.; Muldoon, S.F.; Guru, K.A. Functional brain states measure mentor-trainee trust during robot-assisted surgery. Sci. Rep. 2018, 8, 3667. [Google Scholar]
  71. Colleoni, E.; Moccia, S.; Du, X.; De Momi, E.; Stoyanov, D. Deep learning based robotic tool detection and articulation estimation with spatio-temporal layers. IEEE Robot. Autom. Lett. 2019, 4, 2714–2721. [Google Scholar] [CrossRef]
  72. Engelhardt, S.; Sharan, L.; Karck, M.; De Simone, R.; Wolf, I. Cross-Domain Conditional Generative Adversarial Networks for stereoscopic hyperrealism in surgical training. In Proceedings of the Medical Imaging Computing and Computer Assisted Intervention (MICCAI 2019), Shenzhen, China, 13–17 October 2019. [Google Scholar]
  73. Islam, M.; Atputharuban, D.A.; Ramesh, R.; Ren, H. Real-time instrument segmentation in robotic surgery using auxiliary supervised deep adversarial learning. IEEE Robot. Autom. Lett. 2019, 4, 2188–2195. [Google Scholar] [CrossRef]
  74. Attanasio, A.; Scaglioni, B.; Leonetti, M.; Frangi, A.F.; Cross, W.; Biyani, C.S. Autonomous tissue retraction in robotic assisted minimally invasive surgery: A feasibility study. IEEE Robot. Autom. Lett. 2020, 5, 6528–6535. [Google Scholar] [CrossRef]
  75. Liu, Y.; Zhao, Z.; Chang, F.; Hu, S. An anchor-free convolutional neural network for real-time surgical tool detection in Robot-assisted surgery. IEEE Access 2020, 8, 78193–78201. [Google Scholar] [CrossRef]
  76. De Boer, C.; Ghomrawi, H.; Many, B.; Bouchard, M.E.; Linton, S.; Figueroa, A.; Kwon, S.; Abdullah, F. Utility of Wearable Sensors to Assess Postoperative Recovery in Pediatric Patients After Appendectomy. J. Surg. Res. 2021, 263, 160–166. [Google Scholar] [CrossRef] [PubMed]
  77. Chen, A.B.; Liang, S.; Nguyen, J.H.; Liu, Y.; Hung, A.J. Machine learning analysis of automated performance metrics during granular sub-stitch phases predict surgeon experience. Surgery 2021, 169, 1245–1249. [Google Scholar] [CrossRef] [PubMed]
  78. Ayuso, S.A.; Elhage, S.A.; Zhang, Y.; Aladegbami, B.G.; Gersin, K.S.; Fischer, J.P.; Augenstein, V.A.; Colavita, P.D.; Heniford, B.T. Development of virtual skill trainers and their validation study analysis using machine learning. Surgery 2023, 173, 748–755. [Google Scholar] [CrossRef]
  79. Moglia, A.; Morelli, L.; D’Ischia, R.; Fatucchi, L.M.; Pucci, V.; Berchiolli, R.; Ferrari, M.; Cuschieri, A. Ensemble deep learning for the prediction of proficiency at a virtual simulator for robot-assisted surgery. Surg. Endosc. 2022, 36, 6473–6479. [Google Scholar] [CrossRef]
  80. Mohamadipanah, H.; Kearse, L.; Wise, B.; Backhus, L.; Pugh, C. Generating Rare Surgical Events Using CycleGAN: Addressing Lack of Data for Artificial Intelligence Event Recognition. J. Surg. Res. 2023, 283, 594–605. [Google Scholar] [CrossRef]
  81. Caballero, D.; Pérez-Salazar, M.J.; Sánchez-Margallo, J.A.; Sánchez-Margallo, F.M. Applying artificial intelligence on EDA sensor data to predict stress on minimally invasive robotic-assisted surgery. Int. J. Comput. Assist. Radiol. Surg. 2024, 19, 1953–1963. [Google Scholar] [CrossRef]
  82. Gruter, A.A.J.; Torrenvliet, B.R.; Tanis, P.J.; Tuynman, J.B. Video-based surgical quality assessment of minimally invasive right hemicolectomy by medical students after specific training. Surgery 2024, 30, 108951. [Google Scholar] [CrossRef]
  83. Orgiu, A.; Karkazan, B.; Connell, S.; Dechaumet, L.; Bennani, Y.; Gregory, T. Enhancing wrist arthroscopy: Artificial intelligence applications for bone structure recognition using machine learning. Hand Surg. Rehabil. 2024, 43, 101717. [Google Scholar] [CrossRef]
  84. Pérez-Salazar, M.J.; Caballero, D.; Sánchez-Margallo, J.A.; Sánchez-Margallo, F.M. Correlation study and predictive modelling of ergonomic parameters in robotic assisted laparoscopic surgery. Sensors 2024, 24, 7721. [Google Scholar] [CrossRef]
  85. Malpani, A.; Lea, C.; Chen, C.C.G.; Hager, G.D. System events: Readily accessible features for surgical phase detection. Int. J. Comput. Assist. Radiol. Surg. 2016, 11, 1201–1209. [Google Scholar] [CrossRef] [PubMed]
  86. Hung, A.J.; Chen, J.; Gill, I.S. Automated performance metrics and machine learning algorithms to measure surgeon performance and anticipate clinical outcomes in robotic surgery. JAMA Surg. 2018, 153, 770–771. [Google Scholar] [CrossRef] [PubMed]
  87. Baghdadi, A.; Hussein, A.A.; Ahmed, Y.; Cavuoto, L.A.; Guru, K.A. A computer vision technique for automated assessment of surgical performance using surgeons’ console-feed videos. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 697–707. [Google Scholar] [CrossRef] [PubMed]
  88. Nakawala, H.; Bianchi, R.; Pescatori, L.E.; De Cobelli, O.; Ferrigno, G.; De Momi, E. Deep onto network for surgical workflow and context recognition. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 685–696. [Google Scholar] [CrossRef]
  89. Wong, N.C.; Lam, C.; Patterson, L.; Shayegan, B. Use of machine learning to predict early biochemical recurrence after robot-assisted prostatectomy. BJU Int. 2019, 123, 51–57. [Google Scholar] [CrossRef]
  90. Zhao, B.; Waterman, R.S.; Urman, R.D.; Gabriel, R.A. A machine learning approach to predicting case duration for robot-assisted surgery. J. Med. Syst. 2019, 43, 32. [Google Scholar] [CrossRef]
  91. Zia, A.; Guo, L.; Zhou, L.; Essa, I.; Jarc, A. Novel evaluation of surgical activity recognition models using task-based efficiency metrics. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 2155–2163. [Google Scholar] [CrossRef]
  92. Luongo, F.; Hakim, R.; Nguyen, J.H.; Anandkumar, A.; Hung, A.J. Deep learning based computer vision to recognize and classify suturing gestures in robot-assisted surgery. Surgery 2021, 169, 1240–1244. [Google Scholar] [CrossRef]
  93. Sumitomo, M.; Teramoto, A.; Toda, R.; Fukami, N.; Fukaya, K.; Zennami, K.; Ichino, M.; Takahara, K.; Kusaka, M.; Shiroki, R. Deep learning using preoperative magnetic resonance imaging information to predict early recovery of urinary continence after robot-assisted radical prostatectomy. Int. J. Urol. 2020, 27, 922–928. [Google Scholar] [CrossRef]
  94. Wu, C.; Cha, J.; Sulek, J.; Sundaram, C.P.; Wachs, J.; Proctor, R.W.; Yu, D. Sensor-based indicators of performance changes between sessions during robotic surgery training. Appl. Ergon. 2021, 90, 103251. [Google Scholar] [CrossRef]
  95. Li, L.T.; Sinkler, M.A.; Adelstein, J.M.; Voos, J.E.; Calcei, J.G. ChatGPT Responses to Common Questions About Anterior Cruciate Ligament Reconstruction Are Frequently Satisfactory. Arthroscopy 2024, 40, 2058–2066. [Google Scholar] [CrossRef] [PubMed]
  96. Salazar, L.; Sánchez-Varo, I.; Caballero, D.; Iribar-Zabala, A.; Bertelsen-Simonetti, A.; Sánchez-Margallo, J.A.; Sánchez-Margallo, F.M. System for assistance in ultrasound guided percutaneous hepatic interventions using augmented reality: First steps. Healthc. Technol. Lett. 2024. [Google Scholar] [CrossRef]
  97. Despinoy, F.; Bouget, D.; Forestier, G.; Penet, C.; Zemiti, N.; Poignet, P.; Jannin, P. Unsupervised trajectory segmentation for surgical gesture recognition in robotic training. IEEE Trans. Biomed. Eng. 2016, 63, 1280–1291. [Google Scholar] [CrossRef] [PubMed]
  98. Fard, M.J.; Ameri, S.; Chinnam, R.B.; Ellis, R.D. Soft boundary approach for unsupervised gesture segmentation in robotic-assisted surgery. IEEE Robot. Autom. Lett. 2017, 2, 171–178. [Google Scholar] [CrossRef]
  99. DiPietro, R.; Ahmidi, N.; Malpani, A.; Waldram, M.; Lee, G.I.; Lee, M.R.; Vedula, S.S.; Hager, G.D. Segmenting and classifying activities in robot-assisted surgery with recurrent neural networks. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 2005–2020. [Google Scholar] [CrossRef]
Figure 1. Flow chart of the study selection process for the presented scoping review.
Figure 2. Evolution of the number of publications on the application of AI in surgical training over the last 25 years (2000–2024).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
