Applications of Artificial Intelligence in Minimally Invasive Surgery Training: A Scoping Review
Abstract
1. Introduction
2. Materials and Methods
2.1. Search Strategy
2.2. Analysis of Studies
2.3. Evaluation of the Scope Throughout the Years
3. Results and Discussion
3.1. Artificial Intelligence Techniques
- AdaBoost: This is a supervised boosting algorithm that can be used in conjunction with other types of learning algorithms, combining many weak learners into a single boosted classifier to improve their performance [32] (a minimal sketch combining several of the classifiers in this list is shown after it).
- ANN: This is an AI technique that attempts to simulate the behavior of the human brain. The generated predictive model continues to learn from the values of the signals propagated through the neural network [10].
- Autoencoder: This algorithm is a type of ANN applied to unsupervised learning. It is based on the principle of an encoding and decoding process and is usually applied to learn compact representations of unlabeled data, which can then support their classification [33] (see the autoencoder sketch after this list).
- Clustering: This is an algorithm that partitions the input data into several non-overlapping clusters or groups so that each data point lies at the smallest possible Euclidean distance from the centroid of its assigned cluster [34].
- CNN: This AI algorithm is a regularized type of feed-forward ANN that learns features by itself through convolution kernels optimized during training. This type of ANN has been applied to build predictive models from many different types of multimedia data [35].
- Decision tree (DT): This is a decision modeling tool that graphically displays the classification process of a given input for a given output class label. This method is one of the learning algorithms that generate classification models in the form of tree structures [36].
- DLM: These methods are an evolution of machine learning that focuses on learning from representations extracted by deep ANNs. This type of AI technique is applied to tasks such as classification (supervised, semi-supervised, and unsupervised), prediction, and representation learning [35].
- Edge detection: This is a computer vision method for identifying edges and curves in digital images, or discontinuities in signal processing. It is a fundamental tool in fields such as image processing, computer vision, and feature detection and extraction [37], and it has been enhanced by AI.
- Fuzzy rules: This method is used in fuzzy logic systems to infer an output from independent input variables [38].
- Generative adversarial network (GAN): This method is a class of ANNs or DLMs, generally used as a framework for generative AI. It is composed of two competing ANNs, a generator and a discriminator, trained so that one network's gain is the other's loss [39].
- Image segmentation: This is a computer vision technique to discriminate discrete groups of pixels in a digital image to inform object detection. Complex visual data are being analyzed by applying AI to image segmentation [37].
- K-nearest neighbors (KNN): This is a supervised learning method typically used for classification. Each sample is assigned to the class most common among its nearest neighbors, yielding groups of components with similar characteristics [40].
- LLM: This is a type of AI computational model designed for natural language processing tasks and closely related to generative AI. This model acquires its capabilities by learning statistical relationships from large amounts of text through a self-supervised or semi-supervised training process [41].
- Linear regression (LR): This is a predictive technique that builds models to forecast future values from current data through trend analysis. It models a linear relationship between a dependent variable and several independent variables [42].
- Naive Bayes: This is an AI algorithm for supervised classification based on Bayesian principles and probability theory [43]. This algorithm can be fed with features extracted by an ANN.
- Partial least squares (PLS): This method is a predictive technique that combines principal component analysis with reduced-rank regression, finding a linear regression model that projects the predicted and observable variables into a new space with maximum covariance [44].
- Principal component analysis (PCA): This is an exploratory data analysis technique used for data visualization and processing based on linear dimensionality reduction. It can also be used for supervised classification [45].
- Random forest (RF): This is an ensemble learning technique used for classification and regression. It aggregates the outputs of multiple DTs to obtain a more robust result, and it is one of the learning algorithms that generate classification models in the form of tree structures [36].
- Support vector machine (SVM): This method maps the data into a transformed feature space and finds the separating boundary that maximizes the margin between classes while tolerating a small error in fitting the data; nonlinear relationships can be captured through kernel functions [46].
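To make the classical techniques above concrete, the following minimal sketch combines several of them (PCA, KNN, SVM, RF, and AdaBoost) into a toy skills-classification pipeline using scikit-learn. It is illustrative only: the synthetic feature matrix, the novice/expert labels, and all dimensions are hypothetical stand-ins for real kinematic data, not an implementation from any of the reviewed studies.

```python
# Illustrative sketch only: toy surgical-skill classification with several of
# the techniques listed above. All data and dimensions are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 30))    # 120 trials x 30 kinematic features (hypothetical)
y = rng.integers(0, 2, size=120)  # 0 = novice, 1 = expert (hypothetical labels)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}
for name, model in models.items():
    # Standardize, reduce dimensionality with PCA, then classify.
    pipe = make_pipeline(StandardScaler(), PCA(n_components=10), model)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: mean cross-validated accuracy = {scores.mean():.2f}")
```

With random labels the accuracies hover around chance; the point is only the shape of the pipeline, which mirrors the feature-reduction-plus-classifier pattern (e.g., PCA with KNN and SVM) reported by several of the reviewed studies.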
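The encode/decode principle behind the autoencoder entry can likewise be sketched in a few lines, here in PyTorch; the layer sizes, latent dimension, and random unlabeled data are assumptions chosen purely for illustration.

```python
# Illustrative autoencoder sketch: learns to reconstruct unlabeled feature
# vectors; the low-dimensional codes can feed downstream analysis.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features: int = 30, n_latent: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 16), nn.ReLU(), nn.Linear(16, n_latent))
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 16), nn.ReLU(), nn.Linear(16, n_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x = torch.randn(256, 30)  # hypothetical unlabeled trials
for _ in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(x), x)  # reconstruction error drives learning
    loss.backward()
    optimizer.step()
codes = model.encoder(x).detach()  # compact representations for later use
```

The learned codes could then be passed to a clustering or classification step, in line with the unlabeled-data use described in the autoencoder entry above.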
3.2. Skills Assessment Applications
Authors and Year | Number of Surgeons | Number of Surgical Procedures | Type of Surgical Tasks | Input Data | Robotic Platform/Conventional Surgery | AI Algorithms |
---|---|---|---|---|---|---|
Alonso-Silverio et al., 2018 [47] | 16 | 400 | VR simulator | Image data | Conventional laparoscopic surgery | ANN 1 |
Azimi et al., 2018 [48] | 3 | 3 | VR simulator | Wearable device data | Neurosurgery simulator | CNN 1 |
Dubin et al., 2018 [49] | 14 | 28 | VR simulator | Wearable device data | da Vinci™ surgical system | LR 1 |
Fard et al., 2018 [50] | 8 | 48 | Knot tying and suturing | Wearable device data and video frames | da Vinci™ surgical system | KNN 1, LR 1, and SVM 1 |
Wang and Fey, 2018 [51] | 8 | 72 | Knot tying, suturing, and needle passing | Wearable device data and video frames | da Vinci™ surgical system | CNN 1 |
Zia and Essa, 2018 [52] | 8 | 72 | Knot tying, suturing, and needle passing | Wearable device data and video frames | da Vinci™ surgical system | KNN 1, PCA 1, and SVM 1 |
Ershad et al., 2019 [53] | 14 | 28 | VR simulator | Wearable device data | da Vinci™ surgical system | SVM 1 |
Fawaz et al., 2019 [54] | 11 | 99 | Knot tying, suturing, and needle passing | Wearable device data and video frames | da Vinci™ surgical system | CNN 1 |
Funke et al., 2019 [55] | 9 | 81 | Knot tying, suturing, and needle passing | Video frames | da Vinci™ surgical system | CNN 1 |
Holden et al., 2019 [56] | 24 | 43 | Needle passing and US probe motion | Video frames | Percutaneous guided interventions | DT 1 and fuzzy rules |
Tan et al., 2019 [57] | 16 | 27 | VR simulator | Video frames | VREP simulator | CNN 1 |
Khalid et al., 2020 [58] | 8 | 120 | Knot tying, suturing, and needle passing | Video frames | da Vinci™ surgical system | Autoencoder |
Wu et al., 2020 [59] | 8 | 180 | EndoWrist manipulation, clutching, needle control, needle driving, peg transfer and suturing | Eye-tracking metrics and subjective surveys | da Vinci™ surgical system | Naive Bayes |
Zhang et al., 2020 [60] | 8 | 552 | Knot tying, suturing, and needle passing | Wearable device data and video frames | Experimental microsurgical robot research platform | CNN 1 |
Shedage et al., 2021 [9] | 33 | 66 | VR simulator | Wearable device data | VATDEP Simulator | KNN 1, LR 1, and SVM 1 |
Reich et al., 2022 [61] | 21 | 83 | VR simulator | Video frames | Conventional laparoscopic surgery | ANN 1 |
Gillani et al., 2024 [62] | 10 | 461 | Colorectal surgeries | Video frames | da Vinci™ surgical system | LR 1 |
Gillani et al., 2024 [63] | 10 | 461 | Colorectal surgeries | Video frames | da Vinci™ surgical system | LR 1 |
3.3. Surgical Training Applications
Authors and Year | Number of Surgeons | Number of Surgical Procedures | Type of Surgical Tasks | Input Data | Robotic Platform/Conventional Surgery | AI Algorithms |
---|---|---|---|---|---|---|
Nosrati et al., 2016 [65] | 97 | 3096 | Urology surgeries | Image data | da Vinci™ surgical system | ANN 1, LR 1 and RF 1 |
Krishnan et al., 2017 [66] | 5 | 77 | Circle cutting, needle passing and peg transfer | Wearable device data and video frames | da Vinci™ surgical system | Clustering |
Sarikaya et al., 2017 [67] | 10 | 2455 | Ball placement, peg transfer, suturing, knot tying, needle passing, labyrinth, and urology and colorectal surgery | Video frames | da Vinci™ surgical system | CNN 1 |
Zia et al., 2017 [68] | 9 | 225 | Two-hand suturing, uterine horn dissection, suspensory ligament dissection, rectal artery skeletonization and rectal artery clipping | Wearable device data and video frames | da Vinci™ surgical system | ANN 1 and clustering |
Dubin et al., 2018 [49] | 14 | 28 | VR simulator | Wearable device data | da Vinci™ surgical system | LR 1 |
Ross et al., 2018 [69] | 21 | 23,000 | Urology and colorectal surgery | Image data and video frames | da Vinci™ surgical system | CNN 1 and GAN 1 |
Shafiei et al., 2018 [70] | 3 | 170 | Urology surgery | Image data | da Vinci™ surgical system | PCA 1 and SVM 1 |
Colleoni et al., 2019 [71] | 18 | 500 | Urology and colorectal surgery | Video frames | da Vinci™ surgical system | CNN 1 |
Engelhardt et al., 2019 [72] | 9 | 90 | Surgeries on silicone model | Video frames | MICS-MVR surgical system | GAN 1 |
Ershad et al., 2019 [53] | 14 | 28 | VR simulator | Wearable device data | da Vinci™ surgical system | SVM 1 |
Holden et al., 2019 [56] | 24 | 43 | Needle passing and US probe motion | Video frames | Percutaneous guided interventions | DT 1 and fuzzy rules |
Islam et al., 2019 [73] | 8 | 225 | Surgeries on porcine model | Video frames | da Vinci™ surgical system | GAN 1 |
Tan et al., 2019 [57] | 16 | 27 | VR simulator | Video frames | VREP simulator | CNN 1 |
Attanasio et al., 2020 [74] | 6 | 1080 | Neurology surgery | Video frames | da Vinci™ surgical system | CNN 1 |
Liu et al., 2020 [75] | 10 | 2455 | Ball placement, peg transfer, suturing, knot tying, needle passing, labyrinth, and urology and colorectal surgery | Video frames | da Vinci™ surgical system | CNN 1 |
Wu et al., 2020 [59] | 8 | 180 | EndoWrist manipulation, clutching, needle control, needle driving, peg transfer and suturing | Eye-tracking metrics and subjective surveys | da Vinci™ surgical system | Naive Bayes |
De Boer et al., 2021 [76] | 9 | 32 | Urology surgeries | Wearable device data | Conventional urology surgeries | LR 1 and PLS 1 |
Chen et al., 2021 [77] | 17 | 68 | Urology surgery | Wearable device data | da Vinci™ surgical system | AdaBoost and RF 1 |
Shedage et al., 2021 [9] | 33 | 66 | VR simulator | Wearable device data | VATDEP Simulator | KNN 1, LR 1 and SVM 1 |
Reich et al., 2022 [61] | 21 | 83 | VR simulator | Video frames | Conventional laparoscopic surgery | ANN 1 |
Ayuso et al., 2023 [78] | 16 | 10,514 | Colorectal surgeries | Image data and video frames | Conventional laparoscopic surgery | CNN 1, DLM 1, and GAN 1 |
Moglia et al., 2023 [79] | 176 | 352 | VR simulator | Wearable device data | da Vinci™ surgical system | CNN 1 |
Mohamadipanah et al., 2023 [80] | 6 | 4997 | Pulmonary surgeries | Video frames | Conventional pulmonary surgeries | GAN 1 |
Caballero et al., 2024 [81] | 11 | 27 | Suturing, needle passing, dissection, labyrinth, general surgery, urology and gynecology | Wearable device data | Versius™ surgical system and conventional laparoscopic surgery | ANN 1, LR 1 and SVM 1 |
Gillani et al., 2024 [62] | 10 | 461 | Colorectal surgeries | Video frames | da Vinci™ surgical system | LR 1 |
Gillani et al., 2024 [63] | 10 | 461 | Colorectal surgeries | Video frames | da Vinci™ surgical system | LR 1 |
Gruter et al., 2024 [82] | 36 | 216 | Colorectal surgeries | Video frames | Conventional laparoscopic surgery | Image segmentation |
Orgiu et al., 2024 [83] | 6 | 20 | Arthroscopy surgeries | Image data and video frames | Conventional arthroscopy surgeries | CNN 1 |
Pérez-Salazar et al., 2024 [8] | 6 | 18 | Suturing | Wearable device data | Versius™ surgical system and conventional laparoscopic surgery | LR 1 |
Pérez-Salazar et al., 2024 [84] | 7 | 42 | Peg transfer, cutting, needle passing and suturing | Wearable device data | Versius™ surgical system and conventional laparoscopic surgery | ANN 1 and LR 1 |
3.4. Surgical Planning Applications
Authors and Year | Number of Surgeons | Number of Surgical Procedures | Type of Surgical Tasks | Input Data | Robotic Platform/Conventional Surgery | AI Algorithms |
---|---|---|---|---|---|---|
Malpani et al., 2016 [85] | 6 | 24 | Urology surgeries | Image data | da Vinci™ surgical system | CNN 1, RF 1 and SVM 1 |
Hung et al., 2018 [86] | 9 | 78 | Urology surgeries | Image data | da Vinci™ surgical system | KNN 1, LR 1, RF 1 and SVM 1 |
Baghdadi et al., 2019 [87] | 20 | 20 | Surgical planning | Video frames | da Vinci™ surgical system | Edge detection and image segmentation |
Hung et al., 2019 [20] | 8 | 100 | Urology surgeries | Image data | da Vinci™ surgical system | CNN 1 and RF 1 |
Nakawala et al., 2019 [88] | 3 | 9 | Urology surgeries | Video frames | da Vinci™ surgical system | CNN 1 |
Wong et al., 2019 [89] | 7 | 338 | Urology surgeries | Image data | da Vinci™ surgical system | KNN 1, LR 1 and RF 1 |
Zhao et al., 2019 [90] | 14 | 424 | Urology surgeries | Image data | da Vinci™ surgical system | AdaBoost, ANN 1, CNN 1, DT 1, LR 1 and RF 1 |
Zia et al., 2019 [91] | 12 | 100 | Urology surgeries | Image data and video frames | da Vinci™ surgical system | CNN 1 |
Luongo et al., 2020 [92] | 12 | 3002 | Urology surgeries | Image data and video frames | da Vinci™ surgical system | CNN 1 |
Sumitomo et al., 2020 [93] | 9 | 400 | Urology surgeries | Image data | da Vinci™ surgical system | ANN 1, CNN 1, image segmentation, Naive Bayes, RF 1 and SVM 1 |
Wu et al., 2021 [94] | 7 | 168 | Urology surgeries | Wearable device data and video frames | da Vinci™ surgical system | LR 1, Naive Bayes and SVM 1 |
Li et al., 2024 [95] | 3 | 10 | Arthroscopy surgeries | ChatGPT prompts | Robotic surgical systems | LLM 1 |
3.5. Recognition of Surgical Gestures Applications
Authors and Year | Number of Surgeons | Number of Surgical Procedures | Type of Surgical Tasks | Input Data | Robotic Platform/Conventional Surgery | AI Algorithms |
---|---|---|---|---|---|---|
Despinoy et al., 2016 [97] | 3 | 12 | Drawing the letter R and peg transfer | Wearable device data and video frames | Raven II surgical system | KNN 1 and SVM 1 |
Malpani et al., 2016 [85] | 6 | 24 | Urology surgeries | Image data | da Vinci™ surgical system | CNN 1, RF 1 and SVM 1 |
Fard et al., 2017 [98] | 8 | 72 | Knot tying, needle passing and suturing | Wearable device data and video frames | da Vinci™ surgical system | Clustering and PCA 1 |
DiPietro et al., 2019 [99] | 8 | 80 | Suturing and wound closure | Wearable device data and video frames | da Vinci™ surgical system | ANN 1 and CNN 1 |
Nakawala et al., 2019 [88] | 3 | 9 | Urology surgeries | Video frames | da Vinci™ surgical system | CNN 1 |
Zia et al., 2019 [91] | 12 | 100 | Urology surgeries | Image data and video frames | da Vinci™ surgical system | CNN 1 |
Luongo et al., 2020 [92] | 12 | 3002 | Urology surgeries | Image data and video frames | da Vinci™ surgical system | CNN 1 |
3.6. Detection of Surgical Actions Applications
Authors and Year | Number of Surgeons | Number of Surgical Procedures | Type of Surgical Tasks | Input Data | Robotic Platform/Conventional Surgery | AI Algorithms |
---|---|---|---|---|---|---|
Zia et al., 2017 [68] | 9 | 225 | Two-hand suturing, uterine horn dissection, suspensory ligament dissection, rectal artery skeletonization and rectal artery clipping | Wearable device data and video frames | da Vinci™ surgical system | ANN 1 and clustering |
Khalid et al., 2020 [58] | 8 | 120 | Knot tying, suturing and needle passing | Video frames | da Vinci™ surgical system | Autoencoder |
Gillani et al., 2024 [62] | 10 | 461 | Colorectal surgeries | Video frames | da Vinci™ surgical system | LR 1 |
4. Current Limitations, Future Challenges and Opportunities
5. Conclusions
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Hurley, A.M.; Kennedy, P.J.; O’Connor, L.; Dinan, T.G.; Cryan, J.F.; Boylan, G.; O’Reilly, B. SOS save our surgeons: Stress levels reduced by robotic surgery. Gynecol. Surg. 2015, 12, 197–206. [Google Scholar] [CrossRef]
- Williamson, T.; Song, S.-E. Robotic surgery techniques to improve traditional laparoscopy. JSLS J. Soc. Laparosc. Robot. Surg. 2022, 26, e2022.00002. [Google Scholar] [CrossRef] [PubMed]
- Lee, G.I.; Lee, M.R.; Green, I.; Allaf, M.; Marohn, M.R. Surgeon’s physical discomfort and symptoms during robotic surgery: A comprehensive ergonomic survey study. Surg. Endosc. 2017, 31, 1697–1706. [Google Scholar] [CrossRef] [PubMed]
- Kaplan, J.R.; Lee, Z.; Eun, D.D.; Reese, A.C. Complications of Minimally invasive surgery and their management. Curr. Urol. Rep. 2016, 17, 47. [Google Scholar] [CrossRef]
- Subramonian, K.; DeSylva, S.; Bishai, P.; Thompson, P.; Muir, G. Acquiring surgical skills: A comparative study of open vs. laparoscopic surgery. Eur. Urol. 2004, 45, 346–351. [Google Scholar] [CrossRef]
- Atesok, K.; Satava, R.M.; Marsh, J.L.; Hurwitz, S.R. Measuring surgical skills in simulation-based training. J. Am. Acad. Orthop. Surg. 2017, 25, 665–672. [Google Scholar] [CrossRef]
- Diego-Mas, J.A.; Poveda-Bautista, R.; Garzon-Leal, D.C. Influences on the use of observational methods by practitioners when identifying risk factors in physical work. Ergonomics 2015, 58, 1660–1670. [Google Scholar] [CrossRef]
- Pérez-Salazar, M.J.; Caballero, D.; Sánchez-Margallo, J.A.; Sánchez-Margallo, F.M. Comparative study of ergonomics in conventional and robotic-assisted laparoscopic surgery. Sensors 2024, 24, 3840. [Google Scholar] [CrossRef]
- Shedage, S.; Farmer, J.; Demirel, D.; Halic, T.; Kockara, S.; Arikatla, V.; Sexton, K.; Ahmadi, S. Development of virtual skill trainers and their validation study analysis using machine learning. In Proceedings of the International Conference on Information System and Data Mining (ICISDM 21), Silicon Valley, CA, USA, 27–29 May 2021. [Google Scholar]
- Ávila-Tomás, J.F.; Mayer-Pujadas, M.A.; Quesada-Varela, V.J. La inteligencia artificial y sus aplicaciones en medicina I: Introducción y antecedentes a la IA y robótica. Aten. Primaria 2020, 52, 778–784. [Google Scholar] [CrossRef]
- Lundberg, S.M.; Erion, G.; Chen, H.; DeGrave, A.; Prutkin, J.M.; Nair, B.; Katz, R.; Himmelfarb, J.; Bansal, N.; Lee, S.-I. From local explanations to global understanding with explainable AI for trees. Nat. Mach. Intell. 2020, 2, 56–67. [Google Scholar] [CrossRef]
- Czimmermann, T.; Ciuti, G.; Milazzo, M.; Chiurazzi, M.; Roccella, S.; Oddo, C.M.; Dario, P. Visual-based defect detection and classification approaches for industrial applications—A survey. Sensors 2020, 20, 1459. [Google Scholar] [CrossRef] [PubMed]
- Janiesch, C.; Zschech, P.; Heinrich, K. Machine learning and deep learning. Electron. Mark. 2021, 31, 685–695. [Google Scholar] [CrossRef]
- Lia, H.; Atkinson, A.G.; Navarro, S.M. Cross-industry thematic analysis of generative AI best practices: Applications and implications for surgical education and training. J. Surg. Educ. 2024, 3, 61. [Google Scholar] [CrossRef]
- Caballero, D.; Pérez-Palacios, T.; Caro, A.; Antequera, T. Use of magnetic resonance imaging to analyse meat and meat products non-destructively. Food Rev. Int. 2023, 39, 424–440. [Google Scholar] [CrossRef]
- Molano, R.; Caballero, D.; Rodríguez, P.G.; Ávila, M.M.; Torres, J.P.; Durán, M.L.; Sancho, J.C.; Caro, A. Finding the largest volume parallelepipedon of arbitrary orientation in a solid. IEEE Access 2021, 9, 103600–103609. [Google Scholar] [CrossRef]
- Hendawy, M.; Ghoz, L. A starting framework for urban AI applications. Ain Shams Eng. J. 2024, 15, 102987. [Google Scholar] [CrossRef]
- Jurado, R.D.-A.; Ye, X.; Plaza, V.O.; Suarez, M.Z.; Moreno, F.P.; Valdes, R.M.A. An introduction to the current state of standardization and certification on military AI applications. J. Air Transp. Manag. 2024, 121, 102685. [Google Scholar] [CrossRef]
- Ahmad, A.; Bande, L.; Ahmed, W.; Young, K.; Jha, M. AI applications in architecture in UAE: Application of an advanced optimized shading structure as a retrofit strategy of a midrise residential building façade in downtown Abu Dhabi. Energy Build. 2024, 325, 114995. [Google Scholar] [CrossRef]
- Hung, A.J. Can machine learning algorithms replace the conventional statistics? BJU Int. 2019, 123, 1. [Google Scholar] [CrossRef]
- Chen, J.; Remulla, D.; Nguyen, J.H.; Dua, A.; Liu, Y.; Dagupta, P.; Hung, A.J. Current status of artificial intelligence applications in urology and their potential to influence clinical practice. BJU Int. 2019, 124, 567–577. [Google Scholar] [CrossRef]
- Ribeiro, M.T.; Singh, S.; Guestrin, C. Why should I trust you?: Explaining the predictions of any classifier. In Proceedings of the International Conference on Knowledge Discovery and Data Mining (ACM SIGKDD 16), San Francisco, CA, USA, 13–17 August 2016. [Google Scholar]
- Moglia, A.; Georgiou, K.; Georgiou, E.; Satava, R.M.; Cuschieri, A. A systematic review on artificial intelligence in robot-assisted surgery. Int. J. Surg. 2021, 95, 106151. [Google Scholar] [CrossRef] [PubMed]
- Rimmer, L.; Howard, C.; Picca, L.; Bashir, M. The automaton as a surgeon: The future of artificial intelligence in emergency and general surgery. Eur. J. Trauma Emerg. Surg. 2021, 47, 757–762. [Google Scholar] [CrossRef] [PubMed]
- Chang, T.C.; Seufert, C.; Eminaga, O.; Shkolyar, E.; Hu, J.C.; Liao, J.C. Current trends in artificial intelligence application for endourology and robotic surgery. Urol. Clin. N. Am. 2021, 48, 151–160. [Google Scholar] [CrossRef] [PubMed]
- Pakkasjärvi, N.; Luthra, T.; Anand, S. Artificial Intelligence in Surgical Learning. Surgeries 2023, 4, 86–97. [Google Scholar] [CrossRef]
- Nassani, L.M.; Javed, K.; Amer, R.S.; Pun, M.H.J.; Abdelkarim, A.Z.; Fernandes, G.V.O. Technology readiness level of robotic technology and artificial intelligence in dentistry: A comprehensive review. Surgeries 2024, 5, 273–287. [Google Scholar] [CrossRef]
- Ma, R.; Vanstrum, E.B.; Lee, R.; Chen, J.; Hung, A.J. Machine learning in the optimization of robotics in the operative field. Curr. Opin. Urol. 2020, 30, 808–816. [Google Scholar] [CrossRef]
- Andras, I.; Mazzone, E.; Van Leeuwen, F.W.B.; De Naeyer, G.; Van Oosterom, M.N.; Beato, S.; Buckle, T.; O’Sullivan, S.; Van Leeuwen, P.J.; Beulens, A.; et al. Artificial intelligence and robotics: A combination that is changing the operation room. World J. Urol. 2020, 38, 2359–2366. [Google Scholar] [CrossRef]
- Tricco, A.C.; Lillie, E.; Zarin, W.; O’Brien, K.K.; Colquhoun, H.; Levac, D.; Moher, D.; Peters, M.D.; Horsley, T.; Weeks, L.; et al. PRISMA extension for Scoping Reviews (PRISMA-ScR): Checklist and explanation. Ann. Intern. Med. 2018, 169, 467–473. [Google Scholar] [CrossRef]
- Cooke, A.; Smith, D.; Booth, A. Beyond PICO: The SPIDER tool for qualitative evidence synthesis. Qual. Health Res. 2012, 22, 1435–1443. [Google Scholar] [CrossRef]
- Freund, Y.; Schapire, R.E. A decision theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci. 1997, 55, 119–139. [Google Scholar]
- Kingma, D.P.; Welling, M. An introduction to variational autoencoders. Found. Trends Mach. Learn. 2019, 12, 307–392. [Google Scholar] [CrossRef]
- Forgy, E.W. Cluster analysis of multivariate data: Efficiency versus interpretability of classifications. Biometrics 1965, 21, 768–769. [Google Scholar]
- Nirthika, R.; Manivannan, S.; Ramanan, A.; Wang, R. Pooling in convolutional neural networks for medical image analysis: A survey and an empirical study. Neural Comput. Appl. 2022, 34, 5321–5347. [Google Scholar] [CrossRef] [PubMed]
- Safavian, R.; Landgrebe, D. A survey of decision tree classifier methodology. IEEE Trans. Syst. Man. Cybern. 1991, 21, 660–674. [Google Scholar] [CrossRef]
- Barrow, H.G.; Tenenbaum, J.M. Interpreting line drawings as three-dimensional surfaces. Artif. Intell. 1981, 17, 75–116. [Google Scholar] [CrossRef]
- Larsen, P.M. Industrial applications of fuzzy logic control. Int. J. Man. Mach. Stud. 1980, 12, 3–10. [Google Scholar] [CrossRef]
- Fukami, K.; Fukagata, K.; Taira, K. Assessment of supervised machine learning methods for fluid flows. Theor. Comput. Fluid Dyn. 2020, 34, 497–519. [Google Scholar] [CrossRef]
- Samworth, R.J. Optimal weighted nearest neighbour classifiers. Ann. Stat. 2012, 40, 2733–2763. [Google Scholar] [CrossRef]
- Manning, C.D. Human language understanding and reasoning. Daedalus 2022, 151, 127–138. [Google Scholar] [CrossRef]
- Fayyad, U.; Piatetsky-Shapiro, G.; Smyth, P. From data mining to knowledge discovery in databases. Am. Assoc. Artif. Intell. 1996, 17, 37–54. [Google Scholar]
- Nigam, K.; McCallum, A.; Thrun, S.; Mitchell, T. Learning to classify text from labeled and unlabeled documents using EM. Mach. Learn. 2000, 39, 103–134. [Google Scholar] [CrossRef]
- Bro, R. Multiway calibration. Multilinear PLS. J. Chemom. 1996, 10, 47–61. [Google Scholar] [CrossRef]
- Bro, R.; Smilde, A.K. Principal Component Analysis. Anal. Methods 2014, 6, 2812–2831. [Google Scholar] [CrossRef]
- Vapnik, V.N.; Chervonenkis, A.Y. On the uniform convergence of relative frequencies of events to their probabilities. Theory Probab. Appl. 1971, 16, 264–280. [Google Scholar] [CrossRef]
- Alonso-Silverio, G.A.; Pérez-Escamirosa, F.; Bruno-Sanchez, R.; Ortiz-Simon, J.L.; Muñoz-Guerrero, R.; Minor-Martinez, A.; Alarcón-Paredes, A. Development of a laparoscopic box trainer based on open-source hardware and artificial intelligence for objective assessment of surgical psychomotor skills. Surg. Innov. 2018, 25, 380–388. [Google Scholar] [CrossRef]
- Azimi, E.; Molina, C.; Chang, A.; Huang, J.; Huang, C.-M.; Kazanzides, P. Interactive training and operation ecosystem for surgical tasks in mixed reality. In Proceedings of the International Workshop on Computer-Assisted and Robotic Endoscopy (CARE 2018), Granada, Spain, 16–20 September 2018. [Google Scholar]
- Dubin, A.K.; Julian, D.; Tanaka, A.; Mattingly, P.; Smith, R. A model for predicting the GEARS score from virtual reality surgical simulator metrics. Surg. Endosc. 2018, 32, 3576–3581. [Google Scholar] [CrossRef]
- Fard, M.J.; Ameri, S.; Ellis, R.D.; Chinnam, R.B.; Pandya, A.K.; Klein, M.D. Automated robot-assisted surgical skill evaluation: Predictive analysis approach. Int. J. Med. Robot. 2018, 14, e1850. [Google Scholar] [CrossRef]
- Wang, Z.; Fey, A.M. Deep learning with convolutional neural network for objective skill evaluation in robot-assisted surgery. Int. J. Comput. Assist. Radiol. Surg. 2018, 13, 1959–1970. [Google Scholar] [CrossRef]
- Zia, A.; Essa, I. Automated surgical skill assessment in RMIS training. Int. J. Comput. Assist. Radiol. Surg. 2018, 13, 731–739. [Google Scholar] [CrossRef]
- Ershad, M.; Rege, R.; Fey, A.M. Automatic and near real-time stylistic behavior assessment in robotic surgery. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 635–643. [Google Scholar] [CrossRef]
- Fawaz, H.I.; Forestier, G.; Weber, J.; Idoumghar, L.; Muller, P.A. Accurate and interpretable evaluation of surgical skills from kinematic data using fully convolutional neural network. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 1611–1617. [Google Scholar] [CrossRef] [PubMed]
- Funke, I.; Mees, S.T.; Weitz, J.; Speidel, S. Video-based surgical skill assessment using 3D convolutional neural networks. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 1217–1225. [Google Scholar] [CrossRef] [PubMed]
- Holden, M.S.; Xia, S.; Lia, H.; Keri, Z.; Bell, C.; Patterson, L.; Ungi, T.; Fichtinger, G. Machine learning methods for automated technical skills assessment with instructional feedback in ultrasound guided interventions. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 1993–2003. [Google Scholar] [CrossRef] [PubMed]
- Tan, X.; Chang, C.-B.; Su, Y.; Lim, K.-B.; Chui, C.-K. Robot-Assisted Training in Laparoscopy Using Deep Reinforcement Learning. IEEE Robot. Autom. Lett. 2019, 4, 485–492. [Google Scholar] [CrossRef]
- Khalid, S.; Goldenberg, M.; Grantcharov, T.; Taati, B.; Rudzicz, F. Evaluation of deep learning models for identifying surgical actions and measuring performance. JAMA Netw. Open 2020, 3, e201664. [Google Scholar] [CrossRef]
- Wu, C.; Cha, J.; Sulek, J.; Zhou, T.; Sundaram, C.P.; Wachs, J.; Yu, D. Eye-tracking metrics predict perceived workload in robotic surgical skills training. Hum. Factors 2020, 62, 1365–1386. [Google Scholar] [CrossRef]
- Zhang, D.; Wu, Z.; Chen, J.; Gao, A.; Chen, X.; Li, P. Automatic microsurgical skill assessment based on cross-domain transfer learning. IEEE Robot. Autom. Lett. 2020, 5, 4148–4155. [Google Scholar] [CrossRef]
- Reich, A.; Mirchi, N.; Yilmaz, R.; Ledwos, N.; Bissonnette, V.; Tan, D.H.; Winkler-Schwartz, A.; Karlik, B.; Del Maestro, R.F. Artificial neural network approach to competency-based training using a virtual reality neurosurgical simulation. Oper. Neurosurg. 2022, 23, 31–39. [Google Scholar] [CrossRef]
- Gillani, M.; Rupji, M.; Olson, T.J.P.; Balch, G.C.; Shields, M.C.; Liu, Y.; Rosen, S.A. Objective performance indicators during specific steps of robotic right colectomy can differentiate surgeon expertise. Surgery 2024, 176, 1036–1043. [Google Scholar] [CrossRef]
- Gillani, M.; Rupji, M.; Olson, T.J.P.; Sullivan, P.; Shaffer, V.; Balch, G.C.; Shields, M.C.; Liu, Y.; Rosen, S.A. Objective Performance Indicators During Robotic Right Colectomy Differ According to Surgeon Skill. J. Surg. Res. 2024, 302, 836–844. [Google Scholar] [CrossRef]
- Ahmidi, N.; Tao, L.; Sefati, S.; Gao, Y.; Lea, C.; Haro, B.B.; Zappella, L.; Khudanpur, S.; Vidal, R.; Hager, G.D. A dataset and benchmarks for segmentation and recognition of gestures in robotic surgery. IEEE Trans. Biomed. Eng. 2017, 64, 2025–2041. [Google Scholar] [CrossRef] [PubMed]
- Nosrati, M.S.; Amir-Khalili, A.; Peyrat, J.-M.; Abinahed, J.; Al-Alao, O.; Al-Ansari, A.; Abugharbieh, R.; Hamarneh, G. Endoscopic scene labelling and augmentation using intraoperative pulsatile motion and colour appearance cues with preoperative. Int. J. Comput. Assist. Radiol. Surg. 2016, 11, 1409–1418. [Google Scholar] [CrossRef] [PubMed]
- Krishnan, S.; Garg, A.; Patil, S.; Lea, C.; Hager, G.; Abbeel, P.; Goldberg, K. Transition state clustering: Unsupervised surgical trajectory segmentation for robot learning. Int. J. Robot. Res. 2017, 36, 1595–1618. [Google Scholar] [CrossRef]
- Sarikaya, D.; Corso, J.J.; Guru, K.A. Detection and localization of robotic tools in robot-assisted surgery videos using deep neural networks for region proposal and detection. IEEE Trans. Med. Imag. 2017, 36, 1542–1549. [Google Scholar] [CrossRef] [PubMed]
- Zia, A.; Zhang, C.; Xiong, X.; Jarc, A.M. Temporal clustering of surgical activities in robot-assisted surgery. Int. J. Comput. Assist. Radiol. Surg. 2017, 12, 1171–1178. [Google Scholar] [CrossRef]
- Ross, T.; Zimmerer, D.; Vemuri, A.; Isensee, F.; Wiesenfarth, M.; Bodenstedt, S.; Both, F.; Kessler, P.; Wagner, M.; Muller, B.; et al. Exploiting the potential of unlabeled endoscopic video data with self-supervised learning. Int. J. Comput. Assist. Radiol. Surg. 2018, 13, 925–933. [Google Scholar] [CrossRef]
- Shafiei, S.B.; Hussein, A.A.; Muldoon, S.F.; Guru, K.A. Functional brain states measure mentor-trainee trust during robot-assisted surgery. Sci. Rep. 2018, 8, 3667. [Google Scholar]
- Colleoni, E.; Moccia, S.; Du, X.; De Momi, E.; Stoyanov, D. Deep learning based robotic tool detection and articulation estimation with spatio-temporal layers. IEEE Robot. Autom. Lett. 2019, 4, 2714–2721. [Google Scholar] [CrossRef]
- Engelhardt, S.; Sharan, L.; Karck, M.; De Simone, R.; Wolf, I. Cross-Domain Conditional Generative Adversarial Networks for stereoscopic hyperrealism in surgical training. In Proceedings of the Medical Imaging Computing and Computer Assisted Intervention (MICCAI 2019), Shenzhen, China, 13–17 October 2019. [Google Scholar]
- Islam, M.; Atputharuban, D.A.; Ramesh, R.; Ren, H. Real-time instrument segmentation in robotic surgery using auxiliary supervised deep adversarial learning. IEEE Robot. Autom. Lett. 2019, 4, 2188–2195. [Google Scholar] [CrossRef]
- Attanasio, A.; Scaglioni, B.; Leonetti, M.; Frangi, A.F.; Cross, W.; Biyani, C.S. Autonomous tissue retraction in robotic assisted minimally invasive surgery: A feasibility study. IEEE Robot. Autom. Lett. 2020, 5, 6528–6535. [Google Scholar] [CrossRef]
- Liu, Y.; Zhao, Z.; Chang, F.; Hu, S. An anchor-free convolutional neural network for real-time surgical tool detection in Robot-assisted surgery. IEEE Access 2020, 8, 78193–78201. [Google Scholar] [CrossRef]
- De Boer, C.; Ghomrawi, H.; Many, B.; Bouchard, M.E.; Linton, S.; Figueroa, A.; Kwon, S.; Abdullah, F. Utility of Wearable Sensors to Assess Postoperative Recovery in Pediatric Patients After Appendectomy. J. Surg. Res. 2021, 263, 160–166. [Google Scholar] [CrossRef] [PubMed]
- Chen, A.B.; Liang, S.; Nguyen, J.H.; Liu, Y.; Hung, A.J. Machine learning analysis of automated performance metrics during granular sub-stitch phases predict surgeon experience. Surgery 2021, 169, 1245–1249. [Google Scholar] [CrossRef] [PubMed]
- Ayuso, S.A.; Elhage, S.A.; Zhang, Y.; Aladegbami, B.G.; Gersin, K.S.; Fischer, J.P.; Augenstein, V.A.; Colavita, P.D.; Heniford, B.T. Predicting rare outcomes in abdominal wall reconstruction using image-based deep learning models. Surgery 2023, 173, 748–755. [Google Scholar] [CrossRef]
- Moglia, A.; Morelli, L.; D’Ischia, R.; Fatucchi, L.M.; Pucci, V.; Berchiolli, R.; Ferrari, M.; Cuschieri, A. Ensemble deep learning for the prediction of proficiency at a virtual simulator for robot-assisted surgery. Surg. Endosc. 2022, 36, 6473–6479. [Google Scholar] [CrossRef]
- Mohamadipanah, H.; Kearse, L.; Wise, B.; Backhus, L.; Pugh, C. Generating Rare Surgical Events Using CycleGAN: Addressing Lack of Data for Artificial Intelligence Event Recognition. J. Surg. Res. 2023, 283, 594–605. [Google Scholar] [CrossRef]
- Caballero, D.; Pérez-Salazar, M.J.; Sánchez-Margallo, J.A.; Sánchez-Margallo, F.M. Applying artificial intelligence on EDA sensor data to predict stress on minimally invasive robotic-assisted surgery. Int. J. Comput. Assist. Radiol. Surg. 2024, 19, 1953–1963. [Google Scholar] [CrossRef]
- Gruter, A.A.J.; Torrenvliet, B.R.; Tanis, P.J.; Tuynman, J.B. Video-based surgical quality assessment of minimally invasive right hemicolectomy by medical students after specific training. Surgery 2024, 30, 108951. [Google Scholar] [CrossRef]
- Orgiu, A.; Karkazan, B.; Connell, S.; Dechaumet, L.; Bennani, Y.; Gregory, T. Enhancing wrist arthroscopy: Artificial intelligence applications for bone structure recognition using machine learning. Hand Surg. Rehabil. 2024, 43, 101717. [Google Scholar] [CrossRef]
- Pérez-Salazar, M.J.; Caballero, D.; Sánchez-Margallo, J.A.; Sánchez-Margallo, F.M. Correlation study and predictive modelling of ergonomic parameters in robotic assisted laparoscopic surgery. Sensors 2024, 24, 7721. [Google Scholar] [CrossRef]
- Malpani, A.; Lea, C.; Chen, C.C.G.; Hager, G.D. System events: Readily accessible features for surgical phase detection. Int. J. Comput. Assist. Radiol. Surg. 2016, 11, 1201–1209. [Google Scholar] [CrossRef] [PubMed]
- Hung, A.J.; Chen, J.; Gill, I.S. Automated performance metrics and machine learning algorithms to measure surgeon performance and anticipate clinical outcomes in robotic surgery. JAMA Surg. 2018, 153, 770–771. [Google Scholar] [CrossRef] [PubMed]
- Baghdadi, A.; Hussein, A.A.; Ahmed, Y.; Cavuoto, L.A.; Guru, K.A. A computer vision technique for automated assessment of surgical performance using surgeons’ console-feed videos. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 697–707. [Google Scholar] [CrossRef] [PubMed]
- Nakawala, H.; Bianchi, R.; Pescatori, L.E.; De Cobelli, O.; Ferrigno, G.; De Momi, E. Deep onto network for surgical workflow and context recognition. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 685–696. [Google Scholar] [CrossRef]
- Wong, N.C.; Lam, C.; Patterson, L.; Shayegan, B. Use of machine learning to predict early biochemical recurrence after robot-assisted prostatectomy. BJU Int. 2019, 123, 51–57. [Google Scholar] [CrossRef]
- Zhao, B.; Waterman, R.S.; Urman, R.D.; Gabriel, R.A. A machine learning approach to predicting case duration for robot-assisted surgery. J. Med. Syst. 2019, 43, 32. [Google Scholar] [CrossRef]
- Zia, A.; Guo, L.; Zhou, L.; Essa, I.; Jarc, A. Novel evaluation of surgical activity recognition models using task-based efficiency metrics. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 2155–2163. [Google Scholar] [CrossRef]
- Luongo, F.; Hakim, R.; Nguyen, J.H.; Anandkumar, A.; Hung, A.J. Deep learning based computer vision to recognize and classify suturing gestures in robot-assisted surgery. Surgery 2021, 169, 1240–1244. [Google Scholar] [CrossRef]
- Sumitomo, M.; Teramoto, A.; Toda, R.; Fukami, N.; Fukaya, K.; Zennami, K.; Ichino, M.; Takahara, K.; Kusaka, M.; Shiroki, R. Deep learning using preoperative magnetic resonance imaging information to predict early recovery of urinary continence after robot-assisted radical prostatectomy. Int. J. Urol. 2020, 27, 922–928. [Google Scholar] [CrossRef]
- Wu, C.; Cha, J.; Sulek, J.; Sundaram, C.P.; Wachs, J.; Proctor, R.W.; Yu, D. Sensor-based indicators of performance changes between sessions during robotic surgery training. Appl. Ergon. 2021, 90, 103251. [Google Scholar] [CrossRef]
- Li, L.T.; Sinkler, M.A.; Adelstein, J.M.; Voos, J.E.; Calcei, J.G. ChatGPT Responses to Common Questions About Anterior Cruciate Ligament Reconstruction Are Frequently Satisfactory. Arthroscopy 2024, 40, 2058–2066. [Google Scholar] [CrossRef] [PubMed]
- Salazar, L.; Sánchez-Varo, I.; Caballero, D.; Iribar-Zabala, A.; Bertelsen-Simonetti, A.; Sánchez-Margallo, J.A.; Sánchez-Margallo, F.M. System for assistance in ultrasound guided percutaneous hepatic interventions using augmented reality: First steps. Healthc. Technol. Lett. 2024. [Google Scholar] [CrossRef]
- Despinoy, F.; Bouget, D.; Forestier, G.; Penet, C.; Zemiti, N.; Poignet, P.; Jannin, P. Unsupervised trajectory segmentation for surgical gesture recognition in robotic training. IEEE Trans. Biomed. Eng. 2016, 63, 1280–1291. [Google Scholar] [CrossRef] [PubMed]
- Fard, M.J.; Ameri, S.; Chinnam, R.B.; Ellis, R.D. Soft Boundary Approach for Unsupervised Gesture Segmentation in Robotic-Assisted Surgery. IEEE Robot. Autom. Lett. 2017, 2, 171–178. [Google Scholar] [CrossRef]
- DiPietro, R.; Ahmidi, N.; Malpani, A.; Waldram, M.; Lee, G.I.; Lee, M.R.; Vedula, S.S.; Hager, G.D. Segmenting and classifying activities in robot-assisted surgery with recurrent neural networks. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 2005–2020. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).