Surgical Hand Gesture Recognition Utilizing Electroencephalogram as Input to the Machine Learning and Network Neuroscience Algorithms
Abstract
1. Introduction
1.1. Importance of Gesture Detection in Robot-Assisted Surgery (RAS)
1.2. Literature Review of Gesture Detection in RAS Application
1.3. Strengths and Shortcomings of the Existing Methods of Gesture Detection in RAS Application
1.4. The Purpose of This Study
1.5. Contribution of This Study
2. Materials and Methods
2.1. Data Recording Setup
2.2. Gestures Extraction
- Bipolar Cautery: The surgeon uses a bipolar tool to control a bleeding event by cauterization. During cauterization, a high-frequency electrical current passes through the tissue from one electrode to the other.
- Monopolar Cautery: The surgeon uses a regular monopolar tool to control bleeding by cauterization.
- Blunt Dissection: The surgeon separates tissue planes by pushing them apart, rather than cutting or cauterizing them.
- Tissue Grasping: The surgeon grasps tissue.
- Retraction: The surgeon retracts structures by holding them aside to improve visibility of the operative field.
- Suturing: The surgeon uses surgical sutures to hold body tissues together. Sutures (or stitches) are typically applied using a needle with an attached piece of thread and are secured with surgical knots.
- Needle Insertion: The surgeon inserts the needle at the entry site on the tissue surface using downward pressure and applies a twisting motion until resistance decreases as the needle passes through the surface and the suture begins to traverse the tissue.
- Surgical Thread Grasping: The surgeon grasps the surgical thread.
- Idle: The surgeon is not performing any action, for example while waiting for a response from the surgical team or thinking before deciding on the next step.
2.3. Electroencephalogram (EEG) Data Analyses
2.4. Types of Feature and Functional Brain Network Analyses
- Brain network: Our approach was to consider EEG channels as nodes (i = 1, …, N, where N = 119) of the functional brain network and the strength of functional connectivity (FC) between pairs of brain regions (EEG channels) as the links between nodes. Functional connectivity was estimated as the coherence of the associated EEG channels’ time series and can be considered a measure of communication between brain regions [32,33]. The functional brain network is then a square matrix whose elements (i,j) represent the magnitude of FC between pairs of brain areas. The result is a weighted connectivity matrix (Γ) whose entries represent the connection weight between different brain areas i and j (EEG channels) and which is specific to each individual. Since this network fluctuates over different timescales, it is possible to study time-varying properties of the FC network and relate these properties to the associated external or internal causes/stimulation. Community structure, the decomposition of a network into densely interconnected sub-networks or “communities”, is one method for analyzing dynamic features of FC networks [34,35,36]. Communities represent brain regions that preferentially communicate with each other while communicating only weakly with the rest of the brain [37,38,39].
- Brain functional modules: The adjacency matrix (Γij) of each 1-s window within the recording was used in a categorical multilayer community detection algorithm [40]. A Louvain-like, locally “greedy” algorithm was used to optimize modularity in the multilayer network [41,42]. Since the community detection algorithm is non-deterministic [43], and because of near-degeneracies in modularity optimization [44], the output typically varies from one run to another. Optimization of the multilayer modularity was therefore repeated 100 times within an iterative consensus algorithm [43], to identify a single representative partition from the set of partitions based on statistical testing against the ‘Newman–Girvan’ null network [42]. The output of categorical multilayer community detection is a partition matrix (A), representing the functional community to which each channel was assigned [40].
- Module-allegiance matrix: The module-allegiance matrix (MAM) was derived from the functional community to which each channel was assigned. MAM elements represent the probability that a pair of brain areas is assigned to the same functional community during task processing. This matrix was used to extract the dynamic brain features of network flexibility, integration, and recruitment.
- Adjacency matrices (Γ; extracted by coherence analysis applied to EEG data): search information, strength, transitivity, mean pairwise diffusion efficiency, global efficiency, and mean global diffusion efficiency features were extracted by using Γ.
- Module-allegiance matrix (MAM; extracted by applying multi-layer community detection techniques to adjacency matrices): integration and recruitment features were extracted by using MAM.
- Partition matrices (A; extracted by applying community detection techniques to adjacency matrices): regional network flexibility feature was extracted using A.
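The first step above — pairwise coherence between EEG channels assembled into a weighted adjacency matrix Γ for each 1-s window — can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the sampling rate, frequency band, and `nperseg` value are assumptions, and the toy signals stand in for the 119-channel recordings.

```python
import numpy as np
from scipy.signal import coherence

def coherence_adjacency(eeg, fs=250.0, band=(8.0, 13.0)):
    """Weighted adjacency matrix (Gamma) from one EEG window.

    eeg  : array of shape (n_channels, n_samples), one row per channel
    fs   : sampling rate in Hz (assumed value)
    band : frequency band over which coherence is averaged (assumed)
    """
    n = eeg.shape[0]
    gamma = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # magnitude-squared coherence spectrum between channels i and j
            f, cxy = coherence(eeg[i], eeg[j], fs=fs,
                               nperseg=min(128, eeg.shape[1]))
            mask = (f >= band[0]) & (f <= band[1])
            # band-averaged coherence as the (symmetric) edge weight
            gamma[i, j] = gamma[j, i] = cxy[mask].mean()
    return gamma

# toy example: 4 channels, 1 s at 250 Hz; channels 0 and 1 share a 10 Hz rhythm
rng = np.random.default_rng(0)
t = np.arange(250) / 250.0
shared = np.sin(2 * np.pi * 10 * t)
eeg = rng.normal(size=(4, 250)) + np.vstack([shared, shared, 0 * t, 0 * t])
gamma = coherence_adjacency(eeg)
print(gamma)
```

Graph features such as strength or global efficiency would then be computed on `gamma`, e.g. with a network toolbox.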
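Likewise, given a partition matrix A (one community label per channel per window), the module-allegiance matrix and regional flexibility described above reduce to simple counting. This hedged sketch assumes A is stored with one row per 1-s window; it is not the authors' code.

```python
import numpy as np

def module_allegiance(partitions):
    """MAM[i, j] = fraction of windows in which channels i, j share a community.

    partitions : array of shape (n_windows, n_channels); entry (t, i) is the
    community label of channel i in window t.
    """
    n_windows = partitions.shape[0]
    mam = np.zeros((partitions.shape[1],) * 2)
    for labels in partitions:
        # boolean matrix: True where two channels have the same label
        mam += labels[:, None] == labels[None, :]
    return mam / n_windows

def flexibility(partitions):
    """Fraction of consecutive windows in which a channel changes community."""
    changes = partitions[1:] != partitions[:-1]
    return changes.mean(axis=0)

# toy example: 3 windows, 4 channels
A = np.array([[0, 0, 1, 1],
              [0, 0, 1, 1],
              [0, 1, 1, 1]])
print(module_allegiance(A))  # channels 2 and 3 are always together -> 1.0
print(flexibility(A))        # only channel 1 switches community
```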
2.5. Definition of Extracted Features
2.5.1. Brain Regional (Node) Network Flexibility
2.5.2. Integration
2.5.3. Recruitment
2.5.4. Search Information
2.5.5. Strength
2.5.6. Mean Pairwise Diffusion Efficiency of Cortices
2.5.7. Transitivity
2.5.8. Global Efficiency
2.5.9. Power
2.6. Machine Learning Algorithms
- KNN: KNN is a simple technique: the entire training dataset is stored, and when a prediction is required, the k records most similar to the new record are located in the training dataset. A summary prediction is then made from these neighbors, either by returning the most common outcome or by averaging; similarity between records can be measured in many ways. KNN parameters were: number of neighbors, 5; prediction weight function, ‘uniform’.
- Bagging: Bagging involves taking multiple samples from the training dataset (with replacement) and training a model on each sample. The final output prediction is averaged across the predictions of all the sub-models. The three bagging models are BAG, RF, and ET. Bagging performs best with algorithms that have high variance.
- RF: Random forest is an extension of bagged decision trees. Samples of the training dataset are taken with replacement, but the trees are constructed in a way that reduces the correlation between individual classifiers. Specifically, rather than greedily choosing the best split point in the construction of a tree, only a random subset of features is considered for each split.
- ET: Extra trees are another modification of bagging in which random trees are constructed from samples of the training dataset. The ET algorithm creates a large number of unpruned decision trees from the training dataset, and classification predictions are made by majority voting.
- RF uses bootstrap replicas: it subsamples the input data with replacement, whereas ET uses the whole original sample.
- Selection of cut points for splitting nodes: RF searches for the optimal split, whereas ET selects split points at random. After the candidate split points are generated, however, both algorithms choose the best split among the considered subset of features; thus, ET adds randomization while retaining some optimization.
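The four classifiers above can be sketched in scikit-learn with the stated KNN settings (5 neighbors, uniform weights). The synthetic data, estimator counts, and cross-validation setup are illustrative assumptions standing in for the EEG-derived feature vectors, not the study's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, ExtraTreesClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# synthetic stand-in for the per-window feature vectors (3 gesture classes)
X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5, weights="uniform"),
    "BAG": BaggingClassifier(n_estimators=50, random_state=0),
    # RF: bootstrap samples + random feature subset per split
    "RF": RandomForestClassifier(n_estimators=50, random_state=0),
    # ET: whole sample (bootstrap=False by default) + random split points
    "ET": ExtraTreesClassifier(n_estimators=50, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy {scores.mean():.3f}")
```

Note that scikit-learn's `ExtraTreesClassifier` defaults to `bootstrap=False`, matching the RF/ET distinction described above.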
2.7. Feature Selection Method
2.8. Measures of Classification Method’s Performance
3. Results
4. Discussion
4.1. Proposed Method and Implications
4.2. Strength of Proposed Method
4.3. Limitations and Shortcomings of the Proposed Method
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Acknowledgments
Conflicts of Interest
References
- Reza, M.; Maeso, S.; Blasco, J.; Andradas, E. Meta-analysis of observational studies on the safety and effectiveness of robotic gynaecological surgery. Br. J. Surg. 2010, 97, 1772–1783.
- Anderson, C.I.; Gupta, R.N.; Larson, J.R.; Abubars, O.I.; Kwiecien, A.J.; Lake, A.D.; Hozain, A.E.; Tanious, A.; O’Brien, T.; Basson, M.D. Impact of objectively assessing surgeons’ teaching on effective perioperative instructional behaviors. JAMA Surg. 2013, 148, 915–922.
- Bridgewater, B.; Grayson, A.D.; Jackson, M.; Brooks, N.; Grotte, G.J.; Keenan, D.J.; Millner, R.; Fabri, B.M.; Mark, J. Surgeon specific mortality in adult cardiac surgery: Comparison between crude and risk stratified data. BMJ 2003, 327, 13–17.
- Reiley, C.E.; Lin, H.C.; Yuh, D.D.; Hager, G.D. Review of methods for objective surgical skill evaluation. Surg. Endosc. 2011, 25, 356–366.
- Datta, V.; Mackay, S.; Mandalia, M.; Darzi, A. The use of electromagnetic motion tracking analysis to objectively measure open surgical skill in the laboratory-based model. J. Am. Coll. Surg. 2001, 193, 479–485.
- Judkins, T.N.; Oleynikov, D.; Stergiou, N. Objective evaluation of expert and novice performance during robotic surgical training tasks. Surg. Endosc. 2009, 23, 590–597.
- Richards, C.; Rosen, J.; Hannaford, B.; Pellegrini, C.; Sinanan, M. Skills evaluation in minimally invasive surgery using force/torque signatures. Surg. Endosc. 2000, 14, 791–798.
- Yamauchi, Y.; Yamashita, J.; Morikawa, O.; Hashimoto, R.; Mochimaru, M.; Fukui, Y.; Uno, H.; Yokoyama, K. Surgical Skill Evaluation by Force Data for Endoscopic Sinus Surgery Training System. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Lima, Peru, 4–8 October 2020; pp. 44–51.
- Zappella, L.; Béjar, B.; Hager, G.; Vidal, R. Surgical gesture classification from video and kinematic data. Med. Image Anal. 2013, 17, 732–745.
- Despinoy, F.; Bouget, D.; Forestier, G.; Penet, C.; Zemiti, N.; Poignet, P.; Jannin, P. Unsupervised trajectory segmentation for surgical gesture recognition in robotic training. IEEE Trans. Biomed. Eng. 2015, 63, 1280–1291.
- Tao, L.; Zappella, L.; Hager, G.D.; Vidal, R. Surgical Gesture Segmentation and Recognition. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Nagoya, Japan, 22–26 September 2013; pp. 339–346.
- Shafiei, S.B.; Guru, K.A.; Esfahani, E.T. Using Two-Third Power Law for Segmentation of Hand Movement in Robotic Assisted Surgery. In Proceedings of the ASME 2015 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Boston, MA, USA, 2–5 August 2015.
- Shafiei, S.B.; Cavuoto, L.; Guru, K.A. Motor Skill Evaluation during Robot-Assisted Surgery. In Proceedings of the ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Cleveland, OH, USA, 6–9 August 2017.
- Wu, Z.; Li, X. A Wireless Surface EMG Acquisition and Gesture Recognition System. In Proceedings of the 2016 9th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Datong, China, 15–17 October 2016; pp. 1675–1679.
- Lee, A.-r.; Cho, Y.; Jin, S.; Kim, N. Enhancement of surgical hand gesture recognition using a capsule network for a contactless interface in the operating room. Comput. Methods Programs Biomed. 2020, 190, 105385.
- Sarikaya, D.; Guru, K.A.; Corso, J.J. Joint surgical gesture and task classification with multi-task and multimodal learning. arXiv 2018, arXiv:1805.00721.
- Marin, G.; Dominio, F.; Zanuttigh, P. Hand Gesture Recognition with Leap Motion and Kinect Devices. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 1565–1569.
- Marin, G.; Dominio, F.; Zanuttigh, P. Hand gesture recognition with jointly calibrated leap motion and depth sensor. Multimed. Tools Appl. 2016, 75, 14991–15015.
- Moorthi, M.; Senthilkumar, A.; Thangaraj, A. A Study the effect of Biofertilizer Azotobacter Chroococcum on the Growth of Mulberry Cropmorus Indica L. and the Yield of Bombyx Mori L. Int. J. Environ. Agric. Biotechnol. 2016, 1, 238607.
- DiPietro, R.; Ahmidi, N.; Malpani, A.; Waldram, M.; Lee, G.I.; Lee, M.R.; Vedula, S.S.; Hager, G.D. Segmenting and classifying activities in robot-assisted surgery with recurrent neural networks. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 2005–2020.
- Lin, H.C.; Shafran, I.; Murphy, T.E.; Okamura, A.M.; Yuh, D.D.; Hager, G.D. Automatic Detection and Segmentation of Robot-Assisted Surgical Motions. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Palm Springs, CA, USA, 26–29 October 2005; pp. 802–810.
- Gao, Y.; Vedula, S.S.; Reiley, C.E.; Ahmidi, N.; Varadarajan, B.; Lin, H.C.; Tao, L.; Zappella, L.; Béjar, B.; Yuh, D.D. Jhu-isi Gesture and Skill Assessment Working Set (Jigsaws): A Surgical Activity Dataset for Human Motion Modeling. In Modeling and Monitoring of Computer Assisted Interventions (MICCAI) Workshop: M2cai; Johns Hopkins University: Boston, MA, USA, 2014; p. 3.
- Gao, X.; Jin, Y.; Dou, Q.; Heng, P.-A. Automatic Gesture Recognition in Robot-Assisted Surgery with Reinforcement Learning and Tree Search. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 8440–8446.
- Sarikaya, D.; Jannin, P. Towards generalizable surgical activity recognition using spatial temporal graph convolutional networks. arXiv 2020, arXiv:2001.03728.
- Luongo, F.; Hakim, R.; Nguyen, J.H.; Anandkumar, A.; Hung, A.J. Deep learning-based computer vision to recognize and classify suturing gestures in robot-assisted surgery. Surgery 2020.
- Shafiei, S.B.; Elsayed, A.S.; Hussein, A.A.; Iqbal, U.; Guru, K.A. Evaluating the Mental Workload During Robot-Assisted Surgery Utilizing Network Flexibility of Human Brain. IEEE Access 2020, 8, 204012–204019.
- Hussein, A.A.; Shafiei, S.B.; Sharif, M.; Esfahani, E.; Ahmad, B.; Kozlowski, J.D.; Hashmi, Z.; Guru, K.A. Technical mentorship during robot-assisted surgery: A cognitive analysis. BJU Int. 2016, 118, 429–436.
- Shafiei, S.B.; Doyle, S.T.; Guru, K.A. Mentor’s Brain Functional Connectivity Network during Robotic Assisted Surgery Mentorship. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 1717–1720.
- Shafiei, S.B.; Esfahani, E.T. Aligning Brain Activity and Sketch in Multi-Modal CAD Interface. In Proceedings of the International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Buffalo, NY, USA, 17–20 August 2014; p. V01AT02A096.
- Luck, S.J. An Introduction to the Event-Related Potential Technique; MIT Press: Cambridge, MA, USA, 2014.
- Kayser, J.; Tenke, C.E. On the benefits of using surface Laplacian (current source density) methodology in electrophysiology. Int. J. Psychophysiol. 2015, 97, 171.
- Friston, K.J. Functional and effective connectivity: A review. Brain Connect. 2011, 1, 13–36.
- Park, H.-J.; Friston, K. Structural and functional brain networks: From connections to cognition. Science 2013, 342, 6158.
- Mattar, M.G.; Cole, M.W.; Thompson-Schill, S.L.; Bassett, D.S. A functional cartography of cognitive systems. PLoS Comput. Biol. 2015, 11, e1004533.
- Fortunato, S. Community detection in graphs. Phys. Rep. 2010, 486, 75–174.
- Newman, M.E. Communities, modules and large-scale structure in networks. Nat. Phys. 2012, 8, 25–31.
- Power, J.D.; Cohen, A.L.; Nelson, S.M.; Wig, G.S.; Barnes, K.A.; Church, J.A.; Vogel, A.C.; Laumann, T.O.; Miezin, F.M.; Schlaggar, B.L. Functional network organization of the human brain. Neuron 2011, 72, 665–678.
- Yeo, B.T.; Krienen, F.M.; Sepulcre, J.; Sabuncu, M.R.; Lashkari, D.; Hollinshead, M.; Roffman, J.L.; Smoller, J.W.; Zöllei, L.; Polimeni, J.R. The organization of the human cerebral cortex estimated by intrinsic functional connectivity. J. Neurophysiol. 2011, 106, 1125–1165.
- Sporns, O.; Betzel, R.F. Modular brain networks. Annu. Rev. Psychol. 2016, 67, 613–640.
- Shafiei, S.B.; Hussein, A.A.; Guru, K.A. Relationship between surgeon’s brain functional network reconfiguration and performance level during robot-assisted surgery. IEEE Access 2018, 6, 33472–33479.
- Blondel, V.D.; Guillaume, J.-L.; Lambiotte, R.; Lefebvre, E. Fast unfolding of communities in large networks. J. Stat. Mech. Theory Exp. 2008, 2008, P10008.
- Jutla, I.S.; Jeub, L.G.; Mucha, P.J. A Generalized Louvain Method for Community Detection Implemented in MATLAB. 2011. Available online: http://netwiki.amath.unc.edu/GenLouvain (accessed on 1 March 2021).
- Bassett, D.S.; Wymbs, N.F.; Rombach, M.P.; Porter, M.A.; Mucha, P.J.; Grafton, S.T. Task-based core-periphery organization of human brain dynamics. PLoS Comput. Biol. 2013, 9, e1003171.
- Good, B.H.; De Montjoye, Y.-A.; Clauset, A. Performance of modularity maximization in practical contexts. Phys. Rev. E 2010, 81, 046106.
- Betzel, R.F.; Satterthwaite, T.D.; Gold, J.I.; Bassett, D.S. Positive affect, surprise, and fatigue are correlates of network flexibility. Sci. Rep. 2017, 7, 1–10.
- Mattar, M.G.; Betzel, R.F.; Bassett, D.S. The flexible brain. Brain 2016, 139, 2110–2112.
- Bassett, D.S.; Wymbs, N.F.; Porter, M.A.; Mucha, P.J.; Carlson, J.M.; Grafton, S.T. Dynamic reconfiguration of human brain networks during learning. Proc. Natl. Acad. Sci. USA 2011, 108, 7641–7646.
- Newman, M.E. Modularity and community structure in networks. Proc. Natl. Acad. Sci. USA 2006, 103, 8577–8582.
- Goñi, J.; Van Den Heuvel, M.P.; Avena-Koenigsberger, A.; De Mendizabal, N.V.; Betzel, R.F.; Griffa, A.; Hagmann, P.; Corominas-Murtra, B.; Thiran, J.-P.; Sporns, O. Resting-brain functional connectivity predicted by analytic measures of network communication. Proc. Natl. Acad. Sci. USA 2014, 111, 833–838.
- Rosvall, M.; Trusina, A.; Minnhagen, P.; Sneppen, K. Networks and cities: An information perspective. Phys. Rev. Lett. 2005, 94, 028701.
- Sporns, O. Network analysis, complexity, and brain function. Complexity 2002, 8, 56–60.
- Goñi, J.; Avena-Koenigsberger, A.; Velez de Mendizabal, N.; van den Heuvel, M.P.; Betzel, R.F.; Sporns, O. Exploring the morphospace of communication efficiency in complex networks. PLoS ONE 2013, 8, e58070.
- Rubinov, M.; Sporns, O. Complex network measures of brain connectivity: Uses and interpretations. Neuroimage 2010, 52, 1059–1069.
- Latora, V.; Marchiori, M. Efficient behavior of small-world networks. Phys. Rev. Lett. 2001, 87, 198701.
- Onnela, J.-P.; Saramäki, J.; Kertész, J.; Kaski, K. Intensity and coherence of motifs in weighted complex networks. Phys. Rev. E 2005, 71, 065103.
- Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic minority over-sampling technique. J. Artif. Intell. Res. 2002, 16, 321–357.
- Duda, R.O.; Hart, P.E. Pattern Classification and Scene Analysis; Wiley: New York, NY, USA, 1973; Volume 3.
- Lin, H.C.; Shafran, I.; Yuh, D.; Hager, G.D. Towards automatic skill evaluation: Detection and segmentation of robot-assisted surgical motions. Comput. Aided Surg. 2006, 11, 220–230.
- Badalato, G.M.; Shapiro, E.; Rothberg, M.B.; Bergman, A.; RoyChoudhury, A.; Korets, R.; Patel, T.; Badani, K.K. The da Vinci robot system eliminates multispecialty surgical trainees’ hand dominance in open and robotic surgical settings. JSLS J. Soc. Laparoendosc. Surg. 2014, 18, e2014.00399.
- Choussein, S.; Srouji, S.S.; Farland, L.V.; Wietsma, A.; Missmer, S.A.; Hollis, M.; Richard, N.Y.; Pozner, C.N.; Gargiulo, A.R. Robotic assistance confers ambidexterity to laparoscopic surgeons. J. Minim. Invasive Gynecol. 2018, 25, 76–83.
- Kong, X.; Yang, M.; Li, X.; Ni, M.; Zhang, G.; Chen, J.; Chai, W. Impact of surgeon handedness in manual and robot-assisted total hip arthroplasty. J. Orthop. Surg. Res. 2020, 15, 1–8.
- Shafiei, S.B.; Hussein, A.A.; Guru, K.A. Dynamic changes of brain functional states during surgical skill acquisition. PLoS ONE 2018, 13, e0204836.
Study | Data | Classification Method | Task | # of Classes | Accuracy | Distinct Hands? |
---|---|---|---|---|---|---|
Henry C. Lin, et al. [21] | End effector motion data (JIGSAWS [22]) | linear discriminant analysis | Robot-assisted surgery tasks performed on simulator | 8 gestures | 91.52% | No |
Xiaojie Gao, et al. [23] | End effector motion data (JIGSAWS [22]) | reinforcement learning and tree search | Robot-assisted surgery tasks performed on simulator | 8 gestures | 81.67% | No |
Fabien Despinoy [10] | End effector motion data | k-nearest neighbors | pick-and-place performed using Raven-II robot | 12 gestures | 81.9% | No |
Duygu Sarikaya et al. [16] | Surgical videos (JIGSAWS [22]) | long short-term memory network (LSTM) | Robot-assisted surgery tasks performed on simulator | 14 gestures | 51% | No |
Duygu Sarikaya et al. [16] | Surgical videos (JIGSAWS [22]) | optical flow ConvNets | Robot-assisted surgery tasks performed on simulator | 3 tasks | 84.36% | No |
Duygu Sarikaya and Pierre Jannin [24] | Suturing task of the JIGSAWS [22] | spatial temporal graph convolutional networks | Robot-assisted surgery tasks performed on simulator | 10 gestures | 68% | No |
Francisco Luongo et al. [25] | Videos of a live vesico-urethral anastomosis (VUA) | LSTM and convLSTM | Robot-assisted surgery tasks | 5 suturing gestures | 87% | No |
Gesture Name | Explanation | Hand |
---|---|---|
Bipolar Cautery | The use of cautery to control bleeding using bipolar device. | Dominant | |
Monopolar Cautery | The use of cautery to control bleeding using monopolar (regular) device. | Dominant | |
Blunt Dissection | Separating tissue planes by “pushing” rather than cutting or cautery. | Dominant | |
Tissue Grasping | To catch tissue. | Non-dominant | |
Retraction | Retraction by holding structures aside to improve visibility of the operative field. | Dominant, non-dominant | |
Suturing | The use of surgical sutures to approximate edges including knot tying. | Dominant, non-dominant | |
Needle Insertion | Initial contact and passing of the needle through the tissue up to the point when suture begins to traverse the tissue. | Dominant, non-dominant | |
Surgical Thread Grasping | To catch surgical thread. | Dominant, non-dominant | |
Idle | Not doing anything. | Dominant, non-dominant |
Feature | Method of Extraction | Scope | Total Number
---|---|---|---
Regional network flexibility | Network community detection technique applied to functional connectivity matrix | Calculated for motor, cognition, and perception cortices at left and right lobes of the brain and same cortices throughout brain | 9
Integration | | | 9
Recruitment | | | 9
Search Information | Functional connectivity matrix | | 9
Strength | | | 9
Mean pairwise diffusion efficiency of cortices | | | 9
Transitivity | | Calculated throughout brain | 1
Global efficiency | | | 1
Mean global diffusion efficiency | | | 1
Power | Short fast Fourier transform (SFFT) | Calculated for motor, cognition, and perception cortices | 3
Test | Improvement | Confidence Interval | p-Value |
---|---|---|---|
D; ET versus BAG | 0.90% | 0.78–1.02% | <0.0001 |
D; RF versus BAG | 0.62% | 0.53–0.72% | <0.0001 |
D; ET versus RF | 0.28% | 0.21–0.36% | <0.0001 |
ND; ET versus BAG | 1.07% | 0.95–1.18% | <0.0001 |
ND; RF versus BAG | 0.58% | 0.45–0.70% | <0.0001 |
ND; ET versus RF | 0.49% | 0.41–0.57% | <0.0001 |
Hand | KNN | BAG | RF | ET
---|---|---|---|---
D | 82.8 (0.023) | 88.9 (0.017) | 89.8 (0.017) | 90.2 (0.018) |
ND | 86.7 (0.023) | 91.9 (0.015) | 92.7 (0.016) | 93.4 (0.017) |
Dominant hand (rows: true label; columns: predicted label):

True \ Predicted | Bipolar Cautery | Monopolar Cautery | Blunt Dissection | Retraction | Suturing | Needle Insertion | Surgical Thread Grasping | Idle
---|---|---|---|---|---|---|---|---
Bipolar Cautery | 83% | 0% | 0% | 7% | 1% | 0% | 0% | 9%
Monopolar Cautery | 0% | 100% | 0% | 0% | 0% | 0% | 0% | 0%
Blunt Dissection | 0% | 0% | 99% | 0% | 1% | 0% | 0% | 0%
Retraction | 9% | 0% | 0% | 84% | 0% | 0% | 0% | 7%
Suturing | 0% | 0% | 0% | 0% | 94% | 3% | 0% | 3%
Needle Insertion | 0% | 0% | 0% | 0% | 3% | 92% | 1% | 4%
Surgical Thread Grasping | 0% | 0% | 0% | 0% | 1% | 0% | 99% | 0%
Idle | 23% | 0% | 6% | 4% | 0% | 4% | 0% | 63%
Non-dominant hand (rows: true label; columns: predicted label):

True \ Predicted | Tissue Grasping | Retraction | Suturing | Needle Insertion | Surgical Thread Grasping | Idle
---|---|---|---|---|---|---
Tissue Grasping | 95% | 5% | 0% | 0% | 0% | 0%
Retraction | 2% | 86% | 0% | 0% | 5% | 7%
Suturing | 0% | 0% | 100% | 0% | 0% | 0%
Needle Insertion | 0% | 0% | 0% | 99% | 1% | 0%
Surgical Thread Grasping | 1% | 0% | 0% | 0% | 99% | 1%
Idle | 0% | 15% | 0% | 2% | 5% | 78%
Share and Cite
Shafiei, S.B.; Durrani, M.; Jing, Z.; Mostowy, M.; Doherty, P.; Hussein, A.A.; Elsayed, A.S.; Iqbal, U.; Guru, K. Surgical Hand Gesture Recognition Utilizing Electroencephalogram as Input to the Machine Learning and Network Neuroscience Algorithms. Sensors 2021, 21, 1733. https://doi.org/10.3390/s21051733