1. Introduction
The union between theory and practice in clinical training is one of the main concerns of educational institutions. Medical schools recognize a dissonance between what students learn in the classroom and what they learn in healthcare centers during their clinical placements and residencies [1].
In the academic context, incorporating real practical scenarios into the teaching methodology is not affordable, nor can it reproduce the dynamism of the clinic or provide hands-on experience with the technological advances incorporated into clinical practice. Therefore, different efforts have been made to reduce the gap between both realities. Common alternatives are (i) mannequins, which are expensive; (ii) animals, whose use is subject to legal and ethical restrictions; and (iii) in some cases, the collaboration of colleagues to recreate practical cases. The main disadvantages of these alternatives are the inability to repeat the task an unlimited number of times, the limited customization of the intervention, and the discomfort of evaluating the student [2].
The emergence of immersive technologies has led clinicians to adopt Virtual Reality (VR) environments, which allow training with a progressive level of complexity and the possibility of repeating an exercise until the technique is assimilated [3]. These advantages make it possible to define valuable personalized educational experiences [4,5]. Augmented Reality (AR) and Mixed Reality (MR) technologies allow virtual objects to be overlaid on the user's real environment [6]. The main difference between them is that MR improves the recognition of geometric surfaces in the real world, allowing a much more realistic integration of virtual content. Immersive technologies stimulate emotional engagement in the subject, leading to a motivating learning process. These technologies place students in highly controlled situations without real-life consequences, while AR/MR still makes it possible to observe the outcomes of learners' poor decision-making [7].
In addition, the ease of access to current technologies makes extended realities an increasingly everyday tool, creating urgency in the field of education to take advantage of their pedagogical benefits. Immersive technologies thus become promising alternatives for generating clinical cases and simulators in educational training.
To target this dual academic-clinical scenario, we propose an MR approach built on the combination of gesture detection and hand tracking to implement an interactive learning activity. This activity engages the users' motor coordination, favoring their involvement and motivation when learning suture techniques and basic suture materials. We developed this solution as an MR application for the Magic Leap head-mounted display (HMD) and tested the user experience and retention potential of the MR learning environment with an experimental group of 32 subjects.
Indeed, AR has facilitated surgical planning and guidance, owing to its potential for placing relevant clinical data in the clinicians' line of sight [8]. Augmented platforms for the visualization of medical images over video capture have been applied both to the preoperative planning of orthopedic surgery and to guidance systems [9,10].
MR training applications have been developed for clinicians to improve their surgical skills and to help them understand spatial relationships and concepts. A case study is Turini's orthopedic open-surgery simulation with HoloLens technology [11], whose conclusions suggest the suitability of MR for displaying relevant anatomical views in this kind of application. According to Barsom's review, there are currently about 30 medical simulators for educational purposes intended specifically for training in laparoscopic surgery, neurosurgery procedures, and echocardiography [12]. The advantage of MR and AR trainers is the ability to combine a physical simulation with a virtual overlay, creating more realistic situations [13].
In some research, training in surgery or suturing has been based on virtual simulators with haptic technology designed for practicing these techniques in hospitals [14]. However, most of them focus on laparoscopy or arthroscopy, and the high cost of kinaesthetic technology has prevented their widespread use. Taking these models as references for the efficiency of simulation tools in suture training, we propose a more affordable, portable, and easy-to-configure solution that addresses the learning of suture techniques without haptic hardware.
2. Materials and Methods
The present work addresses two fundamental objectives: (i) the design and development of a technological solution in MR and (ii) the validation of its feasibility as a learning tool.
2.1. Fundamentals on Minor Surgery Sutures
Suturing is a surgical procedure to join tissues that have been sectioned by a wound or surgical incision; closing the wound is ultimately the last phase of any surgical technique [15]. In the ligation or approximation of tissues, threads are used to support and help approximate the edges of a wound. The stitches distribute the tensile force longitudinally along the wound, preventing it from opening until the natural healing process is satisfactorily established. This practice involves the implantation of synthetic materials in human tissue; it is therefore advisable to know the characteristics of the materials used and the appropriate suture techniques for each case.
Determining factors for an adequate intervention are the suture procedure, the materials (threads), and the type of needle. For this reason, we propose a training tool that supports learning of the suturing process by offering practice of the four basic patterns and the study of the different types of needles according to their size, cross-section, and point type.
2.2. Mixed Reality Application as Learning Environment
Nowadays, learning methodologies focus on stimulating, motivating, and promoting the emotional and experiential involvement of the apprentice in the learning process, which increases the chances of knowledge retention. According to Zapata et al., the greater the emotion an individual attaches to a stimulus held in short-term memory, the greater the possibility of successfully engaging the brain's chemical conversion systems to transform it into long-term memory [16]. The correct application of learning strategies plays a fundamental role in the retention of knowledge and the exercise of memory [17]. The design of Suture MR is driven by certain requirements that our learning ecosystem must meet:
Ubiquity: offering a system that can be practiced at any time and place, without dependencies.
Creation of the experience: involving the user in configuring the practice case increases the impact on their ability to remember information associated with decision-making.
Solving a case study: promoting the user's analytical capacities.
Promoting movement coordination: the coordination of limb movements favors the perception of spatial content and mobilizes cerebral structures different from those engaged by reading and writing.
Repetition: subdividing a large goal into smaller tasks and inducing the repetition of those tasks reinforces short-term memory through repetitive learning.
2.3. Suture MR Workflow
We developed our graphical software application in Unity version 2018.1.9f2-MLP using C#. Our solution is divided into three phases, as shown in the User Experience (UX) workflow diagram (see
Figure 1), where students will be learning practical suture skills while the experience progresses (see
Supplementary Materials).
The first phase consists of the creation of the educational practice case, which relates the desired needle to the selected suture technique. The four techniques implemented in Suture MR were selected according to the criterion of fundamental knowledge; including more complex techniques would be straightforward. The needle types incorporated in our tool follow the commercial standardization of the manufacturer Serag-Wiessner [
18]. In this first phase, the user selects, through interaction with the panels, the type of procedure to be practiced and the needle configuration, as shown in the red modules of
Figure 1.
Once phase I is completed, a tutorial explains how to use the tools for the selected procedure and how to start the interactive activity. After this tutorial, phase II begins: the selection of the workspace. In Mixed Reality, the system must detect the volumes of the 3D elements of the physical space by means of scanning techniques based on point-cloud contour detection. This allows the subsequent reconstruction of a digital mesh of the space. From this information, the system can compute the occlusions of real objects with digital ones and position the augmented objects on the surfaces detected in the scan. This function is usually provided by each vendor's development kit; in our application, we used the Magic Leap SDK to implement this module (phase II), which shows the user the result of the scanning phase. To let the user judge whether the observed area is suitable as a workspace, each fragment of the scanned surface is internally classified and labeled as vertical or horizontal, depending on its normal vector. When the user points at a surface with the cursor, a green pointer indicates that the area is suitable for positioning the 3D arm; otherwise, the pointer lights up red. The green-red feedback criterion checks whether the fragment the user is pointing at, together with its neighboring fragments, consists of horizontal surfaces whose combined area is greater than the area occupied by the 3D arm. As a result, the user visualizes, through the Mixed Reality glasses, a realistic 3D arm model on his or her real-world desk (see
Figure 2).
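As an illustration of this suitability check, the following C# sketch classifies scanned mesh fragments by their normal vector and compares the combined horizontal area against the arm's footprint. It is a minimal sketch under our own assumptions: the `SurfaceFragment` record, its fields, and the thresholds are hypothetical stand-ins for the data produced by the Magic Leap SDK's meshing step, not the SDK's actual API.

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

// Hypothetical fragment record standing in for the data produced by the
// meshing step (the real Magic Leap SDK types differ).
public struct SurfaceFragment
{
    public Vector3 Normal;      // averaged face normal of the fragment
    public float Area;          // fragment area in square meters
    public List<int> Neighbors; // indices of adjacent fragments
}

public static class WorkspaceSelector
{
    // A fragment counts as horizontal when its normal points close to world
    // "up"; the 0.95 cosine threshold (about 18 degrees) is an illustrative choice.
    public static bool IsHorizontal(SurfaceFragment f) =>
        Vector3.Dot(f.Normal.normalized, Vector3.up) > 0.95f;

    // Green/red feedback: the pointed fragment plus its horizontal neighbors
    // must jointly cover at least the footprint required by the 3D arm.
    public static bool IsSuitable(IReadOnlyList<SurfaceFragment> fragments,
                                  int pointedIndex, float armFootprintArea)
    {
        SurfaceFragment pointed = fragments[pointedIndex];
        if (!IsHorizontal(pointed))
            return false;

        float totalArea = pointed.Area + pointed.Neighbors
            .Select(i => fragments[i])
            .Where(IsHorizontal)
            .Sum(n => n.Area);

        return totalArea >= armFootprintArea;
    }
}
```

The cursor color then follows directly from this predicate: green when the check passes, red otherwise.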
At this point, phase III starts, and two interactive objects are projected onto the user's hands: the tweezers and the needle holder. In this last phase, the tool must enable realistic user interaction with the virtual arm. To this end, we promote motor coordination movements based on the activation of trigger points to complete the suture procedure (see
Figure 3).
2.4. Gesture Detection
For the automatic detection of user gestures and for hand tracking, it is necessary to access Magic Leap's Hand Tracking Controller [19]. This provides Suture MR with two main functions: gesture detection and tool orientation. The first is the detection of the “pinch” pose. Gesture detection enables the movement and positioning of an object; consequently, we allow the user to interact through gestures (see Figure 4). The methodology of this activity therefore uses gestures as triggers. For correct gesture recognition, it is necessary to set the Pose Filter Level, Key Pose Confidence Value, and Key Point Filter Level parameters.
The Pose Filter Level parameter was set to Raw: since rapid hand movements are not expected, it is preferable to follow the pose smoothly at a reduced refresh rate. The Key Pose Confidence Value was set to 0.85, as we expect the subject to perform the pinch gesture as close to the reference as possible. With this restriction, the system allows the movement of the virtual tools only as long as the user performs the gesture with precision.
In the case of the Key Point Filter Level parameter, the key points of the hand correspond to the joints. With these points we can reconstruct the current skeleton of the hand and detect the user's gesture. Key-point detection and gesture classification are performed each frame, emitting a pose-detection event with its confidence level. When the detected confidence level falls below the fixed threshold, our system starts a deactivation window that extends over eight frames. If a new event indicates that the confidence level exceeds the predefined threshold before the deactivation window expires, the process is stopped and the gesture is maintained; otherwise, the initial gesture is no longer recognized and its functionality stops responding. We observed that a softer or stricter filter level affects the gesture activation and deactivation times; we therefore selected the extra-soft filter option so that small variations of the confidence level around the threshold do not make the tool difficult to use [20].
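This on/off behavior amounts to a small hysteresis state machine. The following plain-C# sketch (class and member names are ours, not the Magic Leap SDK's) keeps the gesture alive unless the confidence stays below the threshold for eight consecutive frames:

```csharp
// Minimal sketch of the confidence hysteresis described above; class and
// member names are ours, not the Magic Leap SDK's.
public sealed class PinchGestureFilter
{
    const float ConfidenceThreshold = 0.85f; // Key Pose Confidence Value
    const int DeactivationFrames = 8;        // deactivation window length

    bool active;
    int framesBelow;

    // Call once per frame with the confidence reported for the "pinch" pose.
    public bool Step(float pinchConfidence)
    {
        if (pinchConfidence >= ConfidenceThreshold)
        {
            framesBelow = 0;  // confidence recovered in time: keep the gesture
            active = true;
        }
        else if (active && ++framesBelow >= DeactivationFrames)
        {
            active = false;   // eight consecutive low-confidence frames: release
            framesBelow = 0;
        }
        return active;
    }
}
```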
2.5. Hand Tracking and Tools Orientation
The needle holder and tweezers are initially arranged in a given orientation so that each tool can be controlled according to the orientation of the gesture detected for each hand. They are placed in the center of the corresponding hand, namely, the Wrist/KeyPoint Hand Center (as shown in
Figure 4), placing the dissecting tweezers on the left and the needle holder on the right (see
Figure 5). Thus, the transformation corresponding to the relative rotation of the fingertips with respect to the wrist can be obtained from the director vector with origin at the wrist and terminal point at the tip of the thumb. This direction indicates where the surgery tools should point. Algorithm 1 displays the pseudocode executed on each frame of the game loop, at a rendering rate of 80–90 frames per second. We set a quaternion to change the rotation of the object. The quaternion creates a rotation with the target vector aligned with the Z axis, the cross product between the target vector and the upward direction aligned with the X axis, and the cross product between Z and X aligned with the Y axis. The approach used for the vector rotation is to represent the rotation by a unit-length quaternion $q = (s, \mathbf{v})$ with scalar (real) part $s$ and vector (imaginary) part $\mathbf{v}$. The rotation is applied to a 3D vector $\mathbf{p}$ through the formula

$$\mathbf{p}' = q\,\mathbf{p}\,q^{-1} = \mathbf{p} + 2\,\mathbf{v} \times (\mathbf{v} \times \mathbf{p} + s\,\mathbf{p})$$
Algorithm 1: Position and orientation right-hand tracking algorithm of the augmented surgery tools (pseudocode executed on each frame of the game loop).
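A minimal Unity/C# sketch of this per-frame step is given below, assuming the wrist and thumb-tip key points are available as world-space positions from the hand-tracking API; the helper name and signature are ours, not the literal implementation. `Quaternion.LookRotation` builds exactly the kind of orthonormal frame described above from a forward vector and an up hint.

```csharp
using UnityEngine;

// Minimal sketch of the per-frame orientation step (helper name and
// signature are our own illustrative choices).
public static class ToolOrientation
{
    // Called on every frame of the game loop (80-90 fps on the device).
    public static void Apply(Transform tool, Vector3 wrist, Vector3 thumbTip)
    {
        // Director vector with origin at the wrist and terminal point at
        // the tip of the thumb: where the surgery tool should point.
        Vector3 target = (thumbTip - wrist).normalized;

        // LookRotation builds the orthonormal frame described above (the
        // target vector along Z, with X and Y completed via cross products
        // against the world up direction) and returns it as a unit quaternion.
        tool.rotation = Quaternion.LookRotation(target, Vector3.up);

        // Keep the tool anchored at the Wrist/KeyPoint Hand Center.
        tool.position = wrist;
    }
}
```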
2.6. Experiment
Having designed and implemented the educational application Suture MR, the goals of our validation pilot study are to (i) verify the usability of the Suture MR application on a Mixed Reality device such as Magic Leap, given the possible complexity of this new technology; (ii) evaluate its potential for academic use by collecting information on users' perception of learning; and (iii) evaluate the method of applying this tool as a learning material. That is, we aim to analyze whether the use of this active learning tool, far from hindering the learning process of health students, facilitates the practice of this intervention.
2.7. Usability Metric
The System Usability Scale (SUS) questionnaire is a standard assessment of the usability of technological systems that is easy for users to complete and understand. It is a 5-point Likert-style scale (where 1 = strongly disagree, 5 = strongly agree) that generates a single number representing a measure of the usability of the system under study. The final score ranges from 0 to 100 points, from worst to best usability [21]. The test is composed of 10 items, each contributing a value between 0 and 4. For the final result, the contributions of the items are summed and the total is multiplied by 2.5, yielding the overall SUS value. Note that the scores of the 10 individual items are not meaningful in themselves.
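As a worked example of this computation, the sketch below converts ten raw Likert answers into the overall SUS value. The odd/even item conversion follows Brooke's standard rule [21]; the class and method names are ours.

```csharp
using System;

// Sketch of the standard SUS computation described above; `responses`
// holds the ten raw Likert answers (1-5) in questionnaire order.
public static class Sus
{
    public static double Score(int[] responses)
    {
        if (responses.Length != 10)
            throw new ArgumentException("SUS expects exactly 10 items.");

        double sum = 0;
        for (int i = 0; i < 10; i++)
        {
            // Brooke's rule: odd-numbered items contribute (answer - 1),
            // even-numbered items contribute (5 - answer), so each item
            // falls in the 0-4 range mentioned in the text.
            sum += (i % 2 == 0) ? responses[i] - 1 : 5 - responses[i];
        }
        return sum * 2.5; // overall score on the 0-100 scale
    }
}

// Example: Sus.Score(new[] { 4, 2, 4, 1, 5, 2, 4, 2, 4, 3 }) returns 77.5.
```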
2.8. Procedures
The experiment is composed of three phases. The first and second are experimental sessions with the two learning methodologies: (i) Text-Reading and (ii) Mixed Reality practice. Text-Reading is a session for acquiring knowledge about suture techniques by reading a descriptive text (supervised by a health specialist). Mixed Reality practice is a session for acquiring knowledge of a suture technique using our Mixed Reality application. Each participant reviews a specified part of the theoretical content in each session. Then, as the third phase, the participants complete a usability test of our tool and a questionnaire about their subjective perceptions of both forms of learning.
The experiment was carried out under the same conditions for all users, controlling extraneous factors such as time, environment, and device. All participants were in the same physical space for the tests, with the same lighting, and both sessions lasted the same time (see
Figure 2). Methodologically, the only difference between the sessions lies in the applied learning resource. This approach identifies the methodology of each session (Mixed Reality learning tool or Text-Reading) as the independent variable and the test results obtained after each session as the dependent variable.
Specifically, the subjects performed the following:
Active Learning with the MR Experience. The user started Suture MR, configured the experience, and carried out the interactive activity. This activity was limited to 10 min. Once it was finished, the participant took the retention test.
Traditional Learning Based on Text-Reading. The participant read the text for 10 min and then took the retention test.
Evaluation. Participants completed questionnaires about their personal profile and their subjective assessment of Suture MR, as well as the usability questionnaire based on the SUS scale.
Each participant in the group was assigned one of the four suture procedures for each activity and a specific needle configuration in phases I and II. The permutations of four different procedures across the two tests generate 16 different combinations for each group (of 16 participants). In short, under these conditions, each participant studied different procedures in the Text-Reading and Mixed Reality sessions, and half of the participants performed the sessions in the reverse order (i.e., first Mixed Reality, then Text-Reading).
Regarding the evaluation system, all tests were composed of four multiple choice questions where hits add 1 point, errors subtract 0.5 points, and blank answers do not score. Thus, scores ranged from 0 to 4 points.
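For illustration, this scoring rule translates directly into code. In the sketch below, the clamp at zero is our assumption, made only to match the reported 0–4 range; the text does not state how negative raw totals are handled.

```csharp
// Minimal sketch of the test-scoring rule stated above. Blank answers
// contribute nothing; the clamp at zero is our assumption.
public static class RetentionTestScore
{
    public static double Compute(int hits, int errors)
    {
        double raw = hits * 1.0 - errors * 0.5;
        return System.Math.Max(0.0, raw);
    }
}

// Example: 3 hits and 1 error give 3 - 0.5 = 2.5 points.
```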
2.9. Participants
The present study enrolled a total of 32 participants aged 18 to 30 years (18 women and 14 men), 50% of whom had knowledge of health sciences. The cohort was divided into two groups, balanced by gender and by health-field knowledge, in which the order of the sessions was exchanged. Additionally, the profile analysis of those surveyed reveals that for 46.9% of the participants this was their first contact with extended reality experiences, while the remaining 53.1% had occasionally experienced these technologies.
3. Results
This section details the results obtained in our experiment. First, we present the results on Suture MR usability and the subjective evaluations of the system. We also assessed whether there are significant differences in learning outcomes depending on whether the Mixed Reality application was used before or after reading the text. This statistical analysis was carried out with RStudio.
3.1. Suture MR Usability Validation
The mean value of Suture MR usability, calculated with SUS, was 72.34 (SD = 10.35), as shown by the yellow line in
Figure 6. To interpret the SUS score, it must be converted to a percentile rank. Based on the distribution of all scores, a raw SUS score of 72.34 converts to a percentile rank of 70%, which means that the evaluated system has a higher perceived usability than 70% of all products tested. In addition, previous research indicates, as a guideline for SUS score interpretation, that a score above 68 is considered acceptable [22]. We can conclude that Suture MR has been accepted and its ease of use verified. In this way, practical learning with our tool is usable, and the technological innovation does not hinder learning.
3.2. Users' Subjective Evaluation of the Mixed Reality Tool
In order to gauge the users' opinion about learning with both methodologies, we prepared a short questionnaire with the following three Likert-scale questions and one multiple-choice question:
Do you consider that your attention span during the use of Suture MR was compromised by the use of a new learning technology (Mixed Reality)? Please rate from 1 = Strongly Disagree to 5 = Strongly Agree.
Would you recommend the use of this tool, based on Mixed Reality, as a learning method? Please rate from 1 = Strongly Disagree to 5 = Strongly Agree.
To continue learning, would you recommend Suture MR out of the two learning methodologies? Please rate from 1 = Strongly Disagree to 5 = Strongly Agree.
Choose the option you agree with: “I believe the learning has been greater after using...” (A) Text-Reading Methodology; (B) Mixed Reality Tool (Suture MR); (C) Both methodologies are comparable.
Regarding how the use of new technologies can affect attention, the survey results indicate that 10 of the 32 participants noticed limitations in their attention span caused by the need to adapt to the technology, 10 were neutral, and the remaining 12 considered that their attention was only slightly affected or not affected at all. As for the overall assessment of our learning tool, 93.3% agreed or strongly agreed to recommend its use for learning (see
Figure 7). Note that 48% of those surveyed consider that their learning was greater using Suture MR, 38.7% consider both methodologies comparable, and the remaining 12.9% indicate that they learned more following the traditional methodology (see
Figure 8).
3.3. Statistical Analysis of the Influence of the Order of the Learning Sessions on Test Scores
We assessed whether the scores obtained are influenced by the order in which the sessions were taken: first the Text-Reading session and then the interactive learning session, or vice versa. This analysis aims to contrast whether the order of the sessions may have affected the test results. We first applied the Shapiro–Wilk test to check the normality of each population. Only one of the four statistical tests carried out obtained a p ≥ 0.05 (see
Table 1), so we generally assume the samples do not follow a normal distribution. We then carried out a Wilcoxon rank-sum test to contrast the independence of the population medians. As shown in
Table 2, the p-values are less than 0.05, which implies rejecting the hypothesis of equal medians between the populations. This leads us to confirm that the order in which the learning sessions were carried out influences the results obtained.
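The authors performed this analysis in RStudio; purely as an illustration in the project's development language, the sketch below transposes the same two steps to C# using Accord.NET's hypothesis-testing classes. The use of this library and the exact API shown here are our assumptions, not the paper's tooling.

```csharp
using System;
using Accord.Statistics.Testing;

// Illustrative transcription of the two-step analysis into C#; the library
// choice and API are our assumptions (the original analysis used RStudio).
public static class SessionOrderAnalysis
{
    public static void Run(double[] group1Scores, double[] group2Scores)
    {
        // Step 1: Shapiro-Wilk normality test on each sample.
        Console.WriteLine($"Shapiro-Wilk p (group 1): {new ShapiroWilkTest(group1Scores).PValue:F4}");
        Console.WriteLine($"Shapiro-Wilk p (group 2): {new ShapiroWilkTest(group2Scores).PValue:F4}");

        // Step 2: normality is generally rejected, so compare the groups
        // with the non-parametric Wilcoxon rank-sum (Mann-Whitney) test.
        var rankSum = new MannWhitneyWilcoxonTest(group1Scores, group2Scores);
        Console.WriteLine($"Rank-sum p: {rankSum.PValue:F4} (p < 0.05 suggests the session order matters)");
    }
}
```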
3.4. General Analysis of Learning Test Outcomes
We analyzed which combination is most beneficial by observing the overall average test results.
The median values in the statistics table (see
Table 3) indicate that users show better test results (see the retention evaluation procedure in
Section 2.8) after the second learning session (median = 3.250) than after the first session (median = 2.500), regardless of which methodology was applied first. As for the order of the learning process, the most recommendable order according to the results obtained is the one followed by group 1: first the reading of the text and then the interactive experience. In this order, the user initially acquires some theoretical knowledge during the text session and can then internalize it in the interactive Mixed Reality session through active learning.
4. Discussion
In this article, we provide a learning tool based on an active methodology developed for Mixed Reality technology. We used a next-generation device (Magic Leap) to take advantage of the advances that today's technology can offer in the field of education. The development of this type of solution for this technology is notably scarce.
The usability results, the subjective user-opinion questionnaire results, and the recommendations for using Suture MR as part of a learning methodology are briefly summarized as follows:
Users rated the usability of the Mixed Reality solution at 72.34/100 points. Such a satisfactory level of usability, falling in the third quartile, indicates that no adaptation of our approach is necessary.
The subjective perception questionnaire reveals that users find that the use of Suture MR encourages their learning process to a greater extent than reading the text alone would.
The analysis of test scores suggests that combining a practical experience with the Mixed Reality solution after reading the theoretical material improves the subject's retention results.
In short, Suture MR is presented as a useful, usable, ubiquitous, and motivating system that offers an interactive proposal for teaching academic content specialized in basic sutures of minor surgery.
Considering that active learning has the capacity to engage the user emotionally and strengthen the internalization of information, Suture MR meets objectives that would be difficult to satisfy by reading texts. That is why users perceive the use of Suture MR as effective in terms of learning.
Despite the positive results obtained, several issues identified during the trials could be improved, and this study can be understood as a starting point for future work. Further research lines could expand and differentiate the content shown for each suture technique, incorporate new fixed informative panels that reinforce theoretical knowledge, integrate a user guidance system for the realization of the suture stitches, or conduct new studies aimed at medical and nursing students with greater experience in the use of extended realities. Furthermore, it would be interesting to evaluate long-term memory in order to assess the potential of the learning tool as an experiential learning ecosystem.
Supplementary Materials
The following are available online at
https://www.mdpi.com/2076-3417/11/5/2335/s1, Video S1: Design, development and evaluation of a mixed reality solution based on learning environment for sutures in minor surgery.
Author Contributions
Conceptualization and Methodology, A.R. and L.R.; software, formal analysis, and writing—original draft preparation A.R.; resources and writing—review and editing, L.R. and A.S.; validation and investigation, A.R., L.R. and A.S.; funding acquisition, A.S. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the Spanish Ministry of Science, Innovation and Universities grant number RTI2018-098694-B-I00.
Informed Consent Statement
Informed consent was obtained from all subjects involved in the study.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Weller, J.M. Simulation in undergraduate medical education: Bridging the gap between theory and practice. Med. Educ. 2004, 38, 32–38.
- Naylor, R.A.; Hollett, L.A.; Valentine, R.J.; Mitchell, I.C.; Bowling, M.W.; Ma, A.M.; Dineen, S.P.; Bruns, B.R.; Scott, D.J. Can medical students achieve skills proficiency through simulation training? Am. J. Surg. 2009, 198, 277–282.
- Zhu, E.; Hadadgar, A.; Masiello, I.; Zary, N. Augmented reality in healthcare education: An integrative review. PeerJ 2014, 469.
- Wojciechowski, R.; Cellary, W. Evaluation of learners’ attitude toward learning in ARIES augmented reality environments. Comput. Educ. 2013, 68, 570–585.
- Di Serio, A.; Ibáñez, B.M.; Kloos, D.C. Impact of an augmented reality system on students’ motivation for a visual art course. Comput. Educ. 2013, 68, 586–596.
- Hanna, M.G.; Ahmed, I.; Nine, J.; Prajapati, S.; Pantanowitz, L. Augmented Reality Technology Using Microsoft HoloLens in Anatomic Pathology. Arch. Pathol. Lab. Med. 2018, 142, 638–644.
- Albrecht, U.-V.; Folta-Schoofs, K.; Behrends, M.; von Jan, U. Effects of Mobile Augmented Reality Learning Compared to Textbook Learning on Medical Students: Randomized Controlled Pilot Study. J. Med. Internet Res. 2018, e182.
- Tepper, O.M.; Rudy, H.L.; Lefkowitz, A.; Weimer, K.A.; Marks, S.M.; Stern, C.S.; Garfein, E.S. Mixed Reality with HoloLens: Where virtual reality meets augmented reality in the operating room. Plast. Reconstr. Surg. 2017, 140, 1066–1070.
- Wu, X.; Liu, R.; Yu, J.; Xu, S.; Yang, C.; Yang, S.; Shao, Z.; Ye, Z. Mixed Reality Technology Launches in Orthopedic Surgery for Comprehensive Preoperative Management of Complicated Cervical Fractures. Surg. Innov. 2018, 25, 421–422.
- Li, Y.; Chen, X.; Wang, N.; Zhang, W.; Li, D.; Zhang, L.; Qu, X.; Cheng, W.; Xu, Y.; Chen, W.; et al. A wearable mixed-reality holographic computer for guiding external ventricular drain insertion at the bedside. J. Neurosurg. 2018, 1–8.
- Turini, G.; Condino, S.; Parchi, P.D.; Viglialoro, R.M.; Piolanti, N.; Gesi, M.; Ferrari, M.; Ferrari, V. A Microsoft HoloLens Mixed Reality Surgical Simulator for Patient-Specific Hip Arthroplasty Training; Springer Nature: Basingstoke, UK, 2018; pp. 201–210.
- Barsom, E.Z.; Graafland, M.; Schijven, M.P. Systematic review on the effectiveness of augmented reality applications in medical training. Surg. Endosc. 2016, 30, 4174–4183.
- Kobayashi, L.; Zhang, X.C.; Collins, S.A.; Karim, N.; Merck, D.L. Exploratory Application of Augmented Reality/Mixed Reality Devices for Acute Care Procedure Training. West. J. Emerg. Med. 2018, 19, 9158–9164.
- Choi, K.; Chan, S.; Pang, W. Virtual Suturing Simulation Based on Commodity Physics Engine for Medical Learning. J. Med. Syst. 2012, 36, 1781–1793.
- Blanco, J.M.A.; Fortet, J.R.C.; Pata, N.R.; Olaso, A.S.; Guztke, M.M. Suturas básicas y avanzadas en cirugía menor (III). Med. Fam. SEMERGEN 2002, 28, 89–100.
- Zapata, L.F.; Reyes, C.D.L.; Lewis, S.; Barceló, E. Memoria de trabajo y rendimiento académico en estudiantes de primer semestre de una universidad de la ciudad de Barranquilla. Psicol. Caribe 2002, 28, 66–82.
- Cabero, J.; Barroso, J. Posibilidades educativas de la Realidad Aumentada. J. New Approaches Educ. Res. 2016, 5, 44–50.
- Serag-Wiessner. Surgical Needles. Available online: https://www.serag-wiessner.de/en/products/surgical-needles/ (accessed on 24 February 2021).
- Hand Tracking Key Points in Unity. Available online: https://developer.magicleap.com/learn/guides/hand-tracking-key-points-unity (accessed on 18 June 2019).
- Hand Poses in Unity. Available online: https://developer.magicleap.com/learn/guides/unity-sdk-0-22-gestures-in-unity (accessed on 25 April 2019).
- Brooke, J. SUS: A quick and dirty usability scale. In Usability Evaluation in Industry; Jordan, P.W., Thomas, B.A., Weerdmeester, B., McClelland, I.L., Eds.; Taylor & Francis: London, UK, 1996; pp. 189–194.
- Measuring Usability with the System Usability Scale (SUS). Available online: https://measuringu.com/sus/ (accessed on 16 July 2019).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).