Article

Towards the Emergence of the Medical Metaverse: A Pilot Study on Shared Virtual Reality for Orthognathic–Surgical Planning

1 Faculty of Information Technology and Communication Sciences, Tampere University, 33014 Tampere, Finland
2 Department of Radiology, Tampere University Hospital, Wellbeing Services County of Pirkanmaa, 33520 Tampere, Finland
3 Faculty of Medicine and Health Technology, Tampere University, 33014 Tampere, Finland
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(3), 1038; https://doi.org/10.3390/app14031038
Submission received: 12 December 2023 / Revised: 23 January 2024 / Accepted: 23 January 2024 / Published: 25 January 2024

Abstract

Three-dimensional (3D) medical images are used for diagnosis and in surgical operation planning. Computer-assisted surgical simulations (CASS) are essential for complex surgical procedures, which are often planned in an interdisciplinary manner. Traditionally, the participants study the designs on the same display. In a 3D virtual reality (VR) environment, the planner wears a head-mounted display (HMD), and the designs can then be examined in VR by other people wearing HMDs, which is a practical use case for the medical metaverse. A multi-user VR environment was built for the planning of an orthognathic–surgical (correction of the facial skeleton) operation. Four domain experts (oral and maxillofacial radiologists) experimented with the pilot system and found it useful: it enabled easier observation of the model and a better understanding of the structures. A voice connection was available, and co-operation during the procedure was natural. The planning task is complex, which leads to a certain level of complexity in the user interface.

1. Introduction

The metaverse [1,2,3] can be thought of as an extension of earlier multi-user shared Virtual Reality (VR) systems. Support for collaboration is an essential feature of such VR environments and has become an important topic in the field of human–computer interaction (HCI) [4,5]. Collaboration in VR may increase the quality of shared presence, communication, and interactions among multidisciplinary teams distributed across several locations [6,7,8,9,10]. Although information security issues have been raised regarding the use of the metaverse [11], they are outside the scope of the present paper.
To enable truly effective collaboration in VR, one needs to identify the system features that matter most to the collaborating participants. Mütterlein et al. [12] studied the interplay between telepresence, interactivity, and immersion in collaborative VR; their results demonstrated the importance of interactivity and immersion, with immersion identified as one of the main drivers, and they argue that trust between users on the same team is essential for collaborative VR experiences. Liu et al. [13] demonstrated that the roles and actions of the participants in collaborative VR affect the participant experience. Drey et al. [14] studied collaborative learning systems in which either both participants used VR devices or one had a VR device and the other used a tablet; they demonstrated that participants experienced greater presence and immersion, and lower cognitive load, when both had VR devices. Burova et al. [15] demonstrated that asymmetric remote collaboration, where some participants used VR and others attended by other means, was still useful.
Spatial perception is important when analyzing medical imagery for operation planning. Wu et al. [16] found that collaboration in a VR environment improved cognitive capability. Zaker and Coloma [17] showed that the use of VR made it possible to detect conflicts in spatial arrangements. Tea et al. [18] used VR to enable real-time remote collaboration in the design review process and demonstrated that an immersive VR application led to better results than the traditional approach. Increased spatial understanding was also one of the benefits of a collaborative VR system in a study by Heinonen et al. [19], who examined a system for technical document review and risk assessment.
In the medical field, collaboration in VR environments has been investigated in various contexts, for example in medical training, where it is natural for several participants to work in the same environment and on the same data sets. For example, Chheang et al. [20] studied how training for a collaborative medical operation can be conducted in a VR environment, and Schott et al. [21] used multi-user VR and mixed environments to teach liver anatomy. Kyaw et al. [22] compiled a review of the use of VR for teaching in the health professions. In learning situations, the teacher usually explains the phenomena or operation while the other participants observe a demonstration, but true collaboration with active participants can also be arranged. Luxenburger et al. [23] demonstrated a system in which the participants studied shared electronic patient records in VR, and Liu et al. [24] built a collaborative VR system in which several domain experts could study the same medical data. Butnaru and Girbacia [25] explored collaborative bone surgery planning in a tele-immersive environment with networked Cave Automatic Virtual Environment (CAVE) systems and other stereoscopic desktop displays.
To the best of our knowledge, this is the first study of collaborative VR for computer-assisted surgical simulation (CASS) in facial or craniomaxillofacial surgery, while VR has earlier been used for solo training [26] and planning [27]. Studying the use of VR in craniomaxillofacial (CMF) CASS is interesting because current surgical planning is still carried out on two-dimensional (2D) screens, but VR has been shown to have advantages over 2D in terms of perception and visualization [28,29,30].
Orthognathic surgery is a type of surgical procedure that is used to correct abnormalities or defects in the facial skeleton [31]. These abnormalities may include misaligned jaws, incorrect bite, and other problems that affect the appearance and function of the face. Having a precise pre-surgical plan is essential when performing surgical treatment of dentofacial deformities, as it helps to ensure that the procedure is successful and that the desired results are achieved [32,33]. In this experiment, to simulate a simplified mandibular bilateral sagittal split osteotomy (BSSO) operation, a scenario was created where a radiologist would make a plan together with a surgeon who is in charge of the final surgical operation. In this operation, the lower jaw is split sagittally in its posterior area [31].
Orthognathic surgery was chosen as a practical example of a collaborative VR procedure because the use of CASS is widely accepted in this field and has shown excellent results in previous studies, with increased efficiency and more accurate osteotomies than conventional planning methods [34]. CASS for orthognathic surgery is the three-dimensional (3D) integration of bone and soft tissue analysis into the execution of surgical movements to achieve dental and skeletal harmony of the tissues; the result is then transferred from the virtual environment to the surgical setting [35]. Ayoub and Pulijala [36] compiled a review article on the use of VR in oral and maxillofacial surgery, and Xia et al. [37] presented a use case as early as 2001.
There are also several other types of craniomaxillofacial surgery in which CASS is increasingly being used, such as cancer surgery including reconstructions, major trauma reconstructions, and craniosynostosis surgery. The main goal in all of these is to improve surgical accuracy and functional and morphological outcomes, and to reduce operating time by using patient-specific cutting guides and implants. Finally, CASS can also reduce costs, and the use of VR could make the visualization of anatomical structures and the final surgical outcome more accurate than a 2D approach.
In earlier studies [16,17,18,19], collaboration in a VR system was shown to improve performance. The present study concentrated on synchronous collaboration between two participants, similar to Pidel and Ackermann [38]: the two parties make their contributions to the task simultaneously. As demonstrated by Mütterlein et al. [12], the interaction between the participants is important. In the experiment, the participants were able to comment on the plan and suggest changes in a truly interactive way, and they were able to reach an agreement on the plan.
The objective of the experiment was to pilot test the functionality of collaborative VR in operation design and, in particular, to collect the subjective experiences of collaboration between experienced participants. Our contribution is to show that medical professionals appreciated the capabilities of the collaborative VR environment and would be interested in using a similar system in real operations.

2. Materials and Methods

A prototype version of a multi-user virtual reality system was built for the orthognathic–surgical planning operation to collect user comments on the feasibility of a VR implementation. Four domain expert users (oral and maxillofacial radiologists) were asked to try out the prototype and experiment with the standard operations to get a feel for the system, and then give comments and development ideas on how to improve it. The comments were collected and analyzed after the experiments to guide further development.
The experimental task resembled the standard process of planning an orthognathic–surgical operation. The surgical plan is prepared by a radiologist who manipulates the imaging data and 3D models, takes the necessary measurements, defines provisional positions for the necessary osteotomy lines, defines the movements of the parts, and so on. A surgeon who will perform the real-life operation is responsible for the final surgical plan and, in this interdisciplinary process, accepts the preliminary plans or modifies them according to the surgical requirements. For this reason, radiologists and surgeons work together in the final and decisive planning phase, and a multi-user implementation of the system was necessary to enable this work.

2.1. Design of the Experimental System

Medical expert users were consulted during the development process and their suggestions were addressed through iterative development. The experimental system was designed to support the usual operations needed for orthognathic–surgical operation planning; this decision was based on the preferences of the expert pilot users and on keeping the tool close to the ones they use in their daily work. The system did not copy any existing design tool, as the objective was to concentrate on testing VR in the design process and the experience of collaboration between participants; introducing more features would have increased the complexity and complicated the analysis [39].
The functions required for operation planning, which were implemented in the system, were as follows:
  • The users may study the model by moving and observing it from different directions to obtain a better understanding of the model structures.
  • The user can mark the necessary cephalometric points [40,41] on the model by simply placing small dots (Figure 1, left).
  • Reference planes can be created on the model, each defined by three points placed on it. Relevant for the current operation planning were the Frankfort horizontal plane and the maxillary occlusal plane [42,43] (a geometry sketch of these operations follows the list).
  • Cutting planes can be created and their locations interactively edited on the model by moving the points (Figure 1, middle).
  • The model can be cut into separate parts using the previously defined cutting plane (Figure 1, right).
  • Separate parts of the model can be moved relative to each other and locked into the desired positions.
  • The position and orientation of a model part can be accurately fine-tuned using virtual handles that restrict the moves (Figure 2, left and middle).
  • The user can measure how much the model parts have been moved and the angle between the defined planes (Figure 2, right).
  • For effective collaboration, users can discuss with each other and point to specific points in the design (Figure 3, left).
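To make the geometry behind these functions concrete, the following is a minimal sketch in Unity C# (the platform on which the prototype was built, see Section 3.1) of how reference and cutting planes can be derived from three user-placed points, how a cut can classify mesh vertices by plane side, and how movements and plane angles can be measured. The class and method names (PlanningGeometry, SplitVerticesByPlane, and so on) are illustrative assumptions, not the actual implementation; a real cutting operation would also re-triangulate the faces crossed by the plane.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative geometry helpers for the planning operations listed above.
// These are NOT the authors' implementation; all names are hypothetical.
public static class PlanningGeometry
{
    // A reference or cutting plane is defined by three points placed on the model
    // (e.g., the landmarks used for the Frankfort horizontal plane).
    public static Plane PlaneFromPoints(Vector3 a, Vector3 b, Vector3 c)
    {
        return new Plane(a, b, c); // Unity derives the plane normal from the three points.
    }

    // Cutting the model: classify each mesh vertex by the side of the cutting plane
    // it lies on. Re-triangulating the faces that the plane intersects is omitted here.
    public static (List<int> positive, List<int> negative) SplitVerticesByPlane(
        Vector3[] vertices, Plane cuttingPlane)
    {
        var positive = new List<int>();
        var negative = new List<int>();
        for (int i = 0; i < vertices.Length; i++)
        {
            if (cuttingPlane.GetSide(vertices[i])) positive.Add(i);
            else negative.Add(i);
        }
        return (positive, negative);
    }

    // Measurement: distance a point of interest has moved from its original position.
    public static float MovementLength(Vector3 originalPoint, Vector3 movedPoint)
    {
        return Vector3.Distance(originalPoint, movedPoint);
    }

    // Measurement: angle (in degrees) between two defined planes,
    // e.g., the Frankfort horizontal plane and the maxillary occlusal plane.
    public static float AngleBetweenPlanes(Plane p1, Plane p2)
    {
        float angle = Vector3.Angle(p1.normal, p2.normal);
        return Mathf.Min(angle, 180f - angle); // a plane's orientation is sign-agnostic
    }
}
```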
In the current experiment, the focus was on the functionality of collaboration between participants. It was important to observe the position of the other participant and to understand how the other participant sees the medical image. In the prototype system, only the positions of the HMD and controllers used by the other participant were shown, but this was enough to convey the participant's position (Figure 3, middle). An audio connection between the participants was built into the system; however, in these experiments the participants were in the same large room and no audio connection was needed. The participants were able to move freely around the virtual object to observe the planning details, even overlapping their (virtual) viewpoints if necessary.
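As a minimal sketch of what such avatar sharing involves, the following Unity C# snippet shows a possible pose structure for the HMD and controllers and how a remote participant could be rendered as simple proxy objects (a box for the HMD and two boxes for the controllers, as in Figure 3). The type names and the way poses are transported between devices are assumptions; the prototype may organize this differently.

```csharp
using UnityEngine;

// Per-frame pose data shared between participants (hypothetical structure).
[System.Serializable]
public struct ParticipantPose
{
    public Vector3 headPosition;
    public Quaternion headRotation;
    public Vector3 leftHandPosition;
    public Quaternion leftHandRotation;
    public Vector3 rightHandPosition;
    public Quaternion rightHandRotation;
}

public class RemoteParticipantView : MonoBehaviour
{
    // Simple proxy objects standing in for the other participant's HMD and controllers.
    public Transform headProxy;
    public Transform leftHandProxy;
    public Transform rightHandProxy;

    // Called whenever a new pose arrives from the other participant
    // (the networking layer itself is outside this sketch).
    public void ApplyPose(ParticipantPose pose)
    {
        headProxy.SetPositionAndRotation(pose.headPosition, pose.headRotation);
        leftHandProxy.SetPositionAndRotation(pose.leftHandPosition, pose.leftHandRotation);
        rightHandProxy.SetPositionAndRotation(pose.rightHandPosition, pose.rightHandRotation);
    }
}
```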

2.2. Experiment Design

The experiment was designed to collect participant experiences and participant comments. All the actions of the participants (controller and HMD moves, button taps, etc.) were recorded but they were not analyzed for execution times or similar objective measures as the system was a preliminary prototype, and the main focus was on the implementation and experiences of the collaboration functionality.

Subjective Measures

The participants were asked to answer a set of subjective questions (Table 1) on a 7-point Likert scale ranging from 1 (Not-at-all) through 4 (Somewhat) to 7 (Very).

3. Experiment

3.1. Apparatus and Participants

The VR system was built using the Meta Quest 2 HMD (“Oculus Quest 2”, Oculus, https://www.oculus.com/quest-2/, accessed on 7 June 2021) and the Touch controllers (“Oculus Quest 2 Accessories”, Oculus, https://www.oculus.com/quest/accessories/, accessed on 7 June 2021) (Figure 3, right). The device uses inside-out tracking technology, which means that no base stations are required for tracking. The software was built on the Unity 3D development platform, version 2021.1 (“Unity Real-Time Development Platform”, Unity, https://unity.com/, accessed on 28 April 2021).
The skull model was segmented from the facial CBCT of an orthognathic surgery patient using Materialise ProPlan CMF 3.0.1. The CBCT was scanned with a Promax Mid by Planmeca at the Department of Cranio- and Dentomaxillofacial Radiology, Tampere University Hospital, Finland; the voxel size was 0.3 mm. This study was based on a retrospective registry data set and did not involve human experimentation or the use of human tissue samples. No patients were imaged for this study. The study followed Finnish legislation, the Medical Research Act (488/1999), the Act on Secondary Use of Health and Social Data (552/2019), and the European General Data Protection Regulation 2016/679. The use of the data was approved by the research director of Tampere University Hospital, Finland, on 1 October 2019 (vote number R20558).
Four expert participants were recruited (two female and two male, all oral and maxillofacial radiologists who were readily available), all of whom had experience of orthognathic–surgical planning. Two participants performed such plans weekly, one monthly, and one was familiar with the planning procedure. Their experience in dentistry and radiology ranged from 7 to 35 years and in CASS from 5 to 20 years. These radiologists were responsible for in-house CASS in their own hospital (in other hospitals, the division of responsibilities may differ). The participants ranged in age from 33 to 62, with an average age of 46. All had some experience of using VR equipment, but none identified as an expert user.

3.2. Procedure

Each participant was introduced to the study procedure and the equipment used. After the introduction, they read and signed a consent form and then filled in a background information form. The participants were asked to wear a Meta Quest 2 headset, hold the controllers, and practice using the system interface for as long as needed to feel confident that they could use the system. The practice sessions took between 10 and 20 min, and the facilitator was available to explain the functions and to help in trying them out. As the given task involved two participants, one was chosen to act as the radiologist who made the orthognathic–surgical plan, and the other as the surgeon who observed the planning process and modified it according to the surgical needs.
The task started with both participants wearing the headsets and holding the controllers. The facilitator started the system and the participants found themselves in a VR environment containing a skull model. The participants saw each other’s avatars in the VR environment (Figure 3, left).
The radiologist was expected to first study the skull model, then mark the cephalometric points and define the Frankfort horizontal plane and the maxillary occlusal plane for reference. To cut the lower jaw in two places, the radiologist defined two cutting planes; the surgeon guided the process according to the surgical needs and finally accepted the plan. The plane locations could be fine-tuned, after which the cutting was initiated. The radiologist then measured the length of the movement of the lower jaw relative to its original location. The experiment was then over, and the participants were asked to remove the headsets. As a final task, the participants filled in a subjective evaluation questionnaire (Table 1) and gave free-form comments. The whole procedure took around an hour, including the practice session.
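For illustration only, the following Unity C# snippet walks through the same steps using the hypothetical PlanningGeometry helpers sketched in Section 2.1; all coordinates are placeholders and do not correspond to patient data.

```csharp
using UnityEngine;

// Hypothetical walk-through of the planning steps described above.
public class PlanningWalkthrough : MonoBehaviour
{
    void Start()
    {
        // 1. Mark cephalometric landmarks (placeholder positions in model space, meters).
        Vector3 porionRight = new Vector3(-0.06f, 0.00f, 0.00f);
        Vector3 porionLeft  = new Vector3( 0.06f, 0.00f, 0.00f);
        Vector3 orbitale    = new Vector3( 0.03f, 0.00f, 0.08f);

        // 2. Define the Frankfort horizontal plane from three landmarks.
        Plane frankfort = PlanningGeometry.PlaneFromPoints(porionRight, porionLeft, orbitale);

        // 3. Define a cutting plane on the posterior mandible (placeholder points).
        Plane cut = PlanningGeometry.PlaneFromPoints(
            new Vector3(-0.05f, -0.06f, -0.02f),
            new Vector3(-0.04f, -0.03f, -0.03f),
            new Vector3(-0.05f, -0.05f,  0.01f));

        // 4. After cutting and moving the distal segment, measure the advancement
        //    of a point of interest relative to its original position.
        Vector3 pointBefore = new Vector3(0.00f, -0.07f, 0.09f);
        Vector3 pointAfter  = pointBefore + new Vector3(0.0f, 0.0f, 0.005f); // 5 mm forward
        float advancement = PlanningGeometry.MovementLength(pointBefore, pointAfter);
        Debug.Log($"Advancement: {advancement * 1000f:F1} mm");

        // Angle between the Frankfort plane and the cutting plane, for reference.
        Debug.Log($"Angle: {PlanningGeometry.AngleBetweenPlanes(frankfort, cut):F1} degrees");
    }
}
```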

4. Results

The interaction system was evaluated subjectively (Table 1); the results are visualized in Figure 4 and show that, on average, the evaluations are at the upper end of the scale. The participants who were selected to act as “radiologists” (who also happened to have the most planning experience) and one of the “surgeons” gave consistently high evaluations, while the other “surgeon” gave lower evaluations.
The participants gave a few general comments. One comment concerned the clarity of the user interface and the practice session, suggesting improvements: the user interface itself was fully functional, but it was found complicated to adopt during the relatively short experiment. The same issue was also mentioned in informal discussions with other participants after the experiments. Another comment proposed that the user should be able to scale the skull model; in the current implementation the skull model was of fixed size, and enabling scaling would make it easier to study details.

Observations

During the experiment, the facilitator observed the participants. The participants knew each other before the experiment, so the start of the collaboration was easy. There was no need to become familiar with the working styles of each other. This would also apply to professional use if colleagues often work together.
The participants appreciated the possibility of sharing the VR environment and were chatting about it (as observed by the facilitator) at the beginning. Even though the participants could only observe the position and orientation of the other participant’s HMD and controllers in the environment, this alone created a sense of shared involvement in the task.
The radiologists were able to complete the intended tasks, but more slowly than they expected. Participants repeatedly reported problems with the user interface, mainly due to its complexity. Although the interface was shown to be functional, there were too many operations to master in the limited practice time available. The participants felt confident enough immediately after the practice session but then discovered during the experiment that they were not ready; for example, when asked to perform certain operations, they could not find the necessary actions and had to ask the facilitator for help.

5. Discussion

Previous studies of VR collaboration [12] have shown that immersion and interactivity between collaborators are important factors for effective VR collaboration. The participants’ behavior indicated that the system was sufficiently immersive and enabled effective collaboration through comments, discussions, pointing out locations, and so on. The participants saw the benefit of everybody seeing the same 3D view of the medical image and being able to interactively agree on the details of the plan. In the given task, the cognition of spatial arrangements was also seen as important for planning, and VR collaboration has been shown to be an effective tool for this [16,17,18].
Participants were generally satisfied with what the system offered, but there were small differences between the participants in different roles in the experiment. The “radiologists” gave slightly more positive evaluations than the “surgeons”, which might be related to the less active role of the “surgeons” as their experience was probably less immersive. Liu et al. [13] previously reported similar results.
Those two participants with the most extensive planning experience gave the highest subjective evaluation values. This may have been because they were more likely to see the benefits of using a 3D environment, such as the easier perception of 3D structures. With less experience of the 2D planning process, the benefits of the prototype system were probably less obvious and the problems related to the user interface were more prevalent for these users.
It was found that the interface was complicated, which affected its use and caused participants to ask for advice during the experiment. The main reason was the combination of the number of different functions needed for the complex procedure and the limited time available to practice. The complexity of the user interface was somewhat inevitable because the surgical planning task requires the user to perform many different functions. In professional use, users are usually willing to spend a considerable amount of time training on user interfaces in order to master them. Still, it was recognized that the interaction design can be further improved in follow-up studies.
Most of the experiment participants approved of the prototype virtual reality system and expressed interest in using a similar implementation in real orthognathic–surgical operation planning cases. They saw the benefits of everybody seeing the same 3D view of the medical image and being able to interactively agree on the details of the plan.
The main limitation of the study was that only four participants were used in the experiment, which inevitably introduces statistical uncertainty into the results. The reason for this choice was to collect relevant comments on the VR system, so the participants had to be highly qualified experts in the field, which inevitably limited their number. Nevertheless, it is believed that this small cohort is still representative of potential end users.
Also, as this was the first time the VR system was used in simulated surgical planning tasks, not all operations were as smooth as they would need to be in an operational system. This is inevitable in user-centered research and development, but will improve through later iterations of the systems. The “surgeon” who voted 1 for “easiness” did not elaborate on the reasons for their evaluation but considered the system too complicated. They were also less experienced with VR devices, which may have affected the given evaluation.
When planning future research activities, the functionality of the virtual reality system should be limited to only those functions that are strictly necessary for the intended tasks, in order to keep the complexity of the system under control. This makes the experimental setup less realistic, but helps focus on the functions studied in specific experiments. To ensure that the system is truly fit for purpose, development work needs to be carried out in close collaboration with users who are the real experts in the field. Rapid utility for novice users should be balanced with the day-to-day needs of experienced users. Further work may also include longitudinal studies of the complete system to investigate learning effects.
There is an interesting possibility of not directly transferring the functionality of the current systems to the VR environment, as the expert users suggested, but of reinventing the ways the doctors use the system. This will presumably require changes in the working processes and a relearning of the ways the tasks are carried out, but this may finally lead to increased efficiency and improved user experience. However, such a further step is beyond the present work, and will require a long-term iterative process to produce successful results.
The system needs to be tested with more 3D radiological orthognathic surgery patient data, and the planning results need to be compared with traditional CASS for a meaningful assessment of VR usability. If VR CASS proves to be effective and reliable, it has the potential to be quickly adopted globally in clinical work. It would also be an effective tool for training medical specialists.
The more complex the surgical planning, the more specialists with different backgrounds will be involved, such as radiologists, oral surgeons, otolaryngologists (Ear, Nose, and Throat (ENT) surgeons), and plastic surgeons. The shortage of doctors in both primary and specialist care is growing worldwide [44,45]; therefore, there is a need for an efficient and trustworthy solution for CASS, even when the participants are in separate hospitals, countries, or even continents. In CASS, the loss of data in terms of image quality and the time lost when working separately are specific topics for future research. Cybersecurity is a specific issue for the implementation of the metaverse that will need to be addressed in future solutions when sensitive data are transferred between locations [11].

6. Conclusions

Four medical experts (oral and maxillofacial radiologists) who were familiar with orthognathic–surgical planning were recruited to evaluate a prototype implementation of a multi-user VR planning system that enables collaboration between participants. The experts found the collaborative VR system beneficial for the planning task and expressed interest in using a similar system in real operation planning cases. The results support the expectation that a medical metaverse could provide a viable collaborative medium for medical professionals. The user interface was considered to require further development, as well as more comprehensive training before deployment.

Author Contributions

Conceptualization, J.K., H.M., R.R. and J.J.; methodology, J.K. and H.M.; software, K.R. and J.M.; validation, J.K. and K.R.; formal analysis, J.K.; investigation, J.K.; resources, R.R.; data curation, J.K.; writing—original draft preparation, J.K. and J.J.; writing—review and editing, J.K., H.M., J.J., P.H. and R.R.; visualization, J.K.; supervision, R.R.; project administration, R.R.; funding acquisition, R.R. and J.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by Business Finland, as part of the Digital and Physical Immersion in Radiology and Surgery (decision number 930/31/2019) project, and by the Academy of Finland, as part of the Explainable AI Technologies for Segmenting 3D Imaging Data project (decision number 345448).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki. Ethical review and approval were waived for this study due to the nature of the experimental tasks and the participant population.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

The authors wish to thank the members of the TAUCHI research center for their help in preparing the experiment and the oral and maxillofacial radiologists who provided their specialist knowledge.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
2D   Two-Dimensional
3D   Three-Dimensional
BSSO   Bilateral Sagittal Split Osteotomy
CASS   Computer-Assisted Surgical Simulation
CMF   Craniomaxillofacial
ENT   Ear, Nose, and Throat (Surgeon)
HCI   Human–Computer Interaction
HMD   Head-Mounted Display
VR   Virtual Reality

References

  1. Park, S.M.; Kim, Y.G. A metaverse: Taxonomy, components, applications, and open challenges. IEEE Access 2022, 10, 4209–4251. [Google Scholar] [CrossRef]
  2. Dwivedi, Y.K.; Hughes, L.; Baabdullah, A.M.; Ribeiro-Navarrete, S.; Giannakis, M.; Al-Debei, M.M.; Dennehy, D.; Metri, B.; Buhalis, D.; Cheung, C.M.; et al. Metaverse beyond the hype: Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. Int. J. Inf. Manag. 2022, 66, 102542. [Google Scholar] [CrossRef]
  3. Wang, G.; Badal, A.; Jia, X.; Maltz, J.S.; Mueller, K.; Myers, K.J.; Niu, C.; Vannier, M.; Yan, P.; Yu, Z.; et al. Development of metaverse for intelligent healthcare. Nat. Mach. Intell. 2022, 4, 922–929. [Google Scholar] [CrossRef]
  4. Churchill, E.F.; Snowdon, D. Collaborative virtual environments: An introductory review of issues and systems. Virtual Real. 1998, 3, 3–15. [Google Scholar] [CrossRef]
  5. Andaluz, V.H.; Sánchez, J.S.; Sánchez, C.R.; Quevedo, W.X.; Varela, J.; Morales, J.L.; Cuzco, G. Multi-user industrial training and education environment. In Augmented Reality, Virtual Reality, and Computer Graphics: 5th International Conference, AVR 2018, Otranto, Italy, 24–27 June 2018, Proceedings, Part II 5; Springer: Berlin/Heidelberg, Germany, 2018; pp. 533–546. [Google Scholar]
  6. Berg, L.P.; Vance, J.M. An industry case study: Investigating early design decision making in virtual reality. J. Comput. Inf. Sci. Eng. 2017, 17, 011001. [Google Scholar] [CrossRef]
  7. Narasimha, S.; Dixon, E.; Bertrand, J.W.; Madathil, K.C. An empirical study to investigate the efficacy of collaborative immersive virtual reality systems for designing information architecture of software systems. Appl. Ergon. 2019, 80, 175–186. [Google Scholar] [CrossRef]
  8. Pedersen, G.; Koumaditis, K. Virtual Reality (VR) in the Computer Supported Cooperative Work (CSCW) Domain: A Mapping and a Pre-study on Functionality and Immersion. In Virtual, Augmented and Mixed Reality. Industrial and Everyday Life Applications: 12th International Conference, VAMR 2020, Held as Part of the 22nd HCI International Conference, HCII 2020, Copenhagen, Denmark, 19–24 July 2020, Proceedings, Part II 22; Springer: Berlin/Heidelberg, Germany, 2020; pp. 136–153. [Google Scholar]
  9. Schina, L.; Lazoi, M.; Lombardo, R.; Corallo, A. Virtual reality for product development in manufacturing industries. In Augmented Reality, Virtual Reality, and Computer Graphics: Third International Conference, AVR 2016, Lecce, Italy, 15–18 June 2016. Proceedings, Part I 3; Springer: Berlin/Heidelberg, Germany, 2016; pp. 198–207. [Google Scholar]
  10. Wolfartsberger, J.; Zenisek, J.; Wild, N. Supporting teamwork in industrial virtual reality applications. Procedia Manuf. 2020, 42, 2–7. [Google Scholar] [CrossRef]
  11. Chow, Y.W.; Susilo, W.; Li, Y.; Li, N.; Nguyen, C. Visualization and Cybersecurity in the Metaverse: A Survey. J. Imaging 2022, 9, 11. [Google Scholar] [CrossRef]
  12. Mütterlein, J.; Jelsch, S.; Hess, T. Specifics of collaboration in virtual reality: How immersion drives the intention to collaborate. In Proceedings of the PACIS 2018, Yokohama, Japan, 26–30 June 2018. [Google Scholar]
  13. Liu, K.Y.; Wong, S.K.; Volonte, M.; Ebrahimi, E.; Babu, S.V. Investigating the Effects of Leading and Following Behaviors of Virtual Humans in Collaborative Fine Motor Tasks in Virtual Reality. In Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Christchurch, New Zealand, 12–16 March 2022; pp. 330–339. [Google Scholar]
  14. Drey, T.; Albus, P.; der Kinderen, S.; Milo, M.; Segschneider, T.; Chanzab, L.; Rietzler, M.; Seufert, T.; Rukzio, E. Towards Collaborative Learning in Virtual Reality: A Comparison of Co-Located Symmetric and Asymmetric Pair-Learning. In Proceedings of the CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 30 April–4 May 2022; pp. 1–19. [Google Scholar]
  15. Burova, A.; Palma, P.B.; Truong, P.; Mäkelä, J.; Heinonen, H.; Hakulinen, J.; Ronkainen, K.; Raisamo, R.; Turunen, M.; Siltanen, S. Distributed Asymmetric Virtual Reality in Industrial Context: Enhancing the Collaboration of Geographically Dispersed Teams in the Pipeline of Maintenance Method Development and Technical Documentation Creation. Appl. Sci. 2022, 12, 3728. [Google Scholar] [CrossRef]
  16. Wu, T.H.; Wu, F.; Liang, C.J.; Li, Y.F.; Tseng, C.M.; Kang, S.C. A virtual reality tool for training in global engineering collaboration. Univers. Access Inf. Soc. 2019, 18, 243–255. [Google Scholar] [CrossRef]
  17. Zaker, R.; Coloma, E. Virtual reality-integrated workflow in BIM-enabled projects collaboration and design review: A case study. Vis. Eng. 2018, 6, 4. [Google Scholar] [CrossRef]
  18. Tea, S.; Panuwatwanich, K.; Ruthankoon, R.; Kaewmoracharoen, M. Multiuser immersive virtual reality application for real-time remote collaboration to enhance design review process in the social distancing era. J. Eng. Des. Technol. 2021, 20, 281–298. [Google Scholar] [CrossRef]
  19. Heinonen, H.; Burova, A.; Siltanen, S.; Lähteenmäki, J.; Hakulinen, J.; Turunen, M. Evaluating the Benefits of Collaborative VR Review for Maintenance Documentation and Risk Assessment. Appl. Sci. 2022, 12, 7155. [Google Scholar] [CrossRef]
  20. Chheang, V.; Saalfeld, P.; Huber, T.; Huettl, F.; Kneist, W.; Preim, B.; Hansen, C. Collaborative virtual reality for laparoscopic liver surgery training. In Proceedings of the 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), San Diego, CA, USA, 9–11 December 2019; pp. 1–17. [Google Scholar]
  21. Schott, D.; Saalfeld, P.; Schmidt, G.; Joeres, F.; Boedecker, C.; Huettl, F.; Lang, H.; Huber, T.; Preim, B.; Hansen, C. A vr/ar environment for multi-user liver anatomy education. In Proceedings of the 2021 IEEE Virtual Reality and 3D User Interfaces (VR), Lisboa, Portugal, 27 March–1 April 2021; pp. 296–305. [Google Scholar]
  22. Kyaw, B.M.; Saxena, N.; Posadzki, P.; Vseteckova, J.; Nikolaou, C.K.; George, P.P.; Divakar, U.; Masiello, I.; Kononowicz, A.A.; Zary, N.; et al. Virtual reality for health professions education: Systematic review and meta-analysis by the digital health education collaboration. J. Med. Internet Res. 2019, 21, e12959. [Google Scholar] [CrossRef] [PubMed]
  23. Luxenburger, A.; Prange, A.; Moniri, M.M.; Sonntag, D. Medicalvr: Towards medical remote collaboration using virtual reality. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct, Heidelberg, Germany, 12–16 September 2016; pp. 321–324. [Google Scholar]
  24. Liu, R.; Wen, X.; Jiang, M.; Yang, G.; Zhang, C.; Chen, X. Multiuser collaborative illustration and visualization for volumetric scientific data. Softw. Pract. Exp. 2021, 51, 1080–1096. [Google Scholar] [CrossRef]
  25. Butnaru, T.; Girbacia, F. Collaborative pre-surgery planning in a tele-immersive environment using vr technology. In International Conference on Advancements of Medicine and Health Care through Technology: 23–26 September, 2009, Cluj-Napoca, Romania; Springer: Berlin/Heidelberg, Germany, 2009; pp. 9–14. [Google Scholar]
  26. Pulijala, Y.; Ma, M.; Pears, M.; Peebles, D.; Ayoub, A. An innovative virtual reality training tool for orthognathic surgery. Int. J. Oral Maxillofac. Surg. 2018, 47, 1199–1205. [Google Scholar] [CrossRef]
  27. Zaragoza-Siqueiros, J.; Medellin-Castillo, H.I.; de la Garza-Camargo, H.; Lim, T.; Ritchie, J.M. An integrated haptic-enabled virtual reality system for orthognathic surgery planning. Comput. Methods Biomech. Biomed. Eng. 2019, 22, 499–517. [Google Scholar] [CrossRef] [PubMed]
  28. Gómez-Tone, H.C.; Martin-Gutierrez, J.; Bustamante-Escapa, J.; Bustamante-Escapa, P. Spatial skills and perceptions of space: Representing 2D drawings as 3D drawings inside immersive virtual reality. Appl. Sci. 2021, 11, 1475. [Google Scholar] [CrossRef]
  29. Sutherland, I.E. A head-mounted three dimensional display. In Proceedings of the Fall Joint Computer Conference, Part I, San Francisco, CA, USA, 9–11 December 1968; pp. 757–764. [Google Scholar]
  30. Javaid, M.; Haleem, A. Virtual reality applications toward medical field. Clin. Epidemiol. Glob. Health 2020, 8, 600–605. [Google Scholar] [CrossRef]
  31. Monson, L.A. Bilateral sagittal split osteotomy. In Seminars in Plastic Surgery; Thieme Medical Publishers: New York, NY, USA, 2013; Volume 27, pp. 145–148. [Google Scholar]
  32. Alkhayer, A.; Piffkó, J.; Lippold, C.; Segatto, E. Accuracy of virtual planning in orthognathic surgery: A systematic review. Head Face Med. 2020, 16, 34. [Google Scholar] [CrossRef] [PubMed]
  33. Hsu, S.S.P.; Gateno, J.; Bell, R.B.; Hirsch, D.L.; Markiewicz, M.R.; Teichgraeber, J.F.; Zhou, X.; Xia, J.J. Accuracy of a computer-aided surgical simulation protocol for orthognathic surgery: A prospective multicenter study. J. Oral Maxillofac. Surg. 2013, 71, 128–142. [Google Scholar] [CrossRef] [PubMed]
  34. Zinser, M.J.; Sailer, H.F.; Ritter, L.; Braumann, B.; Maegele, M.; Zöller, J.E. A paradigm shift in orthognathic surgery? A comparison of navigation, computer-aided designed/computer-aided manufactured splints, and “classic” intermaxillary splints to surgical transfer of virtual orthognathic planning. J. Oral Maxillofac. Surg. 2013, 71, 2151-e1. [Google Scholar] [CrossRef] [PubMed]
  35. Haas, O., Jr.; Becker, O.; De Oliveira, R. Computer-aided planning in orthognathic surgery—systematic review. Int. J. Oral Maxillofac. Surg. 2015, 44, 329–342. [Google Scholar] [CrossRef] [PubMed]
  36. Ayoub, A.; Pulijala, Y. The application of virtual reality and augmented reality in Oral & Maxillofacial Surgery. BMC Oral Health 2019, 19, 238. [Google Scholar]
  37. Xia, J.; Ip, H.H.S.; Samman, N.; Wong, H.T.; Gateno, J.; Wang, D.; Yeung, R.W.; Kot, C.S.; Tideman, H. Three-dimensional virtual-reality surgical planning and soft-tissue prediction for orthognathic surgery. IEEE Trans. Inf. Technol. Biomed. 2001, 5, 97–107. [Google Scholar]
  38. Pidel, C.; Ackermann, P. Collaboration in virtual and augmented reality: A systematic overview. In Augmented Reality, Virtual Reality, and Computer Graphics: 7th International Conference, AVR 2020, Lecce, Italy, 7–10 September 2020, Proceedings, Part I 7; Springer: Berlin/Heidelberg, Germany, 2020; pp. 141–156. [Google Scholar]
  39. MacKenzie, I.S. Human-Computer Interaction: An Empirical Research Perspective; Morgan Kaufmann: Burlington, MA, USA, 2013. [Google Scholar]
  40. Li, Z.; Kiiveri, M.; Rantala, J.; Raisamo, R. Evaluation of haptic virtual reality user interfaces for medical marking on 3D models. Int. J. Hum. Comput. Stud. 2021, 147, 102561. [Google Scholar] [CrossRef]
  41. Ismail, D.B.; Aloui, K.; Naceur, M.S. Matching cephalometric points via Ant Colony Optimization. In Proceedings of the 2020 4th International Conference on Advanced Systems and Emergent Technologies (IC_ASET), Hammamet, Tunisia, 15–18 December 2020; pp. 289–293. [Google Scholar]
  42. Mangal, U.; Hwang, J.J.; Jo, H.; Lee, S.M.; Jung, Y.H.; Cho, B.H.; Cha, J.Y.; Choi, S.H. Effects of changes in the Frankfort horizontal plane definition on the three-dimensional cephalometric evaluation of symmetry. Appl. Sci. 2020, 10, 7956. [Google Scholar] [CrossRef]
  43. Rosati, R.; Rossetti, A.; De Menezes, M.; Ferrario, V.F.; Sforza, C. The occlusal plane in the facial context: Inter-operator repeatability of a new three-dimensional method. Int. J. Oral Sci. 2012, 4, 34–37. [Google Scholar] [CrossRef]
  44. Sheldon, G.F.; Ricketts, T.C.; Charles, A.; King, J.; Fraher, E.P.; Meyer, A. The global health workforce shortage: Role of surgeons and other providers. Adv. Surg. 2008, 42, 63–85. [Google Scholar] [CrossRef]
  45. Ahmed, H.; Carmody, J.B. On the looming physician shortage and strategic expansion of graduate medical education. Cureus 2020, 12, e9216. [Google Scholar] [CrossRef]
Figure 1. Two points located on the skull model surface (left). A cut plane defined on the lower jaw by three points (middle). The plane location can be edited by moving the points. Two separated parts of the lower jaw after the cut operation (right).
Figure 2. Directional handles to move the model part on the main axes (left). The part can be moved in the given directions. Rotational handles to rotate the part around the main axes (middle). The movement length and direction can be measured by observing the movement of any point of interest set on the model (right). The bottom row in the table indicates the move (9.3 mm to the right, etc.) from the original position.
Figure 3. The collaborating participants can point to specific locations for other participants while discussing (left). Seeing the other participant in the VR environment (middle). The system only shows the HMD (as a blue box on the left) and the hand controllers of the other participant (as two red boxes in the middle), which was enough to understand how the other participant sees the medical imaging data. The view is from the observer point-of-view while the other participant was doing the plan. The movements of the hand controllers during the design added to the sense of immersion. A user with an Oculus Quest 2 HMD and the Touch controllers (right).
Figure 4. The distribution of subjective evaluation values for each attribute (Table 1). The values selected by “radiologist” participants are marked by an “x” and the values selected by “surgeon” participants are marked by a dot. The “radiologist” participants and one of the “surgeon” participants consistently gave high evaluations for all attributes.
Table 1. The questions about the participant’s subjective impressions.
S1   How successful were you in accomplishing what you were trying to do?
S2   How confident were you in your ability to use the tools?
S3   How efficient was the system to use?
S4   How easy was the system to use?
S5   Could you imagine using the system for your daily work? (How likely would you be to use the system?)

Share and Cite

MDPI and ACS Style

Kangas, J.; Järnstedt, J.; Ronkainen, K.; Mäkelä, J.; Mehtonen, H.; Huuskonen, P.; Raisamo, R. Towards the Emergence of the Medical Metaverse: A Pilot Study on Shared Virtual Reality for Orthognathic–Surgical Planning. Appl. Sci. 2024, 14, 1038. https://doi.org/10.3390/app14031038
