Review

The Application of Augmented Reality Technology in Perioperative Visual Guidance: Technological Advances and Innovation Challenges

School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(22), 7363; https://doi.org/10.3390/s24227363
Submission received: 9 October 2024 / Revised: 9 November 2024 / Accepted: 16 November 2024 / Published: 19 November 2024
(This article belongs to the Section Biomedical Sensors)

Abstract

In contemporary medical practice, perioperative visual guidance technology has become a critical element in enhancing the precision and safety of surgical procedures. This study provides a comprehensive review of the advancements in the application of Augmented Reality (AR) technology for perioperative visual guidance. The review begins with a retrospective look at the evolution of AR technology, including its initial applications in neurosurgery. It then delves into the technical challenges that AR faces in areas such as image processing, 3D reconstruction, spatial localization, and registration, underscoring the importance of improving the accuracy of AR systems and ensuring their stability and consistency in clinical use. Finally, the review looks forward to how AR technology could be further advanced in medical applications through the integration of cutting-edge technologies such as skin electronic devices, and how the incorporation of machine learning could significantly enhance the accuracy of AR visual systems. As technology continues to advance, there is ample reason to believe that AR will be seamlessly integrated into medical practice, ushering the healthcare field into a new “Golden Age”.

1. Introduction

In modern medicine, the success of surgical operations depends not only on the surgery itself but also on scientifically effective perioperative management. As an interdisciplinary and comprehensive management discipline, perioperative management has shifted from a focus on isolated surgical techniques to a patient-centered, holistic treatment approach [1]. Innovative technologies developed around perioperative management, such as digital healthcare, remote monitoring, and wearable devices, are propelling the field of health engineering towards a more scientific, systematic, and humanized direction [2,3]. Visual guidance technology plays a crucial role in surgery: by providing real-time imaging information, it helps surgeons identify and locate the surgical area during the operation, reduces damage to surrounding tissues, and improves surgical outcomes [4,5]. Its role is particularly prominent in minimally invasive surgery [6] and precision medicine [7]. However, traditional surgeries rely on the surgeon’s experience and two-dimensional image assistance, which has clear limitations when dealing with complex or minimally invasive procedures [8]. In contrast, Augmented Reality (AR) technology, by superimposing virtual information on the patient’s actual anatomical structure, provides surgeons with a more intuitive and precise surgical navigation tool, greatly enhancing the success rate and safety of surgeries [9].
It is widely acknowledged that the origins of AR can be traced back to the 1950s, initially conceptualized by Morton Heilig [10]. The technology advanced in the 1960s with Ivan Sutherland’s invention of the head-mounted display (HMD) [11], which remains a prevalent format for presenting AR imagery to this day.
The first concrete application of AR technology dates back to 1969, when Wright-Patterson Air Force Base in Ohio developed a night flight assistance system based on AR [12]. This system, equipped with sensors installed on the aircraft, enhanced the pilot’s visibility of the real world under adverse conditions such as obstruction by the aircraft’s structure or insufficient lighting. It superimposed flight data and target information onto the pilot’s field of view and provided audio cues to assist with orientation. This innovation not only improved the safety of night flights but also laid the groundwork for the future development of AR technology. However, it was not until 1997 that a clear technical definition of AR was provided by Azuma in his seminal survey of the field [13]. AR is defined as the integration of real and virtual environments, registered in 3D and interacting in real time, a definition that is now widely accepted.
The field of neurosurgery was among the pioneers in attempting to incorporate AR for visual guidance during the perioperative period. As early as 1998, Masutani and colleagues published a study on an AR visualization system to support endovascular neurosurgery [14,15]. This system reconstructed blood vessel models based on X-ray images, allowing surgeons to simultaneously view real catheters and reconstructed vascular models on a display monitor. Although the system still had limitations in the accuracy of model registration and image clarity, it offered a novel approach to perioperative visual guidance that included preoperative path planning and intraoperative real-time navigation.
Over the past two decades, perioperative visual guidance based on AR technology has undergone significant transformation, with mature solutions providing substantial support for surgical procedures. The application of AR technology in perioperative management has evolved from initial concept validation to a mature clinical tool. The development of this technology has not only enhanced the precision and safety of surgeries but also improved patient outcomes. However, despite the notable advancements in the application of AR technology in surgical procedures, the field is still rapidly evolving, with new technologies and methodologies emerging continuously.
The objective of this study is to conduct a comprehensive review of AR technology, focusing on its technological advancements and the innovation challenges that accompany them. By integrating the latest findings from various research fields, the study aims to provide medical professionals with a comprehensive perspective on the application of AR technology in clinical practice, to assess the maturity of existing technologies, and to guide practitioners in integrating these innovative tools into their surgical workflows, thereby enhancing the precision and safety of surgeries.

2. Methods

2.1. Related Works

By integrating multiple technologies, AR-based surgical visual guidance systems can offer enhanced precision, more intuitive interactivity, and a higher degree of customization in the surgical experience. This not only increases the likelihood of successful surgical outcomes but also significantly improves patient safety. A prime example is Proprio, a company that has combined AI, computer vision, AR, and robotic technology to offer innovative 3D medical imaging and data management solutions for surgeries. Their technology assists surgeons in identifying surgical obstacles, planning operations, and sharing surgical data in real-time during the perioperative period [16]. Currently, Proprio has implemented pilot programs for neurosurgery and orthopedic surgery at institutions such as Seattle Children’s Hospital. Beyond the commercially available surgical guidance products that have been reported, exploring the potential of AR technology in surgical procedures is also a research topic for many scholars, which has given rise to numerous research prototypes. Table 1 lists some of the research prototypes published in recent years that utilize AR technology in perioperative surgical guidance. It is worth noting that precision, in the context of Augmented Reality (AR) technology, refers to the consistency and repeatability of measurements or computational outcomes. Within the realm of AR, precision pertains to the system’s ability to yield uniformly aligned image overlays and positioning information across various time points and conditions. A system with high precision can provide invariant image superposition and localization data in every operation, and it is capable of swiftly and accurately updating the position of virtual imagery in response to changes in the patient’s position during surgery, thereby maintaining congruence with the actual anatomical structures.
These technologies range from Optical See-Through Head-Mounted Displays (OST-HMDs) to AR navigation systems based on 3D displays, as well as advanced devices such as the Microsoft HoloLens, and have shown significant potential in improving surgical accuracy. For instance, the system developed by Chen et al. achieved a high-accuracy 3D reconstruction with an average error of only 0.32 mm in minimally invasive knee surgery [19]. However, despite showing promise in laboratory settings, most of these technologies have not yet undergone clinical testing, which limits a comprehensive assessment of their performance in actual surgical environments. Moreover, while some studies provide specific accuracy data, such as the Target Registration Error (TRE) of 10.62 ± 5.90 mm in Ackermann’s research [18], overall, these technologies still require further validation and improvement in terms of accuracy and clinical applicability. Studies such as that by Creighton et al., which demonstrated the potential of the Microsoft HoloLens 1 in Total Shoulder Arthroplasty (TSA) and highlighted the benefits of depth-sensing cameras, nonetheless lack clear accuracy values and clinical testing, making it difficult to evaluate their effectiveness in actual surgeries [20]; this is a limitation that needs to be addressed in future research. Therefore, to promote the transition of these promising technologies from the lab to the operating room, future research should focus on addressing the limitations of current technologies, such as improving accuracy, optimizing hardware performance, and enhancing system stability and user-friendliness.
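To make accuracy figures of this kind concrete, the sketch below shows how a target registration error is typically computed: an estimated rigid transform is applied to target points defined in image space, and the Euclidean distance to their true patient-space positions is measured. The transform, coordinates, and noise level are hypothetical values chosen purely for illustration, not data from the cited studies.

```python
import numpy as np

def target_registration_error(R, t, targets_image, targets_patient):
    """Euclidean distance between registered and true target positions.

    R (3x3) and t (3,) are the estimated rigid transform mapping image
    coordinates into patient (tracker) coordinates; targets_* are Nx3 arrays
    of corresponding target points that were NOT used to estimate R and t.
    """
    mapped = targets_image @ R.T + t              # apply the estimated transform
    return np.linalg.norm(mapped - targets_patient, axis=1)

# Hypothetical values for illustration only (millimetres)
R = np.eye(3)
t = np.array([0.5, -0.2, 1.0])
targets_image = np.array([[10.0, 20.0, 30.0], [40.0, 5.0, 12.0]])
targets_patient = targets_image + t + np.random.normal(0, 0.3, (2, 3))

tre = target_registration_error(R, t, targets_image, targets_patient)
print(f"TRE per target (mm): {tre}, mean: {tre.mean():.2f}")
```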
In the academic community, experts generally affirm the advantages of applying AR visual guidance systems during the perioperative period. Despite this affirmation, however, these systems still face challenges during their development that have not yet been fully resolved and that limit their effectiveness and accuracy in surgical navigation. Currently, the biggest challenge lies in improving the implementation accuracy of AR systems. Lin [17] and Liounakos [23] both emphasized the importance of refining AR systems from a technical standpoint to ensure their stability and consistency in clinical applications. Fida, in research on integrating AR technology into the surgical process, summarized the direction of AR technology advancement into two main aspects: optimizing tracking patterns and accurately registering generated images with the real visual field [24]. This view has been widely supported in other similar studies. In addition, hardware limitations, such as the fixed focal length of the HoloLens 1 and the insufficient performance of its depth-sensing camera, have been identified by Creighton [20] and others as the main sources of error. Therefore, continued attention to and improvement of the integrated hardware used in these systems are crucial for enhancing the application of AR technology in surgical navigation.
A rigorous analysis of the current literature and practical applications of perioperative AR visual guidance has revealed three critical challenges essential for enhancing the performance of these systems.
Challenge 1: Image processing and 3D reconstruction proficiency. The accurate extraction and reconstruction of patient anatomy from preoperative imaging require further advancements in image processing and 3D reconstruction technologies. This includes the deployment of advanced deep learning models, such as U-Net and DeepLab, to segment organs, tissues, and blood vessels from volumetric imaging data like CT and MRI. These models are crucial for providing the anatomical models necessary for AR. The primary challenge is to increase the accuracy and processing speed of these models to enable the real-time delivery of precise 3D reconstructions during surgical procedures.
Challenge 2: Seamless integration of integrated hardware systems. Integrated hardware systems must overcome the challenge of integrating seamlessly with existing surgical processes. This necessitates the development of more efficient and stable technical solutions to achieve a synergistic operation between hardware devices and surgical navigation software. An AR surgical navigation system comprises three core components: virtual image or environment modeling, registration of the virtual environment with real space, and display technology that combines the virtual and real environments. The challenge is to ensure that these components function stably and accurately throughout surgical procedures, in harmony with the operating room’s workflow.
Challenge 3: Precise spatial localization and registration technology. Precise spatial localization and registration technologies are fundamental to achieving high-precision surgical navigation. These technologies link the virtual scene with the real scene through 3D registration techniques, anchoring the virtual scene to the real scene’s coordinate system and ensuring a shared spatial context between the virtual and real environments. The challenge is to improve the accuracy and stability of registration and to reduce any latency or jitter that may be perceived by users during surgical interventions.
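Challenge 3 can be pictured in code as a chain of rigid transforms that anchors the virtual model to the real scene’s coordinate system. The sketch below uses hypothetical identity rotations and translations (in millimetres) to compose model-to-CT, CT-to-patient, and patient-to-display transforms; in a live system the last transform would be refreshed every frame from tracking data.

```python
import numpy as np

def rigid(Rmat, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = Rmat, t
    return T

# Hypothetical transforms (identity rotations for brevity):
# model coordinates -> CT coordinates (from preoperative planning),
# CT -> patient/tracker coordinates (from registration),
# patient -> display/HMD coordinates (from real-time tracking).
T_ct_model    = rigid(np.eye(3), [5.0, 0.0, 0.0])
T_patient_ct  = rigid(np.eye(3), [0.0, -3.0, 1.0])
T_display_pat = rigid(np.eye(3), [0.0, 0.0, 250.0])

# Anchoring the virtual model in display space is a single matrix chain;
# it must be re-evaluated whenever tracking updates T_display_pat.
T_display_model = T_display_pat @ T_patient_ct @ T_ct_model

vertex_model = np.array([10.0, 10.0, 10.0, 1.0])   # a model vertex (homogeneous)
vertex_display = T_display_model @ vertex_model
print(vertex_display[:3])
```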
The use of AR technology has received notable attention in the medical field in recent years. Jud et al. conducted a systematic review summarizing the applicability of AR technology in orthopedic surgery, including instrument/implant placement, osteotomy, oncologic surgery, trauma surgery, and surgical training and education, and highlighted the potential of AR technology to improve surgical accuracy and reduce radiation exposure [25]. Kim et al. focused on the use of VR and AR technologies in orthopedic surgery, covering preoperative planning, surgical navigation, and training, and discussed how these technologies can improve surgical accuracy and patient satisfaction with appearance by simulating the surgical process [26]. In addition, a review by Barcali et al. systematically analyzed literature published between 2019 and 2022, focusing on the use of AR technology in orthopedic, maxillofacial, and oncology surgery and evaluating the main solutions for AR technology, including the Microsoft HoloLens optical viewer and marker-based tracking and registration methods [27]. Studies such as these are mainly dedicated to demonstrating the positive applications of AR technology in healthcare, although they also mention the need for technological integration, operational challenges, and user experience improvements. However, we did not retrieve any review devoted primarily to the challenges that AR technology faces in the perioperative period and to current state-of-the-art technological solutions, with a view to providing valuable insights and directions for improvement in future research and clinical practice.
The contribution of this study lies in its comprehensive review of the latest advancements in AR technology in the fields of surgical planning, navigation, and education, and it goes further by proposing innovative approaches to tackle technological challenges. Compared to the existing literature, this research delves deeper into how cutting-edge technologies like machine learning algorithms and skin electronic devices can enhance the accuracy and stability of AR visual guidance systems in clinical applications. Additionally, it offers unique insights into future technological trends, including the integration of AR with artificial intelligence, improvements in real-time tracking technologies, and the potential of skin electronic devices to increase the comfort and precision of surgical procedures. These discussions not only provide valuable information resources for medical professionals but also guide technology developers in setting future research directions, collectively promoting the advancement of surgical navigation technology to new heights. Figure 1 illustrates the technical composition of the AR visual system.

2.2. Literature Search Strategy

We selected papers published between 2014 and 2024 from the Web of Science, PubMed, and IEEE Xplore databases. Utilizing four sets of search strings, we conducted our search across titles, keywords, and abstracts, culminating in the inclusion of 193 studies:
  • First set: ((“AR *” or “Hybrid Reality *” or “Augmented Reality *”) and (“surgery” or “surgical operation *” or “perioperative period *” or “preoperative planning *”) and (“visual guidance *” or “visual direct *” or “visual lead *”));
  • Second set: ((“AR *” or “hybrid reality *” or “Augmented Reality *”) and (“medicine” or “medical field *” or “medical Science *”) and (“3D reconstruction *” or “3D modeling *”));
  • Third set: ((“AR *” or “hybrid reality *” or “Augmented Reality *”) and (“spatial positioning *” or “spatial navigation *” or “registration *”));
  • Fourth set: ((“AR *” or “Hybrid Reality *” or “Augmented Reality *”) and (“display device *” or “display screen *” or “HMD *”)).
Furthermore, among these 193 articles, 7 were review articles. We employed the aforementioned search strings to sift through the reference lists of these seven reviews, which yielded an additional 21 papers. By examining the abstracts of these papers, we initially eliminated duplicates. Subsequently, through a full-text review, we further excluded papers that met one or more of the following criteria: non-empirical studies; non-original research; studies with unclear applications of AR technology; studies with incomplete or inaccessible result data; and studies primarily focusing on themes unrelated to surgical operations. Ultimately, we selected 79 articles that serve as the principal sources of information for this paper, as illustrated in Figure 2.

3. Image Processing and 3D Reconstruction for AR Visual Guidance Systems

Before surgical procedures, basic anatomical structures can be identified through medical imaging, typically Computed Tomography (CT) [28] or Magnetic Resonance Imaging (MRI) [29]. Although these images contain all the necessary information about tumors, major blood vessels, and the tissue environment, surgeons may find it difficult to perceive the relationships between these structures during surgical planning and execution. Therefore, providing surgeons with tools that can simplify the interpretation of traditional images seems crucial. Among these tools, three-dimensional (3D) visualization shows significant advantages compared to the standard 2D slice visualization [30,31], which is also the fundamental reason why many studies introduce AR as a solution. The realization of AR effects is based on two main processes: the 3D modeling and visualization of anatomical or pathological structures appearing in medical images and the deployment of this visualization onto the actual patient. Among them, medical image reconstruction is the most basic and important technical element in building an AR visual guidance system [32].
The evolution of 3D reconstruction techniques grounded in medical imaging harks back to the 1970s, with early scholars pioneering the reconstruction of 3D models from sequential two-dimensional images through ray tracing technology [33]. As computer graphics and vision technologies have matured, diverse 3D reconstruction methodologies have surfaced, encompassing voxel-based, surface-based, and regularization-based approaches [34]. The dawn of the 21st century witnessed a surge in the sophistication of medical imaging equipment, which in turn accelerated the evolution of 3D reconstruction technologies. Today, such technologies, leveraging CT and MRI, are instrumental in diagnosing a spectrum of conditions, including tumors and cardiovascular diseases [35]. The utility of 3D reconstruction has expanded into realms such as virtual surgical simulations, surgical planning, and postoperative evaluations [36]. The embrace of AR-based visual guidance in the perioperative period by the clinical community is significantly attributed to the transformative improvements in 3D reconstruction outcomes facilitated by machine learning technologies. Handling vast and intricate datasets to identify useful patterns and features has long posed a formidable challenge for traditional computational methods. Machine learning, particularly deep learning, has risen to the forefront with its ability to automate the extraction of features from 2D imaging data and construct accurate 3D models using algorithms such as Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs). Moreover, machine learning technologies possess the innate capability to continually learn from new data, thereby optimizing the reconstruction process and enhancing the accuracy and reliability of the models. As the volume of medical imaging data continues to grow, the application of machine learning is demonstrating immense potential and value in managing this data, reducing noise, improving image resolution, and automating clinical workflows. Consequently, the integration of machine learning techniques into the realm of 3D medical imaging reconstruction not only elevates the quality of diagnosis and treatment but also propels the advancement of personalized and precision medicine. For example, CNNs are adept at automatically extracting features from CT and MRI images to accomplish 3D reconstruction [37]. Furthermore, innovative reconstruction methods leveraging embeddings have emerged, offering a novel angle by converting images into textual data and reconstructing 3D models with the aid of embeddings [38]. Table 2 encapsulates a compendium of recent scholarly milestones in harnessing machine learning for the 3D reconstruction of medical imagery. Accuracy, in the context of AR technology, denotes the degree to which virtual imagery or data corresponds to the patient’s actual anatomical structures. A system possessing high accuracy guarantees that the virtual information is precisely overlaid on the corresponding real-world locations, devoid of any systemic deviation.
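Turning a segmented volume into the surface model that an AR display can render is commonly done with a standard isosurface algorithm such as marching cubes; this step is not specific to any study cited here. The sketch below uses scikit-image on a synthetic spherical mask standing in for a segmented organ, assuming 1 mm isotropic voxels.

```python
import numpy as np
from skimage import measure

# Hypothetical binary segmentation mask (e.g., a U-Net output thresholded at 0.5);
# here a synthetic sphere stands in for a segmented organ in a CT volume.
zz, yy, xx = np.mgrid[:64, :64, :64]
mask = ((xx - 32) ** 2 + (yy - 32) ** 2 + (zz - 32) ** 2) < 20 ** 2

# Marching cubes extracts a triangle mesh from the voxel mask; `spacing`
# carries the voxel size (mm) so the mesh lives in physical coordinates
# and can later be registered to the patient for AR overlay.
verts, faces, normals, _ = measure.marching_cubes(
    mask.astype(np.float32), level=0.5, spacing=(1.0, 1.0, 1.0)
)
print(verts.shape, faces.shape)   # (N, 3) vertices and (M, 3) triangle indices
```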
In the realm of 3D reconstruction, the application of machine learning methods, particularly deep learning algorithms, has achieved remarkable breakthroughs, especially in the field of medical imaging technology. For instance, in a 2024 study by Prakash et al., a Conditional Generative Adversarial Network (cGAN) was utilized to distinguish between tumor and non-tumor tissues in CT scan images, achieving a diagnostic accuracy rate of 96.5% [39]. This significant outcome not only demonstrates the immense potential of deep learning in enhancing the precision of clinical diagnostics but also reflects its exceptional capability in handling complex medical data. Similarly, in 2023, Zi et al. employed a U-Net architecture in the Brain Tumor Segmentation Challenge (BraTS), obtaining a Dice Coefficient of 85.3% and an Intersection over Union (IoU) of 78.9% [40]. These accomplishments further confirm the effectiveness and prospective application of deep learning networks in image segmentation tasks.
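The Dice coefficient and Intersection over Union quoted above are simple overlap ratios between a predicted and a ground-truth mask; a minimal illustration with tiny hypothetical masks follows.

```python
import numpy as np

def dice_and_iou(pred, truth):
    """Overlap metrics between a predicted and a ground-truth binary mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    iou = inter / union
    return dice, iou

# Tiny hypothetical masks for illustration only
pred  = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
truth = np.array([[1, 1, 0], [0, 0, 1], [0, 0, 0]])
print(dice_and_iou(pred, truth))   # Dice = 2*2/(3+3) ≈ 0.67, IoU = 2/4 = 0.5
```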
In the realm of feature extraction, the Lineformer model proposed by Cai et al. captures the internal structure of objects by simulating the interdependencies within X-ray line segments, achieving significant performance enhancements in novel view synthesis and CT reconstruction tasks compared to existing NeRF-based methods [42]. The innovation of this model lies in its ability to address the sparsity of X-ray images by focusing on the spatial relationships between line segments, which is a relatively novel approach in medical image analysis. Shen et al. utilized Recurrent Neural Networks (RNNs) to extract nonlinear features from CT images, achieving an average reconstruction accuracy of 62.9% based on the Structural Similarity Index (SSIM) [43]. This result highlights the reliability of deep learning in processing complex medical data and extracting deep-level features; RNNs in particular offer strong modeling capabilities for time-dependent sequences, which is crucial for handling continuous medical image data. In 2023, Hong et al. combined Generative Adversarial Networks (GANs) with Long Short-Term Memory networks (LSTMs) for the reconstruction of lung tumors, showcasing the superiority of their method through Hamming and Euclidean distance metrics [44]. This combination of generative and sequential models not only produces high-quality images but also optimizes the reconstruction process by leveraging the temporal information captured by LSTMs. Perdios et al., in 2019, focused on the reconstruction, recovery, and enhancement of ultrasound images, demonstrating that CNN-processed images could improve the performance of vector flow estimation in certain respects [45]. This finding is of significant importance for enhancing the diagnostic value of ultrasound images.
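The SSIM-based accuracy figure reported by Shen et al. rests on the Structural Similarity Index, which compares a reconstructed image with its reference. The snippet below, using scikit-image on synthetic data, shows how such a score is typically obtained; it does not reproduce the cited experiments.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

# Hypothetical reference slice and reconstruction (stand-ins for CT data)
rng = np.random.default_rng(0)
reference = rng.random((128, 128))
reconstruction = reference + rng.normal(0, 0.05, (128, 128))

# SSIM near 1.0 means the reconstruction closely preserves local structure
score = ssim(reference, reconstruction,
             data_range=reconstruction.max() - reconstruction.min())
print(f"SSIM: {score:.3f}")
```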
Although these studies demonstrate the immense potential of machine learning in 3D reconstruction, several challenges remain. For instance, deep learning models typically require substantial computational resources, which may limit their application in resource-constrained environments. Additionally, model performance heavily relies on the quality and diversity of training data, and obtaining and annotating a large volume of training data for 3D reconstruction tasks is challenging. The generalizability, interpretability, and real-time performance of deep learning models are also critical issues in this field. To overcome these challenges, researchers are developing more efficient algorithms, such as the Weight Pruning U-Net (WP-UNet) proposed by Prakash et al. and the Deep Learning-based Iterative Reconstruction (DL-MBIR) strategy suggested by Ziabari et al., aiming to optimize computational efficiency and the practicality of model deployment [46]. These methods aim to reduce the computational resource requirements by decreasing the number of model parameters and improving parallel processing capabilities, while maintaining or enhancing model performance.
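Weight pruning, the general idea behind efficiency-oriented designs such as WP-UNet, removes low-magnitude parameters so that models fit tighter computational budgets. The sketch below applies PyTorch’s built-in L1 unstructured pruning to a toy convolutional layer; it illustrates the principle only and is not the cited authors’ implementation.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# A toy convolutional block standing in for one stage of a segmentation network
conv = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3, padding=1)
total = conv.weight.numel()

# L1 unstructured pruning zeroes the 40% smallest-magnitude weights,
# shrinking the effective parameter count of the layer.
prune.l1_unstructured(conv, name="weight", amount=0.4)
prune.remove(conv, "weight")          # make the pruning permanent

zeroed = int((conv.weight == 0).sum())
print(f"{zeroed}/{total} weights set to zero ({100 * zeroed / total:.0f}%)")
```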
Looking to the future, breakthroughs in machine learning-based 3D reconstruction are anticipated along several critical avenues. The enhancement of resource efficiency is becoming a top priority: researchers are increasingly dedicated to creating more streamlined algorithms and network frameworks that reduce the reliance of deep learning models on substantial computational resources during both the training and inference phases, making these models more efficient and accessible. An exemplary approach involves refining neural network architectures to sustain output quality while curtailing the model’s parameter count, leading to a reduced need for storage and computational capability [47]. In tandem, the sphere of data acquisition and enhancement will garner increased research attention, with a spotlight on elevating the caliber and spectrum of training data. This could encompass the development of automated data annotation tools, synthetic data generation methodologies, and data augmentation techniques, all aimed at fortifying model proficiency and flexibility in the face of varied conditions [48]. To augment model generalizability, upcoming research will concentrate on more resilient network architectures and on innovative training strategies, such as domain adaptation and meta-learning. These initiatives will enable machine learning models to adapt more seamlessly to a spectrum of datasets and environmental contexts, enhancing their steadfastness and dependability in real-world applications. For instance, domain adaptation techniques will enable models to transfer learning from a data-rich domain to a data-sparse one, while meta-learning will afford models the agility to swiftly adjust to novel tasks within the confines of limited data [49].

4. Spatial Positioning and Registration Techniques for AR Visual Guidance Systems

In 1947, a milestone in medical history was achieved at Temple University School of Medicine in Philadelphia, with the first stereotactic neurosurgery on the human brain conducted successfully. Spiegel detailed in his report how the team utilized plaster models alongside ventriculography to pinpoint surgical targets, marking a colossal leap forward in technology at that time [50]. This innovation significantly elevated the precision of neurosurgical procedures, minimized harm to adjacent healthy brain tissue, and bolstered the safety and efficacy of surgeries. Fast forward to the present, over seven decades later, surgical navigation systems have evolved exponentially beyond their rudimentary stereotactic predecessors. They now integrate preoperative imaging with real-time intraoperative visuals to construct highly accurate 3D models, offering physicians an AR-assisted framework for precise surgical routes and target localization [51,52,53,54]. These AR visual guidance systems harness foundational medical imaging data from ultrasound, X-rays, CT scans, and MRI to generate a patient-specific 3D model, which surgeons then employ to devise surgical strategies. Intraoperatively, these systems leverage real-time tracking to navigate the surgeon’s maneuvers and affirm the surgical plan’s execution. The crux of this technological marvel lies in the registration and positioning technologies that ensure the virtual imagery aligns precisely with the patient’s anatomical structures.
The act of registration is the alignment of virtual models or images with a patient’s actual anatomical structures, a process that is typically completed prior to the start of surgery. Its purpose is to create a mapping that enables the virtual images in the surgical navigation system to overlay precisely onto the patient’s anatomical reality [55]. Manual registration is the most direct approach, in which surgeons manually adjust the virtual model to correspond with the patient’s anatomy during the procedure. In situations where advanced equipment is unavailable or hard to reach, manual registration can be a practical solution: it is particularly useful for simple surgical navigation tasks, such as minor outpatient procedures, and in resource-limited environments it ensures that basic navigation needs can still be met under constrained conditions [56]. Gregory et al. employed manual registration in reverse shoulder arthroplasty to match the visible bone segments with their holographic counterparts, effectively addressing the clinical challenge of limited bone volume in the dome area [57]. Li et al. utilized manual registration to align a catheter with a holographic trajectory during the insertion of an external ventricular drain (EVD), thereby completing the registration process [58]. However, the precision of manual registration can fluctuate considerably with the surgeon’s proficiency or with visual obstructions in the operating field, presenting a critical risk in surgical contexts and explaining its limited adoption [55]. Azimi et al. introduced a novel registration technique using a black-box method for HMD calibration, with tracker data as inputs and the 3D coordinates of virtual objects in the observer’s visual field as outputs. This technique has refined the average re-projection error of manual registration to 4 mm, which could potentially augment the practicality of manual registration in surgical guidance [59]. Yet, in a wide array of clinical settings, automatic registration methods leveraging sensing technologies or machine vision are more commonly preferred, and the choice of feature extraction in these automatic methods accommodates a variety of patient anatomies, surgical equipment, and procedural nuances. Currently, there are three predominant medical image registration techniques: point-based, surface-based, and marker-based registration. Point-based registration focuses on identifying and correlating specific points between preoperative imaging and actual anatomical structures, making it suitable for surgeries requiring precise localization of small anatomical landmarks; a minimal example of this approach is sketched below. Surface-based registration utilizes 3D surface models to match the patient’s anatomical surfaces, ideal for procedures where distinct surface features are present, such as cranial reconstructions. Marker-based registration, on the other hand, relies on placing known geometric markers on the patient and recognizing these markers in preoperative images to align the images with the anatomical structures, necessitating high precision in marker placement and recognition. Each technique has its own applications and limitations, and selecting the appropriate registration method is crucial for enhancing the precision of surgical navigation. Table 3 compiles the most recent studies on diverse registration techniques, delving into the selection of application contexts and the attainment of precision benchmarks.
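As a concrete illustration of point-based registration, the sketch below estimates the least-squares rigid transform between corresponding fiducial points in image space and patient space using the standard SVD (Kabsch) solution. The fiducial coordinates are hypothetical, and the method shown is the textbook algorithm rather than any specific system described above.

```python
import numpy as np

def point_based_rigid_registration(fiducials_image, fiducials_patient):
    """Least-squares rigid transform (Kabsch/SVD) mapping image-space fiducials
    onto their patient-space counterparts; rows of the two arrays correspond."""
    ci = fiducials_image.mean(axis=0)
    cp = fiducials_patient.mean(axis=0)
    H = (fiducials_image - ci).T @ (fiducials_patient - cp)        # covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])    # avoid reflection
    R = Vt.T @ D @ U.T
    t = cp - R @ ci
    return R, t

# Hypothetical fiducial coordinates (mm) picked in the CT image and on the patient
img = np.array([[0, 0, 0], [50, 0, 0], [0, 40, 0], [0, 0, 30]], dtype=float)
angle = np.deg2rad(10)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
pat = img @ R_true.T + np.array([12.0, -4.0, 7.0])

R, t = point_based_rigid_registration(img, pat)
print(np.allclose(img @ R.T + t, pat, atol=1e-6))   # True: fiducials align
```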
In the recent domain of medical imaging and computer-assisted surgery, research on registration methods has emerged as a topical subject. Registration techniques play an indispensable role in AR navigation systems, particularly in surgical environments that demand high precision, such as pediatric tumor surgery and neurosurgery. From the provided literature, various registration methods can be observed, including marker-based, markerless, surface-based, and volume-based approaches.
In the study by Souzaki et al., an AR navigation system was developed, leveraging preoperative CT and MRI imaging for endoscopic surgery in pediatric tumors [61]. This system employs an optical tracking system to align the reconstructed 3D images with surface markers during surgery, achieving precise superposition of virtual imagery onto actual anatomical structures. The key to this method lies in its adaptability to patient movement, ensuring the alignment of virtual information with real structures, thereby enhancing surgical accuracy and safety. On the other hand, Yavas et al. explored an AR neuronavigation system based on 3D-printed markers, which utilizes mobile devices to recognize specifically designed markers on the patient’s skull, providing 3D imaging [67]. The innovation of this method is its provision of a cost-effective, user-friendly, and highly accurate navigation technique, reducing reliance on high-cost navigation systems and shortening the time required for preoperative image registration. Goerres et al. investigated a planning, guidance, and quality assurance system for pelvic screw placement based on deformable image registration [62]. This system incorporates automatic planning of pelvic screw trajectories and accounts for the deformation of surgical devices, such as K-wire deflection. By performing 3D–2D image registration between preoperative CT and intraoperative fluoroscopy, it achieves precise positioning of surgical instruments. Joeres et al. proposed a two-step registration process to address the time pressure encountered during the repair of resection sites in laparoscopic surgery [63]. The method involves an initial accurate registration before tumor resection and a rapid re-registration using artificial markers after resection. Validated in a simulated use study, this approach demonstrated faster registration speeds compared to traditional anatomical landmark-based registration and improved accuracy when the primary registration was successful.
In summary, these studies showcase the potential of registration methods in enhancing the precision and efficiency of surgical navigation. Although each method has its strengths and limitations, they collectively contribute to the advancement towards more accurate, faster, and user-friendly surgical navigation systems.
The versatility of registration technologies has immensely expanded the horizons for AR visual guidance, a technique that has garnered widespread acclaim for its contributions to surgical precision, safety, and efficiency. Given that a patient’s position can vary throughout a surgical procedure, the incorporation of accurate positioning technology is essential for ensuring the precise overlay of virtual information onto actual anatomical structures [69]. Positioning technology encompasses the real-time tracking and computation of the location and orientation of surgical instruments or the patient’s anatomy during surgery. This ensures that virtual imagery remains in precise alignment with the physical structures, even in the event of patient movement or tissue distortion [70,71]. The AR visual guidance system, through its dynamic real-time tracking capabilities, can adjust the virtual models to accommodate any anatomical changes, thereby offering surgeons an unerringly accurate surgical perspective.
Developers crafting AR applications often prioritize the most straightforward self-localization methods. These methods harness the cameras embedded in AR devices to detect environmental changes, ensuring that the superimposed virtual 3D imagery aligns with the relative positions of the actual surroundings [72]. Within this spectrum of technologies, the HMD stands as the go-to device for self-localization. It facilitates a smooth transition between virtual and real-world patient anatomy through holographic optics and manual alignment techniques, enabling markerless, natural tracking [73,74]. For example, Scherl et al. deployed the HoloLens®1 (Microsoft Corporation, Redmond, WA, USA) to offer preoperative planning for surgeries in the parotid gland area [75]. Their approach necessitated the manual alignment of 2D and 3D augmented reality models derived from MRI scans with the patient’s physical form. Creighton et al. likewise utilized HMD technology to aid in orthopedic procedures, affirming the viability of this method [20]. Despite the high degree of system integration and user-friendliness offered by automatic localization technologies, concerns have been raised by some researchers regarding the potential inaccuracies in the manual alignment of holographic images with actual anatomical structures [76,77].
To tackle this challenge, the academic sphere has embraced sensor technology and machine vision as innovative approaches. Fischer et al. integrated an infrared camera system with a machine vision algorithm to enhance positioning precision through the tracking of distinctive feature points [78]. Schwald et al. investigated the combined effect of an optical tracking setup, consisting of a stereo camera system with infrared filters and frame grabbers, alongside the pciBIRD sensor, in the context of AR localization [79]. In a more groundbreaking development, Racadio et al. introduced a novel camera-free eye-tracking sensor designed for AR glasses. Leveraging laser scanning technology, this sensor employs a Micro-Electro-Mechanical Systems (MEMS) micromirror to direct an infrared laser beam, reflected in the eye area, with the scattered light detected by a photodiode [80]. Technically, eye-tracking technology enhances the interactivity, user experience, and seamless alignment of visual content in AR systems by monitoring the user’s line of sight in real time, thereby achieving a more intuitive and personalized superposition of virtual images. This method offers not only high integration but also the benefit of low power consumption compared to traditional video-oculography (VOG) systems. Sylvain et al. propelled the evolution of AR systems by enabling the automatic localization of endoscopes within intraoperative CT imagery [81]. This advancement renders the AR system independent of external tracking systems and of the need for endoscopic image analysis, offering a more intuitive and accurate navigation solution for laparoscopic surgeries.
As registration and positioning technologies advance, AR visual guidance systems are making welcome strides towards a broader application horizon in the medical field. A multitude of studies have validated the clinical viability of AR systems, marking a transition from the pressing needs of surgeons to a “golden era” of recognized value. Particularly in the challenging realm of vascular surgery, the demand for precision in minimally invasive vascular interventions has reached sub-millimeter precision [82], while the accuracy of AR systems in registration and positioning is still assessed in millimeters. The further amalgamation of machine learning technologies foretells a breakthrough in the precision of AR visual systems, charting a new course for future advancements. Pioneering research has already attested to the vast potential of these integrated technologies. For instance, Fu et al. introduced a non-rigid magnetic resonance–transrectal ultrasound (MR-TRUS) image registration framework for prostate interventions [83]. This framework utilizes a two-pronged approach for image analysis. First, it employs convolutional neural networks (CNNs) to segment the prostate in both magnetic resonance (MR) and transrectal ultrasound (TRUS) images. Second, it integrates a point cloud-based network to facilitate rapid 3D point cloud matching. This combination of technologies has achieved a mean surface distance (MSD) of 0.90 ± 0.23 mm, demonstrating high precision in aligning the segmented images. In a concurrent development, Elgarba et al. leveraged artificial intelligence (AI) to automate the registration of Cone Beam Computed Tomography (CBCT) [84]. They conducted a study involving six intraoral scan-cone beam computed tomography (IOS-CBCT) scans to evaluate consistency. The study demonstrated that the AI-driven IOS and the registration of artifact-rich CBCT images were reliable and efficient. Furthermore, these registrations were expertly accurate and highly consistent, marking a significant advancement in the field. The pursuit of enhancing the precision of AR visual system registration and positioning to even higher echelons, while also achieving notable advancements in real-time capabilities, multi-modal integration, intelligent diagnostics, and full perioperative integration, will undoubtedly require our collective endeavor.
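The mean surface distance used to report the MR–TRUS registration accuracy above measures how far two segmented surfaces lie from each other. A minimal symmetric implementation over sampled surface point clouds, with hypothetical data, is sketched below.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_distance(surface_a, surface_b):
    """Symmetric mean surface distance between two surfaces sampled as point clouds,
    the kind of metric behind the MSD figures reported for MR-TRUS registration."""
    d_ab = cKDTree(surface_b).query(surface_a)[0]   # nearest-neighbour distances A -> B
    d_ba = cKDTree(surface_a).query(surface_b)[0]   # and B -> A
    return (d_ab.mean() + d_ba.mean()) / 2.0

# Hypothetical prostate surface samples (mm): B is A perturbed by ~0.5 mm noise
rng = np.random.default_rng(1)
surface_a = rng.random((500, 3)) * 40.0
surface_b = surface_a + rng.normal(0, 0.5, surface_a.shape)
print(f"MSD: {mean_surface_distance(surface_a, surface_b):.2f} mm")
```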
This section provides a comprehensive discussion on the registration and positioning challenges encountered in AR-based surgical visual guidance systems. Initially, the manual registration process is not only time-consuming but also highly dependent on the individual skills of the surgeon, which may lead to inaccurate navigation in emergency or complex surgical situations. Furthermore, existing hardware devices, such as the HoloLens 1, suffer from limitations in fixed focal length and depth-sensing camera performance, impacting the precision of AR systems. Additionally, while automatic registration techniques offer greater accuracy and consistency, they still face technical challenges in adapting to different patient anatomies and surgical equipment. Concurrently, the need for AR systems to update the position of virtual models in real-time during surgery, due to patient movement, poses higher demands on real-time tracking technologies.
To address these challenges, current research has proposed a range of solutions. These include improving manual registration techniques, such as the black-box method proposed by Azimi et al., to reduce errors and enhance the accuracy of surgical navigation [59]. Concurrently, efforts are being made to optimize hardware systems by developing cameras with higher resolution and superior depth perception capabilities, thereby enhancing the overall performance of AR systems. The exploration of advanced sensor technologies and machine vision algorithms for automatic registration is also gaining popularity, such as the infrared camera system by Fischer et al., aiming to improve the accuracy and efficiency of registration [78]. Moreover, the integration of more sophisticated real-time tracking technologies, such as the MEMS micromirror technology by Racadio et al., will achieve more accurate real-time tracking, further enhancing the accuracy of surgical navigation [80].

5. Status of AR Device Applications

The journey of AR technology has been one of significant evolution, from the basic overlaying of images to the sophisticated interactive experiences we enjoy today. Contemporary AR devices offer not only lifelike visual effects but also enable naturalistic interaction through gesture recognition and voice control. These features empower users to engage with virtual content in ways that are more intuitive and seamlessly integrated with their natural behaviors.
In the realm of surgical procedures, AR visual guidance systems are broadly categorized into two primary types: intelligent glasses and hybrid reality displays. These cutting-edge devices significantly augment the work of surgeons by offering real-time visual feedback and precise surgical navigation. HMDs, in particular, are emerging as the leading technology in this domain, favored for their portability, utility, and human–computer interaction capabilities. Smith’s research emphasizes the significance of tailoring HMDs for medical use [85], highlighting several key features that are essential for such applications: lightweight construction, which is crucial for comfort; high transmittance rates, which are important for clear image display; and prolonged battery life, which allows the devices to be used for extended periods. Doughty et al. have provided further evidence supporting the superiority and efficacy of Optical See-Through Head-Mounted Displays (OST-HMDs) in surgical settings, offering valuable insights into the benefits of these devices. As a result, there is a growing trend in both research and practice to use OST-HMDs such as the Microsoft HoloLens and to explore interactive modalities such as hand gestures and voice commands, enhancing the usability of these devices in surgical contexts [86]. Despite this, projection display technologies and mobile video display units continue to hold relevance in certain specialized applications. Table 4 encapsulates a comprehensive review of the display devices currently in circulation, alongside an assessment of their strengths and limitations.
The enthusiasm with which doctors have embraced the integration of AR technology into neurosurgical navigation comes as no surprise [97]. AR technology, by overlaying virtual information onto the patient’s genuine anatomical structures, substantially amplifies the precision and security of surgical operations. The real-time enhancement of visual data not only streamlines the surgical workflow but also affords surgeons a perspective and depth of understanding that were previously unattainable. Yet, while AR technology holds vast theoretical promise, its practical implementation encounters certain hurdles. Notably, the discomfort and strain that head-mounted display devices (HMDs) may impose on physicians during extended surgical sessions have been documented by a plethora of studies [98,99]. The potential for neck and head fatigue from prolonged use could compromise surgical efficiency and the surgeon’s focus.
Addressing this challenge, recent studies have introduced a groundbreaking solution: skin electronics [100]. These thin, flexible devices offer a nearly unnoticeable wearing experience and tackle the discomfort posed by HMDs through a variety of innovative approaches. Their design significantly alleviates the physical strain associated with prolonged wear, particularly around the head and neck, and their unobtrusive nature allows seamless integration into wearable patches or smart clothing, giving users a more natural and comfortable experience than traditional HMDs. They can also provide haptic feedback directly to the skin, simulating real touch sensations, which reduces reliance on visual displays and enables interaction with AR content without the need for an HMD. Moreover, these devices can be tailored to the user’s body shape and movements, ensuring a comfortable fit and greater freedom of movement while minimizing physical constraints, and they help reduce eye strain by potentially eliminating the need to focus continuously on the small displays found in HMDs. Skin electronic devices also come equipped with sophisticated input and output capabilities, facilitating high-quality signal acquisition and responsive feedback [101]. For example, physicians can monitor patients’ vital signs non-invasively through skin electronics, including heart rate, blood pressure, and muscle activity—information that is pivotal for making informed, real-time surgical decisions. These devices are likewise capable of delivering haptic feedback that enhances the surgeon’s tactile sensation when manipulating surgical instruments [102,103]. With the integration of independent power sources, skin electronics have become self-sufficient, eliminating the need for external power supplies and tangled wiring and thereby enhancing the convenience and safety of surgical operations. The integrated system, powered by efficient data processing and intelligent algorithms, is adept at providing real-time, customized surgical assistance tailored to the surgeon’s specific needs.
With the relentless march of technological progress, there is ample justification for our belief that skin electronic devices are poised to assume a more pivotal role in the medical field of the future. These devices hold the promise of not only mitigating the existing challenges associated with AR visual guidance systems but also of forging groundbreaking surgical techniques and therapeutic approaches. They are set to deliver a treatment experience that is characterized by enhanced precision and safety for patients. Ongoing research endeavors will delve into the untapped potential of these devices across diverse healthcare settings and will be dedicated to refining their capabilities and elevating the user experience to unprecedented levels.

6. Conclusions

This paper offers an exhaustive retrospective on the evolution, technical delineations, and clinical exemplars of AR technology within the realm of perioperative visual guidance, and it analyzes the existing challenges and prospective trends on the horizon. AR technology, distinguished by its capacity to merge virtual information with the tangible anatomical structures of patients, has markedly enhanced the precision, safety, and efficacy of surgical interventions. The application of AR in surgery has evolved significantly: beginning with simple visualization systems, it has advanced to incorporate sophisticated technologies such as artificial intelligence, computer vision, and robotics, demonstrating the immense potential and substantial value of AR in surgical procedures. This technology stands as a testament to the transformative impact of innovative solutions on the field of surgery, promising to shape the future of medical procedures with its ceaseless evolution.
In the evolution of AR surgical visual guidance technology, researchers have encountered numerous challenges, including enhancing system implementation precision, overcoming hardware limitations, optimizing automatic registration techniques, strengthening real-time tracking capabilities, improving the generalizability and interpretability of models, enhancing the comfort and practicality of devices, and achieving multimodal integration and intelligent diagnostics. A variety of innovative solutions have emerged to meet these challenges; although they have not yet been fully implemented in the medical field, they lay a solid groundwork for technological progress and hold great potential for advancing surgical visual guidance systems.
These solutions can be summarized as follows: improving system precision through technical optimization and hardware upgrades; refining the automatic registration process with sensor fusion and machine learning algorithms; enhancing real-time tracking capabilities with advanced sensors and algorithmic optimizations; bolstering model generalizability and interpretability through multicenter data training and model explanation tools; developing thin and flexible skin electronic devices and ergonomic designs to enhance device comfort and practicality; and constructing platforms that integrate various imaging data and diagnostic information, utilizing AI technology to provide intelligent diagnostic support. These comprehensive strategies not only promote the application of AR technology in surgical navigation but also lay a solid foundation for the future development of the medical field, heralding a significant enhancement in surgical precision and safety.
As we gaze into the future, the evolution of innovative technologies like skin electronics heralds a seamless integration of AR into medical practice, promising surgical assistance that is more comfortable, intuitive, and precise than ever before. With the profound amalgamation of machine learning, breakthroughs in the precision of AR visual systems are on the horizon. We eagerly anticipate notable progress in real-time capabilities, multi-modality integration, and intelligent diagnostics of AR technology, enabling comprehensive access throughout the perioperative period. This progression is set to not only accelerate the advancement of surgical navigation systems but also to deliver treatment experiences that are more accurate and secure for patients, heralding a new “golden era” in healthcare.

Author Contributions

Conceptualization, Y.S. (Yichun Shen) and S.W.; methodology, Y.S. (Yichun Shen) and Y.S. (Yuhan Shen); validation, J.H.; investigation, Y.S. (Yichun Shen), S.W. and Y.S. (Yuhan Shen); writing—original draft preparation, Y.S. (Yichun Shen) and J.H.; writing—review and editing, S.W. and Y.S. (Yuhan Shen); visualization, Y.S. (Yuhan Shen) and J.H.; supervision, S.W. All authors have read and agreed to the published version of the manuscript.

Funding

Practice Base for Joint Cultivation of Engineers with Excellence in Biomedical Engineering of University of Shanghai for Science and Technology (10-23-308-007).

Acknowledgments

The authors are grateful to all those who participated in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Buhre, W.; Rossaint, R. Perioperative management and monitoring in anaesthesia. Lancet 2003, 362, 1839–1846. [Google Scholar] [CrossRef] [PubMed]
  2. Elmallah, R.K.; Cherian, J.J.; Pierce, T.P.; Jauregui, J.J.; Harwin, S.F.; Mont, M.A. New and common perioperative pain management techniques in total knee arthroplasty. J. Knee Surg. 2016, 29, 169–178. [Google Scholar] [CrossRef] [PubMed]
  3. Boysen, P.G. Perioperative management of the thoracotomy patient. Clin. Chest Med. 1993, 14, 321–333. [Google Scholar] [CrossRef]
  4. Zhang, G.; Bartels, J.; Martin-Gomez, A.; Armand, M. Towards reducing visual workload in surgical navigation: Proof-of-concept of an augmented reality haptic guidance system. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2023, 11, 1073–1080. [Google Scholar] [CrossRef]
  5. van Leeuwen, F.W.; Valdés-Olmos, R.; Buckle, T.; Vidal-Sicart, S.J. Hybrid surgical guidance based on the integration of radionuclear and optical technologies. Br. J. Radiol. 2016, 89, 20150797. [Google Scholar] [CrossRef] [PubMed]
  6. Staub, C.; Knoll, A.; Osa, T.; Bauernschmitt, R. Autonomous high precision positioning of surgical instruments in robot-assisted minimally invasive surgery under visual guidance. In Proceedings of the 2010 Sixth International Conference on Autonomic and Autonomous Systems, Cancun, Mexico, 7–13 March 2010; pp. 64–69. [Google Scholar] [CrossRef]
  7. Metson, R.; Cosenza, M.; Gliklich, R.E.; Montgomery, W.W. The role of image-guidance systems for head and neck surgery. Arch. Otolaryngol. Head Neck Surg. 1999, 125, 1100–1104. [Google Scholar] [CrossRef]
  8. Herline, A.J.; Stefansic, J.D.; Debelak, J.P.; Hartmann, S.L.; Pinson, C.W.; Galloway, R.L.; Chapman, W.C. Image-guided surgery. Arch. Surg. 1999, 280, 62–69. [Google Scholar] [CrossRef]
  9. Shuhaiber, J.H. Augmented reality in surgery. Arch. Surg. 2004, 139, 170–174. [Google Scholar] [CrossRef]
  10. Gutierrez, N. The ballad of morton heilig: On VR’s mythic past. J. Ciné. Media Stud. 2023, 62, 86–106. [Google Scholar] [CrossRef]
  11. Sutherland, I.E. A head-mounted three dimensional display. In Proceedings of the Fall Joint Computer Conference, Part I, San Francisco, CA, USA, 9–11 December 1968; pp. 757–764. [Google Scholar] [CrossRef]
  12. Furness, L.T.A. The Application of Head-Mounted Displays to Airborne Reconnaissance and Weapon Delivery; Wright-Patterson Air Force Base: Dayton, OH, USA, 1969. [Google Scholar]
  13. Azuma, R.T. A survey of augmented reality. Presence Teleoperators Virtual Environ. 1997, 6, 355–385. [Google Scholar] [CrossRef]
  14. Carmigniani, J.; Furht, B.; Anisetti, M.; Ceravolo, P.; Damiani, E.; Ivkovic, M. Augmented reality technologies, systems and applications. Multimedia Tools Appl. 2011, 51, 341–377. [Google Scholar] [CrossRef]
  15. Masutani, Y.; Dohi, T.; Yamane, F.; Iseki, H.; Takakura, K. Augmented reality visualization system for intravascular neurosurgery. Comput. Aided Surg. 1998, 3, 239–247. [Google Scholar] [CrossRef] [PubMed]
  16. Aghdasi, N.; Youngquist, J.A. Methods and Systems for Registering Preoperative Image Data to Intraoperative Image Data of a Scene, such as a Surgical Scene. U.S. Patent 11295460B1, 5 April 2022. [Google Scholar]
  17. Lin, M.A.; Siu, A.F.; Bae, J.H.; Cutkosky, M.R.; Daniel, B.L. HoloNeedle: Augmented reality guidance system for needle placement investigating the advantages of three-dimensional needle shape reconstruction. IEEE Robot. Autom. Lett. 2018, 3, 4156–4162. [Google Scholar] [CrossRef]
  18. Ackermann, J.; Liebmann, F.; Hoch, A.; Snedeker, J.G.; Farshad, M.; Rahm, S.; Zingg, P.O.; Fürnstahl, P. Augmented reality based surgical navigation of complex pelvic osteotomies—A feasibility study on cadavers. Appl. Sci. 2021, 11, 1228. [Google Scholar] [CrossRef]
  19. Chen, F.; Cui, X.; Han, B.; Liu, J.; Zhang, X.; Liao, H. Augmented reality navigation for minimally invasive knee surgery using enhanced arthroscopy. Comput. Methods Progr. Biomed. 2021, 201, 105952. [Google Scholar] [CrossRef] [PubMed]
  20. Creighton, F.X.; Unberath, M.; Song, T.; Zhao, Z.; Armand, M.; Carey, J. Early feasibility studies of augmented reality navigation for lateral skull base surgery. Otol. Neurotol. 2020, 41, 883–888. [Google Scholar] [CrossRef]
  21. Deib, G.; Johnson, A.; Unberath, M.; Yu, K.; Andress, S.; Qian, L.; Osgood, G.; Navab, N.; Hui, F.; Gailloud, P. Image guided percutaneous spine procedures using an optical see-through head mounted display: Proof of concept and rationale. J. NeuroInterventional Surg. 2018, 10, 1187–1191. [Google Scholar] [CrossRef]
  22. Gu, W.; Shah, K.; Knopf, J.; Navab, N.; Unberath, M. Feasibility of image-based augmented reality guidance of total shoulder arthroplasty using Microsoft HoloLens 1. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2021, 9, 261–270. [Google Scholar] [CrossRef]
  23. Liounakos, J.I.; Urakov, T.; Wang, M.Y. Head-up display assisted endoscopic lumbar discectomy—A technical note. Int. J. Med Robot. Comput. Assist. Surg. 2020, 16, e2089. [Google Scholar] [CrossRef]
  24. Fida, B.; Cutolo, F.; di Franco, G.; Ferrari, M.; Ferrari, V. Augmented reality in open surgery. Updat. Surg. 2018, 70, 389–400. [Google Scholar] [CrossRef]
  25. Jud, L.; Fotouhi, J.; Andronic, O.; Aichmair, A.; Osgood, G.; Navab, N.; Farshad, M. Applicability of augmented reality in orthopedic surgery–A systematic review. BMC Musculoskelet. Disord. 2020, 21, 103. [Google Scholar] [CrossRef] [PubMed]
  26. Kim, Y.; Kim, H.; Kim, Y.O. Virtual reality and augmented reality in plastic surgery: A review. Arch. Plast. Surg. 2017, 44, 179–187. [Google Scholar] [CrossRef] [PubMed]
  27. Barcali, E.; Iadanza, E.; Manetti, L.; Francia, P.; Nardi, C.; Bocchi, L. Augmented reality in surgery: A scoping review. Appl. Sci. 2022, 12, 6890. [Google Scholar] [CrossRef]
  28. Simpson, A.L.; Adams, L.B.; Allen, P.J.; D’Angelica, M.I.; DeMatteo, R.P.; Fong, Y.; Kingham, T.P.; Leung, U.; Miga, M.I.; Parada, E.P. Texture analysis of preoperative CT images for prediction of postoperative hepatic insufficiency: A preliminary study. J. Am. Coll. Surg. 2015, 220, 339–346. [Google Scholar] [CrossRef]
  29. Houssami, N.; Hayes, D.F. Review of preoperative magnetic resonance imaging (MRI) in breast cancer: Should MRI be performed on all women with newly diagnosed, early stage breast cancer? CA Cancer J. Clin. 2009, 59, 290–302. [Google Scholar] [CrossRef]
  30. Zheng, Y.-X.; Yu, D.-F.; Zhao, J.-G.; Wu, Y.-L.; Zheng, B. 3D printout models vs. 3D-rendered images: Which is better for preoperative planning? J. Surg. Educ. 2016, 73, 518–523. [Google Scholar] [CrossRef]
  31. Spottiswoode, B.; Van den Heever, D.; Chang, Y.; Engelhardt, S.; Du Plessis, S.; Nicolls, F.; Hartzenberg, H.; Gretschel, A. Preoperative three-dimensional model creation of magnetic resonance brain images as a tool to assist neurosurgical planning. Stereotact. Funct. Neurosurg. 2013, 91, 162–169. [Google Scholar] [CrossRef]
  32. Zeng, G.L. Medical Image Reconstruction; Springer: Berlin/Heidelberg, Germany, 2010; Volume 530. [Google Scholar]
  33. Angelopoulou, A.; Psarrou, A.; Garcia-Rodriguez, J.; Orts-Escolano, S.; Azorin-Lopez, J.; Revett, K. 3D reconstruction of medical images from slices automatically landmarked with growing neural models. Neurocomputing 2015, 150, 16–25. [Google Scholar] [CrossRef]
  34. Pires, F.; Costa, C.; Dias, P. On the use of virtual reality for medical imaging visualization. J. Digit. Imaging 2021, 34, 1034–1048. [Google Scholar] [CrossRef]
  35. Khan, U.; Yasin, A.; Abid, M.; Shafi, I.; Khan, S.A. A methodological review of 3D reconstruction techniques in tomographic imaging. J. Med Syst. 2018, 42, 190. [Google Scholar] [CrossRef]
  36. Dogan, S. 3D reconstruction and evaluation of tissues by using CT, MR slices and digital images. In Proceedings of the 20th International Society for Photogrammetry and Remote Sensing (ISPRS), Istanbul, Turkey, 12–23 July 2004; Volume 35, pp. 323–327. [Google Scholar]
  37. Chang, L.-W.; Chen, H.-W.; Ho, J.-R. Reconstruction of 3D medical images: A nonlinear interpolation technique for reconstruction of 3D medical images. CVGIP Graph. Model. Image Process. 1991, 53, 382–391. [Google Scholar] [CrossRef]
  38. Rani, S.; Lakhwani, K.; Kumar, S. Knowledge vector representation of three-dimensional convex polyhedrons and reconstruction of medical images using knowledge vector. Multimedia Tools Appl. 2023, 82, 36449–36477. [Google Scholar] [CrossRef]
  39. Prakash, P.S.; Rao, P.K.; Babu, E.S.; Khan, S.B.; Almusharraf, A.; Quasim, M.T. Decoupled SculptorGAN Framework for 3D Reconstruction and Enhanced Segmentation of Kidney Tumors in CT Images. IEEE Access 2024, 12, 62189–62198. [Google Scholar] [CrossRef]
  40. Zi, Y.; Wang, Q.; Gao, Z.; Cheng, X.; Mei, T. Research on the application of deep learning in medical image segmentation and 3D reconstruction. Acad. J. Sci. Technol. 2024, 10, 8–12. [Google Scholar] [CrossRef]
  41. Cheng, J.Y.; Chen, F.; Alley, M.T.; Pauly, J.M.; Vasanawala, S.S. Highly scalable image reconstruction using deep neural networks with bandpass filtering. arXiv 2018, arXiv:1805.03300. [Google Scholar] [CrossRef]
  42. Cai, Y.; Wang, J.; Yuille, A.; Zhou, Z.; Wang, A. Structure-aware sparse-view X-ray 3D reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024; pp. 11174–11183. [Google Scholar]
  43. Shen, G.; Dwivedi, K.; Majima, K.; Horikawa, T.; Kamitani, Y. End-to-end deep image reconstruction from human brain activity. Front. Comput. Neurosci. 2019, 13, 432276. [Google Scholar] [CrossRef]
  44. Hong, L.; Modirrousta, M.H.; Hossein Nasirpour, M.; Mirshekari Chargari, M.; Mohammadi, F.; Moravvej, S.V.; Rezvanishad, L.; Rezvanishad, M.; Bakhshayeshi, I.; Alizadehsani, R. GAN-LSTM-3D: An efficient method for lung tumour 3D reconstruction enhanced by attention-based LSTM. CAAI Trans. Intell. Technol. 2023; early view. [Google Scholar] [CrossRef]
  45. Perdios, D.; Vonlanthen, M.; Martinez, F.; Arditi, M.; Thiran, J.-P. Deep learning based ultrasound image reconstruction method: A time coherence study. In Proceedings of the 2019 IEEE International Ultrasonics Symposium (IUS), Glasgow, UK, 6–9 October 2019; pp. 448–451. [Google Scholar] [CrossRef]
  46. Ziabari, A.; Ye, D.H.; Srivastava, S.; Sauer, K.D.; Thibault, J.-B.; Bouman, C.A. 2.5D deep learning for CT image reconstruction using a multi-GPU implementation. In Proceedings of the 2018 52nd Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 28–31 October 2018; pp. 2044–2049. [Google Scholar] [CrossRef]
  47. Wang, C.-H.; Huang, K.-Y.; Yao, Y.; Chen, J.-C.; Shuai, H.-H.; Cheng, W.-H. Lightweight deep learning: An overview. IEEE Consum. Electron. Mag. 2022, 13, 51–64. [Google Scholar] [CrossRef]
  48. Morrison, M.A.; Payabvash, S.; Chen, Y.; Avadiappan, S.; Shah, M.; Zou, X.; Hess, C.P.; Lupo, J.M. A user-guided tool for semi-automated cerebral microbleed detection and volume segmentation: Evaluating vascular injury and data labelling for machine learning. NeuroImage Clin. 2018, 20, 498–505. [Google Scholar] [CrossRef]
  49. Farahani, A.; Voghoei, S.; Rasheed, K.; Arabnia, H.R. A brief review of domain adaptation. In Advances in Data Science and Information Engineering; Springer: Berlin/Heidelberg, Germany, 2021; pp. 877–894. [Google Scholar] [CrossRef]
  50. Spiegel, E.; Wycis, H.; Szekely, E.; Adams, J.; Flanagan, M.; Baird, H.J. Campotomy in various extrapyramidal disorders. J. Neurosurg. 1963, 20, 871–884. [Google Scholar] [CrossRef]
  51. Tang, R.; Ma, L.-F.; Rong, Z.-X.; Li, M.-D.; Zeng, J.-P.; Wang, X.-D.; Liao, H.-E.; Dong, J.-H. Augmented reality technology for preoperative planning and intraoperative navigation during hepatobiliary surgery: A review of current methods. Hepatobiliary Pancreat. Dis. Int. 2018, 17, 101–112. [Google Scholar] [CrossRef]
  52. Okamoto, T.; Onda, S.; Yanaga, K.; Suzuki, N.; Hattori, A. Clinical application of navigation surgery using augmented reality in the abdominal field. Surg. Today 2015, 45, 397–406. [Google Scholar] [CrossRef] [PubMed]
  53. Marmulla, R.; Hoppe, H.; Mühling, J.; Eggers, G. An augmented reality system for image-guided surgery. Int. J. Oral Maxillofac. Surg. 2005, 34, 594–596. [Google Scholar] [CrossRef] [PubMed]
  54. Shekhar, R.; Dandekar, O.; Bhat, V.; Philip, M.; Lei, P.; Godinez, C.; Sutton, E.; George, I.; Kavic, S.; Mezrich, R.; et al. Live augmented reality: A new visualization method for laparoscopic surgery using continuous volumetric computed tomography. Surg. Endosc. 2010, 24, 1976–1985. [Google Scholar] [CrossRef] [PubMed]
  55. Andrews, C.M.; Henry, A.B.; Soriano, I.M.; Southworth, M.K.; Silva, J.R. Registration techniques for clinical applications of three-dimensional augmented reality devices. IEEE J. Transl. Eng. Health Med. 2020, 9, 4900214. [Google Scholar] [CrossRef]
  56. Schneider, C.; Thompson, S.; Totz, J.; Song, Y.; Allam, M.; Sodergren, M.; Desjardins, A.; Barratt, D.; Ourselin, S.; Gurusamy, K. Comparison of manual and semi-automatic registration in augmented reality image-guided liver surgery: A clinical feasibility study. Surg. Endosc. 2020, 34, 4702–4711. [Google Scholar] [CrossRef]
  57. Gregory, T.M.; Gregory, J.; Sledge, J.; Allard, R.; Mir, O. Surgery guided by mixed reality: Presentation of a proof of concept. Acta Orthop. 2018, 89, 480–483. [Google Scholar] [CrossRef]
  58. Li, Y.; Chen, X.; Wang, N.; Zhang, W.; Li, D.; Zhang, L.; Qu, X.; Cheng, W.; Xu, Y.; Chen, W.J.; et al. A wearable mixed-reality holographic computer for guiding external ventricular drain insertion at the bedside. J. Neurosurg. 2018, 131, 1599–1606. [Google Scholar] [CrossRef]
  59. Azimi, E.; Qian, L.; Navab, N.; Kazanzides, P. Alignment of the virtual scene to the tracking space of a mixed reality head-mounted display. arXiv 2017, arXiv:1703.05834. [Google Scholar]
  60. Liang, H.; Yang, Z.; Jiang, S.; Liu, S.; Wang, W. An improved registration method based on ICP for image guided prostate seed implanting surgery. Biomed. Phys. Eng. Express 2016, 2, 055019. [Google Scholar] [CrossRef]
  61. Souzaki, R.; Ieiri, S.; Uemura, M.; Ohuchida, K.; Tomikawa, M.; Kinoshita, Y.; Koga, Y.; Suminoe, A.; Kohashi, K.; Oda, Y.; et al. An augmented reality navigation system for pediatric oncologic surgery based on preoperative CT and MRI images. J. Pediatr. Surg. 2013, 48, 2479–2483. [Google Scholar] [CrossRef]
  62. Goerres, J.; Uneri, A.; Jacobson, M.; Ramsay, B.; De Silva, T.; Ketcha, M.; Han, R.; Manbachi, A.; Vogt, S.; Kleinszig, G.; et al. Planning, guidance, and quality assurance of pelvic screw placement using deformable image registration. Phys. Med. Biol. 2017, 62, 9018. [Google Scholar] [CrossRef] [PubMed]
  63. Joeres, F.; Mielke, T.; Hansen, C. Laparoscopic augmented reality registration for oncological resection site repair. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 1577–1586. [Google Scholar] [CrossRef]
  64. Han, Y.-T.; Lin, W.-C.; Fan, F.-Y.; Chen, C.-L.; Lin, C.-C.; Cheng, H.-C. Comparison of dental surface image registration and fiducial marker registration: An in vivo accuracy study of static computer-assisted implant surgery. J. Clin. Med. 2021, 10, 4183. [Google Scholar] [CrossRef] [PubMed]
  65. Hu, X.; y Baena, F.R.; Cutolo, F. Head-mounted augmented reality platform for markerless orthopaedic navigation. IEEE J. Biomed. Health Inform. 2021, 26, 910–921. [Google Scholar] [CrossRef] [PubMed]
  66. Shao, L.; Yang, S.; Fu, T.; Lin, Y.; Geng, H.; Ai, D.; Fan, J.; Song, H.; Zhang, T.; Yang, J. Augmented reality calibration using feature triangulation iteration-based registration for surgical navigation. Comput. Biol. Med. 2022, 148, 105826. [Google Scholar] [CrossRef]
  67. Yavas, G.; Caliskan, K.E.; Cagli, M.S. Three-dimensional–printed marker–based augmented reality neuronavigation: A new neuronavigation technique. Neurosurg. Focus 2021, 51, E20. [Google Scholar] [CrossRef]
  68. Figueira, I.; Ibrahim, M.T.; Majumder, A.; Gopi, M. Augmented reality patient-specific registration for medical visualization. In Proceedings of the 28th ACM Symposium on Virtual Reality Software and Technology, Tsukuba, Japan, 29 November–1 December 2022; pp. 1–2. [Google Scholar] [CrossRef]
  69. Lee, D.; Yi, J.W.; Hong, J.; Chai, Y.J.; Kim, H.C.; Kong, H.-J. Augmented reality to localize individual organ in surgical procedure. Health Inform. Res. 2018, 24, 394–401. [Google Scholar] [CrossRef]
  70. Syed, T.A.; Siddiqui, M.S.; Abdullah, H.B.; Jan, S.; Namoun, A.; Alzahrani, A.; Nadeem, A.; Alkhodre, A.B. In-depth review of augmented reality: Tracking technologies, development tools, AR displays, collaborative AR, and security concerns. Sensors 2022, 23, 146. [Google Scholar] [CrossRef]
  71. Nuri, T.; Mitsuno, D.; Iwanaga, H.; Otsuki, Y.; Ueda, K. Application of augmented reality (AR) technology to locate the cutaneous perforator of anterolateral thigh perforator flap: A case report. Microsurgery 2022, 42, 76–79. [Google Scholar] [CrossRef]
  72. Yang, F.; Fang, Z.; Guan, F. What Do We Actually Need During Self-localization in an Augmented Environment? In Proceedings of the International Symposium on Web and Wireless Geographical Information Systems, Wuhan, China, 13–14 November 2020; pp. 24–32. [Google Scholar] [CrossRef]
  73. Andersen, D.; Villano, P.; Popescu, V. AR HMD guidance for controlled hand-held 3D acquisition. IEEE Trans. Vis. Comput. Graph. 2019, 25, 3073–3082. [Google Scholar] [CrossRef]
  74. Budhiraja, R.; Lee, G.A.; Billinghurst, M. Using a HHD with a HMD for mobile AR interaction. In Proceedings of the 2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Adelaide, Australia, 1–4 October 2013; pp. 1–6. [Google Scholar] [CrossRef]
  75. Scherl, C.; Stratemeier, J.; Karle, C.; Rotter, N.; Hesser, J.; Huber, L.; Dias, A.; Hoffmann, O.; Riffel, P.; Schoenberg, S.O. Augmented reality with HoloLens in parotid surgery: How to assess and to improve accuracy. Eur. Arch. Oto-Rhino-Laryngol. 2021, 278, 2473–2483. [Google Scholar] [CrossRef] [PubMed]
  76. Meyer, J.; Schlebusch, T.; Fuhl, W.; Kasneci, E. A novel camera-free eye tracking sensor for augmented reality based on laser scanning. IEEE Sens. J. 2020, 20, 15204–15212. [Google Scholar] [CrossRef]
  77. Santoni, F.; De Angelis, A.; Moschitta, A.; Carbone, P. MagIK: A hand-tracking magnetic positioning system based on a kinematic model of the hand. IEEE Trans. Instrum. Meas. 2021, 70, 9507313. [Google Scholar] [CrossRef]
  78. Fischer, J.; Eichler, M.; Bartz, D.; Straßer, W. Model-based Hybrid Tracking for Medical Augmented Reality. In Proceedings of the Eurographics Symposium on Virtual Environments (EGVE), Lisbon, Portugal, 8 May 2006; pp. 71–80. [Google Scholar]
  79. Schwald, B.; Seibert, H. Registration tasks for a hybrid tracking system for medical augmented reality. J. WSCG. 2004, 12, 411–418. [Google Scholar]
  80. Racadio, J.M.; Nachabe, R.; Homan, R.; Schierling, R.; Racadio, J.M.; Babić, D. Augmented reality on a C-arm system: A preclinical assessment for percutaneous needle localization. Radiology 2016, 281, 249–255. [Google Scholar] [CrossRef]
  81. Bernhardt, S.; Nicolau, S.A.; Agnus, V.; Soler, L.; Doignon, C.; Marescaux, J. Automatic localization of endoscope in intraoperative CT image: A simple approach to augmented reality guidance in laparoscopic surgery. Med Image Anal. 2016, 30, 130–143. [Google Scholar] [CrossRef]
  82. Zhao, H.-L.; Liu, S.-Q.; Zhou, X.-H.; Xie, X.-L.; Hou, Z.-G.; Zhou, Y.-J.; Zhang, L.-S.; Gui, M.-J.; Wang, J.-L. Design and performance evaluation of a novel vascular robotic system for complex percutaneous coronary interventions. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Virtual, 31 October–4 November 2021; pp. 4679–4682. [Google Scholar] [CrossRef]
  83. Fu, Y.; Lei, Y.; Wang, T.; Patel, P.; Jani, A.B.; Mao, H.; Curran, W.J.; Liu, T.; Yang, X. Biomechanically constrained non-rigid MR-TRUS prostate registration using deep learning based 3D point cloud matching. Med Image Anal. 2021, 67, 101845. [Google Scholar] [CrossRef]
  84. Elgarba, B.M.; Meeus, J.; Fontenele, R.C.; Jacobs, R. AI-Based Registration of IOS and CBCT with High Artifact Expression. J. Dent. 2024, 147, 105166. [Google Scholar] [CrossRef]
  85. Smith, R.; Schwiegerling, J. Head mounted display based augmented reality device for medical applications. In Proceedings of the ODS 2023: Industrial Optical Devices and Systems, San Diego, CA, USA, 20–25 August 2023; pp. 102–106. [Google Scholar] [CrossRef]
  86. Doughty, M.; Ghugre, N.R.; Wright, G.A. Augmenting performance: A systematic review of optical see-through head-mounted displays in surgery. J. Imaging 2022, 8, 203. [Google Scholar] [CrossRef]
  87. Kawakami, H.; Suenaga, H.; Sakakibara, A.; Hoshi, K. Computer-assisted surgery with markerless augmented reality for the surgical removal of mandibular odontogenic cysts: Report of two clinical cases. Int. J. Oral Maxillofac. Surg. 2024, 53, 347–350. [Google Scholar] [CrossRef]
  88. Huang, K.; Liao, J.; He, J.; Lai, S.; Peng, Y.; Deng, Q.; Wang, H.; Liu, Y.; Peng, L.; Bai, Z.; et al. A real-time augmented reality system integrated with artificial intelligence for skin tumor surgery: Experimental study and case series. Int. J. Surg. 2024, 110, 3294–3306. [Google Scholar] [CrossRef] [PubMed]
  89. Mehta, P.D.; Karanth, H.; Yang, H.; Slesnick, T.C.; Shaw, F.; Chau, D.H. ARCollab: Towards Multi-User Interactive Cardiovascular Surgical Planning in Mobile Augmented Reality. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 11–16 May 2024; pp. 1–6. [Google Scholar] [CrossRef]
  90. Dogan, I.; Eray, H.A.; Ozgural, O.; Tekneci, O.; Hasimoglu, S.; Terzi, M.; Mete, E.B.; Kuzukiran, Y.C.; Elmas, H.; Orhan, O. Navigating the calvaria with mobile mixed reality–based neurosurgical planning: How feasible are smartphone applications as a craniotomy guide? Neurosurg. Focus 2024, 56, E4. [Google Scholar] [CrossRef] [PubMed]
  91. Choi, M.-H.; Han, W.; Min, K.; Min, D.; Han, G.; Shin, K.-S.; Kim, M.; Park, J.-H. Recent Applications of Optical Elements in Augmented and Virtual Reality Displays: A Review. ACS Appl. Opt. Mater. 2024, 2, 1247–1268. [Google Scholar] [CrossRef]
  92. Judy, B.F.; Menta, A.; Pak, H.L.; Azad, T.D.; Witham, T.F. Augmented Reality and Virtual Reality in Spine Surgery: A Comprehensive Review. Neurosurg. Clin. 2024, 35, 207–216. [Google Scholar] [CrossRef]
  93. Verhellen, A.; Elprama, S.A.; Scheerlinck, T.; Van Aerschot, F.; Duerinck, J.; Van Gestel, F.; Frantz, T.; Jansen, B.; Vandemeulebroucke, J.; Jacobs, A.; et al. Exploring technology acceptance of head-mounted device-based augmented reality surgical navigation in orthopaedic surgery. Int. J. Med Robot. Comput. Assist. Surg. 2024, 20, e2585. [Google Scholar] [CrossRef]
  94. Kann, M.; Ruiz-Cardozo, M.A.; Brehm, S.; Carey-Ewend, A.; Singh, S.; Barot, K.; Verastegui, G.T.; De La Paz, M.; Hanafy, A.; Bui, T.; et al. 1071 Initial Experience Using an Augmented Reality Head-Mounted Display System During Surgical Management of Thoracolumbar Spinal Trauma. Neurosurgery 2024, 70, 181. [Google Scholar] [CrossRef]
  95. Ibrahim, M.T.; Majumder, A.; Gopi, M.; Sayadi, L.R.; Vyas, R.M. Illuminating precise stencils on surgical sites using projection-based augmented reality. Smart Health 2024, 32, 100476. [Google Scholar] [CrossRef]
  96. Mamone, V.; Ferrari, V.; Condino, S.; Cutolo, F. Projected augmented reality to drive osteotomy surgery: Implementation and comparison with video see-through technology. IEEE Access 2020, 8, 169024–169035. [Google Scholar] [CrossRef]
  97. Benila, S.; Naveen, N.; Kumar, R.P. Augmented Reality Based Doctor's Assistive System. I-Manag. J. Digit. Signal Process. 2021, 9, 30. [Google Scholar] [CrossRef]
  98. Ito, K.; Tada, M.; Ujike, H.; Hyodo, K. Effects of the weight and balance of head-mounted displays on physical load. Appl. Sci. 2021, 11, 6802. [Google Scholar] [CrossRef]
  99. Thompson, M.B.; Tear, M.J.; Sanderson, P.M. Multisensory integration with a head-mounted display: Role of mental and manual load. J. Hum. Factors Ergon. Soc. 2010, 52, 92–104. [Google Scholar] [CrossRef] [PubMed]
  100. Kim, J.J.; Wang, Y.; Wang, H.; Lee, S.; Yokota, T.; Someya, T. Skin electronics: Next-generation device platform for virtual and augmented reality. Adv. Funct. Mater. 2021, 31, 2009602. [Google Scholar] [CrossRef]
  101. Zhu, Y.; Li, J.; Kim, J.; Li, S.; Zhao, Y.; Bahari, J.; Eliahoo, P.; Li, G.; Kawakita, S.; Haghniaz, R. Skin-interfaced electronics: A promising and intelligent paradigm for personalized healthcare. Biomaterials 2023, 296, 122075. [Google Scholar] [CrossRef] [PubMed]
  102. Jung, Y.H.; Kim, J.H.; Rogers, J.A. Skin-integrated vibrohaptic interfaces for virtual and augmented reality. Adv. Funct. Mater. 2021, 31, 2008805. [Google Scholar] [CrossRef]
  103. Yu, X.; Xie, Z.; Yu, Y.; Lee, J.; Vazquez-Guardado, A.; Luan, H.; Ruban, J.; Ning, X.; Akhtar, A.; Li, D. Skin-integrated wireless haptic interfaces for virtual and augmented reality. Nature 2019, 575, 473–479. [Google Scholar] [CrossRef]
Figure 1. Technical components of the AR vision system.
Figure 2. Search and screening process.
Table 1. Selected prototype studies on the use of AR technology in perioperative surgical guidance.

| Reference | Year | Technical Solution | Fields of Use | Achieved Precision | Are Clinical Tests Included? |
|---|---|---|---|---|---|
| Lin et al. [17] | 2018 | Optical See-Through Head-Mounted Display (OST-HMD) for image-guided percutaneous spine procedures | Percutaneous spine procedures | Comparable to a traditional monitor in terms of procedural time and dosimetry | No |
| Ackermann et al. [18] | 2021 | AR navigation system with HMD, overlaying Computed Tomography (CT) data using fiducial markers | Lateral skull base surgery | Target registration error (TRE) of 10.62 ± 5.90 mm | No |
| Chen et al. [19] | 2021 | AR navigation system with 3D display and a tissue-properties-based deformation method | Minimally invasive knee surgery | Mean error of 0.32 mm for virtual arthroscopic images | No |
| Creighton et al. [20] | 2020 | Image-based AR guidance using an HMD for total shoulder arthroplasty (TSA) | TSA | Not explicitly reported; depth-sensing camera performance identified as a major error source | No |
| Deib et al. [21] | 2018 | Image-guided spinal interventional surgeries performed using an OST-HMD | Needle placement procedures | Significantly reduced placement errors with shape display compared to a rigid-needle assumption | No |
| Gu et al. [22] | 2021 | Head-up display-assisted endoscopic lumbar discectomy | Lumbar discectomy | - | No |
| Liounakos et al. [23] | 2020 | Endoscopic lumbar discectomy assisted by an HMD | Ganz periacetabular osteotomy (PAO) | Osteotomy starting-point error of 10.8 mm | No |
Table 2. Recent research on 3D reconstruction using machine learning methods.

| Usage Scenario | Reference | Year | Machine Learning Method | Application Area | Achieved Accuracy |
|---|---|---|---|---|---|
| Image segmentation | Prakash et al. [39] | 2024 | Conditional Generative Adversarial Network (cGAN) | Distinguishing tumor from non-tumor tissue in CT scans | Diagnostic accuracy increased to 96.5% |
| | Zi et al. [40] | 2023 | U-Net architecture | Brain Tumor Segmentation Challenge (BraTS) | Dice coefficient = 85.3%; Intersection over Union (IoU) = 78.9% |
| | Cheng et al. [41] | 2018 | Deep Neural Network (DNN) | Image denoising and super-resolution | - |
| Feature extraction | Cai et al. [42] | 2024 | Line Segment-based Transformer (Lineformer) | Capturing the internal structure of objects by modeling the dependencies within each segment of X-rays | SAX-NeRF achieves 12.56 dB and 2.49 dB improvements over existing NeRF-based methods on novel view synthesis and CT reconstruction, respectively |
| | Shen et al. [43] | 2019 | Recurrent Neural Network (RNN) | Capturing non-linear features from CT images | Average reconstruction accuracy of 62.9% based on the Structural Similarity Index (SSIM) |
| Model reconstruction | Hong et al. [44] | 2023 | Generative Adversarial Network (GAN) combined with Long Short-Term Memory (LSTM) networks | Lung tumor reconstruction | Superior performance on Hamming and Euclidean distance metrics |
| | Perdios et al. [45] | 2019 | CNN | Reconstruction, recovery, and enhancement of ultrasound images | CNN-processed images improve the performance of vector flow estimation in some respects |
| Efficiency optimization | Prakash et al. [39] | 2024 | Weight-Pruning U-Net (WP-UNet) | Optimizing computational efficiency | - |
| | Ziabari et al. [46] | 2018 | Deep Learning Model-Based Iterative Reconstruction (DL-MBIR) | Proposes a strategy for multi-GPU implementation | - |
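The segmentation accuracies in Table 2 are reported as Dice coefficient and Intersection over Union (IoU). As a generic illustration of how these overlap metrics are computed from binary masks (the toy masks below are invented and do not reproduce any cited study), consider the following sketch:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    """IoU (Jaccard index) = |A ∩ B| / |A ∪ B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Toy 2D masks standing in for a predicted and a ground-truth tumour segmentation.
pred = np.zeros((64, 64), dtype=np.uint8); pred[10:40, 10:40] = 1
gt = np.zeros((64, 64), dtype=np.uint8); gt[15:45, 15:45] = 1
print(f"Dice = {dice_coefficient(pred, gt):.3f}, IoU = {iou(pred, gt):.3f}")
```

Both metrics equal 1.0 for perfect overlap and 0 for disjoint masks, which is why they are the standard accuracy measures quoted in Table 2.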
Table 3. Typical studies using different registration methods.

| Reference | Year | Registration Method | Application Scenario | Achieved Precision | Limitations |
|---|---|---|---|---|---|
| Liang et al. [60] | 2016 | Point-based registration | Radioactive seed implantation for prostate cancer | 0.44 ± 0.07 mm | Electromagnetic localizer is susceptible to interference |
| Souzaki et al. [61] | 2013 | Point-based registration | Endoscopic surgery for pediatric tumors | Precision met surgical requirements | Movement and deformation of organs during surgery |
| Goerres et al. [62] | 2017 | Point-based registration | Percutaneous screw fixation of pelvic fractures | Within 1.1 mm | Geometric errors introduced by deformation of surgical instruments |
| Joeres et al. [63] | 2021 | Surface-based registration | Laparoscopic tumor resection site repair surgery | Average target registration error (TRE) increased by 2.35 mm | Clinical applicability yet to be demonstrated |
| Han et al. [64] | 2021 | Surface-based registration | Dental surgery | Mean lateral biases in tooth-surface registration were clinically acceptable | Not suitable for patients with edentulous jaws or few remaining teeth |
| Hu et al. [65] | 2021 | Surface-based registration | Assisted femoral drilling | 4.90 ± 1.04 mm with video see-through (VST); 4.36 ± 0.80 mm with optical see-through (OST) | Requires an "anchoring" strategy to handle occlusion |
| Shao et al. [66] | 2022 | Marker-based registration | Surgical planning, medical training, and surgical procedures | - | Stability and precision issues under varying lighting and environments |
| Yavas et al. [67] | 2021 | Marker-based registration | Neurosurgery | Average positioning error of 1.70 ± 1.02 mm | Brain displacement or deformation due to cerebrospinal fluid leakage or surgical positioning |
| Figueira et al. [68] | 2022 | Marker-based registration | Surgical navigation | Average fusion error of 0.70 ± 0.16 mm | Image marker may be obscured during the procedure |
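Most of the precision figures in Table 3 are target registration errors (TREs) obtained after point-based (paired-fiducial) rigid registration. The sketch below illustrates, with invented fiducial coordinates and an assumed noise level, the standard SVD-based (Arun/Kabsch) closed-form solution for the rigid transform and how a TRE can be evaluated at a target point not used in the registration; it is a didactic example, not the registration pipeline of any cited study.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points.

    src, dst : (N, 3) arrays of paired fiducial coordinates.
    Uses the SVD-based closed-form solution (Arun/Kabsch).
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Invented fiducials in image space and their (noisy) intraoperative positions.
rng = np.random.default_rng(0)
fiducials_img = rng.uniform(0, 100, size=(4, 3))
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
t_true = np.array([10.0, -5.0, 2.0])
fiducials_pat = fiducials_img @ R_true.T + t_true + rng.normal(0, 0.5, (4, 3))

R, t = rigid_register(fiducials_img, fiducials_pat)

# Target registration error at a point not used for the registration.
target_img = np.array([50.0, 50.0, 50.0])
target_pat = R_true @ target_img + t_true
tre = np.linalg.norm((R @ target_img + t) - target_pat)
print(f"TRE at target: {tre:.2f} mm")
```

With well-distributed fiducials the TRE stays near the fiducial noise level; the larger errors reported in Table 3 also reflect factors that a rigid model ignores, such as organ deformation and tracking drift.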
Table 4. AR display devices for intraoperative visual guidance.

| Display Device Classification | Reference | Year | Application Field | Advantages | Disadvantages |
|---|---|---|---|---|---|
| Fixed video display | Kawakami et al. [87] | 2024 | Dental surgery | High-resolution display | Requires the surgeon to frequently divert attention from the operative field |
| | Huang et al. [88] | 2024 | Dermatological surgery | | |
| Mobile video display | Mehta et al. [89] | 2024 | Cardiovascular surgery | Adapts to complex surgical environments | Screen size limits implementation; screen stabilization issues |
| | Dogan et al. [90] | 2024 | Craniotomy | | |
| Translucent screen | Choi et al. [91] | 2024 | - | Intuitive; in situ magnification | Limited viewing angle |
| HMD | Judy et al. [92] | 2024 | Spinal surgery | Immersive experience | Physical burden on the surgeon during surgery |
| | Verhellen et al. [93] | 2024 | Orthopedic surgery | | |
| | Kann et al. [94] | 2024 | Thoracolumbar spine trauma surgery | | |
| Projection display technology | Ibrahim et al. [95] | 2024 | Facial surgery | Large free space for surgical operations | Distortion problems; lower image resolution |
| | Mamone et al. [96] | 2020 | Osteotomy | | |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
