Review

Recent Advancements in Augmented Reality for Robotic Applications: A Survey

1 Department of Electronics, Information and Bioengineering, Politecnico di Milano, 20133 Milan, Italy
2 Asensus Surgical Inc., Durham, NC 27703, USA
3 Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hong Kong SAR, China
4 College of Information Science and Engineering, Ocean University of China, Qingdao 266100, China
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Actuators 2023, 12(8), 323; https://doi.org/10.3390/act12080323
Submission received: 10 July 2023 / Revised: 3 August 2023 / Accepted: 10 August 2023 / Published: 13 August 2023
(This article belongs to the Special Issue Motion Planning and Control of Robot Systems)

Abstract

Robots are expanding from industrial applications to daily life, in areas such as medical robotics, rehabilitative robotics, social robotics, and mobile/aerial robotic systems. In recent years, augmented reality (AR) has been integrated into many robotic applications, including medical, industrial, human–robot interaction, and collaboration scenarios. In this work, AR for both medical and industrial robot applications is reviewed and summarized. For medical robot applications, we investigated the integration of AR in (1) preoperative and surgical task planning; (2) image-guided robotic surgery; (3) surgical training and simulation; and (4) telesurgery. AR for industrial scenarios is reviewed in (1) human–robot interactions and collaborations; (2) path planning and task allocation; (3) training and simulation; and (4) teleoperation control/assistance. In addition, the limitations and challenges are discussed. Overall, this article serves as a valuable resource for those working in the field of AR and robotics research, offering insights into the recent state of the art and prospects for improvement.

1. Introduction

Robotic systems have experienced significant development in recent decades and have become increasingly prevalent in various applications, including medical, industrial, collaborative, and bionic systems [1,2,3,4,5]. Among all robotic applications, medical and industrial robots have gained extensive attention and have brought about a revolution in their respective fields. For instance, the integration of medical robotic systems has greatly augmented surgeons’ capabilities and performance in surgical task completion compared to traditional manual surgical procedures [6,7]. Specifically, robotic systems offer high positioning accuracy, improved dexterity, fatigue avoidance, and reduced trauma, which translates into faster recovery, especially for delicate procedures [8]. Additionally, the Industry 4.0 paradigm strives to achieve more intelligent manufacturing, perception, and monitoring processes [9], in which advanced robotic systems are indispensable. In particular, robotic systems outperform human operators in repetitive tasks, heavy load lifting, quality consistency, and more.
The interface for human–machine or human–robot interaction and collaboration plays an important role, allowing the human operator to send commands and supervise task execution through inputs such as a keyboard, voice, or gestures [10,11]. However, with traditional interfaces, such as a mouse and keyboard, visual feedback is provided to the user on a screen or external monitor. Most of these visualizations are not intuitive enough, require high hand–eye coordination, and can cause distractions [12]. Augmented reality (AR) has emerged as a transformative technology that provides the human operator with more transparent and immersive visual feedback, further assisting the operator in decision-making and improving task completion performance [13,14]. AR can enhance human–robot interaction (HRI) and human–robot collaboration (HRC) by improving communication, situational awareness, and remote collaboration, leading to more efficient and effective interactions.
The integration of AR can enhance the user experience, simplify robot programming, and enable remote collaboration, making robotic systems more accessible and versatile in various applications. In medical robotics, AR techniques have been employed in various applications, such as rehabilitation, surgery, and medical assistance. Especially in surgical fields, AR has been introduced into multiple phases of surgical operations, including preoperative surgical task planning [15], intraoperative surgical guidance [16], and telesurgery [17]. The AR interface can be used to guide the surgeon in locating the incision position and to provide real-time intraoperative guidance during robot-assisted surgery (RAS) procedures. Similarly, AR has been demonstrated to be highly beneficial in industrial areas, such as complicated assembly task guidance [18], collaborative tasks [19], and manufacturing process monitoring [20]. By leveraging the capabilities of AR, industrial robots can become more intuitive, efficient, and accessible tools, empowering operators and experts to collaborate effectively and precisely and enhancing productivity.
Several reviews have been conducted on the topic of AR for robotic systems in recent years, covering topics from industrial robots to medical robots, including rehabilitation, assistive, and surgical applications. For example, Qian et al. [21] reviewed AR in RAS scenarios, covering hardware components, application paradigms, and clinical relevance, and summarized future perspectives. Bertolo et al. [22] performed a systematic review of AR in urological interventions and pointed out that the critical limitation of AR-assisted surgery was inaccuracy in registration, which causes poor navigation precision. In addition, Makhataeva et al. [23] reviewed AR in medical, motion planning, human–robot interaction, and multi-agent system applications. Suzuki et al. [24] classified AR-improved human–robot interaction interfaces for robotics. In recent years, both AR hardware systems and human–machine interfaces have improved significantly, accompanied by extensive research results. For example, the HoloLens 2 (Microsoft, Redmond, WA, USA) was released in 2019 with improved immersive perception capability, ergonomics, positioning accuracy, gaze tracking, and human–machine interface compared with the first-generation HoloLens [25]. Furthermore, extensive applications have recently been investigated in robotics, such as surgical guidance, surgical training, standard manufacturing, and intelligent manufacturing. Hence, this paper aims to provide a brief review of recent works and advancements in AR techniques for both medical and industrial robot applications. Afterward, the current challenges and future perspectives of AR for robotics are summarized as well.
In this survey paper, the adopted definition of augmented reality is the widely accepted one given by Milgram et al. [26]: “any case in which an otherwise real environment is augmented by means of virtual (computer graphic) objects”. In contrast, a virtual reality (VR) environment is one in which the participant/observer is totally immersed and in which they can interact with a completely synthetic world. To this day, AR and VR still share similarities in terms of hardware, scope, and usability. The importance of VR in technology applications and its increasingly relevant role in research is undeniable; however, due to the differences in areas of application and the sizes of both research fields, this survey paper only focuses on AR applications.
The remainder of this manuscript is organized as follows: Section 2 gives an introduction to the development of AR and robotics. Afterward, Section 3 summarizes the AR for medical robotic applications and consists of Section 3.1, preoperative and surgical task planning; Section 3.2, image-guided robotic surgery; Section 3.3, surgical training and simulation; and Section 3.4, Telesurgery. Following that, Section 4 describes AR for industrial robotic applications and includes Section 4.1, human–robot interaction and collaboration; Section 4.2, path planning and task allocation; Section 4.3, training and simulation; and Section 4.4, teleoperation control/assistance. Then, a discussion is given in Section 5 and includes Section 5.1, limitations and challenges; and Section 5.2, future perspectives. Finally, Section 6 concludes the work.

2. Augmented Reality and Robotics

AR possesses the potential to transcend the physical limitations of conventional interaction by integrating holographic information onto real scenarios. This unique attribute provides the users with a more comprehensive perception capability of their surroundings and results in improved interactive experiences [27]. Over the past decades, AR has gained substantial popularity and is increasingly influential in diverse fields such as industry, medicine, entertainment, and education [28]. Benefiting from the inherent intuitive and efficient information presentation ability in interaction, AR-based solutions have been integrated into many robotic applications [24]. Figure 1 illustrates the concept of integrating AR techniques in medical and industrial robotic applications.

2.1. Augmented Reality

Although AR has achieved widespread application, a clear and consistent definition of this technology remains elusive in both academic and industrial fields. Milgram and Kishino [26] introduced the concept of a “reality–virtuality continuum” to define augmented reality and augmented virtual environments based on surrounding environmental characteristics. Azuma et al. [29] subsequently proposed a widely adopted definition of AR as the seamless integration of 3D virtual objects with the real environment in real time, characterized by three key features: the combination of virtual and real space, real-time interaction, and spatial registration. Considering the blurred boundaries between AR and mixed reality (MR), a more general description is nowadays widely adopted, positing that any system that enhances physical objects and environments can be deemed AR, regardless of the technological form [23,24]. Despite these various definitions, the overarching objective of AR is to augment human perception of the physical environment while simultaneously improving information interaction efficiency [30].
AR devices encompass various fixed/mobile displays, projectors, and head-mounted displays (HMDs) [31]. Display-based AR implementation requires a camera to capture real-world information, which is rendered together with virtual information on a computer and finally exhibited on the monitor. In contrast, projection-based augmented reality solutions directly project digital content onto physical environments, leading to an enhanced perception of reality and enabling users to interact with virtual objects in a more direct way. In recent years, lightweight wearable displays have become the primary medium of AR, typically implemented through two primary modes: optical see-through and video see-through [32]. The optical see-through approach uses holographic optics to project virtual information directly into the user’s eyes, superimposing it onto the view of the real environment. In contrast, the video see-through approach fuses virtual content with camera images of the environment and displays the result on an opaque HMD.
In recent years, a number of commercially available AR devices have appeared on the market and are being used in various applications [33]. Display-based AR is the simplest and easiest to implement and can run on any display-equipped device, including computers, mobile phones, or tablets. Projection-based AR (also known as projection mapping or spatial AR) uses a projector instead of a display to project augmented information onto an irregular surface to enhance perception and provide the ability to interact spatially. One of the best-known optical see-through HMD devices is Google Glass (Google Inc., Mountain View, CA, USA), which projects information onto a small screen located directly above and to the right of the user’s right eye, with little obstruction to the user’s vision. The prismatic projection structure is easy to implement, but the small screen and the monocular structure provide only limited immersion. Another commonly used device is Microsoft’s HoloLens, which is now in its second generation. HoloLens 2 is a complete AR system and contains a central processing unit, a custom-designed holographic processing unit, various types of sensors, and a holographic projector with see-through optics. Due to its rich sensing system and large field of view (FoV), it has become a mainstream device for AR applications in industry and medicine. In addition, the Magic Leap One (Magic Leap Inc., Plantation, FL, USA) is an optical see-through head-mounted display (OST-HMD) capable of overlaying digital content on the real world, creating an immersive AR experience, and has been integrated into many robotic applications. Common video see-through HMD devices currently available include the Samsung Gear VR2 (Samsung Corp., Seoul, Republic of Korea), which uses a smartphone screen and embedded camera to display the real world, and the ZED Mini (Stereolabs, San Francisco, CA, USA), which is specifically designed to provide a high-resolution stereoscopic view of the real world while the user wears an HMD. Notably, Apple’s recently announced Vision Pro (Apple Inc., Cupertino, CA, USA) is also a video see-through HMD and is expected to become one of the most capable AR HMDs.
Table 1 collects and compares the hardware parameters of the most relevant AR devices on the market [34]. As technology advances, augmented reality devices are achieving unprecedented levels of performance, seamlessly blending virtual elements with the real world. Their enhanced processing power and refined optics are opening up exciting possibilities across various industries and research fields, revolutionizing how we interact with information and our environment. The improvement of display and tracking technologies will broaden the range of practical applications, especially when more precise alignments are needed [24].

2.2. Robotics Applications

Robotics is currently experiencing a paradigm shift from single-purpose applications and fixed workspace constraints towards the development of general-purpose collaborative robots: positional accuracy, repeatability, spatial awareness, and back-drivability are now key concepts shared by very diverse fields of application.
Specifically, robotics has had a profound impact on both industrial and medical applications, revolutionizing these sectors in numerous ways. In the industrial domain, robots are extensively used in manufacturing processes, automating tasks such as welding, assembly, and material handling [35]. This integration enhances productivity, lowers production costs, and ensures consistent quality. Furthermore, collaborative robots, or cobots, work alongside human workers, augmenting their capabilities and creating safer work environments. In the medical field, robotics has brought about significant advancements, particularly in surgical procedures. Robotic surgical systems enable surgeons to perform complex operations with enhanced precision and dexterity, leading to reduced trauma, faster recovery times, and improved patient outcomes [36]. Additionally, telemedicine applications are leveraging robotics to facilitate remote surgeries and consultations, expanding access to healthcare in remote areas. Moreover, rehabilitation robots aid in physical therapy, helping patients to recover from injuries or neurological conditions more effectively. The continuous development of robotics technology promises even more innovative applications in both the industrial and medical realms, making processes more efficient and elevating the standard of care in healthcare.
Medical robotics and industrial robotics are two distinct yet interconnected branches of the robotics field, each with its specialized applications and unique set of challenges. Medical robotics focuses on the development of robotic systems and devices to assist healthcare professionals in surgeries, diagnostics, and patient care [37]. These robots are designed to be precise, compact, and capable of delicate movements to ensure safe and accurate medical interventions. In contrast, industrial robotics is geared towards automating manufacturing processes in various industries, ranging from automotive to electronics, where the emphasis is on high-speed production, efficiency, and cost effectiveness [38]. While both medical and industrial robotics employ cutting-edge technologies, they have divergent objectives and constraints. Medical robotics demands stringent safety and regulatory compliance due to the potential risks associated with human–robot interactions in sensitive medical procedures [39]. On the other hand, industrial robots primarily operate within controlled factory environments, and safety measures are mainly focused on protecting human workers from accidental collisions. In terms of complexity, medical robotics often involves intricate kinematics and sensor systems to navigate within the confined spaces of the human body and deliver precise surgical maneuvers. Industrial robots, however, are engineered to perform repetitive tasks with higher payloads and longer reach, requiring robust mechanical structures and control algorithms. The cost factor also differentiates these two domains significantly. Medical robots tend to be more expensive due to their specialized nature, advanced materials, and extensive research and development. Industrial robots, by contrast, are often designed for mass production and can benefit from economies of scale, resulting in comparatively lower costs.
Despite their distinct purposes, there are areas of convergence between medical and industrial robotics. Advancements in perception systems [40], machine learning, and artificial intelligence [41] have influenced both domains, leading to improved capabilities such as object recognition, adaptive control, and autonomous decision making. Moreover, developments in materials and miniaturization [42] have enabled the creation of more compact and versatile robots in both sectors.
In conclusion, while medical robotics and industrial robotics serve divergent purposes, they share common technological foundations and have shaped each other’s progress through interdisciplinary collaboration. As these fields continue to evolve, their combined contributions hold the potential to drive further innovations and improve the quality of life for individuals worldwide.

2.3. Robotic Applications with Augmented Reality

The collaboration between human operators and robots within overlapping workspaces requires a higher level of efficiency and flexibility in human–robot interaction. Traditional human–robot interaction relies on the robots’ internal physical and perceptual capabilities, such as gestures, audio, visual displays, natural language (text and spoken language), physical interaction, and haptic feedback [43]. However, these interaction methods are limited by their expression capacity, presenting major challenges in efficiency and convenience. The emergence of AR technology has the potential to address these challenges in HRI, particularly in medical and industrial applications.
For medical robots, AR techniques are usually integrated into preoperative planning and intraoperative navigation, which can effectively alleviate the cognitive burden on surgeons during surgery and improve the efficiency and accuracy of surgical operations. For example, Porpiglia et al. [44] introduced elastic 3D virtual models in the da Vinci surgical system (Intuitive Surgical Inc., Sunnyvale, CA, USA) and superimposed them on the three-dimensional visual interface provided by the laparoscope to dynamically simulate the deformed organ and guide prostate surgery execution. In addition to the lead surgeon, AR technology can also be used to assist other members of the surgical team with their tasks. For example, Qian et al. [45] proposed the ARssist concept, which uses a head-mounted optical display to superimpose the laparoscopic view onto the real operating scene to assist the first assistant with auxiliary tasks in robotic surgery.
In industrial robotic applications, AR has been introduced for robot motion planning, control, task allocation, manufacturing system monitoring, etc. The use of AR technology allows for the direct mapping of virtual objects into real-world environments, facilitating rapid path planning and programming. Ong et al. [46] proposed an AR-assisted welding path programming system that simplifies the welding path and torch direction definition process through an AR interface, enabling accurate positioning based on user inputs. Ostanin et al. [47] developed an AR-based interactive programming method using the Unity engine and HoloLens glasses, which involved point cloud analysis of real robot positions to generate viable robot trajectories using virtual markers and menus. In another study, Wesley et al. [48] established an AR-based application for a carbon-fiber-reinforced polymer materials process, showing that AR can reduce physical exertion and task completion times compared to a manual positioning joystick while also improving robot utilization.
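As a concrete illustration of the waypoint-based AR programming idea above, the following minimal sketch shows how holographic waypoints authored in a headset's world frame could be re-expressed in the robot base frame before being handed to a motion planner. The calibration transform, frame names, and waypoint values are assumptions made for illustration, not taken from the cited systems.

```python
# Minimal sketch of mapping AR-defined waypoints into a robot's base frame
# before sending them to a planner. The calibration transform and waypoint
# values below are illustrative assumptions.
import numpy as np

def pose_to_matrix(position, rotation):
    """Build a 4x4 homogeneous transform from a position and a 3x3 rotation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = position
    return T

def ar_waypoints_to_robot_frame(T_robot_ar, waypoints_ar):
    """Re-express waypoint poses defined in the AR world frame in the robot base frame.

    T_robot_ar : 4x4 pose of the AR world frame w.r.t. the robot base,
                 typically obtained once via marker- or point-based calibration.
    waypoints_ar : list of 4x4 poses authored by the user in the AR interface.
    """
    return [T_robot_ar @ T_wp for T_wp in waypoints_ar]

if __name__ == "__main__":
    # Hypothetical calibration: AR origin located 0.5 m in front of the robot base.
    T_robot_ar = pose_to_matrix([0.5, 0.0, 0.0], np.eye(3))
    # One hologram waypoint placed by the user 0.2 m above the AR origin.
    wp = pose_to_matrix([0.0, 0.0, 0.2], np.eye(3))
    (wp_robot,) = ar_waypoints_to_robot_frame(T_robot_ar, [wp])
    print(wp_robot[:3, 3])  # -> [0.5, 0.0, 0.2] in the robot base frame
```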

3. Augmented Reality for Medical Robotic Applications

AR has gained significant traction in the medical robot field and has been integrated into various applications to enhance preoperative and intraoperative surgical planning and surgical procedure guidance [31]. AR in medical robotics has a fascinating history that began with its early applications in the late 20th century. One of the pioneering works in this field was the development of the “virtual fixtures” system by Louis Rosenberg at the U.S. Air Force Research Laboratory in 1993, which introduced the concept of overlaying virtual information onto the real world for robotic-assisted tasks [49]. As technology advanced, AR found its way into surgical procedures, where it was used to enhance visualization, navigation, and precision. A notable example is the first reported use of AR in neurosurgery in 1998, when Kwoh et al. utilized AR visualization to assist in skull base tumor resections, achieving improved accuracy and safety [50]. Since then, AR in medical robotics has continued to evolve, revolutionizing various aspects of healthcare, from training and simulation to intraoperative assistance and post-operative monitoring. One of the most relevant examples proposed in the early 2000s was developed by Anderson et al. [51], who proposed a computer-based system for the simulation of image-guided cardiovascular procedures for physician and technician training.
These pioneering works still resonate in more modern applications within the research community of medical robotics, medical imaging, and computer-aided surgery. One of the largest research communities is the Medical Image Computing and Computer-Assisted Intervention (MICCAI) community, which brings together laboratories and researchers from around the world, organizing annual conferences, workshops, and challenges to push the boundaries of AR [13] and medical robotics forward.
In this section, related works using AR for medical robots are reviewed and summarized, mainly categorized into four topics, namely, “preoperative and surgical task planning”, “image-guided robotic surgery”, “surgical training and simulation”, and “telesurgery”.

3.1. Preoperative and Surgical Task Planning

With medical imaging data, such as computerized tomography (CT) or magnetic resonance image (MRI) scans, surgeons can accurately locate targets and assess the depth and orientation of structures, improving surgical precision and reducing risks. In particular, AR can further assist surgeons in preoperative and surgical task planning by providing them with real-time and precise guidance and visualization. Moreover, optimizing the positioning of the surgical robot, the robot configuration, and the incision position can improve the surgeon’s ergonomics and reduce the probability of intraoperative failure during RAS. This subsection summarizes related works on using AR interfaces for surgical task planning, robot configuration optimization and port placement, and surgical tool insertion tasks. Related works using AR for preoperative and surgical task planning are summarized in Table 2.
AR techniques have been successfully integrated into the surgical task planning process to improve users’ precision and task completion efficiency by leveraging the information overlaid onto their view. For instance, Samei et al. [52] implemented a partial AR system with live ultrasound and registered preoperative MRI for guiding robot-assisted radical prostatectomy using the da Vinci (Intuitive Surgical, Sunnyvale, CA, USA) robotic system. Zhang et al. [53] proposed a multiple-view registration interface for planning robot-assisted spinal puncture procedures; comparison experiments with a single-view registration interface showed that the proposed approach achieved a registration accuracy of 1.70 ± 0.25 mm, which met the requirements of clinical procedures. In ref. [15], an AR application was employed for trajectory planning on the adjacent surface of the full crown during robot-assisted tooth preparation procedures; the robot could also be controlled using the designed AR interface. Fu et al. [54] proposed an AR interface for interactive needle insertion path planning in robot-assisted percutaneous nephrolithotomy (PCNL) tasks. Moreover, Ho et al. [55] designed an AR-assisted task planning and navigation system for robot-assisted surgical drilling tasks to enhance safety. The implemented AR interfaces enabled the human operator to modify the robot’s trajectory, including both the position and orientation of the robotic manipulator, and to supervise the execution of the surgical task.
In addition, AR visualization provides the surgeon with an intuitive and immersive interface for robot configuration verification, surgical system setup, and incision port selection on the patient’s body. Fotouhi et al. [56] proposed an AR-assisted interactive interface to address the challenge of perspective ambiguities during the virtual-to-real alignment of the robotic arm in robot-assisted minimally invasive surgery (RA-MIS) tasks. Żelechowski et al. [57,58] investigated the use of an OST-HMD to appropriately position the patient, considering the limited workspace of the serial LBR iiwa (KUKA AG, Augsburg, Germany) surgical robotic system in RA-MIS tasks. Similarly, Wörn et al. [59] explored the use of an AR interface to find the optimal placement of the trocars to minimize the probability of collision with the robot arm in manual laparoscopic surgery and for robot configuration initialization in teleoperated robot-assisted surgery using the da Vinci robotic system. Following this work, Weede et al. [60] developed a projection-based AR visualization system to overlay the optimized port positions onto the patient’s abdomen, taking into account the ergonomic working directions, collision avoidance, and reachability of the target areas during minimally invasive surgery (MIS) procedures. Moreover, an AR-assisted framework for repositioning the C-arm was implemented, in which the X-ray technician was equipped with an HMD AR visualization interface to operate the C-arm interventionally in 3D via an integrated infrared sensor [61]. Fu et al. [62] designed an AR-assisted robot learning framework for MIS tasks, in which an AR interface displayed on a HoloLens 2 OST-HMD was adopted to verify the configuration of a serial redundant robot. In this work, a Gaussian mixture model (GMM) and Gaussian mixture regression (GMR) were employed to encode multiple human-demonstrated trajectories and generate a robust desired trajectory, which was then transferred to the robotic system for reproduction in the MIS scenario.
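To make the GMM/GMR step above concrete, the sketch below fits a Gaussian mixture over time-indexed demonstration points and uses Gaussian mixture regression to recover a smooth reference trajectory. The demonstration data, dimensionality, and number of components are illustrative assumptions, not the setup of ref. [62].

```python
# Minimal GMM/GMR sketch for encoding several demonstrated 1-DoF trajectories
# and regressing a smooth reference trajectory over time. Demonstration data,
# component count, and dimensionality are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic demonstrations: 5 noisy copies of a reference motion x(t).
t = np.linspace(0.0, 1.0, 100)
demos = [np.sin(2 * np.pi * t) + 0.05 * np.random.randn(t.size) for _ in range(5)]
data = np.column_stack([np.tile(t, 5), np.concatenate(demos)])  # columns: [t, x]

# Fit a joint GMM over (t, x).
gmm = GaussianMixture(n_components=6, covariance_type="full").fit(data)

def gmr(gmm, t_query):
    """Gaussian mixture regression: E[x | t] for each query time."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    out = np.zeros_like(t_query)
    for i, tq in enumerate(t_query):
        # Responsibility of each component given the input t (constants cancel).
        h = np.array([w * np.exp(-0.5 * (tq - m[0]) ** 2 / c[0, 0]) / np.sqrt(c[0, 0])
                      for w, m, c in zip(weights, means, covs)])
        h /= h.sum()
        # Conditional means mu_x + Sigma_xt / Sigma_tt * (t - mu_t), blended by h.
        cond = [m[1] + c[1, 0] / c[0, 0] * (tq - m[0]) for m, c in zip(means, covs)]
        out[i] = np.dot(h, cond)
    return out

reference_trajectory = gmr(gmm, t)  # smooth trajectory to hand to the robot
```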
Surgical tool or needle insertion is a common and crucial step in surgical procedures, particularly in MIS tasks [31]. AR has been employed to assist surgeons in selecting needle poses intuitively instead of observing imaging feedback on external 2D monitors, which can reduce surgeons’ mental and physical workloads [63]. Vörös et al. [64] created an AR interface, using a HoloLens 2 OST-HMD, to help the operator position and adjust the pose of the drilling apparatus in robot-assisted pedicle screw surgical tasks. Qian et al. [21] designed the “ARssist” framework to guide the first assistant with preoperative instrument insertion and tool manipulation in robot-assisted laparoscopic surgery. The experimental results showed that the proposed interface significantly enhanced task completion performance and efficiency while requiring less hand–eye coordination than a typical 2D monitor visualization interface. Similarly, an AR interface using an OST-HMD was implemented for training novice surgeons in PCNL tasks, particularly for the needle alignment and insertion procedures [65]. Beyond wearable devices, a projection-based AR system was used to plan needle placement during percutaneous radio-frequency (RF) liver ablation for liver resections. The average error in the experiment was 1.86 mm for virtual needle insertion, which was below the clinical requirement of 2 mm [66]. In ref. [67], an AR-assisted robotic navigation system was designed for spinal surgery, and the pre-planned path was visualized on a wearable OST-HMD to guide the pedicle screw insertion process.
Table 2. Summary of AR use in medical robots for preoperative and surgical task planning.
Application | References | Robot Platform | AR Medium | Detailed Contents
Surgical task planning | Samei et al. [52] | da Vinci | Console | Prostatectomy
Surgical task planning | Zhang et al. [53] | KUKA KR 6 R900 | Projector | Spinal
Surgical task planning | Jiang et al. [15] | Custom | Monitor | Dental
Surgical task planning | Fu et al. [54] | KUKA LWR IV+ | OST-HMD | PCNL
Surgical task planning | Ho et al. [55] | Custom | OST-HMD | Spinal
Robot system setup | Fotouhi et al. [56] | KUKA iiwa 7 R800 | OST-HMD | MIS
Robot system setup | Żelechowski et al. [57] | KUKA LBR iiwa Med | OST-HMD | MIS
Robot system setup | Wörn et al. [59] | da Vinci | Projector | Laparoscopy
Robot system setup | Weede et al. [60] | KUKA LWR IV+ | Projector | MIS
Robot system setup | Fu et al. [62] | KUKA LWR IV+ | OST-HMD | MIS
Needle insertion planning | Vörös et al. [64] | KUKA LWR IV+ | OST-HMD | Spinal
Needle insertion planning | Qian et al. [21] | da Vinci | OST-HMD | MIS
Needle insertion planning | Ferraguti et al. [65] | KUKA LWR IV+ | OST-HMD | PCNL
Needle insertion planning | Wen et al. [66] | Custom | Projector | RF Ablation
Needle insertion planning | Boles et al. [67] | Custom | OST-HMD | Spinal

3.2. Image-Guided Robotic Surgery

Surgeons rely on the information from medical imaging not only for preoperative planning purposes but in the intraoperative phase as well. In most cases, the operating room (OR) is equipped with displays where the surgeon and the OR staff visualize the medical image in its 3D or sliced-2D form, using it as a reference for locating the anatomical structures of interest. In this scenario, the surgeon is constantly switching their focus of attention from the medical images to the surgical environment, requiring extraordinary hand–eye coordination and the ability to map the surgical scene and the motion of the instruments to the image space. The advent of AR and its introduction into the surgical field empowered surgeons with techniques for visualizing the information gathered from medical images directly superimposed onto their view of the surgical environment and seamlessly aligned with anatomical structures. All the applications of AR for image-guided robotic surgery presented in this subsection are summarized in Table 3.
When the surgical scenario is not subject to deformations, the registration is rigid, the accuracy is highest, and the benefits of AR technologies are the most obvious. Fotouhi et al. [68] guided the placement of K-wires in orthopedic surgery for total hip replacement by visualizing the optimal insertion path on the surgeon’s OST-HMD. Andress et al. [69] exploited co-registration between an OST-HMD and a C-arm to visualize annotations on X-ray images and their positions in real-world space, allowing for easier localization of orthopedic lesions. For spinal microscopic surgery, Carl et al. [70] integrated the visualization of vertebrae, discs, and tumors into the video feed of heads-up displays to allow an intraoperative see-through experience.
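In rigid settings like these, registration is often solved as a paired-point least-squares alignment (an Arun/Kabsch-style SVD solution). The sketch below illustrates the idea with assumed fiducial coordinates; it is not the specific pipeline of any of the cited works.

```python
# Sketch of closed-form paired-point rigid registration (Arun/Kabsch-style SVD),
# the kind of alignment underlying many rigid AR guidance setups. The fiducial
# coordinates below are illustrative assumptions.
import numpy as np

def rigid_register(src, dst):
    """Find R, t minimizing ||R @ src_i + t - dst_i||^2 over corresponding points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Example: fiducials picked in the preoperative image vs. localized in the AR frame.
image_pts = np.array([[0.0, 0, 0], [0.1, 0, 0], [0, 0.1, 0], [0, 0, 0.1]])
theta = np.deg2rad(20)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
ar_pts = image_pts @ R_true.T + np.array([0.05, -0.02, 0.3])

R, t = rigid_register(image_pts, ar_pts)
fre = np.linalg.norm(image_pts @ R.T + t - ar_pts, axis=1).mean()  # fiducial registration error
print(f"mean FRE: {fre * 1000:.3f} mm")
```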
Teleoperated leader–follower (in the past known as master–slave) surgical robotic platforms are equipped with screens for real-time visualization of the endoscopic camera feed: AR applications in this context usually take advantage of such a display to visualize the anatomical structures directly superimposed. Chen et al. [71] performed an ultrasound-based reconstruction and registration of oropharyngeal anatomical structures, which were then displayed onto the high-resolution stereo viewers (HRSV) of a da Vinci robot during trans-oral robotic surgery (TORS). In the same context, Chan et al. [48] successfully conducted a feasibility cadaveric experiment, visualizing in real-time the carotid artery system during robotized head-and-neck surgery. Integrating the augmented visualization onto the display of the robotic platform may be advantageous compared to employing external hardware such as OST-HMDs, which come with their limitations in terms of battery life, connectivity issues, and visual line-of-sight impairments [72].
AR has also been integrated into surgical contexts where the anatomy is non-rigid and, therefore, registration must be performed with more advanced techniques that account for deformations. This challenge was pursued by Pessaux et al. [73], who built an advanced visualization system for the abdominal vasculature and anatomy to assist surgeons during liver resection procedures, accounting for pneumoperitoneum-induced deformations in the registration phase. With a different approach, Marques et al. [74] proposed a framework for surgical assistance during minimally invasive robotic hepatic surgery, registering preoperative CT images to point clouds of the liver surface acquired intraoperatively from a stereo endoscope. Lee et al. [75] built a similar framework for visualization in robotic thyroid surgery. Shen et al. [76] developed a fully customized actuated system for trans-rectal ultrasound (TRUS) 3D reconstruction of the rectum, displaying on an OST-HMD an augmented visualization enriched with the reconstructed anatomy and the target tumoral tissues. Kalia et al. [77] followed a similar approach, conducting a preclinical study of a real-time AR-based guidance system for radical prostatectomy embedded in a surgical robot, aimed at assisting the surgeon with intraoperative visualization of the prostate anatomy reconstructed from TRUS, the projected US scans, and the tumors to be targeted. Porpiglia et al. [78] employed a computer vision algorithm for the automated anchoring of virtual 3D models on intraoperative images during robotized prostatectomy, effectively utilizing a learning-based approach for AR registration.
The work of Piana et al. [79] showcased how AR can be employed to address some notable limitations of teleoperated surgical robotics. Specifically, their work highlighted how the lack of haptic feedback in robotic surgery hinders the identification of atheromatous plaques during robot-assisted kidney transplantation and, therefore, built an intraoperative visualization tool based on the patient’s 3D images, intended to support the localization process during minimally invasive surgery.
Bianchi et al. [16] notably conducted a 20-patient quantitative study demonstrating that embedding AR-enhanced visualizations of anatomical structures in radical prostatectomies allows for significantly reduced positive surgical margins. AR was validated as a technique for increasing the safety of surgical interventions in addition to the benefits to the workflow and cognitive load of the surgeon.
Edgcumbe et al. [80] introduced a custom-made sterilizable “dart-shaped” tracker to be surgically inserted into the patient’s body to allow its registration into the coordinate system of the surgical robot, effectively allowing the visualization of the relative pose of the instruments and the tracker itself. Combining the robot kinematics and the detected relative pose of the tracker, the surgeon visualizes the surgical structures and their 3D relative pose with respect to the instruments intraoperatively. Similarly, Qian et al. [45] exploited ArUco markers mounted onto the actuated arms of the da Vinci robot to project a hologram of the robot on an OST-HMD worn by the OR first assistant. They could, therefore, visualize the whole robot hologram projected in the OR and onto the patient, where the instruments inside the surgical scene were also visible.
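Marker-based setups of this kind generally reduce to detecting a fiducial in the camera image and solving a Perspective-n-Point problem for its 6-DoF pose, which is then chained with calibrated transforms to place the hologram. The sketch below uses OpenCV's ArUco module to illustrate that step; the camera intrinsics, marker size, and dictionary are assumptions, and the exact ArUco API varies slightly between OpenCV versions.

```python
# Sketch of fiducial-marker pose estimation of the kind used to register a robot
# hologram to the physical robot. Camera intrinsics, marker size, and dictionary
# are illustrative assumptions; the ArUco API differs slightly across OpenCV versions.
import cv2
import numpy as np

MARKER_SIZE = 0.05  # marker edge length in meters (assumed)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
dist = np.zeros(5)                                           # assumed no distortion

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def marker_pose(image):
    """Return the (R, t) pose of the first detected marker in the camera frame."""
    corners, ids, _ = cv2.aruco.detectMarkers(image, aruco_dict)
    if ids is None:
        return None
    # 3D corner model of the marker in its own frame (z = 0 plane).
    s = MARKER_SIZE / 2.0
    obj = np.array([[-s, s, 0], [s, s, 0], [s, -s, 0], [-s, -s, 0]], dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj, corners[0].reshape(4, 2), K, dist)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec.ravel()   # marker pose in the camera frame; chain with the
                             # camera-to-headset transform to anchor the hologram
```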
Table 3. Summary of AR in image-guided surgery application.
Application | References | Platform | AR Medium | Detailed Contents
Head-and-Neck | Chen et al. [71] | da Vinci | HRSV | Anatomy
Head-and-Neck | Chan et al. [48] | da Vinci | HRSV | Anatomy
Hepatic | Pessaux et al. [73] | da Vinci | HRSV | Anatomy
Hepatic | Marques et al. [74] | da Vinci | OST-HMD | Anatomy
Thyroid | Lee et al. [75] | da Vinci | HRSV | Anatomy
Colorectal | Shen et al. [76] | Custom | OST-HMD | Anatomy
Colorectal | Kalia et al. [77] | da Vinci | HRSV | US Scans
Colorectal | Porpiglia et al. [78] | da Vinci | HRSV | Anatomy
Urology | Piana et al. [79] | da Vinci | HRSV | Anatomy
Urology | Edgcumbe et al. [80] | da Vinci | HRSV | Instruments
Abdominal Cavity | Qian et al. [45] | da Vinci | OST-HMD | Instruments

3.3. Surgical Training and Simulation

The increase in minimally invasive surgical robotics procedures over the last decade has demanded an ever-larger number of trained surgeons capable of teleoperating such advanced and complex systems while taking advantage of the benefits of RA-MIS safely and effectively. The role of surgical training and surgical robotics training is, hence, of key importance in achieving optimized learning time, skill retention, and skill transfer. In this context, AR acts as a guidance system and as an automated supervisor for personalized learning: Peden et al. [81] reported a significant improvement in the perceived quality and utility of a surgical skills teaching curriculum for students who learned with AR-enhanced modalities. Long et al. [82] leveraged reinforcement learning to learn from expert demonstrations on a peg-transfer task and then generated a 3D guidance trajectory, providing prior context information about the surgical procedure.
The integration of advanced AR visualization systems into surgical robotics training curricula allows trainees to effectively visualize the correct motion paths they should follow, to more quickly learn the anatomy and the structures involved in the task, and to smooth the learning curve. Rewkowski et al. [83], for example, proposed projecting visual cues on an OST-HMD worn by a trainee teleoperating a surgical robot during the execution of a peg-transfer task, with high calibration accuracy and real-time capabilities. Barresi et al. [84] developed an AR-enhanced simulator for learning robot-assisted laser microsurgery, in which the electroencephalography (EEG) signal of the operator was recorded and processed in real time to estimate the level of focus and retract the virtual scalpel during low-concentration phases. A similar framework was proposed in the work of Wang et al. [85] and Zeng et al. [86], where brain–computer interfaces (BCIs) were combined with augmented reality feedback to control a robotic arm used as an assistance strategy for learning object grasping; both studies were aimed at paralyzed subjects. Gras et al. [87] proposed an adaptive AR-enhanced surgical robotic simulator for neurosurgery embedded with a Gaussian process voter that automatically selects the level of AR assistance deployed to the trainee. The authors also conducted a user study that showed clear improvements in user perception of the surgical scene and in task times during a tumor marking task. Condino et al. [88] proposed a “tactile” AR approach by building an actuated wearable fabric yielding device that mimics haptic sensations on the fingertips to improve artery palpation training.
AR technology in surgical robotics training is also exploited to enhance the supervision phase as an advanced communication and mentoring tool. Specifically, Jørgensen et al. [89] addressed the crucial issue of limited visual communication between the supervisor and the trainee and proposed a compact system to overlay the video streams on the da Vinci HRSV with annotations and 3D computer graphics generated by the supervisor. All the aforementioned applications of AR for surgical training and simulation are summarized in Table 4.

3.4. Telesurgery

Benefiting from advances in robotic systems and telecommunication, telesurgery enables surgeons to perform complex surgical operations remotely, regardless of their physical location [91]. AR in telesurgery and telementoring further enables remote visualization, robot control, and proximity alerts. By leveraging these capabilities, AR facilitates seamless real-time remote and local collaboration, enhances surgical accuracy, expands access to specialized medical expertise, and ultimately improves patient care and outcomes [92]. Related works using AR for telesurgery are summarized in Table 5.
Several works have investigated how to improve users’ remote visualization capability and situational awareness in telesurgery with AR. In these works, AR was employed to provide the remote operator with intuitive and immersive visual feedback of important anatomy, surgical instruments inside the abdominal cavity, depth information, etc. Lin et al. [93] explored how to provide clinicians working remotely with synchronous visual feedback using an OST-HMD, which was mounted at the end effector of a serial redundant robotic manipulator. Gasques et al. [94] designed an immersive collaboration framework for surgical telementoring tasks; experiments conducted on cadavers with both expert and novice surgeons demonstrated the promising potential of using AR for telesurgery. Black et al. [95] explored the possibility of using an MR interface in a tele-ultrasound task, in which the follower was instructed to track the desired position and contact force to perform ultrasound scanning tasks demonstrated by a remote expert. In addition, Qian et al. [96] developed an AR-assisted framework, “ARAMIS”, to provide the surgeon with real-time, wireless visual feedback on the patient’s internal structures during MIS. The end-to-end latency was reported as 178.3 ms, and the system improved intuitiveness, reduced task completion time, and achieved a higher success rate. In ref. [97], Huang et al. designed an auto-stereoscopic visualization system for telesurgery leveraging AR techniques on a local site display; the path and model planned during the preoperative phase were fused with point clouds acquired from the remote environment using an RGB-D camera.
Furthermore, telesurgery robotic systems using the AR medium have been extensively explored, considering the advantages of enhanced hand–eye coordination, reduced cognitive load, remote collaboration possibilities, radiation exposure avoidance, etc. In ref. [98], Lin et al. designed an AR-assisted touchless teleoperation interface to provide the operator with immersive visualization of the patient’s anatomical structures and to guide the surgical robot in endoluminal intervention procedures using human hand gesture recognition. Fu et al. [99] explored the usability of an AR visualization interface to ensure synchronization between the local and remote sides in teleoperated robot-assisted ultrasound scanning tasks; after acquiring each image frame from the patient site, a Pose-ResNet artificial intelligence model was utilized to calculate the positions of 16 key points of the human body on the master site. Ho et al. [55] studied an AR-assisted supervised control modality for a robot-assisted drilling system, in which the surgical trajectory was projected onto a 3D vertebral bone model using BT-300 AR glasses (Seiko Epson, Suwa, Japan), and the operator could modify the robot trajectory remotely through a graphical user interface (GUI). Ma et al. [100] implemented a view adjustment framework using an OST-HMD to track the surgeon’s head movement for autonomous navigation of a robotic stereo flexible endoscope, which was mounted at the distal end of the da Vinci surgical robotic manipulator.
Latency is another critical issue in telesurgery, as it can affect task completion accuracy and safety, especially in delicate and fine manipulation tasks. Furthermore, time delay can impose a high cognitive workload on the operator, impair telepresence, and reduce task completion efficiency. Although 5G technology has been investigated for minimizing latency in remote laser microsurgery [103], its cost and accessibility are not suitable for all telesurgery applications. Instead, AR has been utilized in many robot teleoperation applications to overcome this challenge by merging digital information into the physical scenario and predicting robot motion.
For example, Richter et al. [17] developed a stereoscopic AR predictive display (SARPD) interface to deal with the time delay issue in telesurgery by immediately displaying predicted surgical instrument motion cues alongside the in situ scenario. Bonne et al. [101] implemented a digital twin system to deal with network instability and delays for peg-transfer surgical training tasks in telesurgery; the teleoperator performed the surgical operation by observing the digital twin robotic system and no longer suffered from unstable or low-bandwidth communication, while the remote robot executed the human commands semi-autonomously. Similar work was performed by Gonzalez et al. [102], in which an AR interface visualizing predicted robotic arm motion was implemented to provide the surgeon with real-time feedback and avoid fatal surgical errors caused by communication delays during telesurgery. Fu et al. [99] integrated AR visualization into a teleoperated robot-assisted ultrasound system, incorporating dynamic prediction of the contact force between the probe and the patient’s body to ensure safety. Although latency existed, the robot motion was immediately displayed on the local side by overlaying the robot hologram onto the physical scene.
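The predictive-display idea can be illustrated with a simple constant-velocity extrapolation, in which the locally rendered hologram follows the operator's commands rather than the delayed remote feedback. The following sketch is a minimal illustration under assumed delay, rate, and command values and does not reproduce the predictors used in the cited systems.

```python
# Minimal sketch of a predictive display for teleoperation under latency:
# the local hologram pose is extrapolated from the operator's commanded velocity,
# while delayed feedback from the remote robot arrives later. Delay, rate, and
# command values are illustrative assumptions.
import numpy as np

ROUND_TRIP_DELAY = 0.3   # seconds (assumed network latency)
DT = 0.02                # rendering period (50 Hz)

def predict_pose(last_confirmed_pose, commanded_velocity, time_since_feedback):
    """Extrapolate the instrument position shown in the local AR overlay."""
    horizon = time_since_feedback + ROUND_TRIP_DELAY
    return last_confirmed_pose + commanded_velocity * horizon

# Example: remote feedback is 0.3 s old; operator commands 1 cm/s along x.
confirmed = np.array([0.10, 0.00, 0.05])     # last pose reported by the remote robot (m)
v_cmd = np.array([0.01, 0.00, 0.00])         # commanded Cartesian velocity (m/s)

for k in range(3):
    shown = predict_pose(confirmed, v_cmd, k * DT)
    print(f"t={k * DT:.2f}s  hologram at {shown}")
```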

4. Augmented Reality for Industrial Robotic Applications

AR techniques facilitate the widespread application of industrial robots across various domains, including collaborative systems, natural interaction, intuitive task planning, robot training, and remote control. In this section, related works utilizing AR for industrial robots are reviewed and summarized, primarily categorized into four topics, namely, “human–robot interaction and collaboration”, “path planning and task allocation”, “training and simulation”, and “teleoperation control and assistance”.

4.1. Human–Robot Interaction and Collaboration

HRI and HRC are vital nowadays for the smart manufacturing transformation, especially toward human-centric, resilient, and sustainable principles [104]. HRI refers to one teammate communicating with, guiding, or controlling the other, either remotely or through physical contact, to complete a shared task [105]. HRC emphasizes the parallel, coordinated, and synchronous activity of humans and robots in an overlapping workspace toward a common task goal [106]. In both cases, AR technologies play a critical role in efficient and effective collaborative work.
The deployment of an AR environment in HRI and HRC tasks allows human operators to intuitively teleoperate or remotely control robots without expert knowledge of robot programming. For example, Wang et al. [19] developed a feasible AR system for closed-loop robot control, as presented in Table 6. The user was able to manipulate an industrial robot by planning the posture, trajectory points, and tasks of a virtual robot in the AR environment with gesture commands. Then, Ji et al. [107] integrated human eye-blink input and AR feedback in HRI tasks; the human could interactively create and modify robotic path plans according to the different inputs and achieve an AR-based modify-and-preview process. Furthermore, Sanna et al. [108] combined BCI and AR for HRC in assembly tasks. This approach allowed users to visualize the different parts to be assembled via AR and to guide a robot pick-and-place selection task via the NextMind BCI; notably, users could keep both hands free to assemble objects that required manual work.
Beyond robot control, AR systems can improve the human-centric user experience and respond to personalized requirements in industrial scenarios. For instance, Choi et al. [109] focused on safety-aware HRC by providing a real-time safe distance and a preview of the robot’s digital twin in an AR environment; in this system, a deep-learning-based instance segmentation approach was used to estimate 3D object point clouds relating the real robot to the virtual robot, i.e., the robot’s digital twin. Then, Umbrico et al. [110] presented a user-aware control method for HRC via headset and tablet AR environments. This method integrated process decomposition, communication, and motion planning modules for indicative task execution, matching users’ skills to the requirements of production tasks. In addition, Aivaliotis et al. [111] developed an AR software suite for HRC with mobile robot platforms, in which humans could define robot motion and navigation goals in a virtual interface; the AR suite also allowed users to visualize task execution instructions and safe working zones and to recover robots from unexpected events. Furthermore, Szczurek et al. [112] proposed a multi-user AR interface for the remote operation of robots, in which multiple human operators could teleoperate the robot through multimodal interaction, including hand, eye, and motion tracking, and voice recognition; the AR interface provided video, 3D point clouds, and audio as feedback for humans.
With these explorations, AR-based HRI and HRC systems can be applied in various manufacturing activities, such as welding [117], assembly [118], maintenance, etc. In this context, Hietanen et al. [113] explored a projector–mirror setup and a wearable AR interface in a realistic industrial assembly task. The AR user interface could intuitively present a danger zone, changed regions, various control buttons, and the robot status to humans; the AR-based HRC reduced the task completion time and the robot idle time. Then, Chan et al. [114] evaluated an AR interface for HRC in large-scale, labor-intensive manufacturing tasks. The system allowed a user to specify the robot’s path, visualize the motion, and execute robot trajectories with speech, arm gestures, and gaze. Compared with joystick-based robot control, the AR interface was easier, faster, and more convenient to use. Furthermore, Moya et al. [115] proposed an AR content tool with a web interface that supported non-expert users in HRC; humans could create, visualize, and maintain AR manuals based on different assets, such as 3D models, audio, PDF files, images, and video. The tool reduced the task load and assisted humans in training robots for assembly tasks. Lastly, Liu et al. [116] introduced an AR-based HRC system into the maintenance process. In this system, the robot could recognize human maintenance requests and execute maintenance tasks from human gestures, with task decisions generated by a deep reinforcement learning module, while the human was able to interact with the robot and perform auxiliary maintenance tasks.
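Safety overlays such as the danger zones above are typically driven by a human–robot separation check: when the minimum distance between tracked human points and the robot drops below a threshold, the robot slows or stops and the AR overlay changes state. A minimal sketch of such a check is given below; the thresholds and point data are illustrative assumptions rather than values from the cited implementations.

```python
# Minimal sketch of a separation-distance check that could drive an AR danger-zone
# overlay and robot speed scaling in HRC. Thresholds and point clouds are
# illustrative assumptions, not values from the cited systems.
import numpy as np

STOP_DIST = 0.3   # m: stop the robot below this separation (assumed)
SLOW_DIST = 0.8   # m: reduce speed below this separation (assumed)

def min_separation(human_pts, robot_pts):
    """Minimum Euclidean distance between two point sets (N x 3 and M x 3)."""
    d = np.linalg.norm(human_pts[:, None, :] - robot_pts[None, :, :], axis=-1)
    return d.min()

def safety_state(human_pts, robot_pts):
    """Return a speed-scaling factor and the overlay color to render in AR."""
    sep = min_separation(human_pts, robot_pts)
    if sep < STOP_DIST:
        return 0.0, "red"                    # stop and highlight the danger zone
    if sep < SLOW_DIST:
        return sep / SLOW_DIST, "yellow"     # scale speed with distance
    return 1.0, "green"

# Example with synthetic tracked points (meters).
human = np.array([[1.0, 0.2, 1.5], [0.9, 0.1, 1.0]])
robot = np.array([[0.4, 0.0, 0.8], [0.5, 0.0, 1.1]])
print(safety_state(human, robot))
```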

4.2. Path Planning and Task Allocation

Path planning and task allocation are preconditions for a robot to conduct various manipulation tasks. AR techniques can be introduced into this process to optimize robot task execution, as shown in Table 7.
Ong et al. [119] utilized AR to plan collision-free robot paths in an unknown environment. A piecewise linear parameterization algorithm was introduced to interactively generate a 3D curve demonstrated by the user for robot task allocation and execution. Then, dynamic constraints of the robot were added to the proposed AR system to resolve discrepancies between the planned and simulated robot paths [120]. In addition, Young et al. [121] explored occlusion removal and robot path planning using tablet-based AR systems; the occlusion effect in AR was eliminated by depth correction, whereas the robot task path was planned by a rapidly exploring random tree (RRT) algorithm that avoids obstacles in the working environment. Furthermore, Solyman et al. [122] proposed a semi-automatic offline task programming method for robots using AR and stereo vision; stereo matching algorithms were used to match and overlay virtual graphics on the real scene, while robot forward kinematics was leveraged to calculate the 3D positions of the robot arm joints for task programming.
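The rapidly exploring random tree planner mentioned above grows a tree from the start configuration toward random samples and rejects extensions that end inside obstacles. A compact 2D sketch is given below; the workspace bounds, obstacle, and step size are illustrative assumptions.

```python
# Compact 2D RRT sketch of the kind used for collision-free path planning in
# the AR programming systems above. Workspace bounds, obstacle, and step size
# are illustrative assumptions.
import numpy as np

np.random.seed(0)
STEP, GOAL_TOL, N_ITERS = 0.2, 0.2, 2000
OBSTACLE_CENTER, OBSTACLE_RADIUS = np.array([1.0, 1.0]), 0.4

def collision_free(p):
    """Reject points lying inside the (circular) obstacle."""
    return np.linalg.norm(p - OBSTACLE_CENTER) > OBSTACLE_RADIUS

def rrt(start, goal, bounds=(0.0, 2.0)):
    nodes, parents = [np.asarray(start, float)], [None]
    for _ in range(N_ITERS):
        sample = np.random.uniform(*bounds, size=2)
        nearest = min(range(len(nodes)), key=lambda i: np.linalg.norm(nodes[i] - sample))
        direction = sample - nodes[nearest]
        new = nodes[nearest] + STEP * direction / (np.linalg.norm(direction) + 1e-9)
        if not collision_free(new):
            continue
        nodes.append(new)
        parents.append(nearest)
        if np.linalg.norm(new - goal) < GOAL_TOL:
            # Walk back through the parent pointers to recover the path.
            path, i = [np.asarray(goal, float)], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parents[i]
            return path[::-1]
    return None

path = rrt(start=[0.2, 0.2], goal=[1.8, 1.8])
print(f"path with {len(path)} waypoints" if path else "no path found")
```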
The use of AR systems in robotics allows a robot to adapt, re-plan, and optimize its motion in a timely fashion. For instance, Tavares et al. [123] introduced an AR projector to allow humans to identify the metal part placement in welding tasks. After calibrating the industrial robot to the projector, the robotic system adapted to the part location information in a timely fashion and optimized the welding motions and poses. Furthermore, Mullen et al. [124] investigated using AR to communicate robot inference to humans; with human feedback, the robot could re-plan its task planning procedures.
Table 7. AR application in path planning and task allocation.
Category | Reference | Method | Robot | Medium | AR Content
Intuitive robot programming | Ong et al. [119] | Piecewise linear parameterization algorithm for generation of 3D path curve from data points | Scorbot ER VII | HMD | Virtual robot, workspace's physical entity, and probe marker
Occlusion removal and path planning | Young et al. [121] | Coordinate mapping between robot and tablet and rapidly exploring random tree for path planning | Industrial robot | Tablet PC | Virtual robot and planned path
Semi-automatic offline task programming | Solyman et al. [122] | Stereo matching algorithms for overlaying of virtual graphics on real scenes and interactive robot path planning | 6-DoF robot arm | Tablet PC | 2D workspace boundary, rendering robot path, and exception notification
Welding task optimization | Tavares et al. [123] | Laser scanner TCP calibration and genetic algorithm for robot trajectory optimization | Welding robot | MediaLas ILP 622 projector | Location where the operator should place and tack weld the metal parts
Inferred goals communication | Mullen et al. [124] | AR-based passive visualization and haptic wristband-based active prompts | 7-DoF robot arm | HoloLens | Robot motion goal and text alert
Grasping task planning | Weisz et al. [125] | OpenRAVE for motion planning and RANSAC method for target object localization | BarrettHand gripper | Tablet PC | Scene point cloud, selection button, and object model
Navigation trajectory decision | Chadalavada et al. [126] | Eye-gaze-based human-to-robot intention transference | AGV system | Optoma X320UST projector | Line (robot path) and arrow (robot direction)
Multi-robot task planning | Li et al. [127] | AR for robot teleoperation, reinforcement learning for robot motion planning | Universal Robot 5 | HoloLens 2 | Task video, control button, and virtual robot model
Adaptive HRC task allocation | Zheng et al. [128] | Human action recognition, object 6-DoF estimation, 3D point cloud segmentation, and knowledge graph for task strategy generation | Universal Robot 5 | HoloLens 2 | Robot states and task instruction
Task allocation under uncertainties | Zheng et al. [129] | Knowledge-graph-based task planning, human digital twin modeling, robot learning from demonstration | Universal Robot 5 | HoloLens 2 | Virtual robot model and overlapped distance
The exploration of AR systems provides a feasible solution to intuitively deliver robot task intentions to humans, based on which a user can identify and correct robot task allocation. Zheng et al. [128] leveraged an AR system to show task allocation strategies to human operators. The task planning strategy was generated by knowledge-graph-based artificial intelligence algorithms with a holistic perception of the surrounding environment, including human action recognition, object 6-degrees-of-freedom (DoF) pose estimation, and 3D point cloud segmentation. The knowledge-graph-based task allocation provided an explainable, graphical structure of robot tasks [130], which was easily understood by humans in the AR environment. Further, to tackle task allocation under industrial uncertainties, Zheng et al. [129] investigated human digital twin modeling and robot learning from demonstration algorithms. A deep reinforcement learning algorithm was leveraged to let a human re-plan the robot motion. In this context, humans could assist robots in adjusting task planning for new situations, whereas robots could ensure human safety in unexpected events.
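The following Python sketch illustrates, in a minimal form, the idea of an explainable, graph-structured task allocation between a human and a robot. The task names, capability sets, dependency edges, and allocation rule are illustrative assumptions rather than the cited knowledge-graph systems.

```python
# Minimal sketch of graph-based human-robot task allocation.
# Each task node lists required capabilities and predecessor tasks;
# each task is assigned to the first agent whose capabilities cover it.
TASK_GRAPH = {
    "pick_housing":   {"requires": {"grasping"},                       "after": []},
    "insert_gear":    {"requires": {"fine_manipulation"},              "after": ["pick_housing"]},
    "tighten_screws": {"requires": {"grasping", "torque_control"},     "after": ["insert_gear"]},
    "visual_check":   {"requires": {"judgement"},                      "after": ["tighten_screws"]},
}
AGENTS = {
    "robot": {"grasping", "torque_control"},
    "human": {"grasping", "fine_manipulation", "judgement"},
}

def allocate(graph, agents):
    """Walk the task graph in dependency order and assign each task to the
    first agent whose capabilities cover the task requirements."""
    done, plan = set(), []
    while len(done) < len(graph):
        for task, node in graph.items():
            if task in done or not all(p in done for p in node["after"]):
                continue
            agent = next((a for a, caps in agents.items()
                          if node["requires"] <= caps), None)
            plan.append((task, agent or "unassigned"))
            done.add(task)
    return plan

for task, agent in allocate(TASK_GRAPH, AGENTS):
    print(f"{task:>15} -> {agent}")
```

The appeal of such a graphical structure, as noted above, is that the resulting assignment can be rendered and inspected directly in the AR view before the human confirms or corrects it.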
Beyond a single fixed robot arm, AR-based path planning and task allocation also show advantages for dexterous grippers, mobile manipulators, and multi-robot systems. For example, Weisz et al. [125] developed an assistive robotic grasping system with an AR interface, with which users could plan grasping tasks online for known and unknown objects in cluttered spaces. To improve robot navigation decisions, Chadalavada et al. [126] used eye-tracking glasses to record human trajectories and let humans choose between a safer path and the shortest encounter in an AR environment; the projection-pattern-based AR modality was preferred by users and enhanced bi-directional communication in HRI. Li et al. [127] then integrated AR and digital twin techniques for multi-robot collaborative tasks, using reinforcement learning algorithms to generate the path trajectories of multiple robots and preview the task planning in the AR environment.

4.3. Training and Simulation

By overlaying virtual instructions onto the real-world robot and work cell, AR enables hands-on learning experiences and simulations. Meanwhile, by visualizing the robot's movements, operators can understand its interactions with the surroundings and practice complex tasks in a safe and adjustable virtual environment. Therefore, AR can be utilized to train operators in the programming and operation of industrial robots and to simulate the robots' motion sequences.
In this context, Sievers et al. [131] designed a mixed-reality learning environment for training employees' skills with collaborative robots, built around an experimental modular assembly plant: a decentralized learning factory consisting of reconfigurable autonomous sub-modules. After that, Leutert and Schilling [132] proposed a projector-based AR system that supports intuitive shop-floor programming and modification of milling robot operations in real industrial scenarios. The practicability of the proposed approach was evaluated on the processing of a large-scale metal workpiece with high production tolerances.
Not only robot programming but also the verification of planned trajectories is essential. Wassermann et al. [133] developed an AR-based system for workspace simulation and program verification of industrial robots. Specifically, the environment was reconstructed from 2D images and 3D point clouds in the AR system, and the user programmed the robot at the task-oriented level; the robot then executed the safe trajectory after plausibility and collision checking. Similarly, Ong et al. [134] proposed an AR system (comprising a head-mounted display and a handheld pointer) to simulate the work cell of serial industrial robots, enabling motion planning, collision detection, and plan validation. Users wear the HMD to check the real-time situation of the work cell and define tasks through 3D points and paths with the pointer. Hernandez et al. [135] instead proposed a high-level augmented reality specifications (HARS) method, which allows users to specify only high-level requests to the robot: rather than defining 3D points and paths, users simply place virtual objects at target locations in the AR system, and a planner later computes a feasible configuration trajectory for the robot. Visual and audio inputs can also be exploited; in [136], vision and speech were used for interaction between the user and the AR system to achieve intuitive robot programming.
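A minimal version of the plausibility/collision check mentioned above can be expressed as a clearance test of the planned waypoints against a point-cloud model of the work cell. The following Python sketch assumes a randomly generated cloud and a fixed clearance threshold; both are illustrative stand-ins, not the cited verification pipelines.

```python
# Minimal sketch of pre-execution trajectory verification against a point
# cloud of the environment: every waypoint must keep a minimum clearance.
import numpy as np
from scipy.spatial import cKDTree

def trajectory_is_safe(waypoints, environment_cloud, clearance=0.05):
    """Return True if every waypoint stays at least `clearance` metres away
    from the nearest environment point (a coarse collision check)."""
    tree = cKDTree(environment_cloud)
    distances, _ = tree.query(waypoints)
    return bool(np.all(distances > clearance))

rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 1.0, size=(2000, 3))              # stand-in for a scanned scene
path = np.linspace([0.0, 0.0, 1.5], [1.0, 1.0, 1.5], 50)   # EE path above the clutter
print("trajectory accepted:", trajectory_is_safe(path, cloud))
```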
In addition, to ensure a clear understanding of the robot's motion intent by human collaborators, Rosen et al. [137] implemented a mixed reality head-mounted display (MR-HMD) system. The proposed HoloLens visualization interface showed sequences of virtual arm movement graphics overlaid on the real world and was compared with a 2D display and with no visualization interface; a user study demonstrated the advantages of the MR interface in terms of task accuracy and time cost. Later, in [14], the MR-HMD interface was tested in a more complex pick-and-place task involving obstacle avoidance and conditional reasoning, again compared with a 2D interface. The results reported that the proposed interface could improve the users' time efficiency, usability, and naturalness, and reduce the cognitive workload.
Moreover, AR simulations can also be used to verify robot learning. For example, an AR interface was combined with a semantic-based learning system in [138]. The semantic learning system is driven by knowledge graph algorithms and can generate a human-readable description of the demonstrated task [139]; the user can then decide whether a new demonstration is necessary by evaluating the learned trajectories in the AR system. Later, Luebbers et al. [140] proposed a constraint-based learning-from-demonstration method, built on an AR interface, that allows users to maintain and adapt previously learned skills without providing new demonstrations: users directly modify the existing skills with defined constraints to fit new task requirements through in situ AR visualizations. The aforementioned applications of AR for training and simulation in industrial robotic applications are summarized in Table 8.
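As a rough illustration of checking a learned skill against user-defined constraints before accepting it, the following Python sketch tests a demonstrated end-effector trajectory against a workspace box and a keep-out sphere. The constraint geometry and the sample trajectory are illustrative assumptions, not the constraints used in the cited works.

```python
# Minimal sketch of constraint checking for a learned/demonstrated trajectory:
# report human-readable violations so the user can decide whether a new
# demonstration (or a constraint edit in AR) is needed.
import numpy as np

WORKSPACE_MIN = np.array([-0.5, -0.5, 0.0])
WORKSPACE_MAX = np.array([0.5, 0.5, 0.8])
KEEPOUT_CENTER, KEEPOUT_RADIUS = np.array([0.2, 0.0, 0.3]), 0.1

def violates_constraints(trajectory):
    """Return a list of human-readable violations for an (N, 3) trajectory."""
    issues = []
    if np.any(trajectory < WORKSPACE_MIN) or np.any(trajectory > WORKSPACE_MAX):
        issues.append("leaves the allowed workspace box")
    if np.any(np.linalg.norm(trajectory - KEEPOUT_CENTER, axis=1) < KEEPOUT_RADIUS):
        issues.append("enters the keep-out sphere")
    return issues

learned = np.linspace([0.0, -0.3, 0.2], [0.0, 0.3, 0.2], 30)
problems = violates_constraints(learned)
print("new demonstration needed:", bool(problems), problems)
```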

4.4. Teleoperation Control/Assistance

Teleoperation is a technique in which a human operator remotely controls a robot's actions and movements. It is commonly used in various fields, including industrial settings, healthcare, space exploration, and hazardous environments where direct human presence may be unsafe or impractical. Since AR/MR systems can provide an informative visualization of the work environment, they are widely applied in robot teleoperation control and assistance. The applications of AR for teleoperation control/assistance in industrial robotic applications are summarized in Table 9.
For teleoperation, Solanes et al. [141] proposed an AR-based interface for industrial robots to replace the conventional teaching pendant. Specifically, computer-generated graphics overlaid on the real environment were shown in the HMD, and users could interact with the graphics via a gamepad to command the robot. Usability tests reported that the proposed interface was more intuitive, ergonomic, and easy to use, improving the speed of the teleoperation task; the interface was later extended to bimanual industrial robot teleoperation in [142]. To meet orientation and velocity requirements during teleoperation, Pan et al. [143] proposed an AR interface based on an RGB-D camera and a handheld orientation teaching pendant for industrial robots. The path of the end effector (EE) was defined by the user by selecting several points with a mouse in the virtual work cell, and the orientation and motion speed were then specified with the portable teaching device. In addition to fixed-base robots, AR can also be applied to the teleoperation of mobile manipulators. In this direction, Su et al. [144] proposed a 3D/2D vision-based MR interface for mobile manipulators; three tasks were conducted to compare the proposed system against a typical 2D visual display method, and the results reported that the MR method can reduce the overall task completion time and minimize the training effort and cognitive workload. AR can also be used in a shared control manner in teleoperation tasks. In [145], an AR system was implemented to visualize the spraying process in real time (i.e., not yet complete, complete, and overdosed) based on a proposed logical approach; according to the visual information from the HMD, the operator decided when to move the handheld spray robot to the next target region. Similarly, Wonsick et al. [146] developed a virtual reality interface for robot telemanipulation, in which a deep learning approach was utilized to segment objects in the workspace and estimate their 6D poses for environment reconstruction.
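To show what turning user-selected points into robot commands can look like, the following Python sketch resamples a piecewise-linear waypoint path (as might be clicked in a virtual work cell) at a constant commanded speed. The waypoints, speed, and sample rate are illustrative assumptions, not the cited interfaces.

```python
# Minimal sketch: convert AR-selected waypoints plus a commanded speed into a
# time-parameterized stream of EE position setpoints.
import numpy as np

def interpolate_path(waypoints, speed=0.05, rate_hz=50.0):
    """Resample a piecewise-linear waypoint path at constant speed [m/s]."""
    waypoints = np.asarray(waypoints, dtype=float)
    seg_len = np.linalg.norm(np.diff(waypoints, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg_len)])      # arc length at each waypoint
    total_time = s[-1] / speed
    t = np.arange(0.0, total_time, 1.0 / rate_hz)
    s_query = t * speed
    # Interpolate each Cartesian coordinate against arc length.
    return np.stack([np.interp(s_query, s, waypoints[:, k]) for k in range(3)], axis=1)

clicked = [[0.40, -0.10, 0.30], [0.40, 0.10, 0.30], [0.50, 0.10, 0.40]]
setpoints = interpolate_path(clicked)
print(setpoints.shape, setpoints[0], setpoints[-1])
```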
Meanwhile, haptic feedback is also important for teleoperation tasks. Lin et al. [147] compared the effects of haptic feedback and AR for assisting teleoperation. Four kinds of telemanipulation tasks were conducted with eight participants, namely, target location, constraint alert, grasping affordance, and grasp confirmation. The results showed that both haptic feedback and AR assistance can significantly improve teleoperation performance: while haptic feedback is suitable for tasks that need a prompt response, AR cues were preferred for system status monitoring. Moreover, the participants preferred reducing their cognitive workload even at the cost of increasing other efforts. In [148], an MR system with haptic feedback was designed for teleoperated industrial robot welding. With this system, the user's hand movement was directly mapped to the robot's EE in a velocity-controlled manner, and the haptic feedback guided the operator's hand along conical guidance cues to align the torch for welding and constrained the movement within a collision-free space.
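The following Python sketch gives a rough idea of velocity-mode mapping under a conical alignment constraint: the operator's hand velocity is scaled and, if its direction leaves a cone around the desired torch axis, it is pulled back to the cone edge. The axis, cone half-angle, and scaling are illustrative assumptions, not the mapping used in [148].

```python
# Minimal sketch of hand-to-EE velocity mapping with a conical alignment constraint.
import numpy as np

TORCH_AXIS = np.array([0.0, 0.0, -1.0])       # assumed desired approach direction
CONE_HALF_ANGLE = np.deg2rad(20.0)
VELOCITY_SCALE = 0.5                          # assumed hand-to-robot velocity scaling

def _perp_component(direction):
    # Unit component of `direction` perpendicular to the torch axis.
    perp = direction - np.dot(direction, TORCH_AXIS) * TORCH_AXIS
    n = np.linalg.norm(perp)
    return perp / n if n > 1e-9 else np.array([1.0, 0.0, 0.0])

def constrained_ee_velocity(hand_velocity):
    v = VELOCITY_SCALE * np.asarray(hand_velocity, dtype=float)
    speed = np.linalg.norm(v)
    if speed < 1e-9:
        return np.zeros(3)
    angle = np.arccos(np.clip(np.dot(v / speed, TORCH_AXIS), -1.0, 1.0))
    if angle <= CONE_HALF_ANGLE:
        return v                              # already inside the cone
    # Rotate the commanded direction back onto the cone edge, keep the speed.
    edge_dir = np.cos(CONE_HALF_ANGLE) * TORCH_AXIS + \
               np.sin(CONE_HALF_ANGLE) * _perp_component(v / speed)
    return speed * edge_dir

print(constrained_ee_velocity([0.2, 0.0, -0.05]))
```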
To make the AR system more accessible, Frank et al. [149] developed a tablet-based mobile MR interface for object manipulation in a human–robot collaborative manner. Specifically, virtual objects were attached to physical objects in the robot workspace in the augmented live video, and the operator could command the robot to move a specific object by moving the corresponding virtual one to the desired location. Later, Su et al. [150] also proposed a tablet-based AR system for industrial robots; differently from [149], this system also ensured collision-free operation through dedicated collision-checking tools, an important feature for teleoperation. In terms of collision-free operation, Piyavichayanon et al. [151] proposed a collision-aware AR teleoperation approach based on a depth mesh. Using an RGB-D camera, the proposed system reconstructs the robot's work environment in AR, and the user can then command the robot to generate collision-free movements based on the integrated collision-checking function.
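The environment reconstruction step used by such RGB-D-based systems can be illustrated by back-projecting a depth image into a camera-frame point cloud, which can then feed a collision check or a depth mesh. The pinhole intrinsics and the synthetic depth image below are illustrative assumptions.

```python
# Minimal sketch of depth-image back-projection for workspace reconstruction.
import numpy as np

FX, FY, CX, CY = 600.0, 600.0, 320.0, 240.0      # assumed pinhole intrinsics

def depth_to_point_cloud(depth_m):
    """Convert an (H, W) metric depth image into an (N, 3) point cloud in the
    camera frame, dropping pixels with no valid depth."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0.0]

depth = np.full((480, 640), 1.2)                 # synthetic flat wall 1.2 m away
cloud = depth_to_point_cloud(depth)
print(cloud.shape)                               # (307200, 3)
```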
Table 9. AR implementation for teleoperation control/assistance in industrial robotic applications.

| Category | Reference | Robot | Medium | AR Content |
|---|---|---|---|---|
| Industrial robot teleoperation | Solanes et al. [141] | 6R industrial robot | HoloLens | Virtual work environment |
| Bimanual robot teleoperation | García et al. [142] | Two industrial robots | HoloLens | Working environment simulation |
| Industrial robot teleoperation with orientation and speed requirements | Pan et al. [143] | ABB IRB120 | PC screen | Robot work cell |
| Mobile manipulator teleoperation | Su et al. [144] | Mobile manipulator | HTC Vive | Virtual work scenarios |
| Spraying task assistance | Elsdon and Demiris [145] | Handheld spraying robot | HoloLens | Virtual processing paths and menu |
| Comparison of haptic and AR cues for assisting teleoperation | Lin et al. [147] | KINOVA Gen 3 | HTC Vive | Robot workspace |
| Robotic welding with AR and haptic assistance | Su et al. [148] | UR5 | HTC Vive HMD | Virtual scene |
| Object manipulation | Frank et al. [149] | Collaborative robots | Tablets | Virtual objects |
| Industrial robot teleoperation | Su et al. [150] | Industrial robot | Tablet | Virtual robot workspace |
| Collision-aware telemanipulation | Piyavichayanon et al. [151] | 7-DoF manipulator | Mobile phone | Virtual robot workspace |

5. Discussion

Although AR techniques have been considered a promising paradigm for improving human operators' situation awareness and visual feedback before and during task execution, several limitations and challenges should be tackled and emphasized. In this section, we discuss the general limitations and challenges of using AR in robotic applications from the viewpoints of transparency, hardware limitations, safety, and accuracy. In addition, we summarize several potential perspectives that could be improved and addressed in future work on AR-related robotic applications.

5.1. Limitations and Challenges

Immersion and Transparency: AR can provide the user with improved visual feedback in robotic applications. However, immersion and transparency remain critical issues. Firstly, the FoV of the AR display (monitor, HMD, or projector) can restrict the amount of information accessible to the operator. Moreover, non-ergonomic interaction during HRI and HRC tasks degrades task completion performance and imposes high mental and physical workloads. Hence, more transparent and intuitive interfaces should be implemented, taking into account possible customized or personalized requirements. Additionally, the absence of force and haptic feedback sensing is another limitation [152,153]. Integrating haptic feedback into AR-related robotic systems could provide the operator with additional informative feedback, such as texture, force, and friction, which would assist the user in decision-making and operation.
Hardware and Communication Efficiency: In several applications, performance can be limited by the computation capability of the hardware configuration. For instance, most commercial general-purpose AR devices possess only basic visualization, computing, and graphics rendering capabilities, making it challenging to achieve real-time rendering of the deformation of non-rigid objects and other complicated applications. Furthermore, low communication quality can lead to significant latency in robotic systems, particularly notable in teleoperated systems, thereby affecting the synchronization between the human operator and the robotic system during remote and teleoperation control tasks.
Accuracy and Safety: Inaccurate registration/alignment of the hologram model or computer-generated image with the physical environment can cause misoperations and thus failures in task completion, for example, in fine manipulation and delicate surgical operation tasks. For OST-HMDs, the hologram model can experience a displacement of 3–5 cm when the operator moves significantly. In addition, considering latency and inaccurate registration, the safety of the robotic system within dynamic and unstructured environments is challenging and should be addressed. Therefore, safety enhancement strategies should be developed, such as collision avoidance, contact force control, and manufacturing process monitoring.
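Registration error of the kind discussed above is typically quantified by rigidly aligning virtual fiducials to their tracked physical counterparts and measuring the residual. The following Python sketch uses the standard Kabsch/SVD solution; the fiducial coordinates, the applied rotation/translation, and the injected tracking noise are illustrative assumptions.

```python
# Minimal sketch of rigid (Kabsch/SVD) registration of hologram fiducials to
# physical markers, reporting the residual alignment error (RMSE).
import numpy as np

def rigid_register(source, target):
    """Return rotation R and translation t minimizing ||R @ source_i + t - target_i||."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # avoid reflections
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t

rng = np.random.default_rng(1)
hologram = rng.uniform(-0.2, 0.2, size=(6, 3))                    # virtual fiducials [m]
theta = np.deg2rad(10.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
physical = hologram @ Rz.T + np.array([0.5, 0.1, 0.0])            # displaced real markers
physical += rng.normal(scale=0.003, size=physical.shape)          # tracking noise
R, t = rigid_register(hologram, physical)
residual = (hologram @ R.T + t) - physical
print("registration RMSE [m]:", np.sqrt((residual ** 2).mean()))
```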
Simulation to Real Adaptation: AR-based simulation and digital twin systems have found implementation in both medical and industrial domains. For example, simulation-to-real (Sim2Real) transfer proves beneficial in tasks such as robot programming and trajectory planning for robot-assisted operations. However, given inherent limitations and challenges, such as modeling errors, the dynamics of the physical environment, and noise/disturbances, the direct transfer of simulation results to the real world still faces many obstacles. Appropriate strategies should be considered and implemented to adapt simulation results to physical scenarios.
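One common mitigation pattern is to stress-test a simulated plan under randomized model errors before transferring it to the real robot. The Python sketch below uses a toy error surrogate; the payload and friction distributions, the error model, and the success criterion are all illustrative assumptions, not a validated transfer procedure.

```python
# Minimal sketch of domain-randomized validation of a simulated plan:
# sample plausible model errors and check how often the plan still meets tolerance.
import numpy as np

rng = np.random.default_rng(42)

def simulate_tracking_error(payload_kg, friction):
    """Toy surrogate for a dynamics simulation: tracking error grows with
    unmodeled payload and friction deviations from their nominal values."""
    return 0.002 + 0.004 * abs(payload_kg - 1.0) + 0.003 * abs(friction - 0.3)

def plan_survives_randomization(n_trials=200, tolerance_m=0.01):
    errors = [simulate_tracking_error(rng.normal(1.0, 0.2), rng.uniform(0.1, 0.5))
              for _ in range(n_trials)]
    success_rate = float(np.mean(np.array(errors) < tolerance_m))
    return success_rate, success_rate > 0.95

print(plan_survives_randomization())
```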

5.2. Future Perspectives

Hardware System: The comfort of using AR systems remains a primary issue for current AR applications and encompasses two aspects: visual comfort and wearing comfort. To ensure an optimal visual experience, the optical system must offer a suitable field of view, well-matched depth imaging with high resolution, and ample contrast and brightness. Wearing comfort relies on a more compact optical structure to alleviate the burden on the head, low-power processors to reduce discomfort caused by heat, and advanced battery systems that balance capacity and portability. At the optical design level, the introduction of adjustable lenses is expected to reduce the vergence-accommodation conflict and relieve dizziness; polarization films can help address the compact size problem, while free-form optics can provide a sufficient FoV. Regarding the signal processing system, customized, dedicated chip solutions are expected to balance AR system performance, power consumption, and heat dissipation. Furthermore, with the participation of more consumer electronics companies, the costs and prices of AR systems are expected to decrease, which would accelerate the development of AR technology.
Interaction Modalities: Handheld controllers, voice, gaze, and gestures are the most commonly used interaction methods in current AR-based systems. However, a handheld controller cannot be used in all-day wearable applications, and occupying both hands prevents the execution of other operations in AR applications. Gesture operation requires real-time capture of hand movements by the camera, which raises accuracy issues, and performing gestures in public can affect social acceptance and raise privacy concerns. Voice operation is susceptible to environmental noise and can disturb others in shared spaces. Eye tracking has been shown to be faster than pointing with fingers, and gaze-driven interfaces based on eye tracking have been applied successfully to tasks such as target selection and information placement. However, due to current limitations of the interaction interface style, the gaze-driven experience remains poor in complex interaction tasks. Optimizing the interface settings, accounting for users' operating habits, and introducing dynamic UIs and sticky-capture features can enhance the interactive experience based on eye-tracking technology, leading to significant improvements in AR interaction.
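The gaze-driven target selection mentioned above is often implemented with a dwell-time rule: a target is selected once the gaze has rested on it for long enough. The Python sketch below illustrates that rule; the dwell threshold, sample rate, and synthetic gaze stream are illustrative assumptions.

```python
# Minimal sketch of dwell-time-based gaze selection over a stream of
# per-sample gaze targets (None means the gaze hit no target).
DWELL_TIME_S = 0.8              # gaze must stay on a target this long to select it
SAMPLE_PERIOD_S = 1.0 / 60.0    # assumed eye-tracker sample period

def select_by_dwell(gaze_targets):
    """Yield a target each time the gaze has dwelled on it long enough."""
    current, elapsed = None, 0.0
    for target in gaze_targets:
        if target == current and target is not None:
            elapsed += SAMPLE_PERIOD_S
            if elapsed >= DWELL_TIME_S:
                yield target
                current, elapsed = None, 0.0   # require re-fixation before next selection
        else:
            current, elapsed = target, 0.0

stream = [None] * 10 + ["start_button"] * 60 + [None] * 5 + ["virtual_robot"] * 30
print(list(select_by_dwell(stream)))            # only "start_button" dwells long enough
```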
Multimodal Information to Improve the Perception Capability: As optical resolution and FoV improve, virtual objects in AR systems will become increasingly realistic. However, relying solely on visual feedback still leads to a disconnect between virtual objects and the real environment. Enhancing AR with additional audio, haptic, temperature, or olfactory feedback can provide users with a more immersive and natural interaction in AR environments. For example, haptic feedback can be used to create a sense of touch and improve the user’s spatial awareness, while auditory feedback can be used to indicate distance or direction. It is foreseeable that in the future, AR systems will introduce multimodal information feedback to provide alternative feedback modes that are better suited to individual needs, increase usability for users with sensory or cognitive impairments, and provide users with greater perception beyond the limits of the visual field.

6. Conclusions

This paper provides a review and summary of recent advancements in robotic applications using AR technology, with a primary focus on the implementation of AR in medical and industrial scenarios. Additionally, this work summarizes the commonly used robotic platforms, AR media, and AR content. Specifically, we classified AR in the medical robot context into four subsections according to the application: preoperative and surgical task planning, image-guided robotic surgery, surgical training and simulation, and telesurgery. For industrial robotic applications, this paper investigated and discussed recent advancements in human–robot interaction and collaboration, path planning and task allocation, training and simulation, and teleoperation control/assistance. In the meantime, the limitations and challenges that exist in these robotic applications are emphasized, such as transparency, interface, hardware, and safety issues. Following that, future perspectives are summarized to improve the application of AR in the field of robotic systems. This review aims to provide a reference for the future application and development of AR in robotic systems.

Author Contributions

Conceptualization, J.F., A.R., S.L., J.Z., Q.L., E.I., G.F. and E.D.M.; writing—original draft preparation, J.F., A.R., S.L., J.Z., Q.L. and E.I.; writing—review and editing, J.F., A.R., S.L., J.Z., Q.L., E.I., G.F. and E.D.M.; supervision, G.F. and E.D.M.; project administration, E.D.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript (in alphabetical order):
AR: Augmented reality
BCI: Brain–computer interface
CT: Computerized tomography
DoFs: Degrees of freedom
EE: End effector
FoV: Field of view
GUI: Graphical user interface
HMD: Head-mounted display
HMI: Human–machine interface
HRC: Human–robot collaboration
HRI: Human–robot interaction
HRSV: High-resolution stereo viewers
MIS: Minimally invasive surgery
MR: Mixed reality
MRI: Magnetic resonance imaging
OR: Operating room
OST-HMD: Optical see-through head-mounted display
PCNL: Percutaneous nephrolithotomy
RA-MIS: Robot-assisted minimally invasive surgery
RAS: Robot-assisted surgery
THP: Total hip replacement
UI: User interface

References

  1. Dupont, P.E.; Nelson, B.J.; Goldfarb, M.; Hannaford, B.; Menciassi, A.; O’Malley, M.K.; Simaan, N.; Valdastri, P.; Yang, G.Z. A decade retrospective of medical robotics research from 2010 to 2020. Sci. Robot. 2021, 6, eabi8017. [Google Scholar] [CrossRef] [PubMed]
  2. Saeidi, H.; Opfermann, J.D.; Kam, M.; Wei, S.; Léonard, S.; Hsieh, M.H.; Kang, J.U.; Krieger, A. Autonomous robotic laparoscopic surgery for intestinal anastomosis. Sci. Robot. 2022, 7, eabj2908. [Google Scholar] [CrossRef] [PubMed]
  3. Casalino, A.; Bazzi, D.; Zanchettin, A.M.; Rocco, P. Optimal proactive path planning for collaborative robots in industrial contexts. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 6540–6546. [Google Scholar]
  4. Zhao, J.; Giammarino, A.; Lamon, E.; Gandarias, J.M.; De Momi, E.; Ajoudani, A. A hybrid learning and optimization framework to achieve physically interactive tasks with mobile manipulators. IEEE Robot. Autom. Lett. 2022, 7, 8036–8043. [Google Scholar] [CrossRef]
  5. Fu, J.; Poletti, M.; Liu, Q.; Iovene, E.; Su, H.; Ferrigno, G.; De Momi, E. Teleoperation Control of an Underactuated Bionic Hand: Comparison between Wearable and Vision-Tracking-Based Methods. Robotics 2022, 11, 61. [Google Scholar] [CrossRef]
  6. Attanasio, A.; Scaglioni, B.; De Momi, E.; Fiorini, P.; Valdastri, P. Autonomy in surgical robotics. Annu. Rev. Control Robot. Auton. Syst. 2021, 4, 651–679. [Google Scholar] [CrossRef]
  7. Faoro, G.; Maglio, S.; Pane, S.; Iacovacci, V.; Menciassi, A. An Artificial Intelligence-Aided Robotic Platform for Ultrasound-Guided Transcarotid Revascularization. IEEE Robot. Autom. Lett. 2023, 8, 2349–2356. [Google Scholar] [CrossRef]
  8. Iovene, E.; Casella, A.; Iordache, A.V.; Fu, J.; Pessina, F.; Riva, M.; Ferrigno, G.; De Momi, E. Towards Exoscope Automation in Neurosurgery: A Markerless Visual-Servoing Approach. IEEE Trans. Med. Robot. Bionics 2023, 5, 411–420. [Google Scholar] [CrossRef]
  9. Zheng, P.; Wang, H.; Sang, Z.; Zhong, R.Y.; Liu, Y.; Liu, C.; Mubarok, K.; Yu, S.; Xu, X. Smart manufacturing systems for Industry 4.0: Conceptual framework, scenarios, and future perspectives. Front. Mech. Eng. 2018, 13, 137–150. [Google Scholar] [CrossRef]
  10. Young, S.N.; Peschel, J.M. Review of human-machine interfaces for small unmanned systems with robotic manipulators. IEEE Trans. Hum.-Mach. Syst. 2020, 50, 131–143. [Google Scholar] [CrossRef]
  11. Guo, L.; Lu, Z.; Yao, L. Human-machine interaction sensing technology based on hand gesture recognition: A review. IEEE Trans. Hum.-Mach. Syst. 2021, 51, 300–309. [Google Scholar] [CrossRef]
  12. Aronson, R.M.; Santini, T.; Kübler, T.C.; Kasneci, E.; Srinivasa, S.; Admoni, H. Eye-hand behavior in human-robot shared manipulation. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 4–13. [Google Scholar]
  13. Palumbo, M.C.; Saitta, S.; Schiariti, M.; Sbarra, M.C.; Turconi, E.; Raccuia, G.; Fu, J.; Dallolio, V.; Ferroli, P.; Votta, E.; et al. Mixed Reality and Deep Learning for External Ventricular Drainage Placement: A Fast and Automatic Workflow for Emergency Treatments. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2022: 25th International Conference, Singapore, 18–22 September 2022; Proceedings, Part VII. Springer: Cham, Switzerland, 2022; pp. 147–156. [Google Scholar]
  14. Gadre, S.Y.; Rosen, E.; Chien, G.; Phillips, E.; Tellex, S.; Konidaris, G. End-user robot programming using mixed reality. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 2707–2713. [Google Scholar]
  15. Jiang, J.; Guo, Y.; Huang, Z.; Zhang, Y.; Wu, D.; Liu, Y. Adjacent surface trajectory planning of robot-assisted tooth preparation based on augmented reality. Eng. Sci. Technol. Int. J. 2022, 27, 101001. [Google Scholar] [CrossRef]
  16. Bianchi, L.; Chessa, F.; Angiolini, A.; Cercenelli, L.; Lodi, S.; Bortolani, B.; Molinaroli, E.; Casablanca, C.; Droghetti, M.; Gaudiano, C.; et al. The use of augmented reality to guide the intraoperative frozen section during robot-assisted radical prostatectomy. Eur. Urol. 2021, 80, 480–488. [Google Scholar] [CrossRef] [PubMed]
  17. Richter, F.; Zhang, Y.; Zhi, Y.; Orosco, R.K.; Yip, M.C. Augmented reality predictive displays to help mitigate the effects of delayed telesurgery. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 444–450. [Google Scholar]
  18. Wang, Z.; Bai, X.; Zhang, S.; Billinghurst, M.; He, W.; Wang, P.; Lan, W.; Min, H.; Chen, Y. A comprehensive review of augmented reality-based instruction in manual assembly, training and repair. Robot. Comput.-Integr. Manuf. 2022, 78, 102407. [Google Scholar] [CrossRef]
  19. Wang, X.V.; Wang, L.; Lei, M.; Zhao, Y. Closed-loop augmented reality towards accurate human-robot collaboration. CIRP Ann. 2020, 69, 425–428. [Google Scholar] [CrossRef]
  20. Mourtzis, D.; Siatras, V.; Zogopoulos, V. Augmented reality visualization of production scheduling and monitoring. Procedia CIRP 2020, 88, 151–156. [Google Scholar] [CrossRef]
  21. Qian, L.; Deguet, A.; Wang, Z.; Liu, Y.H.; Kazanzides, P. Augmented reality assisted instrument insertion and tool manipulation for the first assistant in robotic surgery. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 5173–5179. [Google Scholar]
  22. Bertolo, R.; Hung, A.; Porpiglia, F.; Bove, P.; Schleicher, M.; Dasgupta, P. Systematic review of augmented reality in urological interventions: The evidences of an impact on surgical outcomes are yet to come. World J. Urol. 2020, 38, 2167–2176. [Google Scholar] [CrossRef]
  23. Makhataeva, Z.; Varol, H.A. Augmented reality for robotics: A review. Robotics 2020, 9, 21. [Google Scholar] [CrossRef] [Green Version]
  24. Suzuki, R.; Karim, A.; Xia, T.; Hedayati, H.; Marquardt, N. Augmented reality and robotics: A survey and taxonomy for ar-enhanced human-robot interaction and robotic interfaces. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, Orleans, LA, USA, 29 April–5 May 2022; pp. 1–33. [Google Scholar]
  25. Ungureanu, D.; Bogo, F.; Galliani, S.; Sama, P.; Duan, X.; Meekhof, C.; Stühmer, J.; Cashman, T.J.; Tekin, B.; Schönberger, J.L.; et al. Hololens 2 research mode as a tool for computer vision research. arXiv 2020, arXiv:2008.11239. [Google Scholar]
  26. Milgram, P.; Kishino, F. A taxonomy of mixed reality visual displays. IEICE Trans. Inf. Syst. 1994, 77, 1321–1329. [Google Scholar]
  27. Dwivedi, Y.K.; Hughes, L.; Baabdullah, A.M.; Ribeiro-Navarrete, S.; Giannakis, M.; Al-Debei, M.M.; Dennehy, D.; Metri, B.; Buhalis, D.; Cheung, C.M.; et al. Metaverse beyond the hype: Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. Int. J. Inf. Manag. 2022, 66, 102542. [Google Scholar] [CrossRef]
  28. Dargan, S.; Bansal, S.; Kumar, M.; Mittal, A.; Kumar, K. Augmented Reality: A Comprehensive Review. Arch. Comput. Methods Eng. 2023, 30, 1057–1080. [Google Scholar] [CrossRef]
  29. Azuma, R.T. A survey of augmented reality. Presence Teleoper. Virtual Environ. 1997, 6, 355–385. [Google Scholar] [CrossRef]
  30. Speicher, M.; Hall, B.D.; Nebeling, M. What is mixed reality? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–15. [Google Scholar]
  31. Qian, L.; Wu, J.Y.; DiMaio, S.P.; Navab, N.; Kazanzides, P. A review of augmented reality in robotic-assisted surgery. IEEE Trans. Med. Robot. Bionics 2019, 2, 1–16. [Google Scholar] [CrossRef]
  32. Carbone, M.; Cutolo, F.; Condino, S.; Cercenelli, L.; D’Amato, R.; Badiali, G.; Ferrari, V. Architecture of a hybrid video/optical see-through head-mounted display-based augmented reality surgical navigation platform. Information 2022, 13, 81. [Google Scholar] [CrossRef]
  33. Lin, G.; Panigrahi, T.; Womack, J.; Ponda, D.J.; Kotipalli, P.; Starner, T. Comparing order picking guidance with Microsoft hololens, magic leap, google glass xe and paper. In Proceedings of the 22nd International Workshop on Mobile Computing Systems and Applications, Virtual, 24–26 February 2021; pp. 133–139. [Google Scholar]
  34. Andrews, C.M.; Henry, A.B.; Soriano, I.M.; Southworth, M.K.; Silva, J.R. Registration techniques for clinical applications of three-dimensional augmented reality devices. IEEE J. Transl. Eng. Health Med. 2020, 9, 4900214. [Google Scholar] [CrossRef]
  35. Michalos, G.; Makris, S.; Papakostas, N.; Mourtzis, D.; Chryssolouris, G. Automotive assembly technologies review: Challenges and outlook for a flexible and adaptive approach. CIRP J. Manuf. Sci. Technol. 2010, 2, 81–91. [Google Scholar] [CrossRef]
  36. Lin, J.C. The role of robotic surgical system in the management of vascular disease. Ann. Vasc. Surg. 2013, 27, 976–983. [Google Scholar] [CrossRef]
  37. Okamura, A.M.; Matarić, M.J.; Christensen, H.I. Medical and health-care robotics. IEEE Robot. Autom. Mag. 2010, 17, 26–37. [Google Scholar] [CrossRef]
  38. Hägele, M.; Nilsson, K.; Pires, J.N.; Bischoff, R. Industrial robotics. In Springer Handbook of Robotics; Springer: Cham, Switzerland, 2016; pp. 1385–1422. [Google Scholar]
  39. Yang, G.Z.; Cambias, J.; Cleary, K.; Daimler, E.; Drake, J.; Dupont, P.E.; Hata, N.; Kazanzides, P.; Martel, S.; Patel, R.V.; et al. Medical robotics—Regulatory, ethical, and legal considerations for increasing levels of autonomy. Sci. Robot. 2017, 2, eaam8638. [Google Scholar] [CrossRef]
  40. Galin, R.; Meshcheryakov, R. Collaborative robots: Development of robotic perception system, safety issues, and integration of ai to imitate human behavior. In Proceedings of the 15th International Conference on Electromechanics and Robotics “Zavalishin’s Readings” ER (ZR) 2020, Ufa, Russia, 15–18 April 2020; Springer: Singapore, 2021; pp. 175–185. [Google Scholar]
  41. Raj, M.; Seamans, R. Primer on artificial intelligence and robotics. J. Organ. Des. 2019, 8, 1–14. [Google Scholar] [CrossRef] [Green Version]
  42. Bandari, V.K.; Schmidt, O.G. System-Engineered Miniaturized Robots: From Structure to Intelligence. Adv. Intell. Syst. 2021, 3, 2000284. [Google Scholar] [CrossRef]
  43. Frijns, H.A.; Schürer, O.; Koeszegi, S.T. Communication models in human–robot interaction: An asymmetric MODel of ALterity in human–robot interaction (AMODAL-HRI). Int. J. Soc. Robot. 2023, 15, 473–500. [Google Scholar] [CrossRef]
  44. Porpiglia, F.; Checcucci, E.; Amparore, D.; Manfredi, M.; Massa, F.; Piazzolla, P.; Manfrin, D.; Piana, A.; Tota, D.; Bollito, E.; et al. Three-dimensional elastic augmented-reality robot-assisted radical prostatectomy using hyperaccuracy three-dimensional reconstruction technology: A step further in the identification of capsular involvement. Eur. Urol. 2019, 76, 505–514. [Google Scholar] [CrossRef] [PubMed]
  45. Qian, L.; Deguet, A.; Kazanzides, P. ARssist: Augmented reality on a head-mounted display for the first assistant in robotic surgery. Healthc. Technol. Lett. 2018, 5, 194–200. [Google Scholar] [CrossRef]
  46. Ong, S.; Nee, A.; Yew, A.; Thanigaivel, N. AR-assisted robot welding programming. Adv. Manuf. 2020, 8, 40–48. [Google Scholar] [CrossRef]
  47. Ostanin, M.; Mikhel, S.; Evlampiev, A.; Skvortsova, V.; Klimchik, A. Human-robot interaction for robotic manipulator programming in Mixed Reality. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 2805–2811. [Google Scholar]
  48. Chan, J.Y.; Holsinger, F.C.; Liu, S.; Sorger, J.M.; Azizian, M.; Tsang, R.K. Augmented reality for image guidance in transoral robotic surgery. J. Robot. Surg. 2020, 14, 579–583. [Google Scholar] [CrossRef]
  49. Rosenberg, L.B. Virtual fixtures: Perceptual tools for telerobotic manipulation. In Proceedings of the IEEE Virtual Reality Annual International Symposium, Seattle, WA, USA, 18–22 September 1993; pp. 76–82. [Google Scholar]
  50. Kwoh, Y.S.; Hou, J.; Jonckheere, E.A.; Hayati, S. A robot with improved absolute positioning accuracy for CT guided stereotactic brain surgery. IEEE Trans. Biomed. Eng. 1988, 35, 153–160. [Google Scholar] [CrossRef]
  51. Anderson, J.; Chui, C.K.; Cai, Y.; Wang, Y.; Li, Z.; Ma, X.; Nowinski, W.; Solaiyappan, M.; Murphy, K.; Gailloud, P.; et al. Virtual reality training in interventional radiology: The Johns Hopkins and Kent Ridge digital laboratory experience. Semin. Interv. Radiol. 2002, 19, 179–186. [Google Scholar] [CrossRef] [Green Version]
  52. Samei, G.; Tsang, K.; Kesch, C.; Lobo, J.; Hor, S.; Mohareri, O.; Chang, S.; Goldenberg, S.L.; Black, P.C.; Salcudean, S. A partial augmented reality system with live ultrasound and registered preoperative MRI for guiding robot-assisted radical prostatectomy. Med. Image Anal. 2020, 60, 101588. [Google Scholar] [CrossRef]
  53. Zhang, F.; Chen, L.; Miao, W.; Sun, L. Research on accuracy of augmented reality surgical navigation system based on multi-view virtual and real registration technology. IEEE Access 2020, 8, 122511–122528. [Google Scholar] [CrossRef]
  54. Fu, J.; Matteo, P.; Palumbo, M.C.; Iovene, E.; Rota, A.; Riggio, D.; Ilaria, B.; Redaelli, A.C.L.; Ferrigno, G.; De Momi, E.; et al. Augmented Reality and Shared Control Framework for Robot-Assisted Percutaneous Nephrolithotomy. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA) Workshop—New Evolutions in Surgical Robotics: Embracing Multimodal Imaging Guidance, Intelligence, and Bio-Inspired Mechanisms, London, UK, 29 May–2 June 2023; pp. 1–2. [Google Scholar]
  55. Ho, T.H.; Song, K.T. Supervised control for robot-assisted surgery using augmented reality. In Proceedings of the 2020 20th International Conference on Control, Automation and Systems (ICCAS), Busan, Republic of Korea, 13–16 October 2020; pp. 329–334. [Google Scholar]
  56. Fotouhi, J.; Song, T.; Mehrfard, A.; Taylor, G.; Wang, Q.; Xian, F.; Martin-Gomez, A.; Fuerst, B.; Armand, M.; Unberath, M.; et al. Reflective-ar display: An interaction methodology for virtual-to-real alignment in medical robotics. IEEE Robot. Autom. Lett. 2020, 5, 2722–2729. [Google Scholar] [CrossRef]
  57. Żelechowski, M.; Karnam, M.; Faludi, B.; Gerig, N.; Rauter, G.; Cattin, P.C. Patient positioning by visualising surgical robot rotational workspace in augmented reality. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2022, 10, 451–457. [Google Scholar] [CrossRef]
  58. Żelechowski, M.; Faludi, B.; Karnam, M.; Gerig, N.; Rauter, G.; Cattin, P.C. Automatic patient positioning based on robot rotational workspace for extended reality. Int. J. Comput. Assist. Radiol. Surg. 2023, 1–9. [Google Scholar] [CrossRef] [PubMed]
  59. Wörn, H.; Weede, O. Optimizing the setup configuration for manual and robotic assisted minimally invasive surgery. In Proceedings of the World Congress on Medical Physics and Biomedical Engineering, Munich, Germany, 7–12 September 2009; Vol. 25/6 Surgery, Nimimal Invasive Interventions, Endoscopy and Image Guided Therapy. Springer: Berlin/Heidelberg, Germany, 2009; pp. 55–58. [Google Scholar]
  60. Weede, O.; Wünscher, J.; Kenngott, H.; Müller-Stich, B.; Wörn, H. Knowledge-based planning of port positions for minimally invasive surgery. In Proceedings of the 2013 IEEE Conference on Cybernetics and Intelligent Systems (CIS), Manila, Philippines, 12–15 November 2013; pp. 12–17. [Google Scholar]
  61. Unberath, M.; Fotouhi, J.; Hajek, J.; Maier, A.; Osgood, G.; Taylor, R.; Armand, M.; Navab, N. Augmented reality-based feedback for technician-in-the-loop C-arm repositioning. Healthc. Technol. Lett. 2018, 5, 143–147. [Google Scholar] [CrossRef] [PubMed]
  62. Fu, J.; Palumbo, M.C.; Iovene, E.; Liu, Q.; Burzo, I.; Redaelli, A.; Ferrigno, G.; De Momi, E. Augmented Reality-Assisted Robot Learning Framework for Minimally Invasive Surgery Task. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 11647–11653. [Google Scholar]
  63. Giannone, F.; Felli, E.; Cherkaoui, Z.; Mascagni, P.; Pessaux, P. Augmented reality and image-guided robotic liver surgery. Cancers 2021, 13, 6268. [Google Scholar] [CrossRef] [PubMed]
  64. Vörös, V.; Li, R.; Davoodi, A.; Wybaillie, G.; Vander Poorten, E.; Niu, K. An Augmented Reality-Based Interaction Scheme for Robotic Pedicle Screw Placement. J. Imaging 2022, 8, 273. [Google Scholar] [CrossRef]
  65. Ferraguti, F.; Minelli, M.; Farsoni, S.; Bazzani, S.; Bonfè, M.; Vandanjon, A.; Puliatti, S.; Bianchi, G.; Secchi, C. Augmented reality and robotic-assistance for percutaneous nephrolithotomy. IEEE Robot. Autom. Lett. 2020, 5, 4556–4563. [Google Scholar] [CrossRef]
  66. Wen, R.; Chui, C.K.; Ong, S.H.; Lim, K.B.; Chang, S.K.Y. Projection-based visual guidance for robot-aided RF needle insertion. Int. J. Comput. Assist. Radiol. Surg. 2013, 8, 1015–1025. [Google Scholar] [CrossRef] [PubMed]
  67. Boles, M.; Fu, J.; Iovene, E.; Francesco, C.; Ferrigno, G.; De Momi, E. Augmented Reality and Robotic Navigation System for Spinal Surgery. In Proceedings of the 11th Joint Workshop on New Technologies for Computer/Robot Assisted Surgery, Napoli, Italy, 25–27 April 2022; pp. 96–97. [Google Scholar]
  68. Fotouhi, J.; Mehrfard, A.; Song, T.; Johnson, A.; Osgood, G.; Unberath, M.; Armand, M.; Navab, N. Development and pre-clinical analysis of spatiotemporal-aware augmented reality in orthopedic interventions. IEEE Trans. Med. Imaging 2020, 40, 765–778. [Google Scholar] [CrossRef]
  69. Andress, S.; Johnson, A.; Unberath, M.; Winkler, A.F.; Yu, K.; Fotouhi, J.; Weidert, S.; Osgood, G.; Navab, N. On-the-fly augmented reality for orthopedic surgery using a multimodal fiducial. J. Med. Imaging 2018, 5, 021209. [Google Scholar] [CrossRef]
  70. Carl, B.; Bopp, M.; Saß, B.; Voellger, B.; Nimsky, C. Implementation of augmented reality support in spine surgery. Eur. Spine J. 2019, 28, 1697–1711. [Google Scholar] [CrossRef] [PubMed]
  71. Chen, W.; Kalia, M.; Zeng, Q.; Pang, E.H.; Bagherinasab, R.; Milner, T.D.; Sabiq, F.; Prisman, E.; Salcudean, S.E. Towards transcervical ultrasound image guidance for transoral robotic surgery. Int. J. Comput. Assist. Radiol. Surg. 2023, 18, 1061–1068. [Google Scholar] [CrossRef] [PubMed]
  72. Rahman, R.; Wood, M.E.; Qian, L.; Price, C.L.; Johnson, A.A.; Osgood, G.M. Head-mounted display use in surgery: A systematic review. Surg. Innov. 2020, 27, 88–100. [Google Scholar] [CrossRef] [PubMed]
  73. Pessaux, P.; Diana, M.; Soler, L.; Piardi, T.; Mutter, D.; Marescaux, J. Towards cybernetic surgery: Robotic and augmented reality-assisted liver segmentectomy. Langenbeck’s Arch. Surg. 2015, 400, 381–385. [Google Scholar] [CrossRef]
  74. Marques, B.; Plantefève, R.; Roy, F.; Haouchine, N.; Jeanvoine, E.; Peterlik, I.; Cotin, S. Framework for augmented reality in Minimally Invasive laparoscopic surgery. In Proceedings of the 2015 17th International Conference on E-Health Networking, Application & Services (HealthCom), Boston, MA, USA, 14–17 October 2015; pp. 22–27. [Google Scholar]
  75. Lee, D.; Kong, H.J.; Kim, D.; Yi, J.W.; Chai, Y.J.; Lee, K.E.; Kim, H.C. Preliminary study on application of augmented reality visualization in robotic thyroid surgery. Ann. Surg. Treat. Res. 2018, 95, 297–302. [Google Scholar] [CrossRef]
  76. Shen, J.; Zemiti, N.; Taoum, C.; Aiche, G.; Dillenseger, J.L.; Rouanet, P.; Poignet, P. Transrectal ultrasound image-based real-time augmented reality guidance in robot-assisted laparoscopic rectal surgery: A proof-of-concept study. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 531–543. [Google Scholar] [CrossRef] [Green Version]
  77. Kalia, M.; Avinash, A.; Navab, N.; Salcudean, S. Preclinical evaluation of a markerless, real-time, augmented reality guidance system for robot-assisted radical prostatectomy. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 1181–1188. [Google Scholar] [CrossRef]
  78. Porpiglia, F.; Checcucci, E.; Amparore, D.; Piana, A.; Piramide, F.; Volpi, G.; De Cillis, S.; Manfredi, M.; Fiori, C.; Piazzolla, P.; et al. PD63-12 extracapsular extension on neurovascular bundles during robot-assisted radical prostatectomy precisely localized by 3D automatic augmented-reality rendering. J. Urol. 2020, 203, e1297. [Google Scholar] [CrossRef]
  79. Piana, A.; Gallioli, A.; Amparore, D.; Diana, P.; Territo, A.; Campi, R.; Gaya, J.M.; Guirado, L.; Checcucci, E.; Bellin, A.; et al. Three-dimensional Augmented Reality–guided Robotic-assisted Kidney Transplantation: Breaking the Limit of Atheromatic Plaques. Eur. Urol. 2022, 82, 419–426. [Google Scholar] [CrossRef]
  80. Edgcumbe, P.; Singla, R.; Pratt, P.; Schneider, C.; Nguan, C.; Rohling, R. Augmented reality imaging for robot-assisted partial nephrectomy surgery. In Proceedings of the Medical Imaging and Augmented Reality: 7th International Conference, MIAR 2016, Bern, Switzerland, 24–26 August 2016; Proceedings 7. Springer: Cham, Switzerland, 2016; pp. 139–150. [Google Scholar]
  81. Peden, R.G.; Mercer, R.; Tatham, A.J. The use of head-mounted display eyeglasses for teaching surgical skills: A prospective randomised study. Int. J. Surg. 2016, 34, 169–173. [Google Scholar] [CrossRef]
  82. Long, Y.; Cao, J.; Deguet, A.; Taylor, R.H.; Dou, Q. Integrating artificial intelligence and augmented reality in robotic surgery: An initial dvrk study using a surgical education scenario. In Proceedings of the 2022 International Symposium on Medical Robotics (ISMR), Atlanta, GA, USA, 13–15 April 2022; pp. 1–8. [Google Scholar]
  83. Rewkowski, N.; State, A.; Fuchs, H. Small Marker Tracking with Low-Cost, Unsynchronized, Movable Consumer Cameras for Augmented Reality Surgical Training. In Proceedings of the 2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), Recife, Brazil, 9–13 November 2020; pp. 90–95. [Google Scholar]
  84. Barresi, G.; Olivieri, E.; Caldwell, D.G.; Mattos, L.S. Brain-controlled AR feedback design for user’s training in surgical HRI. In Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics, Hong Kong, China, 9–12 October 2015; pp. 1116–1121. [Google Scholar]
  85. Wang, Y.; Zeng, H.; Song, A.; Xu, B.; Li, H.; Zhu, L.; Wen, P.; Liu, J. Robotic arm control using hybrid brain-machine interface and augmented reality feedback. In Proceedings of the 2017 8th International IEEE/EMBS Conference on Neural Engineering (NER), Shanghai, China, 25–28 May 2017; pp. 411–414. [Google Scholar]
  86. Zeng, H.; Wang, Y.; Wu, C.; Song, A.; Liu, J.; Ji, P.; Xu, B.; Zhu, L.; Li, H.; Wen, P. Closed-loop hybrid gaze brain-machine interface based robotic arm control with augmented reality feedback. Front. Neurorobot. 2017, 11, 60. [Google Scholar] [CrossRef] [Green Version]
  87. Gras, G.; Yang, G.Z. Context-aware modeling for augmented reality display behaviour. IEEE Robot. Autom. Lett. 2019, 4, 562–569. [Google Scholar] [CrossRef]
  88. Condino, S.; Viglialoro, R.M.; Fani, S.; Bianchi, M.; Morelli, L.; Ferrari, M.; Bicchi, A.; Ferrari, V. Tactile augmented reality for arteries palpation in open surgery training. In Proceedings of the Medical Imaging and Augmented Reality: 7th International Conference, MIAR 2016, Bern, Switzerland, 24–26 August 2016; Proceedings 7. Springer: Cham, Switzerland, 2016; pp. 186–197. [Google Scholar]
  89. Jørgensen, M.K.; Kraus, M. Real-time augmented reality for robotic-assisted surgery. In Proceedings of the 3rd AAU Workshop on Human-Centered Robotics, Aalborg Universitetsforlag, Aalborg, Denmark, 30 October 2014; pp. 19–23. [Google Scholar]
  90. Si, W.X.; Liao, X.Y.; Qian, Y.L.; Sun, H.T.; Chen, X.D.; Wang, Q.; Heng, P.A. Assessing performance of augmented reality-based neurosurgical training. Vis. Comput. Ind. Biomed. Art 2019, 2, 1–10. [Google Scholar] [CrossRef] [Green Version]
  91. Su, H.; Yang, C.; Ferrigno, G.; De Momi, E. Improved human-robot collaborative control of redundant robot for teleoperated minimally invasive surgery. IEEE Robot. Autom. Lett. 2019, 4, 1447–1453. [Google Scholar] [CrossRef] [Green Version]
  92. Dinh, A.; Yin, A.L.; Estrin, D.; Greenwald, P.; Fortenko, A. Augmented Reality in Real-time Telemedicine and Telementoring: Scoping Review. JMIR mHealth uHealth 2023, 11, e45464. [Google Scholar] [CrossRef] [PubMed]
  93. Lin, Z.; Zhang, T.; Sun, Z.; Gao, H.; Ai, X.; Chen, W.; Yang, G.Z.; Gao, A. Robotic Telepresence Based on Augmented Reality and Human Motion Mapping for Interventional Medicine. IEEE Trans. Med. Robot. Bionics 2022, 4, 935–944. [Google Scholar] [CrossRef]
  94. Gasques, D.; Johnson, J.G.; Sharkey, T.; Feng, Y.; Wang, R.; Xu, Z.R.; Zavala, E.; Zhang, Y.; Xie, W.; Zhang, X.; et al. ARTEMIS: A collaborative mixed-reality system for immersive surgical telementoring. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–14. [Google Scholar]
  95. Black, D.; Salcudean, S. Human-as-a-robot performance in mixed reality teleultrasound. Int. J. Comput. Assist. Radiol. Surg. 2023, 1–8. [Google Scholar] [CrossRef] [PubMed]
  96. Qian, L.; Zhang, X.; Deguet, A.; Kazanzides, P. Aramis: Augmented reality assistance for minimally invasive surgery using a head-mounted display. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2019: 22nd International Conference, Shenzhen, China, 13–17 October 2019; Proceedings, Part V 22. Springer: Cham, Switzerland, 2019; pp. 74–82. [Google Scholar]
  97. Huang, T.; Li, R.; Li, Y.; Zhang, X.; Liao, H. Augmented reality-based autostereoscopic surgical visualization system for telesurgery. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 1985–1997. [Google Scholar] [CrossRef]
  98. Lin, Z.; Gao, A.; Ai, X.; Gao, H.; Fu, Y.; Chen, W.; Yang, G.Z. ARei: Augmented-reality-assisted touchless teleoperated robot for endoluminal intervention. IEEE/ASME Trans. Mechatron. 2021, 27, 3144–3154. [Google Scholar] [CrossRef]
  99. Fu, Y.; Lin, W.; Yu, X.; Rodríguez-Andina, J.J.; Gao, H. Robot-Assisted Teleoperation Ultrasound System Based on Fusion of Augmented Reality and Predictive Force. IEEE Trans. Ind. Electron. 2022, 70, 7449–7456. [Google Scholar] [CrossRef]
  100. Ma, X.; Song, C.; Qian, L.; Liu, W.; Chiu, P.W.; Li, Z. Augmented reality-assisted autonomous view adjustment of a 6-DOF robotic stereo flexible endoscope. IEEE Trans. Med. Robot. Bionics 2022, 4, 356–367. [Google Scholar] [CrossRef]
  101. Bonne, S.; Panitch, W.; Dharmarajan, K.; Srinivas, K.; Kincade, J.L.; Low, T.; Knoth, B.; Cowan, C.; Fer, D.; Thananjeyan, B.; et al. A Digital Twin Framework for Telesurgery in the Presence of Varying Network Quality of Service. In Proceedings of the 2022 IEEE 18th International Conference on Automation Science and Engineering (CASE), Mexico City, Mexico, 20–24 August 2022; pp. 1325–1332. [Google Scholar]
  102. Gonzalez, G.; Balakuntala, M.; Agarwal, M.; Low, T.; Knoth, B.; Kirkpatrick, A.W.; McKee, J.; Hager, G.; Aggarwal, V.; Xue, Y.; et al. ASAP: A Semi-Autonomous Precise System for Telesurgery during Communication Delays. IEEE Trans. Med. Robot. Bionics 2023, 5, 66–78. [Google Scholar] [CrossRef]
  103. Acemoglu, A.; Krieglstein, J.; Caldwell, D.G.; Mora, F.; Guastini, L.; Trimarchi, M.; Vinciguerra, A.; Carobbio, A.L.C.; Hysenbelli, J.; Delsanto, M.; et al. 5G robotic telesurgery: Remote transoral laser microsurgeries on a cadaver. IEEE Trans. Med. Robot. Bionics 2020, 2, 511–518. [Google Scholar] [CrossRef]
  104. Wang, L. A futuristic perspective on human-centric assembly. J. Manuf. Syst. 2022, 62, 199–201. [Google Scholar] [CrossRef]
  105. Wang, L.; Gao, R.; Váncza, J.; Krüger, J.; Wang, X.V.; Makris, S.; Chryssolouris, G. Symbiotic human-robot collaborative assembly. CIRP Ann. 2019, 68, 701–726. [Google Scholar] [CrossRef] [Green Version]
  106. Li, S.; Zheng, P.; Liu, S.; Wang, Z.; Wang, X.V.; Zheng, L.; Wang, L. Proactive human–robot collaboration: Mutual-cognitive, predictable, and self-organising perspectives. Robot. Comput.-Integr. Manuf. 2023, 81, 102510. [Google Scholar] [CrossRef]
  107. Ji, Z.; Liu, Q.; Xu, W.; Yao, B.; Liu, J.; Zhou, Z. A closed-loop brain-computer interface with augmented reality feedback for industrial human-robot collaboration. Int. J. Adv. Manuf. Technol. 2021, 124, 3083–3098. [Google Scholar] [CrossRef]
  108. Sanna, A.; Manuri, F.; Fiorenza, J.; De Pace, F. BARI: An Affordable Brain-Augmented Reality Interface to Support Human–Robot Collaboration in Assembly Tasks. Information 2022, 13, 460. [Google Scholar] [CrossRef]
  109. Choi, S.H.; Park, K.B.; Roh, D.H.; Lee, J.Y.; Mohammed, M.; Ghasemi, Y.; Jeong, H. An integrated mixed reality system for safety-aware human-robot collaboration using deep learning and digital twin generation. Robot. Comput.-Integr. Manuf. 2022, 73, 102258. [Google Scholar] [CrossRef]
  110. Umbrico, A.; Orlandini, A.; Cesta, A.; Faroni, M.; Beschi, M.; Pedrocchi, N.; Scala, A.; Tavormina, P.; Koukas, S.; Zalonis, A.; et al. Design of advanced human–robot collaborative cells for personalized human–robot collaborations. Appl. Sci. 2022, 12, 6839. [Google Scholar] [CrossRef]
  111. Aivaliotis, S.; Lotsaris, K.; Gkournelos, C.; Fourtakas, N.; Koukas, S.; Kousi, N.; Makris, S. An augmented reality software suite enabling seamless human robot interaction. Int. J. Comput. Integr. Manuf. 2023, 36, 3–29. [Google Scholar] [CrossRef]
  112. Szczurek, K.A.; Prades, R.M.; Matheson, E.; Rodriguez-Nogueira, J.; Di Castro, M. Multimodal multi-user mixed reality human–robot interface for remote operations in hazardous environments. IEEE Access 2023, 11, 17305–17333. [Google Scholar] [CrossRef]
  113. Hietanen, A.; Pieters, R.; Lanz, M.; Latokartano, J.; Kämäräinen, J.K. AR-based interaction for human-robot collaborative manufacturing. Robot. Comput.-Integr. Manuf. 2020, 63, 101891. [Google Scholar] [CrossRef]
  114. Chan, W.P.; Hanks, G.; Sakr, M.; Zhang, H.; Zuo, T.; Van der Loos, H.M.; Croft, E. Design and evaluation of an augmented reality head-mounted display interface for human robot teams collaborating in physically shared manufacturing tasks. ACM Trans. Hum.-Robot. Interact. (THRI) 2022, 11, 1–19. [Google Scholar] [CrossRef]
  115. Moya, A.; Bastida, L.; Aguirrezabal, P.; Pantano, M.; Abril-Jiménez, P. Augmented Reality for Supporting Workers in Human–Robot Collaboration. Multimodal Technol. Interact. 2023, 7, 40. [Google Scholar] [CrossRef]
  116. Liu, C.; Zhang, Z.; Tang, D.; Nie, Q.; Zhang, L.; Song, J. A mixed perception-based human-robot collaborative maintenance approach driven by augmented reality and online deep reinforcement learning. Robot. Comput.-Integr. Manuf. 2023, 83, 102568.
  117. Wang, B.; Hu, S.J.; Sun, L.; Freiheit, T. Intelligent welding system technologies: State-of-the-art review and perspectives. J. Manuf. Syst. 2020, 56, 373–391.
  118. Li, S.; Zheng, P.; Fan, J.; Wang, L. Toward proactive human–robot collaborative assembly: A multimodal transfer-learning-enabled action prediction approach. IEEE Trans. Ind. Electron. 2021, 69, 8579–8588.
  119. Ong, S.; Chong, J.; Nee, A. A novel AR-based robot programming and path planning methodology. Robot. Comput.-Integr. Manuf. 2010, 26, 240–249.
  120. Fang, H.; Ong, S.; Nee, A. Interactive robot trajectory planning and simulation using augmented reality. Robot. Comput.-Integr. Manuf. 2012, 28, 227–237.
  121. Young, K.Y.; Cheng, S.L.; Ko, C.H.; Su, Y.H.; Liu, Q.F. A novel teaching and training system for industrial applications based on augmented reality. J. Chin. Inst. Eng. 2020, 43, 796–806.
  122. Solyman, A.E.; Ibrahem, K.M.; Atia, M.R.; Saleh, H.I.; Roman, M.R. Perceptive augmented reality-based interface for robot task planning and visualization. Int. J. Innov. Comput. Inf. Control 2020, 16, 1769–1785.
  123. Tavares, P.; Costa, C.M.; Rocha, L.; Malaca, P.; Costa, P.; Moreira, A.P.; Sousa, A.; Veiga, G. Collaborative welding system using BIM for robotic reprogramming and spatial augmented reality. Autom. Constr. 2019, 106, 102825.
  124. Mullen, J.F.; Mosier, J.; Chakrabarti, S.; Chen, A.; White, T.; Losey, D.P. Communicating inferred goals with passive augmented reality and active haptic feedback. IEEE Robot. Autom. Lett. 2021, 6, 8522–8529.
  125. Weisz, J.; Allen, P.K.; Barszap, A.G.; Joshi, S.S. Assistive grasping with an augmented reality user interface. Int. J. Robot. Res. 2017, 36, 543–562.
  126. Chadalavada, R.T.; Andreasson, H.; Schindler, M.; Palm, R.; Lilienthal, A.J. Bi-directional navigation intent communication using spatial augmented reality and eye-tracking glasses for improved safety in human–robot interaction. Robot. Comput.-Integr. Manuf. 2020, 61, 101830.
  127. Li, C.; Zheng, P.; Li, S.; Pang, Y.; Lee, C.K. AR-assisted digital twin-enabled robot collaborative manufacturing system with human-in-the-loop. Robot. Comput.-Integr. Manuf. 2022, 76, 102321.
  128. Zheng, P.; Li, S.; Xia, L.; Wang, L.; Nassehi, A. A visual reasoning-based approach for mutual-cognitive human-robot collaboration. CIRP Ann. 2022, 71, 377–380.
  129. Zheng, P.; Li, S.; Fan, J.; Li, C.; Wang, L. A collaborative intelligence-based approach for handling human-robot collaboration uncertainties. CIRP Ann. 2023, 72, 1–4.
  130. Li, S.; Zheng, P.; Pang, S.; Wang, X.V.; Wang, L. Self-organising multiple human–robot collaboration: A temporal subgraph reasoning-based method. J. Manuf. Syst. 2023, 68, 304–312.
  131. Sievers, T.S.; Schmitt, B.; Rückert, P.; Petersen, M.; Tracht, K. Concept of a Mixed-Reality Learning Environment for Collaborative Robotics. Procedia Manuf. 2020, 45, 19–24.
  132. Leutert, F.; Schilling, K. Projector-based Augmented Reality support for shop-floor programming of industrial robot milling operations. IEEE Int. Conf. Control Autom. ICCA 2022, 2022, 418–423.
  133. Wassermann, J.; Vick, A.; Krüger, J. Intuitive robot programming through environment perception, augmented reality simulation and automated program verification. Procedia CIRP 2018, 76, 161–166.
  134. Ong, S.K.; Yew, A.W.; Thanigaivel, N.K.; Nee, A.Y. Augmented reality-assisted robot programming system for industrial applications. Robot. Comput.-Integr. Manuf. 2020, 61, 101820.
  135. Hernandez, J.D.; Sobti, S.; Sciola, A.; Moll, M.; Kavraki, L.E. Increasing robot autonomy via motion planning and an augmented reality interface. IEEE Robot. Autom. Lett. 2020, 5, 1017–1023.
  136. Quintero, C.P.; Li, S.; Pan, M.K.; Chan, W.P.; Loos, H.F.M.V.D.; Croft, E. Robot Programming Through Augmented Trajectories in Augmented Reality. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Madrid, Spain, 1–5 October 2018; pp. 1838–1844.
  137. Rosen, E.; Whitney, D.; Phillips, E.; Chien, G.; Tompkin, J.; Konidaris, G.; Tellex, S. Communicating and controlling robot arm motion intent through mixed-reality head-mounted displays. Int. J. Robot. Res. 2019, 38, 1513–1526.
  138. Diehl, M.; Plopski, A.; Kato, H.; Ramirez-Amaro, K. Augmented Reality interface to verify Robot Learning. In Proceedings of the 29th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2020, Naples, Italy, 31 August–4 September 2020; pp. 378–383.
  139. Bates, T.; Ramirez-Amaro, K.; Inamura, T.; Cheng, G. On-line simultaneous learning and recognition of everyday activities from virtual reality performances. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 3510–3515.
  140. Luebbers, M.B.; Brooks, C.; Mueller, C.L.; Szafir, D.; Hayes, B. ARC-LfD: Using Augmented Reality for Interactive Long-Term Robot Skill Maintenance via Constrained Learning from Demonstration. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 909–916.
  141. Solanes, J.E.; Muñoz, A.; Gracia, L.; Martí, A.; Girbés-Juan, V.; Tornero, J. Teleoperation of industrial robot manipulators based on augmented reality. Int. J. Adv. Manuf. Technol. 2020, 111, 1077–1097.
  142. García, A.; Solanes, J.E.; Muñoz, A.; Gracia, L.; Tornero, J. Augmented Reality-Based Interface for Bimanual Robot Teleoperation. Appl. Sci. 2022, 12, 4379.
  143. Pan, Y.; Chen, C.; Li, D.; Zhao, Z.; Hong, J. Augmented reality-based robot teleoperation system using RGB-D imaging and attitude teaching device. Robot. Comput.-Integr. Manuf. 2021, 71, 102167.
  144. Su, Y.; Chen, X.; Zhou, T.; Pretty, C.; Chase, G. Mixed reality-integrated 3D/2D vision mapping for intuitive teleoperation of mobile manipulator. Robot. Comput.-Integr. Manuf. 2022, 77, 102332.
  145. Elsdon, J.; Demiris, Y. Augmented reality for feedback in a shared control spraying task. In Proceedings of the IEEE International Conference on Robotics and Automation, Brisbane, QLD, Australia, 21–25 May 2018; pp. 1939–1946.
  146. Wonsick, M.; Keleștemur, T.; Alt, S.; Padır, T. Telemanipulation via virtual reality interfaces with enhanced environment models. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 2999–3004.
  147. Lin, T.C.; Krishnan, A.U.; Li, Z. Comparison of Haptic and Augmented Reality Visual Cues for Assisting Tele-manipulation. In Proceedings of the IEEE International Conference on Robotics and Automation, Philadelphia, PA, USA, 23–27 May 2022; pp. 9309–9316.
  148. Su, Y.P.; Chen, X.Q.; Zhou, T.; Pretty, C.; Chase, J.G. Mixed Reality-Enhanced Intuitive Teleoperation with Hybrid Virtual Fixtures for Intelligent Robotic Welding. Appl. Sci. 2021, 11, 11280.
  149. Frank, J.A.; Moorhead, M.; Kapila, V. Realizing mixed-reality environments with tablets for intuitive human-robot collaboration for object manipulation tasks. In Proceedings of the 25th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2016, New York, NY, USA, 26–31 August 2016; pp. 302–307.
  150. Su, Y.H.; Chen, C.Y.; Cheng, S.L.; Ko, C.H.; Young, K.Y. Development of a 3D AR-Based Interface for Industrial Robot Manipulators. In Proceedings of the 2018 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2018, Miyazaki, Japan, 7–10 October 2018; pp. 1809–1814.
  151. Piyavichayanon, C.; Koga, M.; Hayashi, E.; Chumkamon, S. Collision-Aware AR Telemanipulation Using Depth Mesh. In Proceedings of the 2022 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Sapporo, Japan, 11–15 July 2022; pp. 386–392.
  152. Li, J.R.; Fu, J.L.; Wu, S.C.; Wang, Q.H. An active and passive combined gravity compensation approach for a hybrid force feedback device. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 2021, 235, 4368–4381.
  153. Enayati, N.; De Momi, E.; Ferrigno, G. Haptics in robot-assisted surgery: Challenges and benefits. IEEE Rev. Biomed. Eng. 2016, 9, 49–65.
Figure 1. Concept of using AR techniques in medical and industrial robotic applications. Left: AR visualization and guidance for RAS tasks and surgical operations; Right: AR guidance for the robot-assisted industrial manufacturing process.
Table 1. Hardware parameters of the AR devices used in the applications collected in this survey. The devices are ordered chronologically by their release date.
| Platform | Field of View (degrees) | Per-Eye Resolution (pixels) | Tracking Type (DoF) | Eye Tracking | Latency (ms) |
|---|---|---|---|---|---|
| Google Glass | - | 640 × 360 | 3 Non-positional | No | - |
| HoloLens 1 | 34 | 1268 × 720 | 6 Inside-out | No | 16 |
| Magic Leap 1 | 50 | 1280 × 960 | 6 Inside-out | Yes | 8 |
| HoloLens 2 | 52 | 2048 × 1080 | 6 Inside-out | Yes | 16 |
| Magic Leap 2 | 70 | 1440 × 1760 | 6 Inside-out | Yes | 8 |
| Meta Quest Pro | 96 | 1800 × 1920 | 6 Inside-out | Yes | 10 |
Table 4. Summary of AR applications in surgical training.
| Application | References | Platform | AR Medium | Assessment |
|---|---|---|---|---|
| General Skills | Rewkowski et al. [83] | Custom | OST-HMD | None |
| General Skills | Long et al. [82] | da Vinci | HRSV | Quantitative |
| Supervision | Jørgensen et al. [89] | da Vinci | HRSV | Qualitative |
| Safety | Barresi et al. [84] | Custom | OST-HMD | Qualitative and Quantitative |
| Rehab | Wang et al. [85] | Custom | Screen | Quantitative |
| Rehab | Zeng et al. [86] | Custom | Screen | Quantitative |
| Injury Detection | Gras et al. [87] | Simulator | Display | Quantitative User Study |
| Preoperative Training | Si et al. [90] | Simulator | OST-HMD | Questionnaire |
| Haptic | Condino et al. [88] | Custom | Custom | Questionnaire |
Table 5. Summary of AR use in telesurgery in medical robot applications.
| Application | Reference | Robot Platform | AR Medium | Detailed Contents |
|---|---|---|---|---|
| Remote visualization | Lin et al. [93] | KUKA LBR Med | OST-HMD | Interventions |
| Remote visualization | Gasques et al. [94] | Custom | OST-HMD | Trauma |
| Remote visualization | Black et al. [95] | Custom | OST-HMD | Ultrasound |
| Remote visualization | Qian et al. [96] | da Vinci | OST-HMD | MIS |
| Remote visualization | Huang et al. [97] | KUKA LBR iiwa | Monitor | Percutaneous |
| Teleoperation control | Lin et al. [98] | Custom | OST-HMD | Endoluminal |
| Teleoperation control | Fu et al. [99] | Universal Robot 5 | Monitor | Ultrasound |
| Teleoperation control | Ho et al. [55] | da Vinci | Projector | Laparoscopy |
| Teleoperation control | Ma et al. [100] | da Vinci and Custom | OST-HMD | Laparoscopy |
| Latency and motion prediction | Richter et al. [17] | da Vinci | Console | MIS |
| Latency and motion prediction | Bonne et al. [101] | da Vinci | OST-HMD | MIS |
| Latency and motion prediction | Gonzalez et al. [102] | da Vinci | Monitor | MIS |
| Latency and motion prediction | Fu et al. [99] | Universal Robot 5 | Monitor | Ultrasound |
Table 6. AR techniques for HRI and HRC in industrial robotic applications.
Table 6. AR technique for HRI and HRC in industrial robotic applications.
CategoryReferenceRobotMediumAR Content
Accurate robot controlWang et al. [19]KUKA KR6 R700HoloLensMove cube, rotate cube, and create waypoint
Interactive path planningJi et al. [107]ABB IRB1200HoloLensRobot trajectory point, user interface
Pick-and-place selectionSanna et al. [108]COMAU e.DO manipulatorHoloLens 2Explore and select NeuroTags
Safety-aware HRCChoi et al. [109]Universal Robot 3HoloLens 2Synchronized digital twin, safety information, motion preview
User-aware controlUmbrico et al. [110]Universal Robot 10eHoloLens 2 headset and Samsung Galaxy S4 tablet3D model, arrow control guidance, task instruction, operator feedback
Mobile HRIAivaliotis et al. [111]Mobile robot platformHoloLens 2Programming interface, production status, safety zones, and recovering instruction
Multi-user robot teleoperationSzczurek et al. [112]CERNBotHoloLens 2Video, 3D point cloud, and audio feedback
Engine assembly taskHietanen et al. [113]Universal Robot 5HoloLens, LCD projectordanger zone, changed region, robot status, and control button
Carbon-fiber-reinforced polymer fabricationChan et al. [114]KUKA IIWA LBR14HoloLensVirtual robot, workpiece model, and robot trajectory
Gearbox assembly taskMoya et al. [115]Universal Robot 10Tablet3D model animation, audio, PDF file, image, and video
Maintenance manipulationLiu et al. [116]AE AIR4–560HoloLens 2Text, 3D model, execute task, remote expert
Table 8. Applications of AR techniques for training and simulation.
| Category | Reference | Robot | Medium | AR Content |
|---|---|---|---|---|
| Human–robot collaboration learning environment | Sievers et al. [131] | Collaborative robots | HMD/Tablet | Experimental modular assembly plant |
| Industrial robot milling | Leutert and Schilling [132] | Industrial robot | Projector | Virtual processing paths and menu |
| Workspace simulation and program verification | Wassermann et al. [133] | KUKA KR6 | Tablet | 2D image, 3D point cloud |
| Robot work cell simulation | Ong et al. [134] | Industrial robot | Oculus Rift DK2 HMD | Work cell, 3D points and paths |
| High-level augmented reality specifications | Hernandez et al. [135] | Fetch | HMD | Virtual objects |
| Augmented trajectories simulation | Quintero et al. [136] | Barrett WAM | HoloLens | Robot-augmented trajectories |
| Robot motion intent visualization | Rosen et al. [137] | Baxter | HoloLens | Sequences of robot movement |
| Robot trajectory simulation | Gadre et al. [14] | Baxter | HoloLens | Motion preview |
| Robot learning verification | Diehl et al. [138] | UR5e | HoloLens/Tablet | Virtual robot, target objects |
| Robot learned skills modification | Luebbers et al. [140] | Sawyer | HoloLens | Virtual robot and tasks |