Article

Mixed Reality-Enhanced Intuitive Teleoperation with Hybrid Virtual Fixtures for Intelligent Robotic Welding

1 Mechanical Engineering Department, College of Engineering, University of Canterbury, Christchurch 8041, New Zealand
2 Manufacturing Futures Research Institute (MFRI), Swinburne University of Technology, Melbourne 3122, Australia
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(23), 11280; https://doi.org/10.3390/app112311280
Submission received: 30 October 2021 / Revised: 23 November 2021 / Accepted: 23 November 2021 / Published: 29 November 2021
(This article belongs to the Topic Extended Reality (XR): AR, VR, MR and Beyond)

Abstract

This paper presents an integrated scheme based on a mixed reality (MR) and haptic feedback approach for intuitive and immersive teleoperation of robotic welding systems. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time visual feedback from the robot working space. The proposed robotic tele-welding system features imitative motion mapping from the user’s hand movements to the welding robot motions, enabling spatial velocity-based control of the robot tool center point (TCP). The proposed mixed reality virtual fixture (MRVF) integration approach implements hybrid haptic constraints: a conical guidance fixture helps the operator align the welding torch for welding, while a prevention fixture constrains the welding operation to a collision-free area. Onsite welding and tele-welding experiments identify the operational differences between professional and unskilled welders and demonstrate the effectiveness of the proposed MRVF tele-welding framework for novice welders. The MRVF-integrated visual/haptic tele-welding scheme reduced torch alignment times by 56% and 60% compared to the MRnoVF and baseline cases, with the lowest cognitive workload and the highest usability ratings among the tested schemes. The MRVF scheme effectively stabilized welders’ hand movements and eliminated undesirable collisions while generating smooth welds.

1. Introduction

Welding has been used extensively in the maintenance of nuclear plants, the construction of underwater structures, and the repair of spacecraft in outer space [1]. In such hazardous situations, where human welders have no direct access, the judgment and intervention of human operators are still required [2]. Customized production is another application scenario for welding, one in which welders often work amid dust, strong light, radiation, and explosion hazards [3]. Human-in-the-loop (HITL) robotic tele-welding strategies have become a feasible approach for removing humans from these dangerous, harmful, and unpleasant environments while welding operations are performed [4,5]. Robotic tele-welding systems (RTWSs) combine the advantages of humans and robots and coordinate the functions of all system components efficiently and safely [6,7]. RTWSs can diminish geographical limitations for scarce welding professionals and bring a remote workforce into manufacturing [8,9].
Welding training is a time-consuming and costly process: intensive instruction and practice are usually required to bring unskilled welders to an intermediate skill level [10,11]. Analyzing the differences between the operating skills of professional and novice welders is therefore important, both to help unskilled welders reach a professional welding level and to improve the feasibility, efficiency, and welding quality of RTWSs for novice welders during remote welding operations. The extraction of professional welders’ expertise and skills, as well as the application of robot assistance in on-site welding operations, has become a popular research topic [12,13,14]. The implementation of interactive robots can stabilize the hand movements of novice welders for improved welding quality, but robot-assisted welding has not been studied in teleoperated welding scenarios. Welding motion capture systems were used in [15,16] to differentiate between professional and unskilled welders in terms of operational behavior in the gas tungsten arc welding (GTAW) process, providing an experimental basis for the development of robot-assisted tele-welding schemes. The experiments in [17] revealed differences between professional and unskilled welders in GTAW hand-movement trajectories and indicated that the main cause of unsatisfactory welding results is that novice welders make abrupt movements in the direction perpendicular to the weld surface. However, the operational differences between professional and novice users in gas metal arc welding (GMAW) have not been well researched [18].
Recent research on human-centered robotic welding has focused on the development of MR-based robot-assisted welding training platforms, intuitive programming for telerobotic welding, interactive telerobotic welding design, and MR-enhanced tele-welding paradigms. A virtual reality (VR)-based haptic-guided welder training system was introduced in [19]. This system provides guidance force to welders, simulating a human welding trainer, and both novice and skilled welders can use it to improve their welding skills in a virtual environment. However, this system does not integrate real welding scenarios into the virtual environment to allow welders to adjust their movements in real time according to the weld pool status, nor does it transfer human movements to a robot for actual tele-welding operations. Ciszak et al. [20] proposed a vision-guided approach for programming automated welding robot paths in 2D, where the programmer draws the target weld pattern in the user presentation space, a low-cost camera captures the image, and an algorithm detects and processes the geometry (contour lines) drawn by the human. This intuitive remote programming system is limited to programming contour lines in two-dimensional planes and lacks the real-time capability of a telerobotic welding system. In [21], the authors analyzed the integration of advanced technologies such as MR, robot vision, intuitive and immersive teleoperation, and artificial intelligence (AI) to build an interactive telerobotic welding system. This paradigm enables efficient human-centered collaboration between remote welding platforms and operators through multi-channel communication. A teleoperated wall-climbing robotic welding system was developed to demonstrate the application of various technologies in an innovative robotic interaction system to best achieve natural human–robot interaction. However, the mobile wall-climbing welding robot presented in this system has a simple structure and lacks a flexible manipulator that could mimic welders’ human-level manipulation and make dexterous welding adjustments. Natural human movement signals were not used to improve the system’s intuitiveness or to control the robot for tele-welding tasks.

2. Related Work

More recent research attention has focused on MR-enhanced tele-welding paradigms [22]. It was verified in [23] that there were no statistically significant differences in total welding scores between participants in the physical welding group and the mixed reality-based welding groups. A mixed reality welding user interface gives operators the ability to perform welding at a distance while maintaining a level of manipulation [24]. An optical tracking-based telerobotic welding system was introduced in [25], in which a Leap Motion sensor captures the trajectory of a virtual welding gun held by a human welder in the user space to control the remote welding robot. However, this welding system requires a physical replica of the workpiece in the user space to project the real-time weld pool state and guide the welders in adjusting their hand movements to the shape of the workpiece [26]. Wang et al. [27] developed an MR-based human–robot collaborative welding system. Their collaborative tele-welding platform combines the strengths of humans and robots to perform weaving gas tungsten arc welding (GTAW) tasks, and the welder can monitor the welding process through an MR display without being physically present. Welding experiments indicated that collaborative tele-welding produces better results than welding performed by humans or robots independently. MR-based robot-assisted remote welding platforms were developed in [28] to provide welders with more natural and immersive human–robot interaction (HRI) [29]. However, in these systems, the users rely on visual feedback alone for movement control, with no haptic effects to prevent accidental collisions between the robot and the workpiece when the operator controls the robot from a distance. A visual and haptic robot programming system based on mixed reality and force feedback was developed in [30], but it was not suitable for real-time remote welding operations and was inefficient in unstructured and dynamic welding situations. Haptic feedback provides welders with an additional scene modality and increases the sense of presence in the remote environment, thereby improving the ability to perform complex tasks [31,32,33]. The primary benefit of incorporating haptic effects is to enhance tele-welding task performance and the operator’s perception [34]. Existing remote robotic welding systems do not take sufficient advantage of the potential performance improvements that various forms of haptic effects can bring to the user. The rapid development of MR-enhanced teleoperation has led to the integration of MR and virtual fixtures (VFs) to improve task performance and user perception [35,36]. The integration of MR and VFs in RTWSs can effectively address the defects of the above telerobotic welding systems: the immersive and interactive MR environment allows virtual workpieces to be generated in the user space [37] and, combined with VF technology, provides force feedback and guidance to users, thereby improving the accuracy of robot movements and effectively preventing accidental collisions [38,39].
The main weakness of published studies on tele-welding is that existing remote-controlled robotic welding systems do not adequately incorporate MR technology and virtual fixtures to eliminate potentially harmful collisions in the tele-welding process and to grant welding robots human-level dynamics for dexterous GMAW tasks. No attempt has been made to reduce operational complexity so that inexperienced welders can weld quickly, addressing the time-consuming training process and the shortage of a qualified workforce [40]. In this work, an on-site welding experiment was designed to investigate the motion differences between expert and unskilled welders, extracting the expertise and skills of professional welders to optimize the robotic tele-welding platform. An MRVF robotic tele-welding platform was then developed to give novice welders better weld control by incorporating MR and VFs. This tele-welding paradigm integrates imitation-based motion mapping with MR and VF functions, providing human-level operating capabilities and enabling non-skilled welders to perform remote GMAW tasks. A tele-welding experiment was carried out to verify the effect of MR-integrated visual and haptic cues on tele-welding tasks against the typical baseline and MR tele-welding cases.

3. Materials and Methods

3.1. Welding Skill Extraction System Design

The objectives of this research were to (1) remove human welders from hazardous and unpleasant working environments without increasing operational complexity or sacrificing welding quality; (2) enable welders to conduct tele-welding in the same way it is performed onsite, minimizing the learning required to introduce a tele-welding robot into the loop; (3) analyze the operational techniques and welding expertise distinguishing professional welders from unskilled welders; (4) further assist unskilled workers with integrated visual and haptic HRI modalities via MR to improve task performance and system usability in remote-controlled tele-welding and to achieve welding results comparable to those of professional welders. These objectives address key issues in remote robotic welding.
To identify operational differences between unskilled and professional welders, the hand movements of professional welders performing manual welding tasks were tracked and compared to those of unskilled welders. Figure 1 shows the hardware components of the gas metal arc welding (GMAW) motion tracking platform, including an HTC Vive tracking system, welding shelter, welding torch, and welding gas/wire/electricity supplies. A 6-DOF Vive tracker was mounted on the welding torch and exposed to the two surrounding Vive tracking base stations, which track the translational and rotational motion of the welder’s torch hand via a wireless connection between the tracked welding torch and the base stations. A metal welding shelter covers the torch tip and workpieces to prevent infrared (IR) light exposure from the welding process, which may interfere with the IR-sensitive tracking sensors, enabling more precise motion tracking. Auto-darkening welding helmets and welding gloves were used by all participants.
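For context, torch pose logging of this kind can be sketched in a few lines. The following is a minimal sketch using the pyopenvr bindings; the device lookup, polling rate, and exact binding signatures are assumptions and may differ across OpenVR versions.

```python
# Minimal sketch: polling a torch-mounted Vive tracker with pyopenvr.
# Device lookup, polling rate, and binding signatures are assumptions.
import time
import openvr

vr_sys = openvr.init(openvr.VRApplication_Other)

def find_tracker_index():
    """Return the index of the first generic Vive tracker, or None."""
    for i in range(openvr.k_unMaxTrackedDeviceCount):
        if vr_sys.getTrackedDeviceClass(i) == openvr.TrackedDeviceClass_GenericTracker:
            return i
    return None

tracker = find_tracker_index()
samples = []
for _ in range(1000):  # roughly 10 s of data at 100 Hz
    poses = vr_sys.getDeviceToAbsoluteTrackingPose(
        openvr.TrackingUniverseStanding, 0, openvr.k_unMaxTrackedDeviceCount)
    pose = poses[tracker]
    if pose.bPoseIsValid:
        m = pose.mDeviceToAbsoluteTracking  # 3x4 row-major pose matrix
        samples.append((time.time(), m[0][3], m[1][3], m[2][3]))  # x, y, z
    time.sleep(0.01)
openvr.shutdown()
```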

3.2. MRVF Tele-Welding System Overview

In this study, we investigated the impact of integrated visual/haptic perception in MR on an immersive and intuitive tele-welding process built on natural 3D motion mapping. Figure 2 shows the MR-incorporated virtual fixture (MRVF) telerobotic system, consisting of four main elements: (1) the welding robot and visualization system; (2) the haptic master robot; (3) the MR workspace implementation; (4) the robot and operator space communication implementation.
The remote robotic welding platform consisted of a UR5 industrial manipulator with six degrees of freedom (DOF), gas metal arc welding (GMAW) equipment, welding camera, and auto-darkening filter. The UR5 industrial manipulator was equipped with an arc welding torch to perform the remote welding process, as shown in Figure 3. A monocular Logitech C615 webcam (Logitech International S.A, Lausanne, Switzerland) with an auto-darkening lens was mounted on the robot to observe the welding process and provide the operator with a direct view of the workpieces. A robot operating system (ROS) middleware-supported driver for the UR5 robot ran on a computer with an Ubuntu 16.04 operating system. The Ubuntu computer was equipped with an i7-10700 CPU, 64 GB RAM, and GeForce RTX 2060 graphics to command the UR5 robot controller through TCP/IP and process the on-site welding streams. The TCP/IP-based ROS communication protocol was capable of fast control rates at 125 Hz, which is sufficient for teleoperated robotic welding tasks, where real-time control is required.
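To make the control-rate figure concrete, the sketch below streams Cartesian velocity commands to a UR controller at 125 Hz. The paper’s system commands the robot through the ROS driver; a minimal stand-in that achieves the same streaming rate is URScript’s speedl over the controller’s real-time TCP port. The robot address and the velocity source are placeholders.

```python
# Sketch: streaming Cartesian velocity commands to a UR controller over
# its real-time script interface (port 30003, 125 Hz on CB-series).
# ROBOT_IP is a placeholder, and get_operator_velocity() is a
# hypothetical hook standing in for the motion-mapping output.
import socket
import time

ROBOT_IP = "192.168.1.10"  # placeholder address
PORT = 30003               # UR real-time client interface
PERIOD = 1.0 / 125.0       # 125 Hz command period

def get_operator_velocity():
    """Hypothetical hook: mapped TCP velocity [vx, vy, vz, wx, wy, wz]."""
    return [0.0, 0.005, 0.0, 0.0, 0.0, 0.0]

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((ROBOT_IP, PORT))
for _ in range(125 * 10):  # stream for 10 s
    v = get_operator_velocity()
    # speedl drives the TCP at a Cartesian velocity; t is set slightly
    # above the period so consecutive commands blend smoothly.
    cmd = "speedl([%f, %f, %f, %f, %f, %f], 0.5, %f)\n" % (*v, 2 * PERIOD)
    sock.send(cmd.encode("utf-8"))
    time.sleep(PERIOD)
sock.close()
```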
A PHANToM Omni haptic robot (SensAble Technologies Inc., Woburn, MA, USA) was utilized as the motion input device to remotely operate the welding robot in a manual welding manner. The MRVF system features velocity-centric motion mapping (VCMM) from the user’s hand movements to the robot motions and enables spatial velocity-based control of the robot tool center point (TCP). The welder uses the stylus of the haptic robot with the same motion and manner as when performing manual welding. This approach enables intuitive and precise user control of the position and orientation of the UR5 end effector to achieve the desired travel speed and travel/work angles, as if the user were directly present.
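The paper does not give the VCMM equations, so the following is only a minimal sketch of one plausible form: the stylus offset from a neutral pose is scaled into a TCP linear velocity with a tremor deadband and a speed clamp. The gain, deadband, and clamp values are illustrative assumptions.

```python
# Sketch of a velocity-centric motion mapping (VCMM)-style law: the
# stylus offset from a neutral pose is scaled into a TCP linear
# velocity. The gain, deadband, and clamp are illustrative assumptions.
import numpy as np

K_LIN = 0.8        # velocity gain, (m/s) per m of stylus offset (assumed)
DEADBAND = 0.005   # m; suppresses hand tremor around the neutral pose
V_MAX = 0.05       # m/s; clamp to safe welding travel speeds

def map_stylus_to_tcp_velocity(stylus_pos, neutral_pos):
    """Map the stylus offset from its neutral pose to a TCP velocity."""
    offset = np.asarray(stylus_pos, dtype=float) - np.asarray(neutral_pos, dtype=float)
    if np.linalg.norm(offset) < DEADBAND:
        return np.zeros(3)          # inside the deadband: hold position
    v = K_LIN * offset
    speed = np.linalg.norm(v)
    if speed > V_MAX:
        v *= V_MAX / speed          # saturate while keeping direction
    return v
```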
The operator space for the MRVF tele-welding consisted of an HTC Vive HMD and 27-inch monitor connected to a desktop with an i7-8700k CPU, 32 GB RAM, and a GeForce GTX 1080 graphics processor. The immersive MRVF environment was generated in Unity 3D to display an integrated 3D visualization with overlaid monoscopic image streams and corresponding haptic feedback during the welding process. The ROS bridge provided a network intermediate, enabling the exchange of messages between nodes, and it was used to establish communication between the master and slave robot sides.
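As an illustration of the master-to-slave message exchange, the sketch below publishes a mapped TCP velocity through rosbridge using roslibpy. The host, topic name, and message type are assumptions; the paper does not specify its message interface.

```python
# Sketch: publishing the mapped TCP velocity from the operator side to
# the robot side through rosbridge, using roslibpy. Host, topic name,
# and message type are assumptions.
import roslibpy

client = roslibpy.Ros(host="192.168.1.20", port=9090)  # placeholder host
client.run()

twist_topic = roslibpy.Topic(client, "/tcp_velocity_cmd", "geometry_msgs/Twist")

def publish_velocity(v):
    """Publish a TCP velocity [vx, vy, vz, wx, wy, wz] as a Twist."""
    twist_topic.publish(roslibpy.Message({
        "linear": {"x": v[0], "y": v[1], "z": v[2]},
        "angular": {"x": v[3], "y": v[4], "z": v[5]},
    }))

publish_velocity([0.0, 0.005, 0.0, 0.0, 0.0, 0.0])
client.terminate()
```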

3.3. MRVF Visual/Haptic Workspace

Digital twin technology was used to capture the physical UR5 robot pose during operation, allowing the welders to view the rotation status of each joint [41]. The combination of the virtual twin and onsite video streams in MR provided comprehensive real-time monitoring of the robot’s operating status and helped the welders accurately and efficiently amend the welding motion based on data from the robot model. The virtual UR5 robot was scaled 1:5 so that the digital twin’s geometry and motions fit the user’s view in the MR welding workspace.
Virtual fixtures (VFs) can be divided into guidance fixtures and prevention fixtures. The proposed MRVF uses a combination of both to guide users efficiently to the initial welding point and to effectively prevent the torch tip from colliding with the workpiece.
During the welding process, the electrode needs to contact the molten weld pool so the filler metal can be transferred from the electrode to the work. However, contact between the torch tip and the workpiece must be prevented to avoid damage. In the MRVF tele-welding workspace (Figure 4d), a transparent prevention VF panel remains overlaid on the virtual workpiece, together with a 2D display of the actual welding process, to minimize collisions between the user-manipulated torch tip and the workpiece.
The welding experiments revealed that novice users find it relatively difficult to move the torch to the exact weld starting point; this torch alignment process is often time-consuming and increases overall task completion time. In the MR workspace, a conical guidance fixture is therefore installed with its tip aligned to the welding start point, as shown in Figure 4d. The user simply moves the torch head into the wide end and then quickly slides the virtual torch tip to the cone tip position by following the resistance of the cone’s inner wall, while the actual torch is simultaneously driven to the intended welding start point.
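The conical guidance behavior can be expressed as a simple geometric clamp that keeps the commanded point inside the cone. This is a hedged sketch: the cone axis, half-angle, and wall-sliding rule are assumptions, not the paper’s implementation.

```python
# Geometric sketch of the conical guidance fixture: a clamp that keeps
# the commanded point inside a cone whose apex is the weld start point.
# The axis, half-angle, and wall-sliding rule are assumptions.
import numpy as np

def clamp_to_cone(p, apex, axis, half_angle):
    """Project point p back inside a cone (apex, axis, half-angle in rad)."""
    p = np.asarray(p, dtype=float)
    apex = np.asarray(apex, dtype=float)
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    d = p - apex
    h = float(np.dot(d, axis))       # height along the cone axis
    if h <= 0.0:                     # behind the apex: snap to the tip
        return apex
    r_vec = d - h * axis             # radial component
    r = np.linalg.norm(r_vec)
    r_max = h * np.tan(half_angle)   # allowed radius at this height
    if r <= r_max:
        return p                     # already inside the cone
    # Slide along the inner wall: keep the height, shrink the radius.
    return apex + h * axis + r_vec * (r_max / r)

# Example: a point outside a 20-degree cone opening along +Y is pulled
# back onto the cone wall at the same height.
clamped = clamp_to_cone([0.05, 0.10, 0.0], [0.0, 0.0, 0.0],
                        [0.0, 1.0, 0.0], np.deg2rad(20.0))
```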
Interaction between the haptic robot and the MR environment occurs at the haptic interface point (HIP), which represents the position of the physical haptic probe of the master haptic robot [42,43]. The force exerted on the haptic stylus is calculated by simulating a spring between the proxy and the HIP: the resistance force applied to the user’s hand is proportional to the distance between the proxy point and the HIP. Figure 5 illustrates how haptic rendering and robot control are implemented using a master-controlled HIP and proxy-controlled robot (MHPR) architecture [44]. Because the proxy point never violates the constraints imposed by the virtual fixtures, the welding robot will not collide with the workpiece even if the operator overcomes the resistance force. This architecture forms a hard prevention fixture, allowing the user to maintain the desired contact tip-to-work distance (CTWD), preventing unwanted collisions, and increasing the precision and stability of tele-welding operations.
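A minimal sketch of this proxy/HIP spring coupling is given below, assuming a horizontal prevention plane over the workpiece and an illustrative stiffness; the actual MHPR implementation and parameters are not specified in the paper.

```python
# Minimal sketch of the proxy/HIP spring coupling in an MHPR-style
# loop, assuming a prevention plane z >= 0 over the workpiece and an
# illustrative stiffness.
import numpy as np

K_SPRING = 200.0  # N/m rendering stiffness (assumed)

def constrain_proxy(hip):
    """Prevention fixture: the proxy never penetrates the plane z = 0."""
    proxy = np.asarray(hip, dtype=float).copy()
    proxy[2] = max(proxy[2], 0.0)
    return proxy

def haptic_step(hip):
    """One rendering cycle: returns (proxy, force on the stylus)."""
    hip = np.asarray(hip, dtype=float)
    proxy = constrain_proxy(hip)
    force = K_SPRING * (proxy - hip)  # spring pulls the hand back out
    return proxy, force

# The robot TCP tracks the proxy, not the HIP, so the commanded pose
# never violates the fixture even if the user pushes through the force.
proxy, force = haptic_step([0.10, 0.20, -0.004])  # 0.8 N restoring force
```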

3.4. Welding Experiments

3.4.1. Experimental Design

Two experiments were conducted with novice and professional welders. Experiment 1 (the onsite welding experiment) investigated the motion difference between the expert and unskilled welders and extracted the expertise and skills of professional welders to optimize the robot-assisted welding platform. The experimental results further served as the “ground truth” for the development of MRVF robot-assisted welding platforms when facilitating novice welders with better weld control by incorporating MR and VF. Experiment 2 (the tele-welding experiment) was carried out to verify the effect of MR-integrated visual and haptic cues on the tele-welding performance of unskilled welders. The study was focused on novice participants to assess improvements and quality relative to professional on-site welding. Experiments were also conducted with professional welders to produce the criteria for the desired welding results.

3.4.2. On-Site Welding Experiment

Sixteen (16) student volunteers and four (4) technical staff members were recruited at the University of Canterbury (Christchurch, New Zealand). All participants were right-handed. The 16 students were unskilled welders who self-rated as having no prior experience, and the 4 technical staff members were very experienced welders who perform welding regularly and train undergraduates with no welding experience. Due to the relatively small number of professional welders, each professional welder was asked to weld multiple times to produce a comparable sample size.
Prior to the welding experiments, the workshop technician gave the novice subjects the same standardized face-to-face instruction on the manual GMAW process, including the use of the welding torch, melting conditions, and the desired molten weld pool status for quality welding results. To comply with workshop safety precautions, the professional welders and experimenters remained close by and supervised the novice welders throughout the experiment. The novice welding results are thus safe, best-case results for this cohort.
The welding experiments were conducted using a single-phase GMAW welding machine. A steel workpiece plate was placed in a horizontal position on the welding table for typical flat welding. The dimensions of the plates were 150 mm × 100 mm × 10 mm. The centerline of the workpiece was set as the intended welding trajectory. Each professional welder performed onsite flat welding four times for GMAW operation expertise and skill extraction. Each novice welder performed once for motion data collection and analysis. The corresponding hand movements and welding results were used to assess absolute and relative welding performance, distinguishing the gap between experts and novices.

3.4.3. Tele-Welding Experiment

A 3 × 1 within-participants experiment was designed to validate whether the designed MRVF scheme helped novice welders control a robotic tele-welding platform to achieve quality welding results, and to assess the user experience. The null hypothesis (H0) of the repeated-measures ANOVA was that the baseline, MRnoVF, and MRVF tele-welding paradigms, all using the VCMM imitation-based motion mapping approach as the basis for teleoperation, are equally effective for novices in welding quality and welder experience, in terms of effectiveness, intuitiveness, and learnability.
In this work, three visualization modules in the tele-welding HRI platform, shown in Figure 4, were tested to validate the efficacy of the proposed MRVF tele-welding paradigm and, in particular, to show the differences among the 2D baseline, MR, and MRVF settings for remote tele-welding. Specifically, the three modules were as follows:
  • Baseline: Perform the tele-welding operation with a non-immersive display using monoscopic streams (Figure 4a). The display screen was a standard 27-inch PC monitor. The 2D visualization was transmitted from the monoscopic camera mounted on the welding robot. The welder manipulated the master haptic robot for the welding robot control without haptic effects. The non-immersive 2D display was used as the baseline condition, as it is commonly used for visual feedback in typical remote-controlled welding systems.
  • MRnoVF: Conduct the tele-welding task with the immersive MR-HMD, with monocular images overlaid on top of the virtual workpiece (Figure 4c). The MRnoVF scheme is a limited version of the proposed MRVF module because it does not provide the participants with haptic cues to support hand maneuvering. The haptic device was deployed to command the UR5 arm for welding but provided no force feedback to the operator.
  • MRVF: MRVF incorporates combined planar prevention and conical guidance haptic cues in the immersive MR workspace (Figure 4d). The user maneuvered the haptic device within the constraints provided by guidance and prevention VFs while welding with the remotely placed robot. The user inspected the real-time pose of the physical welding robot via the scaled virtual replica in the scene.
The participants ran through all three experimental setups, distinguished by increasing levels of visual and haptic HRI modalities. First, each participant read the instructions and completed a pre-task questionnaire recording age, gender, and familiarity with welding, robotics, and MR. The objective of each trial was then explained. Each subject was given the same introduction demonstrating the proposed intuitive tele-welding platform with the visual/haptic feedback modules to be used, ensuring standardized, consistent training for all subjects. After the demonstration, the participants were given 2 min with the MR-enhanced telerobotic welding system to familiarize themselves with the haptic robot, mixed reality imagery, and robotic welding platform.
The MR-HMD and haptic stylus were fitted to each participant at the user site. Once the subject verbally confirmed that the MR welding workspace appeared as intended, they started completing the teleoperated robotic welding tasks as required. Each participant completed the typical horizontal flat welding task under each experimental condition (2D baseline, MRnoVF, MRVF). The condition sequence was randomized to mitigate learning and fatigue effects. After completing each experimental task with one control-feedback condition, the participants filled out a questionnaire about that HRI module, allowing a direct comparison of the three conditions.
The participants were given unbounded time to complete the welding tasks but were instructed to navigate the torch from a given pose to the desired welding starting pose as effectively as they could. The time participants spend aligning the torch tip dominates the overall tele-welding completion time relative to the welding itself; alignment times were therefore measured to evaluate improvements in work efficiency under each condition, as the VFs aid this process in particular. The number of accidental collisions between the torch tip and the metal was recorded as a performance metric. User effort and workload during the teleoperation experiments were evaluated with the NASA task load index (NASA-TLX) at the end of each task, covering mental demand, physical demand, temporal demand, performance, effort, and frustration (score range of 0–100, from the least to the most demanding) [45,46]. User acceptance and system usability, including usefulness and ease of use, were assessed by a questionnaire based on the technology acceptance model (TAM) (score range of 1–7, from worst to best) [47,48].
A one-way within-participants ANOVA with repeated measures was used to analyze the statistical differences among the means of all measurements [49]. The Greenhouse–Geisser correction was applied when assessing differences across the baseline, MRnoVF, and MRVF modules as within-subject variables [51]. When the ANOVA showed a significant main effect of the experimental condition, post-hoc tests with Bonferroni correction indicated which mean values differed significantly [50].
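This analysis pipeline can be reproduced with standard statistics tooling. The sketch below uses the pingouin package on long-format data; the file name and column names are assumptions.

```python
# Sketch: one-way repeated-measures ANOVA with Greenhouse-Geisser
# correction, followed by Bonferroni-corrected post-hoc comparisons,
# using the pingouin package. df is long-format, one row per
# participant x condition; file and column names are assumptions.
import pandas as pd
import pingouin as pg

df = pd.read_csv("tele_welding_results.csv")  # participant, condition, align_time

# correction=True reports the Greenhouse-Geisser corrected p-value
# ('p-GG-corr') alongside the uncorrected one.
aov = pg.rm_anova(data=df, dv="align_time", within="condition",
                  subject="participant", correction=True, detailed=True)
print(aov)

# Bonferroni-adjusted pairwise comparisons between the three modules
# (named pairwise_ttests in older pingouin releases).
post = pg.pairwise_tests(data=df, dv="align_time", within="condition",
                         subject="participant", padjust="bonf")
print(post)
```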

4. Results

4.1. Onsite Welding Results

The experiment identified the differences between the welding motion trajectories of skilled and unskilled welders in order to help unskilled welders achieve better weld control in telerobotic welding operations. Figure 6 shows the welding results of the skilled and unskilled welders: the expert welds were uniform, with a smooth weld surface and even thickness along the weld axis, whereas the unskilled welders’ results were heterogeneous, abrupt, and uneven in thickness and direction. The tracked torch motion data were analyzed to determine the causes of these discrepancies.
Figure 7 compares the motions and velocities of the professional and unskilled welders, showing that the unskilled welders had difficulty stabilizing the torch hand movement in the X and Z directions. Figure 7b shows that both the professional and novice welders could manipulate the torch smoothly in the target direction, Y. However, significantly more aggressive hand velocities were observed for the novices in the X and Z directions. This indicates that the unskilled welders could adjust the travel velocity in the welding direction according to the real-time weld pool status, just as the professional welders did, but their hand motions were far less stable in the perpendicular directions. The hand motion differences were summarized by variance and RMSE, shown in Table 1 and Figure 8, and the overall results match those in Figure 7.
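The Table 1 statistics correspond to simple per-axis summaries of the tracked trajectories. A minimal sketch follows, assuming the nominal weld path is the workpiece centerline so that ideal X and Z deviations are zero (the RMSE reference is an assumption).

```python
# Sketch of the Table 1 summaries: per-axis variance and RMSE of the
# tracked torch-hand deviations, assuming zero ideal deviation in X
# and Z (the RMSE reference is an assumption).
import numpy as np

def axis_stats(deviation):
    """Variance and RMSE of one axis of torch-hand deviation."""
    deviation = np.asarray(deviation, dtype=float)
    variance = np.var(deviation)
    rmse = np.sqrt(np.mean(deviation ** 2))
    return variance, rmse

x_dev = np.array([0.3, -0.5, 0.8, -0.2, 0.6])  # toy X-deviation samples
var_x, rmse_x = axis_stats(x_dev)
print(f"X: variance={var_x:.2f}, RMSE={rmse_x:.2f}")
```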

4.2. Tele-Welding Results

Overall, all subjects completed the tele-welding experiments under the three conditions. In the post-experiment questionnaire, the baseline case was rated the most difficult welding condition by the majority of participants. Most subjects commented that the MRVF fixtures supported their suspended torch hands and reduced fatigue during the robotic welding process. Figure 9 compares sample welding results of the expert and novice welders under the MRVF-integrated visual/haptic scheme, which reduced the undesirable deviations of the unskilled welder. The welding results show that the gap between the unskilled and professional welders was significantly reduced and that the MRVF condition was intuitive enough for experienced welders to quickly transfer their skills from onsite welding to remote tasks.
Statistical analysis results that compared the MR-enhanced visual/haptic tele-welding frameworks for HRI paradigms to the baseline and MRnoVF cases are given in Table 2 and Table 3. Table 2 lists the mean scores and standard deviations of all measurements and ratings for all participants under each condition. Table 3 lists the p-values and statistical significance of the three modules using one-way ANOVA. The results indicate significant differences between the three visual/haptic integration levels in tele-welding tasks.

4.2.1. Objective Measures

The analysis rejected the null hypothesis (H0) that the MRVF visual/haptic HRI approach for intuitive tele-welding, the MRnoVF module, and the 2D baseline module have identical effects on welder performance. In particular, the results show that the MRVF visual/haptic HRI approach significantly outperformed both the 2D baseline and MRnoVF methods on the welding tasks in all pairwise comparisons. Guiding a welding robot using natural welding motion through MR, with hybrid guidance/prevention VFs in the MR workspace, improved remote welding performance and reduced the effort of novice, unskilled welders.
As shown in Figure 10a, a one-way within-subjects ANOVA with repeated measures and a Greenhouse–Geisser correction indicated that the time taken to position the torch to the desired welding pose was statistically significantly different (F(1.866, 27.995) = 47.279, p < 0.001, partial η² = 0.76). The post-hoc test revealed the time to position the torch decreased significantly with the MRVF (M = 18.60) compared to the MRnoVF module (M = 42.32) and the baseline module (M = 46.72). Torch alignment times using the MRVF-integrated visual and haptic tele-welding framework were reduced by 56% and 60% compared to the MRnoVF and baseline cases, respectively, indicating that the typical 2D tele-welding module and the MRnoVF case require additional time to achieve the same capabilities as the proposed MR-integrated visual/haptic HRI module.
Statistical significance was also seen for the average number of collisions between the three HRI modules (F(1.424, 21.353) = 4.091, p < 0.05, partial η² = 0.21). The pairwise comparisons indicated the mean number of collisions during the welding task decreased from the baseline (M = 0.50) to the MRnoVF module (M = 0.25) and the MRVF module (M = 0), as shown in Figure 10b. The statistical results demonstrate that following the cone-shaped guidance fixture provided by the MRVF can reduce welding completion time by minimizing the time used to navigate the torch tip to the initial welding pose. In addition, the prevention VF greatly reduced the likelihood of a collision occurring.

4.2.2. Subjective Measures

The NASA task load index (NASA-TLX) assessed the cognitive workload. On a scale of 0 to 100, with 100 being the most demanding, the participants rated their experience of mental demand, physical demand, temporal demand, performance, effort, and frustration after completing each task. Figure 11 shows all average NASA-TLX scores were lower for the MR-integrated visual and haptic HRI module (MRVF) than for the baseline and MRnoVF cases. The MRVF visual/haptic mapping module significantly reduced the participants’ mental demand, physical demand, and effort. In particular, the mental workload decreased from the baseline (M = 80.31) to the MRnoVF module (M = 75.31) and the MRVF module (M = 45.00), and the physical workload decreased from the baseline (M = 77.94) to the MRnoVF module (M = 74.44) and the MRVF module (M = 32.25). In addition, relative to the baseline, the average NASA-TLX effort score decreased significantly, by 56% with the MRVF and by 19% with the MRnoVF (F(1.590, 23.852) = 27.782, p < 0.001, partial η² = 0.65), when the visual and haptic feedback were incorporated.
The technology acceptance model (TAM) evaluated the system functionality, usability, and the users’ acceptance and perception of the three tele-welding modules. Each scale consisted of three items measured on a seven-point scale (1 = strongly disagree; 7 = strongly agree). The MRVF visual/haptic HRI method (M = 4.19) was rated more useful than the MRnoVF module (M = 2.69) and the baseline case (M = 2.25), as shown in Figure 12. The TAM results indicate an overall significant difference in user acceptance among the three HRI modules. The participants found the MR-integrated visual/haptic tele-welding framework (MRVF) (M = 5.50) significantly easier to use than the 2D baseline (M = 2.63) and marginally easier to use than the MRnoVF module (M = 3.13). The subjective measures show the MRVF vision/force mapping approach for tele-welding outperformed the MRnoVF and 2D baseline modules in task workload and user perception.

4.3. Limitations

The overall system experiment was conducted at short range; thus, time lags were very small. Other studies have indicated that lag between user motion and robot motion causes increasing errors [52,53]. Ongoing work using Markov models and other forecasting methods can address this issue in the future, building on the results of this proof-of-concept system.
The MRVF system presented used relatively low-cost and readily available components; faster or more precise hardware could provide greater accuracy, potentially reducing the improvements seen here. One purpose of this study was to use commercial off-the-shelf products to demonstrate the potential of a relatively low-cost system to achieve tele-welding. Hence, while performance can be improved with better components, doing so raises the cost, and economic feasibility is application-dependent.
Subject numbers were limited in this study, and future work should replicate this effort with a larger cohort if feasible. However, the results across the relatively small number of unskilled welders were consistent. Thus, while greater numbers would more accurately quantify the gains obtainable with an MRVF approach, the consistently large differences seen in both objective and subjective assessments indicate that the results should be replicable. This study aimed to enable inexperienced welders to perform quality remote welding tasks; unskilled welders do not have frequent contact with a physical welding torch and do not rely on its weight. In future work, it would be feasible to replace the handheld stylus on the haptic device with an actual welding torch, or a 3D-printed torch model of the same weight, to improve the professional welder’s experience.

5. Conclusions

This research focused on immersive and intuitive human–robot interaction with visual and haptic cues, specifically the MRVF framework for tele-welding scenarios. The MRVF visual/haptic mapping framework gave welders an intuitive way to control the movement of a complex robotic welding system, in a manner similar to conventional handheld manual welding, using a single-point grounded haptic robot. The subjective assessments indicated the users felt they could access the physical welding scenario from the MR-based operator space. The MRVF allowed the unskilled, novice welders to rest their suspended torch hands against the VF surface during the robotic welding process, stabilizing the torch hand movements in the X and Z directions. With integrated visual and haptic perception, the MRVF tele-welding scheme enabled non-professional welders to achieve remote welding results comparable to those of professional welders welding remotely or onsite, reducing the dependence of remote welding on welder experience and specialized skills. The prevention haptic structure enabled in the MRVF module using VFs successfully eliminated collisions that could damage the robot and/or workpiece. The proposed MRVF visual/haptic framework also enabled professional welders to retain a professional level of operation in the tele-welding process, indicating its intuitive ease of use. Overall, this approach improved the task performance of unskilled, novice welders, increased work efficiency, was intuitive and easy to use, and prevented unwanted collisions.

Author Contributions

Conceptualization, Y.-P.S.; methodology, Y.-P.S. and G.C.; validation, Y.-P.S., X.-Q.C. and C.P.; writing—original draft preparation, Y.-P.S.; writing—review and editing, G.C. and T.Z.; supervision, G.C.; project administration, G.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the Human Interface Technology Laboratory New Zealand (HIT Lab NZ) for use of their haptic devices to carry out the experimental work detailed in this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Fu, B.; Seidelman, W.; Liu, Y.; Kent, T.; Carswell, M.; Zhang, Y.; Yang, R. Towards Virtualized Welding: Visualization and Monitoring of Remote Welding. In Proceedings of the 2014 IEEE International Conference on Multimedia and Expo (ICME), Chengdu, China, 14–18 July 2014; pp. 1–6.
2. Baklouti, S.; Gallot, G.; Viaud, J.; Subrin, K. On the Improvement of Ros-Based Control for Teleoperated Yaskawa Robots. Appl. Sci. 2021, 11, 7190.
3. Wang, B.; Hu, S.J.; Sun, L.; Freiheit, T. Intelligent Welding System Technologies: State-of-the-Art Review and Perspectives. J. Manuf. Syst. 2020, 56, 373–391.
4. Liu, Y.K. Toward Intelligent Welding Robots: Virtualized Welding Based Learning of Human Welder Behaviors. Weld. World 2016, 60, 719–729.
5. Solanes, J.E.; Muñoz, A.; Gracia, L.; Martí, A.; Girbés-Juan, V.; Tornero, J. Teleoperation of Industrial Robot Manipulators Based on Augmented Reality. Int. J. Adv. Manuf. Technol. 2020, 111.
6. Liu, Y.K.; Zhang, Y.M. Toward Welding Robot with Human Knowledge: A Remotely-Controlled Approach. IEEE Trans. Autom. Sci. Eng. 2015, 12, 769–774.
7. Ming, H.; Huat, Y.S.; Lin, W.; Hui Bin, Z. On Teleoperation of an Arc Welding Robotic System. In Proceedings of the IEEE International Conference on Robotics and Automation, Minneapolis, MN, USA, 22–28 April 1996; Volume 2, pp. 1275–1280.
8. Park, J.H.; Kim, M.C.; Böhi, R.; Gommel, S.A.; Kim, E.S.; Choi, E.; Park, J.O.; Kim, C.S. A Portable Intuitive Haptic Device on a Desk for User-Friendly Teleoperation of a Cable-Driven Parallel Robot. Appl. Sci. 2021, 11, 3823.
9. Fine, T.; Zaidner, G.; Shapiro, A. Grasping Assisting Algorithm in Tele-Operated Robotic Gripper. Appl. Sci. 2021, 11, 2640.
10. Ding, D.; Shen, C.; Pan, Z.; Cuiuri, D.; Li, H.; Larkin, N.; van Duin, S. Towards an Automated Robotic Arc-Welding-Based Additive Manufacturing System from CAD to Finished Part. CAD Comput. Aided Des. 2016, 73, 66–75.
11. Dinham, M.; Fang, G. Autonomous Weld Seam Identification and Localisation Using Eye-in-Hand Stereo Vision for Robotic Arc Welding. Robot. Comput.-Integr. Manuf. 2013, 29, 288–301.
12. van Essen, J.; van der Jagt, M.; Troll, N.; Wanders, M.; Erden, M.S.; van Beek, T.; Tomiyama, T. Identifying Welding Skills for Robot Assistance. In Proceedings of the 2008 IEEE/ASME International Conference on Mechatronics and Embedded Systems and Applications, MESA 2008, Beijing, China, 12–15 October 2008; pp. 437–442.
13. Erden, M.S.; Billard, A. End-Point Impedance Measurements at Human Hand during Interactive Manual Welding with Robot. In Proceedings of the IEEE International Conference on Robotics and Automation, Hong Kong, China, 31 May–5 June 2014; pp. 126–133.
14. Erden, M.S.; Billard, A. Hand Impedance Measurements during Interactive Manual Welding with a Robot. IEEE Trans. Robot. 2015, 31, 168–179.
15. Erden, M.S.; Billard, A. End-Point Impedance Measurements across Dominant and Nondominant Hands and Robotic Assistance with Directional Damping. IEEE Trans. Cybern. 2015, 45, 1146–1157.
16. Liu, Y.K.; Shao, Z.; Zhang, Y.M. Learning Human Welder Movement in Pipe GTAW: A Virtualized Welding Approach. Weld. J. 2014, 93, 388s–398s.
17. Erden, M.S.; Tomiyama, T. Identifying Welding Skills for Training and Assistance with Robot. Sci. Technol. Weld. Join. 2009, 14, 523–532.
18. Liu, Y.K.; Zhang, Y.M. Control of Human Arm Movement in Machine-Human Cooperative Welding Process. Control Eng. Pract. 2014, 32, 161–171.
19. Wang, Y.; Chen, Y.; Nan, Z.; Hu, Y. Study on Welder Training by Means of Haptic Guidance and Virtual Reality for Arc Welding. In Proceedings of the 2006 IEEE International Conference on Robotics and Biomimetics, Kunming, China, 17–20 December 2006; pp. 954–958.
20. Ciszak, O.; Juszkiewicz, J.; Suszyński, M. Programming of Industrial Robots Using the Recognition of Geometric Signs in Flexible Welding Process. Symmetry 2020, 12, 1429.
21. Yu, H.; Qin, J.; Zhao, K. Innovation in Interactive Design of Tele-Robotic Welding in the Trend of Interaction Change. Des. Eng. 2020, 322–330.
22. Wang, Q.; Jiao, W.; Yu, R.; Johnson, M.T.; Zhang, Y.M. Virtual Reality Robot-Assisted Welding Based on Human Intention Recognition. IEEE Trans. Autom. Sci. Eng. 2020, 17, 799–808.
23. Wells, T.; Miller, G. The Effect of Virtual Reality Technology on Welding Skill Performance. J. Agric. Educ. 2020, 61, 152–171.
24. Byrd, A.P.; Stone, R.T.; Anderson, R.G.; Woltjer, K. The Use of Virtual Welding Simulators to Evaluate Experienced Welders. Weld. J. 2015, 94, 389–395.
25. Liu, Y.; Zhang, Y. Human Welder 3-D Hand Movement Learning in Virtualized GTAW: Theory and Experiments. In Transactions on Intelligent Welding Manufacturing; Springer: Singapore, 2019; pp. 3–25.
26. Liu, Y.K.; Zhang, Y.M. Supervised Learning of Human Welder Behaviors for Intelligent Robotic Welding. IEEE Trans. Autom. Sci. Eng. 2017, 14, 1532–1541.
27. Wang, Q.; Cheng, Y.; Jiao, W.; Johnson, M.T.; Zhang, Y.M. Virtual Reality Human-Robot Collaborative Welding: A Case Study of Weaving Gas Tungsten Arc Welding. J. Manuf. Process. 2019, 48, 210–217.
28. Wang, Q.; Jiao, W.; Yu, R.; Johnson, M.T.; Zhang, Y. Modeling of Human Welders’ Operations in Virtual Reality Human–Robot Interaction. IEEE Robot. Autom. Lett. 2019, 4, 2958–2964.
29. Papadopoulos, T.; Evangelidis, K.; Kaskalis, T.H.; Evangelidis, G.; Sylaiou, S. Interactions in Augmented and Mixed Reality: An Overview. Appl. Sci. 2021, 11, 8752.
30. Ni, D.; Yew, A.W.W.; Ong, S.K.; Nee, A.Y.C. Haptic and Visual Augmented Reality Interface for Programming Welding Robots. Adv. Manuf. 2017, 5, 191–198.
31. Selvaggio, M.; Notomista, G.; Chen, F.; Gao, B.; Trapani, F.; Caldwell, D. Enhancing Bilateral Teleoperation Using Camera-Based Online Virtual Fixtures Generation. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Daejeon, Korea, 9–14 October 2016; pp. 1483–1488.
32. Bischof, B.; Gluck, T.; Bock, M.; Kugi, A. A Path/Surface Following Control Approach to Generate Virtual Fixtures. IEEE Trans. Robot. 2018, 34, 1577–1592.
33. Vitrani, M.A.; Poquet, C.; Morel, G. Applying Virtual Fixtures to the Distal End of a Minimally Invasive Surgery Instrument. IEEE Trans. Robot. 2017, 33, 114–123.
34. He, Y.; Hu, Y.; Zhang, P.; Zhao, B.; Qi, X.; Zhang, J. Human–Robot Cooperative Control Based on Virtual Fixture in Robot-Assisted Endoscopic Sinus Surgery. Appl. Sci. 2019, 9, 1659.
35. Krupke, D.; Zhang, J.; Steinicke, F. Virtual Fixtures in VR—Perceptual Overlays for Assisted Teleoperation, Teleprogramming and Learning. In Proceedings of the ICAT-EGVE 2018—International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, Limassol, Cyprus, 7–9 November 2018; pp. 195–201.
36. Moccia, R.; Iacono, C.; Siciliano, B.; Ficuciello, F. Vision-Based Dynamic Virtual Fixtures for Tools Collision Avoidance in Robotic Surgery. IEEE Robot. Autom. Lett. 2020, 5, 1650–1655.
37. Druta, R.; Druta, C.; Negirla, P.; Silea, I. A Review on Methods and Systems for Remote Collaboration. Appl. Sci. 2021, 11, 10035.
38. Rokhsaritalemi, S.; Sadeghi-Niaraki, A.; Choi, S.M. A Review on Mixed Reality: Current Trends, Challenges and Prospects. Appl. Sci. 2020, 10, 636.
39. Aygün, M.M.; Ögüt, Y.Ç.; Baysal, H.; Taşcioglu, Y. Visuo-Haptic Mixed Reality Simulation Using Unbound Handheld Tools. Appl. Sci. 2020, 10, 5344.
40. Liu, Y.K.; Zhang, Y.M. Fusing Machine Algorithm with Welder Intelligence for Adaptive Welding Robots. J. Manuf. Process. 2017, 27, 18–25.
41. Tu, X.; Autiosalo, J.; Jadid, A.; Tammi, K.; Klinker, G. A Mixed Reality Interface for a Digital Twin Based Crane. Appl. Sci. 2021, 11, 9480.
42. Saeidi, H.; Wagner, J.R.; Wang, Y. A Mixed-Initiative Haptic Teleoperation Strategy for Mobile Robotic Systems Based on Bidirectional Computational Trust Analysis. IEEE Trans. Robot. 2017, 33, 1500–1507.
43. Vo, C.P.; To, X.D.; Ahn, K.K. A Novel Force Sensorless Reflecting Control for Bilateral Haptic Teleoperation System. IEEE Access 2020, 8, 96515–96527.
44. Valenzuela-Urrutia, D.; Muñoz-Riffo, R.; Ruiz-del-Solar, J. Virtual Reality-Based Time-Delayed Haptic Teleoperation Using Point Cloud Data. J. Intell. Robot. Syst. Theory Appl. 2019, 96, 387–400.
45. de Pace, F.; Gorjup, G.; Bai, H.; Sanna, A.; Liarokapis, M.; Billinghurst, M. Assessing the Suitability and Effectiveness of Mixed Reality Interfaces for Accurate Robot Teleoperation. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology, Virtual Event, Canada, 1–4 November 2020; pp. 7–9.
46. Lima, A.; Rocha, F.; Torre, M.P.; Azpúrua, H.; Freitas, G. Teleoperation of an ABB IRB 120 Robotic Manipulator and BarrettHand BH8-282 Using a Geomagic Touch X Haptic Device and ROS. In Proceedings of the 15th Latin American Robotics Symposium, 6th Brazilian Robotics Symposium and 9th Workshop on Robotics in Education, LARS/SBR/WRE 2018, Joao Pessoa, Brazil, 6–10 November 2018; pp. 194–200.
47. Rakita, D.; Mutlu, B.; Gleicher, M. A Motion Retargeting Method for Effective Mimicry-Based Teleoperation of Robot Arms. In Proceedings of the 2017 12th ACM/IEEE International Conference on Human-Robot Interaction, HRI 2017, Vienna, Austria, 6–9 March 2017; pp. 361–370.
48. Wang, Z.; Fey, A.M. Human-Centric Predictive Model of Task Difficulty for Human-in-the-Loop Control Tasks. PLoS ONE 2018, 13, e0195053.
49. Tavakkoli, A.; Wilson, B.; Bounds, M. An Immersive Virtual Environment for Teleoperation of Remote Robotic Agents for Everyday Applications in Prohibitive Environments. In Proceedings of the 2020 IEEE Conference on Virtual Reality and 3D User Interfaces, VRW 2020, Atlanta, GA, USA, 22–26 March 2020; pp. 371–375.
50. Dybvik, H.; Løland, M.; Gerstenberg, A.; Slåttsveen, K.B.; Steinert, M. A Low-Cost Predictive Display for Teleoperation: Investigating Effects on Human Performance and Workload. Int. J. Hum. Comput. Stud. 2021, 145, 102536–102554.
51. Triantafyllidis, E.; McGreavy, C.; Gu, J.; Li, Z. Study of Multimodal Interfaces and the Improvements on Teleoperation. IEEE Access 2020, 8, 78213–78227.
52. Chen, Z.; Huang, F.; Sun, W.; Song, W. An Improved Wave-Variable Based Four-Channel Control Design in Bilateral Teleoperation System for Time-Delay Compensation. IEEE Access 2018, 6, 12848–12857.
53. Guo, J.; Liu, C.; Poignet, P. A Scaled Bilateral Teleoperation System for Robotic-Assisted Surgery with Time Delay. J. Intell. Robot. Syst. 2019, 95, 165–192.
Figure 1. The hardware components of the GMAW motion tracking platform. (a) Welding motion-tracking platform, including welding shelter, welding torch, and the attached motion tracker; (b) an unskilled welder performing manual welding for motion data collection and analysis; (c) a professional welder performing on-site GMAW operation for expertise and skill extraction.
Figure 2. The communication scheme of the MRVF tele-welding hardware apparatus.
Figure 3. The imitation-based and robot-assisted teleoperation of the GMAW process. (a) Telerobotic welding platform including the UR5 manipulator, welding torch, and the attached vision system; (b) a novice welder conducting immersive and intuitive robot-assisted welding with haptic guidance; (c) a professional welder performing remote-controlled robotic GMAW operation for comparison.
Figure 4. The visualization of tele-welding interaction modules used in the tele-welding experiments. (a) Typical 2D visual feedback for remote-controlled robotic welding where the user uses a monitor to observe the welding process, without the immersive HMD usage; (b) the virtual replica of the physical welding workpiece; (c) tele-welding MR platform without haptic effects including welding virtual workpiece, overlaid RGB stream, virtual welding torch, and the scaled digital twin of UR5; (d) the MRVF module involving hybrid guidance and prevention VFs.
Figure 5. Block diagram of the master-controlled HIP and proxy-controlled robot architecture.
Figure 6. Manual welding results of a professional (above) and unskilled welder (below).
Figure 7. Sample welding motion of skilled and novice welders. Direct comparison of (a) back and forth movement perpendicular to the direction of welding movement (X); (b) movement in the direction of welding movement (Y); (c) up and down movement perpendicular to the direction of welding movement (Z). (df) show the associated (X, Y, Z) velocities.
Figure 8. Boxplots of quantitative measures in the X and Z directions for variance (a) and RMSE (b) of the users’ torch hand motions for both professional and unskilled welders in the onsite welding experiment. Circles and asterisks (*) mark outliers that fall outside the range indicated by the boxes; the numbers next to them identify the outlying observations in the dataset.
Figure 9. Results of the MR-integrated visual/haptic tele-welding system from a professional welder (above) and unskilled welder (below).
Figure 10. (a) The amount of time participants spent aligning the torch tip across all conditions; the alignment time is a major component of overall task completion time. (b) The average number of collisions under each of the three conditions (baseline, MRnoVF, and MRVF) in the flat-position tele-welding tasks; the prevention effect in MRVF eliminated all unintentional contacts.
Figure 11. Subjective NASA-TLX ratings of task workload across all conditions in the tele-welding tasks. Circles and asterisks (*) mark outliers that fall outside the range indicated by the boxes; the numbers next to them identify the outlying observations in the dataset.
Figure 12. Subjective scores on system functionality, usability, and user acceptance and perception of the three tele-welding modules. Higher scores represent higher preferences in all cases. The MRVF design demonstrated improvements in perceived usefulness and perceived ease of use. Circles mark outliers that fall outside the range indicated by the boxes; the numbers next to them identify the outlying observations in the dataset.
Table 1. Mean statistic results for objective movement measures in each direction.

Measured Group     X-Direction          Z-Direction          Y-Direction
                   Variance    RMSE     Variance    RMSE     Mean Velocity
Skilled Welder     0.26        0.58     0.57        0.90     3.07
Novice Welder      0.96        1.17     1.40        1.27     3.17
Table 2. Mean values and standard deviations of all objective and subjective measurements.

                   Baseline            MRnoVF              MRVF
Measure            Mean      Std. Dev  Mean      Std. Dev  Mean      Std. Dev
Time               46.72     8.75      42.32     11.40     18.60     5.37
Collisions         0.50      0.73      0.25      0.45      0.00      0.00
Mental Demand      80.31     9.41      75.31     10.08     45.00     15.71
Physical Demand    77.94     16.02     74.44     10.58     32.25     13.87
Temporal Demand    78.00     15.56     73.44     13.15     66.81     11.36
Performance        64.75     24.33     78.50     11.80     43.88     18.06
Effort             78.94     14.36     63.56     19.43     34.38     10.78
Frustration        92.81     5.47      69.94     12.92     38.31     11.46
Average Workload   78.79     5.80      72.53     5.13      43.44     4.86
Usefulness         2.25      0.78      2.69      1.08      4.19      1.42
Ease of Use        2.63      1.03      3.13      0.96      5.50      0.82
TAM                2.44      0.48      2.91      0.78      4.85      0.77
Table 3. Statistical p-values for all quantitative metrics, where B = baseline and MR = MRnoVF.

                                                                       Post-Hoc Tests
Measure            Partial Eta Squared  F                              p        MRVF-MR  MRVF-B   MR-B
Time               0.76                 F(1.866, 27.995) = 47.279      <0.001   <0.001   <0.001   0.478
Collisions         0.21                 F(1.424, 21.353) = 4.091       0.043    0.123    0.046    0.783
Mental Demand      0.75                 F(1.905, 28.580) = 45.449      <0.001   <0.001   <0.001   0.584
Physical Demand    0.79                 F(1.972, 29.580) = 57.679      <0.001   <0.001   <0.001   1.000
Temporal Demand    0.15                 F(1.486, 22.292) = 2.594       0.109    0.222    0.118    1.000
Performance        0.49                 F(1.505, 22.581) = 14.660      <0.001   <0.001   0.061    0.046
Effort             0.65                 F(1.590, 23.852) = 27.782      <0.001   0.001    <0.001   0.156
Frustration        0.88                 F(1.703, 25.552) = 112.067     <0.001   <0.001   <0.001   <0.001
Overall Workload   0.94                 F(1.703, 25.552) = 228.777     <0.001   <0.001   <0.001   0.023
Usefulness         0.44                 F(1.683, 25.241) = 11.719      <0.001   0.017    0.002    0.559
Ease of Use        0.72                 F(1.641, 24.617) = 38.829      <0.001   <0.001   <0.001   0.684
TAM                0.78                 F(1.916, 28.742) = 54.141      <0.001   <0.001   <0.001   0.180