Article

Closed-Loop Robotic Arm Manipulation Based on Mixed Reality

Laboratory for Manufacturing Systems and Automation, Department of Mechanical and Aeronautics Engineering, University of Patras, 26504 Rio Patras, Greece
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(6), 2972; https://doi.org/10.3390/app12062972
Submission received: 18 February 2022 / Revised: 10 March 2022 / Accepted: 11 March 2022 / Published: 14 March 2022
(This article belongs to the Special Issue Advanced Robotics Applications in Industry)

Abstract

Robotic manipulators have become part of manufacturing systems in recent decades. However, in the realm of Industry 4.0, a new type of manufacturing cell has been introduced—the so-called collaborative manufacturing cell. In such collaborative environments, communication between a human operator and robotic manipulators must be flawless, so that smooth collaboration and, above all, human safety are ensured at all times. Therefore, engineers have focused on the development of suitable human–robot interfaces (HRI) in order to tackle this issue. This research work proposes a closed-loop framework for the human–robot interface based on the utilization of digital technologies, such as Mixed Reality (MR). Concretely, the framework can be realized as a methodology for the remote and safe manipulation of a robotic arm in near real-time, while, simultaneously, safety zones are displayed in the field of view of the shop-floor technician. The method is based on the creation of a Digital Twin of the robotic arm and the setup of a suitable communication framework for continuous and seamless communication between the user interface, the physical robot, and the Digital Twin. The development of the method is based on the utilization of the Robot Operating System (ROS) for the modelling of the Digital Twin, a Cloud database for data handling, and MR for the Human–Machine Interface (HMI). The developed MR application is tested in a laboratory-based machine shop, incorporating collaborative cells.

1. Introduction

Modern manufacturing systems working under the Fourth Industrial Revolution (or Industry 4.0) paradigm are constantly evolving. Industry 4.0 has introduced a wide variety of technologies and techniques for improving both productivity and working conditions in modern manufacturing plants. However, it is of great importance to keep in mind that human resources and, more specifically, shop-floor technicians must remain at the center of engineers’ attention [1]. With recent technological advances, the integration of robotic manipulators has accelerated over the last decades, and the path to the factories of the future leads to the design and development of collaborative environments [2]. Therefore, what is needed is the provision of suitable tools that will enable continuous and flawless communication between human operators and machines [3,4]. Additionally, Human–Robot Interaction (HRI) poses new challenges to the manufacturing landscape, such as safety, autonomy, and social acceptance, as the demand for collaborative robots, or cobots [5], to interact with, collaborate with, and assist human operators grows. Smart manufacturing technologies [6] are gradually displacing jobs that are repetitive, monotonous, and low-skilled. Artificial Intelligence (AI)-based systems have great potential for automating jobs that previously required human intelligence for adaptive decision making [7]. In collaborative manufacturing cells, safety is also a major issue. At the same time, robotics and automation are creating new and more skill-demanding job opportunities. This shift has led to the reshaping of manufacturing to make it smarter and safer, not only in terms of production processes, but also in terms of human labor, with new skills and competencies required [8]. Human–Robot Collaboration (HRC) [9] also poses significant challenges, especially in terms of safety [10]. The ability to predict human actions [10,11] and the capability to plan and continuously replan safe robot trajectories based on predicted/observed human actions [11] have been identified as two major challenges in the literature.
As mentioned in the previous paragraph, the current era is characterized by immense technological advances. Concretely, driven by the rapid development of Information and Communication Technologies (ICT), both in terms of hardware and software, several digital technologies, such as Extended Reality (XR), have become increasingly popular in the industrial world [12]. Extended Reality (XR) is an umbrella term, including Augmented Reality (AR), Mixed Reality (MR), and Virtual Reality (VR) [13]. Within the scope of the current research work, special attention is given to MR. MR is similar to AR, since it partially immerses the users, i.e., it involves the registration of digital content in their field of view (FoV). However, the main difference from AR is that MR enables the interaction of the user with the digital content, i.e., the holograms. The capabilities of the above-mentioned technologies are leveraged by the fact that Artificial Intelligence (AI) technologies have become mainstream [14]. As a result of the above-mentioned transition and progress, HRC has emerged, allowing humans and robots to collaborate to achieve common goals. Consequently, new HRI methods encourage collaboration, especially in more complex scenarios.
Safety is a critical consideration in the design and implementation of any new technology that aims to work in close collaboration with operators during the age of industrialization and automation [15], particularly as the human-centric industrial revolution, or Industry 5.0, approaches [16]. In the research work of Gualtieri et al. [17], safety risks in HRC are identified mainly in the field of collaborative assembly stations, in which non-intentional contact between humans and robots is a primary concern. Similarly, in [18] the authors have investigated the available literature in an attempt to highlight challenges in HRC implementation. Among the key findings of this research work is that safety hazards also encompass ergonomics issues. Moreover, collision avoidance and mitigation are among the key topics providing fertile ground for further research. Interestingly, safety assurance in collaborative environments, according to Bi et al. [19], requires (i) the integration of recent Industry 4.0 technologies in order to adequately acquire data and (ii) the development of suitable algorithmic approaches for processing these data and constantly adapting system parameters in order to ensure that humans and robots can safely co-exist. According to the OECD (Organization for Economic Co-operation and Development), 14% of jobs in OECD countries are at risk of automation [20], owing to a lack of meaning, increased repetitiveness, or a high risk of injury [21].
The challenges identified above, together with the limitations of the key relevant publications investigated in the field of HRC, are summarized in Table 1, below.
Table 1. Identified limitations and challenges in the field of HRC.
A/A | Ref. | Challenges | Limitations
1 | [5]
  • Limitation of a robot’s interface design
  • Bottleneck caused by a combination of multimodal control commands for intuitive control of the robot
  • Typical challenge of Human–Robot Collaboration (HRC) assembly: HRC assembly and adaptive responses to commands and correct triggering of the relevant controls
  • Proposed framework is a proof of concept
2 | [9]
  • Confusions exist surrounding the relationships between robots and humans: coexistence, interaction, cooperation, and collaboration
  • Lack of standards
  • Lack of safety solutions
  • Low acceptance of the human–robot combination
3 | [11]
  • Ability to understand and imitate human behavior will be a key skill to allow collaborative robots to operate and become useful in the human environment
  • Limited investigation of behavior of motion; primitive learning in the presence of multiple demonstrators
  • No investigation of the higher order model for improving the segmentation results and for predicting human motion
  • Application and validation of the proposed approach with other types of human motion, including task and goal-based motion (e.g., interaction with the environment)
4 | [15]
  • Safety is not always explicitly mentioned as an application in research works
  • Safety in Human–Robot Interaction remains an open problem
  • Novel, robust, and generalizable safety methods are required in order to enable safe incorporation of robots into homes, offices, factories, or any other setting
  • Perception view: Active vision mechanisms should be incorporated into robots
  • Cognition view: Incorporation of Machine Learning techniques into the action robotic skills
  • Incorporation of probabilistic learning into task planning and decision making
5 | [17]
  • Safety: unwanted and unexpected contacts between human and robotic systems may cause injuries and therefore limit the potential for collaboration
  • There is a lack of simple and practical tools for helping system designers to overcome such limiting conditions
  • The proposed validation of the Collaborative Assembly System process is based on a virtual model
  • The validation was based on only one test case which involved three manufacturing engineers
  • This work did not analyze the hierarchical relationships between the various guidelines, as well as possible inconsistencies in their implementation
6 | [18]
  • Occupational health and safety criteria are of crucial importance in the implementation of collaborative robotics
  • Collaborative robotics could be helpful for small and medium-sized enterprises (SMEs). As such, future reviews could include the term ‘SMEs’ as a search keyword
  • Contact avoidance research should be improved
  • Contact detection and mitigation should be improved
  • Physical Ergonomics
  • Cognitive Ergonomics
7 | [19]
  • Acquiring, processing, and fusing diversified data for risk classification
  • Update the control to avoid any interference in a real-time mode
  • Developing technologies to improve HMI performance
  • Reducing the overall cost of safety assurance features
  • Develop standards in expressing the safety features of a functional module
  • Define the technical implementations to enforce corresponding guidelines and regulations
  • Classify and specify the methods of recognition for hazard scenarios
System Programming and Control:
  • Intuitive programming
  • Task-driven programming
  • Skill-based programming
Risk management:
  • Evaluation of biomechanical loads
  • Real-time estimation of stopping distances
Sensing Systems:
  • New instrumentations and algorithms for effective sensing, processing, and fusing of diverse data
  • Machine learning for high-level complexity and uncertainty
8 | [22]
  • Physical human–robot contacts are not allowed during the actual polishing task
  • An innovative coexistence modality and human–robot communication with gestural commands were demonstrated for the collaborative phases of setup operations of the cell/tools and of quality assessment of the workpiece
  • The investigated case study-cell is still a research project and not all safety functions have achieved the performance requirements of industrial robot safety standards
Therefore, in this paper, the design and development of a framework for near real-time remote navigation of robotic arms is presented. Furthermore, the proposed framework is facilitated with the integration of MR. Additional functionalities, such as safety zones, robot reachability zones, etc., are implemented in the framework in an attempt to enhance the user experience.
The rest of the paper is structured as follows. In Section 2, the most pertinent literature on AR-based robotic manipulation interfaces is reviewed. In Section 3, the proposed system architecture and its modules are discussed in detail. Then, in Section 4, the implementation steps are presented. Section 5 presents the case study, and Section 6 summarizes the results and the discussion. Finally, conclusions and future steps are set out in Section 7.

2. State of the Art

Robotics, automation, and AI have gained a rapidly growing position in the workplace, faster than many organizations had ever anticipated [23]. Although companies are gradually using these technologies in order to automate internal processes, true pioneers are fundamentally rethinking the work environment to optimize the value of both humans and machines by creating new opportunities to coordinate work more efficiently and to redefine the skills and professions of human staff [24]. As more and more organizations rush to adopt these technologies, the market for AI tools and robotics is booming. Leading companies, such as Microsoft, IBM, Facebook, and other technology giants, are investing heavily in this field. CEOs are becoming increasingly aware that these systems are most successful when they complement, rather than replace, human operators [25]. Research suggests that while automation is capable of improving scale, speed, and quality, it does not do away with jobs. It might actually do just the opposite [2].
Human–Robot Collaboration (HRC) aims at creating work environments in the manufacturing context where human operators can work side by side in close proximity with robots. In such configurations, the main goal is to achieve efficient and high-quality manufacturing processes. In the literature, several recent works have demonstrated such implementations of HRC systems in real industrial manufacturing tasks, taking into consideration both human safety and communication. The authors of [26] proposed an AR-based wearable interface integrated into an off-the-shelf safety system. This wearable AR assists the assembly line operator by providing visual guidance on how to execute the current task in the form of textual details or parts representation in a 3D model. This research work has been applied in an automotive assembly task. Next, the authors of [27] used a standardized control and communication architecture in conjunction with fused sensor data in order to ensure safe robot control. Apart from the safety aspect, one of the key challenges of industrial HRC is the interaction and coordination between human and robot resources, as presented in [28]. More similar to this research work, a context-aware MR approach was used in car door assembly and tested against two standard methods, i.e., printed and screen display instructions [29]. In addition, the authors of [30] focused on enabling human operators to communicate with mobile dual arm robots, namely, Mobile Robot Platforms (MRPs), via an AR-based software suite. The novelty of the proposed system lies in the end-to-end (E2E) integration of the human-side AR-based interface framework with mobile robot controllers, exploiting the Digital Twin capabilities of the production entities [31].
Moving on, a recent study presenting the problems of Human–Robot Interaction (HRI) [21] suggests that AR interfaces can enhance the process of interaction for manipulating robots. Moreover, MR has been used in order to embed the user in a virtual environment more deeply than AR. Furthermore, a similar study in [32] proposed an intuitive robot programming method based on MR. A methodology to plan the geometric path, including orientation, has been developed. Shared autonomy systems enhance the ability of people to carry out everyday life tasks using robotic manipulators. The authors of [33] describe a robotic cell that manipulates, assembles, and packages geometrically complex products using cognitive control and actuation systems. Individual mechatronic components, such as a 6 DoF (Degrees of Freedom) gripper and a flexible assembly mechanism, were designed by decomposing the actual assembly and handling tasks into functional components. Additionally, a problem for users who cannot change their point of view has been addressed in [33] with the introduction of the InvisibleRobot, which is a diminished reality-based approach that overlays the background information onto the robot in the FoV of the user, through an Optical See-Through Head-Mounted Display. The authors of [34] developed an AR system allowing for safer online programming of industrial robots. Lastly, [35] presented the results of a project to develop an AR-based HRC system to improve safety when working with robots, the solution consisting of volumes of safe working zones and audio and visual instructions to indicate danger. Furthermore, the levels of collaboration between an operator and a robot are classified in [36] as (1) Coexistence, (2) Cooperation, and (3) Collaboration. As a result, as defined in ISO/TS 15066 [37], different levels of collaboration necessitate different safety actions and measures.
Therefore, following the literature investigation, there is only a limited number of similar studies proposing a method for near real-time wireless robot manipulation with MR capabilities. Additional user-experience-enhancing features, such as safety zones, robot reachability zones, and so on, are also supported.

3. Proposed System Architecture

The proposed method is based on the design and development of two main software modules. The first module is responsible for the 3D representation of the robotic manipulator surroundings. The second module is responsible for the simulation of a 3D functional model of the robotic manipulator as well as the calculation of the kinematics. Essentially, the framework consists of a closed-loop control system for the robot; as the user inputs the desired position, the Digital Twin of the robot calculates the positions/motions of the robot joints. Then the resultant motion is sent to the robot’s controller to be executed as well as to the MR application in order to generate the visualization. Finally, the robotic controller sends feedback to the backend of the application in order to confirm that the motion has been successfully executed, and thus the user can proceed with the input of a new motion. The above-mentioned process is executed recursively until the user terminates it. In Figure 1, the flowchart of actions describing the proposed system architecture is presented. More specifically, the framework initially relies on the successful connection of the AR/MR application with the interface of the robotic arm. As soon as the connection has been established and no errors are thrown, the user selects whether they wish to work in collaboration with the robot or simply manipulate it. In the case of the collaborative mode, safety precautions are automatically applied, such as limits on the maximum velocity and acceleration of the robot, as per ISO/TS 15066, which explicitly specifies the requirements for collaborative robotic cells.
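The command cycle described above can be outlined in a short sketch. The following C# fragment is only an illustrative outline of the closed loop, under the assumption of hypothetical interfaces (IDigitalTwin, IRobotClient, IMrView) standing in for the actual modules of the framework:

```csharp
// Illustrative outline of the closed-loop command cycle (not the actual implementation).
using System;

public record Pose(double X, double Y, double Z, double Rx, double Ry, double Rz);

public interface IDigitalTwin { double[] SolveInverseKinematics(Pose target); }
public interface IRobotClient { void SendMotion(double[] joints); bool WaitForCompletion(int timeoutMs); }
public interface IMrView      { void AnimateVirtualRobot(double[] joints); }

public static class ClosedLoopCycle
{
    public static void Run(IDigitalTwin twin, IRobotClient robot, IMrView view, Pose target)
    {
        // 1. The Digital Twin solves the inverse kinematics for the requested pose.
        double[] joints = twin.SolveInverseKinematics(target);

        // 2. The resulting joint motion is sent both to the MR view (hologram update)
        //    and to the physical robot controller.
        view.AnimateVirtualRobot(joints);
        robot.SendMotion(joints);

        // 3. The controller must acknowledge execution before a new target is accepted.
        if (!robot.WaitForCompletion(timeoutMs: 10000))
            throw new TimeoutException("Robot did not confirm motion execution.");
    }
}
```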
Then, the first functional block for the manipulation of the robot is the environmental understanding by the computer(s). This task can be accomplished in two modes. The first mode requires a pre-existing 3D map of the environment of the robotic arm, so the resultant 3D map, in the form of a point cloud, is imported into the development environment. The second mode relies on live spatial scanning of the environment by the device itself which, as discussed in Section 4, is currently supported only by the HoloLens HMD.

3.1. Robotic Arm Navigation Module

One of the main features of the navigation framework is the visualization of the robotic arm’s reachability, i.e., the maximum distance the end effector can reach. Concretely, during the navigation of the robot with the use of the navigation tool, the user receives a vivid visualization of the robot’s reachability. This is accomplished either statically or dynamically. The static mode involves the visualization of color-coded reachability zones. Therefore, the areas located close to the base of the robot are colored green, indicating a close-range radius and minor loss in capacity. Similarly, medium-range areas are colored yellow, indicating that the robotic arm’s capacity is significantly reduced, while orange marks the limit of the reachable (high-radius) range. Finally, if by mistake the user tries to guide the robotic arm to an unreachable area, this area is colored red, and an error notification is displayed in the graphical user interface (GUI).
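As an illustration of how such color-coded zones could be computed, the following C# sketch classifies a target point by its distance from the robot base. The zone radii and the nominal reach value are assumptions made for the example and not the values used in the actual application:

```csharp
// Sketch of the static, color-coded reachability classification (illustrative thresholds).
using System.Numerics;

public enum ReachZone { Green, Yellow, Orange, Red }

public static class Reachability
{
    public static ReachZone Classify(Vector3 robotBase, Vector3 target, float maxReach = 1.3f)
    {
        float d = Vector3.Distance(robotBase, target);

        if (d <= 0.5f * maxReach) return ReachZone.Green;   // close range, minor loss in capacity
        if (d <= 0.8f * maxReach) return ReachZone.Yellow;  // medium range, reduced capacity
        if (d <= maxReach)        return ReachZone.Orange;  // limit/high radius
        return ReachZone.Red;                               // unreachable: error notification in the GUI
    }
}
```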

3.2. Virtual Robot Animation

The second mode of the reachability visualization is based on the animation of the 3D robotic arm. Therefore, as the user commands the 3D robotic arm to move to a specific place, i.e., a point in the 3D point cloud, the virtual robotic arm is colored based on the color codes discussed in the previous paragraph. Further to that, if the user instructs the robotic arm to move towards a position which is obstructed by a foreign object, then the robotic arm is colored red and a notification pops up in order to notify the operator in a timely manner. In addition to that, the robot motion is halted until a new command is given by the operator. It must be noted that this functionality also takes into consideration the limitations of the robotic arm motors.
As will be discussed in the following paragraphs, the robotic arm used cannot perform a full circle rotation, i.e., a 360-degree rotation, for any of its joint motors. Consequently, if the motion exceeds this limitation, then the motor brakes are automatically engaged in order to protect the motors and the robotic arm itself. This is a stressful and time-consuming situation, as the operator has to manually reset the robotic arm to a safe position. The framework, however, does not let such a situation arise, as it gives timely notification to the user to re-design the robotic arm motions.
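A minimal sketch of such a pre-dispatch check is given below; the joint-limit arrays are placeholders, since in practice the limits would be read from the robot description (URDF):

```csharp
// Sketch of a pre-dispatch joint-limit check: reject the motion and notify the user
// instead of letting the controller engage the motor brakes.
public static class JointLimitGuard
{
    public static bool IsMotionAllowed(double[] targetAnglesDeg, double[] minDeg, double[] maxDeg, out string warning)
    {
        for (int i = 0; i < targetAnglesDeg.Length; i++)
        {
            if (targetAnglesDeg[i] < minDeg[i] || targetAnglesDeg[i] > maxDeg[i])
            {
                warning = $"Joint {i + 1}: commanded angle {targetAnglesDeg[i]:F1} deg exceeds " +
                          $"the allowed range [{minDeg[i]}, {maxDeg[i]}] deg. Please re-design the motion.";
                return false;
            }
        }
        warning = string.Empty;
        return true;
    }
}
```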
An equally important feature implemented is the automatic generation of safety zones and their continuous visualization around the robotic arm with respect to the next motions to be performed. Again, the robotic arm used in the current experimental setup already complies with all the required safety protocols regarding operation in a collaborative environment, as per the guidelines provided by ISO/TS 15066. The framework automatically applies these regulations when the user is prompted to select whether the robotic arm will collaborate with a human operator or not. Consequently, the margin of error is further minimized, as the speed and acceleration settings for the robotic arm motors cannot be exceeded and, most importantly, cannot be overridden.
However, in industrial robots this is not a standard feature; therefore, it is of great importance to notify the shop-floor operators in a timely manner about the robot’s intentions. As a result, while using the framework, the safety zones of the robot are automatically created and can be later communicated to the shop-floor operator wirelessly in the form of 3D visualizations. Since the robotic arm is moving in all three directions, the safety zones are implemented as 3D objects, thus contributing to a more intuitive user experience.

3.3. Augmented Reality via Handheld Devices

As will be discussed in the next section, the proposed system architecture can be realized through the development of a multi-platform MR application. The implementation of the application in handheld devices is supported, as these devices are widely adopted and no special equipment is required. However, since the handheld devices, i.e., tablets and mobile phones, have very limited hardware and software capabilities in contrast to HMDs, certain functionalities cannot be implemented in the mobile version of the application, while others are tailored to fit the capabilities of these devices.

3.4. Mixed Reality via HMDs

The proposed system architecture encompasses the complete list of functionalities on HMDs, such as the Microsoft HoloLens MR device. In addition to that, the implementation on such devices comes in the form of Mixed Reality, since the users interact with the holograms registered in their real environment for the manipulation of the robotic arm. When the application is used in conjunction with an HMD, i.e., the Microsoft HoloLens, the user is able to drag the virtual robotic arm via the use of a pinch gesture. In this mode, the user can position the virtual robotic arm at the desired position, make adjustments to the end effector pose, and, as a result, teach the new position/pose coordinates to the robot.
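For illustration, the pinch-and-drag interaction could be wired up roughly as follows, assuming MRTK 2.x, in which the ObjectManipulator component exposes manipulation events; the backend call for teaching the pose is hypothetical, and exact type names and namespaces may differ between MRTK versions:

```csharp
// Rough sketch: capture the hologram pose after a pinch-and-drag manipulation (MRTK 2.x assumed).
using Microsoft.MixedReality.Toolkit.UI;
using UnityEngine;

[RequireComponent(typeof(ObjectManipulator))]
public class VirtualRobotDragHandler : MonoBehaviour
{
    void Awake()
    {
        var manipulator = GetComponent<ObjectManipulator>();
        // Fired when the user releases the pinch gesture after dragging the hologram.
        manipulator.OnManipulationEnded.AddListener(HandleManipulationEnded);
    }

    private void HandleManipulationEnded(ManipulationEventData data)
    {
        // Capture the new pose of the virtual robot and forward it to the backend,
        // which teaches the position/pose coordinates to the physical robot.
        Debug.Log($"Teaching new pose: {transform.position} / {transform.rotation.eulerAngles}");
        // e.g., backend.TeachPosition(transform.position, transform.rotation);  // hypothetical call
    }
}
```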

3.5. Safety Zone Visualization Module

Human safety in industrial environments, particularly when collaborative robots are involved, is of paramount importance. As a result, the proposed method has been designed to calculate and display safety zones whenever a shop-floor technician works near a collaborative robot. The following equation is used to calculate the safety radius (ISO/TS 15066):
S_i = K_H · (T_R + T_B) + K_R · T_R + B    (1)
where K_R denotes the robot speed, K_H the human operator speed, T_R the robot reaction time, T_B the robot braking time, and B the robot braking distance. The UR10 robot has a reaction time of 400 milliseconds and a braking time of 1250 milliseconds, as well as a maximum end effector speed of 120 degrees per second and a maximum braking distance of 56 degrees.
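For illustration, Equation (1) can be evaluated as sketched below. Since the quoted UR10 values are angular, they would need to be converted to Cartesian end-effector speed and braking distance before use; the numerical values in the example are therefore purely illustrative (e.g., K_H is taken as an assumed 1.6 m/s operator walking speed) and do not constitute a certified safety calculation:

```csharp
// Illustrative evaluation of the protective separation distance of Equation (1).
using System;

public static class SafetyRadiusDemo
{
    // S_i = K_H * (T_R + T_B) + K_R * T_R + B
    public static double SafetyRadius(double kHuman, double kRobot, double tReaction, double tBraking, double brakingDistance)
        => kHuman * (tReaction + tBraking) + kRobot * tReaction + brakingDistance;

    public static void Main()
    {
        double tR = 0.400;  // robot reaction time [s] (from the text)
        double tB = 1.250;  // robot braking time [s] (from the text)
        double kH = 1.6;    // assumed human operator speed [m/s]
        double kR = 1.0;    // robot speed towards the operator [m/s] (illustrative)
        double b  = 0.20;   // robot braking distance [m] (illustrative)

        Console.WriteLine($"Safety radius S_i = {SafetyRadius(kH, kR, tR, tB, b):F2} m");
    }
}
```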

4. Software Tool Implementation

For the implementation of the proposed framework, a multi-platform, stand-alone application has been designed and developed. The application is compatible with handheld devices, such as tablets, and with Head-Mounted Displays (HMDs). However, it is stressed that the 3D scanning of the robotic arm surroundings is possible only with the HoloLens HMD, due to the limitations of the handheld devices’ hardware. Concretely, the HMD chosen is the well-known Microsoft HoloLens MR HMD. As far as the handheld device is concerned, a common Android tablet has been chosen.
Regarding the development of the framework, the software used was mainly the Unity 3D game engine, due to its wide range of MR functionalities and supporting APIs. In addition to that, the Vuforia API was used for the development of the functionalities of the handheld-based MR, whereas the Mixed Reality Toolkit (MRTK) was used for the HMD-based MR. The writing of the code scripts was accomplished in the C# programming language, using the Microsoft Visual Studio IDE.
One of the most important implementation steps is the communication of the framework with the real robot, and more specifically with the robotic arm’s controller. As discussed in the previous paragraphs, the main goal was to develop a fully wireless framework. As such, for the real-time data exchange the TCP/IP protocol was implemented, which is also compatible with the UR10 interface. The communication with the robotic arm is performed in two layers. The first layer is the real-time data exchange layer, and its purpose is to transmit data from the robot to the backend of the developed application, so that the position and the status of the robotic arm are successfully perceived by the application and, by extension, the 3D model is updated. The second layer of communication is the remote procedure call. This method can be realized as an XML file exchange between the application and the robotic arm controller, enabling the communication of programs, i.e., motion commands and methods/function calls, from the application, i.e., the user, to the robot. The architecture of the communication interfaces, based on the UR10 implementation, is presented in Figure 2, emphasizing the steps involved (see steps 1–9), the information flow (see the data filetypes), and the communication protocols implemented in order to achieve the interface between the individual modules. The step sequence is presented below, followed by an illustrative sketch of the command layer:
START
Step 1: Launch application on
  • Handheld device (Android)
  • MR HMD (MS HoloLens)
Step 2: Environmental understanding based on the device sensing system
  • Get device relevant position based on image target or 3D space anchor
  • 3D scan environment and recognize user’s hands
Step 3: Control virtual robot via
  • Virtual controllers, implemented on the application GUI
  • Hand gestures (e.g., tap to select, tap hold to grab, drag while grabbing)
Step 4: Update visualization of the 3D robot on the real environment
Step 5: Save current position
GoTo Step 3 for new motion OR GoTo Step 6
Step 6: Upload motion list (XML file) to Cloud Database
Step 7: Digital twin of robot (ROS environment)
  • Initialize communication with Cloud Database via web sockets
  • Download list of motions, and robot’s URDF (Unified Robot Description Format)
  • Calculate the kinematic values
  • Check feasibility of motions list
Step 8: Setup communication framework with physical robot
Step 9: Motion execution in physical robot
  • Download list of motions from digital twin
  • Execute motion
  • Send feedback to user for successful completion of motion
GoTo Step 3 until user interruption
END
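The command layer used in Steps 8–9 can be illustrated with the following sketch, which sends a single URScript motion command to the robot controller over TCP/IP (port 30002 is the UR secondary client interface). The IP address, joint values, and motion parameters are placeholders, and the actual framework exchanges XML-based programs rather than raw URScript:

```csharp
// Illustrative command dispatch to a UR controller over TCP/IP.
using System;
using System.Globalization;
using System.Net.Sockets;
using System.Text;

public static class RobotCommandClient
{
    public static void SendMoveJ(string robotIp, double[] jointsRad, double acc = 1.0, double vel = 0.5)
    {
        // Format values with the invariant culture so the URScript stays valid on any locale.
        string joints = string.Join(",", Array.ConvertAll(jointsRad, v => v.ToString(CultureInfo.InvariantCulture)));
        string script = string.Format(CultureInfo.InvariantCulture, "movej([{0}], a={1}, v={2})\n", joints, acc, vel);

        using var client = new TcpClient(robotIp, 30002);    // UR secondary client interface
        using NetworkStream stream = client.GetStream();
        byte[] payload = Encoding.ASCII.GetBytes(script);
        stream.Write(payload, 0, payload.Length);             // the controller parses and executes the command
    }
}
```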
Furthermore, for the facilitation of communication between the robotic arm and the standalone application, the Robot Operating System (ROS) acts as middleware for the translation of the 3D motions into commands for the actual robot. The ROS is utilized since it provides advanced capabilities regarding the calculation of the robotic arm’s forward and inverse kinematics. More specifically, what is of great interest is the calculation of the robot’s joint rotation angles. These values, along with other information, such as the robot’s physical properties, are saved in a separate file, following the Unified Robot Description Format (URDF). However, in order to enable this functionality in the Unity 3D development environment, the Rosbridge API has been implemented. It is stressed that, in the standalone application, every time the operator moves the 3D robotic arm, the angular displacement of every joint of the robot is recorded in a JSON file. Concretely, for the ROS, a virtual machine has been set up; the motions from the standalone application are imported in the form of a JSON file, and the ROS engine then interprets the motions into robot commands, with the utilization of inverse kinematics for the corresponding robotic arm, which are then communicated to the robot’s control box via web sockets.
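As an illustration of the Rosbridge link mentioned above, the following sketch publishes a joint target to the ROS side using the rosbridge JSON protocol over a web socket (default port 9090). The topic name, message type, and field layout are assumptions made only for this example:

```csharp
// Illustrative rosbridge publish over a web socket (topic and message layout are placeholders).
using System;
using System.Globalization;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

public static class RosbridgeClient
{
    public static async Task PublishJointTargetAsync(string host, double[] jointsRad)
    {
        using var socket = new ClientWebSocket();
        await socket.ConnectAsync(new Uri($"ws://{host}:9090"), CancellationToken.None);

        string positions = string.Join(",", Array.ConvertAll(jointsRad, v => v.ToString(CultureInfo.InvariantCulture)));

        // Advertise the (placeholder) topic, then publish one joint-state message.
        string advertise = "{\"op\":\"advertise\",\"topic\":\"/mr_app/joint_target\",\"type\":\"sensor_msgs/JointState\"}";
        string publish   = "{\"op\":\"publish\",\"topic\":\"/mr_app/joint_target\",\"msg\":{\"position\":[" + positions + "]}}";

        await SendTextAsync(socket, advertise);
        await SendTextAsync(socket, publish);
    }

    private static Task SendTextAsync(ClientWebSocket socket, string json) =>
        socket.SendAsync(new ArraySegment<byte>(Encoding.UTF8.GetBytes(json)),
                         WebSocketMessageType.Text, endOfMessage: true, CancellationToken.None);
}
```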
In an attempt to further raise the awareness of the technicians whenever they work in close collaboration with the robot, two functionalities have been developed. The first functionality is the distance indicator, which calculates the absolute distance, expressed in meters, between the user (camera position) and the base of the robot. The distance calculation method is illustrated in Figure 3. The equation presented in Figure 3 (Equation (2)) is a basic vector equation for the calculation of the distance between two points; in this case, the two points are the coordinates of the device and of the base of the physical robot. In order to set up a global coordinate system, a 3D (virtual) anchor is required, indicating the common origin (0,0,0) of the virtual and physical environments.
d_{r,c} = ‖r − c‖ = √((x_c − x_r)² + (y_c − y_r)² + (z_c − z_r)²)    (2)
where d_{r,c} is the absolute distance, in meters, between the user (device camera) and the base of the physical robot, r = (x_r, y_r, z_r) are the coordinates of the robot’s base, and c = (x_c, y_c, z_c) are the coordinates of the device camera.
The second functionality is the display of a colored edge around the screen of the user’s device. The color-coding for this functionality is based on the color coding of the safety zones. In Figure 4 the three states are indicated as the user’s position in relation to the robot changes.
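A minimal Unity sketch combining the two awareness functionalities (the distance indicator of Equation (2) and the colored screen edge) is given below; the zone thresholds are illustrative, and the scene references would be assigned in the Unity editor:

```csharp
// Sketch: distance indicator and color-coded screen edge (illustrative thresholds).
using UnityEngine;
using UnityEngine.UI;

public class ProximityIndicator : MonoBehaviour
{
    public Transform robotBase;   // base of the physical robot in the shared coordinate system (3D anchor origin)
    public Image borderImage;     // colored edge rendered around the device screen
    public Text distanceLabel;    // distance indicator, in meters

    void Update()
    {
        // Absolute distance between the device camera and the robot base, as in Equation (2).
        float d = Vector3.Distance(Camera.main.transform.position, robotBase.position);
        distanceLabel.text = $"{d:F2} m";

        // Color coding follows the safety zones: red = prohibited, yellow = warning, green = free.
        if (d < 1.0f)      borderImage.color = Color.red;
        else if (d < 2.0f) borderImage.color = Color.yellow;
        else               borderImage.color = Color.green;
    }
}
```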
The current implementation of the proposed framework is based on the development of a mobile application which is compatible with Android-based handheld devices. The developed application currently contains all the needed functionalities for the robotic arm manipulation, including the main functionality, the communication of the application with the control box of the robot, as well as the near real-time data exchange, which is used for the visualization of the robot’s current position and stance. The main HoloLens application has also been developed; however, during the development of the framework, issues have arisen regarding the communication of the 3D map to the framework so that the dynamic safety zones can be implemented. More specifically, the issue concerns the spatial scanning of the environment of the human operator, which, by extension, contains the robotic arm. Therefore, what is needed is the development of an extra module for the recognition of the robot geometry by the HMD and the dynamic exclusion of the polygons representing the figure of the robot from the spatial map created by the HMD. As far as the safety zones are concerned, in the current development they are implemented as 3D cylindrical objects covering the volume of the robot itself and its close surroundings. More specifically, there is an area close to the robot which is constantly prohibited, and thus it is colored red, and an outer area in which the user can freely move, shown as green. A partial cylindrical area of 30 degrees around the end effector indicates the intentions of the robotic manipulator. A representation of the current developments is depicted in Figure 5.
From a hardware point of view, for the development and testing of the application, a desktop PC equipped with an Intel Core i7 CPU, 16 GB of RAM, and an Nvidia 1060 GPU has been utilized. In Figure 5, the virtual robot is illustrated in AR. More specifically, in this figure the key functionalities of the developed application are presented, such as the virtual model of the robot, the safety zones, and a real-time distance indicator.

5. Case Study

For the validation of the developed robotic arm navigation tool, a set of experiments has been set up in a laboratory-based machine shop. More specifically, a UR10 collaborative robot, which is installed in the machine shop, has been used. It is stressed that an additional functionality has also been developed in order to enable users to create different configurations of the robotic arm and its surroundings, which has facilitated experimentation with different configurations. Initially, the testing was focused on the 3D scanning of the robotic arm surroundings and, more importantly, on the 3D regeneration of the 3D map by the framework. Upon completion of this step, either the engineer or the shop-floor technician is able to navigate the robotic arm remotely from the provided AR-based GUI. In the experimental tests of the framework, five engineers and shop-floor technicians participated. Each of the participants had to perform a set of actions in two different situations, i.e., the current situation, involving the hardwired robotic controller, and the developed wireless application. In each of the experiments, the number of errors was measured, as were the user’s awareness, the ease of use, and the time needed for completion of the assigned tasks. An error is defined as an action that leads to conflict between the human operator and the robot, and also as a robot motion that leads to conflict between the robot and any other object in its surroundings. It is stressed that the experiments were conducted with the use of the UR10 robotic arm, which is considered to be collaborative, and upon collision the robot motion was automatically halted. Thus, no health risk was induced during the experiments. In order to record user awareness and ease of use, a short interview with the participants was performed after the end of the experiments. Finally, the whole process was recorded for each individual, including the number of errors and the total time needed for each experiment to reach completion.
Six metrics have been used in evaluating HMI: (1) Task effectiveness, (2) Neglect Tolerance, (3) Robot Attention Demand, (4) Free Time, (5) Fan Out, and (6) Interaction Effort, on a scale of 1 to 10. A short description of each metric is presented in Table 2 [38]:
As far as the experiment scenarios are concerned, two different scenarios have been tested. In the first scenario, the operator had to work in collaboration with the robotic arm in order to assemble a mechanism. The assembly process involved the collection of the assembly components by the robotic arm and their placement on the assembly, while the operator had to secure the assembled components with screws. It is stressed that during the execution of this scenario the participants selected a set of predefined movement sequences from the GUI of the developed application. Such movement sequences were pre-created by the engineering department and uploaded on a Cloud database which served as a repository for such assembly scenarios. However, through the GUI of the application, the shop-floor engineer could alter the parameters of the movement sequence so that the collaboration between the human and the robotic arm was further facilitated, considering the safety limitations. For the second scenario, the operator had to transfer high-volume objects with the use of the robotic arm under low visibility circumstances. In Figure 6, the assembly sequence, in the form of steps, is presented.

6. Results and Discussion

Operator awareness in a collaborative environment is of critical importance, and in order to increase user awareness in such environments, hands-on experience and safety demonstrations are required. However, with the developed application, increased user awareness has been accomplished through the AR/MR visualization of the safety zones as well as of the robot’s intentions. Regarding the number of errors, significant variation was observed among the users due to different experience levels, i.e., the shop-floor technicians made fewer errors than the participating engineers. Despite that, a reduction in the number of errors with the use of the developed application, relative to the current controller, was noted for all the participants. More specifically, the increase in the human operator’s awareness, from approximately 51% to 81%, is one of the most important findings, as reflected in Figure 7.
An interesting finding was the time required for completion of the assembly task. Similar to the rest of the findings in this research work, the time recordings varied considerably between the users, which is also explained by the variance in their experience of such operations. However, in order to measure this indicator, the users were first allowed to familiarize themselves with the interface of the developed application. Another aspect observed during the experiments was an increase in the users’ engagement, as less experienced users seemed more confident in experimenting with the robot in the collaborative cell, in contrast to the current setup where the hardwired controller has to be used. Mostly, this was due to the wireless remote operation of the robot, which gave the participants a feeling of freedom in the event of an unwanted situation. Finally, the total assembly time was decreased by approximately 24%, and user errors decreased by approximately 60%.

7. Conclusions and Outlook

This research work has presented a tool for the manipulation of robotic arms based on Digital Twin and MR technologies. The added value of the proposed navigation tool lies in its capability to map a robotic arm’s environment and thereby facilitate its navigation in 3D space. With the addition of AR functionalities, a better user experience is achieved. It has become evident from the results that the 3D representation of the environment of the robot provided a clearer view of the working environment of the robot in contrast to the existing solution, contributing to a reduction in total assembly time of approximately 24% and an approximately 60% reduction in the number of user errors. Another benefit is that the proposed framework is completely remote; thus, the operator is not restricted by the controller’s cable, reducing health hazards.
Future work will focus on additional experiments. The framework can be further improved with support for more robotic arms. Furthermore, network security has to be addressed in order to reduce the risk of external and unwanted security breaches. One of the main limitations encountered during the development of and experimentation with the proposed framework was the time needed for the communication between the individual modules, so that the transformation of the 3D robotic arm was correctly translated into joint motions for the real robot. Therefore, what is needed is to further optimize the inverse kinematics algorithm running on the virtual machine, so that the calculations are performed faster. Moreover, since the development required a machine running a Linux distribution for the ROS, a dedicated desktop computer will have to be set up, as the virtual machine that is currently used shares resources with the host computer, thus compromising the performance of the framework.

Author Contributions

All authors have participated in the modelling of the research project. D.M., supervisor; J.A., research, conceptualization, software, writing—original draft preparation; N.P., investigation, writing—original draft preparation. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Barrett, T.R.; Llion, M.E.; Mike, F.; Fred, D.; Simon, C.M.; Daniel, I.; Elizabeth, S. Virtual Engineering of a Fusion Reactor: Application to Divertor Design, Manufacture, and Testing. IEEE Trans. Plasma Sci. 2018, 47, 889–896.
  2. Mourtzis, D.; Siatras, V.; Angelopoulos, J.; Panopoulos, N. An Augmented Reality Collaborative Product Design Cloud-Based Platform in the Context of Learning Factory. Procedia Manuf. 2020, 45, 546–551.
  3. Mourtzis, D.; Angelopoulos, J.; Dimitrakopoulos, G. Design and development of a flexible manufacturing cell in the concept of learning factory paradigm for the education of generation 4.0 engineers. Procedia Manuf. 2020, 45, 361–366.
  4. Hermann, M.; Tobias, P.; Boris, O. Design Principles for Industrie 4.0 Scenarios. In Proceedings of the 2016 49th Hawaii International Conference on System Sciences (HICSS), Koloa, HI, USA, 5–8 January 2016; pp. 3928–3937.
  5. Liu, S.; Wang, L.; Wang, X.V. Symbiotic human-robot collaboration: Multimodal control using function blocks. Procedia CIRP 2020, 93, 1188–1193.
  6. Mourtzis, D. Simulation in the design and operation of manufacturing systems: State of the art and new trends. Int. J. Prod. Res. 2020, 58, 1927–1949.
  7. Michalos, G.; Makris, S.; Papakostas, N.; Mourtzis, D.; Chryssolouris, G. Automotive assembly technologies review: Challenges and outlook for a flexible and adaptive approach. CIRP J. Manuf. Sci. Technol. 2010, 2, 81–91.
  8. Whiting, K. These Are the Top 10 Job Skills of Tomorrow—And How Long It Takes to Learn Them; World Economic Forum: Geneva, Switzerland, 2020; Volume 21.
  9. Wang, L.; Gao, R.; Váncza, J.; Krüger, J.; Wang, X.; Makris, S.; Chryssolouris, G. Symbiotic human-robot collaborative assembly. CIRP Ann. 2019, 68, 701–726.
  10. Takata, S.; Hirano, T. Human and robot allocation method for hybrid assembly systems. CIRP Ann. 2011, 60, 9–12.
  11. Krüger, J.; Lien, T.; Verl, A. Cooperation of human and machines in assembly lines. CIRP Ann. 2009, 58, 628–646.
  12. Kulic, D.; Nakamura, Y. Incremental Learning of Human Behaviors Using Hierarchical Hidden Markov Models. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 4649–4655.
  13. ElMaraghy, H.; Monostori, L.; Schuh, G.; ElMaraghy, W. Evolution and future of manufacturing systems. CIRP Ann. 2021, 70, 635–658.
  14. Demir, K.A.; Cicibaş, H. The Next Industrial Revolution: Industry 5.0 and Discussions on Industry 4.0. In Industry 4.0 from the Management Information Systems Perspectives; Peter Lang Publishing House: New York, NY, USA, 2019.
  15. De Nul, L.; Maija, B.; Petridis, A. Industry 5.0: Towards a Sustainable, Human-Centric and Resilient European Industry; Publications Office of the European Union: Luxembourg, 2021.
  16. Zacharaki, A.; Kostavelis, I.; Gasteratos, A.; Dokas, I. Safety bounds in human robot interaction: A survey. Saf. Sci. 2020, 127, 104667.
  17. Wang, L. A futuristic perspective on human-centric assembly. J. Manuf. Syst. 2021, 62, 199–201.
  18. Gualtieri, L.; Rauch, E.; Vidoni, R. Development and validation of guidelines for safety in human-robot collaborative assembly systems. Comput. Ind. Eng. 2021, 163, 107801.
  19. Gualtieri, L.; Rauch, E.; Vidoni, R. Emerging research fields in safety and ergonomics in industrial collaborative robotics: A systematic literature review. Robot. Comput. Integr. Manuf. 2021, 67, 101998.
  20. Bi, Z.; Luo, M.; Miao, Z.; Zhang, B.; Zhang, W.; Wang, L. Safety assurance mechanisms of collaborative robotic systems in manufacturing. Robot. Comput. Manuf. 2021, 67, 102022.
  21. Smids, J.; Nyholm, S.; Berkers, H. Robots in the Workplace: A Threat to—Or opportunity for—Meaningful Work? Philos. Technol. 2020, 33, 503–522.
  22. Tsarouchi, P.; Makris, S.; Chryssolouris, G. Human-robot interaction review and challenges on task planning and programming. Int. J. Comput. Integr. Manuf. 2016, 29, 916–931.
  23. Georgieff, A.; Milanez, A. What Happened to Jobs at High Risk of Automation? 2021. Available online: https://www.oecd.org/future-of-work/reports-and-data/what-happened-to-jobs-at-high-risk-of-automation-2021.pdf (accessed on 13 February 2022).
  24. Mourtzis, D.; Vlachou, A.; Zogopoulos, V. Cloud-based augmented reality remote maintenance through shop-floor monitoring: A product-service system approach. J. Manuf. Sci. Eng. 2017, 39, 061011.
  25. Rishab, A. 2021 Digital Transformation Assessment. The Manufacturer and IBM Report 2021. Available online: https://www.ibm.com/downloads/cas/MPQGMEN9 (accessed on 15 February 2022).
  26. Berglund, F.Å.; Gong, L.; Li, D. Testing and validating Extended Reality (xR) technologies in manufacturing. In Proceedings of the 8th Swedish Production Symposium (SPS 2018), Stokholm, Sweden, 16–18 May 2018; pp. 31–38.
  27. Bonci, A.; Cen Cheng, P.D.; Indri, M.; Nabissi, G.; Sibona, F. Human-Robot Perception in Industrial Environments: A Survey. Sensors 2021, 21, 1571.
  28. Nahavandi, S. Industry 5.0—A human-centric solution. Sustainability 2019, 11, 4371.
  29. Chryssolouris, G. Manufacturing: Systems Theory and Practice, 2nd ed.; Springer: New York, NY, USA, 2006.
  30. Agarwal, D.; Bersin, J.; Lahiri, G.; Schwartz, J.; Volini, E. AI, Robotics, and Automation: Put Humans in the Loop. Global Human Capital Trends. Deloitte Insights 2018. Available online: https://www2.deloitte.com/us/en/insights/focus/human-capital-trends/2018/ai-robotics-intelligent-machines.html (accessed on 15 February 2022).
  31. Bessen, E.J. How computer automation affects occupations: Technology, jobs, and skills. Boston Univ. Sch. Law Law Econ. Res. Pap. 2016, 15–49.
  32. Karagiannis, P.; Michalos, G.; Andronas, D.; Matthaiakis, A.-S.; Giannoulis, C.; Makris, S. Cognitive Mechatronic Devices for Re-configurable Production of Complex Parts. Appl. Sci. 2021, 11, 5034.
  33. Magrini, E.; Ferraguti, F.; Ronga, A.J.; Pini, F.; De Luca, A.; Leali, F. Human-robot coexistence and interaction in open industrial cells. Rob. Comput.-Integr. Manuf. (RCIM) 2020, 61, 120–143.
  34. Ganesan, R.K.; Rathore, Y.K.; Ross, H.M.; Amor, H.B. Better teaming through visual cues: How projecting imagery in a workspace can improve human-robot collaboration. IEEE Rob. Autom. Mag. 2018, 25, 59–71.
  35. Alpaslan, D.K.; Caymaz, E.; Elci, M. Issues in Integrating Robots into Organizations. In Proceedings of the 12th International Scientific Conference on Defense Resources Management in the 21st Century, Brașov, Romania, 9–10 November 2017.
  36. Aaltonen, I.; Salmi, T.; Marstio, I. Refining levels of collaboration to support the design and evaluation of human-robot interaction in the manufacturing industry. Procedia CIRP 2018, 72, 93–98.
  37. ISO/TS 15066:2016; Robots and Robotic Devices—Collaborative Robots. International Organization for Standardization: Geneva, Switzerland, 2016.
  38. Olsen, D.R., Jr.; Goodrich, M.A. Metrics for evaluating human-robot interactions. Proc. PERMIS 2003, 2003, 4.
Figure 1. Flowchart of the proposed system architecture.
Figure 2. Communication interface architecture.
Figure 3. Distance calculation method.
Figure 4. AR visualization of the virtual robot, the safety zones, and the distance indicator.
Figure 5. AR visualization of the robot safety zones and the next robot position.
Figure 6. Assembly steps for experimental scenario 1.
Figure 7. Results: (a) number of errors; (b) assembly time; (c) user awareness.
Table 2. Metrics for user awareness in HMI.
Metric | Definition
Task Effectiveness | How successfully the task was accomplished
Neglect Tolerance | Deviation in time for the robot’s task when the operator is not attending the robot
Robot Attention Demand | The fraction of total task time that an operator must attend to the robot
Free Time | The fraction of total task time that an operator does not have to attend to the robot
Fan Out | Estimated number of robots that the operator can attend simultaneously
Interaction Effort | The total time for interaction plus the cognitive demands of interaction
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
