Review

Research Perspectives in Collaborative Assembly: A Review

by Thierry Yonga Chuengwa 1,*, Jan Adriaan Swanepoel 1, Anish Matthew Kurien 2, Mukondeleli Grace Kanakana-Katumba 3 and Karim Djouani 2,4
1 Department of Industrial Engineering, Tshwane University of Technology, Staatsartillerie Rd, Pretoria 0183, South Africa
2 F’SATI, Department of Electrical Engineering, Tshwane University of Technology, Staatsartillerie Rd, Pretoria 0183, South Africa
3 FEBE, Tshwane University of Technology, Staatsartillerie Rd, Pretoria 0183, South Africa
4 LISSI Lab, University Paris-Est Créteil, Avenue du Général de Gaulle, 94000 Créteil, France
* Author to whom correspondence should be addressed.
Robotics 2023, 12(2), 37; https://doi.org/10.3390/robotics12020037
Submission received: 18 January 2023 / Revised: 24 February 2023 / Accepted: 1 March 2023 / Published: 7 March 2023
(This article belongs to the Special Issue Human Factors in Human–Robot Interaction)

Abstract:
In recent years, the emergence of Industry 4.0 technologies has introduced manufacturing disruptions that necessitate the development of accompanying socio-technical solutions. There is growing interest among manufacturing enterprises in embracing the drivers of the Smart Industry paradigm. Among these drivers, human–robot physical co-manipulation of objects has gained significant interest in the literature on assembly operations. Motivated by the requirement for effective dyads between the human and the robot counterpart, this study investigates recent literature on the implementation methods of human–robot collaborative assembly scenarios. Using a combination of search strings, the researchers performed a systematic review, sourcing 451 publications from various databases (Science Direct (253), IEEE Xplore (49), Emerald (32), PubMed (21) and SpringerLink (96)). A coding assignment in Eppi-Reviewer helped screen the literature based on ‘exclude’ and ‘include’ criteria. The final number of full-text publications considered in this literature review is 118 peer-reviewed research articles published up until September 2022. The findings anticipate that research publications in the field of human–robot collaborative assembly will continue to grow. Understanding and modeling human interaction and behavior in robot co-assembly is crucial to the development of future sustainable smart factories. Machine vision and digital twin modeling are emerging as promising interfaces for evaluating task distribution strategies that mitigate human ergonomic and safety risks in the design of collaborative assembly solutions.

1. Introduction

The migration towards smart manufacturing is powered by sharing key information from different resources throughout the manufacturing process [1]. To achieve higher productivity in future manufacturing and assembly operations, the inclusion of humans in the machine loop is a solution that has the potential to combine the cognitive capabilities of the human with the robustness of autonomous systems [2,3].
There is currently a wide set of integrated technologies, such as additive manufacturing, cyber-physical systems, the internet of things and virtual reality, that can support the design and implementation of future industrial operations in several ways [4]. These technologies under Industry 4.0 bring new work methods and a paradigm shift toward physical human–machine interaction. Human–robot collaboration (HRC), in particular, is an emerging application that requires robots to work alongside humans as capable teammates. In this collaborative paradigm, both the human and the robot work together, sharing the same task execution [5]. The study of HRC has grown in importance in the scientific literature in recent years [6]. Gervasi, Mastrogiacomo [7] describe HRC as the foundation of Industry 4.0, in which operations will strongly rely on data-driven computing and machine learning.
Collaborative robotics has moved robots out of the usual cage separation, and they are now designed with a number of inherent safety features that allow for the implementation of reliable human-in-the-loop applications [6]. In industry, collaborative assembly robots can bring numerous economic benefits but also challenges in terms of the physical hazards associated with unpredictable contact between the human and a potentially malfunctioning mechanical environment [4]. Addressing human factors at the design stage can enable effective collaborative systems that expand the flexibility of HRC assembly operations. From a physical perspective, understanding the human role in collaborative assembly is becoming a crucial topic. Beyond safety in collaboration, limited research has been carried out to standardize [8,9] the task distribution between the robot and the human worker in a manner that considers them peers in relation to each other [10].
Besides the need to optimize efficiency by making manufacturing processes more flexible, industry cannot disregard sustainability strategies that encompass workers’ safety and well-being in the manufacturing process [11]. Emerging HRC paradigms require a deeper investigation not only into the task allocation methods between the human and the robot but also into the effects of prolonged work execution and continuous operations on the human counterpart. New expectations in human–robot assembly collaboration require the study of situation awareness, which also considers the dynamic nature of human biomechanics and motion behaviors. Modeling interfaces such as human-in-the-loop virtual reality technology [12] and machine vision begin to show common ground for manufacturing systems design in which there is a greater combination of interaction scenarios between humans and robots. In consideration of the human-centered nature and the socio-technical perspectives associated with future production systems, this paper explored the following research questions (RQs):
  • RQ1: How have publications on HRC in assembly evolved in recent years?
  • RQ2: What are the main research themes addressed in the scientific literature concerning the successful implementation of collaborative assembly robots in manufacturing?
  • RQ3: What are the research perspectives and emerging challenges for human-centered collaborative assembly in industry?
To bridge this knowledge gap, this research employed a systematic literature review (SLR) framework (shown in Figure 1) to identify the growing research considerations regarding human characteristics that affect the team dynamics of HRC in future manufacturing and assembly operations and to provide a deeper understanding of them.
According to the data (see Section 3), 62.7% of the papers included discussed the term ‘collaborative assembly’ in the context of task distribution strategies. Only 15.25% of the papers considered relate to the specific study of fatigue under human factors and ergonomics. Other terms such as ‘action recognition’ and ‘systems adaptation’, which relate to the state at time t_i of the human behavior and the robot response, respectively, are even less reported in the papers sampled. Yet these emerging methods of task synchronization, which are associated with human-centered design and, similarly, with the context-specific study of human fatigue in HRC, are gaining traction in the literature on designing collaborative assembly systems. This means that although task allocation strategies are the main topic in the design of collaborative assembly, less research has investigated the requirements for integrating inherent human characteristics as the focus in the development of human–robot co-assembly operations.
Nevertheless, human physical fatigue in the design of collaborative assembly has been a growing research trend in ergonomics in the last 5 years. The trend underlines the need to align future research efforts toward furthering ergonomic studies in HRC. Indeed, collaborative task planning not only affects productivity but also influences the worker’s health and comfort in the work environment. In this context, an important topic in HRC is to find solutions that are based on non-intrusive methods [13], can monitor the operators’ physical capability and are aimed at improving production efficiency while reducing the risk of biomechanical impairment [14].
The remainder of the paper is arranged as follows. Section 2 presents the materials and methods. In Section 3, the key terminologies identified in the screening process are introduced. Section 4 analyzes the descriptive results of the study. Section 5 discusses some emerging research fields, and Section 6 concludes the paper.

2. Materials and Methods

Considering the above motivation for the study of collaborative robots in industrial assembly, this investigation on co-assembly planning methods focused on those developed under similar considerations. The search was performed in five electronic databases, namely: Science Direct, Emerald, IEEE Xplore, PubMed and SpringerLink. Essentially, this review work is based on the hypothesis that the inherent characteristics of the human differ from the robustness of automation. The main research themes under HRC in assembly are then examined to understand how the research published in recent times can establish the foundation for such studies.

2.1. Research Objectives

In addressing the RQs described in the introduction, this study identifies recent literature on the theory underpinning the interaction and interdependencies between humans and collaborative robots. The content identified in this review process helps derive how the fields of collaborative assembly have evolved and what the prospects in task allocation for human–robot collaborative assembly are. There are four stages in this process, as shown in Figure 2.
  • Step 1: Define the research objectives of the review.
  • Step 2: Establish the research field (inclusion) of the review.
  • Step 3: Screen the title, abstract and full text.
  • Step 4: Report on the reduced data, generate categories and summarize the validation process of the literature review.
Figure 2. Flow diagram of the selection process.

2.2. The Research Fields of the Review

In the second step, the search focused on recent relevant research work in the manufacturing industry. Therefore, the search was limited to science and engineering databases. Searches in PubMed were also included, as early observations of published work suggested that articles related to HRC in industry are associated with biomechanics as a subset of the biomedical sciences. The search expressions shown in Table 1 were developed and applied in all five databases. Only English full-text articles were considered in the search. Press releases, editorials and undefined publications were excluded.
The search expressions were combined using the Boolean ‘AND’ operator. The main search expression (SE) ‘collaborative assembly’ (and its various derivations) was combined with further SEs to develop the search strings, such that the search combinations ‘main SE AND SEn’ (n = 1, 2, 3, 4, 5, 6) were applied to the databases. Consequently, six (1 × 6) keyword combinations were used to collect the articles.
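The construction of these query strings can be illustrated with a short script. This is a minimal sketch only; the secondary expressions used below are illustrative stand-ins, since the actual expressions are those listed in Table 1.

```python
# Minimal sketch of the 'main SE AND SEn' string construction (n = 1..6).
# The secondary expressions below are illustrative placeholders; the actual
# expressions are those listed in Table 1.
MAIN_SE = '"collaborative assembly" OR "human-robot collaborative assembly"'
SECONDARY_SES = [
    '"task allocation"',
    '"ergonomics"',
    '"fatigue"',
    '"motion planning"',
    '"action recognition"',
    '"safety"',
]

def build_search_strings(main_se, secondary_ses):
    """Combine the main search expression with each secondary expression
    using the Boolean AND operator, yielding the 1 x 6 keyword combinations."""
    return [f'({main_se}) AND ({se})' for se in secondary_ses]

for query in build_search_strings(MAIN_SE, SECONDARY_SES):
    print(query)
```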
The analysis of the Scopus database by Gualtieri, Rauch [4] revealed that research on HRC started to grow significantly from 2015. While HRC has been a fast-growing research interest in recent years, the study conducted reveals that HRC is a relatively recent field overall. The timespan considered in this study was limited to the last decade (2013–2022), as the interest was in studying the broader research trend in collaborative assembly. As a result, the 451 papers obtained from the five databases consulted formed the boundary of the study.

2.3. Screening

The conceptual boundary of the review process was based on the terms ‘collaborative assembly’ (and its alternative expressions) and the fields of reference were in the production (manufacturing) environment. First, the redundant records were deleted using the automated-screening-for-duplicates function in the Eppi-Reviewer reference management software. Among the 451 papers extracted from the databases, 368 records remained after deleting duplicates. Secondly, in addition to the search expressions of Table 1, further examination was conducted to satisfy the considered inclusion criteria. Screening was performed across (i) titles studied for inclusion, (ii) abstracts examined for relevance (objective, method and findings) and (iii) full texts retrieved and reviewed for consideration. Through this filtering process, 118 relevant papers were obtained (see Figure 3).
While human factors were later revealed as the foundation on which this study lies, work related to cognitive ergonomics was discarded for several reasons. Cognitive science is a larger field that cuts across a wide array of topics; cognitive ergonomics is therefore not specifically concerned with the nature of assembly operations in HRC. Moreover, the various aspects of cognition embody mental activities that pertain to the process of ‘knowing’. Since the study sought to investigate non-intrusive methods of coupling the human with the collaborative assembly robot, irrespective of the human competence, aspects related to cognition were beyond the scope of this work.

2.4. Validation of the Review Method

The papers considered for this review work were sourced from reputable academic and engineering research databases and are strong representatives of today’s task allocation strategies. This was to ensure the relevance of the referenced literature in relation to the research questions [4]. To answer RQ1, all the articles reviewed were coded according to their key expressions. This provided the trend in the years of publication and the collective data representing the research themes. Second, a table was created for the identification and description of the collaboration modalities to answer RQ2. The relation of task allocation to human fatigue in HRC was also described. The relation was established through the mediums of action recognition and motion planning, which allow the controller to recognize the human gesture-based intention.
From the 118 papers analyzed, task allocation was the theme with the highest representation, in 77 papers, while fatigue management, as a context-specific derivation of ergonomics, occupies the lowest margin, coded in 18 papers. In identifying the research gap that answers RQ3, the most represented and least represented themes were divided into clusters, each made of sub-clusters. Task allocation and fatigue management were the base clusters. Motion planning was adopted as the unifying theme after carefully reading and categorizing all the identified work.

3. Enabling Technologies in Collaborative Assembly

Assembly operations represent the highest share of investment and the larger percentage of the labor force in the manufacturing industry [15,16]. The manufacturing operations of Industry 4.0 evolved from human hands to machine tools, partial automation through programmable logic controllers and, now, collaborative robot assembly using advanced communication technologies [17,18]. Numerous authors [5,10,19,20] describe that teaming up industrial robots with humans combines the higher process efficiency of automation with the flexibility and soft skills of the human worker.
An early analysis of the cost effectiveness in terms of the time cost and payment cost of different assembly strategies revealed that the coordination of humans and robots in assembly can reduce both the assembly time and cost [21]. The challenges of the production planning of robot capabilities and their collaboration with humans add new components to the task scheduling and assembly line balancing [22]. The main theme addressed in the literature [13,23,24] on human–robot co-assembly is the problem of task allocation (as shown in Figure 4).
A common assumption in previously proposed task allocation methods in HRC is that the human partner has a constant level of physical performance throughout a work cycle [25]. According to authors [26,27], these approaches to task allocation are based on fixed rules for repetitive sequences of actions and are unreliable because the features representation of the system is not shared dynamically across all agents.
The authors in [5,28] argue that the slowness in the widespread industrial acceptance of HRC is due to the high requirements regarding safety, for which there is a lack of engineering tools for analyzing the human functions in collaborative robotics. Driven by the need for agile manufacturing, the benefits and economic prospects of HRC make it a growing trend in industry. However, the economic considerations for cost and profit are often the priority in the design of HRC, while the human factors come second [15,29].
Collaborative robots overcome the traditional separation of labor to enable a direct interaction with the human operator for the execution of tasks of various complexities [30,31]. Dynamics and uncertainties are other critical aspects in integrating human factors when designing HRC [32]. Among these challenges for HRC in assembly is to make the robot aware of the human gesture-based intention, muscle activation and motor performance [33]. Indeed, the HRC systems exhibit dynamic behavior whereby the human states continuously change when subjected to different conditions. Thus, this requires the constant adaptation of the robot counterpart.
It is worth noting that the levels of relationships between humans and robots may have different meanings. HRC is generally characterized as a subfield of human–robot interaction (HRI) [34]. As collaboration implies some form of physical interaction, HRC can also be referred to as physical human–robot interaction (pHRI) [35]. However, other works [17,36] note subtle differences in the use of HRI and HRC, as the two terms are often used interchangeably. The above discussion is to make the reader aware of the vast combination of factors for modeling a realistic human-in-the-loop task-sequencing problem in HRC. Further reading on the classification hierarchies of the relationship between humans and robots and industrial applications of HRI can be found in [5,37].
In essence, assembly operations that involve manual workstations rely on operational safety, in which the role of the human operators directly impacts the cycle time, quality and feasibility [38]. On the one hand, robots perform repetitive and physically stressful tasks with high precision [39]. On the other hand, the lack of knowledge and inappropriate work synchronization in operations involving manual assembly lead to a loss of productivity. Human labor comes with high cost and little stability [26]. Workers may become physically overloaded when frequently exposed to handling heavy components and repetitive tasks [16].
The safety-stop and idle time of the robot in the event of an impending risk, the costly expertise for re-programming and the high cost of robot commissioning constitute some obstacles facing collaborative robots in industry [5]. Furthermore, the majority of human–robot collaborative systems are still configured in a stop-and-go fashion, creating delays in command and response patterns [40]. Similarly, the configuration of collaborative robots in assembly lines that is based on explicitly defined waypoints in space does not satisfy the requirements associated with the varying impedance of the workers’ joints and limbs, leading to persistent drawbacks for sustainable collaborative assembly.
Optimizing the variety of goals contained within a hybrid assembly system requires special applications of modeling, simulation and predictive visualization of the collaborative system’s performance. New modeling techniques for manufacturing solutions have emerged in recent times. Digital human models (DHMs) are currently considered a promising approach in the general evaluation of human characteristics. Yet, enriching collaborative robots with the capability to intuitively capture, interpret and understand human physical competence and skill degradation for balancing the workload is still at an early stage.

3.1. Task Allocation

Task allocation in manufacturing is the problem of evaluating and assigning operations to existing resources within the most feasible sequence that improves economic performance and social benefits [41]. Traditionally, the division of tasks between the active resources of the production line was based on fixed rules, and both humans and robots performed high-frequency repetitive operations [26]. The allocation of tasks between the human and the robot primarily aims to follow criteria that satisfy the respective capabilities of the individual resources. Attention must be given to the kind of resources used based on their competence and capabilities [39]. At a deterministic level, HRC deals with the paradigm of shared sequential task execution between the human and the robot. As shown in Figure 5, a task sequence comprising n = 5 operations is distributed between the human (H) and the robot (R). The human performs tasks k2, k3 and k4, while the robot performs tasks J1 and J5.
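A deterministic allocation of this kind can be captured with a very small data structure, as sketched below. The task durations are invented for illustration; only the allocation pattern (robot: first and last operation, human: the middle three) follows the description of Figure 5.

```python
# Minimal sketch of a deterministic human-robot task allocation for a
# sequence of n = 5 operations, following the pattern described for Figure 5
# (robot: operations J1 and J5; human: operations k2, k3 and k4).
# Task durations are invented for illustration.
from dataclasses import dataclass

@dataclass
class Operation:
    name: str
    resource: str     # 'H' (human) or 'R' (robot)
    duration_s: float

sequence = [
    Operation('J1', 'R', 4.0),
    Operation('k2', 'H', 6.5),
    Operation('k3', 'H', 5.0),
    Operation('k4', 'H', 7.0),
    Operation('J5', 'R', 3.5),
]

# For a strictly sequential execution, the cycle time is the sum of all
# operation times; the per-resource workload shows how the load is shared.
cycle_time = sum(op.duration_s for op in sequence)
workload = {r: sum(op.duration_s for op in sequence if op.resource == r) for r in ('H', 'R')}
print(f'cycle time: {cycle_time:.1f} s, workload: {workload}')
```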
Current shared industrial workplaces bring numerous uncertainties that cannot be anticipated with rigid automation. Co-operation-based assembly through task sharing using human intelligence for decision making and robots for accurate execution is critical for workload planning in the production environment [42,43]. Task allocation problems in assembly, commonly known as the assembly line balancing problem (ALBP), emerge when the assembly process must be redesigned based on optimization criteria for the proper re-assigning of tasks [39,44].
Several modeling tools are available for solving the task sequence problem. Difficulty score sheets in design for assembly (DFA) were used in [24,45] to provide a good understanding of the attributes that affect the human–robot task assignment. The concept of dynamic function allocation was studied in [46] to resolve the problem of an unbalanced workload by changing the levels of human/machine control over system functions, leading to more situational awareness of human factors in automation. The authors in [47] proposed a disassembly sequence planning model that is capable of minimizing the disassembly time without violating the human safety and resource constraints.
In solving the task allocation problem in the design phase, the authors in [48] used the nominal schedule to best distribute the work among actors. Following an AND/OR graph, the scheduler is capable of allocating the most suitable task for each actor to execute at each point in time, whereby human expertise is exploited to improve the collaboration. While most studies consider the single human–robot collaborative system, Liau and Ryu [49] studied two different HRC modes, namely, multi-station and flexible modes. Through simulation, they proposed a three-level task allocation model to improve the cycle time, human capability and ergonomic factors.
Beyond the suitability of the task distribution strategy, the capability of the HRC system and the optimization of the assembly cycle time and line balancing can also be studied if an appropriate task allocation model demonstrates sufficient levels of situational awareness. This assigns adjustable roles to the active resources in a manner that determines the limits of acceptable physical work requirements. Furthermore, the design of task allocation that ignores the human factors can lead to economic costs associated with health damage and the loss of productivity due to absenteeism [16].

3.2. Ergonomics in Collaborative Assembly

The requirement of integrating human factors in operations involving manual material handling has become a growing trend in research. In the effort to involve human analysis in collaborative work design, ergonomics focuses on the human physical and cognitive characteristics and describes the science of designing appropriate working conditions [44]. Two key aspects are considered below: occupational health and safety.

3.2.1. Ergonomics and Fatigue

Ergonomic factors in HRC play a critical role at the task allocation level based on prolonged work execution and the posture of the operator when performing different tasks [50]. When it comes to materials handling, collaborative robots can alleviate the biomechanical overload on the human operator in heavy and repetitive operations [4]. Work-related musculoskeletal disorders make up the vast proportion of occupational diseases and absenteeism in the manufacturing industry [16]. The evaluation of collaborative assembly teams in [51] showed that the overall workload and subjectively rated workload were lower for the human–robot teams than for the human–human teams. Several studies [52,53] have found a strong correlation between fatigue and product quality and have concluded that the performance of the operators often declines because of fatigue induced by metabolic disturbances. Unlike those of their rigid robot counterparts, the repetitive motions of the human limbs result in the accumulation of muscle fatigue.
The authors in [54] previously showed that muscle fatigue affects workers’ steadiness. Li, Liu [55] stress that fatigue is a significant feature affecting human proficiency when humans and robots continuously collaborate while executing tasks. The authors propose a mathematical model as a logarithmic function of fatigue to describe the relationship between time and the human energy level in an HRC disassembly operation. The use of force–torque measurement is reported in [50] to monitor the overloading joint-torque variations due to external forces in real time. Their model accounts for the accumulation of overloading torque on the joints over time. In refs. [56,57,58], impedance control is considered as a low-level biomechanical system detecting muscle activation and guiding the adaptation of the robot to the human pace.
The authors in [59] used the equation of the maximum voluntary contraction (MVC) of the muscle to determine the level of physical fatigue of the operator based on the execution time of the operations. The authors in [60] combined the use of wearable sensors and machine learning techniques to collect accurate kinematic data and biomechanical information as measurements for joint load estimation and human activity recognition. There is ongoing research on ergonomics and fatigue in HRC, such as the analysis of energy expenditure [44] and the sequence optimization of hybrid assembly lines based on evolutionary algorithms [61].
Interestingly, ergonomics in collaborative assembly is still under-represented in the literature [62]. However, the data coding performed shows growth in recent years in terms of the publication of research in the specific context of human fatigue in HRC, as shown in Figure 6.
Unlike rigid robotics, repetitive motions of the human limbs result in the accumulation of muscle fatigue [50], which induces changes in the motion patterns of the human partner. In such a system, the level of fatigue of the human can be calculated based on the level of collaboration, particularly the speed of movement execution, the posture assessment and other individual characteristics that differentiate the human from the robot [16].
Generally, the study of human factors and ergonomics in HRC aims to reduce the cognitive loads, the risks of work-related biomechanical injuries and the operator’s discomfort while performing a task [4].
On the one hand, non-invasive ergonomic techniques such as REBA and RULA, which are based on observation, have been proposed in the literature to limit the reliance on cumbersome wearables [13]. However, the accuracy of these descriptive methods is less than ideal for controlling the switching response of a robotic controller. These methods group a wide range of postures into the same ergonomic risk category, neglecting significant changes within a category. Furthermore, these techniques are mainly static and cannot investigate the human fatigue associated with the motion of the limbs. On the other hand, physiological models such as EMG, calcium ion (Ca²⁺) modeling and pH–muscle force contraction may be highly accurate, but they are too complex for physical ergonomics [50]. Therefore, substantial gaps remain in the unified validation and standardization of task allocation planning strategies that focus on ergonomics and operators’ safety.
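The coarseness of observation-based scoring can be illustrated with a toy example. The sketch below is not the published RULA/REBA tables; the thresholds are invented purely to show how a wide band of postures collapses into a single risk class, hiding gradual changes within the band.

```python
# Toy observational posture score (illustrative only; NOT the published
# RULA/REBA tables). A whole band of trunk-flexion angles maps to one risk
# class, so gradual changes inside a band are invisible to a controller
# that switches behavior on the class alone.
def trunk_risk_class(flexion_deg):
    if flexion_deg < 20:
        return 1   # low risk (assumed threshold)
    if flexion_deg < 60:
        return 2   # medium risk (assumed threshold)
    return 3       # high risk

for angle in (5, 25, 55, 75):
    print(angle, 'deg ->', trunk_risk_class(angle))
# 25 deg and 55 deg fall in the same class even though the postural load differs.
```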

3.2.2. Safety

As the intrinsic characteristic of human–robot collaboration for relieving the human from hazardous and strenuous tasks, physical safety is the primary challenge to be addressed in any method implementing collaboration [30]. Previous safety strategies in the manufacturing environment used different devices such as fences, demarcation and emergency stops in the robot working areas [63]. New safety challenges emerge because of continuous close contacts between the human and the robot in collaborative assembly. Given the unpredictability of human movements under certain conditions, the prevention of unexpected and unwanted collision between the human and the robot is critical in human–robot collaboration.
The authors in [63] used the hierarchical task analysis of the analytical hierarchy process to propose a decision-making method that considers four criteria, namely, safety, productivity, human fatigue and quality, to solve the problem of the robot manipulator obstructing the operator’s sight. However, the full description of the human tasks in collaborative assembly remains unclear, leading to a lack of safety standards that constitutes a hurdle to the wide acceptance of human–robot collaboration [5]. To ensure an effective and intuitive collaboration, the robotic controller should be given intelligence to understand and establish appropriate situational awareness that guarantees the safety and ergonomic compliance of the human [18]. A sense of trust in collaboration is necessary so that the robot’s paths can be automatically adjusted by the co-worker to avoid collision in a predictable manner so as to exclude the sense of fear and surprise [64].
The industrial safety specification ISO/TS 15066 describes the different design criteria that robot system and robot tool manufacturers should introduce into their designs, and it builds on the information in ISO 10218-1/2, as summarized in [30]. It comprises four levels: safety-rated monitored stop (SMS), hand guiding (HG), speed and separation monitoring (SSM) and power and force limiting (PFL). While these dedicated safety methods are inherent in collaborative robots, their applications have now also moved to industrial robots with enhanced control and sensing devices. Once safety in collaborative assembly is addressed, there remains a need to find ways to program an intuitive and interactive robot.
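The SSM principle can be pictured as a running distance check against a protective separation that grows with the human and robot speeds. The sketch below is a simplified illustration only; the actual ISO/TS 15066 expression contains additional uncertainty and intrusion terms, and all numeric values here are placeholders.

```python
# Simplified illustration of the speed-and-separation-monitoring (SSM) idea:
# keep the current human-robot distance above a protective separation that
# accounts for human approach during the robot's reaction and stopping time,
# the robot's own travel, and a clearance margin. The real ISO/TS 15066
# expression includes further uncertainty/intrusion terms; values are placeholders.
def protective_separation(v_human, v_robot, t_reaction, t_stop, clearance):
    human_travel = v_human * (t_reaction + t_stop)
    robot_travel = v_robot * t_reaction
    return human_travel + robot_travel + clearance

def ssm_action(distance_m, v_human=1.6, v_robot=0.5,
               t_reaction=0.1, t_stop=0.4, clearance=0.2):
    s_p = protective_separation(v_human, v_robot, t_reaction, t_stop, clearance)
    return 'continue' if distance_m > s_p else 'reduce speed / safety stop'

print(ssm_action(1.5))   # continue
print(ssm_action(0.6))   # reduce speed / safety stop
```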

3.3. Intelligent Controllers

Assembly operations that involve humans can be characterized by the random and uncertain behavior of the agents involved. This leads to unpredictable changes in the occurrence of events over time. In this probabilistic context, the collaborative state must be continuously integrated into the system’s response in terms of both what to execute and when to execute it.

3.3.1. Prediction

In the design of task allocation for HRC and assembly line balancing, there is ongoing research on intent prediction for the controller to accurately switch the robot response when detecting the human-planned action [26,35,65]. In predicting human behavioral changes, the authors in [66] showed that physical fatigue increases exponentially with working time and proposed a human fatigue model as a normalized function of time and the accumulated rate of human fatigue corresponding to different working intensities. Ferjani, Ammar [52] proposed a heuristic simulation-based optimization model that uses an adaptable dynamic assignment approach to minimize the mean flow time of jobs in a multi-skilled-worker manufacturing system that copes with the consequences of fatigue.
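The kind of exponential fatigue model described in [66] can be sketched as a normalized accumulation that saturates at 1 and grows at a rate depending on the working intensity. The rate constants below are assumptions chosen purely for illustration.

```python
# Minimal sketch of a normalized exponential fatigue-accumulation model of the
# kind described in [66]: fatigue grows with working time at a rate that
# depends on the working intensity and saturates at 1. Rate constants are
# illustrative assumptions, not fitted values.
import math

def fatigue_level(t_min, rate_per_min):
    """Normalized fatigue in [0, 1) after t_min minutes of continuous work."""
    return 1.0 - math.exp(-rate_per_min * t_min)

INTENSITY_RATES = {'light': 0.005, 'moderate': 0.015, 'heavy': 0.04}  # assumed
for label, k in INTENSITY_RATES.items():
    print(label, round(fatigue_level(30.0, k), 3))
```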
The authors in [67,68] predicted the motion intention of the human in collaborative assembly using impedance control. The authors in [67] used a Bayesian method for an adaptive controller to track a target impedance model and neural networks to compensate for uncertainties in robotic dynamics. The authors in [68] employed radial basis function neural networks (RBFNNs) to estimate the human motion intention in real time.
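As an illustration of the general idea behind such estimators, the sketch below fits radial-basis-function features with ridge regression to predict the next hand position from the two most recent samples. The synthetic trajectory, kernel width and regularization value are all invented; this is not the specific RBFNN architecture of [68].

```python
# Illustrative RBF-feature regression for short-horizon motion-intention
# estimation: predict the next 1-D hand position from the two most recent
# samples. Data and hyperparameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
pos = np.sin(t) + 0.01 * rng.standard_normal(t.size)   # synthetic hand trajectory

X = np.column_stack([pos[:-2], pos[1:-1]])   # inputs: two most recent positions
y = pos[2:]                                  # target: next position

centers = X[rng.choice(len(X), size=20, replace=False)]   # RBF centers
gamma = 5.0                                               # assumed kernel width

def rbf_features(samples):
    """Gaussian RBF features with respect to the fixed centers."""
    d2 = ((samples[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

Phi = rbf_features(X)
lam = 1e-3   # ridge regularization
W = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

pred = rbf_features(X[-1:]) @ W
print('predicted next position:', pred.item(), 'actual:', y[-1])
```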
The authors in [69] presented a prototype interface that implicitly describes regions in the configuration space. The approach uses high-level goals based on motion planning techniques. A human-aware robotic assistant [70] equipped with algorithms for motion prediction delivers the leveraged prediction of anticipatory behavior by planning in time. The authors in [71] proposed a framework for seamlessly adapting a robot’s behavior by learning the proactive associations between human hand gestures and the intended robotic manipulation actions.
The shared control of both the motions of the human and the robot via reinforcement learning is achieved without the need for the knowledge of human and robot dynamics [65,72]. Losey, McDonald [35] defined an intelligent controller that is capable of identifying the human intent in collaboration. The authors proposed three key themes: intent definition, intent measurement and intent interpretation in the shared control scenario. Such capabilities allow the system to iteratively negotiate its interaction ‘affordances’ with the human by dynamically adapting to shifting motion patterns during the process cycles. In so doing, the controller generates, in real time, the robot behaviors for an intelligent and collaborative execution of the task in terms of the velocity or position trajectory of the predicted forward path.
In a real and uncontrollable environment, the characteristics of human actions such as speed and position can exhibit great variability in the manner in which similar tasks are performed [73,74]. A task allocation algorithm is proposed by [75] for the automatic generation of task planning in the design of a hybrid layout and human–robot task allocation considering the human gesture. The intelligent decision-making method is based on a Robot Operating System (ROS) platform whereby each resource is represented as a service. The proposed prototype model enabled the introduction of a unified structure for an HR task allocation model.
Considering intelligence, human activity prediction in terms of workspace occupancy is presented in [76]. The probabilistic method presented is based on previous work [77] for collaborative tasks planning in close proximity. Using inverse optimal control, the authors gathered data from motion capture in order to find a cost function balancing different features in terms of the task space and the joint center distance.
Imitation learning or learning by demonstration is a machine learning approach of training an intelligent agent (robot) by mimicking or predicting human behavior in accomplishing a task [78,79]. Huang, Rozo [80] addressed the issue of high-dimensional inputs in minimizing the information loss for robot learning and imitating human motion patterns. They presented the kernelized motion primitives capable of mixing different trajectories to preserve the probabilistic properties of human demonstrations and the capability to adapt to multiple unseen situations.
While research on intent recognition is ongoing, the unpredictability of the human movements when physically overloaded in manual assembly makes it difficult for the robot to understand the human gesture-based intentions.

3.3.2. Action Planning and Motion Control

Conventional HRC paradigms are such that the robot recognizes a set of repeatable movements performed by the operator. This rigid collaborative requirement opposes the fundamentals of realism and natural collaboration. Human motion trajectory prediction is based on considerable uncertainties from the start to the end of a path [81]. Posture estimation [82] and motion planning [43,83] consider the robot and the human movements in driving the robot’s kinematics when participating in a collaborative task. Action planning is a key enabler such that the robot can adopt the worker’s behavior. Today’s robot systems with advanced force-limiting features make the scenario of continuous contact between the human and the robot now possible for low-speed operations [6].
Sensory systems for enhancing the robot’s awareness of the human’s intention and state include electromyography (EMG) [50,57], voice command [84], force torque limiting [85] and visual feedback [86]. Table 2 illustrates several control methods currently used in HRC.
A dynamic behavior control architecture is presented in [89] to reduce the conflicts between different robot agents in a co-manipulation task involving a human. A similar controller scenario is depicted in Figure 7 for motion control. The controller comprises the scheduling model, an interaction controller, the robot states and the human states. The scheduling model specifies the allocation of tasks to the human and to the robot according to the task management system. The interaction controller is based on a combinatorial search algorithm for generating the optimized control of the robot, as per the information received from the robot model and the human model. Both the human states and the robot states are monitored and fed back into the controller to improve the process efficiency.
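The loop implied by this architecture can be summarized schematically. The sketch below is not the controller of [89]; the state fields and the simple adaptation rule are assumptions used only to show how monitored human and robot states feed back into the next command.

```python
# Schematic sketch of the interaction-controller loop discussed around
# Figure 7 (scheduling model plus interaction controller, with human and
# robot states fed back each cycle). State fields and the adaptation rule
# are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class HumanState:
    fatigue: float        # normalized 0..1 (e.g. from vision/EMG estimation)
    position_ok: bool     # human outside the robot's immediate workspace

@dataclass
class RobotState:
    busy: bool
    speed_scale: float    # 0..1 fraction of nominal speed

def interaction_controller(human, robot, next_task):
    """One control cycle: choose the robot command from the monitored states."""
    if not human.position_ok:
        return {'command': 'safety_stop'}
    # Slow the robot and take over more work as human fatigue accumulates.
    speed = min(robot.speed_scale, 1.0 - 0.5 * human.fatigue)
    assignee = 'robot' if human.fatigue > 0.7 else 'human'
    return {'command': 'execute', 'task': next_task, 'assignee': assignee,
            'speed_scale': round(speed, 2)}

print(interaction_controller(HumanState(0.8, True), RobotState(False, 1.0), 'k3'))
```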
In reference [45], a task-oriented programming approach is presented, which follows a complexity-based rule whereby the task assignment responds to the robot’s characteristics and the operator’s abilities and then dynamically reassigns tasks to overcome disturbances or delays at the shop floor level. Programming by demonstration is another method, used in [84], that combines speech recognition and haptic control technologies to control a collaborative robot and to visualize these combined communication methods. Similarly, Danielsson, Syberfeldt [88] assessed instructions in human–robot collaborative assembly using a demonstrator. The study revealed that a demonstrator can be used to create a modular test environment that allows a test person to perform real assembly in collaboration with a robot.
Iterative planning is another approach in robot motion re-planning for dynamic obstacle avoidance [76,90]. This approach considers the human as a dynamic obstacle that the controller monitors iteratively for re-planning and executing the robot motion to avoid collisions. The need to improve the efficiency of human–robot collaboration has led to an increase in information sharing. For the robot to navigate the production environment to assist the human operator, localizing, tracking and sharing information about the motion of the agents is necessary.
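Such an iterative re-planning loop can be sketched as a repeated clearance check against the human treated as a moving obstacle. The straight-line "planner" and the fixed detour below are placeholders for a real motion planner; the geometry and thresholds are invented for illustration.

```python
# Minimal sketch of iterative re-planning with the human treated as a dynamic
# obstacle: each cycle the planner checks whether the current path still
# clears the (moving) human and re-plans only when it does not.
import math

def plan_path(start, goal, n=10):
    """Placeholder straight-line planner; a real system would use a motion planner."""
    return [(start[0] + (goal[0] - start[0]) * i / n,
             start[1] + (goal[1] - start[1]) * i / n) for i in range(n + 1)]

def path_is_clear(path, human_pos, clearance=0.4):
    return all(math.dist(wp, human_pos) > clearance for wp in path)

def detour(start, goal, height=0.6):
    """Placeholder re-plan: go up, across, and back down to avoid the human."""
    up, across = (start[0], height), (goal[0], height)
    return plan_path(start, up) + plan_path(up, across) + plan_path(across, goal)

start, goal = (0.0, 0.0), (1.0, 0.0)
path = plan_path(start, goal)
for human_pos in [(0.5, 0.5), (0.5, 0.1)]:      # the human moves toward the path
    if not path_is_clear(path, human_pos):
        path = detour(start, goal)              # iterative re-planning step
        print('re-planned; new path clear:', path_is_clear(path, human_pos))
    else:
        print('path still clear for human at', human_pos)
```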

3.3.3. Recognition and Communication

Smart manufacturing systems will require a great deal of situational awareness, in which the human and the machine execute their tasks based on shared information [91]. While collaboration entails a wide range of enablers, mutual awareness through timely information sharing is of key importance [92]. Communication is established between two or multiple agents in an HRC setup when physical barriers, noise and idle time are removed as much as possible [5]. In the context of hybrid assembly, the robot and the human form a community of agents that share information about the work and the state of the environment through sensors.
Essentially, there are two methods for communicating and recognizing the counterpart state: contact-based and contactless, as shown in Figure 8. The use of contact communication (mechanical) such as EMG [57] is mainly considered for the force limitation required for safety standards. However, force limitation does not enable natural communication between the human and the robot [34]. Hardware sensory devices such as EMG can come with an associated cost and impose increased discomfort on the human co-worker in the industrial setting.
During communication between the human and robot, information is generally conveyed as a feedforward command, which the robot receives to execute the instruction. Feedback is of particular interest for efficient collaboration. This allows for the leveraging of force-motion and action in much the same way as the human body uses sensors embedded in muscles to adjust interactions with the environment [35]. Two-way communication (feedforward and feedback) is anticipated for better information sharing in fast-paced human–robot collaboration [18].
The multi-modal fusion of information coming from different sensors shows that combining different communication channels provides higher accuracy and robustness when compared to the use of individual channels [5,92,93]. An example is provided in [94], where hand guiding and force power limiting are combined with vision sensors to improve the level of collaboration. Such approaches are based on (i) the independent recognition of commands such as verbal commands, gestures and gazes and (ii) the fusion of these information channels while managing contradictions and trade-offs. A similar approach in [95] makes use of data fusion by establishing a set of manufacturing capability indicators to obtain more accurate data as inputs for the assessment of the resources.
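A simple way to picture multi-modal fusion is confidence-weighted voting across channels. The channels, weights and per-channel scores below are invented for illustration; a real system would also handle timing and the contradictions and trade-offs noted above.

```python
# Toy confidence-weighted fusion of command hypotheses from several
# communication channels (gesture, voice, gaze). Channel weights and the
# per-channel confidences are invented for illustration.
from collections import defaultdict

CHANNEL_WEIGHTS = {'gesture': 0.5, 'voice': 0.3, 'gaze': 0.2}   # assumed

def fuse(channel_scores):
    """channel_scores: {channel: {command: confidence}} -> fused command."""
    fused = defaultdict(float)
    for channel, scores in channel_scores.items():
        weight = CHANNEL_WEIGHTS.get(channel, 0.0)
        for command, confidence in scores.items():
            fused[command] += weight * confidence
    return max(fused, key=fused.get)

observation = {
    'gesture': {'hand_over_part': 0.7, 'stop': 0.2},
    'voice':   {'stop': 0.9},
    'gaze':    {'hand_over_part': 0.6},
}
print(fuse(observation))   # 'hand_over_part' (0.47) outweighs 'stop' (0.37)
```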

3.4. Optimization Techniques

Manual operations cannot satisfy the demand for repeated human movements under load in collaborative assembly. Therefore, mathematical models could provide guidelines for making effective decisions within the current insufficient knowledge of shared assembly tasks. Indeed, assembly task planning can be categorized as a particular optimization problem. One of the challenges in collaborative operations is the minimization of the cycle time, irrespective of the variability of the manual processing time in executing the assembly tasks as compared to automation [96]. When it comes to manufacturing, the first problem is concerned with task allocation and modeling, for which mathematical models and computer languages can provide a quantitative description of the tasks to be performed [41]. The optimization of task allocation considers various modalities such as the part geometry, robot model and kinematics, as shown in Figure 9.
Adding new constraints to a single-board problem is easier and more readable with constraint programming (CP) than with mixed-integer linear programming (MILP) [97]. The authors’ comparison between MILP and CP revealed that CP offers superior computational performance for the ALBP of printed circuit boards comprising between 60 and 200 tasks. The authors in [98] proposed an online estimation of the quality of interaction between a human and a robot. Through the computation of fluency metrics, the authors measured the contribution of the human to the interaction. Zhang, Lv [26] used reinforcement learning to optimize the task sequence allocation in the HRC assembly process. A visual interface displays the assembly sequence to the operators to obey the decision of the human agent. The authors in [23] evaluated multiple criteria such as resource availability, suitability and processing time, which they integrated with a modular framework where the individual agents communicate over an ROS-based architecture.
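A toy constraint-programming formulation of the basic human–robot assignment decision can be written with Google OR-Tools CP-SAT, as sketched below. The task times are invented, and the model (assign each task to one resource and minimize the larger workload) is far simpler than the ALBP formulations compared in [97]; it only illustrates how constraints and the objective are stated declaratively in CP.

```python
# Toy CP-SAT model of human-robot task assignment: each task goes to exactly
# one resource and the objective is the makespan (larger of the two workloads).
# Task times are invented; no precedence, station or ergonomic constraints
# are modeled, unlike the ALBP formulations compared in [97].
from ortools.sat.python import cp_model

human_time = {'t1': 8, 't2': 5, 't3': 9, 't4': 4}   # seconds, assumed
robot_time = {'t1': 6, 't2': 7, 't3': 5, 't4': 6}   # seconds, assumed

model = cp_model.CpModel()
to_robot = {t: model.NewBoolVar(f'{t}_to_robot') for t in human_time}

horizon = sum(human_time.values()) + sum(robot_time.values())
load_h = model.NewIntVar(0, horizon, 'load_h')
load_r = model.NewIntVar(0, horizon, 'load_r')
model.Add(load_r == sum(robot_time[t] * to_robot[t] for t in to_robot))
model.Add(load_h == sum(human_time[t] * (1 - to_robot[t]) for t in to_robot))

makespan = model.NewIntVar(0, horizon, 'makespan')
model.AddMaxEquality(makespan, [load_h, load_r])
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    assignment = {t: ('robot' if solver.Value(v) else 'human') for t, v in to_robot.items()}
    print(assignment, 'makespan =', solver.ObjectiveValue())
```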
Weckenborg, Kieckhäfer [22] developed a genetic algorithm to minimize assembly lines’ cycle times for a given number of stations with collaborative robots. Stecke and Mokhtarzadeh [61] presented a use case of an assembly task of a base shaft module to demonstrate the impact of robot mobility on the performance of a hybrid assembly line. The authors used an energy expenditure model to analyze the advantages of collaborative robots in assembly lines. A combination of mixed-integer programming, constraint programming and a Benders decomposition algorithm reveals that the configuration for equipping an assembly line with robots is best when the ratio of robots to stations is near 0.7, with 37% mobile robots.
The implementation of a multi-modal interface for the fusion of different communication methods such as voice and gesture commands is well reported for the robust human–robot control architecture in manufacturing systems [5,27,93]. However, solutions based on speech recognition face numerous limitations such as the noise in the environment that is characteristic of a real production line. In summary, human and robot characteristics are often considered similar from mathematical and computer modeling perspectives [29]. A summary of the optimization methods reported in the literature for assembly task planning in HRC is captured in Table 3.
Beyond the computational complexity of mathematical modeling approaches and the use of cumbersome data acquisition methods such as direct EMG signals, alternative solutions to assembly systems design such as the virtualization [87] and visualization [99] of manufacturing processes have emerged in recent years. These tools have received growing interest in improving the product design for collaborative assembly for their non-reliance on physical set-ups and the associated reduction in safety risks.
Table 3. Optimization methods for HRC in the recent literature.
  • 2018, ref. [25]: Robot adaptation to human physical fatigue in human–robot co-manipulation. Optimization tool: DMP. Key feature: proposes a new human fatigue model in HRC based on the measurement of EMG signals.
  • 2019, ref. [55]: Sequence planning considering human fatigue for human–robot collaboration in disassembly. Optimization tool: DBA. Key feature: solved the sequence planning considering human fatigue in human–robot collaboration using a bee algorithm.
  • 2019, ref. [31]: A selective muscle fatigue management approach to ergonomic human–robot co-manipulation. Optimization tool: ML. Key feature: performed experiments on two different HRC tasks to estimate individual muscle forces and learn the relationship between the given configuration and endpoint force inputs and muscle force outputs.
  • 2020, ref. [100]: Mathematical model and bee algorithms for the mixed-model assembly line balancing problem with physical human–robot collaboration. Optimization tools: MILP, BA, ABC. Key feature: presented a mixed-model assembly line balancing problem using a combination of MILP, BA and ABC algorithms; the proposed model and algorithms offer a new line design for increasing assembly line efficiency.
  • 2020, ref. [101]: Bound-guided hybrid estimation of distribution algorithm for energy-efficient robotic assembly line balancing. Optimization tool: BGS. Key feature: proposed a bound-guided sampling method within a multi-objective mathematical model for solving the energy-efficiency problem of robotic assembly line balancing.
  • 2020, ref. [97]: Scheduling of human–robot collaboration in the assembly of printed circuit boards: a constraint programming approach. Optimization tools: MILP, CP. Key feature: a comparison between MILP and CP reveals that CP offers superior computational performance for ALBP instances comprising between 60 and 200 tasks.
  • 2020, ref. [22]: Balancing of assembly lines with collaborative robots. Optimization tools: MILP, GA. Key feature: developed a genetic algorithm to minimize assembly line cycle times for a given number of stations with collaborative robots.
  • 2021, ref. [61]: Balancing collaborative human–robot assembly lines to optimize the cycle time and ergonomic risk. Optimization tools: MILP, CP, BD. Key feature: MILP, CP and BD algorithms were developed for sensitivity analysis of the benefits of human–robot collaboration in assembly lines; regression lines can help managers determine how many robots should be used for a line.
  • 2022, ref. [26]: A reinforcement learning method for human–robot collaboration in assembly tasks. Optimization tool: RL. Key feature: reinforcement learning is used to optimize the task sequence allocation in the HRC assembly process; a visual interface displays the assembly sequence to the operators to obey the decision of the human agent.
  • 2022, ref. [13]: A dynamic task allocation strategy for mitigating the human physical fatigue in collaborative robotics. Optimization tool: DNN. Key feature: a non-intrusive online fatigue algorithm that predicts the joint muscle activation associated with the human motion; the estimation process allocates the task activities based on a sophisticated musculoskeletal model and a 3D vision system that tracks the human motion in real time.
  • 2022, ref. [12]: Development of an integrated virtual reality system with wearable sensors for the ergonomic evaluation of human–robot cooperative workplaces. Key feature: an ergonomic analysis strategy based on human-in-the-loop virtual reality technology; the system uses a mixed-prototyping strategy involving a VR environment, computer-aided design (CAD) objects, wearable sensors and human subjects.
Notes: ALBP: Assembly line balancing problem; AD: Algorithm development; MM: Mathematical modeling; OT: Optimization tool; DMP: Dynamic movement primitive; DBA: Discrete bee algorithm; ML: Machine learning; MILP: Mixed-integer linear programming; BA: Bee algorithm; ABC: Artificial bee colony; BGS: Bound-guided sampling; CP: Constraint programming; GA: Genetic algorithm; BD: Benders decomposition; RL: Reinforcement learning; DNN: Deep neural network.

3.5. Digital Interface

Previous research has focused on using computer modeling to better identify the system requirements for human–machine task analysis. A way to quickly and safely design and test a manufacturing process such as HRC is by utilizing a virtual space. In Hernández, Sobti [69], motion planning in an augmented reality (AR) interface increased the robot’s autonomy and decision-making capabilities, thereby allowing the human to make more general and open requests. Matsas, Vosniakos [102] positively judged the application of virtual reality (VR) for the experimentation of complex interaction metaphors, especially for the use of cognitive aids.
The experiments in [36] demonstrated the feasibility of pHRI through a VR approach in which the operator achieves the necessary comfort functions. Computer simulation is also used to map a digital counterpart of an HRC work environment in [103,104]. Digital twins help establish each entity in the virtual space, whereby the physical assembly space is driven by real-time simulation, analysis and decision making of the mapping process [5].
Malik, Masood [43] developed a unified framework for integrating human–robot simulation with VR as an event-driven simulation to estimate the H–R cycle times and develop a process plan, layout optimization and robot control. Ji, Yin [105] presented a novel programming-free automated assembly planning and control approach based on virtual training. The variety of goals contained within an HRC assembly system requires special applications for the modeling, simulations and predictive visualization of the collaborative system’s performance. VR and AR offer the interface in which multiple scenarios and components can be configured and tested, as shown in Figure 10.
With the widespread adoption of the digital human model (DHM), the realism and effectiveness of virtual manufacturing planning now enable the experiment of complex process assessments such as motion control and postures prediction. From the static postures of the DHM in the virtual environment, the designers can interpolate key postures to generate a continuous movement [106]. This can be achieved by inserting the anthropometric data of targeted users into a computer-generated environment for the virtual ergonomic evaluation of the human fit with the workstation [107,108].
The integration of biomechanical parameters enables the evaluation of various workload scenarios within the simulation of the DHM. Because it is impractical to infer all functions of a real human, DHMs are generated with simplified features according to specific needs. It may therefore be necessary to model a set of specific postures. During the planning of human–robot collaborative systems for the analysis of physical fatigue, a DHM is used as the complementary agent for the upper body motion study in the interaction with the virtual collaborative robot, as shown in Figure 11. Once the data from the iterative analysis of the postural risks R_xn at times T_n and the desired motion patterns are acquired, the computation of the training data follows for the robotic interaction controller that identifies the variation (∆) in motion patterns.
Given the stream of continuous movements that moving systems exhibit during their daily routine, a fundamental question that remains is to determine the initial blocks that, looped together, build and execute the motion controls of both artificial and biological entities [109]. The authors in [42] discuss how simulation software for collaborative assembly lines was previously limited to the modeling of plain action sequences. Progress in virtual technology now enables 3D simulation to automatically generate a work cell with the allocation of tasks between the human and the robot resources [75]. The conceptual simulation in [49] improved the cycle time, ergonomic factor and human utilization in the collaboration modes presented and proved the possibility of HRC application in mold assembly.
The authors in [12] proposed a novel collaborative assembly design strategy based on virtual reality for ergonomic assessment. The system was made up of four key components: virtual reality devices for the human immersion and interaction, a robotic simulator for modeling the robot in the working environment, and surface EMG sensors and accelerometers for measuring the human ergonomic status. After applying the system to a real industrial use case related to a human–robot task in the automotive industry, it was found that the methodology can effectively be applied in the analysis of physical conditions in human–robot interaction. This endows the co-worker with self-awareness of their ergonomic status and safety conditions while directly performing the task in the immersive virtual environment. A similar virtual collaborative task-planning set-up is shown in Figure 12.
Given the recent development in the field of vision sensors, VR as a synthetic environment can be used to handle some of the engineering and testing problems in machine vision (MV). It has become possible to develop frameworks for human–robot teams to work collaboratively through gesture recognition [71]. A priori, virtual reality and computer vision may seem to be research areas in HRC with opposite objectives. Yet, human situational parameters can be monitored with fixed systems such as cameras, and through cognitive enablers, smart actuators can provide triggers to change the system state (flow) if the operator pace is degrading due to fatigue [2]. In the simulated environment, MV provides the enablers needed to implement intelligent agents within the virtual environment. In the immersive test environment, MV captures the sensing information of the real human in terms of movement pace. Then, the behavior of the virtual robot is positioned and arranged as the human situation changes.

4. Analysis of Results

This section first presents the descriptive results of the study. Second, the results obtained from the content review to derive the corresponding relationship in the main clusters are analyzed. These are the most- and least-represented themes from the literature survey: ‘task allocation’ and ‘fatigue’, respectively. Given that fatigue appeared as the least-studied theme in the survey, it became evident that the study of human physical fatigue in collaborative robotics in general and in task allocation in particular is a research gap. Therefore, the sub-cluster motion planning (action recognition) is established in terms of a number of the sub-criteria addressed in the research.

4.1. Descriptive Results

In Figure 13, four parameters of the research analysis are shown, namely: the yearly trend of publication, the type of study, the type of publication and the findings. For the final set of articles analyzed, it is possible to observe a large concentration of articles after 2017 that speak to the specific context of human–robot collaboration in assembly, as shown in Figure 13a. In terms of the type of study, 83% of the papers published are related to experimental studies or a simulated environment (98 papers), while 19 papers were investigative or literature reviews (see Figure 13b). Regarding the type of publication, 94 articles were journal papers, while 24 were conference papers (see Figure 13c). Finally, the research findings were reported as: 16 papers proposed guidelines for the design of collaborative assembly, 84 papers developed methods and tools and 18 papers were a mix of both guidelines and application (see Figure 13d).

4.2. Research Trends in Collaborative Assembly

Table 4 shows the annual production of papers per cluster over the period 2017–2022. According to the total paper production per cluster (Figure 14) and the relative growth of focus areas (Table 4), it is evident that the main research themes in collaborative assembly are, on average, growing over the period studied.
In Figure 14, cluster 1 (task allocation) presents the highest annual average growth compared to the other clusters. This can be associated with the fact that the problem of task allocation encompasses the largest number of sub-themes to be considered for human–robot co-assembly. Cluster 2 (fatigue) and cluster 3 (mathematical optimization) show the most regular growth over the period studied, even though this can only be observed from 2020.
In the initial years of the period considered, the term ‘fatigue’ featured only sparsely in publications, most often as a sub-field of ergonomics. Nowadays, the growth in paper production with physical fatigue as the main theme is significant. This increase can be related to the growing importance of human-centered technologies for future sustainable industrial operations. The growth in mathematical optimization approaches is also understandable, given that mathematical optimization can rapidly generate good solutions to a problem and constitutes a non-intrusive method of task allocation in human–robot collaboration. Mathematical optimization offers the possibility of testing multiple variables of the problem configuration without requiring physical equipment.
Cluster 4 (motion control), cluster 5 (action planning), cluster 7 (adaptation) and cluster 8 (prediction), viewed from the ‘safety’ (cluster 9) perspective, show relatively slow annual growth. This suggests that the study of safety in manufacturing in general, and in HRC in particular, is a consolidated and well-established knowledge area. However, the data evaluation points to studies focusing on unexpected and accidental contact rather than on the reduction of human postural risks. Finally, cluster 6 (virtual reality and simulation) shows a high concentration in the years 2018 and 2019, followed by a sharp drop in 2020. The topic witnessed a revival in 2021, whereby new scenarios such as ergonomics [12,13,110,111] and task allocation [36,108,112,113] were increasingly studied in the virtual environment.
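For reproducibility, year-on-year growth figures of the kind reported in Table 4 can be obtained with a simple relative-change calculation; the sketch below is illustrative only (the percentages in Table 4 may have been derived with a slightly different convention) and uses the task-allocation counts as input.

```python
def yearly_growth(counts_by_year: dict[int, int]) -> dict[int, float]:
    """Year-on-year percentage growth of paper counts per cluster, skipping zero bases."""
    years = sorted(counts_by_year)
    growth = {}
    for prev, curr in zip(years, years[1:]):
        if counts_by_year[prev] == 0:
            continue  # growth is undefined from a zero base (e.g., the fatigue cluster in 2017)
        change = counts_by_year[curr] - counts_by_year[prev]
        growth[curr] = round(100.0 * change / counts_by_year[prev], 2)
    return growth

# Task-allocation counts taken from Table 4:
task_allocation = {2017: 5, 2018: 10, 2019: 11, 2020: 9, 2021: 16, 2022: 18}
print(yearly_growth(task_allocation))
# The 2021 -> 2022 step, for instance, gives (18 - 16) / 16 = +12.5%, matching Table 4.
```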

4.3. Task Allocation and Fatigue Management Clusters

The ‘task allocation’ and ‘fatigue’ clusters and sub-clusters are shown in Table 5. It provides a concise description of the analytical gap, whereby human motion planning and the robotic trajectory are a function of both task allocation and physical fatigue. As discussed in the introduction, task allocation was revealed as the main theme documented in the sampled papers. Surprisingly, none of the sampled works on programming by demonstration, which seek to understand and imitate human posture and movement, investigated the inherent human fatigue characteristics that may alter work compliance. In the vision-based sub-cluster, [13] discussed motion planning in the context of both the task allocation and fatigue components in the design of collaborative assembly operations. This is in line with the requirements of non-invasive methods of task synchronization.

5. Discussions and Outlook

This review sought to answer a number of questions raised in the introduction, which can provide insight into the growing research themes and challenges in collaborative assembly. In terms of RQ1, it can be observed that the number of publications on collaborative assembly is growing rapidly. This is associated with the fact that automation and robotics in the manufacturing industry are shifting from mass production to mass customization, increasing research and development for unstructured industrial environments. Indeed, intelligent task allocation in collaborative assembly involves increasingly complex design in order to meet the requirements of smart manufacturing systems. Changing product requirements and growing socio-economic needs push industries and researchers to provide novel solutions in human–machine collaboration [8].
In answer to RQ2, the review revealed that control architectures for task scheduling and motion planning in intuitive co-assembly robots are promising research areas for fatigue evaluation. It is worth noting that while close interaction may increase assembly efficiency, it also increases the discomfort of the co-worker [4]. Previous task allocation methods have given little consideration to HRC assembly planning based on the time-varying levels of fatigue that the operator experiences. Traditional computational approaches lacked the appropriate models to reproduce with fidelity the real dynamic behavior of human-centered production systems [116].
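As a minimal, hypothetical sketch of the idea (not a method taken from any of the cited works), the allocation routine below shifts robot-capable tasks away from the operator as an external fatigue estimate rises, shrinking the physical load the human is asked to carry. In a real system, the fatigue estimate would come from the sensing or modeling approaches discussed above, and re-allocation would additionally respect precedence and cycle-time constraints.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    physical_load: float   # normalized ergonomic load of the task, 0..1
    robot_capable: bool    # whether the robot can take the task over

def allocate(tasks: list[Task], fatigue: float, threshold: float = 0.6) -> dict[str, str]:
    """Assign each task to 'human' or 'robot' given a fatigue estimate in [0, 1].

    The higher the estimated fatigue, the lower the load a task may have and
    still remain with the human; robot-incapable tasks always stay human-side.
    """
    allocation = {}
    allowed_load = threshold * (1.0 - fatigue)   # shrinking human load budget
    for task in tasks:
        if task.robot_capable and task.physical_load > allowed_load:
            allocation[task.name] = "robot"
        else:
            allocation[task.name] = "human"
    return allocation

# Example: as fatigue grows from 0.1 to 0.7, the heavy 'lift housing' task migrates to the robot.
tasks = [Task("lift housing", 0.5, True), Task("insert clip", 0.2, True), Task("visual check", 0.1, False)]
print(allocate(tasks, fatigue=0.1))  # {'lift housing': 'human', 'insert clip': 'human', 'visual check': 'human'}
print(allocate(tasks, fatigue=0.7))  # {'lift housing': 'robot', 'insert clip': 'robot', 'visual check': 'human'}
```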
The implementation of manufacturing systems under Industry 4.0 requires data and computation. An apparent challenge in the design of collaborative assembly robots is the reliance on wearable, dependable sensors for data acquisition. In this context, it becomes relevant to provide methodologies and an interface for the evaluation and testing of collaborative assembly systems. Before the technology and sensors are fully integrated, a virtual representation can help in dynamically mapping the spatial and temporal evolution of the HRC assembly system. Once the robotic simulation interface is in place, visualizing the performance of hybrid assembly systems appears much more effective than deploying actual humans and robots on the shop floor.
There is a recognition that Industry 4.0 is transforming assembly operations into highly connected processes. Computationally efficient controllers that target highly dynamic environments and unpredictable human motions are being introduced to analyze situational changes in real time and adapt the robot’s behavior accordingly [40,117].
The growing body of publications in industrial HRC points to the constant monitoring and adaptation of the system. Considering the emerging production and manufacturing trends, the research objectives of this study identified new technologies and theories, presented in Section 3, that make intelligent task design in collaborative assembly increasingly feasible.
Robots are suited to performing repetitive operations with high accuracy, which is ideal for continuous production lines. The major handicap at the human level is the ergonomic imbalance associated with using the upper body and the arms to perform repetitive assembly activities [118]. Such scenarios represent new study areas in controller design for managing the human–robot interaction dynamics that cause interruptions and variations in the production line [56].
Although the literature review shows that human–robot collaboration has many facets, it remains critical that operations be planned in anticipation of obtaining feasible task allocation strategies with the lowest ergonomic risk [61]. Ergonomic considerations are increasingly introduced in assembly line balancing, whereby the physical effort of operators, as well as the fatigue they experience, appears crucial for future research [119].
As noted in the introduction and also suggested in [4], there is growing interest in the literature in specific works on human fatigue in HRC [6,50]. The literature survey and research gap analysis correlate with similar findings indicating that research on human fatigue management in HRC has grown significantly since 2017. Although the hardware (mechanical) components of HRC systems are the critical enablers, this paper was dedicated to investigating the intermediate design interface.
As a human-centered system, the requirements for modeling all the possible configurations of the deviation of the system’s performance (cognitive fatigue, muscle fatigue, reach analysis, gait analysis, sight analysis and so forth) are immense. However, the recent advances in systems modeling, physics-based simulations and virtual environments facilitate the automated generation and visualization of multi-modal parameters. Digital interfaces [43,64,87,120] for the unified simulation of human–robot collaboration can mitigate the above, offer minimal safety risk and provide multiple interaction scenarios when compared to physical set-ups. Considering this, the system controllers continuously track the state of the evolving virtual environments in an awareness process for estimating the current situation and predicting the future states [1].
An enhanced perception of the environment through human visual feedback can also be achieved by AR [88] and VR [12] approaches, enabling the human partner to observe and review the adaptive path of the robot prior to execution. This can increase the number of interaction channels; however, it can also add cognitive load to the operator. Alternatively, combinatorial sensory modalities coupled with fatigue models can be used to estimate the degradation of human movement patterns during HRC, which is likely caused by excessive levels of physical fatigue [50].
Human factors and ergonomics (HFE), comprising physical, organizational and cognitive ergonomics, contribute to socially sustainable manufacturing and continue to gain the attention of safety, health and environmental professionals [121]. Indeed, metrics for cognitive ergonomics, gender and task performance are important aspects of sustainable collaborative assembly [122]. However, cognitive aspects were beyond the scope of this study.
The specific focus was constructed around finding non-intrusive solutions for modeling the human physical interaction with the collaborative robot. The study supports similar findings [36,99,123] in that virtual assembly simulation holds significant potential for designing and evaluating multiple HRC assembly scenarios for the analysis of human fatigue. An advanced functionality of a virtual HRC assembly controller might therefore be to track and predict the trajectory of the upper limbs once metabolic fatigue has occurred, and then estimate, from sufficient sampling, how the time needed to execute the same task varies relative to when the energy level is high. Using this information, the controller can be aware of the fatigue-constrained interaction states and anticipate the appropriate corrective behavior.
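A minimal sketch of such a duration-based estimate is given below, assuming only that the controller can log cycle times for the same task when the operator is rested and again later in the shift; the 50% saturation point is an arbitrary illustrative choice, not a validated parameter.

```python
import statistics

def fatigue_index(rested_durations: list[float], recent_durations: list[float]) -> float:
    """Crude fatigue indicator in [0, 1] from the relative slow-down of task execution.

    rested_durations: task cycle times sampled while the operator's energy level is high.
    recent_durations: the most recent cycle times for the same task.
    """
    baseline = statistics.mean(rested_durations)
    current = statistics.mean(recent_durations)
    slowdown = max(0.0, (current - baseline) / baseline)  # relative increase in duration
    return min(1.0, slowdown / 0.5)  # saturate: a 50% slow-down maps to 'fully fatigued'

# Example: the same pick-and-place step took ~10 s when rested and ~13 s after two hours.
print(fatigue_index([9.8, 10.1, 10.0], [12.9, 13.2, 13.0]))  # ≈ 0.62
```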
Regarding RQ3, research prospects in HRC design point to the advancement of fast prototyping methods. Further work will investigate the architecture for the design of human-in-the-loop robotic interaction and validate the solution with the help of machine vision in the virtual environment, as shown in Figure 15. As in [115], the segmentation, classification and prediction of ongoing human actions is based on spatiotemporal characteristics. Accordingly, digital vision sensors are a preferred sensing modality due to their high signal-to-noise ratio and the resulting noise tolerance [112].
Vision sensors are used for object recognition in the environment and can recognize body gestures [30]. During such experiments, the fatigue detection algorithm computes the variation between two motion events from the motion estimation catalog. The combinatorial algorithm estimates the probability of the next action segment occurring in the prescribed future space and time. A human motion transition probability is generated to estimate the deviation between consecutive motion patterns. Based on this comparison, human fatigue is detected as a degradation of the pace of movement, and the performance of the worker is established [124]. In realistic contexts, human characteristics are expected to vary over time, which can be the result of an increasing level of human fatigue.
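The sketch below illustrates the two ingredients described here under simplifying assumptions: a first-order transition-probability model over catalogued motion segments, and a pace-degradation check that flags fatigue when an observed segment takes markedly longer than its nominal duration. The segment names and thresholds are hypothetical.

```python
from collections import defaultdict

def transition_probabilities(sequences: list[list[str]]) -> dict[str, dict[str, float]]:
    """Estimate first-order transition probabilities between catalogued motion segments."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nexts.values()) for b, n in nexts.items()}
            for a, nexts in counts.items()}

def pace_degradation(nominal_s: float, observed_s: float, limit: float = 1.3) -> bool:
    """Flag fatigue when an observed segment exceeds 'limit' times its nominal duration."""
    return observed_s > limit * nominal_s

# Example: catalogued reach -> grasp -> place cycles recorded during rested operation.
history = [["reach", "grasp", "place"], ["reach", "grasp", "place"], ["reach", "place"]]
probs = transition_probabilities(history)
print(probs["reach"])                                    # ≈ {'grasp': 0.67, 'place': 0.33}
print(pace_degradation(nominal_s=2.0, observed_s=2.9))   # True: segment is ~45% slower
```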
For simplicity, the visualization of the reactive hybrid behavior of the agents can rely on at least partially shared representations of the environment in which they operate. A shared virtual representation provides a digital twin [103,104] that is dynamically mapped to the temporal evolution of the system in real time. This is the prerequisite for aligning the agents’ (joint) goals, roles, plans and activities with the physical production environment [5]. Another advantage is the integration of the decision-making framework with a 3D simulation tool, enabling the calculation of criteria in a simulation mode and the visualization of the result within a short timeframe [75]. In this way, the user is able to validate the proposed layout and preliminarily check the simulation of the human–robot tasks.
Current flexible robots are designed for low-payload and low-speed operations. Given the shorter lifecycles and the high degree of customization of today’s products, it is anticipated that future human–robot collaborative assembly operations will be characterized by continuous high-speed operations that expose the human operator to repetitive, short cycle times. In such applications, the challenges of task assignment and the number of possible assembly configurations raise the questions of physical evaluation and fatigue management during assembly line balancing. Hence, aspects related to physical ergonomics appear to be the most promising research field for task allocation in human–robot collaborative assembly. In summary, further research for achieving the objectives of hybrid assembly in HRC can be aligned with the following:
  • The design of collaborative assembly solutions that focus on the advancement of adaptive and non-intrusive task scheduling methodologies. Such control methods should enable a reduction in the human workload during the work cycle, according to the operator’s physical conditions and performance.
  • Human safety is paramount in collaborative assembly. Notwithstanding physical safety as the most important requirement for human–robot collaboration, sustainable human–robot collaboration must be able to monitor and interpret the human states. Therefore, the collaborative systems must be able to generate and interpret a substantial amount of real-time data about the operator’s psychophysical conditions.
  • Numerous communication techniques between the robot and the human are reported in the literature. Impedance control, gesture commands, voice commands and haptics have proven useful for the robotic control of specific task execution. However, these communication methods are subject to noise and interruptions. Better work ergonomics can be achieved through real-time multi-modal communication for context-aware HRC. The multiplicity of signaling modalities is characteristic of a natural interaction between multiple assembly agents.
  • With the advancements in data integration and simulation analytics, the consideration of fatigue management in intuitive human–robot collaborative tasks can accelerate the development of an interface for high-level hybrid collaboration. Future research can envision the virtual integration of hybrid assembly process planning with fatigue analysis tools. Sensor-less methodologies such as digital twins can improve the prediction accuracy of energy degradation and enable the visualization of the requirements for task execution and workload balance at the early design stage, as illustrated by the sketch after this list.
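As a design-stage illustration of the last point, the sketch below runs a simple, generic fatigue-accumulation and recovery model (inspired by, but not reproducing, the fatigue models cited above) over two candidate allocations of the same assembly cycle; in a digital-twin environment, the hand-written load values would be replaced by simulated ones.

```python
def simulate_fatigue(schedule: list[tuple[str, float, float]],
                     fatigue_rate: float = 0.05,
                     recovery_rate: float = 0.08,
                     dt: float = 1.0) -> list[float]:
    """Simulate a normalized fatigue level over a task schedule.

    schedule: list of (task_name, duration_s, relative_load) tuples, where relative_load
              is 0 for rest/robot-handled steps and up to 1 for heavy manual steps.
    Returns the fatigue trajectory sampled every dt seconds.
    """
    fatigue, trajectory = 0.0, []
    for _, duration, load in schedule:
        for _ in range(int(duration / dt)):
            if load > 0:
                fatigue += (1.0 - fatigue) * fatigue_rate * load * dt  # accumulation toward saturation
            else:
                fatigue -= fatigue * recovery_rate * dt                # exponential recovery at rest
            trajectory.append(fatigue)
    return trajectory

# Compare two candidate allocations of the same cycle at the design stage:
manual_heavy = [("lift", 30, 0.9), ("fasten", 20, 0.4), ("inspect", 10, 0.0)]
robot_assisted = [("lift (robot)", 30, 0.0), ("fasten", 20, 0.4), ("inspect", 10, 0.0)]
print(round(simulate_fatigue(manual_heavy)[-1], 2), round(simulate_fatigue(robot_assisted)[-1], 2))
# -> 0.36 0.14: the robot-assisted allocation ends the cycle at a much lower fatigue level.
```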

6. Conclusions

This paper investigated the emerging literature on the implementation methods of human–robot collaborative assembly. Unlike previous research on ergonomic safety, this paper discussed the human fatigue element and the methods for integrating it into the design of task allocation. Less anticipated in this research direction is the combination of virtual reality techniques with non-intrusive sensory modalities such as MV and digital twins, which show promising implementations for design and safety testing. These tools increasingly hold common ground for merging dynamic and real-time monitoring systems into the analysis of human productivity, whereby subsequent simulation software can study the operators’ ergonomics and fatigue.
In line with the expectations of the sustainable smart industry, a long-standing goal of the human–robot collaborative system is to increase the collaborative working efficiency. The emergence of multi-modal communication, together with the progress in hardware design and software development, now enable the implementation of various control modalities into the robotic platform.
The investigation into how physical workload influences human performance in collaborative scenarios has been shown to be a growing area of research. Task planning for verifying sub-task resource allocation, together with modeling and understanding human interaction and behavior in robot collaboration, is crucial to the development of future smart factories. As industry increasingly deploys real human–robot co-assembly systems, the development of non-intrusive methods for the monitoring, prediction and adaptation of the robotic controller to varying levels of human performance is a promising research area that can drive efforts in collaborative assembly design.

Author Contributions

All authors contributed substantially to the reported work. Conceptualization, T.Y.C., J.A.S. and A.M.K.; methodology, M.G.K.-K. and K.D.; writing—original draft preparation, T.Y.C.; writing—review and editing, A.M.K., T.Y.C., J.A.S. and K.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analysed in this study. Data sharing is not applicable to this study.

Acknowledgments

The authors acknowledge the Department of Industrial Engineering and F’SATI at Tshwane University of Technology, Pretoria, South Africa for their ongoing support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Park, C.Y.; Laskey, K.B.; Salim, S.; Lee, J.Y. Predictive situation awareness model for smart manufacturing. In Proceedings of the 2017 20th International Conference on Information Fusion (Fusion), Xi’an, China, 10–13 July 2017; IEEE: Piscataway, NJ, USA, 2017. [Google Scholar]
  2. Cimini, C.; Pirola, F.; Pinto, R.; Cavalieri, S. A human-in-the-loop manufacturing control architecture for the next generation of production systems. J. Manuf. Syst. 2020, 54, 258–271. [Google Scholar] [CrossRef]
  3. Gil, M.; Albert, M.; Fons, J.; Pelechano, V. Engineering human-in-the-loop interactions in cyber-physical systems. Inf. Softw. Technol. 2020, 126, 106349. [Google Scholar] [CrossRef]
  4. Gualtieri, L.; Rauch, E.; Vidoni, R. Emerging research fields in safety and ergonomics in industrial collaborative robotics: A systematic literature review. Robot. Comput.-Integr. Manuf. 2021, 67, 101998. [Google Scholar] [CrossRef]
  5. Wang, L.; Gao, R.; Váncza, J.; Krüger, J.; Wang, X.V.; Makris, S.; Chryssolouris, G. Symbiotic human-robot collaborative assembly. CIRP Ann. 2019, 68, 701–726. [Google Scholar] [CrossRef] [Green Version]
  6. Ajoudani, A.; Zanchettin, A.M.; Ivaldi, S.; Albu-Schäffer, A.; Kosuge, K.; Khatib, O. Progress and prospects of the human–robot collaboration. Auton. Robot. 2018, 42, 957–975. [Google Scholar] [CrossRef] [Green Version]
  7. Gervasi, R.; Mastrogiacomo, L.; Franceschini, F. A conceptual framework to evaluate human-robot collaboration. Int. J. Adv. Manuf. Technol. 2020, 108, 841–865. [Google Scholar] [CrossRef]
  8. Malik, A.A.; Bilberg, A. Collaborative robots in assembly: A practical approach for tasks distribution. Procedia CIRP 2019, 81, 665–670. [Google Scholar] [CrossRef]
  9. Teiwes, J.; Bänziger, T.; Kunz, A.; Wegener, K. Identifying the potential of human-robot collaboration in automotive assembly lines using a standardised work description. In Proceedings of the 2016 22nd International Conference on Automation and Computing (ICAC), Colchester, UK, 7–8 September 2016; IEEE: Piscataway, NJ, USA. [Google Scholar]
  10. Smith, T.; Benardos, P.; Branson, D. Assessing worker performance using dynamic cost functions in human robot collaborative tasks. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 2020, 234, 289–301. [Google Scholar] [CrossRef]
  11. Ciccarelli, M.; Papetti, A.; Cappelletti, F.; Brunzini, A.; Germani, M. Combining World Class Manufacturing system and Industry 4.0 technologies to design ergonomic manufacturing equipment. Int. J. Interact. Des. Manuf. 2022, 16, 263–279. [Google Scholar] [CrossRef]
  12. Caporaso, T.; Grazioso, S.; Di Gironimo, G. Development of an integrated virtual reality system with wearable sensors for ergonomic evaluation of human–robot cooperative workplaces. Sensors 2022, 22, 2413. [Google Scholar] [CrossRef]
  13. Messeri, C.; Bicchi, A.; Zanchettin, A.M.; Rocco, P. A Dynamic Task Allocation Strategy to Mitigate the Human Physical Fatigue in Collaborative Robotics. IEEE Robot. Autom. Lett. 2022, 7, 2178–2185. [Google Scholar] [CrossRef]
  14. Ciccarelli, M.; Papetti, A.; Scoccia, C.; Menchi, G.; Mostarda, L.; Palmieri, G.; Germani, M. A system to improve the physical ergonomics in Human-Robot Collaboration. Procedia Comput. Sci. 2022, 200, 689–698. [Google Scholar] [CrossRef]
  15. Abdous, M.-A.; Delorme, X.; Battini, D.; Sgarbossa, F.; Berger-Douce, S. Assembly Line Balancing Problem with ergonomics: A new fatigue and recovery model. Int. J. Prod. Res. 2022, 61, 693–706. [Google Scholar] [CrossRef]
  16. Mura, M.D.; Dini, G. Job rotation and human–robot collaboration for enhancing ergonomics in assembly lines by a genetic algorithm. Int. J. Adv. Manuf. Technol. 2021, 118, 2901–2914. [Google Scholar] [CrossRef]
  17. Kolbeinsson, A.; Lagerstedt, E.; Lindblom, J. Foundation for a classification of collaboration levels for human-robot cooperation in manufacturing. Prod. Manuf. Res. 2019, 7, 448–471. [Google Scholar] [CrossRef] [Green Version]
  18. Inkulu, A.K.; Bahubalendruni, M.R.; Dara, A.; SankaranarayanaSamy, K. Challenges and opportunities in human robot collaboration context of Industry 4.0-a state of the art review. Ind. Robot. Int. J. Robot. Res. Appl. 2022, 49, 226–239. [Google Scholar] [CrossRef]
  19. Papanastasiou, S.; Kousi, N.; Karagiannis, P.; Gkournelos, C.; Papavasileiou, A.; Dimoulas, K.; Baris, K.; Koukas, S.; Michalos, G.; Makris, S. Towards seamless human robot collaboration: Integrating multimodal interaction. Int. J. Adv. Manuf. Technol. 2019, 105, 3881–3897. [Google Scholar] [CrossRef]
  20. Bruno, G.; Antonelli, D. Dynamic task classification and assignment for the management of human-robot collaborative teams in workcells. Int. J. Adv. Manuf. Technol. 2018, 98, 2415–2427. [Google Scholar] [CrossRef]
  21. Chen, F.; Sekiyama, K.; Huang, J.; Sun, B.; Sasaki, H.; Fukuda, T. An assembly strategy scheduling method for human and robot coordinated cell manufacturing. Int. J. Intell. Comput. Cybern. 2011, 4, 487–510. [Google Scholar] [CrossRef]
  22. Weckenborg, C.; Kieckhäfer, K.; Müller, C.; Grunewald, M.; Spengler, T.S. Balancing of assembly lines with collaborative robots. Bus. Res. 2020, 13, 93–132. [Google Scholar] [CrossRef] [Green Version]
  23. Tsarouchi, P.; Matthaiakis, A.-S.; Makris, S.; Chryssolouris, G. On a human-robot collaboration in an assembly cell. Int. J. Comput. Integr. Manuf. 2017, 30, 580–589. [Google Scholar] [CrossRef] [Green Version]
  24. Tram, A.V.N.; Raweewan, M. Optimal Task Allocation in Human-Robotic Assembly Processes. In Proceedings of the 2020 5th International Conference on Robotics and Automation Engineering (ICRAE), Singapore, 20–22 November 2020; IEEE: Piscataway, NJ, USA, 2020. [Google Scholar]
  25. Peternel, L.; Tsagarakis, N.; Caldwell, D.; Ajoudani, A. Robot adaptation to human physical fatigue in human–robot co-manipulation. Auton. Robot. 2018, 42, 1011–1021. [Google Scholar] [CrossRef]
  26. Zhang, R.; Lv, Q.; Li, J.; Bao, J.; Liu, T.; Liu, S. A reinforcement learning method for human-robot collaboration in assembly tasks. Robot. Comput. -Integr. Manuf. 2022, 73, 102227. [Google Scholar] [CrossRef]
  27. Liu, H.; Fang, T.; Zhou, T.; Wang, L. Towards robust human-robot collaborative manufacturing: Multimodal fusion. IEEE Access 2018, 6, 74762–74771. [Google Scholar] [CrossRef]
  28. Saenz, J.; Elkmann, N.; Gibaru, O.; Neto, P. Survey of methods for design of collaborative robotics applications-why safety is a barrier to more widespread robotics uptake. In Proceedings of the 2018 4th International Conference on Mechatronics and Robotics Engineering, Cuernavaca, Mexico, 26–29 November 2018; Association for Computing Machinery: New York, NY, USA, 2018. [Google Scholar]
  29. Hashemi-Petroodi, S.E.; Thevenin, S.; Kovalev, S.; Dolgui, A. Operations management issues in design and control of hybrid human-robot collaborative manufacturing systems: A survey. Annu. Rev. Control 2020, 49, 264–276. [Google Scholar] [CrossRef]
  30. Villani, V.; Pini, F.; Leali, F.; Secchi, C. Survey on human–robot collaboration in industrial settings: Safety, intuitive interfaces and applications. Mechatronics 2018, 55, 248–266. [Google Scholar] [CrossRef]
  31. Peternel, L.; Fang, C.; Tsagarakis, N.; Ajoudani, A. A selective muscle fatigue management approach to ergonomic human-robot co-manipulation. Robot. Comput.-Integr. Manuf. 2019, 58, 69–79. [Google Scholar] [CrossRef]
  32. Liu, Z.; Liu, Q.; Wang, L.; Xu, W.; Zhou, Z. Task-level decision-making for dynamic and stochastic human-robot collaboration based on dual agents deep reinforcement learning. Int. J. Adv. Manuf. Technol. 2021, 115, 3533–3552. [Google Scholar] [CrossRef]
  33. Demircan, E.; Yung, S.; Choi, M.; Baschshi, J.; Nguyen, B.; Rodriguez, J. Operational space analysis of human muscular effort in robot assisted reaching tasks. Robot. Auton. Syst. 2020, 125, 103429. [Google Scholar] [CrossRef]
  34. Gustavsson, P.; Holm, M.; Syberfeldt, A.; Wang, L. Human-robot collaboration–towards new metrics for selection of communication technologies. Procedia CIRP 2018, 72, 123–128. [Google Scholar] [CrossRef]
  35. Losey, D.P.; McDonald, C.G.; Battaglia, E.; O’Malley, M.K. A review of intent detection, arbitration, and communication aspects of shared control for physical human–robot interaction. Appl. Mech. Rev. 2018, 70, 010804. [Google Scholar] [CrossRef] [Green Version]
  36. Shu, B.; Sziebig, G.; Pieskä, S. Human-robot collaboration: Task sharing through virtual reality. In Proceedings of the IECON 2018—44th Annual Conference of the IEEE Industrial Electronics Society, Washington, DC, USA, 21–23 October 2018; IEEE: Piscataway, NJ, USA, 2018. [Google Scholar]
  37. Bdiwi, M.; Pfeifer, M.; Sterzing, A. A new strategy for ensuring human safety during various levels of interaction with industrial robots. CIRP Ann. 2017, 66, 453–456. [Google Scholar] [CrossRef]
  38. Alkan, B.; Vera, D.; Ahmad, M.; Ahmad, B.; Harrison, R. A lightweight approach for human factor assessment in virtual assembly designs: An evaluation model for postural risk and metabolic workload. Procedia CIRP 2016, 44, 26–31. [Google Scholar] [CrossRef] [Green Version]
  39. Koltai, T.; Dimény, I.; Gallina, V.; Gaal, A.; Sepe, C. An analysis of task assignment and cycle times when robots are added to human-operated assembly lines, using mathematical programming models. Int. J. Prod. Econ. 2021, 242, 108292. [Google Scholar] [CrossRef]
  40. Hoffman, G. Evaluating Fluency in Human–Robot Collaboration. IEEE Trans. Hum.-Mach. Syst. 2019, 49, 209–218. [Google Scholar] [CrossRef]
  41. Cheng, Y.; Sun, F.; Zhang, Y.; Tao, F. Task allocation in manufacturing: A review. J. Ind. Inf. Integr. 2019, 15, 207–218. [Google Scholar] [CrossRef]
  42. Antakli, A.; Spieldenner, T.; Rubinstein, D.; Spieldenner, D.; Herrmann, E.; Sprenger, J.; Zinnikus, I. Agent-based Web Supported Simulation of Human-robot Collaboration. In Proceedings of the 15th International Conference on Web Information Systems and Technologies, WEBIST, Vienna, Austria, 18–20 September 2019. [Google Scholar]
  43. Malik, A.A.; Masood, T.; Bilberg, A. Virtual reality in manufacturing: Immersive and collaborative artificial-reality in design of human-robot workspace. Int. J. Comput. Integr. Manuf. 2020, 33, 22–37. [Google Scholar] [CrossRef]
  44. Mura, M.D.; Dini, G. Optimizing ergonomics in assembly lines: A multi objective genetic algorithm. CIRP J. Manuf. Sci. Technol. 2019, 27, 31–45. [Google Scholar] [CrossRef]
  45. Malik, A.A.; Bilberg, A. Complexity-based task allocation in human-robot collaborative assembly. In Industrial Robot: The International Journal of Robotics Research and Application; Emerald Group Publishing: Bingley, UK, 2019. [Google Scholar]
  46. Atashfeshan, N.; Saidi-Mehrabad, M.; Razavi, H. A novel dynamic function allocation method in human-machine systems focusing on trigger mechanism and allocation strategy. Reliab. Eng. Syst. Saf. 2021, 207, 107337. [Google Scholar] [CrossRef]
  47. Lee, M.-L.; Behdad, S.; Liang, X.; Zheng, M. Task allocation and planning for product disassembly with human–robot collaboration. Robot. Comput.-Integr. Manuf. 2022, 76, 102306. [Google Scholar] [CrossRef]
  48. Pupa, A.; Van Dijk, W.; Brekelmans, C.; Secchi, C. A Resilient and Effective Task Scheduling Approach for Industrial Human-Robot Collaboration. Sensors 2022, 22, 4901. [Google Scholar] [CrossRef] [PubMed]
  49. Liau, Y.Y.; Ryu, K. Genetic algorithm-based task allocation in multiple modes of human–robot collaboration systems with two cobots. Int. J. Adv. Manuf. Technol. 2022, 119, 7291–7309. [Google Scholar] [CrossRef]
  50. Lorenzini, M.; Kim, W.; De Momi, E.; Ajoudani, A. A new overloading fatigue model for ergonomic risk assessment with application to human-robot collaboration. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar]
  51. Harriott, C.E.; Zhang, T.; Adams, J.A. Assessing physical workload for human–robot peer-based teams. Int. J. Hum.-Comput. Stud. 2013, 71, 821–837. [Google Scholar] [CrossRef]
  52. Ferjani, A.; Ammar, A.; Pierreval, H.; Elkosantini, S. A simulation-optimization based heuristic for the online assignment of multi-skilled workers subjected to fatigue in manufacturing systems. Comput. Ind. Eng. 2017, 112, 663–674. [Google Scholar] [CrossRef]
  53. Yung, M.; Kolus, A.; Wells, R.; Neumann, W.P. Examining the fatigue-quality relationship in manufacturing. Appl. Ergon. 2020, 82, 102919. [Google Scholar] [CrossRef]
  54. Kearney, R.; Hunter, I. Dynamics of human ankle stiffness: Variation with displacement amplitude. J. Biomech. 1982, 15, 753–756. [Google Scholar] [CrossRef]
  55. Li, K.; Liu, Q.; Xu, W.; Liu, J.; Zhou, Z.; Feng, H. Sequence planning considering human fatigue for human-robot collaboration in disassembly. Procedia CIRP 2019, 83, 95–104. [Google Scholar] [CrossRef]
  56. Roveda, L.; Maskani, J.; Franceschi, P.; Abdi, A.; Braghin, F.; Tosatti, L.M.; Pedrocchi, N. Model-based reinforcement learning variable impedance control for human-robot collaboration. J. Intell. Robot. Syst. 2020, 100, 417–433. [Google Scholar] [CrossRef]
  57. Peternel, L.; Tsagarakis, N.; Ajoudani, A. A human–robot co-manipulation approach based on human sensorimotor information. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 811–822. [Google Scholar] [CrossRef]
  58. Mujica, M.; Crespo, M.; Benoussaad, M.; Junco, S.; Fourquet, J.-Y. Robust variable admittance control for human–robot co-manipulation of objects with unknown load. Robot. Comput.-Integr. Manuf. 2022, 79, 102408. [Google Scholar] [CrossRef]
  59. Michalos, G.; Spiliotopoulos, J.; Makris, S.; Chryssolouris, G. A method for planning human robot shared tasks. CIRP J. Manuf. Sci. Technol. 2018, 22, 76–90. [Google Scholar] [CrossRef]
  60. Bassani, G.; Filippeschi, A.; Avizzano, C.A. A Dataset of Human Motion and Muscular Activities in Manual Material Handling Tasks for Biomechanical and Ergonomic Analyses. IEEE Sens. J. 2021, 21, 24731–24739. [Google Scholar] [CrossRef]
  61. Stecke, K.E.; Mokhtarzadeh, M. Balancing collaborative human–robot assembly lines to optimise cycle time and ergonomic risk. Int. J. Prod. Res. 2021, 60, 25–47. [Google Scholar] [CrossRef]
  62. Weckenborg, C.; Thies, C.; Spengler, T.S. Harmonizing ergonomics and economics of assembly lines using collaborative robots and exoskeletons. J. Manuf. Syst. 2022, 62, 681–702. [Google Scholar] [CrossRef]
  63. Heydaryan, S.; Bedolla, J.S.; Belingardi, G. Safety design and development of a human-robot collaboration assembly process in the automotive industry. Appl. Sci. 2018, 8, 344. [Google Scholar] [CrossRef] [Green Version]
  64. Shu, B.; Sziebig, G.; Pieters, R. Architecture for safe human-robot collaboration: Multi-modal communication in virtual reality for efficient task execution. In Proceedings of the 2019 IEEE 28th International Symposium on Industrial Electronics (ISIE), Vancouver, BC, Canada, 12–14 June 2019; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar]
  65. Li, Y.; Tee, K.P.; Yan, R.; Limbu, D.K.; Ge, S.S. Shared control of human and robot by approximate dynamic programming. In Proceedings of the 2015 American Control Conference (ACC), Chicago, IL, USA, 1–3 July 2015; IEEE: Piscataway, NJ, USA, 2015. [Google Scholar]
  66. Glock, C.; Grosse, E.; Kim, T.; Neumann, W.; Sobhani, A. An integrated cost and worker fatigue evaluation model of a packaging process. Int. J. Prod. Econ. 2019, 207, 107–124. [Google Scholar] [CrossRef]
  67. Yu, X.; He, W.; Li, Y.; Xue, C.; Li, J.; Zou, J.; Yang, C. Bayesian estimation of human impedance and motion intention for human–robot collaboration. IEEE Trans. Cybern. 2019, 51, 1822–1834. [Google Scholar] [CrossRef]
  68. Yu, X.; Li, Y.; Zhang, S.; Xue, C.; Wang, Y. Estimation of human impedance and motion intention for constrained human–robot interaction. Neurocomputing 2020, 390, 268–279. [Google Scholar] [CrossRef]
  69. Hernández, J.D.; Sobti, S.; Sciola, A.; Moll, M.; Kavraki, L.E. Increasing Robot Autonomy via Motion Planning and an Augmented Reality Interface. IEEE Robot. Autom. Lett. 2020, 5, 1017–1023. [Google Scholar] [CrossRef]
  70. Unhelkar, V.V.; Lasota, P.A.; Tyroller, Q.; Buhai, R.-D.; Marceau, L.; Deml, B.; Shah, J.A. Human-aware robotic assistant for collaborative assembly: Integrating human motion prediction with planning in time. IEEE Robot. Autom. Lett. 2018, 3, 2394–2401. [Google Scholar] [CrossRef] [Green Version]
  71. Shukla, D.; Erkent, Ö.; Piater, J. Learning semantics of gestural instructions for human-robot collaboration. Front. Neurorobotics 2018, 12, 7. [Google Scholar] [CrossRef] [Green Version]
  72. Li, Y.; Tee, K.P.; Yan, R.; Ge, S.S. Reinforcement learning for human-robot shared control. Assem. Autom. 2019, 40, 105–117. [Google Scholar] [CrossRef] [Green Version]
  73. Hawkins, K.P.; Vo, N.; Bansal, S.; Bobick, A.F. Probabilistic human action prediction and wait-sensitive planning for responsive human-robot collaboration. In Proceedings of the 2013 13th IEEE-RAS International Conference on Humanoid Robots (Humanoids), Atlanta, GA, USA, 15–17 October 2013; IEEE: Piscataway, NJ, USA. [Google Scholar]
  74. Petković, T.; Puljiz, D.; Marković, I.; Hein, B. Human intention estimation based on hidden Markov model motion validation for safe flexible robotized warehouses. Robot. Comput.-Integr. Manuf. 2019, 57, 182–196. [Google Scholar] [CrossRef] [Green Version]
  75. Tsarouchi, P.; Michalos, G.; Makris, S.; Athanasatos, T.; Dimoulas, K.; Chryssolouris, G. On a human–robot workplace design and task allocation system. Int. J. Comput. Integr. Manuf. 2017, 30, 1272–1279. [Google Scholar] [CrossRef]
  76. Mainprice, J.; Hayne, R.; Berenson, D. Predicting human reaching motion in collaborative tasks using inverse optimal control and iterative re-planning. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Washington, DC, USA, 26–30 May 2015; IEEE: Piscataway, NJ, USA, 2015. [Google Scholar]
  77. Mainprice, J.; Berenson, D. Human-robot collaborative manipulation planning using early prediction of human motion. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; IEEE: Piscataway, NJ, USA, 2013. [Google Scholar]
  78. Rozo, L.; Calinon, S.; Caldwell, D.G.; Jimenez, P.; Torras, C. Learning physical collaborative robot behaviors from human demonstrations. IEEE Trans. Robot. 2016, 32, 513–527. [Google Scholar] [CrossRef] [Green Version]
  79. Zhang, Z.; Peng, G.; Wang, W.; Chen, Y.; Jia, Y.; Liu, S. Prediction-Based Human-Robot Collaboration in Assembly Tasks Using a Learning from Demonstration Model. Sensors 2022, 22, 4279. [Google Scholar] [CrossRef] [PubMed]
  80. Huang, Y.; Rozo, L.; Silvério, J.; Caldwell, D.G. Kernelized movement primitives. Int. J. Robot. Res. 2019, 38, 833–852. [Google Scholar] [CrossRef] [Green Version]
  81. Li, S.; Wang, H.; Zhang, S.; Wang, S.; Han, K. Human Motion Trajectory Prediction in Human-Robot Collaborative Tasks. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2019. [Google Scholar]
  82. Vianello, L.; Mouret, J.-B.; Dalin, E.; Aubry, A.; Ivaldi, S. Human Posture Prediction during Physical Human-Robot Interaction. IEEE Robot. Autom. Lett. 2021, 6, 6046–6053. [Google Scholar] [CrossRef]
  83. Liu, H.; Wang, L. Human motion prediction for human-robot collaboration. J. Manuf. Syst. 2017, 44, 287–294. [Google Scholar] [CrossRef]
  84. Gustavsson, P.; Syberfeldt, A.; Brewster, R.; Wang, L. Human-robot collaboration demonstrator combining speech recognition and haptic control. Procedia CIRP 2017, 63, 396–401. [Google Scholar] [CrossRef]
  85. Lamon, E.; De Franco, A.; Peternel, L.; Ajoudani, A. A capability-aware role allocation approach to industrial assembly tasks. IEEE Robot. Autom. Lett. 2019, 4, 3378–3385. [Google Scholar] [CrossRef] [Green Version]
  86. Bernard, J.; Dobermann, E.; Vögele, A.; Krüger, B.; Kohlhammer, J.; Fellner, D. Visual-interactive semi-supervised labeling of human motion capture data. Electron. Imaging 2017, 2017, 34–45. [Google Scholar] [CrossRef]
  87. Dianatfar, M.; Latokartano, J.; Lanz, M. Review on existing VR/AR solutions in human–robot collaboration. Procedia CIRP 2021, 97, 407–411. [Google Scholar] [CrossRef]
  88. Danielsson, O.; Syberfeldt, A.; Brewster, R.; Wang, L. Assessing instructions in augmented reality for human-robot collaborative assembly by using demonstrators. Procedia CIRP 2017, 63, 89–94. [Google Scholar] [CrossRef]
  89. Liu, X.; Ge, S.S. Optimized control for human-multi-robot collaboration via multi-agent adaptive dynamic programming. IFAC-Pap. 2020, 53, 9207–9212. [Google Scholar] [CrossRef]
  90. Park, C.; Pan, J.; Manocha, D. Real-time optimization-based planning in dynamic environments using GPUs. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; IEEE: Piscataway, NJ, USA, 2013. [Google Scholar]
  91. Mabkhot, M.M.; Al-Ahmari, A.M.; Salah, B.; Alkhalefah, H. Requirements of the smart factory system: A survey and perspective. Machines 2018, 6, 23. [Google Scholar] [CrossRef] [Green Version]
  92. Kardos, C.; Kemény, Z.; Kovács, A.; Pataki, B.E.; Váncza, J. Context-dependent multimodal communication in human-robot collaboration. Procedia CIRP 2018, 72, 15–20. [Google Scholar] [CrossRef]
  93. Maurtua, I.; Fernandez, I.; Tellaeche, A.; Kildal, J.; Susperregi, L.; Ibarguren, A.; Sierra, B. Natural multimodal communication for human–robot collaboration. Int. J. Adv. Robot. Syst. 2017, 14, 1729881417716043. [Google Scholar] [CrossRef] [Green Version]
  94. Krüger, J.; Lien, T.K.; Verl, A. Cooperation of human and machines in assembly lines. CIRP Ann. 2009, 58, 628–646. [Google Scholar] [CrossRef]
  95. Cheng, H.; Xu, W.; Ai, Q.; Liu, Q.; Zhou, Z.; Pham, D.T. Manufacturing capability assessment for human-robot collaborative disassembly based on multi-data fusion. Procedia Manuf. 2017, 10, 26–36. [Google Scholar] [CrossRef]
  96. Casalino, A.; Zanchettin, A.M.; Piroddi, L.; Rocco, P. Optimal scheduling of human–robot collaborative assembly operations with time petri nets. IEEE Trans. Autom. Sci. Eng. 2019, 18, 70–84. [Google Scholar] [CrossRef]
  97. Mokhtarzadeh, M.; Tavakkoli-Moghaddam, R.; Vahedi-Nouri, B.; Farsi, A. Scheduling of human-robot collaboration in assembly of printed circuit boards: A constraint programming approach. Int. J. Comput. Integr. Manuf. 2020, 33, 460–473. [Google Scholar] [CrossRef]
  98. Mayima, A.; Clodic, A.; Alami, R. Toward a Robot Computing an Online Estimation of the Quality of its Interaction with its Human Partner. In Proceedings of the 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy, 31 August–4 September 2020. [Google Scholar]
  99. Xia, P.; Lopes, A.M.; Restivo, M.T. A review of virtual reality and haptics for product assembly (part 1): Rigid parts. Assem. Autom. 2013, 33, 68–77. [Google Scholar] [CrossRef]
  100. Çil, Z.A.; Li, Z.; Mete, S.; Özceylan, E. Mathematical model and bee algorithms for mixed-model assembly line balancing problem with physical human–robot collaboration. Appl. Soft Comput. 2020, 93, 106394. [Google Scholar] [CrossRef]
  101. Sun, B.-Q.; Wang, L.; Peng, Z.Q. Bound-guided hybrid estimation of distribution algorithm for energy-efficient robotic assembly line balancing. Comput. Ind. Eng. 2020, 146, 106604. [Google Scholar] [CrossRef]
  102. Matsas, E.; Vosniakos, G.C.; Batras, D. Effectiveness and acceptability of a virtual environment for assessing human–robot collaboration in manufacturing. Int. J. Adv. Manuf. Technol. 2017, 92, 3903–3917. [Google Scholar] [CrossRef]
  103. Malik, A.A.; Brem, A. Digital twins for collaborative robots: A case study in human-robot interaction. Robot. Comput.-Integr. Manuf. 2021, 68, 102092. [Google Scholar] [CrossRef]
  104. Wang, Y.; Feng, J.; Liu, J.; Liu, X.; Wang, J. Digital Twin-based Design and Operation of Human-Robot Collaborative Assembly. IFAC-Pap. 2022, 55, 295–300. [Google Scholar] [CrossRef]
  105. Ji, W.; Yin, S.; Wang, L. A virtual training based programming-free automatic assembly approach for future industry. IEEE Access 2018, 6, 43865–43873. [Google Scholar] [CrossRef]
  106. Zhu, W.; Fan, X.; Zhang, Y. Applications and research trends of digital human models in the manufacturing industry. Virtual Real. Intell. Hardw. 2019, 1, 558–579. [Google Scholar] [CrossRef]
  107. Maurya, C.M.; Karmakar, S.; Das, A.K. Digital human modeling (DHM) for improving work environment for specially-abled and elderly. SN Appl. Sci. 2019, 1, 1326. [Google Scholar] [CrossRef] [Green Version]
  108. Matsas, E.; Vosniakos, G.-C. Design of a virtual reality training system for human–robot collaboration in manufacturing tasks. Int. J. Interact. Des. Manuf. 2017, 11, 139–153. [Google Scholar] [CrossRef]
  109. Schaal, S. Dynamic movement primitives-a framework for motor control in humans and humanoid robotics. In Adaptive Motion of Animals and Machines; Springer: Berlin/Heidelberg, Germany, 2006; pp. 261–280. [Google Scholar]
  110. Paredes-Astudillo, Y.A.; Moreno, D.; Vargas, A.-M.; Angel, M.-A.; Perez, S.; Jimenez, J.-F.; Saavedra-Robinson, L.A.; Trentesaux, D. Human fatigue aware cyber-physical Production system. In Proceedings of the 2020 IEEE International Conference on Human-Machine Systems (ICHMS), Rome, Italy, 7–9 September 2020; IEEE: Piscataway, NJ, USA, 2020. [Google Scholar]
  111. El Makrini, I.; Mathijssen, G.; Verhaegen, S.; Verstraten, T.; Vanderborght, B. A Virtual Element-Based Postural Optimization Method for Improved Ergonomics During Human-Robot Collaboration. IEEE Trans. Autom. Sci. Eng. 2022, 19, 1772–1783. [Google Scholar] [CrossRef]
  112. Tenbrink, L.; Feldotto, B.; Röhrbein, F.; Knoll, A. Motion prediction of virtual patterns, human hand motions, and a simplified hand manipulation task with hierarchical temporal memory. In Proceedings of the 2019 IEEE International Conference on Cyborg and Bionic Systems (CBS), Munich, Germany, 18–20 September 2019; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar]
  113. Liu, H.; Qu, D.; Xu, F.; Zou, F.; Song, J.; Jia, K. A human-robot collaboration framework based on human motion prediction and task model in virtual environment. In Proceedings of the 2019 IEEE 9th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER), Suzhou, China, 29 July–2 August 2019; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar]
  114. Kim, W.; Lee, J.; Peternel, L.; Tsagarakis, N.; Ajoudani, A. Anticipatory robot assistance for the prevention of human static joint overloading in human–robot collaboration. IEEE Robot. Autom. Lett. 2017, 3, 68–75. [Google Scholar] [CrossRef]
  115. Ding, W.; Liu, K.; Cheng, F.; Zhang, J. Learning hierarchical spatio-temporal pattern for human activity prediction. J. Vis. Commun. Image Represent. 2016, 35, 103–111. [Google Scholar] [CrossRef]
  116. Elkosantini, S.; Gien, D. Integration of human behavioural aspects in a dynamic model for a manufacturing system. Int. J. Prod. Res. 2009, 47, 2601–2623. [Google Scholar] [CrossRef] [Green Version]
  117. Scalera, L.; Giusti, A.; Vidoni, R.; Gasparetto, A. Enhancing fluency and productivity in human-robot collaboration through online scaling of dynamic safety zones. Int. J. Adv. Manuf. Technol. 2022, 121, 6783–6798. [Google Scholar] [CrossRef]
  118. Battini, D.; Delorme, X.; Dolgui, A.; Persona, A.; Sgarbossa, F. Ergonomics in assembly line balancing based on energy expenditure: A multi-objective model. Int. J. Prod. Res. 2016, 54, 824–845. [Google Scholar] [CrossRef]
  119. Battaïa, O.; Dolgui, A. A taxonomy of line balancing problems and their solution approaches. Int. J. Prod. Econ. 2013, 142, 259–277. [Google Scholar] [CrossRef]
  120. Rückert, P.; Wohlfromm, L.; Tracht, K. Implementation of virtual reality systems for simulation of human-robot collaboration. Procedia Manuf. 2018, 19, 164–170. [Google Scholar] [CrossRef]
  121. Jasiulewicz-Kaczmarek, M.; Saniuk, A. Human Factor in Sustainable Manufacturing. In Universal Access in Human-Computer Interaction. Access to the Human Environment and Culture: 9th International Conference, UAHCI 2015, Held as Part of HCI International 2015, Los Angeles, CA, USA, 2–7 August 2015, Proceedings, Part IV; Springer International Publishing: Berlin/Heidelberg, Germany, 2015; Volume 9178, pp. 444–455. [Google Scholar]
  122. Hopko, S.K.; Khurana, R.; Mehta, R.K.; Pagilla, P.R. Effect of cognitive fatigue, operator sex, and robot assistance on task performance metrics, workload, and situation awareness in human-robot collaboration. IEEE Robot. Autom. Lett. 2021, 6, 3049–3056. [Google Scholar] [CrossRef]
  123. Fratczak, P.; Goh, Y.M.; Kinnell, L.; Justham, L.; Soltoggio, A. Virtual Reality Study of Human Adaptability in Industrial Human-Robot Collaboration. In Proceedings of the 2020 IEEE International Conference on Human-Machine Systems (ICHMS), Rome, Italy, 7–9 September 2020; IEEE: Piscataway, NJ, USA, 2020.
  124. Lippi, M.; Marino, A. A mixed-integer linear programming formulation for human multi-robot task allocation. In Proceedings of the 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), Vancouver, BC, Canada, 8–12 August 2021; IEEE: Piscataway, NJ, USA, 2021. [Google Scholar]
Figure 1. Research framework.
Figure 3. Reviewer interface.
Figure 4. Distribution of enabling concepts identified in the literature for an effective implementation of collaborative assembly in manufacturing.
Figure 5. Task allocation between the human and robot.
Figure 6. Research trends in fatigue management in collaborative assembly (up to September 2022).
Figure 7. Human–robot control states loop. The interaction controller generates the optimized task allocation to the robot, as per changes in human movements.
Figure 8. Sensors communication in HRC.
Figure 9. Optimization planning of task allocation for human–robot collaboration. Multi-criteria optimization for minimizing the human fatigue, energy consumption and path and maximizing efficiency.
Figure 10. Components of a virtual HRC assembly design. CAD models of humans, collaborative robots and tools are imported into an immersive environment where interaction is enabled through sensors. Such environments allow for the relative safety of testing multiple interaction scenarios prior to physical set-ups.
Figure 11. (a) Co-assembly areas of the human (orange) and robot (blue); (b) biomechanical modeling of upper-body motion patterns. The motion patterns of the upper-body limbs are cataloged, and the visual controller can evaluate the deviations from the prescribed path(s) in both space and time.
Figure 12. Virtual collaborative assembly design. The development is performed at the X-Reality Lab, RMCERI, Department of Industrial Engineering, TUT. The protocols are designed to enable the co-operator to view the ergonomic characteristics of the assembly task in terms of the contact points in space between the human, the robot and the product, the load variance at various execution times and the aging energy level.
Figure 13. Research classification.
Figure 14. Cluster (polynomial) analysis of key research areas of HRC in assembly.
Figure 15. Computer vision for the tracking of joint motion patterns in collaborative assembly design.
Table 1. Search Strings.
Search Expressions | Synonyms/Definition
Main SE: Collaborative assembly | Human–robot; Collaborative Robot; Human–machine; Robot Interaction; Human-in-the-Loop; Co-manipulation
SE1: Task allocation | Shared task; Function allocation; Task synchronization; Task distribution; Job assignment; Line balancing
SE2: Motion planning | Movement; Action; Gesture; Hand guiding; Posture; Coordination
SE3: Mathematical programming | Optimization; Algorithm; Mathematical model; Reinforcement learning; Linear programming
SE4: Human factors and ergonomics | Physical fatigue; Workload/effort; Energy/metabolic expenditure; Muscular analysis; Joint overload; Safety
SE5: Digital modeling | Digital human; Simulation; Virtual reality; Machine vision; 3D representation; Demonstrator
SE6: Prediction | Control; Instruction; Adaptation; (p)Recognition; Intuition; Perception
Table 2. HRC control methods in assembly applications.
Reference | Problem
Wang, Gao [5] | Symbiotic HRC
Dianatfar, Latokartano [87] | VR/AR environment
Mainprice and Berenson [77] | Trajectory optimizer
Peternel, Tsagarakis [25] | Robot adaption
Malik, Masood [43] | VR environment
Danielsson, Syberfeldt [88] | AR environment
Notes: Control methods used: A, Gesture command; B, Haptics; C, Voice command; D, Context-aware.
Table 4. Data about paper production per cluster for a six-year period.
Year | Task Allocation (no. of papers) | % Growth | Action Recognition and Motion Planning (no. of papers) | % Growth | Physical Ergonomics and Fatigue (no. of papers) | % Growth
2022 | 18 | +12.5 | 12 | 0 | 15 | −11.76
2021 | 16 | +77.77 | 12 | +50 | 17 | +183.33
2020 | 9 | −22.22 | 8 | +60 | 6 | +20
2019 | 11 | +10 | 5 | +25 | 5 | −16.66
2018 | 10 | +50 | 4 | +50 | 6 | /
2017 | 5 | | 2 | | 0 |
Average value | 11.5 | +25.61 | 7.16 | +37 | 9.8 | +43.73
Sum | 69 | | 43 | | 49 |
Table 5. Task allocation and fatigue sub-cluster.
Highest and least represented clusters (references per sub-cluster) | Sub-cluster gap levels (references addressing both task allocation and fatigue)
[13], [25], [31], [49], [50], [56], [57], [58], [59], [60], [61], [57], [68], [78], [85], [89], [94], [114] | [13], [25], [31], [49], [50], [59], [61], [114]
[12], [13], [43], [63], [69], [87], [88], [92], [103], [104], [110] | [12], [13], [63], [103], [110]
[13], [30], [94], [115] | [13]
[78], [79], [84], [88], [87] |
[2], [13], [16], [51], [55], [59], [62], [71] | [2], [13], [16], [51], [55], [59], [62], [71]
