Review

COBOT Applications—Recent Advances and Challenges

Department of Mechanical and Industrial Engineering, University of Brescia, Via Branze, 38, 25123 Brescia, Italy
* Author to whom correspondence should be addressed.
Robotics 2023, 12(3), 79; https://doi.org/10.3390/robotics12030079
Submission received: 30 April 2023 / Revised: 1 June 2023 / Accepted: 2 June 2023 / Published: 4 June 2023
(This article belongs to the Special Issue The State-of-the-Art of Robotics in Europe)

Abstract

This study provides a structured literature review of the recent COllaborative roBOT (COBOT) applications in industrial and service contexts. Several papers and research studies were selected and analyzed, observing the collaborative robot interactions, the control technologies and the market impact. This review focuses on stationary COBOTs that may guarantee flexible applications, resource efficiency, and worker safety from a fixed location. COBOTs offer new opportunities to develop and integrate control techniques, environmental recognition of time-variant object location, and user-friendly programming to interact safely with humans. Artificial Intelligence (AI) and machine learning systems enable and boost the COBOT’s ability to perceive its surroundings. A deep analysis of different applications of COBOTs and their properties, from industrial assembly, material handling, service personal assistance, security and inspection, Medicare, and supernumerary tasks, was carried out. Among the observations, the analysis outlined the importance and the dependencies of the control interfaces, the intention recognition, the programming techniques, and virtual reality solutions. A market analysis of 195 models was developed, focusing on the physical characteristics and key features to demonstrate the relevance and growing interest in this field, highlighting the potential of COBOT adoption based on (i) degrees of freedom, (ii) reach and payload, (iii) accuracy, and (iv) energy consumption vs. tool center point velocity. Finally, a discussion on the advantages and limits is summarized, considering anthropomorphic robot applications for further investigations.

1. Introduction

Robot installations are expected to grow over the next five years, based on a dominant sales trend (annual mean increase of 12% over 2017–2022), despite the global pandemic and the contraction of some specific markets. COllaborative roBOTs (COBOTs) are classified based on reference frame location as fixed, mobile, and hybrid solutions [1,2,3]. The first class places the robot in a time-invariant position, while the mobile configuration allows the robot to move. The hybrid architecture combines both of the aforementioned elements; it can move between different tasks and work areas, enabling material transportation (kits, tools, light parts, sub-assemblies). In addition, robots are available with sensors as well as user interfaces that recognize and react to an unstructured environment. In this context, automation and AI are impacting workers and job profiles where repetitive or dangerous tasks are prevalent. COBOTs can be programmed without involving experts or highly skilled resources. SMEs (small and mid-size enterprises) are the pivotal players, since the investment required by pioneering technologies is not widely affordable for them [4]. The obtained flexibility permits SMEs to accomplish productivity enhancements without compromising low-volume production [5], reacting to customer demand variability. On the other hand, large multi-modal factories can rapidly switch among a range of different applications: from oil and gas to aerospace, building, and automotive products [6]. These companies operate several product lines, employing teams with various skills who can reconfigure the layout in response to dynamic orders. Moreover, considering the transformation of digital factory models and Industry 4.0 enabling technologies, data are gathered at each phase of production from machines/equipment, in accordance with the ISO 10218 requirements for safe interaction in a collaborative workspace [4,7,8]. Data are then aggregated and processed to optimize the entire production process [9]. For instance, the gripping force or the trajectory of a robot arm can be updated if the digital twin estimates an enhancement in production performance in terms of safety, quality, or production indicators [10].
The most involved industrial sectors are the automotive and electronics fields, which accounted for over 60% of global new installations. According to the 2022 survey of the International Federation of Robotics (IFR), China is the main COBOT producer, with 52 models representing 26.4% of the global assessment. South Korea follows with 14.7%, and Japan with 11.2% of the total. Together, these countries represent 52.3% of the global COBOT models. The remaining percentage is divided into two groups: (i) the United States of America, Germany, Switzerland, and Denmark (29.5% of the total) and (ii) Italy, the United Kingdom, France, Canada, and India (17.5% of the total).
Although the literature presents various COBOT applications in an industrial context, further studies are required to investigate the recent advances. In particular, the increase in COBOT abilities shows the need for a set of guidelines to permit a valuable comparison. In this work, the authors aim to present a review of the recent state-of-the-art innovations, classifying the type of applications and the interaction with human beings, highlighting the practical implications. Finally, we present a market classification of COBOT features, including the degrees of freedom (DoF), reach volume, payload, position accuracy, and intention recognition.
The paper is organized as follows: Section 2 describes the methodology employed in conducting the literature review, including the search strategy, selection criteria and screening process. It provides a detailed account of the steps taken to ensure the comprehensive and systematic identification of relevant articles. Section 3 reports the various applications of COBOTs in the manufacturing and service industries. It examines how COBOTs are employed in different contexts, such as manufacturing processes, material handling, personal assistance, security and inspections, and medical applications. Section 4 presents the advancements and developments in the field of human–robot interaction (HRI) within the context of COBOTs. It explores different aspects of HRI, including control interfaces, programming, learning techniques, virtual reality systems, and intention recognition. The review highlights the latest research and innovations that aim to improve the collaboration, communication, and mutual understanding between humans and COBOTs. Section 5 presents the COBOT market analyses examining the different types of COBOTs offered by manufacturers and analyzing their features, capabilities, and trends. The review provides insights into the current state of the COBOT market, key players, market share, and emerging trends in terms of technological advancements and adoption rates. Section 6, Discussion, and Section 7, Conclusion, summarize the review work and future challenges.

2. Collaborative Robot Architecture Frame

In recent decades, collaborative robots have attracted wide interest, from academic researchers to industrial and service operators [11,12]. The first definitions of the collaborative robot were given in the 1990s. The initial concept was a passive mechanism supervised by a human operator [13,14].
The literature provides works that study three-dimensional workspace sharing, collaborative and cooperative tasks, programming, and interaction. Additional factors regarding the work area layout that are important to consider when reviewing COBOT applications are:
  • Coexistence: the work areas need to be defined without overlapping zones. The human operator and the robot can perform the activities separately.
  • Synchronization: the human and the robot share the work environment with independent tasks.
  • Cooperation: the human and the robot share the work environment and the task execution is in a step-by-step procedure.
  • Collaboration: the human and robot share the work area and the task concurrently.
Referring to the collaboration modes identified by the robot safety standard ISO 10218, Figure 1 shows the four scenarios: (i) safety-rated monitored stop, (ii) hand guiding, (iii) speed and separation monitoring, and (iv) force and torque limitation.
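For scenario (iii), the safety logic reduces to a runtime distance check between the operator and the robot. The following is a minimal sketch of the protective separation distance criterion in the spirit of ISO/TS 15066, using simplified constant-velocity terms; the function name and all numerical values are illustrative assumptions, not prescriptions of the standard.

```python
def protective_separation_distance(
    v_human: float,     # human speed towards the robot [m/s]
    v_robot: float,     # robot speed towards the human [m/s]
    t_reaction: float,  # sensing and control reaction time [s]
    t_stop: float,      # robot stopping time [s]
    d_stop: float,      # robot travel while braking [m]
    c_intrusion: float = 0.10,    # intrusion distance allowance [m]
    z_uncertainty: float = 0.10,  # position measurement uncertainty [m]
) -> float:
    """Simplified protective separation distance with constant-velocity terms."""
    s_human = v_human * (t_reaction + t_stop)  # human approach during reaction + stop
    s_robot = v_robot * t_reaction             # robot travel before braking starts
    return s_human + s_robot + d_stop + c_intrusion + z_uncertainty

# Trigger a protective stop (or a speed reduction) when the measured
# human-robot distance falls below the limit.
d_measured = 1.2  # e.g., from a safety-rated laser scanner [m]
if d_measured < protective_separation_distance(1.6, 0.5, 0.1, 0.3, 0.15):
    pass  # command the safety-rated monitored stop
```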
To evaluate the practical applications, this review focuses on three macro-elements:
  • Safety: COBOTS are designed to work safely in the same workspace occupied by an operator, detecting and reacting to the risk of accidents or injuries.
  • Flexibility: COBOTS can be reconfigured to execute a set of unknown tasks.
  • User-friendliness: COBOTS are equipped with intuitive interfaces to program and operate them without requiring extensive technical knowledge.
The purpose of this work is to conduct a literature review on the topic of collaborative robotics in manufacturing and service applications over the period 1996 to 2023. The review focuses on identifying relevant publications through an extensive search using specific keywords, such as “COBOT”, “collaborative robotics”, and terms related to manufacturing, assembly, assistance, and medicine. The research involved multiple databases, including Web of Science, Scopus, and PubMed, to ensure comprehensive coverage of the relevant literature. The search strategy involved querying the selected databases using the identified keywords and applying relevant filters, such as the publication period and the subject area of engineering. The intention was to retrieve publications that specifically addressed collaborative robotics in manufacturing and service applications. The search produced 423 initial results, which were then subjected to further screening. To refine the list of relevant publications, a two-stage screening process was employed. In the first stage, titles and abstracts were reviewed to eliminate duplicate articles covering the same research. In the second stage, the selection criteria emphasized articles that presented case studies rather than simulations. As a result, 98 articles were identified for full-text review. The reviewed articles covered a range of topics related to collaborative robotics in both manufacturing and service applications. The manufacturing applications primarily focused on:
  • Manufacturing processes: articles that focus on the use of collaborative robots in various manufacturing processes, such as assembly lines and welding.
  • Material handling: articles that specifically address the application of collaborative robots in material handling tasks, including picking, sorting, and transporting objects.
In the service applications domain, the articles discussed:
  • Personal assistance: articles that explore the use of collaborative robots in providing assistance to individuals in tasks such as household chores or caregiving.
  • Security and inspections: articles that examine the application of collaborative robots in security-related tasks, such as surveillance, monitoring, and inspections in various settings.
  • Medicare: articles that discuss the utilization of collaborative robots in healthcare and medical environments, including patient care, surgical assistance, and rehabilitation.
Furthermore, the article selection highlighted the importance of the interaction between humans and collaborative robots, emphasizing its significance within collaborative robotics. Specific focus was given to articles that discussed human interactions with collaborative robots. This included four key areas:
  • Control interface: articles that investigate different interfaces and control mechanisms for humans to interact and communicate with collaborative robots effectively.
  • Intention recognition: articles that study how to define techniques, algorithms, and sensor systems used to enable robots to recognize and understand human intentions.
  • Programming and learning: articles that explore methods and techniques for programming and teaching collaborative robots, including machine learning, programming languages, and algorithms.
  • Virtual reality perspectives: articles that discuss the potential of virtual reality systems in enhancing human–robot interactions and collaboration, such as immersive training environments and augmented reality interfaces.
Finally, in addition to examining applications and human interactions, the review prioritized the analysis of the core technologies that support and enable collaborative robotics. The selected technologies include bioelectric interfaces and force, impedance, and visual sensors.

3. Classification of COBOT Applications

An initial classification of COBOT applications is based on the device usage context: industrial (assembly and handling tasks) or service (personal assistance, security, and Medicare).

3.1. Industrial Application of Collaborative Robots: Assembly

In manufacturing and assembly processes, production depends on the availability of tools, human labor, and machinery. Efficiency determines the lead time and the product quality. Manufacturing processes often involve repetitive activities that cause fatigue in humans. Therefore, to eliminate employee risks and fatigue, it is necessary to develop robots that complement human labor in heavy or repetitive work. Levratti, A. et al. introduce a modern tire workshop assistant robot which can bear heavy wheels and transfer them to any spot in the workshop, and which can be commanded either via gestures or tele-operatively through a haptic interface [15]. Further, Peternel, L. et al. propose a method to enable robots to adapt their behavior to human fatigue in human–robot co-manipulation tasks. An online model is used to estimate human motor fatigue, and when a specific level is discerned, the robot applies the acquired ability to accomplish the demanding phase of the task. The efficacy of the proposed approach is evidenced by trials on a real-world co-manipulation task [16].
In the assembly domain, COBOTs are employed to support the assembly of complex products. Cherubini, A. et al. present a collaborative human–robot manufacturing cell for homokinetic joint assembly, in which the COBOT switches between active and passive behaviors to lighten the burden on the operator and to comply with his/her needs. The approach is validated in a series of assembly experiments and is fully compatible with safety standards [17]. Many papers discuss how humans and robots can work simultaneously to improve the efficiency and complexity of assembly processes. The work of Tan, J. T. C. et al. studies the design of COBOTs in cellular manufacturing. Task modeling, safety development, mental workload, and the man-machine interface are all studied to optimize the system design and performance [18]. Krueger, J. et al. also look at logistic and financial aspects of cooperative assembly, such as efficient component supply [19]. The study of Erden, M. S. et al. presents an end-point impedance measurement of the human hand while performing welding interactively with a KUKA robot [20]. Wojtara, T. et al. discuss human–robot cooperation in the precise positioning of a flat object on a target. Algorithms were developed to represent the cooperation schemes, and these were evaluated using a robot prototype and experiments with humans. The main challenge is regulating the robot-human interaction, as the robot interprets signals from the human in order to understand their intention [21]. Morel, G. et al. define a control algorithm combining visual servo control and force feedback within the impedance control approach to perform peg-in-hole insertion experiments with a seven-axis robot manipulator [22]. Magrini, E. et al. present a framework for guaranteeing human safety in robotic cells that enables harmonious coexistence and dependable interplay between humans and robots. Certified laser scanners are employed to observe human–robot proximity in the cell, while safe communication protocols and logical units are utilized for secure low-level robot control. Furthermore, a smart human-machine interface is included to facilitate in-process collaborative activities, as well as gesture recognition of operator instructions. The framework has been tested in an industrial cell, with a robot and an operator closely examining a workpiece [23]. Another critical application of collaborative robots in manufacturing is the elimination of redundancy in operations. In most manufacturing activities, repetitive processes often come towards the end of the production cycle, where a series of further repetitive actions are performed. To ensure higher quality and uniformity, tasks such as polishing and the lifting of assembly parts can be assigned to collaborative robots [24].
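Several of the approaches above (e.g., [20,22]) build on the impedance control paradigm, in which the end-effector is made to behave as a virtual mass-spring-damper around a reference pose so that contact forces produce compliant deflections instead of rigid resistance. The following one-axis sketch illustrates the idea; the gains, time step, and Euler integration are illustrative assumptions, not the controllers of the cited works.

```python
def impedance_step(x, xd, x_ref, f_ext, m=2.0, d=40.0, k=400.0, dt=0.002):
    """One Euler step of a 1-DoF impedance model:
    m*xdd + d*xd + k*(x - x_ref) = f_ext -> compliant motion under contact."""
    xdd = (f_ext - d * xd - k * (x - x_ref)) / m
    xd_new = xd + xdd * dt
    return x + xd_new * dt, xd_new

# A constant 5 N lateral contact force deflects the virtual spring by f/k.
x, xd = 0.0, 0.0
for _ in range(5000):
    x, xd = impedance_step(x, xd, x_ref=0.0, f_ext=5.0)
print(round(x, 4))  # ~0.0125 m, i.e., 5 N / 400 N/m
```

During a peg-in-hole insertion, lowering the lateral stiffness k lets the contact forces guide the peg into alignment, which is broadly the role the impedance loop plays alongside the visual feedback in [22].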
Machine learning combined with collaborative robots ensures consistency in quality and cycle time when accomplishing industrial tasks. G. Michalos et al. highlight how learning control techniques are essential in human–robot collaboration for better handling of materials. They implement human-centered control techniques in collaborative robots, with neural networks, fuzzy logic control, and adaptive control forms as the basis for ensuring the COBOTs’ dependable material-handling ability. Like humans, human-centered collaborative robots need a logical interpretation of situations as they present themselves in order to correctly handle related risk issues [25]. Another question is whether a robot should take the initiative during joint human–robot task execution. Three initiative conditions are tested in a user study: human-initiated, reactive, and proactive. Results show significant advantages in the proactive condition [26].

3.2. Industrial Application of Collaborative Robots: Material Handling

The application of collaborative robots in material handling provides significant benefits. Material handling processes can be complex, involving multiple stages and various types of equipment. Coordinating these processes and ensuring that they are executed correctly can be challenging. Donner, P. and Buss, M. present a controller that can actively dampen undesired oscillations while allowing desired oscillations to reach a desired energy level. In the paper, real-world experiments show positive results in interaction with an operator [27]. Dimeas, F. et al. work on a method to detect and stabilize unstable behavior in physical human–robot interaction using an admittance controller with online adaptation of the admittance control gains [28]. Deformable materials are challenging to handle. Kruse, D. et al. discuss a novel approach to the robotic manipulation of highly deformable materials, using sensor feedback and vision to dictate robot motion. The robot is capable of contact sensing to maintain tension and is equipped with a head-mounted RGBD sensor to detect folds. The combination of force and vision controllers allows the robot to follow human motion without excessive crimps in the sheet [29].
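Admittance control, used in works such as [27,28], takes the opposite route to impedance control: the measured interaction force is mapped through a virtual admittance to a commanded velocity that a stiff position-controlled robot then tracks. The sketch below shows a minimal 1-DoF loop with a crude damping adaptation rule; the adaptation heuristic and its thresholds are illustrative placeholders, not the stabilization method of Dimeas et al. [28].

```python
import numpy as np

def admittance_step(v, f_meas, m_d=5.0, d_d=25.0, dt=0.002):
    """Virtual admittance M_d*vdot + D_d*v = f_meas -> commanded velocity."""
    vdot = (f_meas - d_d * v) / m_d
    return v + vdot * dt

def adapt_damping(d_d, f_history, d_min=10.0, d_max=80.0):
    """Raise the virtual damping when the force signal oscillates (a crude
    proxy for incipient instability); relax it for smooth hand guidance."""
    oscillation = np.var(np.diff(f_history))
    return min(d_d * 1.05, d_max) if oscillation > 1.0 else max(d_d * 0.995, d_min)
```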
Gams et al. have extended the dynamic movement primitives (DMPs) framework in order to enable dynamic behavior execution and cooperative tasks that are bimanual and tightly coupled. To achieve this, they proposed a modulation approach and evaluated it for the purpose of interacting with objects and the environment. This permits the combination of independent robotic trajectories, thereby allowing implementation of an iterative learning control algorithm to execute bimanual and tightly coupled cooperative tasks. The algorithm is used to learn a coupling term, which is then applied to the original trajectory in a feed-forward manner, thereby adjusting the trajectory to the desired positions or external forces [30].
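A compact way to view this extension is a standard discrete DMP whose transformation system receives the learned coupling term additively, in a feed-forward manner. The sketch below is a minimal single-DoF DMP integrator under that assumption; the coupling array stands in for the term learned by the iterative learning controller and is taken as given.

```python
import numpy as np

def rollout_dmp(y0, g, forcing, coupling, tau=1.0, dt=0.002,
                alpha_z=48.0, beta_z=12.0, alpha_s=2.0):
    """Integrate a 1-DoF DMP with an additive coupling term c_t:
    tau*zdot = alpha_z*(beta_z*(g - y) - z) + f(s)*s*(g - y0) + c_t."""
    y, z, s, traj = y0, 0.0, 1.0, []
    for c_t in coupling:
        zdot = (alpha_z * (beta_z * (g - y) - z) + forcing(s) * s * (g - y0) + c_t) / tau
        y += (z / tau) * dt
        z += zdot * dt
        s += (-alpha_s * s / tau) * dt  # canonical system: phase decays to 0
        traj.append(y)
    return np.array(traj)

# With zero coupling the nominal motion is reproduced; a learned coupling
# term shifts the trajectory to track the partner's motion or external forces.
traj = rollout_dmp(y0=0.0, g=1.0, forcing=lambda s: 0.0, coupling=np.zeros(2000))
```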

3.3. Service Application of Collaborative Robots: Personal Assistance

The application of collaborative robots in personal assistance has advanced over the years thanks to artificial intelligence technology that allows robots to take over activities previously performed by humans. Because of their ability to operate in a logical and sequential manner, collaborative robots have, in many ways, become personal assistants to human beings in handling various issues. To this end, Bestick, A. et al. estimate personalized human kinematic models from motion capture data, which can be utilized to refine a variety of human–robot collaborative scenarios that prioritize the comfort and ergonomics of a single human collaborator. An experiment involving human–robot collaborative manipulation is conducted to evaluate the approach, and the results demonstrate that when the robot plans with a personalized kinematic model, human subjects rotate their torsos significantly less during bimanual object handoffs [31]. In healthcare, collaborative robots can assist healthcare professionals in various tasks, such as patient monitoring, medication management, and rehabilitation exercises. They can also help patients with limited mobility to perform daily activities, such as dressing, bathing, and grooming. Collaborative robots can provide assistance to elderly people living independently or in care homes.
Moreover, for persons whose movement is restricted because of health complications, facilities have developed robots that help such individuals in their movement. The pilot study of Kidal et al. investigates human factors associated with assembly cells for workers with cognitive disabilities. Preliminary findings indicate that personalized human-automation load-balancing strategies and collaborative robots have the potential to empower workers to complete complex assembly tasks. Design requirements for these assembly cells are contrasted with those for the regular workforce to ensure that they are optimized for the needs of workers with cognitive disabilities [32]. As personal assistants to older people, collaborative robots help individuals with day-to-day tasks. The paper by Bohme, H. J. et al. presents a scheme for human–robot interaction that can be used in unstructured, crowded, and cluttered environments, such as a mobile information kiosk in a home store. The methods used include vision-based interaction, sound analysis, and speech output, integrated into a prototypical interaction cycle. Experimental results show the key features of the subsystems, which can be applied to a variety of service robots; future research will focus on improving the tracking system, which is currently limited to people facing the robot [33]. Beyond home-based assistance, personal assistance robots have also been applied in the telecommunications and construction industries. In telecommunications, collaborative robots have been essential in assisting the subscribers of a telecommunication provider. As a personal assistant to the subscriber, the collaborative robot forwards and responds to calls whenever the subscriber is offline or on another call. By relaying relevant information such as voice notes, personal assistance robots enable individuals to receive information about the calls they missed while offline. Finally, robots are essential as personal assistants to human labor in construction. The robots help engineers lift material, create a safer work environment, enhance the quality of outcomes, and make the whole process more cost-effective [34].

3.4. Service Application of Collaborative Robots: Security and Inspection

In the security context, collaborative robots have advanced to protect people effectively against various forms of attack. Inspection robots have been developed to help detect illegal materials before they are smuggled into public or private places. In most protected sensitive areas, such as international airports, collaborative inspection robots are an essential layer of the security measures. The robots used in these areas rely on X-rays to scan passengers’ luggage, detect any illegal objects, and raise alarms [35]. Collaborative inspection robots have also enhanced the ability of military personnel to detect and neutralize terrorist threats, for instance when weapons of mass destruction are detected during border inspections. To further complement security inspection, some inspection robots have been developed and programmed to aid in defusing detected threats, such as bombs, that might be too risky for human operators to handle. Murphy, R. R. offers an instructional guide on the utilization of robots in urban search and rescue missions, as well as an examination of the challenges in combining humans and robots. The paper further presents a domain theory on searches, composed of a workflow model and an information flow model [36].
Besides security inspection, robots are also crucial as human co-workers for product and process inspection. During manufacturing and assembly processes, inspection robots are used to visually inspect flaws at every stage of production. Most industrial inspection COBOTs are designed with either 2-D or 3-D vision sensors [37]. The installation of 2-D and 3-D sensors enables collaborative robots to conduct efficient accuracy-based inspections that ensure the requirements of each production stage are met [38]. Because of the increased ability of COBOTs to evaluate various aspects during inspection, they have increasingly been adopted in practical transport systems to assess the safety of a particular means of transport. For this purpose, Tsagarakis et al. present a humanoid robot platform that has been exploited to work in representative unstructured environments [39].

3.5. Service Application of Collaborative Robots: Medicare

In Medicare, the collaborative treatment process between human medical professionals and robots has become popular. Patient handling has been one of the demanding responsibilities causing musculoskeletal issues among medical professionals, who rely on their physical strength to discharge their duties [40]. Notably, applying collaborative robots has been essential in addressing such challenges. In most modern facilities, nurses have been trained to collaborate with robots in providing services such as muscle massage and the fixing of broken bones. The application of medical COBOTs in fixing broken limbs has ensured greater accuracy in restoring the mobility of individuals after multiple limb fractures. Therefore, collaborative robots are a significant breakthrough in orthopedic medical facilities.
Moreover, collaborative robots are extensively used in surgical operations. For most surgeons, working collaboratively with robots during operations ensures a higher level of precision, flexibility, and control [41]. Furthermore, adopting COBOTs in surgical processes provides 3-D vision via the robot’s cameras, thus allowing doctors to see the operation site better and reducing errors caused by a lack of proper visibility during the operation. Through collaborative robots, surgeons can therefore perform delicate and complex procedures, such as organ implantation, that may be difficult or impossible if done only in collaboration with other human surgeons. The relationship between force and motion is a critical factor in conveying the intended movement direction. Mojtahedi, K. aims to understand how humans interact physically to perform motor tasks such as moving a tool [42].
COBOT applications are becoming increasingly popular in the field of prosthetics. The technology is used to create custom-fit robotic prosthetic arms and hands, giving users with amputations or other physical impairments the ability to interact with the environment in an innovative way. Vogel, J. presents a robotic arm/hand system that is controlled in 6-D Cartesian space through measured human muscular activity. Numerical validation and live evaluations demonstrate the validity of the system and its potential applications [43]. An incremental learning method is used to control a robotic hand prosthesis using myoelectric signals. Gijsberts, A. shows that the approach is effective and applicable to this problem by analyzing its performance in predicting single-finger forces. The method was tested on a robotic arm, and the subject could reliably grasp, carry, and release everyday objects, regardless of signal changes [44]. Electrical signals from the muscles of the operator can be employed as the main means of information transmission. The work of Fleischer, C. and Hommel, G. presents a human-machine interface to control exoskeletons. A biomechanical model and a calibration algorithm are presented, and an exoskeleton for knee joint support is designed and constructed to verify the model and investigate the interaction between operator and machine [45]. De Vlugt, E. describes the design and application of a haptic device to study the mechanical properties of the human arm during interaction with compliant environments [46]. With the same aim, Burdet, E. found that humans learn to manipulate objects and tools in physical environments by compensating for any forces arising from the interaction. This is achieved by learning an internal model of the dynamics and by controlling the impedance [47].
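The myoelectric interfaces above share a common pipeline: the raw EMG is rectified and low-pass filtered into an activation envelope, which is then regressed onto forces or joint torques. The sketch below illustrates that pipeline with a first-order envelope filter and a ridge regression; it is a generic illustration under these assumptions, not the incremental method of Gijsberts, A. [44].

```python
import numpy as np

def emg_envelope(emg, alpha=0.05):
    """Rectify and first-order low-pass filter raw EMG into an envelope."""
    env, out = 0.0, []
    for sample in emg:
        env = (1.0 - alpha) * env + alpha * abs(sample)
        out.append(env)
    return np.array(out)

def fit_force_map(X, y, lam=1e-2):
    """Ridge regression from multichannel envelopes to a finger force.
    X: (n_samples, n_channels); y: (n_samples,) measured force."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

# Offline fit; at runtime the predicted force is simply X_new @ w.
X = np.random.rand(500, 8)   # placeholder envelopes for 8 electrodes
y = np.random.rand(500)      # placeholder measured single-finger force
w = fit_force_map(X, y)
```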

3.6. Supernumerary Robotics

Supernumerary robotic limbs (SRLs) have become increasingly popular tools for augmenting the manipulation and locomotion capabilities of humans [48]. They are designed to provide additional degrees of freedom that can be controlled independently of, or simultaneously with, the biological limbs. A bilateral interface between the robot and the operator is necessary for proper functioning, wherein control signals are acquired from the human without interference with the biological limbs, and feedback is provided from the robot to the human. SRLs have been developed in various forms, for instance legs, arms, hands, and fingers. In the work published by Luo, J. et al., the authors face the challenge of providing a solution that allows an individual operator to accomplish overhead tasks with the assistance of a robotic limb. To address this challenge, the authors propose a balance controller for the SuperLimb wearable robotic solution, utilizing a decomposition methodology to decouple the joint torques of the SuperLimb and the interaction forces. Additionally, a force plate is used to measure the center of pressure position as an evaluation method for standing balance [49]. In 2012, Baldin L. et al. presented a novel approach to using a compliant robot to reduce the load on a human while performing physical activities. The robot is attached to the subject’s waist and supports their body in fatiguing postures, allowing them to sustain those postures with less effort. The team conducted a mathematical analysis to optimize the robot’s posture and joint torques, thereby decreasing the load on the individual. Results from numerical simulations and experiments showed that the proposed method was successful in reducing the workload of the subject [50]. The work of Parietti, F. et al. presents a new approach to physically assisting a human with a wearable robot. SRLs are attached to the waist of the human to support their body in fatiguing postures, such as hunching over, squatting, or reaching the ceiling. The SRL is able to take an arbitrary posture to maximize load-bearing efficiency, rather than the constrained movements that leg exoskeletons require. A methodology for supporting the human body is described and a mathematical analysis of load-bearing efficiency is conducted. Optimal SRL postures and joint torques are obtained to minimize the human load. Numerical and experimental results of a prototype SRL demonstrate the effectiveness of this method [51].
Recent advancements in robotic technology have proposed SRLs as a potential solution to reduce the risk of work-related musculoskeletal disorders (WMSD). SRLs can be worn by the worker and augment their natural abilities, thus providing a new generation of personal protective equipment. For instance, a supernumerary robotic upper limb allows for indirect interaction with hazardous objects, such as chemical products or vibrating tools, thus reducing the risks of injury associated with joint overloading, bad postures, and vibrations. Within this perspective, Ciullo et al. present a supernumerary robotic limb system to reduce the vibration transmitted along the arms and minimize the load on the upper limb joints. An off-the-shelf wearable gravity compensation system is integrated with a soft robotic hand and a custom damping wrist, designed based on a mass-spring-damper model. The efficacy of the system is experimentally tested in a simulated industrial work environment, where subjects perform a drilling task on two materials. Analysis of the results according to ISO 5349 shows a reduction of 40–60% in vibration transmission with the presented SRL system, without compromising time performance [52].
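The damping wrist in [52] is designed on a mass-spring-damper model; for such a model, the fraction of base vibration transmitted to the supported mass follows the classical transmissibility curve, evaluated in the sketch below. The natural frequency and damping ratio used here are illustrative values, not the parameters of Ciullo et al.

```python
import numpy as np

def transmissibility(freq_hz, f_nat_hz, zeta):
    """Vibration transmissibility of a mass-spring-damper isolator:
    |X/Y| = sqrt((1 + (2*zeta*r)^2) / ((1 - r^2)^2 + (2*zeta*r)^2)), r = f/f_nat."""
    r = freq_hz / f_nat_hz
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2
    return np.sqrt(num / den)

# Isolation (transmissibility < 1) only occurs above sqrt(2) times the natural
# frequency, so the wrist must be tuned well below the tool's dominant frequency.
print(transmissibility(np.array([5.0, 40.0, 120.0]), f_nat_hz=10.0, zeta=0.2))
# -> approx. [1.31, 0.13, 0.03]
```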
Studies conducted by Khazoom, C. et al. demonstrate the potential of a supernumerary leg powered by delocalized magnetorheological (MR) clutches to assist walking with three different gaits. Simulations show that the MR leg’s low actuation inertia reduces the impact impulse by a factor of 4 compared to geared motors, and that delocalizing the clutches halves the inertial forces transmitted to the user during the swing phase.
Other studies focus on hand applications. Surgeons may be able to use a third hand under their direct control to perform certain surgical interventions without the need for a human assistant, thus reducing coordination difficulties. To assess this possibility, Abdi E. et al. present a study with naive adults using three virtual hands controlled by their two hands and right foot. The results of this study show that participants were able to successfully control virtual hands after a few trials. Further, the workspace of the hands was found to be inversely correlated with the task velocity. There was no significant difference between the three- and two-hand controls in terms of success in catching falling objects and average effort during the tasks. Participants reported that they preferred the three-hand control strategy, found it easier, and experienced less physical and mental burden [53]. Meraz, N.S. et al. present a sixth finger system as an extension of the human body and investigate how an extra robotic thumb affects the body schema and self-perception. The sixth finger is controlled with the thumb of the opposite hand and contact information is conveyed via electrostimulation. Reaching task experiments are conducted with and without visual information to evaluate the level of embodiment of the sixth robotic finger and the modification of the self-perception of the controlling finger. Results indicate that the sixth finger is incorporated into the body schema of the user and the body schema of the controlling finger is modified, implying the brain’s ability to adapt to different scenarios and body geometries [54].

4. Interactions with Human Beings: Practical Implications

As COBOTs become more common in various industries and applications, there is increasing research activity on the technologies that enable them to work safely and efficiently alongside humans. COBOTs are equipped with a range of technologies, including control systems, intention recognition, programming, and learning systems. The dynamics of the exchanged signals are influenced by individual and social aspects, including personality traits.
These technologies allow COBOTs to adapt to changing conditions in real time, learn from their experiences, and interact with humans in a way that is safe and efficient. This section provides a detailed analysis of each technology’s research activity.

4.1. Control Interface

The control system is the component of a COBOT responsible for ensuring that the machine operates safely and efficiently in a workspace shared with humans. COBOT control systems are designed to drive and monitor the robot’s movements and to ensure that it does not collide with humans or other objects in the environment. They also enable the robot to adapt to changing conditions, such as changes in lighting or the presence of new obstacles. COBOT control systems typically include sensors, software, and other technologies that allow the robot to detect and respond to changes in the environment in real time.

Observing a human action, rather than a robot action, leads to interference with executed actions, and various aspects of human movement have been shown to trigger this interference effect. Observing movement has measurable consequences for peripheral motor systems [55]. During action observation, there is a significant increase in the motor-evoked potentials originating from the hand muscles that would be utilized while making such movements. In this context, P. Maurice et al. worked on a method for performing ergonomic assessments of collaborative robotic activities, applying an evolutionary algorithm to optimize the robot’s design for improved ergonomic performance [56]. Current investigations focus on whether the interference effect that observed human action exerts on executed action carries specific information on the biological motion trajectory. The research carried out by J. Rosen et al. studied the integration of a human arm with a powered exoskeleton and its experimental implementation in an elbow joint, using the neuromuscular signal (EMG) as the primary command signal. Four indices of performance were used to assess the system, and the results indicated the feasibility of an EMG-based powered exoskeleton as an integrated human-machine system [57]. Human movements are likely to interfere with incongruently executed arm movements only under biological trajectories; observed non-biological incongruent human movement lacks the interference effect on executed movements. In contrast, an observed ball movement causes interference with an incongruent executed arm motion whether it is biological or non-biological. The method described by K. A. Farry et al. focuses on commanding two grasping options (key and chuck) and three thumb motions (abduction, extension, and flexion). Outcomes include a 90% correct grasp selection rate and an 87% correct thumb motion selection, both using the myoelectric spectrum [58]. Such effects result from the quantity of information the brain extracts from distinct kinds of motion stimuli [59]. Alternatively, the impact of prior experience with diverse kinds of forms and motions needs to be taken into consideration. Extensive research is necessary to discriminate among the existing possibilities [60].

Data-driven interaction involves using data to optimize interactions between collaborative robots and human workers, mainly in manufacturing and industrial environments [61]. According to Magrini et al., the collaboration between humans and robots depends on a suitable exchange of contact forces that may take place at various points along the robotic structure. The researchers concentrated on the physical collaboration elements whereby humans determine the nature of contact with robots, and the robot reacts as a function of the measured forces [62].
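Reacting to contact forces in this way presupposes an estimate of the external joint torques, often obtained without dedicated force sensors. A widely used tool is the generalized momentum observer, sketched below in discrete time; this is a textbook formulation assuming the dynamic model terms are available, not the specific pipeline of Magrini et al. [62].

```python
import numpy as np

class MomentumObserver:
    """Residual r tracks the external joint torque: r_dot = K*(tau_ext - r)."""

    def __init__(self, n_joints, gain=25.0, dt=0.001):
        self.K = gain * np.eye(n_joints)
        self.dt = dt
        self.r = np.zeros(n_joints)
        self.integral = np.zeros(n_joints)
        self.p0 = None

    def update(self, M, C_T_qdot, g, tau_m, qdot):
        """M: inertia matrix; C_T_qdot: C(q, qdot)^T @ qdot; g: gravity torque;
        tau_m: motor torque; qdot: joint velocities."""
        p = M @ qdot  # generalized momentum
        if self.p0 is None:
            self.p0 = p.copy()
        self.integral += (tau_m + C_T_qdot - g + self.r) * self.dt
        self.r = self.K @ (p - self.integral - self.p0)
        return self.r  # approximates the external joint torque

# A threshold on the residual separates free motion from contact; its time
# profile helps distinguish intentional guidance from accidental collision.
```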
Safe coexistence is thus made possible and ensured. O. Khatib et al. work on physical collaboration, where robots have to accomplish several kinds of subtasks [63]. The first entails detecting contact with a human and distinguishing between intentional contact and undesired collision. The second is the identification of the points on the robot’s surface where contact has taken place. The third involves estimating the exchanged Cartesian forces. The fourth involves controlling the robot’s reaction according to the desired behavior.

Force and pressure are significant considerations in the design and implementation of collaborative robot interactions. According to Tsumugiwa et al., human–robot cooperation comprises two main tasks: carrying and positioning. The carrying task has independent characteristics, as the robot adjusts its behavior depending on the stiffness estimated from the human arm [64]. A virtual stiffness is maintained depending on human characteristics, whereby the stiffness of the human operator’s arm, or the force applied to the robot, is part of the cooperative task [65]. One of the major assumptions is that human operators often stiffen their arms during the positioning task [66]. Morel et al. proposed a novel variable impedance control comprising virtual stiffness. The virtual forces produced by the proposed controller make a cooperative positioning task easy to achieve with precise outcomes [23]. To confirm the usefulness of the proposed control, a cooperative peg-in-hole task was executed by a robot [67]. Experimental outcomes illustrate that the proposed control is effective for cooperative carrying as well as positioning tasks [68].

Vision is a significant element in enabling robots to perceive and comprehend their surroundings and to interact with humans safely and effectively. Vision in COBOT interaction is effective for object recognition and tracking, whereby vision sensors, including cameras, are used to track objects in the robot’s surroundings. Human-computer interfaces facilitate communication, assisting in the exchange of information, procedural commands, and controls. Within this domain, C. Plagemann et al. present a novel interest point detector for mesh and range data that is particularly well suited for analyzing the human shape. The experiments carried out show that these interest points are significantly better at detecting body parts in depth images than state-of-the-art sliding-window-based detectors [69]. Professionals working in the robotics sector often concentrate on integrating spoken natural language with natural gestures for commanding and controlling semi-autonomous mobile robots. Both spoken natural language and natural gestures have become user-friendly means of interaction with mobile robots. From the human perspective, this mode of interaction is easier since the human does not need to learn additional interaction modes, relying instead on natural communication channels. According to Perzanowski et al., the objective of developing a natural language and gesture interface for a semi-autonomous robot was achieved. Using natural language and gestures within the interface relies on two distinct assumptions [70].
The first assumption is that, since natural language remains ambiguous, gestures disambiguate various kinds of information in the speech. The second assumption is that humans utilize natural gestures more easily when issuing directives and locomotion commands to mobile robots. The association of vision and force/pressure sensing provides several positive outcomes for COBOTs, enabling them to safely interact with humans and carry out a range of tasks. Zanchettin, A.M. and Rocco, P. combine these two elements in a constraint-based algorithm for combined trajectory generation and kinematic control of robotic manipulators. The algorithm shifts from an imperative programming paradigm to a declarative motion programming approach [71]. Furthermore, L. Peternel et al. propose an exoskeleton control method for the adaptive learning of assistive joint torque profiles in periodic tasks. In this research, human muscle activity is utilized as feedback to modify the assistive joint torque behavior in a way that reduces the muscle activity [72]. Force and pressure measurements play a central role in ensuring safe and effective collaboration between humans and robots [73]. According to Lippiello et al., it is important to consider the interaction control between a robot manipulator and a partially known environment. Autonomy in a robotic system is strictly connected to the availability of sensing information on the external surroundings; among the different sensing capabilities, vision and force have critical roles. This is confirmed in a work proposed by A. Cherubini et al., where a multimodal sensor-based control framework for intuitive human–robot collaboration has been developed. The approach is markerless, utilizes a Kinect and an on-board camera, and is based on a unified task formalism. The framework has been validated in an industrial mock-up scenario of humans and robots collaborating to insert screws [74]. Other research by Lippiello et al., confirmed by a simulation case study, proposes an algorithm for the online estimation of the pose of an unknown and possibly time-varying rigid object based on visual data from a camera. Force and joint position measurements are also used to improve the estimation accuracy [75].

4.2. Intention Recognition

Intention recognition is a key technology for COBOT applications. A COBOT intention recognition system typically relies on sensors and software that allow the robot to detect and interpret human movements and gestures. By understanding the intentions of humans, COBOTs can adapt their actions to avoid collisions or other safety hazards. They can also provide more effective assistance to human workers by anticipating their needs and responding in real time. Intention recognition is thus essential for COBOT evolution, making it an important area of research.
Developing knowledge of discrete robot motions from sets of demonstrations is particularly relevant for intention recognition. In a study by Mohammad and Billard, motion is modeled as a non-linear autonomous dynamical system (DS), and the researchers concentrate on defining sufficient conditions to guarantee global asymptotic stability at the target [76]. The study proposes a learning approach known as the Stable Estimator of Dynamical Systems (SEDS), which learns the parameters of the dynamical system while ensuring that all motions follow the demonstrations and reach and stop at the target [77]. The study shows that the DS formulation provides a significant framework for the fast learning of robot motions from small sets of demonstrations [30].
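The core idea is visible in a linear special case: if the motion is generated by ẋ = A(x − x*) with A having eigenvalues with strictly negative real parts, every trajectory converges to the target x* from any start point. SEDS learns a nonlinear mixture of such dynamics under analogous stability constraints; the sketch below covers only the linear case, with an illustrative matrix A.

```python
import numpy as np

def rollout_ds(x0, x_target, A, dt=0.01, steps=500):
    """Integrate x_dot = A @ (x - x_target); a Hurwitz A guarantees
    global asymptotic convergence to x_target."""
    x, traj = np.asarray(x0, dtype=float), []
    for _ in range(steps):
        x = x + dt * (A @ (x - x_target))
        traj.append(x.copy())
    return np.array(traj)

A = np.array([[-2.0,  0.5],
              [-0.5, -2.0]])  # eigenvalues -2 +/- 0.5j: a stable spiral
traj = rollout_ds([1.0, -1.0], np.array([0.0, 0.0]), A)
print(np.round(traj[-1], 4))  # ~[0, 0]: the motion stops at the target
```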
Image-based collision detection is currently being studied in industrial-robot environments. The study published by F. Stulp et al. investigates the legibility of robot behavior as a property that emerges from the requirements for efficiency and robustness of joint human–robot task completion. Two experiments involving human subjects demonstrate that robots are able to adjust their behavior to increase the humans’ ability to predict the robot’s intentions, resulting in faster and more reliable task completion [78]. An approach for conducting collision tests based on images retrieved from several stationary cameras in a work cell is presented in the study by Ebert and Henrich [79]. The work of V. Magnanimo et al. proposes a Dynamic Bayesian Network (DBN) for recognizing tasks consisting of sequences of manipulated objects and performed actions. The DBN takes raw RGBD data as input and classifies the manipulated objects and performed actions. To evaluate the effectiveness of the proposed approach, a case study of three typical kitchen tasks is conducted [80]. Sensor-controlled transfer motion from the current configuration is a necessary basic skill that allows robots to operate safely with humans in the same workspace. This has been studied in a paper by L. Bascetta et al., which presents advanced algorithms for cognitive vision. Using a dynamic model of human walking, these algorithms are applied to detecting and tracking humans and estimating their intentions [81]. D. J. Agravante et al. propose a framework combining vision and haptic information for human–robot joint actions, an ideal angle from which to understand the connection between vision and force/pressure in the intention recognition of COBOT interaction. The framework consists of a hybrid controller that utilizes visual servoing in addition to impedance controllers. Humanoid robots bring various advantages when working alongside humans to perform different kinds of tasks [82], as they can maintain interaction with human-like ranges of motion and sensing capabilities. The proposed general framework for human–robot joint collaborative tasks proves to be effective.

4.3. Programming and Learning

Programming and learning are two critical technologies that enable COBOTs to adapt to changing conditions and perform tasks safely and efficiently. COBOT programming typically involves creating a set of instructions or commands that the robot will follow to complete a specific task. Programming can be done manually by a human operator through a visual programming interface. COBOTs can also learn from humans by observing their movements and actions and adapting their behavior accordingly.
Paradigms of simultaneous and proportional control of hand prostheses continue to gain momentum within the robotics rehabilitation community, demonstrating the value of bioelectric signals in programming. Simultaneous and proportional control is designed to provide control of the desired forces or torques for each DoF of the hand or wrist through real-time predictions. The restoration of the motor function of an upper limb following amputation presents a significant task for the rehabilitation engineering sector. The study conducted by I. Strazzulla et al. applies a simultaneous and proportional control approach to two robotic hands [83]. In an investigation conducted by Calinon et al., robot programming by demonstration (PbD) addresses methods through which robots develop new skills by observing humans. The methodology proposes a probabilistic approach combining hidden Markov models (HMM) and Gaussian mixture regression (GMR) for learning and reproducing human motions. This approach is tested on simulated and real robots, demonstrating its ability to handle cyclic and crossing movements as well as multiple constraints at once [84]. The connection between robots and human-like activities enables the machines to interact with people in natural and harmless ways. New and complete strategies have been designed, estimated, and implemented to handle dynamic force interactions taking place at various points of the robot’s structure. For instance, L. Rozo et al. present a robot motion adaptation method for collaborative tasks that combines the extraction of the desired robot behavior, a task-parametrized Gaussian mixture model, and variable impedance control for human-safe interaction. This approach is tested in a scenario where a 7-DoF back-drivable manipulator learns to cooperate with a human to transport an object, and the results show that the proposed method is effective [85]. Human–robot interaction (HRI) implies that robots can establish communication with a person according to their needs and behave in a manageable manner [86]. Furthermore, the authors present a framework that allows a user to teach a robot collaborative skills from demonstrations, which can be applied to tasks involving physical contact with the user. This method enables the robot to learn trajectory-following skills as well as impedance behaviors [87]. Determining the level of engagement in human–robot interaction is also crucial. Engagement measures depend on the dynamics of the social signals exchanged by the partners, particularly speech and gaze. This has been studied by S. Ivaldi et al., who assessed the influence of extroversion and negative attitudes towards robots on speech and gaze during a cooperative task [88]. In the model presented by A. Colome et al., dynamic movement primitives (DMP) and visual/force feedback are utilized within a reinforcement learning (RL) algorithm to enable the robot to learn safety-critical tasks such as wrapping a scarf around the neck. Experimental results demonstrate that the robot is consistently capable of learning tasks that could not be learned otherwise, thus improving its capability with this approach [89]. Furthermore, according to the research presented by S. Lallee et al., a cooperative human–robot interaction system has been developed to recognize objects, recognize actions as sequences of perceptual primitives, and transfer this learning between different robotic platforms.
This system also provides the capability to link actions into shared plans, forming the basis of human–robot cooperation [90]. Thus, in the future, the sharing of spaces between humans and collaborative robots will become more common. As a result of the integration of ever more advanced technologies, people and COBOTs will be able to collaborate more effectively and securely [91] in the same working environment. As confirmed by M. Lawitzky et al., combining planning and learning algorithms can lead to superior results in goal-directed physical robotic assistance tasks [92]. The ability to cooperate and to establish and utilize shared action plans is a distinguishing cognitive capacity that separates humans from non-human primates. Language is an inherently cooperative activity whereby listener and speaker cooperate to arrive at a shared communication objective [93]. Current investigations fall within the greater context of cognitive-developmental robotics, in which the physical embodiment is designed to play a central role in structuring the representations in a system. The robotic system can acquire global information regarding the surrounding environment, which is utilized for task planning and obstacle avoidance. Their complementary nature has led to the natural belief that vision and force should be exploited in an integrated and synergic mode when designing planning and control strategies for robotic systems. L. Peternel et al. demonstrate that robots can be taught dynamic manipulation tasks in cooperation with a human partner using a multi-modal interface. They employ locally weighted regression for trajectory generalization and adaptive oscillators for the adaptation of the robot to the partner’s motion. The authors conduct an experiment teaching a robot how to use a two-person crosscut saw, demonstrating this approach [94].
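Several of the learning pipelines above (e.g., [84,85,89]) reproduce demonstrated motions through Gaussian mixture regression: a GMM is fitted over (time, position) pairs, and reproduction conditions it on the current time. The following minimal GMR sketch assumes the mixture parameters have already been fitted (e.g., by expectation-maximization); the two-component mixture is an illustrative toy example.

```python
import numpy as np

def gauss_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def gmr(t, priors, mu, sigma):
    """Gaussian mixture regression E[x | t] for a GMM over (t, x) pairs.
    priors: (K,); mu: (K, 2) rows [mu_t, mu_x]; sigma: (K, 2, 2) covariances."""
    w = np.array([p * gauss_pdf(t, m[0], s[0, 0])
                  for p, m, s in zip(priors, mu, sigma)])
    w /= w.sum()
    # Conditional mean of each component: mu_x + sigma_xt / sigma_tt * (t - mu_t)
    cond = [m[1] + s[1, 0] / s[0, 0] * (t - m[0]) for m, s in zip(mu, sigma)]
    return float(np.dot(w, cond))

priors = np.array([0.5, 0.5])
mu = np.array([[0.25, 0.0], [0.75, 1.0]])          # two local models
sigma = np.tile(np.diag([0.02, 0.05]), (2, 1, 1))  # per-component covariance
print([round(gmr(t, priors, mu, sigma), 3) for t in (0.25, 0.5, 0.75)])
# -> smoothly blends the two local models along the time axis
```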

4.4. Virtual Reality (VR)-Based COBOT

The combination of virtual reality (VR), digital twins, and the virtual commissioning of robots and COBOTs is emerging as a promising solution for automation. This approach allows for the real-time simulation of robotic systems in a virtual environment and enables engineers and designers to monitor and optimize performance in a cost-effective and safe manner. In addition, by using VR, digital twins, and virtual commissioning, users can gain a better understanding of the robotic system, its components, and its environment. For instance, the work of Oyekan, J.O. et al. presents the use of a virtual reality digital twin of a physical layout as a mechanism to understand human reactions to both predictable and unpredictable robot motions. A set of established metrics, as well as a newly developed kinetic energy ratio metric, is used to analyze human reactions and validate the effectiveness of the virtual reality environment [95]. Duguleana, M. et al. present an analysis of virtual reality (VR) as an alternative to real-world applications for testing material-handling scenarios that involve collaboration between robots and humans. They measure variables such as the percentage of tasks completed successfully, the average time to complete tasks, the relative distance and motion estimate, and presence and contact errors, and compare the results between different manipulation scenarios [96]. People with two-arm disabilities face difficulties in completing tasks that require them to grasp multiple objects that are closely spaced. Current arm-free human–robot interfaces (HRIs), such as language-based and gaze-based HRIs, are not effective in controlling robotic arms to complete such tasks. Zhang, C. et al. propose a novel HRI system that leverages mixed reality (MR) feedback and head control for arm-free operation. The proposed HRI system is designed to enable users with disabilities to control a robotic gripper with high accuracy and flexibility. Experiments conducted on objects of various sizes and shapes demonstrate its capability to complete tasks with high adaptability and point cloud error tolerance [97]. With the advancement of artificial intelligence technology in smart devices, understanding how humans develop trust in virtual agents is emerging as a critical research field. To deal with this issue, Gupta et al. present a novel methodology to investigate user trust in auditory assistance in a virtual reality (VR)-based search task. The study collected physiological sensor data such as EEG, GSR, and HRV, subjective data through questionnaires such as STS, SMEQ, and NASA-TLX, and a behavioral measure of trust in response to valid/invalid verbal advice from the agent. Results show that cognitive load and agent accuracy play an important role in trust formation in the customized VR environment [98].

5. COBOT Market Analyses: Potentialities and Limits

In this section, 195 COBOTs currently available on the market were investigated; they are listed in Appendix A. The classification is based on (i) the degrees of freedom; (ii) the robot typology, as Anthropomorphic, Cartesian, SCARA, and Torso; (iii) the payload; (iv) the reach volume; and (v) the accuracy. The aim of this assessment is to provide a synthetic overview of the features and performance of COBOTs available on the market.

5.1. COBOT Assessment: Degrees of Freedom

Robotic arms are characterized by their number of degrees of freedom (DoF), ranging from one to fourteen. A higher number of DoF implies that the robot has more pose options. COBOTs can be classified into four categories: Anthropomorphic, Cartesian, SCARA, and Torso, as in Table 1.
Anthropomorphic COBOTs consist of a mechanical serial structure composed of rigid arms linked by at least four joints that allow their movement; the joints can be cylindrical or prismatic. Cartesian COBOTs consist of an arm that moves along three orthogonal Cartesian axes, using orthogonal sliding joints on metal rails. Since the movement works on linear axes, the motions are easily programmable at the cost of less flexibility. Selective compliance assembly robot arm (SCARA) COBOTs are defined by two arms that move in the horizontal plane, with a prismatic coupling at their end that allows vertical movement. Torso COBOTs have a human-like aspect and behavior, capable of twisting, bending, and rotating in multiple directions, giving them a high number of degrees of freedom. The structure can be based on serial, parallel, or differential kinematics, each with pros and cons. Serial Torso COBOTs are commonly easier to control; in contrast, parallel and differential kinematics offer a greater number of DoF driven by a higher number of smaller actuators, at the price of kinematics that are more complex to control and design.
The most popular class of COBOT is Anthropomorphic, followed by SCARA, Torso, and Cartesian. The Anthropomorphic class represents 90% of the total COBOTs offered by the market. Regarding Torso COBOTs (2%), despite being designed with multiple degrees of freedom to provide greater flexibility and adaptability in a wide range of applications, their complexity and large footprints limit their acceptance.
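The class counts in Table 1 translate directly into these market shares. As a minimal Python illustration, the tally below reproduces the percentages from the aggregate counts; the `models` list is a stand-in for parsing the full per-model records of Appendix A:

```python
from collections import Counter

# Stand-in for the class column of Appendix A (counts from Table 1).
models = (["Anthropomorphic"] * 176 + ["SCARA"] * 14
          + ["Torso"] * 4 + ["Cartesian"] * 1)

shares = Counter(models)
total = sum(shares.values())
for cls, n in shares.most_common():
    print(f"{cls:16s} {n:3d} models  {100 * n / total:4.1f}%")
# Anthropomorphic comes out at roughly 90% of the 195 surveyed models.
```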

5.2. COBOT Assessment: Reach and Payload

The payload capacity refers to the mass and inertia that the robot’s wrist can manage. The robotic arm’s reach is a measurement of the maximum distance at which the mechanism can execute tasks, defining its three-dimensional workspace. The COBOTs studied in this review are grouped into five categories, as shown in Table 2.
Group 1 includes small-sized COBOTs, with a payload lower than 5 kg and a limited reach of 500 mm. Medium-size COBOTs have payloads between 5 and 20 kg. Large-size COBOTs include the devices with the highest payload, greater than 20 kg, and the longest reach (Group 5). The Anthropomorphic class represents more than 90.0% of the total; the most popular sizes are Group 2, Group 3, and Group 4, which include 88.6% of the Anthropomorphic models, as illustrated in Table 3. Small and large Anthropomorphic models of Group 1 and Group 5 number 16 and 4, respectively. The technology of COBOTs derives from traditional robotics equipment; thus, it is possible to find Anthropomorphic COBOTs with a long-distance reach and a great payload, up to 170 kg. In this case, the producer equipped traditional equipment with a tactile skin and proximity sensors that allow it to avoid collisions and retract, depending on the contact force. This COBOT model is exceptional in size, features, and application; thus, it has been excluded from the graphs and statistics. The Cartesian class accounts for a single model; these machines are widely installed in production lines, typically to perform activities such as feeding pallet or chain conveyors. SCARA accounts for 14 models on the market, representing 7.2% of the total; its typical application is pick-and-place with high speed and high accuracy, comparable to, and even higher than, the Anthropomorphic class. Despite their high number of degrees of freedom, the complexity and large footprints of Torso COBOTs limit their diffusion and development; ABB, Rethink Robotics, and Siasun are the key producers.
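For reproducibility, the payload thresholds of Table 2 can be encoded as a simple rule. The sketch below clusters on payload alone; how the survey resolves models whose payload and reach fall into different groups is not stated, so the reach test is omitted here (an assumption of this illustration):

```python
def payload_cluster(payload_kg: float) -> str:
    """Assign a payload [kg] to one of the five clusters of Table 2."""
    bounds = [(5.0, "Group 1"), (10.0, "Group 2"),
              (15.0, "Group 3"), (20.0, "Group 4")]
    for upper, group in bounds:
        if payload_kg <= upper:
            return group
    return "Group 5"

assert payload_cluster(4.0) == "Group 1"    # small-sized COBOT
assert payload_cluster(13.0) == "Group 3"   # medium-size COBOT
assert payload_cluster(25.0) == "Group 5"   # large-size COBOT
```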
Figure 2a,b shows the relation between payload and reach as a proportional trend for the Anthropomorphic and SCARA classes. The correlation coefficient is 29.4% for the Anthropomorphic typology and 38.5% for the SCARA typology.
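The reported coefficients follow from a standard Pearson correlation over the payload and reach columns. A minimal sketch, fed for illustration with a small subset of the SCARA rows of Appendix A (not the full sample used for Figure 2):

```python
import numpy as np

def pearson_r(payload_kg, reach_mm):
    """Pearson correlation between payload [kg] and reach [mm]."""
    return np.corrcoef(payload_kg, reach_mm)[0, 1]

# Subset of SCARA rows from Appendix A: (payload, reach) pairs.
payload = [1.5, 1.0, 3.0, 4.0, 6.0]
reach = [400, 452, 455, 760, 760]
print(f"r = {pearson_r(payload, reach):.3f}")
```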

5.3. COBOT Assessment: Accuracy

Accuracy is an indicator that represents the deviation between the planned and the observed pose. COBOT accuracy is compared with payload capacity in Figure 3a,b. In the current market, more than 90% of the Anthropomorphic COBOTs show accuracy in the 0.01–0.20 mm range, with no interrelated impact of the robot payload capacity from 0.3 kg to 20.0 kg (Figure 3a). Moreover, there is no significant trend between the maximum payload and a deterioration of accuracy. Figure 3b shows that the payloads of Cartesian, Torso, and SCARA COBOTs concentrate in the 0.5–5.0 kg range, and the level of accuracy is lower than 0.10 mm.
Figure 4 shows a percentile representation of accuracy for the two main classes: Anthropomorphic accuracy is described by a Q1 (25th percentile) of 0.03 mm, a Q3 (75th percentile) of 0.10 mm, and a median of 0.05 mm. SCARA accuracy is described by a Q1 (25th percentile) of 0.02 mm, a Q3 (75th percentile) of 0.06 mm, and a median of 0.04 mm.
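These quartiles are standard percentile estimates over the accuracy column of Appendix A. A minimal sketch follows; `numpy`’s default percentile interpolation is an assumed stand-in for the exact estimator behind the box plot of Figure 4:

```python
import numpy as np

def accuracy_quartiles(accuracy_mm):
    """Q1, median and Q3 of an accuracy column, as plotted in Figure 4."""
    return np.percentile(accuracy_mm, [25, 50, 75])

# Illustrative call on a hypothetical accuracy sample [mm]; for the full
# Anthropomorphic column the survey reports Q1 = 0.03 mm, median = 0.05 mm
# and Q3 = 0.10 mm.
q1, med, q3 = accuracy_quartiles([0.01, 0.03, 0.05, 0.10, 0.15])
print(q1, med, q3)
```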
Figure 5a,b confirms the correlation analysis, showing that accuracy is not affected by the COBOT reach, for the Anthropomorphic configuration in Figure 5a and for the Cartesian, SCARA, and Torso architectures in Figure 5b. The level of accuracy is lower than 0.25 mm, and it depends on the model or provider.

5.4. COBOT Assessment: Energy Consumption vs. Tool Center Point (TCP) Velocity

The TCP velocity is a valuable characteristic of the COBOT and refers to the end-effector motion performance during its operations. The TCP velocity has a direct impact on the cycle time of the workstation and on operator safety. Power consumption is a central index for equipment installation and daily device supervision. The energy consumption increases consistently with the payload. Figure 6 shows the correlation between energy consumption [kW] and the maximum TCP velocity [m/s], listed in Appendix B. The investigated COBOT payload range is within 0.5–20.0 kg, with a TCP velocity from 0.3 m/s to 6.0 m/s. The expected power consumption exceeds 0.50 kW for COBOTs that provide a payload greater than 10 kg. There is significant evidence that a TCP velocity increment from 1.0 m/s to 3.0 m/s does not statistically influence energy consumption. The main driver for power use is the payload offered by the Anthropomorphic COBOT, in particular for payloads from 1.0 kg to 6.0 kg, considering the gripper combined with the manipulated workpiece mass and inertia.
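The weak dependence on velocity can be checked with an ordinary least-squares line through the (TCP velocity, power) pairs of Appendix B. The sketch below uses a hand-picked subset of those pairs; the small slope it produces is illustrative only and is not the paper’s statistical test:

```python
import numpy as np

# (max TCP velocity [m/s], power consumption [kW]) pairs from Appendix B.
velocity = np.array([2.0, 3.5, 6.0, 1.0, 0.5, 2.8, 1.0])
power = np.array([0.90, 0.60, 0.50, 0.35, 0.36, 0.20, 0.15])

# Least-squares line power = a * velocity + b; a small slope a is
# consistent with velocity barely driving consumption in this range.
a, b = np.polyfit(velocity, power, 1)
print(f"slope = {a:.3f} kW per (m/s), intercept = {b:.2f} kW")
```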

6. Discussion

Developing a COBOT selection procedure is a challenging task that covers a broad range of domains. In particular, the application dictates the device concept and design. Furthermore, COBOT providers may offer ad hoc solutions that do not provide optimal performance. This paper provided an overview of the current state of the art of COBOT applications and learning abilities, and of the equipment existing on the market. COBOT applications are growing in terms of installations, remarkably in the SME context.
In the manufacturing domain, COBOTs are employed to assist with repetitive work, reducing the risks and fatigue associated with heavy tasks and making the work environment safer for employees. COBOTs are also used to support the assembly of products. The review highlights that a number of researchers are focusing their efforts on developing methods for reducing workload and optimizing productivity; these methods are mainly aimed at complex components. Research in material handling mainly addresses the challenging task of handling unstable materials, and in this field the developed methods for adjusting and compensating trajectories in real time are very promising. COBOT employment in security and inspection supporting activities is growing. COBOTs offer new opportunities in a variety of healthcare contexts, improving accuracy and precision in tasks such as surgery or rehabilitation. By combining AI and machine learning algorithms with robotics, COBOTs can be trained to assist physical therapists in providing superior and faster restoration; COBOT-assisted physical therapy could also provide personalized and dynamic treatment, allowing for more effective rehabilitation. The use of AI and machine learning systems is rapidly accelerating the ability of COBOTs to interact with humans and reducing training times. Various studies have been carried out on controlling COBOTs using natural language and gestures. Efforts are currently focused on training COBOTs to understand and respond to natural language, interpret human gestures, and visualize objects to achieve more accurate task completion. Additionally, AI-enabled systems are being developed to allow COBOTs to constantly update their knowledge and refine their decision-making. Force, pressure, and vision sensors are critical components in enabling human–robot interaction.
This article reviewed a number of COBOTs. The literature shows a gap in human–robot interaction; issues and constraints remain before collaboration abilities can be improved. The market analysis results show that the most promising typology for COBOT applications is the Anthropomorphic one, which can provide greater flexibility and adaptability than traditional robotics. Anthropomorphic COBOTs show improved adaptability to time-variant conditions and unstructured environments. Future developments should consider the usability conditions to increase compliant applications. The proposed classification and comparison underline how SMEs and researchers are moving toward innovative solutions.

7. Conclusions

In this paper, a systematic review was performed, which led to the selection of 98 papers to find current trends in COllaborative roBOT applications, with a specific focus on the use of stationary (fixed) systems. The results were evaluated, and screening criteria were applied to industrial and service applications from 1996 to 2023, filtering 423 papers with a two-stage process. A classification of different collaborative contexts was proposed, mainly composed of industrial assembly, material handling, service personal assistance, security and inspection, Medicare, and supernumerary classifications. Collaborative robot technology offers an innovative and modular solution to enhance safe interaction with human beings. The article focused on the combination of robot architecture, AI, and machine learning, since these factors are interconnected for an effective implementation. Furthermore, progress in force control, vision processing, and pressure regulation is enabling human–robot collaboration. The studied potential barriers and challenges that require research effort and business approval are mainly the following: machine learning implementation is not predominant in the literature, and it may be classified into supervised, unsupervised, and reinforcement techniques; regarding perception and sensing, vision is the most used mode in the selected set of papers, followed by accelerometer input and muscular signal input; sensor fusion is not extensively used in human–robot collaboration. The market analysis covers 195 models, focusing on the key features of (i) degrees of freedom, (ii) reach and payload, (iii) accuracy, and (iv) energy consumption vs. tool center point velocity, to further demonstrate the relevance and growing interest from researchers and companies. In particular, medium-size Anthropomorphic COBOTs are the most suitable configuration for most applications, providing greater flexibility and adaptability than the SCARA or Torso configurations. It is noted that COBOTs with payloads of 5.0–20.0 kg and reaches of 500–2000 mm show an invariant accuracy lower than 0.20 mm, representing 88.6% of the analyzed samples. The potential barriers or challenges in guaranteeing the 0.20 mm accuracy depend on the COBOT design and programming, the reduced TCP velocity, the higher pose stabilization time, and the limited payload. The investigated COBOT payload range is within 0.5–20.0 kg, with a TCP velocity from 0.3 m/s to 6.0 m/s. The expected power consumption exceeds 0.50 kW for COBOTs that provide a payload greater than 10 kg. There is significant evidence that a TCP velocity increment from 1.0 m/s to 3.0 m/s does not statistically influence energy consumption. The main driver for power use is the payload offered by the Anthropomorphic COBOT, in particular for payloads from 1.0 kg to 6.0 kg, considering the gripper combined with the manipulated workpiece mass and inertia.
The comparative analysis between COBOTs and traditional robots highlights that traditional robots are designed to complete repetitive tasks with high accuracy and precision (0.03 mm), making them ideal for scenarios that need consistent performance in terms of throughput and cycle time. In the same setting, industrial robots provide higher technical specifications than collaborative robots: they are faster, more accurate, and have a larger reach volume. Their best use is in high-volume processes with low variation; however, they are not easy to reprogram and redeploy on new cell settings and part configurations. Nowadays, collaborative robot models provide increasing performance and quality, and they can flexibly adapt to part variations, improving the worker experience. User-friendliness is the main factor for SME accessibility over traditional robots, which require additional safety equipment; the safety elements include light barriers, scanners, fencing, and dual-emergency stops. Additionally, the industrial robot is more difficult to integrate, mainly due to the more complex programming environment, and requires experienced staff to install and set up the layout. Finally, maintenance costs are higher for industrial robots as well, and industrial robots are often more expensive after integration compared with collaborative robots due to the high-usage daily duty cycle. Nevertheless, human interaction and learning technologies will have to draw on research from multidisciplinary fields such as psychology and the behavioral sciences in order to be ready for deployment in real-world applications that offer new opportunities for COBOTs in the future.

Author Contributions

Conceptualization, C.T., F.A. and N.P.; methodology, C.T., F.A. and N.P.; formal analysis, C.T., F.A. and N.P.; investigation, C.T., F.A. and N.P.; resources, C.T., F.A. and N.P.; data curation, C.T., F.A. and N.P.; writing—original draft preparation, C.T., F.A. and N.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. COBOT vs. Payload, Reach and Accuracy.
Producer | Model | Class | DoF | Payload [kg] | Reach [mm] | Accuracy [mm]
ABB | CRB 1100 SWIFTI | Anthropomorphic | 6 | 4.0 | 580 | 0.01
ABB | CRB 15000 GoFa | Anthropomorphic | 6 | 5.0 | 950 | 0.05
ABB | IRB 14000 Yumi | Torso | 14 | 0.5 | 1200 | 0.02
ABB | IRB 14050 Yumi | Anthropomorphic | 7 | 0.5 | 559 | 0.02
Acutronics | MARA | Anthropomorphic | 6 | 3.0 | 656 | 0.10
AIRSKIN | Kuka Agilus Fenceless | Anthropomorphic | 6 | 10.0 | 1100 | 0.02
AIRSKIN | Kuka Cybertech Fenceless | Anthropomorphic | 6 | 24.0 | 2020 | 0.04
AUBO Robotics | I10 | Anthropomorphic | 6 | 10.0 | 1350 | 0.10
AUBO Robotics | I3 | Anthropomorphic | 6 | 3.0 | 625 | 0.03
AUBO Robotics | I5 | Anthropomorphic | 6 | 5.0 | 924 | 0.05
AUBO Robotics | I7 | Anthropomorphic | 6 | 7.0 | 1150 | 0.05
Automata | EVA | Anthropomorphic | 6 | 1.3 | 600 | 0.50
Automationware | AW-Tube 5 | Anthropomorphic | 6 | 5.0 | 900 | 0.03
Automationware | AW-Tube 8 | Anthropomorphic | 6 | 8.0 | 1000 | 0.04
Automationware | AW-Tube 12 | Anthropomorphic | 6 | 13.0 | 1300 | 0.05
Automationware | AW-Tube 15 | Anthropomorphic | 6 | 15.0 | 1000 | 0.05
Automationware | AW-Tube 18 | Anthropomorphic | 6 | 18.0 | 1700 | 0.06
Automationware | AW-Tube 20 | Anthropomorphic | 6 | 20.0 | 1500 | 0.07
Bosch | APAS | Anthropomorphic | 6 | 4.0 | 911 | 0.03
Comau | Aura | Anthropomorphic | 6 | 170.0 | 2790 | 0.10
Comau | e.Do | Anthropomorphic | 6 | 1.0 | 478 | 0.01
Comau | Racer 5 0.80 Cobot | Anthropomorphic | 6 | 5.0 | 809 | 0.03
Denso | Cobotta | Anthropomorphic | 6 | 0.5 | 342 | 0.05
Dobot | CR10 | Anthropomorphic | 6 | 10.0 | 1525 | 0.03
Dobot | CR16 | Anthropomorphic | 6 | 16.0 | 1223 | 0.03
Dobot | CR3 | Anthropomorphic | 6 | 3.0 | 795 | 0.02
Dobot | CR5 | Anthropomorphic | 6 | 5.0 | 1096 | 0.03
Dobot | M1 | SCARA | 4 | 1.5 | 400 | 0.02
Dobot | Magician | Anthropomorphic | 4 | 0.3 | 320 | 0.20
Dobot | Magician | Anthropomorphic | 4 | 0.5 | 340 | 0.20
Dobot | MG 400 | Anthropomorphic | 4 | 0.8 | 440 | 0.05
Doosan Robotics | A0509 | Anthropomorphic | 6 | 5.0 | 900 | 0.03
Doosan Robotics | A0912 | Anthropomorphic | 6 | 9.0 | 1200 | 0.05
Doosan Robotics | H2017 | Anthropomorphic | 6 | 20.0 | 1700 | 0.10
Doosan Robotics | H2515 | Anthropomorphic | 6 | 25.0 | 1500 | 0.10
Doosan Robotics | M0609 | Anthropomorphic | 6 | 6.0 | 900 | 0.10
Doosan Robotics | M0617 | Anthropomorphic | 6 | 6.0 | 1700 | 0.10
Doosan Robotics | M1013 | Anthropomorphic | 6 | 10.0 | 1300 | 0.10
Doosan Robotics | M1509 | Anthropomorphic | 6 | 15.0 | 900 | 0.10
Efort | ECR5 | Anthropomorphic | 6 | 5.0 | 928 | 0.03
Elephant Robotics | C3 | Anthropomorphic | 6 | 3.0 | 500 | 0.50
Elephant Robotics | E5 | Anthropomorphic | 6 | 5.0 | 810 | 0.50
Elephant Robotics | myCobot | Anthropomorphic | 6 | 0.3 | 280 | 0.20
Elephant Robotics | Panda 3 | Anthropomorphic | 6 | 3.0 | 550 | 0.50
Elephant Robotics | Panda 5 | Anthropomorphic | 6 | 5.0 | 850 | 0.50
Elite Robot | CS612 | Anthropomorphic | 6 | 12.0 | 1304 | 0.05
Elite Robot | CS63 | Anthropomorphic | 6 | 3.0 | 624 | 0.02
Elite Robot | CS66 | Anthropomorphic | 6 | 6.0 | 914 | 0.03
Elite Robot | EC612 | Anthropomorphic | 6 | 12.0 | 1304 | 0.03
Elite Robot | EC63 | Anthropomorphic | 6 | 3.0 | 624 | 0.02
Elite Robot | EC66 | Anthropomorphic | 6 | 6.0 | 914 | 0.03
ESI | C-15 | Anthropomorphic | 6 | 15.0 | 1323 | 0.05
ESI | C-7 | Anthropomorphic | 6 | 7.0 | 900 | 0.05
F&P Personal Robotics | 2R 24V | Anthropomorphic | 6 | 3.0 | 775 | 0.10
F&P Personal Robotics | 2R 48V | Anthropomorphic | 6 | 5.0 | 775 | 0.10
Fanuc | CR14iAL | Anthropomorphic | 6 | 14.0 | 911 | 0.03
Fanuc | CR15iA | Anthropomorphic | 6 | 15.0 | 1411 | 0.02
Fanuc | CR35iA | Anthropomorphic | 6 | 35.0 | 1813 | 0.08
Fanuc | CR4iA | Anthropomorphic | 6 | 4.0 | 550 | 0.02
Fanuc | CR7iA | Anthropomorphic | 6 | 7.0 | 717 | 0.02
Fanuc | CR7iAL | Anthropomorphic | 6 | 7.0 | 911 | 0.02
Fanuc | CRX10iA | Anthropomorphic | 6 | 10.0 | 1249 | 0.05
Fanuc | CRX10iAL | Anthropomorphic | 6 | 10.0 | 1418 | 0.05
Flexiv | Rizon 4 | Anthropomorphic | 7 | 4.0 | 780 | 0.01
Franka Emika | Robot | Anthropomorphic | 7 | 3.0 | 855 | 0.10
Hans Robot | E10 | Anthropomorphic | 6 | 10.0 | 1000 | 0.05
Hans Robot | E15 | Anthropomorphic | 6 | 15.0 | 700 | 0.05
Hans Robot | E3 | Anthropomorphic | 6 | 3.0 | 590 | 0.05
Hans Robot | E5 | Anthropomorphic | 6 | 5.0 | 800 | 0.05
Hans Robot | E5-L | Anthropomorphic | 6 | 3.5 | 950 | 0.05
Hanwha | HCR-12 | Anthropomorphic | 6 | 12.0 | 1300 | 0.10
Hanwha | HCR-12A | Anthropomorphic | 6 | 12.0 | 1300 | 0.05
Hanwha | HCR-3 | Anthropomorphic | 6 | 3.0 | 630 | 0.10
Hanwha | HCR-3A | Anthropomorphic | 6 | 3.0 | 630 | 0.05
Hanwha | HCR5 | Anthropomorphic | 6 | 5.0 | 915 | 0.10
Hanwha | HCR-5A | Anthropomorphic | 6 | 5.0 | 915 | 0.05
HIT Robot Group | T5 | Anthropomorphic | 6 | 5.0 | 850 | 0.10
HITBOT | Z-Arm 1632 | SCARA | 4 | 1.0 | 452 | 0.02
HITBOT | Z-Arm 1832 | SCARA | 4 | 3.0 | 455 | 0.02
HITBOT | Z-Arm 2140 | SCARA | 4 | 3.0 | 532 | 0.03
HITBOT | Z-Arm 2442 | SCARA | 4 | 1.0 | 617 | 0.03
HITBOT | Z-Arm 6140 | SCARA | 4 | 1.0 | 532 | 0.02
HITBOT | Z-Arm mini | SCARA | 4 | 1.0 | 320 | 0.10
Hyundai | YL005 | Anthropomorphic | 6 | 5.0 | 916 | 0.10
Hyundai | YL012 | Anthropomorphic | 6 | 12.0 | 1350 | 0.10
Hyundai | YL015 | Anthropomorphic | 6 | 15.0 | 963 | 0.10
Inovo Robotics | Robotic Arm 1300 | Anthropomorphic | 6 | 3.0 | 1340 | 0.25
Inovo Robotics | Robotics Arm 650 | Anthropomorphic | 6 | 10.0 | 690 | 0.25
Inovo Robotics | Robotics Arm 850 | Anthropomorphic | 6 | 6.0 | 990 | 0.25
Isybot | SYB3 | Anthropomorphic | 4 | 10.0 | 1600 | 0.20
JAKA | Zu 12 | Anthropomorphic | 6 | 12.0 | 1300 | 0.03
JAKA | Zu 18 | Anthropomorphic | 6 | 18.0 | 1073 | 0.03
JAKA | Zu 3 | Anthropomorphic | 6 | 3.0 | 498 | 0.03
JAKA | Zu 7 | Anthropomorphic | 6 | 7.0 | 796 | 0.03
Kassow Robots | KR1018 | Anthropomorphic | 6 | 10.0 | 1000 | 0.10
Kassow Robots | KR1205 | Anthropomorphic | 7 | 5.0 | 1200 | 0.10
Kassow Robots | KR1410 | Anthropomorphic | 7 | 10.0 | 1400 | 0.10
Kassow Robots | KR1805 | Anthropomorphic | 7 | 5.0 | 1800 | 0.10
Kassow Robots | KR810 | Anthropomorphic | 7 | 10.0 | 850 | 0.10
Kawasaki Robotics | Duaro | SCARA | 8 | 4.0 | 760 | 0.05
Kawasaki Robotics | Duaro 2 | SCARA | 8 | 6.0 | 760 | 0.05
Kinetic Systems | 6 Axes Robot | Anthropomorphic | 6 | 16.0 | 1900 | 0.05
Kinetic Systems | SCARA Robot | SCARA | 4 | 5.0 | 1200 | 0.05
Kinova | Gen2 | Anthropomorphic | 7 | 2.4 | 985 | 0.15
Kinova | Gen3 | Anthropomorphic | 7 | 4.0 | 902 | 0.15
Kinova | Gen3 Lite | Anthropomorphic | 6 | 0.5 | 760 | 0.15
KUKA | LBR iisy 3 R760 | Anthropomorphic | 6 | 3.0 | 760 | 0.01
KUKA | LBR iisy 11 R1300 | Anthropomorphic | 6 | 11.0 | 1300 | 0.15
KUKA | LBR iisy 15 R930 | Anthropomorphic | 6 | 15.0 | 930 | 0.15
KUKA | LBR iiwa 14 R820 | Anthropomorphic | 7 | 14.0 | 820 | 0.15
KUKA | LBR iiwa 7 R800 | Anthropomorphic | 7 | 7.0 | 800 | 0.10
KUKA | LWR | Anthropomorphic | 7 | 7.0 | 790 | 0.05
Life Robotics | CORO | Anthropomorphic | 6 | 2.0 | 800 | 1.00
Mabi | Speedy 12 | Anthropomorphic | 6 | 12.0 | 1250 | 0.10
Mabi | Speedy 6 | Anthropomorphic | 6 | 6.0 | 800 | 0.10
Megarobo | MRX-T4 | Anthropomorphic | 4 | 3.0 | 505 | 0.05
MIP Robotics | Junior 200 | SCARA | 4 | 3.0 | 400 | 0.50
MIP Robotics | Junior 300 | SCARA | 4 | 5.0 | 600 | 0.40
Mitsubishi Electric | RV-5AS-D MELFA ASSISTA | Anthropomorphic | 6 | 5.0 | 910 | 0.03
MRK Systeme | KR 5 SI | Anthropomorphic | 6 | 5.0 | 1432 | 0.04
Nachi | CZ 10 | Anthropomorphic | 6 | 10.0 | 1300 | 0.10
Neura Robotics | LARA 10 | Anthropomorphic | 6 | 10.0 | 1000 | 0.02
Neura Robotics | LARA 5 | Anthropomorphic | 6 | 5.0 | 800 | 0.02
Neuromeka | Indy 10 | Anthropomorphic | 6 | 10.0 | 1000 | 0.10
Neuromeka | Indy 12 | Anthropomorphic | 6 | 12.0 | 1200 | 0.50
Neuromeka | Indy 3 | Anthropomorphic | 6 | 3.0 | 590 | 0.10
Neuromeka | Indy 5 | Anthropomorphic | 6 | 3.0 | 800 | 0.10
Neuromeka | Indy 7 | Anthropomorphic | 6 | 7.0 | 800 | 0.05
Neuromeka | Indy RP | Anthropomorphic | 6 | 5.0 | 950 | 0.05
Neuromeka | Indy RP 2 | Anthropomorphic | 7 | 5.0 | 800 | 0.05
Neuromeka | Opti 10 | Anthropomorphic | 6 | 10.0 | 1216 | 0.10
Neuromeka | Opti 5 | Anthropomorphic | 6 | 5.0 | 880 | 0.10
Niryo | One | Anthropomorphic | 6 | 0.3 | 440 | 0.10
Pilz | PRBT | Anthropomorphic | 6 | 6.0 | 741 | 0.20
Precise Automation | Direct Drive 6 Axes | SCARA | 6 | 6.0 | 1793 | 0.02
Precise Automation | PAVP6 | Anthropomorphic | 6 | 2.5 | 432 | 0.02
Precise Automation | PAVS6 | Anthropomorphic | 6 | 37.0 | 770 | 0.03
Precise Automation | PF3400 | SCARA | 4 | 23.0 | 588 | 0.05
Precise Automation | PP100 | Cartesian | 4 | 2.0 | 1270 | 0.10
Productive Robotics | OB7 | Anthropomorphic | 7 | 5.0 | 1000 | 0.10
Productive Robotics | OB7 Max 12 | Anthropomorphic | 7 | 12.0 | 1300 | 0.10
Productive Robotics | OB7 Max 8 | Anthropomorphic | 7 | 8.0 | 1700 | 0.10
Productive Robotics | OB7 Stretch | Anthropomorphic | 7 | 4.0 | 1250 | 0.10
Rainbow Robotics | RB10 1200 | Anthropomorphic | 6 | 10.0 | 1200 | 0.10
Rainbow Robotics | RB3 1300 | Anthropomorphic | 6 | 3.0 | 1300 | 0.10
Rainbow Robotics | RB5 850 | Anthropomorphic | 6 | 5.0 | 850 | 0.10
Rethink Robotics | Baxter | Torso | 14 | 2.2 | 1210 | 3.00
Rethink Robotics | Sawyer | Anthropomorphic | 7 | 4.0 | 1260 | 0.10
Rethink Robotics | Sawyer Black Edition | Anthropomorphic | 7 | 4.0 | 1260 | 0.10
Robut Technology | Armobot | Anthropomorphic | 6 | 3.0 | 1500 | 0.10
Rokae | X Mate 3 | Anthropomorphic | 7 | 3.0 | 760 | 0.03
Rokae | X Mate 7 | Anthropomorphic | 7 | 7.0 | 850 | 0.03
Rozum Robotics | Pulse 75 | Anthropomorphic | 6 | 6.0 | 750 | 0.10
Rozum Robotics | Pulse 90 | Anthropomorphic | 6 | 4.0 | 900 | 0.10
Siasun | DSCR3 Duco | Torso | 7 | 3.0 | 800 | 0.02
Siasun | DSCR5 | Torso | 7 | 5.0 | 800 | 0.02
Siasun | GCR14 1400 | Anthropomorphic | 6 | 14.0 | 1400 | 0.05
Siasun | GCR20 1100 | Anthropomorphic | 6 | 20.0 | 1100 | 0.05
Siasun | GCR5 910 | Anthropomorphic | 6 | 5.0 | 910 | 0.05
Siasun | SCR3 | Anthropomorphic | 7 | 3.0 | 600 | 0.02
Siasun | SCR5 | Anthropomorphic | 7 | 5.0 | 800 | 0.02
Siasun | TCR 0.5 | Anthropomorphic | 6 | 0.5 | 300 | 0.05
Siasun | TCR 1 | Anthropomorphic | 6 | 1.0 | 500 | 0.05
ST Robotics | R12 | Anthropomorphic | 6 | 1.0 | 500 | 0.10
ST Robotics | R17 | Anthropomorphic | 6 | 3.0 | 750 | 0.20
Staubli | TX2 Touch 60 | Anthropomorphic | 6 | 4.5 | 670 | 0.02
Staubli | TX2 Touch 60L | Anthropomorphic | 6 | 3.7 | 920 | 0.03
Staubli | TX2 Touch 90 | Anthropomorphic | 6 | 14.0 | 1000 | 0.03
Staubli | TX2 Touch 90L | Anthropomorphic | 6 | 12.0 | 1200 | 0.04
Staubli | TX2 Touch 90XL | Anthropomorphic | 6 | 7.0 | 1450 | 0.04
Yamaha | YA-U5F | Anthropomorphic | 7 | 5.0 | 559 | 0.06
Yamaha | YA-U10F | Anthropomorphic | 7 | 10.0 | 720 | 0.10
Yamaha | YA-U20F | Anthropomorphic | 7 | 20.0 | 910 | 0.10
Techman | Techman TM12 | Anthropomorphic | 6 | 12.0 | 1300 | 0.10
Techman | Techman TM14 | Anthropomorphic | 6 | 14.0 | 1100 | 0.10
Techman | Techman TM5 700 | Anthropomorphic | 6 | 6.0 | 700 | 0.05
Techman | Techman TM5 900 | Anthropomorphic | 6 | 4.0 | 900 | 0.05
Tokyo Robotics | Torobo Arm | Anthropomorphic | 7 | 6.0 | 600 | 0.05
Tokyo Robotics | Torobo Arm Mini | Anthropomorphic | 7 | 3.0 | 600 | 0.05
UFACTORY | uArm Swift Pro | Anthropomorphic | 4 | 0.5 | 320 | 0.20
UFACTORY | xArm 5 Lite | Anthropomorphic | 5 | 3.0 | 700 | 0.10
UFACTORY | xArm 6 | Anthropomorphic | 6 | 3.0 | 700 | 0.10
UFACTORY | xArm 7 | Anthropomorphic | 7 | 3.5 | 700 | 0.10
Universal Robots | UR10 CB3 | Anthropomorphic | 6 | 10.0 | 1300 | 0.10
Universal Robots | UR10e | Anthropomorphic | 6 | 10.0 | 1300 | 0.03
Universal Robots | UR16e | Anthropomorphic | 6 | 16.0 | 900 | 0.05
Universal Robots | UR3 CB3 | Anthropomorphic | 6 | 3.0 | 500 | 0.10
Universal Robots | UR3e | Anthropomorphic | 6 | 3.0 | 500 | 0.03
Universal Robots | UR5 CB3 | Anthropomorphic | 6 | 5.0 | 850 | 0.10
Universal Robots | UR5e | Anthropomorphic | 6 | 5.0 | 850 | 0.03
Yaskawa | Motoman HC10 | Anthropomorphic | 6 | 10.0 | 1200 | 0.10
Yaskawa | Motoman HC10 DT | Anthropomorphic | 6 | 10.0 | 1200 | 0.10
Yaskawa | Motoman HC20 | Anthropomorphic | 6 | 20.0 | 1700 | 0.05
Yuanda | Robotics Arm | Anthropomorphic | 6 | 7.0 | 1000 | 0.10
Svaya Robotics | SR-L3 | Anthropomorphic | 6 | 3.0 | 600 | 0.03
Svaya Robotics | SR-L6 | Anthropomorphic | 6 | 6.0 | 850 | 0.03
Svaya Robotics | SR-L10 | Anthropomorphic | 6 | 10.0 | 1300 | 0.05
Svaya Robotics | SR-L12 | Anthropomorphic | 6 | 12.0 | 1100 | 0.05
Svaya Robotics | SR-L16 | Anthropomorphic | 6 | 16.0 | 900 | 0.05

Appendix B

Table A2. COBOT vs. TCP velocity and Power consumption.
Model | Class | Payload [kg] | TCP Velocity [m/s] | Power Consumption [kW]
OB7 Max 12 | Anthropomorphic | 12.0 | 2.0 | 0.90
OB7 Max 8 | Anthropomorphic | 8.0 | 2.0 | 0.90
AW-Tube 5 | Anthropomorphic | 5.0 | n/a | 0.75
AW-Tube 8 | Anthropomorphic | 8.0 | n/a | 0.75
AW-Tube 12 | Anthropomorphic | 13.0 | n/a | 0.75
AW-Tube 15 | Anthropomorphic | 15.0 | n/a | 0.75
AW-Tube 18 | Anthropomorphic | 18.0 | n/a | 0.75
AW-Tube 20 | Anthropomorphic | 20.0 | n/a | 0.75
SYB3 | Anthropomorphic | 10.0 | 1.0 | 0.70
OB7 Stretch | Anthropomorphic | 4.0 | 2.0 | 0.65
Zu 18 | Anthropomorphic | 18.0 | 3.5 | 0.60
RV-5AS-D MELFA ASSISTA | Anthropomorphic | 5.0 | 1.0 | 0.60
GCR20 1100 | Anthropomorphic | 20.0 | 1.0 | 0.60
I10 | Anthropomorphic | 10.0 | 4.0 | 0.50
Racer 5 0.80 Cobot | Anthropomorphic | 5.0 | 6.0 | 0.50
CS612 | Anthropomorphic | 12.0 | 3.0 | 0.50
EC612 | Anthropomorphic | 12.0 | 3.2 | 0.50
Zu 12 | Anthropomorphic | 12.0 | 3.0 | 0.50
I7 | Anthropomorphic | 7.0 | n/a | 0.40
SCR5 | Anthropomorphic | 5.0 | 1.0 | 0.40
Gen3 | Anthropomorphic | 4.0 | 0.5 | 0.36
E10 | Anthropomorphic | 10.0 | 1.0 | 0.35
E15 | Anthropomorphic | 15.0 | 1.0 | 0.35
Zu 7 | Anthropomorphic | 7.0 | 2.5 | 0.35
Indy 10 | Anthropomorphic | 10.0 | 1.0 | 0.35
Indy 12 | Anthropomorphic | 12.0 | 1.0 | 0.35
Indy 3 | Anthropomorphic | 3.0 | 1.0 | 0.35
Indy 5 | Anthropomorphic | 3.0 | 1.0 | 0.35
Indy 7 | Anthropomorphic | 7.0 | 1.0 | 0.35
Indy RP 2 | Anthropomorphic | 5.0 | 1.0 | 0.35
UR10e | Anthropomorphic | 10.0 | 2.0 | 0.35
UR16e | Anthropomorphic | 16.0 | 1.0 | 0.35
X Mate 3 | Anthropomorphic | 3.0 | n/a | 0.30
Techman TM12 | Anthropomorphic | 12.0 | 1.3 | 0.30
Techman TM14 | Anthropomorphic | 14.0 | 1.1 | 0.30
EVA | Anthropomorphic | 1.3 | 0.8 | 0.28
E5 | Anthropomorphic | 5.0 | 1.0 | 0.26
Panda 5 | Anthropomorphic | 5.0 | 1.0 | 0.26
CS66 | Anthropomorphic | 6.0 | 2.6 | 0.25
EC66 | Anthropomorphic | 6.0 | 2.8 | 0.25
Gen2 | Anthropomorphic | 2.4 | 0.2 | 0.25
KR 5 SI | Anthropomorphic | 5.0 | n/a | 0.25
Pulse 90 | Anthropomorphic | 4.0 | 2.0 | 0.25
SCR3 | Anthropomorphic | 3.0 | 0.8 | 0.25
UR10 CB3 | Anthropomorphic | 10.0 | 1.0 | 0.25
MRX-T4 | Anthropomorphic | 3.0 | n/a | 0.24
Techman TM5 700 | Anthropomorphic | 6.0 | 1.1 | 0.22
Techman TM5 900 | Anthropomorphic | 4.0 | 1.4 | 0.22
I5 | Anthropomorphic | 5.0 | 2.8 | 0.20
CR10 | Anthropomorphic | 10.0 | 3.0 | 0.20
CR16 | Anthropomorphic | 16.0 | 3.0 | 0.20
CR3 | Anthropomorphic | 3.0 | 3.0 | 0.20
CR5 | Anthropomorphic | 5.0 | 3.0 | 0.20
ECR5 | Anthropomorphic | 5.0 | 2.8 | 0.20
E3 | Anthropomorphic | 3.0 | 1.0 | 0.20
Gen3 Lite | Anthropomorphic | 0.5 | 0.3 | 0.20
PAVP6 | Anthropomorphic | 2.5 | n/a | 0.20
GCR5 910 | Anthropomorphic | 5.0 | n/a | 0.20
UR5e | Anthropomorphic | 5.0 | 1.0 | 0.20
C3 | Anthropomorphic | 3.0 | 1.0 | 0.18
E5 | Anthropomorphic | 5.0 | 1.0 | 0.18
E5-L | Anthropomorphic | 3.5 | 1.0 | 0.18
IRB 14050 Yumi | Anthropomorphic | 0.5 | 1.5 | 0.17
Panda 3 | Anthropomorphic | 3.0 | 1.0 | 0.16
I3 | Anthropomorphic | 3.0 | 1.9 | 0.15
CS63 | Anthropomorphic | 3.0 | 2.0 | 0.15
EC63 | Anthropomorphic | 3.0 | 2.0 | 0.15
Zu 3 | Anthropomorphic | 3.0 | 1.5 | 0.15
Pulse 75 | Anthropomorphic | 6.0 | 2.0 | 0.15
UR5 CB3 | Anthropomorphic | 5.0 | 1.0 | 0.15
xArm 5 Lite | Anthropomorphic | 3.0 | 0.3 | 0.12
UR3 CB3 | Anthropomorphic | 3.0 | 1.0 | 0.12
2R 48V | Anthropomorphic | 5.0 | n/a | 0.10
T5 | Anthropomorphic | 5.0 | n/a | 0.10
UR3e | Anthropomorphic | 3.0 | 1.0 | 0.10
OB7 | Anthropomorphic | 5.0 | 2.0 | 0.09
2R 24V | Anthropomorphic | 3.0 | n/a | 0.08
CORO | Anthropomorphic | 2.0 | n/a | 0.08
Robot | Anthropomorphic | 3.0 | 2.0 | 0.06
One | Anthropomorphic | 0.3 | n/a | 0.06

References

  1. Moulières-Seban, T.; Salotti, J.M.; Claverie, B.; Bitonneau, D. Classification of Cobotic Systems for Industrial Applications. In Proceedings of the 6th Workshop towards a Framework for Joint Action, Paris, France, 26 October 2015. [Google Scholar]
  2. Di Marino, C.; Rega, A.; Vitolo, F.; Patalano, S.; Lanzotti, A. A new approach to the anthropocentric design of human–robot collaborative environments. Acta IMEKO 2020, 9, 80–87. [Google Scholar] [CrossRef]
  3. Vitolo, F.; Rega, A.; Di Marino, C.; Pasquariello, A.; Zanella, A.; Patalano, S. Mobile Robots and Cobots Integration: A Preliminary Design of a Mechatronic Interface by Using MBSE Approach. Appl. Sci. 2022, 12, 419. [Google Scholar] [CrossRef]
  4. Harold, L.S.; Michael, Z.; Ryan, R.J. The Robotics Revolution. Electron. Power 1985, 31, 598. [Google Scholar] [CrossRef]
  5. Rigby, M. Future-proofing UK manufacturing Current investment trends and future opportunities in robotic automation. Barclays/Dev. Econ. 2015, 1, 1–10. [Google Scholar]
  6. Russmann, M.; Lorenz, M.; Gerbert, P.; Waldner, M.; Justus, J.; Engel, P.; Harnisch, M. Industry 4.0: The Future of Productivity and Growth in Manufacturing Industries. Bost. Consult. Gr. 2015, 9, 54–89. [Google Scholar]
  7. Di Marino, C.; Rega, A.; Vitolo, F.; Patalano, S.; Lanzotti, A. The anthropometric basis for the designing of collaborative workplaces. In Proceedings of the II Workshop on Metrology for Industry 4.0 and IoT (MetroInd4.0 IoT), Naples, Italy, 4–6 June 2019; pp. 98–102. [Google Scholar]
  8. Villani, V.; Pini, F.; Leali, F.; Secchi, C. Survey on human–robot collaboration in industrial settings: Safety, intuitive interfaces and applications. Mechatronics 2018, 55, 248–266. [Google Scholar] [CrossRef]
  9. Bi, Z.; Luo, C.; Miao, Z.; Zhang, B.; Zhang, W.; Wang, L. Safety assurance mechanisms of collaborative robotic systems in manufacturing. Robot. Comput. Manuf. 2021, 67, 102022. [Google Scholar] [CrossRef]
  10. OECD. The Future of Productivity; OECD: Paris, France, 2015; Volume 1. [Google Scholar]
  11. Schmidtler, J.; Knott, V.; Hölzel, C.; Bengler, K. Human Centered Assistance Applications for the working environment of the future. Occup. Ergon. 2015, 12, 83–95. [Google Scholar] [CrossRef]
  12. Wang, X.V.; Seira, A.; Wang, L. Classification, personalised safety framework and strategy for human-robot collaboration. In Proceedings of the CIE 48, International Conference on Computers & Industrial Engineering, Auckland, New Zealand, 2–5 December 2018. [Google Scholar]
  13. Wang, L.; Gao, R.; Váncza, J.; Krüger, J.; Wang, X.; Makris, S.; Chryssolouris, G. Symbiotic human-robot collaborative assembly. CIRP Ann. 2019, 68, 701–726. [Google Scholar] [CrossRef] [Green Version]
  14. Antonelli, D.; Astanin, S. Qualification of a Collaborative Human-robot Welding Cell. Procedia CIRP 2016, 41, 352–357. [Google Scholar] [CrossRef] [Green Version]
  15. Levratti, A.; De Vuono, A.; Fantuzzi, C.; Secchi, C. TIREBOT: A novel tire workshop assistant robot. In Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Zurich, Switzerland, 4–7 September 2016; AIM: Cranberry Township, PA, USA, 2016; pp. 733–738. [Google Scholar]
  16. Peternel, L.; Tsagarakis, N.; Caldwell, D.; Ajoudani, A. Adaptation of robot physical behaviour to human fatigue in hu-man-robot co-manipulation. In Proceedings of the IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), Cancun, Mexico, 15–17 November 2016; pp. 489–494. [Google Scholar]
  17. Cherubini, A.; Passama, R.; Crosnier, A.; Lasnier, A.; Fraisse, P. Collaborative manufacturing with physical human–robot interaction. Robot. Comput. Integr. Manuf. 2016, 40, 1–13. [Google Scholar] [CrossRef] [Green Version]
  18. Tan, J.T.C.; Duan, F.; Zhang, Y.; Watanabe, K.; Kato, R.; Arai, T. Human-robot collaboration in cellular manufacturing: Design and development. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, St Louis, MI, USA, 11–15 October 2009; pp. 29–34. [Google Scholar]
  19. Krüger, J.; Lien, T.; Verl, A. Cooperation of human and machines in assembly lines. CIRP Ann. 2009, 58, 628–646. [Google Scholar] [CrossRef]
  20. Erden, M.S.; Billard, A. End-point impedance measurements at human hand during interactive manual welding with robot. In Proceedings of the IEEE International Conference on Robotics and Automation, Hong Kong, China, 31 May–7 June 2014; pp. 126–133. [Google Scholar]
  21. Wojtara, T.; Uchihara, M.; Murayama, H.; Shimoda, S.; Sakai, S.; Fujimoto, H.; Kimura, H. Human–robot collaboration in precise positioning of a three-dimensional object. Automatica 2009, 45, 333–342. [Google Scholar] [CrossRef]
  22. Morel, G.; Malis, E.; Boudet, S. Impedance based combination of visual and force control. In Proceedings of the IEEE International Conference on Robotics and Automation, Leuven, Belgium, 20–20 May 1998; Volume 2, pp. 1743–1748. [Google Scholar]
  23. Magrini, E.; Ferraguti, F.; Ronga, A.J.; Pini, F.; De Luca, A.; Leali, F. Human-robot coexistence and interaction in open in-dustrial cells. Robot. Comput. Integr. Manuf. 2020, 61, 101846. [Google Scholar] [CrossRef]
  24. Ajoudani, A.; Zanchettin, A.M.; Ivaldi, S.; Albu-Schäffer, A.; Kosuge, K.; Khatib, O. Progress and prospects of the human–robot collaboration. Auton. Robots 2018, 42, 957–975. [Google Scholar] [CrossRef] [Green Version]
  25. Michalos, G.; Kousi, N.; Karagiannis, P.; Gkournelos, C.; Dimoulas, K.; Koukas, S.; Mparis, K.; Papavasileiou, A.; Makris, S. Seamless human robot collaborative assembly—An automotive case study. Mechatronics 2018, 55, 194–211. [Google Scholar] [CrossRef]
  26. Baraglia, J.; Cakmak, M.; Nagai, Y.; Rao, R.P.; Asada, M. Efficient human-robot collaboration: When should a robot take initiative? Int. J. Robot. Res. 2017, 36, 563–579. [Google Scholar] [CrossRef]
  27. Donner, P.; Buss, M. Cooperative Swinging of Complex Pendulum-Like Objects: Experimental Evaluation. IEEE Trans. Robot. 2016, 32, 744–753. [Google Scholar] [CrossRef]
  28. Dimeas, F.; Aspragathos, N. Online Stability in Human-Robot Cooperation with Admittance Control. IEEE Trans. Haptics 2016, 9, 267–278. [Google Scholar] [CrossRef]
  29. Kruse, D.; Radke, R.J.; Wen, J.T. Collaborative human-robot manipulation of highly deformable materials. In Proceedings of the IEEE International Conference on Robotics and Automation, Seattle, WA, USA, 26–30 May 2015; pp. 3782–3787. [Google Scholar]
  30. Gams, A.; Nemec, B.; Ijspeert, A.J.; Ude, A. Coupling Movement Primitives: Interaction with the Environment and Bimanual Tasks. IEEE Trans. Robot. 2014, 30, 816–830. [Google Scholar] [CrossRef] [Green Version]
  31. Bestick, A.M.; Burden, S.A.; Willits, G.; Naikal, N.; Sastry, S.S.; Bajcsy, R. Personalized kinematics for human-robot collaborative manipulation. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015. [Google Scholar]
  32. Kildal, J.; Martín, M.; Ipiña, I.; Maurtua, I. Empowering assembly workers with cognitive disabilities by working with collaborative robots: A study to capture design requirements. Procedia CIRP 2019, 81, 797–802. [Google Scholar] [CrossRef]
  33. Böhme, H.-J.; Wilhelm, T.; Key, J.; Schauer, C.; Schröter, C.; Groß, H.-M.; Hempel, T. An approach to multi-modal human–machine interaction for intelligent service robots. Robot. Auton. Syst. 2003, 44, 83–96. [Google Scholar] [CrossRef] [Green Version]
  34. Vasconez, J.P.; Kantor, G.A.; Cheein, F.A.A. Human–robot interaction in agriculture: A survey and current challenges. Biosyst. Eng. 2019, 179, 35–48. [Google Scholar] [CrossRef]
  35. Hjorth, S.; Chrysostomou, D. Human–robot collaboration in industrial environments: A literature review on non-destructive disassembly. Robot. Comput. Manuf. 2021, 73, 102208. [Google Scholar] [CrossRef]
  36. Murphy, R.R. Human—Robot Interaction in Rescue Robotics. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 2004, 34, 138–153. [Google Scholar] [CrossRef]
  37. Magalhaes, P.; Ferreira, N. Inspection Application in an Industrial Environment with Collaborative Robots. Automation 2022, 3, 13. [Google Scholar] [CrossRef]
  38. Weiss, A.; Wortmeier, A.-K.; Kubicek, B. Cobots in Industry 4.0: A Roadmap for Future Practice Studies on Human–Robot Collaboration. IEEE Trans. Hum. Mach. Syst. 2021, 51, 335–345. [Google Scholar] [CrossRef]
  39. Tsagarakis, N.G.; Caldwell, D.G.; Negrello, F.; Choi, W.; Baccelliere, L.; Loc, V.G.; Noorden, J.; Muratore, L.; Margan, A.; Cardellino, A.; et al. WALK-MAN: A High-Performance Humanoid Platform for Realistic Environments. J. Field Robot. 2017, 34, 1225–1259. [Google Scholar] [CrossRef]
  40. Masaracchio, M.; Kirker, K. Resistance Training in Individuals with Hip and Knee Osteoarthritis: A Clinical Commentary with Practical Applications. Strength Cond. J. 2022, 44, 36–46. [Google Scholar] [CrossRef]
  41. Gravel, D.P.; Newman, W.S. Flexible Robotic Assembly Efforts at Ford Motor Company. Proceeding of the 2001 IEEE International Symposium on Intelligent Control (ISIC’ 01) (Cat. No.01CH37206), Mexico City, Mexico, 5–7 September 2001; Available online: https://ieeexplore.ieee.org/abstract/document/971504/ (accessed on 17 May 2022).
  42. Mojtahedi, K.; Whitsell, B.; Artemiadis, P.; Santello, M. Communication and Inference of Intended Movement Direction during Human–Human Physical Interaction. Front. Neurorobot. 2017, 11, 21. [Google Scholar] [CrossRef] [Green Version]
  43. Vogel, J.; Castellini, C.; Van Der Smagt, P. EMG-Based Teleoperation and Manipulation with the DLR LWR-III. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 6434–6437. [Google Scholar]
  44. Gijsberts, A.; Bohra, R.; González, D.S.; Werner, A.; Nowak, M.; Caputo, B.; Roa, M.A.; Castellini, C. Stable myoelectric control of a hand prosthesis using non-linear incremental learning. Front. Neurorobot. 2014, 8, 8. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  45. Fleischer, C.; Hommel, G. A Human-Exoskeleton Interface Utilizing Electromyography. IEEE Trans. Robot. 2008, 24, 872–882. [Google Scholar] [CrossRef]
  46. de Vlugt, E.; Schouten, A.; van der Helm, F.C.; Teerhuis, P.C.; Brouwn, G.G. A force-controlled planar haptic device for movement control analysis of the human arm. J. Neurosci. Methods 2003, 129, 151–168. [Google Scholar] [CrossRef] [PubMed]
  47. Burdet, E.; Osu, R.; Franklin, D.W. The central nervous system stabilizes unstable dynamics by learning optimal impedance. Nature 2001, 414, 446–449. [Google Scholar] [CrossRef] [PubMed]
  48. Hao, M.; Zhang, J.; Chen, K.; Asada, H.H.; Fu, C. Supernumerary Robotic Limbs to Assist Human Walking with Load Carriage. J. Mech. Robot. 2020, 12, 061014. [Google Scholar] [CrossRef]
  49. Luo, J.; Gong, Z.; Su, Y.; Ruan, L.; Zhao, Y.; Asada, H.H.; Fu, C. Modeling and Balance Control of Supernumerary Robotic Limb for Overhead Tasks. IEEE Robot. Autom. Lett. 2021, 6, 4125–4132. [Google Scholar] [CrossRef]
  50. Bonilla, B.L.; Parietti, F.; Asada, H.H. Demonstration-based control of supernumerary robotic limbs. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 3936–3942. [Google Scholar]
  51. Parietti, F.; Chan, K.; Asada, H.H. Bracing the human body with supernumerary Robotic Limbs for physical assistance and load reduction. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 141–148. [Google Scholar]
  52. Ciullo, A.S.; Catalano, M.G.; Bicchi, A.; Ajoudani, A. A Supernumerary Soft Robotic Limb for Reducing Hand-Arm Vibration Syndromes Risks. Front. Robot. AI 2021, 8, 650613. [Google Scholar] [CrossRef]
  53. Abdi, E.; Burdet, E.; Bouri, M.; Himidan, S.; Bleuler, H. In a demanding task, three-handed manipulation is preferred to two-handed manipulation. Sci. Rep. 2016, 6, 21758. [Google Scholar] [CrossRef] [Green Version]
  54. Meraz, N.S.; Sobajima, M.; Aoyama, T.; Hasegawa, Y. Modification of body schema by use of extra robotic thumb. Robomech J. 2018, 5, 3. [Google Scholar] [CrossRef] [Green Version]
  55. Kilner, J.; Hamilton, A.F.d.C.; Blakemore, S.-J. Interference effect of observed human movement on action is due to velocity profile of biological motion. Soc. Neurosci. 2007, 2, 158–166. [Google Scholar] [CrossRef]
  56. Maurice, P.; Padois, V.; Measson, Y.; Bidaud, P. Human-oriented design of collaborative robots. Int. J. Ind. Ergon. 2017, 57, 88–102. [Google Scholar] [CrossRef] [Green Version]
  57. Rosen, J.; Brand, M.; Fuchs, M.; Arcan, M. A myosignal-based powered exoskeleton system. IEEE Trans. Syst. Man Cybern. Part A Syst. Humans 2001, 31, 210–222. [Google Scholar] [CrossRef] [Green Version]
  58. Farry, K.; Walker, I.; Baraniuk, R. Myoelectric teleoperation of a complex robotic hand. IEEE Trans. Robot. Autom. 1996, 12, 775–788. [Google Scholar] [CrossRef]
  59. Castellini, C.; Artemiadis, P.; Wininger, M.; Ajoudani, A.; Alimusaj, M.; Bicchi, A.; Caputo, B.; Craelius, W.; Dosen, S.; Englehart, K.; et al. Proceedings of the first workshop on Peripheral Machine Interfaces: Going beyond traditional surface electromyography. Front. Neurorobot. 2014, 8, 22. [Google Scholar] [CrossRef] [Green Version]
  60. Farina, D.; Jiang, N.; Rehbaum, H.; Holobar, A.; Graimann, B.; Dietl, H.; Aszmann, O.C. The Extraction of Neural Information from the Surface EMG for the Control of Upper-Limb Prostheses: Emerging Avenues and Challenges. IEEE Trans. Neural Syst. Rehabilitat. Eng. 2014, 22, 797–809. [Google Scholar] [CrossRef] [PubMed]
  61. Kim, S.; Kim, C.; Park, J.H. Human-like Arm Motion Generation for Humanoid Robots Using Motion Capture Database. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2006; pp. 3486–3491. [Google Scholar] [CrossRef] [Green Version]
  62. Magrini, E.; Flacco, F.; De Luca, A. Estimation of contact forces using a virtual force sensor. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 2126–2133. [Google Scholar] [CrossRef]
  63. Khatib, O.; Demircan, E.; De Sapio, V.; Sentis, L.; Besier, T.; Delp, S. Robotics-based synthesis of human motion. J. Physiol. 2009, 103, 211–219. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  64. Tsumugiwa, T.; Yokogawa, R.; Hara, K. Variable impedance control with virtual stiffness for human-robot cooperative pegin-hole task. In Proceedings of the Intelligent Robots and Systems, Osaka, Japan, 5–7 August 2003; Volume 2, pp. 1075–1081. [Google Scholar] [CrossRef]
  65. Ficuciello, F.; Romano, A.; Villani, L.; Siciliano, B. Cartesian impedance control of redundant manipulators for human-robot co-manipulation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 2120–2125. [Google Scholar] [CrossRef] [Green Version]
  66. Kosuge, K.; Hashimoto, S.; Yoshida, H. Human-robots collaboration system for flexible object handling. In Proceedings of the 1998 IEEE International Conference on Robotics and Automation (Cat. No.98CH36146), Leuven, Belgium, 20–20 May 2002; Volume 2, pp. 1841–1846. [Google Scholar] [CrossRef]
  67. Ajoudani, A.; Godfrey, S.B.; Bianchi, M.; Catalano, M.G.; Grioli, G.; Tsagarakis, N.; Bicchi, A. Exploring Teleimpedance and Tactile Feedback for Intuitive Control of the Pisa/IIT SoftHand. IEEE Trans. Haptics 2014, 7, 203–215. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  68. Yang, C.; Ganesh, G.; Haddadin, S.; Parusel, S.; Albu-Schaeffer, A.; Burdet, E. Human-Like Adaptation of Force and Impedance in Stable and Unstable Interactions. IEEE Trans. Robot. 2011, 27, 918–930. [Google Scholar] [CrossRef] [Green Version]
  69. Plagemann, C.; Ganapathi, V.; Koller, D.; Thrun, S. Real-time identification and localization of body parts from depth images. In Proceedings of the IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; pp. 3108–3113. [Google Scholar] [CrossRef]
  70. Perzanowski, D.; Schultz, A.; Adams, W. Integrating natural language and gesture in a robotics domain. In Proceedings of the 1998 IEEE International Symposium on Intelligent Control (ISIC) held jointly with IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA) Intell, Gaithersburg, MD, USA, 17 September 2002. [Google Scholar] [CrossRef]
  71. Zanchettin, A.M.; Rocco, P. Reactive motion planning and control for compliant and constraint-based task execution. In Proceedings of the International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 2748–2753. [Google Scholar] [CrossRef]
  72. Peternel, L.; Noda, T.; Petrič, T.; Ude, A.; Morimoto, J.; Babič, J. Adaptive Control of Exoskeleton Robots for Periodic Assistive Behaviours Based on EMG Feedback Minimisation. PLoS ONE 2016, 11, e0148942. [Google Scholar] [CrossRef] [Green Version]
  73. Maeda, Y.; Takahashi, A.; Hara, T.; Arai, T. Human-robot cooperation with mechanical interaction based on rhythm entrainment-realization of cooperative rope turning. In Proceedings of the 2001 ICRA. IEEE International Conference on Robotics and Automation (Cat. No.01CH37164), Seoul, Republic of Korea, 21–26 May 2002; Volume 4, pp. 3477–3482. [Google Scholar]
  74. Cherubini, A.; Passama, R.; Meline, A.; Crosnier, A.; Fraisse, P. Multimodal control for human-robot cooperation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 2202–2207. [Google Scholar] [CrossRef] [Green Version]
  75. Lippiello, V.; Siciliano, B.; Villani, L. Robot Interaction Control Using Force and Vision. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 1470–1475. [Google Scholar] [CrossRef]
  76. Khansari-Zadeh, S.M.; Billard, A. Learning Stable Nonlinear Dynamical Systems with Gaussian Mixture Models. IEEE Trans. Robot. 2011, 27, 943–957. [Google Scholar] [CrossRef] [Green Version]
  77. Fernandez, V.; Balaguer, C.; Blanco, D.; Salichs, M. Active human-mobile manipulator cooperation through intention recognition. In Proceedings of the 2001 ICRA. IEEE International Conference on Robotics and Automation (Cat. No.01CH37164), Seoul, Republic of Korea, 21–26 May 2002. [Google Scholar] [CrossRef]
  78. Stulp, F.; Grizou, J.; Busch, B.; Lopes, M. Facilitating intention prediction for humans by optimizing robot motions. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 1249–1255. [Google Scholar] [CrossRef] [Green Version]
  79. Ebert, D.; Henrich, D. Safe human-robot-cooperation: Image-based collision detection for industrial robots. IEEE Int. Conf. Intell. Robot. Syst. 2003, 2, 1826–1831. [Google Scholar] [CrossRef] [Green Version]
  80. Magnanimo, V.; Saveriano, M.; Rossi, S.; Lee, D. A Bayesian approach for task recognition and future human activity prediction. In Proceedings of the 23rd IEEE International Symposium on Robot and Human Interactive Communication, Edinburgh, UK, 25–29 August 2014; pp. 726–731. [Google Scholar] [CrossRef]
  81. Bascetta, L.; Ferretti, G.; Rocco, P.; Ardö, H.; Bruyninckx, H.; Demeester, E.; Di Lello, E. Towards safe human-robot interaction in robotic cells: An approach based on visual tracking and intention estimation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 2971–2978. [Google Scholar] [CrossRef]
  82. Agravante, D.J.; Cherubini, A.; Bussy, A.; Gergondet, P.; Kheddar, A. Collaborative human-humanoid carrying using vision and haptic sensing. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 607–612. [Google Scholar] [CrossRef] [Green Version]
  83. Strazzulla, I.; Nowak, M.; Controzzi, M.; Cipriani, C.; Castellini, C. Online Bimanual Manipulation Using Surface Electromyography and Incremental Learning. IEEE Trans. Neural Syst. Rehabilitat. Eng. 2016, 25, 227–234. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  84. Calinon, S.; Sauser, E.L.; Caldwell, D.G.; Billard, A.G. Learning and reproduction of gestures by imitation an approach based on Hidden Markov Model and Gaussian Mixture Regression. IEEE Robot. Autom. Mag. 2010, 17, 44–54. [Google Scholar] [CrossRef] [Green Version]
  85. Rozo, L.; Bruno, D.; Calinon, S.; Caldwell, D.G. Learning optimal controllers in human-robot cooperative transportation tasks with position and force constraints. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 1024–1030. [Google Scholar] [CrossRef]
  86. Rozo, L.; Calinon, S.; Caldwell, D.G. Learning force and position constraints in human-robot cooperative transportation. In Proceedings of the 23rd IEEE International Symposium on Robot and Human Interactive Communication: Human-Robot Co-Existence: Adaptive Interfaces and Systems for Daily Life, Therapy, Assistance and Socially Engaging Interactions, Edinburgh, UK, 25–29 August 2014; pp. 619–624. [Google Scholar] [CrossRef]
  87. Rozo, L.; Calinon, S.; Caldwell, D.G.; Jimenez, P.; Torras, C. Learning Physical Collaborative Robot Behaviors from Human Demonstrations. IEEE Trans. Robot. 2016, 32, 513–527. [Google Scholar] [CrossRef] [Green Version]
  88. Ivaldi, S.; Lefort, S.; Peters, J.; Chetouani, M.; Provasi, J.; Zibetti, E. Towards Engagement Models that Consider Individual Factors in HRI: On the Relation of Extroversion and Negative Attitude Towards Robots to Gaze and Speech During a Human–Robot Assembly Task: Experiments with the iCub humanoid. Int. J. Soc. Robot. 2016, 9, 63–86. [Google Scholar] [CrossRef] [Green Version]
  89. Colome, A.; Planells, A.; Torras, C. A friction-model-based framework for Reinforcement Learning of robotic tasks in non-rigid environments. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 5649–5654. [Google Scholar] [CrossRef] [Green Version]
  90. Lallee, S.; Pattacini, U.; Lemaignan, S.; Lenz, A.; Melhuish, C.; Natale, L.; Skachek, S.; Hamann, K.; Steinwender, J.; Sisbot, E.A.; et al. Towards a Platform-Independent Cooperative Human Robot Interaction System: III An Architecture for Learning and Executing Actions and Shared Plans. IEEE Trans. Auton. Ment. Dev. 2012, 4, 239–253. [Google Scholar] [CrossRef] [Green Version]
  91. Lee, D.; Ott, C. Incremental kinesthetic teaching of motion primitives using the motion refinement tube. Auton. Robot. 2011, 31, 115–131. [Google Scholar] [CrossRef]
  92. Lawitzky, M.; Medina, J.R.; Lee, D.; Hirche, S. Feedback motion planning and learning from demonstration in physical robotic assistance: Differences and synergies. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 3646–3652. [Google Scholar] [CrossRef] [Green Version]
  93. Petit, M.; Lallee, S.; Boucher, J.-D.; Pointeau, G.; Cheminade, P.; Ognibene, D.; Chinellato, E.; Pattacini, U.; Gori, I.; Martinez-Hernandez, U.; et al. The Coordinating Role of Language in Real-Time Multimodal Learning of Cooperative Tasks. IEEE Trans. Auton. Ment. Dev. 2012, 5, 3–17. [Google Scholar] [CrossRef] [Green Version]
  94. Peternel, L.; Petrič, T.; Oztop, E.; Babič, J. Teaching robots to cooperate with humans in dynamic manipulation tasks based on multi-modal human-in-the-loop approach. Auton. Robot. 2013, 36, 123–136. [Google Scholar] [CrossRef]
  95. Zhang, C.; Lin, C.; Leng, Y.; Fu, Z.; Cheng, Y.; Fu, C. An Effective Head-Based HRI for 6D Robotic Grasping Using Mixed Reality. IEEE Robot. Autom. Lett. 2023, 8, 2796–2803. [Google Scholar] [CrossRef]
  96. Duguleana, M.; Barbuceanu, F.G.; Mogan, G. Evaluating Human-Robot Interaction during a Manipulation Experiment Conducted in Immersive Virtual Reality. In Proceedings of the International Conference on Virtual and Mixed Reality, Orlando, FL, USA, 9–14 July 2011; pp. 164–173. [Google Scholar] [CrossRef]
  97. Zhang, Z. Building Symmetrical Reality Systems for Cooperative Manipulation. In Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, Shanghai, China, 25–29 March 2023; pp. 751–752. [Google Scholar] [CrossRef]
  98. Gupta, K.; Hajika, R.; Pai, Y.S.; Duenser, A.; Lochner, M.; Billinghurst, M. Measuring Human Trust in a Virtual Assistant using Physiological Sensing in Virtual Reality. In Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Atlanta, GA, USA, 22–26 March 2020; pp. 756–765. [Google Scholar] [CrossRef]
Figure 1. COBOT collaboration with operator: Safety rated monitoring stop (a); Hand guiding (b); Speed and separation monitoring (c); Force and torque limitation (d).
Figure 2. COBOT scatter plot of payload and reach: Anthropomorphic (a); Cartesian, SCARA and Torso (b).
Figure 3. COBOT scatter plot of accuracy and payload: Anthropomorphic (a); Cartesian, SCARA and Torso (b).
Figure 4. COBOT box plot of accuracy for Anthropomorphic and SCARA (minimum, Q1, median, Q3, maximum and outlier circle).
Figure 5. COBOT scatter plot of accuracy and reach: Anthropomorphic (a); Cartesian, SCARA and Torso (b).
Figure 6. COBOT scatter plot of power consumption vs. tool center point velocity of Anthropomorphic architecture.
Table 1. COBOT models by mechanism class.

Class | No.
Anthropomorphic | 176
Cartesian | 1
SCARA | 14
Torso | 4
Table 2. COBOT clusters based on payload and reach features.

COBOT Cluster | Payload (kg) | Reach (mm)
Group 1 | P ≤ 5.0 | R < 500
Group 2 | 5.0 < P ≤ 10.0 | 500 < R ≤ 1000
Group 3 | 10.0 < P ≤ 15.0 | 1000 < R ≤ 1500
Group 4 | 15.0 < P ≤ 20.0 | 1500 < R ≤ 2000
Group 5 | P > 20.0 | R > 2000
Table 3. Number of available COBOTs grouped by mechanism class and payload-reach clusters.

Class | Group 1 | Group 2 | Group 3 | Group 4 | Group 5 | Total
Anthropomorphic | 16 | 91 | 50 | 15 | 4 | 176
Cartesian | | | 1 | | | 1
SCARA | 5 | 6 | 1 | 1 | 1 | 14
Torso | | 2 | 2 | | | 4
Total | 21 | 99 | 54 | 16 | 5 | 195