Review

The Expanding Role of Artificial Intelligence in Collaborative Robots for Industrial Applications: A Systematic Review of Recent Works

by Alberto Borboni 1,*, Karna Vishnu Vardhana Reddy 2, Irraivan Elamvazuthi 2, Maged S. AL-Quraishi 2, Elango Natarajan 3 and Syed Saad Azhar Ali 4
1 Mechanical and Industrial Engineering Department, Università degli Studi di Brescia, Via Branze, 38-25123 Brescia, Italy
2 Smart Assistive and Rehabilitative Technology (SMART) Research Group & Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Bandar Seri Iskandar 32610, Malaysia
3 Faculty of Engineering, Technology and Built Environment, UCSI University, Kuala Lumpur 56000, Malaysia
4 Aerospace Engineering Department & Center for Smart Mobility and Logistics, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia
* Author to whom correspondence should be addressed.
Machines 2023, 11(1), 111; https://doi.org/10.3390/machines11010111
Submission received: 25 November 2022 / Revised: 9 January 2023 / Accepted: 10 January 2023 / Published: 13 January 2023

Abstract:
A collaborative robot, or cobot, enables users to work closely with it through direct communication without the use of traditional barricades. Cobots eliminate the barrier that has historically separated humans from industrial robots working within fences. Cobots can be used for a variety of tasks, from communication robots in public areas and logistics or supply chain robots that move materials inside a building, to articulated or industrial robots that assist in automating tasks which are not ergonomically sound, such as helping individuals carry large parts or work on assembly lines. Human trust in collaboration has increased through human–robot collaboration applications built with dependability and safety in mind, which also improve employee performance and working conditions. Artificial intelligence and cobots are becoming more accessible due to advanced technology and new processor generations. Machine learning is turning cobots from science fiction into practical science: they can quickly respond to change, decrease expenses, and enhance the user experience. In order to identify the existing and potential expanding role of artificial intelligence in cobots for industrial applications, this paper provides a systematic literature review of research publications between 2018 and 2022. It concludes by discussing various difficulties facing current industrial collaborative robots and provides directions for future research.

1. Introduction

A collaborative robot, or cobot, is designed for direct human–robot collaboration (HRC) or contact in a shared area, or when people and robots are close to each other. In contrast to conventional industrial robot operations, which keep robots away from people, cobot applications allow for human interaction [1]. Cobot safety may rely on soft edges, lightweight materials, built-in speed and force limits, or sensing devices and programming that enforce safe and positive behavior [2,3]. There are two main categories of robots recognized by the International Federation of Robotics (IFR): industrial or automated robots, which are used in automation processes in an industrial environment [4], and service robots, used for personal and business purposes. Service robots designed to collaborate or work with human beings are categorized as cobots [5]. Cobots eliminate the divide that has historically separated humans from industrial robots working within fences or other security barriers [6]. Cobots can be used for a variety of tasks, from communication robots in public areas and logistics or supply chain robots that move materials inside a building [7], to articulated or industrial robots that assist in automating tasks which are not ergonomically sound, such as helping individuals carry large parts or work on assembly lines. They are designed to fill the gap between completely automated industrial processes and manual system operation, providing the advantages of automation without adding the complexities of a completely robotic testing regime. Cobots are also well suited to the biomedical sector, where full automation is frequently impractical, yet lab productivity, security, and information protection are crucial [8].
The four stages of interaction involved between robots and humans are defined by the IFR [9]:
  • Coexistence: humans and robots work alongside each other without a boundary, but they do not share a workspace.
  • Sequential collaboration: human and robot activity occurs in a shared workspace, but their movements are sequential; they do not work on a component at the same time.
  • Cooperation: while both are in movement, a robot and a person work simultaneously on the same component.
  • Responsive collaboration: the robots react instantly to human worker movement.
In the majority of current industrial cobot applications, a human operator and a cobot coexist in the same location but carry out separate or consecutive duties. Human trust in collaboration has increased through HRC applications built with dependability and safety in mind, which also improve employee performance and working conditions [10]. In HRC, robots and humans work together in the same location. Cobots are designed to stop before any unintentional contact with a human teammate can cause harm. Additionally, cobots should be lightweight in order to reduce their inertia and enable abrupt stops. Certain cobots can even be taught tasks in logistical operations by having an operator guide their arms through the motion once. This shortens the programming procedure and expedites the personalized packing process. The use of robotics in logistics and transportation is growing quickly [11].
A cobot was estimated to cost an average of approximately $28,000 in 2015, but by 2025 that price is predicted to fall to an average of approximately $17,500. The market for collaborative robots was valued at USD 1.01 billion in 2021 and is anticipated to grow at a compound annual growth rate (CAGR) of 31.5% from 2022 to 2030. Figure 1 shows the global collaborative robot market in 2021 [12].
According to Figure 1, more than 24% of the market in 2021 belonged to the automotive sector, a share that is predicted to increase significantly over the next five years. Collaborative robot usage has expanded largely because cobots save floor space and reduce the cost of production downtime. Cobots also play a significant role in other processes, such as spot and arc welding, component assembly, painting, and coating. Innovations that support weight reduction, cost-efficiency, and low production overheads, combined with the introduction of new chemicals and metals, are expected to lead the automotive sector.
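As a rough illustration of the growth implied by these figures, the following back-of-the-envelope projection simply compounds the reported 2021 valuation at the stated CAGR; the resulting values are indicative only and are not taken from [12].

```python
# Rough projection of the collaborative robot market size, assuming the
# reported 2021 valuation of USD 1.01 billion and a constant 31.5% CAGR.
base_year, base_value_usd_bn, cagr = 2021, 1.01, 0.315

for year in range(2022, 2031):
    projected = base_value_usd_bn * (1 + cagr) ** (year - base_year)
    print(f"{year}: ~USD {projected:.2f} bn")
# Simple compounding yields roughly USD 12 bn by 2030 under these assumptions.
```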
Artificial intelligence (AI) and robotics have made it possible to find creative answers to the problems encountered by companies of all sizes across industries. Robots powered by AI are being used by industries to bridge the gap between humans and technology, solve issues, and adapt business strategies to changing customer expectations. Robots with AI capabilities operate in shared environments to keep employees safe in industrial workplaces. Additionally, they work independently to complete complicated operations such as cutting, grinding, welding, and inspection. Machine learning is essential to the ability of AI robots to learn and improve at performing tasks over time. Robots that employ machine learning can develop new learning pathways and competencies by using contextual knowledge gained through experience and real-time data. This enables the robots to address novel and unusual issues as they arise in their contexts. The most sophisticated type of machine learning is called deep learning; like other neural network approaches, it uses algorithms inspired by the structure and operation of the brain. Deep learning, which is essentially a “deep” neural network, gets its name from the abundance of layers, or “depth.” Given that deep learning demands truly enormous quantities of computing power and data, it is a future objective rather than something that can be fully achieved now for cobots: the more of each it has, the better it will function. Figure 2 shows the relationship between AI, machine learning, and deep learning.
It can be seen from Figure 2 that machine learning is a sub-category of AI, and deep learning is a sub-category of machine learning, meaning they are both forms of AI. AI is the broad idea that machines can intelligently execute tasks by mimicking human behaviors and thought processes. Several recent review articles [4,13,14,15] were evaluated, and it was found that although article [4] provided a good analysis of machine learning techniques and their industrial applications from the perspective of flexible collaborative robots, some of the recent works of 2022 were not covered. Only machine learning techniques in the context of HRC were reviewed in [13], which emphasized the need to include time dependencies in machine learning algorithms. Article [14] covered only five articles, up to 2021, and focused on control techniques for safe, ergonomic, and efficient HRC in industry; the role of AI in the development of cobots was not addressed. Article [15] mainly focused on smart manufacturing architectures, communication technologies, and protocols in the deployment of machine learning algorithms for cooperative tasks between human workers and robots. That article did not cover the deep learning techniques that are providing advanced learning approaches for cobots, and since it was published in 2021, it cited only related articles up to 2021.
There is growing interest in creating a collaborative workspace where people and robots can work cooperatively because of the supportive nature of their abilities. These diverse elements of the industrial sector’s dynamic nature and the existing deficiencies in analyses serve as a high impetus for the development of AI-based HRC. Hence, this paper aims to precisely respond to the following research questions:
What have researchers found in the literature on the expanding role of AI in cobots?
Will the implementation of AI in cobots be able to reduce previous concerns about industrial applications and contribute to better performance?
Reviewing the existing literature makes it possible to identify gaps that future efforts need to address. Accordingly, the objectives of the research are addressed in the following manner:
to research the key distinctions between robots and cobots
to research the common characteristics and capacities of robots
to discuss the various levels of industrial situations including robots, the role of AI, and collaboration
The main contribution of this research is to examine the interactions and mutual influence between AI and collaborative robots (cobots) with respect to human factors and contemporary industrial processes. Apart from machine learning and deep learning methods, recent works on the role of vision systems that employ deep learning for cobots have been specifically included. A literature study was selected as an appropriate method to determine the associations among one or more of the aspects mentioned above. However, details regarding safety concerns over cobots’ ability to accurately recognize human emotions and hand movements were not included.
The paper is organized as follows. The methodology presents how the review was carried out in Section 2. This is followed by the discussion on the findings in Section 3. Then, a discussion on the collected data is shown in Section 4 and recommendations and future directions are provided in Section 5. Conclusions are drawn in Section 6.

2. Methodology

The understanding and evaluation of the methodologies used are aided by a precise, well-described structure for systematic reviews. As a result, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) model was used in this research. The PRISMA model, as illustrated in Figure 3, depicts the flow of information from one stage to the next in a systematic review of the literature, including the total number of studies identified, excluded, and included, as well as the reasons for inclusion and exclusion. The databases Web of Science, IEEE Xplore, PubMed, ScienceDirect, SpringerLink, Scopus, and ResearchGate were examined for the literature search on collaborative robots using the following keywords: human–robot interaction (HRI), cobots, AI in robots, collaborative learning, HRC, reinforcement learning, deep learning in robotics, industrial robots. Peer-reviewed journal articles, conference papers, and reviews published since 2018, written in English, and containing qualitative or quantitative information or both were included in this systematic review.
As shown in Figure 3, we initially identified 156 articles related to collaborative robots and their applications through database searching. Of these 156 articles, 30 duplicates were removed. Screening the remaining 126 papers by title and abstract excluded a further 40 records. A total of 86 full-text articles were assessed for eligibility, of which 43 were excluded for various reasons, such as inappropriate study design (a flawed or underdeveloped design that produced low-quality results and made the research unreliable), publication before 2018, lack of relevance to cobots, AI techniques not being addressed, and inconsistency in results (the results were not consistent throughout the manuscript). Finally, we included 43 full-text articles published between 2018 and 2022 in the review. The discussion of the findings is provided in the following sub-sections.

3. Findings

3.1. Cobots

Current research on cobots aims to enable them to emulate humans in learning, adaptation, manipulation, vision, and cognition. It is necessary to improve ergonomic HRC for jobs such as welding, assembly, safety checks, material handling, polishing, etc., by developing robot perception, motion planning and control, and learning in a safe, reliable, and flexible manner. Researchers are striving for cognitive solutions to the following research problems facing industry [13,16,17]:
  • How cobots acquire knowledge and abilities with little or no task-specific coding
  • How cobots replicate user perception and motor control to carry out a physical task and reach objectives
  • How cobots with enhanced mobility complete a difficult task in a wide-open area
Safety has always been a top priority in conventional HRC, and people have been trained to use the robot only as a tool. The industrial sector is finally opening up to the human–robot collaborative work environment, as evidenced by the international standards ISO 10218-1:2011 and ISO/TS 15066:2016 [18,19], and research is advancing to also consider psychological empowerment. These standards propose alternatives for sustaining safety and wellbeing without the use of physical barriers. Operators must engage securely with an industrial robot throughout the entire production cycle with little to no training in order to expand the scope of true collaboration. Robots can be made safer and simpler to operate in production activities by implementing innovative interaction techniques utilizing multi-sensory interfaces, motion control tools, and augmented reality [20]. These techniques can also enable rapid prototyping and lower training expenses. Manufacturers of collaborative robots are integrating many technological solutions, such as machine learning, computer vision, and sophisticated gripping techniques, into robotic arms to make them safer to collaborate with, disrupting current robotic strategies and continuing to expand the application of robotic systems to research labs, the medical sector, assembly, and silicon wafer handling. Example uses of cobots in industrial applications are shown in Figure 4 [21].
Figure 4 shows examples of cobots in various industrial applications, such as pick and place, assembly, machine tending, gluing, quality assurance, palletizing, screw driving, intralogistics, and so on.

3.1.1. Differences between Robots and Cobots

Cobots and robots are capable of doing quite similar work. For example, both are intended to replace the need for a human operator by automating all or part of the assessment process. A few significant distinctions, however, can make either approach better than the other in particular situations. Industrial collaborative robots are essentially intended to work with human workers in completing assigned tasks. Instead of being independent, they are human-guided and are used to boost productivity and effectiveness by providing extra force, energy, accuracy, and intelligence. Industrial robots, in contrast, replace human workers rather than working collaboratively alongside them, automating repetitive jobs that frequently require a significant amount of force. Table 1 summarizes the key distinctions between conventional robots and cobots [22].
The main distinction is that cobots help employees, whereas robots take their place. In addition, cobots benefit from faster learning and easier programming due to AI. Industrial robots need intricate reprogramming, which calls for a knowledgeable engineer or programmer. A cobot’s real-time interface allows for interactive human input during communication, whereas robots require remote interaction. Finally, because cobots are lightweight and made for collaboration, they are not typically used for heavy-duty production; instead, industrial robots handle those tasks. Due to their size and sturdiness, robots are often caged to safeguard workers from mishaps. On the flip side, a cobot may work on many tasks within the same industry, such as production quality control, testing, or precision welding [23].
In a manufacturing setup for industrial applications, some typical tasks that a cobot can perform are picking and placing, removing trash, packing, testing, quality assurance, tracking machinery, gluing, bolting, soldering joints, riveting, cutting, polishing, etc. Cobots are employed in a variety of sectors and industries, such as the production of electronics, aircraft, automobiles, furniture, plastic modeling, etc., due to their adaptability. They also work in fields including agriculture, research labs, surveillance, food service and production, healthcare, and pharmaceuticals. Soon, cobots will become increasingly complex and adaptable. Cobots will continue to do precise and sophisticated jobs as long as AI technology improves. Flexible robots’ connectivity and compatibility further make them an essential innovation for present and future industrial, medical, manufacturing, and assistive technological demands.
However, developing robots beyond traditional automation presents considerable obstacles, particularly for real-world capabilities such as autonomous decision making and cognitive awareness. Modern adaptable robots have a lot of promise, and the integration of AI and machine learning has attracted attention from many different fields of study [24]. End-to-end autonomy in learning-based robots commonly includes three primary elements: perception, cognition, and control. Due to the complementary nature of these elements, autonomous control is made possible by sophisticated sensing and cognitive techniques. A cobot deployed on a mechanical alloying system can significantly increase lab productivity, operator safety, and data variability and reliability in the biomedical sector. Cobots work especially well for test laboratories that want to boost productivity but do not have sufficient test volumes to justify acquiring a fully autonomous system.

3.1.2. Advantages of Cobots

Cobots can now take over manual tasks that are ubiquitous across many companies. They may also be given chores that are monotonous, dirty, hazardous, or otherwise unappealing to people. Among the main advantages of using cobots in the workplace is the avoidance of injuries. Cobots can perform tasks that require arduous lifting or repetitive motion, and their use can help prevent contact with toxic materials, hazardous machinery, and the highest-risk tools. Staff absence declines as a consequence of fewer injuries.
Numerous cobot safety risk assessment companies have been established as a result of cobot deployment, and they issue warnings about potential risks. Although cobots are frequently promoted as being safe to use right out of the box, the hazards that could arise throughout a cobot’s sequence of tasks and the attachment of its tools must be taken into account [1].
Companies may grow rapidly and automate various manufacturing processes with the aid of cobots, which also frees up extra space for remote working. By taking over undesirable duties, cobots also increase worker safety [25]. Cobots can supplement human labor and are quite cost-effective, making them well suited to small- and medium-scale enterprises. AI also enables cobots to be used creatively and adaptably, ensuring that they are never idle on the worksite. In summary, cobots in the manufacturing industry can enhance quality control, maximize effectiveness, and raise output.

3.1.3. Disadvantages of Cobots

The main drawbacks of cobots in production are not related to their functionality, but rather to the issue of whether an enterprise business should use them. For instance, cobots cannot perform heavy lifting because they were not designed for that purpose. They are not completely automated to handle complex tasks, either. However, when it comes to industrial floors where workers require an extra hand, their strengths are clear.
More importantly, however, cobots still face some challenges in cognitive and dexterity-demanding tasks. These drawbacks are expected to be addressed by engineers and programmers as the technology advances [8]. Current cobots also remain unable to discern an individual’s emotional state [26].

3.2. Artificial Intelligence

As a segment of AI, machine learning refers to algorithmic or statistical operations that allow computer systems to learn from experience automatically [24]. An interconnected industry with a network of industrial Internet-of-Things (IIoT) devices, such as robots that improve and optimize processes as part of smart manufacturing, is made possible largely through machine learning. Assembly can benefit immensely from machine learning; using machine learning technology when manufacturing particular items, such as semiconductors, can lower expenses associated with maintenance and inspection, leakage, and outages.
Machine learning can also enhance quality control after assembly, and non-destructive examination can be carried out by machine learning without human error [8]. The big data produced by IIoT sensors that capture information on the status of equipment is used for predictive maintenance of industrial robots and other devices. The data are analyzed by machine learning algorithms to forecast when a machine will require repair, preventing expensive downtime from unplanned maintenance and allowing maintenance to be scheduled for periods of low consumer demand. Supply chain management can be improved by feeding centralized data insights from digital industries into machine learning algorithms. This includes optimizing logistical routes, switching from barcode scans to a vision-based inventory system, and making the most of available storage capacity. Additionally, machine learning can forecast demand trends to assist in preventing overproduction.
Big data must be supplied to machine learning algorithms in order for them to identify trends and gain insights. Without sufficient data, a machine learning model may never perform to its maximum capability. While it may seem obvious, the right data is also necessary for effective model learning. Various subcategories of machine learning, such as deep learning, are now widely used because the significant computing power they need has become widely available and reasonably priced. Deep learning relies on neural networks: networks of nodes whose connection weights are learned from data. These networks are designed to replicate how the brains of humans and animals adapt to changing inputs in order to acquire knowledge.
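To make the idea of learned connection weights concrete, the minimal sketch below trains a small two-layer network with plain NumPy gradient descent; the data, layer sizes, and learning rate are illustrative assumptions, not drawn from any of the cited works.

```python
import numpy as np

# Minimal two-layer feedforward network whose weights are learned from data.
# Sizes and the toy dataset are illustrative only.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))                  # 64 samples, 4 input features
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0  # toy binary target

W1, b1 = rng.normal(size=(4, 8)) * 0.1, np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)) * 0.1, np.zeros((1, 1))
lr = 0.1

for _ in range(500):
    h = np.tanh(X @ W1 + b1)                  # hidden layer activations
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))      # output probability
    # Backpropagate the cross-entropy gradient through both layers.
    d2 = (p - y) / len(X)
    dW2, db2 = h.T @ d2, d2.sum(axis=0, keepdims=True)
    d1 = (d2 @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ d1, d1.sum(axis=0, keepdims=True)
    W1, b1, W2, b2 = W1 - lr * dW1, b1 - lr * db1, W2 - lr * dW2, b2 - lr * db2
```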
Although the use of machine learning in diverse industries and warehouses has increased recently [27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43], the COVID-19 pandemic served as a wake-up call. Some businesses halted operations to reduce the risk of infection among workers on assembly lines in close proximity to one another. Nevertheless, companies that had invested in autonomous robots controlled by machine learning techniques were able to respond quickly, imaginatively, and effectively. Machine learning, a branch of AI, is used to find patterns in the massive volumes of data produced by digital images, audio, video, and text. Robots can make intelligent, secure, reliable, and independent choices, such as where to install the appropriate rivet at the proper force on a production line, using algorithms that recognize patterns and translate them into rules [44]. There are three key jobs for skilled workers to fulfill: they must program robots to carry out specific jobs, explain the results of such activities to non-skilled workers (particularly whenever the conclusions are counterintuitive or debatable), and uphold the appropriate use of the technology.
Machine learning has an impact that extends well beyond manufacturing and warehousing floors. Machine learning and robots are becoming more accessible thanks to new technology and processor generations; machine learning is turning robots from science fiction into practical science, allowing them to respond quickly to change, decrease expenses, and enhance the user experience. However, machine learning techniques have some shortcomings and do not always yield satisfactory results. For instance, machine learning approaches are often opaque, their results are not always precise and reliable in complex and delicate cases, and machine learning algorithms cannot address all of the underlying assumptions and circumstances of a problem. Machine learning is nonetheless an intriguing research tool for roboticists since it allows robots to learn complicated behaviors flexibly from unstructured settings. Machine learning can assist cobots particularly in learning to respond to these kinds of conditions. As a result, current research has focused on the invention and application of different machine learning approaches, such as neural networks and reinforcement learning, to create natural, smooth, and flexible HRC [45].
Deep learning has become very popular recently because it makes it simple to create image-processing solutions that would otherwise be sophisticated or infeasible. In this context, deep learning refers to a neural network capable of learning from the supplied image data. This enables a wide range of activities, including localizing components, identifying flaws on intricate surfaces, deciphering challenging characters, and classification. Numerous parameters around an object may now be examined in conjunction with a robot. Reinforcement learning and deep learning are both autonomous learning systems. The distinction is that reinforcement learning learns proactively by modifying behaviors based on continual feedback to optimize a reward [46], whereas deep learning learns from a training set and is afterward applied to a new test dataset [20].
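The reward-driven feedback loop of reinforcement learning can be illustrated with the minimal tabular Q-learning sketch below; the toy environment, states, actions, and hyperparameters are invented for illustration and do not correspond to any specific work cited here.

```python
import random

# Toy tabular Q-learning: an agent learns which action moves it toward a goal.
# Environment, states, rewards, and hyperparameters are illustrative placeholders.
n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(state, action):
    """Hypothetical environment: action 1 moves toward the goal state."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

for _ in range(2000):
    s = 0
    for _ in range(20):
        # Epsilon-greedy action selection based on continual feedback.
        a = random.randrange(n_actions) if random.random() < epsilon else max(
            range(n_actions), key=lambda i: Q[s][i])
        s2, r = step(s, a)
        # Q-learning update: move Q(s, a) toward reward + discounted best future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
```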

3.3. Analysis

3.3.1. Non-Collaborative Workspace-Type Robots

A summary of the state-of-the-art research on non-collaborative workspace-type robots is provided in Table 2.
As shown in Table 2, several works implemented cobots but were not able to carry out collaborative tasks due to safety concerns. For non-collaborative tasks, AI was employed in [47,49,50], whereas [48] utilized control algorithms and [51] used a linear mixed effects model. Bagheri et al. [47] proposed a bidirectional and more transparent interaction-based learning between human beings and cobots to improve interaction and enhance performance using a transparent graphical user interface (T-GUI). A T-GUI enables the cobot to describe its operations and the operator to add the instructions needed to assist the cobot in completing the task. The suggested approach was validated in experiments with 67 volunteers, and the authors concluded that giving explanations boosts performance in terms of effectiveness and efficiency. A conventional industrial robot with cooperative learning was proposed in the work of Amarillo et al. [48], which provided a physical interaction interface with improved admittance controller algorithms for robot-assisted spine surgery. The system was used to communicate with the robotic assistant in a surgical environment in an intuitive manner while maintaining the mechanical rigidity of the industrial robot. This involved applying an admittance control paradigm to hand-guidance behavior through the introduction of a revised inverse kinematics closed loop (IKCL) into the joint velocity computation. An orientation restriction control loop (OCL) was introduced to make sure that the system maintained the required orientation while in hand-guidance mode.
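For readers unfamiliar with admittance control, the sketch below shows the generic idea only (a virtual mass–damper mapping measured interaction force to a commanded velocity); it is not the IKCL/OCL scheme of [48], and all parameter values are assumptions.

```python
import numpy as np

# Generic admittance control sketch: M * dv/dt + D * v = f_ext, integrated to
# give a Cartesian velocity command that a velocity-controlled robot would track.
M = np.diag([2.0, 2.0, 2.0])      # virtual mass [kg], illustrative
D = np.diag([20.0, 20.0, 20.0])   # virtual damping [N·s/m], illustrative
dt = 0.001                        # control period [s]

def admittance_step(f_ext, v):
    """One control cycle: solve M * dv = (f_ext - D * v) * dt and integrate."""
    dv = np.linalg.solve(M, f_ext - D @ v)
    return v + dv * dt

# Example: operator pushes with 10 N along x; velocity settles near 0.5 m/s.
v = np.zeros(3)
for _ in range(1000):
    v = admittance_step(np.array([10.0, 0.0, 0.0]), v)
print(v)
```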
To help workers who engage with cobots maintain a good psychological state, Nicora et al. [49] presented a control framework. A proposed human-driven control structure is described along with a thorough breakdown of the elements needed to create such an automation system, beginning with the identification of the factors that potentially affect the collaboration environment. They developed a MindBot coworker by combining a cobot with an avatar, an interactive virtual agent, for a better working environment through elements such as gaze, gestures, and talking capabilities, with physical, voice, and visual interactions. An orchestrator module divides the duties between the human worker and the MindBot workmate and coordinates the activities of the cobot and the avatar. The worker’s physiological reactions were recorded using a Fitbit Inspire HR, and 3D position and skeletal joints were captured with a Microsoft Azure Kinect.
Oliff et al. [50] detailed the creation of an effective modeling approach for robotic devices and a reinforcement learning robot that could make decisions on its own by tailoring its behavior to human activity. The reinforcement learning problem was approached with a deep Q-network-focused methodology, since the robot controller discretized the functionality of the robotic operators into a set of protocols. The tripolar production plant model, which shows how the robot-operated and user-operated cells interact, and the AnyLogic model of the robot operator were developed. Story et al. [51] investigated how people’s workload and trust during an HRC activity were affected by the robot’s velocity and proximity settings. Workload and trust were assessed after every run of a task involving a UR5 industrial robotic arm operating at various velocities and proximity settings, with 83 participants. Trust and the velocity or proximity settings did not significantly interact. This research demonstrated that a robotic system can affect people’s workload even when it complies with existing safety regulations.

3.3.2. Collaborative Workspace-Type Robots

A summary of the state-of-the-art research on collaborative workspace-type robots is provided in Table 3.
As per Table 3, a lot of research has been carried out on enabling robots to perform collaborative tasks of many kinds in industrial applications. All the works employed AI, such as deep learning and reinforcement learning, in the design of cobots for several industrial activities.
Zhang et al. [52] presented a task order assignment mechanism for assembly operations, optimized using an HRC-reinforcement learning (HRC-RL) system. A practical examination of a simulated alternator assembly was conducted to confirm the efficacy of the technique. The deep deterministic policy gradient algorithm was extended in the creation of the HRC-RL framework. The technician, the assembled component, a UR5 robot with a depth image sensor, and the different control instruments comprise the actual collaborative assembly station. The authors concluded that by using the suggested strategy, manual sequencing decisions are replaced, the supervisor’s effort is reduced, and irrational sequencing is avoided. Silva et al. [53] investigated the idea of direct control of a robot using video streams from cameras. Utilizing homography and deep learning, the robot can automatically map image pixels from several camera systems to locations in its global Cartesian coordinates. A robot’s route plan is then superimposed on each camera feed using this map, which also enables a user to control the robot by interacting with the video stream directly. An ArUco marker is used to locate the robot pixels. The findings were verified in both simulation and practical tests using a Baxter robot with a mobile base.
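The pixel-to-workspace mapping idea in [53] can be sketched as below: detect an ArUco marker and map its image position through a plane homography into robot coordinates. The calibration correspondences and frame are placeholders, and the ArUco calls follow the classic OpenCV contrib API, which may need adapting for newer OpenCV releases.

```python
import cv2
import numpy as np

# Sketch of pixel-to-robot-plane mapping via homography (in the spirit of [53]).
# The four pixel/robot correspondences would normally come from a calibration step.
pixel_pts = np.float32([[100, 80], [540, 90], [530, 400], [110, 410]])
robot_pts = np.float32([[0.20, -0.30], [0.20, 0.30], [0.60, 0.30], [0.60, -0.30]])
H, _ = cv2.findHomography(pixel_pts, robot_pts)

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)

if ids is not None:
    # Map the marker centre from image pixels to robot-plane coordinates (metres).
    center_px = corners[0][0].mean(axis=0).reshape(1, 1, 2).astype(np.float32)
    center_robot = cv2.perspectiveTransform(center_px, H).reshape(2)
    print("Robot-frame position of marker:", center_robot)
```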
Buerkle et al. [54] provided a method for recognizing upper-limb movement intention using an EEG to improve safety in a human–robot collaborative task. A dedicated data processing approach was introduced to identify the EEG signals as quickly as feasible with minimal smoothing effort. To train a long short-term memory recurrent neural network (LSTM-RNN), motion intents were labeled using TimeSeriesKMeans. The authors concluded that the proposed technology might provide quicker detection speeds, but it still must be evaluated on an online platform in a collaborative human–robot setting.
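As a minimal sketch of the LSTM-based intent classification step, the PyTorch snippet below classifies fixed-length EEG windows into two intent classes; the channel count, window length, and network sizes are assumptions and not the architecture of [54].

```python
import torch
import torch.nn as nn

# Minimal LSTM classifier over EEG windows (illustrative, not the model of [54]).
class IntentLSTM(nn.Module):
    def __init__(self, n_channels=8, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, time, channels)
        _, (h, _) = self.lstm(x)     # use the final hidden state
        return self.head(h[-1])

model = IntentLSTM()
eeg_window = torch.randn(4, 250, 8)  # 4 windows of 250 samples x 8 channels
logits = model(eeg_window)           # (4, 2) scores, e.g., "move" vs "rest"
```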
De Winter et al. [55] proposed a method to decrease the problem-solving space in assembly sequences through appropriate communication between humans and robots using interactive reinforcement learning (IRL) and potential-based reward shaping (PBRS). Rather than modifying the cobot programming, transferring skills in this way can decrease maintenance expenses and accelerate the learning rate. Nevertheless, this method requires that cobots can define, explain, and justify their actions to people, and that people can then pass on their expertise to the cobots through feedback to help them carry out their duties effectively.
A framework for human-centered collaborative robotic systems based on reinforcement learning was developed by Ghadirzadeh et al. [56] to provide more time-efficient cooperation between humans and robots in packing activities. To handle the sequential motion data, graph convolutional networks (GCNs) and recurrent Q-learning were used. An additional unsupervised motion reconstruction network was trained to improve data efficiency for the learning model. The experimental demonstrations show that unwanted delays can be minimized by enabling more natural communication between users and robots.
Akkaladevi et al. [57] proposed a reinforcement learning framework that supports a full collaborative assembly process intuitively. The learning strategy has two steps. The first phase entails utilizing a task-based formalism to model the straightforward tasks that make up the assembly process. The use of the framework to address errors or unusual circumstances that arise during the actual execution of the assembly operation was then demonstrated. The robot system uses 3D sensors to observe the operator and the surrounding area, and a dynamic GUI to communicate with the user. Additionally, the framework enables various users to instruct the robot in various assembly procedures. Heo et al. [58] proposed a deep learning-based collision detection framework for industrial cobots. A deep neural network system was created to interpret robot collision signals and detect any accidents. High-dimensional inputs from the robot joints were analyzed by 1-D convolutional neural networks (CNNs), which determined whether a collision had occurred. Quantitative research and experimentation were carried out using six-degree-of-freedom (DoF) cobots to confirm the effectiveness of the suggested approach. The authors concluded that the framework demonstrated high collision sensitivity while also being resistant to false-positive findings brought on by erratic signals and/or uncertain models. The framework was applied to general industrial robots only.
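To illustrate what a 1-D CNN over joint-signal windows looks like in practice, a minimal PyTorch sketch is given below; the number of joints, window length, and layer sizes are illustrative assumptions, not the network of [58].

```python
import torch
import torch.nn as nn

# Minimal 1-D CNN over windows of joint signals (e.g., torques of 6 joints),
# outputting a collision / no-collision score. Dimensions are illustrative.
collision_net = nn.Sequential(
    nn.Conv1d(in_channels=6, out_channels=16, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),   # pool over the time dimension
    nn.Flatten(),
    nn.Linear(32, 2),          # scores for {no collision, collision}
)

signals = torch.randn(8, 6, 100)   # 8 windows, 6 joints, 100 time steps
scores = collision_net(signals)    # (8, 2)
```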
Gomes et al. [59] investigated the use of deep reinforcement learning to guide a cobot through pick-and-place activities. They demonstrated the creation of a control system that allowed a cobot to grip objects that were not covered in training and to respond to changes in object placement. The setup consisted of a collaborative UR3e robot with a two-finger gripper and a mounted RGB-D camera pointed toward the workspace ahead of the robot. Convolutional neural networks were used to estimate the Q-values, and CNN models, namely ResNeXt, DenseNet, MobileNet, and MNASNet, were employed to compare system performance. From the simulation and experimental results, when handling a previously unseen object using the pre-trained CNN model MobileNet, the proposed system achieved a gripping success rate of 89.9%. For human–robot collaborative activity, Chen et al. [60] suggested a novel neural-learning-enhanced admittance control technique. A smooth stiffness mapping between the human arm endpoint and the mechanical arm joints was created to inherit the properties of the human arm electromyography signals, influenced by cognitive collaboration. To build better-integrated HRC, they suggested a stiffness mapping approach between the human and robot arms based on the estimated stiffness. A neural network-based dynamic controller was developed to accommodate uncertain dynamics and unknown payloads in order to improve the tracking performance of the mechanical arm. The task chosen in this work was sawing wooden pieces. Comparative studies were carried out to confirm the efficacy of the suggested method.
Qureshi et al. [61] provided a framework for intrinsic motivational reinforcement learning where an individual receives incentives based on their intrinsic motive via an action-conditional prediction model. By employing the suggested technique, the robot acquired interpersonal skills from experiences with HRI obtained in actual chaotic circumstances. The suggested approach consists of a policy network (Qnet) and an action-conditional prediction network (Pnet). The Pnet provides self-motivation for the Qnet to acquire societal communication abilities.
Wang et al. [62] explored deep learning as a data-driven method for constant human movement observation and predicting future HRC demands, resulting in better robotic control and planning when carrying out a collaborative activity. A deep CNN (DCNN) was utilized to identify human activities. Using a video camera, people’s actions were captured. Each video’s frames underwent preprocessing to provide sequential steps (grasping, holding, and assembling) needed to finish the given task. They achieved an accuracy of 96.6% in classifying the task through the network.
The paradigm for collaborative assembly between humans and robots proposed by Q. Lv et al. [63] is based on transfer learning and treats the robot taking part in the cooperation as an operator with reinforcement learning. To achieve quick development and validation of an assembly strategy, it comprises three modules: HRCA strategy development, similarity assessment, and strategy transfer. According to the findings, the proposed method can increase assembly efficiency over the baseline developed assembly by 25.846%. In order to investigate the socio-technological environment of Industry 4.0, which involves cobots at the personal, workgroup, and organizational levels, Weiss et al. [64] established a study plan for social practice and workspace research. They established cutting-edge collaboration concepts for a cobot in two distinct scenarios, polishing molds and assembling automobile combustion engines, as part of the AssistMe project. Bilateral control and imitation learning were employed to conduct the collaborative activity. According to Sasagawa et al. [65], bilateral control captures human involvement abilities for interactions by extracting responses and commands separately. Four-channel (4ch) bilateral control was employed to collect the necessary data, and long short-term memory was used to train the model. A meal-serving activity was carried out to validate the proposed approach. The experimental findings clearly show the significance of controlling forces, and the predicted force was capable of controlling dynamic interactions.
Lu et al. [66] predicted user intention from the dynamics of the user’s limbs and subsequently developed an assistance controller to support humans in completing collaborative tasks. The efficiency of the prediction technique and controller was evaluated using the Franka Emika robot. The suggested controller integrates assistance control and admittance control. The best damping value was determined using reinforcement learning, and the assistive motion was generated using predictions of user intention. Using a knowledge-based architecture, Karn et al. [67] suggested that people and robots may collaborate to comprehend the environment of a defense operation. The context-aware collaborative agent (CACA) model, which was built on an ontology, provides contextual patterns and enhances robot–army collaboration and communication. A recurrent actor–critic model was created to extract information from past data that is helpful to the actor and critic. De Miguel Lazaro et al. [68] developed a method for adapting a cobot workspace to human workers using a deep learning camera mounted on the cobot. The worker collaborating with the cobot was recognized by the camera, and the operator’s data were analyzed and used as the input to a module that adapts particular robot attributes. The cobot adjusted to the worker’s abilities or presented parts for the operator to handle, depending on how they were handled.

3.3.3. Industrial Robots Employing Machine Learning

A summary of the state-of-the-art research on industrial robots employing machine learning is presented in Table 4.
From Table 4, collaborative workspace robots were implemented in [69,70], and non-collaborative robots in [71] and [72], all utilizing machine learning techniques.
Mohammad et al. [69] built a smart system that could control a robot using human brain EEG signals to complete cooperative tasks. To record brainwaves, an EEG acquisition device, a g.Nautilus headset, was chosen. Various pre-processing steps, such as compression and digitization of the EEG signals, were performed to remove abnormalities from the recordings and to prepare them for the following phase. Feature extraction and classification were then performed using the discrete Fourier transform and linear discriminant analysis, respectively. To achieve the desired assembly tasks, the classification result was converted into control signals that were transmitted to the robot. To validate the system, a case study was carried out on an automobile manifold. To prevent outside interference, the practice session had to be conducted in a strictly controlled setting, as brain activity can change throughout the day. A machine learning-assisted intelligent traffic monitoring system (ML-ITMS) was suggested by Wang et al. [70] for enhancing transportation security and dependability. The ITMS incorporates automobile parking, medical care, city protection, and road traffic control using signals from the LoRa cloud platform. To determine whether a path is crowded or not, preprocessed data from traffic lights were fed to a machine learning algorithm. When compared to other current methods, namely task adaptation in physical HRI (TA-HRI), gesture-based HRI (GB-HRI), and emotional processes in HRI (EP-HRI), the suggested technique achieves the highest traffic monitoring accuracy of 98.6%. The authors concluded that HRI made it possible for suppliers and customers at the two ends of transport networks to resolve significant issues concurrently.
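The frequency-feature plus linear-discriminant step described for [69] can be sketched as follows; the synthetic signals, sampling rate, and band choices are placeholders, not the actual pipeline parameters of that work.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Sketch of DFT feature extraction followed by LDA classification of EEG trials.
# The data, sampling rate, and frequency bands are illustrative placeholders.
fs, n_trials, n_samples = 250, 40, 500
rng = np.random.default_rng(1)
eeg = rng.normal(size=(n_trials, n_samples))          # one channel per trial
labels = rng.integers(0, 2, size=n_trials)            # two imagined commands

spectrum = np.abs(np.fft.rfft(eeg, axis=1))           # discrete Fourier transform
freqs = np.fft.rfftfreq(n_samples, d=1 / fs)
bands = [(8, 12), (12, 30)]                           # alpha and beta bands
features = np.column_stack([
    spectrum[:, (freqs >= lo) & (freqs < hi)].mean(axis=1) for lo, hi in bands
])

clf = LinearDiscriminantAnalysis().fit(features, labels)
commands = clf.predict(features)                      # mapped to robot commands
```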
By adjusting the robot’s physical and event parameters while handling collaborative duties, Aliev et al. [71] suggested an online monitoring system for cobots to predict breakdowns and safe stops. An automated machine learning model was used to predict potential disruptions or effects during interactions between humans and robots. Physical parameters, such as speed, vibration, force, voltage, current, and temperature of the robots, were collected by installing the appropriate sensors, and the event data included breakdowns, working status, and software or hardware failures. The acquired data were transmitted through RTDE (real-time data exchange) and MODBUS protocols over Wi-Fi. Various data preprocessing steps, namely data standardization, normalization, transformation, and correlation analysis, were performed to extract significant information by removing noisy data. Thereafter, multiple linear regression and automatic classification models were employed to predict the quantitative and qualitative parameters, assessed using various performance metrics. Malik et al. [72] investigated the potential of adopting a digital twin to handle the intricacy of collaborative production environments. A digital twin, serving as a pacemaker, was created during the design, construction, and use of a human–robot assembly process for validation. The authors discussed various phases and forms of the digital twin, namely design, development, commissioning, operation, and maintenance.
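A minimal sketch of regression-based condition prediction of the kind described for [71] is shown below; the sensor features, target quantity, and data are invented for illustration and are not taken from that study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

# Sketch: predict a condition indicator (e.g., remaining cycles before service)
# from standardized cobot sensor readings via multiple linear regression.
rng = np.random.default_rng(2)
# Columns: speed, vibration, force, voltage, current, temperature (synthetic)
X = rng.normal(size=(200, 6))
remaining_cycles = 500 - 40 * X[:, 1] - 25 * X[:, 5] + rng.normal(scale=10, size=200)

scaler = StandardScaler().fit(X)                       # data standardization
model = LinearRegression().fit(scaler.transform(X), remaining_cycles)

new_reading = rng.normal(size=(1, 6))                  # latest sensor sample
print("Predicted remaining cycles:", model.predict(scaler.transform(new_reading))[0])
```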

3.3.4. Cobot-Related Works without AI

A summary of cobot-related works without employing AI is provided in Table 5.
According to Table 5, a few research works successfully achieved collaborative tasks using industrial robots without AI. Walker et al. [73] demonstrated a robotic system for cuffing chickens. The system is made up of an Intel RealSense D435 RGB-D camera, a Universal Robots UR5 manipulator, and various software modules. The camera could detect an entire, de-feathered chicken and precisely estimate the location and orientation of the hock. Using a unique cutting head, the UR5 manipulator then independently grabbed this joint and secured it to a cuff. Edmonds et al. [74] developed a comprehensive framework that consisted of a neural network-based sensory prediction model serving as the data-driven representation and a symbolic action planner employing a deterministic language as the planner-based representation. The model was evaluated in a robot system on a manipulation task of opening locked medicine bottles. An enhanced generalized Earley parser (GEP) was used to merge the sensory model and the symbolic planner. The task was carried out on numerous bottles with different locking mechanisms. The symbolic planner produced mechanical explanations, whereas the sensory model generated functional ones. The authors concluded that an automated system can learn to open three pharmaceutical bottles from a modest number of human instructions.
Grigore et al. [75] assessed the mobility of robots (three autonomous vehicles) to determine the effectiveness of collaborative robot systems in accomplishing challenging disaster and recovery operations. The primary areas of this study’s originality are the control, communication, computing, and integration of unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) into real-time applications.
Bader et al. [76] investigated the use of a cobot-histotripsy system for in vitro ablation therapy of venous thrombosis. A flow channel containing a human whole blood clot was used to test mechanical repeatability and bubble cloud targeting. The histotripsy system was translated by a six-degree-of-freedom cobot, which could rotate through 360 degrees on each axis at a maximum speed of 180 degrees per second. To operate the cobot, a dedicated GUI was created in MATLAB. The cobot served as a zero-gravity support to permit controlled placement of the transducer when the GUI turned on free-drive mode. The research findings show that cobots could be utilized to direct histotripsy ablation of targets that are outside the transducer’s normal field of view. Eyam et al. [77] provided a method that leverages electroencephalography to digitize and analyze human emotions in order to tailor cobot attributes to the individual’s emotional state. The cobot’s parameters were changed to maintain human emotional states within acceptable bounds, fostering more trust and assurance between the cobot and the user. They also surveyed several technologies and techniques for recognizing and sensing emotions. The suggested method was then validated using an ABB YuMi cobot and widely available EEG equipment in a box alignment task. The sensed emotions were segmented, and the robot’s settings were changed using a first-order rule-based algorithm. The work did not address the internal effect of stress, which may produce unstable robot reactions.
By merging a steady-state visual evoked potential (SSVEP)-based brain–computer interface (BCI) with visual servoing (VS) technology, an intuitive robotic manipulation controlling system was created by Yang et al. [78]. They suggested the least squares method (LSM) for camera calibration, the task motion and self-motion for avoiding obstacles, and a dynamic color alteration for object recognition. Numerous tests were run to see whether the distributed control system was reliable in picking the correct object as per the subject order and whether the suggested techniques were effective. The study concluded that the produced system could assist disabled people in readily operating it without the requirement for considerable training. Cunha et al. [79] presented the outcomes of the use of a neuro-inspired model for action choice in a human–robot collaborative situation, using dynamic neural fields. They tested the concept in a real-world construction setting in which the robot Sawyer, working alongside a human worker, chose and verbalized the next component to be installed at every stage and produced the suitable way to install it. The 2D action execution layer enabled the simultaneous visualization of the constituents and action.

3.3.5. Vision Systems in Cobots

Visual inspection in industrial applications can generally be divided into manual visual inspection and automated visual inspection. The disadvantages of manual visual inspection are that it can be monotonous, laborious, fatiguing, subjective, lacking in reproducibility, costly to document in detail, too slow in many cases, and expensive. Automated visual inspection, on the other hand, is more reliable if programmed accurately (although it is not expected to be error-free). It poses minimal to no safety concerns, but certain operational conditions must be met.
Collaborative robots are becoming increasingly perceptive of their environment because of developments in sensors, cameras, AI, and machine vision. These advanced robots can operate more safely and effectively in specific types of workplaces due to their enhanced perception [80]. Even small end effectors and payloads can nevertheless be dangerous, even though cobots are intended for safe interaction with human staff [81]. In completely automated packing and loading systems, cobots must be able to identify items, determine their pose in space so they may be grasped, and plan pick-and-place trajectories that avoid collisions. To determine an object’s pose, it is often not enough to simply record raw sensor data; the data must be processed using specialized machine vision techniques [82].
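One common pose-recovery technique of this kind is Perspective-n-Point: given known 3D model points, their detected 2D image locations, and the camera intrinsics, the object pose can be estimated. The OpenCV sketch below illustrates the idea; the model points, detected pixels, and camera matrix are placeholders and not drawn from any cited system.

```python
import cv2
import numpy as np

# Sketch of object pose estimation from matched 2-D image points and known
# 3-D model points, the kind of processing a pick-and-place cobot needs.
object_pts = np.float32([[0, 0, 0], [0.08, 0, 0], [0.08, 0.05, 0], [0, 0.05, 0]])
image_pts = np.float32([[320, 240], [420, 238], [422, 300], [322, 302]])
K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])   # camera intrinsics
dist = np.zeros(5)                                            # assume no distortion

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # rotation of the object in the camera frame
    print("Object position in camera frame [m]:", tvec.ravel())
```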
Computer vision enables cobots to have a highly developed sense of perception and knowledge of their surroundings. A cobot’s inbuilt sensors, including proximity, LiDAR, motion, torque, and 2D vision, all work together to make it safer on its own. Systems that employ 3D depth cameras and computer vision algorithms enable cobots to operate alongside people and have complete knowledge of their environment [83]. The summary of the state-of-art research on vision systems employed in cobots is provided in Table 6.
Table 6 summarizes the deep learning-based vision systems employed in cobots working with human workers in collaborative environments. A stable intelligent perceiving and planning system (IPPS) for cobots was developed by Xu et al. [84] using deep learning. A well-designed vision system was employed to provide a novel method of observing the environment, and a hand-tracking approach, a fingertip marking method, a new grasping method, and a trajectory planning method were also suggested. The perceived image was used as input to the deep learning neural networks to implement planning. For intelligent robot planning, a vision system was created with the help of two 3D RGB cameras, a depth camera, and an eye tracker. The RGB object images from the perception process were used as the input to a convolutional neural network, and the output was the type of object to be grasped. The IPPS underwent real-world testing and proved capable of realizing intelligent perception and planning with high efficacy and stability. Jia et al. [85] presented a deep learning-based technique for autonomous robot systems to detect computer numerical control (CNC) machines and recognize their operational status. First, a SiameseRPN network was used to recognize the target CNC device and its human–machine interface (HMI). The collected, pre-processed HMI images were then used as sources for working-status recognition. To determine the target CNC device’s operational status, a unique text recognition technique was created by fusing projection-based segmentation with a convolutional recurrent neural network (CRNN). When compared to the benchmark Faster R-CNN method, the proposed method was 16.5% more accurate.
Xiong et al. [86] developed a cutting-edge robotic device that can identify ports and automatically position the scope in a strategic location. They carried out an initial trial to evaluate the accuracy and technical viability of this system in vitro. A cobot can locate a marker attached to the surgical port’s entrance using its 3D camera and machine vision program. It can then automatically align the scope’s axis with the port’s longitudinal axis to provide the best possible brightness and visual observation to save the time and effort of specialists. Comari et al. [87] proposed a robotic feed system for an automatic packing machine that incorporated a serial manipulator and a mobile platform, both of which featured collaborative characteristics. To identify targets for the cobot near stationary plant components and to examine raw materials before loading operations, a vision system with a laser pointer and a monochrome 2D camera were used. To create trustworthy target objects for manipulating raw materials and interacting with static components of the fully automated production cell, appropriate computer vision techniques were used in this work. Zhou et al. [88] created a deep learning-based object detection system on 3D point clouds for a mobile collaborative manipulator to streamline small- and medium-sized enterprise (SME) operations. Robust detection and exact localization problems were addressed by the development of the 3D point cloud method. The mobile manipulator’s position in relation to workstations was calibrated using the 2D camera. Utilizing the deep learning-based PV-RCNN, the identification of the targeted objects was acquired, and the localization was carried out utilizing the findings of the detection.
Ahmed Zaki et al. [89] proposed implementing a commercial collision prevention platform to carry out simultaneous, unstructured industrial operations involving robots, cobots, and human workers. A robotic cell was deployed with two robotic manipulators, and an Intel RealSense D435 camera-based 3D vision system was used to detect and recognize the products to be picked. The implemented technology enabled real-time trajectory planning for both robots, allowing them to operate simultaneously even when the items to be collected were placed relatively close together on the conveyor belt. Zidek et al. [90] employed deep learning-based CNN training for an assisted assembly operation. The approach was tested in a smart manufacturing system with an assisted assembly workstation, using cam switches from actual production as the assembled product. The authors trained two CNN models (single-shot detection and mask region-based CNN) on 2D images generated from 3D virtual models and created a communication framework for cobots to support the assembly operation. Olesen et al. [91] examined the advantages of integrating a collaborative robot for mobile phone assembly with a state-of-the-art RGB-D imaging system and deep learning principles. To overcome the difficulties of gripping the cellphone parts, a multi-gripper switching approach was implemented using suction and several fingertips. The system employed a YOLOv3 model to identify and locate the various components, and a separate CNN to precisely adjust each component's orientation during phone assembly.
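To make the detection stage concrete, the sketch below runs a pretrained, off-the-shelf detector over a single RGB frame and keeps the confident detections whose box centers could feed a grasp planner. It uses torchvision's Faster R-CNN purely as a stand-in (it assumes torchvision ≥ 0.13 and an internet connection for the weight download); it is not the YOLOv3 or SSD network used in the cited studies.

```python
# Illustrative sketch only: generic object detection over an RGB frame,
# analogous in role to the YOLOv3 stage described above. Faster R-CNN from
# torchvision is used as a stand-in, not the network from the cited study.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # COCO-pretrained weights
detector.eval()

frame = torch.rand(3, 480, 640)            # placeholder RGB frame, values in [0, 1]
with torch.no_grad():
    detections = detector([frame])[0]       # dict with boxes, labels, scores

# Keep confident detections; their box centres would feed the grasp planner,
# and a second network could then refine each component's orientation.
keep = detections["scores"] > 0.7
boxes = detections["boxes"][keep]
```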
Amin et al. [92] suggested a hybrid vision safety solution to increase productivity and safety in HRC applications by enabling the cobot to perceive human actions visually and to differentiate between deliberate and unintentional contact. The authors collected two distinct datasets from the subjects, containing contact and visual information, and used a 3D-CNN for human action recognition and a 1D-CNN for physical contact detection. They investigated the effectiveness of these networks and reported the outcomes using a Franka Emika robot and vision systems. Bejarano et al. [93] proposed an HRC assembly workspace built around the ABB YuMi robot to assemble a product package. The IRB 14000 gripper, outfitted with a Cognex AE3 camera, was used for image acquisition and recognition. The study also outlined the benefits and difficulties of using cobots through a practical example of collaborative contact between a cobot and a human worker that could be applied in any industrial plant. The authors concluded that the cobot was able to carry out a collaborative assembly process within acceptable precision, coexistence, and simultaneity limits without endangering the human directly involved in the process.
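The contact-detection idea can be sketched as a small 1D-CNN operating on short windows of joint signals, classifying each window as no contact, intentional contact, or accidental collision. The channel count, window length, and class labels below are assumptions for illustration, not the architecture of the cited study.

```python
# Illustrative sketch only: a 1D-CNN that classifies a short window of joint
# torque samples from a 7-joint arm into three contact classes. Architecture
# details are assumed, not taken from the cited work.
import torch
import torch.nn as nn

class ContactDetector1D(nn.Module):
    def __init__(self, num_joints: int = 7, num_classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(num_joints, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_joints, window_length) window of joint torque samples
        return self.net(x)

detector = ContactDetector1D()
window = torch.randn(1, 7, 128)           # one 128-sample window of 7 joint torques
contact_class = detector(window).argmax(dim=1)
```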
The reviewed literature shows that cobots can now process substantial amounts of 3D visual information and respond swiftly, thanks to sophisticated machine-vision methods and dedicated on-board computing. When a cobot detects an obstacle near its workspace, it immediately stops moving to protect its human co-workers from injury.

4. Discussion

New technology now allows robots to collaborate closely with people. Over the previous two decades, a physical barrier separated the human workspace from the robot's workspace. Within the next five years this is expected to change, as robots become capable of coexisting safely and securely with humans in our living environments, including homes, workplaces, and industries. In the last five years, a new generation of robots has appeared with sensing elements distributed over their structure and independently actuated joints. As a result, if a person approaches a robot and touches it, the robot halts because it recognizes that a person is nearby. With reinforcement learning, the collision rate has been decreased and the task success rate improved compared with approaches that do not use RL [63]. A robot can also work for longer periods than a human. According to Weiss et al. [64], cobots are currently used in only a few areas of the workplace. Cobots have not yet effectively replaced human jobs; rather, their use has focused on automating simple parts of team tasks. Even though robots excel at routine and tedious jobs, human workers still handle unexpected and unscheduled duties better than their computerized coworkers; in a sense, people remain the system's most adaptable resource. By exploiting these heterogeneous strengths, HRC can outperform purely robotic processes.
The preceding section presented the findings in five tables (Tables 2–6). Table 2, which covers five articles on non-collaborative workspace-type robots, shows that a few of the studies implementing cobots were unable to carry out collaborative tasks because of safety concerns; for the non-collaborative tasks, AI was employed and improved both effectiveness and efficiency. Table 3, covering 17 articles, summarizes the state-of-the-art research on collaborative workspace-type robots, where considerable work has been devoted to making robots perform many kinds of collaborative tasks for industrial applications. All of these studies employed AI methods, such as deep learning and reinforcement learning, in the design of cobots performing industrial activities with improved execution. The summary of the state-of-the-art research on industrial robots employing machine learning in Table 4, with four articles, indicates that implementing collaborative workspace robots with machine learning techniques yields better output. Table 5, summarizing cobot-related works without AI in seven articles, shows comparatively poorer performance. Finally, Table 6 summarizes the state-of-the-art research on vision systems employed in cobots across ten articles and shows the influence of deep learning on improving performance.
In addition, Figure 5 shows how robots were used in collaborative research works over the last five years, drawn from Tables 2–6.
As Figure 5 indicates, the use of robots in collaborative research increased steadily between 2018 and 2022. Cobots bring robustness, reliability, and data-analysis capabilities to adaptable, collaborative technology, enhancing human abilities and delivering value to the enterprises that deploy them.
The tasks performed by the robot in collaborative research works are given in Figure 6.
According to Figure 6, most studies used collaborative robots to perform assembly tasks, followed by pick-and-place tasks. The development of cobots aims at sharing a workstation with people to improve the workflow (co-existence) and to provide scalable automation for different activities with human involvement (collaboration). Nevertheless, human behavior can be unpredictable in any kind of task, making it challenging for robots to interpret a person's intentions. As a result, it is still difficult for people and robots to work together seamlessly in some industry sectors.
From the analysis of the literature, it can be safely stated that a considerable number of recent articles demonstrate the expanding role of AI in cobots. In addition, the implementation of AI in cobots has resulted in better performance, as clearly shown in Section 3.

5. Recommendations and Future Directions

The published research shows that investigations have mostly concentrated on a particular set of fixed tasks; an in-depth study of dynamic circumstances with several kinds of tasks is still lacking. In addition, the reported collaboration techniques rely on static, feature-based methods or on robots that are controlled by humans. Most decisions are still made by a human supervisor, so the robots providing services are not truly cognitive. Developing healthy cooperation between robots and workers, so that activities are executed effectively and safely, remains difficult. Moreover, if any command is incorrect or missed, the cobot will likely fail to complete the task successfully and on time; in that case, assistance from a human worker is needed, which requires the person to be aware of the cobot's prior activities [47].
Existing deep learning techniques are not time-efficient and do not offer the adaptability that cobots need in complicated circumstances, which prevents real-time robotic application. Progress is required in several areas, notably online deep learning for dispersed teams of cobots and human operators communicating with one another, to bring data-driven control systems to the next generation. Current technologies do not consider exchanging knowledge about prior actions and experiences across numerous cobots in order to enhance and speed up learning. Ensuring that each cobot learns from both its own experience and that of other cobots requires dedicated distributed sensor signal processing and data aggregation across the many wirelessly networked robots, as illustrated in the sketch below.
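One possible scheme for such knowledge exchange is federated averaging: each cobot trains a local copy of a shared model on its own experience, and only the parameters are aggregated over the wireless network, never the raw sensor data. The snippet below is a minimal sketch of the aggregation step under that assumption; it is not a scheme prescribed by the works reviewed above.

```python
# Illustrative sketch only: federated averaging of model parameters across
# several cobots, so the fleet benefits from each robot's experience without
# sharing raw sensor data.
import torch

def federated_average(local_state_dicts):
    """Average the parameters of models trained independently on each cobot."""
    avg_state = {}
    for name in local_state_dicts[0]:
        avg_state[name] = torch.stack(
            [sd[name].float() for sd in local_state_dicts]
        ).mean(dim=0)
    return avg_state

# Usage (assumed workflow): each cobot uploads model.state_dict(); the shared
# model is refreshed with the averaged parameters and redistributed.
# shared_model.load_state_dict(federated_average([sd_cobot1, sd_cobot2, sd_cobot3]))
```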
The degree of autonomy of robots in the industrial and service sectors has recently been increased by machine learning methods, which have enjoyed a growing success rate. Most notably, techniques based on reinforcement learning or learning from demonstration have produced impressive outcomes, such as training robots to carry out difficult tasks by exploring the range of possible behaviors in the environment or by following human instructors. However, the application of these strategies to automated robot programming is still limited.
Reinforcement learning automates the trial-and-error method by letting robots continuously interact with their surroundings, which is not possible during the normal operating stage of an actual production unit. When applied in simulation, RL requires highly precise and computationally costly simulators, and reconciling the resulting gap between the simulation model and reality remains an open problem.
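The trial-and-error loop can be reduced to its simplest form, tabular Q-learning in a toy simulated task, as sketched below. The placeholder environment, reward, and hyperparameters are assumptions; a real cobot task would need a high-fidelity simulator and a continuous-control algorithm.

```python
# Illustrative sketch only: tabular Q-learning on a toy simulated task,
# showing the trial-and-error loop that RL automates.
import random

n_states, n_actions = 10, 4
alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = [[0.0] * n_actions for _ in range(n_states)]

def simulated_step(state, action):
    """Placeholder simulator: returns (next_state, reward, done)."""
    next_state = (state + action) % n_states
    reward = 1.0 if next_state == n_states - 1 else -0.01
    return next_state, reward, next_state == n_states - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy exploration is what makes the learning trial-and-error
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])
        next_state, reward, done = simulated_step(state, action)
        # standard Q-learning update
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state
```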
Sophisticated machine-vision methods based on deep learning give cobots a highly developed sense of perception and knowledge of their surroundings. With these capabilities, such robots can operate more safely and effectively in certain types of workplaces thanks to their enhanced perception.

5.1. Advanced Autonomous Algorithms

For cobots to fully realize their enormous potential in high-mix, low-volume production, cutting-edge algorithms are required. Cobots should be capable of operating without explicit instructions in new circumstances. When the surroundings are well known, the cobot's motion planning algorithm enables it to reach the object's position, while collision-avoidance algorithms enable responsive behavior when the surroundings are dynamic. These algorithms rely on the contextual information supplied by the cobot's sensors as it moves, as sketched below.
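One classical way to obtain such responsive behavior is a potential-field rule that combines an attractive velocity toward the goal with a repulsive term away from the nearest sensed obstacle. The sketch below illustrates the idea; the gains and influence radius are assumed values, not parameters from any cited system.

```python
# Illustrative sketch only: a reactive collision-avoidance rule that pulls the
# tool toward the goal and pushes it away from an obstacle that enters a
# fixed influence radius (a simple potential-field scheme).
import numpy as np

def avoidance_velocity(tool_pos, goal_pos, obstacle_pos,
                       k_attr=1.0, k_rep=0.05, influence=0.4):
    """Return a Cartesian velocity command that steers toward the goal while
    repelling from an obstacle closer than the influence radius."""
    v = k_attr * (goal_pos - tool_pos)                       # pull toward the goal
    diff = tool_pos - obstacle_pos
    dist = np.linalg.norm(diff)
    if dist < influence:                                     # repel only when close
        v += k_rep * (1.0 / dist - 1.0 / influence) * diff / dist**2
    return v

# Example: a human hand sensed near the tool while it moves toward a part.
cmd = avoidance_velocity(np.array([0.3, 0.0, 0.4]),
                         np.array([0.6, 0.2, 0.3]),
                         np.array([0.35, 0.05, 0.45]))
```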

5.2. Safety Devices

It is imperative to understand, create, and verify an environment in which the cobot can perform its tasks and safely coexist with humans. Several ISO-regulated requirements must be fulfilled to create a stable and safe environment, such as safety-rated monitored stop, hand guiding (teaching by demonstration), speed and separation monitoring, and power and force limiting.
Safety remains a technical barrier to the wider adoption of robots. Cobots are created to meet safety standards with inherently safe designs that allow them to interact safely with human beings and handle objects carefully in the workplace. Cobots incorporate adaptive elements, notably joint torque sensors, to absorb the force of unintended impacts and reduce the momentum transferred in possible accidents. Cobot development also makes use of a wide range of external sensing devices (vision systems) such as cameras, lasers, and depth sensors, fusing the acquired data to enable accurate proximity and action recognition between humans and robots.
Most cobots integrate the following safety features, among others. First, when the robot detects a human entering its functional workspace, it immediately stops moving; this is known as a safety-rated monitored stop and is usually implemented with one or more sensors that recognize the presence of people. Second, a hand-guiding capability enables a human to safely teach the robot a predetermined operating trajectory. In the event of unexpected contact, the robot immediately reduces its force to avoid hurting the human. The safety and health of people working with robots have been an active research topic in recent years, and progress has been made. A minimal sketch of this kind of monitoring logic is given below.
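The sketch below condenses these behaviors into a single speed-scaling rule driven by the measured human separation distance and contact force. The thresholds are hypothetical; in practice they follow from a risk assessment against the relevant standards (e.g., ISO/TS 15066).

```python
# Illustrative sketch only: a speed-scaling rule combining a safety-rated
# monitored stop, speed and separation monitoring, and power/force limiting.
# Thresholds are placeholder values, not figures from any standard or study.
def safety_command(separation_m, contact_force_n,
                   stop_dist=0.3, slow_dist=0.8, force_limit=80.0):
    """Return a speed scaling factor for the cobot given the current
    human separation distance and measured contact force."""
    if contact_force_n > force_limit:   # power and force limiting
        return 0.0                      # immediate protective stop
    if separation_m < stop_dist:        # safety-rated monitored stop
        return 0.0
    if separation_m < slow_dist:        # speed and separation monitoring
        return (separation_m - stop_dist) / (slow_dist - stop_dist)
    return 1.0                          # full programmed speed

# e.g., a worker 0.5 m away with no contact -> run at 40% of programmed speed
print(safety_command(0.5, 0.0))
```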

6. Conclusions

Based on our review of state-of-the-art publications, cobots have been widely applied in many areas, including communication robots in public spaces, logistic or supply chain robots that move materials inside a building, and articulated or industrial robots that help automate tasks that are not ergonomically sound, such as assisting individuals in carrying large parts or working on assembly lines. Since cobots and conventional robots can undertake similar tasks, the differences between the two approaches were examined to highlight their usage and to show which is preferable in certain scenarios, and the advantages and disadvantages of cobots were discussed. Several factors can affect cobot performance, including the sensing, preprocessing techniques, and control methods used. This work presented an overview of robot and cobot types, sensing or simulation tools, the tasks and the settings in which they are performed, and the types of AI-based control technology. Many of the reviewed studies implemented machine learning and deep learning techniques for managing the cobot's task. In addition, this review discussed the outcomes of the selected papers, including accuracy, safety issues, time delay, the training process, and robot capability. Finally, this systematic review provided recommendations and future directions for the interaction between cobots and the ever-advancing AI domain.

Author Contributions

Conceptualization, A.B., K.V.V.R. and I.E.; methodology, A.B., K.V.V.R., I.E., M.S.A.-Q. and S.S.A.A.; formal analysis, A.B., K.V.V.R., I.E., M.S.A.-Q. and E.N.; investigation, K.V.V.R.; resources, A.B., I.E., M.S.A.-Q., E.N. and S.S.A.A.; data curation, K.V.V.R. and E.N.; writing—original draft preparation, A.B., K.V.V.R., and I.E.; writing—review and editing, A.B., I.E., M.S.A.-Q., E.N. and S.S.A.A.; supervision, A.B., I.E., M.S.A.-Q., E.N. and S.S.A.A.; project administration, A.B., I.E., M.S.A.-Q. and S.S.A.A.; funding acquisition, A.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research has received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The authors declare no external contributions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hentout, A.; Aouache, M.; Maoudj, A.; Akli, I. Human–robot interaction in industrial collaborative robotics: A literature review of the decade 2008–2017. Adv. Robot. 2019, 33, 764–799. [Google Scholar] [CrossRef]
  2. “Demystifying Collaborative Industrial Robots”, International Federation of Robotics, Frankfurt, Germany. Available online: https://web.archive.org/web/20190823143255/https://ifr.org/downloads/papers/IFR_Demystifying_Collaborative_Robots.pdf (accessed on 6 October 2022).
  3. Li, Y.; Sena, A.; Wang, Z.; Xing, X.; Babič, J.; van Asseldonk, E.; Burdet, E. A review on interaction control for contact robots through intent detection. Prog. Biomed. Eng. 2022, 4, 032004. [Google Scholar] [CrossRef]
  4. Mukherjee, D.; Gupta, K.; Chang, L.H.; Najjaran, H. A Survey of Robot Learning Strategies for Human-Robot Collaboration in Industrial Settings. Robot. Comput. Integr. Manuf. 2021, 73, 102231. [Google Scholar] [CrossRef]
  5. Service Robots. Available online: https://ifr.org/service-robots (accessed on 7 October 2022).
  6. Gualtieri, L.; Rauch, E.; Vidoni, R. Emerging research fields in safety and ergonomics in industrial collaborative robotics: A systematic literature review. Robot. Comput. Integr. Manuf. 2021, 67, 101998. [Google Scholar] [CrossRef]
  7. Mobile Robots Improve Patient Care, Employee Satisfaction, Safety, Productivity and More. Available online: https://aethon.com/mobile-robots-for-healthcare/ (accessed on 9 October 2022).
  8. Maadi, M.; Khorshidi, H.A.; Aickelin, U. A Review on Human–AI Interaction in Machine Learning and Insights for Medical Applications. Int. J. Environ. Res. Public Health 2021, 18, 2121. [Google Scholar] [CrossRef]
  9. Matheson, E.; Minto, R.; Zampieri, E.G.G.; Faccio, M.; Rosati, G. Human-robot collaboration in manufacturing applications: A review. Robotics 2019, 8, 100. [Google Scholar] [CrossRef] [Green Version]
  10. Faccio, M.; Granata, I.; Menini, A.; Milanese, M.; Rossato, C.; Bottin, M.; Minto, R.; Pluchino, P.; Gamberini, L.; Boschetti, G.; et al. Human factors in cobot era: A review of modern production systems features. J. Intell. Manuf. 2022, 34, 85–106. [Google Scholar] [CrossRef]
  11. Castillo, J.F.; Hamilton Ortiz, H.; Díaz Velásquez, F.M.; Saavedra, F.D. COBOTS in Industry 4.0: Safe and Efficient Interaction. In Collaborative and Humanoid Robots; IntechOpen: London, UK, 2021; p. 13. [Google Scholar] [CrossRef]
  12. Collaborative Robots Market Size, Share & Trends Analysis Report by Payload Capacity, by Application (Assembly, Handling, Packaging, Quality Testing), by Vertical, by Region, and Segment Forecasts, 2022–2030. Available online: https://www.grandviewresearch.com/industry-analysis/collaborative-robots-market (accessed on 19 November 2022).
  13. Semeraro, F.; Griffiths, A.; Cangelosi, A. Human–robot collaboration and machine learning: A systematic review of recent research. Robot. Comput. Integr. Manuf. 2022, 79, 102432. [Google Scholar] [CrossRef]
  14. Proia, S.; Carli, R.; Cavone, G.; Dotoli, M. A Literature Review on Control Techniques for Collaborative Robotics in Industrial Applications. IEEE Int. Conf. Autom. Sci. Eng. 2021, 2021, 591–596. [Google Scholar] [CrossRef]
  15. Lins, R.G.; Givigi, S.N. Cooperative Robotics and Machine Learning for Smart Manufacturing: Platform Design and Trends within the Context of Industrial Internet of Things. IEEE Access 2021, 9, 95444–95455. [Google Scholar] [CrossRef]
  16. Spezialetti, M.; Placidi, G.; Rossi, S. Emotion Recognition for Human-Robot Interaction: Recent Advances and Future Perspectives. Front. Robot. AI 2020, 7, 1–11. [Google Scholar] [CrossRef]
  17. Vicentini, F. Collaborative Robotics: A Survey. J. Mech. Des. Trans. ASME 2020, 143, 1–20. [Google Scholar] [CrossRef]
  18. Robots and Humans Can Work Together with New ISO Guidance. 2016. Available online: https://www.iso.org/news/2016/03/Ref2057.html (accessed on 5 October 2022).
  19. Robots and Robotic Devices—Safety Requirements for Industrial Robots—Part 2: Robot Systems and Integration. 2011. Available online: https://www.iso.org/obp/ui/#iso:std:iso:10218:-2:ed-1:v1:en (accessed on 5 October 2022).
  20. Pierson, H.A.; Gashler, M.S. Deep learning in robotics: A review of recent research. Adv. Robot. 2017, 31, 821–835. [Google Scholar] [CrossRef] [Green Version]
  21. Mobile Cobots. Available online: https://www.dimalog.com/mobile-cobots/ (accessed on 9 November 2022).
  22. Cohen, Y.; Shoval, S.; Faccio, M.; Minto, R. Deploying cobots in collaborative systems: Major considerations and productivity analysis. Int. J. Prod. Res. 2021, 60, 1815–1831. [Google Scholar] [CrossRef]
  23. Bi, Z.M.; Luo, M.; Miao, Z.; Zhang, B.; Zhang, W.J.; Wang, L. Safety assurance mechanisms of collaborative robotic systems in manufacturing. Robot. Comput. Integr. Manuf. 2020, 67, 102022. [Google Scholar] [CrossRef]
  24. Kashish, G.; Homayoun, N. Curriculum-Based Deep Reinforcement Learning for Adaptive Robotics: A Mini-Review. Int. J. Robot. Eng. 2021, 6, 31. [Google Scholar] [CrossRef]
  25. Knudsen, M.; Kaivo-Oja, J. Collaborative Robots: Frontiers of Current Literature. J. Intell. Syst. Theory Appl. 2020, 3, 13–20. [Google Scholar] [CrossRef]
  26. Eyam, T. Emotion-Driven Human-Cobot Interaction Based on EEG in Industrial Applications. Master’s Thesis, Tampere University, Tampere, Finland, 2019. [Google Scholar]
  27. Nurhanim, K.; Elamvazuthi, I.; Izhar, L.I.; Capi, G.; Su, S. EMG Signals Classification on Human Activity Recognition using Machine Learning Algorithm. In Proceedings of the 2021 8th NAFOSTED Conference on Information and Computer Science (NICS), Hanoi, Vietnam, 21–22 December 2021; pp. 369–373. [Google Scholar] [CrossRef]
  28. Reddy, K.V.V.; Elamvazuthi, I.; Aziz, A.A.; Paramasivam, S.; Chua, H.N.; Pranavanand, S. Heart Disease Risk Prediction Using Machine Learning Classifiers with Attribute Evaluators. Appl. Sci. 2021, 11, 8352. [Google Scholar] [CrossRef]
  29. Ganesan, T.; Elamvazuthi, I.; Vasant, P. Solving Engineering Optimization Problems with the Karush-Kuhn-Tucker Hopfield Neural Networks. Int. Rev. Mech. Eng. 2011, 5, 1333–1339. [Google Scholar]
  30. Ali, Z.; Elamvazuthi, I.; Alsulaiman, M.; Muhammad, G. Detection of Voice Pathology using Fractal Dimension in a Multiresolution Analysis of Normal and Disordered Speech Signals. J. Med. Syst. 2016, 40, 20. [Google Scholar] [CrossRef]
  31. Sathyan, A.; Cohen, K.; Ma, O. Comparison Between Genetic Fuzzy Methodology and Q-Learning for Collaborative Control Design. Int. J. Artif. Intell. Appl. 2019, 10, 1–15. [Google Scholar] [CrossRef]
  32. Vasant, P.; Ganesan, T.; Elamvazuthi, I.; Webb, J.F. Interactive fuzzy programming for the production planning: The case of textile firm. Int. Rev. Model. Simul. 2011, 4, 961–970. [Google Scholar]
  33. Reddy, K.V.V.; Elamvazuthi, I.; Aziz, A.A.; Paramasivam, S.; Chua, H.N.; Pranavanand, S. Prediction of Heart Disease Risk Using Machine Learning with Correlation-based Feature Selection and Optimization Techniques. In Proceedings of the 2021 7th International Conference on Signal Processing and Communication (ICSC), Noida, India, 25–27 November 2021; pp. 228–233. [Google Scholar] [CrossRef]
  34. Sharon, H.; Elamvazuthi, I.; Lu, C.; Parasuraman, S.; Natarajan, E. Development of Rheumatoid Arthritis Classification from Electronic Image Sensor Using Ensemble Method. Sensors 2019, 20, 167. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  35. Roveda, L.; Maskani, J.; Franceschi, P.; Abdi, A.; Braghin, F.; Tosatti, L.M.; Pedrocchi, N. Model-Based Reinforcement Learning Variable Impedance Control for Human-Robot Collaboration. J. Intell. Robot. Syst. Theory Appl. 2020, 100, 417–433. [Google Scholar] [CrossRef]
  36. Ali, Z.; Alsulaiman, M.; Elamvazuthi, I.; Muhammad, G.; Mesallam, T.A.; Farahat, M.; Malki, K.H. Voice pathology detection based on the modified voice contour and SVM. Biol. Inspired Cogn. Archit. 2016, 15, 10–18. [Google Scholar] [CrossRef]
  37. Ananias, E.; Gaspar, P.D. A Low-Cost Collaborative Robot for Science and Education Purposes to Foster the Industry 4.0 Implementation. Appl. Syst. Innov. 2022, 5, 72. [Google Scholar] [CrossRef]
  38. Gupta, R.; Elamvazuthi, I.; Dass, S.C.; Faye, I.; Vasant, P.; George, J.; Izza, F. Curvelet based automatic segmentation of supraspinatus tendon from ultrasound image: A focused assistive diagnostic method. Biomed. Eng. Online 2014, 13, 157. [Google Scholar] [CrossRef] [Green Version]
  39. Reddy, K.V.V.; Elamvazuthi, I.; Aziz, A.A.; Paramasivam, S.; Chua, H.N. Heart Disease Risk Prediction using Machine Learning with Principal Component Analysis. In Proceedings of the 2020 8th International Conference on Intelligent and Advanced Systems (ICIAS), Kuching, Malaysia, 13–15 July 2021; pp. 1–6. [Google Scholar]
  40. Rahim, K.N.K.A.; Elamvazuthi, I.; Izhar, L.I.; Capi, G. Classification of human daily activities using ensemble methods based on smartphone inertial sensors. Sensors 2018, 18, 4132. [Google Scholar] [CrossRef] [Green Version]
  41. Ali, Z.; Alsulaiman, M.; Muhammad, G.; Elamvazuthi, I.; Mesallam, T.A. Vocal fold disorder detection based on continuous speech by using MFCC and GMM. In Proceedings of the 2013 7th IEEE GCC Conference and Exhibition (GCC), Doha, Qatar, 17–20 November 2013; pp. 292–297. [Google Scholar] [CrossRef]
  42. Kolbinger, F.R.; Leger, S.; Carstens, M.; Rinner, F.M.; Krell, S.; Chernykh, A.; Nielen, T.P.; Bodenstedt, S.; Welsch, T.; Kirchberg, J.; et al. Artificial Intelligence for context-aware surgical guidance in complex robot-assisted oncological procedures: An exploratory feasibility study. medRxiv 2022. [Google Scholar]
  43. Reddy, K.V.V.; Elamvazuthi, I.; Aziz, A.A.; Paramasivam, S.; Chua, H.N.; Pranavanand, S. Rotation Forest Ensemble Classifier to Improve the Cardiovascular Disease Risk Prediction Accuracy. In Proceedings of the 2021 8th NAFOSTED Conference on Information and Computer Science (NICS), Hanoi, Vietnam, 21–22 December 2021; pp. 404–409. [Google Scholar] [CrossRef]
  44. Galin, R.; Meshcheryakov, R. Collaborative robots: Development of robotic perception system, safety issues, and integration of ai to imitate human behavior. Smart Innov. Syst. Technol. 2021, 187, 175–185. [Google Scholar] [CrossRef]
  45. Ibarz, J.; Tan, J.; Finn, C.; Kalakrishnan, M.; Pastor, P.; Levine, S. How to train your robot with deep reinforcement learning: Lessons we have learned. Int. J. Rob. Res. 2021, 40, 698–721. [Google Scholar] [CrossRef]
  46. Kragic, D.; Gustafson, J.; Karaoguz, H.; Jensfelt, P.; Krug, R. Interactive, collaborative robots: Challenges and opportunities. IJCAI Int. Jt. Conf. Artif. Intell. 2018, 2018, 18–25. [Google Scholar] [CrossRef] [Green Version]
  47. Bagheri, E.; De Winter, J.; Vanderborght, B. Transparent Interaction Based Learning for Human-Robot Collaboration. Front. Robot. AI 2022, 9, 1–9. [Google Scholar] [CrossRef]
  48. Amarillo, A.; Sanchez, E.; Caceres, J.; Oñativia, J. Collaborative Human–Robot Interaction Interface: Development for a Spinal Surgery Robotic Assistant. Int. J. Soc. Robot. 2021, 13, 1473–1484. [Google Scholar] [CrossRef]
  49. Nicora, M.L.; Andre, E.; Berkmans, D.; Carissoli, C.; D’Orazio, T.; Fave, A.D.; Gebhard, P.; Marani, R.; Mira, R.M.; Negri, L.; et al. A human-driven control architecture for promoting good mental health in collaborative robot scenarios. In Proceedings of the 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), Vancouver, BC, Canada, 8–12 August 2021; pp. 285–291. [Google Scholar] [CrossRef]
  50. Oliff, H.; Liu, Y.; Kumar, M.; Williams, M.; Ryan, M. Reinforcement learning for facilitating human-robot-interaction in manufacturing. J. Manuf. Syst. 2020, 56, 326–340. [Google Scholar] [CrossRef]
  51. Story, M.; Webb, P.; Fletcher, S.R.; Tang, G.; Jaksic, C.; Carberry, J. Do Speed and Proximity Affect Human-Robot Collaboration with an Industrial Robot Arm? Int. J. Soc. Robot. 2022, 14, 1087–1102. [Google Scholar] [CrossRef]
  52. Zhang, R.; Lv, Q.; Li, J.; Bao, J.; Liu, T.; Liu, S. A reinforcement learning method for human-robot collaboration in assembly tasks. Robot. Comput. Integr. Manuf. 2021, 73, 102227. [Google Scholar] [CrossRef]
  53. Silva, G.; Rekik, K.; Kanso, A.; Schnitman, L. Multi-perspective human robot interaction through an augmented video interface supported by deep learning. In Proceedings of the 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Napoli, Italy, 29 August–2 September 2022; pp. 1168–1173. [Google Scholar] [CrossRef]
  54. Buerkle, A.; Eaton, W.; Lohse, N.; Bamber, T.; Ferreira, P. EEG based arm movement intention recognition towards enhanced safety in symbiotic Human-Robot Collaboration. Robot. Comput. Integr. Manuf. 2021, 70, 102137. [Google Scholar] [CrossRef]
  55. De Winter, J.; De Beir, A.; El Makrini, I.; Van de Perre, G.; Nowé, A.; Vanderborght, B. Accelerating interactive reinforcement learning by human advice for an assembly task by a cobot. Robotics 2019, 8, 104. [Google Scholar] [CrossRef] [Green Version]
  56. Ghadirzadeh, A.; Chen, X.; Yin, W.; Yi, Z.; Bjorkman, M.; Kragic, D. Human-Centered Collaborative Robots with Deep Reinforcement Learning. IEEE Robot. Autom. Lett. 2020, 6, 566–571. [Google Scholar] [CrossRef]
  57. Akkaladevi, S.C.; Plasch, M.; Pichler, A.; Ikeda, M. Towards reinforcement based learning of an assembly process for human robot collaboration. Procedia Manuf. 2019, 38, 1491–1498. [Google Scholar] [CrossRef]
  58. Heo, Y.J.; Kim, D.; Lee, W.; Kim, H.; Park, J.; Chung, W.K. Collision detection for industrial collaborative robots: A deep learning approach. IEEE Robot. Autom. Lett. 2019, 4, 740–746. [Google Scholar] [CrossRef]
  59. Gomes, N.M.; Martins, F.N.; Lima, J.; Wörtche, H. Reinforcement Learning for Collaborative Robots Pick-and-Place Applications: A Case Study. Automation 2022, 3, 223–241. [Google Scholar] [CrossRef]
  60. Chen, X.; Wang, N.; Cheng, H.; Yang, C. Neural Learning Enhanced Variable Admittance Control for Human-Robot Collaboration. IEEE Access 2020, 8, 25727–25737. [Google Scholar] [CrossRef]
  61. Qureshi, H.; Nakamura, Y.; Yoshikawa, Y.; Ishiguro, H. Intrinsically motivated reinforcement learning for human–robot interaction in the real-world. Neural Netw. 2018, 107, 23–33. [Google Scholar] [CrossRef] [Green Version]
  62. Wang, P.; Liu, H.; Wang, L.; Gao, R.X. Deep learning-based human motion recognition for predictive context-aware human-robot collaboration. CIRP Ann. 2018, 67, 17–20. [Google Scholar] [CrossRef]
  63. Lv, Q.; Zhang, R.; Liu, T.; Zheng, P.; Jiang, Y.; Li, J.; Bao, J.; Xiao, L. A strategy transfer approach for intelligent human-robot collaborative assembly. Comput. Ind. Eng. 2022, 168, 108047. [Google Scholar] [CrossRef]
  64. Weiss, A.; Wortmeier, A.K.; Kubicek, B. Cobots in Industry 4.0: A Roadmap for Future Practice Studies on Human-Robot Collaboration. IEEE Trans. Hum.-Mach. Syst. 2021, 51, 335–345. [Google Scholar] [CrossRef]
  65. Sasagawa, A.; Fujimoto, K.; Sakaino, S.; Tsuji, T. Imitation learning based on bilateral control for human-robot cooperation. IEEE Robot. Autom. Lett. 2020, 5, 6169–6176. [Google Scholar] [CrossRef]
  66. Lu, W.; Hu, Z.; Pan, J. Human-Robot Collaboration using Variable Admittance Control and Human Intention Prediction. IEEE Int. Conf. Autom. Sci. Eng. 2020, 2020, 1116–1121. [Google Scholar] [CrossRef]
  67. Karn, A.L.; Sengan, S.; Kotecha, K.; Pustokhina, I.V.; Pustokhin, D.A.; Subramaniyaswamy, V.; Buddhi, D. ICACIA: An Intelligent Context-Aware framework for COBOT in defense industry using ontological and deep learning models. Rob. Auton. Syst. 2022, 157, 104234. [Google Scholar] [CrossRef]
  68. De Miguel Lazaro, O.; Mohammed, W.M.; Ferrer, B.R.; Bejarano, R.; Lastra, J.L.M. An approach for adapting a cobot workstation to human operator within a deep learning camera. IEEE Int. Conf. Ind. Inform. 2019, 2019, 789–794. [Google Scholar] [CrossRef]
  69. Mohammed, A.; Wang, L. Advanced human-robot collaborative assembly using electroencephalogram signals of human brains. Procedia CIRP 2020, 93, 1200–1205. [Google Scholar] [CrossRef]
  70. Wang, J.; Pradhan, M.R.; Gunasekaran, N. Machine learning-based human-robot interaction in ITS. Inf. Process. Manag. 2021, 59, 102750. [Google Scholar] [CrossRef]
  71. Aliev, K.; Antonelli, D. Proposal of a monitoring system for collaborative robots to predict outages and to assess reliability factors exploiting machine learning. Appl. Sci. 2021, 11, 1621. [Google Scholar] [CrossRef]
  72. Malik, A.; Brem, A. Digital twins for collaborative robots: A case study in human-robot interaction. Robot. Comput. Integr. Manuf. 2020, 68, 102092. [Google Scholar] [CrossRef]
  73. Walker, T.; Ahlin, K.J.; Joffe, B.P. Robotic Rehang with Machine Vision. In Proceedings of the 2021 ASABE Annual International Virtual Meeting, Virtual, 12–16 July 2021. [Google Scholar] [CrossRef]
  74. Edmonds, M.; Gao, F.; Liu, H.; Xie, X.; Qi, S.; Rothrock, B.; Zhu, Y.; Wu, Y.N.; Lu, H.; Zhu, S.-C. A tale of two explanations: Enhancing human trust by explaining robot behavior. Sci. Robot. 2019, 37, 1–14. [Google Scholar] [CrossRef]
  75. Grigore, L.S.; Priescu, I.; Joita, D.; Oncioiu, I. The Integration of Collaborative Robot Systems and Their Environmental Impacts. Processes 2020, 8, 494. [Google Scholar] [CrossRef] [Green Version]
  76. Bader, K.B.; Hendley, S.A.; Bollen, V. Assessment of Collaborative Robot (Cobot)-Assisted Histotripsy for Venous Clot Ablation. IEEE Trans. Biomed. Eng. 2020, 68, 1220–1228. [Google Scholar] [CrossRef]
  77. Eyam, A.; Mohammed, W.M.; Lastra, J.L.M. Emotion-driven analysis and control of human-robot interactions in collaborative applications. Sensors 2021, 21, 4626. [Google Scholar] [CrossRef]
  78. Yang, C.; Wu, H.; Li, Z.; He, W.; Wang, N.; Su, C.-Y. Mind Control of a Robotic Arm With Visual Fusion Technology. IEEE Trans. Ind. Inform. 2018, 14, 3822–3830. [Google Scholar] [CrossRef]
  79. Cunha, A.; Ferreira, F.; Sousa, E.; Louro, L.; Vicente, P.; Monteiro, S.; Erlhagen, W.; Bicho, E. Towards collaborative robots as intelligent co-workers in human-robot joint tasks: What to do and who does it? In Proceedings of the 52th International Symposium on Robotics, Online, 9–10 December 2020; pp. 141–148. [Google Scholar]
  80. Ivorra, E.; Ortega, M.; Alcaniz, M.; Garcia-Aracil, N. Multimodal Computer Vision Framework for Human Assistive Robotics. In Proceedings of the 2018 Workshop on Metrology for Industry 4.0 and IoT, Brescia, Italy, 16–18 April 2018; pp. 18–22. [Google Scholar] [CrossRef]
  81. Pagani, R.; Nuzzi, C.; Ghidelli, M.; Borboni, A.; Lancini, M.; Legnani, G. Cobot user frame calibration: Evaluation and comparison between positioning repeatability performances achieved by traditional and vision-based methods. Robotics 2021, 10, 45. [Google Scholar] [CrossRef]
  82. Martínez-Franco, J.C.; Álvarez-Martínez, D. Machine Vision for Collaborative Robotics Using Synthetic Data-Driven Learning. In Service Oriented, Holonic and Multi-Agent Manufacturing Systems for Industry of the Future; Springer: Cham, Switzerland, 2021; pp. 69–81. [Google Scholar]
  83. How Vision Cobots Are Reshaping Factory Automation. Available online: https://www.lanner-america.com/blog/vision-cobots-reshaping-factory-automation/ (accessed on 24 November 2022).
  84. Xu, L.; Li, G.; Song, P.; Shao, W. Vision-Based Intelligent Perceiving and Planning System of a 7-DoF Collaborative Robot. Comput. Intell. Neurosci. 2021, 2021, 5810371. [Google Scholar] [CrossRef] [PubMed]
  85. Jia, F.; Jebelli, A.; Ma, Y.; Ahmad, R. An Intelligent Manufacturing Approach Based on a Novel Deep Learning Method for Automatic Machine and Working Status Recognition. Appl. Sci. 2022, 12, 5697. [Google Scholar] [CrossRef]
  86. Xiong, R.; Zhang, S.; Gan, Z.; Qi, Z.; Liu, M.; Xu, X.; Wang, Q.; Zhang, J.; Li, F.; Chen, X. A novel 3D-vision–based collaborative robot as a scope holding system for port surgery: A technical feasibility study. Neurosurg. Focus 2022, 52, E13. [Google Scholar] [CrossRef]
  87. Comari, S.; Di Leva, R.; Carricato, M.; Badini, S.; Carapia, A.; Collepalumbo, G.; Gentili, A.; Mazzotti, C.; Staglianò, K.; Rea, D. Mobile cobots for autonomous raw-material feeding of automatic packaging machines. J. Manuf. Syst. 2022, 64, 211–224. [Google Scholar] [CrossRef]
  88. Zhou, Z.; Li, L.; Fürsterling, A.; Durocher, H.J.; Mouridsen, J.; Zhang, X. Learning-based object detection and localization for a mobile robot manipulator in SME production. Robot. Comput. Integr. Manuf. 2021, 73, 102229. [Google Scholar] [CrossRef]
  89. Zaki, M.A.; Fathy, A.M.M.; Carnevale, M.; Giberti, H. Application of Realtime Robotics platform to execute unstructured industrial tasks involving industrial robots, cobots, and human operators. Procedia Comput. Sci. 2022, 200, 1359–1367. [Google Scholar] [CrossRef]
  90. Židek, K.; Pite, J.; Balog, M.; Hošovský, A.; Hladký, V.; Lazorík, P.; Iakovets, A.; Demčák, J. Cnn training using 3d virtual models for assisted assembly with mixed reality and collaborative robots. Appl. Sci. 2021, 11, 4269. [Google Scholar] [CrossRef]
  91. Olesen, S.; Gergaly, B.B.; Ryberg, E.A.; Thomsen, M.R.; Chrysostomou, D. A collaborative robot cell for random bin-picking based on deep learning policies and a multi-gripper switching strategy. Procedia Manuf. 2020, 51, 3–10. [Google Scholar] [CrossRef]
  92. Amin, M.; Rezayati, M.; Van De Venn, H.W. A Mixed-Perception Approach for Safe Human–Robot Collaboration in Industrial Automation. Sensors 2020, 20, 6347. [Google Scholar] [CrossRef]
  93. Bejarano, R.; Ferrer, B.R.; Mohammed, W.M.; Lastra, J.L.M. Implementing a human-robot collaborative assembly workstation. IEEE Int. Conf. Ind. Inform. 2019, 2019, 557–564. [Google Scholar] [CrossRef]
Figure 1. Global collaborative robot market in 2021.
Figure 2. Relationship between AI, machine learning, and deep learning.
Figure 3. The PRISMA model diagram for the systematic review.
Figure 4. An example application of cobots in the manufacturing industry [21].
Figure 5. Robot usage in collaborative research works between 2018 and 2022.
Figure 6. Tasks performed by the robot in collaborative research works.
Table 1. Key distinctions between conventional robots and cobots [22].

Characteristics | Conventional Robots | Cobots
Role | Substituting the human employee | Aiding the human employee
Human collaboration | Coding used to specify motions, positions, and grips | Recognizes gestures and voice commands and predicts operator movements
Workstation | Robot and operator workstations are typically fenced | A shared workstation without a fence
Reprogramming | Rarely required | Frequently required
Mobility | Fast movements | Slow movements
Handling payloads | Capable of carrying large payloads | Cannot handle large payloads
Capability to work in a dynamic environment with moving objects | Restricted | Yes
Table 2. Related works with the non-collaborative workspace-type robots.

No. | Author(s) and Year | Robot Type | Sensing/Simulation Tool | Task Type | Technique | Remarks
1 | Bagheri et al. [47], 2022 | Franka robotic arm | T-GUI | Assemble toys | Interactive reinforcement learning | The experiment was carried out online and potential behaviors of the cobot across all circumstances were recorded. During the learning process using the cobot's answers, the human was not permitted to assist.
2 | Amarillo et al. [48], 2021 | Staubli TX40 | Optoforce FT sensor | Spinal surgery | Control algorithms | Safety issues were not considered. Therefore, the robot's joint accelerations and velocities have limited use.
3 | Nicora et al. [49], 2021 | Virtual robot | Azure Kinect cameras | Predicting mental health conditions | Machine learning | The experiments were carried out in simulation. No real robot was utilized to perform the collaborative tasks.
4 | Oliff et al. [50], 2020 | - | Deep learning-4-Java | Pick and place, move, scrap, and manipulate products | Deep Q-learning networks (DQN) | The model that determines the behavior of the robot was validated through simulation only. Safety issues regarding HRI were not addressed.
5 | Story et al. [51], 2022 | UR5 | Microsoft Kinect v2 vision | Assembly task | Linear mixed effects model | According to the research, there are correlations between two important robot characteristics, speed and proximity, and psychological tests that were created for many other manufacturing applications with higher levels of automation but not for collaborative work.
Table 3. Summary of the state-of-the-art research on collaborative workspace-type robots.

No. | Author(s) and Year | Robot Type | Sensing/Simulation Tool | Task Type | Technique | Remarks
1 | Zhang et al. [52], 2022 | UR5 robot | Deep image sensor | Simulated alternator assembly | Reinforcement learning | The overall completion time was influenced by several factors, including product features and process modifications. How to calculate and adjust the operating time and resource utilization during collaborative learning in real time was not investigated.
2 | Silva et al. [53], 2022 | Baxter mobile base | 2D cameras with 1280 × 720 resolution, 30 FPS | Homograph pixel mapping | Deep learning (Scaled-YOLOv4) | When the robot was moving with a significant velocity, a timing discrepancy between the robot's placement in the camera and its overlaid position led both to cover separate portions of the video frame.
3 | Buerkle et al. [54], 2021 | UR10 | Mobile EEG Epoc+ | Assembly tasks | Long short-term memory recurrent neural network | During the pre-movement period, the EEG data from multiple subjects often showed strong, consistent, and comparable patterns, such as a decrease in amplitude and a variation in frequency.
4 | Winter et al. [55], 2019 | - | GUI | Cranfield assembly task | Interactive reinforcement learning | The assembly was not carried out in real time; the participant's knowledge was represented as a consequence graph. The type of robot was not specified.
5 | Ghadirzadeh et al. [56], 2021 | ABB YuMi robot | Rokoko motion capture suit | Pick, place, and packing | Graph convolutional networks, recurrent Q-learning | Unwanted delays were reduced, but safety issues were not addressed in the work.
6 | Akkaladevi et al. [57], 2019 | UR10 with SCHUNK 2-finger parallel gripper | RGBD and 3D sensors | Assembly task | Reinforcement learning | In order to understand how things are put together, the robotic system actively suggested a series of appropriate actions based on the current situation.
7 | Jin Heo et al. [58], 2019 | Indy-7 | Force sensitive resistor | Collision detection | Deep learning (1-D CNN) | The proposed deep neural network was largely insensitive to model uncertainty and sensor noise.
8 | Gomes et al. [59], 2022 | UR3 | RGBD camera | Pick and place | Reinforcement learning (CNN) | The drawback of this model is the lengthy training process, which took several hours to complete before the setup could be used. The model has restricted flexibility, as it excluded gripper rotation and adversarial geometry.
9 | Chen et al. [60], 2020 | Robotic arm | Force sensor | Sawing a wooden piece | Neural learning | The EMG signals employed in this work were used to track the levels of muscle activation; muscle exhaustion was not taken into account. There was no discussion of safety concerns during the collaborative task.
10 | Qureshi et al. [61], 2018 | Aldebaran's Pepper | 2D camera, 3D sensor, and FSR touch sensor | Societal interaction skills (handshake, eye contact, smile) | Reinforcement learning (DNN) | The existing system performed only a few actions and had no memory. Therefore, the robot was not able to remember the actions executed by people and could not recognize them.
11 | Wang et al. [62], 2018 | - | - | Engine assembly | DCNN, AlexNet | A collaborative experiment was not discussed clearly. The type of robot and sensors used for the assembly task were not specified.
12 | Q. Lv et al. [63], 2022 | Industrial robotic arm | Intel RealSense depth camera (D435) and GUI | Lithium battery assembly | Reinforcement learning | The system required extensive coding for assembly tasks. Safety measures were not addressed clearly.
13 | Weiss et al. [64], 2021 | UR10 | - | Assembling combustion engine, polishing molds | Interactive learning | The task assigned to the robot during assembly was tightening the screws; safety precautions were not discussed.
14 | Sasagawa et al. [65], 2020 | Master and slave robots | Touch USP haptic device | Handling of objects | Long short-term memory model | The robot proved competent at carrying out tasks using the suggested technique in reaction to modifications in items and settings.
15 | Lu et al. [66], 2020 | Franka Emika robot with 7 DoF | Joint torque sensors | Handling of objects | Long short-term memory model, Q-learning | The findings of the research demonstrate that the suggested methodology performs well in predicting human intentions and that the controller obtains the least jerky trajectory with the least amount of contact force.
16 | Karn et al. [67], 2022 | Hexahedral robot | RGB camera | Defense | Long short-term memory model | The architecture is effective with regard to the length of time it takes for individuals to converse with one another.
17 | De Miguel Lazaro et al. [68], 2019 | YuMi IRB 14000 robot | AWS DeepLens camera, Apache MXNet | Identifying the human operator | Deep learning (CNN) | The developed model was not tested on the assembly process to determine the algorithm's performance level.
Table 4. Summary of the state-of-the-art research on industrial robots employing machine learning.

No. | Author(s) and Year | Robot Type | Sensing/Simulation Tool | Task Type | Workspace Type | Technique | Remarks
1 | Mohammed et al. [69], 2022 | ABB IRB 120 | RobotStudio | Assembly tasks | Collaboration | Machine learning | Outside interference was not prevented during the practice session. The changes in brain activity throughout the day were not considered.
2 | Wang et al. [70], 2022 | Industrial robot | Smart sensors and camera | Traffic monitoring | HRI | Machine learning | The researchers took into account social, technical, and economic aspects regarding safety. They did not take into account other human elements. Robot abilities were not discussed.
3 | Aliev et al. [71], 2021 | UR3 | Sensors, real-time data exchange | Predict outages and safe stops | Online monitoring (AutoML) | Machine learning | The research was not carried out in various working environments and the human factors were not reviewed clearly.
4 | Malik et al. [72], 2021 | UR-5 e-series | Tecnomatix process simulation, CAD, proximity sensor | Assembly, pick, and place tasks | Sequential | Machine learning | The physical robot performed the tasks without a worker. The collaborative tasks were explained with the help of digital twins only.
Table 5. Summary of cobot-related works without AI.

No. | Author(s) and Year | Robot Type | Sensing/Simulation Tool | Task Type | Workspace Type | Technique | Remarks
1 | Walker et al. [73], 2021 | UR5 | Realsense 435d RGB-D camera | Shackling chickens | Collaborative | Machine vision | Robot learning methods and abilities were examined without considering the implications for contemporary production plants. However, no additional human aspects were examined.
2 | Edmonds et al. [74], 2019 | Baxter robot | Tactile glove with force sensors, Generalized Earley Parser | Medicine bottle cap opening | Human explanations | - | The robot learning techniques and safety issues were not discussed.
3 | Grigore et al. [75], 2020 | Firefighting, Hexacopter, HIRRUS V1 | Electro-optical/infrared cameras | Disaster and recovery tasks | UAV-UGV collaborative | - | The operating scope of the robots in the article did not consider AI techniques.
4 | Bader et al. [76], 2021 | UR5e | Transducer, GUI | Histotripsy ablation system | Collaborative | - | The low resolution of the passive cavitation images employed in this study was a drawback. AI techniques were not addressed.
5 | Eyam et al. [77], 2021 | ABB YuMi robot | EEG Epoc+ headset | Box alignment task | Collaborative | Human profiling | The work has not focused on the internal effect caused by the stress that may produce unstable robot reactions.
6 | Yang et al. [78], 2018 | Baxter robot | Bumblebee2 camera | Object picking task | Collaborative | Least squares method | The experiments were validated with only two healthy subjects.
7 | Cunha et al. [79], 2020 | Articulated robotic arm with 7 DoF | The vision system, Rethink robot Sawyer | Pipe joining task | Collaborative | Dynamic neural fields | The conceptual framework suggested in this study was validated in a real cooperative assembly activity and is flexible enough to accommodate unforeseen events.
Table 6. Summary of the state-of-the-art research on vision systems employed in cobots.

No. | Author(s) and Year | Robot Type | Sensing Tool | Task Type | Workspace Type | Technique | Remarks
1 | Xu et al. [84], 2021 | Seven-DoF manipulator and three-finger robot hand | Two 3D RGB cameras, one depth camera, eye tracker | Hand tracking, environment perceiving, grasping, and trajectory planning | Collaborative | Convolutional neural network | The experiments demonstrate a decrease in planning time, path length, and posture error, suggesting that the planning process may be more accurate and efficient.
2 | Jia et al. [85], 2022 | DOBOT CR5 manipulator | Webcam with 1920 × 1080 pixels | Text recognition, working status recognition | Autonomous | Siamese region proposal network, convolutional recurrent neural network | Compared to broad object recognition, text detection and recognition are much more susceptible to image quality. Lettering may also appear blurry when an image is taken due to camera movement.
3 | Xiong et al. [86], 2022 | UR5 | 3D camera, Basler acA2440-20gm GigE (Basler AG) | Port surgery | Collaborative | Machine vision | Throughout the simulated port surgery, the cobot effectively served as a reliable scope-carrying system to deliver a steady and optimized surgical view.
4 | Comari et al. [87], 2022 | LBR iiwa with seven degrees of freedom | Laser pointer, monochrome 2D camera | Raw material feeding | Collaborative | Computer vision | The suggested robotic device can load raw ingredients autonomously into a tea-packaging machine while operating securely in the same space as human workers.
5 | Zhou et al. [88], 2022 | UR5e, Robotiq 2F-85 gripper | PMD 3D camera and See3Cam 2D camera | Printing and cutting of nametags, plug-in charging | Collaborative | Point-voxel region-based CNN (PV-RCNN) | Presented a broad robotic method using a mobile manipulator outfitted with cameras and an adaptable gripper for automatic nametag manufacture and plug-in charging in SMEs.
6 | Ahmed Zaki et al. [89], 2022 | RV-2FRB Mitsubishi industrial robot and Mitsubishi Assista cobot | Intel RealSense D435 3D stereo cameras | Industrial tasks | Collaborative | Computer vision | The implemented system, which was built on dynamic road-map technology, enables run-time trajectory planning for collision prevention between robots and human workers.
7 | Zidek et al. [90], 2021 | ABB YuMi | Dual 4K e-con, Cognex 7200, and MS HoloLens cameras | Assembly process | Collaborative | Deep learning (CNN) | The work introduces a CNN training approach for implementing deep learning in the assisted assembly operation.
8 | Olesen et al. [91], 2020 | UR5 manipulator, Schunk WSG 50-110 gripper | Intel RealSense D415 camera, URG-04LX-UG01 scanner | Mobile phone assembly | Collaborative | CNN, YOLOv3 network | The suggested method deals with the assembly of sample phone prototypes without engaging in actual manufacturing procedures. However, the overall success rate achieved was only 47%.
9 | Amin et al. [92], 2020 | Franka Emika robot | Two Kinect V2 cameras | Human action recognition, contact detection | Collaborative | 3D-CNN and 1D-CNN | The human action recognition system achieved an accuracy of 99.7% in an HRC environment using the 3D-CNN algorithm, and 96% accuracy in physical contact detection using the 1D-CNN.
10 | Bejarano et al. [93], 2019 | A 7-DoF dual-arm ABB YuMi robot | Cognex AE3 camera | Assembling a product box | Collaborative | Machine vision | The design, development, and validation of the assembly process and workstation are presented.
