Review

Study of Human–Robot Interactions for Assistive Robots Using Machine Learning and Sensor Fusion Technologies

Faculty of Computer Science, Electronics, and Telecommunications, AGH University of Krakow, al. Adama Mickiewicza 30, 30-059 Kraków, Poland
* Author to whom correspondence should be addressed.
Electronics 2024, 13(16), 3285; https://doi.org/10.3390/electronics13163285
Submission received: 8 July 2024 / Revised: 7 August 2024 / Accepted: 18 August 2024 / Published: 19 August 2024

Abstract

In recent decades, the integration of artificial intelligence (AI) into almost every system has greatly expanded robots' capacity for understanding, perception, learning, and action. Cooperation between AI and human beings will shape the future of AI technology. Moreover, beyond purely manual or fully automatic control, a machine or device must be able to work together with a human across multiple levels of automation and assistance. Humans and robots cooperate or interact in various ways. As robots become more capable, they can carry out more work autonomously; we therefore need to consider cooperation between humans and robots, the required software architectures, and the design of user interfaces. This paper describes the most important strategies of human–robot interaction and the relationships between several control techniques and cooperation techniques that use sensor fusion and machine learning (ML). Based on human behavior and thinking, a human–robot interaction (HRI) framework is studied and explored in this article with the aim of building attractive, safe, and efficient systems. Additionally, research on intention recognition, compliance control, and environment perception by elderly assistive robots for the optimization of HRI is investigated in this paper. Furthermore, we describe the theory of HRI, explain the different kinds of interactions and the details required from both humans and robots to perform them, and discuss the circumstance-based evaluation technique, which is the most important criterion for assistive robots.

1. Introduction

The use of artificial intelligence (AI) technologies has significantly improved human perception, awareness, behavior, action, and learning abilities [1]. The robotics community has expanded thanks to the Industry 4.0 initiative [2], allowing for more flexible interactions between robots and their surroundings. To detect their environments, draw conclusions, and carry out activities, mobile robots are equipped with actuators and integrated processors [3]; they are thus capable of navigating their environment independently. Mobile robots are intelligent devices that employ preprogramming to observe, recognize, collaborate, and execute a variety of tasks, including medical assistance, consumer service, defense and civil monitoring, factory operations, and more [4]. Human–AI interaction is essential to AI's future advancement. Beyond completely automated or manually operated devices, robots can collaborate with an operator partner with varying degrees of support and automation [5]. Human–robot interaction (HRI) is the study of the development, comprehension, and assessment of robotic systems in which humans and robots interact. The robotics community has long focused on human–robot collaboration. When humans must be actively considered, a robot's movement in a human-populated environment presents unique challenges for motion control and planning. The integration of robots into everyday activities raises a problem typical of autonomous robots: the need for human interaction and involvement in the robot's surroundings. Figure 1 illustrates the overall HRI process and the relationships between robots, humans, and the environment.
Research in HRI aims to analyze, create, and assess robotic systems intended for human usage or interaction. Before a robot can engage with people in an environment, it must have a thorough understanding of its surroundings, including the items that make it up, the other entities that are present, and the relationships that exist between them. The relationship between robots and humans must be considered at every stage of the robot's design so that robots can “coexist” with people. Hentout et al. [6] state that HRI can be divided into three distinct types: (i) human–robot collaboration, (ii) human–robot cooperation, and (iii) human–robot cohabitation, with collaboration further classified into contactless collaboration and physical collaboration. HRI can also be classified by the degree of cooperation as follows: supervised autonomy (the robot operates autonomously but under human supervision); full autonomy (the robot operates independently); teleoperation (the human controls the robot remotely); collaborative control (human and robot work together); and direct control (the human controls the robot directly through physical gestures or interfaces). The scientific community, testing facilities, technology companies, and the media have recently shown strong interest in HRI. By definition, interaction necessitates human–robot collaboration. Robot assistants are one example of close engagement with mobile robots, and physical contact can be part of this type of interaction. There are various ways in which a person and a robot can communicate, but the type of interaction that occurs is mostly determined by their proximity to one another. As a result, HRI falls into two broad categories [7].
  • Remote HRI: humans and robots are not co-located but interact while physically apart; for example, operators on Earth communicating with the Mars rovers.
  • Proximate HRI: humans and robots are co-located while interacting; for example, an assistive robot and an elderly person in the same room.
HRI is an interdisciplinary field incorporating industrial applications, computer science, communications, engineering, medical support systems, psychology, science, entertainment, and much more. Advancements in AI, ML, hardware development, and user-friendly design have transformed HRI into a quickly expanding discipline of study [8]. These developments have enhanced the capabilities, intelligence, and responsiveness of robotic assistants that meet clients' demands, eventually enhancing the standard of living for those seeking support. HRI advances the notion that robots can work alongside people in restaurants, homes, and healthcare facilities, helping senior citizens with a variety of duties as companions. Effective HRI enables robots to fulfill human demands in life and work, relieving people of hazardous and repetitive activities and enabling them to focus on more complex tasks [9]. Moreover, the global pattern of aging populations has made the demand for assistive robots imperative. Present assistive robots, though, remain far from this level and are unable to perform well in our houses and offices. Thus, building a peaceful and productive human–robot collaboration ecosystem is necessary for an advanced assistive robot.
A robot's perception is restricted by the capability of its sensors; thus, it cannot be suitable for every application. Manual control techniques are usually used in situations where the system has many unknown components, including unorganized, dynamic, and time-dependent parameters. Under this control technique, the system primarily relies on humans to interpret external data, make appropriate decisions, and produce control instructions. However, humans tend to perceive the world incompletely owing to cognitive and physical limitations, and occasionally, there can be significant mistakes and variances. To compensate for human limitations, robots must help augment human perception and provide assistance in control [10]. Thus, HRI is an important technique. When humans and robots possess complementary or opposite abilities, a human–robot collaborative controller can be implemented. Figure 2 illustrates the human–robot collaborative command framework.
The capacity to interpret HRI is essential for a robotic device to be able to act in its surroundings, communicate about them, and make assessments about them. The bilateral interaction between the human user and the robot is the primary component of HRI. Collaboration can lead to adaptive automation or tunable autonomy. Certain problems that are structured, linearized, statistically computable, or challenging for humans to solve can be handled autonomously by robots using their developed autonomous intelligence. Robots are capable of autonomously detecting their surroundings, making decisions according to relevant experience and knowledge, and generating commands to operate using a collaborative controller [11]. Human intervention is limited to specific situations or guides autonomous operation through high-level instructions. When users possess equal or different skills, human–robot collaborative command is utilized. The collaborative controller distributes the different tasks or combines human and robot commands according to a variety of criteria, including confidence and trust; a simple arbitration rule of this kind is sketched below.
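As a concrete illustration of such command blending, the following minimal Python sketch combines a human teleoperation command with a robot planner command using a single trust weight. The weighting rule, variable names, and numerical values are illustrative assumptions rather than a controller taken from the cited works.

```python
import numpy as np

def blend_commands(u_human, u_robot, trust):
    """Blend human and robot velocity commands with a trust-based weight.

    trust in [0, 1]: 1.0 defers fully to the human, 0.0 to the robot.
    This is an illustrative arbitration rule, not a method from the paper.
    """
    u_human = np.asarray(u_human, dtype=float)
    u_robot = np.asarray(u_robot, dtype=float)
    return trust * u_human + (1.0 - trust) * u_robot

# Example: the human steers left while the robot's planner prefers straight ahead.
command = blend_commands(u_human=[0.4, 0.3], u_robot=[0.5, 0.0], trust=0.7)
print(command)  # blended [linear velocity, angular velocity] command
```

In practice, the trust weight could itself be adapted online from estimates of operator confidence or task performance.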
HRI is defined in a variety of ways, from collaborative physical responsibilities [12] to cognitive functions [13]. HRI becomes more advanced and more widely applicable with the adoption of human activity recognition [14,15] and human pose estimation [16] techniques in robotics. Physical interaction for elderly assistive robots concentrates on giving the robotic devices the tools required to fulfill the diverse demands of senior citizens in daily life. Conversely, cognitive features focus more on factors that include emotional interaction, intention recognition, and human–robot trust, which affect how robots and elderly people interact. The ability of an assistive robot to assess multidimensional responses from the surroundings, plan motions, and generate task plans is crucial for adjusting to changing circumstances and selecting an appropriate policy for a given job. Nowadays, the majority of research in HRI is concentrated on elderly support robots because of the high demand in the senior citizen care sector.
Numerous assistive robotic possibilities will be realized in the future due to the rapid growth of AI and hardware advances. Although studies of HRI in the field of assistive robots have advanced significantly, several substantial research shortcomings remain. The emphasis on universal methods in modern research frequently minimizes the significance of tailored and adaptable interaction technologies that meet the demands of specific users. For robots to improve the dynamics of relations with humans, they require higher levels of psychological intelligence and awareness. Moreover, there is a dearth of multimodal interaction skills, thorough usability assessments, and multidisciplinary techniques that incorporate knowledge from other domains. Furthermore, the use of assistive robots is severely constrained by the costly, single-functional, and undeveloped state of current robotics technology [17]. Substantial real-world implementation research is required to understand practical difficulties, and greater focus must be placed on inclusion for a variety of users. Efficient feedback systems and sophisticated adaptation and learning techniques, though still lacking, are essential for enhancing HRI. Developing robotic assistants and integrating them into everyday activities depend on addressing all of the shortcomings discussed above. Driven by social demands, including those of the aging population, assistive devices can support subject training and motor function. This article provides a comprehensive overview of the recent advances in HRI, various control approaches for HRI, and different techniques for intention recognition for assistive robots. This work also offers an up-to-date description of how sensors provide direct perceptual capacities to the growing number of assistive robots that interact with people.
With the intention of giving researchers who are new to this area a head start, this article reviews the essential approaches of senior assistance robots within the wider discipline of robotics. As part of the literature review in Section 2, we begin by briefly reviewing the research on HRI from the viewpoint of assistive robots. Section 3 presents the most important robotics perception techniques for improving HRI, including sensor fusion, and illustrates intention recognition methodologies for HRI aimed at assisting elderly or physically disabled people. Section 4 describes different types of assistive robots. Section 5 discusses the challenges and future directions of research on assistive robots in human–robot cooperation, and Section 6 concludes the article.

2. Literature Survey

One interesting and difficult area of modern robotics research is assistive technology. The majority of interactions between users and robots remain restricted to teleoperation features, where the user is typically presented with footage streamed from the robotic platform along with some sort of interface for controlling the robot's trajectory. Beyond allowing a robot to be operated from a distance, HRI enables the robot to perform a variety of independent tasks. This might be anything from a robot adjusting a control arm in response to a human's precise directions to a more advanced robot system that plans and executes a route from a starting point to a final location that the user supplies. Human interaction with robots has become feasible in the last ten years due to developments in robotics (perception, reasoning, and computing) that enable partially autonomous systems. In recent years, several researchers have produced state-of-the-art work on HRI for assistive devices. A literature survey of some of the most well-known articles is given below.
Beckerle et al. [18] provide a viewpoint on the prospects and problems currently facing the domain of HRI. Control and ML techniques that assist without diverting attention are examined in this study. This study presents options for providing sensory user input that robotic technologies do not yet offer. In addition, the need for methods of functional evaluation related to real-life duties is addressed. This study addresses various factors with the goal of providing new ideas for potential robotic solutions in the years ahead.
Olatunji et al. [19] provide a conceptual framework for combining levels of transparency (LoT) and levels of automation (LOA) in assistive robots to meet the needs and demands of senior citizens. There are established benchmarks for assessing LOA and LoT architectural configurations. This study creates two unique test cases with the goal of investigating interaction design problems for robots serving this group in daily duties. One involves a mobile robot accompanying a human, and the other involves the manipulator of robots arranging a table. Assessments from user studies with senior citizens show that interaction aspects are influenced by combinations of LOA and LoT.
Casper et al. [20] describe the HRI that occurred at the World Trade Center during robot-assisted urban search and rescue operations. The World Trade Center rescue response presented an extremely difficult but valuable opportunity to study HRI during an actual, unstaged rescue. The data gathered during the response were analyzed in a subsequent evaluation, which produced 17 observations about the influence of the surroundings and circumstances on the HRI. The observations addressed when information was needed, the specifics of the urban search and rescue task, the expertise required and demonstrated by both humans and robots, and information systems for the urban search and rescue field. By providing a case analysis of HRI derived from an unstaged urban search and rescue operation, the study's outcomes have influenced the robotics community.
Asbeck et al. [21] designed an assistive technology that has no need for external force transmission and only offers a small portion of the normal physiological torques. Exosuits are a promising way to modify the human body using wearable devices that are compact, lightweight, and sensitive. According to this study, it might be possible to improve these systems to the point where they are sufficiently low-profile to wear beneath the user’s clothes. This study’s preliminary findings show that the technology might significantly preserve regular biomechanics and have a favorable impact on a wearer’s rate of metabolism. Although much of the work in this area has been on gait aid so far, there are many more potential uses, such as upper body assistance, rehabilitation, and assisting with other actions.
Yu et al. [22] proposed a gait rehabilitation robot interaction control approach. In this study, a unique modular series elastic actuator powers the robot and offers inherent compliance and reverse driving capability for secure HRI. The actuator design serves as the foundation for the control layout, which takes interaction dynamics into consideration. It is augmented with an interference investigator and primarily comprises friction compensation and compensation for human contact. While the robot is working in a force-controlled manner, it can accomplish precise force monitoring; while it is working in a human-in-charge manner, it can attain a small output impedance. The assured reliability of the closed-loop system using the suggested controller is demonstrated theoretically. The outcomes of this method are easily transferable to other assistive and rehabilitative robots that are powered by cooperative actuators.
Modares et al. [23] provide an HRI system that maximizes the performance of the human–robot system while assisting the human operator to complete a job with the least amount of effort. Inspired by research on human factors, the described control structure is made up of two control loops. First, an inner loop uses robot-specific neuro-adaptive controllers to make the unfamiliar nonlinear robot behave like a specified robot impedance model as seen by the human operator. Second, an outer-loop controller tailored to the task at hand determines the ideal parameters of the suggested robot impedance model in order to reduce tracking errors and adapt the robot's dynamics to the operator's abilities. The resulting linear quadratic regulator problem is solved using integral reinforcement learning, which eliminates the need for knowledge of the human model.
Feingold-Polak et al. [24] propose employing socially assistive robots (SARs) as a post-stroke training tool. Whether prolonged engagement with a SAR can enhance a person’s functional skills after a stroke is still unknown. This preliminary study compared the effects of three different long-term approaches to upper-limb rehabilitation for post-stroke patients: (1) training using a SAR along with regular treatment, (2) training using a computer along with regular treatment, and (3) regular care without any extra assistance. The objective was to assess variations in motor functioning and standard of life. This study shows that utilizing a SAR for continuous interaction with stroke survivors as an aspect of their rehabilitation strategy is clinically beneficial and feasible.
Lu et al. [25] present a wearable device that allows human–machine interaction to operate a robotic arm system that drives a wheelchair. People with serious motor impairments are unable to use wheelchair autonomous arm equipment due to the limits of conventional manual human–machine interaction instruments (HMIs), which negatively affects their freedom and standard of life. To solve this issue and satisfy the real-world needs of those with serious motor limitations, this study constructed a wearable multimodal HMI. According to the study, the suggested HMI provides a viable option for non-manual control in intricate assisted rehabilitation systems. The usefulness of the suggested HMI was confirmed by enlisting 10 healthy volunteers to participate in three tests: a wheelchair autonomous arm system assessment, a wheelchair control assessment, and a blink-detecting assessment. It might assist a greater number of people with motor impairments, enhancing their quality of life.
Saunders et al. [26] suggest the application of assistive robots for caring for elderly people. The robot’s customization to an elderly individual’s evolving demands over the years is an obstacle. One method is to let the elderly individual, their caretakers, or family members educate the robot on what to do in their smart home and how to respond to various activities. The method of design for the robot, smart house, and teaching and learning mechanisms is described in this study, along with the findings of an assessment of the instructional element conducted with twenty participants and an early assessment of the learning element conducted with three people involved in an HRI experiment. According to the findings, participants believed that this method of personalizing robots was simple to use, practical, and something they might benefit from to assist themselves and other individuals in everyday settings.
Katzschmann et al. [27] introduce ALVU (Array of LiDARs and Vibrotactile Units), a wearable technology that is electronic, hands-free, easy to use, and discreet. It enables people with visual impairments to identify physical constraints and barriers that are either high or low in their immediate vicinity. The method lets an individual discriminate between barriers and open areas, allowing for secure local navigation across small and large environments. The described gadget consists of two components: a vibrating strap and a sensor belt. The sensor belt, which is a collection of time-of-flight measurement devices positioned across the outer edge of a user’s waist, measures the separation between the individual and nearby objects or barriers with accuracy and dependability thanks to infrared radiation pulses. Through a series of vibrating motors wrapped across the user’s top abdomen, the sensory strap transmits distance measurements and provides haptic feedback. To provide the individual with separated vibrations, the linear vibration motor is paired to a point-loaded pretensioned actuator. The device’s wearers were able to navigate corridors, avoid obstructions, and identify staircases with ease.
Ao et al. [28] explore the possibility of improving human–robot collaboration control efficiency using an ankle power-assist wearable robot by employing an extra physiologically suitable model. To accomplish this, a linear proportional model (LPM) and an electromyography-assisted Hill-type neuromusculoskeletal model (HNM) were constructed and evaluated using maximum isometric voluntary dorsiflexion (MIVD). HNM is more precise and can consider variations in the angle of joints and muscular dynamics than the other control framework, which only predicts ankle joint torque in continuous motion. Subsequently, a group of eight fit individuals was enlisted to don the wearable ankle robot and carry out a sequence of vertical oscillating monitoring exercises. The individuals were told to perform dorsiflexion and plantarflexion positions at the ankle to follow the goal presented on the display as closely as possible, with varying amounts of support according to both of the calibrated models.
Martinez-Martin et al. [29] provide a vision framework for assistive robots that can instantly identify and locate items in common areas based on visual input. Drawing motivation from vision research, the technique estimates color, movement, and form cues, integrating them in a stochastic approach to perform precise object detection and classification. The suggested methodology has been implemented and assessed using a humanoid robot torso situated in real-life settings. To obtain further practical validation, a public object detection image library was utilized, enabling quantitative comparison with state-of-the-art approaches in real-life situations. Lastly, an assessment of the demonstration was given in relation to the number of targets in the environment and the image resolution.

3. Robotics Perceptions

Most of the jobs that were formerly performed exclusively by humans can now be completed by robots because of the quick advancement of robotics. The possible uses for robots have, therefore, increased significantly. It is reasonable to assume that human expertise and familiarity with robotics will vary significantly. The capacity for social interaction with people will continue to play a significant role since most of the latest applications require robots to operate in close proximity to humans compared to the past. In the context of HRIs, an efficient control system is required for assistive robots, which is an important aspect of allowing robots to navigate while providing assistance to elderly or impaired people. The sensory framework for assistive robots addressed here concentrates on the three-dimensional (3D) surroundings and perception of objects since these tasks are necessary to achieve efficient HRI, especially given the unorganized and uncertain nature of the surroundings [30]. Its major goal is to use robot sensors to gather environmental data, identify important characteristics from noise, and, ultimately, comprehend the environment around it. Elderly support robots are more capable of helping humans with everyday tasks such as making food, walking, and feeding if they have an accurate awareness of their surroundings. An illustration of the robotics perception system is shown in Figure 3, where sensors, environment perception, perception techniques, and decision-making steps are the most crucial parts of the system.
Future industrial settings will need a high degree of automation in order to be sufficiently flexible and adaptable to meet the ever-increasing market demand for inexpensive products delivered more quickly. Robots that are cooperative, intelligent, and able to adjust to changing and dynamic environmental conditions, including the presence of people, will play an increasingly important role in this scenario. However, a workspace shared by people and robots can hinder production and endanger humans when a robot is unaware of the human's location and purpose [31]. For HRI, perception skills are crucial for robots. Thus, robots that can operate autonomously while collaborating with humans will probably be used more often in tasks that call for a shared workspace. The next generation of smart industries, senior care facilities, and healthcare facilities will place an increasing emphasis on robotic perception. Most of the tasks will require avoiding obstacles, interacting with people, and independently finding and determining the elements that need to be transferred or handled. The three essential perceptual and sensory skills are mobility control, human–machine interfaces, and awareness of the surroundings and navigation. The most important parts of the robotics perception system, which are also useful for HRI from the perspective of assistive robots, are described below.

3.1. Sensors

Sensors are devices that can sense their surroundings, translate that information into electrical impulses or other required forms according to defined rules, and send it to other devices. The range of robotic devices with feet or wheels, hands, joints, and both lower and upper limbs has grown quickly in recent decades. These devices all need sensor and actuator signals that accurately represent the user's desired motions. Numerous sensor devices that fit into one of two classifications are in use. In the most traditional method, the movement of the robotic device is initiated by the user through a keyboard or other user interface equipment and is monitored by sensors of mechanical quantities, which usually depend on microelectromechanical system (MEMS) technologies [32]. Examples of these sensors include accelerometers, angle and position sensors, and gyroscopes. The second type, myoelectric sensing, is still in its infancy within the robotics domain. It measures electrical impulses directly related to human muscular movement and reacts according to the desired activities of the elderly person or patient [32,33]. These sensors, also known as electromyographic or EMG sensors, depend on many technologies; however, the most extensively researched ones comprise surface electrodes that detect electrical impulses on the skin and needle electrodes inserted into the muscle [33]. Figure 4 illustrates the link between robots with different sensors and various application scenarios.
Automation and AI have entered a new age as a result of the rapid growth of robotics and are further accelerated by the integration of improved sensing technologies. Robotics perception technology is a key aspect of robotics technology that has gained increased interest because of its fast improvements [34]. Notably, the area of robotics has effectively and widely deployed sensors and sensor fusion methods that are seen to be vital for improving robotics perception techniques. Therefore, a viable strategy that allows for adaptability to different tasks in novel conditions is the combination of sensors and sensor fusion technologies with robotics perception technologies. Perceiving its surroundings is essential for a robot to carry out complicated tasks. Robots use a range of sensors to identify various elements in their surroundings. Thus, better sensor fusion approaches in robotics perception systems can be an effective way to enhance the capability of assistive robots to provide assistance to elderly or impaired humans. Some of the most important sensors used by assistive robots are described as follows.

3.1.1. Infrared Sensors

Robots must have the ability to satisfy human demands and requests to remain valuable to us, and this will involve some form of interaction. Although technologies for communication between humans and robots are becoming more advanced, natural conversation is still a long way off. Thus, the infrared sensor plays a vital role in assistive robotics for natural interaction. An infrared sensor (IR sensor) is an optoelectronic device sensitive to radiation with a spectral sensitivity spanning the infrared wavelength band of 780 nm to 50 µm [35]. Nowadays, IR sensors are frequently found in motion detection systems employed in alarm systems to identify unusual activity or in buildings for switching on lights. The sensor elements detect thermal (infrared) radiation that varies with time and place as a result of human activity within a predetermined angular range. Owing to its sensitivity to variations in infrared heat caused by human motion and its robustness to environmental changes, the infrared sensor remains the preferred option for person detection. Based on [36,37,38,39], we provide an illustration of an infrared sensor-based navigation approach for blind people in this section.
Nowadays, infrared sensors are widely used to assist blind people with navigation in crowded cities. An IR sensor is mounted on the upper part of the blind individual's hand for navigation. All the equipment and software are comparable to those of a mobile robot. The warning unit receives the gathered signals and converts them into vibrations that convey information people can comprehend. Figure 5 shows the location of the notification module and sensor module on the arm of a blind person. A mathematical technique removes the component of the IR signal that oscillates in synchronization with the hand motion. This approach makes it possible for people to distinguish between the end of a building and merely its inner edge: there is no temperature difference along the interior portion of a corridor, which is crucial information for both humans and robots. Figure 6a,b depicts the appearance of the sensor system and the notification system, respectively. A simple sketch of how such range readings can be mapped to vibration feedback is given below.
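The following minimal Python sketch shows how a single range reading could be mapped to a vibration intensity for the notification module described above. The distance thresholds and the linear mapping are illustrative assumptions, not parameters of the reviewed system.

```python
def distance_to_vibration(distance_m, d_min=0.3, d_max=2.0):
    """Map an IR range reading (metres) to a vibration duty cycle in [0, 1].

    Closer obstacles produce stronger vibration; readings beyond d_max
    produce none. The thresholds are illustrative assumptions.
    """
    if distance_m >= d_max:
        return 0.0
    if distance_m <= d_min:
        return 1.0
    # Linear ramp between the near and far thresholds.
    return (d_max - distance_m) / (d_max - d_min)

# Example readings: very close, mid-range, far, and out of range.
for d in (0.2, 0.8, 1.5, 2.5):
    print(d, "m ->", round(distance_to_vibration(d), 2))
```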

3.1.2. Light Detection and Ranging (LiDAR) Sensors

LiDAR is a robust sensor system used to measure distances and create highly accurate three-dimensional (3D) models of surroundings and objects [40]. A LiDAR system begins the sensing operation by directing laser pulses at a predetermined region. Part of the light is reflected back toward the LiDAR sensor whenever these pulses encounter obstacles. LiDAR determines the distance by timing the return of each laser pulse and using the constant speed of light. When LiDAR is applied methodically over wide regions and the returns are combined to calculate distances, it creates a point cloud, an array of many points in three-dimensional space [41]. These points effectively map the 3D characteristics and geometry of the region or item. LiDAR is widely used for robot navigation. Robots use simultaneous localization and mapping (SLAM) extensively to create localized maps in real time based on odometry algorithms and perception sensor inputs. Odometry systems can use sensor data from cameras, IMUs, LiDAR, and other devices; in certain cases, combining several data sources can increase precision and the rate of convergence. Nowadays, LiDAR is an important sensor for robot navigation in human-centered environments and plays an important role in HRI for assistive purposes. The basic time-of-flight range calculation and the conversion of a planar scan to Cartesian points are sketched below.
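The core range calculation is simple: the distance is half the round-trip travel time multiplied by the speed of light, and a planar scan can then be converted to Cartesian points for mapping. The following Python sketch illustrates both steps under the assumption of an idealized, noise-free sensor.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s):
    """Range from the two-way travel time of a laser pulse: d = c * t / 2."""
    return C * round_trip_time_s / 2.0

def scan_to_points(ranges_m, angle_min, angle_increment):
    """Convert a planar LiDAR scan (ranges plus known beam angles) to 2D points."""
    points = []
    for i, r in enumerate(ranges_m):
        theta = angle_min + i * angle_increment
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# A pulse returning after ~66.7 ns corresponds to an obstacle about 10 m away.
print(round(tof_distance(66.7e-9), 2))
print(scan_to_points([1.0, 1.0, 1.0], angle_min=0.0, angle_increment=math.pi / 2))
```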

3.1.3. Inertial Measurement Unit (IMU) Sensors

The use of robots, particularly mobile robots, has grown quickly and is now widespread. IMU sensors are a collection of sensors crucial to the navigation of autonomous robots. The data gathered by an autonomous robot's IMU sensors are transformed into usable information about location, orientation, and speed. Technology has advanced to the point that IMUs are now tiny, adaptable, and reliable instead of bulky and complicated. The accelerometer, gyroscope, and magnetometer are the three primary sensors found in an IMU. There are additional sensors as well, including attitude, pressure, and temperature sensors and barometers. An IMU is made up of several components, with the primary distinctions being the technologies they incorporate, the goals of the designers, and the manufacturer's standards [42]. Together with gyroscopes, tiny accelerometers and magnetometers have also been developed, and these days they are produced as MEMS devices, making IMU sensors incredibly compact, dependable, and affordable. The growth of wearable IMU sensors offers numerous advantages for human motion assessment within the sensory framework for HRI applications, including mobility, precise measurement, and simplicity of use in unorganized environments [43]. The combination of wearable sensors and autonomous robots in sophisticated interaction settings is expected to enable future assistive robots and medical applications. A minimal example of fusing accelerometer and gyroscope readings to estimate orientation is sketched below.
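As a minimal illustration of IMU-based sensing, the following Python sketch fuses gyroscope and accelerometer readings with a complementary filter to estimate pitch. The filter constant, sampling rate, and sensor values are illustrative assumptions rather than parameters from the cited works.

```python
import math

def complementary_filter(pitch_prev, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """One update step of a complementary filter for pitch estimation.

    The gyroscope rate is integrated for short-term accuracy, while the
    accelerometer's gravity direction corrects long-term drift. alpha is an
    illustrative tuning constant.
    """
    pitch_gyro = pitch_prev + gyro_rate * dt       # integrate angular rate
    pitch_accel = math.atan2(accel_x, accel_z)     # tilt from the gravity vector
    return alpha * pitch_gyro + (1.0 - alpha) * pitch_accel

# Static example at 100 Hz: the gyro reports no rotation while the accelerometer
# sees a constant tilt of about 0.1 rad; the estimate converges to that tilt.
pitch = 0.0
for _ in range(500):
    pitch = complementary_filter(pitch, gyro_rate=0.0,
                                 accel_x=0.98, accel_z=9.76, dt=0.01)
print(round(pitch, 3))  # approximately 0.1 rad
```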

3.1.4. EMG Sensors

Electromyography (EMG) sensors are important in assistive technology. Exoskeleton robots and other assistive robots have been widely controlled by EMG signals, eliminating the need for the user to activate a separate device to operate the robot. Because EMG signals can be used to identify the movement intent of the wearer, they are now widely employed to drive assistive and rehabilitative robots. Over the past decade, many robotic arm prostheses have been created that are operated by a variety of sensing devices, including surface EMG, digital vision, and haptic sensing. However, given the signal noise and the high processing overhead, using EMG signals as an operator control signal in robotics is quite challenging [44]. A minimal sketch of extracting an EMG envelope and detecting muscle activation is given below.
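A common first step in EMG-driven control is to rectify and smooth the raw signal into an envelope and then detect muscle activation against a threshold. The following Python sketch, using synthetic data, illustrates this idea; the window length and threshold are illustrative assumptions rather than values from the cited works.

```python
import numpy as np

def emg_envelope(raw_emg, window=50):
    """Rectify the raw EMG signal and smooth it with a moving average."""
    rectified = np.abs(raw_emg - np.mean(raw_emg))   # remove offset, rectify
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="same")

def detect_activation(envelope, threshold):
    """Boolean mask marking samples where muscle activity exceeds the threshold,
    e.g. to trigger an assistive motion."""
    return envelope > threshold

# Synthetic signal: baseline noise with a burst of muscle activity in the middle.
rng = np.random.default_rng(0)
signal = rng.normal(0, 0.05, 1000)
signal[400:600] += rng.normal(0, 0.5, 200)
env = emg_envelope(signal)
print(detect_activation(env, threshold=0.2).sum(), "samples flagged as active")
```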

3.2. Environment Perception

Intelligent robots must be able to perceive their surroundings in order to carry out certain activities. This perception serves as the foundation for further control and decision-making. Target detection and recognition are examples of vision-based environment perception techniques that have advanced significantly in recent decades due to the rapid growth of deep learning (DL) and the notable enhancement of hardware capabilities. However, the majority of vision models are trained on images with consistent lighting and few notable anomalies. In the real world, robots frequently have to work in complicated, unstructured settings or in ones with poor visual quality. The demands of the work cannot be fulfilled by visual perception alone since it is not environment-adaptive. As a result, multi-sensor fusion-based environmental perception technologies are gaining popularity as a study area [45]. The complexity of data is reduced via the combination of data from different sensors and sensory modules; without this, the computational task of analyzing sensor signals becomes unmanageable [46].
The vision system for support robots addressed here concentrates on modeling the surroundings and recognizing objects since these tasks are necessary to achieve good HRI, especially in light of the uncertain and unorganized surrounding environment. Its major goal is to use robotic sensors to gather environmental data, identify important characteristics amid disturbances, and, ultimately, comprehend the surroundings [47]. The robot's environmental perception technology is constantly presented with a wide variety of complicated environmental information. Ensuring environmental perception through data fusion techniques requires two fundamental capabilities: resilience and concurrent processing capacity. Multiple sensors function differently, gather data in distinct manners, and are not all equally able to adapt to their surroundings. A multi-sensor fusion-based perception technique can overcome the inherent constraints of a single sensor, integrate the benefits of several sensors, and produce more precise and dependable data for later robot operation. Sensor fusion in robotic environmental perception units has become prevalent, with IMUs (inertial measurement units), vision cameras, LiDAR, and their combinations being the most popular choices [48]. A minimal example of fusing two noisy range measurements is sketched below.
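A simple way to see why fusion helps is inverse-variance weighting of two noisy estimates of the same quantity, such as a range to an obstacle reported by a LiDAR and by a depth camera. The numbers below are illustrative assumptions; the point is that the fused variance is never larger than the smaller input variance.

```python
def fuse_measurements(z1, var1, z2, var2):
    """Inverse-variance weighted fusion of two noisy estimates of one quantity."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)   # always <= min(var1, var2)
    return fused, fused_var

# Illustrative numbers: LiDAR reports 2.00 m (low noise), depth camera 2.20 m.
estimate, variance = fuse_measurements(2.00, 0.01, 2.20, 0.09)
print(round(estimate, 3), round(variance, 4))  # about 2.02 m with variance 0.009
```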
Several fundamental technological advances in robotics, including mechanical planning, visual perception, and robot control, are necessary to enable authentic HRI. Robots need to be able to perceive things in order to build models of both their internal and external environments [49]. The inbuilt perception models enable the robot to carry out its mission precisely, quickly, and securely. Multiple sensor classes that collect exteroceptive and proprioceptive data enable perception. The process of perception is challenging for assistive robots, particularly ones that are movable. This is because the numerous articulable components in a mobile robot provide a high degree of autonomy, which can lead to jerks and shaky movements of the attached sensors. For the sake of this part, we categorize the primary domains of robot perception into two main topics that converge: robot state estimation and navigation environment knowledge. Various efficient environmental perceptions have been carried out in [50,51,52,53] for HRI applications.

3.3. Visual Perception

The capacity to analyze and comprehend visual data from the surroundings through vision is known as visual perception. Accurately recording the 3D movements of humans and robots in the field of assistive technology is essential for adaptive and secure HRI. The eyes’ perception of light is the first stage in this sophisticated procedure, which concludes with the brain’s analysis of all visual information. Through the recognition, organization, and interpretation of forms, colors, spatial connections, motion, and other visual properties, visual perception enables robots to interpret and interact with their surroundings. Autonomous robots need visual perception as a basic skill in order to appropriately and securely navigate around humans in real life. Technological developments in DL have recently brought about some amazing advancements in vision technologies. Visual perception is an extremely desired paradigm because of its intrinsic passive and friendly qualities. It does not require the surroundings to be transformed, nor does it necessitate heavy equipment, which prospective humans engaging with the robot will have to manage [54]. Since there is not one specific vision algorithm or approach that works well for every vision job, the seamless operation of various visual systems depends on their effective and efficient integration.
Robotics perception has been transformed by ML advances, which have improved its applications in a number of fields, including medical services and assisted living. Deep neural networks (DNNs) specifically, which are DL algorithms, have been crucial in enhancing robotic devices’ visual capabilities. Combining multiple-sensor integration approaches is essential to improving vision-based perception approaches for robotic assistants in complex, cluttered, and low-visual-quality situations. The accuracy of visual data can also be enhanced by using effective image augmentation and preprocessing techniques, and more dependable functionality for real-time applications can be guaranteed by sophisticated algorithms that continuously adapt to dynamic situations. Robots can now analyze and interpret enormous volumes of sensory input thanks to these techniques, which improves their ability to understand and communicate with their surroundings [55]. Furthermore, these developments in AI have aided in the creation of more durable and adaptive robotic devices that can adjust to evolving and unpredictable surroundings. Some of the most important visual perception techniques are described as follows.

3.3.1. Object Classification

Computer vision, which analyzes visual input from cameras, is essential for improving robotics perception. Robots are capable of deriving valuable data from the images and videos that their cameras acquire using vision-based techniques. Object recognition has become a key use of vision technology for robot perception. Robots can recognize and categorize objects in their surroundings using sophisticated algorithms, which improves their ability to interact and navigate [56]. To accurately recognize objects according to their visual properties, vision-based object identification systems use methods including feature extraction, pattern recognition, and DL. Robots need this capacity to carry out activities including manipulating objects, navigating on their own, and comprehending scenes. Generally, object recognition and vision algorithms play a key role in allowing robots to see and communicate with their environment more intelligently and independently. Various efficient object classification approaches have been developed in [57,58,59] for HRI applications; a minimal classification sketch using a pretrained network is given below.
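As a minimal example of vision-based object classification, the following Python sketch runs a pretrained ResNet-18 from torchvision on a single camera frame and reports the top labels. The image path is a hypothetical placeholder, and the pretrained ImageNet model stands in for whatever task-specific classifier an assistive robot would actually use.

```python
import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

# Load a pretrained ResNet-18 together with its matching preprocessing pipeline.
weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights)
model.eval()
preprocess = weights.transforms()

def classify(image_path, top_k=3):
    """Return the top-k ImageNet labels and confidences for one camera frame."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)            # shape: (1, 3, H, W)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    scores, indices = probs.topk(top_k)
    labels = [weights.meta["categories"][i] for i in indices.tolist()]
    return list(zip(labels, scores.tolist()))

# Hypothetical frame captured by the robot's camera.
print(classify("frame_0001.jpg"))
```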

3.3.2. Intention Recognition

Improving navigational ability is a key benefit of integrating robotics perception into assistive technology. For self-driving cars and robots to navigate effectively and securely in dynamic environments, navigation is an essential feature. Using ML, robotics perception systems can improve their comprehension of their surroundings, including obstacles, signs, human intentions, and other related information. Many robots utilize sensors to prevent injury to people, but because they are unable to understand human intentions or actions, they are essentially passive recipients of information rather than communicative partners [60]. Though intention-based technologies are capable of deducing human intentions and forecasting future behavior, their increased proximity to people creates trust issues. A new type of user-focused assistance system called intention-based technology can determine the user's intention and respond accordingly, allowing users to participate in the interaction both actively and passively [61].
Improving medical operations and patient care by using robotics perception to enhance healthcare capabilities is an important development in the industry. Robotic perception allows medical professionals to perform procedures in entirely new ways. By improving accuracy and offering real-time feedback, robotic perception can reduce mistakes and increase patient safety in surgery. Significant advancements in robotic perception are also being made in the field of senior care. Elderly people frequently need assistance with everyday tasks. Perception-capable robots can help with drug administration, fall detection, and critical condition monitoring, among other activities. Assistive robots also meet the social and psychological needs of the elderly by offering companionship and emotional assistance. Various efficient intention recognition approaches, including human activity recognition [62,63], human pose estimation [64,65,66], gesture recognition [67,68], and emotion recognition [69,70,71], have been developed for HRI applications; a minimal activity recognition sketch is given below.
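The following Python sketch illustrates one common recipe for intention or activity recognition from wearable sensor data: slice the stream into overlapping windows, compute simple statistical features, and train a small classifier. The synthetic signals, window sizes, and classifier choice are illustrative assumptions, not the methods of the cited works.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(accel, window=100, step=50):
    """Slice a 1D acceleration stream into overlapping windows and compute
    simple per-window features (mean, standard deviation, peak-to-peak)."""
    feats = []
    for start in range(0, len(accel) - window + 1, step):
        w = accel[start:start + window]
        feats.append([w.mean(), w.std(), w.max() - w.min()])
    return np.array(feats)

# Synthetic wearable-IMU data: low-variance "resting" vs. oscillatory "walking".
rng = np.random.default_rng(1)
rest = rng.normal(9.81, 0.05, 5000)
walk = 9.81 + 2.0 * np.sin(np.linspace(0, 200 * np.pi, 5000)) + rng.normal(0, 0.3, 5000)

X = np.vstack([window_features(rest), window_features(walk)])
y = np.array([0] * (len(X) // 2) + [1] * (len(X) - len(X) // 2))  # 0 = rest, 1 = walk

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict(window_features(walk[:500])))  # expected: mostly class 1
```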

4. Different Types of Assistive Robots

The physical and psychological well-being of senior citizens can be significantly impacted by assistive social robots, a specific kind of assistive robot intended for social contact with humans. For children with impairments, people going through rehabilitation, elderly persons, and disabled working-age adults, assistive robots hold great promise as a support tool. Any robot, piece of technology, or device that helps the elderly and those with impairments live normally at home, at work, in educational institutions, and in their neighborhoods is referred to as an assistive robot. Although robotic assistants have much potential to help the elderly with crucial personal care, ethical considerations have made it difficult for them to be widely accepted. In the commercial sector, cobots and cooperative robots have emerged as two of the most widespread and necessary applications. Cobots and cooperative robots are described below.

4.1. Cobots

The initial purpose of collaborative robots, also known as cobots, was to help people in industrial settings. Cobots, in contrast to traditional robots, are made to work with humans instead of replacing them [72]. By contrast, traditional industrial robots cannot operate alongside humans since they need physical safeguards to ensure human safety. Cobots are robots that can collaborate directly with people without the need for traditional safeguards. By employing a range of tools, including vision systems, force and torque sensors, and ML algorithms, cobots are able to detect and adapt to the presence of humans, ensuring a secure and effective work environment. The direct human–cobot association has several advantages, including safe completion of difficult jobs, high manufacturing quality, simple and user-friendly cobot training and programming, and assistance for the elderly or disabled. Cobots and cooperative robots are both excellent options for automation since each has unique benefits and capabilities. Figure 7 shows an illustration of an elderly assistive cobot widely used in Japan.

4.2. Cooperative Robots

Manufacturing robots that are separated from their human operators by a virtual barrier are referred to as cooperative robots. The cooperative robot occupies an intermediate place between cobots and industrial robots, leveraging safety sensors (usually laser scanning devices) to combine the advantages of both collaborative and industrial robots [74]. Because cooperative robots are more complicated, they require more sophisticated programming knowledge to operate and are typically more costly as a result.

5. Future Research Trends and Challenges

Most robotic assistants are still in the prototype stage and can only partially capture the dynamics of HRI. Many robotic devices intended to serve the impaired or elderly have not yet achieved a substantial degree of adoption, partly due to expense and partly because of the wide range of demands; robotic wheelchairs are an exception, as they are the subject of in-depth study, have a sizable prospective marketplace, and have more precisely specified requirements. Scientists are working quickly to build assistive robots that can help us, inspire us, instruct us, support labor-intensive tasks with precision, and perhaps provide the ideal interactive companion, depending on the intended function [75]. To do this, though, we will need a significant leap beyond current state-of-the-art robotic solutions, which rely heavily on interpersonal interaction as the foundation of HRI. The fascinating world of teleoperated robots, constantly changing through the blending of scientific research and technological advancement, offers a wide range of possibilities. As we progress across this revolutionary terrain, we are pushing the envelope of comprehension to imagine a time when useful applications enable people to reach remote areas easily and make everyday activities easier. An efficient sensor fusion technique will be required for better HRI.
Research on HRI is a comparatively new field. The research area is wide and varied, and there are many exciting and unexplored problems in the software and hardware design procedures. The research community is now investigating the use of robotic assistants across a variety of sectors. Due to these factors, the emergence of HRI draws on the expertise of several fields, from broader social studies to those with a stronger mathematical/engineering orientation. Future research will focus further on fostering logical and natural interactions in addition to enhancing robot comprehension and reactivity to human emotions. This can be resolved by developing robots that resemble human beings and will be accomplished by integrating emotion synthesis and recognition technologies [76]. The application of multi-robot frameworks, in which several robots collaborate to complete a job, will represent another development. Revolutionary developments in assistive robots keep improving the lives of those who require assistance. The advanced robotic assistant needs to be designed to participate in a special three-way conversation between the individual in need of support, the caretaker, and the robot. Thus, more work is required to enhance the software of assistive robots. Novel innovations in programming provide the path forward for an improved, inclusive, and helpful era by showcasing not just the possibility of robotics to improve daily living but also the cooperative symbiosis between caregivers and robotic assistants. Future advancements in hardware and DL will greatly improve the versatility and reliability of vision systems for real-world applications. The ability of assistive systems to more effectively generalize from a variety of intricate inputs will be made possible by advancements in neural network topologies, including more resilient and accurate convolutional neural networks (CNNs) and transformers.
Additionally, emotion recognition and generation algorithms will be incorporated, with an emphasis on building robots that can adjust to human differences in communication preferences and styles. Moreover, integrating geometric knowledge into robots is a critical component that accelerates learning and requires accurate encoding. While learning algorithms are important, intuitive software is equally important since it ensures accessibility and usability [77]. In the near future, many modes of communication, such as hearing, speech, vision, touch, and learning, will be required for robots to engage with people efficiently. The field of human–robot cooperation has recently witnessed encouraging advancements in the creation of technologies that can improve the effectiveness of robots working alongside human collaborators. These developments have made it possible for robots to help people with jobs that could be hazardous, repetitive, or highly precise [78]. A key challenge in this setting is the robot's ability to respond in real time and reliably to a wide range of potential tasks. Techniques for adaptability and tailored learning might help achieve this. Research on ethics and other legal concerns regarding human–robot interaction is crucial to ensuring long-term, robust, and peaceful contact between the elderly and robots [79]. In conclusion, ethical issues pertaining to HRI will gain significance, necessitating the establishment of ethical protocols and rules to guarantee the responsible and secure deployment of robots in diverse fields.

6. Conclusions

This study covers a variety of topics related to assistive robotics for the elderly or disabled. We explore a number of areas related to human–robot interaction, including intention recognition, robotics perception, sensor fusion, and environment perception. Assistive robots that attend to and care for the elderly and impaired need to be integrated with advanced sensors that can sense their unpredictable and unorganized surroundings. Rather than focusing only on remotely operated robotic assistance, this study also seeks to understand the fundamental human sensation of presence at a distance. The process of collaboration between humans and robots is considered successful when the robot accurately discerns the human's purpose and effectively completes the associated task. Lastly, we present a number of future research directions and challenges in preserving steady and peaceful human–robot collaboration and communication. We believe that early-stage researchers passionate about robotics science for assistive technology can use this review as a starting point, as it covers a wide range of fundamental issues of assistive robots for the elderly or disabled.

Author Contributions

Conceptualization, R.R. and A.K.; methodology, R.R.; formal analysis, R.R.; investigation, R.R.; resources, R.R. and A.K.; data curation, R.R.; writing—original draft preparation, R.R.; writing—review and editing, R.R. and A.K.; visualization, R.R. and A.K.; supervision, A.K.; all authors have read and equally contributed to the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

This work was supported financially by the AGH University of Krakow, Poland, under subvention no. 16.16.230.434.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Raj, R.; Kos, A. Artificial Intelligence: Evolution, Developments, Applications, and Future Scope. Przegląd Elektrotechniczny 2023, 99, 1. [Google Scholar] [CrossRef]
  2. Billard, A.; Kragic, D. Trends and challenges in robot manipulation. Science 2019, 364, eaat8414. [Google Scholar] [CrossRef] [PubMed]
  3. Raj, R.; Kos, A. An Optimized Energy and Time Constraints-Based Path Planning for the Navigation of Mobile Robots Using an Intelligent Particle Swarm Optimization Technique. Appl. Sci. 2023, 13, 9667. [Google Scholar] [CrossRef]
  4. Raj, R.; Kos, A. A Comprehensive Study of Mobile Robot: History, Developments, Applications, and Future Research Perspectives. Appl. Sci. 2022, 12, 6951. [Google Scholar] [CrossRef]
  5. Chen, Y.; Luo, Y.; Yang, C.; Yerebakan, M.O.; Hao, S.; Grimaldi, N.; Li, S.; Hayes, R.; Hu, B. Human mobile robot interaction in the retail environment. Sci. Data 2022, 9, 673. [Google Scholar] [CrossRef] [PubMed]
  6. Hentout, A.; Aouache, M.; Maoudj, A.; Akli, I. Human–Robot Interaction in Industrial Collaborative Robotics: A Literature Review of the Decade 2008–2017. Adv. Robot. 2019, 33, 764–799. [Google Scholar] [CrossRef]
  7. Goodrich, M.A.; Schultz, A.C. Human-Robot Interaction: A Survey. Now Found. Trends 2008, 1, 203–275. [Google Scholar] [CrossRef]
  8. Peng, G.; Yang, C.; Chen, C.L.P. Neural Control for Human–Robot Interaction with Human Motion Intention Estimation. IEEE Trans. Ind. Electron. 2024; early access. [Google Scholar] [CrossRef]
  9. Fasola, J.; Mataric, M.J. Using Socially Assistive Human–Robot Interaction to Motivate Physical Exercise for Older Adults. Proc. IEEE 2012, 100, 2512–2526. [Google Scholar] [CrossRef]
  10. Hoc, J.-M. From human—machine interaction to human—machine cooperation. Ergonomics 2000, 43, 833–843. [Google Scholar] [CrossRef]
  11. Yang, C.; Zhu, Y.; Chen, Y. A Review of Human–Machine Cooperation in the Robotics Domain. IEEE Trans. Hum.-Mach. Syst. 2022, 52, 12–25. [Google Scholar] [CrossRef]
  12. Ajoudani, A.; Zanchettin, A.M.; Ivaldi, S.; Albu-Schäffer, A.; Kosuge, K.; Khatib, O. Progress and prospects of the human–robot collaboration. Auton. Robot. 2017, 42, 957–975. [Google Scholar] [CrossRef]
  13. Freedy, A.; DeVisser, E.; Weltman, G.; Coeyman, N. Measurement of trust in human-robot collaboration. In Proceedings of the 2007 International Symposium on Collaborative Technologies and Systems, Orlando, FL, USA, 25–25 May 2007; pp. 106–114. [Google Scholar] [CrossRef]
  14. Raj, R.; Kos, A. An improved human activity recognition technique based on convolutional neural network. Sci. Rep. 2023, 13, 22521. [Google Scholar] [CrossRef] [PubMed]
  15. Raj, R.; Kos, A. Different Techniques for Human Activity Recognition. In Proceedings of the 2022 29th International Conference on Mixed Design of Integrated Circuits and System (MIXDES), Wrocław, Poland, 23–24 June 2022; pp. 171–176. [Google Scholar] [CrossRef]
  16. Toshev, A.; Szegedy, C. DeepPose: Human Pose Estimation via Deep Neural Networks. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1653–1660. [Google Scholar] [CrossRef]
  17. Tsujimura, M.; Ide, H.; Yu, W.; Kodate, N.; Ishimaru, M.; Shimamura, A.; Suwa, S. The essential needs for home-care robots in Japan. J. Enabling Technol. 2020, 14, 201–220. [Google Scholar] [CrossRef]
  18. Beckerle, P.; Salvietti, G.; Unal, R.; Prattichizzo, D.; Rossi, S.; Castellini, C.; Hirche, S.; Endo, S.; Amor, H.B.; Ciocarlie, M.; et al. A Human–Robot Interaction Perspective on Assistive and Rehabilitation Robotics. Front. Neurorobot. 2017, 11, 24. [Google Scholar] [CrossRef]
  19. Olatunji, S.A.; Oron-Gilad, T.; Markfeld, N.; Gutman, D.; Sarne-Fleischmann, V.; Edan, Y. Levels of Automation and Transparency: Interaction Design Considerations in Assistive Robots for Older Adults. IEEE Trans. Hum.-Mach. Syst. 2021, 51, 673–683. [Google Scholar] [CrossRef]
  20. Casper, J.; Murphy, R.R. Human-robot interactions during the robot-assisted urban search and rescue response at the World Trade Center. IEEE Trans. Syst. Man Cybern. Part B 2003, 33, 367–385. [Google Scholar] [CrossRef]
  21. Asbeck, A.T.; De Rossi, S.M.M.; Galiana, I.; Ding, Y.; Walsh, C.J. Stronger, Smarter, Softer: Next-Generation Wearable Robots. IEEE Robot. Autom. Mag. 2014, 21, 22–33. [Google Scholar] [CrossRef]
  22. Yu, H.; Huang, S.; Chen, G.; Pan, Y.; Guo, Z. Human–Robot Interaction Control of Rehabilitation Robots with Series Elastic Actuators. IEEE Trans. Robot. 2015, 31, 1089–1100. [Google Scholar] [CrossRef]
  23. Modares, H.; Ranatunga, I.; Lewis, F.L.; Popa, D.O. Optimized Assistive Human–Robot Interaction Using Reinforcement Learning. IEEE Trans. Cybern. 2016, 46, 655–667. [Google Scholar] [CrossRef]
  24. Feingold-Polak, R.; Barzel, O.; Levy-Tzedek, S. Socially Assistive Robot for Stroke Rehabilitation: A Long-Term in-the-Wild Pilot Randomized Controlled Trial. IEEE Trans. Neural Syst. Rehabil. Eng. 2024, 32, 1616–1626. [Google Scholar] [CrossRef]
  25. Lu, Z.; Zhou, Y.; Hu, L.; Zhu, J.; Liu, S.; Huang, Q.; Li, Y. A Wearable Human–Machine Interactive Instrument for Controlling a Wheelchair Robotic Arm System. IEEE Trans. Instrum. Meas. 2024, 73, 4005315. [Google Scholar] [CrossRef]
  26. Saunders, J.; Syrdal, D.S.; Koay, K.L.; Burke, N.; Dautenhahn, K. “Teach Me–Show Me”—End-User Personalization of a Smart Home and Companion Robot. IEEE Trans. Hum.-Mach. Syst. 2016, 46, 27–40. [Google Scholar] [CrossRef]
  27. Katzschmann, R.K.; Araki, B.; Rus, D. Safe Local Navigation for Visually Impaired Users with a Time-of-Flight and Haptic Feedback Device. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 583–593. [Google Scholar] [CrossRef]
  28. Ao, D.; Song, R.; Gao, J. Movement Performance of Human–Robot Cooperation Control Based on EMG-Driven Hill-Type and Proportional Models for an Ankle Power-Assist Exoskeleton Robot. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 1125–1134. [Google Scholar] [CrossRef] [PubMed]
  29. Martinez-Martin, E.; del Pobil, A.P. Object Detection and Recognition for Assistive Robots: Experimentation and Implementation. IEEE Robot. Autom. Mag. 2017, 24, 123–138. [Google Scholar] [CrossRef]
  30. Liu, X.; Huang, C.; Zhu, H.; Wang, Z.; Li, J.; Cangelosi, A. State-of-the-Art Elderly Service Robot: Environmental Perception, Compliance Control, Intention Recognition, and Research Challenges. IEEE Syst. Man Cybern. Mag. 2024, 10, 2–16. [Google Scholar] [CrossRef]
  31. Bonci, A.; Cheng, P.D.C.; Indri, M.; Nabissi, G.; Sibona, F. Human-Robot Perception in Industrial Environments: A Survey. Sensors 2021, 21, 1571. [Google Scholar] [CrossRef]
  32. Bogue, R. Sensors for robotic perception. Part one: Human interaction and intentions. Ind. Robot 2015, 42, 386–391. [Google Scholar] [CrossRef]
  33. Kutílek, P.; Hýbl, J.; Mareš, J.; Socha, V.; Smrčka, P. A myoelectric prosthetic arm controlled by a sensor-actuator loop. Acta Polytech. 2014, 54, 197–204. [Google Scholar] [CrossRef]
  34. Luo, J.; Zhou, X.; Zeng, C.; Jiang, Y.; Qi, W.; Xiang, K.; Pang, M.; Tang, B. Robotics Perception and Control: Key Technologies and Applications. Micromachines 2024, 15, 531. [Google Scholar] [CrossRef] [PubMed]
  35. Infrared Sensor—IR Sensor. Infratec. Available online: https://www.infratec.eu/sensor-division/service-support/glossary/infrared-sensor/ (accessed on 6 July 2024).
  36. Marzec, P.; Kos, A. Indoor Precise Infrared Navigation. In Proceedings of the 2020 27th International Conference on Mixed Design of Integrated Circuits and System (MIXDES), Lodz, Poland, 25–27 June 2020; pp. 249–254. [Google Scholar] [CrossRef]
  37. Marzec, P.; Kos, A. Low Energy Precise Navigation System for the Blind with Infrared Sensors. In Proceedings of the 2019 MIXDES—26th International Conference “Mixed Design of Integrated Circuits and Systems”, Rzeszow, Poland, 27–29 June 2019; pp. 394–397. [Google Scholar] [CrossRef]
  38. Papagianopoulos, I.; De Mey, G.; Kos, A.; Wiecek, B.; Chatziathasiou, V. Obstacle Detection in Infrared Navigation for Blind People and Mobile Robots. Sensors 2023, 23, 7198. [Google Scholar] [CrossRef]
  39. Marzec, P.; Kos, A. Thermal navigation for blind people. Bull. Pol. Acad. Sci. Tech. Sci. 2021, 69, e136038. [Google Scholar] [CrossRef]
  40. Roriz, R.; Cabral, J.; Gomes, T. Automotive LiDAR Technology: A Survey. IEEE Trans. Intell. Transp. Syst. 2022, 23, 6282–6297. [Google Scholar] [CrossRef]
  41. Lee, D.; Jung, M.; Yang, W.; Kim, A. LiDAR odometry survey: Recent advancements and remaining challenges. Intell. Serv. Robot. 2024, 17, 95–118. [Google Scholar] [CrossRef]
  42. Samatas, G.G.; Pachidis, T.P. Inertial Measurement Units (IMUs) in Mobile Robots over the Last Five Years: A Review. Designs 2022, 6, 17. [Google Scholar] [CrossRef]
  43. Cifuentes, C.A.; Frizera, A.; Carelli, R.; Bastos, T. Human–Robot Interaction Based on Wearable IMU Sensor and Laser Range Finder. Robot. Auton. Syst. 2014, 62, 1425–1439. [Google Scholar] [CrossRef]
  44. Gopal, P.; Gesta, A.; Mohebbi, A. A Systematic Study on Electromyography-Based Hand Gesture Recognition for Assistive Robots Using Deep Learning and Machine Learning Models. Sensors 2022, 22, 3650. [Google Scholar] [CrossRef] [PubMed]
  45. Wu, J.; Gao, J.; Yi, J.; Liu, P.; Xu, C. Environment Perception Technology for Intelligent Robots in Complex Environments: A Review. In Proceedings of the 2022 7th International Conference on Communication, Image and Signal Processing (CCISP), Chengdu, China, 18–20 November 2022; pp. 479–485. [Google Scholar] [CrossRef]
  46. Wolpert, D.; Ghahramani, Z. Computational principles of movement neuroscience. Nat. Neurosci. 2000, 3, 1212–1217. [Google Scholar] [CrossRef]
  47. Shahian Jahromi, B.; Tulabandhula, T.; Cetin, S. Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles. Sensors 2019, 19, 4357. [Google Scholar] [CrossRef]
  48. Xu, X.; Zhang, L.; Yang, J.; Cao, C.; Wang, W.; Ran, Y.; Tan, Z.; Luo, M. A Review of Multi-Sensor Fusion SLAM Systems Based on 3D LIDAR. Remote Sens. 2022, 14, 2835. [Google Scholar] [CrossRef]
  49. Raj, R.; Kos, A. Discussion on Different Controllers Used for the Navigation of Mobile Robot. Int. J. Electron. Telecommun. 2024, 70, 229–239. [Google Scholar] [CrossRef]
  50. James, S.; Ma, Z.; Arrojo, D.R.; Davison, A.J. RLBench: The Robot Learning Benchmark & Learning Environment. IEEE Robot. Autom. Lett. 2020, 5, 3019–3026. [Google Scholar] [CrossRef]
  51. Tang, B.; Jiang, C.; He, H.; Guo, Y. Human Mobility Modeling for Robot-Assisted Evacuation in Complex Indoor Environments. IEEE Trans. Hum.-Mach. Syst. 2016, 46, 694–707. [Google Scholar] [CrossRef]
  52. Papadakis, P.; Spalanzani, A.; Laugier, C. Social mapping of human-populated environments by implicit function learning. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 1701–1706. [Google Scholar] [CrossRef]
  53. Thorsten, H.; Marc-André, F.; Aly, K.; Ayoub, A.-H.; Laslo, D. Semantic-Aware Environment Perception for Mobile Human-Robot Interaction. In Proceedings of the 2021 12th International Symposium on Image and Signal Processing and Analysis (ISPA), Zagreb, Croatia, 13–15 September 2021; pp. 200–203. [Google Scholar] [CrossRef]
  54. Medioni, G.; François, A.R.J.; Siddiqui, M.; Kim, K.; Yoon, H. Robust Real-Time Vision for a Personal Service Robot. Comput. Vis. Image Underst. 2007, 108, 196–203. [Google Scholar] [CrossRef]
  55. Rybczak, M.; Popowniak, N.; Lazarowska, A. A Survey of Machine Learning Approaches for Mobile Robot Control. Robotics 2024, 13, 12. [Google Scholar] [CrossRef]
  56. Robotics Perception. Ally Robotics. Available online: https://allyrobotics.com/robotics-perception (accessed on 4 July 2024).
  57. Wang, W.; Li, R.; Diekel, Z.M.; Chen, Y.; Zhang, Z.; Jia, Y. Controlling Object Hand-Over in Human–Robot Collaboration Via Natural Wearable Sensing. IEEE Trans. Hum.-Mach. Syst. 2019, 49, 59–71. [Google Scholar] [CrossRef]
  58. Turkoglu, M.O.; Ter Haar, F.B.; van der Stap, N. Incremental Learning-Based Adaptive Object Recognition for Mobile Robots. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 6263–6268. [Google Scholar] [CrossRef]
  59. Madan, C.E.; Kucukyilmaz, A.; Sezgin, T.M.; Basdogan, C. Recognition of Haptic Interaction Patterns in Dyadic Joint Object Manipulation. IEEE Trans. Haptics 2015, 8, 54–66. [Google Scholar] [CrossRef]
  60. Zhang, Y.; Doyle, T. Integrating Intention-Based Systems in Human-Robot Interaction: A Scoping Review of Sensors, Algorithms, and Trust. Front. Robot. AI 2023, 10, 1233328. [Google Scholar] [CrossRef]
  61. Wendemuth, A.; Böck, R.; Nürnberger, A.; Al-Hamadi, A.; Brechmann, A.; Ohl, F.W. Intention-Based Anticipatory Interactive Systems. In Proceedings of the 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Miyazaki, Japan, 7–10 October 2018; pp. 2583–2588. [Google Scholar] [CrossRef]
  62. Ji, S.; Xu, W.; Yang, M.; Yu, K. 3D Convolutional Neural Networks for Human Action Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 221–231. [Google Scholar] [CrossRef] [PubMed]
  63. Lara, O.D.; Labrador, M.A. A Survey on Human Activity Recognition using Wearable Sensors. IEEE Commun. Surv. Tutor. 2013, 15, 1192–1209. [Google Scholar] [CrossRef]
  64. Jiang, Y.; Ding, W.; Li, H.; Chi, Z. Multi-Person Pose Tracking With Sparse Key-Point Flow Estimation and Hierarchical Graph Distance Minimization. IEEE Trans. Image Process. 2024, 33, 3590–3605. [Google Scholar] [CrossRef]
  65. Zhou, X.; Jin, T.; Dai, Y.; Song, Y.; Li, K.; Song, S. MDST: 2-D Human Pose Estimation for SISO UWB Radar Based on Micro-Doppler Signature via Cascade and Parallel Swin Transformer. IEEE Sens. J. 2024, 24, 21730–21749. [Google Scholar] [CrossRef]
  66. Raj, R.; Kos, A. Learning the Dynamics of Human Patterns for Autonomous Navigation. In Proceedings of the 2024 IEEE 18th International Conference on Compatibility, Power Electronics and Power Engineering (CPE-POWERENG), Gdynia, Poland, 24–26 June 2024; pp. 1–6. [Google Scholar] [CrossRef]
  67. Hu, Q.; Azar, G.A.; Fletcher, A.; Rangan, S.; Atashzar, S.F. ViT-MDHGR: Cross-day Reliability and Agility in Dynamic Hand Gesture Prediction via HD-sEMG Signal Decoding. IEEE J. Sel. Top. Signal Process. 2024; early access. [Google Scholar] [CrossRef]
  68. Liu, Y.; Li, X.; Yang, L.; Yu, H. A Transformer-Based Gesture Prediction Model via sEMG Sensor for Human–Robot Interaction. IEEE Trans. Instrum. Meas. 2024, 73, 2510615. [Google Scholar] [CrossRef]
  69. Chen, L.; Li, M.; Su, W.; Wu, M.; Hirota, K.; Pedrycz, W. Adaptive Feature Selection-Based AdaBoost-KNN With Direct Optimization for Dynamic Emotion Recognition in Human–Robot Interaction. IEEE Trans. Emerg. Top. Comput. Intell. 2021, 5, 205–213. [Google Scholar] [CrossRef]
  70. Alonso-Martín, F.; Malfaz, M.; Sequeira, J.; Gorostiza, J.F.; Salichs, M.A. A Multimodal Emotion Detection System during Human–Robot Interaction. Sensors 2013, 13, 15549–15581. [Google Scholar] [CrossRef]
  71. Szabóová, M.; Sarnovský, M.; Maslej Krešňáková, V.; Machová, K. Emotion Analysis in Human–Robot Interaction. Electronics 2020, 9, 1761. [Google Scholar] [CrossRef]
  72. Biton, A.; Shoval, S.; Lerman, Y. The Use of Cobots for Disabled and Older Adults. IFAC-PapersOnLine 2022, 55, 96–101. [Google Scholar] [CrossRef]
  73. Yousuf, P. Ethics of Using Care Robots for Older People. Asian Scientist, 2023. Available online: https://www.asianscientist.com/2023/10/in-the-lab/ethics-of-using-care-robots-for-older-people/ (accessed on 5 August 2024).
  74. Cooperative and Collaborative Robots Are Essential in the Automation Industry—But What’s the Difference? Available online: https://sickconnect.com/sickconnect-com-collaborativecooperativerobot/ (accessed on 6 August 2024).
  75. D’Onofrio, G.; Sancarlo, D. Assistive Robots for Healthcare and Human–Robot Interaction. Sensors 2023, 23, 1883. [Google Scholar] [CrossRef] [PubMed]
  76. Su, H.; Qi, W.; Chen, J.; Yang, C.; Sandoval, J.; Laribi, M.A. Recent advancements in multimodal human–robot interaction. Front. Neurorobot. 2023, 17, 1084000. [Google Scholar] [CrossRef]
  77. Agurbash, E. The Future of Human-Robot Collaboration and Assistive Technologies. AI for Good Blog, 2024. Available online: https://aiforgood.itu.int/the-future-of-human-robot-collaboration-and-assistive-technologies/ (accessed on 7 July 2024).
  78. Safavi, F.; Olikkal, P.; Pei, D.; Kamal, S.; Meyerson, H.; Penumalee, V.; Vinjamuri, R. Emerging Frontiers in Human–Robot Interaction. J. Intell. Robot. Syst. 2024, 110, 45. [Google Scholar] [CrossRef]
  79. Sharkey, A.; Sharkey, N. Granny and the robots: Ethical issues in robot care for the elderly. Ethics Inf. Technol. 2012, 14, 27–40. [Google Scholar] [CrossRef]
Figure 1. Process of human–robot interactions.
Figure 2. Illustration of the collaborative control for human–robot cooperation.
Figure 3. Illustration of the robotics perception system.
Figure 4. Illustration of different sensors and their corresponding applications in robotics [34].
Figure 5. Illustration of arm-mounted IR sensor and data processor [39].
Figure 6. Illustration of complete IR sensor-based navigation system for the blind: (a) Description of Sensor module; (b) Description of Notice module [39].
Figure 7. Illustration of an assistive cobot [73].
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
