Article

Research on Intelligent Wheelchair Multimode Human–Computer Interaction and Assisted Driving Technology

Institute of Instrument Science and Engineering, Southeast University, Nanjing 210096, China
*
Author to whom correspondence should be addressed.
Actuators 2024, 13(6), 230; https://doi.org/10.3390/act13060230
Submission received: 6 May 2024 / Revised: 16 June 2024 / Accepted: 19 June 2024 / Published: 20 June 2024

Abstract

The traditional wheelchair focuses on "human–chair" motor-function interaction and only guarantees basic mobility for the elderly and people with disabilities. For people with visual, hearing, or physical impairments, current wheelchairs fall short in accessibility and independent travel. This paper therefore develops an intelligent wheelchair that combines multimodal human–computer interaction with autonomous navigation technology. First, multimodal human–computer interaction based on occupant gesture recognition, speech recognition, and head posture recognition is studied, and a control method that maps the three-dimensional head posture onto a two-dimensional control plane is proposed. In tests, the average accuracy of the gesture, head posture, and voice control modes of the proposed motorized wheelchair exceeds 95%. Second, LiDAR-based indoor autonomous navigation for the smart wheelchair is investigated: an environment map is constructed, the A* and DWA algorithms perform global and local path planning, and adaptive Monte Carlo localization provides real-time positioning. Experiments show that during autonomous navigation the position error of the wheelchair is within 10 cm and the heading angle error is less than 5°. The multimode human–computer interaction and assisted driving technology proposed in this study can partially compensate for the functional deficits of people with disabilities and improve the quality of life of the elderly and disabled.

1. Introduction

Since entering the 21st century, China’s population aging problem has been deepening [1,2], and the number of people with disabilities is also rising, increasing the demand for wheelchairs year by year [3]. Ground mobility robots such as robotic wheelchairs can significantly improve people’s comfort and quality of life [4,5]. The traditional manual wheelchair and the electric wheelchair with a control handle remain the mainstream; both focus on the “human–chair” motor-function interaction and only guarantee essential travel for the elderly and people with disabilities. For people with visual, auditory, tactile, or physical impairments, current wheelchairs show apparent deficiencies in enabling independent travel. In addition, the traditional wheelchair has no autonomous navigation function, so an attendant is often needed to ensure the travel safety of the occupant.
To make up for the shortcomings of traditional wheelchairs and to ensure the safety of wheelchair occupants, it is crucial to combine relevant technologies from robotics, such as SLAM (Simultaneous Localization and Mapping), environment perception, motion control, path planning, and multimodal human–computer interaction, to upgrade the traditional wheelchair. Developing an intelligent wheelchair with multimodal human–computer interaction and autonomous navigation functions is therefore of great research significance and practical value.
Intelligent wheelchair mobility can be categorized into manual, electric, state-detecting, and autonomous navigation mobility. State-detecting mobility controls the movement of the wheelchair by detecting the occupant’s electroencephalogram (EEG) signals, electromyography (EMG) signals, pupil position, gestures, or head posture [6,7,8,9]. Autonomous navigation equips the wheelchair with LiDAR, inertial sensors, vision sensors, and similar devices to perceive the surrounding environment and thereby move autonomously. Li et al. proposed a navigation method combining differential GNSS (Global Navigation Satellite System) and LiDAR SLAM, improving localization and navigation accuracy [10]. Ferracuti et al. proposed a human-in-the-loop framework for navigating a wheelchair to its destination in indoor scenarios according to the EEG signals generated by the person after a path-planning error; the EEG signal serves as an additional input for navigation, enabling real-time path modification [11]. Li et al. studied a computer vision-based wheelchair following system, which uses camera information to achieve target tracking, position prediction, and localization [12]. Wang et al. proposed a path-planning method for robotic wheelchair navigation, which combines a utility function of human comfort with the path cost to optimize paths through navigation map modeling, improving the efficiency of optimal trajectory search and ensuring navigation safety [13]. Maksud et al. designed a brain–computer interface-based smart wheelchair, which acquires attention and blinking signals through a wearable device, controls the wheelchair through virtual maps, completes destination mapping, and autonomously reaches the desired location [14].
The human–computer interaction of the intelligent wheelchair can be divided into two aspects: (1) two-way human–computer interaction between the occupant and the wheelchair, where the occupant can check the status information of the wheelchair through the display screen and control the movement of the wheelchair through gestures or voice; and (2) two-way human–computer interaction between the guardian and the wheelchair. With the help of a cloud server, 5G, and other technologies, the guardian can interact with the wheelchair remotely through the cell phone APP, check the parameters of the wheelchair, or control the wheelchair.
In terms of human–computer interaction between the occupant and the wheelchair, Xu et al. designed an intelligent wheelchair with eye-movement control, which acquires the occupant’s eye image through a camera and uses deep learning to determine the direction of eye movement, as well as to establish a motion acceleration model for the intelligent wheelchair to improve motion smoothness [15]. Cui et al. proposed an intelligent wheelchair posture adjustment method based on action intent recognition, which adjusts the wheelchair posture by investigating the relationship between the force changes on the contact surface between the human body and the wheelchair and the action intent [16]. Wanluk proposed the concept of an eye-tracking intelligent wheelchair, which can control the movement of the wheelchair through the eyes and remotely control some electrical devices [17]. Aktar developed an intelligent wheelchair based on a speech recognition system, where the user can use voice to control the wheelchair’s movement and speed and use infrared sensors to ensure that the wheelchair is within a safe distance from obstacles [18]. Dey proposed an intelligent wheelchair system for head posture navigation, which integrates modules such as acceleration, ultrasound, and photoresistor, and the system can realize the wheelchair movement in five directions according to different head postures [19]. Welihinda et al. proposed a hybrid control system to operate a powered wheelchair using a combination of EEG and EMG, and developed an EEG-based user attention detection system and an EMG-based navigation system [20]. Regarding human–computer interaction between the guardian and the wheelchair, Lu et al. used Internet of Things (IoT) technology to monitor the data of the position of the electric wheelchair, the tire wear, the battery power level, etc., and the guardian can know the user’s operation status [21]. Li et al. designed a mobile terminal APP based on virtual reality and IoT technology, sent commands to a remote server through the app, and completed rehabilitation training with the help of virtual reality technology [22]. Cui et al. developed an intelligent wheelchair with multimodal sensing and control, which utilizes LiDAR and temperature and humidity sensors to sense environmental information and achieve wheelchair control with the help of sensors such as gestures and handles [23].
In this paper, the human–computer interaction and assisted driving technology of intelligent wheelchairs are studied, and the following work is mainly accomplished:
(1) In this paper, a novel intelligent wheelchair system is developed, which innovatively integrates multimodal human–computer interaction technology and autonomous navigation technology. A multi-modal intelligent human–computer interaction framework including occupant–wheelchair and guardian–wheelchair, as well as a “human-in-the-loop” intelligent wheelchair autonomous navigation framework are proposed.
(2) A novel multi-modal human–computer interaction framework based on the principle of functional substitution is proposed to solve the problem of missing human functions in the “human-in-the-loop” system. In terms of occupant–wheelchair interaction, quick control is realized by installing a handle on the traditional wheelchair; different gestures and voice commands are recognized by gesture and voice recognition sensors, so that occupants with hand disabilities can control the wheelchair conveniently. The occupant’s head gesture is recognized and mapped onto the two-dimensional plane to control the wheelchair movement; and in terms of guardian-wheelchair interaction, remote human–computer interaction is realized by means of a cloud server and a cell phone APP.
(3) A “human-in-the-loop” autonomous navigation framework for smart wheelchairs is proposed. The technology innovatively integrates speech technology and indoor navigation technology. By integrating sensor modules such as LiDAR and inertial measurement unit (IMU), the system realizes accurate mapping of the indoor environment and real-time sensing of movement status. Based on the environment map and path planning algorithm, the system outputs precise speed commands. In addition, combined with the embedded processor and voice recognition technology, the wheelchair is able to realize accurate autonomous control and fixed-point navigation based on voice commands, providing a more intelligent and flexible travel solution.
The remainder of the paper is organized as follows. Section 2 gives the general design of the intelligent wheelchair. Section 3 investigates the human–computer interaction techniques of the wheelchair, including wheelchair–occupant and wheelchair–guardian interaction. Section 4 explores the indoor navigation techniques of the intelligent wheelchair. Section 5 describes the wheelchair experiments. Section 6 concludes the paper.

2. System Overview

2.1. System Overall Structure

Aiming at the shortcomings of traditional wheelchairs in environment perception, autonomous navigation, and human–computer interaction, this study installs 3D LiDAR and attitude sensors on the wheelchair to realize the multimode human–computer interaction and indoor autonomous navigation of the wheelchair. This study adopts a layered development approach to develop the functions of the intelligent wheelchair: the physical layer, communication transmission layer, motion control layer, human–computer interaction layer, autonomous navigation layer and perception layer from bottom to top, as shown in Figure 1.
The role of each layer is as follows.
(1) Physical layer: It includes the computing unit and the intelligent wheelchair itself. The computing unit adopts an industrial computer as the central controller of the system; it runs the ROS operating system and communicates downward with the MCU development board through a serial port to obtain the status of the wheelchair and send control instructions. The MCU development board accepts instructions from the central controller, drives the four motors of the intelligent wheelchair, and collects the wheelchair odometry and other sensor information, which are transmitted upward to the central controller. The structure of the intelligent wheelchair includes the body itself, the motor drivers, and the motors. The kinematic model of the intelligent wheelchair is established on a two-wheel differential drive.
(2) Communication transmission layer: The sensor modules, the intelligent wheelchair controller, and the cloud platform all use communication protocols to send and receive data: USART (Universal Synchronous/Asynchronous Receiver Transmitter) is used for data communication between the main controller and the MCU, RS485 for communication between the MCU and the motor drivers, and I2C (Inter-Integrated Circuit) for communication between the MCU and the sensors.
(3) Human–computer interaction layer: With the help of gestures, voice, and other sensors, the interaction between the occupant and the wheelchair is realized; the guardian can check the status of the wheelchair and its control through the APP.
(4) Autonomous navigation layer: Based on the positioning information and obstacle information collected in the sensing layer, the mobile wheel in the physical layer is driven to realize autonomous movement. After completing the map construction, based on the LiDAR point cloud information and path-planning algorithm, it realizes autonomous movement and obstacle avoidance in the known environment.
(5) Multi-mode sensing layer: With the help of MPU6050 and other modules, it senses the movement information in the navigation process of the intelligent wheelchair, including odometer information and positioning information.

2.2. Intelligent Wheelchair

The intelligent wheelchair is equipped with several sensors and motor actuators, including a gesture recognition module, LiDAR, a microphone array, and an IMU, as well as the motor drivers, an STM32 microcontroller, and an industrial computer. The LiDAR is RoboSense’s 16-line RS-Helios-16P. The actuation part of the system is the wheelchair’s hub motors, and controlling the motor drivers enables precise speed control of the hub motors. The wheelchair adopts two-wheel differential control; the hardware connections and some hardware functions are shown in Figure 2. The coordinate relationship between each sensor and the rotation center of the intelligent wheelchair is defined in Table 1.
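As a concrete note on the two-wheel differential drive, the following sketch converts a commanded body velocity (v, ω) into left and right wheel speeds and back. The 0.56 m wheel track is inferred from the ±0.28 m wheel offsets in Table 1; the wheel radius is an assumed illustrative value, not a specification from the paper.

```python
# Differential-drive kinematics sketch. The 0.56 m track is inferred from the
# +/-0.28 m wheel positions in Table 1; the 0.15 m wheel radius is an assumption.
TRACK = 0.56         # distance between the two drive wheels (m)
WHEEL_RADIUS = 0.15  # hub-motor wheel radius (m), illustrative value

def body_to_wheels(v, omega):
    """Inverse kinematics: body speed v (m/s) and yaw rate omega (rad/s) -> wheel angular speeds (rad/s)."""
    v_left = v - omega * TRACK / 2.0
    v_right = v + omega * TRACK / 2.0
    return v_left / WHEEL_RADIUS, v_right / WHEEL_RADIUS

def wheels_to_body(w_left, w_right):
    """Forward kinematics: wheel angular speeds (rad/s) -> body speed and yaw rate."""
    v_left, v_right = w_left * WHEEL_RADIUS, w_right * WHEEL_RADIUS
    return (v_left + v_right) / 2.0, (v_right - v_left) / TRACK

# Example: move forward at 0.5 m/s while turning at 0.2 rad/s.
print(body_to_wheels(0.5, 0.2))
```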

3. Intelligent Wheelchair Multi-Mode Human–Computer Interaction Technology

In the process of using traditional wheelchairs, human–computer interaction has the following problems: (1) handle controllers are designed primarily for people without physical impairments, so users who have lost a hand or cannot grip the handle must rely on others and cannot travel autonomously; (2) during traditional wheelchair travel, the occupant’s family members cannot obtain the wheelchair’s travel information or respond promptly to emergencies involving the occupant, so remote monitoring and control of the wheelchair are needed.
The human–computer interaction of the intelligent wheelchair developed in this paper can be divided into two aspects. (1) The first is two-way interaction between the occupant and the wheelchair: the occupant can view the status information of the wheelchair through the display screen and control the movement of the wheelchair through gestures, voice, or head posture. (2) The second is two-way interaction between the guardian and the wheelchair: the occupant’s family members can remotely view the wheelchair travel information and remotely control the wheelchair with the help of a cell phone APP. Figure 3 shows the architecture of the multimodal human–computer interaction designed in this study.

3.1. Wheelchair Mobility Control Based on Gesture Recognition

Traditional wheelchairs must be pushed by hand, and electric wheelchairs are driven forward, backward, and steered with a motorized rocker. Neither is friendly to people with muscular atrophy or arm weakness, so we designed gesture recognition-based wheelchair control, which controls the movement of the wheelchair by detecting changes in the occupant’s hand gestures.
To balance recognition accuracy and cost, we chose the ATK-PAJ7620 gesture module, which supports recognizing four gesture types: forward, backward, left turn, and right turn. In operation, the chip drives infrared LEDs to emit infrared light; the raw feature data collected by the sensor array are stored in registers, the gesture recognition engine processes the raw data, and the results are stored. The recognition results are output over the I2C bus.
When using the PAJ7620 module, the sensor is driven through three steps, wake-up, initialization, and recognition, and the bank register area is read and written to obtain the gesture information. The movement direction of the wheelchair is defined according to the gesture as follows: (1) gesture forward, the wheelchair moves forward; (2) gesture left, the wheelchair turns left; (3) gesture right, the wheelchair turns right; and (4) gesture back, the wheelchair stops. When the embedded STM32 processor detects a change in gesture, it sends motor rotation direction and speed commands to the motor driver via the RS485 bus, which in turn controls the motion of the wheelchair. The wheelchair motion control command communication protocol is shown in Table 2. The function bits represent different control methods: 0x11 is touch screen control, 0x12 is APP control, 0x13 is gesture control, 0x14 is head posture control, and 0x15 is navigation control. The polarity byte represents the rotation direction of the left and right wheels: 0x01, left wheel forward, right wheel forward; 0x02, left wheel forward, right wheel reverse; 0x03, left wheel reverse, right wheel forward; and 0x04, left wheel reverse, right wheel reverse. Different control modes are distinguished by priority, while manual and automatic control are distinguished by the flag bit. Gesture and head posture control are subject to priority arbitration, with gesture control given the higher priority: when both gesture and head posture commands are received, the system executes the gesture command first.
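As an illustration of the Table 2 protocol, the following minimal sketch assembles a command frame. The one-byte speed fields, the CRC-16/MODBUS checksum variant, and the manual/automatic flag values are assumptions, since the paper does not specify them.

```python
# Minimal sketch of the Table 2 motion-control frame (assumptions: one byte per
# speed field and a CRC-16/MODBUS checksum; the paper does not specify either).

FUNCTION = {"touch": 0x11, "app": 0x12, "gesture": 0x13, "head": 0x14, "nav": 0x15}
POLARITY = {("fwd", "fwd"): 0x01, ("fwd", "rev"): 0x02,
            ("rev", "fwd"): 0x03, ("rev", "rev"): 0x04}

def crc16_modbus(data: bytes) -> int:
    """CRC-16/MODBUS over the frame body (assumed checksum variant)."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

def build_frame(mode: str, left_dir: str, right_dir: str,
                left_speed: int, right_speed: int, manual: bool = True) -> bytes:
    """Assemble header, function bit, flag bit, polarity, speeds, and CRC bytes."""
    flag = 0xFF if manual else 0x00   # flag bit distinguishes manual/automatic (assumed values)
    body = bytes([0x1F, FUNCTION[mode], flag,
                  POLARITY[(left_dir, right_dir)],
                  left_speed & 0xFF, right_speed & 0xFF])
    crc = crc16_modbus(body)
    return body + bytes([(crc >> 8) & 0xFF, crc & 0xFF])   # CRC high, CRC low

# Example: gesture control, both wheels forward at speed value 40.
print(build_frame("gesture", "fwd", "fwd", 40, 40).hex(" "))
```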

3.2. Smart Wheelchair Voice Recognition Control

For people with upper-limb disabilities, quadriplegia, or even more severe physical disabilities, achieving autonomous mobility with a traditional electric wheelchair is still a challenge. In this study, the occupant’s audio signal is captured using the KeDaXunFei M260C ring microphone array and R818 noise-reduction board. The module performs pre-processing on the captured audio, such as filtering and endpoint detection, and then extracts features from the processed signal. The extracted features are used either for training, to build a template library, or for recognition, in which case they are compared with the features in the template library and the result is output according to the degree of match. In this study, the voice control function is activated by a voice wake-up word, and the command words “forward”, “stop”, “turn left”, “turn right”, and “autonomous” are defined. The speech recognition and control process is shown in Figure 4.
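A minimal sketch of the keyword-to-command dispatch described above. The assumption that the recognition board reports one ASCII keyword per line over a serial port, the port name, and the speed values are illustrative; `send_frame` stands in for a frame builder such as the one sketched in Section 3.1.

```python
# Sketch of dispatching recognized voice keywords to wheelchair motion commands.
# Assumed (not specified in the paper): the voice board reports one ASCII
# keyword per line over a serial port (read here with pyserial).
import serial  # pyserial

# keyword -> (left_dir, right_dir, left_speed, right_speed), illustrative values
VOICE_COMMANDS = {
    "forward":    ("fwd", "fwd", 40, 40),
    "stop":       ("fwd", "fwd", 0, 0),
    "turn left":  ("rev", "fwd", 30, 30),   # left wheel reverse, right wheel forward -> turn left
    "turn right": ("fwd", "rev", 30, 30),
}

def voice_loop(send_frame, voice_port="/dev/ttyUSB0", baud=115200):
    """send_frame: callable taking (left_dir, right_dir, left_speed, right_speed)."""
    with serial.Serial(voice_port, baud, timeout=1) as voice:
        while True:
            keyword = voice.readline().decode(errors="ignore").strip().lower()
            if keyword == "autonomous":
                # hand over to the navigation stack here (hypothetical hook, not shown)
                continue
            if keyword in VOICE_COMMANDS:
                send_frame(*VOICE_COMMANDS[keyword])
```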

3.3. Smart Wheelchair Head Posture Control

In this study, a simple vision-based head motion control scheme is implemented; head posture control can help occupants with physical disabilities travel autonomously. A 3D rigid body has two types of motion relative to the camera: translation along the X, Y, and Z axes and three rotations, namely roll, pitch, and yaw. Estimating the occupant’s head pose amounts to solving for these six parameters.
Suppose a point P(U, V, W) in the world coordinate system is known, and the rotation matrix and translation vector are R and t, respectively. The position of P in the camera coordinate system (X, Y, Z) is then

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = R\begin{bmatrix} U \\ V \\ W \end{bmatrix} + t = \left[\,R \mid t\,\right]\begin{bmatrix} U \\ V \\ W \\ 1 \end{bmatrix} \qquad (1)$$
The projection from the camera coordinate system to the pixel coordinate system is given by Equation (2):

$$s\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \qquad (2)$$

where $f_x$ and $f_y$ are the focal lengths in the x- and y-axis directions, respectively, $(c_x, c_y)$ is the optical center, and $s$ is the scale factor.
The relationship between the pixel coordinate system and the world coordinate system is then given by Equation (3):

$$s\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}\left[\,R \mid t\,\right]\begin{bmatrix} U \\ V \\ W \\ 1 \end{bmatrix} \qquad (3)$$
Equation (3) is solved for R and t by DLT (Direct Linear Transform) with least squares, and the Euler angles are then extracted from the rotation matrix:

$$R = \begin{bmatrix} r_{00} & r_{01} & r_{02} \\ r_{10} & r_{11} & r_{12} \\ r_{20} & r_{21} & r_{22} \end{bmatrix} = \begin{bmatrix} \cos\beta\cos\gamma & -\cos\beta\sin\gamma & \sin\beta \\ \cos\alpha\sin\gamma + \sin\alpha\sin\beta\cos\gamma & \cos\alpha\cos\gamma - \sin\alpha\sin\beta\sin\gamma & -\sin\alpha\cos\beta \\ \sin\alpha\sin\gamma - \cos\alpha\sin\beta\cos\gamma & \sin\alpha\cos\gamma + \cos\alpha\sin\beta\sin\gamma & \cos\alpha\cos\beta \end{bmatrix}$$

$$\alpha = \operatorname{atan2}\left(-r_{12},\, r_{22}\right), \qquad \beta = \operatorname{atan2}\!\left(r_{02},\, \sqrt{r_{12}^{2} + r_{22}^{2}}\right), \qquad \gamma = \operatorname{atan2}\left(-r_{01},\, r_{00}\right) \qquad (4)$$

where α, β, and γ are the pitch, yaw, and roll angles (rotations about the camera X, Y, and Z axes), respectively.
In this study, the wheelchair motion is controlled by the yaw and pitch of the head pose. The occupant’s head yaw and pitch are mapped onto a two-dimensional plane, as shown in Figure 5, divided into five zones: forward zone, stop zone, left-turn zone, right-turn zone, and keep-current-state zone. Within the corresponding parameter ranges, the occupant’s head posture commands the wheelchair to move forward, stop, turn left, or turn right; when the head stays within the central zone, the wheelchair keeps its current state (moving forward or stopped).
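To make the head-posture pipeline concrete, the following sketch estimates pitch, yaw, and roll with OpenCV's solvePnP and maps them to the five zones of Figure 5. The six-point 3D face model, the rough camera intrinsics, and the ±15° zone thresholds are illustrative assumptions that are not given in the paper.

```python
import cv2
import numpy as np

# Generic 3D face model points (nose tip, chin, eye corners, mouth corners), in mm.
# These model coordinates and the +/-15 deg zone thresholds are illustrative assumptions.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),           # nose tip
    (0.0, -330.0, -65.0),      # chin
    (-225.0, 170.0, -135.0),   # left eye outer corner
    (225.0, 170.0, -135.0),    # right eye outer corner
    (-150.0, -150.0, -125.0),  # left mouth corner
    (150.0, -150.0, -125.0),   # right mouth corner
], dtype=np.float64)

def head_pose(image_points, frame_size):
    """Return (pitch, yaw, roll) in degrees from six 2D landmarks (shape (6, 2))."""
    h, w = frame_size
    focal = w  # rough focal-length guess; a calibrated camera matrix would be used in practice
    camera_matrix = np.array([[focal, 0, w / 2],
                              [0, focal, h / 2],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume no lens distortion
    _, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points, camera_matrix, dist_coeffs)
    R, _ = cv2.Rodrigues(rvec)
    # Euler angles extracted as in Equation (4)
    pitch = np.degrees(np.arctan2(-R[1, 2], R[2, 2]))
    yaw = np.degrees(np.arctan2(R[0, 2], np.hypot(R[1, 2], R[2, 2])))
    roll = np.degrees(np.arctan2(-R[0, 1], R[0, 0]))
    return pitch, yaw, roll

def zone(pitch, yaw, thresh=15.0):
    """Map yaw/pitch to the five control zones of Figure 5 (thresholds are assumptions)."""
    if yaw < -thresh:
        return "turn_left"
    if yaw > thresh:
        return "turn_right"
    if pitch < -thresh:
        return "forward"
    if pitch > thresh:
        return "stop"
    return "keep_current_state"
```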

3.4. Remote Control of Smart Wheelchair Based on Cloud Server

With the development of cloud computing and 5G technology, remote monitoring and control of the wheelchair are also possible. In this study, we designed a wheelchair remote video and control scheme based on an Alibaba Cloud server. A lightweight video streaming server, MJPEG-Streamer, handles the video transmission; the embedded processor uses a 5G module to transmit the data in real time to the cloud server, and the cell phone APP connects to the cloud platform to receive the video data in real time. The cloud platform is also used for remote control of the wheelchair, as follows: the user presses a control button in the cell phone APP, the APP sends the control command to the server, the cloud platform receives the command and forwards it to the intelligent wheelchair, and the embedded platform of the wheelchair parses the command and then drives the motor driver module to control the movement of the wheelchair. The wheelchair remote monitoring and control scheme is shown in Figure 6.
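The paper does not specify the wire protocol between the cloud platform and the wheelchair, so the following wheelchair-side sketch simply assumes a persistent TCP connection that delivers one JSON command per line; the endpoint, the command names, and the `send_frame` callable (for example, the frame builder sketched in Section 3.1) are all illustrative.

```python
# Minimal sketch of the wheelchair-side remote command channel. Assumptions:
# a persistent TCP connection to the cloud server delivering JSON lines such as
# {"cmd": "forward"}; the real transport and message format are not described.
import json
import socket

CLOUD_HOST, CLOUD_PORT = "cloud.example.com", 9000   # placeholder endpoint

# command name -> (left_dir, right_dir, left_speed, right_speed), illustrative values
CMD_TABLE = {
    "forward":    ("fwd", "fwd", 40, 40),
    "backward":   ("rev", "rev", 40, 40),
    "turn_left":  ("rev", "fwd", 30, 30),
    "turn_right": ("fwd", "rev", 30, 30),
    "stop":       ("fwd", "fwd", 0, 0),
}

def remote_control_loop(send_frame):
    """send_frame: callable taking (left_dir, right_dir, left_speed, right_speed)."""
    with socket.create_connection((CLOUD_HOST, CLOUD_PORT)) as sock:
        for line in sock.makefile("r"):          # one JSON command per line
            try:
                cmd = json.loads(line)["cmd"]
            except (json.JSONDecodeError, KeyError):
                continue                          # ignore malformed messages
            if cmd in CMD_TABLE:
                send_frame(*CMD_TABLE[cmd])
```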

4. Smart Wheelchair Assisted Driving Technology

Current autonomous navigation technology for smart cars and food delivery robots relies on environment maps created with LiDAR, cameras, and similar sensors. With an existing environment map, the robot can carry out path planning and navigate autonomously. The intelligent wheelchair designed in this paper adopts a LiDAR-based autonomous navigation algorithm. The navigation process includes 2D map construction, data reading (map, positioning, IMU, odometer), labeling the start and end points, and global/local path planning, ultimately outputting autopilot commands that contain the direction and speed of the wheelchair’s movement, as shown in Figure 7. This paper also designs a “human-in-the-loop” autonomous navigation framework in which the occupant can trigger precise fixed-point navigation to preset target points through voice commands; during autonomous navigation, the occupant can also dynamically adjust the target point by voice to realize flexible path planning and target switching.

4.1. Two-Dimensional Map Construction

The wheelchair relies on an existing environment map when navigating autonomously, and 3D map creation and point cloud matching place high demands on the processor, so this study adopts a 2D map construction scheme that converts the 3D point cloud collected by the LiDAR into a 2D point cloud. Specifically, the pointcloud_to_laserscan package converts the 3D point cloud into a 2D scan, and the Cartographer algorithm is used to create the 2D grid map. Cartographer builds the global map from submaps, which effectively reduces the interference of moving objects in the mapping environment.
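As an illustration of the 3D-to-2D conversion (independent of the actual pointcloud_to_laserscan package), the following sketch keeps points inside an assumed height band and bins them by bearing, retaining the closest return per angular bin; the height limits and angular resolution are illustrative assumptions.

```python
# Conceptual sketch of flattening a 3D point cloud into a 2D laser scan.
import numpy as np

def pointcloud_to_scan(points, z_min=0.1, z_max=1.0,
                       angle_res=np.radians(0.5), r_max=20.0):
    """points: (N, 3) array of x, y, z in the LiDAR frame -> array of ranges."""
    # keep only points in the height band of interest (e.g., obstacle height)
    band = points[(points[:, 2] >= z_min) & (points[:, 2] <= z_max)]
    n_bins = int(2 * np.pi / angle_res)
    ranges = np.full(n_bins, r_max)
    angles = np.arctan2(band[:, 1], band[:, 0])   # bearing of each point
    dists = np.hypot(band[:, 0], band[:, 1])      # planar distance
    bins = ((angles + np.pi) / angle_res).astype(int) % n_bins
    # keep the closest return per angular bin, as a 2D scan would
    for b, d in zip(bins, dists):
        if d < ranges[b]:
            ranges[b] = d
    return ranges
```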

4.2. Path Planning Algorithm

4.2.1. Global Path Planning Algorithm

The move_base package in ROS implements the path-planning function of the intelligent wheelchair: it performs global planning based on the user-set target position, the global map, and the localization information, and it detects obstacles and updates the global and local cost maps in real time to carry out local path planning while the wheelchair moves. This study uses an A* global path-planning algorithm and a DWA-based local path-planning algorithm on top of move_base.
The A* algorithm uses a heuristic search strategy, selecting the minimum-cost point as the next trajectory point through a cost function without traversing the whole map, which makes it faster. The A* cost function is

$$f(n) = g(n) + h(n) \qquad (5)$$

where $g(n)$ denotes the cost from the initial position to node n, and $h(n)$ denotes the predicted cost from node n to the target position.
In this paper, the Manhattan distance is used to calculate the distance between two nodes:

$$d = k\left(\left|x_1 - x_2\right| + \left|y_1 - y_2\right|\right) \qquad (6)$$

where k is the unit grid distance of the map, and $(x_1, y_1)$, $(x_2, y_2)$ are the two nodes.
The path-planning process of the A* algorithm is shown in Figure 8 and proceeds with the following steps (a code sketch follows the figure):
(1) First, create the Openlist and Closelist for storing nodes to be checked and nodes that have already been checked.
(2) Put start point A into the Openlist and initialize the Closelist to empty.
(3) Select the node with the smallest value of f(n) from the Openlist, denoted N, and move N from the Openlist to the Closelist. Expand the neighboring nodes of N (up to 8 directions) and calculate their f(n) values.
(4) If a neighboring node is already in the Openlist and the g(n) value of reaching it through N is smaller, update its parent to N and update its g(n) and f(n) values. If the neighboring node is not in the Closelist, add it to the Openlist.
(5) Repeat steps (3) and (4) until end point B is added to the Closelist, indicating that a path has been found. If the Openlist becomes empty before end point B is reached, no path can be planned.
Figure 8. A* Algorithm Flow Chart.
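For concreteness, the following is a minimal grid-based A* sketch following steps (1)-(5) above, using the Manhattan heuristic of Equation (6) on an 8-connected grid. The grid representation and step costs are illustrative assumptions, not the move_base implementation.

```python
# Minimal grid A* sketch (0 = free cell, 1 = obstacle), Manhattan heuristic as in Equation (6).
import heapq

def astar(grid, start, goal, k=1.0):
    """grid: 2D list; start/goal: (row, col) tuples. Returns the path or None."""
    def h(n):  # Manhattan heuristic, Equation (6)
        return k * (abs(n[0] - goal[0]) + abs(n[1] - goal[1]))

    open_list = [(h(start), 0.0, start)]         # (f, g, node): the Openlist
    parents, g_cost, closed = {start: None}, {start: 0.0}, set()
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1),
             (-1, -1), (-1, 1), (1, -1), (1, 1)]  # 8 neighbour directions

    while open_list:
        _, g, node = heapq.heappop(open_list)     # step (3): node with smallest f(n)
        if node in closed:
            continue
        closed.add(node)                          # move it into the Closelist
        if node == goal:                          # step (5): path found, walk back through parents
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for dr, dc in moves:
            nbr = (node[0] + dr, node[1] + dc)
            if not (0 <= nbr[0] < len(grid) and 0 <= nbr[1] < len(grid[0])):
                continue
            if grid[nbr[0]][nbr[1]] == 1 or nbr in closed:
                continue
            step = k * (1.4142 if dr and dc else 1.0)   # diagonal steps cost sqrt(2)
            new_g = g + step
            if new_g < g_cost.get(nbr, float("inf")):   # step (4): better route, update parent and costs
                g_cost[nbr], parents[nbr] = new_g, node
                heapq.heappush(open_list, (new_g + h(nbr), new_g, nbr))
    return None                                         # Openlist exhausted: no path
```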

4.2.2. Local Path-Planning Algorithm

The DWA local path-planning algorithm consists of three main steps: solving the kinematic model state, predicting states over the prediction time, and scoring the predicted trajectories. First, the DWA algorithm constrains the range of admissible angular and linear velocities according to the intelligent wheelchair’s own performance and motion state. Second, multiple sets of angular and linear velocities are sampled within this range, and the corresponding trajectories are predicted over the time Δt. Finally, the predicted trajectories are scored with an evaluation function to select the optimal combination of angular and linear velocities to control the smart wheelchair, completing the local path planning. The motion model of the two-wheel differential-drive wheelchair is shown in Figure 9.
Assuming that the wheelchair moves in a two-dimensional plane and its pose at moment t is represented as $[x_t, y_t, \theta_t]$, where $x_t, y_t$ are the position coordinates of the wheelchair and $\theta_t$ is its heading, the motion model is

$$\begin{bmatrix} x_{t+\Delta t} \\ y_{t+\Delta t} \\ \theta_{t+\Delta t} \end{bmatrix} = \begin{bmatrix} x_t \\ y_t \\ \theta_t \end{bmatrix} + \begin{bmatrix} \cos\theta_t & 0 & 0 \\ 0 & \sin\theta_t & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} v_t\,\Delta t \\ v_t\,\Delta t \\ \omega_t\,\Delta t \end{bmatrix} \qquad (7)$$

where $[x_{t+\Delta t}, y_{t+\Delta t}, \theta_{t+\Delta t}]^T$ is the pose of the wheelchair at moment $t+\Delta t$, $[x_t, y_t, \theta_t]^T$ is the pose at moment t, and the last factor is the pose increment over $\Delta t$, with $v_t$ and $\omega_t$ the linear and angular velocities.
The intelligent wheelchair is constrained by its maximum and minimum linear and angular velocities, and it is also subject to the dynamic constraints of its maximum and minimum linear and angular accelerations. Considering safe travel, the distance from the wheelchair to obstacles must also satisfy a braking-distance constraint, as shown in Equation (8):

$$\begin{aligned} V_t &= \left\{ (v,\omega) \mid v_{\min} \le v \le v_{\max},\ \omega_{\min} \le \omega \le \omega_{\max} \right\} \\ V_d &= \left\{ (v,\omega) \mid v_t - \dot{v}_b\Delta t \le v \le v_t + \dot{v}_a\Delta t,\ \omega_t - \dot{\omega}_b\Delta t \le \omega \le \omega_t + \dot{\omega}_a\Delta t \right\} \\ V_a &= \left\{ (v,\omega) \mid v \le \sqrt{2\cdot \operatorname{dist}(v,\omega)\cdot \dot{v}_b},\ \omega \le \sqrt{2\cdot \operatorname{dist}(v,\omega)\cdot \dot{\omega}_b} \right\} \end{aligned} \qquad (8)$$

where $\dot{v}_a$ and $\dot{\omega}_a$ are the maximum linear and angular accelerations, and $\dot{v}_b$ and $\dot{\omega}_b$ are the maximum decelerations.
Considering the three critical constraints, velocity, acceleration, and safe distance, multiple candidate trajectories can be obtained within the dynamic window. For each sampled pair $(v, \omega)$ in the window, the evaluation function $G(v, \omega)$ scores the simulated trajectory, and the optimal one is selected as the path for the next moment. The evaluation function is expressed as

$$G(v,\omega) = \sigma\bigl(\alpha\cdot \operatorname{heading}(v,\omega) + \beta\cdot \operatorname{dist}(v,\omega) + \gamma\cdot \operatorname{velocity}(v,\omega)\bigr) \qquad (9)$$
Obstacles in the environment are usually dynamic, which can cause abrupt jumps in the evaluation function. To address this, the terms of the evaluation function are smoothed, i.e., normalized, as follows:

$$\operatorname{heading\_normal}(i) = \frac{\operatorname{heading}(i)}{\sum_{i=1}^{n}\operatorname{heading}(i)}, \qquad \operatorname{dist\_normal}(i) = \frac{\operatorname{dist}(i)}{\sum_{i=1}^{n}\operatorname{dist}(i)}, \qquad \operatorname{velocity\_normal}(i) = \frac{\operatorname{velocity}(i)}{\sum_{i=1}^{n}\operatorname{velocity}(i)} \qquad (10)$$
where $\operatorname{heading}(v, \omega)$ is the angular deviation between the trajectory heading and the goal direction, $\operatorname{dist}(v, \omega)$ is the distance between the trajectory and the nearest obstacle within the prediction time, and $\operatorname{velocity}(v, \omega)$ is the velocity magnitude on the simulated trajectory; $\alpha$, $\beta$, and $\gamma$ are the weighting coefficients for heading, obstacle distance, and velocity, respectively, and $\sigma$ is the normalization operator.
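A compact sketch of the DWA loop under the motion model of Equation (7) and the scoring of Equations (9) and (10) follows. The velocity and acceleration limits, sample counts, and weights (α = 0.8, β = γ = 0.1) are illustrative assumptions, and obstacle clearance is computed against a plain array of obstacle points rather than the local cost map.

```python
# Minimal DWA sketch; all numeric limits are illustrative, not the wheelchair's parameters.
import numpy as np

V_MAX, W_MAX = 0.8, 1.0    # velocity limits (m/s, rad/s)
ACC_V, ACC_W = 0.5, 1.5    # acceleration limits (m/s^2, rad/s^2)
DT, PREDICT_T = 0.1, 2.0   # control step and prediction horizon (s)

def simulate(state, v, w):
    """Roll the pose [x, y, theta] forward over the prediction horizon (Equation (7))."""
    x, y, th = state
    traj = []
    for _ in range(int(PREDICT_T / DT)):
        x += v * np.cos(th) * DT
        y += v * np.sin(th) * DT
        th += w * DT
        traj.append((x, y, th))
    return np.array(traj)

def dwa(state, v0, w0, goal, obstacles):
    """Sample (v, w) in the dynamic window and return the best pair; obstacles: (M, 2) array."""
    # dynamic window: speed limits intersected with what is reachable within DT
    vs = np.linspace(max(0.0, v0 - ACC_V * DT), min(V_MAX, v0 + ACC_V * DT), 7)
    ws = np.linspace(max(-W_MAX, w0 - ACC_W * DT), min(W_MAX, w0 + ACC_W * DT), 11)
    rows = []
    for v in vs:
        for w in ws:
            traj = simulate(state, v, w)
            xe, ye, the = traj[-1]
            delta = np.arctan2(goal[1] - ye, goal[0] - xe) - the
            heading = np.pi - abs(np.arctan2(np.sin(delta), np.cos(delta)))  # larger = better aligned
            dist = min(np.hypot(obstacles[:, 0] - p[0], obstacles[:, 1] - p[1]).min()
                       for p in traj)                                        # closest obstacle clearance
            rows.append((v, w, heading, dist, v))
    c = np.array(rows)
    norm = lambda col: col / (col.sum() + 1e-9)          # normalization, Equation (10)
    score = 0.8 * norm(c[:, 2]) + 0.1 * norm(c[:, 3]) + 0.1 * norm(c[:, 4])  # weights alpha, beta, gamma (assumed)
    v_best, w_best = c[np.argmax(score)][:2]
    return v_best, w_best
```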

4.3. Adaptive Monte Carlo Localization

During navigation, the controller needs to obtain the real-time position of the wheelchair based on the laser point cloud, coordinate transformations, and map information, where the coordinate transformations relate the wheelchair, the world coordinate system, the IMU, and the odometer. The coordinate relationship of AMCL in the ROS system is shown in Figure 10. The traditional Monte Carlo localization algorithm uses a fixed number of particles and has difficulty recovering when the robot is “kidnapped” (moved unexpectedly). In this paper, we adopt the adaptive Monte Carlo localization (AMCL) algorithm, which estimates the pose of the intelligent wheelchair on the map with a particle filter. The specific steps are as follows: (1) randomly generate N particles; (2) fuse the odometer and IMU information with an Extended Kalman Filter, combine the result with the state estimate of the previous moment, and predict the pose at the next moment; (3) calculate the particle weights and update the state according to the sensor data; and (4) resample the particles according to the new weights.
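The following is a conceptual particle-filter sketch of steps (1)-(4): the particle array is an N×3 set of (x, y, θ) hypotheses that step (1) initializes randomly, and the motion-noise values and the scan-likelihood callback are simplified assumptions rather than the ROS AMCL implementation.

```python
# Conceptual particle-filter localization sketch (not the ROS AMCL implementation).
import numpy as np

def predict(particles, dx, dy, dtheta, noise=(0.02, 0.02, 0.01)):
    """Step (2): propagate each particle with the fused odometry/IMU increment plus noise."""
    n = len(particles)
    particles[:, 0] += dx + np.random.normal(0, noise[0], n)
    particles[:, 1] += dy + np.random.normal(0, noise[1], n)
    particles[:, 2] += dtheta + np.random.normal(0, noise[2], n)
    return particles

def update_weights(particles, scan, likelihood_fn):
    """Step (3): weight each particle by how well the laser scan matches the map."""
    weights = np.array([likelihood_fn(p, scan) for p in particles])
    weights += 1e-12                        # avoid an all-zero weight vector
    return weights / weights.sum()

def resample(particles, weights):
    """Step (4): draw particles in proportion to their weights (systematic resampling)."""
    n = len(particles)
    positions = (np.arange(n) + np.random.uniform()) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx].copy()

def estimate_pose(particles, weights):
    """Weighted mean pose; the heading is averaged on the unit circle."""
    x, y = np.average(particles[:, :2], axis=0, weights=weights)
    th = np.arctan2(np.average(np.sin(particles[:, 2]), weights=weights),
                    np.average(np.cos(particles[:, 2]), weights=weights))
    return np.array([x, y, th])
```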

5. Experimental Result

5.1. Experiments in Gesture and Voice Control for Intelligent Wheelchairs

This study conducts a series of experiments to verify the accuracy of the control modalities, in particular the recognition and command-execution ability of the three control modes. The human–computer interaction control modes are verified as shown in Figure 11a, where Figure 11b shows the wheelchair advancing, Figure 11c shows the wheelchair turning left, and Figure 11d shows the wheelchair turning right. To better document the experiments, an external camera was used for recording. Ten experimenters tested the system proposed in this paper: each occupant sat in the wheelchair and controlled it by hand gestures, speech, and head posture to move forward, turn left, turn right, and stop, repeating each action 50 times. The accuracy of command recognition and execution was recorded, and the mean accuracies are shown in Table 3. The embedded processor recognizes the signal changes detected by the different sensors and then commands the motor driver to drive the two motion wheels of the wheelchair, realizing motion control.
The experimental results show that gesture and voice control have high accuracy; in contrast, head posture control exhibits a certain amount of error, mainly caused by image processing and head posture recognition. Its accuracy can be improved by operating in well-lit areas and by increasing the threshold of the keep-current-state zone, which is sufficient to satisfy the basic control requirements.

5.2. Intelligent Wheelchair Remote Control Experiment

The cell phone APP and the wheelchair act as clients: the cell phone sends control commands to the Alibaba Cloud server and receives video data from the wheelchair side, realizing two-way data interaction. Figure 12 shows the cell phone APP interface, which includes a video display and a motion control module. When a motion control button is pressed, the wheelchair executes the corresponding motion; when the remote video button is pressed, the video display module shows the received remote video stream. The latency of remote video transmission and remote command control is about 150 ms.

5.3. Smart Wheelchair Indoor Navigation Experiment

To verify the autonomous navigation performance of the intelligent wheelchair in indoor environments, this study takes region A as an example and carries out autonomous navigation experiments covering map building, path planning, localization, and obstacle avoidance. After the ROS program starts, the AMCL localization algorithm determines the initial position of the wheelchair based on the point cloud information, and the user gives the initial orientation and the target position of the wheelchair on the map; the target position includes the endpoint coordinates and the direction of travel. In practice, the target position is set in one of two ways: the occupant can drag the target arrow on the map shown on the touch screen connected to the industrial computer to specify the target point manually, or the occupant or guardian can preset target points and the occupant triggers fixed-point navigation through the touch screen buttons or voice commands. After path planning is completed, the industrial computer sends the target velocities of the X, Y, and Z axes through the serial port to the microcontroller, and the microcontroller issues the wheelchair control commands according to the inverse solution of the wheelchair kinematic model. While moving, the wheelchair may encounter static and dynamic obstacles; Figure 13 demonstrates the autonomous obstacle avoidance process of the intelligent wheelchair, and the experimental results show that the system avoids obstacles reliably. The actual stopping position of the autonomous navigation is within 10 cm of the given end position, and the wheelchair’s orientation at the stop deviates from the given target orientation by less than 5°. Figure 13a shows the 2D map of region A; Figure 13b shows the path planning, where the red line is the global path and the green line is the local path; Figure 13c is a photograph of the obstacle avoidance process; and Figure 13d is the path planning during obstacle avoidance.

6. Conclusions

To improve the control performance of the traditional wheelchair so that it can be controlled by different types of occupants and can travel autonomously indoors while the guardian remotely views and controls it, we have studied multi-mode human–computer interaction and assisted driving technology, explored human–computer interaction combining multiple control modes, and explored the possibility of indoor autonomous navigation for the intelligent wheelchair. The main results are as follows:
(1) Multi-mode human–computer interaction: We design and implement a multi-mode human–computer interaction scheme that realizes gesture control, voice control, and head posture control of the wheelchair; the three control modes adapt to different user groups and realize the wheelchair’s forward, left-turn, right-turn, and stop functions. The guardian can also view the video data and control the wheelchair remotely with the help of an APP. The delay of the remote control is about 150 ms, and the accuracy of gesture, voice, and head posture control is over 95%.
(2) Autonomous navigation of the smart wheelchair: This paper researches and realizes the indoor navigation technology of the smart wheelchair, including constructing a 2D environment map, reading the input data, marking the start and end points, and performing global and local path planning. Users mark the endpoint on the map, and the intelligent wheelchair navigates autonomously to the target point, which greatly facilitates wheelchair users’ travel. The endpoint position error of autonomous navigation is less than 10 cm.
The intelligent wheelchair described in this paper has the advantages of a high degree of intelligence, multiple operation modes, and high safety performance. However, our experiments revealed the following shortcomings: (1) many sensors are installed on the wheelchair, the wiring is cluttered, and most of the electrical components are not waterproof, which needs to be addressed in the future; (2) when the occupant controls the wheelchair manually, additional obstacle-detection sensors are still needed to further enhance travel safety. In future work, we will consider allowing the autonomous navigation target points to be set from the guardian’s APP.

Author Contributions

J.C. managed the project. Y.S. designed the program and created the first draft. Y.W. and S.Y. designed the project and revised the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by China’s National Key R&D Program (2020YFC2007401): Multimodal intelligent sensing, human–machine interaction, and active safety technology.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The materials and equipment were supported by Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences and Changzhou Zhongjin Medical Co., Ltd.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. An, M.; Yu, C. The current situation and suggestions of population aging in China. J. Econ. Res. 2018, 10, 54–58+66. (In Chinese)
2. Zhou, Y. How to face the challenges of population aging. People’s Forum 2018, 3, 94–95. (In Chinese)
3. Wu, F.; Tang, B. Research on the impact of population aging on the development of China’s service industry. China Popul. Sci. 2018, 2, 103–115+128. (In Chinese)
4. Li, L.; Liu, Y.-H.; Jiang, T.; Wang, K.; Fang, M. Adaptive trajectory tracking of nonholonomic mobile robots using vision-based position and velocity estimation. IEEE Trans. Cybern. 2018, 48, 571–582.
5. Wang, H.; Sun, Y.; Liu, M. Self-supervised drivable area and road anomaly segmentation using rgb-d data for robotic wheelchairs. IEEE Robot. Autom. Lett. 2019, 4, 4386–4393.
6. Machangpa, J.W.; Chingtham, T.S. Head Gesture Controlled Wheelchair for Quadriplegic Patients. Procedia Comput. Sci. 2018, 132, 342–351.
7. Luo, W.P.; Cao, J.T.; Ishikawa, K.; Ju, D. A Human-machine Control System Based on Intelligent Recognition of Eye Movements and Its Application in Wheelchair Driving. Multimodal Technol. Interact. 2021, 5, 50.
8. Fereidouni, S.; Hassani, M.S.; Talebi, A.; Rezaie, A.H. A Novel Design and Implementation of Wheelchair Navigation System Using Leap Motion Sensor. Disabil. Rehabil. Assist. Technol. 2022, 17, 442–448.
9. Abdulghani, M.M.; Al-Aubidy, K.M.; Ali, M.M.; Hamarsheh, Q.J. Wheelchair Neuro Fuzzy Control and Tracking System Based on Voice Recognition. Sensors 2020, 20, 2872.
10. Li, N.; Guan, L.; Gao, Y. A seamless indoor and outdoor low-cost integrated navigation system based on LIDAR/GPS/INS. In Proceedings of the 2020 IEEE 92nd Vehicular Technology Conference (VTC2020-Fall), Victoria, BC, Canada, 18 November–16 December 2020; pp. 1–6.
11. Ferracuti, F.; Freddi, A.; Iarlori, S.; Longhi, S.; Monteriù, A.; Porcaro, C. Augmenting robot intelligence via EEG signals to avoid trajectory planning mistakes of a smart wheelchair. J. Ambient. Intell. Humaniz. Comput. 2023, 14, 223–235.
12. Li, Y.; Tang, D.; Zhou, Y.; Dai, Q. Design of wheelchair following system based on computer vision. J. Comput. Eng. Appl. 2021, 57, 163–172.
13. Wang, C.; Xia, M. Stable Autonomous Robotic Wheelchair Navigation in the Environment With Slope Way. IEEE Trans. Veh. Technol. 2020, 69, 10759–10771.
14. Maksud, A.; Chowdhury, R.I.; Chowdhury, T.T.; Fattah, S.A.; Shahanaz, C.; Chowdhury, S.S. Low-cost eeg based electric wheelchair with advanced control features. In Proceedings of the TENCON 2017—2017 IEEE Region 10 Conference, Penang, Malaysia, 5–8 November 2017; pp. 2648–2653.
15. Xu, J.; Huang, Z.; Liu, L.; Li, X.; Wei, K. Eye-Gaze Controlled Wheelchair Based on Deep Learning. Sensors 2023, 23, 6239.
16. Cui, J.; Huang, Z.; Li, X.; Cui, L.; Shang, Y.; Tong, L. Research on Intelligent Wheelchair Attitude-Based Adjustment Method Based on Action Intention Recognition. Micromachines 2023, 14, 1265.
17. Wanluk, N.; Visitsattapongse, S.; Juhong, A.; Pintavirooj, C. Smart wheelchair based on eye tracking. In Proceedings of the 2016 9th Biomedical Engineering International Conference (BMEiCON), Laung Prabang, Laos, 7–9 December 2016; pp. 1–4.
18. Aktar, N.; Jaharr, I.; Lala, B. Voice recognition based intelligent wheelchair and GPS tracking system. In Proceedings of the 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE), Cox’s Bazar, Bangladesh, 7–9 February 2019; pp. 1–6.
19. Dey, P.; Hasan, M.M.; Mostofa, S.; Rana, A.I. Smart wheelchair integrating head gesture navigation. In Proceedings of the 2019 International Conference on Robotics, Electrical and Signal Processing Techniques (ICREST), Dhaka, Bangladesh, 10–12 January 2019; pp. 329–334.
20. Welihinda, D.V.D.S.; Gunarathne, L.K.P.; Herath, H.M.K.K.M.B.; Yasakethu, S.L.P.; Madusanka, N.; Lee, B.I. EEG and EMG-based human-machine interface for navigation of mobility-related assistive wheelchair (MRA-W). Heliyon 2024, 10, e27777.
21. Lu, C.-Y.; Tseng, C.-L.; Horng, W.-Y.; Chiu, Y.-S.; Tai, C.-C.; Su, T.-J. Applying Internet of Things to Data Monitoring of Powered Wheelchairs. Sens. Mater. 2021, 33, 1869–1881.
22. Li, W.; Yu, H.; Wang, M. A variety of human-computer interactions of smart wheelchair. In Proceedings of the 12th International Convention on Rehabilitation Engineering and Assistive Technology, Shanghai, China, 14–16 July 2018; pp. 246–248.
23. Cui, J.; Cui, L.; Huang, Z.; Li, X.; Han, F. IoT Wheelchair Control System Based on Multi-Mode Sensing and Human-Machine Interaction. Micromachines 2022, 13, 1108.
Figure 1. Intelligent wheelchair system architecture design.
Figure 2. Intelligent wheelchairs and sensor and actuator positions.
Figure 3. Smart wheelchair HCI architecture design.
Figure 4. Speech Recognition Process Control Schematic.
Figure 5. Schematic of 2D mapping for head attitude control.
Figure 6. Intelligent Wheelchair Remote Control Technology Architecture.
Figure 7. Smart Wheelchair Indoor Navigation Technology Architecture.
Figure 9. Smart wheelchair two-wheel differential kinematics modeling.
Figure 10. AMCL coordinate transformation relationship.
Figure 11. Smart Wheelchair Multi-Mode Human–Computer Interaction Experiments.
Figure 12. Smart Wheelchair remote interaction experiment.
Figure 13. Intelligent Wheelchair Autonomous Navigation Experiment.
Table 1. Wheelchair center of rotation and the positional relationship between each hardware.

               x/m      y/m      z/m     Yaw/°   Pitch/°   Roll/°
IMU            0.04     0        0       0       0         0
LiDAR          −0.12    0        1.20    0       0         0
Camera         0.70     0.28     0.80    0       0         0
Left wheel     0        0.28     0       0       0         0
Right wheel    0        −0.28    0       0       0         0
Table 2. Wheelchair Motion Control Communication Protocol.

Header   Functional Bit   Flag Bit   Polarity   Left Wheel Speed   Right Wheel Speed   CRC Checksum High   CRC Checksum Low
0x1F     0x13             0xFF       0x01       0                  0                   0                   0
Table 3. Accuracy of different control methods.

Control Mode    Forward   Turn Left   Turn Right   Stop
Gesture         98.2%     98.2%       98.2%        97.8%
Voice           98.6%     98.0%       98.2%        98.4%
Head Posture    95.2%     95.4%       95.6%        96.4%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
