Article

An AIoT-Based Assistance System for Visually Impaired People

1 School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou 510665, China
2 Hubei Province Key Laboratory of Intelligent Information Processing and Real-Time Industrial System, Wuhan University of Science and Technology, Wuhan 430065, China
3 Guangdong Provincial Key Laboratory of Intellectual Property and Big Data, Guangdong Polytechnic Normal University, Guangzhou 510665, China
* Authors to whom correspondence should be addressed.
Electronics 2023, 12(18), 3760; https://doi.org/10.3390/electronics12183760
Submission received: 17 August 2023 / Revised: 3 September 2023 / Accepted: 4 September 2023 / Published: 6 September 2023
(This article belongs to the Special Issue Advances of Artificial Intelligence and Vision Applications)

Abstract

In this work, an assistance system based on the Artificial Intelligence of Things (AIoT) framework was designed and implemented to provide convenience for visually impaired people. This system aims to be low-cost and multi-functional with object detection, obstacle distance measurement, and text recognition achieved by wearable smart glasses, heart rate detection, fall detection, body temperature measurement, and humidity-temperature monitoring offered by an intelligent walking stick. The total hardware cost is approximately $66.8, as diverse low-cost sensors and modules are embedded. Meanwhile, a voice assistant is adopted, which helps to convey detection results to users. As for the performance evaluation, the accuracies of object detection and text recognition in the wearable smart glasses experiments are 92.16% and 99.91%, respectively, and the maximum deviation rate compared to the mobile app on obstacle distance measurement is 6.32%. In addition, the intelligent walking stick experiments indicate that the maximum deviation rates compared to the commercial devices on heart rate detection, body temperature measurement, and humidity-temperature monitoring are 3.52%, 0.19%, and 3.13%, respectively, and the fall detection accuracy is 87.33%. Such results demonstrate that the proposed assistance system yields reliable performances similar to commercial devices and is impressive when considering the total cost as a primary concern. Consequently, it satisfies the fundamental requirements of daily life, benefiting the safety and well-being of visually impaired people.

1. Introduction

According to recent statistics from the World Health Organization (WHO), approximately 285 million people worldwide are visually impaired, of whom 246 million have low vision and 39 million are entirely blind [1]. With population growth and aging, this number is expected to triple by 2040 [2]. Consequently, coping with the ever-increasing cost of healthcare places a heavy burden on many countries. Visually impaired people cannot perceive external information through the visual system, which leads to difficulties and inconveniences in daily life [3]. For example, they may easily fall after hitting obstacles or nearby objects they cannot recognize, and they are unable to read content in a book or on a screen. In addition, health surveillance is vital for visually impaired people because long-term physical status data can be used to monitor health conditions and assess chronic diseases [4]. However, the related commercial devices are usually expensive, bulky, and offer poor interaction. Therefore, the way visually impaired people obtain their health conditions should be further improved.
Several solutions have previously been proposed to help visually impaired people perceive external information. For instance, the white stick is a traditional tool for navigating the surroundings and making the user visible to others, and it was recently reinvented to be foldable, lighter, and more visible [5]. Nevertheless, the white stick is limited in detecting potential obstacles: it cannot warn of obstacles approaching from a distance, and it may not recognize raised obstacles situated above the knee. The guide dog is another option, helping visually impaired people walk independently, confidently, and safely without relying on others, thereby avoiding accidents from obstacles and falls. It also provides emotional support and social interaction, helping to reduce anxiety and mental stress. However, training a guide dog requires time and effort; the entire process usually takes up to 2 years and costs about $2379 [6], a prohibitive total cost. Currently, with the development of hardware design and artificial intelligence techniques, several prototypes based on wearable devices [7], voice guidance [8], and hand-held devices [9] have been offered. Most of them integrate various sensors to collect nearby data and then sound alarms through headphones or other audio equipment. However, these attempts can be further improved by adding healthcare surveillance while simultaneously lowering the total cost.
To address those existing shortcomings, inspired by the current prototypes, an assistance system based on the Artificial Intelligence of Things (AIoT) framework was designed and implemented in this work. It aims to be low-cost and multi-functional, which includes two main electronic devices consisting of diverse low-cost sensors and modules connected by AIoT. One is the wearable smart glasses, later illustrated, which can detect objects and estimate their distances from the users. This functionality enables the device to alert users and provide reminders to avoid those obstacles. In addition, it offers text recognition that can be presented through the voice assistant, making the glasses automatically recognize words and transmit the results through Bluetooth audio equipment to the users. To this end, a binocular camera is adopted to acquire images of nearby environments. Subsequently, to assist visually impaired people in better perceiving information about their surroundings and providing real-time operation, all the image-related data processing is performed on the Alibaba Cloud, a platform that offers a range of cloud computing products and services in this field. Therefore, a Raspberry Pi 3B+ with the cloud platform is employed to process the corresponding data. Another device is the intelligent walking stick, which gathers various types of data through specialized sensors and modules for heart rate detection, fall detection, body temperature measurement, and humidity-temperature monitoring. Such data can be uploaded to the cloud platform for collection. Additionally, the reactions of obstacle avoidance control and alarm response are conducted through the STM32F103C8T6 microcontroller, in which the obstacle avoidance control is based on the results from obstacle distance measurement, and the alarm response is applied in the case of a fall. As a result, the collaborative interaction between the wearable smart glasses and the intelligent walking stick exhibits the potential to enhance the safety and well-being of visually impaired people in daily life. In short, the main contributions of this work are summarized as follows:
  • To enhance the perceptual capabilities of visually impaired people, the proposed wearable smart glasses accomplish real-time object detection, obstacle distance measurement, and text recognition. Additionally, an incorporated voice assistant presents the relevant information. The experiments demonstrate the convenience of these functions in helping users acquire pertinent information about their surroundings.
  • Compared to the existing prototypes, the proposed intelligent walking stick is equipped with healthcare surveillance of heart rate detection and body temperature measurement, fulfilling the daily needs of visually impaired people to monitor their health conditions. In addition, fall detection and humidity-temperature monitoring are performed to enhance mobility. Meanwhile, obstacle avoidance control and alarm response are achieved in the corresponding cases. The experiments disclose the usefulness of such incorporated functions in ensuring safety.
  • The assistance system aims to be low-cost, which relies on a Raspberry Pi 3B+ and STM32F103C8T6 microcontroller, along with diverse low-cost sensors and modules connected by the AIoT. The total hardware cost is approximately $66.8, providing a low-cost solution to design relevant systems in this field.
The rest of this work is organized as follows: Section 2 reviews the related work based on each respective function. Section 3 describes the details of the proposed wearable smart glasses and intelligent walking stick. Section 4 shows the experimental results with discussions. Finally, Section 5 summarizes this work.

2. Related Work

Regarding assistance systems for visually impaired people, several approaches have been designed previously, which can typically be classified into three categories based on their technological foundations: vision-based, non-vision-based, and hybrid [10]. A vision-based system employs real-time video streams or images to offer insights into the nearby environment. Conversely, a non-vision-based system utilizes a wide array of sensors and modules to analyze the surroundings and then convey relevant information to the users. A hybrid system combines vision with sensor technologies to derive the benefits of both approaches. The proposed system is hybrid: the wearable smart glasses form the vision-based component, and the intelligent walking stick forms the non-vision-based component. Furthermore, as the assistance system aims to be multi-functional, the reviews of related works concentrate on each respective function designed.

2.1. Object Detection

Lowe [11] proposed the Scale Invariant Feature Transform (SIFT) algorithm, which identifies robust feature points unaffected by illumination and noise. An improved version based on SIFT is the Speeded-Up Robust Features (SURF) algorithm developed by Bay et al. [12], which employs a Hessian matrix-based measure for the detector and a distribution-based descriptor. Although these conventional algorithms are comprehensible for detecting specific objects, they manually extract low-level features and process a large variety of multi-class objects ineffectively [13]. For this purpose, several deep learning approaches have been applied, which not only enable the extraction of more sophisticated features but also encompass feature extraction, selection, and classification within a unified model. For example, OverFeat [14] is one of the extensively used deep learning approaches for object detection; it extracts image features through a multi-scale sliding window combined with AlexNet. Its mean Average Precision (mAP) on the ILSVRC2013 dataset is about 24.3%, a significant improvement over the conventional algorithms, although it still produces a high error rate. The Region-based Convolutional Neural Network (R-CNN) [15] improves detection accuracy by choosing a large Intersection over Union (IoU) threshold to obtain high-quality samples and by using a region proposal network to generate regions where objects may exist; nonetheless, it falls slightly short of real-time requirements [16]. You Only Look Once (YOLO) [17] is a deep learning regression technique. Unlike region proposal networks, YOLO uses a cell-centered multi-scale region, sacrificing some accuracy for faster speed; it can reach up to 140 Frames Per Second (FPS), properly satisfying real-time requirements. Based on that, Mallikarjuna et al. [18] presented a cognitive IoT system capable of handling occlusion, sudden camera or object movements, and complex rotations, achieving real-time detection of diverse objects. Overall, deep learning approaches have greatly surpassed conventional methods in detection accuracy and speed. Therefore, those works hold distinguished potential for assisting visually impaired people in perceiving their surroundings.

2.2. Obstacle Distance Measurement

Obstacle distance measurement can be classified into four categories: ultrasonic, infrared, laser, and vision [19]. For instance, Meshram et al. [20] employed ultrasonic sensors to detect obstacles below the knee, so that obstacles in the path can be prioritized without causing information overload. Villanueva and Farcy [21] combined a walking stick with near-infrared sensors that emit and detect infrared pulses reflected from obstacles, enabling visually impaired people to find a sufficiently wide way to pass on the road. Although the above approaches can solve the ranging problem, they are unsuitable for identifying moving objects in nearby environments. Moreover, their propagation speeds in air are affected by environmental parameters such as temperature, humidity, and ambient noise [22]. Additionally, lasers could damage human eyes. In this regard, the vision-based method is a better solution. Monteiro et al. [23] utilized video datasets recorded from the perspective of a guide dog to train a convolutional neural network that recognizes the activities occurring around the camera and generates feedback for visually impaired people. Therefore, approaches based on computer vision support visually impaired people in sensing their surroundings and avoiding obstacles more reliably.

2.3. Text Recognition

Several previous works focus on developing robust systems that recognize text in real time through wearable smart glasses. They often use computer vision techniques, such as edge detection and Optical Character Recognition (OCR), to extract content from a book or screen. For example, Pei and Zhu [24] introduced an accurate real-time text recognition system based on Tesseract, a powerful OCR engine enclosing a long short-term memory (LSTM) architecture that integrates new feature words and line recognition; the LSTM detects and predicts the text for each frame and then translates the per-frame predictions into the final label sequence, producing higher recognition accuracy on the images. In another work, Mukhiddinov and Cho [25] accomplished end-to-end text recognition with the help of an OCR engine whose fundamental component is a fully convolutional network modified for text recognition; it is trained to directly predict the presence of text occurrences and their geometries from input images, yielding dense per-pixel predictions of sentences or text lines.

2.4. Heart Rate Detection

Heart rate detection can be performed in two main manners, contact and non-contact, where the electrocardiograph (ECG) is the gold standard [26]. The contact manner applies electrodes or sensors attached to the subject, so it is limited by the requirement for direct contact with the human body, proneness to discomfort, and operational complexity. Conversely, the non-contact manner is convenient and comfortable for users, as it eliminates the need for physical contact and allows a more natural experience. Generally, it can be realized with three solutions: infrared serial imaging, optical Doppler, and photoplethysmography (PPG) [27]. Among them, PPG obtains the pulsatile variation of blood volume in peripheral microvessels in response to the heartbeat by tracing the light absorption at the measured site (finger, earlobe, nose, etc.). Its main principle is that when external light hits the human skin surface, the absorption of light by the blood in the skin pulsates with the change in blood volume, which yields a corresponding periodic variation in the reflected light intensity [28]. In addition, PPG is cost-effective, making it more suitable as a healthcare monitoring solution in this field; consequently, it has been adopted more extensively than the other two techniques. Huang and Selvaraj [29] designed a robust system based on PPG to conduct high-quality long-term heart rate monitoring. As heart rate detection supports earlier predictive diagnosis of cardiovascular diseases, it plays a vital role in healthcare surveillance for visually impaired people.

2.5. Fall Detection

Fall detection is usually achieved by sensors, such as accelerometers, gyroscopes, and magnetometers, which detect fall-related events based on changes in movement and posture [30]. For instance, Pierleoni et al. [31] developed a wearable fall detector by integrating a three-axis accelerometer, a three-axis gyroscope, a three-axis magnetometer, and a barometer, along with a data fusion algorithm derived from Attitude and Heading Reference Systems (AHRS). Vision-based methods, which adopt cameras or wearable devices with built-in cameras to analyze visual data and detect falls through irregularities in the scene, are also available in this field [32]. Furthermore, several attempts [33,34,35] not only combine diverse sensors to improve performance and reduce false positives or false negatives, but also incorporate alert mechanisms to notify caregivers or emergency services. Hence, the continuous advancement of sensor technologies contributes to accurate fall detection for visually impaired people.

2.6. Body Temperature Measurement

Body temperature measurement for visually impaired people requires specialized considerations to ensure accurate and accessible readings. In this regard, the infrared thermometer is appropriate, as it enables convenient body temperature readings without physical contact, promoting hygiene and reducing the risk of cross-contamination [36]. More importantly, auditory feedback is desirable for visually impaired people, converting body temperatures into audio signals and providing voice-guided instructions accordingly. Meanwhile, user-friendly features, such as tactile markings or braille labels on the thermometer, allow individuals proficient in braille to interpret temperatures conveniently through touch. Following this approach, Khan et al. [37] placed the GY-906 infrared thermometer on the top of a portable white stick to monitor body temperature; abnormalities can be communicated wirelessly through a Bluetooth transceiver and then presented by voice for notification.

2.7. Humidity-Temperature Monitoring

Humidity-temperature monitoring designed for visually impaired people primarily aims to supply real-time information about environmental conditions, maintaining comfortable and healthy living surroundings [38], for example, by informing users of unfavorable or extreme conditions, or by sensing the presence of water on the floor and offering timely warnings. For this purpose, integrated humidity-temperature sensors are usually applied to track historical trends and patterns and to quantify both humidity and temperature levels. The humidity element measures the amount of moisture or water vapor present in the air, typically expressed as a percentage relative to the maximum moisture capacity of the air, i.e., Relative Humidity (RH). The temperature element measures changes in a particular physical property (electrical resistance, voltage, or frequency) affected by ambient temperature. Zhangaskanov et al. [39] modified the white stick by allocating a number of sensors at different angles to interact with the surroundings, including a DHT11 sensor for humidity-temperature monitoring.

3. Proposed System

3.1. AIoT-Based Architecture

Considering the various situations and health detection needs of visually impaired people in daily life, the wearable smart glasses and intelligent walking stick should satisfy two fundamental requirements: convenience and reliability. To this end, the proposed design of the smart glasses is illustrated in Figure 1; it perceives the surroundings based on YOLO v5, where a binocular camera captures the images and a lightweight network model identifies external objects with their corresponding distances as well as the content in a book or on a screen. On the other side, the proposed design of the intelligent walking stick is displayed in Figure 2; it employs a microcontroller combined with diverse low-cost sensors and modules, where the microcontroller handles sensor data acquisition, including falls, air humidity and temperature, and the user's heart rate and body temperature. All these results can also be uploaded to the cloud platform for data collection. In addition, according to the results of obstacle distance measurement or fall detection, obstacle avoidance or an alarm response can be conducted by the microcontroller, controlling the stick equipped with the motor driver and omni-directional wheel, or the buzzer and vibration motor, respectively.
Figure 3 depicts the AIoT-based framework of the proposed assistance system, which follows the common practice of breaking the IoT architecture into three layers. The perception layer mainly includes the different sensors and modules responsible for collecting external data. The network layer uses Narrow Band-IoT (NB-IoT) technology as a cost-effective platform for connecting a wide range of sensors and modules, while adopting the Hyper Text Transfer Protocol (HTTP) and Message Queuing Telemetry Transport (MQTT) protocols for data transmission. Finally, in the application layer, an app is designed to read the current status information and present it to the users through Bluetooth audio equipment. More details are described in the subsequent sections.
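As a concrete illustration of the network layer, the minimal sketch below publishes a status message over MQTT using the paho-mqtt 1.x client API; the broker address, topic name, and payload fields are assumptions for illustration, not the exact configuration of this system.

```python
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"   # hypothetical MQTT broker address
TOPIC = "aiot/stick/status"     # hypothetical topic for the walking stick

client = mqtt.Client()          # paho-mqtt 1.x style client
client.connect(BROKER, 1883, keepalive=60)

# Example status payload assembled from the perception layer
payload = {
    "heart_rate_bpm": 72,
    "body_temp_c": 36.5,
    "humidity_rh": 55,
    "ambient_temp_c": 24.0,
    "fall_detected": False,
}
client.publish(TOPIC, json.dumps(payload), qos=1)  # QoS 1: at-least-once delivery
client.disconnect()
```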

3.2. Design of Wearable Smart Glasses

First, object detection is accomplished by YOLO v5, a cutting-edge technique that enhances the accuracy and speed of object detection through deep neural networks [17,40]. It functions by partitioning an image into a grid and predicting bounding boxes for each grid cell, so that several items in an image can be identified, each enclosed by a bounding box. Compared to similar methods and recent YOLO versions, it is a single-pass approach employing a unified neural network, which contributes to a speed of up to 140 FPS. This attribute is important for visually impaired people, as it facilitates immediate feedback about the objects in their nearby environment without notable delays. Additionally, its lightweight nature renders it compatible with devices possessing limited computational power, enabling its integration into the Raspberry Pi 3B+. Moreover, it can accurately detect a wide array of objects across diverse categories; such versatility equips it to determine common objects in daily life. Based on that, although YOLO v5 is not the newest version, it remains strong in object detection and can provide reliable performance.
In this work, a pre-trained YOLO v5 network referenced from [41] is applied, as presented in Figure 4, which can be divided into four main parts: input, backbone, neck, and output. The input layer defines the initial representation of the images from the binocular camera. The backbone layer is responsible for extracting features from the input images; here, a CNN model serves as the backbone, performing a series of convolution and pooling operations to capture hierarchical features at different scales. The neck connects the backbone to the output layer and incorporates feature fusion, which improves the ability to detect objects of different sizes. Lastly, the output layer gives the final prediction of the object with a bounding box.
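For reference, a minimal inference sketch with a pre-trained YOLO v5 model is shown below; loading the small yolov5s variant through torch.hub is the common Ultralytics route, and the image path is a placeholder rather than the exact pipeline deployed on the cloud platform.

```python
import torch

# Load a pre-trained YOLO v5 model published by Ultralytics via torch.hub
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Run inference on a captured frame (path, URL, PIL image, or numpy array)
results = model("frame.jpg")

# Each row holds one detection: bounding box, confidence, and class name
for _, det in results.pandas().xyxy[0].iterrows():
    print(f"{det['name']}: {det['confidence']:.2f} at "
          f"({det['xmin']:.0f}, {det['ymin']:.0f}, {det['xmax']:.0f}, {det['ymax']:.0f})")
```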
More than object detection, it is worth measuring the distance of obstacles in front of visually impaired people so that obstacle avoidance control can be conducted. To this end, the binocular vision ranging method [42] is used for the data recorded from a binocular camera (Intel RealSense D435), as shown in Figure 5, where O_L and O_R refer to the left and right lenses, f denotes the focal length of the camera, B is the distance between the left and right lenses, a is the lens width, P is the target point on the obstacle, P′ and P″ are the projections of P onto the two lens planes, m is the distance from the lens midpoint to P′, n is the distance from the lens endpoint to P″, and Z represents the distance from the obstacle to the baseline. Mathematically, their relationships can be expressed as:
$$\frac{Z - f}{Z} = \frac{B - (m + n)}{B} \quad (1)$$
$$X_L = \frac{a}{2} + m \quad (2)$$
$$X_R = \frac{a}{2} - n \quad (3)$$
Combining (1), (2), and (3), we then obtain (4):
$$Z = \frac{B \times f}{X_L - X_R} = \frac{B \times f}{d} \quad (4)$$
In (4), Z is related to B, f, and d (the parallax between the corresponding left and right points, i.e., X_L − X_R). As B and f are fixed constants of the camera, once the parallax d of each point is obtained, the Z value (obstacle distance) can be estimated accordingly. As a result, if an obstacle is detected and its distance to the user is measured, the assistance system can promptly inform the visually impaired person of the object category and its corresponding distance.
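To make the computation concrete, a minimal sketch of Equation (4) follows; the baseline and focal-length values are illustrative placeholders in the rough range of an Intel RealSense D435 rather than calibrated parameters from this work, and the 0.70 m alarm threshold anticipates the rule described in Section 4.2.

```python
def obstacle_distance_m(x_left_px: float, x_right_px: float,
                        baseline_m: float = 0.05, focal_px: float = 380.0) -> float:
    """Equation (4): Z = B * f / d, where d = X_L - X_R is the disparity
    between matched points in the left and right images."""
    d = x_left_px - x_right_px
    if d <= 0:
        raise ValueError("disparity must be positive for a valid stereo match")
    return baseline_m * focal_px / d

# Example: a 28 px disparity yields about 0.68 m, under the 0.70 m alarm threshold
z = obstacle_distance_m(353.0, 325.0)
if z < 0.70:
    print(f"Obstacle at {z:.2f} m: trigger alert and obstacle avoidance control")
```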
Furthermore, text recognition based on OCR is employed in combination with a voice assistant to address the difficulties encountered when reading from a book or screen. Its implementation relies on a commercial Application Programming Interface (API) provided by the Baidu OCR engine [43]. The entire process involves several stages, as drawn in Figure 6. Initially, the image is acquired from the binocular camera, and sampling is applied to reduce image noise. Subsequently, pre-processing in terms of grayscaling and binarization is adopted to extract the relevant text area from the image. Following this, character slicing segments the text into individual characters. The next step scans the segmented image and extracts feature vectors from the recognized characters. After that, toward optimization processing, the feature vectors are matched against a template library, which facilitates fine template matching so that the characters can be recognized. Finally, the results are output through a voice assistant with Bluetooth, addressing the reading difficulties of visually impaired people.
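A minimal sketch of calling the Baidu OCR engine through its Python SDK follows; the credentials and image file name are placeholders, and basicGeneral is the SDK's general-purpose recognition endpoint (the specific endpoint used in this work is not stated).

```python
from aip import AipOcr  # Baidu AI SDK: pip install baidu-aip

# Hypothetical credentials issued by the Baidu AI console
APP_ID, API_KEY, SECRET_KEY = "app-id", "api-key", "secret-key"
client = AipOcr(APP_ID, API_KEY, SECRET_KEY)

with open("capture.jpg", "rb") as f:    # frame captured by the binocular camera
    image = f.read()

# General text recognition; returns a list of recognized lines
result = client.basicGeneral(image)
text = " ".join(item["words"] for item in result.get("words_result", []))
print(text)  # this string is then handed to the voice assistant
```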

3.3. Design of Intelligent Walking Stick

Aiming at health surveillance, heart rate detection and body temperature measurement are designed into the intelligent walking stick. Regarding heart rate detection, the key is identifying the number of heartbeats per minute, i.e., the number of pulses within a minute. To this end, the Beats Per Minute (BPM) value is acquired by measuring the interval between two adjacent pulses, denoted as the Inter-Beat Interval (IBI), and dividing 60 s by this interval, so BPM equals 60/IBI. The number of pulses is obtained by detecting wave peaks: a threshold is set, and a pulse is considered detected when the signal value exceeds the threshold. As the waveform voltage range is uncertain and the amplitude of the sensor output changes randomly, the threshold is adjusted according to the signal amplitude to accommodate different signal peaks, meaning that the solution is to find local maxima exceeding a pre-defined threshold. Such a threshold also determines whether an acquired voltage value belongs to a valid waveform. When a valid waveform is received, the point at half of the signal amplitude (i.e., 50%) is regarded as a feature point [44], as displayed in Figure 7. Consequently, the IBI is calculated as the time between two adjacent feature points, and the BPM is obtained accordingly.
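To illustrate this BPM = 60/IBI principle, a simplified offline sketch over a buffered PPG trace follows; the actual peak detection runs in firmware on the STM32, and the relative-threshold fraction here is an illustrative assumption.

```python
import numpy as np

def estimate_bpm(ppg, fs, rel_threshold=0.5):
    """Locate local maxima above an amplitude-adaptive threshold,
    then average the inter-beat intervals (IBI) and return 60 / IBI."""
    sig = np.asarray(ppg, dtype=float)
    # Adaptive threshold: a fraction of the current signal amplitude range
    thresh = sig.min() + rel_threshold * (sig.max() - sig.min())
    peaks = [i for i in range(1, len(sig) - 1)
             if sig[i] > thresh and sig[i] >= sig[i - 1] and sig[i] > sig[i + 1]]
    if len(peaks) < 2:
        return None                    # not enough beats in the buffer
    ibi_s = np.diff(peaks) / fs        # inter-beat intervals in seconds
    return 60.0 / ibi_s.mean()         # BPM = 60 / IBI

# Example: a synthetic 1.2 Hz pulse sampled at 100 Hz gives roughly 72 BPM
t = np.arange(0, 10, 0.01)
print(estimate_bpm(np.sin(2 * np.pi * 1.2 * t), fs=100))
```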
Second, as the magnitude and wavelength distribution of the infrared radiation energy emitted by an object are closely related to its surface temperature, the surface temperature can be determined by measuring infrared radiation [45]. Based on that, the GY-906 infrared thermometer, a non-contact sensor that measures the infrared radiation emitted by an object to determine its temperature, is embedded in the walking stick to capture the variation caused by changes in body temperature. As for its working principle, the sensor unit gathers the infrared radiation energy of the target within its Field of View (FOV), the angle over which the sensor can detect infrared radiation, which defines the area being measured. Subsequently, through analog-to-digital conversion, the collected body temperature value is communicated to the STM32F103C8T6 via the Inter-Integrated Circuit (I2C) protocol, allowing it to receive the temperature data and output the results correspondingly.
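For illustration, the sketch below reads the GY-906 (an MLX90614) over I2C from a Linux host such as the Raspberry Pi; in the actual system, the equivalent transaction is performed by the STM32 firmware. The default address 0x5A, object-temperature register 0x07, and 0.02 K/LSB scaling follow the MLX90614 datasheet.

```python
from smbus2 import SMBus  # pip install smbus2

MLX90614_ADDR = 0x5A      # default I2C address of the GY-906 (MLX90614)
TOBJ1_REG = 0x07          # RAM register holding the object temperature

def read_body_temperature_c(bus_id: int = 1) -> float:
    """Read the object temperature; raw units are 0.02 K per LSB."""
    with SMBus(bus_id) as bus:
        raw = bus.read_word_data(MLX90614_ADDR, TOBJ1_REG)
    return raw * 0.02 - 273.15  # Kelvin to degrees Celsius

print(f"Body temperature: {read_body_temperature_c():.2f} °C")
```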
Next, to enhance the perception of surroundings for visually impaired individuals, the design incorporates humidity-temperature monitoring and fall detection into the walking stick. On the one hand, referencing [39], the DHT11 sensor, which includes a calibration unit, is utilized. This sensor uses digital module acquisition and humidity-temperature sensing technologies, integrating a resistive moisture sensing element and a Negative Temperature Coefficient (NTC) temperature measurement element. As for data transmission, when the microcontroller initiates a start signal by pulling down the data bus for at least 800 µs, the DHT11 transitions from sleep mode to high-speed mode; 40 bits of data are then transmitted serially over the bus. Upon completion of the transmission, the sensor automatically enters hibernation mode until the subsequent communication is initiated. Additionally, an ATGM336H Global Positioning System (GPS) module, an MQ-2 smoke sensor, and an MH-RD raindrop sensor are employed to enrich the environmental data, such as location, weather, and air quality, further supporting the sense of comfortable surroundings. Therefore, all these sensors assist in accurate humidity-temperature monitoring.
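As a host-side illustration of reading the DHT11, the sketch below uses the legacy Adafruit_DHT library, which handles the timing-critical start signal and the 40-bit frame internally; the GPIO pin number is a placeholder, and the production system reads the sensor from the STM32 instead.

```python
import Adafruit_DHT  # legacy library: pip install Adafruit_DHT

DHT_PIN = 4  # hypothetical GPIO pin wired to the DHT11 data bus

# read_retry re-issues the start signal whenever a 40-bit frame
# fails its checksum, returning (humidity %RH, temperature °C)
humidity, temperature = Adafruit_DHT.read_retry(Adafruit_DHT.DHT11, DHT_PIN)
if humidity is not None and temperature is not None:
    print(f"RH: {humidity:.0f}%  Temperature: {temperature:.1f} °C")
else:
    print("Sensor read failed")
```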
On the other hand, a fall is characterized by an abrupt change in posture that results in the user falling to the ground. To identify such an accident, the intelligent walking stick is equipped with the MPU6050 accelerometer and gyroscope sensor, which plays a vital role in fall detection. It analyzes acceleration to discern changes in posture when the device falls alongside the user; that is, the fall detection mechanism relies on obtaining body acceleration data and monitoring posture changes. When the acceleration exceeds a pre-determined threshold, it signals the occurrence of a fall event. To facilitate fall detection, the MPU6050 sensor is first initialized. Then, its internal Digital Motion Processor (DMP) is calibrated, which allows direct readings of the three Euler angles: roll, pitch, and yaw. These angles describe the orientation of the stick about the X, Y, and Z axes, as depicted in Figure 8. As a result, by analyzing them, any changes in the posture of the user and the stick can be identified, supporting the detection of a potential fall event.
By defining the accelerations along the X-axis, Y-axis, and Z-axis as a_x, a_y, and a_z, respectively, the combined acceleration (A_CLR) is obtained by (5), a parameter used to determine the current status of the stick:
$$A_{CLR} = \sqrt{a_x^2 + a_y^2 + a_z^2} \quad (5)$$
Typically, a larger A_CLR reveals drastic movement, while it decreases with softer movement. During a fall, a distinct impact phase occurs when the body makes contact with the ground, producing a pronounced peak in acceleration that is more apparent than in normal movements and therefore serves as a vital indicator for fall detection. As mentioned, a pre-determined threshold is needed; based on the preliminary tests, it is set to 2.5 g in this work, where 1 g equals 9.8 m/s². In the general case, the A_CLR stays below the threshold. When the A_CLR exceeds the threshold, the walking stick identifies a possible fall event and activates the buzzer and vibration motor as an alert. To avoid false alarms, the response continues for 60 s, during which the user can cancel the alarm by pressing a button placed on the stick. Failure to do so within this period indicates an actual fall event; the alarm is then immediately transmitted to the user's cell phone through Bluetooth, and a message is sent to the emergency contact.
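The sketch below captures this thresholding and cancellation logic in Python for clarity; the production logic runs as STM32 firmware, and the cancel_pressed and notify_contact callbacks plus the polling interval are hypothetical placeholders.

```python
import math
import time

FALL_THRESHOLD_G = 2.5   # pre-determined threshold from the preliminary tests
CANCEL_WINDOW_S = 60     # window in which the user may cancel a false alarm

def combined_acceleration_g(ax: float, ay: float, az: float) -> float:
    """Equation (5): magnitude of the acceleration vector, in g."""
    return math.sqrt(ax ** 2 + ay ** 2 + az ** 2)

def handle_sample(ax, ay, az, cancel_pressed, notify_contact):
    """Sound the buzzer on a suspected fall; escalate unless cancelled in time."""
    if combined_acceleration_g(ax, ay, az) <= FALL_THRESHOLD_G:
        return "normal"
    # Suspected fall: the buzzer and vibration motor would be activated here
    deadline = time.time() + CANCEL_WINDOW_S
    while time.time() < deadline:
        if cancel_pressed():           # button on the stick
            return "cancelled"
        time.sleep(0.1)
    notify_contact()                   # Bluetooth to phone, then emergency contact
    return "fall_confirmed"
```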

4. Results and Discussion

4.1. Experimental Implementation

The experimental implementation mainly involves hardware validation of the designed wearable smart glasses and intelligent walking stick, checking whether the system can perform its functions under various conditions. The experiments were conducted by fixing the STM32F103C8T6 with the diverse sensors and modules on the stick to test heart rate detection, fall detection, body temperature measurement, and humidity-temperature monitoring, respectively. On the other side, a binocular camera embedded in the glasses and connected to the Raspberry Pi 3B+ was used to verify object detection, obstacle distance measurement, and text recognition. As mentioned, to ensure the real-time requirement, the algorithms for the functions achieved by the glasses were run on the Alibaba Cloud.

4.2. Results of Wearable Smart Glasses Experiments

Regarding the validations of object detection and obstacle distance measurement, an evaluation involving 20 cases was conducted, containing single and multiple objects commonly found in indoor and outdoor environments. As object detection and obstacle distance measurement are usually combined in practical applications, the results include the predicted object with its respective distance, as shown in Figure 9 for indoors and Figure 10 for outdoors. In addition, detailed results of object detection under the various test cases are summarized in Table 1.
The results in Table 1 show accurate detection for single and dual-object cases. However, as the number of objects in the image increased, failure cases were observed, particularly for objects with similar appearances. For instance, the computer monitor was erroneously recognized as a Television (TV) in cases 10 and 13, and the motorcycle was misidentified as a bicycle in cases 19 and 20, implying the need for additional training data to improve the recognition of objects with similar appearances. In short, the wearable smart glasses achieve an object detection accuracy of around 92.16% (47 correct predictions out of 51 total items in the 20 cases, with only 4 failures), demonstrating their promise in offering object information for visually impaired people.
Concerning the distance measurement results among all cases presented in Table 1, although several objects were misidentified, their respective distances could still be obtained, ranging from 0.71 to 65.54 m. Compared to a mobile app, the deviation rate (the absolute difference between the distance measured by the glasses and the distance measured by the mobile app, divided by the distance measured by the mobile app) ranges from 0.28% to 6.32%, and the longer the distance, the higher the deviation rate. The reason may lie in the binocular vision ranging method, where depth perception is achieved by analyzing the relative displacement of corresponding points in the images captured by the left and right cameras, with the baseline being the separation between the cameras. As the distance increases, the disparity between the corresponding points becomes smaller for distant objects, reducing the precision of depth estimation [46]. Hence, the method is limited at longer distances because of the way it estimates depth. Based on that, if the distance measured by the glasses is less than 0.70 m, the user receives an alarm, and simultaneously, with the help of the cloud platform, the data are sent to the STM32F103C8T6 to control the stick for obstacle avoidance.
Furthermore, text recognition is designed to improve real-time reading experiences for visually impaired people, such as reading content in a book or on a screen, usually in indoor environments. Hence, its evaluation used ten texts extracted from a book, and example results are displayed in Figure 11, where the top shows the original text and the bottom shows the outcome produced by the wearable smart glasses embedded with the Baidu OCR engine. Remarkably, all words were accurately recognized in this test case. Additional cases involved various numbers of words, and the results are presented in Table 2. Although head movements during reading may lead to incomplete camera screenshots, causing very slight errors in real-time text recognition, the system still maintains an impressive accuracy (the number of correct words divided by the original number of words) of approximately 99.91%, indicating that the smart glasses properly extract text from the captured images. Therefore, by presenting such words through Bluetooth audio equipment, visually impaired people can enhance their real-time reading experiences in daily life.

4.3. Results of Intelligent Walking Stick Experiments

First, a heart rate module based on PPG was integrated into the stick to validate its heart rate detection capabilities. As individual variations may exist in physical status data, four subjects were recruited for the experiments; each subject underwent 20 tests alternating between two states, meditation and walking, with each state lasting 30 s. The heart rate detection results were obtained not only from the intelligent walking stick but also from a commercial device (Xiaomi smart band) for validation and comparison. Table 3 discloses the average performance of heart rate detection, in which the BPM values represent the average heart rate calculated from the 20 tests in each case, and the deviation rate (the absolute difference between the heart rate detected by the stick and that detected by the Xiaomi device, divided by the latter) ranges from 0.72% to 3.52%. Such deviations fall within an acceptable range, verifying the reliability of the heart rate detection achieved by the intelligent walking stick. Therefore, it proves effective in providing healthcare surveillance by monitoring heart rate variations, which can identify abnormal situations, particularly when the heart beats faster. For visually impaired people, this capability is valuable as it allows early detection of potential chronic diseases, leading to timely medical intervention and health management.
Second, to investigate fall detection, the experiment simulated falls of visually impaired people in three scenarios: walking, going upstairs, and going downstairs. The embedded MPU6050 sensor collected acceleration variations during the experiment, and each scenario was tested 100 times. Table 4 displays the results, where correct detection means the stick detects a fall correctly, incorrect detection refers to the stick detecting a fall while the user is still in a normal state, and unresponsive detection denotes the stick failing to detect a fall when it occurs. Hence, the average accuracy is the total number of correct detections divided by the total number of tests (262/300 = 87.33%). Such accuracy reveals that the intelligent walking stick can determine whether visually impaired people have fallen in most cases. The misjudgments may be due to the behaviors of different subjects, as the way the user holds the cane or their movement patterns during a fall may affect the sensor's ability to detect the event accurately [47]. Consequently, suitable training before daily usage is preferred.
Third, as for body temperature measurement, it was tested 10 times for each subject and compared with the results from a medical forehead thermometer (XCH01A, ORCOE Co., Ltd., Yantai, China). Table 5 presents the average performance on the four subjects, where the maximum deviation rate compared to the commercial device is only 0.19%, proving that body temperature measurement can be accomplished appropriately by the stick, benefiting the timely assessment of body temperature for visually impaired people. This is particularly essential during epidemics, such as the Coronavirus Disease 2019 (COVID-19) pandemic.
Next, humidity-temperature monitoring was evaluated over one week at a consistent time (8 a.m.) and outdoor location (university playground). The obtained data were also compared with the measurements through a Xiaomi humidity-temperature device, as shown in Table 6. The results display that the maximum deviation rates of temperature and humidity are 2.86% and 3.13%, respectively, occurring on Day 3, and the differences were very slight on other days, indicating the walking stick is capable of providing a trustworthy environmental perception of surroundings in terms of humidity and temperature levels.
Lastly, an Android app was designed to visualize the current status information. Figure 12 shows the homepage of the app and example results of body temperature measurement, real-time heart rate detection, and humidity monitoring. These data can be played to the users through Bluetooth audio equipment, helping them sense health and environmental information. Additionally, an emergency contact or family member can use the app to collect the data in a timely manner, enhancing their support for visually impaired people.

5. Conclusions

In this work, an assistance system to enhance convenience for visually impaired people is proposed, incorporating wearable smart glasses for object detection, obstacle distance measurement, and text recognition, and an intelligent walking stick equipped with heart rate detection, fall detection, body temperature measurement, and humidity-temperature monitoring. The system is designed by integrating a variety of low-cost sensors and modules connected through an AIoT framework. Additionally, a voice assistant that communicates the detection results to the users is included. The glasses experiments indicate high accuracies of 92.16% and 99.91% for object detection and text recognition, respectively. In addition, the maximum deviation rate observed in obstacle distance measurement compared to a mobile app is 6.32%, showing its reliability. The stick experiments then focus on evaluating heart rate detection, body temperature measurement, humidity-temperature monitoring, and fall detection. Comparisons with commercial devices reveal maximum deviation rates of 3.52%, 0.19%, and 3.13% for heart rate detection, body temperature measurement, and humidity-temperature monitoring, respectively, and the fall detection accuracy reaches 87.33%. Such outcomes demonstrate that the proposed system offers performance on par with existing commercial devices; in particular, its total hardware cost amounts to $66.8, lower than several previous related works, such as $240 by Lan et al. [48], $97 by Jiang et al. [49], $70 by Rajesh et al. [50], and $68 by Khan et al. [51]. Furthermore, as all image-related processing is performed on the Alibaba Cloud, the results are available in an average of 1.12 s per image, satisfying the real-time requirement. Thus, the low-cost and multi-functional system is an appropriate and user-friendly solution that addresses the needs of visually impaired people, providing them with enhanced safety and well-being.
In the future, to further improve detection accuracy with the help of sufficiently large data samples, several advanced models, such as the contrastive learning approach [52], the self-attention enhanced deep residual network [53], the time-series sequencing method [54], the multiscale superpixelwise prophet model [55], and multistage stepwise discrimination with a compressed MobileNet [56], will be investigated for the assistance system. Furthermore, additional functions, such as emotion recognition and fatigue detection, will be designed to enhance the overall life quality of visually impaired people.

Author Contributions

Conceptualization, J.L., L.X., Z.C., L.S. and R.C.; Funding acquisition, J.L. and R.C.; Methodology, J.L., L.X., Z.C., L.S. and R.C.; Project administration, R.C., Y.R., L.W. and X.L.; Resources, R.C., L.W. and X.L.; Software, L.X., Z.C., L.S. and Y.R.; Validation, J.L., L.X., Z.C., L.S. and R.C.; Writing—original draft, J.L., L.X. and Z.C.; Writing—review and editing, J.L., R.C., Y.R., L.W. and X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grants 62072122 and 62176067, in part by the Special Projects in Key Fields of Ordinary Universities of Guangdong Province under Grant 2021ZDZX1087, in part by Guangzhou Science and Technology Plan Project under Grants 2023B03J1327 and 2023A04J0361, in part by the Fund of Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System (Wuhan University of Science and Technology) under Grant ZNXX2022005, in part by the Research Fund of Guangdong Polytechnic Normal University under Grants 22GPNUZDJS17 and 2022SDKYA015, in part by the Special Project Enterprise Sci-tech Commissioner of Guangdong Province under Grant GDKTP2021033100, in part by the Special Fund for Science and Technology Innovation Strategy of Guangdong Province (Climbing Plan) under Grant pdjh2023a0300, and in part by the Guangdong Provincial Key Laboratory Project of Intellectual Property and Big Data under Grant 2018B030322016.

Data Availability Statement

The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.

Acknowledgments

The authors would like to acknowledge the special contributions from the Guangzhou Key Laboratory of Digital Content Processing & Security Technology.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Jivrajani, K.; Patel, S.K.; Parmar, C.; Surve, J.; Ahmed, K.; Bui, F.M.; Al-Zahrani, F.A. AIoT-based smart stick for visually impaired person. IEEE Trans. Instrum. Meas. 2023, 72, 2501311. [Google Scholar] [CrossRef]
  2. Ackland, P.; Resnikoff, S.; Bourne, R. World blindness and visual impairment: Despite many successes, the problem is growing. Community Eye Health 2017, 30, 71–73. [Google Scholar] [PubMed]
  3. Real, S.; Araujo, A. Navigation systems for the blind and visually impaired: Past work, challenges, and open problems. Sensors 2019, 19, 3404. [Google Scholar] [CrossRef] [PubMed]
  4. El-Rashidy, N.; El-Sappagh, S.; Islam, S.M.R.; El-Bakry, H.M.; Abdelrazek, S. Mobile health in remote patient monitoring for chronic diseases: Principles, trends, and challenges. Diagnostics 2021, 11, 607. [Google Scholar] [CrossRef] [PubMed]
  5. Husin, M.H.; Lim, Y.K. InWalker: Smart white cane for the blind. Disabil. Rehabil. Assist. Technol. 2020, 15, 701–707. [Google Scholar] [CrossRef]
  6. Glenk, L.M.; Přibylová, L.; Stetina, B.U.; Demirel, S.; Weissenbacher, K. Perceptions on health benefits of guide dog ownership in an Austrian population of blind people with and without a guide dog. Animals 2019, 9, 428. [Google Scholar] [CrossRef]
  7. Chang, W.; Chen, L.; Hsu, C.; Chen, J.; Yang, T.; Lin, C. MedGlasses: A wearable smart-glasses-based drug pill recognition system using deep learning for visually impaired chronic patients. IEEE Access 2020, 8, 17013–17024. [Google Scholar] [CrossRef]
  8. Kuriakose, B.; Shrestha, R.; Sandnes, F.E. Tools and technologies for blind and visually impaired navigation support: A review. IETE Tech. Rev. 2020, 39, 3–18. [Google Scholar] [CrossRef]
  9. Li, B.; Muñoz, J.P.; Rong, X.; Chen, Q.; Xiao, J.; Tian, Y.; Arditi, A.; Yousuf, M. Vision-based mobile indoor assistive navigation aid for blind people. IEEE Trans. Mobile Comput. 2019, 18, 702–714. [Google Scholar] [CrossRef]
  10. Plikynas, D.; Žvironas, A.; Gudauskis, M.; Budrionis, A.; Daniušis, P.; Sliesoraitytė, I. Research advances of indoor navigation for blind people: A brief review of technological instrumentation. IEEE Instrum. Meas. Mag. 2020, 23, 22–32. [Google Scholar] [CrossRef]
  11. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  12. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
  13. Xiao, Y.; Tian, Z.; Yu, J.; Zhang, Y.; Liu, S.; Du, S.; Lan, X. A review of object detection based on deep learning. Multimed. Tools Appl. 2020, 79, 23729–23791. [Google Scholar] [CrossRef]
  14. Sermanet, P.; Eigen, D.; Zhang, X.; Mathieu, M.; Fergus, R.; LeCun, Y. Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv 2013, arXiv:1312.6229. [Google Scholar]
  15. Wu, M.; Yue, H.; Wang, J.; Huang, Y.; Liu, M.; Jiang, Y.; Ke, C.; Zeng, C. Object detection based on RGC mask R-CNN. IET Image Process. 2020, 14, 1502–1508. [Google Scholar] [CrossRef]
  16. Liu, Y.; Sun, P.; Wergeles, N.; Shang, Y. A survey and performance evaluation of deep learning methods for small object detection. Expert Syst. Appl. 2021, 172, 114602. [Google Scholar] [CrossRef]
  17. Diwan, T.; Anirudh, G.; Tembhurne, J.V. Object detection using YOLO: Challenges, architectural successors, datasets and applications. Multimed. Tools Appl. 2022, 82, 9243–9275. [Google Scholar] [CrossRef]
  18. Mallikarjuna, G.C.P.; Hajare, R.; Pavan, P.S.S. Cognitive IoT System for visually impaired: Machine learning approach. Mater. Today Proc. 2022, 49, 529–535. [Google Scholar] [CrossRef]
  19. Dunai, L.D.; Lengua, I.L.; Tortajada, I.; Simon, F.B. Obstacle detectors for visually impaired people. In Proceedings of the 2014 International Conference on Optimization of Electrical and Electronic Equipment (OPTIM), Bran, Romania, 22–24 May 2014; pp. 809–816. [Google Scholar]
  20. Meshram, V.V.; Patil, K.; Meshram, V.A.; Shu, F.C. An astute assistive device for mobility and object recognition for visually impaired people. IEEE Trans. Hum. Mach. Syst. 2019, 49, 449–460. [Google Scholar] [CrossRef]
  21. Villanueva, J.; Farcy, R. Optical device indicating a safe free path to blind people. IEEE Trans. Instrum. Meas. 2011, 61, 170–177. [Google Scholar] [CrossRef]
  22. Mustapha, B.; Zayegh, A.; Begg, R.K. Ultrasonic and infrared sensors performance in a wireless obstacle detection system. In Proceedings of the 2013 1st International Conference on Artificial Intelligence, Modelling and Simulation (AIMS), Kota Kinabalu, Malaysia, 3–5 December 2013; pp. 487–492. [Google Scholar]
  23. Monteiro, J.; Aires, J.P.; Granada, R.; Barros, R.C.; Meneguzzi, F. Virtual guide dog: An application to support visually-impaired people through deep convolutional neural networks. In Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017; pp. 2267–2274. [Google Scholar]
  24. Pei, S.; Zhu, M. Real-time text detection and recognition. arXiv 2020, arXiv:2011.00380. [Google Scholar]
  25. Mukhiddinov, M.; Cho, J. Smart glass system using deep learning for the blind and visually impaired. Electronics 2021, 10, 2756. [Google Scholar] [CrossRef]
  26. Georgiou, K.; Larentzakis, A.V.; Khamis, N.N.; Alsuhaibani, G.I.; Alaska, Y.A.; Giallafos, E.J. Can wearable devices accurately measure heart rate variability? A systematic review. Folia Med. 2018, 60, 7–20. [Google Scholar] [CrossRef] [PubMed]
  27. Kumar, A.; Komaragiri, R.; Kumar, M. A review on computation methods used in photoplethysmography signal analysis for heart rate estimation. Arch. Comput. Methods Eng. 2021, 29, 921–940. [Google Scholar]
  28. Kyriacou, P.A. Introduction to photoplethysmography. In Photoplethysmography; Elsevier: Amsterdam, The Netherlands, 2022; pp. 1–16. [Google Scholar]
Figure 1. Proposed design of wearable smart glasses for object detection, obstacle distance measurement, and text recognition.
Figure 2. Proposed design of intelligent walking stick for heart rate detection, fall detection, body temperature measurement, and humidity-temperature monitoring.
Figure 3. AIoT-based framework of the proposed assistance system.
Figure 4. The architecture of YOLO V5 in this work.
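As a concrete illustration of the detection step, the sketch below runs a stock YOLOv5 model through PyTorch Hub. It is a minimal example under stated assumptions: the standard Ultralytics yolov5s weights stand in for the network trained in this work, and street.jpg is a placeholder image, not one of the paper's test scenes.

```python
import torch

# Load the stock YOLOv5s model from PyTorch Hub (assumption: the paper's
# detector follows the standard Ultralytics release, not a custom fork).
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

results = model('street.jpg')           # 'street.jpg' is a placeholder image
detections = results.pandas().xyxy[0]   # boxes, confidences, class names
print(detections[['name', 'confidence']])
```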
Figure 5. Obstacle distance measurement based on binocular vision ranging method.
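The ranging geometry in Figure 5 rests on the standard stereo relation Z = fB/d, where f is the focal length in pixels, B the camera baseline, and d the disparity of the matched point. A minimal sketch of this relation follows; the calibration values and the helper name are illustrative assumptions, not the glasses' actual parameters.

```python
# Depth from stereo disparity: Z = f * B / d (classic binocular ranging).
FOCAL_LENGTH_PX = 700.0   # focal length in pixels (assumed value)
BASELINE_M = 0.06         # spacing between the two cameras (assumed value)

def depth_from_disparity(disparity_px: float) -> float:
    """Return obstacle distance in metres from a pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

# With these parameters, a 21 px disparity corresponds to a 2 m obstacle.
print(depth_from_disparity(21.0))  # -> 2.0
```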
Figure 6. The flow chart of text recognition in this work.
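A minimal sketch of the recognition call in Figure 6, assuming the Baidu OCR Python SDK (baidu-aip); the credentials and the image path are placeholders for values issued by the Baidu AI console.

```python
from aip import AipOcr  # Baidu OCR SDK: pip install baidu-aip

# Placeholder credentials; real values come from the Baidu AI console.
client = AipOcr('your_app_id', 'your_api_key', 'your_secret_key')

with open('page.jpg', 'rb') as f:            # 'page.jpg' is a placeholder
    result = client.basicGeneral(f.read())   # general-purpose text recognition

text = ' '.join(block['words'] for block in result.get('words_result', []))
print(text)  # recognized text, ready for the voice assistant to read aloud
```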
Figure 7. The feature points in the heart rate detection.
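The feature points in Figure 7 mark pulse-wave peaks, from which the beats-per-minute value follows by counting peaks over a time window. The sketch below illustrates that idea on a synthetic signal; the sampling rate and waveform are assumptions standing in for real sensor data.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100                        # sampling rate in Hz (assumed)
t = np.arange(0, 5, 1 / fs)     # a 5 s window
signal = np.sin(2 * np.pi * 1.2 * t)  # synthetic 1.2 Hz pulse wave (~72 BPM)

# Require a refractory gap between beats so noise cannot double-count peaks.
peaks, _ = find_peaks(signal, distance=fs * 0.4)

bpm = len(peaks) * (60 / 5)     # peaks per window, scaled to one minute
print(bpm)                      # -> 72.0
```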
Figure 8. Three-axis plot derived from three Euler angles for fall detection.
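Figure 8 suggests a threshold test on the Euler angles: a fall is flagged when the stick tilts well past its normal operating range. A sketch of the principle follows, with an assumed 60° threshold and illustrative helper name rather than the paper's calibrated values.

```python
# Threshold-based fall check over Euler angles (sketch of the principle only;
# the 60-degree threshold is an assumption, not the paper's tuned value).
def is_fall(pitch_deg: float, roll_deg: float, threshold_deg: float = 60.0) -> bool:
    """Flag a fall when the stick tilts past the threshold on either axis."""
    return abs(pitch_deg) > threshold_deg or abs(roll_deg) > threshold_deg

samples = [(5.2, 3.1), (12.0, 8.4), (72.5, 15.0)]  # (pitch, roll) readings
print([is_fall(p, r) for p, r in samples])          # -> [False, False, True]
```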
Figure 9. Example results of object detection and obstacle distance measurement using wearable smart glasses for the indoor environment.
Figure 10. Example results of object detection and obstacle distance measurement using wearable smart glasses for the outdoor environment.
Figure 11. Example results of text recognition. The left side displays the original content and the right side shows the outcomes produced by the wearable smart glasses embedded with the Baidu OCR engine.
Figure 12. The homepage of the app, and the example results of body temperature measurement, real-time heart rate detection, and humidity monitoring.
Table 1. Performances of object detection under different cases.

Cases | Number of Items | Actual Object | Predicted Object | Failure Detection
1 | One | Person | Person | None
2 | One | Cup | Cup | None
3 | One | Pen | Pen | None
4 | One | Keyboard | Keyboard | None
5 | One | Monitor | Monitor | None
6 | Two | Person, Bottle | Person, Bottle | None
7 | Two | Chair, Mouse | Chair, Mouse | None
8 | Three | Book, Pen, Phone | Book, Pen, Phone | None
9 | Three | Person, Cup, Chair | Person, Cup, Chair | None
10 | Four | Person, Mouse, Monitor, Keyboard | Person, Mouse, TV, Keyboard | Monitor
11 | Four | Person, Bottle, Chair, Phone | Person, Bottle, Chair, Phone | None
12 | Four | Book, Pen, Mouse, Keyboard | Book, Pen, Mouse, Keyboard | None
13 | Five | Person, Bottle, Chair, Mouse, Monitor | Person, Bottle, Chair, Mouse, TV | Monitor
14 | One | Car | Car | None
15 | Two | Bicycle, Motorcycle | Bicycle, Motorcycle | None
16 | Two | Person, Car | Person, Car | None
17 | Three | Person, Car, Tree | Person, Car, Tree | None
18 | Three | Person, Bicycle, Car | Person, Bicycle, Car | None
19 | Three | Person, Motorcycle, Car | Person, Bicycle, Car | Motorcycle
20 | Five | Person, Tree, Car, Bicycle, Motorcycle | Person, Tree, Car, Bicycle | Motorcycle
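Counted per item rather than per case, Table 1 covers 51 objects, four of which (Monitor twice, Motorcycle twice) were misdetected, giving an item-level accuracy of 47/51 ≈ 92.16%. The short check below reproduces that figure from the table's case layout.

```python
# Item counts per group of cases in Table 1:
# cases 1-5 (1 item), 6-7 (2), 8-9 (3), 10-12 (4), 13 (5),
# case 14 (1), 15-16 (2), 17-19 (3), 20 (5).
total_items = 5*1 + 2*2 + 2*3 + 3*4 + 1*5 + 1*1 + 2*2 + 3*3 + 1*5  # = 51
failures = 4  # Monitor misread as TV twice, Motorcycle missed twice

print(round((total_items - failures) / total_items * 100, 2))  # -> 92.16
```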
Table 2. Performances of text recognition under different cases.

Cases | Original Number of Words | Correct Number of Words | Accuracy (%)
1 | 363 | 363 | 100
2 | 412 | 412 | 100
3 | 480 | 480 | 100
4 | 557 | 557 | 100
5 | 614 | 614 | 100
6 | 672 | 672 | 100
7 | 754 | 754 | 100
8 | 803 | 800 | 99.63
9 | 872 | 869 | 99.66
10 | 956 | 954 | 99.79
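Averaging the ten per-case values in Table 2 yields the overall text-recognition accuracy:

```python
# Per-case accuracies from Table 2: seven perfect cases plus three near-misses.
accuracies = [100] * 7 + [99.63, 99.66, 99.79]
print(round(sum(accuracies) / len(accuracies), 2))  # -> 99.91
```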
Table 3. Performances of heart rate detection under different cases.

Cases | States | Age | Gender | Heart Rate Detected by Stick (BPM) | Heart Rate Detected by Xiaomi Device (BPM) | Deviation Rates (%)
1 | Meditation 30 s | 20 | Female | 72.54 | 72.02 | 0.72
2 | Meditation 30 s | 22 | Male | 83.89 | 82.21 | 2.04
3 | Meditation 30 s | 45 | Female | 69.66 | 71.48 | 2.55
4 | Meditation 30 s | 48 | Male | 75.28 | 78.03 | 3.52
5 | Walking 30 s | 20 | Female | 82.48 | 79.76 | 3.41
6 | Walking 30 s | 22 | Male | 86.31 | 87.02 | 0.82
7 | Walking 30 s | 45 | Female | 74.50 | 76.11 | 2.11
8 | Walking 30 s | 48 | Male | 86.08 | 85.27 | 0.95
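The deviation rates in Tables 3, 5, and 6 follow the usual convention of the absolute difference taken relative to the reference device; for case 1, |72.54 − 72.02| / 72.02 × 100 ≈ 0.72%. A minimal helper (the function name is illustrative):

```python
def deviation_rate(measured: float, reference: float) -> float:
    """Percentage deviation of the stick's reading from the reference device."""
    return abs(measured - reference) / reference * 100

# Case 1 of Table 3: stick 72.54 BPM vs. Xiaomi device 72.02 BPM.
print(round(deviation_rate(72.54, 72.02), 2))  # -> 0.72
```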
Table 4. Performances of fall detection under three scenarios.

Scenarios | Number of Correct Detections | Number of Incorrect Detections | Number of Unresponsive Detections | Accuracy (%)
Walking | 86 | 10 | 4 | 86.00
Upstairs | 88 | 10 | 2 | 88.00
Downstairs | 88 | 8 | 4 | 88.00
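Each scenario in Table 4 comprises 100 trials, so the per-scenario accuracy equals the correct count, and pooling all 300 trials gives (86 + 88 + 88) / 300 ≈ 87.33% overall:

```python
# (correct, incorrect, unresponsive) counts per scenario from Table 4.
scenarios = {"Walking": (86, 10, 4), "Upstairs": (88, 10, 2), "Downstairs": (88, 8, 4)}

total = sum(sum(counts) for counts in scenarios.values())   # 300 trials
correct = sum(counts[0] for counts in scenarios.values())   # 262 correct
print(round(correct / total * 100, 2))                      # -> 87.33
```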
Table 5. Performances of body temperature measurement from four subjects.

Subjects | Age | Gender | Body Temperature Measured by Stick (°C) | Body Temperature Measured by Medical Forehead Thermometer (°C) | Deviation Rates (%)
A | 20 | Female | 36.47 | 36.44 | 0.08
B | 22 | Male | 36.80 | 36.73 | 0.19
C | 45 | Female | 36.63 | 36.67 | 0.11
D | 48 | Male | 36.86 | 36.81 | 0.14
Table 6. Performances of humidity-temperature monitoring under different cases.

Days | Temperature Measured by Stick (°C) | Humidity Measured by Stick (%RH) | Temperature Measured by Xiaomi Device (°C) | Humidity Measured by Xiaomi Device (%RH) | Deviation Rates of Temperature (%) | Deviation Rates of Humidity (%)
1 | 16.7 | 49 | 16.4 | 50 | 1.83 | 2.00
2 | 19.2 | 57 | 19.3 | 57 | 0.52 | 0.00
3 | 17.0 | 62 | 17.5 | 64 | 2.86 | 3.13
4 | 20.1 | 51 | 20.0 | 51 | 0.50 | 0.00
5 | 17.8 | 55 | 17.9 | 54 | 0.56 | 1.86
6 | 18.9 | 60 | 18.8 | 59 | 0.53 | 1.69
7 | 17.5 | 61 | 17.7 | 61 | 1.13 | 0.00
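The same convention reproduces the environmental deviation rates; for example, day 3 of Table 6:

```python
# Day 3 of Table 6: stick 17.0 C / 62 %RH vs. Xiaomi device 17.5 C / 64 %RH.
dev_temp = abs(17.0 - 17.5) / 17.5 * 100   # 2.857... -> 2.86 in the table
dev_hum = abs(62 - 64) / 64 * 100          # 3.125   -> 3.13 in the table
print(dev_temp, dev_hum)
```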