Article

An ARCore Based User Centric Assistive Navigation System for Visually Impaired People

1 Department of Industrial Design, Guangdong University of Technology, Guangzhou 510006, China
2 School of Industrial Design, Georgia Institute of Technology, GA 30332, USA
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(5), 989; https://doi.org/10.3390/app9050989
Submission received: 17 February 2019 / Revised: 6 March 2019 / Accepted: 6 March 2019 / Published: 9 March 2019
(This article belongs to the Section Electrical, Electronics and Communications Engineering)

Featured Application

The navigation system can be implemented on smartphones. With affordable haptic accessories, it helps visually impaired people navigate indoors without using GPS or wireless beacons. Meanwhile, the advanced path planning in the system benefits visually impaired navigation by minimizing the possibility of collision in practice. Moreover, the haptic interaction allows human-centric, real-time delivery of motion instructions, overcoming the limitations of conventional turn-by-turn waypoint-finding instructions. Since the system prototype has been developed and tested, a commercial application that helps visually impaired people in daily life can be expected.

Abstract

In this work, we propose an assistive navigation system for visually impaired people (ANSVIP) that takes advantage of ARCore to acquire robust computer vision-based localization. To complete the system, we propose adaptive artificial potential field (AAPF) path planning that considers both efficiency and safety. We also propose a dual-channel human–machine interaction mechanism, which delivers accurate and continuous directional micro-instructions via a haptic interface, and macro-scale long-term planning and situational awareness via audio. Our system user-centrically incorporates haptic interfaces to provide fluent and continuous guidance superior to the conventional turn-by-turn audio-guiding method; moreover, the continuous guidance keeps the path under complete control in avoiding obstacles and risky places. The system prototype is implemented with full functionality. Unit tests and simulations are conducted to evaluate the localization, path planning, and human–machine interaction, and the results show that the proposed solutions are superior to present state-of-the-art solutions. Finally, integrated tests are carried out with low-vision and blind subjects to verify the proposed system.


1. Introduction

According to statistics presented by the World Health Organization in October 2017, there are more than 253 million visually impaired people worldwide. Compared to normally sighted people, they cannot access sufficient visual cues about their surroundings due to weakness in visual perception. Consequently, visually impaired people face challenges in numerous aspects of daily life, including traveling, learning, entertainment, socializing, and working.
Visually impaired people have a strong dependency on travel aids. Self-driving vehicles have already achieved SAE (Society of Automotive Engineers) Level 3 autonomy, which allows the vehicle to make driving decisions autonomously based on machine cognition. Autonomous robots and drones have also been dispatched for unmanned tasks. Clearly, the advances in robotics, computer vision, GIS (Geographic Information System), and sensors allow integrated smart systems to perform mapping, positioning, and decision-making while operating in urban areas.
Human beings have the ability to interpret the surrounding environment using their sensory organs. Over 90% of the information transmitted to the brain is visual, and the brain processes images tens of thousands of times faster than text, which is why human beings are often called visual creatures. When traveling, visually impaired people therefore face difficulties imposed by their visual impairment [1].
Traditional assistive solutions for visually impaired people include white canes, guide dogs, and volunteers. However, each of these solutions has its own restrictions: they work only in certain situations, offer limited capability, or are expensive in terms of extra manpower.
Modern assistive solutions for visually impaired people borrow power from mobile computing, robotics, and autonomous technology. They are implemented in various forms such as mobile terminals, portable computers, wearable sensor stations, and indispensable accessories. Most of these devices use computer vision or GIS/GPS to understand the surroundings, acquire a real-time location, and use turn-by-turn commands to guide the user. However, turn-by-turn commands are difficult for users to follow.
In this work, we propose an assistive navigation system for visually impaired people (ANSVIP, see Figure 1) using ARCore area learning; we introduce an adaptive artificial potential field path-planning mechanism that generates smooth and safe paths; and we design a user-centric dual-channel interaction that uses haptic sensors to deliver real-time traction information to the user. To verify the design, we implement the proposed system prototype with full functionality and test it with blindfolded and blind subjects.

2. Related Works

2.1. Assistive Navigation System Frameworks

Recent advances in sensor technology support the design and integration of portable assistive navigation. Katz [2] designed an assistive device that aids in macro-navigation and micro-obstacle avoidance. The prototype ran on a backpack-mounted laptop equipped with a stereo camera and audio sensors. Zhang [3] proposed a hybrid assistive system consisting of a laptop with a head-mounted web camera and a belt-mounted depth camera along with an IMU (Inertial Measurement Unit). They used a robot operating system to connect and manage the devices and ultimately help visually impaired people when roaming indoors. Ahmetovic [4] used a smartphone as the carrier of the system, but a considerable number of beacons had to be deployed beforehand to support the system. Furthermore, Bing [5] used the Project Tango Tablet with no extra sensors to implement their proposed system. The system allowed the on-board depth sensor to support area learning, but the computational burden was heavy. Zhu [6] proposed and implemented the ASSIST system on a Project Tango smartphone. However, with the arrival of the more advanced Google ARCore, Project Tango, introduced in 2014, has been deprecated [6] since 2017, and smartphones with the required capability are no longer available. To the best of our knowledge, the proposed ANSVIP system is the first assistive human–machine system using an ARCore-supported commercial smartphone.

2.2. Positioning and Tracking

Most indoor positioning and tracking technologies were borrowed from autonomous robotics and computer vision. Methods using direct sensing and dead reckoning [7] are no longer competitive options. Yang [8] proposed a Bluetooth RSSI-based sensing framework to localize users in large public venues; a particle filter was applied to localize the subject. Jiao [9] used an RGB-D camera to reconstruct a semantic map to support indoor positioning, applying an artificial neural network to reflectivity measurements to improve the accuracy of 3D localization. Moreover, Xiao [10,11] and Zhang [3,12] used hybrid sensors to carry out fast visual odometry and feature-based loop closure in localization, while Zhu [6] and Bing [5,13] used area learning (a pattern recognition method) to bind subjects to areas of interest.

2.3. Path Planning

As the most popular path-planning method in robotics, A* is also extensively used by assistive systems. Xiao [10,11] and Zhang [3] used A* to connect areas of interest. Bing [5] applied greedy path planning in a label-intensive semantic map. Meanwhile, Zhao [14] suggested the potential field as a candidate for local planning. Paths in References [15,16] were planned on well-labeled maps using global optimal methods such as Dijkstra [17] and its variants. Most existing path-planning methods generate sharp turn-by-turn paths connecting corner or feature anchors. These paths are suitable for robots but provide a poor experience for human users.

2.4. Human–Machine Interaction

Most present systems use audio to deliver turn-by-turn directional instructions [5,11,18]. However, the human brain has its own understanding of position, direction, and velocity [19,20], which differs from that of a robot. Some recent works proposed using haptic interfaces for obstacle avoidance [1,3,6,10,13,21,22,23]. However, due to the restriction of turn-by-turn path planning, haptic interaction is unlikely to be used as a continuous path-following interface in assistive systems. Fernandes [7] used perceptual 3D audio as a solution; however, it was not easy to learn, and its accuracy in real, complex scenes needs to be improved. Ahmetovic [15] conducted a data-driven analysis which pointed out that turn-by-turn audio instructions have considerable drawbacks due to latency in interaction and limited information per instruction. Guerreiro [24] stated that turn-by-turn instructions may confuse visually impaired people's navigation behaviors and result in, for example, deviation from the intended path. These behaviors lead to errors, confusion, and longer recovery times back to the correct track, and they emphasize that more effective real-time feedback interfaces are necessary. Ahmetovic [15] studied the factors that cause rotation errors in turn-by-turn navigation; rotation errors accompanying audio instructions significantly affect user experience in navigation. Rector [22] compared the accuracy of three different human guidance interfaces and provided insights into the design of multimodal feedback mechanisms.

3. Design of ANSVIP

3.1. Information Flow in System Design

Most tasks in real life are difficult to accomplish using a single sensor or functional unit. Instead, they require collaboration (cooperation, competition, or coordination) from multiple functional units or sensing agents in intelligent systems to make the most favorable final synthesis. In such a collaborative context, each functional unit has its own duty and cooperates via the agreed channel, thereby maximizing the effectiveness of shared resources to achieve the goal.
Specifically, an assistive navigation system is composed of two parts: the cognitive system and the guidance system. The cognitive system aims to understand the world, including the micro-scale surroundings and the macro-scale scene; the guidance system aims to properly deliver the micro-scale guidance command, the macro-scale plan, as well as semantic scene understanding, to the user. The collaboration of the two allows the machine to understand the scene, and then the user acquires understanding from the machine, as shown in Figure 2.
The proposed ANSVIP uses an ARCore-supported smartphone as the major carrier and uses ARCore-based SLAM (Simultaneous Localization and Mapping) to track motion so as to create a scene understanding along with mapping. The human-scale understanding of motion and space is processed to produce a short and safe path towards the goal. The corresponding micro-motion guidance is delivered to the user using haptic interaction, while the macro-path clues are delivered using audio interaction.
Based on the information flow in an assistive navigation system, we design the ANSVIP structure as follows:
Firstly, the system should be fully aware of the information related to the user's location during navigation. Unlike GPS-based solutions that are commonly used outdoors, our system has to use computer vision-based SLAM since indoor GPS signals are unreliable. The SLAM is based on Google ARCore, which integrates the vision and inertial sensor hardware to support area learning.
Secondly, the system should be capable of conveying the abstracted systemic cognition to the user. Unlike the conventional exclusive audio interaction, we propose a haptic-based cooperative mechanism. This allows us to replace the popular turn-by-turn guidance with a more continuous motion guidance.
The working logic among the ANSVIP components is shown in Figure 3. Details of the major components are discussed in the following subsections.

3.2. Real-Time Area Learning-Based Localization

The system relies on existing indoor scenario CAD maps, which are available as escape maps near elevators (as requested by fire departments). The map we use in this study is presented in Figure 4. We label the area of interest on the map so as to allow the system to understand navigation requests and to plan the path accordingly.
Google ARCore is used to track the pose of the system during navigation. Sparse features are collected and stored in an area description dataset and subsequently used for re-localization. Specifically, a normally sighted person has to build the sparse map of the indoor scenario by running the ARCore SLAM in advance. The assistive system is then able to re-localize itself on the pre-built map after entering the scenario: by observing and recognizing the labeled traceable objects, the system re-localizes itself while roaming within the mapped area. However, the mapping between points in the system's feature-based map and those on the scenario CAD map has to be obtained.
We use a singular value decomposition (SVD) method to find the transformation matrix A. Two groups of corresponding feature point sets are used to find the homogeneous transformation matrix:
A = \begin{bmatrix} R_{2\times 2} & t_{2\times 1} \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & t_x \\ \sin\theta & \cos\theta & t_y \\ 0 & 0 & 1 \end{bmatrix}. (1)
Let l_n = [x_n, y_n]^T denote the point set in the feature map, and let p_n = [i_n, j_n]^T denote the corresponding point set on the scenario CAD map. We use the least squares method to find the rotation R and translation t as follows:
(R, t) = \arg\min_{R,t} \sum_{i=1}^{N} \left\| R\,p_i + t - l_i \right\|^2. (2)
Denoting \bar{l} = \frac{1}{N}\sum_{i=1}^{N} l_i and \bar{p} = \frac{1}{N}\sum_{i=1}^{N} p_i, Equation (2) can be written in terms of the centered points \bar{l}_i = l_i - \bar{l} and \bar{p}_i = p_i - \bar{p} as
R = \arg\min_{R} \sum_{i=1}^{N} \left\| R\,\bar{p}_i - \bar{l}_i \right\|^2. (3)
Writing the problem as the linear system M x = b, with
M = \begin{bmatrix} x_1 & -y_1 & 1 & 0 \\ y_1 & x_1 & 0 & 1 \\ \vdots & \vdots & \vdots & \vdots \\ x_n & -y_n & 1 & 0 \\ y_n & x_n & 0 & 1 \end{bmatrix}, (4)
x = [\cos\theta, \sin\theta, t_x, t_y]^T, (5)
b = [i_1, j_1, \ldots, i_n, j_n]^T, (6)
and using SVD to decompose M, we obtain
M_{N\times 4} = U_{N\times N}\, S_{N\times 4}\, V^T_{4\times 4}, (7)
where U denotes the eigenvector matrix of M M^T, S denotes the diagonal matrix of singular values \delta_i, and V is the eigenvector matrix of M^T M. The transformation A in Equation (1) is then obtained from
x = V\,\mathrm{diag}(\delta_1^{-1}, \ldots, \delta_4^{-1})\, U^T b. (8)
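For concreteness, the alignment above can be sketched in a few lines. The following is a minimal illustration assuming NumPy; the function and variable names are our own choices for illustration rather than the actual implementation. It maps feature-map coordinates onto the CAD map using the pseudo-inverse of Equations (4)–(8):

```python
import numpy as np

def fit_map_transform(feature_pts, cad_pts):
    """Fit the 2D rigid transform (theta, tx, ty) that maps feature-map points
    onto the CAD map by solving M x = b with the SVD pseudo-inverse."""
    xs, ys = feature_pts[:, 0], feature_pts[:, 1]
    n = len(xs)
    # Two rows per correspondence: [x, -y, 1, 0] and [y, x, 0, 1]
    M = np.zeros((2 * n, 4))
    M[0::2] = np.column_stack([xs, -ys, np.ones(n), np.zeros(n)])
    M[1::2] = np.column_stack([ys,  xs, np.zeros(n), np.ones(n)])
    b = cad_pts.reshape(-1)                      # [i1, j1, ..., iN, jN]
    # Pseudo-inverse via SVD: x = V diag(1/delta_i) U^T b
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    x = Vt.T @ np.diag(1.0 / s) @ U.T @ b        # [cos(theta), sin(theta), tx, ty]
    c, s_, tx, ty = x
    return np.array([[c, -s_, tx],
                     [s_,  c, ty],
                     [0.0, 0.0, 1.0]])
```

In practice, np.linalg.lstsq(M, b) returns the same least-squares solution directly; the explicit SVD is shown here only to mirror Equations (7) and (8).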

3.3. Area Learning in ANSVIP

ARCore is an augmented reality framework for smartphones running the Android operating system, and it is an advanced substitute for the deprecated Project Tango. Without an extra depth sensor, an ARCore-powered phone is able to track its pose and build a map of the surroundings in real time. In addition, ARCore enhances area-of-interest detection by estimating the average illumination intensity, which helps area segmentation during semantic mapping.
The smartphone is a remarkable feat of engineering: it integrates a great number of sensors, such as a gyroscope, camera, and GPS, into a small slab. Specifically, in our work, a HUAWEI P20 with a Kirin 970 CPU, gravity sensor, ambient light sensor, proximity sensor, gyroscope, and compass is used.

3.4. Adaptive Artificial Potential Field-Based Path Planning

In indoor navigation for the visually impaired, the path planning has to consider both efficiency and safety. Specifically, our path planning considers the issues that follow.
1. The path should be planned to stay away from obstacles and risks: Whereas conventional robot path planning prefers the shortest path, the assistive system has further requirements. For visually impaired users, the path should stay away from obstacles and risks such as walls, pillars, and uneven steps, which may cause falls [25].
2. The path and guidance shall be updated in real time: Unlike autonomous robot systems, the assistive system cannot expect visually impaired users to proceed along the planned path accordingly. When the user deviates from the planned path, there should be a corresponding real-time path evolution instead of asking the user to return to the planned track.
3. The mechanism shall be flexible to scale up with new elements: The path-planning algorithm should be able to easily expand with new elements, such as dynamic obstacle avoidance, functional unit integration, step warning, and extreme case re-planning.
4. The path shall be planned in a human-friendly manner: Unlike robots, visually impaired users are unable to grasp precise turning angles, and thus, it is difficult for them to follow conventional turn-by-turn paths [15]. Qualitative direction guidance is more suitable. Users prefer continuous guidance in navigation and a generally smooth plan.
The artificial potential field is a suitable candidate for the above issues and challenges, since it offers a simple structure, strong practicality, ease of implementation, and flexibility for expansion [14,17].
Therefore, we propose an adaptive artificial potential field path-planning mechanism for path generation.
Specifically, the target (goal) is considered an attractive potential, while walls are repulsive. The potential fields are the combination of first-order attractive and repulsive potentials:
U = U_{att} + U_{rep}, (9)
U_{att}(X_{current}) = k\,\rho(X_{current}, X_{target}), (10)
U_{rep}(X_{current}) = \begin{cases} \eta\left(\dfrac{1}{\rho(X_{current}, X_{obs})} - \dfrac{1}{\rho_0}\right) & \text{if } \rho(X_{current}, X_{obs}) \le \rho_0 \\ 0 & \text{if } \rho(X_{current}, X_{obs}) > \rho_0 \end{cases}, (11)
where \eta denotes the repulsive factor, \rho denotes a distance function, and \rho_0 denotes the effective radius. A path can then be generated by following the potential gradients.
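As an illustration of Equations (9)–(11), the sketch below evaluates the combined potential and takes one descent step by sampling candidate headings. It assumes obstacles are represented as a 2D point cloud sampled from the walls on the CAD map, and the parameter values are placeholders rather than those used in ANSVIP:

```python
import numpy as np

def apf_potential(pos, target, obstacles, k=1.0, eta=5.0, rho0=2.0):
    """First-order attractive + repulsive potential (Equations (9)-(11)).
    pos, target: (x, y) arrays; obstacles: (N, 2) array of wall sample points."""
    u_att = k * np.linalg.norm(pos - target)
    rho = np.linalg.norm(obstacles - pos, axis=1)   # distance to every obstacle point
    near = rho <= rho0                              # only obstacles within the radius repel
    u_rep = np.sum(eta * (1.0 / rho[near] - 1.0 / rho0))
    return u_att + u_rep

def apf_step(pos, target, obstacles, step=0.25, **kw):
    """Move one step in the direction that minimises the sampled potential
    (a simple discrete approximation of gradient descent)."""
    headings = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
    candidates = pos + step * np.column_stack([np.cos(headings), np.sin(headings)])
    potentials = [apf_potential(c, target, obstacles, **kw) for c in candidates]
    return candidates[int(np.argmin(potentials))]
```

Iterating apf_step from the current position toward the target produces the raw (unsmoothed) AAPF path.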
However, local minima may block the path or cause redundant travel costs. Thus, we use the path length of the local-minimum-immune A* algorithm to control ρ_0 and solve this problem: ρ_0 is reduced by Δρ (ρ_0 ← ρ_0 − Δρ) as long as C_AAPF > λ·C_A*, where λ denotes the control factor, and C_AAPF and C_A* denote the path lengths of the adaptive artificial potential field (AAPF) and A* from the current position, respectively. A sliding window is used to smooth the path to support and enhance the experience of motion guidance in human–machine interaction (Figure 5):
X(i) = \frac{1}{2N+1}\left( X(i+N) + X(i+N-1) + \cdots + X(i-N) \right). (12)
A case of a smoothed path is shown in Figure 5. Since the plan is discrete, the path (red dotted) is planned in taxicab style before smoothing. The sliding window described in Equation (12) updates each point on the path by averaging its position with those of the 2N nearest points on the path. Consequently, the path (dark curve) is smooth after the process.
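Two small routines make the adaptive control of ρ_0 and the smoothing of Equation (12) concrete. This is a sketch under our own assumptions: plan_with_rho0 stands for a hypothetical planner call that returns the AAPF path length for a given ρ_0 (or None if a local minimum blocks the path), and the λ, Δρ, and N values are illustrative rather than the ones used in the paper:

```python
import numpy as np

def adapt_rho0(plan_with_rho0, astar_length, rho0=2.0, delta_rho=0.25,
               lam=1.5, rho_min=0.5):
    """Shrink the repulsive radius rho0 while the AAPF path is blocked or
    longer than lambda times the A* reference length (Section 3.4)."""
    cost = plan_with_rho0(rho0)
    while (cost is None or cost > lam * astar_length) and rho0 > rho_min:
        rho0 -= delta_rho
        cost = plan_with_rho0(rho0)
    return rho0

def smooth_path(path, N=3):
    """Sliding-window smoothing of Equation (12): each waypoint becomes the
    mean of itself and its 2N neighbours along the path."""
    path = np.asarray(path, dtype=float)
    smoothed = path.copy()
    for i in range(N, len(path) - N):
        smoothed[i] = path[i - N:i + N + 1].mean(axis=0)
    return smoothed
```

The endpoints of the path are left untouched by smooth_path so that the start and goal positions are preserved.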

3.5. Dual-Channel Human–Machine Interaction

The information transfer between the user and the system relies on human–machine interaction (HMI). The HMI in an assistive navigation system has certain unique characteristics. First, the HMI does not rely on visual cognition. Second, the HMI is highly task-oriented. Third, different types of information have distinct delivery requirements regarding urgency and accuracy. The most popular audio interaction for assistive navigation systems [16,26,27,28] suffers from the following aspects:
Instruction delay: Instruction delivery is not instantaneous, and the latency becomes a bottleneck when dealing with urgent interaction requests, which is critical in navigation.
Limited information: The amount of information per message/second is very limited and tends to cause ambiguity, which makes accomplishing tasks with multiple semantics difficult.
Vulnerable to interference: The user may not be able to access multiple instructions simultaneously, and environmental sounds may cause interference.
Result-oriented instructions: The conventional graphical interaction provides many individual small tasks to users, allowing them to choose among different combinations to achieve their goals. Audio instructions are usually goal-driven and result-oriented, and they are weak in procedure-oriented interaction tasks.
Thus, we design a hybrid haptic interaction mechanism as the major interface to deliver navigation instructions, especially micro-motion instructions. Audio is used to deliver less time-critical, macro-level informative messages.
After a path to the target is determined, motion guidance is generated as shown in Figure 6.
To deliver the motion guidance in real time via haptic interaction, numerous solutions are possible. In this work, we design haptic gloves as shown in Figure 7.
The left glove guides the motion, and the right glove warns of obstacles. Using the middle finger as the subject's heading reference, the directional motion guidance can be delivered to the user as soon as the motion plan is made.
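The paper specifies only that the middle finger marks the user's heading; the finger-to-direction layout below is therefore our assumption for illustration, not the authors' design. A minimal sketch of mapping the relative bearing of the next guidance point to one of five assumed vibration motors:

```python
# Hypothetical layout: one vibration motor per finger of the left glove,
# with the middle finger pointing straight ahead along the user's heading.
FINGER_BEARINGS = {"thumb": -90, "index": -45, "middle": 0, "ring": 45, "little": 90}

def select_motor(user_heading_deg, waypoint_bearing_deg):
    """Pick the finger whose assumed bearing is closest to the direction of
    the next guidance point relative to the user's current heading."""
    rel = (waypoint_bearing_deg - user_heading_deg + 180.0) % 360.0 - 180.0
    return min(FINGER_BEARINGS, key=lambda f: abs(FINGER_BEARINGS[f] - rel))

# Example: user facing 30 degrees, next guidance point at bearing 70 degrees -> "ring"
print(select_motor(30.0, 70.0))
```

In the actual system, the selected motor command would be sent over Bluetooth to the Arduino-driven glove each time the motion plan is updated.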

4. System Prototyping and Evaluation

4.1. System Prototyping

We use the HUAWEI P20 as the ARCore-supported smartphone, Arduino sensors to implement the haptic interactive glove, and the Baidu open API for speech recognition. The application is developed in Unity3D, with the Roberto Lopez Mendez ARCore SLAM applied as the base for visual odometry and area learning. Bluetooth is used to connect the smartphone and the accessory. The ready-to-work human–machine prototype is shown in Figure 8.

4.2. Localization

To validate the localization accuracy and reliability, we compare area learning-based localization with visual odometry in an indoor test. Two subjects wearing the system are asked to walk five times along a path in the corridor, one subject using area learning and the other using visual odometry (VO). The results in Figure 9 are consistent with our expectations: the VO trials suffer from accumulative errors, which cause localization drift; meanwhile, with the area learning method, there is some drift when passing corners, but the system swiftly corrects it by recognizing learned areas.

4.3. Path Planning

Simulation comparisons on four different path planning mechanisms are conducted: the adaptive artificial potential field (AAPF), the adaptive artificial potential field without a sliding window (AAPF/S), the artificial potential field without a repulsive force and sliding window (AAPF/RS), and the A* path planning.
On the map, we set the elevator’s location as the starting position. Then, 100 random destinations are generated outside a circle with a radius of 25 meters centered at the starting point, as shown in Figure 10. We use the four candidate path-planning mechanisms to generate the paths for the start–target pairs.
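The destination sampling can be reproduced with simple rejection sampling. In the sketch below, the map extent and elevator coordinates are placeholders (the paper samples on the labeled CAD map), and points falling inside walls are not excluded:

```python
import numpy as np

rng = np.random.default_rng(0)
start = np.array([12.0, 8.0])             # illustrative elevator position (m)
map_w, map_h, r_min = 60.0, 40.0, 25.0    # illustrative map extent and radius (m)

targets = []
while len(targets) < 100:
    p = rng.uniform([0.0, 0.0], [map_w, map_h])
    if np.linalg.norm(p - start) > r_min:  # keep only points outside the 25 m circle
        targets.append(p)
targets = np.array(targets)
```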
In Figure 11, we compare the path lengths generated by the four mechanisms. The path lengths of AAPF are always lower than those of AAPF/S and AAPF/RS because the sliding window turns the sharp corners on the path into filleted turns; therefore, the path length is shorter, as expected. The path lengths of A* are always the lowest and are the best among the four: since A* searches with a global view of the map, it is guaranteed to produce the shortest path. However, the path length differences between A* and AAPF are very small.
In Figure 12, we collect the discrete distances from the paths to the obstacles along the paths. It shows that AAPF and AAPF/S maintain a proper distance to obstacles, which is consistent with our design: the repulsive forces of obstacles keep the path away from them. AAPF/RS and A* have no such repulsive forces, and thus a good portion of their paths lies close to obstacles, which is not desirable in assistive navigation [6,11].
Although the path lengths of A* are slightly shorter than those of AAPF, considering that subjects in navigation are prone to experience risk and panic when risky places lie close to the path, AAPF outperforms A* by keeping the path safer.

4.4. Haptic Guidance

To verify the directional guidance of the haptic device, we carry out unit tests of the haptic guidance glove. The guidance glove on the left hand and the Arduino joystick to be controlled by the right hand are shown in Figure 13. A series of programmed guidance commands is stored and sent to the glove so that the subject feels the guidance. A blindfolded subject is told to use the joystick to reproduce the directional instructions received. The joystick behavior is recorded every half second.
In Figure 14, the input guidance commands are compared with the joystick records. There is clearly a latency between the input and the records, caused by three factors: the cognitive delay of human haptic sensibility, the delay from understanding the guidance to controlling the joystick, and the delay between the joystick action and its recording. The average delay is less than 0.4 s, which is acceptable in most cases. Note that the delays in later trials are much smaller than those in earlier trials. One reason is that the subject becomes familiar with the haptic interaction; in other words, after a few attempts, the subject is able to convert the data perceived through the haptic interaction into their own perception efficiently and quickly. Thus, a cooperative cognition is built between the assistive system, the haptic interaction, and human perception.
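The paper does not state how the delay between command and joystick response is computed. One plausible way, shown below as an assumption rather than the authors' method, is to take the lag that maximizes the cross-correlation between the commanded direction signal and the joystick record, both sampled every 0.5 s:

```python
import numpy as np

def estimate_delay(commands, responses, dt=0.5, max_lag_s=2.0):
    """Estimate the haptic-response latency as the lag (in seconds) that
    maximises the cross-correlation between the commanded direction signal
    and the joystick record, both sampled every dt seconds."""
    commands = np.asarray(commands, float) - np.mean(commands)
    responses = np.asarray(responses, float) - np.mean(responses)
    max_lag = int(max_lag_s / dt)
    corr = [np.dot(commands[:len(commands) - k], responses[k:])
            for k in range(max_lag + 1)]          # shift the response back by k samples
    return dt * int(np.argmax(corr))
```

With 0.5 s sampling, the resolution of this estimate is limited to half a second; resolving delays near the reported 0.4 s would require interpolating the recorded signals to a finer time grid before correlating.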

4.5. Integration Test

To verify the prototype system, we conduct target-oriented navigation tests with three low-vision subjects and one blind subject. To evaluate the human–machine interaction in our system, we administer the navigation with two different interaction mechanisms: one with pure audio instructions [3] and the other with haptic instructions. Experience surveys are collected after the tests. A 5-minute tutorial on the navigation instructions is given prior to the tests, and all of the subjects are told that security personnel will intervene before any collision or risk occurs, which gives the users peace of mind.
After the test, all four subjects believed they successfully followed the instructions to reach the target (5/5); most subjects agreed that the instructions were very easy to understand (4.5/5); and all subjects agreed that their cognition of the haptic instructions improved shortly after beginning the experiment (5/5). Furthermore, all subjects agreed that the haptic instructions were less likely to cause hesitation than audio instructions (5/5); some subjects believed that they felt safer than expected (3.75/5); most believed that they had a better experience with haptic instructions than audio instructions in micro-guidance (4.75/5); and all believed that audio instructions were indispensable as macro-instructions (5/5). Two subjects believed the haptic glove would interfere with holding objects in daily life and suggested migrating the haptic component to the arm or the back of the hand.

5. Conclusions

In this work, we propose a human-centric navigation system to assist people with visual impairment while traveling indoors. The system uses a commercial smartphone as the carrier and Google ARCore vision SLAM for positioning. Compared with conventional visual odometry-supported travel aids, the system achieves better mapping and tracking. An adaptive artificial potential field-based path planning has been proposed for the system; it keeps the path away from obstacles so as to avoid risk and collision while generating a smooth path in real time. Finally, a dual-channel human–machine interaction mechanism is introduced. The system user-centrically incorporates haptic interfaces to provide fluent and continuous guidance superior to the conventional turn-by-turn audio-guiding method. The haptic interaction can be carried out via different candidate devices, but our proposed haptic gloves benefit from affordable cost and plug-and-play convenience.
Evaluation through field tests and simulations shows that the localization and path planning achieve the expected performance, and the proposed ANSVIP system is well received by visually impaired subjects.

Author Contributions

Conceptualization, X.Z.; Investigation, Y.Z.; Methodology, X.Z. and F.H.; Project administration, F.H.; Resources, Y.Z.; Software, X.Y.; Writing—original draft, X.Z.

Funding

This work was funded by the Humanity and Social Science Youth foundation of the Ministry of Education of China, grant number 18YJCZH249, 17YJCZH275.

Acknowledgments

The authors would like to thank Bing Li, Jizhong Xiao and Wei Wang for their insightful suggestions regarding this research. We thank LetPub for its linguistic assistance during the preparation of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Horton, E.L.; Renganathan, R.; Toth, B.N.; Cohen, A.J.; Bajcsy, A.V.; Bateman, A.; Jennings, M.C.; Khattar, A.; Kuo, R.S.; Lee, F.A.; et al. A review of principles in design and usability testing of tactile technology for individuals with visual impairments. Assist. Technol. 2017, 29, 28–36.
2. Katz, B.F.G.; Kammoun, S.; Parseihian, G.; Gutierrez, O.; Brilhault, A.; Auvray, M.; Truillet, P.; Denis, M.; Thorpe, S.; Jouffrais, C. NAVIG: Augmented reality guidance system for the visually impaired. Virtual Reality 2012, 16, 253–269.
3. Zhang, X. A Wearable Indoor Navigation System with Context Based Decision Making for Visually Impaired. Int. J. Adv. Robot. Autom. 2016, 1, 1–11.
4. Ahmetovic, D.; Gleason, C.; Kitani, K.M.; Takagi, H.; Asakawa, C. NavCog: Turn-by-turn smartphone navigation assistant for people with visual impairments or blindness. In Proceedings of the 13th Web for All Conference, Montreal, QC, Canada, 11–13 April 2016; pp. 90–99.
5. Bing, L.; Munoz, J.P.; Rong, X.; Chen, Q.; Xiao, J.; Tian, Y.; Arditi, A.; Yousuf, M. Vision-based Mobile Indoor Assistive Navigation Aid for Blind People. IEEE Trans. Mobile Comput. 2019, 18, 702–714.
6. Nair, V.; Budhai, M.; Olmschenk, G.; Seiple, W.H.; Zhu, Z. ASSIST: Personalized Indoor Navigation via Multimodal Sensors and High-Level Semantic Information. In Proceedings of the 2018 European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; Volume 11134, pp. 128–143.
7. Fernandes, H.; Costa, P.; Filipe, V.; Paredes, H.; Barroso, J. A review of assistive spatial orientation and navigation technologies for the visually impaired. In Universal Access in the Information Society; Springer: Berlin/Heidelberg, Germany, 2017.
8. Yang, Z.; Ganz, A. A Sensing Framework for Indoor Spatial Awareness for Blind and Visually Impaired Users. IEEE Access 2019, 7, 10343–10352.
9. Jiao, J.C.; Yuan, L.B.; Deng, Z.L.; Zhang, C.; Tang, W.H.; Wu, Q.; Jiao, J. A Smart Post-Rectification Algorithm Based on an ANN Considering Reflectivity and Distance for Indoor Scenario Reconstruction. IEEE Access 2018, 6, 58574–58586.
10. Joseph, S.L.; Xiao, J.Z.; Zhang, X.C.; Chawda, B.; Narang, K.; Rajput, N.; Mehta, S.; Subramaniam, L.V. Being Aware of the World: Toward Using Social Media to Support the Blind with Navigation. IEEE Trans. Hum.-Mach. Syst. 2015, 45, 399–405.
11. Xiao, J.; Joseph, S.L.; Zhang, X.; Li, B.; Li, X.; Zhang, J. An Assistive Navigation Framework for the Visually Impaired. IEEE Trans. Hum.-Mach. Syst. 2017, 45, 635–640.
12. Zhang, X.; Bing, L.; Joseph, S.L.; Xiao, J.; Yi, S.; Tian, Y.; Munoz, J.P.; Yi, C. A SLAM Based Semantic Indoor Navigation System for Visually Impaired Users. In Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics, Kowloon, China, 9–12 October 2015.
13. Bing, L.; Muñoz, J.P.; Rong, X.; Xiao, J.; Tian, Y.; Arditi, A. ISANA: Wearable Context-Aware Indoor Assistive Navigation with Obstacle Avoidance for the Blind. In Proceedings of the 2016 European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016.
14. Zhao, Y.; Zheng, Z.; Liu, Y. Survey on computational-intelligence-based UAV path planning. Knowl.-Based Syst. 2018, 158, 54–64.
15. Ahmetovic, D.; Oh, U.; Mascetti, S.; Asakawa, C. Turn Right: Analysis of Rotation Errors in Turn-by-Turn Navigation for Individuals with Visual Impairments. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '18), Galway, Ireland, 22–24 October 2018; pp. 333–339.
16. Balata, J.; Mikovec, Z.; Slavik, P. Landmark-enhanced route itineraries for navigation of blind pedestrians in urban environment. J. Multimodal User Interfaces 2018, 12, 181–198.
17. Soltani, A.R.; Tawfik, H.; Goulermas, J.Y.; Fernando, T. Path planning in construction sites: Performance evaluation of the Dijkstra, A*, and GA search algorithms. Adv. Eng. Inform. 2002, 16, 291–303.
18. Sato, D.; Oh, U.; Naito, K.; Takagi, H.; Kitani, K.; Asakawa, C. NavCog3: An Evaluation of a Smartphone-Based Blind Indoor Navigation Assistant with Semantic Features in a Large-Scale Environment. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility, Baltimore, MD, USA, 20 October–1 November 2017.
19. Epstein, R.A.; Patai, E.Z.; Julian, J.B.; Spiers, H.J. The cognitive map in humans: Spatial navigation and beyond. Nat. Neurosci. 2017, 20, 1504–1513.
20. Marianne, F.; Sturla, M.; Witter, M.P.; Moser, E.I.; May-Britt, M. Spatial representation in the entorhinal cortex. Science 2004, 305, 1258–1264.
21. Papadopoulos, K.; Koustriava, E.; Koukourikos, P.; Kartasidou, L.; Barouti, M.; Varveris, A.; Misiou, M.; Zacharogeorga, T.; Anastasiadis, T. Comparison of three orientation and mobility aids for individuals with blindness: Verbal description, audio-tactile map and audio-haptic map. Assist. Technol. 2017, 29, 1–7.
22. Rector, K.; Bartlett, R.; Mullan, S. Exploring Aural and Haptic Feedback for Visually Impaired People on a Track: A Wizard of Oz Study. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '18), Galway, Ireland, 22–24 October 2018.
23. Papadopoulos, K.; Koustriava, E.; Koukourikos, P. Orientation and mobility aids for individuals with blindness: Verbal description vs. audio-tactile map. Assist. Technol. 2018, 30, 191–200.
24. Guerreiro, J.; Ohn-Bar, E.; Ahmetovic, D.; Kitani, K.; Asakawa, C. How Context and User Behavior Affect Indoor Navigation Assistance for Blind People. In Proceedings of the 2018 Internet of Accessible Things, Lyon, France, 23–25 April 2018.
25. Kacorri, H.; Ohn-Bar, E.; Kitani, K.M.; Asakawa, C. Environmental Factors in Indoor Navigation Based on Real-World Trajectories of Blind Users. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. 1–12.
26. Boerema, S.T.; van Velsen, L.; Vollenbroek-Hutten, M.M.R.; Hermens, H.J. Value-based design for the elderly: An application in the field of mobility aids. Assist. Technol. 2017, 29, 76–84.
27. Mone, G. Feeling Sounds, Hearing Sights. Commun. ACM 2018, 61, 15–17.
28. Martins, L.B.; Lima, F.J. Analysis of Wayfinding Strategies of Blind People Using Tactile Maps. Procedia Manuf. 2015, 3, 6020–6027.
Figure 1. The components of the proposed ANSVIP system.
Figure 2. The information flow in the assistive system: The assistive system core aims to understand the world and translate the essential understanding to the user.
Figure 3. The working logic among the ANSVIP components: The physical components and soft components are shown on the left-hand side and right-hand side, respectively.
Figure 4. The digital CAD map before (left) and after (right) being labeled.
Figure 5. A case of a path smoothed by the sliding window: before smoothing (red dotted) versus after smoothing (dark curve).
Figure 6. The motion guidance is generated by intersecting the planned path and the awareness circle.
Figure 7. The design of the haptic gloves.
Figure 8. The implemented ANSVIP prototype with full functionality.
Figure 9. The ground truth and trajectories of the test trials.
Figure 10. The 100 generated destinations (stars) and the starting position (pentagram) on the map.
Figure 11. Simulation results on path planning cost.
Figure 12. Simulation results on distances to obstacles.
Figure 13. (Left) Prototype of the haptic glove. (Right) Joystick for test purposes.
Figure 14. Haptic glove guidance versus joystick records.
