Article

Enhancing Wayfinding Experience in Low-Vision Individuals through a Tailored Mobile Guidance Interface

Human-Centered Intelligent Systems Lab., Gwangju Institute of Science and Technology, Gwangju 61005, Republic of Korea
* Author to whom correspondence should be addressed.
Electronics 2023, 12(22), 4561; https://doi.org/10.3390/electronics12224561
Submission received: 19 September 2023 / Revised: 2 November 2023 / Accepted: 3 November 2023 / Published: 7 November 2023
(This article belongs to the Special Issue Perception and Interaction in Mixed, Augmented, and Virtual Reality)

Abstract

Individuals with low vision (LV) face daily wayfinding challenges, struggling with route establishment, direction recognition, and obstacle avoidance. Mobile navigation, though commonly relied upon, often clashes with LV visual conditions, particularly central scotomas caused by diseases such as age-related macular degeneration (AMD), hindering the relay of crucial wayfinding information. To address this, we introduce a novel guidance interface informed by a literature review. In a VR-based user study involving 25 participants under a simulated LV experience, our map guidance interface enabled quicker navigation and improved system usability and presence, with performance paralleling that of normal-vision individuals. This study demonstrates that modifying familiar navigation interfaces effectively resolves conflicts with visual impairment, enhancing outdoor navigation for individuals with LV and enabling wayfinding ease comparable to that of individuals with normal vision.

1. Introduction

Individuals with LV face daily challenges, even when utilizing optical aids such as glasses or lenses, owing to conditions that impair their sight [1]. Globally, more than 200 million people experience LV symptoms, and 65% of those with vision loss attributable to various conditions are aged over 50 [2]. These conditions hinder the ability of those with LV to perceive information or situations in their surroundings, thereby complicating several daily tasks, such as reading, driving, and finding objects.
Among these tasks, wayfinding has been identified as particularly challenging for individuals with LV [3]. This task requires individuals to gather environmental information, recognize their current location, establish a path to their destination, and ensure safe walking, avoiding collisions with obstacles and other pedestrians. Individuals with normal vision depend on their wide field of view and clear visual acuity for seamless navigation. In contrast, individuals with LV face challenges and experience elevated cognitive workload when performing a wayfinding task owing to their restricted visual function.
Addressing these challenges necessitates the use of assistive technologies to aid individuals with LV in wayfinding tasks. Given their difficulty accessing visual information, such as signs or route guidance, several assistive devices have been suggested to convey crucial wayfinding data through auditory or haptic feedback [4,5,6,7,8]. These methods have improved wayfinding capabilities for individuals with LV. However, those with LV often prefer to utilize their residual vision for daily tasks [3]. Therefore, utilizing visual feedback to enhance the usability of their residual vision, such as through mobile navigation applications or AR glasses, is deemed a more appropriate strategy for individuals with LV. In this regard, Zhao et al. [9] reported that there is no discernible difference in performance between visual and auditory feedback. Furthermore, individuals with LV experience a heightened sense of safety when provided with visual feedback. These findings have prompted the exploration of several studies focused on devices that offer visual cues for wayfinding tasks.
However, devices based on visual feedback face several practical limitations. The use of augmented reality (AR) cues through head-mounted displays (HMDs) for visual feedback poses challenges: AR glasses are yet to be widely adopted, and their real-world application still faces several technical issues. Optical see-through AR HMDs are challenging to use outdoors owing to visibility issues, while video see-through AR HMDs are unsuitable for daily use owing to their weight. In contrast, mobile navigation applications on smartphones prove practical for both indoor and outdoor environments, offering services across extensive regions [3]. Furthermore, individuals with LV often avoid assistive tools that are not commonly employed by the general population, and they instead frequently rely on smartphone navigation applications such as Google Maps when performing wayfinding tasks.
Even so, smartphone navigation applications still face limitations when employed for wayfinding by individuals with LV. Central vision loss, often manifesting as a scotoma and common in conditions such as AMD, is one of the most prevalent vision loss conditions among individuals with LV [10]. The user interface (UI) of navigation applications typically positions the user's current location, surrounding information, viewing direction, and desired path at the center of the display. Such an interface proves inefficient for individuals with AMD. Recognizing this challenge, attempts have been made to enhance the accessibility of smartphone application UIs for patients with AMD [11]. These enhancements enable easier access to visual information and smoother task execution for these individuals. However, there remains a limited understanding of the accessibility of wayfinding applications and of the impact of enhanced UIs on the user experience of individuals with LV.
In this study, we develop a new navigation interface, side UI for both map and AR types, to enhance the accessibility and visibility of wayfinding information provided by navigation applications for individuals with AMD. We assess the effectiveness of the proposed approach by comparing task performance and user experience between the traditional navigation UI and our proposed UI among visually impaired pedestrians. To achieve this, we conduct a study involving 25 participants in a virtual environment, and we compare their wayfinding task performance and their experiences using different guidance UIs.
Our findings demonstrate that the simple modification incorporated in the newly proposed side UI results in expedited navigation to destinations and a substantial enhancement of the overall user experience. Particularly noteworthy is the significant improvement observed in both wayfinding performance and experience under low-vision conditions when utilizing the modified map interface, the side map UI. However, the side AR UI did not manifest any discernible improvements in performance or user experience compared with the original AR guidance. These observations underscore the efficacy of the side map UI in enabling users with low vision to navigate outdoor environments more swiftly and comfortably. The wayfinding experience, on par with that of normal-vision conditions, signifies that the refined map interface effectively mitigates conflicts arising from vision loss. The proposed interface has thus showcased its potential as a visual guidance tool for individuals with AMD. If our interface modification strategy were implemented in widely used mobile navigation tools, we would anticipate a substantial enhancement in the practical wayfinding experience of individuals with AMD.
The remainder of this paper is organized as follows: Section 2 introduces the methods proposed to assist with wayfinding for individuals with LV and reviews the usability of these methods, summarizing the ways in which their findings are incorporated into this study. Section 3 provides a detailed explanation of the background behind the design of the newly proposed route guidance system, as well as the virtual wayfinding environment constructed for usability evaluation. In Section 4, we outline the design of the user study conducted to evaluate the usability of the guidance system, detailing the Research Questions and Research Hypotheses, as well as the independent and dependent variables. Section 5 reports the results of the user study, analyzing the findings in terms of wayfinding performance and user experience. Section 6 organizes the discussion based on the analysis results, stating the implications and contributions of this research. Finally, Section 7 concludes the study.

2. Related Works

In this section, we aim to review various prior works proposed to assist wayfinding for individuals with LV and explain the strategy of incorporating their findings into our work.

2.1. Wayfinding Assist System for People with LV

The wayfinding function is multifaceted, demanding a myriad of tasks and abilities. Wayfinding encompasses orientation, which involves recognizing the current location, destination, and required route, ensuring smooth transit to the destination [12,13]. Moreover, it entails safely navigating without colliding with other pedestrians or obstacles [14]. Because of their visual impairment, individuals with LV frequently struggle not just to perform each sub-task but also to execute multiple tasks simultaneously due to cognitive capacity limitations [15]. Consequently, several approaches have been explored to assist their wayfinding based on the feedback modality utilized.

2.1.1. Auditory and Haptic Feedback Systems

Auditory and haptic feedback types are frequently employed in assistive devices, considering the difficulties individuals with LV face in perceiving visual information. Additionally, devices providing multimodal feedback, integrating both auditory and haptic modalities, are also identifiable.
Auditory feedback conveys elements essential to wayfinding, such as the destination, waypoints, guidance direction, and obstacle information, in the audio modality to users with LV. It can be categorized into verbal audio descriptions [5,6,16], which can deliver detailed information but must carefully calibrate the level of detail to prevent task conflicts, and non-verbal audio feedback [17,18], which predominantly uses spatial audio for an intuitive perception of destinations and guidance directions. Similarly, haptic feedback offers assistance by translating necessary wayfinding information into haptic signals via wearable devices [7,8]. These wearables, adaptable to various body parts, require meticulous feedback design owing to the varied sensitivity of body parts. Devices that capitalize on multimodal feedback harness the advantages of both auditory and haptic feedback, presenting more comprehensive assistance to users with LV [4].

2.1.2. Audio Guidance Systems in Mobile Navigation

In the realm of guidance systems, several initiatives have harnessed the capabilities of mobile phones and their embedded sensors to enhance wayfinding for individuals with LV. Recognizing that mobile navigation devices stand out as the most accessible and frequently used tools for LV individuals in their daily lives [3], these systems have proposed instructions to aid wayfinding through the provision of spatial audio and verbal descriptions [19,20,21,22]. Ahmetovic et al. [19] introduced an innovative design for audio feedback, enabling LV users to continuously and accurately navigate paths with precise turns. This feedback has demonstrated substantial improvements over traditional auditory guidance, seamlessly assisting LV users in accurate turn-taking and path-following. While auditory and haptic feedback can intuitively and descriptively support wayfinding, they bear the drawback of impeding natural interaction with the surrounding environment. Given the inclination of individuals with LV to utilize residual vision and perform tasks in a natural manner [3], a need arises for assistive devices that employ visual feedback.

2.1.3. Visual Feedback Systems

Devices using visual modalities enhance the accessibility of visual information for users with LV, whose limited residual vision complicates the processing of such information. In many cases, these devices utilize AR cues on HMDs to make route guidance, waypoints, and destinations more perceivable [9,23,24,25]. While AR HMD-based path guidance systems significantly assist individuals with LV in wayfinding, they pose challenges in environments where AR cues are not clearly visible (e.g., outdoors under strong light). Furthermore, the proliferation of AR HMDs and glasses is still nascent, and users with LV exhibit a preference for mainstream devices [3]. This renders AR HMD-based systems impractical. Notably, mobile phone navigation applications are commonly used by individuals with LV for daily wayfinding [3]. Additionally, several studies have proposed mobile applications that offer visual guidance for individuals with LV [26,27]. Feedback from LV users who have experienced these tools consistently indicates that they find such systems both attractive and useful. Considering these aspects, our study aims to leverage visual-modality feedback in a practical mobile navigation application as an assistive tool for wayfinding. However, the navigation guidance interface must be refined to improve its accessibility for individuals with LV, ensuring efficient conveyance of essential wayfinding information.

2.2. Mobile Application Accessibility for People with Low Vision

Individuals with LV often encounter challenges in perceiving the visual information provided by mobile applications due to their own visual conditions. While the visual conditions experienced by those with low vision are diverse, issues such as blurry vision caused by cataracts can diminish their visual acuity. This reduction complicates their ability to gather information from mobile applications and impedes their interaction with interfaces [11,28]. To address this challenge, Choo et al. [28] proposed a design platform solution to yield an accessible mobile interface specifically tailored for those suffering from cataracts. Through this platform, UI expert designers were able to design mobile interfaces that individuals with blurry vision could use with ease.
Beyond cataracts, disorders such as AMD create a central scotoma, which hinders the use of mobile applications. Central vision represents the area with the highest visual acuity in the field of view; thus, individuals with AMD face difficulties when closely observing and interacting with mobile interfaces. To counteract these challenges, Hakobyan et al. [11] presented a novel mobile application interface, coupled with design guidelines and usability evaluations, aimed at facilitating easier usage by those with AMD. Their design approach prioritized simplicity in the mobile interface, ensuring that interface elements (e.g., the return button) were positioned at the display's periphery, away from the central scotoma. Actual patients with AMD who utilized this newly designed mobile interface reported better accessibility and found it more practical in their daily lives than traditional mobile applications.
Drawing upon these findings, we identified critical design considerations for mobile applications intended for users experiencing central vision loss and integrated these insights into our strategies for placing and adjusting navigation application elements (e.g., user location, forward direction, and route in the map type).

2.3. Low Vision Simulation

Simulating the experiences of individuals with disabilities is instrumental in fostering empathy among others and in designing or evaluating methods to alleviate the challenges posed by these disabilities [29]. Particularly in the context of simulating the visual conditions of individuals with LV, such tools are employed during the design of appropriate interfaces and in the evaluation of proposed interfaces and feedback mechanisms [28,30].
While LV simulations can be executed on static displays, numerous studies endeavor to recreate the practical experiences of individuals with LV through the use of HMDs or wearable glasses [28,30,31]. Zhang et al. [30] developed a wearable vision loss simulator using transparent displays, designed to familiarize designers with the experiences of individuals with low vision. This simulator can emulate both central and peripheral vision loss and recreates the reductions in visual acuity and field of view inherent to low-vision conditions without inducing motion sickness or visual fatigue, affirming its potential as a tool that allows users to empathize with the visual conditions of those with low vision. Additionally, Choo et al. [28] employed an HMD in a virtual environment to simulate the visual conditions of those experiencing cataracts. This aided designers in crafting mobile interfaces that were congruent with the visual conditions of individuals with LV, ensuring accessibility.
LV simulations can thus be employed to determine whether the interface of a tool is in conflict with the visual conditions of those with LV. Moreover, such simulations address a persistent challenge in traditional LV research: the diverse visual conditions of participants can often make statistical control elusive. By controlling the symptoms of the target eye disease uniformly, researchers can derive more general findings based on a larger participant pool.
Consequently, in the present study, we utilize simulation methods to ascertain whether a navigation interface, designed to accommodate central scotoma, clashes with the visual condition during the wayfinding process.

3. Method

This section describes the methods used in this study.

3.1. Mobile Navigation Interface Design for Individuals with AMD

The design guidelines proposed by Hakobyan et al. [11] suggest that the mobile interface should (1) avoid placing interface elements in the center and (2) be as clear and prominent as possible. Original AR guidance typically provides users with a single AR cue route that extends from the bottom to the center of the screen (see Figure 1A). In response to this, we designed a side AR interface, which offers side route guidance indicating the boundaries of the route to be traversed, rather than a central route (see Figure 1B). For the map-type navigation interface, the user location, the direction to move in, and the upcoming route are usually displayed in the center of the screen (see Figure 1C). Addressing this, we designed a side map interface, where the user location and the starting point of the route to be followed are displayed at the bottom rather than the center (see Figure 1D). While the conventional interface design can obscure essential wayfinding information due to central scotoma, utilizing the side interface approach helps to prevent conflicts between the scotoma and interface elements.
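To make the placement rule concrete, the sketch below (illustrative Python, not from the paper) tests whether a UI element's bounds intersect an elliptical central scotoma region in normalized screen coordinates; the ellipse radii and element sizes are assumed values, and the clamp-based closest-point test is exact for circles and only approximate for ellipses.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """UI element bounds in normalized screen coordinates (0 = left/top)."""
    x: float
    y: float
    w: float
    h: float

def conflicts_with_scotoma(elem: Rect, cx=0.5, cy=0.5, rx=0.20, ry=0.15) -> bool:
    """True if the element's closest point to the screen center lies inside an
    elliptical scotoma at (cx, cy) with radii (rx, ry); radii are assumed."""
    # Clamp the ellipse center to the rectangle to find the nearest point.
    px = min(max(cx, elem.x), elem.x + elem.w)
    py = min(max(cy, elem.y), elem.y + elem.h)
    # Point-in-ellipse test on that nearest point.
    return ((px - cx) / rx) ** 2 + ((py - cy) / ry) ** 2 <= 1.0

# A route marker placed at screen center conflicts with the scotoma,
# while the same marker anchored near the bottom edge (side UI) does not.
print(conflicts_with_scotoma(Rect(0.45, 0.45, 0.10, 0.10)))  # True
print(conflicts_with_scotoma(Rect(0.45, 0.88, 0.10, 0.10)))  # False
```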

3.2. LV Simulation

Individuals with AMD predominantly experience visual conditions such as a central scotoma and reduced saturation. To closely replicate the visual experience of individuals with LV who have central vision loss, our user study design was informed by a review of prior research that utilized LV simulations [28,30,31]. Based on these insights, we implemented a realistic representation of central vision loss and desaturation symptoms. Central vision loss, the critical symptom of AMD, was emulated by obscuring the center of the field of vision with an elliptical, black visual stimulus (see Figure 2C). The size and shape of this obstruction were chosen with reference to previous studies that simulated vision loss [30]. The desaturation effect was achieved using the saturation function within Unity URP's post-processing effects (see Figure 2B) [31]. These visual stimuli impede the perception of information not only during mobile navigation but also when observing the surrounding environment.
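The following Python sketch approximates these two simulated symptoms on a single image frame. It is our own illustration, not the study's Unity implementation; the desaturation factor and ellipse proportions are assumed rather than taken from the paper.

```python
import numpy as np

def simulate_amd_view(rgb: np.ndarray, desat: float = 0.6,
                      rx_frac: float = 0.22, ry_frac: float = 0.18) -> np.ndarray:
    """Apply (1) global desaturation and (2) an elliptical black central
    scotoma to an HxWx3 uint8 frame. All parameter values are assumptions."""
    img = rgb.astype(np.float32)
    # Desaturation: blend each pixel toward its Rec. 709 luminance, analogous
    # to lowering the saturation value in a post-processing color grade.
    lum = img @ np.array([0.2126, 0.7152, 0.0722], dtype=np.float32)
    img = desat * img + (1.0 - desat) * lum[..., None]
    # Central scotoma: black out an ellipse centered on the frame.
    h, w = img.shape[:2]
    ys, xs = np.ogrid[:h, :w]
    inside = (((xs - w / 2) / (w * rx_frac)) ** 2 +
              ((ys - h / 2) / (h * ry_frac)) ** 2) <= 1.0
    img[inside] = 0.0
    return img.clip(0, 255).astype(np.uint8)

# Example on a synthetic frame; a real screenshot could be loaded instead.
frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
degraded = simulate_amd_view(frame)
```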

3.3. Wayfinding Environment

The virtual walking environment was developed using Unity (version 2019.4.18f LTS) and entails a digital twin of a real environment spanning approximately 400 m × 380 m (see Figure 2D). This virtual environment was displayed on an HTC Vive Cosmos HMD. To emulate the experience of using mobile navigation for wayfinding outdoors, a mobile guidance object providing the navigation application's functionalities was incorporated into the virtual setting; it was mapped to the HMD's controller and appeared as a virtual mobile device. Users engaged with this virtual mobile phone to execute wayfinding tasks. Within the virtual space, movement is facilitated through the controller's joystick, allowing forward or backward motion, and users rotate within the environment in alignment with their viewing direction (see Figure 2E).
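As a rough illustration of this locomotion scheme, the Python sketch below advances a 2D pose along the current viewing direction based on joystick input. The walking speed, frame interval, and heading convention are assumptions, since the paper does not report them.

```python
import math

def update_pose(x: float, z: float, heading_deg: float,
                joystick_y: float, rotate_deg: float,
                speed: float = 1.4, dt: float = 1 / 50) -> tuple:
    """One frame of virtual locomotion: the joystick's vertical axis moves the
    user forward (+) or backward (-) along the viewing direction, and the
    heading tracks the user's rotation. Assumes a y-up convention in which
    heading 0 faces +z and increases clockwise; speed is in m/s."""
    heading_deg = (heading_deg + rotate_deg) % 360.0
    rad = math.radians(heading_deg)
    step = joystick_y * speed * dt  # meters moved during this frame
    return (x + step * math.sin(rad),
            z + step * math.cos(rad),
            heading_deg)

# Example: half forward deflection while turning one degree in this frame.
x, z, heading = update_pose(0.0, 0.0, 0.0, joystick_y=0.5, rotate_deg=1.0)
```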

4. Study Design

In this study, we aim to address the following research questions and hypotheses:

4.1. Research Question 1: “Does Central Scotoma Significantly Impede the Utilization of the Conventional Mobile Guidance during Wayfinding Tasks for Individuals with Central Vision Loss?”

  • RH1.1: When utilizing conventional wayfinding guidance for navigation, conditions with central vision loss demonstrate lower wayfinding performance compared to those with normal vision.
  • RH1.2: When utilizing conventional wayfinding guidance for navigation, conditions with central vision loss demonstrate a worse user experience compared to those with normal vision.
Mobile device applications play a pivotal role in aiding individuals with LV in various daily tasks [3]. Nonetheless, conflicts stemming from their visual conditions often result in a compromised usability experience compared to users with intact vision. Notably, central scotoma obstructs the perception of information displayed centrally on mobile interfaces. To ascertain whether this phenomenon also manifests during wayfinding using mobile navigation, the study approaches this inquiry in terms of wayfinding performance (RH1.1) and user experience (RH1.2). To address this question, the study measures wayfinding performance and user experience across varying visual conditions using identical wayfinding guidance (i.e., normal vision with map UI vs. low vision with map UI).

4.2. Research Question 2: “Can the New Guidance Interface, Tailored to Mitigate Conflicts Arising from Central Scotoma in Individuals with LV, Address the Challenges They Face during Wayfinding?”

  • RH2.1: Utilizing the newly proposed guidance interface, even under the same low vision conditions, leads to enhanced wayfinding performance and an improved user experience.
  • RH2.2: Utilizing the newly proposed guidance interface, individuals under a low vision condition exhibit wayfinding performance and user experience comparable to those with normal vision conditions.
Hakobyan et al. [11] developed an accessible application for individuals with AMD experiencing central scotoma, ensuring that mobile interface conflicts were alleviated by strategically positioning interface elements around the screen periphery. Based on this principle, we hypothesized that providing wayfinding-related information on the screen’s periphery may enable individuals with central vision loss to navigate more seamlessly than before (RH2.1). Thus, our study aims to ascertain any observable shifts in wayfinding experiences when the newly designed guidance is employed under low-vision conditions, in contrast to previous systems (i.e., low vision with original map UI vs. low vision with modified map UI). Furthermore, anticipating that resolving the conflict between the visual condition and mobile interface could yield wayfinding experiences comparable to those of normal-vision conditions, we established RH2.2 (i.e., normal vision with original AR UI vs. low vision with modified AR UI).
To address these questions, we designed a navigation interface tailored to the central scotoma. Specifically, based on insights from the literature review, we modified both the conventional map-type interface and the AR-type route guidance interface, a recent addition to mobile navigation enabled by advances in AR technology. Furthermore, to replicate the wayfinding experience of individuals with LV in our user study, we implemented a virtual environment based on a digital twin map of an actual environment together with a central vision loss simulation.

4.3. User Study

During the user study, participants undertook two categories of tasks: the Main and the Interview tasks. The Main task required participants to wear an HMD and navigate within a virtual environment. Six distinct wayfinding scenarios were formulated from the combination of interface types (map, AR) and interaction types (normal vision with original UI, LV with original UI, and LV with side UI) (see Figure 3). These scenarios were designed around six unique routes, each approximately 300 m long and comprising four turns. Following the conclusion of each scenario, participants proceeded to the Interview task, which involved mental map drawing and survey completion. Participants were instructed to sketch the route they had navigated during the preceding wayfinding task, thereby providing mental map data. The survey was structured as a questionnaire targeting user experience, comprising the NASA-TLX [32], SUS [33], IPQ [34], and SSQ [35].
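The six scenarios follow directly from crossing the two factors, as the Python snippet below enumerates. Note that the mapping of the C1-C6 labels to conditions is inferred from the pairings reported in Section 5 (odd numbers = AR, even numbers = map), not stated explicitly here.

```python
INTERACTIONS = ["normal vision + original UI",
                "low vision + original UI",
                "low vision + side UI"]
INTERFACES = ["AR", "map"]

# 2 (interface) x 3 (interaction) within-subjects design -> six scenarios.
scenarios = {f"C{2 * i + j + 1}": (ui, itx)
             for i, itx in enumerate(INTERACTIONS)
             for j, ui in enumerate(INTERFACES)}

for label, (ui, itx) in sorted(scenarios.items()):
    print(f"{label}: {ui} guidance, {itx}")
# C1: AR guidance, normal vision + original UI ... C6: map, low vision + side UI
```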

4.3.1. Participants

The user study encompassed 25 participants aged between 21 and 33 years (age: M = 24.0, SD = 2.35; 20 male, 5 female). Participants were drawn from the pool of undergraduate and graduate students currently enrolled at a university, and only those capable of performing daily activities with the aid of optical devices were included. Only two participants reported having no prior experience with VR. The study was conducted while participants wore their corrective devices.

4.3.2. Procedure

Essential demographic details of all participants were collected. Subsequently, we provided explanations regarding the two task categories. Once it was ascertained that the participants adequately understood the Main and Interview tasks, a preliminary test of the experimental environment was initiated. Participants first wore the HMD to ensure that the VR environment functioned appropriately. We then introduced the two navigation interfaces (map, AR) and briefed them on the information relayed by each interface element. The functioning of the LV simulation, emulating central scotoma and desaturation, was verified, and the movement capabilities (forward movement, backward movement, and rotation) within the virtual wayfinding environment were then tested.
After testing, participants started on the Main task, followed by an interview concerning their wayfinding experience. Once participants reached their destination, they removed the HMD and used a tablet provided by the experimenter to sketch their navigational route within the virtual environment and complete the assigned survey. Subsequently, they wore the HMD again to prepare for the next scenario. To mitigate any sequence-related biases, the order of Main task scenarios was randomized using a Latin square design. The procedure was repeated for all six scenarios, culminating in the completion of the experiment, which lasted approximately 40 min.
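A standard way to realize such counterbalancing is a balanced Latin square, which for an even number of conditions places each condition in every serial position once and has each condition follow every other condition equally often. The sketch below shows the classic construction; the paper does not specify which Latin square variant was used, so this is one plausible implementation.

```python
def balanced_latin_square(n: int) -> list[list[int]]:
    """Classic balanced Latin square for an even n: each condition occupies
    every position once per column, and every ordered pair of adjacent
    conditions occurs equally often across rows."""
    square = []
    for row in range(n):
        seq, j, k = [], 0, 0
        for i in range(n):
            if i % 2 == 0:
                val = (row + j) % n      # walk forward from the row's start
                j += 1
            else:
                k += 1
                val = (row + n - k) % n  # interleave walking backward
            seq.append(val)
        square.append(seq)
    return square

CONDITIONS = ["C1", "C2", "C3", "C4", "C5", "C6"]
orders = balanced_latin_square(6)
# Cycle the 25 participants through the six counterbalanced orders.
for pid in range(25):
    order = [CONDITIONS[i] for i in orders[pid % 6]]
    print(pid, order)
```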

4.4. Metrics

To examine wayfinding performance and user experience based on the factors of interface type (map, AR) and interaction type (normal vision with original UI, LV with original UI, LV with side UI), we analyzed data collected from the Main task featuring six scenarios and the subsequent Interview task. Wayfinding performance was calculated by dividing the distance covered in a scenario by the time taken. We evaluated the mental map data following the methods of Aginsky et al. [36] and Zhao et al. [9], verifying (1) length accuracy and (2) turn correctness. Additionally, survey results were employed to measure workload [32], system usability [33], presence [34], and simulator sickness [35].
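The sketch below restates these metrics as small Python functions. The performance measure follows the definition above (distance divided by time); the two mental map scores are plausible readings of the cited procedures [9,36] rather than their exact scoring rules.

```python
def wayfinding_performance(distance_m: float, time_s: float) -> float:
    """Performance as defined above: distance covered divided by time taken,
    i.e., the average speed over the scenario (m/s)."""
    return distance_m / time_s

def length_accuracy(drawn: list[float], actual: list[float]) -> float:
    """Compare the *relative* lengths of sketched route segments against the
    true proportions; 1.0 means the sketched proportions match exactly."""
    d_tot, a_tot = sum(drawn), sum(actual)
    err = sum(abs(d / d_tot - a / a_tot) for d, a in zip(drawn, actual))
    return 1.0 - err / 2.0

def turn_correctness(drawn: str, actual: str) -> float:
    """Fraction of turns sketched in the correct direction ('L'/'R')."""
    return sum(d == a for d, a in zip(drawn, actual)) / len(actual)

# Example: one ~300 m scenario with four turns (Section 4.3), done in 4 min.
print(wayfinding_performance(300.0, 240.0))             # 1.25 m/s
print(length_accuracy([60, 90, 75, 75], [75, 75, 75, 75]))  # 0.95
print(turn_correctness("LRLR", "LRRR"))                 # 0.75
```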

4.5. Statistical Analysis

The data analysis procedure was as follows: First, outliers were addressed using the interquartile range (IQR) method. Subsequently, the Shapiro–Wilk test confirmed the normal distribution of data across all measures. A two-way (factorial) repeated measures analysis of variance (RM ANOVA) was conducted to identify interaction effects between the two factors, and a one-way RM ANOVA was executed to compare the six conditions. In cases where Mauchly's test of sphericity was violated, the Greenhouse–Geisser correction adjusted the degrees of freedom. Pairwise comparisons were performed using the Bonferroni post hoc test, with effect sizes of 0.3, 0.5, and 0.8 categorized as small, medium, and large, respectively.
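A pipeline with this shape could be assembled from common Python statistics libraries, as sketched below with pandas, SciPy, and pingouin. The column names, the 1.5x IQR multiplier, and the specific pingouin options are assumptions, since the paper does not name its analysis software.

```python
import pandas as pd
import pingouin as pg
from scipy import stats

def remove_iqr_outliers(s: pd.Series) -> pd.Series:
    """Mask values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]; the paper does not
    report the multiplier, so the conventional 1.5 is assumed."""
    q1, q3 = s.quantile([0.25, 0.75])
    iqr = q3 - q1
    return s.where(s.between(q1 - 1.5 * iqr, q3 + 1.5 * iqr))

def analyze(df: pd.DataFrame, dv: str = "score"):
    """df is long-format: one row per participant x condition, with columns
    'pid', 'interface', 'interaction', and the dependent variable
    (column names here are illustrative)."""
    df[dv] = df.groupby(["interface", "interaction"])[dv].transform(remove_iqr_outliers)

    # Normality check per condition cell (Shapiro-Wilk).
    for cell, sub in df.groupby(["interface", "interaction"]):
        print(cell, stats.shapiro(sub[dv].dropna()))

    # Two-way RM ANOVA; pingouin reports Greenhouse-Geisser-corrected
    # values when sphericity is violated.
    aov = pg.rm_anova(data=df, dv=dv, within=["interface", "interaction"],
                      subject="pid", correction=True)

    # Bonferroni-adjusted pairwise comparisons with Cohen's d effect sizes.
    posthoc = pg.pairwise_tests(data=df, dv=dv,
                                within=["interface", "interaction"],
                                subject="pid", padjust="bonf", effsize="cohen")
    return aov, posthoc
```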

5. Results

In this section, we present the analysis of the impact on wayfinding experiences based on two factors: the interface type (map, AR) used in mobile phone navigation applications, and the interaction type (normal vision with original UI, LV with original UI, LV with side UI), which is related to the user’s visual condition and the interface modification.

5.1. Wayfinding Performance

Through a factorial RM ANOVA, we observed an interaction effect on wayfinding performance between interface type and interaction type (F(2, 32) = 17.079, p < 0.001, ηp² = 0.516). A one-way RM ANOVA across all conditions likewise identified significant differences (F(5, 80) = 8.199, p < 0.001, ηp² = 0.339) (see Figure 4A). In post hoc analysis, C6 (LV with side map) exhibited a trend toward improved wayfinding performance compared to C4 (LV with original map), with a large effect size (C6–C4: t = 2.658, p = 0.142, d = 0.834). No statistically significant difference was observed between C6 and C2 (normal vision with original map UI), despite the differing visual conditions (C6–C2: t = 1.058, p = 1.000). Moreover, among the AR conditions with a central scotoma, C5 (side AR) showed no significant difference from C3 (original AR) (C5–C3: t = −1.654, p = 1.000) but exhibited significantly lower performance than C1 (C5–C1: t = −5.636, p < 0.001).
These results imply that the side map interface is more effective in supporting individuals with LV in executing wayfinding tasks, yielding performance levels comparable to those of individuals with normal vision. Conversely, individuals with LV who used original AR guidance demonstrated significantly lower wayfinding performance compared to that of individuals with normal vision, and changing the AR guidance did not lead to performance enhancement.

5.2. Mental Map

5.2.1. Length Accuracy

The interaction effect between interface type and interaction type did not yield significant results for the Length Accuracy of the Mental Map (F(2, 42) = 1.671, p = 0.200, ηp² = 0.074). Furthermore, no significant differences in Length Accuracy were observed across all conditions (F(5, 105) = 0.882, p = 0.496, ηp² = 0.040) (see Figure 4C).
These results indicate that neither the interface nor the interaction type significantly influences the ability to perceive relative route segment lengths (between waypoints of the entire route), nor does it significantly impact the construction of a mental map during wayfinding.

5.2.2. Turn Correctness

For the Turn Correctness of the Mental Map, no interaction effect between interface type and interaction type was observed (F(2, 34) = 1.523, p = 0.231, ηp² = 0.083). However, a one-way RM ANOVA across all conditions revealed significant differences (F(2.971, 50.509) = 3.960, p = 0.013, ηp² = 0.189) (see Figure 4B). Post hoc testing revealed that with the original map interface, turn correctness did not differ between normal vision and LV conditions (C4–C2: t = 1.105, p = 1.000), and switching to the side map interface under LV likewise produced no difference (C4–C6: t = 0.425, p = 1.000). Similarly, with AR guidance, there were no differences in turn correctness between the visual conditions C1 and C3 (C3–C1: t = −1.275, p = 1.000) or between the interfaces C3 and C5 (C3–C5: t = 0.425, p = 1.000). However, under the same LV condition, turn correctness differed by interface type (i.e., map vs. AR): significant differences were observed between the AR guidance (C3) and map interface (C4) conditions using the original UI (C3–C4: t = −3.061, p = 0.044), as well as between conditions C5 (AR type) and C6 (map type) employing the side UI (C5–C6: t = −3.061, p = 0.044).
These results indicate that the interaction type factor did not induce differences in the turn correctness of the mental map, suggesting that the processes of perception, memory recall during turning, and the formation of a mental map remained unaffected by this factor. However, when the map interface was used under LV conditions, turn correctness was higher than with the AR interface. This suggests that the map-type interface helped participants form a mental map more effectively.

5.3. Workload

An interaction effect between interface type and interaction type was observed in the NASA-TLX results (F(2, 40) = 5.350, p = 0.009, ηp² = 0.211). A one-way RM ANOVA across all conditions also revealed significant differences (F(5, 100) = 6.275, p < 0.001, ηp² = 0.239) (see Figure 5A). For AR guidance, post hoc testing showed that the normal vision condition (C1) yielded a significantly lower workload than the LV conditions (C3, C5) (C1–C3: t = −3.788, p = 0.004; C1–C5: t = −4.535, p < 0.001). In contrast, with the map UI, the workload under normal vision (C2) was comparable to that under the LV conditions (C2–C4: t = −1.841, p = 1.000; C2–C6: t = −0.427, p = 1.000). Moreover, C1, which utilized AR guidance under normal visual conditions, exhibited a significantly lower workload than all other conditions.
This indicates that wayfinding with AR guidance under normal visual conditions resulted in the lowest workload. However, participants experienced a relatively higher workload with a central scotoma. Moreover, regardless of visual condition, the map interface consistently resulted in an increased workload.

5.4. System Usability

No interaction effect of interface type and interaction type was observed on the SUS questionnaire results (F(2, 26) = 2.899, p = 0.073, ηp² = 0.182). A one-way RM ANOVA across all conditions revealed significant differences (F(5, 65) = 5.676, p < 0.001, ηp² = 0.304) (see Figure 5B). Post hoc analyses indicated that AR guidance under normal vision (C1) showed a trend toward enhanced system usability compared to the LV condition, with a medium effect size (C1–C3: t = 2.544, p = 0.200, d = 0.741). However, side AR guidance did not improve system usability (C3–C5: t = −0.484, p = 1.000). Conversely, with the side map interface, the LV condition exhibited increased system usability, as indicated by a medium effect size (C4–C6: t = −2.059, p = 0.652, d = −0.600).
This suggests that AR guidance experienced a notable decline in usability as visual conditions deteriorated, and no significant enhancements were observed owing to the interface modification. In contrast, the side map UI displayed an enhancement in usability compared to the original map, indicating a more positive usability experience for participants using the tailored wayfinding guidance system.

5.5. Presence

The IPQ results, analyzed over interface type and interaction type using a two-way RM ANOVA, revealed an interaction effect (F(1.990, 33.826) = 6.118, p = 0.005, ηp² = 0.265). An RM ANOVA across all IPQ conditions likewise revealed significant differences (F(2.723, 46.299) = 8.881, p < 0.001, ηp² = 0.343) (see Figure 5C). Post hoc comparisons indicated that with original AR guidance, the normal vision condition (C1) yielded significantly higher presence than the central vision loss conditions (C1–C3: t = 5.355, p < 0.001; C1–C5: t = 4.229, p < 0.001). However, presence did not differ between the original AR guidance (C3) and side AR guidance (C5) conditions (C3–C5: t = −1.126, p = 1.000). Similarly, a significant difference in presence was observed between the two original map conditions under differing visual conditions (C2–C4: t = 3.378, p = 0.017). However, the side map condition under LV (C6) exhibited a trend toward increased presence relative to the original map condition, with a medium effect size (C6–C4: t = 2.472, p = 0.232, d = 0.611), and its presence level was comparable to that of the normal vision condition (C2–C6: t = 0.906, p = 1.000).
This result suggests that although original AR guidance delivers a wayfinding experience characterized by a heightened sense of presence under normal vision conditions, this sense diminishes when a scotoma appears in the central region of their visual field. Similarly, while wayfinding with the original map interface under normal vision conditions displayed high presence, there was a notable decrease in presence under LV conditions. Nonetheless, wayfinding using the side map interface under LV conditions offered a presence experience similar to normal vision with the original map, indicating an enhanced experience.

5.6. Simulator Sickness

No interaction effect between the interface and interaction type factors was observed on the SSQ results (F(2, 34) = 2.874, p = 0.070, ηp² = 0.145). However, a one-way RM ANOVA across all conditions identified significant differences (F(2.945, 50.070) = 5.766, p = 0.002, ηp² = 0.253) (see Figure 5D). Post hoc analysis revealed that under AR guidance, the two LV conditions (C3, C5) induced significantly higher VR sickness than the normal-vision condition (C1) (C1–C3: t = −4.644, p < 0.001; C1–C5: t = −3.534, p = 0.010). However, the side AR interface (C5) did not alter sickness levels relative to the original AR interface (C3) (C3–C5: t = 1.111, p = 1.000). With the map interface, the two LV conditions (C4, C6) showed a tendency toward increased sickness compared to the normal-vision condition (C2), as inferred from small effect sizes (C2–C4: t = −2.019, p = 0.699, d = −0.445; C2–C6: t = −1.918, p = 0.876, d = −0.422). No difference in sickness was observed between the two map interface conditions under the same visual condition (C4–C6: t = 0.101, p = 1.000).
These findings suggest that irrespective of the guidance type, conditions with LV consistently exhibited higher sickness levels than those with normal vision. Moreover, improvements to the interface did not lead to discernible changes in sickness levels.

6. Discussion

6.1. Wayfinding Performance Improvement with Side UI

Our study explored changes in wayfinding performance while utilizing side UI, a navigation interface specifically designed to prevent conflicts with the central scotoma in individuals with AMD.
Initially, the newly designed AR guidance, side AR, did not result in any significant changes in wayfinding performance under LV conditions. Specifically, when employing side AR for wayfinding, the performance decreased compared to the normal-vision condition, with no discernible difference between this condition and the condition with the original AR interface. This suggests that even the use of the side AR guidance did not substantially affect the wayfinding performance under LV conditions, consistently exhibiting suboptimal performance. These findings raise questions about the effectiveness of side AR UI in addressing conflicts with central scotoma, given that it failed to effectively convey necessary wayfinding information to users.
Conversely, the side map interface demonstrated improved wayfinding performance compared to the original map interface under an LV condition. Moreover, when using the side map UI, participants who experienced a central scotoma performed similarly to those under normal-vision conditions. This implies that with the side map interface, individuals with LV can achieve wayfinding performance comparable to that of those with normal vision. Consequently, the side map interface appears to successfully circumvent the central scotoma, facilitating efficient wayfinding by conveying information smoothly to users.

6.2. User Experience Changes by Tailored UI

Moreover, we investigated the influence of UI modifications on user experience. Initially, the side AR guidance yielded a user experience comparable to that of the original AR guidance, indicating no discernible improvements. The strategic placement of various AR guidance elements on the periphery, intended to avoid conflicts with the central scotoma, did not appear to influence wayfinding performance or the overall user experience.
Conversely, modifications to the map-type navigation resulted in changes in the user experience. Specifically, results from the SUS questionnaire revealed that the side map interface scored higher than the original map, signifying enhanced navigation usability. This suggests that while the conventional navigation interface exhibited reduced system usability owing to conflicts with the scotoma, our new map interface addressed these issues, becoming more beneficial for LV users in wayfinding tasks. Additionally, the side map interface exhibited greater presence than the original map interface under the same LV condition, with presence levels comparable to those observed under normal-vision conditions. This implies that while the central scotoma previously decreased presence during wayfinding owing to conflicts with the original map interface, the restructured interface resolved this issue. Despite users experiencing LV conditions, the side map interface provided a sense of presence comparable to that of normal vision, indicating successful conflict mitigation. Thus, our findings indicate that the side map interface can enhance the usability of navigation applications and yield a higher-presence experience that enables LV users to engage in wayfinding in a manner similar to individuals with normal vision.

6.3. Contributions and Limitations

Previous studies have reported enhanced performance and user experience when employing their proposed new interfaces compared to traditional navigation applications [19,22]. Although such comparisons confirm improvements in wayfinding experience for those with LV by using their interface, they do not conclusively address the practicality of the system. However, by contrasting the results with the performance and user experience of individuals with normal vision, we can better address these concerns.
In this study, we not only assessed the improvements in wayfinding performance brought about by our new interface, but also compared it to the performance of individuals with normal vision. Specifically, even under LV conditions where central vision loss might lead to conflicts with navigation interfaces, the use of side UI demonstrated wayfinding performance akin to conditions of normal vision without such conflicts. Similarly, when comparing wayfinding performance and some user experience metrics (e.g., workload and system usability) between low-vision users with side UI and normal-vision users with the original UI, it became evident that side UI significantly enhanced the wayfinding experience for those with LV. These results suggest that if our strategy of modifying the original guidance interface to side UI is applied uniformly to conventional navigation interfaces, individuals with LV could reap substantial benefits in their daily navigation tasks.
Additionally, this work underscored the advantages of utilizing LV simulations in research. Prior works faced challenges in recruiting a large number of LV participants for their experiments, which hampered precise quantitative analysis and necessitated a focus on qualitative analysis [20,22,37]. Moreover, instead of recruiting participants with common symptoms, they often included participants with a range of visual impairments, complicating the understanding and resolution of conflicts between the interface and specific eye conditions [23]. While simulations that do not accurately reflect the visual experiences of actual LV may produce misleading results, a meticulous LV simulation approach enables a detailed and accurate analysis of LV experiences. In our study, we employed an LV simulation to test side UI, and by analyzing data from a relatively larger participant pool, we secured statistical evidence for the advantages of wayfinding using side UI. Thus, utilizing LV simulations can aid in understanding the potential conflicts and benefits that a proposed assistive system or interface may bring to the daily tasks of those with LV.
This differential impact between the map and AR modifications is presumed to result from the failure of the redesigned AR guidance to address conflicts with the central scotoma. Participant feedback further elucidates the issues with side AR guidance. For example, P5 noted that "compared to traditional AR types, this (side AR) navigation disperses route information on both sides, making it difficult to concentrate". P11 also remarked that "in the case of this guidance (side AR), having route information on both sides of the screen made it challenging to determine if that interface was actually guiding the route". Despite the intention of side AR to avoid conflicts with the central scotoma by positioning route guidance along the screen periphery, this strategy proved ineffective. Therefore, additional efforts are necessary to modify AR guidance in a manner more accommodating for individuals with a central scotoma.
Our research has several limitations. First, we did not collect wayfinding feedback from individuals with LV who actually experience central vision loss. Instead of examining the usability of the side UI guidance system with individuals with LV, we evaluated it within a simulated environment that mimics the visual experience of LV. While this approach has been employed in prior studies and has advantages for assessing assistive technology [28,30,31], and while we endeavored to create an environment that closely resembles the visual and walking experiences of LV based on the literature, a simulated environment cannot be identical to the actual visual experience of individuals with LV, and unexpected issues might arise. To overcome this limitation, it is necessary to recruit actual participants with LV, even in small numbers, to gather feedback on the assistive technology. Specifically, using the LV simulation environment as a pilot study to obtain statistical evidence that the proposed assistive system improves task performance and user experience, followed by recruiting participants with LV to collect diverse feedback and make improvements, would enable a more accurate evaluation. By following this two-step evaluation process, future research can fully utilize the advantages of LV simulations while overcoming their limitations and foster proactive attempts to develop useful assistive technologies.
The second limitation pertains to the characteristics of the recruited participants. Specifically, the gender ratio among the participants in our user study was unbalanced, which could lead to gender-biased results. Although simulating visual and walking experiences might mitigate and control gender differences in walking speed and experience, this remains a limitation of the study. Future research should consider these aspects when recruiting participants and conducting evaluations to prevent potential biases stemming from the characteristics of the participant group.

7. Conclusions

In this study, we explored mobile navigation and its interfaces to enhance assistance for individuals with LV in their wayfinding tasks. Specifically, our goal was to address the wayfinding challenges associated with central scotoma, a prevalent symptom in conditions such as AMD, which hinders smooth navigation for those with LV. To mitigate these challenges, we introduced a redesigned interface known as side UI. We then evaluated the redesigned interface in a virtual environment replicating the wayfinding experiences of individuals with LV, involving 25 participants in our study. Even for those with LV, the side map navigation demonstrated performance and user experience comparable to that of individuals with normal vision. However, the side AR guidance did not effectively address the issues arising from scotoma, highlighting the need for an alternative guidance design approach. Our research shed light on the conflicts that arise between navigation interfaces and the symptoms of LV. By understanding these challenges and devising modification strategies to address them, we ascertained that an adapted interface enables LV users to perform wayfinding swiftly and without discomfort. With the application of our approaches and methodologies in practical mobile applications commonly used in daily life, we anticipate that individuals with LV will be better equipped to seamlessly execute their daily tasks.

Author Contributions

Conceptualization, T.J. and D.Y.; methodology, T.J. and D.Y.; software, T.J.; validation, T.J., D.Y. and S.K.; formal analysis, T.J. and D.Y.; investigation, T.J. and D.Y.; resources, T.J.; data curation, T.J.; writing—original draft preparation, T.J.; writing—review and editing, T.J. and S.K.; visualization, T.J.; supervision, S.K.; project administration, S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea funded by the Ministry of Science and ICT (2021R1A4A1030075) and by the Culture, Sports and Tourism Research and Development Program through the Korea Creative Content Agency funded by the Ministry of Culture, Sports and Tourism in 2022 (Project Name: Development of Artificial Intelligence-Based Game Simulation Technology to Support Online Game Content Production, Project R2022020070).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of Gwangju Institute of Science and Technology (protocol code 20210806-HR-62-01-02, approved on 17 August 2021).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patient(s) to publish this paper.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. NIH. Low Vision. 2023. Available online: https://www.nei.nih.gov/learn-about-eye-health/eye-conditions-and-diseases/low-vision (accessed on 11 May 2023).
  2. WHO. Control and Prevention of Blindness and Deafness. 2012. Available online: https://www.emro.who.int/control-and-preventions-of-blindness-and-deafness/announcements/global-estimates-on-visual-impairment.html (accessed on 18 September 2023).
  3. Szpiro, S.; Zhao, Y.; Azenkot, S. Finding a Store, Searching for a Product: A Study of Daily Challenges of Low Vision People. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Heidelberg, Germany, 12–16 September 2016; pp. 61–72. [Google Scholar] [CrossRef]
  4. Xu, S.; Yang, C.; Ge, W.; Yu, C.; Shi, Y. Virtual Paving: Rendering a Smooth Path for People with Visual Impairment through Vibrotactile and Audio Feedback. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2020, 4, 99. [Google Scholar] [CrossRef]
  5. Guerreiro, J.A.; Ahmetovic, D.; Sato, D.; Kitani, K.; Asakawa, C. Airport Accessibility and Navigation Assistance for People with Visual Impairments. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–14. [Google Scholar] [CrossRef]
  6. Ahmetovic, D.; Gleason, C.; Ruan, C.; Kitani, K.; Takagi, H.; Asakawa, C. NavCog: A Navigational Cognitive Assistant for the Blind. In Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services, Florence, Italy, 6–9 September 2016; pp. 90–99. [Google Scholar] [CrossRef]
  7. Katzschmann, R.K.; Araki, B.; Rus, D. Safe Local Navigation for Visually Impaired Users With a Time-of-Flight and Haptic Feedback Device. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 583–593. [Google Scholar] [CrossRef] [PubMed]
  8. Wang, H.C.; Katzschmann, R.K.; Teng, S.; Araki, B.; Giarré, L.; Rus, D. Enabling independent navigation for visually impaired people through a wearable vision-based feedback system. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 6533–6540. [Google Scholar] [CrossRef]
  9. Zhao, Y.; Kupferstein, E.; Rojnirun, H.; Findlater, L.; Azenkot, S. The Effectiveness of Visual and Audio Wayfinding Guidance on Smartglasses for People with Low Vision. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–14. [Google Scholar] [CrossRef]
  10. NIH. Age-Related Macular Degeneration (AMD). 2021. Available online: https://www.nei.nih.gov/learn-about-eye-health/eye-conditions-and-diseases/age-related-macular-degeneration (accessed on 11 May 2023).
  11. Hakobyan, L.; Lumsden, J.; Shaw, R.; O’Sullivan, D. A Longitudinal Evaluation of the Acceptability and Impact of a Diet Diary App for Older Adults with Age-Related Macular Degeneration. In Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services, Florence, Italy, 6–9 September 2016; pp. 124–134. [Google Scholar] [CrossRef]
  12. Brouwer, D.M.; Sadlo, G.; Winding, K.; Hanneman, M.I. Limitations in mobility: Experiences of visually impaired older people. Br. J. Occup. Ther. 2008, 71, 414–421. [Google Scholar] [CrossRef]
  13. Tjan, B.S.; Beckmann, P.J.; Roy, R.; Giudice, N.; Legge, G.E. Digital sign system for indoor wayfinding for the visually impaired. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05)-Workshops, San Diego, CA, USA, 21–23 September 2005; IEEE: New York, NY, USA, 2005; p. 30. [Google Scholar]
  14. Jacobson, W.H. The Art and Science of Teaching Orientation and Mobility to Persons with Visual Impairments; American Foundation for the Blind: New York, NY, USA, 1993. [Google Scholar]
  15. Turano, K.A.; Geruschat, D.R.; Stahl, J.W. Mental effort required for walking: Effects of retinitis pigmentosa. Optom. Vis. Sci. 1998, 75, 879–886. [Google Scholar] [CrossRef] [PubMed]
  16. Liu, H.; Wang, J.; Wang, X.; Qian, Y. iSee: Obstacle Detection and Feedback System for the Blind. In Proceedings of the Adjunct Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2015 ACM International Symposium on Wearable Computers, Osaka, Japan, 7–11 September 2015; pp. 197–200. [Google Scholar] [CrossRef]
  17. Katz, B.F.; Kammoun, S.; Parseihian, G.; Gutierrez, O.; Brilhault, A.; Auvray, M.; Truillet, P.; Denis, M.; Thorpe, S.; Jouffrais, C. NAVIG: Augmented reality guidance system for the visually impaired: Combining object localization, GNSS, and spatial audio. Virtual Real. 2012, 16, 253–269. [Google Scholar] [CrossRef]
  18. Kayukawa, S.; Higuchi, K.; Guerreiro, J.A.; Morishima, S.; Sato, Y.; Kitani, K.; Asakawa, C. BBeep: A Sonic Collision Avoidance System for Blind Travellers and Nearby Pedestrians. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–12. [Google Scholar] [CrossRef]
  19. Ahmetovic, D.; Avanzini, F.; Baratè, A.; Bernareggi, C.; Ciardullo, M.; Galimberti, G.; Ludovico, L.A.; Mascetti, S.; Presti, G. Sonification of navigation instructions for people with visual impairment. Int. J. Hum.-Comput. Stud. 2023, 177, 103057. [Google Scholar] [CrossRef]
20. Peng, R.; Lam, J.; Manduchi, R.; Mirzaei, F. Experiments with RouteNav, a Wayfinding App for Blind Travelers in a Transit Hub. In Proceedings of the ASSETS ’23: The 25th International ACM SIGACCESS Conference on Computers and Accessibility, New York, NY, USA, 22–25 October 2023. [Google Scholar]
21. Ahmetovic, D.; Mascetti, S.; Bernareggi, C.; Guerreiro, J.; Oh, U.; Asakawa, C. Deep learning compensation of rotation errors during navigation assistance for people with visual impairments or blindness. ACM Trans. Access. Comput. (TACCESS) 2019, 12, 1–19. [Google Scholar] [CrossRef]
  22. Mascetti, S.; Picinali, L.; Gerino, A.; Ahmetovic, D.; Bernareggi, C. Sonification of guidance data during road crossing for people with visual impairments or blindness. Int. J. Hum.-Comput. Stud. 2016, 85, 16–26. [Google Scholar] [CrossRef]
  23. Min Htike, H.; Margrain, T.H.; Lai, Y.K.; Eslambolchilar, P. Augmented Reality Glasses as an Orientation and Mobility Aid for People with Low Vision: A Feasibility Study of Experiences and Requirements. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021. [Google Scholar] [CrossRef]
  24. Tschakert, H.; Lang, F.; Wieland, M.; Schmidt, A.; Machulla, T.K. A Dataset and Machine Learning Approach to Classify and Augment Interface Elements of Household Appliances to Support People with Visual Impairment. In Proceedings of the 28th International Conference on Intelligent User Interfaces, Sydney, Australia, 27–31 March 2023; pp. 77–90. [Google Scholar] [CrossRef]
  25. Zhao, Y.; Kupferstein, E.; Castro, B.V.; Feiner, S.; Azenkot, S. Designing AR Visualizations to Facilitate Stair Navigation for People with Low Vision. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, New Orleans, LA, USA, 20–23 October 2019; pp. 387–402. [Google Scholar] [CrossRef]
  26. Lo Valvo, A.; Croce, D.; Garlisi, D.; Giuliano, F.; Giarré, L.; Tinnirello, I. A navigation and augmented reality system for visually impaired people. Sensors 2021, 21, 3061. [Google Scholar] [CrossRef] [PubMed]
  27. Medina-Sanchez, E.H.; Mikusova, M.; Callejas-Cuervo, M. An interactive model based on a mobile application and augmented reality as a tool to support safe and efficient mobility of people with visual limitations in sustainable urban environments. Sustainability 2021, 13, 9973. [Google Scholar] [CrossRef]
  28. Choo, K.T.W.; Balan, R.K.; Lee, Y. Examining Augmented Virtuality Impairment Simulation for Mobile App Accessibility Design. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–11. [Google Scholar] [CrossRef]
  29. Bennett, C.L.; Rosner, D.K. The promise of empathy: Design, disability, and knowing the “other”. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–13. [Google Scholar]
  30. Zhang, Q.; Barbareschi, G.; Huang, Y.; Li, J.; Pai, Y.S.; Ward, J.; Kunze, K. Seeing Our Blind Spots: Smart Glasses-Based Simulation to Increase Design Students’ Awareness of Visual Impairment. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, Bend, OR, USA, 29 October–2 November 2022. [Google Scholar] [CrossRef]
  31. Krösl, K.; Elvezio, C.; Luidolt, L.R.; Hürbe, M.; Karst, S.; Feiner, S.; Wimmer, M. CatARact: Simulating Cataracts in Augmented Reality. In Proceedings of the 2020 IEEE International Symposium on Mixed and Augmented Reality, Porto de Galinhas, Brazil, 9–13 November 2020. [Google Scholar] [CrossRef]
  32. Hart, S.G.; Staveland, L.E. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Advances in Psychology; North-Holland: Amsterdam, The Netherlands, 1988; Volume 52, pp. 139–183. [Google Scholar]
33. Brooke, J. SUS: A “Quick and Dirty” Usability Scale. In Usability Evaluation in Industry; Taylor & Francis: Boca Raton, FL, USA, 1996; pp. 189–194. [Google Scholar]
  34. Schubert, T.; Friedmann, F.; Regenbrecht, H. The experience of presence: Factor analytic insights. Presence Teleoper. Virtual Environ. 2001, 10, 266–281. [Google Scholar] [CrossRef]
  35. Kennedy, R.S.; Lane, N.E.; Berbaum, K.S.; Lilienthal, M.G. Simulator Sickness Questionnaire: An Enhanced Method for Quantifying Simulator Sickness. Int. J. Aviat. Psychol. 1993, 3, 203–220. [Google Scholar] [CrossRef]
  36. Aginsky, V.; Harris, C.; Rensink, R.; Beusmans, J. Two strategies for learning a route in a driving simulator. J. Environ. Psychol. 1997, 17, 317–331. [Google Scholar] [CrossRef]
  37. Wang, R.; Zeng, L.; Zhang, X.; Mondal, S.; Zhao, Y. Understanding How Low Vision People Read Using Eye Tracking. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 23–28 April 2023. [Google Scholar] [CrossRef]
Figure 1. Four types of navigation interfaces: (A) Original AR guidance, (B) Side AR guidance, (C) Original map interface, and (D) Side map interface.
Figure 2. Depiction of the LV simulation progression: (A) normal vision, followed by the addition of (B) a desaturation effect and (C) a central scotoma. The virtual wayfinding environment is realized in (D) a digital twin of the actual pedestrian setting. (E) Experimental setup.
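To make the two effects in Figure 2 concrete, the following is a minimal Python sketch that approximates them on a still image. This is an illustration under stated assumptions, not the paper's VR implementation: the function name, the desaturation weight, and the scotoma radius and feathering values are all hypothetical.

```python
import numpy as np
from PIL import Image

def simulate_low_vision(path, scotoma_radius_frac=0.18, desaturation=0.8):
    """Approximate the two-stage LV simulation on a still image:
    (1) desaturate the scene, then (2) occlude the central visual field
    with a soft-edged scotoma. Parameter values are illustrative only."""
    img = np.asarray(Image.open(path).convert("RGB")).astype(np.float32)

    # Stage 1: desaturation -- blend each pixel toward its luminance.
    luma = img @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    img = (1 - desaturation) * img + desaturation * luma[..., None]

    # Stage 2: central scotoma -- a radial mask that is fully dark at the
    # center and feathers outward, mimicking AMD-like central field loss.
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(xx - w / 2, yy - h / 2) / min(h, w)
    mask = np.clip((r - scotoma_radius_frac) / 0.05, 0.0, 1.0)
    img *= mask[..., None]

    return Image.fromarray(img.clip(0, 255).astype(np.uint8))

# Example: simulate_low_vision("street_view.png").save("lv_view.png")
```

In a VR setting, the same two effects would presumably be applied per rendered frame; the sketch conveys only their order and character.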
Figure 3. Description and classification of the six conditions implemented for the wayfinding task, based on accessibility, interface type, and visual condition.
Figure 4. Experimental results illustrating (A) wayfinding performance, as well as the mental map's (B) turn correctness and (C) length accuracy. Statistical significance is marked with *, **, and *** for p < 0.05, p < 0.005, and p < 0.001, respectively.
Figure 5. Experimental results illustrating (A) NASA-TLX, (B) SUS, (C) IPQ, and (D) SSQ scores. Statistical significance is marked with *, **, and *** for p < 0.05, p < 0.005, and p < 0.001, respectively.
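For clarity, the asterisk convention shared by Figures 4 and 5 can be written as a small helper (the function name is illustrative):

```python
def significance_marker(p: float) -> str:
    """Map a p-value to the asterisk notation used in Figures 4 and 5."""
    if p < 0.001:
        return "***"
    if p < 0.005:
        return "**"
    if p < 0.05:
        return "*"
    return ""  # not statistically significant at the 0.05 level
```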
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
