Article

Simulation-Based Evaluation of Ease of Wayfinding Using Digital Human and As-Is Environment Models

1 National Institute of Advanced Industrial Science and Technology, Tokyo 135-0064, Japan
2 Graduate School of Information Science and Technology, Hokkaido University, Sapporo 060-0814, Japan
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2017, 6(9), 267; https://doi.org/10.3390/ijgi6090267
Submission received: 30 June 2017 / Revised: 4 August 2017 / Accepted: 24 August 2017 / Published: 26 August 2017
(This article belongs to the Special Issue 3D Indoor Modelling and Navigation)

Abstract

As recommended by the international standard ISO 21542, ease of wayfinding must be ensured by installing signage at all key decision points on walkways, such as forks, because signage greatly influences how people unfamiliar with an environment navigate through it. Therefore, we aimed to develop a new system for evaluating ease of wayfinding that can detect spots causing disorientation, i.e., “disorientation spots”, based on simulated three-dimensional (3D) interactions between wayfinding behaviors and signage location, visibility, legibility, noticeability, and continuity. First, an environment model reflecting the detailed 3D geometry and textures of the environment, i.e., an “as-is environment model”, is generated automatically using 3D laser scanning and structure-from-motion (SfM). Then, a set of signage entities is created by the user. Thereafter, a 3D wayfinding simulation is performed in the as-is environment model using a digital human model (DHM), and disorientation spots are detected. The proposed system was tested in a virtual maze and a real two-story indoor environment. It was further validated through a comparison of the disorientation spots detected by the simulation with those observed for six young subjects. The comparison revealed that the proposed system could detect the disorientation spots where the subjects lost their way in the test environment.

1. Introduction

It is increasingly important in our rapidly aging society [1] to perform accessibility evaluations for enhancing the ease and safety of access to indoor and outdoor environments for all people, including the elderly and the disabled. Under international standards [2], “accessibility” is defined as “provision of buildings or parts of buildings for people, regardless of disability, age or gender, to be able to gain access to them, into them, to use them and exit from them.” As recommended in the ISO/IEC Guide 71 [3], accessibility must be assessed considering both the physical and cognitive abilities of individuals. From the physical viewpoint, for example, tripping risks in an environment [4] must be assessed to ensure the environment is safe to walk in, as conducted in our previous study [5]. By contrast, from the cognitive aspect, ease of wayfinding [6] must be assessed to enable people to gain access to destinations in unfamiliar environments.
Wayfinding is a basic cognitive response of people trying to find their way to destinations in an unfamiliar environment based on perceived information and their own background knowledge [7]. Visual signage influences the way in which people unfamiliar with an indoor environment navigate through it [8]. As shown in Table 1, visual signage can be classified into positional, directional, routing, and identification signage depending on the type of navigation information on the signage. As recommended in the guidelines [2], these four types of signage must be arranged appropriately at key decision points considering the relationship between the navigation information on signage and the path structure of the environment. In addition, as mentioned in the literature [9], ease of wayfinding must be evaluated considering not only signage continuity, visibility, and legibility but also signage noticeability.
Currently, ease of wayfinding is evaluated using four approaches: real field testing [10], virtual field testing [11,12], CAD model analysis [13], and wayfinding simulation [14,15,16,17,18,19,20]. In real field tests [10], a number of human subjects are asked to perform experimental wayfinding tasks in a real environment. By contrast, in virtual field tests [11,12], subjects are asked to perform wayfinding tasks in a virtual environment using virtual reality devices. In both real and virtual field tests [10,11,12], ease of wayfinding is evaluated by analyzing subjects’ responses to a questionnaire and their wayfinding results, e.g., walking route, gaze duration, and gaze direction. However, such tests require prolonged wayfinding experiments involving a variety of wayfinding tasks and human subjects of different ages, genders, body dimensions, and visual capabilities; thus, field tests are not necessarily efficient or low-cost. In CAD model analysis [13], signage continuity is evaluated by analyzing the relationships among various pieces of user-specified navigation information indicated by signage. However, this approach cannot evaluate ease of wayfinding in terms of signage visibility, legibility, and noticeability because the three-dimensional (3D) interactions between individuals and signage are not considered. Recently, a variety of wayfinding simulations have been proposed [14,15,16,17,18,19,20]. Such simulation-based approaches make it possible to evaluate ease of wayfinding by simulating the wayfinding of a pedestrian model. However, these simulations consider only a subset of the relevant signage factors, i.e., signage location, continuity, visibility, legibility, and noticeability. In addition, they use only simplified as-planned environment models that represent neither the detailed environmental geometry, including obstacles on the walkway, nor realistic environmental textures. For a reliable evaluation, an environment model must reflect the as-is situation of the environment because detailed 3D geometry and realistic textures affect the wayfinding of individuals [17,21].
Given the above background, the purpose of this study is to develop a new system for evaluating ease of wayfinding. The system makes it possible to detect spots that cause disorientation, i.e., “disorientation spots”, based on simulated 3D interactions among realistic wayfinding behaviors, an as-is environment model, and a realistic signage system. In this study, the as-is environment model is an environment model that reflects a given environment as-is, i.e., with detailed 3D geometry including obstacles and with realistic textures. A schematic of the proposed system is shown in Figure 1. To achieve this goal, we draw on the results of our previous studies, in which we developed algorithms for as-is environment modeling [22], walking simulation of a digital human model (DHM) in such an environment model [23], and basic wayfinding simulation of the DHM [24].
As shown in Figure 1, first, the as-is environment model, consisting of the walk surface points W_S, navigation graph G_N, and textured 3D environmental geometry G_I, is automatically generated from 3D laser-scanned point clouds [22] and a set of photographs of the environment [24]. Next, a set of signage entities is created by the user by manually assigning signage information, and a wayfinding simulation scenario is specified manually by the user. Thereafter, the DHM commences its wayfinding in accordance with the navigation information indicated by the arranged signage, while estimating signage visibility, noticeability, and legibility based on imitated visual perception. As a result, disorientation spots are detected.
The proposed system is demonstrated in a virtual maze and a real two-story indoor environment. The system is further validated by comparing the disorientation spots detected by the simulation with those obtained in a test involving six young subjects in the two-story indoor environment.
The rest of this paper is organized as follows. Section 2 introduces the related literature and clarifies the contributions of this study. Section 3 presents a brief introduction of the previously developed as-is environment modeling system [22,24]. In Section 4, an overview of signage entity creation is described. In Section 5, the algorithm for the simulation in which DHM performs wayfinding is introduced. Finally, in Section 6, the system is demonstrated and validated.

2. Related Work

This study is related primarily to wayfinding simulation research. A variety of simulation algorithms aiming to evaluate the ease of wayfinding have been studied.
Chen et al. [14] proposed a wayfinding simulation algorithm based on architectural information, such as egress width, height, contrast intensity, and room illumination, in a 3D as-planned environment model. Furthermore, Morrow et al. [15] proposed an environmental visibility evaluation system using a 3D pedestrian model. In that study, environmental visibility from pedestrian models was evaluated to assist facility managers in designing architectural layouts and signage placement. However, these studies [14,15] are not applicable to the evaluation of ease of wayfinding based on a signage system because their pedestrian models did not incorporate the surrounding signage into the simulation.
Hajibabai et al. [16] proposed a wayfinding simulation using directional signage in a 2D as-planned environment model for emergency evacuation during a fire. The 2D pedestrian model used in the study could make decisions about its walking route based on perceived signage and fire propagation. However, in that study, signage visibility and legibility were estimated with an oversimplified model of human visual perception, and signage noticeability was not considered. In addition, performing a precise 3D wayfinding simulation with a 3D as-is environment model is infeasible in their framework.
Recently, signage-based 3D wayfinding simulation has been advancing. Brunnhuber et al. [17] and Becker-Asano et al. [18] proposed schemes for wayfinding simulation using directional and identification signage in a 3D as-planned environment model. In these simulations, the next walking direction of the pedestrian models was determined autonomously based on the navigation information on the perceived signage. Signage perception was realized by estimating signage visibility and legibility based on the imitated visual perception of the pedestrian model. However, signage noticeability was not considered in these simulations, although it has a significant effect on the wayfinding of people in unfamiliar environments [9].
More recently, advanced approaches for estimating suitable signage locations have been proposed. Zhang et al. [19] proposed a system for planning the placement of directional signage for evacuation. In their system, the minimum number of signs and appropriate signage locations were determined automatically by simulating interactions between the pedestrian models and the signage system. In addition, Motamedi et al. [20] proposed a system for optimizing the arrangement of directional and identification signage in building information model (BIM)-enabled environments. Their system estimated the optimal signage arrangement based on signage visibility and legibility for a 3D pedestrian model walking in a BIM-based environment model. However, as in the other previous simulations, signage noticeability was not considered in these studies [19,20]. In addition, the system in [19] was validated with an oversimplified environment model imitating a large rectangular space with an egress, and its feasibility in realistic and complex as-is environments was not validated. In the system in [20], the walking route of the pedestrian model was not changed based on the navigation information indicated by perceived signage, so evaluation based on signage continuity was essentially infeasible.
Furthermore, these simulations [16,17,18,19,20] treated only one or two types of signage, namely, directional and/or identification signage. Thus, they cannot be applied to actual signage systems that include all the signage types in Table 1.
Moreover, with the exception of the simulation proposed by Motamedi et al. [20], the previous wayfinding simulations used simplified as-planned environment models. Therefore, to realize a reliable evaluation of ease of wayfinding with them, simulation users and/or facility managers would have to create detailed and realistic as-planned environment models, including small obstacles and environmental textures, based on measurements of the environment.
Unlike the simulations developed in these previous studies [14,15,16,17,18,19,20], the proposed system can evaluate the ease of wayfinding by simulating 3D interactions among realistic wayfinding behaviors, an as-is environment model, and a realistic signage system. Specifically, the contributions of the present study are as follows:
  • The DHM can make decisions based on surrounding signage perceived through its imitated visual perception, in consideration of signage location, continuity, visibility, noticeability, and legibility.
  • An as-is environment model, including detailed environmental geometry and realistic textures, can be generated automatically using 3D laser scanning and SfM.
  • The proposed system can simulate the wayfinding of the DHM by discriminating among four types of signage, namely, positional, directional, routing, and identification signage.
  • The proposed system is validated through a comparison of disorientation spots between simulations and measurements obtained from young subjects.

3. Automatic 3D As-Is Environment Modeling

In the proposed system, first, an as-is environment model is generated automatically. As shown in Figure 2, the model comprises walk surface points W_S, a navigation graph G_N, and textured 3D environmental geometry G_I. W_S represents a set of laser-scanned point clouds on walkable surfaces such as floors, slopes, and stair treads; specifically, W_S is used to estimate the footprints of the DHM during the simulation. G_N, generated from W_S, represents the environmental pathways that the DHM can navigate during the simulation. The graph G_N = (V, E, c, t, E_S) comprises a set of graph nodes V and a set of edges E. Each node v_k ∈ V represents free space in the environment and has a position vector t(v_k) and a cylinder attribute c(v_k), whose radius r(v_k) and height h_v represent the distance to the wall and the walkable step height, respectively. Each edge e_k ∈ E, representing the connectivity of free spaces, is generated between two adjacent nodes with a common region. E_S = {es_k} represents a set of stair edges, each connecting two graph nodes at the ends of a flight of stairs. W_S and G_N can be generated automatically using our method [22]. By contrast, G_I represents a 3D mesh model with high-quality textures, and it is used to estimate signage visibility and noticeability during the simulation. G_I can be created automatically using SfM with a set of photographs of the environment [24]. Detailed algorithms and demonstrations are given in our previous studies [22,23,24].
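To make this structure concrete, the following is a minimal sketch of the as-is environment model's data layout in Python with numpy; the class and field names are illustrative assumptions, not the authors' (C++) implementation.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class NavNode:
    t: np.ndarray   # position vector t(v_k)
    r: float        # cylinder radius r(v_k): distance to the nearest wall
    h: float        # cylinder height h_v: walkable step height

@dataclass
class NavigationGraph:
    """G_N = (V, E, c, t, E_S): free-space nodes plus connectivity."""
    V: list                               # list of NavNode
    E: set = field(default_factory=set)   # edges {(k, l)} between adjacent nodes
    E_S: set = field(default_factory=set) # stair edges {es_k} at the ends of stairs

@dataclass
class AsIsEnvironmentModel:
    W_S: np.ndarray        # (N, 3) laser-scanned points on walkable surfaces
    G_N: NavigationGraph   # pathways the DHM can navigate
    G_I: object            # textured 3D mesh used for visibility/noticeability
```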

4. Creation of Signage Entity

In the proposed scheme, the signage system is modeled as a set of signage entities S = {S_i}. Each signage entity S_i = [G_i, I_i] consists of a 3D textured mesh model G_i of the signage and a set of signage information entities I_i = {I_{i,j}} (j ∈ [1, N_i]), where N_i represents the number of signage information items included in S_i. When modeling existing signage, G_i is constructed using SfM; otherwise, G_i is created using 3D CAD software. I_{i,j} is created by manually assigning the geometric, navigation, and legibility properties in Table 2. The details are given below.

4.1. Geometric Property

The geometric property includes the description region R_g, center position p_g, unit normal vector n_g, width w_g, and transformation matrix T_GI. As shown in Figure 3a, R_g = [p_top, p_bottom] consists of the two diagonal points of the rectangular description region on G_i, in which the signage information is written. p_g, n_g, and w_g are estimated from R_g. T_GI represents the transformation matrix from the local coordinate system X_I of I_{i,j} to the coordinate system X_G of G_i, where X_I is defined to satisfy three conditions: (1) the origin of X_I is located at p_g, (2) the y-axis of X_I is aligned with n_g, and (3) the z-axis of X_I is aligned with the z-axis of X_G. Under this definition, T_GI is calculated automatically from p_g and n_g.
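As an illustration, the sketch below constructs T_GI from p_g and n_g under the three conditions above; the function name and numpy representation are assumptions rather than the authors' implementation.

```python
import numpy as np

def transform_sign_info(p_g: np.ndarray, n_g: np.ndarray) -> np.ndarray:
    """Build the 4x4 matrix T_GI mapping X_I coordinates to X_G."""
    y = n_g / np.linalg.norm(n_g)        # condition (2): y-axis aligned with n_g
    z = np.array([0.0, 0.0, 1.0])        # condition (3): z-axis of X_G
    x = np.cross(y, z)
    x /= np.linalg.norm(x)
    z = np.cross(x, y)                   # re-orthogonalize into a right-handed frame
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2] = x, y, z   # rotation columns
    T[:3, 3] = p_g                           # condition (1): origin at p_g
    return T
```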

4.2. Navigation Property

The navigation property includes the type of signage T_n, name of indicated place D_n, and navigation information N_I. As listed in Table 3 and shown in Figure 3b, N_I is assigned by the user in accordance with T_n. The user must specify a next goal position p_n, a next walking direction d_n, and a set of passing points P_N for positional, directional, and routing signage, respectively. p_n and P_N are specified w.r.t. the coordinate system X_W of the textured environmental geometry G_I. By contrast, d_n is specified w.r.t. X_I of I_{i,j}.

4.3. Legibility Property

The legibility property includes the center point p_l w.r.t. X_I of I_{i,j} and the radius r_l of the 3D visibility catchment area (VCA). As shown in Figure 3c, the 3D VCA of a sign represents a sphere within which people can recognize the information written on the sign. The VCA was defined originally as a 2D circle by Filippidis et al. [25] and Xie et al. [26]. In this study, the 3D VCA is calculated such that the great circle of the sphere on the horizontal plane corresponds to the 2D VCA circle proposed by Xie et al. [26]. Specifically, p_l and r_l are calculated using the following equation:
\[ r_l = \frac{w_g}{2 \sin \varphi_l}, \qquad p_l = p_g + n_g \left( \frac{w_g}{2 \tan \varphi_l} \right), \qquad \varphi_l = \tan^{-1}\left( \frac{w_g}{2 d_l} \right), \tag{1} \]
where d_l represents the maximum viewing distance between the sign and a subject standing at the farthest place from which the subject can still recognize the information on the sign. By measuring d_l from subjects, the legible space of the sign is calculated as the 3D VCA using Equation (1).
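The sketch below transcribes Equation (1) directly; in the sanity check, the sign width w_g = 0.5 m is an assumed value, while d_l = 4.46 m is the viewing distance measured in Section 6.2.

```python
import numpy as np

def vca(p_g: np.ndarray, n_g: np.ndarray, w_g: float, d_l: float):
    """3D visibility catchment area (VCA) from Equation (1); n_g is a unit vector."""
    phi_l = np.arctan(w_g / (2.0 * d_l))              # half viewing angle
    r_l = w_g / (2.0 * np.sin(phi_l))                 # sphere radius
    p_l = p_g + n_g * (w_g / (2.0 * np.tan(phi_l)))   # center; note w_g/(2 tan phi_l) = d_l
    return p_l, r_l

p_l, r_l = vca(np.zeros(3), np.array([0.0, 1.0, 0.0]), 0.5, 4.46)
# r_l ≈ 4.467 m, and the sphere center lies d_l = 4.46 m in front of the sign
```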

5. System for Evaluation of Ease of Wayfinding

As shown in Figure 1, the wayfinding simulation using the DHM is performed in accordance with the user-specified wayfinding scenario, including the DHM properties H = [M, θ_H, θ_V, n_t], start position p_s, initial walking direction d_I, name of destination D, and signage locations and orientations T_s = {T_i}, where M, θ_H, θ_V, n_t, and T_i represent motion-capture (MoCap) data for flat walking obtained from the gait database [27], the horizontal and vertical angles of the view frustum, the threshold value of signage noticeability, and the transformation matrix from X_G to X_W, respectively.
Before the simulation, the location and orientation of each signage entity S_i ∈ S are determined by assigning T_i ∈ T_s. Then, a DHM having the same body dimensions as the subject of M is generated. As shown in Figure 4, the DHM has 41 degrees of freedom and a link mechanism corresponding to that of M. The imitated eye position p_eye of the DHM is estimated as the midpoint between the top of the head and the neck.
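A scenario of this form could be encoded as follows; the class and field names are illustrative assumptions, and the MoCap data M is left as an opaque object.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class WayfindingScenario:
    M: object        # MoCap data for flat walking (gait database [27])
    theta_H: float   # horizontal angle of the view frustum
    theta_V: float   # vertical angle of the view frustum
    n_t: float       # noticeability threshold, e.g., 0.3 in Section 6.2
    p_s: np.ndarray  # start position
    d_I: np.ndarray  # initial walking direction
    D: str           # name of the destination
    T_s: list        # 4x4 placement matrices T_i (X_G -> X_W), one per sign
```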
Finally, the wayfinding simulation is performed by repeating the algorithms described in the following subsections.

5.1. Signage Perception Based on Imitated Visual Perception

In the proposed system, signage visibility, noticeability, and legibility are estimated to determine whether a sign is found and its information is recognized by the DHM. The details are described in the following subsections.

5.1.1. Signage Visibility Estimation

Signage visibility represents whether a sign is included in the view frustum of the DHM defined by θ_H and θ_V. As shown in Figure 5, it is estimated simply by scanning the eyesight of the DHM. First, the eyesight of the DHM is obtained using OpenGL by rendering an image from a camera model located at the DHM eye position p_eye. At the same time, as shown in the figure, the textured 3D environmental geometry G_I and the textured 3D mesh model G_i of each sign S_i ∈ S are rendered with a single flat color instead of their original textures. Finally, if the color of G_i appears in the rendered image, S_i is considered a “visible” sign and is inserted into the set of visible signage entities S_vis = {S_k}.
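In essence, the check tests which flat “id” colors survive in the rendered view. A minimal sketch follows, assuming a hypothetical render_flat_colors() helper that renders G_I and every G_i in unique flat colors from a camera at p_eye:

```python
import numpy as np

def visible_signage(render_flat_colors, signage, p_eye, gaze, theta_H, theta_V):
    """Return S_vis: the signs whose id color appears in the rendered image."""
    # render_flat_colors() is assumed here: it draws G_I and each G_i in a
    # unique flat RGB color from a camera at p_eye with the given frustum.
    img = render_flat_colors(p_eye, gaze, theta_H, theta_V)   # (H, W, 3) uint8
    on_screen = {tuple(c) for c in img.reshape(-1, 3)}        # colors present
    return [s for s in signage if tuple(s.id_color) in on_screen]
```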

5.1.2. Signage Noticeability Estimation

Because people overlook objects within their field of view, it is not always true that the DHM finds a sign S_i merely because S_i is visible (S_i ∈ S_vis). Therefore, signage noticeability, representing whether the DHM notices S_i ∈ S_vis, must be estimated.
In the proposed system, signage noticeability is estimated using the saliency estimation algorithm proposed by Itti et al. [28], which is based on the visual search mechanism of real humans [29]. In this algorithm, a Gaussian pyramid is first generated from an image rendered by the camera model at p_eye. Then, feature maps representing contrasts of intensity, color differences, and orientations are obtained from each image. By integrating and normalizing the feature maps, a saliency map M_s = {m(x, y)} is generated, where m(x, y) ∈ [0, 1] represents the degree of saliency at pixel (x, y). In the map, m(x, y) increases at pixels where the contrasts of intensity, color differences, and orientations are higher than those of other pixels. Finally, as shown in Figure 6, the proposed system estimates the noticeability n_i of a visible sign S_i ∈ S_vis using the following equation:
\[ n_i = \max_{(x, y) \in P_i} m(x, y), \tag{2} \]
where m(x, y) and P_i represent the degree of saliency at pixel (x, y) in M_s and the set of pixels in which the signage geometry G_i is rendered, respectively. If n_i is greater than the noticeability threshold n_t of the user-specified wayfinding scenario, S_i is considered a “found” sign and is inserted into the set of found signage entities S_found = {S_k} (S_found ⊆ S_vis).
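A sketch of this test is given below. OpenCV does not ship Itti et al.'s model, so the spectral-residual static saliency from opencv-contrib-python is used as a stand-in; the maximum-over-pixels rule and the n_t threshold follow Equation (2).

```python
import cv2
import numpy as np

def noticeability(view_bgr: np.ndarray, sign_mask: np.ndarray, n_t: float = 0.3):
    """Estimate n_i for one visible sign; sign_mask marks the pixels P_i
    in which the signage geometry G_i is rendered."""
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = saliency.computeSaliency(view_bgr)   # saliency map in [0, 1]
    if not ok:
        raise RuntimeError("saliency computation failed")
    n_i = float(sal_map[sign_mask].max())              # Equation (2)
    return n_i, n_i > n_t                              # "found" if n_i exceeds n_t
```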

5.1.3. Signage Legibility Estimation

Signage legibility represents whether the DHM can recognize the signage information of a found sign S_i ∈ S_found, i.e., whether the DHM can read the textual or graphical information written on the sign. It is estimated using the 3D VCA of the signage information I_{i,j} of S_i. If p_eye is included in the 3D VCA of I_{i,j}, I_{i,j} is considered “recognized” signage information. In the proposed system, it is assumed that the DHM can correctly interpret I_{i,j} only when S_i is found (i.e., S_i ∈ S_found) and I_{i,j} is recognized. Note that the signage noticeability n_i does not influence the signage legibility estimation.
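The test thus reduces to a point-in-sphere check against the VCA of Equation (1) (see the vca() sketch in Section 4.3); the function below is an illustrative transcription.

```python
import numpy as np

def is_legible(p_eye: np.ndarray, p_l: np.ndarray, r_l: float) -> bool:
    """I_{i,j} is 'recognized' iff the DHM's eye lies inside its 3D VCA."""
    return float(np.linalg.norm(p_eye - p_l)) <= r_l
```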

5.2. Wayfinding Decision-Making Based on Signage Perception

Based on the estimated signage visibility, noticeability, and legibility, the wayfinding state of the DHM is changed dynamically in accordance with the state transition chart shown in Figure 7a. As shown in the figure, when the simulation starts, the DHM is set to walk in the direction d_I (state SW1 in Figure 7a). Then, as shown in Figure 7b, when a sign S_i is found by the DHM, i.e., S_i is inserted into S_found, the DHM is set to walk toward the center position p_g of I_{i,j} of S_i (state SW2) to read the information on S_i. Thereafter, other signage S_j does not influence the state transition until the state changes to the look-around state (SW3), even if S_j is found by the DHM. When I_{i,j} of S_i becomes legible, the name of indicated place D_n of I_{i,j} is compared with the name of destination D of the wayfinding scenario. If D_n ≠ D, the state is changed to SW3 to find other signage related to D; otherwise, the state is changed in accordance with the type of recognized signage information T_n. If T_n represents positional, directional, or routing signage, the state is changed to the motion-planning state (SW4). By contrast, if T_n represents identification signage, the state is changed to the success state (SW5), and the simulation is deemed complete because SW5 is the final state.
During the wayfinding simulation, the DHM basically repeats the states SW2, SW4, SW6, and SW3. As shown in Figure 7c, when the DHM recognizes I_{i,j}, it is set to walk toward its temporary destination, i.e., the subgoal position p_sub (SW4 and SW6). Then, as shown in Figure 7d, when the DHM arrives at p_sub, it is made to observe the surrounding environment (i.e., look around) by rotating its neck joint horizontally within its range of motion (SW3). When the DHM finds new signage in this state, the state changes back to SW2. By contrast, when the DHM cannot find any signage, the current DHM position is treated as a “disorientation spot” (SW7). The state SW7 is considered the failed state. Note that the state can change from SW7 to SW8 only when T_n represents directional signage, as described in Section 5.3.1.
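The transition chart of Figure 7a can be encoded compactly as an enumeration; the state names below are our own glosses of SW1–SW8 as described in the text.

```python
from enum import Enum

class WayfindingState(Enum):
    SW1_WALK_INITIAL    = 1  # walk in the initial direction d_I
    SW2_WALK_TO_SIGN    = 2  # approach found signage to read I_{i,j}
    SW3_LOOK_AROUND     = 3  # rotate the neck to search for new signage
    SW4_MOTION_PLANNING = 4  # plan a route from the recognized information
    SW5_SUCCESS         = 5  # identification signage of destination D reached
    SW6_WALK_TO_SUBGOAL = 6  # walk toward the subgoal position p_sub
    SW7_DISORIENTED     = 7  # no signage found: disorientation spot (failed state)
    SW8_NEXT_FORK       = 8  # resume toward the next fork point (directional only)
```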

5.3. Signage-Based Motion Planning

5.3.1. Updating Subgoal Position of DHM

In the signage-based motion-planning state (SW4), first, the subgoal position p_sub is determined automatically depending on the type of recognized signage information T_n and its navigation information N_I.
When T_n = positional, p_sub is set to the next goal position p_n of N_I to make the DHM walk toward the location indicated by the recognized signage information I_{i,j}.
When T_n = directional, as shown in Figure 8, a queue of fork points F = {p_m} is extracted by the following steps.
(1)
A graph node v_c (v_c ∈ V) just under the pelvis position p_p of the DHM is extracted from the navigation graph G_N. Then, v_c is inserted into a set of graph nodes V_P, where V_P represents the graph nodes on a feasible walking path when the DHM walks in accordance with the next walking direction d_n indicated by I_{i,j}.
(2)
v_c and d_n of I_{i,j} are assigned to the variables v_t and d_t, respectively.
(3)
A graph node v_p located in the direction of d_t is extracted using the following equation:
\[ p = \operatorname*{arg\,max}_{k \in N_t} \; d_k \cdot d_t, \qquad d_k = \frac{t(v_k) - t(v_t)}{\left\| t(v_k) - t(v_t) \right\|}, \tag{3} \]
where N_t represents the set of indices of graph nodes v_k (v_k ∉ V_P) connected to v_t by a graph edge. Using this equation, v_p is determined as the graph node with the minimum angle difference between d_t and the graph edge connecting v_p and v_t.
(4)
If N_t ≠ ∅, v_p is inserted into V_P, and v_p and d_p are assigned to v_t and d_t, respectively.
(5)
If |N_t| ≥ 2 or N_t = ∅, t(v_p) is pushed into F because t(v_p) is considered the center position of a fork or the terminal of the walkway.
(6)
Steps (3)–(5) are repeated until N_t = ∅, i.e., until a graph node representing the terminal of the walkway is found.
When the wayfinding state is changed to SW4 or SW8 in Figure 7a, the first fork point is taken from F and assigned to p_sub. This algorithm enables the proposed system to detect multiple disorientation spots, i.e., fork points with no visible and noticeable signage, after perceiving directional signage; a sketch of the procedure is given below.
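The sketch below transcribes steps (1)–(6); the graph interface (neighbors() and node positions t[v]) is an assumption, and pushing the terminal node when N_t = ∅ follows step (5).

```python
import numpy as np

def extract_fork_points(graph, v_c, d_n):
    """Walk G_N from node v_c along direction d_n, collecting fork points F."""
    V_P = [v_c]                         # step (1): nodes on the feasible path
    F = []                              # queue of fork points
    v_t, d_t = v_c, np.asarray(d_n, dtype=float)   # step (2)
    while True:
        N_t = [v for v in graph.neighbors(v_t) if v not in V_P]
        if not N_t:                     # terminal of the walkway
            F.append(graph.t[v_t])
            break
        # step (3), Equation (3): neighbor whose edge direction best matches d_t
        d = {v: (graph.t[v] - graph.t[v_t]) /
                np.linalg.norm(graph.t[v] - graph.t[v_t]) for v in N_t}
        v_p = max(N_t, key=lambda v: float(d[v] @ d_t))
        if len(N_t) >= 2:               # step (5): center of a fork
            F.append(graph.t[v_p])
        V_P.append(v_p)                 # step (4)
        v_t, d_t = v_p, d[v_p]          # repeat steps (3)-(5) until N_t is empty
    return F, V_P
```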
When T_n = routing, p_sub is determined as the last element of the set of passing points P_N of N_I indicated by I_{i,j}. Then, the walking path V_P of the DHM is estimated such that it passes the graph nodes at the points p_k ∈ P_N, as described in Section 5.3.2.

5.3.2. Walking Path Selection and Walking Trajectory Generation

As shown in Figure 9, after determining the subgoal position p_sub, the walking path V_P = {v_i} (v_i ∈ V) of the DHM is determined automatically by the following function:
\[ V_P = \operatorname{Path}(p_a, p_b), \tag{4} \]
where Path(p_a, p_b) represents a function that selects a set of graph nodes V_P between the two nodes located at p_a and p_b from G_N using Dijkstra's algorithm.
When the wayfinding state is changed to SW2 with a visible sign S_i ∈ S_vis, t(v_c) and p_g are assigned to p_a and p_b, respectively, where v_c represents the graph node just under the DHM pelvis position p_p, and p_g represents the center position of I_{i,j} of S_i. By contrast, when the state is changed to SW4, p_a and p_b are determined depending on the type of recognized signage information T_n. When T_n = positional or T_n = directional, t(v_c) and p_sub are assigned to p_a and p_b, respectively. By contrast, when T_n = routing, V_P is determined as \( V_P = \bigcup_{k=0}^{|P_N|-1} \operatorname{Path}(p_k, p_{k+1}) \), where p_k ∈ P_N is a passing point on the walking route indicated by N_I of I_{i,j}. A prototyping sketch of the Path function follows.
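Path(p_a, p_b) can be prototyped with an off-the-shelf Dijkstra routine; the networkx-based sketch below, including the nearest-node snapping of p_a and p_b onto G_N, is an assumption rather than the authors' implementation.

```python
import networkx as nx
import numpy as np

def path(G_N: nx.Graph, pos: dict, p_a: np.ndarray, p_b: np.ndarray) -> list:
    """Sketch of Path(p_a, p_b): Dijkstra between the graph nodes nearest
    to p_a and p_b, where pos maps each node to its position t(v)."""
    nearest = lambda p: min(G_N.nodes, key=lambda v: np.linalg.norm(pos[v] - p))
    return nx.dijkstra_path(G_N, nearest(p_a), nearest(p_b), weight="weight")
```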
After determining V_P, the walking trajectory V_T = {p_i} is generated automatically by our previously developed optimization algorithm [23], where V_T represents a sequence of sparsely discretized target pelvis positions of the DHM. This optimization algorithm is designed to make V_T natural and smooth while avoiding contact with walls. The details are described in [23].

5.4. MoCap-Based Adaptive Walking Motion Generation

Finally, the walking motion of the DHM is generated as it follows V_T using our MoCap-based adaptive walking motion generation algorithm [23]. In this algorithm, realistic articulated walking movements of the DHM are generated based on the MoCap data M for flat walking. The details and demonstrations are presented in [23].

6. Results and Validations

The proposed system was developed using Visual Studio 2010 Professional edition with C++. The system was applied to a virtual maze and a real two-story indoor environment. In addition, it was validated by comparing the disorientation spots between the simulation and measurements obtained from young subjects. Videos of as-is environment modeling and wayfinding simulation results, i.e., Figure 10, Figure 11, Figure 12 and Figure 13, are available in the supplementary video file.

6.1. Evaluation of Ease of Wayfinding in Virtual Maze

The proposed system was first applied to a virtual maze with a set of signage entities S = {S_1, S_2, S_3, S_4, S_5} to test its basic performance. Figure 10 shows the constructed environment model of the virtual maze. In the figure, the textured environmental geometry G_I was constructed manually using CAD software [30], and the set of walk surface points W_S and the navigation graph G_N were constructed from the set of vertices of G_I. Note that the proposed system can operate not only on an as-is environment model but also on a given 3D model of the environment, e.g., CAD data, by converting the model to dense point clouds.
Table 4 and Table 5 show the wayfinding scenario and the user-assigned parameters of each signage information entity I_{i,j}, respectively. As shown in Table 5, all four types of signage were used.
Figure 11 shows the evaluation results for ease of wayfinding. As shown in Figure 11a, when the simulation was performed, the DHM found S_1 and recognized I_{1,1}. Consequently, the DHM was set to walk toward the next goal position p_n indicated by I_{1,1}. When the DHM arrived at p_n, it found S_2 and recognized I_{2,1} (Figure 11b). A feasible walking path V_P and a set of fork points F of I_{2,1} were then extracted, and the DHM was set to walk toward the first fork point p_1 ∈ F of I_{2,1}. After that, the DHM found S_3 and recognized I_{3,1} at p_1 ∈ F. Then, as shown in Figure 11c, V_P and F of I_{3,1} were extracted, and the DHM was set to walk toward p_1 ∈ F of I_{3,1}. However, as shown in Figure 11d, the DHM could not find any new signage when it arrived at p_1 ∈ F of I_{3,1}. Therefore, this spot was detected as a disorientation spot. As recommended by the international standard [2], a facility manager must provide signage at all key decision points such as forks. From this standpoint, the detection of this disorientation spot can be considered reasonable.
Thereafter, as shown in Figure 11e, the DHM was set to walk toward p_2 ∈ F indicated by I_{3,1} to evaluate the ease of wayfinding after passing the detected disorientation spot. Consequently, the DHM found S_4 and recognized I_{4,1} at p_2 ∈ F of I_{3,1}. Then, the DHM was set to walk toward p_4 ∈ P_N of I_{4,1}, following the V_P generated on the passing points p_i ∈ P_N of I_{4,1}. Finally, as shown in Figure 11f, the DHM found S_5 and recognized I_{5,1}, where S_5 was the identification sign pertaining to the destination D. Consequently, the wayfinding simulation was completed.
Based on the above results, from the standpoint of system performance, the following conclusions were obtained.
  • The proposed system could detect disorientation spots resulting from the lack of signage or poor location of signage in the environment model.
  • The proposed system could simulate the wayfinding of the DHM by discriminating among four types of signage, namely, positional, directional, routing, and identification.

6.2. Evaluation Results of Ease of Wayfinding in Real Two-Story Indoor Environment

The proposed system was further applied to a real two-story indoor environment with a set of signage entities S = {S_1, S_2, S_3, S_4}. Figure 12 shows the constructed as-is environment model. In Figure 12, the laser-scanned point clouds were acquired from the environment by a terrestrial laser scanner [32]. The textured environmental geometry G_I was constructed from 21,143 photos of the environment using the commercial SfM software ContextCapture [33], where the photos were extracted from video data captured using a digital single-lens reflex camera [34]. As shown in Figure 12c, the model contains a few distorted regions, which can be attributed to the performance limitations of the SfM software; however, most of the model was generated successfully.
In the simulation, the DHM properties H of the wayfinding scenario were identical to those in Table 4. The start position p_s, initial walking direction d_I, and signage locations and orientations T_S are shown in Figure 12d,e. The maximum viewing distance was specified as d_l = 4.46 m for each signage information entity I_{i,j}, as determined by measuring d_l of S_1 using six subjects ranging in age from 22 to 26 years. A positional sign S_1, two directional signs S_2 and S_3, and an identification sign S_4 were arranged in the environment to simulate a situation in which people try to find a conference room using only the signage in an unfamiliar indoor environment.
Figure 13 shows the evaluation results for ease of wayfinding. As shown in Figure 13a, when the simulation was performed, the DHM found S_1 and recognized I_{1,1}. Since the next goal position p_n indicated by I_{1,1} was specified at the top of the spiral staircase on the second floor, the DHM was set to ascend the staircase. When the DHM arrived at p_n of I_{1,1}, it was made to observe the surrounding environment to find new signage. However, as shown in Figure 13b, the DHM could not find S_2 although S_2 was visible. This was because the estimated signage noticeability of S_2 at that spot, n_2 = 0.27, was less than the user-specified threshold n_t = 0.3. Thus, this spot was detected as a disorientation spot because S_2 was overlooked.
Based on the above result, in Figure 13c, the signage design of S_2, i.e., the texture on G_2, was improved to enhance its noticeability. As a result, the ease of wayfinding improved, enabling the DHM to find S_2 at the previously detected disorientation spot. This improvement occurred because n_2 of S_2, viewed by the DHM standing at that disorientation spot, increased to a sufficiently large value, n_2 = 0.68.
After the DHM recognized I_{2,1}, it was set to walk toward the first fork point p_1 indicated by I_{2,1}. However, as shown in Figure 13c, when the DHM arrived at p_1 of I_{2,1}, the wayfinding state fell into SW7, i.e., the DHM got lost, because no signage could be seen by the DHM at p_1. Therefore, this spot was also detected as a disorientation spot, owing to the lack of signage.
By contrast, in Figure 13d, a new positional sign S_5 was arranged around the detected disorientation spot. As a result, as shown in the figure, the wayfinding simulation of the DHM was completed successfully.
As described above, the proposed system enables the user to evaluate the ease of wayfinding in the environment interactively by considering the wayfinding of the DHM, the as-is environment model, and the arranged signage system. From the standpoint of system performance, the following conclusions were obtained.
  • The proposed system could detect disorientation spots resulting from lack of signage and from overlooked signage.
  • The proposed system could simulate the wayfinding of the DHM even in the realistic and complex as-is environment model.
  • The proposed system could quickly re-evaluate rearranged signage based on the simulation.

6.3. Efficiency of Environment Modeling and Simulation

Table 6 shows the elapsed times of the as-is environment modeling and the simulation. As shown in the table, the time for 3D environment modeling from laser-scanned point clouds was less than one minute in both environments. By contrast, owing to the performance limitations of the SfM software [33], construction of the textured environmental geometry G_I required approximately one week.
Furthermore, the time required for signage visibility, legibility, and noticeability estimation was less than 0.17 s. In addition, the times required for one-step walking motion generation were 0.15 s and 2.5 s in the virtual maze and the two-story indoor environment, respectively. Therefore, it was confirmed that the proposed system could simulate the DHM wayfinding efficiently. Note that the time required for walking motion generation in the two-story indoor environment was longer owing to the high computational load of rendering the environment model.

6.4. Experimental Validation of System for Evaluating Ease of Wayfinding

6.4.1. Overview of Wayfinding Experiment

The simulation results on ease of wayfinding presented in Section 6.2 were validated by a wayfinding experiment using six young subjects. In the validation, two signage systems imitating S = {S_1, S_2, S_3, S_4} and S ∪ {S_5} were arranged in the real environment, where S and S_5 represent the set of signage entities used in the simulations in Figure 13a,b and the sign added in the simulation in Figure 13d, respectively. In the wayfinding experiment, first, the name of the destination was revealed to the subjects at the start position p_s. Then, the subjects were asked to find their way to the destination using the arranged signage system. During this process, wayfinding events such as finding signage and recognizing signage information were recorded by the thinking-aloud method [36], in which the subjects were asked to walk while continuously thinking out loud. Verbal information from the subjects was recorded by handheld voice recorders. At the same time, videos of the walking trajectories of the subjects were captured by the observer. Finally, when the subjects arrived at the destination, the experiment was deemed complete. Note that all subjects regularly used the environment, but the locations of the arranged signage and the destination were not revealed to them. In addition, in the simulation results in Section 6.2, the maximum viewing distance d_l was specified by measuring d_l from these six subjects.
In the experiments, first, the wayfinding behaviors of three young subjects (Y1–Y3) were measured using the signage system imitating S. After that, the behaviors of the other three young subjects (Y4–Y6) were measured using the signage system imitating S ∪ {S_5}.

6.4.2. Comparison of Wayfinding Results between DHM and Subjects

Figure 14 shows the comparison of the wayfinding results between the DHM and the subjects. As shown in Figure 14a, a disorientation spot was found during the experiment by all three subjects (Y1–Y3), and it corresponded to the disorientation spot detected by the simulation. Thus, it was confirmed that the proposed ease-of-wayfinding simulation could detect a disorientation spot where the subjects actually lost their way owing to the lack of signage.
By contrast, as shown in Figure 14b, two subjects, Y4 and Y5, arrived at the destination when the signage system imitating S ∪ {S_5} was arranged. However, a disorientation spot was found during the experiment by subject Y6, who overlooked the sign imitating S_2. As shown in Figure 14a, this disorientation spot was also detected in the simulation because the DHM could not find S_2 owing to its low noticeability. Therefore, it was further confirmed that the proposed system could detect a disorientation spot where subjects actually lost their way owing to overlooked signage.

7. Conclusions

In this study, we developed a simulation-based system for evaluating ease of wayfinding using a DHM in an as-is environment model. The proposed system was demonstrated using a virtual maze and a real two-story indoor environment. The following conclusions were drawn from our results:
  • Our system makes it possible to evaluate the ease of wayfinding by simulating the 3D interactions among the realistic wayfinding behaviors of a DHM, an as-is environment model, and a realistic signage system.
  • Under a user-specified wayfinding scenario, the system simulates the wayfinding of the DHM by evaluating signage locations, continuity, visibility, legibility, and noticeability based on the imitated visual perception of the DHM.
  • A realistic signage system, including four types of signage, namely, positional, directional, routing, and identification, can be discriminated in the wayfinding simulation.
  • Disorientation spots owing to lack of signage and overlooked signage can be identified simply by conducting the simulation.
  • Rearranged signage plans can be re-evaluated quickly by carrying out the simulation alone.
Our system was further validated by a comparison of disorientation spots between simulations and measurements obtained from six young subjects. This validation confirmed that the proposed system can detect disorientation spots where people lose their way owing to lack of signage or overlooked signage.
To validate the performance of the proposed system in detail, wayfinding experiments with a greater number of subjects in various as-is environments, including outdoor environments, must be conducted using more complex wayfinding scenarios in future work. Furthermore, in Section 6.1 and Section 6.2, the noticeability threshold n_t was specified without reference to measurements of human visual capabilities. In practice, n_t should be specified as the minimum value estimated for the dominant users of the environment in consideration of their visual capabilities. Therefore, a method for determining a suitable value of n_t using a statistical database of human visual capabilities [37] will be developed in future work.
The textured environmental geometry G_I of the two-story indoor environment included a few distorted regions owing to performance limitations of the SfM software and poor textures on the walls. In the proposed system, G_I was used for signage noticeability estimation. From the standpoint of evaluating ease of wayfinding, the system must detect disorientation spots where low signage noticeability is expected. In general, signage noticeability decreases in areas where the wall surfaces around the signage are complex and highly textured, i.e., where the saliency of the signage design is relatively low compared with its surroundings. Fortunately, in such areas, G_I can be reconstructed well owing to the nature of the SfM algorithm. Therefore, the proposed system can detect disorientation spots resulting from overlooked signage even if parts of G_I are distorted.
Furthermore, as mentioned in the literature [20], the presence of crowds influences the ease of wayfinding. Thus, crowd simulation technologies should be introduced into the proposed simulation framework. In addition, in the proposed system, the walking trajectory of the DHM was generated using a previously developed optimization algorithm [23]. However, as observed in Figure 14, the walking trajectories of individual human subjects vary. In future work, such variability will be considered by introducing Monte Carlo simulation into the proposed system, i.e., generating a variety of DHM walking trajectories using the algorithm [23] with resampled trajectory-generation parameters.

Supplementary Materials

The following is available online at www.mdpi.com/2220-9964/6/9/267/s1, Video S1: EvaluationResults.mp4.

Acknowledgments

This work was supported by JSPS KAKENHI Grant No. 15J01552 and a JSPS Grant-in-Aid for Challenging Exploratory Research under Project No. 26560168.

Author Contributions

Tsubasa Maruyama proposed the original idea of this paper; Tsubasa Maruyama developed the entire system and performed the experiments; Satoshi Kanai, Hiroaki Date, and Mitsunori Tada improved the idea of the paper; Tsubasa Maruyama wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. World Health Organization. WHO Global Report on Falls Prevention in Older Age. Available online: http://www.who.int/ageing/publications/Falls_prevention7March.pdf (accessed on 30 June 2017).
  2. International Organization for Standardization. ISO21542: Building Construction—Accessibility and Usability of the Built Environment. Available online: https://www.iso.org/standard/50498.html (accessed on 15 December 2011).
  3. International Organization for Standardization/International Electrotechnical Commission. ISO/IEC Guide 71 Second Edition: Guide for Addressing Accessibility in Standards. Available online: http://www.iec.ch/webstore/freepubs/isoiecguide71%7Bed2.0%7Den.pdf (accessed on 1 December 2014).
  4. Rubenstein, L.Z. Falls in Older People: Epidemiology, Risk Factors and Strategies for Prevention. Available online: https://www.ncbi.nlm.nih.gov/pubmed/16926202 (accessed on 22 June 2017).
  5. Maruyama, T.; Kanai, S.; Date, H. Tripping risk evaluation system based on human behavior simulation in laser-scanned 3D as-is environments. J. Comput. Des. Eng. 2017. under review. [Google Scholar]
  6. Churchill, A.; Dada, E.; de Barros, A.G.; Wirasinghe, S.C. Quantifying and validating measures of airport terminal wayfinding. J. Air Transp. Manag. 2008, 14, 151–158. [Google Scholar] [CrossRef]
  7. Hunt, E.; Waller, D. Orientation and Wayfinding: A Review. Available online: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.46.5608 (accessed on 30 June 2017).
  8. Hölscher, C.; Büchner, S.J.; Brosamle, M.; Meilinger, T.; Strube, G. Signs and maps: Cognitive economy in the use of external aids for indoor navigation. In Proceedings of the 29th Annual Conference of the Cognitive Science Society, Nashville, TN, USA, 1–4 August 2007. [Google Scholar]
  9. Yasufuku, K.; Akizuki, Y.; Hokugo, A.; Takeuchi, Y.; Takashima, A.; Matsui, T.; Suzuki, H.; Pinheiro, A.T.K. Noticeability of illuminated route signs for tsunami evacuation. Fire Saf. J. 2017, in press. [Google Scholar] [CrossRef]
  10. Thora, T.; Bergmann, E.; Konieczny, L. Wayfinding and description strategies in an unfamiliar complex building. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society, Boston, MA, USA, 20–23 July 2011. [Google Scholar]
  11. Vilar, E.; Rebelo, F.; Noriega, P. Indoor human wayfinding performance using vertical and horizontal signage in virtual reality. Hum. Factors Ergon. Manuf. Serv. Ind. 2014, 24, 601–605. [Google Scholar] [CrossRef]
  12. Buechner, S.J.; Wiener, J.; Hölscher, S. Methodological triangulation to assess sign placement. In Proceedings of the Symposium on Eye Tracking Research and Applications, Santa Barbara, CA, USA, 28–30 March 2012. [Google Scholar]
  13. Furubayashi, S.; Yabuki, N.; Fukuda, T. A data model for checking directional signage at railway stations. In Proceedings of the First International Conference on Civil and Building Engineering Informatics, Tokyo, Japan, 7–8 November 2013. [Google Scholar]
  14. Chen, Q.; de Vries, B.; Nivf, M.K. A wayfinding simulation based on architectural features in the virtual built environment. In Proceedings of the 2011 Summer Computer Simulation Conference, The Hague, The Netherlands, 27–30 June 2011. [Google Scholar]
  15. Morrow, E.; Mackenzie, I.; Nema, G.; Park, D. Evaluating three dimensional vision fields in pedestrian microsimulations. Transp. Res. Procedia 2014, 2, 436–441. [Google Scholar] [CrossRef]
  16. Hajibabai, L.; Delavar, M.R.; Malek, M.R.; Frank, A.U. Agent-Based Simulation of Spatial Cognition and Wayfinding in Building Fire Emergency Evacuation. Available online: https://publik.tuwien.ac.at/files/pub-geo_1946.pdf (accessed on 22 June 2017).
  17. Brunnhuber, M.; Schrom-Feiertag, H.; Luksch, C.; Matyus, T.; Hesina, G. Bridging the gaps between visual exploration and agent-based pedestrian simulation in a virtual environment. In Proceedings of the 18th ACM Symposium on Virtual Reality Software and Technology, Toronto, ON, Canada, 10–12 December 2012. [Google Scholar]
  18. Becker-Asano, C.; Ruzzoli, F.; Hölscher, C.; Nebel, B. A multi-agent system based on unity 4 for virtual perception and wayfinding. Transp. Res. Procedia 2014, 2, 452–455. [Google Scholar] [CrossRef]
  19. Zhang, Z.; Jia, L.; Qin, Y. Optimal number and location planning of evacuation signage in public space. Saf. Sci. 2017, 91, 132–147. [Google Scholar] [CrossRef]
  20. Motamedi, A.; Wang, Z.; Yabuki, N.; Fukuda, T.; Michikawa, T. Signage visibility analysis and optimization system using BIM-enabled virtual reality (VR) environments. Adv. Eng. Inf. 2017, 32, 248–262. [Google Scholar] [CrossRef]
  21. Phaholthep, C.; Sawadsri, A.; Bunyasakseri, T. Evidence-based research on barriers and physical limitations in hospital public zones regarding the universal design approach. Asian Soc. Sci. 2017, 13, 133. [Google Scholar] [CrossRef]
  22. Maruyama, T.; Kanai, S.; Date, H. Simulating a Walk of Digital Human Model Directly in Massive 3D Laser-Scanned Point Cloud of Indoor Environments. Available online: https://link.springer.com/chapter/10.1007/978-3-642-39182-8_43 (accessed on 22 June 2017).
  23. Maruyama, T.; Kanai, S.; Date, H.; Tada, M. Motion-capture-based walking simulation of digital human adapted to laser-scanned 3D as-is environments for accessibility evaluation. J. Comput. Des. Eng. 2016, 3, 250–265. [Google Scholar] [CrossRef]
  24. Maruyama, T.; Kanai, S.; Date, H. Vision-based wayfinding simulation of digital human model in three dimensional as-is environment models and its application to accessibility evaluation. In Proceedings of the International Design Engineering Technical Conferences & Computers & Information in Engineering Conference, Charlotte, NC, USA, 6–9 August 2016. [Google Scholar]
  25. Filippidis, L.; Galea, E.R.; Gwynne, S.; Lawrence, P.J. Representing the influence of signage on evacuation behavior within an evacuation model. J. Fire Prot. Eng. 2006, 16, 37–73. [Google Scholar] [CrossRef]
  26. Xie, H.; Filippidis, L.; Gwynne, S.; Galea, E.R.; Blackshields, D.; Lawrence, P.J. Signage legibility distances as a function of observation angle. J. Fire Prot. Eng. 2007, 17, 41–64. [Google Scholar] [CrossRef]
  27. Kobayashi, Y.; Mochimaru, M. AIST Gait Database 2013. Available online: https://www.dh.aist.go.jp/database/gait2013/ (accessed on 22 June 2017).
  28. Itti, L.; Koch, C.; Niebur, E. A model of saliency-based visual-attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 1254–1259. [Google Scholar] [CrossRef]
  29. Treisman, A.M.; Gelade, G. A feature-integration theory of attention. Cogn. Psychol. 1980, 12, 97–136. [Google Scholar] [CrossRef]
  30. FreeCAD: An Open-Source Parametric 3D CAD Modeler. Available online: https://www.freecadweb.org/ (accessed on 22 June 2017).
  31. Takashi, T.; Keita, I.B. Physiology. In Handbook of Environmental Design, 2nd ed.; Koichi, I., Ed.; Maruzen Publishing: Tokyo, Japan, 2003. [Google Scholar]
  32. 3D Laser-Scanner FARO. Available online: http://www.faro.com/products/3d-surveying/laserscanner-faro-focus/overview (accessed on 22 June 2017).
  33. Bentley—Reality Modeling Software. Available online: https://www.bentley.com/en/products/brands/contextcapture (accessed on 22 June 2017).
  34. Nikon D3300. Available online: http://www.nikon-image.com/products/slr/lineup/d3300/ (accessed on 22 June 2017).
  35. Floor Maps of Graduate School of Information Science and Technology. Available online: http://www.ist.hokudai.ac.jp/facilities/ (accessed on 22 June 2017).
  36. O’Neill, M.J. Evaluation of a conceptual model of architectural legibility. Environ. Behav. 1991, 23, 259–284. [Google Scholar] [CrossRef]
  37. Database of Sensory Characteristics of Older Persons with Disabilities. Available online: http://scdb.db.aist.go.jp/ (accessed on 22 June 2017).
Figure 1. Overview of system for evaluating ease of wayfinding.
Figure 1. Overview of system for evaluating ease of wayfinding.
Ijgi 06 00267 g001
Figure 2. 3D as-is environment model: (a) Walk surface points W S ; (b) navigation graph G N ; (c) textured environmental geometry G I .
Figure 2. 3D as-is environment model: (a) Walk surface points W S ; (b) navigation graph G N ; (c) textured environmental geometry G I .
Ijgi 06 00267 g002
Figure 3. Overview of signage information: (a) Geometric property; (b) navigation property; (c) legibility property.
Figure 3. Overview of signage information: (a) Geometric property; (b) navigation property; (c) legibility property.
Ijgi 06 00267 g003
Figure 4. Link mechanism of DHM.
Figure 4. Link mechanism of DHM.
Ijgi 06 00267 g004
Figure 5. Signage visibility estimation: (a) View frustum of DHM; (b) image rendered using OpenGL.
Figure 5. Signage visibility estimation: (a) View frustum of DHM; (b) image rendered using OpenGL.
Ijgi 06 00267 g005
Figure 6. Signage noticeability estimation.
Figure 6. Signage noticeability estimation.
Ijgi 06 00267 g006
Figure 7. Wayfinding decision-making based on signage perception: (a) Wayfinding state transition; (b) walking toward signage; (c) walking toward subgoal position; (d) look-around.
Figure 7. Wayfinding decision-making based on signage perception: (a) Wayfinding state transition; (b) walking toward signage; (c) walking toward subgoal position; (d) look-around.
Ijgi 06 00267 g007
Figure 8. Extraction of fork points from navigation graph.
Figure 8. Extraction of fork points from navigation graph.
Ijgi 06 00267 g008
Figure 9. Examples of walking path selection and walking trajectory generation.
Figure 9. Examples of walking path selection and walking trajectory generation.
Ijgi 06 00267 g009
Figure 10. Environment model of virtual maze: (a) Textured environmental geometry G I (#vertices: 4,241,573, #faces: 8,436,885); (b) walk surface points W S ; (c) navigation graph G N ; (d) wayfinding scenario. The results are available in the supplementary video file.
Figure 10. Environment model of virtual maze: (a) Textured environmental geometry G I (#vertices: 4,241,573, #faces: 8,436,885); (b) walk surface points W S ; (c) navigation graph G N ; (d) wayfinding scenario. The results are available in the supplementary video file.
Ijgi 06 00267 g010
Figure 11. Evaluation results of ease of wayfinding in virtual maze (red lines: graph edges, blue lines: graph edges on V P , cyan lines: V T , yellow lines: walking trajectory of DHM, purple lines: graph edges on V P ): (a) Wayfinding in accordance with S 1 ; (b) wayfinding in accordance with S 2 ; (c) wayfinding in accordance with S 3 ; (d) detecting disorientation spot; (e) wayfinding in accordance with S 4 ; (f) simulation was completed. The results are available in the supplementary video file.
Figure 12. As-is environment model of two-story indoor environment: (a) Laser-scanned point clouds (#points: 5,980,647); (b) navigation graph G_N; (c) textured environmental geometry G_I (#vertices: 625,484; #faces: 1,241,049); (d) wayfinding scenario on first floor [35]; (e) wayfinding scenario on second floor [35]. The results are available in the supplementary video file.
Figure 13. Evaluation results of ease of wayfinding in two-story indoor environment (yellow lines: walking trajectory of DHM): (a) Wayfinding simulation on first floor; (b) detection of disorientation spot caused by overlooking signage S_2; (c) design improvement of S_2 and detection of disorientation spot caused by lack of signage; (d) ease of wayfinding fully improved by changing the design of S_2 and adding S_5. The results are available in the supplementary video file.
Figure 14. Comparison of wayfinding results between simulation and human measurements: (a) Comparison using S = {S_1, S_2, S_3, S_4}; (b) comparison using S ∪ {S_5}.
Table 1. Signage type and navigation information.

| Signage Type | Navigation Information |
| --- | --- |
| Positional signage | Next goal position to be reached to arrive at a destination (e.g., map) |
| Directional signage | Next walking direction to take to reach a destination (e.g., right or left) |
| Routing signage | Walking route to be taken to reach a destination (e.g., route drawn on map or indicated by textual information) |
| Identification signage | Name of current place |
Table 2. Signage information entity.

| Property | Attribute | Assignment Method |
| --- | --- | --- |
| Geometric property | Description region R_g = [p_top, p_bottom] | Assigned by user by picking two diagonal points |
| Geometric property | Center position p_g | Estimated from R_g |
| Geometric property | Unit normal vector n_g | Estimated from R_g |
| Geometric property | Width w_g | Estimated from R_g |
| Geometric property | Transformation matrix T_{G_I} | Estimated from p_g and n_g |
| Navigation property | Type of signage T_n ∈ {positional, directional, routing, identification} | Assigned by user based on the signage design |
| Navigation property | Name of indicated place D_n | Assigned by user based on the signage design |
| Navigation property | Navigation information N_I | Assigned by user based on the signage design |
| Legibility property | Maximum viewing distance d_l | Measured from human subjects |
| Legibility property | Center point of 3D VCA p_l | Estimated from d_l |
| Legibility property | Radius of 3D VCA r_l | Estimated from d_l |
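Table 2 is essentially the schema of a signage entity. A minimal sketch of that schema follows; the field names are ours, the derivations follow the "Assignment Method" column, and the width estimate and VCA derivation are labeled assumptions rather than the authors' formulas.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SignageGeometry:
    """Geometric property of Table 2: everything except the description
    region is derived from the two user-picked diagonal points."""
    p_top: np.ndarray      # first diagonal corner of region R_g
    p_bottom: np.ndarray   # second diagonal corner of region R_g

    @property
    def center(self) -> np.ndarray:            # p_g
        return 0.5 * (self.p_top + self.p_bottom)

    @property
    def width(self) -> float:                  # w_g (axis-aligned assumption)
        return float(np.abs(self.p_top - self.p_bottom).max())

@dataclass
class SignageLegibility:
    """Legibility property of Table 2: the visibility catchment area
    (VCA) is derived from the measured viewing distance d_l."""
    d_l: float             # maximum viewing distance from human subjects

    def vca(self, p_g: np.ndarray):
        # Assumed derivation: a sphere of radius d_l centered on the sign.
        return p_g, self.d_l   # center p_l and radius r_l
```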
Table 3. Assignment of navigation information depending on signage type.

| Signage Type | Navigation Information N_I to Reach a Destination | Referenced Coordinate System |
| --- | --- | --- |
| Positional signage | Next goal position p_n | X_W of G_I |
| Directional signage | Next walking direction d_n | X_I of I_{i,j} |
| Routing signage | A set of passing points P_N = {p_k} | X_W of G_I |
| Identification signage | Name of current place C_n | None |
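Tables 1 and 3 together define a small tagged union: each signage type carries a different kind of navigation information expressed in a different coordinate frame. A sketch with our own illustrative names:

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional, Tuple

class SignageType(Enum):
    POSITIONAL = "positional"          # next goal position p_n (X_W of G_I)
    DIRECTIONAL = "directional"        # next walking direction d_n (X_I of I_{i,j})
    ROUTING = "routing"                # passing points P_N = {p_k} (X_W of G_I)
    IDENTIFICATION = "identification"  # name of current place C_n (no frame)

Point3 = Tuple[float, float, float]

@dataclass
class NavigationInfo:
    """Type-dependent payload N_I following Table 3."""
    kind: SignageType
    next_goal: Optional[Point3] = None             # positional signage
    direction: Optional[Point3] = None             # directional signage
    passing_points: Optional[List[Point3]] = None  # routing signage
    place_name: Optional[str] = None               # identification signage

# Example: the identification sign S_5 of Table 5 only names its place.
s5_info = NavigationInfo(kind=SignageType.IDENTIFICATION, place_name="Goal")
```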
Table 4. User-specified wayfinding scenario.

| Parameters | Specified Values |
| --- | --- |
| MoCap data for flat walking M of H | MoCap data of a young male subject (age: 22 years; height: 1.73 m) |
| Horizontal angle of view frustum θ_H of H | 100 deg ¹ |
| Vertical angle of view frustum θ_V of H | 60 deg ¹ |
| Noticeability threshold n_t ∈ [0, 1] of H | 0.3 ² |
| Start position p_s | Shown in Figure 10d |
| Initial walking direction d_I | Shown in Figure 10d |
| Name of destination D | "Goal" |
| Signage locations and orientations T_s | Shown in Figure 10d |

¹ θ_H and θ_V were specified based on the handbook [31]. ² n_t was set to a small value for validation.
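The scenario of Table 4 amounts to a handful of simulation parameters. As a reading aid, here is the same scenario bundled as a plain configuration record; the key names are ours, and the two quantities deferred to Figure 10d are left unset.

```python
# Wayfinding scenario of Table 4 as a plain configuration record
# (key names are illustrative; values are those used for the virtual maze).
scenario = {
    "mocap_data": "flat walking, young male subject (22 years, 1.73 m)",
    "theta_h_deg": 100.0,            # horizontal view-frustum angle [31]
    "theta_v_deg": 60.0,             # vertical view-frustum angle [31]
    "noticeability_threshold": 0.3,  # n_t, set to a small value for validation
    "start_position": None,          # p_s, shown in Figure 10d
    "initial_direction": None,       # d_I, shown in Figure 10d
    "destination_name": "Goal",
    "signage_poses": None,           # T_s, shown in Figure 10d
}
```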
Table 5. User-assigned parameters of signage information.

| Parameters | Sign S_1 | Sign S_2 | Sign S_3 | Sign S_4 | Sign S_5 |
| --- | --- | --- | --- | --- | --- |
| Type of signage T_n | Positional | Directional | Directional | Routing | Identification |
| Name of indicated place D_n | "Goal" | "Goal" | "Goal" | "Goal" | "Goal" |
| Navigation information N_I | Shown in Figure 10d | Shown in Figure 10d | Shown in Figure 10d | Shown in Figure 10d | "Goal" |
| Maximum viewing distance d_l | 4.0 m | 15.0 m | 15.0 m | 11.74 m ¹ | 11.74 m ¹ |

¹ d_l was specified as a tentative value without human measurements.
Table 6. Time required for environment modeling and simulation (CPU: Intel Core i7-6850K 3.60 GHz; RAM: 64 GB; GPU: GeForce GTX 1080).

| Process | Virtual Maze | Two-Story Indoor Environment |
| --- | --- | --- |
| Automatic construction of W_S and G_N from laser-scanned point clouds | 2.5 s (#points: 963,691 ¹) | 50.0 s (#points: 5,980,647 ¹) |
| Automatic construction of G_I using SfM software [33] | – | Approximately 1 week (#photos: 21,143; resolution: 1920 × 1080) |
| Signage visibility, legibility, and noticeability estimation | Less than 0.17 s | Less than 0.17 s |
| Signage-based motion planning | Less than 0.02 s | Less than 0.02 s |
| One-step walking motion generation with 100-frame interpolation ² | 0.15 s | 2.5 s |

¹ Number of downsampled points used for environment modeling. ² Elapsed time of signage visibility, legibility, and noticeability evaluation not included.
