Article

A Formalized 3D Geovisualization Illustrated to Selectivity Purpose of Virtual 3D City Model

1
Quartier Agora, Geomatics Unit, Liège University (ULiege), Allée du Six-Août, 19, 4000 Liège, Belgium
2
Department of Geomatics Sciences, Laval University, Pavillon Louis-Jacques-Casault 1055, Avenue du Séminaire, Bureau 1315, Québec, QC G1V 5C8, Canada
3
Department of Mathematics, University of Liège (ULiege), Allée de la Découverte, 12, 4000 Liège, Belgium
*
Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2018, 7(5), 194; https://doi.org/10.3390/ijgi7050194
Submission received: 29 March 2018 / Revised: 9 May 2018 / Accepted: 16 May 2018 / Published: 18 May 2018

Abstract

Virtual 3D city models act as valuable central information hubs supporting many aspects of cities, from management to planning and simulation. However, we noted that 3D city models are still underexploited and believe that this is partly due to inefficient visual communication channels between 3D model producers and end-users. With the development of a formalized 3D geovisualization approach, this paper aims to make the visual identification and recognition of specific objects in 3D models more efficient and useful. The foundation of the proposed solution is a knowledge network on the visualization of 3D geospatial data that gathers and links mapping and rendering techniques. To formalize this knowledge base and make it usable as a decision-making system for the selection of styles, second-order logic is used. It provides a first set of efficient graphic design guidelines, avoiding the creation of graphical conflicts and thus improving visual communication. An interactive tool is implemented and lays the foundation for a suitable solution for assisting the visualization process of 3D geospatial models within CAD and GIS-oriented software. Ultimately, we propose an extension to OGC Symbology Encoding in order to provide suitable graphic design guidelines to web mapping services.

Graphical Abstract

1. Introduction

Due to recent and significant developments in data acquisition techniques (LIDAR, photogrammetry, and remote sensing) and computer science (storage and processing), the third geometric dimension (mainly the height or Z coordinate of objects) has become, for both experts and non-professional users, an efficient way to complete multiple tasks in disciplines such as civil engineering, geology, archaeology, and education [1,2,3]. Moving to the third dimension seems attractive for the development of a more direct cognitive reasoning about geographical and temporal phenomena, especially because the geometry of 3D virtual environments allows a more natural interaction with the spatial content [4,5,6]. In addition, the third dimension overcomes the drawbacks of 2D environments when considering multi-level structures (e.g., multi-story buildings and subway stations) [7,8]. This is, for instance, the case in systems used for evacuating people from large public and business buildings, where the addition of the third dimension turns out to be useful and efficient [9].
In the administration of cities, virtual 3D city models are extremely valuable, as they constitute 3D geovirtual environments serving many application fields: urban planning, facility management, mobile telecommunication, environmental simulation, navigation, and disaster management [10,11]. Today, an ever-greater number of cities (e.g., Berlin, Montreal, Paris, Zürich, Rotterdam, Helsinki, and Abu Dhabi) manipulate 3D city models for the management, integration, presentation, and distribution of complex urban geoinformation [12]. Through the development of standards (e.g., CityGML [13]), virtual 3D city models act as relevant central information hubs to which many applications can attach their domain information [14]. However, having a common information model at their disposal does not necessarily mean that people can communicate with each other directly and efficiently. To achieve this, a communication channel must be designed and set up across stakeholders. As sight is one of the key senses for information communication [15], this link could be carried out by an appropriate 3D geospatial data visualization as a means of effectively exchanging contextual knowledge [16,17]. Nevertheless, there is still a crucial issue to solve: how can 3D geospatial data be shown efficiently in order to produce relevant visual communication? There are plenty of 3D visualization techniques (e.g., transparency, hue, and shading), but they are not all compatible with each other, leading to potential graphical conflicts that may cause misunderstanding across stakeholders. For instance, too much shading may hide visual variables (such as hue, pattern, and material) on the faces of 3D objects and thus make them useless. Furthermore, their application is often specific to the type of data to be visualized, the task to be performed, and the context in which the task is to be executed [18]. As a result, selecting appropriate 3D visualization techniques is quite complex, especially for non-experts who often deal with new combinations of criteria (data, task, or context).
In this research project, we hypothesize that the formalization of graphic design principles is possible and corresponds to a valuable approach to support the user in the visualization process of 3D models. This is why we propose a formalized 3D geovisualization. Through a deep understanding of how visualization in 3D geospatial models operates, we can extract the key components from graphics and computer graphics fields. Then, connections between the camera settings, the visualization techniques, and the targeted purposes are established. The goal is to produce a suitable 3D geospatial data visualization and to build a knowledge network by gathering and connecting an ever-increasing number of visualization techniques from different fields. This knowledge network could even automatically and intelligently be structured through machine learning [19]: the more a visualization technique is used, the more it fits specific requirements, and subsequently the more it is helpful.
Interactive support tools could then be implemented to provide designers with feedback about the suitability of their representation choices. This seems necessary, since current CAD and GIS-oriented software does not warn against graphical conflicts that may appear during the visualization pipeline. The same applies to 3D geoinformation diffusion through the OGC web 3D services. While Neubauer and Zipf [20] have already provided an extension to Symbology Encoding for the visualization of 3D scenes, this process remains unstructured. Additional elements should be incorporated to provide suitable graphic design guidelines.
This paper is structured as follows. Section 2 is dedicated to the 3D geovisualization field and aims to provide theoretical insights. It also deals with the visualization process of virtual 3D city models and graphical conflicts. Section 3 presents the second-order logic formalism applied to the 3D geovisualization process and illustrates its role in the selectivity of virtual 3D city models. In Section 4, the formalism is implemented in three different kinds of applications. Ultimately, we discuss the results, conclude, and address some research perspectives.

2. 3D Geovisualization of Virtual City Models

This section proposes a brief review of the concepts and principles of 3D geovisualization. First, it presents the key components from graphics (static retinal variables and interpretation tasks) and computer graphics (3D environment settings and enhancement techniques) fields. Then, it deals with the visualization of virtual 3D city models. A discussion is then provided, especially regarding graphical conflicts.

2.1. 3D Geovisualization

2.1.1. Definition

Based on MacEachren and Kraak’s definition, 3D geovisualization is defined as the field that provides theory, methods, and tools for the visual exploration, analysis, confirmation, synthesis, and communication of spatial data [21]. It incorporates approaches from a wide range of fields, such as scientific computing, cartography, image analysis, information visualization, exploratory data analysis, and geographic information systems.
The development of 3D geovisualization relies heavily on computer graphics, the technologies used to design and manipulate digital images of 3D environments [22]. Through this field, it is also possible to incorporate interaction (the ability for the user to move or to apply a motion to objects) and immersion (the sensation of “being in” the environment) with the use of head-mounted displays or CAVEs (Cave Automatic Virtual Environments) [23,24,25,26]. Moreover, many application fields, such as education, geoscience, and research on human activity-travel patterns, have shown the great usefulness of these capabilities for the visualization of 3D geospatial data [27,28,29].
Figure 1 is a simplified configuration of key components involved in 3D geovisualization and defines style as the application of graphic elements from graphics and computer graphics fields to features and the 3D environment. The next section explains these concepts in detail.

2.1.2. Graphics

To address geospatial visualization challenges, theoretical and methodological approaches from graphics are used in 3D geovisualization [23,30]. Graphics is the graphical part of semiotics, the field that studies the processes of communication and systems of signification through the use of any kind of sign (e.g., senses-related, mathematical, and information technology) [31]. Within the framework of graphic representation, the components are visual (or retinal) variables sensed in accordance with group levels equivalent to the four scales of measurement: nominal, ordinal, interval, and ratio [32]. Over the last fifty years, many authors, including Bertin [33], Morrison [34], MacEachren [35], Carpendale [36], Slocum et al. [37], and Boukhelifa et al. [38], have supplied a wide range of static retinal variables (summarized in Table 1). Initially developed on 2D geometries (points, curves, surfaces), these static retinal variables can easily be extended to volumes and subsequently to the third dimension.
When using visual variables, users must keep in mind the suitability of these variables for performing specific visual tasks (one variable may perform well while another is less suitable). For graphics, this suitability lies in the interpretation tasks the variables are able to support. In Table 2, we present the perceptual property classes as defined by Bertin [33]. Note that additional definitions exist in the literature, as do alternative criteria to measure the suitability of retinal variables [35].
Retinal variables achieve these visual tasks with different degrees of consistency, which is why research has been conducted on them in 3D graphics. For instance, the studies of Pouliot et al. [39,40,41] and Wang et al. [42] show that colour is one of the most relevant variables for selectivity tasks in 3D cadastre visualization. This is also the conclusion of Rautenbach et al. [43], who identify (in the specific context of urban planning) hue (and texture) as the visual variables most adapted for selectivity. Ultimately, retinal variables are also suitable for managing specific 3D geovisualization challenges. For example, transparency can solve occlusion issues [44,45] or improve the perception of spatial relationships [46,47].

2.1.3. 3D Environment Settings

While 3D geovisualization is partly based on graphics, it also incorporates additional settings from computer graphics that greatly influence the final display. Häberling [48] distinguishes the following rendering parameters:
  • Projection: parallel or perspective;
  • Camera: position, orientation, and focal length;
  • Lighting: direct, ambient, or artificial light;
  • Shading;
  • Shadow;
  • Atmospheric effect.
The previous list can be extended with viewport variations that change the projection (parallel or perspective) progressively and degressively in order to efficiently reduce occlusion issues in 3D geospatial environments. Lorenz et al. [49] show that such variations do not compromise the dynamic character of virtual 3D city applications, and Jobst and Döllner [50] even conclude that they allow a better perception of 3D spatial relations.

2.1.4. Enhancement Techniques

Besides static visual variables and 3D environment settings, additional techniques have been developed in order to improve the visualization of 3D geospatial environments. Bazargan and Falquet [51] show the suitability of seven enhancement techniques (illustrative shadows, virtual PDA, Croquet 3D windows, Croquet 3D interactor, 2D medial layer, sidebar, and 3D labels) for the depiction of non-geometric information, while Trapp et al. [52] classify object-highlighting techniques useful for visualizing user selections, query results, and navigation. The classification is carried out based on the type of rendering: style-variance techniques (focus-based style or context-based style), outlining techniques, and glyph-based techniques. They conclude that context-based style variance and outlining techniques seem to be the most relevant, since they highlight (to some extent) hidden objects in the scene.

2.2. Virtual 3D City Models

2.2.1. Definition and Benefits

Virtual 3D city models are defined as three-dimensional digital representations of urban environments [53,54]. Their development and usage are partly driven by the drawbacks of photorealistic displays. Compared with more abstract 3D models, photorealistic depictions present [55]:
  • A higher cost for data acquisition due to the required higher quality of geometries and facade textures;
  • A more difficult integration of thematic information owing to the visual predominance of textured facades, roofs, and road systems in the image space;
  • A more complex visualization of multiple information layers on account of photorealistic details;
  • A more complex display on lower-specification devices (e.g., mobile phones, tablets) that generally require a simplification and aggregation process to be efficiently visualized [56].
Virtual 3D city models mainly focus on common aboveground and underground urban objects and structures such as buildings (Figure 2), transportation, vegetation, and public utilities, for which they are able to store geometric, topological, and semantic information thanks to the development of common information models such as BIM, IndoorGML, and CityGML [13,57,58,59]. CityGML is organized in thematic classes and incorporates the “level of detail” concept, both related to geometry, appearance, and semantics [60,61,62]. It is used to describe a series of different representations of real-world objects, to meet the requirements of a wide range of application fields, and to enhance the performance and quality of the visualization process [63,64,65]. However, producing an efficient visualization involves not only the appearance and geometry of 3D objects but also the representation of their semantic information, which is developed in the next section.

2.2.2. Semantic-Driven Visualization

The development of CityGML as a standard for the representation and exchange of semantic 3D city models extended their initial purpose of visualization to thematic queries, analytical tasks, and spatial data mining [66]. Semantic-driven visualization has been developed to relay this information and has improved the usability of virtual 3D city models [67]. However, the magnitude and complexity of virtual 3D city models raise the question of how to produce such a visualization expressively and efficiently. To address this issue, the semiotics of graphics field has been extended to the third dimension [68]. In a paper published in 2015, Semmo et al. [69] present a 3D semiotic model incorporating the studies of several researchers. They distinguish five processing stages in the visualization process:
  • The modelling of real-world phenomena, which can be carried out by different kinds of sensors: passive (photogrammetry), active (ground laser scanner, airborne LIDAR), or hybrid (imagery and laser range sensors, hybrid DSM, aerial image and 2D GIS) [70].
  • The filtering stage to produce a primary landscape model where only the required information for further processing is selected.
  • The mapping of the primary model to a cartographic model via symbolisation (i.e., the application of static retinal variables (e.g., hue, size, transparency) to selected objects).
  • The rendering of the cartographic model; that is, the definition of 3D environment settings (e.g., projection, camera attributes, lighting, and atmospheric effects) and potentially the application of enhancement techniques.
  • The perceptional aspects of the 3D graphic representation, such as the spatial and temporal coherence of mapping and rendering stages, as well as psychological and physiological cues. When used carefully, they facilitate the communication process [71].
Note that the mapping and rendering stages include the design aspects of Jobst et al. [72] and Häberling et al. [73]. In addition to the filtering stage, they constitute the visualization process defined by Ware [74] that may incorporate the generalisation processes classified by Foerster et al. [75].
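To make the three user-driven stages more concrete, the following JavaScript sketch (our own illustration; the cityModel variable and its feature attributes are hypothetical) chains filtering, mapping, and rendering:

// Stage 2: keep only the features required for further processing.
const filterStage = (model, predicate) => model.filter(predicate);
// Stage 3: map the primary model to a cartographic model via symbolisation.
const mapStage = (features, symbolize) => features.map(f => ({ ...f, style: symbolize(f) }));
// Stage 4: render the cartographic model with 3D environment settings.
const renderStage = (cartographicModel, environment) => ({ objects: cartographicModel, ...environment });

// Hypothetical usage: show residential buildings in red under a perspective projection.
const cityModel = [
  { id: 'b1', landUse: 'residential' },
  { id: 'b2', landUse: 'commercial' }
];
const view = renderStage(
  mapStage(filterStage(cityModel, f => f.landUse === 'residential'), () => ({ hue: 'red' })),
  { projection: 'perspective', lighting: 'directional' }
);
console.log(view.objects.length); // 1 symbolized feature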
The main weakness of this 3D semiotic model concerns the user’s involvement across the visualization process (stages 2 to 4) [69]. Indeed, the visualization of a 3D geospatial data collection commonly requires the application of more than one visualization technique [76]. In this case, the selected visualization techniques must at least be able to convey the desired information and, above all, must not negatively interfere with each other [77,78]. Indeed, graphical conflicts may occur in the visualization process, especially between the mapping and rendering stages. For instance, shadow may hide 3D objects in the 3D environment, making the application of any visual variable useless. In a paper published in 2008, Jobst et al. present a first set of graphical conflicts [72]. While this study introduces graphical conflicts, it has some limitations. First, it only considers incompatibilities among visualization techniques, while incompatibilities also exist among their targeted purposes (e.g., their perceptual properties). It also does not present a methodological framework that takes additional visualization techniques into account, and it does not define further connections among visualization techniques and their targeted purposes. Ultimately, the study does not provide a graphic design support tool for assisting the visualization process.

3. Knowledge Network Configuration

3.1. Introduction

To address the previous limitations, we propose a formalization of 3D geovisualization. We do not pretend to solve graphical conflicts, but we do believe that, with a deep understanding of how 3D geovisualization works, we can at best avoid them and subsequently produce a more efficient visualization of 3D geospatial data. This section aims to present and extend the initial formalism developed by Neuville et al. [79]. The insertion of camera settings into the mathematical framework, as well as their connections to visualization techniques, is new and enhances the formalization process.
3D geovisualization is formalized with second-order logic, whose mathematical language we use to define and connect its components. Working at this level of abstraction allows a deep understanding of the process and an integration at numerous scales, both for domain experts who define their own graphic design guidelines and for users who apply them. First, this section presents the mathematical framework. Key components of 3D geovisualization are stored in collections of entities. Then, a set of functions defines their role(s) in the 3D geovisualization process and expresses their connections. Functions are classified into three categories in order to distinguish functions related to camera settings (geometry-related), to camera settings and visualization techniques (geometry- and attribute-related), and to visualization techniques (attribute-related). In order to clarify the process, sets and operators are presented in Table 3. Ultimately, this section illustrates the formalism on a subset of static retinal variables and their targeted purposes.

3.2. Mathematical Framework

3.2.1. Collections of Entities

Collection A (Equation (1)) gathers camera settings and corresponds to the cross-product of camera position (A1), camera orientation (A2), focal length (A3), and vision time (A4). It incorporates a subset of variables of vision defined by Jobst et al. [72], to which we add vision time (i.e., time spent visualising a given viewpoint). The value is infinity for a static viewpoint, while it is a positive real in a motion with multiple viewpoints. Mathematically:
A = A₁ × A₂ × A₃ × A₄,  (1)
with
A₁ = ℝ³,  (2)
A₂ = ℝ³,  (3)
A₃ = ℝ⁺,  (4)
A₄ = ℝ⁺ ∪ {∞}.  (5)
Then, collection B (Equation (6)) gathers visualization techniques. It includes static retinal variables from Table 1 as well as 3D environment settings and enhancement techniques from Section 2.1.3. and Section 2.1.4.:
B = {[Static retinal variables], [3D environment settings], [Enhancement techniques]}.  (6)
Thirdly, collection C (Equation (7)) defines the targeted purposes arising from the application of collections A and B. It incorporates the fulfilment of interpretation tasks (Table 2), solutions to 3D geovisualization challenges (Section 2.1.3. and Section 2.1.4.), and globally all perceptions conveyed by A and B (e.g., scale, security, and pollution).
C = {[Interpretation tasks], [3D geovisualization challenges], [Perceptions]}.  (7)
Ultimately, collection O (Equation (8)) corresponds to 3D geometric objects (e.g., buildings, transportation, and vegetation) inside the 3D geospatial environment.
O = {[3D geometric objects]}.  (8)
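As a minimal sketch (the names and values below are our own, not prescribed by the formalism), the four collections could be represented with plain JavaScript structures:

// An entity "a" of collection A: position (A1), orientation (A2),
// focal length (A3), and vision time (A4); Infinity encodes a static viewpoint.
const a = {
  position: [120.0, -45.0, 80.0],   // A1 = R^3
  orientation: [60.0, 0.0, 135.0],  // A2 = R^3
  focalLength: 35.0,                // A3 = R+
  visionTime: Infinity              // A4 = R+ ∪ {∞}
};
const B = ['hue', 'transparency', 'shading', 'shadow'];  // visualization techniques (excerpt)
const C = ['selectivity', 'occlusion management'];       // targeted purposes (excerpt)
const O = ['building_1', 'building_2', 'road_1'];        // 3D geometric objects (excerpt)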

3.2.2. Geometry-Related Functions

Function F (Equation (9)) means that A determines the visibility of the 3D geometric objects. Mathematically, it goes from the set A to (→) the set of parts of O (P(O)), and with an element “a” of A is associated (↦) the part of O that includes the objects “o” such that “a” implies “o”.
F : A → P(O) : a ↦ {o ∈ O : a ⇒ o}.  (9)
Then, the completeness of a specific entity “a” (camera position, orientation, focal length, and vision time) is defined as the number of visible 3D geometric objects (Equation (10)):
|F(a)|.  (10)
Due to the magnitude and complexity of virtual 3D city models, the completeness of “a” can be less than the total number of 3D geometric objects returned by the query. To solve this issue, several solutions are feasible, such as the application of static visual variables (e.g., transparency), enhancement techniques (e.g., outlining techniques), and viewport variations. However, if these options are not feasible, the completeness can be improved with a motion that seeks to display the missing information. Mathematically, the motion is defined as the addition of two (or more) entities of A, and its completeness (Equation (11)) as the sum of the completeness of each entity, from which the completeness of the pairwise intersections is subtracted.
|F(a₁)| + |F(a₂)| − |F(a₁) ∩ F(a₂)| > |F(a₁)|, |F(a₂)|.  (11)
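As a minimal JavaScript sketch of Equations (10) and (11), assuming F(a) is available as the set of identifiers of the objects visible from “a”:

// |F(a)|: completeness of a single viewpoint (Equation (10)).
const completeness = visible => visible.size;

// |F(a1)| + |F(a2)| − |F(a1) ∩ F(a2)|: completeness of a two-viewpoint motion (Equation (11)).
function motionCompleteness(fA1, fA2) {
  const intersection = [...fA1].filter(o => fA2.has(o)).length;
  return fA1.size + fA2.size - intersection;
}

// The motion reveals four distinct objects, more than either viewpoint alone.
const fA1 = new Set(['b1', 'b2', 'b3']);
const fA2 = new Set(['b3', 'b4']);
console.log(completeness(fA1), completeness(fA2)); // 3 2
console.log(motionCompleteness(fA1, fA2));         // 4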
Function G (Equation (12)) aims to connect collection A to collection C, meaning that an entity “a” is associated with a part of C (targeted purposes). For instance, a low-angle shot may imply a scale perception.
G : A → P(C) : a ↦ {c ∈ C : a ⇒ c}.  (12)

3.2.3. Geometry- and Attribute-Related Function

Function H (Equation (13)) aims to link collection B to collection C. However, this connection requires the involvement of collection A, since the targeted purpose(s) of any entity “b” implies at least the visibility of this entity. Function H means that the combination of an entity “a” with an entity “b1”, or with a set of entities “b1, …, bn”, is associated with a part of C. For instance, hue is selective and associative; transparency solves occlusion issues; a dilapidated city perception may require the combination of several visualization techniques, such as the simultaneous application of haze (3D environment setting) and damaged facade materials (static retinal variable).
Hₙ : A × Bⁿ → P(C) : (a, b₁, …, bₙ) ↦ {c ∈ C : (a, b₁, …, bₙ) ⇒ c},  (13)
with
n ∈ ℕ.

3.2.4. Attribute-Related Functions

The following functions aim to determine the interactions occurring between entities of collections B and C.
Function I (Equation (14)) means that the application of an entity “b1” induces the use or the indirect application of an additional entity “b2”. For example, the production of a shadow implies the use of a directional light; the application of perspective height as a static retinal variable indirectly implies the application of size. This function defines a consequence connection among visualization techniques.
I : B → P(B) : b₁ ↦ {b₂ ∈ B : b₁ ⇒ b₂}.  (14)
Function J (Equation (15)) means that an entity “b1” is compatible with another entity “b2”.
J : B → P(B) : b₁ ↦ {b₂ ∈ B : b₁ ∨ b₂}.  (15)
Function K (Equation (16)) means that an entity “b1” is incompatible with another entity “b2”. For example, size is incompatible with perspective projection since the latter modifies the perception of size as a function of object position in the 3D geospatial environment.
K : B → P(B) : b₁ ↦ {b₂ ∈ B : ¬(b₁ ∨ b₂)}.  (16)
Function L (Equation (17)) means that “c1” (the targeted purpose of an entity “b1”) is compatible with “c2” (the targeted purpose of an entity “b2”). For example, the simultaneous application of size and hue on a single 3D object combines the specific targeted purpose of these static visual variables (the order interpretation task with size and the associative interpretation task with hue).
Lb₁,b₂ : H₁(b₁) → P(H₁(b₂)) : c₁ ↦ {c₂ ∈ H₁(b₂) : c₁ ∨ c₂}.  (17)
Function M (Equation (18)) means that “c1” (the targeted purpose of an entity “b1”) is incompatible with “c2” (the targeted purpose of an entity “b2”). For example, the simultaneous application of pattern and grain on a single 3D object does not maintain their specific selectivity interpretation task, since grain interferes with the perception of the pattern.
Mb₁,b₂ : H₁(b₁) → P(H₁(b₂)) : c₁ ↦ {c₂ ∈ H₁(b₂) : ¬(c₁ ∨ c₂)}.  (18)
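To illustrate how these connections could be stored and queried in practice, the following JavaScript sketch (the entries are hand-picked examples from this section; all names are ours) encodes consequence (function I) and incompatibility (functions K and M) connections as adjacency maps and checks a style for conflicts:

const consequences = {
  shadow: ['directional lighting'],               // function I
  transparency: ['lightness/value', 'saturation']
};
const incompatibilities = {
  pattern: ['grain'],                             // selectivity cancels out (function M)
  size: ['perspective projection']                // function K
};

function checkStyle(techniques) {
  const messages = [];
  for (const t of techniques) {
    (consequences[t] || []).forEach(c => messages.push(`${t} implies ${c}`));
    (incompatibilities[t] || []).forEach(k => {
      if (techniques.includes(k)) messages.push(`${t} is incompatible with ${k}`);
    });
  }
  return messages;
}

console.log(checkStyle(['pattern', 'grain', 'shadow']));
// → ['pattern is incompatible with grain', 'shadow implies directional lighting']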

3.3. Illustration with Static Retinal Variables and 3D Environment Parameters for Selectivity Purposes

3.3.1. Collection of Entities

The application of the formalism is illustrated on the visualization of virtual 3D city models. In this paper, we consider a subset of visualization techniques and their targeted purposes. Hence, collection B (Equation (19)) first gathers a subset of static retinal variables (Table 1). We do not consider Bertin’s position visual variable, since changing the position of a 3D object alters its spatial relation to other elements. Note, however, that this variable remains very useful for the representation of semantic information, such as labelling [80]. Collection B also includes 3D environment settings (Section 2.1.3.), while only the selectivity interpretation task (Table 2) is considered in collection C (Equation (20)).
B = {arrangement, atmosphere effect (haze), crispness, depth of field, environment projection, grain, hue, lightness/value, directional lighting, material, orientation, pattern, perspective height, resolution, saturation, shading, shadow, shape, size, sketchiness, spacing, transparency}  (19)
C = {selectivity}  (20)
Then, the connections between collections B and C are defined. This was carried out by a review of the existing literature but also through empirical tests performed on a part of the virtual 3D LOD1 city model of New York (Figure 2). In the following, we provide a set of figures to illustrate the statements.

3.3.2. Truth Values of Functions

Equation (14) addresses the consequence connection among the entities of B. It links the following entities:
  • the production of a shadow induces the use of a directional light (Figure 3a);
  • the application of transparency indirectly implies the application of lightness/value and saturation (Figure 3b);
  • the application of grain indirectly implies the application of spacing: in Figure 3c, two levels of grain are applied to the same building, which also implies a spacing variation between points;
  • the application of perspective height indirectly induces the application of size: in Figure 3d, two different perspective heights are applied to the same red building, which also implies a size variation of this building.
Equations (15) and (16) refer to compatibility and incompatibility connections among the entities of B. On the basis of previous studies, the following incompatibilities can be extracted:
  • Atmosphere effect (haze) influences the perception of lightness/value and saturation (Figure 4a) [72];
  • Shading influences the perception of lightness/value and saturation (Figure 4b) [72];
  • Directional lighting influences the perception of lightness/value and saturation (Figure 4c) [72];
  • Shadow influences the perception of lightness/value and saturation (Figure 4d) [72];
  • Depth of field changes the perception of size [72], orientation [42], grain and spacing [74], but also perspective height and resolution (Figure 4e).
Equations (17) and (18) address compatibility and incompatibility connections among the targeted purposes of B (i.e., the entities of C). Among the selective entities of B, some are incompatible because their selectivity perceptual property cancels out when they are applied to the same 3D object. This is especially the case for grain, pattern, and sketchiness, since their combination makes the specific extraction of each individual element more difficult (Figure 5).
The previous consequences and incompatibilities are quite direct (i.e., undeniable). They are also generic (i.e., independent of the data to visualise, the task to perform, and the context in which the task is executed). However, most graphical conflicts are actually difficult to predict, since they partly depend on the spatial distribution of 3D objects. This is especially the case for transparency and shadow. The first may induce a superposition of static visual variables, while the second may hide a part of the 3D scene and subsequently make the application of any retinal variable useless (Figure 6a,b). Some incompatibilities are also a function of the application level of visualization techniques. While shading is useful to emphasise the three-dimensional appearance, too much shading may hide retinal variables on some faces (Figure 6c), whereas too little shading may not highlight the geometric appearance of 3D objects.
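Level-dependent conflicts of this kind could be flagged with a simple heuristic; in the following sketch, the 0.7 threshold is an arbitrary illustration rather than a value established in this paper:

// Warn only when the shading level is high enough to mask retinal variables (cf. Figure 6c).
function shadingWarning(shadingIntensity) {
  if (shadingIntensity > 0.7) {
    return 'Potential conflict: this shading level may hide retinal variables on some faces';
  }
  return null; // a low level still emphasises the 3D appearance without masking them
}
console.log(shadingWarning(0.9)); // triggers the warning
console.log(shadingWarning(0.3)); // null: no conflict expected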

4. Examples of Knowledge Network Application

In Section 3, we created the framework of a future knowledge network on 3D geovisualization through a formalization process. At this stage of development, domain experts (e.g., from urban planning) are able to define their own graphic design guidelines based on the previous mathematical framework. However, one challenge has yet to be solved: how to incorporate this knowledge into an operational solution that assists end-users in the visualization process of a 3D model? To answer this question, we designed three applications. The first one is an application chart of a 3D geovisualization knowledge network. The second is a dynamic client WebGL application that implements the previous chart. Ultimately, we suggest an extension to the OGC Symbology Encoding so as to introduce knowledge in the visualization process of web mapping services.

4.1. Application Chart

In Section 3.3., we extracted connections (consequence, incompatibility, and potential incompatibility) between a set of static retinal variables and 3D environment settings for the purpose of selectivity. The following application aims to bring this knowledge into a chart that assists in the visualization process of virtual 3D city models. Static retinal variables (classified into three categories) and 3D environment settings are displayed horizontally and vertically, respectively. Then, four colours express the four categories of connections:
  • Compatibility connection in green;
  • Potential incompatibility connection in yellow, which refers to incompatibilities linked to the spatial distribution of 3D objects and/or the application level of static visual variables used simultaneously;
  • Incompatibility connection in red;
  • Consequence connection in blue.
The chart is read either by selecting static retinal variables, which constrains the use of 3D environment settings, or the reverse. Users can then find appropriate graphical expressions and avoid graphical conflicts. Note that the chart also shows connections among static retinal variables and among 3D environment settings through the use of coloured superscripts (e.g., the consequence connection between shadow and lighting, and the incompatibility connection between pattern and grain). Figure 7 illustrates the application chart and extends the version of Neuville et al. [81] by reviewing some graphical conflicts.
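In software, the chart boils down to a lookup table. A minimal JavaScript sketch follows (the cells shown are a small excerpt of Figure 7, and the key format is ours):

const GREEN = 'compatible', YELLOW = 'potentially incompatible',
      RED = 'incompatible', BLUE = 'consequence';

// Each cell pairs two visualization techniques (retinal variable or environment setting).
const chart = {
  'hue|shading': YELLOW,               // too much shading may hide hue
  'lightness/value|haze': RED,         // haze alters the perception of lightness/value
  'shadow|directional lighting': BLUE, // a shadow requires a directional light
  'hue|parallel projection': GREEN
};

// Unlisted pairs are treated as compatible in this sketch.
const connection = (from, to) => chart[`${from}|${to}`] ?? GREEN;
console.log(connection('hue', 'shading')); // 'potentially incompatible'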

4.2. Dynamic WebGL Application

In order to provide an operational solution that could be implemented into existing CAD and GIS-oriented software, we propose an interactive design plugin. It was developed with three.js, a cross-browser JavaScript library based on WebGL, and aims to assist the visualization process of 3D geospatial models. The application interface includes a 3D viewer and a sidebar that incorporates a set of static retinal variables and 3D environment parameters for the design of 3D models. Unlike standard 3D viewers, the plugin brings intelligence into the visualization process: two events are produced when the user applies a specific visualization technique. The first is the display of consequence, potential incompatibility, and incompatibility connections (through the previous colour coding), while the second explains these connections via a warning window. This is especially useful for potential incompatibilities, where the degree of inconsistency depends on specific factors such as the spatial distribution of 3D objects and/or the application level of other visualization techniques.
Figure 8 illustrates the WebGL application. To show the visual evolution of the 3D model, we present multiple views (times t1 to t4). In the first step, the user downloads the 3D model without applying any visualization techniques (upper image). In the example, shading is then used to highlight the 3D geometric appearance of 3D objects (second image). After that, the user changes the parallel projection to a perspective one (third image). Finally, hue is used for some buildings for the purpose of selectivity (lower image).
During the whole visualization process, the sidebar is continuously updated to inform the user of compatible, incompatible, and potentially incompatible visualization techniques. This is carried out with the same colour coding used in the application chart. To emphasize this continuous updating in Figure 8, each visualization technique is associated with a time. For instance, shading (time t2) constrains all visualization techniques carrying the “t2” indication. Note that we applied hue despite the constraint established by the use of shading (Figure 7). As a reminder, there is a potential incompatibility between these two variables: too much shading may hide hue on some 3D object faces. Since a low level of shading was applied to the 3D model, this visual variable could be used without causing any graphical conflicts. Note that the plugin also displays a window for these kinds of incompatibilities in order to warn designers about potential graphical conflicts (Figure 9). As a result, suitable styles can be applied to features and the 3D environment.
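The plugin logic can be summarised by the following three.js-flavoured sketch (a simplified illustration, not the published plugin code; the sidebar object and the potentialIncompatibilities map are hypothetical):

const potentialIncompatibilities = { hue: ['shading', 'shadow'] };

// Fired when the user applies hue to a selected mesh.
function applyHue(mesh, hex, activeTechniques, sidebar) {
  mesh.material.color.set(hex);      // three.js: recolour the selected building
  activeTechniques.add('hue');
  sidebar.refresh(activeTechniques); // event 1: update the colour-coded sidebar
  for (const other of potentialIncompatibilities.hue) {
    if (activeTechniques.has(other)) {
      // event 2: explanatory warning window for potential incompatibilities
      window.alert(`Potential conflict between hue and ${other}; check its application level`);
    }
  }
}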

4.3. OGC Symbology Encoding Extension

Ultimately, we suggest an extension to Symbology Encoding (SE). SE is an XML language for styling information that can be applied to the features and coverage data of web mapping services. While an extension of SE has been proposed in [20], the visualization process remains unstructured and graphical conflicts may still appear. That is why we propose a new extension to SE in order to assist users in the visualization process of web mapping services. To achieve this, we propose an additional XML element that deals with the suitability between the visualization techniques and their targeted purposes. The new element is called Suitability, and its format is shown in the following XML-Schema fragment:
<xsd:element name="Suitability" type="se:SuitabilityType">
</xsd:element>
<xsd:complexType name="SuitabilityType">
<xsd:sequence>
     <xsd:element ref="se:Name" minOccurs="1"/>
     <xsd:element ref="se:Description" minOccurs="1"/>
     <xsd:element ref="se:TargetedPurpose" minOccurs="1" maxOccurs="unbounded"/>
     <xsd:element ref="se:Consequence" minOccurs="0" maxOccurs="unbounded"/>
     <xsd:element ref="se:Incompatibility" minOccurs="0" maxOccurs="unbounded"/>
     <xsd:element ref="se:PotentialIncompatibility" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:element name="TargetedPurpose" type="xsd:string"/>
<xsd:element name="Consequence" type="xsd:string"/>
<xsd:element name="Incompatibility" type="se:IncompatibilityType">
</xsd:element>
<xsd:complexType name="IncompatibilityType">
     <xsd:simpleContent>
          <xsd:extension base="xsd:string">
               <xsd:attribute name="TargetedPurposeFrom" type="xsd:string" use="optional"/>
               <xsd:attribute name="TargetedPurposeTo" type="xsd:string" use="optional"/>
          </xsd:extension>
     </xsd:simpleContent>
</xsd:complexType>
<xsd:element name="PotentialIncompatibility" type="se:PotentialIncompatibilityType">
</xsd:element>
<xsd:complexType name="PotentialIncompatibilityType">
     <xsd:sequence>
          <xsd:element name="Technique" type="xsd:string"/>
          <xsd:element name="Explanation" type="xsd:string"/>
     </xsd:sequence>
     <xsd:attribute name="TargetedPurposeFrom" type="xsd:string" use="optional"/>
     <xsd:attribute name="TargetedPurposeTo" type="xsd:string" use="optional"/>
</xsd:complexType>
		
The Name element refers to a given visualization technique (static retinal variable, 3D environment setting, or enhancement technique), and the Description element clarifies the context in which the connections are defined (e.g., urban visualization). The TargetedPurpose element corresponds to the targeted purpose(s) arising from the application of the visualization technique (Equation (13)). The Consequence element refers to Equation (14). The Incompatibility and PotentialIncompatibility elements refer to Equations (16) and (18), respectively. The TargetedPurposeFrom and TargetedPurposeTo attributes express the incompatibilities among entities of C (i.e., the targeted purposes of visualization techniques). The following provides two application examples, for the hue and pattern visualization techniques.
<Suitability>
     <Name>Hue</Name>
     <Description>
         <Title>Hue usage in urban visualization</Title>
     </Description>
     <TargetedPurpose>Selectivity</TargetedPurpose>
     <PotentialIncompatibility>
         <Technique>Shading</Technique>
         <Explanation>Too much shading may hide hue on some faces</Explanation>
     </PotentialIncompatibility>
     <PotentialIncompatibility>
         <Technique>Shadow</Technique>
         <Explanation>Shadow may hide 3D objects and subsequently hue</Explanation>
     </PotentialIncompatibility>
</Suitability>
<Suitability>
     <Name>Pattern</Name>
     <Description>
         <Title>Pattern usage in urban visualization</Title>
     </Description>
     <TargetedPurpose>Selectivity</TargetedPurpose>
     <Incompatibility TargetedPurposeFrom="Selectivity" TargetedPurposeTo="Selectivity">Grain</Incompatibility>
     <PotentialIncompatibility>
         <Technique>Shading</Technique>
         <Explanation>Too much shading may hide pattern on some faces</Explanation>
     </PotentialIncompatibility>
     <PotentialIncompatibility>
         <Technique>Shadow</Technique>
         <Explanation>Shadow may hide 3D objects and subsequently pattern</Explanation>
     </PotentialIncompatibility>
</Suitability>
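On the client side, a browser application could consume such Suitability documents before styling a layer. A minimal sketch with the standard DOMParser API (suitabilityXml is assumed to hold one of the fragments above):

const doc = new DOMParser().parseFromString(suitabilityXml, 'application/xml');
for (const node of doc.querySelectorAll('PotentialIncompatibility')) {
  const technique = node.querySelector('Technique').textContent;
  const explanation = node.querySelector('Explanation').textContent;
  console.warn(`${technique}: ${explanation}`); // surface the warning before applying the style
}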
      

5. Discussion and Conclusions

In this paper, we proposed to formalize, as a knowledge network, the parameters and components that influence the quality and efficiency of 3D geovisualization. These parameters and components are classified into four categories: (1) camera settings (position, orientation, focal length, and vision time); (2) visualization techniques (from the graphics and computer graphics fields); (3) targeted purposes (interpretation tasks, 3D geovisualization challenges, and perceptions); and (4) 3D objects. Furthermore, connections between the camera settings, the visualization techniques, and the targeted purposes are identified and formalized within a mathematical framework based on second-order logic. We showed that 3D geovisualization components may be joined according to four kinds of connections: compatibility, incompatibility, potential incompatibility, and consequence. To demonstrate the usability and utility of the proposal, the formalism is applied to a first set of visualization techniques for the purpose of selectivity. The mathematical framework is then used to connect visualization techniques and to provide a first set of graphic design guidelines. Ultimately, three applications are proposed as proofs of concept.
The knowledge network of key components for efficient 3D geovisualization is a clear contribution to the field, since we could not find such a proposal in the scientific literature or in practice. It assists both domain experts in the definition of their own graphic design guidelines and non-professional users who manipulate and visualize 3D geospatial data. Indeed, the knowledge network and its formalization are written generically, so that they may be applicable to any 3D geospatial data. Additionally, the applications themselves (especially the WebGL plugin and the Symbology Encoding extension) contribute to the domain, since they can now be used and tested in various contexts.
Our work has some limitations. For example, the formalism was only applied to a small set of visualization techniques and targeted purposes. In the future, further entities will be incorporated in order to extend the preliminary results. Indeed, the formalism aims to build a knowledge network by gathering and connecting an ever-increasing number of entities.
In this paper, we provided a first set of graphic design guidelines. Some are valid for any application field manipulating 3D geospatial data (e.g., cadastre, augmented and virtual reality, and navigation), while others may need to be reviewed in order to fit specific contexts, data, and tasks. However, it is clear that the static retinal variables “lightness/value” and “saturation” are of limited relevance in 3D geovisualization due to their numerous graphical conflicts with most 3D environment settings. The results also indicate that the degree of inconsistency among the considered visualization techniques is heavily connected to the spatial distribution of the requested objects in the 3D geospatial environment and to the application level of the visualization techniques used simultaneously. This is especially the case for transparency, shadow, and shading, which should be used with caution. Many graphical conflicts are difficult to predict. However, this does not mean that they are inevitable, at least once they have been identified. This is why it is necessary to build a knowledge network on the visualization of 3D geospatial data. It could even be developed and distributed through XML documents via the new Suitability element.
Ultimately, CAD and GIS-oriented software should incorporate viewpoint management support tools. Indeed, camera settings (position, orientation, focal length, and vision time) are crucial in the 3D geovisualization process, since they determine the visibility of the 3D objects to which visualization techniques have been applied. However, to date, the viewpoint has been fixed manually by the designer, who tries to maximise the visibility of 3D objects. This operation may be quite arduous, especially with high-density 3D models (e.g., virtual 3D city models). In 2016, Neuville et al. [81] proposed a computation method to automatically generate optimal viewpoints. The algorithm was then included in a web-based platform in Poux and Neuville et al. [82,83]. However, it still does not account for the dynamic aspect of the viewpoint (i.e., its motion in space and time), which is essential in 3D geovisualization. The temporal and spatial management of camera settings is thus still a challenge to be solved.

Author Contributions

Romain Neuville conceived and designed the formalism, performed the experiments, implemented the knowledge network applications, and wrote the paper. Laurent De Rudder assisted with the writing of the formulae. Roland Billen and Jacynthe Pouliot contributed to the elaboration of the formalized 3D geovisualization and participated, with Florent Poux, in reviewing the paper.

Acknowledgments

The authors would like to sincerely thank the anonymous reviewers for their relevant and in-depth comments.

Conflicts of Interest

The authors declare no conflict of interest. The founding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, and in the decision to publish the results.

References

  1. Abdul-Rahman, A.; Pilouk, M. Spatial Data Modelling for 3D GIS; Springer: Berlin/Heidelberg, Germany, 2008; pp. 1–13. [Google Scholar]
  2. Bandrova, T. Innovative technology for the creation of 3D maps. Data Sci. J. 2005, 4, 53–58. [Google Scholar] [CrossRef]
  3. Jazayeri, I.; Rajabifard, A.; Kalantari, M. A geometric and semantic evaluation of 3D data sourcing methods for land and property information. Land Use Policy 2014, 36, 219–230. [Google Scholar] [CrossRef]
  4. Carpendale, M.S.T.; Cowperthwaite, D.J.; Fracchia, F.D. Distortion viewing techniques for 3-dimensional data. In Proceedings of the IEEE Symposium on Information Visualization’96, San Francisco, CA, USA, 28–29 October 1996; pp. 46–53. [Google Scholar]
  5. Egenhofer, M.J.; Mark, D.M. Naive Geography; Springer: Berlin, Germany, 1995; pp. 1–18. [Google Scholar]
  6. Jobst, M.; Germanchis, T. The employment of 3D in cartography—An overview. In Multimedia Cartography; Springer: Berlin, Germany, 2007; pp. 217–228. [Google Scholar]
  7. Kwan, M.-P.; Lee, J. Emergency response after 9/11: The potential of real-time 3D GIS for quick emergency response in micro-spatial environments. Comput. Environ. Urban Syst. 2005, 29, 93–113. [Google Scholar] [CrossRef]
  8. Zlatanova, S.; Van Oosterom, P.; Verbree, E. 3D technology for improving Disaster Management: Geo-DBMS and positioning. In Proceedings of the XXth ISPRS Congress, Istanbul, Turkey, 12–23 July 2004. [Google Scholar]
  9. Meijers, M.; Zlatanova, S.; Pfeifer, N. 3D geoinformation indoors: Structuring for evacuation. In Proceedings of the Next Generation 3D City Models, Bonn, Germany, 21–22 June 2005; Volume 6. [Google Scholar]
  10. Döllner, J.; Baumann, K.; Buchholz, H. Virtual 3D City Models as Foundation of Complex Urban Information Spaces. In Proceedings of the 11th International Conference on Urban Planning and Spatial Development in the Information Society (REAL CORP), Vienna, Austria, 13–16 February 2006. [Google Scholar]
  11. Sinning-Meister, M.; Gruen, A.; Dan, H. 3D City models for CAAD-supported analysis and design of urban areas. ISPRS J. Photogramm. Remote Sens. 1996, 51, 196–208. [Google Scholar] [CrossRef]
  12. Döllner, J.; Kolbe, T.H.; Liecke, F.; Sgouros, T.; Teichmann, K. The Virtual 3D City Model of Berlin—Managing, Integrating and Communicating Complex Urban Information. In Proceedings of the 25th International Symposium on Urban Data Management, Aalborg, Denmark, 15–17 May 2006. [Google Scholar]
  13. Open Geospatial Consortium. Candidate OpenGIS® CityGML Implementation Specification (City Geography Markup Language); Open Geospatial Consortium: Wayland, MA, USA, 2006. [Google Scholar]
  14. Kolbe, T.H. Representing and exchanging 3D city models with CityGML. In 3D Geo-Information Sciences; Springer: Berlin, Germany, 2009; pp. 15–31. [Google Scholar]
  15. Ward, M.O.; Grinstein, G.; Keim, D. Interactive Data Visualization: Foundations, Techniques, and Applications; CRC Press: Boca Raton, FL, USA, 2010. [Google Scholar]
  16. Glander, T.; Döllner, J. Abstract representations for interactive visualization of virtual 3D city models. Comput. Environ. Urban Syst. 2009, 33, 375–387. [Google Scholar] [CrossRef]
  17. Batty, M.; Chapman, D.; Evans, S.; Haklay, M.; Kueppers, S.; Shiode, N.; Smith, A.; Torrens, P.M. Visualizing the city: Communicating urban design to planners and decision-makers. In Planning Support Systems, Models and Visualisation Tools; ESRI Press and Center Urban Policy Research, Rutgers Universtiy: London, UK, 2000. [Google Scholar]
  18. Métral, C.; Ghoula, N.; Silva, V.; Falquet, G. A repository of information visualization techniques to support the design of 3D virtual city models. In Innovations in 3D Geo-Information Sciences; Springer: Berlin, Germany, 2014; pp. 175–194. [Google Scholar]
  19. Brasebin, M.; Christophe, S.; Buard, É.; Pelloie, F. A knowledge base to classify and mix 3d rendering styles. In Proceedings of the 27th International Cartographic Conference, Rio de Janeiro, Brazil, 23–28 August 2015; p. 11. [Google Scholar]
  20. Neubauer, S.; Zipf, A. Suggestions for Extending the OGC Styled Layer Descriptor (SLD) Specification into 3D—Towards Visualization Rules for 3D City Models. In Proceedings of the 26th Urban and Regional Data Management, Stuttgart, Germany, 10–12 October 2007. [Google Scholar]
  21. Kraak, M.-J. Geovisualization illustrated. ISPRS J. Photogramm. Remote Sens. 2003, 57, 390–399. [Google Scholar] [CrossRef]
  22. Bleisch, S. 3D geovisualization–definition and structures for the assessment of usefulness. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 1, 129–134. [Google Scholar] [CrossRef]
  23. MacEachren, A.M.; Kraak, M.-J. Research Challenges in Geovisualization. Cartogr. Geogr. Inf. Sci. 2001, 28, 3–12. [Google Scholar] [CrossRef]
  24. Heim, M. Virtual Realism; Oxford University Press: Oxford, UK, 2000. [Google Scholar]
  25. MacEachren, A.M.; Edsall, R.; Haug, D.; Baxter, R.; Otto, G.; Masters, R.; Fuhrmann, S.; Qian, L. Virtual environments for geographic visualization: Potential and challenges. In Proceedings of the 1999 Workshop on New Paradigms in Information Visualization and Manipulation in Conjunction with the Eighth ACM International Conference on Information and Knowledge Management, Kansas City, MO, USA, 2–6 November 1999; pp. 35–40. [Google Scholar]
  26. Milgram, P.; Kishino, F. A taxonomy of mixed reality visual displays. IEICE Trans. Inf. Syst. 1994, 77, 1321–1329. [Google Scholar]
  27. Billen, M.I.; Kreylos, O.; Hamann, B.; Jadamec, M.A.; Kellogg, L.H.; Staadt, O.; Sumner, D.Y. A geoscience perspective on immersive 3D gridded data visualization. Comput. Geosci. 2008, 34, 1056–1072. [Google Scholar] [CrossRef]
  28. Kwan, M.-P. Interactive geovisualization of activity-travel patterns using three-dimensional geographical information systems: A methodological exploration with a large data set. Transp. Res. Part C Emerg. Technol. 2000, 8, 185–203. [Google Scholar] [CrossRef]
  29. Philips, A.; Walz, A.; Bergner, A.; Graeff, T.; Heistermann, M.; Kienzler, S.; Korup, O.; Lipp, T.; Schwanghart, W.; Zeilinger, G. Immersive 3D geovisualization in higher education. J. Geogr. High. Educ. 2015, 39, 437–449. [Google Scholar] [CrossRef]
  30. Andrienko, G.; Andrienko, N.; Dykes, J.; Fabrikant, S.I.; Wachowicz, M. Geovisualization of Dynamics, Movement and Change: Key Issues and Developing Approaches in Visualization Research. Inf. Vis. 2008, 7, 173–180. [Google Scholar] [CrossRef]
  31. Eco, U. A Theory of Semiotics; Indiana University Press: Bloomington, IN, USA, 1976. [Google Scholar]
  32. Stevens, S.S. On the Theory of Scales of Measurement. Science 1946, 103, 677–680. [Google Scholar] [CrossRef] [PubMed]
  33. Bertin, J. Sémiologie Graphique: Les Diagrammes, les Réseaux et les Cartes; Gauthier-Villars, Mouton & Cie.: Paris, France, 1967. [Google Scholar]
  34. Morrison, J.L. A theoretical framework for cartographic generalization with the emphasis on the process of symbolization. Int. Yearb. Cartogr. 1974, 14, 115–127. [Google Scholar]
  35. MacEachren, A.M. How Maps Work; Guilford Press: New York, NY, USA, 1995. [Google Scholar]
  36. Carpendale, M.S.T. Considering Visual Variables as a Basis for Information Visualization; Department of Computer Science, University of Calgary: Calgary, AB, Canada, 2003. [Google Scholar]
  37. Slocum, T.A.; McMaster, R.B.; Kessler, F.C.; Howard, H.H. Thematic Cartography and Geovisualization; Pearson Education Ltd.: London, UK, 2010. [Google Scholar]
  38. Boukhelifa, N.; Bezerianos, A.; Isenberg, T.; Fekete, J.-D. Evaluating sketchiness as a visual variable for the depiction of qualitative uncertainty. IEEE Trans. Vis. Comput. Graph. 2012, 18, 2769–2778. [Google Scholar] [CrossRef] [PubMed]
  39. Pouliot, J.; Wang, C.; Fuchs, V.; Hubert, F.; Bédard, M. Experiments with Notaries about the Semiology of 3D Cadastral Models. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, XL-2/W2, 53–57. [Google Scholar] [CrossRef]
  40. Pouliot, J.; Wang, C.; Hubert, F. Transparency Performance in the 3D Visualization of Bounding Legal and Physical Objects: Preliminary Results of a Survey. In Proceedings of the 4th International Workshop on 3D Cadastres, Dubai, UAE, 9–11 November 2014; pp. 173–182. [Google Scholar]
  41. Pouliot, J.; Wang, C.; Hubert, F.; Fuchs, V. Empirical Assessment of the Suitability of Visual Variables to Achieve Notarial Tasks Established from 3D Condominium Models. In Innovations in 3D Geo-Information Sciences; Isikdag, U., Ed.; Lecture Notes in Geoinformation and Cartography; Springer: Berlin, Germany, 2014; pp. 195–210. ISBN 978-3-319-00514-0. [Google Scholar]
  42. Wang, C.; Pouliot, J.; Hubert, F. Visualization Principles in 3D Cadastre: A First Assessment of Visual Variables. In Proceedings of the 3rd International Workshop on 3D Cadastres, Shenzhen, China, 25–26 October 2012. [Google Scholar]
  43. Rautenbach, V.; Coetzee, S.; Schiewe, J.; Çöltekin, A. An Assessment of Visual Variables for the Cartographic Design of 3D Informal Settlement Models. In Proceedings of the 27th International Cartographic Conference, Rio de Janeiro, Brazil, 23–28 August 2015. [Google Scholar]
  44. Elmqvist, N.; Tsigas, P. A taxonomy of 3D occlusion management techniques. In Proceedings of the Virtual Reality Conference 2007 (VR’07), Charlotte, NC, USA, 10–14 March 2007; pp. 51–58. [Google Scholar]
  45. Elmqvist, N.; Assarsson, U.; Tsigas, P. Employing dynamic transparency for 3D occlusion management: Design issues and evaluation. In Proceedings of the IFIP Conference on Human-Computer Interaction, Rio de Janeiro, Brazil, 10–14 September 2007; pp. 532–545. [Google Scholar]
  46. Coors, V. 3D-GIS in networking environments. Comput. Environ. Urban Syst. 2003, 27, 345–357. [Google Scholar] [CrossRef]
  47. Avery, B.; Sandor, C.; Thomas, B.H. Improving spatial perception for augmented reality X-ray vision. In Proceedings of the Virtual Reality Conference 2009 (VR 2009), Lafayette, LA, USA, 14–18 March 2009; pp. 79–82. [Google Scholar]
  48. Haeberling, C. 3D Map Presentation—A Systematic Evaluation of Important Graphic Aspects. In Proceedings of the ICA Mountain Cartography Workshop ‘Mount Hood’, Mt. Hood, OR, USA, 15–19 May 2002. [Google Scholar]
  49. Lorenz, H.; Trapp, M.; Döllner, J.; Jobst, M. Interactive multi-perspective views of virtual 3D landscape and city models. In The European Information Society; Springer: Berlin, Germany, 2008; pp. 301–321. [Google Scholar]
  50. Jobst, M.; Döllner, J. Better Perception of 3D-Spatial Relations by Viewport Variations. In Visual Information Systems. Web-Based Visual Information Search and Management; Sebillo, M., Vitiello, G., Schaefer, G., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2008; pp. 7–18. ISBN 978-3-540-85890-4. [Google Scholar]
  51. Bazargan, K.; Falquet, G. Specifying the representation of non-geometric information in 3D virtual environments. In Proceedings of the International Conference on Human-Computer Interaction, San Diego, CA, USA, 19–24 July 2009; pp. 773–782. [Google Scholar]
  52. Trapp, M.; Beesk, C.; Pasewaldt, S.; Döllner, J. Interactive Rendering Techniques for Highlighting in 3D Geovirtual Environments. In Advances in 3D Geo-Information Sciences; Kolbe, T.H., König, G., Nagel, C., Eds.; Lecture Notes in Geoinformation and Cartography; Springer: Berlin/Heidelberg, Germany, 2011; pp. 197–210. ISBN 978-3-642-12669-7. [Google Scholar]
  53. Hajji, R.; Billen, R. Collaborative 3D Modeling: Conceptual and Technical Issues. Int. J. 3-D Inf. Model. 2016, 5, 47–67. [Google Scholar] [CrossRef]
  54. Stadler, A.; Kolbe, T.H. Spatio-semantic coherence in the integration of 3D city models. In Proceedings of the 5th International ISPRS Symposium on Spatial Data Quality ISSDQ 2007, Enschede, The Netherlands, 13–15 June 2007. [Google Scholar]
  55. Döllner, J.; Buchholz, H. Expressive virtual 3D city models. In Proceedings of the XXII International Cartographic Conference (ICC2005), A Coruña, Spain, 11–16 July 2005. [Google Scholar]
  56. Ellul, C.; Altenbuchner, J. Investigating approaches to improving rendering performance of 3D city models on mobile devices. Geo-Spat. Inf. Sci. 2014, 17, 73–84. [Google Scholar] [CrossRef]
  57. Liu, X.; Wang, X.; Wright, G.; Cheng, J.; Li, X.; Liu, R. A State-of-the-Art Review on the Integration of Building Information Modeling (BIM) and Geographic Information System (GIS). ISPRS Int. J. Geo-Inf. 2017, 6, 53. [Google Scholar] [CrossRef]
  58. Open Geospatial Consortium. OGC® IndoorGML: Corrigendum; Open Geospatial Consortium: Wayland, MA, USA, 2018. [Google Scholar]
  59. Kim, J.; Yoo, S.; Li, K. Integrating IndoorGML and CityGML for Indoor Space. In Web and Wireless Geographical Information Systems; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2014; Volume 8470. [Google Scholar]
  60. Löwner, M.-O.; Benner, J.; Gröger, G.; Häfele, K.-H. New concepts for structuring 3D city models—An extended level of detail concept for CityGML buildings. In Proceedings of the International Conference on Computational Science and Its Applications, Ho Chi Minh City, Vietnam, 24–27 June 2013; pp. 466–480. [Google Scholar]
  61. Biljecki, F.; Ledoux, H.; Stoter, J.; Zhao, J. Formalization of the level of detail in 3D city modelling. Comput. Environ. Urban Syst. 2014, 48, 1–15. [Google Scholar] [CrossRef]
  62. Biljecki, F.; Ledoux, H.; Stoter, J. An improved LOD specification for 3D building models. Comput. Environ. Urban Syst. 2016, 59, 25–37. [Google Scholar] [CrossRef]
  63. Biljecki, F. Level of Detail in 3D City Models; TU Delft: Delft, The Netherlands, 2017. [Google Scholar]
  64. Gröger, G.; Plümer, L. CityGML—Interoperable semantic 3D city models. ISPRS J. Photogramm. Remote Sens. 2012, 71, 12–33. [Google Scholar] [CrossRef]
  65. Luebke, D.; Reddy, M.; Cohen, J.D.; Varshney, A.; Watson, B.; Huebner, R. Level of Details for 3D Graphics; Morgan Kaufmann Publishers: Burlington, MA, USA, 2012. [Google Scholar]
  66. Zhu, Q.; Hu, M.; Zhang, Y.; Du, Z. Research and practice in three-dimensional city modeling. Geo-Spat. Inf. Sci. 2009, 12, 18–24. [Google Scholar] [CrossRef]
  67. Benner, J.; Geiger, A.; Leinemann, K. Flexible generation of semantic 3D building models. In Proceedings of the 1st International Workshop on Next Generation 3D City Models, Bonn, Germany, 21–22 June 2005; pp. 17–22. [Google Scholar]
  68. Jobst, M.; Dollner, J.; Lubanski, O. Communicating Geoinformation effectively with virtual 3D city models. In Handbook of Research on E-Planning: ICTs for Urban Development and Monitoring; IGI Global: Hershey, PA, USA, 2010. [Google Scholar]
  69. Semmo, A.; Trapp, M.; Jobst, M.; Döllner, J. Cartography-Oriented Design of 3D Geospatial Information Visualization—Overview and Techniques. Cartogr. J. 2015, 52, 95–106. [Google Scholar] [CrossRef]
  70. Hu, J.; You, S.; Neumann, U. Approaches to large-scale urban modeling. IEEE Comput. Graph. Appl. 2003, 23, 62–69. [Google Scholar]
  71. Buchroithner, M.; Schenkel, R.; Kirschenbauer, S. 3D display techniques for cartographic purposes: Semiotic aspects. Int. Arch. Photogramm. Remote Sens. 2000, 33, 99–106. [Google Scholar]
  72. Jobst, M.; Kyprianidis, J.E.; Döllner, J. Mechanisms on Graphical Core Variables in the Design of Cartographic 3D City Presentations. In Geospatial Vision; Moore, A., Drecki, I., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 45–59. ISBN 978-3-540-70967-1. [Google Scholar]
  73. Häberling, C.; Bär, H.; Hurni, L. Proposed Cartographic Design Principles for 3D Maps: A Contribution to an Extended Cartographic Theory. Cartographica 2008, 43, 175–188. [Google Scholar] [CrossRef]
  74. Ware, C. Information Visualization Perception for Design, 3rd ed.; Interactive Technologies; Elsevier Science: Burlington, NJ, USA, 2012; ISBN 0-12-381464-2. [Google Scholar]
  75. Foerster, T.; Stoter, J.; Köbben, B. Towards a formal classification of generalization operators. In Proceedings of the 23rd International Cartographic Conference, Moscow, Russia, 4–10 August 2007. [Google Scholar]
  76. Ogao, P.J.; Kraak, M.-J. Defining visualization operations for temporal cartographic animation design. Int. J. Appl. Earth Obs. Geoinf. 2002, 4, 23–31. [Google Scholar] [CrossRef]
  77. Khan, M.; Khan, S.S. Data and information visualization methods, and interactive mechanisms: A survey. Int. J. Comput. Appl. 2011, 34, 1–14. [Google Scholar]
  78. Métral, C.; Ghoula, N.; Falquet, G. An Ontology of 3D Visualization Techniques for Enriched 3D City Models; Leduc, T., Moreau, G., Billen, R., Eds.; EDP Sciences: Les Ulis, France, 2012; p. 02005. [Google Scholar]
  79. Neuville, R.; Pouliot, J.; Poux, F.; Hallot, P.; De Rudder, L.; Billen, R. Towards a decision support tool for 3d visualisation: Application to selectivity purpose of single object in a 3d city scene. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, IV-4/W5, 91–97. [Google Scholar] [CrossRef]
  80. Stein, T.; Décoret, X. Dynamic Label Placement for Improved Interactive Exploration; ACM Press: New York, NY, USA, 2008; p. 15. [Google Scholar]
  81. Neuville, R.; Poux, F.; Hallot, P.; Billen, R. Towards a normalised 3D Geovisualisation: The viewpoint management. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 4, 179. [Google Scholar] [CrossRef]
  82. Poux, F.; Neuville, R.; Hallot, P.; Van Wersch, L.; Luczfalvy Jancsó, A.; Billen, R. Digital Investigations of an Archaeological Smart Point Cloud: A Real Time Web-Based Platform To Manage the Visualization of Semantical Queries. Conservation Cultural Heritage in the Digital Era 2017. pp. 581–588. Available online: http://hdl.handle.net/2268/212353 (accessed on 1 March 2018).
  83. Poux, F.; Neuville, R.; Van Wersch, L.; Nys, G.-A.; Billen, R. 3D Point Clouds in Archaeology: Advances in Acquisition, Processing and Knowledge Integration Applied to Quasi-Planar Objects. Geosciences 2017, 7, 96. [Google Scholar] [CrossRef]
Figure 1. Key components of 3D geovisualization.
Figure 2. Part of a virtual 3D LOD1 city model of New York (building features) provided by the Technical University of Munich and visualized with ArcGlobe.
Figure 3. Consequences between a set of static visual variables and 3D environment settings.
Figure 4. Incompatibilities among a set of static visual variables and 3D environment settings.
Figure 5. Incompatibility between pattern and grain for selectivity.
Figure 6. Potential incompatibilities between a set of static visual variables and 3D environment settings.
Figure 7. An application chart (an extended version of Neuville et al. [79]) for assisting the visualization of virtual 3D city models for selectivity purposes.
Figure 8. A dynamic WebGL application for assisting the visualization process of 3D geospatial data. Multiple views of the 3D model (times t1 to t4) are shown to highlight the visualization process and the feedback provided by the plugin (left sidebar).
Figure 9. A warning window that alerts users to potential incompatibilities.
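Figures 8 and 9 suggest a simple workflow: as the user assigns visual variables, the assistant checks each new choice against a knowledge base of incompatibilities and raises a warning when a graphical conflict would arise. The TypeScript sketch below illustrates such a check; it is our own illustration, not the plugin's actual code, and the single hard-coded pair (pattern/grain, after Figure 5) stands in for the full knowledge base.

```typescript
// Hypothetical sketch of the incompatibility check behind a warning window
// like Figure 9 (names and API are our assumptions, not the plugin's code).
type VisualVariable = "hue" | "size" | "transparency" | "pattern" | "grain";

interface StyleChoice {
  variable: VisualVariable;
  target: string; // e.g., the feature class being styled, such as "Building"
}

// Assumed knowledge base: variable pairs that conflict for selectivity.
// Only the pattern/grain pair of Figure 5 is encoded here.
const incompatiblePairs: [VisualVariable, VisualVariable][] = [
  ["pattern", "grain"],
];

// Returns a warning message for every incompatible pair present in the
// user's current style choices.
function findConflicts(choices: StyleChoice[]): string[] {
  const used = new Set(choices.map((c) => c.variable));
  return incompatiblePairs
    .filter(([a, b]) => used.has(a) && used.has(b))
    .map(([a, b]) => `'${a}' and '${b}' are incompatible for selectivity`);
}

// Usage: messages like this one would populate the warning window.
console.log(findConflicts([
  { variable: "pattern", target: "Building" },
  { variable: "grain", target: "Vegetation" },
]));
// -> ["'pattern' and 'grain' are incompatible for selectivity"]
```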
Table 1. State-of-the-art of static retinal variables defined over the last fifty years. (The Example column of the original, containing graphic samples, is not reproduced here.)

| Visual Variable | Author (Date) |
| --- | --- |
| Arrangement | Morrison (1974) |
| Crispness | MacEachren (1995) |
| Grain | Bertin (1967) |
| Hue | Bertin (1967) |
| Lightness/Value | Bertin (1967) |
| Material | Carpendale (2003) |
| Orientation | Bertin (1967) |
| Pattern | Carpendale (2003) |
| Perspective height | Slocum et al. (2010) |
| Position | Bertin (1967) |
| Resolution | MacEachren (1995) |
| Saturation | Morrison (1974) |
| Shape | Bertin (1967) |
| Size | Bertin (1967) |
| Sketchiness | Boukhelifa et al. (2012) |
| Spacing | Slocum et al. (2010) |
| Transparency | MacEachren (1995) |
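As a minimal illustration of how the catalog in Table 1 might be encoded inside a style-assistance tool (such as the WebGL application of Figure 8), the TypeScript snippet below stores each retinal variable with its originating author. The interface and constant names are our assumptions, not part of the paper or its implementation.

```typescript
// Hypothetical encoding of Table 1's catalog: each static retinal variable
// with the author who introduced it, usable as the vocabulary of a
// style-selection knowledge base.
interface RetinalVariable {
  name: string;
  author: string;
  year: number;
}

const retinalVariables: RetinalVariable[] = [
  { name: "Arrangement",        author: "Morrison",          year: 1974 },
  { name: "Crispness",          author: "MacEachren",        year: 1995 },
  { name: "Grain",              author: "Bertin",            year: 1967 },
  { name: "Hue",                author: "Bertin",            year: 1967 },
  { name: "Lightness/Value",    author: "Bertin",            year: 1967 },
  { name: "Material",           author: "Carpendale",        year: 2003 },
  { name: "Orientation",        author: "Bertin",            year: 1967 },
  { name: "Pattern",            author: "Carpendale",        year: 2003 },
  { name: "Perspective height", author: "Slocum et al.",     year: 2010 },
  { name: "Position",           author: "Bertin",            year: 1967 },
  { name: "Resolution",         author: "MacEachren",        year: 1995 },
  { name: "Saturation",         author: "Morrison",          year: 1974 },
  { name: "Shape",              author: "Bertin",            year: 1967 },
  { name: "Size",               author: "Bertin",            year: 1967 },
  { name: "Sketchiness",        author: "Boukhelifa et al.", year: 2012 },
  { name: "Spacing",            author: "Slocum et al.",     year: 2010 },
  { name: "Transparency",       author: "MacEachren",        year: 1995 },
];
```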
Table 2. Interpretation tasks of static retinal variables.

| Interpretation Task | Meaning | Question |
| --- | --- | --- |
| Selectivity | The capacity to extract categories | Does the retinal variable variation identify categories? |
| Associativity | The capacity to regroup similarities | Does the retinal variable variation group similarities? |
| Order perception | The capacity to compare several orders | Does the retinal variable variation identify a change in order? |
| Quantitative perception | The capacity to quantify a difference | Does the retinal variable variation quantify a difference? |
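One way a decision-support system could operationalize Table 2 is to record, for each retinal variable, which interpretation tasks it supports and then filter candidate variables by the task at hand. The sketch below is hypothetical; the two sample entries follow Bertin's classic assessment (hue is selective and associative; size is selective, ordered, and quantitative) and are illustrative only, not taken from this paper.

```typescript
// Hypothetical sketch: recording which of Table 2's interpretation tasks a
// retinal variable supports, so a decision system can filter candidates.
interface TaskCapabilities {
  selective: boolean;    // can extract categories
  associative: boolean;  // can group similarities
  ordered: boolean;      // can convey an order
  quantitative: boolean; // can quantify a difference
}

// Sample entries after Bertin (illustrative only).
const capabilities: Record<string, TaskCapabilities> = {
  Hue:  { selective: true, associative: true,  ordered: false, quantitative: false },
  Size: { selective: true, associative: false, ordered: true,  quantitative: true  },
};

// Example query: which variables are usable for a selectivity task?
const selectiveVariables = Object.entries(capabilities)
  .filter(([, caps]) => caps.selective)
  .map(([name]) => name); // ["Hue", "Size"]
```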
Table 3. Definition of sets and operators.

| Notation | Meaning |
| --- | --- |
| a ∈ A | a is an element of A |
| \|A\| | Number of elements (cardinality) of A |
| A × B | Cross-product of A and B: {(a, b) : a ∈ A, b ∈ B} |
| ℝ | Set of reals |
| ℤ | Set of integers |
| ℝ⁺ | Set of positive reals, [0, +∞) |
| ℝ³ | ℝ × ℝ × ℝ |
| ∪ | Union of two sets |
| ∩ | Intersection of two sets |
| ∨ | OR boolean operator |
| ∧ | AND boolean operator |
| ⇒ | IMPLICATION operator |
| ¬ | NOT operator |
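As a worked example of how the operators in Table 3 combine into a formal graphic design rule, the formula below restates the pattern/grain incompatibility of Figure 5. The predicates uses and selective are our illustrative shorthand, not necessarily the paper's exact formalization.

```latex
% Illustrative rule over a set S of styles applied in a scene:
% if one style varies pattern and another varies grain, selectivity
% of the resulting scene is not guaranteed (cf. Figure 5).
\[
\forall s_1, s_2 \in S:\quad
\bigl(\mathrm{uses}(s_1,\mathrm{pattern}) \wedge \mathrm{uses}(s_2,\mathrm{grain})\bigr)
\;\Rightarrow\; \neg\,\mathrm{selective}(\{s_1, s_2\})
\]
```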
