Spatial Databases: Design, Management, and Knowledge Discovery

A special issue of ISPRS International Journal of Geo-Information (ISSN 2220-9964).

Deadline for manuscript submissions: closed (31 May 2020) | Viewed by 25532

Special Issue Editor


Dr. Andreas Züfle
Guest Editor
Department of Geography and Geoinformation Science, George Mason University, Fairfax, VA 22030, USA
Interests: spatial data science; geographic information systems; data mining and machine learning; spatial index structures and efficient algorithms; uncertain data; geospatial simulation; location-based social networks

Special Issue Information

Dear Colleagues,

The recent explosion in the amount of spatial data calls for specialized systems to manage, search, and mine very large sets of spatial and spatio-temporal data.

This data explosion is facilitated by the vast proliferation of devices such as smartphones, traffic cameras, space telescopes, and Earth observation satellites. For example, NASA’s Earth Observing System Data and Information System (EOSDIS) adds more than 6 TB of data to its archives every day and makes it available to scientists and researchers around the world. As another example, millions of geo-tagged tweets become available through the Twitter API every day.

To handle this data deluge, specialized systems are required to store the data, make it actionable for knowledge extraction, and support data-driven geo-information systems.

This Special Issue is dedicated to giving an overview of state-of-the-art spatial and spatio-temporal data management, as well as to exploring future trends in concepts, methods, implementations, validations, and applications. We call for original papers from researchers around the world on topics including, but not limited to, the following:

  • Big Spatial Data
  • Computational Geometry
  • Crowdsourcing Spatial Data
  • Distributed and Parallel Algorithms
  • Earth Observation Data Management
  • Efficient Algorithms for GIS
  • Geographic Information Systems
  • Geospatial Information Retrieval
  • Indoor Space
  • Moving Objects Databases
  • Parallel and Distributed Spatial Databases
  • Privacy, Security, and Integrity in Spatial Databases
  • Real Applications and Systems
  • Recommendation Systems
  • Spatial and Spatio-Temporal Data Acquisition
  • Spatial Data Mining and Knowledge Discovery
  • Spatial Database Design
  • Spatial (Road) Networks
  • Sensor Networks
  • Similarity Searching
  • Spatial Access Methods and Indexing
  • Spatial Data Streams
  • Spatial Database Design and Conceptual Modeling
  • Spatio-Temporal and Temporal Databases
  • Uncertain, Imprecise, and Probabilistic Data
  • Urban Analytics and Mobility
Dr. Andreas Züfle
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. ISPRS International Journal of Geo-Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • spatial databases
  • spatio-temporal data
  • big spatial data
  • data management
  • spatial data mining
  • spatial data science
  • geographic information systems
  • earth observation data
  • social media data

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

21 pages, 1280 KiB  
Article
Augmenting Geostatistics with Matrix Factorization: A Case Study for House Price Estimation
by Aisha Sikder and Andreas Züfle
ISPRS Int. J. Geo-Inf. 2020, 9(5), 288; https://doi.org/10.3390/ijgi9050288 - 28 Apr 2020
Cited by 2 | Viewed by 3008
Abstract
Singular value decomposition (SVD) is ubiquitously used in recommendation systems to estimate and predict values based on latent features obtained through matrix factorization. However, being oblivious to location information, SVD has limitations in predicting variables that have strong spatial autocorrelation, such as housing prices, which strongly depend on spatial properties such as the neighborhood and school district. In this work, we build an algorithm that integrates the latent feature learning capabilities of truncated SVD with kriging, which we call SVD-Regression Kriging (SVD-RK). In doing so, we address the problem of modeling and predicting spatially autocorrelated data for recommender engines using real estate housing prices, by integrating spatial statistics. We also show that SVD-RK outperforms purely latent-feature-based solutions as well as purely spatial approaches such as Geographically Weighted Regression (GWR). Our proposed algorithm, SVD-RK, integrates the results of truncated SVD as an independent variable into a regression kriging approach. We show experimentally that latent house price patterns learned using SVD are able to improve the house price predictions of ordinary kriging in areas where house prices fluctuate locally. For areas where house prices are strongly spatially autocorrelated, as evident from a house pricing variogram showing that the data can be mostly explained by spatial information alone, we propose to feed the results of SVD into a geographically weighted regression model to outperform the ordinary kriging approach. Full article
(This article belongs to the Special Issue Spatial Databases: Design, Management, and Knowledge Discovery)
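As a rough illustration of the pipeline this abstract describes (truncated SVD latent features fed as covariates into regression kriging), the following minimal sketch uses scikit-learn's TruncatedSVD and PyKrige's RegressionKriging. It is not the authors' implementation: the price matrix, coordinates, and parameter choices are hypothetical placeholders.

    # Minimal sketch of an SVD + regression-kriging pipeline (not the paper's code).
    # Assumes price_matrix is an (n_locations x n_periods) house-price matrix with no
    # missing entries and coords is an (n_locations x 2) array of x/y coordinates.
    import numpy as np
    from sklearn.decomposition import TruncatedSVD
    from sklearn.linear_model import LinearRegression
    from pykrige.rk import RegressionKriging

    rng = np.random.default_rng(0)
    coords = rng.uniform(0, 10, size=(200, 2))                         # hypothetical locations
    price_matrix = rng.lognormal(mean=12, sigma=0.3, size=(200, 24))   # hypothetical prices

    # 1) Learn latent price patterns per location with truncated SVD.
    latent = TruncatedSVD(n_components=5, random_state=0).fit_transform(price_matrix)

    # 2) Regression kriging: the regression part models the trend from the SVD features,
    #    and kriging interpolates the spatially correlated residuals.
    target = price_matrix[:, -1]                                       # latest period's price
    rk = RegressionKriging(regression_model=LinearRegression())
    rk.fit(latent, coords, target)
    pred = rk.predict(latent, coords)
    print("mean absolute error:", float(np.abs(pred - target).mean()))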

20 pages, 6913 KiB  
Article
Evaluation of Replication Mechanisms on Selected Database Systems
by Tomáš Pohanka and Vilém Pechanec
ISPRS Int. J. Geo-Inf. 2020, 9(4), 249; https://doi.org/10.3390/ijgi9040249 - 17 Apr 2020
Cited by 6 | Viewed by 3527
Abstract
This paper compares database replication over spatial data in PostgreSQL and MySQL. Database replication addresses the problem of overloading a single database server with write and read queries. There are many replication mechanisms, each handling data differently. Criteria for objective comparison were set for testing and for determining the bottleneck of the replication process. The tests were performed on real national vector spatial datasets, namely ArcCR500, Data200, Natural Earth, and the Estimated Pedologic-Ecological Unit. HWMonitor Pro was used to monitor the PostgreSQL database, network, and system load. Monyog was used to monitor MySQL activity (data and SQL queries) in real time. Both database servers ran on computers with the Microsoft Windows operating system. The results of the tests of both replication mechanisms led to a better understanding of these mechanisms and allow informed decisions about future deployments. Graphs and tables present the statistical data and describe the replication mechanisms in specific situations. PostgreSQL with the Slony extension and asynchronous replication synchronized batches of changes with a high transfer speed and high server load. MySQL with synchronous replication synchronized every change record with a low impact on server performance and network bandwidth. Full article
(This article belongs to the Special Issue Spatial Databases: Design, Management, and Knowledge Discovery)
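One way to make the notion of replication performance concrete is to time how long a change written on the primary server takes to appear on a replica. The sketch below is only an assumption-laden illustration: the connection strings, the spatial_features table (assumed to use PostGIS geometry), and the polling approach are all hypothetical and are unrelated to the Slony or MySQL setups evaluated in the paper.

    # Hedged sketch: measure replication lag by inserting a marker row on the primary
    # and polling the replica until it appears. All names and DSNs are hypothetical.
    import time
    import uuid
    import psycopg2

    PRIMARY_DSN = "host=primary.example.org dbname=gis user=bench password=secret"
    REPLICA_DSN = "host=replica.example.org dbname=gis user=bench password=secret"

    def measure_replication_lag(timeout_s: float = 30.0) -> float:
        marker = str(uuid.uuid4())  # unique value to look for on the replica
        with psycopg2.connect(PRIMARY_DSN) as primary:
            with primary.cursor() as cur:
                # Assumes a PostGIS-enabled table: spatial_features(name text, geom geometry).
                cur.execute(
                    "INSERT INTO spatial_features (name, geom) "
                    "VALUES (%s, ST_GeomFromText('POINT(16.6 49.2)', 4326))",
                    (marker,),
                )
            primary.commit()
        start = time.monotonic()
        with psycopg2.connect(REPLICA_DSN) as replica:
            with replica.cursor() as cur:
                while time.monotonic() - start < timeout_s:
                    cur.execute("SELECT 1 FROM spatial_features WHERE name = %s", (marker,))
                    if cur.fetchone() is not None:
                        return time.monotonic() - start
                    time.sleep(0.1)
        raise TimeoutError("row did not replicate within the timeout")

    if __name__ == "__main__":
        print(f"replication lag: {measure_replication_lag():.2f} s")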

21 pages, 5873 KiB  
Article
Methods and Application of Archeological Cloud Platform for Grand Sites Based on Spatio-Temporal Big Data
by Yongxing Wu, Shaofu Lin, Fei Peng and Qi Li
ISPRS Int. J. Geo-Inf. 2019, 8(9), 377; https://doi.org/10.3390/ijgi8090377 - 29 Aug 2019
Cited by 5 | Viewed by 3122
Abstract
Grand sites are important witnesses of human civilization. The archeology of grand sites is characterized by long time frames, interdisciplinary study, irreversibility, and uncertainty. Because of the lack of effective methods and valid tools, large amounts of archeological data cannot be properly processed in time, which creates many difficulties for the conservation and use of grand sites. This study presents a method for integrating spatio-temporal big data of grand sites, covering classification and coding, spatial scales, and a spatio-temporal framework, through which archeological data from multiple sites or different excavations can be integrated. A system architecture was further proposed for an archeological information cloud platform for grand sites. By providing services such as data, visualization, standardization, spatial analysis, and application software, the platform can display sites, ruins, and relics in 2D and 3D according to their correlation. It can also display the transformation of space and time across archeological cultures, as well as restored ruins, in a 3D virtual environment. The platform provides increased support for interdisciplinary study and the dissemination of research results. A case study of the Origin of Chinese Civilization Project shows that the proposed data aggregation and fusion method can efficiently integrate multi-source, heterogeneous archeological spatio-temporal data from different sites or periods. The archeological information cloud platform has great significance for the study of the origin of Chinese civilization, the dissemination of Chinese civilization, and public participation in archeology, and would promote the sustainable development of the conservation and use of grand sites. Full article
(This article belongs to the Special Issue Spatial Databases: Design, Management, and Knowledge Discovery)
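To make the idea of classification, coding, and a shared spatio-temporal framework slightly more concrete, the sketch below shows one possible record structure with a composite key for cross-site aggregation. The field names and code scheme are purely hypothetical and are not the platform's actual standard.

    # Hypothetical sketch of a coded archeological record that could be aggregated
    # across sites and excavations; not the platform's actual data standard.
    from dataclasses import dataclass

    @dataclass
    class RelicRecord:
        site_code: str        # which grand site (hypothetical coding)
        excavation_code: str  # which excavation season / trench
        class_code: str       # hierarchical classification, e.g. "pottery.painted"
        period: str           # archeological culture or period label
        x: float              # projected or site-local coordinates
        y: float
        z: float

        def key(self) -> str:
            """Composite spatio-temporal key used here for cross-site grouping."""
            return f"{self.site_code}/{self.excavation_code}/{self.class_code}/{self.period}"

    record = RelicRecord("SITE-A", "2019-T03", "pottery.painted", "Longshan", 512.3, 204.8, -1.6)
    print(record.key())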

20 pages, 4748 KiB  
Article
NS-DBSCAN: A Density-Based Clustering Algorithm in Network Space
by Tianfu Wang, Chang Ren, Yun Luo and Jing Tian
ISPRS Int. J. Geo-Inf. 2019, 8(5), 218; https://doi.org/10.3390/ijgi8050218 - 8 May 2019
Cited by 32 | Viewed by 7644
Abstract
Spatial clustering analysis is an important spatial data mining technique. It divides objects into clusters according to their similarities in both location and attributes, and it plays an essential role in density distribution identification, hot-spot detection, and trend discovery. Spatial clustering algorithms in Euclidean space are relatively mature, while those in network space are less well researched. This study aimed to extend a well-known clustering algorithm, density-based spatial clustering of applications with noise (DBSCAN), to network space and proposed a new clustering algorithm named network space DBSCAN (NS-DBSCAN). The NS-DBSCAN algorithm uses a strategy similar to that of DBSCAN, and it additionally provides a new technique for visualizing the density distribution and revealing the intrinsic clustering structure. Tested on points of interest (POI) in Hanyang district, Wuhan, China, the NS-DBSCAN algorithm accurately detected the high-density regions. Its effectiveness was compared with the classical hierarchical clustering algorithm and the recently proposed density-based clustering algorithm with network-constrained Delaunay triangulation (NC_DT). The hierarchical clustering algorithm was effective only when the cluster number was well specified; otherwise it might separate a natural cluster into several parts. The NC_DT method excessively gathered most objects into one huge cluster. Quantitative evaluation using four indicators, namely the silhouette, the R-squared index, the Davies–Bouldin index, and the clustering scheme quality index, indicated that the NS-DBSCAN algorithm was superior to the hierarchical clustering and NC_DT algorithms. Full article
(This article belongs to the Special Issue Spatial Databases: Design, Management, and Knowledge Discovery)
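The central idea, density-based clustering that uses network (shortest-path) distances instead of Euclidean ones, can be sketched with off-the-shelf tools. The example below is not the NS-DBSCAN algorithm itself: it assumes a small networkx road graph, points of interest already snapped to network nodes, and a precomputed shortest-path distance matrix handed to scikit-learn's DBSCAN.

    # Sketch of density-based clustering in network space (not the NS-DBSCAN code):
    # replace Euclidean distances with shortest-path distances on a road network.
    import numpy as np
    import networkx as nx
    from sklearn.cluster import DBSCAN

    # Hypothetical road network: nodes are intersections, weights are segment lengths (m).
    G = nx.Graph()
    G.add_weighted_edges_from([
        (0, 1, 100), (1, 2, 120), (2, 3, 90), (3, 4, 150),
        (1, 5, 80), (5, 6, 60), (6, 7, 200), (4, 7, 110),
    ])

    # Points of interest, assumed already snapped to their nearest network node.
    poi_nodes = [0, 1, 2, 5, 6, 4, 7]

    # Pairwise network-distance matrix from shortest paths (large finite value if unreachable).
    n = len(poi_nodes)
    dist = np.zeros((n, n))
    for i, u in enumerate(poi_nodes):
        lengths = nx.single_source_dijkstra_path_length(G, u, weight="weight")
        for j, v in enumerate(poi_nodes):
            dist[i, j] = lengths.get(v, 1e9)

    # DBSCAN over the precomputed network distances; eps is a network distance in metres.
    labels = DBSCAN(eps=150, min_samples=2, metric="precomputed").fit_predict(dist)
    print(dict(zip(poi_nodes, labels)))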

32 pages, 1196 KiB  
Article
Finding Visible kNN Objects in the Presence of Obstacles within the User’s View Field
by I-Fang Su, Ding-Li Chen, Chiang Lee and Yu-Chi Chung
ISPRS Int. J. Geo-Inf. 2019, 8(3), 151; https://doi.org/10.3390/ijgi8030151 - 20 Mar 2019
Cited by 2 | Viewed by 3127
Abstract
In many spatial applications, users are only interested in data objects that are visible to them. Hence, finding visible data objects is an important operation in these real-world spatial applications. This study addressed a new type of spatial query, the View field-aware Visible k Nearest Neighbor (V2-kNN) query. Given the location of a user and his/her view field, a V2-kNN query finds the data objects p that are the nearest neighbors of, and visible to, the user, where visible means that a data object is (1) not hidden by obstacles and (2) inside the view field of the user. Previous work on visible NN queries considered only one of these two factors, not both. To the best of our knowledge, this work is the first to consider both the effect of obstacles and the restriction of the view field in finding the solutions. To support efficient processing of V2-kNN queries, a grid structure is used to index data objects and obstacles. Pruning heuristics are also designed so that only data objects and obstacles relevant to the final query result are accessed. A comprehensive experimental evaluation using both real and synthetic datasets verifies the effectiveness of the proposed algorithms. Full article
(This article belongs to the Special Issue Spatial Databases: Design, Management, and Knowledge Discovery)
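The two visibility conditions named in the abstract (inside the user's view field, and not hidden by an obstacle) can be illustrated with a brute-force filter. The sketch below is a simplified assumption: it omits the grid index and pruning heuristics of the paper, models obstacles as line segments, and describes the view field by a direction and an angular width.

    # Hedged sketch of the visibility test behind a view-field-aware visible kNN query:
    # an object counts only if it lies inside the user's view field and no obstacle
    # segment blocks the line of sight. The paper's grid index and pruning are omitted.
    import math

    def inside_view_field(user, obj, direction_deg, fov_deg):
        """Is obj within the angular view field centred on direction_deg?"""
        angle = math.degrees(math.atan2(obj[1] - user[1], obj[0] - user[0]))
        diff = (angle - direction_deg + 180) % 360 - 180   # signed angular difference
        return abs(diff) <= fov_deg / 2

    def segments_intersect(p1, p2, q1, q2):
        """Proper segment intersection test via orientation signs."""
        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
        d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
        d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
        return d1 * d2 < 0 and d3 * d4 < 0

    def visible_knn(user, objects, obstacles, direction_deg, fov_deg, k):
        """Brute-force V2-kNN: filter by view field and occlusion, then sort by distance."""
        candidates = [
            obj for obj in objects
            if inside_view_field(user, obj, direction_deg, fov_deg)
            and not any(segments_intersect(user, obj, a, b) for a, b in obstacles)
        ]
        return sorted(candidates, key=lambda o: math.dist(user, o))[:k]

    # Hypothetical scene: user at the origin looking east with a 90-degree view field.
    user = (0.0, 0.0)
    objects = [(3, 0.5), (4, -1), (1, 5), (-2, 0), (6, 0)]
    obstacles = [((5, -2), (5, 2))]                      # a wall between the user and (6, 0)
    print(visible_knn(user, objects, obstacles, direction_deg=0, fov_deg=90, k=3))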

16 pages, 9117 KiB  
Article
A Novel Process-Oriented Graph Storage for Dynamic Geographic Phenomena
by Cunjin Xue, Chengbin Wu, Jingyi Liu and Fenzhen Su
ISPRS Int. J. Geo-Inf. 2019, 8(2), 100; https://doi.org/10.3390/ijgi8020100 - 25 Feb 2019
Cited by 17 | Viewed by 3908
Abstract
Many dynamic geographic phenomena in the real world evolve through a lifespan that runs from production through development to death. Using traditional storage units, e.g., points, lines, and polygons, researchers face great challenges in exploring the spatial evolution of such dynamic phenomena over their lifespan. Thus, this paper proposes a process-oriented two-tier graph model named PoTGM to store dynamic geographic phenomena. The core ideas of PoTGM are as follows. (1) A dynamic geographic phenomenon is abstracted into a process whose lifespan runs from production through development to death; a process consists of evolution sequences, which in turn include instantaneous states. (2) PoTGM integrates a process graph and a sequence graph using a node–edge structure with four types of nodes, i.e., process, sequence, state, and linked nodes, and two types of edges, i.e., including edges and evolution edges. (3) A node stores an object, i.e., a process, sequence, or state object, and an edge stores a relationship, i.e., an including or evolution relationship between two objects. Experiments on simulated datasets demonstrate an advantage of at least one order of magnitude for PoTGM in relationship querying compared with the Oracle spatial database. Applications to sea surface temperature remote sensing products in the Pacific Ocean show that PoTGM can effectively explore marine objects and their spatial evolution, which may provide new references for global change research. Full article
(This article belongs to the Special Issue Spatial Databases: Design, Management, and Knowledge Discovery)
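The two-tier node and edge structure summarized above (process, sequence, and state nodes connected by including and evolution edges) maps naturally onto a property graph. The sketch below is a hedged illustration built with networkx rather than the graph storage the authors implemented; all identifiers and attribute names are hypothetical.

    # Hedged sketch of a process-oriented graph in the spirit of PoTGM (not the authors'
    # storage engine): process, sequence, and state nodes connected by "includes" and
    # "evolves" edges. Identifiers and attributes are hypothetical.
    import networkx as nx

    g = nx.DiGraph()

    # One dynamic phenomenon (e.g., a warm SST anomaly) modelled as a process node.
    g.add_node("process:sst_anomaly_1", kind="process", lifespan=("1998-01", "1998-03"))

    # An evolution sequence within the process, made of instantaneous states.
    g.add_node("sequence:growth", kind="sequence")
    for month in ["1998-01", "1998-02", "1998-03"]:
        g.add_node(f"state:{month}", kind="state", time=month)

    # "Includes" edges tie the tiers together; "evolves" edges order states in time.
    g.add_edge("process:sst_anomaly_1", "sequence:growth", kind="includes")
    for month in ["1998-01", "1998-02", "1998-03"]:
        g.add_edge("sequence:growth", f"state:{month}", kind="includes")
    g.add_edge("state:1998-01", "state:1998-02", kind="evolves")
    g.add_edge("state:1998-02", "state:1998-03", kind="evolves")

    # A relationship query: walk the evolution chain starting from the first state.
    chain = ["state:1998-01"]
    while True:
        nxt = [v for _, v, d in g.out_edges(chain[-1], data=True) if d["kind"] == "evolves"]
        if not nxt:
            break
        chain.append(nxt[0])
    print(chain)   # ['state:1998-01', 'state:1998-02', 'state:1998-03']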
