Article

Classification of Urban Surface Elements by Combining Multisource Data and Ontology

1 School of Geomatics and Urban Spatial Information, Beijing University of Civil Engineering and Architecture, Beijing 100044, China
2 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(1), 4; https://doi.org/10.3390/rs16010004
Submission received: 10 October 2023 / Revised: 7 December 2023 / Accepted: 14 December 2023 / Published: 19 December 2023

Abstract:
The rapid pace of urbanization and increasing demands for urban functionalities have led to diversification and complexity in the types of urban surface elements. The conventional approach of relying solely on remote sensing imagery for urban surface element extraction faces emerging challenges. Data-driven techniques, including deep learning and machine learning, necessitate a substantial number of annotated samples as prerequisites. In response, our study proposes a knowledge-driven approach that integrates multisource data with ontology to achieve precise urban surface element extraction. Within this framework, components from the EIONET Action Group on Land Monitoring in Europe matrix serve as ontology primitives, forming a shared vocabulary. The semantics of surface elements are deconstructed using these primitives, enabling the creation of specific descriptions for various types of urban surface elements by combining these primitives. Our approach integrates multitemporal high-resolution remote sensing data, network big data, and other heterogeneous data sources. It segments high-resolution images into individual patches, and for each unit, urban surface element classification is accomplished through semantic rule-based inference. We conducted experiments in two regions with varying levels of urban scene complexity, achieving overall accuracies of 93.03% and 97.35%, respectively. Through this knowledge-driven approach, our proposed method significantly enhances the classification performance of urban surface elements in complex scenes, even in the absence of sample data, thereby presenting a novel approach to urban surface element extraction.

1. Introduction

Urban land surface element monitoring encompasses the real-time and intelligent observation and identification of urban coverage elements, including structures, buildings, roads, and driving elements such as population and economic factors. This is accomplished through the application of technologies such as remote sensing and the Internet of Things [1]. Although urban surface elements primarily refer to land cover (LC), the complexity of urban environments often necessitates the consideration of certain aspects of land use (LU). Hence, in this paper, we adopt the term “urban surface element”. Urban surface element classification represents a critical challenge in the realm of remote sensing image processing, offering fundamental data support for urban planning, resource management, and environmental monitoring [2]. For instance, United Nations Sustainable Development Goal 11 for 2030 aims to foster inclusive, safe, resilient, and sustainable cities, with the assessment of various urban surface elements being integral to its evaluation.
However, the classification of urban surface elements remains a formidable task within the field of remote sensing due to the existence of both sensory and semantic gaps [3]. Progress in algorithm development has been sluggish [4], and several challenges hinder the extraction of urban surface elements:
(1) Incompatible Classification Systems: The classification products of urban surface elements find applications in diverse disciplines, including city planning, urban environmental monitoring, sponge city research, and emergency response to urban disasters. Different professional terminologies and concepts have resulted in incompatibilities between various classification systems. These products are often created according to the producer’s perspective rather than the end user’s interests. Typically, they provide simple descriptive text for each type, lacking comprehensive conceptualization and detailed expression of semantic information regarding urban surface elements. Consequently, users from different fields often struggle to find existing products that meet their specific needs, leading to duplicate product construction, consuming both human and financial resources.
(2) Land Parcel Classification Challenges: Urban land parcel classification frequently encompasses the functions provided by these parcels, leading to significant confusion between LC and LU. For instance, in China’s urban fundamental geographic conditions monitoring projects, the Tier 1 class includes planting land, forest and grass coverage, buildings, railways and roads, structures, artificial stacking sites, bare land, and water bodies. Here, LC and LU are intertwined, making direct extraction from images difficult and necessitating manual visual interpretation [5].
(3) Challenges in Neural Networks and Deep Learning: While neural networks and deep learning offer promising results, they present inherent challenges. The model structures often resemble black boxes, making it challenging to explain the mechanisms behind remote sensing image classification [6]. Moreover, their effectiveness relies on the availability and reliability of training samples, a process that is labor intensive [7].
Due to these complexities, urban surface element extraction projects commonly resort to traditional manual visual interpretation, a time-consuming and labor-intensive approach [3,4].
A knowledge-driven approach distinguishes itself from data-driven methodologies by relying on prior expert knowledge, thereby remaining independent of the data. This characteristic mitigates the need to modify classification models when transitioning between different Earth observation (EO) data or training datasets. The knowledge-driven approach is esteemed for its high interpretability, robust manual intervention capabilities, and domain adaptability, effectively bridging the semantic gap between classified products and end users.
In the realm of remote sensing, the evolution of knowledge-driven methods has predominantly been channeled through geographic object-based image analysis (GEOBIA) [8,9], a technique commonly employed in urban LC classification, especially when dealing with high-resolution imagery. Notably, GEOBIA has surpassed pixel-based classification methods in terms of accuracy [10]. Leveraging knowledge representation techniques such as ontologies [11] augments the potential of GEOBIA. Ontology, initially rooted in philosophy, was subsequently integrated into geographic information systems. It serves as a formal language to describe concepts and their relationships. Within geographic information systems, ontology finds application in constructing semantic models of geographic information, thereby enhancing the sharing and utilization of geographic data [11,12,13].
Pioneering work by Arvor et al. introduced an ontology-based prototype for the automated classification of Landsat images, grounded in explicit spectral rules. This prototype underwent testing on four Landsat image subsets, affirming the efficacy of ontology in formalizing expert knowledge and enhancing remote sensing image classification [14,15,16]. Subsequently, Arvor et al. extended their approach by incorporating a conceptual framework inspired by ontology, leading to a knowledge-driven method adaptable to end users’ requirements, aligning with the construction of the Land Cover Classification System [3,17].
Adamo et al. explored the effectiveness of a knowledge-driven GEOBIA learning scheme for classifying very-high-resolution images for mapping natural grassland ecosystems. The findings demonstrated that the knowledge-driven approach not only applies to (semi)natural grassland ecosystem mapping in vast, inaccessible areas but also reduces the costs associated with ground truth data acquisition [18]. Gu et al. proposed an object-based semantic classification method for high-resolution satellite images by harnessing the strengths of ontology in combination with GEOBIA, thus forming a hybrid classification method grounded in both knowledge and data [19]. However, this approach may not be suitable for urban areas characterized by robust economies and complex LU, as these areas exhibit high spectral and geometric complexity, often accompanied by tall building shadows [20]. Remote sensing imagery alone may prove insufficient for describing intricate urban LC, such as information pertaining to vegetation phenology, which necessitates multitemporal analysis.
In light of the rapid advancements in computer and remote sensing technology, the fusion of multisource remote sensing data with diverse datasets has gained prominence for LU/LC classification [21,22,23]. Crowdsourced geographic data, voluntarily contributed by nonprofessionals and accessible via the Internet, represent a valuable resource in this context [24,25]. Prominent sources of crowdsourced geographic data include the OpenStreetMap (OSM) project and social media check-in data from platforms such as Weibo and Instagram [26]. This type of data boasts characteristics such as real-time updates, rapid dissemination, rich information content, cost-effectiveness, and substantial volume [27].
With the ongoing evolution of communication technology, the acquisition of geospatially significant spatial big data, such as road network data and point of interest (POI) data, has become increasingly convenient, offering valuable avenues for urban spatial information research. Researchers such as Fan et al. have combined POI data from OSM with GEOBIA techniques to extract impervious surfaces from high-spatial-resolution imagery [28]. Huang et al. merged Gaofen-2 satellite imagery with social perception data to construct a suitability analysis framework for urban residential land. The results underscore the utility of this framework in urban LU planning and sustainable residential land development [29]. The integration of multisource data and high-spatial-resolution data enriches the knowledge rules used in urban LC classification, rendering them more comprehensive and accurate.
To enhance the semantic expression of LC types, including urban LC, it is imperative to develop a method that effectively distinguishes between LC and LU. This distinction is crucial for improving the flexibility of urban surface element monitoring and narrowing the semantic gap between end users and classified products. In 1999, the Land Monitoring Action Group of EU member states initiated the European Harmonized Land Monitoring Project, aimed at fostering collaboration among EU member states in land monitoring. Central to this project is the concept of the EIONET Action Group on Land Monitoring in Europe (EAGLE), which serves as a semantic translation and data integration tool between datasets and conceptual terms [30]. The EAGLE matrix comprises three components: land cover component (LCC), land use attribute (LUA), and characteristic (CH). Unlike traditional categorization methods, the EAGLE matrix deconstructs LC class definitions into components, attributes, and features, enabling better comprehension of these categories. Notably, the EAGLE matrix offers a flexible structure with rich and intuitive content, allowing for the dynamic addition or removal of features. It also facilitates semantic translation of LC types by breaking down LC components and LUAs into separate modules to prevent potential confusion and ambiguity. By leveraging the EAGLE matrix, we deconstruct the semantics of urban surface elements, making it comprehensible for various end users, not just remote sensing experts. This approach clarifies LC categories and narrows the semantic gap. Given the diverse LC classification schemes within the Land Cover Classification System [17], it is crucial to adopt a general approach that addresses the semantic gap between low-level visual features and high-level image semantics. To bridge this gap, we propose the use of ontology primitives as transitional elements. 
Ontology primitives play a pivotal role in this methodology, mitigating the semantic gap between the domain of image objects and human-centered language-based class formulations [4,15,19,22]. The explicit descriptions of primitives employ symbolic natural language assertions, enhancing the interpretability of classification rules for end users. This approach aims to minimize both semantic and sensory gaps, affording end users the flexibility to establish the final relationship between the geographic entity of interest (e.g., natural vegetation) and the primitives. Semantic primitive theory, originating in linguistics, seeks to distill the smallest set of natural language elements that can convey semantics most effectively [31]. Ontology primitives represent the smallest semantic compositional units derived from deconstructed semantic descriptions of land surface cover, forming a crucial component of the urban surface elements ontology. In this paper, the EAGLE matrix serves as a shared vocabulary, or ontology primitives. Utilizing this shared vocabulary, we can semantically deconstruct urban surface element categories in complex scenes, enabling the expression of challenging-to-identify and easily mixed ground object types through different combinations of ontology primitives. The ontology primitive is pivotal in constructing the urban surface element ontology, enabling the expression of urban surface element categories through multisource features.
This paper presents a methodology for extracting urban surface elements by combining ontology with multisource data. The key objectives include the following: 1. Combining remote sensing data and crowdsourced geographic data to extract multisource features. 2. Constructing a classification ontology model and deconstructing the semantic meaning of urban surface elements using ontology primitives while establishing relationships between primitives, urban surface elements, and image object features. 3. Realizing the classification of urban surface elements through the creation of semantic classification rules and reasoning processes.

2. Materials

The experimental dataset primarily comprised multitemporal data collected from six distinct periods, specifically, September 2021, November 2021, February 2022, April 2022, June 2022, and July 2022. This dataset encompassed high-resolution remote sensing imagery acquired from the Beijing-2 satellite, POI data sourced from Amap (https://lbs.amap.com/, accessed on 18 December 2022) for the months of June and July 2022, and road network data extracted from OSM (https://www.openstreetmap.org/, accessed on 18 December 2022).

2.1. Multitemporal Beijing-2 Satellite Remote Sensing Data

The Beijing-2 remote sensing satellite constellation system comprises three optical remote sensing satellites, boasting a panchromatic resolution of 0.8 m and a multispectral resolution of 3.2 m. Figure 1 displays an image from the sixth phase of the Beijing-2 satellite system, captured over Haidian District, Beijing.

2.2. OSM and POI Data

The term point of interest, often abbreviated as POI, refers to specific locations on a map, typically encompassing information about buildings, shops, restaurants, landmarks, and other notable features. The Amap Open Platform offers dedicated APIs (https://developer.amap.com/, accessed on 18 December 2022) for web scraping, providing a convenient means for users to access POI data. Within the scope of this research, the primary emphasis was placed on leveraging data that could convey information related to buildings and structures. To complement this, road network data from OSM were integrated. The OSM data were primarily obtained through the Geofabrik free download server (http://download.geofabrik.de/, accessed on 18 December 2022). OSM data are contributed by nonprofessional individuals and encompass various elements, including points, lines, and surfaces, covering aspects such as roads, LU, and POIs, among others. To ensure compatibility, the POI data were transformed from their original Mars coordinate system (GCJ-02) into the WGS-84 coordinate system. Subsequently, the acquired POI data and OSM data underwent cleaning, matching, and vectorization, simplifying their representation for further research. The vectorized data are depicted in Figure 2. Amap organizes POIs into three hierarchical levels (large, medium, and small), comprising 23 first-level, 267 second-level, and 869 third-level types, as shown in Table 1.
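The conversion from the Mars coordinate system (GCJ-02) to WGS-84 is not spelled out in the text. As an illustration, the widely published approximate inverse of the GCJ-02 offset can be sketched as follows; the constants are the standard published ones, the function names are ours, and the result is only accurate to a few meters:

```python
import math

A = 6378245.0                  # semi-major axis used by the GCJ-02 transform
EE = 0.00669342162296594323    # first eccentricity squared

def _transform_lat(x, y):
    ret = -100.0 + 2.0 * x + 3.0 * y + 0.2 * y * y + 0.1 * x * y + 0.2 * math.sqrt(abs(x))
    ret += (20.0 * math.sin(6.0 * x * math.pi) + 20.0 * math.sin(2.0 * x * math.pi)) * 2.0 / 3.0
    ret += (20.0 * math.sin(y * math.pi) + 40.0 * math.sin(y / 3.0 * math.pi)) * 2.0 / 3.0
    ret += (160.0 * math.sin(y / 12.0 * math.pi) + 320.0 * math.sin(y * math.pi / 30.0)) * 2.0 / 3.0
    return ret

def _transform_lon(x, y):
    ret = 300.0 + x + 2.0 * y + 0.1 * x * x + 0.1 * x * y + 0.1 * math.sqrt(abs(x))
    ret += (20.0 * math.sin(6.0 * x * math.pi) + 20.0 * math.sin(2.0 * x * math.pi)) * 2.0 / 3.0
    ret += (20.0 * math.sin(x * math.pi) + 40.0 * math.sin(x / 3.0 * math.pi)) * 2.0 / 3.0
    ret += (150.0 * math.sin(x / 12.0 * math.pi) + 300.0 * math.sin(x / 30.0 * math.pi)) * 2.0 / 3.0
    return ret

def gcj02_to_wgs84(lon, lat):
    """Approximate inverse: compute the GCJ-02 offset at the given point
    and subtract it (valid inside mainland China)."""
    dlat = _transform_lat(lon - 105.0, lat - 35.0)
    dlon = _transform_lon(lon - 105.0, lat - 35.0)
    radlat = lat / 180.0 * math.pi
    magic = 1 - EE * math.sin(radlat) ** 2
    sqrtmagic = math.sqrt(magic)
    dlat = (dlat * 180.0) / ((A * (1 - EE)) / (magic * sqrtmagic) * math.pi)
    dlon = (dlon * 180.0) / (A / sqrtmagic * math.cos(radlat) * math.pi)
    return lon - dlon, lat - dlat

# Example: a GCJ-02 point near Zhongguancun, Beijing (coordinates illustrative)
wgs_lon, wgs_lat = gcj02_to_wgs84(116.32, 39.98)
```

The typical GCJ-02 offset is on the order of a few hundred meters, so the corrected coordinates differ from the input by roughly 0.003–0.006 degrees in the Beijing area.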
The collected POI data included a total of 12,326 entries for Food and Beverages, 24,445 entries for Place Name and Address, 9217 entries for Enterprises, 17,024 entries for Shopping, 4869 entries for Commercial House, 1447 entries for Accommodation Service, and 1512 entries for Auto Service. Each POI category was mapped to the corresponding urban surface element type.

2.3. Experimental Plot

To assess the efficacy of the proposed method, we conducted experiments using the ontology-driven approach alongside the traditional support vector machine (SVM) method [32] in two distinct regions within Haidian District, Beijing.

2.3.1. Site A

The first experimental area, denoted as Site A, was located in the northwest of Beijing, close to the Fifth Ring Road. The site was proximate to Malianwa Road in Haidian District and was a residential zone outside the city center, as illustrated in Figure 3. This study area encompassed a diverse array of urban elements, including forest, grass, buildings, and roads. The imagery employed for this experimental region was acquired from the Beijing-2 remote sensing observation satellite. Following preprocessing, the images exhibited dimensions of 670 × 760 pixels, with a spatial resolution of 0.8 m.

2.3.2. Site B

Experimental Area B was strategically chosen to be near Zhongguancun Street in Haidian District, Beijing, as depicted in Figure 4. Haidian District stands out as one of the most advanced urban regions within Beijing. The vicinity near Zhongguancun Street serves as the epicenter and hub of China’s science and technology industry, boasting numerous scientific research institutions, universities, and thriving business districts. This area enjoys the benefits of excellent transportation connectivity and a rich cultural milieu. The experimental site exhibited a complex LU scenario, encompassing a significant number of buildings (including residential areas), structures (such as parking facilities), green spaces, roads, and other quintessential urban surface elements. This diversity made it an ideal location for evaluating the proposed methodology. The research imagery for this region was sourced from the Beijing-2 remote sensing observation satellite. After preprocessing, the images had dimensions of 2000 × 1600 pixels, with a spatial resolution of 0.8 m.
To assess the effectiveness of the methodology outlined in this paper, we established a sample set by using region of interest delineation, coupled with visual interpretation of high-resolution images, to subsequently construct a confusion matrix for accuracy verification. Considering the resolution of the experimental areas and the distribution of each LC category, a total of 235 samples were meticulously selected, ensuring an even dispersion across the experimental area.
In Site A, we selected 15 samples for each of the following categories: Buildings, Railways and Roads, Building Shadows, and Forest and Grass Coverage, amounting to a total of 60 samples.
In Site B, we expanded our selection to include 15 samples for Planting Land, 30 for Structures, 30 for Railways and Roads, 30 for Forest and Grass Coverage, 40 for Buildings, and 30 for Building Shadows, resulting in a total of 175 samples.
The spatial distribution of this sample set is visually represented in Figure 5. The definitions of the urban surface elements according to China’s urban fundamental geographic conditions monitoring projects [5] are shown in Table 2.
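The accuracy verification from such a sample set can be sketched in plain Python. The class labels and counts below are purely illustrative, not the paper's actual confusion matrix:

```python
def confusion_matrix(reference, predicted, classes):
    """Rows index the reference (ground-truth) class, columns the predicted class."""
    idx = {c: i for i, c in enumerate(classes)}
    m = [[0] * len(classes) for _ in classes]
    for r, p in zip(reference, predicted):
        m[idx[r]][idx[p]] += 1
    return m

def overall_accuracy(m):
    """Share of samples on the diagonal (correctly classified)."""
    return sum(m[i][i] for i in range(len(m))) / sum(sum(row) for row in m)

# Toy example: five verification samples, one misclassified
classes = ["Buildings", "Roads", "Forest_and_Grass"]
reference = ["Buildings", "Buildings", "Roads", "Roads", "Forest_and_Grass"]
predicted = ["Buildings", "Roads", "Roads", "Roads", "Forest_and_Grass"]
m = confusion_matrix(reference, predicted, classes)
oa = overall_accuracy(m)   # 4 of 5 samples correct
```

Per-class producer's and user's accuracies follow the same pattern, dividing the diagonal entry by the row sum and column sum, respectively.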

3. Methodology

Figure 6 illustrates the comprehensive workflow of this study. Initially, high-resolution images underwent segmentation to acquire image patches. Subsequently, feature extraction was executed on the multisource data to assign relevant features to each image patch. Following this, LC types were deconstructed into primitives, facilitating the construction of an ontology for urban surface element classification. Rules were employed to establish connections between the features derived from multisource data and the ontology primitives, as well as to define composition rules governing the relationship between urban surface element types and the ontology primitives. Finally, the classification of urban surface elements was achieved through semantic rule inference.

3.1. Construction of the Urban Land Cover Classification Ontology Model

3.1.1. Ontology Overview

Ontology encompasses several key components, including the concept layer (also referred to as class), relation (which represents semantic relationships between classes), attribute (depicting the characteristics or properties of a class, encompassing object attributes and data attributes), axiom (conveying constraints within the ontology to define or restrict the semantics of classes, relations, and attributes), and instance (representing a specific example of a class). Various ontology languages are available, such as Web Ontology Language (OWL), Resource Description Framework, Description Logic, Knowledge Interchange Format, Semantic Web Rule Language (SWRL), and more. Ontology construction tools are software applications used for creating and editing ontologies. Commonly employed tools include Protégé tools, TopBraid Composer, OntoStudio, WebOnto, OntoEdit, and others.
Ontology reasoning engines, which facilitate logical inference within ontologies, comprise Pellet, HermiT, FaCT++, Jena, and the OWL API, among others [33].
In this study, the primary language used for representing urban surface elements and image object features was OWL. OWL is capable of describing various data types, including classes, object attributes, axioms, and entities within ontologies [34]. SWRL was utilized to express semantic rules, while the FaCT++ inference engine was employed for reasoning over them [35]. Both the ontology model for urban surface elements and the ontology model for multisource data features were constructed with the Protégé tool (version 5.5.0) developed by Stanford University [36]. Furthermore, semantic relationships and rule constraints between the ontology primitives and the urban surface element classification ontology were established.
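An SWRL rule has the shape body → head, e.g., hasNDVI(?x, ?v) ∧ swrlb:greaterThan(?v, 0.3) → Vegetated(?x). The inference step can be emulated in plain Python to illustrate the idea; the thresholds and class names below are made up for illustration, not the paper's actual rules:

```python
# Minimal sketch of SWRL-style classification: each rule is a
# (condition, label) pair mirroring a body -> head implication.

def classify(obj, rules):
    """Collect every label whose rule condition holds for the
    object's feature dictionary."""
    return {label for condition, label in rules if condition(obj)}

rules = [
    # hasNDVI(?x, ?v) ^ swrlb:greaterThan(?v, 0.3) -> Vegetated(?x)
    (lambda o: o["ndvi"] > 0.3, "Vegetated"),
    # high morphological building index and low NDVI -> building candidate
    (lambda o: o["mbi"] > 0.15 and o["ndvi"] <= 0.3, "Building_candidate"),
    # high shadow index -> shadow
    (lambda o: o["shadow_index"] > 0.5, "Shadow"),
]

patch = {"ndvi": 0.45, "mbi": 0.02, "shadow_index": 0.1}
labels = classify(patch, rules)
```

In the actual workflow, the reasoner (FaCT++) evaluates such rules over OWL individuals rather than Python dictionaries, but the logical pattern is the same.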

3.1.2. Urban Surface Element Ontology Construction

During the construction of the ontology model for urban surface elements, it was imperative to establish hierarchical relationships between categories that align with the hierarchical structure of the classification system. Figure 7 provides a comprehensive tree diagram detailing the hierarchy of surface element types within the framework of the urban fundamental geographic conditions monitoring classification system.

3.2. Multisource Data Feature Extraction

Urban surface elements were derived from a variety of multisource data, with ontology primitives serving as the intermediary. The process began with the extraction of multiple features from the data, followed by the establishment of relationships between these features and ontology primitives.
Multisource data feature extraction encompassed several steps, including satellite image data segmentation, feature extraction using multitemporal high-resolution remote sensing data, POI data from OSM, feature superposition, and format conversion. For this purpose, eCognition software (version 9.0), developed and distributed by Trimble in Germany, was chosen as the tool for image segmentation and feature overlay [37].
The features extracted from image objects in this study are depicted in Figure 8, where NDVI denotes the normalized difference vegetation index, SI the shadow index, and MBI the morphological building index [38].

3.2.1. High-Resolution Remote Sensing Image Feature Extraction

In this experiment, high-resolution images captured by Beijing-2 in July 2022 were chosen as the foundational images. Subsequently, a series of data preprocessing procedures were conducted on all images within the experimental area. These operations encompassed radiometric calibration, orthographic correction, image fusion, mosaic creation, and cropping.
Within the context of object-oriented remote sensing image classification, the initial step entails image segmentation, where the quality of segmented objects directly impacts the classification outcomes. Currently, numerous image segmentation methods exist, broadly categorized as boundary-based segmentation, region-based segmentation, and combinations of both boundary-based and region-based segmentation [39,40,41].
In this study, the multiscale segmentation tools available in the eCognition Developer software (version 9.0) were employed to segment the foundational images. This process predominantly involved adjusting three key parameters: the scale factor, compactness factor, and shape factor.

3.2.2. Multisource Data Feature Extraction

eCognition facilitates the extraction of several types of features, including spectral features, texture features, geometric features, and thematic indices. In this context, NDVI values were computed for the time series images, and phenological characteristics were derived by analyzing the NDVI differences [42].
Figure 9 illustrates the computed NDVI values across six different time periods for both green forestland and planted land. Statistical analysis of the numerical trends yielded phenological cycle parameters, which served as a valuable reference in this paper.
To aid in the differentiation between green forestland and planted land, this paper introduced a phenological index. This index was computed based on the average NDVI values from September 2021, November 2021, February 2022, and July 2022.
PI = (NDVI_Sept + NDVI_Nov + NDVI_Feb + NDVI_Jul) / 4
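The phenological index reduces to a four-month NDVI mean per image object; a minimal sketch (the NDVI values below are made up for illustration):

```python
def ndvi(nir, red):
    """Normalized difference vegetation index for one band pair."""
    return (nir - red) / (nir + red)

def phenological_index(ndvi_by_month, months=("Sep", "Nov", "Feb", "Jul")):
    """Phenological index: mean NDVI over the four key acquisition months."""
    return sum(ndvi_by_month[m] for m in months) / len(months)

# Illustrative per-month NDVI values for a single image object
series = {"Sep": 0.62, "Nov": 0.35, "Feb": 0.18, "Jul": 0.70}
pi = phenological_index(series)
```

Green forestland keeps a comparatively high mean across these months, while planted land drops sharply outside the growing season, which is what makes the index discriminative.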
Additionally, density information pertaining to POIs and the OSM road network was individually calculated. These density maps served as physical features [43]. This process was accomplished through kernel density calculation in ArcGIS software (version 10.7), with adjustments of output cell size and search radius through trial and error. A subset of these features is depicted in Figure 10. As an illustrative example, when analyzing buildings, the POI density for the building class within each partitioned area enabled the determination of whether an object belonged to the building class.
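The density surfaces were produced with the ArcGIS Kernel Density tool; conceptually, the per-cell calculation can be sketched as below. This is a simplified quartic-kernel version with cell size and search radius as free parameters, mirroring the trial-and-error tuning mentioned above:

```python
import math

def kernel_density(points, cell, bandwidth, width, height):
    """Quartic (biweight) kernel density surface on a width x height grid.
    `points` are (x, y) POI or road-sample locations in the same units as
    `cell` and `bandwidth`. The normalization here is a simplified sketch."""
    grid = [[0.0] * width for _ in range(height)]
    for row in range(height):
        for col in range(width):
            cx, cy = (col + 0.5) * cell, (row + 0.5) * cell  # cell center
            for px, py in points:
                d = math.hypot(cx - px, cy - py)
                if d < bandwidth:
                    # quartic kernel: (1 - (d/h)^2)^2, scaled by 3/(pi*h^2)
                    grid[row][col] += (3.0 / (math.pi * bandwidth ** 2)
                                       * (1 - (d / bandwidth) ** 2) ** 2)
    return grid

# One POI at (5, 5) on a 10 x 10 grid with 1-unit cells, 3-unit search radius
grid = kernel_density([(5.0, 5.0)], cell=1.0, bandwidth=3.0, width=10, height=10)
```

Cells within the search radius of a point receive a contribution that decays to zero at the radius; cells farther away stay at zero, which is how a building-class POI cluster shows up as a local density peak.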

3.2.3. Feature Overlay and Format Conversion

Due to the diverse data sources, the extracted feature information was inherently independent. To ensure that each segmented object possessed a comprehensive set of feature information, a process of feature superimposition was carried out using eCognition software (version 9.0) and then exported in CSV format.
To integrate each image object as an instance into our ontology file, the CSV file format described above had to be transformed into the OWL ontology format, incorporating feature information that could be input into the classification ontology model. This file format facilitated seamless connectivity with the database. Figure 11 provides a visual representation of the geographic ontology instance file, including image object feature information, generated after importing the image object features into the urban surface element classification ontology model.
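The CSV-to-OWL conversion can be sketched as below. The namespace, prefixes, and the assumption that every feature is a float-typed data property are ours for illustration, not the paper's actual schema:

```python
import csv
import io
from xml.sax.saxutils import escape

BASE = "http://example.org/urban#"  # illustrative namespace

def csv_rows_to_owl(csv_text):
    """Render each image-object row as an OWL named individual with one
    data-property assertion per feature column. Column names are assumed
    to be valid XML element names (as exported from eCognition)."""
    reader = csv.DictReader(io.StringIO(csv_text))
    individuals = []
    for row in reader:
        oid = row.pop("object_id")
        props = "\n".join(
            f'  <urban:{name} rdf:datatype="&xsd;float">{escape(value)}</urban:{name}>'
            for name, value in row.items())
        individuals.append(
            f'<owl:NamedIndividual rdf:about="{BASE}Object_{oid}">\n'
            f'{props}\n</owl:NamedIndividual>')
    return "\n".join(individuals)

# One exported image object with two feature columns (values illustrative)
owl_fragment = csv_rows_to_owl("object_id,NDVI,MBI\n17,0.42,0.05\n")
```

A complete file would wrap these fragments in an `rdf:RDF` root element with the usual `owl`, `rdf`, and `xsd` namespace declarations before loading into Protégé.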

3.2.4. Image Object Feature Ontology Model

The OWL language was employed to conduct ontology modeling of multisource data features using a top-down approach, categorizing them into five distinct categories: geometric features, texture features, spectral features, thematic indices, and density features. Building upon this foundation, a secondary refinement was carried out for each category of features [44].
The geometric features were further categorized into area, aspect ratio, density, and circumference. Texture features were refined to include contrast, angular second moment, entropy, mean value, and correlation. Spectral features were divided into brightness, mean value, and maximum value. Thematic indices were subcategorized into the normalized difference vegetation index, normalized building index, morphological building index, shadow index, and normalized difference water index. Density features comprised residential building density, road density, structure density, bus density, commerce and finance density, and community density.
Figure 12 illustrates this hierarchical structure, presenting a tree diagram that represents the features of multisource data.

3.3. Ontology Primitive

The ontology primitives serve as the foundational “building blocks” of the urban surface elements ontology, acting as a crucial bridge between observational data and the urban surface elements ontology. The construction of ontology primitives involves decomposition and determination based on the semantics of each type of urban surface element.
In this study, the EAGLE matrix served as a shared vocabulary, acting as ontology primitives. It is important to note that the EAGLE matrix dissects the definition of urban surface elements into components, attributes, and features. It does not represent a specific classification system and remains independent of any particular classification system.

3.3.1. Ontology Primitives Established from the EAGLE Matrix

Figure 13 illustrates the EAGLE matrix, comprising three integral parts: LCC, LUA, and CH. This matrix offers a high degree of granularity for decomposing the concept of LC. Such granularity accommodates the definition of urban surface element types at varying scales [45].
During the extraction of ontology primitives, different combinations of components were selected from the three parts of the EAGLE matrix (LCC, LUA, CH) based on the category definitions of various urban surface elements classification systems. The EAGLE matrix encompasses a plethora of attribute information that describes LC units, with some of this information obtainable from remote sensing/EO data, mainly concentrated in the LCC and CH sections. The information extracted from EO data is contingent on the specifications of the original imagery, including spatial, temporal, and spectral resolution. For instance, the use of visible, near-infrared, and shortwave infrared bands in conjunction with time series images allows for detailed expression of vegetation species. By extracting vegetation attributes such as canopy cover density from spectral responses, additional features related to phenological and temporal dynamics can be inferred from multiperiod EO data.
Figure 14 provides an overview of the ontology primitive hierarchies within the Protégé software, encompassing the LCC, LUA, and CH sections.

3.3.2. Relationship between Ontology Primitives and Ontology

A comprehensive analysis of the target classification system was conducted, with a detailed examination of the semantics and hierarchy of each category. Each semantic concept related to urban surface elements was systematically broken down into ontology primitives. These ontology primitives were employed to depict the conceptual models of class definitions within the classification system. The construction of ontology primitives primarily involved two key processes: analyzing the class definitions and employing various ontology primitives to articulate the categories of ground objects. The mapping relationship between ontology primitives was established by utilizing multisource data features. Figure 15 visually represents this process.
As an illustrative example, let us consider the “buildings” category within the urban fundamental geographic conditions monitoring system. We selected relevant components from the ontology primitives, as depicted in Figure 16, based on a thorough analysis of the semantic information associated with the “buildings” category.
In the urban fundamental geographic conditions monitoring classification system, “buildings” are defined as follows: “Buildings include housing construction areas and independent housing construction. The housing construction area is an area enclosed by the outline lines of housing buildings with similar heights, similar structures, regular arrangements, and similar building densities. The independent house building includes large-scale single buildings in the urban area, scattered residential areas, and small-scale scattered house buildings”.
To analyze this definition, we chose components from the LCC and CH modules of the ontology primitives [46]. For the LCC, buildings are nonliving and not naturally generated, so we selected “Abiotic/nonvegetated” and “Artificial_surfaces_and_construction” from the LCC. The CH module can be adapted flexibly, allowing content to be added or removed as needed. We extracted keywords from the definition: from the term “single building”, we chose “Single_blocks” in the CH module; we selected “Uniform” based on the regular layout and “Homogeneous” based on the regular arrangement. Additionally, we introduced a new category, “building coverage”, to the CH module based on “similar building density”. Finally, considering the LUAs of the feature type, we selected relevant content from the service industry category within the LUA to describe the functional attributes of buildings, such as accommodation, catering, or shopping.

3.3.3. Relationship between Ontology Primitives and Multisource Data Features

To extract the ontology primitive attributes for each image patch, we established a relationship between the features obtained from multisource data and the primitives. Multiple features were employed to express each ontology primitive. Figure 17 illustrates the decomposed primitives and their corresponding multisource data features for the “building” type. In the diagram, ellipses represent ontology primitives, cylinders represent data features, and the black arrows denote the relationship between the ontology primitive and the urban surface element category. Dotted arrows describe the attribute relationship between image object features and ontology primitives.
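These attribute relationships can be summarized as a simple lookup table. The sketch below records the primitive-to-feature links that appear in the markup rules of Section 3.4 as plain Python (feature names are sanitized for use as identifiers); the study itself encodes these links as OWL attribute relationships in Protégé:

```python
# Ontology primitive -> multisource data features that express it.
# Feature names follow the SWRL markup rules in Section 3.4, with commas
# removed so they are valid identifiers.
PRIMITIVE_FEATURES = {
    "Abiotic/nonvegetated": ["NDVI"],
    "Artificial_surfaces_and_constructions": ["MBI"],
    "Single_blocks": ["Density"],
    "Homogenous": ["GLCM_Mean"],
    "Community_services": ["Community_density"],
    "Commerce_Finances": ["Commerce_Finances_density"],
    "Accommodation_gastronomy": ["Accommodation_gastronomy_density"],
}

def features_for(primitive):
    """Return the data features that express a given ontology primitive."""
    return PRIMITIVE_FEATURES.get(primitive, [])
```

A table like this makes the mapping between heterogeneous data sources and the shared vocabulary explicit and easy to extend as new features become available.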
Figure 18 depicts the ontology model for urban surface element classification developed in this study. The first column displays the concepts of urban surface elements and ontology primitives in the urban fundamental geographic conditions monitoring system along with their hierarchical relationships. The second column presents the data attributes associated with these concepts. In the third column, the red boxes signify the semantic relationships between urban surface element concepts utilizing ontology primitives. The blue boxes denote the attribute relationships between data features and ontology primitives.

3.4. Ontology Inference Based on Semantic Rules

Using the constructed urban surface element ontology model, the FaCT++ ontology inference engine was employed to evaluate SWRL semantic rules for each instance incorporated into the ontology model, enabling the acquisition of semantic information for each instance object. Below, we exemplify the rules using the category of “buildings”.
1. Markup rules
  • NDVI(?x, ?y), greaterThanOrEqual(?y, −1.0), lessThan(?y, 0.0) -> Abiotic/nonvegetated (?x);
  • Density(?x, ?y), greaterThanOrEqual(?y, 0.0), lessThan(?y, 10.725) -> Single_blocks (?x);
  • GLCM_Mean(?x, ?y), greaterThanOrEqual(?y, 125.0) -> Homogenous (?x);
  • Community_density (?x, ?y), greaterThanOrEqual(?y, 500.0) -> Community_services (?x);
  • Commerce,_Finances_density (?x, ?y), greaterThanOrEqual(?y, 500.0) -> “Commerce,_Finances” (?x);
  • Accommodation,gastronomy_density (?x, ?y), greaterThanOrEqual(?y, 500.0) -> Accommodation_gastronomy (?x);
  • MBI(?x, ?y), greaterThanOrEqual(?y, 0.45) -> Artificial_surfaces_and_constructions (?x).
In the SWRL language, C(?x) signifies that ?x is an individual belonging to class C. Similarly, P(?x, ?y) denotes an OWL property, with ?x and ?y representing variables, OWL instances, or OWL data values. For instance, Rule 1 states that instances with NDVI values greater than or equal to −1.0 and less than 0.0 are tagged as Abiotic/nonvegetated. The specific threshold values were determined through iterative trial and error.
2. Decision rules
  • Single_blocks (?x), Homogenous (?x), Abiotic/nonvegetated (?x), Artificial_surfaces_and_constructions (?x), Community_services (?x) -> Buildings(?x);
  • Single_blocks (?x), Homogenous (?x), Abiotic/nonvegetated (?x), Artificial_surfaces_and_constructions (?x), Commerce_Finances (?x) -> Buildings(?x);
  • Single_blocks (?x), Homogenous (?x), Abiotic/nonvegetated (?x), Artificial_surfaces_and_constructions (?x), Accommodation_gastronomy (?x) -> Buildings(?x);
Using the first rule as an illustration, instance objects characterized by semantics such as Single_blocks, Homogenous, Abiotic/nonvegetated, Artificial_surfaces_and_constructions, and Community_services are classified as buildings. These rules are visually represented in Protégé, as depicted in Figure 19.
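Taken together, the markup and decision rules amount to a two-stage classifier: each patch is first tagged with ontology primitives via feature thresholds, and a decision rule then fires if all of its primitives hold. The sketch below re-implements this logic in plain Python as an illustration, using the thresholds quoted in the rules above; the study itself evaluates these rules with SWRL and the FaCT++ reasoner in Protégé:

```python
# Stage 1: markup rules — feature thresholds assign ontology-primitive tags.
def markup(f):
    """f maps feature names to values for one image patch (names sanitized)."""
    tags = set()
    if -1.0 <= f["NDVI"] < 0.0:
        tags.add("Abiotic/nonvegetated")
    if 0.0 <= f["Density"] < 10.725:
        tags.add("Single_blocks")
    if f["GLCM_Mean"] >= 125.0:
        tags.add("Homogenous")
    if f.get("Community_density", 0.0) >= 500.0:
        tags.add("Community_services")
    if f.get("Commerce_Finances_density", 0.0) >= 500.0:
        tags.add("Commerce_Finances")
    if f.get("Accommodation_gastronomy_density", 0.0) >= 500.0:
        tags.add("Accommodation_gastronomy")
    if f["MBI"] >= 0.45:
        tags.add("Artificial_surfaces_and_constructions")
    return tags

# Stage 2: decision rules — a patch is a building if any rule's primitives all hold.
_BASE = {"Single_blocks", "Homogenous", "Abiotic/nonvegetated",
         "Artificial_surfaces_and_constructions"}
BUILDING_RULES = [_BASE | {"Community_services"},
                  _BASE | {"Commerce_Finances"},
                  _BASE | {"Accommodation_gastronomy"}]

def classify(f):
    tags = markup(f)
    return "Buildings" if any(rule <= tags for rule in BUILDING_RULES) else "Other"
```

Unlike this hard-coded sketch, the SWRL encoding keeps the rules declarative, so they can be inspected, shared, and revised in Protégé without touching any program code.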
The FaCT++ ontology reasoner was employed to deduce the classification of instance objects according to the semantic rules, yielding the semantic information classification results presented in Figure 20. Subsequently, the inferred semantic information was exported in OWL format and converted into shapefile (SHP) format for final display as vector data.

4. Results and Discussion

4.1. Results for Site A

For Site A, the optimal segmentation parameters were determined experimentally as follows: a segmentation scale of 150, a shape weight of 0.8, and a compactness weight of 0.6. The segmentation results are depicted in Figure 21b, while the urban element extraction results achieved with the proposed method are presented in Figure 21c. Additionally, the extraction results obtained using the pixel-based and object-based SVM methods are displayed in Figure 21d,e. The object-based SVM method used the same segmentation result as the proposed method.
Table 3, Table 4 and Table 5 display the accuracy assessment results using confusion matrices for the three classification methods. In these matrices, the rows correspond to the classification predictions, while the columns correspond to the reference data.
Figure 22a–c present comparisons of the producer’s accuracy, user’s accuracy, overall accuracy, and kappa coefficient between the three methods. Notably, the proposed method exhibited higher classification accuracy in most respects than the pixel-based and object-based SVM methods, except for the producer’s accuracy for Building Shadows and the user’s accuracy for Forest and Grass Coverage. These results demonstrate the effectiveness of the proposed approach.
In Figure 22d–f, various colors represent distinct urban element categories, with the arc length of each class denoting the number of pixels. Connecting lines between different classes indicate pixels that have been misclassified into objects of another class.

4.2. Results for Site B

For Site B, after extensive experimentation, the optimal segmentation parameters were determined as follows: segmentation scale of 180, shape weight of 0.8, and compactness weight of 0.5. The segmentation results are depicted in Figure 23b, while the urban element extraction results using the proposed method are displayed in Figure 23c, and the extraction results of the pixel-based and object-based SVM methods can be seen in Figure 23d,e.
Based on the confusion matrix, accuracy evaluations were conducted, and the results are summarized in Table 6, Table 7 and Table 8, displaying the accuracy evaluation outcomes for both the proposed method and the SVM methods.
Comparisons of the producer’s accuracy, user’s accuracy, overall accuracy, and kappa coefficient for the three methods are depicted in Figure 24a–c, respectively. Rows in the confusion matrix represent the classification predictions, while columns represent the reference data. It is evident that the overall accuracy and kappa coefficient of the proposed method surpassed those of the pixel- and object-based SVM methods. However, the producer’s accuracy for Building Shadow and the user’s accuracy for Forest and Grass Coverage were slightly lower than those of the SVM methods. This occurred because Building Shadows and Forest and Grass Coverage often coexist around high-rise buildings, and their spectral characteristics are similar. As a result, there is limited differentiation between these two categories during image segmentation, leading to confusion in object-oriented classification.
In Figure 24d–f, different colors represent various categories of urban elements. The length of each arc corresponds to the number of pixels, and the connections between different classes indicate pixels that have been misclassified into other classes. Notably, in terms of overall accuracy, there was a significant disparity between the three methods in extracting complex urban surface elements. The proposed method achieved an impressive overall accuracy of 93.03%, whereas the pixel-based SVM method lagged behind with only 68.58%. Additionally, the object-based SVM method yielded an accuracy of 70.18%.

4.3. Discussions

By conducting experiments in two areas with varying levels of LU complexity, this paper presents a comparison between the ontology-model-based approach and the SVM method for extracting urban surface elements. The following discussion will summarize the sample requirements, data prerequisites, and results accuracy for both methodologies.
Sample Requirements: The ontology-model-based approach exhibits an advantage over the pixel- and object-based SVM methods concerning sample prerequisites. The ontology model employs formal languages such as OWL and SWRL to elucidate urban surface elements and integrates attributes from diverse data sources to delineate feature types. This knowledge-driven classification approach obviates the necessity for labelled samples, a common requisite in conventional supervised techniques and data-driven methodologies such as deep learning. Consequently, the ontology-model-based approach can attain precise extraction of urban surface elements while mitigating the associated workload.
Data Requirements: The ontology-model-based approach imposes more substantial data demands, as it relies on multiple data sources to articulate ontology primitives. This implies the need to acquire various data types, including high-resolution remote sensing imagery and geographic information data, to comprehensively characterize urban surface element types. In contrast, the SVM method exhibits greater flexibility in terms of data prerequisites and can perform classification using pixel-level remote sensing imagery data. Thus, in scenarios with limited data resources, the SVM method may prove to be a more pragmatic choice.
Results Accuracy: According to the experimental findings presented in this paper, the ontology-based method surpasses the SVM methods in overall accuracy and kappa coefficient. The ontology model method achieved overall accuracies of 97.35% and 93.03% in the two experiments, with kappa coefficients of 0.9607 and 0.9169, respectively. The pixel-based SVM method yielded overall accuracies of 85.09% and 68.58%, with kappa coefficients of 0.7525 and 0.6224, and the object-based SVM method achieved overall accuracies of 90.76% and 70.18%, with kappa coefficients of 0.8894 and 0.6540. These results indicate that the ontology-model-based method excels at extracting urban surface elements, exhibiting superior classification accuracy compared with the SVM methods. A comparison between the results displayed in Figure 22d–f and Figure 24d–f, in which misclassifications appear as pixels erroneously assigned to other object classes, underscores the efficacy of the proposed method, which produces distinct boundaries between urban surface element categories. In contrast, the extraction results of the pixel- and object-based SVM methods reveal a pronounced blending of buildings with roads and structures.
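The overall accuracy and kappa coefficient reported above are standard derivations from a confusion matrix; a generic sketch (not the paper's evaluation code) is:

```python
import numpy as np

def overall_accuracy(cm):
    """Trace over total: fraction of correctly classified samples."""
    cm = np.asarray(cm, dtype=float)
    return np.trace(cm) / cm.sum()

def kappa(cm):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_observed = np.trace(cm) / n
    # Chance agreement from the product of row and column marginals.
    p_expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    return (p_observed - p_expected) / (1 - p_expected)
```

For example, a two-class matrix [[40, 10], [5, 45]] (rows = predictions, columns = reference) gives an overall accuracy of 0.85 and a kappa of 0.70.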
Utilizing ontology primitives to convey semantic information offers several advantages:
  • Enriched Semantic Expression: Breaking down category semantics into smaller elemental components enables more precise and less ambiguous descriptions of category semantics. This enhances the comprehensibility of the classification system for end-users.
  • Scalability: Classification systems expressed using primitives exhibit robust extensibility. They can be adjusted and expanded by adding, modifying, or combining primitives.
  • Semantic Consistency: By sharing or reusing primitives, it is possible to maintain semantic consistency among categories. This means that when the semantics of a category change, adjustments can be made to the relevant primitive without the need for manual modifications to the entire classification system.
  • Knowledge Reuse and Sharing: The use of ontology primitives facilitates knowledge reuse and sharing. Based on shared primitives, different classification systems can be compared, integrated, and cross-referenced. This promotes interoperability and consistency among classification systems across different domains or organizations.
It is worth mentioning that the framework of this study can be adapted to any other LU/LC classification application of interest, since the EAGLE matrix used for the ontology primitives is valid for any LU/LC class, and the method of building the ontology and performing ontology-based inference can easily be transferred to other applications.
The concept of ontology-based LU/LC classification can improve knowledge sharing. The use of a common conceptualization (the EAGLE matrix serving as ontology primitives, forming a shared vocabulary) and the adoption of a standard ontology language provide a mechanism that facilitates collaborative and interdisciplinary research by connecting concepts from different scientific domains (ecology, agriculture, contingency management, etc.). Across different industries and monitoring purposes, classification systems are often defined according to specific needs, and each system is incompatible with the others, resulting in duplicated construction efforts that consume manpower and material resources. Using the method of this study, corresponding ontology models can be constructed for different LU/LC classification systems, and multiple maps following different classification systems can be obtained from a single set of source data.
Also, the explicit specification of the knowledge used to express LU/LC semantic information allows the integration of remote sensing classification products from other information sources. The integration of LU/LC products is a process of combining the advantages or characteristics of several products to generate new products and meet the demand for special needs [46]. Interoperability is a major advantage of ontology-based classification. Similarities between different ontologies can be easily measured through common ontology primitives. Taking the second-level class refinement of GlobeLand30 land cover products as an example, the forest class of GlobeLand30 can be subdivided into broadleaf, coniferous, and mixed forests by integrating the NLCD (National Land Cover Database) and FROM-GLC-Seg (Finer Resolution Observation and Monitoring-Global Land Cover-Segmentation) as source products. The similarity among the evergreen, deciduous, and mixed forests of the NLCD, the broadleaf, needle, and mixed forests of FROM-GLC-Seg source products, and the broadleaf, needle, and mixed forests of GlobeLand30 can be calculated by similarity measurements of ontology primitives such as tree cover (in %), tree height, leaf type (broadleaf or needle), phenology (evergreen or deciduous), etc.
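The similarity measurement over shared primitives can be sketched with a set-overlap measure such as the Jaccard index. The following is an illustrative stand-in, with hypothetical primitive encodings; the actual measure and encodings used for product integration may differ:

```python
def primitive_similarity(a, b):
    """Jaccard similarity between two classes, each described by a set of
    ontology primitives (tree cover, leaf type, phenology, ...)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical primitive sets for forest classes from two different products.
nlcd_evergreen = {"tree_cover>60%", "leaf_type:needle", "phenology:evergreen"}
fromglc_needle = {"tree_cover>60%", "leaf_type:needle"}
score = primitive_similarity(nlcd_evergreen, fromglc_needle)  # 2 shared of 3 total
```

Because every product's classes are decomposed into the same primitive vocabulary, such pairwise scores can be computed across any two classification systems without manual cross-walk tables.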

5. Conclusions

In the long term, the aggregation of expert knowledge in remote sensing into ontologies is envisaged to enhance the interpretability of remote sensing images. This enhancement is achieved by facilitating the discovery of relationships between the image attributes of geographic objects and the real-world characteristics of corresponding geographic entities. This paper introduces a method for object-oriented classification of urban surface elements based on ontology and multisource data. It also provides a comprehensive overview of the workflow, encompassing image segmentation, ontology primitive extraction, ontology modelling, and knowledge reasoning guided by ontology semantic rules.
In practical implementation, Protégé, an open-source software developed in Java by Stanford University, serves as the tool for expressing semantic relations using the OWL and SWRL languages. The validation of this method was conducted in two areas characterized by varying levels of LU complexity. The experimental outcomes underscore that the ontology-based object-oriented classification framework does not necessitate an extensive set of training samples, thereby enhancing the automation and reusability of the classification framework. The results of these two sets of experiments unequivocally affirm the method’s feasibility, bridging the semantic gap between remote sensing imagery and end-user requirements. Furthermore, this knowledge-driven classification approach circumvents the requirement for an extensive number of training samples typical in traditional supervised and prevalent data-driven classification methods such as deep learning. This, in turn, alleviates the associated workload and addresses challenges related to the extraction of urban elements of the same category with varying spectral and geometric attributes.
This study solely endeavors to employ the SWRL language within Protégé to articulate expert rules for ontology reasoning, acknowledging certain limitations in the process, encompassing the following aspects:
  • Limited Expressive Ability: While the SWRL language proves suitable for straightforward rule expression, it may demonstrate limitations in handling complex inference tasks. Notably, SWRL rules encounter challenges with recursive and circular reasoning, as well as intricate conditions and constraints.
  • Reasoning Efficiency: The efficiency of inference using SWRL rules may be suboptimal. In instances where the ontology is substantial or a multitude of rules are employed, the inference process can become excessively time-consuming, resulting in performance degradation.
  • Readability and Maintainability: SWRL rules may exhibit diminished readability and maintainability as the number of rules escalates. The dependencies and interactions among rules may become convoluted, making the comprehension and maintenance of rules a formidable task.
  • Scalability and Interoperability: SWRL rules tend to lack scalability and interoperability. Integrating them with other systems or tools can present compatibility challenges, consequently restricting the applicability and extensibility of ontologies.
  • Lack of Reasoning Explanation: The results derived from SWRL rule-based inference may lack adequate interpretation and interpretability. In specific application scenarios, users may need insight into the rationale and path of reasoning to enhance their understanding of the results and validate the reasoning process.
It is essential to emphasize that effective utilization of this technology in practice may require collaboration between remote sensing specialists and professionals from other industries. The subsequent phase should focus on identifying methods to enhance the formulation of expert rules. In the context of the object-oriented classification method, the segmentation results frequently exert a pivotal influence on the ultimate classification outcomes. Determining the most suitable segmentation parameters necessitates iterative experimentation and continual adjustment, which is often a laborious endeavor. Subsequent research endeavors will seek to ameliorate the limitations associated with image segmentation.

Author Contributions

Methodology, L.Z.; Validation, Y.L.; Writing—original draft, Y.L.; Writing—review & editing, L.Z.; Visualization, Y.L.; Project administration, L.Z. and Y.F.; Funding acquisition, L.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research is funded by the National Key Research and Development Program of China (No. 2021YFE0194700), the Open Research Fund Program of LIESMARS (Grant No. 21L05), and the Categorical Development Quota Project–Master’s Degree Innovation Project (2023) (03081023002).

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Feng, Y.; Li, P.; Tong, X.; Meng, R.; Liu, S.; Xu, X. The key technology of intelligent monitoring and simulation of urban typical elements by remote sensing. J. Surv. Mapp. 2022, 51, 577–586. [Google Scholar]
  2. Taleai, M.; Sharifi, A.; Sliuzas, R.; Mesgari, M. Evaluating the compatibility of multi-functional and intensive urban land uses. Int. J. Appl. Earth Obs. Geoinf. 2007, 9, 375–391. [Google Scholar] [CrossRef]
  3. Arvor, D.; Betbeder, J.; Daher, F.R.G.; Blossier, T.; Le Roux, R.; Corgne, S.; Corpetti, T.; de Freitas Silgueiro, V.; da Silva Junior, C.A. Towards user-adaptive remote sensing: Knowledge-driven automatic classification of Sentinel-2 time series. Remote Sens. Environ. 2021, 264, 112615. [Google Scholar] [CrossRef]
  4. He, G.; Cai, G.; Li, Y.; Xia, T.; Li, Z. Weighted split-flow network auxiliary with hierarchical multitasking for urban land use classification of high-resolution remote sensing images. Int. J. Remote Sens. 2022, 43, 6721–6740. [Google Scholar] [CrossRef]
  5. CH/T 9029; 2019 Content and Index of Fundamental Geographic Conditions Monitoring. Ministry of Natural Resources: Beijing, China, 2020.
  6. Kussul, N.; Lavreniuk, M.; Skakun, S.; Shelestov, A. Deep learning classification of land cover and crop types using remote sensing data. IEEE Geosci. Remote Sens. Lett. 2017, 14, 778–782. [Google Scholar] [CrossRef]
  7. Zhang, D.; Qian, L.; Mao, B.; Huang, C.; Huang, B.; Si, Y. A data-driven design for fault detection of wind turbines using random forests and XGboost. IEEE Access 2018, 6, 21020–21031. [Google Scholar] [CrossRef]
  8. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [Google Scholar] [CrossRef]
  9. Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.; Queiroz Feitosa, R.; van der Meer, F.; van der Werff, H.; van Coillie, F.; et al. Geographic Object-Based Image Analysis—Towards a new paradigm. ISPRS J. Photogramm. Remote Sens. 2014, 87, 180–191. [Google Scholar] [CrossRef]
  10. Myint, S.W.; Gober, P.; Brazel, A.; Grossman-Clarke, S.; Weng, Q. Per-pixel vs. object-based classification of urban land cover extraction using high spatial resolution imagery. Remote Sens. Environ. 2011, 115, 1145–1161. [Google Scholar] [CrossRef]
  11. Li, J.; He, Z.; Ke, D.; Zhu, Q. A geographic ontology fusion method for descriptive logic. J. Wuhan Univ. (Inf. Sci.) 2014, 39, 317–321. [Google Scholar]
  12. Agarwal, P. Ontological considerations in GIScience. Int. J. Geogr. Inf. Sci. 2005, 19, 501–536. [Google Scholar] [CrossRef]
  13. Li, Q. Research on Semantic Transformation Model and Method of Geographic Information Based on Ontology Doctoral; PLA Information Engineering University: Zhengzhou, China, 2011. [Google Scholar]
  14. Andrés, S.; Arvor, D.; Mougenot, I.; Libourel, T.; Durieux, L. Ontology-based classification of remote sensing images using spectral rules. Comput. Geosci. 2017, 102, 158–166. [Google Scholar] [CrossRef]
  15. Arvor, D.; Belgiu, M.; Falomir, Z.; Mougenot, I.; Durieux, L. Ontologies to interpret remote sensing images: Why do we need them? GISci. Remote Sens. 2019, 56, 911–939. [Google Scholar] [CrossRef]
  16. Arvor, D.; Durieux, L.; Andrés, S.; Laporte, M.-A. Advances in Geographic Object-Based Image Analysis with ontologies: A review of main contributions and limitations from a remote sensing perspective. ISPRS J. Photogramm. Remote Sens. 2013, 82, 125–137. [Google Scholar] [CrossRef]
  17. Di Gregorio, A. Land Cover Classification System: Classification Concepts and User Manual: LCCS; Food & Agriculture Org.: Rome, Italy, 2005; Volume 2. [Google Scholar]
  18. Adamo, M.; Tomaselli, V.; Tarantino, C.; Vicario, S.; Veronico, G.; Lucas, R.; Blonda, P. Knowledge-Based Classification of Grassland Ecosystem Based on Multi-Temporal WorldView-2 Data and FAO-LCCS Taxonomy. Remote Sens. 2020, 12, 1447. [Google Scholar] [CrossRef]
  19. Gu, H.; Li, H.; Yan, L.; Liu, Z.; Blaschke, T.; Soergel, U. An Object-Based Semantic Classification Method for High Resolution Remote Sensing Imagery Using Ontology. Remote Sens. 2017, 9, 329. [Google Scholar] [CrossRef]
  20. Yadav, D.; Nagarajan, K.; Pande, H.; Tiwari, P.; Narawade, R. Automatic urban road extraction from high resolution satellite data using object based image analysis: A fuzzy classification approach. J. Remote Sens. GIS 2020, 9, 279. [Google Scholar]
  21. Leinenkugel, P.; Deck, R.; Huth, J.; Ottinger, M.; Mack, B. The potential of open geodata for automated large-scale land use and land cover classification. Remote Sens. 2019, 11, 2249. [Google Scholar] [CrossRef]
  22. Talukdar, S.; Singha, P.; Mahato, S.; Pal, S.; Liou, Y.-A.; Rahman, A. Land-use land-cover classification by machine learning classifiers for satellite observations—A review. Remote Sens. 2020, 12, 1135. [Google Scholar] [CrossRef]
  23. Werner, P.A. Application of the Reed-Solomon Algorithm as a Remote Sensing Data Fusion Tool for Land Use Studies. Algorithms 2020, 13, 188. [Google Scholar] [CrossRef]
  24. Giles, J. Wikipedia rival calls in the experts. Nature 2006, 443, 493. [Google Scholar] [CrossRef] [PubMed]
  25. Heipke, C. Crowdsourcing geospatial data. ISPRS J. Photogramm. Remote Sens. 2010, 65, 550–557. [Google Scholar] [CrossRef]
  26. Zhu, L. Global Land Cover Product Update and Integration; Science Press: Beijing, China, 2020; p. 214. [Google Scholar]
  27. Shan, J.; Qin, K.; Huang, C.; Hu, X.; Yu, Y.; Hu, Q.; Lin, Z.; Chen, J.; Jia, T. Discussion on the processing and analysis methods of multi-source geographic data. J. Wuhan Univ. (Inf. Sci.) 2014, 39, 390–396. [Google Scholar] [CrossRef]
  28. Fan, W.; Wu, C.; Wang, J. Improving Impervious Surface Estimation by Using Remote Sensed Imagery Combined With Open Street Map Points-of-Interest (POI) Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 4265–4274. [Google Scholar] [CrossRef]
  29. Huang, H.; Li, Q.; Zhang, Y. Urban Residential Land Suitability Analysis Combining Remote Sensing and Social Sensing Data: A Case Study in Beijing, China. Sustainability 2019, 11, 2255. [Google Scholar] [CrossRef]
  30. Arnold, S.; Kosztra, B.; Banko, G.; Smith, G.; Hazeu, G.; Bock, M.; Valcarcel Sanz, N. The EAGLE concept—A vision of a future European Land Monitoring Framework. In Proceedings of the 33rd EARSeL Symposium towards Horizon, Matera, Italy, 3–6 June 2013; pp. 3–6. [Google Scholar]
  31. Ye, D. Semantic Primitive Extraction Method for XBRL Domain Ontology. Master’s Thesis, Jinan University, Guangzhou, China, 2020. [Google Scholar]
  32. Ustuner, M.; Sanli, F.B.; Dixon, B. Application of Support Vector Machines for Landuse Classification Using High-Resolution RapidEye Images: A Sensitivity Analysis. Eur. J. Remote Sens. 2017, 48, 403–422. [Google Scholar] [CrossRef]
  33. Eiter, T.; Ianni, G.; Polleres, A.; Schindlauer, R.; Tompits, H. Reasoning with rules and ontologies. In Reasoning Web: Second International Summer School 2006, Lisbon, Portugal, September 4–8, 2006, Tutorial Lectures 2; Springer: Berlin/Heidelberg, Germany, 2006; pp. 93–127. [Google Scholar]
  34. OWL Web Ontology Language Reference. Available online: http://www.w3.org/TR/owl-ref/ (accessed on 28 June 2023).
  35. Patel, M.; Trikha, M. Interpreting Inference Engine for Semantic Web. Int. J. Adv. Res. Comput. Eng. Technol. (IJARCET) 2013, 2, 676. [Google Scholar]
  36. Sivakumar, R.; Arivoli, P. Ontology visualization PROTÉGÉ tools—A review. Int. J. Adv. Inf. Technol. (IJAIT) 2011, 1, 4. [Google Scholar]
  37. Nussbaum, S.; Menz, G.; Nussbaum, S.; Menz, G. eCognition image analysis software. In Object-Based Image Analysis and Treaty Verification: New Approaches in Remote Sensing–Applied to Nuclear Facilities in Iran; Springer: Berlin/Heidelberg, Germany, 2008; pp. 29–39. [Google Scholar]
  38. Huang, X.; Zhang, L. A multidirectional and multiscale morphological index for automatic building extraction from multispectral GeoEye-1 imagery. Photogramm. Eng. Remote Sens. 2011, 77, 721–732. [Google Scholar] [CrossRef]
  39. Kaganami, H.G.; Beiji, Z. Region-Based Segmentation versus Edge Detection. In Proceedings of the 2009 Fifth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, Kyoto, Japan, 12–14 September 2009; pp. 1217–1221. [Google Scholar]
  40. Muñoz, X.; Freixenet, J.; Cufı, X.; Martı, J. Strategies for image segmentation combining region and boundary information. Pattern Recognit. Lett. 2003, 24, 375–392. [Google Scholar] [CrossRef]
  41. Zhu, G.; Zhang, S.; Zeng, Q.; Wang, C. Boundary-based image segmentation using binary level set method. Opt. Eng. 2007, 46, 050501–050503. [Google Scholar] [CrossRef]
  42. Gao, D. Research on Construction and Extraction of Urban Surface Elements. Master’s Thesis, Beijing University of Civil Engineering and Architecture, Beijing, China, 2023. [Google Scholar]
  43. Yu, Y.; Li, J.; Zhu, C.; Plaza, A. Urban impervious surface estimation from remote sensing and social data. Photogramm. Eng. Remote Sens. 2018, 84, 771–780. [Google Scholar] [CrossRef]
  44. Shao, C. Study on Geographic Ontology Construction of Land Cover Classification in Remote Sensing Images. Master’s Thesis, Jiangsu Normal University, Xuzhou, China, 2017. [Google Scholar]
  45. Zhu, L.; Jin, G.; Gao, D. Integrating Land-Cover Products Based on Ontologies and Local Accuracy. Information 2021, 12, 236. [Google Scholar] [CrossRef]
  46. Jin, G. Research on Ontology-Based Land Cover Integration Method. Master’s Thesis, Beijing University of Civil Engineering and Architecture, Beijing, China, 2021. [Google Scholar]
Figure 1. Six remote sensing images from the Beijing-2 satellite. (a) September 2021; (b) November 2021; (c) February 2022; (d) April 2022; (e) June 2022; (f) July 2022.
Figure 2. Crowdsourced data. (a) OSM road network data; (b) part of the POI data.
Figure 3. Experimental Area A.
Figure 4. Experimental Area B.
Figure 5. Distribution of the sample set. (a) Site A; (b) Site B.
Figure 6. The overall flowchart of the experiment.
Figure 7. Surface element type tree map for urban fundamental geographic conditions monitoring classification system.
Figure 8. Image object features.
Figure 9. NDVI statistics chart.
Figure 10. Part of the multisource data feature display. (a) Phenological index; (b) road network density; (c) catering density; (d) accommodation density.
Figure 11. Geographic ontology instance file. The left side is the instance number, and the right side is the feature information of each instance.
Figure 12. Tree diagram of image object features.
Figure 13. Three parts in the EAGLE matrix: (a) land cover components (LCC); (b) land use attributes (LUA); (c) further characteristics (CH).
Figure 14. EAGLE matrix LCC, LUA, and CH three-part ontology primitive hierarchy: (a) land cover components (LCC); (b) land use attributes (LUA); (c) further characteristics (CH).
Figure 15. Ontology system architecture. The black arrows represent the transformation of data information into the ontology, and the dotted arrows represent rule-constraint relationships.
Figure 16. Building-related modules in the ontology primitives. (a) LCC; (b) LUA; (c) CH.
Figure 17. Relationship between ontology primitives and multisource data features of buildings.
Figure 18. Urban surface element ontology model.
Figure 19. SWRL semantic rules for building.
Figure 20. Classification result of semantic information of instance object No. 1006.
Figure 21. (a) Experimental Area A raw image; (b) segmentation result; (c) ontology-driven extraction results of urban surface elements; (d) extraction results of urban surface elements by pixel-based SVM method; (e) results of an object-based SVM method for extracting urban surface elements.
Figure 22. (a) Comparison of producer's accuracy of the classification results between the proposed method and the SVM methods; (b) comparison of user's accuracy; (c) comparison of overall accuracy and kappa coefficient; (d) confusion matrix chord diagram for the proposed method's urban surface element extraction results; (e) confusion matrix chord diagram for the pixel-based SVM results; (f) confusion matrix chord diagram for the object-based SVM results.
Figure 23. (a) Experimental Area B raw image; (b) segmentation result; (c) ontology-driven extraction results of urban surface elements; (d) extraction results of urban surface elements by pixel-based SVM method; (e) results of an object-based SVM method for extracting urban surface elements.
Figure 24. (a) Comparison of producer's accuracy of the classification results between the proposed method and the SVM methods; (b) comparison of user's accuracy; (c) comparison of overall accuracy and kappa coefficient; (d) confusion matrix chord diagram for the proposed method's urban surface element extraction results; (e) confusion matrix chord diagram for the pixel-based SVM results; (f) confusion matrix chord diagram for the object-based SVM results.
Table 1. POI Level 1 category list.
Serial Number | Level 1 Category
1 | Auto Service
2 | Auto Dealers
3 | Auto Repair
4 | Motorcycle Service
5 | Food and Beverages
6 | Shopping
7 | Daily Life Service
8 | Sports and Recreation
9 | Medical Service
10 | Accommodation Service
11 | Tourist Attraction
12 | Commercial House
13 | Governmental Organization and Social Group
14 | Science/Culture and Education Service
15 | Transportation Service
16 | Finance and Insurance Service
17 | Enterprises
18 | Road Furniture
19 | Place Name and Address
20 | Public Facility
21 | Incidents and Events
22 | Indoor Facilities
23 | Pass Facilities
Table 2. Urban surface element definitions.
Urban Surface Elements | Definition
Forest and grass coverage | A small sheet or strip area covered by artificially planted green trees (excluding trees planted on rooftops) in alleys, scattered plots, street gardens, and road isolation green belts in densely populated areas such as towns and cities.
Planting land | Land cultivated for food crops as well as perennial woody and herbaceous crops, regularly cultivated and managed, with crop coverage generally greater than 50%.
Buildings | Housing construction areas and independent housing construction. A housing construction area is enclosed by the outlines of housing buildings with similar heights, structures, arrangements, and building density. Independent housing construction includes large-scale single buildings in the urban area, scattered residential areas, and small-scale scattered housing buildings.
Structures | An engineering entity or ancillary building facility built for a specific purpose, in which people generally do not directly carry out production and living activities.
Railways and roads | Ground surfaces covered by railway tracks and trackless roads.
Artificial stacking sites | Surfaces covered long-term by waste generated by human activities or exposed through artificial excavation, such as during large-scale civil engineering projects in progress.
Bare land | Various natural exposed surfaces with long-term vegetation coverage below 10%, where no grass or trees have grown for multiple years. Regions where grass coverage reaches 10% to 20% in the monitoring year are also classified under this category. Surfaces formed by artificial excavation, compaction, or hardening are excluded.
Water bodies | Surfaces covered by liquid and solid water.
Table 3. Experimental Area A confusion matrix for ontology inference classification results.
Class | Forest and Grass Coverage | Railways and Roads | Buildings | Building Shadows | Total | UA
Forest and grass coverage | 2525 | 57 | 43 | 166 | 2791 | 90.47%
Railways and roads | 0 | 2018 | 5 | 0 | 2023 | 99.75%
Buildings | 0 | 0 | 6227 | 18 | 6245 | 99.71%
Building shadows | 9 | 0 | 47 | 1881 | 1937 | 97.11%
Total | 2534 | 2075 | 6322 | 2065 | 12,996 |
PA | 99.64% | 97.25% | 98.50% | 91.09% | |
OA = 97.35%    Kappa = 0.9607
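The per-class user's and producer's accuracies, the overall accuracy, and the kappa coefficient in Table 3 follow directly from the confusion matrix counts (UA = diagonal/row total, PA = diagonal/column total, kappa from the row and column marginals). A minimal Python sketch (function and variable names are ours, not from the paper) reproduces the Table 3 figures:

```python
# Accuracy metrics for a confusion matrix whose rows are classified labels
# and whose columns are reference labels, illustrated with the Experimental
# Area A ontology-inference counts from Table 3.

def accuracy_metrics(matrix):
    """Return user's accuracy, producer's accuracy, overall accuracy, kappa."""
    n = sum(sum(row) for row in matrix)
    row_totals = [sum(row) for row in matrix]
    col_totals = [sum(col) for col in zip(*matrix)]
    diag = [matrix[i][i] for i in range(len(matrix))]

    ua = [d / r for d, r in zip(diag, row_totals)]  # correct / classified
    pa = [d / c for d, c in zip(diag, col_totals)]  # correct / reference
    oa = sum(diag) / n                              # overall accuracy
    # Cohen's kappa: chance agreement estimated from the marginals
    pe = sum(r * c for r, c in zip(row_totals, col_totals)) / (n * n)
    kappa = (oa - pe) / (1 - pe)
    return ua, pa, oa, kappa

# Class order: forest/grass, railways/roads, buildings, building shadows
table3 = [
    [2525, 57, 43, 166],
    [0, 2018, 5, 0],
    [0, 0, 6227, 18],
    [9, 0, 47, 1881],
]
ua, pa, oa, kappa = accuracy_metrics(table3)
print(f"OA = {oa:.2%}, kappa = {kappa:.4f}")  # OA = 97.35%, kappa = 0.9607
```

The same computation applied to Tables 4–8 reproduces the remaining OA and kappa values.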
Table 4. Experimental Area A confusion matrix for pixel-based SVM classification results.
Class | Forest and Grass Coverage | Railways and Roads | Buildings | Building Shadows | Total | UA
Forest and grass coverage | 2174 | 1 | 64 | 5 | 2244 | 96.88%
Railways and roads | 18 | 1567 | 1339 | 0 | 2924 | 53.59%
Buildings | 4 | 290 | 7545 | 0 | 7839 | 96.25%
Building shadows | 59 | 13 | 397 | 1216 | 1685 | 72.17%
Total | 2255 | 1871 | 9345 | 1221 | 14,692 |
PA | 96.41% | 83.75% | 80.74% | 99.59% | |
OA = 85.09%    Kappa = 0.7525
Table 5. Experimental Area A confusion matrix for object-based SVM classification results.
Class | Forest and Grass Coverage | Railways and Roads | Buildings | Building Shadows | Total | UA
Forest and grass coverage | 2174 | 49 | 64 | 103 | 2390 | 90.96%
Railways and roads | 19 | 1570 | 563 | 0 | 2152 | 72.96%
Buildings | 4 | 20 | 7545 | 32 | 7601 | 99.26%
Building shadows | 9 | 13 | 397 | 1216 | 1635 | 74.37%
Total | 2206 | 1652 | 8569 | 1351 | 13,778 |
PA | 98.55% | 95.04% | 88.05% | 90.01% | |
OA = 90.76%    Kappa = 0.8894
Table 6. Experimental Area B confusion matrix for ontology inference classification results.
Class | Building Shadows | Buildings | Forest and Grass Coverage | Railways and Roads | Structures | Planting Land | Total | UA
Building shadows | 1106 | 0 | 67 | 0 | 63 | 0 | 1236 | 89.48%
Buildings | 45 | 1233 | 0 | 20 | 74 | 0 | 1372 | 89.87%
Forest and grass coverage | 170 | 6 | 1166 | 0 | 1 | 0 | 1343 | 86.82%
Railways and roads | 0 | 0 | 16 | 1420 | 0 | 77 | 1513 | 93.85%
Structures | 0 | 20 | 0 | 0 | 1815 | 0 | 1835 | 98.91%
Planting land | 0 | 0 | 0 | 29 | 0 | 1113 | 1142 | 97.46%
Total | 1321 | 1259 | 1249 | 1469 | 1953 | 1190 | 8441 |
PA | 83.72% | 97.93% | 93.35% | 96.66% | 92.93% | 97.46% | |
OA = 93.03%    Kappa = 0.9160
Table 7. Experimental Area B confusion matrix for pixel-based SVM classification results.
Class | Building Shadows | Buildings | Forest and Grass Coverage | Railways and Roads | Structures | Planting Land | Total | UA
Building shadows | 1353 | 5 | 59 | 14 | 96 | 0 | 1527 | 88.61%
Buildings | 7 | 931 | 0 | 809 | 881 | 1 | 2629 | 35.41%
Forest and grass coverage | 29 | 0 | 1213 | 9 | 0 | 104 | 1355 | 89.52%
Railways and roads | 0 | 124 | 3 | 457 | 56 | 7 | 647 | 70.63%
Structures | 1 | 175 | 69 | 187 | 1222 | 113 | 1767 | 69.16%
Planting land | 0 | 5 | 10 | 16 | 28 | 953 | 1012 | 94.17%
Total | 1390 | 1240 | 1354 | 1492 | 2283 | 1178 | 8937 |
PA | 97.34% | 75.08% | 89.59% | 30.63% | 53.53% | 80.90% | |
OA = 68.58%    Kappa = 0.6224
Table 8. Experimental Area B confusion matrix for object-based SVM classification results.
Class | Building Shadows | Buildings | Forest and Grass Coverage | Railways and Roads | Structures | Planting Land | Total | UA
Building shadows | 785 | 0 | 53 | 0 | 0 | 0 | 838 | 93.68%
Buildings | 13 | 637 | 3 | 324 | 228 | 0 | 1205 | 52.86%
Forest and grass coverage | 119 | 6 | 804 | 0 | 9 | 78 | 1016 | 79.13%
Railways and roads | 28 | 282 | 57 | 556 | 216 | 0 | 1139 | 48.81%
Structures | 0 | 8 | 13 | 12 | 725 | 0 | 758 | 95.65%
Planting land | 0 | 40 | 12 | 51 | 253 | 741 | 1097 | 67.55%
Total | 945 | 973 | 942 | 943 | 1431 | 819 | 6053 |
PA | 83.07% | 65.47% | 85.35% | 58.96% | 50.66% | 90.48% | |
OA = 70.18%    Kappa = 0.6540
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Zhu, L.; Lu, Y.; Fan, Y. Classification of Urban Surface Elements by Combining Multisource Data and Ontology. Remote Sens. 2024, 16, 4. https://doi.org/10.3390/rs16010004

