Article

An Object-Based Semantic Classification Method for High Resolution Remote Sensing Imagery Using Ontology

1 Institute of Photogrammetry and Remote Sensing, Chinese Academy of Surveying and Mapping, 28 Lianhuachi Road, Beijing 100830, China
2 School of Geodesy and Geomatics, Wuhan University, Luojiashan, Wuhan 430072, China
3 Department of Geoinformatics—Z_GIS, University of Salzburg, Schillerstrasse 30, Salzburg 5020, Austria
4 Institute for Photogrammetry, University of Stuttgart, Geschwister-Scholl-Str. 24D, 70174 Stuttgart, Germany
* Author to whom correspondence should be addressed.
Remote Sens. 2017, 9(4), 329; https://doi.org/10.3390/rs9040329
Submission received: 30 December 2016 / Revised: 17 March 2017 / Accepted: 24 March 2017 / Published: 30 March 2017

Abstract
Geographic Object-Based Image Analysis (GEOBIA) techniques have become increasingly popular in remote sensing, and GEOBIA has been claimed to represent a paradigm shift in remote sensing interpretation. Still, GEOBIA—similar to other emerging paradigms—lacks formal expressions and objective modelling structures, and in particular semantic classification methods using ontologies. This study puts forward an object-based semantic classification method for high resolution satellite imagery using an ontology, aiming to fully exploit the advantages of ontologies for GEOBIA. A three-step workflow is introduced: ontology modelling, initial classification based on a data-driven machine learning method, and semantic classification based on knowledge-driven semantic rules. The initial classification comprises segmentation, feature selection, sample collection and a data-driven machine learning classification. The image objects are then re-classified based on the ontological model, whereby the semantic relations are expressed in the formal languages OWL and SWRL. The results show that the method with ontology—as compared to the decision tree classification without using the ontology—yielded minor statistical improvements in terms of accuracy for this particular image. However, this framework enhances existing GEOBIA methodologies: ontologies express and organize the whole structure of GEOBIA and allow establishing relations, particularly spatially explicit relations between objects as well as multi-scale/hierarchical relations.


1. Introduction

Geographic object-based image analysis (GEOBIA) is devoted to developing automated methods that partition remote sensing (RS) imagery into meaningful image objects and assess their characteristics through spatial, spectral, textural and temporal features, thus generating new geographic information in a GIS-ready format [1,2]. This represents great progress over traditional per-pixel image analysis: GEOBIA offers a high degree of information utilization, strong robustness to noise, a high degree of data integration, high classification precision, and less manual editing [3,4,5,6]. Over the last decade, advances in GEOBIA research have led to specific algorithms and software packages; peer-reviewed journal papers; six highly successful biennial international GEOBIA conferences; and a growing number of books and university theses [7,8,9]. A GEOBIA wiki is used to promote international exchange and development [9]. GEOBIA is a highly active topic in RS and GIS [1,8] and has been widely applied in global environmental monitoring, agricultural development, natural resource management, and defence and security [10,11,12,13,14]. It has been recognized as a new paradigm in RS and GIS [15].
Ontology originated in Western philosophy and was later introduced into GIS [16]. Domain knowledge is expressed in the form of machine-understandable rulesets and is utilised for semantic modelling, semantic interoperability, knowledge sharing and information retrieval services in the field of GIS [16,17,18]. Recently, researchers have begun to attach importance to the application of ontology in the field of remote sensing, especially in remote sensing image interpretation. Arvor et al. (2013) described how expert knowledge formalized in ontologies can improve the automation of image processing, and analysed the potential applications of ontologies in GEOBIA, which can provide theoretical support for remote sensing data discovery, multi-source data integration, image interpretation, workflow management and knowledge sharing [19]. Jesús et al. (2013) built a framework for ocean satellite image classification based on ontologies, describing how to build ontology models for low- and high-level features, classifiers and rule-based expert systems [20]. Dejrriri et al. (2012) combined GEOBIA and data mining techniques to build an ontology of informal (non-planned) urban settlements [21]. Kohli et al. (2012) provided a comprehensive ontological framework that includes all potentially relevant indicators for image-based slum identification [22]. Forestier et al. (2012) built a coastal zone ontology to extract coastal zones using background and semantic knowledge [23]. Kyzirakos et al. (2014) provided wildfire monitoring services by combining satellite images and geospatial data with ontologies [24]. Belgiu et al. (2014a) presented an ontology-based classification method for building types detected from airborne laser scanning data and obtained effective recognition results [25]. Belgiu et al. (2014b) provided a formal expression tool for object-based image analysis technology through ontologies [26]. Cui (2013) presented a GEOBIA method based on geo-ontology and relative elevation [27]. Luo (2016) developed an ontology-based framework to extract land cover information from high resolution ZY-3 remote sensing images at the regional level [28]. Durand et al. (2007) proposed a recognition method based on an ontology developed by experts from the particular domain [29]. Bannour et al. (2011) presented an overview and analysis of the use of semantic hierarchies and ontologies to provide deeper image understanding and better image annotation, in order to furnish retrieval facilities to users [30]. Andres et al. (2012) demonstrated that expert knowledge explicated via ontologies can improve the automation of satellite image exploitation [31]. All these studies focus either on a single thematic aspect based on expert knowledge or on a specific geographic entity. However, existing studies do not provide comprehensive and transferable frameworks for objective modelling in GEOBIA, and none of the existing methods allows for a general ontology-driven semantic classification. Therefore, this study develops an object-based semantic classification methodology for high resolution remote sensing imagery using ontology that enables a common understanding of the GEOBIA framework structure for human operators and for software agents. This methodology shall enable reuse and transferability of a general GEOBIA ontology while making GEOBIA assumptions explicit and analysing the GEOBIA knowledge corpus.

2. Methodology

The workflow of the object-based semantic classification is organized as follows. In the ontology-model building step, land-cover models, image object features and classifiers are formalized using the procedure described in Section 2.2 (Step 1, Figure 1); the result is a semantic network model. Subsequently, the remote sensing image is classified using a machine learning method and the initial classification result is imported into the semantic network model (Step 2, Figure 1), as described in Section 2.3. In the last step, the initial classification result is reclassified and validated against the semantic rules to obtain the final classification result (Step 3, Figure 1), as described in Section 2.4. The semantic network model is the interface between the initial classification and the semantic classification.

2.1. Study Area and Data

The test site is located in Ruili City, in Yunnan Province, China. We utilised panchromatic (Pan) data from the Chinese ZY-3 satellite with 2.1 m resolution and multispectral (MS) ZY-3 data with 5.8 m resolution (blue, green, red and near-infrared bands), which were acquired in April 2013. The ZY-3 MS imagery was obtained and geometrically corrected to the Universal Transverse Mercator (UTM) projection and then re-sampled to 2.1 m to match the Pan image pixel size; it was then fused using the Pansharp fusion method within the PCI Geomatica software. Figure 2 shows the resulting fused image based on MS bands 4 (near-infrared), 3 (red) and 2 (green).
The part of the city selected for the study is characterised by eight classes: Field, Woodland, Grassland, Orchard, Bare land, Road, Building and Water. The eight land-covers are defined based on the Geographical Conditions Census project in China [32] as follows. Field is land cultivated for planting crops, which includes mature fields, newly developed fields and grass-crop rotation land; it is mainly used for planting crops, with scattered fruit trees, mulberry trees or others. Woodland is covered by natural forest, secondary forest and plantation, and includes trees, bushes, bamboo, etc. Grassland is covered by herbaceous plants and includes shrub grassland, pastures, sparse grassland, etc. Orchard is artificially cultivated land for perennial woody and herbaceous crops, mainly used for collecting fruits, leaves, roots, stems, etc.; it also includes various trees, bushes, tropical crops, fruit nurseries, etc. Bare land is any naturally exposed surface (forest coverage less than 10%). Road covers rail and trackless road surfaces, including railways, highways, urban roads and rural roads. Building includes contiguous building areas and individual buildings in urban and rural areas. Water includes all types of surface water.

2.2. Ontology Model for GEOBIA

2.2.1. Ontology Overview

As stated in the introduction, ontology plays a central part in this methodology. It is used to reduce the semantic gap that exists between the image object domain and the human-language-centred class formulations of human operators [3,14,19]. The ontology serves as the linchpin combining image classification and knowledge formalization. Ontology models are generated for land-cover, image object features, and classifiers. An ontology is a formal explicit description of concepts and includes: classes (sometimes called concepts); properties of each class/concept describing various features and attributes (slots, sometimes called roles or properties); and restrictions on slots (facets, sometimes called role restrictions). An ontology together with a set of individual instances of classes constitutes a knowledge base [33]. There are many ontology languages, such as the Web Ontology Language (OWL), Extensible Markup Language (XML), Description Logic (DL), the Resource Description Framework (RDF), the Semantic Web Rule Language (SWRL), etc. Ontology building methods include enterprise modelling, skeleton, knowledge engineering, prototype evolution, and so on. There are several ontology building tools, such as OntoEdit, Ontolingua, OntoSaurus, WebOnto, OilEd, Protégé, etc., and several ontology reasoners (Jess, Racer, FaCT++, Pellet, Jena, etc.).
In this study, the information on land-cover, object features and machine learning classifiers is expressed in OWL, while the semantic rules are expressed in SWRL. OWL is a W3C-recommended ontology language standard based on description logics; the relationships between concepts and various semantics are expressed in XML/RDF syntax. OWL can describe four kinds of data: class, property, axiom and individual [34]. SWRL is a rule description language that combines OWL (OWL-DL and OWL-Lite) with RuleML; knowledge is expressed in a high-level abstract syntax for Horn-like rules [35]. The knowledge engineering method and the Protégé software developed by Stanford University were chosen to build the ontology model for GEOBIA.
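To make these constructs concrete, the following minimal sketch uses the owlready2 Python library (our illustrative tooling choice; the study itself used Protégé) to declare a class, a data property with domain and range facets, and an individual, and to serialize everything as OWL:

```python
from owlready2 import *  # pip install owlready2

onto = get_ontology("http://example.org/geobia.owl")  # hypothetical IRI

with onto:
    class LandCover(Thing):      # an OWL class (concept)
        pass

    class Water(LandCover):      # subclass axiom: Water is-a LandCover
        pass

    class Mean(DataProperty, FunctionalProperty):  # a slot
        domain = [Thing]         # facet: which classes carry the slot
        range = [float]          # facet: allowed value type

# An individual (instance) of a class with a filled slot value.
region = Water("region208")
region.Mean = 0.21

# Serialize classes, properties, axioms and individuals as RDF/XML.
onto.save(file="geobia.owl", format="rdfxml")
```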
Our knowledge engineering method consists of eight steps:
Step 1
Determine the domain and scope of the ontology.
The domain of the ontology is the representation of the whole GEOBIA framework, which includes the information on various features, land-covers and classifiers. We used the GEOBIA ontology to combine land-cover and features for image classification.
Step 2
Consider reusing existing ontologies.
Reusing existing ontologies may be a requirement if our system needs to interact with other applications that are already committed to particular ontologies or controlled vocabularies [33]. Libraries of reusable ontologies exist on the Web and in the literature; for example, the ISO Metadata ontologies [36], the OGC GML ontology [37], or SWEET [38]. For this study, we assumed that no relevant ontologies existed a priori and developed the ontology from scratch.
Step 3
Enumerate important terms in the ontology.
We aimed to achieve a comprehensive list of terms. For example, important terms include different types of land-cover, such as PrimarilyVegetatedArea, PrimarilyNonVegetatedArea, and so on.
Step 4
Define the classes and the class hierarchy.
There are three main approaches to developing a class hierarchy: top-down, bottom-up, and a combination of the two. Which approach to take depends strongly on the domain [33]. The class hierarchy includes land-covers, image object features, classifiers, and so on.
Step 5
Define the properties of classes.
The properties become slots attached to classes. A slot should be attached at the most general class that can have that property. For example, image object features should be attached to the respective land-cover.
Step 6
Define the facets of the slots.
Slots can have different facets describing the value type, the allowed values, the number of values (cardinality), and other features of the values the slot can take. For example, the domain of the various features is "Region" and the range is "double".
Step 7
Create instances.
Defining an individual instance of a class requires: (1) choosing a class; (2) creating an individual instance of that class; and (3) filling in the slot values [33]. For example, all the segmentation objects are instances, which have their properties.
Step 8
Validation.
The FaCT++ reasoner is used to infer the relationships among all the individuals; it tests the correctness and validity of the ontology.
Following this eight-step process, we designed the ontology models for GEOBIA, namely for land-cover, image object features, and classifiers. From these, the semantic network model is formed.

2.2.2. Ontology Model of the Land-Cover

The Land Cover Classification System (LCCS) includes various land cover classification schemes [39]. In this study, we designed an upper level of classes based on the official Chinese Geographical Conditions Census Project [32] and the upper level of LCCS.
The ontology model of the eight land-covers is created as follows.
(1)
A list of important terms, including Field, Woodland, Grassland, Orchard, Bare land, Road, Building and Water, was created.
(2)
Classes and class hierarchies were defined. Land cover was modelled with the top-down method and divided into PrimarilyVegetatedArea and PrimarilyNonVegetatedArea. PrimarilyVegetatedArea was divided into ArtificialCropVegetatedArea and NaturalGrowthVegetatedArea; PrimarilyNonVegetatedArea into ArtificialNonVegetatedArea and NaturalNonVegetatedArea. ArtificialCropVegetatedArea was divided into Field and Orchard; NaturalGrowthVegetatedArea into Woodland and Grassland; ArtificialNonVegetatedArea into Building and Road; and NaturalNonVegetatedArea into Water and Bare land. The classes are shown in Figure 3. More detailed classes can be defined according to the actual situation.
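As an aside, this top-down hierarchy maps one-to-one onto OWL subclass axioms. A minimal sketch in the owlready2 Python library (our illustrative tooling choice; the study itself modelled the classes in Protégé), with class names following Figure 3:

```python
from owlready2 import *  # pip install owlready2

onto = get_ontology("http://example.org/landcover.owl")  # hypothetical IRI

with onto:
    # Upper level, following Figure 3.
    class LandCover(Thing): pass
    class PrimarilyVegetatedArea(LandCover): pass
    class PrimarilyNonVegetatedArea(LandCover): pass
    # Second level.
    class ArtificialCropVegetatedArea(PrimarilyVegetatedArea): pass
    class NaturalGrowthVegetatedArea(PrimarilyVegetatedArea): pass
    class ArtificialNonVegetatedArea(PrimarilyNonVegetatedArea): pass
    class NaturalNonVegetatedArea(PrimarilyNonVegetatedArea): pass
    # Leaf classes: the eight land-covers.
    class Field(ArtificialCropVegetatedArea): pass
    class Orchard(ArtificialCropVegetatedArea): pass
    class Woodland(NaturalGrowthVegetatedArea): pass
    class Grassland(NaturalGrowthVegetatedArea): pass
    class Building(ArtificialNonVegetatedArea): pass
    class Road(ArtificialNonVegetatedArea): pass
    class Water(NaturalNonVegetatedArea): pass
    class BareLand(NaturalNonVegetatedArea): pass

onto.save(file="landcover.owl", format="rdfxml")
```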

2.2.3. Ontology Model of the Image Object Features

Feature selection is an important step of GEOBIA, as there are thousands of potential features describing objects. The major categories include: layer features (marked as LayerProperty), geometry features (GeometryProperty), position features (PositionProperty), texture features (TextureProperty), class-related features (ClassProperty), and thematic indices (ThematicProperty). The ontology model makes use of the feature concepts of the eCognition software to develop a general upper-level ontology [40]. The image object features are defined through the top-down method and divided into these six categories, each of which can be subdivided further. For instance, TextureProperty is divided into ToParentShapeTexture and Haralick; Haralick (Haralick's GLCM texture parameters) is divided into GLCMHom, GLCMContrast and GLCMEntropy, as illustrated in Figure 4.

2.2.4. Ontology Model of the Classifiers

Ontology is employed to express two typical algorithms, namely, decision tree and semantic rules.
(1)  Ontology model of the decision tree classifier
The ontology model of the decision tree classifier is based on the C4.5 algorithm. The tree is specified by a set of nodes and leaves, where nodes represent Boolean conditions on features and leaves represent land-cover classes. It is defined as follows.
(a)
A list of important terms, including DecisionTree, Node and Leaf, was created.
(b)
The slots were defined, which include relations such as GreaterThan or LessThanOrEqual.
(c)
The list of instances of the decision tree, such as Node1, Node2, etc., was created. The nodes are associated with features and are also linked to two successor nodes with object properties called GreaterThan and LessThanOrEqual.
The ontology model of the decision tree classifier is shown in Figure 5.
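To make the node/leaf encoding tangible, the following Python sketch mirrors the structure of Figure 5 with plain data classes: each node carries a Boolean condition on one feature and is linked to two successors through the GreaterThan and LessThanOrEqual relations, and classification walks the tree to a leaf. The tree fragment shown is a simplified illustration loosely based on the thresholds of Table 1, not the full tree of Figure 10:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Leaf:
    land_cover: str                            # e.g. "Water"

@dataclass
class Node:
    feature: str                               # feature tested at this node
    threshold: float                           # Boolean condition: feature <= threshold?
    less_than_or_equal: Union["Node", Leaf]    # object property LessThanOrEqual
    greater_than: Union["Node", Leaf]          # object property GreaterThan

def classify(features: dict, node: Union[Node, Leaf]) -> str:
    """Walk from the root to a leaf for one image object."""
    while isinstance(node, Node):
        if features[node.feature] <= node.threshold:
            node = node.less_than_or_equal
        else:
            node = node.greater_than
    return node.land_cover

# Simplified fragment: NDVI separates vegetation, MeanB1 brightness below that.
tree = Node("NDVI", 0.6,
            less_than_or_equal=Node("MeanB1", 0.38,
                                    less_than_or_equal=Leaf("Water"),
                                    greater_than=Leaf("Building")),
            greater_than=Leaf("Field"))

print(classify({"NDVI": 0.30, "MeanB1": 0.20}, tree))  # -> Water
```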
(2)  Ontology model of the semantic rules
The process of modelling semantic rules includes building mark rules and decision rules. Mark rules are based on semantic concepts; their derivation leads from low-level features to semantic concepts. Decision rules are obtained from mark rules and a priori knowledge; their derivation leads from these higher-level concepts to the identification of land-covers. The ontology models of the mark rules and decision rules are defined as follows:
(a)  Ontology model of the mark rules
The objects are modelled from different semantic aspects. According to common-sense knowledge, an object is characterized as: Strip or Planar in Morphology; Regular or Irregular in Shape; Smooth or Rough in Texture; Light or Dark in Brightness; High, Medium or Low in Height; and Adjacent, Disjoint or Containing in Position relationships. The ontological model of the mark rules is created as follows.
a)
A list of important terms, including Morphology, Shape, Texture, Brightness, Height, Position, etc., was created.
b)
Class hierarchies were defined. Morphology was divided into Strip and Planar; Shape was divided into Regular and Irregular; Texture was divided into Smooth and Rough; Brightness was divided into Light and Dark; Height was divided into High, Medium and Low; and Position was divided into Adjacent, Disjoint and Containing.
The ontology model of the mark rules is shown in Figure 6.
The mark rules are expressed in SWRL, and the semantic relationships between the object features and the classes are built. For example, the Brightness type is expressed in SWRL as follows:
  • Mean (?x, ?y), greaterThanOrEqual (?y, 0.38) -> Light (?x);
  • Mean (?x, ?y), lessThan (?y, 0.38) -> Dark (?x).
This means that an object whose Mean feature is ≥0.38 is marked Light, whereas one whose Mean is <0.38 is marked Dark. In this notation, C(?x) states that x is an individual of class C, P(?x, ?y) represents a property, and x and y are variables.
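Assuming owlready2's SWRL support and its bundled Pellet reasoner (an illustrative setup; the study authored the rules in Protégé), the two Brightness rules can be attached to an ontology and fired as follows:

```python
from owlready2 import *  # requires Java for the bundled Pellet reasoner

onto = get_ontology("http://example.org/marks.owl")  # hypothetical IRI

with onto:
    class Region(Thing): pass
    class Light(Region): pass
    class Dark(Region): pass
    class Mean(DataProperty, FunctionalProperty):
        domain = [Region]
        range = [float]

    # The two Brightness mark rules, in the SWRL syntax quoted above.
    Imp().set_as_rule("Mean(?x, ?y), greaterThanOrEqual(?y, 0.38) -> Light(?x)")
    Imp().set_as_rule("Mean(?x, ?y), lessThan(?y, 0.38) -> Dark(?x)")

r = Region("region105")
r.Mean = 0.41

# Run the reasoner so the rules fire; region105 should be inferred as Light.
sync_reasoner_pellet(infer_property_values=True,
                     infer_data_property_values=True)
print(Light in r.is_a)  # expected: True once the first rule has fired
```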
(b)  Ontology model of the decision rules
The decision rules for the eight land-cover types were acquired from the literature, a priori knowledge and project technical regulations. In general, the decision rules formalized using OWL are as follows:
  • Field ≡ Regular ∩ Planar ∩ Smooth ∩ Dark ∩ Low ∩ adjacentToRoad.
  • Woodland ≡ Irregular ∩ Planar ∩ Rough ∩ Dark ∩ High ∩ adjacentToField.
  • Orchard ≡ Regular ∩ Planar ∩ Smooth ∩ Dark ∩ Medium ∩ adjacentToField.
  • Grassland ≡ Irregular ∩ Planar ∩ Smooth ∩ Dark ∩ Low ∩ adjacentToBuilding.
  • Building ≡ Regular ∩ Planar ∩ Rough ∩ Light ∩ High ∩ adjacentToRoad.
  • Road ≡ Regular ∩ Strip ∩ Smooth ∩ Light ∩ Low ∩ adjacentToBuilding.
  • Bare land ≡ Irregular ∩ Planar ∩ Rough ∩ Light ∩ Low.
  • Water ≡ Irregular ∩ Planar ∩ Smooth ∩ Dark ∩ Low.
The decision rules are expressed in SWRL, and the semantic relationships between the mark rules and the classes are built. For example, the Field is expressed in SWRL as follows:
Regular (?x), Planar (?x), Smooth (?x), Dark (?x), Low (?x), adjacentToRoad (?x) -> Field (?x).
This means an image object with Regular, Planar, Smooth, Dark, Low and adjacentToRoad features is a Field.
Other classifiers, such as Support Vector Machines (SVM) or Random Forest, could likewise be expressed in OWL or SWRL. Later on, the ontology model of the semantic rules can be extended and supplemented to realize the semantic understanding of various land-covers.

2.2.5. Semantic Network Model

The entire semantic network model is formed through the construction of the land-covers, image object features and classifiers using ontology. It is shown in Figure 7.
The semantic network model is a type of directed network graph that expresses knowledge through concepts and their semantic relations. It has the following advantages. First, the concepts, features and relationships of geographical entities are expressed explicitly, which reduces the semantic gap between low-level features and high-level semantics. Second, parent objects, child objects and neighbourhood objects can be traced through their relationships. Third, semantic relations are easy to express in a computer-operable formal language [41].

2.3. Initial Classification Based on Data-Driven Machine Learning

The process includes segmentation, feature selection, sample collection and initial classification. The FeatureStation software developed by the Chinese Academy of Surveying and Mapping was chosen as the image segmentation and classification tool, since it has from its onset centred on segmentation and decision tree classification. The Protégé plugin developed by Jesús [20] was chosen as the format transformation and semantic classification tool.

2.3.1. Image Segmentation

The objective of image segmentation is to keep the heterogeneity within objects as small as possible while preserving the integrity of the objects. The fused image is segmented using the G-FNEA method, which is based on graph theory and the fractal net evolution approach (FNEA), within the FeatureStation software. The method achieves high efficiency while maintaining good feature boundaries [42].
There are three parameters in the G-FNEA method: T (scale parameter), wcolour (weight factor for colour heterogeneity), and wcompt (weight factor for compactness heterogeneity). A high T value yields fewer, larger objects than a low T value. The colour heterogeneity weight wcolour describes the influence of spectral information, which indicates the degree of similarity between two adjacent objects: the higher the wcolour value, the greater the influence of colour on the segmentation process. The wcompt value reflects the degree of clustering of the pixels within a region: the lower the value, the more compact the pixels within the region. It should be noted that the scale parameter is considered the most important factor for classification, as it controls the relative size of the image objects and has a direct effect on the overall classification accuracy.
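The role of the three parameters can be illustrated with the classical FNEA merging criterion of Baatz and Schäpe, on which G-FNEA builds. The sketch below is our reconstruction under that assumption, not FeatureStation code: it computes the heterogeneity increase f of a candidate merge and accepts the merge only while f stays below the (squared) scale parameter:

```python
import math

def merge_cost(seg1, seg2, merged, w_colour=0.8, w_compt=0.3):
    """Heterogeneity increase f when fusing seg1 and seg2 into merged.
    Each segment is a dict with: n (pixel count), std (per-band standard
    deviations), l (perimeter), b (perimeter of the bounding box)."""
    def delta(vm, v1, v2):
        # generic "merged minus size-weighted parts" form of the criterion
        return merged["n"] * vm - (seg1["n"] * v1 + seg2["n"] * v2)

    # Colour heterogeneity: change of per-band standard deviations.
    h_colour = sum(delta(sm, s1, s2)
                   for s1, s2, sm in zip(seg1["std"], seg2["std"], merged["std"]))

    # Shape heterogeneity: compactness (l / sqrt(n)) and smoothness (l / b).
    h_compact = delta(merged["l"] / math.sqrt(merged["n"]),
                      seg1["l"] / math.sqrt(seg1["n"]),
                      seg2["l"] / math.sqrt(seg2["n"]))
    h_smooth = delta(merged["l"] / merged["b"],
                     seg1["l"] / seg1["b"],
                     seg2["l"] / seg2["b"])
    h_shape = w_compt * h_compact + (1 - w_compt) * h_smooth

    return w_colour * h_colour + (1 - w_colour) * h_shape

def merge_allowed(cost, T=100):
    # In classical FNEA the cost is compared with the squared scale
    # parameter; whether G-FNEA squares T is our assumption here.
    return cost < T ** 2
```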
There are several methods for the automatic determination of appropriate segmentation parameters, such as Estimation of Scale Parameters (ESP) [43], optimised image segmentation [44], the Segmentation Parameter Tuner (SPT) [45], and the plateau objective function [46]. In this study, the segmentation parameters were selected by the iterative trial-and-error approach often utilized in object-based classification [6,10]. The best segmentation results were achieved with the following parameters: T = 100, wcolour = 0.8, and wcompt = 0.3.

2.3.2. Feature Selection

The selection of appropriate object features can be based on a priori knowledge or can make use of feature-selection algorithms (such as Random Forest [47]). In this study, we used a priori knowledge to guide the initial selection of object features, keeping to the following four rules: (1) the most important features of an object are its spectral characteristics, which are independent of test area and segmentation scale; (2) band ratios are closely related to the separation of vegetation and non-vegetation; (3) the effect of shape features, which are used to reduce the classification error rate, is small; they only become effective once the segmentation scale reaches a level at which the objects are consistent with real surface features; and (4) the usefulness of auxiliary data (DEM, OpenStreetMap, etc.) depends on the scale; the smaller the scale, the more important the auxiliary data.
Based on the above four rules, twenty-nine features (e.g., ratio, mean, Normalized Difference Water Index, Normalized Difference Vegetation Index, homogeneity, and brightness) were selected and stored in Shapefile format, and then converted to OWL format. The features of an object in OWL are shown in Figure 8.
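As an example of how such features are computed per object, the following sketch (a hypothetical helper, assuming a fused four-band array and a label image produced by the segmentation step) derives per-object NDVI and NDWI means with NumPy:

```python
import numpy as np

def object_features(img, labels):
    """img: (4, rows, cols) float array with bands blue, green, red, nir.
    labels: (rows, cols) integer array of segment ids from the segmentation.
    Returns {segment_id: {"NDVI": ..., "NDWI": ...}}."""
    blue, green, red, nir = img.astype(np.float64)
    eps = 1e-9                                  # guard against division by zero
    ndvi = (nir - red) / (nir + red + eps)      # vegetation index
    ndwi = (green - nir) / (green + nir + eps)  # water index (McFeeters form)
    feats = {}
    for seg_id in np.unique(labels):
        mask = labels == seg_id
        feats[int(seg_id)] = {"NDVI": float(ndvi[mask].mean()),
                              "NDWI": float(ndwi[mask].mean())}
    return feats
```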

2.3.3. Initial Classification

The C4.5 decision tree method is used for the construction of a decision rule, which includes a generation stage and a pruning stage (Figure 9).
Stage 1: The generation of a decision tree
(1)
The training samples are ordered in the form "class, features of sample one, features of sample two, etc." The training and testing samples were selected by visual image interpretation, with their selection controlled by requirements for precision and representativeness, and by their statistical properties.
(2)
The training samples are divided. The information gain and the information gain ratio of all features of the training samples are calculated. The feature with the highest information gain ratio, among those whose information gain is not lower than the mean information gain of all features, is taken as the test attribute; it becomes a node and produces branches. The training samples are divided recursively in this way.
(3)
The decision tree is generated. If all training samples of the current node belong to one class, the node is marked as a leaf with that class. The procedure recurses in the same way and finally forms a decision tree, terminating when all records of a subset have the same value for the main feature, or when no feature remains to divide on.
Stage 2: The pruning of decision tree.
The expected error probability of each internal (non-leaf) node is calculated and the weights of all nodes are assessed. A subtree is kept if the error rate caused by cutting it off would be high; otherwise, the subtree is cut off. Finally, the decision tree with the least expected error rate is taken as the final decision tree, as shown in Figure 10. The decision tree is expressed in OWL as illustrated in Figure 11.
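For reference, the split criterion used in the generation stage can be stated compactly. The sketch below is a schematic reconstruction of the C4.5 criterion, not FeatureStation code; it computes the information gain and gain ratio of one candidate split on a numeric feature:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_and_ratio(samples, labels, feature, threshold):
    """samples: list of feature dicts; labels: class per sample.
    Splits on feature <= threshold; returns (information gain, gain ratio)."""
    left = [y for x, y in zip(samples, labels) if x[feature] <= threshold]
    right = [y for x, y in zip(samples, labels) if x[feature] > threshold]
    n, nl, nr = len(labels), len(left), len(right)
    if nl == 0 or nr == 0:
        return 0.0, 0.0
    gain = entropy(labels) - (nl / n) * entropy(left) - (nr / n) * entropy(right)
    split_info = entropy(["L"] * nl + ["R"] * nr)  # intrinsic information of the split
    return gain, gain / split_info

# C4.5 selects, among the candidate features whose information gain is not
# lower than the mean gain, the one with the highest gain ratio (step (2)).
```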
The above decision rule is imported into the semantic network model, all objects are classified using the decision rule, and the initial classification result is expressed in OWL file format.

2.4. Semantic Classification Based on Knowledge-Driven Semantic Rules

On the basis of the initial classification, each object is reclassified and validated by semantic rules in SWRL to obtain the semantic information.

2.4.1. Semantic Rules Building

The mark rules and decision rules for the eight classes of the test site are expressed in SWRL according to the ontology models defined above.
(1)  Mark rules are shown as follows:
  • RectFit (?x, ?y), greaterThanOrEqual (?y, 0.5) -> Regular (?x);
  • RectFit (?x, ?y), lessThan (?y, 0.5) -> Irregular (?x);
  • LengthWidthRatio (?x, ?y), greaterThanOrEqual (?y, 1) -> Strip (?x);
  • LengthWidthRatio (?x, ?y), lessThan (?y, 1) -> Planar (?x);
  • Homo (?x, ?y), greaterThanOrEqual (?y, 0.05) -> Smooth (?x);
  • Homo (?x, ?y), lessThan (?y, 0.05) -> Rough (?x);
  • Mean (?x, ?y), greaterThanOrEqual (?y, 0.38) -> Light (?x);
  • Mean (?x, ?y), lessThan (?y, 0.38) -> Dark (?x);
  • MeanDEM (?x, ?y), greaterThanOrEqual (?y, 0.6) -> High (?x);
  • MeanDEM (?x, ?y), lessThan (?y, 0.2) -> Low (?x); and
  • MeanDEM (?x, ?y), greaterThanOrEqual (?y, 0.2), lessThan (?y, 0.6) -> Medium (?x).
This means that a RectFit value ≥0.5 denotes a Regular shape, whereas a value <0.5 denotes an Irregular shape; the other rules are read analogously. The thresholds were obtained by an iterative trial-and-error approach.
(2)  Decision rules are shown by the following:
  • Regular (?x), Planar (?x), Smooth (?x), Dark (?x), Low (?x), adjacentToRoad (?x) -> Field (?x);
  • Irregular (?x), Planar (?x), Rough (?x), Dark (?x), High (?x), adjacentToField (?x)-> Woodland (?x);
  • Regular (?x), Planar (?x), Smooth (?x), Dark (?x), Medium (?x), adjacentToField (?x) -> Orchard (?x);
  • Irregular (?x), Planar (?x), Smooth (?x), Dark (?x), Low (?x), adjacentToBuilding (?x) -> Grassland (?x);
  • Regular (?x), Planar (?x), Rough (?x), Light (?x), High (?x), adjacentToRoad (?x)-> Building (?x);
  • Regular (?x), Strip (?x), Smooth (?x), Light (?x), Low (?x), adjacentToBuilding (?x) -> Road (?x);
  • Irregular (?x), Planar (?x), Rough (?x), Light (?x), Low (?x) -> Bare land (?x); and
  • Irregular (?x), Planar (?x), Smooth (?x), Dark (?x), Low (?x) -> Water (?x).
For example, an object that is Regular, Planar, Smooth, Dark, Low and adjacentToRoad is a Field. As before, C(?x) states that x is an individual of class C, P(?x, ?y) represents a property, and x and y are variables.
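Outside an OWL reasoner, the same two-stage logic can be prototyped directly. The following sketch is our plain-Python paraphrase of the SWRL rules above, with adjacency relations assumed to be supplied externally (e.g., from the segmentation neighbourhood graph):

```python
def mark(obj):
    """Derive semantic marks from low-level features (thresholds of Section 2.4.1)."""
    marks = set()
    marks.add("Regular" if obj["RectFit"] >= 0.5 else "Irregular")
    marks.add("Strip" if obj["LengthWidthRatio"] >= 1 else "Planar")
    marks.add("Smooth" if obj["Homo"] >= 0.05 else "Rough")
    marks.add("Light" if obj["Mean"] >= 0.38 else "Dark")
    dem = obj["MeanDEM"]
    marks.add("High" if dem >= 0.6 else "Low" if dem < 0.2 else "Medium")
    marks.update(obj.get("adjacency", ()))        # e.g. {"adjacentToRoad"}
    return marks

DECISION_RULES = [  # antecedent marks -> class, in the order listed above
    ({"Regular", "Planar", "Smooth", "Dark", "Low", "adjacentToRoad"}, "Field"),
    ({"Irregular", "Planar", "Rough", "Dark", "High", "adjacentToField"}, "Woodland"),
    ({"Regular", "Planar", "Smooth", "Dark", "Medium", "adjacentToField"}, "Orchard"),
    ({"Irregular", "Planar", "Smooth", "Dark", "Low", "adjacentToBuilding"}, "Grassland"),
    ({"Regular", "Planar", "Rough", "Light", "High", "adjacentToRoad"}, "Building"),
    ({"Regular", "Strip", "Smooth", "Light", "Low", "adjacentToBuilding"}, "Road"),
    ({"Irregular", "Planar", "Rough", "Light", "Low"}, "Bare land"),
    ({"Irregular", "Planar", "Smooth", "Dark", "Low"}, "Water"),
]

def semantic_class(obj):
    marks = mark(obj)
    for antecedent, land_cover in DECISION_RULES:
        if antecedent <= marks:                   # all antecedent marks present
            return land_cover
    return None                                   # no rule fires: keep initial class
```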

2.4.2. Semantic Classification

The initial classification result is reclassified and validated against the semantic rules to obtain the final classification result. The exported OWL objects preserve the semantics of the features the image objects exhibit.

3. Results and Discussion

3.1. Results

The description, picture, decision tree rules and decision rules of eight land-covers are shown in Table 1.
The initial classification result is expressed in OWL file format as defined by the above steps of segmentation, feature selection, sample collection and initial classification. Figure 12 shows the classification result of "region208", whereby "'region208' is Water". The classification results of all other objects are expressed in the same way as for "region208".
On the basis of the initial classification, each object is reclassified and validated by the semantic rules in SWRL to obtain its semantic information. Figure 13 shows a semantic classification result where "the semantic information of 'region105' is Regular, Planar, Smooth, Dark, Low, Field"; the classification of all other objects is expressed in the same way as for "region105". Thus, the classified objects, already exported in OWL format, help in retrieving the object features (Figure 14).
The semantic classification information in OWL format is transformed to Shapefile format, as shown in Figure 15a. For comparison, a conventional object-based decision tree classification without ontology, following the usual "image segmentation, feature extraction, image classification" sequence, was investigated. Its segmentation parameters and features are identical to those of our ontology-based method. The classification results are shown in Figure 15b.
A comprehensive accuracy assessment was carried out. A sample-based error matrix was created and used for the accuracy assessment; in GEOBIA, a sample refers to an object. The error matrices of the two methods for the test area are shown in Figure 16. The user's accuracy, producer's accuracy, overall accuracy and Kappa coefficient are shown in Table 2.
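For completeness, the overall accuracy and kappa coefficient reported in Table 2 follow the standard error-matrix formulas. A brief sketch, with rows as reference and columns as classified objects, matching Figure 16:

```python
import numpy as np

def accuracy_and_kappa(cm):
    """cm: square confusion matrix; rows = reference, columns = classified."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    overall = np.trace(cm) / total                            # overall accuracy
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2
    kappa = (overall - expected) / (1 - expected)             # Cohen's kappa
    producers = np.diag(cm) / cm.sum(axis=1)                  # producer's accuracy
    users = np.diag(cm) / cm.sum(axis=0)                      # user's accuracy
    return overall, kappa, producers, users
```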
The error matrices (Figure 16, Table 2) reveal that the two methods produce similar results, with only minor differences. The overall accuracy of our method with the ontology model is 85.95% with a kappa coefficient of 0.84; the overall accuracy of the decision tree method without the ontology model is 84.32% with a kappa coefficient of 0.82 (see Table 2). Our method yields small improvements because the semantic rules used in the semantic classification step validate the initial classification, so some obvious classification errors are corrected there; the result still depends on the initial segmentation.
The producer’s accuracy of our method for all land-cover types except for Woodland and Grassland are higher than those based on the decision tree method without ontology, as shown in Table 1. The user’s accuracy of our method for Field, Grassland and Bare land are higher than the decision tree method as shown in Table 1. Given that the method employs semantic rules to restrict, it reduces misclassification to a certain extent. However, obvious misclassification instances between Building and Road exist because the two classes are spectrally too similar.

3.2. Discussion

The classification results reveal some small improvements in accuracy when including the ontology. More importantly, the ontology model helps in understanding the complex structure of the overall GEOBIA framework, both for human operators and for the software agents used. Even more importantly, the ontology enables the reuse of the general GEOBIA framework, makes GEOBIA assumptions explicit, and enables the operator to analyse the GEOBIA knowledge in great detail. The ontology model of image object features uses the GEOBIA structure as upper-level knowledge and can be further extended. The land-cover ontology model is built based on the official Chinese Geographical Conditions Census Project and the upper level of the LCCS. So far, ontology models have been built only for the decision tree classifier and the semantic rules; both can be extended to realize the semantic understanding of various land cover categories. The process of image interpretation in the geographic domain—as opposed to, e.g., industrial imaging—is an expert process, and many of the parameters need to be tuned depending on the problem domain [19]. We strongly believe that particularly the high degree of variance of natural phenomena in landscapes, and potential regional idiosyncrasies, can be managed well when formal ontologies serve as a central part of the overall GEOBIA framework.
The ontology models for land-cover, image object features and classifiers were formalised using the OWL and SWRL formal languages. In fact, the entire semantic network model was built in such a formalized way around the central element of the ontology model for object classification. The knowledge for building the model was acquired both from the literature [24] and by using data mining techniques [48,49]. Although the experiments were carried out for only one high resolution ZY-3 satellite image scene and therefore cannot verify transferability as such, they clearly validated the feasibility of the methodology. The ontology part—as compared to the decision tree classification without using the ontology—yielded only minor statistical improvements in terms of accuracy for this particular image. Still, it validated the ontology as the central part of a GEOBIA classification framework and a future standard methodology for handling knowledge for one or several domains. The authors believe that this methodology can help reduce the semantic gap that exists in image classification [15,19]. Image objects were classified by combining a decision tree and semantic rules, which not only provides the classification of the geographical objects but also captures the semantic information of the geographical entities, and realizes the reuse of domain knowledge and the semantic network model. Our results are in line with Arvor et al. [19], who comprehensively portray GEOBIA ontologies for data discovery, automatic image interpretation, data interoperability, workflow management and data publication. In accordance with Blaschke et al. [15], we also want to emphasize the potential to bridge remote sensing and image processing methods with Geoinformatics/GIScience methods.
We reemphasize that the methodology is knowledge-driven and is intended to be shared among experts for further enhancement. One obstacle may be that the FeatureStation software framework, while very powerful and successfully used in operational studies across the whole of China, is currently available only in Chinese and is therefore little used internationally. The particular challenges of our method include the determination of an appropriate segmentation scale, the determination of the importance of the various features, the choice of the classifier, and the determination of parameters. Still, all four aspects are typical of any GEOBIA approach, and the ontology provides great stability for parameter determination and iterative testing. This study focuses on the implementation process of the method, and overall accuracy is used to evaluate its feasibility. A systematic study of the influence of the various elements of the GEOBIA framework on this method, such as the segmentation method, the scale or the features used, is only briefly described here; for the problems and consequences of different segmentation algorithms, scale parameters and feature selection methods, we refer to the two recent publications of Ma et al. [50] and Li et al. [51].

4. Conclusions

This study has put forward an object-based semantic classification method for high resolution remote sensing imagery using ontology, aiming to fully exploit the advantages of ontology for GEOBIA. A detailed workflow has been introduced that includes three major steps: ontology modelling, initial classification based on a decision tree machine learning method, and semantic classification based on semantic rules. The structure of the GEOBIA framework was organized and expressed explicitly through the ontology. Operationally, the semantic relations were expressed in the OWL and SWRL formal languages, with which the software can operate directly. Image objects were classified based on the ontology model using a decision tree and semantic rules. It could be demonstrated that this ontology-based GEOBIA framework can serve as an objective model and can enhance remote sensing image classification, particularly by enhancing the degree of automation and the reusability and transferability of the classification framework.
Nevertheless, building a comprehensive ontology is difficult and time-consuming; there is no single correct ontology for any domain, and the potential applications of the ontology and the designer's understanding and view of the domain will undoubtedly affect ontology design choices [33]. Domain experts should be involved in the construction of ontologies. We may also diagnose that, at least in the fields of remote sensing and image classification, ontologies are still rare and have not made their way into standard workflows. As discussed in the introduction, several existing studies have already demonstrated the potential of ontologies for particular domains or specific regional instances. Our study goes one step further: we created a fully functional, comprehensive, reusable GEOBIA framework. Further in-depth studies may be required to: (a) improve and refine the GEOBIA ontology model; (b) build ontology models for new classifiers such as deep learning, random forests and random ferns; and (c) investigate the automation and "geo-intelligence" potential [52] of the ontology-driven object-based semantic classification method.
Particularly through the highly successful GEOBIA conferences, such as the 2016 conference organized in Enschede, the Netherlands, worldwide collaborations in ontology research in the field of remote sensing—reaching out to GIS and Geoinformatics—have begun. The authors advocate utilizing existing ontologies and the Semantic Web, an extension of the World Wide Web that enables people to share content; it may serve as a web of data and has inspired and engaged scientists to create and share innovative semantic technologies and applications. Therefore, we encourage researchers and experts to develop shared ontologies, to allow for more domain-specific ontologies, and to enhance the automation of GEOBIA.

Acknowledgments

We thank the anonymous reviewers for their constructive suggestions. This research was funded by: (1) the National Natural Science Foundation of China (Project Nos. 41371406, 41471299, and 41671440); (2) the Central Public-interest Scientific Institution Basal Research Fund (Project Nos. 7771508, 777161101, and 777161102); and (3) DAAD Strategic Partnerships and Thematic Networks 2015–2018, Program Line B: Thematic Networks, Stuttgart University (Project No. 57173947). The Protégé software developed by Stanford University was used to build the ontology model for GEOBIA. The FeatureStation software developed by the Chinese Academy of Surveying and Mapping was used as the image segmentation and classification tool. The Protégé plugin developed by M. A. J. Jesús was used as the format transformation and semantic classification tool. Norman Kerle's and Mariana Belgiu's comments on earlier drafts greatly improved this paper.

Author Contributions

This research was mainly performed and prepared by Haiyan Gu and Haitao Li. Haiyan Gu and Haitao Li contributed with ideas, conceived and designed the study. Haiyan Gu wrote the paper. Li Yan and Zhengjun Liu supervised the study and their comments were considered throughout the paper. Thomas Blaschke and Uwe Soergel analysed the results of the experiment, reviewed and edited the manuscript.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Hay, G.J.; Castilla, G. Geographic Object-Based Image Analysis (GEOBIA): A New Name for a New Discipline. In Object-Based Image Analysis: Spatial Concepts for Knowledge-Driven Remote Sensing Applications; Blaschke, T., Lang, S., Hay, G.J., Eds.; Springer: New York, NY, USA, 2008; pp. 75–89. [Google Scholar]
  2. Benz, U.C.; Hofmann, P.; Willhauck, G.; Lingenfelder, I.; Heynen, M. Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information. ISPRS J. Photogramm. Remote Sens. 2004, 58, 239–258. [Google Scholar] [CrossRef]
  3. Hay, G.J.; Castilla, G. Object-based image analysis: Strengths, weaknesses, opportunities and threats (SWOT). In Proceedings of the 1st International Conference on Object-Based Image Analysis, Salzburg, Austria, 4–5 July 2006. [Google Scholar]
  4. Robertson, L.D.; King, D.J. Comparison of pixel- and object-based classification in land-cover change mapping. Int. J. Remote Sens. 2011, 32, 1505–1529. [Google Scholar] [CrossRef]
  5. Duro, D.C.; Franklin, S.E.; Dubé, M.G. A comparison of pixel-based and object-based image analysis with selected machine learning algorithms for the classification of agricultural landscapes using SPOT-5 HRG imagery. Remote Sens. Environ. 2012, 118, 259–272. [Google Scholar] [CrossRef]
  6. Myint, S.W.; Gober, P.; Brazel, A.; Grossman-Clarke, S.; Weng, Q. Per-pixel vs. object-based classification of urban land-cover extraction using high spatial resolution imagery. Remote Sens. Environ. 2011, 115, 1145–1161. [Google Scholar] [CrossRef]
  7. Addink, E.A.; Van Coillie, F.M.B.; De Jong, S.M. Introduction to the GEOBIA 2010 special issue: from pixels to geographic objects in remote sensing image analysis. Int. J. Appl. Earth Obs. 2012, 15, 1–6. [Google Scholar] [CrossRef]
  8. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [Google Scholar] [CrossRef]
  9. GEO-Object-Based Image Analysis. Available online: http://wiki.ucalgary.ca/page/GEOBIA (accessed on 5 May 2016).
  10. Pu, R.; Landry, S.; Yu, Q. Object-based urban detailed land-cover classification with high spatial resolution IKONOS imagery. Int. J. Remote Sens. 2011, 32, 3285–3308. [Google Scholar] [CrossRef]
  11. Hussain, M.; Chen, D.M.; Cheng, A.; Wei, H.; Stanley, D. Change detection from remotely sensed images: From pixel-based to object-based approaches. ISPRS J. Photogramm. Remote Sens. 2013, 80, 91–106. [Google Scholar] [CrossRef]
  12. Blaschke, T.; Hay, G.J.; Weng, Q.; Resch, B. Collective sensing: integrating geospatial technologies to understand urban systems—An overview. Remote Sens. 2011, 3, 1743–1776. [Google Scholar] [CrossRef]
  13. Nussbaum, S.; Menz, G. Object-Based Image Analysis and Treaty Verification: New Approaches in Remote Sensing-Applied to Nuclear Facilities in Iran; Springer: New York, NY, USA, 2008. [Google Scholar]
  14. Lang, S. Object-based image analysis for remote sensing applications: Modeling reality-dealing with complexity. In Object-Based Image Analysis: Spatial Concepts for Knowledge-Driven Remote Sensing Applications; Blaschke, T., Lang, S., Hay, G.J., Eds.; Springer: New York, NY, USA, 2008; pp. 3–27. [Google Scholar]
  15. Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.; Feitosa, R.; van der Meer, F.; van der Werff, H.; Van Coillie, F.; et al. Geographic Object-based Image Analysis: A new paradigm in Remote Sensing and Geographic Information Science. ISPRS J. Photogramm. Remote Sens. 2014, 87, 180–191. [Google Scholar] [CrossRef] [PubMed]
  16. Agarwal, P. Ontological considerations in GIScience. Int. J. Geogr. Inf. Sci. 2005, 19, 501–536. [Google Scholar] [CrossRef]
  17. Li, J.L.; He, Z.Y.; Ke, D.L.; Zhu, Q.L.A. Approach for Insight into Geo-ontology Merging based on Description Logics. Geomat. Inf. Sci. Wuhan Univ. 2014, 39, 317–321. [Google Scholar]
  18. Li, Q.C. Research of Model and Methods Based on Ontology for Geo-Information Semantic Transformation. Ph.D. Thesis, PLA Information Engineering University, ZhengZhou, China, 2011. [Google Scholar]
  19. Arvor, D.; Durieux, L.; Andres, S.; Laporte, M.A. Advances in Geographic Object-Based Image Analysis with ontologies: A review of main contributions and limitations from a remote sensing perspective. ISPRS J. Photogramm. Remote Sens. 2013, 82, 125–137. [Google Scholar] [CrossRef]
  20. Jesús, M.A.J.; Luis, D.; José, A.P.F. A Framework for Ocean Satellite Image Classification Based on Ontologies. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 1048–1063. [Google Scholar]
  21. Dejrriri, K.; Malki, M. Object-based image analysis and data mining for building ontology of informal urban settlements. In Proceedings of the SPIE Image and Signal Processing for Remote Sensing XVIII, Edinburgh, UK, 24 September 2012. [Google Scholar]
  22. Kohli, D.; Sliuzas, R.; Kerle, N.; Stein, A. An ontology of slums for image-based classification. Comput. Environ. Urban Syst. 2012, 36, 154–163. [Google Scholar] [CrossRef]
  23. Forestier, G.; Puissant, A.; Wemmert, C.; Gançarski, P. Knowledge-based region labeling for remote sensing image interpretation. Comput. Environ. Urban Syst. 2012, 36, 470–480. [Google Scholar] [CrossRef]
  24. Kyzirakos, K.; Karpathiotakis, M.; Garbis, G.; Nikolaou, C.; Bereta, K.; Papoutsis, I.; Herekakis, T.; Michail, D.; Koubarakis, M.; Kontoes, C. Wildfire monitoring using satellite images, ontologies and linked geospatial data. Web Semant. 2014, 24, 18–26. [Google Scholar] [CrossRef]
  25. Belgiu, M.; Tomljenovic, I.; Lampoltshammer, T.J.; Blaschke, T.; Höfle, B. Ontology-based classification of building types detected from airborne laser scanning data. Remote Sens. 2014, 6, 1347–1366. [Google Scholar] [CrossRef]
  26. Belgiu, M.; Höfle, B.; Hofmann, P. Coupling formalized knowledge bases with object-based image Analysis. Remote Sens. Lett. 2014, 5, 530–538. [Google Scholar] [CrossRef]
  27. Cui, W.; Tang, S.; Li, R.; Yao, Z. A method of Identifying Remote Sensing Objects by using Geo-ontology and Relative Elevation. J. Wuhan Univ. Technol. 2013, 37, 695–698. [Google Scholar]
  28. Luo, H.; Li, L.; Zhu, H.; Kuai, X.; Zhang, Z.; Liu, Y. Land Cover Extraction from High Resolution ZY-3 Satellite Imagery Using Ontology-Based Method. ISPRS Int. J. Geo-Inf. 2016, 5, 31. [Google Scholar] [CrossRef]
  29. Durand, N.; Derivaux, S.; Forestier, G.; Wemmert, C.; Gancarski, P.; Boussaid, O.; Puissant, A. Ontology-Based Object Recognition for Remote Sensing Image Interpretation. In Proceedings of the IEEE International Conference on TOOLS with Artificial Intelligence, Patras, Greece, 29–31 October 2007; pp. 472–479. [Google Scholar]
  30. Bannour, H.; Hudelot, C. Towards ontologies for image interpretation and annotation. In Proceedings of the International Workshop on Content-Based Multimedia Indexing, Madrid, Spain, 13–15 June 2011; pp. 211–216. [Google Scholar]
  31. Andres, S.; Arvor, D.; Pierkot, C. Towards an ontological approach for classifying remote sensing images. In Proceedings of the Eighth International Conference on Signal Image Technology and Internet Based Systems, Naples, Italy, 25–29 November 2012; pp. 825–832. [Google Scholar]
  32. Geographical Conditions Census Contents and Index (GDPJ 01—2013). Available online: http://www.jschj.gov.cn/upfile/dlgqdoc/jswd/GDPJ012013.pdf (accessed on 5 February 2016).
  33. Ontology Development 101: A Guide to Creating Your First Ontology. Available online: http://protege.stanford.edu/publications/ontology_development/ontology101.pdf (accessed on 5 February 2016).
  34. OWL Web Ontology Language Reference. Available online: http://www.w3.org/TR/owl-ref/ (accessed on 5 February 2016).
  35. SWRL: A Semantic Web Rule Language Combining owl and RuleML. Available online: http://www.w3.org/Submission/SWRL/ (accessed on 5 February 2016).
  36. ISO-Metadata.owl: Several ISO Geographic Information Ontologies developed with the Protege-OWL editor. Contributed by Akm Saiful Islam, Bora Beran, Luis Bermudez, Stephane Fellah & Michael Piasecki. Available online: http://protegewiki.stanford.edu/wiki/Protege_Ontology_Library (accessed on 5 February 2016).
  37. OGC: Ontology for Geography Markup Language (GML3.0) of Open GIS Consortium (OGC). Contributed by Contributors: Zafer Defne, Akm Saiful Islam and Michael Piasecki. Available online: http://protegewiki.stanford.edu/wiki/Protege_Ontology_Library (accessed on 5 February 2016).
  38. SWEET Ontologies: A Semantic Web for Earth and Environmental Terminology. Source: Jet Propulsion Laboratory. Available online: http://protegewiki.stanford.edu/wiki/Protege_Ontology_Library (accessed on 5 February 2016).
  39. Di Gregorio, A. Land Cover Classification System: Classification Concepts and User Manual: LCCS; Food and Agriculture Organization: Rome, Italy, 2005. [Google Scholar]
  40. Definiens Imaging GmbH. Developer 8 Reference Book; Definiens Imaging GmbH: Munich, Germany, 2011. [Google Scholar]
  41. Tonjes, R.; Glowe, S.; Buckner, J.; Liedtke, C.E. Knowledge-based interpretation of Remote Sensing images using semantic nets. Photogramm. Eng. Rem. S. 1999, 65, 811–821. [Google Scholar]
  42. Yang, Y.; Li, H.T.; Han, Y.S.; Gu, H.Y. High resolution remote sensing image segmentation based on graph theory and fractal net evolution approach. In Proceedings of the International Workshop on Image and Data Fusion, Kona, HI, USA, 21–23 July 2015. [Google Scholar]
  43. Drǎguţ, L.; Tiede, D.; Levick, S.R. ESP: A tool to estimate scale parameter for multiresolution image segmentation of remotely sensed data. Int. J. Geogr. Inf. Sci. 2010, 24, 859–871. [Google Scholar] [CrossRef]
  44. Gao, Y.; Mas, J.F.; Kerle, N.; Navarrete Pacheco, J.A. Optimal region growing segmentation and its effect on classification accuracy. Int. J. Remote Sens. 2011, 32, 3747–3763. [Google Scholar] [CrossRef]
  45. Achanccaray, P.; Ayma, V.A.; Jimenez, L.I.; Bernabe, S. SPT 3.1: A free software tool for automatic tuning of segmentation parameters in optical, hyperspectral and SAR images. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Milan, Italy, 26–31 July 2015. [Google Scholar]
  46. Martha, T.R.; Kerle, N.; van Westen, C.J.; Jetten, V.; Kumar, K.V. Segment Optimization and Data-Driven Thresholding for Knowledge-Based Landslide Detection by Object-Based Image Analysis. IEEE Trans. Geosci. Remote. 2011, 49, 4928–4943. [Google Scholar] [CrossRef]
  47. Stumpf, A.; Kerle, N. Object-oriented mapping of landslides using Random Forests. Remote Sens. Environ. 2011, 115, 2564–2577. [Google Scholar] [CrossRef]
  48. Maillot, N.; Thonnat, M.; Boucher, A. Towards ontology-based cognitive vision. Mach. Vis. Appl. 2004, 16, 33–40. [Google Scholar] [CrossRef]
  49. Belgiu, M.; Drǎguţ, L.; Strobl, J. Quantitative evaluation of variations in rule-based classifications of land cover in urban neighbourhoods using WorldView-2 imagery. ISPRS J. Photogramm. Remote Sens. 2014, 87, 205–215. [Google Scholar] [CrossRef] [PubMed]
  50. Ma, L.; Li, M.; Blaschke, T.; Ma, X.; Tiede, D.; Cheng, L.; Chen, Z.; Chen, D. Object-based change detection in urban areas: the effects of segmentation strategy, scale, and feature space on unsupervised methods. Remote Sens. 2016, 8, 761–778. [Google Scholar] [CrossRef]
  51. Li, M.; Ma, L.; Blaschke, T.; Cheng, L.; Tiede, D. A Systematic Comparison of Different Object-Based Classification Techniques Using High Spatial Resolution Imagery in Agricultural Environments. Int. J. Appl. Earth Obs. 2016, 4, 87–98. [Google Scholar] [CrossRef]
  52. Hofmann, P.; Lettmayer, P.; Blaschke, T.; Belgiu, M.; Wegenkittl, S.; Graf, R.; Lampoltshammer, T.J.; Andrejchenko, V. Towards a framework for agent-based image analysis of remote-sensing data. Int. J. Image Data Fusion 2015, 6, 115–137. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Overview of the methodology followed in this study.
Figure 2. False colour image fusion result of the ZY-3 satellite for Ruili City, China.
Figure 3. The land-cover ontology (every subclass is shown with an "is.a" relationship).
Figure 4. Image object features ontology (every subclass is shown with an "is.a" relationship).
Figure 5. Ontology model of the decision tree classifier.
Figure 6. The mark rules ontology model (every subclass is shown with an "is.a" relationship).
Figure 7. The semantic network model.
Figure 8. The features of an object in OWL.
Figure 9. Decision rule based on C4.5 decision tree classifier.
Figure 10. The decision tree model.
Figure 11. The decision tree expressed in OWL.
Figure 12. The classification result of "region208" in OWL.
Figure 13. Example of the semantic information in OWL format for "region105".
Figure 14. Semantic information of "region105" as displayed in a semantic web interface.
Figure 15. Land cover classification map from the ZY-3 satellite image for the test site: (a) our method with ontology; and (b) decision tree method without ontology.
Figure 16. Classification confusion matrix, where rows represent reference objects and columns classified objects: (a) our method with ontology; and (b) decision tree method without ontology.
Table 1. The description, decision tree rules, ontology rules and SWRL decision rules of the eight land-covers (example pictures omitted).

Field. Description: Field is land cultivated for planting crops, which includes mature fields, newly developed fields and grass-crop rotation land; it is mainly used for planting crops, with scattered fruit trees, mulberry trees or others. Decision tree rule: NDVI > 0.6 and RectangularFit > 0.62 and FractalDimension ≤ 0.37. Ontology rule: Field ≡ Regular ∩ Planar ∩ Smooth ∩ Dark ∩ Low ∩ adjacentToRoad. SWRL rule: Regular (?x), Planar (?x), Smooth (?x), Dark (?x), Low (?x), adjacentToRoad (?x) -> Field (?x).

Orchard. Description: Orchard is artificially cultivated land for perennial woody and herbaceous crops, mainly used for collecting fruits, leaves, roots, stems, etc.; it also includes various trees, bushes, tropical crops, fruit nurseries, etc. Decision tree rule: NDVI > 0.6 and RectangularFit > 0.62 and FractalDimension > 0.37. Ontology rule: Orchard ≡ Regular ∩ Planar ∩ Smooth ∩ Dark ∩ Medium ∩ adjacentToField. SWRL rule: Regular (?x), Planar (?x), Smooth (?x), Dark (?x), Medium (?x), adjacentToField (?x) -> Orchard (?x).

Woodland. Description: Woodland is covered by natural forest, secondary forest and plantation, and includes trees, bushes, bamboo, etc. Decision tree rule: NDVI > 0.6 and RectangularFit ≤ 0.62 and Homogeneity > 0.71. Ontology rule: Woodland ≡ Irregular ∩ Planar ∩ Rough ∩ Dark ∩ High ∩ adjacentToField. SWRL rule: Irregular (?x), Planar (?x), Rough (?x), Dark (?x), High (?x), adjacentToField (?x) -> Woodland (?x).

Grassland. Description: Grassland is covered by herbaceous plants and includes shrub grassland, pastures, sparse grassland, etc. Decision tree rule: NDVI > 0.6 and RectangularFit ≤ 0.62 and Homogeneity ≤ 0.71. Ontology rule: Grassland ≡ Irregular ∩ Planar ∩ Smooth ∩ Dark ∩ Low ∩ adjacentToBuilding. SWRL rule: Irregular (?x), Planar (?x), Smooth (?x), Dark (?x), Low (?x), adjacentToBuilding (?x) -> Grassland (?x).

Building. Description: Building includes contiguous building areas and individual buildings in urban and rural areas. Decision tree rule: NDVI ≤ 0.6 and MeanB1 > 0.38 and LengthWidthRatio ≤ 4.5. Ontology rule: Building ≡ Regular ∩ Planar ∩ Rough ∩ Light ∩ High ∩ adjacentToRoad. SWRL rule: Regular (?x), Planar (?x), Rough (?x), Light (?x), High (?x), adjacentToRoad (?x) -> Building (?x).

Road. Description: Road covers rail and trackless road surfaces, including railways, highways, urban roads and rural roads. Decision tree rule: NDVI ≤ 0.6 and MeanB1 > 0.38 and LengthWidthRatio > 4.5. Ontology rule: Road ≡ Regular ∩ Strip ∩ Smooth ∩ Light ∩ Low ∩ adjacentToBuilding. SWRL rule: Regular (?x), Strip (?x), Smooth (?x), Light (?x), Low (?x), adjacentToBuilding (?x) -> Road (?x).

Bare land. Description: Bare land is any naturally exposed surface (forest coverage less than 10%). Decision tree rule: NDVI ≤ 0.6 and MeanB1 ≤ 0.38 and NDWI > 0.6. Ontology rule: Bare land ≡ Irregular ∩ Planar ∩ Rough ∩ Light ∩ Low. SWRL rule: Irregular (?x), Planar (?x), Rough (?x), Light (?x), Low (?x) -> Bare land (?x).

Water. Description: Water includes all types of surface water. Decision tree rule: NDVI ≤ 0.6 and MeanB1 ≤ 0.38 and NDWI ≤ 0.6. Ontology rule: Water ≡ Irregular ∩ Planar ∩ Smooth ∩ Dark ∩ Low. SWRL rule: Irregular (?x), Planar (?x), Smooth (?x), Dark (?x), Low (?x) -> Water (?x).
Table 2. Land cover classification accuracy of the two methods (PA = producer's accuracy, UA = user's accuracy).

Class     | Our Method with Ontology: PA (%) | UA (%)  | Decision Tree Method without Ontology: PA (%) | UA (%)
Field     | 88.14                            | 86.67   | 85.00                                         | 77.27
Orchard   | 87.69                            | 89.06   | 83.82                                         | 89.06
Woodland  | 85.11                            | 88.89   | 91.11                                         | 95.35
Grassland | 76.09                            | 85.37   | 85.71                                         | 78.26
Building  | 85.71                            | 75.00   | 82.86                                         | 80.56
Road      | 84.38                            | 72.97   | 71.88                                         | 76.67
Bare land | 90.38                            | 88.68   | 89.58                                         | 81.13
Water     | 88.24                            | 100.00  | 80.00                                         | 100.00
Overall: our method, overall accuracy = 85.95%, Kappa coefficient = 0.84; decision tree method, overall accuracy = 84.32%, Kappa coefficient = 0.82.

