Article

Line-Constrained Shape Feature for Building Change Detection in VHR Remote Sensing Imagery

Haifei Liu, Minhua Yang, Jie Chen, Jialiang Hou and Min Deng

1 School of Geosciences and Info-Physics, Central South University, Changsha 410083, China
2 The Third Surveying and Mapping Institute of Hunan, Changsha 410004, China
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2018, 7(10), 410; https://doi.org/10.3390/ijgi7100410
Submission received: 21 August 2018 / Revised: 11 October 2018 / Accepted: 12 October 2018 / Published: 16 October 2018

Abstract
Buildings represent the most relevant features of human activity in urban regions, but their change detection using very-high-resolution (VHR) remote sensing imagery remains a major challenge. Effective representation of buildings is the key to building change detection. Linear features can indirectly represent the structure and distribution of man-made objects. Thus, this study proposes a shape feature-based building change detection method. Specifically, a line-constrained shape (LCS) feature is developed to capture the shape characteristics of buildings. This feature improves the discriminability between buildings and other ground objects by integrating the pixel shape feature and line segments. A building candidate area (BCA) is created in accordance with the distribution of line segments in the two-phase images, constraining the problem space to regions with a high likelihood of containing buildings. Comparative experimental results demonstrate that the combination of the spectral feature and the developed LCS feature achieves the best performance in object-based building change detection in VHR imagery.


1. Introduction

The development of human society is always accompanied by frequent interactions between people and nature. Among environmental changes, land use/cover changes in urban areas are especially frequent and complex. Obtaining change information in a timely manner is crucial for governments to make decisions on urban construction, planning and management [1,2,3,4]. Singh [5] defined change detection as “the process of identifying differences in the state of an object or phenomenon by observing it at different times”. Multitemporal remote sensing images have become a major data source for change detection due to their high temporal frequency, digital format suitable for computation, synoptic view, and wide selection of spatial and spectral resolutions [6,7], although the tradeoffs among temporal, spatial and spectral resolutions must be weighed when choosing a satellite or sensor. Meanwhile, a large number of change detection methods have been developed over the past decades [8,9].
Traditional change detection methods use medium- and low-spatial-resolution remote sensing images as data sources and are mostly pixel based, with the pixel as the basic analysis unit. Spectral features are normally used to detect and measure changes, but spatial characteristics are generally not considered [3]. Change detection based on pixel feature comparison has the advantages of fast calculation and easy-to-understand results, but it is susceptible to geometric registration and radiometric correction errors between different temporal data [10]. Many works in the literature have reviewed pixel-based change detection methods [4,11,12]. With the increase in spatial resolution, the information contained in remote sensing images becomes abundant, which provides a rich basis for change detection. However, VHR remote sensing images usually exhibit a complex spatial distribution of objects and large heterogeneity between individual objects, which poses a great challenge for the change detection task. Faced with this phenomenon of “high intraclass variability and low interclass variability”, traditional pixel-based change detection methods are not suitable for VHR image processing [3,9].
The object-based image analysis (OBIA) method shows an obvious advantage over the pixel-based method. In recent years, the OBIA approach has been proven to have a considerable advantage in the change detection field, and it is continuously developing [1,13,14]. The image object is the basic analysis unit of object-based change detection (OBCD). The image is divided into several meaningful homogeneous regions, that is, image objects [15,16]. Each object consists of a set of pixels that are spatially adjacent and similar in feature (e.g., spectra) [17], thereby allowing the change detection to use spatial information efficiently [1]. Many OBCD approaches have been established. For instance, a fast OBCD method was proposed in [18], in which two-phase images were segmented and overlapped to obtain synthesized objects. For every object, the spectral characteristics and change vector (CV) value were extracted, and on the basis of these features, a transductive support vector machine (SVM) was used to classify the changed and unchanged objects.
In urban areas, buildings are the most relevant features of human activities and an important manifestation of urbanization. Accurate change detection of buildings is vital for promoting urban planning and achieving sustainable development of the environment [19]. Buildings in VHR remote sensing images vary in structure and color, and the spectral characteristics of some buildings may be confused with those of non-buildings. Spectral characteristics alone are therefore ineffective in distinguishing changed buildings [20,21], and other means are introduced to overcome this deficiency, among which spatial analysis is a widely accepted one [22].
Numerous man-made objects (e.g., buildings) in urban areas have rich corner points and regular shapes. Establishing a spatial feature that contains the shape information of buildings facilitates building extraction and change detection in VHR images. For example, feature points and line information are extracted to locate buildings [23,24,25]; indexes that describe specific attributes of buildings are designed, such as the length and width of connected pixel groups [26], the pixel shape index (PSI) [27] and the morphological building index (MBI) [28]; and deep convolutional neural networks are used to learn expressive characteristics of buildings [29,30,31]. By combining spatial features and shape information with spectral and textural features, various methods have been proposed for building extraction and change detection [23,24,25,26,27,28,29,30,31,32,33,34,35,36].
However, buildings in VHR remote sensing images are highly diverse; differences in spectral, shape, textural and spatial background information naturally exist among them. A specific template that accurately depicts the shape and boundary of buildings is difficult to construct even within a given area. For example, the spectral threshold used to stop the extension vector in PSI is likely to confuse some buildings with their surrounding objects (e.g., bare land, grassland or shadow) because of their similar spectral appearance. MBI depends on a luminance image defined as the maximum value over all bands, so it easily ignores brightness differences among buildings and overlooks buildings with low brightness. Therefore, encoding the crucial characteristics of buildings in VHR images is a key issue in building change detection research.
A large number of line segments can be extracted from the outer appearance of buildings in VHR images. Consequently, a clustering of line features emerges in areas where buildings are located. From this viewpoint, lines should be taken as inherent characteristics for grasping building shapes. In this paper, we present a line-constrained shape (LCS) feature, which makes buildings easier to distinguish from other geo-objects. Based on the LCS and a spectral feature, an object-based supervised classification method is proposed for building change detection in VHR remote sensing imagery.
The rest of the paper is organized as follows. Section 2 elucidates the proposed method. Section 3 provides the experimental result analysis and the comparison of building change detection based on different spatial features. Section 4 elaborates the conclusion and the future work. The acronyms used in this study are summarized in the Abbreviation section.

2. Methodology

The LCS-based building change detection method proposed in this study mainly contains three steps. First, the two-phase images are segmented, and basic analysis units are created by synthesizing the corresponding two-phase objects. Then, line segments are extracted from the VHR remote sensing images, and the LCS is accordingly developed to represent the shape feature. Finally, a feature vector that concatenates spectral, shape and differential information is built for each basic analysis unit and input to an object-based supervised classifier to obtain the changed buildings. The flowchart of the proposed method is shown in Figure 1.

2.1. Image Object Generation

By performing OBIA, we can utilize the rich information in VHR remote sensing imagery. The image is first segmented into objects, feature extraction is then implemented at the object level, and object-based classification is applied to obtain the final result. Segmentation is the front-end step and a key part of OBIA, and its output greatly affects the results of the subsequent steps [37]. Here, simple linear iterative clustering (SLIC) [38] is adopted to segment the two-phase images into image objects, considering that it produces compact and homogeneous objects and performs well in terms of speed, contour preservation and shape characterization. Finally, an object synthesis method [18] is adopted to acquire the final analysis units, as sketched below.
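As a concrete illustration, the following minimal Python sketch shows how this object-generation step could look, assuming scikit-image's SLIC implementation; the variable names and parameter values (`img_t1`, `img_t2`, `n_segments`, `compactness`) are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of object generation: SLIC segmentation of each phase,
# then object synthesis by intersecting the two label maps (as in [18]).
import numpy as np
from skimage.segmentation import slic

def generate_objects(img_t1, img_t2, n_segments=5000, compactness=10.0):
    """img_t1, img_t2: (H, W, 3) arrays of the two phases (assumed inputs)."""
    labels_t1 = slic(img_t1, n_segments=n_segments,
                     compactness=compactness, start_label=0)
    labels_t2 = slic(img_t2, n_segments=n_segments,
                     compactness=compactness, start_label=0)
    # Every unique (label_t1, label_t2) pair becomes one synthesized object,
    # so each analysis unit is homogeneous in both phases.
    paired = labels_t1.astype(np.int64) * (labels_t2.max() + 1) + labels_t2
    _, synthesized = np.unique(paired.ravel(), return_inverse=True)
    return synthesized.reshape(img_t1.shape[:2])
```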

2.2. Shape Feature Extraction

This study focuses on the shape features of buildings to enhance the representational ability of features in building change detection. The calculation of the shape feature is constrained by straight lines, which carry the shape and position information of buildings. The likelihood of pixels belonging to buildings is calculated on the basis of the line segments, and the BCA is extracted in accordance with this likelihood. A special assignment strategy is proposed to embody the LCS difference between the objects inside the BCA and those outside it.

2.2.1. Building Likelihood Map

Although buildings generally vary in spectra and structure, their lines are obvious in VHR images. Large numbers of line segments can be extracted from the extrinsic structure and contour of buildings. The proposed method uses a line segment detector (LSD) [39] to extract line segments from the two-phase images. The LSD achieves subpixel accuracy, and its result includes the endpoint coordinates, width and orientation of each segment. As shown in Figure 2a, the line segments of buildings and non-buildings exhibit differences: the irregular shapes of non-building objects produce random, sporadic and sparse line segments, whereas line segments are extracted much more densely in building areas. These lines can express the position and shape information of buildings to a large extent.
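A hedged sketch of this extraction step is given below, assuming OpenCV's LSD wrapper (`cv2.createLineSegmentDetector`, present in many OpenCV builds, including 4.x releases from 4.5.4 onward); the input variable is illustrative.

```python
# Sketch of line-segment extraction with OpenCV's LSD wrapper.
import cv2
import numpy as np

def extract_line_segments(img):
    """img: BGR uint8 image (assumed input)."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    lsd = cv2.createLineSegmentDetector()
    # `lines` has shape (N, 1, 4): subpixel endpoint coordinates
    # (x1, y1, x2, y2); `widths` holds each segment's width.
    lines, widths, _prec, _nfa = lsd.detect(gray)
    if lines is None:  # no segments found
        return np.empty((0, 4)), np.empty((0, 1))
    return lines.reshape(-1, 4), widths
```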
Figure 2a illustrates that, if numerous line segments exist around a pixel, then the likelihood that this pixel belongs to a building is high. In accordance with this characteristic, this study calculates the likelihood by using a Gaussian function. A building likelihood (BL) map is created based on the property of the Gaussian function that the correlation of two pixels decreases as their spatial distance increases. The BL is calculated as
$$\mathrm{BL}(x, y) = \sum_{j=1}^{N_p} \exp\!\left( -\frac{(x - x_j)^2 + (y - y_j)^2}{2\omega^2} \right), \quad (1)$$
where $(x, y)$ is the coordinate of the current center pixel; $(x_j, y_j)$ is the coordinate of the jth pixel on line segment (POL); $N_p$ is the total number of extracted POLs; $\omega$ is the scale factor of each POL's influence range; and $\mathrm{BL}(x, y)$ is the total BL value of the center pixel $(x, y)$. For computational efficiency, the POLs are sampled from each line segment at an interval of five pixels. These discrete points retain the representation of the buildings' location and shape information.
Equation (1) implies that a high $\omega$ indicates a large influence range R and a high BL value for the center pixel. If the distance from a POL to the center pixel is far beyond the range R, then that POL's contribution to the pixel's BL value becomes small enough to be ignored. As a result, the nearer pixels are to POLs, the higher their BL values, and vice versa.
The BL value of a center pixel is the accumulation of the POLs' influence; therefore, BL can be considered a measure of the POL spatial distribution. When the center pixel is surrounded by POLs, the likelihood that it belongs to a building is high. Figure 2b is a BL map of an image block of buildings, normalized into [0, 1]. Regions with warm colors indicate high likelihood values. The BL values of buildings are higher than those of the surrounding objects.
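Equation (1) can be sketched in Python as follows, assuming NumPy and SciPy. Blurring a POL indicator image with a Gaussian of standard deviation ω equals, up to a constant factor, the sum of unit-height Gaussians centered at the POLs; that factor cancels when the map is normalized to [0, 1], as is done before thresholding, so a standard Gaussian filter suffices.

```python
# Sketch of the BL map of Equation (1); `segments` is the (N, 4) array
# from the LSD step and `shape` the image size (rows, cols).
import numpy as np
from scipy.ndimage import gaussian_filter

def building_likelihood(segments, shape, omega=50, step=5):
    indicator = np.zeros(shape, dtype=np.float64)
    for x1, y1, x2, y2 in segments:
        length = np.hypot(x2 - x1, y2 - y1)
        n = max(int(length // step), 1) + 1
        # Sample POLs every `step` pixels along the segment.
        xs = np.linspace(x1, x2, n)
        ys = np.linspace(y1, y2, n)
        rows = np.clip(np.round(ys).astype(int), 0, shape[0] - 1)
        cols = np.clip(np.round(xs).astype(int), 0, shape[1] - 1)
        np.add.at(indicator, (rows, cols), 1.0)  # accumulate coincident POLs
    bl = gaussian_filter(indicator, sigma=omega)  # sum of Gaussians, rescaled
    return bl / bl.max()                          # normalized BL map in [0, 1]
```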

2.2.2. Building Candidate Area

Otsu's method [40] is used to extract the initial BCA from the BL map. Figure 2c shows the initial BCA, which is enclosed by the smooth outline of the blue region. In this area, blue denotes pixels that belong to other objects, and green stands for building pixels. As shown in Figure 2c, parts of buildings (red region) are excluded from the BCA.
The objects are synthesized from bi-temporal objects as mentioned in Section 2.1. Consequently, as shown in Figure 2c, these objects are small, have very high internal homogeneity, and have edges that efficiently capture the actual edges of ground objects. A procedure for optimizing the candidate area is proposed to make the edge of the candidate area fit the actual edges of ground objects. Specifically, the candidate area is overlapped with all objects to extend the BCA: if any pixel (even only one) of an object is located in the BCA, then all pixels of the object are incorporated into the BCA. After all objects have undergone this procedure, the final extended BCA is obtained, as shown in Figure 2d. Because the objects are small and their internal homogeneity is high, the extension does not greatly change the range of the initial BCA, and the extended region does not introduce other ground objects.
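A minimal sketch of this extraction and object-based extension, assuming scikit-image's Otsu implementation, the normalized `bl` map, and the synthesized object label map from Section 2.1; variable names are illustrative.

```python
# Sketch of BCA extraction: Otsu thresholding, then whole-object extension.
import numpy as np
from skimage.filters import threshold_otsu

def extract_bca(bl, objects):
    initial_bca = bl > threshold_otsu(bl)        # initial BCA from Otsu
    # Extension rule: if any pixel of an object falls in the initial BCA,
    # the whole object is incorporated into the final BCA.
    n_objects = objects.max() + 1
    hit = np.zeros(n_objects, dtype=bool)
    hit[np.unique(objects[initial_bca])] = True
    return hit[objects]                          # final (extended) BCA mask
```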

2.2.3. Line-Constrained Shape Feature

The original PSI [27] takes the center pixel as the starting point and draws a vector in each of several given orientations. The spectral difference between the front pixel of the extension vector and the center pixel is calculated iteratively until the vector stops extending; the stop rule is that the spectral difference exceeds a given threshold or the vector length reaches a given maximum value. The vector lengths in all orientations are recorded as the PSI feature. In each orientation, the vector represents the range over which pixels have spectral features similar to the center pixel's. However, when PSI is used to capture building shape, two problems appear. Firstly, the vector does not stop at edges where the spectral difference between the pixels on the two sides is unobvious. Secondly, the feature is highly susceptible to irregular spots with a certain spectral difference inside building faces. These issues prevent the vectors from reaching the real building edges.
This study proposes an LCS feature to describe the shape of buildings. Its calculating process includes center pixel position judgment, vector generation, vector length computation and shape feature generation.
(1) Center pixel position judgment
If the current center pixel is located in the BCA, then step 2 is performed. Otherwise, the LCS of this center pixel is skipped, and the next pixel is set as the center pixel.
(2) Vector generation
The set of extension orientations is denoted as D (e.g., all the orientations of the yellow arrows in Figure 3a; the number of orientations is denoted as d). The number of extension steps is initialized as $n_{step} = 0$. Starting from the center pixels C (e.g., the pixels labeled “1” and “2” in Figure 3a), vectors are drawn along each orientation $i \in D$. The stop conditions of the vector extension are as follows:
C1: The next pixel the vector meets belongs to the line segments (e.g., the points on the red line segments in Figure 3a);
C2: $n_{step}$ equals the maximum step threshold $T_{step}^{max}$.
If the vector does not stop, then the front pixel is incorporated into the vector, and $n_{step}$ is increased by one; the extension continues until a stop condition is reached.
(3) Vector length computation
When all the vectors stop extending, the coordinates of all end pixels of vectors are recorded. The length of vectors is the Euclidean distance between the end pixel and the center pixel (e.g., the length of each yellow arrow in Figure 3a). If the vector does not meet the line segments before the vector length reaches the maximum, then the latter is its final length (e.g., the radius of the blue circle in Figure 3a).
Note that the BCA outline is not used as a stopping condition for the vector extension: vectors starting from center pixels in the candidate area may pass through the outlines of the area. The pixels on the two sides of the outline may belong to the same class; as shown in Figure 2d, most BCA outlines pass through grassland. Accordingly, LCS can reflect the natural shape characteristics of objects.
(4) Shape feature generation
For all pixels in the BCA, the final length of the vector in each orientation $i \in D$ is the LCS feature value, as shown in the green area in Figure 3b. To easily distinguish buildings from the ground objects in the non-BCA, this study also assigns LCS values to the pixels in the non-BCA. Figure 3b depicts a simple example of value assignment for non-BCA pixels. The green area is the BCA, and the yellow area is the non-BCA. Along each orientation, the LCS values of pixels in the BCA are obtained through steps 1–3, and the maximum LCS value is assigned to all pixels of the non-BCA in the same orientation. Thus, the difference in LCS values between building pixels and non-BCA pixels is enlarged.
Steps 1–4 are repeated for all orientations. We can obtain the final LCS feature matrix, which has dimensions of m × n × d, where m and n are the numbers of rows and columns of images, respectively, and d is the number of orientations. The LCS value of all pixels in each orientation can be regarded as an independent shape feature.
Note that many line segments exist near buildings; hence, the vectors inside a building are not long in any orientation. As shown in Figure 3a, the vector extension along all orientations for the building pixel labeled 1 is stopped by the line segments (the red lines in Figure 3a). For pixels that belong to grass or bare land (the pixel labeled 2 in Figure 3a), the vectors extend along most orientations until they reach the maximum length. Building pixels' vectors are therefore likely to be shorter than other ground-object pixels' vectors in all orientations. As a result, a building's LCS value has a high likelihood of being small, whereas the likelihood that a non-BCA pixel belongs to a building is small.
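Steps 1–4 can be sketched with a straightforward (unoptimized) per-pixel loop, assuming a boolean `line_mask` rasterized from the LSD segments and the `bca` mask from Section 2.2.2; the border handling and step counting are assumptions where the paper is not explicit.

```python
# Illustrative sketch of LCS computation (steps 1-4), d = 8 orientations.
import numpy as np

DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
        (0, -1), (1, -1), (1, 0), (1, 1)]  # 8 orientations, 45 degrees apart

def lcs_feature(line_mask, bca, t_step_max=250):
    h, w = line_mask.shape
    lcs = np.zeros((h, w, len(DIRS)), dtype=np.float32)
    for k, (dr, dc) in enumerate(DIRS):
        for r in range(h):
            for c in range(w):
                if not bca[r, c]:
                    continue                   # step 1: only BCA pixels
                n_step = 0
                rr, cc = r, c
                while n_step < t_step_max:     # stop condition C2
                    nr, nc = rr + dr, cc + dc
                    if not (0 <= nr < h and 0 <= nc < w) or line_mask[nr, nc]:
                        break                  # stop condition C1 (or border)
                    rr, cc, n_step = nr, nc, n_step + 1
                # step 3: Euclidean distance from center to end pixel
                lcs[r, c, k] = np.hypot(rr - r, cc - c)
        # step 4: non-BCA pixels get the orientation's maximum LCS value
        lcs[~bca, k] = lcs[:, :, k].max()
    return lcs
```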

2.3. Feature Vector Construction

Section 2.2.3 indicates that the LCS values of building pixels in all orientations are relatively low over the entire image. However, the LCS values of some regularly shaped ground objects near buildings (such as paths connecting buildings to main streets) will be similar to those of the buildings. Therefore, the LCS should be combined with other features to serve as the reference for building change detection. Here, the spectral feature and the LCS value are assembled into a feature vector for every object. The spectral feature and LCS are normalized to [0, 1] to unify their dimensions. The spectral value of each band in the original image is normalized as
$$X_{\mathrm{new}}^{(t)} = X_{\mathrm{old}}^{(t)} / 255, \quad (2)$$
where $X^{(t)}$ is the value of each spectral band; $t \in \{1, 2\}$ indicates the image phase; and the subscript “old” represents the original data and “new” the data after normalization, the same as below. The equation for LCS normalization is
$$\mathrm{LCS}_{\mathrm{new}}^{(t)} = \mathrm{LCS}_{\mathrm{old}}^{(t)} / \mathrm{LCS}_{\max}, \quad (3)$$
where $\mathrm{LCS}^{(t)}$, $t \in \{1, 2\}$, represents the LCS values calculated in the two-phase images, and $\mathrm{LCS}_{\max}$ is the maximum of the two-phase LCS values over all orientations.
After the spectral feature and LCS are normalized, all of these features are stacked to form the feature matrix of each phase, denoted as $F^{(t)}$, $t \in \{1, 2\}$. As shown in Figure 4a, a feature matrix F extracted from an image contains n + d bands, where n is the number of spectral bands and d is the number of orientations. Each object acquires its own feature vector, in which the value of each component is the mean value over all pixels within the object for each band of F.
In addition, an object-based CV is calculated for each object to capture the change information. The CV value $D_i$ is calculated as
$$D_i = \frac{1}{|R_i| \times C} \sum_{j=1}^{C} \sum_{(x, y) \in R_i} \left( F_j^{(1)}(x, y) - F_j^{(2)}(x, y) \right)^2, \quad (4)$$
where C is the dimension of the feature matrix F; $F_j^{(1)}(x, y)$ and $F_j^{(2)}(x, y)$ are the values of pixel $(x, y)$ in the jth band of the two-phase feature matrices, respectively; and $|R_i|$ is the number of pixels inside object $R_i$. As shown in Figure 4b, the feature vector of each object is arranged as
$$v_i = \left[ F_1^{(1)}(R_i), \ldots, F_C^{(1)}(R_i), D_i, F_1^{(2)}(R_i), \ldots, F_C^{(2)}(R_i) \right], \quad (5)$$
where $F_j^{(t)}(R_i)$ is the mean value of the pixels in object $R_i$ in the jth band of the tth-phase feature matrix.
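A sketch of the object feature-vector construction of Equations (4) and (5) follows, assuming the stacked per-phase feature matrices `f1`, `f2` of shape (H, W, C) (normalized spectra plus the d LCS bands) and the synthesized object label map; names and shapes are illustrative.

```python
# Per-object means of both phases plus the object-based CV of Equation (4),
# arranged as v_i = [F^(1) means, D_i, F^(2) means] per Equation (5).
import numpy as np

def object_feature_vectors(f1, f2, objects):
    n_obj = objects.max() + 1
    h, w, c = f1.shape
    labels = objects.ravel()
    counts = np.bincount(labels, minlength=n_obj).astype(np.float64)

    def per_object_mean(f):
        # Mean of every band inside each object, via bincount accumulation.
        means = np.empty((n_obj, c))
        for j in range(c):
            means[:, j] = np.bincount(labels, weights=f[..., j].ravel(),
                                      minlength=n_obj) / counts
        return means

    m1, m2 = per_object_mean(f1), per_object_mean(f2)
    # D_i: mean squared feature difference over all pixels and bands.
    sq = ((f1 - f2) ** 2).sum(axis=2).ravel()
    d = np.bincount(labels, weights=sq, minlength=n_obj) / (counts * c)
    return np.hstack([m1, d[:, None], m2])   # one row v_i per object
```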

2.4. Classification

After the feature vector has been constructed for every object, the logistic regression model [41] is adopted in this study to determine the building change information through a binary classification. The model produces regression coefficients for each variable on the basis of the sampled data, and the relationship between the independent and dependent variables is analyzed on the basis of these coefficients. The logistic model obtains classification results using maximum likelihood estimation: it assigns each object a likelihood value ranging from 0 to 1 in accordance with the object's feature vector. When the likelihood value is close to 1, the object is likely to be a changed building.
In the first step of the classification process, training samples are collected. Specifically, some typical objects with high homogeneity are selected through visual interpretation. As described above, the BCA is the region where buildings are likely to exist; conversely, the likelihood that objects in the non-BCA belong to buildings is small. Therefore, this study obtains a union BCA (UBCA) from the two-phase BCAs. The training sample-collecting process is shown in Figure 5. The samples are then assigned to the negative and positive sample sets $U_0$ and $U_1$. The positive sample set corresponds to the changed building class P; the negative sample set corresponds to the other class N, which includes all unchanged ground objects and changed non-building objects. The number of training samples is approximately one-fifth to one-quarter of the total number of objects.
If an object is not located in the UBCA, like the objects in the blue area in Figure 5, it is directly categorized into class N and does not participate in classification. This approach not only reduces the false detection rate but also saves calculation time.
After the training sample-collecting process, the logistic regression classifier is trained using the collected samples. Then, the classifier gives each object in UBCA a prediction likelihood value according to its feature vector. Finally, as for each object in UBCA, if its prediction likelihood value is greater than 0.5, then the object is assigned to the class P; otherwise, it is assigned to the class N.
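A minimal sketch of this classification step, assuming scikit-learn's logistic regression; sample collection is abstracted into index/label arrays, and all names (`vectors`, `in_ubca`, `train_idx`, `train_y`) are illustrative.

```python
# Train on hand-collected samples, predict only for objects in the UBCA;
# objects outside the UBCA are assigned to class N directly.
import numpy as np
from sklearn.linear_model import LogisticRegression

def classify_objects(vectors, in_ubca, train_idx, train_y):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vectors[train_idx], train_y)
    labels = np.zeros(len(vectors), dtype=int)      # default: class N
    prob = clf.predict_proba(vectors[in_ubca])[:, 1]
    labels[in_ubca] = (prob > 0.5).astype(int)      # class P if p > 0.5
    return labels   # 1 = changed building (P), 0 = other (N)
```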

3. Results and Discussion

Experiments are conducted on VHR imagery to verify the proposed building change detection method. The LCSs of two datasets are visualized and analyzed. The sensitivity analysis of key parameters is discussed. The advantage of the LCS feature in building change detection is validated through comparison experiments.

3.1. Study Data

Two datasets of WorldView-2 satellite images, acquired over the suburbs of Miami, Florida, USA, are used in this study. They have red, green and blue bands with a spatial resolution of 0.31 m. The two-phase images and changed buildings in the two datasets are shown in Figure 6. The two-phase images of dataset 1 were acquired on 7 March 2013 and 22 March 2017, and those of dataset 2 on 27 March 2011 and 24 January 2016. From visual interpretation, the changed buildings in the two datasets are newly built ones in the second phase; the numbers of new buildings are 28 and 19 for datasets 1 and 2, respectively. Some non-changed buildings also exist in both datasets, with only slight spectral differences between the two-phase images. In addition, a number of other ground objects, such as roads, trees and grasslands, are found in the two datasets, some of which changed from one class to another.

3.2. Evaluation Metrics

Several widely used evaluation metrics [42], such as recall rate, false detection rate, overall accuracy and kappa coefficient, are introduced to evaluate the accuracy of the experimental results. They are all calculated from the pixel-based confusion matrix with the assumption that only two categories, namely, changed and unchanged pixels, exist in the classification results, as shown in Table 1.
Recall rate: $\mathrm{Recall} = \dfrac{TP}{T_c}$
False detection rate: $FDR = \dfrac{FP}{D_c}$
Overall accuracy: $OA = \dfrac{TP + TN}{N}$
Kappa coefficient: $Kappa = \dfrac{N \times (TP + TN) - (T_c \times D_c + T_u \times D_u)}{N^2 - (T_c \times D_c + T_u \times D_u)}$
Additionally, the overall thematic accuracy (TA) proposed in [43] is used to further evaluate the result from the object level:
$$TA = \frac{A(E \cap R)}{A(E \cup R)},$$
where E is the extracted dataset, R is the reference vector layer, and A(·) is the area of an object set.
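The pixel-level metrics can be computed from boolean change masks as sketched below; the TA line uses a rasterized intersection-over-union form, which is an assumption, since the original TA is defined on vector layers.

```python
# Metrics of Table 1 computed from boolean masks `detected` and `truth`.
import numpy as np

def evaluate(detected, truth):
    tp = int(np.sum(detected & truth))
    fp = int(np.sum(detected & ~truth))
    fn = int(np.sum(~detected & truth))
    tn = int(np.sum(~detected & ~truth))
    n = tp + fp + fn + tn
    tc, tu = tp + fn, fp + tn     # totals of real changed / unchanged
    dc, du = tp + fp, fn + tn     # totals detected as changed / unchanged
    kappa = (n * (tp + tn) - (tc * dc + tu * du)) / (n**2 - (tc * dc + tu * du))
    ta = tp / int(np.sum(detected | truth))   # rasterized TA (IoU form)
    return {"recall": tp / tc, "fdr": fp / dc,
            "oa": (tp + tn) / n, "kappa": kappa, "ta": ta}
```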

3.3. Parameter Setting

Preprocessing operations are performed as described in [1] to reduce the discrepancies between the two-phase images. Specifically, the present work conducts radiometric calibration to eliminate reflectance differences and co-registration to ensure that the bi-temporal image pixels correspond to the same locations.
After the image processing, some parameters are set for conducting the proposed method as follows:
Scale factor $\omega$: This factor exerts a considerable impact on the generation of the BL map and the BCA, as it determines the influence range of each POL. By considering the coverage ratio of the BCA, $\omega$ is set to 50 for both datasets in this study. A sensitivity analysis of $\omega$ is given in Section 3.5.
Orientation number d: Dense line segments can generally be extracted from buildings; thus, eight orientations are sufficient to judge whether a pixel belongs to a building on the basis of the LCS value. Therefore, the orientation number d is set to 8, corresponding to the orientations [0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°].
Maximum step threshold $T_{step}^{max}$: In the LCS computation, for the interior pixels of large homogeneous areas, such as bare land and grass, the vectors in all orientations generally extend until the maximum step value is reached, and the same happens along some orientations on roads. Therefore, to keep the LCS values of buildings distinguishable from those of other ground objects, $T_{step}^{max}$ needs to be larger than the diagonal length of the minimum bounding rectangle of the largest building. Accordingly, $T_{step}^{max}$ is set to 250 for both datasets in accordance with the image resolution and the size of the buildings.

3.4. LCS Effect

Figure 7 shows the LSD line segment results and the heat map of the LCS mean value. As shown in the first and second rows of Figure 7, the extracted line segments are close to the actual edges, and the contours of most buildings as well as their interior ridge lines are efficiently extracted. The heat maps based on the LCS mean value, the original PSI mean value and the MBI are shown in the third, fourth and fifth rows, respectively.
In all orientations, the LCS values of the pixels in the non-BCA are set to the maximum value of the BCA. Therefore, the LCS values of objects in the non-BCA are the maximum values in the entire map, resulting in a crimson color in the heat map. Most of the building areas in the heat map are dark blue, which indicates that the LCS values of buildings are small, as shown in rectangular regions 1–2 in Figure 7 for example. The reason is that the vectors starting from interior building pixels are always blocked by the line segments, which results in small LCS values along all orientations. By contrast, the vectors of non-building pixels in the BCA are likely to be long, which makes building and non-building pixels easy to distinguish.
Since some buildings differ only minimally from their surrounding objects in spectral characteristics, the PSI values of building and non-building pixels may be similar. Taking rectangular regions 3–4 in Figure 7 as examples, the pixels of buildings and the surrounding objects in the PSI heat map are difficult to distinguish. For the same region, the result of the LCS feature proposed in this study is better, with most interior pixels in deep blue; unlike the original PSI, LCS distinguishes buildings from surrounding objects more easily. As for MBI, the fifth row of Figure 7 illustrates that the range of MBI values inside a building is large, and the MBI values of some buildings are similar to those of roads due to the spectral differences in building interiors. The MBI values of the shady sides of buildings are similar to those of bare land and grassland, as shown in rectangular regions 5–6 in Figure 7. By contrast, LCS can coarsely exhibit the buildings in the heat map, whereas MBI may miss parts of the buildings.

3.5. Parameter Sensitivity Analysis

Parameter $\omega$ plays an important role in the LCS calculation: it is used to compute the BL maps and BCAs of the two-phase images, which are critical for the final change detection result. Figure 8 shows the coverage of buildings and the classification results in the two-phase images of dataset 2 under several values of $\omega$. The first and second rows show the coverage of the building ground truth (including unchanged and changed buildings) by the BCA in the two-phase images; the classification results obtained by the proposed method are shown in the third row. The accuracy evaluation of the change detection results and the building coverage percentages under different $\omega$ settings are shown in Table 2.
As described in Section 2.2.1, the parameter $\omega$ determines the value of R, which can be regarded as the effective influence range of a POL. Although numerous POLs can be extracted from buildings, these POLs are mainly distributed along the outlines and ridge lines of buildings, with no POLs in the interior homogeneous areas. The first and second rows show that, if $\omega$ is too small, the effective influence range R is limited, and the BL values of the pixels in the interior homogeneous areas of buildings are small. Accordingly, as the region in the rectangular box in the first column of the second row of Figure 8 demonstrates, these pixels are excluded from the BCA. As $\omega$ increases, the effective influence range R of each POL and the BL values also increase. When $\omega$ is very large, pixels located in the center of the image receive high BL values even if they are far from the line segments. This condition causes parts of buildings far from the image center to be missed in the final result, as shown in the rectangle in the third column of the second row of Figure 8.
Therefore, if $\omega$ is too large or too small, the coverage rate of the BCA over buildings (including changed and non-changed) declines, which further affects the LCS values and the classification results. As shown in Table 2, when $\omega$ is set to 50, the BCAs of the two-phase images have the highest coverage ratios, and the overall accuracy, kappa coefficient and thematic accuracy are all the best. The circled buildings in the third row of Figure 8 also show that some changed building parts outside the UBCA are missed; these missing parts directly affect the accuracy of the final change detection.

3.6. Comparison of Different Features

The combination of features (1) used in the proposed method is compared with three other feature sets (2)–(4) to verify the advantages of the LCS feature. The feature sets are as follows:
(1) Spectrum + Shape (LCS) + Object (Spectra + Shape) CV
(2) Spectrum + Object Spectra CV
(3) Spectrum + Shape (PSI) + Object (Spectra + Shape) CV
(4) Spectrum + Shape (MBI) + Object (Spectra + Shape) CV
The comparison results are shown in Figure 9; each column represents the final building change detection result of a dataset. The first row shows the change detection result based on feature set (1); the second row shows the change detection result based on spectral feature and its CV value; the third row shows the change detection result based on spectral feature, original PSI feature, and the CV value of these features; the fourth row shows the change detection result based on spectral feature, MBI feature, and the CV value of these features. The accuracy evaluation of the result is shown in Table 3.
From the second row in Figure 9, many missed parts (for dataset 1) or false alarms (for dataset 2) are observed when using feature set (2). As the values in Table 3 illustrate, the kappa coefficients and thematic accuracies of the two datasets using feature set (2) are very low, with none exceeding 0.5. This indicates that spectral features alone cannot effectively detect all changed buildings. The failure of feature set (2) to clearly distinguish the changed buildings can be traced mainly to two factors. On the one hand, spectral differences exist among non-changed buildings in the two-phase images. On the other hand, the spectral difference caused by changed buildings is close to that caused by other changed ground objects.
As for the shape features, as discussed in Section 3.4, the drawbacks of PSI and MBI directly influence the results. Specifically, after the PSI feature is added (feature set (3)), as shown in the third row of Figure 9, the accuracies obtained on the two datasets are higher than those using feature set (2), but the improvement is limited. The overall accuracy in Table 3 is even slightly lower than that of feature set (2) for dataset 1, in which the PSI values of buildings are similar to those of other ground objects, such as bare land. Hence, as shown in the third row of Figure 9, these building pixels are missed (e.g., circles 1–2), and some ground objects with PSI values similar to buildings are incorrectly detected (e.g., circle 3).
As for the MBI in feature set (4), as shown in the fourth row of Figure 9, not only buildings but also some high-brightness objects with regular shapes easily obtain high MBI values. As the regions in circles 4–5 in the fourth row of Figure 9 demonstrate, a large area of roads is incorrectly detected, thereby reducing the detection accuracy. Besides, some non-changed buildings are also incorrectly detected because of spectral differences between the two-phase images. As given in Table 3, the kappa coefficients and thematic accuracies of the two datasets using feature set (4) are lower than those of feature set (3).
By contrast, the results of feature set (1) used in this study are clearly better than those of feature sets (2)–(4). Table 3 shows that the proposed method achieves high recall rates with low false-detection rates, and feature set (1), which uses the proposed LCS as the shape feature, also obtains the highest overall accuracy, kappa coefficient and thematic accuracy. From the first row of Figure 9, on the one hand, this result illustrates the important role of the shape feature in the classification process: in areas with large intraclass spectral differences and small interclass spectral differences, shape features should be used to extract building location information. On the other hand, unlike PSI and MBI, the proposed LCS yields a better result, which proves its superiority in capturing building shape.

4. Conclusions and Future Work

This study utilizes shape features and OBIA to detect building changes in VHR remote sensing imagery. Rather than directly modeling building structure, this study represents the shape of buildings with a feature, named LCS, that is built from line segments. As this feature captures the characteristics of buildings, it is advantageous for individual building change detection in VHR remote sensing imagery, especially in sparse building regions. Furthermore, for a large study area with dense residential zones in VHR imagery, the proposed change detection strategy can be transferred to extract change information at the level of building areas. Experiments verify the effectiveness and superiority of the LCS feature over PSI and MBI in the building change detection task.
A satisfactory change detection result mainly depends on the features and the classification. However, in complex urban environments, dense lines may also be extracted from some non-building objects, such as roads with shoulders and lanes. These objects can thus obtain LCS values similar to those of buildings, which will confuse the building change detection to a certain extent. Therefore, a highly robust feature representation for various buildings in complex environments will be developed in future work to extend the generalization ability of the change detection method.

Author Contributions

H.L. performed the literature search, data acquisition and manuscript preparation. M.Y. collected important background information and carried out manuscript editing. J.C. participated in the conception, design and definition of intellectual content. J.H. carried out the study and assisted in manuscript preparation. M.D. performed the manuscript review.

Funding

This research was funded by the National Natural Science Foundation of China (No. 41671357), the Scientific Research Fund of Hunan Provincial Education Department (No. 16K093), the Open Research Fund Program of Key Laboratory of Digital Mapping and Land Information Application Engineering, NASG (No. GCWD2018).

Acknowledgments

We would like to express our sincere gratitude to all the reviewers for their generous help.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviation

BCA	building candidate area
BL	building likelihood
CV	change vector
LCS	line-constrained shape (feature)
LSD	line segment detector
MBI	morphological building index
POL	pixel on line segment
PSI	pixel shape index
SLIC	simple linear iterative clustering
FDR	false-detection rate
OA	overall accuracy
TA	thematic accuracy
UBCA	union building candidate area
VHR	very-high-spatial-resolution (images)

References

  1. Wang, X.; Liu, S.; Du, P.; Liang, H.; Xia, J.; Li, Y. Object-Based Change Detection in Urban Areas from High Spatial Resolution Images Based on Multiple Features and Ensemble Learning. Remote Sens. 2018, 10, 276. [Google Scholar] [CrossRef]
  2. Tenedório, J.A.; Rebelo, C.; Estanqueiro, R.; Henriques, C.D.; Marques, L.; Gonçalves, J.A. New Developments in Geographical Information Technology for Urban and Spatial Planning. In Technologies for Urban and Spatial Planning: Virtual Cities and Territories; Pinto, N., Tenedório, J., Antunes, A., Cladera, J., Eds.; IGI Global: Hershey, PA, USA, 2014; pp. 196–227. [Google Scholar]
  3. Hussain, M.; Chen, D.; Cheng, A.; Wei, H.; Stanley, D. Change Detection from Remotely Sensed Images: From Pixel-Based to Object-Based Approaches. ISPRS J. Photogramm. Remote Sens. 2013, 80, 91–106. [Google Scholar] [CrossRef]
  4. Lu, D.; Mausel, P.; Brondízio, E.; Moran, E. Change Detection Techniques. Int. J. Remote Sens. 2004, 25, 2365–2401. [Google Scholar] [CrossRef]
  5. Singh, A. Digital change detection techniques using remotely-sensed data. Int. J. Remote Sens. 1989, 10, 989–1003. [Google Scholar] [CrossRef]
  6. Lunetta, R.S.; Johnson, D.M.; Lyon, J.G.; Crotwell, J. Impacts of imagery temporal frequency on land-cover change detection monitoring. Remote Sens. Environ. 2004, 89, 444–454. [Google Scholar] [CrossRef]
  7. Gang, C.; Geoffrey, J.H.; Luis, M.T.C.; Michael, A.W. Object-based change detection. Int. J. Remote Sens. 2012, 33, 4434–4457. [Google Scholar] [CrossRef]
  8. Bruzzone, L.; Bovolo, F. A novel framework for the design of change-detection systems for very-high-resolution remote sensing images. Proc. IEEE 2013, 101, 609–630. [Google Scholar] [CrossRef]
  9. Tewkesbury, A.P.; Comber, A.J.; Tate, N.J.; Lamb, A.; Fisher, P.F. A critical synthesis of remotely sensed optical image change detection techniques. Remote Sens. Environ. 2015, 160, 1–14. [Google Scholar] [CrossRef] [Green Version]
  10. Volpi, M.; Tuia, D.; Bovolo, F.; Kanevski, M.; Bruzzone, L. Supervised Change Detection in VHR Images Using Contextual Information and Support Vector Machines. Int. J. Appl. Earth Obs. Geoinf. 2013, 20, 77–85. [Google Scholar] [CrossRef]
  11. Coppin, P.; Jonckheere, I.; Nackaerts, K.; Muys, B.; Lambin, E. Digital Change Detection Methods in Ecosystem Monitoring: A Review. Int. J. Remote Sens. 2004, 25, 1565–1596. [Google Scholar] [CrossRef]
  12. İlsever, M.; Ünsalan, C. Two-Dimensional Change Detection Methods. SpringerBriefs Comput. Sci. 2012, 43, 469. [Google Scholar]
  13. Ma, L.; Li, M.; Blaschke, T.; Ma, X.; Tiede, D.; Cheng, L. Object-based change detection in urban areas: The effects of segmentation strategy, scale, and feature space on unsupervised methods. Remote Sens. 2016, 8, 761. [Google Scholar] [CrossRef]
  14. Plowright, A.; Tortini, R.; Coops, N.C. Determining Optimal Video Length for the Estimation of Building Height through Radial Displacement Measurement from Space. ISPRS Int. J. Geo-Inf. 2018, 7, 380. [Google Scholar] [CrossRef]
  15. Benz, U.C.; Hofmann, P.; Willhauck, G.; Lingenfelder, I.; Heynen, M. Multi-Resolution, Object-Oriented Fuzzy Analysis of Remote Sensing Data for GIS-Ready Information. ISPRS J. Photogramm. Remote Sens. 2004, 58, 239–258. [Google Scholar] [CrossRef]
  16. Blaschke, T. A Framework for Change Detection Based on Image Objects. Manuf. Eng. 2005, 73, 30–31. [Google Scholar]
  17. Tang, Y.; Zhang, L.; Huang, X. Object-Oriented Change Detection Based on the Kolmogorov–Smirnov Test Using High-Resolution Multispectral Imagery. Int. J. Remote Sens. 2011, 32, 5719–5740. [Google Scholar] [CrossRef]
  18. Huo, C.; Zhou, Z.; Lu, H.; Pan, C.; Chen, K. Fast Object-Level Change Detection for VHR Images. IEEE Geosci. Remote Sens. Lett. 2010, 7, 118–122. [Google Scholar] [CrossRef]
  19. Xiao, P.; Zhang, X.; Wang, D.; Yuan, M.; Feng, X.; Kelly, M. Change Detection of Built-up Land: A Framework of Combining Pixel-Based Detection and Object-Based Recognition. ISPRS J. Photogramm. Remote Sens. 2016, 119, 402–414. [Google Scholar] [CrossRef]
  20. Kiema, J.B.K. Texture Analysis and Data Fusion in the Extraction of Topographic Objects from Satellite Imagery. Int. J. Remote Sens. 2002, 23, 767–776. [Google Scholar] [CrossRef]
  21. Myint, S.W.; Lam, N.S.N.; Tyler, J.M. Wavelets for Urban Spatial Feature Discrimination. Photogramm. Eng. Remote Sens. 2004, 70, 803–812. [Google Scholar] [CrossRef]
  22. Dell’Acqua, F.; Gamba, P.; Ferrari, A.; Palmason, J.A. Exploiting Spectral and Spatial Information in Hyperspectral Urban Data with High Resolution. Geosci. Remote Sens. Lett. IEEE 2004, 1, 322–326. [Google Scholar] [CrossRef]
  23. Kovács, A.; Szirányi, T. Improved harris feature point set for orientation-sensitive urban-area detection in aerial images. IEEE Geosci. Remote Sens. Lett. 2013, 10, 796–800. [Google Scholar] [CrossRef]
  24. Tao, C.; Tan, Y.; Zou, Z.R.; Tian, J. Unsupervised Detection of Built-up Areas from Multiple High-Resolution Remote Sensing Images. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1300–1304. [Google Scholar] [CrossRef]
  25. Zhang, C.; Hu, Y.; Cui, W.H. Semiautomatic right-angle building extraction from very high-resolution aerial images using graph cuts with star shape constraint and regularization. J. Appl. Remote Sens. 2018, 12, 1. [Google Scholar] [CrossRef]
  26. Shackelford, A.K.; Davis, C.H. A Combined Fuzzy Pixel-Based and Object-Based Approach for Classification of High-Resolution Multispectral Data over Urban Areas. IEEE Trans. Geosci. Remote Sens. 2003, 41, 2354–2363. [Google Scholar] [CrossRef]
  27. Zhang, L.; Huang, X.; Huang, B.; Li, P. A Pixel Shape Index Coupled with Spectral Information for Classification of High Spatial Resolution Remotely Sensed Imagery. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2950–2961. [Google Scholar] [CrossRef]
  28. Huang, X.; Zhang, L. Morphological Building/Shadow Index for Building Extraction from High-Resolution Imagery over Urban Areas. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 161–172. [Google Scholar] [CrossRef]
  29. Xu, Z.; Wang, R.; Zhang, H.; Li, N.; Zhang, L. Building extraction from high-resolution sar imagery based on deep neural networks. Remote Sens. Lett. 2017, 8, 888–896. [Google Scholar] [CrossRef]
  30. Yang, H.L.; Yuan, J.; Lunga, D.; Laverdiere, M.; Rose, A.; Bhaduri, B. Building extraction at scale using convolutional neural network: Mapping of the united states. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 99, 1–15. [Google Scholar] [CrossRef]
  31. Shu, Z.; Hu, X.; Sun, J. Center-point-guided proposal generation for detection of small and dense buildings in aerial imagery. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1100–1104. [Google Scholar] [CrossRef]
  32. Peng, F.; Gong, J.; Wang, L.; Wu, H.; Liu, P. A New Stereo Pair Disparity Index (Spdi) for Detecting Built-up Areas from High-Resolution Stereo Imagery. Remote Sens. 2017, 9, 633. [Google Scholar] [CrossRef]
  33. Huang, X.; Zhang, L.; Zhu, T. Building Change Detection from Multitemporal High-Resolution Remotely Sensed Images Based on a Morphological Building Index. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 7, 105–115. [Google Scholar] [CrossRef]
  34. Xiao, P.; Yuan, M.; Zhang, X.; Feng, X.; Guo, Y. Cosegmentation for Object-Based Building Change Detection from High-Resolution Remotely Sensed Images. IEEE Trans. Geosci. Remote Sens. 2017, 55, 1587–1603. [Google Scholar] [CrossRef]
  35. Tang, Y.; Huang, X.; Zhang, L. Fault-Tolerant Building Change Detection from Urban High-Resolution Remote Sensing Imagery. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1060–1064. [Google Scholar] [CrossRef]
  36. Zhang, Q.; Huang, X.; Zhang, G. Urban Area Extraction by Regional and Line Segment Feature Fusion and Urban Morphology Analysis. Remote Sens. 2017, 9, 663. [Google Scholar] [CrossRef]
  37. Chen, J.; Deng, M.; Mei, X.; Chen, T.; Shao, Q.; Hong, L. Optimal Segmentation of a High-Resolution Remote-Sensing Image Guided by Area and Boundary. Int. J. Remote Sens. 2014, 35, 6914–6939. [Google Scholar] [CrossRef]
  38. Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC Superpixels Compared to State-of-the-Art Superpixel Methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282. [Google Scholar] [CrossRef] [PubMed]
  39. Gioi, R.G.V.; Jakubowicz, J.; Morel, J.M.; Randall, G. LSD: A Line Segment Detector. Image Process. Line 2012, 2, 35–55. [Google Scholar] [CrossRef]
  40. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  41. Dreiseitl, S.; Ohnomachado, L. Logistic regression and artificial neural network classification models: A methodology review. J. Biomed. Inform. 2002, 35, 352–359. [Google Scholar] [CrossRef]
  42. Jensen, J.R. Introductory Digital Image Processing: A Remote Sensing Perspective; Prentice-Hall: Upper Saddle River, NJ, USA, 2004; p. 382. [Google Scholar]
  43. Freire, S.; Santos, T.; Navarro, A.; Soares, F.; Silva, J.D.; Afonso, N. Introducing mapping standards in the quality assessment of buildings extracted from very high resolution satellite imagery. ISPRS J. Photogramm. Remote Sens. 2014, 90, 1–9. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Overall flowchart of the proposed method.
Figure 2. Building candidate area (BCA) generation process. (a) is an image block with the detected line segments (red lines); (b) shows the pixels on line segment (POLs) and building likelihood (BL) map (the colors from blue to red indicate the BL values from low to high); (c) is the initial BCA; and (d) is the final BCA. Green is for the covered building pixels, red is for the missing building pixels, and blue is for other ground object pixels in BCA.
Figure 3. Line-constrained shape (LCS) feature illustrations. (a) shows the calculation of the LCS of two pixels; (b) is the LCS value assignment for non-BCA pixels.
Figure 4. Components of feature matrix F (a) and the flowchart of object feature vector construction for change detection (b).
Figure 5. Process of training sample selection.
Figure 6. Study data. The first column is the images of phase one, the second column is the images of phase two, and the third column is the ground truth of changed buildings.
Figure 7. Line segment extraction and heat maps of line-constrained shape (LCS) feature, pixel shape index (PSI) and morphological building index (MBI). The first row shows two datasets and the line segments; the second row shows some local regions (rectangles in the first row) of line segments; the third row shows the heat map of LCS mean value of all orientations; the fourth row shows the heat map of the PSI mean value of all orientations; and the fifth row shows the heat map of MBI.
Figure 8. Sensitivity analysis of parameter ω. The first and second rows show the coverage of BCA to the building ground truth (green is for the covered building pixels, red is for the missing buildings, and blue is for other ground objects in BCA) in the two-phase images of dataset 2. The third row shows the change detection result of the proposed method, in which green is for the changed building pixels that are correctly detected, red is for the missing parts, and blue is for the incorrectly detected pixels.
Figure 9. Change detection results based on different features. Green represents the correct detection of the changed building pixels, red is for the missing pixels, and blue is for the incorrectly detected pixels.
Table 1. Change detection confusion matrix.

Number of Pixels       | Real Changed        | Real Unchanged      | Total
Detected as changed    | True positive (TP)  | False positive (FP) | D_c = TP + FP
Detected as unchanged  | False negative (FN) | True negative (TN)  | D_u = FN + TN
Total                  | T_c = TP + FN       | T_u = FP + TN       | N
Table 2. Accuracy evaluation for different ω settings.

ω  | Coverage Ratio in Image 2011 | Coverage Ratio in Image 2016 | Recall | FDR   | OA     | Kappa  | TA
10 | 92.51%                       | 92.79%                       | 74.66% | 1.25% | 96.35% | 0.7832 | 0.6712
50 | 99.14%                       | 97.33%                       | 87.74% | 1.41% | 97.51% | 0.8618 | 0.7787
90 | 99.02%                       | 88.11%                       | 81.22% | 1.44% | 96.82% | 0.8188 | 0.7187
Table 3. Accuracy evaluation for different features.

Dataset 1:
Feature Set | Recall | FDR    | OA     | Kappa  | TA
(1)         | 91.57% | 5.79%  | 93.97% | 0.7061 | 0.5858
(2)         | 48.03% | 4.12%  | 91.42% | 0.4637 | 0.3427
(3)         | 74.68% | 6.88%  | 91.40% | 0.5713 | 0.4473
(4)         | 83.80% | 10.62% | 88.86% | 0.5261 | 0.4121

Dataset 2:
Feature Set | Recall | FDR    | OA     | Kappa  | TA
(1)         | 87.74% | 1.41%  | 97.51% | 0.8618 | 0.7787
(2)         | 95.67% | 19.06% | 82.14% | 0.4393 | 0.3521
(3)         | 91.03% | 6.53%  | 93.22% | 0.6916 | 0.5731
(4)         | 91.57% | 17.04% | 83.82% | 0.4531 | 0.3612
