Article

Object-Level Double Constrained Method for Land Cover Change Detection

1 Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing 100101, China
2 University of Chinese Academy of Sciences, Beijing 100101, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Sensors 2019, 19(1), 79; https://doi.org/10.3390/s19010079
Submission received: 22 November 2018 / Revised: 13 December 2018 / Accepted: 21 December 2018 / Published: 26 December 2018
(This article belongs to the Special Issue Advances in Remote Sensing of Land-Cover and Land-Use Changes)

Abstract

Land cover change detection based on remote sensing has become increasingly important for protecting the ecological environment. Spatial features of images can be extracted by object-level methods. However, the computational complexity is high when many features are used to detect land cover change. Meanwhile, single-constrained change detection (SCCD) methods produce subjective and inaccurate results. Therefore, we propose an object-level double constrained change detection (ODCD) method for land cover change detection. First, spectral and spatial features were calculated based on multi-scale segmentation results. Second, using the significant difference test (SDT), feature differences among all categories were calculated, and the features with more significant differences were selected as the optimal features. Third, the maximum Kappa coefficient was used as the criterion for determining the optimal change intensity and correlation coefficient thresholds. Finally, the ODCD was validated using GF-1 satellite images from March 2016 and February 2017 of northern Beiqijia Town, Beijing. Through optimal feature selection, the feature dimension was reduced from 26 to 12. Compared with SCCD methods, the ODCD result was more reliable and accurate: its overall accuracy was 10% higher, its overall error was 27% lower, and its Kappa coefficient was 0.22 higher. In conclusion, the ODCD is effective for land cover change detection and can improve computational efficiency.

1. Introduction

Owing to the rapid increase of urban populations and the rapid expansion of urban areas, many ecological and environmental problems, such as reduced vegetation cover and increased surface runoff, have become gradually more serious [1]. As the core of ecological environment change monitoring, land cover change detection has become a hot topic in environmental science and ecology [2]. Remote sensing technology has the advantages of being macroscopic, comprehensive, dynamic, and rapid, and it is the most economical and effective means of detecting land cover changes [3]. Various remote-sensing methods have been applied to this problem: Yuan et al. used Principal Component Analysis (PCA) to identify land cover changes based on multi-temporal Landsat 5 TM images [4]; Johnson et al. used Change Vector Analysis (CVA) to detect land cover changes in Landsat 5 TM multispectral images [5]; Zhou and Yang used ratios of different images to detect changes in Anshun City, Guizhou Province [6]; and Li and Yeh used PCA to detect changes in Dongguan, Zhujiang Delta [7]. However, these methods all operate at the pixel level; thus, they cannot use the spatial characteristics of images and are prone to the serious "salt-and-pepper" phenomenon [8]. With improvements in spatial resolution, remote sensing images have become increasingly informative. Im et al. first introduced the object-level method for land cover classification and change detection [9]. Lobo et al. compared object-level methods with pixel-level methods and found that the results from the former are more easily interpreted and have better integrity for each patch [10]. Wang and Zhao detected land cover changes at the object level using high-resolution remote sensing images [11]. Moreover, Lu et al. used object-level methods to detect urbanization changes in high-resolution images [12]. These results showed that object-level methods have advantages over pixel-level methods. However, the objects generated by segmentation have multi-dimensional features, i.e., spectral, texture, and spatial features, and as the number of bands increases, the number of features also increases, which increases the computational burden if all the features are used for land cover change detection. Moreover, once the feature dimension exceeds a certain number, the change detection accuracy decreases [13]. Object-level change detection methods include direct comparison methods and post-classification comparison methods [14]. The post-classification comparison method depends on the quality of the classification, which limits it, whilst the direct comparison method has better stability and accuracy [15]. In recent decades, many scholars have developed object-level detection methods based on remote sensing. Quarmby and Townshend compared the differences of bands in images to detect urban land-use change [16]; Fan et al. used the image difference method to detect changes in Panzhihua [17]; and Li and Yeh used PCA to detect changes in Dongguan of the Pearl River Delta [18]. Yan proposed an object-level method that used the Mean-Shift algorithm to segment the image and CVA to obtain the detection result [19]. However, most of these methods used a single threshold to detect change, which made the results subjective. Object-level CVA is a direct comparison change detection method, which can usually obtain good results.
For example, Yu used Landsat TM/ETM+ images to compare object-level CVA with conventional object-level change detection methods (the spectral vector similarity method and the principal component difference method), and found that object-level CVA obtained the best land cover change detection results [20]. Qi et al. applied object-level CVA to land cover change detection with polarimetric SAR images, which improved the accuracy of change detection [21]. Wang et al. used object-level CVA for land cover change detection and obtained good results [22]. At the same time, Im et al. studied the correlation between image objects based on segmentation and extracted the change regions based on this correlation. They found that, as image resolution improves, object-level correlation analysis is well suited to extracting change information from remote sensing images [9]. Given that CVA can directly compare the differences between image features and that the correlation coefficient can analyze the correlation between image features, the combination of the two methods can detect land cover change more effectively. However, as the number of bands increases, judging change using CVA becomes relatively difficult [23,24]. Moreover, the change threshold is usually determined by empirical judgment, which makes the results neither objective nor effective. Therefore, this study proposes an object-level method with double constrained thresholds on the change intensity and the correlation coefficient (ODCD), which aims to reduce the feature dimension and to improve the computational efficiency, objectivity, and accuracy of land cover change detection.

2. Materials and Methods

2.1. Materials

The study area is located in the northern part of Beiqijia Town, Changping, Beijing (Figure 1). The typical land cover classes in this area are vegetation, residential, bare land, and waterbody. The data used in this study were 8-m resolution multi-spectral images of the GF-1 satellite acquired on 24 March 2016 and 23 February 2017.
The GF-1 satellite is a high-resolution Earth observation remote sensing satellite launched by China on 26 April 2013. It is equipped with two cameras (PMS) providing a 2-m resolution panchromatic band and 8-m resolution multispectral bands, as well as four 16-m resolution multispectral wide-field cameras (WFV1–WFV4). Its multispectral sensors have four bands: blue, green, red, and near-infrared. The revisit period is 4 days. GF-1 data are characterized by high resolution, wide swath, and a short revisit period, and they are widely used in agricultural remote sensing, environmental monitoring, and other fields [25,26].

2.2. Methods

This study used object-level CVA and the correlation coefficient to achieve land cover change detection. To reduce data redundancy and improve the quality of the selected features, the significant difference test (SDT) was performed so that the features with the most significant differences could be selected as the optimal features. The optimal thresholds of the change intensity and the correlation coefficient were then selected based on the maximum Kappa coefficient. The Kappa coefficient is a consistency test method proposed by Cohen in 1960 and is used to evaluate the classification results of remote sensing images [27,28]. It is calculated according to Equations (1) and (2).
$$k = \frac{P_0 - P_e}{1 - P_e} \tag{1}$$

$$P_0 = \frac{\sum_{i=1}^{n} P_{ii}}{N} \tag{2}$$

where $k$ is the Kappa coefficient, $P_0$ is the proportion of units in which the judges agreed, $P_e$ is the proportion of units for which agreement is expected by chance, $n$ is the number of classes, $N$ is the total number of samples, and $P_{ii}$ is the number of correctly classified samples of class $i$.
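As a concrete illustration, the minimal sketch below computes the Kappa coefficient of Equations (1) and (2) from a confusion matrix; the function name and the NumPy layout (rows as detection results, columns as reference data) are illustrative choices, not from the paper.

```python
import numpy as np

def kappa_coefficient(confusion: np.ndarray) -> float:
    """Cohen's Kappa from an n x n confusion matrix (Equations (1) and (2))."""
    n_total = confusion.sum()                       # N: total number of samples
    p0 = np.trace(confusion) / n_total              # observed agreement, Eq. (2)
    # Chance agreement: sum over classes of (row total * column total) / N^2.
    pe = (confusion.sum(axis=1) * confusion.sum(axis=0)).sum() / n_total ** 2
    return (p0 - pe) / (1 - pe)                     # Eq. (1)

# Example with the SCCD confusion matrix from Table 5.
sccd = np.array([[121, 5],
                 [15, 72]])
print(round(kappa_coefficient(sccd), 2))            # 0.8, matching Table 5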
The flow chart for the ODCD is shown in Figure 2, which includes the following steps. First, atmospheric correction, geometric correction, orthorectification, and image registration were performed to reduce noise and improve image quality. Second, multi-scale segmentation was used to obtain highly homogeneous objects, and the initial land cover categories (vegetation, bare land, residential, and waterbody) were classified. Third, common spectral, shape, and texture features of the objects were calculated based on the multi-scale segmentation results. Fourth, using the SDT, the differences of these features amongst all the categories were calculated, and the features with more significant differences were selected as the optimal features. Fifth, the change intensity was calculated via CVA using the optimal features, and the correlation coefficient between corresponding objects in the 2016 and 2017 GF-1 images was calculated. Sixth, based on the change intensity and the correlation coefficient calculated in the previous step, the maximum Kappa coefficient was used as the criterion for determining the optimal thresholds of the change intensity and the correlation coefficient. Seventh, using these optimal thresholds, the land cover change detection results were obtained. Finally, the overall accuracy, the Kappa coefficient, and the overall error were selected as the accuracy evaluation indexes.

2.2.1. Multi-Scale Segmentation

Multi-scale segmentation is a widely used image segmentation method that exhibits good results for object-level remote sensing image analysis. It comprehensively considers the spectral and spatial features of remote sensing images, and it uses a bottom-up iterative merging algorithm to segment an image into objects with high homogeneity [29]. The homogeneity of objects is calculated as the standard deviation of the objects' internal pixels, whilst the heterogeneity includes the spectral and shape heterogeneity of objects [30]. In this study, both images underwent multi-scale segmentation simultaneously to obtain the same segmentation objects. This avoided the situation that can arise when only a single image is segmented, where an object that is homogeneous in one image contains multiple or mixed objects in the other. Furthermore, the segmentation scale should be set so that the results are not overly fragmented while the differences between neighboring objects remain clear; the objects should have high internal homogeneity and be consistent with the actual boundaries of features. The main parameters of multi-scale segmentation are the segmentation scale and the homogeneity factor. The homogeneity factor comprises spectral and shape factors, and the sum of their weights is 1. Spectral features are typically the most important factors for image object generation. When the weight of the shape factor is higher than 0.5, the generated polygons are too regular, have no practical meaning, and therefore do not conform to the actual features of the objects [31]. Thus, the weight of the spectral features should be greater than 0.6 [32,33].
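The paper performs this step with a multiresolution segmentation tool; as an open-source stand-in, the sketch below stacks the two dates so that both images share one set of objects and segments the stack with scikit-image's felzenszwalb algorithm. This is not the same algorithm as the multi-scale segmentation used in this study, and the parameter values here are illustrative placeholders, not numerically equivalent to the scale of 25 or the weights reported below.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

# Placeholder 4-band images for the two dates; real GF-1 arrays would be loaded here.
img_t1 = np.random.rand(256, 256, 4)
img_t2 = np.random.rand(256, 256, 4)

# Stack the two dates so a single segmentation yields shared objects for both images.
stacked = np.concatenate([img_t1, img_t2], axis=2)   # 8 channels
segments = felzenszwalb(stacked, scale=25, sigma=0.5, min_size=50)
print(segments.max() + 1, "objects")
```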

2.2.2. Optimal Feature Selection

Feature Construction

Object-level change detection can utilize not only the spectral features but also the spatial features of images, including texture and shape features, to obtain more descriptive information. In traditional change detection methods, spectral features are the most important factor because they directly express the visual information of the image [34]. In this study, four commonly used spectral features were selected: the mean, the standard deviation, the normalized difference vegetation index (NDVI), and the normalized difference water index (NDWI). Their calculation formulas are shown in Table 1.
Texture features can reflect the regional characteristics of a remote sensing image. In 1973, Haralick proposed characteristic parameters for analyzing the Gray Level Co-occurrence Matrix (GLCM) [35]. The GLCM is a widely used method for calculating texture features. In this study, three common texture features were selected: correlation, dissimilarity, and energy. The calculation formulas are shown in Table 2.
Shape features reflect the shape information of an object in a remote sensing image, which helps to avoid the phenomena of "same object with different spectra, different objects with same spectrum" [36]. The area, length, width, shape index, and aspect ratio are generally used to describe shape features. In this study, the area, shape index, and aspect ratio were selected. The calculation formulas are shown in Table 3.
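As a minimal sketch of the spectral features in Table 1, the helper below computes the per-object mean, standard deviation, NDVI, and NDWI, assuming `image` is a (rows, cols, 4) GF-1 array with bands ordered blue, green, red, NIR and `segments` labels each pixel with its object id; all names are illustrative.

```python
import numpy as np

def object_spectral_features(image: np.ndarray, segments: np.ndarray, obj_id: int) -> dict:
    """Spectral features of Table 1 for one segmented object."""
    mask = segments == obj_id
    values = image[mask]                        # (n_pixels, 4) pixel values of the object
    blue, green, red, nir = values.mean(axis=0)  # per-band means, bands: B, G, R, NIR
    return {
        "mean": values.mean(axis=0),            # Table 1: mean (per band)
        "std": values.std(axis=0, ddof=1),      # Table 1: standard deviation (per band)
        "ndvi": (nir - red) / (nir + red),      # Table 1: NDVI
        "ndwi": (green - nir) / (green + nir),  # Table 1: NDWI
    }
```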

Feature Selection

Considering the spectral, texture, and shape features of objects and their multi-band characteristics, data redundancy is inevitable. To reduce this redundancy and improve feature quality, it is necessary to select the features that most effectively describe each object. The SDT tests whether the difference between an experimental group and a control group, or between two different treatments, is significant [37]. Therefore, we used the SDT to calculate, for each feature in each band, the significance of the differences between land cover types. The greater the difference in a feature between categories, the more discriminative that feature is, and the more suitable it is for selection as an optimal feature. Variance analysis was applied to perform the SDT in this study. The significant difference was calculated according to Equation (3).
$$F = \frac{MS_b}{MS_w} \tag{3}$$

where

$$MS_b = \frac{SS_b}{V_b} \tag{4}$$

$$MS_w = \frac{SS_w}{V_w} \tag{5}$$

$$SS_b = \frac{\sum_i \left( \sum_j X_{ij} \right)^2}{b} - \frac{\left( \sum_i \sum_j X_{ij} \right)^2}{N} \tag{6}$$

$$SS_w = SS - SS_b \tag{7}$$

$$SS = \sum_i \sum_j X_{ij}^2 - \frac{\left( \sum_i \sum_j X_{ij} \right)^2}{N} \tag{8}$$

$$V_b = k - 1 \tag{9}$$

$$V_w = N - k \tag{10}$$

where $F$ is the statistic calculated by the analysis of variance, $MS_b$ is the variance between groups, $MS_w$ is the variance within groups, $SS_b$ is the sum of squared deviations between groups, $SS_w$ is the sum of squared deviations within groups, $SS$ is the total sum of squared deviations, $X_{ij}$ is the $j$-th sample value of the $i$-th group, $N$ is the total number of samples, $b$ is the number of samples in each group, $V_b$ is the degree of freedom between groups, $V_w$ is the degree of freedom within groups, and $k$ is the total number of groups.
The statistic follows an F distribution, $F \sim F(V_b, V_w)$ [38]. From the F-distribution table, the critical value $F_\alpha(V_b, V_w)$ can be found; the value of $\alpha$ is generally 0.05 or 0.01. By comparing $F$ with $F_\alpha(V_b, V_w)$, the significance of the difference can be judged: when $F < F_{0.05}(V_b, V_w)$, the difference is not significant; when $F \geq F_{0.05}(V_b, V_w)$, the difference is significant; and when $F \geq F_{0.01}(V_b, V_w)$, the difference is extremely significant.
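A minimal sketch of this test with SciPy follows; the sample values for the two land cover categories are invented for illustration, and a real run would use the per-object feature values of each category.

```python
from scipy.stats import f, f_oneway

# Invented feature values (e.g., a band mean) for two land cover categories.
vegetation = [0.61, 0.58, 0.65, 0.59, 0.62]
residential = [0.21, 0.25, 0.19, 0.23, 0.22]

F_stat, _ = f_oneway(vegetation, residential)   # one-way ANOVA, Equation (3)
# Critical value F_alpha(V_b, V_w): here k = 2 groups and N = 10 samples.
F_crit = f.ppf(1 - 0.05, dfn=1, dfd=8)          # F_0.05(1, 8)
print(F_stat >= F_crit)                          # True -> significant at alpha = 0.05
```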

2.2.3. Change Vector Analysis

CVA was first proposed by Malila in 1980 [39]. It expresses the multiple characteristics of each object in an image as an n-dimensional feature vector. In this study, because the dimensions and magnitudes of the features used were quite different, standardization [40] was performed for each feature prior to its use in change detection. The change vector contains all the change information for a given object between two images and can be expressed as Equation (11).
$$\Delta G = G - H = \begin{pmatrix} x_{i1}(t_1) - x_{i1}(t_2) \\ x_{i2}(t_1) - x_{i2}(t_2) \\ \vdots \\ x_{in}(t_1) - x_{in}(t_2) \end{pmatrix} \tag{11}$$

where the feature vectors of object $i$ at times $t_1$ and $t_2$ are $G = (x_{i1}(t_1), x_{i2}(t_1), \ldots, x_{in}(t_1))^T$ and $H = (x_{i1}(t_2), x_{i2}(t_2), \ldots, x_{in}(t_2))^T$, respectively, $n$ is the number of features, and $x_{ik}(t)$ represents the normalized value of the $k$-th feature of object $i$ at time $t$.
The change intensity can be calculated using the Euclidean Distance as in Equation (12):
$$\| \Delta G \| = \sqrt{\sum_{k=1}^{n} \left( x_{ik}(t_1) - x_{ik}(t_2) \right)^2} \tag{12}$$
where $\| \Delta G \|$ characterizes all the feature differences between the two remote sensing images. The larger $\| \Delta G \|$, the more likely the object has changed. Changed and unchanged objects can be separated by setting a change threshold on the change intensity. Once the change threshold is determined, the changed areas can be identified from the change intensity map easily and accurately.
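Given per-object feature matrices for the two dates, Equation (12) is a one-line NumPy computation; the sketch below assumes `features_t1` and `features_t2` are (n_objects, n_features) arrays of the standardized optimal features, with names of our own choosing.

```python
import numpy as np

def change_intensity(features_t1: np.ndarray, features_t2: np.ndarray) -> np.ndarray:
    """Equation (12): per-object Euclidean distance between the two feature vectors."""
    return np.sqrt(((features_t1 - features_t2) ** 2).sum(axis=1))
```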

2.2.4. Correlation Coefficient Calculation

According to pattern recognition theory, multiple unrelated constraints should be used for recognition to avoid the limitations of a single constraint [41]. In traditional land cover change detection methods, only the change intensity is used to discriminate changed from unchanged areas, and the accuracy is not ideal. This study introduced the correlation coefficient and combined it with the change intensity to determine the changed areas. Based on the multi-scale segmentation results, we calculated the correlation coefficient between corresponding objects. When an object changes, the correlation coefficient is low; when it does not change, the correlation coefficient is high. The correlation coefficient is calculated using Equation (13).
$$R = \frac{\sum_{k=1}^{n} \left[ x_{ik}(t_1) - \bar{x}_i(t_1) \right] \cdot \left[ x_{ik}(t_2) - \bar{x}_i(t_2) \right]}{\sqrt{\sum_{k=1}^{n} \left[ x_{ik}(t_1) - \bar{x}_i(t_1) \right]^2 \times \sum_{k=1}^{n} \left[ x_{ik}(t_2) - \bar{x}_i(t_2) \right]^2}} \tag{13}$$
where $n$ is the number of bands, $x_{ik}(t)$ represents the mean gray value of all the pixels of object $i$ in band $k$ in the $t$-phase image, and $\bar{x}_i(t)$ represents the mean gray value of object $i$ over the $n$ bands in the $t$-phase image.
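Equation (13) is the Pearson correlation between the per-band mean gray values of the same object at the two dates, so NumPy's `corrcoef` reproduces it directly; the band values below are invented for illustration.

```python
import numpy as np

# Mean gray value of one object in each of the n = 4 bands at the two dates (invented).
bands_t1 = np.array([112.3, 98.6, 104.1, 131.7])
bands_t2 = np.array([110.8, 97.9, 103.5, 129.4])

R = np.corrcoef(bands_t1, bands_t2)[0, 1]   # equivalent to Equation (13)
print(R)                                     # close to 1 -> the object is likely unchanged
```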

2.2.5. Optimal Threshold Determination for Change Detection

According to the research of Fung and LeDrew [42], the Kappa coefficient based on a confusion matrix may be the most appropriate index for determining the optimal change threshold. Therefore, in this study, changed and unchanged samples were selected, and the change intensity and correlation coefficient thresholds that maximized the Kappa coefficient were selected as the optimal change thresholds. The two thresholds were then applied to detect the land cover changes, and the change detection result image was generated. The confusion matrix [43] for the land cover change detection results is shown in Table 4. The misjudgment error, omission error, detection accuracy, overall accuracy, and Kappa coefficient are calculated according to Equations (14)–(18).
$$\text{Misjudgment error} = N_{nc} / N_{tc} \tag{14}$$

$$\text{Omission error} = N_{cn} / N_{ct} \tag{15}$$

$$\text{Detection accuracy} = N_{cc} / N_{ct} \tag{16}$$

$$\text{Overall accuracy} = (N_{nn} + N_{cc}) / N \tag{17}$$

$$k_{hat} = \frac{N \cdot (N_{nn} + N_{cc}) - (N_{tn} \cdot N_{nt} + N_{tc} \cdot N_{ct})}{N^2 - (N_{tn} \cdot N_{nt} + N_{tc} \cdot N_{ct})} \tag{18}$$

where $N_{nn}$ represents the number of samples that are detected as unchanged and have not changed in practice, $N_{cn}$ represents the number of changed samples incorrectly identified as unchanged, $N_{tn}$ is the total number of unchanged samples in the test results, $N_{nc}$ represents the number of unchanged samples incorrectly identified as changed, $N_{cc}$ represents the number of samples that are detected as changed and have changed in practice, $N_{tc}$ represents the total number of changed samples in the test results, $N_{nt}$ represents the total number of unchanged samples in practice, $N_{ct}$ represents the total number of changed samples in practice, $N$ represents the total number of samples, and $k_{hat}$ is the Kappa coefficient.
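The threshold search itself can be written as a simple grid scan: for each candidate pair of thresholds, classify the training samples with the double constraint and keep the pair that maximizes the Kappa coefficient. The sketch below is a hedged illustration, not the paper's code; the 100 × 100 grid resolution is an arbitrary choice, and scikit-learn's `cohen_kappa_score` stands in for Equation (18).

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def best_thresholds(intensity: np.ndarray, correlation: np.ndarray, changed: np.ndarray):
    """Grid search for the (change intensity, correlation) thresholds maximizing Kappa.

    intensity, correlation: per-sample values; changed: per-sample 0/1 reference labels.
    """
    best_kappa, best_tg, best_tr = -1.0, None, None
    for t_g in np.linspace(intensity.min(), intensity.max(), 100):
        for t_r in np.linspace(correlation.min(), correlation.max(), 100):
            # Double constraint: an object is changed if its intensity exceeds t_g
            # AND its correlation coefficient falls below t_r.
            pred = ((intensity > t_g) & (correlation < t_r)).astype(int)
            kappa = cohen_kappa_score(changed, pred)
            if kappa > best_kappa:
                best_kappa, best_tg, best_tr = kappa, t_g, t_r
    return best_kappa, best_tg, best_tr
```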

3. Results and Discussion

3.1. Multi-Scale Segmentation

The optimal parameters for multi-scale segmentation of the remote sensing images were determined by comparison experiments. Different parameter segmentation results are shown in Figure 3, Figure 4 and Figure 5.
Figure 3 shows that a segmentation scale of 15 produces a result that is too fine and overly complex, whereas a scale of 35 produces a result that is too coarse to show clear differences between objects. A segmentation scale of 25 segments the image into patches with high internal homogeneity.
Figure 4 shows that when the weights of the shape and spectral features are 0.2 and 0.8, respectively, the result shows the differences between objects most clearly and segments the image into patches with high internal homogeneity.
Figure 5 shows that a compactness weight of 0.5 produces a result that is too fine and overly complex, whereas a weight of 0.9 produces a result that is too coarse to show clear differences. A compactness weight of 0.7 segments the image into patches with high internal homogeneity.
Therefore, the segmentation scale was set to 25, the spectral and shape weights to 0.8 and 0.2, and the smoothness and compactness weights to 0.3 and 0.7, respectively. The results of the multi-scale segmentation are shown in Figure 6. The chosen scale and weights were reasonable, since they avoided excessive fragmentation of the segmentation results and effectively reflected the differences between patches, which were consistent with the feature boundaries. Therefore, the results were satisfactory.

3.2. Optimal Feature Selection

Some of the differences within features, e.g., the differences in the mean gray value and the energy of each band, are shown below. Figure 7 shows the differences in the mean gray value of each band, where bands 1, 2, 3, and 4 represent the blue, green, red, and near-infrared bands, respectively. The critical value F0.05(1, 9) was 5.1174; when F was higher than F0.05(1, 9), the corresponding difference between two categories was significant. For the mean of the spectral features (Figure 7), the differences between residential and bare land in band 1 (Figure 8) and between vegetation and residential land in bands 3 and 4 (Figure 9) were not significant, because F was much lower than F0.05(1, 9). In band 2, the differences between all categories were significant, because F was higher than F0.05(1, 9).
For the energy of the texture features (Figure 10), the differences between residential and bare land were not significant in bands 2, 3, and 4, because F was much lower than F0.05(1, 9). In band 1, the differences between all categories were significant, because F was higher than F0.05(1, 9).
Using the SDT, the most distinctive features among the categories were finally selected (Figure 11). The most distinctive spectral features were the mean of band 2, the variance of band 1, the NDVI, and the NDWI. The most distinctive texture features were the correlation of bands 3 and 4, the dissimilarity of bands 1 and 3, and the energy of band 1. The selected shape features were the length–width ratio, the area, and the shape index. In total, 12 features were selected as the optimal features for land cover change detection.

3.3. Change Intensity and the Correlation Coefficient

According to the multi-scale segmentation results, we calculated the change intensity and correlation coefficient of the objects. The change intensity map for the GF-1 images in 2016 and 2017 is shown in Figure 12. When the patch was changed, its change intensity value was greater and the color was brighter. When the patch was unchanged, the change intensity value was smaller and the color was darker. The correlation coefficient map for the GF-1 images in 2016 and 2017 is shown in Figure 13. When the patch was changed, its correlation coefficient was smaller and its color was brighter. When the patch was unchanged, the correlation coefficient was larger and the color was darker.

3.4. Land Cover Change Detection

Using visual interpretation, 213 samples were selected for model training, including 87 changed samples and 126 unchanged samples. These training samples were used to evaluate the change binary maps generated during the threshold search loop. By calculating the Kappa coefficient, the change intensity threshold and correlation coefficient threshold corresponding to the maximum Kappa coefficient were obtained. The ODCD and SCCD methods were both used for land cover change detection.

3.4.1. Results from SCCD

When SCCD was applied for land cover change detection, only the change intensity was used. According to the training samples, the maximum Kappa coefficient was 0.80 and the optimal change intensity threshold was 0.30. That is, an object was considered changed when its change intensity was greater than 0.30. Based on this optimal threshold, a final binary map was acquired as shown in Figure 14. The confusion matrix corresponding to the maximum Kappa coefficient is shown in Table 5. By overlaying the land cover change map on the 2017 GF-1 image, as shown in Figure 15, it was clear that most of the changes were from bare land and vegetation to residential, with small parts changing from waterbody to bare land. SCCD was sensitive to seasonal changes in vegetation and to spectral changes in building roofs and bare land, which were easily misclassified as changes.

3.4.2. Results from the ODCD

When the ODCD was applied for land cover change detection, the change intensity and correlation coefficient were both used. According to the training samples, the maximum Kappa coefficient was 0.87, and the optimal thresholds of the change intensity and correlation coefficient were 0.26 and 0.94, respectively. That is, an object was considered changed when its change intensity was greater than 0.26 and its correlation coefficient was less than 0.94. According to these optimal thresholds, the change binary map was acquired as shown in Figure 16. The land cover change map overlaid on the 2017 GF-1 image is shown in Figure 17, and some typical change examples are shown in Figure 18. The confusion matrix corresponding to the maximum Kappa coefficient is shown in Table 6; the overall accuracy and the Kappa coefficient of the ODCD are clearly higher than those of SCCD. The ODCD can compensate for the seasonal sensitivity of SCCD and improve the accuracy of change detection.
Among all the above examples, the ODCD efficiently detected the changed areas. The typical change detection results in Figure 18a,b show changes from bare land to residential, and the results in Figure 18c show changes from waterbody to bare land, whilst the results in Figure 18d show changes from vegetation to bare land. The results in Figure 18e show changes from vegetation to residential. Owing to the resolution of the images, some small details were detected as changed areas, such as shadows between buildings.

3.5. Precision Comparison

To further confirm the validity and accuracy of the ODCD, a total of 333 samples, excluding the samples used to determine the optimal change thresholds, were selected for validation, including 133 changed samples and 200 unchanged samples. We compared the change detection results of the ODCD and SCCD using the overall accuracy, Kappa coefficient, and overall error as the accuracy evaluation indexes. The quantitative evaluation results are shown in Table 7 and Table 8, and the accuracy comparison between the ODCD and SCCD is shown in Table 9. The overall accuracy of the ODCD was 92.19%, about 10% higher than that of SCCD (81.98%). The Kappa coefficient of the ODCD was 0.84, about 0.22 higher than that of SCCD (0.62). The misjudgment and omission errors of the ODCD were 15% and 12% lower than those of SCCD, respectively, so the total error of the ODCD was 27% lower than that of SCCD.
To test whether the differences in accuracy were statistically significant, we carried out a hypothesis test (t-test). An additional 121 samples were selected for validation, including 89 changed samples and 32 unchanged samples. The quantitative evaluation results are shown in Table 10, Table 11 and Table 12. In the t-test, α is the significance level, generally set to 0.05; when the p-value calculated by the t-test is less than α, the differences are statistically significant [44]. The results are shown in Table 13.
Given that the p-values were less than 0.05, the differences were statistically significant, which confirmed the validity and accuracy of the ODCD.
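For reference, a hedged sketch of such a significance check follows, assuming a two-sample t-test that does not presume equal variances (Welch's test, in line with Reference [44]) applied to the per-group overall accuracies of the two methods; the paper does not state the exact test variant, so this sketch need not reproduce the p-values in Table 13.

```python
from scipy.stats import ttest_ind

# Overall accuracies for the training group and the two verification groups.
sccd_oa = [0.9061, 0.8198, 0.8366]
odcd_oa = [0.9390, 0.9219, 0.9173]

# Welch's two-sample t-test (equal_var=False), per Reference [44].
t_stat, p_value = ttest_ind(odcd_oa, sccd_oa, equal_var=False)
print(t_stat, p_value)
```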

4. Conclusions

The ODCD proposed in this study for land cover change detection resolves the problem of computational complexity caused by excessive feature variables in feature extraction. It also addresses the problem that determining the change threshold by empirical judgment makes the results less objective and less accurate: the ODCD employs a more objective threshold determination algorithm that simultaneously determines the thresholds of the change intensity and the correlation coefficient by maximizing the Kappa coefficient. It is therefore a more effective method for land cover change detection. The major conclusions are as follows.
(1)
By combining change vector analysis with the correlation coefficient at the object level, the ODCD can overcome the seasonal sensitivity of SCCD and improve the accuracy of land cover change detection. The overall accuracy of the ODCD was 92.19%, 10% higher than that of SCCD, and its overall error was 20%, 27% lower than that of SCCD.
(2)
The ODCD can reduce the number of features and improve computational efficiency. The SDT is an effective feature optimization method: using optimal feature selection, the feature dimension was reduced from 26 to 12, which increased the calculation speed.
Therefore, the ODCD can provide a useful reference for ecological environment assessment, land use planning, and remote sensing of land cover change.
Future studies on the ODCD should involve a greater number of different objects, such as objects related to residential change detection. In addition, the image resolution in this study led to some small details, such as building shadows, being detected as changed areas; therefore, further studies should also utilize images of different resolutions.

Author Contributions

Conceptualization, Z.W.; methodology, Z.W., Y.L. and Y.R.; validation, Z.W.; formal analysis, Z.W., Y.L. and Y.R.; investigation, Z.W. and H.M.; data curation, Z.W.; writing—original draft preparation, Z.W.; writing—review and editing, Z.W., Y.L. and Y.R.

Funding

This study was funded by the major projects of High resolution Earth Observation System of China (30-Y20A07-9003-17/18), the National Natural Science Foundation of China (41601387), and the Director Funds for Young Scholar of Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences (No. Y6SJ2600CX).

Acknowledgments

We would like to thank the staff who provided the reference data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sun, Q.; Zhang, X.; Zhang, H.; Niu, H. Coordinated development of a coupled social economy and resource environment system: A case study in Henan Province, China. Environ. Dev. Sustain. 2018, 20, 1385–1404. [Google Scholar] [CrossRef]
  2. Yue, W.Z.; Xu, J.H.; Xu, L.H. An analysis on eco-environmental effect of urban land use based on remote sensing images: A case study of urban thermal environment and NDVI. Acta Ecol. Sin. 2006, 26, 1450–1460. [Google Scholar] [CrossRef]
  3. Liverman, D.; Moran, E.F.; Rindfuss, R.R. People and Pixels: Linking Remote Sensing and Social Science; National Academies Press: Washington DC, USA, 1998; pp. 362–363. [Google Scholar] [CrossRef]
  4. Yuan, D.; Elvidge, C. NALC Land Cover Change Detection Pilot Study: Washington, D.C. Area Experiments. Remote Sens. Environ. 1998, 66, 166–178. [Google Scholar] [CrossRef]
  5. Johnson, R.D.; Kasischke, E.S. Change vector analysis: A technique for the multispectral monitoring of land cover and condition. Int. J. Remote Sens. 2006, 19, 411–426. [Google Scholar] [CrossRef]
  6. Zhou, B. The research on land use change detection by using direct classification of stacked multitemporal TM images. J. Nat. Resour. 2001, 16, 263–268. [Google Scholar] [CrossRef]
  7. Li, X.; Yeh, A.G.O. Application of remote sensing for monitoring and analysis of urban expansion: A case study of Dongguan. Geogr. Res. 1997, 16, 1450–1460. [Google Scholar] [CrossRef]
  8. Li, X.; Shu, N.; Yang, J.; Li, L. The land-use change detection method using object-based feature consistency analysis. In Proceedings of the 2011 19th International Conference on Geoinformatics, Shanghai, China, 24–26 June 2011; pp. 1–6. [Google Scholar] [CrossRef]
  9. Im, J.; Jensen, J.R.; Tullis, J.A. Object-based change detection using correlation image analysis and image segmentation. Int. J. Remote Sens. 2008, 29, 399–423. [Google Scholar] [CrossRef]
  10. Lobo, A.; Chic, O.; Casterad, A. Classification of Mediterranean crops with multisensor data: Per-pixel versus per-object statistics and image segmentation. Int. J. Remote Sens. 1996, 17, 2385–2400. [Google Scholar] [CrossRef]
  11. Wang, W.J.; Zhao, Z.M.; Zhu, H.Q. Object-oriented multi-feature fusion change detection method for high resolution remote sensing image. In Proceedings of the 2009 17th International Conference on Geoinformatics, Fairfax, VA, USA, 12–14 August 2009. [Google Scholar] [CrossRef]
  12. Lu, D.; Hetrick, S.; Moran, E.; Li, G. Detection of urban expansion in an urban-rural landscape with multitemporal Quick Bird images. J. Appl. Remote Sens. 2010, 4, 201–210. [Google Scholar] [CrossRef]
  13. Hussain, E.; Shan, J. Object-based urban land cover classification using rule inheritance over very high-resolution multisensor and multitemporal data. Mapp. Sci. Remote Sens. 2016, 53, 164–182. [Google Scholar] [CrossRef]
  14. Zhou, Q.M. Review on Change Detection Using Multi-temporal Remotely Sensed Imagery. Acta Ecol. Sin. 2011, 2, 28–33. [Google Scholar]
  15. Zhao, M.; Zhao, Y.D. Object-oriented and multi-feature hierarchical change detection based on CVA for high-resolution remote sensing imagery. Acta Ecol. Sin. 2018, 22, 119–131. [Google Scholar] [CrossRef]
  16. Quarmby, N.A.; Townshend, J.R.G. Preliminary analysis of SPOT HRV multispectral products of an arid environment. Int. J. Remote Sens. 1986, 7, 1869–1877. [Google Scholar] [CrossRef]
  17. Fan, H.; Ainai, M.A.; Jing, L.I. Case Study on Image Differencing Method for Land Use Change Detection Using Thematic Data in Renhe District of Panzhihua. J. Remote Sens. 2001, 5, 75–80. [Google Scholar]
  18. Li, X.; Yeh, A.G.O. Accuracy Improvement of Land Use Change Detection Using Principal Components Analysis: A Case Study in the Pearl River Delta. J. Remote Sens. 1997, 1, 283–288. [Google Scholar]
  19. Yuanyong, D.; Fang, S.; Yao, C. Change detection for high-resolution images using multilevel segment method. Acta Ecol. Sin. 2016, 20, 129–137. [Google Scholar] [CrossRef]
  20. Yu, X.F.; Luo, Y.Y.; Zhuang, D.F.; Wang, S.K.; Wang, Y. Comparative analysis of land cover change detection in an Inner Mongolia grassland area. Acta Ecol. Sin. 2014, 34, 7192–7201. [Google Scholar] [CrossRef]
  21. Qi, Z.; Yeh, A.G.O.; Li, X.; Zhang, X. A three-component method for timely detection of land cover changes using polarimetric SAR images. J. Photogramm. Remote Sens. 2015, 107, 3–21. [Google Scholar] [CrossRef]
  22. Wang, L.; Yan, L.I.; Wang, Y. Research on Land Use Change Detection Based on an Object-oriented Change Vector Analysis Method. J. Geo-Inf. Sci. 2014, 27, 74–80. [Google Scholar] [CrossRef]
  23. Chen, J.; Chun-Yang, H.E.; Zhuo, L. Land Use/Cover Change Detection with Change Vector Analysis (CVA): Change Type Determining. J. Remote Sens. 2001, 5, 346–352. [Google Scholar]
  24. Lambin, E.F.; Strahler, A.H. Change-vector analysis in multitemporal space: a tool to detect and categorize land-cover change processes using high temporal-resolution satellite data. Remote Sens. Environ. 1994, 48, 231–244. [Google Scholar] [CrossRef]
  25. Ling, O.; Mao, D.; Wang, Z.; Li, H.; Man, W.; Jia, M.; Liu, M.; Zhang, M.; Liu, H. Analysis crops planting structure and yield based on GF-1 and Landsat8 OLI images. Trans. Chin. Soc. Agric. Eng. 2017, 33, 147–156. [Google Scholar] [CrossRef]
  26. Fan, J.; Zhao, D.; Wang, J. Oil spill GF-1 remote sensing image segmentation using an evolutionary feedforward neural network. In Proceedings of the 2014 International Joint Conference on Neural Networks (IJCNN), Beijing, China, 6–11 July 2014; pp. 460–464. [Google Scholar]
  27. Cohen, J. A Coefficient of Agreement for Nominal Scales. Educ. Psychol. Meas. 1960, 20, 37–46. [Google Scholar] [CrossRef]
  28. Cohen, J. Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit. Psychol. Bull. 1968, 70, 213–220. [Google Scholar] [CrossRef]
  29. Wei, S.U.; Jing, L.I.; Chen, Y.H.; Zhang, J.S.; Hu, D.Y.; Low, T.M. Object-oriented Urban Land-cover Classification of Multi-scale Image Segmentation Method—A Case Study in Kuala Lumpur City Center, Malaysia. J. Remote Sens. 2007, 11, 521–530. [Google Scholar]
  30. Zhiwei, Q. Object-oriented Multi-scale Segmentation Algorithm for Remote Sensing Image. Geospat. Inf. 2013, 11, 95–96. [Google Scholar] [CrossRef]
  31. Ma, H.R. Object-Based Remote Sensing Image Classification of Forest Based on Multi-Level Segmentation; Beijing Forestry University: Beijing, China, 2014. [Google Scholar]
  32. Zhuang, H.; Deng, K.; Fan, H.; Yu, M. Strategies Combining Spectral Angle Mapper and Change Vector Analysis to Unsupervised Change Detection in Multispectral Images. IEEE Geosci. Remote Sens. Lett. 2016, 13, 681–685. [Google Scholar] [CrossRef]
  33. Wu, X.H. The Studies on Land Cover Change Detection Based on Object-Oriented Method; Henan University of Technology: Zhengzhou, Henan, China, 2013. [Google Scholar]
  34. Zhang, Z.J.; Li, A.N.; Lei, G.B.; Bian, J.H.; Wu, B.F. Change detection of remote sensing images based on multiscale segmentation and decision tree algorithm over mountainous area: A case study in Panxi region, Sichuan Province. Acta Ecol. Sin. 2014, 34, 7222–7232. [Google Scholar] [CrossRef]
  35. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, 6, 610–621. [Google Scholar] [CrossRef]
  36. Wu, Z.; Zeng, J.X.; Gao, Q.Q. Aircraft target recognition in remote sensing images based on saliency images and multi-feature combination. J. Image Graph. 2017, 22, 532–541. [Google Scholar] [CrossRef]
  37. Brunsdon, C.; Fotheringham, A.S.; Charlton, M. Some Notes on Parametric Significance Tests for Geographically Weighted Regression. J. Reg. Sci. 2010, 39, 497–524. [Google Scholar] [CrossRef]
  38. Statistical Processing and Explanation of GB/T. 4883-2008; Standardization Administration of the PRC: Beijing, China, 2008.
  39. Malila, W.A. Change Vector Analysis: An Approach for Detecting Forest Changes with Landsat. Available online: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.462.1459&rep=rep1&type=pdf (accessed on 26 December 2018).
  40. Pyle, D. Data Preparation for Data Mining; Morgan Kaufmann Publishers: San Francisco, CA, USA, 1999; pp. 375–381. ISBN 9781558605299. [Google Scholar]
  41. Jain, A.K.; Duin, R.P.W.; Mao, J. Statistical pattern recognition: A review. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 4–37. [Google Scholar] [CrossRef]
  42. Fung, T.; Ledrew, E.F. The Determination of Optimal Threshold Levels for Change Detection Using Various Accuracy Indices. Photogramm. Eng. Remote Sens. 1988, 54, 1449–1454. [Google Scholar]
  43. Van Oort, P.A.J. Interpreting the change detection error matrix. Remote Sens. Environ. 2007, 108, 1–8. [Google Scholar] [CrossRef]
  44. Neuhäuser, M. Two-sample tests when variances are unequal. Anim. Behav. 2002, 63, 823–825. [Google Scholar] [CrossRef]
Figure 1. Map and GF-1 satellite images of the study area.
Figure 2. Flow chart of object-level double constrained change detection (ODCD).
Figure 3. Different segmentation scales. (a) The weight of the shape factor is 0.2, the weight of the spectral feature is 0.8, the segmentation scale is 15. (b) The weight of the shape factor is 0.2, the weight of the spectral feature is 0.8, the segmentation scale is 25. (c) The weight of the shape factor is 0.2, the weight of the spectral feature is 0.8, the segmentation scale is 35.
Figure 4. Different weights of the shape and spectral features. (a) The weight of the shape factor is 0.2, the weight of the spectral feature is 0.8, the segmentation scale is 25. (b) The weight of the shape factor is 0.4, the weight of the spectral feature is 0.6, the segmentation scale is 25. (c) The weight of the shape factor is 0.6, the weight of the spectral feature is 0.4, the segmentation scale is 25.
Figure 5. Different weights of compactness and smoothness. (a) The weight of smoothness is 0.5, the weight of compactness is 0.5, the segmentation scale is 25. (b) The weight of smoothness is 0.3, the weight of compactness is 0.7, the segmentation scale is 25. (c) The weight of smoothness is 0.1, the weight of compactness is 0.9, the segmentation scale is 25.
Figure 6. (a) Typical land cover types on the GF-1 images. (b) Multi-scale segmentation results of the GF-1 images.
Figure 7. Comparison of differences in the mean grey value of each band for land cover types.
Figure 8. Comparison of differences in the mean grey value of each band for residential and bare land.
Figure 9. Comparison of differences in the mean grey value of each band for vegetation and residential land.
Figure 10. Comparison of differences in the energy texture feature of each band.
Figure 11. The processes of selecting the optimal features.
Figure 12. Change intensity map for the GF-1 images in 2016 and 2017.
Figure 13. Correlation coefficient map for the GF-1 images in 2016 and 2017.
Figure 14. Change binary map for land cover detection based on SCCD.
Figure 15. SCCD land cover change detection result overlaid on the GF-1 image of Beijing in February 2017.
Figure 16. Change binary map for land cover detection based on the object-level double constrained change detection (ODCD).
Figure 17. ODCD land cover change detection result overlaid on the GF-1 image of Beijing in February 2017.
Figure 18. Examples of typical land cover change types. (a(1)–e(1)): WorldView-2 images in 2016; (a(2)–e(2)): WorldView-2 images in 2017; (a(3)–e(3)): land cover change detection results using GF-1 images.
Table 1. Spectral features.
- Mean: $\mu = \frac{1}{n}\sum_{i=1}^{n} V_i$, the sum of all pixel values $V_i$ divided by the total number of pixels $n$ in one object.
- Standard deviation: $\delta = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(V_i - \mu)^2}$, where $V_i$ is the value of each pixel in the object and $\mu$ is the mean of the object.
- NDVI: $\mathrm{NDVI} = \frac{\rho_{NIR} - \rho_R}{\rho_{NIR} + \rho_R}$, where $\rho_{NIR}$ is the reflectance of the near-infrared band and $\rho_R$ is the reflectance of the red band.
- NDWI: $\mathrm{NDWI} = \frac{\rho_{Green} - \rho_{NIR}}{\rho_{Green} + \rho_{NIR}}$, where $\rho_{Green}$ is the reflectance of the green band and $\rho_{NIR}$ is the reflectance of the near-infrared band.
Table 2. Texture features.
- Correlation: $\mathrm{Correlation} = \frac{\sum_{i=1}^{k}\sum_{j=1}^{k} (ij) P(i,j) - u_i u_j}{S_i S_j}$, where $i$ is the gray value of a point in the image, $j$ is the gray value of another point offset from it, $P(i,j)$ is the frequency of occurrence of the gray-level pair in the gray level co-occurrence matrix, $u_i$ and $u_j$ are the mean values in the row and column directions, respectively, and $S_i$ and $S_j$ are the variances in the row and column directions, respectively. Correlation reflects the consistency of image texture, i.e., the similarity of co-occurrence matrix elements in the row or column direction.
- Dissimilarity: $\mathrm{Dissimilarity} = \sum_{i,j=0}^{n-1} P(i,j)|i-j|$, where $P(i,j)$ is the frequency of occurrence of the gray-level pair in the gray level co-occurrence matrix. The higher the local contrast, the higher the dissimilarity.
- Energy: $\mathrm{Energy} = \sum_{i,j=0}^{n-1} P(i,j)^2$, where $P(i,j)$ is the frequency of occurrence of the gray-level pair in the gray level co-occurrence matrix. Energy is also called the angular second moment; when the image is a homogeneous area with a consistent texture, its energy is greater.
Table 3. Shape features.
- Area: $A = \sum_{i=1}^{n} x_i$, where $x_i$ is pixel $i$ of the object. This describes the size of the object; for non-georeferenced data, the area of a pixel is 1.
- Aspect ratio: $\gamma = \frac{l}{w} = \frac{eig_1(S)}{eig_2(S)}$, where $S$ is the covariance matrix composed of the coordinates of the points after object vectorization, $w$ is the width, and $l$ is the length of the object.
- Shape index: $SI = \frac{p}{4\sqrt{A}}$, where $p$ is the perimeter of the image object and $A$ is its area. This describes the compactness of an object: the higher the compactness, the greater the density, and the more similar the shape is to a square.
Table 4. Confusion matrix for land cover change detection.

| Test Results \ Assessment Data | Unchanged | Changed | Total |
|---|---|---|---|
| Unchanged | $N_{nn}$ | $N_{cn}$ | $N_{tn}$ |
| Changed | $N_{nc}$ | $N_{cc}$ | $N_{tc}$ |
| Total | $N_{nt}$ | $N_{ct}$ | $N$ |
Table 5. Confusion matrix for the maximum Kappa coefficient of single-constrained change detection (SCCD).

| Test Results \ Verification Samples | Unchanged | Changed | Total | User Accuracy (%) |
|---|---|---|---|---|
| Unchanged | 121 | 5 | 126 | 96.03 |
| Changed | 15 | 72 | 87 | 82.70 |
| Total | 136 | 77 | 213 | |
| Producer Accuracy (%) | 88.97 | 93.50 | | |

Overall accuracy = 90.61%; Kappa coefficient = 0.80.
Table 6. Confusion matrix for the maximum Kappa coefficient of the ODCD.

| Test Results \ Verification Samples | Unchanged | Changed | Total | User Accuracy (%) |
|---|---|---|---|---|
| Unchanged | 122 | 8 | 130 | 93.85 |
| Changed | 5 | 78 | 83 | 93.98 |
| Total | 127 | 86 | 213 | |
| Producer Accuracy (%) | 96.06 | 90.70 | | |

Overall accuracy = 93.9%; Kappa coefficient = 0.87.
Table 7. Confusion matrix of SCCD.

| Test Results \ Verification Samples | Unchanged | Changed | Total | User Accuracy (%) |
|---|---|---|---|---|
| Unchanged | 172 | 28 | 200 | 86.00 |
| Changed | 32 | 101 | 133 | 75.94 |
| Total | 204 | 129 | 333 | |
| Producer Accuracy (%) | 84.31 | 78.29 | | |
Table 8. Confusion matrix of ODCD.

| Test Results \ Verification Samples | Unchanged | Changed | Total | User Accuracy (%) |
|---|---|---|---|---|
| Unchanged | 186 | 14 | 200 | 93.00 |
| Changed | 12 | 121 | 133 | 90.98 |
| Total | 198 | 135 | 333 | |
| Producer Accuracy (%) | 93.94 | 89.63 | | |
Table 9. Accuracy comparison table between SCCD and ODCD. The total error is the sum of the misjudgment and omission errors.

| Method | Overall Accuracy | Kappa Coefficient | Misjudgment Error | Omission Error |
|---|---|---|---|---|
| SCCD | 81.98% | 0.62 | 24% | 22% |
| ODCD | 92.19% | 0.84 | 9% | 10% |
Table 10. Confusion matrix of SCCD.

| Test Results \ Verification Samples | Unchanged | Changed | Total |
|---|---|---|---|
| Unchanged | 73 | 16 | 89 |
| Changed | 4 | 28 | 32 |
| Total | 77 | 44 | 121 |
Table 11. Confusion matrix of the ODCD.

| Test Results \ Verification Samples | Unchanged | Changed | Total |
|---|---|---|---|
| Unchanged | 83 | 6 | 89 |
| Changed | 4 | 28 | 32 |
| Total | 87 | 34 | 121 |
Table 12. Accuracy comparison table between SCCD and ODCD.

| Method | Overall Accuracy | Kappa Coefficient |
|---|---|---|
| SCCD | 83.66% | 0.62 |
| ODCD | 91.73% | 0.80 |
Table 13. t-Test for differences.

| | Overall Accuracy Difference | Kappa Coefficient Difference |
|---|---|---|
| Training group | 3.29% | 0.07 |
| Verification group 1 | 10.21% | 0.18 |
| Verification group 2 | 8.07% | 0.22 |
| p-value | 0.03186 | 0.02286 |
