Article

Unsupervised Change Detection Using Spectrum-Trend and Shape Similarity Measure

1
MNR Key Laboratory of Land Environment and Disaster Monitoring, China University of Mining and Technology, Xuzhou 221116, China
2
School of Environment Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(21), 3606; https://doi.org/10.3390/rs12213606
Submission received: 23 September 2020 / Revised: 26 October 2020 / Accepted: 31 October 2020 / Published: 3 November 2020
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

The emergence of very high resolution (VHR) images poses major challenges for change detection. Traditional pixel-level approaches struggle to achieve satisfactory performance because of radiometric differences between acquisitions. This work proposes a novel feature descriptor for VHR remote sensing images based on spectrum-trend and shape context. The proposed method comprises two main steps: the spectrum-trend graph is generated first, and shape context is then applied to describe the shape of the spectrum-trend. By constructing the spectrum-trend graph, spatial and spectral information is integrated effectively. The approach is evaluated on QuickBird and SPOT-5 satellite images. Quantitative analysis of comparative experiments demonstrates the effectiveness of the proposed technique in handling radiometric differences and improving the accuracy of change detection; both overall accuracy and robustness are boosted. Moreover, this work provides a novel viewpoint for discriminating changed and unchanged pixels by comparing the shape similarity of the local spectrum-trend.

1. Introduction

Change detection, the process of distinguishing changed from unchanged regions [1], is an attractive scientific area of great significance. It is performed by analyzing remote sensing images obtained from the same geographical area at different times [2]. Owing to the rapid improvement of observation platforms [3], it has become easier for researchers to obtain multi-temporal remote sensing images. In the past decades, change detection has been widely applied in many fields, such as land cover monitoring [4,5,6,7], environmental protection [8], and human activity detection [9,10].
Extensive change detection algorithms have been investigated. Depending on whether training samples are required, change detection approaches can be categorized into supervised and unsupervised methods. Many supervised algorithms have already been proposed and applied in practice [11,12,13,14]. Generally speaking, supervised methods can achieve higher accuracy than unsupervised ones; however, it is difficult to collect enough ground truth in many circumstances. This is why many researchers have devoted their efforts to unsupervised methods. In this work, we focus on unsupervised change detection.
A variety of unsupervised methods have been devised for change detection. Their main steps are preprocessing, producing the change magnitude image, and generating the binary change map. The purpose of the first step is to suppress noise while preserving real change information. The main interference factors are radiometric differences and geometric distortion, so radiometric correction methods [15,16,17] and registration techniques [18] are applied to reduce their impact. After preprocessing, noise is reduced and real change information is enhanced. The generation of change magnitude images is a pivotal part of change detection. Image differencing and image ratioing can be viewed as the most traditional approaches; they have been widely used because of their simple operation, easy implementation, and low computational load. However, their low accuracy and high requirements on image quality are shortcomings that cannot be overlooked. Change vector analysis (CVA) [19] is one of the most typical unsupervised algorithms and is widely adopted in change detection. It can make full use of band information to detect changed pixels and provide change information [20]. He et al. [21] integrated textural with spectral information to enhance the performance of the traditional CVA algorithm; the extended CVA achieved better accuracy owing to the rich textural information. However, CVA remains sensitive to radiometric differences between remote sensing images [22]; consequently, "salt-and-pepper" noise [23] appears in the change map. In addition to algebraic methods, feature-based methods are also popular in change detection. Gabor wavelet features [24] can achieve remarkable accuracy at a low computational cost. Li et al. [25] first extracted features based on a Gabor filter and then generated difference images using a Markov random field (MRF) [26,27] neighborhood system. Although many feature extraction methods exist, most are tailored to specific situations, and none is universal enough to be applicable to all of them. Finally, an appropriate method must be selected to analyze the change magnitude images. K-Means [28] is a commonly used clustering method that partitions unlabeled samples into K clusters through iterations. Soft clustering algorithms, such as the fuzzy c-means (FCM) clustering technique [29], which assigns membership degrees, have been proved more effective in segmentation.
In the past few years, very high resolution (VHR) images have become increasingly available to researchers [30,31,32]. They provide more abundant information, but the richer spatial detail and the limitations in the spectral domain increase the difficulty of change detection [33]. Algorithms based on spectral information are easily affected by radiometric differences caused by atmospheric conditions, solar altitude, and so on. As a result, serious salt-and-pepper noise commonly appears when VHR images are used. Compared with low and medium resolution images, more pseudo-changes are detected due to the increased variability in VHR images [34]. It is therefore urgent to settle these problems; this paper focuses on enhancing the usability of the algorithm and improving accuracy.
To address the challenges brought by VHR images, several algorithms have been investigated recently. Hao et al. [35] applied an improved superpixel-based MRF model to integrate change information. Gong et al. [36] developed a method based on hierarchical difference to extract features from multi-temporal images. Ding [37] presented robust kernel principal component analysis to improve detection performance. These algorithms have displayed good accuracy; however, they are sensitive to radiometric differences and highly dependent on image quality and segmentation results.
To boost the accuracy of change detection, this paper proposes a novel method, called the local-scene spectrum-trend shape context (LSSC) descriptor. LSSC reduces the effect of radiometric differences on change detection. Instead of using spectral values directly, it constructs a spectrum-trend graph to express the feature and measures the similarity between shapes.
The contributions of this work are as follows:
  • Neighborhood spatial information is integrated effectively with spectral information in the form of the spectrum-trend graph. The discrete spectral values are transformed into a two-dimensional (2-D) shape, and change detection is based on this shape. This improves robustness and achieves good performance on VHR images.
  • A novel viewpoint is proposed to discriminate changed and unchanged pixels by comparing the shape similarity of the local spectrum-trend. The shape distance is calculated as the basis for judging whether corresponding pixels have changed. If the two target shapes are highly similar, the shape distance will be small and the pixel can be considered unchanged; otherwise, a change has occurred.
The remainder of this work is organized as follows. Section 2 introduces the proposed methodology in detail, based on its two main parts: the spectrum-trend graph and shape context. Section 3 presents the details of the data sets. Section 4 exhibits the experimental results. Section 5 presents the discussion. Finally, Section 6 draws the concluding remarks.

2. Materials and Methods

Algorithms that detect changes by directly comparing spectral values between images are not suitable for VHR remote sensing images, achieving neither good accuracy nor robustness. In this work, we propose a new descriptor that addresses the lack of radiometric consistency between images. The LSSC descriptor devised in this study utilizes local-scene spatial and spectral information to improve the reliability of change detection. Figure 1 shows the flow chart of the proposed algorithm.
Let X1 and X2 be two co-registered images of size M × N with B bands, which are captured from the same geographical area at the time T1 and T2, respectively.
First, a sliding window is set to collect the spectral values of each band within the window range, gathering local-scene spatial and spectral information. These discrete values are arranged in order and used as the vertices of the trend graph. The advantages of the spectrum-trend graph are as follows: (1) spatial and spectral information can be integrated effectively, and (2) the comparison between corresponding pixels turns into a relationship between two 2-D shapes. Second, shape context is implemented to extract features from the spectrum-trend graph; by comparing shape similarity, the change magnitude image is obtained. Finally, a clustering algorithm is performed to obtain the final change map. The main steps are presented in detail below.

2.1. Spectrum-Trend Graph

Radiometric differences are common in multi-temporal remote sensing images and significantly influence change detection. Besides, seasonal variations are likely to appear and aggravate the difficulty of change detection when VHR images are used [38,39]. Therefore, change detection based on VHR images remains a great challenge.
Recently, algorithms integrating spatial and spectral information have proved effective in addressing the aforementioned issues. Lv et al. [40] compared the spectral information within a specific window to detect changed and unchanged pixels; the spectral relationship between the central pixel and its neighborhood is named the spectrum trend.
Inspired by this concept, we propose the "spectrum-trend graph" to build a new descriptor. The spectrum-trend graph aims to depict the distribution of spectral values in the local scene.
A given pixel p(i, j), representing the pixel located at (i, j) in the image, is used as the central pixel of a sliding window. Supposing the window size is n × n, let g(i, j)_b be the spectral value of p(i, j) in the bth band. The spectrum-trend is obtained as follows: the value of the first pixel in the first band is taken first, then its value in the second band, and so on through band B; all pixels in the window are scanned this way, in order from left to right and from top to bottom. The spectrum-trend can be expressed as Equation (1):
Trend = ( g(i−r, j−r)_1, g(i−r, j−r)_2, …, g(i−r, j−r)_B, g(i−r, j−r+1)_1, …, g(i−r, j−r+1)_B, …, g(i−r, j+r)_1, …, g(i−r, j+r)_B, g(i−r+1, j−r)_1, …, g(i+r, j+r)_1, …, g(i+r, j+r)_B )   (1)
where r = (n − 1)/2. An example of the generation of spectrum-trend graph is shown in Figure 2.
Figure 2 illustrates the process of constructing the spectrum-trend graph. First, a template window (e.g., a 3 × 3 window) is centered at p(i, j) in the image. All spectral values in each band within the specified region are captured. Subsequently, the spectrum-trend is obtained by Equation (1). Finally, the spectrum-trend graph can be drawn. If the window size is n × n, the window covers {(x, y) | i − r ≤ x ≤ i + r, j − r ≤ y ≤ j + r}, and the number of discrete spectral values is N = n × n × B.
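The construction above can be sketched in a few lines. The sketch below is a minimal illustration, assuming the image is stored as a NumPy array with bands on the last axis; the function name `spectrum_trend` is ours, not part of the paper.

```python
import numpy as np

def spectrum_trend(image, i, j, n):
    """Collect the spectrum-trend vector for the pixel at (i, j).

    image : ndarray of shape (rows, cols, B)
    n     : odd window size; r = (n - 1) / 2
    Returns a 1-D array of length n * n * B: for each window pixel
    (left to right, top to bottom), all B band values in order.
    """
    r = (n - 1) // 2
    window = image[i - r:i + r + 1, j - r:j + r + 1, :]   # (n, n, B) block
    # Row-major flattening yields exactly the band-by-band, pixel-by-pixel
    # ordering of Equation (1)
    return window.reshape(-1)
```

Plotting this vector against its index 1 … n × n × B gives the spectrum-trend graph for the central pixel.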
Several key points are worth noting. First, it is obvious that, as the window expands, the spectrum-trend graph provides more detailed information. However, when n is too large, it not only increases the computational load but also introduces noise. Hence, it is necessary to choose an appropriate window size. Second, the optimal value of n is related to the ground resolution: at different resolutions, pixels represent different coverage areas of actual ground objects. In addition, even for images with the same spatial resolution, the information they provide tends to differ with the covered area (e.g., urban areas provide richer and more complex ground information, while rural areas tend to present less). Hence, the optimal value of n is determined by experimental tests.
On the basis of this technique, each pixel is regarded as the central pixel once, and the corresponding spectrum-trend graph is established. Compared with the direct use of spectral values for change detection, constructing the spectrum-trend graph effectively reduces the impact of radiometric differences caused by atmospheric conditions and other factors, because the detection is based on the trend of spectral values in the local scene rather than on isolated spectral values.

2.2. Local-Scene Spectrum-Trend Shape Context Descriptor

After constructing the spectrum-trend graph, we obtain the local-scene spectral information in the form of a 2-D shape. Next, an appropriate way to extract features from the shape is needed. After investigating several feature extraction algorithms [33,41,42,43,44], shape context was chosen to express the feature.
Shape context is a very popular shape descriptor that has been widely used for target recognition and similarity measurement [45]. It uses a log-polar histogram to describe the distribution of sample points. Its implementation steps are as follows:
  • For a given shape, the contour is captured by an edge detection operator (e.g., the Canny operator) and sampled to obtain a set of discrete points p1, p2, …, pn. Figure 3a,b present the details.
  • Calculating the shape context. Any point pi is taken as a reference point, and M concentric circles are established at logarithmic distance intervals in the region centered at pi. This area is divided equally into N sectors along the circumferential direction to form a target-shaped template, as shown in Figure 3c. The relative positions of the vectors from point pi to the other points are summarized as the number of points falling in each sector of the template. The statistical distribution histogram hi(k) of these points, named the shape context of point pi, is calculated as:
h_i(k) = #{ q ≠ p_i : (q − p_i) ∈ bin(k) }   (2)
where k ∈ {1, 2, …, K} and K = M × N.
The logarithmic distance segmentation makes the shape context descriptor emphasize local features: it is more sensitive to nearby sample points than to distant ones. As shown in Figure 3, shape features at different positions show great differences, whereas features at the same position present high similarity. Figure 3d,e are the histograms of the points marked with black squares in (a) and (b), respectively; their histograms are highly consistent due to the similarity of the shapes. Figure 3f is the histogram of the point marked with a black circle in (b); because it lies at a different position, its histogram differs markedly from Figure 3d.
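As a concrete illustration, the log-polar histogram of Equation (2) can be computed as below. This is a minimal sketch: the radial bin edges (spread around the mean pairwise distance for scale invariance) and the function name `shape_context` are our own illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def shape_context(points, M=5, N=12):
    """Log-polar shape-context histograms h_i(k) for 2-D sample points.

    points : (Z, 2) array of points sampled from the shape contour
    M, N   : number of radial (log-distance) and angular bins
    Returns a (Z, M * N) array; row i counts how many of the other
    Z - 1 points fall into each log-polar bin around point p_i.
    """
    Z = len(points)
    diff = points[None, :, :] - points[:, None, :]        # diff[i, q] = q - p_i
    dist = np.hypot(diff[..., 0], diff[..., 1])
    theta = np.arctan2(diff[..., 1], diff[..., 0]) % (2 * np.pi)
    # Normalize radii by the mean pairwise distance for scale invariance
    mean_d = dist[~np.eye(Z, dtype=bool)].mean()
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), M - 1) * mean_d
    r_bin = np.digitize(dist, r_edges)                    # radial bin 0 .. M-1
    a_bin = ((theta / (2 * np.pi)) * N).astype(int) % N   # angular bin 0 .. N-1
    hist = np.zeros((Z, M * N))
    for i in range(Z):
        for q in range(Z):
            if q != i:                                    # exclude the point itself
                hist[i, r_bin[i, q] * N + a_bin[i, q]] += 1
    return hist
```

Each row of the returned matrix is the K = M × N-bin histogram of one sample point, so every row sums to Z − 1.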
The LSSC descriptor is designed to extract features from the spectrum-trend graph based on shape context. As mentioned before, for a given central pixel, the corresponding local-scene spectrum-trend graph can be drawn. If there are Z feature points in the graph, the distribution of each feature point pi with respect to the other Z − 1 points is described by Equation (2). The structural information of the spectrum-trend graph can thus be stored in a matrix of size K × Z. Figure 4 elaborates the procedure.
From Figure 4, we can see that P and Q are highly similar in terms of overall trend; therefore, they share similar LSSC features. On the contrary, the LSSC features of U and V present great differences, due to the diversity of shapes between U and V.
Next, the task is to quantitatively analyze the similarity between the shapes. The matching cost between two LSSC feature descriptors can be defined as follows:
C(p, q) = (1/2) Σ_{k=1}^{K} [h_p(k) − h_q(k)]² / [h_p(k) + h_q(k)]   (3)
where p and q denote corresponding feature points on the spectrum-trends of P and Q, respectively. The shape distance is used as the basis to measure the similarity of LSSC descriptors, and the formula is as follows:
LSSC_SD(P, Q) = (1/N) Σ_{p∈P} min_{q∈Q} C(p, q) + (1/N) Σ_{q∈Q} min_{p∈P} C(p, q)   (4)
where N denotes the total number of feature points in the spectrum-trend graph. LSSC_SD(P, Q) denotes the similarity of the LSSC features at times T1 and T2, centered on X1(i, j) and X2(i, j), respectively.
The shape distances between corresponding pixels of the bitemporal remote sensing images are calculated, and the change magnitude image is obtained in this manner. Intuitively, if LSSC_SD is small, it can be considered that no change has occurred; otherwise, a change has occurred.
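Under the definitions above, the matching cost of Equation (3) and the shape distance of Equation (4) can be sketched as follows. The function names are illustrative; the histogram matrices are assumed to come from the two spectrum-trend graphs at T1 and T2.

```python
import numpy as np

def chi2_cost(hp, hq):
    """Matching cost between two shape-context histograms (Equation (3))."""
    denom = hp + hq
    mask = denom > 0                       # skip empty bins to avoid 0/0
    return 0.5 * np.sum((hp[mask] - hq[mask]) ** 2 / denom[mask])

def lssc_shape_distance(HP, HQ):
    """Shape distance between two spectrum-trend graphs (Equation (4)).

    HP, HQ : (Z, K) histogram matrices for the graphs P and Q.
    """
    # Pairwise cost matrix C[p, q] between the feature points
    C = np.array([[chi2_cost(hp, hq) for hq in HQ] for hp in HP])
    # Sum of best matches in each direction, normalized by point counts
    return C.min(axis=1).sum() / len(HP) + C.min(axis=0).sum() / len(HQ)
```

Evaluating `lssc_shape_distance` for every pixel pair of the bitemporal images yields the change magnitude image; a small value marks the pair as likely unchanged.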

2.3. The Generation of Binary Change Map

As described in Section 2.2, the change magnitude images are obtained based on the LSSC descriptor. To generate binary change maps, an appropriate method must be chosen to analyze the change magnitude images. Many such methods have been proposed [32,46,47,48,49]. The unsupervised FCM method [48] was employed in this study.
The FCM clustering algorithm is one of the most widely used unsupervised partition-based clustering techniques. It has been widely applied in image segmentation and data clustering analysis [48,50,51]. The objective function of FCM is defined as follows:
J = Σ_{i=1}^{N} Σ_{j=1}^{C} u_{ij}^q ||x_i − c_j||²   (5)
where u_{ij} is the membership degree of the ith pixel in the jth cluster and c_j is the center of the jth cluster; q is the weighting exponent, which should be greater than 1; ||x_i − c_j|| denotes the Euclidean distance between the ith pixel and the jth cluster center. In our study, the number of classes C is 2, denoting the changed and unchanged classes. Besides, u_{ij} is computed by Equation (6) and c_j by Equation (7) [51].
u_{ij} = 1 / Σ_{k=1}^{C} ( ||x_i − c_j|| / ||x_i − c_k|| )^{2/(q−1)}   (6)
c_j = ( Σ_{i=1}^{N} u_{ij}^q x_i ) / ( Σ_{i=1}^{N} u_{ij}^q )   (7)
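A minimal two-class FCM on the 1-D change magnitudes, alternating Equations (6) and (7), might look like the sketch below. The initialization, iteration count, and fuzzifier q = 2 are our own illustrative choices, not the paper's settings.

```python
import numpy as np

def fcm_two_class(x, q=2.0, iters=50):
    """Two-class fuzzy c-means on 1-D change magnitudes.

    x : 1-D array of shape-distance values (one per pixel)
    q : weighting exponent, must be > 1 (Equation (5))
    Returns 0/1 labels; 1 marks the cluster with the larger center (changed).
    """
    c = np.array([x.min(), x.max()], dtype=float)         # initial centers
    e = 2.0 / (q - 1.0)
    for _ in range(iters):
        d = np.abs(x[:, None] - c[None, :]) + 1e-12       # |x_i - c_j|, guarded
        u = 1.0 / (d ** e * np.sum(d ** (-e), axis=1, keepdims=True))  # Eq. (6)
        c = (u ** q * x[:, None]).sum(axis=0) / (u ** q).sum(axis=0)   # Eq. (7)
    # Hard assignment to the nearest center; "changed" = larger center
    return (np.abs(x[:, None] - c[None, :]).argmin(axis=1) == c.argmax()).astype(int)
```

Reshaping the returned labels back to the image grid gives the binary change map.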

2.4. Accuracy Metrics

To quantitatively evaluate the performance of the proposed algorithm, the percentage of false alarms (Pf), the percentage of missed detections (Pm), the percentage of total errors (Pt), and the Kappa coefficient are used as evaluation indicators. These indicators can be obtained from the change detection confusion matrix, which is presented in Table 1.
The definitions of these indicators are as follows:
P_f = F_p / N_0 × 100%
where N_0 is the total number of unchanged pixels in the reference image.
P_m = F_n / N_1 × 100%
P_t = (F_p + F_n) / (N_0 + N_1) × 100%
where N_1 is the total number of changed pixels in the reference image.
KC = (p_o − p_e) / (1 − p_e)
where KC denotes the Kappa coefficient, and p_e and p_o are given by:
p_e = [ (T_p + F_p)(T_p + F_n) + (T_n + F_n)(T_n + F_p) ] / (N_0 + N_1)²
p_o = (T_p + T_n) / (N_0 + N_1)
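The indicators above reduce to a few lines given the confusion-matrix entries of Table 1; the helper name `cd_accuracy` is ours for illustration.

```python
def cd_accuracy(Tp, Fp, Fn, Tn):
    """Accuracy indicators from the change-detection confusion matrix.

    Tp: changed pixels detected as changed    Fn: changed pixels missed
    Fp: unchanged pixels flagged as changed   Tn: unchanged kept unchanged
    Returns (Pf, Pm, Pt) in percent and the Kappa coefficient KC.
    """
    N0 = Tn + Fp                      # unchanged pixels in the reference
    N1 = Tp + Fn                      # changed pixels in the reference
    Pf = Fp / N0 * 100
    Pm = Fn / N1 * 100
    Pt = (Fp + Fn) / (N0 + N1) * 100
    po = (Tp + Tn) / (N0 + N1)
    pe = ((Tp + Fp) * (Tp + Fn) + (Tn + Fn) * (Tn + Fp)) / (N0 + N1) ** 2
    KC = (po - pe) / (1 - pe)
    return Pf, Pm, Pt, KC
```

A perfect detection gives Pf = Pm = Pt = 0 and KC = 1, which is a quick sanity check on the formulas.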

3. Data Sets

Two VHR data sets were chosen in the experiments in order to assess the performance of the proposed method.
For the first data set, the images cover cities in Hubei, China, captured by the QuickBird satellite with a spatial resolution of 2.4 m. Bands 1, 2, and 3 cover the spectral ranges 0.45–0.52 μm, 0.52–0.60 μm, and 0.63–0.69 μm, respectively. The bitemporal remote sensing images were obtained on 2 July 2009 and 6 October 2014. The scene comprises 500 × 500 pixels. The land cover types are residential areas, roads, and a river in the urban district. With the rapid development of the city, the number of buildings and traffic roads in the scene has increased.
Images of northern China, acquired by the SPOT-5 satellite, were used as the second data set. This scene consists of 453 × 436 pixels with a 2.5 m spatial resolution. The multi-temporal remote sensing images were obtained in September 2007 and September 2010.
The data sets are depicted in Figure 5. The first row shows the first data set and the second row the second data set. The first and second columns present the images obtained at T1 and T2, respectively. The final column presents the reference maps obtained by manual analysis, where changed regions are marked green and unchanged areas red.

4. Results

Experiments were performed on two high-resolution data sets to verify the effectiveness of the proposed algorithm. Aerosols and clouds affect change detection, and surface reflectance correction may improve its accuracy. However, Song [52] pointed out that absolute surface reflectance measurements are unnecessary for many applications involving change detection, and many investigations [53,54,55,56] have used data without absolute atmospheric correction and achieved results with satisfactory accuracy and robustness. For this study, we performed relative atmospheric correction and co-registration in the preprocessing stage. Next, the difference magnitude images were generated; the parameter details of each experiment are presented in Table 2. Finally, the FCM clustering algorithm was used to produce the binary change map. Moreover, EM [57] and K-Means [58] were compared with FCM to verify the usability of LSSC, and CVA was conducted as the contrast experiment.

4.1. Results of CVA

CVA is a classic change detection algorithm that can make full use of band information to discriminate changed and unchanged pixels. Choosing appropriate bands is crucial. To find the optimal band combination for CVA, a series of comparative experiments was carried out; the results are shown in Table 3.
Table 3 shows that both data sets achieved the best accuracy when bands 1, 2, and 3 were used. Hence, this band combination was used to generate the change magnitude images for the two data sets, and the corresponding binary change maps were then generated.
Figure 6 presents the change magnitude images and binary change maps obtained by CVA. The first and second rows depict the results for the first and second data sets, respectively. The first column shows the change magnitude images; from the second to the fourth column, the binary change maps generated by EM, FCM, and K-Means are presented, respectively.

4.2. Results of the Proposed Method

In the proposed method, we implemented LSSC to extract features from the spectrum-trend. The parameters affect accuracy and are discussed in Section 5; the results in this section were generated with the parameters determined there. Figure 7 depicts the results generated by LSSC. The first and second rows present the images of the first and second data sets, respectively.

5. Discussion

5.1. The Effect of Window Size n

With regard to the local-scene spectrum-trend graph, the window size n is a very important variable. A smaller n captures less spatial information; a larger n provides more abundant spatial neighborhood information, but also increases the computational load and introduces irrelevant information. We carried out a series of tests with values from 3 to 21 to obtain the optimal n. Figure 8 depicts the relationship between the window size n and the accuracy. Figure 8 clearly shows that, as the window expands, more spatial information is collected and the detection accuracy rises at first. However, when the window becomes too large, noise and irrelevant information appear in the scene and the accuracy declines. The binary change maps achieved the lowest Pt and highest KC when n = 9.
Figure 9 and Figure 10 depict the change magnitude images and the corresponding binary change maps for the first and second data sets, respectively. In each, the first row exhibits the change magnitude images generated with n = 3, n = 9, and n = 15, and the second row presents the corresponding binary change maps.
For the first data set, Figure 9 shows that, when n was set to 3, some changed areas could not be detected, because the small window could not collect enough spatial and spectral information. In contrast, when n was set to 15, more false alarms existed in the change map, because more noise and irrelevant information appeared in the local scene, reducing the overall accuracy. For the second data set, Figure 10 shows that, when n was set to 3, hollow regions appeared in the binary change map, and when n was set to 15, some unchanged regions were falsely detected. When n was set to 9, the overall change detection performance was good for both data sets. Hence, n was set to 9 in this study.

5.2. The Effect of Shape Context Parameters

In constructing the LSSC descriptor, M and N play an important part in the shape context. The default settings in the shape context implementation are M = 5 and N = 12 [59]. To obtain the optimal parameter values, comparative experiments were carried out with different combinations of M and N; the results are shown in detail in Figure 11. The first and second data sets achieve their best change detection accuracies with M = 4, N = 12 and M = 5, N = 12, respectively.

5.3. The Comparison with CVA

The proposed algorithm consists of three parts: feature extraction based on LSSC, application of the shape distance to produce the change magnitude image, and generation of the binary change map. In the third step, we apply the FCM clustering algorithm to obtain the final change map; other techniques [28,58,60,61,62] could serve as alternatives in the future.
For the first data set, Figure 6 shows that the binary change maps generated by CVA contain much salt-and-pepper noise. There are many buildings, roads, and residential areas in the scene, and its complex characteristics make it challenging to detect changed pixels. In addition, there are seasonal variations, since the images were obtained in summer and autumn, which adds to the difficulty of change detection. In this situation, LSSC produced less noise, as can be seen in Figure 7, because LSSC integrates spectral with spatial information efficiently, improving the robustness of the algorithm. Several investigations have confirmed that integrating spectral with spatial information can further exploit the features and improve change detection performance [63,64]. With respect to the clustering algorithms, LSSC-EM produces more false alarms than LSSC-FCM; a possible reason is that EM may fall into a local optimum instead of the global optimum.
In contrast, the land cover types of the second data set are less complex than those of the first, because the images cover a rural area. CVA achieves a better detection result than on the first data set; however, some salt-and-pepper noise remains in the change maps. LSSC still exhibits better detection performance than CVA, especially in keeping a balance between false alarms and missed detections.
Table 4 and Table 5 show that, for the first data set, LSSC-FCM achieved the best detection accuracy. Compared with CVA-FCM, CVA-EM, and CVA-KMeans, the accuracy of LSSC-FCM was improved by 18.06%, 10.68%, and 18.70%, and KC was increased by 0.3488, 0.3608, and 0.3580, respectively. For the second data set, LSSC-EM outperformed the other methods. Compared with CVA-FCM, CVA-EM, and CVA-KMeans, the accuracy of LSSC-EM was improved by 18.20%, 13.56%, and 18.75%, and KC was increased by 0.3499, 0.2661, and 0.3593, respectively. Although the accuracy of LSSC-FCM is lower than that of LSSC-EM in the second experiment, LSSC-FCM still attained the second-highest accuracy and KC. Because the main purpose of this work is to develop the feature descriptor, the reliability of the proposed method is still illustrated. In short, LSSC shows superior overall accuracy and robustness on the two data sets.
In terms of computational complexity, the proposed method spends more time than CVA. CVA directly compares spectral values to discriminate changes; because of its low computational load, it obtains the change result quickly. In the proposed method, the spectrum-trend graph must be built for each pixel, and the relationships between the feature points in the graph must be calculated. Although the proposed method takes more time, it achieves better accuracy than CVA. If there is no radiometric difference between the images, CVA can meet the accuracy requirements of change detection. However, when radiometric differences exist, CVA tends to produce serious false alarms and cannot keep a good balance between false alarms and missed detections. In this situation, the proposed method overcomes the sensitivity of traditional algorithms to radiometric differences and improves the accuracy of change detection.
To further analyze the accuracy differences among the methods, Pf, Pm, and Pt were used as accuracy indicators. Figure 12 presents the accuracies of the various methods in the experiments.
Because of the increased variability of complex urban environments in VHR images, CVA exhibits serious false alarms on the first data set: Figure 12a shows that the Pf of CVA remained above 30%. On the second data set, CVA achieved a lower Pf, but its Pm increased considerably, as can be concluded from Figure 12b. The reason is that CVA neglects spatial features, which brings about the decline in accuracy. In contrast, LSSC keeps a good balance between Pm and Pf on both data sets, because it uses the spectral and spatial information efficaciously in the form of the spectrum-trend graph. Besides, the spectrum-trend can cope with radiometric differences effectively.
In this study, we find that the proposed method can effectively relax the requirements on the input images and improve the accuracy. In the first experiment, there are seasonal and radiometric differences between the two images. When the CVA method was applied, the total error was approximately 20%, with particularly serious false alarms. In contrast, the proposed method improved the results greatly in both visual judgment and quantitative analysis. In the second experiment, the CVA method had a low false detection rate but a high missed detection rate, whereas the proposed method still kept a good balance between the two.
It is worth noting that the time of day and the local weather affect the spectral values and noise in the images. The results indicate that the proposed method can reduce the influence of radiometric differences on change detection caused by differing acquisition times, local weather, solar heights, imaging conditions, and so on. To further analyze the influence of such factors as weather and time, we plan to collect more images captured at different times and conduct a series of comparative experiments in the future.

6. Conclusions

In this paper, an unsupervised method based on the spectrum-trend graph and shape context has been proposed and applied to change detection in very high-resolution remote sensing images. The aims are to overcome the sensitivity of traditional algorithms to radiometric differences and to improve the accuracy of change detection. The main innovation of this method lies in using shape context to extract features from the local-scene spectral trend. The specific steps are as follows. First, the spectral values of each band in the local region are organized in sequence to construct the spectrum-trend graph. Shape context is then applied to extract features from the resulting 2-D shape, and the change magnitude images are generated based on shape distance. Finally, the change maps are obtained by the FCM clustering algorithm. Two experiments were carried out on SPOT-5 and QuickBird data, and the quantitative analysis of the experimental results proved the effectiveness of the proposed technique.
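The shape-context step of this pipeline can be illustrated with a generic log-polar histogram descriptor in the spirit of Belongie et al.; the bin edges and scale normalisation below are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def shape_context(points, M=5, N=12, r_min=0.125, r_max=2.0):
    """Log-polar shape-context histograms for a 2-D point set:
    for each point, count the other points falling in M radial
    (log-spaced) by N angular bins of relative position."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = pts[:, None, :] - pts[None, :, :]            # pairwise offsets
    r = np.hypot(d[..., 0], d[..., 1])
    mean_r = r[r > 0].mean()                         # scale normalisation
    r_edges = np.logspace(np.log10(r_min), np.log10(r_max), M + 1)
    theta = np.arctan2(d[..., 1], d[..., 0]) % (2 * np.pi)
    hist = np.zeros((n, M, N))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rb = np.searchsorted(r_edges, r[i, j] / mean_r) - 1
            tb = int(theta[i, j] / (2 * np.pi) * N) % N
            if 0 <= rb < M:                          # ignore out-of-range radii
                hist[i, rb, tb] += 1
    return hist

# Descriptor of a toy 4-point shape (unit square corners)
h = shape_context([(0, 0), (1, 0), (1, 1), (0, 1)])
```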
The advantages of the proposed method are described in the following:
  • Improved change detection accuracies were obtained by the proposed algorithm. The proposed method achieved satisfactory accuracy and kept a good balance between false alarms and missed detections.
  • A novel viewpoint was proposed to discriminate changed and unchanged pixels by comparing the shape similarity of their spectrum-trends. The discrete and isolated spectral reflectance values are transformed into a 2-D shape, so that detecting changed pixels becomes a comparison of similarity between shapes.
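The final clustering step of the method (FCM on the change-magnitude image) can be sketched with a textbook 1-D fuzzy c-means; the parameters and the toy magnitude values here are illustrative, not the authors' configuration:

```python
import numpy as np

def fcm_1d(x, c=2, m=2.0, iters=50, seed=0):
    """Fuzzy c-means on a 1-D change-magnitude array; for change
    detection, c=2 separates changed from unchanged pixels."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                      # fuzzy memberships, columns sum to 1
    for _ in range(iters):
        um = u ** m
        v = (um @ x) / um.sum(axis=1)       # weighted cluster centres
        d = np.abs(x[None, :] - v[:, None]) + 1e-12
        u = 1.0 / d ** (2 / (m - 1))        # standard membership update
        u /= u.sum(axis=0)
    return v, u.argmax(axis=0)              # centres and hard labels

# Toy change magnitudes: two high-magnitude (changed) pixels
mag = np.array([0.1, 0.2, 0.15, 3.0, 2.8, 0.05])
centres, labels = fcm_1d(mag)
```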
In the future, effort can be devoted to the following aspects. First, the determination of parameters can be made more automatic. If the window size n and the shape-context parameters (M and N) can be acquired automatically, the algorithm will be more practical. Second, more shape-similarity measures from the computer vision field can be investigated and applied to the spectral trend. The approach proposed in this paper uses shape context and shape distance to describe the similarity between corresponding spectrum-trends. In the next stage, we will pay more attention to integrating the spectral trend with other shape-measure algorithms, which may make change detection for very high-resolution remote sensing images even more efficient.
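The shape distance mentioned above is commonly implemented as a chi-square cost between normalised shape-context histograms; the following is a generic sketch of that cost, not necessarily the paper's exact formulation:

```python
import numpy as np

def chi2_cost(g, h):
    """Chi-square distance between two shape-context histograms,
    normalised to unit mass: 0 for identical shapes, up to 1 for
    completely disjoint histograms."""
    g = np.asarray(g, dtype=float).ravel()
    h = np.asarray(h, dtype=float).ravel()
    g, h = g / g.sum(), h / h.sum()
    denom = g + h
    mask = denom > 0                        # skip empty bins
    return 0.5 * float(((g[mask] - h[mask]) ** 2 / denom[mask]).sum())
```

A small distance indicates similar local spectrum-trend shapes (an unchanged pixel); a large distance indicates a change.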

Author Contributions

Conceptualization, Y.T. and M.H.; methodology, Y.T. and M.H.; software, Y.T.; formal analysis, Y.T. and M.H.; writing—original draft preparation, Y.T. and H.Z.; writing—review and editing, M.H. and H.Z.; visualization, Y.T.; supervision, M.H.; funding acquisition, H.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (41701504, 41971400) and A Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions.

Acknowledgments

The authors would like to thank Qunming Wang, professor of the College of Surveying and Geo-Informatics, Tongji University, for providing suggestions for this study.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. The flow chart of the proposed algorithm.
Figure 2. Illustration of how the spectrum-trend graph is produced (n = 3, B = 3).
Figure 3. Shape context calculation and similarity. (a,b) are handwritten letters “A”. (c) denotes the log-polar coordinate system. (d,e) are histograms of the points marked with black squares in (a,b), respectively. (f) is the histogram of the point marked with a black circle in (b).
Figure 4. The illustration of local-scene spectrum-trend shape context (LSSC). (a,b) are images X1 and X2, respectively. (cf) are the spectrum-trend graphs generated by P(i1,j1), Q(i1,j1), U(i2,j2), and V(i2,j2), respectively. (gj) use LSSC to describe the feature.
Figure 5. The data set in the experiments. (a,b) are the images acquired at T1 and T2, respectively. (c) denotes the reference maps.
Figure 6. The change magnitude images and binary change maps obtained by CVA. (a,e) denote change magnitude images for the first data set and the second data set, respectively. (b–d) are binary change maps for the first data set generated by CVA-EM, CVA-FCM, and CVA-Kmeans, respectively. (f–h) are binary change maps for the second data set generated by CVA-EM, CVA-FCM, and CVA-Kmeans, respectively.
Figure 7. The change magnitude images and binary change maps obtained by LSSC. (a,e) denote change magnitude images for the first data and the second data set, respectively. (bd) are binary change maps for the first data set generated by LSSC-EM, LSSC-FCM, and LSSC-Kmeans, respectively. (fh) are binary change maps for the second data set generated by LSSC-EM, LSSC-FCM, and LSSC-Kmeans, respectively.
Figure 8. The relationship between n and accuracy. (a) Pt-n curves; (b) Kappa coefficient-n curves.
Figure 9. The results of change magnitude images and the binary change maps for data set 1. (a) change magnitude image (n = 3). (b) change magnitude image (n = 9). (c) change magnitude image (n = 15). (d) binary change map (n = 3). (e) binary change map (n = 9). (f) binary change map (n = 15).
Figure 10. The results of change magnitude images and the binary change maps for data set 2. (a) change magnitude image (n = 3). (b) change magnitude image (n = 9). (c) change magnitude image (n = 15). (d) binary change map (n = 3). (e) binary change map (n = 9). (f) binary change map (n = 15).
Figure 11. The relationship between the accuracy and different combinations of M and N. (a) the relationship between Pt and different combinations of M and N; (b) the relationship between KC and different combinations of M and N.
Figure 12. The accuracies of various methods in the experiments in terms of Pm, Pf and Pt. (a) the accuracy of the first data set; (b) the accuracy of the second data set.
Table 1. The change detection confusion matrix.
                      Changed in the Reference Image   Unchanged in the Reference Image
Detected Changes      True Positive (Tp)               False Positive (Fp)
Detected No-changes   False Negative (Fn)              True Negative (Tn)
Table 2. The parameters of proposed method in the experiments.
Data Set     Window Size n   Distance Divisions M   Angle Divisions N
Data set 1   9               4                      12
Data set 2   9               5                      12
Table 3. The accuracy of change detection generated by change vector analysis (CVA) with different bands combinations.
                     Different Band Combinations
                     Band1    Band2    Band3    Band1,2   Band1,3   Band2,3   Band1,2,3
Data set 1   Pt (%)  29.73    36.51    26.08    26.44     26.70     28.08     25.88
             KC      0.4488   0.3931   0.4542   0.4311    0.4751    0.4666    0.4769
Data set 2   Pt (%)  31.11    25.15    28.87    22.63     22.16     21.65     21.37
             KC      0.4889   0.5624   0.5173   0.5832    0.5712    0.5854    0.5863
Table 4. The quantitative analysis of experimental results generated by CVA.
Data Set     Methods      Pt (%)   KC       Time (s)
Data set 1   CVA-EM       18.50    0.4649   1.8
             CVA-FCM      25.88    0.4769   2.0
             CVA-Kmeans   26.52    0.4677   2.1
Data set 2   CVA-EM       16.73    0.6701   1.6
             CVA-FCM      21.37    0.5863   1.9
             CVA-Kmeans   21.92    0.5769   1.8
Table 5. The quantitative analysis of experimental results generated by LSSC.
Data Set     Methods       Pt (%)   KC       Time (s)
Data set 1   LSSC-EM       8.13     0.8061   21.4
             LSSC-FCM      7.82     0.8257   20.6
             LSSC-Kmeans   7.84     0.8256   20.9
Data set 2   LSSC-EM       3.17     0.9362   17.8
             LSSC-FCM      5.81     0.8836   18.7
             LSSC-Kmeans   6.02     0.8797   18.1
