Article

A Scale-Driven Change Detection Method Incorporating Uncertainty Analysis for Remote Sensing Images

Ming Hao, Wenzhong Shi, Hua Zhang, Qunming Wang and Kazhong Deng
1 School of Environment Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China
2 Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hong Kong, China
3 Lancaster Environment Center, Faculty of Science and Technology, Lancaster University, Lancaster LA1 4YQ, UK
* Author to whom correspondence should be addressed.
Remote Sens. 2016, 8(9), 745; https://doi.org/10.3390/rs8090745
Submission received: 22 June 2016 / Revised: 27 August 2016 / Accepted: 5 September 2016 / Published: 12 September 2016
(This article belongs to the Special Issue Uncertainties in Remote Sensing)

Abstract

Change detection (CD) based on remote sensing images plays an important role in Earth observation. However, CD accuracy is usually affected by differences in sunlight and atmospheric conditions and by sensor calibration. In this study, a scale-driven CD method incorporating uncertainty analysis is proposed to increase CD accuracy. First, two temporal images are stacked and segmented into multiscale segmentation maps. Then, a pixel-based change map with memberships belonging to the changed and unchanged parts is obtained by fuzzy c-means clustering. Finally, based on the Dempster-Shafer evidence theory, the proposed scale-driven CD method incorporating uncertainty analysis is performed on the multiscale segmentation maps and the pixel-based change map. Two experiments were carried out on Landsat-7 Enhanced Thematic Mapper Plus (ETM+) and SPOT 5 data sets. The ratio of total errors was reduced to 4.0% and 7.5% for the ETM+ and SPOT 5 data sets, respectively. Moreover, the proposed approach outperforms several state-of-the-art CD methods and provides an effective solution for CD.


1. Introduction

Change detection (CD) is a technique for detecting changes that occur on the Earth's surface. It is usually performed by analyzing remotely sensed images acquired over the same geographical area at different times [1,2]. Owing to its cost-effectiveness and the increasing temporal and spatial resolution of imagery, this technique has been used widely in many practical applications, such as disaster detection [3], urban growth monitoring [4], and more general land cover/land use CD [5].
Considering the differences between multitemporal images (e.g., sunlight and atmospheric conditions, sensor differences, spectral characteristics, and registration strategy), image pre-processing techniques (such as atmospheric correction, image registration, and geometric and topographic corrections) are necessary. A number of CD methods [3,6,7] have been investigated to reduce uncertainties in CD. Generally, these methods can be divided into post-classification (supervised) and comparative (unsupervised) methods. The former separately classifies the multitemporal images and finds changes by comparing the individual-date classification maps. The post-classification methods have the advantages of providing ‘from-to’ change information and minimizing the impact of sensor and environmental differences. However, this type of method requires an analyst to acquire classification maps, and the accuracy of the change maps depends on the quality of all the individual-date maps [8,9]. The comparative approaches detect changes by directly comparing the differences between multitemporal images based on spectral and texture characteristics. In this study, we focus on unsupervised CD.
Visual analysis was one of the earliest techniques for detecting changes, for example in coastal areas [10], forests [11], and the Broads Environmentally Sensitive Area [12]. Comparative methods were also developed, including image differencing, image ratioing, image regression, principal component analysis and change vector analysis (CVA). These are generally pixel-based methods and have been used widely because of their easy implementation and direct reflection of changes. Some manual and empirical methods were exploited to determine the threshold involved in comparative methods [13]. Furthermore, automated threshold determination techniques were proposed, based on minimizing measures of fuzziness [14], minimizing the error probability under Bayesian theory [15], and histogram analysis [16]. Generally, changes appear as regions large enough to be detected by the sensor used, so a pixel belonging to the changed or unchanged class is likely to be surrounded by pixels belonging to the same class. Therefore, more reliable and accurate CD results can be obtained by properly using spatial information. A large number of pattern recognition methods incorporating spatial information were developed to reduce noise, such as fuzzy c-means (FCM) clustering [17,18], genetic algorithms [19,20], active contour models [21,22], and Markov random fields [7,23]. These methods have the advantages of not requiring a threshold and of removing noise. Due to the overlap between the changed and unchanged pixels in the difference image, however, they have limitations in separating changed and unchanged pixels [24].
With the development of satellite technology, an increasing number of higher-resolution images are available for CD, where the greater detail of spatial structure and the lower spectral resolution can increase the uncertainty in CD. To cope with this uncertainty, object-based methods were proposed, in which high-resolution images are segmented into homogeneous objects and changes are then detected in units of objects [25]. Generally, four kinds of object-based methods can be identified, namely image-object, class-object, multitemporal-object and hybrid methods [26]. The image-object methods are similar to pixel-based methods and directly compare image objects to detect changes based on the objects’ spectral information or texture features [27,28]. The class-object methods detect ‘from-to’ changes by comparing classified objects from multitemporal images [29,30]. In multitemporal-object methods, the temporally sequential images are combined and then segmented together to produce changed objects [31]. The hybrid methods detect changes through both pixel- and object-based analysis, where image differencing is usually used to generate the difference image and object-based analysis is then applied to detect changes [32]. The hybrid algorithms have shown advantages in reducing noisy changes by combining pixel- and object-based methods [33], but it remains unclear how the final change results are influenced by the different combinations of pixel- and object-based schemes, owing to the several steps involved [26].
In pixel-based schemes, the ranges of pixel values of the difference image belonging to the changed and unchanged parts generally overlap. In object-based schemes, segmentation is a pivotal step for generating objects that correspond between the images and the surface of the Earth. However, objects segmented from different images often vary geometrically due to factors including illumination conditions, view angles and meteorological conditions [34]. Additionally, the segmentation process sometimes suffers from under-segmentation and over-segmentation errors, which may create objects that do not accurately represent real-world features [35]. Thus, uncertainty inevitably exists in object-based methods.
In this paper, a scale-driven CD method incorporating uncertainty analysis is proposed, in which the pixel- and object-based schemes are combined to reduce the uncertainty and increase accuracy. The proposed method consists of three main steps, as shown in Figure 1: multitemporal-object segmentation, FCM clustering and uncertainty analysis. First, the two temporal images X1 and X2, each with L bands, are stacked and segmented to generate multiscale segmentation maps. Second, FCM clusters the difference image obtained by CVA into an initial pixel-based change map, with each pixel's memberships of the changed and unchanged parts. Third, the Dempster-Shafer (DS) evidence theory is used to analyze the uncertainty in the combination of the pixel- and object-based schemes. Two experiments on Landsat-7 Enhanced Thematic Mapper Plus (ETM+) and SPOT 5 data sets were carried out to evaluate the proposed method.

2. Methodology

Two images $X_1 = \{X_1^l(i,j) \mid 1 \le i \le m,\ 1 \le j \le n,\ 1 \le l \le L\}$ and $X_2 = \{X_2^l(i,j) \mid 1 \le i \le m,\ 1 \le j \le n,\ 1 \le l \le L\}$ of size $m \times n$ with $L$ bands were acquired by the same sensor over the same geographical area at two different times $t_1$ and $t_2$. The images were co-registered and radiometrically corrected beforehand. The main steps are presented below.

2.1. Multiscale Segmentation of Difference Image

Due to the impacts of sunlight, atmospheric conditions and phenological cycles, it is difficult to obtain exactly corresponding objects by segmenting the two temporal images individually. This hampers the comparison of objects, as uncertainties accumulate when one-to-one correspondences are generated. Therefore, the two temporal images X1 and X2 are stacked into one image $X = \{X^l(i,j) \mid 1 \le i \le m,\ 1 \le j \le n,\ 1 \le l \le 2L\}$ by simple band stacking, and segmentation is implemented on the stacked image.
Many clustering methods have been investigated for image segmentation, such as k-means and the iterative self-organizing data analysis technique algorithm (ISODATA), but their results depend on the initialization to a certain extent. Recently, statistical region merging (SRM) was proposed for image segmentation; it can remove significant noise, handle occlusions, and perform scale-sensitive segmentation quickly [36]. Thus, SRM is adopted to segment the stacked image in this study.
The stacked image X contains $m \times n$ pixels, each holding $2L$ values, with each of the $2L$ channels taking values in the set {0, 1, …, g}, where g = 255 here. Let X* denote the ideal segmentation of the observed image X. Each channel of X is obtained by sampling each statistical pixel of X* from a set of exactly Q independent random variables (taking values within [0, g/Q]) for the observed $2L$ channels. The tuning parameter Q controls the scale of segmentation: the larger it is, the more regions exist in the final segmentation.
The SRM segments an image based on an interaction between a merging predicate and a merging order. The merging predicate $P(R, R')$ can be expressed as follows

$$P(R, R') = \begin{cases} \text{true} & \text{if } \forall a \in \{1, 2, \ldots, 2L\},\ |\bar{R}_a - \bar{R}'_a| \le b(R, R') \\ \text{false} & \text{otherwise} \end{cases} \qquad (1)$$

where $R$ and $R'$ denote a fixed pair of regions of X, $\bar{R}_a$ and $\bar{R}'_a$ are the average grey values of channel $a$ in regions $R$ and $R'$, respectively, and $b(R, R') = g\sqrt{\frac{1}{2Q}\left(\frac{1}{|R|} + \frac{1}{|R'|}\right)\ln\frac{2}{\delta}}$ with $0 < \delta < 1$. If $P(R, R') = \text{true}$, $R$ and $R'$ are merged. The function $f$ defining the merging order used to sort pixel pairs in X is described as

$$f_a(p, p') = |p_a - p'_a| \qquad (2)$$

where $p$ and $p'$ are pixels in X, and $p_a$ and $p'_a$ are the pixel grey values of channel $a$. The SRM is then performed to segment the stacked image with different Q values, and multiscale segmentation maps are obtained.
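To make the merging rule concrete, the following minimal Python sketch evaluates the predicate of Equation (1) for one pair of regions of the stacked image. It assumes NumPy only; the function name, the toy regions, and the default value of δ are illustrative choices, not part of the original method.

```python
import numpy as np

def srm_merge_predicate(region_a, region_b, Q=64, g=255, delta=1e-6):
    """Evaluate the SRM merging predicate of Equation (1) for two regions.

    region_a, region_b: arrays of shape (n_pixels, 2L) holding the stacked
    pixel values of each region. Q is the scale parameter, g the maximum grey
    level, delta the confidence parameter (0 < delta < 1); in practice delta
    is a small value related to the image size, but any small value in (0, 1)
    suffices for this illustration.
    """
    n_a, n_b = len(region_a), len(region_b)
    # Per-channel deviation bound b(R, R') from Equation (1).
    b = g * np.sqrt((1.0 / (2.0 * Q)) * (1.0 / n_a + 1.0 / n_b) * np.log(2.0 / delta))
    # Merge only if the mean grey values agree within b on every channel.
    mean_diff = np.abs(region_a.mean(axis=0) - region_b.mean(axis=0))
    return bool(np.all(mean_diff <= b))

# Toy usage: two 3-pixel regions of a 2-band (2L = 2) stacked image.
ra = np.array([[10, 200], [12, 198], [11, 199]], dtype=float)
rb = np.array([[13, 196], [12, 201], [14, 197]], dtype=float)
print(srm_merge_predicate(ra, rb, Q=64))
```

A larger Q shrinks the bound b, so fewer region pairs satisfy the predicate and more regions survive in the final segmentation, which matches the scale behavior described above.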

2.2. Pixel-Based CD Using FCM

In this part, CVA is used to generate the difference image, and FCM clusters the difference image to produce a change map with memberships belonging to the changed and unchanged parts. First, the change vector $\Delta X$ can be calculated as follows

$$\Delta X(i,j) = X_1(i,j) - X_2(i,j) = \begin{pmatrix} x_{11}(i,j) - x_{21}(i,j) \\ x_{12}(i,j) - x_{22}(i,j) \\ \vdots \\ x_{1L}(i,j) - x_{2L}(i,j) \end{pmatrix} \qquad (3)$$

where $\Delta X$ includes all the spectral change information between X1 and X2 for a given pixel. The modulus $\|\Delta X\|$ of the change vector $\Delta X$ is computed using Equation (4) and recorded as the final difference image $X_d$:

$$X_d(i,j) = \|\Delta X(i,j)\| = \sqrt{\sum_{b=1}^{L} \left(x_{1b}(i,j) - x_{2b}(i,j)\right)^2} \qquad (4)$$
The difference image Xd is then normalized to [0, 255] in order to avoid the instability and inconsistency among the data sets and supply a consistent input for the following FCM clustering.
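As an illustration of Equations (3) and (4) together with the normalization step, a minimal NumPy sketch could look as follows; the function name and the toy data are hypothetical.

```python
import numpy as np

def cva_difference_image(x1, x2):
    """Compute the CVA difference image (Equations (3)-(4)) and rescale it to [0, 255].

    x1, x2: co-registered images of shape (m, n, L) acquired at times t1 and t2.
    """
    delta = x1.astype(float) - x2.astype(float)      # change vectors, Equation (3)
    xd = np.sqrt(np.sum(delta ** 2, axis=2))         # modulus per pixel, Equation (4)
    # Normalize to [0, 255] to supply a consistent input for FCM clustering.
    xd = (xd - xd.min()) / (xd.max() - xd.min() + 1e-12) * 255.0
    return xd

# Toy usage with two random 3-band images.
rng = np.random.default_rng(0)
img1 = rng.integers(0, 256, size=(100, 100, 3))
img2 = rng.integers(0, 256, size=(100, 100, 3))
xd = cva_difference_image(img1, img2)
```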
Afterwards, FCM [37] is implemented to cluster the difference image and detect changes. The membership probability $u_{ij} \in [0, 1]$, with $\sum_{i=1}^{c} u_{ij} = 1$ for $j = 1, 2, \ldots, n$ (where c is the number of clusters), of pixel $x_j$ in the difference image belonging to the i-th (i = 1 or 2) cluster is determined by minimizing the objective function

$$J(U, V) = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \|x_j - v_i\|^2 \qquad (5)$$

where $U = [u_{ij}]$ is the membership probability matrix of $X_d$, and $V = [v_1, v_2, \ldots, v_c]$ is the matrix composed of the c cluster centers. The membership probability $u_{ij}$ can be calculated using the following equation

$$u_{ij} = \frac{1}{\sum_{k=1}^{c} \left( \frac{\|x_j - v_i\|}{\|x_j - v_k\|} \right)^{\frac{2}{m-1}}} \qquad (6)$$

The cluster center $v_i$ in Equation (6) is computed as follows

$$v_i = \frac{\sum_{j=1}^{n} u_{ij}^{m} x_j}{\sum_{j=1}^{n} u_{ij}^{m}} \qquad (7)$$
where m is a weighting exponent. For most data, 1.5 ≤ m ≤ 3 leads to satisfactory results [37]. It is set to 2 in this study, a widely used choice in many works [7,38].
An optimum solution is obtained by updating U and V iteratively; the iteration stops when the number of iterations reaches a predefined maximum or when $\|V_t - V_{t-1}\| < \varepsilon$, where $V_t$ and $V_{t-1}$ are the cluster center matrices in the t-th and (t−1)-th iterations and $\varepsilon$ is a predefined threshold. Finally, an initial change map and the membership probability matrix U of the pixels in the difference image are produced and used for the uncertainty analysis.
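The FCM updates of Equations (6) and (7) with the stopping rule above can be sketched compactly as follows. This is an illustrative two-class (c = 2) implementation; the random initialization of the centers and the convention that the cluster with the larger center is the changed class are assumptions made only for the sketch.

```python
import numpy as np

def fcm_change_memberships(xd, m=2.0, eps=1e-4, max_iter=100, seed=0):
    """Two-class FCM on the difference image (Equations (5)-(7)).

    Returns (u_changed, u_unchanged), each of the same shape as xd, holding the
    memberships of every pixel to the changed / unchanged clusters.
    """
    x = xd.ravel().astype(float)
    rng = np.random.default_rng(seed)
    v = np.sort(rng.choice(x, size=2, replace=False))        # two initial centers
    for _ in range(max_iter):
        d = np.abs(x[None, :] - v[:, None]) + 1e-12           # |x_j - v_i|, shape (2, n)
        ratio = d[:, None, :] / d[None, :, :]                 # d_i / d_k, shape (2, 2, n)
        u = 1.0 / np.sum(ratio ** (2.0 / (m - 1.0)), axis=1)  # memberships, Equation (6)
        um = u ** m
        v_new = (um @ x) / um.sum(axis=1)                     # centers, Equation (7)
        if np.linalg.norm(v_new - v) < eps:                   # stopping rule ||V_t - V_{t-1}|| < eps
            v = v_new
            break
        v = v_new
    changed = int(np.argmax(v))                               # larger center -> changed cluster
    u_changed = u[changed].reshape(xd.shape)
    u_unchanged = u[1 - changed].reshape(xd.shape)
    return u_changed, u_unchanged
```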

2.3. Scale-Driven CD Incorporating Uncertainty Analysis (SDCDUA)

In this study, a scale-driven solution is proposed to analyze the uncertainty existing in traditional CD, where DS evidence theory is employed, as shown in Figure 2.
First, a coarse segmentation map produced by SRM is analyzed together with the pixel-based CD map and memberships, and is classified into changed, unchanged, and uncertain parts through the uncertainty analysis using the DS evidence theory (more details of the DS evidence theory are introduced below). Only certain changed and unchanged objects are included in ‘Certain CD map 1’ shown in Figure 2. Second, for the objects that remain uncertain after this analysis, the next-scale segmentation map with more detailed objects is adopted and analyzed with the pixel-based CD map and memberships. In this stage, the uncertain objects are again divided into changed, unchanged and uncertain parts, and a CD map consisting of the identified changed and unchanged objects is obtained, labeled ‘Certain CD map 2’ in Figure 2. The remaining uncertain parts are then analyzed in the same way, as shown in the last stage, and the process iterates until no uncertain object exists or the final segmentation map has been used. Finally, all ‘Certain CD maps’ are combined to generate the final CD map. A schematic sketch of this coarse-to-fine loop is given below.
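The following schematic Python sketch summarizes the coarse-to-fine procedure of Figure 2. The segmentation maps and the function returning fused object probabilities are passed in as inputs (their construction is described in the remainder of this section); how objects that are still uncertain after the finest scale are finally handled is not detailed in the text, so the sketch simply returns them. Function names and the toy data are illustrative.

```python
import numpy as np

def scale_driven_cd(segmentations, object_probs, shape, Tm=0.85):
    """Schematic coarse-to-fine loop of Figure 2.

    segmentations: list of label maps (coarse to fine), e.g. SRM results for
      increasing Q; object_probs(mask) -> (p_c, p_u): fused DS probabilities of
      an object being changed / unchanged (see below); shape: image shape.
    """
    final_map = np.full(shape, -1, dtype=int)     # -1 = still undecided
    uncertain = np.ones(shape, dtype=bool)        # start with everything uncertain
    for labels in segmentations:                  # one segmentation map per scale
        for obj_id in np.unique(labels[uncertain]):
            mask = (labels == obj_id) & uncertain
            p_c, p_u = object_probs(mask)
            if p_c > Tm:                          # certain change
                final_map[mask], uncertain[mask] = 1, False
            elif p_u > Tm:                        # certain no-change
                final_map[mask], uncertain[mask] = 0, False
        if not uncertain.any():
            break
    return final_map, uncertain                   # objects still uncertain at the finest scale

# Toy usage: one 2x2 "segmentation" with two objects and made-up fused probabilities.
labels = np.array([[0, 0], [1, 1]])
probs = {0: (0.9, 0.1), 1: (0.2, 0.8)}
cd_map, left = scale_driven_cd([labels], lambda m: probs[int(labels[m][0])], (2, 2))
```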
The main steps of the uncertainty analysis based on the DS evidence theory in SDCDUA are presented as follows. The DS evidence theory is an extension of the traditional Bayesian theory. It was first introduced by Dempster [39] in the context of statistical inference and later extended into a general framework for modeling epistemic uncertainty by Shafer [40]. Three important functions in DS theory are defined and used to model the uncertainty: the basic probability assignment function (m), the Belief function (Bel), and the Plausibility function (Pls).
Let $\Theta$ be the space of hypotheses and $2^{\Theta}$ denote the set of subsets of $\Theta$. For any hypothesis A of $2^{\Theta}$, $m(A) \in [0, 1]$ and

$$\begin{cases} m(\emptyset) = 0 \\ \sum_{A \in 2^{\Theta}} m(A) = 1 \end{cases} \qquad (8)$$

where $\emptyset$ represents the empty set, m is the basic probability assignment function (BPAF), and m(A) denotes the basic probability of the hypothesis A.
Generally, the belief degree of the combined result from different evidences is represented with an interval. The upper and lower limits of the interval are called the Belief function (Bel) and the Plausibility function (Pls), respectively, which are computed as follows
$$\begin{cases} Bel(A) = \sum_{B \subseteq A} m(B) \\ Pls(A) = \sum_{B \cap A \neq \emptyset} m(B) \end{cases} \qquad (9)$$

where A and B are made up of several or all of the elements in $\Theta$, and the values of Bel and Pls lie within [0, 1].
A new evidence m(F) is then calculated by fusing the evidences from different sources using the following equations

$$m(F) = m_1 \oplus m_2 \oplus \cdots \oplus m_n(F) = \frac{1}{k} \sum_{X_1 \cap \cdots \cap X_n = F} \prod_{i=1}^{n} m_i(X_i) \qquad (10)$$

$$k = \sum_{X_1 \cap \cdots \cap X_n \neq \emptyset} \prod_{i=1}^{n} m_i(X_i) \qquad (11)$$

where $\oplus$ denotes the fusion of evidences, and $m_1, m_2, \ldots, m_n$ are the basic probability assignment functions.
In the CD problem, $\Theta = \{C, U\}$, where C denotes “change” and U denotes “no-change”. Two evidences are generated and combined in this study: the object-based evidence $m_1$ produced from the segmentation maps, and the pixel-based evidence $m_2$ obtained with FCM. First, the objects in a segmentation map are partitioned into changed and unchanged parts by minimizing the difference of grey values of the pixels in each part, and the mean values of the changed and unchanged parts are calculated. For an object $R_i$, its variances corresponding to the changed and unchanged parts are obtained and termed $v_c^i$ and $v_u^i$, respectively. Thus, the evidence $m_1 = \{P_{1c}, P_{1u}\}$ of the object $R_i$ can be obtained with the following equation
$$\begin{cases} P_{1c} = v_u^i / (v_c^i + v_u^i) \\ P_{1u} = v_c^i / (v_c^i + v_u^i) \end{cases} \qquad (12)$$

where $P_{1c}$ and $P_{1u}$ are the probabilities of the object $R_i$ belonging to the changed and unchanged parts in the first evidence, respectively.
The second evidence $m_2 = \{P_{2c}, P_{2u}\}$ of the object $R_i$ is calculated from the memberships of its pixels belonging to the changed and unchanged parts with the following equation

$$\begin{cases} P_{2c} = \sum_{j=1}^{n} p_{cj} / n \\ P_{2u} = \sum_{j=1}^{n} p_{uj} / n \end{cases} \qquad (13)$$

where $P_{2c}$ and $P_{2u}$ are the probabilities of the object $R_i$ belonging to the changed and unchanged parts in the second evidence, respectively, n is the total number of pixels in the object $R_i$, and $p_{cj}$ and $p_{uj}$ are the probabilities of the j-th pixel in $R_i$ belonging to the changed and unchanged parts, respectively, computed by clustering the difference image with the FCM algorithm of Section 2.2.
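A minimal sketch of how the two evidences of Equations (12) and (13) could be computed for one object is given below. Reading $v_c^i$ and $v_u^i$ as the object's mean squared deviations from the map-level changed and unchanged mean grey values is our interpretation of the description above; the function name and arguments are illustrative.

```python
import numpy as np

def object_evidences(obj_pixels, mu_c, mu_u, mem_changed, mem_unchanged):
    """Evidence m1 (Equation (12)) and m2 (Equation (13)) for one object R_i.

    obj_pixels: difference-image grey values inside the object;
    mu_c / mu_u: mean grey values of the changed / unchanged parts of the whole
      segmentation map, obtained beforehand by the variance-minimizing split;
    mem_changed / mem_unchanged: FCM memberships of the object's pixels.
    """
    v_c = np.mean((obj_pixels - mu_c) ** 2)          # deviation from the changed mean
    v_u = np.mean((obj_pixels - mu_u) ** 2)          # deviation from the unchanged mean
    eps = 1e-12                                      # guard against a zero denominator
    m1 = (v_u / (v_c + v_u + eps), v_c / (v_c + v_u + eps))   # (P_1c, P_1u), Equation (12)
    m2 = (mem_changed.mean(), mem_unchanged.mean())            # (P_2c, P_2u), Equation (13)
    return m1, m2
```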
The evidences $m_1$ and $m_2$ are then combined into a new evidence $m = \{P_c, P_u\}$, where $P_c$ and $P_u$ are the probabilities of the object $R_i$ belonging to the changed and unchanged parts in the new evidence, respectively. For example, let $m_1 = \{P_{1c} = 0.6,\ P_{1u} = 0.4\}$ and $m_2 = \{P_{2c} = 0.7,\ P_{2u} = 0.3\}$. Then k can be calculated using Equation (11) as

$$k = m_1(\text{change})\, m_2(\text{change}) + m_1(\text{no-change})\, m_2(\text{no-change}) = P_{1c}P_{2c} + P_{1u}P_{2u} = 0.54$$

The $P_c$ of the new evidence is calculated as

$$m_1 \oplus m_2(\text{change}) = \frac{1}{k}\sum_{X_1 \cap X_2 = \{\text{change}\}} m_1(X_1)\, m_2(X_2) = \frac{1}{k}\, m_1(\text{change})\, m_2(\text{change}) = \frac{1}{k}\, P_{1c}P_{2c} \approx 0.78$$

The $P_u$ of the new evidence is computed as

$$m_1 \oplus m_2(\text{no-change}) = \frac{1}{k}\sum_{X_1 \cap X_2 = \{\text{no-change}\}} m_1(X_1)\, m_2(X_2) = \frac{1}{k}\, m_1(\text{no-change})\, m_2(\text{no-change}) = \frac{1}{k}\, P_{1u}P_{2u} \approx 0.22$$
A threshold $T_m$ is then set to classify each object into the changed, unchanged, and uncertain groups with the following equation

$$l_i = \begin{cases} 1 & P_c > T_m \\ 2 & P_u > T_m \\ 3 & \text{otherwise} \end{cases} \qquad (14)$$

where $l_i$ = 1, 2, 3 denotes that the object $R_i$ belongs to the changed, unchanged, and uncertain groups, respectively.
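The fusion of the two evidences over the binary frame {change, no-change} and the classification rule of Equation (14) reduce to a few lines of code. The sketch below reproduces the worked example above (k = 0.54, Pc ≈ 0.78, Pu ≈ 0.22); the function name and the threshold value are illustrative.

```python
def fuse_and_classify(m1, m2, Tm=0.85):
    """Combine two evidences with Dempster's rule (Equations (10)-(11)) over the
    binary frame {change, no-change} and classify by the rule of Equation (14).

    m1 = (P_1c, P_1u), m2 = (P_2c, P_2u); returns (P_c, P_u, label) with
    label 1 = changed, 2 = unchanged, 3 = uncertain.
    """
    p1c, p1u = m1
    p2c, p2u = m2
    k = p1c * p2c + p1u * p2u        # Equation (11): mass on the non-conflicting pairs
    p_c = p1c * p2c / k              # Equation (10), hypothesis "change"
    p_u = p1u * p2u / k              # Equation (10), hypothesis "no-change"
    if p_c > Tm:
        label = 1                    # changed
    elif p_u > Tm:
        label = 2                    # unchanged
    else:
        label = 3                    # uncertain
    return p_c, p_u, label

# Reproduces the worked example: k = 0.54, P_c ~ 0.78, P_u ~ 0.22 (uncertain at Tm = 0.85).
print(fuse_and_classify((0.6, 0.4), (0.7, 0.3), Tm=0.85))
```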
After the uncertainty analysis using the DS evidence theory based on the multiscale segmentation maps and the pixel-based CD result, a final change map is obtained. For quantitative assessment of the CD results, several indices are adopted: (1) missed detections $N_m$, the number of changed pixels incorrectly labeled as unchanged in the CD map; the ratio of missed detections is $P_m = N_m / N_0 \times 100\%$, where $N_0$ is the total number of changed pixels in the ground reference map; (2) false alarms $N_f$, the number of unchanged pixels incorrectly labeled as changed in the CD map; the ratio of false alarms is $P_f = N_f / N_1 \times 100\%$, where $N_1$ is the total number of unchanged pixels in the ground reference map; (3) total errors $N_t = N_m + N_f$, the total number of detection errors, including both missed detections and false alarms; the ratio of total errors is $P_t = (N_m + N_f) / (N_0 + N_1) \times 100\%$. A small helper for these indices is sketched below.
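The accuracy indices above can be computed from a binary CD map and a binary reference map as in the following sketch (names are illustrative).

```python
import numpy as np

def cd_accuracy(cd_map, reference):
    """Missed detections, false alarms and total errors as defined in Section 2.3.

    cd_map, reference: binary arrays with 1 = changed and 0 = unchanged.
    """
    n0 = np.sum(reference == 1)                      # changed pixels in the reference
    n1 = np.sum(reference == 0)                      # unchanged pixels in the reference
    nm = np.sum((reference == 1) & (cd_map == 0))    # changed pixels labeled unchanged
    nf = np.sum((reference == 0) & (cd_map == 1))    # unchanged pixels labeled changed
    return {
        "Pm": 100.0 * nm / n0,                       # ratio of missed detections
        "Pf": 100.0 * nf / n1,                       # ratio of false alarms
        "Pt": 100.0 * (nm + nf) / (n0 + n1),         # ratio of total errors
    }
```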

3. Experimental Results and Analysis

Two experiments were conducted to test the performance of the proposed method, and each experiment includes two parts. In the first part, the effects of the initial segmentation map and the threshold Tm were analyzed. In the second part, comparative experiments were carried out with several other effective CD methods, including distance regularized level set evolution (DRLSE) [41], the CV model, the multiresolution level set with the Kittler algorithm (MLSK) [21] and the expectation-maximization-based level set (EMAC) [22]. DRLSE is an enhanced level set method that detects objects without reinitialization of the level set. The CV model detects changes by evolving a contour while minimizing the difference energy inside and outside the contour. In MLSK, an initial change map is used as the initial contour, and the level set is then evolved from coarse to fine difference images, with the final change map obtained from the finest-resolution image. EMAC extends the level set method by adding energy functions of the changed and unchanged mean values obtained from the EM algorithm.

3.1. Experiments of Landsat-7 ETM+ Data Set

3.1.1. Description of Data Set 1

The first data set was acquired by the Landsat-7 ETM+ sensor in August 2001 and August 2002 over the northeast of China. A section (400 × 400 pixels) of the entire scene was selected as the test site. Figure 3a,b show the true color images of 2001 and 2002, respectively. The image of 2001 was registered to the image of 2002, and histogram matching was then applied to both images for relative radiometric correction. The difference image was then produced from six bands (all except the thermal infrared channel) by CVA, as presented in Figure 3c. Figure 3d shows the reference map, which was obtained manually by a meticulous visual interpretation of the two temporal images. The changes were mainly caused by new crop planting.

3.1.2. Results and Analysis of Experiment 1

Several experiments were conducted according to the following: (1) an analysis of the effect of segmentation scale by implementing traditional object-based CD (OBCD); (2) a quantitative analysis of the performance of the proposed uncertainty analysis techniques in relation to different thresholds; and (3) comparisons among CD results obtained by the proposed method and some state-of-the-art CD methods.
In traditional object-based CD, objects are obtained from the segmentation maps and partitioned into changed and unchanged parts by minimizing the grey variance of the objects in each group. Figure 4 shows the CD maps generated from the multiscale segmentation maps. It can be seen that only obvious or large changes are detected when a small Q is used, because a coarse segmentation map is produced. As Q increases, a more detailed segmentation map is obtained and a CD map including more detailed changes is generated, as shown in Figure 4b,c. However, when Q reaches 256, some detailed changes are missed because of over-segmentation. Table 1 presents the accuracy of the CD results produced from the different scale segmentation maps. The Q value of 64 gives the most accurate CD map among these results, with a ratio of total errors of 6.4%, because this value leads to the most suitable segmentation map.
Figure 5 presents the results of the uncertainty analysis based on the first-scale segmentation map and the pixel-based CD result from FCM, where black, white and gray represent the unchanged, changed and uncertain parts, respectively. The first-scale segmentation map used in SDCDUA is generated with Q = 64. The threshold Tm ranges from 0.6 to 0.9 with a step of 0.05, producing the results in Figure 5a–g. As can be seen, few uncertain regions exist in the initial CD map when the threshold ranges from 0.6 to 0.75, and more regions are identified as uncertain as the threshold increases. The DS theory is used to fuse the object-based results and the pixels’ memberships to analyze the uncertainty in CD.
Figure 6 shows the final CD results when the threshold ranges from 0.6 to 0.9 with a step of 0.05. It is obvious that similar CD maps are obtained for the different thresholds. Table 2 lists the accuracy of the different CD maps, where results with nearly the same total errors of around 4.0% are produced as the threshold ranges from 0.6 to 0.9.
To evaluate the effectiveness of the proposed approach, several state-of-the-art methods are used as benchmarks, namely DRLSE, CV, MLSK, and EMAC. Figure 7 shows the CD maps obtained by the different methods. In CV, MLSK and EMAC, a parameter μ tunes the tradeoff between the fitness and the length of the contour; with increasing μ, more false alarms can be removed but over-smoothed results are obtained. In this study, μ was set to 0.1 for CV, MLSK and EMAC. It is obvious that many false alarms are detected by DRLSE, MLSK and EMAC, and some missed detections exist in the DRLSE and MLSK results. The CD map generated by CV contains more false alarms than those of the other methods, as shown in Figure 7b. Though OBCD removes more false alarms than the above-mentioned four methods (i.e., DRLSE, CV, MLSK and EMAC), it produces a number of missed detections. Figure 7f shows the CD map obtained by the proposed method, where the threshold was set to 0.85. It is worth noting that the proposed method generates the change map closest to the reference map.
Table 3 displays the quantitative comparison between the comparative and proposed methods for data set 1. It can be found that OBCD generates a more accurate CD map than the pixel-based methods. The proposed method produces the change map with total errors of 4.0%, which decreases the total errors by 2.4% and by at least 3.3% in comparison with the traditional object-based method and the pixel-based methods, respectively.

3.2. Experiments of SPOT 5 Data Set

3.2.1. Description of Data Set 2

The second data set, which includes two high-resolution images acquired over the same geographical area of Tianjin City, China, was used to evaluate the proposed method. The two images were acquired by SPOT 5 in April 2008 and February 2009 and were generated by fusing the panchromatic band with three multispectral bands, as shown in Figure 8a,b. An area of 346 × 434 pixels was cropped from the full scenes as the test site. The image of 2008 was registered to the image of 2009, and histogram matching was applied to both images for relative radiometric correction. The difference image was generated with CVA, as shown in Figure 8c. Figure 8d shows the reference image obtained manually by a meticulous visual interpretation of the two temporal images, where the changes occurred mainly due to new buildings.

3.2.2. Results and Analysis of Experiment 2

Figure 9 shows the CD maps produced by the traditional object-based method with different segmentation scales. As can be seen from Figure 9, only the main changes are detected from the coarse segmentation map generated using a small Q. As Q increases, more detailed segmentation maps are obtained, and more satisfactory results are produced for values of 64 and 128. Many changes are lost for Q = 256 because of over-segmentation. Table 4 lists the accuracy of the CD maps produced from the different scale segmentation maps. The CD map generated with a Q value of 64 is the most accurate, with a ratio of total errors of 9.4%.
Figure 10 shows the results of the uncertainty analysis for the first-scale segmentation map using the SDCDUA method, where black, white and gray represent the unchanged, changed and uncertain parts. The segmentation map is generated with a Q value of 64, and the threshold Tm ranges from 0.6 to 0.9 with a step of 0.05; the corresponding results are shown in Figure 10a–g. It is seen that there are almost no uncertain regions when Tm equals 0.6, very similar uncertain regions are obtained when Tm ranges from 0.65 to 0.85, and many uncertain regions exist in the result produced for a threshold of 0.9.
Figure 11 shows the final CD maps of SDCDUA yielded with different thresholds, and Table 5 reports the corresponding accuracy. Similar CD maps are produced as the threshold ranges from 0.6 to 0.85, with a ratio of total errors of 8.5%. The threshold of 0.9 leads to the most accurate CD map, with a ratio of total errors of 7.5%.
The DRLSE, CV, MLSK and EMAC methods were implemented and compared with the proposed method. Figure 12 shows CD maps generated by different methods, where μ was set to 0.35 for CV and MLSK and 0.1 for EMAC, respectively. A number of false alarms exist in CD maps generated by DRLSE, CV, MLSK and EMAC due to the influence of building shadows and seasonal changes. Though OBCD removes many false alarms, many real changes are also removed at the same time. The proposed method detects most of the real changes and obtains a CD map closest to the reference map, as shown in Figure 12f.
Table 6 displays the quantitative comparison between the comparative and proposed methods. It can be found that the object-based methods generate more accurate CD maps than the pixel-based methods. The proposed method produces a CD map with a ratio of total errors of 7.5%, which decreases the ratio of total errors by 1.9% and by at least 2.8% compared with the traditional object-based method and the pixel-based methods, respectively. For high-resolution images, however, the accuracy increase achieved by the proposed method is less pronounced. A possible reason is that a difference image generated using only spectral features may not be sufficient, owing to the larger intra-spectral variation in high-resolution images. This issue is exacerbated by the pan-sharpening applied to the two images, a process that may further enlarge the intra-spectral variation.

4. Discussion

This paper presents a CD approach that incorporates uncertainty analysis to reduce the uncertainties in CD. The DS evidence theory is adopted to increase the accuracy of the CD maps by fusing two evidences: one is designed based on the memberships of pixels obtained by FCM clustering of the pixels into changed and unchanged classes, and the other is produced from the multiscale segmentation maps yielded by SRM. A scale-driven approach is proposed to account for pixel- and object-based information at the same time by fusing the two evidences. Two experiments were carried out on the Landsat-7 ETM+ and SPOT 5 data sets, and the results show the effectiveness of the proposed approach. In summary, the proposed approach has the following characteristics and advantages: (1) it incorporates pixel- and object-based CD results, introducing the fuzzy information of all pixels based on grey values and considering contextual information, thereby inheriting the merits of both pixel- and object-based methods; (2) a scale-driven strategy is proposed to use multiscale segmentation maps from coarse to fine scales in the uncertainty analysis, which addresses the scale issue in segmentation and makes the best use of contextual information; (3) the uncertainty analysis technique is used in the scale-driven strategy to fuse evidences from the pixel-based results and the multiscale segmentation maps, which increases the accuracy and robustness of CD.
The proposed approach consists of three main steps: multiscale segmentation of the stacked temporal images, clustering of the difference image, and uncertainty analysis. In these three phases, SRM, FCM and the DS evidence theory are used in this study, but other alternatives can also be considered. For example, mean shift and MRF-based segmentation methods could be used to segment the stacked images, and fuzzy sets, another uncertainty analysis method, could replace the DS evidence theory to fuse the pixel- and object-based information. In future research, it would be interesting to incorporate these potential alternatives into the framework presented in this study and to compare the derived solutions systematically with the method proposed here.
Two parameters, Q and Tm, are involved in the proposed approach. From Table 1 and Table 4, the CD accuracy is highest when Q is set to 64, as this value leads to the most suitable segmentation map. For the threshold Tm, Table 2 and Table 5 show that the CD accuracy of the proposed approach does not change much with its variation. Thus, we recommend setting the parameter Q to 64 and selecting the threshold Tm within the interval [0.8, 0.9].
In this study, as a first evaluation, the proposed approach was applied to bi-temporal images from the same sensor. Images acquired by different sensors usually have different characteristics (e.g., spectral resolution, spatial resolution and band number); thus, such multitemporal images cannot be stacked and segmented straightforwardly, and traditional CD methods (e.g., image differencing and CVA) cannot be used directly to generate a difference image. The approach proposed in this study is designed as a framework for incorporating uncertainty analysis of pixel- and object-based methods. In future research, we will focus on developing effective multiscale segmentation and pixel-based CD methods to make the proposed approach suitable for multi-sensor and multitemporal image CD. Some literature [42,43] shows that temporal correlation exists between long-term time-series images, which can potentially be used to increase CD accuracy. It would therefore be interesting to further extend the proposed approach to continuous CD based on multitemporal images, where the temporal information is expected to further reduce the uncertainties in CD. This issue motivates future research.

5. Conclusions

In this paper, a scale-driven CD method incorporating uncertainty analysis is proposed, in which the pixel- and object-based schemes are combined and analyzed together to increase CD accuracy. Two experiments were carried out on Landsat-7 ETM+ and SPOT 5 data sets, and using the proposed approach, the ratios of total errors were reduced to 4.0% and 7.5%, respectively. Compared with several popular pixel-based methods (i.e., DRLSE, CV, MLSK and EMAC), the ratios of total errors were reduced by at least 3.3% and 2.8%, respectively. The proposed method thus generates more accurate CD results than the benchmark methods tested in this study and provides an effective new solution for CD. Future research will focus on extending the proposed SDCDUA to multi-sensor and multitemporal image CD.

Acknowledgments

The work presented in this paper is supported by the National Natural Science Foundation of China under Grant 41331175, Natural Science Foundation of Jiangsu Province under Grant BK20160248, the China Postdoctoral Science Foundation funded project, Fundamental Research Funds for the Central Universities under Grant 2015XKQY09, and the Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions.

Author Contributions

Ming Hao had the original idea for the study, Wenzhong Shi and Kazhong Deng improved it. Ming Hao conducted experiments and drafted the manuscript, Hua Zhang and Qunming Wang analyzed and discussed results. All co-authors carried out the design of the CD framework and revised the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bruzzone, L.; Bovolo, F. A novel framework for the design of change-detection systems for very-high-resolution remote sensing images. Proc. IEEE 2013, 101, 609–630. [Google Scholar] [CrossRef]
  2. Lu, D.; Mausel, P.; Brondizio, E.; Moran, E. Change detection techniques. Int. J. Remote Sens. 2004, 25, 2365–2407. [Google Scholar] [CrossRef]
  3. Shi, W.Z.; Hao, M. A method to detect earthquake-collapsed buildings from high-resolution satellite images. Remote Sens. Lett. 2013, 4, 1166–1175. [Google Scholar] [CrossRef]
  4. Bouziani, M.; Goïta, K.; He, D.C. Automatic change detection of buildings in urban environment from very high spatial resolution images using existing geodatabase and prior knowledge. ISPRS J. Photogramm. Remote Sens. 2010, 65, 143–153. [Google Scholar] [CrossRef]
  5. Chen, J.; Gong, P.; He, C.; Pu, R.; Shi, P. Land-use/land-cover change detection using improved change-vector analysis. Photogramm. Eng. Remote Sens. 2003, 69, 369–379. [Google Scholar] [CrossRef]
  6. Bovolo, F.; Bruzzone, L.; Capobianco, L.; Garzelli, A.; Marchesi, S.; Nencini, F. Analysis of the effects of pansharpening in change detection on VHR images. IEEE Geosci. Remote Sens. Lett. 2010, 7, 53–57. [Google Scholar] [CrossRef]
  7. Gong, M.; Su, L.; Jia, M.; Chen, W. Fuzzy clustering with a modified MRF energy function for change detection in synthetic aperture radar images. IEEE Trans. Fuzzy Syst. 2014, 22, 98–109. [Google Scholar] [CrossRef]
  8. Khorram, S. Accuracy Assessment of Remote Sensing-Derived Change Detection; ASPRS Publications: Bethesda, MD, USA, 1999. [Google Scholar]
  9. Hester, D.; Nelson, S.; Cakir, H.; Khorram, S.; Cheshire, H. High-resolution land cover change detection based on fuzzy uncertainty analysis and change reasoning. Int. J. Remote Sens. 2010, 31, 455–475. [Google Scholar] [CrossRef]
  10. Ulbricht, K.; Heckendorff, W. Satellite images for recognition of landscape and landuse changes. ISPRS J. Photogramm. Remote Sens. 1998, 53, 235–243. [Google Scholar] [CrossRef]
  11. Sader, S.; Winne, J. RGB-NDVI colour composites for visualizing forest change dynamics. Int. J. Remote Sens. 1992, 13, 3055–3067. [Google Scholar] [CrossRef]
  12. Slater, J.; Brown, R. Changing landscapes: Monitoring environmentally sensitive areas using satellite imagery. Int. J. Remote Sens. 2000, 21, 2753–2767. [Google Scholar] [CrossRef]
  13. Fung, T.; LeDrew, E. The determination of optimal threshold levels for change detection using various accuracy indices. Photogramm. Eng. Remote Sens. 1988, 54, 1449–1454. [Google Scholar]
  14. Huang, L.; Wang, M. Image thresholding by minimizing the measures of fuzziness. Pattern Recognit. 1995, 28, 41–51. [Google Scholar] [CrossRef]
  15. Bruzzone, L.; Prieto, D.F. Automatic analysis of the difference image for unsupervised change detection. IEEE Trans. Geosci. Remote Sens. 2000, 38, 1171–1182. [Google Scholar] [CrossRef]
  16. Patra, S.; Ghosh, S.; Ghosh, A. Histogram thresholding for unsupervised change detection of remote sensing images. Int. J. Remote Sens. 2011, 32, 6071–6089. [Google Scholar] [CrossRef]
  17. Hao, M.; Zhang, H.; Shi, W.; Deng, K. Unsupervised change detection using fuzzy c-means and MRF from remotely sensed images. Remote Sens. Lett. 2013, 4, 1185–1194. [Google Scholar] [CrossRef]
  18. Ma, W.; Jiao, L.; Gong, M.; Li, C. Image change detection based on an improved rough fuzzy C-means clustering algorithm. Int. J. Mach. Learn. Cybern. 2014, 5, 369–377. [Google Scholar] [CrossRef]
  19. Celik, T. Image change detection using Gaussian mixture model and genetic algorithm. J. Vis. Commun. Image Represent. 2010, 21, 965–974. [Google Scholar] [CrossRef]
  20. Celik, T. Change detection in satellite images using a genetic algorithm approach. IEEE Geosci. Remote Sens. Lett. 2010, 7, 386–390. [Google Scholar] [CrossRef]
  21. Bazi, Y.; Melgani, F.; Al-Sharari, H.D. Unsupervised change detection in multispectral remotely sensed imagery with level set methods. IEEE Trans. Geosci. Remote Sens. 2010, 48, 3178–3187. [Google Scholar] [CrossRef]
  22. Hao, M.; Shi, W.Z.; Zhang, H.; Li, C. Unsupervised change detection with expectation-maximization-based level set. IEEE Geosci. Remote Sens. Lett. 2014, 11, 210–214. [Google Scholar] [CrossRef]
  23. Chen, Y.; Cao, Z. An improved MRF-based change detection approach for multitemporal remote sensing imagery. Signal Process. 2013, 93, 163–175. [Google Scholar] [CrossRef]
  24. Ghosh, A.; Mishra, N.S.; Ghosh, S. Fuzzy clustering algorithms for unsupervised change detection in remote sensing images. Inf. Sci. 2011, 181, 699–715. [Google Scholar] [CrossRef]
  25. Hussain, M.; Chen, D.; Cheng, A.; Wei, H.; Stanley, D. Change detection from remotely sensed images: From pixel-based to object-based approaches. ISPRS J. Photogramm. Remote Sens. 2013, 80, 91–106. [Google Scholar] [CrossRef]
  26. Chen, G.; Hay, G.J.; Carvalho, L.M.; Wulder, M.A. Object-based change detection. Int. J. Remote Sens. 2012, 33, 4434–4457. [Google Scholar] [CrossRef]
  27. Miller, O.; Pikaz, A.; Averbuch, A. Objects based change detection in a pair of gray-level images. Pattern Recognit. 2005, 38, 1976–1992. [Google Scholar] [CrossRef]
  28. Gong, J.; Sui, H.; Sun, K.; Ma, G.; Liu, J. Object-level change detection based on full-scale image segmentation and its application to the Wenchuan earthquake. Sci. China Ser. E Technol. Sci. 2008, 51, 110–122. [Google Scholar] [CrossRef]
  29. Durieux, L.; Lagabrielle, E.; Nelson, A. A method for monitoring building construction in urban sprawl areas using object-based analysis of SPOT 5 images and existing GIS data. ISPRS J. Photogramm. Remote Sens. 2008, 63, 399–408. [Google Scholar] [CrossRef]
  30. Walter, V. Object-based classification of remote sensing data for change detection. ISPRS J. Photogramm. Remote Sens. 2004, 58, 225–238. [Google Scholar] [CrossRef]
  31. Desclée, B.; Bogaert, P.; Defourny, P. Forest change detection by statistical object-based method. Remote Sens. Environ. 2006, 102, 1–11. [Google Scholar] [CrossRef]
  32. Carvalho, L.D.; Fonseca, L.; Murtagh, F.; Clevers, J. Digital change detection with the aid of multiresolution wavelet analysis. Int. J. Remote Sens. 2001, 22, 3871–3876. [Google Scholar] [CrossRef]
  33. McDermid, G.J.; Linke, J.; Pape, A.D.; Laskin, D.N.; McLane, A.J.; Franklin, S.E. Object-based approaches to change analysis and thematic map update: Challenges and limitations. Can. J. Remote Sens. 2008, 34, 462–466. [Google Scholar] [CrossRef]
  34. Wulder, M.A.; Ortlepp, S.M.; White, J.C.; Coops, N.C.; Coggins, S.B. Monitoring tree-level insect population dynamics with multi-scale and multi-source remote sensing. J. Spat. Sci. 2008, 53, 49–61. [Google Scholar] [CrossRef]
  35. Möller, M.; Lymburner, L.; Volk, M. The comparison index: A tool for assessing the accuracy of image segmentation. Int. J. Appl. Earth Obs. Geoinf. 2007, 9, 311–321. [Google Scholar] [CrossRef]
  36. Nock, R.; Nielsen, F. Statistical region merging. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 1452–1458. [Google Scholar] [CrossRef] [PubMed]
  37. Bezdek, J.C.; Ehrlich, R.; Full, W. FCM: The fuzzy C-means clustering algorithm. Comput. Geosci. 1984, 10, 191–203. [Google Scholar] [CrossRef]
  38. Wang, Q.; Shi, W. Unsupervised classification based on fuzzy C-means with uncertainty analysis. Remote Sens. Lett. 2013, 4, 1087–1096. [Google Scholar] [CrossRef]
  39. Dempster, A.P. Upper and lower probabilities induced by a multivalued mapping. Ann. Math. Stat. 1967, 38, 325–339. [Google Scholar] [CrossRef]
  40. Shafer, G. A Mathematical Theory of Evidence; Princeton University Press: Princeton, NJ, USA, 1976. [Google Scholar]
  41. Li, C.M.; Xu, C.Y.; Gui, C.F.; Fox, M.D. Distance regularized level set evolution and its application to image segmentation. IEEE Trans. Image Process. 2010, 19, 3243–3254. [Google Scholar] [PubMed]
  42. Mandanici, E.; Bitelli, G. Multi-image and multi-sensor change detection for long-term monitoring of arid environments with Landsat series. Remote Sens. 2015, 7, 14019–14038. [Google Scholar] [CrossRef]
  43. Zhu, Z.; Woodcock, C.E. Continuous change detection and classification of land cover using all available Landsat data. Remote Sens. Environ. 2014, 144, 152–171. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the proposed method.
Figure 2. Process of the scale-driven CD incorporating uncertainty analysis.
Figure 3. True color images of data set 1 acquired by Landsat-7 ETM+ in (a) August 2001 and (b) August 2002; (c,d) are the difference image and reference image, respectively.
Figure 4. Object-based CD results based on segmentation maps obtained with Q values of (a) 32; (b) 64; (c) 128; (d) 256.
Figure 5. Results of first scale uncertainty analysis for the thresholds of (a) 0.6; (b) 0.65; (c) 0.7; (d) 0.75; (e) 0.8; (f) 0.85 and (g) 0.9 using the first segmentation map for Q = 64.
Figure 6. CD maps for threshold values of (a) 0.6; (b) 0.65; (c) 0.7; (d) 0.75; (e) 0.8; (f) 0.85 and (g) 0.9, using the first segmentation map with Q = 64.
Figure 7. CD maps generated by different CD methods: (a) DRLSE; (b) CV; (c) MLSK; (d) EMAC; (e) OBCD; (f) SDCDUA and (g) reference map.
Figure 8. Fused images of data set 2 acquired by SPOT 5 in (a) August 2008 and (b) February 2009; (c,d) are the difference image and reference image, respectively.
Figure 9. Object-based CD results based on segmentation maps for different Q values of (a) 32; (b) 64; (c) 128; (d) 256.
Figure 10. Results of uncertainty analysis for the thresholds of (a) 0.6; (b) 0.65; (c) 0.7; (d) 0.75; (e) 0.8; (f) 0.85 and (g) 0.9 using the first segmentation map for Q = 64.
Figure 11. CD maps for threshold values of (a) 0.6; (b) 0.65; (c) 0.7; (d) 0.75; (e) 0.8; (f) 0.85 and (g) 0.9, using the first segmentation map with Q = 64.
Figure 12. CD maps generated by different CD methods: (a) DRLSE; (b) CV; (c) MLSK; (d) EMAC; (e) OBCD; (f) SDCDUA; (g) reference map.
Table 1. Accuracy of object-based CD results produced from different segmentation maps.

Q   | Missed Detections Nm | Pm (%) | False Alarms Nf | Pf (%) | Total Errors Nt | Pt (%)
32  | 13,187               | 43.5   | 397             | 0.3    | 13,584          | 8.5
64  | 9357                 | 30.9   | 869             | 0.7    | 10,226          | 6.4
128 | 10,580               | 34.9   | 815             | 0.6    | 11,395          | 7.1
256 | 12,457               | 41.1   | 566             | 0.4    | 13,023          | 8.1
Table 2. Accuracy of CD results for different thresholds of data set 1.

Tm   | Missed Detections Nm | Pm (%) | False Alarms Nf | Pf (%) | Total Errors Nt | Pt (%)
0.6  | 4896                 | 16.2   | 1735            | 1.3    | 6631            | 4.2
0.65 | 4632                 | 15.3   | 1779            | 1.4    | 6411            | 4.0
0.7  | 4632                 | 15.3   | 1779            | 1.4    | 6411            | 4.0
0.75 | 4632                 | 15.3   | 1779            | 1.4    | 6411            | 4.0
0.8  | 4653                 | 15.4   | 1774            | 1.4    | 6427            | 4.0
0.85 | 4653                 | 15.4   | 1774            | 1.4    | 6427            | 4.0
0.9  | 4896                 | 16.2   | 1774            | 1.4    | 6670            | 4.2
Table 3. Quantitative comparison between the comparative and proposed methods for data set 1.

Method | Missed Detections Nm | Pm (%) | False Alarms Nf | Pf (%) | Total Errors Nt | Pt (%)
DRLSE  | 9149                 | 30.2   | 4021            | 3.1    | 13,440          | 8.4
CV     | 6302                 | 20.8   | 9858            | 7.6    | 16,160          | 10.1
MLSK   | 7483                 | 24.7   | 5059            | 3.9    | 12,542          | 7.8
EMAC   | 7604                 | 25.1   | 4151            | 3.2    | 11,755          | 7.3
OBCD   | 9357                 | 30.9   | 869             | 0.7    | 10,226          | 6.4
SDCDUA | 4653                 | 15.4   | 1774            | 1.4    | 6427            | 4.0
Table 4. Accuracy of object-based CD results for different scale segmentation maps.

Q   | Missed Detections Nm | Pm (%) | False Alarms Nf | Pf (%) | Total Errors Nt | Pt (%)
32  | 23,929               | 68.0   | 1705            | 0.8    | 25,634          | 10.6
64  | 15,503               | 44.1   | 7362            | 3.5    | 22,865          | 9.4
128 | 13,552               | 38.5   | 10,941          | 5.3    | 24,493          | 10.1
256 | 30,054               | 85.4   | 1487            | 0.7    | 31,541          | 13.0
Table 5. Accuracy of CD results produced with different thresholds of data set 2.

Tm   | Missed Detections Nm | Pm (%) | False Alarms Nf | Pf (%) | Total Errors Nt | Pt (%)
0.6  | 12,345               | 35.1   | 8046            | 3.9    | 20,391          | 8.4
0.65 | 12,345               | 35.1   | 8247            | 4.0    | 20,592          | 8.5
0.7  | 12,345               | 35.1   | 8247            | 4.0    | 20,592          | 8.5
0.75 | 12,345               | 35.1   | 8247            | 4.0    | 20,592          | 8.5
0.8  | 12,345               | 35.1   | 8247            | 4.0    | 20,592          | 8.5
0.85 | 12,345               | 35.1   | 8247            | 4.0    | 20,592          | 8.5
0.9  | 12,954               | 36.8   | 5339            | 2.6    | 18,293          | 7.5
Table 6. Quantitative comparison between the comparative and proposed methods for data set 2.

Method | Missed Detections Nm | Pm (%) | False Alarms Nf | Pf (%) | Total Errors Nt | Pt (%)
DRLSE  | 17,700               | 50.3   | 7480            | 3.6    | 25,180          | 10.3
CV     | 9747                 | 27.7   | 21,401          | 10.3   | 31,148          | 12.8
MLSK   | 12,035               | 34.2   | 16,830          | 8.1    | 28,913          | 11.9
EMAC   | 12,633               | 35.9   | 13,921          | 6.7    | 26,554          | 10.9
OBCD   | 15,503               | 44.1   | 7362            | 3.5    | 22,865          | 9.4
SDCDUA | 12,954               | 36.8   | 5339            | 2.6    | 18,293          | 7.5
