Article

Comparative Analyses of Unsupervised PCA K-Means Change Detection Algorithm from the Viewpoint of Follow-Up Plan

by
Deniz Kenan Kılıç
* and
Peter Nielsen
Department of Materials and Production, Aalborg University, 9220 Aalborg, Denmark
*
Author to whom correspondence should be addressed.
Sensors 2022, 22(23), 9172; https://doi.org/10.3390/s22239172
Submission received: 3 November 2022 / Revised: 16 November 2022 / Accepted: 22 November 2022 / Published: 25 November 2022
(This article belongs to the Section Remote Sensors)

Abstract
In this study, the principal component analysis and k-means clustering (PCAKM) method for synthetic aperture radar (SAR) data is analyzed with the aims of reducing the sensitivity caused by changes in the algorithm's parameters and input images, increasing accuracy, and improving computation time, all of which are advantageous for scoring in the follow-up plan. Although many supervised methods are described in the literature, unsupervised methods may be more appropriate in terms of computing time, data scarcity, and explainability, and thus better suited to supplying a trustworthy system. We consider the PCAKM algorithm, which is used as a benchmark method in many comparative studies. Error metrics, computing times, and utility functions are calculated for 22 PCAKM configurations that differ in their difference images and filtering methods. Images with different characteristics affect the results of the configurations; nevertheless, the modified PCAKM becomes less sensitive and more accurate for both the overall and per-image results. Scoring by utilizing these results together with other map information is a gap in the literature and an innovation. Obtaining a change map in a fast, explainable, more robust, and less sensitive way is one of the aims of our studies on scoring points in the follow-up plan.

1. Introduction

Change detection for temporal differential images is the implementation of an algorithm/method to detect the changes that have occurred between two images obtained at different times from the same sensor, platform, and location. In other words, it is a process that divides the map into changed and unchanged regions.
Change detection algorithms are used in several areas such as video surveillance, remote sensing, medical diagnosis and treatment, civil infrastructure, underwater sensing, and driver assistance systems [1]. Different types of systems in remote sensing and aerial photography are used to detect changes between the scenes of the same location acquired at different times, which is also called remote sensing change detection [2]. Depending on the sensors, systems used, and the time–frequency of the images obtained, different tasks are assigned and executed. Such changes can trigger follow-up activities to determine the cause or type of change, such as triggering additional image requests [3,4], direct actions such as search and rescue missions [5], or influencing decisions made in the area, e.g., threat avoidance [6]. In all of these cases, quick, precise, and interpretable change detection is critical to deriving timely information and properly reacting in the subsequent follow-up. A synthetic aperture radar (SAR) sensor is extensively used in numerous areas [7] to obtain change maps since it is not affected by the weather, light, or flight altitude [8,9]. These images typically form the foundation for actions such as those listed above. Speckle noise [10,11,12], fuzzy edges of changed regions [12], and limited datasets [12] are the main challenges for the change detection of SAR images.
Future changes that are likely to occur can be predicted using spatial and temporal dynamics. Therefore, follow-up information acquisition can be streamlined by creating the foundation for the follow-up plan through scoring the predicted change detection map [13]. Follow-up activities require consideration of the estimated change map’s accuracy and computation time. Most follow-up planning may not allow for the large amounts of data that are necessary for supervised learning. Due to this, follow-up planning benefits from the adoption of unsupervised techniques that do not need training data and have quick computation times. In addition, classical methods generally provide more transparency, explainability, and interpretability than more complex ones do [14]. These properties support the system’s trustworthiness [15,16,17], which is significant for any response to the detected change.
In this study, the change maps of different satellite images are calculated using the unsupervised change detection algorithm proposed by Celik [18], called principal component analysis and k-means clustering (PCAKM). In situations where the follow-up plan needs to be made within a short period (such as disaster response) and training data are lacking, it is more appropriate to use unsupervised methods instead of supervised methods. In addition, it is not necessary for unsupervised change detection methods to specifically identify the kinds of changes in land use or cover that have occurred in the area of interest [19]. Depending on the parameters used in PCAKM, the calculation times and performance of the obtained change detection vary. Moreover, altering the inputs of Celik’s algorithm affects the results notably.
We produce several configurations as modified PCAKMs using different filters and difference images (DIs). The performance results obtained with these modified versions of Celik’s algorithm are compared and examined to understand whether they are suitable for generating scores that form the foundation for planning follow-up detailed investigations or responses. As a result of these investigations, we seek to answer the following questions:
  • Is it possible to decrease sensitivity or increase consistency?
  • Is it possible to decrease computing time without decreasing accuracy?
These questions are critical to obtaining a modified method that is less affected by the PCAKM algorithm parameters and input image characteristics. The way to achieve this is to increase the average performance and reduce the variance of the obtained results. On the other hand, decreasing computing time is important in real-time tasks. Change maps produced by a less sensitive method will then be input into the scoring stage of the follow-up plan. Change maps with lower error rates and variance will support the selection of areas of interest (AOIs), the generation of points within these AOIs, and their scoring. A gap and innovation in the follow-up plan is scoring using change map results together with other map information. We aim to contribute to scoring points in the follow-up plan by focusing on obtaining the change map in a quick, explainable, more accurate, and less sensitive manner.
The paper is organized as follows. Section 2 provides a review of the related literature and the methods used in this paper. Section 3 explains the data, configurations, and performance metrics used in the experiments and shows the results. Comments and discussions on the results are included in Section 4. Finally, the paper is concluded in Section 5.

2. Methods

To find answers to the questions in Section 1, we make comparative analyses by performing unsupervised change detection and performance evaluation. The change detection part uses an unsupervised learning method that combines principal component analysis (PCA) and k-means clustering, as described in [18].
The PCAKM method proposed by Celik [18] has been used as a benchmark in many unsupervised and supervised SAR image change detection studies and continues to be used in state-of-the-art research. Li et al. [20] compared PCAKM, Markov random field fuzzy c-means (MRFFCM), Gabor fuzzy c-means (GaborFCM), and the Gabor two-layer classifier (GaborTLC). They determined that the Kappa coefficient (KC) difference between PCAKM and the other methods on the benchmark data is at most 2.67%. Gao et al. [13] proposed deep semi-non-negative matrix factorization (NMF) and a singular value decomposition network and compared them with PCAKM, MRFFCM, GaborTLC, and deep neural networks with MRFFCM (D_MRFFCM). The KC differences between PCAKM and the methods that give better results on the three benchmark datasets are less than or equal to 6.77%, 7.57%, and 3.3%, respectively. In addition, PCAKM has a higher KC than MRFFCM and GaborTLC for two out of the three datasets. A gamma deep belief network (gΓ-DBN) was proposed by Jia and Zhao [12] and compared with PCAKM, a convolutional-wavelet neural network (CWNN), a deep belief network (DBN), and a joint deep belief network (JDBN). According to their experimental results, PCAKM shows better performance than CWNN and DBN in terms of KC for one of the benchmark datasets. When all images are examined, the improvements in KC over PCAKM are less than or equal to 7.98%. Wang et al. [21] presented a graph-based knowledge supplement network (GKSNet) and matched it against PCAKM, a neighborhood-based ratio and extreme learning machine (NR-ELM), a Gabor PCA network (GaborPCANet), a local restricted convolutional neural network (LR-CNN), a transferred multilevel fusion network (MLFN), DBN, and a deep cascade network (DCNet). In their study, the KC/F1-measure improvements over PCAKM are less than or equal to 10.71%/9.42%, 2.75%/2.2%, 19.21%/22.57%, and 11.43%/9.75% for the four benchmark datasets, respectively. Even though the supervised methods provide these improvements in accuracy, the run-time results show reasonably large differences between PCAKM and the other methods. The average run-times in seconds for PCAKM, NR-ELM, GaborPCANet, LR-CNN, MLFN, DBN, DCNet, and GKSNet are 2.3, 22.5, 442.8, 282.6, 187.6, 474.1, 509.6, and 144.92, respectively [21].
Considering its features such as being fast, not requiring learning data, and having a simple algorithm, we selected PCAKM as a benchmark method. It shows promising results for both unsupervised [13,18,20] and supervised methods [12,21].
Speckle is a type of grainy noise that occurs naturally in active radar, SAR, medical ultrasound, and optical coherence tomography images, and decreases their quality. Images of the same region taken at different times have different levels of speckle. Speckling creates difficulty in distinguishing opposite classes [22] since it increases the overlap of opposite-class pixels in the histogram of difference images. On the other hand, there is competitive interaction between altered regions and background regions due to a lack of past information, resulting in a fuzzy edge in the changed region that is difficult to discern. Another challenge is the lack of data, which is an issue for supervised learning.
Noise may arise as a result of the system’s construction, illumination conditions, and the image acquisition process. Numerous methods have been proposed for speckle reduction or despeckling. Speckle reduction filters are classified as non-adaptive and adaptive filters [23]. Mean and median filtering are examples of non-adaptive techniques, whereas Lee, Frost, Kuan, and G-MAP are examples of adaptive filters. Qiu et al. [24] argued that, in principle, none of these filters consistently outperforms the others. Each filter has particular advantages and disadvantages depending on the data. For this reason, choosing a more stable and consistent filter is important.
Moreover, speckle reduction techniques are categorized into spatial-domain, transform-domain (or wavelet-domain), non-local filtering, and total variational methods [23]. Specifically, anisotropic diffusion (AD), the bilateral filter (BF), the fast non-local means filter (FNLMF), and the guided filter (GF) are some other filters used to reduce speckle noise [25]. Choi and Jeong [25] state that BF and the non-local mean filter (NLMF) have a low speckle noise reduction performance. In addition, non-linear methods such as BF and NLMF have poor computational time performance [25]. Partial differential equation (PDE)-based algorithms, including AD and adaptive window anisotropic diffusion, also perform weakly on speckle noise removal [25]. Some other conventional filtering methods, such as the discrete wavelet transform (DWT), the Bayesian multiscale method in a non-homomorphic framework, and expectation maximization DWT, perform poorly in terms of speckle noise removal, edge information preservation, and computational complexity [25]. We test the edge-preserving GF method, which has low computational complexity among the speckle noise elimination techniques, considering the performance results for the SAR images in [25]. We also use the NLMF and BF methods to examine the performance characteristics mentioned above.
In this study, we compared the PCAKM with its modified versions (different combinations of difference images and filters) in terms of accuracy and time performance. We consider whether there is a modified method with less sensitivity and higher accuracy for the change map to be used in any follow-up plan.

2.1. Original PCA K-Means Algorithm

The flow of the proposed original method is given in Figure 1 [18]. PCA and k-means methods [18] are utilized for the change detection part.
Firstly, the input image pair $I_1$ and $I_2$ is converted into grayscale images. Then, the absolute difference image for the given image pair is calculated as
$D_1 = |I_1 - I_2|.$
Afterward, $D_1$ is divided into $b_s \times b_s$ non-overlapping blocks, where $b_s$ is the side length of the square blocks. After converting these blocks into row vectors, PCA is applied to this vector set to obtain the orthonormal eigenvectors. In the next step, the feature vector space is created by projecting the $b_s \times b_s$ overlapping blocks around each pixel onto the eigenvector space. The feature vector space is the input of the k-means algorithm, which groups it into clusters to obtain the change map: each pixel is assigned to the cluster that minimizes the distance between its feature vector and the cluster's mean vector. Briefly, we use two parameters, $b_s$ and $k$, as the block width and the number of clusters, respectively. In Section 3, $b_s$ takes values between 2 and 8, whereas $k$ takes the values 2 and 3 for each image pair.
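As an illustration of this flow, a minimal sketch using NumPy and scikit-learn is given below. The number of retained eigenvectors S, the reflect padding at the image borders, and the rule that labels the cluster with the larger mean difference value as "changed" are implementation assumptions, not details fixed by the text above.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def pcakm_change_map(I1, I2, bs=4, k=2, S=3):
    """Sketch of the PCAKM flow: difference image -> block PCA -> per-pixel
    features -> k-means. I1, I2 are grayscale images as 2-D float arrays."""
    D = np.abs(I1.astype(float) - I2.astype(float))   # absolute difference image D1
    H, W = D.shape

    # 1) non-overlapping bs x bs blocks, flattened to row vectors, then PCA
    Hc, Wc = H - H % bs, W - W % bs
    blocks = (D[:Hc, :Wc]
              .reshape(Hc // bs, bs, Wc // bs, bs)
              .swapaxes(1, 2)
              .reshape(-1, bs * bs))
    pca = PCA(n_components=min(S, bs * bs)).fit(blocks)

    # 2) overlapping bs x bs neighbourhoods around every pixel, projected
    #    onto the leading eigenvectors (reflect padding is an assumption)
    Dp = np.pad(D, bs // 2, mode="reflect")
    feats = np.stack([Dp[i:i + bs, j:j + bs].ravel()
                      for i in range(H) for j in range(W)])
    feats = pca.transform(feats)

    # 3) k-means on the feature vectors; the cluster with the larger mean
    #    difference value is taken as the "changed" class (assumed convention)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
    cluster_means = [D.ravel()[labels == c].mean() for c in range(k)]
    return (labels == int(np.argmax(cluster_means))).reshape(H, W)
```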

2.2. Other Difference Image Methods

The log-ratio difference image method, which is given in Equation (2), is utilized in many studies to reduce the multiplicative distortion effects of speckle noise [10]. Moreover, Zhao et al. [26] produced the difference image via image regression, as given in Equation (3), to avoid problems such as atmospheric condition changes, illumination variations, and sensor calibration [27]. The image regression method enhances the difference image compared with the one obtained by direct subtraction. However, both the log-ratio and absolute log-ratio methods still do not perform well enough to eliminate speckle noise if the input is of low quality [10,27].
Definition 1 (Log-ratio difference image). The log-ratio image is the logarithmic transform of the image pair's ratio,
$D_2 = f\!\left(\frac{I_1}{I_2}\right),$
where $f(p) = c \log_{10}(1 + p)$ with $c \approx 105$ for all pixels $p \in I_1/I_2$.
Definition 2 (Absolute log-ratio difference image). It is the absolute value of the log-ratio calculation,
$D_3 = \left|f\!\left(\frac{I_1}{I_2}\right)\right|.$
Zhang et al. [10] stated that SAR images are contaminated by speckle noise, which follows Goodman's multiplicative model. The Nakagami distribution in Equation (4) is then used to represent the independently and identically distributed pixel amplitudes. The Nakagami distribution is
$p(I_s \mid R_s) = \frac{2 L^{L}}{\Gamma(L)\, R_s^{L}}\, I_s^{2L-1} \exp\!\left(-\frac{L I_s^2}{R_s}\right),$
where $R_s$ and $I_s$ are the reflectivity and pixel amplitude at site $s$, respectively. Moreover, $L$ is the equivalent number of looks, which is a parameter of multi-look SAR images and represents the amount of averaging applied to the SAR measurements both during the creation of the data and, on occasion, even after [28]. After applying Bayesian decision theory, the difference image in Equation (5) is obtained, which incorporates the knowledge that the speckle follows the Nakagami distribution.
Definition 3 (Nakagami log-ratio (NLR) difference image). It is a modified version of the log-ratio difference image, given as
$D_4 = f\!\left(\frac{I_1}{I_2} + \frac{I_2}{I_1}\right).$
Its absolute version can be written as
$D_5 = \left|f\!\left(\frac{I_1}{I_2} + \frac{I_2}{I_1}\right)\right|.$
Definition 4 (Modified NLR difference image 1). In this version of the NLR difference image, we use the squared values of each image, given as
$D_6 = f\!\left(\frac{I_1^2}{I_2^2} + \frac{I_2^2}{I_1^2}\right).$
Its absolute value is
$D_7 = \left|f\!\left(\frac{I_1^2}{I_2^2} + \frac{I_2^2}{I_1^2}\right)\right|.$
Definition 5 (Modified NLR difference image 2). For this modified version of the NLR difference image, the squares of each ratio are added to the NLR difference image itself,
$D_8 = f\!\left(\frac{I_1}{I_2} + \frac{I_2}{I_1} + \frac{I_1^2}{I_2^2} + \frac{I_2^2}{I_1^2}\right).$
Its absolute value is
$D_9 = \left|f\!\left(\frac{I_1}{I_2} + \frac{I_2}{I_1} + \frac{I_1^2}{I_2^2} + \frac{I_2^2}{I_1^2}\right)\right|.$
Definition 6 (Improved ratio and log improved ratio difference image [29,30]). The improved ratio and its log-transformed version are given in Equations (11) and (12), respectively:
$D_{10} = 1 - \frac{\min\{I_1, I_2\}}{\max\{I_1, I_2\}},$
$D_{11} = f\!\left(1 - \frac{\min\{I_1, I_2\}}{\max\{I_1, I_2\}}\right).$
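For reference, a compact sketch of $D_1$–$D_{11}$ is given below; it assumes the logarithmic transform $f$ of Definition 1 and adds a small constant eps to avoid division by zero, which is an implementation choice rather than part of the definitions.

```python
import numpy as np

def f(x, c=105.0):
    """Logarithmic transform assumed from Definition 1: f(p) = c * log10(1 + p)."""
    return c * np.log10(1.0 + x)

def difference_images(I1, I2, eps=1e-6):
    """Sketch of the difference images D1-D11 for a pair of 2-D float arrays."""
    I1, I2 = I1.astype(float), I2.astype(float)
    r12, r21 = I1 / (I2 + eps), I2 / (I1 + eps)
    D = {1: np.abs(I1 - I2)}                     # absolute difference
    D[2] = f(r12)                                # log-ratio
    D[3] = np.abs(D[2])                          # absolute log-ratio
    D[4] = f(r12 + r21)                          # Nakagami log-ratio (NLR)
    D[5] = np.abs(D[4])
    D[6] = f(r12**2 + r21**2)                    # modified NLR 1
    D[7] = np.abs(D[6])
    D[8] = f(r12 + r21 + r12**2 + r21**2)        # modified NLR 2
    D[9] = np.abs(D[8])
    mn, mx = np.minimum(I1, I2), np.maximum(I1, I2)
    D[10] = 1.0 - mn / (mx + eps)                # improved ratio
    D[11] = f(D[10])                             # log improved ratio
    return D
```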

2.3. Non-Local Means Denoising

Basically, non-local means denoising (NLMD) [31] replaces the color of a pixel with an average of the colors of similar pixels. Since the pixels most similar to a given pixel need not be spatially close to it, the method searches a sizable portion of the image for every pixel that resembles the pixel to be denoised. There are three parameters: h, templateWindowSize (tws), and searchWindowSize (sws). The first regulates the filter strength: increasing it removes noise more strongly but also removes image details, and vice versa. The tws parameter is the size in pixels of the template patch used to compute weights. Lastly, sws is the size in pixels of the window used to compute the weighted average for a given pixel. We used OpenCV's recommended values of 7 and 21 for the last two parameters, respectively. For h, we used 20 since SAR images contain a high degree of noise.
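A minimal sketch of this step with OpenCV, using the parameter values quoted above (h = 20, template window 7, search window 21), could look as follows; the input file name is hypothetical and the image is assumed to be 8-bit grayscale.

```python
import cv2

# Non-local means denoising of one SAR amplitude image (8-bit grayscale assumed).
img = cv2.imread("sar_t1.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
denoised = cv2.fastNlMeansDenoising(img, h=20,
                                    templateWindowSize=7,
                                    searchWindowSize=21)
```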

2.4. Bilateral Filter

The bilateral filter (BF) employs a Gaussian filter in the spatial domain together with a (multiplicative) Gaussian component based on pixel intensity differences. Owing to the spatial Gaussian, only pixels that are "spatial neighbors" are taken into account for filtering, while the intensity-domain Gaussian ensures that only pixels with intensities close to that of the central pixel contribute to the blurred intensity value. BF is thus a method that preserves edge information. Similar to the NLMD setting, and considering that SAR images contain substantial noise, we used 10 for the parameter denoisingWindowsize (dws), which is larger than the default value of 3.
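A sketch of this step is shown below. It assumes OpenCV's bilateralFilter, maps the dws value of 10 onto the neighborhood diameter d, and uses illustrative sigma values, since the text does not report which implementation or sigma settings were used.

```python
import cv2

img = cv2.imread("sar_t1.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
# d = 10 follows the dws value above; sigmaColor/sigmaSpace are assumptions.
filtered = cv2.bilateralFilter(img, d=10, sigmaColor=75, sigmaSpace=75)
```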

2.5. Guided Filter

The guided filter (GF) is an edge-preserving smoothing filter. It filters out noise or texture while keeping sharp edges, just like a bilateral filter [32,33]. The GF is defined by the following Equations (13)–(15) as
$a_k = \frac{\frac{1}{|w|}\sum_{i \in w_k} I_i p_i - \mu_k \bar{p}_k}{\sigma_k^2 + \epsilon},$
$b_k = \bar{p}_k - a_k \mu_k,$
$q_i = \bar{a}_i I_i + \bar{b}_i,$
where $(a_k, b_k)$ are the linear coefficients of a linear transform of the guidance image $I$ at a pixel $i$ with the input image $p$, assumed to be constant in a window $w_k$ (a square window of radius $r$) centered at pixel $k$. Furthermore, $\mu_k$ and $\sigma_k^2$ are the mean and variance of $I$ in $w_k$, $|w|$ is the number of pixels in $w_k$, $\bar{p}_k = \frac{1}{|w|}\sum_{i \in w_k} p_i$ is the mean of $p$ in $w_k$, and $\epsilon$ is a regularization parameter penalizing large $a_k$. Moreover, $\bar{a}_i = \frac{1}{|w|}\sum_{k \in w_i} a_k$ and $\bar{b}_i = \frac{1}{|w|}\sum_{k \in w_i} b_k$ are the average coefficients of all windows overlapping $i$, and $q_i$ is the filtering output at pixel $i$.
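A minimal sketch of Equations (13)–(15) using box filters is given below; using the image to be filtered as its own guidance is an assumption that is common for denoising.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r, eps):
    """Guided filter sketch: I is the guidance image, p the input image
    (2-D float arrays), r the square-window radius, eps the regularization."""
    mean = lambda x: uniform_filter(x, size=2 * r + 1)   # box mean over each w_k
    mu_I, mu_p = mean(I), mean(p)
    var_I = mean(I * I) - mu_I * mu_I                    # sigma_k^2
    cov_Ip = mean(I * p) - mu_I * mu_p                   # (1/|w|) sum I_i p_i - mu_k p_bar_k
    a = cov_Ip / (var_I + eps)                           # Equation (13)
    b = mu_p - a * mu_I                                  # Equation (14)
    return mean(a) * I + mean(b)                         # Equation (15)

# Self-guided use for despeckling a difference image D (assumption):
# q = guided_filter(D, D, r=4, eps=0.01)
```

In the configurations of Section 3.2, the radius r is set equal to the PCAKM block size bs.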

2.6. Truncated Singular Value Decomposition

Truncated singular value decomposition (TSVD) is a reduced-rank approximation of a matrix A obtained by keeping only the first (largest) singular values. We determine the number of retained components via the percentage of total variance; therefore, we utilize the var parameter as a threshold on the total variance. The reason for applying this method is to assess whether we can reduce the computing time without much loss of accuracy.
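A short sketch of this rank selection rule is given below; reconstructing the low-rank image from the retained components is an assumption about how the approximation is used downstream.

```python
import numpy as np

def tsvd_approx(A, var=0.95):
    """Keep the smallest number of singular components whose cumulative share
    of the total variance reaches the threshold 'var', then reconstruct A."""
    U, s, Vt = np.linalg.svd(np.asarray(A, dtype=float), full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    rank = int(np.searchsorted(energy, var)) + 1
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
```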

3. Experiments

3.1. Data

Details of the data used in the experiments are given in Table 1. For each image pair, there is a ground truth image for the change between the two images. The ground truth images are used to generate the confusion matrices and calculate the performance metrics described in Section 3.3.
In Table 2, noise variance values based on the method in [34] are given.
In Figure 2, all images with histograms are demonstrated.
In Figure 3, Radon transforms between 0 and 180 degrees are illustrated. Radon transforms, also called sinograms, are projections of the image matrix along predetermined directions, where lighter tones indicate higher intensity.
It is apparent from Table 2, Figure 2 and Figure 3 that characteristics differ not only within each image pair but also between image pairs across the set.

3.2. Configurations

There are seven SAR image pairs and seven ground truths for the change maps as data. We utilized 22 different configurations, one of which is the original method [18] used as a benchmark, while the others are modified versions of it. For each configuration, there are 98 (7 × 14) change detection results for each performance metric, since the two main parameters, block size and number of clusters, take values in the ranges 2–8 and 2–3, respectively. We compute each change detection result 1000 times and then obtain the minimum, maximum, and average calculation times. The accuracy results do not change over these 1000 runs since all 22 configurations have deterministic skeletons.
All configurations are given in Table 3 with their configuration numbers. Configurations containing more than one method are written in the order of their application. For every configuration, the PCAKM algorithm is applied after the listed methods. We set the radius of the GF square window (r) equal to the block size (bs) parameter of the PCAKM algorithm. Explanations of the other parameters in Table 3 are given in Section 2.
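A sketch of the resulting experimental grid is given below; image_pairs, run_config, and evaluate are hypothetical stand-ins for the loaded data, a configuration from Table 3 followed by PCAKM, and the metric computation of Section 3.3.

```python
import itertools
import time
import numpy as np

def run_experiment_grid(image_pairs, configurations, run_config, evaluate, repeats=1000):
    """Evaluate every configuration on every image pair over the 14 (bs, k)
    combinations; each run is timed 'repeats' times (accuracy is deterministic)."""
    results = []
    for (pair_id, (I1, I2, gt)), (cfg_id, cfg) in itertools.product(
            enumerate(image_pairs, 1), enumerate(configurations, 1)):
        for bs, k in itertools.product(range(2, 9), (2, 3)):    # 7 x 2 = 14 combinations
            times = []
            for _ in range(repeats):
                t0 = time.perf_counter()
                change_map = run_config(cfg, I1, I2, bs=bs, k=k)
                times.append(time.perf_counter() - t0)
            kc, fmeas = evaluate(change_map, gt)
            results.append({"pair": pair_id, "config": cfg_id, "bs": bs, "k": k,
                            "kc": kc, "fmeas": fmeas, "t_min": min(times),
                            "t_max": max(times), "t_avg": float(np.mean(times))})
    return results
```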

3.3. Performance Metrics

After calculating the change maps, performance metrics are estimated using the confusion matrix that is given in Table 4.
Below are formulations for performance metrics using the true positive (tp), false positive (fp), false negative (fn), and true negative (tn) in the confusion matrix as
  • Percentage correct classifications: $pcc = (tp + tn)/n$
  • Kappa coefficient: $kc = (pcc - p)/(1 - p)$, where $p = \frac{(tp + fp)(tp + fn) + (fn + tn)(tn + fp)}{n^2}$
  • Precision: $prec = tp/(tp + fp)$
  • Recall: $recall = tp/(tp + fn)$
  • F-measure: $fmeas = (2 \times prec \times recall)/(prec + recall)$
where $n = tp + tn + fp + fn$.
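As a minimal sketch, assuming binary NumPy arrays for the predicted change map and the ground truth, these metrics can be computed as follows.

```python
import numpy as np

def change_map_metrics(pred, gt):
    """Confusion-matrix based metrics of Section 3.3 (pred, gt: boolean arrays,
    True = changed pixel)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = int(np.sum(pred & gt));   fp = int(np.sum(pred & ~gt))
    fn = int(np.sum(~pred & gt));  tn = int(np.sum(~pred & ~gt))
    n = tp + tn + fp + fn
    pcc = (tp + tn) / n
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (tn + fp)) / n**2
    kc = (pcc - p_e) / (1 - p_e)
    prec = tp / (tp + fp)
    recall = tp / (tp + fn)
    fmeas = 2 * prec * recall / (prec + recall)
    return {"pcc": pcc, "kc": kc, "prec": prec, "recall": recall, "fmeas": fmeas}
```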
We use the Kappa coefficient and f-measure as accuracy measures. The range of the former is $[-1, 1]$ and that of the latter is $[0, 1]$. For both metrics, a higher value means better accuracy.
On the other hand, we estimate the utility functions by employing Kappa coefficient, f-measure, and average computing times. For each image pair, we have two utility values as
$U1_{ij} = \mu_{ij1} + \mu_{ij2} - \sigma_{ij1}^2 - \sigma_{ij2}^2,$
$U2_{ij} = (\mu_{ij1} + \mu_{ij2} - \sigma_{ij1}^2 - \sigma_{ij2}^2)/\bar{t}_{ij},$
where $\mu_{ij1}$ is the average of the Kappa coefficient values, $\mu_{ij2}$ is the average of the f-measure values, $\sigma_{ij1}^2$ is the variance of the Kappa coefficient values, $\sigma_{ij2}^2$ is the variance of the f-measure values, $\bar{t}_{ij}$ is the mean of the average computing times, $i$ is the image pair number, and $j$ is the configuration number for $i = 1, \ldots, 7$ and $j = 1, \ldots, 22$. For each configuration, we have 14 results since the parameters block size and number of clusters take values in the ranges 2–8 and 2–3, respectively. These 14 values are used for the mean and variance calculations. In addition, we have 14 different average time calculations, where each parameter pair's result is computed 1000 times; we take the average of these 1000 computation times and then estimate the mean of the 14 average computation times.
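A minimal sketch of this per-image utility computation is given below; the overall utilities U3 and U4 defined next follow the same pattern, with the means taken over all 98 results and the variances averaged over the seven per-image variances.

```python
import numpy as np

def image_utilities(kc_vals, fmeas_vals, avg_times):
    """U1 and U2 for one image pair and one configuration, computed from the
    14 (bs, k) results: kc values, f-measure values, and averaged run times."""
    u1 = (np.mean(kc_vals) + np.mean(fmeas_vals)
          - np.var(kc_vals) - np.var(fmeas_vals))
    u2 = u1 / np.mean(avg_times)      # penalize configurations with long run times
    return u1, u2
```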
In addition to the $U1$ and $U2$ utility values, we calculate the following utility values over all images for a single configuration:
$U3_{k} = \mu_{k1} + \mu_{k2} - \sigma_{k1}^2 - \sigma_{k2}^2,$
$U4_{k} = (\mu_{k1} + \mu_{k2} - \sigma_{k1}^2 - \sigma_{k2}^2)/\bar{t}_{k},$
where $\mu_{k1}$ and $\mu_{k2}$ are the average Kappa coefficient and the average f-measure over all 98 results (14 parameter combinations for seven images), $\sigma_{k1}^2$ and $\sigma_{k2}^2$ are the averages of the seven per-image variances for each configuration, $\bar{t}_{k}$ is the mean of the seven images' average computing times (each image has 14 averaged time results over 1000 experiments), and $k$ is the configuration number for $k = 1, \ldots, 22$. Since, as mentioned in Section 3.1, each image pair has different characteristics in terms of noise variance, histogram, and Radon transform, we take the average of the seven per-image variance results for each configuration.

3.4. Results

The best and worst results and the mean and variance of the error metrics (kc and fmeas) for the 22 configurations are given in Table A1, Table A2, Table A3, Table A4, Table A5, Table A6 and Table A7 for each image pair, respectively. The bs and c columns denote the block size and number of clusters parameters, and the “No” columns state the configuration numbers. We calculate the U1 and U2 utility values in these tables by utilizing each configuration’s average computing time. Image pairs, ground truth images, and the best change map result for all images are given in Figure A1 in Appendix B. The first two columns contain the image pairs; the third and fourth columns are the ground truths and best change map results, respectively.
Based on the image results given in Appendix A, Table 5 shows the order of configurations from largest to smallest utility values.
On the other hand, Table 6 presents the overall mean and variance of error metrics for each configuration with average computing times regarding the configuration numbers. We calculated U3 and U4, where the mean and variance values are the average values of seven different image results. Bold values show the highest mean and lowest variance estimations for error metrics.
Table 7 demonstrates the ranking of U3 and U4 values in terms of configuration numbers.

4. Discussion

4.1. Image-Based Results

D3 (config. 8), D5 (config. 14), D7 (config. 17), and D9 (config. 20) are the absolute values of D2 (config. 2), D4 (config. 9), D6 (config. 15), and D8 (config. 18), respectively. Looking at Table A1, Table A2, Table A3, Table A4, Table A5, Table A6 and Table A7, it is evident that the absolute versions of the difference images do not increase the mean accuracy values for any configuration or image. There is no systematic decrease or increase in the variance and average time performances either.
We check the configuration pairs 2&3, 9&11, and 9&12 to observe the effects of TSVD. Applying TSVD decreases the average computing times for the Ottawa, Yellow River 2, Yellow River 4, San Francisco, and Bern datasets. It increases the mean values for Ottawa and the Yellow River 4, decreases the variance for Ottawa and Yellow River 2, and increases variance for Yellow River 3. Otherwise, increases and decreases show variability.
Furthermore, if we consider the configuration pairs 2&4, 9&10, 15&16, and 18&19, utilizing GF increases the average computing times for all images. However, there is no consistent pattern of increase or decrease in the mean and variance, since both the difference image and the input images affect the procedure. In addition, utilizing TSVD and GF with D4 does not produce consistent changes in mean, variance, and time when each image pair is considered separately. Furthermore, using NLMD, or NLMD with GF, noticeably increases the average computing times for D2 due to NLMD. Nevertheless, the changes in mean and variance across the images are not consistent, which is also the case for BF with D2.
Applying different methods generally yields different effects on the mean, variance, and time as the difference images and input images are altered. Similarly, the ranking of the U1 and U2 values changes depending on the image, since each image has different characteristics, as mentioned in Section 3.1. Therefore, at this point, it is more accurate to focus on the utility values obtained from the overall results, because even if the same sensor is used, the acquired data may have different characteristics depending on the environment and other factors.

4.2. Overall Results

Table 6 presents the overall accuracy results and utility values. It shows the overall mean and variance of the error metrics for each configuration considering all the images’ results. Note that employing TSVD increases the average means and decreases the average variances and computing times for D2 and D4, where the averages are calculated from the seven image pair outcomes. On the other hand, the GF-added configurations produce higher mean values and computing times, except for configuration 5, but their variances change in different directions. Additionally, although NLMD shows an improvement in mean values, it gives worse variance and time performances for D2. BF, in turn, increases the variance values even though it improves the time and mean outcomes for D2.
Considering that we work with inputs that have different characteristics, the U3 and U4 values of the overall results are considered. If time performance is not a concern, U3 is calculated considering the high accuracy and low variance that are desired for consistency across different images and parameters. U4 is obtained when the time performance is also taken into account. Table 7 clarifies the ranking of U3 and U4 in terms of configuration number. It is apparent that using one of TSVD, NLMD, GF, or some combination thereof raises the U3 value. Additionally, a higher U3 value is obtained when the absolute value is taken, with D8 (D9 is its absolute version) being an exception. Furthermore, D6 produces a higher U3 value than D8, D4, D2, and D1 do. Difference images with more information seem to work better. As an exception, D8 (a combination of D4 and D6) gives a lower U3 value than D6 but is still close to it. Nevertheless, D8 generates a better U3 value than D4, D2, and D1. When the average time calculations are also taken into account, the rankings change, of course.
Since there are configurations with close utility values, more than one method can be selected and applied to scoring points for the follow-up plan. For example, utility values can be normalized between 0 and 1, and those above the value obtained by subtracting a certain percentage from the highest value can be selected. As such, configurations with high accuracy and low variance are selected for a set containing data with different properties. If time performance is also important, it is also counted. After determining the U3 and U4 values that fall within a certain percentage or above the threshold value, the configurations that are common to both sets can be selected.
According to all outcomes, we can answer the questions in Section 1. We find that it is possible to decrease the sensitivity (i.e., increase consistency). On the other hand, accuracy improves while the computation time is reduced for some, but not all configurations. However, no configuration works faster than the original method (config. 1) in terms of average calculation time. Despite this, in Table 6, there are configurations with high accuracy and average time calculation values that are close to the original method’s result.

5. Conclusions

In this study, we compared the original PCAKM and its modified versions. All the configurations we use are deterministic, so the results are robust. In addition, none of them need the large training datasets required in supervised methods. Unsupervised methods, which work much faster than supervised methods, also stand out in terms of explainability. Today, issues such as explainability, interpretability, and transparency contribute to a trustworthy system [14], which is important for all stakeholders. Trustworthiness is an important concept to ensure that no undesirable consequences of AI systems occur during deployment.
Since PCAKM has more than one parameter combination and the analyses involve different image types, it seems reasonable to examine the error metrics from an overall perspective. In this way, we obtain more consistent (i.e., less sensitive) information about the mean, variance, and computing time performance of the error metrics. Since the differences between the error values obtained for all parameter combinations become smaller, it is more beneficial to consider the combination of all of them. It is apparent that the choice of difference image and noise reduction method makes a significant difference in the accuracy of the obtained results.
In the future, we plan to use the obtained results for point scoring in the follow-up activity, which may affect the road maps of different agents [37,38]. A more consistent unsupervised method will help assign specific scores of interest to points on the map in a fast and efficient manner. For example, using different layer information such as a vegetation index, scoring can be performed on the change map information depending on the follow-up activity. In Figure A2, the figure on the left is an image taken from Google Maps and the figure on the right is the vegetation index [39] map (VIM) we calculated for this image. When a change map is produced for the image taken from Google Maps, the information obtained by overlapping the change map and the VIM can play an important role in the scoring method according to the follow-up plan. Depending on the follow-up plan, the VIM can belong to either the first or the second of the times selected for the change map; in other words, this is determined by the follow-up action to be taken. Examples could be to investigate specific parts of the road infrastructure after an earthquake or flood.
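As a rough illustration of this idea, the sketch below computes a simple RGB vegetation index (excess green) and scores a change map by its overlap with vegetated pixels; both the index choice and the scoring rule are illustrative assumptions, not the scoring method to be developed.

```python
import numpy as np

def vegetation_overlap_score(rgb, change_map, threshold=0.1):
    """Fraction of changed pixels that fall on vegetated ground, using the
    excess-green index (2g - r - b) as a stand-in vegetation index map."""
    rgb = rgb.astype(float) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    vim = 2 * g - r - b                         # simple RGB vegetation index
    vegetated = vim > threshold
    change_map = change_map.astype(bool)
    return float((change_map & vegetated).sum()) / max(int(change_map.sum()), 1)
```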
Other maps similar to the VIM can be used as labels to be displayed in a GIS. Maps that can be used for various situations (weather-related maps, information maps from the user or the planner, etc.) are merged with the change map as different layers to determine scores for the follow-up plan. Our next step will be to develop follow-up plan types and the important layer maps that will contribute to the planning and scoring methods for each follow-up plan. At this point, it is worth noting that SAR images are not affected by factors such as time of day and weather. Therefore, they offer an advantage in matters such as disaster response in terms of seeing the big picture. In addition, employing the proposed unsupervised method provides a robust, fast, explainable, less sensitive, and more accurate solution. These features will help close the gap in scoring AOIs for follow-up plans that need a quick, estimated change map. We aim to develop an innovative scoring method for the follow-up action by merging change map results and other relevant map information as significant layers.
In addition, future work will aim to enhance the proposed change detection method with other unsupervised and supervised methods for different sensor types, such as optical and thermal. The purpose of this is to obtain different layer maps by classifying the image [40]. These different layers will be used in scoring for the follow-up planning.

Author Contributions

The concept was developed through rigorous discussion among all the authors. D.K.K. took care of the coding parts for all computational aspects after a thorough discussion with P.N. All the authors were equally involved in the manuscript preparation. All authors read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AOI: Area of interest
BF: Bilateral filter
bs: Block size
c: Number of clusters
config.: Configuration
CWNN: Convolutional-wavelet neural network
DBN: Deep belief network
DCNet: Deep cascade network
dws: Denoising window size
DWT: Discrete wavelet transform
fmeas: F-measure
FNLMF: Fast non-local mean filter
FN: False negative
FP: False positive
gΓ-DBN: Gamma deep belief network
GaborFCM: Gabor fuzzy c-means
GaborPCANet: Gabor PCA network
GaborTLC: Gabor two-layer classifier
GF: Guided filter
GKSNet: Graph-based knowledge supplement network
G-MAP: Gaussian model adaptive processing
JDBN: Joint deep belief network
KC: Kappa coefficient
LR-CNN: Local restricted convolutional neural network
MLFN: Multilevel fusion network
MRFFCM: Markov random field fuzzy c-means
NMF: Non-negative matrix factorization
NLMD: Non-local means denoising
NLMF: Non-local mean filter
NLR: Nakagami log-ratio
NR-ELM: Neighborhood-based ratio and extreme learning machine
PCAKM: Principal component analysis and k-means clustering
PCC: Percentage correct classifications
PDE: Partial differential equation
PREC: Precision
SAR: Synthetic aperture radar
sws: Search window size
TN: True negative
TP: True positive
TSVD: Truncated singular value decomposition
tws: Template window size
VIM: Vegetation index map

Appendix A

In this section, individual results for each image pair are given. Tables include best and worst results, mean, variance, average computing time for 1000 experiments, and utility values for each configuration. The highest best results and means, and lowest variance estimations for error metrics are shown by bold numbers.
Table A1. Results of Ottawa Data. 
No | Best Results: bs, c, kc (fmeas) | Worst Results: bs, c, kc (fmeas) | Mean: kc (fmeas) | Var.: kc (fmeas) | Avg. Time | U1 | U2
(In each entry, the first line gives kc values and the second line fmeas values.)
1320.7684820.64120.69350.00111.69531.43080.8440
320.8042830.69890.73920.0008
2320.8933230.30770.67390.03161.80041.33190.7398
320.9097230.38320.71550.0259
3320.8961230.32400.69120.02501.79851.37680.7655
320.9121230.39860.73100.0204
4320.8320730.43530.65530.01711.82341.32050.7242
320.8554730.49400.69680.0145
5320.8997230.29450.65260.04152.93281.27340.4342
320.9154230.36630.69630.0340
6320.8997720.08800.56560.07312.87941.05700.3671
320.9154720.22790.62130.0568
7320.8984230.30620.69140.02701.81121.37310.7581
320.9141230.38000.73090.0222
8320.8933230.31760.67450.03111.81431.33380.7352
320.9096230.39190.71590.0255
9320.8321730.43680.64540.01981.81241.29650.7153
320.8555730.49550.68780.0169
10320.8341830.44220.66920.01371.82951.35340.7398
320.8573830.50240.70950.0116
11320.8379830.55980.68960.00881.74031.40060.8048
320.8612830.60730.72740.0076
12320.8351830.56030.68960.00801.75551.40220.7987
320.8583830.60770.72750.0069
13320.8355230.42730.67820.01221.88161.37290.7296
320.8587230.48780.71740.0105
14320.8321730.43670.65560.01701.68531.32120.7840
320.8555730.49530.69710.0145
15320.8558230.37350.66240.02231.73281.32540.7649
320.8766230.43970.70390.0186
16320.8567230.36470.67390.01921.88361.35280.7182
320.8773230.43100.71430.0162
17320.8558230.36810.66210.02251.82661.32440.7251
320.8766230.43500.70360.0188
18320.8479230.40130.66090.01991.62521.32650.8162
320.8695230.46420.70230.0168
19320.8481230.39100.67140.01731.65871.35130.8147
320.8698230.45450.71180.0146
20320.8479230.40060.66080.01991.68721.32630.7861
320.8696230.46360.70220.0168
21320.8850830.70410.79700.00251.74931.62080.9265
320.9037830.74580.82820.0019
22330.8899830.72200.79850.00211.87651.62530.8661
330.9067830.76270.83040.0015
Table A2. Results of Yellow River Estuary 1. 
No | Best Results: bs, c, kc (fmeas) | Worst Results: bs, c, kc (fmeas) | Mean: kc (fmeas) | Var.: kc (fmeas) | Avg. Time | U1 | U2
(In each entry, the first line gives kc values and the second line fmeas values.)
1330.415972−0.31180.00610.08561.50460.08080.0537
330.52938300.21740.0571
2520.8183830.30740.70490.02501.56391.41200.9029
520.8516830.39890.75190.0198
3520.8183830.31260.70800.02481.55191.41790.9137
520.8516830.40310.75440.0197
4520.7638230.42190.59400.01471.57251.21270.7712
720.8035230.47590.64660.0132
5320.8261630.24040.68210.04432.32931.33260.5721
320.8572630.33180.73040.0356
6320.8259630.24040.68240.04432.30991.33330.5772
320.8571630.33180.73080.0356
7320.8390830.28450.72800.03271.59731.44080.9020
320.8679830.37760.77140.0259
8520.8183830.30700.70510.02501.60221.42020.8864
520.8516830.39860.75210.0120
9520.7638230.42220.59270.01501.59871.20960.7566
720.8035230.47620.64530.0134
10520.7656830.40920.58920.01661.60741.20040.7468
720.8041230.48320.64240.0146
11520.7629630.40880.58310.01831.56541.18490.7569
720.8016630.47760.63600.0159
12520.7633230.43690.59570.01471.65211.21540.7357
720.8037230.49040.64760.0132
13520.7639830.40130.58980.01701.73131.20040.6934
720.8049830.47760.64250.0149
14520.7638230.42190.59310.01491.56111.21060.7755
720.8037230.47590.64570.0133
15520.7755630.37770.59260.02041.61211.20280.7461
520.8132630.45270.64780.0172
16520.7775630.38040.59440.02041.85771.20610.6492
520.8149530.45420.64930.0172
17520.7755630.37770.60140.01811.82541.22340.6702
520.8132630.45270.65550.0154
18520.7698530.43200.60670.01401.52991.23960.8102
720.8085530.49730.65920.0123
19520.7721530.43570.60890.01391.54281.24400.8063
520.8101530.50010.66120.0122
20520.7698630.38760.59790.01671.56191.21810.7799
720.8084630.46100.65140.0145
21520.7599830.30100.64010.02131.61631.30400.8068
720.8063830.39950.70130.0161
22720.7470220.46900.67620.00671.72521.40230.8128
720.7980220.59070.73650.0037
Table A3. Results of Yellow River Estuary 2. 
No | Best Results: bs, c, kc (fmeas) | Worst Results: bs, c, kc (fmeas) | Mean: kc (fmeas) | Var.: kc (fmeas) | Avg. Time | U1 | U2
(In each entry, the first line gives kc values and the second line fmeas values.)
1430.1755820.07530.11360.00142.08350.24170.1160
430.1906820.09400.13080.0013
223−0.017072−0.0211−0.0200 1.22 × 10 6 3.4239−0.0188−0.0055
230.00358720.00030.0012 8.32 × 10 7
322−0.018672−0.0211−0.0201 7 . 09 × 10 7 3.0496−0.0191−0.0063
220.00257720.00030.0010 4 . 79 × 10 7
4830.8027530.03080.43560.12243.45430.64180.1858
830.8047530.05000.44560.1170
5730.707382−0.02060.24780.10063.95850.31230.0789
730.71118200.26140.0963
6730.710082−0.02060.24880.10133.93990.31300.0794
730.71378200.26240.0969
7730.658832−0.02120.12210.07463.26260.11580.0355
730.66352300.13940.0711
823−0.016972−0.0211−0.0200 1.22 × 10 6 3.2725−0.0189−0.0058
230.0036720.00030.0011 8.30 × 10 7
9830.8031530.03620.43660.12163.11640.64540.2071
830.8050530.05530.44660.1162
10620.8063520.07700.48970.10473.14750.78340.2489
620.8083520.09580.49850.1001
11620.8145520.07860.52710.09422.68340.87780.3271
620.8164520.09730.53490.0900
12620.8058520.07420.44720.11442.85180.68030.2386
620.8078520.09310.45690.1094
13620.8146520.08430.53000.09223.08580.88750.2876
620.8165520.10280.53780.0881
14830.8027530.03280.43670.12162.76680.64560.2333
830.8047530.05200.44670.1162
15620.8097330.03400.38260.14182.95700.49940.1689
620.8117330.05290.39410.1355
16620.8109520.06040.48220.12843.09520.72250.2334
620.8129520.07970.49130.1226
17620.8097330.03260.38260.13183.3780.51900.1708
620.8117330.05160.39410.1259
18620.8099330.03680.43110.12932.79830.61960.2214
620.8119330.05550.44140.1236
19620.8113520.06520.48320.11472.88430.75130.2605
620.8133520.08430.49230.1095
20620.8099330.03680.43280.12842.94580.62490.2121
620.8119330.05550.44310.1226
21830.7953220.02650.09290.03803.10920.13010.0418
830.7976220.04680.11150.0363
22830.056323−0.01160.03370.00033.25980.08680.0266
830.0757230.00910.05370.0003
Table A4. Results of Yellow River Estuary 3. 
No | Best Results: bs, c, kc (fmeas) | Worst Results: bs, c, kc (fmeas) | Mean: kc (fmeas) | Var.: kc (fmeas) | Avg. Time | U1 | U2
(In each entry, the first line gives kc values and the second line fmeas values.)
1730.3469320.09260.21450.00852.62810.45670.1738
730.3790320.14730.25770.0070
2530.312462−0.01000.14430.02362.79410.28370.1015
530.3224620.05010.17960.0166
3530.311432−0.01050.14460.02372.77670.28400.1023
530.3213320.04970.17980.0167
4520.7622830.59460.68340.00232.80941.37130.4881
520.7697830.60480.69250.0023
5530.3163820.00180.15480.02304.06980.30410.0747
530.3256820.05990.18860.0163
6330.3190620.00140.15470.02314.05290.30380.0750
330.3292620.05960.18860.0164
7330.324532−0.00660.15240.02502.56470.29660.1156
330.3345320.05290.18700.0178
8530.312462−0.01040.14420.02362.65300.28350.1069
530.3224520.04980.17950.0166
9520.7618830.59460.68400.00232.60351.37240.5271
520.7693830.60480.69300.0023
10520.7658830.58310.68450.00272.67361.37260.5134
520.7731830.59330.69340.0026
11520.7635830.52940.65960.00572.54561.31690.5173
520.7708830.53980.66860.0056
12520.7638830.58790.68000.00282.75671.36340.4946
520.7712830.59810.68900.0028
13520.7681830.56010.67700.00373.02881.35560.4476
520.7754830.57040.68600.0037
14520.7618830.59460.68350.00242.61431.37130.5245
520.7693830.60480.69250.0023
15520.7686830.67310.71180.00082.63571.43100.5429
520.7763830.68290.72080.0008
16520.7729830.67430.71510.00082.64961.43750.5425
520.7804830.68400.72400.0008
17520.7686830.67280.71170.00082.63531.43080.5429
520.7763830.68260.72070.0008
18520.7658830.65380.70400.00102.44701.41500.5783
520.7733830.66380.71300.0010
19520.7689830.64590.70620.00112.45061.41910.5791
520.7763830.65570.71510.0011
20520.7658830.65380.70410.00102.51091.41520.5636
520.7733830.66380.71310.0010
21730.7148220.16120.48620.03422.60830.93390.3580
730.7255220.21030.51150.0296
22730.6692220.12640.40940.03862.61980.77770.2969
730.6825220.17860.44020.0333
Table A5. Results of Yellow River Estuary 4. 
No | Best Results: bs, c, kc (fmeas) | Worst Results: bs, c, kc (fmeas) | Mean: kc (fmeas) | Var.: kc (fmeas) | Avg. Time | U1 | U2
(In each entry, the first line gives kc values and the second line fmeas values.)
1630.6450220.23410.43960.02281.75550.88780.5057
630.6691220.30940.48840.0174
2630.8563820.65010.76110.00492.01761.52950.7581
630.8646820.67710.77740.0041
3630.8558820.65360.76600.00421.98451.54020.7761
630.8642820.68030.78190.0035
4620.8431730.54530.74420.00842.16601.48510.6856
620.8523730.56490.75710.0078
5630.8677820.62880.77450.00632.84951.55300.5450
630.8755820.65800.79000.0052
6630.8676820.62880.77440.00632.83521.55280.5477
630.8754820.65800.78990.0052
7330.8681820.64940.77410.00551.91471.55370.8115
330.8758820.67660.78970.0046
8630.8564820.65010.76100.00491.94161.52930.7876
630.8647820.67710.77730.0041
9620.8433730.54530.74420.00841.73491.48510.8560
620.8525730.56490.75710.0078
10620.8445730.54610.74530.00841.73551.48720.8569
620.8536730.56560.75810.0078
11620.8415730.55000.74550.00831.62891.48740.9131
620.8508730.56910.75800.0078
12620.8431830.54730.74440.00841.70011.48530.8737
620.8523730.56740.75720.0079
13620.8441730.54780.74930.00841.87931.49160.7937
620.8532730.56730.75860.0079
14620.8433730.54530.74380.00841.73511.48420.8554
620.8525730.56490.75670.0079
15620.8456730.52900.75280.00941.80701.50050.8304
620.8549730.55130.76580.0087
16620.8461830.53090.75390.00941.98941.50260.7553
620.8553730.55220.76680.0087
17620.8456730.52780.75270.00951.89931.50000.7898
620.8549730.54910.76560.0088
18620.8440830.53730.75040.00891.73541.49650.8623
620.8533730.55820.76330.0083
19620.8448730.53710.75100.00901.75271.49760.8545
620.8540730.55750.76390.0083
20620.8438830.53730.75030.00891.76411.49630.8482
620.8531730.55820.76320.0083
21630.8431220.34530.68580.01981.76821.35950.7689
630.8521220.40610.70940.0159
22630.8335220.24760.59380.03761.77171.15380.6512
630.8434220.32160.62760.0300
Table A6. Results of San Francisco. 
No | Best Results: bs, c, kc (fmeas) | Worst Results: bs, c, kc (fmeas) | Mean: kc (fmeas) | Var.: kc (fmeas) | Avg. Time | U1 | U2
(In each entry, the first line gives kc values and the second line fmeas values.)
1830.7321220.41880.55550.01451.12271.12781.0045
830.7530220.47970.59770.0109
2530.9054220.82260.86910.00051.21291.74721.4405
530.9122220.83680.87900.0004
3330.9067820.85660.88300.00021.19851.75021.4603
330.9133820.86760.86760.0002
4520.873832−0.06800.36970.14521.31660.49310.3745
520.8825320.00000.40170.1331
5230.9157820.85800.88600.00031.82901.78000.9732
230.9219820.86880.89460.0003
6330.9151820.85920.88580.00031.80411.77970.9865
330.9213820.87000.89440.0002
7330.9122820.85940.88620.00021.14131.78041.5600
330.9185820.87010.89460.0002
8530.9054220.82260.86900.00051.14141.74711.5307
530.9122220.83680.87900.0004
9520.8738730.19740.57550.10321.13820.98090.8618
520.8825730.24210.60080.0922
10520.8762730.19830.53510.10641.14160.89620.7850
520.8848730.24280.56260.0951
11320.8654830.23060.55470.09341.00690.95220.9457
320.8744730.26620.57680.0859
12520.8736730.21280.54930.09741.12110.93600.8349
520.8822730.25090.57290.0888
13520.8751730.21370.55160.09671.29360.94170.7280
520.8836730.25220.57500.0882
14520.8738330.19730.57510.10301.07040.98040.9159
520.8825730.24210.60040.0921
15730.8763830.17820.81200.03111.08741.57801.4512
730.8849830.22740.82470.0276
16520.8779730.16300.76260.05851.15731.43021.2358
520.8865730.21290.77810.0520
17730.8763830.17820.81210.03111.10421.57821.4293
730.8849830.22740.82480.0276
18520.8759530.17000.66360.09301.05931.17241.1068
520.8846530.22020.68460.0828
19520.8773430.17130.61850.10601.09631.06030.9672
520.8859730.22190.64210.0943
20520.8759530.17000.66360.09301.10991.17241.0563
520.8846530.22020.68460.0828
21730.885582−0.13610.35890.24421.11730.36510.3268
730.8937220.00340.43480.1844
22730.884082−0.13550.35610.24091.12230.36510.3253
730.8923220.00320.43200.1821
Table A7. Results of Bern. 
No | Best Results: bs, c, kc (fmeas) | Worst Results: bs, c, kc (fmeas) | Mean: kc (fmeas) | Var.: kc (fmeas) | Avg. Time | U1 | U2
(In each entry, the first line gives kc values and the second line fmeas values.)
1530.7359220.12040.49660.04151.85990.92220.4958
530.7398220.14120.50640.0393
2320.8292630.56640.70140.00761.92761.39060.7214
320.8312630.56970.70440.0076
3320.8287430.56950.70260.00761.85191.39300.7522
320.8306430.57280.70560.0076
4520.7763230.45590.63000.01231.95661.23850.6330
520.7788230.45910.63310.0123
5320.8398830.57630.72560.00602.72811.44210.5286
320.8417830.57970.72850.0060
6320.8398830.57710.72500.00612.65681.44090.5423
320.8417830.58050.72800.0060
7320.8509830.59360.73330.00651.59511.45660.9132
320.8527830.59700.73620.0064
8320.8292630.56560.70180.00761.86001.39140.7481
320.8312630.56880.70480.0076
9520.7763230.45590.63010.01241.69011.23860.7329
520.7788230.45910.63320.0123
10520.7771230.46500.63170.01201.74891.24250.7104
520.7796230.46830.63480.0120
11720.7669230.53620.63860.00721.48031.26590.8552
720.7701230.53940.64170.0072
12320.7732230.44870.62860.01241.52981.23570.8078
320.7755230.45190.63180.0123
13520.7765230.46570.63300.01171.67981.24580.7416
520.7790230.46900.63610.0116
14520.7763230.45590.63000.01241.52571.23840.8117
520.7788230.45910.63310.0123
15320.8140230.51520.65570.01051.59261.29350.8122
320.8161230.51850.65880.0105
16320.8186230.52740.65850.01041.61961.29930.8022
320.8206230.53060.66160.0104
17320.8140230.51520.65590.01051.61651.29400.8005
320.8161230.51850.65900.0104
18520.7905230.48900.64440.01101.61521.26990.7862
520.7929230.49220.64750.0110
19320.7940230.50860.64760.01071.64121.27700.7781
320.7962230.51180.65070.0106
20520.7905230.48900.64440.01101.65681.26990.7665
520.7929230.49220.64750.0110
21330.8034220.12680.60690.03941.86681.14430.6130
330.8062220.14730.61410.0373
22530.7615220.08640.47570.05662.09520.85180.4065
530.7650220.10840.48630.0536

Appendix B

Image pairs with ground truth and best result images are illustrated in Figure A1. Since we made the images equal in size, some of them appear scaled relative to their original versions.
Figure A1. First two columns display SAR image pairs, third column shows ground truth change map, and fourth column illustrates predicted best change map results.
The Google Maps image and its vegetation index map are illustrated in Figure A2.
Figure A2. Example vegetation index result.

References

  1. Radke, R.J.; Andra, S.; Al-Kofahi, O.; Roysam, B. Image Change Detection Algorithms: A Systematic Survey. IEEE Trans. Image Process. 2005, 14, 294–307. [Google Scholar] [CrossRef]
  2. Salah, H.S.; Goldin, S.E.; Rezgui, A.; Nour El Islam, B.; Ait-Aoudia, S. What is a remote sensing change detection technique? Towards a conceptual framework. Int. J. Remote Sens. 2020, 41, 1788–1812. [Google Scholar] [CrossRef]
  3. Vasegaard, A.; Picard, M.; Hennart, F.; Nielsen, P.; Saha, S. Multi criteria decision making for the multi-satellite image acquisition scheduling problem. Sensors 2020, 20, 1242. [Google Scholar] [CrossRef] [Green Version]
  4. Vasegaard, A.; Moon, I.; Nielsen, P.; Saha, S. Determining the pricing strategy for different preference structures for the earth observation satellite scheduling problem through simulation and VIKOR. Flex. Serv. Manuf. J. 2022, 1–29. [Google Scholar] [CrossRef]
  5. Pedersen, C.; Nielsen, K.; Rosenkrands, K.; Vasegaard, A.; Nielsen, P.; El Yafrani, M. A grasp-based approach for planning uav-assisted search and rescue missions. Sensors 2022, 22, 275. [Google Scholar] [CrossRef] [PubMed]
  6. Danancier, K.; Ruvio, D.; Sung, I.; Nielsen, P. Comparison of path planning algorithms for an unmanned aerial vehicle deployment under threats. IFAC-PapersOnLine 2019, 52, 1978–1983.
  7. Palm, B.G.; Alves, D.I.; Pettersson, M.I.; Vu, V.T.; Machado, R.; Cintra, R.J.; Bayer, F.M.; Dammert, P.; Hellsten, H. Wavelength-Resolution SAR Ground Scene Prediction Based on Image Stack. Sensors 2020, 20, 2008.
  8. Wang, Z.; Wang, Y.; Wang, B.; Xiang, M.; Wang, R.; Xu, W.; Song, C. Multi-Frequency Interferometric Coherence Characteristics Analysis of Typical Objects for Coherent Change Detection. Remote Sens. 2022, 14, 1689.
  9. Bovenga, F. Special Issue “Synthetic Aperture Radar (SAR) Techniques and Applications”. Sensors 2020, 20, 1851.
  10. Zhang, H.; Ni, W.; Yan, W.; Bian, H.; Wu, J. Fast SAR Image Change Detection Using Bayesian Approach Based Difference Image and Modified Statistical Region Merging. Sci. World J. 2014, 2014, 862875.
  11. Kang, M.; Baek, J. SAR Image Change Detection via Multiple-Window Processing with Structural Similarity. Sensors 2021, 21, 6645.
  12. Jia, M.; Zhao, Z. Change Detection in Synthetic Aperture Radar Images Based on a Generalized Gamma Deep Belief Networks. Sensors 2021, 21, 8290.
  13. Gao, F.; Liu, X.; Dong, J.; Zhong, G.; Jian, M. Change Detection in SAR Images Based on Deep Semi-NMF and SVD Networks. Remote Sens. 2017, 9, 435.
  14. Yang, G.; Ye, Q.; Xia, J. Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond. Inf. Fusion 2022, 77, 29–52.
  15. Alicioglu, G.; Sun, B. A survey of visual analytics for Explainable Artificial Intelligence methods. Comput. Graph. 2022, 102, 502–520.
  16. Rojat, T.; Puget, R.; Filliat, D.; Del Ser, J.; Gelin, R.; Díaz-Rodríguez, N. Explainable Artificial Intelligence (XAI) on TimeSeries Data: A Survey. arXiv 2021, arXiv:2104.00950.
  17. Li, B.; Qi, P.; Liu, B.; Di, S.; Liu, J.; Pei, J.; Yi, J.; Zhou, B. Trustworthy AI: From Principles to Practices. arXiv 2021, arXiv:2110.01167.
  18. Celik, T. Unsupervised change detection in satellite images using principal component analysis and k-means clustering. IEEE Geosci. Remote Sens. Lett. 2009, 6, 772–776.
  19. Celik, T. Bayesian change detection based on spatial sampling and Gaussian mixture model. Pattern Recognit. Lett. 2011, 32, 1635–1642.
  20. Li, H.C.; Celik, T.; Longbotham, N.; Emery, W.J. Gabor Feature Based Unsupervised Change Detection of Multitemporal SAR Images Based on Two-Level Clustering. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2458–2462.
  21. Wang, J.; Gao, F.; Dong, J.; Zhang, S.; Du, Q. Change Detection From Synthetic Aperture Radar Images via Graph-Based Knowledge Supplement Network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 1823–1836.
  22. Li, L.; Ma, H.; Jia, Z. Change Detection from SAR Images Based on Convolutional Neural Networks Guided by Saliency Enhancement. Remote Sens. 2021, 13, 3697.
  23. Painam, R.K.; Manikandan, S. A comprehensive review of SAR image filtering techniques: Systematic survey and future directions. Arab. J. Geosci. 2021, 14, 37.
  24. Qiu, F.; Berglund, J.; Jensen, J.R.; Thakkar, P.; Ren, D. Speckle Noise Reduction in SAR Imagery Using a Local Adaptive Median Filter. GIScience Remote Sens. 2004, 41, 244–266.
  25. Choi, H.; Jeong, J. Speckle Noise Reduction Technique for SAR Images Using Statistical Characteristics of Speckle Noise and Discrete Wavelet Transform. Remote Sens. 2019, 11, 1184.
  26. Zhao, R.; Peng, G.H.; Yan, W.D.; Pan, L.L.; Wang, L.Y. Change detection in SAR images based on superpixel segmentation and image regression. Earth Sci. Inform. 2021, 14, 69–79.
  27. Ilsever, M.; Ünsalan, C. Two-Dimensional Change Detection Methods; Springer: London, UK, 2012.
  28. Anfinsen, S.N.; Doulgeris, A.P.; Eltoft, T. Estimation of the Equivalent Number of Looks in Polarimetric SAR Imagery. In Proceedings of the IGARSS 2008 - 2008 IEEE International Geoscience and Remote Sensing Symposium, Boston, MA, USA, 6–11 July 2008; Volume 4, pp. 487–490.
  29. Oliver, C.; Quegan, S. Understanding Synthetic Aperture Radar Images; SciTech Publishing: Southampton, UK, 2004.
  30. Zhuang, H.; Tan, Z.; Deng, K.; Fan, H. It is a misunderstanding that log ratio outperforms ratio in change detection of SAR images. Eur. J. Remote Sens. 2019, 52, 484–492.
  31. Buades, A.; Coll, B.; Morel, J.M. Non-Local Means Denoising. Image Process. Line 2011, 1, 208–212.
  32. He, K.; Sun, J.; Tang, X. Guided Image Filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409.
  33. He, K.; Sun, J. Fast Guided Filter. arXiv 2015, arXiv:1505.00996.
  34. Immerkær, J. Fast Noise Variance Estimation. Comput. Vis. Image Underst. 1996, 64, 300–302.
  35. Gong, M.; Zhao, J.; Liu, J.; Miao, Q.; Jiao, L. Change Detection in Synthetic Aperture Radar Images Based on Deep Neural Networks. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 125–138.
  36. Wang, X.; Jia, Z.; Yang, J.; Kasabov, N. Change detection in SAR images based on the logarithmic transformation and total variation denoising method. Remote Sens. Lett. 2017, 8, 214–223.
  37. Song, Y.; Cai, X.; Zhou, X.; Zhang, B.; Chen, H.; Li, Y.; Deng, W.; Deng, W. Dynamic hybrid mechanism-based differential evolution algorithm and its application. Expert Syst. Appl. 2023, 213, 118834.
  38. Ren, Z.; Han, X.; Yu, X.; Skjetne, R.; Leira, B.J.; Sævik, S.; Zhu, M. Data-driven simultaneous identification of the 6DOF dynamic model and wave load for a ship in waves. Mech. Syst. Signal Process. 2023, 184, 109422.
  39. Barbosa, B.D.S.; Ferraz, G.A.S.; Gonçalves, L.M.; Marin, D.B.; Maciel, D.T.; Ferraz, P.F.P.; Rossi, G. RGB vegetation indices applied to grass monitoring: A qualitative analysis. Agron. Res. 2019, 17, 349–357.
  40. Chen, H.; Miao, F.; Chen, Y.; Xiong, Y.; Chen, T. A Hyperspectral Image Classification Method Using Multifeature Vectors and Optimized KELM. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 2781–2795.
Figure 1. Unsupervised Change Detection Algorithm Proposed by Celik [18].
Figure 2. Image pairs and their histograms.
Figure 3. Image pairs and their Radon transforms.
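Figure 1 outlines the PCAKM baseline of Celik [18]: a difference image is partitioned into non-overlapping blocks, PCA of the vectorized blocks yields a feature space, every pixel's neighborhood is projected into that space, and k-means with two clusters labels each pixel as changed or unchanged. The sketch below follows those steps; the block size and number of retained components are illustrative assumptions, not the settings used in this study.

```python
# Minimal sketch of the PCAKM pipeline of Figure 1 (after Celik [18]).
# Block size h and n_components are illustrative choices, not values from this paper.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def pcakm_change_map(img1, img2, h=4, n_components=3):
    """Return a binary change map (1 = changed) for two co-registered images."""
    # 1) Difference image (absolute difference; the paper compares several operators).
    diff = np.abs(img1.astype(np.float64) - img2.astype(np.float64))

    # 2) Non-overlapping h x h blocks of the difference image form the PCA training set.
    H, W = diff.shape
    Hc, Wc = H - H % h, W - W % h
    blocks = (diff[:Hc, :Wc]
              .reshape(Hc // h, h, Wc // h, h)
              .swapaxes(1, 2)
              .reshape(-1, h * h))
    pca = PCA(n_components=n_components).fit(blocks)

    # 3) Each pixel's h x h neighborhood is projected onto the leading eigenvectors
    #    (plain Python loop; unoptimized on purpose to keep the steps visible).
    pad = h // 2
    padded = np.pad(diff, pad, mode="reflect")
    feats = np.empty((H * W, h * h))
    idx = 0
    for i in range(H):
        for j in range(W):
            feats[idx] = padded[i:i + h, j:j + h].ravel()
            idx += 1
    feats = pca.transform(feats)

    # 4) k-means with k = 2 splits the feature vectors into two clusters.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
    labels = labels.reshape(H, W)

    # 5) The cluster with the larger mean difference value is taken as "changed".
    changed = int(diff[labels == 1].mean() > diff[labels == 0].mean())
    return (labels == changed).astype(np.uint8)
```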
Table 1. Data Information.
Scene | Image 1 Date | Image 2 Date | Satellite | Resolution
Ottawa (Canada) [13] | May 1997 | August 1997 | RADARSAT | 290 × 350 pixels
Yellow River Estuary 1 (China) [35] | June 2008 | June 2009 | RADARSAT-2 | 257 × 289 pixels
Yellow River Estuary 2 (China) [35] | June 2008 | June 2009 | RADARSAT-2 | 450 × 280 pixels
Yellow River Estuary 3 (China) [35] | June 2008 | June 2009 | RADARSAT-2 | 291 × 444 pixels
Yellow River Estuary 4 (China) [35] | June 2008 | June 2009 | RADARSAT-2 | 306 × 291 pixels
San Francisco (USA) [13] | August 2003 | May 2004 | ERS-2 | 256 × 256 pixels
Bern (Switzerland) [36] | April 1999 | May 1999 | ERS-2 | 301 × 301 pixels
Table 2. Noise Variance Values.
Scene | Image 1 | Image 2
Ottawa | 11.5067 | 9.0350
Yellow River Estuary 1 | 18.6691 | 37.6355
Yellow River Estuary 2 | 6.0765 | 12.5834
Yellow River Estuary 3 | 9.9969 | 26.2616
Yellow River Estuary 4 | 15.5373 | 32.9121
San Francisco | 2.6013 | 2.8143
Bern | 8.2991 | 7.0199
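The per-image noise levels in Table 2 can be computed with a fast single-image estimator; the sketch below implements the operator of Immerkær [34], which is cited in this work. Whether the tabulated values correspond exactly to this estimator's output, and whether they are standard deviations or their squares, is an assumption here.

```python
# Sketch of Immerkaer's fast noise estimator [34] for a single grayscale image.
import numpy as np
from scipy.signal import convolve2d

def estimate_noise_sigma(img):
    """Fast noise-level estimate (Immerkaer, 1996); returns the noise standard deviation."""
    img = img.astype(np.float64)
    H, W = img.shape
    # Difference-of-Laplacians mask that suppresses image structure but keeps noise.
    mask = np.array([[ 1, -2,  1],
                     [-2,  4, -2],
                     [ 1, -2,  1]], dtype=np.float64)
    conv = convolve2d(img, mask, mode="valid")
    return np.sqrt(np.pi / 2.0) * np.abs(conv).sum() / (6.0 * (W - 2) * (H - 2))
```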
Table 3. Configurations.
No | Configuration before PCAKM
1 | D1
2 | D2
3 | D2 + TSVD(var = 0.9)
4 | GF(r = bs, ϵ = 0) + D2
5 | NLMD(h = 20, tws = 7, sws = 21) + D2
6 | NLMD(h = 20, tws = 7, sws = 21) + GF(r = bs, ϵ = 0) + D2
7 | BF(dws = 10) + D2
8 | D3
9 | D4
10 | GF(r = bs, ϵ = 0.0001) + D4
11 | D4 + TSVD(var = 0.8)
12 | D4 + TSVD(var = 0.9)
13 | GF(r = bs, ϵ = 0.0001) + D4 + TSVD(var = 0.9)
14 | D5
15 | D6
16 | GF(r = bs, ϵ = 0.0001) + D6
17 | D7
18 | D8
19 | GF(r = bs, ϵ = 0.0001) + D8
20 | D9
21 | D10
22 | D11
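Each configuration in Table 3 chains optional filtering (GF, NLMD, BF), a difference-image operator D1–D11, and optional truncated SVD (TSVD) before the PCAKM stage. Purely as an illustration, the sketch below assembles configuration 5, assuming NLMD denotes non-local means denoising [31] with the listed parameters and using a log-ratio operator as a hypothetical stand-in for D2; the actual D operators are defined in the body of the paper.

```python
# Illustrative sketch of configuration 5 from Table 3: NLMD(h=20, tws=7, sws=21) + D2.
# The log-ratio operator below is a hypothetical stand-in for D2, not the paper's definition.
import cv2
import numpy as np

def config5_difference_image(img1_u8, img2_u8):
    """Denoise both 8-bit SAR amplitude images, then build a difference image."""
    # Non-local means denoising: h=20, templateWindowSize=7, searchWindowSize=21.
    den1 = cv2.fastNlMeansDenoising(img1_u8, None, 20, 7, 21)
    den2 = cv2.fastNlMeansDenoising(img2_u8, None, 20, 7, 21)

    # Log-ratio difference image (illustrative choice); +1 avoids log(0).
    d1 = den1.astype(np.float64) + 1.0
    d2 = den2.astype(np.float64) + 1.0
    diff = np.abs(np.log(d2 / d1))

    # Rescale to [0, 255] before feeding the PCAKM stage.
    return cv2.normalize(diff, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```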
Table 4. Confusion Matrix.
 | Calculated change map: Positive (changed) | Calculated change map: Negative (unchanged)
Ground truth image: Positive (changed) | True positive | False negative (Type II error)
Ground truth image: Negative (unchanged) | False positive (Type I error) | True negative
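The error metrics reported later are derived from the counts in Table 4. The sketch below computes the four counts and two summaries commonly used for change maps (percentage correct classification and the kappa coefficient); whether these are exactly the metrics behind Table 6 is an assumption.

```python
# Sketch of the confusion-matrix counts of Table 4 and two derived summaries.
import numpy as np

def confusion_counts(change_map, ground_truth):
    """Both inputs are binary arrays with 1 = changed, 0 = unchanged."""
    cm, gt = change_map.astype(bool), ground_truth.astype(bool)
    tp = np.sum(cm & gt)      # changed pixels detected as changed
    fn = np.sum(~cm & gt)     # missed changes (Type II error)
    fp = np.sum(cm & ~gt)     # false alarms (Type I error)
    tn = np.sum(~cm & ~gt)    # unchanged pixels detected as unchanged
    return tp, fn, fp, tn

def pcc_and_kappa(tp, fn, fp, tn):
    """Percentage correct classification and kappa coefficient from the four counts."""
    n = tp + fn + fp + tn
    pcc = (tp + tn) / n
    # Chance agreement for the changed and unchanged classes.
    pe = ((tp + fn) * (tp + fp) + (tn + fp) * (tn + fn)) / n**2
    kappa = (pcc - pe) / (1.0 - pe)
    return pcc, kappa
```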
Table 5. Ranking of Utility Values for Image Pairs (configuration numbers from Table 3).
Image pair | U1 | U2
Ottawa | 22,21,1,12,11,3,7,13,10,16,19,8,2,18,20,15,17,14,4,9,5,6 | 21,22,1,18,19,11,12,20,14,3,15,7,2,10,8,13,17,4,16,9,5,6
Yellow River Estuary 1 | 7,8,3,2,22,6,5,21,19,18,17,20,12,4,14,9,16,15,13,10,11,1 | 3,2,7,8,22,18,21,19,20,14,4,11,9,10,15,12,13,17,16,6,5,1
Yellow River Estuary 2 | 13,11,10,19,16,12,14,9,4,20,18,17,15,6,5,1,21,7,22,2,8,3 | 11,13,19,10,12,16,14,18,20,9,4,17,15,1,6,5,21,7,22,2,8,3
Yellow River Estuary 3 | 16,15,17,19,20,18,10,9,4,14,12,13,11,21,22,1,5,6,7,3,2,8 | 19,18,20,17,15,16,9,14,11,10,12,4,13,21,22,1,7,8,3,2,6,5
Yellow River Estuary 4 | 7,5,6,3,2,8,16,15,17,19,18,20,13,11,10,12,4,9,14,21,22,1 | 11,12,18,10,9,14,19,20,15,7,13,17,8,3,21,2,16,4,22,6,5,1
San Francisco | 7,5,6,3,2,8,17,15,16,18,20,1,19,9,14,11,13,12,10,4,21,22 | 7,8,3,15,2,17,16,18,20,1,6,5,19,11,14,9,12,10,13,4,21,22
Bern | 7,5,6,3,8,2,16,17,15,19,18,20,11,13,10,9,4,14,12,21,1,22 | 7,11,15,14,12,16,17,18,19,20,3,8,13,9,2,10,4,21,6,5,1,22
Table 6. Utility Values Based on Overall Accuracy Results and Average Computing Times for Each Configuration.
No | kc (mean) | kc (variance) | fmeas (mean) | fmeas (variance) | Avg. time | U3 | U4
1 | 0.3599 | 0.0251 | 0.4197 | 0.0191 | 1.8071 | 0.7354 | 0.4070
2 | 0.5478 | 0.0133 | 0.5727 | 0.0106 | 2.1058 | 1.0966 | 0.5207
3 | 0.5536 | 0.0122 | 0.5779 | 0.0097 | 2.0302 | 1.1096 | 0.5465
4 | 0.5875 | 0.0461 | 0.6105 | 0.0429 | 2.1570 | 1.1091 | 0.5142
5 | 0.5981 | 0.0317 | 0.6128 | 0.0277 | 2.9567 | 1.1515 | 0.3895
6 | 0.5767 | 0.0364 | 0.6022 | 0.0310 | 2.9255 | 1.1115 | 0.3800
7 | 0.5839 | 0.0245 | 0.6071 | 0.0212 | 1.9838 | 1.1453 | 0.5773
8 | 0.5479 | 0.0132 | 0.5728 | 0.0095 | 2.0407 | 1.0980 | 0.5380
9 | 0.6155 | 0.0404 | 0.6377 | 0.0373 | 1.9563 | 1.1755 | 0.6009
10 | 0.6207 | 0.0378 | 0.6428 | 0.0348 | 1.9834 | 1.1909 | 0.6004
11 | 0.6283 | 0.0337 | 0.6491 | 0.0314 | 1.8073 | 1.2123 | 0.6708
12 | 0.6193 | 0.0369 | 0.6404 | 0.0345 | 1.9096 | 1.1884 | 0.6223
13 | 0.6294 | 0.0346 | 0.6505 | 0.0321 | 2.0829 | 1.2132 | 0.5825
14 | 0.6168 | 0.0400 | 0.6389 | 0.0369 | 1.8512 | 1.1788 | 0.6368
15 | 0.6529 | 0.0338 | 0.6737 | 0.0313 | 1.9178 | 1.2616 | 0.6578
16 | 0.6630 | 0.0353 | 0.6837 | 0.0326 | 2.0361 | 1.2788 | 0.6281
17 | 0.6541 | 0.0320 | 0.6748 | 0.0297 | 1.9922 | 1.2672 | 0.6361
18 | 0.6373 | 0.0396 | 0.6588 | 0.0365 | 1.8300 | 1.2200 | 0.6666
19 | 0.6410 | 0.0390 | 0.6624 | 0.0358 | 1.8609 | 1.2286 | 0.6602
20 | 0.6363 | 0.0398 | 0.6579 | 0.0367 | 1.8909 | 1.2176 | 0.6439
21 | 0.5240 | 0.0571 | 0.5587 | 0.0459 | 1.9765 | 0.9797 | 0.4957
22 | 0.4777 | 0.0547 | 0.5152 | 0.0435 | 2.0672 | 0.8947 | 0.4328
Table 7. Ranking of Utility Values for Overall Results (configuration numbers from Table 3).
 | U3 | U4
Overall results | 16,17,15,19,18,20,13,11,10,12,14,9,5,7,6,3,4,8,2,21,22,1 | 11,18,19,15,20,14,17,16,12,9,10,13,7,3,8,2,4,21,22,1,5,6
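Tables 5–7 rank the 22 configurations by the utility values U1–U4, which combine accuracy statistics and computing times. The exact utility functions are given in the main text; the sketch below only illustrates how such a ranking can be produced from a generic utility that rewards mean accuracy and penalizes variance and average runtime, with a functional form and weights chosen arbitrarily for illustration.

```python
# Generic, illustrative utility-based ranking; NOT the paper's U1-U4 definitions.
import numpy as np

def rank_configurations(mean_acc, var_acc, avg_time, w_var=1.0, w_time=0.1):
    """Return configuration numbers (1-based) sorted from best to worst utility."""
    mean_acc, var_acc, avg_time = map(np.asarray, (mean_acc, var_acc, avg_time))
    # Reward accuracy, penalize variance (sensitivity) and computing time.
    utility = mean_acc - w_var * var_acc - w_time * avg_time
    order = np.argsort(-utility)          # indices in descending utility
    return (order + 1).tolist(), utility

# Usage with three hypothetical configurations:
ranking, u = rank_configurations([0.36, 0.55, 0.63], [0.025, 0.013, 0.034], [1.81, 2.11, 1.81])
```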