Article

Building Change Detection Based on a Gray-Level Co-Occurrence Matrix and Artificial Neural Networks

by Marianna Christaki, Christos Vasilakos *, Ermioni-Eirini Papadopoulou, Georgios Tataris, Ilias Siarkos and Nikolaos Soulakellis
Department of Geography, University of the Aegean, 81100 Mytilene, Greece
* Author to whom correspondence should be addressed.
Drones 2022, 6(12), 414; https://doi.org/10.3390/drones6120414
Submission received: 31 October 2022 / Revised: 5 December 2022 / Accepted: 13 December 2022 / Published: 15 December 2022

Abstract:
The recovery phase following an earthquake is essential for urban areas with a significant number of damaged buildings. Many changes can take place within the buildings’ footprints in such a landscape, such as total or partial collapses, debris removal and reconstruction. Remote sensing data and methodologies can contribute considerably to site monitoring. The main objective of this paper is the change detection of the building stock in the settlement of Vrissa on Lesvos Island during the recovery phase after the catastrophic earthquake of 12 June 2017, through the analysis and processing of UAV (unmanned aerial vehicle) images and the application of Artificial Neural Networks (ANNs). More specifically, change detection of the settlement’s building stock was performed by applying an ANN to Gray-Level Co-occurrence Matrix (GLCM) texture features of orthophotomaps acquired by UAVs. For the training of the ANN, a number of GLCM texture features were defined as the independent variables, while the existence or not of structural changes in the buildings was defined as the dependent variable, assigned the value 1 or 0, respectively (binary classification). The ANN was trained with the Levenberg–Marquardt algorithm, and its ability to detect changes was evaluated on the basis of the buildings’ condition, as derived from the binary classification. In conclusion, GLCM texture feature changes in conjunction with an ANN can provide satisfactory results in predicting the structural changes of buildings, with an accuracy of almost 92%.

1. Introduction

Within a disaster management scheme, the recovery phase aims to restore the landscape to its pre-destruction level and reduce vulnerability in the future. Following a catastrophic earthquake event, the progression of the recovery phase depends on the decision makers’ characterization of the disaster-hit areas, which are generally divided into three types: collective resettlement areas, original site recovery areas and non-disaster areas [1]. The usefulness of geospatial data acquired through remote sensing technology has been extensively demonstrated in recent years. By incorporating both spaceborne and airborne techniques, remote sensing has nowadays evolved into an extremely useful tool for gathering, processing and analyzing geospatial data with the aim of extracting valuable information for proper decision making in various scientific fields [2,3,4]. Change detection is among the fields where remote sensing is frequently applied. Due to the unique characteristics of remote sensing data, such as their high temporal frequency, digital format and the availability of several sensors with a variety of spatial and spectral resolutions, they can be used efficiently to detect changes in numerous applications, such as land use management and planning, urban expansion and planning, and disaster monitoring and management [5,6,7,8,9].
With the continuous progress of remote sensing technology, change detection can be performed through various platforms, such as satellites, manned aircraft and Unmanned Aerial Vehicles (UAVs). In particular, the use of UAVs is constantly increasing due to their flexibility, low cost and real-time monitoring, as well as their capability to observe objects from multiple angles [10]. Based on UAV data, change detection algorithms can be applied either to 3D derived data, i.e., point clouds [11,12], or to 2D orthoimages [13]. Furthermore, various methods and techniques based on the analysis of remote sensing images have been developed to detect changes. Nowadays, machine learning plays an important role in improving the efficiency of change detection and monitoring because of its high capability for automatic feature learning and visual pattern recognition [14,15]. Machine learning techniques, such as Support Vector Machines (SVMs) and Artificial Neural Networks (ANNs), are widely used in change detection, especially in cases where conventional change detection methods such as post-classification comparison are often problematic, e.g., in urban change detection [16,17]. The use of ANNs has received increasing attention, and they have been applied to different change detection tasks such as vegetation change [18,19], land cover change [20] and urban change [17,21,22]. These studies suggest that the ANN method can improve the accuracy of change detection compared to other techniques.
Another approach for remote sensing change detection is the extraction of a large number of image features, with the goal of improving the discriminative capability of image change information; texture is included among these features [23,24]. Texture is one of the key structural characteristics of an image, used to identify objects or regions of interest, and it is also in line with human visual perception [25,26]. Texture extraction is critical, as it serves as an input for further advanced processing and has a significant impact on the quality of the extracted information [27]; therefore, numerous studies on texture extraction from remote sensing data have been conducted (e.g., [27,28,29,30]). Moreover, texture-related research remains a hotspot in computer vision and image processing and is continuously evolving [31]. There are various methods of texture analysis that involve both the color and the arrangement of pixels in an image, e.g., the GLCM (Gray-Level Co-occurrence Matrix), the fractal model, and Fourier and wavelet transformations. Furthermore, different classification schemes for these methods have been developed, including statistical, model-based and geometrical/structural methods. Among all the texture analysis methods, the GLCM method has proven to be one of the most useful and powerful, since it is simple, easily implemented and provides good results [27,32,33,34]. Furthermore, the GLCM has shown its efficiency in previous comparison studies [35,36]. GLCM features (e.g., contrast, correlation, energy, entropy, homogeneity) are calculated based on the occurrence of pairs of gray-level pixels in an image along predefined directions [10,34,37], and they have been used for different remote sensing applications such as land use classification [38], built-up area extraction [39], damaged building detection [29,40,41] and high-resolution satellite image analysis [30].
Another type of texture feature extraction method, developed in the last decade, is the learning-based approach known as Convolutional Neural Networks (CNNs). CNNs are quite powerful in classification and segmentation problems, have also been applied in remote sensing, and have shown their superiority in various studies. Among other applications in remote sensing, CNNs have been applied in urban areas for building footprint extraction and change detection [42,43,44,45,46].
In the case of disaster monitoring and management, remote sensing has proven to be an essential and efficient tool for damage assessment and the detection of building changes after a catastrophic event such as an earthquake [34]. Not only are observations of the damage level and its spatial distribution after a destructive earthquake of paramount importance for planning the first rescue activities and understanding the effect of seismic activity on buildings, but the monitoring of the affected area over time is also essential for studying urban development, such as building collapse and/or construction, after such a catastrophic event [14,23,47,48]. In this context, in [49], the authors monitored building damage after the Haiti earthquake through satellite stereo imagery and DSMs, while using texture features to optimize accuracy. In [23], the recovery of the Italian city of L’Aquila following the 2009 earthquake was monitored by using spectral, textural and geometric features to determine changes in buildings, thus allowing a reduction in the extensive fieldwork required. The Haiti 2010 earthquake was also studied in [50], where the authors mapped the damaged buildings by developing an ANN model for classifying the affected area into two classes of changed and unchanged areas, using textural features calculated with the GLCM from pre- and post-event images as the input vector. The ability of the proposed method to detect building changes was demonstrated by the reported overall accuracy of 93%. In the same context, the authors in [40] used high-resolution satellite images obtained before and after the 2010 Haiti earthquake to extract textural features by applying the GLCM method. The results showed that dissimilarity was identified as the better classifier in detecting the collapsed buildings and that about 70% of them were correctly identified.
In other research, the authors detected collapsed buildings using three different textural features derived from the GLCM and applying an SVM classifier [51]. The SVM, together with a synergy of high-resolution optical and Synthetic Aperture Radar (SAR) images, was also used for the detection of collapsed buildings after the 2011 Japan earthquake [52]. In [41], the authors detected collapsed buildings after the 2017 Iran–Iraq earthquake using ten spectral indices in combination with seven different textural features derived from the GLCM and applying an SVM classifier. Finally, CNNs have also been applied to building change detection following a catastrophic event such as an earthquake. This task is usually difficult due to complex data with a heterogeneous appearance and large variations within classes. When the intra-class variation is low (e.g., [45]), the change to be detected is limited to building/no building; therefore, building extraction and change detection can be approached more easily. However, the building damage from an earthquake event includes multiple sub-classes, e.g., collapse and partial collapse. For example, in [53], the detection was based on a pre- and a post-event image; the overall accuracy was limited to 76.85% and the F1 score to 0.761. Projecting this method onto a more complex problem, i.e., multitemporal change detection within a recovery phase where collapses, partial collapses, debris removal and new construction can take place, training a CNN would be quite challenging.
The aim of the present study is to present a methodology and the results for multitemporal building change detection after a disaster such as an earthquake. In relation to other studies, the applied methodology aims to monitor the changes that occurred in multiple time steps within the buildings’ footprints during the recovery after a catastrophic earthquake for an entire settlement at the building level. More specifically, the proposed method detects changes at the building sites, such as significant changes to damaged buildings, i.e., partial collapses due to weather conditions and aftershocks, the removal of debris and the construction of new buildings, thus monitoring the development and rehabilitation of the settlement after the disaster. The identification of these changes was based on GLCM features and ANN modelling. Even though various studies have been conducted on this topic, the multitemporal monitoring of a post-earthquake site based on GLCM features and an ANN has, to the best of our knowledge, not yet been fully examined. Therefore, taking a step beyond previous studies, herein, the multitemporal monitoring of an entire settlement at the building level using remote sensing data was performed in order to fully identify the building changes in the whole study area, as well as to spatially localize them during the various time steps.

2. Materials and Methods

2.1. Study Area

Lesvos Island, located in the North Aegean region, is one of the most seismically active areas in Greece (Figure 1). Lesvos has been thoroughly studied by geologists and geoscientists due to the rich tectonic and geomorphological background of the broader area in which the island is situated. More specifically, Lesvos is located on the Aegean microplate, which, according to [54], is subject to two different stress regimes. On the one hand, the island is subject to a NE–SW stress from the Anatolian fault, while, on the other hand, it is subject to a N–S stress due to the retreat of the subducting Mediterranean plate from the Aegean plate.
The seismic activity on the island has caused both material and environmental damage, as well as human losses, due to which Lesvos is ranked as the second greatest seismic hazard zone in Greece [54]. On 12 June 2017 (12:28 GMT), the settlement of Vrissa, situated in the southern part of Lesvos, underwent a catastrophic earthquake of magnitude 6.3, which caused one death and, at the same time, destroyed about half of the village [55]. The earthquake epicenter was 35 km away from the island’s capital, Mytilene. Even though the epicenter was not located close to Vrissa, the settlement suffered serious damage from the earthquake event, especially its western part. This can be mainly attributed to three factors: the direction of the earthquake, the type of soil and the types of buildings in the settlement.

2.2. Data Collection

In general, two main types of GIS data were used in the present study: (a) 12 orthophotomaps, all with a resolution of 0.2 m, acquired by a UAV within a 52-month period; and (b) the buildings’ footprints, which were used for the computation of the GLCM texture features and the compilation of the final database.

2.3. General Methodological Framework

2.3.1. Photo Interpretation and Building Classification

The main objective of the photo interpretation of the 12 UAV orthophotomaps was to examine the buildings’ status in the Vrissa settlement so that the buildings could be classified into change/no change (1 or 0) for all time intervals, depending on whether changes were observed or not; the classification results were then used as the desired output for the ANN model. During this task, a visual comparison of the orthophotomaps taken at sequential time steps was performed, resulting in observations of the buildings’ state for 11 time intervals. The aim of the whole process was to determine whether the buildings in the study area underwent any changes during each time interval. At this point, it should be noted that only changes in the buildings themselves, and not in other factors such as their exterior surroundings, were recorded as changes. These changes include: (i) partial or total collapse; (ii) debris removal; and (iii) new construction. The photo interpretation process was performed for a total of 1060 buildings located in the study area and for each time interval, resulting in 11,660 recorded values, each set to either 1 or 0 depending on whether changes were observed.

2.3.2. GLCM Texture Features

The GLCM method is widely applied for measuring and converting gray values into texture information, and it provides an essential basis for texture analysis and feature extraction [31,56,57,58,59]. The pioneer of this widely used method is Haralick [25], who created the texture analysis algorithm by quantifying the textural relationships between the pixels of an image. The method extracts structural information about the texture pattern to be analyzed at different scales and orientations, making it efficient and easy to implement. Based on the literature review regarding GLCM texture analysis, it was found that 5 out of the 14 indicators, namely, contrast, correlation, energy, entropy and homogeneity, are the most commonly used in similar studies; these indicators are defined as follows:
Contrast is a measure of the number of local texture variations in an image. Therefore, if there is a high number of variations, the contrast is high, while, in cases of minimal variations, the contrast is low. It is given by the following equation:
$$\text{Contrast} = \sum_{i,j} \lvert i - j \rvert^{2} \, p(i,j)$$
Correlation is a measure of the linear dependence of gray levels between individual pixels in an image. The higher the correlation is, the more uniform the image is, while correspondingly low correlation values indicate non-uniformity [25,60]. It can be computed by the equation:
$$\text{Correlation} = \sum_{i,j} \frac{(i - \mu_i)(j - \mu_j)\, p(i,j)}{\sigma_i \sigma_j}$$
Energy is the sum of the squares of all elements of the GLCM. It measures the uniformity of texture in an image and is considered the appropriate measure for detecting its anomaly. The energy is given by the equation:
$$\text{Energy} = \sum_{i,j} p(i,j)^{2}$$
Entropy is a measure of randomness in the distribution of pixels in an image. Specifically, the denser the texture of an image, the more dispersed the grayscale distribution and the higher the entropy value. Conversely, small entropy values reveal smoother areas in an image. Entropy is directly dependent on the range of the data used for the analysis [61]. It can be computed by applying the equation:
$$\text{Entropy} = -\sum_{i,j} p(i,j)\, \log_2 p(i,j)$$
Homogeneity calculates the uniformity of pixels in the image and shows a comparative arrangement of their equal values. In addition, this index is similar to energy and is given by the equation:
$$\text{Homogeneity} = \sum_{i,j} \frac{p(i,j)}{1 + \lvert i - j \rvert}$$
where i and j are, respectively, the row and column indices of the matrix, and p(i, j) is the corresponding entry of the normalized gray-level co-occurrence matrix.
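The five features defined above can be sketched in a small, self-contained NumPy implementation (a minimal illustration, not the authors’ Matlab code; the function names and the symmetric-GLCM convention are assumptions):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Build a normalized gray-level co-occurrence matrix for one pixel offset."""
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=float)
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    m += m.T            # symmetric GLCM (a common convention, assumed here)
    return m / m.sum()  # normalize counts to joint probabilities p(i, j)

def texture_features(p):
    """Compute the five Haralick-style features from a normalized GLCM p."""
    levels = p.shape[0]
    i, j = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    nz = p[p > 0]  # keep only nonzero entries to avoid log2(0)
    return {
        "contrast":    ((i - j) ** 2 * p).sum(),
        "correlation": ((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j),
        "energy":      (p ** 2).sum(),
        "entropy":     -(nz * np.log2(nz)).sum(),
        "homogeneity": (p / (1 + np.abs(i - j))).sum(),
    }
```

For example, an 8 × 8 two-level checkerboard yields a contrast of 1 and a correlation of −1 for the horizontal offset, since every neighboring pair of pixels differs by exactly one gray level.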
In order to create the input variables for the ANN model, three main steps were carried out. In the first stage, the 12 orthophotomaps were converted from the RGB (Red, Green, Blue) color model to grayscale in order to reduce the complexity of the representation of the texture features. By converting the orthophotomaps to grayscale, i.e., by calculating a weighted sum of the three bands, a single-band pixel is obtained, thus reducing its information content. The next step was the calculation of the GLCM texture features for each building footprint of the 12 time steps in the Matlab environment. To achieve this, both the orthophotomaps (raster files) and the shapefiles of the building footprints were used. At first, five GLCM features, i.e., contrast, correlation, energy, entropy and homogeneity, were selected for calculation based on the literature review, since they are the most commonly used in change detection studies [62,63]. To reduce the ANN model’s complexity, three of the GLCM features were finally selected as inputs for the model. For this task, a correlation analysis of all calculated GLCM features was conducted and the correlation coefficients, expressed by the Pearson correlation coefficient (r), were computed. The correlation coefficients were used to investigate the relationships between the five examined GLCM features and assist in selecting the least correlated of them. The rationale behind this choice lies in the fact that selecting features that are strongly correlated with each other generally makes the training of the ANN more complicated, leading to unreliable results.
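The grayscale-conversion step can be sketched as a weighted sum of the three bands. The ITU-R BT.601 luma weights below are the ones commonly used (e.g., by Matlab’s rgb2gray) and are an assumption here, since the paper does not state the exact weights:

```python
import numpy as np

def rgb_to_gray(rgb):
    """Convert an H x W x 3 RGB array to a single grayscale band via a
    weighted sum of the bands (BT.601 weights; assumed, not stated in the paper)."""
    weights = np.array([0.299, 0.587, 0.114])  # R, G, B weights, summing to 1
    return rgb @ weights
```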

2.3.3. Artificial Neural Networks (ANNs)

As ANNs learn from experience, they can perform complicated operations and recognize complex patterns, even if these patterns are noisy and ambiguous. An ANN model consists of a set of nonlinear and densely interconnected elements, or neurons, which are inspired by biological neural systems. The fundamental idea of an ANN is to connect a set of inputs to a set of outputs. Hence, the main architecture of an ANN model usually contains three distinct layers: (1) the input layer; (2) the hidden layer(s); and (3) the output layer, where the results of the ANN are generated. Moreover, the performance of ANNs generally relies on several factors, such as the number of hidden layers, the number of hidden nodes, the learning algorithm and the activation function of each node. In this study, a multi-layer perceptron (MLP) Feed-Forward Neural Network (FFNN) was chosen to generate the classification model. The FFNN is in general the fastest type of neural network and broadly consists of a set of layers comprising the input layer, one or more hidden layers of processing neurons and an output layer of processing neurons [64,65,66].
As already stated, the ANN model developed in the present study aims to estimate the status of buildings, i.e., whether any changes are detected or not, based on the changes in their textural characteristics determined by applying the GLCM functions. More specifically, the main purpose was to train ANNs to detect any potential building changes (dependent variable) between two sequential time steps using the texture characteristics of the buildings (independent variables) computed for these time steps as an input.
Various structures of the ANN were developed, trained and tested based on the generic scheme of one input layer, one hidden layer and one output layer (Figure 2). The GLCM texture features were defined as the independent variables (input layer), while the existence of building changes or not was defined as the dependent variable by setting the variable value to 1 or 0, respectively. With regard to the input layer, the values of three GLCM features of two different time steps, i.e., six input nodes, were considered. On the other hand, the output layer contains one neuron, since buildings are classified into 1 or 0, depending on whether changes were detected or not.
Based on the binary building classification described in Section 2.3.1, a total of 11,660 records were documented (1060 buildings in 11 time intervals). As expected, most of the recorded values are equal to zero (0) due to the fact that most of the buildings in the study area did not undergo any changes during the entire study period. In fact, only 678 out of 11,660 records were set equal to one (1), indicating building changes between two time steps. By taking into consideration the number of records equal to one (1) and in order to create a balanced training dataset for the ANN model, the same number of records equal to zero (0) were randomly selected, i.e., 678. As a result, a total number of 1356 records are included in the training dataset.
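The balanced-dataset construction (678 change records plus an equal number of randomly selected no-change records) amounts to random undersampling of the majority class; a minimal sketch, where the function name and the fixed seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, for reproducibility only (an assumption)

def balance_binary(X, y):
    """Randomly undersample the majority (no-change) class so both
    classes contribute the same number of records."""
    pos = np.flatnonzero(y == 1)                      # change records
    neg = np.flatnonzero(y == 0)                      # no-change records
    keep = rng.choice(neg, size=len(pos), replace=False)
    idx = rng.permutation(np.concatenate([pos, keep]))
    return X[idx], y[idx]
```

With the paper’s numbers (678 positives out of 11,660 records), this yields a balanced 1356-record training dataset.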
Another important step in the ANN model development process is the selection of the activation (or transfer) function used for controlling the output of the ANN across different domains, since activation functions can significantly affect the complexity and performance of the ANN. In the present study, hyperbolic tangent sigmoid (Tansig) was selected for the hidden layer, and log-sigmoid (Logsig) for the output layer. In general, the sigmoid function, which is one of the most commonly used activation functions in ANN models, transforms the input of any value into a value in the range between 0 and 1.
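The two activation functions, and a forward pass through the generic one-hidden-layer architecture they serve, can be sketched as follows (a minimal NumPy illustration, not the authors’ Matlab model):

```python
import numpy as np

def tansig(x):
    """Hyperbolic tangent sigmoid (hidden layer), output in (-1, 1)."""
    return np.tanh(x)

def logsig(x):
    """Log-sigmoid (output layer), output in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(X, W1, b1, W2, b2):
    """One forward pass: tansig hidden layer followed by a logsig output neuron."""
    hidden = tansig(X @ W1 + b1)
    return logsig(hidden @ W2 + b2)
```

Because the output neuron uses the log-sigmoid, every prediction falls strictly between 0 and 1, which matches the binary change/no-change target.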
After defining the ANN architecture, the training of the ANN model was based on the Levenberg–Marquardt (LM) algorithm [67]. The role of the training algorithm is to repetitively correct the weights and biases of the neural network throughout the training process and thus maximize its performance. During the ANN training, a wide range of hidden-layer node counts was tested, with each structure trained with 10 random weight initializations. At the end, the model with the best performance was saved for the final implementation. Finally, to control the training process and test the performance of the ANN models, the cross-validation technique was applied. According to this method, the available data, i.e., 1356 patterns, were divided into three separate datasets: training, validation and test sets with a ratio of 70%, 15% and 15%, respectively. The training dataset was used to train the model, the validation dataset to control the training process, i.e., to stop the training before overfitting, and the test dataset for the final performance evaluation of the trained model. Each of these sets contains entirely different patterns, so the ANN model sees completely different data in each of the three processes, i.e., training, validation and testing. The evaluation of the applied method was based on the following metrics, which are widely used for binary classification problems [68]:
$$\text{Overall Accuracy (OA)} = \frac{TP + TN}{TP + FP + TN + FN}$$
$$\text{User's accuracy (UA)} = \frac{TP}{TP + FN}$$
$$\text{Producer's accuracy (PA)} = \frac{TP}{TP + FP}$$
$$F1\ \text{score} = \frac{2 \times TP}{2 \times TP + FP + FN}$$
where TP: true positive, TN: true negative, FP: false positive, and FN: false negative. Figure 3 presents the overall workflow of the current research.
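The four metrics can be computed directly from the confusion-matrix counts; the helper below follows the equations above, keeping the paper’s UA/PA conventions (the function name is illustrative):

```python
def binary_metrics(tp, tn, fp, fn):
    """Binary-classification metrics using the paper's definitions."""
    oa = (tp + tn) / (tp + fp + tn + fn)  # overall accuracy
    ua = tp / (tp + fn)                   # user's accuracy (as defined above)
    pa = tp / (tp + fp)                   # producer's accuracy (as defined above)
    f1 = 2 * tp / (2 * tp + fp + fn)      # F1 score
    return oa, ua, pa, f1
```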

3. Results and Discussion

3.1. Photo Interpretation and Building Classification

Figure 4 shows some indicative examples of buildings with and without structural changes, as they have been categorized during visual interpretation. Changes were considered not only in buildings that have collapsed, but also in those where significant changes (cracks, partial collapses) and other visible damage were observed. Moreover, the removal of debris from a building site was marked as a building change; otherwise, there was a high risk of increasing the commission error due to the large difference in orthophotomaps between two time steps (Figure 4a). On the other hand, in the case that buildings had undergone no changes or debris was not removed from a site, no building changes were considered (Figure 4b).
After the image interpretation of the status of the buildings’ footprints into 0 or 1, a map showing the spatial distribution of all buildings in the settlement of Vrissa that had undergone at least one change within their footprints over the entire study period (13 June 2017–23 October 2021) was created (Figure 5). As can be observed, most building changes are located in the northern and western parts of the settlement. Since this part of the settlement was the most affected by the earthquake, it is reasonable that most changes over time are observed there. These changes may refer either to the collapse of crumbling buildings that did not collapse during the seismic activity or to the construction/repair of buildings. It is worth mentioning that the total number of changes exceeds the 370 buildings (33% of the total building stock) shown in the pie chart in Figure 5, because some buildings were continuously reconstructed, yielding a total of 650 recorded building changes. However, since Figure 5 counts each building once, the final percentage of buildings with at least one change is 33% of the total building stock.

3.2. GLCM Texture Features

3.2.1. Correlation Analysis

As already mentioned in Section 2.3.2, the five calculated GLCM texture features, i.e., contrast, correlation, energy, entropy and homogeneity, may contain statistically correlated information, thus making it difficult to estimate the buildings’ conditions through machine learning, that is, to develop a reliable ANN model. Hence, for the purpose of reducing the data dimensionality, it was considered necessary to use three out of the five GLCM features. For this task, a correlation analysis of the five calculated parameters was conducted and the three least correlated parameters were selected as the input for the ANN model. Figure 6 shows the results of the correlation analysis in terms of the correlation matrix, including the scatterplots for all pairs of variables, along with the value of the Pearson correlation coefficient (r) for each pair. After calculating the sum of the correlation coefficient values in each row (or column) for all five variables, and considering that the correlation of every variable with itself is 1, it was found that correlation, energy and entropy show the lowest sums, i.e., 3.30, 3.19 and 3.17, respectively; therefore, they were selected as the input for the ANN model. On the contrary, homogeneity shows the highest sum (3.76) and, hence, the highest correlation with the other variables.
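The sum-of-correlations ranking described above can be sketched as follows (the function and column names are illustrative; as in the paper, each row sum includes the variable’s unit self-correlation):

```python
import numpy as np

def select_least_correlated(X, names, k=3):
    """Rank features by the row sums of the Pearson correlation matrix
    (self-correlation of 1 included, as in the paper) and keep the k
    features with the lowest sums."""
    r = np.corrcoef(X, rowvar=False)   # Pearson correlation matrix of the columns
    sums = r.sum(axis=0)               # one sum per feature
    keep = sorted(np.argsort(sums)[:k])
    return [names[i] for i in keep]
```

For instance, if one column duplicates another (perfect correlation), the independent column receives the lowest sum and is kept first.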

3.2.2. Visual Interpretation of Selected GLCM Features

A visual inspection of the three selected GLCM features, i.e., correlation, energy and entropy, was performed to examine how these features differentiate between two time steps in the case of building changes. For better visualization and in order to draw clearer conclusions, examples with a high difference in the calculated values of the three GLCM features were taken into consideration. At first, Figure 7 depicts the visual differentiation of correlation between two images in the case of a collapsed building. As can be seen, before a building collapse (Figure 7a) the neighboring pixels of the corresponding image are more correlated with each other, thus creating a uniform image; on the contrary, after a building collapse (Figure 7b), the pixel distribution contains various gray-level gradations, meaning that the pixels are less related to their neighbors.
Next, Figure 8 illustrates the visual differentiation of energy between two images, again in the case of a collapsed building. It can be clearly seen that, in the image before the building collapse (Figure 8a), the energy is higher, since no texture disorder is detected in comparison with the corresponding image after the collapse (Figure 8b). In other words, there are more repetitions of pairs of pixels in the first image, thus resulting in a higher value of energy.
Finally, as far as the entropy is concerned, it is higher in the second image where the debris was removed from the site due to the fact that the randomness of the pixel distribution is higher, which increases the texture density (Figure 9).

3.3. Development of ANN Models

3.3.1. ANN Model Training and Testing

Different structures of ANNs were tested according to the trial-and-error procedure described above. The optimal number of neurons in the hidden layer was found to be eight, resulting in an ANN architecture composed of six input neurons, eight hidden neurons and one output neuron (6:8:1). Figure 10 shows the learning performance of the ANN model based on the MSE values. It should be noted that, hereinafter, the results of all three independent datasets are presented; however, the accuracy metrics of the testing dataset (unseen during the training procedure) are the metrics for the final evaluation of the presented approach. Learning curves are a widely used diagnostic tool for evaluating model learning performance over experience, i.e., as the model is exposed to more and more training data, as well as for identifying problems such as model underfitting or overfitting. According to this figure, the developed ANN model shows an MSE value of 0.078 at the convergence point, which is lower than that reported by Zahraee and Rastiveis [50], even though it reached its best validation performance after almost the same number of training epochs, i.e., at epoch 18, as in the aforementioned study.
Table 1 presents the evaluation results of the ANN model for all datasets (training, validation and testing). According to the overall accuracy (OA) of the training, validation and testing datasets, the ANN approach on GLCM texture features performed quite well. More specifically, the OA for the training, validation and testing datasets is 90.4%, 88.7% and 92.6%, respectively, the latter of which is slightly higher than that of Liu and Lathrop [17], who developed an ANN model to detect newly urbanized areas in New Jersey, USA, and significantly higher than that of Mansouri et al. [69], who used an ANN classifier on GLCM features to identify building damage after an earthquake event.
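The metrics in Table 1 follow directly from a 2 × 2 confusion matrix. The counts below are hypothetical, chosen only to reproduce the testing set size (203 patterns) and its 15 misclassifications; the per-class split is an assumption, so the user's and producer's accuracies differ slightly from the table.

```python
import numpy as np

# Hypothetical confusion matrix: rows = reference, columns = prediction;
# class 1 = "changed building". Counts sum to the 203 testing patterns
# with 15 errors (10 commission + 5 omission).
cm = np.array([[96, 10],
               [ 5, 92]])
tn, fp = cm[0]
fn, tp = cm[1]

overall   = (tp + tn) / cm.sum()    # overall accuracy (OA)
users     = tp / (tp + fp)          # user's accuracy = 1 - commission error
producers = tp / (tp + fn)          # producer's accuracy = 1 - omission error
f1 = 2 * users * producers / (users + producers)
```

With these counts the OA is 188/203 ≈ 0.926, matching the reported testing accuracy.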
Furthermore, with regard to the errors of the ANN model, and in particular those of the testing dataset, the results reveal that 15 instances of building changes were wrongly classified. All of these errors were examined, and some of them are visualized and discussed in the next section.
Next, the ROC curves for the three datasets are presented (Figure 11).
The ROC curve is an effective tool for visualizing and evaluating classifiers, including binary classifiers such as the one developed in the present study. It is constructed by plotting the true positive rate (TPR), also known as sensitivity, against the false positive rate (FPR), which equals 1 − specificity [70,71]. A perfect classifier has a TPR equal to 1 and an FPR equal to 0. The overall performance is summarized by the Area Under the ROC curve (AUC), a scalar measure of classification performance that varies between 0 and 1; the higher the AUC, the better the model. From the curves shown in Figure 11, it is concluded that the training performed quite well, since for all datasets, i.e., training, validation and testing, the curves lie far from the diagonal line, with AUC values of 0.968, 0.961 and 0.969, respectively.
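The ROC construction described above amounts to sweeping a threshold over the classifier's scores and accumulating TPR/FPR; the labels and scores below are synthetic stand-ins for the network's outputs.

```python
import numpy as np

def roc_auc(y_true, scores):
    """ROC points and AUC, sweeping every predicted score as a threshold."""
    order = np.argsort(-scores)                                  # descending score
    y = y_true[order]
    tpr = np.concatenate([[0.0], np.cumsum(y) / y.sum()])        # sensitivity
    fpr = np.concatenate([[0.0], np.cumsum(1 - y) / (1 - y).sum()])  # 1 - specificity
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)        # trapezoid rule
    return fpr, tpr, auc

# A perfectly separating classifier reaches AUC = 1.
_, _, perfect = roc_auc(np.array([0, 0, 1, 1]), np.array([0.1, 0.2, 0.8, 0.9]))

# Noisy but informative scores land well above the 0.5 diagonal.
rng = np.random.default_rng(2)
y = rng.integers(0, 2, 500)
_, _, auc = roc_auc(y, y + 0.5 * rng.normal(size=500))
```

An AUC near the study's 0.96–0.97 indicates the curve hugs the top-left corner, far from the diagonal chance line.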
Finally, the adequate overall performance of the ANN model is also reflected in the error histograms for the training, validation and testing steps shown in Figure 12. The error histogram visualizes the errors between the target values and the predicted/output values after training a neural network; the more concentrated around zero the error values are, the better the model performance [72]. From this figure, it is clear that the model errors are concentrated around the zero-error value, revealing that the ANN model has a high degree of reliability.
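This diagnostic can be mimicked on simulated outputs: for a well-trained network, the target − output errors pile up in the bins nearest zero. The targets and outputs below are simulated, not the study's.

```python
import numpy as np

rng = np.random.default_rng(3)
targets = rng.integers(0, 2, 1000).astype(float)                 # binary ground truth
outputs = np.clip(targets + 0.1 * rng.normal(size=1000), 0, 1)   # near-correct net
errors = targets - outputs                                       # target - output
counts, edges = np.histogram(errors, bins=20)
peak_bin_left = edges[np.argmax(counts)]
# The tallest histogram bin should sit next to zero for a reliable model.
```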

3.3.2. Model Error Visualization

After the ANN model training, the omission and commission errors, i.e., buildings that were wrongly classified within the testing dataset, were visualized and discussed. Out of the 203 patterns used in the testing step (15% of the total dataset), the developed ANN model produced 15 errors. The detection and analysis of the omission errors revealed that the model had more difficulty recognizing changes related to building collapses (total or partial) than other types of changes, such as debris removal from a site. Figure 13 depicts two examples of omission errors associated with this weakness. In these examples, the relationships among the six GLCM features may be very close to those of an unchanged building, leading the neural network to classify the building as not having undergone any change and thus producing an omission error.
On the other hand, the detection and interpretation of the commission errors revealed that the model, affected either by image resolution or by image shading, detected changes in cases where none had actually occurred (Figure 14).

4. Conclusions

The present study developed a methodology based on GLCM texture features extracted from UAV orthophotomaps and an ANN classifier in order to detect building changes during the recovery phase following an earthquake event. As indicated by the high overall accuracy and the relatively fast execution time, the ANN-based building change classification scheme performs well in estimating the status of buildings in the study area. The error analysis showed that the ANN model was mainly affected by either image resolution or image shading. In general, as the present study concluded, the joint utilization of remote sensing image texture analysis through the GLCM method and a neural network algorithm is highly effective for detecting changes after a natural disaster such as an earthquake. In future research, it would be of great interest to evaluate how modern and more powerful CNNs can individually recognize the various types of building changes, i.e., building collapse (partial or total), new building construction and debris removal, to achieve an even more detailed monitoring of the area under study.

Author Contributions

Conceptualization, M.C. and C.V.; Data curation, M.C., C.V., E.-E.P., G.T., I.S. and N.S.; Formal analysis, M.C. and C.V.; Methodology, M.C. and C.V.; Resources, C.V. and N.S.; Software, C.V., E.-E.P., G.T. and N.S.; Supervision, C.V.; Writing—original draft, M.C., C.V. and I.S.; Writing—review and editing, E.-E.P., G.T. and N.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ubaura, M. Changes in Land Use After the Great East Japan Earthquake and Related Issues of Urban Form. In Advances in Natural and Technological Hazards Research; Springer: Berlin/Heidelberg, Germany, 2018; Volume 48. [Google Scholar] [CrossRef]
  2. Dell’Acqua, F.; Gamba, P. Remote Sensing and Earthquake Damage Assessment: Experiences, Limits, and Perspectives. Proc. IEEE 2012, 100, 2876–2890. [Google Scholar] [CrossRef]
  3. Dong, L.; Shan, J. A comprehensive review of earthquake-induced building damage detection with remote sensing techniques. ISPRS J. Photogramm. Remote Sens. 2013, 84, 85–99. [Google Scholar] [CrossRef]
  4. Janalipour, M.; Taleai, M. Building change detection after earthquake using multi-criteria decision analysis based on extracted information from high spatial resolution satellite images. Int. J. Remote Sens. 2017, 38, 82–99. [Google Scholar] [CrossRef]
  5. Wang, X.; Liu, S.; Du, P.; Liang, H.; Xia, J.; Li, Y. Object-Based Change Detection in Urban Areas from High Spatial Resolution Images Based on Multiple Features and Ensemble Learning. Remote Sens. 2018, 10, 276. [Google Scholar] [CrossRef] [Green Version]
  6. Walter, V. Object-based classification of remote sensing data for change detection. ISPRS J. Photogramm. Remote Sens. 2004, 58, 225–238. [Google Scholar] [CrossRef]
  7. Asokan, A.; Anitha, J. Change detection techniques for remote sensing applications: A survey. Earth Sci. Inform. 2019, 12, 143–160. [Google Scholar] [CrossRef]
  8. Hussain, M.; Chen, D.; Cheng, A.; Wei, H.; Stanley, D. Change detection from remotely sensed images: From pixel-based to object-based approaches. ISPRS J. Photogramm. Remote Sens. 2013, 80, 91–106. [Google Scholar] [CrossRef]
  9. Tehrany, M.S.; Pradhan, B.; Jebuv, M.N. A comparative assessment between object and pixel-based classification approaches for land use/land cover mapping using SPOT 5 imagery. Geocarto Int. 2014, 29, 351–369. [Google Scholar] [CrossRef]
  10. Liu, C.; Sui, H.; Huang, L. Identification of Building Damage from UAV-Based Photogrammetric Point Clouds Using Supervoxel Segmentation and Latent Dirichlet Allocation Model. Sensors 2020, 20, 6499. [Google Scholar] [CrossRef]
  11. Soulakellis, N.; Vasilakos, C.; Chatzistamatis, S.; Kavroudakis, D.; Tataris, G.; Papadopoulou, E.-E.; Papakonstantinou, A.; Roussou, O.; Kontos, T. Post-Earthquake Recovery Phase Monitoring and Mapping Based on UAS Data. ISPRS Int. J. Geo-Inf. 2020, 9, 447. [Google Scholar] [CrossRef]
  12. Franke, K.W.; Lingwall, B.N.; Zimmaro, P.; Kayen, R.E.; Tommasi, P.; Chiabrando, F.; Santo, A. Phased Reconnaissance Approach to Documenting Landslides following the 2016 Central Italy Earthquakes. Earthq. Spectra 2018, 34, 1693–1719. [Google Scholar] [CrossRef]
  13. Qin, R. An Object-Based Hierarchical Method for Change Detection Using Unmanned Aerial Vehicle Images. Remote Sens. 2014, 6, 7911–7932. [Google Scholar] [CrossRef] [Green Version]
  14. Ji, M.; Liu, L.; Du, R.; Buchroithner, M.F. A Comparative Study of Texture and Convolutional Neural Network Features for Detecting Collapsed Buildings after Earthquakes Using Pre- and Post-Event Satellite Imagery. Remote Sens. 2019, 11, 1202. [Google Scholar] [CrossRef] [Green Version]
  15. LeCun, Y.; Hinton, G.; Bengio, Y. Deep Learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  16. Li, P.; Xu, H.; Liu, S.; Guo, J. Urban building damage detection from very high-resolution imagery using one-class SVM and spatial relations. In Proceedings of the 2009 IEEE International Geoscience and Remote Sensing Symposium, Cape Town, South Africa, 12–17 July 2009. [Google Scholar] [CrossRef]
  17. Liu, X.; Lathrop, R.G. Urban change detection based on an artificial neural network. Int. J. Remote Sens. 2002, 23, 2513–2518. [Google Scholar] [CrossRef]
  18. Pu, R.; Gong, P.; Tian, Y.; Miao, X.; Carruthers, R.I.; Anderson, G.L. Invasive species change detection using artificial neural networks and CASI hyperspectral imagery. Environ. Monit. Assess. 2008, 140, 15–32. [Google Scholar] [CrossRef]
  19. Shoukry, N. Artificial Neural Networks Based Change Detection for Monitoring Palm Trees Plantation in Al Madinah-Saudi Arabia. Bull. Société Géographie D’egypte 2017, 90, 167–200. [Google Scholar] [CrossRef]
  20. Rahman, M.T.U.; Tabassum, F.; Rasheduzzaman, M.; Saba, H.; Sarkar, L.; Ferdous, J.; Uddin, S.Z.; Islam, A.Z.M.Z. Temporal dynamics of land use/land cover change and its prediction using CA-ANN model for southwestern coastal Bangladesh. Environ. Monit. Assess. 2017, 189, 565. [Google Scholar] [CrossRef]
  21. Nemmour, H.; Chibani, Y. Multiple support vector machines for land cover change detection: An application for mapping urban extensions. ISPRS J. Photogramm. Remote Sens. 2006, 61, 125–133. [Google Scholar] [CrossRef]
  22. Radhika, S.; Tamura, Y.; Matsui, M. Cyclone damage detection on building structures from pre- and post-satellite images using wavelet based pattern recognition. J. Wind. Eng. Ind. Aerodyn. 2015, 136, 23–33. [Google Scholar] [CrossRef]
  23. Contreras, D.; Blaschke, T.; Tiede, D.; Jilge, M. Monitoring recovery after earthquakes through the integration of remote sensing, GIS, and ground observations: The case of L’Aquila (Italy). Cartogr. Geogr. Inf. Sci. 2016, 43, 115–133. [Google Scholar] [CrossRef] [Green Version]
  24. Wen, D.; Huang, X.; Bovolo, F.; Li, J.; Ke, X.; Zhang, A.; Benediktsson, J.A. Change Detection From Very-High-Spatial-Resolution Optical Remote Sensing Images: Methods, applications, and future directions. IEEE Geosci. Remote Sens. Mag. 2021, 9, 68–101. [Google Scholar] [CrossRef]
  25. Haralick, R.M.; Shanmugam, K.; Dinstein, I.H. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621. [Google Scholar] [CrossRef] [Green Version]
  26. Tamura, H.; Mori, S.; Yamawaki, T. Textural Features Corresponding to Visual Perception. IEEE Trans. Syst. Man Cybern. 1978, 8, 460–473. [Google Scholar] [CrossRef]
  27. Kaur, N.; Tiwari, P.S.; Pande, H.; Agrawal, S. Utilizing Advance Texture Features for Rapid Damage Detection of Built Heritage Using High-Resolution Space Borne Data: A Case Study of UNESCO Heritage Site at Bagan, Myanmar. J. Indian Soc. Remote Sens. 2020, 48, 1627–1638. [Google Scholar] [CrossRef]
  28. Akinin, M.V.; Akinina, A.V.; Sokolov, A.V.; Tarasov, A.S. Application of EM algorithm in problems of pattern recognition on satellite images. In Proceedings of the 2017 6th Mediterranean Conference on Embedded Computing, MECO 2017-Including ECYPS 2017, Bar, Montenegro, 11–15 June 2017; pp. 1–4. [Google Scholar] [CrossRef]
  29. Kabir, S. Imaging-based detection of AAR induced map-crack damage in concrete structure. NDT E Int. 2010, 43, 461–469. [Google Scholar] [CrossRef]
  30. Zhang, X.; Cui, J.; Wang, W.; Lin, C. A Study for Texture Feature Extraction of High-Resolution Satellite Images Based on a Direction Measure and Gray Level Co-Occurrence Matrix Fusion Algorithm. Sensors 2017, 17, 1474. [Google Scholar] [CrossRef]
  31. Li, J. Texture Feature Extraction and Classification: A Comparative Study between Traditional Methods and Deep Learning: A Thesis Presented in Partial Fulfilment of the Requirements for the Degree of Master of Information Science in Computer Sciences at Massey. Master’s Thesis, Massey University, Palmerston North, New Zealand, 2020. [Google Scholar]
  32. Gevaert, C.M.; Persello, C.; Sliuzas, R.; Vosselman, G. Classification of Informal Settlements through the Integration of 2d and 3d Features Extracted from Uav Data. In ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences; Copernicus Publications: Gottingen, Germany, 2016; Volume 3. [Google Scholar]
  33. Mboga, N.; Persello, C.; Bergado, J.R.; Stein, A. Detection of Informal Settlements from VHR Images Using Convolutional Neural Networks. Remote Sens. 2017, 9, 1106. [Google Scholar] [CrossRef] [Green Version]
  34. Ghaffarian, S.; Kerle, N. Towards Post-Disaster Debris Identification for Precise Damage and Recovery Assessments from Uav and Satellite Images. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 297–302. [Google Scholar] [CrossRef] [Green Version]
  35. Ruiz, L.A.; Fdez-Sarría, A.; Recio, J.A. Texture Feature Extraction for Classification of Remote Sensing Data Using Wavelet Decomposition: A Comparative Study. In Proceedings of the 20th ISPRS Congress, Istanbul, Turkey, 12–23 July 2004; Volume 35. [Google Scholar]
  36. Kupidura, P. The Comparison of Different Methods of Texture Analysis for Their Efficacy for Land Use Classification in Satellite Imagery. Remote Sens. 2019, 11, 1233. [Google Scholar] [CrossRef] [Green Version]
  37. Rao, P.V.N.; Sai, M.V.R.S.; Sreenivas, K.; Rao, M.V.K.; Rao, B.R.M.; Dwivedi, R.S.; Venkataratnam, L. Textural analysis of IRS-1D panchromatic data for land cover classification. Int. J. Remote Sens. 2002, 23, 3327–3345. [Google Scholar] [CrossRef]
  38. Pacifici, F.; Chini, M.; Emery, W.J. A neural network approach using multi-scale textural metrics from very high-resolution panchromatic imagery for urban land-use classification. Remote Sens. Environ. 2009, 113, 1276–1292. [Google Scholar] [CrossRef]
  39. Pesaresi, M.; Gerhardinger, A.; Kayitakire, F. A Robust Built-Up Area Presence Index by Anisotropic Rotation-Invariant Textural Measure. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2008, 1, 180–192. [Google Scholar] [CrossRef]
  40. Miura, H.; Midorikawa, S.; Soh, H.-C.C. Building Damage Detection of the 2010 Haiti Earthquake Based on Texture Analysis of High-Resolution Satellite Images. In Proceedings of the 15th World Conference on Earthquake Engineering (15WCEE), Lisbon, Portugal, 24–28 September 2012. [Google Scholar]
  41. Hasanlou, M.; Shah-Hosseini, R.; Seydi, S.; Karimzadeh, S.; Matsuoka, M. Earthquake Damage Region Detection by Multitemporal Coherence Map Analysis of Radar and Multispectral Imagery. Remote Sens. 2021, 13, 1195. [Google Scholar] [CrossRef]
  42. Chen, Y.; Tang, L.; Yang, X.; Bilal, M.; Li, Q. Object-based multi-modal convolution neural networks for building extraction using panchromatic and multispectral imagery. Neurocomputing 2019, 386, 136–146. [Google Scholar] [CrossRef]
  43. Alshehhi, R.; Marpu, P.R.; Woon, W.L.; Mura, M.D. Simultaneous extraction of roads and buildings in remote sensing imagery with convolutional neural networks. ISPRS J. Photogramm. Remote Sens. 2017, 130, 139–149. [Google Scholar] [CrossRef]
  44. Zhang, Z.; Vosselman, G.; Gerke, M.; Persello, C.; Tuia, D.; Yang, M.Y. Detecting Building Changes between Airborne Laser Scanning and Photogrammetric Data. Remote Sens. 2019, 11, 2417. [Google Scholar] [CrossRef]
  45. Ji, S.; Shen, Y.; Lu, M.; Zhang, Y. Building Instance Change Detection from Large-Scale Aerial Images using Convolutional Neural Networks and Simulated Samples. Remote Sens. 2019, 11, 1343. [Google Scholar] [CrossRef] [Green Version]
  46. Khoshboresh-Masouleh, M.; Shah-Hosseini, R. Building panoptic change segmentation with the use of uncertainty estimation in squeeze-and-attention CNN and remote sensing observations. Int. J. Remote Sens. 2021, 42, 7798–7820. [Google Scholar] [CrossRef]
  47. Bitelli, G.; Camassi, R.; Gusella, L.; Mognol, A. Image Change Detection on Urban Area: The Earthquake Case. In Proceedings of the 20th ISPRS Congress, Istanbul, Turkey, 12–23 July 2004; Volume 35. [Google Scholar]
  48. Tomowski, D.; Ehlers, M.; Klonus, S. Colour and texture based change detection for urban disaster analysis. In 2011 Joint Urban Remote Sensing Event; IEEE: New York City, NY, USA, 2011; pp. 329–332. [Google Scholar] [CrossRef]
  49. Tian, J.; Nielsen, A.; Reinartz, P. Building damage assessment after the earthquake in Haiti using two post-event satellite stereo imagery and DSMs. Int. J. Image Data Fusion 2015, 6, 155–169. [Google Scholar] [CrossRef] [Green Version]
  50. Zahraee, N.K.; Rastiveis, H. Object-Oriented Analysis of Satellite Images Using Artificial Neural Networks for Post-Earthquake Buildings Change Detection. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 139–144. [Google Scholar] [CrossRef] [Green Version]
  51. Yu, H.; Cheng, G.; Ge, X. Earthquake-Collapsed Building Extraction from LiDAR and Aerophotograph Based on OBIA. In Proceedings of the 2nd International Conference on Information Science and Engineering, ICISE2010-Proceedings, Hangzhou, China, 4–6 December 2010. [Google Scholar]
  52. Wang, C.; Zhang, Y.; Xie, T.; Guo, L.; Chen, S.; Li, J.; Shi, F. A Detection Method for Collapsed Buildings Combining Post-Earthquake High-Resolution Optical and Synthetic Aperture Radar Images. Remote Sens. 2022, 14, 1100. [Google Scholar] [CrossRef]
  53. Kalantar, B.; Ueda, N.; Al-Najjar, H.A.H.; Halin, A.A. Assessment of Convolutional Neural Network Architectures for Earthquake-Induced Building Damage Detection based on Pre- and Post-Event Orthophoto Images. Remote Sens. 2020, 12, 3529. [Google Scholar] [CrossRef]
  54. Soulakellis, N.A.; Novak, I.D.; Zouros, N.; Lowman, P.; Yates, J. Fusing Landsat-5/TM Imagery and Shaded Relief Maps in Tectonic and Geomorphic Mapping. Photogramm. Eng. Remote Sens. 2006, 72, 693–700. [Google Scholar] [CrossRef]
  55. Vasilakos, C.; Chatzistamatis, S.; Roussou, O.; Soulakellis, N. Comparison of Terrestrial Photogrammetry and Terrestrial Laser Scanning for Earthquake Response Management. In Lecture Notes in Geoinformation and Cartography; Springer: Berlin/Heidelberg, Germany, 2019; pp. 33–57. [Google Scholar] [CrossRef]
  56. Wang, C.; Li, D.; Li, Z.; Wang, D.; Dey, N.; Biswas, A.; Moraru, L.; Sherratt, R.; Shi, F. An efficient local binary pattern based plantar pressure optical sensor image classification using convolutional neural networks. Optik 2019, 185, 543–557. [Google Scholar] [CrossRef]
  57. Chowdhury, P.R.; Deshmukh, B.; Goswami, A.K.; Prasad, S.S. Neural Network Based Dunal Landform Mapping From Multispectral Images Using Texture Features. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2010, 4, 171–184. [Google Scholar] [CrossRef]
  58. Pradhan, B.; Hagemann, U.; Tehrany, M.S.; Prechtel, N. An easy to use ArcMap based texture analysis program for extraction of flooded areas from TerraSAR-X satellite image. Comput. Geosci. 2014, 63, 34–43. [Google Scholar] [CrossRef]
  59. Gong, P.; Marceau, D.J.; Howarth, P.J. A comparison of spatial feature extraction algorithms for land-use classification with SPOT HRV data. Remote Sens. Environ. 1992, 40, 137–151. [Google Scholar] [CrossRef]
  60. Albregtsen, F. Statistical Texture Measures Computed from Gray Level Coocurrence Matrices. Available online: https://www.uio.no/studier/emner/matnat/ifi/INF4300/h08/undervisningsmateriale/glcm.pdf (accessed on 10 July 2022).
  61. Gonzalez, R.C.; Woods, R.E.; Eddins, S.L. Digital Image Processing Using Matlab. Available online: https://www.google.com.hk/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwij8O-93Pr7AhU2m1YBHcCeBuoQFnoECBEQAQ&url=https%3A%2F%2Fwww.cin.ufpe.br%2F~sbm%2FDEN%2FDigital%2520Image%2520Processing%2520Using%2520Matlab%2520(Gonzalez).pdf&usg=AOvVaw1cP81HH4_Wys_RYeKYmrcO (accessed on 10 July 2022).
  62. Guo, Y.; Fu, Y.H.; Chen, S.; Bryant, C.R.; Li, X.; Senthilnath, J.; Sun, H.; Wang, S.; Wu, Z.; de Beurs, K. Integrating spectral and textural information for identifying the tasseling date of summer maize using UAV based RGB images. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102435. [Google Scholar] [CrossRef]
  63. Moya, L.; Zakeri, H.; Yamazaki, F.; Liu, W.; Mas, E.; Koshimura, S. 3D gray level co-occurrence matrix and its application to identifying collapsed buildings. ISPRS J. Photogramm. Remote Sens. 2019, 149, 14–28. [Google Scholar] [CrossRef]
  64. Dhar, A.; Datta, B. Saltwater Intrusion Management of Coastal Aquifers. I: Linked Simulation-Optimization. J. Hydrol. Eng. 2009, 14, 1263–1272. [Google Scholar] [CrossRef]
  65. Huang, P.-S.; Chiu, Y.-C. A Simulation-Optimization Model for Seawater Intrusion Management at Pingtung Coastal Area, Taiwan. Water 2018, 10, 251. [Google Scholar] [CrossRef] [Green Version]
  66. Khatri, N.; Khatri, K.K.; Sharma, A. Artificial neural network modelling of faecal coliform removal in an intermittent cycle extended aeration system-sequential batch reactor based wastewater treatment plant. J. Water Process. Eng. 2020, 37, 101477. [Google Scholar] [CrossRef]
  67. Hagan, M.T.; Menhaj, M.B. Training feedforward networks with the Marquardt algorithm. IEEE Trans. Neural Netw. 1994, 5, 989–993. [Google Scholar] [CrossRef] [PubMed]
  68. Vasilakos, C.; Kavroudakis, D.; Georganta, A. Machine Learning Classification Ensemble of Multitemporal Sentinel-2 Images: The Case of a Mixed Mediterranean Ecosystem. Remote Sens. 2020, 12, 2005. [Google Scholar] [CrossRef]
  69. Mansouri, B.; Mousavi, S.; Amini-Hosseini, K. Earthquake Building Damage Detection Using VHR Satellite Data (Case Study: Two Villages Near Sarpol-e Zahab). J. Seismol. Earthq. Eng. 2018, 20, 45–55. [Google Scholar]
  70. Meyer-Baese, A.; Schmid Volker, J.J. Pattern Recognition and Signal Analysis in Medical Imaging, 2nd ed.; Academic Press: Cambridge, MA, USA, 2014. [Google Scholar]
  71. Torkashvand, M.; Neshat, A.; Javadi, S.; Pradhan, B. New hybrid evolutionary algorithm for optimizing index-based groundwater vulnerability assessment method. J. Hydrol. 2021, 598, 126446. [Google Scholar] [CrossRef]
  72. Suffoletto, B.; Gharani, P.; Chung, T.; Karimi, H. Using phone sensors and an artificial neural network to detect gait changes during drinking episodes in the natural environment. Gait Posture 2017, 60, 116–121. [Google Scholar] [CrossRef]
Figure 1. Location map of Lesvos Island and Vrissa settlement, along with the buildings located in the settlement.
Figure 2. Structure of the Artificial Neural Network (ANN) model.
Figure 3. Schematic representation of the methodological framework developed in the present study.
Figure 4. Indicative examples of buildings.
Figure 5. Spatial distribution of buildings having undergone some kind of change over the entire study period (13 June 2014–23 October 2021) (the percentage distribution of buildings is shown in the pie chart).
Figure 6. The correlation matrix summarizing the correlations between all examined GLCM texture features (contrast, correlation, energy, entropy, homogeneity).
Figure 7. Visual inspection of correlation (a) before and (b) after a building collapse.
Figure 8. Visual inspection of energy (a) before and (b) after a building collapse.
Figure 9. Visual inspection of entropy (a) before and (b) after a building collapse.
Figure 10. The performance of the neural network model training for all datasets.
Figure 11. Receiver Operating Characteristic (ROC) curves of (a) training, (b) validation and (c) testing datasets.
Figure 12. Histogram of errors (i.e., targets–output) for all datasets.
Figure 13. Two examples of omission errors in the case of two different buildings with shadows. (a) after the change and (b) before the change.
Figure 14. Two examples of commission errors in the case of two different buildings. (a) without shadows at both time-steps and (b) with shadows at the second time-step.
Table 1. Accuracy metrics for all datasets.
Dataset      User's Accuracy   Producer's Accuracy   Overall Accuracy   F1-Score
Training     0.896             0.907                 0.904              0.902
Validation   0.886             0.895                 0.887              0.890
Test         0.906             0.950                 0.926              0.927
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
