Article

Faster Post-Earthquake Damage Assessment Based on 1D Convolutional Neural Networks

Xinzhe Yuan, Dustin Tanksley, Liujun Li, Haibin Zhang, Genda Chen and Donald Wunsch
1 Civil, Architectural and Environmental Engineering Department, Missouri University of Science and Technology, Rolla, MO 65409, USA
2 Electrical and Computer Engineering Department, Missouri University of Science and Technology, Rolla, MO 65409, USA
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(21), 9844; https://doi.org/10.3390/app11219844
Submission received: 8 October 2021 / Revised: 18 October 2021 / Accepted: 18 October 2021 / Published: 21 October 2021

Abstract

Contemporary deep learning approaches for post-earthquake damage assessments based on 2D convolutional neural networks (CNNs) require encoding of ground motion records to transform their inherent 1D time series to 2D images, thus requiring high computing time and resources. This study develops a 1D CNN model to avoid the costly 2D image encoding. The 1D CNN model is compared with a 2D CNN model with wavelet transform encoding and a feedforward neural network (FNN) model to evaluate prediction performance and computational efficiency. A case study of a benchmark reinforced concrete (r/c) building indicated that the 1D CNN model achieved a prediction accuracy of 81.0%, which was very close to the 81.6% prediction accuracy of the 2D CNN model and much higher than the 70.8% prediction accuracy of the FNN model. At the same time, the 1D CNN model reduced computing time by more than 90% and reduced resources used by more than 69%, as compared to the 2D CNN model. Therefore, the developed 1D CNN model is recommended for rapid and accurate resultant damage assessment after earthquakes.

1. Introduction

Rapid and accurate seismic damage assessment is essential to post-event response and rescue. Traditional approaches generate fragility estimates of different structural damage states based on low-dimensional intensity measures (IMs, usually a scalar IM or a two-element vectorial IM) of ground motion records. These approaches include cloud analysis [1], incremental dynamic analysis (IDA) [2], multiple stripe analysis (MSA) [3], and improved models [4,5,6,7,8,9]. However, according to previous studies, low-dimensional IMs were not sufficient to capture and propagate the primary earthquake uncertainty through seismic fragility analysis [10,11,12,13]. To overcome the disadvantage of low-dimensional IMs, multivariate regression models were used to obtain fragility estimates and facilitate more accurate and reliable regional seismic risk estimates by incorporating multiple IMs (up to five) into a vectorial IM [14]. Two artificial neural network (ANN) models, i.e., the feedforward neural network (FNN) and radial basis function (RBF) network models with 14 IMs as inputs, were adopted to predict the earthquake-induced damage states of 30 reinforced concrete (r/c) buildings [15,16,17]. It was found that trained FNN and RBF models could reliably and rapidly predict the resultant damage states of r/c buildings after earthquake events. Machine learning (ML) techniques were also used for seismic damage classification. Xu et al. selected up to 48 IMs as inputs to train ML models to predict the damage states of structures [18]. Although these regression and artificial intelligence models were able to consider high-dimensional IMs in seismic damage assessments, they required careful selection and computation of hand-crafted IMs from a candidate pool of dozens, if not hundreds, of developed IMs, because the optimal IM for characterizing ground motion records changes with the structural type, the collected ground motion records (GMRs), and the structural damage index [13,19,20,21].
To better consider the uncertainties in seismic damage assessment and avoid the selection and computation of hand-crafted IMs, GMRs were directly used as inputs for CNN-based seismic damage assessment and seismic vibration identification [22,23,24,25]. Similar to the application of CNNs in 2D image and video analyses [26,27,28], 2D GMR images were generated from raw 1D GMRs with wavelet transform (WT) encoding for CNN-based seismic assessments [22,23,24]. WTs have a long history of application in GMR classification. For example, in [29,30,31], WTs provided detailed time-frequency information of GMRs in the form of 2D images through time-variant spectral decomposition to classify near-fault GMRs. However, there are other methods for encoding 1D time series into 2D images, such as recurrence plots (RP) [32], Gramian angular summation/difference fields (GAF), and Markov transition fields (MTF) [33]. Tests of the RP and GAF–MTF encoding techniques on the same datasets from the University of California, Riverside (UCR) Time Series Classification Archive [34] indicated that RP encoding achieved better prediction performance with a smaller encoded data size in CNN-based time-series image classification than GAF–MTF encoding.
In the previous work of Yuan et al. [25], RP encoding and the newly proposed time-series segmentation (TS) were compared to the widely used WT encoding to explore the most suitable image encoding method for CNN-based seismic damage assessment. It was found that WT encoding yielded the highest prediction accuracy among the three techniques, making it the preferred image encoding method for CNN-based seismic damage assessment. Although contemporary CNN-based seismic damage assessments avoid the computation and selection of hand-crafted IMs, the encoding of a 1D GMR time series into 2D GMR images increases the training time for a large set of GMRs. The CNN training time with WT encoding reached more than 20 h on a general-purpose computer without graphics processing unit (GPU) farms, according to Lu et al. [23]. The expensive computational overhead of a CNN-based seismic damage assessment approach with GMR image encoding compromises its advantage of avoiding the computation and selection of hand-crafted IMs. Therefore, a direct end-to-end approach that can use 1D GMRs as inputs is more desirable than the contemporary CNN approaches that use GMR images as inputs for seismic damage assessment.
1D CNNs have been widely used in real-time time-series classification tasks [35,36,37]. As stated by Kiranyaz et al. [38], for 1D time-series classification, 1D CNNs are superior to deep 2D CNNs in terms of cost-effectiveness, computational efficiency, and ease of practical deployment. Moreover, 1D CNNs need relatively shallow architectures to handle challenging 1D signal tasks, whereas 2D CNNs require an extra image-encoding step and deeper architectures for the same tasks. Additionally, 1D CNNs can be trained quickly on a general-purpose computer with a central processing unit (CPU) configuration, while deep 2D CNNs usually require special hardware setups such as GPU farms or cloud computing to reduce training time. Therefore, 1D CNNs are preferable for real-time 1D signal classification tasks. Post-earthquake damage assessment is inherently a classification task of 1D GMRs according to their resultant structural damage states. Hence, this study proposes the use of 1D CNNs, instead of 2D CNNs, for post-earthquake damage assessment. The 1D CNN seismic classifier does not use extra preprocessing such as the wavelet transform encoding of GMRs into 2D GMR images. Instead, it uses the 1D GMRs (accelerograms) directly to predict the resultant seismic damage states of structures. To demonstrate the advantages of 1D CNN seismic classifiers, a case study of a benchmark r/c building was conducted. Three neural network models—namely, a 1D CNN model, a 2D CNN model with WT encoding, and an FNN model—were trained under the same computing configuration with the same training and validation datasets, and their prediction performances were compared on the same test dataset. The three models were evaluated in terms of computational efficiency and prediction accuracy.
The rest of the paper is organized as follows. First, the methodology of the 1D CNN post-earthquake damage assessment is presented. Second, the detailed architectures of the three neural network models are introduced. Third, a case study is presented in which the three models are trained with the same training, validation, and test datasets obtained from the nonlinear time history analyses (NLTHA) of a benchmark r/c frame building. Fourth, the prediction performances and computational efficiencies of the three neural network models are evaluated based on the case-study results. Finally, conclusions and recommendations for future research are presented.

2. Methodology of 1D CNN Seismic Damage Assessment

2.1. 1D CNN and 2D CNN Approaches

The 2D CNN-based seismic damage assessment used as a reference is briefly summarized before the 1D CNN approach is presented. Figure 1 shows the pipeline of the 2D CNN approach (dash arrows) as summarized from the previous studies [22,23,25]. The 1D GMRs had to be transformed into 2D GMR images through an encoding technique (e.g., WT encoding) before they were inputted into the 2D CNN model for post-earthquake damage assessment. As a widely used image encoding technique in GMR classification, WT encoding achieved the highest prediction accuracy among three image encoding techniques [25] and was thus used to encode GMR images in this study via the toolbox developed by [39]. In contrast, the 1D CNN approach (solid arrows), in Figure 1, directly used the recorded GMRs in the 1D CNN model and predicted their resultant damage states. The heavy preprocessing of WTs in the 2D CNN approach was avoided in the 1D CNN approach. Therefore, the 1D CNN approach would be preferred if it could maintain a comparable or higher prediction accuracy, as compared to the 2D CNN approach.
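For concreteness, the following is a minimal sketch of the WT image-encoding step in the 2D CNN pipeline using PyWavelets [39]. The choice of the Morlet mother wavelet and the use of absolute coefficient values are assumptions made here for illustration; the paper specifies only the toolbox and, in Section 3, the 128 wavelet scales and 1500-point records.

```python
import numpy as np
import pywt  # PyWavelets, the toolbox cited as [39]

def encode_gmr_to_wt_image(gmr, scales=np.arange(1, 129), wavelet="morl", dt=0.02):
    """Encode a 1D ground motion record (accelerogram) into a 2D
    time-frequency image via the continuous wavelet transform.

    The 'morl' mother wavelet and the use of |coefficients| are assumptions;
    the paper only states that PyWavelets was used with scales 1-128.
    """
    coeffs, _ = pywt.cwt(gmr, scales, wavelet, sampling_period=dt)
    return np.abs(coeffs)  # shape: (128, len(gmr)), e.g., 128 x 1500

# Example: a synthetic 30 s record sampled at 0.02 s (1500 points)
gmr = np.random.randn(1500)
image = encode_gmr_to_wt_image(gmr)
print(image.shape)  # (128, 1500)
```

This encoding step, repeated for every GMR in the dataset, is exactly what the 1D CNN approach avoids.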

2.2. 1D CNN Architecture

The architecture of a 1D CNN consists of two main components, the feature extracting layers and the fully connected layers, as shown in Figure 2. The feature extracting layers of the first component are mainly composed of convolutional layers and pooling layers. The fully connected layers form a shallow FNN classifier with an architecture similar to that used in [17]. The difference is that hand-crafted IMs were inputted into the FNN model in [17], whereas here the features automatically extracted by the first component of the CNN (denoted as X̃ in Figure 2) are inputted into the FNN classifier. To extract the features of the input 1D GMR X, a kernel W with M learnable parameters conducts the convolution operation by sliding over the vector X with a stride S. With K kernels in one convolutional layer, K corresponding feature vectors are extracted (denoted as Depth K in Figure 2; in 2D CNNs, these are known as feature maps). The ReLU activation function σ(·) was adopted to activate the neurons because it is computationally efficient [40]. The convolution operation is represented in Equation (1):
$$ p = \sigma(X \mid \theta) = \sigma(W * X + b), \qquad \theta = \{W, b\} \tag{1} $$
where ∗ denotes the convolution operation. For an input GMR vector X with a length of N, a scanning stride of S, and a kernel of M trainable parameters, the length of the convolution output p through the ReLU activation function σ(·) is (N − M)/S + 1. W represents the kernel parameters, and b is the bias term; together they form the set of learnable parameters θ. The pooling layer applies a similar scanning operation with a certain stride (e.g., 3) to each feature vector generated by the preceding convolutional layer, extracting the maximum or the average value of each local receptive slice. Max-pooling is commonly used in CNN training due to its superior performance [41]. Through several stacked convolutional and max-pooling layers, the extracted high-level features are flattened into a feature vector (denoted as X̃) and inputted into the FNN classifier composed of fully connected layers. The FNN classifier contains its own set of trainable synaptic weights (denoted as θ̃) and propagates the extracted features to the final output layer, where the softmax activation function [42] is used for the seismic damage state prediction. The 2D CNN architecture is very similar to the 1D CNN architecture, except that the inputs are 2D GMR images and the convolution and pooling operations are conducted on a 2D matrix in the horizontal and vertical directions simultaneously.
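As a quick check of the output-length relation (N − M)/S + 1, the following illustrative sketch compares it against a Keras Conv1D layer with "valid" padding. The values of M, S, and K below are arbitrary and are not the values used in the models of Section 3.

```python
import tensorflow as tf

N, M, S, K = 1500, 9, 1, 16  # input length, kernel size, stride, number of kernels (illustrative)
x = tf.random.normal((1, N, 1))  # one GMR with a single channel
conv = tf.keras.layers.Conv1D(filters=K, kernel_size=M, strides=S,
                              padding="valid", activation="relu")
p = conv(x)
print(p.shape)           # (1, (N - M)//S + 1, K) -> (1, 1492, 16)
print((N - M) // S + 1)  # 1492
```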
The training of the 1D CNN model is an iterative tuning process of the learnable parameters {θ, θ̃} of the convolutional kernels and the fully connected synaptic weights to achieve the best prediction performance. To tune the parameters, the performance of the 1D CNN model was evaluated with the categorical cross-entropy (CE) loss function, defined in Equation (2):
$$ \mathrm{CE} = -\sum_{i}^{C} y_i \ln s_i \tag{2} $$
where C is the number of damage classes, y_i is the ground-truth (one-hot) label of an input GMR for class i, and s_i is the corresponding score obtained from the softmax activation function in the output layer of the FNN classifier. In one training iteration, a batch of GMRs from the training dataset was fed into the CNN model, and all trainable parameters in {θ, θ̃} were updated based on the CE loss through a gradient descent optimization algorithm. This training strategy is also known as batch training [43]. A training epoch was completed when all the samples in the training dataset had been fed into the model once. A basic gradient descent optimizer is given in Equation (3):
$$ \{\theta, \tilde{\theta}\}_{\mathrm{new}} = \{\theta, \tilde{\theta}\}_{\mathrm{old}} - \eta \frac{\partial \mathrm{CE}}{\partial \{\theta, \tilde{\theta}\}} \tag{3} $$
where η is the learning rate and ∂ is the partial differential operator. More advanced optimizers derived from Equation (3) are reviewed in [44]; the Adam optimizer [45] was adopted in this study due to its superior performance. Finally, the optimal parameter set {θ, θ̃}* was acquired as the one achieving the lowest CE on a separate validation dataset. The trained 1D CNN model with the optimal parameters was saved and can be used to rapidly predict the resultant damage states of structures from future GMRs.
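Equations (2) and (3) can be illustrated with a toy numerical example; the values below are arbitrary and only demonstrate how the loss and a basic gradient-descent update are evaluated.

```python
import numpy as np

# Toy illustration of Eq. (2): categorical cross-entropy for one GMR
# with C = 3 damage classes (green, yellow, red).
y = np.array([0.0, 1.0, 0.0])   # ground-truth one-hot label (yellow)
s = np.array([0.2, 0.7, 0.1])   # softmax scores from the output layer
ce = -np.sum(y * np.log(s))     # = -ln(0.7) ≈ 0.357
print(ce)

# Toy illustration of Eq. (3): one basic gradient-descent step on a scalar parameter.
theta = 1.0    # a learnable parameter
eta = 0.01     # learning rate
grad = 2.5     # dCE/dtheta, as if obtained from back-propagation
theta_new = theta - eta * grad
print(theta_new)  # 0.975
```

In practice, the Adam optimizer replaces this plain update with adaptive, momentum-based steps, but the principle of descending the CE gradient is the same.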

3. Configuration of Three Neural Network Models

In this study, three neural network models were trained and compared with the same training, validation, and test GMR datasets in order to evaluate their performances. GMRs in different earthquake events have highly variable durations, lasting from several seconds to hundreds of seconds. Therefore, a short pre-defined 30 s duration for each GMR, centered on the peak ground acceleration (PGA), as adopted in [23], was extracted if the GMRs were longer than the pre-defined duration. A time step of 0.02 s was adopted in this study as a common time step used in NLTHA under earthquake excitations [46]. Thus, each GMR inputted into the neural network models contained 1500 data points. Those raw GMRs that had fewer than 1500 data points were padded with zeros to reach 1500. Figure 3 shows the architectures of the three neural network models: the 2D CNN model, the 1D CNN model, and the FNN model. For the 2D CNN model, the WT-encoded GMR image size was 128 × 1500, corresponding to wavelet scales ranging from 1 to 128. The 1D and 2D CNN models both had four convolutional layers and four max-pooling layers with the same feature map (vector) depth in each convolutional layer. The kernel sizes of the convolution and max-pooling layers are shown in Figure 3. In the fully connected layers, the dropout technique was applied with a probability of 0.3 to reduce overfitting in the deep networks and improve the generalization ability [47]. In addition to the two CNN models, the third FNN model was also trained with the same GMR inputs as were used in the CNN models. Similar to the fully connected layers in the two CNN models, the FNN model had an extra hidden layer with 200 neurons, because two hidden layers are usually recommended to form the feature space in FNNs [48] that do not receive high-level extracted features as inputs. A dropout probability of 0.5 was used in the two hidden layers of the FNN model to further regularize the training process and avoid overfitting. These three models were trained and tested under the same computing configuration and using the same training, validation, and test datasets of GMRs in the following case study. Their performance was evaluated in terms of model computational efficiency and prediction accuracy. Details of these three models are presented at https://github.com/yuanxzMST/1d-cnn-GMRs (accessed on 8 October 2021).
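To make the 1D architectures concrete, a hedged Keras sketch of the 1D CNN and FNN models is given below. The filter counts, kernel sizes, and dense-layer widths are assumptions for illustration only (the exact values are given in Figure 3 and in the linked repository), so the parameter counts of this sketch will not match those reported later in Table 1.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_1d_cnn(input_length=1500, n_classes=3):
    """Sketch of the 1D CNN in Figure 3a: four Conv1D/MaxPooling1D blocks
    followed by fully connected layers with dropout 0.3.
    Filter counts, kernel sizes, and dense widths are assumed values."""
    return tf.keras.Sequential([
        layers.Input(shape=(input_length, 1)),
        layers.Conv1D(16, 9, activation="relu"), layers.MaxPooling1D(3),
        layers.Conv1D(32, 9, activation="relu"), layers.MaxPooling1D(3),
        layers.Conv1D(64, 9, activation="relu"), layers.MaxPooling1D(3),
        layers.Conv1D(64, 9, activation="relu"), layers.MaxPooling1D(3),
        layers.Flatten(),
        layers.Dense(200, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(n_classes, activation="softmax"),
    ])

def build_fnn(input_length=1500, n_classes=3):
    """Sketch of the FNN in Figure 3c: two hidden layers (one of 200 neurons,
    per the text; the other width is assumed) with dropout 0.5,
    taking the raw 1500-point GMR as input."""
    return tf.keras.Sequential([
        layers.Input(shape=(input_length,)),
        layers.Dense(200, activation="relu"), layers.Dropout(0.5),
        layers.Dense(200, activation="relu"), layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])

# Parameter counts of this illustrative sketch (not the paper's exact models)
print(build_1d_cnn().count_params(), build_fnn().count_params())
```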

4. Case Study

4.1. Benchmark Building and NLTHA

A benchmark building analyzed by [49] was selected as the case study structure. This benchmark building was designed as a hypothetical four-story office building based on the 2003 International Building Code (IBC). The building was located in a typical high-seismic urban region of California. The design incorporated a perimeter-frame system, with four perimeter frames mainly providing the lateral resistance for the building. Figure 4 shows a 2D nonlinear finite element model of one perimeter frame of the benchmark building in OpenSEES [50]. In Figure 4a, the symbols "b", "h", and "bar #" denote the width (m), depth (m), and steel rebar diameter (mm) of the beams and columns, respectively.
The symbol "ρ" denotes the reinforcement ratio of the columns, and the symbols "ρbot" and "ρtop" are the bottom and top reinforcement ratios of the beams. In this study, all the columns were fixed at the base since their support conditions had little effect on seismic responses. Figure 4b represents the fiber-distributed plasticity element used to simulate the nonlinearity of the beams and the columns of the frame. Each structural element consisted of five integration points. The material characteristics of concrete (with a nominal compressive strength of 34.5 MPa) and steel (with an expected yield strength of 462 MPa) were captured by the Concrete02 and Steel02 material models in OpenSEES, respectively. The stress–strain relationships (left) and the hysteretic behavior (right) of Concrete02 (top) and Steel02 (bottom) are shown in Figure 4c. Additional details regarding these two materials can be found in [51,52].
The dynamic analysis was conducted on the computational model of the benchmark building. The fundamental period of the fixed-base perimeter frame in Figure 4a was 0.724 s, slightly lower than the 0.75 s of the flexible-base model in [49] due to the increased stiffness of the fixed base. Using the maximum interstory drift ratio (MIDR) as the damage index in [49], the frame would fail at an MIDR range of 0.07–0.12; 39% of the failure scenarios formed a collapse mechanism at the first story, while approximately 61% formed the mechanism at the third story. For the GMRs with a typical design basis exceedance probability (10% in 50 years), the MIDRs of the frames ranged from 0.005 to 0.02, which satisfied the design limits set by the design code. For the task of seismic damage assessment, a flexible number of damage states can be defined to describe the building's damage conditions. Two damage classes—namely, the collapse class and the non-collapse class—were defined in machine learning-based approaches to predict the collapse of ductile r/c building frames [53]. Five damage states of r/c buildings were used in neural network-based approaches to classify seismic structural damage [16,17,23]. Three damage states were used to evaluate the performance of r/c buildings after earthquake events in [22,25,54], following the guidelines of the Applied Technology Council (ATC)-20 [55] and ATC-40 [56]. Three damage states, i.e., safe (green placard), limited entry (yellow placard), and unsafe (red placard), were defined in this study, following ATC tag rules. Based on the MIDR limits in [49], GMRs with MIDRs < 0.02 were labeled with green placards, GMRs with 0.02 ≤ MIDRs < 0.05 were labeled with yellow placards, and GMRs with 0.05 ≤ MIDRs were labeled with red placards. In particular, the lower bound of 0.05 for the red tag was determined as the mean value minus three times the standard deviation of the collapse MIDRs (in the range of 0.07–0.12) identified in [49], so as to be conservative in the evaluation of unsafe structures.
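The placard assignment described above reduces to a simple thresholding of the MIDR obtained from NLTHA, as in the following sketch.

```python
def placard_from_midr(midr):
    """Assign an ATC-style placard from the maximum interstory drift ratio (MIDR),
    following the limits adopted in this study."""
    if midr < 0.02:
        return "green"   # safe
    elif midr < 0.05:
        return "yellow"  # limited entry
    else:
        return "red"     # unsafe

print(placard_from_midr(0.013), placard_from_midr(0.034), placard_from_midr(0.08))
# green yellow red
```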
A total of 1993 worldwide horizontal historical GMRs with PGAs higher than 0.15 g (g = 9.81 m/s²) were selected from the PEER GMR database (https://ngawest2.berkeley.edu/, accessed on 5 April 2021) for the NLTHA of the benchmark building. However, due to the stringent seismic design of the building, only 7 out of the 1993 historical GMRs yielded a red tag, given the NLTHA results. A simple way to obtain enough strong GMRs was to uniformly scale up the historical GMRs by a scaling factor [2]. However, scaling factors, along with structural properties, GMR intensity, and types of seismic responses, can cause biased NLTHA results [2,57,58]. In the engineering seismology community, common scaling factors vary from 1 to 10 or more [58]. An alternative way is to synthesize GMRs based on historical GMRs, even though it takes more computing time and resources. To obtain enough red tag GMRs and limit the bias introduced by scaling factors, the 1993 historical GMRs were scaled by factors from 2 to 10 with an increment of 1. Thus, 17,937 extra GMRs were obtained. Meanwhile, 2500 synthesized GMRs were generated; they were spectrum- and energy-compatible with the historical GMRs, based on an algorithm developed in [59]. Therefore, 22,430 GMRs were obtained and inputted into the building model for NLTHA. To further limit the bias introduced by the potentially excessive scaling of historical GMRs, the MIDRs of the 22,430 GMRs were examined, and the GMRs resulting in MIDRs > 0.12 were excluded because the perimeter frame would have collapsed at an MIDR larger than 0.12, making it impossible for the numerical model to reach a converged solution due to instability. An examination of the excluded GMRs showed that most were scaled up with a factor over 8, confirming that the excessively scaled GMRs were removed. Among the remaining 17,647 GMRs, only 1067 GMRs were labeled with red tags. To form a balanced training dataset and to ensure that the performance of the neural network models was not biased by the class that had the most training GMR samples, 1067 GMRs each were randomly selected from the green and yellow classes. Finally, a balanced dataset composed of 3201 GMRs was collected to train the neural network models, in which the green, yellow, and red classes each had 1067 GMRs. The process of GMR selection is shown in Figure 5.
The 3201 collected GMRs were preprocessed, as mentioned in Section 3, to extract a duration of 30 s centered on their PGAs before training the three neural network models. For the 2D CNN model, the extracted GMRs were further encoded into 2D WT images, while the 1D GMRs were directly inputted into the 1D CNN model and the FNN model. The 3201 collected GMRs were randomly shuffled and proportionally divided into the training set, the validation set, and the test set by 0.70:0.15:0.15. The approximately balanced training set had 2241 GMRs made up of 754 green GMRs, 753 yellow GMRs, and 734 red GMRs. The validation set had 481 GMRs, and the test set had 479 GMRs. These three sets were used to train and test the three neural network models, as described in the following section.
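A minimal sketch of the window extraction and zero padding described above is given below. The handling of windows that would extend past the ends of a record is an assumption, since the paper does not spell it out.

```python
import numpy as np

def extract_window(gmr, dt=0.02, duration=30.0):
    """Extract a fixed-duration window centered on the PGA and zero-pad short
    records, as described in Section 3 (1500 points at a 0.02 s time step)."""
    n = int(duration / dt)                     # 1500 points
    peak = int(np.argmax(np.abs(gmr)))         # PGA location
    start = max(0, min(peak - n // 2, len(gmr) - n))  # keep the window inside the record
    window = gmr[start:start + n]
    if len(window) < n:                        # pad short records with trailing zeros
        window = np.pad(window, (0, n - len(window)))
    return window

# Example with a 60 s synthetic record sampled at 0.02 s
record = np.random.randn(3000)
print(extract_window(record).shape)  # (1500,)
```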

4.2. Model Training, Validation, and Test

The three models were trained and tested with the same datasets under the same computing configuration. A workstation with a GeForce RTX 2080 GPU (8 GB memory) was used to train the three neural network models. The TensorFlow 2.2.0 platform [60] was adopted to build and train the models. During the training process, the training set was mainly used to tune the learnable parameters. In each training iteration, a batch of 32 GMRs from the training set was fed into the model to update the parameters with the Adam optimizer. Each training epoch had 70 training iterations to feed all 2241 training GMRs into the model once. Fifty training epochs were conducted for each model. The validation set was utilized to monitor the performance of the model during the training process, but it was not used to update the learnable parameters. The model checkpoints at which the trained models achieved the smallest validation losses were saved for assessment using the unseen test set. Figure 6a,b depicts the training and validation histories of the three models. The 1D CNN model and the 2D CNN model had similar loss histories on the same training and validation sets. However, the FNN model had higher losses and lower accuracy than the CNN models. This result indicated that the high-level features extracted by the convolutional and max-pooling layers in the CNNs characterize the GMRs better than the two-hidden-layer feature space in the FNN model. The stars in Figure 6a,b denote the epochs at which the three models achieved their lowest validation losses. The tuned weights at these epochs were saved for a further performance assessment using the same unseen test set, which was not involved in the training process.
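The training setup described above can be sketched with TensorFlow/Keras as follows. Random arrays stand in for the real GMR datasets, and the model is assumed to come from the hypothetical build_1d_cnn sketch shown after Section 3; this is an illustrative sketch, not the authors' exact script.

```python
import numpy as np
import tensorflow as tf

# Stand-in data with the stated dataset sizes: 2241 training and 481 validation GMRs,
# each a 1500-point accelerogram, labeled with one of 3 damage classes.
X_train = np.random.randn(2241, 1500, 1).astype("float32")
y_train = tf.keras.utils.to_categorical(np.random.randint(0, 3, 2241), 3)
X_val = np.random.randn(481, 1500, 1).astype("float32")
y_val = tf.keras.utils.to_categorical(np.random.randint(0, 3, 481), 3)

model = build_1d_cnn()  # from the illustrative sketch after Section 3 (assumed available)
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Keep only the weights with the lowest validation loss, as described above.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_1d_cnn.h5", monitor="val_loss", save_best_only=True)

model.fit(X_train, y_train, validation_data=(X_val, y_val),
          batch_size=32, epochs=50, callbacks=[checkpoint])
# 2241 samples / batch size 32 ≈ 70 iterations per epoch, as stated above.
```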
The performance assessment on the test set is shown in Figure 6. Figure 6c–e presents the confusion matrices of the three models on the same test set. The test set had 479 GMRs composed of 155 green GMRs, 166 yellow GMRs, and 158 red GMRs. The rows of the confusion matrices are the classes predicted by the neural network models, and the columns are the true classes of the GMRs obtained from NLTHA. The cells denoted as "recall" in the last row indicate the portion of correctly classified GMRs in each true class. The cells denoted as "precision" in the last column indicate the portion of correctly classified GMRs in each predicted class. The cell in the bottom right corner denoted as "accuracy" is the portion of correctly classified GMRs in the whole test set. Recall, precision, and accuracy values closer to 1 indicate better model performance [61]. Figure 6c–e shows that the 1D and 2D CNN models had similar recall, precision, and accuracy, whereas the FNN model had lower recall, precision, and accuracy than the two CNN models. This comparison indicated that the CNN models outperformed the FNN model in the GMR-dependent seismic damage assessment.
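For reference, the confusion-matrix statistics reported in Figure 6c–e (per-class recall and precision, and overall accuracy) can be computed as in the following sketch. The use of scikit-learn is an assumption, and random predictions stand in for the model outputs.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

classes = ["green", "yellow", "red"]
y_true = np.random.randint(0, 3, 479)  # labels from NLTHA (stand-in values)
y_pred = np.random.randint(0, 3, 479)  # argmax of the softmax scores (stand-in values)

# Note: scikit-learn places true classes in rows and predicted classes in columns,
# i.e., the transpose of the layout described for Figure 6c-e.
print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred, target_names=classes))
```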

5. Discussion

The model training, validation, and test results in Section 4.2 demonstrate that the 1D CNN model and the 2D CNN model performed similarly, whereas the FNN model performed at a lower level. In this section, the computing time and computing resources consumed by the three models are investigated to evaluate their computing efficiencies. The same model training and test strategies outlined in Section 4.2 were also run in a CPU-only computing configuration. The workstation CPU was an Intel Xeon E5-2670 v3 @ 2.30 GHz with 126 GB memory and 48 cores. Table 1 compares the three neural network models under the same computing configurations with a GPU and/or a CPU. Although the 2D CNN model with the WT-encoded GMR images achieved the highest prediction accuracy of 81.6%, the WT encoding of the GMRs, the model training, and the testing of the 2D CNN model consumed significantly more time than those of the 1D CNN model and the FNN model. The 1D CNN model achieved a nearly identical prediction accuracy of 81.0% while requiring much less computing time and fewer resources. As compared to the 2D CNN model, the 1D CNN model reduced GMR preprocessing time by 99.9%, training time by 96%, and testing time by 90%. The FNN model took the least time among the three models but was also the least accurate at 70.8%. Therefore, the FNN model is not recommended for a GMR-dependent seismic damage assessment; hand-crafted IMs would be required by the FNN model to achieve high prediction performance, as demonstrated in previous studies [15,16,17].
In terms of computing resources, GPUs are more expensive and have less memory than CPUs. The 2D CNN model had the most trainable parameters and almost reached the GPU memory limit because the WT-encoded GMR images had large input sizes; a batch size larger than 32 in the 2D CNN training could exhaust the GPU memory. Compared to the 2D CNN model, the 1D CNN model reduced the trainable parameters by 86% and the memory usage by 69%. The FNN model had the smallest memory usage due to its shallow architecture and moderate number of learnable parameters, but it is not recommended because it was the least accurate. Compared to the 1D CNN model, the 2D CNN model with WT encoding increases the dimensionality of the input GMRs, further leading to computing inefficiency, especially in CPU computing. As stated in [62], data volume, dimensionality, and acceptable time and storage complexities are important considerations for an efficient and effective algorithm. Therefore, based on the results of prediction performance, computing time, and resource usage, the 1D CNN model is recommended for rapid post-earthquake damage assessment among the three studied neural network models. Note that the scarcity of reliable historical GMRs at a specific site, which forces the use of GMRs scaled and synthesized from historical records worldwide, remains a challenge for training neural network-based seismic classifiers, which require a large training set to achieve desirable prediction performance.

6. Conclusions

Existing CNN-based approaches encode 1D ground motion records into 2D images for structural damage classification via deep 2D CNN models. This process leads to unnecessarily high computing time and expensive computing resources. Instead, this study directly inputted 1D ground motion records into a 1D CNN model for structural damage classification. The developed 1D CNN model avoided the redundant 2D image encoding and its expensive computing time. A benchmark r/c frame building was studied using three neural network models: a 1D CNN model, a 2D CNN model, and a shallow FNN model. They were trained, validated, and tested with the same dataset, obtained from the NLTHA of the benchmark building under 3201 ground excitations, in the same computing environment. The case-study results showed that the 2D CNN model yielded the highest prediction accuracy (81.6%), the 1D CNN model provided a similar level of prediction accuracy (81.0%), and the FNN model had the lowest prediction accuracy (70.8%). In comparison with the 2D CNN model, the 1D CNN model reduced trainable parameters by 86%, memory usage by 69%, GMR preprocessing time by 99.9%, training time by 96%, and testing time by 90%. The proposed 1D CNN model is thus recommended for CNN-based post-earthquake damage assessments due to its high prediction accuracy and computational efficiency. Future studies should focus on the influence of GMR duration and time step on the 1D CNN performance. The optimal size of a GMR dataset also needs to be determined to avoid time-consuming NLTHA. Moreover, the 1D CNN model should be further validated with other structure types, such as bridges, in a future study.

Author Contributions

X.Y. contributed to the methodology of the study. X.Y. and D.T. contributed to the CNN model training. X.Y. and H.Z. performed the numerical simulation of the building. X.Y. wrote the first draft of the manuscript. L.L., G.C., D.W., D.T. and X.Y. contributed to the manuscript revision and review. All authors approved the submitted version. All authors have read and agreed to the published version of the manuscript.

Funding

Financial support to complete this study was provided in part by the U.S. Department of Transportation, Office of Assistant Secretary for Research and Technology under the auspices of Mid-America Transportation Center at the University of Nebraska, Lincoln (grant no. 00072738). Partial support for this research was received from the Missouri University of Science and Technology Intelligent Systems Center, the Mary K. Finley Missouri Endowment, the National Science Foundation, the Lifelong Learning Machines program from DARPA/Microsystems Technology Office, the Army Research Laboratory (ARL), and the Leonard Wood Institute; and it was accomplished under Cooperative Agreement Numbers W911NF-18-2-0260 and W911NF-14-2-0034. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Leonard Wood Institute, the National Science Foundation, the Army Research Laboratory, or the U.S. Government.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article may be made available upon request from the corresponding author.

Acknowledgments

The help from Pu Jiao and Jun Han with the numerical simulation of the benchmark building is highly acknowledged.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cornell, C.A.; Jalayer, F.; Hamburger, R.O.; Foutch, D.A. Probabilistic Basis for 2000 SAC Federal Emergency Management Agency Steel Moment Frame Guidelines. J. Struct. Eng. 2002, 128, 526–533. [Google Scholar] [CrossRef] [Green Version]
  2. Vamvatsikos, D.; Cornell, C.A. Incremental dynamic analysis. Earthq. Eng. Struct. Dyn. 2002, 31, 491–514. [Google Scholar] [CrossRef]
  3. Jalayer, F.; Cornell, C.A. Alternative non-linear demand estimation methods for probability-based seismic assessments. Earthq. Eng. Struct. Dyn. 2009, 38, 951–972. [Google Scholar] [CrossRef]
  4. Baker, J.W.; Cornell, C.A. Vector-valued intensity measures for pulse-like near-fault ground motions. Eng. Struct. 2008, 30, 1048–1057. [Google Scholar] [CrossRef]
  5. Baker, J.W.; Cornell, C.A. A vector-valued ground motion intensity measure consisting of spectral acceleration and epsilon. Earthq. Eng. Struct. Dyn. 2005, 34, 1193–1217. [Google Scholar] [CrossRef]
  6. Jalayer, F.; Ebrahimian, H.; Miano, A.; Manfredi, G.; Sezen, H. Analytical fragility assessment using unscaled ground motion records. Earthq. Eng. Struct. Dyn. 2017, 46, 2639–2663. [Google Scholar] [CrossRef]
  7. Pang, Y.; Wang, X. Cloud-IDA-MSA conversion of fragility curves for efficient and high-fidelity resilience assessment. J. Struct. Eng. 2021, 147, 04021049. [Google Scholar] [CrossRef]
  8. Jalayer, F.; Elefante, L.; De Risi, R.; Manfredi, G. Cloud Analysis revisited: Efficient fragility calculation and uncertainty propagation using simple linear regression. In Proceedings of the NCEE 2014—10th U.S. National Conference on Earthquake Engineering Frontiers of Earthquake Engineering, Anchorage, AK, USA, 21–25 July 2014. [Google Scholar] [CrossRef]
  9. Miano, A.; Jalayer, F.; Ebrahimian, H.; Prota, A. Cloud to IDA: Efficient fragility assessment with limited scaling. Earthq. Eng. Struct. Dyn. 2018, 47, 1124–1147. [Google Scholar] [CrossRef]
  10. Giovenale, P.; Cornell, C.A.; Esteva, L. Comparing the adequacy of alternative ground motion intensity measures for the estimation of structural responses. Earthq. Eng. Struct. Dyn. 2004, 33, 951–979. [Google Scholar] [CrossRef]
  11. Grigoriu, M. Do seismic intensity measures (IMs) measure up? Probab. Eng. Mech. 2016, 46, 80–93. [Google Scholar] [CrossRef] [Green Version]
  12. Kostinakis, K.; Papadopoulos, M.; Athanatopoulou-Kyriakou, A. Adequacy of advanced earthquake intensity measures for estimation of damage under seismic excitation with arbitrary orientation. In Proceedings of the CCSEE 2014: International Conference on Civil, Structural and Earthquake Engineering, Paris, France, 28–29 April 2014. [Google Scholar]
  13. Riddell, R. On Ground Motion Intensity Indices. Earthq. Spectra 2007, 23, 147–173. [Google Scholar] [CrossRef]
  14. Du, A.; Padgett, J.E.; Shafieezadeh, A. Influence of intensity measure selection on simulation-based regional seismic risk assessment. Earthq. Spectra 2020, 36, 647–672. [Google Scholar] [CrossRef]
  15. Kostinakis, K.; Morfidis, K. Application of Artificial Neural Networks for the Assessment of the Seismic Damage of Buildings with Irregular Infills’ Distribution. Geotech. Geol. Earthq. Eng. 2020, 291–306. [Google Scholar] [CrossRef]
  16. Morfidis, K.; Kostinakis, K. Comparative evaluation of MFP and RBF neural networks’ ability for instant estimation of r/c buildings’ seismic damage level. Eng. Struct. 2019, 197. [Google Scholar] [CrossRef]
  17. Morfidis, K.; Kostinakis, K. Approaches to the rapid seismic damage prediction of r/c buildings using artificial neural networks. Eng. Struct. 2018, 165, 120–141. [Google Scholar] [CrossRef]
  18. Xu, Y.; Lu, X.; Tian, Y.; Huang, Y. Real-Time Seismic Damage Prediction and Comparison of Various Ground Motion Intensity Measures Based on Machine Learning. J. Earthq. Eng. 2020, 1–21. [Google Scholar] [CrossRef]
  19. Padgett, J.E.; Nielson, B.G.; DesRoches, R. Selection of optimal intensity measures in probabilistic seismic demand models of highway bridge portfolios. Earthq. Eng. Struct. Dyn. 2008, 37, 711–725. [Google Scholar] [CrossRef]
  20. Wang, X.; Shafieezadeh, A.; Ye, A. Optimal intensity measures for probabilistic seismic demand modeling of extended pile-shaft-supported bridges in liquefied and laterally spreading ground. Bull. Earthq. Eng. 2017, 16, 229–257. [Google Scholar] [CrossRef]
  21. Kostinakis, K.; Athanatopoulou, A.; Morfidis, K. Correlation between ground motion intensity measures and seismic damage of 3D R/C buildings. Eng. Struct. 2015, 82, 151–167. [Google Scholar] [CrossRef]
  22. Mangalathu, S.; Jeon, J.-S. Ground Motion-Dependent Rapid Damage Assessment of Structures Based on Wavelet Transform and Image Analysis Techniques. J. Struct. Eng. 2020, 146, 04020230. [Google Scholar] [CrossRef]
  23. Lu, X.; Xu, Y.; Tian, Y.; Cetiner, B.; Taciroglu, E. A deep learning approach to rapid regional post-event seismic damage assessment using time-frequency distributions of ground motions. Earthq. Eng. Struct. Dyn. 2021, 50, 1612–1627. [Google Scholar] [CrossRef]
  24. Liao, W.; Chen, X.; Lu, X.; Huang, Y.; Tian, Y. Deep Transfer Learning and Time-Frequency Characteristics-Based Identification Method for Structural Seismic Response. Front. Built Environ. 2021, 7, 58. [Google Scholar] [CrossRef]
  25. Yuan, X.; Tanksley, D.; Jiao, P.; Li, L.; Chen, G.; Wunsch, D. Encoding Time-Series Ground Motions as Images for Convolutional Neural Networks-Based Seismic Damage Evaluation. Front. Built Environ. 2021, 7, 103. [Google Scholar] [CrossRef]
  26. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef] [Green Version]
  27. Kussul, E.M.; Baidyk, T.N.; Wunsch, D.C.; Makeyev, O.; Martín, A. Permutation Coding Technique for Image Recognition Systems. IEEE Trans. Neural Netw. 2006, 17, 1566–1579. [Google Scholar] [CrossRef] [Green Version]
  28. Li, L.; Fan, Y.; Huang, X.; Tian, L. Real-time UAV weed scout for selective weed control by adaptive robust control and machine learning algorithm. In Proceedings of the 2016 American Society of Agricultural and Biological Engineers Annual International Meeting, ASABE, Orlando, FL, USA, 17–20 July 2016. [Google Scholar] [CrossRef]
  29. Baker, J.W. Quantitative Classification of Near-Fault Ground Motions Using Wavelet Analysis. Bull. Seism. Soc. Am. 2007, 97, 1486–1501. [Google Scholar] [CrossRef]
  30. Li, H.; Yi, T.; Gu, M.; Huo, L. Evaluation of earthquake-induced structural damages by wavelet transform. Prog. Nat. Sci. 2009, 19, 461–470. [Google Scholar] [CrossRef]
  31. Yaghmaei-Sabegh, S. Detection of pulse-like ground motions based on continues wavelet transform. J. Seism. 2010, 14, 715–726. [Google Scholar] [CrossRef]
  32. Debayle, J.; Hatami, N.; Gavet, Y. Classification of time-series images using deep convolutional neural networks. In Proceedings of the Tenth International Conference on Machine Vision, 2017, Vienna, Austria, 13 April 2018. [Google Scholar] [CrossRef] [Green Version]
  33. Wang, Z.; Oates, T. Encoding time series as images for visual inspection and classification using tiled convolutional neural networks. In Proceedings of the Workshops at the 29th AAAI Conference on Artificial Intelligence, Austin, TX, USA, 25–26 January 2015. [Google Scholar]
  34. Dau, H.A.; Bagnall, A.; Kamgar, K.; Yeh, C.C.M.; Zhu, Y.; Gharghabi, S.; Ratanamahatana, C.A.; Keogh, E. UCR Time Series Classification Archive. Available online: https://www.cs.ucr.edu/~eamonn/time_series_data/ (accessed on 20 April 2021).
  35. Abdoli, S.; Cardinal, P.; Koerich, A.L. End-to-end environmental sound classification using a 1D convolutional neural network. Expert Syst. Appl. 2019, 136, 252–263. [Google Scholar] [CrossRef] [Green Version]
  36. Abdeljaber, O.; Avci, O.; Kiranyaz, S.; Gabbouj, M.; Inman, D.J. Real-time vibration-based structural damage detection using one-dimensional convolutional neural networks. J. Sound Vib. 2017, 388, 154–170. [Google Scholar] [CrossRef]
  37. Kiranyaz, S.; Ince, T.; Abdeljaber, O.; Avci, O.; Gabbouj, M. 1-D convolutional neural networks for signal processing applications. In Proceedings of the ICASSP 2019–2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019. [Google Scholar] [CrossRef]
  38. Kiranyaz, S.; Avci, O.; Abdeljaber, O.; Ince, T.; Gabbouj, M.; Inman, D.J. 1D convolutional neural networks and applications: A survey. Mech. Syst. Signal Process. 2020, 151, 107398. [Google Scholar] [CrossRef]
  39. Lee, G.; Gommers, R.; Waselewski, F.; Wohlfahrt, K.; O’Leary, A. PyWavelets: A Python package for wavelet analysis. J. Open Source Softw. 2019, 4, 1237. [Google Scholar] [CrossRef]
  40. Li, Y.; Yuan, Y. Convergence analysis of two-layer neural networks with RELU activation. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  41. Cha, Y.-J.; Choi, W.; Büyüköztürk, O. Deep Learning-Based Crack Damage Detection Using Convolutional Neural Networks. Comput. Civ. Infrastruct. Eng. 2017, 32, 361–378. [Google Scholar] [CrossRef]
  42. Pang, T.; Xu, K.; Dong, Y.; Du, C.; Chen, N.; Zhu, J. Rethinking softmax cross-entropy loss for adversarial robustness. arXiv 2019, arXiv:1905.10626. (accessed on 6 April 2021). [Google Scholar]
  43. You, Y.; Gitman, I.; Ginsburg, B. Large batch training of convolutional networks. arXiv 2017, arXiv:1708.03888. [Google Scholar]
  44. Ruder, S. An overview of gradient descent optimization algorithms. arXiv 2016, arXiv:160904747. [Google Scholar]
  45. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  46. Tian, Y.; Yang, Q. On time-step in structural seismic response analysis under ground displacement/acceleration. Earthq. Eng. Eng. Vib. 2009, 8, 341–347. [Google Scholar] [CrossRef]
  47. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
  48. Haykin, S. Neural Networks and Learning Machines. 2008. Available online: http://plaza.ufl.edu/angelbar/courses/CAP6615_SPRING2012.pdf (accessed on 8 October 2021).
  49. Haselton, C.B.; Goulet, C.A.; Mitrani-Reiser, J.; Beck, J.L.; Deierlein, G.G.; Porter, K.A.; Stewart, J.P.; Taciroglu, E. An assessment to benchmark the seismic performance of a code-conforming reinforced concrete moment-frame building. Peer Rep. 2008. Available online: https://authors.library.caltech.edu/33801/1/web_PEER712_HASELTONetal.pdf (accessed on 8 October 2021).
  50. Mazzoni, S.; McKenna, F.; Scott, M.H.; Fenves, G.L. OpenSees Command Language Manual; Pacific Earthquake Engineering Research Center: Berkeley, CA, USA, 2006.
  51. Concrete02 Material–Linear Tension Softening-OpenSeesWiki n.d. Available online: https://opensees.berkeley.edu/wiki/index.php/Concrete02_Material_--_Linear_Tension_Softening (accessed on 15 April 2021).
  52. Steel02 Material—Giuffré-Menegotto-Pinto Model with Isotropic Strain Hardening-OpenSeesWiki n.d. Available online: https://opensees.berkeley.edu/wiki/index.php/Steel02_Material_--_Giuffré-Menegotto-Pinto_Model_with_Isotropic_Strain_Hardening (accessed on 15 April 2021).
  53. Hwang, S.-H.; Mangalathu, S.; Shin, J.; Jeon, J.-S. Machine learning-based approaches for seismic demand and collapse of ductile reinforced concrete building frames. J. Build. Eng. 2020, 34, 101905. [Google Scholar] [CrossRef]
  54. Mangalathu, S.; Sun, H.; Nweke, C.; Yi, Z.; Burton, H.V. Classifying earthquake damage to buildings using machine learning. Earthq. Spectra 2020, 36, 183–208. [Google Scholar] [CrossRef]
  55. Applied Technology Council (ATC). ATC-20: Procedures for Post-Earthquake Safety Evaluation of Buildings. 1989. Available online: https://inis.iaea.org/collection/NCLCollectionStore/_Public/23/008/23008479.pdf (accessed on 8 October 2021).
  56. Applied Technology Council. ATC-40: Seismic Evaluation and Retrofit of Concrete Buildings; Seismic Safety Commission: Redwood City, CA, USA, 1996. Available online: https://www.scirp.org/(S(351jmbntvnsjt1aadkozje))/reference/referencespapers.aspx?referenceid=1701260 (accessed on 8 October 2021).
  57. Bazzurro, P.; Cornell, C.A.; Shome, N.; Carballo, J.E. Three Proposals for Characterizing MDOF Nonlinear Seismic Response. J. Struct. Eng. 1998, 124, 1281–1289. [Google Scholar] [CrossRef]
  58. Luco, N.; Bazzurro, P. Does amplitude scaling of ground motion records result in biased nonlinear structural drift responses? Earthq. Eng. Struct. Dyn. 2007, 36, 1813–1835. [Google Scholar] [CrossRef]
  59. Li, Z.; Kotronis, P.; Wu, H. Simplified approaches for Arias Intensity correction of synthetic accelerograms. Bull. Earthq. Eng. 2017, 15, 4067–4087. [Google Scholar] [CrossRef] [Green Version]
  60. TensorFlow n.d. Available online: https://www.tensorflow.org/ (accessed on 15 April 2021).
  61. Powers, D.M.W. Evaluation: From precision, recall and f-measure to roc, informedness, markedness & correlation. J. Mach. Learn. Technol. 2011, 2, 37–63. [Google Scholar]
  62. Xu, R.; Wunsch, D. Survey of clustering algorithms. IEEE Trans. Neural Netw. 2005, 16, 645–678. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The pipelines of seismic damage state prediction with the 1D CNN (solid arrows) only and the 2D CNN (dash arrows) that requires encoding of the GMRs into 2D images.
Figure 2. The architecture of 1D CNNs with the feature extracting (convolutional and max-pooling) layers and the fully connected layers (the FNN classifier).
Figure 3. The architectures of three neural network models: (a) the 1D CNN model; (b) the 2D CNN model; and (c) the FNN model.
Figure 4. (a) The fixed-base benchmark r/c frame with beam and column design information; (b) the fiber-distributed plasticity element of beams and columns with Concrete02 material model and Steel02 material model in OpenSEES; (c) the stress and strain relationship (left); and the hysteretic behavior (right) of Concrete02 (top) and Steel02 (bottom).
Figure 5. The diagram of selecting the balanced dataset of 3201 GMRs.
Figure 6. The training, validation, and test results of the three neural network models: (a) the accuracy histories on the training and validation sets; (b) the loss histories on the training and validation sets; (c) the test confusion matrix on the test set of 1D CNN model; (d) the test confusion matrix on the test set of 2D CNN model; and (e) the test confusion matrix on the test set of FNN model.
Table 1. Comparison of computing time and resources of the three neural network models.

Model | Trainable Parameters | Memory Usage (MiB) | GMR Preprocessing Time (s), GPU / CPU | Model Training Time (s), GPU / CPU | Model Testing Time (s), GPU / CPU
1D CNN | 105,283 | 2299 | 0.08 / 0.09 | 18 / 66 | 0.11 / 0.15
2D CNN | 787,715 | 7423 | 106 / 104 | 504 / 6926 | 1.1 / 36
FNN | 313,259 | 313 | 0.08 / 0.09 | 12.7 / 14 | 0.07 / 0.08
