A Convolutional Neural Network for Impact Detection and Characterization of Complex Composite Structures
Abstract
1. Introduction
2. Convolutional Neural Network
2.1. Convolutional Neural Network Theory
2.2. Network Architecture
- The convolution layer is the core of any CNN. It extracts information from its input through a number of filters that are automatically trained to detect certain features in an image. The size and number of the filters are set by the user. Each filter scans the input from the upper-left corner to the bottom-right corner, creating one feature map per filter. The neurons at the output are arranged in a volume with a depth equal to the number of filters, a height equal to $H_{in} - H_f + 1$ (where $H_{in}$ is the input height and $H_f$ is the filter height) and a length equal to $L_{in} - L_f + 1$ (where $L_{in}$ is the input length and $L_f$ is the filter length), assuming a stride of one and no padding. As more convolutional layers are connected in series, the output of one layer becomes the input of the next and its features are extracted again, increasing the level of complexity, and hence the accuracy, but also increasing the training time and the risk of overfitting [50,51]. There is therefore a trade-off, and the number of convolution layers, as well as the number of filters and their sizes in the metamodel, were chosen by trial and error.
- The pooling layer performs down-sampling in the width and height, reducing the dimensions of its input and, hence, the number of parameters to be computed. This reduces the complexity of the network and the possibility of overfitting. The pooling operation acts on each depth slice of the input separately, down-sampling them all in the same manner. Each slice is divided into a number of patches, equal in area to the filter size set by the user when defining the pooling layer. The most commonly used filter size is (2,2), so each slice is divided into adjacent but disjoint patches 2 neurons high and 2 neurons long. The output of the pooling layer is a smaller volume, but equal in depth to the input. For example, if the input to a pooling layer is 64 × 64 × 6 in volume and the filter size of the layer is (2,2), then the output is 32 × 32 × 6, achieving a great reduction in the complexity of the network. There are multiple types of pooling layers, categorized by the way this operation is performed. The most popular types are [50,51]:
- Max Pooling: Selects the maximum value within each patch and sends it to the corresponding position in the output.
- Average Pooling: Calculates the average of the values within each patch and sends it to the corresponding position in the output.
- The flatten layer changes the shape of its input into a one-dimensional array (1 neuron in depth and height) whose length is equal to the product of the length, depth and height of the input to that layer. This layer is used in every CNN because the output layer must be a one-dimensional vector [50].
- The dropout layer is used to reduce overfitting by randomly dropping a fraction of the nodes in the network. This random dropping of neurons can be used to simulate a great number of different architectures, which leads to better generalization of the CNN [50].
- The densely connected layer is a regular fully connected layer: each of its output neurons is connected to all the neurons of the input. It is usually implemented at the output together with a Softmax function to give the predictions. The nodes at the output of the layer thus contain the probabilities of the CNN input belonging to each class. As each of those nodes is connected to all the neurons of the input to the layer, each receives all the information from the first half of the network, containing the convolutional and pooling layers. This means that the final prediction is made according to the whole input image, not just the output of some convolution or pooling filters [47,52,53]. A minimal code sketch combining these layer types is given after this list.
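For illustration, the layer types above can be combined into a compact model definition. The following Keras (TensorFlow) snippet is only a sketch; the input size, filter count, dropout rate and class count are placeholders chosen for illustration, not the values used for the metamodel in this work.

```python
# Minimal sketch (tf.keras) of the layer types described above. The input
# shape, filter count, dropout rate and number of classes are illustrative
# placeholders, not the values used for the metamodel in this work.
import tensorflow as tf
from tensorflow.keras import layers, models

num_classes = 4  # placeholder number of output classes

model = models.Sequential([
    # Convolution layer: 6 filters of size 3x3 scan the 64x64x1 input and
    # produce one feature map each; output volume 62x62x6 (62 = 64 - 3 + 1).
    layers.Conv2D(6, (3, 3), activation='relu', input_shape=(64, 64, 1)),
    # Pooling layer with a (2,2) filter: halves the height and width of
    # every depth slice, 62x62x6 -> 31x31x6.
    layers.MaxPooling2D(pool_size=(2, 2)),
    # Flatten layer: reshapes the volume into a one-dimensional vector.
    layers.Flatten(),
    # Dropout layer: randomly drops 25% of the nodes during training.
    layers.Dropout(0.25),
    # Densely connected output layer with Softmax: one probability per class.
    layers.Dense(num_classes, activation='softmax'),
])

model.summary()  # lists the output volume of every layer
```

Calling model.summary() reproduces, layer by layer, the size relations discussed above.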
2.3. Activation Functions
- Sigmoid function: The curve has an ’S’ shape and it is given by the following equation: $f(x) = \frac{1}{1 + e^{-x}}$.
- Tanh function: The hyperbolic tangent function is a slightly improved version of the sigmoid, in that the activation function is now centred on the origin. The function has an ’S’ shape and saturates at $-1$ for $x \to -\infty$ and at $1$ for $x \to +\infty$. The function is given by: $f(x) = \tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$. Using the tanh function, the optimisation is easier than for the sigmoid case. However, the output still saturates, the high-sensitivity region is still small, and the vanishing gradient is still a problem [50,54]. The first derivative can be derived to be $f'(x) = 1 - \tanh^{2}(x)$. From this expression, the relationship between the function and its first derivative is still simple, so it remains easy to compute.
- ReLU function (Rectified Linear Unit): Here, the function curve has two regions, depending on the sign of the input. For negative inputs the output is 0, while for positive inputs the output is equal to the input itself: $f(x) = \max(0, x)$. The ReLU function has numerous advantages compared with the sigmoid or tanh activation functions. Firstly, it has been shown to converge approximately six times faster than the hyperbolic tangent. Secondly, as the function increases from 0 to $\infty$ for positive inputs, a large variation in the input is translated into a large variation in the output, so the vanishing gradient problem is avoided. The function no longer saturates and has one constant region (i.e., for $x < 0$) and one linear region (i.e., for $x \geq 0$), but overall it is still a non-linear function. Nevertheless, when using backpropagation for training the network, the linear region brings many of the desirable properties of linear activation functions, and the function is computationally cheaper to evaluate than the previous two. One disadvantage of the ReLU activation function is that, for negative inputs, the function is horizontal and the gradients are therefore zero. This means that, in that region, the weights are no longer adjusted, causing a problem called dying ReLU, which results in a fraction of the network becoming passive [56].
- Leaky ReLU function: This activation function is a version of the ReLU that does not suffer from the dying ReLU problem [47]: $f(x) = x$ for $x \geq 0$ and $f(x) = \alpha x$ for $x < 0$, where $\alpha$ is a small positive constant (typically 0.01).
- Softmax function: This function is usually used for the output layer. It normalises the output vector of the CNN, which has a length equal to the number of classes, say $K$, into a vector of the same length whose values sum to 1. This final vector contains the class probabilities, and the position of the maximum one gives the predicted class. The Softmax function was also used in this work, and mathematically it can be written as [57]: $f(z)_{j} = \frac{e^{z_{j}}}{\sum_{k=1}^{K} e^{z_{k}}}$ for $j = 1, \ldots, K$. A short numerical sketch of these activation functions is given after this list.
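For illustration, the activation functions above can also be evaluated directly in NumPy. The sketch below simply mirrors the definitions given in this subsection; the leaky-ReLU slope of 0.01 is an assumed common default rather than a value taken from this work.

```python
import numpy as np

def sigmoid(x):
    # 'S'-shaped curve with outputs in (0, 1); saturates for large |x|
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # zero-centred 'S'-shaped curve with outputs in (-1, 1)
    return np.tanh(x)

def relu(x):
    # 0 for negative inputs, identity for positive inputs
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # small slope for negative inputs avoids the dying-ReLU problem
    return np.where(x >= 0.0, x, alpha * x)

def softmax(z):
    # normalises a length-K vector into probabilities that sum to 1
    e = np.exp(z - np.max(z))  # shift by the maximum for numerical stability
    return e / np.sum(e)

z = np.array([2.0, 1.0, 0.1])
print(softmax(z))                          # ~[0.66, 0.24, 0.10]; the argmax gives the predicted class
print(leaky_relu(np.array([-2.0, 3.0])))   # [-0.02, 3.0]
```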
2.4. Fitness Function
- Classification accuracy: The classification accuracy is one way to evaluate the efficiency of the developed metamodel in predicting the output. It is defined as the percentage of correctly predicted values out of the total number of predictions [58]. The classification accuracy is useful, however, only when there are equal numbers of inputs belonging to each class [58]. Thus, another metric is needed to assess how the metamodel performs for each separate class.
- Confusion matrix: The confusion matrix quantifies the performance of the metamodel for each class. It is a square matrix, with the number of rows and columns equal to the total number of classes in the classification task; the sum of all the elements in column $j$ (or row $i$) represents the total number of predictions (or true samples) for the corresponding class. In addition, the off-diagonal terms of the matrix show the wrongly predicted classes, so the accuracy of the metamodel can be quantified easily, as shown in Figure 3.
- Loss function: Another method of evaluating the performance of the algorithm is through the loss function. In machine learning, the loss is applied as a penalty for a wrong prediction. This is important for the SHM application since, due to the high safety factors involved, false alarms and missed detections have to be kept to a minimum. For an exact prediction the loss is zero, while an inaccurate categorization results in a greater loss. Therefore, the program updates the weights and biases until the loss is minimised. In a multi-class classification algorithm, the logarithmic loss, also named cross-entropy loss, is commonly used [58,59]: $L = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{K} y_{ij}\log(p_{ij})$, where $N$ is the number of samples, $K$ is the number of classes, $y_{ij}$ is 1 if sample $i$ belongs to class $j$ and 0 otherwise, and $p_{ij}$ is the predicted probability that sample $i$ belongs to class $j$. A short code sketch illustrating these three measures is given after this list.
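To make the three measures concrete, the short NumPy sketch below computes them for a small hypothetical three-class example; the labels and probabilities are invented for illustration and are not results from the metamodel.

```python
import numpy as np

# Hypothetical example: 4 samples, 3 classes.
y_true = np.array([0, 1, 2, 1])              # true class indices
p_pred = np.array([[0.8, 0.1, 0.1],          # Softmax outputs (one row per sample)
                   [0.2, 0.7, 0.1],
                   [0.1, 0.2, 0.7],
                   [0.5, 0.4, 0.1]])
y_pred = np.argmax(p_pred, axis=1)           # predicted class indices

# Classification accuracy: correctly predicted samples / total predictions
accuracy = 100.0 * np.mean(y_pred == y_true)
print(f"accuracy = {accuracy:.1f}%")         # 75.0% (3 of 4 correct)

# Confusion matrix: rows = true class, columns = predicted class;
# off-diagonal entries are the wrongly predicted samples
num_classes = p_pred.shape[1]
confusion = np.zeros((num_classes, num_classes), dtype=int)
for t, p in zip(y_true, y_pred):
    confusion[t, p] += 1
print(confusion)

# Cross-entropy (logarithmic) loss averaged over the samples
one_hot = np.eye(num_classes)[y_true]
loss = -np.mean(np.sum(one_hot * np.log(p_pred), axis=1))
print(f"loss = {loss:.3f}")                  # zero only for exact (probability 1) predictions
```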
3. Passive Sensing Metamodel Based on CNN
3.1. Input Data Generation - Impact Localization
3.2. Input Data Generation - Impact Energy Level
3.2.1. Transferred Energy
3.2.2. Instantaneous Energy and Averaged Stored Energy Method
3.3. CNN Architecture
- Training, in which the initially random weights are adapted by passing the training images in batches, back and forth inside the network, to minimise a pre-defined loss function.
- Validation (optional), which is used for optimising the network architecture. However, as the number of images per class was quite small for many of the applications discussed in this work, the dataset could not be split into three groups, so no validation was used.
- Testing, in which the generalization of the network is assessed and the predicted classes are output for the set of testing images. A minimal code sketch of these phases is given after this list.
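Assuming a Keras (TensorFlow) implementation, these phases reduce to a handful of library calls. The sketch below uses randomly generated placeholder images, labels and a simplified model purely for illustration; only the 30 training epochs echo the cases reported in the tables of Section 4, while the data shapes, class count and batch size are assumptions.

```python
# Minimal sketch of the training and testing phases with tf.keras.
# All data below are random placeholders standing in for the grey-scale
# images generated from the sensor signals.
import numpy as np
import tensorflow as tf

x_train = np.random.rand(72, 64, 64, 1).astype("float32")  # placeholder training images
y_train = np.random.randint(0, 3, size=72)                  # placeholder class labels
x_test = np.random.rand(24, 64, 64, 1).astype("float32")    # placeholder testing images
y_test = np.random.randint(0, 3, size=24)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(6, (3, 3), activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# Training: the weights are updated batch by batch to minimise the loss.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=30, batch_size=8, verbose=0)

# Testing: generalisation is assessed on unseen images and classes are predicted.
test_loss, test_accuracy = model.evaluate(x_test, y_test, verbose=0)
predicted_classes = np.argmax(model.predict(x_test), axis=1)
print(f"test accuracy = {test_accuracy:.2f}")
```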
4. Application of the Metamodel to a Composite Stiffened Panel
4.1. Experimental Set-up and Data Acquisition
4.2. Impact Location Prediction
4.2.1. CNN Architecture
4.2.2. Results
4.2.3. Symmetry and Up-Scalability
4.3. Energy Prediction
4.3.1. CNN Architecture
4.3.2. Results
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Maizuar, M.; Zhang, L.; Miramini, S.; Mendis, P.; Thompson, R.G. Detecting structural damage to bridge girders using radar interferometry and computational modelling. Struct. Control Health Monit. 2017, 24, e1985.
- Katunin, A.; Dragan, K.; Dziendzikowski, M. Damage identification in aircraft composite structures: A case study using various non-destructive testing techniques. Compos. Struct. 2015, 127, 1–9.
- Liu, L.; Miramini, S.; Hajimohammadi, A. Characterising fundamental properties of foam concrete with a non-destructive technique. Nondestr. Test. Eval. 2019, 34, 54–69.
- Aliabadi, M.F.; Sharif-Khodaei, Z. Structural Health Monitoring for Advanced Composite Structures; World Scientific Publishing Company: London, UK, 2017; Volume 8.
- Li, B.; Li, Z.; Zhou, J.; Ye, L.; Li, E. Damage localization in composite lattice truss core sandwich structures based on vibration characteristics. Compos. Struct. 2015, 126, 34–51.
- Mustapha, S.; Ye, L.; Dong, X.; Alamdari, M.M. Evaluation of barely visible indentation damage (BVID) in CF/EP sandwich composites using guided wave signals. Mech. Syst. Sig. Process. 2016, 76, 497–517.
- Sharif Khodaei, Z.; Ghajari, M.; Aliabadi, M.F. Determination of impact location on composite stiffened panels. Smart Mater. Struct. 2012, 21, 105026.
- Ghajari, M.; Sharif Khodaei, Z.; Aliabadi, M.H.; Apicella, A. Identification of impact force for smart composite stiffened panels. Smart Mater. Struct. 2013, 22, 085014.
- Sharif Khodaei, Z.; Aliabadi, M.H. Assessment of delay-and-sum algorithms for damage detection in aluminium and composite plates. Smart Mater. Struct. 2014, 23, 075007.
- Sharif Khodaei, Z.; Aliabadi, M. A multi-level decision fusion strategy for condition based maintenance of composite structures. Materials 2016, 9, 790.
- Zhao, G.; Li, S.; Hu, H.; Zhong, Y.; Li, K. Impact localization on composite laminates using fiber Bragg grating sensors and a novel technique based on strain amplitude. Opt. Fiber Technol. 2018, 40, 172–179.
- Morse, L.; Sharif Khodaei, Z.; Aliabadi, M. Reliability based impact localization in composite panels using Bayesian updating and the Kalman filter. Mech. Syst. Sig. Process. 2018, 99, 107–128.
- Fu, H.; Vong, C.M.; Wong, P.K.; Yang, Z. Fast detection of impact location using kernel extreme learning machine. Neural Comput. Appl. 2016, 27, 121–130.
- Lopes, V., Jr.; Park, G.; Cudney, H.H.; Inman, D.J. Impedance-based structural health monitoring with artificial neural networks. J. Intell. Mater. Syst. Struct. 2000, 11, 206–214.
- Park, S.O.; Jang, B.W.; Lee, Y.G.; Kim, Y.Y.; Kim, C.G.; Park, C.Y.; Lee, B.W. Detection of Impact Location for Composite Stiffened Panel Using FBG Sensors. Adv. Mater. Res. 2010, 123, 895–898.
- Yue, N.; Sharif-Khodaei, Z. Assessment of impact detection techniques for aeronautical application: ANN vs. LSSVM. J. Multiscale Modell. 2016, 7, 1640005.
- Seno, A.H.; Aliabadi, M. Impact localisation in composite plates of different stiffness impactors under simulated environmental and operational conditions. Sensors 2019, 19, 3659.
- Xu, Q. A comparison study of extreme learning machine and least squares support vector machine for structural impact localization. Math. Prob. Eng. 2014, 2014, 1–8.
- Kang, F.; Liu, J.; Li, J.; Li, S. Concrete dam deformation prediction model for health monitoring based on extreme learning machine. Struct. Control Health Monit. 2017, 24, e1997.
- Na, S.; Lee, H.K. Neural network approach for damaged area location prediction of a composite plate using electromechanical impedance technique. Compos. Sci. Technol. 2013, 88, 62–68.
- De Oliveira, M.; Araujo, N.; da Silva, R.; da Silva, T.; Epaarachchi, J. Use of Savitzky–Golay filter for performance improvement of SHM systems based on neural networks and distributed PZT sensors. Sensors 2018, 18, 152.
- Palomino, L.V.; Steffen, V.; Finzi Neto, R.M. Probabilistic neural network and fuzzy cluster analysis methods applied to impedance-based SHM for damage classification. Shock Vibr. 2014, 2014, 1–12.
- AlThobiani, F.; Ball, A.; Choi, B.K. An application to transient current signal based induction motor fault diagnosis of Fourier–Bessel expansion and simplified fuzzy ARTMAP. Expert Syst. Appl. 2013, 40, 5372–5384.
- De Oliveira, M.A.; Inman, D.J. Performance analysis of simplified Fuzzy ARTMAP and Probabilistic Neural Networks for identifying structural damage growth. Appl. Soft Comput. 2017, 52, 53–63.
- Alom, M.Z.; Taha, T.M.; Yakopcic, C.; Westberg, S.; Sidike, P.; Nasrin, M.S.; Van Esesn, B.C.; Awwal, A.A.S.; Asari, V.K. The history began from AlexNet: A comprehensive survey on deep learning approaches. arXiv 2018, arXiv:1803.01164.
- Rawat, W.; Wang, Z. Deep convolutional neural networks for image classification: A comprehensive review. Neural Comput. 2017, 29, 2352–2449.
- Alom, M.Z.; Alam, M.; Taha, T.M.; Iftekharuddin, K.M. Object recognition using cellular simultaneous recurrent networks and convolutional neural network. In Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017; pp. 2873–2880.
- Lakhani, V.A.; Mahadev, R. Multi-Language Identification Using Convolutional Recurrent Neural Network. arXiv 2016, arXiv:1611.04010.
- Hannun, A.; Case, C.; Casper, J.; Catanzaro, B.; Diamos, G.; Elsen, E.; Prenger, R.; Satheesh, S.; Sengupta, S.; Coates, A.; et al. Deep speech: Scaling up end-to-end speech recognition. arXiv 2014, arXiv:1412.5567.
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
- Moeskops, P.; Viergever, M.A.; Mendrik, A.M.; de Vries, L.S.; Benders, M.J.; Išgum, I. Automatic segmentation of MR brain images with a convolutional neural network. IEEE Trans. Med. Imaging 2016, 35, 1252–1261.
- Ronao, C.A.; Cho, S.B. Human activity recognition with smartphone sensors using deep learning neural networks. Expert Syst. Appl. 2016, 59, 235–244.
- Ji, S.; Xu, W.; Yang, M.; Yu, K. 3D convolutional neural networks for human action recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 221–231.
- Khan, S.; Yairi, T. A review on the application of deep learning in system health management. Mech. Syst. Sig. Process. 2018, 107, 241–265.
- Zhao, R.; Yan, R.; Chen, Z.; Mao, K.; Wang, P.; Gao, R.X. Deep learning and its applications to machine health monitoring. Mech. Syst. Sig. Process. 2019, 115, 213–237.
- Abdeljaber, O. Real-time vibration-based structural damage detection using one-dimensional convolutional neural networks. J. Sound Vib. 2017, 388.
- Abdeljaber, O.; Avci, O.; Kiranyaz, M.S.; Boashash, B.; Sodano, H.; Inman, D.J. 1-D CNNs for structural damage detection: Verification on a structural health monitoring benchmark data. Neurocomputing 2018, 275, 1308–1317.
- De Oliveira, M.; Monteiro, A.; Vieira Filho, J. A New Structural Health Monitoring Strategy Based on PZT Sensors and Convolutional Neural Network. Sensors 2018, 18, 2955.
- Chen, F.C.; Jahanshahi, M.R. NB-CNN: Deep learning-based crack detection using convolutional neural network and Naïve Bayes data fusion. IEEE Trans. Ind. Electron. 2018, 65, 4392–4400.
- Xia, M.; Li, T.; Xu, L.; Liu, L.; De Silva, C.W. Fault diagnosis for rotating machinery using multiple sensors and convolutional neural networks. IEEE/ASME Trans. Mechatron. 2018, 23, 101–110.
- Janssens, O.; Slavkovikj, V.; Vervisch, B.; Stockman, K.; Loccufier, M.; Verstockt, S.; Van de Walle, R.; Van Hoecke, S. Convolutional neural network based fault detection for rotating machinery. J. Sound Vib. 2016, 377, 331–345.
- Jeong, H.; Park, S.; Woo, S.; Lee, S. Rotating machinery diagnostics using deep learning on orbit plot images. Procedia Manuf. 2016, 5, 1107–1118.
- Guo, S.; Yang, T.; Gao, W.; Zhang, C. A Novel Fault Diagnosis Method for Rotating Machinery Based on a Convolutional Neural Network. Sensors 2018, 18, 1429.
- Qi, Y.; Shen, C.; Wang, D.; Shi, J.; Jiang, X.; Zhu, Z. Stacked Sparse Autoencoder-Based Deep Network for Fault Diagnosis of Rotating Machinery. IEEE Access 2017, 5, 15066–15079.
- Fu, H.; Khodaei, Z.S.; Aliabadi, M.F. An event-triggered energy-efficient wireless structural health monitoring system for impact detection in composite airframes. IEEE Internet Things J. 2018, 6, 1183–1192.
- Fu, H.; Sharif-Khodaei, Z.; Aliabadi, M.F. An energy-efficient cyber–physical system for wireless on-board aircraft structural health monitoring. Mech. Syst. Sig. Process. 2019, 128, 352–368.
- Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, G.; Cai, J.; et al. Recent advances in convolutional neural networks. Pattern Recognit. 2018, 77, 354–377.
- LeCun, Y.; Boser, B.E.; Denker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W.E.; Jackel, L.D. Handwritten digit recognition with a back-propagation network. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 1990; pp. 396–404.
- Hubel, D.H.; Wiesel, T.N. Receptive fields and functional architecture of monkey striate cortex. J. Physiol. 1968, 195, 215–243.
- Brownlee, J. Deep Learning for Computer Vision - Image Classification, Object Detection and Face Recognition in Python; Machine Learning Mastery: Vermont, VIC, Australia, 2019; pp. 1–563.
- CS231n: Convolutional Neural Networks for Visual Recognition, Stanford University. Available online: http://cs231n.github.io/convolutional-networks/ (accessed on 11 November 2019).
- Convolution Neural Networks vs Fully Connected Neural Networks. Available online: https://medium.com/datadriveninvestor/convolution-neural-networks-vs-fully-connected-neural-networks-8171a6e86f15 (accessed on 11 November 2019).
- Zadeh, R.B.; Ramsundar, B. Fully Connected Deep Networks. In TensorFlow for Deep Learning; O’Reilly Media: Sebastopol, CA, USA, 2018; ISBN 9781491980446.
- Walia Singh, A. Activation Functions and It’s Types-Which Is Better? Available online: https://towardsdatascience.com/activation-functions-and-its-types-which-is-better-a9a5310cc8f (accessed on 11 November 2019).
- Wang, C.F. The Vanishing Gradient Problem. Available online: https://towardsdatascience.com/the-vanishing-gradient-problem-69bf08b15484 (accessed on 11 November 2019).
- Sharma V, A. Understanding Activation Functions in Neural Networks. Available online: https://medium.com/the-theory-of-everything/understanding-activation-functions-in-neural-networks-9491262884e0 (accessed on 11 November 2019).
- Lan, H. The Softmax Function, Neural Net Outputs as Probabilities, and Ensemble Classifiers. Available online: https://towardsdatascience.com/the-softmax-function-neural-net-outputs-as-probabilities-and-ensemble-classifiers-9bd94d75932 (accessed on 11 November 2019).
- Mishra, A. Metrics to Evaluate Your Machine Learning Algorithm. Available online: https://towardsdatascience.com/metrics-to-evaluate-your-machine-learning-algorithm-f10ba6e38234 (accessed on 11 November 2019).
- Parmar, R. Common Loss Functions in Machine Learning. Available online: https://towardsdatascience.com/common-loss-functions-in-machine-learning-46af0ffc4d23 (accessed on 11 November 2019).
- Thiene, M.; Sharif Khodaei, Z.; Aliabadi, M.H. Optimal sensor placement for maximum area coverage (MAC) for damage localization in composite structures. Smart Mater. Struct. 2016, 25, 095037.
- Mallardo, V.; Aliabadi, M.; Sharif Khodaei, Z. Optimal sensor positioning for impact localization in smart composite panels. J. Intell. Mater. Syst. Struct. 2013, 24, 559–573.
- Fu, H.; Sharif-Khodaei, Z.; Aliabadi, M.H.F. An energy efficient wireless module for on-board aircraft impact detection. In Proceedings of the Nondestructive Characterization and Monitoring of Advanced Materials, Aerospace, Civil Infrastructure, and Transportation XIII, Denver, CO, USA, 1 April 2019; Volume 10971.
- Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958.
Name | Dataset | Total No. of Images | No. of Sensors | Training Data | Training Data Details | Testing Data | Testing Data Details | Classes | Images Per Class | Epochs | Accuracy (%) |
---|---|---|---|---|---|---|---|---|---|---|---|
D1 | D | 96 | 4 | 72 | 3 sets | 24 | 1 set | 3 | 24 | 30 | 100 |
D2 | D | 98 | 4 | 49 | Top (L&R) | 49 | Bottom (L&R) | 3 | 16–17 | 30 | 87.3 |
D3 | D | 98 | 4 | 49 | Top (L&R) | 49 | Bottom (L&R) | 3 | 16–17 | 30 | 99.4 |
Name | Dataset | Total No. of Images | No. of Sensors | Training Data | Training Data Details | Testing Data | Testing Data Details | Classes | Images Per Class | Epochs | Accuracy (%) |
---|---|---|---|---|---|---|---|---|---|---|---|
D4 | D | 98 | 4 | 49 | Top | 49 | Bottom | 2 | 24–25 | 30 | 96.1 |
D5 | D | 98 | 4 | 49 | Top | 49 | Bottom | 2 | 24–25 | 30 | 100 |
D6 | D | 98 | 8 | 72 | 3 sets | 24 | 1 set | 4 | 18 | 30 | 98.3 |
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).