Article

Non-Contact Detection of Delamination in Composite Laminates Coated with a Mechanoluminescent Sensor Using Convolutional AutoEncoder

Seogu Park 1, Jinwoo Song 1, Heung Soo Kim 1,* and Donghyeon Ryu 2
1 Department of Mechanical, Robotics and Energy Engineering, Dongguk University–Seoul, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Republic of Korea
2 Department of Mechanical Engineering, New Mexico Tech, Socorro, NM 87801, USA
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(22), 4254; https://doi.org/10.3390/math10224254
Submission received: 4 October 2022 / Revised: 7 November 2022 / Accepted: 10 November 2022 / Published: 15 November 2022
(This article belongs to the Special Issue Applied Computing and Artificial Intelligence)

Abstract: Delamination is a typical defect of carbon fiber-reinforced composite laminates. Detecting delamination is therefore very important to maintaining the performance of laminated composite structures. Structural Health Monitoring (SHM) methods using the latest sensors have been proposed to detect delamination that occurs during the operation of laminated composite structures. However, most sensors used in SHM methods measure data in contact form and do not provide visual information about delamination. Research into mechanoluminescent (ML) sensors that can address the limitations of existing sensors has been actively conducted for decades. The ML sensor responds to mechanical deformation and emits light in proportion to the mechanical stimulus, so it can provide visual information about changes in the physical quantities of the entire structure. Many researchers have focused on detecting structural cracks and impact damage with the ML sensor. This paper presents a method of detecting delamination in composites using ML sensors. A Convolutional AutoEncoder (CAE) was used to automatically extract delamination positions from light emission images, offering better performance than edge detection methods.

1. Introduction

Recently, fiber-reinforced composite laminates have been widely used in the aerospace, mobility, and shipping industries due to their excellent specific strength, stiffness, and fatigue resistance [1,2,3,4,5,6,7]. However, fiber-reinforced composite laminates, fabricated by laminating fiber layers, are prone to defects at the interfaces between the laminated layers [2,8]. Among these, delamination is the typical defect of laminated composites [3,9]. Delamination reduces the strength and stiffness of composite materials [2,3,10]; thus, detecting delamination is very important to maintaining the performance of composite laminates [11,12,13,14,15,16,17,18]. Structural Health Monitoring (SHM) methods using the latest sensors, such as piezoelectric (PZT) sensors [19,20], carbon nanotube (CNT) sensors [21], and fiber Bragg grating (FBG) sensors [22], have been proposed for detecting delamination in laminated composite structures. However, most sensors used in SHM measure data in a contact manner [23,24] and do not provide visual information about delamination, which could serve as a quick and intuitive signal for detecting it.
To address these limitations of contact sensors, the wavefield image scanning technique was proposed as a non-contact technique for damage detection [25,26]. Wavefield image scanning acquires, as an image, the guided wavefield of a target structure excited by an excitation source, using a laser Doppler vibrometer (LDV). This technique has proved highly effective for the non-contact detection of delamination in composite structures [25,26]. However, its efficiency depends on the surface treatment of the target structure and the inspection time over the entire structure [25,26]. In addition, the high-power laser beam used for inspection could affect the condition of the structure in the long term [25].
To overcome this problem, research has been actively pursued over the past few decades to develop non-contact sensors using mechanoluminescent (ML) materials. The ML sensor is fabricated in the form of a thin film by spraying a mixture of inorganic ML microparticles and transparent resin onto the substrate [27]. The airbrush allows the ML thin films to be freely coated onto complex surfaces. ML particles embedded in the coating respond to the mechanical stimuli experienced by the coating and consequently emit light. The ML light intensity has been shown to vary with the extent of the mechanical stimulus, such as tension, fracture, impact, and pressure [28]. The ML light signal can be measured in real time with a camera to produce optical images. Accordingly, the ML sensor coating provides visual, non-contact sensing information in the form of ML light, which can be used to understand the physical phenomena on ML-coated structural surfaces [29]. To harness these unique multi-physics ML characteristics, many different physical sensors have been developed using ML materials, such as a strain sensor [30], a stress distribution sensor [23,24,31], an impact sensor [32], a torque sensor [33], and a vibration sensor [34]. Furthermore, several researchers have investigated the use of ML materials for crack detection, because the ML light is much brighter at the crack tip, generating a high signal-to-noise ratio [27,35,36,37,38,39]. One study used this unique ML feature to detect crack propagation and delamination in the adhesive layer of a double cantilever beam (DCB) [29]. However, relatively little attention has been paid to using ML to detect delamination between the laminated fiber layers of composites.
In this research, we studied the non-contact sensing capability of the ML sensor coating to detect delamination occurring between the fiber layers of a composite using the visualized ML light signal. The three specific objectives are as follows: (i) construct a simple non-contact ML sensor system consisting of ML sensors thinly coated on the surface of composite laminates, (ii) capture the light emitted by the ML sensor at the delaminated interface of the composite laminates with a high-speed camera and export the recorded video as images, and (iii) apply an image processing technique to automatically extract the delamination locations from the ML images. A Convolutional AutoEncoder (CAE) was applied to automatically extract delamination locations from the obtained light emission images, and the CAE results were compared with those of an edge detection method.

2. Background on Convolutional AutoEncoder (CAE)

2.1. Motivation for Using CAE

The measurement data of the ML sensor take the form of an image, unlike those of contact sensors. Therefore, applying image processing techniques is essential for extracting spatial information from ML sensor data. In this study, only the delamination position, that is, the light-emitting part of the ML sensor image, needs to be segmented. A representative image segmentation technique is the thresholding method, which separates the objects to be segmented from the rest of the image using a pixel threshold value [40]. However, it is very difficult to automatically define the threshold value in low-contrast and noisy images this way. To overcome these limitations, researchers have performed automatic image crack detection using deep learning techniques: the Convolutional Neural Network (CNN) [41,42,43], the Fully Convolutional Network (FCN) [44], and the Convolutional AutoEncoder (CAE) [45,46]. Among them, a CAE anomaly detection study of concrete structures motivated us to use CAE in our research for the following four reasons: (i) other types of autoencoder are designed for learning time series data, whereas CAE is designed for learning image data; (ii) CAE anomaly detection does not require labelled data, unlike other deep learning techniques; (iii) CAE is designed to produce reconstructed versions of the input images, and therefore automatically learns the features of the training images; and (iv) a CAE trained only on normal input images does not correctly reconstruct images with properties different from those images, a characteristic that makes anomaly detection possible at the pixel level [45].

2.2. Architecture of CAE for Anomaly Detection

CAE is a network composed of an encoder, a latent space, and a decoder. The encoder extracts the representative features of the images through convolutional layers, and the compressed features are encoded in the latent space, which is smaller in dimension than the input images [45]. The encoded features then pass through the decoder, which is composed of transposed convolutional layers and reconstructs the features as closely to the original images as possible. In other words, CAE optimizes a huge number of parameters to best reconstruct the training image dataset.
Figure 1 shows the overall CAE architecture, which consists of encoder and decoder parts. The CAE input images are normalized images, with pixel values ranging from 0 to 1 and a size of 50 × 50 pixels; they are normal images with no light emission. In the encoder, representative features of the input images are extracted, and the feature dimensions are reduced as the images pass through three modules, each comprising two convolutional layers and one max pooling layer. To minimize the information loss caused by the dimensionality reduction, the number of channels of the convolutional layers is doubled each time the features pass through a module. After the last module's max pooling layer, the features pass through one additional convolutional layer and are compressed into 128 dimensions in the latent space. In the decoder, the encoded features are reconstructed into the input image by passing through three modules, each comprising one transposed convolutional layer and two convolutional layers; the number of channels of the transposed convolutional layers is halved each time. After the last convolutional layer of the decoder, the input images are reconstructed by one additional transposed convolutional layer. For training the CAE, the Mean Squared Error (MSE) was adopted as the loss function and the Adam optimizer was used to minimize it. MSE is calculated by dividing the sum of the reconstruction errors of all images by the total number of images:
$$\mathrm{MSE} = \frac{1}{N}\sum_{n=1}^{N}\left(p_{r,c} - p_{r,c}^{\mathrm{reconstructed}}\right)^{2}$$
where $p_{r,c}$ is the pixel value of the input image, $p_{r,c}^{\mathrm{reconstructed}}$ is the pixel value of the reconstructed image, $r$ is the image row index, $c$ is the image column index, and $N$ is the total number of images.
After training, the CAE holds the optimal parameters to reproduce normal images as faithfully as possible, and these same parameters make it difficult to reconstruct defective images.
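As an illustration, the following Keras sketch mirrors the architecture just described. The module structure, channel doubling and halving, 128-dimensional latent space, MSE loss, and Adam optimizer follow the text; the concrete channel counts (16, 32, 64), kernel size 3, activations, and the final crop are assumptions, since these details are not fully specified here.

```python
# A minimal sketch of the CAE described above, assuming channel counts
# (16, 32, 64), 3x3 kernels, ReLU activations, and a final crop.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cae(input_shape=(50, 50, 1)):
    inp = layers.Input(shape=input_shape)
    x = inp
    # Encoder: three modules of two conv layers + one max-pooling layer,
    # doubling the channel count in each successive module.
    for ch in (16, 32, 64):
        x = layers.Conv2D(ch, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(ch, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(2, padding="same")(x)   # 50 -> 25 -> 13 -> 7
    # One additional conv layer compresses the features to 128 channels.
    x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)
    # Decoder: three modules of one transposed conv + two conv layers,
    # halving the channel count each time.
    for ch in (64, 32, 16):
        x = layers.Conv2DTranspose(ch, 3, strides=2, padding="same",
                                   activation="relu")(x)  # 7 -> 14 -> 28 -> 56
        x = layers.Conv2D(ch, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(ch, 3, padding="same", activation="relu")(x)
    # Final transposed conv reconstructs a single-channel image; cropping
    # trims 56x56 back to 50x50 because 50 is not divisible by 2**3.
    x = layers.Conv2DTranspose(1, 3, padding="same", activation="sigmoid")(x)
    out = layers.Cropping2D(3)(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")  # Adam + MSE, as in the text
    return model
```

Training uses the normal images as both input and target, e.g., `build_cae().fit(patches, patches, epochs=50, batch_size=128)`, where the epoch and batch settings are likewise assumptions.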

3. Methodology

The proposed ML image-based delamination detection method is divided into a data acquisition process, which includes the fabrication of ML coated composite laminates, and image processing of the acquired data. Figure 2 outlines the method, which consists of three steps. In the first step, ML coated composite laminates are fabricated, and images are acquired with a camera while a bending load is applied to the composite. In the second step, feature extraction is learned from the images without light emission using a Convolutional AutoEncoder (CAE); the light-emitting images are then fed into the trained CAE model, and reconstruction error images are calculated from the resulting poorly reconstructed outputs. In the third step, a threshold value is extracted from the reconstruction error images, and a binary classification is carried out for each pixel based on that threshold. After classification, the location of the composite delamination is extracted by merging the classified images, as sketched below.
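The outline below shows how the three steps might fit together in code; `build_cae` is sketched in Section 2.2 and the patch helpers (`crop_to_patches`, `normalize`, `classify_and_merge`) in Section 4. All names and the training budget are illustrative, not the authors' implementation.

```python
# High-level sketch of the three-step pipeline under the stated assumptions.
import numpy as np

def detect_delamination(normal_frames, defect_frames):
    # Step 1: frames arrive from the camera as (n, 250, 400) 16-bit arrays.
    normal = normalize(crop_to_patches(normal_frames))
    defect = normalize(crop_to_patches(defect_frames))
    # Step 2: train the CAE only on light-free patches, then score both sets
    # with the per-pixel squared reconstruction error.
    cae = build_cae()
    cae.fit(normal[..., None], normal[..., None], epochs=50, batch_size=128)
    err_normal = (normal - cae.predict(normal[..., None])[..., 0]) ** 2
    err_defect = (defect - cae.predict(defect[..., None])[..., 0]) ** 2
    # Step 3: threshold per-pixel errors (the 1.5x rule of Section 5.2),
    # binarize, and merge the patches back into full-size frames.
    threshold = 1.5 * err_normal.reshape(len(err_normal), -1).max(axis=1).mean()
    return classify_and_merge(err_defect, threshold)
```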

4. Experimental Details

4.1. Preparation of Test Specimens

Data acquisition consists of ML composite fabrication and the acquisition of light emission images by means of a camera. An ML composite is a composite bonded to an ML compound that emits light in response to mechanical deformation. As shown in Figure 3, ML coated composite laminates are manufactured in the following steps: (1) the ML compound is made by mixing inorganic particles with a compound of resin material and hardener; (2) the composite laminates are coated with the ML compound on one side. In the first step, polydimethylsiloxane (PDMS; product #: Sylgard 184 kit) and silicone elastomer curing agent (product #: Sylgard 184 kit) are mixed in a weight ratio of 10:1. The mixed compound is blended with copper-doped zinc sulfide (ZnS:Cu) particles in a weight ratio of 3:7 to complete the ML compound. In the second step, two carbon fiber reinforced polymer (CFRP) plates are made by a Vacuum Assisted Resin Transfer Molding (VARTM) process. Both plates are manufactured by laminating 20 carbon fiber prepregs (fiberglass product #: 2069-C); in one of them, a polytetrafluoroethylene (PTFE) film is inserted between the 10th and 11th layers to initiate delamination under load. After fabrication, the two specimens are cut to identical dimensions: 17.78 cm in length, 2.54 cm in width, and 3.81 mm in thickness. In the pre-cracked specimen, the pre-crack extends 2.08 cm from the end of the specimen. The ML composite is then completed by coating the ML compound onto the side of the specimens with a wooden stick and heating in an oven at 80 °C for 2 h.

4.2. Test Setup

Figure 4 is a schematic diagram of the ML sensor system configuration and the image acquisition process. The ML sensor system consists of the ML coated composite laminates, a high-speed camera for measuring light emission images, and a computer as the data acquisition device. In this study, a 3-point bending test was performed to trigger delamination of the CFRP composite laminates under bending load. To compare the two specimens, 3-point bending tests were performed on both under the same load conditions: 10 cycles with a minimum displacement of 6 mm and a load rate of 1500 mm/min. Image acquisition for both experiments was performed in a dark room to minimize ambient light sources and recorded with a high-speed camera (Shimadzu HPV-X HyperVision). The result was one thousand images for each specimen at 120 frames per second (fps), corresponding to five load and unload cycles. The images were stored as 16-bit grayscale images on the computer. Figure 5 shows the experimental setup of the ML sensor system.

4.3. Reconstruction Error of Light Emission Images

As discussed in Section 2, a CAE trained only on normal images optimizes its parameters to best reconstruct the training image dataset, and therefore does not adequately restore images that differ from the training images, leading to a high reconstruction error in the defective parts. In this study, parts with a high reconstruction error indicate that light is emitted, and therefore signify that delamination is occurring in those parts.
Figure 6 shows the process of obtaining reconstruction error images that indicate the portions of light emission. The process consists of four steps. In the first step, the original 250 × 400-pixel images obtained with the camera are cropped into images of the same 50 × 50-pixel size. This both increases the number of images and divides the original images into square images suitable for CAE training. Through this process, the 1000 images obtained from each of the two specimens are augmented to 40,000 images of 50 × 50 pixels per specimen. In the second step, the cropped 16-bit grayscale images, whose pixel values lie in the range 0 to $2^{16}-1$, are normalized to the range 0 to 1 for efficient computation. In the third step, the CAE is trained with the prepared dataset. The training dataset consists of the 40,000 images of the normal specimen, so that the CAE model learns the optimal parameters for reconstructing images that do not emit light. A model trained in this way produces a high reconstruction error for images in which light is emitted. The pixel reconstruction error is defined as follows [45]:
$$\text{reconstruction error} = \left(p_{r,c} - p_{r,c}^{\mathrm{reconstructed}}\right)^{2}$$
where $p_{r,c}$ is the pixel value of the input image, $p_{r,c}^{\mathrm{reconstructed}}$ is the pixel value of the reconstructed image, $r$ is the image row index, and $c$ is the image column index.
In the fourth step, the reconstruction error images are obtained by calculating the reconstruction error in pixel units between the reconstructed images and the input images. In other words, 40,000 reconstruction error images are obtained, in which each pixel value is the reconstruction error. A sketch of these steps follows.
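A minimal NumPy sketch of the cropping, normalization, and error computation, assuming the frames are loaded as an array of shape (1000, 250, 400) holding 16-bit integer pixel values:

```python
# Steps 1, 2, and 4 above, under the stated array-shape assumption.
import numpy as np

def crop_to_patches(images, patch=50):
    """Crop each 250 x 400 frame into a 5 x 8 grid of 50 x 50 patches."""
    n, h, w = images.shape
    return (images
            .reshape(n, h // patch, patch, w // patch, patch)
            .transpose(0, 1, 3, 2, 4)
            .reshape(-1, patch, patch))          # (n * 40, 50, 50)

def normalize(patches):
    """Map 16-bit gray levels from [0, 2**16 - 1] to [0, 1]."""
    return patches.astype(np.float32) / (2**16 - 1)

def reconstruction_error(inputs, reconstructed):
    """Per-pixel squared error, matching the equation above."""
    return (inputs - reconstructed) ** 2
```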

4.4. Binary Classification

Figure 7 shows a simplified flowchart of the pixel-level binary classification required to segment only the light-emitting part of a light emission image.
The binary classification proceeds in two steps. In the first step, a threshold value is determined based on the reconstruction error images obtained in Section 4.3; this threshold determines whether a pixel is emitting light. The threshold value is user-defined and is carefully chosen so that the light-emitting part is properly segmented while noise is removed from the image. In the second step, the CAE model is used to create the reconstruction error images of the defective images, obtained by the same process as in Section 4.3. All pixels of the reconstruction error images are then classified against the predefined threshold: pixels whose reconstruction error exceeds the threshold are assigned a value of 1 at their respective positions, and the remaining pixels are mapped to 0.
The classified images are 50 × 50 pixels in size. After binary classification, the cropped images must be merged back to the original image size to specify the delamination location in the original image. Figure 8 shows a schematic process for combining the classified images. The 40 images from one original image, which is 250 × 400 pixels, are returned to their respective cropped positions; the combined image then becomes a binary classification image that locates the delamination in the original image. In the merged image, pixels corresponding to the delamination location have a value of 1, and the remaining pixels have a value of 0.
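A sketch of the thresholding and merging step, inverting the cropping shown earlier; the 5 × 8 grid follows from the 250 × 400 frames and 50 × 50 patches:

```python
# Binarize per-pixel errors and reassemble patches into full frames.
import numpy as np

def classify_and_merge(err_patches, threshold, grid=(5, 8), patch=50):
    binary = (err_patches > threshold).astype(np.uint8)   # 1 = light emission
    n = binary.shape[0] // (grid[0] * grid[1])            # number of frames
    return (binary
            .reshape(n, grid[0], grid[1], patch, patch)
            .transpose(0, 1, 3, 2, 4)
            .reshape(n, grid[0] * patch, grid[1] * patch))  # (n, 250, 400)
```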

5. Results

This section begins by confirming whether the ML sensor emits light due to delamination, which is verified by comparing the mean pixel value (MPV) of the images from the experiments. It then presents the segmentation results of the light emission images classified by the CAE. To validate the effectiveness of the segmentation method, the results are compared with detection results obtained using the Canny edge detection method.

5.1. MPV Changes

MPV is the average value of all pixels in an image. It is used as an indicator of light emission for the following reasons: (i) all images were acquired in an experimental environment without exposure to light, and the image datasets consist of a normal dataset and a defective dataset with the same number of images; (ii) the only experimental variable is the presence of a pre-crack in the specimen; and (iii) therefore, an obvious MPV difference between the two datasets indicates that the difference is caused by light emission at the delamination location.
Figure 9 compares the change in the Mean Pixel Value (MPV) of the 1000 images obtained from each of the two specimens of Section 4.1. The blue line shows the MPV of the 1000 images obtained from the normal specimen, and the red line shows the MPV of the defective images obtained from the other specimen. MPV is defined as follows:
$$\mathrm{MPV} = \frac{\text{sum of all pixel values}}{\text{number of pixels}}$$
That is, MPV is calculated by dividing the sum of all pixel values by the number of pixels, and it lies between 0 and $2^{16}-1$ because all images are 16-bit grayscale. As shown in Figure 9, the MPV of the defective images exhibits several peaks that are not observed in the normal images. Since the presence of the pre-crack is the only experimental variable, these peaks show that the ML sensor repeatedly emits light at the delamination location during the loading and unloading cycles.
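Computed over an image series, MPV reduces to one mean per frame; a minimal sketch, assuming the frames are stacked in (1000, 250, 400) uint16 arrays:

```python
# MPV as defined above: the mean over all pixels of each frame.
import numpy as np

def mean_pixel_value(frames):
    return frames.mean(axis=(1, 2))   # one value per frame, in [0, 2**16 - 1]

mpv_normal = mean_pixel_value(normal_frames)        # blue curve in Figure 9
mpv_defective = mean_pixel_value(defective_frames)  # red curve in Figure 9
```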
Figure 10 is a gray-scale rendering of the 221st image, acquired during the first loading cycle. The clustered pixels show the light emission locations.

5.2. Pixel-Level Segmentation Using CAE

Many variables affect the pixel value distribution, including the position of the camera, the shooting angle, and the state of the lens. Therefore, it is essential to incorporate image processing techniques to efficiently analyze images from the ML sensor. In this study, pixel-level segmentation was performed with a Convolutional AutoEncoder (CAE) to extract the delamination locations in the defective images.
Based on the CAE characteristics, the reconstruction error images were obtained as described in Section 4.3; among them, the reconstruction error images of the normal dataset were used to extract a reconstruction error threshold that can classify pixel light emission. Figure 11 shows the maximum reconstruction error of the reconstruction error images for each dataset. As can be seen in Figure 11, the maximum reconstruction errors of the defective images have several peaks that are not present in the normal images; these peaks are analogous to the MPV peaks in Figure 9, confirming that the trained CAE yields high reconstruction errors for the light emission images. To classify light emission in pixel units, the threshold value was selected from the maximum reconstruction errors of the normal images in Figure 11. Candidate thresholds ranging from the mean of these maximum reconstruction errors up to their maximum value were examined, and the threshold was finally set to 1.5 times the mean value, which eliminates noise while still detecting pixels with low light intensity. After selecting the threshold value, the reconstruction error images of the defective images were classified in pixel units: pixels with reconstruction errors greater than the threshold were classified as 1, and the remaining pixels as 0. The 40,000 classified images were then merged back to their original size, finally yielding 1000 classified images.
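The threshold selection reduces to a few lines; in this sketch, `err_normal` is assumed to hold the per-pixel reconstruction errors of the 40,000 normal patches, with shape (40000, 50, 50):

```python
# 1.5x the mean of the per-image maximum reconstruction errors on the
# normal dataset, as described in the text.
import numpy as np

max_errors = err_normal.reshape(err_normal.shape[0], -1).max(axis=1)
threshold = 1.5 * float(max_errors.mean())   # factor of 1.5 from the text
```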
Figure 12 shows some of the CAE classification results: six consecutive classified images from when the first load is applied to the pre-cracked specimen. The clustered pixels of the classified images indicate the location and onset time of delamination. Furthermore, the classified images show that only the delamination portions are segmented while noise is removed by the threshold value extracted from the CAE. Figure 13 shows the sequence of six consecutive images from when the first unload is applied to the specimen. More pixels are classified in the first unload cycle because sufficient delamination has already occurred. These results reflect the CAE characteristic of increased reconstruction error in the defective portions of the image.

5.3. Comparison of CAE Results to Canny Edge Detection Results

This section presents a comparative study of the results obtained with the CAE and with the Canny edge detection algorithm. Canny edge detection is a widely used and robust algorithm among the various edge detection methods [47]. Edge detection was selected for the performance comparison because light emission is a region where the light intensity changes rapidly and can therefore be regarded as an edge in the image.
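A sketch of such a baseline with OpenCV follows; Canny requires 8-bit input, so each 16-bit frame is rescaled first, and the hysteresis thresholds (50, 150) are hypothetical values, not those used in this study:

```python
# Canny edge detection baseline under the stated assumptions.
import cv2
import numpy as np

frame8 = (frame16 // 256).astype(np.uint8)   # 16-bit -> 8-bit grayscale
edges = cv2.Canny(frame8, 50, 150)           # 255 at detected edges, else 0
```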
Figure 14 shows the comparative results of the different detection methods. In Figure 14, the images in the first row are the gray images of four consecutive original images in the second load cycle; the images in the second and third rows are the corresponding results obtained by CAE and Canny edge detection, respectively. As can be seen in Figure 14, the gray images of the originals reveal the relative pixel intensities at the delamination location, showing that non-contact detection of composite delamination is feasible using an ML sensor. However, the delamination location in these images can only be identified manually by a person; image processing techniques are essential to build an automated detection system. The Canny edge detection results show good performance in detecting edges in the light-emitting portions while removing noise from the original image. However, the light-emitting parts can only be detected as multiple edges, because edge detection marks every pixel with a large gradient as an edge; consequently, undetected pixels remain between the edges within the light emission region. In contrast, the CAE results minimize the number of undetected pixels by presenting the spatial information of the light emission portions as a cluster. In addition, the CAE shows good performance in eliminating noise from the original images.

6. Discussion

We investigated the non-contact detection of delamination in composite laminates using ML sensors. In Section 5.1, it was confirmed that when the composite is loaded, light is emitted repeatedly from the delamination location, whereas no light is emitted under load as long as there is no delamination. These results show that delamination of composite laminates can be detected in a non-contact manner with the ML sensor.
In addition, we applied an image processing technique to extract the delamination portions in the ML images. The Convolutional AutoEncoder (CAE) was selected as the image processing technique, and it classified the images at the pixel level using a threshold on the reconstruction errors. The results in Section 5.2 indicate that the reconstruction error used for pixel-level classification is a sufficient indicator of light emission. In Section 5.3, we compared the performance of CAE and Canny edge detection on the same images. Canny edge detection failed to detect many pixels lying between edges, whereas CAE extracted more accurate delamination locations by minimizing the number of undetected pixels. Additionally, CAE offers superior performance compared with other network models [41,42,43,44,48] in the following two respects: (i) no image labeling task is required for CAE training, which eliminates the large amount of time needed to prepare the labeled data required for training CNN and FCN models, and (ii) the detection resolution of a CNN is limited to the training image size, whereas CAE achieves pixel-level detection resolution using a simple static threshold value.

7. Conclusions

Little research has been carried out to date on detecting delamination in composite laminates using ML sensors, and image processing techniques have rarely been applied to ML sensor images. In this study, we confirmed the applicability of the ML sensor for detecting delamination in composite laminates. In addition, delamination locations were automatically extracted from ML images using a CAE, showing that the application of an image processing technique enables real-time detection of composite delamination. However, the ML sensor may be limited in environments exposed to external light sources, because the ML sensor is a light-emitting sensor whose intensity has a broad peak in the visible wavelength band; visible light can therefore appear as noise in the ML images. More sophisticated image processing techniques will be needed to analyze ML images exposed to external light sources, and this currently limits the application of the ML sensor in the field. To address this limitation, we will evaluate the adaptability of the ML sensor for detecting delamination of composites under exposure to light sources, and will verify whether the CAE is also effective for ML images acquired in such environments. Another limitation concerns the applicability of the ML sensor system to internal delamination of composite laminates. This study showed that the ML sensor system is effective as a non-contact sensor for visible delamination, such as surface cracks; future studies should address the various internal defects of composite laminates so that the ML sensor system can be applied in the field.

Author Contributions

Conceptualization, H.S.K., D.R. and S.P.; methodology, S.P. and J.S.; formal analysis S.P. and J.S.; investigation, S.P.; resources, H.S.K.; writing—original draft preparation, S.P., J.S., D.R. and H.S.K.; writing—review and editing, H.S.K., J.S. and D.R.; visualization, S.P.; supervision, H.S.K. and D.R.; project administration, H.S.K.; funding acquisition, H.S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the MOTIE (Ministry of Trade, Industry, and Energy) in Korea, under the Fostering Global Talents for Innovative Growth Program (P0017307) supervised by the Korea Institute for Advancement of Technology (KIAT).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Prashanth, S.; Subbaya, K.M.; Nithin, K.; Sachhidananda, S. Fiber Reinforced Composites—A Review. J. Mater. Sci. Eng. 2017, 6, 2–6.
2. Khan, S.U.; Kim, J.-K. Impact and Delamination Failure of Multiscale Carbon Nanotube-Fiber Reinforced Polymer Composites: A Review. Int. J. Aeronaut. Space Sci. 2011, 12, 115–133.
3. Camanho, P.P.; Davila, C.G.; De Moura, M.F. Numerical Simulation of Mixed-Mode Progressive Delamination in Composite Materials. J. Compos. Mater. 2003, 37, 1415–1438.
4. An, H.; Youn, B.D.; Kim, H.S. Reliability-based Design Optimization of Laminated Composite Structures under Delamination and Material Property Uncertainties. Int. J. Mech. Sci. 2021, 205, 106561.
5. Khan, A.; Kim, H.S. A Brief Overview of Delamination Localization in Laminated Composites. Multiscale Sci. Eng. 2022, 4, 102–110.
6. Huang, B.; Wang, J.; Kim, H.S. A stress function based model for transient thermal stresses of composite laminates in various time-variant thermal environments. Int. J. Mech. Sci. 2020, 180, 105651.
7. Khalid, S.; Kim, H.S. Recent Studies on Stress Function-Based Approaches for the Free Edge Stress Analysis of Smart Composite Laminates: A Brief Review. Multiscale Sci. Eng. 2022, 4, 73–78.
8. Khan, A.; Raouf, I.; Noh, Y.R.; Lee, D.; Sohn, J.W.; Kim, H.S. Autonomous assessment of delamination in laminated composites using deep learning and data augmentation. Compos. Struct. 2022, 290, 115502.
9. Bolotin, V.V. Delaminations in composite structures: Its origin, buckling, growth and stability. Compos. Part B Eng. 1996, 27, 129–145.
10. Khalid, S.; Kim, H.-S.; Kim, H.S.; Choi, J.-H. Inspection Interval Optimization for Aircraft Composite Tail Wing Structure Using Numerical-Analysis-Based Approach. Mathematics 2022, 10, 3836.
11. Khan, A.; Kim, N.; Shin, J.K.; Kim, H.S.; Youn, B.D. Damage assessment of smart composite structures via machine learning: A review. JMST Adv. 2019, 1, 107–124.
12. Khan, A.; Ko, D.-K.; Lim, S.C.; Kim, H.S. Structural vibration-based classification and prediction of delamination in smart composite laminates using deep learning neural network. Compos. Part B Eng. 2019, 161, 586–594.
13. Khan, A.; Kim, H.S. Assessment of delaminated smart composite laminates via system identification and supervised learning. Compos. Struct. 2018, 206, 354–362.
14. An, H.; Youn, B.D.; Kim, H.S. A methodology for sensor number and placement optimization for vibration-based damage detection of composite structures under model uncertainty. Compos. Struct. 2022, 279, 114863.
15. Khan, A.; Khalid, S.; Raouf, I.; Sohn, J.-W.; Kim, H.-S. Autonomous Assessment of Delamination Using Scarce Raw Structural Vibration and Transfer Learning. Sensors 2021, 21, 6239.
16. Khalid, S.; Lee, J.; Kim, H.S. Series Solution-Based Approach for the Interlaminar Stress Analysis of Smart Composites under Thermo-Electro-Mechanical Loading. Mathematics 2022, 10, 268.
17. Khan, A.; Kim, H.S. Classification and prediction of multidamages in smart composite laminates using discriminant analysis. Mech. Adv. Mater. Struct. 2022, 29, 230–240.
18. An, H.; Youn, B.D.; Kim, H.S. Optimal Sensor Placement Considering Both Sensor Faults Under Uncertainty and Sensor Clustering for Vibration-Based Damage Detection. Struct. Multidiscip. Optim. 2022, 65, 102.
19. Sohn, H.; Park, G.; Wait, J.R.; Limback, N.P.; Farrar, C.R. Wavelet-based active sensing for delamination detection in composite structures. Smart Mater. Struct. 2004, 13, 153–160.
20. Tan, P.; Tong, L. Delamination Detection of Composite Beams Using Piezoelectric Sensors with Evenly Distributed Electrode Strips. J. Compos. Mater. 2004, 38, 321–352.
21. Abot, J.L.; Song, Y.; Vatsavaya, M.S.; Medikonda, S.; Kier, Z.; Jayasinghe, C.; Rooy, N.; Shanov, V.N.; Schulz, M.J. Delamination detection with carbon nanotube thread in self-sensing composite materials. Compos. Sci. Technol. 2010, 70, 1113–1119.
22. Takeda, S.; Okabe, Y.; Takeda, N. Delamination detection in CFRP laminates with embedded small-diameter fiber Bragg grating sensors. Compos. Part A Appl. Sci. Manuf. 2002, 33, 971–980.
23. Xu, C.-N.; Watanabe, T.; Akiyama, M.; Zheng, X.-G. Direct view of stress distribution in solid by mechanoluminescence. Appl. Phys. Lett. 1999, 74, 2414–2416.
24. Xu, C.-N.; Zheng, X.-G.; Akiyama, M. Dynamic visualization of stress distribution by mechanoluminescence image. Appl. Phys. Lett. 2000, 76, 179–181.
25. Park, B.; An, Y.-K.; Sohn, H. Visualization of hidden delamination and debonding in composites through noncontact laser ultrasonic scanning. Compos. Sci. Technol. 2014, 100, 10–18.
26. Sohn, H.; Dutta, D.; Yang, J.Y.; Park, H.J.; DeSimio, M.; Olson, S.; Swenson, E. Delamination detection in composites through guided wave field image processing. Compos. Sci. Technol. 2011, 71, 1250–1256.
27. Terasaki, N.; Xu, C.-N. Historical-Log Recording System for Crack Opening and Growth Based on Mechanoluminescent Flexible Sensor. IEEE Sens. J. 2013, 13, 3999–4004.
28. Timilsina, S.; Kim, J.S.; Kim, J.; Kim, G.-W. Review of state-of-the-art sensor applications using mechanoluminescence microparticles. Int. J. Precis. Eng. Manuf. 2016, 17, 1237–1247.
29. Terasaki, N.; Fujio, Y.; Horiuchi, S.; Akiyama, H. Mechanoluminescent studies of failure line on double cantilever beam (DCB) and tapered-DCB (TDCB) test with similar and dissimilar material joints. Int. J. Adhes. Adhes. 2019, 93, 102328.
30. Sohn, K.-S.; Timilsina, S.; Singh, S.P.; Lee, J.-W.; Kim, J.S. A Mechanoluminescent ZnS:Cu/Rhodamine/SiO2/PDMS and Piezoresistive CNT/PDMS Hybrid Sensor: Red-Light Emission and a Standardized Strain Quantification. ACS Appl. Mater. Interfaces 2016, 8, 34777–34783.
31. Terasaki, N.; Fujio, Y.; Sakata, Y.; Uehara, M.; Tabaru, T. Direct Visualization of Stress Distribution Related to Adhesive through Mechanoluminescence. ECS Trans. 2017, 75, 9.
32. Ryu, D.; Castano, N.; Vedera, K. Mechanoluminescent Composites Towards Autonomous Impact Damage Detection of Aerospace Structures. In Proceedings of the Structural Health Monitoring 2015, Stanford, CA, USA, 1–3 September 2015; DEStech Publications: Lancaster, PA, USA, 2015.
33. Kim, J.S.; Kim, G.-W. New non-contacting torque sensor based on the mechanoluminescence of ZnS:Cu microparticles. Sens. Actuators A Phys. 2014, 218, 125–131.
34. Chen, B.; Peng, D.-F.; Lu, P.; Sheng, Z.-P.; Yan, K.-Y.; Fu, Y. Evaluation of vibration mode shape using a mechanoluminescent sensor. Appl. Phys. Lett. 2021, 119, 094102.
35. Timilsina, S.; Bashnet, R.; Kim, S.H.; Lee, K.H.; Kim, J.S. A life-time reproducible mechano-luminescent paint for the visualization of crack propagation mechanisms in concrete structures. Int. J. Fatigue 2017, 101, 75–79.
36. Fujio, Y.; Xu, C.-N.; Terasawa, Y.; Sakata, Y.; Yamabe, J.; Ueno, N.; Terasaki, N.; Yoshida, A.; Watanabe, S.; Murakami, Y. Sheet sensor using SrAl2O4:Eu mechanoluminescent material for visualizing inner crack of high-pressure hydrogen vessel. Int. J. Hydrogen Energy 2016, 41, 1333–1340.
37. Fujio, Y.; Xu, C.-N.; Sakata, Y.; Ueno, N.; Terasaki, N. Invisible crack visualization and depth analysis by mechanoluminescence film. J. Alloys Compd. 2020, 832, 154900.
38. Kim, W.J.; Lee, J.M.; Kim, J.S.; Lee, C.J. Measuring high speed crack propagation in concrete fracture test using mechanoluminescent material. Smart Struct. Syst. 2012, 10, 547–555.
39. Timilsina, S.; Lee, K.H.; Kwon, Y.N.; Kim, J.S. Optical Evaluation of In Situ Crack Propagation by Using Mechanoluminescence of SrAl2O4:Eu2+, Dy3+. J. Am. Ceram. Soc. 2015, 98, 2197–2204.
40. Raju, P.D.R.; Neelima, G. Image Segmentation by using Histogram Thresholding. Int. J. Comput. Sci. Eng. Technol. 2012, 2, 776–779.
41. Zhang, L.; Yang, F.; Zhang, Y.D.; Zhu, Y.J. Road crack detection using deep convolutional neural network. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 3708–3712.
42. Li, S.; Zhao, X. Image-Based Concrete Crack Detection Using Convolutional Neural Network and Exhaustive Search Technique. Adv. Civ. Eng. 2019, 2019, 6520620.
43. Cha, Y.-J.; Choi, W.; Büyüköztürk, O. Deep Learning-Based Crack Damage Detection Using Convolutional Neural Networks. Comput.-Aided Civ. Infrastruct. Eng. 2017, 32, 361–378.
44. Dung, C.V.; Anh, L.D. Autonomous concrete crack detection using deep fully convolutional neural network. Autom. Constr. 2019, 99, 52–58.
45. Chow, J.K.; Su, Z.; Wu, J.; Tan, P.S.; Mao, X.; Wang, Y.H. Anomaly detection of defects on concrete structures with the convolutional autoencoder. Adv. Eng. Inform. 2020, 45, 101105.
46. Tang, W.; Vian, C.M.; Tang, Z.; Yang, B. Anomaly detection of core failures in die casting X-ray inspection images using a convolutional autoencoder. Mach. Vis. Appl. 2021, 32, 102.
47. Ding, L.; Goshtasby, A. On the Canny edge detector. Pattern Recognit. 2001, 34, 721–725.
48. Dung, C.V.; Sekiya, H.; Hirano, S.; Okatani, T.; Miki, C. A vision-based method for crack detection in gusset plate welded joints of steel bridges using deep convolutional neural networks. Autom. Constr. 2019, 102, 217–229.
Figure 1. Model of the architecture of a convolutional autoencoder for segmentation of light emission.
Figure 2. Methodology of the proposed ML image-based delamination detection.
Figure 3. Fabrication of ML coated composite laminates.
Figure 4. ML sensor system configuration and image acquisition.
Figure 5. Experimental setup for the 3-point bending test of ML coated composite laminates.
Figure 6. Schematic process of obtaining reconstruction error images.
Figure 7. Schematic process of binary classification using the threshold value.
Figure 8. Schematic flow chart of classified image merging.
Figure 9. MPV of images obtained from the experiments.
Figure 10. A gray-scale image of the 221st defective image.
Figure 11. Maximum reconstruction error of the images from the experiments.
Figure 12. Representative classified images in the first load cycle: (a) 212th; (b) 213th; (c) 214th; (d) 215th; (e) 216th; (f) 217th.
Figure 13. Representative classified images in the first unload cycle: (a) 282nd; (b) 283rd; (c) 284th; (d) 285th; (e) 286th; (f) 287th.
Figure 14. Comparative results (top: gray image of the original; middle: CAE; bottom: Canny edge detection): (a) 344th; (b) 345th; (c) 346th; (d) 347th.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
