Deep Machine Learning for Path Length Characterization Using Acoustic Diffraction
Abstract
1. Introduction
1.1. Motivation
1.2. Theory
2. Materials and Methods
2.1. System Design and Data Collection
2.2. Preprocessing
2.2.1. Convolutional Neural Network
2.2.2. Long Short-Term Memory Neural Network
2.3. Convolutional Neural Networks
2.3.1. Background
2.3.2. Amplitude CNN
- Input: The CNN takes 51 × 17 × 1 images in the form of SDFTs, i.e., images that are 51 pixels tall, 17 pixels wide, and in grayscale. The SDFT values are normalized between 0 and 1. Examples of the SDFTs are shown in Figure 8.
- Convolution Layers 1 & 2: The first two convolution layers each apply 64 kernels of size 7 × 7 with a stride of 1 and "same" padding. Same padding pads zeros around the exterior of the image so that the kernel can be applied to every pixel without reducing the image size. This matters in this model because the images are very narrow while the relationships within them are fairly complex.
- Max Pooling 1: The first two convolution layers are followed by a max pooling layer with a 2 × 2 kernel. This reduces the dimensionality of the model while retaining the most salient feature from each window; max pooling also improves the model's generalization by suppressing noise in the image.
- Convolution Layer 3: This layer applies 64 kernels of size 5 × 5 with a stride of 1 and "same" padding.
- Convolution Layer 4: This layer applies 96 kernels of size 5 × 5 with a stride of 1 and "same" padding.
- Max Pooling 2: The third and fourth convolution layers are followed by another 2 × 2 max pooling layer to once again reduce dimensionality and generalize the extracted features.
- Flattening: The flattening layer takes the feature maps from the final max pooling layer and flattens them into a feature vector that can connect to a dense layer. Immediately after flattening, 40% of the features are dropped (dropout) to avoid overfitting.
- Fully Connected Layer: The flattening layer connects to a fully connected, or dense, layer with 400 nodes.
- Output: The fully connected features connect to a single node that performs regression with a hyperbolic tangent activation to estimate the acoustic path length of the input sample. This activation works best for the Amplitude model because the relationship between the transfer function amplitude and the normalized acoustic path length is nonlinear and increasing between 0 and 1.
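The layer stack above can be sketched in Keras. The text does not specify the framework, the activations of the convolution layers, or the training settings, so the ReLU convolution activations and the Adam/MSE compile options below are assumptions for illustration:

```python
# Sketch of the Amplitude CNN described above (TensorFlow/Keras assumed;
# conv activations, optimizer, and loss are not stated in the text).
from tensorflow.keras import layers, models

def build_amplitude_cnn():
    model = models.Sequential([
        layers.Input(shape=(51, 17, 1)),                          # grayscale SDFT image
        layers.Conv2D(64, 7, padding="same", activation="relu"),  # conv 1: 64 kernels, 7x7
        layers.Conv2D(64, 7, padding="same", activation="relu"),  # conv 2: 64 kernels, 7x7
        layers.MaxPooling2D(2),                                   # 2x2 pooling
        layers.Conv2D(64, 5, padding="same", activation="relu"),  # conv 3: 64 kernels, 5x5
        layers.Conv2D(96, 5, padding="same", activation="relu"),  # conv 4: 96 kernels, 5x5
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dropout(0.4),                                      # drop 40% of features
        layers.Dense(400, activation="relu"),
        layers.Dense(1, activation="tanh"),                       # tanh regression output
    ])
    model.compile(optimizer="adam", loss="mse")                   # assumed settings
    return model
```

With "same" padding the spatial size shrinks only at the pooling layers: 51 × 17 → 25 × 8 → 12 × 4, so the flattened vector feeding the dense layer has 12 × 4 × 96 = 4608 features.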
2.3.3. Phase Model
- Input: As with the Amplitude model, this CNN takes 51 × 17 × 1 grayscale images in the form of SDFTs, normalized between 0 and 1. In this case, the SDFTs represent transfer function phase data.
- Convolution Layers 1 & 2: The first two convolution layers each apply 64 kernels of size 7 × 7 with a stride of 1 and "same" padding, which pads zeros around the exterior of the image so that the kernel can be applied to every pixel without reducing the image size. As in the Amplitude model, this matters because the images are very narrow while the relationships within them are fairly complex.
- Max Pooling 1: The first two convolution layers are followed by a max pooling layer with a 2 × 2 kernel, reducing the dimensionality of the model while retaining the most salient feature from each window and improving generalization by suppressing noise in the image.
- Convolution Layer 3: This layer applies 96 kernels of size 5 × 5 with a stride of 1 and "same" padding.
- Convolution Layer 4: This layer applies 96 kernels of size 5 × 5 with a stride of 1 and "same" padding.
- Max Pooling 2: The third and fourth convolution layers are followed by another 2 × 2 max pooling layer to once again reduce dimensionality and generalize the extracted features.
- Flattening: The flattening layer takes the feature maps from the final max pooling layer and flattens them into a feature vector that can connect to a dense layer. Immediately after flattening, 40% of the features are dropped (dropout) to avoid overfitting.
- Fully Connected Layer: The flattening layer connects to a fully connected layer with 230 nodes.
- Output: The fully connected features connect to a single node that performs regression with a ReLU activation to estimate the acoustic path length. This activation works best for the Phase model because the relationship between the transfer function phase and the normalized acoustic path length is closer to linear and increasing between 0 and 1.
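The Phase model mirrors the Amplitude model's layout, differing only in the third convolution layer (96 kernels rather than 64), the dense width (230 nodes), and the output activation (ReLU). A Keras sketch under the same assumptions as before (framework, convolution activations, and compile settings are not stated in the text):

```python
# Sketch of the Phase CNN described above (TensorFlow/Keras assumed;
# conv activations, optimizer, and loss are illustrative assumptions).
from tensorflow.keras import layers, models

def build_phase_cnn():
    model = models.Sequential([
        layers.Input(shape=(51, 17, 1)),                          # grayscale phase SDFT
        layers.Conv2D(64, 7, padding="same", activation="relu"),
        layers.Conv2D(64, 7, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(96, 5, padding="same", activation="relu"),  # 96 kernels here
        layers.Conv2D(96, 5, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dropout(0.4),                                      # drop 40% of features
        layers.Dense(230, activation="relu"),                     # narrower dense layer
        layers.Dense(1, activation="relu"),                       # ReLU regression output
    ])
    model.compile(optimizer="adam", loss="mse")                   # assumed settings
    return model
```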
2.4. Long Short-Term Memory Neural Network
Phase-Amplitude Model
- Input: The LSTM accepts seventeen pairs of phase and amplitude values in the form of a 17 × 2 vector. These values are normalized between 0 and 1 and represent the change in phase and amplitude across all lateral locations for each collected sample.
- LSTM Layers 1 and 2: The first two LSTM layers each have a memory size of 17 units, allowing the model to remember 17 time steps. Both layers use a hyperbolic tangent activation function.
- LSTM Layer 3: The second LSTM layer connects to a third layer with only 9 memory units, so that only the most important half of the LSTM's memory is considered at the last layer. This layer also uses a hyperbolic tangent activation function.
- Output Layer: The final LSTM layer connects to a dense output layer with a single node, which performs regression with a ReLU activation. The ReLU activation led to the best performance compared with the hyperbolic tangent function for the phase-amplitude pairs.
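The recurrent stack above can likewise be sketched in Keras. The framework and compile settings are assumptions; the layer sizes and activations follow the description:

```python
# Sketch of the Phase-Amplitude LSTM described above (TensorFlow/Keras assumed;
# optimizer and loss are illustrative assumptions).
from tensorflow.keras import layers, models

def build_phase_amplitude_lstm():
    model = models.Sequential([
        layers.Input(shape=(17, 2)),                      # 17 lateral positions x (phase, amplitude)
        layers.LSTM(17, activation="tanh", return_sequences=True),
        layers.LSTM(17, activation="tanh", return_sequences=True),
        layers.LSTM(9, activation="tanh"),                # keep the most important half of memory
        layers.Dense(1, activation="relu"),               # ReLU regression output
    ])
    model.compile(optimizer="adam", loss="mse")           # assumed settings
    return model
```

The first two layers return full sequences so each time step feeds the next LSTM layer; only the final LSTM layer collapses the sequence before the dense regression node.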
3. Results
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
AE | Acoustic Emissions |
BVID | Barely Visible Impact Damage |
CNN | Convolutional Neural Network |
CSV | Comma-Separated Value |
EPS | Extracellular Polymeric Substance |
FFT | Fast Fourier Transform |
LSTM | Long Short-Term Memory |
SDFT | Short Distance Fourier Transform |
References
Disc Height (mm) | Disc Diameter (mm) | Anomaly Width (mm) |
---|---|---|
0.68 | 58.75 | 6.47 |
0.68 | 58.75 | 2.17 |
1.32 | 58.75 | 6.47 |
1.32 | 58.75 | 2.17 |
6.46 | 58.75 | 6.47 |
6.46 | 58.75 | 2.17 |
Medium | Density (kg/m³) | Velocity (m/s) | Impedance (Rayls) |
---|---|---|---|
Water | 1000 | 1481 | |
Air | 1.204 | 331 | 398.89 |
Steel | 7850 | 5940 | |
PZT | 7500 | 4200 |
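The impedance column follows the standard relation Z = ρc (density times sound speed); the air entry reproduces this, and the blank cells in the extracted table can be recovered the same way. A minimal check (the small air discrepancy against the tabulated 398.89 comes from rounding of the sound speed):

```python
# Characteristic acoustic impedance Z = rho * c, in Rayls.
def impedance(density_kg_m3, velocity_m_s):
    return density_kg_m3 * velocity_m_s

z_air = impedance(1.204, 331)       # ~398.5 Rayls, matching the tabulated ~398.89
z_water = impedance(1000, 1481)     # 1.481e6 Rayls (blank cell in the extracted table)
z_steel = impedance(7850, 5940)     # ~4.66e7 Rayls
```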
Frequency | Anomaly Width (mm) | Anomaly Height (mm) | Wavelength (mm) | Diffraction Limit (mm) |
---|---|---|---|---|
1 MHz | 6.47 | 0.68 | 5.9 | 7.09 |
1 MHz | 2.17 | 0.68 | 5.9 | 0.79 |
1 MHz | 6.47 | 1.32 | 5.9 | 7.09 |
1 MHz | 2.17 | 1.32 | 5.9 | 0.79 |
1 MHz | 6.47 | 6.46 | 5.9 | 7.09 |
1 MHz | 2.17 | 6.46 | 5.9 | 7.07 |
5 MHz | 6.47 | 0.68 | 1.2 | 34.88 |
5 MHz | 2.17 | 0.68 | 1.2 | 3.92 |
5 MHz | 6.47 | 1.32 | 1.2 | 34.88 |
5 MHz | 2.17 | 1.32 | 1.2 | 3.92 |
5 MHz | 6.47 | 6.46 | 1.2 | 34.88 |
5 MHz | 2.17 | 6.46 | 1.2 | 34.77 |
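The diffraction-limit column of the table above is numerically consistent with the near-field (Fresnel) distance d²/λ, where d is the larger of the anomaly width and height. This relation is inferred from the tabulated values, not stated in this excerpt:

```python
# Reproduce the diffraction-limit column as d^2 / wavelength, with d the
# larger anomaly dimension (an inference from the tabulated values).
def diffraction_limit_mm(width_mm, height_mm, wavelength_mm):
    d = max(width_mm, height_mm)      # the larger lateral dimension governs
    return d * d / wavelength_mm

# e.g. first row: max(6.47, 0.68)^2 / 5.9 ~ 7.09 mm, as tabulated
```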
Frequency | Wavelength (mm) | Height (mm) | Path Length (rad) |
---|---|---|---|
1 MHz | 0.33 | 0.68 | 12.95 |
1 MHz | 0.33 | 1.32 | 25.13 |
1 MHz | 0.33 | 6.46 | 123.0 |
5 MHz | 0.066 | 0.68 | 64.74 |
5 MHz | 0.066 | 1.32 | 125.66 |
5 MHz | 0.066 | 6.46 | 614.99 |
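The path-length column of the table above matches the anomaly height expressed as a phase, 2πh/λ. Again, this relation is inferred from the tabulated values rather than stated in this excerpt:

```python
import math

# Reproduce the path-length column as 2*pi*h / wavelength
# (an inference from the tabulated values).
def path_length_rad(height_mm, wavelength_mm):
    return 2 * math.pi * height_mm / wavelength_mm
```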
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Jarreau, B.E.; Yoshida, S. Deep Machine Learning for Path Length Characterization Using Acoustic Diffraction. Appl. Sci. 2023, 13, 2782. https://doi.org/10.3390/app13052782