A Deep Learning-Based Fragment Detection Approach for the Arena Fragmentation Test
Abstract
1. Introduction
2. Deep Learning-Based Fragment Detection
2.1. Overview
2.2. Faster R-CNN Based Fragment Detection Algorithm
- To the best of our knowledge, our work is the first to demonstrate the applicability of deep learning-based object detection approaches for analyzing AFT images acquired by a high-speed camera;
- We have verified that a two-stage approach (such as Faster R-CNN) is more suitable for detecting warhead fragments (very small objects) than a one-stage approach (such as SSD);
- We have empirically found the hyper-parameters of anchor boxes that are optimal for detecting warhead fragments.
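The fragment-specific aspect ratios {4:3, 1:1, 3:4} from the anchor-box table below can be turned into concrete anchor shapes as sketched here. The base size and the five scale multipliers are illustrative placeholders, not the paper's empirically found values (which are not reproduced in this excerpt):

```python
# Sketch: enumerating anchor-box shapes for every scale/ratio combination.
# The ratios {4:3, 1:1, 3:4} come from Table 1; base_size and the scale
# multipliers below are ASSUMED for illustration only.

def make_anchors(base_size, scales, aspect_ratios):
    """Return (w, h) anchor shapes for every scale/ratio combination."""
    anchors = []
    for s in scales:
        for w_ratio, h_ratio in aspect_ratios:
            # Keep the anchor area equal to (base_size * s)^2 while
            # matching the requested width:height ratio.
            area = (base_size * s) ** 2
            w = (area * w_ratio / h_ratio) ** 0.5
            h = area / w
            anchors.append((round(w, 1), round(h, 1)))
    return anchors

fragment_anchors = make_anchors(
    base_size=16,
    scales=[0.5, 1, 2, 4, 8],          # five scales, as in Table 1
    aspect_ratios=[(4, 3), (1, 1), (3, 4)],
)
print(len(fragment_anchors))  # 5 scales x 3 ratios = 15 anchors
```

Using three near-square ratios (rather than the default 2:1/1:2) reflects the observation that warhead fragments are small and roughly compact in shape.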
2.3. Temporal Filtering
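The ablation results in Section 3.2 show that temporal filtering sharply raises precision by suppressing spurious single-frame detections. The authors' exact filtering rule is not reproduced here; the following is one plausible temporal-consistency sketch, assuming IoU-based matching against adjacent frames, with an illustrative threshold:

```python
# Sketch of a temporal-consistency filter: a detection in frame t is kept
# only if a sufficiently overlapping detection also appears in the previous
# or next frame. Illustrative reconstruction; the IoU threshold is assumed.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def temporal_filter(frames, iou_thresh=0.3):
    """frames: list of per-frame box lists; returns filtered per-frame lists."""
    kept = []
    for t, boxes in enumerate(frames):
        neighbors = []
        if t > 0:
            neighbors += frames[t - 1]
        if t + 1 < len(frames):
            neighbors += frames[t + 1]
        kept.append([b for b in boxes
                     if any(iou(b, n) >= iou_thresh for n in neighbors)])
    return kept

frames = [
    [(0, 0, 10, 10)],                    # frame 0
    [(1, 1, 11, 11), (50, 50, 60, 60)],  # frame 1: second box is spurious
    [(2, 2, 12, 12)],                    # frame 2
]
print([len(f) for f in temporal_filter(frames)])  # [1, 1, 1]
```

A filter of this kind trades recall for precision: fragments detected in only one frame are discarded along with the false positives, which matches the precision/recall shift reported in the ablation table.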
3. Experimental Results
3.1. Experimental Datasets
3.2. Performance Evaluation
4. Conclusions
Author Contributions
Funding
Conflicts of Interest
Abbreviations
Abbreviation | Meaning
---|---
AFT | Arena Fragmentation Test
CNN | Convolutional Neural Network
RPN | Region Proposal Network
TP | True Positive
FP | False Positive
FN | False Negative
References
- Zecevic, B.; Terzic, J.; Catovic, A.; Serdarević-Kadić, S. Characterization of distribution parameters of fragment mass and number for conventional projectiles. In Proceedings of the 14th Seminar on New Trends in Research of Energetic Materials, Pardubice, Czech Republic, 13–15 April 2011; pp. 1026–1039. [Google Scholar]
- Zecevic, B.; Terzic, J.; Catovic, A. Influence of Warhead Design on Natural Fragmentation Performances. In Proceedings of the 15th DAAAM International Symposium, Vienna, Austria, 3–6 November 2004. [Google Scholar]
- Mott, N.F. Fragmentation of Shell Cases. Proc. R. Soc. Lond. 1947, 300–308. [Google Scholar] [CrossRef] [Green Version]
- Held, M. Consideration of the Mass Distribution of Fragments by Natural Fragmentation in Combination with Preformed Fragments. Propellants Explos. 1979, 1, 20–23. [Google Scholar] [CrossRef]
- Zecevic, B.; Terzic, J.; Catovic, A.; Serdarević-Kadić, S. Influencing Parameters on HE Projectiles With Natural Fragmentation. In Proceedings of the 9th Seminar on New Trends in Research of Energetic Materials, Pardubice, Czech Republic, 19–21 April 2006; pp. 780–795. [Google Scholar]
- Sun, C.; Chen, X.; Yan, R.; Gao, R.X. Composite-Graph-Based Sparse Subspace Clustering for Machine Fault Diagnosis. IEEE Trans. Instrum. Meas. 2020, 69, 1850–1859. [Google Scholar] [CrossRef]
- Nayana, B.R.; Geethanjali, P. Improved Identification of Various Conditions of Induction Motor Bearing Faults. IEEE Trans. Instrum. Meas. 2020, 69, 1908–1919. [Google Scholar] [CrossRef]
- Baghaei Naeini, F.; AlAli, A.A.; Al-Husari, R.; Rigi, A.; Al-Sharman, A.K.; Makris, D.; Zweiri, Y. A Novel Dynamic-Vision-Based Approach for Tactile Sensing Applications. IEEE Trans. Instrum. Meas. 2020, 69, 1881–1893. [Google Scholar] [CrossRef]
- Negri, L.H.; Paterno, A.S.; Muller, M.; Fabris, J.L. Sparse Force Mapping System Based on Compressive Sensing. IEEE Trans. Instrum. Meas. 2017, 66, 830–836. [Google Scholar] [CrossRef]
- Zhang, D.; Gao, S.; Yu, L.; Kang, G.; Zhan, D.; Wei, X. A Robust Pantograph–Catenary Interaction Condition Monitoring Method Based on Deep Convolutional Network. IEEE Trans. Instrum. Meas. 2020, 69, 1920–1929. [Google Scholar] [CrossRef]
- Su, G.; Lu, G.; Yan, P. Planar Motion Measurement of a Compliant Micro Stage: An Enhanced Microscopic Vision Approach. IEEE Trans. Instrum. Meas. 2020, 69, 1930–1939. [Google Scholar] [CrossRef]
- Lee, K.; Hwang, I.; Kim, Y.-M.; Lee, H.; Kang, M.; Yu, J. Real-Time Weld Quality Prediction Using a Laser Vision Sensor in a Lap Fillet Joint during Gas Metal Arc Welding. Sensors 2020, 20, 1625. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Kim, I.H.; Bong, J.H.; Park, J.; Park, S. Prediction of driver’s intention of lane change by augmenting sensor information using machine learning techniques. Sensors 2017, 17, 1350. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Ohn-Bar, E.; Trivedi, M.M. Are all objects equal? Deep spatio-temporal importance prediction in driving videos. Pattern Recognit. 2017, 64, 425–436. [Google Scholar] [CrossRef]
- Kang, B.; Lee, Y. High-Resolution Neural Network for Driver Visual Attention Prediction. Sensors 2020, 20, 2030. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Berenguel-Baeta, B.; Bermudez-Cameo, J.; Guerrero, J.J. OmniSCV: An Omnidirectional Synthetic Image Generator for Computer Vision. Sensors 2020, 20, 2066. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Jomaa, R.M.; Mathkour, H.; Bazi, Y.; Islam, M.S. End-to-End Deep Learning Fusion of Fingerprint and Electrocardiogram Signals for Presentation Attack Detection. Sensors 2020, 20, 2085. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Baillargeon, Y.; Lalanne, C. Methods to perform behind-armour debris analysis with x-ray films. In DRDC Valcarier TM; Defence R&D Canada, Technical Memorandum: Valcartier, QC, Canada, 2005; pp. 2003–2123. [Google Scholar]
- Huang, G.; Li, W.; Feng, S. Fragment velocity distribution of cylindrical rings under eccentric point initiation. Propellants Explos. Pyrotech. 2015, 40, 215–220. [Google Scholar] [CrossRef]
- Liu, J.; Zhao, D.; Yangun, L.; Zhou, H. Optoelectronic System for Measuring Warhead Fragments Velocity. J. Phys. Conf. Ser. 2011, 276, 012136. [Google Scholar] [CrossRef]
- Burke, J.; Olson, E.; Shoemaker, G. Stereo Camera Optical Tracker. In Proceedings of the ITEA Las Vegas Instrumentation Conference, Las Vegas, NV, USA, 10–12 May 2016. [Google Scholar]
- Lee, H.; Jung, C.; Park, Y.; Park, W.; Son, J. A New Image Processing-Based Fragment Detection Approach for Arena Fragmentation Test. J. Korea Inst. Milit. Sci. Technol. 2019, 22, 599–606. [Google Scholar]
- Choi, J.; Han, T.; Lee, S.; Song, B. Deep learning-based small object detection. J. Inst. Electron. Inf. Eng. 2018, 55, 57–66. [Google Scholar]
- Kim, H.; Park, M.; Son, W.; Choi, H.; Park, S. Deep Learning based Object Detection and Distance Estimation using Mono Camera. J. Korean Inst. Intell. Syst. 2018, 28, 201–209. [Google Scholar] [CrossRef]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 21–37. [Google Scholar]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
- Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 740–755. [Google Scholar]
Types of Objects | Scale | Aspect Ratio |
---|---|---|
General object | {, , } | {2:1, 1:1, 1:2} |
Warhead fragment | {, , , , } | {4:3, 1:1, 3:4} |
Warhead Type | Video IDs | Number of Captured Videos
---|---|---
Case 1 | 1 | 1 |
Case 2 | 2–4 | 3 |
Case 3 | 5–7 | 3 |
Case 4 | 8–11 | 4 |
Our Proposed Deep Learning-Based Approach

AFT Videos | | Precision | Recall | F1-Score | F0.5-Score
---|---|---|---|---|---
Case 1 | # 1 | 1.000 | 0.429 | 0.600 | 0.790
Case 2 | # 1 | 0.926 | 0.714 | 0.806 | 0.874
 | # 2 | 0.955 | 0.656 | 0.778 | 0.875
 | # 3 | 0.851 | 0.526 | 0.650 | 0.757
Case 3 | # 1 | 0.857 | 0.612 | 0.714 | 0.793
 | # 2 | 0.952 | 0.455 | 0.616 | 0.781
 | # 3 | 1.000 | 0.525 | 0.689 | 0.847
Case 4 | # 1 | 0.857 | 0.857 | 0.857 | 0.857
 | # 2 | 0.833 | 0.833 | 0.833 | 0.833
 | # 3 | 0.714 | 0.833 | 0.769 | 0.735
 | # 4 | 0.688 | 0.917 | 0.786 | 0.724
Average | | 0.888 | 0.624 | 0.734 | 0.819
The Previous Image Processing-Based Approach [22]

AFT Videos | | Precision | Recall | F1-Score | F0.5-Score
---|---|---|---|---|---
Case 1 | # 1 | 0.800 | 0.571 | 0.666 | 0.741
Case 2 | # 1 | 0.882 | 0.625 | 0.732 | 0.815
 | # 2 | 0.813 | 0.609 | 0.696 | 0.762
 | # 3 | 0.828 | 0.585 | 0.685 | 0.765
Case 3 | # 1 | 0.853 | 0.500 | 0.630 | 0.748
 | # 2 | 1.000 | 0.646 | 0.785 | 0.901
 | # 3 | 0.900 | 0.458 | 0.607 | 0.754
Case 4 | # 1 | 0.618 | 0.955 | 0.750 | 0.665
 | # 2 | 0.692 | 0.947 | 0.800 | 0.731
 | # 3 | 0.500 | 0.714 | 0.588 | 0.532
 | # 4 | 0.414 | 0.857 | 0.558 | 0.462
Average | | 0.754 | 0.679 | 0.682 | 0.716
Method | Avg. Precision | Avg. Recall
---|---|---
Only Faster R-CNN | 0.109 | 0.786 |
Faster R-CNN + Temporal Filtering | 0.888 | 0.624 |
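The metrics in the tables above are computed from TP, FP, and FN counts (see Abbreviations) in the standard way; the two unlabeled columns of the per-video results tables are numerically consistent with the F1 and F0.5 scores. A minimal sketch, with illustrative counts:

```python
# Precision, recall, and F-beta from TP/FP/FN counts (see Abbreviations).
# beta=1 gives the F1 score; beta=0.5 weights precision more heavily.

def precision_recall_fbeta(tp, fp, fn, beta=1.0):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    fbeta = (1 + b2) * precision * recall / (b2 * precision + recall)
    return precision, recall, fbeta

# Illustrative counts: 3 true positives, 0 false positives, 4 false negatives
# reproduce the Case 1 row of the proposed-approach table (P=1.000, R=0.429).
p, r, f1 = precision_recall_fbeta(3, 0, 4)
print(round(p, 3), round(r, 3), round(f1, 3))  # 1.0 0.429 0.6
```

The ablation rows illustrate the trade-off directly: adding temporal filtering raises average precision from 0.109 to 0.888 while lowering recall from 0.786 to 0.624, since persistent detections are kept and one-off false positives are discarded.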
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Lee, H.; Kim, J.; Jung, C.; Park, Y.; Park, W.; Son, J. A Deep Learning-Based Fragment Detection Approach for the Arena Fragmentation Test. Appl. Sci. 2020, 10, 4744. https://doi.org/10.3390/app10144744