Image-Based Detection of Modifications in Assembled PCBs with Deep Convolutional Autoencoders
Abstract
1. Introduction
- We propose a loss function that combines the content loss concept and the mean squared error (MSE) function for training a denoising convolutional autoencoder architecture for reconstruction-based anomaly detection. The proposed model can be trained using only anomaly-free images, making it suitable for real-world applications where this kind of sample is much more common and easier to obtain than a representative set of samples containing anomalies.
- We propose a comparison function that can be used to locate and segment regions that differ between a given input image and the reconstructed image produced by a convolutional autoencoder. The comparison is based on higher-level features instead of individual pixels, leading to the detection of structures and components instead of sparse noise. An illustrative code sketch of the combined loss and of this comparison is given after this list.
- We employ the proposed loss and comparison functions to design a robust method for detecting modifications on PCBs, applicable to images containing perspective distortion, noise, and lighting variations. Thus, the method aims to work under the circumstances commonly found in practice, e.g., during the on-site inspection of gas pump PCBs [3], where mobile devices are used to capture images without relying on controlled lighting or positioning. Nonetheless, it is important to highlight that the proposed method may also be applied to other monitoring tasks with similar characteristics, such as quality assurance in an industrial setting.
- We provide a labeled PCB image dataset for training and evaluating anomaly detection and segmentation methods. The dataset is publicly available (https://github.com/Diulhio/pcb_anomaly/tree/main/dataset (accessed on 10 January 2023)) and contains 1742 images with 4096 × 2816 pixels captured from a single unmodified gas pump PCB, as well as 55 images containing modifications, along with the corresponding segmentation masks.
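To make the first two contributions more concrete, the listing below is a minimal, hypothetical PyTorch sketch of a combined MSE + content loss and of a feature-space comparison for anomaly segmentation. The VGG19 feature extractor, the layer cut-off, the weighting factor `alpha`, and the bilinear upsampling are illustrative assumptions, not details taken from the method described above.

```python
# Illustrative sketch only (PyTorch); not the authors' implementation.
import torch
import torch.nn.functional as F
from torchvision import models


class ContentMSELoss(torch.nn.Module):
    """Combines a pixel-level MSE term with a feature-level (content) term."""

    def __init__(self, alpha=0.5, feature_layer=21):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features
        # Frozen, pretrained feature extractor (here: VGG19 up to relu4_1).
        self.extractor = torch.nn.Sequential(*list(vgg.children())[:feature_layer]).eval()
        for p in self.extractor.parameters():
            p.requires_grad = False
        self.alpha = alpha  # balance between pixel-level and feature-level terms

    def forward(self, reconstruction, target):
        # In practice, inputs to the extractor should be normalized with the
        # ImageNet statistics expected by VGG19; omitted here for brevity.
        mse = F.mse_loss(reconstruction, target)
        content = F.mse_loss(self.extractor(reconstruction), self.extractor(target))
        return self.alpha * mse + (1.0 - self.alpha) * content


def anomaly_map(extractor, original, reconstruction):
    """Compares input and reconstruction in feature space rather than pixel space,
    highlighting structures and components instead of sparse pixel noise."""
    with torch.no_grad():
        diff = (extractor(original) - extractor(reconstruction)) ** 2
        dist = diff.mean(dim=1, keepdim=True)  # per-location squared feature distance
    # Upsample back to image resolution; thresholding this map yields a segmentation.
    return F.interpolate(dist, size=original.shape[-2:], mode="bilinear",
                         align_corners=False)
```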
2. Related Work
3. Proposed Method
3.1. Image Registration and Partitioning
3.2. Convolutional Autoencoder Architecture
3.3. Content Loss Function for Training
3.4. Anomaly Segmentation
4. Experiments and Results
4.1. MPI-PCB Dataset
4.2. Baseline Methods
4.3. Evaluation Metrics
4.4. Training Details
4.5. Results on the MPI-PCB Dataset
4.6. Results on the MVTec-AD Dataset
4.7. Loss Function Ablation Study
4.8. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Bergmann, P.; Fauser, M.; Sattlegger, D.; Steger, C. MVTec AD–A comprehensive real-world dataset for unsupervised anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 9592–9600.
- Cohen, N.; Hoshen, Y. Sub-image anomaly detection with deep pyramid correspondences. arXiv 2020, arXiv:2005.02357.
- De Oliveira, T.J.M.; Wehrmeister, M.A.; Nassu, B.T. Detecting modifications in printed circuit boards from fuel pump controllers. In Proceedings of the 2017 30th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), Rio de Janeiro, Brazil, 17–20 October 2017; IEEE: Niteroi, RJ, Brazil, 2017; pp. 87–94.
- Chakraborty, P. The Times of India—8 Lucknow Pumps Caught Using ‘Cheating’ Chip. Available online: https://timesofindia.indiatimes.com/city/lucknow/8-city-pumps-caught-using-cheating-chip/articleshow/58407561.cms (accessed on 15 December 2021).
- Slattery, G. Reuters—Special Report: In Brazil, Organized Crime Siphons Billions from Gas Stations. Available online: https://www.reuters.com/article/us-brazil-fuel-crime-special-report-idUSKBN2B418U (accessed on 15 December 2021).
- Adibhatla, V.A.; Chih, H.C.; Hsu, C.C.; Cheng, J.; Abbod, M.F.; Shieh, J.S. Defect detection in printed circuit boards using you-only-look-once convolutional neural networks. Electronics 2020, 9, 1547.
- Shi, W.; Zhang, L.; Li, Y.; Liu, H. Adversarial semi-supervised learning method for printed circuit board unknown defect detection. J. Eng. 2020, 2020, 505–510.
- Kim, J.; Ko, J.; Choi, H.; Kim, H. Printed Circuit Board Defect Detection Using Deep Learning via A Skip-Connected Convolutional Autoencoder. Sensors 2021, 21, 4968.
- Adibhatla, V.A.; Huang, Y.C.; Chang, M.C.; Kuo, H.C.; Utekar, A.; Chih, H.C.; Abbod, M.F.; Shieh, J.S. Unsupervised Anomaly Detection in Printed Circuit Boards through Student-Teacher Feature Pyramid Matching. Electronics 2021, 10, 3177.
- Volkau, I.; Mujeeb, A.; Dai, W.; Erdt, M.; Sourin, A. The Impact of a Number of Samples on Unsupervised Feature Extraction, Based on Deep Learning for Detection Defects in Printed Circuit Boards. Future Internet 2022, 14, 8.
- Li, D.; Li, C.; Chen, C.; Zhao, Z. Semantic Segmentation of a Printed Circuit Board for Component Recognition Based on Depth Images. Sensors 2020, 20, 5318.
- Mallaiyan Sathiaseelan, M.A.; Paradis, O.P.; Taheri, S.; Asadizanjani, N. Why Is Deep Learning Challenging for Printed Circuit Board (PCB) Component Recognition and How Can We Address It? Cryptography 2021, 5, 9.
- Defard, T.; Setkov, A.; Loesch, A.; Audigier, R. PaDiM: A patch distribution modeling framework for anomaly detection and localization. In Proceedings of the International Conference on Pattern Recognition, Shanghai, China, 15–17 October 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 475–489.
- Bergmann, P.; Batzner, K.; Fauser, M.; Sattlegger, D.; Steger, C. The MVTec anomaly detection dataset: A comprehensive real-world dataset for unsupervised anomaly detection. Int. J. Comput. Vis. 2021, 129, 1038–1059.
- Shi, Y.; Yang, J.; Qi, Z. Unsupervised anomaly segmentation via deep feature reconstruction. Neurocomputing 2021, 424, 9–22.
- Wang, G.; Han, S.; Ding, E.; Huang, D. Student-teacher feature pyramid matching for unsupervised anomaly detection. arXiv 2021, arXiv:2103.04257.
- Roth, K.; Pemula, L.; Zepeda, J.; Schölkopf, B.; Brox, T.; Gehler, P. Towards total recall in industrial anomaly detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–20 June 2022; pp. 14318–14328.
- Wang, L.; Zhang, D.; Guo, J.; Han, Y. Image Anomaly Detection Using Normal Data Only by Latent Space Resampling. Appl. Sci. 2020, 10, 8660.
- Hermann, M.; Umlauf, G.; Goldlücke, B.; Franz, M.O. Fast and Efficient Image Novelty Detection Based on Mean-Shifts. Sensors 2022, 22, 7674.
- Tang, T.W.; Hsu, H.; Huang, W.R.; Li, K.M. Industrial Anomaly Detection with Skip Autoencoder and Deep Feature Extractor. Sensors 2022, 22, 9237.
- Venkataramanan, S.; Peng, K.C.; Singh, R.V.; Mahalanobis, A. Attention guided anomaly localization in images. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 485–503.
- Sato, K.; Hama, K.; Matsubara, T.; Uehara, K. Predictable uncertainty-aware unsupervised deep anomaly segmentation. In Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14–19 July 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–7.
- Akcay, S.; Atapour-Abarghouei, A.; Breckon, T.P. GANomaly: Semi-supervised anomaly detection via adversarial training. In Proceedings of the Asian Conference on Computer Vision, Perth, Australia, 2–6 December 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 622–637.
- Perera, P.; Nallapati, R.; Xiang, B. OCGAN: One-class novelty detection using GANs with constrained latent representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2898–2906.
- Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
- Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016.
- DeVries, T.; Taylor, G.W. Improved Regularization of Convolutional Neural Networks with Cutout. arXiv 2017, arXiv:1708.04552.
- Gatys, L.A.; Ecker, A.S.; Bethge, M. Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2414–2423.
- Johnson, J.; Alahi, A.; Fei-Fei, L. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 694–711.
- Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690.
- Zhao, H.; Gallo, O.; Frosio, I.; Kautz, J. Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 2016, 3, 47–57.
- Liu, S.; Deng, W. Very deep convolutional neural network based image classification using small training sample size. In Proceedings of the 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR), Kuala Lumpur, Malaysia, 3–6 November 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 730–734.
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 248–255.
- Tang, Y.; Tan, S.; Zhou, D. An Improved Failure Mode and Effects Analysis Method Using Belief Jensen–Shannon Divergence and Entropy Measure in the Evidence Theory. Arab. J. Sci. Eng. 2022, 1–14.
- Park, N.; Kim, S. How Do Vision Transformers Work? In Proceedings of the International Conference on Learning Representations, Virtual Event, 25–29 April 2022.
Layer | Feature Maps
---|---
Input |
Conv(filters = 32); BN; LeakyReLU |
Conv(filters = 64); BN; LeakyReLU |
Conv(filters = 128); BN; LeakyReLU |
Conv(filters = 128); BN; LeakyReLU |
Conv(filters = 256); BN; LeakyReLU |
Conv(filters = 256); BN; LeakyReLU |
Conv(filters = 256); BN; LeakyReLU |
Fully connected (1024); BN; LeakyReLU | 1024
Fully connected (500); LeakyReLU | 500
Fully connected (1024); BN; LeakyReLU | 1024
TranspConv(filters = 256); BN; LeakyReLU |
TranspConv(filters = 256); BN; LeakyReLU |
TranspConv(filters = 128); BN; LeakyReLU |
TranspConv(filters = 128); BN; LeakyReLU |
TranspConv(filters = 64); BN; LeakyReLU |
TranspConv(filters = 32); BN; LeakyReLU |
TranspConv(filters = 3); Sigmoid |
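For reference, the listing below is a minimal PyTorch sketch that follows the layer sequence in the table. The 3 × 3 kernels, stride-2 downsampling/upsampling, and the 256 × 256 × 3 input resolution are assumptions (the table does not specify them), chosen so that the shapes are consistent with the 1024-unit fully connected path.

```python
# Minimal sketch following the layer sequence in the table above.
# Assumed (not given in the table): 3x3 kernels, stride 2, 256x256x3 inputs,
# so the encoder output (2x2x256) flattens to the 1024-unit bottleneck path.
import torch
import torch.nn as nn


def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
                         nn.BatchNorm2d(c_out), nn.LeakyReLU())


def deconv_block(c_in, c_out):
    return nn.Sequential(nn.ConvTranspose2d(c_in, c_out, 3, stride=2,
                                            padding=1, output_padding=1),
                         nn.BatchNorm2d(c_out), nn.LeakyReLU())


class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: 256x256x3 -> 2x2x256
        filters = [32, 64, 128, 128, 256, 256, 256]
        layers, c_in = [], 3
        for f in filters:
            layers.append(conv_block(c_in, f))
            c_in = f
        self.encoder = nn.Sequential(*layers)
        # Fully connected bottleneck: 1024 -> 500 (latent code) -> 1024
        self.bottleneck = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * 2 * 256, 1024), nn.BatchNorm1d(1024), nn.LeakyReLU(),
            nn.Linear(1024, 500), nn.LeakyReLU(),
            nn.Linear(500, 1024), nn.BatchNorm1d(1024), nn.LeakyReLU(),
            nn.Unflatten(1, (256, 2, 2)))
        # Decoder: 2x2x256 -> 256x256x3
        self.decoder = nn.Sequential(
            deconv_block(256, 256), deconv_block(256, 256),
            deconv_block(256, 128), deconv_block(128, 128),
            deconv_block(128, 64), deconv_block(64, 32),
            nn.ConvTranspose2d(32, 3, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.bottleneck(self.encoder(x)))
```

In this sketch, training would minimize the combined MSE + content loss from the earlier listing between the input image and the network's reconstruction.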
Metric | Method | grid2_2 | grid2_3 | grid3_1 | grid3_2 | grid4_1 | grid4_3 | Avg.
---|---|---|---|---|---|---|---|---
IoU | Ours | 0.755 | 0.664 | 0.608 | 0.525 | 0.778 | 0.732 | 0.677
IoU | PaDiM | 0.603 | 0.624 | 0.489 | 0.145 | 0.656 | 0.524 | 0.507
IoU | SPADE | 0.319 | 0.272 | 0.419 | 0.353 | 0.457 | 0.474 | 0.382
IoU | DFR | 0.297 | 0.098 | 0.117 | 0.386 | 0.196 | 0.190 | 0.214
IoU | SPTM | 0.505 | 0.428 | 0.447 | 0.314 | 0.240 | 0.502 | 0.406
Segmentation Precision | Ours | 0.858 | 0.752 | 0.767 | 0.643 | 0.849 | 0.840 | 0.785
Segmentation Precision | PaDiM | 0.732 | 0.742 | 0.594 | 0.240 | 0.765 | 0.687 | 0.627
Segmentation Precision | SPADE | 0.364 | 0.301 | 0.621 | 0.417 | 0.526 | 0.533 | 0.460
Segmentation Precision | DFR | 0.246 | 0.078 | 0.117 | 0.419 | 0.141 | 0.221 | 0.204
Segmentation Precision | SPTM | 0.601 | 0.577 | 0.457 | 0.410 | 0.300 | 0.627 | 0.495
Segmentation Recall | Ours | 0.876 | 0.858 | 0.758 | 0.747 | 0.915 | 0.856 | 0.835
Segmentation Recall | PaDiM | 0.856 | 0.851 | 0.826 | 0.413 | 0.896 | 0.744 | 0.764
Segmentation Recall | SPADE | 0.754 | 0.754 | 0.572 | 0.715 | 0.793 | 0.833 | 0.737
Segmentation Recall | DFR | 0.687 | 0.491 | 0.310 | 0.691 | 0.347 | 0.407 | 0.489
Segmentation Recall | SPTM | 0.760 | 0.853 | 0.668 | 0.643 | 0.395 | 0.760 | 0.680
Segmentation F-Score | Ours | 0.863 | 0.805 | 0.769 | 0.688 | 0.889 | 0.851 | 0.811
Segmentation F-Score | PaDiM | 0.785 | 0.791 | 0.691 | 0.307 | 0.829 | 0.714 | 0.686
Segmentation F-Score | SPADE | 0.489 | 0.436 | 0.597 | 0.521 | 0.632 | 0.645 | 0.553
Segmentation F-Score | DFR | 0.357 | 0.126 | 0.163 | 0.511 | 0.209 | 0.283 | 0.275
Segmentation F-Score | SPTM | 0.676 | 0.687 | 0.533 | 0.492 | 0.347 | 0.680 | 0.569
Metric | Method | Texture | Object
---|---|---|---
Detection ROC-AUC | Ours | 0.870 | 0.890
Detection ROC-AUC | PaDiM | 0.960 | 0.880
Detection ROC-AUC | SPADE | 0.860 | 0.850
Detection ROC-AUC | DFR | 0.930 | 0.910
Detection ROC-AUC | SPTM | 0.980 | 0.930
Segmentation ROC-AUC | Ours | 0.880 | 0.960
Segmentation ROC-AUC | PaDiM | 0.950 | 0.970
Segmentation ROC-AUC | SPADE | 0.970 | 0.960
Segmentation ROC-AUC | DFR | 0.910 | 0.940
Segmentation ROC-AUC | SPTM | 0.960 | 0.870
IoU | Ours | 0.290 | 0.430
IoU | PaDiM | 0.330 | 0.410
IoU | SPADE | 0.380 | 0.420
IoU | DFR | 0.310 | 0.310
IoU | SPTM | 0.320 | 0.380
Segmentation Precision | Ours | 0.440 | 0.562
Segmentation Precision | PaDiM | 0.408 | 0.485
Segmentation Precision | SPADE | 0.460 | 0.518
Segmentation Precision | DFR | 0.364 | 0.481
Segmentation Precision | SPTM | 0.364 | 0.389
Segmentation Recall | Ours | 0.458 | 0.625
Segmentation Recall | PaDiM | 0.628 | 0.677
Segmentation Recall | SPADE | 0.682 | 0.640
Segmentation Recall | DFR | 0.614 | 0.515
Segmentation Recall | SPTM | 0.598 | 0.575
Segmentation F-Score | Ours | 0.446 | 0.591
Segmentation F-Score | PaDiM | 0.490 | 0.560
Segmentation F-Score | SPADE | 0.546 | 0.571
Segmentation F-Score | DFR | 0.452 | 0.478
Segmentation F-Score | SPTM | 0.448 | 0.382