Method for Determining Treated Metal Surface Quality Using Computer Vision Technology
Abstract
1. Introduction
2. Related Works
3. Determining the Quality of Metal Surface Treatment Using Computer Vision Technologies
- Average brightness of the fragment.
- Moment of the brightness distribution over a part of the image (dispersion).
- Asymmetry (skewness).
- Entropy of the information representing the image area, and similar indicators obtained by filtering the image.
- The whole image is read into the computer RAM as an array of pixels with h rows and w columns. Each pixel's brightness is represented by an intensity in the range 0 to 255.
- The image is divided into square fragments with side d pixels. These fragments are stored in an array table f[h//d, w//d], as explained in Figure 2. The table is d times smaller than the image both vertically and horizontally (a matrix of fragments with width w//d and height h//d, where "//" denotes integer division).
- The next block tests the fragments taken in sequence from table f; in the flowchart, a downward transition occurs after each fragment, identified by its coordinates (x, y), is processed. When all fragments have been processed, the loop exits to the right.
- In the body of the loop, each fragment is passed to a preselected function that calculates a property of that fragment from the fragment alone. A few of the possible functions are discussed in the article. The result is added to the table of fragment properties (features); the location of each calculated property in this table coincides with the placement of the corresponding fragment in table f.
- (a) We send the image fragment to a function that returns a number representing one of the properties of the texture in that fragment. The function is one of several implemented means of computing the value of a texture property. In this article, we develop mathematical equations to calculate the average frequency, given in Equation (9), to segment the image.
- (b) When exiting the loop, the resulting table of features is interpolated to the size of the input image (enlarged d times). After normalizing the feature values to the brightness range 0 to 255, we obtain an image of the fragmentation of the initial image according to the selected feature (as in Figure 3).
- To make the result easier to see, the resulting fragmentation map is blended with the original image so that the features are shown clearly.
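The steps above can be sketched in a few lines of plain Python. This is a minimal illustration, not the authors' implementation: the helper names (`fragment_features`, `normalize_and_upscale`) are hypothetical, interpolation is reduced to nearest-neighbour upscaling, and only three of the listed fragment properties (average brightness, dispersion, entropy) are shown.

```python
import math

def fragment_features(img, d, feature):
    """Split an h-by-w grayscale image (list of rows, pixel values 0..255)
    into d-by-d fragments (table f) and compute one feature per fragment."""
    h, w = len(img), len(img[0])
    return [[feature([img[fy * d + y][fx * d + x]
                      for y in range(d) for x in range(d)])
             for fx in range(w // d)]
            for fy in range(h // d)]

def mean_brightness(pixels):
    """Average brightness of the fragment."""
    return sum(pixels) / len(pixels)

def dispersion(pixels):
    """Second central moment (variance) of the brightness distribution."""
    m = mean_brightness(pixels)
    return sum((p - m) ** 2 for p in pixels) / len(pixels)

def entropy(pixels):
    """Shannon entropy (bits) of the fragment's brightness histogram."""
    n = len(pixels)
    counts = {}
    for p in pixels:
        counts[p] = counts.get(p, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def normalize_and_upscale(table, d):
    """Stretch feature values to 0..255 and enlarge the table d times
    (nearest neighbour) so it can be blended with the original image."""
    flat = [v for row in table for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0
    return [[round(255 * (table[y // d][x // d] - lo) / span)
             for x in range(len(table[0]) * d)]
            for y in range(len(table) * d)]
```

For example, an 8 x 8 image whose left half is dark (brightness 10) and right half bright (200), fragmented with d = 4, yields a 2 x 2 feature table of mean brightnesses, which `normalize_and_upscale` maps back to an 8 x 8 overlay with values 0 and 255.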
4. Texture Properties of a Fragment of an Image
5. Development of an Algorithm for Finding the Period of Dominating Fluctuations
6. Experimental Details
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
References
- Wei, W.; Hu, X.-W.; Cheng, Q.; Zhao, Y.-M.; Ge, Y.-Q. Identification of common and severe COVID-19: The value of CT texture analysis and correlation with clinical characteristics. Eur. Radiol. 2020, 30, 6788–6796.
- Guo, A.; Huang, W.; Ye, H.; Dong, Y.; Ma, H.; Ren, Y.; Ruan, C. Identification of Wheat Yellow Rust Using Spectral and Texture Features of Hyperspectral Images. Remote Sens. 2020, 12, 1419.
- Neisius, U.; Ms, H.E.; Kucukseymen, S.; Tsao, C.W.; Mancio, J.; Nakamori, S.; Manning, W.J.; Nezafat, R. Texture signatures of native myocardial T1 as novel imaging markers for identification of hypertrophic cardiomyopathy patients without scar. J. Magn. Reson. Imaging 2020, 52, 906–919.
- Nisa, M.; Shah, J.H.; Kanwal, S.; Raza, M.; Khan, M.A.; Damaševičius, R.; Blažauskas, T. Hybrid Malware Classification Method Using Segmentation-Based Fractal Texture Analysis and Deep Convolution Neural Network Features. Appl. Sci. 2020, 10, 4966.
- Shanthi, T.; Sabeenian, R.; Anand, R. Automatic diagnosis of skin diseases using convolution neural network. Microprocess. Microsyst. 2020, 76, 103074.
- Barmpoutis, P.; Stathaki, T.; Dimitropoulos, K.; Grammalidis, N. Early Fire Detection Based on Aerial 360-Degree Sensors, Deep Convolution Neural Networks and Exploitation of Fire Dynamic Textures. Remote Sens. 2020, 12, 3177.
- Yang, Q.; Shi, W.; Chen, J.; Lin, W. Deep convolution neural network-based transfer learning method for civil infrastructure crack detection. Autom. Constr. 2020, 116, 103199.
- Wang, H.; Li, Y.; Wang, Y.; Hu, H.; Yang, M.H. Collaborative distillation for ultra-resolution universal style transfer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1860–1869.
- Nanhao, J. CNN-Based Image Style Transfer and Its Applications. In Proceedings of the 2020 International Conference on Computing and Data Science (CDS), Stanford, CA, USA, 1–2 August 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 387–390.
- Albattah, W.; Khel, M.H.K.; Habib, S.; Islam, M.; Khan, S.; Abdul Kadir, K. Hajj Crowd Management Using CNN-Based Approach. CMC Comput. Mater. Contin. 2021, 66, 2183–2197.
- Liu, T.; Ma, X.; Liu, W.; Ling, S.; Zhao, L.; Xu, L.; Song, D.; Liu, J.; Sun, Z.; Fan, Z.; et al. Late Gadolinium Enhancement Amount As an Independent Risk Factor for the Incidence of Adverse Cardiovascular Events in Patients with Stage C or D Heart Failure. Front. Physiol. 2016, 7, 484.
- Gao, F.; Li, S.; You, H.; Lu, S.; Xiao, G. Text Spotting for Curved Metal Surface: Clustering, Fitting, and Rectifying. IEEE Trans. Instrum. Meas. 2020, 70, 5000212.
- Liu, Y.; Guo, L.; Gao, H.; You, Z.; Ye, Y.; Zhang, B. Machine vision based condition monitoring and fault diagnosis of machine tools using information from machined surface texture: A review. Mech. Syst. Signal Process. 2022, 164, 108068.
- Saeedi, J.; Dotta, M.; Galli, A.; Nasciuti, A.; Maradia, U.; Boccadoro, M.; Gambardella, L.M.; Giusti, A. Measurement and inspection of electrical discharge machined steel surfaces using deep neural networks. Mach. Vis. Appl. 2021, 32, 21.
- Serin, G.; Sener, B.; Ozbayoglu, A.M.; Unver, H.O. Review of tool condition monitoring in machining and opportunities for deep learning. Int. J. Adv. Manuf. Technol. 2020, 109, 953–974.
- Gai, X.; Ye, P.; Wang, J.; Wang, B. Research on defect detection method for steel metal surface based on deep learning. In Proceedings of the 2020 IEEE 5th Information Technology and Mechatronics Engineering Conference (ITOEC), Chongqing, China, 12–14 June 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 637–641.
- Warsi, A.; Abdullah, M.; Husen, M.N.; Yahya, M.; Khan, S.; Jawaid, N. Gun detection system using YOLOv3. In Proceedings of the 2019 IEEE International Conference on Smart Instrumentation, Measurement and Application (ICSIMA), Kuala Lumpur, Malaysia, 27–29 August 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–4.
- Nasreen, J.; Arif, W.; Shaikh, A.A.; Muhammad, Y.; Abdullah, M. Object Detection and Narrator for Visually Impaired People. In Proceedings of the 2019 IEEE 6th International Conference on Engineering Technologies and Applied Sciences (ICETAS), Kuala Lumpur, Malaysia, 20–21 December 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–4.
- Park, S.-H.; Hong, J.-Y.; Ha, T.; Choi, S.; Jhang, K.-Y. Deep Learning-Based Ultrasonic Testing to Evaluate the Porosity of Additively Manufactured Parts with Rough Surfaces. Metals 2021, 11, 290.
- Sun, X.; Gu, J.; Tang, S.; Li, J. Research Progress of Visual Inspection Technology of Steel Products—A Review. Appl. Sci. 2018, 8, 2195.
- Rudawska, A.; Danczak, I.; Müller, M.; Valasek, P. The effect of sandblasting on surface properties for adhesion. Int. J. Adhes. Adhes. 2016, 70, 176–190.
- Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley-Interscience: Hoboken, NJ, USA, 2005; ISBN 978-0-471-24195-9. Available online: https://cs-114.org/wp-content/uploads/2015/01/Elements_of_Information_Theory_Elements.pdf (accessed on 31 January 2022).
- Yahya, M.; Breslin, J.; Ali, M. Semantic Web and Knowledge Graphs for Industry 4.0. Appl. Sci. 2021, 11, 5110.
- Alkapov, R.; Konyshev, A.; Vetoshkin, N.; Valkevich, N.; Kostenetskiy, P. Automatic visible defect detection and classification system prototype development for iron-and-steel works. In Proceedings of the 2018 Global Smart Industry Conference (GloSIC), Chelyabinsk, Russia, 13–15 November 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–8.
- Kostenetskiy, P.; Alkapov, R.; Vetoshkin, N.; Chulkevich, R.; Napolskikh, I.; Poponin, O. Real-time system for automatic cold strip surface defect detection. FME Trans. 2019, 47, 765–774.
- Neogi, N.; Mohanta, D.K.; Dutta, P.K. Review of vision-based steel surface inspection systems. EURASIP J. Image Video Process. 2014, 2014, 50.
- Katsamenis, I.; Doulamis, N.; Doulamis, A.; Protopapadakis, E.; Voulodimos, A. Simultaneous Precise Localization and Classification of metal rust defects for robotic-driven maintenance and prefabrication using residual attention U-Net. Autom. Constr. 2022, 137, 104182.
- Petricca, L.; Moss, T.; Figueroa, G.; Broen, S. Corrosion Detection Using AI: A Comparison of Standard Computer Vision Techniques and Deep Learning Model. In Proceedings of the Sixth International Conference on Computer Science, Engineering and Information Technology, Dubai, United Arab Emirates, 22–23 January 2016.
- Dong, C.-Z.; Catbas, F.N. A review of computer vision–based structural health monitoring at local and global levels. Struct. Health Monit. 2020, 20, 692–743.
- Katsamenis, I.; Protopapadakis, E.; Doulamis, A.; Doulamis, N.; Voulodimos, A. Pixel-Level Corrosion Detection on Metal Constructions by Fusion of Deep Learning Semantic and Contour Segmentation. In International Symposium on Visual Computing; Springer: Cham, Switzerland, 2020; pp. 160–169.
Source | Approach Used | Target | Data Type | Restrictions |
---|---|---|---|---|
[1] | Clustering based on a vector of statistical features and on features derived from information properties and filtering (1042 features in total), followed by elimination of uninformative features. | From the properties of the selected segments, determine potentially severe cases and the course of the disease. | Images obtained from computed tomography. | Initially, the selected features are not associated with the studied phenomenon; they are used because of the presence of correlation dependences. As a result, there is no explanation of the mechanism relating the result to the selected features. |
[2] | Segmentation of the image, based on color gradation, improved by taking into account the statistical properties of the texture in the area of the image. | Identify the infected areas on the shoots. | Hyperspectral digital bitmap. | The method is based on the use of the spectral properties of chemical components of plant samples. |
[3] | Statistical properties of a texture (a collection of pixels) on a portion of a digital image. | Highlight myocardial disorders. | Bitmap digital image. | For other applications, it is necessary to first investigate the dependence of the statistical properties of the texture and the significant indicators in a new task. |
[4] | Clustering image features that are obtained from an already trained neural network (reducing the dimension of the feature vector). | Determine the presence of malicious code in the executable file. | Software byte code that is represented as a digital bitmap image in grayscale. | The method gives the probability of the presence of certain features in the image; however, there is no reasoning for the decision made. Requires a lot of training data. |
[5] | Image markup using convolutional neural network. | Classify skin lesions. | Color, digital, raster, photographic image of a site of human skin. | The system can be used only for recommendations, due to the lack of argumentation, for the decisions made. Requires a lot of training data. |
[6] | Image markup using a neural network. | Mark sources of ignition in images. | Color panoramic digital raster photographic image of the Earth’s surface areas obtained from low-flying vehicles. | Lack of argumentation for the decisions made, requires a lot of training data. |
[7] | Marking images using a neural network. | The markings on the image, the roadbed damage. | Color, digital, raster, photographic image, roadbed. | Requires a lot of training data. Can pass mild damage, which is acceptable for this task. |
[8] | Convolutional neural network, as part of the encoder–decoder. | Generation of a texture, with specified properties, based on a high-resolution sample. | Donor texture, a random vector of parameters, for a variety of results. | The system can be used to transfer a style or generate a family of textures with the same properties. One neural network is capable of generating only one texture. |
[9] | A neural network for the extraction of texture features, with a gradient descent of the image to enhance the given texture features. | Image style transfer. | Donor digital image and resizable image. | Textural features are abstract, the significance of individual components, textural features are unknown. It is possible to use automatic segmentation based on the clustering of the received features. The belonging of the obtained texture classes requires a separate study in each case. |
[10,11] | Crowd analysis to estimate crowd density, and chest MRI image analysis for cardiac diagnosis. | Estimate crowd density; diagnose cardiac conditions. | Crowd videos and images, and chest MRI images, produced as color-intensity variation. | Static conditions instead of dynamic. |
[12,13,14,15,16,17,18,19] | Detection of curved-surface features for tool-condition monitoring, with data collected through various sensing means. | Monitor tool condition from surface images. | Surface images obtained through various means. | Density estimation of the target parameter: surface roughness, wear, or chatter of the tools. |
Our approach | Computation of relevant parameters that identifiably reflect the target areas of the surface image and are in linear relation to the metal-surface irregularities. | Select areas of the image whose irregularities have a specified range of illumination values. | A digital raster image, in grayscale, with a known scale. | Metal-surface irregularities must fit within the cropped fragment of the image under study, as an example of a texture unit (the work uses image segmentation). |
# | Methodology | Principle of Operation | Advantages | Flaws |
---|---|---|---|---|
1 | Highlighting by levels. | Zones of the same type are highlighted by color and/or brightness (other criteria are possible). Analogs of geodetic lines are formed, which bound areas of the image. | Very fast. | Sensitive to noise and, as a consequence, not applicable to images with high detail. |
2 | Clustering. | Clustering techniques such as K-means (or others) are used. The clustering algorithms are applied not to the pixels of an image fragment but to their statistical (or other) properties. | Applicable to a wide class of images. | Requires significant computing resources. |
3 | Neural networks. | Trained neural networks are used (fully connected, convolutional, ResNet, transformers, or mixers). | Best markup quality indicators. | Requires a significant number of already segmented images to train. |
4 | Cutoff (threshold values). | For each fragment of the image, a computed parameter is determined. If this parameter is outside the specified range, that area of the image is discarded. | Performance depends on the complexity of computing the cutoff parameter. | Requires research to formalize a parameter that yields the required markup. |
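The cutoff strategy in row 4 can be sketched in a few lines. This is a minimal illustration assuming a per-fragment parameter table such as the one produced by the fragmentation loop of Section 3; the function name and the sample values are hypothetical.

```python
def cutoff_mask(param_table, lo, hi):
    """Keep fragments whose computed parameter lies in [lo, hi];
    fragments outside the range are discarded (False in the mask)."""
    return [[lo <= v <= hi for v in row] for row in param_table]

# Example: hypothetical per-fragment dispersion values; keep only
# moderately textured areas by cutting off the extremes.
params = [[5.0, 50.0],
          [120.0, 300.0]]
mask = cutoff_mask(params, 10.0, 200.0)
```

Here `mask` marks the two middle-range fragments as kept and discards the nearly uniform (5.0) and highly noisy (300.0) ones, which is exactly the markup behavior the table describes.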
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Al-Oraiqat, A.M.; Smirnova, T.; Drieiev, O.; Smirnov, O.; Polishchuk, L.; Khan, S.; Hasan, Y.M.Y.; Amro, A.M.; AlRawashdeh, H.S. Method for Determining Treated Metal Surface Quality Using Computer Vision Technology. Sensors 2022, 22, 6223. https://doi.org/10.3390/s22166223