Article

Inline-Acquired Product Point Clouds for Non-Destructive Testing: A Case Study of a Steel Part Manufacturer

by
Michalis Ntoulmperis
1,
Silvia Discepolo
2,
Paolo Castellini
2,
Paolo Catti
1,
Nikolaos Nikolakis
1,*,
Wilhelm van de Kamp
3 and
Kosmas Alexopoulos
1
1
Laboratory for Manufacturing Systems & Automation (LMS), Department of Mechanical Engineering & Aeronautics, University of Patras, Rio, 26504 Patras, Greece
2
Dip. di Ingegneria Industriale e Scienze Matematiche, Università Politecnica delle Marche, Via Brecce Bianche 10, 60131 Ancona, Italy
3
VDL WEWELER bv, 7325 WC Apeldoorn, The Netherlands
*
Author to whom correspondence should be addressed.
Machines 2025, 13(2), 88; https://doi.org/10.3390/machines13020088
Submission received: 13 December 2024 / Revised: 10 January 2025 / Accepted: 22 January 2025 / Published: 23 January 2025
(This article belongs to the Special Issue Application of Sensing Measurement in Machining)

Abstract

Modern vision-based inspection systems are inherently limited by their two-dimensional nature, particularly when inspecting complex product geometries. These systems are often unable to capture critical depth information, leading to challenges in accurately measuring features such as holes, edges, and surfaces with irregular curvature. To address these shortcomings, this study introduces an approach that leverages computer-aided design-oriented three-dimensional point clouds, captured via a laser line triangulation sensor mounted onto a motorized linear guide. This setup facilitates precise surface scanning, extracting complex geometrical features, which are subsequently processed through an AI-based analytical component. Dimensional properties, such as radii and inter-feature distances, are computed using a combination of K-nearest neighbors and least-squares circle fitting algorithms. This approach is validated in the context of steel part manufacturing, where traditional 2D vision-based systems often struggle due to the material’s reflectivity and complex geometries. This system achieves an average accuracy of 95.78% across three different product types, demonstrating robustness and adaptability to varying geometrical configurations. An uncertainty analysis confirms that the measurement deviations remain within acceptable limits, supporting the system’s potential for improving quality control in industrial environments. Thus, the proposed approach may offer a reliable, non-destructive inline testing solution, with the potential to enhance manufacturing efficiency.

1. Introduction

Quality control in manufacturing ensures products meet predefined quality specifications through inspections, testing, and adherence to established procedures [1,2,3]. Non-destructive inspection (NDI) methods play a vital role in reducing waste, enabling early defect detection, and supporting proactive quality control. These methods align with manufacturers’ goals of achieving sustainability and complying with stricter environmental policies [4,5].
While 2D vision-based inspection systems are widely used for NDI, they are limited in capturing depth and spatial information, making them less effective for inspecting products with complex geometries like curved surfaces or intricate edges [6,7,8]. In contrast, 3D inspection techniques provide richer volumetric data, making them ideal for evaluating spatially dependent quality characteristics, such as those in critical automotive components [9].
This study introduces a point cloud inspection system that integrates an artificial intelligence (AI) algorithm for the NDI of steel parts. Using the product’s computer-aided design (CAD) model, 2D point cloud segments are aligned with its shape and properties and then merged to form an aligned 3D representation of a product. A combination of K-nearest neighbors (KNN), primitive shape fitting, and bounding box algorithms are employed for feature extraction. This approach delivers an automated, accurate, and scalable method for quality control, validated through a case study on steel part manufacturing.

2. The State of the Art

NDI techniques have advanced significantly, yet many rely on two-dimensional imaging, limiting their ability to assess complex geometries requiring depth and spatial information [10]. For example, 2D vision-based systems have improved defect detection and productivity in textile manufacturing by over 60%, but their effectiveness radically decreases with intricate geometries and in cases where depth information is required to reach a decision on the product’s quality [10,11,12,13,14,15,16].
In this context, new measurement devices such as laser line triangulation systems have emerged as a key solution for generating accurate three-dimensional point clouds. Originally designed for 1D distance measurements, they have evolved to support 3D imaging and reconstruction, providing high accuracy and data acquisition speeds in the quality control processes for complex products [17,18,19,20,21,22,23,24]. However, challenges such as misalignment, noise, and outliers in raw point cloud data persist, necessitating robust alignment techniques with CAD models and effective data preprocessing methods [25,26,27,28,29,30].
AI systems are increasingly being used in different manufacturing use cases, such as tool wear monitoring [30], scrap management [31], and predictive maintenance [32]. An NDI system necessitates the presence of an intelligent digital system that processes the captured data and extracts the specific features that are of interest to the manufacturer, facilitating the quality control process [29].
AI has enabled advanced point cloud feature extraction for NDI, overcoming the limitations of 2D imaging by identifying spatial, depth, and dimensional characteristics. Architectures such as Convolutional Neural Networks, though adapted for point clouds, often struggle with data loss during 3D-to-2D transformations and high computational demands [33,34,35,36,37]. On the other hand, PointNet and PointNet++, tailored to 3D data, have demonstrated improved accuracy but face challenges with subtle geometric differences and local contextual awareness [38,39,40,41,42]. Hybrid methods, such as combining KNN with geometric primitive fitting, address these challenges by efficiently extracting the contextual and dimensional features. These methods reduce the computational complexity, making them more viable for time-critical applications [43,44,45].
This study combines laser line triangulation with a hybrid AI approach using KNN and circle fitting to identify the critical features in steel parts. The proposed approach addresses challenges in 3D data alignment, noise handling, and computational efficiency, and is validated using the Guide to the Expression of Uncertainty in Measurement (GUM) per the ISO/IEC Guide 98-3:2008 standard [46].

3. Methodology

The proposed methodology is built on data acquisition, alignment of the acquired data, feature extraction, and dimensional analysis through a hybrid AI system. Lastly, the measurements are evaluated using the GUM. An overview of the methodology is presented in Figure 1 and analyzed in the following paragraphs.
The inspection system consists of a laser line triangulation sensor mounted onto the sliding element of a linear guide. The product is positioned in front of the measurement system using an industrial robot clamp. The linear guide provides specific triggers at predetermined distances, independent of the guide’s speed, ensuring consistent and accurate sampling during data acquisition. The positioning system incorporates an encoder featuring a compact readhead that operates up to 1.0 mm from a self-adhesive tape.
The use of an industrial robot to hold a product in front of the scanner ensures that no vibrations are introduced into the scanner while performing the acquisition. If the scanner were fixed to the robot, unwanted vibrations would be introduced into the acquisition procedure that would increase the noise in the scan, deteriorating the quality of the captured data. In addition, alternatives such as using a conveyor system to move the product while the sensor remains stationary would also introduce vibrations and increase the possibility of positional misalignments that could degrade the quality of the scan, especially in cases where products characterized by complex geometries are being scanned.
The reference axes for this procedure are defined as follows: the x-axis refers to the scan direction, the y-axis coincides with the laser line, and the z-axis is the direction of the depth of field of the triangulation sensor. The produced point cloud is high-resolution, with the sampling densities ranging from 68 to 246 μm along the x-axis and from 12.4 to 160 μm along the z-axis. During the measurement, the system is activated and deactivated by the encoder. Once the scan is completed, all of the acquired profiles are stored. For each profile, the laser sensor records the coordinates of the x- and z-axes, where the x-values represent the laser’s field of view and the z-values capture the distance between the sensor and the target. The data are then rearranged into three 1D vectors to construct the 3D point cloud. Next, a segmentation process, based on the distance between the points, is subsequently applied to the point cloud, determining whether the points in the point cloud belong to the same cluster, resulting in an organized point cloud.
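The profile-stacking and distance-based segmentation steps above can be sketched as follows. This is a minimal illustration in NumPy, assuming each profile stores the laser-line and depth coordinates and that the scan-direction coordinate is derived from the encoder trigger index and spacing; the function names and the simple gap-based clustering are illustrative, not the prototype's exact implementation.

```python
import numpy as np

def profiles_to_point_cloud(profiles, x_step):
    """Stack 2D sensor profiles into a 3D point cloud.

    profiles: list of (N, 2) arrays of (line, depth) coordinates, one per
    encoder trigger; x_step is the trigger spacing along the scan direction.
    Returns an (M, 3) array of (x, y, z) points.
    """
    points = []
    for i, profile in enumerate(profiles):
        x = np.full(len(profile), i * x_step)  # scan position from trigger index
        points.append(np.column_stack([x, profile[:, 0], profile[:, 1]]))
    return np.vstack(points)

def segment_by_distance(points, max_gap):
    """Split the cloud into clusters wherever the distance between
    consecutive (sorted) points exceeds max_gap -- a simple analogue of
    the distance-based segmentation described above."""
    order = np.lexsort((points[:, 1], points[:, 0]))
    sorted_pts = points[order]
    gaps = np.linalg.norm(np.diff(sorted_pts, axis=0), axis=1)
    breaks = np.where(gaps > max_gap)[0] + 1
    return np.split(sorted_pts, breaks)
```

A cloud whose largest inter-point gap stays below `max_gap` yields a single cluster; disconnected regions fall into separate clusters.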
The organized point cloud is aligned with the product’s CAD model. To overcome the limitations of the conventional CAD representation methods, a mathematical model was developed. This model simulates a laser beam projection onto the CAD model’s surface, generating a virtual point cloud (see Section 5). By systematically moving the simulated beam along the component’s length, a spatially homogeneous point cloud is created, accurately capturing the surface geometry.
In the aligned point cloud, bounding boxes serve as the spatial constraints and are defined by specifying the minimum and maximum coordinates along the x-, y-, and z-axes. These bounding boxes focus the analysis on specific sections of the scanned item while excluding irrelevant points. The aligned point cloud’s vectors are filtered by checking each point against the bounding box limits to retain only the points that fall within these bounding boxes.
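The bounding-box filtering step reduces to a vectorized range check per axis; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def filter_by_bounding_box(points, box_min, box_max):
    """Keep only the points inside an axis-aligned bounding box.

    points: (N, 3) array of (x, y, z) coordinates;
    box_min, box_max: length-3 minimum and maximum limits per axis.
    """
    mask = np.all((points >= box_min) & (points <= box_max), axis=1)
    return points[mask]
```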
To measure the dimensional characteristics, the aligned point cloud is used by a KNN algorithm to compute the K-nearest neighbors for each point, capturing contextual neighboring information. The centroid of these is calculated as the mean of their coordinates, and the distance of each point from this centroid is used to identify key geometric aspects, such as holes and their centers. This ensures effective utilization of neighboring information. In addition to KNN, the percentile thresholds control the sensitivity to deviations. These balance the inclusion of deviations while minimizing the influence of noise. To optimize the detection parameters, a grid search was applied. The parameters in the grid search included the neighborhood size (k), which was explored in the range of 31 to 37, with steps of 2, for larger holes (over 100 mm in diameter) and 27 to 33, with steps of 2, for smaller holes (less than 100 mm in diameter). The percentile threshold ranged between the 85th and 90th percentiles with a step of 1. These ranges allowed the algorithm to focus on capturing the relevant details of the hole boundaries without overfitting to noise or missing smaller features. Finally, the distance threshold ranged between 1.5 and 3.0 units, refining the boundary point selection. The range was spaced across 10 steps, as required to isolate features that were geometrically distinct from their surroundings.
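The neighborhood-centroid criterion above can be sketched as follows: interior points lie near the centroid of their k nearest neighbors, whereas points on a hole rim are offset from it, so thresholding the offset at a high percentile isolates candidate boundary points. This sketch uses brute-force neighbor search for self-containment; the prototype uses Scikit-learn's KNN implementation, and the default parameter values below are taken from the grid-search ranges reported above.

```python
import numpy as np

def detect_boundary_points(points, k=31, percentile=88):
    """Flag candidate boundary points (e.g. hole rims) as those whose
    offset from the centroid of their k nearest neighbours exceeds the
    given percentile of all offsets."""
    # Brute-force pairwise distances (illustrative; fine for small clouds).
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    idx = np.argsort(dists, axis=1)[:, :k]          # k nearest, incl. the point itself
    centroids = points[idx].mean(axis=1)            # neighbourhood centroids
    offsets = np.linalg.norm(points - centroids, axis=1)
    return points[offsets > np.percentile(offsets, percentile)]
```

On a symmetric interior neighborhood the centroid coincides with the point itself, so only rim points survive the threshold.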
The refined boundary points are then used to fit a geometric model. For circular features, such as holes, the methodology employs a least-squares optimization method to fit a circle. The process begins with an initial estimation based on the centroid of the boundary points and their mean radial distance. The least-squares method iteratively minimizes the radial discrepancies between the boundary points and the candidate circle, refining the center coordinates and radius for the best fit.
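The circle-fitting step described above can be sketched with SciPy's least-squares optimizer (the paper names SciPy for this role; the exact formulation below is an illustrative assumption): start from the centroid and mean radial distance, then minimize the radial residuals.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_circle(boundary_xy):
    """Least-squares circle fit to (N, 2) boundary points.

    Initial guess: centroid of the points and their mean radial distance;
    the optimiser then refines centre and radius by minimising the radial
    discrepancies, as described above. Returns ((cx, cy), radius)."""
    cx0, cy0 = boundary_xy.mean(axis=0)
    r0 = np.mean(np.linalg.norm(boundary_xy - [cx0, cy0], axis=1))

    def residuals(params):
        cx, cy, r = params
        return np.linalg.norm(boundary_xy - [cx, cy], axis=1) - r

    sol = least_squares(residuals, x0=[cx0, cy0, r0])
    cx, cy, r = sol.x
    return (cx, cy), r
```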
By aggregating the results across multiple iterations, this methodology ensures robust and representative measurements of the feature’s true geometry, minimizing noise and outliers while confirming the successful detection and measurement of the features.
Dimensional measurements extend beyond individual features to the spatial relationships between the features of a product. These include the distances between the centers of adjacent holes and the distances between the centers of holes and the edges of a product. Measurements are performed in both 3D space and 2D projections using Euclidean distances.
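The inter-feature distance computation is a direct Euclidean norm; a minimal sketch, assuming for illustration that the 2D measurement is a projection onto the x-y plane:

```python
import numpy as np

def feature_distances(center_a, center_b):
    """Euclidean distance between two feature centres, both in full 3D
    and in a 2D projection onto the x-y plane (the projection plane here
    is an illustrative assumption)."""
    a, b = np.asarray(center_a, float), np.asarray(center_b, float)
    d3 = np.linalg.norm(a - b)          # distance in 3D space
    d2 = np.linalg.norm(a[:2] - b[:2])  # distance in the 2D projection
    return d3, d2
```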
In addition, upon the application of the methodology, no human intervention is required. It should be noted that during the initial application of the methodology, human input is required to perform the initial CAD model alignment of the captured point cloud, as well as to define the necessary bounding boxes for the aligned point cloud.
The validation of the reliability of the hybrid AI system is achieved through an uncertainty analysis, which is performed based on the GUM. The methodology calculates the mean, which represents the arithmetic average of the measurements; the standard uncertainty, which quantifies the variability in the measurements relative to the number of observations; the expanded uncertainty, which defines a broader interval that accounts for most of the variability in the measurements; the confidence interval’s lower bound, which specifies the smallest plausible value of the measurement; and the confidence interval’s upper bound, which indicates the largest plausible value of the measurement, as defined in [46].
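The GUM quantities listed above can be sketched as a Type A evaluation of repeated measurements; the coverage factor k = 2 for an approximately 95% confidence level is a standard choice under the GUM, assumed here for illustration.

```python
import numpy as np

def gum_summary(measurements, coverage_factor=2.0):
    """Type A evaluation per the GUM: mean, standard uncertainty of the
    mean, expanded uncertainty (k = 2 for ~95% coverage), and the
    resulting confidence interval bounds."""
    x = np.asarray(measurements, dtype=float)
    mean = x.mean()
    std_u = x.std(ddof=1) / np.sqrt(len(x))  # standard uncertainty of the mean
    exp_u = coverage_factor * std_u          # expanded uncertainty
    return {"mean": mean,
            "standard_uncertainty": std_u,
            "expanded_uncertainty": exp_u,
            "ci_lower": mean - exp_u,
            "ci_upper": mean + exp_u}
```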
Hence, this methodology integrates aligned point cloud data, precise filtering, advanced area-of-interest detection through the inclusion of the contextual neighboring information, geometric fitting, dimensional measurements, and an uncertainty analysis, providing a framework for automated quality control using 3D point cloud data.

4. Implementation

The proposed approach was implemented into a prototype, whose architecture is seen in Figure 2. The point cloud is captured by a laser line triangulation system, using commercially available sensors. The laser triangulation system, the Wenglor MLSL 134, employs a laser wavelength of 450 nm and is equipped with protective housing and a cooling module to protect it against harsh manufacturing environments, keeping the sensor within its working temperature range of 0 to 45 degrees Celsius. The positioning system employs an encoder, the RLS LM10, which is a non-contact high-speed sensor designed for linear motion sensing. It features a compact readhead that rides at up to 1.0 mm from the self-adhesive tape.
The prototype architecture of the software implementation is seen in Figure 2. The software was deployed on a PC running Windows 10, equipped with an Intel Core i9-10850K CPU, an NVIDIA GeForce RTX 2070 SUPER GPU (Nvidia, Santa Clara, CA, USA), and 32 GB of RAM.
The acquired point cloud data are transmitted to an MQTT Broker under a specific topic, in JSON format, that contains information related to the product, such as its type and its unique identifier, and the aligned point cloud data in Base64 format. The JSON communicated in the MQTT topic is received by a Node-RED flow, which orchestrates the hybrid AI system’s execution.
Supporting modularity and scalability, a customized implementation of the AI system was dockerized, and Node-RED orchestrated the execution of the specific Docker image based on the product type information contained in the JSON. Each image contains a Python (version 3.10) Fast API (version 0.70.0) app triggered by HTTP Post Requests when provided with the encoded point cloud. The data are validated using Fast API and Pydantic (version 1.8.2) and are temporarily saved in the Docker environment as a .ply file using the Python Tempfile module. The file is then loaded into memory using Open3D (version 0.18.0). NumPy (version 1.21.6) translates the file into an array with x-, y-, and z-coordinates, Scikit-learn implements KNN for feature detection and boundary identification, and SciPy (version 1.7.3) reads the data to perform the spatial data analysis and least-squares optimization. The extracted features are communicated back to Node-RED (version 4.0.2) in response to the HTTP Request, which is then stored in a PostgreSQL (version 16.3) database.
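The decode-and-stage step of the pipeline above can be sketched with the standard library alone. The JSON field names below are illustrative assumptions (the paper does not specify the payload schema); in the prototype, the staged .ply file is subsequently loaded with Open3D and converted to a NumPy array.

```python
import base64
import json
import tempfile

def stage_point_cloud(mqtt_payload: str) -> str:
    """Decode the Base64-encoded point cloud carried in the JSON message
    and stage it as a temporary .ply file, mirroring the pipeline
    described above. Returns the file path that would be handed to the
    point cloud loader (Open3D in the prototype)."""
    msg = json.loads(mqtt_payload)
    ply_bytes = base64.b64decode(msg["point_cloud_b64"])  # assumed field name
    tmp = tempfile.NamedTemporaryFile(suffix=".ply", delete=False)
    tmp.write(ply_bytes)
    tmp.close()
    return tmp.name
```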
The MQTT protocol used ensures real-time communication by transmitting the point cloud data and product information instantly to the Node-RED flow for processing. Finally, this integration allows the hybrid AI system to analyze the dimensional features and return the results in near real time, which supports inline quality control.

5. The Use Case

The proposed approach was applied to the automated line of a steel trailer arm manufacturer. Currently, quality inspection for dimensional defects is manual and conducted at the end of the line in a dedicated inspection section. This motivates an inline quality control approach that enables early defect detection, preventing defective products from being processed further down the line. Furthermore, the extraction of geometrical features from the product depends on its depth and curvature, necessitating the adoption of a 3D inspection system. To facilitate this, the laser line triangulation system was employed in the manufacturing environment to scan point clouds of the trailer arms. The setup of the inspection system is presented in Figure 3, together with an indicative point cloud for the trailer arm. As seen in Figure 3b, the red points correspond to the beams projected by the laser line triangulation instrument onto the product, and the green points are the intersections between the beams and the product's CAD model. Three different product types were scanned during the experiment.
The experimental process to verify the performance of the hybrid AI system comprised repeatability and reproducibility scans in a controlled environment. During the repeatability tests, the same product was scanned multiple times to quantify how consistently the system reproduced the same result. During the reproducibility tests, slight translational and rotational variations in the position of the product were introduced to demonstrate the reliability of the measurement system once installed in the production line. The experimental testbed included the laser line triangulation sensor setup shown in Figure 3, while the bar was held by an industrial robot. The number of point clouds generated in each test can be found in Table 1, amounting to a total of 226 point clouds.
Using the acquired point clouds, the implemented hybrid AI system was employed to measure critical dimensional properties of the steel trailer arms, which include the radii of the holes in the product, the distances between the centers of specific holes, and the distances between the centers of specific holes and the edges of the product. The targeted dimensional features under examination for all product types are depicted in Figure 4. It should be highlighted that the product displayed in Figure 4 is of type SJ; however, the distances and diameters to be detected for the other two product types are similar in nature to those for product type SJ, with the exception of product type SM, which lacks the V7 Top Middle and V7 Bottom Middle holes.
The design specifications for the products, meaning the tolerances of the dimensional features under examination, are provided by the manufacturer and can be seen in Table 2, while the cumulative results for all product types are presented in Table 3.
To compare the results extracted through application of the proposed hybrid AI system with the tolerances described in Table 2, the GUM was used, as defined in [46]. Thus, the standard and expanded uncertainties were calculated together with the confidence intervals to identify whether the extracted measurements were reliably within the design tolerances.
Similarly, for product type SN, the standard uncertainty for all of the measurements remains below the 0.2 mark, while the expanded uncertainties were under 0.15 mm with a 95% confidence level. Despite the elevated results when compared with those for product type SJ, the reported uncertainty remains within acceptable limits according to [46] and the design specification of the product as provided by the manufacturer (Table 2).
Lastly, regarding product type SM, the standard uncertainty for all of the measurements remains below the 0.072 mark, while the expanded uncertainties were under 0.16 mm with a 95% confidence level. These results further validate the approach’s applicability and adaptability to different product types since the reported uncertainty is still within the acceptable limits according to [46].
In addition, the results were validated further by using a coordinate-measuring machine (CMM) to perform a test in a controlled environment using the same products used during the testing of the proposed methodology. The distances calculated using the CMM were compared to the extracted measurements by using metrics such as the average accuracy and the average error. On average, for all product types, the experiment conducted resulted in an average accuracy of 95.78% and an average error of 4.22%. This shows high accuracy in the measurement extraction and that the measurements closely resemble the outputs of high-precision measuring tools.
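The CMM comparison above reduces to per-feature relative errors; a minimal sketch, assuming for illustration that accuracy is computed as 100 × (1 − |measured − reference| / reference), averaged over features (the exact metric used in the study is not stated):

```python
import numpy as np

def accuracy_vs_reference(measured, reference):
    """Average accuracy and average error (both in %) of the system's
    measurements against CMM reference values, under the assumed metric
    error = 100 * |measured - reference| / reference."""
    m = np.asarray(measured, dtype=float)
    r = np.asarray(reference, dtype=float)
    errors = 100.0 * np.abs(m - r) / r
    return 100.0 - errors.mean(), errors.mean()
```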
Based on the experimental results, the evaluation of the spatial relationships between neighboring points using the KNN algorithm produces localized point clusters that accurately represent geometric features, such as product holes. These clusters enhance the robustness of feature detection, even in the presence of noisy data, due to the KNN algorithm’s ability to incorporate contextual information. By leveraging this contextual understanding, the hybrid AI system minimizes false feature identification and ensures precise measurement of the product’s dimensional characteristics.
As seen in Figure 5, a comparison of the measured values against their specified upper and lower tolerances was performed. Notable deviations include those for V6 Top Side and V6 Bottom Side, where the measurements for SM exceed the upper tolerance limits due to the presence of noise in the data. In addition, product type SJ had the most deviations in the hole diameters, specifically for the holes V7 Top Left, V7 Top Right, V7 Top Middle, and V7 Bottom Middle, given the abnormalities in the distribution of the normals for these specific holes in the point cloud data. Similar observations can be made for product type SN for the measurements V7 Top Left, V7 Bottom Right, and V7 Bottom Left. In contrast, measurements such as V27 Top, V27 Bottom, and the distance for the middle holes consistently fall within the specified tolerances across all three product types, demonstrating adherence to the manufacturing specifications.

6. Discussion

As noted in [47], validating metrology results through an uncertainty analysis is critical in manufacturing. In this context, the robustness of the proposed approach is further assured through the application of the GUM, with the results confirming that the hybrid AI system’s measurements fall within the design tolerances specified in Table 2.
The literature was reviewed for similar approaches to extracting dimensional features for quality control in manufacturing, to compare their reported performance against that of the proposed hybrid AI system. In this context, a robot-mounted vision-based setup is discussed in [8], which has been used to extract geometrical features from the surface of metal products in manufacturing, achieving an average accuracy of 80% for selected dimensional points. Similarly, a 3D measurement system was described in [48] that used ridge detection for geometric feature measurement; despite its robust measurement performance, its deployment outside the line limits its adoption in a real-world industrial environment. These findings signify the potential of the proposed hybrid AI system, which achieves both inline deployment and a high measurement accuracy of over 95%, as demonstrated by the GUM and the comparative analysis between its results and the results extracted using the CMM.
Furthermore, validation highlights the importance of the hybrid AI system in detecting dimensional deviations in products. By identifying these deviations during production, the proposed approach prevents unnecessary processing of defective products, directly reducing resource consumption, waste, and emissions, contributing to improved manufacturing sustainability [49]. In addition, unlike commercial line scanners, the modular design in the proposed solution allows for its customization for specific industrial needs per product type beyond the predefined capabilities of conventional systems.
Moreover, the deployment of the hybrid AI system at the edge ensures low computation times. This is especially important for SME manufacturers, who lack the available resources to deploy the solution on high-performance servers. In addition, edge deployment ensures data security since the captured data are not exposed outside the manufacturer’s network. In this context, the computational time for the hybrid AI system to produce an output measurement of the critical dimensions of a product is approximately 1 s. While such a duration labels the proposed hybrid AI solution as a near real-time approach, it still allows for edge deployment, supporting time-critical operations such as inline quality control in manufacturing.
In addition, industrial robots, such as the one holding the product during the point cloud acquisition, are characterized by their pose repeatability. Robots, such as the one used in the context of this study, are high-precision industrial robots, with a pose repeatability in the range of 0.01 to 0.05 mm [50]. While this level of repeatability introduces minimal variation into the scanning process, this does not significantly influence the results of the hybrid AI system given its performance as demonstrated by the performed GUM and the comparison of its outputs against the results of the CMM.
The replicability of the solution, to make it applicable to different products, was considered during the design of the hybrid AI system. The modularity of the hybrid AI system allows for easy customization of critical parameters such as the K of the KNN. Nevertheless, new products should share similar characteristics, such as having dimensions whose calculation relies on the correct identification of the holes and edges of the product.
Lastly, commercial scanners are available that possess the ability to capture product point clouds in manufacturing lines and are able to evaluate specific geometrical features but cannot integrate with CAD models for the alignment of point cloud data and lack straightforward customizability [51]. Therefore, the value of the proposed approach is its adoption of built-in CAD-based alignment of the acquired point cloud and the minimal customization needed for its deployment. With a limited number of parameters available for customization, such as the k in the KNN and the percentile threshold, the hybrid AI system is easily adaptable to different types of products that share similar characteristics to those under examination in the current study.

7. Conclusions

This study presents a methodology for inline quality control that utilizes laser line triangulation to generate CAD-oriented 3D point clouds, subsequently analyzed using a hybrid AI system for dimensional feature measurement.
The results demonstrated the system’s capability to extract precise measurements from point clouds under real-world conditions, with deviations within the industrial tolerances. These were further validated through an uncertainty analysis, conducted in accordance with [46]. The standard and expanded uncertainties remained below 0.2 mm and 0.15 mm, respectively, underscoring the solution’s reliability. These findings validate the methodology’s potential to support manufacturers in adopting proactive NDI systems, enabling early defect detection and reducing waste and resource consumption.
This study highlights the significance of contextual information in point cloud processing and demonstrates a practical, scalable, and edge-deployable solution for NDI, aligning with the industry’s goals of enhancing quality control and sustainability.
In addition, while the proposed hybrid AI system demonstrates high accuracy in measurement calculation given the collected results, certain challenges may arise with geometries featuring extreme curvatures, deep recesses, or highly reflective surfaces. These geometries can introduce noise, occlusions, or misalignment into the point cloud data, which may impact the accuracy of feature extraction. Furthermore, complex overlapping features may lead to erroneous clustering or boundary detection. Future work will focus on enhancing the system’s robustness to such challenging geometries by incorporating advanced noise filtering and adaptive feature extraction techniques.
Lastly, while the methodology was validated through a single case study, its modular and adaptable design holds potential for broader industrial applications. Future research will focus on expanding its validation across diverse manufacturing contexts, such as automotive and aerospace sectors, and optimizing the algorithm’s accuracy and robustness through advanced grid search techniques.

Author Contributions

Conceptualization: N.N., P.C. (Paolo Castellini) and P.C. (Paolo Catti). Methodology: M.N., P.C. (Paolo Castellini) and W.v.d.K. Software: M.N. and S.D. Validation: M.N., S.D., P.C. (Paolo Catti) and W.v.d.K. Formal analysis: N.N. Data curation: S.D., W.v.d.K., and M.N. Writing—original draft preparation: M.N. and S.D. Writing—review and editing: N.N. and P.C. (Paolo Catti). Project administration: N.N. and K.A. Funding acquisition: N.N. and K.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the HORIZON-CL4-2021-TWIN-TRANSITION-01 openZDM project under Grant Agreement No. 101058673.

Data Availability Statement

The data cannot be made available due to the manufacturer’s confidentiality requirements.

Conflicts of Interest

The author Wilhelm van de Kamp was employed by the company VDL WEWELER bv. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Chryssolouris, G. Manufacturing Systems: Theory and Practice; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013; ISBN 1-4757-2213-3. [Google Scholar]
  2. Mitra, A. Fundamentals of Quality Control and Improvement; John Wiley & Sons: Hoboken, NJ, USA, 2016; ISBN 1-118-70514-9. [Google Scholar]
  3. Chryssolouris, G.; Papakostas, N.; Mavrikios, D. A Perspective on Manufacturing Strategy: Produce More with Less. CIRP J. Manuf. Sci. Technol. 2008, 1, 45–52. [Google Scholar] [CrossRef]
  4. Colledani, M.; Tolio, T.; Fischer, A.; Iung, B.; Lanza, G.; Schmitt, R.; Váncza, J. Design and Management of Manufacturing Systems for Production Quality. CIRP Ann. 2014, 63, 773–796. [Google Scholar] [CrossRef]
  5. Nikolakis, N.; Catti, P.; Chaloulos, A.; van de Kamp, W.; Coy, M.P.; Alexopoulos, K. A Methodology to Assess Circular Economy Strategies for Sustainable Manufacturing Using Process Eco-Efficiency. J. Clean. Prod. 2024, 445, 141289. [Google Scholar] [CrossRef]
  6. Raisul Islam, M.; Zakir Hossain Zamil, M.; Eshmam Rayed, M.; Mohsin Kabir, M.; Mridha, M.F.; Nishimura, S.; Shin, J. Deep Learning and Computer Vision Techniques for Enhanced Quality Control in Manufacturing Processes. IEEE Access 2024, 12, 121449–121479. [Google Scholar] [CrossRef]
  7. Taatali, A.; Sadaoui, S.E.; Louar, M.A.; Mahiddini, B. On-Machine Dimensional Inspection: Machine Vision-Based Approach. Int. J. Adv. Manuf. Technol. 2024, 131, 393–407. [Google Scholar] [CrossRef]
  8. Khan, A.; Mineo, C.; Dobie, G.; Macleod, C.; Pierce, G. Vision Guided Robotic Inspection for Parts in Manufacturing and Remanufacturing Industry. J. Remanufactur. 2021, 11, 49–70. [Google Scholar] [CrossRef]
  9. Sioma, A. 3D Imaging Methods in Quality Inspection Systems. Proc. SPIE 2019, 11176, 111760L. [Google Scholar]
  10. Rezaei-Malek, M.; Mohammadi, M.; Dantan, J.-Y.; Siadat, A.; Tavakkoli-Moghaddam, R. A Review on Optimisation of Part Quality Inspection Planning in a Multi-Stage Manufacturing System. Int. J. Prod. Res. 2019, 57, 4880–4897. [Google Scholar] [CrossRef]
  11. Chen, Y.; Peng, X.; Kong, L.; Dong, G.; Remani, A.; Leach, R. Defect Inspection Technologies for Additive Manufacturing. Int. J. Extrem. Manuf. 2021, 3, 022002. [Google Scholar] [CrossRef]
  12. Verna, E.; Genta, G.; Galetto, M.; Franceschini, F. Planning Offline Inspection Strategies in Low-Volume Manufacturing Processes. Qual. Eng. 2020, 32, 705–720. [Google Scholar] [CrossRef]
  13. Gao, Y.; Li, X.; Wang, X.V.; Wang, L.; Gao, L. A Review on Recent Advances in Vision-Based Defect Recognition towards Industrial Intelligence. J. Manuf. Syst. 2022, 62, 753–766. [Google Scholar] [CrossRef]
  14. Neogi, N.; Mohanta, D.K.; Dutta, P.K. Review of Vision-Based Steel Surface Inspection Systems. J. Image Video Proc. 2014, 2014, 50. [Google Scholar] [CrossRef]
  15. Park, M.; Jeong, J. Design and Implementation of Machine Vision-Based Quality Inspection System in Mask Manufacturing Process. Sustainability 2022, 14, 6009. [Google Scholar] [CrossRef]
  16. Nascimento, R.; Martins, I.; Dutra, T.A.; Moreira, L. Computer Vision Based Quality Control for Additive Manufacturing Parts. Int. J. Adv. Manuf. Technol. 2023, 124, 3241–3256. [Google Scholar] [CrossRef]
  17. Zhou, L.; Zhang, L.; Konz, N. Computer Vision Techniques in Manufacturing. IEEE Trans. Syst. Man Cybern. Syst. 2023, 53, 105–117. [Google Scholar] [CrossRef]
  18. Swojak, N.; Wieczorowski, M.; Jakubowicz, M. Assessment of Selected Metrological Properties of Laser Triangulation Sensors. Measurement 2021, 176, 109190. [Google Scholar] [CrossRef]
  19. Álvarez, I.; Enguita, J.M.; Frade, M.; Marina, J.; Ojea, G. On-Line Metrology with Conoscopic Holography: Beyond Triangulation. Sensors 2009, 9, 7021–7037. [Google Scholar] [CrossRef]
  20. Huang, W.; Kovacevic, R. A Laser-Based Vision System for Weld Quality Inspection. Sensors 2011, 11, 506–521. [Google Scholar] [CrossRef]
  21. Brosed, F.J.; Aguilar, J.J.; Guillomía, D.; Santolaria, J. 3D Geometrical Inspection of Complex Geometry Parts Using a Novel Laser Triangulation Sensor and a Robot. Sensors 2010, 11, 90–110. [Google Scholar] [CrossRef]
  22. Huo, L.; Liu, Y.; Yang, Y.; Zhuang, Z.; Sun, M. Review: Research on Product Surface Quality Inspection Technology Based on 3D Point Cloud. Adv. Mech. Eng. 2023, 15, 16878132231159523. [Google Scholar] [CrossRef]
  23. Belšak, R.; Gotlih, J.; Karner, T. Simulating and Verifying a 2D/3D Laser Line Sensor Measurement Algorithm on CAD Models and Real Objects. Sensors 2024, 24, 7396. [Google Scholar] [CrossRef] [PubMed]
  24. Lu, W.; Zhang, X.; Jiang, X.; Hong, T. Research on Point Cloud Processing of Gear 3D Measurement Based on Line Laser. J. Braz. Soc. Mech. Sci. Eng. 2024, 46, 645. [Google Scholar] [CrossRef]
  25. Heczko, D.; Oščádal, P.; Kot, T.; Huczala, D.; Semjon, J.; Bobovský, Z. Increasing the Reliability of Data Collection of Laser Line Triangulation Sensor by Proper Placement of the Sensor. Sensors 2021, 21, 2890. [Google Scholar] [CrossRef]
  26. So, E.W.Y.; Michieletto, S.; Menegatti, E. Calibration of a Dual-Laser Triangulation System for Assembly Line Completeness Inspection. In Proceedings of the 2012 IEEE International Symposium on Robotic and Sensors Environments Proceedings, Magdeburg, Germany, 16–18 November 2012; IEEE: Magdeburg, Germany, 2012; pp. 138–143. [Google Scholar]
  27. Denayer, M.; De Winter, J.; Bernardes, E.; Vanderborght, B.; Verstraten, T. Comparison of Point Cloud Registration Techniques on Scanned Physical Objects. Sensors 2024, 24, 2142. [Google Scholar] [CrossRef]
  28. Liu, J. An Adaptive Process of Reverse Engineering from Point Clouds to CAD Models. Int. J. Comput. Integr. Manuf. 2020, 33, 840–858. [Google Scholar] [CrossRef]
  29. Sundaram, S.; Zeid, A. Artificial Intelligence-Based Smart Quality Inspection for Manufacturing. Micromachines 2023, 14, 570. [Google Scholar] [CrossRef]
  30. Papacharalampopoulos, A.; Alexopoulos, K.; Catti, P.; Stavropoulos, P.; Chryssolouris, G. Learning More with Less Data in Manufacturing: The Case of Turning Tool Wear Assessment through Active and Transfer Learning. Processes 2024, 12, 1262. [Google Scholar] [CrossRef]
  31. Alexopoulos, K.; Catti, P.; Kanellopoulos, G.; Nikolakis, N.; Blatsiotis, A.; Christodoulopoulos, K.; Kaimenopoulos, A.; Ziata, E. Deep Learning for Estimating the Fill-Level of Industrial Waste Containers of Metal Scrap: A Case Study of a Copper Tube Plant. Appl. Sci. 2023, 13, 2575. [Google Scholar] [CrossRef]
  32. Cerquitelli, T.; Nikolakis, N.; O’Mahony, N.; Macii, E.; Ippolito, M.; Makris, S. (Eds.) Predictive Maintenance in Smart Factories: Architectures, Methodologies, and Use-Cases; Information Fusion and Data Science; Springer: Singapore, 2021; ISBN 9789811629396. [Google Scholar]
  33. Zhang, Y.; Feng, W.; Quan, Y.; Ye, G.; Dauphin, G. Dynamic Spatial–Spectral Feature Optimization-Based Point Cloud Classification. Remote Sens. 2024, 16, 575. [Google Scholar] [CrossRef]
  34. Liu, Y.; Fan, B.; Xiang, S.; Pan, C. Relation-Shape Convolutional Neural Network for Point Cloud Analysis. arXiv 2019, arXiv:1904.07601. [Google Scholar]
  35. Li, G.; Liu, C.; Gao, X.; Xiao, H.; Ling, B.W.-K. AFpoint: Adaptively Fusing Local and Global Features for Point Cloud. Multimed. Tools Appl. 2024, 83, 79093–79115. [Google Scholar] [CrossRef]
  36. Meng, X.; Lu, X.; Ye, H.; Yang, B.; Cao, F. A New Self-Augment CNN for 3D Point Cloud Classification and Segmentation. Int. J. Mach. Learn. Cyber. 2024, 15, 807–818. [Google Scholar] [CrossRef]
  37. Sun, T.; Liu, M.; Ye, H.; Yeung, D.-Y. Point-Cloud-Based Place Recognition Using CNN Feature Extraction. IEEE Sens. J. 2019, 19, 12175–12186. [Google Scholar] [CrossRef]
  38. Tong, G.; Shao, Y.; Peng, H. Learning Local Contextual Features for 3D Point Clouds Semantic Segmentation by Attentive Kernel Convolution. Vis. Comput. 2024, 40, 831–847. [Google Scholar] [CrossRef]
  39. Haznedar, B.; Bayraktar, R.; Ozturk, A.E.; Arayici, Y. Implementing PointNet for Point Cloud Segmentation in the Heritage Context. Herit. Sci. 2023, 11, 2. [Google Scholar] [CrossRef]
  40. Liu, H.; Tian, S. Deep 3D Point Cloud Classification and Segmentation Network Based on GateNet. Vis. Comput. 2024, 40, 971–981. [Google Scholar] [CrossRef]
  41. Li, Y.; Wang, Y.; Liu, Y. Three-Dimensional Point Cloud Segmentation Based on Context Feature for Sheet Metal Part Boundary Recognition. IEEE Trans. Instrum. Meas. 2023, 72, 2513710. [Google Scholar] [CrossRef]
  42. Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. arXiv 2017, arXiv:1706.02413. [Google Scholar]
  43. Lin, W.; Fan, W.; Liu, H.; Xu, Y.; Wu, J. Classification of Handheld Laser Scanning Tree Point Cloud Based on Different KNN Algorithms and Random Forest Algorithm. Forests 2021, 12, 292. [Google Scholar] [CrossRef]
  44. Luo, N.; Wang, Y.; Gao, Y.; Tian, Y.; Wang, Q.; Jing, C. kNN-Based Feature Learning Network for Semantic Segmentation of Point Cloud Data. Pattern Recognit. Lett. 2021, 152, 365–371. [Google Scholar] [CrossRef]
  45. Li, L.; Sung, M.; Dubrovina, A.; Yi, L.; Guibas, L.J. Supervised Fitting of Geometric Primitives to 3D Point Clouds. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 2647–2655. [Google Scholar]
  46. ISO/IEC Guide 98-3:2008; Uncertainty of Measurement: Part 3: Guide to the Expression of Uncertainty in Measurement (GUM:1995). ISO: Geneva, Switzerland, 2008. Available online: https://www.iso.org/standard/50461.html (accessed on 10 December 2024).
  47. Pant, M.; Nagdeve, L.; Moona, G.; Kumar, H. Estimation of Measurement Uncertainty of Additive Manufacturing Parts to Investigate the Influence of Process Variables. MAPAN 2022, 37, 765–775. [Google Scholar] [CrossRef]
  48. Giulietti, N.; Chiariotti, P.; Revel, G.M. Automated Measurement of Geometric Features in Curvilinear Structures Exploiting Steger’s Algorithm. Sensors 2023, 23, 4023. [Google Scholar] [CrossRef] [PubMed]
  49. Khakpour, R.; Ebrahimi, A.; Seyed-Hosseini, S.-M. An Integrated Approach of Zero Defect Manufacturing and Process Mining to Avoid Defect Occurrence in Production and Improve Sustainability. Int. J. Lean Six Sigma, 2024; ahead-of-print. [Google Scholar] [CrossRef]
  50. Pham, A.-D.; Ahn, H.-J. High Precision Reducers for Industrial Robots Driving 4th Industrial Revolution: State of Arts, Analysis, Design, Performance Evaluation and Perspective. Int. J. Precis. Eng. Manuf.-Green Technol. 2018, 5, 519–533. [Google Scholar] [CrossRef]
  51. Abreu, N.; Pinto, A.; Matos, A.; Pires, M. Procedural Point Cloud Modelling in Scan-to-BIM and Scan-vs-BIM Applications: A Review. ISPRS Int. J. Geo-Inf. 2023, 12, 260. [Google Scholar] [CrossRef]
Figure 1. Methodology overview.
Figure 2. Prototype architecture.
Figure 3. (a) Laser line triangulation sensor setup, (b) CAD-oriented virtual point cloud of the trailer arm.
Figure 4. Targeted dimensional features for the three product types examined.
Figure 5. Comparison of mean of measurements with the tolerances per product type.
Table 1. Point clouds tested per product type.

| Product Type | Repeatability Point Clouds | Reproducibility Point Clouds | Total Number of Point Clouds per Product Type |
|---|---|---|---|
| SM | 15 | 15 | 30 |
| SN | 16 | 20 | 36 |
| SJ | 80 | 80 | 160 |
Table 2. Design tolerances for hole radii and distances.

| Measurements | Tolerances (mm) |
|---|---|
| V27 Top Diameter | 12 ± 0.5 |
| V27 Bottom Diameter | 12 ± 0.5 |
| V7 Bottom Left Diameter | 6.5 ± 0.25 |
| V7 Top Left Diameter | 6.5 ± 0.25 |
| V7 Bottom Middle Diameter | 6.5 ± 0.25 |
| V7 Top Middle Diameter | 6.5 ± 0.25 |
| V7 Bottom Right Diameter | 6.5 ± 0.25 |
| V7 Top Right Diameter | 6.5 ± 0.25 |
| Middle Holes Distance | 104 ± 1 |
| V5 Bottom | 45 ± 10 (SM), 70 ± 10 (SN & SJ) |
| V5 Top | 45 ± 10 (SM), 70 ± 10 (SN & SJ) |
| V6 Bottom | 141.4 ± 1 |
| V6 Top | 141.4 ± 1 |
| V8 Left | <6 |
| V8 Right | <6 |
Table 3. Cumulative results for product types SN, SM, and SJ. Each cell lists the values for SN / SM / SJ.

| Measurements (mm) | Mean | Standard Uncertainty | Expanded Uncertainty | Lower CI | Upper CI |
|---|---|---|---|---|---|
| *Radii of holes* | | | | | |
| V27 Top | 11.9 / 12.3 / 12.1 | 0.006 / 0.010 / 0.02 | 0.012 / 0.020 / 0.041 | 11.8 / 12.2 / 12 | 11.9 / 12.3 / 12 |
| V27 Bottom | 12.4 / 12.1 / 12 | 0.057 / 0.024 / 0.009 | 0.115 / 0.049 / 0.019 | 12.3 / 12 / 12 | 12.5 / 12.1 / 12 |
| V7 Top Left | **6.8** ¹ / 6.5 / 6.9 | 0.012 / 0.025 / 0.027 | 0.024 / 0.051 / 0.055 | 6.7 / 6.5 / 6.9 | 6.8 / 6.6 / 7 |
| V7 Top Right | 6.6 / 6.5 / 6.8 | 0.011 / 0.009 / 0.025 | 0.023 / 0.018 / 0.050 | 6.6 / 6.5 / 6.7 | 6.6 / 6.5 / 6 |
| V7 Bottom Right | 6.8 / 6.5 / 6.7 | 0.072 / 0.013 / 0.022 | 0.143 / 0.026 / 0.045 | 6.6 / 6.5 / 6.7 | 6.9 / 6.5 / 6 |
| V7 Bottom Left | 6.8 / 6.5 / 7 | 0.008 / 0.020 / 0.026 | 0.016 / 0.040 / 0.053 | 6.8 / 6.4 / 6.9 | 6.8 / 6.5 / 7 |
| V7 Top Middle | 6.7 / N/A ² / 6.8 | 0.014 / N/A / 0.025 | 0.028 / N/A / 0.05 | 6.6 / N/A / 6.8 | 6.7 / N/A / 6.9 |
| V7 Bottom Middle | 6.6 / N/A / 6.8 | 0.010 / N/A / 0.025 | 0.020 / N/A / 0.05 | 6.6 / N/A / 6.7 | 6.6 / N/A / 6.8 |
| *Distances* | | | | | |
| Middle Holes | 104.1 / 103.9 / 104 | 0.037 / 0.022 / 0.027 | 0.074 / 0.043 / 0.054 | 104 / 103 / 103 | 104 / 104 / 104 |
| V5 Top | 64.1 / 42.15 / 64.4 | 0.105 / 0.065 / 0.086 | 0.211 / 0.130 / 0.173 | 63.9 / 42 / 64.2 | 64.4 / 42 / 64.6 |
| V5 Bottom | 67.4 / 42.1 / 62.8 | 0.105 / 0.082 / 0.052 | 0.210 / 0.165 / 0.056 | 67.2 / 42 / 62.7 | 67.6 / 42 / 62.9 |
| V6 Top Side | 141.3 / 142.6 / 140 | 0.014 / 0.055 / 0.028 | 0.028 / 0.110 / 0.126 | 141 / 142 / 139.9 | 141 / 142 / 140 |
| V6 Bottom Side | 141.4 / 142.7 / 140.1 | 0.038 / 0.081 / 0.024 | 0.075 / 0.162 / 0.048 | 141 / 142 / 140.1 | 141 / 142 / 140.2 |
| V8 Left | 0.7 / 3.1 / 1.8 | 0.063 / 0.072 / 0.063 | 0.126 / 0.143 / 0.126 | 0.6 / 3.0 / 1.7 | 0.86 / 3.3 / 2.0 |
| V8 Right | 5.8 / 3.3 / 2.9 | 0.150 / 0.069 / 0.054 | 0.300 / 0.138 / 0.108 | 5.5 / 3.2 / 2.8 | 6.16 / 3.4 / 3.0 |

¹ Measurements outside of tolerance limits are indicated in bold. ² For product type SM, the middle holes on the right part of the product are not present, unlike the other product types. For product type SJ, the standard uncertainty for all features remains below 0.03 mm, while the expanded uncertainties stay under 0.06 mm at a 95% confidence level, indicating that the proposed approach is accurate and reliable in extracting critical dimensional measurements of the product according to [46].
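The uncertainty figures in Table 3 follow the GUM Type A procedure [46]: the standard uncertainty is the standard deviation of the mean of repeated measurements, and the expanded uncertainty multiplies it by a coverage factor (k = 2 for roughly 95% confidence). A minimal sketch of this calculation, with invented radius readings for illustration:

```python
import math
import statistics

def expanded_uncertainty(samples, coverage_factor=2.0):
    """GUM Type A evaluation: standard uncertainty of the mean and
    expanded uncertainty at ~95% confidence (coverage factor k = 2)."""
    n = len(samples)
    u = statistics.stdev(samples) / math.sqrt(n)   # standard uncertainty
    U = coverage_factor * u                        # expanded uncertainty
    mean = statistics.fmean(samples)
    return mean, u, U, (mean - U, mean + U)        # mean, u, U, confidence interval

# Hypothetical repeated radius measurements of one hole (mm)
mean, u, U, ci = expanded_uncertainty([11.9, 11.91, 11.89, 11.92, 11.88])
```

A feature is then judged acceptable when the whole interval `ci` lies inside the design tolerance band of Table 2.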
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

