Article

Research on Surface Defect Positioning Method of Air Rudder Based on Camera Mapping Model

1. School of Mechanical Engineering, Hebei University of Technology, Tianjin 300130, China
2. State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin 300401, China
3. Tianjin Aisda Aerospace Technology Co., Ltd., Tianjin 300308, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(20), 3191; https://doi.org/10.3390/math12203191
Submission received: 19 August 2024 / Revised: 3 October 2024 / Accepted: 10 October 2024 / Published: 11 October 2024
(This article belongs to the Special Issue Advanced Research in Image Processing and Optimization Methods)

Abstract

(1) Background: Air rudders are used to control the flight attitude of aircraft, and their surface quality directly affects flight accuracy and safety. Traditional positioning methods can only obtain defect location information at the image level and cannot determine a defect's physical position on the air rudder surface, which offers no guidance for subsequent defect repair. (2) Method: We propose a defect physical-surface positioning method based on a camera mapping model. (3) Results: Repeated positioning experiments were conducted on three typical surface defects of the air rudder, yielding a maximum absolute error of 0.53 mm and a maximum uncertainty of 0.26 mm. Through hardware system and software development, real-time positioning of surface defects on the air rudder was realized, with a maximum axial positioning error of 0.38 mm for real-time defect positioning. (4) Conclusions: The proposed defect positioning method meets the required accuracy, providing a basis for surface defect repair in the air rudder manufacturing process. It also offers a new approach for surface defect positioning in similar products, with engineering application value.

1. Introduction

Air rudders are critical structural components in the aerospace industry, playing a pivotal role in controlling an aircraft's flight attitude. These components operate under high-temperature and impact load conditions, necessitating stringent requirements for structural integrity, material properties, and geometric accuracy [1]. The manufacturing process of air rudders encompasses composite material compression moulding, heat curing, and cutting, during which significant changes occur on both the surface and interior of the components [2]. These changes often result in surface defects, which can severely impact the air rudder's functionality, diminish material strength, shorten its service life, increase the risk of damage during operation, and pose significant threats to flight safety [3]. Locating surface defects on air rudders effectively reduces the risk of deploying unqualified parts and allows discovered defects to be repaired so that the rudders meet product standards, minimizing resource waste and reducing scrap rates. An intelligent positioning method is used to locate surface defects generated during the production stage of air rudders. Compared with traditional manual measurement, it avoids the influence of subjective factors on the positioning results and improves the real-time performance of defect positioning; compared with precision measuring instruments such as coordinate measuring machines, it is simple to operate, low in cost, cost-effective, and generalizes well. Accurately and efficiently locating air rudder surface defects can improve the air rudder qualification rate, reduce production costs, and provide valuable guidance for subsequent repairs and manufacturing process optimization [4]. The first part of this article describes the background of air rudder defect generation and localization. The second part discusses related work on defect localization and the contributions of this article. The third part elaborates on the defect localization method based on the camera mapping model. The fourth part presents experimental validation of this method. The fifth part discusses the results, and the sixth part summarizes the work and provides an outlook.

2. Related Work

Defect localization helps to identify defect areas and classify defects. It can also record defect location information for subsequent processing, repair, and other related tasks. In previous studies, surface defects are typically localized with either a mask of the defective area or a bounding box enclosing it. Defect localization methods can be categorized into image processing techniques, image segmentation network models, and target detection network models. Surface defect localization methods based on image processing enhance the contrast between surface defects and other areas through various image processing techniques to emphasize the precise location of the defects within an image. Reference [5] introduces a method for segmenting thermal imaging images of aerospace components using an active contour algorithm. Active segmentation seed points are selected from the moving average of pixel values obtained along the region of interest. Experimental results show that this method can successfully distinguish debonding defects, and its segmentation index is better than the Gabor filter and watershed algorithms. Reference [6] pointed out that only a few studies address signal processing for wind turbine blades based on ultrasonic B-scan image processing and extraction of defect size and location, and proposed an ultrasonic B-scan image processing technique: the B-scan image is denoised using the two-dimensional discrete wavelet transform, the normalized pixel density is compared along the scanning distance of the image region of interest, and a −3 dB threshold is applied to determine the location and size of the defect in the spatial domain, realizing localization of debonding defects. Image-processing-based localization relies on global or local image features to distinguish the defect area from the background; the localization result is relatively rough, cannot quantitatively describe the defect location, and cannot batch-process multiple defect samples.

Reference [7] proposed a YOLOv5 defect localization method for aerospace aluminium materials that integrates an attention mechanism and multi-scale features. The improved CBAM (Convolutional Block Attention Module) effectively focuses on the limited spatial feature information in the aluminium profile defect dataset. A bidirectional weighted feature fusion network is combined with a multi-scale feature fusion network with skip connections to aggregate high-resolution features and enrich the semantic expression of features. Finally, unfused new-scale feature maps are introduced into the detection layer to improve the localization of small target defects. Experimental results show that the detection accuracy of the improved YOLOv5 algorithm is improved by 6.2%, meeting the defect localization requirements of aerospace aluminium production sites. Reference [8] proposes a weakly supervised CNN model designed explicitly for detecting surface cracks in motor commutators. The model was trained on a small subset (approximately 5–30 images) of defect samples and does not require pre-trained weights. It comprises localization and classification networks, enabling simultaneous prediction of defect location and type. Experimental results demonstrated a detection accuracy of 99.5% for this model.
The defect localization method based on the target detection network uses bounding boxes as the localization result. However, these methods are sensitive to unexpected offsets: excessive penalties can reduce the accuracy of position prediction, and the positioning result remains at the image level, so defects cannot be accurately located on the physical surface of the inspected component. The surface defect positioning method based on the image segmentation network model relies on the defect segmentation function of various segmentation networks; the defect mask output by the network reflects the position of the defect in the image. The advantage of this method is that it uses end-to-end learning, can locate defects of different types and shapes, and is robust. Given the lack of high-quality annotated images for segmentation, most current deep learning algorithms require high training costs and high-quality datasets. Reference [9] proposed an optimizable image segmentation method (OISM) based on simple linear iterative clustering (SLIC), a feature transfer model, and random forest (RF) classifiers to solve the problem of small-sample image segmentation. In this method, SLIC extracts image boundaries through clustering, the U-net feature transfer model obtains multi-dimensional superpixel features, and the RF classifier predicts and updates the image segmentation results. The results show that the proposed method retains the target boundary more clearly than the improved U-net model and can reveal microscopic damage and crack propagation in high-performance aerospace engine components. Reference [10] proposed a vision-based pixel-level defect detection method. First, an attention mechanism and a feature fusion module were introduced to improve Mask Scoring R-CNN. Then, a new classification head consisting of four convolutional layers and one fully connected layer was proposed to reduce the influence of interference information in the defect area, and an aircraft skin defect dataset was established. Experimental results show that, compared with Mask R-CNN and Mask Scoring R-CNN, the proposed method improves segmentation accuracy by 21% and 19.59%, respectively. Defect localization based on image segmentation networks is limited by the receptive field of the CNN, resulting in blurred defect mask edges, reduced localization accuracy, and sensitivity to annotation information. The main representations of defect localization results include masks, contours, and rectangular boxes. These annotations only visualize defects and achieve accurate localization within defect images; they cannot provide the physical location information required during sample inspection and cannot meet the manufacturing industry's requirements for recording product quality inspection information. Moreover, defect localization has mainly remained at the image level, whereas practical applications require the precise physical location of the defect on the object under test to facilitate subsequent defect analysis and repair. Therefore, it is essential to quantitatively describe the defect location and achieve accurate localization on the physical surface of the object being tested.
To address the issues above, a camera mapping model-based approach for defect localization on the air rudder surface is proposed:
  • Use the GrabCut algorithm to segment the air rudder surface, establish the air rudder image coordinate system, and obtain the defect positioning coordinates at the image level.
  • Establish a camera mapping model to obtain the defect’s position on the air rudder’s physical surface and conduct repeated positioning experiments on three typical defects. The maximum absolute error of the positioning results is 0.53 mm, and the maximum uncertainty is 0.26 mm.
  • Relying on the hardware system and software interface, real-time positioning of air rudder surface defects is achieved. The maximum real-time positioning error is 0.38 mm. The positioning accuracy and speed meet actual production needs.

3. Methodology for Locating Surface Defects on Air Rudders

3.1. Analysis of Causes and Positioning Requirements for Surface Defects

The air rudder is mainly composed of a metal core, a rocker arm, and a rudder surface, as shown in Figure 1. Among these components, the rudder surface plays a key role because it is exposed to the airflow and changes its direction; it is therefore the focus of the air rudder manufacturing process. The rudder surface is made of lightweight, high-strength, chemically resistant short-fibre-reinforced composite materials formed by the moulding process to meet the demanding requirements of aerospace applications. The manufacturing process includes composite moulding, thermal curing, and cutting stages, which may change the size, surface roughness, and internal residual stress of the final product. Such changes can cause abnormalities in the rudder surface; in severe cases, surface defects may form.
Common defects of the rudder surface include cracks, pits, and mould sticking, as shown in Figure 2. Cracks mainly occur during mould cooling, where excessive temperature fluctuations generate residual stress; cracks affect the aerodynamic characteristics of the air rudder, causing the control surface to respond inaccurately and fail to control the flight attitude effectively. Pits mainly occur during the cutting process due to external physical impact; pits destroy the smoothness of the air rudder surface, causing turbulence and separation of the airflow passing over the control surface and reducing the efficiency of the air rudder. Mould sticking occurs during the demoulding stage when the release agent is insufficient or unevenly applied; it degrades the dynamic performance of the air rudder and, in severe weather conditions, further threatens flight stability. These defects are the focus of defect location in this article.
The purpose of locating defects on the surface of the air rudder is to obtain the defect's location information. Locating defects only in images has insufficient engineering significance and cannot support the repair process; ultimately, surface defects must be located on the physical surface of the air rudder. Defects on the surface of the air rudder are detected using a defect detection algorithm based on the Fs-CAE network and the SPC threshold [11], with the reconstructed residual used as the basis for defect location. The GrabCut image segmentation algorithm segments the defect area, and a bounding box is set to obtain the pixel coordinates of the defect in the image. The camera mapping model is established using the checkerboard calibration method, and the physical space coordinates of the defects on the surface of the air rudder are obtained from this model. The reliability of the defect location method is tested using uncertainty analysis, and real-time location software is developed. The overall process of air rudder surface defect location is shown in Figure 3.

3.2. Method for Precise Positioning of Air Rudder Surface Defects Based on Image Analysis

To achieve defect positioning at the image level, it is essential to establish an image positioning coordinate system as a quantitative reference for describing defect positions accurately. Establishing this coordinate system involves determining the origin and the directions of the coordinate axes, referenced to the edges of the air rudder body to facilitate subsequent identification of physical surface defects. It is therefore necessary to extract the contour of the air rudder surface from the image and determine the coordinate origin and axes from this outline. The GrabCut algorithm, an image segmentation method that combines machine-learned features with the idea of energy minimization, is used to segment the rudder area, obtain the rudder contour, establish the rudder image coordinate system, identify the defect area, and display the defect position using the circumscribed rectangle as the bounding box. The coordinates of its corner points in this coordinate system are used as the image-level positioning results.

Segmentation of Air Rudder Using the GrabCut Algorithm

In segmenting the air rudder's surface area, the complex background environment is a major interference factor, and all background except the rudder surface must be removed. A defect segmentation algorithm based on edge detection can only detect the edge of the rudder surface; it cannot remove irrelevant background information, which would make the established image-level coordinate system inaccurate. Image segmentation based on adaptive thresholding is sensitive to noise and illumination changes and cannot cope with complex backgrounds. Therefore, the GrabCut image segmentation algorithm is selected. GrabCut builds on the interactive graph-cut foreground segmentation method proposed by Boykov and Jolly [12], which utilizes iterative graph cuts. Interactive segmentation typically divides the image into two distinct parts: the "foreground" region representing the object and the "background" region. Initially, seed points are assigned within both areas as "hard constraints," serving as essential characteristics for accurate segmentation of the target object in the image. The remaining portion of the image is segmented automatically by computing the globally optimal solution among all segmentations that adhere to the given constraints.
The cost function is defined as a “soft constraint” for segmentation, taking into account the boundary attributes and regional characteristics of the specified area, ensuring the preservation of the target segmentation object’s boundaries. The GrabCut algorithm offers a notable advantage in providing a globally optimal solution for segmentation when the cost function is well defined. In contrast to traditional segmentation methods that rely on analyzing or predicting challenging global image features, which often lead to incomplete or over-segmentation, the segmentation results based on the globally optimal solution are directly influenced by the cost function definition, ensuring more reliable control over the segmentation process.
Regarding the definition of the cost function, let array $P$ encompass all pixels in an image, and consider unordered adjacent pixel pairs $\{p, q\}$ (where $p \in P$ and $q \in P$) in the standard eight neighbourhood directions. The vector $A = (A_1, \ldots, A_p, \ldots, A_{|P|})$ records the label ("object" or "background") assigned to each pixel $p$. Vector $A$ thus defines the pixel regions within the image, and the cost function $E(A)$ describes the "soft constraints" imposed by its boundary attribute $B(A)$ and region attribute $R(A)$:
$$E(A) = \lambda \cdot R(A) + B(A) \tag{1}$$
where $R(A) = \sum_{p \in P} R_p(A_p)$, $B(A) = \sum_{\{p,q\}} B_{\{p,q\}} \cdot \delta(A_p, A_q)$, and $\delta(A_p, A_q) = \begin{cases} 1, & A_p \neq A_q \\ 0, & \text{otherwise.} \end{cases}$
The coefficient $\lambda$ ($\lambda \geq 0$) in the formula reflects the relative importance of the region attribute $R(A)$ compared to the boundary attribute $B(A)$. The region attribute assumes that pixel $p$ may belong to either of two classes, "object" and "background", corresponding to $R_p(\text{obj})$ and $R_p(\text{bkg})$, respectively. Here, $R_p(\cdot)$ reflects the extent to which pixel $p$'s characteristics match those of the known object and background. $B_{\{p,q\}}$ represents the weight of the boundary attribute $B(A)$ between the pixel pair $\{p, q\}$.
The flow of the GrabCut algorithm is illustrated in Figure 4. Firstly, an image is provided and seed points representing the “object” and “background” are set. Subsequently, a graph structure with two terminals is constructed. Leveraging the known positions of the seed points within the image, a globally optimal segmentation that effectively separates the two terminals is computed. Finally, an original image segmentation mask is established based on this optimal segmentation.
The pixels in the original image are partitioned into "object" and "background" regions, with subsets $O$ and $B$ denoting the pixel sets containing the seed points for the "object" and "background", respectively. Thus, $O \subset P$ and $B \subset P$ with $O \cap B = \emptyset$. For vector $A$, the following holds true:
$$\forall p \in O, \quad A_p = \text{obj} \tag{2}$$
$$\forall p \in B, \quad A_p = \text{bkg} \tag{3}$$
The graph structure created is denoted as G = 〈V, E〉, where the node set V comprises pixels p in P. Additionally, two terminal nodes are established: the “object” terminal (S) and the “background” terminal (T), with
$$V = P \cup \{S, T\} \tag{4}$$
The set $E$ in the graph structure encompasses both neighbourhood connections and terminal connections. Each pixel $p$ is connected to terminals $S$ and $T$ through the terminal connections $\{p, S\}$ and $\{p, T\}$, respectively, and the neighbourhood connection between adjacent pixels $p$ and $q$ is denoted $\{p, q\}$. The weight of each connection in $E$ is presented in Table 1. Thus, $E$ can be expressed as follows:
$$E = \bigcup_{\{p,q\}} \{p, q\} \;\cup\; \bigcup_{p \in P} \left\{ \{p, S\}, \{p, T\} \right\} \tag{5}$$
After the aforementioned procedure, graph G is fully defined, and the segmentation boundary between the object and the background can be determined by identifying the optimal segmentation on graph G.
The objective of optimal segmentation is to minimize the connection cost between the two terminals in the graph structure. Let $F$ represent the set of all feasible cuts $C_f$ on graph $G$. The following rules apply: if $p$ and $q$ are connected to different terminals, then the neighbourhood connection $\{p, q\} \in C_f$; if $p \in B$, then the terminal connection $\{p, S\} \in C_f$. A feasible cut $C_f$ must sever at least one terminal connection at each pixel to separate the two terminals, but it cannot sever both terminal connections, since one of them could be restored without reconnecting the terminals, yielding a lower cost. Similarly, if $p$ and $q$ are connected to different terminals, severing their neighbourhood connection $\{p, q\}$ is necessary to separate the two terminals; if $p$ and $q$ are connected to the same terminal, the unnecessary neighbourhood connection $\{p, q\}$ should not be severed, to keep the cost minimal. Additionally, since the constant $K$ is greater than the sum of all neighbourhood connection costs at any given pixel, the corresponding terminal connection should not be severed: for instance, if $p \in O$ and $C_f$ severed $\{p, S\}$ (cost $K$), restoring $\{p, S\}$ and instead cutting all neighbourhood connections of $p$ (cost less than $K$) together with its connection to $T$ (cost 0) would create a less costly partition. These rules determine the pixels forming the segmentation boundary:
$$A_p = \begin{cases} \text{obj}, & \{p, T\} \in C_f \\ \text{bkg}, & \{p, S\} \in C_f \end{cases} \tag{6}$$
Utilize the maximum flow algorithm to ascertain the minimum cost connection on graph G and determine the optimal partition. The maximum flow algorithm incrementally augments the flow transmitted from terminal S to terminal T, considering the weights of connections in graph G. Upon termination, the maximum flow saturates the subgraph connected to terminal S, with the saturated edge corresponding to an optimal division on graph G.
The current optimal split has been calculated for a given initial set of seed points. Assuming the user introduces a new “object” seed to pixel p, which was previously unassigned any seed, Table 2 displays the resulting updated weight.
Then, the costs of the two terminal connections at pixel $p$ must be changed and the optimal cut on the new graph calculated. To accommodate the new "object" seed at pixel $p$, terminal connection weights are added according to Table 3, making the new weights consistent with the edge weights of pixels in $O$; because the extra constant $c_p$ is added at both terminal connections of the pixel, the relative ordering of the weights is unchanged, so the optimal cut is not changed.
The optimal cut on the new graph can then be obtained efficiently starting from the previous flow, and this process can be repeated to obtain the final segmentation boundary of the image. Through the above method, the cut boundary divides the pixels into two parts, and different pixel values are assigned to the pixels in each part to obtain the mask for object and background segmentation. Based on this mask, the object can be segmented from the image, completing the image segmentation process; the rudder area can thus be segmented from the complex background to locate defects at the image level.
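For reference, the mask-initialized GrabCut implementation in OpenCV follows this workflow directly. The sketch below is illustrative rather than the system's production code: the image file name and the background seed coordinates are assumptions, while the object seed points and the 20 iterations match the front-face settings reported in Section 4.2.1.

```python
import cv2
import numpy as np

# Minimal GrabCut sketch: seed-based segmentation of the rudder surface.
img = cv2.imread("rudder.png")  # illustrative file name
mask = np.full(img.shape[:2], cv2.GC_PR_BGD, dtype=np.uint8)  # start as "probably background"

# Hard constraints: mark known-object and known-background seed regions.
for (x, y) in [(200, 150), (350, 150), (200, 350), (650, 350)]:  # front-face object seeds (Section 4.2.1)
    cv2.circle(mask, (x, y), 10, cv2.GC_FGD, -1)
for (x, y) in [(30, 30), (1000, 30)]:                            # background seeds (illustrative)
    cv2.circle(mask, (x, y), 10, cv2.GC_BGD, -1)

bgd_model = np.zeros((1, 65), np.float64)  # internal GMM state required by OpenCV
fgd_model = np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, None, bgd_model, fgd_model, 20, cv2.GC_INIT_WITH_MASK)

# Pixels labelled FGD or PR_FGD form the rudder-surface mask; apply it to the image.
surface_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
segmented = cv2.bitwise_and(img, img, mask=surface_mask)
```

The mask-based initialization plays the role of the "hard constraints" above, while the iterative graph cut inside cv2.grabCut computes the globally optimal segmentation under the cost function.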

3.3. Physical Positioning Method for Detecting Defects on Air Rudder Surfaces

After obtaining a quantitative description of defects at the image level, it is necessary to convert the positioning results into physical coordinates on the surface of the air rudder. This conversion involves mapping from physical space to image space. A camera mapping model is established using camera calibration methods [13], enabling the transformation of defect coordinates from the image level to physical surface coordinates in order to meet positioning requirements and achieve accurate surface defect localization on the air rudder.

Establishment of Camera Mapping Model Based on Camera Calibration

(1) Linear Model
The process of camera image acquisition involves the conversion of physical space points into image pixels through rigid body, projection, and affine transformations. The object being photographed undergoes a rigid body transformation from the world coordinate system to the camera coordinate system, followed by a projection transformation from the camera coordinate system to the physical image coordinate system, and finally an affine transformation to map it onto the linear pixel coordinate system, as depicted in Figure 5.
The world coordinate system $P_w = (X_w, Y_w, Z_w)$ is a three-dimensional rectangular coordinate system in physical space, with its origin and axis directions typically determined by the specific context; coordinates are measured in millimetres. It serves as the reference for the position and orientation of the camera and subject in physical space.

The camera coordinate system $P_c = (X_c, Y_c, Z_c)$ is a three-dimensional rectangular coordinate system in physical space, serving as the reference frame for capturing and analyzing images. Its origin coincides with the optical centre of the camera lens. The $X_c$ and $Y_c$ axes are parallel to the sides of the lens image plane, and the $Z_c$ axis coincides with the central optical axis of the lens. Coordinates are measured in millimetres.

The image physical coordinate system $P_I = (x_I, y_I)$ is a two-dimensional rectangular coordinate system in physical space, defined on the camera's imaging plane. Its origin is the central point of the imaging plane, and the $x_I$ and $y_I$ axes are parallel to the two sides of the plane.

The pixel coordinate system $P_p = (u, v)$ is a two-dimensional rectangular coordinate system at the image level, based on the pixel arrangement of the camera imaging plane. The origin is the upper-left corner of the image, and the $u$- and $v$-axes are parallel to the two sides of the image; the unit of each axis is the pixel (px). Figure 6 illustrates all the aforementioned coordinate systems.
The aforementioned procedure can establish a corresponding mapping model to depict the transformation relationship between natural scenes and image pixels. The transition from the world coordinate system to the camera coordinate system entails a rigid body transformation process, characterized by the following transformation relationship:
$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{7}$$
The rotation matrix, denoted as R , encapsulates the rotational information inherent in the transformation process, while the translation matrix, denoted as T , captures the translational information associated with movement.
The transformation from the camera coordinate system to the image involves projecting three-dimensional physical coordinates onto a two-dimensional plane in physical space. The relationship between these transformations can be described as follows:
$$Z_c \begin{bmatrix} x_I \\ y_I \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} \tag{8}$$
where $f$ is the focal length of the lens. The transformation from the image physical coordinate system to the pixel coordinate system converts two-dimensional coordinates in physical space into two-dimensional coordinates at the image level. The transformation relationship is the following:
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_I \\ y_I \\ 1 \end{bmatrix} \tag{9}$$
The variables $d_x$ and $d_y$ represent the physical length of each pixel along the x-axis and y-axis of the image physical coordinate system, so $1/d_x$ and $1/d_y$ give the number of pixels per unit length in those directions. Typically, there is a misalignment between the pixel coordinate system and the image physical coordinate system, necessitating a translation of the coordinate axes; $(u_0, v_0)$ denotes the pixel coordinates of the origin of the image physical coordinate system (the principal point). By composing the aforementioned transformations, the mapping from world coordinates to image coordinates can be described:
$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{10}$$
After expansion and simplification, the camera mapping model is obtained:
$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 & 0 \\ 0 & f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = I_n \cdot E_x \cdot \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{11}$$
Here, $f_x = f/d_x$ and $f_y = f/d_y$. The internal parameter matrix $I_n$ represents the camera's intrinsic hardware parameters, while the external parameter matrix $E_x$ captures the translation and rotation transforming the world coordinate system into the camera coordinate system. For a given industrial camera, the intrinsic parameters are fixed by the hardware design and do not change, so the internal parameter matrix is the same for every image taken by that camera. The external parameter matrix is determined by the position and posture of the photographed object: planes at different distances correspond to different translation matrices, planes in different orientations correspond to different rotation matrices, and different points on the same spatial plane share the same external parameter matrix. It follows that all points on a given surface of the inspected air rudder correspond to the same external parameter matrix.
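To make the linear model concrete, the following minimal sketch applies Equation (11) in NumPy to project a world point on the rudder plane into pixel coordinates. All numeric parameter values here are illustrative assumptions, not the calibrated values reported in Section 4.3.

```python
import numpy as np

# Project a world point into pixel coordinates via Equation (11).
f_x, f_y, u0, v0 = 3300.0, 3300.0, 2700.0, 1800.0   # illustrative intrinsics
In = np.array([[f_x, 0.0,  u0, 0.0],
               [0.0, f_y,  v0, 0.0],
               [0.0, 0.0, 1.0, 0.0]])               # internal parameter matrix I_n (3x4)

R = np.eye(3)                                       # rotation (illustrative: camera faces the plane)
T = np.array([[0.0], [0.0], [500.0]])               # translation: 500 mm along the optical axis
Ex = np.vstack([np.hstack([R, T]), [0, 0, 0, 1]])   # external parameter matrix E_x (4x4)

Pw = np.array([10.0, 20.0, 0.0, 1.0])               # world point on the rudder plane, mm
uvw = In @ Ex @ Pw                                  # equals Z_c * [u, v, 1]^T
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]             # divide by Z_c to obtain pixel coordinates
```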
(2) Nonlinear Model
The linear model is a camera mapping model constructed under ideal imaging conditions and does not accurately represent practical application scenarios. In real imaging, the inherent perspective distortion of the lens is inevitable and significantly affects the resulting images: objects typically exhibit varying degrees of deformation due to lens distortion. Lens distortion can be categorized into three forms: radial distortion, centrifugal (decentering) distortion, and thin prism distortion [14], with radial distortion having the most significant impact on imaging outcomes. Although lens distortion is now well controlled through optimized lens-group design, high-standard manufacturing processes, and high-quality materials, it cannot be eliminated completely, and its presence adversely affects surface defect positioning.
To mitigate image distortion, the camera mapping model incorporates distortion parameters to quantitatively characterize the impact of lens distortion on the imaging process. Considering the varying effects of different distortions on imaging and the complexity of solving the model, radial distortion emerges as having a predominant influence on camera imaging. Consequently, this study solely focuses on addressing the impact of radial distortion on defect positioning and proposes a novel camera mapping model. Radial distortion can be mathematically represented through a series expansion when transforming image points into physical coordinates:
$$\begin{cases} \hat{x}_I = x_I + k_1 x_I r^2 + k_2 x_I r^4 \\ \hat{y}_I = y_I + k_1 y_I r^2 + k_2 y_I r^4 \end{cases} \tag{12}$$
Here, $\hat{x}_I, \hat{y}_I$ are the actual (distorted) physical image coordinates, $x_I, y_I$ are the ideal (distortion-free) physical image coordinates, $r^2 = x_I^2 + y_I^2$, and $k_1$ and $k_2$ are the first-order and second-order radial distortion coefficients; the vector $k$ composed of them is called the distortion vector. Combining Equations (9) and (12) yields the relationship between the pixel coordinates with and without the influence of distortion:
$$\begin{cases} \hat{u} = u + k_1 (u - u_0) r^2 + k_2 (u - u_0) r^4 \\ \hat{v} = v + k_1 (v - v_0) r^2 + k_2 (v - v_0) r^4 \end{cases} \tag{13}$$
where $\hat{u}, \hat{v}$ are the actual pixel coordinates with distortion, and $u, v$ are the ideal pixel coordinates without distortion.
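As an illustration of Equation (12), the sketch below applies the forward radial distortion model to ideal physical image coordinates; the helper function and its sample inputs are assumptions for demonstration. The inverse mapping (distorted to ideal) has no closed form and is typically solved iteratively, e.g., with OpenCV's cv2.undistortPoints.

```python
# Forward radial distortion model of Equation (12): map ideal (distortion-free)
# physical image coordinates to their distorted positions.
def distort(x_i: float, y_i: float, k1: float, k2: float):
    r2 = x_i**2 + y_i**2                   # r^2 = x_I^2 + y_I^2
    factor = 1.0 + k1 * r2 + k2 * r2**2    # 1 + k1*r^2 + k2*r^4
    return x_i * factor, y_i * factor

# Illustrative example: a point 2.0 x 1.5 mm off-centre with the k1, k2
# values reported later in Section 4.3 (used here purely for demonstration).
x_hat, y_hat = distort(2.0, 1.5, 0.152, 0.097)
```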

4. Results

4.1. Establishment of an Experimental Platform

The air rudder used in our experiment is trapezoidal in shape, with a long side length of 624.9 mm, a short side length of 238.7 mm, a height of 310 mm, and a thickness of 53 mm. The experimental platform for air rudder surface defect detection, as depicted in Figure 7, was designed and constructed. It comprises industrial cameras, light sources, fixtures, positioning guide rails, and other components. Additionally, it is equipped with corresponding platform control systems and computer operating systems. The platform incorporates two industrial cameras with lenses, four light sources, and power supply devices. This integrated apparatus offers an effective field of view measuring 700 × 400 mm and is specifically suitable for inspecting the control surface area of the air rudder. Moreover, the platform facilitates various functionalities such as fixed installation of air rudders, adjustment of industrial camera positions, image collection, and transmission to a computer for storage.
The experimental platform’s camera, lens, and light source are closely related to the final imaging quality. They will directly affect the effect of defect detection and the accuracy of defect location. The relevant key hardware, models, and main parameters are shown in Table 4.

4.2. Image Level Defect Location Experiment

4.2.1. Rudder Surface Area Image Segmentation

The air rudder image collected by the experimental platform of Section 4.1 is shown in Figure 8. The effective area for surface defect detection and positioning is the rudder surface area. It is necessary to remove the interference of the platform background on defect detection and positioning and to establish an image coordinate system based on the rudder surface area, so that defect positioning has a unified reference on both the physical surface and the image level.
We utilized the GrabCut algorithm for rudder surface area segmentation, with object seed points set in the rudder surface area and background seed points set in the background area to delimit the image foreground and background. The seed points were determined from the colour and edge information of the air rudder, and the optimal seed points were selected through multiple experiments. The seed points on the front of the air rudder were the upper-left (200, 150), upper-right (350, 150), lower-left (200, 350), and lower-right (650, 350) points; the seed points on the back were the upper-left (500, 150), upper-right (650, 150), lower-left (200, 350), and lower-right (650, 350) points. After 20 iterations, we obtained the optimal image segmentation, from which a boundary was derived as the segmentation mask, as shown in Figure 9.
The white area in the segmentation mask represents the area of the air rudder surface in the image, and the black area is the background area. Based on the segmentation mask, the segmentation result of the rudder surface area is shown in Figure 10. In the segmentation result, the rudder surface area is retained, and the irrelevant background area is removed.
The coordinates of the physical surface of the air rudder are defined with a corner point of the rudder surface as the origin and its boundaries as the coordinate axes. Correspondingly, at the image level, a two-dimensional coordinate system is established with the pixel at that corner point of the rudder surface as the origin, as shown in Figure 11. The horizontal and vertical axes of this system are parallel to the two sides of the image, and the unit is the pixel. This system is called the "air rudder image coordinate system" and is denoted $P_r = (u_r, v_r)$.

4.2.2. Air Rudder Surface Defect Image-Level Positioning Experiment

The reconstruction residual obtained from the rudder surface image containing surface defects and the reconstructed image of Fs-CAE [11] is presented in Figure 12. A colour map is used to represent the amplitude of the pixel values for visual observation. A significant contrast in pixel value exists between the defective region and other regions, providing inspectors with a visual representation of defects and serving as a prerequisite for defect localization.
The reconstructed residual image also contains noise and interference, which can adversely impact the accuracy of defect localization if used directly. Following image filtering, threshold segmentation, and contour screening, the resulting segmentation mask is depicted in Figure 13. The white region within the mask represents the location of surface defects, while the black region serves as a reference for segmenting surface defects.
The defect contour can be derived from the surface defect segmentation mask. Subsequently, the circumscribed rectangle of the defect is computed from the contour as the bounding box, enabling visualization of the defect position within the original image (Figure 14). The bounding box accurately represents the spatial location of the defect in the image, achieving image-level positioning of surface defects on the air rudder.
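A minimal sketch of this step with OpenCV is given below: contours are extracted from the binary defect mask and the circumscribed rectangle of each surviving contour is taken as the image-level positioning result. The file name and the contour screening threshold are illustrative assumptions.

```python
import cv2

# Derive defect bounding boxes from the binary segmentation mask (white = defect).
mask = cv2.imread("defect_mask.png", cv2.IMREAD_GRAYSCALE)  # illustrative file name
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

boxes = []
for c in contours:
    if cv2.contourArea(c) < 20:        # illustrative area threshold for contour screening
        continue
    x, y, w, h = cv2.boundingRect(c)   # circumscribed rectangle of the contour
    boxes.append(((x, y), (x + w, y + h)))  # upper-left and lower-right corner points
```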

4.3. Experimental Investigation of the Precise Localization of Physical Surface Defects

The internal parameter matrix and distortion vector of the camera mapping model are inherent hardware parameters. For a given combination of industrial camera and lens, these parameters remain fixed, while the external parameter matrix varies with changes in the position and orientation of the photographed object. Therefore, to determine the external parameter matrix corresponding to the measured air rudder surface, it is necessary to first calibrate the camera using a conventional method by capturing images of a camera calibration plate from different angles in order to obtain accurate values for the internal parameters of the camera and distortion coefficient of the lens.
The position of the calibration plate under camera coordinates in multiple shots is illustrated in Figure 15a, while the calibration error of each calibration plate is depicted in Figure 15b. By observing the calibration error map and excluding images with significant calibration errors, more precise camera parameters can be obtained during the calibration process. The final average calibration error measures 0.11 pixels, which falls within the normal range and is lower than the commonly specified threshold of 0.3 pixels. These calibrated camera parameters can be utilized to establish a mapping model. Finally, for the industrial camera and lens selected in this study, the internal parameter matrix M and distortion vector k are as follows:
$$M = \begin{bmatrix} f_x & \gamma & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 3358.39 & 0 & 2685.44 \\ 0 & 3358.41 & 1797.92 \\ 0 & 0 & 1 \end{bmatrix} \tag{14}$$
$$k = [k_1, k_2]^T = [0.152, 0.097]^T \tag{15}$$
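The conventional calibration itself can be carried out with OpenCV's checkerboard pipeline, as in the hedged sketch below; the board geometry, square size, and file pattern are assumptions rather than the exact configuration used on the experimental platform.

```python
import glob
import cv2
import numpy as np

# Checkerboard calibration: recover the internal parameter matrix M and the
# distortion coefficients from multiple views of the calibration plate.
pattern = (9, 6)    # inner-corner grid of the board (assumed)
square = 10.0       # square edge length in mm (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for path in glob.glob("calib/*.png"):   # illustrative file pattern
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

rms, M, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
# M is the 3x3 internal parameter matrix; dist holds (k1, k2, p1, p2, k3).
```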
The camera's internal parameters and distortion coefficients obtained through calibration can be used directly in defect localization experiments; for each localization, only the external parameters corresponding to the currently inspected part need to be solved. Before capturing the air rudder image, the checkerboard calibration plate is attached securely to the surface of the air rudder, as depicted in Figure 16a. An image of the checkerboard is captured and its corner points are identified as feature points, as shown in Figure 16b. By matching these corner points against their actual distribution on the checkerboard, the external parameter matrix describing the transformation of the air rudder surface into image coordinates is solved. All camera mapping model parameters are then available and are used to convert pixel positions into physical surface positions, enabling accurate positioning of defects on the physical surface of the rudder.
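A minimal sketch of this extrinsic-solving step is shown below, reusing the intrinsics `M`, `dist`, the object points `objp`, and the grid size `pattern` from the calibration sketch above; the image file name is an illustrative assumption.

```python
import cv2
import numpy as np

# Solve the external parameters for the board attached to the rudder surface.
gray = cv2.imread("rudder_with_board.png", cv2.IMREAD_GRAYSCALE)  # illustrative
found, corners = cv2.findChessboardCorners(gray, pattern)
ok, rvec, tvec = cv2.solvePnP(objp, corners, M, dist)

R, _ = cv2.Rodrigues(rvec)       # rotation matrix from the rotation vector
Ex = np.hstack([R, tvec])        # 3x4 external parameter matrix [R | T]
```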
It should be noted during the defect location process that, in the classic camera calibration method, the pixel coordinate system takes the upper-left corner of the image as its origin: a horizontal change in pixel position corresponds to a change in the $u$ coordinate, and a vertical change corresponds to a change in the $v$ coordinate. The coordinate system $P_r = (u_r, v_r)$ used for image-level localization of air rudder surface defects instead takes the corner point of the rudder end as its origin and the two boundaries forming that corner as its axes. Denoting the pixel coordinates of this origin by $(u_0, v_0)$ (here the origin of $P_r$, not the principal point of Equation (9)), the transformation between $P_r$ and the image pixel coordinate system $P_p = (u, v)$ is as follows:
$$(u_r, v_r) = (u - u_0, v - v_0) \tag{16}$$
Combining this with the mapping model of Formula (11) on the rudder surface plane, where the mapping reduces to a 3×3 homography $H$, gives the transformation from the image coordinates of an air rudder surface defect to its physical surface coordinates:
$$\begin{bmatrix} X_w \\ Y_w \\ 1 \end{bmatrix} = H^{-1} \begin{bmatrix} u_r + u_0 \\ v_r + v_0 \\ 1 \end{bmatrix} \tag{17}$$
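The sketch below illustrates this pixel-to-surface conversion under the planar assumption $Z_w = 0$; `M`, `R`, and `tvec` are assumed to come from the extrinsic sketch above, and homogeneous normalization absorbs the scale factor $Z_c$.

```python
import numpy as np

# On the rudder plane (Z_w = 0) the mapping model collapses to a 3x3
# homography H = M @ [r1 r2 T]; its inverse maps (distortion-corrected)
# pixel coordinates to millimetres on the surface.
H = M @ np.hstack([R[:, :2], tvec])   # drop the third rotation column since Z_w = 0
H_inv = np.linalg.inv(H)

def image_to_surface(u_r: float, v_r: float, u0r: float, v0r: float):
    """Map rudder-image coordinates (u_r, v_r) to surface coordinates (X_w, Y_w).

    (u0r, v0r) is the pixel origin of the rudder image coordinate system P_r.
    """
    p = H_inv @ np.array([u_r + u0r, v_r + v0r, 1.0])
    return p[0] / p[2], p[1] / p[2]   # homogeneous normalization handles Z_c
```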

4.4. Analysis of the Positioning Effect of Surface Defects

4.4.1. Method for Verifying the Effect of Defect Positioning Based on Uncertainty Analysis

During the defect localization process, random and systematic effects cause a deviation between the obtained location results and the actual value, and repeated localization of the same defect cannot yield entirely consistent outcomes. The resulting position values are therefore expressed with a quantified range known as the uncertainty, which signifies the degree to which errors prevent precise determination of the exact position. Typically, positioning results derived from repeated experiments are presented as the best estimate ± uncertainty [15,16,17,18].
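As a minimal sketch of this reporting convention, the snippet below takes the sample mean of repeated measurements as the best estimate and scales the standard deviation of the mean by a coverage factor; the factor t ≈ 2.36 (n = 8 at 95% confidence) and the measurement series are illustrative assumptions, since the exact coverage factor is not stated here.

```python
import numpy as np

# Best estimate ± uncertainty from n repeated position measurements.
def estimate_with_uncertainty(measurements, t: float = 2.36):
    x = np.asarray(measurements, dtype=float)
    mean = x.mean()                              # best estimate
    s_mean = x.std(ddof=1) / np.sqrt(len(x))     # standard deviation of the mean
    return mean, t * s_mean                      # (best estimate, uncertainty)

# Illustrative series of eight repeated measurements of one corner coordinate, mm.
mean, unc = estimate_with_uncertainty([12.51, 12.48, 12.55, 12.47,
                                       12.52, 12.50, 12.49, 12.53])
print(f"{mean:.2f} ± {unc:.2f} mm")
```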

4.4.2. Analysis of the Experimental Results Regarding the Localization of Defects on the Air Rudder Surface

To validate the measurement results of the air rudder surface defect location method based on the camera mapping model, we conducted experiments on the dedicated platform established in Section 4.1. The air rudder was securely positioned on the platform by clamping its metal core, ensuring that its surface area was as perpendicular to the central optical axis of the camera and lens as possible. Three air rudders with typical surface defects were selected as tested samples to ensure that each air rudder surface contained three defects and the number of each defect was balanced. The experimental procedure is outlined below.
Step 1: Transmit instructions to the programmable logic controller (PLC) via the control panel of the experimental platform in order to drive the stepper motor and adjust the position of the industrial camera and lens mounted on the slide rail to an appropriate setting.
Step 2: Collect a complete and clear image of the air rudder surface area, detect defects on the air rudder surface, and record the coordinates of the defects at the image level.
Step 3: Securely attach the checkerboard calibration plate onto the surface of the air rudder to be detected, capture images of the calibration plate for camera calibration, and establish a camera mapping model.
Step 4: Utilize the camera mapping model to determine the physical surface coordinates of the defect on the air rudder, based on its image-level coordinates.
Step 5: Send instructions to the PLC via the control panel of the experimental platform, initiating stepper motor movement and adjusting the position of the industrial camera and lens mounted on the slide rail to different locations. Repeat Steps 2 to 4 for a total of eight iterations, and subsequently calculate the uncertainty associated with current air rudder surface defect localization results.
Step 6: Replace the tested sample, repeat Steps 1 to 5, and collect the surface defect locations and uncertainty analysis results for the three air rudders.
During the experiment, the collected images were input into both the Fs-CAE network and the GrabCut algorithm to obtain the reconstructed residual image and establish the air rudder image coordinate system $P_r$ for each image. The defect was then segmented, and the coordinates of its upper-left corner point $(\hat{u}_1, \hat{v}_1)$ and lower-right corner point $(\hat{u}_2, \hat{v}_2)$ in the image coordinate system $P_r$ were recorded. The image-level positioning results are presented in Table 5, which includes data from three defects, each measured eight times. Although the camera position changes between measurements, the image coordinate system is defined with the air rudder corner point as its origin, so the coordinates of the same measured point in this system are in principle identical. The image coordinates of each measured point across the different measurement sequences are indeed close, with no gross errors.
The coordinates of the physical surface obtained by mapping and transforming the image plane coordinates in Table 5 are shown in Table 6 as the indirect measurement results of repeated experiments. In Table 6, X 1 , Y 1 represents the coordinates of the upper-left corner point of the defect bounding box, while X 2 , Y 2 represents the coordinates of the lower-right corner point of said bounding box. By conducting camera calibration and comparing data between Table 5 and Table 6, it is observed that under experimental platform conditions, each pixel in the image captured by the industrial camera used in this study corresponds to a physical surface length of 0.17 mm.
The surface defect location result after the uncertainty calculation is taken as the predicted value of the defect position, and the defect position measured by the three-dimensional coordinate measuring machine is taken as the actual value, as shown in Table 7. In the table, $(\hat{X}_1, \hat{Y}_1)$ represents the coordinates of the upper-left corner of the predicted region's bounding box, $(\hat{X}_2, \hat{Y}_2)$ the lower-right corner of the predicted region, $(X_1, Y_1)$ the upper-left corner of the real region, and $(X_2, Y_2)$ the lower-right corner of the real region. Comparing the solved corner-point coordinates of the defect bounding box in the rudder image with the actual defect positions recorded in advance, the absolute error of pit defect positioning is 0.53 mm with an uncertainty of 0.21 mm; the absolute error of crack defect positioning is 0.21 mm with an uncertainty of 0.26 mm; and the absolute error of mould sticking defect positioning is 0.12 mm with an uncertainty of 0.25 mm. The true values of the defect positions almost all fall within the uncertainty range of the predicted values, and the uncertainty values are small, not exceeding 0.30 mm. The defect positioning method in this article therefore has high precision and accuracy, is not affected by the camera position, and can meet the positioning requirements.

4.5. Analysis of Real-Time Positioning Effect of Air Rudder Surface Defects

Air rudder surface defect detection and positioning software was developed to verify the positioning accuracy of the defect positioning algorithm in an industrial environment. We analyzed the functional requirements, designed functional modules accordingly, developed the operation interface, and loaded the functional modules to realize the detection and positioning functions. The computer CPU was an Intel(R) Core(TM) i5-6300HQ, and the graphics card was an NVIDIA GeForce GTX 1660M, with 8 GB of RAM and 4 GB of video memory. We used the Python language (https://www.python.org/, accessed on 15 August 2024) and Qt Designer (https://www.qt.io/product/ui-design-tools, accessed on 15 August 2024) to develop the operation interface for the surface defect detection and positioning functions.

4.5.1. Operation Instructions for Rudder Surface Defect Location Function

The operation flow of the rudder surface defect detection and location software is shown in Figure 17.
The software camera calibration result is shown in Figure 18. After camera positioning, image preprocessing, and camera calibration are completed, the preprocessed rudder surface image is used to locate defects and obtain the defect coordinates in the image. The camera mapping model transforms these coordinates to the physical surface to obtain accurate location results. The surface defect location results are recorded as images and text and saved in a text document, as shown in Figure 19.

4.5.2. Analysis of the Positioning Effect of Air Rudder Surface Defects

In this article, a method of locating defects on the air rudder surface based on a camera mapping model is proposed, and its effectiveness has been verified. However, although verification on test set samples and typical samples reflects the performance of the detection and positioning method, it cannot stand in for the detection effect in actual engineering applications. Therefore, the functional software of the previous section is used to detect and locate surface defects of the air rudder in real time on the experimental platform, and the experimental results are recorded and analyzed.
(1) Evaluation indicators
The IoU indicator is introduced to quantitatively evaluate the accuracy of defect location. In the defect localization task, IoU is defined as the ratio of the intersection area to the union area of the predicted defect region and its corresponding real region, as shown in Figure A1 in Appendix A.
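A minimal sketch of this computation for axis-aligned bounding boxes, consistent with the definition in Appendix A, is given below; the corner-tuple convention matches the one used throughout this article.

```python
# IoU of two axis-aligned bounding boxes given as (x1, y1, x2, y2),
# where (x1, y1) is the upper-left and (x2, y2) the lower-right corner.
def iou(pred, real) -> float:
    ix1, iy1 = max(pred[0], real[0]), max(pred[1], real[1])
    ix2, iy2 = min(pred[2], real[2]), min(pred[3], real[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)    # intersection area
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])   # predicted area
    area_r = (real[2] - real[0]) * (real[3] - real[1])   # real area
    return inter / (area_p + area_r - inter)             # intersection over union
```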
(2) Positioning results and analysis
The predicted surface defect position is taken as the measured value of the defect position, and the actual defect position measured by the three-dimensional coordinate measuring machine is taken as the theoretical value, as shown in Table 8. The defect position coordinates given in the table are consistent with Figure 20. $(\hat{X}_1, \hat{Y}_1)$ denotes the coordinates of the upper-left corner point of the predicted area, $(\hat{X}_2, \hat{Y}_2)$ the lower-right corner point of the predicted area, $(X_1, Y_1)$ the upper-left corner point of the actual area, and $(X_2, Y_2)$ the lower-right corner point of the actual area.
The solution results of the corner point coordinates of the defect bounding box in the air rudder image were compared with the theoretical value of the pre-recorded defect position. The error for each corner point coordinate was obtained, as depicted in Figure 20. It can be observed that there is no significant deviation between the predicted coordinate values from the positioning method and their actual values, with most errors not exceeding 0.5 mm. This defect positioning method demonstrates high precision and accuracy, making it suitable for application in engineering environments while meeting real-time positioning requirements.
Table 9 presents the IoU between the measured and theoretical defect bounding boxes, together with the axial errors (ΔX, ΔY) of the bounding-box centre-point coordinates (expressed as absolute values). All IoU values exceed 93%, and the maximum axial error observed is 0.38 mm. The predicted defect area overlaps the theoretical area to a high degree, the positioning error is small, and the accuracy is suitable for actual industrial environments.
Based on the above analysis, the positioning error of the air rudder surface defect positioning method in this article is small and can meet the requirements of practical engineering. The defect positioning results can be applied to subsequent defect repair work and provide feedback to optimize related processes, theories, and methods.
The positioning error of defects in the verification results primarily arises from the following factors:
  • The camera and lens themselves cannot ensure absolute accuracy, leading to systematic errors.
  • The reconstructed defect area in defect detection deviates from the actual defect, resulting in a discrepancy in the bounding box range.
  • Manual measurement of the defect coordinates introduces systematic errors from the measuring equipment as well as random errors in data reading and recording.
  • When the air rudder is installed and the calibration plate is placed manually, the ideal position cannot be guaranteed every time.

5. Discussion

Defect location on the air rudder surface is very important in defect repair and finished product quality inspection. Currently, most research focuses on using the mask of the defect area or the bounding box containing the defect area as the surface defect location result in the sample image. The main methods and models include image processing, image segmentation networks, and object detection networks. This article analyzes the functional requirements for the surface positioning of air rudders in industry and proposes a method that can realize defect positioning on the physical surface of air rudders. Compared with previous work, it can determine the actual location of defects and can be used in the production process of air rudders. It can be directly applied in surface quality inspection and has more industrial value. However, due to the limitations of experimental conditions and sample types, this method has not yet been validated for defect localization in structural components in other aerospace fields. In future work, we will further diversify the types of products in the sample set, testing the performance and robustness of this localization method on surface defect localization tasks for more products. Additionally, we aim to develop a more automated and intelligent camera calibration method to integrate with the localization method presented in this article, thereby improving localization efficiency. By combining this with heatmap visualization techniques, we intend to grade defect severity, providing more suitable solutions for subsequent air rudder maintenance and manufacturing.

6. Conclusions

To address the need for surface defect localization in air rudders, this article proposes an air rudder surface defect localization method based on a camera mapping model. Specifically, the GrabCut algorithm segments the air rudder surface area, establishing an image coordinate system for the air rudder. The defect’s position in the image is determined by identifying the coordinates of the upper-left and lower-right corners of the defect’s bounding box. Camera calibration is then used to determine the camera parameters and establish a camera mapping model. By inputting the defect’s image coordinates into the model, coordinate transformation is performed to obtain the defect’s position on the physical surface of the air rudder. Additionally, we introduce an uncertainty analysis method, conducting independent repeated experiments on three typical defects. The maximum localization error was 0.53 mm, and the maximum uncertainty was 0.26 mm, validating the localization accuracy of the proposed defect localization method. A hardware system and software interface were developed to achieve real-time localization of surface defects on air rudders. The maximum real-time localization error was 0.38 mm, with the localization accuracy and speed meeting the requirements for application on an actual production line.

Author Contributions

Conceptualization, Z.Y. and K.X.; methodology, M.Z.; software, M.Z.; validation, K.X., Y.Z. and Y.C.; formal analysis, K.X.; investigation, N.H.; resources, Y.J.; data curation, Y.L.; writing—original draft preparation, M.Z.; writing—review and editing, K.X.; visualization, Z.Y.; supervision, Y.Z.; project administration, Y.J.; funding acquisition, Y.C. and N.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 52175461, No. 12227801), Hebei Science and Technology Innovation Project (SJMYF2022X20), Tianjin Intelligent Manufacturing Project (No. 20201199) and National Key R&D Program (2019YFC0840709).

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors Yi Zhang, Yi Jin, and Yali Lv were employed by Tianjin Aisda Aerospace Technology Co., Ltd., China. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Appendix A

The predicted area is defined by the corner coordinates $(\hat{X}_1, \hat{Y}_1)$ and $(\hat{X}_2, \hat{Y}_2)$, while the real area is defined by the corner coordinates $(X_1, Y_1)$ and $(X_2, Y_2)$.
$$\mathrm{IoU} = \frac{\left| X_1 - \hat{X}_2 \right| \times \left| Y_1 - \hat{Y}_2 \right|}{\left| \hat{X}_1 - \hat{X}_2 \right| \times \left| \hat{Y}_1 - \hat{Y}_2 \right| + \left| X_1 - X_2 \right| \times \left| Y_1 - Y_2 \right| - \left| X_1 - \hat{X}_2 \right| \times \left| Y_1 - \hat{Y}_2 \right|}$$
Figure A1. IoU diagram.
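For reference, here is a small sketch of the same intersection-over-union computation for axis-aligned boxes, written defensively so that non-overlapping boxes yield zero. The corner ordering (x1 < x2, y1 < y2) is an assumption; coordinates should be sorted first if, as in Tables 6–8, Y decreases from the first corner to the second.

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2), x1 < x2 and y1 < y2."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Width and height of the intersection rectangle (clamped at zero).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```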

References

  1. Liu, S.; Bao, J.; Lu, Y.; Li, J.; Lu, S.; Sun, X. Digital twin modeling method based on biomimicry for machining aerospace components. J. Manuf. Syst. 2020, 58, 180–195.
  2. Tao, J.; Qin, C.; Xiao, D.; Shi, H.; Ling, X.; Li, B.; Liu, C. Timely chatter identification for robotic drilling using a local maximum synchrosqueezing-based method. J. Intell. Manuf. 2019, 31, 1243–1255.
  3. Torabi, A.R.; Shams, S.; Narab, M.F.; Atashgah, M.A. Unsteady aero-elastic analysis of a composite wing containing an edge crack. Aerosp. Sci. Technol. 2021, 115, 106769.
  4. Wang, J.; Xu, C.; Zhang, J.; Zhong, R. Big data analytics for intelligent manufacturing systems: A review. J. Manuf. Syst. 2022, 62, 738–752.
  5. Sreeshan, K.; Dinesh, R.; Renji, K. Nondestructive inspection of aerospace composite laminate using thermal image processing. SN Appl. Sci. 2020, 2, 1830.
  6. Tiwari, K.A.; Raisutis, R.; Tumsys, O.; Ostreika, A.; Jankauskas, K.; Jakutavicius, J. Defect estimation in non-destructive testing of composites by ultrasonic guided waves and image processing. Electronics 2019, 8, 315.
  7. Feng, Y.A.; Song, W.W. Surface Defect Detection for Aerospace Aluminum Profiles with Attention Mechanism and Multi-Scale Features. Electronics 2024, 13, 2861.
  8. Xu, L.; Lv, S.; Deng, Y.; Li, X. A Weakly Supervised Surface Defect Detection Based on Convolutional Neural Network. IEEE Access 2020, 8, 42285–42296.
  9. Fei, C.; Wen, J.; Han, L.; Huang, B.; Yan, C. Optimizable image segmentation method with superpixels and feature migration for aerospace structures. Aerospace 2022, 9, 465.
  10. Ding, M.; Wu, B.; Xu, J.; Kasule, A.N.; Zuo, H. Visual inspection of aircraft skin: Automated pixel-level defect detection by instance segmentation. Chin. J. Aeronaut. 2022, 35, 254–264.
  11. Yang, Z.; Zhang, M.; Chen, Y.; Hu, N.; Gao, L.; Liu, L.; Ping, E.; Song, J.I. Surface defect detection method for air rudder based on positive samples. J. Intell. Manuf. 2024, 35, 95–113.
  12. Boykov, Y.Y.; Jolly, M.P. Interactive graph cuts for optimal boundary & region segmentation of objects in N-D images. In Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV 2001), Vancouver, BC, Canada, 7–14 July 2001; pp. 105–112.
  13. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
  14. Hui, Z.; Kong, Y.; Yao, W.; Gang, C. Aircraft parameter estimation using a stacked long short-term memory network and Levenberg-Marquardt method. Chin. J. Aeronaut. 2024, 37, 123–136.
  15. Fu, C.; Sinou, J.J.; Zhu, W.; Lu, K.; Yang, Y. A state-of-the-art review on uncertainty analysis of rotor systems. Mech. Syst. Signal Process. 2023, 183, 109619.
  16. Taşan, M.; Taşan, S.; Demir, Y. Estimation and uncertainty analysis of groundwater quality parameters in a coastal aquifer under seawater intrusion: A comparative study of deep learning and classic machine learning methods. Environ. Sci. Pollut. Res. 2023, 30, 2866–2890.
  17. Abdar, M.; Pourpanah, F.; Hussain, S.; Rezazadegan, D.; Liu, L.; Ghavamzadeh, M.; Fieguth, P.; Cao, X.; Khosravi, A.; Nahavandi, S.; et al. A review of uncertainty quantification in deep learning: Techniques, applications and challenges. Inf. Fusion 2021, 76, 243–297.
  18. Zhou, J.; Jiang, Y.; Pantelous, A.A.; Dai, W. A systematic review of uncertainty theory with the use of scientometrical method. Fuzzy Optim. Decis. Mak. 2023, 22, 463–518.
Figure 1. Air rudder structure.
Figure 2. Surface defects on the rudder surface: (a) cracks; (b) pits; (c) mould sticking.
Figure 3. Flowchart of air rudder surface defect location.
Figure 4. Workflow diagram illustrating the GrabCut algorithm [12].
Figure 5. Camera mapping transformation process.
Figure 6. Schematic representation of the coordinate system.
Figure 7. Experimental platform.
Figure 8. Air rudder.
Figure 9. Surface mask of the air rudder.
Figure 10. Surface segmentation result of the air rudder.
Figure 11. Coordinate system for air rudder imaging.
Figure 12. Reconstruction residual diagram of the air rudder.
Figure 13. Segmentation mask diagram for air rudder surface defects.
Figure 14. Air rudder surface defect location diagram.
Figure 15. Calibration of the camera's internal parameter matrix and distortion coefficients: (a) determining the position of the calibration plate in the camera coordinate system; (b) evaluating calibration errors.
Figure 16. Calibration of the external parameter matrix for the camera: (a) image acquisition process; (b) corner point acquisition procedure.
Figure 17. Flow chart for implementation of the defect positioning function.
Figure 18. Interface for camera calibration.
Figure 19. Interface for defect detection and location.
Figure 20. Defect repeat positioning error.
Table 1. Weight of each connection.

Connect | Weight          | Pixel Ownership
{p, q}  | B_{p,q}         | {p, q} ∈ N
{p, S}  | λ · R_p("bkg")  | p ∈ P, p ∉ O ∪ B
        | K               | p ∈ O
        | 0               | p ∈ B
{p, T}  | λ · R_p("obj")  | p ∈ P, p ∉ O ∪ B
        | 0               | p ∈ O
        | K               | p ∈ B

where K = 1 + max_{p ∈ P} Σ_{q: {p, q} ∈ N} B_{p,q}.
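The following is a hedged sketch of how the t-link weights in Table 1 might be assigned in code, following the Boykov–Jolly formulation [12]. The array names, the boolean seed masks, and the precomputed neighbour sums are assumptions for illustration, not part of the paper's implementation.

```python
import numpy as np

def tlink_weights(B_sum, R_bkg, R_obj, obj_seeds, bkg_seeds, lam):
    """Source (S) and sink (T) t-link weights per pixel, as in Table 1.

    B_sum : per-pixel sum of boundary weights B_{p,q} over neighbours q
    R_bkg, R_obj : regional penalties R_p("bkg"), R_p("obj")
    obj_seeds, bkg_seeds : boolean masks of user-marked pixels (O and B)
    """
    K = 1.0 + B_sum.max()                      # hard-constraint weight
    w_S = lam * R_bkg                          # default: p not in O or B
    w_T = lam * R_obj
    w_S[obj_seeds], w_T[obj_seeds] = K, 0.0    # p in O
    w_S[bkg_seeds], w_T[bkg_seeds] = 0.0, K    # p in B
    return w_S, w_T
```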
Table 2. New weights for each connection.

Connect | Initial Weight  | New Weight
{p, S}  | λ · R_p("bkg")  | K
{p, T}  | λ · R_p("obj")  | 0

The n-link weight B_{p,q} is not changed.
Table 3. Augmented magnitude of each connection.

Connect | Initial Weight  | Increase Weight     | New Weight
{p, S}  | λ · R_p("bkg")  | K + λ · R_p("obj")  | K
{p, T}  | λ · R_p("obj")  | λ · R_p("bkg")      | 0

Adding the same constant λ · R_p("obj") + λ · R_p("bkg") to both t-links of a pixel does not change the minimum cut, so the augmented weights are equivalent to the new weights K and 0 of Table 2 [12].
Table 4. Key hardware parameters.

Hardware Device | Indicator | Parameter
MV-HS2000GM camera (Shaanxi Weishi Intelligent Manufacturing Technology Co., Ltd., Xi'an, China) | Maximum resolution | 5472 × 3648
  | Pixel size/μm | 2.4 × 2.4
  | Interface type | C-Mount
  | Power supply requirements | DC 12 V
  | Collection method | Continuous
BT-11C0814MP10 lens (Shaanxi Weishi Intelligent Manufacturing Technology Co., Ltd., Xi'an, China) | Focal length/mm | 8
  | Depth of field/mm | 2.2
  | Interface type | C-Mount
  | Image size | 2/3″
MV-WL600X27W-V light source (Shaanxi Weishi Intelligent Manufacturing Technology Co., Ltd., Xi'an, China) | Light source colour | White
  | Number of LEDs | 6
  | Luminous area/mm | 600 × 27
  | Dimensions (length × width × height)/mm | 612 × 33.5 × 27
Table 5. Results of repeated positioning at the image level for defects (defect location/pixel).

Experimental Order | Defect 1 (Pit): (û1, v̂1), (û2, v̂2) | Defect 2 (Crack): (û1, v̂1), (û2, v̂2) | Defect 3 (Stained Mould): (û1, v̂1), (û2, v̂2)
1 | (1302, 1350), (1338, 1296) | (2292, 159), (2324, 122) | (1849, 183), (1904, 134)
2 | (1303, 1352), (1339, 1296) | (2293, 159), (2324, 123) | (1850, 185), (1906, 136)
3 | (1300, 1349), (1337, 1294) | (2291, 157), (2321, 120) | (1850, 184), (1905, 135)
4 | (1301, 1350), (1338, 1295) | (2292, 158), (2323, 122) | (1849, 183), (1903, 135)
5 | (1299, 1349), (1335, 1294) | (2290, 157), (2320, 121) | (1848, 183), (1904, 135)
6 | (1302, 1350), (1338, 1295) | (2291, 158), (2322, 122) | (1846, 180), (1902, 132)
7 | (1301, 1349), (1338, 1294) | (2288, 155), (2320, 120) | (1848, 182), (1903, 134)
8 | (1302, 1350), (1338, 1294) | (2290, 156), (2321, 120) | (1847, 181), (1903, 132)
Table 6. Results of repeated positioning at the physical level for defects (defect location/mm).

Experimental Order | Defect 1 (Pit): (X̂1, Ŷ1), (X̂2, Ŷ2) | Defect 2 (Crack): (X̂1, Ŷ1), (X̂2, Ŷ2) | Defect 3 (Stained Mould): (X̂1, Ŷ1), (X̂2, Ŷ2)
1 | (224.10, 229.27), (230.29, 220.08) | (394.49, 27.00), (400.01, 20.72) | (318.25, 31.08), (327.71, 22.75)
2 | (224.27, 229.61), (230.46, 220.08) | (394.67, 27.00), (400.01, 20.89) | (318.42, 31.42), (328.06, 23.10)
3 | (223.75, 229.10), (230.12, 219.74) | (394.32, 26.66), (399.49, 20.37) | (318.42, 31.25), (327.89, 22.93)
4 | (223.92, 229.27), (230.29, 219.91) | (394.49, 26.83), (399.83, 20.72) | (318.25, 31.08), (327.54, 22.92)
5 | (223.58, 229.10), (229.78, 219.74) | (394.15, 26.66), (399.31, 20.55) | (318.07, 31.07), (327.71, 22.92)
6 | (224.10, 229.27), (230.29, 219.91) | (394.32, 26.83), (399.66, 20.72) | (317.73, 30.56), (327.37, 22.41)
7 | (223.92, 229.10), (230.29, 219.74) | (393.81, 26.32), (399.32, 20.38) | (318.08, 30.91), (327.54, 22.76)
8 | (224.10, 229.27), (230.29, 219.74) | (394.15, 26.49), (399.49, 20.38) | (317.90, 30.73), (327.54, 22.42)
Table 7. Comparison of experimental and theoretical localization results (defect location/mm).

Defect Serial Number | Measured (X̂1, Ŷ1) | Measured (X̂2, Ŷ2) | Theoretical (X1, Y1) | Theoretical (X2, Y2)
1 | (222.42 ± 0.21, 229.25 ± 0.15) | (230.23 ± 0.19, 219.87 ± 0.14) | (222.95, 229.23) | (230.23, 219.94)
2 | (394.31 ± 0.25, 26.73 ± 0.22) | (399.64 ± 0.26, 20.59 ± 0.19) | (394.53, 26.85) | (399.76, 20.65)
3 | (318.14 ± 0.23, 31.02 ± 0.25) | (327.67 ± 0.21, 22.78 ± 0.24) | (318.13, 30.98) | (327.55, 22.72)
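The measured values and uncertainties in Table 7 can be reproduced approximately from the eight repeats in Table 6. The sketch below assumes the reported value is the sample mean and the uncertainty is the experimental standard deviation of the repeats; the paper's exact convention (e.g., any coverage factor) is not restated here, so this is an illustrative reading rather than the authors' procedure.

```python
import numpy as np

def mean_and_uncertainty(samples):
    x = np.asarray(samples, dtype=float)
    return x.mean(), x.std(ddof=1)   # sample mean, sample standard deviation

# Eight repeated X̂1 values of defect 2 (crack), taken from Table 6:
x1_crack = [394.49, 394.67, 394.32, 394.49, 394.15, 394.32, 393.81, 394.15]
m, u = mean_and_uncertainty(x1_crack)
# m ≈ 394.30 mm, u ≈ 0.27 mm; Table 7 reports 394.31 ± 0.25 mm, the small gap
# being attributable to rounding in the tabulated repeats.
```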
Table 8. Comparison of experimental and theoretical positioning results (defect location/mm).

Defect Serial Number | Measured (X̂1, Ŷ1) | Measured (X̂2, Ŷ2) | Theoretical (X1, Y1) | Theoretical (X2, Y2)
1 | (222.42, 229.25) | (230.23, 219.87) | (222.95, 229.23) | (230.43, 220.14)
2 | (394.31, 26.73) | (399.64, 20.59) | (394.53, 26.85) | (399.76, 20.95)
3 | (318.14, 31.02) | (327.67, 22.78) | (318.13, 30.98) | (327.85, 22.92)
4 | (7.40, 73.37) | (10.49, 66.06) | (7.33, 73.31) | (10.16, 66.01)
5 | (8.43, 88.82) | (11.50, 80.49) | (8.37, 88.80) | (11.21, 80.14)
6 | (408.10, 131.11) | (412.39, 127.13) | (408.13, 131.14) | (412.08, 126.81)
7 | (397.59, 139.43) | (400.96, 27.03) | (397.67, 39.40) | (400.67, 26.89)
8 | (273.35, 224.69) | (281.50, 244.73) | (273.13, 242.65) | (281.27, 244.21)
9 | (16.68, 141.46) | (19.97, 132.12) | (16.74, 141.46) | (19.88, 131.87)
10 | (418.59, 143.50) | (432.08, 131.09) | (418.60, 143.53) | (432.42, 131.34)
Table 9. Defect repeat positioning error.

Defect Serial Number | IoU/% | Axial Error (ΔX, ΔY)/mm
1 | 96.83 | (0.37, 0.13)
2 | 94.73 | (0.32, 0.24)
3 | 96.68 | (0.34, 0.25)
4 | 95.44 | (0.31, 0.09)
5 | 93.64 | (0.24, 0.38)
6 | 97.54 | (0.06, 0.16)
7 | 98.31 | (0.10, 0.08)
8 | 97.10 | (0.32, 0.38)
9 | 99.09 | (0.04, 0.05)
10 | 97.85 | (0.32, 0.24)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
