1. Introduction
With the rapid development of computer technology since the 1970s, the software and hardware required at each stage of digitalization have gradually matured, and digital technology has steadily spread into other industries, raising the overall degree of industrial digitalization. The unique advantage of digitalization is that it can accurately and quickly store and present data on material cultural heritage without damaging the heritage itself [1]. Although some problems with digital technology and equipment remain to be solved (e.g., several techniques in digital production and the digital integration of information need further improvement), these problems will likely be resolved as the technology develops. Current digital technology can already realize most digital applications, such as digital art galleries, digital films, and digital site reproductions.
Ancient cultural relics reflect the production mode and lifestyle of a country or nation in a specific period and condense the fruits of the labor of several generations. Furthermore, as links in the inheritance and development of a civilization, they are non-renewable treasures [2,3]. However, historical relics have gradually been damaged or have disappeared under the influence of both natural processes and human activity. The main idea of traditional restoration is to return the object to a surface close to that of the original model: fragments that cannot be recovered are shaped from clay to make similar replacements, and restoration is completed by welding and gluing. Relics with fragile surfaces are likely to suffer irreparable losses during such molding and processing. The mature application of reverse engineering and rapid prototyping technology in the mold, home appliance, and other industries offers conservation workers new ideas for reducing contact with relics during restoration and preserving their original appearance to the maximum extent. Reverse engineering (RE) is often used to imitate and modify the design of mechanical parts and other products, as well as to restore and reproduce partially incomplete models. It acquires the spatial geometric information of existing products by three-dimensional scanning, converts it into a digital CAD model after data processing, and then improves, repairs, and recreates the model [4]. Rapid prototyping (RP), also known as 3D printing, differs from traditional subtractive manufacturing (turning, milling, planing, etc.) and forming processes (stamping, forging, etc.): it is an additive method that builds parts by accumulating material layer by layer. Because the molding process is not constrained by part geometry, it is well suited to new product development, single-piece and small-batch production, rapid mold making, and similar tasks [5,6]. The combination of reverse engineering and rapid prototyping can form a complete closed-loop design and manufacturing system that quickly realizes product design, manufacturing, testing, redesign, and remanufacturing, accelerating the development of reverse engineering and expanding the application scope of rapid prototyping. Today, 3D digital technology is developing continuously: the acquisition of 3D model data with scanning equipment is becoming increasingly accurate, and related information technologies such as computer graphics and network transmission are maturing. At present, existing theoretical methods only verify the feasibility of virtual restoration in principle. When facing tens of thousands of unearthed relics, the variety of problems grows sharply, and algorithms must be not only feasible but also practical. Real applications impose higher requirements on the robustness, accuracy, and fault tolerance of virtual restoration algorithms, exposing the limitations of existing methods. To improve the solution of virtual restoration problems, it is necessary to establish a systematic engineering framework, analyze and summarize the characteristics of existing virtual assembly problems, draw on existing research foundations and key technologies, and develop problem-specific classified solutions.
The innovation of this article is that it establishes a framework for virtual restoration engineering: based on the degree of damage to the restored objects, virtual restoration is divided into three problem types, namely complete matching, partial matching, and complex merging, and correspondingly systematic and organized solutions are developed for each. Unlike feature-driven problem partitioning, partitioning based on data characteristics provides a more universal solution for virtual restoration. Key steps in geometric processing are optimized to address the high noise and missing geometric information characteristic of cultural relic models: constraints on data processing are strengthened in the early stages of the pipeline, improving model quality and reducing computational errors during registration and alignment, which in turn lowers the performance demands on the algorithms in the merging stage.
The virtual restoration process for damaged cultural relics includes model preprocessing, geometric feature extraction and representation, pairwise alignment, and globally optimal multi-fragment fusion. The purpose of this study is to address the noise sensitivity and missing geometric information of cultural relic models and to use digital geometry processing technology to break through the bottlenecks of the theoretical methods mentioned above in practical applications. This study introduces innovations and improvements to the traditional process; the specific research content includes the following aspects. Analyzing the current research status and progress of domestic and foreign scholars in virtual restoration provides an important theoretical basis for this study: the feasibility of these methods has been fully confirmed, but a systematic problem division and correspondingly developed solutions are still lacking. By analyzing and summarizing the state of research in point cloud processing, feature extraction, registration, and related fields, new solutions to restoration problems can be provided and a corresponding engineering framework constructed. First, we extract the contour lines of the fracture surface and select feature points on them. We then calculate the curvature, torsion, left and right chord lengths, and other parameters at these feature points and analyze their relationship with adjacent points; to fuse the multiple features into a single feature, we use a multilayer perceptron. Second, we process the intersection points of fragments in an interactive environment. Next, we use the idea of the longest common subsequence (LCS) to obtain a preliminary match between fragments within the same group. Finally, we use the iterative closest point (ICP) algorithm to accurately register the matched fragment pairs.
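For illustration, the core of each ICP iteration is a closed-form least-squares rigid transform computed from the current point correspondences. Below is a minimal NumPy sketch of that inner step (the SVD-based Kabsch solution) on made-up sample points; it stands in for the full ICP loop, which would alternate this step with nearest-neighbor correspondence search:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ src @ R.T + t.
    This is the closed-form inner step of one ICP iteration."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3 x 3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                           # D guards against reflections
    t = c_dst - R @ c_src
    return R, t

# toy check: recover a known rotation about z and a translation
rng = np.random.default_rng(0)
src = rng.random((10, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
dst = src @ Rz.T + np.array([0.5, -0.2, 1.0])
R, t = best_rigid_transform(src, dst)
print(np.allclose(R, Rz), np.allclose(src @ R.T + t, dst))
```

In a full ICP loop, correspondences would be re-estimated (e.g., by a k-d tree nearest-neighbor query) and this step repeated until convergence.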
2. Literature Review
For small holes, mesh-surface-based methods are usually used to fit and reconstruct the hole region using the topological relationships of the 3D mesh. Tomilina E. M. et al. took a film server of regional significance as an example to study the possibilities of augmented reality technology in the hypothetical (virtual) restoration of a lost cultural heritage structure [7]. Muhd M. et al. aimed to identify the features of the Uma Kabuong limousine and to assess the destruction of its architectural elements. The methods of that study are descriptive, qualitative, and synchronic reading analyses, with data collected through field observation and interviews [8]. Cao Ying et al. introduced the "symbiosis" theory to explore a new approach to mining heritage protection from the urban perspective and proposed a community symbiosis strategy that closely combines mining heritage protection and utilization with community construction, the urban environment, and urban space to activate functions and achieve the integrated restoration of space and city [9,10].
A survey of the current market for image retouching software shows that inpainting technology has considerable potential. Although each retouching package on the market has its own advantages, they lack distinctive specialization: almost all of them offer the same functions, essentially aimed at beautifying portraits, food, scenery, and so on. At present, digital image processing technology is widely used in aviation and aerospace, communications, remote sensing, military and public security, and other fields. Its application in art restoration, photo restoration, special-effects image production, and the restoration of damaged files in museums and archives therefore has high practical value.
In the computer-aided repair of the Terra Cotta Warriors and horses, virtual splicing is the core step. In general, texture, geometric, topological, and other features of the fragments are extracted, and the features of candidate fragments are compared and matched to achieve splicing. In this paper, Delaunay triangulation is first carried out on the point cloud data, and the contour of the triangular mesh model is then extracted according to the angle between normal vectors.
3. Research Methods
3.1. System Functions
The virtual restoration system for damaged cultural relics can be used for the virtual splicing of relics excavated from the Mausoleum of Emperor Qin Shihuang. The operation process is simple, and the splicing effect is good. The system divides the whole splicing process into multiple modules that are maintained separately to facilitate later expansion. The system in this paper is mainly divided into three modules: the classification module for cultural relic fragments, the feature extraction module for the fracture surface, and the registration module for cultural relic fragments [11].
Figure 1 is a schematic diagram of the virtual restoration system for damaged cultural relics in this article.
- (1)
Classification module for cultural relic fragments. The main function of this module is to classify the models that are read in. First, the model fragments undergo feature enhancement; then, they are classified by a deep neural network, and fragments of the same part are assigned to the same group.
- (2)
Feature extraction module for the fracture surface. The main function of this module is to extract the feature points of the fracture surface of a cultural relic fragment. First, the fracture surface curve of the fragment is extracted. Then, the feature points on the curve are extracted, and their curvature, torsion, and left and right chord lengths are calculated. Finally, the multiple features are mapped to a single feature by a multilayer perceptron.
- (3)
Registration module for cultural relic fragments. The main function of this module is to use the LCS idea to complete the coarse matching of two fragments according to the fused features calculated in the previous step and then to use the ICP algorithm to complete the fine matching of the fragments.
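As a sketch of the coarse-matching step, the LCS length of two feature sequences can be computed with the standard dynamic program. The integer feature codes below are hypothetical stand-ins for the quantized fused features along two fracture contours:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of sequences a and b,
    via the classic O(len(a) * len(b)) dynamic program."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

# hypothetical quantized feature codes along two fracture contours
contour_a = [3, 1, 4, 1, 5, 9, 2, 6]
contour_b = [1, 4, 5, 9, 6, 2]
print(lcs_length(contour_a, contour_b))  # a longer LCS suggests a better pair
```

Fragment pairs whose contours share a long common feature subsequence would then be passed to the ICP stage for fine registration.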
3.2. System Implementation
In general, the system in this paper can be divided into two subsystems: one is the model classification subsystem and the other is the model stitching subsystem, which corresponds to the two main steps in the splicing process of the Terra Cotta Warriors: fragment classification and fragment stitching.
Figure 2 shows the system architecture diagram.
The core step of the virtual restoration of the Terra Cotta Warriors and horses is the classification and matching of cultural relic fragments.
Step 1: Import model. Select the workspace, where the point cloud data to be matched are stored. The stitching results of the model and the intermediate computation process are also stored in the workspace.
Step 2: Type identification of cultural relic fragments. The fragments of the Terra Cotta Warriors and horses are classified by a deep convolutional network, and fragments of the same kind are grouped together.
Step 3: Feature extraction of the fracture surface. The contour features of the fracture surface extracted from the selected fragments are the basis for subsequent splicing and correspond to the feature extraction content in Section 4 of this paper [12].
Step 4: Splicing of cultural relics. A grouped fragment is selected, and the surface information and fracture surface information of the relics are compared in an interactive environment. The two pieces are moved into a position where splicing can be achieved. Splicing is then carried out using the solution to the longest common subsequence problem, which corresponds to the splicing content in Section 4 of this paper.
3.3. Automatic Extraction of Feature Regions
When a sequence of images contains large areas of similarity, traditional image mosaic algorithms perform poorly and mismatches occur. A feature region extraction algorithm can locate feature regions in the image easily and reliably without manual intervention.
3.3.1. Data Preprocessing
Normalization Processing
Due to the inevitable presence of varying degrees of noise and gray-level change in the collected images, a 3 × 3 template is first used to median-filter the images to be mosaicked, and a normalization algorithm based on local image attributes is then applied, as shown in Equation (1) below, where g(x, y) represents the gray value of pixel (x, y); m and σ are the gray mean and standard deviation of the ith local window, respectively; their estimated initial values vary with the local properties of the current pixel; and the weight factors represent the influence of the corresponding pixel on the current local attribute [13].
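As a rough sketch of this preprocessing stage, the code below applies a 3 × 3 median filter and then a plain zero-mean, unit-variance local normalization in NumPy. The window size and the simple normalization form are illustrative assumptions; the weighted estimator of Equation (1) itself is not reproduced:

```python
import numpy as np

def median3x3(img):
    """3 x 3 median filter; borders are handled by reflective padding."""
    p = np.pad(img, 1, mode="reflect")
    h, w = img.shape
    stack = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

def local_normalize(img, win=7, eps=1e-6):
    """Shift by the mean and scale by the std of each pixel's local window
    (a simplified stand-in for the normalization of Equation (1))."""
    p = np.pad(img, win // 2, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            block = p[y:y + win, x:x + win]
            out[y, x] = (img[y, x] - block.mean()) / (block.std() + eps)
    return out

noisy = np.full((4, 4), 10.0)
noisy[0, 2] = 200.0                      # an isolated noise spike
denoised = median3x3(noisy)
print(denoised[0, 2])                    # the spike is removed by the median
normalized = local_normalize(denoised)
```

A production pipeline would use a vectorized or library median filter; the loops here only make the window arithmetic explicit.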
Edge Detection
Although normalization improves the quality of the original image, mismatching may still occur, so the algorithm adopted here is based on image edges [14]. First, edge detection is carried out on the image to be matched, and the edge extraction result is binarized. The binarized result is then processed to find the feature region of the image.
3.3.2. Localization of Feature Object Regions
The edge best reflects the feature information of an object, and an edge-dense area is usually one that attracts attention. In this paper, the local edge density (LED) is used to measure the richness of the edge information in the area around a point, as shown in Equation (4) below:

LED(x, y) = (1 / (2w + 1)²) Σ E(i, j), summed over the (2w + 1) × (2w + 1) window centered at (x, y),

where I is the original gray image, E is its binary edge image, and w is the half-width of the LED convolution window.
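Reading Equation (4) as the mean of the binary edge map over the (2w + 1) × (2w + 1) window centered at each pixel, the LED can be sketched directly (a box filter or integral image would be used in practice for speed; the toy edge map is made up):

```python
import numpy as np

def local_edge_density(edges, w):
    """LED: mean of the binary edge map over a (2w + 1)^2 window per pixel,
    with zero padding outside the image."""
    p = np.pad(edges.astype(float), w, mode="constant")
    k = 2 * w + 1
    h, wd = edges.shape
    out = np.empty((h, wd))
    for y in range(h):
        for x in range(wd):
            out[y, x] = p[y:y + k, x:x + k].mean()
    return out

edge_map = np.zeros((5, 5), dtype=np.uint8)
edge_map[2, 1:4] = 1                     # a short horizontal edge segment
led = local_edge_density(edge_map, w=1)
print(led[2, 2])                         # 3 of the 9 window pixels are edges
```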
According to Equation (4), the LED value of each point within the image range where the feature object area is sought can be calculated. However, taking only the region with the densest edges as the feature object region does not guarantee that this region does not recur elsewhere in the image; that is, the region is not necessarily distinctive, so the difference between the candidate feature region and other regions must be measured, and only when the difference is large is the region distinctive. This paper therefore adopts a new method to determine the center point of the feature object region: according to the LED values, the fuzzy C-means algorithm divides the points into three categories with high, medium, and low LED values. The category with the smallest LED values carries too little edge information to serve as a feature region and is discarded. The averages of the two categories with larger LED values are then recalculated, the point with the largest value is selected, and the area centered on this point is regarded as the feature object area.
The method used to classify the points is essential here. Due to the randomness of the image, the number of members of the three categories and the class centers are uncertain, and a point cannot be assigned crisply to a single class. The fuzzy C-means algorithm handles this problem well; its specific application in this paper is as follows.
The membership degree of each sample to each cluster is given by Equation (5) below:

u_ij = 1 / Σ_{k=1..c} (d_ij / d_kj)^(2/(m−1)),

where d_ij is the distance from sample j to the center of cluster i and m is the constant controlling the fuzzy degree of the clustering result.

The center of each cluster is given by Equation (6) below:

c_i = Σ_j (u_ij)^m x_j / Σ_j (u_ij)^m.

The membership degree u of each point to each class is solved by iteration, and the center of each class is recalculated at each step.
The algorithm steps are as follows:
- (1)
Initialize each cluster center, and set the fuzziness constant m, the convergence threshold, and the maximum number of iterations.
- (2)
Calculate the membership degree of each point to each class according to Equation (5).
- (3)
The clustering center is recalculated according to Equation (6).
- (4)
If the algorithm does not converge, that is, the cluster center changes, repeat steps (2) and (3); otherwise, clustering is completed. Each point takes the class with the largest membership as its class.
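Steps (1)-(4) can be sketched on one-dimensional LED values as follows. The sample values, the evenly spaced initial centers, and the stopping rule are illustrative assumptions:

```python
import numpy as np

def fuzzy_c_means(x, c=3, m=2.0, tol=1e-6, max_iter=200):
    """Fuzzy C-means on 1-D data: returns memberships u (n x c) and centers."""
    centers = np.linspace(x.min(), x.max(), c)   # simple deterministic init
    for _ in range(max_iter):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)       # Equation (5)
        um = u ** m
        new_centers = um.T @ x / um.sum(axis=0)        # Equation (6)
        if np.max(np.abs(new_centers - centers)) < tol:
            centers = new_centers
            break
        centers = new_centers
    return u, centers

led_values = np.array([0.02, 0.03, 0.05, 0.40, 0.42, 0.45, 0.85, 0.90, 0.88])
u, centers = fuzzy_c_means(led_values)
labels = u.argmax(axis=1)                # crisp class = largest membership
print(np.sort(centers).round(2))
```

Each point then takes the class with the largest membership, matching step (4); the low-LED class would be discarded before choosing the feature-region center.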
After locating the center point of the feature object region, a square area of a certain size centered on this point is extracted as the benchmark template. If the benchmark template is too large, it increases the amount of calculation; if it is too small, it may cause mismatching. The template size in this paper is therefore not fixed but is adjusted according to the LED value of the center point. First, the initial width of the square template is set according to the image size, e.g., to imagewidth/8. If the LED value of the center point is lower than the average LED value, the following Equation (7) can be used to adjust the width of the benchmark template:
3.4. Image Matching Based on Local Entropy
The information entropy of an image quantitatively describes the amount of information the image contains. Treating entropy as a feature, the concept is introduced into image matching technology. In the matching process, the local entropy of the image is defined first, and block-parallel computation, sequential detection, and hierarchical search are then used to match the images. This matching method greatly reduces the amount of computation and improves matching efficiency and accuracy.
The formula for local entropy is shown in Equation (8):

H = − Σ_{i=0..L} p_i log₂ p_i,

where p_i is the occurrence probability of gray level i in the sub-image and L is the maximum gray value of the sub-image [15].
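Equation (8) can be sketched directly from the gray-level histogram of a sub-image (base-2 logarithm assumed). A uniform block carries no information, while a block whose 64 pixels all have distinct gray levels attains entropy log₂ 64 = 6:

```python
import numpy as np

def local_entropy(block, levels=256):
    """Shannon entropy of a sub-image's gray-level histogram (Equation (8))."""
    hist = np.bincount(block.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                         # 0 * log 0 is taken as 0
    return float(-(p * np.log2(p)).sum())

flat = np.full((8, 8), 7, dtype=np.uint8)            # uniform sub-image
mixed = np.arange(64, dtype=np.uint8).reshape(8, 8)  # 64 distinct levels
print(local_entropy(flat), local_entropy(mixed))
```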
For accurate matching, a cross-shaped feature area composed of two overlapping rectangles is used. Since rotation around the camera's optical axis may occur during shooting, the image may contain rotation errors; the cross-shaped feature area can reduce the impact of such errors on the stitching result to a certain extent. The horizontal part of the cross-shaped region is insensitive to errors in the vertical direction, while the vertical part is insensitive to errors in the horizontal direction.
The algorithm is described below.
The first step is rough matching:
- (1)
The benchmark template (in the left image) is divided into blocks, and the local entropy H of each block is calculated according to Equation (8).
- (2)
Select a subwindow every two or three points in the right image, and divide it into blocks in the same way as the benchmark template.
- (3)
Calculate the sum T of the absolute values of the local entropy differences between each block of the selected subwindow in the right image and the corresponding block of the benchmark template.
- (4)
If T exceeds a given threshold Thre, stop calculating the entropies of the remaining blocks of this subwindow in the right image and return to (2).
- (5)
The coordinate with the smallest entropy difference is the best matching point obtained in the coarse matching stage, denoted as (i, j).
The second step is fine matching:
- (1)
Take (i, j) as the center and form a 5 × 5 search range in the right image.
- (2)
The coarse-matching method is applied pixel by pixel; the position with the smallest entropy difference is the best matching point of the two images.
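A compact sketch of the two-stage search follows: block entropies of the benchmark template are compared with those of candidate subwindows, coarsely every `step` pixels and then pixel by pixel in a 5 × 5 neighborhood of the coarse winner. The 2 × 2 block partition, the synthetic test image, and the exhaustive (rather than threshold-pruned) scoring are simplifying assumptions:

```python
import numpy as np

def block_entropies(img, grid=2):
    """Entropy (Equation (8)) of each block of a grid x grid partition."""
    h, w = img.shape
    bh, bw = h // grid, w // grid
    ents = []
    for by in range(grid):
        for bx in range(grid):
            block = img[by * bh:(by + 1) * bh, bx * bw:(bx + 1) * bw]
            p = np.bincount(block.ravel(), minlength=256) / block.size
            p = p[p > 0]
            ents.append(float(-(p * np.log2(p)).sum()))
    return np.array(ents)

def entropy_match(template, search, step=2):
    """Coarse scan every `step` pixels, then a 5 x 5 pixel-by-pixel refinement
    around the coarse winner; score = summed |local entropy difference|."""
    th, tw = template.shape
    t_ent = block_entropies(template)
    H, W = search.shape

    def diff(y, x):
        return float(np.abs(t_ent - block_entropies(
            search[y:y + th, x:x + tw])).sum())

    cy, cx = min(((y, x) for y in range(0, H - th + 1, step)
                  for x in range(0, W - tw + 1, step)), key=lambda q: diff(*q))
    cand = [(y, x)
            for y in range(max(0, cy - 2), min(H - th, cy + 2) + 1)
            for x in range(max(0, cx - 2), min(W - tw, cx + 2) + 1)]
    best = min(cand, key=lambda q: diff(*q))
    return best, diff(*best)

# template with four blocks of distinct entropy (0, 4, 1, and 2 bits)
template = np.zeros((8, 8), dtype=np.uint8)
template[:4, 4:] = np.arange(16, dtype=np.uint8).reshape(4, 4)
template[4:, :4] = np.repeat(np.array([3, 9], np.uint8), 8).reshape(4, 4)
template[4:, 4:] = np.tile(np.array([1, 2, 4, 8], np.uint8), 4).reshape(4, 4)
search = np.zeros((40, 40), dtype=np.uint8)
search[12:20, 16:24] = template          # plant the template at (12, 16)
print(entropy_match(template, search))
```

The sequential-detection threshold Thre would additionally abandon a subwindow as soon as its partial sum T exceeds the current best score.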
3.5. Key Technologies in Reverse Engineering
Main Data Types in Reverse Engineering
The main data types used in reverse engineering are point cloud data and grid data:
- (1)
Point cloud data. The raw data collected by laser 3D scanning equipment are called point cloud data: a set of scattered points in space that records various kinds of physical information about the object surface. The basic information is the 3D coordinate (x, y, z) of each discrete point on the object surface; other information includes the normal (nx, ny, nz), color, transparency, surface texture features, etc. A model composed of point cloud data is called a point cloud model; its main file formats include .asc and .pcd.
- (2)
Grid data. Grid data consist of a series of points and grids connecting these points, including triangular grid data and polygonal grid data.
The storage structure of a triangular mesh model mainly comprises the three-dimensional coordinates of the vertices and triangles given as triples of vertex indices; some mesh models also include per-vertex information such as normal vectors, color, and texture coordinates. A triangular mesh model contains three elements: vertices, edges, and faces. Two vertices form an edge, and three edges form a triangular face, giving the model M = (V, E, F). The set V represents all vertices of the model; the set E = {(v_i, v_j) | v_i, v_j ∈ V} represents the mesh edges; and the set F = {(v_i, v_j, v_k) | v_i, v_j, v_k ∈ V} represents all triangular faces of the model. In a closed triangular mesh model, the numbers of vertices V, edges E, and triangular faces F satisfy Euler's formula V − E + F = 2. The number of triangular faces is about twice the number of vertices, and the number of edges is about three times the number of vertices [16].
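These closed-mesh relations are easy to check on the smallest closed triangular mesh, a tetrahedron, by deriving the vertex and edge sets from the face list:

```python
# faces of a tetrahedron as vertex-index triples (smallest closed mesh)
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

vertices = {v for f in faces for v in f}
edges = {tuple(sorted((f[i], f[(i + 1) % 3]))) for f in faces for i in range(3)}

V, E, F = len(vertices), len(edges), len(faces)
print(V, E, F, V - E + F)   # a closed mesh satisfies V - E + F = 2
```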
The topology structure of the mesh model includes three types of neighborhood: point neighborhood, edge neighborhood, and face neighborhood (Figure 3, Figure 4 and Figure 5).
- (1)
Point neighborhood: For any vertex in the model, its kth-order neighborhood is the set of points reachable from that vertex via k or fewer edges. Its first-order (one-ring) neighborhood is the set of vertices directly connected to the target vertex by an edge. A triangle that shares the vertex is called an adjacent triangle of the target point.
- (2)
Edge neighborhood: the set of the two triangular faces that share a given edge.
- (3)
Face neighborhood: the set of triangular faces that share an edge with the target triangular face.
In this paper, the grid model format is the STL mesh model. An STL file represents the surface of the model discretely through triangular facets; the data for each facet comprise the normal vector of the facet and its three vertices, ordered according to the right-hand rule.
3.6. Rapid Prototyping Technology
Rapid prototyping technology is completely different from traditional processing technology. Based on the principle of manufacturing three-dimensional entities by stacking material layer by layer, it integrates computer-aided design (CAD), computer-aided manufacturing (CAM), computer numerical control (CNC), lasers, new materials, precision servo systems, and other technologies. The implementation process is shown in Figure 6 below. First, the CAD model is converted into an STL triangular mesh model; the mesh model is then sliced layer by layer with slicing software to generate the print path; finally, the rapid prototyping equipment completes the printing and post-processing.
Today, the mainstream rapid prototyping technologies include fused deposition modeling (FDM), stereolithography (SLA), selective laser melting (SLM), selective laser sintering (SLS), laminated object manufacturing (LOM), and three-dimensional inkjet printing (3DP). The most common are FDM and SLA.
3.7. Extraction of Feature Points from Fragments of Terra Cotta Warriors and Horses
3.7.1. Fracture Plane Segmentation of Debris
The Delaunay triangulation scheme needs to meet two criteria:
- (1)
Empty circumcircle property. A Delaunay triangulation is unique; that is, no four vertices of the triangulation are cocircular, and the circumscribed circle of any triangle in the Delaunay network contains no other points.
- (2)
Maximize-the-minimum-angle property: among all triangulations of a point set, the Delaunay triangulation maximizes the smallest interior angle [17].
The Delaunay triangulation network is the companion figure (dual graph) of the Voronoi diagram. The main idea of the Voronoi diagram is as follows: given a set of points P = {p_1, p_2, ..., p_n} distributed in the plane, a region V(p_i) is assigned to each point p_i, as shown in Equation (9) below:

V(p_i) = { x : d(x, p_i) ≤ d(x, p_j) for all j ≠ i },

where V(p_i) is called the Voronoi polygon of p_i, and p_i is called the forming point of the Voronoi polygon.
The definition of Delaunay triangulation can be obtained from the Voronoi diagram. First, Voronoi polygons that share a common edge are called adjacent Voronoi polygons. Connecting the forming points of all adjacent Voronoi polygons produces a large set of triangles, called the Delaunay triangulation network. In mesh subdivision, the subdivision formed in this way is referred to as the Delaunay triangulation.
According to their implementation, algorithms that generate a Delaunay triangulation fall into three categories: divide-and-conquer algorithms, triangulation growth methods, and point-by-point insertion. Using point-by-point insertion to build the Delaunay network was first proposed by Lawson, and Bowyer, Watson, and others subsequently improved it. The core idea of point-by-point insertion is to first establish an initial triangulation and its corresponding point set, then traverse the point cloud data, inserting points into the set one by one and adding new triangles to the triangulation. Depending on how the triangulation is initialized and how triangles are built for newly inserted points, many versions of the algorithm exist.
The generation algorithm adopted in this paper is the Bowyer–Watson algorithm, and its basic idea is as follows:
- (1)
Assume that an initial Delaunay triangulation containing several vertices has already been generated.
- (2)
When a new node is added, all generated Delaunay triangles are traversed, and their spatial relationship to the new node is computed. If the new node lies inside the circumcircle of a Delaunay triangle, the Delaunay condition is violated there; that triangle is deleted, forming a cavity [18].
- (3)
Connect the newly added nodes with the nodes in the cavity successively to form a new Delaunay triangle mesh.
- (4)
Adjust the data structure and add the data of the newly generated triangle to the Delaunay triangle list.
- (5)
Continue step (2) until all nodes have joined.
The algorithm process is shown in Algorithm 1:
Algorithm 1: Bowyer–Watson Algorithm for Delaunay Triangulation
Input: point list P of the point cloud data
Output: Delaunay triangular mesh
Step 1: Initialize the point list P, with n points in total.
Step 2: Initialize the triangle array T to be empty; it stores the generated Delaunay triangular mesh.
Step 3: Perform Steps 4 to 11 for each point p in P.
Step 4: Initialize the edge list e to be empty.
Step 5: Perform Steps 6 to 11 for each element (one triangle) t in the triangle array T.
Step 6: Perform Steps 7 and 8 when point p is inside the circumcircle of triangle t.
Step 7: Compare the three edges of triangle t with all edges in the edge list e: when an edge already exists in e, increase its weight in e by 1; when an edge is not in e, add it to e with weight 1.
Step 8: Delete triangle t from the triangle array T.
Step 9: Perform Steps 10 and 11 for each edge in the edge list e.
Step 10: When the weight of the edge is 1, add the triangle formed by the edge and point p to the triangle array T.
Step 11: When the weight of the edge is not 1, return to Step 9.
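Steps (1)-(5) and Algorithm 1 can be sketched in 2D as follows. The super-triangle construction and the counter-clockwise orientation handling are standard ingredients that the text leaves implicit, and the four test points are made up:

```python
def orient_ccw(tri, pts):
    """Return tri with its vertices in counter-clockwise order."""
    (ax, ay), (bx, by), (cx, cy) = (pts[i] for i in tri)
    if (bx - ax) * (cy - ay) - (by - ay) * (cx - ax) < 0:
        return (tri[0], tri[2], tri[1])
    return tri

def in_circumcircle(tri, p, pts):
    """True if p is strictly inside the circumcircle of CCW triangle tri."""
    (ax, ay), (bx, by), (cx, cy) = (pts[i] for i in tri)
    ax -= p[0]; ay -= p[1]; bx -= p[0]; by -= p[1]; cx -= p[0]; cy -= p[1]
    return ((ax * ax + ay * ay) * (bx * cy - cx * by)
            - (bx * bx + by * by) * (ax * cy - cx * ay)
            + (cx * cx + cy * cy) * (ax * by - bx * ay)) > 0

def bowyer_watson(points):
    """Incremental Delaunay triangulation; returns vertex-index triples."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    cx, cy = (min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2
    d = 10 * max(max(xs) - min(xs), max(ys) - min(ys), 1)
    n = len(points)
    pts = list(points) + [(cx - 2 * d, cy - d), (cx + 2 * d, cy - d),
                          (cx, cy + 2 * d)]          # enclosing super-triangle
    tris = [orient_ccw((n, n + 1, n + 2), pts)]
    for i in range(n):
        bad = [t for t in tris if in_circumcircle(t, pts[i], pts)]
        boundary = {}                    # edge -> multiplicity among bad tris
        for t in bad:
            for k in range(3):
                e = tuple(sorted((t[k], t[(k + 1) % 3])))
                boundary[e] = boundary.get(e, 0) + 1
        tris = [t for t in tris if t not in bad]
        for e, count in boundary.items():
            if count == 1:               # edge lies on the cavity boundary
                tris.append(orient_ccw((i, e[0], e[1]), pts))
    return [t for t in tris if all(v < n for v in t)]  # drop super-triangle

quad = [(0.0, 0.0), (2.0, 0.0), (1.0, 2.0), (0.2, 1.0)]
triangles = bowyer_watson(quad)
print(len(triangles))   # a convex quadrilateral yields 2 Delaunay triangles
```

Real fragment contours would feed thousands of points through the same loop; robust (exact-arithmetic) in-circle predicates are advisable for noisy scan data.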
3.7.2. Extraction of Fracture Surface Contour of Debris
According to different selected geometric features, common triangular mesh surface segmentation methods are mainly divided into two categories:
- (1)
Segmentation based on boundary lines. First, the curvature and normal vector difference are calculated point by point for the model data. By comparing the curvature and normal-difference characteristics of adjacent points, the points with large variation on the surface are identified and assigned to different groups. A curve fitting algorithm is then applied to the points in each group to obtain a set of closed curves, and this curve set is used to segment the model into multiple disjoint regions, realizing the segmentation of the whole model.
- (2)
Face-based segmentation. The geometric features of the triangles in the model are calculated one by one, and adjacent triangles with highly similar geometric features are assigned to the same set. Multiple sets of triangles are finally obtained, each constituting a segmentation region.
In this paper, contour extraction is realized by calculating the angle between the normal vectors of adjacent triangular faces. First, the normal angle between each triangle and its adjacent triangles is calculated triangle by triangle. When the normal angle is greater than a given threshold, the two triangles belong to different surfaces and are assigned to different surface sets; when it is not greater than the threshold, they belong to the same surface and are assigned to the same set. The surface sets are then merged: when the number of triangles in a set is below a given threshold, the set is considered over-partitioned and is merged into an adjacent surface set. If the normal perturbation within a surface is less than a given threshold, it is an original surface; otherwise, it is a fracture surface. The main steps of the algorithm are shown in Algorithm 2:
Algorithm 2: Extraction of the Contour of Triangular Mesh Data According to the Angle between Normal Vectors
Input: Fragment triangular mesh data and point cloud data
Output: The linked-list representation of the contour
Step 1: Calculate the normal vector of every point in the input point cloud and of every triangle in the input mesh, and sort the points in descending order of the angle between normal vectors.
Step 2: Set the segmentation threshold for the normal-vector angle and traverse the data point by point. When the angle between normal vectors at a point is greater than the given threshold, the edges adjacent to that point may lie on the fracture surface contour; add the point to a temporary point set S.
Step 3: Select points from S one by one. For the currently selected point p, calculate the normal-vector angle between the two triangles sharing each edge adjacent to p. If the angle is greater than or equal to the given threshold, the edge may belong to a fracture surface curve and is added to the candidate edge set E; if the angle is less than the threshold, the edge is a component of the original surface. Then select the next point from S.
Step 4: Traverse the candidate edge set E, find pairs of edges belonging to the same triangle, and take the points of such edge pairs as seed nodes R, from which the contour linked list is generated.
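The normal-angle test at the heart of this segmentation can be sketched as a grouping pass over the face adjacency graph. The Python sketch below is illustrative only: the union-find grouping, the toy normals, and the 15° threshold are assumptions, not the paper's implementation.

```python
import math

def angle_between(n1, n2):
    # Angle (radians) between two (approximately unit) normal vectors.
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n1, n2))))
    return math.acos(dot)

def segment_faces(normals, adjacency, angle_thresh):
    """Group mesh faces into surface patches: adjacent faces whose
    normal angle stays below angle_thresh join the same patch."""
    parent = list(range(len(normals)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri

    for i, j in adjacency:                  # adjacent triangle pairs
        if angle_between(normals[i], normals[j]) <= angle_thresh:
            union(i, j)                     # same smooth surface

    groups = {}
    for f in range(len(normals)):
        groups.setdefault(find(f), []).append(f)
    return list(groups.values())

# Toy example: faces 0 and 1 nearly coplanar, face 2 tilted sharply.
normals = [(0.0, 0.0, 1.0), (0.05, 0.0, 0.9987), (1.0, 0.0, 0.0)]
adjacency = [(0, 1), (1, 2)]
patches = segment_faces(normals, adjacency, math.radians(15))
print(patches)  # faces 0 and 1 merge; face 2 forms its own patch
```

Edges whose two incident faces end up in different patches are exactly the candidate fracture-contour edges collected in Step 3 of Algorithm 2.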
3.7.3. Extraction of Feature Points on Contour
The feature points on a curve are defined as the points whose features differ significantly from those of the other points in their neighborhood. Commonly used methods for extracting feature points include corner detection and polygon approximation. Extracting feature points reduces the computation of the subsequent algorithm and also reduces its jitter. The core idea of the angle test method is to judge whether a point on a curve is a feature point by its curvature; points with large curvature are regarded as feature points and collected into a feature point list. The core idea of the polygon approximation method is to repeatedly insert new points between the two initial points of the algorithm until certain conditions are met to find a feature point and then to continue the process. In this paper, the Freeman chain code is used to store the contour information. It defines a starting point coordinate to provide entry access to the contour data and defines the direction of each boundary point to provide access to the adjacent points of a point and thus to the entire contour. Assume that the curve C is composed of n points, where each point p_i has coordinate (x_i, y_i, z_i), as follows in Equation (10):
C = {p_1, p_2, ..., p_n}, p_i = (x_i, y_i, z_i),(10)
where p_i represents a point in space, and p_{i+1} is an adjacent point of p_i. The curve C is represented by an 8-connected chain code, and its n vectors are expressed as Equation (11):
C = {a_1, a_2, ..., a_n},(11)
Here, a_i represents the vector pointing to p_i from point p_{i-1}, and each vector a_i takes one of eight values representing the eight directions.
Using Freeman chain code to store the contour of the fracture surface can realize a fast and efficient reading of the points on the contour.
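The chain-code storage can be illustrated in 2D. The sketch below is an assumption for illustration: the direction numbering (0 = east, counter-clockwise) is one common convention, and the paper does not fix a particular one. A contour is encoded as a start coordinate plus one direction index per step, and decoded back by replaying the steps.

```python
# 8-direction offsets, indexed 0..7 (E, NE, N, NW, W, SW, S, SE).
DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def encode_chain(points):
    """Freeman chain code: a start coordinate plus one direction
    index per step between consecutive contour points."""
    start = points[0]
    code = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        code.append(DIRS.index((x1 - x0, y1 - y0)))
    return start, code

def decode_chain(start, code):
    """Recover the contour points from the start coordinate and code."""
    x, y = start
    pts = [(x, y)]
    for d in code:
        dx, dy = DIRS[d]
        x, y = x + dx, y + dy
        pts.append((x, y))
    return pts

# A closed unit square traversed counter-clockwise.
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
start, code = encode_chain(square)
print(code)                       # [0, 2, 4, 6]
assert decode_chain(start, code) == square
```

Because each step is a single small integer, walking forward or backward along the contour is a constant-time index operation, which is what makes the point access fast.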
The angle detection method detects and extracts feature points by comparing the curvature changes of adjacent points on the fracture surface curve. In the implementation of the algorithm, the curvature of the points on the contour is first calculated point by point. The curvature can be obtained as the ratio between the change in the tangent angle and the arc length, as defined in Equation (12), where p is a given point in the plane region:
k(p) = dφ/ds,(12)
where φ is the tangent angle and s is the arc length. It follows from this formula that when the curve is continuous, the calculated curvature curve is also continuous, and the points on the fracture surface curve with large curvature changes can then be obtained by differentiating that curve. However, the contours of the Terra Cotta Warrior fragments are neither smooth nor continuous, so the curvature of points on the contours cannot be calculated directly by Equation (12).
In practice, the curve obtained by the above method is a piecewise function; that is, the curvature value cannot be solved at the breaks of the curve. The method adopted in this paper is to calculate the curvature values of the points in the K-neighborhood of a given point and then estimate the curvature of that point as a weighted average based on the distance between each neighborhood point and the center point, as defined in Equations (13)–(15):
k(p) = Σ_{i=1}^{K} w_i · k(q_i),(13)
w_i = d_i^{-1} / Σ_{j=1}^{K} d_j^{-1},(14)
d_i = ||p − q_i||,(15)
where p is the point whose curvature is to be solved, q_i is a point in the K-neighborhood of p, and w_i is the weighting coefficient calculated from the Euclidean distance d_i. The algorithm implementation steps are shown in Algorithm 3:
Algorithm 3: Feature Point Extraction Algorithm for the Fracture Surface of a Cultural Relic Fragment
Input: Freeman chain code description of the 3D contour
Output: Feature points on the 3D contour
Step 1: As shown in Figure 7, select a center point p on the 3D curve and the K points q_i in the neighborhood of p.
Step 2: Because the contour curve of the Terra Cotta fragments is not continuous and differentiable, the curvature of the center point must be estimated from the information of the points in its neighborhood. Specifically, the curvature value at p contributed by each point q_i is estimated according to Equation (16), and the curvature value at p is then estimated according to Equation (13).
Step 3: Repeat Step 1 and Step 2 until all points on the contour curve have been traversed.
Step 4: Define a threshold and, according to Equation (17), add the points whose curvature difference with adjacent points is greater than the threshold to the list of contour feature points.
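The neighborhood-weighted curvature estimate can be sketched as follows. The discrete turning-angle curvature and the inverse-distance weighting w = 1/(1 + d) are illustrative assumptions standing in for the paper's Equations (12)–(16), not its exact formulas.

```python
import math

def turning_curvature(prev, p, nxt):
    """Discrete curvature at p: turning angle divided by mean chord length
    (a discrete stand-in for k = dphi/ds)."""
    v1 = tuple(a - b for a, b in zip(p, prev))
    v2 = tuple(a - b for a, b in zip(nxt, p))
    n1 = math.sqrt(sum(c * c for c in v1))
    n2 = math.sqrt(sum(c * c for c in v2))
    cosang = max(-1.0, min(1.0, sum(a * b for a, b in zip(v1, v2)) / (n1 * n2)))
    return math.acos(cosang) / (0.5 * (n1 + n2))

def weighted_curvature(points, i, k):
    """Estimate curvature at points[i] on a closed contour as a
    distance-weighted average over its k-neighborhood on each side."""
    n = len(points)
    total, wsum = 0.0, 0.0
    for j in range(i - k, i + k + 1):
        kappa = turning_curvature(points[(j - 1) % n],
                                  points[j % n],
                                  points[(j + 1) % n])
        d = math.dist(points[i], points[j % n])
        w = 1.0 / (1.0 + d)        # closer points weigh more (assumed scheme)
        total += w * kappa
        wsum += w
    return total / wsum

# Sanity check on a circle of radius 2: curvature should be 1/radius = 0.5.
pts = [(2 * math.cos(2 * math.pi * t / 100),
        2 * math.sin(2 * math.pi * t / 100), 0.0) for t in range(100)]
print(round(weighted_curvature(pts, 0, 3), 3))   # 0.5
```

Because the estimate averages over the neighborhood, isolated digitization noise at a single contour point is damped rather than producing a spurious feature point.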
3.8. Feature Extraction and Feature Fusion of Terra Cotta Warrior Fragments
Via the process of extracting feature points on the contour line, the salient feature points on the contour line are extracted and stored using a cyclic bidirectional linked list. The previous feature points and the next feature points of a feature point on the contour line can be easily obtained by accessing the cyclic bidirectional linked list. In general, there are two ways to extract features from 3D curves. One method is to perform a projection operation on the 3D curve and transform the problem of 3D curve feature extraction into the problem of 2D curve feature extraction. Multi-angle projection can achieve the maximum information retention of the original 3D curve, and the original 3D curve can be reconstructed by reverse operation on the projection curve. However, the disadvantage of this method is that when the 3D curve is rotated or the model is scaled, the projection data will change; that is, rotation invariance cannot be achieved, so it is not suitable for the calculation of 3D curves with unfixed spatial positions [
19]. The other method extracts features directly from the 3D curve and is the approach adopted in this paper. Specifically, the curvature, torsion, and left and right chord lengths of the feature points are taken as the feature point descriptors, and the feature vectors are calculated. The feature vectors of all points are then combined into a feature matrix and normalized. Next, the multiple features of each feature point are fused by a multilayer perceptron and mapped to a single feature. Finally, the header of the feature point list is defined according to the intersection of the two fragment contours in the virtual interactive environment.
3.8.1. Multi-Feature Extraction of Contour Feature Points
For the definition of the feature descriptors, this paper mainly uses the following four features: the curvature and torsion of the feature points, and the left and right chord lengths at the feature points.
To calculate the curvature and torsion of the feature points, the curve equation is defined in this paper with the arc length s as the parameter. The curve equation r is defined in Equation (18), and the curvature k and torsion τ are defined in Equations (19) and (20):
r = r(s),(18)
k(s) = |r''(s)|,(19)
τ(s) = ((r'(s) × r''(s)) · r'''(s)) / |r''(s)|²,(20)
The feature point set of the 3D contour is composed of a series of points, stored in the computer as a cyclic bidirectional linked list. The feature point set is defined in Equation (21):
P = {p_1, p_2, ..., p_n},(21)
where p_i represents the ith feature point on the contour, and the coordinate of p_i is (x_i, y_i, z_i). Moreover, since the 3D contour is a closed curve, the feature point set extracted from the contour should also be closed, so p_1 = p_n.
Since the fracture surface curve of a Terra Cotta Warrior fragment is discrete, the chord length is used in this paper to represent the arc length of the fracture surface curve. The differential calculation of the chord length is given in Equations (22)–(24):
Δx_i = x_i − x_{i−1}, Δy_i = y_i − y_{i−1}, Δz_i = z_i − z_{i−1},(22)
Δs_i = sqrt(Δx_i² + Δy_i² + Δz_i²),(23)
s ≈ Σ_i Δs_i.(24)
By summing the Δs_i to approximate the arc length, the curvature and torsion components of the eigenvector are obtained. Substituting this approximation into Equations (19) and (20) yields the discrete definitions of the curvature and torsion at point p_i on the curve, as shown in Equations (25) and (26):
k_i = |r''(s_i)| ≈ |Δ²p_i| / Δs_i²,(25)
τ_i ≈ ((Δp_i × Δ²p_i) · Δ³p_i) / |Δ²p_i|².(26)
The above method of estimating the arc length of a curve by the chord length not only solves the problem of calculating the arc length of a discontinuous curve but also shares its intermediate results with the calculation of the left and right chord lengths, which performs well in practice. The left and right chord lengths of a feature point are obtained by calculating the Euclidean distance between the point and its neighboring points, as defined in Equations (27) and (28):
l_i^L = ||p_i − p_{i−1}||,(27)
l_i^R = ||p_{i+1} − p_i||.(28)
Finally, we obtain the eigenvector f_i = (k_i, τ_i, l_i^L, l_i^R) corresponding to the feature point p_i.
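The four-component descriptor can be sketched with finite differences over the closed contour. The exact discretization below (turning-angle curvature, torsion as the angle between consecutive osculating-plane normals) is an assumption for illustration, not the paper's equations.

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def norm(a):
    return math.sqrt(sum(x * x for x in a))

def descriptor(pts, i):
    """Feature vector (curvature, torsion, left chord, right chord) at pts[i],
    estimated from the neighbours on each side of a closed contour."""
    n = len(pts)
    pm2, pm1, p, pp1 = pts[(i - 2) % n], pts[(i - 1) % n], pts[i], pts[(i + 1) % n]
    l_left = math.dist(pm1, p)               # left chord length
    l_right = math.dist(p, pp1)              # right chord length
    v1, v2 = sub(p, pm1), sub(pp1, p)
    # Curvature: turning angle over the mean chord (discrete k = dphi/ds).
    cosang = max(-1.0, min(1.0,
              sum(a * b for a, b in zip(v1, v2)) / (norm(v1) * norm(v2))))
    k = math.acos(cosang) / (0.5 * (l_left + l_right))
    # Torsion: angle between consecutive osculating-plane normals per chord.
    b1, b2 = cross(sub(pm1, pm2), v1), cross(v1, v2)
    tau = 0.0
    if norm(b1) > 1e-12 and norm(b2) > 1e-12:
        c = max(-1.0, min(1.0,
              sum(a * b for a, b in zip(b1, b2)) / (norm(b1) * norm(b2))))
        tau = math.acos(c) / l_left
    return (k, tau, l_left, l_right)

# A planar circle of radius 2: curvature ~ 0.5, torsion ~ 0.
pts = [(2 * math.cos(2 * math.pi * t / 100),
        2 * math.sin(2 * math.pi * t / 100), 0.0) for t in range(100)]
k, tau, ll, lr = descriptor(pts, 10)
print(round(k, 2))   # curvature of a circle is 1/radius
```

A planar curve gives near-zero torsion, so the torsion component only becomes informative for genuinely three-dimensional fracture contours.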
3.8.2. Adaptive Multi-Feature Fusion Method
In the process of matching and concatenating Terra Cotta Warrior fragments, a single feature cannot achieve a good concatenation effect, while using many separate features leads to a high time complexity of the algorithm. In this paper, the number of features used for matching and the time complexity of the algorithm are balanced. The fused feature F_i of point p_i is defined by the multi-feature fusion function in Equation (29):
F_i = MLP(g(p_{i−1}, p_i, p_{i+1}), F̃_i),(29)
where the function g calculates the relationship between the current point p_i and its left and right adjacent points p_{i−1} and p_{i+1}, and F̃_i is the augmented matrix of the feature vector f_i. The relation function g is defined as the difference between the features of the current point and those of its neighboring points. After combining the formulas, g is defined as in Equations (30) and (31):
g(p_{i−1}, p_i, p_{i+1}) = (f_i − f_{i−1}, f_i − f_{i+1}),(30)
f_i = (k_i, τ_i, l_i^L, l_i^R).(31)
F̃_i is defined as the augmented matrix of f_i, as given in Equation (32):
F̃_i = [f_i | g(p_{i−1}, p_i, p_{i+1})].(32)
After feature fusion, the initial four-dimensional features are fused into one-dimensional features. Meanwhile, the multilayer perceptron can adaptively adjust the weight according to the stitching error and optimize the feature fusion process.
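The fusion step can be sketched as a forward pass of a small multilayer perceptron mapping the 4-dimensional descriptor to one scalar. The architecture (one hidden tanh layer of width 8) and the random weights are assumptions for illustration; in the paper, the weights are adjusted adaptively according to the stitching error.

```python
import math
import random

def mlp_fuse(feature, w1, b1, w2, b2):
    """Map a 4-dim descriptor (curvature, torsion, left/right chord length)
    to a single fused feature: one hidden tanh layer, one linear output."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, feature)) + b)
              for row, b in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2

random.seed(0)
H = 8                                     # hidden width (assumed)
w1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

# Fuse one hypothetical descriptor into a single scalar feature.
fused = mlp_fuse((0.5, 0.02, 0.13, 0.12), w1, b1, w2, b2)
print(type(fused).__name__)               # a single scalar per feature point
```

In training, the stitching error would be backpropagated through this network to adjust w1, w2 so that descriptors of truly matching points map to nearby scalars.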
3.9. Splicing Algorithm of Terra Cotta Warrior and Horse Fragments
3.9.1. Fragment Coarse Matching Based on LCS Idea
The longest common subsequence (LCS) problem is to find the longest subsequence common to two sequences. There are two common ways to solve it. One is an exhaustive search that enumerates all possible subsequences of the two sequences and selects the longest common one. This method solves the problem, but its time complexity is high, and when the given sequences are long, the computation becomes infeasible. Considering that there are many feature points on the fracture surface of the Terra Cotta Warrior fragments, the running time of the exhaustive search would be too long, so the dynamic programming method is adopted in this paper. In the conventional solution of the longest common subsequence, the criterion for subsequence matching is whether the element values are equal. However, this criterion is not applicable to the matching of Terra Cotta Warrior fragments: the fracture surfaces may be worn, and errors may be introduced by the limited scanning accuracy and improper operation of the scanning instruments during the digitization of the fragments.
The general longest common subsequence algorithm, when judging whether two sequences X and Y are the same, will directly compare whether the element values of each corresponding position in the two sequences are the same. However, for the N-tuple of the fragment features, due to the inevitable introduction of certain errors in the process of data acquisition and feature extraction and the inevitable wear of the fracture surface, the corresponding fusion features between the fragment matching pairs will also have certain differences. Therefore, when transforming fragment coarse matching into the problem of finding the longest common subsequence, a comparison threshold should be set instead of directly comparing whether the values of the corresponding positions in feature N-tuples are equal, as defined in Equation (33):
c(i, j) = 0, if i = 0 or j = 0;
c(i, j) = c(i−1, j−1) + 1, if |t(p_i^a) − t(p_j^b)| < ε;
c(i, j) = max(c(i−1, j), c(i, j−1)), otherwise,(33)
where p_i^a represents the ith feature point of the ath cultural relic fragment, and p_j^b represents the jth feature point of the bth cultural relic fragment. The function t(·) takes the fused eigenvalue of a feature point, ε is the comparison threshold, and c(i, j) records the optimal matching length of the first i feature points of the ath fragment and the first j feature points of the bth fragment. The algorithm implementation process is shown in Algorithm 4:
Algorithm 4: Fracture Surface Contour Matching Based on the LCS Idea
Input: Feature point sequences of the fracture surfaces of the cultural relic fragments
Output: Matching degree (number of matched feature points) and matching result of two fragments
Step 1: Suppose that one group obtained by the fragment classification in Section 3 contains several fragments, each of which may contain a different number of feature points. Two fragments, a and b, are selected from the group for matching, with corresponding feature point sequences P^a and P^b. A two-dimensional array c is initialized to hold the intermediate values of the calculation; its dimensions correspond to the numbers of feature points of the two fragments.
Step 2: A two-layer loop traverses each point in P^a and P^b, and the optimal solution of the current subproblem is computed according to the state transition equation defined in Equation (33).
Step 3: Backtracking is carried out through the array c to obtain the matching result of the feature points of the fracture surfaces.
Step 4: Repeat Step 1 to Step 3 until all fragments to be matched have been processed.
The matching and concatenation process uses dynamic programming to solve the LCS problem. The core idea is to construct a two-dimensional array that stores intermediate results and avoids the repeated computation of subproblems. The time complexity is the product of the lengths of the two matched lists; that is, the time complexity is O(mn), where m and n are the lengths of the sequences to be matched.
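The thresholded dynamic program can be sketched as follows. The scalar fused features and the threshold eps are illustrative; note also that thresholded equality is not transitive, so this is a practical heuristic rather than an exact LCS on an equivalence relation.

```python
def lcs_match(a, b, eps):
    """Longest common subsequence of two fused-feature sequences, where two
    features 'match' when they differ by less than eps (cf. Equation (33))."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if abs(a[i - 1] - b[j - 1]) < eps:       # thresholded equality
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Backtrack to recover the matched index pairs.
    pairs, i, j = [], m, n
    while i > 0 and j > 0:
        if abs(a[i - 1] - b[j - 1]) < eps:
            pairs.append((i - 1, j - 1))
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return dp[m][n], pairs[::-1]

# Two hypothetical fused-feature sequences from two fragment contours.
x = [0.10, 0.52, 0.90, 0.31]
y = [0.11, 0.50, 0.33]
length, pairs = lcs_match(x, y, 0.05)
print(length, pairs)    # 3 [(0, 0), (1, 1), (3, 2)]
```

The matched index pairs give the correspondences passed on to the fine-matching stage; the match count serves as the coarse matching degree between the two fragments.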
3.9.2. Fragment Fine Matching Based on ICP Algorithm
The coarse matching of cultural relic fragments with a single fusion feature is realized by using the LCS idea. In order to achieve a better stitching effect, it is necessary to perform a fine matching process on the fragments of cultural relics.
The core of the ICP algorithm consists of two steps: determining the corresponding point set and calculating the optimal rigid body transformation. These two steps are repeated until the convergence criterion is met. The ICP algorithm finds the rigid body transformation, composed of a rotation matrix R and a translation vector T, that minimizes the error E in Equation (34). As Equation (34) shows, the ICP algorithm is essentially a matching algorithm based on the least squares method:
E(R, T) = (1/n) Σ_{i=1}^{n} ||y_i − (R x_i + T)||²,(34)
where x_i and y_i are corresponding points of the two point sets.
Given two Terra Cotta Warrior fragment models P and Q, their fracture surface point sets are X = {x_1, ..., x_m} and Y = {y_1, ..., y_n}, where m and n are the numbers of points in the fracture surface sets of models P and Q, respectively. Moreover, m ≥ n should be ensured; that is, the fracture surface of model P contains at least as many points as that of model Q. Since the number of points on fracture surface X differs from that on fracture surface Y, a subset X_k of X is selected for the operation at each iteration, where k is the iteration number of the ICP algorithm. Equation (35) defines the selection criteria for the point set X_k. The point set X_k is then registered with the point set Y. The specific implementation of the ICP algorithm is shown in Algorithm 5:
Algorithm 5: ICP Registration Method for Point Clouds with Different Numbers of Points
Input: Fracture surface point sets of the Terra Cotta Warrior fragments
Output: The registration result of the fracture surface point sets
Step 1: Define the point cloud data X and Y of the fracture surfaces of the fragment models, select the subset X_k from X according to the constraint in Equation (35), and define X_1 = X at the first iteration.
Step 2: Solve for the rotation matrix R_k and translation vector T_k of the point clouds X_k and Y according to Equation (34), where k is the iteration number.
Step 3: Apply the rotation and translation to obtain the transformed point cloud X_{k+1}.
Step 4: Calculate the registration error d and terminate the iteration when d is less than the defined threshold; otherwise, return to Step 1.
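The ICP loop can be sketched in 2D, where the least-squares rigid transform of Equation (34) has a simple closed form. The paper registers 3D fracture surfaces; this planar version, the nearest-neighbor pairing, and the synthetic parabola data are illustrative assumptions only.

```python
import math

def best_rigid_2d(src, dst):
    """Closed-form 2D rigid transform (theta, t) minimising
    sum |R*s + t - d|^2 over paired points (least-squares core of ICP)."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    sxx = syy = sxy = syx = 0.0
    for (x, y), (u, v) in zip(src, dst):
        x, y, u, v = x - csx, y - csy, u - cdx, v - cdy
        sxx += x * u; syy += y * v; sxy += x * v; syx += y * u
    theta = math.atan2(sxy - syx, sxx + syy)     # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)               # translation after rotation
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)

def icp_2d(src, dst, iters=20):
    """Minimal ICP loop: pair each source point with its nearest target
    point, solve the best rigid transform, apply it, repeat."""
    pts = list(src)
    for _ in range(iters):
        pairs = [min(dst, key=lambda d: math.dist(p, d)) for p in pts]
        theta, (tx, ty) = best_rigid_2d(pts, pairs)
        c, s = math.cos(theta), math.sin(theta)
        pts = [(c * x - s * y + tx, s * x + c * y + ty) for x, y in pts]
    return pts

# Target contour and a slightly rotated/translated copy of a subset of it
# (the subset plays the role of X_k, the full contour the role of Y).
dst = [(float(i), (i * i) / 10.0) for i in range(8)]
ang = math.radians(10)
ca, sa = math.cos(ang), math.sin(ang)
src = [(ca * x - sa * y + 0.3, sa * x + ca * y - 0.2) for x, y in dst[:6]]
aligned = icp_2d(src, dst)
err = max(math.dist(a, b) for a, b in zip(aligned, dst))
print(round(err, 3))    # 0.0 — the subset snaps back onto the target
```

With a small initial misalignment, the nearest-neighbor pairing is correct from the first iteration and the closed-form solve recovers the transform exactly; this is why ICP is used here only for fine matching, after the LCS stage has provided a coarse alignment.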
4. Analysis of Results
This paper introduces a matching and splicing method of Terra Cotta Warrior fragments, which is based on the fragment classification method proposed in
Section 3, and splices fragments of the same category [
20]. The splicing method includes the following steps:
Fragment fracture surface contour extraction: Firstly, we extract the fracture surface contour lines of each fragment. This can be achieved via image processing technology and edge detection algorithms.
Feature point extraction and fusion: Then, we extract feature points from the contour line and fuse multiple features on each feature point into a single feature representation. This can reduce data dimensions and extract key shape information.
Linked list construction: Next, in the interactive page, we expand the contour feature points of the two fragments at their intersection and organize them in the form of a doubly linked list. This can establish topological relationships between fragments.
Rough matching and fine matching: Finally, we use the methods of rough matching and fine matching to match the fragments. By calculating the similarity and shape matching between feature points, we can find the optimal splicing position and achieve fragment splicing. In this section, we selected some fragments from the third excavation of Terra Cotta Warriors Pit 1 to conduct experiments and presented the experimental results of each step. This method can help us effectively splice the fragments of Terra Cotta Warriors and restore their original shape and structure.
Table 1 shows the comparison between the Terra Cotta Warrior fragment splicing method proposed in this paper and the matching method based on the concavity and convexity of the fracture surface. The experimental results show that the proposed method is significantly better than traditional methods in terms of time consumption. When the number of vertices of the fracture surface is 15,064, the calculation time of this method is 1890 ms, which is 3012 ms faster than the matching method based on the concavity and convexity of the fracture surface and 571 ms faster than the complementary shape matching and splicing method of the fractured rigid body. Thus, it can effectively improve the efficiency of fragment matching and splicing.
In this section, as shown in
Figure 8, the matching and splicing method for Terra Cotta Warrior fragments is summarized as follows: (1) extract the fracture surface contour lines and the feature points on the contour lines of the fragments; (2) calculate the curvature, torsion, left and right chord lengths, and the feature relationships of adjacent points at the feature points, and use the multilayer perceptron method to fuse the multiple features into a single feature; (3) determine the head node of the contour feature point chain through the intersection points of fragments in an interactive environment; (4) use the LCS idea to perform rough matching on fragments within the same group; (5) use the ICP algorithm to accurately match and align the fragment matching pairs. The experimental results show that the method used in this paper can exploit the multiple features of Terra Cotta Warrior fragments to complete more accurate matching and splicing and improves operating efficiency compared with traditional methods.
The computer-assisted virtual restoration of cultural relics largely avoids the potential secondary damage that may occur during the restoration of damaged relics. A scanner is used to obtain three-dimensional models of the relics, and relevant computer technologies and algorithms are then applied to complete preprocessing, feature enhancement, classification, contour line extraction, and the extraction of feature points on the contour lines, after which the stitching is completed. The completed system provides a practical and reliable solution for the restoration and splicing of cultural relics, improves the efficiency of restoration work, and reduces its cost.
5. Discussion
To solve the problems of the low classification accuracy and poor splicing effect of Terra Cotta Warrior fragments, a 3D digital cultural relic virtual restoration system and method based on a fuzzy logic algorithm were proposed. This article first extracted the contour lines of the fracture surface and identified feature points on them. We then calculated the curvature, torsion, left and right chord lengths, and other parameters at these feature points and analyzed their relationships with adjacent points. To synthesize the multiple features into a single feature, we used the multilayer perceptron method. Secondly, we processed the intersection points of fragments in an interactive environment. Next, we used the idea of the longest common subsequence (LCS) to preliminarily match fragments within the same group. Finally, we used the iterative closest point (ICP) algorithm to accurately match and align fragment matching pairs. The experimental results show that the method proposed in this paper is significantly better than traditional methods in terms of time consumption. When the number of vertices of the fracture surface is 15,064, the calculation time of this method is 1890 ms, which is 3012 ms faster than the matching method based on the concavity and convexity of the fracture surface and 571 ms faster than the complementary shape matching and splicing method of the fractured rigid body. It can effectively improve the efficiency of fragment matching and splicing. The fragment-stitching algorithm based on multi-feature adaptive fusion improves both the speed and the effectiveness of stitching.
In this paper, image processing technology is used to extract the fracture surface contours of Terra Cotta Warrior fragments and determine the feature points on the contours. Mathematical methods are used to calculate the curvature and torsion at the feature points, as well as the left and right chord lengths and the feature relationships with adjacent points; together, these features describe the shape and topological structure of the fragments. A multilayer perceptron is used to fuse the multiple features into a single feature vector: it is a neural network model that integrates multiple input features through several levels of computation and learns a more expressive feature representation. In an interactive environment, the head node of the contour feature point linked list is determined via observation and user interaction; the selection of this head node plays an important role in the subsequent matching and splicing process. The idea of the longest common subsequence (LCS) is used to roughly match the fragments in the same group; the LCS algorithm finds the longest subsequence common to two sequences and thereby determines the start and end points of the match. Finally, the iterative closest point (ICP) algorithm performs precise matching and alignment on the fragment pairs obtained from rough matching, continuously adjusting their position and posture via iterative optimization so that they are spliced together more accurately. The experimental results show that the method used in this paper can exploit multiple features of the Terra Cotta Warrior fragments to complete more accurate matching and splicing.
Compared with traditional methods, there is also a certain improvement in operational efficiency, enabling matching and splicing tasks to be completed faster. In summary, the Terra Cotta Warrior fragment matching and splicing method proposed in this paper achieves high-quality splicing by applying multi-feature fusion and a multilayer perceptron, combined with user interaction in an interactive environment and the accurate matching of the ICP algorithm, and it provides an effective solution for the field of cultural relic restoration.
6. Conclusions
This paper proposes a matching and splicing method for Terra Cotta Warrior fragments, which mainly includes extracting the fracture surface contour lines and the feature points on them; calculating the curvature, torsion, left and right chord lengths, and the feature relationships between adjacent feature points; using the multilayer perceptron method to fuse multiple features into a single feature; determining the head node of the contour feature point linked list through the intersection of the fragments in an interactive environment; using the LCS idea to perform rough matching on fragments within the same group; and using the ICP algorithm to perform precise matching and alignment on fragment matching pairs. The experimental results show that the method proposed in this paper is significantly better than traditional methods in terms of time consumption. When the number of vertices of the fracture surface is 15,064, the calculation time of this method is 1890 ms, which is 3012 ms faster than the matching method based on the concavity and convexity of the fracture surface and 571 ms faster than the complementary shape matching and splicing method of the fractured rigid body.
The research focus of this article is on a 3D digital cultural relic virtual restoration system and method based on fuzzy logic algorithm. This research aims to solve the problems of the low classification accuracy and poor splicing effect of Terra Cotta Warriors fragments. To achieve this goal, researchers have proposed a fragment-stitching algorithm based on multi-feature adaptive fusion and combined it with the calculation results of multilayer perceptrons for stitching operations. The focus of this study includes the following aspects:
Feature extraction and fusion: Researchers extracted the curvature, torsion, left and right chord lengths, and other features of the fracture surface contour lines of cultural relic fragments and composed them into feature vectors. By using multilayer perceptron to fuse and compress feature vectors, a more expressive feature representation is obtained.
Fragment splicing algorithm: By utilizing the calculation results of a multilayer perceptron, the system can automatically select appropriate splicing methods to achieve accurate matching and splicing operations of fragments. By adjusting the weights of the multilayer perceptron, the error rate of fragment splicing can also be reduced.
In the future, 3D digital cultural relic virtual restoration systems and methods based on fuzzy logic algorithms have the following prospects:
Improve the efficiency of cultural relic restoration: This method is significantly better than traditional methods in terms of time consumption and can effectively improve the efficiency of fragment matching and splicing. This will greatly accelerate the speed of cultural relic restoration and improve work efficiency.
Improve the accuracy of cultural relic restoration: Via feature fusion and multilayer perceptron calculation, this method can improve the accuracy of fragment splicing. The restored cultural relic model will be more complete and accurate, which will help protect and study cultural relics.
Promote digital cultural relic protection: Digital cultural relic restoration is the process of transforming physical cultural relics into virtual models. The restoration system based on fuzzy logic algorithm provides a feasible digital cultural relic protection scheme. This will promote the digital process of cultural relic protection and facilitate the dissemination and preservation of cultural relics.