1. Introduction
Facial plastic surgery (PS) aims to achieve harmony of facial features by correcting cosmetic deformities. PS enjoys increasing popularity across countries and social groups. However, traditional facial PS relies only on the plastic surgeon's experience and personal esthetics for surgical design, which makes the outcome unpredictable and increases surgical risk. The popularity of PS has also triggered new studies in face recognition that account for facial differences caused by these procedures [1,2].
Several systems have been proposed for presenting the possible outcomes of a surgical procedure to patients and surgeons [3,4,5,6,7]. Compared with 2D image acquisition, 3D scanning is more efficient. Because the 3D model is designed and repaired from the subject's own 3D data, it offers visibility, quantitative analysis, and a faithful reflection of the 3D structure. Such models add a dimension to the understanding of the presurgical map of a patient's anatomy and give surgeons a wealth of preoperative information. As a result, their application in facial plastic surgery has expanded.
When facial data are collected with a 3D camera, the imaging can yield sub-millimeter 3D data [8]. The surgeon can then perform virtual surgery on these 3D data; by incorporating the behavior of the underlying and surrounding tissues, the simulation can indicate which structures are at greatest risk.
Researchers and practitioners have developed many different systems for cosmetic surgery simulation. Early work simulated the operation in a 3D scene by deforming mesh patches of the model [9]. Lubbers et al. evaluated the handling of the 3dMD system for data acquisition and analysis; they found the system reliable for evaluating the facial surface and able to offer new opportunities in PS [10]. Michael R. performed image-guided surgery in the form of intraoperative navigation toward an idealized virtual image; the accuracy of the surgical reconstruction could then be confirmed using modern portable intraoperative CT scanners [11]. Axis Three's face system applies structured light technology to create 3D data and supplies 3D simulation software based on physics-driven tissue typing for use in esthetic surgery [12,13]. Bottino presented a 3D system for the esthetic planning of face PS that follows a "learning by examples" approach: by constructing a database of harmonious faces, it can suggest changes that move a face toward the shape of similar but more harmonious faces [14]. Yara AlRashidi presented a rule-based adjustment algorithm and a 3D imaging tool to assist rhinoplasty surgeons; the system gives surgeons the difference in measurements needed to change the patient's nose [15]. ZBrush, 3D Vectra, e-stetix 3D, and other sculpting software present the plastic effect by applying mesh deformation to the 3D model and simulating surgery on the 3D face model; they can simulate body shaping, nose augmentation, breast augmentation, and face shaping. However, such software cannot measure quantitatively or obtain key distance information, such as eye distance, adjusted area, and filling volume; moreover, the 3D reconstruction usually requires the assistance of a large 3D scanner [16,17]. Jia Li et al. proposed an optical technique that acquires dynamic data of the facial muscles with a colored encoded stripe pattern in order to objectively evaluate the effect of surgery [18]. Although flaps are technically difficult, flap harvesting and in-setting times are generally improved with preoperative modeling, and studies have reported higher patient satisfaction with preoperative image projections [19,20]. Models focused on facial plastic and reconstructive procedures have generally shown significant patient satisfaction and reductions in intraoperative errors [21,22].
Nevertheless, most of these studies, while giving examples of the implementation of components of this vision, acknowledged numerous applications that remain to be realized. First, such systems often rely on a large-scale 3D scanner, which is expensive. Second, to present the effect of the shaping process, the deformation algorithm should comply with mechanical principles so that the deformation effect looks authentic. Third, quantitative facial analysis and evaluation are needed to provide doctors with objective and quantitative morphological indicators.
3D imaging is taking surgeons to a new level of communication with patients, surgical planning, and outcome evaluation. It allows patients to see 3D images from multiple views. This paper offers surgeons a structured light scanning system to deal with the problems encountered in plastic surgery and improve the success rate of surgery. Structured light is a potentially objective way to create more accurate information about the effect of PS [23], which can help in the evaluation of the operations.
This paper aims to edit the 3D face in order to enhance its attractiveness and proposes a facial plastic surgery simulation system. In a typical workflow, 3D data are obtained from the experimental subject preoperatively, and postoperative data are acquired to assess the outcome. The modifications are applied to the main 3D facial regions, such as the eyes, nose, mouth, chin, cheeks, and profile.
This study thus contributes to the literature from two perspectives. First, a new system based on a structured light scanner is proposed, which uses free deformation to simulate the plastic effect and guide surgery. The system is small, can be connected to mobile phones, is highly interactive, supports quantitative evaluation, and is practical enough to assist in the preoperative design of facial plastic surgery. Second, the system can measure three-dimensional facial plastic areas with high accuracy and negligible error; it can assist in injection and filling procedures, in planning the plastic area before surgery, and in estimating the amount of filling material.
2. Materials and Methods
We propose a novel 3D facial plastic surgery simulation system to simulate and guide the shaping effect. First, a depth camera based on structured light coding is employed for facial 3D data acquisition, from which point cloud data of multiple facial perspectives are obtained. Next, the cascade regression tree algorithm is employed to extract the esthetic key points of the face model and to compute the facial features composed of these key points, such as the nose, chin, and eyes. Quantitative facial esthetic indexes are given to guide doctors in simulating PS. Afterward, we apply a face mesh metamorphosis based on finite elements and design several morphing operators, including augmentation, cutting, and lacerating. Finally, the regional deformation is detected, and the operation effect is quantitatively evaluated by registering the 3D scanning models acquired before and after the operation. The workflow is shown in Figure 1a.
2.1. 3D Face Data Acquisition: Optical Measurement Strategy
The acquisition of 3D data is the key to the system. Research shows that a single depth camera can collect depth information from multiple perspectives to realize 3D face reconstruction. Among optical 3D measurement technologies, structured light coding has the advantages of high accuracy, fast measurement speed, and low cost. Its basic principle is as follows: the projector projects a structured light coding pattern onto the measured object; the camera, at another angle, captures the deformed image modulated by the object's surface; the pose relationship between the projector and the camera is obtained by calibration; and the target point cloud is finally computed by triangulation [24,25].
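For illustration, a minimal NumPy sketch of the triangulation step is given below. It assumes the camera and the projector (treated as an inverse camera) have already been calibrated to 3 × 4 projection matrices and that a correspondence between a camera pixel and a projector pattern coordinate is known; the function name and the linear (DLT) formulation are illustrative rather than the exact implementation used in our system.

```python
import numpy as np

def triangulate_point(P_cam, P_proj, uv_cam, uv_proj):
    """Linear (DLT) triangulation of one correspondence.

    P_cam, P_proj : 3x4 projection matrices of the camera and the projector
                    (treated as an inverse camera), obtained from calibration.
    uv_cam, uv_proj : (u, v) coordinates of the same surface point in the
                    camera image and in the projected pattern.
    Returns the 3D point in the world frame.
    """
    u1, v1 = uv_cam
    u2, v2 = uv_proj
    # Each view contributes two linear equations A x = 0 on the homogeneous 3D point x.
    A = np.stack([
        u1 * P_cam[2] - P_cam[0],
        v1 * P_cam[2] - P_cam[1],
        u2 * P_proj[2] - P_proj[0],
        v2 * P_proj[2] - P_proj[1],
    ])
    # Least-squares solution: right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```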
Considering the size, accuracy, and cost of the system, a 3D face scanning module is constructed based on the principle of near-infrared high-frequency binary coded structured light; its physical dimensions are 101 × 26 × 13 mm. The camera contains two near-infrared structured light lenses, two infrared sensors (1280 × 800), and one VCSEL infrared projector (850 nm). In addition, a color sensor is needed: the infrared sensors are responsible for depth detection, the color sensor (in this paper, a mobile phone camera) is responsible for color extraction and recording details, and the laser projector is responsible for scanning. The module combines high-resolution 3D depth data and image texture information to build a sub-millimeter 3D digital model.
Through the parameter calibration of the camera module, the spatial parameters of the calibration plane and the projection reference system are obtained; the internal and external parameters of the whole system are then calibrated by projecting the calibration pattern with the projector. The structured light coding adopts high-frequency binary fringes: local coding is realized through high-frequency binary fringe displacement, and global phase unwrapping is realized based on Gray code. In addition, high-frequency binary fringes exploit the binary output characteristics of the DLP projector and are more robust. The 3D facial model has submillimeter accuracy, a single scan takes 3 s, and the texture resolution is 6.0 MP. During scanning, the subject is 0.3~0.4 m from the camera.
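The following simplified sketch illustrates how the Gray-code images can be combined with the wrapped phase of the high-frequency fringes to obtain the absolute phase. It assumes the Gray-code images have already been binarized and the wrapped phase already computed; the variable names are illustrative, and the exact decoding used in our system may differ.

```python
import numpy as np

def gray_to_fringe_order(gray_bits):
    """Decode binarized Gray-code images (shape: n_bits x H x W, values 0/1)
    into the integer fringe order k of each pixel."""
    binary = np.zeros_like(gray_bits)
    binary[0] = gray_bits[0]
    for i in range(1, gray_bits.shape[0]):
        # Standard Gray-to-binary conversion: b_i = b_{i-1} XOR g_i
        binary[i] = np.bitwise_xor(binary[i - 1], gray_bits[i])
    # Assemble the bits (MSB first) into the fringe order.
    weights = 2 ** np.arange(gray_bits.shape[0] - 1, -1, -1)
    return np.tensordot(weights, binary, axes=1)

def unwrap_phase(wrapped_phase, fringe_order):
    """Absolute phase = wrapped phase + 2*pi * fringe order."""
    return wrapped_phase + 2.0 * np.pi * fringe_order
```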
2.2. 3D Face Reconstruction
The flow chart of the facial 3D reconstruction algorithm is shown in Figure 1b. The first step is the registration and fusion of multi-angle 3D scanning data. Firstly, the 3D coordinates of the matched facial feature points across angles are obtained from the face feature point detection results, the initial registration parameters of the multiple groups of 3D data are calculated, and ICP is then introduced for locally optimal registration. Normals are computed for the registered 3D point cloud, and the overlapping points are optimized and fused using the angle between the point normal and the camera optical axis as a weight, so that high-quality point cloud data are retained [26]. The texture data captured from multiple angles show significant brightness and color differences caused by the shooting angle and ambient light; color equalization of the stitching area together with the normal weights is used for global texture correction.
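As an illustration of the local registration step above, the sketch below implements a basic point-to-point ICP with SciPy and NumPy (nearest-neighbor correspondences plus a Kabsch/SVD rigid fit). It assumes a coarse initial alignment from the matched feature points has already been applied, and it omits the normal-weighted fusion described above.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(source, target, iterations=30, tol=1e-6):
    """Refine the registration of `source` (N x 3) onto `target` (M x 3)."""
    tree = cKDTree(target)
    src = source.copy()
    prev_err = np.inf
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        dist, idx = tree.query(src)            # closest-point correspondences
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total, src
```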
The second step is the gridding and boundary correction of the point cloud data. Firstly, the 3D data obtained from each angle are filtered to remove obvious noise and outlier points, and a curvature-based down-sampling method is used to reduce the point cloud density and speed up subsequent modeling. For the resampled point cloud, the normals are calculated and normalized, and the Poisson reconstruction method is used to obtain a watertight 3D mesh model. The face region and the triangular mesh boundary are detected based on the face detection template, and the boundary of the 3D face model is smoothed under a curvature constraint.
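A condensed sketch of this gridding step is shown below using the Open3D library; the library choice, the file name, and the parameter values are assumptions made for illustration (our implementation is not tied to Open3D), and voxel down-sampling stands in for the curvature-based down-sampling described above.

```python
import numpy as np
import open3d as o3d

points = np.load("fused_face_points.npy")        # hypothetical fused point cloud, (N, 3), mm
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)

# Remove obvious noise and outlier points.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Down-sample (voxel grid here; the paper uses a curvature-based scheme).
pcd = pcd.voxel_down_sample(voxel_size=0.5)

# Estimate and orient normals, then run Poisson surface reconstruction.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=2.0, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(15)
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)
```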
Figure 1a shows a frontal face point cloud model obtained by coordinate transformation of the depth image collected by the system. The 3D reconstruction of the actual face displays very well, and the more complex local features of the face, such as the nose, eyes, and mouth, are also very clear. The algorithm is fast: a single 3D face reconstruction takes 0.3 s.
2.3. Feature-Based 3D Facial Metamorphosis
The 3D facial PS simulation system can simulate surgical effects such as facial filling and fine-tuning of facial features using mesh metamorphosis. In the 3D face metamorphosis procedure, mouse interaction is used to specify the stretching direction of the mesh, and a Gaussian distribution is used to control the strength of the mesh deformation: the deformation strength is largest near the mouse operation point and falls off for surrounding points according to the Gaussian distribution, so the closer a point is to the center, the greater the force, and the farther away, the smaller the force. At the same time, factors such as the skin elastic modulus are considered to achieve reasonable deformation control under mouse dragging.
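A minimal sketch of the Gaussian falloff used to weight the deformation around the picked point is given below; the falloff radius and the single scalar standing in for the skin elastic modulus are illustrative simplifications.

```python
import numpy as np

def gaussian_drag(vertices, center, drag_vector, sigma=8.0, elasticity=1.0):
    """Displace mesh vertices around `center` with Gaussian falloff.

    vertices    : (N, 3) vertex positions of the face mesh.
    center      : (3,)  picked point under the mouse.
    drag_vector : (3,)  stretch direction and magnitude from the mouse drag.
    sigma       : falloff radius (same units as the mesh, e.g. mm).
    elasticity  : scalar stand-in for the skin elastic modulus.
    """
    d2 = np.sum((vertices - center) ** 2, axis=1)
    weights = np.exp(-d2 / (2.0 * sigma ** 2)) * elasticity
    return vertices + weights[:, None] * drag_vector
```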
It is worth noting that the adjustment area must be obtained before deformation. In this paper, the bounding box method is used to pick the 3D face model with the mouse. Three-dimensional facial metamorphosis includes ensuring the water tightness of the 3D facial mesh, picking the 3D face model, selecting the deformable driving model, establishing the mapping relationship, and solving the deformation governing equation to deform the mesh; the flow chart of 3D facial metamorphosis is shown in Figure 1c.
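As an illustration of the bounding-box pick test, the following sketch implements the standard slab method; it assumes the mouse position has already been converted into a world-space ray, which is not shown here.

```python
import numpy as np

def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: does the ray origin + t*direction (t >= 0) hit the box?"""
    inv_dir = 1.0 / np.where(direction == 0.0, 1e-12, direction)
    t1 = (box_min - origin) * inv_dir
    t2 = (box_max - origin) * inv_dir
    t_near = np.max(np.minimum(t1, t2))
    t_far = np.min(np.maximum(t1, t2))
    return t_far >= max(t_near, 0.0)
```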
In this section, we first extract the 3D facial esthetic key points and then measure the geometric features formed by these key points. Combined with the beauty canons, the system can guide doctors in simulating PS and check whether the plastic effect meets these canons.
2.3.1. Extraction and Measurement of Facial Esthetic Key Points
The extraction of facial key points is the basis of esthetic evaluation in a 3D plastic simulation system. In this paper, face key points are extracted both interactively and with the Dlib face library.
The points to be marked can be selected by picking on the face. Because the texture image of the face model is an RGB color image, a point can be shown on the face by modifying the RGB value at the marked position; the chosen RGB value usually differs significantly from the original value, which is equivalent to marking the face.
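A small sketch of this marking step is shown below; it assumes the picked vertex's normalized texture coordinates (u, v) are known and uses an illustrative top-left image origin convention.

```python
import numpy as np

def mark_point_on_texture(texture, uv, radius=3, color=(255, 0, 0)):
    """Paint a small disc of `color` on the RGB texture at normalized (u, v)."""
    h, w = texture.shape[:2]
    cx, cy = int(uv[0] * (w - 1)), int((1.0 - uv[1]) * (h - 1))
    ys, xs = np.ogrid[:h, :w]
    mask = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    texture[mask] = color          # overwrite the RGB values around the mark
    return texture
```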
The key points of the face model are extracted by the cascade regression tree algorithm in the Dlib library. The algorithm builds a two-layer regression model with the iterative formula

$\hat{S}^{(t+1)} = \hat{S}^{(t)} + r_t\big(I, \hat{S}^{(t)}\big)$    (1)

where $\hat{S}^{(t)}$ is the vector expression of the key points predicted by the $t$-th layer, $t = 1, 2, \ldots$; $r_t$ is the regressor of the $t$-th layer; and $I$ represents the trained face model.
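A minimal usage sketch of the Dlib predictor on the texture image is given below; the image path is hypothetical, and the model file name is the standard pre-trained 68-landmark model distributed with Dlib. In our pipeline, the resulting 2D landmarks would then be mapped back onto the 3D mesh, for example through the texture coordinates.

```python
import dlib
import cv2

detector = dlib.get_frontal_face_detector()
# Standard pre-trained 68-landmark model distributed with Dlib.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

image = cv2.imread("face_texture.png")                 # hypothetical texture image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for face in detector(gray):
    shape = predictor(gray, face)
    # 68 (x, y) landmark positions on the texture image.
    landmarks = [(p.x, p.y) for p in shape.parts()]
```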
According to the geometric characteristics of the face, 3D measurement is divided into straight-line length measurement, vertical length measurement, and spatial angle measurement [27,28,29]. For the straight-line length, the vector between two key points is projected onto the corresponding horizontal axis and its length is calculated; for the vertical length, the vector between the two points is projected in the vertical direction and its length is calculated. The length is given by Equation (2):

$L = \left| \vec{v} \right| = \sqrt{x^2 + y^2 + z^2}$    (2)

where $\vec{v}$ is the projection vector of the geometric feature in the horizontal or vertical direction, and $x$, $y$, $z$ are the coordinates of the projection vector.
Since the included angle formed on the 3D facial model can be acute or obtuse, it is usually reported as an acute angle, calculated by Equation (3):

$\theta = \arccos \dfrac{\left| \vec{v}_1 \cdot \vec{v}_2 \right|}{\left| \vec{v}_1 \right| \left| \vec{v}_2 \right|}$    (3)

where $\vec{v}_1$ is the projection vector above and $\vec{v}_2$ is another projection vector of the geometric feature in the horizontal or vertical direction. Through the established 3D esthetic model, quantitative values of several facial esthetic indexes can be given before surgery. At the same time, by registering the 3D scans taken before and after surgery and comparing the regional deformation with the expected indexes, the surgical effect can be evaluated quantitatively.
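A small sketch of these measurements on 3D key points follows; the key-point coordinates are illustrative, and the choice of axis 0 for horizontal and axis 1 for vertical mirrors Equations (2) and (3) under an assumed coordinate convention.

```python
import numpy as np

def projected_length(p1, p2, axis):
    """Straight-line / vertical length: project the vector p2 - p1 onto the
    horizontal (axis=0) or vertical (axis=1) direction and take its norm."""
    v = np.zeros(3)
    v[axis] = (p2 - p1)[axis]
    return np.linalg.norm(v)

def acute_angle(v1, v2):
    """Spatial angle between two feature vectors, reported as an acute angle."""
    cos_a = abs(np.dot(v1, v2)) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Example: inter-pupillary distance from two illustrative key points (mm).
eye_l, eye_r = np.array([-32.0, 5.0, 10.0]), np.array([30.5, 4.8, 9.7])
print(projected_length(eye_l, eye_r, axis=0))   # horizontal length
```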
In the field of PS, the most familiar standards for evaluating faces are facial symmetry and the golden section [13,30].
2.3.2. Mesh Deformation of 3D Facial Model Based on Finite Element
The 3D plastic surgery simulation system can use mesh deformation to simulate the surgical effects of microplastic surgery, such as facial filling and fine-tuning of facial features. Since the face contains a large amount of soft tissue, the algorithm should be able to simulate the deformation of facial soft tissue; in addition, the deformation should be accurate enough for precise mesh deformation. Based on these two points, this paper chose the finite element method to deform the mesh. Firstly, an appropriate and stable driving model is derived from the morphological characteristics and tissue structure of the model. Then, a mapping relationship is established between the driving model and its mesh. Finally, the deformation governing equation is solved so that the driving model drives the mesh to realize the deformation [31,32].
The key point of the deformation algorithm is how to establish a robust mapping relationship. The connection between the mesh and the driving model can be summarized in two points: starting from each vertex $v$ of the driving model, find its corresponding point $v'$ on the base mesh and calculate the displacement vector between $v$ and $v'$; the base-mesh vertex nearest to the driving-model vertex $v$ is denoted $q$. When the simplification ratio of the driving model is 1, the vertexes of the driving model correspond one-to-one with the vertexes of the mesh, and no correspondence needs to be calculated; when the simplification ratio is not 1, the vertex $v$ and its associated position on the base mesh are solved as follows:
Firstly, find the mesh face on the base mesh that is closest to the driving-model vertex $v$; the corresponding point of $v$ can then be expressed as a linear combination of the three vertexes of this triangular face $f$. Considering the time complexity, this paper first compares distances one by one to find the base-mesh vertex $q$ nearest to $v$. Then, through the topological relationship of the base mesh, all triangular faces connected to $q$ are found, and the distance between $v$ and each triangle is calculated; the triangle with the smallest distance is $f$.
Assuming that $p_1$, $p_2$, and $p_3$ are the three vertexes of $f$, the perpendicular projection point from $v$ onto $f$ is $p$, and $p$ satisfies the following equations:

$p = \alpha p_1 + \beta p_2 + \gamma p_3, \qquad \alpha + \beta + \gamma = 1,$
$(v - p) \cdot (p_2 - p_1) = 0, \qquad (v - p) \cdot (p_3 - p_1) = 0$

The values of $\alpha$, $\beta$, and $\gamma$ can be calculated by solving the above four formulas, and the position of point $p$ is obtained at the same time. Through this step, the mapping relationship between the driving model and the triangular mesh is established, and the shape of the corresponding mesh can then be changed by controlling the driving model; that is, mesh deformation is realized.
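The sketch below illustrates this mapping step numerically: it projects a driving-model vertex onto the plane of its nearest triangle and recovers the coefficients α, β, and γ. The function name is illustrative.

```python
import numpy as np

def project_to_triangle(v, p1, p2, p3):
    """Perpendicular projection of vertex v onto the plane of triangle
    (p1, p2, p3); returns the foot point p and coefficients (a, b, c)
    with p = a*p1 + b*p2 + c*p3 and a + b + c = 1."""
    e1, e2 = p2 - p1, p3 - p1
    n = np.cross(e1, e2)
    n = n / np.linalg.norm(n)
    p = v - np.dot(v - p1, n) * n          # foot of the perpendicular
    # Coefficients of p in the triangle (normal equations in the plane).
    d = p - p1
    m = np.array([[e1 @ e1, e1 @ e2], [e1 @ e2, e2 @ e2]])
    b, c = np.linalg.solve(m, np.array([d @ e1, d @ e2]))
    a = 1.0 - b - c
    return p, (a, b, c)
```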
2.4. Shaping Area Measurement
The face is an irregular surface. The area to be shaped can be marked with the generated feature line, and the area of the marked region can then be calculated. First, the face data in the marked region are sliced along a coordinate direction; then the distances between adjacent points in each slice are computed and the corresponding slice curves are constructed. Finally, the slice curves are fully expanded, and the mathematical model of the three-dimensional shaping area is established. The specific steps are as follows (a simplified numerical sketch is given at the end of this subsection):
The face data are sliced along the direction of a coordinate axis. The slice spacing is taken as $d$, and the number of slices is set as $n$. The smaller the value of $d$, the more accurate the obtained area, but the longer the calculation time.
Connect the adjacent points on each slice, calculate the length of each line segment, connect and compute the segments in turn, and finally take the sum of the segment lengths, where $m$ is the number of data points on the slice layer.
Let the endpoint coordinates of a segment on the current slice be $P_1$ and $P_2$; the area between adjacent slices is then accumulated segment by segment from these endpoints and their perpendicular feet on the adjacent slice curve. The accuracy of this approximation is related to the surface complexity and the data density of the face mesh.
Face area measurement is divided into three situations:
(1) Perpendiculars are dropped from the two endpoints of the segment onto the adjacent slice curve, and both perpendicular feet fall on the corresponding segment of that curve; the band between the two segments is then computed as a trapezoid from the segment length and the two perpendicular distances.
(2) Perpendiculars are dropped from the two endpoints of the segment onto the adjacent slice curve, but a perpendicular foot falls on the extension line of the corresponding segment; the band area is then computed from the perpendicular distances after clipping to the segment.
(3) Calculate the sum of the above areas; S is the actual area of face shaping.
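The sketch below illustrates the slice-and-sum principle in simplified form: the marked region is cut into slices along one axis, each slice is ordered into a polyline, and the band between adjacent slices is accumulated as trapezoids. It is an illustration of the idea rather than the exact case analysis above (for example, resampling the polylines to a common length is a simplification), and the axis and spacing parameters are illustrative.

```python
import numpy as np

def shaping_area(points, slice_axis=1, d=1.0):
    """Approximate the surface area of a marked facial region.

    points     : (N, 3) vertices inside the marked region.
    slice_axis : axis along which the region is sliced (e.g. 1 = vertical).
    d          : slice spacing; smaller d -> more accurate but slower.
    """
    lo, hi = points[:, slice_axis].min(), points[:, slice_axis].max()
    edges = np.arange(lo, hi + d, d)
    # Build one polyline per slice, ordered along a second axis.
    order_axis = 0 if slice_axis != 0 else 2
    slices = []
    for a, b in zip(edges[:-1], edges[1:]):
        sel = points[(points[:, slice_axis] >= a) & (points[:, slice_axis] < b)]
        if len(sel) >= 2:
            slices.append(sel[np.argsort(sel[:, order_axis])])

    area = 0.0
    for cur, nxt in zip(slices[:-1], slices[1:]):
        # Resample both polylines to the same number of samples, then sum
        # the trapezoids formed by corresponding segments.
        n = min(len(cur), len(nxt))
        idx_c = np.linspace(0, len(cur) - 1, n).astype(int)
        idx_n = np.linspace(0, len(nxt) - 1, n).astype(int)
        c, t = cur[idx_c], nxt[idx_n]
        seg = np.linalg.norm(np.diff(c, axis=0), axis=1)   # segment lengths
        gap = np.linalg.norm(c - t, axis=1)                # slice-to-slice distances
        area += np.sum(0.5 * (gap[:-1] + gap[1:]) * seg)
    return area
```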
3. Results
Some interactive simulations on a 3D face model are shown in this section. The test cases were chosen from the faces of our research staff, and different target features were considered in the various cases, such as the eyes, eyebrows, nose, mouth, chin, and cheeks.
3.1. Results of 3D Facial Reconstruction and Measurement
To verify the imaging accuracy of the face model produced by the system, 10 subjects were reconstructed in 3D, their geometric features and areas were measured, and the results were compared with manual measurements. Considering the actual influence of the concave and convex facial surface on real measurements, and combined with the relevant esthetic theory, feature points with obvious locations that are not easily affected by deformation were selected for marking, such as the nose tip, nose root, and eye kernel.
An example of the system's geometric feature measurement for the parts of interest to doctors is shown in Figure 2a, which includes general linear measurements, special-position linear measurements, and angle measurements of the face. The system shows the results intuitively and clearly. The values measured by the system were compared with the actual values measured by plastic surgeons to determine the measurement accuracy. Taking the "three courtyards and five eyes" proportions as an example, the plastic surgeon measured with a soft ruler laid against the face. Figure 2c shows the errors between the system measurements and the plastic surgeons' measurements; the average error is about 0.458 mm.
Take the nose as an example for area measurement. The doctor first laid fluorescent powder on the contour of the nose and then applied the test paper to the nose for data collection, lighting, and display of the fluorescent marks (Figure 2b); the calculated area was 4.42 mm². The software calculates the area of the nose as 3.69 mm² after the overall nose region is selected with the interactive arbitrary measuring tool. The areas of key parts such as the eyes, eyebrows, nose, and cheekbones of the 10 subjects were measured with the same method. Each part was measured 10 times by one operator, and the average error is 0.65 mm² (Figure 2d). Clearly, because the shape of the face is irregular, the error of area measurement is larger than that of geometric feature measurement. However, the results can still provide a reference for microplastic surgery and improve its success rate.
In addition, the technical parameters and scene reconstruction results of our system and a commercial near-infrared 3D imaging system are compared. Table 1 lists the technical parameters, showing that the proposed system has clear advantages over the existing system in resolution, accuracy, scanning time, and size.
3.2. Results of 3D Face Model Evaluation
Firstly, facial esthetic key points are generated based on interactive selection, for example several key points at the right zygomatic bone of the face. In our case, these key points form a symmetrical 3D face map, which can help doctors quickly judge whether the key parts meet the beauty canons and mark the surgical parts of interest.
The system provides relevant esthetic evaluation figures, including the golden face ratio (mainly the "three courtyards and five eyes"), eyebrow and eye evaluation, and nose evaluation. It is worth noting that these esthetic standards are mainly for females, and there are slight differences between females and males. Taking the 3D face shape, eyebrows, eyes, and nose as examples, the evaluation results are shown in Figure 3a.
The evaluation of the model supports the following conclusions: (1) The zygomatic-mandibular face value exceeds the standard value by more than the 3% gradient, which means the mandible of the model is a little large, and it is recommended to reduce it slightly. (2) The comparison of the zygomatic-mandibular face value with the standard value also indicates that the jaw of the face model is a little large, and the face model is estimated to belong to the square face type. (3) The nasofrontal angle is larger than the standard value, which indicates that the nose of the model is relatively straight as a whole, and rhinoplasty is not recommended; however, the nose tip angle differs considerably from the standard value, and pulling down the nose tip is recommended.
The above esthetic evaluation presents the indicators referred to in the beauty canons as intuitive numerical values, which can assist doctors in effectively judging whether a face meets the relevant standards.
3.3. Results of Simulated Deformation
In our design, virtual prediction software was developed, and virtual prediction of the overall appearance was performed for 10 experimental subjects. We present examples of plastic surgery simulation, including the nose, chin, and jaw. To make the 3D shape changes more evident, for each case we present different views of the subject's original scan and the PS simulation.
In Figure 4a, we simulate the effect of chin stretching and reduction surgery: stretching improves the overall spatial level of the chin, while reduction leads to chin depression. After comprehensive analysis, the doctors suggest filling to improve the layering of the chin.
In Figure 4b, we simulate the effect of mandibular plastic surgery. The mandible of the model is plump before deformation; after bone cutting, the width of the cheek is significantly reduced, and the attractiveness of the face is increased compared with before plastic surgery.
In Figure 4c, we simulate the effect of nose augmentation. As shown by the data in Figure 4d, the nasofrontal angle increased by 3.13 degrees due to the simulated tissue filling, the nose tip angle decreased by 12.41 degrees due to the pull-down of the nose tip, and the nasolabial angle decreased slightly due to the change in the overall nasal spatial structure.
Meanwhile, the system is designed to be used by plastic and esthetic surgeons. We asked experts to evaluate the results of the PS against several beauty canons: the golden ratio, facial symmetry, and angular profile [33]. Indeed, the majority of the edited 3D faces comply with these beauty canons. The proposed simulation system is relevant enough to improve the esthetic quality of the 3D face and is applicable to real-world plastic surgery.
4. Conclusions
The 3D quantitative analysis of facial morphology, such as face reconstruction and surgical simulation, is important in plastic surgery (PS). Both can help surgeons design appropriate procedures before performing the surgery. In parallel with the increased demand for plastic surgery, there is a growing desire to verify patient satisfaction and reduce risks during surgery. By performing interactive virtual surgery on 3D face models, doctors can design the operation by comparing the face model before and after virtual surgery to obtain the accurate data required in plastic surgery. This can replace empirical, blind operation, improve operative accuracy, help improve the surgical result, and shorten the operation time. In this paper, we present a new 3D system for guiding plastic surgeons in designing surgical plans. For any selected facial area, the system can suggest new 3D shapes that improve the general harmony of the face and meet the esthetic evaluation criteria. A facial 3D data acquisition system is built on a highly integrated, miniaturized, and convenient depth camera. The 3D face model is picked with the bounding box method, 3D face morphing is controlled with the finite element method, and face feature points are extracted with the Dlib library. The esthetic strategy is used to evaluate the face shape, eyes, eyebrows, and nose.
All corresponding subsystems have been implemented and tested. Several surgical simulation examples are provided to show how the system works on key facial areas such as the nose, mouth, and chin (it is worth noting that our subjects are not real patients; test cases were selected from our research staff). The tests showed that our method is effective: an accurate display of the reconstructed areas after facial micropigmentation, together with corresponding recommendations, can be presented. Despite the small database of faces used, expert evaluation of the results demonstrates that the proposed system can improve the esthetic quality of 3D faces.
The 3D facial virtual PS method proposed in this paper has the following advantages. Firstly, it personalizes, quantifies, and digitizes the plastic surgery design process, gives patients a better and more graphic understanding of the post-operative result in advance, and facilitates communication between doctors and patients. Secondly, as an expansion product, it can assist plastic surgeons in effectively determining whether the face meets the relevant standards. Finally, by performing interactive virtual surgery on the 3D face model, the doctor can design the surgery by comparing the changes in the face model before and after the virtual surgery, thus obtaining the accurate data required for plastic surgery. This can replace blind operation based purely on experience and improve the accuracy of the operation, which is conducive to improving the surgical result and shortening the operation time.
Further work for improving the system will proceed in three directions. It has already been observed that the 3D scanning system requires a good lighting environment while operating and requires users to keep still; blinking or slight movement leads to texture color inconsistency in the 3D facial model. To improve 3D reconstruction performance, we plan to implement image processing algorithms that remove the effects of uneven illumination and motion artifacts. The second important area of improvement is extending the sculpting scope of the system, which currently deals with the esthetic planning of surgery, to also cover cosmetology constraints such as whitening, freckle removal, and spot nevus removal. Finally, a volume calculation algorithm will be developed to improve the accuracy of volume measurement and effectively assist doctors in facial filling surgery.