Article

3D Facial Plastic Surgery Simulation: Based on the Structured Light

Zhi Rao, Shuo Sun, Mingye Li, Xiaoqiang Ji and Jipeng Huang

1 School of Life Science and Technology, Changchun University of Science and Technology, Changchun 130022, China
2 School of Computing and Information Systems, The University of Melbourne, Carlton, VIC 3010, Australia
3 Department of Information Systems and Business Analytics, Royal Melbourne Institute of Technology, Melbourne, VIC 3001, Australia
4 Department of Physics, Northeast Normal University, Changchun 130024, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2023, 13(1), 659; https://doi.org/10.3390/app13010659
Submission received: 19 November 2022 / Revised: 24 December 2022 / Accepted: 28 December 2022 / Published: 3 January 2023
(This article belongs to the Special Issue Advanced Optoelectronic Devices and Systems)

Abstract

The 3D quantitative analysis of facial morphology is important in plastic surgery (PS) because it can help surgeons design appropriate procedures before conducting the surgery. We propose a system to simulate and guide the analysis of the shaping effect, producing a similar but more harmonious face simulation. To this end, first, a depth camera based on structured light coding is employed for facial 3D data acquisition, from which point cloud data of multiple facial perspectives are obtained. Next, the cascade regression tree algorithm is used to extract the esthetic key points of the face model and to calculate the facial features composed of those key points, such as the nose, chin, and eyes. Quantitative facial esthetic indexes are offered to doctors to simulate PS. Afterward, we exploit a face mesh metamorphosis based on finite elements and design several morphing operators, including augmentation, cutting, and lacerating. Finally, the regional deformation is detected, and the operation effect is quantitatively evaluated by registering the 3D scans taken before and after the operation. Tests of the proposed system and simulations of PS operations show that the measurement error of facial geometric features is 0.458 mm and that of area is 0.65 mm². Ratings of the simulation outcomes provided by panels of plastic surgeons confirm that the system is effective: the manipulated 3D faces are deemed more beautiful than the original faces with respect to beauty canons such as facial symmetry and the golden ratio. The proposed algorithm generates realistic visual effects of PS simulation and can thus assist the preoperative planning of facial PS.

1. Introduction

Face plastic surgery (PS) aims to achieve harmony of the facial features by correcting cosmetic deformities. PS enjoys increasing popularity across countries and social groups. However, traditional facial PS depends only on the plastic surgeon's experience and personal esthetics for surgical design, which makes the outcome unpredictable and increases surgical risk. The popularity of PS has also triggered new studies in face recognition that consider facial differences due to these procedures [1,2].
Several systems have been proposed for presenting to patients and surgeons the possible outcomes of a surgical procedure [3,4,5,6,7]. Compared with 2D image acquisition, 3D scanning is more efficient, and because the 3D model is designed and repaired according to the subject's 3D data, it has the advantages of visibility, quantitative analysis, and faithful reflection of 3D structure. Such models offer an added dimension to the understanding of the presurgical map of a patient's anatomy and provide surgeons with abundant preoperative information. As a result, their application in facial plastic surgery has expanded.
When facial data are collected with a 3D camera, the imaging can yield sub-millimeter 3D data [8]. The surgeon can thus perform virtual surgery on these data; by incorporating the behavior of the underlying and surrounding tissues, the simulation can indicate which structures are at greatest risk.
Researchers and practitioners have developed many systems for cosmetic surgery simulation, which simulate operations by deforming the mesh patches of a 3D model in a 3D scene [9]. Lubbers et al. evaluated the handling of the 3dMD system in matters of data acquisition and analysis; they found the system reliable for evaluating the facial surface and a source of new opportunities in PS [10]. Michael et al. used image-guided surgery in the form of intraoperative navigation performed against an idealized virtual image; the accuracy of the surgical reconstruction could then be confirmed using modern portable intraoperative CT scanners [11]. Axis Three's face system applies structured light technology to create 3D data and supplies 3D simulation software based on physics-driven tissue typing for use in esthetic surgery [12,13]. Bottino et al. presented a 3D system for the esthetic planning of face PS that follows a "learning by examples" approach: by constructing a database of harmonious faces, it can suggest changes that approach the shape of similar but more harmonious faces [14]. AlRashidi et al. presented a rule-based adjustment algorithm and a 3D imaging tool to assist rhinoplasty surgeons by giving them the difference in measurements needed to change the patient's nose [15]. ZBrush, 3D Vectra, e-stetix 3D, and other sculpting software present plastic effects by applying mesh deformation to a 3D face model, simulating body shaping, nose augmentation, breast augmentation, and face shaping. However, such software cannot measure quantitatively or obtain key distance information, such as eye distance, adjusted area, and filling volume; moreover, their 3D reconstruction usually requires the assistance of a large 3D scanner [16,17]. Li et al. proposed an optical technique to acquire dynamic data of the facial muscles with a colored encoded stripe pattern in order to objectively evaluate the effect of surgery [18]. Although flaps are technically difficult, flap harvesting and in-setting times generally improve with preoperative modeling, and studies have reported higher patient satisfaction with preoperative image projections [19,20]. Models focused on facial plastic and reconstructive procedures have generally shown significant patient satisfaction and reductions in intraoperative errors [21,22].
Nevertheless, most of these studies, although giving examples of components of this vision, acknowledged numerous applications still to be realized. First, such systems often rely on a large-scale 3D scanner, which is expensive. Second, to present the effect of the shaping process, the deformation algorithm employed should comply with mechanical principles and enhance the authenticity of the deformation effect. Third, quantitative facial analysis and evaluation should provide doctors with objective and quantitative morphological indicators.
3D imaging is taking surgeons to a new level of communication with patients, surgical planning, and outcome evaluation, and it allows patients to see 3D images from multiple views. This paper offers surgeons a structured light scanning system to deal with the problems encountered in plastic surgery and improve the success rate of surgery. Structured light is a potentially objective way to create more accurate information about the effect of PS [23], which could help in evaluating the operations.
This paper aims to edit the 3D face to enhance its attractiveness and proposes a facial plastic surgery simulation system. In a typical workflow, 3D data are obtained from the subject preoperatively, and a postoperative scan is taken to assess the outcome. The modifications are applied to the main 3D facial regions, such as the eyes, nose, mouth, chin, cheeks, and profile.
This study thus contributes to the literature from two perspectives. First, it introduces a new system based on a structured light scanner that uses free-form deformation to simulate the plastic effect and guide surgery. The system is small, connects to mobile phones, offers good interactivity and quantitative evaluation, and is practical enough to assist the preoperative design of facial plastic surgery. Second, the system can measure three-dimensional facial plastic areas with high accuracy and small error, assisting injection and filling plastic surgery, planning the plastic area before surgery, and estimating the amount of filling material.

2. Materials and Methods

We propose a novel 3D facial plastic surgery simulation system to simulate and guide the shaping effect. First, the depth camera based on structured light coding is employed for facial 3D data acquisition, from which the point cloud data of multiple facial perspectives can be obtained. Next, the cascade regression tree algorithm is employed to extract the esthetic key points of the face model and to calculate the facial features composed of the key points, such as the nose, chin, and eyes. Quantitative facial esthetic indexes are given to guide doctors in simulating PS. Afterward, we exploit a face mesh metamorphosis based on finite elements. We design several morphing operators, including augmentation, cutting, and lacerating. Finally, the regional deformation is detected, and the operation effect is quantitatively evaluated by registering the 3D scanning model before and after the operation. The workflow is shown in Figure 1a.

2.1. 3D Face Data Acquisition: Optical Measurement Strategy

The acquisition of 3D data is the key to the system. Research shows that a single depth camera can collect depth information from multiple perspectives to realize 3D face reconstruction. Among optical 3D measurement technologies, structured light coding has the advantages of high accuracy, fast measurement speed, and low cost. Its basic principle is as follows: a projector casts a structured light coding pattern onto the measured object; a camera captures, from another angle, the deformed pattern modulated by the object's surface; the pose relationship between the projector and the camera is obtained by calibration; and finally, the target point cloud is computed by triangulation [24,25].
Considering the size, accuracy, and cost of the system, a 3D face scanning system based on the principle of near-infrared high-frequency binary coded structured light was constructed; its physical dimensions are 101 × 26 × 13 mm. The camera contains two near-infrared structured light lenses, two infrared sensors (1280 × 800), and one VCSEL infrared projector (850 nm). In addition, a color sensor is needed: the infrared sensors are responsible for depth detection, the color sensor (in this paper, a mobile phone camera) is responsible for color extraction and recording details, and the laser projector is responsible for scanning. The system combines high-resolution 3D depth data with image texture information to establish a sub-millimeter 3D digital model.
Through parameter calibration of the camera module, the spatial parameters of the calibration plane and the projection reference system are obtained, and the internal and external parameters of the whole system are then calibrated by having the projector project a calibration pattern. The structured light coding adopts high-frequency binary fringes: local coding is realized through high-frequency binary fringe displacement, and global phase unwrapping is based on Gray code. High-frequency binary fringes also exploit the binary output characteristics of the DLP projector and offer stronger robustness. The resulting 3D facial model has submillimeter accuracy, a single scan takes 3 s, and the texture resolution is 6.0 MP. During scanning, the subject is 0.3~0.4 m from the camera.
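As an illustration of the triangulation step, the sketch below converts a decoded disparity map from the two infrared views into metric depth and back-projects it into a point cloud. It is a minimal sketch assuming an ideal rectified pinhole model; the function names and parameters are ours, not the camera vendor's API.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Stereo triangulation, Z = f * B / d (hypothetical helper).

    disparity_px : disparity map between the two IR views (pixels)
    focal_px     : focal length from calibration (pixels)
    baseline_m   : distance between the two sensors (meters)
    """
    disparity_px = np.asarray(disparity_px, dtype=np.float64)
    depth = np.zeros_like(disparity_px)
    valid = disparity_px > 0                      # zero disparity = no match
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

def backproject(depth, fx, fy, cx, cy):
    """Lift a depth map to a 3D point cloud using the pinhole intrinsics."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)       # (h, w, 3) points in meters
```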

2.2. 3D Face Reconstruction

The flow chart of the facial 3D reconstruction algorithm is shown in Figure 1b. The first step is registration and fusion of the multi-angle 3D scanning data. The 3D coordinates of matched facial feature points across views are obtained from the face feature point detection results, initial registration parameters for the multiple groups of 3D data are calculated, and ICP is then applied for locally optimal registration. Normals are computed for the registered 3D point cloud, and the overlapping point clouds are optimized and fused using the angle between each point's normal and the camera's optical axis as a weight, retaining high-quality points [26]. Texture data captured from different angles show significant brightness and color differences caused by the shooting angle and ambient light; color equalization of the stitching area, together with the normal-based weights, is used for global texture correction.
The second step is gridding and boundary correction of the point cloud data. The 3D data obtained from each angle are first filtered to remove significant noise and outlier points, and a curvature-based down-sampling method is then used to reduce the point cloud density and speed up the subsequent 3D modeling. For the resampled point cloud, the normals are calculated and normalized, and the Poisson reconstruction method is used to obtain a watertight 3D mesh model. The face region and triangular mesh boundary are detected with the face detection template, and boundary smoothing of the 3D face model is realized through a curvature constraint.
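This registration-fusion-meshing chain can be prototyped with an off-the-shelf library. The sketch below uses Open3D as a stand-in: point-to-plane ICP for fine registration, statistical outlier removal and voxel down-sampling in place of the paper's normal-weighted fusion and curvature-based down-sampling, and Poisson reconstruction for the watertight mesh. All parameter values are illustrative assumptions, not the system's tuned settings.

```python
import open3d as o3d

def fuse_views(views, init_poses, voxel=0.001):
    """Register multi-view face scans with ICP and mesh them via Poisson.

    views      : list of o3d.geometry.PointCloud, one per scanning angle
    init_poses : rough 4x4 transforms from facial-landmark correspondences
    """
    target = views[0]
    target.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))
    for pcd, init in zip(views[1:], init_poses[1:]):
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))
        reg = o3d.pipelines.registration.registration_icp(
            pcd, target, max_correspondence_distance=0.005, init=init,
            estimation_method=o3d.pipelines.registration.
            TransformationEstimationPointToPlane())
        target += pcd.transform(reg.transformation)  # merge registered view
    # denoise, thin out, and build a watertight mesh
    target, _ = target.remove_statistical_outlier(nb_neighbors=20,
                                                  std_ratio=2.0)
    target = target.voxel_down_sample(voxel)
    target.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        target, depth=9)
    return mesh
```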
Figure 1a shows a frontal face point cloud model obtained by coordinate transformation of the depth image collected by the system. The 3D reconstruction of the actual face displays very well, and locally complex features such as the nose, eyes, and mouth are clearly resolved. Reconstruction is also fast: a single 3D face is reconstructed in 0.3 s.

2.3. Feature-Based 3D Facial Metamorphosis

The 3D facial PS simulation system can simulate surgical effects, such as facial filling and fine-tuning of facial features, using mesh metamorphosis. During 3D face metamorphosis, mouse interaction specifies the direction of the stretching deformation, and a Gaussian distribution controls the strength of the deformation: the strength is largest near the point under the mouse and falls off with distance following the Gaussian profile. At the same time, factors such as the elastic modulus of skin are considered, realizing reasonable deformation control under mouse dragging.
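A minimal sketch of the Gaussian falloff idea follows; the falloff radius and names are ours, and the full system additionally weighs in the skin's elastic modulus.

```python
import numpy as np

def gaussian_drag(vertices, pick_point, drag_vec, sigma=0.01):
    """Move mesh vertices under a mouse drag with Gaussian falloff.

    vertices   : (N, 3) array of mesh vertex positions
    pick_point : (3,) position picked by the mouse on the mesh
    drag_vec   : (3,) displacement entered by dragging the mouse
    sigma      : falloff radius; larger sigma spreads the deformation wider
    """
    dist = np.linalg.norm(vertices - pick_point, axis=1)
    weight = np.exp(-dist**2 / (2.0 * sigma**2))   # 1 at pick point, ->0 far away
    return vertices + weight[:, None] * drag_vec
```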
It is worth noting that the adjustment area must be determined before deformation; in this paper, the bounding box method is used to pick the 3D face model with the mouse. Three-dimensional facial metamorphosis comprises ensuring the water tightness of the 3D facial mesh, picking the 3D face model, selecting the deformable driving model, establishing the mapping relationship, solving the deformation governing equation, and deforming the mesh; the flow chart of 3D facial metamorphosis is shown in Figure 1c.
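The mouse pick against a model's bounding box can be implemented with the standard ray/AABB slab test; in the sketch below, the ray is assumed to come from unprojecting the mouse position, and the helper name is ours.

```python
import numpy as np

def ray_hits_aabb(origin, direction, box_min, box_max, eps=1e-12):
    """Slab test: does the mouse ray intersect a model's bounding box?"""
    direction = np.asarray(direction, dtype=float)
    direction = np.where(np.abs(direction) < eps, eps, direction)  # avoid /0
    t1 = (np.asarray(box_min) - origin) / direction
    t2 = (np.asarray(box_max) - origin) / direction
    t_near = np.max(np.minimum(t1, t2))   # last entry into any slab
    t_far = np.min(np.maximum(t1, t2))    # first exit from any slab
    return t_far >= max(t_near, 0.0)      # hit if the interval is non-empty
```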
In this section, we first extract the 3D facial esthetic key points and then measure the geometric features they compose. Combined with the beauty canons, the system can guide doctors in simulating PS and check whether the plastic effect meets those canons.

2.3.1. Extraction and Measurement of Facial Esthetic Key Points

Facial key point extraction is the basis of esthetic evaluation in a 3D plastic simulation system. In this paper, face key points are extracted both interactively and with the Dlib face library.
The points to be marked can be selected by picking on the face. Because the texture image of the face model is an RGB color image, a mark can be presented on the face by modifying the RGB value at the mark position; the chosen RGB value usually differs significantly from the original value, which is equivalent to marking the face.
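For instance, a landmark could be made visible by overwriting a small patch of texels with a contrasting color; this is a sketch of the idea, not the system's exact routine.

```python
import numpy as np

def mark_landmark(texture, u, v, color=(255, 0, 0), radius=2):
    """Paint a small square of a contrasting color at texel (u, v).

    texture : (H, W, 3) uint8 RGB texture image of the face model
    u, v    : texel coordinates of the picked landmark
    """
    h, w, _ = texture.shape
    r0, r1 = max(v - radius, 0), min(v + radius + 1, h)   # clamp to image
    c0, c1 = max(u - radius, 0), min(u + radius + 1, w)
    texture[r0:r1, c0:c1] = color    # contrasting RGB makes the mark visible
    return texture
```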
The key points of the face model are extracted by the cascade regression tree algorithm in the Dlib database. The algorithm needs to build a double-layer regression model, and the iterative formula is as follows:
$Y^{(n+1)} = Y^{(n)} + \sigma_n(P, Y^{(n)})$ (1)

where $Y^{(n)} = (X_1^{(n)}, X_2^{(n)}, X_3^{(n)}, \ldots, X_P^{(n)})^T$ is the vector of the key points predicted by layer $n$; $\sigma_n$ is the regressor of layer $n$; and $P$ represents the trained face model.
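In practice, the Dlib cascade-regression-tree predictor is typically invoked as below on the face texture image; the pretrained 68-landmark model file is the standard one distributed with Dlib. Lifting the detected 2D landmarks onto the 3D mesh through the texture mapping is the system-specific step and is only noted in a comment.

```python
import dlib
import cv2

# Pretrained 68-point cascade-regression-tree model shipped with Dlib;
# the file name is the standard one from dlib.net, not ours.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("face_texture.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
for rect in detector(gray):
    shape = predictor(gray, rect)        # iterative regression, Equation (1)
    pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    # the 2D landmarks are then lifted to the 3D model via the UV mapping
```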
According to the geometric characteristics of the face, 3D measurement is divided into straight-line length measurement, vertical length measurement, and spatial angle measurement [27,28,29]. For straight-line length measurement, the vector between two key points is projected onto the corresponding horizontal axis and its length is calculated; for vertical length measurement, the vector between the two points is projected in the vertical direction before its length is calculated. The projected length is given by Equation (2).
$q_i = \sqrt{x_i^2 + y_i^2 + z_i^2}$ (2)

where $q_i$ is the length of the projection vector of the geometric feature in the horizontal or vertical direction, and $(x_i, y_i, z_i)$ are the coordinates of that projection vector.
Since an included angle on the 3D facial model can be acute or obtuse, it is conventionally processed as an acute angle and calculated by Equation (3).
$\theta = \arccos \dfrac{q_i \cdot q_j}{|q_i||q_j|}$ (3)

where $q_j$ is another projection vector of the geometric feature in the horizontal or vertical direction. Through the established 3D esthetic model, quantitative values of several facial esthetic indexes can be given before surgery. At the same time, by registering the 3D scans taken before and after surgery and comparing the regional deformation with the expected indexes, the surgical effect can be evaluated quantitatively.
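The two measurements can be written directly from Equations (2) and (3); below is a small sketch with names of our choosing.

```python
import numpy as np

def projected_length(p1, p2, axis):
    """Equation (2): length of the key-point vector p1->p2 projected
    onto a unit axis (horizontal or vertical)."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    return abs(float(np.dot(np.asarray(p2) - np.asarray(p1), axis)))

def acute_angle(qi, qj):
    """Equation (3): angle between two projection vectors, reported
    as the acute version, in degrees."""
    cos = np.dot(qi, qj) / (np.linalg.norm(qi) * np.linalg.norm(qj))
    theta = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return min(theta, 180.0 - theta)      # fold obtuse angles to acute
```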
In the field of PS, the most familiar standards for evaluating faces are facial symmetry and the golden section [13,30].

2.3.2. Mesh Deformation of 3D Facial Model Based on Finite Element

The 3D plastic surgery simulation system uses mesh deformation to simulate the effects of microplastic surgery, such as facial filling and fine-tuning of facial features. Since the face contains a large amount of soft tissue, the algorithm should be able to simulate the deformation state of facial soft tissue, and the accuracy of the deformation must also be addressed. For these two reasons, this paper chose the finite element method to deform the mesh. The algorithm first derives an appropriate, stable driving model from the morphological characteristics and tissue structure of the model, then establishes a mapping relationship between the driving model and its mesh, and finally solves the deformation governing equation so that the driving model drives the mesh to realize the deformation [31,32].
The key point of the deformation algorithm is how to establish a robust mapping relationship. The connection between the mesh and the driving model can be summarized in two steps: starting from each vertex $q$ of the driving model, find the corresponding point $q'$ on the base mesh and calculate the displacement vector $v$ between $q$ and $q'$; the base-mesh vertex nearest to $q$ is denoted $q_S$. When the simplification ratio of the driving model is 1, its vertices correspond one-to-one with the mesh vertices and no correspondence needs to be calculated; when the simplification ratio $d$ satisfies $0 < d < 1$, the corresponding position $q'$ for each driving-model vertex $q$ is solved as follows:
First, find the mesh face $N_S$ on the base mesh that is closest to the driving-model vertex $q$; the corresponding point $q'$ can then be expressed as a linear combination of the three vertices of the triangular face $N_S$. Considering the time complexity, this paper first compares vertices one by one to find the base-mesh vertex $q_S$ nearest to $q$; then, using the topology of the base mesh, all triangular faces connected to $q_S$ are found, the distance from $q$ to each of these faces is calculated, and the face with the smallest distance is taken as $N_S$.
Assuming that $q_1$, $q_2$, and $q_3$ are the three vertices of $N_S$ and $q_p$ is the perpendicular projection of $q$ onto $N_S$, $q_p$ satisfies the following equations:

$q_p = a q_1 + b q_2 + c q_3$ (4)

$a + b + c = 1$ (5)

$(q - q_p) \cdot (q_2 - q_1) = 0$ (6)

$(q - q_p) \cdot (q_3 - q_1) = 0$ (7)
The values of $a$, $b$, and $c$ can be calculated by solving the above four equations, which also yields the position of point $q_p$. Through this step, the mapping relationship between the driving model and the triangular mesh is established, and the shape of the mesh can then be changed by controlling the driving model; that is, mesh deformation is realized.
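Equations (4)-(7) reduce to a tiny linear problem per vertex. Below is a sketch of one way to solve them, computing the perpendicular foot from the triangle normal and then recovering the barycentric weights by least squares.

```python
import numpy as np

def project_to_triangle(q, q1, q2, q3):
    """Solve Equations (4)-(7): perpendicular foot q_p of q on the plane of
    triangle (q1, q2, q3) and its barycentric weights (a, b, c).
    All inputs are length-3 numpy arrays; names mirror the text."""
    n = np.cross(q2 - q1, q3 - q1)                 # triangle normal
    n = n / np.linalg.norm(n)
    qp = q - np.dot(q - q1, n) * n                 # enforces Equations (6)-(7)
    # Equations (4)-(5): qp = a*q1 + b*q2 + c*q3 with a + b + c = 1
    A = np.vstack([np.column_stack([q1, q2, q3]), np.ones(3)])
    abc, *_ = np.linalg.lstsq(A, np.append(qp, 1.0), rcond=None)
    return qp, abc                                 # abc = (a, b, c)
```

With $(a, b, c)$ stored per driving-model vertex, moving the driving model moves each $q_p$ accordingly, which is exactly how the driving model drives the mesh.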

2.4. Shaping Area Measurement

The face is an irregular surface; the region to be shaped can be marked with a generated feature line and its area then calculated. First, the face data in the marked region are sliced along a coordinate direction; then, the distances between adjacent points in each slice are computed and the corresponding slice curve is constructed. Finally, the slice curves are fully developed, and a mathematical model of the three-dimensional shaping area is established. The specific steps are as follows:
The face data are sliced along the direction of a coordinate axis, with slice spacing $d$ and number of slices $m$. The smaller the value of $d$, the more accurate the obtained area, but the longer the calculation time.
Adjacent points on each slice are connected, the length of each segment is calculated, and the segment lengths are summed, where $n$ is the number of points on the slice:

$l_j = \sum_{i=1}^{n-1} \left[ (x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2 + (z_{i+1} - z_i)^2 \right]^{1/2}$ (8)
After the slice curves are developed onto a plane, let the endpoint coordinates of segment $l_j$ be $(x_j, y_j)$ and $(x_j + l_j, y_j)$, respectively; the area between adjacent slices can then be calculated. In the limit $d \to 0$ (with $0 < d$), $l_j = l_{j+1}$ and the strip area reduces to $S = d\,l_j$; in practice, the value of $d$ is chosen according to the surface complexity and the data density of the face mesh.
Face area measurement is divided into three situations:
Perpendiculars are dropped from the two endpoints of segment $l_j$ onto $l_{j+1}$. If both perpendicular feet fall on $l_{j+1}$ (the case $l_{j+1} > l_j$), the area is calculated as:

$S_j = \frac{1}{2} d \left\{ 2 l_j + \left[ (x_{j+1} - x_j)^2 + (y_{j+1} - y_j)^2 - d^2 \right]^{1/2} + \left[ (x_{j+1} + l_{j+1} - x_j - l_j)^2 + (y_{j+1} - y_j)^2 - d^2 \right]^{1/2} \right\}$ (9)
If instead $l_{j+1} < l_j$, only one perpendicular foot falls on the line of $l_{j+1}$, and the area formula becomes:

$S_j = \frac{1}{2} d \left\{ 2 l_j + \left[ (x_{j+1} - x_j)^2 + (y_{j+1} - y_j)^2 - d^2 \right]^{1/2} - \left[ (x_{j+1} + l_{j+1} - x_j - l_j)^2 + (y_{j+1} - y_j)^2 - d^2 \right]^{1/2} \right\}$ (10)
The actual face shaping area $S$ is the sum of the strip areas:

$S = \sum_{j=1}^{m-1} S_j$ (11)

3. Results

This section presents interactive simulations on 3D face models. The test cases were chosen from the faces of our researchers, and different target features were considered in the various cases, such as the eyes, eyebrows, nose, mouth, chin, and cheeks.

3.1. Results of 3D Facial Reconstruction and Measurement

To verify the imaging accuracy of the face models produced by the system, 10 subjects were reconstructed in 3D, their geometric features and areas were measured, and the results were compared with manual measurements. Considering the influence of the concave and convex surfaces of the face model on real measurements, and in combination with the relevant esthetic theory, feature points with distinct locations that are not easily affected by deformation were selected for marking, such as the nose tip, nose root, and eye corners.
An example of the system's geometric feature measurement for the parts of interest to doctors is shown in Figure 2a, including general linear measurements of the face, linear measurements at special positions, and angle measurements; the system displays the results intuitively and clearly. The values measured by the system were compared with the actual values measured by plastic surgeons to determine the measurement accuracy. Taking the "three courtyards and five eyes" as an example, the plastic surgeon measured with a soft ruler held against the face. Figure 2c shows the differences between the system measurements and the plastic surgeons' measurements; the average error is about 0.458 mm.
Take the nose as an example of area measurement. The doctor first spread fluorescent powder along the contour of the nose and then applied test paper to the nose to collect the data, illuminating it to reveal the fluorescent marks (Figure 2b); the area calculated this way was 4.42 mm². The software calculated the area of the nose as 3.69 mm² after the overall nose region was selected with the interactive arbitrary-region measuring tool. The areas of key parts such as the eyes, eyebrows, nose, and cheekbones of the 10 subjects were measured with the same method; each part was measured 10 times by one operator, and the average error is 0.65 mm² (Figure 2d). For facial area measurement, the error is understandably larger than for geometric feature measurement because the shape of the face is irregular. Nevertheless, the measurement results can still provide a reference for microplastic surgery and improve the success rate of plastic surgery.
In addition, the technical parameters and scene reconstruction results of our system were compared with those of commercial near-infrared 3D imaging systems. Table 1 shows the comparison of technical parameters; the system has clear advantages over existing systems in resolution, accuracy, scanning time, and size.

3.2. Results of 3D Face Model Evaluation

First, facial esthetic key points are generated by interactive selection, for example, several key points at the right zygomatic bone of the face. In our case, these key points form a symmetric 3D face, which can help doctors quickly judge whether the key parts meet the beauty canons and mark the surgical parts of interest.
The system provides relevant esthetic evaluation figures, including the golden face ratio (mainly the "three courtyards and five eyes"), eyebrow and eye evaluation, and nose evaluation. It is worth noting that these esthetic standards are mainly for females, and there are slight differences between females and males. Taking the 3D face shape, eyebrows, eyes, and nose as examples, the evaluation results are shown in Figure 3a.
The following conclusions can be drawn from the evaluation of the model: (1) The zygomatic-mandibular face ratio exceeds the standard value by more than the 3% tolerance, which means the mandible of the model is slightly large, and a slight reduction is recommended. (2) The same comparison suggests that the jaw of the face model is somewhat large, and the face model is estimated to be a square face. (3) The nasofrontal angle is larger than the standard value, indicating that the nose is relatively straight overall, so rhinoplasty is not recommended; however, the nasal tip angle differs considerably from the standard value, and pulling down the nasal tip is recommended.
The above esthetic evaluation presents the indicators referenced in the beauty canons as intuitive numerical values, which can help doctors effectively judge whether the face meets the relevant standards.

3.3. Results of Simulated Deformation

In our design, virtual prediction software was developed, and virtual prediction of the overall appearance of the 10 experimental subjects was performed. We present examples of plastic surgery simulation involving the nose, chin, and jaw. To make the 3D shape changes more evident, for each case we present different views of the original patient's scan and of the PS simulation.
In Figure 4a, we simulate the effects of chin stretching and chin reduction surgery: stretching improves the overall spatial profile of the chin, while reduction leads to chin depression. Based on a comprehensive analysis, the doctors suggest filling to improve the contour of the chin.
In Figure 4b, we simulate the effect of mandibular plastic surgery. The mandible of the model is full before deformation; after bone cutting, the width of the cheek is significantly reduced, and comparing before and after, the esthetics of the face are improved.
In Figure 4c, we simulate the effect of nose augmentation. As the data in Figure 4d show, the nasofrontal angle increased by 3.13° because of the simulated tissue filling, the nasal tip angle decreased by 12.41° because the tip was pulled down, and the nasolabial angle decreased slightly because of the change in the overall spatial structure of the nose.
Meanwhile, the system is designed to be used by plastic and esthetic surgeons. We asked experts to evaluate the PS results against several beauty canons: the golden ratio, facial symmetry, and angular profile [33]. The majority of the edited 3D faces comply with these canons, indicating that the proposed simulation system can improve the esthetic quality of the 3D face and is applicable in real-world plastic surgery.

4. Conclusions

The 3D quantitative analysis of facial morphology, including face reconstruction and surgical simulation, is important in plastic surgery (PS); both can help surgeons design appropriate plastic procedures before performing the surgery. In parallel with the increased demand for plastic surgeries, there is a growing desire to verify patient satisfaction and reduce risks during surgery. By performing interactive virtual surgery on 3D face models, doctors can design an operation by comparing the changes in the face model before and after virtual surgery and obtain the accurate data required for plastic surgery. This replaces blind, experience-based operation, improves surgical accuracy, helps improve the surgical outcome, and shortens the operation time. In this paper, we present a new 3D system for guiding plastic surgeons in designing surgical plans. For any selected facial area, the system can suggest new 3D shapes that improve the general harmony of the face and meet the esthetic evaluation criteria. A face 3D data acquisition system is built on a highly integrated, miniaturized, and convenient depth camera; the 3D face model is picked using the bounding box method, the 3D face morphing is controlled with the finite element method, and the face feature points are extracted with the Dlib library. An esthetic strategy is used to evaluate the face shape, eyes, eyebrows, and nose.
All corresponding subsystems have been implemented and tested. Several surgical simulation examples show how the system works on key facial areas such as the nose, mouth, and chin (it is worth noting that our subjects are not real patients; the test cases were selected from our research staff). The tests showed that our method is effective: the system can accurately display the reconstructed areas and present recommendations for facial microplastic surgery. Despite the small database of faces used, expert evaluation of the results demonstrates that the proposed system can improve the esthetic quality of 3D faces.
The 3D facial virtual PS method proposed in this paper has the following advantages. First, it personalizes, quantifies, and digitizes the plastic surgery design process, gives patients a clearer and more graphic understanding of the post-operative result in advance, and facilitates communication between doctors and patients. Second, as an expansion product, it can assist plastic surgeons in effectively determining whether a face meets the relevant standards. Finally, by performing interactive virtual surgery on the 3D face model, the doctor can design the operation by comparing the changes in the face model before and after the virtual surgery, thus obtaining the accurate data required for plastic surgery. This replaces blind, experience-based operation and improves surgical accuracy, which is conducive to improving results and shortening the operation time.
Further work on improving the system will proceed in three directions. It has been observed that the 3D scanning system requires a good lighting environment and requires users to keep still; blinking or slight movement leads to texture color inconsistency in the 3D facial model. To improve 3D reconstruction performance, we plan to implement image processing algorithms that remove the effects of uneven illumination and motion artifacts. The second area of improvement is extending the sculpting scope of the system, which currently deals with the esthetic planning of surgery, to cosmetology constraints, combining whitening, freckle removal, and nevus removal. Finally, a volume calculation algorithm will be developed to improve the accuracy of volume measurement and effectively assist doctors in facial filling surgery.

Author Contributions

Conceptualization, X.J. and Z.R.; methodology, Z.R. and S.S.; software, Z.R. and S.S.; validation, X.J., S.S. and Z.R.; formal analysis, Z.R. and S.S.; investigation, Z.R., S.S., X.J. and M.L.; resources, X.J., M.L. and J.H.; writing—original draft preparation, X.J., Z.R. and S.S.; writing—review and editing, X.J., Z.R., M.L. and S.S.; visualization, Z.R. and S.S.; supervision, X.J., M.L. and J.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Key Program for Science and Technology Development of Jilin Province, grant number 20200708104YY.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Written informed consent was obtained from all participants involved in the study.

Data Availability Statement

Data is unavailable due to privacy or ethical restrictions.

Acknowledgments

The authors thank all the volunteers for their cooperation in the experiments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sabharwal, T.; Gupta, R. Human Identification after Plastic Surgery Using Region Based Score Level Fusion of Local Facial Features. J. Inf. Secur. Appl. 2019, 48, 102373.
  2. Sabharwal, T.; Gupta, R.; Son, L.H.; Kumar, R.; Jha, S. Recognition of Surgically Altered Face Images: An Empirical Analysis on Recent Advances. Artif. Intell. Rev. 2019, 52, 1009–1040.
  3. Qureshi, U.A.; Calaguas, S.; Frank, E.; Inman, J. Implications of Applying New Technology in Cosmetic and Reconstructive Facial Plastic Surgery. Facial Plast. Surg. 2020, 36, 760–767.
  4. Oh, S.-H.; Park, J.-S.; Ryu, J.-J.; Song, I.-S.; Jung, S.-K. Three-Dimensional Reproducibility of the Soft Tissue Landmarks Taken by Structured-Light Facial Scanner in Accordance with the Head Position Change. Healthcare 2021, 9, 428.
  5. Li, Y.; Yang, X.; Li, D. The Application of Three-Dimensional Surface Imaging System in Plastic and Reconstructive Surgery. Ann. Plast. Surg. 2016, 77, S76–S83.
  6. Zhou, Z.; Li, P.; Ren, J.; Guo, J.; Huang, Y.; Tian, W.; Tang, W. Virtual Facial Reconstruction Based on Accurate Registration and Fusion of 3D Facial and MSCT Scans. J. Orofac. Orthop./Fortschr. Der Kieferorthopädie 2016, 77, 104–111.
  7. Olivetti, E.C.; Nicotera, S.; Marcolin, F.; Vezzetti, E.; Sotong, J.P.; Zavattero, E.; Ramieri, G. 3D Soft-Tissue Prediction Methodologies for Orthognathic Surgery—A Literature Review. Appl. Sci. 2019, 9, 4550.
  8. Feng, X.; Ma, Y.; Gao, L. Compact Light Field Photography towards Versatile Three-Dimensional Vision. Nat. Commun. 2022, 13, 3333.
  9. Cheng, K.-J.; Liu, Y.-F.; Wang, J.H.; Wang, R.; Xia, J.; Xu, X.; Jiang, X.-F.; Dong, X.-T. 3D-Printed Porous Condylar Prosthesis for Temporomandibular Joint Replacement: Design and Biomechanical Analysis. Technol. Health Care 2022, 30, 1017–1030.
  10. Masnada, S.; Gibelli, D.; Dolci, C.; De Giorgis, V.; Cappella, A.; Veggiotti, P.; Sforza, C.; Italian Aicardi Study Group. 3D Facial Morphometry in Italian Patients Affected by Aicardi Syndrome. Am. J. Med. Genet. Part A 2020, 182, 2325–2332.
  11. Wu, J.; Liu, Y.; Zhang, J.; Peng, W.; Jiang, X. Biomechanical Investigation of Orthodontic Treatment Planning Based on Orthodontic Force Measurement and Finite Element Method before Implementation: A Case Study. Technol. Health Care 2018, 26, 347–359.
  12. Schendel, S.A.; Montgomery, K. A Web-Based, Integrated Simulation System for Craniofacial Surgical Planning. Plast. Reconstr. Surg. 2009, 123, 1099–1106.
  13. Khwanngern, K.; Tiangtae, N.; Natwichai, J.; Kattiyanet, A.; Kaveeta, V.; Sitthikham, S.; Kammabut, K. Jaw Surgery Simulation in Virtual Reality for Medical Training. In Proceedings of the International Conference on Network-Based Information Systems, Oita, Japan, 5–7 September 2019; pp. 475–483.
  14. Bottino, A.; De Simone, M.; Laurentini, A.; Sforza, C. A New 3-D Tool for Planning Plastic Surgery. IEEE Trans. Biomed. Eng. 2012, 59, 3439–3449.
  15. AlRashidi, Y.; Toloza, S.; Denys, H.; Gattuso, J.; Nunez, E.; Akbaș, M.İ. Aiding Rhinoplasty Surgeons with 3D Imaging and a Rule-Based Adjustment Algorithm. In Proceedings of the SoutheastCon 2021, Atlanta, GA, USA, 10–13 March 2021; pp. 1–6.
  16. Chang, J.B.; Small, K.H.; Choi, M.; Karp, N.S. Three-Dimensional Surface Imaging in Plastic Surgery: Foundation, Practical Applications, and Beyond. Plast. Reconstr. Surg. 2015, 135, 1295–1304.
  17. do Vale Voigt, M.; Espíndola-Castro, L.F.; de Melo Monteiro, G.Q.; Ortigoza, L.S.; dos Santos Torreão, A.; Georg, R. DSDapp Use for Multidisciplinary Esthetic Planning. J. Esthet. Restor. Dent. 2020, 32, 738–746.
  18. Li, J.; Zhou, J.; Zhang, J. A Novel Dynamical 3D Smile Measurement Method to Evaluate the Effects of Face-Lifting Surgery: Based on the Optical Structured Light Strategy. Optik 2015, 126, 1716–1719.
  19. Zeng, H.; Yuan-Liang, S.; Xie, G.; Lu, F.; Fu, R. Three-Dimensional Printing of Facial Contour Based on Preoperative Computer Simulation and Its Clinical Application. Medicine 2019, 98, e12919.
  20. Zhu, M.; Xie, Y.; Zhu, Y.; Chai, G.; Li, Q. A Novel Noninvasive Three-Dimensional Volumetric Analysis for Fat-Graft Survival in Facial Recontouring Using the 3L and 3M Technique. J. Plast. Reconstr. Aesthetic Surg. 2016, 69, 248–254.
  21. Chou, J.-K.; Yang, C.-K.; Gong, S.-D. Face-off: Automatic Alteration of Facial Features. Multimed. Tools Appl. 2012, 56, 569–596.
  22. Dong, Y.; Zhao, Y.; Bai, S.; Wu, G.; Zhou, L.; Wang, B. Three-Dimensional Anthropometric Analysis of Chinese Faces and Its Application in Evaluating Facial Deformity. J. Oral Maxillofac. Surg. 2011, 69, 1195–1206.
  23. Ulrich, L.; Vezzetti, E.; Moos, S.; Marcolin, F. Analysis of RGB-D Camera Technologies for Supporting Different Facial Usage Scenarios. Multimed. Tools Appl. 2020, 79, 29375–29398.
  24. Ye, Y.; Song, Z.; Guo, J.; Qiao, Y. SIAT-3DFE: A High-Resolution 3D Facial Expression Dataset. IEEE Access 2020, 8, 48205–48211.
  25. Ye, Y.; Song, Z. An Accurate 3D Point Cloud Registration Approach for the Turntable-Based 3D Scanning System. In Proceedings of the 2015 IEEE International Conference on Information and Automation, Beijing, China, 2–5 August 2015; pp. 982–986.
  26. Shi, C.; Feng, J.; Tang, S.; Song, Z. A Robust Feature Detection Method for an Infrared Single-Shot Structured Light System. In Proceedings of the 2018 IEEE International Conference on Real-Time Computing and Robotics (RCAR), Kandima, Maldives, 1–5 August 2018; pp. 693–697.
  27. Zhu, Y.; Luo, X.; Zuniga, C.; Lozano-Garzón, C.; Alomia, G. Feature Extraction and Matching of 3D Face Model Based on Facial Landmark Detection. In Proceedings of the 2020 International Conference on Virtual Reality and Visualization (ICVRV), Recife, Brazil, 13–14 November 2020; pp. 285–287.
  28. Simmons-Ehrhardt, T.L.; Monson, K.L.; Flint, T.; Saunders, C.P. Quantitative Accuracy and 3D Biometric Matching of 388 Statistically Estimated Facial Approximations of Live Subjects. Forensic Imaging 2020, 21, 200377.
  29. Sruthy, B.S.; Jayasree, M. Recognizing Surgically Altered Face Images and 3D Facial Expression Recognition. Procedia Technol. 2016, 24, 1300–1304.
  30. Manal, E.R.; Arsalane, Z.; Aicha, M.; Allah, O.A. 3D Facial Attractiveness Enhancement Using Free Form Deformation. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 3497–3505.
  31. Lamecki, A.; Balewski, L.; Mrozowski, M. Effect of Mesh Deformation on the Accuracy of 3D FEM Electromagnetic Analysis. In Proceedings of the 2016 IEEE MTT-S International Conference on Numerical Electromagnetic and Multiphysics Modeling and Optimization (NEMO), Beijing, China, 27–29 July 2016; pp. 1–2.
  32. de Buhan, M.; Nardoni, C. A Facial Reconstruction Method Based on New Mesh Deformation Techniques. Forensic Sci. Res. 2018, 3, 256–273.
  33. Parks, C.L.; Monson, K.L. Biometric Correspondence between Reface Computerized Facial Approximations and CT-Derived Ground Truth Skin Surface Models Objectively Examined Using an Automated Facial Recognition System. Forensic Sci. Int. 2018, 286, 8–11.
Figure 1. The workflow of the proposed facial plastic surgery simulation system.
Figure 2. The measurement of the human face using the system. (a) Geometric feature measurement. (b) 3D area measurement. (c) Average error of geometric measurement. (d) Average error of area measurement.
Figure 3. 3D evaluation results of key parts of the face. (a) Evaluation results for the "three courtyards", "five eyes", eyebrows and eyes, and nose. (b) Comparison of evaluation results.
Figure 4. The simulated surgery. (a) Chin. (b) Mandible. (c) Nose. (d) Values of the nose adjustment.
Table 1. The technical parameters of several 3D scanning systems.

| Device         | Resolution | Precision (mm) | Working Distance (m) | Scanning Time (s) | Size (mm)     |
|----------------|------------|----------------|----------------------|-------------------|---------------|
| Kinect v2      | 512 × 424  | 2~10           | 0.5–4.5              | >60               | 250 × 66 × 67 |
| RealSense D435 | 1280 × 720 | 5              | 0.8–3                | 30                | 90 × 20 × 23  |
| Orbbec Persee  | 640 × 480  | -              | 0.6–8                | -                 | 172 × 63 × 56 |
| Our system     | 1280 × 800 | 0.4            | 0.3–0.6              | 4                 | 101 × 26 × 13 |