Article

Accuracy Evaluation of a Three-Dimensional Face Reconstruction Model Based on the Hifi3D Face Model and Clinical Two-Dimensional Images

1 Department of Orthodontics, Peking University School and Hospital of Stomatology, National Center for Stomatology, National Clinical Research Center for Oral Diseases, National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing Key Laboratory of Digital Stomatology, Research Center of Engineering and Technology for Computerized Dentistry, Ministry of Health, Beijing 100081, China
2 School of Software and Microelectronics, Peking University, Beijing 100091, China
* Author to whom correspondence should be addressed.
Bioengineering 2024, 11(12), 1174; https://doi.org/10.3390/bioengineering11121174
Submission received: 30 September 2024 / Revised: 11 November 2024 / Accepted: 18 November 2024 / Published: 21 November 2024

Abstract

Three-dimensional (3D) facial models have been increasingly applied in orthodontics, orthognathic surgery, and various medical fields. This study proposed an approach to reconstructing 3D facial models from standard orthodontic frontal and lateral images, providing an efficient way to expand 3D databases. A total of 23 participants (average age 20.70 ± 5.36 years) were enrolled. Based on the Hifi3D face model, 3D reconstructions were generated and compared with corresponding face scans to evaluate their accuracy. Root mean square error (RMSE) values were calculated for the entire face, nine specific facial regions, and eight anatomical landmarks. Clinical feasibility was further assessed by comparing six angular and thirteen linear measurements between the reconstructed and scanned models. The RMSE of the reconstruction model was 2.00 ± 0.38 mm (95% CI: 1.84–2.17 mm). High accuracy was achieved for the forehead, nose, upper lip, paranasal region, and right cheek (mean RMSE < 2 mm). The forehead area showed the smallest deviation, at 1.52 ± 0.88 mm (95% CI: 1.14–1.90 mm). In contrast, the lower lip, chin, and left cheek exhibited average RMSEs exceeding 2 mm. The mean deviation across landmarks was below 2 mm, with the Prn displaying the smallest error at 1.18 ± 1.10 mm (95% CI: 0.71–1.65 mm). The largest discrepancies were observed along the Z-axis (Z > Y > X). Significant differences (p < 0.05) emerged between groups in the nasolabial, nasal, and nasofrontal angles, while the other 13 linear and 3 angular measurements showed no statistical differences (p > 0.05). This study explored the feasibility of reconstructing accurate 3D models from 2D photos. Compared to facial scan models, the Hifi3D face model demonstrated a 2 mm deviation, with potential for enriching 3D databases for subjective evaluations, patient education, and communication. However, caution is advised when applying this model to clinical measurements, especially angle assessments.


1. Introduction

With the development of orthodontics, orthodontists and patients place growing emphasis not only on tooth alignment and occlusion but also on facial soft tissue changes [1]. To accurately evaluate soft tissue changes during orthodontic treatment, an increasing number of orthodontists use models obtained from 3D facial scans for research purposes [2,3]. However, a substantial collection of two-dimensional (2D) images has been amassed in orthodontic clinical practice. If these 2D images could be converted into clinically usable 3D models, the clinical applicability of 3D facial analysis would be greatly enhanced.
In today’s computer vision (CV) research field, 3D face reconstruction is a popular topic. Its applications span the gaming and film industries, security authentication, and face recognition, among others [4]. Because of the complex details within 3D facial structures, the reconstruction process presents significant challenges and is susceptible to variations in illumination, expression, and posture. Improving model robustness is therefore a priority in CV-based reconstruction [5]. Orthodontic facial photography, conducted under strict and standardized conditions, supports this robustness [6].
The three-dimensional morphable model (3DMM) is a classic algorithm used for 3D face reconstruction, first proposed by Blanz and Vetter [7] in 1999. Its core principle involves creating a 3D facial model through shape and texture parameters, thereby generating a model resembling a real human face by solving for the optimal linear combinations of these parameters. Recent developments in deep learning have led to its integration with 3D facial modelling techniques, such as the Hifi3Dface model [8]. This method introduces an innovative geometric template, generating highly realistic 3D facial structures through detail synthesis, regional pyramid bases, and localized fitting. Compared to traditional 3DMM, this approach better captures facial geometry with only linear bases, achieving a more lifelike reconstruction.
In addition, other 3D face reconstruction algorithms, such as epipolar geometry (EG), shape-from-shading, and one-shot learning (OSL), have seen significant development in recent years. Anbarjafari et al. [9] segmented the face into four regions using 68 facial landmarks, achieving strong generalization. Jiang et al. [10] employed a shape-from-shading technique inspired by face animation, incorporating RGB-D and monocular video data. Xing et al. [11] applied a one-shot learning reconstruction method to create 3D facial models from a single image.
High-precision 3D facial scanners are commonly used in orthodontic studies, particularly for measuring facial symmetry and monitoring soft tissue changes in patients [12,13]. Additionally, facial scanning is extensively applied in the field of digital prosthetics, enabling users to capture complex movements for precise jaw motion analyses [14]. These scanners employ three primary imaging technologies: lasers, structured light, and stereophotography [15]. Research indicates that the root mean square error (RMSE) for these 3D scanners is approximately 0.5 mm, even reaching as low as 0.05 mm for certain brands [16]. Specifically, the FaceSCAN 3D scanner, which is primarily reliant on structured light technology, yields a scanning error of approximately 0.2 mm, as noted in previous studies [16,17]. However, its substantial size and high cost restrict its application. In fact, the purpose of our study is not to replace 3D facial scanners. Instead, we aim to use this method to convert scarce and valuable historical 2D data into 3D models, thereby enriching the orthodontic research database. These models can be used for subjective evaluations, patient education, communication, and so on.
In the present study, we explore the automatic generation of 3D face models from two 2D facial photos, based preliminarily on the Hifi3Dface method. We then compare the reconstructed models to the FaceSCAN models and measure the 3D deviations between them to assess reconstruction accuracy. In addition, we evaluate 19 soft tissue measurements to assess clinical feasibility.
The null hypothesis was that there were no statistically significant differences between the face scan models and the 3D reconstruction models in clinical accuracy and feasibility.

2. Materials and Methods

2.1. Sample Selection

This study was approved by the Institutional Review Board of Peking University School and Hospital of Stomatology (PKUSSIRB-202058135). The involved patients were treated at Peking University School and Hospital of Stomatology (China) from January to April 2021. Following Cohen’s statistical text [18], we selected a high effect size of 0.8 for Cohen’s d. With a test power of 0.95 and a significance level of α = 0.05, the calculated minimum sample size for this paired-design study was 23 participants. The study ultimately included 23 participants (14 females and 9 males) with an average age of 20.70 ± 5.36 years. All participants provided written informed consent.
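As a reproducibility aid, the sample size calculation above can be verified with a short script. The sketch below uses statsmodels, which is an assumption about tooling (the study does not state how the calculation was performed).

```python
# Minimal check of the paired-design sample size stated above:
# Cohen's d = 0.8, two-sided alpha = 0.05, power = 0.95.
import math
from statsmodels.stats.power import TTestPower

n = TTestPower().solve_power(effect_size=0.8, alpha=0.05,
                             power=0.95, alternative='two-sided')
print(math.ceil(n))  # ~23 participants for a paired t test
```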
The inclusion criteria were (1) Han Chinese individuals, aged between 12 and 35 years, with a complete set of permanent teeth and (2) complete medical information, including 2D photographs, 3D models, etc.
The exclusion criteria were (1) previous orthognathic surgery; (2) maxillofacial tumours; (3) cleft lip and palate or other developmental anomalies; (4) neurological disorders (e.g., facial paralysis); (5) facial scarring; and (6) significant facial asymmetry.

2.2. Acquisition of 2D Photographs and 3D Scans

The process of collecting 2D photos and 3D data followed that of Mao et al. [19]. The FaceSCAN3D System (3DShape, Erlangen, Germany) was used to capture the 3D models. The 2D photos were acquired with a Canon camera (EOS 60D, 60 mm fixed-focus lens) set to a shutter speed of 1/125 s, an aperture of f/7.1, and an ISO of 100. Both the 2D photos and 3D facial scans were obtained on the same day.

2.3. Three-Dimensional Model Reconstruction

Three-dimensional face reconstruction was performed from the frontal and lateral images using the open-source Hifi3Dface model. The process involved several key steps, detailed below.

2.3.1. Initial Model Fitting

We used a linear 3DMM based on Principal Component Analysis (PCA) as our initial model. The shape and albedo variables were expressed as
s = s̄ + S·xshp
a = ā + A·xalb
where s̄ is the mean 3D face shape in vector form, S is the shape basis, ā is the mean albedo (reflectance) map in vector form, A is the albedo basis, and xshp and xalb are the corresponding parameter vectors to be estimated. The detected 3D landmarks (including depth information) were fitted to the initial shape model via ridge regression. Partial texture maps were then extracted by projecting the shape model onto each input image and blended into a full texture map using a Laplacian pyramid technique. Finally, the initial albedo parameters were fitted to the blended texture in a second ridge regression step.
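To make this step concrete, the following is a minimal sketch of a ridge regression fit of the shape parameters to detected 3D landmarks. The function name, dimensions, and regularization weight are illustrative assumptions, not the Hifi3Dface implementation.

```python
# Hypothetical ridge fit of the linear model s = s̄ + S·xshp to 3D landmarks.
# s_mean: (3V,) mean shape; S: (3V, n_basis) shape basis;
# landmarks_3d: (K, 3) detected landmarks; lm_idx: their vertex indices.
import numpy as np

def fit_shape_ridge(s_mean, S, landmarks_3d, lm_idx, lam=1e-3):
    rows = np.ravel([[3 * i, 3 * i + 1, 3 * i + 2] for i in lm_idx])
    A = S[rows]                                # basis rows at the landmarks
    b = landmarks_3d.ravel() - s_mean[rows]    # residual to explain
    # Closed-form ridge solution: (A^T A + lam*I)^(-1) A^T b
    x_shp = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
    return x_shp, s_mean + S @ x_shp           # parameters, full shape vector
```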

2.3.2. Optimization

The optimization parameters were defined as follows:
P = {xshp, xalb, xlight, xpose}
In this formula, xshp is the shape parameter, xalb is the albedo (reflectance) parameter, xlight contains the second-order spherical harmonic illumination parameters, and xpose includes the rotation and translation parameters of the rigid transformation. For each view, the user’s 3DMM parameters xshp and xalb were estimated together with the per-view parameters xlight and xpose. The constraints include the landmark loss Llan, the RGB photometric loss Lrgb, the depth loss Ldep, and the perceptual identity loss Lid.
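For clarity, the overall objective can be read as a weighted sum of these four constraints. The sketch below is illustrative; the weights are placeholders, not the values used by Hifi3Dface.

```python
# Illustrative composition of the fitting objective: the per-view losses
# L_lan (landmarks), L_rgb (photometric), L_dep (depth), and L_id (identity)
# are combined with assumed weights w_*.
def total_loss(L_lan, L_rgb, L_dep, L_id,
               w_lan=1.0, w_rgb=1.0, w_dep=1.0, w_id=1.0):
    return w_lan * L_lan + w_rgb * L_rgb + w_dep * L_dep + w_id * L_id
```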

2.3.3. Morphable Model Augmentation

The morphable model augmentation (MMA) method works as follows. First, data generation and perturbation are performed: a baseline 3DMM is chosen, then perturbed and deformed to generate additional facial models. This involves replacing the nose and mouth regions with models from other sources, applying rotation perturbations, and using rigid transformations such as rotation, translation, and scaling. The generated facial models are also mirrored. These steps yield a large set of facial models.
The iterative 3DMM construction algorithm consists of two levels of iteration. In the outer loop, a subset of models is randomly selected from the generated dataset as the test set. In the inner loop, the current 3DMM is used to fit the models in the test set, and the models with the highest fitting errors are selected and added to the model collection; the basis vectors of the 3DMM are then recomputed from the models in the collection. The inner loop repeats, continuously improving the expressiveness of the 3DMM. The goal of this algorithm is to capture as many data variations as possible with as few principal components as possible.
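A sketch of this two-level loop is given below, under stated assumptions (a fixed test subset size, fixed iteration counts, and PCA via SVD); it illustrates the idea rather than reproducing the authors’ implementation.

```python
# Hypothetical iterative 3DMM construction: each inner step fits the current
# basis to a test set, adds the worst-fit model to the collection, and
# recomputes the PCA basis from the enlarged collection.
import numpy as np

def build_basis(collection, k):
    X = np.stack(collection)                  # (n_models, 3V) shape vectors
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k].T                     # mean shape, basis (3V, k)

def fit_error(mean, basis, shape):
    x = basis.T @ (shape - mean)              # least-squares coefficients
    return np.linalg.norm(shape - (mean + basis @ x))

def iterative_3dmm(generated, seed, k=40, outer=5, inner=10, test_size=100):
    collection, rng = list(seed), np.random.default_rng(0)
    for _ in range(outer):
        test = rng.choice(len(generated), size=test_size, replace=False)
        for _ in range(inner):
            mean, basis = build_basis(collection, k)
            errs = [fit_error(mean, basis, generated[i]) for i in test]
            collection.append(generated[test[int(np.argmax(errs))]])
    return build_basis(collection, k)
```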
MMA enhanced the capacity and expressiveness of the 3DMM. By introducing diverse data samples and perturbation operations, MMA could better capture the asymmetry and variations in facial shapes. Furthermore, the iterative construction algorithm continually improved the expressiveness of the 3DMM by optimizing the results of the PCA.

2.3.4. Facial Reflectance Synthesis

To enhance facial detail, we tested a hybrid approach that integrated high-resolution synthetic maps with normal maps. Traditional super-resolution-based methods can effectively capture features like eyebrows and hair, yet directly generating high-resolution texture maps can sometimes yield overly detailed, unrealistic results. By using pyramid parameters, this method achieved a balance between realism and detail.
First, we divided the face into eight subregions, indicated by the different colours in the UV map. This region partitioning was performed because different regions are characterized by different types of skin/hair detail. Second, with the regional pyramid bases, the different types of skin/hair detail in each region were separately preserved via high-resolution bases, while the low-resolution fitting process allowed the algorithm to focus on major facial structures and the shapes of the eyebrows and lips. Finally, two refinement networks were applied to synthesize the details from the albedo and normal maps.

2.4. Three-Dimensional Deviation Measurement

The alignment of the 3D reconstructed models and the face scans was performed using Cliniface software (v5.2.1, Unlocking Facial Clubs, Perth, Australia). A 3D coordinate system was established with the midpoint of the bilateral tragion as the origin; the median sagittal plane was defined by the midpoints of the face, and the Frankfort horizontal plane was taken as the horizontal plane. The X-, Y-, and Z-axes corresponded to the left, superior, and anterior directions, respectively.
After repositioning, model registration and deviation analysis were conducted in Geomagic Studio 2013 (Geomagic, Morrisville, NC, USA). The alignment registration method followed that of Mao et al. [19]. First, the bilateral canthi, pronasale, and soft tissue nasion were manually marked for pre-alignment; a best-fit alignment was then performed. The facial area was divided into nine regions according to Wang et al. [20]: the forehead, left and right paranasal areas, left and right cheeks, nose, upper lip, lower lip (labium), and chin. To assess the deviations across the full face and the nine areas, colour mapping and an RMSE analysis were performed. Finally, the 3D accuracy for Prn, Ls, Li, Lch, Rch, Pg’, Gn’, and Me’ was calculated (Figure 1). To assess the consistency of the landmarks, we randomly selected 7 of the 23 patients, repeatedly measured the 3D deviation of these 8 landmarks, and calculated the intraclass correlation coefficient (ICC) (Table 1).
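The deviation metric itself is simple to state: after registration, the distances from each point on one surface to the closest point on the other are pooled into an RMSE. The sketch below illustrates this; Geomagic’s internal computation may differ in detail.

```python
# RMSE between registered surfaces via closest-point distances.
import numpy as np
from scipy.spatial import cKDTree

def rmse_to_scan(recon_pts, scan_pts):
    """recon_pts: (N, 3) reconstruction points; scan_pts: (M, 3) scan points."""
    d, _ = cKDTree(scan_pts).query(recon_pts)  # nearest-neighbour distances
    return float(np.sqrt(np.mean(d ** 2)))
```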

2.5. Soft Tissue Measurements

The “Measurements Browser” and “Add Calliper Measurements” tools in Cliniface 5.2.1 were used for soft tissue measurements (Table 2, Figure 2). All data were collected by two orthodontists, with repeated measurements taken one week apart and the average of the results computed.

2.6. Statistical Analysis

All the data were analyzed using SPSS (v26.0). The Shapiro–Wilk test was applied to verify normality for all soft tissue measurements, confirming their normal distribution (p > 0.05). Intragroup and intergroup consistency tests were performed on these measurements using a two-way absolute-agreement ICC model, followed by paired t tests to ascertain the differences between the two sets of models. The significance level was set at α = 0.05, and differences were considered statistically significant when p < 0.05.
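The same test sequence can be reproduced outside SPSS; the SciPy sketch below is provided for illustration only and is an assumption about tooling.

```python
# Shapiro-Wilk normality checks followed by a paired t test, alpha = 0.05.
from scipy import stats

def compare_measurement(scan_vals, recon_vals, alpha=0.05):
    _, p_scan = stats.shapiro(scan_vals)            # normality per group
    _, p_recon = stats.shapiro(recon_vals)
    t, p = stats.ttest_rel(scan_vals, recon_vals)   # paired t test
    return {"normal": p_scan > alpha and p_recon > alpha,
            "t": t, "p": p, "significant": p < alpha}
```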

3. Results

3.1. Three-Dimensional Deviation Analysis Results for the Full Face and Nine Regions

The root mean square error (RMSE) of the Hifi3Dface-based reconstruction models averaged 2.00 ± 0.38 mm (95% CI: 1.84–2.17 mm) (Table 3, Figure 3). High reconstruction accuracy was observed for the forehead, nose, upper lip, paranasal region, and right cheek, with average RMSEs of less than 2 mm. The labium, chin, and left cheek exhibited average RMSEs greater than 2 mm, which may be related to the significant differences in the lower facial region across sagittal skeletal types. Additionally, the zygomatic area appeared relatively retruded compared to the scan model, with the brow arch more externally pronounced. In patients with convex profiles, the chin appeared more pronounced, while in those with concave profiles, nasal features were more prominent (Figure 4).
Figure 5 illustrates both the best and worst reconstruction outcomes, with the RMSE for the best reconstruction measuring 1.36 mm across the face and under 2 mm in the nine distinct regions. Notably, the RMSE around the cheek reached 0.81 mm. In the worst case, the RMSE was 2.73 mm around the entire face, reaching 3 mm for the labium and cheek.

3.2. Three-Dimensional Deviation of Landmarks

As shown in Table 1, all landmarks had ICCs above 0.8, indicating good consistency in the results. The 3D deviations of each landmark between the reconstruction models and the face scans are displayed in Figure 6 and Table 4; the mean errors were <2 mm, within clinically acceptable limits. Landmarks such as Prn, Lch, and Rch demonstrated the highest accuracy, while Gn’ and Me’ showed slightly larger deviations that still stayed below 3 mm. Deviations along the Y-axis, excluding Me’, were generally under 1 mm. The largest discrepancies appeared in the Z-axis direction (Z > Y > X) and contributed most to the landmark deviations.

3.3. Results of Soft Tissue Measurements

The ICC results are presented in Table 5; the intragroup and intergroup average ICC values exceeded 0.90 for all measured features, indicating the high reliability of the measurements. Table 6 shows the outcomes of these measurements. Sixteen of the nineteen soft tissue measurements, including all linear measurements, displayed no statistically significant differences (p > 0.05). However, significant differences were observed in three angular measurements, most notably the nasolabial angle (p < 0.05). The average errors in the linear measurements remained below 1 mm, while those in the angular measurements were as high as 4.81°.

4. Discussion

In the present study, the original Hifi3Dface code was modified to accept both frontal and lateral facial images, and 3D facial models were successfully reconstructed. To assess the error between the reconstructed models and the FaceSCAN models, we adopted a 2 mm criterion as the minimum standard for evaluation. High accuracy was achieved across all regions except the lower lip, chin, and left cheek, with mean RMSEs < 2 mm. As for the soft tissue measurements, 13 linear and 3 angular measurements showed no significant differences (p > 0.05); the exceptions were the nasolabial, nasal, and nasofrontal angles. The null hypothesis therefore could not be fully confirmed. Currently, there is no uniform standard for the clinical deviation of soft tissue. Deviations ≥ 3 mm should be considered clinically relevant [21], with 1–3 mm deviations deemed relevant only in extremely detailed evaluations for micro-esthetic purposes. Different studies [22,23] have suggested that, for facial soft tissues, an error lower than 2 mm is clinically acceptable. In addition, Mai et al. [24] found that some handheld 3D facial scanners exhibit a size deviation of approximately 1.5 mm.
To minimize the impact of software operations on measurements, we used Cliniface software for 3D model relocation, landmark positioning assistance, and soft tissue measurements. Cliniface is based on the open-source MATLAB toolbox, MeshMonk [25], which introduces a certain degree of error. An error of 0.2 mm for multiple registrations of the template onto the same facial images has been reported [26]. Nevertheless, several studies [27,28,29] have demonstrated that automatic landmark detection methods can achieve precision comparable to, or even exceeding, that of manual landmark positioning. White et al. [30] found that the mean inter-observer error in manual landmarking was 0.40 mm, while the variation in automatic landmark indication averaged 0.27 mm. The operator’s clinical orthodontic experience and software proficiency also affect the accuracy of landmark positioning. To address this, operators received relevant software training before conducting the measurements. Furthermore, our study involves an inherent degree of unassessable selection bias. Even so, based on the current ICC results, our error remains within an acceptable range.
As the results show, we established 19 soft tissue indicators. All linear distance differences were within the clinically acceptable range of 1 mm and not statistically significant (p > 0.05). However, three angular metrics exhibited statistically significant disparities, with differences reaching up to 4.81°. The differences in the nasolabial and nasal angles can be attributed to the higher, more prominent nose and fuller upper lip of the Caucasian template compared with Asian facial morphology. The clinical application of these models therefore requires further consideration.
The overall deviation of the reconstruction models was 2.00 ± 0.38 mm, concentrated mainly in the Z-axis (sagittal) direction, suggesting room for further improvements in depth accuracy. RGB-D cameras can simultaneously capture depth maps and corresponding colour images [31]; through coordinate transformations, the pixel and depth values can be converted into point cloud data, yielding a 3D reconstruction model. This is similar to the basic principle of some 3D scanners. Regular cameras, however, cannot capture depth information, so a fundamental task in 3D facial reconstruction is to compensate for the loss of depth information.
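A minimal sketch of this pixel-to-point-cloud conversion under a standard pinhole camera model is shown below; the intrinsics (fx, fy, cx, cy) are assumed to be known from calibration.

```python
# Back-projecting an RGB-D depth map to a point cloud: pixel (u, v) with
# metric depth z maps to X = (u - cx)*z/fx, Y = (v - cy)*z/fy, Z = z.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """depth: (H, W) array of metric depths; returns (H*W, 3) points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```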
In the field of CV, many studies have attempted 3D reconstruction from single images, but they have mainly focused on morphology, expressions, and facial skin textures. Few articles have addressed the accuracy of soft tissue modelling in the middle and lower face regions [32]. Moreover, limited by the sample size and quality of 3D datasets, the extrapolatability of the code has emerged as the greatest problem for 3D reconstruction. Recent studies have used multiple images for 3D face reconstruction [33,34], typically employing 45° side images, as most landmarks can be aligned across different perspectives. Lium et al. [35] utilized a CNN for 3D facial reconstruction with frontal and lateral images, achieving a normalized mean error (NME) of 0.0164, but did not evaluate the perioral region. Garrido et al. [36] focused on lip reconstruction, reporting an average error of 3 mm. Additionally, prior studies indicated that facial expressions, such as mouth opening, may lead to 5–10 mm deviations in the lip area [37].
In forensic medicine [38], many researchers have previously reconstructed 3D soft tissue models from CBCT data, but with lower accuracy than in our study. Short et al. [39] used CT data for 3D reconstruction, yielding errors within 2.5 mm for 70.9% of their meshes; however, the error rose to 3.7–5.5 mm around the mouth and 3–7 mm near the nose, and their landmark errors ranged from 0.2 mm to 2.4 mm. Qiu et al. [40] reported improved reconstruction results using CBCT data, with a facial RMSE of approximately 1.5 mm, but the extrapolation performance of the approach was poor, with an error of 2.68 mm.
High-precision 3D facial remodelling that surpasses our method has been achieved with manual modelling software [19], albeit with greater complexity and a reliance on practitioner expertise. In contrast, our proposed method automatically generates 3D face models in a short timeframe from input frontal and lateral images, eliminating the need for meticulous experience-based adjustments.
During the 3D reconstruction process, we utilized the original facial template from the Hifi3Dface model, which was built from data on Caucasian individuals [8]. When applied to Asian individuals, discrepancies in facial features emerged, and the final results retained some characteristics typical of Caucasian faces. In general, the brow bone and chin were relatively prominent in the 3D reconstructed models, while the cheek was more recessed. In addition, the plumpness of the soft tissue in the cheek region posed challenges for matching depth information, leading to poor reconstruction in this area. Moreover, in individuals with different sagittal skeletal types, the morphology of the lower facial profile varies significantly, with notable differences in chin curvature: in patients with a skeletal Class II malocclusion, the mandible is retrusive, whereas skeletal Class III cases exhibit mandibular prognathism. The limited number of chin landmarks further increased the difficulty of reconstructing the lower lip and chin. During the initial model fitting of the Hifi3Dface model, alignment was conducted from top to bottom; as a result, deviations were relatively minimal in the forehead area but more pronounced in the chin and lower lip regions.
The lack of 45° lateral photos and images from other angles increased the difficulty of 3D reconstruction. In general, as the number of acquired photos increases, depth information becomes easier to refine; for this reason, screenshots from videos are now frequently used in CV for reconstructing 3D faces [41]. In orthodontic clinical practice, 90° lateral images are typically used, but only half of the landmarks visible in lateral images can be used for fitting. This limitation highlights a direction for future algorithmic advances in computer vision.
Moreover, this study has other limitations. Patients with significant facial asymmetry or scars were not included, which limits the model’s applicability. Additionally, this study is a preliminary attempt with a small sample size of only 23 Chinese participants, so the findings may primarily apply to Han Chinese individuals. Since generalizing to other ethnicities may introduce bias, larger sample sizes will be necessary for more detailed assessments in future clinical research. The purpose of this study was to automate the conversion of images to 3D models and to preliminarily explore the clinical practicality of reconstruction models. Based on the results, higher-precision models should be explored in the future by integrating orthodontic clinical knowledge and advanced algorithms into computer vision. Future studies could focus on developing facial templates tailored to different ethnic groups to optimize 3D face reconstruction across diverse populations, enhance the code’s generalizability, and broaden its potential clinical applications.

5. Conclusions

This study integrated a CV method into the medical field for efficient 3D model reconstruction, and the obtained modelling results displayed a difference of approximately 2 mm compared to face scans. Nevertheless, caution is advised when applying the proposed method in clinical orthodontics. Further research is necessary to improve the accuracy of the reconstruction models obtained.

Author Contributions

Conceptualization, Y.X. and B.M.; methodology, Y.X. and J.N.; software, Y.X. and J.L.; validation, Y.X.; formal analysis, Y.X. and J.L.; data curation, Y.X. and B.M.; writing—original draft preparation, Y.X. and J.N.; writing—review and editing, Y.X. and S.W.; supervision, Y.Z. and D.L.; project administration, Y.Z. and D.L.; funding acquisition, Y.Z. and D.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (62076011) (Y.Z.); the National Natural Science Foundation of China (81970909, 82271009) (D.L.); the Young Scientists Fund of PKUSS (20110203, 20170109) (D.L.); the Peking University Medicine Seed Fund for Interdisciplinary Research (BMU2018MX007) (D.L.); the Fund of the State Key Laboratory of Oral Disease, Sichuan University (SKLOD2021OF09) (D.L.); the Key R&D Plan of the Ningxia Hui Autonomous Region (2020BCG01001) (D.L.); the China Oral Disease Foundation (A2021-057) (D.L.); the National Multidisciplinary Cooperative Diagnosis and Treatment Capacity Building Project of PKUSS (PKUSSNMP-202020) (D.L.); and the New Clinical Technology Fund of PKUSS (PKUSSNCT-20A07) (D.L.).

Institutional Review Board Statement

This study was carried out in compliance with the Declaration of Helsinki and received approval from the Institutional Review Board of Peking University School and Hospital of Stomatology (PKUSSIRB-202058135).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The datasets analyzed in this study are available from the corresponding author upon reasonable request.

Acknowledgments

Thanks to Jing Li for providing the initial ideas contained in this article and preparing the original manuscript. Thanks to Hongsen Liao for his contributions and guidance in facial model reconstruction.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Prasad, K.N.; Sabrish, S.; Mathew, S.; Shivamurthy, P.G.; Pattabiraman, V.; Sagarkar, R. Comparison of the influence of dental and facial aesthetics in determining overall attractiveness. Int. Orthod. 2018, 16, 684–697. [Google Scholar] [CrossRef] [PubMed]
  2. Gao, J.; Wang, X.; Qin, Z.; Zhang, H.; Guo, D.; Xu, Y.; Jin, Z. Profiles of facial soft tissue changes during and after orthodontic treatment in female adults. BMC Oral Health 2022, 22, 257. [Google Scholar] [CrossRef] [PubMed]
  3. Zhou, Q.; Gao, J.; Guo, D.; Zhang, H.; Zhang, X.; Qin, W.; Jin, Z. Three-dimensional quantitative study of soft tissue changes in nasolabial folds after orthodontic treatment in female adults. BMC Oral Health 2023, 23, 31. [Google Scholar] [CrossRef] [PubMed]
  4. Liu, F.; Zhao, Q.; Liu, X.; Zeng, D. Joint Face Alignment and 3D Face Reconstruction with Application to Face Recognition. IEEE Trans. Pattern Anal. Mach. Intel. 2020, 42, 664–678. [Google Scholar] [CrossRef]
  5. Sariyanidi, E.; Zampella, C.J.; Schultz, R.T.; Tunc, B. Inequality-Constrained and Robust 3D Face Model Fitting. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 433–449. [Google Scholar]
  6. Meredith, G. Facial photography for the orthodontic office. Am. J. Orthod. Dentofac. Orthop. 1997, 111, 463–470. [Google Scholar] [CrossRef]
  7. Blanz, V.; Vetter, T. A morphable model for the synthesis of 3D faces. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA, 8–13 August 1999; pp. 187–194. [Google Scholar]
  8. Lin, X.; Chen, Y.; Bao, L.; Zhang, H.; Zhang, Z. High-Fidelity 3D Digital Human Creation from RGB-D Selfies. ACM Trans. Graph. 2021, 41, 3. [Google Scholar]
  9. Anbarjafari, G.; Haamer, R.E.; Lusi, I.; Tikk, T.; Valgma, L. 3D face reconstruction with region based best fit blending using mobile phone for virtual reality based social media. Bull. Pol. Acad. Sci. Tech. Sci. 2019, 67, 125–132. [Google Scholar]
  10. Jiang, L.; Zhang, J.; Deng, B.; Li, H.; Liu, L. 3D Face Reconstruction with Geometry Details from a Single Image. IEEE Trans. Image Process. 2018, 27, 4756–4770. [Google Scholar] [CrossRef] [PubMed]
  11. Xing, Y.; Tewari, R.; Mendonca, P. A Self-Supervised Bootstrap Method for Single-Image 3D Face Reconstruction. In Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision, Hilton Waikoloa Village, HI, USA, 7–11 January 2019; pp. 1014–1023. [Google Scholar]
  12. Mao, B.; Tian, Y.; Li, J.; Zhou, Y.; Wang, X. A quantitative analysis of facial changes after orthodontic treatment with vertical control in patients with idiopathic condylar resorption. Orthod. Craniofacial Res. 2023, 26, 402–414. [Google Scholar] [CrossRef]
  13. Zhao, J.; Xu, Y.; Wang, J.; Lu, Z.; Qi, K. 3-dimensional analysis of hard- and soft-tissue symmetry in a Chinese population. BMC Oral Health 2023, 23, 432. [Google Scholar] [CrossRef]
  14. Valenti, C.; Massironi, D.; Truffarelli, T.; Grande, F.; Catapano, S.; Eramo, S.; Tribbiani, G.; Pagano, S. Accuracy of a new photometric jaw tracking system in the frontal plane at different recording distances: An in-vitro study. J. Dent. 2024, 148, 105245. [Google Scholar] [CrossRef] [PubMed]
  15. Liu, J.; Zhang, C.; Cai, R.; Yao, Y.; Zhao, Z.; Liao, W. Accuracy of 3-dimensional stereophotogrammetry: Comparison of the 3dMD and Bellus3D facial scanning systems with one another and with direct anthropometry. Am. J. Orthod. Dentofac. Orthop. 2021, 160, 862–871. [Google Scholar] [CrossRef] [PubMed]
  16. Zhao, R.F.; Wang, X.; Ma, D.; Fang, M.J.; Bai, S.Z. Trueness of 4 three-dimensional facial scanners: An in vitro study. Chin. J. Stomatol. 2022, 57, 1036–1042. [Google Scholar]
  17. Khambay, B.; Nairn, N.; Bell, A.; Miller, J.; Bowman, A.; Ayoub, A.F. Validation and reproducibility of a high-resolution three-dimensional facial imaging system. Br. J. Oral. Maxillofac. Surg. 2008, 46, 27–32. [Google Scholar] [CrossRef]
  18. Cohen, J. Statistical Power Analysis for the Behavioral Sciences, 2nd ed.; Routledge: New York, NY, USA, 1988; ISBN 9780203771587. [Google Scholar]
  19. Mao, B.; Li, J.; Tian, Y.; Zhou, Y. The accuracy of a three-dimensional face model reconstructing method based on conventional clinical two-dimensional photos. BMC Oral Health 2022, 22, 413. [Google Scholar] [CrossRef]
  20. Wang, X.W.; Liu, Z.J.; Diao, J.; Zhao, Y.J.; Jiang, J.H. Morphologic reproducibility in 6 regions of the 3-dimensional facial models acquired by a standardized procedure: An in vivo study. Am. J. Orthod. Dentofac. Orthop. 2022, 161, e287–e295. [Google Scholar] [CrossRef] [PubMed]
  21. Thurzo, A.; Strunga, M.; Havlínová, R.; Reháková, K.; Urban, R.; Surovková, J.; Kurilová, V. Smartphone-Based Facial Scanning as a Viable Tool for Facially Driven Orthodontics? Sensors 2022, 22, 7752. [Google Scholar] [CrossRef]
  22. Kazandjian, S.; Sameshima, G.T.; Champlin, T.; Sinclair, P.M. Accuracy of video imaging for predicting the soft tissue profile after mandibular set-back surgery. Am. J. Orthod. Dentofac. Orthop. 1999, 115, 382–389. [Google Scholar] [CrossRef]
  23. Lu, C.-H.; Ko, E.W.C.; Huang, C.-S. The accuracy of video imaging prediction in soft tissue outcome after bimaxillary orthognathic surgery. J. Oral Maxillofac. Surg. 2003, 61, 333–342. [Google Scholar] [CrossRef]
  24. Mai, H.N.; Lee, D.H. Accuracy of Mobile Device-Compatible 3D Scanners for Facial Digitization: Systematic Review and Meta-Analysis. J. Med. Internet Res. 2020, 22, e22228. [Google Scholar] [CrossRef]
  25. Palmer, R.L.; Helmholz, P.; Baynam, G. Cliniface: Phenotypic Visualisation and Analysis Using Non-Rigid Registration of 3D Facial Images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, XLIII-B2-2020, 301–308. [Google Scholar] [CrossRef]
  26. Claes, P.; Walters, M.; Shriver, M.D.; Puts, D.; Gibson, G.; Clement, J.; Baynam, G.; Verbeke, G.; Vandermeulen, D.; Suetens, P. Sexual dimorphism in multiple aspects of 3D facial symmetry and asymmetry defined by spatially dense geometric morphometrics. J. Anat. 2012, 221, 97–114. [Google Scholar] [CrossRef] [PubMed]
  27. Berends, B.; Bielevelt, F.; Schreurs, R.; Vinayahalingam, S.; Maal, T.; de Jong, G. Fully automated landmarking and facial segmentation on 3D photographs. Sci. Rep. 2024, 14, 6463. [Google Scholar] [CrossRef]
  28. Jong, M.A.d.; Wollstein, A.; Ruff, C.; Dunaway, D.; Hysi, P.; Spector, T.; Liu, F.; Niessen, W.; Koudstaal, M.J.; Kayser, M.; et al. An Automatic 3D Facial Landmarking Algorithm Using 2D Gabor Wavelets. IEEE Trans. Image Process. 2016, 25, 580–588. [Google Scholar] [CrossRef]
  29. Huang, R.; Suttie, M.; Noble, J.A. An Automated CNN-based 3D Anatomical Landmark Detection Method to Facilitate Surface-Based 3D Facial Shape Analysis. In Proceedings of the Uncertainty for Safe Utilization of Machine Learning in Medical Imaging and Clinical Image-Based Procedures, Shenzhen, China, 17 October 2019; pp. 163–171. [Google Scholar]
  30. White, J.D.; Ortega-Castrillón, A.; Matthews, H.; Zaidi, A.A.; Ekrami, O.; Snyders, J.; Fan, Y.; Penington, T.; Van Dongen, S.; Shriver, M.D.; et al. MeshMonk: Open-source large-scale intensive 3D phenotyping. Sci. Rep. 2019, 9, 6085. [Google Scholar] [CrossRef]
  31. Scharstein, D.; Szeliski, R. A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms. Int. J. Comput. Vis. 2002, 47, 7–42. [Google Scholar] [CrossRef]
  32. Sharma, S.; Kumar, V. 3D Face Reconstruction in Deep Learning Era: A Survey. Arch. Comput. Methods Eng. 2022, 29, 3475–3507. [Google Scholar] [CrossRef]
  33. Deng, Y.; Yang, J.; Xu, S.; Chen, D.; Jia, Y.; Tong, X. Accurate 3D Face Reconstruction With Weakly-Supervised Learning: From Single Image to Image Set. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2019), Long Beach, CA, USA, 16–17 June 2019. [Google Scholar]
  34. Wu, F.; Bao, L.; Chen, Y.; Ling, Y.; Song, Y.; Li, S.; Ngi Ngan, K.; Liu, W. MVF-Net: Multi-View 3D Face Morphable Model Regression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2019), Long Beach, CA, USA, 16–17 June 2019; pp. 959–968. [Google Scholar]
  35. Lium, O.; Kwon, Y.B.; Danelakis, A.; Theoharis, T. Robust 3D Face Reconstruction Using One/Two Facial Images. J. Imaging 2021, 7, 169. [Google Scholar] [CrossRef]
  36. Garrido, P.; Zollhöfer, M.; Wu, C.; Bradley, D.; Pérez, P.; Beeler, T.; Theobalt, C. Corrective 3D reconstruction of lips from monocular video. ACM Trans. Graph. 2016, 35, 219. [Google Scholar] [CrossRef]
  37. Li, T.; Bolkart, T.; Black, M.J.; Li, H.; Romero, J. Learning a model of facial shape and expression from 4D scans. ACM Trans. Graph. 2017, 36, 194. [Google Scholar] [CrossRef]
  38. Jayakrishnan, J.M.; Kumar, R.B.V. Forensic facial reconstruction using CBCT: A systematic review. Stomatologija 2022, 24, 49–55. [Google Scholar] [PubMed]
  39. Short, L.J.; Khambay, B.; Ayoub, A.; Erolin, C.; Rynn, C.; Wilkinson, C. Validation of a computer modelled forensic facial reconstruction technique using CT data from live subjects: A pilot study. Forensic Sci. Int. 2014, 237, 147.e1–147.e8. [Google Scholar] [CrossRef] [PubMed]
  40. Qiu, Z.; Li, Y.; He, D.; Zhang, Q.; Zhang, L.; Zhang, Y.; Wang, J.; Xu, L.; Wang, X.; Zhang, Y.; et al. Sculptor: Skeleton-Consistent Face Creation Using a Learned Parametric Generator. ACM Trans. Graph. 2022, 41, 1–17. [Google Scholar] [CrossRef]
  41. Gao, X.; Zhong, C.; Xiang, J.; Hong, Y.; Guo, Y.; Zhang, J. Reconstructing Personalized Semantic Facial NeRF Models from Monocular Video. ACM Trans. Graph. 2022, 41, 200. [Google Scholar] [CrossRef]
Figure 1. Colour-coded deviation analysis map of a typical subject. The deviation of all the landmarks is shown as follows: Prn = pronasale; Ls = labrale superius; Li = labrale inferius; Lch = left cheilion; Rch = right cheilion; Pg’ = pogonion of soft tissue; Gn’ = gnathion of soft tissue; and Me’ = menton of soft tissue.
Figure 2. Three-dimensional soft tissue measurements. (a) Outercanthal Width; (b) Labial Fissure Width (ChiL-ChiR); (c) nasolabial angle (Prn-Sn-Ls); (d) Facial Convexity (Gl-Sn-Pg’); (e) Total Facial Convexity (Gl-Prn-Pg’); (f) Outer Canthal, Nasal Angle; (g) Nasal Angle (N’-Prn-Sn); (h) Nasofrontal Angle (Gl-N’-Prn); (i) Philtral Length (Sn-Ls); (j) Philtral Width (CphR-CphL); (k) Philtral Depth; (l) Facial Height (N’-Gn’); (m) Upper Lip Height (Sn-Stos); (n) Lower Lip Height (Stoi-Sl); (o) Upper lip Protrusion (|Prn-Ls|z); (p) Lower lip Protrusion (|Prn-Li|z); (q) Mentolabial furrows depth (|Li-Sl|z); (r) Thickness of upper vermilion (|Ls-Stos|z); (s) Thickness of lower vermilion (|Li-Stoi|z).
Figure 3. Root mean square error (RMSE) of different regions of the reconstruction models.
Figure 4. Cloud maps depicting 3D deviations between face scan models and reconstructions for three patients with varied facial profiles: straight (a), concave (b), and convex (c).
Figure 5. Best and worst reconstruction results: (a) best reconstruction outcome; (b) worst reconstruction outcome.
Figure 6. Three-dimensional deviation of eight landmarks (mean ± 95% CI, mm; D: deviation; Dx: horizontal deviation; Dy: vertical deviation; Dz: sagittal deviation; Prn: pronasale; Ls: labrale superius; Li: labrale inferius; Lch: left cheilion; Rch: right cheilion; Pg’: pogonion of soft tissue; Gn’: gnathion of soft tissue; Me’: menton of soft tissue).
Table 1. Consistency of the eight landmarks (ICC: intraclass correlation coefficient; Prn: pronasale; Ls: labrale superius; Li: labrale inferius; Lch: left cheilion; Rch: right cheilion; Pg’: pogonion of soft tissue; Gn’: gnathion of soft tissue; Me’: menton of soft tissue).
ICC | Prn | Ls | Li | Rch | Lch | Pg’ | Gn’ | Me’
Total | 0.968 | 0.978 | 0.873 | 0.898 | 0.981 | 0.968 | 0.973 | 0.967
X | 0.871 | 0.866 | 0.902 | 0.838 | 0.870 | 0.944 | 0.872 | 0.959
Y | 0.806 | 0.953 | 0.889 | 0.869 | 0.924 | 0.882 | 0.958 | 0.944
Z | 0.965 | 0.969 | 0.818 | 0.887 | 0.952 | 0.967 | 0.970 | 0.942
Table 2. Items for 3D soft tissue measurements ((L, R) Chi = (Left, Right) Cheilion; Sn = Subnasal; Stos = Stomion superius; Stoi = Stomion inferius; Sl = Sublabial; (L, R) Cph = (Left, Right) Crista Philtri; Gl = Glabella; N’ = Nasion of soft tissue).
Measurement Index | Definition
Outercanthal Width | Horizontal distance between the lateral canthi
Labial Fissure Width (ChiL-ChiR) | Distance between the mouth commissures
Nasolabial Angle (Prn-Sn-Ls) | Angle at Sn subtended by side Prn–Ls
Facial Convexity (Gl-Sn-Pg’) | Angle at Sn subtended by side Gl–Pg’
Total Facial Convexity (Gl-Prn-Pg’) | Angle at Prn subtended by side Gl–Pg’
Outer Canthal, Nasal Angle | Angle at Sn subtended by the outer canthi
Nasal Angle (N’-Prn-Sn) | Angle at Prn subtended by side N’–Sn
Nasofrontal Angle (Gl-N’-Prn) | Angle at N’ subtended by side Gl–Prn
Philtral Length (Sn-Ls) | Distance from the nasal base to the midline of the upper lip vermilion border
Philtral Width (CphR-CphL) | Distance between the philtral ridges, measured just above the vermilion border
Philtral Depth | Depth measured at the deepest midline point between the philtral ridges
Facial Height (N’-Gn’) | Vertical height (length) of the face (N’–Gn’)
Upper Lip Height (Sn-Stos) | Vertical distance between Sn and Stos
Lower Lip Height (Stoi-Sl) | Vertical distance between Stoi and Sl
Upper Lip Protrusion (|Prn-Ls|z) | Sagittal distance between Prn and Ls
Lower Lip Protrusion (|Prn-Li|z) | Sagittal distance between Prn and Li
Mentolabial Furrow Depth (|Li-Sl|z) | Sagittal distance between Li and Sl
Thickness of Upper Vermilion (|Ls-Stos|z) | Sagittal distance between Ls and Stos
Thickness of Lower Vermilion (|Li-Stoi|z) | Sagittal distance between Li and Stoi
Table 3. The deviations in each region between the face scan models and 3D reconstruction models (mm) (SD: standard deviation; CI: confidence interval).
Area | Face | Forehead | Nose | Upper Lip | Lower Lip | Chin | Paranasal (L) | Paranasal (R) | Cheek (L)
Mean | 2.00 | 1.52 | 1.72 | 1.87 | 2.28 | 2.10 | 1.71 | 1.64 | 2.13
SD | 0.38 | 0.88 | 0.75 | 0.67 | 0.67 | 1.15 | 0.71 | 0.72 | 0.89
Lower 95% CI | 1.84 | 1.14 | 1.39 | 1.58 | 1.99 | 1.60 | 1.40 | 1.33 | 1.74
Upper 95% CI | 2.17 | 1.90 | 2.04 | 2.16 | 2.57 | 2.59 | 2.01 | 1.95 | 2.52
Table 4. Deviations of landmarks in different directions, reported as mean ± SD (95% CI) in mm (SD: standard deviation; CI: confidence interval; D: total deviation; Dx: deviation in the horizontal direction; Dy: deviation in the vertical direction; Dz: deviation in the sagittal direction).
Landmark | D | Dx | Dy | Dz
Prn | 1.18 ± 1.10 (0.71–1.65) | 0.10 ± 0.11 (0.06–0.15) | 0.14 ± 0.18 (0.06–0.21) | 1.16 ± 1.08 (0.70–1.63)
Ls | 1.61 ± 1.25 (1.08–2.15) | 0.11 ± 0.15 (0.04–0.17) | 0.57 ± 0.55 (0.33–0.81) | 1.48 ± 1.15 (0.98–1.97)
Li | 1.78 ± 0.77 (1.44–2.11) | 0.08 ± 0.07 (0.05–0.11) | 0.43 ± 0.35 (0.28–0.58) | 1.70 ± 0.75 (1.37–2.02)
Lch | 1.53 ± 1.02 (1.10–1.97) | 0.21 ± 0.18 (0.14–0.29) | 0.65 ± 0.46 (0.45–0.85) | 1.35 ± 0.92 (0.95–1.75)
Rch | 1.49 ± 1.13 (1.01–1.98) | 0.18 ± 0.18 (0.11–0.26) | 0.63 ± 0.62 (0.37–0.90) | 1.32 ± 0.96 (0.90–1.73)
Pg’ | 1.58 ± 1.66 (0.86–2.30) | 0.04 ± 0.05 (0.02–0.06) | 0.31 ± 0.59 (0.06–0.57) | 1.53 ± 1.58 (0.84–2.21)
Gn’ | 1.88 ± 1.73 (1.13–2.63) | 0.06 ± 0.06 (0.03–0.08) | 0.89 ± 0.89 (0.50–1.27) | 1.65 ± 1.50 (1.00–2.30)
Me’ | 1.98 ± 1.57 (1.30–2.66) | 0.15 ± 0.25 (0.05–0.26) | 1.41 ± 1.11 (0.93–1.89) | 1.41 ± 1.13 (0.93–1.90)
Table 5. Consistency test results of soft tissue measurements between face scan models and 3D reconstruction models (ICC: intraclass correlation coefficient; CI: confidence interval).
Measurement | Intrarater 1 ICC (95% CI) | Intrarater 2 ICC (95% CI) | Interrater ICC (95% CI)
Outercanthal width | 0.983 (0.970–0.991) | 0.946 (0.904–0.970) | 0.960 (0.925–0.978)
Labial fissure width (ChiL-ChiR) | 0.934 (0.885–0.981) | 0.972 (0.951–0.985) | 0.951 (0.908–0.974)
Nasolabial angle (Prn-Sn-Ls) | 0.983 (0.969–0.990) | 0.990 (0.981–0.994) | 0.972 (0.950–0.985)
Facial convexity (Gl-Sn-Pg’) | 0.980 (0.960–0.989) | 0.991 (0.983–0.995) | 0.975 (0.803–0.992)
Total facial convexity (Gl-Prn-Pg’) | 0.992 (0.986–0.996) | 0.998 (0.997–0.999) | 0.966 (0.919–0.984)
Outer canthal, nasal angle | 0.967 (0.941–0.981) | 0.967 (0.944–0.981) | 0.930 (0.867–0.962)
Nasal angle (N’-Prn-Sn) | 0.918 (0.856–0.954) | 0.990 (0.990–0.995) | 0.932 (0.875–0.963)
Nasofrontal angle (Gl-N’-Prn) | 0.985 (0.972–0.992) | 0.995 (0.990–0.997) | 0.984 (0.971–0.991)
Philtral length (Sn-Ls) | 0.963 (0.934–0.979) | 0.934 (0.882–0.963) | 0.941 (0.893–0.967)
Philtral width (CphR-CphL) | 0.924 (0.866–0.957) | 0.912 (0.818–0.955) | 0.957 (0.923–0.976)
Philtral depth | 0.945 (0.903–0.969) | 0.938 (0.890–0.972) | 0.905 (0.829–0.947)
Facial height (N’-Gn’) | 0.995 (0.990–0.997) | 0.986 (0.975–0.992) | 0.962 (0.888–0.983)
Upper lip height (Sn-Stos) | 0.935 (0.887–0.964) | 0.968 (0.943–0.982) | 0.946 (0.902–0.970)
Lower lip height (Stoi-Sl) | 0.912 (0.846–0.950) | 0.955 (0.918–0.976) | 0.942 (0.894–0.968)
Upper lip protrusion (|Prn-Ls|z) | 0.970 (0.947–0.983) | 0.998 (0.996–0.999) | 0.986 (0.975–0.992)
Lower lip protrusion (|Prn-Li|z) | 0.926 (0.869–0.958) | 0.994 (0.989–0.996) | 0.962 (0.832–0.986)
Mentolabial furrows depth (|Li-Sl|z) | 0.953 (0.917–0.974) | 0.948 (0.908–0.971) | 0.939 (0.812–0.974)
Thickness of upper vermilion (|Ls-Stos|z) | 0.932 (0.880–0.961) | 0.934 (0.884–0.963) | 0.919 (0.851–0.956)
Thickness of lower vermilion (|Li-Stoi|z) | 0.962 (0.933–0.979) | 0.931 (0.878–0.961) | 0.951 (0.906–0.974)
Table 6. Soft tissue measurements from the face scan models and reconstruction models (*: significance level of p < 0.05).
Measurement | Face Scan Model (Mean ± SD) | Reconstruction Model (Mean ± SD) | Deviation of the Two Models (Mean ± SD) | t | p
Outercanthal width | 92.95 ± 2.70 | 92.41 ± 1.26 | 0.54 ± 2.35 | 1.102 | 0.283
Labial fissure width (ChiL-ChiR) | 44.77 ± 2.89 | 45.68 ± 2.80 | −0.92 ± 2.79 | −1.587 | 0.129
Nasolabial angle (Prn-Sn-Ls) | 105.73 ± 7.61 | 100.92 ± 5.01 | 4.81 ± 8.77 | 2.627 | 0.015 *
Facial convexity (Gl-Sn-Pg’) | 167.16 ± 4.11 | 167.81 ± 2.47 | −0.65 ± 2.86 | −1.090 | 0.288
Total facial convexity (Gl-Prn-Pg’) | 144.50 ± 4.42 | 144.65 ± 2.40 | −0.15 ± 3.79 | −0.190 | 0.851
Outer canthal, nasal angle | 93.49 ± 2.33 | 92.69 ± 2.99 | 0.80 ± 3.27 | 1.171 | 0.254
Nasal angle (N’-Prn-Sn) | 117.65 ± 3.83 | 122.31 ± 2.40 | −4.66 ± 3.81 | −5.865 | 0.000 *
Nasofrontal angle (Gl-N’-Prn) | 142.54 ± 4.18 | 144.66 ± 2.86 | −2.12 ± 4.41 | −2.298 | 0.031 *
Philtral length (Sn-Ls) | 14.82 ± 0.93 | 14.57 ± 1.38 | 0.24 ± 1.05 | 1.107 | 0.280
Philtral width (CphR-CphL) | 12.81 ± 0.68 | 12.42 ± 0.94 | 0.39 ± 0.91 | 2.040 | 0.054
Philtral depth | 1.72 ± 0.62 | 1.53 ± 0.44 | 0.19 ± 0.71 | 1.271 | 0.217
Facial height (N’-Gn’) | 115.11 ± 3.68 | 115.96 ± 2.43 | −0.85 ± 3.60 | −1.130 | 0.271
Upper lip height (Sn-Stos) | 21.68 ± 1.27 | 22.18 ± 1.43 | −0.50 ± 1.21 | −1.960 | 0.063
Lower lip height (Stoi-Sl) | 16.31 ± 1.23 | 16.43 ± 1.30 | −0.12 ± 1.20 | −0.464 | 0.647
Upper lip protrusion (|Prn-Ls|z) | 8.33 ± 2.02 | 7.62 ± 1.63 | 0.71 ± 2.14 | 1.587 | 0.127
Lower lip protrusion (|Prn-Li|z) | 11.53 ± 2.59 | 10.54 ± 1.62 | 0.99 ± 2.49 | 1.916 | 0.068
Mentolabial furrows depth (|Li-Sl|z) | 6.54 ± 1.23 | 6.57 ± 1.10 | −0.03 ± 1.14 | −0.122 | 0.904
Thickness of upper vermilion (|Ls-Stos|z) | 4.91 ± 1.10 | 5.00 ± 0.88 | −0.09 ± 1.29 | −0.321 | 0.751
Thickness of lower vermilion (|Li-Stoi|z) | 2.50 ± 1.18 | 2.84 ± 1.04 | −0.34 ± 1.36 | −1.203 | 0.242
