Article

Robust Segmentation of Partial and Imperfect Dental Arches

1 Mechanical Engineering Department, King Fahd University of Petroleum and Minerals (KFUPM), Dhahran 31261, Saudi Arabia
2 Biosystems and Machines Interdisciplinary Research Center, King Fahd University of Petroleum and Minerals (KFUPM), Dhahran 31261, Saudi Arabia
3 Department of Computer Engineering, École Polytechnique Montréal, 2900 Edouard-Montpetit Boul, Montréal, QC H3T1J4, Canada
4 Intelligent Dentaire Inc., Bureau 540, 1310 av Greene, Westmont, QC H3Z2B2, Canada
* Authors to whom correspondence should be addressed.
Appl. Sci. 2024, 14(23), 10784; https://doi.org/10.3390/app142310784
Submission received: 17 September 2024 / Revised: 12 November 2024 / Accepted: 17 November 2024 / Published: 21 November 2024

Abstract:
Automatic and accurate dental arch segmentation is a fundamental task in computer-aided dentistry. Recent trends in digital dentistry are tackling the design of 3D crowns using artificial intelligence, which initially requires a proper semantic segmentation of teeth from intraoral scans (IOS). In practice, most IOS are partial with as few as three teeth on the scanned arch, and some of them might have preparations or missing or incomplete teeth. Existing deep learning-based methods (e.g., MeshSegNet, DArch) were proposed for dental arch segmentation, but they are not as efficient for partial arches that include imperfections such as missing teeth and preparations. In this work, we present the ArchSeg framework, which can leverage various deep learning models for semantic segmentation of perfect and imperfect dental arches. The Point Transformer V2 deep learning model is used as the backbone for the ArchSeg framework. We present experiments to demonstrate the efficiency of the proposed framework in segmenting arches with various types of imperfections. Using a raw dental arch scan with two labels indicating the range of present teeth in the arch (i.e., the first and the last teeth), ArchSeg can segment a standalone dental arch or a pair of aligned master/antagonist arches when more information is available (i.e., a die mesh). Two generic models are trained for lower and upper arches; they achieve dice similarity coefficient scores of 0.936 ± 0.008 and 0.948 ± 0.007, respectively, on test sets composed of challenging imperfect arches. Our work also highlights the impact of appropriate data pre-processing and post-processing on the final segmentation performance. Our ablation study shows that the segmentation performance of the Point Transformer V2 model integrated in our framework is improved compared with the original standalone model.

1. Introduction

In North America, dental clinics handle millions of dental reconstructions annually. The intricacies and irregularities of teeth alignment make manual segmentation of optical scans a daunting task, typically consuming an average of 45 min per case [1]. Moreover, due to occlusion, digital scans often contain partially missing data. Imperfections include partial arches, missing teeth, the presence of preparations, and noise. According to the statistics of an in-house clinical dataset acquired with the help of a collaborating dental laboratory, the percentage of full arches was less than 17% among upper arches and less than 20% among lower arches. The presence of imperfections hinders the correct segmentation of dental arches.
The use of machine learning algorithms in digital dentistry has significantly increased during the past few years [2,3,4]. A recent review on tooth segmentation from CBCT images reported issues related to the low amount of available dental data and non-uniformity of evaluation metrics [5]. The same challenges exist in segmentation of teeth from 3D surface scans. Moreover, to the best of the authors’ knowledge, published research on dental arch segmentation from 3D surface scans focused on full or perfect arch segmentation such as MeshSegNet [6,7], Dental Arch Prior-assisted 3D Tooth Instance Segmentation (DArch) [8], dynamic graph convolutional neural network (DGCNN) [3], and tooth segmentation using generative adversarial networks [9]. Therefore, it is important to develop an automatic, generic, and accurate framework capable of efficiently segmenting dental arches with imperfections.
In this work, we focus on four main types of imperfections in dental arches, classified as follows in order of increasing difficulty:
(1) Partial arch with fewer than eight teeth and no preparation. This scenario is challenging from a registration point of view because the scan is short and does not always form a curved arch.
(2) Partial arch with a preparation. The preparation must be segmented accurately because it is required for the dental crown design.
(3) Partial arch with a missing tooth, with or without space. The most difficult situation is an arch with a missing tooth without space.
(4) Partial arch with both a preparation and a missing tooth. This is the most challenging situation because it combines Scenarios 2 and 3.
Segmentation of 3D dental arches is a fundamental task for various computer-aided dental treatments, in particular dental crowns [3]. In the context of dental crown restoration, we distinguish between master and antagonist arches. The master arch usually contains a prepared tooth, centered between two adjacent teeth, to receive a crown treatment. The antagonist arch opposes the master arch and provides occlusal constraints for the designed crown. Because the antagonist arch does not usually possess a preparation, its segmentation framework could be different than the master arch. In this work, we aim to propose an end-to-end framework for tooth segmentation that could be utilized for a variety of dental treatments including dental crown design.
Our main contributions are as follows:
  • A dental arch segmentation framework is designed and implemented to integrate customized trained models, demonstrating robust, scalable, and efficient performance on new data. The model is deployed at https://app.intellidentai.com. Different published deep learning models can be integrated into the segmentation framework based on the user's selection.
  • A new registration method is applied to bring perfect and imperfect arches into a unified and consistent coordinate system. Most partial arches do not have an arch shape but rather a rectangular one, which misguides registration during lingual/buccal identification. Motivated by this observation, we add a set of three-class models for gingiva cleaning to enhance the registration. Because our dataset contains preparations, we train the three-class models to separate preparations from teeth and gingiva. The three-class prediction is also applied in post-processing to remove spurious tooth labels.
  • Novel data augmentation methods are implemented to improve trained model performance for limited datasets. A jaw slicing method is developed to augment the training data with more partial arches. A missing teeth augmentation method is also implemented to improve balance in the training dataset.
The rest of the paper is organized as follows. Related work is briefly reviewed in Section 2. Each component of the ArchSeg framework is detailed in Section 3. In Section 4, we present experimental results and discuss the effectiveness of different components of ArchSeg via detailed ablation studies. The paper is concluded in Section 5.

2. Related Work

Advances in digital dentistry have significantly transformed the way dental care is delivered, enhancing precision, efficiency, and patient outcomes through cutting-edge technologies such as CAD/CAM systems and artificial intelligence (AI). Utilization of CAD/CAM technology offers enhanced precision and reproducibility, leading to better patient satisfaction and reduced clinical time [10,11,12,13]. Advanced digital imaging techniques have enhanced the accuracy and efficiency of assessing occlusion, which are crucial for effective treatment planning [14]. AI is also revolutionizing digital dentistry by making procedures faster and more personalized, offering enhanced diagnostics, automation, and continual learning [15,16].
Advanced diagnostics, precise prosthetic design, and accurate treatment planning demand automated and precise identification and analysis of individual teeth in medical imaging, such as X-rays, CT scans, and 3D intraoral scans. Therefore, accurately identifying and separating teeth from the surrounding tissue through semantic segmentation is crucial. Various methodologies like those introduced in [3,8] have advanced this goal, albeit with limitations in handling dental irregularities and anomalies. Techniques such as DentalMAE [17] and DBGANet [18] further contribute to this domain by offering advanced frameworks for 3D dental model segmentation. Nevertheless, these methods grapple with challenges in broader clinical adoption and performance in specific conditions. CNNs [19] and deep learning advancements [20] signify major strides forward in addressing root anatomy and artifacts in CBCT images [21,22]. Yet, they encounter obstacles in data annotation and model generalization, highlighting the ongoing need for innovative solutions.
In response to these barriers, the field saw the integration of novel methods by the authors of [23] and the development of a segmentation and labeling framework in [24]. Each was designed to enhance model accuracy further and handle dental abnormalities more effectively. Similarly, a recent work on utilizing a single 3D intraoral scan for segmentation opened new pathways for self-supervision methods in digital orthodontics, directly tackling the issue of manual labeling and data scarcity [25].
The evaluation of AI-generated tooth models in [26] aimed at overcoming the challenge of producing accurate tooth models for clinical scenarios with mild crowding and without dental restorations. This exploration of AI-driven methodologies highlighted the potential for increased efficiency and effectiveness, while also emphasizing the necessity for research on complex scenarios.
Meanwhile, segmentation on non-Euclidean data through MeshSegNet [7] and PointNet [27] received substantial attention from the digital dentistry community. These methods were developed in direct response to the limitations of traditional voxelization techniques, offering innovative solutions to accurately model the complex geometry of teeth and gums. However, due to the huge variability and complexity of dental data, these methods also faced the challenge of accurately processing dental models with missing teeth, partial arches, and dental preparations. Subsequent advancements pushed the boundaries of accuracy and efficiency [28,29,30]. While showcasing the potential for handling complex dental anomalies, these methods also faced challenges in processing dental models with extreme abnormalities or incomplete scans, emphasizing the crucial need for enhanced model generalization on imperfect arches.
The advent of transformer-based methods, illustrated in [31,32,33,34,35], highlights a significant shift towards using transformer architectures for understanding 3D point clouds. With innovations such as group vector attention and partition-based pooling, these models offer a substantial leap in capturing the intricate patterns within dental point clouds. Despite recent advances in dental segmentation, applying these methods to complex cases, such as those with missing teeth or partial arches, remains a significant challenge. The state-of-the-art methods for dental arch segmentation still face practical limitations in real-world applications, particularly when dealing with partial arches or full arches that contain imperfections. Therefore, we propose a novel arch segmentation framework, ArchSeg, which is robust to variations in dental data. We demonstrate that ArchSeg performs well for full and partial arches with or without missing teeth. Our proposed framework is also tuned to segment arches with dental preparations, which provides the necessary teeth context to support crown generation using AI.

3. Methods

3.1. Data Acquisition and Labeling

The data were acquired from the Montreal dental laboratory Kerenor. The dataset consists of 1006 upper and 984 lower arches, segmented by dental technicians and dental interns using commercial segmentation software. We used the FDI (Fédération Dentaire Internationale) notation system for labeling teeth. However, we also applied a 17-class notation for model training and inference (0 for gingiva, 1–14 for normal teeth, and 15–16 for wisdom teeth). Both the FDI and 17-class notations are illustrated in Figure 1. Note that all colored dental arches were generated using Mesh Labeler (v.3.36) [36]. Our framework also utilized a 3-class segmentation model during pre-processing and post-processing to distinguish teeth from gingiva and preparations, if they exist (0: gingiva, 1: tooth, 2: preparation).
Inspired by the natural form of a full dental arch, a coordinate system was defined in which the arch is centered at its center of mass and opens downward/upward for the upper/lower arch, as shown in Figure 1. We refer to this coordinate system as the Global Coordinate System (GCS).

3.2. Arch Cleaning and Simplification

Digital scans may contain small disconnected pieces. These outliers carry no useful information but affect the registration process as well as the segmentation results. To remove such outliers, we cleaned the disconnected parts of an input arch using a size threshold of 5% of the input mesh size. A dental model captured by an intraoral scan normally contains more than 200,000 faces [28]. To improve efficiency, model training often requires a simplification of the input mesh to accelerate the training and evaluation procedures; for example, MeshSegNet [7] used a predefined size of 10,000 faces. We applied mesh simplification using quadric decimation with volume preservation to obtain decimated arches comprising approximately 10,000 triangles.
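The outlier-removal step above can be sketched as follows. This is a minimal, illustrative implementation (not the authors' code): faces are grouped into connected components with union-find, and components below the 5% size threshold are dropped. The subsequent quadric decimation would be delegated to a mesh library such as VTK.

```python
import numpy as np

def remove_small_components(faces, min_frac=0.05):
    """Drop disconnected face groups smaller than min_frac of the mesh.

    faces: (F, 3) int array of vertex indices. Components are found with
    union-find over shared vertices; the kept faces are returned.
    """
    faces = np.asarray(faces)
    parent = np.arange(faces.max() + 1)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for a, b, c in faces:
        union(a, b)
        union(b, c)

    # Each face belongs to the component of its first vertex.
    roots = np.array([find(v) for v in faces[:, 0]])
    keep = np.zeros(len(faces), dtype=bool)
    for r in np.unique(roots):
        mask = roots == r
        if mask.sum() >= min_frac * len(faces):
            keep |= mask
    return faces[keep]
```

The 5% threshold follows the text; on real scans the threshold could equally be applied to surface area rather than face count.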

3.3. Registration Based on Oriented Bounding Box (OBB)

After aligning the oriented bounding box (OBB) of each labeled large-span arch in GCS, we obtained the average teeth centroids for each tooth position in GCS as illustrated in Figure 2. The arch was positioned in the x–y plane with all teeth positioning toward the z-positive direction.
We registered both full and partial arches with the guidance of these average teeth centroids and range labels. Teeth centroid information, considered part of the registration method, was consistently applied to process data during training and inference. The registration method can be described in four steps:
(1) Bring the center of mass of each input mesh to the origin;
(2) Compute the oriented bounding box (OBB) of the centered mesh with the vtkOBBTree class of the VTK library [37]. More specifically, the OBB is constructed by finding the mean and covariance matrix of the cells (and their points) that define the mesh; the eigenvectors of the covariance matrix provide a set of orthogonal vectors that define the tightest-fitting OBB;
(3) Align the obtained OBB in the x–y plane of the GCS so that the largest edge follows the x-axis (the width of the arch), the second largest edge follows the y-axis (the height of the arch), and the smallest edge follows the z-axis (the height of the teeth on the arch);
(4) Compute the required translation and rotation in the x–y plane using the estimated center of the input arch, the teeth centroids computed in Figure 2, and the start and end teeth labels specified by the user. The centroids between the start and end labels drive the translation, and the centroids of the start and end labels drive the rotation. Then, position the arch in the GCS using the constructed transformation matrix.
Figure 3 illustrates the steps of our registration method with a lower partial arch for which the start tooth label1 = 43 and the end tooth label2 = 46. In Step 3, the first principal axis of the arch OBB was aligned with the x-axis. While it is relatively easy to register full arches (with more than 8 teeth), registration of partial arches raises issues. In short partial arches (containing 3 or 4 teeth), it is challenging to identify the lingual/buccal side. Therefore, for partial arch registration in the GCS, we relied on both the first and last labels provided by the user. The die mesh, if available, can also be used to enhance the lingual/buccal identification; this is possible when the label of the provided die matches the first or last label or is close to either of them. The centroids shown in Step 4 are added for demonstration purposes. The die mesh is not only helpful for registering and segmenting arches but is also essential to define a context with a pair of aligned master and antagonist arches for dental crown design.
Figure 4 shows a set of upper arches before and after registration. The OBB-based arch registration is the most important pre-processing step, as it brings different types of arches (full and partial) into a common GCS. Figure 5 illustrates the lingual/buccal ambiguity for certain partial arches. In both left sub-figures, we show a registered full lower arch as a reference. In this example, the die mesh is at the middle of the partial arch, so it is not helpful. This example shows that registration has an important impact on the segmentation result, especially for partial arches.

3.4. Data Augmentation Methods

To balance the training dataset with more rare cases and improve the model’s generalization, we introduced augmentations for missing teeth and partial arches.

3.4.1. Missing Teeth Augmentation

Given a segmented arch, the number of teeth to extract was determined and candidates were selected in a predefined or random way. Preparations, if present, were excluded from extraction. The missing teeth augmentation method was composed of three steps:
(1) Extract the candidate teeth from the arch and fill the holes with gingiva;
(2) Map the labels of the original arch to the processed arch;
(3) Manually repair the labeling, if needed, with Mesh Labeler.
Figure 6 illustrates the missing tooth augmentation method by extracting Teeth 16 and 23 from the upper arch.
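A label-level sketch of this augmentation is given below. It is a hypothetical simplification rather than the authors' pipeline: instead of removing the tooth geometry and filling the hole with a new gingiva surface, it merely relabels the faces of the extracted teeth as gingiva, while respecting the rule that preparations are never extracted.

```python
import numpy as np

def simulate_missing_teeth(labels, n_missing, prep_label=None, seed=0):
    """Pick random tooth labels to 'extract' and relabel their faces as
    gingiva (class 0). Gingiva and the preparation label, if any, are
    never candidates for extraction."""
    labels = np.asarray(labels).copy()
    candidates = [c for c in np.unique(labels)
                  if c != 0 and c != prep_label]      # skip gingiva and prep
    rng = np.random.default_rng(seed)
    extracted = rng.choice(candidates, size=n_missing, replace=False)
    labels[np.isin(labels, extracted)] = 0            # fill with gingiva label
    return labels, sorted(int(t) for t in extracted)
```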

3.4.2. Jaw Slicing Augmentation

To enhance the generalization of the segmentation framework for partial arches, we integrated an augmentation method to slice a full segmented arch and extract several partial arches from it.
Given a segmented arch and a window size (the number of teeth to keep in the sliced jaw), the slicing jaw method generated all possible sliced jaws by iteratively keeping only the selected number of teeth and adding extra closing gingiva with a threshold (5 mm by default).
Figure 7 shows the result obtained with our slicing jaw augmentation: from an arch initially comprising 11 teeth and a slicing window size of 6, 6 partial arches with 6 teeth each were obtained. Note that the window size is defined as the number of teeth present in the sliced jaw. For an arch with a missing tooth, a sliced jaw of the same window size spans one extra tooth position because the missing tooth is ignored during slicing.
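The window enumeration behind this augmentation can be sketched as below (an illustrative simplification; the real method also attaches a band of closing gingiva, 5 mm by default, to each slice).

```python
def slice_jaw_windows(present_teeth, window):
    """Enumerate all contiguous runs of `window` present teeth; each run
    corresponds to one sliced partial arch.

    `present_teeth` is the ordered list of tooth labels on the arch.
    Missing teeth are simply absent from the list, so a run of `window`
    entries may span a gap, matching the note that a missing tooth is
    ignored during slicing.
    """
    return [present_teeth[i:i + window]
            for i in range(len(present_teeth) - window + 1)]
```

With 11 teeth and a window of 6 this yields 6 slices, matching the count reported for Figure 7.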

3.5. Model Training

PTV2 [34] is a point-based transformer model for point cloud representation learning and segmentation. It improves on PTV1 [33] with efficient grouped vector attention and a partition-based pooling strategy. Evaluated on public datasets such as ScanNet v2, S3DIS, and ModelNet40, it achieved state-of-the-art performance on point cloud classification and semantic segmentation benchmarks. PTV2 was chosen in this work due to its proven potential in semantic segmentation tasks on 3D data. Automated processing and analysis of intraoral 3D dental scans is highly challenging due to the high variability and complexity of different cases. One objective of this work is to test and report the performance of PTV2 for segmentation of partial and imperfect dental arches. Because curvature values vary abruptly around each tooth boundary, adding discrete curvature channels on mesh vertices is expected to improve boundary segmentation accuracy [38,39].
Three major segmentation frameworks were trained: a 17-class upper model, a 17-class lower model, and a 3-class model for both upper and lower arches. For each framework, we used PTV2 [34] as the deep learning model and trained with 5-fold cross-validation; consequently, 15 trained models were obtained. We trained separate models for lower and upper arches because the shape of the teeth in the lower arch differs from that in the upper arch; for example, the incisor teeth in upper arches are larger than those in lower arches. The 3-class models were trained with both lower and upper arches, all registered as lower arches. To enhance the registration step, the 3-class models were used to clean the master arches from excessive gingiva, which may otherwise cause registration failure. Proper registration of master arches is particularly important because the corresponding antagonist arch is registered using the same transformation matrix as its master arch. The 3-class models can also be used to separate teeth from gingiva and to localize the preparations in the arches.

3.6. Data Pre-Processing

Our dataset contained arches classified as master arches with preparations or antagonist arches, in addition to a die mesh. The dataset was split into upper and lower arches; therefore, both master and antagonist arches may exist in either the lower or upper dataset. After the annotation of arches, we applied the missing tooth augmentation method described in Section 3.4.1 to each relatively full arch (with more than 9 teeth) and generated two augmented arches with 2 and 3 missing teeth, respectively.
The procedure to prepare the training set is as follows:
(a) Clean and decimate the input arch to approximately 10k triangles;
(b) Register decimated original arches and synthetic arches with missing teeth;
(c) Slice each registered arch with different window sizes, select 3 slices randomly, and upsample to 10k triangles;
(d) Apply reflection augmentation around the x-axis;
(e) Apply translation, rotation, and scaling augmentations within small ranges;
(f) Split the dataset into 5 folds for cross-validation.
The ranges for the rigid transformations were ±5 mm for translation in the x and y axes and ±2 mm for translation in the z axis; ±5 degrees for rotation about the x and y axes and ±15 degrees for rotation about the z axis; and [0.9, 1.1] for scaling. Min–max normalization of arches was applied based on the origin of the GCS.
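A minimal sketch of one augmentation draw with these ranges (illustrative, not the training code):

```python
import numpy as np

def random_rigid_augment(points, rng):
    """Apply one random translation/rotation/scaling draw within the
    training ranges: ±5 mm in x/y, ±2 mm in z, ±5° about x/y,
    ±15° about z, and isotropic scaling in [0.9, 1.1]."""
    tx, ty = rng.uniform(-5, 5, 2)
    tz = rng.uniform(-2, 2)
    ax, ay = np.radians(rng.uniform(-5, 5, 2))
    az = np.radians(rng.uniform(-15, 15))
    s = rng.uniform(0.9, 1.1)

    def rot(angle, i, j):
        """Elementary rotation in the (i, j) coordinate plane."""
        R = np.eye(3)
        R[i, i] = R[j, j] = np.cos(angle)
        R[i, j], R[j, i] = -np.sin(angle), np.sin(angle)
        return R

    R = rot(az, 0, 1) @ rot(ay, 0, 2) @ rot(ax, 1, 2)
    return s * (points @ R.T) + [tx, ty, tz]
```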

3.7. Loss Function and Evaluation Metrics

The PTV2 models were trained with a loss based on the Dice Coefficient (DC) defined in [40] and generalized by introducing an exponent parameter d as shown in Equation (1):
$$DC = \frac{2\sum_{i}^{N} p_i g_i}{\sum_{i}^{N} p_i^{d} + \sum_{i}^{N} g_i^{d}}$$
where $p_i \in P$ is the prediction and $g_i \in G$ is the ground truth, with both meshes having $N$ triangles, and $d = 2$ [40]. A Generalized Dice loss (GDL) function was defined with DC by considering the number of classes and the weight of each class as shown in Equation (2):
$$GDL = \frac{\sum_{i}^{C} w_i \left(1 - DC_i\right)}{C}$$
where C is the number of classes and w_i is the weight of class i. Three performance evaluation metrics were used for model evaluation: (1) the Dice similarity coefficient (DSC); (2) the sensitivity (SEN); and (3) the positive predictive value (PPV) [7].
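Both the loss and the evaluation metrics can be sketched in a few lines of NumPy (an illustrative implementation of the definitions above, not the training code):

```python
import numpy as np

def generalized_dice_loss(probs, onehot, weights, d=2):
    """Generalized Dice loss of Equations (1)-(2).

    probs, onehot: (N, C) arrays of per-cell class probabilities and
    one-hot ground truth; weights: (C,) per-class weights; d is the
    exponent in the Dice denominator (d = 2 in this work)."""
    num = 2 * (probs * onehot).sum(axis=0)
    den = (probs ** d).sum(axis=0) + (onehot ** d).sum(axis=0)
    dc = num / den                         # per-class Dice coefficient
    return (weights * (1 - dc)).sum() / len(weights)

def seg_metrics(pred, gt, cls):
    """Per-class evaluation metrics over hard label arrays:
    DSC, sensitivity (SEN), and positive predictive value (PPV)."""
    p, g = np.asarray(pred) == cls, np.asarray(gt) == cls
    tp = np.logical_and(p, g).sum()
    return (2 * tp / (p.sum() + g.sum()),  # DSC
            tp / g.sum(),                  # SEN: fraction of true class recovered
            tp / p.sum())                  # PPV: fraction of predictions correct
```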

3.8. Training Details

Figure 8 illustrates the architecture of the PTV2 network integrated in the ArchSeg framework. Under each stage block, the number of sampled points and feature dimensions are indicated. PTV2 adopts the U-Net architecture with skip connections. There are four stages of encoders and decoders with block depth [2, 2, 6, 2] and [1, 1, 1, 1], respectively. Details on the Grouped Vector Attention (GVA) and the partition-based pooling blocks can be found in [34].
The training set for the upper and lower models contained 312 and 300 arches, respectively. Compared to the default PTV2 setting, we added curvature as a new feature because the tooth boundary is sensitive to the curvature values. Our model hyper-parameters included num_classes = 17, batch_size = 256, optimizer = AdamW, learning_rate = 0.05 with a scheduler of type OneCycleLR, weight_decay = 0.02, number of feature channels = 9 for each triangle in the mesh representing its barycenter coordinates (3), its normal (3), and its discrete mean curvatures (3). Most of the aforementioned parameters except num_classes, batch_size, and curvature channels were selected as suggested by previous experimental testing of PTV2 [34] on ScanNet [41].
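The 9-channel input can be sketched as follows, assuming one precomputed discrete mean-curvature value per vertex (the function name and exact channel layout are illustrative, not taken from the PTV2 code):

```python
import numpy as np

def triangle_features(verts, faces, vert_curv):
    """Build the 9 per-triangle input channels: barycenter (3),
    unit face normal (3), and the discrete mean curvature of the
    triangle's three vertices (3)."""
    tri = verts[faces]                                  # (F, 3, 3) corner coords
    bary = tri.mean(axis=1)                             # barycenters
    n = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    n /= np.linalg.norm(n, axis=1, keepdims=True)       # unit normals
    curv = vert_curv[faces]                             # (F, 3) per-corner curvature
    return np.hstack([bary, n, curv])                   # (F, 9) feature matrix
```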
For the lower arches, we used window sizes of 5, 6, 7, and 8 for the slicing jaw augmentation. The training dataset contained 32,000 cases after augmentation, and the number of epochs was 100. For the upper arches, we applied smaller window sizes of 3, 4, 5, and 6 for the slicing jaw augmentation. The training dataset contained 60,000 cases after augmentation, and the number of epochs was 200.
For 3-class training, 524 arches were selected. Only arches with preparations were augmented with slicing jaw, and only sliced jaws with preparations were kept using window sizes of 3, 5, and 7. After missing tooth and slicing jaw augmentations, we obtained 1435 arches. After standard augmentation, the training dataset had 33,000 cases. The number of epochs for this training was 300.
All PTV2 models were trained with 5 CPUs and 2 NVIDIA A100 GPUs using resources provided by the Digital Research Alliance of Canada. The PTV2 transformer-based model had more than 11.3 M trainable parameters. Note that PTV2 can be trained with multiple GPUs, which accelerates the training process. All predicted results were further refined by a specific graph-cut post-processing [42].

3.9. Test Set

To evaluate the effectiveness of the proposed segmentation framework, we tested it on challenging cases with a variety of imperfections. The commercial CAD software EXOCAD (DentalCAD 3.0 Galway; EXOCAD GmbH, Darmstadt, Germany) was used to segment 30 upper and 30 lower partial arches to provide the ground truth. All cases were partial with fewer than 8 teeth and unseen by the trained models.

4. Results and Discussion

Using the previously described methods, we proposed a novel dental arch segmentation framework ArchSeg. Figure 9 illustrates our segmentation framework in detail.
There are two premises that help in building this framework. The first one is gingiva cleaning, which is helpful for arch registration. The second one is the three-class prediction, which is more robust than the seventeen-class model because it was trained with a smaller number of classes. In the proposed segmentation framework, we defined a general segmentation procedure as a module composed of three functions: registration, prediction with pre-trained models, and post-processing with extra information.
For an input arch, the first step is pre-processing as discussed in Section 3.2: small disconnected outliers are removed and the input arch is decimated. The next step is gingiva cleaning, which is an instance of the segmentation procedure with a three-class pre-trained model. The objective of gingiva cleaning is to remove extra gingiva that may hinder the registration of partial arches with lingual/buccal ambiguity, as shown in Figure 5. The output of gingiva cleaning is two meshes. The first mesh contains the teeth, preparations, and a few gingiva triangles around the tooth boundaries. The second mesh contains the remaining gingiva, which is ignored in the next step and restored to obtain the final output. The gingiva-cleaned three-class mesh is the input of the next step, which is another instance of the segmentation procedure with a 17-class pre-trained model.

4.1. Statistics and Types of Imperfections in Our Dataset

Figure 10 illustrates the distribution of the four imperfection categories in our training set. Arches of Categories 2 and 4 are master arches; the others are antagonist arches. Note that only 35% of upper and 32% of lower arches in our training set were partial with fewer than eight teeth.
Figure 11 illustrates examples of partial arches with different imperfections in our partial arch test set. Figure 11 also shows an additional imperfection that was not discussed above: an incomplete side tooth. An example is shown in the second row, third column of Figure 11. An incomplete side tooth may cause registration inaccuracy because OBB registration for partial arches depends on the teeth label range. In addition, it may drive one of the class-wise DSC values to zero (or a relatively low score).

4.2. Results with Default Settings

Table 1 shows the three-class segmentation results, with values of 0.95 and above for all metrics and classes. For both the DSC and PPV metrics, the segmentation accuracy for teeth was higher than that for gingiva and preparations. For the SEN metric, the segmentation accuracy for preparations was the highest. Overall, the segmentation accuracy for the three classes was higher than 0.95 for all metrics. Figure 12 presents the class-wise performance of our 17-class segmentation framework with default settings on the partial arch test set. The figure shows that the segmentation accuracy is comparable for the upper and lower arches across all three considered metrics. In addition, the segmentation accuracy approaches or exceeds 0.9 for almost all positions across the three metrics.
Figure 13 shows representative qualitative results. We omit the ground truth to save space, particularly because most of the cases achieved high DSC scores. We can observe that our trained model can segment wisdom teeth and properly handle a missing tooth. Because we registered the antagonist with the help of the master registration matrix by default, the wall-time needed for segmenting a pair of decimated (10k) and aligned master/antagonist arches was about 25–30 s.

4.3. Comparison with MeshSegNet

As a baseline, we trained the well-known MeshSegNet [6] with similar pre-processing and dataset. Figure 14 shows the segmentation results for the same cases illustrated in Figure 13. We can observe that PTV2 outperforms MeshSegNet in terms of DSC for all cases. The segmentation accuracy for PTV2 is higher than that of MeshSegNet with values of percentage increase ranging from 1.1% to 15.5%. One explanation is that the transformer-based model PTV2 is much larger than the graph convolution network-based MeshSegNet in size. The number of parameters of PTV2 is 11.4 M, whereas that of MeshSegNet is 1.8 M. Another explanation is that the self-attention, group attention, and appropriate position encoding allow PTV2 to achieve a higher accuracy for large and complex 3D point cloud representation compared with MeshSegNet.

4.4. Dice Similarity Coefficient (DSC) for Imperfect Arches

DSC has been widely applied as a metric for semantic segmentation of teeth [7]. However, in the context of imperfect dental arch segmentation for crown generation, two concerns should be highlighted. First, DSC always overestimates the average performance because all absent classes obtain a score of one. For a partial arch with fewer than eight teeth, the average class-wise DSC is always larger than 10/17 (≈0.588). Second, DSC does not reflect the quality of the segmentation from a dentist's point of view. In fact, we prefer mislabeled but appropriately segmented teeth to partially segmented teeth with correct labels. Figure 15 shows two examples. The first case ((a) ground truth, (b) prediction) has a smaller average DSC, but the result is acceptable because shifted labels are easier to correct than mixed labels. The second case ((c) ground truth, (d) prediction) has a larger DSC value but places two labels on one tooth. DSC remains an adequate metric to evaluate segmentation performance in general; however, the aforementioned concerns should be considered when analyzing the results.
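The first concern can be made concrete with a small helper (hypothetical, for illustration): under the convention that absent classes score 1, an arch with only seven present classes out of seventeen has a worst-case average DSC of 10/17, even when every present class scores 0.

```python
def average_classwise_dsc(present_scores, n_classes):
    """Average class-wise DSC under the convention that every absent
    class scores 1. `present_scores` holds the DSC values of the
    classes actually present on the arch."""
    n_absent = n_classes - len(present_scores)
    return (sum(present_scores) + n_absent) / n_classes
```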

4.5. Ablation Study

We conduct ablation studies to examine the effectiveness of each component implemented in ArchSeg.

4.5.1. Impact of Registration

To train a model capable of segmenting full and partial arches at the same time, we need to place partial arches in the registration coordinate system (see RCS in Section 3.3). OBB registration is therefore the most important factor for partial arch segmentation, and previous studies have reported the importance of registration for processing dental arch scans [7,43]. Because no pre-trained PTV2 tooth segmentation model is available, we compare against the published MeshSegNet model [7]. Figure 16 compares the published pre-trained MeshSegNet model (without registration and with a large range of data augmentation) with the MeshSegNet and PTV2 models trained on our data (with registration and a small range of data augmentation). For the full arch in the top row, the three models obtained close results, with the exception of the last tooth, which was better segmented by ArchSeg. The MeshSegNet and PTV2 models integrated in ArchSeg performed much better on the semi-partial arch of the middle row than the published pre-trained MeshSegNet model. The bottom row shows a partial arch with fewer than eight teeth: the published MeshSegNet model was unable to correctly segment any complete tooth, whereas the MeshSegNet and PTV2 models trained on our data after proper registration provided very good results, with PTV2 slightly outperforming MeshSegNet in terms of DSC.
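The core of an OBB-style alignment can be approximated with principal component analysis: center the cloud, then rotate its principal axes onto the coordinate axes. The sketch below is an illustrative PCA approximation, not the exact OBB procedure of Section 3.3:

```python
import numpy as np

def obb_align(points):
    """Center a point cloud and rotate it so that its principal axes
    align with x, y, z (a PCA approximation of OBB alignment)."""
    centered = points - points.mean(axis=0)
    # Eigenvectors of the covariance matrix approximate the OBB axes;
    # eigh returns eigenvalues in ascending order, so reverse the
    # columns to place the longest axis along x.
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    return centered @ vecs[:, ::-1]

# A cloud stretched along the (1, 1, 0) diagonal ends up stretched
# along x after alignment.
pts = np.array([[i, i, 0.0] for i in range(10)])
aligned = obb_align(pts)
```

A full registration would additionally resolve the lingual/buccal orientation, which, as noted above, is where the die mesh helps.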

4.5.2. Impact of Other Factors

Further experiments were conducted to evaluate the effects of (1) pre-processing with/without die meshes; (2) pre-processing with/without gingiva cleaning for master arches; and (3) the framework with/without post-processing. The rationale behind these experiments was to identify the procedure that affects the segmentation results the most. The ablation results were compared with the default settings, which use the die mesh, gingiva cleaning for the master arch, and post-processing.
Table 2 provides the DSC performance of our framework under the different settings; the standard error is used to measure dispersion, as suggested in [44]. Performance degrades under every ablated setting, and the PTV2-trained models are most sensitive to the absence of the die mesh; the default setting (in bold) performs best. Based on these results, registration is considered the most important step for the PTV2 segmentation model, because the die mesh provides essential information for proper registration. A remaining difficulty is distinguishing two consecutive premolars when one of them is missing without leaving a space: in such situations, PTV2 often segmented the teeth properly but shifted their labels, which decreased its DSC. Although we used missing-teeth augmentation to increase the number of training samples with missing teeth with space, the augmentation of missing teeth without space was not addressed in this work.
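For reference, the dispersion reported in Table 2 is the standard error of the mean [44]. A minimal sketch, with hypothetical per-case DSC values, is:

```python
import math

def mean_and_standard_error(values):
    """Mean and standard error (sample standard deviation / sqrt(n)),
    the dispersion measure used in the ablation table."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return mean, math.sqrt(var) / math.sqrt(n)

# Hypothetical per-case DSC values for one setting
m, se = mean_and_standard_error([0.91, 0.95, 0.93, 0.96, 0.94, 0.92])
print(f"{m:.3f} ± {se:.3f}")  # → 0.935 ± 0.008
```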

4.6. Potential Applications and Limitations

The proposed dental arch segmentation framework is valuable for diagnosis and treatment planning [22,45]. In addition, with its ability to process dental preparations, it provides the neighbouring context required for dental crown generation: during the training of a crown generation deep learning model [32], the prepared tooth, the adjacent teeth on the master arch, and the opposing teeth on the antagonist arch must be segmented. Dental laboratories usually provide the die mesh for each preparation, which can also be used to enhance the segmentation process. For each case in our dataset, the label range, the missing teeth label(s), the preparation label, and the labels of teeth with an incomplete side were provided. We used this information to select specific test cases for ArchSeg evaluation such that a wide range of imperfections is present in the test cases.
When available, the die mesh was used for lingual/buccal identification during master arch registration. In most cases, dental laboratories provide aligned scans of both the master and antagonist arches; in such cases, the transformation matrix used to register the master arch was also applied to the antagonist arch. The die mesh was also helpful in post-processing and preparation repair, because the scan of the master arch might not contain all preparation details. When the model predicted a wrong label for the prepared tooth, the die mesh was used to correct the label in the arch, since the die’s label is known.
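The die-mesh label correction described above can be sketched as a nearest-distance relabeling. The function and radius threshold below are hypothetical, illustrative assumptions rather than the framework's actual implementation:

```python
import numpy as np

def correct_prep_label(arch_points, arch_labels, die_points, die_label,
                       radius=0.5):
    """Overwrite the labels of arch points lying within `radius` of the
    die mesh with the die's known tooth label."""
    # Distance from every arch point to its nearest die-mesh point
    d = np.linalg.norm(arch_points[:, None, :] - die_points[None, :, :],
                       axis=-1).min(axis=1)
    labels = arch_labels.copy()
    labels[d < radius] = die_label
    return labels

# Hypothetical example: the first arch point sits on the die surface,
# so its wrongly predicted label is replaced by the die's label (5).
arch = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
pred = np.array([3, 7])
die = np.array([[0.0, 0.0, 0.1]])
corrected = correct_prep_label(arch, pred, die, die_label=5)
# → corrected is [5, 7]
```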
The proposed dental arch segmentation framework, ArchSeg, also presents a few limitations: (1) to segment partial arches, ArchSeg relies on information provided by the user, such as the range of teeth labels in the arch and the availability of die meshes to assist arch registration; (2) the registration of partial arches is not robust enough when lingual/buccal ambiguity exists; (3) extreme cases, such as missing teeth without space, remain challenging; (4) incomplete side teeth of an arch may be mislabeled or labeled as gingiva.
The proposed solution aims to assist dental technicians with automated and accurate segmentation of teeth. As such, this tool could be integrated into an application programming interface (API) that allows the user to modify and download the output. The segmentation tool has been deployed at https://app.intellidentai.com/; it has been validated by collaborating dental professionals and is pending further evaluation by external experts and users. Future work will focus on creating synthetic data covering cases of missing teeth without space, as well as adding data from different dental laboratories to enhance the generalization of the segmentation framework. Note that ArchSeg is designed to allow the integration of any segmentation model in the future; advanced users can choose the arch segmentation model through the framework at the same address.

5. Conclusions

In this work, we proposed a framework to automatically segment and label teeth on full and partial dental arches acquired by 3D intraoral scanners (IOSs). To our knowledge, it is the first end-to-end framework able to segment and label imperfect partial dental arches with missing, incomplete, prepared, and/or wisdom teeth. Our pre-processing involves OBB registration and three-class inference for gingiva cleaning, which is helpful for partial arch registration. We used the transformer-based PTV2 as the deep learning model with additional curvature features, and implemented several post-processing methods to enhance the segmentation results, such as the graph-cut algorithm, the die mesh, and the three-class pre-trained model prediction. Tests on challenging cases showed that the framework is efficient, robust, and generic, with the capability to segment partial arches with different types of imperfection. Future work will investigate clinical use cases for the ArchSeg framework, in addition to implementing uncertainty quantification techniques to provide a confidence measure for the predicted segmentation and labeling during real-time deployment, along the same lines as [35].

Author Contributions

Y.Z.: Methodology, Software, Validation, Investigation, Data Curation, Writing—Original Draft. A.A.: Conceptualization, Methodology, Formal analysis, Validation, Writing—Original Draft. G.H.: Writing—Review and Editing. J.K.: Data Curation, Validation, Resources. F.C.: Supervision, Writing—Review and Editing. F.G.: Supervision, Project administration, Funding acquisition, Resources, Writing—Review and Editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by KerenOr, Intellident Dentaire Inc., iMD Research, the Natural Science and Engineering Research Council of Canada [Ref.: ALLRP 583415-23], MEDTEQ [19-D Volumétrie dentaire 2], and King Fahd University of Petroleum and Minerals (KFUPM) [Ref.: ISP23205].

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Polytechnique Montréal Research Ethics Committee (protocol code CER-2021-20-D, 29 September 2020).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data underlying this article cannot be shared publicly to protect the privacy of individuals that participated in the study.

Acknowledgments

We thank the Canadian Digital Alliance for providing the computational resources used in this work. The authors acknowledge the help and support from JACOBB.

Conflicts of Interest

Author Julia Keren is employed at the companies KerenOr and Intellident Dentaire Inc. All authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Alsheghri, A.; Ghadiri, F.; Zhang, Y.; Lessard, O.; Keren, J.; Cheriet, F.; Guibault, F. Semi-supervised segmentation of tooth from 3D scanned dental arches. In Proceedings of the Medical Imaging 2022: Image Processing, San Diego, CA, USA, 20–24 February 2022; SPIE: Bellingham, WA, USA, 2022; Volume 12032, pp. 766–771. [Google Scholar]
  2. Piché, N.; Lasry, N.; Alsheghri, A.; Cheriet, F.; Ghadiri, F.; Guibault, F.; Hosseinimanesh, G.; Keren, J.; Lessard, O.; Zhang, Y.; et al. Automatic Generation of Dental Restorations Using Machine Learning. U.S. Patent 18/017,809, 7 September 2023. [Google Scholar]
  3. Im, J.; Kim, J.Y.; Yu, H.S.; Lee, K.J.; Choi, S.H.; Kim, J.H.; Ahn, H.K.; Cha, J.Y. Accuracy and efficiency of automatic tooth segmentation in digital dental models using deep learning. Sci. Rep. 2022, 12, 9429. [Google Scholar] [CrossRef] [PubMed]
  4. Jang, T.J.; Lee, S.H.; Yun, H.S.; Seo, J.K. Artificial Intelligence for Digital Dentistry. In Deep Learning and Medical Applications; Springer: Berlin/Heidelberg, Germany, 2023; pp. 177–213. [Google Scholar]
  5. Tarce, M.; Zhou, Y.; Antonelli, A.; Becker, K. The Application of Artificial Intelligence for Tooth Segmentation in CBCT Images: A Systematic Review. Appl. Sci. 2024, 14, 6298. [Google Scholar] [CrossRef]
  6. Lian, C.; Wang, L.; Wu, T.H.; Liu, M.; Durán, F.; Ko, C.C.; Shen, D. MeshSNet: Deep multi-scale mesh feature learning for end-to-end tooth labeling on 3D dental surfaces. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference, Shenzhen, China, 13–17 October 2019; Proceedings, Part VI; Springer: Berlin/Heidelberg, Germany, 2019; pp. 837–845. [Google Scholar]
  7. Lian, C.; Wang, L.; Wu, T.H.; Wang, F.; Yap, P.T.; Ko, C.C.; Shen, D. Deep multi-scale mesh feature learning for automated labeling of raw dental surfaces from 3D intraoral scanners. IEEE Trans. Med. Imaging 2020, 39, 2440–2450. [Google Scholar] [CrossRef] [PubMed]
  8. Qiu, L.; Ye, C.; Chen, P.; Liu, Y.; Han, X.; Cui, S. DArch: Dental Arch Prior-assisted 3D Tooth Instance Segmentation. arXiv 2022, arXiv:2204.11911. [Google Scholar]
  9. Kim, T.; Cho, Y.; Kim, D.; Chang, M.; Kim, Y.J. Tooth segmentation of 3D scan data using generative adversarial networks. Appl. Sci. 2020, 10, 490. [Google Scholar] [CrossRef]
  10. Zahel, A.; Roehler, A.; Kaucher-Fernandez, P.; Spintzyk, S.; Rupp, F.; Engel, E. Conventionally and digitally fabricated removable complete dentures: Manufacturing accuracy, fracture resistance and repairability. Dent. Mater. 2024, 40, 1635–1642. [Google Scholar] [CrossRef]
  11. Caron, E.; Marino, F.A.T.; Alageel, O.S.; Alsheghri, A.; Song, J. Computer-Aided Design and Manufacturing of Removable Partial Denture Frameworks with Enhanced Biomechanical Properties. U.S. Patent 10,959,818, 30 March 2021. [Google Scholar]
  12. Richert, R.; Alsheghri, A.A.; Alageel, O.; Caron, E.; Song, J.; Ducret, M.; Tamimi, F. Analytical model of I-bar clasps for removable partial dentures. Dent. Mater. 2021, 37, 1066–1072. [Google Scholar] [CrossRef]
  13. Persson, A.S.; Andersson, M.; Odén, A.; Sandborgh-Englund, G. Computer aided analysis of digitized dental stone replicas by dental CAD/CAM technology. Dent. Mater. 2008, 24, 1123–1130. [Google Scholar] [CrossRef]
  14. Grochala, D.; Paleczek, A.; Lemejda, J.; Kajor, M.; Iwaniec, M. Evaluation of Geometric Occlusal Conditions Based on the Image Analysis of Dental Plaster Models. In Proceedings of the MATEC Web of Conferences, Tlen, Poland, 8–11 September 2020; EDP Sciences: Les Ulis, France, 2022; Volume 357, p. 05006. [Google Scholar]
  15. Naqushbandi, F.S.; John, A. Sequence of actions recognition using continual learning. In Proceedings of the 2022 Second International Conference on Artificial Intelligence and Smart Energy (ICAIS), Coimbatore, India, 23–25 February 2022; pp. 858–863. [Google Scholar]
  16. Arjumand, B. The Application of artificial intelligence in restorative Dentistry: A narrative review of current research. Saudi Dent. J. 2024, 36, 835–840. [Google Scholar] [CrossRef]
  17. Almalki, A.; Latecki, L.J. Self-Supervised Learning With Masked Autoencoders for Teeth Segmentation From Intra-Oral 3D Scans. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2024; pp. 7820–7830. [Google Scholar]
  18. Lin, Z.; He, Z.; Wang, X.; Zhang, B.; Liu, C.; Su, W.; Tan, J.; Xie, S. DBGANet: Dual-branch geometric attention network for accurate 3D tooth segmentation. IEEE Trans. Circuits Syst. Video Technol. 2023, 34, 4285–4298. [Google Scholar] [CrossRef]
  19. Polizzi, A.; Quinzi, V.; Ronsivalle, V.; Venezia, P.; Santonocito, S.; Lo Giudice, A.; Leonardi, R.; Isola, G. Tooth automatic segmentation from CBCT images: A systematic review. Clin. Oral Investig. 2023, 27, 3363–3378. [Google Scholar] [CrossRef] [PubMed]
  20. Chen, X.; Ma, N.; Xu, T.; Xu, C. Deep learning-based tooth segmentation methods in medical imaging: A review. Proc. Inst. Mech. Eng. Part H J. Eng. Med. 2024, 238, 115–131. [Google Scholar] [CrossRef] [PubMed]
  21. Izzetti, R.; Nisi, M.; Gennai, S.; Graziani, F. Evaluating the relationship between mandibular third molar and mandibular canal with semiautomatic segmentation: A pilot study on CBCT datasets. Appl. Sci. 2022, 12, 502. [Google Scholar] [CrossRef]
  22. Park, J.; Lee, J.; Moon, S.; Lee, K. Deep learning based detection of missing tooth regions for dental implant planning in panoramic radiographic images. Appl. Sci. 2022, 12, 1595. [Google Scholar] [CrossRef]
  23. Jang, T.J.; Yun, H.S.; Hyun, C.M.; Kim, J.E.; Lee, S.H.; Seo, J.K. Fully automatic integration of dental CBCT images and full-arch intraoral impressions with stitching error correction via individual tooth segmentation and identification. Med. Image Anal. 2024, 93, 103096. [Google Scholar] [CrossRef] [PubMed]
  24. Zhuang, S.; Wei, G.; Cui, Z.; Zhou, Y. Robust hybrid learning for automatic teeth segmentation and labeling on 3D dental models. IEEE Trans. Multimed. 2023. [Google Scholar] [CrossRef]
  25. Jana, A.; Subhash, H.M.; Metaxas, D. Automatic tooth segmentation from 3d dental model using deep learning: A quantitative analysis of what can be learnt from a single 3d dental model. In Proceedings of the 18th International Symposium on Medical Information Processing and Analysis, Valparaiso, Chile, 9–11 November 2022; SPIE: Bellingham, WA, USA, 2023; Volume 12567, pp. 42–51. [Google Scholar]
  26. Al-Ubaydi, A.S.; Al-Groosh, D. The validity and reliability of automatic tooth segmentation generated using artificial intelligence. Sci. World J. 2023, 2023, 5933003. [Google Scholar] [CrossRef]
  27. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 652–660. [Google Scholar]
  28. Xu, X.; Liu, C.; Zheng, Y. 3D tooth segmentation and labeling using deep convolutional neural networks. IEEE Trans. Vis. Comput. Graph. 2018, 25, 2336–2348. [Google Scholar] [CrossRef]
  29. Zheng, Y.; Chen, B.; Shen, Y.; Shen, K. Teethgnn: Semantic 3d teeth segmentation with graph neural networks. IEEE Trans. Vis. Comput. Graph. 2022, 29, 3158–3168. [Google Scholar] [CrossRef]
  30. Cui, Z.; Li, C.; Chen, N.; Wei, G.; Chen, R.; Zhou, Y.; Shen, D.; Wang, W. TSegNet: An efficient and accurate tooth segmentation network on 3D dental model. Med. Image Anal. 2021, 69, 101949. [Google Scholar] [CrossRef]
  31. Lu, D.; Xie, Q.; Wei, M.; Gao, K.; Xu, L.; Li, J. Transformers in 3d point clouds: A survey. arXiv 2022, arXiv:2205.07417. [Google Scholar]
  32. Hosseinimanesh, G.; Ghadiri, F.; Alsheghri, A.; Zhang, Y.; Keren, J.; Cheriet, F.; Guibault, F. Improving the quality of dental crown using a transformer-based method. In Proceedings of the Medical Imaging 2023: Physics of Medical Imaging, San Diego, CA, USA, 19–23 February 2023; SPIE: Bellingham, WA, USA, 2023; Volume 12463, pp. 802–809. [Google Scholar]
  33. Zhao, H.; Jiang, L.; Jia, J.; Torr, P.H.; Koltun, V. Point Transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, BC, Canada, 11–17 October 2021; pp. 16259–16268. [Google Scholar]
  34. Wu, X.; Lao, Y.; Jiang, L.; Liu, X.; Zhao, H. Point transformer v2: Grouped vector attention and partition-based pooling. Adv. Neural Inf. Process. Syst. 2022, 35, 33330–33342. [Google Scholar]
  35. Alsheghri, A.; Ladini, Y.; Hosseinimanesh, G.; Chafi, I.; Keren, J.; Cheriet, F.; Guibault, F. Adaptive Point Learning with Uncertainty Quantification to Generate Margin Lines on Prepared Teeth. Appl. Sci. 2024, 14, 9486. [Google Scholar] [CrossRef]
  36. Wu, T.H.; Lian, C.; Piers, C.; Pastewait, M.; Wang, L.; Shen, D.; Ko, C.C. Machine (deep) learning for orthodontic CAD/CAM technologies. In Machine Learning in Dentistry; Springer: Cham, Switzerland, 2021; pp. 117–129. [Google Scholar]
  37. Schroeder, W.; Martin, K.; Lorensen, B. The Visualization Toolkit, 4th ed.; Kitware: Clifton Park, NJ, USA, 2006. [Google Scholar]
  38. Zhao, M.; Ma, L.; Tan, W.; Nie, D. Interactive tooth segmentation of dental models. In Proceedings of the 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, 17–18 January 2006; pp. 654–657. [Google Scholar]
  39. Wu, T.H.; Lian, C.; Lee, S.; Pastewait, M.; Piers, C.; Liu, J.; Wang, F.; Wang, L.; Chiu, C.Y.; Wang, W.; et al. Two-stage mesh deep learning for automated tooth segmentation and landmark localization on 3D intraoral scans. IEEE Trans. Med. Imaging 2022, 41, 3158–3166. [Google Scholar] [CrossRef] [PubMed]
  40. Milletari, F.; Navab, N.; Ahmadi, S.A. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571. [Google Scholar]
  41. Dai, A.; Chang, A.X.; Savva, M.; Halber, M.; Funkhouser, T.; Nießner, M. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 5828–5839. [Google Scholar]
  42. Boykov, Y.; Veksler, O.; Zabih, R. Fast approximate energy minimization via graph cuts. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 1222–1239. [Google Scholar] [CrossRef]
  43. Kašparová, M.; Halamová, S.; Dostálová, T.; Procházka, A. Intra-oral 3D scanning for the digital evaluation of dental arch parameters. Appl. Sci. 2018, 8, 1838. [Google Scholar] [CrossRef]
  44. Altman, D.; Bland, J. Statistics notes: Standard deviations and standard errors. Br. Med. J. 2005, 331, 903. [Google Scholar] [CrossRef]
  45. Rubiu, G.; Bologna, M.; Cellina, M.; Cè, M.; Sala, D.; Pagani, R.; Mattavelli, E.; Fazzini, D.; Ibba, S.; Papa, S.; et al. Teeth segmentation in panoramic dental X-ray using mask regional convolutional neural network. Appl. Sci. 2023, 13, 7947. [Google Scholar] [CrossRef]
Figure 1. (left) Mapping between FDI and 17-class notation; (right) a labeled full upper arch and a labeled full lower arch visualized with Meshlabler.
Figure 2. Determining teeth centroids of segmented arches. (a) Teeth centroids of upper arches, (b) Average upper teeth centroids, (c) Teeth centroids of lower arches, (d) Average lower teeth centroids. Different colors correspond to different teeth types.
Figure 3. Illustration of the OBB-based registration method for a lower partial arch. (a) Step 1: center mesh, (b) Step 2: compute OBB, (c) Step 3: align OBB to x–y plane, (d) Step 4: map mesh to GCS.
Figure 4. Illustration of a set of upper arches before (left) and after being registered (right). (a) Original arches, (b) Registered arches.
Figure 5. Illustration of registration failure caused by lingual/buccal ambiguity and its effect on segmentation performance. (a) Failed lingual/buccal detection, (b) Prediction with failed registration, (c) Lingual/buccal flipping, (d) Prediction with corrected registration.
Figure 6. Procedure of missing tooth augmentation. (a) Original ground-truth-labeled arch, (b) Arch with tooth extraction, (c) Arch after label mapping, (d) Final arch with missing teeth.
Figure 7. An example of slicing jaw augmentation with a window size of 6.
Figure 8. PTV2 model in ArchSeg framework. MLP: Multilayer perceptron; GVA: Grouped vector attention.
Figure 9. ArchSeg: the proposed segmentation framework.
Figure 10. Distribution of imperfectness of dental arches.
Figure 11. Different partial arches; the first/second rows show upper/lower partial arches with different types of imperfectness. From left to right: partial only, partial with preparation, partial with missing tooth, and partial with both a preparation and a missing tooth.
Figure 12. Class-wise performance with default framework settings in terms of three evaluation metrics: (a) DSC, (b) SEN, and (c) PPV of a test set with 30 upper and 30 lower partial arches.
Figure 13. Examples of segmentation results of default settings with PTV2 trained models for 4 lower (top row) and 4 upper (bottom row) partial arches. Each column corresponds to a different type of imperfectness.
Figure 14. Examples of segmentation results of default settings with MeshSegNet-trained models for 4 lower (top row) and 4 upper (bottom row) partial arches. Each column corresponds to a different type of imperfectness.
Figure 15. Issues of DSC evaluation in practice.
Figure 16. Impact of registration for full, semi-partial arch, and partial arch with less than 8 teeth compared with published pre-trained MeshSegNet model, and MeshSegNet and PTV2 models trained on our data and integrated in ArchSeg.
Table 1. Class-wise test performance of 5-fold trained 3-class models with majority voting in terms of three evaluation metrics: DSC, SEN, and PPV.
Class | Gingiva | Tooth | Prep
DSC | 0.956 | 0.975 | 0.951
SEN | 0.963 | 0.972 | 0.983
PPV | 0.950 | 0.978 | 0.951
Table 2. Ablation analysis showing the framework performance in terms of DSC for different settings. MC: Master gingiva Cleaning; PP: Post-Processing, ± is followed by standard error.
Settings | Lower | Upper
w/o die | 0.893 ± 0.012 | 0.892 ± 0.012
w/o MC | 0.932 ± 0.008 | 0.916 ± 0.010
w/o PP | 0.911 ± 0.011 | 0.928 ± 0.009
default | 0.936 ± 0.008 | 0.948 ± 0.007
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Alsheghri, A.; Zhang, Y.; Hosseinimanesh, G.; Keren, J.; Cheriet, F.; Guibault, F. Robust Segmentation of Partial and Imperfect Dental Arches. Appl. Sci. 2024, 14, 10784. https://doi.org/10.3390/app142310784

