Article

Comparison of Three 3D Segmentation Software Tools for Hip Surgical Planning
Marco Mandolini, Agnese Brunzini, Giulia Facco, Alida Mazzoli, Archimede Forcellese and Antonio Gigante

1
Department of Industrial Engineering and Mathematical Sciences, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
2
Dipartimento di Scienze Cliniche e Molecolari, Università Politecnica delle Marche, Via Tronto 10/a, Torrette di Ancona, 60126 Ancona, Italy
3
Department of Materials, Environmental Sciences and Urban Planning, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
*
Author to whom correspondence should be addressed.
Sensors 2022, 22(14), 5242; https://doi.org/10.3390/s22145242
Submission received: 19 May 2022 / Revised: 11 July 2022 / Accepted: 12 July 2022 / Published: 13 July 2022
(This article belongs to the Special Issue 3D Sensing and Imaging for Biomedical Investigations)

Abstract

In hip arthroplasty, preoperative planning is fundamental to reaching a successful surgery. Nowadays, several software tools for computed tomography (CT) image processing are available. However, research studies comparing segmentation tools for hip surgery planning for patients affected by osteoarthritic diseases or osteoporotic fractures are still lacking. The present work compares three different software tools from the geometric, dimensional, and usability perspectives to identify the best three-dimensional (3D) modelling tool for the reconstruction of pathological femoral heads. Syngo.via Frontier (by Siemens Healthcare) is a medical image reading and post-processing software that allows low-skilled operators to produce prototypes. Mimics (by Materialise) is a commercial medical modelling software. 3D Slicer (by slicer.org) is an open-source development platform used in medical and biomedical fields. The 3D models reconstructed starting from the in vivo CT images of the pathological femoral head are compared with the geometries obtained from the laser scan of the in vitro bony specimens. The results show that Mimics and 3D Slicer are better for dimensional and geometric accuracy in the 3D reconstruction, while syngo.via Frontier is the easiest to use in the hospital setting.

1. Introduction and Literature Review

Preoperative planning is a fundamental step toward a successful surgery, especially for hip arthroplasty. Indeed, when a hip replacement is needed, inadequate prosthesis fitting, related loosening of the implants, and poorly performed surgery may result in severe implications for the patient [1,2]. Therefore, computed tomography (CT) and anatomical 3D model reconstruction are often performed before arthroplasty to identify the implant that best replicates the patient’s joint anatomy [3,4]. However, due to the surrounding tissues, bone deterioration, and remodelling, the correct interpretation of the bone CT and related 3D model reconstruction may be difficult and inaccurate [5]. The complexity of the anatomical 3D modelling further increases in the case of osteoarthritic, congenital, osteonecrotic, osteoporotic, and post-traumatic diseases [6,7]. Indeed, congenital disorders such as developmental dysplasia of the hip, Legg–Calvé–Perthes disease, and slipped capital femoral epiphysis cause remodelling of the hip joint, resulting in an alteration of its normal anatomy [8]. Bone cell death, which leads to the collapse of articular cartilage, is the leading cause of femoral head osteonecrosis. Osteoarthritis is a degenerative disease that affects the joints. It is characterised by the loss of articular cartilage associated with subchondral bone sclerosis and the production of osteophytes and geodes [9]. The leading causes of secondary osteoarthritis include osteonecrosis and congenital and post-traumatic hip diseases [10]. Osteoporosis, too, is characterised by changes in bone biological material and structural disruption, with a consequent reduction in the resistance of bone tissue that predisposes to an increased risk of fractures [11]. In all these clinical situations, an anatomical 3D model proves helpful in surgical planning [12,13] and communication with the patients [14,15]. Moreover, it has been demonstrated that the anatomical 3D model helps reduce surgery duration [16,17] and intra-operative blood loss, allowing better surgical outcomes [18].
The segmentation process is the basis of anatomical 3D reconstruction. It separates and renders the regions of interest (ROI) from the surrounding tissues and structures. However, only cortical bone has the high density required to differentiate it from adjacent soft tissues. In contrast, the density of porous bones differs only slightly from that of the soft tissues. This phenomenon is mainly observable in elderly patients, i.e., a population that frequently suffers from bone weakness. To perform proper bone identification, it is necessary to achieve a substantial contrast between bones and soft tissues [19]. Segmentation can be performed manually, semi-automatically, or automatically. While the manual approach is user-dependent and time-consuming, the automatic and semi-automatic methods are faster and computer-aided [20]. However, several issues arise even with automatic and semi-automatic methods due to the technical and computational complexity of the 3D image reconstruction and segmentation. New, advanced CT reconstruction algorithms are constantly being developed and commercially deployed to improve the classification and recognition of patterns inside biomedical images. The most used and most efficient ones are based on deep learning convolutional neural networks. Several studies have investigated their benefits, weaknesses, and their impact on the reconstructed image quality [21,22,23]. Multiple segmentation methods could be used together on a single 3D model to eliminate minor deficiencies intrinsic to every algorithm [24].
Nowadays, several software tools for processing DICOM (Digital Imaging and COmmunications in Medicine) files are available, both on the market and freely online. Mimics, Medviso, 3D Slicer, ITK-SNAP, MeVisLab, and ImageJ can be counted among the most used and best-known segmentation software tools [24,25]. These tools can be divided into three categories: CT-embedded, high-end licensed, and open source software. They present different segmentation algorithms, reconstruction accuracy, levels of usability, and, of course, commercial costs. Many studies in the scientific literature have tried to compare segmentation software, focusing on the reconstruction of specific anatomical districts or on particular kinds of patients or diseases; several examples are described hereafter.
A specific field of interest for the segmentation tools is the 3D reconstruction of organs such as the lungs [26,27,28]. However, the present work focuses on the segmentation of bone structures. Considering maxillofacial surgery, several authors assessed and compared segmentation tools for the 3D modelling of the skull and mandible. Through the comparison between the commercial software Mimics (used as the gold standard) and the open source Medical Imaging Interaction Toolkit (MITK) [29], InVesalius [20,30], ITK-SNAP, 3D Slicer, and the commercial Dolphin 3D [20], the authors found that the 3D models produced using Mimics and the other software were comparable [20,29,30]. Therefore, they suggested using open source software for preoperative planning to minimise the operational cost [29,30]. Wallner et al. (2018) assessed the accuracy of a license-free segmentation algorithm using the MeVisLab software as ground truth. In this case, the results suggested that the proposed open source software could be used with high accuracy to segment the craniomaxillofacial complex [31]. The parameters used for the comparisons include the Hausdorff distance [29,30,31], the Wilcoxon signed rank test [29], the Dice score [30,31], the segmentation volume [20,31], time, and the number of voxels [31]. A general review of the metrics for evaluating 3D medical image segmentation is available in Taha et al. [32]. Other performance parameters considered for the software evaluation are the time needed to complete the segmentation, ease of use, algorithm precision [33], automatisation degree, 3D visualisation, image registration tools, tractography tools, supported OS, and potential plugins [25]. In [25], the authors compared fourteen software tools (twelve free source and two commercial) to obtain patient-specific 3D models of the pelvic region of children from magnetic resonance images (MRI). In [33], the authors used three open source software tools to segment CT images of a vertebra.
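To make these metrics concrete, the following minimal sketch shows how the Dice score and the Hausdorff distance can be computed from two binary segmentation masks of the same volume. The masks and array sizes are hypothetical placeholders, not data from the cited studies.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Hypothetical binary masks (True = bone voxel) from two segmentations of the same CT volume
seg_a = np.zeros((64, 64, 64), dtype=bool)
seg_b = np.zeros((64, 64, 64), dtype=bool)
seg_a[20:40, 20:40, 20:40] = True
seg_b[22:42, 20:40, 20:40] = True

# Dice score: 2|A ∩ B| / (|A| + |B|)
dice = 2.0 * np.logical_and(seg_a, seg_b).sum() / (seg_a.sum() + seg_b.sum())

# Symmetric Hausdorff distance between the voxel coordinates of the two masks (in voxel units)
pts_a = np.argwhere(seg_a)
pts_b = np.argwhere(seg_b)
hausdorff = max(directed_hausdorff(pts_a, pts_b)[0],
                directed_hausdorff(pts_b, pts_a)[0])

print(f"Dice = {dice:.3f}, Hausdorff = {hausdorff:.1f} voxels")
```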
Comparisons of segmentation software for hip surgery planning, for patients affected by osteoarthritic diseases or osteoporotic fractures, are still scarce in the literature. However, the authors in [34] investigated the use of different segmentation algorithms for the 3D reconstruction of a human cadaver femur, divided into four areas: “neck and greater trochanter”, “proximal metaphysis”, “diaphysis”, and “distal metaphysis”. The deviation analysis was carried out between the femur models reconstructed from the CT images and those obtained from the optical 3D scanning of the femur. The results showed that the average deviation of the CT-based models from the scan models is very low. However, the highest deviation error was found in the “neck and greater trochanter” area [34]. This area is essential for the planning of hip surgery. It is also the area most affected by osteoarthritic disease and thus the most difficult to segment correctly. That is why the present work focuses on femoral heads, leaving aside the other hip sections.
To reach good surgical outcomes, surgeons must be able to trust CT-based models obtained from in vivo bone. For this reason, in contrast with the previously cited research, which was performed only on cadaver femurs, the present work considers the CT images of in vivo femoral heads with the surrounding soft tissues. This paper aims to compare three different software tools in the 3D modelling of pathological in vivo femoral heads from CT data:
  • Mimics v.12.11 (by Materialise, Leuven, Belgium): commercial medical 3D image-based engineering software that efficiently converts images into 3D models;
  • 3D Slicer v.4.10.2 (by Slicer Community, www.slicer.org): open source software to solve advanced image computing challenges and a development platform for medical and biomedical applications;
  • Syngo.via Frontier 3D printing v.1.2.0 (by Siemens Healthcare GmbH, Erlangen, Germany): medical image reading and post-processing software that allows low-skilled operators to produce prototypes.
The presented comparison aims to establish the best image segmentation solution for hospital facilities. On the one hand, geometrical and dimensional accuracy is mandatory for estimating the quality of 3D segmented models. However, in a clinical environment, this is not enough: easy-to-use tools should support doctors and technicians. Hence, software usability is another metric that must be evaluated in this context.
Thus, the comparison concerns three aspects:
  • Geometric quality: geometric accuracy of the segmented femur heads;
  • Dimensional quality: dimensional accuracy of femur heads’ measurements (vertical and horizontal diameters);
  • Usability: user experience during the overall segmentation process for preoperative hip planning.
To the authors’ knowledge, this is the first paper to compare segmentation software based on in vivo femoral head images. This research will help hospitals and clinicians improve hip surgery preoperative planning.

2. Materials and Methods

Figure 1 and Figure 2 show the pipeline of the proposed methodology to compare the 3D models of pathological femoral heads. The 3D models obtained with three different segmentation software tools from the pre-surgery CT images are compared against the reference models reconstructed from a laser scan of the in vitro bony specimens. This methodology analyses the reliability of the 3D anatomical models reconstructed from the CT images using software with different technological characteristics by comparing them against the reference gold standard.
Figure 1 shows the first part of the pipeline. The computed tomography of the in vivo femoral head is acquired before the patient’s surgery. Then, the CT images are segmented with three different software tools, Mimics, 3D Slicer, and syngo.via Frontier. This procedure results in three geometrical models of the same femoral head: G_A (3D model reconstructed using Mimics), G_B (3D model reconstructed using 3D Slicer), and G_C (3D model reconstructed using syngo.via Frontier). These models are thus reconstructed by different algorithms that distinguish between the bone region and the surrounding tissues.
Then, the reference digital 3D model is created. It is obtained by digitising, through a non-contact 3D laser scan, the in vitro bony specimen collected after the patient had undergone hip replacement surgery. The obtained 3D model (G_D) is considered the reference since the 3D laser scanner is more accurate than the computed tomography. The surface of the cleaned bone is acquired without any adjacent tissue or material. The specimen is scanned outside the container with the preservative solution.
The reconstructed 3D geometrical models are the input of the second part of the pipeline (Figure 2).
Since the 3D geometries are reconstructed through software with different reference systems, the models may not have the same orientation. For this reason, the first step consists of a manual 3D model overlap (three-point method), ensuring the same orientation. Then, in the second step, the four 3D geometries must be aligned with each other as precisely and accurately as possible. This objective is accomplished through two sub-steps: after the manual alignment, a best-fit alignment algorithm based on point selection is run. The best-fit alignment is possible because the femur head is not a perfect sphere [35], as illustrated in Figure 3. The femur head shape is mainly defined by the vertical (VD) and horizontal (HD) diameters. The vertical diameter is the “maximal diameter of the femoral head taken in the vertical plane that passes through the axis of the neck” [35]. The horizontal diameter is the “maximal diameter of the femoral head taken in the horizontal plane perpendicular to the vertical diameter of the head” [35]. Their mean difference is approximately 0.48 mm in men and 1.37 mm in women [36], with VD > HD. Since the best fit is used to optimise the alignment between the geometries, according to the deviations mentioned above for the femur head diameters, a maximum allowed displacement of 1.0 mm is set. This value guarantees only a fine geometry adjustment without compromising the original manual alignment.
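The following is a minimal sketch of how such a coarse-then-fine registration could be reproduced with the Open3D library. The file names are hypothetical, the identity matrix stands in for the manual three-point alignment, and capping the ICP correspondence distance at 1.0 mm only approximates the maximum-displacement constraint applied in Geomagic Design X.

```python
import numpy as np
import open3d as o3d

# Hypothetical file names: CT-based model (test) and laser-scan model (reference)
test_mesh = o3d.io.read_triangle_mesh("G_A_femoral_head.stl")
ref_mesh = o3d.io.read_triangle_mesh("G_D_laser_scan.stl")

# Sample point clouds from the meshes for registration
test_pcd = test_mesh.sample_points_uniformly(number_of_points=50000)
ref_pcd = ref_mesh.sample_points_uniformly(number_of_points=50000)

# The identity matrix stands in for the coarse, manual three-point alignment
init = np.eye(4)

# Fine "best fit" refinement: point-to-point ICP with correspondences capped at 1.0 mm,
# loosely mirroring the 1.0 mm maximum allowed displacement used in the paper
result = o3d.pipelines.registration.registration_icp(
    test_pcd, ref_pcd, max_correspondence_distance=1.0, init=init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

print(result.transformation)  # 4x4 rigid transformation aligning the test model to the reference
```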
The femoral neck must be cut to avoid mismatches and focus the analysis on the femoral head. Trimming the neck after the models’ alignment makes the cut consistent across all the geometrical models. At this point, the comparison can be split into two evaluations: the geometric and dimensional quality assessments. For the geometric comparison, the G_D geometry is set as the reference model and, in turn, the G_A, G_B, and G_C models as the test models. Results refer to the point-to-point surface distance (signed Euclidean distance) between the test and the reference models in terms of average and standard deviation.
It is noted that models reconstructed via CT segmentation tools contain both the femur head’s cortical (external) and trabecular (internal) structures. In contrast, the reference geometry obtained via reverse engineering has only the outer femur head shape (cortical). A maximum distance value was set to 2.0 mm to evaluate deviations only between external bodies, considering that the average cortical thickness in this region is around 1.3 ± 0.2 mm [37]. In addition, the maximum allowed error in surgical planning is about 2 mm.
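As an illustration, a minimal sketch of this signed distance evaluation using the trimesh library is shown below. The file names are hypothetical, and the sign convention (positive inside the reference surface) may differ from the one adopted by the comparison software used in this work.

```python
import numpy as np
import trimesh

# Hypothetical file names for one patient
test = trimesh.load("G_A_head.stl")  # CT-based model (test)
ref = trimesh.load("G_D_head.stl")   # laser-scan model (reference)

# Signed distance from the reference surface to every vertex of the test model
# (in trimesh's convention, points inside the reference mesh get positive values)
d = trimesh.proximity.signed_distance(ref, test.vertices)

# Keep only deviations within the +/-2 mm band, as done for the cortical-only comparison
d = d[np.abs(d) <= 2.0]
print(f"mean = {d.mean():.3f} mm, std = {d.std():.3f} mm")
```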
Beyond the signed Euclidean distance, the authors evaluated the vertical (VD) and horizontal (HD) diameters of the femoral heads (Figure 3). This dimensional evaluation mimics the measurements surgeons make in practice, when they measure the vertical and horizontal diameters of the femoral head with a specific calliper. The observed dimensions were obtained using the minimum bounding box algorithm applied to the four geometries (G_A, G_B, G_C, and G_D). Since the four geometries have been cut at the end of the neck, the relative bounding boxes are aligned with the neck axis (per the VD definition). The maximum and minimum dimensions of the boxes measured on the plane perpendicular to the neck axis are associated with the vertical and horizontal diameters.
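A minimal sketch of this diameter extraction is given below, again using trimesh. The file name is hypothetical, and the assumption that the shortest box dimension of the trimmed head lies along the neck axis is a simplification of the procedure described above.

```python
import trimesh

# Hypothetical file name for one femoral head already trimmed at the neck
head = trimesh.load("G_D_head_trimmed.stl")

# Minimum-volume oriented bounding box of the trimmed head
obb = head.bounding_box_oriented
extents = sorted(obb.primitive.extents)  # the three box dimensions, ascending

# Assumption: the shortest box dimension lies along the (cut) neck axis, so the two
# remaining dimensions approximate the head diameters, with VD >= HD
hd, vd = extents[1], extents[2]
print(f"VD ~ {vd:.2f} mm, HD ~ {hd:.2f} mm")
```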
Lastly, the usability of the software tools must be assessed. As with the geometrical and dimensional analyses, the usability evaluation was also conceived as a comparative assessment. Seven evaluation objectives were defined and weighted by a focus group of experts. Five objectives were retrieved from [25,33]. Two were specified by the authors (training time and cost):
  • Automatisation degree: amount of manual interaction required by the user [25].
  • Segmentation time: time required for the segmentation [25].
  • 3D visualisation: ability to represent a 3D model realistically [25].
  • Supported Operating System (OS): supported operating systems [25].
  • Potential extension (plugins): ability of the software tools to be freely extended by add-ons or plugins [33].
  • Training time: time required to start using the tool.
  • Cost: license price.
First, the focus group expressed a weight (Weight_n) for each evaluation objective n. It then defined the scores (Score_n) for each metric and software tool on a three-point scale. The total score for each software tool is computed through the weighted arithmetic mean (Equation (1)):
Total score = Σ_n (Weight_n × Score_n) / Σ_n Weight_n    (1)
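As a quick check, the weighted mean can be reproduced in a few lines of Python using the weights and scores later reported in Table 5; this is only a sketch of the calculation, not part of the authors’ toolchain.

```python
# Weights and three-point scores for the seven criteria, in the order of Table 5
# (Automatisation degree, Segmentation time, Training time, Cost, 3D visualisation, OS, Plugins)
weights = [10, 8, 8, 8, 6, 6, 4]
scores = {
    "Mimics":             [1, 3, 1, 1, 3, 3, 1],
    "3D Slicer":          [2, 1, 1, 3, 3, 3, 1],
    "syngo.via Frontier": [3, 2, 3, 2, 3, 1, 1],
}

for tool, s in scores.items():
    total = sum(w * v for w, v in zip(weights, s)) / sum(weights)
    print(f"{tool}: {total:.2f}")  # reproduces 1.80, 2.00, and 2.28
```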

3. Case Study

This section presents the application of the above-mentioned methodology to compare the software tools used for femur head image segmentation for hip surgical planning.

3.1. Participants

Since this study aimed to evaluate segmentation software tools in a clinical environment, patients were enrolled to represent a specific population requiring hip surgical preplanning. Thus, the research involved ten patients with different pathologies (e.g., primitive or secondary coxarthrosis or hip fractures) and femur head shapes who needed total hip arthroplasty. Osteoarthritic, congenital, osteonecrotic, osteoporotic, and post-traumatic diseases are pathologies that must be covered to robustly evaluate segmentation software in cases of anatomical 3D modelling complexity.
Subjects who needed hip revision surgery and patients not subjected to a preoperative CT of the pelvis were excluded. Ten patients signed the informed consent to use the preoperative CT images of the pelvis and the bony specimen derived from surgery (femoral head) for scientific research purposes. The ten patients underwent a preoperative CT scan of the pelvis using a CT machine with a tube voltage of 120 kV and a slice thickness of 0.625 mm. Their demographic data and information related to pathology and CT images are described in Table 1.
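For readers who wish to verify such acquisition parameters directly from an exported series, they can be read from the DICOM headers, for example with pydicom; the file path below is a hypothetical placeholder.

```python
import pydicom

# Hypothetical path to one slice of a patient's preoperative pelvis CT series
ds = pydicom.dcmread("pelvis_ct/slice_0001.dcm")

# Acquisition parameters relevant to the segmentation protocol
print("Tube voltage (kVp):", ds.KVP)               # expected: 120
print("Slice thickness (mm):", ds.SliceThickness)  # expected: 0.625
print("Pixel spacing (mm):", ds.PixelSpacing)
print("Convolution kernel:", ds.ConvolutionKernel)
```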
Bony specimens of femoral heads were cleaned from blood and placed in containers with formalin. The results refer to eight patients because two of them were discarded based on the following exclusion criteria:
  • The presence of large geodes required massive intervention by the operator to reconstruct the 3D model. This condition caused the inevitable introduction of errors concerning the 3D model rebuilt from the scan of the in vitro bony specimen.
  • The deterioration of the bony specimen (the femur head was poorly preserved) made the scan impossible to perform.

3.2. Procedure

An expert and trained operator produced the 3D digital models of the pathological femoral heads. The operator had good experience with the three software tools. He had already used them in more than ten case studies per software to reconstruct pathological bones.
The three segmentation software tools can be briefly described as:
  • Materialise Mimics: a commercial medical modelling software that allows interfacing between CT data and computer-aided design (CAD) or solid free-form fabrication (SFF) systems.
  • 3D Slicer: a development platform to quickly and freely build and deploy custom research and commercial product solutions.
  • Siemens syngo.via Frontier: a medical image reading and post-processing software that quickly creates prototypes regardless of expertise level.
Overall, the 3D anatomy reconstruction required the initial segmentation of the anatomical bone structures from a CT scan. CT is unable to detect cartilage and the remaining soft tissues; however, articular cartilage was poorly represented in these patients. The resultant 2D image slices were stored in DICOM format. The CT images were then used as the primary data for reconstructing the 3D models. The bony areas were extracted from each slice to obtain the 3D anatomy. Because the pathological hips had no joint space between the head and acetabulum in some places, only one pixel was manually removed in each 2D slice to stay below the imaging resolution threshold. The 3D visual models were then obtained by stacking the segmented slices, and the transformation from the sliced images to STL (standard triangulation language) surface models employed the marching cubes algorithm. Therefore, the 3D model of the femoral head was reconstructed for each patient with the three software tools Mimics, 3D Slicer, and syngo.via Frontier following the same four steps:
  • DICOM files from the pelvis CT were imported. Br 64, a high-resolution CT convolution kernel, was used for image reconstruction.
  • Only the femoral head was included in the region of interest (ROI). Greater and lesser trochanters were excluded.
  • Bone identification was performed by a threshold segmentation process selecting voxels above 200 Hounsfield Units (HU). To separate the femoral head from the acetabulum, manual segmentation was performed by removing only one pixel in each 2D slice, to remain below the imaging resolution threshold. Manual segmentation was then conducted to remove, in each 2D slice, all the pixels that do not belong to the femoral head (a minimal sketch of this thresholding and surface extraction step is given after this list).
  • The STL file of the 3D model was exported (3D models: G_A by Mimics, G_B by 3D Slicer, and G_C by syngo.via Frontier).
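The sketch below illustrates the thresholding and surface extraction steps (200 HU threshold, marching cubes, STL export) using NumPy, scikit-image, and trimesh. The input file, voxel spacing, and output name are hypothetical placeholders; the actual work was performed inside the three segmentation tools.

```python
import numpy as np
from skimage import measure
import trimesh

# Hypothetical inputs: a stacked CT volume in Hounsfield Units, already cropped to the
# femoral-head ROI, and the voxel spacing in mm (slice, row, column)
hu_volume = np.load("femoral_head_roi_hu.npy")
spacing = (0.625, 0.7, 0.7)

# Bone isosurface at the 200 HU threshold used in the paper (marching cubes algorithm)
verts, faces, normals, _ = measure.marching_cubes(hu_volume, level=200, spacing=spacing)

# Export the extracted surface as STL for the downstream comparison
trimesh.Trimesh(vertices=verts, faces=faces).export("femoral_head.stl")
```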
The DICOM images were analysed using the three different software tools, dividing the patients into three groups. The analysis of the first group (three patients) started with the syngo.via Frontier tool, the analysis of the second group (another three patients) began with 3D Slicer, and the analysis of the third group (four patients) started with Mimics. This procedure was defined to avoid potential bias related to using the software tools consecutively, always in the same sequence. Indeed, this anti-bias procedure aimed to prevent the operator from remembering the geometrical bone structures and thus obtaining the best results with the last software used.
G_A model segmentation required about 35 min (from 30 to 40 min), G_B about 45 min (from 40 to 50 min), and G_C about 40 min (from 30 to 50 min).
Subsequently, the reference 3D models were reconstructed after surgery by scanning the in vitro femoral head specimens. The Range 7 3D laser scanner (by Konica Minolta, Tokyo, Japan) was chosen to digitise the cleaned bony samples. It was selected for its accuracy, in order to produce the reference 3D models of the femoral heads. Indeed, the 3D laser scanner’s accuracy (40 µm) is higher than that of the CT machine (1250 µm). The comparison between the different 3D models was accomplished using the 3D point cloud and mesh processing software CloudCompare v.2.10.2 (open source, by www.cloudcompare.org). The comparison procedure proposed in Figure 2 was followed in this case study. In detail:
  • G_A, G_B, and G_C geometries were overlapped in Rhinoceros 3D v.6 (by Robert McNeel and Associates, Seattle, WA, USA) to bring them into the same orientation. The G_B geometry was rotated by 180° around the z-axis to superimpose it on the G_A and G_C geometries (already correctly oriented).
  • G_B geometry was exported in STL format.
  • G_A, G_B, and G_C geometries, which are already overlapped, were imported into Geomagic Design X (by 3D Systems, Rock Hill, USA).
  • G_A, G_B, G_C, and G_D geometries were then aligned in Geomagic Design X through a two-step procedure:
    The manual alignment of geometry G_D with respect to geometries G_A, G_B, and G_C (already aligned with each other), using three reference points. In particular, the alignment was conducted between geometries G_D and G_A (the one with the best external surface).
    The automatic alignment (best-fit algorithm) of the G_D geometry with respect to the G_A, G_B, and G_C geometries (as in the previous step, the G_A geometry was chosen for the alignment).
    The femoral necks were cut to leave only the femoral heads. In this way, the cut consistently occurs for all the geometries.
    The four geometries were exported in STL format.
  • Geometric quality evaluation:
    The four geometries were imported into CloudCompare.
    The comparison was performed by setting one of the geometries from CT as the test (G_A, G_B, G_C) and the G_D geometry as the reference. The maximum deviation was set at 2 mm.
    The comparison was repeated for the other two segmented models.
    The results of the comparison were exported as mean values and standard deviation.
  • Dimensional quality evaluation:
    The four geometries were imported into Rhinoceros 3D.
    A Python script was executed to compute the minimum bounding box for all four geometries.
    The maximum and minimum bounding box dimensions measured on a plane perpendicular to the femoral neck were, respectively, assigned to the vertical and horizontal diameters.
    The comparison was performed by setting the diameters measured on the G_D geometry as the reference and the other diameters computed on G_A, G_B, and G_C geometries as the test.

4. Results

Three types of results were obtained in the present work and are presented as follows:
  • Geometric quality: this is the geometric deviation between the reference and test geometries of the femoral head. This type of result helps evaluate how precisely the segmentation tools can reconstruct the external surface of the femoral heads.
  • Dimensional quality: this is the deviation in the femoral head diameters between the reference and test geometries. This result allows surgeons to evaluate how precisely the segmentation tools can capture the femoral head dimensions.
  • Usability: this evaluation refers to several metrics (e.g., automatisation degree, segmentation time, training time), which are helpful for surgeons to evaluate the segmentation tools globally.

4.1. Geometric Quality

Following the previously described procedure, the authors evaluated the signed Euclidean distance (Figure 4, Figure 5 and Figure 6, Table 2) and average distances between the reference (G_D) and the test (G_A, G_B, and G_C) geometries for each patient. Such indicators were selected from twenty metrics used to evaluate 3D medical image segmentation [32]. This selection was made considering the segmentation requirements of this work, that is:
  • Low segmentation quality: the patients’ pathologies considered in this work often resulted in low-quality segmented volumes.
  • Highly complex boundaries and presence of outliers: this requirement results from the previous one. Since the external surfaces of the segmented volumes are irregular, outliers may exist.
  • High importance of the contour: the contour is relevant because it is used to evaluate the hip implant dimensions.
  • Low relevance of the volume: the external surfaces of the segmented volumes are often non-continuous, so a robust volume evaluation was not possible.
Figure 4 and Figure 5 show the colour maps for all the patients. To uniformly evaluate the results, deviations are always represented on the reference geometries. Each figure contains a colour scale map (red: maximum negative deviation (−2 mm); blue: maximum positive deviation (+2 mm); white: no deviation).
Signed distance was considered for evaluating both the deviation and direction. The first result is required for assessing the best software tool (in terms of accuracy). The second is necessary to understand whether the test geometries are bigger or smaller than the reference. For each comparison, the authors evaluated the average distance and relative standard deviation (Table 2).
Average and standard deviations were used to create graphs (one for each patient) plotting the distance distribution for each software tool (Figure 6). Such graphs allow a quick comparison of the software tools used to rebuild the femoral head geometries.
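The per-patient distribution plots can be reproduced from the means and standard deviations in Table 2; the sketch below assumes a Gaussian approximation of the signed distance distribution (the values shown are those of patient #2) and is not necessarily how the published figures were generated.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

# Mean and standard deviation of the signed distance for one patient (patient #2, Table 2);
# a Gaussian approximation of the distance distribution is assumed here
stats = {
    "Mimics": (-0.163, 0.719),
    "3D Slicer": (-0.156, 0.686),
    "syngo.via Frontier": (-0.982, 0.698),
}

x = np.linspace(-3, 3, 400)
for tool, (mu, sigma) in stats.items():
    plt.plot(x, norm.pdf(x, mu, sigma), label=tool)

plt.xlabel("Signed distance to reference (mm)")
plt.ylabel("Probability density")
plt.legend()
plt.show()
```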

4.2. Dimensional Quality

In addition to the signed Euclidean distance, Table 3 presents the horizontal (HD) and vertical (VD) diameters of the femur heads measured on the different segmented geometries. The mean values of HD and VD for each test geometry (G_A, G_B, and G_C) are compared to the reference one to evaluate the deviations (Table 4). These results are required to evaluate the performance of the segmentation tools in assessing the implant dimensions for hip surgery.

4.3. Usability

The third set of results (Table 5) was defined considering the seven usability evaluation objectives. A single focus group (the first limitation of this work), composed of one technician involved in the CT image segmentation, one surgeon, two PhD students, and two researchers on biomedical-related topics, was set up to define the weights for each objective (from one to ten) and the scores for each software tool and goal. First, the focus group expressed the weights. “Automatisation degree” is the most crucial criterion because CT scans should be processed rapidly, and the result should not depend on the technician. To achieve these goals, it is important to employ automatic tools. “Segmentation time”, “Training time”, and “Cost” are also crucial because, in hospitals, segmentation tools need to be used by clinicians without an engineering background.
Furthermore, the deployment cost (CAPEX—CApital EXPenditure, training, and license cost for software) must be as low as possible to limit the impact on the national health system. “3D visualization” and “Supported Operating System (OS)” are not as important as the previous criteria because, nowadays, segmentation tools have good rendering characteristics, and the Microsoft Windows operating system is widespread. “Potential extension (plugins)” is marginal because the 3D geometries segmented by the software are enough to evaluate dimensions and make decisions (no other functionalities are required).
Then, the focus group members defined, and collectively agreed on, their evaluations, which were converted into numerical scores (1: the worst software, 3: the best software). A three-point scale was used because the project goal was to compare the tools (an absolute score was not required). Ex aequo scores were assigned to tools with the same performance for a specific criterion. Finally, the weights and scores allowed the authors to rank the segmentation tools.

5. Discussion

The discussion reflects the three types of results obtained and presented in the previous section.
Concerning the geometric quality, Table 2 shows that the three software tools tend to reconstruct a smaller geometry than the actual one. The mean signed Euclidean distance is about 0.3 mm for Mimics and 3D Slicer, and 0.7 mm for syngo.via Frontier. This conclusion is aligned with the test performed in [34]. The authors compared the geometries obtained by employing four segmentation tools (i.e., Mimics, Amira, YaDiv, and Fiji-Medtool) with that obtained from a 3D optical scanner (GOM ATOS III). They found average deviations of −0.72 mm (negative) and +0.66 mm (positive) with an average standard deviation of 0.63 mm. Mimics was the software with the lowest deviation. Measurements were made on the area of specific femur cross-sections.
This conclusion is also evident from Table 4 (deviations in the femoral head diameters). In this case as well, the three software tools tend to generate smaller femoral heads, the diameters of which are, on average, approximately 0.6 mm smaller for Mimics and 3D Slicer and 1.5 mm smaller for syngo.via Frontier. It is noted that, with regard to the diameters, for two patients, Mimics and 3D Slicer estimated femoral heads with larger diameters. In conclusion, the overall negative deviation should be considered in hip surgeries when the implant size is selected according to the 3D model reconstructed from CT images using the segmentation software tools.
Another conclusion can be drawn by comparing the segmentation quality of the software tools. Table 2 and Figure 6 show that 3D Slicer performs approximately similarly to Mimics, with deviations of −0.299 vs. −0.314 mm, respectively (standard deviations of 0.697 vs. 0.727 mm). Syngo.via Frontier exhibited a larger geometric difference and standard deviation (0.640 and 0.757 mm, respectively) due to the irregular surfaces of the segmented geometries. This conclusion is consistent with that drawn in [20], where a mandible was used as a test. Using Mimics and 3D Slicer to measure the volume of mandibles, the deviations from a fully manually segmented geometry were 40.85 and 40.58 mm³, respectively. Such tools outperformed other solutions such as Dolphin 3D and InVesalius.
Furthermore, the results from [20] can be used to assess the robustness of our conclusions. In this study, eight patients were considered, a number insufficient for statistical analysis (this is the most significant limitation of this work). Nevertheless, for the dimensional quality evaluation (deviations of the horizontal and vertical diameters), our results have a standard deviation between 1.0% and 3.4%, which is comparable with [20], where a greater number of patients (i.e., twenty) was considered. In the reference publication, the standard deviation is between 1.9% and 5.1%. Although a larger number of cases would have been preferable, our conclusions appear sufficiently robust.
From Table 2, and Figure 4 and Figure 6, it can be observed that syngo.via Frontier generated femoral heads with irregular external surfaces, characterised by many holes. In Figure 4 and Figure 6, it can also be observed that the maximum (negative) distance between the test and reference geometries is located in the region between the femoral head and neck. The reason could be linked to the high curvature values in that region.
Concerning the robustness of the tools, the geometric and dimensional evaluations (Table 2 and Table 4) show that, globally, Mimics is the best (with regard to standard deviation). Concerning the geometric deviation, 3D Slicer exhibits a standard deviation and mean absolute deviation approximately similar to those of Mimics. Still, 3D Slicer performs worse for the dimensional deviations (even worse than syngo.via Frontier), as visible in Table 4.
Another conclusion worth drawing concerns patient #1 (surface collapse-geodes). The femoral head geometry, in this case, was entirely different from that of the other seven patients. For this femoral head, Mimics performed much better than the other two software tools in both the geometric and dimensional evaluations.
Concerning the usability evaluation, the authors’ findings contrast with the geometric and dimensional quality results (Table 5). Here, syngo.via Frontier outperforms Mimics and 3D Slicer. Syngo.via Frontier was much easier to use, with a shorter learning curve, since it has fewer functions and a more intuitive graphical interface.
Moreover, it is faster to use thanks to the direct connection with the DICOM database. Despite that, the accuracy of the geometrical prototype creation was lower than in other software such as Mimics. Accuracy is essential to implant the correct prosthetic components in osteotomies and more complex positions. While using syngo.via Frontier, surgeons must account for possible accuracy errors during operative planning to prevent unforeseen issues.
Nevertheless, accuracy is not crucial for learning or communicating with patients. Regarding usability, 3D Slicer was the most difficult to use but has the significant advantage of being free. In contrast to expensive high-end software, open source software such as 3D Slicer should be considered, especially in small professional practices. Open source software can thus be a valuable tool with a competitive accuracy level, despite being less user friendly and requiring longer times for model production.
The main limitation of this study is related to the small sample of participants. This set should be doubled in the future to obtain more robust results (as in [20,24,26]). Furthermore, the number of focus groups that evaluate the software tools should be extended. Additional evaluation units should be established in other hospitals (distributed in different countries) to understand whether the results are influenced by the testers’ attitudes, backgrounds, and expertise. Other segmentation software tools (e.g., Medviso, ITK-SNAP, MeVisLab, ImageJ, InVesalius, and Dolphin) could be compared following the same methodology proposed in this study.

6. Conclusions

The processing of tomographic images for 3D models’ reconstruction can be arduous from a procedural point of view, especially for patients with pathologies that alter the morphology of hard and soft tissues. Therefore, in a clinical context, it is essential to understand which software best fits the surgeon’s expectations from different perspectives: the geometric and dimensional quality of the 3D model reconstruction and the usability of the segmentation tools.
This work precisely analysed three different categories of software for CT images’ segmentation (the high-end Materialise Mimics, the open source 3D Slicer, and the CT embedded Siemens syngo.via Frontier). Segmented 3D models were compared with a reference model obtained by the laser scan of the in vitro bony specimen.
The main results show that Mimics and 3D Slicer are better for the geometrical and dimensional accuracy in the 3D reconstruction of femur heads. The observed absolute average deviations (geometrical accuracy) are 0.299 mm (3D Slicer), 0.353 mm (Mimics), and 0.757 mm (syngo.via Frontier). The absolute percentage deviations in measuring the horizontal and vertical femur head diameters (dimensional accuracy) are 1.0% (Mimics), 1.4% (syngo.via Frontier), and 2.4% (3D Slicer).
At the same time, syngo.via Frontier is the easiest to use in the hospital setting, with a high automatisation degree and a short training time. Considering the “Automatisation degree”, “Segmentation time”, “Training time”, “Cost”, “3D visualization”, “Supported Operating System”, and “Potential extension (plugins)” usability criteria, a focus group with six participants gave scores of 2.28, 2.00, and 1.80, respectively, for syngo.via Frontier, 3D Slicer, and Mimics.
Future work will involve a larger number of patients with osteoarthritic hips, allowing a more robust statistical analysis and characterisation of the software tools. The aim is to reach at least 20 cases, similar to other studies in the literature (e.g., [20,24,26]). Furthermore, at least two different focus groups outside of Italy should be organised to extend the results regarding software usability. This future work is required to evaluate the impact of the testers’ attitudes, backgrounds, and expertise on the results. Finally, this comparison can be extended to assess other open source and commercial segmentation software tools to further extend the results available in the literature.

Author Contributions

Conceptualization, G.F.; methodology, G.F. and A.B.; validation, G.F.; formal analysis, G.F., M.M., A.B. and A.M.; resources, G.F.; writing—original draft preparation, G.F., A.B. and M.M.; writing—review and editing, A.M., A.G. and A.F.; project administration, A.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved (in 2017) by the Department of Clinical and Molecular Sciences (DISCLIMO) board, according to the Policy of Clinical Orthopaedics, Università Politecnica delle Marche, Ancona, Italy.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Handels, H.; Ehrhardt, J.; Plötz, W.; Pöppl, S.J. Virtual planning of hip operations and individual adaption of endoprostheses in orthopaedic surgery. Int. J. Med. Inform. 2000, 58, 21–28.
  2. Liu, Z.; Hu, H.; Liu, S.; Huo, J.; Li, M.; Han, Y. Relationships between the femoral neck-preserving ratio and radiologic and clinical outcomes in patients undergoing total-hip arthroplasty with a collum femoris-preserving stem. Medicine 2019, 98, e16926.
  3. Xu, D.F.; Bi, F.G.; Ma, C.Y.; Wen, Z.F.; Cai, X.Z. A systematic review of undisplaced femoral neck fracture treatments for patients over 65 years of age, with a focus on union rates and avascular necrosis. J. Orthop. Surg. Res. 2017, 12, 28.
  4. Melvin, J.S.; Matuszewski, P.E.; Scolaro, J.; Baldwin, K.; Mehta, S. The role of computed tomography in the diagnosis and management of femoral neck fractures in the geriatric patient. Orthopedics 2011, 34, 87.
  5. George, E.; Liacouras, P.; Rybicki, F.J.; Mitsouras, D. Measuring and Establishing the Accuracy and Reproducibility of 3D Printed Medical Models. Radiographics 2017, 37, 1424–1450.
  6. Farinelli, L.; Baldini, M.; Gigante, A. Hip osteoarthritis: What to do before metal. GIOT 2018, 44, 265–271.
  7. Yu, P.A.; Hsu, W.H.; Hsu, W.B.; Kuo, L.T.; Lin, Z.R.; Shen, W.J.; Hsu, R.W.W. The effects of high impact exercise intervention on bone mineral density, physical fitness, and quality of life in postmenopausal women with osteopenia: A retrospective cohort study. Medicine 2019, 98, e14898.
  8. Facco, G.; Massetti, D.; Coppa, V.; Procaccini, R.; Greco, L.; Simoncini, M.; Mari, A.; Marinelli, M.; Gigante, A. The use of 3D printed models for the pre-operative planning of surgical correction of pediatric hip deformities: A case series and concise review of the literature. Acta Biomed. 2022, 92, e2021221.
  9. Pereira, D.; Ramos, E.; Branco, J. Osteoarthritis. Acta Med. Port. 2015, 28, 99–106.
  10. Hernigou, P.; Trousselier, M.; Roubineau, F.; Bouthors, C.; Chevallier, N.; Rouard, H.; Flouzat-Lachaniette, C.H. Stem Cell Therapy for the Treatment of Hip Osteonecrosis: A 30-Year Review of Progress. Clin. Orthop. Surg. 2016, 8, 1–8.
  11. Pouresmaeili, F.; Kamalidehghan, B.; Kamarehei, M.; Goh, Y.M. A comprehensive overview on osteoporosis and its risk factors. Ther. Clin. Risk Manag. 2018, 14, 2029–2049.
  12. Facco, G.; Politano, R.; Marchesini, A.; Senesi, L.; Gravina, P.; Pangrazi, P.P.; Gigante, A.P.; Riccio, M. A Peculiar Case of Open Complex Elbow Injury with Critical Bone Loss, Triceps Reinsertion, and Scar Tissue might Provide for Elbow Stability? Strateg. Trauma Limb Reconstr. 2021, 16, 53–59.
  13. Ferretti, A.; Iannotti, F.; Proietti, L.; Massafra, C.; Speranza, A.; Laghi, A.; Iorio, R. The Accuracy of Patient-Specific Instrumentation with Laser Guidance in a Dynamic Total Hip Arthroplasty: A Radiological Evaluation. Sensors 2021, 21, 4232.
  14. Giannetti, S.; Bizzotto, N.; Stancati, A.; Santucci, A. Minimally invasive fixation in tibial plateau fractures using a preoperative and intra-operative real size 3D printing. Injury 2017, 48, 784–788.
  15. Bizzotto, N.; Tami, I.; Tami, A.; Spiegel, A.; Romani, D.; Corain, M.; Adani, R.; Magnan, B. 3D Printed models of distal radius fractures. Injury 2016, 47, 976–978.
  16. Wei, Y.P.; Lai, Y.C.; Chang, W.N. Anatomic three-dimensional model-assisted surgical planning for treatment of pediatric hip dislocation due to osteomyelitis. J. Int. Med. Res. 2020, 48, 0300060519854288.
  17. Ozturk, A.M.; Sirinturk, S.; Kucuk, L.; Yaprak, F.; Govsa, F.; Ozer, M.A.; Cagirici, U.; Sabah, D. Multidisciplinary Assessment of Planning and Resection of Complex Bone Tumor Using Patient-Specific 3D Model. Indian J. Surg. Oncol. 2019, 10, 115–124.
  18. Yang, L.; Grottkau, B.; He, Z.; Ye, C. Three-dimensional printing technology and materials for treatment of elbow fractures. Int. Orthop. 2017, 41, 2381–2387.
  19. Kubicek, J.; Tomanec, F.; Cerny, M.; Vilimek, D.; Kalova, M.; Oczka, D. Recent Trends, Technical Concepts and Components of Computer-Assisted Orthopedic Surgery Systems: A Comprehensive Review. Sensors 2019, 19, 5199.
  20. Giudice, A.L.; Ronsivalle, V.; Grippaudo, C.; Lucchese, A.; Muraglie, S.; Lagravère, M.O.; Isola, G. One Step before 3D Printing—Evaluation of Imaging Software Accuracy for 3-Dimensional Analysis of the Mandible: A Comparative Study Using a Surface-to-Surface Matching Technique. Materials 2020, 13, 2798.
  21. Solomon, J.; Lyu, P.; Marin, D.; Samei, E. Noise and spatial resolution properties of a commercially available deep learning-based CT reconstruction algorithm. Med. Phys. 2020, 47, 3961–3971.
  22. Cha, K.H.; Hadjiiski, L.; Samala, R.K.; Chan, H.P.; Caoili, E.M.; Cohan, R.H. Urinary bladder segmentation in CT urography using deep-learning convolutional neural network and level sets. Med. Phys. 2016, 43, 1882–1896.
  23. Dimitri, G.M.; Spasov, S.; Duggento, A.; Passamonti, L.; Lio, P.; Toschi, N. Unsupervised stratification in neuroimaging through deep latent embeddings. In Proceedings of the 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020.
  24. Kresanova, Z.; Kostolny, J. Comparison of Software for Medical Segmentation. Cent. Eur. Res. J. 2018, 4, 66–80.
  25. Virzì, A.; Muller, C.O.; Marret, J.B.; Mille, E.; Berteloot, L.; Grévent, D.; Boddaert, N.; Gori, P.; Sarnacki, S.; Bloch, I. Comprehensive Review of 3D Segmentation Software Tools for MRI Usable for Pelvic Surgery Planning. J. Digit. Imaging 2020, 33, 99–110.
  26. Nemec, S.F.; Molinari, F.; Dufresne, V.; Gosset, N.; Silva, M.; Bankier, A.A. Comparison of four software packages for CT lung volumetry in healthy individuals. Eur. Radiol. 2015, 25, 1588–1597.
  27. Alnaser, A.; Gong, B.; Moeller, K. Evaluation of open-source software for the lung segmentation. Curr. Dir. Biomed. Eng. 2016, 2, 515–518.
  28. Jalali, Y.; Fateh, M.; Rezvani, M.; Abolghasemi, V.; Anisi, M.H. ResBCDU-Net: A Deep Learning Framework for Lung CT Image Segmentation. Sensors 2021, 21, 268.
  29. Abdullah, J.T.; Omar, M.; Pritam, H.M.H.; Husein, A.; Rajion, Z.A. Comparison of 3D reconstruction of mandible for preoperative planning using commercial and open-source software. AIP Conf. Proc. 2016, 1791, 020001.
  30. Abdullah, J.Y.; Abdullah, A.M.; Hadi, H.; Husein, A.; Rajion, Z.A. Comparison of STL skull models produced using open-source software versus commercial software. Rapid Prototyp. J. 2019, 25, 1585–1591.
  31. Wallner, J.; Hochegger, K.; Chen, X.; Mischak, I.; Reinbacher, K.; Pau, M.; Zrnc, T.; Schwenzer-Zimmerer, K.; Zemann, W.; Schmalstieg, D.; et al. Clinical evaluation of semi-automatic open-source algorithmic software segmentation of the mandibular bone: Practical feasibility and assessment of a new course of action. PLoS ONE 2018, 13, e0196378.
  32. Taha, A.A.; Hanbury, A. Metrics for evaluating 3D medical image segmentation: Analysis, selection, and tool. BMC Med. Imaging 2015, 15, 29.
  33. Argüello, D.; Acevedo, H.G.S.; González-Estrada, O.A. Comparison of segmentation tools for structural analysis of bone tissues by finite elements. J. Phys. Conf. Ser. 2019, 1386, 012113.
  34. Soodmand, E.; Kluess, D.; Varady, P.A.; Cichon, R.; Schwarze, M.; Gehweiler, D.; Niemeyer, F.; Pahr, D.; Woiczinski, M. Interlaboratory comparison of femur surface reconstruction from CT data compared to reference optical 3D scan. BioMed. Eng. OnLine 2018, 17, 29.
  35. Purkait, R. Sex determination from femoral head measurements: A new approach. Leg. Med. 2003, 5, S347–S350.
  36. Steppacher, S.D.; Anwander, H.; Schwab, J.M.; Siebenrock, K.A.; Tannast, M. Femoral Dysplasia. Musculoskeletal Key. Available online: https://musculoskeletalkey.com/femoral-dysplasia/ (accessed on 18 May 2022).
  37. Poole, K.E.S.; Treece, G.M.; Mayhew, P.M.; Vaculík, J.; Dungl, P.; Horák, M.; Štěpán, J.J.; Gee, A.H. Cortical Thickness Mapping to Identify Focal Osteoporosis in Patients with Hip Fracture. PLoS ONE 2012, 7, e38466.
Figure 1. Pipeline’s part 1: Methodology for obtaining the 3D anatomical models to be compared (from the pre-surgery in vivo femoral heads and post-surgery in vitro bony specimens).
Figure 2. Pipeline’s part 2: 3D models comparison workflow. HD and VD are, respectively, the horizontal and vertical diameters of the femoral head.
Figure 3. Horizontal (HD) and vertical (VD) diameters of a femoral head.
Figure 4. Signed Euclidean distance between the reference (G_D) and test (G_A, G_B, and G_C) geometries for patients #1, #2, #3, and #4. The deviation is represented in the reference geometry. Red: −2.000 mm deviation. White: 0.000 mm deviation. Blue: +2.000 mm deviation.
Figure 5. Signed Euclidean distance between the reference (G_D) and test (G_A, G_B, and G_C) geometries for patients #5, #6, #7, and #8. The deviation is represented in the reference geometry. Red: −2.000 mm deviation. White: 0.000 mm deviation. Blue: +2.000 mm deviation.
Figure 6. Signed Euclidean distance between the reference (G_D) and test (G_A, G_B, and G_C) geometries.
Table 1. Patients’ characteristics.

Patient | Pathology | Image Properties
#1 | osteonecrosis | surface collapse-geodes
#2 | osteoarthritis | osteophytes-geodes
#3 | osteoarthritis | several geodes
#4 | osteoarthritis | several geodes
#5 | osteoarthritis | big osteophyte-geodes
#6 | osteoporotic fracture | decreased bone mass
#7 | osteoarthritis | osteophytes-geodes
#8 | osteoarthritis | osteophytes-geodes
#9 | osteonecrosis | surface collapse-geodes
#10 | osteoporotic fracture | decreased bone mass
Table 2. Signed Euclidean distance: average, absolute, and standard deviation between the reference (G_D) and test (G_A, G_B, and G_C) geometries. Values retrieved from CloudCompare (Figure 4 and Figure 5).

Case | Mimics (mm): Average / Absolute / Std. Dev. | 3D Slicer (mm): Average / Absolute / Std. Dev. | Syngo.via Frontier (mm): Average / Absolute / Std. Dev.
#1 | −0.353 / 0.353 / 1.068 | −0.800 / 0.800 / 1.024 | −0.583 / 0.583 / 0.971
#2 | −0.163 / 0.163 / 0.719 | −0.156 / 0.156 / 0.686 | −0.982 / 0.982 / 0.698
#3 | −0.032 / 0.032 / 0.639 | −0.037 / 0.037 / 0.556 | −0.337 / 0.337 / 0.824
#4 | −0.369 / 0.369 / 0.568 | −0.237 / 0.237 / 0.524 | −0.921 / 0.921 / 0.637
#5 | −0.245 / 0.245 / 1.000 | −0.198 / 0.198 / 0.794 | −0.670 / 0.670 / 0.913
#6 | −0.662 / 0.662 / 0.482 | −0.482 / 0.482 / 0.652 | −0.670 / 0.670 / 0.664
#7 | −0.328 / 0.328 / 0.730 | −0.168 / 0.168 / 0.782 | −0.357 / 0.357 / 0.743
#8 | −0.360 / 0.360 / 0.612 | −0.317 / 0.317 / 0.559 | −0.598 / 0.598 / 0.608
Mean | −0.314 / 0.353 / 0.727 | −0.299 / 0.299 / 0.697 | −0.640 / 0.640 / 0.757
Table 3. Femur heads’ measurements (HD: horizontal diameter and VD: vertical diameter) and mean values for the G_A, G_B, G_C, and G_D geometries.

Case | Reference (mm): VD(G_D) / HD(G_D) / Mean | Mimics (mm): VD(G_A) / HD(G_A) / Mean | 3D Slicer (mm): VD(G_B) / HD(G_B) / Mean | Syngo.via Frontier (mm): VD(G_C) / HD(G_C) / Mean
#1 | 50.220 / 49.010 / 49.615 | 49.470 / 48.670 / 49.070 | 46.630 / 45.650 / 46.140 | 48.300 / 46.110 / 47.205
#2 | 53.530 / 50.150 / 51.840 | 52.890 / 51.880 / 52.385 | 54.830 / 52.080 / 53.455 | 51.350 / 49.500 / 50.425
#3 | 49.730 / 49.090 / 49.410 | 50.620 / 49.670 / 50.145 | 50.790 / 49.930 / 50.360 | 49.000 / 48.690 / 48.845
#4 | 44.050 / 43.620 / 43.835 | 44.700 / 42.120 / 43.410 | 44.560 / 43.080 / 43.820 | 42.450 / 41.400 / 41.925
#5 | 54.110 / 53.830 / 53.970 | 51.930 / 51.820 / 51.875 | 56.700 / 50.790 / 53.745 | 54.480 / 49.850 / 52.165
#6 | 43.550 / 43.080 / 43.315 | 44.620 / 39.970 / 42.295 | 43.770 / 39.540 / 41.655 | 43.820 / 41.460 / 42.640
#7 | 58.060 / 55.160 / 56.610 | 57.320 / 54.540 / 55.930 | 57.480 / 54.810 / 56.145 | 56.800 / 53.750 / 55.275
#8 | 45.980 / 40.080 / 43.030 | 43.960 / 40.320 / 42.140 | 42.630 / 39.560 / 41.095 | 41.620 / 40.450 / 41.035
Table 4. Signed and absolute deviations for horizontal and vertical femoral head diameters obtained from the three segmentation software tools. Signed deviation (mm) = MeanG_X − MeanG_D; absolute deviation (mm) = |signed deviation|; signed percentage deviation (%) = signed deviation/MeanG_D; absolute percentage deviation (%) = absolute deviation/MeanG_D.

Case | Signed deviation (mm): Mimics / 3D Slicer / Syngo.via Frontier | Absolute deviation (mm): Mimics / 3D Slicer / Syngo.via Frontier | Signed percentage deviation (%): Mimics / 3D Slicer / Syngo.via Frontier | Absolute percentage deviation (%): Mimics / 3D Slicer / Syngo.via Frontier
#1 | −0.545 / −3.475 / −2.410 | 0.545 / 3.475 / 2.410 | −1.1% / −7.0% / −4.9% | 1.1% / 7.0% / 4.9%
#2 | 0.545 / 1.615 / −1.415 | 0.545 / 1.615 / 1.415 | 1.1% / 3.1% / −2.7% | 1.1% / 3.1% / 2.7%
#3 | 0.735 / 0.950 / −0.565 | 0.735 / 0.950 / 0.565 | 1.5% / 1.9% / −1.1% | 1.5% / 1.9% / 1.1%
#4 | −0.425 / −0.015 / −1.910 | 0.425 / 0.015 / 1.910 | −1.0% / 0.0% / −4.4% | 1.0% / 0.0% / 4.4%
#5 | −2.095 / −0.225 / −1.805 | 2.095 / 0.225 / 1.805 | −3.9% / −0.4% / −3.3% | 3.9% / 0.4% / 3.3%
#6 | −1.020 / −1.660 / −0.675 | 1.020 / 1.660 / 0.675 | −2.4% / −3.8% / −1.6% | 2.4% / 3.8% / 1.6%
#7 | −0.680 / −0.465 / −1.335 | 0.680 / 0.465 / 1.335 | −1.2% / −0.8% / −2.4% | 1.2% / 0.8% / 2.4%
#8 | −0.890 / −1.935 / −1.995 | 0.890 / 1.935 / 1.995 | −2.1% / −4.5% / −4.6% | 2.1% / 4.5% / 4.6%
Mean | −0.547 / −0.651 / −1.514 | 0.867 / 1.293 / 1.514 | −1.1% / −1.4% / −3.1% | 1.8% / 2.7% / 3.1%
Std. Dev. | 0.895 / 1.646 / 0.646 | 0.533 / 1.133 / 0.646 | 1.8% / 3.4% / 1.4% | 1.0% / 2.4% / 1.4%
Table 5. Usability evaluation of the three segmentation software tools.

Objective | Mimics | 3D Slicer | Syngo.via Frontier | Weight | Mimics (score) | 3D Slicer (score) | Syngo.via Frontier (score)
1. Automatisation degree | High | Average | Low | 10 | 1 | 2 | 3
2. Segmentation time | 30 min | 45 min | 40 min | 8 | 3 | 1 | 2
3. Training time | 300 min | 300 min | 180 min | 8 | 1 | 1 | 3
4. Cost | High | Freeware | Embedded in the CT | 8 | 1 | 3 | 2
5. 3D visualisation | High | High | High | 6 | 3 | 3 | 3
6. Supported Operating System (OS) | Windows, macOS, Linux | Windows, macOS, Linux | Windows | 6 | 3 | 3 | 1
7. Potential extension (plugins) | No | No | No | 4 | 1 | 1 | 1
Total | — | — | — | 50 | 1.80 | 2.00 | 2.28
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
