1. Introduction
Recent developments in three-dimensional (3D) scanning, particularly photogrammetry and laser scanning, have enabled the quick and accurate recording of complex 3D objects in the real world. However, 3D scanned point cloud data tend to be highly complex, especially when the scanned object possesses internal 3D structures. Highlighting the lines formed by the 3D edges of the scanned object, i.e., edge-highlighting visualization, is often helpful for understanding such complex 3D structures. As the complexity of the data increases, however, too many lines tend to be drawn, deteriorating comprehensibility. Therefore, this paper proposes an extended edge-highlighting visualization method that aims to improve visibility for complex, large-scale 3D scanned point cloud data.
Our method has two novel aspects for rendering 3D edges: dual 3D edge extraction and opacity–color gradation. First, dual 3D edge extraction incorporates not only highly curved sharp edges but also less curved soft edges into the edge-highlighting visualization. Real-world objects, as captured through scanning, usually have numerous soft edges in addition to sharp edges as part of their 3D structures. Therefore, to sufficiently represent the structural characteristics of scanned objects, visualization should encompass both types of 3D edges: sharp and soft. Second, opacity–color gradation renders soft edges with gradations in both color and opacity, i.e., with gradations in the RGBA space. This gradation achieves a comprehensive display of the fine 3D structures recorded within the soft edges. It also realizes the halo effect, which enhances the areas around sharp edges, leading to improved resolution and depth perception of 3D edges. The gradation in opacity is achieved using stochastic point-based rendering (SPBR), a high-quality transparent visualization method for 3D scanned point cloud data [1,2]. In our previous paper [3], we presented a preliminary report on our edge-highlighting visualization method, which was based solely on opacity gradation. This paper extends that work with several new contributions, including the two aspects mentioned above.
Figure 1 shows edge-highlighting visualization (a) without and (b) with the novel effects realized by our proposed method. The visualized data are the 3D scanned point cloud of the office building at the Tamaki Shinto Shrine (Nara Prefecture, Japan), a nationally important cultural property. The building contains many small rooms, resulting in numerous lines appearing through edge extraction. In Figure 1a, the excessive lines dramatically reduce visibility. Conversely, in Figure 1b, the edge lines successfully aid in observing the complex internal structures.
This paper is organized as follows: In Section 2, we summarize related work. In Section 3, we review the foundational techniques necessary for developing the proposed method. In Section 4, we develop and demonstrate dual 3D edge extraction. In Section 5, we develop high-visibility edge-highlighting visualization based on dual 3D edge extraction and opacity–color gradation. In Section 6, we demonstrate the effectiveness of our edge-highlighting visualization by applying it to real 3D scanned point cloud data, particularly those of cultural heritage objects with high cultural value. In Section 7, we report the performance of the proposed method, focusing on the required computation time. Section 8 concludes the paper by summarizing our achievements.
2. Related Work
Research on highlighting the 3D edges of point-based objects, as well as the broader task of feature highlighting within 3D point clouds, has garnered significant attention (refer to [4] for a review). Various computational methods have been employed in this research, with objectives that include understanding the 3D structures of point-based objects as well as point classification, segmentation, and other related goals.
Recently, statistical methods employing eigenvalue-based 3D feature values have gained prominence [5,6,7,8,9,10,11,12,13]. In these statistical approaches, 3D feature values are derived from the eigenvalues of the local 3D covariance matrix, commonly referred to as the 3D structure tensor [14]. For each 3D point, the covariance matrix is computed through numerical analysis of the local variances and covariances of the point distribution within a spherical neighborhood of a specified radius. In this paper, we also utilize eigenvalue-based 3D feature values to extract high-curvature areas, i.e., 3D edges, from a given point cloud. The statistical method easily achieves effective edge-highlighting visualization if most of the edges of the scanned object are sharp edges. This is because it is sufficient to determine a single feature-value threshold, $f_{\rm th}$, for a selected feature value, $f$, and simply assign a highlight color to the points with $f \ge f_{\rm th}$. However, many real-world objects contain numerous soft edges in addition to sharp edges. Therefore, if $f_{\rm th}$ is too high, the fine structures represented by the soft edges are not incorporated into the visualization. Conversely, if $f_{\rm th}$ is too low, the soft edges appear as thick bands, which is unsuitable for emphasizing the 3D structure of the target object. To address this, our study introduces multiple thresholds to separately extract sharp and soft edges. We then independently apply appropriate visualizations to each edge type, allowing effective edge-highlighting visualization of 3D scanned data that record complex real-world objects.
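To make the statistical approach above concrete, the following is a minimal sketch of per-point, eigenvalue-based feature computation. It assumes NumPy/SciPy and the change-of-curvature feature $\lambda_{\min}/(\lambda_1+\lambda_2+\lambda_3)$; the function name, neighborhood-radius handling, and minimum-neighbor guard are illustrative choices, not the implementation of the cited methods.

```python
import numpy as np
from scipy.spatial import cKDTree

def change_of_curvature(points: np.ndarray, radius: float) -> np.ndarray:
    """Assign an eigenvalue-based feature value f to each 3D point,
    using the covariance matrix of its spherical neighborhood."""
    tree = cKDTree(points)
    f = np.zeros(len(points))
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, r=radius)
        if len(idx) < 4:                      # too few neighbors for a stable covariance
            continue
        cov = np.cov(points[idx].T)           # local 3x3 covariance (3D structure tensor)
        lam = np.linalg.eigvalsh(cov)         # eigenvalues in ascending order
        if lam.sum() > 0.0:
            f[i] = lam[0] / lam.sum()         # change of curvature, in [0, 1/3]
    return f
```

Larger values of this feature indicate higher local curvature, so 3D edges correspond to the upper range of $f$.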
In computer graphics, the halo effect has been utilized to enhance the sense of depth by emphasizing the relative positioning of 3D lines [15,16]. In recent years, it has also been used to increase the visibility of internal 3D structures in the volumetric visualization of flows and medical data [17,18,19,20,21,22]. The halo effect is effective for enhancing the visibility of complex 3D shape visualizations. However, it has not yet been implemented in the visualization of 3D scanned data. The challenges in achieving the halo effect arise from the fact that 3D scanned data consist of large-scale scattered point clouds and, as mentioned above, contain a mixture of sharp and soft edges. In this study, we address these challenges by employing dual 3D edge extraction and opacity–color gradation.
4. Dual 3D Edge Extraction: Extracting Sharp and Soft Edges Independently
Traditionally, 3D edges refer to the local regions that constitute linear areas of high curvature in mathematical polyhedra. These regions can be termed sharp edges. However, real-world objects are not ideal polyhedra and typically include not only sharp edges but also soft edges, which create rounded valleys and ridges with finite widths. Conventional methods for extracting and visualizing 3D edges mainly concentrate on sharp edges. When these conventional methods are applied to soft edges, they often extract wide, band-like regions, leading to a degradation of the linear characteristics of 3D edges and reduced comprehensibility in visualization.
Our approach to handling real-world 3D objects with a mixture of sharp and soft edges is to extract and process these two types of edges independently. To achieve this, we introduce two feature-value thresholds, $f_{\rm th}^{\rm sharp}$ and $f_{\rm th}^{\rm soft}$, for a selected feature value, $f$. The threshold $f_{\rm th}^{\rm sharp}$ defines sharp-edge regions as $f \ge f_{\rm th}^{\rm sharp}$, while $f_{\rm th}^{\rm soft}$ defines soft-edge regions as $f_{\rm th}^{\rm soft} \le f < f_{\rm th}^{\rm sharp}$. Note that $f_{\rm th}^{\rm soft}$ also defines the total edge region, consisting of both types of edges, as $f \ge f_{\rm th}^{\rm soft}$, since the inequality $f_{\rm th}^{\rm soft} \le f_{\rm th}^{\rm sharp}$ holds.
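For illustration, a minimal sketch of this classification (assuming NumPy; the integer label encoding is an arbitrary choice of ours):

```python
import numpy as np

def classify_edges(f: np.ndarray, f_sharp: float, f_soft: float) -> np.ndarray:
    """Label each point: 2 = sharp edge, 1 = soft edge, 0 = non-edge."""
    assert f_soft <= f_sharp
    labels = np.zeros(len(f), dtype=np.int8)
    labels[f >= f_soft] = 1          # total edge region: f >= f_th^soft
    labels[f >= f_sharp] = 2         # sharp-edge region: f >= f_th^sharp
    return labels
```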
We determine the appropriate values of the above two thresholds by following the two steps below:

Step 1: We set the threshold value $f_{\rm th}^{\rm soft}$ to determine the total 3D edge region as $f \ge f_{\rm th}^{\rm soft}$ based on histogram analysis. We choose the beginning of the tail area of the feature-value histogram as $f_{\rm th}^{\rm soft}$ (see Figure 3).

Step 2: We determine the threshold value $f_{\rm th}^{\rm sharp}$ to discriminate between sharp edges and soft edges by tuning the sharp-edge ratio,
$$ R_{\rm sharp} = \frac{n_{\rm sharp}}{n_{\rm sharp} + n_{\rm soft}}, $$
where $n_{\rm sharp}$ and $n_{\rm soft}$ are the numbers of points forming sharp edges and soft edges, respectively.
For Step 1, which determines the value of $f_{\rm th}^{\rm soft}$, we can use various techniques from conventional edge-extraction methods. We adopt a technique that identifies the tail area of the feature-value histogram, where $f_{\rm th}^{\rm soft}$ is determined as the boundary between the tail area and the peak area. Figure 3 shows the feature-value histogram of the data used to create Figure 1. The extended tail area of the histogram corresponds to the total 3D edge region, while the peak area corresponds to the non-edge region. We can determine the value of $f_{\rm th}^{\rm soft}$ around the boundary of the tail and peak areas, as was done for the data used to create Figure 1. Although the tail–peak boundary used to determine $f_{\rm th}^{\rm soft}$ is fine-tuned manually in this study, our proposed visualization in Section 5 is robust and not particularly sensitive to adjustments in $f_{\rm th}^{\rm soft}$. A more systematic approach, currently under development, employs the percentile technique: for instance, $f_{\rm th}^{\rm soft}$ can be defined such that the number of points with $f$ exceeding $f_{\rm th}^{\rm soft}$ corresponds to a user-defined percentage of the total data points. Our experiments show that percentages between 30% and 50% yield effective results in many cases.
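A minimal sketch of this percentile technique (assuming NumPy; the default fraction of 40% is merely one value inside the 30–50% range reported to work well):

```python
import numpy as np

def soft_threshold_by_percentile(f: np.ndarray, edge_fraction: float = 0.4) -> float:
    """Choose f_th^soft so that a user-defined fraction of all points
    satisfies f >= f_th^soft (here 40% of the points become edge points)."""
    return float(np.quantile(f, 1.0 - edge_fraction))
```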
For Step 2, which determines the value of $f_{\rm th}^{\rm sharp}$, we usually set $R_{\rm sharp}$ to its default value. Through numerous experiments, we found that this simple configuration works quite well for our high-visibility edge-highlighting visualization method explained in Section 5. In fact, most of the images presented in this paper are created by adopting the default value. However, fine-tuning can be performed as needed depending on the target data. In our experience, 3D scanned data of relief engravings, where soft edges tend to appear more frequently than sharp edges do, sometimes require a smaller $R_{\rm sharp}$.
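Given a target sharp-edge ratio, $f_{\rm th}^{\rm sharp}$ can be read off as a quantile of the feature values inside the total edge region. A minimal sketch under that assumption (function and variable names are ours, not the paper's):

```python
import numpy as np

def sharp_threshold_by_ratio(f: np.ndarray, f_soft: float, R_sharp: float) -> float:
    """Choose f_th^sharp so that a fraction R_sharp of the edge points
    (those with f >= f_th^soft) satisfy f >= f_th^sharp."""
    edge_f = f[f >= f_soft]                        # points in the total edge region
    return float(np.quantile(edge_f, 1.0 - R_sharp))
```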
Figure 4 shows examples of sharp- and soft-edge extraction from simple shapes. Sharp edges, determined by the two steps mentioned above, are shown in red, while soft edges are shown in cyan. The change of curvature defined by Expression (2) is used as the feature value $f$. Figure 4a depicts a cuboid with rounded edges. Figure 4b represents a shape where sharp and rounded edges coexist, constructed by combining a regular cuboid with the shape shown in Figure 4a. As shown in Figure 4a and the right portion of Figure 4b, soft edges tend to appear around sharp edges. By effectively utilizing this characteristic, soft edges can be used not to diminish the visibility of sharp edges but rather to highlight and enhance them (see Section 5 for details).
Here, we discuss the determination of the two threshold values, $f_{\rm th}^{\rm sharp}$ and $f_{\rm th}^{\rm soft}$, used to create Figure 4b. In Figure 4b, sharp edges appear both individually (in the left portion of the image) and as the centerlines of soft edges (in the right portion of the image). In such cases, very subtle parameter adjustments are often needed; in fact, the two determined threshold values are very similar. Finding such a set of values through a trial-and-error approach is extremely time-consuming. However, following the two steps mentioned above significantly simplifies the determination process.
Figure 5 shows the result of dual 3D edge extraction for the small rooms and Buddha statues in the cave of the Zuigan-ji Buddhist temple in Miyagi Prefecture, Japan. The figure contains 3D edges whose centerlines, corresponding to sharp edges, are shown in red, while the narrow surrounding areas, corresponding to soft edges, are shown in cyan. Note that there are also 3D edges consisting solely of sharp edges or solely of soft edges. This observation highlights the importance of addressing sharp and soft edges separately to understand complex 3D structures.
Figure 6 shows the result of dual 3D edge extraction for the wall reliefs of the Borobudur Temple in Indonesia. Even in reliefs, which are bumpy planes, soft edges generally appear around sharp edges. However, compared with truly 3D objects like those in Figure 5, soft edges tend to appear more frequently. As a result, relief shapes are often characterized by soft edges to a significant extent, and visualizing only sharp edges does not aid in understanding their shapes. Note that the values of $f_{\rm th}^{\rm sharp}$ and $f_{\rm th}^{\rm soft}$ tend to become similar for such data, as in the case of Figure 4b, which makes our two-step determination of the threshold values advantageous.
Although simply assigning distinct colors to sharp and soft edges, as demonstrated in this section, is an effective way of visualization, we propose a more advanced edge-highlighting visualization method in Section 5. By leveraging the synergistic effects between sharp and soft edges, we can create images with high visibility, such as Figure 1b.
5. High-Visibility Edge-Highlighting Visualization Based on the Opacity–Color Gradation of Soft Edges
In this section, we propose a high-visibility edge-highlighting visualization method, made possible by independently rendering sharp and soft edges with SPBR. This method relies on opacity–color gradation, in which simultaneous gradation of both opacity and color is applied to soft edges. In our preliminary report [3], the gradation was applied solely to opacity. By adding color gradation, we can visualize soft edges with sharper lines, making the method more effective for expressing the complex 3D structures of scanned objects. Opacity–color gradation of soft edges also achieves a halo effect, greatly enhancing the depth perception of scenes with many edge lines, as shown in Figure 1b.
Our edge-highlighting visualization method uses SPBR (see Section 3.1) for rendering the extracted sharp and soft edges. When visualizing the edges, we utilize the degree of opacity, which is effectively controlled by SPBR. As long as we use SPBR, our edge-highlighting visualizations are essentially transparent visualizations. However, because the edges are assigned high opacity values, the resulting images appear opaque; therefore, our edge-highlighting visualization need not be strictly classified as transparent visualization. When non-edge regions, which are assigned lower opacity, are included, as in Figure 1, the created image naturally becomes transparent.
5.1. Rendering of Sharp Edges
Sharp edges are rendered with constant opacity and color. Using the opacity–color settings described below, sharp edges are highlighted most brightly with a user-defined color in the visualized scene.
We assign the highest opacity to sharp edges. By using SPBR, opacity control based on Formula (1) becomes possible. The task is to determine a constant point proliferation ratio, $r$, that achieves the opacity value corresponding to a user-defined highest opacity according to Formula (1). However, it is usually sufficient and practical to set a sufficiently large $r$ to assign sufficiently high opacity to sharp edges. Typically, $r$ is of the order of tens; for example, $r = 25$ means that upsampling is performed to increase the original point density by a factor of 25. For the sharp-edge color, we assign a user-defined constant highlight color, $\mathbf{C}_{\rm hl}$, such as $(R, G, B) = (1, 0, 0)$, which corresponds to red.
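As an illustration of this constant upsampling, here is a minimal sketch (assuming NumPy; the jitter scale and function name are illustrative, and Formula (1) itself is not reproduced here). It proliferates each sharp-edge point by a factor $r$ with small random offsets, which is one simple way to raise SPBR-style opacity by increasing point density.

```python
import numpy as np

def proliferate(points: np.ndarray, r: int = 25, sigma: float = 1e-3) -> np.ndarray:
    """Upsample sharp-edge points by a constant factor r; each copy is jittered
    slightly so that the denser point set still covers the local surface.
    The jitter scale sigma is scene-dependent and purely illustrative."""
    rng = np.random.default_rng(0)
    copies = np.repeat(points, r, axis=0)                 # r copies of every point
    copies += rng.normal(scale=sigma, size=copies.shape)  # small random offsets
    return copies
```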
5.2. Rendering of Soft Edges
Soft edges are rendered using opacity–color gradation. To achieve this, we introduce two functions of the feature value $f$ within the range $f_{\rm th}^{\rm soft} \le f < f_{\rm th}^{\rm sharp}$: a point proliferation ratio, $r(f)$, and a color function, $\mathbf{C}(f)$. The details are explained below.
To control the soft-edge opacity, we define the point proliferation ratio $r(f)$ as a function of the feature value $f$. The function $r(f)$ is designed so that its value, and therefore the opacity (see Formula (1)), decreases as $f$ decreases. Under this $r(f)$, the local surface opacity gradually decreases as a position in a soft-edge region moves farther from the sharp-edge regions. We define $r(f)$ explicitly as follows, realizing opacity gradation in the soft-edge regions:
$$ r(f) = r_{\min} + (r - r_{\min}) \left( \frac{f - f_{\rm th}^{\rm soft}}{f_{\rm th}^{\rm sharp} - f_{\rm th}^{\rm soft}} \right)^{d_\alpha}, $$
where $r$ is the constant point proliferation ratio of sharp edges (see Section 5.1), $r_{\min}$ is the smallest point proliferation ratio, adopted at $f = f_{\rm th}^{\rm soft}$, and the exponent $d_\alpha$ is a parameter that controls the speed of opacity gradation. For typical 3D scanned point cloud data, the default value of $d_\alpha$ is sufficient. For data with many soft edges, such as reliefs, it is sometimes better to set $d_\alpha$ to a smaller value, making the opacity decrease more slowly as $f$ decreases.
To control the soft-edge color, we similarly define a color function with red, green, and blue components: $\mathbf{C}(f) = (C_R(f), C_G(f), C_B(f))$. The function $\mathbf{C}(f)$ is designed so that the local surface color gradually changes from the highlight color, $\mathbf{C}_{\rm hl}$, to the background color as a position in a soft-edge region moves farther from the sharp-edge regions. We define $\mathbf{C}(f)$ explicitly as follows, realizing color gradation for soft edges:
$$ \mathbf{C}(f) = \mathbf{C}_{\rm bg} + (\mathbf{C}_{\rm hl} - \mathbf{C}_{\rm bg}) \left( \frac{f - f_{\rm th}^{\rm soft}}{f_{\rm th}^{\rm sharp} - f_{\rm th}^{\rm soft}} \right)^{d_c}, $$
where $\mathbf{C}_{\rm bg}$ represents a user-defined background color, typically set to an achromatic color such as black or white. The exponent $d_c$ is a parameter that controls the speed of color gradation. For typical 3D scanned point cloud data, the default value of $d_c$ is sufficient.
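A minimal sketch of the two gradation functions as defined above (assuming NumPy; the numeric defaults in the signature are placeholders, not the paper's default parameters):

```python
import numpy as np

def opacity_color_gradation(f, f_soft, f_sharp,
                            r=25.0, r_min=1.0, d_alpha=1.0,
                            c_hl=(1.0, 0.0, 0.0), c_bg=(0.0, 0.0, 0.0), d_c=1.0):
    """Per-point proliferation ratio r(f) and color C(f) for soft edges
    (f_soft <= f < f_sharp)."""
    t = (np.asarray(f) - f_soft) / (f_sharp - f_soft)    # normalized feature in [0, 1)
    t = np.clip(t, 0.0, 1.0)
    r_f = r_min + (r - r_min) * t ** d_alpha             # opacity gradation (via Formula (1))
    c_hl, c_bg = np.asarray(c_hl), np.asarray(c_bg)
    c_f = c_bg + (c_hl - c_bg) * (t ** d_c)[..., None]   # color gradation toward background
    return r_f, c_f
```

Both functions interpolate along the same normalized feature value, so a point near a sharp edge is rendered bright and nearly opaque, while a point near the soft-edge boundary fades toward the background in both color and opacity.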
5.3. Parameter Setting and Default Parameters
Although the functions $r(f)$ and $\mathbf{C}(f)$ include several user parameters, it is not difficult to find appropriate values for them; in fact, the default parameters work quite well in many cases. The values of $f_{\rm th}^{\rm sharp}$ and $f_{\rm th}^{\rm soft}$ can be determined following the simple procedure explained in Section 4. As we explained, $f_{\rm th}^{\rm sharp}$ can be automatically determined by setting the sharp-edge ratio, $R_{\rm sharp}$, whose default value works well in many cases. The highlight color $\mathbf{C}_{\rm hl}$ can be arbitrary. For the other parameters, $r$, $r_{\min}$, $d_\alpha$, $d_c$, and $\mathbf{C}_{\rm bg}$, the default values work well in many cases, with $\mathbf{C}_{\rm bg}$ typically set to black or white.
For example, Figure 1 is created using the above default parameters together with the data-specific threshold values determined by the procedure in Section 4. For 3D scanned data of relief engravings or similar 2.5-dimensional objects, where soft edges are dominant, we should often set slightly smaller values for $d_\alpha$ and $d_c$; according to our experiments, values somewhat below the defaults work well for both exponents.
5.4. A Simple Example
Figure 7 demonstrates the two novel effects achieved by our edge-highlighting visualization method for a simple cuboid with rounded edges. The change of curvature is used as the feature value $f$.
The first effect demonstrated in Figure 7 is the edge-thinning effect. The centerline regions of the edges, highlighted in the brightest red, represent sharp edges, while the surrounding gradation areas represent soft edges. The soft-edge opacity gradually decreases, with the soft edges becoming almost transparent near their boundaries, i.e., near the background areas in the created image. Similarly, the soft-edge color transitions gradually from the sharp-edge color (red) to the background color (black) as the distance from the sharp-edge region increases. This opacity–color gradation effectively makes the soft edges look thinner, which helps us understand the 3D structure through edge lines.
The second effect demonstrated in Figure 7 is the halo effect (see the circled areas). The soft edges partially obscure the edges in the background, significantly enhancing depth perception. Additionally, the halo effect improves the resolution of sharp edges by highlighting their silhouettes with soft edges. For these reasons, the halo effect significantly enhances the comprehensibility of large-scale 3D scanned objects, particularly those with complex inner structures.
7. Performance
For this work, the computations for dual 3D edge extraction and the visualization processes were executed on a PC (MacBook Pro) with an Apple M2 Max chipset, a 38-core GPU, and 96 GB of memory. The computation speed depends on the CPU, while the number of points that can be handled depends on the main memory and GPU memory. (We also confirmed that large-scale point cloud data could be effectively handled even on a less powerful laptop PC with an Intel Core i7 processor, 8 GB of RAM, and an NVIDIA GeForce GT 480M GPU.)
In the dual 3D edge-extraction process explained in Section 4, most of the computation time is spent traversing all the points stored in an octree, calculating the 3D covariance matrix at each point, and assigning an eigenvalue-based feature value to each point. This situation is common to conventional statistical methods (see Section 2). In Table 1, we compare the computation times of the conventional binary 3D edge extraction and the proposed dual 3D edge extraction, using the point cloud data for Figure 1 and the figures in Section 6. The computation time of each method depends on the point distribution of the target data; in all cases, however, our proposed dual 3D edge extraction requires a computation time nearly comparable to that of the conventional binary 3D edge extraction.
Once 3D edge extraction is completed, the visualization process explained in Section 5 can be executed at interactive speed because rendering is performed using SPBR. For details on the performance of SPBR, refer to [1,2].
8. Conclusions and Discussion
In this paper, we have proposed a novel method for reliable 3D edge extraction and high-visibility edge-highlighting visualization applicable to 3D scanned point clouds. This method helps us intuitively understand the complex 3D structures of scanned objects by using visualized sharp and soft edges as visual guides.
The dual 3D edge extraction captures low-curvature soft edges in addition to the high-curvature sharp edges. We demonstrated that the total 3D structures of real-world objects can be effectively represented by using both sharp and soft edges extracted by our dual 3D edge extraction. Therefore, our 3D edge extraction is not merely an improvement of conventional methods but rather an extended technique that captures areas not covered by traditional methods.
High-visibility edge-highlighting visualization is achieved by rendering the extracted soft edges with opacity–color gradation, reflecting feature-value variations in the soft-edge regions. This opacity–color gradation effectively represents the fine 3D structures recorded in the soft edges, while the sharp edges tend to depict the global 3D structures. Opacity–color gradation also achieves a halo effect, significantly improving the resolution and depth perception of 3D edges. Our edge-highlighting visualization, incorporating these novel features, allows for a comprehensible representation of the entire 3D structure of scanned objects.
The above-mentioned enhanced clarity often allows our edge-highlighting approach to replace conventional transparent visualization, achieving a sufficiently effective see-through feature in the visualized scene. Combining our edge-highlighting visualization with conventional transparent visualization also proves highly effective.
We demonstrated the effectiveness of our proposed method by applying it to real 3D scanned data, particularly data from cultural heritage objects with high cultural value. The visualizations show that our method aids in understanding both the surface and internal 3D structures of complex objects in the real world, inspiring various novel applications of 3D scanned data.
There are still several issues to be further explored in the proposed method. For example, the procedure for determining $f_{\rm th}^{\rm sharp}$ and $f_{\rm th}^{\rm soft}$ could be further refined, even though the current approach is already practical. Additionally, selecting the most relevant path in the color space used for opacity–color gradation is an important issue that has yet to be fully understood. These issues will be addressed in our future work. As a promising direction, we also plan to integrate the results of this study with deep learning techniques. For example, we will apply the highly visible sharp and soft edges made possible by this study to the digital 3D restoration of cultural heritage objects that are recorded only in 2D photographs. Restoration accuracy should be improved by combining two approaches: learning from the 3D scanned data of similar existing cultural heritage objects and incorporating the information on sharp and soft edges through multimodal learning. We have already published preliminary results using existing edge-highlighting techniques [23].