Article

High-Visibility Edge-Highlighting Visualization of 3D Scanned Point Clouds Based on Dual 3D Edge Extraction

1 College of Information Science and Engineering, Ritsumeikan University, Ibaraki 567-8570, Osaka, Japan
2 Shrewd Design Co., Ltd., Fushimi-ku, Kyoto 612-8362, Kyoto, Japan
3 Indonesian Heritage Agency, The Ministry of Education, Culture, Research, and Technology, Central Jakarta 10110, Indonesia
4 School of Information and Telecommunication Engineering, Tokai University, Minato-ku, Tokyo 108-8619, Japan
5 School of Intelligence Science and Technology, University of Science and Technology Beijing, 30 Xueyuan Road, Haidian District, Beijing 100083, China
6 Research Center for Area Studies (PRW), National Research and Innovation Agency (BRIN), Jakarta 12170, Indonesia
7 Nara National Research Institute for Cultural Properties, Nijo-cho, Nara 630-8577, Nara, Japan
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(19), 3750; https://doi.org/10.3390/rs16193750
Submission received: 14 September 2024 / Revised: 3 October 2024 / Accepted: 3 October 2024 / Published: 9 October 2024
(This article belongs to the Special Issue New Insight into Point Cloud Data Processing)

Abstract

Recent advances in 3D scanning have enabled the digital recording of complex objects as large-scale point clouds, which require clear visualization to convey their 3D shapes effectively. Edge-highlighting visualization is used to improve the comprehensibility of complex 3D structures by enhancing the 3D edges and high-curvature regions of the scanned objects. However, traditional methods often struggle with real-world objects due to inadequate representation of soft edges (i.e., rounded edges) and excessive line clutter, impairing resolution and depth perception. To address these challenges, we propose a novel visualization method for 3D scanned point clouds based on dual 3D edge extraction and opacity–color gradation. Dual 3D edge extraction separately identifies sharp and soft edges, integrating both into the visualization. Opacity–color gradation enhances the clarity of fine structures within soft edges through variations in color and opacity, while also creating a halo effect that improves both resolution and depth perception of the visualized edges. Computation times required for dual 3D edge extraction are comparable to conventional binary statistical edge-extraction methods. Visualizations with opacity–color gradation are executable at interactive rendering speeds. The effectiveness of the proposed method is demonstrated using 3D scanned point cloud data from high-value cultural heritage objects.

1. Introduction

Recent developments in three-dimensional (3D) scanning, particularly in photogrammetry and laser scanning, have enabled the quick and accurate recording of complex real-world 3D objects. However, 3D scanned point cloud data tend to be highly complex, especially when the scanned object possesses internal 3D structures. Highlighting the lines formed by the 3D edges of the scanned object, i.e., edge-highlighting visualization, is often helpful for understanding such complex 3D structures. However, as the complexity of the data increases, edge highlighting tends to draw too many lines, degrading comprehensibility. Therefore, this paper proposes an extended edge-highlighting visualization method aimed at improving visibility for complex, large-scale 3D scanned point cloud data.
Our method has two novel aspects for rendering 3D edges: dual 3D edge extraction and opacity–color gradation. First, dual 3D edge extraction incorporates not only highly curved sharp edges but also less curved soft edges into the edge-highlighting visualization. Real-world objects, as captured through scanning, usually have numerous soft edges in addition to sharp edges as part of their 3D structures. Therefore, to sufficiently represent the structural characteristics of scanned objects, the visualization should encompass both types of 3D edges: sharp and soft. Second, opacity–color gradation renders soft edges with gradations in both color and opacity, i.e., with gradations in the RGBA space. This gradation comprehensively displays the fine 3D structures recorded within the soft edges and also realizes the halo effect, which enhances the areas around sharp edges, leading to improved resolution and depth perception of 3D edges. The gradation in opacity is achieved using stochastic point-based rendering (SPBR), a high-quality transparent visualization method for 3D scanned point cloud data [1,2]. In our previous paper [3], we presented a preliminary report on our edge-highlighting visualization method, which was based solely on opacity gradation. This paper extends that work with several new contributions, including the two advantages mentioned above.
Figure 1 shows edge-highlighting visualization (a) without and (b) with the novel effects realized by our proposed method. The visualized data are the 3D scanned point cloud of the office building at the Tamaki Shinto Shrine (Nara Prefecture, Japan), a nationally important cultural property. The building contains many small rooms, resulting in the appearance of numerous lines through edge extraction. In Figure 1a, the excessive lines dramatically reduce visibility. Conversely, in Figure 1b, the edge lines successfully aid in observing complex internal structures.
This paper is organized as follows: In Section 2, we summarize related work. In Section 3, we review the foundational techniques necessary for developing the method proposed in this paper. In Section 4, we develop and demonstrate dual 3D edge extraction. In Section 5, we develop high-visibility edge-highlighting visualization based on the dual 3D edge extraction and the opacity–color gradation. In Section 6, we demonstrate the effectiveness of our edge-highlighting visualization by applying it to real 3D scanned point cloud data, particularly those of cultural heritage objects with high cultural value. In Section 7, we explain the performance of the proposed method, particularly focusing on the required computation time. Section 8 concludes the paper by summarizing our achievements.

2. Related Work

Research on highlighting the 3D edges of point-based objects, as well as the broader task of feature highlighting within 3D point clouds, has garnered significant attention (see [4] for a review). Various computational methods have been employed, with objectives including understanding the 3D structures of point-based objects, point classification, segmentation, and other related goals.
Recently, statistical methods employing eigenvalue-based 3D feature values have gained prominence [5,6,7,8,9,10,11,12,13]. In these statistical approaches, 3D feature values are derived from the eigenvalues of the local 3D covariance matrix, commonly referred to as the 3D structure tensor [14]. For each 3D point, the covariance matrix is computed through numerical analysis of local variances and covariances of point distributions within a specified radius of a spherical neighborhood. In this paper, we also utilize the eigenvalue-based 3D feature values to extract high-curvature areas, i.e., 3D edges, from a given point cloud. The statistical method easily achieves effective edge-highlighting visualization if most of the edges of the scanned object are sharp edges. This is because it is sufficient to determine a single feature-value threshold, such as f th , for a selected feature value, f, and simply assign a highlight color to the points with f > f th . However, many real-world objects contain numerous soft edges in addition to sharp edges. Therefore, if f th is too high, the fine structures represented by the soft edges will not be incorporated in the visualization. Conversely, if f th is too low, the soft edges will appear as thick bands, which is unsuitable for emphasizing the 3D structure of the target object. To address this, our study introduces multiple thresholds to separately extract sharp and soft edges. We then independently apply appropriate visualizations to each edge type, allowing effective edge-highlighting visualization of 3D scanned data that records complex real-world objects.
In computer graphics, the halo effect has been utilized to enhance the sense of depth in space by emphasizing the relative positioning of 3D lines [15,16]. In recent years, the halo effect has also been used to increase the visibility of internal 3D structures in the volumetric visualization of flows and medical data [17,18,19,20,21,22]. The halo effect is effective for enhancing the visibility of complex 3D shape visualizations. However, it has not yet been implemented in the visualization of 3D scanned data. The challenges in achieving the halo effect arise from the fact that 3D scanned data consist of large-scale scattered point clouds and, as mentioned above, contain a mixture of sharp and soft edges. In this study, we address these challenges by employing dual 3D edge extraction and opacity–color gradation.

3. Brief Reviews of the Techniques Used to Develop Our Method

In this section, we briefly review the techniques used in the development of our method, which is explained in the later sections. First, we review stochastic point-based rendering [1,2], which allows for controlling opacity in point-based surfaces. Next, we explain the edge-extraction feature values derived from the eigenvalues of the 3D structure tensor, i.e., the local 3D covariance matrix [5,6,7,8,9,10,11,12,13,14].

3.1. Stochastic Point-Based Rendering (SPBR)

Stochastic point-based rendering (SPBR) [1,2] enables quick and accurate 3D transparent visualization of large-scale 3D scanned point cloud data. Transparency is achieved through stochastic determination of pixel intensities. This stochastic algorithm achieves correct depth perception without requiring the time-consuming depth sorting of 3D points. When executing SPBR, we randomly divide the target point cloud into multiple point subsets, each of which has the same number of 3D points. By creating an image for each point subset with the point-occlusion effect incorporated and then averaging the created images of the subsets, we obtain the final transparent image.
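The SPBR procedure described above can be summarized compactly. The following Python sketch assumes a hypothetical render_opaque_subset function that produces an opaque, z-buffered image of a point subset; only the random subset splitting and image averaging that characterize SPBR are shown, not a full point renderer.

```python
import numpy as np

def spbr_average(points, colors, render_opaque_subset, L=100):
    """Minimal sketch of SPBR ensemble averaging: split the point cloud into
    L random subsets, render each subset opaquely (with z-buffer occlusion),
    and average the images to obtain the transparent result."""
    order = np.random.permutation(len(points))     # random assignment of points to subsets
    accum = None
    for idx in np.array_split(order, L):           # L subsets of (nearly) equal size
        img = render_opaque_subset(points[idx], colors[idx])  # hypothetical renderer
        accum = img.astype(np.float64) if accum is None else accum + img
    return accum / L                               # pixel-wise average gives transparency
```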
SPBR can control the opacity of a point-based surface by adjusting the local point density. The larger the local point density, the more opaquely the area is visualized. Downsampling through random point thinning decreases the local opacity, while upsampling by copying randomly selected points increases it. These downsampling and upsampling operations achieve a user-defined local opacity, α, which obeys the following opacity formula:
$\alpha = 1 - \left( 1 - \dfrac{s}{S} \right)^{n_{\mathrm{adj}} / L}$,   (1)
where S is the local surface area where α is evaluated, s is the cross-sectional area of a 3D point, which is tuned so that each point image overlaps exactly one pixel in the 2D image plane, $n_{\mathrm{adj}}$ is the adjusted number of 3D points after executing upsampling or downsampling operations in the local area S, and L is the number of point subsets, i.e., the number of divisions of the original point cloud for image averaging. The parameter L controls the statistical quality of the created images. In this paper, we set L = 100 for transparent visualization (see Figure 1, for example).
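Formula (1) can be inverted to obtain the adjusted point count required for a target opacity, n_adj = L · ln(1 − α) / ln(1 − s/S). The short Python sketch below illustrates this inversion; the function name and the numerical values are illustrative only, not taken from the paper.

```python
import math

def points_for_opacity(alpha, s, S, L=100):
    """Invert opacity Formula (1), alpha = 1 - (1 - s/S)**(n_adj / L),
    to get the adjusted point count n_adj needed in a local area S for a
    user-defined opacity alpha, given point cross-section s and L subsets."""
    return int(round(L * math.log(1.0 - alpha) / math.log(1.0 - s / S)))

# Hypothetical example: a local patch where each point covers 1% of the patch area.
n_adj = points_for_opacity(alpha=0.8, s=1e-6, S=1e-4, L=100)
```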

3.2. Eigenvalue-Based Feature Values for 3D Edge Extraction

To extract the 3D edges, i.e., the points forming the edge regions, we need an appropriate feature value that determines the edge regions. We adopt the feature value based on the eigenvalues of the 3D structure tensor [14], i.e., the local 3D covariance matrix: We consider a local spherical area centered at each point of the target point cloud (see Figure 2). In our study, we set its radius to 1 / 300 of the bounding-box diagonal length of the point cloud, ensuring the inclusion of a few hundred points in the area. We calculate the 3D covariance matrix of the point coordinates for each local spherical area. The feature value, f, which is assigned to the centered point, is defined and calculated from the three eigenvalues of the 3D covariance matrix, λ 1 , λ 2 , and λ 3 ( λ 1 λ 2 λ 3 0 ). When the feature value f is used, we normalize it so that the values of f are distributed in the range [ 0 , 1 ] .
The square roots of the three eigenvalues, λ 1 , λ 2 , and λ 3 , represent the spread of the point distribution in three mutually orthogonal directions as shown in Figure 2. λ 1 represents the largest spread, while λ 3 represents the smallest spread. This means λ 1 = λ 2 and λ 3 = 0 in the case of a flat plane, while λ 1 is much larger than λ 2 and λ 3 in the case of a linear edge. Based on these properties of the eigenvalues, we can construct various definitions of the feature value f by appropriately combining the eigenvalues λ 1 , λ 2 , and λ 3 [3,5,6,7,8,9,10,11,12,13]. In our study, we adopt the following definitions for feature value f, selecting the most suitable one for the target data:
Change of curvature:  $C_{\lambda} = \dfrac{\lambda_{3}}{\lambda_{1} + \lambda_{2} + \lambda_{3}}$ ,   (2)
Linearity:  $L_{\lambda} = \dfrac{\lambda_{1} - \lambda_{2}}{\lambda_{1}}$ ,   (3)
Aplanarity:  $\bar{P}_{\lambda} = 1 - \dfrac{\lambda_{2} - \lambda_{3}}{\lambda_{1}}$ .   (4)
Change of curvature, C λ , measures the minimal extension of the local point distribution, characterized by λ 3 . This definition extracts high-curvature areas directly. Linearity, L λ , measures the disparity between the two largest extensions of the local point distribution along independent directions. This definition utilizes the property of a 3D edge, where λ 1 becomes much larger than λ 2 . Aplanarity, P ¯ λ , is the complement of planarity, ( λ 2 − λ 3 ) / λ 1 , and measures the disparity between the two smallest extensions of the local point distribution along independent directions. This definition utilizes the property of a planar area, where λ 2 is comparable to λ 1 while λ 3 remains small, so that aplanarity becomes small on planes and large on 3D edges.
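The following Python sketch illustrates this eigenvalue-based feature computation with NumPy and SciPy. The function name and the brute-force per-point loop are ours (chosen for clarity rather than speed); the neighborhood radius of 1/300 of the bounding-box diagonal follows the setting described above.

```python
import numpy as np
from scipy.spatial import cKDTree

def edge_feature_values(points, radius, feature="change_of_curvature"):
    """For each point, gather its spherical neighborhood, compute the local 3x3
    covariance matrix, and combine its sorted eigenvalues into a feature value
    according to Eq. (2), (3), or (4); finally normalize f to [0, 1]."""
    tree = cKDTree(points)
    f = np.zeros(len(points))
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, radius)
        if len(idx) < 4:                                   # too few neighbors for a stable estimate
            continue
        lam = np.linalg.eigvalsh(np.cov(points[idx].T))[::-1]
        l1, l2, l3 = np.maximum(lam, 0.0)                  # lambda1 >= lambda2 >= lambda3 >= 0
        if feature == "change_of_curvature":               # Eq. (2)
            f[i] = l3 / (l1 + l2 + l3)
        elif feature == "linearity":                       # Eq. (3)
            f[i] = (l1 - l2) / l1
        elif feature == "aplanarity":                      # Eq. (4)
            f[i] = 1.0 - (l2 - l3) / l1
    return (f - f.min()) / (f.max() - f.min())

# Radius set to 1/300 of the bounding-box diagonal, as in Section 3.2:
# diag = np.linalg.norm(points.max(axis=0) - points.min(axis=0))
# f = edge_feature_values(points, radius=diag / 300.0)
```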
Gradations in opacity and color are achieved by introducing an appropriate function of a selected feature value f for each. For opacity, we introduce a point proliferation ratio A(f) as a function of f. A(f) is related to the local point count n adj through the opacity Formula (1), and its user-defined values are achieved through point upsampling, as explained in Section 3.1. Similarly, we introduce a color function, C(f) = (C R (f), C G (f), C B (f)). See Section 5 for more details.

4. Dual 3D Edge Extraction: Extracting Sharp and Soft Edges Independently

Traditionally, 3D edges refer to the local regions that constitute linear areas of high curvature in mathematical polyhedra. These regions can be termed sharp edges. However, real-world objects are not ideal polyhedra and typically include not only sharp edges but also soft edges, which create rounded valleys and ridges with finite widths. Conventional methods for extracting and visualizing 3D edges mainly concentrate on sharp edges. When these conventional methods are applied to soft edges, they often extract wide, band-like regions, leading to a degradation of the linear characteristics of 3D edges and reduced comprehensibility in visualization.
Our approach to handling real-world 3D objects with a mixture of sharp and soft edges involves extracting and processing these two types of edges independently. To achieve this, we introduce two feature-value thresholds, f th ( sharp ) and f th ( soft ) , for a selected feature value, f. The threshold f th ( sharp ) defines sharp-edge regions as f f th ( sharp ) , while f th ( soft ) defines soft-edge regions as f f th ( soft ) . Note that f th ( soft ) also defines the total edge regions consisting of both types of edges, as the inequality f th ( sharp ) f th ( soft ) holds.
We determine the appropriate values of the above two thresholds by following the two steps below:
  • Step 1: We set the threshold value f th ( soft ) to determine the total 3D edge region as f f th ( soft ) based on histogram analysis. We choose the beginning of the tail area of the feature-value histogram as f th ( soft ) (see Figure 3).
  • Step 2: We determine the threshold value f th ( sharp ) to discriminate between sharp edges and soft edges by tuning the sharp-edge ratio,
    $R^{(\mathrm{sharp})} = \dfrac{N^{(\mathrm{sharp})}}{N^{(\mathrm{sharp})} + N^{(\mathrm{soft})}}$ ,   (5)
    where N ( sharp ) and N ( soft ) are the numbers of points to form sharp edges and soft edges, respectively.
For Step 1, which determines the value of f th ( soft ) , we can use various techniques from conventional edge-extraction methods. We adopt a technique that identifies the tail area of the feature-value histogram, where f th ( soft ) is determined as the boundary between the tail area and the peak area. Figure 3 shows the feature-value histogram of the data used to create Figure 1. The extended tail area of the histogram corresponds to the total 3D edge region, while the peak area corresponds to the nonedge region. We can determine the value of f th ( soft ) around the boundary of the tail and peak areas, e.g., f th ( soft ) = 0.15 for the data used to create Figure 1. Although the tail–peak boundary used to determine f th ( soft ) is fine-tuned manually in this study, our proposed visualization in Section 5 is robust and not particularly sensitive to adjustments in f th ( soft ) . A more systematic approach, currently under development, employs the percentile technique. For instance, f th ( soft ) can be defined such that the number of points with f exceeding f th ( soft ) corresponds to a user-defined percentage of the total data points. Our experiments show that percentages between 30% and 50% yield effective results in many cases.
For Step 2, which determines the value of f th ( sharp ) , we usually set R ( sharp ) = 0.5 as the default value. Through numerous experiments, we found that this simple configuration works quite well for our high-visibility edge-highlighting visualization method explained in Section 5. In fact, most of the images presented in this paper are created by adopting the default value, R ( sharp ) = 0.5 . However, fine-tuning can be performed as needed depending on the target data. In our experience, 3D scanned data of a relief engraving, where soft edges tend to appear more frequently than sharp edges do, sometimes require a smaller R ( sharp ) , e.g., R ( sharp ) = 0.3 .
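The two-step threshold determination can be sketched in Python as follows, assuming the percentile variant of Step 1 mentioned above; the function and parameter names are ours, not part of the paper's implementation.

```python
import numpy as np

def dual_edge_thresholds(f, edge_percentage=40.0, r_sharp=0.5):
    """Step 1: choose f_th_soft so that the top `edge_percentage` percent of
    points form the total edge region (30-50% works well per Section 4).
    Step 2: choose f_th_sharp so that a fraction r_sharp of those edge points
    are classified as sharp (default sharp-edge ratio R_sharp = 0.5)."""
    f_th_soft = np.percentile(f, 100.0 - edge_percentage)
    edge_f = f[f >= f_th_soft]                        # total edge region (sharp + soft)
    f_th_sharp = np.quantile(edge_f, 1.0 - r_sharp)   # top r_sharp fraction -> sharp edges
    return f_th_soft, f_th_sharp

# A point is then a sharp edge if f >= f_th_sharp, a soft edge if
# f_th_soft <= f < f_th_sharp, and a non-edge point otherwise.
```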
Figure 4 shows examples of sharp and soft edge extraction from simple shapes. Sharp edges, determined by the two steps mentioned above, are shown in red, while soft edges are shown in cyan. The change of curvature C λ defined by expression (2) is used as the feature value f. Figure 4a depicts a cuboid with rounded edges. Figure 4b represents a shape where sharp and rounded edges coexist, constructed by combining a regular cuboid and the shape shown in Figure 4a. As shown in Figure 4a and the right portion of Figure 4b, soft edges tend to appear around sharp edges. By effectively utilizing this characteristic, soft edges can be used not to diminish the visibility of sharp edges but rather to highlight and enhance them (see Section 5 for details).
Here, we discuss the determination of the two threshold values, f th ( sharp ) and f th ( soft ) , to create Figure 4b. In Figure 4b, sharp edges appear both individually (on the left portion of the image) and as the centerlines of soft edges (on the right portion of the image). In such cases, very subtle parameter adjustments are often needed. In fact, the determined threshold values are f th ( soft ) = 0.00100 and f th ( sharp ) = 0.00131 , which are very similar values. Finding such a set of values through a trial-and-error approach is extremely time-consuming. However, following the two steps mentioned above significantly simplifies the determination process.
Figure 5 shows the result of the dual 3D edge extraction for the small rooms and Buddha statues in the cave of the Zuigan-ji Buddhist temple in Miyagi Prefecture, Japan. This figure contains 3D edges with colored centerlines, where sharp edges are represented in red and the surrounding narrow areas are represented by soft edges in cyan. Note that there are also 3D edges consisting solely of sharp edges and solely of soft edges. This observation highlights the importance of separately addressing sharp and soft edges to understand complex 3D structures.
Figure 6 shows the result of the dual 3D edge extraction for the wall reliefs in the Borobudur Temple in Indonesia. Even in reliefs, which are bumpy planes, soft edges generally appear around sharp edges. However, compared with truly 3D objects like those in Figure 5, soft edges tend to appear more frequently. As a result, relief shapes are often characterized by soft edges to a significant extent. Therefore, visualizing only sharp edges does not aid in understanding shapes. Note that the values of f th ( soft ) and f th ( sharp ) tend to become similar in such data, as in the case of Figure 4b, which makes our two-step determination of the threshold values advantageous.
Although simply assigning distinct colors to sharp and soft edges, as demonstrated in this section, is an effective way of visualization, we propose a more advanced edge-highlighting visualization method in Section 5. By leveraging the synergistic effects between sharp and soft edges, we can create images with high visibility such as Figure 1b.

5. High-Visibility Edge-Highlighting Visualization Based on the Opacity–Color Gradation of Soft Edges

In this section, we propose a high-visibility edge-highlighting visualization method, made possible by independently rendering sharp and soft edges with SPBR. This method relies on opacity–color gradation, in which simultaneous gradation of both opacity and color is applied to soft edges. In our preliminary report [3], the gradation was applied solely to opacity. By adding color gradation, we can visualize soft edges with sharper lines, making the method more effective for expressing the complex 3D structures of 3D scanned objects. Opacity–color gradation of soft edges also achieves a halo effect, greatly enhancing the depth perception of scenes with many edge lines, as shown in Figure 1b.
Our edge-highlighting visualization method uses SPBR (see Section 3.1) for rendering the extracted sharp and soft edges. When visualizing the edges, we utilize the degree of opacity, which is effectively controlled by SPBR. As long as we use SPBR, our edge-highlighting visualizations are essentially transparent visualizations. However, because the edges are assigned high opacity values, the resulting images appear opaque. Therefore, our edge-highlighting visualization does not need to be strictly classified as transparent visualization. When nonedge regions, which are assigned lower opacity, are included, as shown in Figure 1, the created image naturally becomes transparent.

5.1. Rendering of Sharp Edges

Sharp edges are rendered with constant opacity and color. Using the opacity–color settings described below, sharp edges are highlighted most brightly with a user-defined color in the visualized scene.
We assign the highest opacity to sharp edges. By using SPBR, opacity control based on Formula (1) becomes possible. The task is to determine a constant point proliferation ratio, A max , that achieves the value of n adj corresponding to a user-defined highest opacity according to Formula (1). However, it is usually sufficient and practical to simply set A max large enough to assign high opacity to sharp edges. Typically, A max is of the order of 10^1; for example, A max = 25 , meaning that upsampling is performed to increase the original point density by a factor of 25. For the sharp-edge color, we assign a user-defined constant highlight color, C = ( C R , C G , C B ) , such as ( 255 , 0 , 0 ) , which corresponds to red.
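In code, sharp-edge preparation for SPBR therefore reduces to raising the local point density by the constant factor A max and attaching the constant highlight color. The sketch below duplicates every sharp-edge point A max times, which is equivalent in density to copying randomly selected points when A max is an integer; the function name is ours.

```python
import numpy as np

def prepare_sharp_edges(points, A_max=25, highlight=(255, 0, 0)):
    """Upsample sharp-edge points by a constant factor A_max (raising local
    density, and hence opacity via Formula (1)) and assign a constant color."""
    up = np.repeat(points, A_max, axis=0)                        # A_max copies of each point
    colors = np.tile(np.array(highlight, dtype=np.uint8), (len(up), 1))
    return up, colors
```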

5.2. Rendering of Soft Edges

Soft edges are rendered using opacity–color gradation. To achieve this, we introduce two functions of the feature value f within the range f th ( soft ) f f th ( sharp ) : a point proliferation ratio A ( f ) and a color function C ( f ) = ( C R ( f ) , C G ( f ) , C B ( f ) ) . The details are explained below.
To control the soft-edge opacity, we define a point proliferation ratio A ( f ) as a function of the feature value f. The function A ( f ) is designed so that its value, and therefore the opacity (see Formula (1)), decreases as f decreases. According to this A ( f ) , the local surface opacity gradually decreases as a position in a soft-edge region moves farther from sharp-edge regions. A ( f ) is explicitly defined as follows, realizing opacity gradation in soft-edge regions:
$A(f) = \dfrac{A_{\max} - A_{\min}}{\left( f_{\mathrm{th}}^{(\mathrm{sharp})} - f_{\mathrm{th}}^{(\mathrm{soft})} \right)^{d_{\mathrm{opacity}}}} \left( f - f_{\mathrm{th}}^{(\mathrm{soft})} \right)^{d_{\mathrm{opacity}}} + A_{\min}$ ,   (6)
where A min is the smallest point proliferation ratio to be adopted at f = f th ( soft ) and the exponent d opacity is a parameter that controls the speed of opacity gradation. For typical 3D scanned point cloud data, d opacity = 1 is sufficient. For data with many soft edges, such as reliefs, it is sometimes better to set d opacity = 0.5 , making the opacity decrease more slowly as f decreases.
To control the soft-edge color, we similarly define a color function with red, green, and blue components: C ( f ) = ( C R ( f ) , C G ( f ) , C B ( f ) ) . The function C ( f ) is designed so that the local surface color gradually changes from the highlight color C to the background color, as a position in a soft-edge region moves farther from sharp-edge regions. C ( f ) is explicitly defined as follows, realizing color gradation for soft edges:
$\mathbf{C}(f) = \dfrac{\mathbf{C} - \mathbf{C}_{\mathrm{bg}}}{\left( f_{\mathrm{th}}^{(\mathrm{sharp})} - f_{\mathrm{th}}^{(\mathrm{soft})} \right)^{d_{\mathrm{color}}}} \left( f - f_{\mathrm{th}}^{(\mathrm{soft})} \right)^{d_{\mathrm{color}}} + \mathbf{C}_{\mathrm{bg}}$ ,   (7)
where C bg represents a user-defined background color in visualization, typically set to an achromatic color such as black or white. The exponent d color is a parameter that controls the speed of color gradation. For typical 3D scanned point cloud data, d color = 1 is sufficient.
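Since ( f − f th ( soft ) ) / ( f th ( sharp ) − f th ( soft ) ) is simply the normalized position of a soft-edge point within the soft-edge band, Formulas (6) and (7) both reduce to a power of that quantity. The Python sketch below evaluates both gradations for an array of soft-edge feature values; function and variable names are ours, not from the paper.

```python
import numpy as np

def soft_edge_gradation(f, f_soft, f_sharp, A_max=25, A_min=5,
                        d_opacity=1.0, d_color=1.0,
                        highlight=(255, 0, 0), background=(0, 0, 0)):
    """Evaluate the point proliferation ratio A(f) (Formula (6)) and the color
    C(f) (Formula (7)) for soft-edge points with f_soft <= f <= f_sharp."""
    t = (f - f_soft) / (f_sharp - f_soft)                # normalized position in the soft-edge band
    A = (A_max - A_min) * t**d_opacity + A_min           # opacity gradation via point proliferation
    hi, bg = np.asarray(highlight, float), np.asarray(background, float)
    C = (hi - bg)[None, :] * (t**d_color)[:, None] + bg  # color gradation toward the background
    return A, C.astype(np.uint8)
```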

5.3. Parameter Setting and Default Parameters

Although functions A ( f ) and C ( f ) include several user parameters, it is not difficult to find their appropriate values. In fact, the default parameters work quite well in many cases. The values of f th ( soft ) and f th ( sharp ) can be determined following the simple procedure explained in Section 4. As we explained, f th ( sharp ) can be automatically determined by setting R ( sharp ) , whose default value 0.5 works well in many cases. The highlight color C can be arbitrary. For the other parameters, A max , A min , d opacity , d color , and C bg , the default values work well in many cases: A max = 25 , A min = 5 , d opacity = d color = 1 , and C bg = ( 0 , 0 , 0 ) (black) or ( 255 , 255 , 255 ) (white).
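For reference, these defaults can be gathered into a single configuration; the dictionary below is only an illustrative grouping of the stated default values (the key names are ours), not an interface defined by the paper.

```python
# Default parameter values used in most visualizations in this paper (Section 5.3).
DEFAULT_PARAMS = {
    "R_sharp": 0.5,        # sharp-edge ratio used to derive f_th_sharp from f_th_soft
    "A_max": 25,           # point proliferation ratio assigned to sharp edges
    "A_min": 5,            # smallest proliferation ratio, at f = f_th_soft
    "d_opacity": 1.0,      # exponent controlling the speed of opacity gradation
    "d_color": 1.0,        # exponent controlling the speed of color gradation
    "C_bg": (0, 0, 0),     # background color: black (or (255, 255, 255) for white)
}
```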
For example, Figure 1 is created using the above default parameters with C = ( 0 , 0 , 0 ) , C bg = ( 255 , 255 , 255 ) , f th ( soft ) = 0.150 , and R ( sharp ) = 0.5 , which corresponds to f th ( sharp ) = 0.398 . For 3D scanned data of relief engravings or similar 2.5-dimensional objects, where soft edges are dominant, we should often set slightly smaller values for R ( sharp ) and d opacity . According to our experiments, R ( sharp ) values of 0.3 to 0.5 and d opacity values of 0.5 to 1.0 work well.

5.4. A Simple Example

Figure 7 demonstrates the two novel effects achieved by our edge-highlighting visualization method for a simple cuboid with rounded edges. The change of curvature C λ is used as the feature value f.
The first effect demonstrated in Figure 7 is the edge-thinning effect. The centerline regions of the edges, highlighted in the brightest red, represent sharp edges. On the other hand, the surrounding gradation areas exhibit soft edges. The soft-edge opacity gradually decreases, with the soft edges becoming almost transparent near boundaries, i.e., near the background areas in the created image. Similarly, the soft-edge color transitions gradually from the sharp-edge color (red) to the background color (black) as the distance from the sharp-edge region increases. This opacity–color gradation effectively makes the soft edges look thinner, which helps us understand the 3D structure through edge lines.
The second effect demonstrated in Figure 7 is the halo effect (see the circled areas). The soft edges partially obscure the edges in the background, significantly enhancing depth perception. Additionally, the halo effect improves the resolution of sharp edges by highlighting their silhouettes with soft edges. For these reasons, the halo effect significantly enhances the comprehensibility of large-scale 3D scanned objects, particularly those with complex inner structures.

6. Edge-Highlighting Visualization Experiments for 3D Scanned Point Clouds

In this section, we apply the edge-highlighting visualization method proposed in Section 5 to real 3D scanned point clouds and demonstrate its effectiveness.

6.1. Experiments on the Edge-Thinning Effect

In this subsection, we demonstrate the edge-thinning effect achieved through the opacity–color gradation of soft edges.
Figure 8 presents edge-highlighting visualizations of the same 3D scanned data used in Figure 5. Figure 8a is an image created using conventional binary edge highlighting, where a constant high opacity value is assigned to the entire edge region, including both sharp and soft edges. As shown in Figure 5, both sharp and soft edges are necessary to fully highlight 3D edge regions. However, the conventional binary edge-highlighting method is ineffective because the soft-edge regions are not always visualized as sharp lines. In fact, in Figure 8a, many soft edges appear as scattered dots in nonedge planar areas. In contrast, Figure 8b, created using the proposed method, successfully represents most of the soft edges as sharp lines.
Figure 9 shows visualizations of a relief panel on the inner walls of the Cloud Platform at Juyong Pass, China. Figure 9a presents the photorealistic point-based rendering with the original color for the readers’ reference. Figure 9b shows the result of edge-highlighting visualization using the proposed method. The global structures of the carved characters are highlighted by the brightest sharp edges. Additionally, the local fine details, such as eyebrows and wrinkles in clothes, are highlighted by soft edges with graded color (see the partially magnified views). Dual 3D edge extraction and opacity–color gradation of soft edges enable such hierarchical feature highlighting.

6.2. Experiments on the Halo Effect

In this subsection, we demonstrate the halo effect, which is achieved through the synergistic effects of sharp and soft edges.
Figure 10 shows edge-highlighting visualizations of a different, more complex part of the 3D scanned object depicted in Figure 5 and Figure 8, created using the same feature value f and parameters as those used in Figure 8. Figure 10a shows the conventional binary highlighting of edges, in which both sharp and soft edges are used, while Figure 10b presents the result of our proposed edge-highlighting visualization method. In Figure 10b, we can clearly observe halos created in soft-edge regions, which are not present in Figure 10a, resulting in improved resolution and depth perception of the edges, thereby clearly characterizing a complex 3D structure of the scanned object.
Figure 11 shows edge-highlighting visualizations of 3D scanned data of the festival float, Taishi-Yama, used in the Gion Festival of Kyoto City, Japan. Figure 11a shows an edge-highlighting visualization incorporating only sharp edges. The front–back relationship of the edges is unclear, because this object contains many sharp edges. On the other hand, Figure 11b shows an edge-highlighting visualization using our method. The halo effect significantly improves depth perception, making the front–back relationship of the edges clearer.
Finally, Figure 1 presented in Section 1 shows edge-highlighting visualizations similar to Figure 11, using the change of curvature C λ as f. Figure 1a displays an edge-highlighting visualization that incorporates only sharp edges. In contrast, Figure 1b presents an edge-highlighting visualization using our method that incorporates both sharp and soft edges. The soft edges are represented with opacity–color gradation, achieving the halo effect, which enhances the resolution and depth perception of 3D edges. The parameters adopted for Figure 1b are as follows: f th ( soft ) = 0.150 , f th ( sharp ) = 0.398 ( R ( sharp ) = 0.5 ) , C = ( 0 , 0 , 0 ) , C bg = ( 255 , 255 , 255 ) , and the default parameter values described in Section 5.3. Figure 1b and Figure 11b also demonstrate the application of our method for transparent fusion of the edge regions with the remaining non-edge regions. We can observe the entire object with the highlighted edges superimposed to illustrate its 3D structure. For this purpose, the black edge color, C = ( 0 , 0 , 0 ) , and the white background color, C bg = ( 255 , 255 , 255 ) , often work well.

6.3. Application to “See-Through” Visualization

Figure 12 demonstrates the application of the proposed edge-highlighting visualization method to “see-through” visualization of a complex 3D scanned object with a 3D inner structure. The scanned object is a campus building at Kyoto Women’s University in Kyoto City, Japan. Figure 12a shows the conventional transparent visualization using SPBR, where the inner 3D structure is observable owing to the transparency assigned to the object. However, this transparent visualization is unclear, because the inner 3D structure is complex and too many elements are included in the scene. In contrast, Figure 12b shows the result of the proposed edge-highlighting visualization method that incorporates both sharp and soft edges, extending the soft-edge regions by setting a smaller f th ( soft ) than usual. In Figure 12b, the inner structure of the building is more clearly visible compared to the transparent visualization in Figure 12a. As demonstrated by this example, our proposed edge-highlighting visualization often provides an effective “see-through” effect, enabling a comprehensible representation of the 3D inner structure of a complex scanned object.

7. Performance

For this work, computations for dual 3D edge extraction and the visualization processes were executed on our PC (MacBook Pro) with an Apple M2 Max chipset, a 38-core GPU, and 96 GB of memory. The computation speed depends on the CPU selection, while the number of points that can be handled depends on the main memory and GPU memory. (We also confirmed that data consisting of 10^8 3D points could be effectively handled even on a less powerful laptop PC with a 3.07 GHz Intel Core i7 processor, 8 GB of RAM, and an NVIDIA GeForce GT 480M GPU.)
In the dual 3D edge-extraction process explained in Section 4, most of the computation time is consumed traversing all the points stored in an octree, calculating the 3D covariance matrix at each point, and assigning an eigenvalue-based feature value to each point. This situation is common to conventional statistical methods (see Section 2). In Table 1, we compare the computation times for the conventional binary 3D edge extraction and the proposed dual 3D edge extraction, using the point cloud data for Figure 1 and the figures in Section 6. The computational time of each method depends on the point distribution of the target data. However, in any case, our proposed dual 3D edge extraction requires a computational time nearly comparable to that of the conventional binary 3D edge extraction.
Once 3D edge extraction is completed, the visualization process explained in Section 5 can be executed at an interactive visualization speed. This is because rendering is performed using SPBR. For details on the performance of SPBR, refer to [1,2].

8. Conclusions and Discussion

In this paper, we have proposed a novel method for reliable 3D edge extraction and high-visibility edge-highlighting visualization applicable to 3D scanned point clouds. This method helps us intuitively understand the complex 3D structures of scanned objects by using visualized sharp and soft edges as visual guides.
The dual 3D edge extraction captures low-curvature soft edges in addition to the high-curvature sharp edges. We demonstrated that the total 3D structures of real-world objects can be effectively represented by using both sharp and soft edges extracted by our dual 3D edge extraction. Therefore, our 3D edge extraction is not merely an improvement of conventional methods but rather an extended technique that captures areas not covered by traditional methods.
High-visibility edge-highlighting visualization is achieved by rendering the extracted soft edges with opacity–color gradation, reflecting feature-value variations in the soft-edge regions. This opacity–color gradation effectively represents the fine 3D structures recorded in the soft edges, while the sharp edges tend to depict the global 3D structures. Opacity–color gradation also achieves a halo effect, significantly improving the resolution and depth perception of 3D edges. Our edge-highlighting visualization, incorporating these novel features, allows for a comprehensible representation of the entire 3D structure of scanned objects.
The enhanced clarity mentioned above often allows our edge-highlighting approach to replace conventional transparent visualization, achieving a sufficiently effective see-through feature in the visualized scene. Combining our edge-highlighting visualization with conventional transparent visualization also proves highly effective.
We demonstrated the effectiveness of our proposed method by applying it to real 3D scanned data, particularly data from cultural heritage objects with high cultural value. The visualizations show that our method aids in understanding both the surface and internal 3D structures of complex objects in the real world, inspiring various novel applications of 3D scanned data.
There are still several issues to be further explored in the proposed method. For example, the procedure for determining f th ( sharp ) and f th ( soft ) could be further refined, even though the current approach is already practical. Additionally, selecting the most relevant path in the color space used for opacity–color gradation is an important issue that has yet to be fully understood. These issues will be addressed in our future work. As a promising direction for future work, we also plan to integrate the results of this study with deep learning techniques. For example, we will apply the highly visible sharp and soft edges, which were made possible in this study, to the digital 3D restoration of cultural heritage objects that are recorded only in 2D photographs. Restoration accuracy should be improved by combining two approaches: learning from the 3D scanned data of similar existing cultural heritage objects and incorporating the information on the sharp and soft edges through multimodal learning. We have already published our preliminary results using existing edge-highlighting techniques in the literature [23].

Author Contributions

Conceptualization, Y.Y. and S.T. (Satoshi Tanaka); methodology, Y.Y., S.T. (Satoshi Takatori), K.H., L.L. and S.T. (Satoshi Tanaka); software, Y.Y., M.A. and K.H.; validation, S.T. (Satoshi Takatori), M.A. and L.L.; formal analysis, K.H. and J.P.; investigation, Y.Y., S.T. (Satoshi Takatori), H.Y.; resources, B., F.I.T., J.P. and H.Y.; data curation, M.A., B., J.P., F.I.T. and H.Y.; writing—original draft preparation, Y.Y. and S.T. (Satoshi Tanaka); writing—review and editing, L.L., K.H. and S.T. (Satoshi Takatori); visualization, Y.Y. and S.T. (Satoshi Tanaka); supervision, L.L. and S.T. (Satoshi Tanaka); project administration, S.T. (Satoshi Tanaka); funding acquisition, J.P. and S.T. (Satoshi Tanaka). All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by JSPS KAKENHI (Grant Number 21H04903) and Fundamental Research Funds for the Central Universities (Grant number FRF-IDRY-23-001).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The images presented in this paper are available upon request from the corresponding author. The point cloud data are not publicly available because they involve cultural heritage, which cannot be shared without authorization.

Acknowledgments

We thank Tamaki Shrine, Borobudur Temple, Zuigan-ji Temple, Juyong Pass Great Wall Museum, Taishi-Yama Preservation Society, and Kyoto Women’s University for their support in acquiring the 3D scanned data used in this paper. We also thank H. Date (Hokkaido University), Y. Kitao (Kyoto Women’s University), and Y. Wang (University of Science and Technology Beijing) for their valuable advice.

Conflicts of Interest

Author Motoaki Adachi was employed by the company Shrewd Design Co. Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviation is used in this manuscript:
SPBR: stochastic point-based rendering

References

  1. Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K. See-Through Imaging of Laser-scanned 3D Cultural Heritage Objects based on Stochastic Rendering of Large-Scale Point Clouds. ISPRS Ann. Photogramm. Remote. Sens. Spat. Inf. Sci. 2016, 3, 73–80. [Google Scholar] [CrossRef]
  2. Uchida, T.; Hasegawa, K.; Li, L.; Adachi, M.; Yamaguchi, H.; Thufail, F.I.; Riyanto, S.; Okamoto, A.; Tanaka, S. Noise-robust transparent visualization of large-scale point clouds acquired by laser scanning. ISPRS J. Photogramm. Remote. Sens. 2020, 161, 124–134. [Google Scholar] [CrossRef]
  3. Kawakami, K.; Hasegawa, K.; Li, L.; Nagata, H.; Adachi, M.; Yamaguchi, H.; Thufail, F.I.; Riyanto, S.; Brahmantara; Tanaka, S. Opacity-based edge highlighting for transparent visualization of 3D scanned point clouds. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, V-2-2020, 373–380. [Google Scholar] [CrossRef]
  4. Rusu, R.B. Semantic 3D Object Maps for Everyday Robot Manipulation (Springer Tracts in Advanced Robotics 85); Springer: Berlin/Heidelberg, Germany, 2013; ISBN 978-3-642-35478-6. [Google Scholar]
  5. West, K.F.; Webb, B.N.; Lersch, J.R.; Pothier, S.; Triscari, J.M.; Iverson, A.E. Context-driven automated target detection in 3D data. In Proceedings of the Proceedings Volume 5426, Automatic Target Recognition XIV, SPIE Digital Library, Orlando, FL, USA, 21 September 2004; pp. 133–143. [Google Scholar]
  6. Rusu, R.B. Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments. Künstliche Intelligenz 2010, 24, 345–348. [Google Scholar] [CrossRef]
  7. Toshev, A.; Mordohai, P.; Taskar, B. Detecting and parsing architecture at city scale from range data. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 398–405. [Google Scholar]
  8. Demantké, J.; Mallet, C.; David, N.; Vallet, B. Dimensionality based scale selection in 3D lidar point clouds. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2011, 38, 97–102. [Google Scholar]
  9. Mallet, C.; Bretar, F.; Roux, M.; Soergel, U.; Heipke, C. Relevance assessment of full-waveform lidar data for urban area classification. ISPRS J. Photogramm. Remote. Sens. 2011, 66, S71–S84. [Google Scholar] [CrossRef]
  10. Weinmann, M.; Jutzi, B.; Mallet, C. Feature Relevance Assessment for the semantic interpretation of 3D point cloud data. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, 2, 313–318. [Google Scholar] [CrossRef]
  11. Weinmann, M.; Jutzi, B.; Mallet, C. Semantic 3D scene interpretation: A framework combining optimal neighborhood size selection with relevant features. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 2, 181–188. [Google Scholar] [CrossRef]
  12. Dittrich, A.; Weinmann, M.; Hinz, S. Analytical and numerical investigations on the accuracy and robustness of geometric features extracted from 3D point cloud data. ISPRS J. Photogramm. Remote. Sens. 2017, 126, 195–208. [Google Scholar] [CrossRef]
  13. He, E.; Chen, Q.; Wang, H.; Liu, X. A Curvature Based Adaptive Neighborhood for Individual Point Cloud Classification. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 219–225. [Google Scholar] [CrossRef]
  14. Jutzi, B.; Gross, H. Nearest neighbour classification on laser point clouds to gain object structures from buildings. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2009, 38, 4–7. [Google Scholar]
  15. Appel, A.; Rohlf, F.J.; Stein, A.J. The haloed line effect for hidden line elimination. In Proceedings of the ACM SIGGRAPH ’79, Chicago, IL, USA, 8–10 August 1979; pp. 151–157. [Google Scholar]
  16. Everts, M.H.; Bekker, H.; Roerdink, J.B.; Isenberg, T. Depth-dependent halos: Illustrative rendering of dense line data. IEEE Trans. Vis. Comput. Graph. 2009, 15, 1299–1306. [Google Scholar] [CrossRef] [PubMed]
  17. Interrante, V.; Grosch, C. Strategies for effectively visualizing 3D flow with volume LIC. In Proceedings of the Proceedings. Visualization ’97 (Cat. No. 97CB36155), Phoenix, AZ, USA, 24 October 1997; pp. 421–424. [Google Scholar]
  18. Wenger, A.; Keefe, D.F.; Zhang, S.; Laidlaw, D.H. Interactive volume rendering of thin thread structures within multivalued scientific datasets. IEEE Trans. Vis. Comput. Graph. 2004, 10, 664–672. [Google Scholar] [CrossRef] [PubMed]
  19. Rheingans, P.; Ebert, D. Volume illustration: Nonphotorealistic rendering of volume models. IEEE Trans. Vis. Comput. Graph. 2001, 7, 253–264. [Google Scholar] [CrossRef]
  20. Svakhine, N.A.; Ebert, D.S. Interactive volume illustration and feature halos. In Proceedings of the 11th Pacific Conference on Computer Graphics and Applications, Canmore, AB, Canada, 8–10 October 2003; pp. 347–354. [Google Scholar]
  21. Bruckner, S.; Gröller, M.E. Enhancing depth-perception with flexible volumetric halos. IEEE Trans. Vis. Comput. Graph. 2007, 13, 1344–1351. [Google Scholar] [CrossRef] [PubMed]
  22. Tao, Y.; Lin, H.; Dong, F.; Clapworthy, G. Opacity Volume based Halo Generation for Enhancing Depth Perception. In Proceedings of the 12th International Conference on Computer-Aided Design and Computer Graphics, Jinan, China, 15–17 September 2011; pp. 418–422. [Google Scholar]
  23. Pan, J.; Li, L.; Yamaguchi, H.; Hasegawa, K.; Thufail, F.I.; Brahmantara; Tanaka, S. 3D reconstruction of Borobudur reliefs from 2D monocular photographs based on soft-edge enhanced deep learning. ISPRS J. Photogramm. Remote. Sens. 2022, 183, 439–450. [Google Scholar] [CrossRef]
Figure 1. Edge-highlighting visualizations (a) without and (b) with the novel effects realized by our proposed method. The visualized data are the 3D scanned point cloud of the Tamaki Shinto Shrine Office Building in Nara Prefecture, Japan (a nationally important cultural property). Feature value “change of curvature”, C λ , is used for extracting the edges (see Section 3.2 for details).
Figure 2. Schematic illustration of the meaning of the three eigenvalues λ 1 , λ 2 , and λ 3 of the local 3D covariance matrix: The spread of the point distribution in three mutually orthogonal directions is evaluated by the square roots of these three eigenvalues.
Figure 3. Histogram of the feature value f assigned to the 3D scanned points used to create Figure 1. The change of curvature C λ is used for f. The vertical axis of the histogram represents the number of points to which the value of f on the horizontal axis is assigned.
Figure 4. Examples of dual 3D edge extraction with simple shapes. The red color represents sharp edges, while the cyan color represents soft edges. The change of curvature C λ is used for f. The determined threshold values are as follows: (a) f th ( soft ) = 0.400 , f th ( sharp ) = 0.657 ( R ( sharp ) = 0.5 ). (b) f th ( soft ) = 0.00100 , f th ( sharp ) = 0.00131 ( R ( sharp ) = 0.5 ).
Figure 5. Dual 3D edge extraction for 3D scanned point cloud data of the cave of the Zuigan-ji Buddhist temple in Miyagi Prefecture, Japan. The aplanarity P ¯ λ is used for f, and threshold values of f th ( soft ) = 0.200 , f th ( sharp ) = 0.343 ( R ( sharp ) = 0.5 ) are adopted.
Figure 6. Dual 3D edge extraction for 3D scanned point clouds of wall reliefs in the Borobudur Temple, a UNESCO World Heritage Site in Indonesia. The change of curvature C λ is used for f. The determined threshold values are as follows: (a) f th ( soft ) = 0.0200 , f th ( sharp ) = 0.0452 ( R ( sharp ) = 0.5 ) . (b) f th ( soft ) = 0.0100 , f th ( sharp ) = 0.0548 ( R ( sharp ) = 0.5 ) .
Figure 7. The two novel effects of our edge-highlighting visualization method: (1) The edge-thinning effect, visible in the soft-edge regions, i.e., the silhouette areas of the sharp edges. (2) The halo effect, visible in the circled areas, where the soft edges partially obscure the edges in the background.
Figure 8. Edge-highlighting visualization of the same 3D scanned data used in Figure 5, utilizing the aplanarity P ¯ λ for f: (a) Conventional binary highlighting of the entire edge regions, including both sharp and soft edges ( f th ( soft ) = 0.20 , A max = A min = 25 ). (b) Edge-highlighting visualization using the proposed method ( f th ( soft ) = 0.200 , f th ( sharp ) = 0.343 ( R ( sharp ) = 0.5 ) , C = ( 255 , 0 , 0 ) , C bg = ( 0 , 0 , 0 ) , and the default parameter values described in Section 5.3).
Figure 9. Visualization of 3D scanned point cloud data of a relief panel on the inner walls of the Cloud Platform at Juyong Pass, China: (a) Photorealistic point-based rendering with the original color. (b) Edge-highlighting visualization using the proposed method (parts of the image are enlarged for better visibility of the soft-edge regions). For the creation of (b), the change of curvature C λ is used for f. The adopted parameters are as follows: f th ( soft ) = 0.01 , f th ( sharp ) = 0.050 ( R ( sharp ) = 0.3 ) , d opacity = 0.5 , d color = 1 , A max = 25 , A min = 5 , C = ( 255 , 0 , 0 ) , C bg = ( 0 , 0 , 0 ) .
Figure 10. Enhancing the visibility of edge-highlighting visualization using the halo effect for a different, more complex part of the 3D scanned object shown in Figure 5 and Figure 8. The same feature value f and parameters as those in Figure 8 are used: (a) Conventional binary highlighting of the entire edge region, including both sharp and soft edges. (b) Edge-highlighting visualization using the proposed method, demonstrating the halo effect.
Figure 11. Enhancing the visibility of edge-highlighting visualization using the halo effect for the festival float, Taishi-Yama, in the Gion Festival of Kyoto City, Japan, adopting the linearity L λ for f: (a) Conventional binary highlighting of the entire edge region, including both sharp and soft edges. ( f th ( soft ) = 0.20 , A max = A min = 25 ). (b) A visualization that incorporates both sharp and soft edges, with opacity–color gradation for the soft edges, showing the halo effect ( f th ( soft ) = 0.200 , f th ( sharp ) = 0.400 ( R ( sharp ) = 0.5 ) , C = ( 0 , 0 , 0 ) , C bg = ( 255 , 255 , 255 ) , and the default parameter values described in Section 5.3).
Figure 12. “See-through” visualizations of the 3D scanned data of a campus building at Kyoto Women’s University in Kyoto City, Japan: (a) Conventional transparent visualization using SPBR. (b) Edge-highlighting visualization using the proposed method, with the original colors chosen as the highlight color and the change of curvature C λ used for f ( f th ( soft ) = 0.01 , f th ( sharp ) = 0.182 ( R ( sharp ) = 0.5 ) , C = ( original   colors ) , C bg = ( 255 , 255 , 255 ) , and the default parameter values described in Section 5.3).
Table 1. Comparison of computational times between the conventional binary 3D edge extraction and the proposed dual 3D edge extraction.
Data        Num. Points (×10^6)    Time for Binary 3D Edge Extraction (s)    Time for Dual 3D Edge Extraction (s)
Figure 1    27.3                   1601                                      1508
Figure 8    11.4                   1018                                      2082
Figure 9    29.5                   5980                                      3517
Figure 10   32.4                   9342                                      14,306
Figure 11   8.3                    133                                       134
Figure 12   10.5                   809                                       765