Article

An MFF-SLIC Hybrid Superpixel Segmentation Method with Multi-Source RS Data for Rock Surface Extraction

1 School of Earth Sciences and Engineering, Hohai University, Nanjing 210098, China
2 School of Geographic and Biologic Information, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
3 College of Civil Engineering, Nanjing Forestry University, Nanjing 210037, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(5), 906; https://doi.org/10.3390/app9050906
Submission received: 14 January 2019 / Revised: 14 February 2019 / Accepted: 28 February 2019 / Published: 4 March 2019

Abstract

Multi-source RS data integration is a crucial technology for rock surface extraction in geology. Terrestrial laser scanning (TLS) and photogrammetry are the two primary non-contact active measurement techniques. To extract comprehensive and accurate rock surface information from the integration of a TLS point cloud and digital images, segmentation based on the integrated result generated by registration is the crux. This paper presents a Multi-Features Fusion for Simple Linear Iterative Clustering (MFF-SLIC) hybrid superpixel segmentation algorithm to extract the rock surface accurately. The MFF-SLIC algorithm comprises three main parts: (1) construction of the mapping relationship between the TLS point cloud and digital images; (2) establishment of a distance measure model with multiple features for the initial superpixel segmentation; (3) hierarchical and optimized clustering of the superpixels. The proposed method was verified with columnar basalt data acquired in Guabushan Geopark, China. The results demonstrate that the segmentation method can be used for rock surface extraction with high precision and efficiency, and its results can support further geological statistics and analysis.

1. Introduction

Rock mass behavior is governed by rock surface properties, including fractures, joints, faults and other geological structures. For rock engineering research, a detailed investigation of the geological conditions is necessary. With information on the distribution, combination and geometry of the rock mass, both engineering geological evaluation and rock mass classification can proceed well. Accurate, efficient and comprehensive rock surface extraction is therefore valuable for transportation, hydropower and mining engineering, as it is necessary for engineering design, exploration, construction and evaluation.
However, in dangerous and inaccessible regions, such as alpine and gorge areas, traditional contact measurements cannot be carried out efficiently and safely within a limited time. Terrestrial laser scanning (TLS) and close-range photogrammetry, the two main non-contact active measurement techniques, can collect point cloud or digital image data and achieve rock surface extraction more comprehensively and conveniently in a virtual digital environment [1,2,3,4].
TLS can obtain an accurate 3D model of an object efficiently and acquire its spatial coordinates with high accuracy and speed, which can serve several purposes [5]. With its high spatial resolution, high temporal resolution and uniform accuracy, TLS data is well suited to many applications, especially in inaccessible and dangerous regions [3,6]. However, as TLS data consists of unstructured 3D point clouds, it is poor at expressing object features. As another non-contact active measurement technique, photogrammetry processes stereoscopic images delivering RGB and spectral channel information, and can be considered largely complementary to TLS [7,8]. The integration of these multi-source RS data therefore offers complementary advantages, which is valuable for geological investigation.
Registration is one crucial problem in rock surface extraction based on the integration of multi-source RS data, as the correlation between the TLS point cloud and digital images is obtained through registration. For the registration of the two data sources for geological objects, the authors have provided a detailed method in another article [9]. For the integration of multi-source RS data, it is also necessary to comprehensively analyze and evaluate the multiple features of the data and perform a correct and efficient segmentation, which is the key to accurate, efficient and complete rock surface extraction. This paper therefore focuses on the segmentation problem involved in integrating the TLS point cloud and digital images.
The first step is the construction of a direct mapping relationship between the two data sources. Texture mapping is a simple way to visualize a textured point cloud in 3D; however, it does not yield the true mapping relationship between the point cloud and the digital images. Exploring a new method for constructing the direct mapping relationship is therefore a precondition of rock surface extraction. Furthermore, to obtain an efficient and accurate segmentation result, full use should be made of both the differences and the correlations between the characteristics of geological objects in the multi-source RS data.
At present, there is a considerable body of research on point cloud and image segmentation that is valuable for the segmentation of integrated multi-source RS data. Point cloud segmentation methods include edge-based segmentation [10,11,12], model-fitting-based segmentation [13,14,15], region-based segmentation [16,17,18,19,20] and feature-clustering-based segmentation [21,22]. For a point cloud, spatial information alone is insufficient for segmentation, and segmentation based on a depth image generated from the point cloud incurs a high computational cost and information loss, which may affect the precision of the result. Image segmentation methods fall into three main groups: threshold-based segmentation [23,24,25], edge-based segmentation and region-based segmentation [26,27]. Instead of these pixel-wise segmentation methods, some scholars put forward the idea of superpixel segmentation, a preprocessing stage that groups pixels into “superpixels” [28]. A superpixel is a set of pixels with similar features, including texture, brightness and color. Describing image features with superpixels reduces the difficulty of extracting the edges of complex objects and improves segmentation efficiency. Generally, superpixel segmentation algorithms can be classified as: (1) graph-based, including Normalized Cuts, Superpixel Lattice and Entropy Rate; (2) gradient-ascent-based, including Watersheds, Mean-shift, Quick-shift, Turbopixels and SLIC (Simple Linear Iterative Clustering). These algorithms show different performance in efficiency, precision, boundary adherence and extensibility [29]. Compared with pixel-wise segmentation algorithms, superpixel segmentation better accounts for the association between local regional features, and merging and clustering on this basis can improve segmentation efficiency and precision. However, as most existing algorithms were proposed for image data and use only 2D image features, precision cannot be guaranteed for objects with similar image features but different spatial or other features.
Based on the above analysis, this paper develops a new hybrid superpixel segmentation algorithm, MFF-SLIC, for accurate, efficient and complete extraction of rock surfaces. The algorithm first obtains the mapping relationship between the TLS point cloud and digital images from the registration result, and then builds a new distance measure model for the initial superpixel segmentation. The new distance measure model fuses multiple features of both data types, including the color and pixel location of the digital images as well as the geological properties and spatial position of the TLS point cloud. With this distance measure model, the geological object can be divided into a number of superpixels with certain semantic features; next, the superpixels are clustered hierarchically and optimally (i.e., the initial segmentation result is further processed) to provide credible data for fine rock surface extraction. Finally, the segmentation method proposed in this paper was verified with columnar basalt data acquired in Guabushan Geopark in Nanjing, China. The results show that the MFF-SLIC hybrid superpixel segmentation algorithm can extract rock surfaces accurately based on the integration of multi-source RS data, even without any manual interaction, which provides a valuable data basis for further geological statistics and analysis.
The rest of this paper is organized as follows: Section 2 reviews the registration of multi-source RS data and the existing 2D SLIC algorithm for image segmentation. Section 3 describes the detailed methodology of the MFF-SLIC hybrid superpixel segmentation algorithm for rock surface extraction. In Section 4, an experiment with the columnar basalt data is conducted to illustrate the feasibility and applicability of the algorithm in geology. Section 5 draws the conclusions.

2. Related Work

2.1. Registration for the Multi-Source RS Data

Segmentation needs to make use of the direct correlation between the TLS point cloud and digital images, which can be obtained from the registration result. Existing registration algorithms for point clouds and images mainly fall into four groups: (1) registration using artificial targets (e.g., retro-reflective targets); (2) registration through corresponding features extracted separately from the images and the point cloud [30,31]; (3) registration between the point cloud generated from digital images and the point cloud acquired by TLS [32,33,34]; (4) registration between the intensity image generated from the point cloud and the digital image [35,36,37]. For geological objects, owing to the indeterminate distribution and irregular appearance of rock masses, it is difficult to directly extract corresponding primitives, including points, regular line segments and planar features, from the point cloud and images.
A G-Super4PCS algorithm for registration of the TLS point cloud and digital images is provided in [9]. With the Structure from Motion (SfM) technique, a point cloud can be generated from the digital images; it has a different scale, position and attitude from the TLS point cloud. The G-Super4PCS algorithm first performs key-scale rough estimation combining spin images and the cumulative contribution rate, then defines a generalized super 4-point congruent base as the registration primitive, and introduces rock surface feature constraints to improve the efficiency and accuracy of its extraction. On this basis, two key point clouds, extracted from the TLS point cloud and generated from the digital images respectively, are used for rough estimation of the rigid transformation parameters, and then the original TLS point cloud and the dense SfM point cloud from the digital images are used for fine registration.

2.2. 2D SLIC Algorithm for Image Segmentation

SLIC is a gradient-ascent-based segmentation algorithm, which generates superpixels by clustering pixels with a distance measure based on their proximity and color similarity in the image plane [38]. An example is shown in Figure 1. Figure 1a shows the original image, and Figure 1b shows the corresponding superpixel segmentation result.
The distance measure is constructed from a 5D vector consisting of the color vector $(L, a, b)$ and the pixel position coordinates $(x, y)$. Instead of the Euclidean distance, the 2D SLIC algorithm introduces a normalized distance measure that considers the superpixel size. Suppose the size of an image $I$ is $W$ by $H$; the number of pixels $N$ is then given by Equation (1).
$$N = W \times H,$$
The algorithm takes $K$ superpixels of approximately equal size as input. For an image $I$ with $N$ pixels, the size of each superpixel is $N/K$; therefore, the grid interval $S$ of the superpixels is computed as Equation (2).
$$S = \sqrt{N / K},$$
where S denotes the size of the superpixel. For the distance measure normalization, the value of S is used to determine weight coefficients of different feature components.
Choose $K$ superpixel cluster centers $C_k = [L_k, a_k, b_k, x_k, y_k]^T$ ($k = 1, \dots, K$) at regular grid intervals $S$. The distance measure is computed as Equation (3).
$$D_S = d_{Lab} + \frac{m}{S}\, d_{xy}, \qquad d_{Lab} = \sqrt{(L_k - L_i)^2 + (a_k - a_i)^2 + (b_k - b_i)^2}, \qquad d_{xy} = \sqrt{(x_k - x_i)^2 + (y_k - y_i)^2},$$
where $D_S$ denotes the normalized distance measure; $d_{Lab}$ denotes the Euclidean distance between the color vectors $(L_k, a_k, b_k)$ and $(L_i, a_i, b_i)$ in CIELAB color space; $d_{xy}$ denotes the Euclidean distance between pixels $(x_k, y_k)$ and $(x_i, y_i)$ in the image plane; and $m$ is a variable that controls the compactness of a superpixel. The greater the value of $m$, the more spatial proximity is emphasized and the more compact the cluster. Experience indicates that the optimal range of values for $m$ is [1, 20], which gives a good tradeoff between color similarity and spatial proximity [38].
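As a concrete illustration of Equation (3), the following minimal Python/NumPy sketch computes the normalized 2D SLIC distance between a cluster center and a pixel (the function name and the default value of $m$ are assumptions for illustration, not part of the original algorithm description):

```python
import numpy as np

def slic_distance(center, pixel, S, m=10.0):
    """Normalized 2D SLIC distance of Equation (3).

    center, pixel: 5D feature vectors [L, a, b, x, y] (CIELAB color + image position).
    S: grid interval of the superpixels, S = sqrt(N / K).
    m: compactness variable; a larger m gives more weight to spatial proximity.
    """
    c = np.asarray(center, dtype=float)
    p = np.asarray(pixel, dtype=float)
    d_lab = np.linalg.norm(c[:3] - p[:3])   # color distance in CIELAB space
    d_xy = np.linalg.norm(c[3:] - p[3:])    # planar distance in the image
    return d_lab + (m / S) * d_xy
```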

2.3. Segmentation for the Multi-Source RS Data

Existing segmentation methods for image and point cloud data mainly concentrate on extracting planar (or other geometrically primitive) features; most are computationally expensive and often use 2D or 3D information alone. With this in mind, several segmentation schemes combining 2D features from digital images with 3D features from point cloud data have been proposed.
Reference [39] builds 3D models of urban environments from 3D range scans and 2D images. The method uses a range segmentation algorithm to extract planar regions and linear features, so that a 3D range scan can be converted to a set of bounded planes and finite lines. However, its applicability is limited because planar regions and linear features are common only in urban environments, whereas geological objects are typically characterized by irregular features. Reference [40] proposes an extension of camera-only graph-based segmentation for segmenting a colored laser point cloud: segment unions are based on spatial proximity, and a dynamic segment-union criterion based on color and surface normal produces a high-quality segmentation. However, this method requires a fixed positional relationship between the camera and laser scanner for co-registration. Reference [41] takes both colorimetric and geometric data as input, clusters pixels and divides the colorimetric data using the SLIC algorithm, then produces a normal vector for each superpixel with a plane-fitting technique, and finally classifies the superpixels with a multi-class support vector machine. The scheme in reference [41] requires several colored markers to be placed in the scene before data acquisition, which is infeasible for some geological settings, such as dangerous and inaccessible alpine and gorge regions. Moreover, although it uses the highly efficient SLIC algorithm, it still performs segmentation only on the color vector and planar position of pixels and ignores the local rock surface attitude, an important feature of geological objects. This may affect the precision of rock surface extraction, especially in regions with similar color and texture.
The MFF-SLIC algorithm developed in this paper performs superpixel segmentation not only by color and planar position features but also by spatial and some geological features, which guarantees the accuracy of complicated rock surface extraction.

3. Methodology

Compared with other superpixel segmentation algorithms, the 2D SLIC algorithm has the following advantages: (1) equally sized, compact superpixels; (2) low computational cost and few input parameters; (3) an optimal tradeoff between under-segmentation error and boundary recall. However, from the definition of its distance measure it is apparent that only the color and planar position of pixels are involved, while spatial geometric and other attribute features, which can have a greater effect on the segmentation result, are ignored. For geological objects, attitude is an important attribute reflecting differences between rock surfaces; besides color and planar position, the rock surface attitude is an essential component of the distance measure for a reliable segmentation result. Moreover, in some circumstances the spatial location has an even greater impact on the segmentation of geological objects: for example, pixels with similar color and close planar positions but far apart in the direction perpendicular to the image plane may be assigned to the same superpixel by the 2D SLIC algorithm. Therefore, a new hybrid superpixel segmentation algorithm, MFF-SLIC, has been developed in this paper. The algorithm first constructs a 1-N mapping relationship model between the TLS point cloud and digital images based on the registration result. Instead of the traditional distance measure of the 2D SLIC algorithm, a new multi-feature fusion distance measure model is defined, which combines the color and position of pixels in the digital images with the local attitude of the rock surface and the spatial location of the TLS point cloud. An initial segmentation result consisting of a set of superpixels is then obtained; these superpixels are further processed by hierarchical and optimized clustering to obtain the final result. The flowchart of the MFF-SLIC algorithm is shown in Figure 2.
During the initial segmentation, the clustering centers are moved to the position with the minimum gradient in a local neighborhood so that they do not fall on edge or noise pixels. The gradient for a 3 × 3 neighboring window is computed as Equation (4):
$$G(x, y) = \left\| I(x+1, y) - I(x-1, y) \right\|^2 + \left\| I(x, y+1) - I(x, y-1) \right\|^2,$$
where $I(x, y)$ denotes the CIELAB color vector $(L, a, b)$ of pixel $(x, y)$, and $\|\cdot\|$ denotes the norm of the color vector.
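The following sketch illustrates this low-gradient perturbation of a clustering center (assuming a NumPy image of CIELAB vectors; the function name is hypothetical):

```python
import numpy as np

def perturb_center(lab_image, x, y):
    """Move a cluster center to the lowest-gradient pixel in its 3 x 3 neighborhood,
    using the gradient of Equation (4), so that seeds avoid edge and noise pixels.

    lab_image: H x W x 3 array of CIELAB color vectors, indexed as [row, column].
    """
    H, W, _ = lab_image.shape
    best, best_grad = (x, y), np.inf
    for yy in range(max(y - 1, 1), min(y + 2, H - 1)):
        for xx in range(max(x - 1, 1), min(x + 2, W - 1)):
            gx = lab_image[yy, xx + 1] - lab_image[yy, xx - 1]
            gy = lab_image[yy + 1, xx] - lab_image[yy - 1, xx]
            g = np.sum(gx ** 2) + np.sum(gy ** 2)   # ||.||^2 + ||.||^2 of Equation (4)
            if g < best_grad:
                best_grad, best = g, (xx, yy)
    return best
```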
As pixel connectivity is not explicitly enforced, some isolated pixels may remain at the end of the superpixel segmentation iterations. A further connectivity-processing step is needed to merge the isolated pixels into the nearest superpixels.

3.1. 1-N Mapping Relationship between TLS Point Cloud and Digital Images

As a point in the TLS point cloud may be projected onto more than one image, this paper defines a 1-N mapping relationship between the TLS point cloud and the digital images. The coordinate transformation model is the key to obtaining this 1-N mapping relationship.
Consider a point $P_i = (X_{\mathrm{TLS}}, Y_{\mathrm{TLS}}, Z_{\mathrm{TLS}})^T$ in the TLS point cloud. The corresponding point $P_i' = (X'_{\mathrm{TLS}}, Y'_{\mathrm{TLS}}, Z'_{\mathrm{TLS}})^T$ in the SfM point cloud generated from the digital images can be obtained from the rigid transformation model of the registration, as shown in Equation (5).
$$P_i' = \begin{bmatrix} X'_{\mathrm{TLS}} \\ Y'_{\mathrm{TLS}} \\ Z'_{\mathrm{TLS}} \end{bmatrix} = \mu R \begin{bmatrix} X_{\mathrm{TLS}} \\ Y_{\mathrm{TLS}} \\ Z_{\mathrm{TLS}} \end{bmatrix} + T = \mu R P_i + T,$$
where $R$ and $T$ denote the rotation matrix and the translation vector, respectively, and $\mu$ is the scale factor.
Let $R_j$ and $T_j$ be the rotation matrix and translation vector of image $I_j$. In the camera coordinate system of image $I_j$, the coordinates of the point $P_i^{(j)}$ corresponding to $P_i$ are calculated by Equation (6):
$$P_i^{(j)} = \begin{bmatrix} X^{(j)}_{\mathrm{TLS}} \\ Y^{(j)}_{\mathrm{TLS}} \\ Z^{(j)}_{\mathrm{TLS}} \end{bmatrix} = R_j \begin{bmatrix} X'_{\mathrm{TLS}} \\ Y'_{\mathrm{TLS}} \\ Z'_{\mathrm{TLS}} \end{bmatrix} + T_j = R_j P_i' + T_j,$$
All elements of $P_i^{(j)}$ are divided by $Z^{(j)}_{\mathrm{TLS}}$, giving Equation (7):
$$p = \begin{bmatrix} X^{(j)}_{\mathrm{TLS}} / Z^{(j)}_{\mathrm{TLS}} \\ Y^{(j)}_{\mathrm{TLS}} / Z^{(j)}_{\mathrm{TLS}} \end{bmatrix} = \begin{bmatrix} x \\ y \end{bmatrix},$$
Let $f$ denote the camera focal length, and $k_1$ and $k_2$ the radial distortion parameters. The coordinates of the point $p'$ in the image coordinate system corresponding to $P_i$ are then obtained by Equation (8).
$$p' = \begin{bmatrix} x' \\ y' \end{bmatrix} = f \cdot r(p) \cdot \begin{bmatrix} x \\ y \end{bmatrix} = f \cdot r(p) \cdot p,$$
where,
$$r(p) = 1 + k_1 \cdot \| p \|^2 + k_2 \cdot \| p \|^4,$$
The image coordinate system $o\text{-}xy$ takes the principal point of the image as its origin, with the $x$ axis pointing right and the $y$ axis pointing up. However, in calculations the pixel coordinate system $o'\text{-}uv$, whose origin is located in the upper-left corner of the image, is commonly used instead. The relation between the image coordinate system and the pixel coordinate system is shown in Figure 3.
From Figure 3, it follows that for the point $P_i$ in the TLS point cloud, the corresponding point $p''$ in the pixel coordinate system is expressed as Equation (10).
$$p'' = \begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} (W/2) + x' \\ (H/2) - y' \end{bmatrix},$$
where $W$ and $H$ denote the width and height of the image, respectively.
Both the intrinsic and extrinsic parameters of each image and the transformation parameters between the TLS point cloud and the SfM dense point cloud can be obtained with the G-Super4PCS registration algorithm introduced in reference [9]. From the above process, the 1-N mapping relationship between the point cloud and the images can be established. A sketch map is shown in Figure 4, where $P$ denotes a 3D point in the TLS point cloud, and $p_1, \dots, p_N$ denote the corresponding image points in the digital images $I_1, \dots, I_N$.
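To make the chain of Equations (5)-(10) concrete, the following sketch projects one TLS point into an image, assuming all registration and camera parameters are given as NumPy arrays and scalars (the function name and the zero-depth guard are illustrative additions):

```python
import numpy as np

def tls_point_to_pixel(P, mu, R, T, R_j, T_j, f, k1, k2, W, H):
    """Project a TLS point P into image I_j following Equations (5)-(10).

    mu, R, T : scale, rotation and translation of the rigid registration (Eq. (5)).
    R_j, T_j : extrinsic parameters of image I_j (Eq. (6)).
    f, k1, k2: focal length and radial distortion coefficients (Eqs. (8)-(9)).
    W, H     : image width and height in pixels (Eq. (10)).
    Returns the pixel coordinates (u, v), or None if the depth is (near) zero.
    """
    P = np.asarray(P, dtype=float)
    P_sfm = mu * np.asarray(R) @ P + np.asarray(T)        # TLS frame -> SfM frame, Eq. (5)
    P_cam = np.asarray(R_j) @ P_sfm + np.asarray(T_j)     # SfM frame -> camera frame, Eq. (6)
    if abs(P_cam[2]) < 1e-9:
        return None
    p = P_cam[:2] / P_cam[2]                              # perspective division, Eq. (7)
    r = 1.0 + k1 * np.dot(p, p) + k2 * np.dot(p, p) ** 2  # radial distortion, Eq. (9)
    x, y = f * r * p                                      # image coordinates, Eq. (8)
    return (W / 2.0 + x, H / 2.0 - y)                     # pixel coordinates, Eq. (10)
```

Repeating this projection for every image whose view contains the point yields the 1-N correspondences sketched in Figure 4.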

3.2. Multi-Feature Fusion Distance Measure Model of MFF-SLIC

In the initial segmentation stage of the MFF-SLIC algorithm, $K$ clustering centers are initialized first, the distance between each pixel in the search area and the clustering centers is calculated with the multi-feature fusion distance measure model based on a local k-means method, and each pixel is assigned to the clustering center with the minimum distance. Finally, the image $I$ is iteratively segmented into $K$ superpixels with particular semantic characteristics.
During this process, instead of the traditional k-means method, which searches the whole image, the local k-means method defines a local square search area of $2S \times 2S$. The difference between the two methods is shown in Figure 5.
The main function of the distance measure model is to distinguish dissimilar pixels and assign similar pixels into the same superpixel. The quality of the distance measure model determines the accuracy of superpixel segmentation.
With the 1-N mapping relationship between the TLS point cloud and digital images, not only 2D image features such as the color and planar position of pixels but also 3D TLS point cloud features, including spatial geometric and other characteristics, should be considered in the distance measure model. For geological objects, the local attitude of the rock surface is an important component of the distance measure model. In addition, the 3D spatial location of pixels helps distinguish pixels that have similar texture and local attitude but are far apart, so that they are not assigned to the same superpixel. Based on this analysis, this paper defines a multi-feature fusion distance measure model according to the 1-N mapping relationship between the 3D TLS point cloud and digital images, which is used to calculate the distance between unclassified pixels and the clustering centers. The flowchart of the distance measure model construction is shown in Figure 6.

3.2.1. Color Feature Calculation of Digital Image

The transformation from RGB color space to Lab color space uses the intermediate variables XYZ. Therefore, the color feature calculation for digital images can be divided into two steps.
1. RGB to XYZ [42]
Suppose the three original color variables are r, g and b, each with values in [0, 255]. A function is defined as Equation (11):
$$\gamma(x) = \begin{cases} \left( \dfrac{x + 0.055}{1.055} \right)^{2.4}, & x > 0.04045 \\[4pt] \dfrac{x}{12.92}, & \text{otherwise,} \end{cases}$$
where $\gamma(x)$ is used to improve contrast by non-linear hue adjustment. The three original variables r, g and b are transformed as shown in Equation (12):
$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} \gamma(r/255) \\ \gamma(g/255) \\ \gamma(b/255) \end{bmatrix},$$
Then $[R, G, B]^T$ is transformed to $[X, Y, Z]^T$ by the matrix $M$, as shown in Equation (13):
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = M \begin{bmatrix} R \\ G \\ B \end{bmatrix},$$
where,
$$M = \begin{bmatrix} 0.4124 & 0.3576 & 0.1805 \\ 0.2126 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9505 \end{bmatrix},$$
2. XYZ to Lab [42,43]
Suppose a function $f(y)$ is defined as Equation (15):
$$f(y) = \begin{cases} y^{1/3}, & y > \left( \dfrac{6}{29} \right)^3 \\[4pt] \dfrac{1}{3} \left( \dfrac{29}{6} \right)^2 y + \dfrac{4}{29}, & \text{otherwise,} \end{cases}$$
where $f(y)$ is also a correction function. The color feature in CIELAB color space is then calculated as Equation (16):
$$\begin{bmatrix} L \\ a \\ b \end{bmatrix} = \begin{bmatrix} 116\, f(Y/Y_0) - 16 \\ 500\, [\, f(X/X_0) - f(Y/Y_0) \,] \\ 200\, [\, f(Y/Y_0) - f(Z/Z_0) \,] \end{bmatrix},$$
where $X_0$, $Y_0$ and $Z_0$ are used to normalize $X$, $Y$ and $Z$, respectively. Generally, $[X_0, Y_0, Z_0]^T$ is set to $[95.047, 100, 108.883]^T$.
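A minimal sketch of the two-step conversion in Equations (11)-(16) is given below; the scaling of XYZ by 100 to match the reference white [95.047, 100, 108.883] is an assumption commonly paired with this matrix $M$ but not stated explicitly above:

```python
import numpy as np

# Assumed reference white (see the values quoted after Equation (16)).
X0, Y0, Z0 = 95.047, 100.0, 108.883

M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def _gamma(x):
    # Equation (11): non-linear adjustment of the normalized RGB channels.
    return np.where(x > 0.04045, ((x + 0.055) / 1.055) ** 2.4, x / 12.92)

def _f(y):
    # Equation (15): correction function used in the XYZ -> Lab step.
    return np.where(y > (6 / 29) ** 3, np.cbrt(y), (1 / 3) * (29 / 6) ** 2 * y + 4 / 29)

def rgb_to_lab(r, g, b):
    """Convert 8-bit RGB values to CIELAB following Equations (11)-(16)."""
    rgb = _gamma(np.array([r, g, b], dtype=float) / 255.0)   # Equation (12)
    X, Y, Z = (M @ rgb) * 100.0                              # Equation (13), x100 scale assumed
    L = 116 * _f(Y / Y0) - 16                                # Equation (16)
    a = 500 * (_f(X / X0) - _f(Y / Y0))
    b_lab = 200 * (_f(Y / Y0) - _f(Z / Z0))
    return float(L), float(a), float(b_lab)
```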

3.2.2. Local Attitude Feature Calculation of TLS Point Cloud

The attitude of a rock surface is usually described by the dip angle θ and the dip α, whose value ranges are $[0°, 90°]$ and $[0°, 360°]$, respectively. The values of θ and α can be calculated from the normal vector according to the spatial geometry of the rock surface. The normal vector at any point in the point cloud is computed from the local plane fitted to the points in its neighborhood; the normal vector of the point is taken to be approximately equal to that of the fitted local plane. In this way, the local attitude at a point in the point cloud can be obtained from the normal vector of the corresponding point.
The geometric relation between the normal vector and the local attitude is shown in Figure 7. In Figure 7a, the normal vector of the point cloud is denoted $n = \vec{OA} = (n_x, n_y, n_z)^T$, and $\theta$ and $\vec{OA'}$ respectively denote the dip angle and the direction of dip of the local rock surface. In Figure 7b, the projection of the normal vector onto the horizontal plane is denoted $n' = \vec{OA'}$, and $\alpha$ and $\vec{OA'}$ respectively denote the magnitude and direction of the dip. The local attitude is calculated in one of the following two ways, depending on the values of the normal vector components.
1. All three components of the normal vector are not equal to 0.
Under this condition, the relation of the normal vector n , dip α and dip angle θ is described as Equation (17).
$$\begin{cases} n_x = n_z \cdot \tan\theta \cdot \cos\alpha \\ n_y = n_z \cdot \tan\theta \cdot \sin\alpha, \end{cases}$$
A calculation formula for dip α and dip angle θ derived from Equation (17) by using the elimination method is shown as Equation (18).
$$\begin{cases} \theta = \tan^{-1}\!\left( \sqrt{n_x^2 + n_y^2} \,/\, |n_z| \right) \\[4pt] \alpha = \tan^{-1}\!\left( n_y / n_x \right), \end{cases}$$
where the signs of $n_x$, $n_y$ and $n_z$ determine the quadrant of the dip α in the horizontal plane. The four geographic directions east, south, west and north are represented by E, S, W and N, respectively, as shown in Figure 7b; the relation between the three components of the normal vector and the dip is listed in Table 1.
2. At least one component of the normal vector is equal to 0.
For certain special rock surfaces, such as those developed horizontally or vertically, Table 1 is not applicable; the local attitude of the rock surface can instead be calculated from Table 2.
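The following sketch computes the local attitude from a point normal following Equation (18); resolving the quadrant of α with atan2 stands in for the sign rules of Tables 1 and 2 and assumes the x and y axes are aligned with the geographic axes of Figure 7 (a simplification for illustration):

```python
import numpy as np

def attitude_from_normal(n):
    """Dip angle theta and dip alpha (degrees) from a surface normal (Equation (18)).

    atan2 handles the quadrant of alpha and the horizontal/vertical special cases
    that Tables 1 and 2 enumerate, under the axis-alignment assumption noted above.
    """
    nx, ny, nz = np.asarray(n, dtype=float)
    theta = np.degrees(np.arctan2(np.hypot(nx, ny), abs(nz)))  # dip angle in [0, 90]
    alpha = np.degrees(np.arctan2(ny, nx)) % 360.0             # dip in [0, 360)
    return theta, alpha
```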

3.2.3. Distance Measure Model Construction of the MFF-SLIC

Suppose the fusion eigenvector of a pixel in an image $I$ of size $M \times N$ is denoted $(L, a, b, x, y, \theta, \alpha, X, Y, Z)^T$. For two pixels $I_i$ and $I_j$, the distance between the color components $(L_i, a_i, b_i)^T$ and $(L_j, a_j, b_j)^T$ in the digital image is given by Equation (19).
$$d_{Lab} = \sqrt{(L_j - L_i)^2 + (a_j - a_i)^2 + (b_j - b_i)^2},$$
The calculation formula for distance of pixel planar position components is described as Equation (20).
$$d_{xy} = \sqrt{(x_j - x_i)^2 + (y_j - y_i)^2},$$
For the TLS point cloud, the distance between the local attitudes of the two points (dip angles $\theta_i$ and $\theta_j$, and dips $\alpha_i$ and $\alpha_j$) is calculated as Equation (21).
$$\begin{cases} d_\theta = | \theta_j - \theta_i | \\ d_\alpha = | \alpha_j - \alpha_i |, \end{cases}$$
The distance for spatial location components of points in the TLS point cloud could be calculated as Equation (22).
$$d_{XYZ} = \sqrt{(X_j - X_i)^2 + (Y_j - Y_i)^2 + (Z_j - Z_i)^2},$$
The above distances are defined on different scales and have different effects on the multi-feature fusion distance measure model, so they should be normalized according to certain criteria. In this paper, the normalized distance measure model constructed from the eigenvector integrating multiple features is described by Equation (23).
$$D_S = \frac{d_{Lab}}{m} + \frac{d_{xy}}{S} + \frac{d_\theta}{\Delta\theta} + \frac{d_\alpha}{\Delta\alpha} + \frac{d_{XYZ}}{\Delta_{XYZ}},$$
where $S$ is the grid interval of the superpixels; $m$ is the compactness control variable of the superpixels; and $\Delta\theta$, $\Delta\alpha$ and $\Delta_{XYZ}$ respectively denote the maximum distances of the local dip angle, the dip and the spatial location.
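A minimal sketch of the fused distance of Equation (23), assuming each pixel carries the 10D fusion eigenvector defined above (function and parameter names are illustrative):

```python
import numpy as np

def mff_distance(ci, cj, S, m, d_theta_max, d_alpha_max, d_xyz_max):
    """Multi-feature fusion distance (Equation (23)) between two fusion eigenvectors
    ci and cj, each ordered as (L, a, b, x, y, theta, alpha, X, Y, Z).

    S is the superpixel grid interval, m the compactness variable, and the last
    three arguments are the normalizers Delta_theta, Delta_alpha and Delta_XYZ.
    """
    ci, cj = np.asarray(ci, dtype=float), np.asarray(cj, dtype=float)
    d_lab = np.linalg.norm(ci[:3] - cj[:3])      # Equation (19)
    d_xy = np.linalg.norm(ci[3:5] - cj[3:5])     # Equation (20)
    d_theta = abs(ci[5] - cj[5])                 # Equation (21)
    d_alpha = abs(ci[6] - cj[6])
    d_xyz = np.linalg.norm(ci[7:] - cj[7:])      # Equation (22)
    return (d_lab / m + d_xy / S + d_theta / d_theta_max
            + d_alpha / d_alpha_max + d_xyz / d_xyz_max)
```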
The TLS point cloud and the digital images are acquired independently and are not always associated in a one-to-one relationship: the TLS point cloud is composed of points distributed at a certain density, whereas the digital image consists of pixels distributed densely and uniformly. For pixels without corresponding points in the TLS point cloud, the spatial features can be obtained with the inverse distance weighting method, expressed as Equation (24).
$$\tilde{Z}(x_0, y_0) = \sum_{i=1}^{n} \omega_i Z(x_i, y_i), \quad \text{where} \quad \omega_i = \frac{1/d_i}{\sum_{i=1}^{n} 1/d_i},$$
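A sketch of this inverse distance weighting step (Equation (24)), assuming the n nearest pixels with known spatial features have already been found:

```python
import numpy as np

def idw_interpolate(values, distances):
    """Inverse distance weighting (Equation (24)): estimate a spatial feature for a
    pixel without a corresponding TLS point from n neighboring pixels that have one.

    values    : feature values Z(x_i, y_i) at the n neighbors.
    distances : planar distances d_i from the target pixel to each neighbor.
    """
    d = np.maximum(np.asarray(distances, dtype=float), 1e-9)  # guard against d_i = 0
    w = (1.0 / d) / np.sum(1.0 / d)                           # weights omega_i
    return float(np.dot(w, np.asarray(values, dtype=float)))
```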

3.3. Hierarchical and Optimized Clustering for Superpixels

The result of superpixel segmentation is a set of superpixels composed of pixels with similar features, such as the local attitude. Although these superpixels express the boundaries of the rock surface well, they cannot accurately describe the whole structure of the rock surface. To obtain the optimal result, the initial superpixels need to be processed further. This paper provides a hierarchical and optimized clustering strategy for superpixels, which takes superpixels as nodes, constructs the Region Adjacency Graph (RAG) and Nearest Neighbor Graph (NNG) by defining a regional dissimilarity function for superpixels based on the attitude of the rock surface, and achieves superpixel merging. The flowchart of hierarchical and optimized clustering for superpixels is shown in Figure 8. In this paper, the attitude of a superpixel corresponds to the histogram peaks of the dip angle and dip distributions of all pixels contained in that superpixel.
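As a small illustration of how a superpixel's attitude is taken from the histogram peaks of its pixels (the bin count is an assumption):

```python
import numpy as np

def superpixel_attitude(dip_angles, dips, bins=36):
    """Attitude (theta, alpha) of a superpixel as the histogram peaks of the dip
    angle and dip values of all pixels it contains."""
    def peak(values, lo, hi):
        hist, edges = np.histogram(values, bins=bins, range=(lo, hi))
        k = int(np.argmax(hist))
        return 0.5 * (edges[k] + edges[k + 1])   # center of the most populated bin
    return peak(dip_angles, 0.0, 90.0), peak(dips, 0.0, 360.0)
```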

3.3.1. Definition of Dissimilarity between Superpixels

Superpixel segmentation assigns a superpixel label to each pixel, while the clustering of superpixels reassigns a new label to each superpixel, i.e., superpixels with similar characteristics are assigned to the same group. In this paper, regional dissimilarity is defined as the criterion for superpixel clustering.
Suppose the image I is segmented into n superpixels, which is represented as the following set S:
$$S = \{ S_1, S_2, \dots, S_n \},$$
For the $i$th superpixel $S_i$ ($i = 1, \dots, n$), the corresponding set of pixels is denoted as Equation (26).
$$S_i = \{ p_1^{(i)}, p_2^{(i)}, \dots, p_m^{(i)} \},$$
where $m$ denotes the number of pixels in the $i$th superpixel, and $p_k^{(i)}$ denotes the $k$th pixel in the $i$th superpixel, $1 \le k \le m$.
The regional dissimilarity relation between superpixels S i and S j could be described as Equation (27).
$$\operatorname{diff}(S_i, S_j) = 1 - \frac{| S_i \cap S_j |}{| S_i \cup S_j |},$$
where the range of values is $[0, 1]$; the more similar the two superpixels, the closer the value is to 0, and otherwise the closer it is to 1.
In order to describe the regional dissimilarity in a global sense, Equation (27) should be combined with Equation (28).
$$\operatorname{sgn}(S) = \begin{cases} 1, & S_i \text{ is adjacent to } S_j \\ +\infty, & \text{otherwise,} \end{cases}$$
Suppose the dip angle and dip sets of the pixels in superpixel $S_i$ are denoted as Equation (29); the corresponding histograms can then be obtained.
$$\{ \theta_1^{(i)}, \theta_2^{(i)}, \dots, \theta_m^{(i)} \}, \qquad \{ \alpha_1^{(i)}, \alpha_2^{(i)}, \dots, \alpha_m^{(i)} \},$$
Denote the attitude of superpixel $S_i$ as $(\theta_i, \alpha_i)$, whose values equal the peaks of the above histograms. The regional dissimilarity function between superpixels $S_i$ and $S_j$ is then defined as Equation (30).
$$\operatorname{diff}(S_i, S_j) = \operatorname{diff}(\theta_i, \theta_j) + \operatorname{diff}(\alpha_i, \alpha_j),$$

3.3.2. Topological Description for Superpixels with RAG

Theoretically, the clustering of superpixels based on regional dissimilarity requires an exhaustive comparison between superpixels, with complexity $O(n^2)$, where $n$ is the number of superpixels. However, only adjacent superpixels can be merged. Therefore, this paper introduces the RAG to describe the topological relations between superpixels, so that only adjacent superpixels are compared and the complexity is effectively reduced to linear. Figure 9a,b respectively show an example superpixel segmentation result for an image and the corresponding RAG.
The RAG of the superpixel set $\{S_1, S_2, \dots, S_n\}$ is described by nodes and edges: each node represents a superpixel, and each edge connects two adjacent superpixels. An example RAG composed of nine superpixels is shown in Figure 10. In Figure 10b, $\{V_1, V_2, \dots, V_9\}$ is the set of nodes, i.e., the set of superpixels, and $\{E_1, E_2, \dots\}$ is the set of edges. In Figure 10a, the superpixel $S_1$ corresponds to the node $V_1$ in Figure 10b; the superpixels adjacent to $S_1$ are $S_2$, $S_5$ and $S_6$, with corresponding nodes $V_2$, $V_5$ and $V_6$, and the edges $E_1$, $E_2$ and $E_3$ respectively connect $S_1$ to $S_2$, $S_5$ and $S_6$.
The weight of each edge is calculated from the regional dissimilarity function. Suppose the attitudes of two adjacent superpixels $S_i$ and $S_j$ are $(\theta_i, \alpha_i)$ and $(\theta_j, \alpha_j)$, and the numbers of pixels they contain are $m_i$ and $m_j$, respectively. Let
$$\eta_{ij} = \frac{m_i \cdot m_j}{m_i + m_j},$$
The weight of the edge $E_k$ connecting two adjacent superpixels is computed as Equation (32).
$$W_{E_k} = \frac{1}{\eta_{ij}} \cdot \left( \frac{| \theta_i - \theta_j |}{\Delta\theta_{\max}} + \frac{| \alpha_i - \alpha_j |}{\Delta\alpha_{\max}} \right),$$
where $\Delta\theta_{\max}$ and $\Delta\alpha_{\max}$ respectively denote the maximum differences of the dip angle and the dip between all adjacent superpixels.
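A minimal sketch of the edge-weight computation of Equations (31) and (32), as reconstructed above (the function name is illustrative):

```python
def edge_weight(theta_i, alpha_i, m_i, theta_j, alpha_j, m_j,
                d_theta_max, d_alpha_max):
    """Weight of the RAG edge between two adjacent superpixels, from their attitudes
    (theta, alpha) and pixel counts m (Equations (31)-(32))."""
    eta = (m_i * m_j) / (m_i + m_j)                      # Equation (31)
    diss = (abs(theta_i - theta_j) / d_theta_max
            + abs(alpha_i - alpha_j) / d_alpha_max)      # normalized attitude dissimilarity
    return diss / eta                                    # Equation (32)
```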
For the RAG in Figure 10b, if the length of an edge is used to depict its weight, the weight of the bold black edge $E_{11}$ is the minimum, which indicates that the nodes $V_5$ and $V_6$, i.e., the superpixels $S_5$ and $S_6$, should be merged. The updated RAG is shown in Figure 11.

3.3.3. Fast Clustering for Superpixels Based on Nearest Neighbor Graph

The clustering of superpixels based on the RAG follows a bottom-up, hierarchical merge strategy. In this process, merging at the minimum edge weight involves repeatedly updating and storing the RAG, which is computationally expensive. Therefore, this paper combines the NNG with the RAG to obtain the required result with higher efficiency and less computation.
From Figure 10, it is clear that one superpixel node may correspond to several adjacent nodes and edges, and these edges have no direction, so the RAG is an undirected (scalar) graph. The NNG keeps the same nodes as the RAG, while the scalar edges are converted to directed (vector) ones. Figure 12 shows the conversion process between the RAG and the NNG.
To build the NNG, consider each node of the RAG, whose adjacent superpixels are connected by scalar edges whose lengths reflect the regional dissimilarity of the adjacent superpixels. The weights of these scalar edges are compared, the edge with the minimum weight is preserved, and its direction is specified from the current node to the adjacent node with the minimum weight. As each node corresponds to no more than one outgoing edge, the total number of edges is no more than the number of nodes. For example, in Figure 12, nodes $V_1$, $V_2$ and $V_7$ respectively correspond to edges $E_2$, $E_6$ and $E_{14}$; node pairs $V_3$-$V_4$, $V_5$-$V_6$ and $V_8$-$V_9$ respectively share edges $E_7$, $E_{11}$ and $E_{15}$. Comparing the RAG and the NNG, the NNG has fewer edges than the RAG although they have the same number of nodes, and the edges of the NNG are directed. An important property of the NNG is that it always contains at least one bidirectional edge, which connects the two adjacent superpixels with the minimal regional dissimilarity. Therefore, the clustering of superpixels can be achieved by searching all bidirectional edges in the NNG.
Based on the above analysis, superpixels can be clustered hierarchically and efficiently by combining the RAG and NNG.
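The sketch below illustrates one step of this combined RAG/NNG clustering: each node points to its most similar neighbor, and the bidirectional edge with the smallest weight identifies the next pair to merge. It is a simplified illustration; updating the adjacency, weights and superpixel attitudes after each merge is omitted, and the data structures are assumptions:

```python
def next_merge(adjacency, weights):
    """Select the next pair of superpixels to merge using the NNG built on the RAG.

    adjacency : dict mapping each superpixel label to the set of its RAG neighbors.
    weights   : dict mapping an ordered label pair (i, j) to the edge weight W_Ek
                (stored for both orientations).
    Returns the label pair with the smallest-weight bidirectional NNG edge, or None.
    """
    # NNG: every node points to its minimum-weight (most similar) RAG neighbor.
    nearest = {i: min(nbrs, key=lambda j: weights[(i, j)])
               for i, nbrs in adjacency.items() if nbrs}
    # Bidirectional edges mark locally minimal regional dissimilarity.
    mutual = [(i, j) for i, j in nearest.items() if nearest.get(j) == i and i < j]
    if not mutual:
        return None
    return min(mutual, key=lambda e: weights[e])
```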

4. Experiment and Discussion

To verify the segmentation algorithm based on the registration result obtained by the G-Super4PCS algorithm proposed by Zhang et al. [9], this paper also uses the columnar basalt data from Guabushan Geopark in Nanjing, China as the experimental data. The digital images were acquired with a Fujifilm X-T10 digital camera, and the TLS point cloud was acquired with a FARO Focus3D X330 terrestrial laser scanner. The test area is shown in Figure 13a (digital image) and Figure 13b (point cloud).
First, the 1-N mapping relationship between the point cloud and the images is obtained based on the registration result. According to the mapping relationships between the TLS point cloud and the different images, the colors and textures of the images can be added to the TLS point cloud. The correspondences between the TLS point cloud and images (a)~(g) are shown in 3D in Figure 14.
Second, the distance measure model calculation is an important part of the MFF-SLIC algorithm. For the color feature calculation of the digital images, the RGB color space is first transformed into Lab color space according to Section 3.2.1. The experimental TLS point cloud contains 251,861 3D points, and the local attitude of each 3D point is calculated according to Section 3.2.2. Figure 15a shows the distribution of the local attitudes of the 251,861 points (red points in the figure) as an equal-angle pole plot. The value ranges of the dip angle and dip are [0°, 90°] and [0°, 360°], respectively. The radius of the pole plot is 90°, so the distance from the blue circle center to each red point denotes the dip angle of that point, and its dip equals the angle between due north and the direction from the blue circle center to the red point. For example, the dip angles and dips of the two yellow cross marks in the pole plot are [65°, 23°] and [40°, 273°], respectively.
Figure 15b shows the contour plot generated from the local attitudes of the 251,861 points. The contour plot describes the density distribution of the poles with isopleths based on the pole plot. The color gradations in Figure 15b represent the state of fissure development.
In this paper, the overlapping area of Figure 14a is taken as an example for the following segmentation and clustering experiments of superpixels.
In the digital images, the attitudes of pixels with corresponding 3D points in the TLS point cloud are obtained through the 1-N mapping relationship between the two data sources. For pixels without corresponding 3D points in the TLS point cloud, attitudes are calculated by interpolation with Equation (24) in Section 3.2.3. The 2D visualization of the attitudes for a digital image is shown in Figure 16.
The next step is to construct the multi-feature fusion distance measure model with all feature components of the two data sources and perform MFF-SLIC hybrid superpixel segmentation. The number of superpixels is a key parameter affecting the segmentation result. This paper compares the 2D SLIC algorithm and the MFF-SLIC algorithm with various superpixel numbers. Figure 17 shows the comparison when the number of superpixels is set to 500; the red and green lines respectively represent the superpixel segmentation boundaries of the two algorithms. From the two local rectangular regions A and B, it can be observed that the minute fissures of the rock stratum are delineated more accurately by the MFF-SLIC algorithm, whereas under-segmentation cannot be avoided by the 2D SLIC algorithm, for example in the fissures marked in cyan, whose color and texture are similar to those of the surrounding area and which are therefore difficult to distinguish without the local geological attitude information. It is clear that the MFF-SLIC algorithm, which considers local geological features, performs better. Moreover, the superpixels segmented by the MFF-SLIC algorithm all carry geological attributes, which makes the subsequent clustering process more efficient and accurate.
Figure 18 shows the comparison of segmentation results with 1000 superpixels. It can be seen that MFF-SLIC still achieves better boundary consistency.
The above experimental results indicate that the MFF-SLIC algorithm is more precise and reliable than 2D SLIC. It is worth mentioning that the difference between the two algorithms is no longer obvious when the number of superpixels is very large. Figure 19 shows the comparison of segmentation results when the number of superpixels is set to 2500; the superpixels are so small that it is difficult to evaluate the performance of the two algorithms. The number of superpixels therefore affects both the efficiency and the precision of the algorithm and should be set reasonably according to the image size and superpixel size. Too few superpixels cause under-segmentation, which decreases the precision of rock surface extraction. Conversely, too many superpixels cause over-segmentation, which reduces segmentation efficiency; moreover, the superpixels become so small that the attitude differences between them are not obvious, making it difficult to distinguish valid rock surfaces.
Histograms of the attitude distributions of some superpixels are shown in Figure 20: the left column shows the dip angle histograms, and the right column shows the dip histograms. The horizontal axis represents the value of the dip angle or dip, and the vertical axis represents the number of pixels with that dip angle or dip in a superpixel. All histograms indicate that the attitudes of pixels within a superpixel are approximately consistent, i.e., most pixels in a superpixel have similar attitude characteristics, which demonstrates that the superpixel segmentation result is reliable and accurate.
The test image size is 784 by 845 pixels, approximately determined by the overlapping area, and the size and number of superpixels are experimentally set to 442 pixels and 1500 superpixels, respectively. The corresponding RAG and NNG for the superpixel segmentation result are shown in Figure 21. The blue nodes represent superpixels, each red line segment connects two adjacent superpixels, and the weight of a red line segment reflects the dissimilarity of the adjacent superpixels.
With the RAG and NNG, the hierarchical and optimized clustering of superpixels is carried out according to Section 3.3. Figure 22 shows the superpixel clustering result for the test image; after 20 iterations, the number of superpixels is reduced to 118.
The final segmentation result is obtained through a further congruency analysis of all overlapping areas between the two data sources. In total, 61 structural planes are successfully extracted from the segmentation results and marked in different colors; the 3D visualization is shown in Figure 23.
To verify the reliability of the MFF-SLIC hybrid superpixel segmentation algorithm, the rock surface extraction results obtained by the proposed segmentation method and by manual measurement by geologists are compared statistically. The comparison shows that the maximum dip difference and the maximum dip angle difference are both less than 6°, and the accuracy meets the requirements of geological analysis. Moreover, the contour plots of the two rock surface extraction results, obtained by the proposed method and by manual measurement, are shown in Figure 24; they are basically identical, i.e., the distributions of the structural planes are consistent.
All the above experimental results demonstrate that the MFF-SLIC hybrid superpixel segmentation algorithm can achieve reliable and accurate segmentation by integrating multiple features from the TLS point cloud and digital images without any manual interaction, which provides important data for further geological statistics and analysis. In addition, most existing segmentation methods involve many parameters, so the quality of their results depends heavily on the choice of parameters; in contrast, the whole MFF-SLIC pipeline needs few parameters, which improves the automation of segmentation.

5. Conclusions

Compared with traditional pixel-wise segmentation methods, superpixel segmentation methods account well for local regional correlation, and clustering on this basis improves segmentation efficiency and accuracy. However, most existing superpixel segmentation algorithms were developed for 2D images and define the distance measure only from the color and planar position of pixels. Geological objects may also be strongly affected by other characteristics such as spatial location and rock surface attitude, so such segmentation methods are not always appropriate for them; in particular, for objects with similar image features but different spatial or other features, reliable and accurate results are difficult to guarantee. In this paper, a new hybrid superpixel segmentation algorithm, MFF-SLIC, is proposed for rock surface extraction by integrating the TLS point cloud and digital images. To make full use of the characteristics of multi-source RS data, the algorithm constructs the 1-N mapping relationship between the TLS point cloud and digital images based on the registration results, defines a new distance measure model integrating multiple features (the color and planar position of pixels in the digital images as well as the spatial location and local rock surface attitude of 3D points in the TLS point cloud), and introduces a hierarchical and optimized clustering strategy for superpixels that uses a regional dissimilarity as the clustering criterion for adjacent superpixels and combines the RAG and NNG to improve clustering efficiency. The algorithm was verified with the columnar basalt data from Guabushan Geopark in Nanjing, China. The results demonstrate that the proposed method can extract rock surfaces with high precision and efficiency, and the results support further geological statistics and analysis.

Author Contributions

All the authors contributed to the development of the proposed segmentation method and this manuscript. X.Y. and R.Z. proposed the methodology. X.Y. and R.Z. performed the experiments and analyzed the results; X.Y. and Y.C. were responsible for data acquisition. R.Z. and X.Y. wrote the draft of the manuscript. H.L. guided the research and revised the manuscript.

Funding

The research is supported by the Research Project of Surveying, Mapping and Geoinformation of Jiangsu Province (No. JSCHKY201801), the Research Project of Nanjing University of Posts and Telecommunications (No. NY218136), the Natural Science Foundation of Jiangsu Province (No. BK20180765 and No. BK20180779), the National Key R&D Program of China (2018YFC1508603), and the National Natural Science Foundation of China (41830110).

Acknowledgments

The authors would like to thank the editors and anonymous reviewers who provided useful comments, which were incorporated into this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Bretar, F.; Arab-Sedze, M.; Champion, J.; Pierrot-Deseilligny, M.; Heggy, E.; Jacquemoud, S. An advanced photogrammetric method to measure surface roughness: Application to volcanic terrains in the Piton de la Fournaise, Reunion Island. Remote Sens. Environ. 2013, 135, 1–11.
2. Vasuki, Y.; Holden, E.J.; Kovesi, P.; Micklethwaite, S. Semi-automatic mapping of geological structures using UAV-based photogrammetric data: An image analysis approach. Comput. Geosci. 2014, 69, 22–32.
3. Slob, S.; van Knapen, B.V.; Hack, R.; Turner, K.; Kemeny, J. Method for automated discontinuity analysis of rock slopes with three-dimensional laser scanning. Transp. Res. Rec. J. Transp. Res. Board 2005, 1913, 187–194.
4. Gigli, G.; Casagli, N. Semi-automatic extraction of rock mass structural data from high resolution LIDAR point clouds. Int. J. Rock Mech. Min. Sci. 2011, 48, 187–198.
5. Zhang, R.; Schneider, D.; Strauß, B. Generation and comparison of TLS and SfM based 3D models of solid shapes in hydromechanic research. In Proceedings of the ISPRS-International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Prague, Czech Republic, 12–19 July 2016.
6. Dong, X.J.; Huang, R.Q. Application of 3D Laser Scanning Technology to Geologic Survey of High and Steep Slope. Chin. J. Rock Mech. Eng. 2006, 25, 3629–3635. (In Chinese)
7. Leu, S.S.; Chang, S.L. Digital image processing based approach for tunnel excavation faces. Autom. Constr. 2005, 14, 750–765.
8. Wang, F.Y.; Chen, J.P.; Yang, G.D.; Sun, F.Y.; Jiang, Q.G. Solution Models of Geometrical Information of Rock Mass Discontinuities Based on Digital Close Range Photogrammetry. J. Jilin Univ. 2012, 42, 1839–1846. (In Chinese)
9. Zhang, R.C.; Li, H.; Liu, L.F.; Wu, M.F. A G-Super4PCS Registration Method for Photogrammetric and TLS Data in Geology. ISPRS Int. Geo-Inf. 2017, 6, 129.
10. Lukács, G.; Martin, R.; Marshall, D. Faithful least-squares fitting of spheres, cylinders, cones and tori for reliable segmentation. Lect. Notes Comput. Sci. 1998, 1406, 671–686.
11. Khalifa, I.; Moussa, M.; Kamel, M. Range image segmentation using local approximation of scan lines with application to CAD model acquisition. Mach. Vis. Appl. 2003, 13, 263–274.
12. Gross, H.; Thoennessen, U. Extraction of lines from laser point clouds. In Proceedings of the Symposium of ISPRS Commission III Photogrammetric Computer Vision, Bonn, Germany, 20–22 September 2006; Volume 36, pp. 86–91.
13. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.
14. Bauer, J.; Karner, K.; Schindler, K.; Klaus, A.; Zach, C. Segmentation of building models from dense 3D point-clouds. In Proceedings of the 27th Workshop of the Austrian Association for Pattern Recognition, Magdeburg, Germany, 10–12 September 2003; pp. 253–259.
15. Boulaassal, H.; Landes, T.; Grussenmeyer, P. Automatic extraction of planar clusters and their contours on building façades recorded by terrestrial laser scanner. Int. J. Archit. Comput. 2009, 7, 1–20.
16. Rabbani, T.; van den Heuvel, F.A.; Vosselmann, G. Segmentation of point clouds using smoothness constraint. In Proceedings of the 2006 ISPRS Commission V Symposium on Image Engineering and Vision Metrology, Dresden, Germany, 25–27 September 2006; Volume 36, pp. 248–253.
17. Wang, M.; Tseng, Y.H. Automatic segmentation of LiDAR data into coplanar point clusters using an octree-based split-and-merge algorithm. Photogramm. Eng. Remote Sens. 2010, 76, 407–420.
18. Al-Durgham, M.; Habib, A. A Framework for the Registration and Segmentation of Heterogeneous Lidar Data. Photogramm. Eng. Remote Sens. 2013, 79, 135–145.
19. Riquelme, A.J.; Abellán, A.; Tomás, R.; Jaboyedoff, M. A new approach for semi-automatic rock mass joints recognition from 3D point clouds. Comput. Geosci. 2014, 68, 38–52.
20. Anders, K.; Hämmerle, M.; Miernik, G.; Drews, T.; Escalona, A.; Townsend, C.; Höfle, B. 3D Geological Outcrop Characterization: Automatic Detection of 3D Planes (Azimuth and Dip) Using Lidar Point Clouds. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 105–112.
21. Filin, S.; Pfeifer, N. Segmentation of airborne laser scanning data using a slope adaptive neighborhood. ISPRS J. Photogramm. Remote Sens. 2006, 60, 71–80.
22. Kim, C.; Habib, A.; Mrstik, P. New approach for planar patch segmentation using airborne and terrestrial laser data. In Proceedings of the ASPRS 2007 Annual Conference, Tampa, FL, USA, 7–11 May 2007.
23. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
24. Pun, T. Entropic thresholding, a new approach. Comput. Graph. Image Process. 1981, 16, 210–239.
25. Jiang, F.; Frater, M.R.; Pickering, M. Threshold-based image segmentation through an improved particle swarm optimisation. In Proceedings of the 2012 International Conference on Digital Image Computing Techniques and Applications, Fremantle, Australia, 3–5 December 2012.
26. Tremeau, A.; Borel, N. A region growing and merging algorithm to color segmentation. Pattern Recognit. 1997, 30, 1191–1203.
27. Panjwani, D.K.; Healey, G. Markov random field models for unsupervised segmentation of textured color images. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 939–954.
28. Ren, X.; Malik, J. Learning a classification model for segmentation. In Proceedings of the 9th IEEE International Conference on Computer Vision (ICCV 2003), Nice, France, 14–17 October 2003.
29. Stutz, D. Superpixel segmentation: An Evaluation. In Proceedings of the 37th German Conference on Pattern Recognition (GCPR 2015), Aachen, Germany, 7–10 October 2015.
30. Al-Manasir, K.; Fraser, C.S. Registration of terrestrial laser scanner data using imagery. Photogramm. Rec. 2006, 21, 255–268.
31. Yang, B.; Chen, C. Automatic registration of UAV-borne sequent images and LiDAR data. ISPRS J. Photogramm. Remote Sens. 2015, 101, 262–274.
32. Yao, J.; Ruggeri, M.R.; Taddei, P.; Sequeira, V. Automatic scan registration using 3D linear and planar features. 3D Res. 2010, 1, 1–18.
33. Yang, B.; Zang, Y. Automated registration of dense terrestrial laser-scanning point clouds using curves. ISPRS J. Photogramm. Remote Sens. 2014, 95, 109–121.
34. Besl, P.J.; McKay, N.D. A Method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256.
35. Wendt, A.; Heipke, C. Area-based orientation of brightness, range and intensity images. In Proceedings of the 2006 ISPRS Commission V Symposium on Image Engineering and Vision Metrology, Dresden, Germany, 25–27 September 2006.
36. Habib, A.F.; Alruzouq, R.I. Line-based modified iterated hough transform for automatic registration of multi-source imagery. Photogramm. Rec. 2004, 19, 5–21.
37. Maes, F.; Collignon, A.; Vandermeulen, D.; Marchal, G.; Suetens, P. Multimodality image registration by maximization of mutual information. IEEE Trans. Med. Imaging 1997, 16, 187–198.
38. Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Susstrunk, S. SLIC superpixels. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282.
39. Stamos, I.; Leordeanu, M. Automated feature-based range registration of urban scenes of large scale. In Proceedings of the IEEE International Conference on Computer Vision & Pattern Recognition, Madison, WI, USA, 18–20 June 2003.
40. Strom, J.; Richardson, A.; Olson, E. Graph-based segmentation for colored 3D laser point clouds. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots & Systems, Taipei, Taiwan, 18–22 October 2013.
41. Mahmoudabadi, H.; Shoaf, T.; Olsen, M.J. Superpixel clustering and planar fit segmentation of 3D LIDAR point clouds. In Proceedings of the IEEE 2013 4th International Conference on Computing for Geospatial Research & Application (COM.Geo), San Jose, CA, USA, 22–24 July 2013; pp. 1–7.
42. Sun, J. A Study of Color Controlling Techniques for Display; Tianjin University: Tianjin, China, 2006. (In Chinese)
43. Katherine, L.; Mery, D.; Pedreschi, F.; Jorge, L. Color measurement in L*a*b* units from RGB digital images. Food Res. Int. 2006, 39, 1084–1091.
Figure 1. An example of superpixel segmentation: (a) The original image; (b) Overlay display of the original image and the corresponding superpixel segmentation result.
Figure 2. The flowchart of the MFF-SLIC hybrid superpixel segmentation algorithm.
Figure 3. The sketch map of the relation between the image coordinate system and the pixel coordinate system.
Figure 4. The 1-N mapping relationship between the TLS point cloud and digital images.
Figure 5. The sketch map of the difference between global k-means and local k-means.
Figure 6. The flowchart of distance measure model construction.
Figure 7. The sketch map of the geometric relations of the rock surface: (a) the spatial relation of the normal vector and the local attitude; (b) the geometric relation of the normal vector and the local dip in the projection plane.
Figure 8. The flowchart of hierarchical and optimized clustering for superpixels.
Figure 9. An example of superpixel segmentation for an image: (a) Overlay display of the original image and the corresponding superpixel segmentation result; (b) The RAG corresponding to the superpixels.
Figure 10. An example of a RAG: (a) The sketch map of the superpixel distribution; (b) The corresponding RAG.
Figure 11. The updated RAG.
Figure 12. The updated RAG.
Figure 13. The image of the geological research area.
Figure 14. 3D visualization of the mapping relationship between the TLS point cloud and images.
Figure 15. (a) The pole plot of local attitudes for the 251,861 points in the experimental TLS point cloud; (b) The contour plot of local attitudes for the 251,861 points in the experimental TLS point cloud.
Figure 16. The 2D visualization of attitudes for a digital image: (a) The 2D visualization of dips in the image; (b) The 2D visualization of dip angles in the image.
Figure 17. The comparison of segmentation results with 500 superpixels.
Figure 18. The comparison of segmentation results with 1000 superpixels.
Figure 19. The comparison of segmentation results with 2500 superpixels.
Figure 20. The histogram of the attitude distribution of an individual superpixel.
Figure 21. (a) The RAG of the superpixel segmentation result; (b) The NNG of the superpixel segmentation result.
Figure 22. The clustering result of superpixels for the test image.
Figure 23. The 3D visualization of the final segmentation results.
Figure 24. (a) The contour plot of structural plane attitudes from manual measurement by geologists; (b) The contour plot of structural plane attitudes from the proposed method.
Table 1. The relation of the normal vector and the dip.

| n_x | n_y | n_z | Dip (Direction) | n_x | n_y | n_z | Dip (Direction) |
|-----|-----|-----|-----------------|-----|-----|-----|-----------------|
| >0  | >0  | >0  | NE              | <0  | >0  | >0  | NW              |
| >0  | >0  | <0  | SW              | <0  | >0  | <0  | SE              |
| >0  | <0  | >0  | SE              | <0  | <0  | >0  | SW              |
| >0  | <0  | <0  | NW              | <0  | <0  | <0  | NE              |
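The quadrant assignments of Table 1 can be applied directly as a lookup on the signs of the normal vector components. The following is a minimal Python sketch, not taken from the paper: the function name dip_quadrant is illustrative, and the block simply transcribes Table 1.

```python
# A minimal sketch (not from the paper): Table 1 transcribed as a lookup on the
# signs of the normal vector components (nx, ny, nz).
DIP_QUADRANT = {
    ( 1,  1,  1): "NE", (-1,  1,  1): "NW",
    ( 1,  1, -1): "SW", (-1,  1, -1): "SE",
    ( 1, -1,  1): "SE", (-1, -1,  1): "SW",
    ( 1, -1, -1): "NW", (-1, -1, -1): "NE",
}

def dip_quadrant(nx, ny, nz):
    """Dip-direction quadrant for a local normal whose components are all non-zero;
    zero components are the horizontal/vertical special cases of Table 2."""
    sign = lambda v: 1 if v > 0 else (-1 if v < 0 else 0)
    key = (sign(nx), sign(ny), sign(nz))
    if 0 in key:
        raise ValueError("use the Table 2 rules when any component is zero")
    return DIP_QUADRANT[key]

print(dip_quadrant(0.3, 0.5, 0.8))     # NE
print(dip_quadrant(-0.3, -0.5, -0.8))  # NE (antiparallel normal, same plane)
```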
Table 2. The relation of the normal vector and the dip for horizontally and vertically developed rock surfaces.

| The Normal Vector n = (n_x, n_y, n_z) | Condition       | Dip (Magnitude) | Dip (Direction) | Dip Angle     |
|---------------------------------------|-----------------|-----------------|-----------------|---------------|
| n_x = 0, n_y ≠ 0, n_z ≠ 0             | n_y × n_z > 0   | --              | N               | Equation (18) |
|                                       | n_y × n_z < 0   | --              | S               | Equation (18) |
| n_x ≠ 0, n_y = 0, n_z ≠ 0             | n_x × n_z > 0   | --              | E               | Equation (18) |
|                                       | n_x × n_z < 0   | --              | W               | Equation (18) |
| n_x ≠ 0, n_y ≠ 0, n_z = 0             | --              | Equation (18)   | --              | 90°           |
| n_x = n_y = 0, n_z = 1                | --              | --              | --              | 0°            |
| n_x = n_z = 0, n_y = 1                | --              | --              | --              | 90°           |
| n_x = 1, n_y = n_z = 0                | --              | --              | --              | 90°           |
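As a companion to Table 2, the sketch below shows one way the horizontal and vertical special cases might be handled in code. It is illustrative only: the function name dip_special_cases is not from the paper, and the unit-normal relation dip angle = arccos(|n_z|) is assumed here as a stand-in for Equation (18), which is not reproduced in this excerpt.

```python
import math

def dip_special_cases(nx, ny, nz, eps=1e-9):
    """Dip direction / dip angle for the degenerate normals of Table 2.
    acos(|nz|) for a unit normal is assumed in place of the paper's Equation (18)."""
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny, nz = nx / norm, ny / norm, nz / norm
    dip_angle = math.degrees(math.acos(min(1.0, abs(nz))))

    if abs(nx) < eps and abs(ny) < eps:   # n = (0, 0, ±1): horizontal surface
        return "--", 0.0
    if abs(nz) < eps:                     # vertical surface: dip angle is 90°,
        return "--", 90.0                 # azimuth itself comes from Equation (18)
    if abs(nx) < eps:                     # Table 2, rows 1-2: dips due N or S
        return ("N" if ny * nz > 0 else "S"), dip_angle
    if abs(ny) < eps:                     # Table 2, rows 3-4: dips due E or W
        return ("E" if nx * nz > 0 else "W"), dip_angle
    raise ValueError("general case: use the Table 1 sign lookup for the quadrant")

print(dip_special_cases(0.0, 0.5, 0.866))  # ('N', ~30.0)
print(dip_special_cases(1.0, 0.0, 0.0))    # ('--', 90.0)
```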
