Article

Edge-Based Color Image Segmentation Using Particle Motion in a Vector Image Field Derived from Local Color Distance Images

by Wutthichai Phornphatcharaphong and Nawapak Eua-Anant *

Department of Computer Engineering, Faculty of Engineering, Khon Kaen University, Khon Kaen 40002, Thailand

* Author to whom correspondence should be addressed.
J. Imaging 2020, 6(7), 72; https://doi.org/10.3390/jimaging6070072
Submission received: 26 May 2020 / Revised: 13 July 2020 / Accepted: 14 July 2020 / Published: 16 July 2020
(This article belongs to the Special Issue Color Image Segmentation.)

Abstract
This paper presents an edge-based color image segmentation approach, derived from the method of particle motion in a vector image field, which could previously be applied only to monochrome images. Rather than using an edge vector field derived from a gradient vector field and a normal compressive vector field derived from a Laplacian-gradient vector field, two novel orthogonal vector fields were directly computed from a color image, one parallel and another orthogonal to the edges. These were then used in the model to force a particle to move along the object edges. The normal compressive vector field is created from the collection of the center-to-centroid vectors of local color distance images. The edge vector field is then derived from the normal compressive vector field so as to obtain a vector field analogous to a Hamiltonian gradient vector field. Using the PASCAL Visual Object Classes Challenge 2012 (VOC2012) and the Berkeley Segmentation Data Set and Benchmarks 500 (BSDS500), benchmark scores of the proposed method are provided in comparison to those of the traditional particle motion in a vector image field (PMVIF), watershed, simple linear iterative clustering (SLIC), K-means, mean shift, and J-value segmentation (JSEG) methods. The proposed method yields better Rand index (RI), global consistency error (GCE), normalized variation of information (NVI), boundary displacement error (BDE), and Dice coefficient scores, as well as faster computation and better noise resistance.

1. Introduction

In digital image processing, image segmentation, which reduces the amount of unnecessary data while preserving the important information needed for analysis, plays an important role in image analysis. In general, image segmentation gathers pixels displaying similar characteristics within the same areas and converts them into regions. Image segmentation methods can be divided into two main groups: machine learning image segmentation and classical image segmentation.
First, machine learning image segmentation is an approach by which a program can learn to segment an object by itself, without further adjustment of the program. There are three types of machine learning approaches: supervised, unsupervised, and reinforcement learning methods. Supervised methods use a training dataset containing ground truth data to train artificial neural networks to map between input images and segmented results (see the survey [1]). However, the training process is computationally intensive, and the ground truth construction, which requires manual labeling by experts, is labor-intensive. Additionally, when a new object class is added, the whole training dataset must be thoroughly reconstructed and the time-consuming training process must be repeated. In contrast, unsupervised methods do not require a dataset for training. Instead, the result of each iteration is recursively fed back to the program to adjust its parameters. This type of approach, such as K-means [2,3], mean shift [4,5], and JSEG [6,7], is often more effective and more tolerant of unusual or unpredictable situations. However, unsupervised methods are usually time-consuming due to the iterative processes embedded in them. Finally, the reinforcement learning method uses reward and punishment techniques based on environmental analysis to drive the agent toward the target. This method requires a large number of training iterations for the agent to obtain a reward [8,9,10].
Second, classical image segmentation is a low-level image processing approach that tries to extract information without knowing the ground truth. Although machine learning image segmentation is nowadays state-of-the-art [11], classical image segmentation is still necessary in cases in which no ground truth images are available or there is a time constraint. Classical image segmentation also helps to create ground truth data in training datasets for machine learning techniques. Classical image segmentation techniques comprise thresholding-based, edge-based, region-based, and graph-based techniques. Thresholding-based techniques are divided into three types [12]: global thresholding, local thresholding, and adaptive thresholding. First, global thresholding weighs the distribution of intensity in the histogram to determine the threshold for separating objects from the background [13,14,15]. Second, local thresholding is used when a single threshold is not feasible for images with uneven illumination or shadows; in such a case, it is necessary to select a threshold for each sub-image [16]. Third, adaptive thresholding computes a threshold within a moving window from the intensities of the neighboring pixels [17,18]. The edge-based techniques, such as zero-crossing [19], Active Canny [20], PMVIF [21,22,23], EdgeFlow [24], and PointFlow [25], extract object boundaries in the image by creating contours around the objects.
The region-based techniques, such as watershed [26,27], are based on the principle of grouping pixels with similar properties into the same regions or objects. Finally, the graph-based techniques, such as graph cuts [28,29], normalized cuts [30,31], superpixels [32,33], and SLIC [34,35], proceed by grouping pixels according to graph theory. The methods mentioned above have various strengths and weaknesses regarding segmentation accuracy, processing time, flexibility, ease of use, and robustness to noise. For example, some algorithms can be applied only to grayscale images, while others are available only in the RGB color space. Some methods are not suitable for real-time use or require tuning too many parameters.
This paper introduces an edge-based classical image segmentation algorithm for color images using particle motion in a vector image field derived from local color distance images (PMLCD). It is developed from the PMVIF algorithm, which is known for its fast computation time and closed output boundaries but can be applied only to grayscale images. In the PMVIF algorithm, two vector fields, namely the normal compressive vector field and the edge vector field, derived from derivatives of grayscale images, are used to force a particle to move along the object edges, which results in closed particle trajectories that resemble the object boundaries. In order to extend this principle to the color image segmentation task, new formulae for computing the normal compressive vector field and the edge vector field, derived from local color distance images, are introduced. The method proposed in this paper can be used not only with color images but also with multichannel images such as hyperspectral images.
The rest of the paper is organized as follows: Section 2 describes the principle of the PMVIF algorithm; Section 3 describes the developed color image segmentation using particle motion in a vector image field derived from local color distance images; Section 4 presents the experimental validations and benchmarking of the proposed algorithm; finally, conclusions are drawn in Section 5.

2. Background to Particle Motion in a Vector Image Field

This section describes the principle of a traditional boundary extraction algorithm based on particle motion in a vector image field (PMVIF), which is an edge-based classical image segmentation approach. In general, in an N-dimensional space, a boundary can be explicitly represented by a manifold of dimension N-1 interfacing between regions of different attributes; for example, a closed curve in a two-dimensional space. However, in a discretized image, where pixels or voxels are the only primitives that can exist, explicit representations of region boundaries, such as a curve or a surface, are difficult to encode. In this case, a normal compressive vector field [21,22,23], in which all vectors are normal to and point toward the nearest interface, providing information about the direction to the nearest boundary, is better suited as an implicit boundary representation. Nevertheless, the normal compressive vector field itself only provides information about the location of the boundary and offers no clue regarding the direction in which to track edges. In order to locate and track a boundary simultaneously, another vector field containing vectors parallel to edges, namely an edge vector field, must be combined with the normal compressive vector field. The concept of using two such orthogonal vector fields for boundary extraction in a grayscale image was introduced in the PMVIF algorithm, where the gradient–Laplacian vector field used as a normal compressive vector field and the Hamiltonian gradient vector field used as an edge vector field are given as follows:
$$\mathbf{n} = \frac{1}{c}\, \nabla P \cdot \nabla^{2} P \qquad (1)$$

and

$$\mathbf{e} = -\frac{\partial P}{\partial y}\,\hat{i} + \frac{\partial P}{\partial x}\,\hat{j}, \qquad (2)$$
where $c$ is a normalization factor, and $\hat{i}$, $\hat{j}$ are unit vectors in the x and y directions, respectively.
In general, in Equations (1) and (2), the partial derivatives can be approximated using difference operators such as the Sobel operators. Figure 1 illustrates examples of the gradient $\nabla P$, the Laplacian $\nabla^2 P$, the edge vector field $\mathbf{e}$, and the gradient–Laplacian vector field $\mathbf{n}$. In order to extract object boundaries, sequences of boundary points are obtained from the trajectories of a particle driven by the combined force field $\alpha\mathbf{e} + \beta\mathbf{n}$, computed as follows:
$$\mathbf{p}_{k+1} = \mathbf{p}_{k} + \alpha\,\mathbf{e}_{k} + \beta\,\mathbf{n}_{k}, \qquad (3)$$
where $\mathbf{p}_{k}$ is the $k$th particle position vector; $\mathbf{e}_{k}$ is the edge vector, interpolated at the $k$th particle position; $\mathbf{n}_{k}$ is the normal compressive vector, interpolated at the $k$th particle position; $\alpha$ is a tangential stepping factor, with $\alpha > 0$ for a particle moving in a clockwise direction and $\alpha < 0$ for a particle moving in a counter-clockwise direction; and $\beta > 0$ is a normal stepping factor allowing the trajectory to converge to a boundary line.
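To make the update rule concrete, the following Python sketch traces a single particle trajectory through the two fields using Equation (3). It is an illustrative sketch only: the function names, the bilinear interpolation scheme, and the stopping rule are our own assumptions rather than details taken from the original PMVIF papers.

```python
import numpy as np

def bilinear(field, p):
    """Bilinearly interpolate a vector field of shape (H, W, 2) at a
    subpixel position p = (x, y)."""
    x, y = p
    x0 = int(np.clip(np.floor(x), 0, field.shape[1] - 2))
    y0 = int(np.clip(np.floor(y), 0, field.shape[0] - 2))
    dx, dy = x - x0, y - y0
    return (field[y0, x0] * (1 - dx) * (1 - dy)
            + field[y0, x0 + 1] * dx * (1 - dy)
            + field[y0 + 1, x0] * (1 - dx) * dy
            + field[y0 + 1, x0 + 1] * dx * dy)

def trace_boundary(e, n, start, alpha=0.5, beta=0.5, max_steps=5000, tol=0.5):
    """Trace one particle trajectory according to Equation (3):
    p_{k+1} = p_k + alpha * e_k + beta * n_k.
    Stops when the path returns near its starting point (a closed
    contour) or after max_steps."""
    p = np.asarray(start, dtype=float)
    path = [p.copy()]
    for k in range(max_steps):
        p = p + alpha * bilinear(e, p) + beta * bilinear(n, p)
        path.append(p.copy())
        if k > 10 and np.linalg.norm(p - path[0]) < tol:  # closed trajectory
            break
    return np.array(path)
```

Starting from a point near an edge, the returned path approximates a closed object contour at subpixel resolution.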
Figure 2 demonstrates a combined vector field $\alpha\mathbf{e} + \beta\mathbf{n}$, with $\alpha = 0.5$ and $\beta = 0.5$, and a boundary extraction result obtained from a particle trajectory, according to Equation (3), as applied to the image in Figure 1. The PMVIF works well in extracting boundaries of regions with a constant intensity in grayscale images, providing subpixel-resolution results. Nevertheless, the limitation of the PMVIF method is that the edge and normal compressive vector fields are derived from partial derivative operations that can be applied only to a scalar or intensity image. In the case of color or multispectral images, in which each pixel is considered a vector, there is no exact definition of the gradient and Laplacian operators, limiting the application of the PMVIF method to color images. To overcome this limitation, a new scheme to generate a normal compressive vector field and an edge vector field for a vector image is required.

3. Methodology

The PMVIF algorithm requires both normal compressive and edge vector fields as particle driving forces. Because the gradient is defined only for a scalar image, the original PMVIF method can be applied only to intensity images. In this paper, the PMLCD method for finding the normal compressive and edge vector fields for color images, using the center-to-centroid vectors of local color distance images, is presented below.

3.1. Image Moments

For a discrete image $I(x, y)$, a two-dimensional moment of order $(p, q)$ [36] is defined as

$$M_{pq} = \sum_{x} \sum_{y} x^{p} y^{q}\, I(x, y) \qquad (4)$$
Analogous to a center of gravity in classical mechanics, the centroid $(\bar{x}, \bar{y})$ of an image $I(x, y)$ can be calculated as follows:

$$(\bar{x}, \bar{y}) = \left( \frac{M_{10}}{M_{00}}, \frac{M_{01}}{M_{00}} \right) = \left( \frac{\sum_{x} \sum_{y} x\, I(x, y)}{\sum_{x} \sum_{y} I(x, y)}, \frac{\sum_{x} \sum_{y} y\, I(x, y)}{\sum_{x} \sum_{y} I(x, y)} \right) \qquad (5)$$
The displacement between a center and a centroid of an image indicates an unbalanced pixel intensity distribution in a spatial domain.
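As a brief illustration of Equations (4) and (5), the following Python sketch computes moments and the centroid with NumPy; the zero-mass fallback is our own assumption for the all-zero windows discussed in Section 3.2, not part of the definition.

```python
import numpy as np

def moment(img, p, q):
    """Two-dimensional image moment M_pq of Equation (4)."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]            # pixel coordinate grids
    return np.sum((x ** p) * (y ** q) * img)

def centroid(img):
    """Centroid (x_bar, y_bar) of an image, Equation (5).
    An all-zero image (M_00 = 0) has no defined centroid; here we fall
    back to the geometric center, which makes the center-to-centroid
    vector zero for the flat-region windows of Section 3.2."""
    m00 = moment(img, 0, 0)
    if m00 == 0:
        h, w = img.shape
        return (w - 1) / 2.0, (h - 1) / 2.0
    return moment(img, 1, 0) / m00, moment(img, 0, 1) / m00
```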

3.2. Local Color Distance Images

In general, image segmentation can be viewed as a process to determine in which region each pixel should be located. For a multispectral image I, one feature that is widely used to determine whether or not pixels should belong to the same region is the color distance between two pixels, defined as
$$D_c\big((x, y), (i, j)\big) = \sqrt{\big(I_1(x, y) - I_1(i, j)\big)^2 + \big(I_2(x, y) - I_2(i, j)\big)^2 + \cdots + \big(I_n(x, y) - I_n(i, j)\big)^2} \qquad (6)$$
where $I_n(x, y)$ and $I_n(i, j)$ are the $n$th color components of pixels $(x, y)$ and $(i, j)$, respectively. In the data classification aspect, the color distance functions as a dissimilarity measurement between two pixels. Using the concept of a moving window, a local color distance image (LCD) of the pixels surrounding pixel $(i, j)$ can be computed as

$$LCD(x - i, y - j) = D_c\big((x, y), (i, j)\big), \quad (x, y) \in N(i, j) \qquad (7)$$

where $N(i, j)$ is a neighborhood area of the center pixel $(i, j)$.
Each pixel in $LCD(x - i, y - j)$ represents the color distance between a neighboring pixel $(x, y)$ and the center pixel $(i, j)$. Figure 3a illustrates examples of RGB local color distance images (i)–(v), obtained using a circular moving window computed at various places in a simple two-object image. As seen in cases (i) and (v), if the circular window is placed entirely inside one region, the local color distance image contains all zero pixels. Conversely, if the circular window is located at the border between two regions, the obtained local color distance image comprises pixels with large values packed to one side of the image, as shown in cases (ii)–(iv) in Figure 3a. As a result, the centroid $C_T$ of the local color distance image computed using Equation (5) is shifted from the center $C$ toward the high color distance area belonging to the adjacent region. Thus, for a local color distance image located in the proximity of a boundary, the vector from the center $C$ to the centroid $C_T$ points in the direction of the nearest boundary, regardless of which side of the boundary the window center lies on; see, for example, cases (iii) and (iv) in Figure 3a.
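The construction of Equation (7) can be sketched directly. The following Python function builds the local color distance image for one circular window; the row/column coordinate convention and the boundary checks are our own illustrative choices.

```python
import numpy as np

def local_color_distance(img, row, col, radius):
    """Local color distance image (Equations (6) and (7)): the Euclidean
    color distance between each pixel inside a circular window and the
    window's center pixel (row, col). img has shape (H, W, channels)."""
    size = 2 * radius + 1
    lcd = np.zeros((size, size))
    center = img[row, col].astype(float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx * dx + dy * dy > radius * radius:   # keep the window circular
                continue
            y, x = row + dy, col + dx
            if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]:
                lcd[dy + radius, dx + radius] = np.linalg.norm(
                    img[y, x].astype(float) - center)
    return lcd
```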

3.3. The Normal Compressive Vector Field

By gathering the C-to-$C_T$ vectors of the local color distance images obtained at all valid positions in an original image, a normal compressive vector field $\mathbf{n}$ can be computed as

$$\mathbf{n}(i, j) = \frac{1}{C} \begin{bmatrix} \bar{x}(i, j) - i \\ \bar{y}(i, j) - j \end{bmatrix} \qquad (8)$$

where $C$ is a normalization factor making $\max \lVert \mathbf{n}(i, j) \rVert = 1$, and $(\bar{x}(i, j), \bar{y}(i, j))$ is the centroid, computed using Equation (5), of the image $LCD(x - i, y - j)$ computed using Equation (7). Figure 3b demonstrates the $\mathbf{n}$ of the image in Figure 3a. By combining Equations (5)–(7), $\mathbf{n}(i, j)$ can be directly computed as

$$\mathbf{n}(i, j) = \frac{1}{C} \begin{bmatrix} \displaystyle \sum_{(x, y) \in N(i, j)} (x - i)\, D_c\big((x, y), (i, j)\big) \Big/ \sum_{(x, y) \in N(i, j)} D_c\big((x, y), (i, j)\big) \\[1.5ex] \displaystyle \sum_{(x, y) \in N(i, j)} (y - j)\, D_c\big((x, y), (i, j)\big) \Big/ \sum_{(x, y) \in N(i, j)} D_c\big((x, y), (i, j)\big) \end{bmatrix} \qquad (9)$$
It is worth noting that, in this vector field, a vector on one side of a boundary always points in the opposite direction to a vector on the other side; this phenomenon is called the normal compressive property. In the PMVIF technique, the normal compressive property of the vector field causes a particle to cling to the object boundary. The difference between the $\mathbf{n}$ fields obtained from Equations (1) and (9) is that the vectors obtained from Equation (1) are smaller than those from Equation (9), as shown in Figure 4. Equation (9) uses the principle of the LCD, whereas Equation (1) operates on a grayscale image, in which the intensities of all bands are collapsed together before a gradient is taken, resulting in smaller vectors; Equation (9) is therefore more suitable for color images.
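A direct, unvectorized Python sketch of Equation (9) follows. It prioritizes clarity over speed (a practical implementation would vectorize the window sums), and the coordinate conventions and zero-sum handling are our own assumptions.

```python
import numpy as np

def normal_compressive_field(img, radius):
    """Normal compressive vector field n of Equation (9): at each valid
    pixel, the centroid of the local color distance image relative to
    the window center, globally normalized to a maximum magnitude of 1.
    Components are stored in (x, y) order."""
    h, w = img.shape[:2]
    imgf = img.astype(float)
    n = np.zeros((h, w, 2))
    offsets = [(dy, dx) for dy in range(-radius, radius + 1)
                        for dx in range(-radius, radius + 1)
                        if dx * dx + dy * dy <= radius * radius]
    for r in range(radius, h - radius):
        for c in range(radius, w - radius):
            sx = sy = sd = 0.0
            for dy, dx in offsets:
                d = np.linalg.norm(imgf[r + dy, c + dx] - imgf[r, c])
                sx += dx * d                 # (x - i) weighted by D_c
                sy += dy * d                 # (y - j) weighted by D_c
                sd += d
            if sd > 0:                       # flat region: vector stays zero
                n[r, c] = (sx / sd, sy / sd)
    max_mag = np.linalg.norm(n, axis=2).max()
    return n / max_mag if max_mag > 0 else n
```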

3.4. The Edge Vector Field

The edge vector field in the original PMVIF method, used to drive a particle to move in a direction parallel to object edges in a grayscale image, is derived from a Hamiltonian gradient vector field. However, such a vector field cannot be generated in the case of vector images such as color images where each pixel is represented by a color vector. In order to create a vector field analogous to the edge vector field, firstly, a vector-to-scalar conversion scheme must be applied to a color image to achieve a unique condition, ensuring that different colors, normally represented by vectors, are represented by different scalar values. The linearization technique used to convert a color image into a scalar auxiliary image, based on the number base system, is proposed in this paper as follows:
$$Aux(x, y) = m^{n-1} I_n(x, y) + m^{n-2} I_{n-1}(x, y) + \cdots + m^{2} I_3(x, y) + m\, I_2(x, y) + I_1(x, y) \qquad (10)$$
where $m$ is the maximum intensity level of each color component. The auxiliary image is created to determine whether a neighboring pixel $(x, y)$ has the same color as the center pixel $(i, j)$. Thus, the difference between $Aux(x, y)$ and $Aux(i, j)$ alone is sufficient to determine whether both pixels $(x, y)$ and $(i, j)$ have the same color.
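A minimal Python sketch of Equation (10) is given below. We assume `levels` is the number of intensity levels per channel (256 for 8-bit data) so that the base conversion is collision-free; the function name and channel ordering are illustrative.

```python
import numpy as np

def auxiliary_image(img, levels=256):
    """Scalar auxiliary image of Equation (10): the channels are treated
    as digits of a base-`levels` number, so two pixels share the same
    auxiliary value only if they share the same color. `levels` is
    assumed to be the number of intensity levels per channel."""
    h, w, nch = img.shape
    aux = np.zeros((h, w), dtype=np.int64)
    for k in range(nch):                 # I_1 is the least significant digit
        aux += img[:, :, k].astype(np.int64) * (levels ** k)
    return aux
```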
To obtain a gradient-like vector field from the normal compressive vector field demonstrated in Figure 3b, vectors outside objects must be reversed, while vectors inside objects retain the same direction. Thus, Equation (9) is modified by multiplying the local color distance with the sign of the difference between auxiliary image pixels as follows:
$$\mathbf{G}(i, j) = \begin{bmatrix} G_x \\ G_y \end{bmatrix} = \frac{1}{C} \begin{bmatrix} \displaystyle \sum_{(x, y) \in N(i, j)} (x - i)\, \operatorname{sign}\big(Aux(x, y) - Aux(i, j)\big)\, D_c\big((x, y), (i, j)\big) \\[1.5ex] \displaystyle \sum_{(x, y) \in N(i, j)} (y - j)\, \operatorname{sign}\big(Aux(x, y) - Aux(i, j)\big)\, D_c\big((x, y), (i, j)\big) \end{bmatrix} \qquad (11)$$

where $C$ is a normalization factor such that $\max \lVert \mathbf{G}(i, j) \rVert = 1$ and

$$\operatorname{sign}(A) = \begin{cases} 1, & A \ge 0 \\ -1, & A < 0. \end{cases}$$
As a result, the normal compressive property of $\mathbf{n}$ in Equation (9), i.e., that a vector on one side always points in a direction opposite to a vector on the other side, as shown in Figure 3b, is transformed into a gradient-like property of $\mathbf{G}$, where vectors on both sides of objects always point in the same direction, as shown in Figure 5a. Next, by rotating all vectors in $\mathbf{G}$ by $90^{\circ}$, an edge vector field, similar to a Hamiltonian gradient vector field, is obtained as

$$\mathbf{e}(i, j) = \begin{bmatrix} -G_y \\ G_x \end{bmatrix} \qquad (12)$$
as shown in Figure 5b. Notice that the vectors of $\mathbf{e}$ in the proximity of boundaries are always larger than those in areas farther away. Therefore, the magnitude of $\mathbf{e}$ can be used as a measurement for localizing object edges.
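The two constructions can be sketched together in Python as follows. The direction of the $90^{\circ}$ rotation only flips the tracking orientation, which the sign of $\alpha$ already controls; this sketch picks the counter-clockwise rotation, and, as before, the loop structure is an unoptimized illustration.

```python
import numpy as np

def edge_vector_field(img, aux, radius):
    """Gradient-like field G of Equation (11) and edge field e of
    Equation (12). The sign of the auxiliary-image difference reverses
    vectors outside objects, and rotating G by 90 degrees yields a
    field parallel to object edges."""
    h, w = img.shape[:2]
    imgf = img.astype(float)
    G = np.zeros((h, w, 2))
    offsets = [(dy, dx) for dy in range(-radius, radius + 1)
                        for dx in range(-radius, radius + 1)
                        if dx * dx + dy * dy <= radius * radius]
    for r in range(radius, h - radius):
        for c in range(radius, w - radius):
            gx = gy = 0.0
            for dy, dx in offsets:
                d = np.linalg.norm(imgf[r + dy, c + dx] - imgf[r, c])
                s = 1.0 if aux[r + dy, c + dx] >= aux[r, c] else -1.0
                gx += dx * s * d
                gy += dy * s * d
            G[r, c] = (gx, gy)
    max_mag = np.linalg.norm(G, axis=2).max()
    if max_mag > 0:
        G /= max_mag
    e = np.stack([-G[..., 1], G[..., 0]], axis=-1)   # 90-degree rotation
    return G, e
```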

3.5. Particle Motion in a Vector Image Field Derived from Local Color Distance Images

The proposed boundary extraction algorithm is based on particle motion in a vector image field derived from local color distance images (PMLCD): the normal compressive vector field is calculated using Equation (9) and the edge vector field using Equation (12), so that particle trajectories can be obtained using Equation (3). The object boundaries can then be extracted from a collection of these trajectories. The remaining steps of the PMLCD method are the same as those of the PMVIF method.

3.6. Appropriate PMLCD Parameter Setting

The PMLCD method has three parameters: $T_{|e|}$, $\alpha$, and $\beta$. The $T_{|e|}$ parameter sets the threshold of $|\mathbf{e}|$ used to determine the starting points of the particle, $\alpha$ is the strength with which the particle moves in the direction parallel to the object edges, and $\beta$ is the strength with which the particle attaches to the object edges. These parameters are difficult to adjust manually. This article therefore suggests methods for setting all three parameters as follows:
$T_{|e|}$ is determined using Otsu's threshold method [14]. $\alpha$ and $\beta$ are related as follows:
$$\alpha = \frac{1}{2} + R \qquad (13)$$

$$\beta = \frac{1}{2} - R \qquad (14)$$
$$R = \frac{\bar{V_c} - V_e}{V_n} \cdot 100\, \gamma \qquad (15)$$
where
$\bar{V_c}$ is the mean of the normalized variances of the color image channels;
$V_e$ is the normalized variance of $|\mathbf{e}|$;
$V_n$ is the normalized variance of $|\mathbf{n}|$;
$\gamma$ is the parameter setting the ratio of $\alpha$ to $\beta$:

$$\begin{cases} 0 < \gamma < 1 & \Rightarrow \alpha > \beta \\ \gamma = 0 & \Rightarrow \alpha = \beta \\ -1 < \gamma < 0 & \Rightarrow \alpha < \beta. \end{cases}$$
The variance $V$ of each channel, flattened into a vector $A$ of $N$ scalar observations, is defined as

$$V = \frac{1}{N - 1} \sum_{i=1}^{N} \left| A_i - \bar{A} \right|^{2}$$
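The parameter rule can be sketched in Python as follows. Note that the form of Equation (15) above is reconstructed from a garbled source layout, so this helper should be treated as a guess at the intended computation; inputs are assumed normalized to [0, 1].

```python
import numpy as np

def stepping_factors(img, e_mag, n_mag, gamma):
    """Compute alpha and beta from gamma via Equations (13)-(15).
    img is assumed normalized to [0, 1]; e_mag and n_mag are the
    magnitude images of the e and n fields. Equation (15) is a
    reconstruction and should be checked against the original paper."""
    def sample_var(a):
        a = np.asarray(a, dtype=float).ravel()
        return a.var(ddof=1)                 # variance with 1/(N - 1)
    v_c = np.mean([sample_var(img[..., k]) for k in range(img.shape[-1])])
    v_e = sample_var(e_mag)
    v_n = sample_var(n_mag)
    R = (v_c - v_e) / v_n * 100.0 * gamma    # Equation (15), as reconstructed
    return 0.5 + R, 0.5 - R                  # alpha (13), beta (14)
```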

3.7. Overall Boundary Extraction Method

The overall segmentation algorithm for color images, as illustrated in Figure 6, is described here. First, an image is smoothed to remove noise using a Gaussian low-pass filter. The normal compressive vector field $\mathbf{n}$ and the edge vector field $\mathbf{e}$ are then calculated using Equations (9) and (12), respectively, using a circular moving window of radius R. Local maximum points of $|\mathbf{e}|$ that are greater than a threshold are chosen as candidates for the starting points of the boundary extraction process. The suitable threshold value is determined by applying Otsu's threshold method to $|\mathbf{e}|$. Commencing at each starting point, under the influence of the compressing edge vector field, a particle is forced to move along object edges, according to Equation (3), in both the clockwise ($\alpha > 0$) and counter-clockwise ($\alpha < 0$) directions with a subpixel step size until it reaches a starting point or another previously extracted path. The boundaries are then collected from all obtained particle trajectories, and a complete edge map is achieved by quantizing the extracted boundaries. Finally, these boundaries are labeled with a fast region-growing algorithm to produce the color image segmentation result.
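Tying the sketches of this section together, a hedged end-to-end driver might look as follows. It assumes the helper functions sketched above are in scope; the smoothing parameters, the fixed threshold `t` standing in for Otsu's method, and the 3×3 local-maximum seed selection are simplifications of the pipeline in Figure 6, not the authors' exact implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def pmlcd_segment(img, radius=2, gamma=0.0, sigma=1.0, t=0.2):
    """End-to-end PMLCD sketch following Figure 6, built on the
    functions sketched earlier in this section. img is an 8-bit
    (H, W, channels) array."""
    smoothed = gaussian_filter(img.astype(float), sigma=(sigma, sigma, 0))
    n = normal_compressive_field(smoothed, radius)             # Equation (9)
    aux = auxiliary_image(np.clip(smoothed, 0, 255).astype(np.uint8))
    _, e = edge_vector_field(smoothed, aux, radius)            # Equation (12)
    e_mag = np.linalg.norm(e, axis=2)
    alpha, beta = stepping_factors(smoothed / 255.0, e_mag,
                                   np.linalg.norm(n, axis=2), gamma)
    # Starting points: 3x3 local maxima of |e| above the threshold t
    # (a fixed t here stands in for Otsu's method on |e|).
    peaks = (e_mag == maximum_filter(e_mag, size=3)) & (e_mag > t)
    seeds = [(c, r) for r, c in zip(*np.nonzero(peaks))]       # (x, y) order
    paths = [trace_boundary(e, n, seed, a, beta)               # Equation (3)
             for seed in seeds for a in (alpha, -alpha)]       # both directions
    # The trajectories would then be quantized into an edge map and labeled
    # with a fast region-growing algorithm to obtain the final segments.
    return paths
```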
Figure 7 illustrates the image segmentation results obtained using both PMVIF and PMLCD evaluated on the same grayscale image, together with PMLCD evaluated on the original color image. The parameters used in all cases were $T_{|e|}$ = 0.08, $\alpha$ = 0.5, and $\beta$ = 0.2 (for both PMVIF and PMLCD), and the radius of the LCD = 1 (for PMLCD). As seen in Figure 7b,c, PMLCD can be applied to both grayscale and color images. In addition, compared to the results evaluated using the grayscale image, PMLCD evaluated using the original color image provided the best results, with the fewest false contours.
Figure 8 shows the simulation of particle motion in a vector image field according to Equation (3), using the following parameters: radius of the LCD = 1, $T_{|e|}$ = 0.25, and $\gamma$ = 0.05 ($\alpha$ = 0.55, $\beta$ = 0.45).

4. Experimental Results and Discussion

This section reports the experimental results of color image segmentation, obtained using MATLAB 2019b on an Intel Core i7-4710HQ CPU with the VOC2012 dataset [37] and the BSDS500 dataset [38], measuring the performance of PMLCD against unsupervised machine learning methods, including K-means [2,3], mean shift [4,5], and JSEG [6,7], and classical methods, including the grayscale PMVIF [21,22,23], grayscale watershed [26,27], and SLIC [34,35]. The benchmarks used in this paper are the Rand index (RI) [39], global consistency error (GCE) [39], normalized variation of information (NVI) [40], boundary displacement error (BDE) [39], Dice coefficient [39], computation time, and noise tolerance.
Figure 9 shows the experimental color image segmentation results obtained from all methods using image #2007_000063 from VOC2012. Figure 10 shows the similarities between the object chosen from the ground truth image (a dog) and the corresponding segmented regions obtained from all methods in Figure 9. Figure 11 shows the results of the same experiment as those of Figure 9 and Figure 10 for images randomly selected from VOC2012 and BSDS500. As shown in Table 1, the parameters of all methods were adjusted for each image to achieve high RI and high Dice coefficients. The average benchmarking results show that the method with the highest average RI is PMLCD, at 0.78 (0.11). The methods with the lowest average GCE are PMLCD, at 0.13 (0.05), and watershed, at 0.13 (0.08). The methods with the lowest average NVI are JSEG, at 0.12 (0.01), and PMLCD, at 0.12 (0.04). The method with the lowest average BDE is SLIC, at 11.82 (4.12). The method with the fastest average computation time is watershed, at 0.06 (0.01) s. The method with the highest average Dice coefficient is PMLCD, at 0.93 (0.03). In brief, the PMLCD method yields the best average values for four of the measures: RI, GCE, NVI, and the Dice coefficient.
Figure 12 shows the computation times used to segment image #3096 from BSDS500, interpolated to various image sizes. As seen, the watershed, PMVIF, and PMLCD methods are the three fastest, in that order, but PMLCD is the only true color image segmentation method among them. Figure 13 demonstrates the segmentation of the noisy image #2007_001289 from VOC2012, corrupted with additive white Gaussian noise (signal-to-noise ratio (SNR) of 0 dB, $\sigma_{noise}$ = 0.21), obtained using the PMLCD algorithm with the following parameters: radius of the LCD = 3, $T_{|e|}$ = 0.27 derived from Otsu's method, and $\gamma$ = −0.14, resulting in $\alpha$ = 0.34 and $\beta$ = 0.66 obtained from Equations (13) and (14), respectively. The result gives the following benchmarks: RI = 0.91, GCE = 0.01, NVI = 0.01, BDE = 90.23, and computation time = 0.72 s. Figure 14 shows the SNR–performance graph of PMLCD applied to this image, reflecting the high noise tolerance of the PMLCD method.

5. Conclusions

The PMLCD color image segmentation algorithm is developed from the traditional method of particle motion in a vector image field (PMVIF), which uses two vector fields orthogonal to each other, namely a normal compressive vector field and an edge vector field, to force a particle to travel along object boundaries. Unlike the formulae previously used in the original PMVIF method, the normal compressive vector field is derived from the center-to-centroid vectors of local color distance images, whereas a gradient-like vector field is derived from center-to-centroid vectors of local color distance images in which each pixel is multiplied by the sign of the difference of the auxiliary image pixels. An edge vector field is then obtained by rotating each vector in the gradient-like vector field by $90^{\circ}$ to achieve a Hamiltonian gradient-like field. In addition, for ease of use, a method for adjusting the parameters related to particle movement, including $T_{|e|}$, $\alpha$, and $\beta$, is introduced. Experimental results show that the proposed method yields promising results, with better RI, GCE, NVI, and Dice measures as well as a faster computation time and good noise resistance. Since the proposed algorithm is based on color distance measurement, which can be applied to both scalar and vector images, it outperforms other grayscale-based methods, especially in regions in which edge information cannot be visualized in the grayscale image domain. Moreover, the method is useful not only for segmenting color images but also for all types of color spaces and vector images, including multispectral and hyperspectral images.

Author Contributions

Conceptualization, W.P. and N.E.-A.; methodology, N.E.-A.; software, W.P.; validation, W.P. and N.E.-A.; formal analysis, W.P.; investigation, W.P.; resources, N.E.-A.; data curation, W.P. and N.E.-A.; writing—original draft preparation, W.P.; writing—review and editing, N.E.-A.; visualization, W.P.; supervision, N.E.-A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank the lecturers and staff of the Department of Computer Engineering, Faculty of Engineering, Khon Kaen University for providing advice and support for this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Minaee, S.; Boykov, Y.; Porikli, F.; Plaza, A.; Kehtarnavaz, N.; Terzopoulos, D. Image Segmentation Using Deep Learning: A Survey. arXiv 2020, arXiv:2001.05566.
  2. MacQueen, J. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Statistics; University of California Press: Berkeley, CA, USA, 1967; pp. 281–297.
  3. Hamada, M.A.; Kanat, Y.; Abiche, A.E. Multi-Spectral Image Segmentation Based on the K-means Clustering. Int. J. Innov. Technol. Explor. Eng. 2019, 9, 1016–1019.
  4. Fukunaga, K.; Hostetler, L.D. The Estimation of the Gradient of a Density Function, with Applications in Pattern Recognition. IEEE Trans. Inf. Theory 1975, 21, 32–40.
  5. Xiao, W.; Zaforemska, A.; Smigaj, M.; Wang, Y.; Gaulton, R. Mean shift segmentation assessment for individual forest tree delineation from airborne lidar data. Remote Sens. 2019, 11, 1263.
  6. Deng, Y.; Manjunath, B.S. Unsupervised segmentation of color-texture regions in images and video. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 800–810.
  7. Aloun, M.S.; Hitam, M.S.; Yussof, W.N.J.H.W.; Abdul Hamid, A.A.K.; Bachok, Z. Modified JSEG algorithm for reducing over-segmentation problems in underwater coral reef images. Int. J. Electr. Comput. Eng. 2019, 9, 5244–5252.
  8. Sahba, F.; Tizhoosh, H.R.; Salama, M.M. A reinforcement learning framework for medical image segmentation. In Proceedings of the IEEE International Conference on Neural Networks, Vancouver, BC, Canada, 16–21 July 2006; pp. 511–517.
  9. Wang, Z.; Sarcar, S.; Liu, J.; Zheng, Y.; Ren, X. Outline Objects using Deep Reinforcement Learning. arXiv 2018, arXiv:1804.04603.
  10. Casanova, A.; Pinheiro, P.O.; Rostamzadeh, N.; Pal, C.J. Reinforced active learning for image segmentation. arXiv 2020, arXiv:2002.06583.
  11. Alom, M.Z.; Taha, T.M.; Yakopcic, C.; Westberg, S.; Sidike, P.; Nasrin, M.S.; Hasan, M.; Van Essen, B.C.; Awwal, A.A.; Asari, V.K. A state-of-the-art survey on deep learning theory and architectures. Electronics 2019, 8, 292.
  12. Bhargavi, K.; Jyothi, S. A Survey on Threshold Based Segmentation Technique in Image Processing. Int. J. Innov. Res. Dev. 2014, 3, 234–239.
  13. Doyle, W. Operations Useful for Similarity-Invariant Pattern Recognition. J. ACM 1962, 9, 259–267.
  14. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, SMC-9, 62–66.
  15. Wang, W.; Duan, L.; Wang, Y. Fast Image Segmentation Using Two-Dimensional Otsu Based on Estimation of Distribution Algorithm. J. Electr. Comput. Eng. 2017, 2017, 1735176.
  16. Nakagawa, Y.; Rosenfeld, A. Some experiments on variable thresholding. Pattern Recognit. 1979, 11, 191–204.
  17. Chow, C.; Kaneko, T. Boundary Detection of Radiographic Images by a Threshold Method. In Frontiers of Pattern Recognition; Academic Press: Cambridge, MA, USA, 1971; pp. 1530–1535.
  18. Chow, C.K.; Kaneko, T. Automatic boundary detection of the left ventricle from cineangiograms. Comput. Biomed. Res. 1972, 5, 388–410.
  19. Kimmel, R.; Bruckstein, A.M. Regularized Laplacian zero crossings as optimal edge integrators. Int. J. Comput. Vis. 2003, 53, 225–243.
  20. Baştan, M.; Bukhari, S.S.; Breuel, T. Active Canny: Edge detection and recovery with open active contour models. IET Image Process. 2017, 11, 1325–1332.
  21. Eua-Anant, N.; Udpa, L. A novel boundary extraction algorithm based on a vector image model. In Proceedings of the 39th Midwest Symposium on Circuits and Systems, Ames, IA, USA, 21 August 1996; Volume 2, pp. 597–600.
  22. Eua-Anant, N.; Udpa, L. Boundary extraction algorithm based on particle motion in a vector image field. In Proceedings of the International Conference on Image Processing, Santa Barbara, CA, USA, 26–29 October 1997; Volume 2, pp. 732–735.
  23. Eua-Anant, N.; Udpa, L. Boundary detection using simulation of particle motion in a vector image field. IEEE Trans. Image Process. 1999, 8, 1560–1571.
  24. Ma, W.Y.; Manjunath, B.S. EdgeFlow: A Technique for Boundary Detection and Image Segmentation. IEEE Trans. Image Process. 2000, 9, 1375–1388.
  25. Yang, F.; Bruckstein, A.M.; Cohen, L.D. PointFlow: A model for automatically tracing object boundaries and inferring illusory contours. In Energy Minimization Methods in Computer Vision and Pattern Recognition; Springer: Cham, Switzerland, 2017.
  26. Beucher, S.; Lantuejoul, C. Use of Watersheds in Contour Detection. In Proceedings of the International Workshop on Image Processing; CCETT/IRISA: Rennes, France, 1979.
  27. Kornilov, A.S.; Safonov, I.V. An overview of watershed algorithm implementations in open source libraries. J. Imaging 2018, 4, 123.
  28. Greig, D.M.; Porteous, B.T.; Seheult, A. Exact Maximum A Posteriori Estimation for Binary Images. J. R. Stat. Soc. Ser. B 1989, 51, 271–279.
  29. Wang, T.; Ji, Z.; Sun, Q.; Chen, Q.; Ge, Q.; Yang, J. Diffusive likelihood for interactive image segmentation. Pattern Recognit. 2018, 79, 440–451.
  30. Shi, J.; Malik, J. Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 888–905.
  31. Xu, J.; Janowczyk, A.; Chandran, S.; Madabhushi, A. A weighted mean shift, normalized cuts initialized color gradient based geodesic active contour model: Applications to histopathology image segmentation. In Medical Imaging 2010: Image Processing; International Society for Optics and Photonics: Bellingham, WA, USA, 2010; Volume 7623, p. 76230Y.
  32. Ren, X.; Malik, J. Learning a classification model for segmentation. Proc. IEEE Int. Conf. Comput. Vis. 2003, 1, 10–17.
  33. Daoud, M.I.; Atallah, A.A.; Awwad, F.; Al-Najjar, M.; Alazrai, R. Automatic superpixel-based segmentation method for breast ultrasound images. Expert Syst. Appl. 2019, 121, 78–96.
  34. Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A. SLIC Superpixels Compared to State-of-the-Art Superpixel Methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282.
  35. Li, Z.; Chen, J. Superpixel segmentation using Linear Spectral Clustering. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1356–1363.
  36. Hu, M.K. Visual Pattern Recognition by Moment Invariants. IRE Trans. Inf. Theory 1962, 8, 179–187.
  37. Everingham, M.; Van Gool, L.; Williams, C.K.; Winn, J.; Zisserman, A. The PASCAL Visual Object Classes (VOC) Challenge. Int. J. Comput. Vis. 2010, 88, 303–338.
  38. Martin, D.; Fowlkes, C.; Tal, D.; Malik, J. A Database of Human Segmented Natural Images and its Application to Evaluating Segmentation Algorithms and Measuring Ecological Statistics. In Proceedings of the Eighth IEEE International Conference on Computer Vision, Vancouver, BC, Canada, 7–14 July 2001; Volume 2, pp. 416–423.
  39. Majid, H.; Hadi Yazdani, B. Color Image Segmentation Metrics. In Encyclopedia of Image Processing; CRC Press: Boca Raton, FL, USA, 2019.
  40. Reichart, R.; Rappoport, A. The NVI clustering evaluation measure. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL 2009), Boulder, CO, USA, 4–5 June 2009; pp. 165–173.
Figure 1. (a) $\nabla P$, (b) $\nabla^2 P$, (c) an edge vector field $\mathbf{e}$, and (d) a normal compressive vector field $\mathbf{n}$.
Figure 2. (a) A combined vector field $\alpha\mathbf{e} + \beta\mathbf{n}$, $\alpha = 0.5$, $\beta = 0.5$, and (b) a boundary extraction result.
Figure 3. (a) Local color distance images and (b) a normal compressive vector field obtained from C-to-$C_T$ vectors.
Figure 4. (a) Original color image, (b) $\mathbf{n}$ obtained from Equation (1), and (c) $\mathbf{n}$ obtained from Equation (9).
Figure 5. (a) A gradient-like vector field $\mathbf{G}$ obtained from Equation (11), and (b) an edge vector field $\mathbf{e}$ obtained from Equation (12).
Figure 6. Block diagram of the proposed boundary extraction process.
Figure 7. Image segmentation results obtained using (a) PMVIF and (b) PMLCD evaluated using the same grayscale image, and (c) the PMLCD result evaluated using the original color image.
Figure 8. (a) BSDS500 #3096, (b) particle trajectory obtained from Equation (3), and (c) zoom of (b).
Figure 9. Color image segmentation results of the image #2007_000063 from VOC2012.
Figure 10. The ground truth object in Figure 9 and the corresponding segmented regions.
Figure 11. Color image segmentation results. (a) Dataset images, (b) ground truths; results of (c) PMLCD, (d) PMVIF, (e) watershed, (f) SLIC, (g) K-means, (h) mean shift, and (i) JSEG.
Figure 12. The comparison of the computation time of PMLCD and other methods.
Figure 13. Noisy image segmentation results: (a) VOC2012 #2007_001289 image with SNR = 0 dB ($\sigma_{noise}$ = 0.21), (b) particle trajectories obtained using PMLCD with a radius of the LCD = 3, $T_{|e|}$ = 0.27, $\gamma$ = −0.14 ($\alpha$ = 0.34, $\beta$ = 0.66), (c) extracted boundaries (red lines), and (d) segmented regions.
Figure 14. The benchmarks of the PMLCD method applied to the noisy VOC2012 #2007_001289 image.
Table 1. The performance of each method from Figure 9, Figure 10 and Figure 11. RI, GCE, NVI, BDE, and computation time are evaluated by image; the Dice coefficient is evaluated by object. Values in parentheses in the last block are standard deviations.

Method       RI     GCE    NVI    BDE     Time (s)  Dice   Parameters

VOC2012 #2007_000063
PMLCD        0.67   0.09   0.15   16.66   0.21      0.97   LCD radius 1, T|e| 0.17 (Otsu), γ 0.06 (α 0.52, β 0.48)
PMVIF        0.63   0.30   0.15   14.87   0.49      0.87   T|e| 0.16 (Otsu), γ 0.06 (α 0.50, β 0.50)
Watershed    0.62   0.06   0.34   13.96   0.08      0.96   Level 0.10
SLIC         0.64   0.15   0.20   13.87   12.99     0.88   Number of superpixels 20
K-means      0.63   0.23   0.24   14.02   12.78     0.63   Number of clusters 100
Mean shift   0.62   0.29   0.28   13.92   147.71    0.47   Bandwidth 0.02
JSEG         0.66   0.30   0.12   18.69   84.38     0.79   Color quantization 20

VOC2012 #2007_001430
PMLCD        0.62   0.09   0.17   18.75   0.41      0.95   LCD radius 2, T|e| 0.22 (Otsu), γ 0.00 (α 0.50, β 0.50)
PMVIF        0.60   0.14   0.20   16.59   0.75      0.90   T|e| 0.13 (Otsu), γ 0.00 (α 0.50, β 0.50)
Watershed    0.60   0.08   0.23   16.85   0.07      0.90   Level 0.08
SLIC         0.60   0.11   0.18   16.05   3.92      0.90   Number of superpixels 30
K-means      0.58   0.49   0.16   16.98   2.12      0.41   Number of clusters 8
Mean shift   0.57   0.47   0.18   18.11   53.50     0.43   Bandwidth 0.07
JSEG         0.62   0.16   0.13   21.92   60.72     0.83   Color quantization 10

VOC2012 #2010_005626
PMLCD        0.65   0.12   0.17   17.43   0.39      0.90   LCD radius 2, T|e| 0.16 (Otsu), γ 0.30 (α 0.70, β 0.30)
PMVIF        0.60   0.24   0.20   16.24   0.64      0.78   T|e| 0.12 (Otsu), γ 0.30 (α 0.50, β 0.50)
Watershed    0.60   0.17   0.27   19.33   0.07      0.83   Level 0.05
SLIC         0.64   0.13   0.17   15.54   4.35      0.88   Number of superpixels 20
K-means      0.63   0.40   0.14   16.73   2.03      0.78   Number of clusters 8
Mean shift   0.65   0.38   0.09   19.75   1.96      0.79   Bandwidth 0.25
JSEG         0.64   0.20   0.14   20.19   71.54     0.85   Color quantization 19

VOC2012 #2010_005746
PMLCD        0.82   0.05   0.09   7.65    0.19      0.93   LCD radius 1, T|e| 0.21 (Otsu), γ 0.00 (α 0.50, β 0.50)
PMVIF        0.73   0.05   0.12   4.26    0.45      0.91   T|e| 0.18 (Otsu), γ 0.00 (α 0.50, β 0.50)
Watershed    0.37   0.11   0.30   19.40   0.07      0.82   Level 0.01
SLIC         0.46   0.16   0.15   7.59    4.13      0.75   Number of superpixels 5
K-means      0.37   0.16   0.24   9.28    10.62     0.74   Number of clusters 100
Mean shift   0.41   0.17   0.23   13.43   373.74    0.71   Bandwidth 0.02
JSEG         0.46   0.14   0.14   14.47   36.85     0.77   Color quantization 2

BSDS500 #2018
PMLCD        0.90   0.21   0.09   5.43    0.20      0.92   LCD radius 1, T|e| 0.24 (Otsu), γ 0.35 (α 0.86, β 0.14)
PMVIF        0.75   0.46   0.14   4.32    0.49      0.92   T|e| 0.19 (Otsu), γ 0.35 (α 0.52, β 0.48)
Watershed    0.84   0.31   0.16   7.88    0.05      0.83   Level 0.04
SLIC         0.89   0.22   0.10   3.75    3.54      0.88   Number of superpixels 10
K-means      0.82   0.61   0.16   4.14    2.33      0.69   Number of clusters 10
Mean shift   0.74   0.36   0.12   5.01    8.34      0.73   Bandwidth 0.10
JSEG         0.81   0.37   0.11   18.94   69.68     0.59   Color quantization 30

BSDS500 #81095
PMLCD        0.90   0.17   0.09   9.55    0.35      0.89   LCD radius 2, T|e| 0.21 (Otsu), γ 0.12 (α 0.64, β 0.36)
PMVIF        0.81   0.26   0.14   9.64    0.38      0.86   T|e| 0.18 (Otsu), γ 0.12 (α 0.50, β 0.50)
Watershed    0.85   0.15   0.18   10.63   0.06      0.84   Level 0.05
SLIC         0.85   0.20   0.11   9.99    2.50      0.86   Number of superpixels 10
K-means      0.81   0.49   0.16   8.77    2.77      0.64   Number of clusters 14
Mean shift   0.82   0.50   0.14   10.41   38.64     0.63   Bandwidth 0.08
JSEG         0.84   0.30   0.11   16.63   47.30     0.80   Color quantization 12

BSDS500 #107072
PMLCD        0.85   0.18   0.10   12.78   0.16      0.93   LCD radius 1, T|e| 0.22 (Otsu), γ −0.05 (α 0.47, β 0.53)
PMVIF        0.48   0.33   0.13   15.34   0.27      0.47   T|e| 0.23 (Otsu), γ −0.05 (α 0.50, β 0.50)
Watershed    0.75   0.12   0.25   15.50   0.05      0.92   Level 0.15
SLIC         0.76   0.12   0.16   12.27   3.89      0.88   Number of superpixels 25
K-means      0.75   0.34   0.17   12.33   3.61      0.43   Number of clusters 20
Mean shift   0.74   0.32   0.27   13.58   79.77     0.41   Bandwidth 0.02
JSEG         0.82   0.22   0.10   8.99    47.27     0.73   Color quantization 10

BSDS500 #238025
PMLCD        0.86   0.10   0.09   13.60   0.37      0.96   LCD radius 2, T|e| 0.19 (Otsu), γ 0.15 (α 0.64, β 0.36)
PMVIF        0.69   0.31   0.10   15.95   0.28      0.93   T|e| 0.18 (Otsu), γ 0.15 (α 0.50, β 0.50)
Watershed    0.65   0.06   0.29   19.30   0.06      0.94   Level 0.05
SLIC         0.67   0.08   0.17   15.49   3.73      0.95   Number of superpixels 30
K-means      0.68   0.35   0.15   12.80   2.94      0.67   Number of clusters 16
Mean shift   0.71   0.47   0.12   13.41   50.11     0.61   Bandwidth 0.05
JSEG         0.73   0.24   0.11   15.27   42.22     0.61   Color quantization 10

Average (standard deviation)
PMLCD        0.78 (0.11)   0.13 (0.05)   0.12 (0.04)   12.73 (4.52)   0.29 (0.10)     0.93 (0.03)   LCD radius 1.50 (0.50), γ 0.14 (0.15)
PMVIF        0.66 (0.10)   0.26 (0.12)   0.15 (0.03)   12.15 (4.98)   0.47 (0.16)     0.83 (0.14)   γ 0.14 (0.15)
Watershed    0.66 (0.15)   0.13 (0.08)   0.25 (0.06)   15.36 (4.03)   0.06 (0.01)     0.88 (0.05)   Level 0.07 (0.04)
SLIC         0.69 (0.13)   0.15 (0.04)   0.16 (0.03)   11.82 (4.12)   4.88 (3.11)     0.87 (0.05)   Number of superpixels 18.75 (8.93)
K-means      0.66 (0.14)   0.38 (0.14)   0.18 (0.04)   11.88 (4.05)   4.90 (3.99)     0.62 (0.13)   Number of clusters 34.50 (38.01)
Mean shift   0.66 (0.13)   0.37 (0.10)   0.18 (0.07)   13.45 (4.21)   94.22 (113.90)  0.60 (0.14)   Bandwidth 0.08 (0.07)
JSEG         0.70 (0.12)   0.24 (0.07)   0.12 (0.01)   16.89 (3.78)   57.50 (15.60)   0.75 (0.09)   Color quantization 14.13 (8.01)
