Article

Kernel Probabilistic K-Means Clustering

Bowen Liu, Ting Zhang, Yujian Li, Zhaoying Liu and Zhilin Zhang
1 Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
2 School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 541004, China
* Author to whom correspondence should be addressed.
Sensors 2021, 21(5), 1892; https://doi.org/10.3390/s21051892
Submission received: 3 February 2021 / Revised: 4 March 2021 / Accepted: 5 March 2021 / Published: 8 March 2021
(This article belongs to the Special Issue Intelligent Sensors and Machine Learning)

Abstract

Kernel fuzzy c-means (KFCM) is a significantly improved version of fuzzy c-means (FCM) for processing linearly inseparable datasets. However, for the fuzzification parameter m = 1, the KFCM problem cannot be solved by Lagrangian optimization. To solve this problem, an equivalent model, called kernel probabilistic k-means (KPKM), is proposed here. The novel model relates KFCM to kernel k-means (KKM) in a unified mathematical framework. Moreover, the proposed KPKM can be addressed by the active gradient projection (AGP) method, a nonlinear programming technique for problems with linear equality and linear inequality constraints. To accelerate the AGP method, a fast AGP (FAGP) algorithm was designed. The proposed FAGP uses a maximum-step strategy to estimate the step length, and uses an iterative method to update the projection matrix. Experiments demonstrated the effectiveness of the proposed method through a performance comparison of KPKM with KFCM, KKM, FCM, and k-means. The experiments showed that the proposed KPKM is able to find nonlinearly separable structures in synthetic datasets. Ten real UCI datasets were used in this study, and KPKM had better clustering performance on at least six datasets. The proposed fast AGP requires less running time than the original AGP, and it reduced running time by 76–95% on real datasets.

1. Introduction

Clustering is an important unsupervised method, and the purpose of clustering is to divide a dataset into multiple clusters (or classes) with high intra-cluster similarity and low inter-cluster similarity. There have been many clustering algorithms, such as k-means (KM) and its variants [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]. Others are based on minimal spanning trees [17,18,19], density analysis [20,21,22,23,24,25], spectral analysis [26,27], subspace clustering [28,29], etc.
Generally, k-means minimizes the sum of squared Euclidean distances between each sample point and its nearest clustering center [1]. One variant of k-means is kernel k-means (KKM) [30,31,32,33], which is able to find nonlinearly separable structures by using the kernel function method. Another variant of k-means is fuzzy c-means (FCM) [2], which determines partitions by computing the membership degree of each data point to each cluster. The higher the membership degree, the greater the possibility of the data point belonging to the cluster. Although FCM is more flexible in applications [11,12,13,14,15,16], it is primarily suitable for linearly separable datasets. Kernel fuzzy c-means (KFCM) [34] is a significantly improved version of fuzzy c-means for clustering linearly inseparable datasets. However, the problem of KFCM with fuzzification parameter m = 1 cannot be solved by existing methods.
To solve the special case of KFCM for m = 1, a novel model called kernel probabilistic k-means (KPKM) is proposed. In fact, KPKM is a nonlinear programming model constrained by linear equalities and linear inequalities, and it is equivalent to KFCM at m = 1. In theory, the proposed KPKM can be solved by the active gradient projection (AGP) method [35,36]. Since the AGP method may take too much time on large datasets, we further propose a fast AGP (FAGP) algorithm to solve KPKM more efficiently. Moreover, we report experiments demonstrating its effectiveness compared with KFCM, KKM, FCM, and KM.
The paper is organized as follows: Section 2 reviews previous work. The KPKM algorithm is proposed in Section 3. Section 4 proposes a solution for KPKM. Section 5 presents descriptions and analyses of experiments. Conclusions and future work are mentioned in Section 6.

2. Background and Related Work

There is a large body of work related to this paper, mainly involving k-means, fuzzy c-means, kernel k-means, and kernel fuzzy c-means.
K-means minimizes the sum of squared Euclidean distances between each sample point and its nearest cluster center. It first chooses initial clustering centers randomly or manually, then partitions the dataset into several clusters (a data point belongs to the cluster whose center is nearest to it) and computes the mean of each cluster as its new center. K-means repeatedly updates centers and clusters until convergence. FCM follows the same idea as k-means but introduces a membership degree $w_{ij}$ and a fuzzy coefficient $m$ into the objective function; the higher $w_{ij}$ is, the greater the possibility that the $i$-th data point belongs to the $j$-th cluster. K-means and FCM are partition-based clustering algorithms, and partition-based clustering algorithms usually cannot cluster linearly inseparable datasets. The kernel method maps a linearly inseparable dataset into a linearly separable space, so kernel k-means (and kernel fuzzy c-means), which use a kernel function, can cluster linearly inseparable datasets.
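As a concrete reference for the procedure just described, the following is a minimal k-means sketch in Python/NumPy. It is an illustration only (the experiments in this paper called MATLAB's built-in kmeans), and the initialization and iteration count are arbitrary choices of the sketch.

```python
import numpy as np

def kmeans(X, K, n_iter=100, seed=0):
    """Plain k-means: alternate nearest-center assignment and mean updates."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=K, replace=False)]          # random initial centers
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # squared distances to centers
        labels = d2.argmin(axis=1)                                   # assign to nearest center
        centers = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(K)])      # recompute cluster means
    return labels, centers
```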

2.1. K-Means and Fuzzy C-Means

Let $X = \{ x_i \mid x_i \in \mathbb{R}^D, 1 \le i \le L \}$ represent a dataset. K-means divides it into $K$ clusters. If $\omega_j$ denotes the $j$-th cluster, we have $X = \bigcup_{j=1}^{K} \omega_j$ and $\omega_i \cap \omega_j = \emptyset$ for $1 \le i \ne j \le K$. Using $L_j = |\omega_j|$ to stand for the number of elements in $\omega_j$, whose center is $c_j$, k-means can be described as minimizing the following objective function:
$J = \sum_{j=1}^{K} \sum_{x_i \in \omega_j} \| x_i - c_j \|^2$    (1)
where
$c_j = \frac{1}{L_j} \sum_{x_i \in \omega_j} x_i.$    (2)
Let $L = \sum_{j=1}^{K} L_j$. By using $C$ instead of $K$ and assigning a membership degree $w_{ij}$ to the $i$-th data point in the $j$-th cluster for $1 \le i \le L$ and $1 \le j \le C$, the fuzzy c-means clustering model can be formulated as follows:
$\min\ J = \sum_{j=1}^{C} \sum_{i=1}^{L} w_{ij}^{m} \| x_i - c_j \|^2 \quad \text{s.t.} \quad \sum_{j=1}^{C} w_{ij} = 1,\ w_{ij} \ge 0$    (3)
where $m > 1$; $w_{ij}$ and $c_j$ are computed alternately as follows [2]:
$w_{ij} = \frac{\| x_i - c_j \|^{-\frac{2}{m-1}}}{\sum_{k=1}^{C} \| x_i - c_k \|^{-\frac{2}{m-1}}}, \qquad c_j = \frac{\sum_{i=1}^{L} w_{ij}^{m} x_i}{\sum_{i=1}^{L} w_{ij}^{m}}.$    (4)
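To make the alternating updates in (4) concrete, here is a minimal fuzzy c-means sketch in Python/NumPy. It is an illustration only (the paper's experiments called MATLAB's built-in fcm); the random initialization, iteration count, and the small eps guard are assumptions of this sketch.

```python
import numpy as np

def fcm(X, C, m=2.0, n_iter=100, eps=1e-10, seed=0):
    """Fuzzy c-means: alternate the membership and center updates of Eq. (4)."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], C))
    W /= W.sum(axis=1, keepdims=True)                    # rows sum to 1, as required by Eq. (3)
    for _ in range(n_iter):
        Wm = W ** m
        centers = (Wm.T @ X) / (Wm.sum(axis=0)[:, None] + eps)           # c_j update
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + eps  # ||x_i - c_j||^2
        inv = d2 ** (-1.0 / (m - 1.0))                                    # proportional to w_ij
        W = inv / inv.sum(axis=1, keepdims=True)                          # normalize over clusters
    return W, centers
```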

2.2. Kernel K-Means and Kernel Fuzzy C-Means

To improve the performance of k-means and fuzzy c-means on linearly inseparable datasets, we may develop their kernel versions by choosing a feature mapping $\varphi(\cdot): \mathbb{R}^D \to \mathcal{H}$ from data points to a kernel Hilbert space [37]. Though $\varphi$ is usually unknown, it must satisfy
$K(x, y) = \varphi(x)^{T} \varphi(y)$    (5)
where $K(x, y)$ is a kernel function. Commonly used kernel functions are presented in Table 1.
In Table 1, $\langle x, y \rangle = x \cdot y$ denotes the inner product of $x$ and $y$, and $\sigma$, $\alpha$, $\beta$ are parameters of the kernels. The objective function of kernel k-means is defined as
$J_k = \sum_{j=1}^{K} \sum_{x_i \in \omega_j} \| \varphi(x_i) - c_j \|^2$    (6)
where
$c_j = \frac{1}{L_j} \sum_{x_i \in \omega_j} \varphi(x_i).$    (7)
Using (5) and (7), we have
$\| \varphi(x_i) - c_j \|^2 = K(x_i, x_i) - \frac{2}{L_j} \sum_{x_k \in \omega_j} K(x_k, x_i) + \frac{1}{L_j^2} \sum_{x_l \in \omega_j} \sum_{x_h \in \omega_j} K(x_l, x_h).$    (8)
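As an illustration of (8), the following Python/NumPy sketch computes every $\| \varphi(x_i) - c_j \|^2$ directly from the kernel matrix and uses it in a basic kernel k-means assignment loop. This is not the implementation used in the paper; the Gaussian kernel helper, the random initialization, and the fixed iteration count are assumptions of the sketch.

```python
import numpy as np

def gaussian_kernel_matrix(X, sigma=1.0):
    """K[i, k] = exp(-||x_i - x_k||^2 / (2 sigma^2)), the Gaussian RBF kernel of Table 1."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def kernel_distances(K, labels, n_clusters):
    """Distances ||phi(x_i) - c_j||^2 of Eq. (8), computed from the kernel matrix only."""
    D = np.zeros((K.shape[0], n_clusters))
    for j in range(n_clusters):
        idx = np.where(labels == j)[0]
        Lj = max(len(idx), 1)                               # guard against an empty cluster
        D[:, j] = (np.diag(K)                               # K(x_i, x_i)
                   - 2.0 * K[:, idx].sum(axis=1) / Lj       # cross term of Eq. (8)
                   + K[np.ix_(idx, idx)].sum() / Lj ** 2)   # within-cluster term of Eq. (8)
    return D

def kernel_kmeans(K, n_clusters, n_iter=50, seed=0):
    labels = np.random.default_rng(seed).integers(0, n_clusters, size=K.shape[0])
    for _ in range(n_iter):
        labels = kernel_distances(K, labels, n_clusters).argmin(axis=1)
    return labels
```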
Let $w_{ij}$ represent the membership degree of the $i$-th point belonging to the $j$-th class. Likewise, we can define the kernel FCM clustering model as follows:
$\min\ J_f(\mathbf{W}) = \sum_{j=1}^{C} \sum_{i=1}^{L} w_{ij}^{m} \| \varphi(x_i) - c_j \|^2 \quad \text{s.t.} \quad \sum_{j=1}^{C} w_{ij} = 1,\ w_{ij} \ge 0,\ m > 1$    (9)
where
$c_j = \frac{\sum_{l=1}^{L} w_{lj}^{m} \varphi(x_l)}{\sum_{l=1}^{L} w_{lj}^{m}},$    (10)
$\| \varphi(x_i) - c_j \|^2 = K(x_i, x_i) - \frac{2}{\sum_{l=1}^{L} w_{lj}^{m}} \sum_{k=1}^{L} w_{kj}^{m} K(x_k, x_i) + \frac{1}{\left( \sum_{l=1}^{L} w_{lj}^{m} \right)^2} \sum_{l=1}^{L} \sum_{h=1}^{L} w_{lj}^{m} w_{hj}^{m} K(x_l, x_h).$    (11)
The membership degree is computed via
$w_{ij} = \frac{\| \varphi(x_i) - c_j \|^{-\frac{2}{m-1}}}{\sum_{k=1}^{C} \| \varphi(x_i) - c_k \|^{-\frac{2}{m-1}}}.$    (12)

3. Kernel Probabilistic K-Means

In this section, the kernel probabilistic k-means (KPKM) model is proposed.
We first restate the problem. When $m = 1$, the KFCM model reduces to a special case, namely,
$\min\ J_f(\mathbf{W}) = \sum_{j=1}^{K} \sum_{i=1}^{L} w_{ij} \| \varphi(x_i) - c_j \|^2 \quad \text{s.t.} \quad \sum_{j=1}^{K} w_{ij} = 1,\ w_{ij} \ge 0$    (13)
where
$c_j = \frac{\sum_{l=1}^{L} w_{lj} \varphi(x_l)}{\sum_{l=1}^{L} w_{lj}},$    (14)
$\| \varphi(x_i) - c_j \|^2 = K(x_i, x_i) - \frac{2}{\sum_{l=1}^{L} w_{lj}} \sum_{k=1}^{L} w_{kj} K(x_k, x_i) + \frac{1}{\left( \sum_{l=1}^{L} w_{lj} \right)^2} \sum_{l=1}^{L} \sum_{h=1}^{L} w_{lj} w_{hj} K(x_l, x_h).$    (15)
This special case cannot be solved by the Lagrangian optimization used for $m > 1$, because (12) cannot be computed when $m = 1$ (the exponent $\frac{1}{m-1} = \frac{1}{0}$ is undefined).
In this paper, we use optimization methods to solve this problem. However, if $c_j$ is treated as a constant, the partial derivative of (13) with respect to $w_{ij}$ is simply $\| \varphi(x_i) - c_j \|^2$, which does not contain $w_{ij}$.
To solve the problem, we substitute (14) into (13) and redefine the membership degrees $w_{ij}$ as probabilities $p_{ij}$ for $1 \le i \le L$ and $1 \le j \le K$. Finally, we have
$\min\ J(\mathbf{P}) = \sum_{j=1}^{K} \sum_{i=1}^{L} p_{ij} \left\| \varphi(x_i) - \frac{\sum_{l=1}^{L} p_{lj} \varphi(x_l)}{\sum_{l=1}^{L} p_{lj}} \right\|^2 \quad \text{s.t.} \quad \sum_{j=1}^{K} p_{ij} = 1,\ p_{ij} \ge 0$    (16)
where the probability vector
$\mathbf{P} = \left[ p_{11}, \ldots, p_{1K}, p_{21}, \ldots, p_{2K}, \ldots, p_{LK} \right]^{T}.$
Model (16) is the proposed kernel probabilistic k-means.
The proposed KPKM is a soft clustering method. Here $p_{ij}$ ($0 \le p_{ij} \le 1$) is the probability of the $i$-th data point belonging to the $j$-th cluster: the higher $p_{ij}$ is, the greater the possibility that the $i$-th data point belongs to the $j$-th cluster. In KKM, the membership degree takes only two values (0 and 1), whereas in the proposed KPKM, $p_{ij} \in [0, 1]$ (although the final $p_{ij}$ may end up equal to 0 or 1).
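For concreteness, the objective in (16) can be evaluated without ever computing $\varphi$ explicitly, by using the kernel expansion (15) with $p_{ij}$ in place of $w_{ij}$. The following Python/NumPy sketch (our own illustration, with eps added to avoid division by empty clusters) does exactly that for a kernel matrix and a probability matrix whose rows lie on the simplex.

```python
import numpy as np

def kpkm_objective(K, P, eps=1e-12):
    """Evaluate J(P) of Eq. (16) from an L x L kernel matrix K and an L x K_clusters matrix P."""
    s = P.sum(axis=0) + eps                      # s_j = sum_l p_lj
    cross = K @ P                                # cross[i, j] = sum_k p_kj K(x_k, x_i)
    quad = np.einsum('lj,lh,hj->j', P, K, P)     # sum_l sum_h p_lj p_hj K(x_l, x_h)
    dist2 = (np.diag(K)[:, None]                 # ||phi(x_i) - c_j||^2 via Eq. (15)
             - 2.0 * cross / s[None, :]
             + quad[None, :] / s[None, :] ** 2)
    return float((P * dist2).sum())
```

Storing the probabilities as an $L \times K$ matrix and flattening row-wise recovers the vector form $\mathbf{P}$ used in the rest of the paper.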

4. A Fast Solution to KPKM

KPKM is a nonlinear programming problem with linear equality and linear inequality constraints, so in theory it can be solved by the active gradient projection (AGP) method [35,36]. In this section, on the basis of AGP, we first calculate the gradient of the objective function of KPKM, and then design a fast AGP algorithm to solve the KPKM model.
The fast AGP has two advantages: iteratively updating the projection matrix and estimating the maximum step length.

4.1. Gradient Calculation

For convenience, we define $F_{kj}$ as
$F_{kj} = \left\| \varphi(x_k) - \frac{\sum_{l=1}^{L} p_{lj} \varphi(x_l)}{\sum_{l=1}^{L} p_{lj}} \right\|^2.$    (17)
Using the chain rule on (16), we obtain
$\frac{\partial J}{\partial p_{ij}} = \sum_{k=1}^{L} p_{kj} \frac{\partial F_{kj}}{\partial p_{ij}} + F_{ij}.$    (18)
According to (17), we can further derive
$\frac{\partial F_{kj}}{\partial p_{ij}} = -\frac{2 K(x_k, x_i)}{\sum_{l=1}^{L} p_{lj}} + \frac{2 \sum_{l=1}^{L} p_{lj} K(x_k, x_l)}{\left( \sum_{l=1}^{L} p_{lj} \right)^2} + \frac{2 \sum_{l=1}^{L} p_{lj} K(x_i, x_l)}{\left( \sum_{l=1}^{L} p_{lj} \right)^2} - \frac{2 \sum_{l=1}^{L} \sum_{h=1}^{L} p_{lj} p_{hj} K(x_l, x_h)}{\left( \sum_{l=1}^{L} p_{lj} \right)^3}.$    (19)
By substituting (19) into (18), we finally get
$\frac{\partial J}{\partial p_{ij}} = K(x_i, x_i) + \frac{1}{\left( \sum_{l=1}^{L} p_{lj} \right)^2} \sum_{l=1}^{L} \sum_{h=1}^{L} p_{lj} p_{hj} K(x_l, x_h) - \frac{2}{\sum_{l=1}^{L} p_{lj}} \sum_{k=1}^{L} p_{kj} K(x_k, x_i),$    (20)
and the gradient
$\nabla J = \left[ \frac{\partial J}{\partial p_{11}}, \ldots, \frac{\partial J}{\partial p_{1K}}, \ldots, \frac{\partial J}{\partial p_{L1}}, \ldots, \frac{\partial J}{\partial p_{LK}} \right]^{T}.$    (21)
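A minimal Python/NumPy sketch of (20) and (21), under the same assumptions as the objective sketch in Section 3 (a precomputed kernel matrix and the probabilities stored as an $L \times K$ matrix), is given below; flattening the result row-wise yields the gradient vector of (21).

```python
import numpy as np

def kpkm_gradient(K, P, eps=1e-12):
    """Entry (i, j) is dJ/dp_ij of Eq. (20); ravel() gives the gradient vector of Eq. (21)."""
    s = P.sum(axis=0) + eps                          # sum_l p_lj
    quad = np.einsum('lj,lh,hj->j', P, K, P)         # sum_l sum_h p_lj p_hj K(x_l, x_h)
    cross = K @ P                                    # sum_k p_kj K(x_k, x_i)
    return (np.diag(K)[:, None]
            + quad[None, :] / s[None, :] ** 2
            - 2.0 * cross / s[None, :])
```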

4.2. Fast AGP

In the constraints of KPKM, there are L linear equalities and K×L linear inequalities,
$\forall\, 1 \le i \le L: \quad \sum_{j=1}^{K} p_{ij} = 1,$    (22)
$\forall\, 1 \le i \le L,\ 1 \le j \le K: \quad p_{ij} \ge 0.$    (23)
Let $\phi = K \times L$. Let $\mathbf{I}_{\phi \times \phi}$ be the identity matrix of size $\phi \times \phi$. Define two matrices, the inequality matrix $\mathbf{A}$ and the equality matrix $\mathbf{E}$, where
$\mathbf{A} = \mathbf{I}_{\phi \times \phi},$    (24)
$\mathbf{E} = \begin{bmatrix} \overbrace{1 \cdots 1}^{K} & 0 \cdots 0 & \cdots & 0 \cdots 0 \\ 0 \cdots 0 & \overbrace{1 \cdots 1}^{K} & \cdots & 0 \cdots 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 \cdots 0 & 0 \cdots 0 & \cdots & \overbrace{1 \cdots 1}^{K} \end{bmatrix}_{L \times \phi}.$    (25)
Note that each row of A corresponds to one and only one inequality in (23), and each row of E corresponds to one and only one equality in (22). Accordingly, the KPKM’s constraints can be simply expressed as
$\mathbf{A} \mathbf{P} \ge \mathbf{0}, \quad \mathbf{E} \mathbf{P} = \mathbf{1}$    (26)
where $\mathbf{1} = [1, \ldots, 1]^{T}_{L \times 1}$. Let $\mathbf{P}^{(0)}$ stand for a randomly initialized probability vector, and $\mathbf{P}^{(k)}$ for the probability vector at iteration $k$. The rows of the inequality matrix $\mathbf{A}$ can be divided into two groups: one active, the other inactive. The active group is composed of all inequalities that hold exactly as equalities at $\mathbf{P}^{(k)}$, whereas the inactive group is composed of the remaining inequalities. If $\mathbf{A}_1^{(k)}$ and $\mathbf{A}_2^{(k)}$ respectively denote the active group and the inactive group, we have
$\mathbf{A}_1^{(k)} \mathbf{P}^{(k)} = \mathbf{0},$    (27)
$\mathbf{A}_2^{(k)} \mathbf{P}^{(k)} > \mathbf{0}.$    (28)
At iteration $k$, the active matrix $\mathbf{N}^{(k)}$ is defined as
$\mathbf{N}^{(k)} = \begin{bmatrix} \mathbf{A}_1^{(k)} \\ \mathbf{E} \end{bmatrix}.$    (29)
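The structure of (24), (25), and (29) is easy to build explicitly. The following Python/NumPy sketch (an illustration under our own naming, assuming the probability vector is flattened row-wise as in Section 3) constructs $\mathbf{E}$ and stacks the currently active rows of $\mathbf{A} = \mathbf{I}$ on top of it.

```python
import numpy as np

def equality_matrix(L, K):
    """E of Eq. (25): row i has K ones covering p_i1, ..., p_iK (row-wise ordering of P)."""
    return np.kron(np.eye(L), np.ones((1, K)))        # shape (L, K*L)

def active_matrix(p, L, K, tol=1e-12):
    """N of Eq. (29): rows of A = I whose constraints p_ij >= 0 are active, stacked on E."""
    A = np.eye(L * K)                                  # Eq. (24)
    A1 = A[p <= tol]                                   # active rows, Eq. (27)
    return np.vstack([A1, equality_matrix(L, K)])
```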
When $\mathbf{N} = \mathbf{N}^{(k)}$ is not a square matrix, we can construct its projection matrix $\mathbf{G}^{(k)}$ and the corresponding orthogonal projection matrix $\mathbf{Q}^{(k)}$ as follows:
$\mathbf{G}^{(k)} = \mathbf{N}^{T} (\mathbf{N} \mathbf{N}^{T})^{-1} \mathbf{N},$    (30)
$\mathbf{Q}^{(k)} = \mathbf{I} - \mathbf{G}^{(k)}.$    (31)
Suppose that $\mathbf{n}$ from $\mathbf{A}_2^{(k-1)}$ is the row vector that becomes active at iteration $k$, satisfying $\mathbf{n} \mathbf{P}^{(k-1)} > 0$ and $\mathbf{n} \mathbf{P}^{(k)} = 0$. According to matrix theory [38], we can more efficiently compute $\mathbf{G}^{(k)}$ and $\mathbf{Q}^{(k)}$ by
$\mathbf{G}^{(k)} = \mathbf{G}^{(k-1)} + \mathbf{Q}^{(k-1)} \mathbf{n}^{T} \left\langle \mathbf{Q}^{(k-1)} \mathbf{n}^{T},\ \mathbf{Q}^{(k-1)} \mathbf{n}^{T} \right\rangle^{-1} \mathbf{n} \mathbf{Q}^{(k-1)}.$    (32)
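The recursion (32) avoids re-inverting $\mathbf{N}\mathbf{N}^{T}$ whenever a single inequality becomes active. A small Python/NumPy sketch of both the from-scratch construction (30)-(31) and the rank-one update (32) is shown below; it assumes the previous $\mathbf{G}$ and $\mathbf{Q}$ are kept in memory and that the newly active row $\mathbf{n}$ is not already in the row space of the previous $\mathbf{N}$.

```python
import numpy as np

def projection_matrices(N):
    """G and Q of Eqs. (30)-(31), computed from scratch."""
    G = N.T @ np.linalg.inv(N @ N.T) @ N
    return G, np.eye(N.shape[1]) - G

def add_active_row(G_prev, Q_prev, n):
    """Rank-one update of Eq. (32) when a single row n becomes newly active."""
    v = Q_prev @ n                                 # Q^{(k-1)} n^T as a 1-D vector
    G = G_prev + np.outer(v, v) / (v @ v)          # <Q n^T, Q n^T>^{-1} is the scalar 1/(v . v)
    return G, np.eye(G.shape[0]) - G
```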
Furthermore, we can compute the projected gradient by
$\mathbf{d}^{(k)} = -\mathbf{Q}^{(k)} \nabla J(\mathbf{P}^{(k)}).$    (33)
Using $\mathbf{d}^{(k)}$, we update the probability vector $\mathbf{P}^{(k+1)}$ as
$\mathbf{P}^{(k+1)} = \mathbf{P}^{(k)} + t^{(k)} \mathbf{d}^{(k)}$    (34)
where $t^{(k)}$ is the step length. Usually, $t^{(k)}$ is chosen as a small number. For fast convergence, we estimate the maximum step length as follows:
$t^{(k)} = t_{\max}^{(k)}.$    (35)
(1) Let $p_{ij}^{(k+1)} = p_{ij}^{(k)} + t_{ij} d_{ij}^{(k)} = 0$ for $p_{ij}^{(k)} > 0$;
(2) Compute $t_{ij} = -\,p_{ij}^{(k)} / d_{ij}^{(k)}$ for $p_{ij}^{(k)} > 0$ and $d_{ij}^{(k)} < 0$;
(3) $t_{\max}^{(k)} = \min\, t_{ij}$.
Figure 1 compares the maximum step length with a small step length.
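A minimal Python/NumPy sketch of steps (1)-(3) above is given next; the tolerance and the fallback step used when no coordinate moves toward its bound are assumptions of the sketch, not values taken from the paper.

```python
import numpy as np

def max_step_length(p, d, tol=1e-12, fallback=1e-3):
    """t_max of Eq. (35): the largest step keeping every p_ij + t * d_ij >= 0."""
    mask = (p > tol) & (d < -tol)           # coordinates moving toward the bound p_ij = 0
    if not mask.any():
        return fallback                      # assumed default when no bound is approached
    return float(np.min(-p[mask] / d[mask]))
```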
When $\mathbf{N} = \mathbf{N}^{(k)}$ is a square matrix, it must be invertible. In this case, we actually have $\mathbf{G}^{(k)} = \mathbf{N}^{T} (\mathbf{N} \mathbf{N}^{T})^{-1} \mathbf{N} = \mathbf{I}_{LK \times LK}$ and $\mathbf{Q}^{(k)} = \mathbf{0}$. Thus, $\mathbf{d}^{(k)} = -\mathbf{Q}^{(k)} \nabla J(\mathbf{P}^{(k)}) = \mathbf{0}$ is not a feasible descent direction. A new descent direction can be computed as follows:
(1)
Compute a new vector,
$\mathbf{q}^{(k)} = (\mathbf{N} \mathbf{N}^{T})^{-1} \mathbf{N} \nabla J = \left( \mathbf{N}^{T} \right)^{-1} \nabla J.$    (36)
(2)
Break $\mathbf{q}^{(k)}$ into two parts $\mathbf{q}_1^{(k)}$ and $\mathbf{q}_2^{(k)}$, namely,
$\mathbf{q}^{(k)} = \left[ \mathbf{q}_1^{(k)T},\ \mathbf{q}_2^{(k)T} \right]^{T}$    (37)
where the size of $\mathbf{q}_1^{(k)}$ equals the number of rows of $\mathbf{A}_1^{(k)}$, and that of $\mathbf{q}_2^{(k)}$ equals the number of rows of $\mathbf{E}$.
(3)
If $\mathbf{q}_1^{(k)} \ge \mathbf{0}$, stop. Otherwise, choose any element of $\mathbf{q}_1^{(k)}$ that is less than 0, delete the corresponding row of $\mathbf{A}_1^{(k)}$, and then use (29)–(31) and (33) to compute $\mathbf{d}^{(k)}$.
The above fast AGP solution to KPKM is outlined in Algorithm 1. Compared with the original AGP, the fast AGP has two advantages: iteratively updating the projection matrix (shown in (32)) and estimating the maximum step length (shown in (35)).

4.3. Analysis of Complexity

In this section, the computational complexities of the traditional AGP algorithm and the proposed FAGP algorithm are analyzed. Let $T_A$ represent the iteration number of AGP and $T_F$ the iteration number of FAGP. Among all matrix computations, obtaining the projection matrix $\mathbf{G}$ is the most expensive step. In the AGP algorithm, computing $\mathbf{G}$ via (30) requires $O(K^3 L^3)$ operations, so the total computational complexity of the AGP algorithm is $O(T_A K^3 L^3)$. In the FAGP algorithm, $\mathbf{G}$ is computed via (32), which does not require the matrix inverse $(\mathbf{N}\mathbf{N}^{T})^{-1}$; computing $\mathbf{G}$ via (32) requires $O(K^3 L^2)$ operations, so the total computational complexity of the FAGP algorithm (i.e., Algorithm 1) is $O(T_F K^3 L^2)$.
Algorithm 1 Fast active gradient projection (FAGP).
      Input: $X$ and $K$
      Do:
            (1) Let $k = 0$, $\mathbf{N} = \mathbf{N}^{(0)} = \mathbf{E}$, $\mathbf{G}^{(0)} = \mathbf{N}^{T}(\mathbf{N}\mathbf{N}^{T})^{-1}\mathbf{N}$,
      initialize $\mathbf{P}^{(0)}$, go to (4);
            (2) Find a row $\mathbf{n}$ of $\mathbf{A}_2^{(k-1)}$ meeting $\mathbf{n}\mathbf{P}^{(k-1)} > 0$ and $\mathbf{n}\mathbf{P}^{(k)} = 0$;
            (3) Compute $\mathbf{G}^{(k)}$ by (32);
            (4) Compute $\mathbf{Q}^{(k)}$ by (31), and $\mathbf{d}^{(k)}$ by (33);
            (5) Compute $t^{(k)}$ by (35), and $\mathbf{P}^{(k+1)}$ by (34);
            (6) Let $k = k + 1$; if $\mathbf{G}^{(k)} \ne \mathbf{I}_{LK \times LK}$, go to (2);
            (7) Construct $\mathbf{N} = \mathbf{N}^{(k)}$ by (29), and $\mathbf{q}^{(k)}$ by (36);
            (8) Break $\mathbf{q}^{(k)}$ into $\mathbf{q}_1^{(k)}$ and $\mathbf{q}_2^{(k)}$ by (37);
            (9) If all elements of $\mathbf{q}_1^{(k)}$ are not less than 0, stop;
            (10) Choose any element less than 0 from $\mathbf{q}_1^{(k)}$,
      and delete the corresponding row of $\mathbf{A}_1^{(k)}$;
            (11) Reconstruct $\mathbf{N} = \mathbf{N}^{(k)}$ by (29);
            (12) Compute $\mathbf{G}^{(k)}$ by (30), and $\mathbf{Q}^{(k)}$ by (31);
            (13) Compute $\mathbf{d}^{(k)}$ by (33), and $t^{(k)}$ by (35);
            (14) Compute $\mathbf{P}^{(k+1)}$ by (34);
            (15) Let $k = k + 1$, and go to (7).
      Output: the probability vector $\mathbf{P}$
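To show how the pieces of Algorithm 1 fit together, here is a deliberately simplified projected-gradient driver in Python/NumPy. It reuses the kpkm_gradient sketch from Section 4.1, rebuilds the projection matrix from scratch at every iteration, and omits the multiplier test of steps (7)–(15), so it is a didactic approximation of FAGP rather than a faithful reimplementation; the iteration count, tolerances, and fallback step are assumptions.

```python
import numpy as np

def kpkm_projected_gradient(Kmat, n_clusters, n_iter=200, seed=0, tol=1e-9):
    """Simplified projected-gradient solver for the KPKM model of Eq. (16)."""
    L = Kmat.shape[0]
    rng = np.random.default_rng(seed)
    P = rng.random((L, n_clusters))
    P /= P.sum(axis=1, keepdims=True)                     # feasible start: EP = 1, P >= 0
    E = np.kron(np.eye(L), np.ones((1, n_clusters)))      # Eq. (25)
    I = np.eye(L * n_clusters)
    for _ in range(n_iter):
        p = P.ravel()
        g = kpkm_gradient(Kmat, P).ravel()                # Eq. (21), sketch from Section 4.1
        N = np.vstack([I[p <= tol], E])                   # Eq. (29): active rows of A over E
        G = N.T @ np.linalg.pinv(N @ N.T) @ N             # Eq. (30); pinv guards rank deficiency
        d = -(I - G) @ g                                  # Eq. (33)
        if np.linalg.norm(d) < 1e-8:
            break                                          # projected gradient vanished
        mask = (p > tol) & (d < -tol)
        t = np.min(-p[mask] / d[mask]) if mask.any() else 1e-3   # Eq. (35)
        P = (p + t * d).reshape(L, n_clusters)
    return P
```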

5. Experimental Results

To evaluate the performance of the proposed KPKM model (equivalent to KFCM at m = 1) solved by the FAGP algorithm, we conducted extensive experiments on one synthetic dataset, ten UCI datasets (http://archive.ics.uci.edu/ml (accessed on 8 March 2021)) and the MNIST dataset (http://yann.lecun.com/exdb/mnist/ (accessed on 8 March 2021)). These datasets are detailed in Section 5.1, Section 5.2 and Section 5.3. In Section 5.1, we use a synthetic dataset to compare KPKM using a Gaussian kernel with KPKM using a linear kernel. In Section 5.2 and Section 5.3, we compare the proposed KPKM with KFCM, KKM, FCM, and KM to evaluate the performance of KPKM solved by the proposed fast AGP. Moreover, Section 5.4 and Section 5.5 evaluate the descent stability and convergence speed of the proposed FAGP, respectively. In the experiments, we implemented our own MATLAB code for KPKM, KFCM, and KKM, with the two built-in functions sparse and full employed for matrix optimization. Moreover, we called MATLAB's built-in functions fcm and kmeans for FCM and KM, respectively.
All experiments were carried out on a PC with an Intel(R) Core(TM) i7-4790 CPU at 3.60 GHz and 8.00 GB RAM, running Windows 7 and MATLAB 2015a.

5.1. The Experiment on the Synthetic Dataset

In this experiment, we analyzed the influences of the Gaussian radial basis function kernel and the linear kernel on KPKM when clustering one synthetic dataset, which is shown in Figure 2. The synthetic dataset contains 300 points, with 100 and 200 points coming from two linearly inseparable classes, disc and ring, respectively. The results are displayed in Figure 3. Clearly, KPKM with the Gaussian radial basis function kernel clusters the dataset perfectly, whereas KPKM with the linear kernel cannot.

5.2. Experiment on Ten UCI Datasets

In this experiment, we compare the clustering performance of KPKM with the performances of KFCM, KKM, FCM, and KM in terms of three measures: normalized mutual information (NMI), adjusted Rand index (ARI), and v-measure (VM) [39,40]. NMI is a normalization of the mutual information score to scale the results between 0 (no mutual information) and 1 (perfect correlation). The NMI is defined as
$\mathrm{NMI}(U, V) = \frac{\mathrm{MI}(U, V)}{\mathrm{mean}(H(U), H(V))}$    (38)
where $H(U)$ is the entropy of $U$; MI is given by
$\mathrm{MI}(U, V) = \sum_{i=1}^{|U|} \sum_{j=1}^{|V|} \frac{|U_i \cap V_j|}{N} \log \frac{N |U_i \cap V_j|}{|U_i| |V_j|}$    (39)
where $|U_i|$ is the number of samples in cluster $U_i$. ARI is a chance-adjusted similarity measure between two clusterings. The ARI is defined as
$\mathrm{ARI} = \frac{\mathrm{RI} - E(\mathrm{RI})}{\max(\mathrm{RI}) - E(\mathrm{RI})}$    (40)
where RI (the Rand index) is the fraction of sample pairs on which the two partitions agree, and $E(\mathrm{RI})$ is its expected value under random labeling. VM is a harmonic mean of homogeneity and completeness, where homogeneity means that each cluster contains only members of a single class, and completeness means that all members of a class are assigned to the same cluster. The VM is defined as
$\mathrm{VM} = \frac{(1 + \beta) \times \mathrm{homogeneity} \times \mathrm{completeness}}{\beta \times \mathrm{homogeneity} + \mathrm{completeness}}.$    (41)
The higher the NMI, ARI and VM, the better the performance.
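All three measures are available off the shelf in scikit-learn, so a short Python snippet suffices to reproduce this kind of evaluation (this is only an illustration; the paper's own evaluation was done in MATLAB, and the label vectors below are made up):

```python
from sklearn.metrics import (adjusted_rand_score,
                             normalized_mutual_info_score,
                             v_measure_score)

y_true = [0, 0, 1, 1, 2, 2]     # hypothetical ground-truth classes
y_pred = [0, 0, 1, 2, 2, 2]     # hypothetical cluster labels from an algorithm

print("NMI:", normalized_mutual_info_score(y_true, y_pred))  # Eq. (38)
print("ARI:", adjusted_rand_score(y_true, y_pred))           # Eq. (40)
print("VM :", v_measure_score(y_true, y_pred))               # Eq. (41), beta = 1 by default
```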
The ten UCI datasets are described in Table 2 by name, code, number of instances, number of classes, and number of dimensions. A set of empirically chosen parameters was used: we set m = 2 for KFCM and m = 1.3 for FCM. With the Gaussian radial basis function kernel, we also selected appropriate values of σ (shown in Table 3), and report the clustering results in Table 4. The best result in each case is highlighted in bold. (The performance of many methods depends on their parameters [41]. Our experiments showed that, even with the same kernel function, the most appropriate parameters usually differ between algorithms, so we tuned each method separately so that it achieved the best clustering performance it could, drawing partly on experience.) We ran every algorithm 10 times and report the average results.
From Table 4, we can see that
(1)
For NMI, KPKM had the best clustering results on nine datasets, so the clustering performance of KPKM was the best for NMI.
(2)
For ARI, KPKM had the best clustering results on six datasets. KFCM, KKM, FCM, and KM performed the best on 2, 1, 0, and 1 datasets, respectively, so KPKM performed the best for ARI.
(3)
For VM, KPKM, KFCM, KKM, FCM, and KM had the best clustering results on 7, 0, 2, 1, and 0 datasets, respectively. Thus, KPKM had the best clustering performance for VM.
Overall, KPKM is better than the other models.

5.3. Experiment on the MNIST Dataset

In this experiment, we used the MNIST dataset to compare KPKM with KFCM and KKM when clustering digital images. The MNIST dataset was composed of handwritten digits, with 60,000 examples for training and 10,000 examples for testing. All digits were size-normalized and centered in fixed-size images. To evaluate the clustering performances of KPKM, KFCM, and KKM, we randomly chose 1000 training examples; 100 are shown in Figure 4.
Moreover, we defined a CWSS kernel based on complex wavelet structural similarity (CWSS) [42]. CWSS can be regarded as a similarity coefficient that is insensitive to distortions which do not change the structural content of an image. Using $CWSS(x, y)$ to denote the similarity between two images $x$ and $y$, we can express the CWSS kernel as
$K_{\mathrm{cwss}}(x, y) = \exp\left( -\frac{1 - CWSS(x, y)}{2 \sigma^2} \right)$    (42)
where σ = 5 is set for KFCM, and σ = 1 for both KPKM and KKM. Additionally, m = 1.3 is set for KFCM. We present the results in Table 5, and the clustering of the examples from Figure 4 is displayed correspondingly in Figure 5. From Table 5, we can see that KPKM outperformed KFCM and KKM in terms of NMI, ARI, and VM. From Figure 5, we can observe that both KPKM and KKM found ten clusters, whereas KFCM found only seven clusters, although the number of clusters was set to ten.
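For reference, once a matrix of pairwise CW-SSIM similarities is available (from any CW-SSIM implementation, which is outside the scope of this sketch), mapping it to the kernel of (42) is a one-liner in Python/NumPy:

```python
import numpy as np

def cwss_kernel(S, sigma=1.0):
    """Eq. (42): map CW-SSIM similarities S[i, k] = CWSS(x_i, x_k) to kernel values."""
    return np.exp(-(1.0 - S) / (2.0 * sigma ** 2))
```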

5.4. Experiment for Descent Stability

In this experiment, we used the 10 UCI datasets to evaluate the descent stability of FAGP. AGP selects a small step length to converge; if AGP selects a large step length, the objective function value may descend with oscillation, or even fail to converge. FAGP estimates a maximum step length at each iteration to speed up its convergence. Does this have any serious influence on convergence? We ran the proposed FAGP on the 10 UCI datasets to examine its descent stability. As shown in Figure 6, FAGP descends stably, without oscillation, at every iteration.

5.5. Convergence Speed Comparison between FAGP and AGP

In this experiment, we compared the convergence speeds of FAGP and AGP in terms of running time on the 10 UCI datasets, where η is the ratio of FAGP's running time to AGP's. FAGP and AGP used the same initializations in each case. The results are presented in Table 6 and Table 7 ("–" means the running time was too long to obtain the final clustering results). We can observe that the proposed FAGP ran faster than AGP on every dataset for which AGP could finish. For the Iris and Seeds datasets, FAGP used less than 10% of the running time of AGP. FAGP also required fewer iterations than AGP. For the large datasets, the running time of AGP was too long, but the proposed fast AGP could still obtain the final clustering results.

6. Conclusions

In this paper, a novel clustering model, KPKM, was proposed. The proposed KPKM solves the problem of KFCM for m = 1, which cannot be solved by existing methods. The traditional AGP method can solve the proposed KPKM, but its efficiency is low. A fast AGP was therefore proposed to speed up the AGP: it uses a maximum-step-length strategy to reduce the number of iterations and an iterative method to update the projection matrix. The experimental results demonstrated that the fast AGP is able to solve the KPKM model and requires less running time than AGP (the proposed FAGP requires 4.22–27.90% of the running time of AGP on real UCI datasets). The convergence of the proposed method was also analyzed experimentally. Additionally, in the experiments, the KPKM model produced overall better clustering results than the other models, including KFCM, KKM, FCM, and KM. The proposed KPKM obtained the best clustering results on at least 6 of the 10 real UCI datasets used.
As future work, the proposed KPKM will be evaluated with other kernels in a variety of applications. For large datasets, the proposed method still has some disadvantages, so the next project will include further speeding up the AGP on large datasets.

Author Contributions

Data curation, Z.L. and Z.Z.; writing—original draft, B.L.; writing—review and editing, T.Z. and Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China (61876010, 61806013, and 61906005).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

All authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
$X$: dataset
$x_i$: the $i$-th data point of $X$
$\varphi(\cdot)$: feature mapping from data points to a kernel Hilbert space
$K_{\mathrm{lap}}(\cdot,\cdot)$: Laplace radial basis function kernel
$K_{\mathrm{gau}}(\cdot,\cdot)$: Gaussian radial basis function kernel
$K_{\mathrm{pol}}(\cdot,\cdot)$: polynomial kernel
$K_{\tan}(\cdot,\cdot)$: sigmoid kernel
$\omega_j$: the $j$-th cluster
$L$: number of elements in $X$
$L_j$: number of elements in $\omega_j$
$c_j$: center of $\omega_j$
$K$, $C$: number of clustering centers
$\mathbf{W}$: membership degree matrix
$w_{ij}$: membership degree of the $i$-th data point in the $j$-th cluster
$\mathbf{A}$: inequality matrix
$\mathbf{E}$: equality matrix
$\mathbf{P}$: probability vector
$\mathbf{I}$: identity matrix
$p_{ij}$: probability of the $i$-th data point belonging to the $j$-th cluster
$\mathbf{G}$: projection matrix
$\mathbf{Q}$: orthogonal projection matrix
$\mathbf{N}$: active matrix
$\mathbf{n}$: row vector of $\mathbf{N}$
$\mathbf{d}$: projected gradient
$t$: step length

References

1. Lloyd, S. Least Squares Quantization in PCM. IEEE Trans. Inf. Theory 1982, 28, 129–137.
2. Bezdek, J.C. Pattern Recognition with Fuzzy Objective Function Algorithms. Adv. Appl. Pattern Recognit. 1981, 22, 203–239.
3. Jing, L.; Ng, M.K.; Huang, J.Z. An entropy weighting k-means algorithm for subspace clustering of high-dimensional sparse data. IEEE Trans. Knowl. Data Eng. 2007, 19, 1026–1041.
4. Xenaki, S.; Koutroumbas, K.; Rontogiannis, A. Sparsity-aware possibilistic clustering algorithms. IEEE Trans. Fuzzy Syst. 2016, 24, 1611–1626.
5. Yang, M.S.; Nataliani, Y. A feature-reduction fuzzy clustering algorithm based on feature-weighted entropy. IEEE Trans. Fuzzy Syst. 2018, 26, 817–835.
6. Gu, J.; Jiao, L.; Yang, S.; Liu, F. Fuzzy double c-means clustering based on sparse self-representation. IEEE Trans. Fuzzy Syst. 2018, 26, 612–626.
7. Hamasuna, Y.; Endo, Y.; Miyamoto, S. On tolerant fuzzy c-means clustering and tolerant possibilistic clustering. Soft Comput. 2010, 14, 487–494.
8. Li, Y.; Yang, G.; He, H. A study of large-scale data clustering based on fuzzy clustering. Soft Comput. 2016, 20, 3231–3242.
9. Zhu, L.F.; Wang, J.S.; Wang, H.Y. A Novel Clustering Validity Function of FCM Clustering Algorithm. IEEE Access 2019, 7, 152289–152315.
10. Sinaga, K.P.; Yang, M. Unsupervised K-Means Clustering Algorithm. IEEE Access 2020, 8, 80716–80727.
11. Wang, C.; Pedrycz, W.; Yang, J.; Zhou, M.; Li, Z. Wavelet Frame-Based Fuzzy C-Means Clustering for Segmenting Images on Graphs. IEEE Trans. Cybern. 2020, 50, 3938–3949.
12. Wang, C.; Pedrycz, W.; Li, Z.; Zhou, M.; Ge, S.S. G-image Segmentation: Similarity-preserving Fuzzy C-Means with Spatial Information Constraint in Wavelet Space. IEEE Trans. Fuzzy Syst. 2020.
13. Zhang, R.; Li, X.; Zhang, H.; Nie, F. Deep Fuzzy K-Means With Adaptive Loss and Entropy Regularization. IEEE Trans. Fuzzy Syst. 2020, 28, 2814–2824.
14. Wang, C.; Pedrycz, W.; Zhou, M.; Li, Z. Sparse Regularization-Based Fuzzy C-Means Clustering Incorporating Morphological Grayscale Reconstruction and Wavelet Frames. IEEE Trans. Fuzzy Syst. 2020.
15. Wang, C.; Pedrycz, W.; Li, Z.; Zhou, M.; Zhao, J. Residual-sparse Fuzzy C-Means Clustering Incorporating Morphological Reconstruction and Wavelet frame. IEEE Trans. Fuzzy Syst. 2020.
16. Zhang, R.; Nie, F.; Guo, M.; Wei, X.; Li, X. Joint Learning of Fuzzy k-Means and Nonnegative Spectral Clustering with Side Information. IEEE Trans. Image Process. 2019, 28, 2152–2162.
17. Wang, X.; Yu, F.; Pedrycz, W.; Wang, J. Hierarchical clustering of unequal-length time series with area-based shape distance. Soft Comput. 2019, 23, 6331–6343.
18. Li, Y. A Clustering Algorithm Based on Maximal θ-Distant Subtrees. Pattern Recognit. 2007, 40, 1425–1431.
19. Esgario, G.M.; Krohling, R.A. Clustering with Minimum Spanning Tree using TOPSIS with Multi-Criteria Information. In Proceedings of the IEEE International Conference on Fuzzy Systems, Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–7.
20. Rodriguez, A.; Laio, A. Clustering by Fast Search and Find of Density Peaks. Science 2014, 344, 1492–1496.
21. Bryant, A.; Cios, K. RNN-DBSCAN: A density-based clustering algorithm using reverse nearest neighbor density estimates. IEEE Trans. Knowl. Data Eng. 2018, 30, 1109–1121.
22. Xu, X.; Ding, S.; Xu, H.; Liao, H.; Xue, Y. A feasible density peaks clustering algorithm with a merging strategy. Soft Comput. 2019, 23, 5171–5183.
23. Wang, X.; Zhang, Y.; Xie, J. A density-core-based clustering algorithm with local resultant force. Soft Comput. 2020, 24, 6571–6590.
24. Wu, C.; Lee, J.; Isokawa, T.; Yao, J.; Xia, Y. Efficient Clustering Method Based on Density Peaks with Symmetric Neighborhood Relationship. IEEE Access 2019, 7, 60684–60696.
25. Liu, T.; Li, H.; Zhao, X. Clustering by Search in Descending Order and Automatic Find of Density Peaks. IEEE Access 2019, 7, 133772–133780.
26. Luxburg, U.V. A Tutorial on Spectral Clustering. Stat. Comput. 2007, 17, 395–416.
27. Chen, J.; Li, Z.; Huang, B. Linear Spectral Clustering Superpixel. IEEE Trans. Image Process. 2017, 26, 3317–3330.
28. Elhamifar, E.; Vidal, R. Sparse Subspace Clustering: Algorithm, Theory, and Applications. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2765–2781.
29. Lu, C.; Feng, J.; Lin, Z.; Mei, T.; Yan, S. Subspace Clustering by Block Diagonal Representation. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 41, 487–501.
30. Gu, Y.; Chanussot, J.; Jia, X.; Benediktsson, J.A. Multiple Kernel Learning for Hyperspectral Image Classification: A Review. IEEE Trans. Geosci. Remote Sens. 2017, 55, 6547–6565.
31. Nguyen, B.; Baets, B.D. Kernel-Based Distance Metric Learning for Supervised k-Means Clustering. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3084–3095.
32. Liu, X.; Zhu, X.; Li, M.; Wang, L.; Zhu, E.; Liu, T. Multiple Kernel K-means with Incomplete Kernels. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 1191–1204.
33. Marin, D.; Tang, M.; Ben, A.I.; Boykov, Y.Y. Kernel Clustering: Density Biases and Solutions. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 136–147.
34. Huang, H.C.; Chuang, Y.Y.; Chen, C.S. Multiple Kernel Fuzzy Clustering. IEEE Trans. Fuzzy Syst. 2012, 20, 120–134.
35. Rosen, J.B. The Gradient Projection Method for Nonlinear Programming. Part I. Linear Constraints. J. Soc. Ind. Appl. Math. 1961, 9, 514–532.
36. Goldfarb, D.; Lapidus, L. Conjugate Gradient Method for Nonlinear Programming Problems with Linear Constraints. Ind. Eng. Chem. Fundam. 1968, 7, 142–151.
37. Girolami, M. Mercer Kernel-Based Clustering in Feature Space. IEEE Trans. Neural Netw. 2002, 13, 780–784.
38. Honig, M.; Madhow, U.; Verdu, S. Blind Adaptive Multiuser Detection. IEEE Trans. Inf. Theory 1995, 41, 944–960.
39. Vinh, N.X.; Epps, J.; Bailey, J. Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. J. Mach. Learn. Res. 2010, 11, 2837–2854.
40. Rosenberg, A.; Hirschberg, J. V-measure: A conditional entropy-based external cluster evaluation measure. In Proceedings of the Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Prague, Czech Republic, 28–30 June 2007; pp. 410–420.
41. Gao, S.; Zhou, M.; Wang, Y.; Cheng, J.; Yachi, H.; Wang, J. Dendritic Neuron Model With Effective Learning Algorithms for Classification, Approximation, and Prediction. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 601–614.
42. Sampat, M.P.; Wang, Z.; Gupta, S.; Bovik, A.C.; Markey, M.K. Complex Wavelet Structural Similarity: A New Image Similarity Index. IEEE Trans. Image Process. 2009, 18, 2385–2401.
Figure 1. Maximum step length and small step length.
Figure 2. The synthetic dataset, which is composed of two linearly inseparable classes: disc and ring.
Figure 3. The results of the synthetic dataset clustered by Gaussian radial basis function kernel probabilistic k-means (KPKM) (a) and linear KPKM (b).
Figure 4. One hundred digit examples, 10 per class.
Figure 5. The results of the examples of Figure 4 clustered by KPKM (a), KFCM (b), and KKM (c).
Figure 6. Descent stability of FAGP on 10 UCI datasets: Ionosphere (a), Iris (b), Seeds (c), Glass (d), Segmentation (e), Dermatology (f), Breast (g), Natural (h), Yeast (i), and Waveform (j). The x-axis shows the iteration number; the y-axis shows the objective function value.
Table 1. Commonly used kernel functions.
Name | Kernel function
Linear kernel | $K(x, y) = \langle x, y \rangle$
Laplace radial basis function kernel | $K_{\mathrm{lap}}(x, y) = \exp(-\sigma \| x - y \|)$
Gaussian radial basis function kernel | $K_{\mathrm{gau}}(x, y) = \exp\left(-\frac{\| x - y \|^2}{2\sigma^2}\right)$
Polynomial kernel | $K_{\mathrm{pol}}(x, y) = (x \cdot y + \beta)^{\alpha}$
Sigmoid kernel | $K_{\tan}(x, y) = \tanh(\alpha\, x \cdot y + \beta)$
Table 2. The ten UCI datasets used.
Name | Code | Instances | Classes | Dimensions
Iris | R1 | 150 | 3 | 4
Seeds | R2 | 210 | 3 | 7
Segmentation | R3 | 210 | 7 | 19
Glass | R4 | 214 | 6 | 9
Ionosphere | R5 | 351 | 2 | 33
Dermatology | R6 | 358 | 6 | 34
Breast-cancer | R7 | 683 | 2 | 9
Natural | R8 | 2000 | 9 | 294
Yeast | R9 | 2426 | 3 | 24
Waveform | R10 | 5000 | 3 | 21
Table 3. Parameter σ selected for the kernel clustering methods.
Dataset | KPKM | KFCM | KKM
R1 | 1.08 | 1.22 | 0.9
R2 | 1.9 | 2 | 2
R3 | 510 | 530 | 540
R4 | 510 | 100 | 510
R5 | 1.5 | 1 | 1.3
R6 | 3.3 | 2 | 18
R7 | 12 | 10 | 15
R8 | 3 | 0.9 | 2.7
R9 | 15 | 10 | 25
R10 | 10 | 15 | 13
Table 4. Comparisons of KPKM with kernel fuzzy c-means (KFCM), kernel k-means (KKM), fuzzy c-means (FCM), and k-means (KM) on ten UCI datasets (DS) in terms of normalized mutual information (NMI), adjusted Rand index (ARI), and v-measure (VM).
DS | Measure | KPKM | KFCM | KKM | FCM | KM
R1 | NMI | 0.8146 | 0.7900 | 0.7820 | 0.6723 | 0.6733
R1 | ARI | 0.8119 | 0.8015 | 0.7590 | 0.5763 | 0.5779
R1 | VM | 0.8146 | 0.7900 | 0.7820 | 0.7081 | 0.7149
R2 | NMI | 0.7073 | 0.6949 | 0.7038 | 0.6949 | 0.7025
R2 | ARI | 0.7141 | 0.7166 | 0.7231 | 0.7166 | 0.7135
R2 | VM | 0.7073 | 0.6949 | 0.7038 | 0.6949 | 0.6999
R3 | NMI | 0.5555 | 0.5503 | 0.5154 | 0.4678 | 0.5132
R3 | ARI | 0.3909 | 0.4141 | 0.3429 | 0.3172 | 0.3313
R3 | VM | 0.5553 | 0.5503 | 0.5139 | 0.5729 | 0.5252
R4 | NMI | 0.4436 | 0.3594 | 0.3943 | 0.3489 | 0.4178
R4 | ARI | 0.2796 | 0.2137 | 0.2542 | 0.2126 | 0.2551
R4 | VM | 0.4424 | 0.3593 | 0.3934 | 0.3807 | 0.3857
R5 | NMI | 0.2476 | 0.2390 | 0.2715 | 0.1299 | 0.1349
R5 | ARI | 0.1747 | 0.1098 | 0.1657 | 0.1727 | 0.1777
R5 | VM | 0.2476 | 0.2390 | 0.2715 | 0.1298 | 0.1348
R6 | NMI | 0.2919 | 0.2068 | 0.2778 | 0.1046 | 0.1032
R6 | ARI | 0.1795 | 0.1388 | 0.1698 | 0.0261 | 0.0266
R6 | VM | 0.2919 | 0.2065 | 0.2776 | 0.1095 | 0.1056
R7 | NMI | 0.7903 | 0.7825 | 0.7741 | 0.7478 | 0.7478
R7 | ARI | 0.8796 | 0.8741 | 0.8674 | 0.8464 | 0.8464
R7 | VM | 0.7903 | 0.7825 | 0.7741 | 0.7478 | 0.7478
R8 | NMI | 0.0531 | 0.0326 | 0.0556 | 0.0521 | 0.0536
R8 | ARI | 0.0253 | 0.0303 | 0.0283 | 0.0273 | 0.0261
R8 | VM | 0.0525 | 0.0312 | 0.0550 | 0.0495 | 0.0529
R9 | NMI | 0.0052 | 0.0031 | 0.0050 | 0.0043 | 0.0050
R9 | ARI | 0.0118 | 0.0084 | 0.0118 | 0.0109 | 0.0117
R9 | VM | 0.0052 | 0.0031 | 0.0050 | 0.0045 | 0.0045
R10 | NMI | 0.3654 | 0.3162 | 0.3637 | 0.3606 | 0.3622
R10 | ARI | 0.2546 | 0.2377 | 0.2541 | 0.2529 | 0.2536
R10 | VM | 0.3654 | 0.3162 | 0.3637 | 0.3559 | 0.3622
Table 5. Comparisons of KPKM with KFCM and KKM on the MNIST dataset in terms of NMI, ARI, and VM.
Measure | KPKM | KFCM | KKM
NMI | 0.6830 | 0.5438 | 0.6613
ARI | 0.5685 | 0.4047 | 0.5256
VM | 0.6829 | 0.5411 | 0.6613
Table 6. Running time (s) comparison of FAGP and AGP.
DS | FAGP | AGP | η
Ionosphere | 0.719763 | 2.579700 | 27.90%
Iris | 0.397728 | 9.410401 | 4.22%
Seeds | 0.693613 | 8.650179 | 8.01%
Glass | 3.482856 | 22.644511 | 15.38%
Segmentation | 4.706316 | 19.125480 | 24.60%
Dermatology | 10.659099 | 50.796225 | 20.98%
Breast | 1.0153 | 6.1325 | 16.55%
Natural | 2309 | – | –
Yeast | 197.37 | – | –
Waveform | 213.64 | – | –
Table 7. Comparison of the numbers of iterations used by FAGP and AGP.
DS | FAGP | AGP
Ionosphere | 354 | 975
Iris | 357 | 6506
Seeds | 423 | 4061
Glass | 1123 | 5616
Segmentation | 1275 | 3986
Dermatology | 1768 | 6784
Breast | 682 | 694
Natural | 16,378 | –
Yeast | 5609 | –
Waveform | 9939 | –
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
