Article

A Novel Semi-Supervised Fuzzy C-Means Clustering Algorithm Using Multiple Fuzzification Coefficients

Tran Dinh Khang, Manh-Kien Tran and Michael Fowler
1 Department of Information Systems, Hanoi University of Science and Technology, Hanoi 10000, Vietnam
2 Department of Chemical Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
* Author to whom correspondence should be addressed.
Algorithms 2021, 14(9), 258; https://doi.org/10.3390/a14090258
Submission received: 13 July 2021 / Revised: 19 August 2021 / Accepted: 27 August 2021 / Published: 29 August 2021

Abstract

Clustering is an unsupervised machine learning technique with many practical applications that has gathered extensive research interest. It divides data elements into clusters such that elements in the same cluster are similar, without any prior information about the labels of the elements. However, when some knowledge about the data points is available in advance, it can be beneficial to use a semi-supervised algorithm. Among the many clustering techniques available, fuzzy C-means clustering (FCM) is one of the most common. To turn FCM into a semi-supervised method, it was proposed in the literature to use an auxiliary matrix to adjust the membership grades of the elements and force them into certain clusters during the computation. In this study, instead of using an auxiliary matrix, we propose using multiple fuzzification coefficients to implement the semi-supervision component. After deriving the proposed semi-supervised fuzzy C-means clustering algorithm with multiple fuzzification coefficients (sSMC-FCM), we demonstrate the convergence of the algorithm and validate its efficiency through a numerical example.

1. Introduction

Data clustering is a method that divides the elements of a data set into clusters such that data elements in the same cluster have similar properties, while data elements in different clusters have different properties [1,2]. Data clustering is an important pre-processing step that produces initial knowledge and supports decision-making in subsequent processing steps. Among current clustering methods, the fuzzy C-means (FCM) algorithm gives relatively good results, taking advantage of the flexible nature of fuzzy logic [3]. In FCM, a data element can flexibly choose the clusters it belongs to, characterized by the membership grade of the element in each cluster [4]. Specifically, the membership grade $U_{ik}$ of element $i$ belonging to cluster $k$ has a value in the range from 0 to 1, and the larger the $U_{ik}$ value, the more likely element $i$ belongs to cluster $k$ [5,6].
Clustering belongs to the class of unsupervised learning methods, in which, unlike classification methods, no information about the labels of the data elements is given. However, there are also cases in which some knowledge about the data is known in advance; clustering then becomes semi-supervised fuzzy clustering. A semi-supervised clustering problem often has two goals: (i) clustering and labeling the data, and (ii) improving the quality of clustering based on the existing knowledge about the data [7,8,9,10,11]. The first goal is clustering itself, as defining cluster labels for the data remains the primary objective: even with semi-supervision, the structure of the clusters and the cluster centers must still be clearly distinguished. The second goal is to improve the clustering quality based on the existing knowledge; this quality can be evaluated from the clustering labels. Of the two objectives, if more attention goes toward cluster labeling, then more supervisory knowledge needs to be introduced, and the clustering results are expected to be better. The disadvantage of this approach is that certain measures of "similarity" may not improve, but this is acceptable when there are data points that are "shaped" by the knowledge; the ability to update the cluster centers is thereby also improved [12]. Some knowledge about the data can be considered relatively accurate: it may be known in advance, to a degree, whether a data element should or should not belong to a certain cluster. Such factors depend on the related expertise of the person conducting the clustering [13].
Yasunori et al. [14] proposed improving the FCM algorithm by adding an auxiliary supervisory matrix, representing the supervised membership grades, to the objective function. Although the authors did not outline a specific method to determine the values of the supervisory matrix, the study made it clear that semi-supervised fuzzy clustering is completely feasible. The authors called this algorithm semi-supervised standard fuzzy C-means clustering (sSFCM). The algorithm is mainly based on the approach that a supervised data point must belong to a certain cluster: its membership grade for that cluster must then be larger than its grades for the other, unsupervised clusters.
The FCM method uses an exponential parameter $m$ in the objective function, also known as the fuzzification coefficient, which adjusts the membership grade $U_{ik}$ of element $i$ belonging to cluster $k$. In the standard FCM algorithm, the value of the parameter $m$ is selected from the beginning, for example, $m = 2$. In recent years, studies have extended the selection of the parameter $m$ to an interval $[m_1, m_2]$ or to a fuzzy value that, after type reduction, essentially amounts to selecting different values $m \in [m_1, m_2]$ in each iteration [15]. The extension to multiple fuzzification coefficients $m$, instead of a single value, in FCM was presented in [16]: when each data point is assigned an appropriate fuzzification coefficient based on the density of that point relative to its surrounding points, the clustering quality improves.
This study builds on the concept of multiple fuzzification coefficients and applies it to semi-supervised fuzzy clustering. When placing a supervised element $i$ into cluster $k$, an appropriate fuzzification coefficient $m_{ik}$, different from the other fuzzification coefficients, can be selected. This increases the calculated membership grade $U_{ik}$, and the change in $U_{ik}^{m_{ik}}$ in turn affects the determination of the center of cluster $k$. Similarly, when preventing a supervised element $i$ from being in cluster $k$, its parameter $m_{ik}$ will differ from the other $m_{ij}$ values. Adjusting the fuzzification coefficients plays the same role as the auxiliary matrix in other semi-supervised clustering methods, and it must also be done so as to ensure the convergence of the clustering algorithm. The contribution of this study is the proposal of a novel semi-supervised FCM algorithm using multiple fuzzification coefficients (sSMC-FCM), as well as a method to determine the appropriate fuzzification coefficient values for semi-supervision.
Section 2 of the paper outlines the FCM algorithm and the semi-supervised fuzzy clustering algorithm using auxiliary matrices. Section 3 presents the novel sSMC-FCM algorithm. Section 4 shows a numerical example using the proposed method and the subsequent results, while Section 5 presents concluding remarks.

2. Preliminaries

2.1. Standard Fuzzy C-Means Clustering (FCM) Algorithm

The standard FCM algorithm attempts to divide a finite number of $N$ data elements $X = \{X_1, X_2, \ldots, X_N\}$ into $C$ clusters based on some given criteria. Each element $X_i \in X$, $i = 1, 2, \ldots, N$, is a vector with $D$ dimensions. The elements in $X$ are divided into $C$ clusters with cluster centers $V_1, V_2, \ldots, V_C$ in the centroid set $V$.
In the FCM algorithm, $U$ is a matrix that represents the membership of each element in each cluster. The matrix $U$ has the following characteristics:
  • $U_{ik}$ is the membership grade of an element $X_i$ in cluster $k$ with center $V_k$, where $1 \le i \le N$, $1 \le k \le C$;
  • $0 \le U_{ik} \le 1$, $1 \le i \le N$, $1 \le k \le C$, and $\sum_{j=1}^{C} U_{ij} = 1$ for each $X_i$;
  • The larger $U_{ik}$ is, the more element $X_i$ belongs to cluster $k$.
An objective function is defined such that the clustering algorithm must minimize the objective function (1):
$$J(U, V) = \sum_{i=1}^{N} \sum_{k=1}^{C} U_{ik}^{m} D_{ik}^2, \quad (1)$$
where $D_{ik}^2 = \|X_i - V_k\|^2$ is the squared distance between the two vectors $X_i$ and $V_k$, and $m$ is the fuzzification coefficient of the algorithm.
Summary of steps for the standard FCM algorithm:
Input: the dataset $X = \{X_1, X_2, \ldots, X_N\}$.
Output: the partition of $X$ into $C$ clusters.
  • Step 1: Initialize the value of $V$, let $l = 0$, and set $\varepsilon > 0$ and $m > 1$.
  • Step 2: At the $l$-th loop, update $U$ according to the formula:
$$U_{ik} = \left( \sum_{j=1}^{C} \left( \frac{D_{ik}}{D_{ij}} \right)^{\frac{2}{m-1}} \right)^{-1} \quad (2)$$
  • Step 3: Update $V$ for the next step $(l+1)$ according to the formula:
$$V_k = \frac{\sum_{i=1}^{N} U_{ik}^{m} X_i}{\sum_{i=1}^{N} U_{ik}^{m}} \quad (3)$$
  • Step 4: If $\|V^{(l)} - V^{(l+1)}\| < \varepsilon$, then go to Step 5; otherwise, let $l := l + 1$ and return to Step 2.
  • Step 5: End.
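For readers who prefer code, the following is a minimal sketch of these steps in Python with NumPy, assuming Euclidean distance and random center initialization; the function and variable names are ours, not the authors'.

```python
import numpy as np

def fcm(X, C, m=2.0, eps=1e-3, max_iter=100, seed=None):
    """Standard FCM sketch implementing Equations (1)-(3)."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    V = X[rng.choice(N, size=C, replace=False)]       # Step 1: initialize centers
    for _ in range(max_iter):
        # D_ik: distance of each element i to each center k
        dist = np.maximum(np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2), 1e-12)
        # Step 2, Equation (2): U_ik = ( sum_j (D_ik/D_ij)^(2/(m-1)) )^(-1)
        ratio = (dist[:, :, None] / dist[:, None, :]) ** (2.0 / (m - 1.0))
        U = 1.0 / ratio.sum(axis=2)
        # Step 3, Equation (3): V_k = sum_i U_ik^m X_i / sum_i U_ik^m
        Um = U ** m
        V_new = (Um.T @ X) / Um.sum(axis=0)[:, None]
        if np.linalg.norm(V_new - V) < eps:           # Step 4: convergence test
            return U, V_new
        V = V_new
    return U, V
```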

2.2. Semi-Supervised Standard Fuzzy C-Means Clustering (sSFCM) Algorithms

The semi-supervised fuzzy clustering method in [14] added a supervisory matrix $\bar{U}$ that represents the supervision forcing an element to belong, or not to belong, to a cluster: $\bar{U}_{ik} > 0$ when the supervised element $i$ is placed into cluster $k$, and $\bar{U}_{ik} = 0$ when there is no supervision. This additional supervised membership grade $\bar{U}_{ik}$ was added to the objective function to be minimized, as follows:
$$J(U, V) = \sum_{i=1}^{N} \sum_{k=1}^{C} |U_{ik} - \bar{U}_{ik}|^m D_{ik}^2, \quad (4)$$
where $\sum_{j=1}^{C} U_{ij} = 1$, $\sum_{j=1}^{C} \bar{U}_{ij} \le 1$, and $0 \le U_{ik} \le 1$ for all $1 \le i \le N$ and $1 \le k \le C$. The sSFCM algorithm, shown below, works to minimize $J(U, V)$ through many iterations.
Summary of steps for the sSFCM algorithm:
Input: the dataset $X = \{X_1, X_2, \ldots, X_N\}$ and the supervised membership grades $\bar{U}$.
Output: the partition of $X$ into $C$ clusters.
  • Step 1: Initialize the value of $V$, let $l = 0$, and set $\varepsilon > 0$ and $m > 1$.
  • Step 2: At the $l$-th loop, update $U$ according to the formula:
$$U_{ik} = \bar{U}_{ik} + \left( 1 - \sum_{j=1}^{C} \bar{U}_{ij} \right) \left( \sum_{j=1}^{C} \left( \frac{D_{ik}}{D_{ij}} \right)^{\frac{2}{m-1}} \right)^{-1} \quad (5)$$
  • Step 3: Update $V$ for the next step $(l+1)$ according to the formula:
$$V_k = \frac{\sum_{i=1}^{N} |U_{ik} - \bar{U}_{ik}|^m X_i}{\sum_{i=1}^{N} |U_{ik} - \bar{U}_{ik}|^m} \quad (6)$$
  • Step 4: If $\|V^{(l)} - V^{(l+1)}\| < \varepsilon$, then go to Step 5; otherwise, let $l := l + 1$ and return to Step 2.
  • Step 5: End.
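A single sSFCM iteration can be sketched as follows, extending the FCM sketch above; `U_bar` is the $N \times C$ supervisory matrix, and the names are again illustrative.

```python
import numpy as np

def ssfcm_step(X, V, U_bar, m=2.0):
    """One sSFCM iteration: Equation (5) for U, Equation (6) for V."""
    dist = np.maximum(np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2), 1e-12)
    # Unsupervised FCM part, as in Equation (2)
    ratio = (dist[:, :, None] / dist[:, None, :]) ** (2.0 / (m - 1.0))
    U_free = 1.0 / ratio.sum(axis=2)
    # Equation (5): supervised grade plus the rescaled unsupervised grade
    U = U_bar + (1.0 - U_bar.sum(axis=1, keepdims=True)) * U_free
    # Equation (6): centers weighted by |U - U_bar|^m
    W = np.abs(U - U_bar) ** m
    V_new = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, V_new
```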
A numerical example of the sSFCM algorithm [14]:
A data set with 20 elements $X = \{X_1, X_2, \ldots, X_{20}\}$, $X_i \in \mathbb{R}^2$, as seen in Table 1, is to be divided into two clusters. The sSFCM algorithm is implemented with $m = 2$ for three cases:
  • Case 1: unsupervised, $\bar{U}_{ik} = 0$ for all $1 \le i \le 20$, $1 \le k \le 2$ (standard FCM algorithm).
  • Case 2: semi-supervised, attempting to place data points 9 and 10 into cluster 1, $\bar{U}_{9,1} = \bar{U}_{10,1} = 0.3$; otherwise, $\bar{U}_{ik} = 0$.
  • Case 3: semi-supervised, attempting to place data points 9 and 10 into cluster 1, $\bar{U}_{9,1} = \bar{U}_{10,1} = 0.6$; otherwise, $\bar{U}_{ik} = 0$.
After clustering, we obtain the matrices $U$ for all three cases, as shown in Table 2.
The clustering results indicate that sample points 9 and 10 belong to cluster 2 in cases 1 and 2 and to cluster 1 in case 3, showing the impact of the semi-supervised component added to the sSFCM algorithm through the supervised membership grades $\bar{U}$. Although in case 2 the attempt to place points 9 and 10 into cluster 1 was not successful, the points were much closer to cluster 1 than in case 1.
In this study, instead of using the matrix $\bar{U}$, we propose the use of multiple fuzzification coefficients as the means to turn the originally unsupervised FCM algorithm into a semi-supervised one.

3. Semi-Supervised Fuzzy C-Means Clustering Algorithm with Multiple Fuzzification Coefficients (sSMC-FCM)

3.1. Derivation of the Proposed sSMC-FCM Algorithm

In this sub-section, we show the derivation of the proposed sSMC-FCM algorithm. Each membership grade $U_{ik}$ can receive its own fuzzification coefficient $m_{ik}$, and the supervision of element $i$ to belong to cluster $k$ is performed by adjusting $m_{ik}$. A formula for $m_{ik}$ can be given as follows, with exponential parameters $M, M' > 1$:
  • $m_{ij} = M$ for all $1 \le j \le C$, for unsupervised elements $i$;
  • $m_{ik} = M'$, and $m_{ij} = M$ for all $j \ne k$, for elements $i$ supervised to belong to cluster $k$.
The approach to determining the exponential parameters $M$ and $M'$ is discussed in Section 3.2.
The objective function to be minimized in the sSMC-FCM algorithm is formulated as follows:
$$J(U, V) = \sum_{i=1}^{N} \sum_{k=1}^{C} U_{ik}^{m_{ik}} D_{ik}^2, \quad (7)$$
where $X = \{X_1, X_2, \ldots, X_N\}$ is the dataset, $N$ is the number of elements, $C$ is the number of clusters, $U_{ik}$ is the membership grade of an element $X_i$ in cluster $k$ with center $V_k$, $0 \le U_{ik} \le 1$ for $1 \le i \le N$ and $1 \le k \le C$, $\sum_{j=1}^{C} U_{ij} = 1$, $\sum_{i=1}^{N} U_{ik} > 0$, $m_{ik} > 1$ is the fuzzification coefficient of $X_i$ in cluster $k$, and $D_{ik}^2 = \|X_i - V_k\|^2$ is the squared distance between the two vectors $X_i$ and $V_k$.
Let $Y \subseteq \{1, 2, \ldots, N\} \times \{1, 2, \ldots, C\}$ be the set of supervised elements; we then have:
$$m_{ij} = \begin{cases} M, & \nexists k : (i, k) \in Y \\ M', & (i, j) \in Y \\ M, & \exists k : (i, k) \in Y,\ j \ne k \end{cases} \quad (8)$$
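As a concrete illustration, Equation (8) can be written as a small helper; this is a sketch under our own naming, representing $Y$ as a collection of (element, cluster) pairs with at most one supervised cluster per element.

```python
def fuzzifier(i, j, Y, M=2.0, M_prime=4.0):
    """Equation (8): fuzzification coefficient m_ij for element i, cluster j."""
    target = dict(Y)                 # {element index: supervised cluster}
    if i not in target:
        return M                     # unsupervised element: m_ij = M
    return M_prime if target[i] == j else M   # m_ik = M', m_ij = M for j != k
```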
To solve the optimization problem shown in Equation (7), we utilize the Lagrange multiplier method. Let
$$L = \sum_{i=1}^{N} \sum_{k=1}^{C} U_{ik}^{m_{ik}} \|X_i - V_k\|^2 - \sum_{i=1}^{N} \lambda_i \left( \sum_{k=1}^{C} U_{ik} - 1 \right) \quad (9)$$
Moreover, with
$$\begin{cases} \dfrac{\partial L}{\partial U_{ik}} = 0, & 1 \le i \le N,\ 1 \le k \le C \\ \dfrac{\partial L}{\partial V_k} = 0, & 1 \le k \le C \end{cases}$$
we can obtain $V_k$ using
$$\frac{\partial L}{\partial V_k} = \frac{\partial \left[ \sum_{i=1}^{N} U_{ik}^{m_{ik}} (X_i - V_k)^2 \right]}{\partial V_k} = \sum_{i=1}^{N} U_{ik}^{m_{ik}} \cdot 2 (X_i - V_k)(-1) = 0,$$
hence, we have
$$\sum_{i=1}^{N} U_{ik}^{m_{ik}} X_i = \sum_{i=1}^{N} U_{ik}^{m_{ik}} V_k,$$
or
$$V_k = \frac{\sum_{i=1}^{N} U_{ik}^{m_{ik}} X_i}{\sum_{i=1}^{N} U_{ik}^{m_{ik}}} \quad (10)$$
Next, we calculate $U_{ik}$ using
$$\frac{\partial L}{\partial U_{ik}} = m_{ik} U_{ik}^{m_{ik}-1} D_{ik}^2 - \lambda_i = 0, \quad (11)$$
which implies the following:
$$U_{ik} = \left( \frac{\lambda_i}{m_{ik} D_{ik}^2} \right)^{\frac{1}{m_{ik}-1}} \quad (12)$$
Combining this with $\sum_{j=1}^{C} U_{ij} = 1$, we consider the two following cases:
Case 1:
For unsupervised elements $i$, $m_{ij} = M$ for all $1 \le j \le C$, and Equation (12) becomes:
$$U_{ik} = \left( \frac{\lambda_i}{M D_{ik}^2} \right)^{\frac{1}{M-1}} = \left( \frac{\lambda_i}{M} \right)^{\frac{1}{M-1}} \left( \frac{1}{D_{ik}^2} \right)^{\frac{1}{M-1}} = \left( \frac{\lambda_i}{M} \right)^{\frac{1}{M-1}} \left( \frac{1}{D_{ik}} \right)^{\frac{2}{M-1}} \quad (13)$$
Since $\sum_{j=1}^{C} U_{ij} = 1$, we have $\sum_{j=1}^{C} \left( \frac{\lambda_i}{M D_{ij}^2} \right)^{\frac{1}{M-1}} = 1$, or $\left( \frac{\lambda_i}{M} \right)^{\frac{1}{M-1}} \sum_{j=1}^{C} \left( \frac{1}{D_{ij}^2} \right)^{\frac{1}{M-1}} = 1$, or
$$\left( \frac{\lambda_i}{M} \right)^{\frac{1}{M-1}} = \frac{1}{\sum_{j=1}^{C} \left( \frac{1}{D_{ij}^2} \right)^{\frac{1}{M-1}}} = \frac{1}{\sum_{j=1}^{C} \left( \frac{1}{D_{ij}} \right)^{\frac{2}{M-1}}} \quad (14)$$
Replacing (14) into (13), we obtain
$$U_{ik} = \frac{1}{\sum_{j=1}^{C} \left( \frac{D_{ik}}{D_{ij}} \right)^{\frac{2}{M-1}}} = \left( \sum_{j=1}^{C} \left( \frac{D_{ik}}{D_{ij}} \right)^{\frac{2}{M-1}} \right)^{-1} \quad (15)$$
Case 2:
For elements $i$ supervised to belong to cluster $k$, $m_{ik} = M'$ and $m_{ij} = M$ for $j \ne k$, and Equation (11) becomes:
$$\begin{cases} M' U_{ik}^{M'-1} D_{ik}^2 = \lambda_i, & \text{and} \\ M U_{ij}^{M-1} D_{ij}^2 = \lambda_i, & j \ne k \end{cases}$$
or
$$M' U_{ik}^{M'-1} D_{ik}^2 = M U_{i1}^{M-1} D_{i1}^2 = \cdots = M U_{ij}^{M-1} D_{ij}^2 = \cdots = M U_{iC}^{M-1} D_{iC}^2$$
Combining this with $\sum_{j=1}^{C} U_{ij} = 1$, to calculate $U_{ij}$, we need to solve the following:
$$\begin{cases} M' U_{ik}^{M'-1} D_{ik}^2 = M U_{i1}^{M-1} D_{i1}^2 = \cdots = M U_{ij}^{M-1} D_{ij}^2 = \cdots = M U_{iC}^{M-1} D_{iC}^2, & j \ne k \\ \sum_{j=1}^{C} U_{ij} = 1 & \text{for all } i \end{cases} \quad (16)$$
The steps to solve Equation (16) are shown in Equations (17)–(20) below. To keep the presentation of the derivation seamless, we first give the calculation formulas; the proof that this procedure solves Equation (16) is presented in Section 3.2 as Proposition 1.
Specifically, we calculate $d_{min} = \min_{j=1,\ldots,C} \{D_{ij}\}$, then:
$$d_{ij} = \frac{D_{ij}}{d_{min}}, \quad j = 1, \ldots, C \quad (17)$$
$$\text{Calculate } \mu_{ij} = \left( \frac{1}{M d_{ij}^2} \right)^{\frac{1}{M-1}} \text{ for all } j \ne k \quad (18)$$
$$\text{Calculate } \mu_{ik}, \text{ the unknown in } \mu_{ik} \left( \mu_{ik} + \sum_{j=1, j \ne k}^{C} \mu_{ij} \right)^{\frac{M-M'}{M'-1}} = \left( \frac{1}{M' d_{ik}^2} \right)^{\frac{1}{M'-1}} \quad (19)$$
$$\text{Then normalize: } U_{ij} = \frac{\mu_{ij}}{\sum_{l=1}^{C} \mu_{il}} \quad (20)$$
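As an illustration, the supervised update of Equations (17)–(20) can be sketched as follows; Equation (19) is solved here by bisection, which is justified by the monotonicity argument given at the end of Section 3.2, and all names are ours.

```python
import numpy as np

def supervised_memberships(D_i, k, M=2.0, M_prime=4.0):
    """Membership grades of one supervised element i, per Equations (17)-(20).

    D_i: distances of element i to the C cluster centers; k: supervised cluster.
    """
    D_i = np.maximum(np.asarray(D_i, dtype=float), 1e-12)
    d = D_i / D_i.min()                               # Equation (17)
    mu = (1.0 / (M * d ** 2)) ** (1.0 / (M - 1.0))    # Equation (18), for j != k
    A = mu.sum() - mu[k]                              # sum of mu_ij over j != k
    rhs = (1.0 / (M_prime * d[k] ** 2)) ** (1.0 / (M_prime - 1.0))
    b = (M - M_prime) / (M_prime - 1.0)
    f = lambda x: x * (x + A) ** b                    # left-hand side of (19)
    lo, hi = 0.0, 1.0
    while f(hi) < rhs:                                # bracket the root
        hi *= 2.0
    for _ in range(60):                               # bisection on Equation (19)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < rhs else (lo, mid)
    mu[k] = 0.5 * (lo + hi)
    return mu / mu.sum()                              # Equation (20)
```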
From there, we have the sSMC-FCM algorithm as follows.
Summary of steps for the proposed sSMC-FCM algorithm:
Input: the dataset $X = \{X_1, \ldots, X_N\}$, the fuzzification coefficients $M > 1$ and $M' > M$, and the set of supervised elements $Y \subseteq \{1, 2, \ldots, N\} \times \{1, 2, \ldots, C\}$.
Output: the partition of X into C clusters.
  • Step 1: Initialize the value of $V$, let $l = 0$, and set $\varepsilon > 0$.
  • Step 2: At the $l$-th loop, update $U$ according to Equation (15) for unsupervised elements, or according to Equations (17)–(20) for supervised elements.
  • Step 3: Update $V$ for the next step $(l+1)$ according to Equation (10), with $m_{ik}$ calculated using Equation (8).
  • Step 4: If $\|V^{(l)} - V^{(l+1)}\| < \varepsilon$, then go to Step 5; otherwise, let $l := l + 1$ and return to Step 2.
  • Step 5: End.
The sSMC-FCM algorithm has similar steps to the FCM and sSFCM algorithms, but it uses Equations (15) and (17)–(20) in Step 2 and Equations (8) and (10) in Step 3.
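For concreteness, the overall loop can be sketched as follows, reusing the `supervised_memberships` helper from the sketch above; here $Y$ is represented as a dictionary mapping supervised element indices to their target clusters. This is an illustrative outline under our own naming, not the authors' implementation.

```python
import numpy as np

def ssmc_fcm(X, C, Y, M=2.0, M_prime=4.0, eps=1e-3, max_iter=100, seed=None):
    """sSMC-FCM sketch: Equations (15), (17)-(20) in Step 2; (8) and (10) in Step 3."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    V = X[rng.choice(N, size=C, replace=False)]       # Step 1
    m = np.full((N, C), M)                            # Equation (8)
    for i, k in Y.items():
        m[i, k] = M_prime
    for _ in range(max_iter):
        dist = np.maximum(np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2), 1e-12)
        U = np.empty((N, C))
        for i in range(N):
            if i in Y:                                # Equations (17)-(20)
                U[i] = supervised_memberships(dist[i], Y[i], M, M_prime)
            else:                                     # Equation (15)
                ratio = (dist[i][:, None] / dist[i][None, :]) ** (2.0 / (M - 1.0))
                U[i] = 1.0 / ratio.sum(axis=1)
        Um = U ** m                                   # Equation (10), m_ik from (8)
        V_new = (Um.T @ X) / Um.sum(axis=0)[:, None]
        if np.linalg.norm(V_new - V) < eps:           # Step 4
            break
        V = V_new
    return U, V
```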

3.2. Determination of the Fuzzification Coefficients for Supervised Elements

In the proposed sSMC-FCM algorithm, when an element $i$ is supervised, $U_{ij}$ is calculated according to Equations (17)–(20) in Step 2 of the algorithm. In this sub-section, we discuss these formulas in more detail, as well as how to determine the exponential parameter $M'$ for the supervised elements.
Proposition 1.
$U_{ij}$ calculated from Equations (17)–(20) satisfies Equation (16).
Proof of Proposition 1.
From Equation (20), taking the sum of $U_{ij}$, we can see that it satisfies the condition $\sum_{j=1}^{C} U_{ij} = 1$ in Equation (16).
Next, we calculate $M' U_{ik}^{M'-1} D_{ik}^2$ by successively substituting formulas (20), (19), (18), and (17), as follows:
$$M' U_{ik}^{M'-1} D_{ik}^2 = M' \left( \frac{\mu_{ik}}{\sum_{l=1}^{C} \mu_{il}} \right)^{M'-1} (d_{ik} d_{min})^2 = M' \left( \frac{\left( \frac{1}{M' d_{ik}^2} \right)^{\frac{1}{M'-1}} \left( \mu_{ik} + \sum_{j=1, j\ne k}^{C} \mu_{ij} \right)^{-\frac{M-M'}{M'-1}}}{\sum_{l=1}^{C} \mu_{il}} \right)^{M'-1} d_{ik}^2 d_{min}^2 = M' \cdot \frac{1}{M' d_{ik}^2} \cdot \frac{\left( \sum_{j=1}^{C} \mu_{ij} \right)^{M'-M}}{\left( \sum_{l=1}^{C} \mu_{il} \right)^{M'-1}} \, d_{ik}^2 d_{min}^2 = \frac{d_{min}^2}{\left( \sum_{l=1}^{C} \mu_{il} \right)^{M-1}} \quad (21)$$
Similarly, we calculate $M U_{ij}^{M-1} D_{ij}^2$ for all $j \ne k$:
$$M U_{ij}^{M-1} D_{ij}^2 = M \left( \frac{\mu_{ij}}{\sum_{l=1}^{C} \mu_{il}} \right)^{M-1} (d_{ij} d_{min})^2 = M \left( \frac{\left( \frac{1}{M d_{ij}^2} \right)^{\frac{1}{M-1}}}{\sum_{l=1}^{C} \mu_{il}} \right)^{M-1} d_{ij}^2 d_{min}^2 = \frac{d_{min}^2}{\left( \sum_{l=1}^{C} \mu_{il} \right)^{M-1}} \quad (22)$$
From (21) and (22), it can be seen that $M' U_{ik}^{M'-1} D_{ik}^2 = M U_{i1}^{M-1} D_{i1}^2 = \cdots = M U_{ij}^{M-1} D_{ij}^2 = \cdots = M U_{iC}^{M-1} D_{iC}^2$ for all $j \ne k$. Therefore, combining this with $\sum_{j=1}^{C} U_{ij} = 1$, Equation (16) is satisfied, and Proposition 1 is proven. □
We can hence apply Equations (17)–(20) in Step 2 of the proposed algorithm.
Next, we investigate how to determine the exponential parameter $M'$. We will utilize Proposition 2 after proving it.
Proposition 2.
The function $f(x) = \left( \frac{1}{ax} \right)^{\frac{1}{x-1}}$ with $a \ge 1$ is an increasing function for $x > 1$.
Proof of Proposition 2.
Consider the function $f(x) = \left( \frac{1}{ax} \right)^{\frac{1}{x-1}} = e^{\ln \left( \left( \frac{1}{ax} \right)^{\frac{1}{x-1}} \right)} = e^{\frac{1}{x-1} \ln \left( \frac{1}{ax} \right)}$, and calculate its derivative:
$$f'(x) = e^{\frac{1}{x-1} \ln\left(\frac{1}{ax}\right)} \cdot \left( \left( \frac{1}{x-1} \right)' \ln\left(\frac{1}{ax}\right) + \frac{1}{x-1} \left( \ln\left(\frac{1}{ax}\right) \right)' \right) = e^{\frac{1}{x-1} \ln\left(\frac{1}{ax}\right)} \cdot \left( \frac{-1}{(x-1)^2} \left( -\ln(ax) \right) + \frac{1}{x-1} \cdot \left( -\frac{1}{x} \right) \right) = e^{\frac{1}{x-1} \ln\left(\frac{1}{ax}\right)} \cdot \left( \frac{\ln(ax)}{(x-1)^2} - \frac{1}{x(x-1)} \right) = e^{\frac{1}{x-1} \ln\left(\frac{1}{ax}\right)} \cdot \frac{x \ln(ax) - x + 1}{x(x-1)^2}$$
For $x > 1$ and $a \ge 1$, in $f'(x)$ we have $e^{\frac{1}{x-1} \ln\left(\frac{1}{ax}\right)} > 0$, $x \ln(ax) - x + 1 \ge x \ln x - x + 1$, and $x(x-1)^2 > 0$. For $g(x) = x \ln x - x + 1$, we have $g'(x) = \ln x + 1 - 1 = \ln x > 0$ for all $x > 1$. Therefore, $g(x)$ is an increasing function on $[1, +\infty)$, and thus $g(x) > g(1) = 0$; hence, $x \ln(ax) - x + 1 \ge x \ln x - x + 1 > 0$. Overall, all the factors of $f'(x)$ are greater than 0. Therefore, $f'(x) > 0$, that is, $f(x)$ is an increasing function, for all $x > 1$. □
Applying Proposition 2 to the formula used to calculate $M'$ for the supervised elements, $f(M') = \left( \frac{1}{M' d_{ik}^2} \right)^{\frac{1}{M'-1}}$, where $d_{ik}^2 \ge 1$ and $M' > 1$, we see that this function is increasing. Therefore, as $M'$ increases, the membership grade $U_{ik}$ increases. Setting the supervision for an element corresponds to selecting a larger fuzzification coefficient $M'$ for that element compared with the other fuzzification coefficients.
Proposition 3 is then utilized to guide the determination of the parameter $M'$.
Proposition 3.
Let $U_{ik}$ be the membership grade of element $X_i$ in its semi-supervised placement into cluster $k$ according to Equations (17)–(20), with a given parameter $M$ and a to-be-determined parameter $M'$. Let $U'_{ik}$ be the membership grade of element $X_i$ in its unsupervised placement into cluster $k$ according to Equation (15). For $\alpha \in (0, 1)$, we have $U_{ik} \ge \alpha$ if the following inequality is satisfied:
$$M' \alpha^{M'-1} \le M \left( \frac{1-\alpha}{\frac{1}{U'_{ik}} - 1} \right)^{M-1} \quad (23)$$
Proof of Proposition 3.
From Equation (15), we have
$$U'_{ik} = \frac{1}{\sum_{j=1}^{C} \left( \frac{D_{ik}}{D_{ij}} \right)^{\frac{2}{M-1}}} = \frac{1}{\left( \frac{D_{ik}}{D_{ik}} \right)^{\frac{2}{M-1}} + \sum_{j=1, j\ne k}^{C} \left( \frac{D_{ik}}{D_{ij}} \right)^{\frac{2}{M-1}}} = \frac{1}{1 + \sum_{j=1, j\ne k}^{C} \left( \frac{D_{ik}}{D_{ij}} \right)^{\frac{2}{M-1}}}$$
From Equation (17), we have $\frac{d_{ik}}{d_{ij}} = \frac{D_{ik}}{D_{ij}}$; hence, $U'_{ik} = \frac{1}{1 + \sum_{j=1, j\ne k}^{C} \left( \frac{d_{ik}}{d_{ij}} \right)^{\frac{2}{M-1}}}$, so that $\frac{1}{U'_{ik}} - 1 = \sum_{j=1, j\ne k}^{C} \left( \frac{d_{ik}}{d_{ij}} \right)^{\frac{2}{M-1}}$.
Substituting into (23), we have the equivalent
$$M' \alpha^{M'-1} \le \frac{M (1-\alpha)^{M-1}}{\left( \sum_{j=1, j\ne k}^{C} \left( \frac{d_{ik}}{d_{ij}} \right)^{\frac{2}{M-1}} \right)^{M-1}} = \frac{(1-\alpha)^{M-1}}{d_{ik}^2 \left( \sum_{j=1, j\ne k}^{C} \left( \frac{1}{M d_{ij}^2} \right)^{\frac{1}{M-1}} \right)^{M-1}}$$
or,
$$\frac{1}{M' d_{ik}^2} \ge \frac{\alpha^{M'-1}}{(1-\alpha)^{M-1}} \left( \sum_{j=1, j\ne k}^{C} \left( \frac{1}{M d_{ij}^2} \right)^{\frac{1}{M-1}} \right)^{M-1}$$
Substituting $\mu_{ij}$ from Equation (18), we have $\frac{1}{M' d_{ik}^2} \ge \frac{\alpha^{M'-1}}{(1-\alpha)^{M-1}} \left( \sum_{j=1, j\ne k}^{C} \mu_{ij} \right)^{M-1}$.
Substituting $\frac{1}{M' d_{ik}^2}$ using Equation (19), we have
$$\mu_{ik}^{M'-1} \left( \mu_{ik} + \sum_{j=1, j\ne k}^{C} \mu_{ij} \right)^{M-M'} \ge \frac{\alpha^{M'-1}}{(1-\alpha)^{M-1}} \left( \sum_{j=1, j\ne k}^{C} \mu_{ij} \right)^{M-1} \quad (24)$$
Next, from Equation (20), we have $U_{ik} = \frac{\mu_{ik}}{\sum_{j=1}^{C} \mu_{ij}} = \frac{\mu_{ik}}{\mu_{ik} + \sum_{j=1, j\ne k}^{C} \mu_{ij}}$; hence, $\sum_{j=1, j\ne k}^{C} \mu_{ij} = \frac{\mu_{ik}}{U_{ik}} - \mu_{ik}$, which is substituted into Equation (24) to give
$$\mu_{ik}^{M'-1} \left( \frac{\mu_{ik}}{U_{ik}} \right)^{M-M'} \ge \frac{\alpha^{M'-1}}{(1-\alpha)^{M-1}} \left( \frac{\mu_{ik} (1-U_{ik})}{U_{ik}} \right)^{M-1}$$
Since $M' > M$, from the above we have $\mu_{ik}^{M-1} U_{ik}^{M'-M} \ge \frac{\alpha^{M'-1} \mu_{ik}^{M-1}}{(1-\alpha)^{M-1}} \left( \frac{1-U_{ik}}{U_{ik}} \right)^{M-1}$.
Since $\mu_{ik}$ is the solution of Equation (19), $\mu_{ik} > 0$, and the above can be simplified to $\frac{U_{ik}^{M'-M} U_{ik}^{M-1}}{(1-U_{ik})^{M-1}} \ge \frac{\alpha^{M'-1}}{(1-\alpha)^{M-1}}$, or
$$\frac{U_{ik}^{M'-1}}{(1-U_{ik})^{M-1}} \ge \frac{\alpha^{M'-1}}{(1-\alpha)^{M-1}} \quad (25)$$
Therefore, if Equation (23) is satisfied, then Equation (25) is also satisfied.
Consider the function $f(x) = \frac{x^{M'-1}}{(1-x)^{M-1}}$ with $x \in (0, 1)$; its derivative is
$$f'(x) = \frac{(M'-1) x^{M'-2} (1-x)^{M-1} - x^{M'-1} (M-1)(1-x)^{M-2} (-1)}{(1-x)^{2(M-1)}} = \frac{(1-x)^{M-2} x^{M'-2} \left( (M'-1)(1-x) + x(M-1) \right)}{(1-x)^{2(M-1)}} = \frac{x^{M'-2} \left( M' - M'x + Mx - 1 \right)}{(1-x)^{M}}$$
Since $M' > M > 1$, from the above derivative, with $x \in (0, 1)$, we have
$$M' - M'x + Mx - 1 > M' - M'x + Mx - M = M'(1-x) - M(1-x) = (M'-M)(1-x) > 0$$
Since $x^{M'-2} > 0$ and $(1-x)^{M} > 0$, we have $f'(x) > 0$; therefore, $f(x)$ is an increasing function. Equation (25) has the form $f(U_{ik}) \ge f(\alpha)$, so $U_{ik} \ge \alpha$. □
From Proposition 3, Equation (23) provides an approach to determine the parameter $M'$. The right-hand side of (23) is a value that can be calculated from $M$, $\alpha$, and the distances, while the left-hand side of (23) decreases as $M'$ increases. From this, we can start at $M' = M$ and then increase the value of $M'$, checking the condition each time, until Equation (23) is satisfied, to solve for $M'$.
Example: Given the data set from Table 1, element $X_9$ has the unsupervised membership grade $U'_{9,1} = 0.189$ with $M = 2$. If we want to apply supervision such that $U_{9,1} \ge 0.5$ to put $X_9$ into cluster 1, then from Equation (23) we need to determine $M'$ such that the condition $0.5^{M'-1} M' \le 0.233$ is satisfied. For $M' = 2$, $0.5^{M'-1} M' = 1$, which does not satisfy the condition. We therefore increase the value of $M'$ to decrease $0.5^{M'-1} M'$. After some iterations, we obtain $M' = 5.582$, with $0.5^{M'-1} M' = 0.233$, which satisfies the condition and effectively places $X_9$ into cluster 1.
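This search is straightforward to script. The following sketch reproduces the example's numbers under our reconstruction of Equation (23); the step size is an arbitrary illustrative choice.

```python
# Find M' for the example: M = 2, alpha = 0.5, unsupervised grade U'_91 = 0.189.
M, alpha, U_unsup = 2.0, 0.5, 0.189
rhs = M * ((1.0 - alpha) / (1.0 / U_unsup - 1.0)) ** (M - 1.0)  # ~0.233
M_prime = M
while M_prime * alpha ** (M_prime - 1.0) > rhs:   # left-hand side of (23)
    M_prime += 0.001                              # increase M' gradually
print(round(M_prime, 3))   # ~5.58, matching the paper's M' = 5.582 up to rounding
```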
A further point of discussion is that Step 2 of the sSMC-FCM algorithm requires solving Equation (19) to obtain $\mu_{ik}$. The right-hand side of Equation (19) has a value in the range $[0, 1]$. Setting $A = \sum_{j=1, j\ne k}^{C} \mu_{ij}$ and $b = \frac{M'-M}{M'-1}$, with $b < 1$, the left-hand side of this equation has the form $f(x) = \frac{x}{(x+A)^b}$, which has the derivative $f'(x) = \frac{(x+A)^b - x b (x+A)^{b-1}}{(x+A)^{2b}} = \frac{x + A - xb}{(x+A)^{b+1}} = \frac{(1-b)x + A}{(x+A)^{b+1}} > 0$ for $x > 0$. From this, we can see that the left-hand side of Equation (19) is an increasing function, while the right-hand side is a value in the range $[0, 1]$. Therefore, we can use an approximation method to obtain $\mu_{ik}$, starting from $\mu_{ik} = 0$ and then increasing gradually to determine the solution.
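A tiny demonstration of this monotone search, with illustrative values of $A$, $b$, and the right-hand side:

```python
# Increase x from 0 until f(x) = x / (x + A)**b reaches the right-hand side of
# Equation (19). A, b, and rhs below are illustrative values, e.g. M = 2, M' = 4.
A, b, rhs = 0.8, (4.0 - 2.0) / (4.0 - 1.0), 0.5   # b = (M' - M) / (M' - 1)
x, step = 0.0, 1e-4
while x / (x + A) ** b < rhs:
    x += step
print(x)   # approximate root mu_ik of Equation (19)
```

In practice, a bisection such as the one in the `supervised_memberships` sketch above converges faster.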

4. Numerical Examples

To evaluate the proposed algorithm, we use the same numerical example as in Section 2.2. A data set with 20 elements $X = \{X_1, X_2, \ldots, X_{20}\}$, $X_i \in \mathbb{R}^2$, as seen in Table 1, is to be divided into two clusters. The sSMC-FCM algorithm is implemented with $M = 2$ for the following three cases:
  • Case 1: unsupervised, $M' = M = 2$ (standard FCM algorithm).
  • Case 2: semi-supervised, attempting to place data points 9 and 10 into cluster 1, with $M' = 4$, $M = 2$.
  • Case 3: semi-supervised, attempting to place data points 9 and 10 into cluster 1, with $M' = 8$, $M = 2$.
The Euclidean distance was used in the calculations and results in this work. Table 3 shows the membership matrices $U$ for each case as the results of clustering.
From the results in Table 3, we have the following observations:
  • As $M'$ increases, the membership grades of the supervised elements increase. Initially, with no supervision, data points 9 and 10 were placed into cluster 2. With supervision and $M' = 4$, their membership grades increased, but not enough to move them into cluster 1, while with $M' = 8$, these two data points were successfully placed into cluster 1;
  • In general, the sSMC-FCM algorithm converges after a number of iterations similar to that of the standard FCM algorithm. For instance, with $\varepsilon = 10^{-3}$, it required about 10 iterations;
  • When $M'$ is changed, the cluster centers change to suit the supervised elements. For instance, case 1 has cluster centers $V_1 = (2.719, 5.000)$ and $V_2 = (9.018, 5.000)$; case 2 has cluster centers $V_1 = (2.735, 5.000)$ and $V_2 = (9.200, 5.000)$; and case 3 has cluster centers $V_1 = (2.714, 5.000)$ and $V_2 = (9.303, 5.000)$. We can observe that, from case 1 to case 2, the cluster center $V_2$ moves further away from data points 9 and 10;
  • Compared with the sSFCM algorithm, the matrices $U$ for the case $M' = 4$, $M = 2$ of the sSMC-FCM algorithm, shown in Table 3, are similar to the corresponding matrices $U$ for the case $\bar{U}_{9,1} = \bar{U}_{10,1} = 0.3$ of the sSFCM algorithm, shown in Table 2. Both cases increased the membership grades of the points toward cluster 1 but were not successful in moving the points into cluster 1;
  • In future work, it would be possible to explore other representations of the fuzzification coefficients, combining them with hedge algebras [17,18,19] in new representations and calculations.

5. Conclusions

In this study, we developed a novel clustering algorithm, called sSMC-FCM, based on the standard FCM algorithm, adding semi-supervision through the use of multiple fuzzification coefficients, also known as exponential parameters. In the sSMC-FCM algorithm, we allow the data elements to have different fuzzification coefficients, instead of the single coefficient used in the standard FCM method. The expansion from hard clustering to fuzzy clustering involved the introduction of fuzzification coefficients; hence, the determination of the fuzzification coefficient values is an interesting topic for further research. In this study, we derived the novel sSMC-FCM algorithm, proved three propositions to show the convergence of the algorithm and to explain how to determine the fuzzification coefficients, and demonstrated the efficiency of the algorithm using a numerical example. The proposed algorithm adds supervision to the normally unsupervised FCM clustering algorithm. This method can be applied to practical problems such as remote sensing image segmentation: an image region may be known with certainty to be of one type, such as a lake, yet image noise and the unsupervised approach may cause its attribute features to be perceived as another type, such as clouds, leading to undesirable results. With the proposed semi-supervised method, we should be able to place the image into the known correct type, as desired.

Author Contributions

Conceptualization, T.D.K.; methodology, T.D.K. and M.-K.T.; software, T.D.K.; validation, T.D.K., M.-K.T., and M.F.; formal analysis, T.D.K.; writing—original draft preparation, T.D.K. and M.-K.T.; writing—review and editing, M.F.; supervision, T.D.K. and M.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Vietnam National Foundation for Science and Technology Development (NAFOSTED) under Grant Number 102.05–2018.02.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Arora, J.; Khatter, K.; Tushir, M. Fuzzy c-Means Clustering Strategies: A Review of Distance Measures. Softw. Eng. 2018, 153–162. [Google Scholar]
  2. Everitt, B.S.; Landau, S.; Leese, M.; Stahl, D. Cluster Analysis, 5th ed.; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2011. [Google Scholar]
  3. Havens, T.C.; Bezdek, J.C.; Leckie, C.; Hall, L.O.; Palaniswami, M. Fuzzy c-Means Algorithms for Very Large Data. IEEE Trans. Fuzzy Syst. 2012, 20, 1130–1146. [Google Scholar] [CrossRef]
  4. Gosain, A.; Dahiya, S. Performance Analysis of Various Fuzzy Clustering Algorithms: A Review. Procedia Comput. Sci. 2016, 79, 100–111. [Google Scholar] [CrossRef] [Green Version]
  5. Ruspini, E.H.; Bezdek, J.C.; Keller, J.M. Fuzzy Clustering: A Historical Perspective. IEEE Comput. Intell. Mag. 2019, 14, 45–55. [Google Scholar] [CrossRef]
  6. Vendramin, L.; Campello, R.J.G.B.; Hruschka, E.R. Relative Clustering Validity Criteria: A Comparative Overview. Stat. Anal. Data Min. 2010, 3, 209–235. [Google Scholar] [CrossRef]
  7. Casalino, G.; Castellano, G.; Mencar, C. Data stream classification by dynamic incremental semi-supervised fuzzy clustering. Int. J. Artif. Intell. Tools 2019, 28, 1960009. [Google Scholar] [CrossRef]
  8. Gan, H.; Fan, Y.; Luo, Z.; Huang, R.; Yang, Z. Confidence-weighted safe semi-supervised clustering. Eng. Appl. Artif. Intell. 2019, 81, 107–116. [Google Scholar] [CrossRef]
  9. Mai, S.D.; Ngo, L.T. Multiple kernel approach to semi-supervised fuzzy clustering algorithm for land-cover classification. Eng. Appl. Artif. Intell. 2018, 68, 205–213. [Google Scholar] [CrossRef]
  10. Komori, O.; Eguchi, S. A Unified Formulation of k-Means, Fuzzy c-Means and Gaussian Mixture Model by the Kolmogorov–Nagumo Average. Entropy 2021, 23, 518. [Google Scholar] [CrossRef] [PubMed]
  11. Son, L.H.; Tuan, T.M. Dental segmentation from X-ray images using semi-supervised fuzzy clustering with spatial constraints. Eng. Appl. Artif. Intell. 2017, 59, 186–195. [Google Scholar] [CrossRef]
  12. Maraziotis, I.A. A semi-supervised fuzzy clustering algorithm applied to gene expression data. Pattern Recognit. 2012, 45, 637–648. [Google Scholar] [CrossRef]
  13. Śmieja, M.; Struski, Ł.; Figueiredo, M.A. A classification-based approach to semi-supervised clustering with pairwise constraints. Neural Netw. 2020, 127, 193–203. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Yasunori, E.; Yukihiro, H.; Makito, Y.; Yasunori, M.S. On semi-supervised fuzzy c-means clustering. In Proceedings of the IEEE International Conference on Fuzzy Systems, Jeju Island, Korea, 20–24 August 2009. [Google Scholar]
  15. Hwang, C.; Rhee, F.C.-H. Uncertain Fuzzy Clustering: Interval Type-2 Fuzzy Approach to C-Means. IEEE Trans. Fuzzy Syst. 2007, 15, 107–120. [Google Scholar] [CrossRef]
  16. Khang, T.D.; Vuong, N.D.; Tran, M.-K.; Fowler, M. Fuzzy C-Means Clustering Algorithm with Multiple Fuzzification Coefficients. Algorithms 2020, 13, 158. [Google Scholar] [CrossRef]
  17. Khang, T.D.; Phong, P.A.; Dong, D.K.; Trang, C.M. Hedge Algebraic Type-2 Fuzzy Sets. In Proceedings of the Conference: FUZZ-IEEE 2010, IEEE International Conference on Fuzzy Systems, Barcelona, Spain, 18–23 July 2010. [Google Scholar]
  18. Nguyen, C.H.; Tran, D.K.; Nam, H.V.; Nguyen, H.C. Hedge Algebras, Linguistic-Valued Logic and Their Application to Fuzzy Reasoning. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 1999, 7, 347–361. [Google Scholar] [CrossRef]
  19. Phong, P.A.; Khang, T.D.; Dong, D.K. A fuzzy rule-based classification system using Hedge Algebraic Type-2 Fuzzy Sets. In Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society (NAFIPS), El Paso, TX, USA, 31 October–4 November 2016; pp. 265–270. [Google Scholar]
Table 1. Sample data set.

i | $(X_{i1}, X_{i2})$ | i | $(X_{i1}, X_{i2})$ | i | $(X_{i1}, X_{i2})$ | i | $(X_{i1}, X_{i2})$
1 | (0, 4.5) | 6 | (3.5, 5.5) | 11 | (9, 0) | 16 | (10, 0)
2 | (0, 5.5) | 7 | (5.25, 4.5) | 12 | (9, 2.5) | 17 | (10, 2.5)
3 | (1.75, 4.5) | 8 | (5.25, 5.5) | 13 | (9, 5) | 18 | (10, 5)
4 | (1.75, 5.5) | 9 | (7, 4.5) | 14 | (9, 7.5) | 19 | (10, 7.5)
5 | (3.5, 4.5) | 10 | (7, 5.5) | 15 | (9, 10) | 20 | (10, 10)
Table 2. The resulting membership matrices $U$ for the three cases using the sSFCM algorithm.

i | Unsupervised | $\bar{U}_{9,1} = \bar{U}_{10,1} = 0.3$ | $\bar{U}_{9,1} = \bar{U}_{10,1} = 0.6$
1 | (0.91, 0.09) | (0.92, 0.08) | (0.92, 0.08)
2 | (0.91, 0.09) | (0.92, 0.08) | (0.92, 0.08)
3 | (0.98, 0.02) | (0.98, 0.02) | (0.98, 0.02)
4 | (0.98, 0.02) | (0.98, 0.02) | (0.98, 0.02)
5 | (0.97, 0.03) | (0.97, 0.03) | (0.97, 0.03)
6 | (0.97, 0.03) | (0.97, 0.03) | (0.97, 0.03)
7 | (0.68, 0.32) | (0.70, 0.30) | (0.72, 0.28)
8 | (0.68, 0.32) | (0.70, 0.30) | (0.72, 0.28)
9 | (0.19, 0.81) | (0.45, 0.55) | (0.69, 0.31)
10 | (0.19, 0.81) | (0.45, 0.55) | (0.69, 0.31)
11 | (0.28, 0.72) | (0.28, 0.72) | (0.28, 0.72)
12 | (0.12, 0.88) | (0.12, 0.88) | (0.12, 0.88)
13 | (0.00, 1.00) | (0.00, 1.00) | (0.00, 1.00)
14 | (0.12, 0.88) | (0.12, 0.88) | (0.12, 0.88)
15 | (0.28, 0.72) | (0.28, 0.72) | (0.28, 0.72)
16 | (0.25, 0.75) | (0.25, 0.75) | (0.25, 0.75)
17 | (0.11, 0.89) | (0.10, 0.90) | (0.10, 0.90)
18 | (0.02, 0.98) | (0.01, 0.99) | (0.01, 0.99)
19 | (0.11, 0.89) | (0.10, 0.90) | (0.10, 0.90)
20 | (0.25, 0.75) | (0.25, 0.75) | (0.25, 0.75)
Table 3. The resulting membership matrices $U$ for the three cases using the sSMC-FCM algorithm.

i | Unsupervised | $M' = 4$ | $M' = 8$
1 | (0.91, 0.09) | (0.92, 0.08) | (0.92, 0.08)
2 | (0.91, 0.09) | (0.92, 0.08) | (0.92, 0.08)
3 | (0.98, 0.02) | (0.98, 0.02) | (0.98, 0.02)
4 | (0.98, 0.02) | (0.98, 0.02) | (0.98, 0.02)
5 | (0.97, 0.03) | (0.98, 0.02) | (0.98, 0.02)
6 | (0.97, 0.03) | (0.98, 0.02) | (0.98, 0.02)
7 | (0.68, 0.32) | (0.71, 0.29) | (0.71, 0.29)
8 | (0.68, 0.32) | (0.71, 0.29) | (0.71, 0.29)
9 | (0.19, 0.81) | (0.43, 0.57) | (0.60, 0.40)
10 | (0.19, 0.81) | (0.43, 0.57) | (0.60, 0.40)
11 | (0.28, 0.72) | (0.28, 0.72) | (0.28, 0.72)
12 | (0.12, 0.88) | (0.12, 0.88) | (0.12, 0.88)
13 | (0.00, 1.00) | (0.00, 1.00) | (0.00, 1.00)
14 | (0.12, 0.88) | (0.12, 0.88) | (0.12, 0.88)
15 | (0.28, 0.72) | (0.28, 0.72) | (0.28, 0.72)
16 | (0.25, 0.75) | (0.25, 0.75) | (0.25, 0.75)
17 | (0.11, 0.89) | (0.11, 0.89) | (0.10, 0.90)
18 | (0.02, 0.98) | (0.01, 0.99) | (0.01, 0.99)
19 | (0.11, 0.89) | (0.11, 0.89) | (0.10, 0.90)
20 | (0.25, 0.75) | (0.25, 0.75) | (0.25, 0.75)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
