Fractional Derivative to Symmetrically Extend the Memory of Fuzzy C-Means
Abstract
1. Introduction
2. Related Works and Contributions
3. Preliminaries
3.1. Fuzzy C-Means
3.2. Fractional Calculus (FC)
- Grünwald–Letnikov Fractional Derivative
- Grünwald–Letnikov Fractional Integral
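The Grünwald–Letnikov definition lends itself directly to numerical evaluation, since it is a weighted sum of past function values. The sketch below truncates the series at a finite memory length; the step size, memory length, and function are illustrative choices of ours, not the paper's implementation:

```python
def gl_weights(alpha, K):
    # GL weights w_k = (-1)^k * C(alpha, k), via the standard
    # recurrence w_k = w_{k-1} * (1 - (alpha + 1) / k)
    w = [1.0]
    for k in range(1, K + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def gl_derivative(f, t, alpha, h=1e-3, K=2000):
    # Truncated Grünwald–Letnikov derivative of order alpha at t:
    # D^alpha f(t) ~ h^(-alpha) * sum_k w_k * f(t - k h)
    w = gl_weights(alpha, K)
    return sum(w[k] * f(t - k * h) for k in range(K + 1)) / h ** alpha
```

For integer orders the formula collapses to the classical cases: alpha = 1 reduces to a backward difference, and alpha = 0 returns the function value itself, which is a convenient sanity check.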
3.3. Genetic Algorithm
- Population initialization: The genetic algorithm starts by randomly initializing a fixed-size population of potential solutions, called chromosomes. Each chromosome is then evaluated by a fitness function to assess its effectiveness in solving the problem.
- Fitness function: Each solution is assessed with a fitness function, which measures how well the solution performs or meets the desired criteria. This function is essential for directing the evolutionary process towards optimal solutions: the selection process favors individuals with higher fitness values for reproduction.
- Selection: This involves choosing individuals from the current population for reproduction based on their fitness and a defined probability distribution (selection function), creating a mating pool for the next generation. Common selection methods include roulette wheel selection, where the likelihood of selecting an individual is proportional to its fitness, and tournament selection, where a random subset of individuals is drawn and the fittest individual within this subset is chosen for reproduction.
- Crossover (recombination): This merges genetic material from two parent individuals to create offspring, a process controlled by a crossover probability. In a single-point crossover, for example, the parents' chromosomes are cut at a random position and their tails are exchanged.
- Mutation: This introduces random changes to the offspring to preserve genetic diversity and prevent premature convergence, a process controlled by a mutation probability. For binary-encoded solutions, mutation typically flips a bit (changing a 0 to a 1, or vice versa). After crossover and mutation, the fitness of the offspring is evaluated using the fitness function. The population is then updated by replacing some or all individuals with the new offspring, ensuring that the population evolves towards better solutions. This iterative process continues until the convergence criteria are met.
- Convergence criteria: These define when the genetic algorithm is considered to have reached an optimal solution. Typically, convergence is declared when the maximum number of generations is reached or when a predefined fitness threshold is attained.
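The steps above can be sketched as a minimal binary-coded GA. This toy version uses roulette-wheel selection, single-point crossover, and bit-flip mutation as described; note that the paper's actual configuration (stochastic uniform selection, scattered crossover, Gaussian mutation, per the GA options table) differs, and every default parameter here is illustrative:

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, p_cross=0.8,
                      p_mut=0.02, max_gens=100, target=None):
    # Population initialization: random binary chromosomes
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(max_gens):
        scores = [fitness(c) for c in pop]
        total = float(sum(scores))

        def select():
            # Roulette-wheel selection: probability proportional to fitness
            if total <= 0:
                return random.choice(pop)
            r = random.uniform(0, total)
            acc = 0.0
            for c, s in zip(pop, scores):
                acc += s
                if acc >= r:
                    return c
            return pop[-1]

        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            if random.random() < p_cross:
                # Single-point crossover: swap tails at a random cut point
                cut = random.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):
                # Bit-flip mutation with per-gene probability p_mut
                for i in range(n_bits):
                    if random.random() < p_mut:
                        c[i] = 1 - c[i]
            children.extend([c1, c2])
        pop = children[:pop_size]
        pop[0] = best[:]  # elitism: keep the best individual found so far
        best = max(pop, key=fitness)
        # Convergence criteria: fitness threshold (max_gens bounds the loop)
        if target is not None and fitness(best) >= target:
            break
    return best
```

Running it on the classic one-max problem (`fitness=sum`) drives the population towards the all-ones chromosome, which makes the selection pressure easy to observe.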
4. Frac-FCM: The Proposed Fractional Fuzzy C-Means Model
4.1. Centers Tuning Rule
4.2. Memberships Tuning Rule
- Define the feasible set: the feasible set is determined by the constraint; in our case, it is the set of membership values that are non-negative and sum to one;
- Define the projection operator by taking the current membership values and projecting them onto the feasible set. Projecting to satisfy the constraint directly may not be straightforward; one way to accomplish this is to use a projection method that enforces the constraint indirectly. This projection ensures that the updated membership values for each data point i meet both the sum-to-one and non-negativity constraints, thereby maintaining the validity of the membership values throughout the optimization process.
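One standard way to realize such an operator is the Euclidean projection onto the probability simplex via the classical sorting-based algorithm; whether this matches the paper's exact projection formula is an assumption on our part, but it satisfies the same two constraints:

```python
import numpy as np

def project_to_simplex(v):
    # Euclidean projection of v onto {u : u >= 0, sum(u) = 1}
    # using the sorting-based algorithm (O(n log n))
    v = np.asarray(v, dtype=float)
    u = np.sort(v)[::-1]                # sort descending
    css = np.cumsum(u)
    # largest index rho with u_rho + (1 - css_rho) / (rho + 1) > 0
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)   # shift, then clip negatives
```

A vector already on the simplex is returned unchanged, while an infeasible one is moved to the nearest point that sums to one with non-negative entries.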
5. Optimized Fractional Fuzzy C-Means Method
5.1. GA-Frac-FCM: Mathematical Model
- Variables
- The derivative order.
- The gradient descent rates.
- m: The fuzziness.
- The number of clusters.
- Objective function
- Constraints
- Recap
5.2. GA-Frac-FCM: Algorithms
Algorithm 1: Pseudo Code for GA-Frac-FCM Optimization
Algorithm 2: Fitness Function
Input: Derivative order, gradient descent rates, fuzziness m, and number of clusters. Output: Fitness E. Begin; calculate the fitness function E corresponding to Equation (13); return E. End.
Algorithm 3: Frac-FCM Pseudo Code
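As a rough illustration of how Grünwald–Letnikov memory can enter a gradient-based update, and not the paper's exact Frac-FCM tuning rules, consider gradient descent on a 1-D quadratic where each step combines the last few gradients with GL weights, so past iterations retain a decaying influence; the memory length and learning rate are hypothetical:

```python
def gl_coeffs(alpha, K):
    # GL weights w_k = (-1)^k * C(alpha, k) via the standard recurrence
    w = [1.0]
    for k in range(1, K + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def fractional_gd(grad, x0, alpha=0.5, lr=0.1, memory=10, iters=300):
    # Gradient descent whose step is a GL-weighted combination of the
    # last `memory` gradients (illustrative fractional-memory scheme)
    x = float(x0)
    w = gl_coeffs(alpha, memory)
    hist = []
    for _ in range(iters):
        hist.insert(0, grad(x))          # newest gradient first
        hist = hist[: memory + 1]
        x -= lr * sum(wk * g for wk, g in zip(w, hist))
    return x
```

On J(x) = (x − 3)², the iterate still converges to the minimizer, only with a step size modulated by the memory weights; this is the qualitative effect the fractional derivative brings to the FCM tuning rules.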
6. Experimental Results and Discussion
6.1. Description of Datasets
6.2. Clustering Validity Measurement Indexes
- Fukuyama and Sugeno index (FS)
- Xie and Beni index (XB)
- Silhouette index
- The first quantity denotes the average distance from sample i to all other samples within the same cluster;
- The second represents the smallest average distance from sample i to any other cluster. The Silhouette coefficient ranges from −1 to 1, where:
- A value near −1 indicates that the sample is likely assigned to the wrong cluster;
- A value of 0 suggests overlapping clusters;
- A value of 1 indicates that the sample is in a well-separated, compact cluster.
- Dunn index
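Both the Silhouette and Dunn indexes can be computed directly from pairwise distances. A self-contained sketch in plain NumPy, assuming Euclidean distances and hard labels:

```python
import numpy as np

def silhouette_scores(X, labels):
    # s(i) = (b_i - a_i) / max(a_i, b_i): a_i is the mean intra-cluster
    # distance, b_i the smallest mean distance to another cluster
    X, labels = np.asarray(X, float), np.asarray(labels)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    n = len(X)
    s = np.zeros(n)
    for i in range(n):
        same = labels == labels[i]
        a = D[i, same & (np.arange(n) != i)].mean()
        b = min(D[i, labels == c].mean() for c in set(labels) - {labels[i]})
        s[i] = (b - a) / max(a, b)
    return s

def dunn_index(X, labels):
    # minimum inter-cluster distance / maximum cluster diameter
    X, labels = np.asarray(X, float), np.asarray(labels)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    cl = sorted(set(labels))
    diam = max(D[np.ix_(labels == c, labels == c)].max() for c in cl)
    sep = min(D[np.ix_(labels == a, labels == b)].min()
              for a in cl for b in cl if a < b)
    return sep / diam
```

On two tight, well-separated clusters the silhouette values approach 1 and the Dunn index is large, matching the interpretations listed above.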
6.3. Genetic Algorithm: Analyzing Performance and Results
- Best fitness: shows the best and mean fitness values across generations.
- Best individuals: monitors the fitness of the best individual in each generation.
- Distance average: displays the average distance between individuals, indicating population diversity.
- Genealogy: visualizes the relationships between parents and offspring.
- Score diversity: represents the diversity of fitness scores within the population.
- Score: shows the distribution of fitness scores across generations.
6.4. Comparison Study
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Dataset | Features | Samples | Domain Area |
---|---|---|---|
Abalone | 8 | 4177 | Marine Biology |
Balance | 4 | 625 | Cognitive Psychology |
Concrete Compressive Strength | 8 | 1030 | Physics, Chemistry |
Ecoli | 7 | 336 | Biology |
Haberman | 3 | 306 | Medical |
Iris | 4 | 150 | Botany |
Libra | 90 | 360 | Physics |
Liver | 6 | 345 | Medical |
Pageblock | 10 | 5473 | Document Processing |
Pima | 8 | 768 | Medical |
Seed | 7 | 210 | Agriculture |
Segment | 16 | 2310 | Image Processing |
Statlog (Australian Credit Approval) | 14 | 690 | Business |
Wine | 13 | 178 | Chemistry |
GA Option | Values |
---|---|
Initialization | Random uniform |
Population size | |
Selection function | Stochastic uniform |
Crossover function | Scattered crossover |
Crossover probability | |
Mutation function | Gaussian mutation |
Max generations | |
Constraint tolerance | 1 × 10⁻³
Function tolerance | 1 × 10⁻⁶
Nonlinear constraint algorithm | Augmented Lagrangian |
Data Set | |||||||
---|---|---|---|---|---|---|---|
Abalone | 0.8347 | 0.55588 | 0.46174 | 2.5917 | 22 | 1.8869 × 10⁻⁴ | 1.0014
Balance | 0.99012 | 0.58211 | 0.62506 | 4.9998 | 50 | 0.0071 | 0.0212
Concrete Compressive Strength | 0.95369 | 0.13018 | 0.88148 | 1.7576 | 181 | 0.32236 | 0.77474
Ecoli | 0.74087 | 0.81775 | 0.83244 | 4.7027 | 224 | 0.28282 | 0.73210
Haberman | 0.9508 | 0.86634 | 0.93594 | 2.8985 | 11 | 6.0193 × 10⁻⁵ | 1.0418
Iris | 0.991 | 0.92502 | 0.75972 | 3.7868 | 17 | 0.6414 | 0.94329
Libra | 0.3125 | 0.71263 | 0.36938 | 3.2398 | 17 | 1.1736 × 10⁻⁵ | 1.0134
Liver | 0.40721 | 0.45748 | 0.97339 | 4.6291 | 20 | 1.0059 | 1.8658 × 10⁻²
Pageblock | 0.97405 | 0.42475 | 0.61145 | 3.1203 | 5 | 3.1426 × 10⁻⁹ | 1.0224
Pima | 0.83677 | 0.13917 | 0.83677 | 2.3475 | 49 | 1.8048 × 10⁻⁴ | 1.1323
Seed | 0.98438 | 0.75347 | 0.56306 | 1.7567 | 185 | 0.11035 | 0.91225
Segment | 0.37799 | 0.061637 | 0.51809 | 4.2317 | 4 | 0.50740 | 0.55712
Statlog (Australian Credit Approval) | 0.016843 | 0.74695 | 0.57621 | 3.8614 | 3 | 2.6667 × 10⁻⁵ | 1.0002
Wine | 0.72374 | 0.78268 | 0.67563 | 1.677 | 2 | 2.4298 × 10⁻¹¹ | 1.0010
Dataset | K-Means Silhouette | K-Means Dunn | FCM Silhouette | FCM Dunn | SOM Silhouette | SOM Dunn | GMM Silhouette | GMM Dunn | Frac-FCM Silhouette | Frac-FCM Dunn
---|---|---|---|---|---|---|---|---|---|---
Abalone | 0.38636 ± 0.00346 | 0.01564 ± 0.00092 | 0.27144 ± 0.00190 | 0.01316 ± 0.00027 | 0.36265 ± 0.00053 | 0.01534 ± 0.00016 | 0.29536 ± 0.00393 | 0.01435 ± 0.00047 | 0.40710 ± 0.00016 | 0.14147 ± 0.00100 |
Balance | 0.28038 ± 0.00110 | 0.14224 ± 0.00030 | 0.25000 ± 0.00422 | 0.13687 ± 0.00066 | 0.28543 ± 0.00031 | 0.14282 ± 0.00008 | 0.26650 ± 0.00789 | 0.14253 ± 0.00028 | 0.34781 ± 0.02031 | 0.14678 ± 0.00102 |
Concrete Compressive Strength | 0.33747 ± 0.00268 | 0.06122 ± 0.00380 | 0.26990 ± 0.00153 | 0.03260 ± 0.00280 | 0.33902 ± 0.00019 | 0.04630 ± 0.00022 | 0.17075 ± 0.00820 | 0.04049 ± 0.00232 | 0.30151 ± 0.00711 | 0.05201 ± 0.00217 |
Ecoli | 0.32003 ± 0.00403 | 0.05432 ± 0.00235 | 0.14606 ± 0.00334 | 0.01594 ± 0.00024 | 0.33644 ± 0.00192 | 0.01486 ± 0.00035 | 0.10528 ± 0.00556 | 0.03721 ± 0.00192 | 0.34648 ± 0.00153 | 0.04458 ± 0.00349 |
Haberman | 0.44908 ± 0.00612 | 0.03038 ± 0.00247 | 0.30311 ± 0.00466 | 0.02026 ± 0.00126 | 0.43432 ± 0.00151 | 0.02743 ± 0.00200 | 0.07845 ± 0.00989 | 0.02253 ± 0.00057 | 0.49020 ± 0.00713 | 0.02893 ± 0.00318 |
Iris | 0.43004 ± 0.00613 | 0.05338 ± 0.00205 | 0.43494 ± 0.00433 | 0.06237 ± 0.00356 | 0.47320 ± 0.00515 | 0.05824 ± 0.00097 | 0.12628 ± 0.01339 | 0.04829 ± 0.00376 | 0.64413 ± 0.02630 | 0.05964 ± 0.00727 |
Libra | 0.27642 ± 0.00362 | 0.06838 ± 0.00228 | 0.10037 ± 0.01690 | 0.05884 ± 0.00363 | 0.34675 ± 0.00132 | 0.09306 ± 0.00349 | 0.24073 ± 0.00488 | 0.07470 ± 0.00354 | 0.40913 ± 0.01802 | 0.08658 ± 0.00969 |
Liver | 0.25858 ± 0.00389 | 0.04828 ± 0.00254 | 0.17583 ± 0.00254 | 0.02289 ± 0.00131 | 0.25605 ± 0.00396 | 0.02917 ± 0.00170 | 0.07958 ± 0.00851 | 0.04327 ± 0.00095 | 0.20920 ± 0.00570 | 0.05392 ± 0.00375 |
Pageblock | 0.41982 ± 0.01017 | 0.00488 ± 0.00020 | 0.12146 ± 0.00232 | 0.00148 ± 0.00005 | 0.35856 ± 0.00184 | 0.00438 ± 0.00053 | 0.02751 ± 0.00381 | 0.00192 ± 0.00019 | 0.25191 ± 0.03312 | 0.22770 ± 0.00144 |
Pima | 0.34238 ± 0.00250 | 0.02662 ± 0.00134 | 0.26679 ± 0.00230 | 0.01500 ± 0.00140 | 0.38023 ± 0.00161 | 0.01873 ± 0.00058 | −0.41161 ± 0.01702 | 0.00781 ± 0.00021 | 0.30758 ± 0.03119 | 0.04539 ± 0.00139 |
Segment | 0.14660 ± 0.00533 | 0.16413 ± 0.00707 | 0.13416 ± 0.00430 | 0.13784 ± 0.00535 | 0.14656 ± 0.00319 | 0.18663 ± 0.00564 | 0.05638 ± 0.00730 | 0.16114 ± 0.00502 | 0.22832 ± 0.00201 | 0.22770 ± 0.00144 |
Seeds | 0.37685 ± 0.00448 | 0.07439 ± 0.00332 | 0.27361 ± 0.00709 | 0.05824 ± 0.00184 | 0.40877 ± 0.00233 | 0.09903 ± 0.00292 | 0.20063 ± 0.00843 | 0.03649 ± 0.00133 | 0.52985 ± 0.03334 | 0.09142 ± 0.00356 |
Statlog | 0.22353 ± 0.01004 | 0.05315 ± 0.00436 | 0.09590 ± 0.01129 | 0.08007 ± 0.00206 | 0.18435 ± 0.00462 | 0.05221 ± 0.00030 | 0.15568 ± 0.00805 | 0.02747 ± 0.00277 | 0.84676 ± 0.00238 | 0.02747 ± 0.00277 |
Wine | 0.23868 ± 0.00787 | 0.17824 ± 0.00397 | 0.10090 ± 0.00524 | 0.14632 ± 0.00091 | 0.26251 ± 0.00272 | 0.19142 ± 0.00540 | 0.17767 ± 0.00855 | 0.16556 ± 0.00432 | 0.38808 ± 0.00607 | 0.17917 ± 0.00850 |
Yeast | 0.19260 ± 0.00114 | 0.05970 ± 0.00049 | 0.12671 ± 0.00436 | 0.04059 ± 0.00093 | 0.19699 ± 0.00055 | 0.03649 ± 0.00133 | 0.04848 ± 0.00699 | 0.04059 ± 0.00093 | 0.22892 ± 0.01705 | 0.06522 ± 0.00297 |
Average Rank | 2.4 | 2.53 | 4.266 | 4.2 | 2.133 | 2.6 | 4.666 | 3.6 | 1.533 | 2.07 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Safouan, S.; El Moutaouakil, K.; Patriciu, A.-M. Fractional Derivative to Symmetrically Extend the Memory of Fuzzy C-Means. Symmetry 2024, 16, 1353. https://doi.org/10.3390/sym16101353