A Random PRIM Based Algorithm for Interpretable Classification and Advanced Subgroup Discovery
Abstract
1. Introduction
- The original PRIM searches only within the feature space chosen by the expert; in our work, the algorithm itself selects multiple random feature subspaces.
- Once the rules for each class label have been discovered, we introduce a novel pruning step based on metarules: association rules whose items are the generated rules themselves. As a result, the ruleset remains interpretable and no rule is removed from it outright (a minimal sketch of this idea follows the list).
- To build the classifier, the selection of the final rules is essential. We draw on the literature, in particular the original CART and CBA papers, to select the rules that form the final model. Because the algorithm keeps only the optimal box at each peeling step, the selection falls on the rules with the highest coverage, support, and confidence.
- We test our random PRIM-based classifier (R-PRIM-Cl) on seven well-known datasets to validate it and identify future challenges, and we compare the results with three well-established classifiers: random forest, logistic regression, and XGBoost. We evaluate performance with four metrics: accuracy, precision, recall, and F1-score.
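To make the metarule step concrete, the following is a minimal sketch (our own illustration, not the authors' implementation). It treats each generated rule as an item over the instances it covers and reports every metarule R_a => R_b that holds with 100% confidence, meaning every instance covered by R_a is also covered by R_b; such a rule can then be grouped under R_b rather than removed. The coverage matrix is hypothetical.

```python
import numpy as np

# Hypothetical coverage matrix: rows = instances, columns = rules R1..R3;
# cover[i, j] = 1 if rule j fires on instance i.
cover = np.array([
    [1, 1, 0],
    [1, 1, 0],
    [1, 0, 1],
    [0, 0, 1],
])

def metarule_confidence(cover, a, b):
    """Confidence of the metarule R_a => R_b: among the instances that
    rule a covers, the fraction that rule b also covers."""
    covered_a = cover[:, a] == 1
    if not covered_a.any():
        return 0.0
    return float((cover[covered_a, b] == 1).mean())

# Confidence 1.0 means R_a's coverage is contained in R_b's, so R_a is
# redundant given R_b and can be grouped under it instead of discarded.
for a in range(cover.shape[1]):
    for b in range(cover.shape[1]):
        if a != b and metarule_confidence(cover, a, b) == 1.0:
            print(f"R{a + 1} => R{b + 1}")  # prints: R2 => R1
```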
2. Related Works
2.1. The Patient Rule-Induction Method
2.1.1. Overview of PRIM
2.1.2. PRIM’s Evaluation Metrics
2.2. Metarules
2.3. Prior Works and Motivation
- PRIM requires too many interactions with the expert, because the expert must choose the feature search space.
- When the number of rules is large and the expert selects several search spaces, interpretability is lost, and with it explainability: rules that are not interpretable cannot be evaluated against domain knowledge.
- PRIM lacks a classifier that would turn the discovered rules into a predictive model for future predictions.
2.4. Selected Algorithms for the Comparison
3. Proposed Methodology
3.1. Random PRIM Based Classifier
1. Initialization step
   (a) A set of N data instances
   (b) A set of P categorical or numeric variables X = {x1, x2, …, xp}
   (c) A binary target variable Y ∈ {0, 1}
   (d) Minimum support, peeling, and pasting thresholds {s, α, β}
2. Procedure (a condensed sketch of steps (a)–(c) follows this outline)
   (a) Randomly choose the feature search subspaces
   (b) Run PRIM on each subspace, for each class label
   (c) Identify the boxes and compute their metrics
   (d) Apply metarules to find the associations between the rules
   (e) Run 10-fold cross-validation
   (f) Retain rules or metarules based on cross-validation and on the support, density, and coverage rates
   (g) Compute accuracy, recall, precision, and F1-score to validate the model
3. Output
   (a) The set of all boxes found, enabling the discovery of new subgroups
   (b) The set of boxes selected in the metarule pruning step
   (c) The final box measures: coverage, density, support, and dimension
   (d) The final model metrics: accuracy, precision, recall, and F1-score
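To make steps 2(a)–(c) concrete, here is a condensed sketch under simplifying assumptions: a basic top-down peeling loop stands in for PRIM, the pasting step (β) is omitted, and steps (d)–(g) are left out. The names prim_peel and random_prim_cl are ours, not part of any published implementation.

```python
import random
from itertools import combinations

import numpy as np
import pandas as pd

def prim_peel(X: pd.DataFrame, y: np.ndarray, features, target=1,
              alpha=0.05, min_support=0.10):
    """Greedy PRIM peeling on one feature subspace: repeatedly remove the
    alpha-fraction slice (lowest or highest values of one feature) whose
    removal most increases the density of `target` inside the box."""
    box = {f: [X[f].min(), X[f].max()] for f in features}
    inside = np.ones(len(X), dtype=bool)
    while inside.mean() > min_support:
        best = None
        for f in features:
            col = X[f].to_numpy()
            lo_cut = np.quantile(col[inside], alpha)
            hi_cut = np.quantile(col[inside], 1 - alpha)
            for side, cut, keep in (("lo", lo_cut, col >= lo_cut),
                                    ("hi", hi_cut, col <= hi_cut)):
                cand = inside & keep
                if not cand.any():
                    continue
                density = (y[cand] == target).mean()
                if best is None or density > best[0]:
                    best = (density, f, side, cut, cand)
        if best is None or best[0] <= (y[inside] == target).mean():
            break  # no peel improves the density: stop
        _, f, side, cut, inside = best
        box[f][0 if side == "lo" else 1] = cut  # shrink the box
    return box, inside

def random_prim_cl(X, y, n_spaces=5, seed=0):
    """Steps 2(a)-(c): draw random feature subspaces, then run PRIM on
    each one for each class label, collecting the resulting boxes."""
    rng = random.Random(seed)
    subsets = [list(c) for r in (2, 3) for c in combinations(X.columns, r)]
    spaces = rng.sample(subsets, min(n_spaces, len(subsets)))
    return [(feats, label, *prim_peel(X, y, feats, target=label))
            for feats in spaces for label in (0, 1)]
```

On the Iris example of Section 3.3, random_prim_cl would draw five of the ten possible two- or three-feature subspaces and return one box per subspace and class label.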
3.2. Construction and Validation of the Classifier
3.3. Illustrative Example
1. Initialization step
   (a) A set of 150 flowers from the Iris dataset
   (b) A set of four numeric variables X = {sepal length (cm), sepal width (cm), petal length (cm), petal width (cm)}
   (c) Y originally has three class labels, so we coded setosa as 1 and both versicolor and virginica as 0, hence Y ∈ {0, 1}
   (d) Minimum support, peeling, and pasting thresholds: s = 10%, α = 5%, β = 5%
2. Procedure
   (a) Random choice of the feature search subspaces by the algorithm:
      - [‘sepal length (cm)’, ‘sepal width (cm)’]
      - [‘sepal length (cm)’, ‘petal width (cm)’]
      - [‘sepal width (cm)’, ‘petal length (cm)’, ‘petal width (cm)’]
      - [‘sepal width (cm)’, ‘petal length (cm)’]
      - [‘petal length (cm)’, ‘petal width (cm)’]
   (b) Run PRIM on each subspace, for each class label
   (c) Identify the boxes and their metrics, as displayed in Table 1 (before metarule pruning and cross-validation)
Table 1. Boxes found before metarule pruning and cross-validation (Res. Dim = number of restricted dimensions).

Index | Box | Coverage | Density | Res. Dim | Mass |
---|---|---|---|---|---|
R1 | 4.3 < sepal length (cm) < 5.35 AND 2.95 < sepal width (cm) < 4.4 | 0.78 | 1.00 | 2 | 0.26 |
R2 | 4.3 < sepal length (cm) < 5.45 AND 2.15 < sepal width (cm) < 4.4 | 0.15 | 0.50 | 2 | 0.10 |
R3 | 3.349 < sepal width (cm) < 4.4 | 0.08 | 0.38 | 1 | 0.07 |
R4 | 4.3 < sepal length (cm) < 5.95 AND 0.1 < petal width (cm) < 0.8 | 1.00 | 1.00 | 2 | 0.33 |
R5 | 1.0 < petal length (cm) < 3.75 AND 0.1 < petal width (cm) < 0.8 | 1.00 | 1.00 | 2 | 0.33 |
R6 | 1.0 < petal length (cm) < 1.79 | 0.95 | 1.00 | 1 | 0.32 |
R7 | 3.25 < sepal width (cm) < 4.4 AND 1.0 < petal length (cm) < 6.05 | 0.05 | 0.25 | 2 | 0.07 |
R8 | 1.0 < petal length (cm) < 3.75 AND 0.1 < petal width (cm) < 0.8 | 1.00 | 1.00 | 2 | 0.33 |
R9 | 4.3 < sepal length (cm) < 7.35 AND 0.1 < petal width (cm) < 2.25 | 0.90 | 0.35 | 2 | 0.86 |
R10 | 3.05 < sepal width (cm) < 4.05 AND 1.35 < petal length (cm) < 5.85 | 0.45 | 0.49 | 2 | 0.31 |
R11 | 4.3 < sepal length (cm) < 7.80 AND 3.05 < sepal width (cm) < 4.05 AND 1.35 < petal length (cm) < 6.9 AND 0.1 < petal width (cm) < 2.25 | 0.40 | 0.52 | 4 | 0.26 |
R12 | 6.55 < sepal length (cm) < 7.80 AND 2.15 < petal width (cm) < 2.5 | 0.10 | 0.44 | 2 | 0.08 |
   (d) Apply metarules to prune the rules and detect overlap. No overlap was detected; the metarules detected at a density of 100% are:
      - R2 => R1
      - R3 => R7
      - R4, R8 => R5; R5, R8 => R4; R4, R5 => R8
      - R6 => R8
   (e) Run 10-fold cross-validation
   (f) Retain rules or metarules based on cross-validation and on the support, density, and coverage rates, as displayed in Table 2
   (g) Compute accuracy, recall, precision, and F1-score to validate the model: accuracy = 0.97, precision = 0.83, recall = 1.00, F1-score = 0.91
Table 2. Rules retained after metarule pruning and cross-validation.

Index | Box | Coverage | Density | Res. Dim | Mass |
---|---|---|---|---|---|
(R4, R5, R8) | 4.3 < sepal length (cm) < 5.95 AND 0.1 < petal width (cm) < 0.8 AND 1.0 < petal length (cm) < 3.75 | 1.00 | 1.00 | 3 | 0.33 |
R6 | 1.0 < petal length (cm) < 1.79 | 0.95 | 1.00 | 1 | 0.32 |
R1 | 4.3 < sepal length (cm) < 5.35 AND 2.95 < sepal width (cm) < 4.4 | 0.78 | 1.00 | 2 | 0.26 |
3. Output
   (a) The set of all boxes found, enabling the discovery of new subgroups: (R1, R2, R3, R4, R5, R6, R7, R8, R9, R10, R11, R12)
   (b) The set of rules selected in the metarule pruning step: (R1, R4, R5, R6, R8), with R4, R5, and R8 covering the same data and R6 included in R8
   (c) The final box measures: coverage, density, support, and dimension, e.g., R1 (coverage = 78%, density = 100%, support = 26%, dimension = 2)
   (d) The final model metrics: accuracy = 96.7%; precision = 95%; recall = 100%; F1-score = 97.4%

The resulting classifier for the setosa class therefore consists of three rules, which can be applied directly as a predictive model (see the sketch below):

- If (4.3 < sepal length (cm) < 5.95 AND 0.1 < petal width (cm) < 0.8 AND 1.0 < petal length (cm) < 3.75) THEN flower = ‘setosa’
- If (1.0 < petal length (cm) < 1.79) THEN flower = ‘setosa’
- If (4.3 < sepal length (cm) < 5.35 AND 2.95 < sepal width (cm) < 4.4) THEN flower = ‘setosa’
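Since the final model is simply the disjunction of the retained rules, it can be applied directly to new data. The sketch below (our illustration) reproduces this classifier on scikit-learn's copy of the Iris data; inclusive bounds are used for simplicity, so the computed figures may differ marginally from the reported ones.

```python
from sklearn.datasets import load_iris
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

iris = load_iris(as_frame=True)
X = iris.data
y = (iris.target == 0).astype(int)  # setosa = 1, others = 0

sl, sw = X["sepal length (cm)"], X["sepal width (cm)"]
pl, pw = X["petal length (cm)"], X["petal width (cm)"]

# The three retained rules; a flower is classified 'setosa' if any fires.
r458 = sl.between(4.3, 5.95) & pw.between(0.1, 0.8) & pl.between(1.0, 3.75)
r6 = pl.between(1.0, 1.79)
r1 = sl.between(4.3, 5.35) & sw.between(2.95, 4.4)
y_pred = (r458 | r6 | r1).astype(int)

print("Accuracy :", accuracy_score(y, y_pred))
print("Precision:", precision_score(y, y_pred))
print("Recall   :", recall_score(y, y_pred))
print("F1-score :", f1_score(y, y_pred))

# Box measures for R1: coverage (share of the target class captured),
# density (purity inside the box), and support/mass (share of all data).
print("coverage:", (y[r1] == 1).sum() / (y == 1).sum())
print("density :", y[r1].mean())
print("support :", r1.mean())
```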
3.4. Comparison Between R-PRIM-Cl and CART
4. Results
4.1. Empirical Setting
- Congressional Voting dataset (Vote)
- Mushroom dataset (Mush)
- Breast Cancer dataset (Cancer)
- SPECT heart dataset (Heart)
- Tic-Tac-Toe Endgame dataset (TicTac)
- Pima Diabetes dataset (Diabetes)
- German Credit Card dataset (Credit)
4.2. Results
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Dixon, M.F.; Halperin, I.; Bilokon, P. Machine Learning in Finance; Springer International Publishing: New York, NY, USA, 2020; Volume 1170.
- Ahmed, S.; Alshater, M.M.; El Ammari, A.; Hammami, H. Artificial intelligence and machine learning in finance: A bibliometric review. Res. Int. Bus. Financ. 2022, 61, 101646.
- Nayyar, A.; Gadhavi, L.; Zaman, N. Machine learning in healthcare: Review, opportunities and challenges. In Machine Learning and the Internet of Medical Things in Healthcare; Elsevier: Amsterdam, The Netherlands, 2021; pp. 23–45.
- An, Q.; Rahman, S.; Zhou, J.; Kang, J.J. A comprehensive review on machine learning in healthcare industry: Classification, restrictions, opportunities and challenges. Sensors 2023, 23, 4178.
- Gao, L.; Guan, L. Interpretability of machine learning: Recent advances and future prospects. IEEE MultiMedia 2023, 30, 105–118.
- Nassih, R.; Berrado, A. State of the art of Fairness, Interpretability and Explainability in Machine Learning: Case of PRIM. In Proceedings of the 13th International Conference on Intelligent Systems: Theories and Applications (SITA’20), Rabat, Morocco, 23–24 September 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 1–5.
- Capponi, A.; Lehalle, C.-A. (Eds.) Black-Box Model Risk in Finance. In Machine Learning and Data Sciences for Financial Markets: A Guide to Contemporary Practices; Cambridge University Press: Cambridge, UK, 2023; pp. 687–717.
- Imrie, F.; Davis, R.; van der Schaar, M. Multiple stakeholders drive diverse interpretability requirements for machine learning in healthcare. Nat. Mach. Intell. 2023, 5, 824–829.
- Friedman, J.H.; Fisher, N.I. Bump hunting in high-dimensional data. Stat. Comput. 1999, 9, 123–143.
- Wrobel, S. An algorithm for multi-relational discovery of subgroups. In Principles of Data Mining and Knowledge Discovery, Proceedings of the First European Symposium, PKDD 1997, Trondheim, Norway, 24–27 June 1997; Komorowski, H.J., Zytkow, J.M., Eds.; LNCS; Springer: Berlin/Heidelberg, Germany, 1997; Volume 1263, pp. 78–87.
- Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794.
- Breiman, L.; Friedman, J.H.; Olshen, R.A.; Stone, C.J. Classification and Regression Trees; Wadsworth International Group: Belmont, CA, USA, 1984.
- Molnar, C. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable, 2nd ed.; Amazon, Inc.: Bellevue, WA, USA, 2020.
- Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 2019, 1, 206–215.
- Herrera, F.; Carmona, C.J.; González, P.; del Jesus, M.J. An overview on subgroup discovery: Foundations and applications. Int. J. Comput. Intell. Syst. 2011, 12, 1602–1612.
- Atzmueller, M. Subgroup discovery. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2015, 5, 35–49.
- Berrado, A.; Runger, G.C. Using metarules to organize and group discovered association rules. Data Min. Knowl. Discov. 2007, 14, 409–431.
- Azmi, M.; Runger, G.C.; Berrado, A. Interpretable regularized class association rules algorithm for classification in a categorical data space. Inf. Sci. 2019, 483, 313–331.
- Maissae, H.; Abdelaziz, B. Forest-ORE: Mining Optimal Rule Ensemble to interpret Random Forest models. arXiv 2024, arXiv:2403.17588.
- Maissae, H.; Abdelaziz, B. A novel approach for discretizing continuous attributes based on tree ensemble and moment matching optimization. Int. J. Data Sci. Anal. 2022, 14, 45–63.
- Azmi, M.; Berrado, A. CARs-RP: Lasso-based class association rules pruning. Int. J. Bus. Intell. Data Min. 2021, 18, 197–217.
- Ghasemkhani, B.; Balbal, K.F.; Birant, D. A New Predictive Method for Classification Tasks in Machine Learning: Multi-Class Multi-Label Logistic Model Tree (MMLMT). Mathematics 2024, 12, 2825.
- Miltiadous, A.; Tzimourta, K.D.; Giannakeas, N.; Tsipouras, M.G.; Afrantou, T.; Ioannidis, P.; Tzallas, A.T. Alzheimer’s Disease and Frontotemporal Dementia: A Robust Classification Method of EEG Signals and a Comparison of Validation Methods. Diagnostics 2021, 11, 1437.
- Wang, Y.; Zhao, Y.; Therneau, T.M.; Atkinson, E.J.; Tafti, A.P.; Zhang, N.; Amin, S.; Limper, A.H.; Khosla, S.; Liu, H. Unsupervised machine learning for the discovery of latent disease clusters and patient subgroups using electronic health records. J. Biomed. Inform. 2020, 102, 103364.
- Rabbani, N.; Kim, G.Y.; Suarez, C.J.; Chen, J.H. Applications of machine learning in routine laboratory medicine: Current state and future directions. Clin. Biochem. 2022, 103, 1–7.
- Smets, J.; Shevroja, E.; Hügle, T.; Leslie, W.D.; Hans, D. Machine Learning Solutions for Osteoporosis—A Review. J. Bone Miner. Res. 2021, 36, 833–851.
- Nagpal, C.; Wei, D.; Vinzamuri, B.; Shekhar, M.; Berger, S.E.; Das, S.; Varshney, K.R. Interpretable subgroup discovery in treatment effect estimation with application to opioid prescribing guidelines. In Proceedings of the ACM Conference on Health, Inference, and Learning (CHIL’20), Toronto, ON, Canada, 2–4 April 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 19–29.
- Dazard, J.E.; Rao, J.S. Local Sparse Bump Hunting. J. Comput. Graph. Stat. 2010, 19, 900–929.
- Polonik, W.; Wang, Z. PRIM analysis. J. Multivar. Anal. 2010, 101, 525–540.
- Dyson, G. An application of the Patient Rule-Induction Method to detect clinically meaningful subgroups from failed phase III clinical trials. Int. J. Clin. Biostat. Biom. 2021, 7, 38.
- Yang, J.K.; Lee, D.H. Optimization of mean and standard deviation of multiple responses using patient rule induction method. Int. J. Data Warehous. Min. 2018, 14, 60–74.
- Lee, D.H.; Yang, J.K.; Kim, K.J. Multiresponse optimization of a multistage manufacturing process using a patient rule induction method. Qual. Reliab. Eng. Int. 2020, 36, 1982–2002.
- Kaveh, A.; Hamze-Ziabari, S.M.; Bakhshpoori, T. Soft computing-based slope stability assessment: A comparative study. Geomech. Eng. 2018, 14, 257–269.
- Nassih, R.; Berrado, A. Potential for PRIM based classification: A literature review. In Proceedings of the Third European International Conference on Industrial Engineering and Operations Management, Pilsen, Czech Republic, 23–26 July 2019; Volume 7.
- Nassih, R.; Berrado, A. Towards a patient rule induction method-based classifier. In Proceedings of the 2019 1st International Conference on Smart Systems and Data Science (ICSSD), Rabat, Morocco, 3–4 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–5.
- Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
- Biau, G.; Scornet, E. A random forest guided tour. TEST 2016, 25, 197–227.
- Qiu, Y.; Zhou, J.; Khandelwal, M.; Yang, H.; Yang, P.; Li, C. Performance evaluation of hybrid WOA-XGBoost, GWO-XGBoost and BO-XGBoost models to predict blast-induced ground vibration. Eng. Comput. 2022, 38 (Suppl. 5), 4145–4162.
- Cox, D.R. The Regression Analysis of Binary Sequences. J. R. Stat. Soc. Ser. B Stat. Methodol. 1958, 20, 215–242.
- Bellman, R.E. Adaptive Control Processes: A Guided Tour; Princeton University Press: Princeton, NJ, USA, 1961.
- Liu, B.; Hsu, W.; Ma, Y. Integrating Classification and Association Rule Mining. In Proceedings of the Fourth International Conference on Knowledge Discovery and Data Mining (KDD), New York, NY, USA, 27–31 August 1998; pp. 80–86.
- Blake, C.L.; Merz, C.J. UCI Repository of Machine Learning Databases; University of California, Department of Information and Computer Science: Irvine, CA, USA, 1998. Available online: https://archive.ics.uci.edu/datasets (accessed on 1 July 2024).
- Demšar, J.; Curk, T.; Erjavec, A.; Gorup, Č.; Hočevar, T.; Milutinović, M.; Možina, M.; Polajnar, M.; Toplak, M.; Starič, A.; et al. Orange: Data Mining Toolbox in Python. J. Mach. Learn. Res. 2013, 14, 2349–2353.
Table 3. Datasets used in the experiments.

Datasets | No. of Instances | No. of Attributes | Class Labels | Class Distribution |
---|---|---|---|---|
Vote | 232 | 16 | democrat: 0, republican: 1 | 142 / 90 |
Mush | 8124 | 22 | e (edible): 0, p (poisonous): 1 | 4208 / 3916 |
Cancer | 286 | 9 | no recurrence: 0, recurrence: 1 | 201 / 85 |
Heart | 267 | 22 | 0, 1 | 55 / 212 |
TicTac | 958 | 9 | negative: 1, positive: 0 | 332 / 626 |
Diabetes | 768 | 8 | yes: 1, no: 0 | 269 / 499 |
Credit | 1000 | 10 | bad: 1, good: 0 | 300 / 700 |
Table 4. Hyperparameter settings of the comparison algorithms.

Algorithms | Hyperparameters | Tuning |
---|---|---|
Random forest | n_estimators: number of trees in the forest | n_estimators = 100 |
 | max_depth: maximum depth of the trees | max_depth = None |
 | min_samples_split: minimum number of samples required to split an internal node | min_samples_split = 2 |
 | min_samples_leaf: minimum number of samples required at a leaf node | min_samples_leaf = 1 |
 | max_features: number of features considered when looking for the best split | max_features = ‘sqrt’ |
 | class_weight: class balancing for imbalanced datasets | class_weight = ‘balanced’ |
XGBoost | n_estimators: number of boosting rounds | n_estimators = 100 |
 | learning_rate: step-size shrinkage used to prevent overfitting | learning_rate = 0.1 |
 | max_depth: maximum depth of a tree | max_depth = 6 |
 | min_child_weight: minimum sum of instance weights needed in a child | min_child_weight = 1 |
 | subsample: fraction of samples used for training each tree | subsample = 0.8 |
 | colsample_bytree: fraction of features used for training each tree | colsample_bytree = 0.8 |
 | scale_pos_weight: balancing factor for imbalanced datasets | scale_pos_weight = 1 |
Logistic regression | penalty: regularization type (l1, l2, or elasticnet) | penalty = ‘l2’ |
 | C: inverse of regularization strength (smaller values imply stronger regularization) | C = 1.0 |
 | solver: optimization algorithm for training | solver = ‘lbfgs’ |
 | class_weight: handling of class imbalance | class_weight = ‘balanced’ |
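The settings above map directly onto the scikit-learn and xgboost APIs. The following is a sketch of the baseline configuration, assuming those packages are installed; the evaluate helper is our own, not part of the paper.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

# Baselines configured exactly as in Table 4.
models = {
    "RF": RandomForestClassifier(
        n_estimators=100, max_depth=None, min_samples_split=2,
        min_samples_leaf=1, max_features="sqrt", class_weight="balanced"),
    "XGB": XGBClassifier(
        n_estimators=100, learning_rate=0.1, max_depth=6,
        min_child_weight=1, subsample=0.8, colsample_bytree=0.8,
        scale_pos_weight=1),
    "LG": LogisticRegression(
        penalty="l2", C=1.0, solver="lbfgs", class_weight="balanced"),
}

def evaluate(X, y, cv=10, scoring="f1"):
    """Cross-validated score of each baseline on one prepared dataset,
    mirroring the 10-fold protocol used for R-PRIM-Cl."""
    return {name: cross_val_score(m, X, y, cv=cv, scoring=scoring).mean()
            for name, m in models.items()}
```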
Table 5. Recall and precision (%) of the compared classifiers.

Datasets | RF (Recall) | XGB (Recall) | LG (Recall) | R-PRIM-Cl (Recall) | RF (Precision) | XGB (Precision) | LG (Precision) | R-PRIM-Cl (Precision) |
---|---|---|---|---|---|---|---|---|
Vote | 94.33 | 94.33 | 93.33 | 95.32 | 98.67 | 99.33 | 98.60 | 98.74 |
Mush | 98.12 | 98.12 | 97.72 | 97.65 | 99.72 | 100.00 | 93.67 | 96.47 |
Cancer | 84.17 | 89.64 | 90.31 | 92.18 | 77.39 | 78.60 | 87.72 | 92.13 |
Heart | 92.01 | 94.37 | 78.36 | 93.57 | 87.30 | 86.57 | 91.25 | 90.14 |
TicTac | 97.35 | 98.12 | 96.58 | 98.06 | 98.79 | 97.88 | 98.10 | 96.80 |
Diabetes | 87.30 | 85.64 | 89.51 | 88.67 | 91.31 | 90.68 | 89.67 | 94.36 |
Credit | 89.64 | 82.45 | 91.35 | 90.03 | 88.60 | 85.34 | 91.78 | 95.24 |
Table 6. F1-score and accuracy (%) of the compared classifiers.

Datasets | RF (F1) | XGB (F1) | LG (F1) | R-PRIM-Cl (F1) | RF (Acc.) | XGB (Acc.) | LG (Acc.) | R-PRIM-Cl (Acc.) |
---|---|---|---|---|---|---|---|---|
Vote | 96.45 | 96.77 | 95.89 | 97.00 | 96.56 | 97.10 | 97.43 | 97.40 |
Mush | 98.91 | 99.05 | 95.65 | 97.06 | 100.00 | 100.00 | 98.63 | 97.54 |
Cancer | 80.64 | 83.76 | 89.00 | 92.15 | 71.38 | 71.78 | 87.63 | 90.04 |
Heart | 89.59 | 90.30 | 84.32 | 91.82 | 80.20 | 83.56 | 92.13 | 96.15 |
TicTac | 98.06 | 98.00 | 97.33 | 97.43 | 98.85 | 98.87 | 97.86 | 98.37 |
Diabetes | 89.26 | 88.09 | 89.59 | 91.43 | 84.65 | 84.37 | 89.61 | 87.98 |
Credit | 89.12 | 83.87 | 91.56 | 92.56 | 90.15 | 89.61 | 91.34 | 92.42 |