Mach. Learn. Knowl. Extr., Volume 6, Issue 1 (March 2024) – 33 articles

Cover Story: This study focuses on enhancing a system that detects six types of cyberbullying tweets. By employing multi-classification algorithms on a cyberbullying dataset, our approach achieved high accuracy, particularly with TF-IDF (bigram) feature extraction, and outperformed previous experiments on the same dataset. Two ensemble machine learning methods, using N-gram features with TF-IDF, demonstrated superior classification performance. Three popular multi-classification algorithms, Decision Trees, Random Forest, and XGBoost, were separately combined into two different ensemble methods, and these ensemble classifiers outperformed traditional machine learning classifier models. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
38 pages, 9513 KiB  
Review
Medical Image Classifications Using Convolutional Neural Networks: A Survey of Current Methods and Statistical Modeling of the Literature
by Foziya Ahmed Mohammed, Kula Kekeba Tune, Beakal Gizachew Assefa, Marti Jett and Seid Muhie
Mach. Learn. Knowl. Extr. 2024, 6(1), 699-735; https://doi.org/10.3390/make6010033 - 21 Mar 2024
Cited by 4 | Viewed by 8165
Abstract
In this review, we compiled convolutional neural network (CNN) methods that have the potential to automate the manual, costly and error-prone processing of medical images. We attempted to provide a thorough survey of improved architectures, popular frameworks, activation functions, ensemble techniques, hyperparameter optimizations, performance metrics, relevant datasets and data preprocessing strategies that can be used to design robust CNN models. We also used machine learning algorithms for the statistical modeling of the current literature to uncover latent topics, method gaps, prevalent themes and potential future advancements. The statistical modeling results indicate a temporal shift in favor of improved CNN designs, such as a shift from the use of a CNN architecture to a CNN-transformer hybrid. The insights from statistical modeling suggest that the surge of CNN practitioners into the medical imaging field, partly driven by the COVID-19 challenge, catalyzed the use of CNN methods for detecting and diagnosing pathological conditions. This phenomenon likely contributed to the sharp increase in the number of publications on the use of CNNs for medical imaging, both during and after the pandemic. Overall, the existing literature has certain gaps in scope with respect to the design and optimization of CNN architectures and methods specifically for medical imaging. Additionally, there is a lack of post hoc explainability of CNN models and slow progress in adopting CNNs for low-resource medical imaging. This review ends with a list of open research questions that have been identified through statistical modeling and recommendations that can potentially help set up more robust, improved and reproducible CNN experiments for medical imaging. Full article
(This article belongs to the Section Network)
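The review's machine-learning-based statistical modeling of the literature (uncovering latent topics and themes) can be illustrated with a topic model. Below is a minimal, hedged sketch using scikit-learn's latent Dirichlet allocation on a toy corpus of abstract-like strings; the corpus, vocabulary settings, and number of topics are illustrative assumptions, and the paper's actual modeling pipeline may differ.

```python
# Minimal sketch: uncovering latent topics in a corpus of paper abstracts.
# The corpus, vocabulary settings, and number of topics are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "convolutional neural networks for chest x-ray classification",
    "transformer hybrid architectures for tumor segmentation",
    "data augmentation and transfer learning for covid-19 detection",
]

# Bag-of-words representation of the abstracts
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)

# Fit an LDA model with a small, illustrative number of topics
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Print the top words per latent topic
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {k}: {', '.join(top)}")
```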
20 pages, 3326 KiB  
Article
Analyzing the Impact of Oncological Data at Different Time Points and Tumor Biomarkers on Artificial Intelligence Predictions for Five-Year Survival in Esophageal Cancer
by Leandra Lukomski, Juan Pisula, Naita Wirsik, Alexander Damanakis, Jin-On Jung, Karl Knipper, Rabi Datta, Wolfgang Schröder, Florian Gebauer, Thomas Schmidt, Alexander Quaas, Katarzyna Bozek, Christiane Bruns and Felix Popp
Mach. Learn. Knowl. Extr. 2024, 6(1), 679-698; https://doi.org/10.3390/make6010032 - 19 Mar 2024
Viewed by 2214
Abstract
AIM: In this study, we use Artificial Intelligence (AI), including Machine Learning (ML) and Deep Learning (DL), to predict the long-term survival of resectable esophageal cancer (EC) patients in a high-volume surgical center. Our objective is to evaluate the predictive efficacy of AI methods for survival prognosis across different time points of oncological treatment. This involves comparing models trained with clinical data, integrating either Tumor, Node, Metastasis (TNM) classification or tumor biomarker analysis, for long-term survival predictions. METHODS: In this retrospective study, 1002 patients diagnosed with EC between 1996 and 2021 were analyzed. The original dataset comprised 55 pre- and postoperative patient characteristics and 55 immunohistochemically evaluated biomarkers following surgical intervention. To predict the five-year survival status, four AI methods (Random Forest (RF), XGBoost (XG), Artificial Neural Network (ANN), and TabNet (TN)) and Logistic Regression (LR) were employed. The models were trained using three predefined subsets of the training dataset as follows: (I) the baseline dataset (BL) consisting of pre-, intra-, and postoperative data, including the TNM but excluding tumor biomarkers, (II) clinical data accessible at the time of the initial diagnostic workup (primary staging dataset, PS), and (III) the PS dataset including tumor biomarkers from tissue microarrays (PS + biomarkers), excluding TNM status. We used permutation feature importance for feature selection to identify only important variables for AI-driven reduced datasets and subsequent model retraining. RESULTS: Model training on the BL dataset demonstrated similar predictive performances for all models (Accuracy, ACC: 0.73/0.74/0.76/0.75/0.73; AUC: 0.78/0.82/0.83/0.80/0.79 RF/XG/ANN/TN/LR, respectively). The predictive performance and generalizability declined when the models were trained with the PS dataset. Surprisingly, the inclusion of biomarkers in the PS dataset for model training led to improved predictions (PS dataset vs. PS dataset + biomarkers; ACC: 0.70 vs. 0.77/0.73 vs. 0.79/0.71 vs. 0.75/0.69 vs. 0.72/0.63 vs. 0.66; AUC: 0.77 vs. 0.83/0.80 vs. 0.85/0.76 vs. 0.86/0.70 vs. 0.76/0.70 vs. 0.69 RF/XG/ANN/TN/LR, respectively). The AI models outperformed LR when trained with the PS datasets. The important features shared after AI-driven feature selection in all models trained with the BL dataset included histopathological lymph node status (pN), histopathological tumor size (pT), clinical tumor size (cT), age at the time of surgery, and postoperative tracheostomy. Following training with the PS dataset with biomarkers, the important predictive features included patient age at the time of surgery, TP-53 gene mutation, Mesothelin expression, thymidine phosphorylase (TYMP) expression, NANOG homeobox protein expression, and indoleamine 2,3-dioxygenase (IDO) expressed on tumor-infiltrating lymphocytes, as well as tumor-infiltrating mast and natural killer cells. CONCLUSION: Different AI methods similarly predict the long-term survival status of patients with EC and outperform LR, the state-of-the-art classification model. Survival status can be predicted with similar predictive performance with patient data at an early stage of treatment when utilizing additional biomarker analysis. This suggests that individual survival predictions can be made early in cancer treatment by utilizing biomarkers, reducing the necessity for the pathological TNM status post-surgery.
This study identifies important features for survival predictions that vary depending on the timing of oncological treatment. Full article
(This article belongs to the Section Learning)
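The abstract above describes permutation feature importance used to select variables before retraining the models on reduced datasets. The snippet below is a minimal sketch of that workflow with scikit-learn; the synthetic data and the importance threshold are illustrative assumptions, not the study's clinical data or cut-offs.

```python
# Minimal sketch of permutation-importance-driven feature selection and retraining.
# Data, threshold, and model choice are illustrative, not those used in the study.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: how much does shuffling each feature hurt the test score?
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)

# Keep only features whose mean importance is clearly positive (threshold is arbitrary)
keep = result.importances_mean > 0.005
print("retained feature indices:", np.flatnonzero(keep))

# Retrain on the reduced feature set
reduced = RandomForestClassifier(random_state=0).fit(X_tr[:, keep], y_tr)
print("accuracy on reduced feature set:", reduced.score(X_te[:, keep], y_te))
```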
21 pages, 1030 KiB  
Article
Why Do Tree Ensemble Approximators Not Outperform the Recursive-Rule eXtraction Algorithm?
by Soma Onishi, Masahiro Nishimura, Ryota Fujimura and Yoichi Hayashi
Mach. Learn. Knowl. Extr. 2024, 6(1), 658-678; https://doi.org/10.3390/make6010031 - 16 Mar 2024
Cited by 1 | Viewed by 1880
Abstract
Although machine learning models are widely used in critical domains, their complexity and poor interpretability remain problematic. Decision trees (DTs) and rule-based models are known for their interpretability, and numerous studies have investigated techniques for approximating tree ensembles using DTs or rule sets, even though these approximators often overlook interpretability. These methods generate three types of rule sets: DT based, unordered, and decision list based. However, very few metrics exist that can distinguish and compare these rule sets. Therefore, the present study proposes an interpretability metric to allow for comparisons of interpretability between different rule sets and investigates the interpretability of the rules generated by the tree ensemble approximators. We compare these rule sets with the Recursive-Rule eXtraction algorithm (Re-RX) with J48graft to offer insights into the interpretability gap. The results indicate that Re-RX with J48graft can handle categorical and numerical attributes separately, has simple rules, and achieves a high interpretability, even when the number of rules is large. RuleCOSI+, a state-of-the-art method, showed significantly lower results regarding interpretability, but had the smallest number of rules. Full article
(This article belongs to the Section Learning)
16 pages, 5296 KiB  
Article
Enhancing Docking Accuracy with PECAN2, a 3D Atomic Neural Network Trained without Co-Complex Crystal Structures
by Heesung Shim, Jonathan E. Allen and W. F. Drew Bennett
Mach. Learn. Knowl. Extr. 2024, 6(1), 642-657; https://doi.org/10.3390/make6010030 - 11 Mar 2024
Viewed by 2301
Abstract
Decades of drug development research have explored a vast chemical space for highly active compounds. The exponential growth of virtual libraries enables easy access to billions of synthesizable molecules. Computational modeling, particularly molecular docking, utilizes physics-based calculations to prioritize molecules for synthesis and testing. Nevertheless, the molecular docking process often yields docking poses with favorable scores that prove to be inaccurate with experimental testing. To address these issues, several approaches using machine learning (ML) have been proposed to filter incorrect poses based on the crystal structures. However, most of the methods are limited by the availability of structure data. Here, we propose a new pose classification approach, PECAN2 (Pose Classification with 3D Atomic Network 2), without the need for crystal structures, based on a 3D atomic neural network with Point Cloud Network (PCN). The new approach uses the correlation between docking scores and experimental data to assign labels, instead of relying on the crystal structures. We validate the proposed classifier on multiple datasets including human mu, delta, and kappa opioid receptors and SARS-CoV-2 Mpro. Our results demonstrate that leveraging the correlation between docking scores and experimental data alone enhances molecular docking performance by filtering out false positives and false negatives. Full article
23 pages, 3795 KiB  
Article
Classifying Breast Tumors in Digital Tomosynthesis by Combining Image Quality-Aware Features and Tumor Texture Descriptors
by Loay Hassan, Mohamed Abdel-Nasser, Adel Saleh and Domenec Puig
Mach. Learn. Knowl. Extr. 2024, 6(1), 619-641; https://doi.org/10.3390/make6010029 - 11 Mar 2024
Viewed by 1889
Abstract
Digital breast tomosynthesis (DBT) is a 3D breast cancer screening technique that can overcome the limitations of standard 2D digital mammography. However, DBT images often suffer from artifacts stemming from acquisition conditions, a limited angular range, and low radiation doses. These artifacts have the potential to degrade the performance of automated breast tumor classification tools. Notably, most existing automated breast tumor classification methods do not consider the effect of DBT image quality when designing the classification models. In contrast, this paper introduces a novel deep learning-based framework for classifying breast tumors in DBT images. This framework combines global image quality-aware features with tumor texture descriptors. The proposed approach employs a two-branch model: in the top branch, a deep convolutional neural network (CNN) model is trained to extract robust features from the region of interest that includes the tumor. In the bottom branch, a deep learning model named TomoQA is trained to extract global image quality-aware features from input DBT images. The quality-aware features and the tumor descriptors are then combined and fed into a fully-connected layer to classify breast tumors as benign or malignant. The unique advantage of this model is the combination of DBT image quality-aware features with tumor texture descriptors, which helps accurately classify breast tumors as benign or malignant. Experimental results on a publicly available DBT image dataset demonstrate that the proposed framework achieves superior breast tumor classification results, outperforming all existing deep learning-based methods. Full article
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
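The two-branch design described above (tumor-ROI features fused with global quality-aware features before a fully connected classifier) can be sketched in PyTorch. The backbones, feature sizes, and the stand-in quality branch below are simplified placeholders, not the paper's CNN or TomoQA architectures.

```python
# Minimal PyTorch sketch of a two-branch classifier fusing tumor-ROI features with
# global image-quality-aware features. All layer choices are illustrative placeholders.
import torch
import torch.nn as nn

class TwoBranchClassifier(nn.Module):
    def __init__(self, roi_feat_dim=64, quality_feat_dim=32, n_classes=2):
        super().__init__()
        # Top branch: small CNN over the tumor region of interest
        self.roi_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(16 * 4 * 4, roi_feat_dim), nn.ReLU(),
        )
        # Bottom branch: stand-in for a quality-aware feature extractor on the full image
        self.quality_branch = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(8 * 4 * 4, quality_feat_dim), nn.ReLU(),
        )
        # Fused features go through a fully connected head for benign/malignant output
        self.head = nn.Linear(roi_feat_dim + quality_feat_dim, n_classes)

    def forward(self, roi, full_image):
        fused = torch.cat([self.roi_branch(roi), self.quality_branch(full_image)], dim=1)
        return self.head(fused)

model = TwoBranchClassifier()
logits = model(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 256, 256))
print(logits.shape)  # torch.Size([2, 2])
```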
26 pages, 6113 KiB  
Article
Augmenting Deep Neural Networks with Symbolic Educational Knowledge: Towards Trustworthy and Interpretable AI for Education
by Danial Hooshyar, Roger Azevedo and Yeongwook Yang
Mach. Learn. Knowl. Extr. 2024, 6(1), 593-618; https://doi.org/10.3390/make6010028 - 10 Mar 2024
Cited by 4 | Viewed by 3116
Abstract
Artificial neural networks (ANNs) have proven to be among the most important artificial intelligence (AI) techniques in educational applications, providing adaptive educational services. However, their educational potential is limited in practice due to challenges such as the following: (i) the difficulties in incorporating symbolic educational knowledge (e.g., causal relationships and practitioners’ knowledge) in their development, (ii) a propensity to learn and reflect biases, and (iii) a lack of interpretability. As education is classified as a ‘high-risk’ domain under recent regulatory frameworks like the EU AI Act—highlighting its influence on individual futures and discrimination risks—integrating educational insights into ANNs is essential. This ensures that AI applications adhere to essential educational restrictions and provide interpretable predictions. This research introduces NSAI, a neural-symbolic AI approach that integrates neural networks with knowledge representation and symbolic reasoning. It injects and extracts educational knowledge into and from deep neural networks to model learners’ computational thinking, aiming to enhance personalized learning and develop computational thinking skills. Our findings revealed that the NSAI approach demonstrates better generalizability compared to deep neural networks trained on both original training data and data enriched by SMOTE and autoencoder methods. More importantly, we found that, unlike traditional deep neural networks, which mainly relied on spurious correlations in their predictions, the NSAI approach prioritizes the development of robust representations that accurately capture causal relationships between inputs and outputs. This focus significantly reduces the reinforcement of biases and prevents misleading correlations in the models. Furthermore, our research showed that the NSAI approach enables the extraction of rules from the trained network, facilitating interpretation and reasoning during the path to predictions, as well as refining the initial educational knowledge. These findings imply that neural-symbolic AI not only overcomes the limitations of ANNs in education but also holds broader potential for transforming educational practices and outcomes through trustworthy and interpretable applications. Full article
(This article belongs to the Topic Artificial Intelligence for Education)
13 pages, 2923 KiB  
Article
Representing Human Ethical Requirements in Hybrid Machine Learning Models: Technical Opportunities and Fundamental Challenges
by Stephen Fox and Vitor Fortes Rey
Mach. Learn. Knowl. Extr. 2024, 6(1), 580-592; https://doi.org/10.3390/make6010027 - 8 Mar 2024
Cited by 1 | Viewed by 2104
Abstract
Hybrid machine learning encompasses the predefinition of rules and ongoing learning from data. Human organizations can implement hybrid machine learning (HML) to automate some of their operations, but they need to ensure that their HML implementations are aligned with human ethical requirements as defined in laws, regulations, standards, etc. The purpose of the study reported here was to investigate technical opportunities for representing human ethical requirements in HML. The study sought to represent two types of human ethical requirements in HML: locally simple and locally complex. The locally simple case is road traffic regulations. This can be considered a relatively simple case because human ethical requirements for road safety, such as stopping at red traffic lights, are defined clearly and have limited scope for personal interpretation. The locally complex case is diagnosis procedures for functional disorders, which can include medically unexplained symptoms. This case can be considered locally complex because human ethical requirements for functional disorder healthcare are less well defined and are more subject to personal interpretation. Representations were made in a type of HML called Algebraic Machine Learning. Our findings indicate that there are technical opportunities to represent human ethical requirements in HML because of its combination of human-defined top-down rules and bottom-up data-driven learning. However, our findings also indicate that there are limitations to representing human ethical requirements, irrespective of what type of machine learning is used. These limitations arise from fundamental challenges in defining complex ethical requirements and from the potential for opposing interpretations of their implementation. Furthermore, locally simple ethical requirements can contribute to wider ethical complexity. Full article
(This article belongs to the Section Learning)
26 pages, 931 KiB  
Article
Classification, Regression, and Survival Rule Induction with Complex and M-of-N Elementary Conditions
by Cezary Maszczyk, Marek Sikora and Łukasz Wróbel
Mach. Learn. Knowl. Extr. 2024, 6(1), 554-579; https://doi.org/10.3390/make6010026 - 5 Mar 2024
Cited by 1 | Viewed by 2185
Abstract
Most rule induction algorithms generate rules with simple logical conditions based on equality or inequality relations. This feature limits their ability to discover complex dependencies that may exist in data. This article presents an extension to the sequential covering rule induction algorithm that allows it to generate complex and M-of-N conditions within the premises of rules. The proposed methodology uncovers complex patterns in data that are not adequately expressed by rules with simple conditions. The novel two-phase approach efficiently generates M-of-N conditions by analysing frequent sets in previously induced simple and complex rule conditions. The presented method allows rule induction for classification, regression and survival problems. Extensive experiments on various public datasets show that the proposed method often leads to more concise rulesets compared to those using only simple conditions. Importantly, the inclusion of complex conditions and M-of-N conditions has no statistically significant negative impact on the predictive ability of the ruleset. Experimental results and a ready-to-use implementation are available in the GitHub repository. The proposed algorithm can potentially serve as a valuable tool for knowledge discovery and facilitate the interpretation of rule-based models by making them more concise. Full article
(This article belongs to the Section Learning)
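An M-of-N elementary condition fires when at least M of the N listed simple conditions hold for an example, which is what allows the induced rules to stay concise. A minimal sketch follows, with attribute names and thresholds that are illustrative rather than taken from the paper's datasets.

```python
# Minimal sketch of evaluating an M-of-N rule condition: the premise fires when at
# least m of the listed elementary conditions are satisfied by an example.
def m_of_n(example, conditions, m):
    """conditions: predicates over an example dict; fires if at least m are satisfied."""
    return sum(cond(example) for cond in conditions) >= m

# Illustrative elementary conditions (not from the paper's data)
conditions = [
    lambda e: e["age"] > 50,
    lambda e: e["blood_pressure"] > 140,
    lambda e: e["cholesterol"] > 240,
]

patient = {"age": 62, "blood_pressure": 130, "cholesterol": 260}
# "2-of-3" premise: at least two of the three conditions must hold
print(m_of_n(patient, conditions, m=2))  # True: the age and cholesterol conditions hold
```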
48 pages, 47170 KiB  
Article
Refereeing the Sport of Squash with a Machine Learning System
by Enqi Ma and Zbigniew J. Kabala
Mach. Learn. Knowl. Extr. 2024, 6(1), 506-553; https://doi.org/10.3390/make6010025 - 5 Mar 2024
Cited by 2 | Viewed by 2943
Abstract
Squash is a sport where referee decisions are essential to the game. However, these decisions are very subjective in nature. Disputes, both from the players and the audience, regularly occur when the referee makes a controversial call. In this study, we propose automating the referee decision process through machine learning. We trained neural networks to predict such decisions using data from 400 referee decisions acquired through extensive video footage reviewing and labeling. Six positional values were extracted, including the attacking player’s position, the retreating player’s position, the ball’s position in the frame, the ball’s projected first bounce, the ball’s projected second bounce, and the attacking player’s racket head position. We calculated nine additional distance values, such as the distance between players and the distance from the attacking player’s racket head to the ball’s path. Models were trained in Wolfram Mathematica and Python using these values. The best Wolfram Mathematica model and the best Python model achieved accuracies of 86% ± 3.03% and 85.2% ± 5.1%, respectively. These accuracies surpass 85%, demonstrating near-human performance. Our model has great potential for improvement, as it is currently trained with limited, unbalanced data (400 decisions) and lacks crucial data points such as time and speed. The performance of our model is very likely to improve significantly with a larger training dataset. Unlike human referees, machine learning models follow a consistent standard, have unlimited attention spans, and make decisions instantly. If the accuracy is improved in the future, the model can potentially serve as an extra refereeing official for both professional and amateur squash matches. Both the analysis of referee decisions in squash and the proposal to automate the process using machine learning are unique to this study. Full article
(This article belongs to the Section Learning)
42 pages, 1068 KiB  
Systematic Review
Alzheimer’s Disease Detection Using Deep Learning on Neuroimaging: A Systematic Review
by Mohammed G. Alsubaie, Suhuai Luo and Kamran Shaukat
Mach. Learn. Knowl. Extr. 2024, 6(1), 464-505; https://doi.org/10.3390/make6010024 - 21 Feb 2024
Cited by 7 | Viewed by 14017
Abstract
Alzheimer’s disease (AD) is a pressing global issue, demanding effective diagnostic approaches. This systematic review surveys the recent literature (2018 onwards) to illuminate the current landscape of AD detection via deep learning. Focusing on neuroimaging, this study explores single- and multi-modality investigations, delving into biomarkers, features, and preprocessing techniques. Various deep models, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative models, are evaluated for their AD detection performance. Challenges such as limited datasets and training procedures persist. Emphasis is placed on the need to differentiate AD from similar brain patterns, necessitating discriminative feature representations. This review highlights deep learning’s potential and limitations in AD detection, underscoring dataset importance. Future directions involve benchmark platform development for streamlined comparisons. In conclusion, while deep learning holds promise for accurate AD detection, refining models and methods is crucial to tackle challenges and enhance diagnostic precision. Full article
(This article belongs to the Section Learning)
16 pages, 3807 KiB  
Article
VisFormers—Combining Vision and Transformers for Enhanced Complex Document Classification
by Subhayu Dutta, Subhrangshu Adhikary and Ashutosh Dhar Dwivedi
Mach. Learn. Knowl. Extr. 2024, 6(1), 448-463; https://doi.org/10.3390/make6010023 - 16 Feb 2024
Viewed by 2736
Abstract
Complex documents have text, figures, tables, and other elements. The classification of scanned copies of different categories of complex documents like memos, newspapers, letters, and more is essential for rapid digitization. However, this task is very challenging as most scanned complex documents look similar. This is because all documents have similar page and letter colors, similar paper textures, and very few contrasting features. Several state-of-the-art attempts have been made to classify complex documents; however, only a few of these works have addressed the classification of complex documents with similar features, and among these, performance leaves room for improvement. To overcome this, this paper presents a method that uses an optical character reader to extract the text. It proposes a multi-headed model to combine vision-based transfer learning and natural-language-based Transformers within the same network for simultaneous training for different inputs and optimizers in specific parts of the network. A subset of the Ryerson Vision Lab Complex Document Information Processing dataset containing 16 different document classes was used to evaluate performance. The proposed multi-headed VisFormers network classified the documents with up to 94.2% accuracy, while a regular natural-language-processing-based Transformer network achieved 83%, and vision-based VGG19 transfer learning could achieve only up to 90% accuracy. The model deployment can help sort the scanned copies of various documents into different categories. Full article
(This article belongs to the Section Visualization)
13 pages, 2749 KiB  
Article
High-Throughput Ensemble-Learning-Driven Band Gap Prediction of Double Perovskites Solar Cells Absorber
by Sabrina Djeradi, Tahar Dahame, Mohamed Abdelilah Fadla, Bachir Bentria, Mohammed Benali Kanoun and Souraya Goumri-Said
Mach. Learn. Knowl. Extr. 2024, 6(1), 435-447; https://doi.org/10.3390/make6010022 - 16 Feb 2024
Cited by 6 | Viewed by 2202
Abstract
Perovskite materials have attracted much attention in recent years due to their high performance, especially in the field of photovoltaics. However, the dark side of these materials is their poor stability, which poses a huge challenge to their practical applications. Double perovskite compounds, on the other hand, can show more stability as a result of their specific structure. One of the key properties of both perovskites and double perovskites is their tunable band gap, which can be determined using different techniques. Density functional theory (DFT), for instance, offers the potential to intelligently direct experimental investigation activities and predict various properties, including the band gap. In reality, however, it is still difficult to anticipate the energy band gap from first principles, and accurate results often require more expensive methods such as hybrid functionals or GW methods. In this paper, we present our development of high-throughput supervised ensemble-learning-based methods: random forest, XGBoost, and LightGBM, using a database of 1306 double perovskite materials to predict the energy band gap. Features were vectorized from chemical compositions based on elemental properties. Our findings demonstrate the efficiency of ensemble learning methods and imply that scientists would benefit from recently employed methods in materials informatics. Full article
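A minimal sketch of composition-based band-gap regression with an ensemble learner follows. Real workflows derive many elemental-property descriptors from each chemical formula (for example with dedicated featurization libraries); the two toy descriptors, the tiny dataset, and the band-gap values below are purely illustrative assumptions.

```python
# Minimal sketch: predict a band gap from composition-derived descriptors with a
# random forest. Descriptors, compositions, and target values are made up.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each row: [mean electronegativity, mean atomic radius] for a hypothetical composition
X = np.array([
    [2.1, 1.40], [1.9, 1.50], [2.4, 1.30], [1.7, 1.60],
    [2.2, 1.35], [2.0, 1.45], [2.3, 1.25], [1.8, 1.55],
])
y = np.array([1.8, 1.2, 2.5, 0.9, 2.0, 1.5, 2.3, 1.0])  # band gaps in eV (illustrative)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Predict the band gap of an unseen composition described by the same two descriptors
print("predicted band gap (eV):", model.predict([[2.15, 1.38]])[0])
```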
15 pages, 3282 KiB  
Article
Overcoming Therapeutic Inertia in Type 2 Diabetes: Exploring Machine Learning-Based Scenario Simulation for Improving Short-Term Glycemic Control
by Musacchio Nicoletta, Rita Zilich, Davide Masi, Fabio Baccetti, Besmir Nreu, Carlo Bruno Giorda, Giacomo Guaita, Lelio Morviducci, Marco Muselli, Alessandro Ozzello, Federico Pisani, Paola Ponzani, Antonio Rossi, Pierluigi Santin, Damiano Verda, Graziano Di Cianni and Riccardo Candido
Mach. Learn. Knowl. Extr. 2024, 6(1), 420-434; https://doi.org/10.3390/make6010021 - 14 Feb 2024
Cited by 1 | Viewed by 2577
Abstract
Background: International guidelines for diabetes care emphasize the urgency of promptly achieving and sustaining adequate glycemic control to reduce the occurrence of micro/macrovascular complications in patients with type 2 diabetes mellitus (T2DM). However, data from the Italian Association of Medical Diabetologists (AMD) Annals reveal that only 47% of T2DM patients reach appropriate glycemic targets, with approximately 30% relying on insulin therapy, either solely or in combination. This artificial intelligence analysis seeks to assess the potential impact of timely insulin initiation in all eligible patients via a “what-if” scenario simulation, leveraging real-world data. Methods: This retrospective cohort study utilized the AMD Annals database, comprising 1,186,247 T2DM patients from 2005 to 2019. Employing the Logic Learning Machine (LLM), we simulated timely insulin use for all eligible patients, estimating its effect on glycemic control after 12 months within a cohort of 85,239 patients. Of these, 20,015 were employed for the machine learning phase and 65,224 for simulation. Results: Within the simulated scenario, the introduction of appropriate insulin therapy led to a noteworthy projected 17% increase in patients meeting the metabolic target 12 months after therapy initiation within the cohort of 65,224 individuals. The LLM’s projection envisages 32,851 potential patients achieving the target (glycated hemoglobin < 7.5%) after 12 months, compared to 21,453 patients observed in real-world cases. The receiver operating characteristic (ROC) curve analysis for this model demonstrated modest performance, with an area under the curve (AUC) value of 70.4%. Conclusions: This study reaffirms the significance of combatting therapeutic inertia in managing T2DM patients. Early insulinization, when clinically appropriate, markedly improves the achievement of patients’ metabolic goals at the 12-month follow-up. Full article
18 pages, 4155 KiB  
Article
Machine Learning Predictive Analysis of Liquefaction Resistance for Sandy Soils Enhanced by Chemical Injection
by Yuxin Cong, Toshiyuki Motohashi, Koki Nakao and Shinya Inazumi
Mach. Learn. Knowl. Extr. 2024, 6(1), 402-419; https://doi.org/10.3390/make6010020 - 14 Feb 2024
Cited by 2 | Viewed by 1856
Abstract
The objective of this study was to investigate the liquefaction resistance of chemically improved sandy soils in a straightforward and accurate manner. Using only the existing experimental databases and artificial intelligence, the goal was to predict the experimental results as supporting information before performing the physical experiments. Emphasis was placed on the significance of data from 20 loading cycles of cyclic undrained triaxial tests to determine the liquefaction resistance and the contribution of each explanatory variable. Different combinations of explanatory variables were considered. Regarding the predictive model, it was observed that a case with the liquefaction resistance ratio as the dependent variable and other parameters as explanatory variables yielded favorable results. In terms of exploring combinations of explanatory variables, it was found advantageous to include all the variables, as doing so consistently resulted in a high coefficient of determination. The inclusion of the liquefaction resistance ratio in the training data was found to improve the predictive accuracy. In addition, the results obtained when using a linear model for the prediction suggested the potential to accurately predict the liquefaction resistance using historical data. Full article
(This article belongs to the Section Learning)
17 pages, 2020 KiB  
Perspective
Explicit Physics-Informed Deep Learning for Computer-Aided Diagnostic Tasks in Medical Imaging
by Shira Nemirovsky-Rotman and Eyal Bercovich
Mach. Learn. Knowl. Extr. 2024, 6(1), 385-401; https://doi.org/10.3390/make6010019 - 12 Feb 2024
Cited by 3 | Viewed by 2098
Abstract
DNN-based systems have demonstrated unprecedented performance in terms of accuracy and speed over the past decade. However, recent work has shown that such models may not be sufficiently robust during the inference process. Furthermore, due to the data-driven learning nature of DNNs, designing interpretable and generalizable networks is a major challenge, especially when considering critical applications such as medical computer-aided diagnostics (CAD) and other medical imaging tasks. Within this context, a line of approaches incorporating prior knowledge domain information into deep learning methods has recently emerged. In particular, many of these approaches utilize known physics-based forward imaging models, aimed at improving the stability and generalization ability of DNNs for medical imaging applications. In this paper, we review recent work focused on such physics-based or physics-prior-based learning for a variety of imaging modalities and medical applications. We discuss how the inclusion of such physics priors to the training process and/or network architecture supports their stability and generalization ability. Moreover, we propose a new physics-based approach, in which an explicit physics prior, which describes the relation between the input and output of the forward imaging model, is included as an additional input into the network architecture. Furthermore, we propose a tailored training process for this extended architecture, for which training data are generated with perturbed physical priors that are also integrated into the network. Within the scope of this approach, we offer a problem formulation for a regression task with a highly nonlinear forward model and highlight possible useful applications for this task. Finally, we briefly discuss future challenges for physics-informed deep learning in the context of medical imaging. Full article
(This article belongs to the Section Learning)
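The proposed approach feeds an explicit physics prior as an additional network input and trains with perturbed priors. The toy PyTorch sketch below illustrates only that idea with an assumed linear forward model; it is not the architecture, forward model, or task formulation proposed in the paper.

```python
# Toy sketch: the parameters of an assumed forward model (the "physics prior") are
# concatenated to the network input, and the prior is perturbed during training.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(8 + 2, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

def forward_model(x, prior):
    # Assumed physics: a simple linear forward model parameterized by the prior
    return (x * prior[:, :1]).sum(dim=1, keepdim=True) + prior[:, 1:]

for _ in range(200):
    x = torch.randn(64, 8)
    prior = torch.tensor([[0.5, 1.0]]).repeat(64, 1)
    prior_perturbed = prior + 0.05 * torch.randn_like(prior)  # perturbed physics prior
    target = forward_model(x, prior)
    prediction = net(torch.cat([x, prior_perturbed], dim=1))  # prior as an extra input
    loss = nn.functional.mse_loss(prediction, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final training loss:", loss.item())
```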
18 pages, 1057 KiB  
Article
Prompt Engineering or Fine-Tuning? A Case Study on Phishing Detection with Large Language Models
by Fouad Trad and Ali Chehab
Mach. Learn. Knowl. Extr. 2024, 6(1), 367-384; https://doi.org/10.3390/make6010018 - 6 Feb 2024
Cited by 11 | Viewed by 8622
Abstract
Large Language Models (LLMs) are reshaping the landscape of Machine Learning (ML) application development. The emergence of versatile LLMs capable of undertaking a wide array of tasks has reduced the necessity for intensive human involvement in training and maintaining ML models. Despite these advancements, a pivotal question emerges: can these generalized models negate the need for task-specific models? This study addresses this question by comparing the effectiveness of LLMs in detecting phishing URLs when utilized with prompt-engineering techniques versus when fine-tuned. Notably, we explore multiple prompt-engineering strategies for phishing URL detection and apply them to two chat models, GPT-3.5-turbo and Claude 2. In this context, the maximum result achieved was an F1-score of 92.74% by using a test set of 1000 samples. Following this, we fine-tune a range of base LLMs, including GPT-2, Bloom, Baby LLaMA, and DistilGPT-2—all primarily developed for text generation—exclusively for phishing URL detection. The fine-tuning approach culminated in a peak performance, achieving an F1-score of 97.29% and an AUC of 99.56% on the same test set, thereby outperforming existing state-of-the-art methods. These results highlight that while LLMs harnessed through prompt engineering can expedite application development processes, achieving a decent performance, they are not as effective as dedicated, task-specific LLMs. Full article
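The fine-tuning path described above can be sketched by adapting a small generative language model (DistilGPT-2) with a classification head to phishing-vs-benign URL labels via the Hugging Face transformers library. The two example URLs, their labels, and the single optimization step below are illustrative; the paper's actual training setup is not reproduced.

```python
# Minimal sketch: attach a 2-class classification head to DistilGPT-2 and take one
# fine-tuning step on a tiny, illustrative batch of URLs.
import torch
from transformers import GPT2TokenizerFast, GPT2ForSequenceClassification

tokenizer = GPT2TokenizerFast.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

model = GPT2ForSequenceClassification.from_pretrained("distilgpt2", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

urls = ["http://paypa1-secure-login.example.ru/verify", "https://www.wikipedia.org/"]
labels = torch.tensor([1, 0])  # 1 = phishing, 0 = benign (illustrative labels)

batch = tokenizer(urls, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)

# One gradient step of fine-tuning on the tiny batch
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
outputs.loss.backward()
optimizer.step()
print("loss:", outputs.loss.item(), "logits shape:", tuple(outputs.logits.shape))
```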
25 pages, 1150 KiB  
Article
More Capable, Less Benevolent: Trust Perceptions of AI Systems across Societal Contexts
by Ekaterina Novozhilova, Kate Mays, Sejin Paik and James E. Katz
Mach. Learn. Knowl. Extr. 2024, 6(1), 342-366; https://doi.org/10.3390/make6010017 - 5 Feb 2024
Cited by 4 | Viewed by 5201
Abstract
Modern AI applications have caused broad societal implications across key public domains. While previous research primarily focuses on individual user perspectives regarding AI systems, this study expands our understanding to encompass general public perceptions. Through a survey (N = 1506), we examined public trust across various tasks within education, healthcare, and creative arts domains. The results show that participants vary in their trust across domains. Notably, AI systems’ abilities were evaluated higher than their benevolence across all domains. Demographic traits had less influence on trust in AI abilities and benevolence compared to technology-related factors. Specifically, participants with greater technological competence, AI familiarity, and knowledge viewed AI as more capable in all domains. These participants also perceived greater systems’ benevolence in healthcare and creative arts but not in education. We discuss the importance of considering public trust and its determinants in AI adoption. Full article
(This article belongs to the Special Issue Fairness and Explanation for Trustworthy AI)
26 pages, 1391 KiB  
Article
SHapley Additive exPlanations (SHAP) for Efficient Feature Selection in Rolling Bearing Fault Diagnosis
by Mailson Ribeiro Santos, Affonso Guedes and Ignacio Sanchez-Gendriz
Mach. Learn. Knowl. Extr. 2024, 6(1), 316-341; https://doi.org/10.3390/make6010016 - 5 Feb 2024
Cited by 4 | Viewed by 3345
Abstract
This study introduces an efficient methodology for addressing fault detection, classification, and severity estimation in rolling element bearings. The methodology is structured into three sequential phases, each dedicated to generating distinct machine-learning-based models for the tasks of fault detection, classification, and severity estimation. To enhance the effectiveness of fault diagnosis, information acquired in one phase is leveraged in the subsequent phase. Additionally, in the pursuit of attaining models that are both compact and efficient, an explainable artificial intelligence (XAI) technique is incorporated to meticulously select optimal features for the machine learning (ML) models. The chosen ML technique for the tasks of fault detection, classification, and severity estimation is the support vector machine (SVM). To validate the approach, the widely recognized Case Western Reserve University benchmark is utilized. The results obtained emphasize the efficiency and efficacy of the proposal. Remarkably, even with a highly limited number of features, evaluation metrics consistently indicate an accuracy of over 90% in the majority of cases when employing this approach. Full article
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence (XAI): 2nd Edition)
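A minimal sketch of SHAP-guided feature selection for an SVM classifier, in the spirit of the methodology above: synthetic features stand in for the Case Western Reserve University vibration data, and keeping the top five features is an illustrative choice rather than the paper's selection rule.

```python
# Minimal sketch: rank features for an SVM by mean absolute SHAP value, keep the most
# influential ones, and retrain a compact model. Data and the top-k cut-off are toy choices.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=12, n_informative=4, random_state=0)
svm = SVC(kernel="rbf").fit(X, y)

# Kernel SHAP on the SVM decision function, using a small background sample
explainer = shap.KernelExplainer(svm.decision_function, X[:50])
shap_values = explainer.shap_values(X[:30], nsamples=100)

# Rank features by mean absolute SHAP value and keep the most influential ones
importance = np.abs(shap_values).mean(axis=0)
top = np.argsort(importance)[::-1][:5]
print("selected feature indices:", top)

# Retrain a compact SVM on the selected features only
svm_small = SVC(kernel="rbf").fit(X[:, top], y)
print("training accuracy with 5 features:", svm_small.score(X[:, top], y))
```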
33 pages, 3390 KiB  
Review
Distributed Learning in the IoT–Edge–Cloud Continuum
by Audris Arzovs, Janis Judvaitis, Krisjanis Nesenbergs and Leo Selavo
Mach. Learn. Knowl. Extr. 2024, 6(1), 283-315; https://doi.org/10.3390/make6010015 - 1 Feb 2024
Cited by 3 | Viewed by 2811
Abstract
The goal of the IoT–Edge–Cloud Continuum approach is to distribute computation and data loads across multiple types of devices, taking advantage of the different strengths of each, such as proximity to the data source, data access, or computing power, while mitigating potential weaknesses. Most machine learning operations are currently concentrated on remote high-performance computing devices, such as the cloud, which leads to challenges related to latency, privacy, and other inefficiencies. Distributed learning approaches can address these issues by enabling the distribution of machine learning operations throughout the IoT–Edge–Cloud Continuum by incorporating Edge and even IoT layers into machine learning operations more directly. Approaches like transfer learning could help to transfer the knowledge from more performant IoT–Edge–Cloud Continuum layers to more resource-constrained devices, e.g., IoT. The implementation of these methods in machine learning operations, including the related data handling security and privacy approaches, is challenging and actively being researched. In this article, the distributed learning and transfer learning domains are reviewed, focusing on security, robustness, and privacy aspects, as well as their potential usage in the IoT–Edge–Cloud Continuum, including research on tools for implementing these methods. To achieve this, we reviewed 145 sources, described the relevant methods and their attack vectors, and provided suggestions on mitigation. Full article
24 pages, 16062 KiB  
Article
Real-Time Droplet Detection for Agricultural Spraying Systems: A Deep Learning Approach
by Nhut Huynh and Kim-Doang Nguyen
Mach. Learn. Knowl. Extr. 2024, 6(1), 259-282; https://doi.org/10.3390/make6010014 - 26 Jan 2024
Cited by 3 | Viewed by 2456
Abstract
Nozzles are ubiquitous in agriculture: they are used to spray and apply nutrients and pesticides to crops. The properties of droplets sprayed from nozzles are vital factors that determine the effectiveness of the spray. Droplet size and other characteristics affect spray retention and drift, which indicates how much of the spray adheres to the crop and how much becomes chemical runoff that pollutes the environment. There is a critical need to measure these droplet properties to improve the performance of crop spraying systems. This paper establishes a deep learning methodology to detect droplets moving across a camera frame to measure their size. This framework is compatible with embedded systems that have limited onboard resources and can operate in real time. The method leverages a combination of techniques including resizing, normalization, pruning, detection head, unified feature map extraction via a feature pyramid network, non-maximum suppression, and optimization-based training. The approach is designed with the capability of detecting droplets of various sizes, shapes, and orientations. The experimental results demonstrate that the model designed in this study, coupled with the right combination of dataset and augmentation, achieved a 97% precision and 96.8% recall in droplet detection. The proposed methodology outperformed previous models, marking a significant advancement in droplet detection for precision agriculture applications. Full article
(This article belongs to the Section Learning)
26 pages, 2654 KiB  
Article
A Text-Based Predictive Maintenance Approach for Facility Management Requests Utilizing Association Rule Mining and Large Language Models
by Maximilian Lowin
Mach. Learn. Knowl. Extr. 2024, 6(1), 233-258; https://doi.org/10.3390/make6010013 - 26 Jan 2024
Cited by 5 | Viewed by 2286
Abstract
Introduction: Due to the lack of labeled data, applying predictive maintenance algorithms for facility management is cumbersome. Most companies are unwilling to share data or do not have time for annotation. In addition, most available facility management data are text data. Thus, there is a need for an unsupervised predictive maintenance algorithm that is capable of handling textual data. Methodology: This paper proposes applying association rule mining on maintenance requests to identify upcoming needs in facility management. By coupling temporal association rule mining with the concept of semantic similarity derived from large language models, the proposed methodology can discover meaningful knowledge in the form of rules suitable for decision-making. Results: Relying on the large German language models works best for the presented case study. Introducing a temporal lift filter allows for reducing the created rules to the most important ones. Conclusions: Only a few maintenance requests are sufficient to mine association rules that show links between different infrastructural failures. Due to the unsupervised manner of the proposed algorithm, domain experts need to evaluate the relevance of the specific rules. Nevertheless, the algorithm enables companies to efficiently utilize their data stored in databases to create interpretable rules supporting decision-making. Full article
(This article belongs to the Section Data)
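A minimal sketch of the two ingredients described above: sentence-embedding similarity between maintenance request texts (here via the sentence-transformers library) and frequent-itemset mining over per-site failure "transactions" (here via mlxtend). The request texts, grouping, and transactions are illustrative, and the paper's temporal handling and lift filter are omitted.

```python
# Minimal sketch: semantic similarity of request texts plus frequent-itemset mining
# and a manually computed lift for one candidate rule. All data are toy examples.
import pandas as pd
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori

requests = ["heating failure in room 101", "radiator not working", "water leak in basement"]
encoder = SentenceTransformer("all-MiniLM-L6-v2")
sims = cosine_similarity(encoder.encode(requests))
print("similarity heating/radiator:", round(float(sims[0, 1]), 2))  # semantically close

# Toy transactions: failure categories reported per building and month
transactions = [["heating", "leak"], ["heating", "leak"], ["heating"], ["leak", "elevator"]]
te = TransactionEncoder()
df = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)

itemsets = apriori(df, min_support=0.5, use_colnames=True)
print(itemsets)

# Lift of the candidate rule {heating} -> {leak}, computed from column supports
support_both = (df["heating"] & df["leak"]).mean()
lift = support_both / (df["heating"].mean() * df["leak"].mean())
print("lift(heating -> leak):", round(lift, 2))
```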
18 pages, 1495 KiB  
Article
Algorithmic Information Theory for the Precise Engineering of Flexible Material Mechanics
by Liang Luo and George K. Stylios
Mach. Learn. Knowl. Extr. 2024, 6(1), 215-232; https://doi.org/10.3390/make6010012 - 22 Jan 2024
Cited by 2 | Viewed by 1871
Abstract
The structure of fibrous assemblies is highly complex, being both random and regular at the same time, which leads to the complexity of its mechanical behaviour. Using algorithms such as machine learning to process complex mechanical property data requires consideration and understanding of its information principles. There are many different methods and instruments for measuring flexible material mechanics, and many different mechanics models exist. There is a need for an evaluation method to determine how close the results they obtain are to the real material mechanical behaviours. This paper considers and investigates measurements, data, models and simulations of fabric’s low-stress mechanics from an information perspective. The simplification of measurements and models will lead to a loss of information and, ultimately, a loss of authenticity in the results. Kolmogorov complexity is used as a tool to analyse and evaluate the algorithmic information content of multivariate nonlinear relationships of fabric stress and strain. The loss of algorithmic information content resulting from simplified approaches to various material measurements, models and simulations is also evaluated. For example, ignoring the friction hysteresis component in the material mechanical data can cause the model and simulation to lose more than 50% of the algorithm information, whilst the average loss of information using uniaxial measurement data can be as high as 75%. The results of this evaluation can be used to determine the authenticity of measurements and models and to identify the direction for new measurement instrument development and material mechanics modelling. It has been shown that a vast number of models, which use unary relationships to describe fabric behaviour and ignore the presence of frictional hysteresis, are inaccurate because they hold less than 12% of real fabric mechanics data. The paper also explores the possibility of compressing the measurement data of fabric mechanical properties. Full article
(This article belongs to the Section Data)
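Kolmogorov complexity itself is uncomputable, so analyses of this kind typically rely on proxies. The sketch below uses compressed size as such a proxy to compare a synthetic stress signal with friction hysteresis against a simplified monotonic version; the signals and quantization are illustrative assumptions, not the paper's measurement data or exact estimator.

```python
# Minimal sketch: a compression-based proxy for algorithmic information content,
# comparing a full loading/unloading cycle with hysteresis against a simplified fit.
import zlib
import numpy as np

def compressed_size(signal: np.ndarray) -> int:
    # Quantize to 16-bit integers and measure the zlib-compressed byte count
    q = np.round(signal * 1000).astype(np.int16)
    return len(zlib.compress(q.tobytes(), level=9))

strain = np.linspace(0, 1, 2000)
loading = strain ** 1.2
unloading = strain ** 1.2 - 0.05 * np.sin(np.pi * strain)   # hysteresis on unloading
full_cycle = np.concatenate([loading, unloading[::-1]])      # loading + unloading branch
simplified = np.concatenate([loading, loading[::-1]])        # friction hysteresis ignored

k_full = compressed_size(full_cycle)
k_simple = compressed_size(simplified)
print(f"proxy complexity, full cycle: {k_full} bytes; simplified: {k_simple} bytes")
print(f"information retained by the simplified model: {k_simple / k_full:.0%}")
```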
16 pages, 4736 KiB  
Article
The Impact of Light Conditions on Neural Affect Classification: A Deep Learning Approach
by Sophie Zentner, Alberto Barradas Chacon and Selina C. Wriessnegger
Mach. Learn. Knowl. Extr. 2024, 6(1), 199-214; https://doi.org/10.3390/make6010011 - 18 Jan 2024
Cited by 1 | Viewed by 2457
Abstract
Understanding and detecting human emotions is crucial for enhancing mental health, cognitive performance and human–computer interactions. This field in affective computing is relatively unexplored, and gaining knowledge about which external factors impact emotions could enhance communication between users and machines. Furthermore, it could also help us to manage affective disorders or understand affective physiological responses to human spatial and digital environments. The main objective of the current study was to investigate the influence of external stimulation, specifically the influence of different light conditions, on brain activity while observing affect-eliciting pictures and their classification. In this context, multichannel electroencephalography (EEG) was recorded in 30 participants as they observed images from the Nencki Affective Picture System (NAPS) database in an art-gallery-style Virtual Reality (VR) environment. The elicited affect states were classified into three affect classes within the two-dimensional valence–arousal plane. Valence (positive/negative) and arousal (high/low) values were reported by participants on continuous scales. The experiment was conducted in two experimental conditions: a warm light condition and a cold light condition. Thus, three classification tasks arose with regard to the recorded brain data: classification of an affect state within the warm light condition, classification of an affect state within the cold light condition, and warm light vs. cold light classification during observation of affect-eliciting images. For all classification tasks, Linear Discriminant Analysis, a Spatial Filter Model, a Convolutional Neural Network, the EEGNet, and the SincNet were compared. The EEGNet architecture performed best in all tasks. It classified the three affect states significantly above chance, with 43.12% accuracy, under warm light. Under cold light, no model achieved significant results. Warm light vs. cold light during observation of the visual stimuli was classified significantly, with 76.65% accuracy, by the EEGNet, well above any other machine learning or deep learning model. No significant differences could be detected between affect recognition in different light conditions, but the results point towards the advantage of gradient-based learning methods for data-driven experimental designs for the problem of affect decoding from EEG, providing modern tools for affective computing in digital spaces. Moreover, the ability to discern externally driven affective states through deep learning not only advances our understanding of the human mind but also opens avenues for developing innovative therapeutic interventions and improving human–computer interaction. Full article
(This article belongs to the Section Learning)
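The EEGNet family referenced above follows a compact temporal/depthwise-spatial/separable convolution design. The sketch below is a simplified EEGNet-style classifier in PyTorch for three affect classes; the filter counts, kernel sizes and input shape are illustrative assumptions rather than the configuration used in the study.

```python
# Simplified EEGNet-style model (illustrative sizes, not the study's exact setup):
# temporal conv -> depthwise spatial conv -> separable conv -> linear classifier.
import torch
import torch.nn as nn

class EEGNetLike(nn.Module):
    def __init__(self, n_channels=32, n_samples=500, n_classes=3, f1=8, d=2, f2=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, f1, (1, 64), padding=(0, 32), bias=False),           # temporal filters
            nn.BatchNorm2d(f1),
            nn.Conv2d(f1, f1 * d, (n_channels, 1), groups=f1, bias=False),    # depthwise spatial filters
            nn.BatchNorm2d(f1 * d),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(0.25),
            nn.Conv2d(f1 * d, f2, (1, 16), padding=(0, 8), groups=f1 * d, bias=False),  # separable conv
            nn.Conv2d(f2, f2, 1, bias=False),
            nn.BatchNorm2d(f2),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(0.25),
        )
        with torch.no_grad():
            n_flat = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_flat, n_classes)

    def forward(self, x):  # x: (batch, 1, channels, samples)
        return self.classifier(self.features(x).flatten(1))

model = EEGNetLike()
logits = model(torch.randn(4, 1, 32, 500))  # four EEG epochs -> three affect classes
print(logits.shape)                         # torch.Size([4, 3])
```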
28 pages, 2108 KiB  
Review
GAN-Based Tabular Data Generator for Constructing Synopsis in Approximate Query Processing: Challenges and Solutions
by Mohammadali Fallahian, Mohsen Dorodchi and Kyle Kreth
Mach. Learn. Knowl. Extr. 2024, 6(1), 171-198; https://doi.org/10.3390/make6010010 - 16 Jan 2024
Cited by 3 | Viewed by 2536
Abstract
In data-driven systems, data exploration is imperative for making real-time decisions. However, big data are stored in massive databases from which retrieval is difficult. Approximate Query Processing (AQP) is a technique for providing approximate answers to aggregate queries based on a summary of the data (synopsis) that closely replicates the behavior of the actual data; this can be useful when an approximate answer to queries is acceptable in a fraction of the real execution time. This study explores the novel utilization of a Generative Adversarial Network (GAN) for the generation of tabular data that can be employed in AQP for synopsis construction. We thoroughly investigate the unique challenges posed by the synopsis construction process, including maintaining data distribution characteristics, handling bounded continuous and categorical data, and preserving semantic relationships, and we then introduce advances in tabular GAN architectures that overcome these challenges. Furthermore, we propose and validate a suite of statistical metrics tailored for assessing the reliability of GAN-generated synopses. Our findings demonstrate that advanced GAN variations exhibit a promising capacity to generate high-fidelity synopses, potentially transforming the efficiency and effectiveness of AQP in data-driven systems. Full article
(This article belongs to the Section Data)
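The core AQP mechanism the review builds on can be shown with a toy example: answer an aggregate query on a small synopsis instead of the full table and measure the relative error against the exact answer. In the sketch below the synopsis is a plain random sample standing in for GAN-generated rows; the table schema, column names and sample rate are illustrative assumptions.

```python
# Toy AQP evaluation: run the same aggregate query on the full table and on a
# small synopsis, then report per-group relative error. The "synopsis" here is
# a 1% sample; in practice it would come from a trained tabular GAN.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
full_table = pd.DataFrame({
    "region": rng.choice(["north", "south", "east", "west"], size=1_000_000),
    "sales": rng.gamma(shape=2.0, scale=150.0, size=1_000_000),
})

synopsis = full_table.sample(frac=0.01, random_state=0)  # stand-in for GAN-generated rows

def avg_sales_by_region(table: pd.DataFrame) -> pd.Series:
    return table.groupby("region")["sales"].mean()

exact = avg_sales_by_region(full_table)
approx = avg_sales_by_region(synopsis)
relative_error = (approx - exact).abs() / exact
print(relative_error.round(4))  # per-group relative error of the approximate answer
```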
15 pages, 688 KiB  
Article
An Ensemble-Based Multi-Classification Machine Learning Classifiers Approach to Detect Multiple Classes of Cyberbullying
by Abdulkarim Faraj Alqahtani and Mohammad Ilyas
Mach. Learn. Knowl. Extr. 2024, 6(1), 156-170; https://doi.org/10.3390/make6010009 - 12 Jan 2024
Cited by 3 | Viewed by 3342
Abstract
The impact of communication through social media is currently considered a significant social issue. This issue can lead to inappropriate behavior using social media, which is referred to as cyberbullying. Automated systems are capable of efficiently identifying cyberbullying and performing sentiment analysis on social media platforms. This study focuses on enhancing a system to detect six types of cyberbullying tweets. Employing multi-classification algorithms on a cyberbullying dataset, our approach achieved high accuracy, particularly with the TF-IDF (bigram) feature extraction. Our experiment achieved higher performance than that reported for previous experiments on the same dataset. Two ensemble machine learning methods, employing the N-gram with TF-IDF feature-extraction technique, demonstrated superior performance in classification. Three popular multi-classification algorithms, Decision Trees, Random Forest, and XGBoost, were separately combined into two different ensemble methods. These ensemble classifiers demonstrated superior performance compared to traditional machine learning classifier models: the stacking classifier reached 90.71% accuracy and the voting classifier 90.44%. Overall, the experiments showed that the framework can detect the six types of cyberbullying more efficiently than previous approaches, with an accuracy of 90.71%. Full article
(This article belongs to the Section Privacy)
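A hedged sketch of the kind of setup described in the abstract: TF-IDF bigram features feeding Decision Tree, Random Forest and XGBoost base learners, combined once as a voting ensemble and once as a stacking ensemble with scikit-learn. The placeholder corpus, labels and hyperparameters are illustrative, not the study's.

```python
# Illustrative ensemble setup over TF-IDF bigram features; data and
# hyperparameters are placeholders, not the cyberbullying dataset.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, VotingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier

tweets = ["example tweet one", "example tweet two", "example tweet three"]  # placeholder corpus
labels = [0, 1, 2]                                                          # placeholder classes

base_learners = [
    ("dt", DecisionTreeClassifier()),
    ("rf", RandomForestClassifier(n_estimators=200)),
    ("xgb", XGBClassifier(eval_metric="mlogloss")),
]

voting = make_pipeline(
    TfidfVectorizer(ngram_range=(2, 2)),  # bigram TF-IDF features
    VotingClassifier(estimators=base_learners, voting="soft"),
)
stacking = make_pipeline(
    TfidfVectorizer(ngram_range=(2, 2)),
    StackingClassifier(estimators=base_learners, final_estimator=LogisticRegression()),
)

voting.fit(tweets, labels)  # in practice: the labelled cyberbullying tweets
print(voting.predict(["another example tweet"]))
```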
13 pages, 1194 KiB  
Article
What Do the Regulators Mean? A Taxonomy of Regulatory Principles for the Use of AI in Financial Services
by Mustafa Pamuk, Matthias Schumann and Robert C. Nickerson
Mach. Learn. Knowl. Extr. 2024, 6(1), 143-155; https://doi.org/10.3390/make6010008 - 11 Jan 2024
Cited by 1 | Viewed by 2228
Abstract
The push towards automation in the financial industry creates a natural field for the use of artificial intelligence. However, complex and stringent regulatory standards and rapid technological developments pose significant challenges in developing and deploying AI-based services in the finance industry. The regulatory principles defined by financial authorities in Europe need to be structured in a fine-grained way to promote understanding and to ensure customer safety and the quality of AI-based services in the financial industry. This will lead to a better understanding of regulators’ priorities and guide how AI-based services are built. This paper provides a classification pattern with a taxonomy that clarifies the existing European regulatory principles for researchers, regulatory authorities, and financial services companies. Our study can pave the way for developing compliant AI-based services by bringing out the thematic focus of the regulatory principles. Full article
(This article belongs to the Special Issue Fairness and Explanation for Trustworthy AI)
17 pages, 849 KiB  
Article
Knowledge Graph Extraction of Business Interactions from News Text for Business Networking Analysis
by Didier Gohourou and Kazuhiro Kuwabara
Mach. Learn. Knowl. Extr. 2024, 6(1), 126-142; https://doi.org/10.3390/make6010007 - 7 Jan 2024
Cited by 1 | Viewed by 2506
Abstract
Network representation of data is key to a variety of fields and their applications, including trading and business. A major source of data that can be used to build insightful networks is the abundant amount of unstructured text data available through the web. Efforts to turn unstructured text data into a network have spawned various research endeavors, including attempts to simplify the process. This study presents the design and implementation of TraCER, a pipeline that turns unstructured text data into a graph, targeting the business networking domain. It describes the application of natural language processing techniques used to process the text, as well as the heuristics and learning algorithms that categorize the nodes and the links. The study also presents some simple yet efficient methods for the entity-linking and relation classification steps of the pipeline. Full article
(This article belongs to the Section Network)
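To make the pipeline idea concrete, the following simplified sketch (not TraCER itself) runs named entity recognition over news sentences with spaCy and links co-occurring organizations in a networkx graph, using a coarse keyword heuristic in place of a learned relation classifier. The model name, keyword rules and example sentences are illustrative assumptions, and the spaCy model is assumed to be installed.

```python
# Simplified text-to-graph sketch: NER for business entities, then a crude
# keyword heuristic as a stand-in for relation classification.
import spacy
import networkx as nx

nlp = spacy.load("en_core_web_sm")   # assumed installed; any NER-capable model works
graph = nx.DiGraph()

RELATION_KEYWORDS = {"acquire": "acquisition", "partner": "partnership", "supply": "supplier"}

news_sentences = [
    "Acme Corp acquired Globex in a deal worth 2 billion dollars.",
    "Initech announced a partnership with Umbrella Group.",
]

for sentence in news_sentences:
    doc = nlp(sentence)
    orgs = [ent.text for ent in doc.ents if ent.label_ == "ORG"]
    if len(orgs) < 2:
        continue  # need at least two organizations to form a business link
    # Coarse relation classification: first matching keyword in the sentence.
    label = next((rel for kw, rel in RELATION_KEYWORDS.items() if kw in sentence.lower()),
                 "related_to")
    graph.add_edge(orgs[0], orgs[1], relation=label)

print(list(graph.edges(data=True)))
```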
28 pages, 22846 KiB  
Article
Predicting Wind Comfort in an Urban Area: A Comparison of a Regression- with a Classification-CNN for General Wind Rose Statistics
by Jennifer Werner, Dimitri Nowak, Franziska Hunger, Tomas Johnson, Andreas Mark, Alexander Gösta and Fredrik Edelvik
Mach. Learn. Knowl. Extr. 2024, 6(1), 98-125; https://doi.org/10.3390/make6010006 - 4 Jan 2024
Cited by 3 | Viewed by 2575
Abstract
Wind comfort is an important factor when new buildings in existing urban areas are planned. It is common practice to use computational fluid dynamics (CFD) simulations to model wind comfort. These simulations are usually time-consuming, making it impossible to explore a large number of different design choices for a new urban development with wind simulations. Data-driven approaches based on simulations have shown great promise, and have recently been used to predict wind comfort in urban areas. These surrogate models could be used in generative design software and would enable the planner to explore a large number of options for a new design. In this paper, we propose a novel machine learning workflow (MLW) for direct wind comfort prediction. The MLW incorporates a regression and a classification U-Net, trained on CFD simulations. Furthermore, we present an augmentation strategy focusing on generating more training data independently of the underlying wind statistics needed to calculate the wind comfort criterion. We train the models on different sets of training data and compare the results. All trained models (regression and classification) yield an F1-score greater than 80% and can be combined with any wind rose statistic. Full article
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
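One way to compare a regression surrogate with a classification surrogate on equal footing, as the shared F1-score above suggests, is to bin the regressed wind-speed statistic into comfort classes and score both models with the same metric. The sketch below illustrates this idea; the thresholds and synthetic data are placeholders, not the actual wind comfort criterion or CFD results.

```python
# Illustrative comparison of regression vs. classification surrogates on a
# common F1 metric; thresholds and data are placeholders, not a real comfort
# criterion or CFD output.
import numpy as np
from sklearn.metrics import f1_score

comfort_bins = [2.5, 4.0, 6.0]  # placeholder wind-speed thresholds (m/s)

def to_comfort_class(wind_speed: np.ndarray) -> np.ndarray:
    return np.digitize(wind_speed, comfort_bins)  # comfort classes 0..3

rng = np.random.default_rng(1)
true_speed = rng.uniform(0.0, 8.0, size=10_000)                                   # "ground truth" stand-in
pred_speed = true_speed + rng.normal(0.0, 0.5, 10_000)                            # regression surrogate output
pred_class_direct = to_comfort_class(true_speed + rng.normal(0.0, 0.7, 10_000))   # classification surrogate output

y_true = to_comfort_class(true_speed)
print("regression surrogate F1:", f1_score(y_true, to_comfort_class(pred_speed), average="macro"))
print("classification surrogate F1:", f1_score(y_true, pred_class_direct, average="macro"))
```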
20 pages, 1186 KiB  
Article
A Data Mining Approach for Health Transport Demand
by Jorge Blanco Prieto, Marina Ferreras González, Steven Van Vaerenbergh and Oscar Jesús Cosido Cobos
Mach. Learn. Knowl. Extr. 2024, 6(1), 78-97; https://doi.org/10.3390/make6010005 - 4 Jan 2024
Cited by 1 | Viewed by 2236
Abstract
Efficient planning and management of health transport services are crucial for improving accessibility and enhancing the quality of healthcare. This study focuses on the choice of determinant variables in the prediction of health transport demand using data mining and analysis techniques. Specifically, health transport services data from Asturias, spanning a seven-year period, are analyzed with the aim of developing accurate predictive models. The problem at hand requires the handling of large volumes of data and multiple predictor variables, leading to challenges in computational cost and interpretation of the results. Therefore, data mining techniques are applied to identify the most relevant variables in the design of predictive models. This approach allows for reducing the computational cost without sacrificing prediction accuracy. The findings of this study underscore that the selection of significant variables is essential for optimizing medical transport resources and improving the planning of emergency services. With the most relevant variables identified, a balance between prediction accuracy and computational efficiency is achieved. As a result, improved service management is observed to lead to increased accessibility to health services and better resource planning. Full article
(This article belongs to the Section Data)
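The variable-selection step described above can be illustrated with a small sketch: rank candidate predictors of transport demand by importance and keep only the top few, trading a little accuracy for a lower computational cost. The feature names, data and importance model below are illustrative assumptions, not the study's variables.

```python
# Illustrative variable selection for a demand-prediction task: fit a forest
# on placeholder features and keep only the most important predictors.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
features = pd.DataFrame({
    "day_of_week": rng.integers(0, 7, 2000),
    "month": rng.integers(1, 13, 2000),
    "population_65_plus": rng.normal(5000, 800, 2000),
    "distance_to_hospital_km": rng.uniform(1, 60, 2000),
    "holiday": rng.integers(0, 2, 2000),
})
demand = (0.6 * features["population_65_plus"] / 1000
          + 0.3 * features["distance_to_hospital_km"]
          + rng.normal(0, 1, 2000))  # synthetic daily transport demand

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(features, demand)
importance = pd.Series(model.feature_importances_, index=features.columns).sort_values(ascending=False)
selected = importance.head(3).index.tolist()  # keep only the most relevant predictors
print(importance.round(3))
print("selected variables:", selected)
```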
25 pages, 2315 KiB  
Article
Machine Learning for an Enhanced Credit Risk Analysis: A Comparative Study of Loan Approval Prediction Models Integrating Mental Health Data
by Adnan Alagic, Natasa Zivic, Esad Kadusic, Dzenan Hamzic, Narcisa Hadzajlic, Mejra Dizdarevic and Elmedin Selmanovic
Mach. Learn. Knowl. Extr. 2024, 6(1), 53-77; https://doi.org/10.3390/make6010004 - 4 Jan 2024
Cited by 2 | Viewed by 6232
Abstract
The number of loan requests is rapidly growing worldwide, representing a multi-billion-dollar business in the credit approval industry. Large volumes of data extracted from banking transactions that represent customers’ behavior are available, but processing loan applications is a complex and time-consuming task for banking institutions. In 2022, over 20 million Americans had open loans, totaling USD 178 billion in debt, although over 20% of loan applications were rejected. Numerous statistical methods have been deployed to estimate loan risks, opening the question of whether machine learning techniques can better predict the potential risks. To study the machine learning paradigm in this sector, a mental health dataset and a loan approval dataset presenting survey results from 1991 individuals are used as inputs to test the credit risk prediction ability of the chosen machine learning algorithms. Providing a comprehensive comparative analysis, this paper shows how the chosen machine learning algorithms can distinguish between normal and risky loan customers who might never pay their debts back. The results from the tested algorithms show that XGBoost achieves the highest accuracy of 84% in the first dataset, surpassing gradient boost (83%) and KNN (83%). In the second dataset, random forest achieved the highest accuracy of 85%, followed by decision tree and KNN with 83%. Alongside accuracy, the precision, recall, and overall performance of the algorithms were tested, and a confusion matrix analysis was performed, producing numerical results that emphasized the superior performance of XGBoost and random forest in the classification tasks on the first dataset, and of XGBoost and decision tree on the second dataset. Researchers and practitioners can rely on these findings to inform their model selection process and enhance the accuracy and precision of their classification models. Full article
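A minimal sketch of the comparative protocol the abstract describes: train several classifiers on the same train/test split and report accuracy, precision, recall and the confusion matrix for each. The random feature matrix and label rule below merely stand in for the loan-approval and mental-health data.

```python
# Illustrative classifier comparison on placeholder data; feature matrix and
# label rule stand in for the real applicant datasets.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, confusion_matrix
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1991, 12))                                            # placeholder applicant features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 1, 1991) > 0).astype(int)     # placeholder risk label
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "knn": KNeighborsClassifier(n_neighbors=5),
    "xgboost": XGBClassifier(eval_metric="logloss"),
}

for name, model in models.items():
    pred = model.fit(X_train, y_train).predict(X_test)
    print(name,
          f"acc={accuracy_score(y_test, pred):.3f}",
          f"prec={precision_score(y_test, pred):.3f}",
          f"rec={recall_score(y_test, pred):.3f}")
    print(confusion_matrix(y_test, pred))
```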