Symbolic Methods of Machine Learning in Knowledge Discovery and Explainable Artificial Intelligence

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (30 September 2023) | Viewed by 4194
Joint Special Issue: You may choose either journal, Mathematics or Applied Sciences.

Special Issue Editor


Prof. Dr. Marek Sikora
Guest Editor
Department of Computer Networks and Systems, Faculty of Automatic Control, Electronics and Computer Science, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
Interests: decision support systems; data mining; rule induction; rough sets

Special Issue Information

Dear Colleagues,

Symbolic methods, also called interpretable or white-box methods, were among the first methods developed in machine learning. They are still being developed and find practical application, particularly in knowledge-discovery tasks. In predictive analytics, complex approaches (complex AI/ML models) such as boosting, bagging and deep learning usually achieve better results than white-box methods. However, explaining the decision-making process of complex AI/ML models is difficult and, without additional assumptions, often impossible; for this reason, such models are called black boxes. The dynamic growth of XAI (Explainable Artificial Intelligence) has recently been stimulated by the necessity to explain decisions made by complex AI/ML systems. In this domain, the most rapid progress has been observed in local, so-called instance-level explanation (i.e., explanation of the reasons for a specific decision made for a given example). Global, or dataset-level, XAI still requires intensive research. In general, a global explanation method should help the user understand how the AI/ML model makes decisions overall, for example, by revealing the patterns of correct and incorrect decisions made by the model. In this context, white-box approximations of complex AI/ML models may play an important role; in recent years, research has specifically addressed approximating the decisions of black-box models with white-box approaches.

This Special Issue focuses on new methods for the induction of interpretable AI/ML models (rules, trees, graphs, etc.) in data mining and knowledge discovery. Methods for concept learning, contrast set mining, action mining, regression and censored data analysis are welcome. The Special Issue also covers all proposals related to white-box-based XAI dedicated to the global explanation of decisions made by complex AI/ML models.

You may choose our Joint Special Issue in Mathematics.

Prof. Dr. Marek Sikora
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • knowledge discovery
  • white-box ML
  • explainable artificial intelligence
  • decision tree and rule induction
  • rough sets

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

17 pages, 761 KiB  
Article
Knowledge Reasoning via Jointly Modeling Knowledge Graphs and Soft Rules
by Yinyu Lan, Shizhu He, Kang Liu and Jun Zhao
Appl. Sci. 2023, 13(19), 10660; https://doi.org/10.3390/app131910660 - 25 Sep 2023
Cited by 1 | Viewed by 1279
Abstract
Knowledge graphs (KGs) play a crucial role in many applications, such as question answering, but incompleteness is an urgent issue for their broad application. Much research in knowledge graph completion (KGC) has been performed to resolve this issue. The methods of KGC can be classified into two major categories: rule-based reasoning and embedding-based reasoning. The former has high accuracy and good interpretability, but a major challenge is to obtain effective rules on large-scale KGs. The latter has good efficiency and scalability, but it relies heavily on data richness and cannot fully use domain knowledge in the form of logical rules. We propose a novel method that injects rules and learns representations iteratively to take full advantage of rules and embeddings. Specifically, we model the conclusions of rule groundings as 0–1 variables and use a rule confidence regularizer to remove the uncertainty of the conclusions. The proposed approach has the following advantages: (1) It combines the benefits of both rules and knowledge graph embeddings (KGEs) and achieves a good balance between efficiency and scalability. (2) It uses an iterative method to continuously improve KGEs and remove incorrect rule conclusions. Evaluations of two public datasets show that our method outperforms the current state-of-the-art methods, improving performance by 2.7% and 4.3% in mean reciprocal rank (MRR). Full article
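
As a rough illustration of the rule-injection idea described in the abstract (not the authors' implementation), the sketch below alternates between training a TransE-style embedding on confidence-weighted triples and pruning rule-grounding conclusions that the current embeddings no longer support. The toy knowledge graph, the rule, its confidence and the pruning threshold are assumptions made for the example; negative sampling and other standard KGE ingredients are omitted for brevity.

```python
# Hypothetical sketch: iterative soft-rule injection into a TransE-style KGE.
import numpy as np

rng = np.random.default_rng(0)
entities = ["alice", "bob", "carol", "acme"]
relations = ["works_at", "colleague_of"]
e_idx = {e: i for i, e in enumerate(entities)}
r_idx = {r: i for i, r in enumerate(relations)}

observed = [("alice", "works_at", "acme"), ("bob", "works_at", "acme")]
rule_conf = 0.8  # soft rule: works_at(x, z) AND works_at(y, z) => colleague_of(x, y)

def ground_rule(triples):
    # Enumerate all conclusions of the toy rule over the given triples.
    staff = {}
    for h, r, t in triples:
        if r == "works_at":
            staff.setdefault(t, set()).add(h)
    return [(x, "colleague_of", y) for s in staff.values() for x in s for y in s if x != y]

dim = 16
E = rng.normal(scale=0.1, size=(len(entities), dim))   # entity embeddings
R = rng.normal(scale=0.1, size=(len(relations), dim))  # relation embeddings

def score(h, r, t):
    # TransE-style plausibility score: higher (closer to 0) is more plausible.
    return -np.linalg.norm(E[e_idx[h]] + R[r_idx[r]] - E[e_idx[t]])

conclusions = ground_rule(observed)
for it in range(5):
    # Observed triples get weight 1.0; rule conclusions get the rule confidence.
    train = [(tr, 1.0) for tr in observed] + [(tr, rule_conf) for tr in conclusions]
    for _ in range(200):  # a few weighted SGD steps
        (h, r, t), w = train[rng.integers(len(train))]
        grad = E[e_idx[h]] + R[r_idx[r]] - E[e_idx[t]]
        E[e_idx[h]] -= 0.05 * w * grad
        R[r_idx[r]] -= 0.05 * w * grad
        E[e_idx[t]] += 0.05 * w * grad
    # Keep only conclusions that the current embeddings still find plausible.
    conclusions = [tr for tr in conclusions if score(*tr) > -1.0]
    print(f"iteration {it}: {len(conclusions)} rule conclusions retained")
```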

15 pages, 622 KiB  
Article
Contextual Explanations for Decision Support in Predictive Maintenance
by Michał Kozielski
Appl. Sci. 2023, 13(18), 10068; https://doi.org/10.3390/app131810068 - 6 Sep 2023
Cited by 4 | Viewed by 1025
Abstract
Explainable artificial intelligence (XAI) methods aim to explain to the user on what basis the model makes decisions. Unfortunately, general-purpose approaches that are independent of the types of data, model used and the level of sophistication of the user are not always able to make model decisions more comprehensible. An example of such a problem, which is considered in this paper, is a predictive maintenance task where a model identifying outliers in time series is applied. Typical explanations of the model’s decisions, which present the importance of the attributes, are not sufficient to support the user for such a task. Within the framework of this work, a visualisation and analysis of the context of local explanations presenting attribute importance are proposed. Two types of context for explanations are considered: local and global. They extend the information provided by typical explanations and offer the user greater insight into the validity of the alarms triggered by the model. Evaluation of the proposed context was performed on two time series representations: basic and extended. For the extended representation, an aggregation of explanations was used to make them more intuitive for the user. The results show the usefulness of the proposed context, particularly for the basic data representation. However, for the extended representation, the aggregation of explanations used is sometimes insufficient to provide a clear explanatory context. Therefore, the explanation using simplification with a surrogate model on basic data representation was proposed as a solution. The obtained results can be valuable for developers of decision support systems for predictive maintenance. Full article
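
As a hedged sketch of how attribute-importance explanations can be placed in context (this is not the paper's method or data), the example below flags anomalous sensor readings with a scikit-learn isolation forest, computes a simple per-feature deviation score as the local explanation of each alarm, and shows it next to a global context, the average attribution profile over all alarms. The feature names, synthetic data and importance heuristic are illustrative assumptions.

```python
# Hypothetical sketch: local attribute importance for anomaly alarms plus a global context.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
features = ["temperature", "vibration", "pressure"]
normal = rng.normal(loc=[60.0, 1.0, 5.0], scale=[2.0, 0.1, 0.3], size=(500, 3))
faults = rng.normal(loc=[75.0, 1.8, 5.1], scale=[2.0, 0.2, 0.3], size=(10, 3))
X = np.vstack([normal, faults])

model = IsolationForest(random_state=0).fit(normal)
alarms = X[model.predict(X) == -1]          # readings flagged as outliers

mu, sigma = normal.mean(axis=0), normal.std(axis=0)

def local_importance(x):
    # Heuristic attribution: how far each feature deviates from normal behaviour.
    return np.abs(x - mu) / sigma

local = np.array([local_importance(a) for a in alarms])
global_context = local.mean(axis=0)         # typical attribution profile across alarms

# Present the first alarm's local explanation against the global context.
for name, loc_val, glob_val in zip(features, local[0], global_context):
    print(f"{name:12s} local importance {loc_val:5.2f}   global context {glob_val:5.2f}")
```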

13 pages, 326 KiB  
Article
Detecting and Isolating Adversarial Attacks Using Characteristics of the Surrogate Model Framework
by Piotr Biczyk and Łukasz Wawrowski
Appl. Sci. 2023, 13(17), 9698; https://doi.org/10.3390/app13179698 - 28 Aug 2023
Viewed by 1109
Abstract
The paper introduces a novel framework for detecting adversarial attacks on machine learning models that classify tabular data. Its purpose is to provide a robust method for the monitoring and continuous auditing of machine learning models in order to detect malicious data alterations. The core of the framework is the construction of machine learning classifiers that operate on diagnostic attributes and detect attacks and their type. These diagnostic attributes are obtained not from the original model but from a surrogate model created by observing the original model's inputs and outputs. The paper presents the building blocks of the framework and tests its power for the detection and isolation of attacks in selected scenarios utilizing known attacks and public machine learning data sets. The obtained results pave the way for further experiments and for developing classifiers that can be integrated into real-world scenarios, bolstering the robustness of machine learning applications. Full article
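
The general surrogate-based monitoring idea can be sketched as follows (an assumption-laden illustration using scikit-learn, not the authors' framework): a decision-tree surrogate is fitted to the monitored model's observed inputs and outputs, per-batch diagnostic attributes such as surrogate agreement and confidence are extracted, and a lightweight classifier trained on these attributes flags batches that look maliciously altered. The dataset, the noise-based perturbation and the choice of diagnostic attributes are illustrative assumptions.

```python
# Hypothetical sketch: attack detection from diagnostic attributes of a surrogate model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=3000, n_features=10, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X[:2000], y[:2000])

# Surrogate model built only from the black-box model's observed inputs and outputs.
surrogate = DecisionTreeClassifier(max_depth=5, random_state=0).fit(
    X[:2000], black_box.predict(X[:2000]))

def diagnostic_attributes(batch):
    # Per-batch diagnostics: surrogate/black-box agreement and mean surrogate confidence.
    bb_pred = black_box.predict(batch)
    s_pred = surrogate.predict(batch)
    agreement = float(np.mean(bb_pred == s_pred))
    confidence = float(np.mean(surrogate.predict_proba(batch).max(axis=1)))
    return [agreement, confidence]

def make_batches(data, attacked):
    batches = []
    for i in range(0, len(data) - 50, 50):
        batch = data[i:i + 50].copy()
        if attacked:
            batch += rng.normal(scale=1.5, size=batch.shape)  # crude data alteration
        batches.append(diagnostic_attributes(batch))
    return batches

clean = make_batches(X[2000:], attacked=False)
attacked = make_batches(X[2000:], attacked=True)
A = np.array(clean + attacked)
labels = np.array([0] * len(clean) + [1] * len(attacked))
detector = LogisticRegression().fit(A, labels)
print("detector accuracy on its training batches:", detector.score(A, labels))
```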
