Advances in Explainable Artificial Intelligence
A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Artificial Intelligence".
Deadline for manuscript submissions: closed (28 September 2023) | Viewed by 51139
Special Issue Editors
Interests: machine learning; computational intelligence; game theory applications to machine learning and networking
Interests: machine learning; semantic web; information retrieval
Special Issue Information
Dear Colleagues,
Machine Learning (ML)-based Artificial Intelligence (AI) algorithms can learn, from known examples, abstract representations and models that, once applied to unknown examples, can perform classification, regression, or forecasting tasks, to name a few.
Very often, these highly effective ML representations are difficult to understand; this holds true particularly for deep learning models, which can involve millions of parameters. However, for many applications, it is of the utmost importance for stakeholders to understand the decisions made by the system, in order to act on them appropriately. Furthermore, for decisions that affect an individual, future legislation may even mandate a “right to an explanation”. Overall, improving the explainability of these algorithms may foster trust and the social acceptance of AI.
The need to make ML algorithms more transparent and more explainable has generated several lines of research that form an area known as explainable Artificial Intelligence (XAI).
Among the goals of XAI are: adding transparency to ML models by providing detailed information about why the system has reached a particular decision; designing ML models that are more explainable and transparent while maintaining high performance levels; and finding ways to evaluate the overall explainability and transparency of models and to quantify their effectiveness for different stakeholders.
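One of the goals above, providing detailed information about why a system reached a particular decision, can be illustrated with a minimal sketch of a transparent-by-design model: a linear scorer whose decision decomposes exactly into per-feature contributions. All feature names and weights below are illustrative assumptions, not taken from this call.

```python
# Minimal sketch of a transparent-by-design classifier: a linear scorer
# whose output decomposes exactly into per-feature contributions.
# Feature names and weights are illustrative, not from any real system.

def explain_decision(weights, bias, features):
    """Return (label, per-feature contributions) for a linear scorer."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    label = 1 if score >= 0 else 0
    return label, contributions

weights = {"income": 0.8, "debt": -1.2, "age": 0.1}
features = {"income": 2.0, "debt": 1.0, "age": 3.0}
label, contribs = explain_decision(weights, -0.5, features)
# score = -0.5 + 1.6 - 1.2 + 0.3 = 0.2, so label = 1;
# each entry of `contribs` states how much a feature pushed the decision.
```

Because the contributions sum (with the bias) to the exact score, the explanation is faithful by construction, which is precisely what post-hoc methods for opaque models can only approximate.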
The objective of this Special Issue is to explore recent advances and techniques in the XAI area.
Research topics of interest include (but are not limited to):
- Devising machine learning models that are transparent-by-design;
- Planning for transparency, from data collection through training, testing, and production;
- Developing algorithms and user interfaces for explainability;
- Identifying and mitigating biases in data collection;
- Performing black-box model auditing and explanation;
- Detecting data bias and algorithmic bias;
- Learning causal relationships;
- Integrating social and ethical aspects of explainability;
- Integrating explainability into existing AI systems;
- Designing new explanation modalities;
- Exploring theoretical aspects of explanation and interpretability;
- Investigating the use of XAI in application sectors such as healthcare, bioinformatics, multimedia, linguistics, human–computer interaction, machine translation, autonomous vehicles, risk assessment, justice, etc.
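For the black-box model auditing topic listed above, a common starting point is perturbation-based attribution: occlude one input feature at a time and measure how the model's output changes. The following is a minimal sketch under the assumption that the auditor can only query the model, not inspect it; the model and values are hypothetical.

```python
def occlusion_importance(model, x, baseline=0.0):
    """Estimate per-feature importance of a black-box model by replacing
    each feature with a baseline value and measuring the output change.
    `model` is any callable mapping a feature list to a scalar."""
    base = model(x)
    return {i: base - model(x[:i] + [baseline] + x[i + 1:])
            for i in range(len(x))}

# Hypothetical black box: the auditor only observes its outputs.
black_box = lambda x: 2 * x[0] - 3 * x[1] + 0.01 * x[2]
imp = occlusion_importance(black_box, [1.0, 1.0, 1.0])
# imp == {0: 2.0, 1: -3.0, 2: 0.01}: feature 1 dominates the decision.
```

Here the recovered importances match the (hidden) coefficients because the black box happens to be linear; for nonlinear models the same probe yields only a local approximation, which is exactly the kind of limitation XAI evaluation research aims to quantify.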
Prof. Dr. Gabriele Gianini
Prof. Dr. Pierre-Edouard Portier
Guest Editors
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.
Keywords
- machine learning
- deep learning
- explainability
- transparency
- accountability
Benefits of Publishing in a Special Issue
- Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
- Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
- Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
- External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
- e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.
Further information on MDPI's Special Issue policies can be found here.