Selected Papers from CD-MAKE 2020 and ARES 2020

A special issue of Machine Learning and Knowledge Extraction (ISSN 2504-4990).

Deadline for manuscript submissions: closed (30 November 2020) | Viewed by 38291

Special Issue Editors


Guest Editor
SBA Research, University of Vienna, 1090 Vienna, Austria
Interests: fundamental and applied research on blockchain and distributed ledger technologies; security of production systems engineering

Guest Editor
1. Human-Centered AI Lab, Institute of Forest Engineering, Department of Forest and Soil Sciences, University of Natural Resources and Life Sciences, 1190 Vienna, Austria
2. xAI Lab, Alberta Machine Intelligence Institute, University of Alberta, Edmonton, AB T5J 3B1, Canada
Interests: artificial intelligence (AI); machine learning (ML); explainable AI (xAI); causability; decision support systems; medical AI; health informatics

Special Issue Information

Dear Colleagues,

This Special Issue will mainly consist of extended papers selected from those presented at the 4th International Cross Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE 2020) as well as the 15th International Conference on Availability, Reliability and Security (ARES 2020). Please visit the conference websites for detailed descriptions: https://www.ares-conference.eu/ and https://cd-make.net/.

Each submission to this Special Issue should contain at least 50% new material, e.g., in the form of technical extensions, more in-depth evaluations, or additional use cases, together with a revised title, abstract, and keywords. These extended submissions will undergo peer review according to the journal's standard procedures. At least two technical program committee members will act as reviewers for each extended article submitted to this Special Issue; if needed, additional external reviewers will be invited to guarantee a high-quality review process.

Prof. Dr. Edgar Weippl
Mr. Peter Kieseberg
Prof. Dr. Andreas Holzinger
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and then proceeding to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Machine Learning and Knowledge Extraction is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • DATA – data fusion, preprocessing, mapping, knowledge representation, environments, etc.
  • LEARNING – algorithms, contextual adaptation, causal reasoning, transfer learning, etc.
  • VISUALIZATION – intelligent interfaces, human-AI interaction, dialogue systems, explanation interfaces, etc.
  • PRIVACY – data protection, safety, security, reliability, verifiability, trust, ethics and social issues, etc.
  • NETWORK – graphical models, graph-based machine learning, Bayesian inference, etc.
  • TOPOLOGY – geometrical machine learning, topological and manifold learning, etc.
  • ENTROPY – time and machine learning, entropy-based learning, etc.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Editorial

2 pages, 507 KiB  
Editorial
Special Issue “Selected Papers from CD-MAKE 2020 and ARES 2020”
by Edgar R. Weippl, Andreas Holzinger and Peter Kieseberg
Mach. Learn. Knowl. Extr. 2023, 5(1), 173-174; https://doi.org/10.3390/make5010012 - 20 Jan 2023
Cited by 1 | Viewed by 1723
Abstract
In the current era of rapid technological advancement, machine learning (ML) is quickly becoming a dominant force in the development of smart environments [...]
(This article belongs to the Special Issue Selected Papers from CD-MAKE 2020 and ARES 2020)

Research

15 pages, 1182 KiB  
Article
Transfer Learning in Smart Environments
by Amin Anjomshoaa and Edward Curry
Mach. Learn. Knowl. Extr. 2021, 3(2), 318-332; https://doi.org/10.3390/make3020016 - 29 Mar 2021
Cited by 6 | Viewed by 4131
Abstract
The knowledge embodied in cognitive models of smart environments, such as machine learning models, is commonly associated with time-consuming and costly processes such as large-scale data collection, data labeling, network training, and fine-tuning of models. Sharing and reuse of these elaborated resources between intelligent systems of different environments, which is known as transfer learning, would facilitate the adoption of cognitive services for the users and accelerate the uptake of intelligent systems in smart building and smart city applications. Currently, machine learning processes are commonly built for intra-organization purposes and tailored towards specific use cases with the assumption of integrated model repositories and feature pools. Transferring such services and models beyond organization boundaries is a challenging task that requires human intervention to find the matching models and evaluate them. This paper investigates the potential of communication and transfer learning between smart environments in order to empower a decentralized and peer-to-peer ecosystem for seamless and automatic transfer of services and machine learning models. To this end, we explore different knowledge types in the context of smart built environments and propose a collaboration framework based on knowledge graph principles for describing the machine learning models and their corresponding dependencies.
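The dependency matching described in the abstract can be illustrated with a minimal sketch. The triples, predicate names (`requiresFeature`, `trainedIn`), model names, and feature names below are invented for illustration and are not the authors' vocabulary; the point is only that a knowledge-graph style description of model dependencies makes transferability checkable automatically.

```python
# Hypothetical knowledge-graph triples describing ML models of a source
# environment and the features (sensors) they depend on.
triples = [
    ("occupancy_model", "trainedIn", "building_A"),
    ("occupancy_model", "requiresFeature", "co2_ppm"),
    ("occupancy_model", "requiresFeature", "room_temperature"),
    ("comfort_model", "requiresFeature", "humidity"),
]

# Features available in a hypothetical target environment ("building B").
target_features = {"co2_ppm", "room_temperature"}

def transferable(model, triples, available):
    """A model can be transferred if the target environment provides every
    feature the knowledge graph records as a dependency of that model."""
    required = {o for s, p, o in triples if s == model and p == "requiresFeature"}
    return required <= available

# Automatic matching: no human intervention needed to find candidate models.
candidates = {s for s, p, o in triples if p == "requiresFeature"}
matches = sorted(m for m in candidates if transferable(m, triples, target_features))
```

Here `matches` contains only the occupancy model, since the comfort model depends on a humidity feature the target environment lacks.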

23 pages, 981 KiB  
Article
Property Checking with Interpretable Error Characterization for Recurrent Neural Networks
by Franz Mayr, Sergio Yovine and Ramiro Visca
Mach. Learn. Knowl. Extr. 2021, 3(1), 205-227; https://doi.org/10.3390/make3010010 - 12 Feb 2021
Cited by 10 | Viewed by 3418
Abstract
This paper presents a novel on-the-fly, black-box, property-checking-through-learning approach as a means for verifying requirements of recurrent neural networks (RNN) in the context of sequence classification. Our technique builds on a tool for learning probably approximately correct (PAC) deterministic finite automata (DFA). The sequence classifier inside the black box consists of a Boolean combination of several components, including the RNN under analysis together with the requirements to be checked, possibly modeled as RNNs themselves. On the one hand, if the output of the algorithm is an empty DFA, there is a proven upper bound (as a function of the algorithm parameters) on the probability of the language of the black box being nonempty; this implies that the property probably holds on the RNN, with probabilistic guarantees. On the other hand, if the DFA is nonempty, it is certain that the language of the black box is nonempty, which entails that the RNN does not satisfy the requirement. In this case, the output automaton serves as an explicit and interpretable characterization of the error. Our approach does not rely on a specific property specification formalism and is capable of handling nonregular languages as well. Moreover, it neither explicitly builds individual representations of any of the components of the black box nor resorts to any external decision procedure for verification. This paper also improves previous theoretical results regarding the probabilistic guarantees of the underlying learning algorithm.
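The empty-vs-nonempty decision rule from the abstract can be sketched in a simplified form. The sampling bound and the toy black box below are illustrative assumptions, not the authors' algorithm (which learns a PAC DFA on the fly rather than sampling directly); the sketch only shows how random sampling turns language emptiness into a verdict with (epsilon, delta) guarantees.

```python
import math
import random

def pac_emptiness_check(blackbox_accepts, sample_word, epsilon=0.05, delta=0.01):
    """Simplified sketch of the decision rule: test whether the language of a
    black-box sequence classifier is probably empty.

    If none of n = ceil((1/epsilon) * ln(1/delta)) sampled words is accepted,
    then with confidence >= 1 - delta the probability mass of accepted words
    is below epsilon, so the property probably holds. Any accepted word is a
    certain counterexample, mirroring the paper's nonempty-DFA case.
    """
    n = math.ceil((1.0 / epsilon) * math.log(1.0 / delta))
    for _ in range(n):
        w = sample_word()
        if blackbox_accepts(w):
            return False, w      # language certainly nonempty: w is a witness
    return True, None            # probably empty (within epsilon and delta)

# Hypothetical black box standing in for "RNN under test AND negated
# requirement": here it accepts exactly the words containing the forbidden
# pattern "ab", i.e., the error language in this toy setting.
random.seed(0)

def sample_word():
    return "".join(random.choice("ab") for _ in range(random.randint(0, 6)))

probably_empty, witness = pac_emptiness_check(lambda w: "ab" in w, sample_word)
```

Since the toy error language is nonempty, the check returns a concrete witness word; for a black box that never accepts, the same call returns the "probably empty" verdict.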

49 pages, 5457 KiB  
Article
Interpretable Topic Extraction and Word Embedding Learning Using Non-Negative Tensor DEDICOM
by Lars Hillebrand, David Biesner, Christian Bauckhage and Rafet Sifa
Mach. Learn. Knowl. Extr. 2021, 3(1), 123-167; https://doi.org/10.3390/make3010007 - 19 Jan 2021
Cited by 3 | Viewed by 3857
Abstract
Unsupervised topic extraction is a vital step in automatically extracting concise content information from large text corpora. Existing topic extraction methods lack the capability of linking relations between these topics, which would further aid text understanding. Therefore, we propose utilizing the Decomposition into Directional Components (DEDICOM) algorithm, which provides a uniquely interpretable matrix factorization for symmetric and asymmetric square matrices and tensors. We constrain DEDICOM to row stochasticity and non-negativity in order to factorize pointwise mutual information matrices and tensors of text corpora. We identify latent topic clusters and their relations within the vocabulary and simultaneously learn interpretable word embeddings. Further, we introduce multiple methods based on alternating gradient descent to efficiently train constrained DEDICOM algorithms. We evaluate the qualitative topic modeling and word embedding performance of our proposed methods on several datasets, including a novel New York Times news dataset, and demonstrate how the DEDICOM algorithm provides deeper text analysis than competing matrix factorization approaches.
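A toy illustration of the constrained factorization, using invented matrices and sizes (not the paper's data or exact update rules): a row-stochastic, non-negative loading matrix A and a topic-affinity matrix R reconstruct a PMI-style co-occurrence matrix as A R Aᵀ, and a single projected gradient step on A sketches the alternating-descent idea.

```python
import numpy as np

# Row-stochastic, non-negative loadings A (6 "words", 2 topics): each row sums
# to 1, so row i is directly interpretable as word i's topic distribution.
A = np.array([
    [1.0, 0.0],
    [1.0, 0.0],
    [1.0, 0.0],
    [0.0, 1.0],
    [0.0, 1.0],
    [0.0, 1.0],
])

# Affinity matrix R: R[i, j] encodes how strongly topic i relates to topic j,
# which is what lets DEDICOM express (possibly asymmetric) topic relations.
R = np.array([
    [1.0, 0.2],
    [0.0, 1.0],
])

# DEDICOM reconstruction of a synthetic co-occurrence matrix.
S = A @ R @ A.T

def loss(A, R, S):
    """Squared reconstruction error of the DEDICOM model."""
    return 0.5 * np.linalg.norm(A @ R @ A.T - S) ** 2

def step_A(A, R, S, lr=0.01):
    """One projected gradient step on A: descend the loss, then project back
    onto the constraint set (non-negative entries, rows summing to 1)."""
    E = A @ R @ A.T - S
    A = A - lr * (E @ A @ R.T + E.T @ A @ R)
    A = np.clip(A, 1e-12, None)               # non-negativity
    return A / A.sum(axis=1, keepdims=True)   # row stochasticity
```

Alternating training would interleave such steps on A with updates of R; at this exactly factorizable S, the gradient vanishes and `step_A` leaves A essentially unchanged.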

28 pages, 2162 KiB  
Article
Learning DOM Trees of Web Pages by Subpath Kernel and Detecting Fake e-Commerce Sites
by Kilho Shin, Taichi Ishikawa, Yu-Lu Liu and David Lawrence Shepard
Mach. Learn. Knowl. Extr. 2021, 3(1), 95-122; https://doi.org/10.3390/make3010006 - 14 Jan 2021
Cited by 8 | Viewed by 4854
Abstract
The subpath kernel is a class of positive definite kernels defined over trees, which has the following advantages for the purposes of classification, regression, and clustering: it can be incorporated into a variety of powerful kernel machines, including SVM; it is invariant to whether input trees are ordered or unordered; it can be computed by fast linear-time algorithms; and, finally, its excellent learning performance has been proven through intensive experiments in the literature. In this paper, we leverage recent advances in tree kernels to solve real problems. As an example, we apply our method to the problem of detecting fake e-commerce sites. Although the problem is similar to phishing-site detection, the fact that mimicking existing authentic sites is harmful for fake e-commerce sites marks a clear difference between these two problems. We focus on fake e-commerce site detection for three reasons: e-commerce fraud is a real problem that companies and law enforcement have been cooperating to solve; inefficiency hampers existing approaches, because datasets tend to be large, while subpath kernel learning overcomes these performance challenges; and we offer increased resiliency against attempts to subvert existing detection methods by incorporating robust features that adversaries cannot change: the DOM trees of websites. Our real-world results are remarkable: our method has exhibited accuracy as high as 0.998 when training an SVM with 1000 instances and evaluating accuracy on almost 7000 independent instances. Its generalization efficiency is also excellent: with only 100 training instances, the accuracy score reached 0.996.
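A naive sketch of a subpath kernel, assuming trees given as `(label, children)` pairs and a hypothetical length-decay weight `lam` (the paper's linear-time algorithms are far more efficient; this quadratic version only illustrates what is being counted):

```python
from collections import Counter

def subpaths(tree, path=()):
    """Enumerate all downward label paths of a tree given as (label, children).
    A subpath is any contiguous vertical segment, so for every node we emit
    every suffix of the root-to-node label sequence ending at that node."""
    label, children = tree
    path = path + (label,)
    for i in range(len(path)):
        yield path[i:]
    for child in children:
        yield from subpaths(child, path)

def subpath_kernel(t1, t2, lam=0.5):
    """Sum over shared subpaths p of lam**len(p) * count_in_t1 * count_in_t2.
    Unordered-tree invariance is immediate: child order never enters the count."""
    c1, c2 = Counter(subpaths(t1)), Counter(subpaths(t2))
    return sum((lam ** len(p)) * n * c2[p] for p, n in c1.items() if p in c2)

# Two tiny DOM-like trees: html -> body -> p, and html -> body -> (p, p).
t1 = ("html", [("body", [("p", [])])])
t2 = ("html", [("body", [("p", []), ("p", [])])])
k = subpath_kernel(t1, t2)  # 0.5 + 0.25 + 0.5 + 0.25 + 0.5 + 1.0 = 3.0
```

The shared subpaths here are ("html",), ("body",), ("p",), ("html","body"), ("body","p"), and ("html","body","p"), with the last three counted twice in t2 because of its two `p` leaves.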

Review

28 pages, 3799 KiB  
Review
AI System Engineering—Key Challenges and Lessons Learned
by Lukas Fischer, Lisa Ehrlinger, Verena Geist, Rudolf Ramler, Florian Sobiezky, Werner Zellinger, David Brunner, Mohit Kumar and Bernhard Moser
Mach. Learn. Knowl. Extr. 2021, 3(1), 56-83; https://doi.org/10.3390/make3010004 - 31 Dec 2020
Cited by 29 | Viewed by 18735
Abstract
The main challenges along the development cycle of machine learning systems are discussed together with the lessons learned from past and ongoing research. This is done by taking into account the intrinsic conditions of today's deep learning models, data and software quality issues, and human-centered artificial intelligence (AI) postulates, including confidentiality and ethical aspects. The analysis outlines a fundamental theory-practice gap which superimposes the challenges of AI system engineering at the level of data quality assurance, model building, software engineering, and deployment. The aim of this paper is to pinpoint research topics for exploring approaches to address these challenges.
