Crossing “Data, Information, Knowledge, and Wisdom” Models—Challenges, Solutions, and Recommendations

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Information Processes".

Deadline for manuscript submissions: closed (31 March 2022) | Viewed by 25656

Special Issue Editors


Guest Editor: Dr. Yucong Duan
College of Science and Information Technology, Hainan University, Haikou, China
Interests: information security; artificial intelligence; big data; software engineering

Guest Editor: Dr. Ejub Kajan
Department of Technical Sciences, State University of Novi Pazar, Novi Pazar, Serbia
Interests: e-commerce architectures; interoperability; social computing; decision support systems; IoT

Guest Editor: Dr. Zakaria Maamar
College of Technological Innovation, Zayed University, Dubai, UAE
Interests: service computing; social computing; Internet of (Cognitive) Things

Special Issue Information

Dear Colleagues,

Currently, most AI techniques and systems rest, separately, on hypotheses and assumptions about the probability distribution of training data, the completeness of information, or the logical consistency of knowledge systems. However, it is hard to guarantee that training data will ever be as "big" as Big Data suggests, and a static data distribution is even harder to defend when modeling the dynamics of evolving data sets. Information completeness depends not only on the various objective presentations of information but also on the subjective purposes inside human minds. Experience, common sense, and knowledge need coordination to stay aligned with the value of wisdom.

Data, information, knowledge, and wisdom (DIKW) are widely used natural-language terms for expressing understanding across many domains. However, there is no common understanding of what the DIKW concepts mean, whether taken separately or in combination. As a result, numerous DIKW proposals and models have emerged, described variously as a "layered hierarchy", "architecture", "framework", "network", "thinking mode", "style", "pattern", "theory", "methodology", "model", or "graph". The more hypotheses and assumptions that accumulate around the current uses of data, information, knowledge, and wisdom resources, the less effectively and efficiently those resources can be used, and the higher the cost of collecting, accumulating, and processing them.

Toward a more general AI landscape, one that maps to real situations where we have only small or insufficient data, partial information, and diversified knowledge under a vague value strategy, we propose to integrate the power and value of data, information, knowledge, and wisdom resources. Through conversions among the four, this integration can serve more general AI application scenarios at lower cost while improving effectiveness and efficiency. In daily reality, we might accept appropriate imprecision, partial correctness, and acceptable uncertainty in data, information, knowledge, and wisdom, rather than pursue overprecision, complete correctness, and full certainty at an unjustifiable cost. In merging and transforming models among DIKW elements and the DIKW architecture (e.g., the data graph, information graph, knowledge graph, and wisdom graph), we expect value-driven solutions that optimize both efficiency and effectiveness while catering to cross-cutting human purposes.
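The layered conversions among DIKW resources can be illustrated with a minimal, hypothetical sketch (the toy observations, rules, and variable names below are our own illustration, not drawn from any particular DIKW model):

```python
from collections import Counter

# Data graph: raw, uninterpreted observations.
data = ["rain", "rain", "sun", "rain", "sun", "rain"]

# Information graph: data placed in context (here, frequency counts).
information = Counter(data)

# Knowledge graph: a rule abstracted from the information.
knowledge = {"most_frequent_weather": information.most_common(1)[0][0]}

# Wisdom: a purpose-driven decision applying the knowledge to human goals.
wisdom = ("carry an umbrella"
          if knowledge["most_frequent_weather"] == "rain"
          else "leave the umbrella at home")

print(wisdom)  # the decision reached by walking data -> information -> knowledge -> wisdom
```

The point of the sketch is only that each layer adds context or purpose to the one below it; real DIKW conversions operate over far richer graph structures.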

To tap into the benefits and uses of DIKW, its design principles and foundations must be explored so as to ensure an explainable and interactive AI landscape of crossing models built on DIKW premises. This Special Issue reports the latest advances and developments, experimental and theoretical, in the theories, design mechanisms, and extensions of data, information, knowledge, and wisdom interactions across all areas and phases. It covers issues such as the uncertainties of multimodal content semantic traceability, relevance, migration, and interaction, and the evolution of multimodal contexts or environments. This investigation should lead to new solutions for complex content identification, modeling, processing, and service optimization amid massive content interaction in multidimensional, multimodal, multiscale physical and digital spaces, covering data collection, information analysis, knowledge reasoning, and wisdom strategies against the background of the AI trend.

Topics for discussion in this Special Issue include but are not limited to:

  • Application of knowledge representation techniques to semantic modeling
  • Data integration, metadata management, and interoperability
  • Data, information, and knowledge transformation/conversion
  • Data mining and knowledge discovery
  • Data models, information semantics, and query languages
  • Data provenance, cleaning, and curation
  • Data visualization and interactive data exploration
  • Development and management of heterogeneous knowledge bases
  • Domain modeling and ontology building
  • Information storage and retrieval and interface technology
  • Management of data, information, and knowledge hybrid systems
  • Multimedia and cross-modal “Databases”
  • Optimization techniques of DIKW applications
  • Theories of DIKW models and performance evaluation techniques
  • Crossing model interoperability inside and between applications
  • Guidelines and best practices for DIKW architecture
  • Privacy, trust, and security of DIKW architecture 

Dr. Yucong Duan
Dr. Ejub Kajan
Dr. Zakaria Maamar
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7 papers)

Research

16 pages, 1571 KiB  
Article
About Challenges in Data Analytics and Machine Learning for Social Good
by Riccardo Martoglia and Manuela Montangero
Information 2022, 13(8), 359; https://doi.org/10.3390/info13080359 - 27 Jul 2022
Viewed by 2690
Abstract
The large number of new services and applications and, in general, all our everyday activities result in mass data production: all these data can become a golden source of information that might be used to improve our lives, wellness and working days. (Interpretable) Machine Learning approaches, the use of which is increasingly ubiquitous in various settings, are definitely one of the most effective tools for retrieving and obtaining essential information from data. However, many challenges arise in order to effectively exploit them. In this paper, we analyze key scenarios in which large amounts of data and machine learning techniques can be used for social good: social network analytics for enhancing cultural heritage dissemination; game analytics to foster Computational Thinking in education; medical analytics to improve the quality of life of the elderly and reduce health care expenses; exploration of work datafication potential in improving the management of human resources (HRM). For the first two of the previously mentioned scenarios, we present new results related to previously published research, framing these results in a more general discussion over challenges arising when adopting machine learning techniques for social good.

28 pages, 4259 KiB  
Article
A Systematic Procedure for Utilization of Product Usage Information in Product Development
by Quan Deng and Klaus-Dieter Thoben
Information 2022, 13(6), 267; https://doi.org/10.3390/info13060267 - 25 May 2022
Cited by 3 | Viewed by 3496
Abstract
Product design is crucial for product success. Many approaches can improve product design quality, such as concurrent engineering and design for X. This study focuses on applying product usage information (PUI) during product development. As emerging technologies become widespread, an enormous amount of product-related information is available in the middle of a product’s life, such as customer reviews, condition monitoring, and maintenance data. In recent years, the literature describes the application of data analytics technologies such as machine learning to promote the integration of PUI during product development. However, as of today, PUI is not efficiently exploited in product development. One of the critical issues to achieve this is identifying and integrating task-relevant PUI fit for purposes of different product development tasks. Nevertheless, preparing task-relevant PUI that fits different product development tasks is often ignored. This study addresses this research gap in preparing task-relevant PUI and rectifies the related shortcomings and challenges. By considering the context in which PUI is utilized, this paper presents a systematic procedure to help identify and specify developers’ information needs and propose relevant PUI fitting the actual information needs of their current product development task. We capitalize on an application scenario to demonstrate the applicability of the proposed approach.

20 pages, 11195 KiB  
Article
Integrating, Indexing and Querying the Tangible and Intangible Cultural Heritage Available Online: The QueryLab Portal
by Maria Teresa Artese and Isabella Gagliardi
Information 2022, 13(5), 260; https://doi.org/10.3390/info13050260 - 19 May 2022
Cited by 9 | Viewed by 3261
Abstract
Cultural heritage inventories have been created to collect and preserve the culture and to allow the participation of stakeholders and communities, promoting and disseminating their knowledge. There are two types of inventories: those that provide data access via web services or open data, and those that are closed to external access and can be visited only through dedicated web sites, generating data silo problems. The integration of data harvested from different archives enables comparison of the cultures and traditions of places from opposite sides of the world, showing how people have more in common than expected. The purpose of the developed portal is to provide query tools managing the web services provided by cultural heritage databases in a transparent way, allowing the user to make a single query and obtain results from all inventories considered at the same time. Moreover, with the introduction of the ICH-Light model, specifically studied for the mapping of intangible heritage, data from inventories of this domain can also be harvested, indexed and integrated into the portal, allowing the creation of an environment dedicated to intangible data where traditions, knowledge, rituals and festive events can be found and searched all together.

13 pages, 1090 KiB  
Article
We Can Define the Domain of Information Online and Thus Globally Uniformly
by Wolfgang Orthuber
Information 2022, 13(5), 256; https://doi.org/10.3390/info13050256 - 16 May 2022
Viewed by 3250
Abstract
Any information is (transported as) a selection from an ordered set, which is the “domain” of the information. For example, any piece of digital information is a number sequence that represents such a selection. Its senders and receivers (with software) should know the format and domain of the number sequence in a uniform way worldwide. So far, this is not guaranteed. However, it can be guaranteed after the introduction of the new “Domain Vector” (DV) data structure: “UL plus number sequence”. Thereby “UL” is a “Uniform Locator”, which is an efficient global pointer to the machine-readable online definition of the number sequence. The online definition can be adapted to the application so that the DV represents the application-specific, reproducible features in a precise (one-to-one), comparable, and globally searchable manner. The systematic, nestable online definition of domains of digital information (number sequences) and the globally defined DV data structure have great technical potential and are recommended as a central focus of future computer science.
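The “UL plus number sequence” structure described in this abstract can be sketched as follows (a hypothetical illustration only; the field names, the comparison rule, and the example UL are our own, not taken from the paper):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class DomainVector:
    """A Domain Vector (DV): a Uniform Locator (UL) pointing to a
    machine-readable online definition of a numeric domain, plus the
    number sequence that selects one element from that domain."""
    ul: str                    # global pointer to the domain definition
    values: Tuple[float, ...]  # the number sequence (the selection)

    def comparable_with(self, other: "DomainVector") -> bool:
        # In this sketch, two DVs are directly comparable only if they
        # share the same domain definition and the same sequence length.
        return self.ul == other.ul and len(self.values) == len(other.values)

# Hypothetical example: two measurements defined by the same online domain.
a = DomainVector("https://example.org/domains/blood-pressure", (120.0, 80.0))
b = DomainVector("https://example.org/domains/blood-pressure", (135.0, 85.0))
print(a.comparable_with(b))  # True: same domain, same dimensionality
```

Because every DV carries the pointer to its own definition, values from different senders become comparable and searchable without out-of-band agreement on formats.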

32 pages, 9129 KiB  
Article
A Framework for Online Public Health Debates: Some Design Elements for Visual Analytics Systems
by Anton Ninkov and Kamran Sedig
Information 2022, 13(4), 201; https://doi.org/10.3390/info13040201 - 15 Apr 2022
Viewed by 3160
Abstract
Nowadays, many people are deeply concerned about their physical well-being; as a result, they invest much time and effort investigating health-related topics. In response to this, many online websites and social media profiles have been created, resulting in a plethora of information on such topics. In a given topic, oftentimes, much of the information is conflicting, resulting in online camps that have different positions and arguments. We refer to the collection of all such positionings and entrenched camps on a topic as an online public health debate. The information people encounter regarding such debates can ultimately influence how they make decisions, what they believe, and how they act. Therefore, there is a need for public health stakeholders (i.e., people with a vested interest in public health issues) to be able to make sense of online debates quickly and accurately. In this paper, we present a framework-based approach for investigating online public health debates—a preliminary work that can be expanded upon. We first introduce the concept of online debate entities (ODEs), which is a generalization for those who participate in online debates (e.g., websites and Twitter profiles). We then present the framework ODIN (Online Debate entIty aNalyzer), in which we identify, define, and justify ODE attributes that we consider important for making sense of online debates. Next, we provide an overview of four online public health debates (vaccines, statins, cannabis, and dieting plans) using ODIN. Finally, we showcase four prototype visual analytics systems whose design elements are informed by the ODIN framework.

13 pages, 310 KiB  
Article
COVID-19 and Science Communication: The Recording and Reporting of Disease Mortality
by Ognjen Arandjelović
Information 2022, 13(2), 97; https://doi.org/10.3390/info13020097 - 18 Feb 2022
Cited by 1 | Viewed by 2132
Abstract
The ongoing COVID-19 pandemic has brought science to the fore of public discourse and, considering the complexity of the issues involved, with it also the challenge of effective and informative science communication. This is a particularly contentious topic, in that it is both highly emotional in and of itself; sits at the nexus of the decision-making process regarding the handling of the pandemic, which has effected lockdowns, social behaviour measures, business closures, and others; and concerns the recording and reporting of disease mortality. To clarify a point that has caused much controversy and anger in the public debate, the first part of the present article discusses the very fundamentals underlying the issue of causative attribution with regards to mortality, lays out the foundations of the statistical means of mortality estimation, and concretizes these by analysing the recording and reporting practices adopted in England and their widespread misrepresentations. The second part of the article is empirical in nature. I present data and an analysis of how COVID-19 mortality has been reported in the mainstream media in the UK and the USA, including a comparative analysis both across the two countries as well as across different media outlets. The findings clearly demonstrate a uniform and worrying lack of understanding of the relevant technical subject matter by the media in both countries. Of particular interest is the finding that with a remarkable regularity (ρ>0.998), the greater the number of articles a media outlet has published on COVID-19 mortality, the greater the proportion of its articles misrepresented the disease mortality figures.
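The ρ reported in this abstract is a rank correlation; a toy illustration of how such a statistic (Spearman's ρ) behaves is sketched below (the numbers are synthetic, not the paper's data, and the helper functions are our own):

```python
def rank(xs):
    # Simple ranking, 1-based; assumes no ties in this toy example.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def spearman_rho(x, y):
    # Spearman's rank correlation via the classic formula
    # rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), valid without ties.
    n = len(x)
    rx, ry = rank(x), rank(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Synthetic per-outlet figures: article counts vs. share of misrepresentations.
articles_published = [12, 45, 88, 130, 210]
share_misrepresenting = [0.10, 0.22, 0.35, 0.41, 0.58]
print(spearman_rho(articles_published, share_misrepresenting))  # 1.0 for perfectly monotone toy data
```

A ρ near 1, as reported, means the two quantities rise together almost perfectly in rank order, regardless of the exact functional form.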

29 pages, 7877 KiB  
Article
Interfaces for Searching and Triaging Large Document Sets: An Ontology-Supported Visual Analytics Approach
by Jonathan Demelo and Kamran Sedig
Information 2022, 13(1), 8; https://doi.org/10.3390/info13010008 - 27 Dec 2021
Cited by 1 | Viewed by 3383
Abstract
We investigate the design of ontology-supported, progressively disclosed visual analytics interfaces for searching and triaging large document sets. The goal is to distill a set of criteria that can help guide the design of such systems. We begin with a background of information search, triage, machine learning, and ontologies. We review research on the multi-stage information-seeking process to distill the criteria. To demonstrate their utility, we apply the criteria to the design of a prototype visual analytics interface: VisualQUEST (Visual interface for QUEry, Search, and Triage). VisualQUEST allows users to plug-and-play document sets and expert-defined ontology files within a domain-independent environment for multi-stage information search and triage tasks. We describe VisualQUEST through a functional workflow and culminate with a discussion of ongoing formative evaluations, limitations, future work, and summary.
