Current Trends on Data Management

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (15 October 2024) | Viewed by 6860

Special Issue Editors


Guest Editor
Department of Telematic Engineering, Universidad Carlos III de Madrid, 28911 Madrid, Spain
Interests: software engineering; automated testing; cloud computing

Guest Editor
Department of Telematic Engineering, Universidad Carlos III de Madrid, 28911 Madrid, Spain
Interests: wearable technologies for health and wellbeing applications; mobile and pervasive computing for assistive living; Internet of Things and assistive technologies; machine learning algorithms for physiological, inertial, and location sensors; personal assistants and coaching for health self-management; activity detection and prediction methods

Guest Editor
Department of Telematic Engineering, Universidad Carlos III de Madrid, 28911 Madrid, Spain
Interests: Internet measurements; web transparency; online advertising; data analysis; machine learning

Guest Editor
Department of Telematic Engineering, Universidad Carlos III de Madrid, 28911 Madrid, Spain
Interests: machine learning; deep learning; smart grids; data science

Special Issue Information

Dear Colleagues,

We live in a highly data-driven, digital-first era. The creation of high volumes of data leads to the formation of large repositories, which necessitate novel techniques to analyze, process, and structure these data. The term "data management" refers to the processes and procedures employed in handling data: creating, implementing, and overseeing the plans, policies, programs, and practices that ensure the proper delivery, control, protection, and enhancement of the value of data and information assets over their entire lifecycle.

New trends in cloudification, microservices, open data, and software as a service (SaaS), among others, have a significant impact on data management. In parallel, new approaches to Artificial Intelligence (AI), Machine Learning (ML), and Natural Language Processing (NLP) require the management of increasingly large datasets (e.g., for Large Language Models, LLMs) and of synthetic data (i.e., artificially generated data).

The main objective of this Special Issue is to bring together novel and impactful research on data management and the adoption of these techniques for practical applications. Specific topics of interest include, but are not limited to, the following:

  • Data governance;
  • Data storage and operations;
  • Data integration and interoperability;
  • Data confidentiality;
  • Data security and protection;
  • Data architecture and infrastructure;
  • Synthetic data;
  • Open data.

Dr. Boni García
Prof. Dr. Mario Munoz-Organero
Dr. Patricia Callejo
Prof. Miguel Ángel Hombrados
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • data management
  • data governance
  • data interoperability
  • data confidentiality
  • data security
  • data infrastructure
  • synthetic data
  • open data

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research


13 pages, 2731 KiB  
Article
Technology Keyword Analysis Using Graphical Causal Models
by Sunghae Jun
Electronics 2024, 13(18), 3670; https://doi.org/10.3390/electronics13183670 - 15 Sep 2024
Viewed by 526
Abstract
Technology keyword analysis (TKA) requires a different approach from general keyword analysis. While general keyword analysis identifies relationships between keywords, technology keyword analysis must uncover cause–effect relationships between technology keywords. Because the development of new technologies depends on previously researched and developed technologies, we need to build a causal inference model in which the previously developed technology is the cause and the newly developed technology is the effect. In this paper, we propose a technology keyword analysis method using causal inference modeling. To understand the causal relationships between technology keywords, we constructed a graphical causal model combining a graph structure with causal inference. To show how the proposed model can be applied in practical domains, we collected patent documents related to digital therapeutics technology from world patent databases and analyzed them with the graphical causal model. We expect our research to contribute to various aspects of technology management, such as research and development planning.
(This article belongs to the Special Issue Current Trends on Data Management)
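The graphical causal model in the article above is built on top of keyword occurrence data extracted from patent documents. As a toy illustration of that upstream step only (not the authors' implementation; the mini-abstracts and keyword list below are hypothetical stand-ins for a real patent corpus), counting keyword co-occurrence within documents can be sketched as:

```python
from collections import Counter
from itertools import combinations

# Hypothetical patent abstracts, stand-ins for a real patent database.
abstracts = [
    "digital therapeutics mobile application for cognitive therapy",
    "machine learning model for digital therapeutics adherence",
    "mobile application using machine learning for therapy",
]

keywords = ["digital therapeutics", "machine learning", "mobile application", "therapy"]

def cooccurrence(docs, terms):
    """Count how often each pair of keywords appears in the same document."""
    counts = Counter()
    for doc in docs:
        present = [t for t in terms if t in doc]
        for pair in combinations(sorted(present), 2):
            counts[pair] += 1
    return counts

for pair, n in sorted(cooccurrence(abstracts, keywords).items()):
    print(pair, n)
```

A co-occurrence (or document-term) matrix like this is symmetric; turning it into a *directed* causal graph is exactly the hard part the paper addresses with causal inference.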

26 pages, 2070 KiB  
Article
Leveraging Indoor Localization Data: The Transactional Area Network (TAN)
by Anastasios Nikolakopoulos, Alexandros Psychas, Antonios Litke and Theodora Varvarigou
Electronics 2024, 13(13), 2454; https://doi.org/10.3390/electronics13132454 - 22 Jun 2024
Viewed by 688
Abstract
The fields of indoor localization and positioning have seen extensive research in recent years. Their scientific soundness is of great importance, as information about an entity’s location in indoor environments can lead to innovative services and products. Various techniques and frameworks have been proposed, some of which are already in practical use. This article emphasizes the value of indoor localization data and proposes the adoption of a new virtual field known as the ‘Transactional Area Network’ (TAN). By presenting a custom yet simple real-time, peer-to-peer (and therefore decentralized) software implementation that provides positioning information to users via their smart devices, this article demonstrates the potential value of TAN. Finally, it explores how TAN can increase the adoption rate of indoor positioning applications, enhance interactions between people in nearby locations, and thereby amplify data generation.
(This article belongs to the Special Issue Current Trends on Data Management)

19 pages, 8915 KiB  
Article
A Comparative Study of Deep-Learning Autoencoders (DLAEs) for Vibration Anomaly Detection in Manufacturing Equipment
by Seonwoo Lee, Akeem Bayo Kareem and Jang-Wook Hur
Electronics 2024, 13(9), 1700; https://doi.org/10.3390/electronics13091700 - 27 Apr 2024
Cited by 2 | Viewed by 1508
Abstract
Speed reducers (SRs) and electric motors are crucial in modern manufacturing, especially within adhesive coating equipment. The electric motor transforms electrical power into the mechanical force that propels most machinery, while speed reducers control the speed and torque of rotating machinery, ensuring optimal performance and efficiency. Notably, variations in the chamber temperatures of adhesive coating machines and the use of specific adhesives can lead to defects in chains and jigs, causing possible breakdowns in the speed reducer and its surrounding components. This study introduces deep-learning autoencoder models for anomaly detection and presents a comparative assessment of them, enabling precise and predictive insights by modeling complex temporal relationships in the vibration data. The data acquisition framework adhered to data governance principles by maintaining data quality and consistency across storage and processing operations and by aligning with data management standards. This study should interest practitioners involved in data-centric processes, industrial engineering, and advanced manufacturing techniques.
(This article belongs to the Special Issue Current Trends on Data Management)
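The autoencoders compared in the study above share one detection principle: train on normal data only, then flag samples whose reconstruction error exceeds a threshold. A minimal sketch of that principle, using a linear (PCA-based) autoencoder on synthetic "vibration features" rather than the paper's deep models or real sensor data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated vibration feature windows: mostly normal, a few anomalous.
normal = rng.normal(0.0, 1.0, size=(200, 8))
normal[:, 1] = 0.5 * normal[:, 0]                      # correlated channels, as with real sensors
anomalies = rng.normal(0.0, 1.0, size=(5, 8)) + 6.0    # a shifted operating regime

def fit_linear_ae(X, k=3):
    """'Train' a linear autoencoder: keep the top-k principal directions of X."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def reconstruction_error(X, mu, components):
    Z = (X - mu) @ components.T      # encode into the k-dimensional latent space
    Xh = Z @ components + mu         # decode back to feature space
    return np.mean((X - Xh) ** 2, axis=1)

mu, comps = fit_linear_ae(normal)
err_train = reconstruction_error(normal, mu, comps)
threshold = np.percentile(err_train, 99)   # errors above this flag an anomaly

err_test = reconstruction_error(anomalies, mu, comps)
print((err_test > threshold).sum(), "of", len(anomalies), "anomalies flagged")
```

The deep models in the paper replace the linear projection with nonlinear encoders/decoders (and recurrent layers for temporal structure), but the train-on-normal, threshold-on-error workflow is the same.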

18 pages, 1231 KiB  
Article
A Formal Model for Reliable Data Acquisition and Control in Legacy Critical Infrastructures
by José Miguel Blanco, Jose M. Del Alamo, Juan C. Dueñas and Felix Cuadrado
Electronics 2024, 13(7), 1219; https://doi.org/10.3390/electronics13071219 - 26 Mar 2024
Viewed by 761
Abstract
The digital transformation of critical infrastructures, such as energy or water distribution systems, is essential for their smart management. Faster issue identification and smoother services enable better adaptation to consumers’ evolving demands. However, these large-scale infrastructures are often outdated, and their digital transformation is crucial to enable them to continue supporting societies. This process must be carefully planned, with guidance ensuring that the data these systems rely on are dependable and that the system remains fully operational during the transition. This paper presents a formal model that supports reliable data acquisition in legacy critical infrastructures, facilitating their evolution towards a data-driven smart system. Our model provides the foundation for a flexible transformation process while generating dependable data for system management. We demonstrate the model’s applicability in a use case within the water distribution domain and discuss its benefits.
(This article belongs to the Special Issue Current Trends on Data Management)

Review


20 pages, 782 KiB  
Review
Automated Repair of Smart Contract Vulnerabilities: A Systematic Literature Review
by Rasoul Kiani and Victor S. Sheng
Electronics 2024, 13(19), 3942; https://doi.org/10.3390/electronics13193942 - 6 Oct 2024
Viewed by 899
Abstract
The substantial value held by smart contracts (SCs) makes them an enticing target for malicious attacks. Fixing vulnerabilities in SCs is intricate, primarily due to the immutability of blockchain technology. This research paper introduces a systematic literature review (SLR) that evaluates rectification systems designed to patch vulnerabilities in SCs. Following the guidelines set forth by the PRISMA statement, this SLR meticulously reviews a total of 31 papers. In this context, we classify recently published SC automated repair frameworks based on their methodologies for automatic program repair (APR), their rewriting strategies, and their tools for vulnerability detection. We argue that automated patching enhances the reliability and adoption of SCs by allowing developers to promptly address identified vulnerabilities. However, existing automated repair tools can address only a restricted range of vulnerabilities, and in some cases, patches may not be effective in preventing the targeted vulnerabilities. Patch simplicity and the gas consumption of the modified program are further key considerations. Meanwhile, large language models (LLMs) have opened new avenues for automatic patch generation, and their performance can be improved through innovative methodologies.
(This article belongs to the Special Issue Current Trends on Data Management)

23 pages, 1911 KiB  
Review
Ethereum Smart Contract Vulnerability Detection and Machine Learning-Driven Solutions: A Systematic Literature Review
by Rasoul Kiani and Victor S. Sheng
Electronics 2024, 13(12), 2295; https://doi.org/10.3390/electronics13122295 - 12 Jun 2024
Cited by 1 | Viewed by 1661
Abstract
In recent years, emerging trends like smart contracts (SCs) and blockchain have promised to bolster data security. However, SCs deployed on Ethereum are vulnerable to malicious attacks. Adopting machine learning methods is proving to be a satisfactory alternative to conventional vulnerability detection techniques. Nevertheless, most current machine learning techniques depend on substantial expert knowledge and focus solely on well-known vulnerabilities. This paper puts forward a systematic literature review (SLR) of existing machine learning-based frameworks for vulnerability detection. The SLR follows the PRISMA statement and involves a detailed review of 55 papers. In this context, we classify recently published algorithms under three different machine learning perspectives. We explore state-of-the-art machine learning-driven solutions that deal with the class imbalance issue and with unknown vulnerabilities. We believe that algorithmic-level approaches, which emphasize the importance of the positive class and correct the bias towards the negative class, have the potential to provide a clear edge over data-level methods in addressing class imbalance, improving the efficiency of machine learning-based solutions in identifying various vulnerabilities in SCs. We argue that the detection of unknown vulnerabilities suffers from the absence of a unique definition. Moreover, current frameworks for detecting unknown vulnerabilities are structured to tackle vulnerabilities that exist objectively.
(This article belongs to the Special Issue Current Trends on Data Management)
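The algorithmic-level handling of class imbalance discussed in the review above can be illustrated with cost-sensitive learning: errors on the rare positive (vulnerable) class are weighted more heavily than errors on the abundant negative class. A minimal sketch on synthetic data (not any reviewed framework; the weighting factor and data layout are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Imbalanced toy data: few "vulnerable" (positive) samples, many benign ones.
X_neg = rng.normal(0.0, 1.0, size=(300, 2))
X_pos = rng.normal(2.0, 1.0, size=(15, 2))
X = np.vstack([X_neg, X_pos])
y = np.concatenate([np.zeros(300), np.ones(15)])

def train_weighted_logreg(X, y, pos_weight, lr=0.1, steps=2000):
    """Cost-sensitive logistic regression: positive-class errors weigh more."""
    w = np.zeros(X.shape[1])
    b = 0.0
    sample_w = np.where(y == 1, pos_weight, 1.0)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad = sample_w * (p - y)          # per-sample weighted logistic gradient
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def recall(X, y, w, b):
    pred = (X @ w + b) > 0
    return pred[y == 1].mean()

w_plain, b_plain = train_weighted_logreg(X, y, pos_weight=1.0)
w_cost, b_cost = train_weighted_logreg(X, y, pos_weight=20.0)
print("recall, unweighted:", recall(X, y, w_plain, b_plain))
print("recall, weighted:  ", recall(X, y, w_cost, b_cost))
```

Upweighting the positive class shifts the decision boundary toward the majority class, trading some false positives for better recall on the rare vulnerable class, which is the bias correction the review attributes to algorithmic-level approaches.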
