Software Engineering and Data Science II

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Big Data and Augmented Intelligence".

Deadline for manuscript submissions: closed (30 June 2023) | Viewed by 19736

Special Issue Editor


Dr. Davide Tosi
Guest Editor
Department of Theoretical and Applied Sciences (DiSTA), University of Insubria, 21100 Varese, Italy
Interests: software quality; big data; data analysis

Special Issue Information

Dear Colleagues,

Following the great success of Volume I of our Special Issue titled “Software Engineering and Data Science” in Future Internet, we are now launching Volume II. This Special Issue is devoted to recent trends and advances in engineering data-intensive software solutions. In the last few years, data-driven software solutions have received a great deal of attention in research and development at the academic, industry, business, and government levels, as a way to exploit the hidden knowledge that big data can offer to the cities and citizens of the future. However, data-driven software solutions differ from “traditional” software development projects: the core of development focuses on managing data (e.g., data storage and data quality) and on designing behavioral models with the aid of artificial intelligence and machine learning techniques. To this end, new life cycles, algorithms, methods, processes, and tools are required. We invite original and innovative contributions that address all phases of the life cycle of data-driven software solutions, in order to effectively tackle the challenges of developing, testing, and maintaining such systems.

Dr. Davide Tosi
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • software life cycle
  • data science
  • big data
  • data analysis
  • artificial intelligence
  • data-driven development
  • machine learning
  • agile development
  • DevOps
  • citizen data science
  • smart cities
  • engineering AI-intensive systems

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (7 papers)

Editorial

2 pages, 168 KiB  
Editorial
Editorial for the Special Issue on “Software Engineering and Data Science”, Volume II
by Davide Tosi
Future Internet 2023, 15(9), 312; https://doi.org/10.3390/fi15090312 - 16 Sep 2023
Viewed by 1140
Abstract
The Special Issue “Software Engineering and Data Science, Volume II” is the natural continuation of its greatly successful predecessor, Volume I [...] Full article
(This article belongs to the Special Issue Software Engineering and Data Science II)

Research

26 pages, 488 KiB  
Article
Synchronizing Many Filesystems in Near Linear Time
by Elod P. Csirmaz and Laszlo Csirmaz
Future Internet 2023, 15(6), 198; https://doi.org/10.3390/fi15060198 - 30 May 2023
Cited by 1 | Viewed by 1903
Abstract
Finding a provably correct subquadratic synchronization algorithm for many filesystem replicas is one of the main theoretical problems in operational transformation (OT) and conflict-free replicated data types (CRDT) frameworks. Based on the algebraic theory of filesystems, which incorporates non-commutative filesystem commands natively, we developed and built a proof-of-concept implementation of an algorithm suite which synchronizes an arbitrary number of replicas. The result is provably correct, and the synchronized system is created in linear space and time after an initial sorting phase. It works by identifying conflicting command pairs and requesting one of the commands to be removed. The method can be guided to reach any of the theoretically possible synchronized states. The algorithm also allows asynchronous usage. After the client sends a synchronization request, the local replica remains available for further modifications. When the synchronization instructions arrive, they can be merged with the changes made since the synchronization request. The suite also works on filesystems with a directed acyclic graph-based path structure in place of the traditional tree-like arrangement. Consequently, our algorithms apply to filesystems with hard or soft links as long as the links create no loops. Full article
(This article belongs to the Special Issue Software Engineering and Data Science II)
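
To give a concrete (if drastically simplified) picture of the conflict-detection step, the sketch below groups commands by path after a single sort and flags paths that several replicas modified in non-identical ways. It is a toy illustration, not the authors' algebraic framework: the Command record and the conflict rule are assumptions made here for brevity.

```python
from collections import namedtuple
from itertools import groupby

# Toy command model (not the paper's filesystem algebra): each replica
# contributes commands of the form (replica id, path, operation).
Command = namedtuple("Command", ["replica", "path", "op"])

def find_conflicts(commands):
    """Group commands by path and flag paths touched by several replicas
    with non-identical operations. Apart from the initial sort, each
    command is inspected once, mirroring the 'linear after sorting' idea."""
    conflicts = []
    ordered = sorted(commands, key=lambda c: c.path)  # initial sorting phase
    for path, group in groupby(ordered, key=lambda c: c.path):
        group = list(group)
        replicas = {c.replica for c in group}
        ops = {c.op for c in group}
        if len(replicas) > 1 and len(ops) > 1:
            conflicts.append((path, group))
    return conflicts

if __name__ == "__main__":
    cmds = [
        Command("A", "/docs/report.txt", "edit"),
        Command("B", "/docs/report.txt", "delete"),   # conflicts with A's edit
        Command("C", "/img/logo.png", "create"),
    ]
    for path, group in find_conflicts(cmds):
        print(path, "->", [(c.replica, c.op) for c in group])
        # a real synchronizer would now request one of the commands to be removed
```

In the paper's setting, each detected pair would trigger a request to drop one of the two commands, which is where the guidance toward a particular synchronized state comes in.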

20 pages, 2842 KiB  
Article
Envisioning Architecture of Metaverse Intensive Learning Experience (MiLEx): Career Readiness in the 21st Century and Collective Intelligence Development Scenario
by Eman AbuKhousa, Mohamed Sami El-Tahawy and Yacine Atif
Future Internet 2023, 15(2), 53; https://doi.org/10.3390/fi15020053 - 30 Jan 2023
Cited by 25 | Viewed by 6270
Abstract
The metaverse presents a new opportunity to construct personalized learning paths and to promote practices that scale the development of future skills and collective intelligence. The attitudes, knowledge and skills that are necessary to face the challenges of the 21st century should be developed through iterative cycles of continuous learning, where learners are enabled to experience, reflect, and produce new ideas while participating in a collective creativity process. In this paper, we propose an architecture to develop a metaverse-intensive learning experience (MiLEx) platform with an illustrative scenario that reinforces the development of 21st century career practices and collective intelligence. The learning ecosystem of MiLEx integrates four key elements: (1) key players that define the main actors and their roles in the learning process; (2) a learning context that defines the learning space and the networks of expected interactions among human and non-human objects; (3) experiential learning instances that deliver education via a real-life–virtual merge; and (4) technology support for building practice communities online, developing experiential cycles and transforming knowledge between human and non-human objects within the community. The proposed MiLEx architecture incorporates sets of technological and data components to (1) discover/profile learners and design learner-centric, theoretically grounded and immersive learning experiences; (2) create elements and experiential learning scenarios; (3) analyze learner’s interactive and behavioral patterns; (4) support the emergence of collective intelligence; (5) assess learning outcomes and monitor the learner’s maturity process; and (6) evaluate experienced learning and recommend future experiences. We also present the MiLEx continuum as a cyclic flow of information to promote immersive learning. Finally, we discuss some open issues to increase the learning value and propose some future work suggestions to further shape the transformative potential of metaverse-based learning environments. Full article
(This article belongs to the Special Issue Software Engineering and Data Science II)
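
As a way to visualize the four ecosystem elements listed in the abstract, the following sketch models them as plain data classes. All class and field names are hypothetical illustrations introduced here; they are not an API or schema from the paper.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Hypothetical data model for the four MiLEx ecosystem elements.

@dataclass
class KeyPlayer:
    name: str
    role: str                                   # e.g. "learner", "mentor", "virtual agent"

@dataclass
class LearningContext:
    space: str                                  # virtual space hosting the experience
    interactions: List[Tuple[str, str]] = field(default_factory=list)  # (actor, object) links

@dataclass
class ExperientialInstance:
    scenario: str                               # real-life / virtual merged activity
    outcomes: Dict[str, float] = field(default_factory=dict)           # assessed skills

@dataclass
class TechnologySupport:
    community_tools: List[str] = field(default_factory=list)
    analytics: List[str] = field(default_factory=list)

@dataclass
class MiLExExperience:
    players: List[KeyPlayer]
    context: LearningContext
    instances: List[ExperientialInstance]
    support: TechnologySupport

# Minimal composition example
exp = MiLExExperience(
    players=[KeyPlayer("Ada", "learner")],
    context=LearningContext(space="virtual port terminal"),
    instances=[ExperientialInstance(scenario="team logistics simulation")],
    support=TechnologySupport(community_tools=["forum"], analytics=["engagement tracking"]),
)
```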

28 pages, 3412 KiB  
Article
A Model Based Framework for IoT-Aware Business Process Management
by Paolo Bocciarelli, Andrea D’Ambrogio and Tommaso Panetti
Future Internet 2023, 15(2), 50; https://doi.org/10.3390/fi15020050 - 28 Jan 2023
Cited by 6 | Viewed by 2899
Abstract
Business Processes (BPs) that exchange data with Internet of Things (IoT) devices, briefly referred to as IoT-aware BPs, are gaining momentum in the Business Process Management (BPM) field. Introducing IoT technologies from the early stages of the BP development process requires dealing with the complexity and heterogeneity of such technologies at design and analysis time. This paper analyzes widely used IoT frameworks and ontologies to introduce a BPMN extension that improves the expressiveness of relevant BP modeling notations and allows an appropriate representation of IoT devices from both an architectural and a behavioral perspective. In the BP management field, the use of simulation-based approaches is recognized as an effective technology for analyzing BPs. Simulation models need to be parameterized according to relevant properties of the process under study. Unfortunately, such parameters may change during the process operational life, thus making the simulation model invalid with respect to the actual process behavior. To ease the analysis of IoT-aware BPs, this paper introduces a model-driven method for the automated development of digital twins of actual business processes. The proposed method also exploits data retrieved by IoT sensors to automatically reconfigure the simulation model, to make the digital twin continuously coherent and compliant with its actual counterpart. Full article
(This article belongs to the Special Issue Software Engineering and Data Science II)
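
The idea of keeping the digital twin's simulation parameters aligned with IoT observations can be sketched as a simple parameter-refresh loop. The snippet below is a hypothetical illustration, not the paper's model-driven tooling: it assumes sensor readings arrive as dictionaries and re-estimates a single simulation parameter (mean service time) over a sliding window.

```python
from collections import deque
from statistics import mean

class DigitalTwinParameters:
    """Keeps one simulation parameter aligned with observed IoT data.
    Hypothetical sketch: the actual framework derives and reconfigures
    the simulation model automatically from extended BPMN models."""

    def __init__(self, window: int = 50, initial_service_time: float = 5.0):
        self.samples = deque(maxlen=window)        # sliding window of observations
        self.mean_service_time = initial_service_time

    def on_sensor_reading(self, reading: dict) -> None:
        # e.g. reading = {"task": "inspect_item", "duration_s": 6.2}
        self.samples.append(reading["duration_s"])
        self.mean_service_time = mean(self.samples)

    def simulation_config(self) -> dict:
        # parameters handed to the BP simulator before the next run
        return {"service_time_mean_s": self.mean_service_time}

if __name__ == "__main__":
    twin = DigitalTwinParameters()
    for d in (4.8, 6.1, 7.3):                      # simulated sensor stream
        twin.on_sensor_reading({"task": "inspect_item", "duration_s": d})
    print(twin.simulation_config())                # mean of the observed durations
```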

21 pages, 412 KiB  
Article
Data Synchronization: A Complete Theoretical Solution for Filesystems
by Elod P. Csirmaz and Laszlo Csirmaz
Future Internet 2022, 14(11), 344; https://doi.org/10.3390/fi14110344 - 21 Nov 2022
Cited by 2 | Viewed by 2246
Abstract
Data reconciliation in general, and filesystem synchronization in particular, lacks rigorous theoretical foundation. This paper presents, for the first time, a complete analysis of synchronization for two replicas of a theoretical filesystem. Synchronization has two main stages: identifying the conflicts, and resolving them. All existing (both theoretical and practical) synchronizers are operation-based: they define, using some rationale or heuristics, how conflicts are to be resolved without considering the effect of the resolution on subsequent conflicts. Instead, our approach is declaration-based: we define what constitutes the resolution of all conflicts, and for each possible scenario we prove the existence of sequences of operations/commands which convert the replicas into a common synchronized state. These sequences consist of operations rolling back some local changes, followed by operations performed on the other replica. The set of rolled-back operations provides the user with clear and intuitive information on the proposed changes, so she can easily decide whether to accept them or ask for other alternatives. All possible synchronized states are described by specifying a set of conflicts, a partial order on the conflicts describing the order in which they need to be resolved, as well as the effect of each decision on subsequent conflicts. Using this classification, the outcomes of different conflict resolution policies can be investigated easily. Full article
(This article belongs to the Special Issue Software Engineering and Data Science II)
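
The shape of a proposed synchronized state, rolling back some local commands and then replaying the other replica's commands, can be illustrated with a deliberately reduced two-replica model. The sketch below is not the paper's declaration-based construction; it assumes flat per-path operations and a fixed policy in which the remote replica wins every conflict, while reporting the rolled-back local commands so a user could accept the proposal or ask for an alternative.

```python
def synchronize(local_ops, remote_ops):
    """local_ops / remote_ops: dicts mapping path -> operation label.
    Returns (rollback, apply_seq): commands undoing conflicting local
    changes, followed by the remote commands to replay. Toy policy:
    the remote replica wins every conflict."""
    rollback, apply_seq = [], []
    for path, remote_op in remote_ops.items():
        local_op = local_ops.get(path)
        if local_op is not None and local_op != remote_op:
            rollback.append((path, f"undo {local_op}"))   # shown to the user
        apply_seq.append((path, remote_op))
    return rollback, apply_seq

if __name__ == "__main__":
    local = {"/a.txt": "edit", "/b.txt": "delete"}
    remote = {"/a.txt": "delete", "/c.txt": "create"}
    rollback, apply_seq = synchronize(local, remote)
    print("roll back:", rollback)      # [('/a.txt', 'undo edit')]
    print("then apply:", apply_seq)    # [('/a.txt', 'delete'), ('/c.txt', 'create')]
```

In the paper, the choice of which side "wins" each conflict is not fixed: all possible outcomes are characterized, together with a partial order describing how resolving one conflict affects the others.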

13 pages, 858 KiB  
Article
Distributed Big Data Storage Infrastructure for Biomedical Research Featuring High-Performance and Rich-Features
by Xingjian Xu, Lijun Sun and Fanjun Meng
Future Internet 2022, 14(10), 273; https://doi.org/10.3390/fi14100273 - 24 Sep 2022
Cited by 1 | Viewed by 2041
Abstract
The biomedical field entered the era of “big data” years ago, and a lot of software is being developed to tackle the analysis problems brought on by big data. However, very few programs focus on providing a solid foundation for file systems of biomedical big data. Since file systems are a key prerequisite for efficient big data utilization, the absence of specialized biomedical big data file systems makes it difficult to optimize storage, accelerate analysis, and enrich functionality, resulting in inefficiency. Here we present F3BFS, a functional, fundamental, and future-oriented distributed file system, specially designed for various kinds of biomedical data. F3BFS makes it possible to boost existing software’s performance without modifying its main algorithms by transmitting raw datasets from generic file systems. Further, F3BFS has various built-in features to help researchers manage biology datasets more efficiently and productively, including metadata management, fuzzy search, automatic backup, transparent compression, etc. Full article
(This article belongs to the Special Issue Software Engineering and Data Science II)
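
Two of the built-in features mentioned in the abstract, transparent compression and fuzzy search over metadata, can be mimicked in a very reduced form on top of a generic filesystem. The snippet below is a hypothetical sketch with no relation to the actual F3BFS code or API: files are gzip-compressed on write, decompressed on read, and dataset names are matched fuzzily against a JSON metadata index.

```python
import gzip
import json
from difflib import get_close_matches
from pathlib import Path

class TinyBioStore:
    """Hypothetical mini-store: transparent gzip compression plus a
    fuzzy-searchable metadata index kept in a JSON sidecar file."""

    def __init__(self, root: str):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)
        self.index_file = self.root / "metadata.json"
        self.index = json.loads(self.index_file.read_text()) if self.index_file.exists() else {}

    def put(self, name: str, data: bytes, metadata: dict) -> None:
        (self.root / (name + ".gz")).write_bytes(gzip.compress(data))  # transparent compression
        self.index[name] = metadata
        self.index_file.write_text(json.dumps(self.index))

    def get(self, name: str) -> bytes:
        return gzip.decompress((self.root / (name + ".gz")).read_bytes())

    def fuzzy_search(self, query: str, cutoff: float = 0.6):
        # naive fuzzy match over dataset names; a real system indexes much more
        return get_close_matches(query, list(self.index), cutoff=cutoff)

if __name__ == "__main__":
    store = TinyBioStore("/tmp/tiny_bio_store")
    store.put("sample_genome_001", b"ACGT" * 1000, {"organism": "E. coli"})
    print(store.fuzzy_search("sampel_genome"))     # ['sample_genome_001']
    print(len(store.get("sample_genome_001")))     # 4000
```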

19 pages, 548 KiB  
Article
Deep Learning Forecasting for Supporting Terminal Operators in Port Business Development
by Marco Ferretti, Ugo Fiore, Francesca Perla, Marcello Risitano and Salvatore Scognamiglio
Future Internet 2022, 14(8), 221; https://doi.org/10.3390/fi14080221 - 25 Jul 2022
Cited by 5 | Viewed by 2030
Abstract
Accurate forecasts of containerised freight volumes are unquestionably important for port terminal operators to organise port operations and develop business plans. They are also relevant for port authorities, regulators, and governmental agencies dealing with transportation. In a time when deep learning is in the limelight, owing to a consistent string of success stories, it is natural to apply it to the task of forecasting container throughput. Given the number of options, practitioners can benefit from the lessons learned in applying deep learning models to the problem. Accordingly, in this work, we devise a number of multivariate predictive models based on deep learning, analysing and assessing their performance to identify the architecture and set of hyperparameters that prove to be better suited to the task, also comparing the quality of the forecasts with seasonal autoregressive integrated moving average models. Furthermore, an innovative representation of seasonality is given by means of an embedding layer that produces a mapping in a latent space, with the parameters of such mapping being tuned using the quality of the predictions. Finally, we present some managerial implications, also highlighting the research limitations and future opportunities. Full article
(This article belongs to the Special Issue Software Engineering and Data Science II)
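
The seasonality embedding described in the abstract can be approximated with a small Keras model: a month index feeds an embedding layer whose output is concatenated with lagged numeric features before the dense layers, so the latent representation of seasonality is tuned by the forecasting loss. This is a generic sketch of the technique with made-up dimensions and synthetic data, not the authors' architecture or hyperparameters.

```python
import numpy as np
from tensorflow.keras import layers, Model

# Inputs: 12 lagged throughput values (numeric) and the month of the target
# period (categorical, 0-11) mapped to a learned 4-dimensional embedding.
lags_in = layers.Input(shape=(12,), name="lagged_throughput")
month_in = layers.Input(shape=(1,), dtype="int32", name="target_month")

month_emb = layers.Embedding(input_dim=12, output_dim=4)(month_in)  # latent seasonality
month_emb = layers.Flatten()(month_emb)

x = layers.Concatenate()([lags_in, month_emb])
x = layers.Dense(32, activation="relu")(x)
x = layers.Dense(16, activation="relu")(x)
out = layers.Dense(1, name="forecast")(x)

model = Model(inputs=[lags_in, month_in], outputs=out)
model.compile(optimizer="adam", loss="mse")

# Synthetic demo data only; a real container-throughput series would replace this.
X_lags = np.random.rand(256, 12).astype("float32")
X_month = np.random.randint(0, 12, size=(256, 1)).astype("int32")
y = np.random.rand(256, 1).astype("float32")
model.fit([X_lags, X_month], y, epochs=2, batch_size=32, verbose=0)
```

Because the embedding weights are trained end to end with the forecasting objective, months with similar demand patterns end up close together in the latent space, which is the intuition behind this representation of seasonality.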
