Best Practices, Challenges and Opportunities in Software Engineering

A special issue of Computers (ISSN 2073-431X).

Deadline for manuscript submissions: 31 December 2025 | Viewed by 29,049

Special Issue Editor

Faculty of Engineering and Computer Science, Concordia University, Montreal, QC H3G 1M8, Canada
Interests: software engineering; distributed computing; cloud-native services and architecture; applied machine learning

Special Issue Information

Dear Colleagues,

Software is pervasive in human society and has grown at an accelerated rate. Its capabilities have transformed all aspects of our lives, including governance, education, industry, economics, politics, social relations, and cultural development. With advances in hardware acceleration, communication technologies, and computing models, new types of software-delivered services and applications are emerging at a rapid pace. For example, a cloud-native service can fail over to a virtual node in mere minutes, whereas replacing a node in a traditional in-house data center takes hours. One challenge in this field is keeping the software lifecycle apace with these developments, addressing designs, architectures, operations, technical debt, costs, and emerging domains. At the same time, the massive expansion of software has begun to positively influence ethics and social relations, demonstrating the benefits of software for human society. This Special Issue calls for papers presenting technological innovations, novel research outcomes, and inspiring applications dedicated to best practices, challenges, and opportunities in the field of software engineering. We invite papers covering the following aspects of software engineering, among other relevant topics:

  • Ethics and social studies for software engineering;
  • Software design methods and best practices;
  • Requirements engineering;
  • Software testing for distributed computing and cloud services;
  • Software code analysis for emerging domains such as IoT, edge computing, blockchain, and autonomous driving;
  • Software repository mining;
  • Software architecture for large-scale software systems;
  • Software engineering for machine learning;
  • Machine learning for software engineering;
  • Software quality for emerging attributes, including trustworthiness, transparency, explainability, observability, auditability, and sustainability;
  • Software process for emerging domains and applications;
  • Software metrics and measurement for emerging attributes;
  • Software engineering for data science and engineering;
  • Tools, methods, and models for software development;
  • CI/CD and DevOps for specific aspects such as security, machine learning, XAI.

Dr. Yan Liu
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • software engineering
  • large-scale software systems
  • cloud computing
  • distributed computing
  • software services
  • software ethics
  • software social impact
  • sustainability
  • responsible software development

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (15 papers)


Research


20 pages, 552 KiB  
Article
A Review of Non-Functional Requirements Analysis Throughout the SDLC
by Cyrille Dongmo
Computers 2024, 13(12), 308; https://doi.org/10.3390/computers13120308 - 23 Nov 2024
Abstract
To date, unquestionable efforts have been made, both in academia and industry, to facilitate the development of functional requirements (FRs) throughout the different phases of the software development life cycle (SDLC). Functional requirements are understood to mean the users’ needs pertaining to the services to be rendered by a software system. For example, semi-formal or graphically based approaches such as UML, and mathematically based or formal approaches such as Z and related tools, have all been developed with the intention of addressing FRs. In the same vein, most of the proposed software methodologies, for instance, agile software development and model-driven software development, primarily target functional requirements. Considering the importance and even the criticality of non-functional requirements (NFRs), which describe the quality of software systems and the constraints upon them, similar progress would be expected for their development. However, it appears that making headway with NFRs has been more challenging due to the complexity of these requirements. In this regard, the main purpose of this work is to unveil (from the academic perspective) the current state of development of NFRs through a review of publications carefully selected from five online databases. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
16 pages, 282 KiB  
Article
The Conundrum Challenges for Research Software in Open Science
by Teresa Gomez-Diaz and Tomas Recio
Computers 2024, 13(11), 302; https://doi.org/10.3390/computers13110302 - 19 Nov 2024
Viewed by 350
Abstract
In the context of Open Science, the importance of Borgman’s conundrum challenges, initially formulated concerning the difficulties of sharing Research Data, is well known: which Research Data might be shared, by whom, with whom, under what conditions, why, and to what effects. We have recently reviewed the concepts of Research Software and Research Data, concluding with new formulations for their definitions and proposing answers to these conundrum challenges for Research Data. In the present work we extend the consideration of Borgman’s conundrum challenges to Research Software, providing answers to these questions in this new context. Moreover, we complete the initial list of questions and answers by asking how and where Research Software may be shared. Our approach begins by recalling the main issues involved in the definition of Research Software and its production context in the research environment, from the Open Science perspective. We then address the conundrum challenges for Research Software by exploring the potential similarities and differences with our answers to these questions in the case of Research Data. We conclude by emphasizing the usefulness of the methodology followed, which exploits the parallelism between Research Software and Research Data in the Open Science environment. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)

32 pages, 616 KiB  
Article
Program Equivalence in the Erlang Actor Model
by Péter Bereczky, Dániel Horpácsi and Simon Thompson
Computers 2024, 13(11), 276; https://doi.org/10.3390/computers13110276 - 23 Oct 2024
Viewed by 446
Abstract
This paper presents the formal semantics of concurrency in Core Erlang, an intermediate language for Erlang, along with a notion of program equivalence (based on barbed bisimulation) that is able to model equivalence between programs that have different communication structures but the same observable behaviour. The novelty in our formalisation is its extent: it includes semantics for messages and exit and link signals, in addition to most of Core Erlang’s sequential features. Furthermore, unlike previous studies, this work formalises message receipt using primitive operations, consistent with the standard as of Erlang/OTP 23. In this novel formalisation, we show some generally applicable program equivalences (such as process identifier renaming and silent evaluation) and present a practical case study featuring the equivalence of sequential and concurrent list processing. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
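The case study mentioned above shows that sequential and concurrent list processing can be behaviourally equivalent. As a rough, informal analogue of that property (the paper's formalisation is for Core Erlang and uses barbed bisimulation, not Python), the sketch below checks that a threaded traversal is observably indistinguishable from the sequential one:

```python
from concurrent.futures import ThreadPoolExecutor

# Rough analogue of the paper's case study: a sequential and a
# concurrent list traversal differ only in internal scheduling,
# not in the result a client can observe.
xs = list(range(10))
square = lambda x: x * x

sequential = [square(x) for x in xs]
with ThreadPoolExecutor(max_workers=4) as pool:
    concurrent = list(pool.map(square, xs))

assert sequential == concurrent  # same observable behaviour
print(concurrent)
```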

18 pages, 827 KiB  
Article
Zero-Shot Learning for Accurate Project Duration Prediction in Crowdsourcing Software Development
by Tahir Rashid, Inam Illahi, Qasim Umer, Muhammad Arfan Jaffar, Waheed Yousuf Ramay and Hanadi Hakami
Computers 2024, 13(10), 266; https://doi.org/10.3390/computers13100266 - 12 Oct 2024
Viewed by 546
Abstract
Crowdsourcing Software Development (CSD) platforms, e.g., TopCoder, function as intermediaries connecting clients with developers. Despite employing systematic methodologies, these platforms frequently encounter high task abandonment rates, with approximately 19% of projects failing to meet satisfactory outcomes. Although existing research has focused on task scheduling, developer recommendations, and reward mechanisms, there has been insufficient attention to the support of platform moderators, or copilots, who are essential to project success. A critical responsibility of copilots is estimating project duration; however, manual predictions often lead to inconsistencies and delays. This paper introduces an innovative machine learning approach designed to automate the prediction of project duration on CSD platforms. Utilizing historical data from TopCoder, the proposed method extracts pertinent project attributes and preprocesses textual data through Natural Language Processing (NLP). Bidirectional Encoder Representations from Transformers (BERT) is employed to convert textual information into vectors, which are then analyzed using various machine learning algorithms. Zero-shot learning algorithms exhibit superior performance, with an average accuracy of 92.76%, precision of 92.76%, recall of 99.33%, and an F-measure of 95.93%. The implementation of the proposed automated duration prediction model is crucial for enhancing the success rate of crowdsourcing projects, optimizing resource allocation, managing budgets effectively, and improving stakeholder satisfaction. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
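As a rough illustration of the embed-then-classify pipeline the abstract describes, the sketch below encodes task descriptions into dense vectors and fits a classifier on duration buckets. The encoder model name, the toy data, and the labels are illustrative assumptions; the paper's actual setup uses BERT vectors over TopCoder history with zero-shot learners:

```python
from sentence_transformers import SentenceTransformer  # assumed stand-in encoder
from sklearn.linear_model import LogisticRegression

# Toy task descriptions and hypothetical duration buckets; the paper
# instead learns from historical TopCoder project data.
texts = ["Build REST API for payments", "Fix CSS layout bug", "Migrate DB schema"]
labels = ["long", "short", "long"]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
X = encoder.encode(texts)            # one dense vector per description

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(encoder.encode(["Add OAuth login flow"])))
```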

19 pages, 15516 KiB  
Article
Effects of OpenCL-Based Parallelization Methods on Explicit Numerical Methods to Solve the Heat Equation
by Dániel Koics, Endre Kovács and Olivér Hornyák
Computers 2024, 13(10), 250; https://doi.org/10.3390/computers13100250 - 1 Oct 2024
Viewed by 578
Abstract
In recent years, the need for high-performance computing solutions has increased due to the growing complexity of computational tasks. The use of parallel processing techniques has become essential to address this demand. In this study, an Open Computing Language (OpenCL)-based parallelization algorithm is implemented for the Constant Neighbors (CNe) and CNe with Predictor–Corrector (CpC) numerical methods, which are recently developed explicit and stable numerical algorithms to solve the heat conduction equation. The CPU time and error rate performance of these two methods are compared with the sequential implementation and Euler’s explicit method. The results demonstrate that the parallel version’s CPU time remains nearly constant under the examined circumstances, regardless of the number of spatial mesh points. This leads to a remarkable speed advantage over the sequential version for larger data point counts. Furthermore, the impact of the number of timesteps on the crossover point where the parallel version becomes faster than the sequential one is investigated. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
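For context, the explicit Euler baseline that the abstract compares against updates each mesh point from its immediate neighbours, which is exactly the data-parallel stencil structure an OpenCL kernel exploits. A minimal NumPy sketch of that baseline follows (the CNe and CpC methods themselves differ; the grid size and coefficients here are illustrative):

```python
import numpy as np

# Explicit Euler (FTCS) update for the 1-D heat equation u_t = alpha * u_xx.
# This baseline scheme is only stable for r = alpha*dt/dx**2 <= 0.5,
# a limitation the stable CNe/CpC methods are designed to avoid.
alpha, dx, dt = 1.0, 0.01, 4e-5
r = alpha * dt / dx**2
assert r <= 0.5, "explicit Euler is unstable for r > 0.5"

u = np.zeros(101)
u[40:60] = 1.0                                      # initial heat pulse
for _ in range(1000):
    u[1:-1] += r * (u[:-2] - 2 * u[1:-1] + u[2:])   # neighbour stencil, trivially parallel

print(u.max())                                      # pulse diffuses and flattens
```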

19 pages, 3161 KiB  
Article
Modeling and Analysis of Dekker-Based Mutual Exclusion Algorithms
by Libero Nigro, Franco Cicirelli and Francesco Pupo
Computers 2024, 13(6), 133; https://doi.org/10.3390/computers13060133 - 25 May 2024
Cited by 2 | Viewed by 1354
Abstract
Mutual exclusion is a fundamental problem in concurrent/parallel/distributed systems. The first pure-software solution to this problem for two processes, not based on hardware instructions like test-and-set, was proposed in 1965 by Th.J. Dekker and communicated by E.W. Dijkstra. The correctness of this algorithm has generally been studied under the strong memory model, where the read and write operations on a memory cell are atomic or indivisible. In recent years, some variants of the algorithm have been proposed to make it RW-safe under the weak memory model, which allows, e.g., multiple read operations to occur simultaneously with a write operation on the same variable, with the read operations returning (flickering) a non-deterministic value. This paper proposes a novel approach to formal modeling and reasoning on a mutual exclusion algorithm using Timed Automata and the Uppaal tool, and it applies this approach through exhaustive model checking to conduct a thorough analysis of Dekker’s algorithm and some of its variants proposed in the literature. This paper aims to demonstrate that model checking, although necessarily limited in scalability in the number N of processes due to the state explosion problem, is an effective and powerful way to reason about concurrency and process action interleaving, and it can provide significant results about the correctness and robustness of the basic version and variants of Dekker’s algorithm under both the strong and weak memory models. In addition, the properties of these algorithms are also carefully studied in the context of a tournament-based binary tree for N ≥ 2 processes. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
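For reference, the classic two-process algorithm under study can be sketched as follows, assuming the strong memory model the abstract mentions (atomic reads and writes, which CPython's interpreter loop approximates); the RW-safe weak-memory variants analysed in the paper differ:

```python
import threading

# Dekker's algorithm for two processes under the strong memory model.
flag = [False, False]   # flag[i]: process i wants to enter
turn = 0                # whose turn it is to yield
counter = 0             # shared resource protected by the lock

def dekker(i, iterations=10_000):
    global turn, counter
    j = 1 - i
    for _ in range(iterations):
        flag[i] = True
        while flag[j]:              # the other process also wants in
            if turn == j:
                flag[i] = False     # back off while it is not our turn
                while turn == j:
                    pass            # busy-wait
                flag[i] = True
        counter += 1                # critical section
        turn = j                    # hand over the turn
        flag[i] = False

threads = [threading.Thread(target=dekker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 20000 if mutual exclusion held (no lost updates)
```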

16 pages, 3979 KiB  
Article
Performance Comparison of CFD Microbenchmarks on Diverse HPC Architectures
by Flavio C. C. Galeazzo, Marta Garcia-Gasulla, Elisabetta Boella, Josep Pocurull, Sergey Lesnik, Henrik Rusche, Simone Bnà, Matteo Cerminara, Federico Brogi, Filippo Marchetti, Daniele Gregori, R. Gregor Weiß and Andreas Ruopp
Computers 2024, 13(5), 115; https://doi.org/10.3390/computers13050115 - 7 May 2024
Cited by 1 | Viewed by 1233
Abstract
OpenFOAM is a CFD software package widely used in both industry and academia. The exaFOAM project aims at enhancing the HPC scalability of OpenFOAM, while identifying its current bottlenecks and proposing ways to overcome them. For the assessment of software components and code profiling during development, lightweight but representative benchmarks should be used. The answer was to develop microbenchmarks with a small memory footprint and short runtime. The name microbenchmark does not mean that they have been prepared to be the smallest possible test cases; rather, they have been developed to fit in a single compute node, which usually has dozens of compute cores. The microbenchmarks cover a broad range of applications: incompressible and compressible flow, combustion, viscoelastic flow, and adjoint optimization. All benchmarks are part of the OpenFOAM HPC Technical Committee repository and are fully accessible. Performance on HPC systems with Intel and AMD processors (x86_64 architecture) and Arm processors (aarch64 architecture) has been benchmarked. For the workloads in this study, the mean performance with the AMD CPU is 62% higher than with Arm and 42% higher than with Intel. The AMD processor thus seems particularly well suited, resulting in an overall shorter time-to-solution. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)

20 pages, 5169 KiB  
Article
A Rule-Based Algorithm and Its Specializations for Measuring the Complexity of Software in Educational Digital Environments
by Artyom V. Gorchakov, Liliya A. Demidova and Peter N. Sovietov
Computers 2024, 13(3), 75; https://doi.org/10.3390/computers13030075 - 11 Mar 2024
Cited by 1 | Viewed by 2077
Abstract
Modern software systems consist of many software components, and their source code is hard for new developers to understand and maintain. Aiming to improve the readability and understandability of source code, companies that specialize in software development adopt programming standards, software design patterns, and static analyzers to decrease the complexity of software. Recent research has introduced a number of code metrics allowing the numerical characterization of the maintainability of code snippets. Cyclomatic Complexity (CycC) is one widely used metric for measuring the complexity of software. The value of CycC is equal to the number of decision points in a program plus one. However, CycC does not take into account the nesting levels of the syntactic structures that break the linear control flow of a program. Aiming to resolve this, the Cognitive Complexity (CogC) metric was proposed as a successor to CycC. In this paper, we describe a rule-based algorithm and its specializations for measuring the complexity of programs. We express the CycC and CogC metrics by means of the described algorithm and propose a new complexity metric named Educational Complexity (EduC) for use in educational digital environments. EduC is at least as strict as CycC and CogC and includes additional checks based on definition-use graph analysis of a program. We evaluate the CycC, CogC, and EduC metrics using the source code of programs submitted to a Digital Teaching Assistant (DTA) system that automates a university programming course. The obtained results confirm that EduC rejects more overcomplicated and difficult-to-understand programs in solving unique programming exercises generated by the DTA system when compared to CycC and CogC. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
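To make the distinction concrete: CycC counts decision points plus one, while CogC-style scoring also penalises nesting. The sketch below implements simplified stand-ins for both over Python ASTs (the paper's rule sets, and EduC's definition-use checks, are more elaborate):

```python
import ast

DECISIONS = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler, ast.IfExp)
NESTING = (ast.If, ast.For, ast.While, ast.ExceptHandler)

def cyclomatic(tree):
    # CycC = number of decision points + 1.
    return 1 + sum(isinstance(n, DECISIONS) for n in ast.walk(tree))

def cognitive(node, depth=0):
    # Simplified CogC-style score: +1 per control structure, +depth for nesting.
    score = 0
    for child in ast.iter_child_nodes(node):
        if isinstance(child, NESTING):
            score += 1 + depth + cognitive(child, depth + 1)
        else:
            score += cognitive(child, depth)
    return score

tree = ast.parse("""
def f(xs):
    for x in xs:
        if x > 0 and x % 2 == 0:
            print(x)
""")
print(cyclomatic(tree))  # 4: for + if + boolean operator + 1
print(cognitive(tree))   # 3: for (+1) plus nested if (+2)
```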

17 pages, 797 KiB  
Article
Distributed Representation for Assembly Code
by Kazuki Yoshida, Kaiyu Suzuki and Tomofumi Matsuzawa
Computers 2023, 12(11), 222; https://doi.org/10.3390/computers12110222 - 1 Nov 2023
Viewed by 1780
Abstract
In recent years, the number of similar software products with many common parts has been increasing due to the reuse and plagiarism of source code in the software development process. Pattern matching, an existing method for detecting similarity, cannot detect the similarities between these software products and other programs. It is necessary, for example, to detect similarities based on commonalities in both functionality and control structures. At the same time, detailed software analysis requires manual reverse engineering. Therefore, technologies that automatically identify similarities among the large amounts of code present in software products in advance can reduce these loads. In this paper, we propose a representation learning model to extract feature expressions from assembly code obtained by statically analyzing such code to determine the similarity between software products. We use assembly code to eliminate the dependence on the existence of source code or differences in development language. The proposed approach makes use of Asm2Vec, an existing method that is capable of generating a vector representation capturing the semantics of assembly code. The proposed method also incorporates information on the program control structure, which can be represented as graph data. Thus, we use graph embedding, a graph vector representation method, to generate a representation vector that reflects both the semantics and the control structure of the assembly code. In our experiments, we generated representation vectors from multiple programs and used clustering to verify the accuracy of the approach in classifying similar programs into the same cluster. The proposed method outperforms existing methods that only consider semantics in both accuracy and execution time. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
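The key step is fusing two representations, one for instruction semantics and one for the control-flow graph, before computing similarity. A toy sketch of that fusion follows; both input vectors are stand-ins (the paper derives them with Asm2Vec and a learned graph embedding, respectively):

```python
import numpy as np

def fuse(semantic_vec: np.ndarray, cfg_vec: np.ndarray) -> np.ndarray:
    # Normalise each part so neither dominates, then concatenate.
    parts = [v / np.linalg.norm(v) for v in (semantic_vec, cfg_vec)]
    return np.concatenate(parts)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in vectors for two programs: similar semantics, identical CFG features.
sem_a, sem_b = np.array([1.0, 0.2, 0.1]), np.array([0.9, 0.3, 0.1])
cfg_a, cfg_b = np.array([3.0, 1.0, 2.0]), np.array([3.0, 1.0, 2.0])

# A high fused similarity is what lets clustering place the two
# programs in the same group, as in the paper's experiments.
print(cosine(fuse(sem_a, cfg_a), fuse(sem_b, cfg_b)))
```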

29 pages, 1099 KiB  
Article
Dependability Patterns: A Survey
by Ingrid A. Buckley and Eduardo B. Fernandez
Computers 2023, 12(10), 214; https://doi.org/10.3390/computers12100214 - 21 Oct 2023
Cited by 1 | Viewed by 2483
Abstract
Patterns embody the experience and knowledge of designers and are effective ways to improve nonfunctional aspects of software systems. Although there are several catalogs and surveys of security patterns, there is no catalog or general survey of dependability patterns. Our survey presents an enumeration of dependability patterns, including fault tolerance, reliability, safety, and availability patterns. After defining classification groups and showing basic pattern relationships, we provide references to the publications where these patterns were introduced and enumerate their intents. Another objective is to evaluate these patterns to see whether their descriptions are appropriate for a possible catalog, which would make them useful to developers and researchers. We found that most of them need remodeling because they use ad hoc templates or no templates. We consider some models from which we can derive patterns, as well as methodologies that incorporate the use of patterns to build dependable software systems. We also provide directions for research. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)

17 pages, 2020 KiB  
Article
Requirement Change Prediction Model for Small Software Systems
by Rida Fatima, Furkh Zeshan, Adnan Ahmad, Muhammad Hamid, Imen Filali, Amel Ali Alhussan and Hanaa A. Abdallah
Computers 2023, 12(8), 164; https://doi.org/10.3390/computers12080164 - 14 Aug 2023
Cited by 1 | Viewed by 1506
Abstract
The software industry plays a vital role in driving technological advancements. Software projects are complex and consist of many components, so change is unavoidable in these projects. Changes in software requirements must be predicted early to preserve resources, since they can lead to project failures. This work focuses on small-scale software systems in which requirements change gradually. The work provides a probabilistic prediction model, which predicts the probability of changes in software requirement specifications. The first part of the work analyzes the changes in software requirements due to certain variables with the help of stakeholders, developers, and experts using the questionnaire method. The proposed model then incorporates their knowledge into a Bayesian network as conditional probabilities of independent and dependent variables. The proposed approach utilizes the variable elimination method to obtain the posterior probability of revisions in the software requirement document. The model was evaluated by sensitivity analysis and comparison methods. For a given dataset, the proposed model computed the probability of low-state revisions as 0.42 and of high-state revisions as 0.45. Thus, the results showed that the proposed approach can accurately predict changes in the requirements document, outperforming existing models. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
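To make the inference step concrete: the network stores expert-elicited conditional probabilities, and variable elimination sums out hidden variables to obtain a posterior. A toy two-variable example follows; the variables and all numbers are hypothetical, not the paper's elicited values:

```python
# Hypothetical two-node network: scope_change -> revision.
p_scope = {True: 0.3, False: 0.7}                      # P(scope_change)
p_rev_given_scope = {True: {True: 0.8, False: 0.2},    # P(revision | scope_change)
                     False: {True: 0.25, False: 0.75}}

# Eliminate scope_change by summing it out: P(rev) = sum_s P(rev|s) P(s).
p_rev = {r: sum(p_rev_given_scope[s][r] * p_scope[s] for s in (True, False))
         for r in (True, False)}
print(p_rev[True])                                     # marginal P(revision) = 0.415

# Posterior by Bayes' rule once a revision is observed.
posterior = p_rev_given_scope[True][True] * p_scope[True] / p_rev[True]
print(round(posterior, 3))                             # P(scope_change | revision) = 0.578
```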

19 pages, 17198 KiB  
Article
The Impact of the Web Data Access Object (WebDAO) Design Pattern on Productivity
by Zoltán Richárd Jánki and Vilmos Bilicki
Computers 2023, 12(8), 149; https://doi.org/10.3390/computers12080149 - 27 Jul 2023
Cited by 2 | Viewed by 1980
Abstract
In contemporary software development, it is crucial to adhere to design patterns because well-organized and readily maintainable source code facilitates bug fixes and the development of new features. A carefully selected set of design patterns can have a significant impact on the productivity of software development. Data Access Object (DAO) is a frequently used design pattern that provides an abstraction layer between the application and the database and traditionally resides in the back-end. With the rise of serverless development, more and more applications are using the DAO design pattern, but it has moved to the front-end. We refer to this pattern as WebDAO. It is evident that the DAO pattern improves development productivity, but this has never been demonstrated for WebDAO. Here, we evaluated open source Angular projects to determine whether they use WebDAO. For automatic evaluation, we trained a Natural Language Processing (NLP) model that can recognize the WebDAO design pattern with 92% accuracy. On the basis of the results, we analyzed the entire history of the projects and presented how the WebDAO design pattern impacts productivity, taking into account the number of commits, changes, and issues. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
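For readers unfamiliar with the pattern, the sketch below shows the DAO idea in miniature: callers depend on an abstract data-access interface, so a REST-backed implementation (the WebDAO case, where the layer moves to the front-end) can replace an in-memory one without touching callers. The names are illustrative, and the paper's subjects are Angular projects rather than Python:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str

class UserDAO(ABC):
    """Abstraction layer between application code and data storage."""
    @abstractmethod
    def get(self, user_id: int) -> User: ...

class InMemoryUserDAO(UserDAO):
    def __init__(self):
        self._rows = {1: User(1, "Ada")}
    def get(self, user_id: int) -> User:
        return self._rows[user_id]

def greet(dao: UserDAO, user_id: int) -> str:
    # Application code sees only the interface; swapping in an
    # HTTP-backed DAO requires no changes here.
    return f"Hello, {dao.get(user_id).name}!"

print(greet(InMemoryUserDAO(), 1))
```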

33 pages, 3184 KiB  
Article
The Applicability of Automated Testing Frameworks for Mobile Application Testing: A Systematic Literature Review
by Natnael Gonfa Berihun, Cyrille Dongmo and John Andrew Van der Poll
Computers 2023, 12(5), 97; https://doi.org/10.3390/computers12050097 - 3 May 2023
Cited by 3 | Viewed by 3804
Abstract
Mobile applications are developed and released to the market every day. Due to the intense usage of mobile applications, their quality matters. End users increasingly reject mobile apps due to low quality and a lack of proper mobile testing, which indicates that mobile application testing plays a crucial role in the acceptance of a given software product. Test engineers use automation frameworks for testing their mobile applications. Automated testing brings several advantages to the development team; for example, automated checks are used for regression testing, fast execution of test scripts, and providing quick feedback to the development team. A systematic literature review was used to identify and collect evidence on automated testing frameworks for mobile application testing. A total of 56 relevant research papers, published in prominent journals and conferences until February 2023, were identified. The results were summarized and tabulated to provide insights into the suitability of existing automation testing frameworks for mobile application testing. We identified the major test concerns and challenges in performing mobile automation testing. The results showed that the keyword-driven testing framework is the most widely used approach, but recently, hybrid approaches have been adopted for mobile test automation. On the other hand, this review indicated that existing frameworks need to be customized using reusable and domain-specific keywords to make them suitable for mobile application testing. Accordingly, this study proposes an architecture, the mobile-based automation testing framework (MATF). In future work, to address mobile application testing challenges, the authors will implement the proposed framework (MATF). Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)

Review


18 pages, 1016 KiB  
Review
Exploring the Landscape of Data Analysis: A Review of Its Application and Impact in Ecuador
by Manuel Ayala-Chauvin, Fátima Avilés-Castillo and Jorge Buele
Computers 2023, 12(7), 146; https://doi.org/10.3390/computers12070146 - 22 Jul 2023
Cited by 3 | Viewed by 4463
Abstract
Data analysis is increasingly critical in aiding decision-making within public and private institutions. This paper scrutinizes the status quo of big data and data analysis and their applications within Ecuador, focusing on the societal, educational, and industrial impact. A detailed literature review was conducted from academic databases such as SpringerLink, Scopus, IEEE Xplore, Web of Science, and ACM, incorporating research from inception until May 2023. The search process adhered to the PRISMA statement, employing specific inclusion and exclusion criteria. The analysis revealed that data implementation in Ecuador, while recent, has found noteworthy applications in six principal areas, classified using ISCED: education, science, engineering, health, social, and services. In the scientific and engineering sectors, big data has notably contributed to disaster mitigation and optimizing resource allocation in smart cities. Its application in the social sector has fortified cybersecurity and election data integrity, while in services, it has enhanced residential ICT adoption and urban planning. Health sector applications are emerging, particularly in disease prediction and patient monitoring. Educational applications predominantly involve student performance analysis and curricular evaluation. This review emphasizes that while big data’s potential is being gradually realized in Ecuador, further research, data security measures, and institutional interoperability are required to fully leverage its benefits. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)

32 pages, 863 KiB  
Review
Prioritizing Use Cases: A Systematic Literature Review
by Yousra Odeh and Nedhal Al-Saiyd
Computers 2023, 12(7), 136; https://doi.org/10.3390/computers12070136 - 6 Jul 2023
Cited by 2 | Viewed by 4320
Abstract
The prioritization of software requirements is necessary for successful software development. A use case is a useful approach to represent and prioritize user-centric requirements. Use-case-based prioritization ranks use cases to attain business value based on identified criteria. The research community has started applying use case modeling to emerging technologies such as the IoT, mobile development, and big data. A systematic literature review was conducted to understand the approaches reported in the last two decades. For each of the 40 identified approaches, a review is presented with respect to the consideration of scenarios, the extent of formality, and the size of requirements. Only 32.5% of the reviewed studies considered scenario-based approaches, and the majority of reported approaches were semiformally developed (53.8%). The reported results open prospects for the development of new approaches to fill a gap regarding the inclusion of strategic goals and the respective business processes that support scenario representation. This study reveals that existing approaches fail to consider necessary criteria such as risks, goals, and some quality-related requirements. The findings reported herein are useful for researchers and practitioners aiming to improve current prioritization practices using the use case approach. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
