Software Analysis, Quality, and Security

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (15 November 2024) | Viewed by 28618

Special Issue Editors


Guest Editor
Department of Computer Science and Engineering, Hanyang University, Ansan 15588, Republic of Korea
Interests: software engineering; software quality; software maintenance; software analysis; software metrics; software testing; formal verification; model checking; requirement engineering; Internet of Things; web development

Guest Editor
Department of Electrical and Electronic Engineering, Xi’an Jiaotong-Liverpool University, Suzhou 215123, China
Interests: control theory; data analysis; fuzzy set theory; robust controller design; energy optimization

Guest Editor
Department of Computer Engineering, Jeju National University, Jeju 63243, Korea
Interests: computer graphics; VR/AR; vision

Guest Editor
Department of Computer Science, University of Central Punjab, Lahore 54000, Pakistan
Interests: software product line; software testing; software requirements engineering; Internet of Things applications; embedded systems

Guest Editor
Department of Electronic Commerce, Paichai University, Daejeon 35345, Korea
Interests: system software; mobile computing; web-app programming; e-commerce system; web database system; web services

Special Issue Information

Dear Colleagues,

The rapid progress of digital transformation, accelerated further by the COVID-19 pandemic, has driven corporate innovation, significant changes across industries, and the digitalization of our daily lives. As software is at the center of these changes, and its importance and share in industry and society are ever increasing, ensuring software quality has become vital to securing core software technologies. Without proper quality management, software can cause problems directly related to safety and security, as the commonly encountered software defect cases and security incidents demonstrate. In addition, software quality management is essential for reducing the cost of producing, operating, and maintaining software-based products and services, as well as for increasing productivity and shortening time-to-market. Software therefore needs to be well developed, thoroughly analyzed, and continuously maintained to ensure adequate quality.

This Special Issue invites researchers and practitioners to present innovative and significant research achievements in the fields of software development, analysis, and maintenance that are employed to effectively manage, control, and ensure software quality. The topics of interest include, but are not limited to, the following:

  • Software and system comprehension;
  • Source code analysis and manipulation;
  • Software metrics;
  • Software visualization;
  • Software refactoring;
  • Debugging and fault localization;
  • Change and defect management;
  • Software testing theories and methods;
  • Software validation and verification;
  • Program synthesis and repair;
  • Software evolution and maintenance;
  • Software quality assessment;
  • Software quality management, control, and assurance;
  • Software reliability and safety;
  • Software performance;
  • Software security;
  • Dependable and secure computing;
  • Realistic and immersive media technology;
  • Digital signal, image, and video processing technology.

Dr. Scott Uk-Jin Lee
Dr. Sanghyuk Lee
Dr. Soo Kyun Kim
Dr. Asad Abbas
Dr. Seokhun Kim
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • software analysis and testing
  • software analytics
  • software evolution and maintenance
  • software quality
  • software security
  • media technology
  • digital signal, image, video processing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (14 papers)


Research

22 pages, 2855 KiB  
Article
Test Coverage in Microservice Systems: An Automated Approach to E2E and API Test Coverage Metrics
by Amr S. Abdelfattah, Tomas Cerny, Jorge Yero, Eunjee Song and Davide Taibi
Electronics 2024, 13(10), 1913; https://doi.org/10.3390/electronics13101913 - 13 May 2024
Cited by 1 | Viewed by 1415
Abstract
Test coverage is a critical aspect of the software development process, aiming for overall confidence in the product. When considering cloud-native systems, testing becomes complex, as it becomes necessary to deal with multiple distributed microservices that are developed by different teams and may change quite rapidly. In such a dynamic environment, it is important to track test coverage. This is especially relevant for end-to-end (E2E) and API testing, as these might be developed by teams distinct from the microservice developers. Moreover, indirection exists in E2E testing, where testers may see the user interface but not know how comprehensive the test suites are. To ensure confidence in system health checks, mechanisms and instruments are needed to indicate the test coverage level. Unfortunately, such mechanisms are lacking for cloud-native systems. This manuscript introduces test coverage metrics for evaluating the extent of E2E and API test suite coverage for microservice endpoints. It elaborates on automating the calculation of these metrics with access to microservice codebases and system testing traces, delves into the process, and offers feedback with a visual perspective, emphasizing test coverage across microservices. To demonstrate the viability of the proposed approach, we implement a proof-of-concept tool and perform a case study on a well-established system benchmark, assessing existing E2E and API test suites with regard to test coverage using the proposed endpoint metrics. The results of endpoint coverage reflect the diverse perspectives of both testing approaches. API testing achieved 91.98% coverage in the benchmark, whereas E2E testing achieved 45.42%. Combining both coverage results yielded a slight increase to approximately 92.36%, attributed to a few endpoints tested exclusively through one testing approach and not covered by the other.
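
To make the endpoint-coverage arithmetic above concrete, the following minimal Python sketch (illustrative identifiers only, not the authors' tool) computes per-suite and combined coverage from sets of endpoints observed in test traces; combining suites amounts to a set union, which is why the combined figure can sit only slightly above the best single suite when the endpoint sets overlap heavily.

```python
# Illustrative sketch: endpoint coverage as the fraction of declared microservice
# endpoints exercised by a test suite's traces. Endpoint names are hypothetical.

def endpoint_coverage(declared, tested):
    """Percentage of declared endpoints hit by at least one test."""
    return 100.0 * len(declared & tested) / len(declared) if declared else 0.0

declared = {
    "orders GET /orders", "orders POST /orders",
    "users GET /users/{id}", "users DELETE /users/{id}",
}
api_tested = {"orders GET /orders", "orders POST /orders", "users GET /users/{id}"}
e2e_tested = {"orders GET /orders", "users DELETE /users/{id}"}

print(f"API coverage:      {endpoint_coverage(declared, api_tested):.2f}%")
print(f"E2E coverage:      {endpoint_coverage(declared, e2e_tested):.2f}%")
print(f"Combined coverage: {endpoint_coverage(declared, api_tested | e2e_tested):.2f}%")
```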

24 pages, 4669 KiB  
Article
ML-Based Software Defect Prediction in Embedded Software for Telecommunication Systems (Focusing on the Case of SAMSUNG ELECTRONICS)
by Hongkoo Kang and Sungryong Do
Electronics 2024, 13(9), 1690; https://doi.org/10.3390/electronics13091690 - 26 Apr 2024
Cited by 1 | Viewed by 1296
Abstract
Software stands out as one of the most rapidly evolving technologies in the present era, characterized by its swift expansion in both scale and complexity, which leads to challenges in quality assurance. Software defect prediction (SDP) has emerged as a methodology crafted to anticipate undiscovered defects, leveraging known defect data from existing code. This methodology serves to facilitate software quality management, thereby ensuring overall product quality. Machine learning (ML) and one of its branches, deep learning (DL), exhibit superior accuracy and adaptability compared to traditional statistical approaches, catalyzing active research in this domain. However, generalization remains difficult, not only because of the disparity between open-source and commercial projects but also because of the differences between industrial sectors. Consequently, further research utilizing datasets sourced from diverse real-world sectors has become imperative to bolster the applicability of these findings. For this study, we utilized embedded software for the telecommunication systems of Samsung Electronics, supplemented by the introduction of nine novel features to train the model, followed by an analysis of the results. The experimental outcomes revealed that the F-measure improved from 0.58 to 0.63 upon integration of the new features, signifying a performance gain of 8.62%. This case study is anticipated to contribute to bolstering the application of SDP methodologies within analogous industrial sectors.
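
For readers who want to verify the quoted improvement, the arithmetic is a simple relative gain on the F-measure; the sketch below uses only the 0.58 and 0.63 values from the abstract (the precision/recall inputs and the model itself are not reproduced here).

```python
# Worked arithmetic behind the reported gain; the example precision/recall pair is
# hypothetical and only shows how an F-measure of about 0.58 can arise.

def f_measure(precision, recall):
    """F-measure (F1): harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f_measure(0.60, 0.56), 2))            # ~0.58, a baseline-like score

baseline, with_new_features = 0.58, 0.63
gain = (with_new_features - baseline) / baseline
print(f"relative improvement: {gain:.2%}")        # ~8.62%, matching the reported figure
```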

22 pages, 10403 KiB  
Article
TraModeAVTest: Modeling Scenario and Violation Testing for Autonomous Driving Systems Based on Traffic Regulations
by Chunyan Xia, Song Huang, Changyou Zheng, Zhen Yang, Tongtong Bai and Lele Sun
Electronics 2024, 13(7), 1197; https://doi.org/10.3390/electronics13071197 - 25 Mar 2024
Cited by 1 | Viewed by 1063
Abstract
Current testing methods for autonomous driving systems primarily focus on simple traffic scenarios and generate test cases based on traffic accidents, while research on generating edge test cases from traffic regulations for complex driving environments is not adequately comprehensive. Therefore, we propose TraModeAVTest, a method for scenario modeling and violation testing of autonomous driving systems based on traffic regulations. Initially, TraModeAVTest constructs a Petri net model for complex scenarios based on the combination relationships of basic traffic regulation scenarios and verifies the consistency of the model’s design with traffic regulation requirements using formal methods, providing a representation of traffic regulation scenario models for the violation testing of autonomous driving systems. Subsequently, based on the coverage criteria of the Petri net model, it utilizes a search strategy to generate model paths that represent traffic regulations and employs a parameter combination method to generate test cases that cover the model paths, in order to test the violation behaviors of autonomous driving systems. Finally, simulation experiments on Baidu Apollo demonstrate that the test cases representing traffic regulations generated by TraModeAVTest can effectively identify the behaviors of autonomous vehicles violating traffic regulations, and that TraModeAVTest can effectively improve the efficiency of generating different types of violation scenarios.
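
The core idea of deriving tests from a scenario model can be sketched compactly: enumerate firing sequences of a (toy) Petri net and expand each sequence into concrete test cases by combining scenario parameters. Everything below is an illustrative assumption, not TraModeAVTest itself.

```python
# Toy Petri net: places hold token counts; each transition has (pre, post) places.
from itertools import product

transitions = {
    "approach_intersection": ({"driving": 1}, {"at_intersection": 1}),
    "light_turns_red":       ({"at_intersection": 1}, {"must_stop": 1}),
    "proceed_on_red":        ({"must_stop": 1}, {"violation": 1}),  # behavior under test
}

def enabled(marking, pre):
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

def paths(marking, depth):
    """All firing sequences up to the given depth (a simple coverage criterion)."""
    result = [[]]
    if depth == 0:
        return result
    for name, (pre, post) in transitions.items():
        if enabled(marking, pre):
            for tail in paths(fire(marking, pre, post), depth - 1):
                result.append([name] + tail)
    return result

params = {"weather": ["clear", "rain"], "speed_kmh": [30, 60]}   # hypothetical parameters
test_cases = [
    {"path": p, **dict(zip(params, combo))}
    for p in paths({"driving": 1}, depth=3)
    for combo in product(*params.values())
]
print(len(test_cases), "candidate test cases")  # each would be executed in the simulator
```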

25 pages, 1040 KiB  
Article
VConMC: Enabling Consistency Verification for Distributed Systems Using Implementation-Level Model Checkers and Consistency Oracles
by Beom-Heyn Kim
Electronics 2024, 13(6), 1153; https://doi.org/10.3390/electronics13061153 - 21 Mar 2024
Viewed by 1135
Abstract
Many cloud services rely on distributed key-value stores such as ZooKeeper, Cassandra, and HBase. However, distributed key-value stores are notoriously difficult to design and implement without any mistakes. Because data consistency is the contract for clients that defines what the correct values to read are for a given history of operations under a specific consistency model, consistency violations can confuse client applications by showing invalid values. As a result, serious consequences such as data loss, data corruption, and unexpected behavior of client applications can occur. Software bugs are one of the main reasons why consistency violations may occur. Formal verification techniques may be used to make designs correct and minimize the risks of having bugs in the implementation. However, formal verification is not a panacea due to limitations such as the cost of verification, the inability to verify existing implementations, and the human errors involved. Implementation-level model checking has been heavily explored by researchers over the past decades to formally verify whether the underlying implementations of distributed systems have bugs. Nevertheless, previous proposals are limited because their invariant checking is not versatile enough to cover the wide spectrum of consistency models, from eventual consistency to strong consistency. In this work, consistency oracles are employed for consistency invariant checking and can be used by implementation-level model checkers to formally verify the data consistency model implementations of distributed key-value stores. To integrate consistency oracles with implementation-level distributed system model checkers, the partial-order information obtained via an API is leveraged to avoid exhaustive search during consistency invariant checking. Our evaluation results show that, by using the proposed method for consistency invariant checking, our prototype model checker, VConMC, can detect consistency violations caused by several real-world software bugs in a well-known distributed key-value store, ZooKeeper.
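
The notion of a consistency oracle can be illustrated with a deliberately simplified single-key example: given writes with a known happens-before relation and a read, compute the set of values the read may legally return and flag anything else as a violation. The rules below are a strong-consistency-style simplification for illustration only, not VConMC's actual oracle.

```python
# Simplified consistency oracle: a read may return the value of any visible write
# that has not been overwritten by a later visible write, or of any concurrent write.

def legal_values(writes, hb, read_id):
    """writes: {write_id: value}; hb: set of (a, b) pairs meaning a happened before b."""
    before = {w for w in writes if (w, read_id) in hb}
    concurrent = {w for w in writes
                  if (w, read_id) not in hb and (read_id, w) not in hb}
    superseded = {w for w in before
                  if any((w, x) in hb for x in before if x != w)}
    return {writes[w] for w in (before - superseded) | concurrent}

writes = {"w1": 1, "w2": 2}                       # hypothetical history: w1 then w2
hb = {("w1", "w2"), ("w1", "r"), ("w2", "r")}     # partial order, transitively closed
observed = 1                                      # value returned by read "r"
if observed not in legal_values(writes, hb, "r"):
    print("consistency violation: read returned", observed)
```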

15 pages, 16224 KiB  
Article
Lightweight Machine Learning Method for Real-Time Espresso Analysis
by Jintak Choi, Seungeun Lee, Kyungtae Kang and Hyojoong Suh
Electronics 2024, 13(4), 800; https://doi.org/10.3390/electronics13040800 - 19 Feb 2024
Viewed by 1482
Abstract
Coffee crema plays a crucial role in assessing the quality of espresso. In recent years, in response to rising labor costs, an aging population, remote security/authentication needs, civic awareness, and the growing preference for non-face-to-face interactions, robot cafes have emerged. While some people seek sentiment and premium coffee, many others desire quick and affordable options. To align with these trends, lightweight artificial intelligence algorithms are needed for easy and quick decision making, as well as for monitoring the extraction process in these automated cafes. However, the application of such technologies to actual coffee machines has been limited. In this study, we propose an innovative real-time coffee crema control system that integrates lightweight machine learning algorithms. We employ the GrabCut algorithm to segment the crema region from the rest of the image and use a clustering algorithm to determine the optimal brewing conditions for each cup of espresso based on the characteristics of the extracted crema. Our results demonstrate that our approach can accurately analyze coffee crema in real time. This research proposes a promising direction by leveraging computer vision and machine learning technologies to enhance the efficiency and consistency of coffee brewing. Such an approach enables the prediction of component replacement timing in coffee machines, such as water filter replacement, and provides administrators with preventive "before service" maintenance. This could lead to the development of fully automated artificial intelligence coffee-making systems in the future.
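
As a rough illustration of the kind of pipeline described, OpenCV's GrabCut can segment a foreground region, and simple per-shot features can then be clustered; the image path, rectangle, and feature choices below are assumptions, not the authors' configuration.

```python
import cv2
import numpy as np

img = cv2.imread("espresso.jpg")                  # hypothetical frame from the machine's camera
mask = np.zeros(img.shape[:2], np.uint8)
bgd = np.zeros((1, 65), np.float64)
fgd = np.zeros((1, 65), np.float64)
rect = (50, 50, img.shape[1] - 100, img.shape[0] - 100)   # assumed cup region

cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
crema = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)    # segmented foreground pixels

# Simple per-shot features: segmented area ratio and mean color inside the segment.
area_ratio = crema.mean()
mean_bgr = img[crema].mean(axis=0)
features = np.concatenate([[area_ratio], mean_bgr])
print(features)

# Features from many shots would then be stacked and clustered, e.g. with
# sklearn.cluster.KMeans, to map each shot to a brewing-condition group.
```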

29 pages, 1457 KiB  
Article
Joint Impact of Agents and Services in Enhancing Software Requirements Engineering
by Mekuria Sinkie, Tor Morten Gronli, Dida Midekso and Abdullah Lakhan
Electronics 2023, 12(18), 3955; https://doi.org/10.3390/electronics12183955 - 20 Sep 2023
Cited by 1 | Viewed by 1465
Abstract
Requirements engineering (RE) is a significant aspect of the system development stages in generating reliable software (SW). Despite RE’s decisive impact on project success, SW systems still fail because of difficulties in sorting out requirements correctly. Researchers have tried several paradigms to deal with these challenges, such as agent-oriented RE (AORE), model-based RE, and service-oriented RE (SORE). By investigating the limitations of the independent use of these paradigms, this research proposes a framework that integrates the two paradigms (agent and service) on top of social media to enhance the SW RE process. The research thus addresses challenges in gathering adequate requirements, detecting alignment between business requirements and SW products, prioritizing requirements, and recommending innovative ideas. The research mainly adopted an empirical research methodology for SW engineering. Accordingly, two distinct expert groups were formed based on their previous experience in AORE and SORE, respectively. The experts were selected from enterprises and academic institutions, and they participated in our case study. After performing the necessary assessment based on specified criteria, the experts in the first group reported that CASCRE (Collaboration of Agents and Services for Crowd-based Requirements Engineering), with a score of 93.7%, performs better than AORE, with a score of 88.7%. Moreover, the experts in the second group declared that CASCRE, with a score of 92.3%, is better than SORE, with a score of 83.7%. In both cases, improvements were observed, which reveals that the synergy of the CASCRE features has a better impact on the RE process than utilizing the individual approaches. Moreover, to demonstrate the applicability of CASCRE, feedback was gathered from a focused crowd of local pharmaceutical companies using a mini-prototype. Accordingly, 250 requirements-related comments were gathered from the discussion forum, and 1400 keywords were generated. After performing a sentiment analysis using NLP algorithms, the results were demonstrated to the experts, and 93% of them strongly agreed on the applicability of CASCRE in real projects.

18 pages, 1879 KiB  
Article
An Exploratory Study Gathering Security Requirements for the Software Development Process
by Roberto Andrade, Jenny Torres, Iván Ortiz-Garcés, Jorge Miño and Luis Almeida
Electronics 2023, 12(17), 3594; https://doi.org/10.3390/electronics12173594 - 25 Aug 2023
Viewed by 1766
Abstract
Software development stands out as one of the most rapidly expanding markets due to its pivotal role in crafting applications across diverse sectors like healthcare, transportation, and finance. Nevertheless, the sphere of cybersecurity has also undergone substantial growth, underscoring the escalating significance of software security. Despite the existence of different secure development frameworks, vulnerabilities and software errors persist, providing potential exploitation opportunities for malicious actors. One pivotal contributor to subpar security quality within software lies in the neglect of cybersecurity requirements during the initial phases of software development. In this context, the focal aim of this study is to analyze the importance of software developers integrating security modeling into the elicitation process through the use of abuse stories. To this end, the study introduces a comprehensive and generic model for a secure software development process. This model inherently encompasses critical elements such as new technologies, human factors, and the management of security for the formulation of abuse stories and their integration within Agile methodological processes.

15 pages, 1106 KiB  
Article
Security Analysis of Web Open-Source Projects Based on Java and PHP
by Zhen Yin and Scott Uk-Jin Lee
Electronics 2023, 12(12), 2618; https://doi.org/10.3390/electronics12122618 - 10 Jun 2023
Viewed by 2429
Abstract
During website development, the selection of a suitable programming language and the reasonable use of relevant open-source projects are imperative. Although the two languages PHP and Java have been extensively investigated in this context, there are not many security test reports based on their open-source projects. In this article, we conducted separate security analyses of web-related open-source projects based on PHP and Java. To this end, different open-source frameworks and services were used to design websites for testing experimental attacks on 12 popular open-source filters available on GitHub, as well as to investigate the use of the Lightweight Directory Access Protocol (LDAP) in the Firefox browser environment. Using malicious payloads published by the Open Web Application Security Project (OWASP) and others, Cross-Site Scripting (XSS), Local File Inclusion (LFI), SQL injection, and LDAP injection attacks were performed on the test targets. The experimental results reveal that although PHP-based open-source projects are more vulnerable to attacks than Java-based ones, there is significant room for improvement. Finally, a whitelist-based filtering scheme is proposed. This scheme filters the inline attributes of label elements so that the filter achieves an excellent detection rate for malicious payloads while maintaining an excellent pass rate for benign payloads. Effective references and suggestions for web developers are also included to aid the selection of open-source web projects, and feasible solutions to improve filter performance are proposed.
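
The whitelist idea in the last paragraph can be sketched with Python's built-in HTML parser: keep only a small set of allowed inline attributes and drop everything else (event handlers, javascript: URLs). The whitelist and element handling here are assumptions for illustration, not the paper's exact filtering rules.

```python
from html.parser import HTMLParser

ALLOWED_ATTRS = {"id", "class", "for", "href", "src", "alt", "title"}  # assumed whitelist

class WhitelistFilter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        kept = []
        for name, value in attrs:
            if name.lower() not in ALLOWED_ATTRS:
                continue                          # drops onerror, onclick, style, ...
            if value and value.strip().lower().startswith("javascript:"):
                continue                          # drops javascript: URLs
            kept.append(f'{name}="{value}"' if value is not None else name)
        self.out.append("<" + " ".join([tag] + kept) + ">")

    def handle_endtag(self, tag):
        self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(data)

f = WhitelistFilter()
f.feed('<label for="x" onmouseover="alert(1)">name</label><img src="javascript:alert(1)">')
print("".join(f.out))                             # <label for="x">name</label><img>
```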

20 pages, 807 KiB  
Article
Learning and Fusing Multi-View Code Representations for Function Vulnerability Detection
by Zhenzhou Tian, Binhui Tian, Jiajun Lv and Lingwei Chen
Electronics 2023, 12(11), 2495; https://doi.org/10.3390/electronics12112495 - 1 Jun 2023
Cited by 3 | Viewed by 1909
Abstract
The explosive growth of vulnerabilities poses a significant threat to the security of software systems. While various deep-learning-based vulnerability detection methods have emerged, they primarily rely on semantic features extracted from a single code representation structure, which limits their ability to detect vulnerabilities hidden deep within the code. To address this limitation, we propose S2FVD, short for Sequence and Structure Fusion-based Vulnerability Detector, which fuses vulnerability-indicative features learned from multiple views of the code for more accurate vulnerability detection. Specifically, S2FVD employs either well-matched or carefully extended neural network models to extract vulnerability-indicative semantic features from the token sequence, attributed control flow graph (ACFG), and abstract syntax tree (AST) representations of a function, respectively. These features capture different perspectives of the code and are then fused to enable S2FVD to accurately detect vulnerabilities that are well hidden within a function. Experiments conducted on two large vulnerability datasets demonstrated the superior performance of S2FVD against state-of-the-art approaches, with its accuracy and F1 scores reaching 98.07% and 98.14%, respectively, in detecting the presence of vulnerabilities, and 97.93% and 97.94%, respectively, in pinpointing specific vulnerability types. Furthermore, on the real-world dataset D2A, S2FVD achieved average performance gains of 6.86% and 14.84% in terms of the accuracy and F1 metrics, respectively, over the state-of-the-art baselines. An ablation study also confirms the superiority of fusing the semantics implied in multiple distinct code views to further enhance vulnerability detection performance.
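
The fusion step itself is conceptually simple: encode each view of a function into a fixed-size vector and classify the concatenation. The schematic PyTorch sketch below shows only that late-fusion skeleton; the dimensions and toy inputs are assumptions, and the paper's actual per-view encoders are not reproduced.

```python
import torch
import torch.nn as nn

class FusionDetector(nn.Module):
    """Classify a function from fused token-sequence, ACFG and AST embeddings."""
    def __init__(self, d_seq=128, d_cfg=128, d_ast=128, n_classes=2):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(d_seq + d_cfg + d_ast, 256), nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, v_seq, v_cfg, v_ast):
        fused = torch.cat([v_seq, v_cfg, v_ast], dim=-1)   # late fusion of the three views
        return self.classifier(fused)

model = FusionDetector()
v_seq, v_cfg, v_ast = (torch.randn(4, 128) for _ in range(3))  # 4 dummy functions
logits = model(v_seq, v_cfg, v_ast)                # shape (4, 2): vulnerable vs. not
```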

16 pages, 636 KiB  
Article
Boosting Code Search with Structural Code Annotation
by Xianglong Kong, Hongyu Chen, Ming Yu and Lixiang Zhang
Electronics 2022, 11(19), 3053; https://doi.org/10.3390/electronics11193053 - 25 Sep 2022
Cited by 1 | Viewed by 1552
Abstract
Code search is a process that takes a given query as input and retrieves relevant code snippets from a code base. The relationship between query and code is commonly built on code annotation, which is extracted from code comments or other documents. Current code search studies treat code annotation roughly as common natural language, disregarding its hidden structural information. To address this information loss, this work proposes a code annotation model that extracts features from five perspectives and further builds a code search engine, CodeHunter. CodeHunter is evaluated on a dataset of 7 million code snippets and query descriptions. The experimental results show that CodeHunter obtains more effective results than Lucene and DeepCS. We also show that this effectiveness comes from the rich features and search models, and that CodeHunter works well with query descriptions of different sizes.
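
A toy version of field-aware retrieval makes the contrast with treating annotations as flat text visible: score each snippet by weighted term overlap between the query and the separate annotation fields. The fields, weights, and corpus below are illustrative assumptions; CodeHunter's actual features and search models are richer.

```python
from collections import Counter

FIELD_WEIGHTS = {"summary": 3.0, "params": 2.0, "returns": 2.0}   # assumed field weights

def score(query, annotation):
    """annotation: {field_name: text}; weighted term-overlap scoring."""
    q = Counter(query.lower().split())
    total = 0.0
    for field, text in annotation.items():
        terms = Counter(text.lower().split())
        overlap = sum(min(q[t], terms[t]) for t in q)
        total += FIELD_WEIGHTS.get(field, 1.0) * overlap
    return total

corpus = {  # hypothetical snippets with structured annotations
    "read_file": {"summary": "read a file into a string", "params": "path", "returns": "string contents"},
    "sort_list": {"summary": "sort a list in place", "params": "items", "returns": "none"},
}
query = "read file contents"
print(max(corpus, key=lambda name: score(query, corpus[name])))   # read_file
```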

13 pages, 2185 KiB  
Article
A Comparative Analysis of SVM and ELM Classification on Software Reliability Prediction Model
by Suneel Kumar Rath, Madhusmita Sahu, Shom Prasad Das, Sukant Kishoro Bisoy and Mangal Sain
Electronics 2022, 11(17), 2707; https://doi.org/10.3390/electronics11172707 - 29 Aug 2022
Cited by 12 | Viewed by 2616
Abstract
By creating an effective prediction model, software defect prediction seeks to predict potential flaws in new software modules in advance. However, unnecessary and duplicated features can degrade the model’s performance. Furthermore, past research has primarily used standard machine learning techniques for fault prediction, and the accuracy of the predictions has not been satisfactory. Extreme learning machines (ELM) and support vector machines (SVM) have been demonstrated to be viable in a variety of fields, although their usage in software dependability prediction is still uncommon. We present an SVM- and ELM-based algorithm for software reliability prediction in this research and investigate factors that influence prediction accuracy. These concerns include, first, whether all previous failure data should be used and, second, which type of failure data is more appropriate for prediction accuracy. We also examine the accuracy and runtime of the SVM- and ELM-based software dependability prediction models. The comparison yields experimental results demonstrating that the ELM-based reliability prediction model can achieve higher prediction accuracy on other metrics as well, such as specificity, recall, precision, and F1-measure. We also propose a model for how feature selection can be utilized with ELM and SVM. For testing, we used NASA Metrics datasets and applied feature selection techniques to both approaches to obtain the best results in our experiments. Due to the imbalance in our dataset, we initially applied a resampling method before implementing feature selection techniques to obtain the highest accuracy.
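
For the SVM side of such a study, the overall shape of the pipeline (resampling for class imbalance, feature selection, then classification) can be sketched with scikit-learn; the dummy data, the number of selected features, and the oversampling strategy are assumptions, and an ELM is not included because scikit-learn does not provide one.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.utils import resample

# Stand-in for a NASA MDP dataset: module metrics X and imbalanced defect labels y.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 21))
y = (rng.random(500) < 0.15).astype(int)

# Oversample the minority class (in practice this should be done on training folds only).
X_maj, X_min = X[y == 0], X[y == 1]
X_min_up = resample(X_min, replace=True, n_samples=len(X_maj), random_state=0)
X_bal = np.vstack([X_maj, X_min_up])
y_bal = np.array([0] * len(X_maj) + [1] * len(X_min_up))

X_tr, X_te, y_tr, y_te = train_test_split(X_bal, y_bal, test_size=0.3,
                                          random_state=0, stratify=y_bal)
clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=10), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```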

16 pages, 2120 KiB  
Article
API Message-Driven Regression Testing Framework
by Emine Dumlu Demircioğlu and Oya Kalipsiz
Electronics 2022, 11(17), 2671; https://doi.org/10.3390/electronics11172671 - 26 Aug 2022
Cited by 5 | Viewed by 2080
Abstract
With the increase in the number of APIs and interconnected applications, API testing has become a critical part of the software testing process. Particularly for business-critical systems that use API messages, the importance of repetitive API tests increases. Successfully performing repetitive manual API testing for a large number of test scenarios in large enterprise applications is even more difficult, because human error makes it hard to execute thousands of manually written tests with high precision every time. Furthermore, the existing API test automation tools on the market cannot be integrated into all business domains due to their dependence on specific applications; these tools generally support web APIs over the HTTP protocol. Hence, this study is motivated by the lack of API message-driven regression testing frameworks in the particular area in which API messages are used in client-server communication. This study was prepared to close the gap in a specific domain that uses business-domain APIs, rather than HTTP, in client-server communication. We propose a novel approach based on the use of network packets for regression testing. We developed a proof-of-concept test automation tool implementing our approach and evaluated it in the financial domain. Unlike prior studies, our approach enables the use of real data packets in software testing. The use of network packets increases the generalizability of the framework. Overall, our study reports remarkable reuse capacity and makes a significant impact on a real-world business-critical system by reducing effort and increasing the automation level of API regression testing.
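
The essence of packet-based regression testing is to replay previously captured request messages against the system under test and diff the responses against the captured ones. The bare-bones sketch below assumes a plain TCP service and an invented message format; real captures, framing, and session handling would of course be domain specific.

```python
import socket

recorded = [  # hypothetical (request_bytes, expected_response_bytes) pairs from a capture
    (b"35=D;55=XYZ;38=100\n", b"35=8;39=0\n"),
    (b"35=D;55=ABC;38=50\n",  b"35=8;39=0\n"),
]

def replay(host="127.0.0.1", port=9000):
    """Send each recorded request and collect mismatching responses."""
    failures = []
    for request, expected in recorded:
        with socket.create_connection((host, port), timeout=5) as conn:
            conn.sendall(request)
            actual = conn.recv(4096)
        if actual != expected:
            failures.append((request, expected, actual))
    return failures

if __name__ == "__main__":
    for request, expected, actual in replay():
        print("regression:", request, "expected", expected, "got", actual)
```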

15 pages, 330 KiB  
Article
Adaptation of the Four Levels of Test Maturity Model Integration with Agile and Risk-Based Test Techniques
by Ahmet Unudulmaz, Mustafa Özgür Cingiz and Oya Kalıpsız
Electronics 2022, 11(13), 1985; https://doi.org/10.3390/electronics11131985 - 24 Jun 2022
Cited by 1 | Viewed by 2695
Abstract
Failed projects, erroneously managed processes, late delivery of products and projects, excessive cost increases, and an inability to analyze customer requests correctly pave the way for the use of agile processes in software development and make test processes more important day by day. In particular, the situation is complicated further by the inability to handle testing processes and risks properly under time and cost pressure, by the differences in software development methods between projects, and by the failure to integrate risk management and risk analysis studies conducted within a company or institution with its software development methods. To eliminate such problems, it is recommended to use agile process methods and test maturity model integration (TMMI), together with risk-based testing techniques and user scenario testing techniques. In this study, the agile process transformation of a company operating in industrial factory automation systems was followed for two and a half years. This study was prepared to close the gap in the literature on the integration of TMMI level 2, TMMI level 3, and TMMI level 4 with the SAFe methodology and agile processes. Our research covers the use of all TMMI level sub-steps with both agile process practices and selected test practices (risk-based testing and user scenario testing techniques). TMMI coverage percentages were determined as 92.85% for TMMI level 2, 92.9% for TMMI level 3, and 100% for TMMI level 4. In addition, agile process adaptation metrics and their measurements between project versions are presented, and their contribution to quality is discussed.
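
The coverage percentages quoted above are, in essence, ratios of satisfied assessment items per TMMI level; a worked example of that arithmetic is shown below with illustrative counts (the paper's actual item counts may differ).

```python
# Hypothetical counts of satisfied vs. total assessment items per TMMI level.
satisfied = {"TMMI level 2": 13, "TMMI level 3": 13, "TMMI level 4": 8}
total     = {"TMMI level 2": 14, "TMMI level 3": 14, "TMMI level 4": 8}

for level in satisfied:
    print(f"{level}: {100 * satisfied[level] / total[level]:.2f}% coverage")
# 13/14 is about 92.86%, close to the reported 92.85% and 92.9% figures,
# though the real assessments may use different item counts and rounding.
```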

Review

26 pages, 9932 KiB  
Review
A Comprehensive Bibliometric Assessment on Software Testing (2016–2021)
by Shehnila Zardari, Sana Alam, Hamad Abosaq Al Salem, Mana Saleh Al Reshan, Asadullah Shaikh, Aneeq Fayyaz Karim Malik, Muhammad Masood ur Rehman and Haralambos Mouratidis
Electronics 2022, 11(13), 1984; https://doi.org/10.3390/electronics11131984 - 24 Jun 2022
Cited by 15 | Viewed by 3251
Abstract
This research study provides a comprehensive bibliometric assessment of the field of Software Testing (ST). The dynamic evolution of the field of ST is evident from the publication rate over the last six years. The study was carried out to provide insight into the field of ST from various bibliometric aspects. Our methodological approach includes dividing the six-year time frame into two symmetric but different periods, 2016–2018 and 2019–2021, comprising a total of 75,098 records. VOSviewer is used to analyze the collaboration network of countries and to perform co-word assessment. The Bibliometrix (RStudio) analysis tool is used to evaluate research themes/topics. The year 2019 leads in publication rate, whereas a decrease in publication frequency is observed for 2020 and 2021. Our study shows the influence of ST on other research domains, as depicted in different research areas; the impact of ST in the electrical and electronics domain is especially notable. Most of the research publications are from the USA and China, as they are among the most resourceful countries. On the whole, the majority of the publications are from Asian countries. Collaboration networks among countries demonstrate that the greater the collaboration, the greater the research output. Co-word analysis presents the relatedness of documents based on their keywords. A topic dendrogram is generated based on the identified research themes. Although English is the leading language, prominent studies are also present in other languages. This research study provides a comprehensive analysis based on 12 informative research questions.
