Software, Volume 3, Issue 1 (March 2024) – 6 articles

Cover Story: This paper introduces a novel method for enhancing product recommender systems, blending unsupervised models like K-means clustering, content-based filtering (CBF), and hierarchical clustering with the state-of-the-art GPT-4 large language model (LLM). The groundbreaking aspect lies in leveraging GPT-4 for model evaluation, utilizing its advanced natural language understanding to elevate recommendation precision. This approach empowers e-commerce with advanced unsupervised algorithms, while GPT-4 refines semantic understanding and yields more personalized recommendations. Experimental results validate the framework’s superiority, advancing recommender system technology and providing businesses with a scalable solution to optimize product recommendations.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
39 pages, 1682 KiB  
Article
A Process for Monitoring the Impact of Architecture Principles on Sustainability: An Industrial Case Study
by Markus Funke, Patricia Lago, Roberto Verdecchia and Roel Donker
Software 2024, 3(1), 107-145; https://doi.org/10.3390/software3010006 - 13 Mar 2024
Viewed by 1317
Abstract
Architecture principles affect a software system holistically. Given their alignment with a business strategy, they should be incorporated within the validation process covering aspects of sustainability. However, current research discusses the influence of architecture principles on sustainability in a limited context. Our objective was to introduce a reusable process for monitoring and evaluating the impact of architecture principles on sustainability from a software architecture perspective. We sought to demonstrate the application of such a process in professional practice. A qualitative case study was conducted in the context of a Dutch airport management company. Data collection involved a case analysis and the execution of two rounds of expert interviews. We (i) identified a set of case-related key performance indicators, (ii) utilized commonly accepted measurement tools, and (iii) employed graphical representations in the form of spider charts to monitor the sustainability impacts. The real-world observations were evaluated through a concluding focus group. Our findings indicated that architecture principles were a feasible mechanism with which to address sustainability across all different architecture layers within the enterprise. The experts considered the sustainability analysis valuable in guiding the software architecture process towards sustainability. With the emphasis on principles, we facilitate industry adoption by embedding sustainability in existing mechanisms.
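
The monitoring process pairs case-specific KPIs with spider-chart visualizations. As a rough illustration of that last step, the following minimal Python sketch (not taken from the paper; the KPI names and values are invented) plots two hypothetical measurement rounds on a spider chart:

```python
# Minimal sketch: monitoring hypothetical sustainability KPIs on a spider chart.
# KPI names and scores are illustrative assumptions, not the paper's data.
import numpy as np
import matplotlib.pyplot as plt

kpis = ["Energy use", "Maintainability", "Cost", "Performance", "Data volume"]
baseline = [0.6, 0.7, 0.5, 0.8, 0.4]   # hypothetical scores, normalized to [0, 1]
current = [0.4, 0.8, 0.6, 0.7, 0.5]

angles = np.linspace(0, 2 * np.pi, len(kpis), endpoint=False).tolist()
angles += angles[:1]  # repeat the first angle to close the polygon

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for label, values in [("baseline", baseline), ("after change", current)]:
    closed = values + values[:1]
    ax.plot(angles, closed, label=label)
    ax.fill(angles, closed, alpha=0.1)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(kpis)
ax.legend()
plt.show()
```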

26 pages, 1037 KiB  
Review
Emergent Information Processing: Observations, Experiments, and Future Directions
by Jiří Kroc
Software 2024, 3(1), 81-106; https://doi.org/10.3390/software3010005 - 5 Mar 2024
Viewed by 1105
Abstract
Science is currently becoming aware of the challenges in understanding the very root mechanisms of massively parallel computations that are observed in literally all scientific disciplines, ranging from cosmology to physics, chemistry, biochemistry, and biology. This leads us to the main motivation and simultaneously to the central thesis of this review: “Can we design artificial, massively parallel, self-organized, emergent, error-resilient computational environments?” The thesis is studied solely on cellular automata. Initially, an overview of the basic building blocks enabling us to reach this end goal is provided. Important information dealing with this topic is reviewed along with highly expressive animations generated by the open-source Python cellular automata software GoL-N24. A large number of simulations, together with examples and counter-examples and a concluding list of future directions, give hints and partial answers to the main thesis. Together, these pose the crucial question of whether there is something deeper beyond the Turing machine theoretical description of massively parallel computing. The perspective and future directions of this research, including applications in robotics and biology, are discussed in light of known information.
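
For readers unfamiliar with the substrate of this thesis, the following minimal Python/NumPy sketch shows one synchronous update step of Conway's Game of Life, the classic cellular automaton in which emergent structures self-organize. It is a generic illustration, not code from GoL-N24:

```python
# Minimal sketch: one synchronous Game of Life step on a toroidal grid.
# Generic illustration of a massively parallel, self-organizing computation.
import numpy as np

def gol_step(grid: np.ndarray) -> np.ndarray:
    """Apply one Game of Life update to a 2D array of 0/1 cells (wrapped edges)."""
    # Sum the eight neighbors of every cell at once by rolling the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(grid.dtype)

rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(64, 64))
for _ in range(100):
    grid = gol_step(grid)  # gliders and oscillators emerge from random noise
```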

19 pages, 1296 KiB  
Article
Precision-Driven Product Recommendation Software: Unsupervised Models, Evaluated by GPT-4 LLM for Enhanced Recommender Systems
by Konstantinos I. Roumeliotis, Nikolaos D. Tselikas and Dimitrios K. Nasiopoulos
Software 2024, 3(1), 62-80; https://doi.org/10.3390/software3010004 - 29 Feb 2024
Cited by 1 | Viewed by 2460
Abstract
This paper presents a pioneering methodology for refining product recommender systems, introducing a synergistic integration of unsupervised models—K-means clustering, content-based filtering (CBF), and hierarchical clustering—with the cutting-edge GPT-4 large language model (LLM). Its innovation lies in utilizing GPT-4 for model evaluation, harnessing its advanced natural language understanding capabilities to enhance the precision and relevance of product recommendations. A Flask-based API simplifies its implementation for e-commerce owners, allowing for the seamless training and evaluation of the models using CSV-formatted product data. The unique aspect of this approach lies in its ability to empower e-commerce with sophisticated unsupervised recommender system algorithms, while the GPT model significantly contributes to refining the semantic context of product features, resulting in a more personalized and effective product recommendation system. The experimental results underscore the superiority of this integrated framework, marking a significant advancement in the field of recommender systems and providing businesses with an efficient and scalable solution to optimize their product recommendations.
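
As a rough illustration of the unsupervised side of such a pipeline (the paper's actual code, Flask API, and GPT-4 evaluation step are not reproduced here), the following Python sketch clusters products by TF-IDF text features with K-means and recommends items from the same cluster. All product data and names are invented:

```python
# Hedged sketch: content-based K-means recommendations over product text.
# Illustrative only; not the paper's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

products = [
    "wireless noise-cancelling headphones",
    "bluetooth over-ear headphones",
    "stainless steel chef knife",
    "ceramic kitchen knife set",
]

X = TfidfVectorizer().fit_transform(products)  # content-based features
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

def recommend(idx: int, top_k: int = 2) -> list[int]:
    """Return indices of the most similar products within the same cluster."""
    same = [i for i, l in enumerate(labels) if l == labels[idx] and i != idx]
    sims = cosine_similarity(X[idx], X[same]).ravel()
    return [same[i] for i in sims.argsort()[::-1][:top_k]]

print(recommend(0))  # headphones cluster -> [1]
```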

15 pages, 4331 KiB  
Article
Deep-SDM: A Unified Computational Framework for Sequential Data Modeling Using Deep Learning Models
by Nawa Raj Pokhrel, Keshab Raj Dahal, Ramchandra Rimal, Hum Nath Bhandari and Binod Rimal
Software 2024, 3(1), 47-61; https://doi.org/10.3390/software3010003 - 28 Feb 2024
Cited by 2 | Viewed by 2327
Abstract
Deep-SDM is a unified layer framework built on TensorFlow/Keras and written in Python 3.12. The framework aligns with modular engineering principles for its design and development strategy. Transparency, reproducibility, and recombinability are the framework’s primary design criteria. The platform can extract valuable insights from numerical and text data and utilize them to predict future values by implementing long short-term memory (LSTM), gated recurrent unit (GRU), and convolutional neural network (CNN) models. Its end-to-end machine learning pipeline involves a sequence of tasks, including data exploration, input preparation, model construction, hyperparameter tuning, performance evaluation, visualization of results, and statistical analysis. The complete process, from data import to model selection, is systematic and carefully organized, encapsulated into a unified whole. Multiple subroutines work together to provide a cohesive, user-friendly pipeline. We utilized the Deep-SDM framework to predict the Nepal Stock Exchange (NEPSE) index to validate its reproducibility and robustness and observed impressive results.
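
The following minimal Keras sketch illustrates the kind of sequential model such a framework wraps: an LSTM predicting the next value of a univariate series from a sliding window. The window size, layer sizes, and synthetic data are illustrative assumptions, not Deep-SDM's actual API or configuration:

```python
# Hedged sketch: one-step-ahead forecasting with a sliding-window LSTM.
# Illustrative stand-in; not Deep-SDM's code.
import numpy as np
from tensorflow import keras

def make_windows(series: np.ndarray, window: int):
    """Slice a 1D series into (samples, window, 1) inputs and next-value targets."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    return X[..., None], series[window:]

series = np.sin(np.linspace(0, 20, 500)).astype("float32")  # stand-in for an index
X, y = make_windows(series, window=10)

model = keras.Sequential([
    keras.layers.Input(shape=(10, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[:1], verbose=0))  # forecast for the step after the first window
```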

19 pages, 593 KiB  
Article
Automating SQL Injection and Cross-Site Scripting Vulnerability Remediation in Code
by Kedar Sambhus and Yi Liu
Software 2024, 3(1), 28-46; https://doi.org/10.3390/software3010002 - 12 Jan 2024
Cited by 1 | Viewed by 2496
Abstract
Internet-based distributed systems dominate contemporary software applications. To enable these applications to operate securely, software developers must mitigate the threats posed by malicious actors. For instance, the developers must identify vulnerabilities in the software and eliminate them. However, to do so manually is a costly and time-consuming process. To reduce these costs, we designed and implemented Code Auto-Remediation for Enhanced Security (CARES), a web application that automatically identifies and remediates the two most common types of vulnerabilities in Java-based web applications: SQL injection (SQLi) and Cross-Site Scripting (XSS). As is shown by a case study presented in this paper, CARES mitigates these vulnerabilities by refactoring the Java code using the Intercepting Filter design pattern. The flexible, microservice-based CARES design can be readily extended to support other injection vulnerabilities, remediation design patterns, and programming languages.
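
The paper performs its remediation in Java; as a language-transposed sketch of the Intercepting Filter idea only, the following Python snippet passes a request through a chain of filters that neutralize XSS and SQLi payloads before it reaches its target. The filter logic is deliberately simplistic and purely illustrative:

```python
# Hedged sketch of the Intercepting Filter pattern (transposed from Java).
# Real SQLi remediation uses parameterized queries; the quote-stripping
# filter below merely stands in for that refactoring.
import html

class Filter:
    def apply(self, request: dict) -> dict:
        raise NotImplementedError

class XSSFilter(Filter):
    def apply(self, request):
        # Escape HTML metacharacters so injected <script> tags render inert.
        return {k: html.escape(v) for k, v in request.items()}

class SQLiFilter(Filter):
    def apply(self, request):
        # Toy stand-in for parameterized-query refactoring.
        return {k: v.replace("'", "").replace('"', "") for k, v in request.items()}

class FilterChain:
    """Run every filter over the request, then hand it to the target handler."""
    def __init__(self, filters: list[Filter], target):
        self.filters, self.target = filters, target

    def handle(self, request: dict):
        for f in self.filters:
            request = f.apply(request)
        return self.target(request)

chain = FilterChain([XSSFilter(), SQLiFilter()], target=lambda r: r)
print(chain.handle({"q": "<script>alert(1)</script>' OR 1=1 --"}))
```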

27 pages, 330 KiB  
Article
A Survey on Factors Preventing the Adoption of Automated Software Testing: A Principal Component Analysis Approach
by George Murazvu, Simon Parkinson, Saad Khan, Na Liu and Gary Allen
Software 2024, 3(1), 1-27; https://doi.org/10.3390/software3010001 - 2 Jan 2024
Cited by 1 | Viewed by 2314
Abstract
Automated software testing is a crucial yet resource-intensive aspect of software development. This resource burden hinders widespread adoption, with expertise and cost being the primary obstacles. This paper focuses on automated testing driven by manually created test cases, acknowledging its advantages while critically analysing the implications across the various development stages that affect its adoption. Additionally, it analyses the differences in perception between those in nontechnical and technical roles, where nontechnical roles (e.g., management) predominantly strive to reduce costs and delivery time, whereas technical roles are often driven by quality and completeness. This study investigates differences in attitudes toward automated testing (AtAT), specifically focusing on why it is not adopted. This article presents a survey conducted among software industry professionals spanning various roles to determine common trends and draw conclusions. A two-stage approach is presented, comprising a comprehensive descriptive analysis and the use of Principal Component Analysis. In total, 81 participants received a series of 22 questions, and their responses were compared across job role types and experience levels. In summary, six key findings are presented, covering expertise, time, cost, tools and techniques, utilisation, organisation, and capacity.
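
As a hedged sketch of the second analysis stage (the response data here are randomly generated stand-ins, not the study's data), the following Python snippet runs Principal Component Analysis over standardized Likert-scale responses shaped like the study's 81 participants by 22 questions:

```python
# Hedged sketch: PCA over standardized survey responses.
# The response matrix is synthetic; only its 81 x 22 shape follows the abstract.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(81, 22))  # 1-5 Likert, 81 participants x 22 questions

X = StandardScaler().fit_transform(responses)  # z-score each question
pca = PCA(n_components=5).fit(X)

print(pca.explained_variance_ratio_)  # variance captured by each component
# Question loadings on the first component hint at which items move together.
print(np.round(pca.components_[0], 2))
```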
