Topic Editors

Dr. Ali Safaa Sadiq Al Shakarchi
Department of Computer Science, Nottingham Trent University, Clifton Lane, Nottingham NG11 8NS, UK

Dr. Houbing Song
Department of Electrical Engineering and Computer Science, Embry-Riddle Aeronautical University, Daytona Beach, FL 32114, USA

Dr. Ahmad Fadhil Yusof
Department of Applied Computing, Universiti Teknologi Malaysia, Johor Bahru 81310, Malaysia

Dr. Sushil Kumar
School of Computer & Systems Sciences, Jawaharlal Nehru University, New Delhi 110067, India

Dr. Omprakash Kaiwartya
School of Science and Technology, Nottingham Trent University, Nottingham NG11 8NS, UK

Modeling and Practice for Trustworthy and Secure Systems

Abstract submission deadline: closed (30 June 2024)
Manuscript submission deadline: closed (31 August 2024)

Topic Information

Dear Colleagues,

There is a fundamental question that can be asked of any algorithm-based automated decision system: how much do you trust its output? To address this question, we need to model, develop and practice a trustworthy and secure approach. Such a system should fulfill stakeholders’ needs regarding security, privacy, responsibility/accountability and reliability, as well as governance bodies’ rules and regulations. Many solutions have been outlined to address these requirements, mostly focused on the use of automated models based on Artificial Intelligence (AI). Yet a crucial question remains open: can AI always make the ‘right’ decision in real-time applications? To answer it, automated data-driven intelligent systems should be designed and tested so that their decisions can be described or explained for verification and validation by a human expert in the system’s context. In this way, experts will be able to account for the use of such systems and to develop trustworthy AI models for real-time decision-making in high-risk AI applications. Additionally, the secure-by-design (SBD) paradigm should be optimised so that the processes and data involved in developing trustworthy systems are checked automatically.

Dr. Ali Safaa Sadiq Al Shakarchi
Dr. Houbing Song
Dr. Ahmad Fadhil Yusof
Dr. Sushil Kumar
Dr. Omprakash Kaiwartya
Topic Editors

Keywords

  • trustworthy systems for connected vehicles
  • trustworthy systems for internet of things
  • trustworthy systems for industrial networks
  • modelling and practice for wireless systems
  • modelling and practice for internet of things
  • modelling and practice for vehicular communications
  • information security in internet of things
  • communication systems optimization for performance improvement
  • computing optimization for system efficiency

Participating Journals

Journal Name    Impact Factor    CiteScore    Launched Year    First Decision (median)    APC
AI              3.1              7.2          2020             17.6 Days                  CHF 1600
Algorithms      1.8              4.1          2008             15 Days                    CHF 1600
Computers       2.6              5.4          2012             17.2 Days                  CHF 1800
IoT             -                8.5          2020             15.9 Days                  CHF 1200
Mathematics     2.3              4.0          2013             17.1 Days                  CHF 2600
Sensors         3.4              7.3          2001             16.8 Days                  CHF 2600

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to enjoy the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea from being stolen with this time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (9 papers)

29 pages, 466 KiB  
Article
Elimination Algorithms for Skew Polynomials with Applications in Cybersecurity
by Raqeeb Rasheed, Ali Safaa Sadiq and Omprakash Kaiwartya
Mathematics 2024, 12(20), 3258; https://doi.org/10.3390/math12203258 - 17 Oct 2024
Abstract
It is evident that skew polynomials offer promising directions for developing cryptographic schemes. This paper focuses on exploring skew polynomials and studying their properties with the aim of investigating their potential applications in fields such as cryptography and combinatorics. We begin by deriving the concept of resultants for bivariate skew polynomials. Then, we employ the derived resultant to incrementally eliminate indeterminates in skew polynomial systems, utilising both direct and modular approaches. Finally, we discuss some applications of the derived resultant, including cryptographic schemes (such as Diffie–Hellman) and combinatorial identities (such as Pascal’s identity). We start by considering a bivariate skew polynomial system with two indeterminates; our intention is to isolate and eliminate one of the indeterminates to reduce the system to a simpler form (that is, relying only on one indeterminate in this case). The methodology is composed of two main techniques; in the first technique, we apply our definition of a (bivariate) resultant via a Sylvester-style matrix directly from the polynomials’ coefficients, while the second is based on modular methods where we compute the resultant by using evaluation and interpolation approaches. The idea of this second technique is that instead of computing the resultant directly from the coefficients, we propose to evaluate the polynomials at a set of valid points to compute their corresponding partial resultants first; then, we can deduce the original resultant by combining all these partial resultants using an interpolation technique by utilising a theorem we have established.
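As a concrete illustration of the objects these constructions operate on, the short Python sketch below implements the skew-polynomial commutation rule x·a = σ(a)·x for the (entirely illustrative) choice σ = complex conjugation with zero derivation; it is not the paper's resultant or elimination algorithm, and all names are our own.

    # Minimal sketch of the skew polynomial ring C[x; sigma] with sigma = complex
    # conjugation, i.e. the commutation rule x * a = sigma(a) * x (zero derivation).
    # A polynomial is a list of complex coefficients; index i holds the x^i term.

    def sigma(a, power=1):
        """Apply the automorphism (complex conjugation) `power` times."""
        return a.conjugate() if power % 2 else a

    def skew_mul(p, q):
        """Multiply skew polynomials using a_i x^i * b_j x^j = a_i sigma^i(b_j) x^(i+j)."""
        result = [0j] * (len(p) + len(q) - 1)
        for i, a in enumerate(p):
            for j, b in enumerate(q):
                result[i + j] += a * sigma(b, i)
        return result

    # Non-commutativity: x * i = -i*x, while i * x = i*x.
    print(skew_mul([0, 1], [1j]))   # [0j, -1j]  ->  -i*x
    print(skew_mul([1j], [0, 1]))   # [0j, 1j]   ->   i*x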

15 pages, 3690 KiB  
Article
Not So Robust after All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks
by Roman Garaev, Bader Rasheed and Adil Mehmood Khan
Algorithms 2024, 17(4), 162; https://doi.org/10.3390/a17040162 - 19 Apr 2024
Abstract
Deep neural networks (DNNs) have gained prominence in various applications, but remain vulnerable to adversarial attacks that manipulate data to mislead a DNN. This paper aims to challenge the efficacy and transferability of two contemporary defense mechanisms against adversarial attacks: (a) robust training and (b) adversarial training. The former suggests that training a DNN on a data set consisting solely of robust features should produce a model resistant to adversarial attacks. The latter creates an adversarially trained model that learns to minimise an expected training loss over a distribution of bounded adversarial perturbations. We reveal a significant lack in the transferability of these defense mechanisms and provide insight into the potential dangers posed by L∞-norm attacks previously underestimated by the research community. Such conclusions are based on extensive experiments involving (1) different model architectures, (2) the use of canonical correlation analysis, (3) visual and quantitative analysis of the neural network’s latent representations, (4) an analysis of networks’ decision boundaries and (5) the use of the equivalence of L2 and L∞ perturbation norm theories.
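For readers unfamiliar with bounded perturbations of this kind, here is a minimal NumPy sketch of a one-step L∞ (sign-gradient) attack on a toy logistic model; the weights and the 0-1 input range are made up for illustration and have nothing to do with the authors' experimental setup.

    import numpy as np

    # Toy logistic "classifier" with assumed weights (not a trained model).
    w, b = np.array([2.0, -1.0]), 0.5

    def predict_prob(x):
        """P(class 1 | x) for a linear logit."""
        return 1.0 / (1.0 + np.exp(-(w @ x + b)))

    def fgsm_linf(x, y_true, eps):
        """One-step L_inf perturbation: move each coordinate by +/- eps along the
        sign of the loss gradient. For cross-entropy with a linear logit,
        d(loss)/dx = (p - y) * w."""
        grad = (predict_prob(x) - y_true) * w
        return np.clip(x + eps * np.sign(grad), 0.0, 1.0)   # keep inputs in [0, 1]

    x = np.array([0.4, 0.6])
    x_adv = fgsm_linf(x, y_true=1.0, eps=0.1)
    print(predict_prob(x), predict_prob(x_adv))   # confidence in the true class drops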

28 pages, 1233 KiB  
Article
Trustworthy Digital Representations of Analog Information—An Application-Guided Analysis of a Fundamental Theoretical Problem in Digital Twinning
by Holger Boche, Yannik N. Böck, Ullrich J. Mönich and Frank H. P. Fitzek
Algorithms 2023, 16(11), 514; https://doi.org/10.3390/a16110514 - 9 Nov 2023
Abstract
This article compares two methods of algorithmically processing bandlimited time-continuous signals in light of the general problem of finding “suitable” representations of analog information on digital hardware. Albeit abstract, we argue that this problem is fundamental in digital twinning, a signal-processing paradigm the upcoming 6G communication-technology standard relies on heavily. Using computable analysis, we formalize a general framework of machine-readable descriptions for representing analytic objects on Turing machines. Subsequently, we apply this framework to sampling and interpolation theory, providing a thoroughly formalized method for digitally processing the information carried by bandlimited analog signals. We investigate discrete-time descriptions, which form the implicit quasi-standard in digital signal processing, and establish continuous-time descriptions that take the signal’s continuous-time behavior into account. Motivated by an exemplary application of digital twinning, we analyze a textbook model of digital communication systems accordingly. We show that technologically fundamental properties, such as a signal’s (Banach-space) norm, can be computed from continuous-time, but not from discrete-time descriptions of the signal. Given the high trustworthiness requirements within 6G, e.g., employed software must satisfy assessment criteria in a provable manner, we conclude that the problem of “trustworthy” digital representations of analog information is indeed essential to near-future information technology.
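The gap between discrete-time and continuous-time descriptions can be made tangible with a small numerical example of our own (it does not use the paper's computable-analysis framework): a bandlimited signal whose peak lies between its Nyquist-rate samples, so the largest sample magnitude underestimates the sup-norm recovered by sinc interpolation.

    import numpy as np

    def sinc_interp(sample_times, samples, t):
        """Shannon reconstruction x(t) = sum_n x[n] * sinc(t - n) for unit-rate samples."""
        return np.sum(samples * np.sinc(t[:, None] - sample_times), axis=1)

    n = np.arange(-20, 21)        # integer sampling instants
    samples = np.sinc(n - 0.5)    # samples of x(t) = sinc(t - 0.5); its peak sits at t = 0.5

    t_fine = np.linspace(-5.0, 5.0, 2001)
    x_fine = sinc_interp(n, samples, t_fine)

    print(np.max(np.abs(samples)))   # ~0.64: best estimate from the samples alone
    print(np.max(np.abs(x_fine)))    # ~0.99: estimate from the (truncated) reconstruction;
                                     # the true continuous-time sup-norm is 1.0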

21 pages, 1254 KiB  
Article
Detecting and Processing Unsuspected Sensitive Variables for Robust Machine Learning
by Laurent Risser, Agustin Martin Picard, Lucas Hervier and Jean-Michel Loubes
Algorithms 2023, 16(11), 510; https://doi.org/10.3390/a16110510 - 7 Nov 2023
Abstract
The problem of algorithmic bias in machine learning has recently gained a lot of attention due to its potentially strong impact on our societies. In much the same manner, algorithmic biases can alter industrial and safety-critical machine learning applications, where high-dimensional inputs are used. This issue has, however, been mostly left out of the spotlight in the machine learning literature. Contrary to societal applications, where a set of potentially sensitive variables, such as gender or race, can be defined by common sense or by regulations to draw attention to potential risks, the sensitive variables are often unsuspected in industrial and safety-critical applications. In addition, these unsuspected sensitive variables may be indirectly represented as a latent feature of the input data. For instance, the predictions of an image classifier may be altered by reconstruction artefacts in a small subset of the training images. This raises serious and well-founded concerns about the commercial deployment of AI-based solutions, especially in a context where new regulations address bias issues in AI. The purpose of our paper is, then, to first give a large overview of recent advances in robust machine learning. Then, we propose a new procedure to detect and to treat such unknown biases. As far as we know, no equivalent procedure has been proposed in the literature so far. The procedure is also generic enough to be used in a wide variety of industrial contexts. Its relevance is demonstrated on a set of satellite images used to train a classifier. In this illustration, our technique detects that a subset of the training images has reconstruction faults, leading to systematic prediction errors that would have been unsuspected using conventional cross-validation techniques.
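To make the idea of an "unsuspected sensitive variable" concrete, here is a small, hypothetical NumPy sketch (not the authors' procedure): it flags the latent dimension whose high values are most strongly associated with the model's errors, mimicking a hidden subgroup such as images with reconstruction artefacts.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 1000, 16
    latent = rng.normal(size=(n, d))          # latent features of the training samples
    artefact = latent[:, 3] > 1.0             # hidden subgroup driven by dimension 3
    errors = (rng.random(n) < np.where(artefact, 0.45, 0.05)).astype(float)

    def error_gap(z, errors, threshold=1.0):
        """Error-rate gap between samples above and below a threshold on one latent dim."""
        high = z > threshold
        return abs(errors[high].mean() - errors[~high].mean())

    gaps = np.array([error_gap(latent[:, j], errors) for j in range(d)])
    print("most suspicious latent dimension:", int(np.argmax(gaps)))   # expected: 3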

14 pages, 1708 KiB  
Article
L-PRNU: Low-Complexity Privacy-Preserving PRNU-Based Camera Attribution Scheme
by Alan Huang and Justie Su-Tzu Juan
Computers 2023, 12(10), 212; https://doi.org/10.3390/computers12100212 - 20 Oct 2023
Abstract
A personal camera fingerprint can be created from a user’s social media images by using Photo Response Non-Uniformity (PRNU) noise, and it can then be used to identify whether an unknown picture was taken with that user’s camera. Social media has become ubiquitous in recent years, and many of us regularly share photos of our daily lives online. However, because a PRNU-based camera fingerprint is so easy to create, the privacy leakage problem must be taken more seriously. To address this issue, a security scheme based on Boneh–Goh–Nissim (BGN) encryption was proposed in 2021. While effective, BGN encryption incurs a high run-time computational overhead due to its power computation. We therefore devised a new scheme that employs polynomial encryption and pixel confusion methods, resulting in a computation time that is over ten times faster than BGN encryption. It also removes the previous method’s need to send only the critical pixels to a Third-Party Expert. Furthermore, our scheme does not require decryption, as polynomial encryption and pixel confusion do not alter the correlation value. Consequently, the presented scheme surpasses previous methods in both theoretical analysis and experimental performance, being faster and more capable.
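For context, the sketch below shows the classical (non-private) PRNU pipeline in simplified form: average noise residuals into a fingerprint and attribute a query image via normalized correlation. It is our own toy illustration with a synthetic sensor pattern, not the paper's privacy-preserving polynomial-encryption scheme.

    import numpy as np

    rng = np.random.default_rng(0)

    def noise_residual(img, k=3):
        """Residual = image minus a crude k x k box-filter denoising."""
        pad = np.pad(img, k // 2, mode="edge")
        smooth = np.zeros_like(img, dtype=float)
        for di in range(k):
            for dj in range(k):
                smooth += pad[di:di + img.shape[0], dj:dj + img.shape[1]]
        return img - smooth / (k * k)

    def fingerprint(images):
        """Average the residuals of several images taken with the same camera."""
        return np.mean([noise_residual(im) for im in images], axis=0)

    def ncc(a, b):
        """Normalized cross-correlation between two residuals."""
        a, b = a - a.mean(), b - b.mean()
        return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Synthetic demo: smooth scenes modulated by a fixed multiplicative sensor pattern.
    xx, yy = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
    prnu = 0.03 * rng.normal(size=(64, 64))                          # "camera A" pattern
    scenes = [0.5 + 0.4 * np.sin(4 * xx + k) * np.cos(3 * yy) for k in range(6)]

    fp = fingerprint([s * (1 + prnu) for s in scenes[:5]])           # fingerprint from 5 photos
    same = noise_residual(scenes[5] * (1 + prnu))                    # new photo, same camera
    other = noise_residual(scenes[5] * (1 + 0.03 * rng.normal(size=(64, 64))))
    print(round(ncc(same, fp), 3), round(ncc(other, fp), 3))         # large vs. near zero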

20 pages, 1708 KiB  
Article
Hashcash Tree, a Data Structure to Mitigate Denial-of-Service Attacks
by Mario Alviano
Algorithms 2023, 16(10), 462; https://doi.org/10.3390/a16100462 - 30 Sep 2023
Abstract
Client puzzle protocols are widely adopted mechanisms for defending against resource exhaustion denial-of-service (DoS) attacks. Among the simplest puzzles used by such protocols, there are cryptographic challenges requiring the finding of hash values with some required properties. However, by the way hash functions are designed, predicting the difficulty of finding hash values with non-trivial properties is impossible. This is the main limitation of simple proof-of-work (PoW) algorithms, such as hashcash. We propose a new data structure combining hashcash and Merkle trees, also known as hash trees. In the proposed data structure, called hashcash tree, all hash values are required to start with a given number of zeros (as for hashcash), and hash values of internal nodes are obtained by hashing the hash values of child nodes (as for hash trees). The client is forced to compute all hash values, but only those in the path from a leaf to the root are required by the server to verify the proof of work. The proposed client puzzle is implemented and evaluated empirically to show that the difficulty of puzzles can be accurately controlled.
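The data structure is straightforward to prototype; the Python sketch below reflects our reading of the construction (SHA-256, a hex-zero difficulty target, a power-of-two number of leaves and an 8-byte nonce are our own assumptions, not details taken from the paper).

    import hashlib

    def solve(payload: bytes, zeros: int):
        """Find (nonce, digest) such that SHA-256(payload || nonce) starts with `zeros` zero hex digits."""
        target = "0" * zeros
        nonce = 0
        while True:
            digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).hexdigest()
            if digest.startswith(target):
                return nonce, digest
            nonce += 1

    def build_tree(leaf_payloads, zeros):
        """Solve a hashcash puzzle per node, bottom-up; an internal node's payload is the
        concatenation of its children's digests. Assumes a power-of-two number of leaves."""
        level = [solve(p, zeros) for p in leaf_payloads]
        levels = [level]
        while len(level) > 1:
            level = [solve((level[i][1] + level[i + 1][1]).encode(), zeros)
                     for i in range(0, len(level), 2)]
            levels.append(level)
        return levels

    # The client solves every node; the server re-hashes only the nodes on one
    # leaf-to-root path (plus sibling digests) to spot-check the proof of work.
    levels = build_tree([b"challenge-0", b"challenge-1", b"challenge-2", b"challenge-3"], zeros=2)
    print("root digest:", levels[-1][0][1])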

25 pages, 1719 KiB  
Article
An Improved Dandelion Optimizer Algorithm for Spam Detection: Next-Generation Email Filtering System
by Mohammad Tubishat, Feras Al-Obeidat, Ali Safaa Sadiq and Seyedali Mirjalili
Computers 2023, 12(10), 196; https://doi.org/10.3390/computers12100196 - 28 Sep 2023
Abstract
Spam emails have become a pervasive issue in recent years, as internet users receive increasing amounts of unwanted or fake emails. To combat this issue, automatic spam detection methods have been proposed, which aim to classify emails into spam and non-spam categories. Machine learning techniques have been utilized for this task with considerable success. In this paper, we introduce a novel approach to spam email detection by presenting significant advancements to the Dandelion Optimizer (DO) algorithm. The DO is a relatively new nature-inspired optimization algorithm inspired by the flight of dandelion seeds. While the DO shows promise, it faces challenges, especially in high-dimensional problems such as feature selection for spam detection. Our primary contributions focus on enhancing the DO algorithm. Firstly, we introduce a new local search algorithm based on flipping (LSAF), designed to improve the DO’s ability to find the best solutions. Secondly, we propose a reduction equation that streamlines the population size during algorithm execution, reducing computational complexity. To showcase the effectiveness of our modified DO algorithm, which we refer to as the Improved DO (IDO), we conduct a comprehensive evaluation using the Spambase dataset from the UCI repository. However, we emphasize that our primary objective is to advance the DO algorithm, with spam email detection serving as a case study application. Comparative analysis against several popular algorithms, including Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), Generalized Normal Distribution Optimization (GNDO), the Chimp Optimization Algorithm (ChOA), the Grasshopper Optimization Algorithm (GOA), Ant Lion Optimizer (ALO), and the Dragonfly Algorithm (DA), demonstrates the superior performance of our proposed IDO algorithm. It excels in accuracy, fitness, and the number of selected features, among other metrics. Our results clearly indicate that the IDO overcomes the local optima problem commonly associated with the standard DO algorithm, owing to the incorporation of the LSAF and reduction equation methods. In summary, our paper underscores the significant advancement made in the form of the IDO algorithm, which represents a promising approach for solving high-dimensional optimization problems, with a keen focus on practical applications in real-world systems. While we employ spam email detection as a case study, our primary contribution lies in the improved DO algorithm, which is efficient, accurate, and outperforms several state-of-the-art algorithms in various metrics. This work opens avenues for enhancing optimization techniques and their applications in machine learning.
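To illustrate the flavour of a flip-based local search for binary feature selection (the general idea behind LSAF), here is a hedged Python sketch; the toy fitness function, data and weighting are entirely our own and are not taken from the paper.

    import numpy as np

    def fitness(mask, X, y, alpha=0.99):
        """Toy fitness (lower is better): penalise low feature relevance
        (mean absolute correlation with the target) and large feature subsets."""
        if mask.sum() == 0:
            return 1.0
        relevance = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in np.flatnonzero(mask)])
        return alpha * (1.0 - relevance) + (1.0 - alpha) * mask.mean()

    def flip_local_search(mask, X, y):
        """Flip each bit once, keeping a flip only if it improves the fitness."""
        best = fitness(mask, X, y)
        for j in range(mask.size):
            mask[j] ^= 1
            cand = fitness(mask, X, y)
            if cand < best:
                best = cand
            else:
                mask[j] ^= 1          # revert the flip
        return mask, best

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 10))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # only features 0 and 1 are informative
    mask, score = flip_local_search(rng.integers(0, 2, size=10), X, y)
    print(mask, round(score, 3))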

30 pages, 9599 KiB  
Article
Hybrid Vulture-Coordinated Multi-Robot Exploration: A Novel Algorithm for Optimization of Multi-Robot Exploration
by Ali El Romeh, Seyedali Mirjalili and Faiza Gul
Mathematics 2023, 11(11), 2474; https://doi.org/10.3390/math11112474 - 27 May 2023
Abstract
Exploring unknown environments using multiple robots has numerous applications in various fields but remains a challenging task. This study proposes a novel hybrid optimization method called Hybrid Vulture-Coordinated Multi-Robot Exploration (HVCME), which combines Coordinated Multi-Robot Exploration (CME) and African Vultures Optimization Algorithm (AVOA) to optimize the construction of a finite map in multi-robot exploration. We compared HVCME with four other similar algorithms using three performance measures: run time, percentage of the explored area, and the number of times the method failed to complete a run. The experimental results show that HVCME outperforms the other four methods, demonstrating its effectiveness in optimizing the construction of a finite map in an unknown indoor environment.

19 pages, 1011 KiB  
Article
ANAA-Fog: A Novel Anonymous Authentication Scheme for 5G-Enabled Vehicular Fog Computing
by Badiea Abdulkarem Mohammed, Mahmood A. Al-Shareeda, Selvakumar Manickam, Zeyad Ghaleb Al-Mekhlafi, Abdulaziz M. Alayba and Amer A. Sallam
Mathematics 2023, 11(6), 1446; https://doi.org/10.3390/math11061446 - 16 Mar 2023
Abstract
Vehicular fog computing enabled by the Fifth Generation (5G) has been on the rise recently, providing real-time services among automobiles in the field of smart transportation by improving road traffic safety and enhancing driver comfort. Due to the public nature of wireless communication channels, in which communications are conveyed in plain text, protecting the privacy and security of 5G-enabled vehicular fog computing is of the utmost importance. Several existing works have proposed an anonymous authentication technique to address this issue. However, these techniques have massive performance efficiency issues with authenticating and validating the exchanged messages. To face this problem, we propose a novel anonymous authentication scheme named ANAA-Fog for 5G-enabled vehicular fog computing. Each participating vehicle’s temporary secret key for verifying digital signatures is generated by a fog server under the proposed ANAA-Fog scheme. The signing step of the ANAA-Fog scheme is analyzed and proven secure with the use of the ProVerif simulator. This research also satisfies privacy and security criteria, such as conditional privacy preservation, unlinkability, traceability, revocability, and resistance to security threats, as well as others (e.g., modification attacks, forgery attacks, replay attacks, and man-in-the-middle attacks). Finally, the result of the proposed ANAA-Fog scheme in terms of communication cost and single signature verification is 108 bytes and 2.0185 ms, respectively. Hence, the assessment metrics section demonstrates that our work incurs a little more cost in terms of communication and computing performance when compared to similar studies.
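As a generic illustration of the temporary-key idea (and only that), the sketch below derives a short-lived per-vehicle key from a fog server's master secret and uses it to tag and verify beacon messages with HMAC-SHA256; the real ANAA-Fog scheme relies on pseudonymous digital signatures, and every name, key lifetime and message format here is an assumption of ours.

    import hmac, hashlib, os, time

    def derive_temp_key(server_master_key: bytes, pseudonym: bytes, epoch: int) -> bytes:
        """Short-lived key bound to a vehicle pseudonym and a time epoch (illustrative KDF)."""
        return hmac.new(server_master_key, pseudonym + epoch.to_bytes(4, "big"),
                        hashlib.sha256).digest()

    def tag_message(temp_key: bytes, message: bytes) -> bytes:
        """Authentication tag the vehicle attaches to each beacon."""
        return hmac.new(temp_key, message, hashlib.sha256).digest()

    def verify_message(temp_key: bytes, message: bytes, tag: bytes) -> bool:
        """Constant-time check performed by a verifier holding the same temporary key."""
        return hmac.compare_digest(tag_message(temp_key, message), tag)

    master = os.urandom(32)                      # fog server's master secret
    epoch = int(time.time()) // 300              # assumed 5-minute key lifetime
    k = derive_temp_key(master, b"pseudonym-42", epoch)
    msg = b"speed=62;lane=2;ts=1700000000"
    print(verify_message(k, msg, tag_message(k, msg)))   # True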
