Article

Intelligent Bio-Latticed Cryptography: A Quantum-Proof Efficient Proposal

by
Ohood Saud Althobaiti
1,2,*,
Toktam Mahmoodi
1 and
Mischa Dohler
1,3
1
Department of Engineering, King’s College London, London WC2R 2LS, UK
2
Department of Computer Science, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
3
Advanced Technology Group Ericsson Inc., Silicon Valley, NC 94087, USA
*
Author to whom correspondence should be addressed.
Symmetry 2022, 14(11), 2351; https://doi.org/10.3390/sym14112351
Submission received: 27 August 2022 / Revised: 29 October 2022 / Accepted: 2 November 2022 / Published: 8 November 2022

Abstract

The emergence of the Internet of Things (IoT) and the tactile internet presents high-quality connectivity strengthened by next-generation networking to cover a vast array of smart systems. Quantum computing is another powerful enabler of the next technological revolution; it will improve the world tremendously, will continue to grow to cover an extensive array of important functions, and has recently attracted great interest in the scientific community. Because quantum computers have the potential to overcome various issues related to traditional computing, major worldwide technical corporations are investing competitively in them. However, along with its novel potential, quantum computing introduces threats to cybersecurity algorithms, as quantum computers are able to solve many complex mathematical problems that classical computers cannot. This research paper proposes a robust and performance-effective lattice-driven cryptosystem in the context of face recognition that provides lightweight intelligent bio-latticed cryptography, which will aid in overcoming the cybersecurity challenges of smart world applications in the pre- and post-quantum era and with sixth-generation (6G) networks. Since facial features are symmetrically used to generate encryption keys on the fly without sending or storing private data, our proposal has the valuable attribute of combining symmetric and asymmetric cryptography operations in the proposed cryptosystem. Implementation-based evaluation results prove that the proposed protocol maintains high performance in terms of delay, energy consumption, throughput, and stability on a cellular network topology in classical Narrowband-Internet of Things (NB-IoT) mode.

1. Introduction

Recent developments in quantum computing are certain to lead to the widespread use of sophisticated quantum computers. The “bit” is the smallest unit of data storage in conventional computing, and it can store either zero or one, while the “qubit” is the smallest unit of storage in quantum computing and can store zero, one, or both values simultaneously (i.e., $|\psi\rangle = C_0|0\rangle + C_1|1\rangle$). “Superposition” is the term used to describe this simultaneous characteristic. This should cause serious concern, as most current cybersecurity schemes can be compromised by quantum computing. In other words, the security of wireless networking is a serious problem for the quantum IoT and sixth-generation (6G) networking. Over the preceding decades, public key encryption had gained trust due to its resistance to being cracked. Unfortunately, that changed in 1994, when Peter Shor [1] discovered that the enormous processing capabilities of quantum computers will, in the future, enable them to break encryption keys. Thus, creating practical, quantum-resistant advanced cryptographic techniques is essential. Therefore, this research presents a practical lattice-driven cryptographic prototype based on face biometrics that offers intelligent bio-latticed cryptography and can overcome the cybersecurity challenges of the smart world in the pre- and post-quantum computing era. Cryptography based on lattice problem theory is considered an appropriate alternative cryptographic method for the IoT in the post-quantum world because it uses short keys with high effectiveness [2,3,4,5]. The emergence of the IoT has had a positive effect on the submission of requests to data warehouses and access to internet services, as the IoT is one of the 21st century’s most revolutionising developments. Moreover, it has led to the appearance of the industrial IoT (iIoT) and the tactile internet, which is an embodiment of critical iIoT. Interest in face recognition for security purposes is growing rapidly due to the technology’s flexibility, high accuracy in identity verification, and prolific use in IoT applications such as smart cities. Because of its respected reputation in the biometrics world, face recognition technology requires a secure environment.
In this paper, an enhanced artificial neural network based on the Gabor and Kalman filters, the Karhunen–Loève algorithm, Principal Component Analysis (PCA), and a genetic optimisation algorithm is employed to classify extracted facial features. This improves the accuracy of the proposed scheme. Filter algorithms minimise redundant information and noise by optimising image models. Gabor functions, which are linear filters, have gained significant interest in the image processing field because they attempt to operate based on some principles of human vision. In computer vision, two-dimensional Gabor filters possess optimal localisation characteristics in both the frequency and spatial domains. Gabor filters represent the local structural information of an image, such as spatial location, smoothness, spatial scale, orientation of edges, and direction selectivity [6]. This means that the Gabor function is appropriate for texture segmentation and edge detection, and helps preserve the maximum amount of facial texture information possible. The Kalman filter (linear quadratic estimation) has gained increasing significance in computer vision as an optimal recursive mathematical–statistical function used to process data, such as by estimating unknown variables [7,8,9].
One of the most common approaches to pattern recognition is PCA, which is an eigenface technique [10]. This method is easy, practical, and provides satisfactory results [11]. Therefore, our proposal uses statistical PCA (eigenface) to extract facial features. PCA is used to improve the performance of supervised neural network learning and to reduce the dimensions of a face image, hence decreasing the amount of memory storage space necessary. In this prototype, the feature extraction process involves the following steps (a brief code sketch follows the list):
  • Normalising data by Gaussian normal distribution. Gaussian distribution is a normal unique probability distribution that is preferred for describing random systems because numerous random processes that occur in nature act as normal distributions. The central limit theorem declares that for any distribution, the sum of random variables tends to fall within normal distribution under moderate circumstances. Thus, normal distribution has flexible mathematical attributes [7];
  • Calculating a covariance matrix;
  • Deriving eigenvalue and eigenvectors based on the covariance matrix;
  • Picking eigenvector dimension k;
  • Computing the Karhunen–Loève transformation into k.
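To make these steps concrete, the following minimal Python sketch illustrates an eigenface-style PCA pipeline. The random face matrix, image size, and number of components k are illustrative placeholders rather than the data or parameters used in our prototype, and the compact covariance trick (computing X Xᵀ instead of the full pixel covariance) is one common implementation choice.

import numpy as np

# 100 flattened 64x64 face images (placeholder random data)
rng = np.random.default_rng(0)
faces = rng.random((100, 64 * 64))

# 1. Normalise the data (zero mean, unit variance per pixel)
mean, std = faces.mean(axis=0), faces.std(axis=0) + 1e-12
X = (faces - mean) / std

# 2. Covariance matrix (compact trick: 100x100 instead of 4096x4096)
C = (X @ X.T) / (X.shape[0] - 1)

# 3. Eigenvalues and eigenvectors of the covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)

# 4. Pick the k leading eigenvectors (eigenfaces)
k = 20
idx = np.argsort(eigvals)[::-1][:k]
eigenfaces = (X.T @ eigvecs[:, idx]).T            # map back to pixel space
eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)

# 5. Karhunen-Loeve transform: project a face onto the k eigenfaces
features = eigenfaces @ X[0]                      # k-dimensional feature vector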
In real-life scenarios, no single approach is perfectly efficient in all instances. Accordingly, it is necessary to integrate a combination of approaches to improve the performance of the architecture. For example, although machine learning has recently become popular in numerous essential applications, it requires a large amount of data [12], whereas genetic algorithms do not necessitate a large amount of data. Additionally, the majority of high-performing convolutional neural networks are intended to run on high-end graphics processing units with large memory and computing capability. This leads to the limited use of convolutional neural networks in resource-constrained devices such as those used in the IoT [13]. In addition, the accuracy of artificial neural networks increases with an increase in hidden layers. The number of artificial neurons must be large enough to express problems and identify potential hidden patterns. Therefore, there is a trade-off between the complexity and performance of a neural network. To resolve these issues, we use a genetic algorithm to optimise the fitness of a neural network, use filters to reduce redundant information and estimate unknown variables, and use a pattern recognition method and the Karhunen–Loève algorithm to mitigate the dependency on neural networks.
Javadi et al. [14] state that using genetic algorithms to optimise complicated systems can require significant computing effort because of the repeated re-evaluation of the objective functions and the population-based nature of the search. As a result, they presented a hybrid optimisation approach combining a back-propagation neural network with a conventional genetic algorithm. The back-propagation neural network is employed to enhance genetic algorithm convergence during the search for an optimal outcome. The efficiency of their computational method is demonstrated through a number of test scenarios. In [15], the authors adopt an improved genetic algorithm, called the generalised genetic algorithm, to find optimal sensor placement. This solves the problem of optimal sensor placement, which performs the important function of health monitoring for large-scale complicated structures [15].
According to Shiffman [16], using genetic algorithms with a large population will provide more variation, thus generating a more accurate output. Consequently, to increase the accuracy of the genetic algorithm, we initialise the neural network with a population of M components, each of which involves DNA generated based on PCA. The genetic algorithm performs three actions: selection, reproduction, and replacement. The fitness of each population component is evaluated to form a mating pool in the selection phase. However, there are ways to obtain more variation in this phase, such as using $2^{(\text{number of correct characters})}$ as the fitness score, which is especially useful when there is not enough variation during the population-creation stage [16]. The reproduction phase includes the following sub-steps: choosing two parents with high probability based on their respective fitness, merging the parents’ DNA to produce a child (crossover), mutating the DNA of the child according to a given probability, and adding the new child to the updated population. Finally, the old population is replaced with the updated population.
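The toy Python sketch below illustrates the selection, reproduction, and replacement loop described above, including the $2^{(\text{number of correct characters})}$ fitness heuristic from [16]. The target DNA, alphabet, population size, and mutation rate are hypothetical placeholders, not the PCA-derived DNA or fitness function used in our scheme.

import numpy as np

rng = np.random.default_rng(1)
TARGET = np.array(list(b"face"), dtype=np.uint8)   # hypothetical target DNA
M, MUTATION_RATE = 50, 0.02

def fitness(dna):
    correct = int((dna == TARGET).sum())
    return 2.0 ** correct                          # 2^(number of correct characters)

population = [rng.integers(32, 127, TARGET.size, dtype=np.uint8) for _ in range(M)]
for generation in range(500):
    scores = np.array([fitness(d) for d in population])
    if scores.max() == 2.0 ** TARGET.size:          # perfect match found
        break
    probs = scores / scores.sum()                   # selection: build the mating pool
    new_population = []
    for _ in range(M):
        i, j = rng.choice(M, size=2, p=probs)       # parents picked by fitness
        cut = rng.integers(1, TARGET.size)          # crossover point
        child = np.concatenate([population[i][:cut], population[j][cut:]])
        mutate = rng.random(TARGET.size) < MUTATION_RATE
        child[mutate] = rng.integers(32, 127, int(mutate.sum()), dtype=np.uint8)
        new_population.append(child)                # add child to updated population
    population = new_population                     # replacement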
Main contributions for intelligent bio-latticed cryptographic proposal:
  • While humanity benefits from the information technology revolution, which derives its power from information to transfer knowledge and share resources, many exploit this revolution for malicious purposes such as attacking, eavesdropping, and fraud. According to PwC’s 2022 survey [17], cybercrimes top the list of external fraud threats faced by businesses worldwide. The survey included 1296 chief executives across 53 countries, and nearly half of these corporations (46%) admitted that they had been subjected to cyber-attacks, fraud, or financial crimes, reflecting the high rate of cybercrime and fraud around the world since the emergence of COVID-19. Moreover, Ref. [17] stated that cyber-attacks pose more risk to organisations than before, as fraud and cyber-attacks have become more sophisticated. One in five global businesses with revenues exceeding USD 10 billion has been exposed to a fraud case that cost more than USD 50 million. More than a third of corporations (38%) with revenues of less than USD 100 million reported having experienced some form of cybercrime, and 22% of these corporations suffered losses of more than USD 1 million. Consequently, to counter this dangerous phenomenon, we propose intelligent performance-efficient lattice-driven cryptography using biometrics.
  • The main benefit of combining lattice theory and biometrics is that doing so eliminates the need to save or send biometric templates, private keys, or any secret information, which solves some public key infrastructure problems, such as public keys distribution challenges and key expiration issues, thereby preserving privacy, improving cybersecurity in a post-quantum era, and minimising the risk of information leakage online or offline.
  • The proposed cryptography resists quantum attacks such as Shor’s quantum algorithm. At the same time, it inherits neither the shortcomings of quantum computers, such as the large gap between the implementation of real devices and physical quantum theory, nor the defects of quantum cryptography, such as vulnerability to side-channel attacks, source flaws, laser damage, Trojan horses, injection-locking lasers, and timing attacks. Since the first quantum cryptosystems—represented by quantum key distribution systems—were made available, many adversaries have attempted to hack them with unsettling success. Fierce attacks have focused on exploiting flaws in the equipment used to transmit quantum information. Consequently, adversaries have demonstrated that the equipment is not perfect, even though the laws of quantum physics imply perfect security and privacy. Furthermore, one of the most significant drawbacks of quantum computing and quantum cryptography is the limited distance over which photons can be transmitted, which often should not exceed tens of kilometres. This is due to the probability that the polarisation of photons may change or even disappear completely as a result of consecutive collisions with other particles while travelling long distances. However, this problem can be solved by adding spaced quantum repeaters at uniform intervals that amplify optical signals and maintain quantum randomness over thousands of kilometres.
  • Enhancement of cybersecurity allows the private keys created from biometric encryption to be stronger, more complex, and less vulnerable to cybersecurity attacks. Traditional/classical biometric systems are susceptible to various attacks, such as manipulations, impersonation attacks, stolen verifier attacks, device compromise attacks, replay attacks, denial-of-service (DoS) attacks, distributed denial-of-service (DDoS) attacks, integrity threats, privacy threats, confidentiality concerns, and insider attacks. The use of the proposed advanced algorithm eliminates these vulnerabilities. It also enhances accuracy and performance by using artificial intelligence (AI), such as machine learning (artificial neural networks) and genetic algorithms.
To this end, the rest of this paper is organised as follows. An overview of the biometric principles used in technical biometric systems is provided in Section 2. Section 3 takes a glance at the combination of biometrics and asymmetric encryption and mentions the most important related works. In Section 4, we propose a powerful and performance-effective cryptosystem based on mathematical lattice problem theory and face verification. Section 5 examines the performance of the proposed cryptosystem for the IoT in the pre- and post-quantum smart world, notably in terms of delay, energy consumption, throughput, and stability period. A security proof of the proposed lattice-driven encryption scheme is demonstrated in Section 6. In Section 7, our concluding remarks are offered.

2. Biometrics

Body characteristics such as the face [18,19], fingerprints [20,21], and iris [19,22] have been used to recognise and verify people for different purposes in police departments, hospitals, etc. These unique human identification data are called biometrics. The word biometric is a combination of ‘bios’ (Greek for life) and ‘metrikos’ (Greek for measure). The biometric technique recognises patterns that capture details such as the face, gait, fingerprint, iris, or voice, and it extracts certain characteristics for either identification or verification [23]. Currently, many biometric applications are used in the government sector, for things such as national ID cards, social security, and welfare schemes; in forensic areas, such as corpse identification, criminal investigation, and parenthood determination; and for commercial purposes, such as data security, network login, cellular phones, and medical record management [24]. Since a user cannot forget or lose biometric data, the biometric system is more reliable than other systems.
As shown in Figure 1, a general biometric system involves the following basic components [25,26]:
  • An enrolment module acquires the data related to biometrics;
  • A feature-extraction module extracts the required set of characteristics from the collected biometric data;
  • A matching module compares the extracted features with the features in existing data;
  • A decision-making module checks whether the identity of the user exists and whether it is accepted or rejected.
There are several requirements for human physiological or behavioural characteristics to be biometric characteristics. Some of the important requirements are as follows [25,26]:
  • Universality: each person must have it;
  • Uniqueness: there should be sufficient and significant differences between the characteristics of any two persons;
  • Longevity: it must be adequately invariant over a certain period.

3. Merits of Combining Biometrics and Asymmetric Encryption

Biometric encryption is the secure binding of a cryptographic key and biometric data in such a way that another party cannot retrieve the key or the biometric data [27]. Biometric encryption has the potential to improve security and privacy. Some of the main uses and advantages of biometric encryption [27] are summarised below.

3.1. Management of Public and Private Keys

In biometric encryption, there is no need to save or send private keys or any secret information, such as biometrics. This can solve challenges in the management of public and private keys and improve cybersecurity [27,28,29,30,31]. Thus, biometric encryption improves public key infrastructure (PKI), since traditionally building a PKI is very expensive in complexity and cost. Moreover, PKI-based architecture for communication systems is an important issue for the European Infrastructure Public–Private Partnership (5G PPP).

3.2. No Storage of Biometric Data

The leaking or misuse of biometric information or data from storage is a major concern in biometric applications. Furthermore, biometric information is personally identifiable information (PII), and it is susceptible to privacy leaks and identity theft. The best way to preserve privacy is not to collect any PII at all. Biometric encryption addresses these concerns and threats by encrypting biometric data and giving users full control over the use of their own biometrics [27]. This also enhances trust and confidence in biometric systems.

3.3. Cancellation and Revocation in Biometric Systems

Because biometric encryptions of the same data with different keys are always different, an individual can use the same biometrics for multiple accounts without fear that the accounts will be linked. Even if one of these accounts is compromised, there is a strong probability that other accounts will remain safe.
In a traditional biometric system, if the biometric data of an individual are compromised, it is not possible to replace them, as a person’s new biometric is always the same as their old biometric. Biometric encryption allows the system to cancel or revoke someone’s encrypted biometric and replace it with a newly generated encrypted biometric of the same person [27].

3.4. Security against Known Vulnerabilities in Biometric Systems

Account identities created from biometric encryption are much stronger, more complex and less vulnerable to security attacks. Traditional biometric systems have various vulnerabilities, such as substitution attack, manipulation, masquerading, trojan horse attacks and overriding decision response. Using biometric encryption safeguards the system from all these vulnerabilities [27].

3.5. Security and Privacy of Personal Data

Biometric encryption is easy to use and convenient to implement for any application. Therefore, users can encrypt their personal or sensitive data using biometric encryption [27]. This can be considered an asset. This technology is very powerful, as it can be easily scalable and feasible for anyone to use.

3.6. Public Acceptance Based on Embedded Privacy and Security

The components that play the most important roles in the successful deployment of any system involving personal data are public confidence and trust. A single data breach in such a system can significantly reduce public confidence, and could set the whole industry back for decades. Policies related to data governance are useful for gaining public trust, but a system with embedded privacy, security, and trust is always preferred and better. Biometric encryption directly embeds privacy and security into the system, so it can easily gain public trust [27].
Biometric encryption gives control of biometric data exclusively to the individual in such a way that it minimises the risk of any privacy leakage and it maximises the utility of the system [27]. This will encourage greater use and acceptance of biometrics.

3.7. Making Biometric Systems Scalable

Biometric encryption gives authorities that desire privacy and protection a strong reason to adopt biometric technologies for authenticating or verifying the identity of an individual, and not only for purposes of identification. It also allows biometrics to positively link the holder of a token or card by allowing local storage systems [27].
It is not clear whether biometric technology is sufficiently accurate to allow real-time comparisons of samples of several million or more. This is a concern in biometric applications for large systems. However, many one-to-many public biometric applications for large systems have been presented, and they are functioning well [27].
Most biometric applications use biometric data for authentication, not identification. However, even for authentication, the data must be transmitted to a database for comparison. From a privacy point of view, it is always risky to send biometric data to a centralised database. Some (multimodal) biometric technologies collect and compare samples of more than one biometric. The main reason for using a multimodal approach is the insufficient speed of current biometrics. Therefore, collecting more and more data from biometrics and other personal data appears to be a technical solution for biometric authentication. However, collecting more biometric data makes the system susceptible to possible privacy leakage and identity theft. Biometric encryption can be used with a multimodal database to preserve the privacy of individuals. Thus, it makes a biometric system scalable [27].

4. Lightweight Intelligent Bio-Latticed Cryptography

An overview of proposed intelligent bio-latticed cryptography is depicted in Figure 2. We develop the lattice-based cryptography (shortest vector problem (SVP)) [32,33,34,35,36,37,38] as follows:
We define a square lattice (right-isosceles-triangular) $L$ over the Galois field $F_{p^n}^{i \times j}$, such that $L \subset F_{p^n}^{i \times j}$ comprises good prime unique Galois polynomial entropies with dimensions $i$ and $j$ and order $p^n$ for any integer $n$ and prime $p$.
A message (i.e., plain text) is $Msg \in L$, where $L$ is the square lattice (right-isosceles-triangular). An example of a square lattice $L$ is displayed in Figure 3.
Select $\beta, \alpha \in L \subset F_{p^n}$, good Galois entropies.
Select matrices $\Lambda, \Gamma, \Upsilon \in L \subset F_{p^n}^{i \times j}$, good Galois polynomial entropies ($i \times j$ is the two-dimensional size of $L$).
  • Key generation:
    $S_1 = \mathrm{shift}_\alpha(\mathrm{shuffle}_H(h(\text{corrected facial features}) \oplus \Upsilon))$
    where $h$ is a one-way hash function [39], $\mathrm{shuffle}_H$ is a Henon shuffling map, and $\oplus$ denotes the XOR operation.
    Because biometric data is naturally changeable, while the symmetrical cryptography process requires exact data to operate properly, the representation of biometrics must be corrected symmetrically before it can be employed. To stabilise the biometric matrix, error-correction principles are symmetrically applied [30,40,41]. Further detailed information concerning the principles of the process of data correction can be found in [30,40,41].
    $S_2 = \Lambda \oplus \Gamma \oplus S_1$
    Secret key (private key): (S1, S2)
    $PK = S_1^{-1} \cdot S_2^T$
    where $T$ refers to the matrix transpose operation.
    Public key: (β, PK)
  • Encryption:
    $Enc_o = \mathrm{Shuffle}[\mathrm{Shift}_\beta[Msg\ (\mathrm{mod}\ \beta) \oplus \beta]]$
    $Enc_1 = Enc_o \cdot PK$
    $Enc = T_s \parallel Enc_1$
    where Ts is the current time of the transmitter’s device (the sender picks up timestamp Ts) and ∥ denotes a concatenation operation.
  • Decryption:
    $M_o = [[(Enc \oplus \beta)\ \mathrm{mod}\ \beta] \cdot S_1] \cdot [(S_2^T S_2)^{-1} S_2^T]$
    $Msg = \mathrm{Shift}_\beta^{-1}[\mathrm{Shuffle}^{-1}[M_o]]$
    When the receiver obtains the encrypted message at time $T_r$, the message is decrypted via the secret key $(S_1, S_2)$ and then tested for the freshness of the timestamp $T_s$. If $(T_r - T_s) > \Delta T$, the receiver will reject the message as expired, since an expired message is significantly vulnerable to a replay attack, where $\Delta T$ indicates the expected time interval for the communication delay in the wireless network. Conversely, if $(T_r - T_s) \leq \Delta T$, the receiver will accept the message. A simplified code sketch of the key-generation step is given below.
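To illustrate the data flow of the key-generation step, the following minimal Python sketch uses small integer matrices modulo a prime P as stand-ins for the Galois-field elements of $F_{p^n}^{i \times j}$, and replaces the Henon shuffling map and the shift with placeholder permutations. It is an insecure toy illustration under these stated assumptions, not a faithful instantiation of the scheme; in particular, the XOR combination in $S_2$ and all parameter values are assumptions.

import hashlib
import numpy as np
from sympy import Matrix

P, DIM = 251, 4                                   # toy prime modulus and dimension
rng = np.random.default_rng(7)
PERM = rng.permutation(DIM * DIM)                 # stand-in for the Henon shuffling map

def shuffle_H(m):
    return m.flatten()[PERM].reshape(m.shape)     # fixed permutation of the entries

def shift(m, k):
    return np.roll(m, k)                          # placeholder cyclic shift by k

def h(features: bytes):
    digest = hashlib.sha256(features).digest()    # one-way hash, cf. [39]
    return (np.frombuffer(digest[:DIM * DIM], dtype=np.uint8)
            .astype(np.int64).reshape(DIM, DIM) % P)

alpha = 5                                         # assumed shift parameter
Lam, Gam, Ups = (rng.integers(0, P, (DIM, DIM)) for _ in range(3))

features = b"corrected facial features"          # output of the PCA/ANN pipeline
S1 = shift(shuffle_H(h(features) ^ Ups), alpha) % P
S2 = (Lam ^ Gam ^ S1) % P                         # assumed XOR combination

# PK = S1^{-1} . S2^T (mod P); raises if S1 happens to be singular mod P --
# real parameters are chosen so that the inverse exists.
S1_inv = np.array(Matrix(S1.tolist()).inv_mod(P).tolist(), dtype=np.int64)
PK = (S1_inv @ S2.T) % P                          # public key component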
Figure 2. Intelligent bio-latticed cryptography.
Figure 3. Square lattice (right-isosceles-triangular) [42].

5. Implementation-Based Evaluation Results

Here, we provide a simulation-based analysis of the average performance of the proposed intelligent bio-latticed cryptography to ensure that it operates efficiently without degrading the performance of the NB-IoT network. All experiments are executed with our open-source simulator [43], which is publicly available on the MDPI website (https://www.mdpi.com/1424-8220/21/5/1824/htm#app1-sensors-21-01824) and on GitHub (https://github.com/Ohood-Althobaiti/NB-IoTD2DSimulation-AnOpen-SourcedFramework), to compute performance metrics (i.e., delay, energy consumption, throughput, and stability period) on a cellular network topology in classical mode, adapting the simulator to include various heterogeneous devices beyond sensors. For brevity, interested readers are directed to [43] for further details concerning our open-source simulator. Furthermore, the resources [11,40,41,44,45,46,47,48] are used to implement the face verification phase in the proposed prototype.
We suppose, in this research, the following features for the heterogeneous conventional cellular network displayed in Figure 4 (a configuration sketch collecting them follows the list).
  • An 80 km² classical attocell network topology in an urban macrocell scenario, such as a smart city;
  • Five cells; each cell radius ($R_{cell}$) is 2 km;
  • 24 Base Stations (BSs); each cell is a hexagon bordered by BSs placed at the vertices, with another BS placed at the centre;
  • The number of devices is 250, and these are positioned randomly in 2-dimensional space;
  • In traditional cellular communications, all the devices transmit uplink/downlink requests at the same time to the BS. In other words, all transmissions between cellular devices must be accomplished through the BS without considering if Device-to-Device (D2D) communication is within two cellular devices’ reach;
  • The initial energy of a cell device ($E_o$) is 5000–10,000 Joules.
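For concreteness, these settings can be gathered into a single configuration object, as sketched below in Python. The key names are illustrative and mirror the list above; they are not the actual parameter names of the open-source simulator [43].

network_config = {
    "area_km2": 80,                    # urban macrocell scenario (smart city)
    "cells": 5,
    "cell_radius_km": 2,               # R_cell
    "base_stations": 24,               # hexagon vertices plus a BS at each cell centre
    "devices": 250,                    # positioned randomly in 2-D space
    "initial_energy_joules": (5000, 10000),   # E_o range per cell device
    "d2d_enabled": False,              # all traffic is relayed through the BS
}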
Figure 5 depicts a comparison of the elapsed time in two different scenarios in an urban macrocell: the first scenario is a secure NB-IoT network (using the suggested cryptosystem) and the second is an unsecure NB-IoT network (without giving any thought to security). Both scenarios run until most cell devices in the NB-IoT network are dead, i.e., their batteries are depleted (around 300 rounds). Plain communications were sent in the insecure NB-IoT attocell, regardless of whether eavesdroppers or adversaries were present in the public network. All communication and processing expenses were taken into account in the secure NB-IoT attocell design. Operation and transmission costs in the presence of the proposed scheme total 16.5698 min, compared to 15.0786 min without any cybersecurity measures, i.e., the exposed NB-IoT. This shows that the suggested scheme adds only 1.4912 min to the total time. A delay of this magnitude is a trivial price for a secure NB-IoT attocell network.
The stability period is the amount of time that elapses between the start of network activity and the depletion of a device’s battery (dead) in the network [49]. Figure 6 compares the stability durations of NB-IoT networks that are both secure and insecure. The insecure NB-IoT attocell has a stability duration of 98 rounds, while the secure NB-IoT attocell has a stability period of 87 rounds. As a result, the intervals between stability durations in the two scenarios are quite close together.
Figure 7 compares the delay times at BS number 9 for the insecure and secure NB-IoT networks, which are 2.1 and 2.8 s, respectively. Hence, the secure NB-IoT network introduces no considerable delay.
A comparison of the number of packets transmitted from each cell device to the BS (throughput/data rates) for secure and exposed NB-IoT attocell is depicted in Figure 8 and Figure 9. In light of the suggested cryptosystem, the number of packets (throughput) remains high.
Figure 10, Figure 11 and Figure 12 illustrate the comparison of the energy consumption profiles of cell devices no. 101, 100, and 13, respectively, in both scenarios. As these examples show, the battery health and charge levels of a cell device are handled effectively. This means that, in the presence of the proposed cryptosystem, there is no major overhead expense beyond that needed to prevent adversarial assaults on IoT networks.

6. Security Proof of the Proposed Lattice-Driven Encryption Scheme

Lemma 1.
The Proposed Lattice-Driven Encryption Scheme Implies P ≠ NP.
Proof. 
Let us consider $L$ to be an $i \times j$-dimensional square lattice (right-isosceles-triangular) that equals a set of vectors $\{\sum_{m=1}^{i \times j} b_m v_m \mid b_m, v_m \in F_{p^n}^{i \times j}\ \text{good prime unique Galois polynomial entropies}\}$. In the shortest vector problem (SVP), the goal is to find the shortest non-zero lattice vector for the given basis vectors $v_1, v_2, v_3, \ldots, v_n$ in the Euclidean norm. In the closest vector problem (CVP), the goal is to find the lattice point closest to a given vector. For example, if $v_1, v_2, v_3, \ldots, v_n$ were the given basis vectors and $v$ a given target vector, our objective would be to determine the lattice point (non-zero vector) closest to the target vector $v$ in the Euclidean norm.
As we maintained previously in [5], proof by assuming the opposite (proof by contradiction) is used to establish the security of lattice-based algorithms, since their underlying problems, such as SVP and CVP, are NP-hard [50,51,52] against different quantum attacks. In other words, assuming the proposition to be false leads to a contradiction. By considering the NP-hard factoring problem to describe the problem of factoring a given string $s$, we must have a polynomial-time algorithm that determines whether the given string $s$ equals solution P or not. At present, in cryptography, we do not have proof that P = NP. Therefore, we consider that P ≠ NP [53,54,55]. Furthermore, we discuss the relationship between the proposed lattice-driven cryptosystem and complexity theory in the context of the NP-hard problem to investigate whether it follows P ≠ NP, since computational complexity is efficiently used to achieve encryption systems.
The two fields of complexity theory and cryptography science share some main objectives. In complexity theory, one of the main objectives is to find mathematical problems that cannot be calculated in polynomial time, and in cybersecurity science, one of the main goals is to design schemes that cannot be cracked in polynomial time. Both of these objectives are important. It is obvious that they are quite compatible with one another (i.e., complexity-associated cryptography) [56]. Raouf N. Gorgui-Naguib [57] discussed the relationship between the theory of nondeterministic polynomial completeness (NP-completeness) and computational complexity theory, thus defining complexity theory as an assemblage of findings in the field of computer science that aims to quantify this assertion: “problem A is more difficult than problem B”.
Moreover, Salil Vadhan [56] defined computational complexity theory as follows: the analysis of the minimum resources (hardware and software) that are required to solve computational tasks. It especially seeks to differentiate between problems that can be solved using effective methods (referred to as “easy” problems) and those that cannot be solved in any method or form (referred to as “hard” problems). As a result, complexity theory serves as the basis for cryptography, the focus of which is on the development of cryptographic algorithms that are “simple to use” while being “hard to crack”.
According to Raouf N. Gorgui-Naguib [57], the NP-completeness problem is a type of problem that falls under the umbrella of NP problems. For the purpose of defining this category, the satisfiability problem will be defined first, as it is a common problem in the field of computational complexity. There is an algorithm that can, in polynomial time, reduce any specific problem in type NP to the satisfiability problem; this method can do so for every problem in category NP. If the satisfiability problem can be decoded with an algorithm that takes polynomial time, then every problem that falls under the NP umbrella also falls under the P umbrella. On the other hand, if any problem in the NP class is impossible to solve, then the satisfiability problem itself must likewise be unsolvable. Such problems are known as NP-completeness problems, which have intractability features. This means that, for a problem $P_o$,
$$P_o \in \begin{cases} \text{P class} \subseteq \text{NP class}, & \text{if } P_o \text{ is solvable in polynomial time deterministically} \\ \text{NP-completeness/NP-hardness} \subset \text{NP class}, & \text{if } P_o \text{ is unsolvable in deterministic polynomial time} \end{cases}$$
Woeginger [58] argues that, logically, one cannot expect to discover polynomial-running-time techniques for NP-completeness problems. Given the widespread acceptance of the hypothesis P ≠ NP, a proof of NP-hardness establishes that a certain problem cannot be solved using an algorithm that runs in polynomial time. A number of studies have been conducted [59] to investigate the impossibility of approximating the NP-hardness problem for CVP and SVP within polynomial factors. The approximation problem relevant to the closest vector and shortest vector problems in the context of the promise problem was investigated in [60,61].
Decision problems with a yes/no answer are the norm when addressing difficulties of nondeterministic polynomial running time in terms of hardness [57]. If a decision problem can be solved in polynomial running time employing a deterministic Turing machine, then it belongs to the computational complexity category P. On the contrary, a decision problem is considered to belong to type NP if it can be computed using a non-deterministic machine in polynomial running time [57].
Khot discussed the hardness of approximating the shortest vector problem (SVP) in lattices and in high $\ell_p$ norms in [62,63], respectively. He showed that, under the assumption NP ⊄ RP (random polynomial time), there is no method with a polynomial running time that can approximate the SVP in the $\ell_p$ norm within a constant coefficient. First, he introduced a randomised reduction of the CVP to the SVP, which obtains certain hard constant factors. The BCH codes were the basis for this reduction. Thus, the SVP instances formed as a result of the reduction behave well under the augmented tensor (vector/matrix) product, a new tensor product that he introduced. This is one of its advantages. As a result, it can increase the hardness factor to $2^{(\log n)^{1/2-\epsilon}}$.
Khot [62] stated that the NP-hardness of SVP in the $\ell_\infty$ norm was proven by van Emde Boas [64], who also hypothesised that it holds for any $\ell_p$ norm. Additionally, he declared that an alternative public key cryptographic algorithm based on the $n^{1.5}$-hardness of SVP was presented by Regev [65], and all these findings suppose the hardness of a variant named unique-SVP. Since proving that SVP approximation within $n^{1.5}$ is an NP-hard problem is theoretically possible, this could also cover the primitives of cryptography that depend on the assumption P ≠ NP [62]. Moreover, demonstrating the NP-hardness of factor $\sqrt{n}$ could entail that NP = coNP, which would lead to the collapse of the polynomial hierarchy. Then, research conducted by Aharonov and Regev [66] illustrated that factor $\sqrt{n}$ for SVP could entail that NP = coNP, i.e., NP-hardness [62].
By aiming to develop techniques that render any cryptanalytic operation intractable, cryptography draws from computational complexity theory and, in particular, from the notion of NP-completeness/NP-hardness [57]. It is important to note, however, that existing attack techniques in the literature do not address the lattice problem with $i \times j$ dimensions in any case. Rather, the adversarial techniques depend mainly on the encrypted message itself, as well as publicly available information, to retrieve the related transmitted plaintext. Consequently, we use mathematical and cryptographic preliminaries such as the Galois field $F_{p^n}^{i \times j}$, XOR, entropy, etc., to increase the complexity of the proposed lattice-driven encryption scheme, thus reflecting complexity theory’s requirements and ensuring that the proposed lattice-driven cryptosystem implies P ≠ NP while maintaining its optimised performance.
Lemma 2.
The proposed lattice-driven encryption scheme resists replay attacks.
Proof. 
A replay attack is impossible as a general rule if the data that was acquired is either time-sensitive or cannot be reused [67]. Whenever an adversary is present in the channel between the sender and the receiver device, the adversary obtains unreadable data and expired packets, neither of which can be used again. Therefore, the timestamps $T_s$ and $T_r$ are used in the proposed lattice-driven cryptosystem to abort any attempted replay attacks. If an attacker exploits the communication to replay an old packet, the attacker cannot succeed in the proof process, where $(T_r - T_s) > \Delta T$; the packet is aborted because it has expired. $\Delta T$ denotes the expected time interval for the communication delay in the wireless network, $T_s$ refers to the current time of the sender’s node, and $T_r$ refers to the message reception time at the receiver’s device.
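A minimal Python sketch of this freshness test is given below, assuming a network-specific delay bound ΔT; the two-second value is a placeholder, not a figure from the paper.

import time

DELTA_T = 2.0     # assumed communication-delay bound (seconds)

def accept_message(ts_sender: float, tr_receiver: float) -> bool:
    # Accept only fresh packets: reject when Tr - Ts > Delta T (expired/replayed).
    return (tr_receiver - ts_sender) <= DELTA_T

ts = time.time()                                  # Ts picked by the sender
tr = time.time()                                  # Tr at the receiver
assert accept_message(ts, tr)                     # fresh packet is accepted
assert not accept_message(ts, ts + DELTA_T + 1)   # expired packet is rejected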
Lemma 3.
The proposed lattice-driven cryptosystem follows the functional requirement of security by using mathematical and cryptographic preliminaries such as the Galois field $F_{p^n}^{i \times j}$, XOR, entropy, etc., thereby reflecting the perspectives of computational complexity and cybersecurity.
  • Biometric templates blended with encryption keys can improve online cybersecurity while simultaneously preserving users’ privacy.
    Proof. In the lightweight intelligent bio-latticed cryptosystem, the one-way hash of a corrected facial image is XORed with the matrix $\Upsilon \in F_{p^n}^{i \times j}$, a two-dimensional good Galois polynomial entropy, to regenerate the private key $(S_1, S_2)$ on the fly without transmitting or holding any secret data that could be compromised and breach users’ privacy. In biometric encryption, there is no requirement to store either facial images or templates of those images, which addresses the classic flaw of biometric methods. Additionally, an adversary cannot retrieve the encryption parameter $\Upsilon$, a facial image, or a facial template after they are bound and then discarded. This leads to a dramatic enhancement of security features, such as generating a robust biometric-based lattice private key $(S_1, S_2)$ on the fly while maintaining the performance of resource-constrained devices, since this private key is more secure than typical passwords and requires less storage space than biometric facial data. Additionally, the key-generation process is characterised by its low computational requirements and lightweight operations.
  • Lightweight intelligent bio-latticed cryptography and Galois field F P n i × j .
    Proof. A finite field, which is commonly referred to as a Galois field, is a set of numbers on which mathematical operations such as addition, multiplication, subtraction, and division always produce a result contained within the same set. Cryptography benefits from this, since a restricted set of very large numbers can be used [68]. The proposed bio-lattice cryptography uses Galois field theory, which has many applications in cryptography. Some of the main reasons for this are that arithmetic operations can scramble data quickly and efficaciously when the data is represented as a vector in a Galois field, and that subtraction and multiplication in a Galois field need extra operations/steps, unlike in Euclidean space [69].
    In the proposed bio-lattice cryptography, $F_{2^{63}}^{i \times j}$ is used, since manipulating bytes is required. $F_{2^{63}}^{i \times j}$ has an array of elements that together represent all of the potential values that may be assigned to a byte. Because the Galois field’s addition and multiplication operations are closed, it is easy to perform arithmetic operations on any two bytes to yield a new byte belonging to that field, making it ideal for manipulating bytes [70]. Furthermore, multiplications in $F_{2^{63}}$ can be optimised securely for cryptographic applications when $n$ is smaller than the word size of the device (i.e., $n < 64$ on standard desktops or smartphones) [71].
    The National Institute of Standards and Technology (NIST) has issued a request for standardization of Post-Quantum Cryptography (PQC) [72] because of the growing awareness of the need for PQC in light of the impending arrival of quantum computing. According to Danger et al. [71], code-based encryption, along with multivariate and lattice-based cryptosystems, is one of the primary competitors for this challenge because of its inherent resistance to quantum cyberattacks. Despite being nearly as age-old as RSA and Diffie–Hellman, the original McEliece cryptography has never been widely employed, mostly because of its large key sizes [71]. There are numerous cryptosystems defined on $F_{2^N}$ that were recognised as candidates for the first round of the NIST PQC competition [71]. The carry-less feature of addition in $F_{2^N}$ makes arithmetic operations in this setting notably desirable. Consequently, many cryptographic approaches use it, since it provides efficiency in both hardware and software implementations because there is no carry and, thus, there are no lengthy delays [71] (a small sketch of this carry-less arithmetic is given after this list). In addition, Danger et al. [71] present a case study in which they assess several implementations of $F_{2^N}$ multiplication with regard to both their level of safety and how well they perform. They conclude that their findings are applicable to accelerating and securing implementations of the other PQC schemes outlined in their research, in addition to symmetric cyphers such as AES that operate on the finite fields $F_{2^N}$.
    Moreover, in the implementation of any cryptographic application, the size of the employed finite fields and the conditions imposed on the field parameters are determined by security concerns [73]. Therefore, the proposed intelligent bio-latticed cryptography employs a square lattice (right-isosceles-triangular) $L$ over the Galois field $F_{p^n}^{i \times j}$, such that $L \subset F_{p^n}^{i \times j}$ comprises good prime unique Galois polynomial entropies with dimensions $i$ and $j$ and order $p^n$ for any integer $n$ and prime $p$.
  • Entropic randomness, shifting, shuffling, XOR, and proposed lattice-driven cryptosystem.
    Proof. Random entropies are essential for assuring the security of sensitive information stored electronically [74,75]. Furthermore, a MATLAB-based shuffling package was developed to enhance a cryptosystem in [76]; that paper suggested ways to enhance cryptography and make it invulnerable to data leaks using random shuffling. Hence, in the proposed cryptosystem, entropic randomness distribution, shifting, shuffling, and XORing are all used to make it difficult for an adversary without the appropriate private key to extrapolate anything valuable about the message (plaintext) from the encrypted message (the corresponding ciphertext), strengthening the proposed cryptosystem’s ability to resist data leaks and preserve privacy, and making it more secure.
    Reyzin summarised essential entropy concepts used to study cryptographic architectures in [77], since the capability of assigning a random variable’s value in a single try is often used as a significant metric of its quality, especially in applications related to cybersecurity. Moreover, he defined this capability as follows:
    A random variable $A$ has min-entropy $b$, denoted by $H_\infty(A) = b$, if $\max_a \Pr[A = a] = 2^{-b}$.
    Extractors of randomness have been expressed in terms of their compatibility with any distribution having a min-entropy [77,78]. Furthermore, the outputs from robust extractors are almost uniform, regardless of the seed, and tend to maximal min-entropy, as these extractors are able to generate outputs with a high possibility over the seed selection [77].
    Similar to the cryptography literature employing shuffling to prevent information leaks from encoded correspondence [76], Henon shuffling maps are used in the proposed lattice-driven cryptosystem. Using a random shuffling package, the researcher in [76] improved the security and efficacy of the Goldreich–Goldwasser–Halevi (GGH) public-key scheme, proposing enhanced GGH encryption and decryption functions principally relying on MATLAB-based shuffling to prevent sensitive information from leaking in images. In [32], public-key cryptography based on the closest vector problem, an NP-hard lattice problem, was presented by Goldreich, Goldwasser, and Halevi at the Crypto ’97 conference. Unfortunately, later at the Crypto ’99 conference, in [79], Phong Nguyen analysed GGH cryptography and demonstrated serious shortcomings: any encrypted message can leak sensitive data concerning the plain message, and the difficulty of decryption can be reduced to a particular closest vector problem that is significantly easier than the general problem.
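To illustrate the carry-less Galois-field arithmetic discussed in this list, the Python sketch below implements addition and multiplication in a small binary field. For readability it uses $F_{2^8}$ with the AES field polynomial rather than $F_{2^{63}}$: addition is a plain XOR (no carry propagation), and multiplication is shift-and-XOR with reduction modulo the irreducible polynomial.

IRRED = 0x11B     # x^8 + x^4 + x^3 + x + 1, irreducible over GF(2)

def gf_add(a: int, b: int) -> int:
    return a ^ b                       # carry-less addition is just XOR

def gf_mul(a: int, b: int, mod: int = IRRED, n: int = 8) -> int:
    result = 0
    while b:
        if b & 1:
            result ^= a                # add (XOR) shifted copies of a
        b >>= 1
        a <<= 1
        if a & (1 << n):               # reduce modulo the field polynomial
            a ^= mod
    return result

assert gf_add(0x57, 0x83) == 0xD4      # XOR of the two bytes
assert gf_mul(0x57, 0x83) == 0xC1      # the standard GF(2^8) example from AES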
To this end, in order to demonstrate that the suggested lattice-driven encryption scheme satisfies the necessary standards for cybersecurity in the pre- and post-quantum world, we have conducted an analysis of each of the main security preliminaries, as illustrated above.

Reducing a Vector Module to a Lattice-Based Problem

According to [80], when attempting to solve several types of lattice problems, it is necessary to take into account the connection that exists between a vector $v \in F_{p^n}^{i \times j}$ and a lattice $L_B$. For computational purposes, given the basis matrix $B$ representing the lattice $L_B$, one is essentially only concerned with the relationship between the vector $v$ and the basis $B$ when it is translated to the neighbourhood of $v$. Because the domain of the lattice is infinite, this relationship can be simplified by translating the vector $v$ to the origin’s neighbourhood while maintaining relatively the same location inside the lattice. This retains the same relationship between the vector and the lattice. A lattice modulo reduction is the term for this translation of the vector $v$. Algorithm 1 depicts the reduction of the proposed lattice-based scheme.
However, to ensure that the reduction of the proposed lattice-based algorithm has good security, that its reduction security is equivalent to SVP reduction security (i.e., secure against SVP solution algorithms such as Lenstra–Lenstra–Lovász (LLL) or Block Korkine–Zolotarev (BKZ)), and that it retains the required computational complexity, intelligent bio-latticed cryptography uses facial biometric features to generate the private key, as they constitute live-body identification and are difficult to manipulate or impersonate, thus resulting in a high level of security. Accordingly, the proposed lattice-based algorithm is reducible to an NP-hard problem.
Algorithm 1: Reduction of the proposed lattice-based scheme
INPUT: Basis $H \in F_{p^n}^{i \times j}$ in Hermite Normal Form, $v \in F_{p^n}^{i \times j}$
OUTPUT: $b \in F_{p^n}^{i \times j}$
Start
 $b \leftarrow v$
 for $y \leftarrow j$ down to 1 do
  for $x \leftarrow i$ down to 1 do
   $c_{x,y} \leftarrow \lfloor b_{x,y} \div h_{x,y} \rfloor$
   $b \leftarrow b - c_{x,y} \times h_{x,y}$
  end
 end
 return $b$
End
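A direct Python transcription of Algorithm 1 is sketched below, under the assumption that the update $b \leftarrow b - c_{x,y} \times h_{x,y}$ acts entry-wise, reducing each coordinate of $v$ modulo the corresponding basis entry; the 2×2 input is a toy instance, not the paper's $F_{p^n}^{i \times j}$ setting.

import numpy as np

def lattice_mod_reduce(H: np.ndarray, v: np.ndarray) -> np.ndarray:
    # Reduce v modulo the lattice spanned by the Hermite-Normal-Form basis H.
    b = v.astype(np.int64).copy()
    i, j = H.shape
    for y in range(j - 1, -1, -1):          # y <- j down to 1
        for x in range(i - 1, -1, -1):      # x <- i down to 1
            if H[x, y] != 0:
                c = b[x, y] // H[x, y]      # c_{x,y} <- floor(b_{x,y} / h_{x,y})
                b[x, y] -= c * H[x, y]      # translate b toward the origin
    return b

H = np.array([[5, 0], [3, 7]])              # toy HNF basis
v = np.array([[12, 4], [9, 23]])
print(lattice_mod_reduce(H, v))             # entries reduced: [[2 4] [0 2]]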

7. Conclusions and Future Work

The IoT, while providing infrastructure to foster digitalisation in different areas, can also introduce harmful attacks, especially in a post-quantum society. Quantum computers’ exploitation of quantum physical laws gives them extremely powerful processing capabilities that can decrypt secure data, such as government secrets, bank records, and the passwords of Internet users. This means that if quantum computers are harnessed for criminal purposes, current encryption techniques will not be able to resist them. This is prompting governments, companies, and cryptographic experts around the world to develop encryption techniques that are resistant to attacks from such computers. This study focuses on modern scientific methods for developing cybersecurity for the IoT and the tactile internet, and for preserving user privacy in smart cities in the quantum computing era, using AI, biometric identification techniques, and lattice theory. The efficiency of the proposed protocol at mitigating quantum threats while maintaining IoT performance is also assessed. As important future work, we will compute the performance metrics of our proposal on a real cellular network topology in traditional mode using heterogeneous devices such as cell phones. We will also compare our proposal with some existing lattice-based cryptosystems in terms of the elapsed time for key generation, message encryption, and message decryption. Furthermore, we will empirically examine the influence of the parameters to find the best trade-off between security level and performance, notably through a comparative study including lattice-literature-based selected parameters and corresponding security levels, since these parameters of the cryptographic scheme are related to the hardness of the SVP.

Author Contributions

Writing–original draft, O.S.A.; Writing–review & editing, O.S.A., T.M. and M.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The first author would like to thank Taif University and the Royal Embassy of Saudi Arabia Cultural Bureau for sponsoring her Ph.D. study.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PKI	Public Key Infrastructure
6G	Sixth-generation Networks
NB-IoT	Narrowband-Internet of Things
IoT	Internet of Things
5G PPP	European Infrastructure Public Private Partnership
LEACH	Low Energy Adaptive Clustering Hierarchy
BS	Base Station
Msg	Message
AI	Artificial Intelligence
DoS	Denial-of-Service attack
DDoS	Distributed Denial-of-Service attack
PCA	Principal Component Analysis
iIoT	Industrial Internet of Things
NP	Nondeterministic Polynomial-type
NP-hardness	Nondeterministic Polynomial-time hardness
SVP	Shortest Vector Problem
CVP	Closest Vector Problem
RP	Random Polynomial time
LLL	Lenstra–Lenstra–Lovász
BKZ	Block Korkine–Zolotarev

References

  1. Shor, P.W. Algorithms for quantum computation: Discrete logarithms and factoring. In Proceedings of the 35th Annual Symposium on Foundations of Computer Science, Santa Fe, NM, USA, 20–22 November 1994; pp. 124–134. [Google Scholar]
  2. Cheng, C.; Lu, R.; Petzoldt, A.; Takagi, T. Securing the Internet of Things in a quantum world. IEEE Commun. Mag. 2017, 55, 116–120. [Google Scholar] [CrossRef]
  3. Liu, Z.; Choo, K.K.R.; Grossschadl, J. Securing edge devices in the post-quantum internet of things using lattice-based cryptography. IEEE Commun. Mag. 2018, 56, 158–162. [Google Scholar]
  4. Xu, R.; Cheng, C.; Qin, Y.; Jiang, T. Lighting the way to a smart world: Lattice-based cryptography for internet of things. arXiv 2018, arXiv:1805.04880. [Google Scholar]
  5. Althobaiti, O.S.; Dohler, M. Cybersecurity Challenges Associated with the Internet of Things in a Post-Quantum World. IEEE Access 2020, 8, 157356–157381. [Google Scholar]
  6. Guo, H.; Li, B.; Zhang, Y.; Zhang, Y.; Li, W.; Qiao, F.; Rong, X.; Zhou, S. Gait recognition based on the feature extraction of Gabor filter and linear discriminant analysis and improved local coupled extreme learning machine. Math. Probl. Eng. 2020, 2020, 5393058. [Google Scholar] [CrossRef] [Green Version]
  7. Bishop, G.; Welch, G. An introduction to the kalman filter. Proc. SIGGRAPH Course 2001, 8, 41. [Google Scholar]
  8. Fronckova, K.; Slaby, A. Kalman Filter Employment in Image Processing. In Proceedings of the International Conference on Computational Science and Its Applications (ICCSA 2020), Cagliari, Italy, 1–4 July 2020; Springer: Cham, Switzerland, 2020; pp. 833–844. [Google Scholar]
  9. Welch, G.F. Kalman filter. In Computer Vision; Ikeuchi, K., Ed.; Springer: Berlin, Germany, 2021; pp. 721–727. [Google Scholar]
  10. Rosa, L. Face Recognition Technology. Available online: http://www.facerecognition.it/ (accessed on 1 October 2021).
  11. Jalled, F. Face recognition machine vision system using Eigenfaces. arXiv 2017, arXiv:1705.02782. [Google Scholar]
  12. Tartakovsky, A.M.; Barajas-Solano, D.A.; He, Q. Physics-informed machine learning with conditional Karhunen-Loève expansions. J. Comput. Phys. 2021, 426, 109904. [Google Scholar]
  13. Lin, M.; Ji, R.; Li, S.; Wang, Y.; Wu, Y.; Huang, F.; Ye, Q. Network Pruning Using Adaptive Exemplar Filters. IEEE Trans. Neural Netw. Learn. Syst. 2021. [Google Scholar] [CrossRef]
  14. Javadi, A.A.; Farmani, R.; Tan, T.P. A hybrid intelligent genetic algorithm. Adv. Eng. Inform. 2005, 19, 255–262. [Google Scholar] [CrossRef]
  15. Yi, T.H.; Li, H.N.; Gu, M. Optimal sensor placement for health monitoring of high-rise structure based on genetic algorithm. Math. Probl. Eng. 2011, 2011, 395101. [Google Scholar] [CrossRef] [Green Version]
  16. Shiffman, D. The Nature of Code: Chapter 9. The Evolution of Code; Addison-Wesley: Boston, MA, USA, 2012; Available online: https://natureofcode.com/book/chapter-9-the-evolution-of-code/ (accessed on 29 January 2020).
17. PwC. Protecting the Perimeter: The Rise of External Fraud. PwC’s Global Economic Crime and Fraud Survey 2022. Available online: https://www.pwc.com/gx/en/forensics/gecsm-2022/PwC-Global-Economic-Crime-and-Fraud-Survey-2022.pdf (accessed on 29 March 2022).
18. Agbolade, O.; Nazri, A.; Yaakob, R.; Ghani, A.A.; Cheah, Y.K. Down Syndrome Face Recognition: A Review. Symmetry 2020, 12, 1182.
19. Sharifi, O.; Eskandari, M. Cosmetic Detection Framework for Face and Iris Biometrics. Symmetry 2018, 10, 122.
20. Zukarnain, Z.A.; Muneer, A.; Ab Aziz, M.K. Authentication Securing Methods for Mobile Identity: Issues, Solutions and Challenges. Symmetry 2022, 14, 821.
21. Militello, C.; Rundo, L.; Vitabile, S.; Conti, V. Fingerprint Classification Based on Deep Learning Approaches: Experimental Findings and Comparisons. Symmetry 2021, 13, 750.
22. Arsalan, M.; Hong, H.G.; Naqvi, R.A.; Lee, M.B.; Kim, M.C.; Kim, D.S.; Kim, C.S.; Park, K.R. Deep Learning-Based Iris Segmentation for Iris Recognition in Visible Light Environment. Symmetry 2017, 9, 263.
23. Wayman, J. Fundamentals of Biometric Authentication Technologies. Int. J. Image Graph. 2001, 1, 93–113.
24. Uludag, U.; Pankanti, S.; Prabhakar, S.; Jain, A. Biometric cryptosystems: Issues and challenges. Proc. IEEE 2004, 92, 948–960.
25. Delac, K.; Grgic, M. A survey of biometric recognition methods. In Proceedings of the 46th International Symposium, Zadar, Croatia, 18 June 2004; pp. 184–193.
26. Jain, A.; Ross, A.; Prabhakar, S. An introduction to biometric recognition. IEEE Trans. Circuits Syst. Video Technol. 2004, 14, 4–20.
27. Cavoukian, A.; Stoianov, A.; Carter, F. Keynote Paper: Biometric Encryption: Technology for Strong Authentication, Security and Privacy. Policies Res. Identity Manag. Int. Fed. Inf. Process. 2007, 261, 57–77.
28. Janbandhu, P.; Siyal, M. Novel biometric digital signatures for Internet-based applications. Inf. Manag. Comput. Secur. 2001, 9, 205–212.
29. Feng, H.; Wah, C. Private key generation from on-line handwritten signatures. Inf. Manag. Comput. Secur. 2002, 10, 4.
30. Al-Hussain, A.; Al-Rassan, I. A biometric-based authentication system for web services mobile user. In Proceedings of the 8th International Conference on Advances in Mobile Computing and Multimedia, Paris, France, 8–10 November 2010; pp. 447–452.
31. Mohammadi, S.; Abedi, S. ECC-Based Biometric Signature: A New Approach in Electronic Banking Security. In Proceedings of the 2008 International Symposium on Electronic Commerce and Security, Guangzhou, China, 3–5 August 2008; pp. 763–766.
32. Goldreich, O.; Goldwasser, S.; Halevi, S. Public-key cryptosystems from lattice reduction problems. In Advances in Cryptology—CRYPTO ’97; Springer: Berlin, Germany, 1997; pp. 112–131.
33. Althobaiti, O.S.; Dohler, M. Quantum-Resistant Cryptography for the Internet of Things Based on Location-Based Lattices. IEEE Access 2021, 9, 133185–133203.
34. Chen, C.; Hoffstein, J.; Whyte, W.; Zhang, Z. NIST PQ Submission: NTRUEncrypt, a Lattice-Based Encryption Algorithm. NIST Post-Quantum Cryptography Standardization: Round 1 Submissions. 2018. Available online: https://csrc.nist.gov/Projects/post-quantum-cryptography/Round-1-Submissions (accessed on 29 March 2019).
35. Bergami, F. Lattice-Based Cryptography. Master’s Thesis, Università di Padova, Padova, Italy, 2016.
36. Yuan, Y.; Cheng, C.M.; Kiyomoto, S.; Miyake, Y.; Takagi, T. Portable implementation of lattice-based cryptography using JavaScript. Int. J. Netw. Comput. 2016, 6, 309–327.
37. Ahmad, K.; Doja, M.; Udzir, N.I.; Singh, M.P. Emerging Security Algorithms and Techniques; CRC Press: Boca Raton, FL, USA, 2019.
38. Hoffstein, J.; Pipher, J.; Silverman, J.H. NTRU: A ring-based public key cryptosystem. In Proceedings of the International Algorithmic Number Theory Symposium; Springer: Berlin/Heidelberg, Germany, 1998; pp. 267–288.
39. Simon, J. DATA HASH—Hash for Matlab Array, Struct, Cell or File. MATLAB Central File Exchange. Available online: https://www.mathworks.com/matlabcentral/fileexchange/31272-datahash (accessed on 1 January 2021).
40. Narayanan, G.; Haneef, N.; Narayanan, R. Matlab Implementation of “A Novel Approach to Improving Burst Errors Correction Capability of Hamming Code”. 2018. Available online: https://github.com/gurupunskill/novel-BEC (accessed on 29 March 2021).
41. Afifi, M.; Derpanis, K.G.; Ommer, B.; Brown, M.S. Learning multi-scale photo exposure correction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 9157–9167. Available online: https://github.com/mahmoudnafifi/Exposure_Correction (accessed on 29 December 2021).
42. Boboc, A. Pattern Generator for MATLAB. MATLAB Central File Exchange. 2003. Available online: https://www.mathworks.com/matlabcentral/fileexchange/4024-pattern-generator-for-matlab (accessed on 17 October 2021).
43. Althobaiti, O.S.; Dohler, M. Narrowband-internet of things device-to-device simulation: An open-sourced framework. Sensors 2021, 21, 1824.
44. Patel, P.; Ganatra, A. Investigate age invariant face recognition using PCA, LBP, Walsh Hadamard transform with neural network. In Proceedings of the International Conference on Signal and Speech Processing (ICSSP-14), Atlanta, GA, USA, 1 December 2014; pp. 266–274. Available online: https://github.com/Priyanka154/-Age-Invariant-Face-Recognition (accessed on 29 March 2021).
45. Neerubai, S. Using PCA for Dimensionality Reduction of Facial Images. Available online: https://github.com/susmithaneerubai/Data-mining-project--Face-recognition (accessed on 1 March 2022).
46. Nguyen, M.X.; Le, Q.M.; Pham, V.; Tran, T.; Le, B.H. Multi-scale sparse representation for robust face recognition. In Proceedings of the 2011 Third International Conference on Knowledge and Systems Engineering, 2011; pp. 195–199. Available online: https://github.com/tntrung/sparse_based_face_recognition (accessed on 29 March 2021).
47. Cervantes, J.I. Face Recognition Written in MATLAB. 2014. Available online: https://github.com/JaimeIvanCervantes/FaceRecognition (accessed on 29 March 2020).
48. Thomas. Face Recognition Neural Network Developed with MATLAB. 2015. Available online: https://github.com/tparadise/face-recognition (accessed on 29 March 2020).
49. Aderohunmu, F.A. Energy Management Techniques in Wireless Sensor Networks: Protocol Design and Evaluation. Ph.D. Thesis, University of Otago, Dunedin, New Zealand, 2010.
50. Cai, J.; Nerurkar, A. Approximating the SVP to within a factor (1 + 1/dim^ε) is NP-hard under randomized conditions. In Proceedings of the Thirteenth Annual IEEE Conference on Computational Complexity (Formerly: Structure in Complexity Theory Conference), Buffalo, NY, USA, 18 June 1998; pp. 46–55.
51. Dinur, I. Approximating SVP∞ to within almost-polynomial factors is NP-hard. Theor. Comput. Sci. 2002, 285, 55–71.
52. Dinur, I.; Kindler, G.; Safra, S. Approximating CVP to within almost-polynomial factors is NP-hard. In Proceedings of the 39th Annual Symposium on Foundations of Computer Science (Cat. No. 98CB36280), Palo Alto, CA, USA, 8–11 November 1998; pp. 99–109.
53. Buchmann, J.; Schmidt, P. Postquantum Cryptography. 2010. Available online: https://www-old.cdc.informatik.tu-darmstadt.de (accessed on 29 January 2018).
54. Fortnow, L. The status of the P versus NP problem. Commun. ACM 2009, 52, 78–86.
55. Baker, T.; Gill, J.; Solovay, R. Relativizations of the P =? NP question. SIAM J. Comput. 1975, 4, 431–442.
56. Vadhan, S.P. Computational Complexity. 2011. Available online: https://dash.harvard.edu/bitstream/handle/1/33907951/ComputationalComplexity-2ndEd-Vadhan.pdf?sequence=1 (accessed on 1 March 2018).
57. Gorgui-Naguib, R.N. p-adic Number Theory and Its Applications in a Cryptographic Form. Ph.D. Thesis, University of London, London, UK, 1986.
58. Woeginger, G.J. Exact algorithms for NP-hard problems: A survey. In Combinatorial Optimization—Eureka, You Shrink!; Springer: Berlin/Heidelberg, Germany, 2003; pp. 185–207.
59. Lagarias, J.C.; Lenstra, H.W., Jr.; Schnorr, C.P. Korkin–Zolotarev bases and successive minima of a lattice and its reciprocal lattice. Combinatorica 1990, 10, 333–348.
60. Goldreich, O.; Goldwasser, S. On the limits of nonapproximability of lattice problems. J. Comput. Syst. Sci. 2000, 60, 540–563.
61. Micciancio, D. The shortest vector in a lattice is hard to approximate to within some constant. SIAM J. Comput. 2001, 30, 2008–2035.
62. Khot, S. Hardness of approximating the shortest vector problem in lattices. J. ACM 2005, 52, 789–808.
63. Khot, S. Hardness of approximating the Shortest Vector Problem in high ℓp norms. J. Comput. Syst. Sci. 2006, 72, 206–219.
64. Van Emde Boas, P. Another NP-Complete Problem and the Complexity of Computing Short Vectors in a Lattice; Technical Report; Department of Mathematics, University of Amsterdam: Amsterdam, The Netherlands, 1981.
65. Regev, O. New lattice-based cryptographic constructions. J. ACM 2004, 51, 899–942.
66. Aharonov, D.; Regev, O. Lattice problems in NP ∩ coNP. In Proceedings of the 45th Annual IEEE Symposium on Foundations of Computer Science, Rome, Italy, 17–19 October 2004; pp. 362–371.
67. Islam, M.R.; Sayeed, M.S.; Samraj, A. Biometric template protection using watermarking with hidden password encryption. In Proceedings of the 2008 International Symposium on Information Technology, Kuala Lumpur, Malaysia, 26–28 August 2008; Volume 1, pp. 1–8.
68. Bhowmik, A.; Menon, U. An adaptive cryptosystem on a Finite Field. PeerJ Comput. Sci. 2021, 7, e637.
69. Benvenuto, C.J. Galois Field in Cryptography; University of Washington: Seattle, WA, USA, 2012; Volume 1, pp. 1–11.
70. En, N.W. Why AES Is Secure. Available online: https://wei2912.github.io/posts/crypto/why-aes-is-secure.html (accessed on 1 July 2022).
71. Danger, J.L.; El Housni, Y.; Facon, A.; Gueye, C.T.; Guilley, S.; Herbel, S.; Ndiaye, O.; Persichetti, E.; Schaub, A. On the Performance and Security of Multiplication in GF(2^N). Cryptography 2018, 2, 25.
72. NIST. Post-Quantum Cryptography PQC. Available online: https://csrc.nist.gov/Projects/post-quantum-cryptography (accessed on 1 May 2018).
73. Guajardo, J.; Kumar, S.S.; Paar, C.; Pelzl, J. Efficient software-implementation of finite fields with applications to cryptography. Acta Appl. Math. 2006, 93, 3–32.
74. Simion, E. Entropy and randomness: From analogic to quantum world. IEEE Access 2020, 8, 74553–74561.
75. Teixeira, A.; Matos, A.; Antunes, L. Conditional Rényi entropies. IEEE Trans. Inf. Theory 2012, 58, 4273–4277.
76. Dadheech, A. Preventing Information Leakage from Encoded Data in Lattice-Based Cryptography. In Proceedings of the 2018 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Bangalore, India, 19–22 September 2018; pp. 1952–1955.
77. Reyzin, L. Some notions of entropy for cryptography. In Proceedings of the 5th International Conference on Information Theoretic Security, Amsterdam, The Netherlands, 21–24 May 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 138–142.
78. Nisan, N.; Zuckerman, D. Randomness is linear in space. J. Comput. Syst. Sci. 1996, 52, 43–52.
79. Nguyen, P. Cryptanalysis of the Goldreich–Goldwasser–Halevi cryptosystem from Crypto ’97. In Advances in Cryptology—CRYPTO ’99: Proceedings of the 19th Annual International Cryptology Conference, Santa Barbara, CA, USA, 15–19 August 1999; Springer: Berlin/Heidelberg, Germany, 1999; pp. 288–304.
80. Rose, M. Lattice-Based Cryptography: A Practical Implementation. Master’s Thesis, University of Wollongong, Wollongong, Australia, 2011.
Figure 1. An overview of a biometric system.
Figure 4. Secure heterogeneous cellular network in traditional mode with quantum-resistant intelligent bio-latticed cryptography.
Figure 5. Comparison of elapsed times in minutes.
Figure 6. Stability periods for secure and insecure NB-IoT attocells.
Figure 7. Delay time comparisons at BS no. 9 with and without encryption.
Figure 8. Comparison of numbers of packets.
Figure 9. Comparison of throughput.
Figure 10. Comparison of the power consumption of cell device no. 101.
Figure 11. Comparison of the power consumption of cell device no. 100.
Figure 12. Comparison of the energy consumption of cell device no. 13.