Article

A Finite State Machine-Based Improved Cryptographic Technique

1 Department of Mathematics, College of Science, King Khalid University, Abha 61421, Saudi Arabia
2 Senior Member of Technical Staff, Oracle, 3990 Scottfield Street, Dublin, CA 94568, USA
3 Department of Information Technology, University of Tabuk, Tabuk 71491, Saudi Arabia
4 Department of Computer Science, College of Computer Science & Information Technology, Jazan 45142, Saudi Arabia
5 Department of USCS, Uttaranchal University, Dehradun 248007, India
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(10), 2225; https://doi.org/10.3390/math11102225
Submission received: 17 March 2023 / Revised: 18 April 2023 / Accepted: 4 May 2023 / Published: 9 May 2023
(This article belongs to the Special Issue Codes, Designs, Cryptography and Optimization, 2nd Edition)

Abstract

With the advent of several new means of communication, safeguarding the confidentiality of messages has become more crucial. Financial institutions, virtual currencies, and government organizations are all examples of high-risk contexts where information exchanges need particular care. The importance of data security in preventing unauthorized access to data is emphasized, and several cryptographic methods for protecting the secrecy and integrity of data are compared. The proposed work includes a new Turbo Code-based encryption algorithm. The Turbo encoder's puncturing process is controlled by a secret key, and a random sequence is generated to encrypt the data and correct transmission errors. Key generation from pre-existing data eliminates the requirement for sending keys over a secure channel. Using recurrence relations and the Lower–Upper (LU) decomposition method, the presented study suggests a novel approach to message encryption and decryption. The resulting encrypted grayscale image has a very high level of security, with an entropy of 7.999, a variation from perfection of 0.0245, and correlations of 0.0092 along the diagonal, 0.0009 along the horizontal, and −0.0015 along the vertical. Directly decrypted images have a Peak Signal-to-Noise Ratio (PSNR) of 56.22 dB, and the suggested approach achieves an embedding capacity of 0.5 bpp (bits per pixel), attained by decreasing the size of the location map to only 0.02 bpp.

1. Introduction

Many people use the word “cryptography” to mean the study of secret communication methods. Cryptography is the science and art of safeguarding data through codes and other forms of secret writing. Encryption is a form of cryptography [1] that transforms information from its original intelligible form into an unintelligible one to ensure its security while in transit or storage. The ciphertext may be restored to its original Plaintext form by using a decryption technique. There is no way to encrypt or decrypt data without the key. Almost all cipher systems are based on two operations: transposition and substitution. In the final cipher, transposition and substitution may be applied in either order. In a product cipher system, the first stage of encryption is completed by transposition (diffusion), while the second stage is completed through substitution (confusion). Figure 1 is a schematic depiction of symmetric key cryptography. When given a Plaintext (P) and a key (K), symmetric cryptography generates an encrypted message (C). The Plaintext (P) is encrypted using a secret key (K) and decrypted using the same key. Given the amount of rearranging and shuffling that takes place, the connection between the ciphertext and the Plaintext is, at best, obscure. Subtly altering (P) by diffusion [2] sets the stage for the intricate interplay between neighboring symbols. The product cipher technique thus provides both confusion and diffusion.
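The two-stage product-cipher idea above can be illustrated with a minimal sketch: a key-derived transposition rearranges the symbols, and an XOR substitution masks them. The block size, permutation, and key schedule here are illustrative assumptions, not the paper's exact scheme.

```python
# Minimal product-cipher sketch: transposition stage followed by a
# substitution (XOR) stage. All parameters are illustrative.

def transpose(block: bytes, perm: list) -> bytes:
    """Rearrange the bytes of a block according to a key-derived permutation."""
    return bytes(block[i] for i in perm)

def substitute(block: bytes, key: bytes) -> bytes:
    """XOR each byte with a repeating key stream."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(block))

def encrypt(plaintext: bytes, perm: list, key: bytes) -> bytes:
    return substitute(transpose(plaintext, perm), key)

def decrypt(ciphertext: bytes, perm: list, key: bytes) -> bytes:
    # Invert the permutation to undo the transposition stage.
    inv = [0] * len(perm)
    for i, p in enumerate(perm):
        inv[p] = i
    return transpose(substitute(ciphertext, key), inv)

perm = [3, 0, 2, 1]          # key-dependent transposition of a 4-byte block
key = b"K1"
c = encrypt(b"DATA", perm, key)
assert decrypt(c, perm, key) == b"DATA"
```

Because both stages are keyed and invertible, the same key recovers the Plaintext exactly, matching the symmetric-key model of Figure 1.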
Incoming messages are passed to the source encoder and sent over the channel at the best feasible data rate and accuracy [3]. The source encoder maps the symbols to code words; a unique code word is associated with every symbol. Before a source decoder can convert the output of the binary channel decoder back into a symbol sequence, it must first perform the inverse mapping of the source encoder's code words. Incorrect binary sequences may be received due to interference along the signal's path to its destination. Channel coding [4] methods are useful for eliminating this kind of error. In channel encoding, an encoder at the transmitter end adds additional bits to the output of the source encoder using an algorithm. The channel-encoded information is then decoded by a channel decoder at the receiving end. Many error-correcting methods are used in channel encoding to ensure that the original bit sequence may be recovered in its entirety.
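The redundancy-then-correction principle described above can be sketched with the simplest possible channel code, a rate-1/3 repetition code; the paper's Turbo Code is far more powerful, and this toy example is only meant to show the mechanism of adding bits at the transmitter and correcting errors at the receiver.

```python
# Rate-1/3 repetition code: each bit is sent three times and decoded
# by majority vote, correcting any single bit error per triplet.

def encode(bits):
    return [b for b in bits for _ in range(3)]   # add redundant copies

def decode(coded):
    out = []
    for i in range(0, len(coded), 3):
        out.append(1 if sum(coded[i:i + 3]) >= 2 else 0)  # majority vote
    return out

msg = [1, 0, 1, 1]
tx = encode(msg)
tx[4] ^= 1                      # one bit flipped by channel noise
assert decode(tx) == msg        # the error is corrected
```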
Copyright protection and preventing malicious manipulation are the most pressing and difficult issues. Research suggests that data hiding can help alleviate these concerns. Covert data storage in media may be useful for a variety of purposes, such as facilitating the identification of images, safeguarding intellectual property, and regulating access. The integrity of the message would be compromised if information were distorted in a way that was not immediately obvious to the human eye. When dealing with highly sensitive information or medical imaging, even a small inaccuracy might have serious consequences. Reversible Data Hiding (RDH) [5] describes how to embed data into digital media without any quality degradation: after the data has been extracted, the whole cover medium may be recovered. The owner of an image has little reason to trust cloud service providers and will almost certainly encrypt the image before uploading it. The image restoration and analysis processes must be carefully separated for usage in medical and defense applications. The original image should be encrypted using the owner's public key. The data hider uses a data-hiding key to embed the information into the encrypted signal. After receiving the information, it may be decoded from the image. If the sender and receiver both have the data-hiding key, the recipient may decode the sent signal without knowing what is in the encrypted data. Therefore, even if the receiver has the ciphertext [6], they cannot decode the data hidden in the image without also obtaining the corresponding key. The cover image itself can be obtained by publicly decrypting the received encrypted signal.
The use of the Modified Cryptographic Turbo Code Finite State Machine (MCTC-FSM) is expanding into new domains, such as privacy protection [7], encrypted image authentication, and authorship verification. One major drawback of current methods is that the encoded information can only be accessed either before or after the image has been decoded. Thus, the MCTC-FSM method has been validated as a practical choice for both secret extraction and image recovery. Most TC methods may be an improvement over what has come before, but none of them can ensure complete success when it comes to extracting data or restoring images. Due to their prohibitive computational cost, slow embedding rate, and inadequate security [8], these approaches are not well-suited for usage in real-time applications. The proposed work includes a ground-breaking MCTC-FSM technique with robust security, complete cover and secret reversibility, and a large embedding capacity at a reasonable cost. In this research, we provide a unique MCTC-FSM method based on two-level embedding. The ciphertext may be incorporated into the cipher image using this method, even if the cover and secret are unknown. Only those in possession of the corresponding decryption keys will be able to read the encrypted information. The suggested methodology improves the quality of both embedding and encryption, with correlations tending towards zero and histogram values becoming uniform. The final image should have an SSIM of 1, a BER of 0, and a PSNR of infinity.

1.1. Motivation

Error-control coding is a method used to improve a channel's resistance to interference. The proper unity between mathematical modelling and efficient algorithms, and more critically the realization of this principle in actual systems, is essential for data transfer that is both quick and secure. To code channels in a way that comes close to the Shannon limit, Turbo Code (TC) is a major contender. The bit error rate (BER) performance of currently available Turbo Codes is vastly enhanced, either in the waterfall zone or the error-floor region, or both. Yet, the flattening effect is an issue for most forward-error-correcting codes, and the development of communication technology calls for a more sophisticated coding scheme with great potential to address this problem. As a result, the development of better TC designs has been a hot topic in the scientific community during the last several decades.
Meanwhile, a large amount of research has been conducted in the field of computational intelligence approaches. Scientists have successfully implemented these algorithms to improve the efficiency of modern communication networks. Existing linear programming methodologies, such as Dantzig's method, Bellman's theory, etc., cannot simplify the complexities of real-world engineering challenges and cannot produce stable optimum values for multi-optima-based rough surfaces. Considering this, the development of strong meta-heuristic algorithms inspired by nature and their use in a wide range of engineering issues has arisen as a new and significant topic of study.
The literature has sparked some fresh ideas for creating more effective TC. As adding another dimension to a Turbo Code (TC) significantly boosts the system's BER performance, this study tries to build both enhanced Three-Dimensional (3D) and unique Four-Dimensional (4D) TCs. A new modulation method called Superposition Modulation (SM) has been incorporated into this setting. In addition, the unique Simple Augmented State Diagram (SASD) method has been used to assess the performance of the suggested 4D-TCs. Moreover, utilizing the finite length rule and the asymptotic spectral shape function, an in-depth investigation of the asymptotic behavior of the innovative hybrid interleaver-based 4D-TC has been performed. By using the time-slicing and interleaver-gain technique, we have also analyzed the worst-case upper bound of the suggested structure. Improved versions of previously developed algorithms inspired by nature have also been successfully utilized in the effective design of the TC. Lastly, the BER performances of the suggested TCs under existing communication standards such as WiMAX and WiFi are discussed.

1.2. Significance

During the last several decades, the efficient design of Turbo Codes (TCs) for cutting-edge communication systems has been a hotbed of innovative study. Scientists have proposed numerous helpful methods, which can be roughly divided into five categories: TC structural analysis, TC ensemble analysis, TC energy allocation strategies, TC design with optimization approaches, and TC application with multiple structures in contemporary communication systems. It has been shown that either increasing the number of effective encoding and decoding modules or developing a competent interleaving unit may boost TC performance in this regard. In this context, the optimal puncturing pattern has also been considered, and optimal power sharing between systematic and parity bits has been studied. The upper bound of BER for both 2D and 3D Turbo Codes has also been evaluated using a variety of ensemble methodologies, including the state diagram methodology. As a bonus, by using several computationally clever methods, optimal TCs have been designed with remarkable efficiency. Several objective functions have been developed for this purpose, considering critical coding scheme characteristics. In view of these considerations, the primary goals of this study are as follows:
First, creating energy-saving designs, a revolutionary Modified Symbiotic Organism Search (MSOS) algorithm is combined with several Superposition Modulation (SM) power allocation strategies to create a 3D Turbo Code (3D-TC).
Symmetric key cryptosystems are used for both data encryption and decryption in the proposed system. In earlier work, the input was a 3 × 3 matrix, which is a small matrix and, hence, simple to solve. Moreover, the key was produced using a range of values from 16 to 255, which is too small a key space for a Finite State Machine and so leaves the system vulnerable to cracking. We may extrapolate the following from this observation:
(a) The security of a cryptographic system cannot be ensured by using a single key of insufficient length. To leave hackers no realistic hope of cracking the encryption, it is advised that long keys be utilized.
(b) An intermediate cipher must be established to keep data secure from hackers, and it is preferable if dynamic keys are employed.
(c) Key transfer through a secure channel should be unnecessary if keys can be generated from publicly available information.
The article is divided into introduction, methods, experiments, results analysis, summary, and recommendations for further study.

2. Literature Survey

Steganography [9] entails hiding secret information inside a seemingly benign image. Heavy embedding, however, drastically alters the cover image, which flags the presence of embedded data. Thus, steganography alone is typically insufficient for securing massive volumes of disguised data. Data security may be improved by combining encryption with other methods, such as steganography. Encryption applies a mathematical function to data, making it unreadable by anyone except the intended recipient. Even encrypted data may be intercepted if it is transferred through a wireless channel. When combined with encryption, steganography is one of the most effective techniques to conceal sensitive data [10]. To be effective, software must be able to restore both the secret information and the cover media without tampering with them.
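The generic steganographic embedding surveyed here can be sketched with the classic LSB technique: each secret bit replaces the least significant bit of one cover pixel, so the visual distortion per pixel is at most one intensity level. This is a minimal illustration of the general idea, not the paper's RDH scheme.

```python
# LSB steganography sketch: embed secret bits into the least significant
# bits of grayscale pixel values, then extract them at the receiver.

def embed(pixels, secret_bits):
    # clear the LSB of each pixel and write one secret bit into it
    return [(p & ~1) | b for p, b in zip(pixels, secret_bits)]

def extract(pixels, n):
    # read back the LSB of the first n stego pixels
    return [p & 1 for p in pixels[:n]]

cover = [52, 55, 61, 66, 70, 61, 64, 73]   # 8-bit grayscale pixels
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(cover, secret)
assert extract(stego, len(secret)) == secret
assert all(abs(s - c) <= 1 for s, c in zip(stego, cover))  # distortion ≤ 1
```

Note that plain LSB embedding is not reversible: the original LSBs are overwritten, which is exactly the limitation that RDH schemes address.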
Reversible Data Hiding (RDH) may be used to recover both the cover image and the secret data, since it is a lossless embedding mechanism. Researchers are now focusing on RDH in encrypted domains, known as RDH in Encrypted Domain (RDH-ED). The RDH-ED framework has two categories, separable and non-separable, which reflect whether the hidden data can be extracted independently of image decryption. Separable RDH-ED allows for data extraction without disclosing the image decryption key. In contrast, with non-separable RDH-ED, the cover image must be decrypted before the hidden data can be accessed. Puech initially presented a non-separable RDH-ED scheme based on Advanced Encryption Standard (AES) [11] encryption of the original cover image.
In that scheme, the encrypted image is first marked; the hidden data is then recovered by decrypting the image and analyzing its local standard deviation. Obtaining the secret information is impossible without first decrypting the cover image, making this technique non-separable. The technique is simple to implement; however, it has a low embedding capacity of only 0.0625 bpp (bits per pixel).
Zhang proposes another RDH-ED approach in [12] that encrypts the original cover image using an exclusive-or (XOR) operation. The encrypted image is divided into blocks, and each block is further partitioned into two pixel sets. To conceal one bit of information, the three Least Significant Bits (LSBs) of the pixels in one set of the block are flipped. This technique may simultaneously reveal the cover image and the secret data to the recipient. If the block size is ill-chosen, data extraction or image restoration will fail. Using a new estimation equation and side-matching algorithm, approach [13] enhances embedding capacity and drastically reduces the error rate between neighboring blocks. Because data recovery requires decryption of the image, this technique is also a non-separable RDH-ED, developing difference histograms of the data and shifting preexisting ones [14].
The security and dependability of a system are compromised when error-correcting codes are used with other security mechanisms. The necessity to enhance the current ME technique [15] with multi-bit error correction and a larger embedding capacity and reversibility inspired the proposed study.
Further secret information may be added to a steganographic [16] image by encrypting it. The cover image may be reversed if the positions of the hidden data are revealed through the FSM's location map. The positions of the cover image's components altered by embedding are recorded in a map called the location map. At the receiver, the location map may be extracted from the codewords to reconstruct the parts of the cover image that were untouched by the changes made during embedding. Integrating ME with FSM, which employs a cryptographic FSM, would allow for the embedding of secret data and a cover image with improved quality, security, and reliability.

Issues and Challenge

This suggested effort aims to improve the reliability and security of data transmissions. Steganography is the practice of hiding sensitive data within an apparently innocuous image. This method protects sensitive information but offers little embedding capacity, and the quality of the restored cover image suffers when more embedding is applied. Encrypting the steganographic image is one way to boost embedding capacity without sacrificing security. Information that has been encrypted or hidden via steganography may still be vulnerable to channel noise. One possibility is to implement an error-correcting coding scheme [17]. The total cost of computing rises due to the three-step process of steganography, encryption, and error-correcting code required to guarantee the security and dependability of the transmitted data. The updated ME incorporates the concealed data first into a cover image; the improved ME was created for maximum embedding effectiveness. Table 1 shows the comparison of existing methodologies and their demerits.
The suggested Cryptographic Turbo Code is used to encrypt and encode the steganographic image (an embedded cover image). In the Modified Cryptographic Turbo Code (MCTC), a secret-key FSM replaces the regular FSM, making it a keyed variant of the FSM. With the secret key, a secret-key FSM encrypts the data and corrects transmission errors by shuffling the bits of the input into a new sequence [29]. As an alternative, we suggest using a cryptographic FSM based on elliptic curves to produce random bitstream noise (MCTC-FSM) [24]. The sensitive data is protected on two fronts by the joint efforts of the Modified Minimum Encryption and Minimal Common Transceiver Complexity. This may be achieved by using a steganographic image to covertly conceal the information and then encrypting the whole thing [25]. The cover image's security and embeddability are both improved by encrypting the steganographic image. Recovery quality suffers as embedding capacity rises, because more advanced embedding methods change more of the cover image's bits. To get around this issue, the MCTC relays the in-codeword locations of the changed bits through syndrome embedding. Hence, the receiver can decode both the altered bits of the cover image and the hidden data included inside the steganographic image, and the data embedding can be made reversible [30].
Due to their reliance on a location map, current RDH-ED systems can recover the cover image, but the map itself limits the data that can be inserted. To fill the identified knowledge gap, it will be necessary to enhance embedding capacity via reliable recovery of the cover image. When it comes to correcting errors, Turbo Code is unparalleled; on the downside, it does not safeguard information during transmission. The bit error rate could be decreased by using TC in conjunction with encryption. Present MCTC-FSMs are quite effective at detecting and repairing errors, yet they could be built with cryptographically secure encryption and portable data storage in mind. There is a known void in the literature regarding the development of a cryptographically sound FSM that can create a random interleaving pattern at an affordable computing cost.

3. Proposed Methodology

The classic method accomplishes security and reliability via a cascade implementation of the traditional cryptography system and the error-correcting coding scheme. To ensure the safety of the sent data, it is first encrypted and then channel-encoded for transmission. Information received must be decoded and decrypted at the receiving end before it can be used to reconstruct the original message. The Advanced Encryption Standard (AES) is an algorithm for secure data transmission that has been recommended for use. For the goal of data security, it is often used in systems with constrained means, such as satellites.
The performance of a convolution coding unit connected in parallel mode can be assessed from the Input Redundancy Weight Enumerating Function (IRWEF) of the converged constituent recursive convolution code C, given by

A^{C}(W, Z) = \sum_{w} \sum_{z} A_{w,z}^{C} \, W^{w} Z^{z}    (1)

where the coefficient A_{w,z}^{C} denotes the number of code strings with parity-check weight z spawned by an input bit stream of weight w. Here, W and Z are two dummy variables.
The Conditional Weight Enumerating Functions (CWEFs) A_{w}^{C_1}(Z) and A_{w}^{C_2}(Z) of the encoding modules C_1 and C_2 can be specified as in Equations (2) and (3), respectively:

A_{w}^{C_1}(Z) = \sum_{z} A_{w,z}^{C_1} Z^{z}    (2)

A_{w}^{C_2}(Z) = \sum_{z} A_{w,z}^{C_2} Z^{z}    (3)
Here, A_{w,z}^{C_1} and A_{w,z}^{C_2} signify the coefficients of the weight enumerating functions of the encoding modules C_1 and C_2, respectively. These coefficients denote coded strings with parity-check weight z produced by an input bit string of weight w. Now, by picking an appropriate value of the input factor \lambda, with N the pixel range, the fractional parts of the parity sequences of lengths \lambda N and \lambda K from C_1 and C_2 are amassed by a P/S multiplexing unit to create the coded bit sequence of length \lambda(N + K). Therefore, the CWEF [17] of the multiplexing unit, which accrues the fractional sequences arriving from the convolution encoders connected in parallel and separated by an interleaver of size l_1, is specified by:

A_{w}^{P}(Z) = \frac{A_{w}^{C_1}(Z) \, A_{w}^{C_2}(Z)}{\binom{l_1}{w}}    (4)

The validity of Equation (4) has been assessed by considering the following norm: the interleaver \pi_1 of length l_1 should map a set of incoming bits of weight w into each of its \binom{l_1}{w} dissimilar permutations with identical probability. Now, the output of the multiplexing unit is permuted by the interleaving unit \pi_2 with a length of \lambda(N + K). Therefore, the CWEF of the interleaving unit \pi_2 is given by Equation (5).
Here, w_5 and w_6 characterize the input weights of the tracks P_2 and P_5, respectively, which are connected to the puncturing module. In this context, w_a and w_b represent the output weights of the bit strings coming from the puncturing unit. The weight of the output code sequence arriving from the post-encoding unit is signified by h(w_a, w_b) = w_3. Additionally, w_1 and w_2 characterize the input weights of the tracks P_3 and P_4, respectively, which are also connected to the puncturing module.
The tightened upper bound on the BEP for the ML soft-decoding technique under the Additive White Gaussian Noise (AWGN) channel condition can be evaluated from the Input Output Weight Enumerating Function (IOWEF) coefficients B_{w,h}. Assuming BPSK modulation, the tightened upper bound of the 3D-TC is specified by

P_b \le \frac{1}{N} \sum_{h} \sum_{w} w \, B_{w,h} \, Q\!\left(\sqrt{2 R_C h \frac{E_b}{N_0}}\right)    (6)
where R C signifies the code rate. A flowchart of the proposed work is shown in Figure 2.
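The union bound above is straightforward to evaluate numerically once a distance spectrum is available. The sketch below assumes a small toy set of IOWEF coefficients B_{w,h} purely for illustration; real spectra for a 3D-TC would come from the ensemble analysis described earlier.

```python
# Numerical sketch of the BEP union bound:
#   P_b <= (1/N) * sum_{w,h} w * B_{w,h} * Q(sqrt(2 * Rc * h * Eb/N0))
# The distance-spectrum values below are assumed toy numbers.
import math

def Q(x):
    """Gaussian tail function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bep_upper_bound(spectrum, N, Rc, EbN0):
    # spectrum maps (w, h) -> B_{w,h}
    return sum(w * B * Q(math.sqrt(2.0 * Rc * h * EbN0))
               for (w, h), B in spectrum.items()) / N

spectrum = {(2, 10): 3, (3, 12): 1}        # assumed toy IOWEF coefficients
bound = bep_upper_bound(spectrum, N=1024, Rc=1 / 3,
                        EbN0=10 ** (2.0 / 10))   # Eb/N0 = 2 dB (linear)
assert 0.0 < bound < 1.0
```

Since the bound sums over a handful of dominant low-weight terms, it is cheap to compute and tightens as more of the spectrum is included.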
To provide security and dependability, RDH and the FSM combine to provide excellent security but with a drop in dependability compared to the original implementation of the FSM. This makes sense, as RDH-128, RDH-192, and RDH-256 use keys of length 128, 192, or 256 bits to encrypt and decrypt 128-bit data blocks. This limits the maximum frame length of the FSM to 128 bits, constraining the 4-FSM's error-correcting capability. A longer input bit sequence may improve the MCTC-FSM's error-correcting performance. Nevertheless, encoding a bit sequence longer than 128 bits necessitates many iterations of the MCTC, which increases the time and energy needed to process the data.
In this scheme, the input sequence of length m is passed through the Serial-to-Parallel (S/P) unit. Thereafter, the symbols arriving from the S/P unit are moved through the Binary Phase-Shift Keying (BPSK) segment. Next, amplitudes are assigned to each of these symbols. Finally, these symbols are linearly superimposed to generate a finite-alphabet output symbol Y. This whole modulation arrangement can be mathematically articulated by the expression

Y = \sum_{i=1}^{m} C_i = \sum_{i=1}^{m} \beta_i d_i = \sum_{i=1}^{m} \beta_i (1 - 2 b_i)    (7)

where \beta_i indicates the magnitude of the i-th binary chip and b_i \in \{0, 1\}. Three power distribution techniques, namely Equal Power Allocation (EPA), Unequal Power Allocation (UPA), and Grouped Power Allocation (GPA), have been successfully used for the design of the efficient 3D-TC.
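The superposition expression above can be sketched directly: each bit b_i is mapped to a BPSK chip d_i = 1 − 2 b_i, scaled by its amplitude β_i, and summed into one symbol. The Equal Power Allocation amplitudes below are an illustrative normalization, not the paper's exact values.

```python
# Superposition Modulation sketch: m BPSK chips d_i = 1 - 2*b_i are
# weighted by amplitudes beta_i and summed into a finite-alphabet symbol Y.

def sm_symbol(bits, betas):
    return sum(beta * (1 - 2 * b) for b, beta in zip(bits, betas))

m = 4
betas = [1.0 / m ** 0.5] * m    # EPA: equal magnitude, unit total power
# all-zero bits map to the maximum-amplitude symbol +m*beta
assert abs(sm_symbol([0, 0, 0, 0], betas) - m / m ** 0.5) < 1e-9
# balanced bits cancel to a zero-amplitude symbol
assert abs(sm_symbol([0, 1, 0, 1], betas)) < 1e-9
```

With UPA or GPA, the `betas` list would simply hold unequal or group-wise amplitudes instead.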
Instead, one might modify the existing FSM implementation in such a manner that it provides security and error correction at a reduced computational cost. The conventional FSM is limited to correcting errors in the data during transmission. However, the random shuffling that the random FSM conducts may be used for encryption as well, so it is not only for fixing mistakes.

3.1. Cryptographic Turbo Code

The MCTC is a proposed modification of the FSM that provides additional reliability and security for data while it is in transit. The MCTC system is composed of the transmitter, which has an MCTC encoder and decoder as well as an FSM, and the receiver, which contains an FSM. Before transferring the data through a transmission channel, the MCTC encoder first encodes and encrypts it using the private key that it has been given.

3.2. Cryptographic Turbo Code Encoder

Figure 3 depicts the architecture of the MCTC encoder that is being considered for use.

G(E) = \left[ 1, \ \frac{g_1(E)}{g_0(E)} \right]    (8)

g_0(E) = 1 + E^2 + E^3, \qquad g_1(E) = 1 + E + E^3    (9)

Figure 1 illustrates the shift-register implementation of the transfer function of each RSC encoder, where E denotes a unit delay in the shift register.
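The shift-register implementation of one RSC encoder with these generators can be sketched as follows, with g_0 providing the feedback taps and g_1 the feedforward taps; the state convention (s[0] = one delay, s[2] = three delays) is an illustrative choice.

```python
# RSC encoder sketch for g0(E) = 1 + E^2 + E^3 (feedback) and
# g1(E) = 1 + E + E^3 (feedforward); E is one shift-register delay.

def rsc_encode(bits):
    s = [0, 0, 0]                        # shift register: delays E, E^2, E^3
    parity = []
    for u in bits:
        fb = u ^ s[1] ^ s[2]             # feedback per g0: input + E^2 + E^3
        p = fb ^ s[0] ^ s[2]             # parity per g1: 1 + E + E^3
        parity.append(p)
        s = [fb, s[0], s[1]]             # shift the register by one step
    return parity

assert rsc_encode([0, 0, 0, 0]) == [0, 0, 0, 0]   # zero in, zero out
assert rsc_encode([1, 0, 0, 0]) == [1, 1, 1, 1]   # impulse response prefix
```

In the full Turbo encoder, two such RSC units run on the original and interleaved input sequences, as described below.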
An MCTC sits in between the two RSC encoders to provide separation. The input bit sequence A, which is delivered to the MCTC, is given to the first RSC encoder, which then encodes the sequence.
It is sufficient to send either the A or the B systematic bits, since each is a scrambled copy of the other. However, to prevent the data from being stolen, the MCTC will send out the systematic bits B. In addition, to prevent the confidential data from being guessed by an unauthorized user, the code word of the first encoder is interleaved with MCTC-2 to produce the sequence E. The MCTC's code word is provided as:
C_{CTC} = \{ B, E, F \}

The term “code rate” refers to the proportion of the number of information bits to the number of encoded bits. The number of information bits in the MCTC is M, while the number of parity bits is 2M. This combination yields a coding rate of:

R_{CTC} = \frac{M}{M + M + M} = \frac{1}{3}
On the other hand, this results in an increase in the system's bandwidth. An increase in the code rate and a decrease in the code's bandwidth cost may be achieved by puncturing, i.e., removing some of the parity bits. Puncturing degrades the code's error-correcting performance; however, a competent puncturing technique may keep this degradation to a minimum. In the field of cellular transmission, the rate-1/2 punctured FSM is used. The code word for the rate-1/2 MCTC is as shown in Figure 3.
Figure 3. Cryptographic Turbo Code (CTC) encoder using MCTC FSM.
When it is time to decode the information, the decoder will substitute false bits with a value of zero for the bits that have been punctured. This will allow the information to be decoded. When the RSC encoders have completed encoding all the bits in the input bit sequence, they must reset to an all-zero state before going on to the next input sequence. This ensures that the encoders are ready for the next input sequence. To reset all the shift registers to their original state, which is all zeroes, the input bit sequence is extended with what are known as termination bits, which are also called tail bits. Termination bits are introduced into a code at the end of each code word. This is performed to ensure that the error-correcting performance of the code does not deteriorate because of the introduction of the termination bits.

3.3. Decryption of The Cryptographic Turbo Code

Figure 4 provides a visual representation of the structure of a CTC decoder. The noisy received code word is C̄_CTC = {B̄, Ē, F̄}. The MCTC de-FSM first performs a de-interleaving operation on the received bits B̄ to produce Ā, using the keys km, ks, P, and G that are shared by the sender and the receiver. This process is repeated until the desired output is achieved.
The iterative procedure will continue until the estimates produced by the two decoders tend to converge on one another. The iterative decoding technique that is employed in the TC enhances the performance of the error correction with each subsequent iteration.

3.4. Finite State Machine with Reverse Data Hiding

For the i-th received bit, the decoder computes the logarithm of the Likelihood Ratio (LLR), L_{ext,1}(d_i), as

L_{ext,1}(d_i) = \log \frac{\Pr(d_i = +1 \mid \bar{R}_1)}{\Pr(d_i = -1 \mid \bar{R}_1)}    (10)

Equation (11) shows the second decoder's extrinsic LLR and the combined LLR used for data hiding with the Finite State Machine:

L_{ext,2}(d_i) = \log \frac{\Pr(d_i = +1 \mid \bar{R}_2)}{\Pr(d_i = -1 \mid \bar{R}_2)}, \qquad L_o(d_i) = L_{ext,1}(d_i) + L_{ext,2}(d_i) + L_a(d_i)    (11)

\hat{d}_i = 1 \ \text{if} \ L_o(d_i) > 0, \qquad \hat{d}_i = 0 \ \text{if} \ L_o(d_i) < 0    (12)

If the receiver applies the incorrect key to the FSM, the decoders receive incorrect bit sequences \bar{R}_1 and \bar{R}_2, thereby recovering incorrect information, as shown in Equation (12).
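The LLR combination and sign-based hard decision of Equations (10)–(12) can be sketched as follows; the LLR values themselves are illustrative stand-ins for the outputs of the two component decoders.

```python
# Hard-decision sketch: combine the two extrinsic LLRs with the a priori
# term L_a, then decide each bit from the sign of L_o(d_i).

def hard_decision(L_ext1, L_ext2, L_a):
    decided = []
    for l1, l2, la in zip(L_ext1, L_ext2, L_a):
        Lo = l1 + l2 + la            # combined log-likelihood ratio
        decided.append(1 if Lo > 0 else 0)
    return decided

# Illustrative LLR values from two component decoders plus a priori terms
bits = hard_decision([2.1, -0.4, 1.2], [1.3, -2.0, -0.3], [0.0, 0.1, -0.5])
assert bits == [1, 0, 1]
```

In the iterative decoder, each component's extrinsic output feeds the other decoder's a priori input until the two estimates converge, after which this hard decision is taken once.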
Y_v = X_u, \qquad v = \pi(u)    (13)

where \pi(u) is the mapping function.
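The interleaving map Y_v = X_u with v = π(u) can be sketched with a key-derived permutation; seeding a pseudo-random generator with the secret key is an illustrative stand-in for the paper's elliptic-curve-based FSM sequence.

```python
# Keyed interleaver sketch: pi is a permutation derived from a secret key,
# and Y_v = X_u with v = pi(u). Deinterleaving inverts the map.
import random

def keyed_permutation(n, key):
    pi = list(range(n))
    random.Random(key).shuffle(pi)   # key-dependent permutation (assumed PRNG)
    return pi

def interleave(X, pi):
    Y = [0] * len(X)
    for u, x in enumerate(X):
        Y[pi[u]] = x                 # Y_v = X_u, v = pi(u)
    return Y

def deinterleave(Y, pi):
    return [Y[pi[u]] for u in range(len(Y))]

X = [1, 0, 1, 1, 0, 0, 1, 0]
pi = keyed_permutation(len(X), key=0xC0DE)
assert deinterleave(interleave(X, pi), pi) == X
```

A receiver holding a different key derives a different permutation and therefore recovers a scrambled sequence, which is the key-sensitivity property exploited below.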
A secret number is used in conjunction with a complicated procedure on a computer to encrypt and decode data. To prevent unauthorized parties from understanding sensitive information sent over public networks, it is necessary to encrypt the message before sending it. Only the receiver to whom they are addressed should be able to decode them. A communication encrypted with a key should be decipherable only by the owner of the same key.
In traditional TCs, the input data are interleaved with a well-known permutation pattern intended to increase coding efficiency, which makes it possible to employ a random FSM. However, the random FSM requires additional memory to store the interleaving pattern and wider bandwidth to send it to the receiver (Table 2).
Developing a key-based cryptographic FSM that can generate a highly random output on the fly is a difficult task, and this is one of the primary motivations for the present work. Accordingly, a cryptographic FSM driven by a secret key, the Elliptic Curve Cryptographic FSM (MCTC), is proposed to protect the pattern from unauthorized access. The interleaving sequence produced by the MCTC is highly sensitive to its secret key: when the key is altered, the interleaving pattern changes drastically, which makes it impossible for an unauthorized user to deduce the order in which the inputs are sent.
Elliptic curve cryptography is a standard encryption procedure that offers a high degree of protection to data sent from a sender to a receiver over a public network. Elliptic curve arithmetic, the basis of elliptic curve encryption, is performed on the set of points satisfying the curve equation. The positive integers a and b define the points of the elliptic curve Ep(a, b) over a finite field Fp through the curve equation
y² = (x³ + ax + b) mod p.
The domain parameters of the elliptic curve, a, b, and p, describe the points (x, y) on the curve in such a manner that both x and y lie in the finite field Fp. The security of the cryptosystem is ensured by choosing p to be a large prime integer, which produces an elliptic curve of high order containing many points and thereby satisfies the requirements for safety.
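Membership of a point on Ep(a, b) reduces to checking the curve equation directly. The toy curve E23(1,1) below is purely illustrative and far too small for real security.

```python
def on_curve(x, y, a, b, p):
    """True when (x, y) satisfies y^2 = (x^3 + a*x + b) mod p."""
    return (y * y) % p == (x * x * x + a * x + b) % p

# Toy curve E23(1,1): y^2 = x^3 + x + 1 (mod 23)
assert on_curve(3, 10, 1, 1, 23)       # (3, 10) lies on this curve
assert not on_curve(3, 11, 1, 1, 23)   # (3, 11) does not
```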
The first step in implementing the proposed MCTC is the creation of a secret lookup table, followed by the generation of the interleaving locations. Let the point G on the elliptic curve Ep(a, b) of prime order be a concealed generator. The secret lookup table is computed as
Ps = (xs, ys) = (s × G), s = 0, 1, 2, 3, …, N,
where Ps is a point on the elliptic curve, and
Qd = (km × Pd), d = 0, 1, 2, 3, …, N,
where Qd is the collection of elliptic points concealed at index d of the lookup table.
Altering even a single bit of the key km causes an entirely new lookup table to be generated, as shown in Table 3. This protects the private lookup table from being read by an unauthorized third party. The transmitter and the receiver share two hidden parameters, G and km, known only to them, which preserve the confidentiality of the lookup table. In addition, even if an adversary succeeds in learning one of the secret parameters, G or km, the ECDLP underlying elliptic curve arithmetic makes it exceedingly difficult to uncover the other.
The MCTC creates an interleaved bit location v for each input bit location u that it receives. The interleaved bit location v serves as the index of the secret lookup table and is associated with the elliptic point Pu, determined by
Pu = (u × Q),  u = 1, 2, …, N − 1,
and Q = (ks × P),
where ks is the secret key and P is a secret elliptic curve point given to the MCTC.
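The scalar multiplications Ps = s × G and Q = ks × P above can be sketched with textbook double-and-add arithmetic. The curve E23(1,1), the generator (3, 10), the point (9, 7), and the tiny key ks = 5 are illustrative assumptions only, not the paper's parameters.

```python
def inv_mod(k, p):
    """Modular inverse for prime p (Fermat's little theorem)."""
    return pow(k, p - 2, p)

def ec_add(P, Q, a, p):
    """Point addition on y^2 = x^3 + a*x + b over F_p; None is the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # P + (-P) = infinity
    if P == Q:
        m = (3 * x1 * x1 + a) * inv_mod(2 * y1, p) % p
    else:
        m = (y2 - y1) * inv_mod((x2 - x1) % p, p) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def ec_mul(k, P, a, p):
    """Double-and-add scalar multiplication k*P."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R

# Toy parameters: curve E23(1,1) with generator G = (3, 10)
a, p, G = 1, 23, (3, 10)
table = {s: ec_mul(s, G, a, p) for s in range(1, 10)}  # lookup table P_s = s*G
Q = ec_mul(5, (9, 7), a, p)                            # Q = k_s * P, with k_s = 5
```

In a real deployment the curve would be of large prime order; here the point is only to show how the table and Q follow from repeated point addition.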
Table 3 illustrates the generation of interleaved bit locations for the mask key km = 2,853,234. Cases 1 and 2 show the interleaved bit positions for the same secret key ks but different secret points P: even though km and ks are unchanged, the interleaving sequence is completely different because of the change in P. Cases 2 and 3 show that even though P remains unchanged, a single-bit change in the secret key ks produces a totally new interleaving sequence. According to the findings, a user must possess practically all the secret key parameters to recover the right interleaving pattern; modifying even one parameter generates an entirely new sequence.
Every FSM has a matching de-FSM that operates on the interleaved sequence to reconstruct the sequence in its natural order. To de-interleave the received sequence, the receiver must generate the same interleaving sequence as the sender, which it can do once the keys km, ks, G, and P have been shared secretly. As a result, there is no longer a need for the additional bandwidth that would otherwise be required to share the whole table between the sender and the receiver.
In a secure system, an adversary is incapable, or at best exceedingly unlikely, to regain access to the original data. The ability to generate and transmit random data is therefore one of the most important criteria for a secure system: since random output is hard to anticipate, it is difficult for an adversary to attack, and the more random the generated output, the more unpredictable the data and the safer the system.

4. Results and Analysis of Proposed Work

The primary purpose of the proposed MCTC is to provide a shuffling pattern that is unpredictable and fully random. Several randomization tests can be used to evaluate how much unpredictability the MCTC injects into its output. The miscellaneous volume of the SIPI Database consists of 44 images, 16 color and 28 monochrome: 14 of size 256 × 256, 26 of size 512 × 512, and 4 of size 1024 × 1024.
The randomness test is an empirical procedure comprising a set of tests designed to evaluate how random a given sequence is. The statistical characteristics of the generated random numbers can be checked with several statistical tests, whose results indicate the presence or absence of a pattern in the sequence. A sequence that displays a pattern is clearly not random, and is therefore predictable and susceptible to attack by an adversary.
The amplitude, direction, and form of the relationship between the input and output can be seen in an MCTC scatterplot. A high correlation coefficient (CC) indicates a strong relationship; a pattern with no obvious center and a practically null CC is the hallmark of a random Finite State Machine (FSM). Dispersion, on the other hand, is a statistical metric that measures how spread out a distribution is: the higher the dispersion value, the more spread apart the interleaved bit locations. The industry-standard statistical test suite NIST FIPS 140-2 is used to further verify the proposed MCTC's statistical characteristics. The different tests are explained in detail below.
A scatter plot may be used to illustrate the unpredictability of the output sequence. Ideally, the points in the scatter plot are distributed randomly and uniformly. If the scatter plot displays a predictable pattern, the parameters used to construct the sequence may be uncovered, leaving the cryptosystem vulnerable to attack.
As the figure shows, the bit locations in the input and the bit positions in the output have no discernible link. Altering the value of ks also results in a striking transformation of the pattern. The scatterplot thus confirms that the random numbers generated are unique to each user and founded on that user's MCTC keys. The relationship between the MCTC's input and output is highly sensitive to its key configuration, so an opponent who does not possess the relevant keys cannot discern the connection between input and output bits. Here, the test was conducted between two interleaved bit location sequences, vi and vi+1, produced by the MCTC using the ith and (i + 1)th secret keys and an interleaving length of 20,000 bits. The interleaving sequence was generated using an elliptic curve with the parameters E19991(3,1), km = 49,853, G = (52,17), and P = (62,2).
Because the points are evenly distributed across the graph, no relevant conclusions can be drawn about the nature of the cipher or the relationship between the sequences. This indicates that the association is weak; the low correlation value makes it exceedingly difficult for an unauthorized user to anticipate the interleaving pattern (Table 4).
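The near-zero correlation can be checked with a plain Pearson coefficient. The two sequences below are random permutations standing in for vi and vi+1, since the actual MCTC sequences are not reproduced here.

```python
import random

def pearson_cc(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rng = random.Random(0)
v_i = rng.sample(range(20000), 20000)    # stand-in interleaving sequence for key i
v_j = rng.sample(range(20000), 20000)    # stand-in sequence for key i + 1
```

For two independent 20,000-element permutations, the CC magnitude is expected to be on the order of 1/√20000 ≈ 0.007, i.e., practically null.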
The TC frequently uses the Quadratic Permutation Polynomial (QPP) FSM, the random FSM (RI), and the deterministic FSM (DI). The randomness of the MCTC, QPP, RI, and DI outputs was assessed. Figure 4 compares the scatter plots of the RI, QPP, DI, and MCTC. A visible structure appears in the QPP and DI scatter plots, so their output sequences are predictable and open to attack. The RI and the proposed MCTC, by contrast, exhibit a large degree of unpredictability in the pattern, while an authorized user can still easily determine the interleaved bit sequence from the original bit sequence (Figure 5).
To calculate the normalized dispersion and spreading factor of the proposed MCTC, simulations were run using input bit sequences of lengths 133, 499, and 2019.
As shown in Table 5, the RI dispersion value is around 0.8. Because the proposed MCTC has a dispersion value close to that of the RI, it can serve as an interleaver that is both effective and unpredictable. The DI and the QPP, by contrast, exhibit low dispersion to varying degrees, a direct consequence of the predictable interleaving patterns they generate (Table 6).

4.1. Security Checks for Cryptographic Module According to FIPS 140-2

FIPS 140-2 is the Federal Information Processing Standard for cryptographic modules. It carries out four statistical tests on a binary sequence of 20,000 bits: the mono-bit test, the poker test, the run test, and the long-run test. A minimum of one hundred binary random sequences is required to carry out the test properly. In each test, the value of the specified statistical parameter is calculated for the given binary sequence and compared with the range of acceptable values established for that test; if it falls within the acceptable range, the test is passed. A random binary sequence is eligible for cryptographic use only if it passes all four tests; otherwise it is not regarded as suitable. The tests included in FIPS 140-2 are as follows.
  • Frequency/Monobit test:
In a randomly generated sequence of ones and zeros, approximately half of the bits should be ones and the other half zeros. The frequency test verifies that the numbers of ones and zeros constituting the sequence do not significantly differ from one another.
2.
Poker test:
The bitstream to be analyzed is divided into 5000 non-overlapping segments of four bits each, with F(i) denoting the count of occurrences of each of the sixteen possible four-bit values. The test statistic is
$$T_p = \frac{16}{5000}\sum_{i=1}^{16} F(i)^2 - 5000$$
3.
Run test:
A run is an unbroken string of identical bits. The run test tallies the numbers of runs of ones and of zeros of various lengths present in the input sequence. The value of the test parameter Tr is determined by the numbers of ones and zeros present in the bitstream.
4.
Long run test:
The test parameter Tl counts the length of the longest unbroken run of consecutive ones or zeros; a run of length 26 or more is a long run. If the bitstream contains no such run, the test is passed.
The test parameters for each of the four tests were estimated from around one hundred binary sequences generated by the proposed MCTC. Each binary sequence of length 20,000 was constructed with its own private key by converting each random number v produced by the MCTC into a bit bi:
bi = 1, if v > N/2
  = 0, if v < N/2
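Three of the four FIPS 140-2 checks can be sketched directly from the standard's thresholds; the run test needs the standard's full per-length table, so it is omitted here, and the random bits below merely stand in for MCTC output.

```python
import random

def fips_140_2_subset(bits):
    """Monobit, poker, and long-run tests on a 20,000-bit sequence
    (acceptance thresholds taken from FIPS 140-2)."""
    assert len(bits) == 20000
    ones = sum(bits)
    monobit_ok = 9725 < ones < 10275
    # Poker test: 5000 non-overlapping 4-bit segments
    counts = [0] * 16
    for i in range(0, 20000, 4):
        counts[8 * bits[i] + 4 * bits[i + 1] + 2 * bits[i + 2] + bits[i + 3]] += 1
    t_p = (16 / 5000) * sum(c * c for c in counts) - 5000
    poker_ok = 2.16 < t_p < 46.17
    # Long-run test: any run of 26 or more identical bits fails
    longest, run = 1, 1
    for prev, cur in zip(bits, bits[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    long_run_ok = longest < 26
    return monobit_ok, poker_ok, long_run_ok

rng = random.Random(7)
bits = [rng.randint(0, 1) for _ in range(20000)]
```

A strictly alternating stream such as 0101… passes the monobit and long-run checks but fails the poker test, which is exactly the kind of patterned sequence these tests are designed to reject.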
Table 7 shows the results for three of the examples, demonstrating that the test value lies within the acceptable interval for each of the four statistical tests required by FIPS 140-2.

4.2. Periodicity Analysis of Elliptic Curve Cryptographic FSM

The MCTC uses a cryptographic FSM that randomly permutes the input data along its entire length to provide the highest possible level of security. After an interleaving period of that length, the MCTC repeats the permutation sequence in each subsequent cycle. A longer period increases both the number of feasible combinations and the degree of unpredictability of the generated bits. The number of bits encoded by the CTC equals the frame length, denoted L. For each frame to be interleaved, the frame length must be at least as long as the period of the FSM. When N is large enough, the interleaved bits are sufficiently random, the correlation between code words decreases, and the error-correction performance of the CTC is enhanced. When N is quite low, however, the interleaved bits are far less random. Large periods give the interleaved bits better statistical properties, which makes it much more difficult for an adversary to determine the specific interleaving pattern.
The period of an interleaving sequence is the minimum amount of time that must pass before the sequence begins to repeat itself. A sequence of numbers has period N, where N is a positive integer, if
vi = vi+N, for i ≥ 0,
where i denotes a particular location within the sequence v. The periodicity of the proposed MCTC is largely governed by several parameters, the most significant being the number of points on the elliptic curve generated by the generator point G. An elliptic curve of prime order defines a cyclic group, and constructing the cyclic group produces all the points on the curve, starting from a single point chosen from the group. Figure 6a–c each show a cyclic group generated by an elliptic curve of order 19 with the generator points (10,11), (7,6), and (16,4), respectively; the elliptic curve corresponding to this cyclic group is denoted E17(2,2). As can be verified by inspection, the equation generates all the points on the elliptic curve even when the value of G is altered; the value of G determines only the specific sequence in which the points are produced.
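The period condition vi = vi+N can be tested mechanically; a brute-force sketch (fine for short sequences, though quadratic in the worst case):

```python
def period(seq):
    """Smallest N > 0 such that seq[i] == seq[i + N] for all valid i."""
    n = len(seq)
    for N in range(1, n):
        if all(seq[i] == seq[i + N] for i in range(n - N)):
            return N
    return n

assert period([2, 5, 1, 2, 5, 1, 2, 5]) == 3   # repeats every three elements
```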

4.3. Bit Error Rate of Cryptographic Turbo Code

Bits sent across a digital communication system risk corruption if the channel through which they travel is impaired in some way, such as by noise, distortion, interference, or fading. A reliable communication system must be able to recognize and correct such errors. The bit error rate (BER) is the statistic used to quantify how well a communication system corrects errors: it is the number of bits received in error divided by the total number of bits transmitted (Table 8).
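The BER definition above reduces to a one-line count:

```python
def bit_error_rate(sent, received):
    """Number of received bits differing from the transmitted bits,
    divided by the total number of bits transmitted."""
    errors = sum(s != r for s, r in zip(sent, received))
    return errors / len(sent)

# Two of four bits flipped -> BER of 0.5
assert bit_error_rate([1, 0, 1, 1], [1, 1, 1, 0]) == 0.5
```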

4.4. Proposed MCTC

The MCTC is responsible for both the encoding and encryption of the bitstream. With a bit error rate of around 10⁻⁵ at a Signal-to-Noise Ratio (SNR) of 0.7 dB, the conventional TC is an efficient error-correcting code; however, it provides no encryption during transmission. Given this, the authors offer the MCTC to secure data transmission in addition to correcting errors. The proposed MCTC employs an Elliptic Curve Cryptographic Finite State Machine to construct an unpredictable, user-specific, and secure interleaving pattern, created from an elliptic curve and a secret key. The goal is to achieve error-correction performance comparable to that of the TC while the proposed MCTC successfully de-correlates the code words formed by the CTC. Data sent by the MCTC are protected by the random shuffling pattern, and the key itself is kept secret.
Figure 7 compares the bit error rate (BER) versus the Signal-to-Noise Ratio (SNR) for a non-encrypted system, an MCTC system, a TC system, and an MCTC-TC system. The findings show that the MCTC alone is less effective at error correction than transmission without encoding or encryption, because the MCTC often spreads errors to neighboring bits. More than half of the bits must be right for the decryption to succeed, so even one wrong bit in the input results in failure. Thus, the MCTC requires a second error-correcting code to ensure that the transported data are legitimate.
The BER of the MCTC-TC is much lower than that of the MCTC alone: its error-correction performance far exceeds that of the MCTC because the TC fixes the transmission errors. Even so, attempts to make the MCTC-TC produce a BER on par with a plain TC have so far met with little success, because the MCTC is vulnerable to the avalanche effect caused by bits whose faults the TC has not corrected; if the TC fails to fix even a single bit, at least half of the bits produced by the MCTC will be faulty. In addition, the TC's BER performance is degraded because the MCTC has a block length of 128 bits, which limits the TC's frame length. The TC and the proposed MCTC have quite similar BER performances; the MCTC's secret key produces private, de-correlated codewords, which is why it is preferred.
The BER plot of the MCTC is shown in Figure 7: compared with the TC, the MCTC loses just 0.2 dB of coding gain at a BER of 10⁻⁴, while gaining 0.8 dB over the MCTC-TC. Because the MCTC's random shuffling is controlled by a secret key, the privacy of any sensitive data it transmits is also ensured.
The code's overall performance depends heavily on how well the MCTC algorithm is implemented. Its error-correction performance is affected by several factors: (i) the MCTC's architecture; (ii) the number of decoding iterations conducted by the CTC decoder; and (iii) the CTC's frame length.
We analyze how adjusting these parameters impacts the CTC's performance. The bit error rate (BER) is calculated by encoding a random bit sequence of 65,536 bits at rate 1/2 with the CTC, sending it over an AWGN channel, and then measuring the number of errors in the decoded version over eight iterations.
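An uncoded BPSK-over-AWGN baseline for such a BER measurement can be sketched as below. This omits the CTC encoder and decoder entirely and shows only the channel-and-slicer part of the experiment, with illustrative parameters.

```python
import math
import random

def bpsk_awgn_ber(n_bits, snr_db, seed=0):
    """Modulate random bits with BPSK, add Gaussian noise at the given
    SNR (dB), hard-slice, and return the measured bit error rate."""
    rng = random.Random(seed)
    snr_linear = 10 ** (snr_db / 10)
    sigma = math.sqrt(1 / (2 * snr_linear))   # noise std for unit-energy symbols
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        symbol = 1.0 if bit else -1.0
        received = symbol + rng.gauss(0.0, sigma)
        errors += int((received > 0) != (bit == 1))
    return errors / n_bits
```

At 0 dB the uncoded BER sits near the theoretical Q(√2) ≈ 0.079, which is the gap a coded system such as the CTC is meant to close.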
(i)
Effect of the design of the MCTC
The CTC analysis recommended the MCTC for error correction. Figure 8 shows the BER versus SNR plots for each interleaver. The CTC with the MCTC reaches a BER of 10⁻⁴ at an SNR of 0.4 dB, 0.2 dB better than the CTC with the RI. Like the RI, the MCTC interleaves randomly, with a dispersion of 0.8. Deterministic FSMs such as the QPPI and the SKI, by contrast, use FSM spread to guarantee that permuted bits are no more than a specified distance apart. A quadratic permutation polynomial generates interleaved bit positions with a broad spread, giving the QPPI error-correcting performance equivalent to that of the RI; error correction is achieved by transferring error patterns across decoders. Nevertheless, the spread between the SKI's interleaved bit positions depends on its length and key, so a large spread is not always guaranteed. The QPPI and SKI produce interleaved bit locations that can take several values, but their patterns are uniformly spaced, making them simple to predict and prone to security breaches. The correlations arising from the regularity of the interleaved pattern give rise to an error floor in the BER curve.
(ii)
Effect of the number of decoding iterations of the CTC decoder
CTC decoding is iterative, exchanging data between the protocol's two decoder modules. Figure 9 shows the BER of the CTC for decoder iterations one to eight. Increasing the number of iterations improves the BER versus SNR curves: more rounds mean more communication between the decoders, which helps the CTC correct faults.
The TC's error-correcting efficiency rises sharply from one to two iterations, since the decoders exchange a large amount of information; the receiver thereby obtains relevant input bits to assess the situation accurately. After a few cycles, the decoders have learned which input bits to anticipate, and further iterations no longer enhance information sharing between them. As a result, the BER performance shows little further improvement beyond the eighth decoding iteration.
(iii)
Effect of frame length of the CTC
TC findings showed spectacular error-correcting performance [26], with conclusions drawn for frame lengths greater than 65,536 bits. Frame lengths of around 1000 bits are required for video transmission, however, and far shorter ones may suffice for uses such as voice transmission. Figure 10 shows how well the CTC corrects errors over a variety of frame lengths: 256, 508, 1020, 2048, and 65,536 bits. Increasing the frame length improves the bit error rate performance, as seen in Figure 10, particularly at higher Signal-to-Noise Ratios: a longer frame allows the CTC to generate more uncorrelated codewords and to decode efficiently thanks to the increased randomness of the interleaved sequence.

4.5. Effect of Signal-to-Noise Ratio on Image Quality

As a result of channel interference, the transmitted data are garbled. Severe bit corruption may sharply reduce the quality of the restored picture; moreover, if many bits are corrupted, the secret information may be recovered incorrectly. Hence, we examine the impact of channel noise on the picture quality achieved with the CTC by plotting the CC and the PSNR against the Signal-to-Noise Ratio (SNR), varied from 0.0 to 1.0 dB. Figure 11a presents the results of plotting PSNR against SNR and CC against SNR. A picture of good quality has a CC of at least 0.8 and a PSNR of at least 50 dB. Figure 11b shows that at SNR values larger than 0.5 dB, the suggested CTC yields a CC of 1 and a PSNR of more than 50 dB. The research shows that the CTC can restore a high-quality picture from one distorted by noise of the same strength as the original signal (Table 9, Figure 12).
Table 10 provides a detailed breakdown of image quality characteristics for legitimate and illegitimate photo recovery. According to the findings, a legitimate user can correctly recover the image with a correlation coefficient of nearly 1 and a Peak Signal-to-Noise Ratio (PSNR) of almost infinity, whereas a false user achieves only a correlation coefficient of nearly 0 and a PSNR of less than 8 dB. The results ensure that an ordinary user can successfully recover the image, while a malevolent user is left with a severely altered version of the original, with a correlation coefficient of approximately 0 and a PSNR of roughly 6.53 dB.
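The PSNR figure used throughout this comparison follows directly from the mean squared error; a minimal sketch with made-up pixel values:

```python
import math

def psnr(orig, recon, peak=255.0):
    """Peak Signal-to-Noise Ratio between two equal-length pixel sequences;
    identical sequences give an infinite PSNR."""
    mse = sum((o - r) ** 2 for o, r in zip(orig, recon)) / len(orig)
    return float('inf') if mse == 0 else 10 * math.log10(peak * peak / mse)

pixels = [10, 50, 200, 120, 90, 30]
assert psnr(pixels, pixels) == float('inf')   # perfect recovery
```

A uniform shift of two gray levels yields an MSE of 4 and hence a PSNR above 40 dB, i.e., a visually faithful reconstruction, which is why legitimate recovery reports PSNR near infinity while an attacker's near-random output sits below 8 dB.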

5. Conclusions

Four factors are crucial to the success of the proposed methods: key length, key order, block size, and intermediate cipher generation. We use a larger key than the methods proposed in previous studies, with a key length of 128–512 bits, a key matrix order of 4 × 4 to 6 × 6, and block sizes of 16 to 36 bytes. Compared to existing solutions, the proposed approaches are superior since their encryption and decryption times scale linearly with the key size. Guessing the Plaintext takes far longer and becomes almost impossible without knowing the intermediate cipher. Based on the comparative analysis provided in the previous chapter, the Plaintext can be concealed using the proposed techniques, and recovering it without knowledge of the intermediate cipher and key is incredibly challenging due to the large key size, the high order of the key matrix, and the large block size. Future work on image encryption applications may benefit from the frequent binary-to-decimal translation, which increases the strength of the technique. Over time, we aim to extend this algorithm's protection from text to audio, video, and still photos, and other statistical methods will be used to assess the keys' degree of unpredictability.

6. Limitations and Future Work

The suggested methods' high operational complexity yields two advantages: increased security and less time spent on encryption and decryption. Using an intermediate cipher makes a successful cryptanalysis far less likely. As shown above, the suggested algorithms not only improve security but provide more robust security than current methods. They have also been shown to offer higher throughput than other encryption methods, and the reduced likelihood of various attacks makes them demonstrably safer.
We combine mathematical techniques with those of the finite field GF(2⁷) to arrive at our conclusions; prospective research could extend these findings to GF(2ⁿ) and GF(3ⁿ). Future work may expand these parameters to provide even more secure techniques; for now, we utilize a key length of 128 bits, a key matrix order of 4–6, and a Plaintext block size of 16–36 bytes. This work establishes a framework for future study by developing a novel method for Laplace transform-based data encryption and decryption utilizing polynomials of degree 8 or higher; calculations grow more complex, and the key becomes longer and more obscure, when a high-degree polynomial is used. Similar studies may in the future be conducted with other integral transforms. Key matrices of orders 3 × 3 to 4 × 4 and Plaintext chunks of 9 to 16 bytes are also used in the suggested techniques; such details might be generalized in further research, allowing for the development of more sophisticated algorithms. The proposed methods make use of a Finite State Machine, a recurrence matrix, and the LU decomposition of a randomly selected non-singular matrix, together with key matrix orders of 4 × 4 and 5 × 5, key sizes of 128 and 200 bits, and block sizes of 16 and 25 bytes. By increasing the matrix size and the number of rounds, further research may make it possible to send more data safely in parallel, yielding even more secure algorithms that can keep sensitive information hidden. The suggested works also provide a novel encryption and decryption approach based on the Genetic Algorithm; combining a 128-bit key, a 4 × 4 key matrix, and a Plaintext block size of 16 bytes makes the key exceedingly difficult to guess, forcing an attacker to spend far more time theorizing about the Plaintext.

Author Contributions

Validation, R.R.B. and Z.B.; Investigation, M.M.H.; Writing–original draft, A.A.; Writing–review & editing, S.M. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through a research program under grant number R. G. P. 2/109/43.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Acknowledgments

The authors extend their gratitude to the Deanship of Scientific Research at King Khalid University for funding this work through the research groups program under grant number R. G. P. 2/109/43.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ray, P.K.; Dila, G.K.; Patel, B.K. Application of some recurrence relations to cryptography using finite state machine. Int. J. Comput. Sci. Electron. Eng. 2014, 2, 220–223. [Google Scholar]
  2. Almakhour, M.; Sliman, L.; Samhat, A.E.; Mellouk, A. A formal verification approach for composite smart contracts security using FSM. J. King Saud Univ. Comput. Inf. Sci. 2023, 35, 70–86. [Google Scholar] [CrossRef]
  3. Alawida, M.; Teh, J.S.; Alshoura, W.H. A New Image Encryption Algorithm Based on DNA State Machine for UAV Data Encryption. Drones 2023, 7, 38. [Google Scholar] [CrossRef]
  4. He, P.; Tu, Y.; Bao, T.; Sousa, L.; Xie, J. COPMA: Compact and Optimized Polynomial Multiplier Accelerator for High-Performance Implementation of LWR-Based PQC. IEEE Trans. VLSI Syst. 2023, 31, 596–600. [Google Scholar] [CrossRef]
  5. Polese, S. Strength Evaluation of Cryptographic Primitives to Linear, Differential And Algebraic Attacks. Ph.D. Thesis, Department of Computer Science Giovanni Degli Antoni, University of Milan, Milan, Italy, 2022. [Google Scholar]
  6. Rashid, M.; Sonbul, O.S.; Zia, M.Y.I.; Kafi, N.; Sinky, M.H.; Arif, M. Large Field-Size Elliptic Curve Processor for Area-Constrained Applications. Appl. Sci. 2023, 13, 1240. [Google Scholar] [CrossRef]
  7. Hasija, T.; Kaur, A.; Ramkumar, K.R.; Sharma, S.; Mittal, S.; Singh, B. A Survey on Performance Analysis of Different Architectures of AES Algorithm on FPGA. Mod. Electr. Dev. Commun. Syst. 2023, 948, 39–54. [Google Scholar]
  8. Kumar, A.; Singh, P.; Patro, K.A.K.; Acharya, B. High-throughput and area-efficient architectures for image encryption using PRINCE cipher. Integration 2023, 90, 224–235. [Google Scholar] [CrossRef]
  9. Alharbi, A.R.; Tariq, H.; Aljaedi, A.; Aljuhni, A. Latency-Aware Accelerator of SIMECK Lightweight Block Cipher. Appl. Sci. 2023, 13, 161. [Google Scholar] [CrossRef]
  10. Abebe, A.T. Lightweight and Efficient Architecture for AES Algorithm based on FPGA. ISEL Acad. J. Electr. Telecommun. Comp. 2023, 8, 8. [Google Scholar]
  11. Nair, M.; Sadhukhan, R.; Mukhopadhyay, D. Generating Secure Hardware using ChatGPT Resistant to CWEs, Cryptol. ePrint Arch. 2023. (Preprint). Available online: https://eprint.iacr.org/2023/212 (accessed on 20 February 2023).
  12. Alhomoud, A.; Jamal, S.S.; Altowaijri, S.M.; Ayari, M.; Alharbi, A.R.; Aljaedi, A. Large Field-Size Throughput/Area Accelerator for Elliptic-Curve Point Multiplication on FPGA. Appl. Sci. 2023, 13, 869. [Google Scholar] [CrossRef]
  13. Huang, X.; Dong, Y.; Ye, G.; Shi, Y. Meaningful image encryption algorithm based on compressive sensing and integer wavelet transform. Front. Comp. Sci. 2023, 17, 173804. [Google Scholar] [CrossRef]
  14. Kamil Khudhair, S.; Sahu, M.; KR, R.; Sahu, A.K. Secure Reversible Data Hiding Using Block-Wise Histogram Shifting. Electronics 2023, 12, 1222. [Google Scholar] [CrossRef]
  15. Shi, Y.Q.; Li, X.; Zhang, X.; Wu, H.T.; Ma, B. Reversible data hiding: Advances in the past two decades. IEEE Access 2016, 4, 3210–3237. [Google Scholar] [CrossRef]
  16. Shi, Y.Q. Reversible Data Hiding. In Digital Watermarking: Third International Workshop, Proceedings of the IWDW 2004, Seoul, Republic of Korea, 30 October–1 November 2004; Cox, I.J., Kalker, T., Lee, H.K., Eds.; Springer: Berlin/Heidelberg, Germany, 2005. [Google Scholar]
  17. Zhang, X. Reversible data hiding in encrypted image. IEEE Signal Proc. Lett. 2011, 18, 255–258. [Google Scholar] [CrossRef]
  18. Peng, F.; Li, X.; Yang, B. Improved PVO-based reversible data hiding. Digit. Signal Proc. 2014, 25, 255–265. [Google Scholar] [CrossRef]
  19. Dhanda, S.S.; Brahmjit, S.; Poonam, J. Lightweight cryptography: A solution to secure IoT. Wirel. Pers. Commun. 2020, 112, 1947–1980. [Google Scholar] [CrossRef]
  20. Tseng, H.W.; Hsieh, C.P. Prediction-based reversible data hiding. Infor. Sci. 2009, 179, 2460–2469. [Google Scholar] [CrossRef]
  21. Zhang, X. Reversible data hiding with optimal value transfer. IEEE Trans. Multi. 2012, 15, 316–325. [Google Scholar] [CrossRef]
  22. Alshudukhi, K.S.; Khemakhem, M.A.; Eassa, F.E.; Jambi, K.M. An Interoperable Blockchain Security Frameworks Based on Microservices and Smart Contract in IoT Environment. Electronics 2023, 12, 776. [Google Scholar] [CrossRef]
  23. Tai, W.L.; Yeh, C.M.; Chang, C.C. Reversible data hiding based on histogram modification of pixel differences. IEEE Trans. Circ. Syst. Vid. Technol. 2009, 19, 906–910. [Google Scholar]
  24. Li, X.; Li, B.; Yang, B.; Zeng, T. General framework to histogram-shifting-based reversible data hiding. IEEE Trans. Image Proc. 2013, 22, 2181–2191. [Google Scholar] [CrossRef]
  25. Kalker, T.O.N.; Willems, F.M. Capacity bounds and constructions for reversible data-hiding. In Proceedings of the 2002 14th International Conference on Digital Signal Processing Proceedings, DSP 2002, Santorini, Greece, 1–3 July 2002; IEEE: New York, NY, USA, 2003. [Google Scholar]
  26. Wang, X.; Zhao, M.; Feng, S.; Chen, X. An image encryption scheme using bit-plane cross-diffusion and spatiotemporal chaos system with nonlinear perturbation. Soft Comput. 2023, 27, 1–18. [Google Scholar] [CrossRef]
  27. Wu, S.T. A Key-Based Multi-Mode Clock-Controlled Stream Cipher for Real-Time Secure Communications of IoT. Electronics 2023, 12, 1076. [Google Scholar] [CrossRef]
  28. Makhloufi, A.E.; Adib, S.E.; Raissouni, N. Highly Efficient Security Level Implementation in Radiation-Tolerance FPGA Using a Combination of AES Algorithm and Hamming Code: LST-SW Case. Int. J. Electr. Electron. Eng. Telecommun. 2022, 1–12. [Google Scholar]
  29. Almuzaini, K.K.; Shalini, S.; Sindhu, P.M.; Sandeep, K.; Stephen, O.; Prashant, K.S.; Piyush, K.P.; Piyush, K.S. Design and analysis of energy aware interior gateway routing algorithm with particle swarm optimization. Int. J. Commun. Syst. 2023, e5466. [Google Scholar] [CrossRef]
  30. Shukla, P.K.; Amer, A.; Piyush, K.P.; Adel, R.A.; Sajjad, S. AES Based White Box Cryptography in Digital Signature Verification. Sensors 2022, 23, 9444. [Google Scholar] [CrossRef]
Figure 1. Basic Data Hiding Technique.
Figure 2. Flowchart of the proposed work.
Figure 4. Cryptographic Turbo Code (CTC) decoder using MCTC de-FSM.
Figure 5. The relationship between the original bit locations and the interleaved bit locations for both the present and planned FSM.
Figure 6. Comparison of computational cost of MCTC and proposed MCTC.
Figure 7. BER performance of the uncoded MCTC, MCTC with TC, and proposed MCTC and TC.
Figure 8. Effect of FSM on the BER performance of CTC.
Figure 9. Complexity of iterative decoding grows with the number of iterations.
Figure 10. Effect of frame length on the BER performance of CTC.
Figure 11. Image quality of image recovered by proposed CTC: (a) Correlation Coefficient versus SNR; (b) PSNR versus SNR.
Figure 12. Stream cypher images with their corresponding histograms, from left to right: (a) The original picture; (b) The decrypted image by an authorized user; (c) The encrypted image; (d) The histogram of the original image; (e) The histogram of the decrypted image; (f) The histogram of the encrypted image.
Table 1. Comparison Table of Existing Methodology.
Reference | Methodology | Demerits
[18] | DWT-based watermarking algorithm | The watermark withstands image pre-processing attacks such as median filtering and tolerates noise attacks such as the addition of Gaussian noise and salt-and-pepper noise.
[19] | Four hashes of IWT coefficients of different subbands | Different parts of the authentication watermark are embedded into different subbands by replacing LSBs.
[20] | Proposed technique that shifts the histogram of the difference image | Generated from the original cover image, and modifies pixel values to obtain embedding capacity.
[21] | Block-wise embedding of watermark data | Accuracy is less for block-wise detection of forgery.
[22] | RDH-ED scheme | Less accuracy.
[23] | RDH encryption algorithm | Low accuracy rate and high error rate.
[24] | Image encryption based on block-based confusion and multiple levels of diffusion | Different image data types, NIST compliance, and robustness are not considered.
[25] | Encrypting images using the chaotic PWLCM system and the Game of Life permutation | Non-compliance with pseudorandom chaotic sequences; lack of resilience to data loss or noise.
[26] | A novel picture encryption technique based on a hyper-chaotic map | The requirement to raise the security level and resilience against loss of data and noise is not considered.
[27] | Selective picture encryption that works well | Time savings and noise resistance are both improved, but security must be tightened. Lack of adherence to NIST standards.
[28] | The use of a chaotic map and DNA complementary principles to create a noise-resistant, selective picture encryption for grayscale photos | Possibility of shoring up security by taking any and all photos into account.
Table 2. Algorithm of Encryption and Extraction in Proposed Work.
Algorithm for Encryption
Input: a = {a_k, 1 ≤ k ≤ K}; watermark image W of size U × V
Step 1: Perform a 2-level DTCWT on a to obtain the wavelet coefficients and select d_w3: [d_w1, d_w2, d_w3, a_2s] ← DTCWT(a)
Step 2: Obtain a complex spectrum by applying the STFT: D_w3 ← STFT(d_w3)
Step 3: Take the magnitude and phase spectra: M_w3 ← ABS(D_w3), P_w3 ← ANGLE(D_w3)
Step 4: Take the diagonal elements: σ ← DIAG(S_w)
Step 5: Convert the watermark image into a binary watermark image: W = {w(u, v), 1 ≤ u ≤ U, 1 ≤ v ≤ V} ← BINARY(WI)
Step 6: Apply the Arnold transform to scramble the binary image and reduce its dimension: w_1 = {w_1(p), 1 ≤ p ≤ P} ← AE(W)
Step 7: Encrypt w_1 through the BCH encoder: w_2 ← BCH(w_1)
Step 8: Perform a 2-level DTCWT on w_2 to obtain the wavelet coefficients and select d_m3: [d_m1, d_m2, d_m3, a_m] ← DTCWT(w_2)
Step 9: Obtain a complex spectrum by applying the STFT: D_m3 ← STFT(d_m3)
Step 10: Take the magnitude and phase spectra: M_m3 ← ABS(D_m3), P_m3 ← ANGLE(D_m3)
Image extraction algorithm
Input: Compressed input
Step 1: Obtain a complex spectrum by applying the STFT: D̃_3 ← STFT(d̃_3)
Step 2: Take the magnitude spectrum: M̃_3 ← ABS(D̃_3)
Step 3: Take the diagonal elements: σ̃ ← DIAG(S̃)
Step 4: Perform the inverse SVD operation: M̃ ← U S̃ V^T
Step 5: Perform the inverse STFT operation: d̃ ← ISTFT(D̃)
Step 6: Perform the inverse DTCWT operation: w̃_2 ← IDTCWT(d̃_m1, d̃_m2, d̃_m3, ã_m)
Step 7: Perform the inverse BCH decoding: w̃_1 ← IBCH(w̃_2)
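Step 6's Arnold scrambling can be sketched generically. This is the textbook cat-map round on an N × N array; the dimensions and iteration count below are illustrative, not the authors' exact configuration:

```python
def arnold_scramble(img):
    """One round of the Arnold cat map: (x, y) -> (x + y, x + 2y) mod N."""
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
    return out

# The map is a bijection of pixel positions, so iterating it eventually
# restores the image (for a 4 x 4 array the period happens to be 3).
grid = [[r * 4 + c for c in range(4)] for r in range(4)]
once = arnold_scramble(grid)
restored = arnold_scramble(arnold_scramble(once))  # third application
```

The sender stops after a secret number of rounds; the receiver completes the remaining rounds of the period to unscramble.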
Table 3. Secret key lookup table.
Secret Lookup Table | Interleaved Bit Location t
(k_m = 2,853,234) | Case 1: P = (13,10), k_s = 129,865,731 | Case 2: P = (9,1), k_s = 129,865,731 | Case 3: P = (9,1), k_s = 129,865,732
i | x_i | y_j | t (Case 1) | t (Case 2) | t (Case 3)
0 | 1 | 1 | 0 | 1 | 0
1 | 0 | 2 | 1 | 0 | 1
2 | 1 | 3 | 3 | 8 | 7
3 | 0 | 4 | 4 | 4 | 5
4 | 1 | 5 | 5 | 7 | 5
5 | 0 | 6 | 6 | 5 | 2
6 | 1 | 7 | 7 | 6 | 1
7 | 1 | 10 | 8 | 9 | 4
8 | 6 | 14 | 9 | 13 | 12
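The spirit of this key-dependent bit mapping can be emulated with a key-seeded permutation. The sketch below is illustrative only: it derives the permutation from k_s with a seeded Fisher–Yates shuffle rather than the paper's exact lookup-table construction:

```python
import random

def key_interleaver(n_bits, ks):
    """Derive a deterministic bit-position permutation from secret key ks."""
    perm = list(range(n_bits))
    random.Random(ks).shuffle(perm)  # key-seeded Fisher-Yates shuffle
    return perm

def interleave(bits, perm):
    """Move the bit at position i to position perm[i]."""
    out = [0] * len(bits)
    for src, dst in enumerate(perm):
        out[dst] = bits[src]
    return out

def deinterleave(bits, perm):
    """Invert the permutation to restore the original bit order."""
    return [bits[dst] for dst in perm]
```

Both ends regenerate the same permutation from the shared key, so no interleaving table needs to be transmitted.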
Table 4. Correlation coefficient between the input and the output bit locations of MCTC for different values of 𝑘𝑠.
Secret Key k_s | Correlation Coefficient
14,795 | −0.01457
15,789 | −0.01489
17,458 | −0.0147
24,571 | −0.0014
74,568 | −0.00793
14,891 | −0.00741
27,894 | −0.00143
17,489 | −0.00369
178,946 | −0.00753
Average | −0.00159
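The figures in Table 4 are the standard Pearson correlation between original and interleaved bit positions; magnitudes near zero indicate that the interleaver destroys positional structure. A minimal implementation of the statistic:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# e.g. pearson(list(range(n)), interleaved_positions) for a given secret key
```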
Table 5. The dispersion of each of the FSMs is compared here.
N | LR (γ, sp) | IDI (γ, sp) | QPP (γ, sp) | MCTC (γ, sp)
133 | 0.8245, 2 | 0.0300, 10 | 0.0455, 16 | 0.8216, 2
499 | 0.8130, 2 | 0.0075, 18 | 0.0210, 32 | 0.8124, 2
2019 | 0.8132, 2 | 0.0019, 27 | 0.0103, 64 | 0.8131, 2
Table 6. Comparison of dispersion for MCTC with different values of secret key 𝑘𝑠.
Secret Key (k_s) | Normalized Dispersion (γ)
15,512 | 0.8221
51,855 | 0.8225
1351 | 0.8050
98,581 | 0.8012
58,111 | 0.8080
51,232 | 0.8209
22,981 | 0.8012
52,983 | 0.8080
25,139 | 0.8215
15,322 | 0.8051
12,851 | 0.8225
Average | 0.1478
Table 7. FIPS-140-2 statistical test results.
Statistical Test | Required Interval | Case 1 | Case 2 | Case 3
Mono bit test | Input 1 | 9998 | 10,001 | 10,019
Poker test | Input 2 | 10.9696 | 11.4560 | 15.4880
Example 1 | Input 3 | 1785 | 1720 | 1470
Example 2 | Input 4 | 1789 | 1789 | 1247
Long run test | Input 5 | 1 | 2 | 3
Final Result | | success | success | success
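For reference, the monobit entry in Table 7 can be checked mechanically. The pass interval below (strictly more than 9725 and fewer than 10,275 ones in a 20,000-bit sample) is the one specified in FIPS 140-2; the observed counts 9998, 10,001, and 10,019 all fall inside it:

```python
def monobit_test(bits):
    """FIPS 140-2 monobit test on a 20,000-bit sample:
    pass iff the number of ones lies strictly between 9725 and 10275."""
    if len(bits) != 20000:
        raise ValueError("FIPS 140-2 tests operate on 20,000-bit samples")
    ones = sum(bits)
    return 9725 < ones < 10275
```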
Table 8. PSNR measurements taken from pictures recovered by CTC using both the right and wrong keys.
Input File Size (KB) | Encryption Time (s): Symmetric Algorithm | Encryption Time (s): MCTC FSM | Decryption Time (s): Symmetric Algorithm | Decryption Time (s): MCTC FSM
3 | 3.54 | 2.43 | 3.53 | 2.10
5 | 5.76 | 4.65 | 5.81 | 4.23
7 | 8.31 | 7.20 | 8.21 | 7.45
11 | 13.78 | 12.67 | 13.65 | 12.04
18 | 22.19 | 11.08 | 21.89 | 19.57
21 | 28.28 | 17.17 | 27.99 | 24.12
25 | 31.32 | 20.21 | 32.22 | 30.78
Table 9. Simulation Parameters for steam cipher using proposed MCTC.
Simulation Parameter | Type/Value
FSM Encoder | With modified Turbo coding
Generation matrix | 3 × 3
Input data rate | 256
Overall frame length | 16,789 bits
Modulation | Quadrature Phase Shift Keying (QPSK)
Universal Random Number Generator (EC-URNG) configuration | a = 5, b = 2, p = 26,741
Transmitter secret key (Ka) | 45,671
Receiver secret key (Kb) | 7894
Secret parameters (G, Km, K, P) | G = (15,647, 15,689), Km = 45,679, K = 753, P = (867,159,786)
Table 10. Performance parameters for encrypted image.
Secret Key (k) | Input Feature | Coefficient | Mean Square Error | PSNR (dB)
Private Key 1 | 6.5471 | 2.2238 | 1.1582 × 12 | 36.5266
Private Key 2 | 6.3258 | −2.2252 | 1.1633 × 12 | 36.5268
Private Key 3 | 6.2478 | −2.2233 | 1.1631 × 12 | 36.525
Private Key 4 | 6.1023 | −2.2232 | 1.1656 × 12 | 36.3992
Private Key 5 | 6.7894 | −2.2233 | 1.1536 × 12 | 36.5326
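The PSNR column follows the standard definition PSNR = 10·log10(MAX² / MSE); the peak value MAX = 255 for 8-bit grayscale images is an assumption here, since the table does not restate it. A one-line helper plus the matching MSE:

```python
import math

def mse(a, b):
    """Mean-square error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(mean_sq_err, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB for a given mean-square error."""
    return 10.0 * math.log10(peak ** 2 / mean_sq_err)
```

Lower MSE between the original and decrypted image translates directly into higher PSNR, which is why the five keys above, with nearly identical MSE, also report nearly identical PSNR.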
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Hazzazi, M.M.; Budaraju, R.R.; Bassfar, Z.; Albakri, A.; Mishra, S. A Finite State Machine-Based Improved Cryptographic Technique. Mathematics 2023, 11, 2225. https://doi.org/10.3390/math11102225
