1. Introduction
The transmission of digital images through various networks is a routine process, with thousands of digital images transmitted every moment. In social networks, users do not want others to access their images. In healthcare networks, medical images are sensitive, and any misuse of them may lead to wrong diagnoses and inaccurate medical decisions. Transmitting military images over different networks requires high security levels to prevent intruders from obtaining them. Generally, owners of digital images do not want others to access their images without permission. For these reasons, securing image content has become an important issue. Several security approaches are used to achieve image confidentiality so that an unauthorized user cannot access image content.
Image security approaches are divided into three main categories: data hiding [1,2], image watermarking [3,4,5,6,7], and encryption [8,9,10,11]. In data hiding techniques, a secret message is embedded in a cover image so that it is not detectable. In image watermarking techniques, pieces of digital data are inserted into the image such that the original and watermarked images are perceptually similar. In image encryption techniques, the digital input image is converted into a noise-like image using a key, so that its content cannot be understood or predicted. Users cannot restore the encrypted image without knowing the key.
There are several techniques used in digital image encryption, such as chaos theory [12,13,14], DNA [15,16,17], quantum methods [18,19], and compressive sensing [20,21]. Image encryption techniques depend on two significant steps. The first step is confusion, in which pixel arrangements are changed. Diffusion is the second step, which changes the values of pixels. Chaotic-based methods possess intrinsic properties such as non-periodicity, random behavior, and sensitivity to control parameters and initial conditions [22]. These properties enable the successful utilization of chaotic-based methods in image encryption.
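The sensitivity property can be illustrated with a minimal sketch. The logistic map is used here only as a generic example of a low-dimensional chaotic map; it is not the system proposed in this paper:

```python
# Generic illustration of chaotic sensitivity to initial conditions,
# using the classic logistic map x_{n+1} = r*x*(1 - x); this map is
# an example only, not the hyperchaotic system proposed in the paper.
def logistic_orbit(x0, r=3.99, steps=100):
    """Iterate the logistic map `steps` times from seed x0."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = logistic_orbit(0.4)
b = logistic_orbit(0.4 + 1e-10)  # perturb the seed by only 1e-10
print(abs(a - b))  # the orbits diverge to a macroscopic distance
```

After 100 iterations, the 1e-10 perturbation has been amplified into a completely different trajectory, which is why a chaotic key stream is so hard to guess from approximate initial conditions.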
Chai et al. [23] pointed out that chaotic-based encryption systems for digital images are classified into two main categories. The first category includes low-dimensional systems, such as 1D chaotic maps. The second is high-dimensional systems, such as hyperchaotic systems. The low-dimensional chaotic maps are easy to apply due to their simple structures. Despite these intrinsic properties, such maps have a small keyspace and achieve low security levels [24].
Several chaos-based encryption algorithms exist, such as [25,26,27,28,29,30,31]. Chen and Hu [32] proposed a medical image encryption method using a logistic-sine map for the confusion process. The scrambled image is divided into blocks, and a hyperchaotic system is used for diffusing the image blocks. Chai et al. [33] utilized a memristive chaotic system in image encryption, which improved its ability to resist differential attacks. Chai et al. [34] presented a new image encryption algorithm based on a parameter-varying chaotic system, elementary cellular automata (ECA), and block compressive sensing (BCS). Tsafack et al. [35] designed a new 4D chaotic circuit and applied it to image encryption. In [36], Ramasamy et al. proposed a new algorithm that depends on Block Scrambling and Modified Zigzag Transformation to scramble the plain image; the key was then generated based on an Enhanced Logistic–Tent Map (ELTM) to diffuse the scrambled image. Zheng and Liu [37] designed a new scheme for encrypting gray images. First, a new 2D chaotic map system (2D-LSMM), based on both logistic and sine maps, was introduced. The encryption scheme was then based on DNA, where the encoding and operation rules of the DNA sequences were determined by 2D-LSMM chaotic sequences. In [38], Kari et al. introduced a novel image encryption technique based on chaotic maps. In this algorithm, pixel positions are changed in the confusion phase using Arnold's cat map, and pixel contents are updated in the diffusion phase, which is controlled by the extension of the plain image matrix, an XOR operation, and an exchange operation. The authors in [39] presented a fast image encryption technique based on a simultaneous permutation and diffusion operation (SPDO), where the values of the pixels are permuted and diffused simultaneously using a SineSine map.
Liu et al. [40] utilized a coupled hyperchaotic system in pathological image encryption. Yu et al. [41] used Chen's hyperchaotic system with the fractional Fourier transform to encrypt images. Hyperchaotic methods are used as alternatives to low-dimensional chaotic systems to overcome their limitations: they outperform low-dimensional chaotic methods in terms of randomness, unpredictability, nonlinearity, and sensitivity to initial conditions, and they produce key sequences with a large keyspace. Generally, the utilization of hyperchaotic systems improves the level of security. However, some image encryption algorithms that use hyperchaotic methods remain weak against certain attacks, and for some algorithms the encrypted image histogram is not uniform.
Related works have some limitations that can be summarized as follows:
Low keyspace and low sensitivity to the initial conditions.
The initial conditions of the chaotic map do not depend on the plain image, which weakens resistance to differential attacks.
When the encrypted image is attacked with noise or a data cut, some encryption algorithms fail to retrieve the plain image.
Some of these algorithms cannot resist statistical attacks, as the histogram of the encrypted image is not flat.
These weaknesses motivated the authors to propose a new algorithm for encrypting images. The proposed algorithm utilizes a six-dimensional (6D) hyperchaotic system and the Fibonacci Q-matrix to encrypt grayscale images in two main steps. First, the pixel positions of the original image are scrambled using the 6D hyperchaotic system; only three sequences from this system, selected at random, are used to permute the original image. Second, the Fibonacci Q-matrix is used in the diffusion process, which is performed on sub-blocks of the confused image. Based on the performed experiments, the proposed image encryption algorithm successfully encrypts gray images with excellent performance. The contributions of this work are summarized as:
The first utilization of the Fibonacci Q-matrix in image encryption.
The first use of a 6D hyperchaotic system in image encryption.
The integration of the 6D hyperchaotic system and the Fibonacci Q-matrix assures a high security level.
The large keyspace of the proposed algorithm leads to good resistance to brute force attacks.
The proposed image encryption algorithm is highly robust to most attacks.
Analysis of the obtained results shows the excellent performance of the proposed algorithm.
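As a rough sketch of the diffusion idea (the exponent n and the 2 × 2 block size used below are assumptions for illustration, not values given in this excerpt), the Fibonacci Q-matrix Q = [[1, 1], [1, 0]] satisfies Q^n = [[F(n+1), F(n)], [F(n), F(n−1)]] with determinant (−1)^n, so multiplying pixel blocks by Q^n modulo 256 is an invertible diffusion operation:

```python
# Sketch of Fibonacci Q-matrix diffusion (exponent n=10 and the 2x2
# block size are illustrative assumptions, not from the paper text).
def q_power(n):
    """Return Q^n as [[F(n+1), F(n)], [F(n), F(n-1)]]."""
    f = [0, 1]                       # Fibonacci numbers F(0), F(1), ...
    while len(f) <= n + 1:
        f.append(f[-1] + f[-2])
    return [[f[n + 1], f[n]], [f[n], f[n - 1]]]

def diffuse_block(block, n=10):
    """Multiply a 2x2 pixel block by Q^n modulo 256 (one diffusion step)."""
    q = q_power(n)
    a, b = block[0]
    c, d = block[1]
    return [[(q[0][0]*a + q[0][1]*c) % 256, (q[0][0]*b + q[0][1]*d) % 256],
            [(q[1][0]*a + q[1][1]*c) % 256, (q[1][0]*b + q[1][1]*d) % 256]]

print(q_power(10))                    # [[89, 55], [55, 34]]
print(diffuse_block([[1, 2], [3, 4]]))
```

Because det(Q^n) = ±1, Q^n is invertible modulo 256, so the receiver can undo the diffusion exactly when the key is known.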
The rest of this paper is organized as follows: the mathematical foundations of the 6D hyperchaotic system and the Fibonacci Q-matrix are presented in Section 2. The proposed algorithm is presented in Section 3. Tests and results are discussed in Section 4. The conclusion is presented in Section 5.
4. Tests and Results
The proposed algorithm's effectiveness was tested using different standard grayscale images (Baboon, Pepper, Boat, Airplane, and Lena) of two different sizes. Additionally, the proposed algorithm was compared with existing image encryption algorithms. All experiments were executed using MATLAB (R2015a) on a laptop equipped with a Core i5-2430M 2.4 GHz CPU and 4 GB of RAM.
Eight experiments were performed to evaluate the proposed encryption algorithm using entropy, correlation coefficients, differential attack, noise and data cut attacks, histograms, keyspace, key sensitivity, and NIST Statistical Test.
4.1. Entropy
The image randomness is measured by entropy, which can be defined as:

H(m) = -\sum_{i=0}^{2^N - 1} P(m_i) \log_2 P(m_i),

where P(m_i) is the occurrence probability of the gray value m_i, and 2^N is the total number of gray values (N = 8 for 8-bit grayscale images). An ideal entropy value for gray images is 8. The entropy of several gray images encrypted using the new and existing algorithms [44,45,46,47,48] is shown in Table 1 and Table 2. Our proposed method records the highest average entropy value. Additionally, our proposed algorithm was tested on 10 images of each of the two sizes, selected from the SIPI dataset. The average entropy value for each image size obtained using our proposed algorithm is listed in Table 3, and the results are compared with the methods of [44,45,46,47,48]. All entropy values for cipher images encrypted with the new method approach 8, so the cipher images encrypted using the proposed method have the highest randomness.
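The entropy measure can be sketched in a few lines (a minimal illustration from the definition above, not the paper's evaluation code):

```python
# Shannon entropy of an 8-bit image: H = -sum p(i) * log2 p(i) over
# the 256 gray levels; a uniform histogram attains the ideal value 8.
import math

def entropy(pixels):
    """Shannon entropy of a flat list of 8-bit pixel values."""
    counts = [0] * 256
    for p in pixels:
        counts[p] += 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts if c > 0)

# A perfectly uniform image attains the ideal entropy of 8 bits.
uniform = list(range(256)) * 4
print(entropy(uniform))  # 8.0
```

A constant image, by contrast, has entropy 0; good cipher images should sit very close to the ideal value of 8.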
4.2. Correlation Coefficient
Generally, adjacent pixels of input images have a high correlation in the diagonal, horizontal, and vertical directions. A successful encryption algorithm must minimize this correlation. Any two neighboring pixels x and y have the following correlation coefficient:

r_{xy} = \frac{\mathrm{cov}(x, y)}{\sqrt{D(x)} \sqrt{D(y)}},

with

E(x) = \frac{1}{N} \sum_{i=1}^{N} x_i, \quad D(x) = \frac{1}{N} \sum_{i=1}^{N} (x_i - E(x))^2, \quad \mathrm{cov}(x, y) = \frac{1}{N} \sum_{i=1}^{N} (x_i - E(x))(y_i - E(y)),

where the integer N is the number of selected pixel pairs, and D(x) and E(x) are the variance and expectation of x, respectively. In a successfully encrypted image, the correlation between adjoining pixels should approach 0.
In this experiment, nearby pixels are grouped in pairs, 40,000 of these pairs are randomly selected, and the correlation coefficients are computed for the three directions. Table 4 and Table 5 show the absolute values of the correlation coefficients of images encrypted using the new and existing image encryption algorithms [44,45,46,47,48]. The average correlation coefficients for the new encryption algorithm are very close to 0. All results confirm that our proposed algorithm can remove the correlation between adjacent pixels in the encrypted image.
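The adjacent-pixel test can be sketched as follows (a minimal illustration on a synthetic noise-like image, not the paper's code; only horizontal pairs are shown):

```python
# Correlation of horizontally adjacent pixels:
# r = cov(x, y) / (sqrt(D(x)) * sqrt(D(y))).
import random

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    ex, ey = sum(xs) / n, sum(ys) / n
    cov = sum((x - ex) * (y - ey) for x, y in zip(xs, ys)) / n
    dx = sum((x - ex) ** 2 for x in xs) / n
    dy = sum((y - ey) ** 2 for y in ys) / n
    return cov / (dx ** 0.5 * dy ** 0.5)

random.seed(0)
# A noise-like "cipher image": adjacent pixels should be uncorrelated.
img = [[random.randrange(256) for _ in range(64)] for _ in range(64)]
pairs = [(img[i][j], img[i][j + 1]) for i in range(64) for j in range(63)]
xs, ys = zip(*pairs)
print(abs(corr(xs, ys)))  # close to 0 for random pixels
```

On a natural image the same statistic is typically close to 1, which is exactly the structure that the confusion and diffusion steps are meant to destroy.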
4.3. Differential Attack
In this attack, the attacker aims to decrypt encrypted images without the key by determining the relation between the original and encrypted images. Therefore, small pixel changes in the original image should significantly affect the encrypted image, making it more difficult for attackers to crack the encrypted image. Successful image encryption algorithms must resist this attack. Robustness to this attack is measured by the Number of Pixels Change Rate (NPCR) and the Unified Average Changing Intensity (UACI):

\mathrm{NPCR} = \frac{\sum_{i,j} D(i,j)}{M \times N} \times 100\%, \quad \mathrm{UACI} = \frac{1}{M \times N} \sum_{i,j} \frac{|C_1(i,j) - C_2(i,j)|}{255} \times 100\%,

with

D(i,j) = 0 if C_1(i,j) = C_2(i,j), and D(i,j) = 1 otherwise,

where C_1 refers to the cipher image obtained by encrypting the original image after changing only one pixel, and C_2 refers to the cipher image obtained by encrypting the unchanged plain image, both of size M × N.
Table 6 shows the NPCR and UACI values computed for the five gray images encrypted using the proposed and existing image encryption algorithms [44,45,46,47,48]. In addition, the average NPCR and UACI values for the images selected from the SIPI dataset are presented in Table 7. To confirm the efficiency of our algorithm, the results are compared with the other methods [44,45,46,47,48].
As mentioned in [49], the critical values of NPCR and UACI are N*_α and the interval [U*−_α, U*+_α], respectively, which are calculated as follows:

N^*_\alpha = \frac{F - \Phi^{-1}(\alpha)\sqrt{F/(MN)}}{F + 1} \times 100\%,

U^{*-}_\alpha = \mu_U - \Phi^{-1}(\alpha/2)\,\sigma_U, \quad U^{*+}_\alpha = \mu_U + \Phi^{-1}(\alpha/2)\,\sigma_U,

with \mu_U = \frac{F+2}{3F+3} and \sigma_U = \sqrt{\frac{(F+2)(F^2+2F+3)}{18(F+1)^2 F M N}}, where F = 255 and \Phi^{-1} is the inverse CDF of the standard normal distribution. To resist differential attacks, the NPCR value of the encrypted image should be larger than N*_α, and the UACI value should lie in the range [U*−_α, U*+_α]; these critical values are evaluated at the significance level α = 0.05 for each tested image size. In Table 6 and Table 7, the values that did not pass the test are displayed in bold. Our proposed algorithm achieves the highest pass rate compared to the other methods, reflecting excellent robustness against differential attacks.
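The two measures can be computed directly from their definitions. The following minimal sketch evaluates NPCR and UACI on toy 2 × 2 "cipher images" (illustration only, not the paper's evaluation code):

```python
# NPCR: percentage of positions where two cipher images differ.
# UACI: average absolute intensity difference, normalized by 255.
def npcr_uaci(c1, c2):
    """Return (NPCR %, UACI %) for two same-size 2D 8-bit images."""
    m, n = len(c1), len(c1[0])
    diff = sum(1 for i in range(m) for j in range(n) if c1[i][j] != c2[i][j])
    inten = sum(abs(c1[i][j] - c2[i][j]) for i in range(m) for j in range(n))
    npcr = 100.0 * diff / (m * n)
    uaci = 100.0 * inten / (255.0 * m * n)
    return npcr, uaci

c1 = [[0, 255], [10, 10]]
c2 = [[255, 0], [10, 20]]
print(npcr_uaci(c1, c2))  # NPCR = 75.0 (3 of 4 pixels differ)
```

For a good cipher, changing one plain-image pixel should drive NPCR above the critical value N*_α and keep UACI inside its critical interval.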
4.4. Noise and Data Cut Attacks
When images are transmitted over a network, they are vulnerable to noise or cropping (data cut). Successful image encryption algorithms should be robust against noise and cropping attacks. The well-known peak signal-to-noise ratio (PSNR) is used to evaluate decrypted image quality. Mathematically, for an original image I and a decrypted image I', both of size M × N, the PSNR is:

\mathrm{PSNR} = 10 \log_{10} \frac{255^2}{\mathrm{MSE}},

where MSE refers to the mean square error:

\mathrm{MSE} = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \left(I(i,j) - I'(i,j)\right)^2.

A higher PSNR value reflects higher image quality; at sufficiently high PSNR values, the original and decrypted images are visually indistinguishable.
This experiment was performed to test robustness against noise and data cut attacks. An encrypted image was contaminated with salt-and-pepper noise at two different levels, 0.002 and 0.005, and then decrypted using the new method. The encrypted images were also attacked with data cuts of two different sizes and then decrypted using the new algorithm. The PSNR values for the five tested images under the noise and data cut attacks are shown in Table 8.
The new algorithm is robust against salt-and-pepper noise with density 0.002, where all PSNR values approach 30 dB. When the noise level increased to 0.005, the average PSNR decreased to 25.6 dB. For the smaller data cut, the PSNR values are around 24 dB, and the decrypted image's content is visible. Moreover, when the encrypted image is attacked with a data cut of size 128 × 128, a relatively large cut (i.e., the encrypted image loses 1/8 of its information), the PSNR decreases to 18 dB. Despite the reduction in PSNR values, the decrypted image remains recognizable.
Figure 2 shows the noise and data cut attacks on an encrypted image, demonstrating that the reader can easily recognize the decrypted images' content in the different cases (noise and data cut). Therefore, the new algorithm is robust and resistant to these attacks.
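The PSNR measure can be sketched as follows (a minimal illustration of the formula above on a toy 2 × 2 image, not the paper's code):

```python
# PSNR for 8-bit images: PSNR = 10*log10(255^2 / MSE), with
# MSE the mean squared pixel difference; identical images give inf.
import math

def psnr(orig, dec):
    """PSNR in dB between two same-size 2D 8-bit images."""
    m, n = len(orig), len(orig[0])
    mse = sum((orig[i][j] - dec[i][j]) ** 2
              for i in range(m) for j in range(n)) / (m * n)
    return float('inf') if mse == 0 else 10.0 * math.log10(255.0 ** 2 / mse)

orig = [[100, 100], [100, 100]]
noisy = [[100, 105], [100, 100]]   # one pixel off by 5 -> MSE = 6.25
print(psnr(orig, noisy))           # roughly 40 dB
```

Values near 30 dB, as reported for the 0.002 noise level, indicate a decrypted image whose content is clearly recognizable despite the attack.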
4.5. Histograms
A visual representation of the distribution of image pixels, called the image histogram, is used to evaluate image encryption algorithms. A successful image encryption algorithm must generate a flat histogram for the encrypted image.
Three standard gray images, Peppers, Airplane, and Boat, were encrypted using the new algorithm. The histograms of the original and encrypted images are displayed in Figure 3. Because the contents of the original images are distinguishable, their histograms differ. On the other hand, the encrypted images have very similar, uniform histograms, so attackers cannot recover the original images from the encrypted image histograms. To verify the uniform distribution of the histogram, the chi-square test is calculated by the following equation:

\chi^2 = \sum_{i=0}^{255} \frac{(o_i - e_i)^2}{e_i},

where o_i refers to the observed frequency of the gray value i, and e_i = (M × N)/256 is the expected frequency of each gray value. At a significance level of 0.05, the critical value is χ²_{0.05}(255) ≈ 293. The histogram of an encrypted image is considered uniform if its χ² value is less than 293. We calculated χ² for the encrypted images and recorded the results in Table 9. All values in Table 9 are less than 293, so the histograms of images encrypted using the proposed algorithm have a uniform distribution. These results confirm the efficiency of the new algorithm.
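The chi-square uniformity test can be sketched directly from its definition (a minimal illustration, not the paper's code):

```python
# Chi-square statistic over the 256 gray levels, with expected
# frequency e = M*N/256; a flat histogram gives chi2 = 0, and values
# below ~293 (the 0.05 critical value for 255 degrees of freedom)
# are taken as uniform.
def chi_square(pixels):
    """Chi-square uniformity statistic for a flat list of 8-bit pixels."""
    counts = [0] * 256
    for p in pixels:
        counts[p] += 1
    expected = len(pixels) / 256.0
    return sum((c - expected) ** 2 / expected for c in counts)

uniform = list(range(256)) * 16    # perfectly flat histogram
print(chi_square(uniform))         # 0.0
```

A constant image, by contrast, concentrates all mass in one bin and yields a chi-square value far above the 293 threshold, so its histogram is (correctly) rejected as non-uniform.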
4.6. Keyspace
The keyspace size is crucial in the encryption process: an encryption algorithm is robust to brute force attacks only if its keyspace is sufficiently large. The proposed encryption algorithm has several security keys, including the initial conditions and control parameters of the 6D hyperchaotic system. Assuming a fixed computational accuracy for each initial value, the total keyspace is large enough to show robustness to brute force attacks.
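The keyspace arithmetic can be illustrated hypothetically (the paper's actual key count and computational accuracy are not shown in this excerpt; 13 keys at 10⁻¹⁵ precision are assumed here purely for illustration):

```python
# Hypothetical keyspace sizing: with k real-valued keys, each
# distinguishable to a precision of 1e-15, there are (1e15)^k
# distinct key tuples; log2 of that count is the keyspace in bits.
import math

def keyspace_bits(num_keys, precision=1e-15):
    """log2 of the number of distinct key tuples."""
    return num_keys * math.log2(1.0 / precision)

print(keyspace_bits(13))  # roughly 648 bits for 13 keys at 1e-15
```

Even a handful of real-valued chaotic parameters yields hundreds of bits of keyspace, which is why hyperchaotic schemes are generally considered safe against exhaustive search.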
4.7. Key Sensitivity
Successful image encryption algorithms must show high sensitivity to the secret keys: minimal modifications to the initial conditions of the secret key used in encryption should produce a noticeably different decrypted image. An experiment was performed to test the key sensitivity of the new algorithm. The original "Lena" image was encrypted using the initial conditions (0.1, 0.1, 0.1, 0.1, 0.1, and 0.1).
Figure 4a,b show the original and encrypted images of Lena.
The key was then modified by a minimal amount in a single value (0.1, 0.1, 0.1, 0.1, 0.1, and 0.1000001). The decryption process with the modified key failed to restore the original image, as shown in
Figure 4c. On the other side, decryption using the original secret key successfully recovered the original image, as displayed in
Figure 4d.
4.8. NIST Statistical Test
A good encryption algorithm should produce an encrypted image with high randomness. The NIST statistical test suite provides statistical tests to assess the randomness of the sequence generated by the encryption algorithm; the significance level is set to 0.01 for all NIST tests. In this experiment, we converted an encrypted Peppers image into a binary sequence, calculated the p-values for the different statistical tests, and recorded the results in Table 10. A p-value greater than 0.01 indicates randomness of the binary sequence. From the results, the sequence generated using the proposed algorithm passed all tests, which confirms the randomness of the binary sequence.
4.9. Computational Complexity
The number of steps required to perform the encryption process is used to measure the computational complexity of the algorithm. For a plain image of a given size, the time complexity of the confusion step grows with the number of pixels, while the complexity of the diffusion step depends on the number of sub-blocks in the image; together, these determine the total time complexity of the proposed algorithm.
5. Conclusions
The authors proposed a new algorithm for gray image encryption in which the Fibonacci Q-matrix is integrated with a 6D hyperchaotic system. First, random sequences are generated using the 6D hyperchaotic system, and three of these sequences are selected to change the pixel positions. Then, the Fibonacci Q-matrix is used to change the pixel values of each sub-block of the shuffled image. Double confusion/diffusion operations are applied to increase the security level.
The new algorithm is sensitive to minimal modifications in the pixel distribution and the secret key, where an entirely different encrypted image is obtained; therefore, the proposed algorithm successfully resists differential attacks. The new algorithm resists brute force attacks because its keyspace is large enough. Moreover, the new algorithm's security performance was evaluated using information entropy, correlation coefficients, noise and data cut attacks, and histogram analysis. The new algorithm can encrypt gray images with high security levels. In the future, we will study the effectiveness of our algorithm in encrypting color images.