Article

Multi-Attack Detection: General Defense Strategy Based on Neural Networks for CV-QKD

School of Computer Science and Engineering, Central South University, Changsha 410083, China
* Author to whom correspondence should be addressed.
Photonics 2022, 9(3), 177; https://doi.org/10.3390/photonics9030177
Submission received: 30 January 2022 / Revised: 7 March 2022 / Accepted: 9 March 2022 / Published: 12 March 2022
(This article belongs to the Special Issue Photonic Neural Networks)

Abstract

The security of the continuous-variable quantum key distribution (CVQKD) system is subject to various attacks by hackers. The traditional detection method based on parameter estimation requires professionals to judge each known attack individually, so general detection models have emerged to improve the universality of detection; however, current universal detection methods only consider attacks occurring in isolation and ignore the possible coexistence of multiple attacks in reality. Here, we propose two multi-attack neural network detection models to handle the coexistence of multiple attacks. The models adopt two methods from multi-label learning, binary relevance (BR) and label powerset (LP), to deal with the coexistence of multiple attacks, and can identify attacks in real time by autonomously learning the features of known attacks in a deep neural network. Further, we improve the models to detect unknown attacks simultaneously. The experimental results show that the proposed scheme can achieve high-precision detection for most known and unknown attacks without reducing the key rate and maximum transmission distance.

1. Introduction

Compared with classical secure communication systems, quantum key distribution (QKD) can detect eavesdropping behavior and perceive the network security situation [1]. Its theoretical unconditional security is guaranteed by the Heisenberg uncertainty principle [2] and the quantum no-cloning theorem [3]; however, its practical security is challenged by the imperfection of the devices, which may open security loopholes for active attacks by Eve [4,5,6,7].
According to the carrier of information transmission, QKD can be divided into discrete-variable (DV) QKD [8,9,10] and continuous-variable (CV) QKD [11,12,13,14]. CVQKD can accomplish unconditionally secure key distribution with only standard optical communication devices, and it has lower system cost and better application prospects. The system based on the Gaussian modulated coherent state (GMCS) protocol has the properties of a classical light field, is easy to prepare, and offers a longer secure transmission distance, so it can better meet application requirements [15]; therefore, this paper focuses on GMCS CVQKD systems.
The security analysis of the system calculates the secret key curve based on the channel transmission and the excess noise, and finally judges whether the system is under attack [16]. There are many ways for Eve to attack practical systems. On the one hand, Eve can manipulate local oscillator (LO) optical signals to perform calibration attacks (CA) [17] and local oscillator intensity attacks (LOIA) [18]. On the other hand, Eve may exploit the limited linear domain of the system detector to implement saturation attacks (SA) [19] and blinding attacks [20]. Moreover, Eve may carry out wavelength attacks through the wavelength dependence of the beam splitter at the receiving end of the system [21,22,23]. The main idea of counteracting quantum attacks is to add corresponding real-time monitoring modules to the system for different types of attacks. For example, to defend against LO pulse attacks, researchers have designed real-time shot noise monitoring solutions [17,24]; however, in practical systems, we do not know in advance which kind of attack Eve will launch, so we cannot take targeted measures or judge how many attacks occurred simultaneously. Thus, it is necessary to detect multiple attacks. Quantum attacks can be detected by machine learning strategies such as k-nearest neighbors (KNN) [25], support vector machines (SVM) [26], ensemble learning [27], hidden Markov models (HMM) [28], and neural networks (NN) [29,30]. These strategies can automatically detect and classify some known attacks by learning feature distributions instead of simple thresholds; however, there is also a drawback: previous detection schemes select the single attack type with the highest probability, without considering the coexistence of multiple attacks in practice.
In this paper, we propose a multi-attack detection and classification solution to defend CVQKD systems. We study several typical features that are affected by attacks and analyze the changes in these features under different attacks, taking the normal unattacked state as a reference. Then, we put the preprocessed attack data into the multi-label neural network model for training. The model uses a multi-label detection method to simultaneously identify multiple types of attacks from different features, covering not only a single attack but also the coexistence of multiple attacks. With this model, Bob can automatically detect abnormal data in real time to avoid attacks. Once the system receives abnormal data, it can immediately stop the key transmission with Alice without waiting until the key transmission is completed to check whether it was attacked. Experiments demonstrate that our model can identify known attacks and unknown attacks with high accuracy. In our work, we mainly considered three typical attack strategies against the systems: calibration attacks, LO intensity attacks, and saturation attacks. The main idea of these three attack strategies is to keep the estimated excess noise and channel transmission unchanged before and after the attack by changing the shot noise or exploiting the inherent linear-region defects of the detector. In addition, hybrid attack 1 [31] and hybrid attack 2 [29] are treated as two types of unknown attacks to evaluate the detection and classification of unknown attacks.

2. Detection and Classification of Multi-Attack

2.1. Preparation of Multi-Attack Data Set

The study of combined attacks requires generating new data sets. The preparation of our data set consists of the following three steps. Firstly, 500 data points are generated for each type of attack as the original data set, and the combined attack data are added to it to form a new data set. Secondly, the new data set is divided into M groups with 25 pulses in each group. Finally, each group of 25 pulses is taken as a whole to compute the shot noise variance, which serves as the unit for calculating the group variance. According to the assumption of machine learning theory that the training set and the test set follow the same distribution, the larger the amount of data and the higher its quality, the better the model fits the actual situation. In our experiment, only 500 data points per attack type were used, and the number of samples will be increased in further experiments. The reason we set the number of pulses per group to 25 is that longer groups make it difficult to detect short-term attacks; after experiments with 20, 25, and 30 pulses per group, we found that 25 pulses per group gave the best results.
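As an illustration of this grouping step, the following sketch (assuming simulated Gaussian measurement data; the helper name and the unattacked variance used here are ours, not from the paper) splits a 500-pulse sequence into blocks of 25 and computes the per-group mean and variance that are used later as features.

```python
import numpy as np

def group_statistics(measurements: np.ndarray, pulses_per_group: int = 25):
    """Split a 1-D measurement sequence into groups and return (mean, variance) per group."""
    n_groups = len(measurements) // pulses_per_group
    blocks = measurements[: n_groups * pulses_per_group].reshape(n_groups, pulses_per_group)
    return blocks.mean(axis=1), blocks.var(axis=1)

# Example: 500 simulated unattacked pulses with variance V_A * N_0 (illustrative values).
rng = np.random.default_rng(0)
pulses = rng.normal(loc=0.0, scale=np.sqrt(10 * 0.4), size=500)
means, variances = group_statistics(pulses, pulses_per_group=25)
print(means.shape, variances.shape)  # (20,) (20,)
```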
In the absence of attacks, the mean and variance measured by Bob are given by Equations (1) and (2), where the excess noise carried by the signal pulses is the technical excess noise of the system ξ_tech introduced by the preparation of the Gaussian signal pulses, and the shot noise N_0 is the variance measured by Bob's homodyne detector when the input signal is almost 0.
$\bar{y} = 0$
$V_i = \eta T \left(V_A N_0 + \xi_{tech}\right) + N_0 + V_{el}$
The variance formula under the unattacked state can be described as follows. Alice's side generates random pulses with a Gaussian distribution of mean zero and variance V_A N_0, and the system has inherent technical excess noise ξ_tech = εN_0. When these variances are transmitted to Bob through the channel, the channel transmission T and the detection efficiency η of the detector must be considered. On Bob's side, the detector has electronic noise V_el = v_el N_0, and the homodyne detection introduces vacuum noise with a simplified variance N_0 (more details can be found in Appendix A).
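For concreteness, a minimal numeric sketch of Equation (2) follows, using the realistic parameter values listed later in Appendix C; the function name and the rounded channel transmission are ours, not from the paper.

```python
def variance_no_attack(V_A=10, eta=0.6, T=0.25, N0=0.4, eps=0.1, v_el=0.01):
    """Equation (2): V = eta*T*(V_A*N0 + xi_tech) + N0 + V_el (T rounded from 10**-0.6)."""
    xi_tech = eps * N0   # technical excess noise of the system
    V_el = v_el * N0     # electronic noise of the detector
    return eta * T * (V_A * N0 + xi_tech) + N0 + V_el

print(variance_no_attack())  # ~1.01
```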
For calibration attacks and LO intensity attacks, the strategy is to reduce the shot noise and thus reduce the apparent excess noise introduced by the attack; the shot noise directly impacts the second-order statistic, the variance. The strategy of saturation attacks is to add a proper displacement Δx to the quadrature X_A, which makes the detector abnormally operate in the saturation region, so that the attack is hidden; the added displacement Δx directly impacts the first-order statistic, the mean (more details can be found in Appendix B). Below are the mean and variance under different combinations of calibration attacks, LO intensity attacks, and saturation attacks.
The joint attack of calibration attacks and LO intensity attacks: since both N_0^CA and N_0^LOIA are proportional to N_0, their superimposed shot noise N_0^{CA&LOIA} is multiplicative. The excess noise introduced on signal pulses under the joint attack of calibration attacks and LO intensity attacks can be divided into three parts: (i) ξ_PIR^CA introduced by the partial intercept-resend (PIR) attack of calibration attacks, (ii) ξ_Gau^LOIA introduced by the Gaussian collective attack of LO intensity attacks, and (iii) ξ_tech^{CA&LOIA} introduced by the system; therefore, the total excess noise ξ and the superimposed shot noise N_0^{CA&LOIA} can be written as
$\xi = \xi_{PIR}^{CA} + \xi_{Gau}^{LOIA} + \xi_{tech}^{CA\&LOIA} = \left(2 + \frac{1-k}{kT} + \varepsilon\right) N_0^{CA\&LOIA}$
$N_0^{CA\&LOIA} = \frac{N_0^{CA} N_0^{LOIA}}{N_0} = k_1 k_2 N_0$
Since calibration attacks and LO intensity attacks only attack the variance without modifying the mean, the mean and variance under this condition can be expressed as
$\bar{y}_{CA\&LOIA} = 0$
$V_{CA\&LOIA} = \eta T \left(V_A N_0^{CA\&LOIA} + \xi\right) + (1 + v_{el}) N_0^{CA\&LOIA}$
For the joint attack of saturation attacks and calibration attacks, since saturation attacks do not change the shot noise, the superimposed shot noise is N_0^CA. The excess noise introduced on signal pulses under the joint attack of saturation attacks and calibration attacks can be divided into three parts: (i) ξ_PIR^CA introduced by the PIR attack of calibration attacks, (ii) ξ_PIR^SA introduced by the PIR attack of saturation attacks, and (iii) ξ_tech^CA introduced by the system; therefore, the total excess noise ξ and the superimposed shot noise N_0^CA can be written as
$\xi = \xi_{PIR}^{SA} + \xi_{PIR}^{CA} + \xi_{tech}^{CA} = (4 + \varepsilon) N_0^{CA}$
$N_0^{CA} = k_1 N_0$
The variance under the joint attack of saturation attacks and calibration attacks in the linear region can be expressed as
$V_{CA\&lin} = \eta T \left(V_A N_0^{CA} + \xi\right) + (1 + v_{el}) N_0^{CA}$
Replacing V_lin with V_{CA&lin} in V_SA (Equation (A26)), the mean and variance of saturation attacks and calibration attacks in the saturation region can be expressed as
$\bar{y}_{CA\&SA} = \alpha + C$
$V_{SA\&CA} = V_{CA\&lin}\left[\frac{1+A}{2} - \frac{B^2}{2\pi}\right] - (\alpha - \Delta)\sqrt{\frac{V_{CA\&lin}}{2\pi}}\, A B + \frac{(\alpha - \Delta)^2}{4}\left[1 - A^2\right]$
where
$A = \mathrm{erf}\left(\frac{\alpha - \Delta}{\sqrt{2 V_{CA\&lin}}}\right)$
$B = e^{-\frac{(\alpha - \Delta)^2}{2 V_{CA\&lin}}}$
$C = \sqrt{\frac{V_{CA\&lin}}{2\pi}}\, B + \frac{\alpha - \Delta}{2} + \frac{\alpha - \Delta}{2} A$
For the joint attack of saturation attacks and LO intensity attacks, since saturation attacks do not change the shot noise, the superimposed shot noise is N_0^LOIA. The excess noise introduced on signal pulses under the joint attack of saturation attacks and LO intensity attacks can be divided into three parts: (i) ξ_PIR^SA introduced by the PIR attack of saturation attacks, (ii) ξ_Gau^LOIA introduced by the Gaussian collective attack of LO intensity attacks, and (iii) ξ_tech^LOIA introduced by the system. The total excess noise ξ and the superimposed shot noise N_0^LOIA can be written as
$\xi = \xi_{PIR}^{SA} + \xi_{Gau}^{LOIA} + \xi_{tech}^{LOIA} = \left(2 + \frac{1-k}{kT} + \varepsilon\right) N_0^{LOIA}$
$N_0^{LOIA} = k_2 N_0$
The variance under the joint attack of saturation attacks and LO intensity attacks in the linear region can be expressed as
$V_{LOIA\&lin} = \eta T \left(V_A N_0^{LOIA} + \xi\right) + (1 + v_{el}) N_0^{LOIA}$
Replacing V_lin with V_{LOIA&lin} in V_SA (Equation (A26)), the mean and variance of saturation attacks and LO intensity attacks in the saturation region can be expressed as
$\bar{y}_{SA\&LOIA} = \alpha + C$
$V_{SA\&LOIA} = V_{LOIA\&lin}\left[\frac{1+A}{2} - \frac{B^2}{2\pi}\right] - (\alpha - \Delta)\sqrt{\frac{V_{LOIA\&lin}}{2\pi}}\, A B + \frac{(\alpha - \Delta)^2}{4}\left[1 - A^2\right]$
where
$A = \mathrm{erf}\left(\frac{\alpha - \Delta}{\sqrt{2 V_{LOIA\&lin}}}\right)$
$B = e^{-\frac{(\alpha - \Delta)^2}{2 V_{LOIA\&lin}}}$
$C = \sqrt{\frac{V_{LOIA\&lin}}{2\pi}}\, B + \frac{\alpha - \Delta}{2} + \frac{\alpha - \Delta}{2} A$
For the joint attack of calibration attacks, LO intensity attacks, and saturation attacks, since saturation attacks do not change the shot noise, the superimposed shot noise is N_0^{CA&LOIA}. The excess noise introduced on signal pulses under this joint attack can be divided into four parts: (i) ξ_PIR^CA introduced by the PIR attack of calibration attacks; (ii) ξ_Gau^LOIA introduced by the Gaussian collective attack of LO intensity attacks; (iii) ξ_PIR^SA introduced by the PIR attack of saturation attacks; (iv) ξ_tech^{CA&LOIA} introduced by the system. The total excess noise ξ can be written as
$\xi = \xi_{PIR}^{CA} + \xi_{PIR}^{SA} + \xi_{Gau}^{LOIA} + \xi_{tech}^{CA\&LOIA} = \left(4 + \frac{1-k}{kT} + \varepsilon\right) N_0^{CA\&LOIA}$
Then, the variance under the joint attack of calibration attacks, LO intensity attacks, and saturation attacks in the linear region can be expressed as
$V_{CA\&LOIA\&lin} = \eta T \left(V_A N_0^{CA\&LOIA} + \xi\right) + (1 + v_{el}) N_0^{CA\&LOIA}$
Replacing V_lin with V_{CA&LOIA&lin} in V_SA (Equation (A26)), the mean and variance of calibration attacks, LO intensity attacks, and saturation attacks in the saturation region can be expressed as
$\bar{y}_{CA\&LOIA\&SA} = \alpha + C$
$V_{CA\&LOIA\&SA} = V_{CA\&LOIA\&lin}\left[\frac{1+A}{2} - \frac{B^2}{2\pi}\right] - (\alpha - \Delta)\sqrt{\frac{V_{CA\&LOIA\&lin}}{2\pi}}\, A B + \frac{(\alpha - \Delta)^2}{4}\left[1 - A^2\right]$
where
$A = \mathrm{erf}\left(\frac{\alpha - \Delta}{\sqrt{2 V_{CA\&LOIA\&lin}}}\right)$
$B = e^{-\frac{(\alpha - \Delta)^2}{2 V_{CA\&LOIA\&lin}}}$
$C = \sqrt{\frac{V_{CA\&LOIA\&lin}}{2\pi}}\, B + \frac{\alpha - \Delta}{2} + \frac{\alpha - \Delta}{2} A$

2.2. Feature Extraction

In the GMCS protocol, Alice prepares a series of coherent states X_A + iP_A, where the quadrature values X_A and P_A obey a Gaussian distribution with mean zero and variance V_A. Bob receives a series of time-based measurements X_B = {x_1, x_2, ..., x_n}. Different attacks or unknown threats introduced by Eve can affect multiple optical parameters and thus cause the system's measured values to deviate. These optical parameters are the intensity I_LO of the LO, the shot noise variance N_0, and the mean ȳ and variance V_y measured by Bob.
Table 1 shows the impact of the combined attack strategies on measurable characteristics. The sample features we choose are important parameters both in parameter estimation and in the actual system. As described in [29], learning the variation of these features helps to detect and classify different attacks.
Figure 1 shows a schematic diagram of Bob's detection setup for simultaneously measuring the features of Table 1 and the experimental preparation for acquiring training data. In the experiment, a 1310 nm light source is introduced as the independent system clock. The pulses pass through the quantum channel and reach Bob through a coarse wavelength division multiplexer (CWDM), which separates the signal light source from the clock light source. The separated 1310 nm light source is used as the system clock to realize real-time shot noise variance monitoring. Alice generates 1550 nm coherent light at a repetition rate of 1 MHz from an external telecom laser diode, which serves as the signal light source for Bob. A polarizing beam splitter (PBS) then separates this light into signal pulses and LO pulses. After passing through the PBS, the LO pulses are split by a 90:10 beam splitter BS1: 10% of the LO pulses are used for LO intensity monitoring, and the remaining 90% are split by another 90:10 beam splitter BS2. After BS2, 10% of the LO pulses are used for shot noise monitoring, and the remaining 90% are used for homodyne detection. On the shot noise monitoring path, an attenuator (ATT) is added to vary the LO intensity and obtain the system shot noise variance under different LO intensities, thereby determining the linear relationship between the LO intensity and the shot noise variance. At the same time, a phase modulator (PM) is placed in the LO path for homodyne detection to control the measurement phase and randomly select the measured quadrature in real time. Finally, the measurement results are preprocessed and then fed into the multi-label neural network model for attack detection. The detection results are used to assist the parameter estimation process.
In the post-processing stage, we install additional data preprocessing and attack detection procedures on the original processing module, which provide pre-detection of known and unknown threats before parameter estimation without changing the original parameter estimation and key extraction procedures (more details can be found in Appendix B). Real-time data are collected and processed in a high-repetition-frequency CV-QKD system; the experimental or actual identification process takes less than 0.05 ms on a typical laptop with 16 GB memory, an Intel Core 4.0 GHz CPU, and a GeForce RTX 3070 GPU, which is fully compatible with systems of about 1–100 Mbps.
We perform preprocessing operations on the data under different attacks to make the original data more suitable as neural network input, including vectorization, normalization, sequential processing, and feature extraction. The additional data preprocessing in our model is as follows. Firstly, 500 Gaussian-distributed data points are generated for each attack type to avoid data imbalance, giving a total of 4000 data points for the combined attacks. Then the 4000 data points are converted into feature vectors. Our pulses are input in time series, every 25 pulses form a group, and each pulse in a group is represented by the feature vector (ȳ, V_y, I_LO, N_0). Finally, the attack data transformed into feature vectors are put into our model for training.
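A minimal sketch of this preprocessing step follows; the helper name is ours and the feature values are placeholders, the point being the (groups, 25, 4) layout that is fed to the network.

```python
import numpy as np

def to_feature_tensor(y_mean, y_var, I_LO, N0, pulses_per_group=25):
    """Stack the four per-pulse features and reshape them into (M, 25, 4) groups."""
    features = np.stack([y_mean, y_var, I_LO, N0], axis=-1)  # shape (n_pulses, 4)
    n_groups = features.shape[0] // pulses_per_group
    return features[: n_groups * pulses_per_group].reshape(n_groups, pulses_per_group, 4)

# Example with placeholder per-pulse feature sequences.
n_pulses = 1000
f = [np.random.rand(n_pulses) for _ in range(4)]
print(to_feature_tensor(*f).shape)  # (40, 25, 4)
```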

2.3. Multi-Label Neural Network Attack Detection Model

This section introduces the structure of the neural network in our model and the method to improve the output of the neural network to deal with multi-label attacks.

2.3.1. Multi-Label Neural Network Framework

The data received by Bob are input as time series, so we choose a network from the long short-term memory (LSTM) family. LSTM is well suited to processing and predicting time series owing to its unique design [32]. As a variant of LSTM, the gated recurrent unit (GRU) network has the advantages of low computational complexity, less overfitting, and relatively convenient experimental operation [33], so we use the GRU network in our experiments.
Then we consider the network output for multi-label attacks. Existing research on the detection of quantum attacks produces a single output, that is, only the attack with the highest probability is taken as the detection result; in fact, however, a combination of multiple attacks may occur, so the output should not be a single attack type but a collection of attack labels. Thus, we introduce multi-label classification methods [34,35]. Multi-label classification methods can be divided into two types: problem transformation (PT) methods and algorithm adaptation (AA) methods. The PT strategy converts the multi-label classification problem into single-label classification problems, so that existing single-label learning algorithms can be directly applied to the multi-label problem. The AA strategy modifies current single-label learning algorithms and applies them to multi-label classification tasks. Compared with the AA method, which has high system complexity and is prone to overfitting, the PT method achieves better experimental results with lower system complexity and is easier to study; therefore, the PT method is selected to classify quantum attacks. Considering the characteristics of quantum attacks and the controllability of the generated data sets, the binary relevance (BR) method and the label powerset (LP) method of the PT strategy are used to solve the problem of detecting combinations of multiple quantum attacks.
We choose three known attacks to evaluate our model, namely calibration attacks, LO intensity attacks, and saturation attacks, with the unattacked state as a reference. To describe the model clearly, we denote calibration attacks, LO intensity attacks, and saturation attacks as y_1, y_2, and y_3, respectively. Thus, we obtain Y = {y_1, y_2, y_3} as the label space with three possible class labels. The task of the multi-label neural network is to learn a function h : X → 2^Y from the training data set, where X represents the feature vector space.

2.3.2. The BR-NN Model

The BR-NN model uses multiple binary classifiers to predict each label separately, and the output result is a set of the original labels, which can be expressed as Y = {y_1, y_2, y_3}. In our model, we replace the softmax layer of the neural network with four binary classifiers, that is, four sigmoid layers for output. Each sigmoid layer detects whether the corresponding attack is present, outputting 1 if the attack is detected and 0 otherwise. One of the binary classifiers detects whether the system is attacked at all, outputting 0 if any attack is detected and 1 otherwise. In this way, multiple attacks can be output in parallel, as shown in Figure 2.
Figure 2 shows the structure of the BR-NN model. The input of the BR-NN model is the same as that of the LP-NN model. After passing through the four-layer network, the result is output by the sigmoid functions. The input of the GRU layer is a three-dimensional tensor (4000, 25, 4), and its output is a 64-dimensional vector; the output of the GRU layer is then fed to four sub-models, each of which learns one attack type, with one sub-model learning whether any attack is present. The input of each sub-model is the 64-dimensional vector, and its output is the probability of the corresponding attack. The role of the intermediate dropout layer is to reduce the number of intermediate feature parameters and prevent overfitting. Specifically, three sub-models learn calibration attacks, LO intensity attacks, and saturation attacks, respectively, while the other sub-model learns the unattacked state to judge whether an attack exists, as shown in Table 2. It is worth noting that contradictory outputs such as [1 1 0 0] are left for follow-up work; the occurrence of such a case indicates that the collected data are abnormal, but no such situation has appeared in our experiments so far.
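A minimal Keras sketch of this BR-NN structure is given below as our reconstruction of Figure 2; the hidden sizes, dropout rate, and optimizer settings other than the 0.01 learning rate are assumptions, not values from the paper.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_br_nn(time_steps=25, n_features=4):
    inputs = keras.Input(shape=(time_steps, n_features))
    shared = layers.GRU(64)(inputs)                      # shared 64-dimensional encoding
    outputs = []
    for name in ["no_attack", "CA", "LOIA", "SA"]:       # one binary head per label in Table 2
        x = layers.Dropout(0.3)(shared)                  # dropout rate assumed
        x = layers.Dense(32, activation="relu")(x)       # hidden size assumed
        outputs.append(layers.Dense(1, activation="sigmoid", name=name)(x))
    model = keras.Model(inputs, outputs)
    model.compile(optimizer=keras.optimizers.Adam(0.01),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

build_br_nn().summary()
```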
The attack labels of the BR-NN model only affect each other in the training stage and are independent of each other in the prediction stage. Because the BR labels are detected by learning the features of different labels in the training stage, the labels are related to each other; however, in the prediction stage, the output is multiple binary classifiers, so they are independent of each other.

2.3.3. The LP-NN Model

The LP-NN model regards every combination of different labels in the multi-label training data set as a new label and adds these to the original label set, so that the multi-label classification is transformed into a series of single-label classification problems. The LP model changes the output of the softmax layer of the neural network: the softmax layer expands from the original four outputs to eight outputs, as shown in Figure 3. The original four outputs comprise three types of attacks and one non-attack state. The new label set can be expressed as Y = {y_1, y_2, y_3, y_1+y_2, y_1+y_3, y_2+y_3, y_1+y_2+y_3}.
Figure 3 shows the structure of the LP-NN model. As mentioned in Section 2.2, the input of the LP-NN model is the feature vectors (ȳ, V_y, I_LO, N_0), a matrix of shape (25, 4). After passing through the three-layer network, the result is output by the softmax function. The input layer, GRU layer, and dense layer 1 form the feature extraction part, while dense layer 2 and the softmax layer form the decision part. The input layer takes a two-dimensional vector set (25, 4), where 25 is the time dimension and 4 is the feature dimension; the softmax layer outputs the confidence score of each attack, and the class with the highest probability is taken as the output.
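A minimal Keras sketch of this LP-NN structure is given below as our reconstruction of Figure 3; the hidden sizes and optimizer settings other than the 0.01 learning rate are assumptions. The eight label-powerset classes are no attack, CA, LOIA, SA, CA+LOIA, CA+SA, LOIA+SA, and CA+LOIA+SA.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_lp_nn(time_steps=25, n_features=4, n_classes=8):
    inputs = keras.Input(shape=(time_steps, n_features))
    x = layers.GRU(64)(inputs)                  # feature extraction over the 25-pulse sequence
    x = layers.Dense(32, activation="relu")(x)  # dense layer 1 (size assumed)
    x = layers.Dense(n_classes)(x)              # dense layer 2: one logit per label-powerset class
    outputs = layers.Softmax()(x)               # confidence score of each class
    model = keras.Model(inputs, outputs)
    model.compile(optimizer=keras.optimizers.Adam(0.01),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

build_lp_nn().summary()
```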
Unlike the BR-NN model, the LP-NN model considers the interaction between the attacks in the prediction stage while solving the problem of multiple outputs of the attack. Since the output is the softmax layer, the attacks are related to each other.
The comparison between the BR-NN model and the LP-NN model is as follows. The BR-NN model has fewer parameters because the neurons between sub-models are not linked, so its resource consumption is low and it is less prone to overfitting; however, the LP-NN model has more practical significance, since it considers the interaction between attacks in practice and can fully analyze the impact of each feature on the prediction results, so the performance of the LP-NN model is better than that of the BR-NN model for known attacks.

2.4. Detection of Unknown Attacks

The detection of unknown attacks is essentially an open-set classification problem. That is, the classifier has not learned unknown attacks during training, but when unknown attacks appear in the test data set, the classifier can perceive the unknown attack data and judge them as an unknown type. The most commonly used anomaly detection models are based on content similarity [36], among which the most representative is the one-class SVM [37]. Alternatively, deep learning can be used for anomaly detection, such as confidence estimation through neural networks [38], or autoencoders [39] and generative adversarial networks (GAN) [40] based on reconstruction errors. Reconstruction-error-based methods require a generative model of high complexity to learn good features, owing to the inherent complexity of high-dimensional data. At the same time, the generative model is built on a neural network with many hyperparameters, and the parameter tuning process is time-consuming and labor-intensive. The complexity of such a model is far greater than that of the one-class SVM, not to mention the overfitting problem of generative models.
The method of detecting unknown attacks in this paper is to directly add a one-class SVM anomaly detection module to our model, as shown in Figure 4. The one-class SVM learns a hypersphere in the high-dimensional feature space by modeling all known attack types in the training set, and samples within the hypersphere are considered to belong to known attack types. The classifier then judges whether the data in the test data set belong to a known attack type; if not, they are judged as an unknown attack. The anomaly detection method using the one-class SVM has low model complexity and is easy to implement. In addition, it has good generality, which means that it can be directly grafted onto models for detecting known attacks without parameter tuning.
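A sketch of this module using the scikit-learn one-class SVM follows; the kernel, the nu parameter, and the placeholder feature arrays are our assumptions rather than settings from the paper.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# X_known: flattened feature vectors of all known attack types in the training set.
X_known = np.random.randn(4000, 25 * 4)            # placeholder for the real feature vectors
detector = OneClassSVM(kernel="rbf", nu=0.01).fit(X_known)

# At test time, +1 means "looks like a known attack type"; -1 flags an unknown attack.
X_test = np.random.randn(10, 25 * 4)
is_known = detector.predict(X_test)
print(is_known)
```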
The confidence estimation method could also be used on the LP-NN model to detect unknown attacks, because the LP-NN model learns all attacks and takes the attack class with the highest confidence as the output: when the confidence meets the requirement, the output is a known attack; otherwise, it is an unknown attack. A fixed threshold can be used as the confidence criterion, but finding a suitable threshold requires continuous experimentation, which is time-consuming and labor-intensive. A neural network can also be used to learn an appropriate confidence level, but parameter tuning is still a complex process, and inappropriate parameters reduce the classification accuracy; therefore, the one-class SVM method is used in this paper to detect unknown attacks.

3. Performance

3.1. Implementation Details and Comparison with Existing Scheme

We implemented the data set preparation in MATLAB R2019b, and the training and testing of the attack detection model in PyCharm 2021. In the experiment, the learning rate of the attack model is set to 0.01, and the maximum number of epochs is 400. The data set size for each attack type is N = 1 × 500, and the number of pulses in each block is Q = 25, so the data set for each attack type can be divided into M = 400 feature vectors. To verify the actual performance of the model, we tested the trained model. The validation experiments use nine different sequence groups, including seven known attack sequence groups, one normal sequence group, and one unknown attack sequence group. For known attacks, the attack types of the training and test data sets are consistent; unlike known attacks, however, the unknown attacks in the test data set are not included in the training data set of the model. It is worth noting that our test and training data sets are generated independently of each other. The training phase of our model takes 10–15 min on a typical laptop with 16 GB memory, an Intel Core 4.0 GHz CPU, and a GeForce RTX 3070 GPU. In the prediction phase, we only need a brief preprocessing of the data and can then apply the trained model to any future data set, which takes about 2–5 min on a computer. The trained model is used as an attack detection module before parameter estimation in the post-processing of the system (more details can be found in Appendix C).
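As a rough illustration of this training setup, the sketch below reuses the hypothetical build_lp_nn constructor from the LP-NN sketch in Section 2.3.3 with placeholder arrays in the (samples, 25, 4) feature layout; the batch size is an assumption, and the random arrays stand in for the real attack data sets.

```python
import numpy as np

# Placeholder data in the (samples, 25, 4) layout; labels are LP class indices 0-7.
X_train = np.random.randn(3200, 25, 4); y_train = np.random.randint(0, 8, 3200)
X_test = np.random.randn(800, 25, 4);   y_test = np.random.randint(0, 8, 800)

model = build_lp_nn()                                    # hypothetical constructor from Section 2.3.3
model.fit(X_train, y_train, epochs=400, batch_size=32)   # learning rate 0.01, up to 400 epochs
print(model.evaluate(X_test, y_test))                    # loss and accuracy on the held-out set
```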
In the case of coexisting attacks, a single-attack detection model will theoretically miss detections or misidentify the attack. In addition, experiments verify that the single-attack model misidentifies the attack type, and the experimental results reveal which attack type is dominant when multiple attacks occur at the same time, indicating which attack most requires vigilant defense. Table 3 shows the detection results of combined attacks by different models.
In Table 3, our model correctly detects the simultaneous attacks, while the ANN-based model has the following omissions: (i) for the joint attack of calibration attacks and LO intensity attacks, the detection result is LO intensity attacks, while the calibration attack is ignored; (ii) for the joint attack of saturation attacks and calibration attacks, the detection result is calibration attacks, while the saturation attack is ignored; (iii) for the joint attack of saturation attacks and LO intensity attacks, the detection result is LO intensity attacks, while the saturation attack is ignored; (iv) for the joint attack of calibration attacks, LO intensity attacks, and saturation attacks, the detection result is LO intensity attacks, while the saturation attack and the calibration attack are ignored. Moreover, result (iv) summarizes the misdetection behavior of the ANN-based model: when calibration attacks or LO intensity attacks occur simultaneously with saturation attacks, saturation attacks are not dominant; when calibration attacks and LO intensity attacks occur simultaneously, LO intensity attacks dominate. It is worth noting that when calibration attacks or LO intensity attacks occur simultaneously with saturation attacks, the calibration attacks or LO intensity attacks must be carried out first, because triggering saturation attacks first would drive the detector directly into the saturation region, so calibration attacks or LO intensity attacks, which work in the linear region, could not be triggered.

3.2. Multi-Attack Detection Performance of the Model

We use accuracy, precision, and recall as metrics to evaluate the performance of our model during the training phase.
Figure 5a shows the performance of the LP-NN model. We can conclude that the accuracy of the LP-NN model grows between 0 and 20 training epochs, then fluctuates between 0.8 and 1 from 20 to 60 training epochs. After 60 training epochs, the metrics of the LP-NN model become stable and reach 1. Figure 5b–d show the performance of each sub-model in the BR-NN model. We can conclude that the accuracy of the BR-NN model grows between 0 and 35 training epochs; after 35 training epochs, the metrics of the BR-NN model become stable and reach 1. This shows that both the BR-NN model and the LP-NN model can detect the known combined attacks with 100% accuracy, but the training time of the LP-NN model is twice that of the BR-NN model, because the LP model has more parameters. With the same accuracy, the LP-NN model is more realistic because it considers the mutual influence between attacks, so the LP-NN model is recommended for detecting multiple known attacks. In our experiments, the confusion matrix is used to show the predictions of our model. As can be seen in Figure 6, the data fall exactly on the diagonal of the confusion matrix, so the BR-NN model and the LP-NN model obtain completely correct classification results for known attacks.
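For reference, the sketch below computes the same accuracy, precision, and recall metrics and a confusion matrix with scikit-learn on placeholder predictions; in real runs, y_true and y_pred would come from the test set and the trained model.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, confusion_matrix

y_true = np.random.randint(0, 8, 800)   # placeholder ground-truth class indices
y_pred = y_true.copy()                  # a perfect classifier puts all mass on the diagonal

print(accuracy_score(y_true, y_pred))
print(precision_score(y_true, y_pred, average="macro"))
print(recall_score(y_true, y_pred, average="macro"))
print(confusion_matrix(y_true, y_pred))  # diagonal matrix for 100% correct predictions
```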

3.3. Multi-Attack Detection for Unknown Attacks

The above experiments all concern known attacks. As mentioned in Section 2.4, the one-class SVM can be utilized to detect unknown attacks. We treat hybrid attack 1 and hybrid attack 2 as two types of unknown attacks, generating 500 data points for each, the same amount as for each known attack above, and then put the test data set containing the unknown attacks into the one-class SVM classifier for anomaly detection. The unknown attacks mentioned here are attacks other than the known attacks, which may be a single unknown attack or a combination of unknown attacks. In addition, the detection of unknown attacks is carried out independently, and combinations with known attacks are not considered.
Figure 7 shows the predicted results of our model for unknown attacks after adding the one-class SVM. It can be concluded that the improved model can perfectly detect unknown attacks while hardly affecting the prediction accuracy of known attacks. Based on further analysis of Figure 7, we can conclude that the BR-NN model is more suitable for detecting unknown attacks since the BR-NN model has less impact on the detection of known attacks.

3.4. Security Analysis

Compared with the method of measuring shot noise by blocking the signal light in [29], we add another detector to measure the shot noise without influencing signal pulses; therefore, our scheme does not suffer from the reduction in maximum security distance and key rate due to the decrease in the number of signal pulses available to extract the key as mentioned in [17]. Because the monitoring method in our experiment has no effect on the key rate and the communication of the GMCS CVQKD systems, we only need to consider the security of the quantum attack detection model.
In general, the following four parts of a detection model can be attacked: the input, the data set, the model, and the output. In our system, the model input and output depend on the system itself; therefore, we mainly analyze attacks on the data set and on the model, and then give corresponding countermeasures. According to the camouflage method, attacks on data sets can be divided into two types: poisoning attacks [41,42] and data preprocessing attacks [43]. In poisoning attacks, the attacker injects poisoned samples into the training data set, thereby corrupting the training of the model; the introduction of poisoned data makes model training meaningless. In data preprocessing attacks, the attacker changes the data preprocessing rules so that the model detects anomalies that the defender then mistakes for a system change rather than an attack. Next, we consider attacks on the model, of which the most common is the adversarial example attack [44,45]. In adversarial example attacks, attackers add tiny and imperceptible noise perturbations to the original samples, thereby causing the model to misclassify, i.e., to give incorrect outputs with high confidence.
Several attacks against the neural network detection model have been discussed above; we now present defenses against these attacks. For poisoning attacks, data cleaning [46] and robust machine learning, including pruning defense and fine-tuning methods, can be employed [47]. For data preprocessing attacks, an autoencoder can be placed between the input data and the neural network as an input preprocessor, so that the input and output of the preprocessor have the same size and the output of the preprocessor is the input of the neural network. For defense against adversarial example attacks, adversarial training [44,48], gradient masking [49], adversarial example detection [50], and formal verification of model robustness can be implemented [51,52,53]. Adversarial training is a simple way to defend against adversarial examples, because the robustness of the model can be enhanced simply by adding some adversarial examples to the training set and retraining the model; however, adversarial training is only effective against adversarial samples generated by known methods, so it is difficult to defend against black-box attacks. For the black-box model, generating adversarial examples with adversarial networks is an effective defense [54].

4. Conclusions

We propose two deep learning models for detecting known attacks, which can detect the coexistence of multiple attacks and achieve real-time monitoring of CVQKD systems. In addition, to detect unknown attacks, we incorporate the one-class SVM into our model. Experiments show that our models can identify known attacks and unknown attacks with high accuracy. Given that the shot noise measurement in our experiment does not affect the key transmission, the defense of the detection models themselves needs to be studied further.

Author Contributions

Conceptualization, H.D.; methodology, H.D.; resources, D.H.; data curation, H.D.; writing—original draft preparation, H.D.; software, H.D.; writing—review and editing, D.H.; supervision, D.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. The Mean and Variance under the Unattacked State

From the perspective of the excess noise, the variance formula under the unattacked state can be described as follows. Alice generates random pulses with a Gaussian distribution of mean zero and variance V_A, but the preparation of the Gaussian pulses introduces vacuum noise with variance N_0, and the system has inherent noise called the technical excess noise ξ_tech. The above three variances are transmitted to Bob through the channel, so the channel transmission T and the detection efficiency η of the detector should be considered; the variance after considering T and η is ηT(V_A + N_0 + ξ_tech). In addition, the detection at Bob's side also introduces excess noise: one part is the vacuum noise with variance (1 − ηT)N_0 introduced by the homodyne detection, and the other part is the inherent electronic noise of the detector V_el. It is worth noting that the vacuum noise introduced by homodyne detection, (1 − ηT)N_0, and the vacuum noise induced by Gaussian pulse preparation after channel transmission, ηT N_0, add up to the vacuum noise N_0. Therefore, the variance measured by Bob can be expressed as Equation (2).
From the perspective of the linear Gaussian model, that is, of the quadratures, the variance formula under no attack can be described as follows. Alice prepares a series of coherent states X_A + iP_A, where X_A is the canonical amplitude and P_A is the canonical phase. Since the two quadrature values X_A and P_A are symmetric, we only consider X_A, and the quadrature value received by Bob can be expressed as
$X_B = \sqrt{\eta T}\,(X + X_{tech}) + \sqrt{1 - \eta T}\, X_V' + X_{el}, \quad X = X_A + X_V$
where X_A is a Gaussian random variable centered on zero with variance V_A, X_V is the vacuum noise introduced in the preparation process, and X_tech is the technical noise of the system, centered on zero with variance ξ_tech. The terms √(1 − ηT) X_V' and X_el are the excess noise introduced by the detection at Bob's end, where √(1 − ηT) X_V' is the vacuum noise introduced by the homodyne detection and X_el is the inherent electronic noise of the detector. In addition, both X_V and X_V' are centered on zero with variance N_0; therefore, the mean and variance measured by Bob can be expressed as
$\langle X_B \rangle = 0$
$V_B = \langle X_B^2 \rangle - \langle X_B \rangle^2 = \eta T (V_A + N_0 + \xi_{tech}) + (1 - \eta T) N_0 + V_{el} = \eta T (V_A + \xi_{tech}) + N_0 + V_{el}$
By analyzing the mean and variance under the unattacked state from the perspectives of quadrature values and excess noise, we have a more thorough understanding of calibration attacks, LO intensity attacks, and saturation attacks.

Appendix B. Traditional Attack Detection

Traditional attack detection judges whether the system is attacked by estimating whether the parameters, namely the channel transmission T and the excess noise ξ, exceed certain thresholds. Let the data sent by Alice be x̂ and the measurement result of Bob be ŷ; the relationship between the measurement results and the parameters is as follows
$\langle \hat{x} \rangle = 0, \quad \langle \hat{x}^2 \rangle = V_A N_0, \quad \langle \hat{y}^2 \rangle = \eta T \left(V_A N_0 + \xi\right) + N_0 + V_{el}, \quad \langle \hat{x}\hat{y} \rangle = \sqrt{\eta T}\, V_A N_0$
Among these parameters, only η and V_el can be determined before the QKD run; the others need to be obtained from experimental data and estimation. The excess noise can be estimated as
$\xi = \frac{1}{\eta T}\left(\langle \hat{y}^2 \rangle - N_0 - V_{el}\right) - \langle \hat{x}^2 \rangle$
where ⟨ŷ²⟩ and ⟨x̂²⟩ are experimental data, so they are fixed, and V_el is pre-determined, so it is also fixed; therefore, N_0 and ηT determine the estimated excess noise.
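A small numeric sketch of this estimator follows, using the realistic parameter values from Appendix C (the function name is ours, and the channel transmission is rounded from 10^(-0.6) ≈ 0.251 to 0.25 for readability).

```python
def estimate_excess_noise(y2, x2, eta=0.6, T=0.25, N0=0.4, V_el=0.004):
    """xi = (<y^2> - N0 - V_el) / (eta*T) - <x^2>, all in the same units as N0."""
    return (y2 - N0 - V_el) / (eta * T) - x2

# Unattacked example: <x^2> = V_A*N0 and <y^2> = eta*T*(V_A*N0 + 0.1*N0) + N0 + V_el.
x2 = 10 * 0.4
y2 = 0.6 * 0.25 * (x2 + 0.1 * 0.4) + 0.4 + 0.004
print(estimate_excess_noise(y2, x2))  # ~0.04 = 0.1*N0, the technical excess noise
```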

Appendix C. Implementation Details and Attack Description

In a safe CVQKD system, the system parameters are in a dynamically stable state. According to the standard realistic assumptions [17], we set the parameters of the GMCS CVQKD system as: V_A = 10, η = 0.6, ξ = 0.1 N_0, V_el = 0.01 N_0, and T = 10^(−αL/10), where the transmission distance L is 30 km and the loss coefficient of the fiber α is 0.2 dB/km. The LO intensity I_LO on Bob's side is set to 10^7 photons with a fluctuation of 1%. According to the calibrated linear relationship in [18], the shot noise variance N_0 is 0.4.
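For reproducibility, the sketch below simply transcribes this parameter configuration into Python (the dictionary layout and names are ours).

```python
L, alpha_fiber = 30, 0.2               # fibre length (km) and loss coefficient (dB/km)
T = 10 ** (-alpha_fiber * L / 10)      # channel transmission, ~0.251
N0 = 0.4                               # calibrated shot-noise variance

params = dict(V_A=10, eta=0.6, xi=0.1 * N0, V_el=0.01 * N0, T=T,
              I_LO=1e7, LO_fluctuation=0.01)
print(params)
```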
Our setup considers attacks on the receiver of a real QKD system, and the parameters are set as follows. In practical experiments, the full secret key can be exploited by Eve at a transmission distance of 30 km with an LO fluctuation rate of 0.05 [17]. The value of the attenuation coefficient k can be set between 0.95 and 1; to simplify the calculation and maximize Eve's information, we set k = 0.95. The linear range of the HD is [−α, α], and we set α = 20√N_0. For a better attack effect [19], Δ is set to 19.5√N_0 so that the value is close to the upper limit α.
In addition, we briefly describe the principles of the attack strategies used in the experiments, including the LO intensity attack, the calibration attack, and the saturation attack.
In the calibration attack, a partial intercept-resend (PIR) attack is performed on a fraction μ (0 ≤ μ ≤ 1) of the signal pulses to steal key information. By delaying the trigger signal of the LO pulses reaching the detector, the linear relationship between the LO intensity and the HD output variance (i.e., the shot noise) is changed, so that the attacked shot noise N_0^CA is lower than the normal shot noise N_0.
In calibration attacks, the excess noise on signal pulses consists of the excess noise introduced by the PIR attack, ξ_PIR^CA, and the technical excess noise of the system, ξ_tech^CA, while the excess noise under the unattacked state is only the technical excess noise of the system ξ_tech. They can be expressed as
$\xi_{CA} = \xi_{tech}^{CA} + \xi_{PIR}^{CA}, \quad \xi_{tech}^{CA} = \varepsilon N_0^{CA}$
$\xi_{tech} = \varepsilon N_0$
where, for μ = 1,
$\xi_{PIR}^{CA} = 2 N_0^{CA}$
According to Equation (A5), the excess noise between calibration attacks and the unattacked state has the following relationship
$\xi_{CA} - \xi_{tech} = \frac{N_0 - N_0^{CA}}{\eta T}$
In calibration attacks, the reduction in excess noise is ξ_CA − ξ_tech, while the PIR attack introduces the excess noise ξ_PIR^CA and the system introduces the technical excess noise ξ_tech^CA. When the sum of the above three equals the unattacked excess noise ξ_tech, the PIR attack on the signal pulses is hidden. Substituting Equations (A6) and (A7) into Equation (A9), the ratio N_0^CA / N_0 can be expressed as
$\frac{N_0^{CA}}{N_0} = \frac{\eta\varepsilon T + 1}{2\eta T + \eta\varepsilon T + 1} = k_1$
Neglecting the factor ηεT in the numerator makes N_0^CA even smaller, so the reduction in excess noise is larger, which can certainly hide the attack; therefore, with ε = 0.1, the ratio N_0^CA / N_0 can also be expressed as
$\frac{N_0^{CA}}{N_0} = \frac{1}{2.1\,\eta T + 1}$
Therefore, the mean and variance of Bob’s measurement results under calibration attacks can be expressed as
$\bar{y}_{CA} = 0$
$V_{CA} = \eta T \left(V_A N_0^{CA} + \xi_{tech}^{CA} + \xi_{PIR}^{CA}\right) + N_0^{CA} + v_{el} N_0^{CA}$
In the LO intensity attack, the attack on the signal pulses is a Gaussian collective attack. The shot noise is changed by attenuating the intensity of the LO pulses by a factor k (0 < k < 1), so that the attacked shot noise N_0^LOIA is lower than the normal shot noise N_0.
In LO intensity attacks, the excess noise on signal pulses consists of the excess noise introduced by the Gaussian collective attack, ξ_Gau^LOIA, and the technical excess noise of the system, ξ_tech^LOIA, while the excess noise under the unattacked state is only the technical excess noise of the system ξ_tech. They can be expressed as
$\xi_{LOIA} = \xi_{tech}^{LOIA} + \xi_{Gau}^{LOIA}, \quad \xi_{tech}^{LOIA} = \varepsilon N_0^{LOIA}$
$\xi_{tech} = \varepsilon N_0$
According to the description in [18], the excess noise introduced by the Gaussian collective attack is expressed as
$\xi_{Gau} = \frac{(1 - \eta T)(N - 1)}{\eta T} N_0, \quad N = \frac{1 - k\eta T}{k(1 - \eta T)}$
Substituting N into ξ_Gau, we obtain
$\xi_{Gau}^{LOIA} = \frac{1 - k}{kT} N_0^{LOIA}$
According to Equation (A5), the excess noise between LO intensity attacks and the unattacked state has the following relationship
$\xi_{LOIA} - \xi_{tech} = \frac{N_0 - N_0^{LOIA}}{\eta T}$
Similarly to calibration attacks, substituting Equations (A14) and (A15) into Equation (A18), the ratio N_0^LOIA / N_0 can be expressed as
$\frac{N_0^{LOIA}}{N_0} = \frac{\varepsilon\eta k T + k}{1 + \varepsilon\eta k T} = k_2$
Neglecting the factor εηkT, the ratio N_0^LOIA / N_0 becomes smaller, so the reduction in excess noise is larger, which can certainly hide the attack; therefore, the ratio N_0^LOIA / N_0 can also be expressed as
$\frac{N_0^{LOIA}}{N_0} = k$
Therefore, the mean and variance of Bob’s measurement results under LO intensity attacks can be expressed as
$\bar{y}_{LOIA} = 0$
$V_{LOIA} = \eta T \left(V_A N_0^{LOIA} + \xi_{tech}^{LOIA} + \xi_{Gau}^{LOIA}\right) + N_0^{LOIA} + v_{el} N_0^{LOIA}$
In the saturation attack, when the detector is in the linear region, the excess noise on the signal pulses is the excess noise introduced by a full PIR attack, ξ_lin (μ = 1); after a proper displacement Δx is added to the quadrature X_A, the detector enters the saturation region. The inherent defects of the saturation region of the detector can thus be used to reduce the excess noise: the linear region is assumed infinite in theoretical analysis, but in practice it is limited. The variance of Bob's measurement results in the linear region can be expressed as
$V_{lin} = \eta T \left(V_A N_0 + \xi_{tech} + \xi_{lin}\right) + N_0 + v_{el} N_0$
where
$\xi_{lin} = 2 N_0$
The saturation attack adds a proper displacement Δx to the quadrature X_A, which makes the detector abnormally operate in the saturation region instead of the linear region used under normal conditions; that is, the output variance of the homodyne detector (HD) becomes a constant value rather than varying linearly with the LO pulse power. An HD working in the saturation region decreases the excess noise introduced by the full PIR attack until it equals zero, i.e., ξ_PIR^sat = 0; therefore, saturation attacks hide the full PIR attack on the signal pulses. It is worth noting that saturation attacks do not attack the LO pulses, so the shot noise is unchanged, i.e., N_0^sat = N_0. According to the description in [19], the mean and variance of Bob's measurement results in the saturation region can be expressed as
$\bar{y}_{SA} = \alpha + C$
$V_{SA} = V_{lin}\left[\frac{1+A}{2} - \frac{B^2}{2\pi}\right] - (\alpha - \Delta)\sqrt{\frac{V_{lin}}{2\pi}}\, A B + \frac{(\alpha - \Delta)^2}{4}\left[1 - A^2\right]$
where
$A = \mathrm{erf}\left(\frac{\alpha - \Delta}{\sqrt{2 V_{lin}}}\right)$
$B = e^{-\frac{(\alpha - \Delta)^2}{2 V_{lin}}}$
$C = \sqrt{\frac{V_{lin}}{2\pi}}\, B + \frac{\alpha - \Delta}{2} + \frac{\alpha - \Delta}{2} A$
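To make these expressions concrete, the sketch below evaluates A, B, C, and the saturated variance numerically with SciPy; the helper name, the linear-region variance V_lin = 2.0, and the exact α and Δ values (taken as 20√N_0 and 19.5√N_0 following Appendix C) are illustrative assumptions.

```python
import numpy as np
from scipy.special import erf

def saturation_stats(V_lin, alpha, delta):
    """Return (A, B, C, V_SA) for a homodyne detector saturating at +alpha."""
    a = alpha - delta
    A = erf(a / np.sqrt(2 * V_lin))
    B = np.exp(-a**2 / (2 * V_lin))
    C = np.sqrt(V_lin / (2 * np.pi)) * B + a / 2 + (a / 2) * A
    V_SA = (V_lin * ((1 + A) / 2 - B**2 / (2 * np.pi))
            - a * np.sqrt(V_lin / (2 * np.pi)) * A * B
            + (a**2 / 4) * (1 - A**2))
    return A, B, C, V_SA

N0 = 0.4
A, B, C, V_SA = saturation_stats(V_lin=2.0, alpha=20 * np.sqrt(N0), delta=19.5 * np.sqrt(N0))
print(C, V_SA)  # the saturated variance V_SA is well below the linear-region V_lin
```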

References

  1. Scarani, V.; Bechmann-Pasquinucci, H.; Cerf, N.J.; Dusek, M.; Lutkenhaus, N.; Peev, M. The security of practical quantum key distribution. Rev. Mod. Phys. 2009, 81, 1301–1350. [Google Scholar] [CrossRef] [Green Version]
  2. Gisin, N.; Ribordy, G.; Tittel, W.; Zbinden, H. Quantum cryptography. Rev. Mod. Phys. 2002, 74, 145. [Google Scholar] [CrossRef] [Green Version]
  3. Weedbrook, C.; Pirandola, S.; García-Patrón, R.; Cerf, N.; Ralph, T.; Shapiro, J.; Lloyd, S. Gaussian quantum information. Rev. Mod. Phys. 2011, 84, 621–669. [Google Scholar] [CrossRef]
  4. Shen, Y.; Xiang, P.; Yang, J.; Guo, H. Continuous-variable quantum key distribution with Gaussian source noise. Phys. Rev. A 2011, 83, 10017–10028. [Google Scholar] [CrossRef] [Green Version]
  5. Chi, Y.M.; Qi, B.; Zhu, W.; Qian, L.; Lo, H.K.; Youn, S.H.; Lvovsky, A.I.; Tian, L. A balanced homodyne detector for high-rate Gaussian-modulated coherent-state quantum key distribution. New J. Phys. 2010, 13, 87–92. [Google Scholar] [CrossRef] [Green Version]
  6. Huang, D.; Fang, J.; Wang, C.; Huang, P.; Zeng, G.H. A 300-MHz bandwidth balanced homodyne detector for continuous variable quantum key distribution. Chin. Phys. Lett. 2013, 30, 114209. [Google Scholar] [CrossRef]
  7. Lodewyck, J.; Debuisschert, T.; Tualle-Brouri, R.; Grangier, P. Controlling excess noise in fiber optics continuous variables quantum key distribution. Phys. Rev. A 2005, 72, 762–776. [Google Scholar] [CrossRef] [Green Version]
  8. Bennett, C.H.; Brassard, G. Quantum Cryptography: Public Key Distribution and Coin Tossing. Theor. Comput. Sci. 2014, 560, 7–11. [Google Scholar] [CrossRef]
  9. Ekert, A.K. Quantum cryptography based on Bell’s theorem. Phys. Rev. Lett. 1991, 67, 661. [Google Scholar] [CrossRef] [Green Version]
  10. Bennett, C.H. Quantum cryptography using any two nonorthogonal states. Phys. Rev. Lett. 1992, 68, 3121–3124. [Google Scholar] [CrossRef]
  11. Grosshans, F.; Grangier, P. Continuous Variable Quantum Cryptography Using Coherent States. Phys. Rev. Lett. 2002, 88, 057902. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Leverrier, A.; Grangier, P. Unconditional security proof of long-distance continuous-variable quantum key distribution with discrete modulation. Phys. Rev. Lett. 2009, 102, 180504. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Leverrier, A.; Grangier, P. Continuous-variable quantum-key-distribution protocols with a non-Gaussian modulation. Phys. Rev. A 2011, 83, 42312. [Google Scholar] [CrossRef] [Green Version]
  14. Huang, D.; Huang, P.; Lin, D.; Zeng, G. Long-distance continuous-variable quantum key distribution by controlling excess noise. Sci. Rep. 2016, 6, 19201. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Grosshans, F.; Van Assche, G.; Wenger, J.; Brouri, R.; Cerf, N.J.; Grangier, P. Quantum key distribution using gaussian-modulated coherent states. Nature 2003, 421, 238–241. [Google Scholar] [CrossRef] [Green Version]
  16. Leverrier, A. Composable Security Proof for Continuous-Variable Quantum Key Distribution with Coherent States. Phys. Rev. Lett. 2015, 114, 070501. [Google Scholar] [CrossRef] [Green Version]
  17. Jouguet, P.; Kunz-Jacques, S.; Diamanti, E. Preventing calibration attacks on the local oscillator in continuous-variable quantum key distribution. Phys. Rev. A 2013, 87, 062313. [Google Scholar] [CrossRef] [Green Version]
  18. Ma, X.C.; Sun, S.H.; Jiang, M.S.; Liang, L.M. Local oscillator fluctuation opens a loophole for Eve in practical continuous-variable quantum-key-distribution systems. Phys. Rev. A 2013, 88, 022339.
  19. Qin, H.; Kumar, R.; Alleaume, R. Quantum hacking: Saturation attack on practical continuous-variable quantum key distribution. Phys. Rev. A 2016, 94, 012325.
  20. Qin, H.; Kumar, R.; Makarov, V.; Alleaume, R. Homodyne-detector-blinding attack in continuous-variable quantum key distribution. Phys. Rev. A 2018, 98, 012312.
  21. Ma, X.C.; Sun, S.H.; Jiang, M.S.; Liang, L.M. Wavelength attack on practical continuous-variable quantum-key-distribution system with a heterodyne protocol. Phys. Rev. A 2013, 87, 052309.
  22. Huang, J.; Weedbrook, C.; Yin, Z.; Wang, S.; Li, H.; Chen, W.; Guo, G.; Han, Z. Quantum hacking of a continuous-variable quantum-key-distribution system using a wavelength attack. Phys. Rev. A 2013, 87, 062329.
  23. He, Z.; Wang, Y.; Huang, D. Wavelength attack recognition based on machine learning optical spectrum analysis for the practical continuous-variable quantum key distribution system. J. Opt. Soc. Am. B 2020, 37, 1689–1697.
  24. Liu, W.; Peng, J.; Huang, P.; Huang, D.; Zeng, G. Monitoring of continuous-variable quantum key distribution system in real environment. Opt. Express 2017, 25, 19429–19443.
  25. Laaksonen, J.; Oja, E. Classification with learning k-nearest neighbors. In Proceedings of the International Conference on Neural Networks (ICNN’96), Washington, DC, USA, 3–6 June 1996; Volume 3, pp. 1480–1483.
  26. Wu, Q.; Zhou, D.X. Analysis of support vector machine classification. J. Comput. Anal. Appl. 2006, 8, 99–119.
  27. Sagi, O.; Rokach, L. Ensemble learning: A survey. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2018, 8, e1249.
  28. Mao, Y.; Wang, Y.; Huang, W.; Qin, H.; Huang, D.; Guo, Y. Hidden-Markov-model-based calibration-attack recognition for continuous-variable quantum key distribution. Phys. Rev. A 2020, 101, 062320.
  29. Mao, Y.; Huang, W.; Zhong, H.; Wang, Y.; Qin, H.; Guo, Y.; Huang, D. Detecting quantum attacks: A machine learning based defense strategy for practical continuous-variable quantum key distribution. New J. Phys. 2020, 22, 083073.
  30. Huang, D.; Liu, S.; Zhang, L. Secure Continuous-Variable Quantum Key Distribution with Machine Learning. Photonics 2021, 8, 511.
  31. Huang, J.Z.; Kunz-Jacques, S.; Jouguet, P.; Weedbrook, C.; Yin, Z.Q.; Wang, S.; Chen, W.; Guo, G.C.; Han, Z.F. Quantum Hacking on Quantum Key Distribution using Homodyne Detection. Phys. Rev. A 2014, 89, 4–16.
  32. Zheng, Z.; Chen, W.; Wu, X.; Chen, P.; Liu, J. LSTM network: A deep learning approach for short-term traffic forecast. IET Intell. Transp. Syst. 2017, 11, 68–75.
  33. Dey, R.; Salem, F.M. Gate-variants of gated recurrent unit (GRU) neural networks. In Proceedings of the 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS), Boston, MA, USA, 6–9 August 2017; pp. 1597–1600.
  34. Zhang, M.L.; Zhou, Z.H. A Review on Multi-Label Learning Algorithms. IEEE Trans. Knowl. Data Eng. 2014, 26, 1819–1837.
  35. Gibaja, E.; Ventura, S. A Tutorial on Multilabel Learning. ACM Comput. Surv. 2015, 47, 1–38.
  36. AlEroud, A.; Karabatis, G. Detecting Unknown Attacks Using Context Similarity. In Information Fusion for Cyber-Security Analytics; Springer: Berlin, Germany, 2017; pp. 53–75.
  37. Song, J.; Takakura, H.; Okabe, Y.; Kwon, Y. Unsupervised anomaly detection based on clustering and multiple one-class SVM. IEICE Trans. Commun. 2009, 92, 1981–1990.
  38. Hendrycks, D.; Gimpel, K. A baseline for detecting misclassified and out-of-distribution examples in neural networks. arXiv 2016, arXiv:1610.02136.
  39. Lange, S.; Riedmiller, M. Deep auto-encoder neural networks in reinforcement learning. In Proceedings of the 2010 International Joint Conference on Neural Networks (IJCNN), Barcelona, Spain, 18–23 July 2010; pp. 1–8.
  40. Ravanbakhsh, M.; Nabi, M.; Sangineto, E.; Marcenaro, L.; Regazzoni, C.; Sebe, N. Abnormal event detection in videos using generative adversarial nets. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 1577–1581.
  41. Steinhardt, J.; Koh, P.W.; Liang, P. Certified defenses for data poisoning attacks. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Monticello, IL, USA, 22–24 September 2016; pp. 341–346.
  42. Muñoz-González, L.; Biggio, B.; Demontis, A.; Paudice, A.; Wongrassamee, V.; Lupu, E.C.; Roli, F. Towards poisoning of deep learning algorithms with back-gradient optimization. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, Dallas, TX, USA, 3 November 2017; pp. 27–38.
  43. Xiao, Q.; Chen, Y.; Shen, C.; Chen, Y.; Li, K. Seeing is not believing: Camouflage attacks on image scaling algorithms. In Proceedings of the 28th USENIX Security Symposium (USENIX Security 19), Santa Clara, CA, USA, 14–16 August 2019; pp. 443–460.
  44. Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and harnessing adversarial examples. arXiv 2014, arXiv:1412.6572.
  45. Papernot, N.; McDaniel, P.; Jha, S.; Fredrikson, M.; Celik, Z.B.; Swami, A. The limitations of deep learning in adversarial settings. In Proceedings of the 2016 IEEE European Symposium on Security and Privacy (EuroS&P), Saarbruecken, Germany, 21–24 March 2016; pp. 372–387.
  46. Cretu, G.F.; Stavrou, A.; Locasto, M.E.; Stolfo, S.J.; Keromytis, A.D. Casting out demons: Sanitizing training data for anomaly sensors. In Proceedings of the 2008 IEEE Symposium on Security and Privacy (SP 2008), Oakland, CA, USA, 18–22 May 2008; pp. 81–95.
  47. Jagielski, M.; Oprea, A.; Biggio, B.; Liu, C.; Nita-Rotaru, C.; Li, B. Manipulating machine learning: Poisoning attacks and countermeasures for regression learning. In Proceedings of the 2018 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 20–24 May 2018; pp. 19–35.
  48. Kurakin, A.; Goodfellow, I.; Bengio, S. Adversarial machine learning at scale. arXiv 2016, arXiv:1611.01236.
  49. Papernot, N.; McDaniel, P.; Sinha, A.; Wellman, M.P. SoK: Security and privacy in machine learning. In Proceedings of the 2018 IEEE European Symposium on Security and Privacy (EuroS&P), London, UK, 24–26 April 2018; pp. 399–414.
  50. Xu, W.; Evans, D.; Qi, Y. Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv 2017, arXiv:1704.01155.
  51. Lecuyer, M.; Atlidakis, V.; Geambasu, R.; Hsu, D.; Jana, S. Certified robustness to adversarial examples with differential privacy. In Proceedings of the 2019 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 19–23 May 2019; pp. 656–672.
  52. Raghunathan, A.; Steinhardt, J.; Liang, P. Certified defenses against adversarial examples. arXiv 2018, arXiv:1801.09344.
  53. Wong, E.; Kolter, Z. Provable defenses against adversarial examples via the convex outer adversarial polytope. In Proceedings of the International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 5286–5295.
  54. Xiao, C.; Li, B.; Zhu, J.Y.; He, W.; Liu, M.; Song, D. Generating adversarial examples with adversarial networks. arXiv 2018, arXiv:1801.02610.
Figure 1. Schematic diagram of Bob’s detection setup for the feature measurements of Table 1 and the experimental preparation for acquiring training data. CWDM: coarse wavelength-division multiplexing. PBS: polarization beam splitter. ATT: variable optical attenuator. BS: beam splitter. PM: phase modulator. FM: Faraday mirror. PIN: PIN photodiode. HD: homodyne detector. P-METER: power meter used to monitor the LO intensity.
Figure 2. The BR-NN model.
Figure 3. The LP-NN model.
Figure 4. The detection model for unknown attacks.
Figure 5. (a) The performance of the LP-NN model. (b) The accuracy of the BR-NN model. (c) The precision of the BR-NN model. (d) The recall of the BR-NN model. UA: the unattacked state. CA: calibration attacks. LOIA: LO intensity attacks. SA: saturation attacks.
Figure 6. (a) The prediction of the LP-NN model. (b) The prediction of the BR-NN model. C_LO: calibration attacks and LO intensity attacks. C_S: calibration attacks and saturation attacks. LO_S: LO intensity attacks and saturation attacks. C_LO_S: calibration attacks, LO intensity attacks, and saturation attacks.
Figure 7. (a) The prediction of the BR-NN model for unknown attacks. (b) The prediction of the LP-NN model for unknown attacks. UA: the unattacked state. CA: calibration attacks. LOIA: LO intensity attacks. SA: saturation attacks. C_LO: calibration attacks and LO intensity attacks. C_S: calibration attacks and saturation attacks. LO_S: LO intensity attacks and saturation attacks. C_LO_S: calibration attacks, LO intensity attacks, and saturation attacks. UK: unknown attacks.
Table 1. Impacts of combined attack strategies on measurable features. The symbol ‘✓’ indicates that the feature is changed by the corresponding attack; the symbol ‘-’ indicates that the feature is unchanged. k, k₁, and k₂ denote the proportional scaling of the corresponding feature relative to its unattacked value.
Feature        ȳ    V_y    I_LO    N₀
CA             -    ✓      -       k₂
LOIA           -    ✓      k       k₂
SA             ✓    ✓      -       -
CA&LOIA        -    ✓      k       k₁k₂
CA&SA          ✓    ✓      -       k₁
LOIA&SA        ✓    ✓      k       k₂
CA&LOIA&SA     ✓    ✓      k       k₁k₂
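To make the link between the monitored features of Table 1 and the two multi-label encodings concrete, the minimal Python sketch below (our own illustration, not code from the paper; the feature values and the helper names encode_br and encode_lp are assumptions) pairs a four-dimensional sample (ȳ, V_y, I_LO, N₀) with a binary-relevance (BR) target vector and a label-powerset (LP) class index for the three known attacks.

```python
import numpy as np

# Order of the three known attacks used for both encodings
ATTACKS = ["CA", "LOIA", "SA"]

def encode_br(active):
    """Binary relevance (BR): one independent 0/1 indicator per attack."""
    return np.array([1.0 if a in active else 0.0 for a in ATTACKS], dtype=np.float32)

def encode_lp(active):
    """Label powerset (LP): one class index per attack combination (2**3 = 8 classes)."""
    index = 0
    for i, a in enumerate(ATTACKS):
        if a in active:
            index |= 1 << i
    return index  # 0 = unattacked state, ..., 7 = CA&LOIA&SA

# Hypothetical monitored feature sample (mean, variance, LO intensity, shot noise)
x = np.array([0.02, 1.35, 0.71, 0.88], dtype=np.float32)

active_attacks = {"CA", "LOIA"}      # example: calibration + LO intensity attack
y_br = encode_br(active_attacks)     # -> array([1., 1., 0.], dtype=float32)
y_lp = encode_lp(active_attacks)     # -> 3

print("feature vector:", x, "BR target:", y_br, "LP class:", y_lp)
```

Under BR each attack is decided independently, whereas LP treats every attack combination (including the unattacked state) as one class, which is why an LP classifier over three known attacks distinguishes eight classes.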
Table 2. The multi-attack outputs of the four sigmoid units. CA: calibration attacks. LOIA: LO intensity attacks. SA: saturation attacks.
Output               Sigmoid1   Sigmoid2   Sigmoid3   Sigmoid4
Unattacked State     1          0          0          0
CA [17]              0          1          0          0
LOIA [18]            0          0          1          0
SA [19]              0          0          0          1
CA&LOIA              0          1          1          0
CA&SA                0          1          0          1
LOIA&SA              0          0          1          1
CA&LOIA&SA           0          1          1          1
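As a hedged illustration of how the four sigmoid activations summarized in Table 2 could be converted back into a set of predicted attacks, the short sketch below thresholds each output at 0.5; the threshold value and the function name decode_sigmoid_outputs are illustrative assumptions rather than details specified in the paper.

```python
import numpy as np

# Label associated with each sigmoid output unit in Table 2
LABELS = ["Unattacked", "CA", "LOIA", "SA"]

def decode_sigmoid_outputs(outputs, threshold=0.5):
    """Map the four sigmoid activations of Table 2 back to a set of predicted labels."""
    outputs = np.asarray(outputs, dtype=float)
    active = [LABELS[i] for i, p in enumerate(outputs) if p >= threshold]
    # Sigmoid1 encodes the unattacked state, so it should not co-occur with attack labels.
    if "Unattacked" in active and len(active) > 1:
        active.remove("Unattacked")
    return active if active else ["Unattacked"]

# Example: activations corresponding to the CA&SA row of Table 2
print(decode_sigmoid_outputs([0.03, 0.91, 0.12, 0.86]))  # -> ['CA', 'SA']
```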
Table 3. Detection results of combined attacks by different models. The symbol ‘✓’ indicates that the model correctly identifies the combined attack. CA: calibration attacks. LOIA: LO intensity attacks. SA: saturation attacks.
Model              CA-LOIA   CA-SA   LOIA-SA   CA-LOIA-SA
BR-NN              ✓         ✓       ✓         ✓
LP-NN              ✓         ✓       ✓         ✓
ANN-based [29]     LOIA      SA      SA        LOIA
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
