Article

QuEst: Adversarial Attack Intensity Estimation via Query Response Analysis

Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju 61186, Republic of Korea
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(22), 3508; https://doi.org/10.3390/math12223508
Submission received: 25 September 2024 / Revised: 7 November 2024 / Accepted: 8 November 2024 / Published: 9 November 2024

Abstract

Deep learning has dramatically advanced computer vision tasks, including person re-identification (re-ID), substantially improving the matching of individuals across diverse camera views. However, person re-ID systems remain vulnerable to adversarial attacks that introduce imperceptible perturbations, leading to misidentification and undermining system reliability. This paper addresses the challenge of robust person re-ID in the presence of adversarial examples by estimating the attack intensity to enable effective detection and adaptive purification. The proposed approach leverages the observation that adversarial examples in retrieval tasks disrupt the relevance and internal consistency of retrieval results, degrading re-ID accuracy. By analyzing the query response data, the approach estimates the attack intensity and dynamically adjusts the purification strength, addressing the limitations of fixed-strength purification methods. It also preserves the performance of the model on clean data by avoiding unnecessary manipulation, improving the robustness and reliability of the system in the presence of adversarial examples. The experimental results demonstrate that the proposed method effectively detects adversarial examples and estimates the attack intensity through query response analysis, enhancing purification performance when integrated with adversarial purification techniques in person re-ID systems.

1. Introduction

Deep learning has considerably advanced the field of computer vision, demonstrating remarkable performance in tasks such as image retrieval. Person re-identification (re-ID), a representative retrieval task, involves matching individuals across disparate camera views. Person re-ID is critical in public safety, from aiding criminal investigations to locating missing persons. Although deep learning has revolutionized person re-ID, deep learning-based systems are susceptible to adversarial attacks, in which malicious inputs deceive these systems. Adversarial attacks exploit vulnerabilities in deep learning models by introducing imperceptible perturbations, causing the models to make incorrect predictions. Several studies have explored adversarial attacks explicitly designed for person re-ID systems [1,2,3]. This research stresses the importance of developing robust defenses against adversarial attacks to guarantee the reliability of person re-ID systems.
Various defense strategies have been proposed to address this challenge. For instance, adversarial training [4,5,6,7] is a simple and common approach involving training the model on clean and adversarial examples to enhance its robustness. However, adversarial training approaches lead to a trade-off between model generalizability and robustness [8]. Moreover, these approaches can only defend against specific attacks for which they have been trained, making it difficult to generalize these models to unseen attacks [9].
Adversarial purification approaches have garnered attention as a promising solution to this challenge. These methods apply generative models [10,11,12,13,14] to restore potentially compromised images to a clean state, removing adversarial perturbations before the images are input into the target model. Despite their benefits, adversarial purification methods are typically inferior to adversarial training techniques, particularly in white-box settings. This limitation is often due to the weaknesses of generative models employed for purification. Specific challenges include mode collapse in generative adversarial networks (GANs), suboptimal sample quality in energy-based models, and insufficient randomness [9].
Recently, advances in diffusion models, which progressively add and then remove noise from input images, have renewed research interest in adversarial purification approaches [9,15,16,17,18,19]. These models offer a more flexible and manageable framework for image reconstruction, mitigating some limitations of previous generative models. By applying diffusion models, adversarial purification techniques can surpass the performance of adversarial training methods. These purification approaches enhance the robustness and reliability of person re-ID systems, demonstrating effectiveness against unseen attacks. Despite these promising advancements, room for improvement remains. Diffusion-based purification methods, like their predecessors, tend to purify all images indiscriminately, regardless of whether they are perturbed or clean. This practice can degrade prediction accuracy on clean examples. Therefore, adversarial detection must precede purification so that only perturbed images are purified and clean images are left unmanipulated. Additionally, although the intensity of adversarial attacks can be adjusted through the attack type or hyperparameters, existing diffusion-based purification methods typically use fixed diffusion time steps. This yields a constant purification strength that does not adapt to the state of the input data, limiting purification effectiveness.
Recent studies on distinguishing adversarial examples from clean examples have proposed various approaches, such as leveraging a Bayesian neural network [20] and a method based on distributional discrepancies [21]. However, these methods are limited in estimating the attack intensity, defined as the magnitude of the adversarial perturbation ($\epsilon$). This paper introduces an adversarial defense method that detects adversarial attacks and estimates their intensity. The approach is based on query response analysis and addresses the fixed purification strength that limits current purification methods. This research builds on the pioneering work of Lee et al. [19], who were the first to estimate attack intensity using identity stability and attribute inconsistency; however, their method lacks a clear metric for estimating attack intensity. The proposed method is founded on the observation that retrieval results from clean data are typically plausible and consistent. In contrast, adversarial attacks degrade the relevance of these results to the query and diminish their internal consistency. As the attack intensity increases, the retrieved images diverge from the query and exhibit reduced consistency, increasing disorder and incoherence. Figure 1 illustrates this phenomenon. Our approach estimates the attack intensity based on the retrieval results of a query. Since untargeted attacks universally aim to return query-irrelevant results, with stronger attacks yielding more dissimilar outputs, the proposed method is effective against unseen attacks.
This paper builds on these observations and addresses the limitations of fixed diffusion time steps by introducing a dynamic adjustment mechanism that optimizes purification based on the intensity of the adversarial perturbations. The proposed method employs query response data to estimate the attack intensity more accurately, enhancing the effectiveness of the purification process. Furthermore, the proposed method functions as an adversarial detector, preserving the original predictive performance on clean data by avoiding unnecessary manipulation, thereby ensuring the reliability and effectiveness of the system. The summarized contributions are as follows:
  • Adversarial attacks in retrieval tasks degrade the relevance of the retrieval results to the query and disrupt the internal consistency of the results. This observation forms the basis for the proposed method.
  • Based on this observation, this paper proposes an adversarial defense method for accurately estimating adversarial attack intensity by analyzing the query response data. This approach allows the dynamic adjustment of the purification strength in response to varying these adversarial perturbations.
  • The proposed method preserves the predictive performance on clean data by avoiding unnecessary manipulation and enhancing the effectiveness of adversarial purification. This approach ensures the robust performance and reliability of the system in the presence of adversarial examples.

2. Related Work

2.1. Person Re-Identification

Person re-ID is a significant challenge in video surveillance, aiming to accurately identify and track individuals across multiple non-overlapping cameras. It is commonly used for identifying fugitive criminals and locating missing persons. With the advancement of deep learning, feature representation techniques and metric learning have been widely adopted [24,25,26,27,28,29,30,31,32,33].
Research on Siamese networks, in particular, has focused on enhancing re-ID performance. Zheng et al. [34] introduced a unified network that integrates identification and verification models, enabling the network to learn discriminative embeddings and simultaneously measure similarity. Wu et al. [35] developed a Siamese attention mechanism that jointly learns spatiotemporal video representations and similarity measurement. Chung et al. [36] proposed a two-stream convolutional neural network in which each stream functions as a Siamese network, capturing spatial and temporal features separately. Moreover, Li et al. [37] proposed a Siamese multiple-granularity network that simultaneously learns global and local features, together with a multichannel weighted fusion loss function combining the verification and identification losses to maximize person re-ID performance.
Recent research in re-ID has explored a variety of approaches. Attention-based approaches [38,39,40,41,42,43,44,45,46,47] have garnered significant interest due to their capability to effectively capture relevant features. Jia et al. [47] proposed the semi-attention partition method for occluded person re-identification, which leverages knowledge distillation to enable an attention-based student model to learn aligned part features from a noisy semantic partition teacher. In addition to attention-based methods, substantial research has also focused on graph neural networks (GNNs) for person re-ID in recent years [48,49,50,51,52,53,54,55,56,57,58]. These approaches leverage the relational structure among features to improve matching accuracy, offering a complementary perspective to traditional feature extraction methods. Xian et al. [57] proposed graph-based self-learning, a robust person re-identification framework utilizing GNNs to enhance discriminative representation learning and correct label noise, significantly boosting robustness without changing the network architecture or loss functions. Zhang et al. [58] proposed a unified framework for partial person re-identification that leverages GNNs and multi-head attention, using an adaptive threshold-guided masked graph convolutional network to minimize noisy key points and a cyclic heterogeneous graph convolutional network to integrate cross-modal pedestrian information.
These deep learning approaches have significantly advanced person re-ID, achieving state-of-the-art results. However, despite their success, the existing deep learning-based re-ID systems remain vulnerable to adversarial attacks, lacking sufficient security and robustness. Therefore, this research focuses on enhancing robustness against adversarial attacks.

2.2. Adversarial Metric Attack

Adversarial attacks designed to deceive deep learning models have been studied extensively [59,60,61,62], and several studies have proposed adversarial attacks targeting person re-ID systems. For example, Bai et al. [1] introduced the metric-fast gradient sign method (metric-FGSM) to perturb person re-ID systems by increasing the distance between perturbed and reference features. Wang et al. [2] proposed an adversarial attack method called deep mis-ranking, which leverages a generative adversarial network framework to produce deceptive noise that is added to input images. This generated noise creates adversarial examples that mislead the person re-ID system into incorrectly ranking similar images as dissimilar and vice versa. By utilizing a generator and a novel discriminator, the method ensures that the perturbations are subtle enough to remain inconspicuous while effectively degrading the model’s performance on key re-ID tasks. Yang et al. [3] presented MetaAttack, designed to enhance the effectiveness and universality of adversarial attacks on deep person re-ID systems. Using a holistic attack-defense framework, it systematically explores the interactions between various attack strategies and their effects on model robustness. A core contribution of MetaAttack is its combinatorial adversarial attack, which integrates functional color distortions with additive adversarial perturbations, enabling effective targeting of unseen domains and model types. This design mimics real-world variations, such as differences in camera settings, making the attack more realistic. Zheng et al. [63] proposed an opposite-direction feature attack, using adversarial gradients to shift the features in the opposite direction.
Additionally, adversarial queries are efficient to create and highly successful at deceiving re-ID systems. Subramanyam [64] proposed the meta-generative attack (MeGA), which combines GANs, masking techniques, and meta-learning to generate adversarial examples with high transferability and performance. Although attacking the gallery set is an option, the gallery is typically much larger than the query set, making the process more time-consuming; adversarial queries are therefore the more practical alternative [65].
Therefore, this study focuses on defending against adversarial attacks applied to query images in a white-box scenario where the attacker has full access to the architecture and parameters of the re-ID model. This setting provides a rigorous test of the defense robustness of the method. If the method performs well under these conditions, it is also expected to be effective in real-world scenarios.

2.3. Adversarial Defense

Extensive research has been conducted on defense strategies to reduce the vulnerability of deep learning-based models to adversarial attacks. Adversarial training [4,5,6] is a commonly employed approach to defend against attacks. Yu et al. [66] proposed the loss stationary condition to regulate weight perturbation, improving model robustness by focusing perturbations on adversarial examples with smaller classification loss and avoiding unnecessary disruptions. They also proposed a joint adversarial defense combining proactive and passive strategies to enhance the robustness of re-ID models.
However, adversarial training yields a trade-off, causing significant performance degradation on clean data and increasing the computational complexity. In addition, its defense capabilities are limited to the specific attacks on which it was trained, making it difficult to generalize to new, unseen threats [7]. Adversarial purification methods [9,10,11,12,13,14,15,17,18,19] concentrate on restoring potentially corrupted images by removing adversarial perturbations to ensure the model receives clean data for processing. Adversarial purification methods have demonstrated robustness against unseen attacks [15,17,18,19], making them promising approaches for enhancing the security of machine learning systems. Energy-based models using Markov chain Monte Carlo techniques for image purification are effective [13,67,68]; however, these methods often suffer from low sample quality and slow sampling speed.
The GAN-based methods aim to generate clean versions of perturbed images [69,70,71]. Nevertheless, GAN training is naturally unstable, and adversarial attacks can exploit latent space vulnerabilities to generate incorrect images. Additionally, they are prone to mode collapse, where the generated outputs lack diversity, leading to ineffective purification for more complex attacks. Diffusion models [9,72,73,74,75,76,77] operate through two primary processes: a forward process that gradually adds noise to transform data into a noise-like state, and a reverse process that progressively removes this noise to recover data from a noisy input. Through this denoising procedure, diffusion models effectively purify perturbed samples, functioning similarly to purification models and producing high-quality outputs that closely approximate the distribution of clean data. Furthermore, the inherent stochasticity of diffusion models enhances their capability to act as robust defenses against adversarial perturbations [78]. Nie et al. [9] proposed DiffPure, an adversarial purification method that uses the adjoint method to efficiently compute gradients during the reverse process. Lee and Kim [18] introduced gradual noise-scheduling-based purification (GNSP), an enhanced purification strategy built on a gradual noise scheduling approach. However, these diffusion-based purification models usually apply fixed diffusion time steps, which can result in over-purifying clean images or under-purifying heavily perturbed ones, affecting the overall purification performance.
These methods face a common problem of indiscriminate purification, in which clean and perturbed images are treated equally, reducing the accuracy of downstream tasks. Lee et al. [19] proposed IntensPure and were the first to estimate the attack intensity using identity stability and attribute inconsistency, suggesting a technique to determine the optimal diffusion time step. However, IntensPure provides little background on attack intensity estimation and offers no clear metric. This paper therefore identifies metrics that change systematically with the presence and intensity of attacks and proposes a measure from which the purification strength can be directly estimated.

3. Methods

3.1. Preliminaries

An adversarial example $x'$ is generated by adding a perturbation $\delta$ to the original input $x$, such that $x' = x + \delta$. The perturbation $\delta$ is computed to mislead the model. One popular method for determining $\delta$ in a white-box setting is the fast gradient sign method (FGSM) [79], which calculates $\delta$ from the gradient of the loss function with respect to the input. Specifically, $\delta$ is given by
$$\delta = \epsilon \cdot \mathrm{sign}\left( \nabla_x J(\theta, x, y) \right),$$
where $\epsilon$ controls the magnitude of the perturbation, $\mathrm{sign}(\cdot)$ denotes the sign function applied element-wise, and $\nabla_x J(\theta, x, y)$ represents the gradient of the classification loss function with respect to the input $x$. Here, $\theta$ denotes the model parameters and $y$ is the true label. Such adversarial attacks typically target classification models, manipulating the input to push it across decision boundaries. Classification attacks can effectively deceive models by shifting inputs over the decision boundary; however, they do not generalize well to person re-ID systems. These attacks alter the image to push it away from the decision boundary, which does not directly translate to manipulating pairwise distances between images, a critical aspect of person re-ID [1]. For person re-ID systems, metric-based attacks, which distort the distances between feature representations, are more suitable for undermining the retrieval accuracy of the system.
Adversarial metric attacks target distance-based metrics to degrade system performance. They achieve this by manipulating the distance between the feature representations of the probe image p and the gallery image g, ultimately reducing retrieval accuracy. This manipulation is achieved by defining and altering the distance metric D. The Euclidean distance D between feature representations f p and f g is defined as follows:
$$D(f_p, f_g) = \left\| f_p - f_g \right\|_2,$$
where $f_p$ and $f_g$ represent the feature representations extracted from the probe image $p$ and the gallery image $g$, respectively, and $\| \cdot \|_2$ denotes the $L_2$ norm, also known as the Euclidean norm.
The attack objective is to adjust the perturbation $\delta$ to maximize or minimize the distance $D(f_p + \delta, f_g)$. Maximizing the distance corresponds to an untargeted attack, which aims to reduce true positive matches by making individuals harder to identify. Minimizing the distance corresponds to a targeted attack, which aims to mislead the system into falsely identifying an individual as a specific target. These attacks impair the ability of the system to correctly identify individuals by manipulating pairwise distances. The perturbation $\delta$ can be computed using the FGSM:
$$\delta = \epsilon \cdot \mathrm{sign}\left( \nabla_p D(f_p, f_g) \right).$$
By manipulating pairwise distances, adversarial metric attacks undermine the retrieval accuracy of the system, making reliable identification challenging.
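The following is a minimal PyTorch sketch of a single-step untargeted metric-FGSM, assuming `extract_features` is the differentiable re-ID feature extractor and `gallery_feats` holds precomputed gallery feature vectors; the names are illustrative rather than the authors’ implementation.

```python
import torch

def metric_fgsm(extract_features, probe, gallery_feats, epsilon):
    """Single-step untargeted metric-FGSM: perturb the probe image so
    that its features move away from the matching gallery features."""
    probe = probe.clone().detach().requires_grad_(True)
    f_p = extract_features(probe)                   # (1, d) probe features
    # Euclidean distance D(f_p, f_g) to each fixed gallery feature
    dist = torch.cdist(f_p, gallery_feats).mean()
    dist.backward()
    # Move along the gradient sign to *maximize* the distance;
    # epsilon controls the perturbation magnitude
    adv = probe + epsilon * probe.grad.sign()
    return adv.clamp(0.0, 1.0).detach()             # keep a valid image
```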

3.2. Statistical Metrics for Experimental Study Based on Query Response

As illustrated in Figure 1, adversarial attacks on person re-ID systems cause the retrieval system to return irrelevant images in the ranked list. A secondary effect of these attacks is the disruption of consistency among the retrieved results. As the attack intensity increases, the retrieval results become more disordered. Based on these observations from the query responses, we conducted an empirical study to estimate the presence of adversarial perturbations and the intensity of such attacks. We employ part of the identity stability metric proposed in [19] to capture the characteristics of the retrieval results under attack: the top-$k$ and inter-rank similarities. Figure 2 illustrates the process of calculating the top-10 and inter-rank similarities when $k = 10$, providing a visual representation of how these metrics are derived.
We employed a feature extractor from a deep learning-based person re-ID network to measure identity similarity for each image of a person. This network comprises a feature extractor and classification layers, which train the identities in the training data as distinct labels. The trained classification layer is discarded during inference, and only the feature extractor is employed to obtain embedding vectors from each image.
As depicted in Figure 2, the top-$k$ similarities refer to the similarity between the query image and each of the top-$k$ retrievals, measuring the relevance of the retrieved images to the query. To evaluate the relevance of retrieval feature vectors to a given query, we computed the cosine similarity between the query feature vector $q$ and each of the top-$k$ retrieval feature vectors $r_i$. Both $q$ and $r_i$ are outputs of the convolutional layers of the re-ID network, which function as the feature extractor before the fully connected layers. This set of similarities, referred to as the top-$k$ similarities, is computed as follows:
$$\text{Top-}k\ \text{similarities} = \left[ \frac{q \cdot r_1}{\|q\| \|r_1\|}, \frac{q \cdot r_2}{\|q\| \|r_2\|}, \ldots, \frac{q \cdot r_k}{\|q\| \|r_k\|} \right],$$
where the top-$k$ similarities denote the vector of cosine similarity values between the query feature vector $q$ and each of the $k$ retrieval feature vectors $r_i$. Each feature vector has dimensions $1 \times 2048$.
Inter-rank similarities are defined as the pairwise similarities between the top-$k$ retrieved results. This metric captures the internal consistency of the retrieved images. To assess the coherence of the top-$k$ retrieval feature vectors, we computed the cosine similarity between each pair of retrieval feature vectors $r_i$ and $r_j$. This set of pairwise similarities, referred to as the inter-rank similarities, is computed as follows:
$$\text{Inter-rank similarities} = \left[ \frac{r_i \cdot r_j}{\|r_i\| \|r_j\|} \right]_{1 \le i < j \le k},$$
where the inter-rank similarities denote the vector of cosine similarity values for all $\binom{k}{2}$ unique pairs of retrieval feature vectors.
To quantify the effect of adversarial attacks on retrieval results, we introduce the response incoherence, reflecting the inconsistency of the top-$k$ and inter-rank similarities. The response incoherence is measured by the standard deviations of these similarities. The response incoherence for the top-$k$ similarities is defined as follows:
$$\text{Response incoherence}_{\text{Top-}k} = \sqrt{ \frac{1}{k} \sum_{i=1}^{k} \left( \frac{q \cdot r_i}{\|q\| \|r_i\|} - \mu_{\text{Top-}k} \right)^2 },$$
where $\mu_{\text{Top-}k}$ denotes the mean of the top-$k$ similarities, given by
$$\mu_{\text{Top-}k} = \frac{1}{k} \sum_{i=1}^{k} \frac{q \cdot r_i}{\|q\| \|r_i\|}.$$
The response incoherence for the inter-rank similarities is computed as follows:
$$\text{Response incoherence}_{\text{Inter-rank}} = \sqrt{ \frac{2}{k(k-1)} \sum_{1 \le i < j \le k} \left( \frac{r_i \cdot r_j}{\|r_i\| \|r_j\|} - \mu_{\text{Inter-rank}} \right)^2 },$$
where the mean of the inter-rank similarities, $\mu_{\text{Inter-rank}}$, is defined as follows:
$$\mu_{\text{Inter-rank}} = \frac{2}{k(k-1)} \sum_{1 \le i < j \le k} \frac{r_i \cdot r_j}{\|r_i\| \|r_j\|}.$$
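Putting Equations (4) through (9) together, a compact PyTorch sketch of the query response statistics for one query might look as follows; `q` and `R` are assumed to be the query and top-$k$ retrieval feature vectors from the re-ID feature extractor.

```python
import torch
import torch.nn.functional as F

def query_response_metrics(q, R):
    """q: (d,) query feature; R: (k, d) top-k retrieval features.
    Returns the top-k similarities, inter-rank similarities, and the
    two response-incoherence values (their population std deviations)."""
    q = F.normalize(q, dim=0)
    R = F.normalize(R, dim=1)
    topk_sims = R @ q                      # cosine sim of query vs each rank
    S = R @ R.T                            # all pairwise cosine similarities
    i, j = torch.triu_indices(R.size(0), R.size(0), offset=1)
    inter_rank_sims = S[i, j]              # the C(k,2) unique pairs
    incoherence_topk = topk_sims.std(unbiased=False)
    incoherence_inter = inter_rank_sims.std(unbiased=False)
    return topk_sims, inter_rank_sims, incoherence_topk, incoherence_inter
```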
Figure 3 presents histograms illustrating the response incoherence for top 10 and inter-rank similarities on the Market-1501 [23] dataset to investigate the effect of adversarial attacks. Figure 3a depicts the response incoherence for the clean query set and top 10 retrievals. Figure 3b–d displays the response incoherence for perturbed query sets subjected to metric-FGSM attacks with intensities of 4, 12, and 16, respectively.
As the attack intensity increases, each response incoherence tends to rise, accompanied by reduced variance in both metrics. The gap between the two distributions widens, reflecting the increasing divergence between the top-$k$ and inter-rank similarities. This trend demonstrates that as adversarial attacks intensify, the disorder in the retrieval results increases. Furthermore, the response incoherence captured by the standard deviations of the top-$k$ and inter-rank similarities reveals its potential for estimating the intensity of adversarial attacks.
Figure 4 and Figure 5 present histograms of response incoherence under varying adversarial attack intensities across datasets and attack methods. Figure 4a–d illustrates the response incoherence for the top-10 and inter-rank similarities on the Market-1501 dataset when subjected to deep mis-ranking attacks of varying intensities. Figure 5a–d presents the response incoherence for the DukeMTMC-reID [80] dataset under metric-FGSM attacks at the same intensity levels as in Figure 3. The observed trend in retrieval consistency is thus not confined to a specific dataset or attack type.
Similarly, both the top-$k$ similarity and the inter-rank similarity demonstrate a clear correlation with the attack intensity. For clarity, Figure 6 depicts the average top-10 similarity (a) and inter-rank similarity (b) against the attack intensity for all three person re-ID attacks (metric-FGSM, deep mis-ranking, and MetaAttack), showing a monotonic decline in both metrics as the attack intensity increases and confirming our expectations. This correlation stems from the objective of untargeted adversarial attacks. Specifically, adversarial perturbations shift the feature representations of images in the latent space, increasing the distances between the query and relevant samples. The inter-rank similarity decreases because untargeted attacks focus solely on maximizing the distance between the query and relevant samples, disregarding the consistency among the ranked results; as a result, the internal consistency of the rankings is disrupted.

3.3. Adversarial Attack Intensity Estimation

Figure 7 illustrates the framework of the proposed query response analysis-based attack intensity estimator (QuEst). QuEst estimates the intensity of adversarial attacks from the query responses, including the response incoherence metrics identified in the empirical study above. Initially, person re-ID is performed using the target network, without considering the state of the query sample, to obtain the query responses. We employed the pretrained convolutional layers of ResNet-50 as the feature extractor. The top-$k$ similarities, inter-rank similarities, and response incoherence are extracted from the query and retrieval results. The QuEst framework incorporates a simple regression model to estimate the attack intensity from these query responses. This regression model comprises an input layer and two hidden layers with 512 and 256 nodes, respectively; the hidden layers use batch normalization and rectified linear unit activation functions. The output layer estimates the attack intensity on a scale from 0 to 16. Algorithm 1 presents the training procedure and Algorithm 2 the inference procedure used in QuEst; a sketch of the regression model follows Algorithm 2.
Algorithm 1 Training Procedure for QuEst
Require: Training set T without query–gallery distinction
Ensure: Trained regression model E
1: Initialize the regression model E
2: Randomly select one image of individual i as the query q
3: Set the remaining images of the same identity i as gallery samples G
4: for each attack intensity $\epsilon \in \{0, 1, 2, \ldots, 16\}$ do
5:    Generate the perturbed query $\tilde{q}$ by applying metric-FGSM or deep mis-ranking with intensity $\epsilon$
6:    Assign the label $y = \epsilon$ {$\epsilon = 0$ denotes a clean sample}
7:    Extract features from the query image and gallery images using a pretrained person re-ID network
8:    Compute cosine similarities between the query feature and each gallery image feature
9:    Identify the top-$k$ retrieval results with the highest cosine similarities and record their cosine similarity values, as defined in Equation (4)
10:   Calculate pairwise cosine similarities among the top-$k$ retrieval results using Equation (5), with the number of pairwise similarities given by $\binom{k}{2}$
11:   Compute the response incoherence, i.e., the standard deviations of the top-$k$ and inter-rank similarities, following Equations (6) and (8)
12:   Input the extracted features (top-$k$ similarities, inter-rank similarities, response incoherence) into E to obtain the estimated attack intensity
13:   Update the regression model E on the pair ($\tilde{q}$, y) by minimizing the MSE loss
14: end for
15: return the trained regression model E
Algorithm 2 Query Response Analysis-Based Attack Intensity Estimator (QuEst)
Input: Query image
Output: Estimated attack intensity
1: Extract features from the query image and gallery images using the pretrained person re-ID network
2: Compute cosine similarities between the query feature and each gallery image feature
3: Identify the top-$k$ retrieval results with the highest cosine similarities and record their cosine similarity values, as defined in Equation (4)
4: Calculate pairwise cosine similarities among the top-$k$ retrieval results using Equation (5), with the number of pairwise similarities given by $\binom{k}{2}$
5: Compute the response incoherence, i.e., the standard deviations of the top-$k$ and inter-rank similarities, following Equations (6) and (8)
6: Input the extracted features (top-$k$ similarities, inter-rank similarities, response incoherence) into the regression model to obtain the estimated attack intensity
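For concreteness, a minimal PyTorch sketch of the regression model described above follows; the class name and default input dimension are our assumptions (for $k = 10$, the input concatenates 10 top-$k$ similarities, 45 inter-rank similarities, and 2 response-incoherence values, giving 57 features).

```python
import torch.nn as nn

class QuEstRegressor(nn.Module):
    """Two hidden layers (512 and 256 nodes) with batch normalization
    and ReLU, regressing a scalar attack intensity on the 0-16 scale."""
    def __init__(self, in_dim=57):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.BatchNorm1d(512), nn.ReLU(),
            nn.Linear(512, 256), nn.BatchNorm1d(256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, x):               # x: (batch, in_dim) response features
        return self.net(x).squeeze(-1)  # estimated intensity per query
```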

3.4. Adjusting Purification Strength Using Estimated Attack Intensity

In diffusion-based adversarial purification methods, the diffusion time step governs the strength of purification, and the optimal purification strength varies with the adversarial attack intensity. To determine the purification strength, we referenced the empirical results of Lee et al. [19], who demonstrated how the diffusion time step should be adjusted based on the attack intensity. Their empirical data show that the diffusion time step for optimal purification follows a logarithmic curve relative to the attack intensity.
For the Market-1501 dataset, the optimal diffusion time step $t^*$ is given by
$$t^* = \mathrm{round}\left( 144.51 \log_{2}\left( 48.66 (\hat{\epsilon} + 1) \right) \right),$$
where $\hat{\epsilon}$ represents the estimated attack intensity. For the DukeMTMC-reID dataset, the formula is
$$t^* = \mathrm{round}\left( 51.66 \log_{10}\left( 4.40 (\hat{\epsilon} + 1) \right) \right).$$
These formulas were derived by fitting logarithmic curves to the empirical data, yielding an estimate of the optimal time step for a given attack intensity. The estimated attack intensity is therefore employed to adjust the purification strength in diffusion-based methods by determining the most effective time step for the diffusion process, as sketched below.
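A direct translation of this mapping into Python might look as follows; the function name is ours, and the constants come from the fitted curves above.

```python
import math

def optimal_timestep(eps_hat, dataset="market1501"):
    """Map an estimated attack intensity to a diffusion time step
    using the fitted logarithmic curves."""
    if dataset == "market1501":
        t = 144.51 * math.log2(48.66 * (eps_hat + 1))
    else:  # DukeMTMC-reID
        t = 51.66 * math.log10(4.40 * (eps_hat + 1))
    return round(t)
```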

4. Experiments

4.1. Experimental Setup

4.1.1. Datasets and Adversarial Attacks

This study employs two widely recognized person re-ID datasets: Market-1501 [23] and DukeMTMC-reID [80]. Market-1501 comprises 32,668 images representing 1501 unique identities captured by six distinct cameras. The dataset was partitioned into a training set containing 12,936 images of 750 identities and a testing set comprising 19,732 images of the remaining 751 identities. The DukeMTMC-reID dataset includes 36,411 images of 1404 identities collected from eight cameras. The standard dataset split is 16,522 images of 702 identities for training and 17,661 images of 702 identities for testing. We applied a range of adversarial attack methods for person re-ID, including metric-FGSM [1], deep mis-ranking [2], and MetaAttack [3], on the Market-1501 and DukeMTMC-reID datasets. We evaluated the defense performance across attack intensities ranging from $\epsilon = 0$ to $16$.

4.1.2. Models

We employed ResNet-50 [22] as the baseline network for person re-ID to demonstrate the influence of adversarial attacks and the effectiveness of adversarial defenses. The widely adopted ResNet-50 deep learning architecture is known for its strong performance in person re-ID tasks. We applied the pretrained weights provided by Zheng et al. [34] for a fair comparison. Additionally, we employed state-of-the-art diffusion-based purification methods to validate the influence of integrating QuEst with diffusion-based adversarial purification methods [9,18,19]. These methods were employed to assess how QuEst enhances the adaptive purification process.

4.2. Implementation Details

The QuEst method uses an ID feature extractor and a regression model to estimate the attack intensity. The feature extractor for obtaining query responses shares its weights and architecture with the re-ID model to optimize attack detection and intensity estimation. The estimator is implemented with the same architecture used by MEAAD [81] to demonstrate the effectiveness of the proposed response incoherence while minimizing performance differences attributable to the underlying networks. Additionally, data from the training set are used to train the estimator. Because the training set does not distinguish between queries and galleries, one image of each individual is randomly selected as the query, with the remaining images serving as gallery samples. The training procedure and evaluation metrics are aligned with the setup used by Wang et al. [81] to ensure a fair comparison. Training is conducted by perturbing the query samples from the training set using metric-FGSM and deep mis-ranking with attack intensities ranging from $\epsilon = 0$ to $16$. Clean samples are labeled zero, whereas perturbed samples are assigned labels corresponding to the applied attack intensities. Finally, to convert the estimated attack intensity into an appropriate diffusion time step, we employed the optimal diffusion time steps for specific attack environments described in prior research and the logarithmic curves for attack intensity estimation proposed by Lee et al. [19].
The attack intensity estimation network was trained for 50 epochs with the learning rate set to $1 \times 10^{-4}$. Training was conducted on a single Nvidia GeForce RTX 3090 GPU and implemented in PyTorch 1.9.0. Stochastic gradient descent (SGD) with a momentum of 0.9 was employed for optimization.
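The training setup can be summarized in a short sketch, reusing the `QuEstRegressor` sketch from Section 3.3; `train_loader` is an assumed DataLoader yielding batches of query response features and their intensity labels.

```python
import torch

model = QuEstRegressor(in_dim=57)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
criterion = torch.nn.MSELoss()

for epoch in range(50):                       # 50 epochs, as in Section 4.2
    for features, eps_label in train_loader:  # labels: 0 (clean) to 16
        optimizer.zero_grad()
        loss = criterion(model(features), eps_label.float())
        loss.backward()
        optimizer.step()
```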

4.3. Evaluation Metrics

We employed binary classification accuracy and the area under the receiver operating characteristic curve (AUROC) to evaluate the effectiveness of the proposed QuEst in detecting adversarial attacks. In addition, we evaluated the accuracy of attack intensity estimation. We applied the mean absolute error (MAE) to quantify how closely the estimated attack intensity aligns with the actual intensity. Finally, we assessed the effectiveness of the diffusion-based purification method combined with QuEst for the adaptive adjustment of the purification strength, using rank-1 accuracy as the evaluation metric.

4.4. Comparison with State-of-the-Art Attack Detection Methods

The proposed method, which primarily functions as an attack intensity estimator, also serves as an effective adversarial attack detector. We transformed the estimator into an adversarial detector by distinguishing between rounded estimates of zero (clean) and non-zero values (attacked), as sketched below. For IntensPure, only the estimator, without the purifier, is compared in Section 4.4 and Section 4.5.
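A toy illustration of this detection rule, together with the evaluation metrics from Section 4.3, is given below; the intensity arrays are made-up values for demonstration, not the paper’s data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, mean_absolute_error

def detect(eps_hat):
    """A rounded intensity estimate of zero is treated as clean;
    any non-zero rounded value flags an adversarial query."""
    return round(float(eps_hat)) != 0

eps_true = np.array([0, 0, 4, 8, 16])            # ground-truth intensities
eps_pred = np.array([0.2, 0.4, 3.6, 8.9, 15.1])  # hypothetical QuEst outputs
is_attacked = (eps_true > 0).astype(int)
predicted = np.array([detect(e) for e in eps_pred], dtype=int)

accuracy = (predicted == is_attacked).mean()     # binary detection accuracy
auroc = roc_auc_score(is_attacked, eps_pred)     # raw estimates as scores
mae = mean_absolute_error(eps_true, eps_pred)    # intensity estimation error
```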
As listed in Table 1, for metric-FGSM attacks, the proposed method achieved the highest accuracy of 99.08% and an AUROC of 1.000, outperforming the previous state-of-the-art model. Similarly, for deep mis-ranking attacks, the proposed method led with 99.71% accuracy and an AUROC of 1.000, surpassing IntensPure’s 99.55% accuracy and 1.000 AUROC. For MetaAttack, the proposed method achieved an accuracy of 98.69% and an AUROC of 0.991. The accuracy and AUROC of the proposed method remained the highest, demonstrating its robust detection capability even in this unseen attack scenario, which was not included during training. These results emphasize the effectiveness of the proposed response incoherence metric in detecting adversarial attacks.
Table 2 presents the detection accuracy and AUROC for the adversarial attack detection methods on the DukeMTMC-reID dataset. The proposed method consistently outperformed existing approaches across all evaluated attack scenarios, achieving the highest accuracy and AUROC scores. This performance, demonstrated across both the Market-1501 and DukeMTMC-reID datasets, underscores the robustness and generalizability of the proposed approach.

4.5. Comparison with Previous Attack Intensity Estimation Method

We adapted the adversarial detection methods LiBRe [20] and EPS-AD [21] to compare the adversarial attack detection performance. However, adjusting the networks for attack intensity estimation was challenging because these models only report the AUROC for adversarial detection. Thus, we employed only MEAAD and IntensPure as baselines after modifying their architectures to output attack intensity values. We replaced the final layers of the MEAAD network with regression layers and retrained them under the same conditions to transform them from detectors into estimators for a fair comparison.
Table 3 compares the MAE in estimating the adversarial attack intensity across attack types on the Market-1501 dataset. The comparison involves three methods: MEAAD, IntensPure, and the proposed method, evaluated under three adversarial attack scenarios: metric-FGSM, deep mis-ranking, and MetaAttack. The proposed method consistently achieved the lowest MAE across all attack types. For metric-FGSM, the proposed approach outperformed IntensPure and MEAAD with an MAE of 0.747, compared to 0.806 and 3.340, respectively. Similarly, for deep mis-ranking, the proposed method achieved an MAE of 0.720. For MetaAttack, an unseen attack not included in the training data, the proposed method displayed superior performance with an MAE of 0.994, compared to 1.071 for IntensPure and 3.912 for MEAAD. These results emphasize the superior generalizability of the method for attacks used during training and for unseen attacks, such as MetaAttack, demonstrating significant improvements in accuracy over previous state-of-the-art methods.
Table 4 compares the MAE for estimating the adversarial attack intensity on the DukeMTMC-reID dataset. Similar to the Market-1501 results, the proposed method consistently achieved the lowest MAE across all attack types, demonstrating its effectiveness across datasets.

4.6. Estimated Attack Intensity Effectiveness on Diffusion-Based Adversarial Purification Methods

Table 5 evaluates the effectiveness of the proposed attack intensity estimator on diffusion-based adversarial purification methods across attack intensities ($\epsilon = 0, 4, 8, 12, 16$) for person re-ID on the Market-1501 dataset. For IntensPure, QuEst replaces the existing attack intensity estimator. The baseline ResNet-50, which does not employ adversarial purification, serves as a reference point for performance degradation under adversarial attacks. Across all attack intensities, methods integrated with the proposed attack intensity estimator consistently outperformed their original counterparts.
For example, under the metric-FGSM attack with $\epsilon = 16$, GNSP with our estimator achieved a rank-1 accuracy of 62.84%, compared to 54.44% without it, demonstrating a significant performance improvement. Similarly, DiffPure with the proposed method also displayed better robustness, achieving 62.21% accuracy compared to 38.93% for the original DiffPure at the highest attack intensity. Moreover, for deep mis-ranking and MetaAttack, which are more challenging attack scenarios, the methods enhanced by the attack intensity estimator demonstrated substantial performance gains. In particular, GNSP with the estimator achieved 79.22% rank-1 accuracy for deep mis-ranking at $\epsilon = 4$, whereas IntensPure with the estimator reached 68.11% accuracy for MetaAttack at $\epsilon = 16$, demonstrating superior resilience even against unseen attacks, such as MetaAttack. These results indicate that diffusion-based purification methods can better mitigate the influence of adversarial attacks by dynamically adjusting the purification strength based on the estimated attack intensity, achieving more robust performance in person re-ID tasks.
Table 6 assesses the performance of adversarial purification methods on person re-ID using the DukeMTMC-reID dataset under various attack intensities ($\epsilon = 0, 4, 8, 12, 16$). Integrating the proposed attack intensity estimator into DiffPure, GNSP, and IntensPure consistently improved the rank-1 accuracy compared to the original methods. For example, under the metric-FGSM attack at $\epsilon = 16$, IntensPure with the estimator achieved 56.82% accuracy compared to 54.13% for the original method. Across attack methods, the approach demonstrated consistent improvements, with the most significant gains observed for MetaAttack, an unseen attack not included during training. IntensPure with the proposed estimator reached 68.48% accuracy at $\epsilon = 8$, surpassing the original method at 66.42%.
These results confirm that dynamically adjusting the purification strength based on estimated attack intensity enhances the robustness of the diffusion-based methods, even against stronger adversarial attacks. The effectiveness of the proposed method was also validated across diverse datasets, demonstrating its generalizability. This method confirms that the attack intensity estimation-based purification technique is effective across diverse datasets and adversarial attack scenarios.
Table 7 provides a comparative analysis of the complexity of diffusion-based adversarial purification methods on the Market-1501 dataset, with and without the proposed QuEst. The results indicate that integrating QuEst into DiffPure and GNSP incurs only a marginal increase in computational complexity: on average, 0.90% more floating point operations (FLOPs), 21.3% more parameters, and a 1.69% rise in runtime. To ensure a fair comparison, we evaluated the diffusion-based adversarial purification models with a single diffusion time step. This comparison highlights that the runtime of our proposed method is significantly shorter than that of a single diffusion step. The time required for estimating the attack intensity is only 5 ms, contributing minimally to the overall overhead. Furthermore, by adjusting the diffusion time step according to the estimated attack intensity, our method prevents unnecessary diffusion steps, optimizing both performance and speed. This minor overhead relative to the purification process leads to a significant improvement in purification performance.
Table 8 summarizes the complexity comparison of attack intensity estimation methods on the Market-1501 dataset. MEAAD [81] employs five expert models, resulting in higher FLOPs and parameter counts; however, its parallel processing capabilities mitigate the impact on inference speed. Similarly, IntensPure [19] utilizes an ID feature extractor and an attribute recognition network, achieving an inference time comparable to that of our proposed method, QuEst, which shows significantly reduced complexity with only 5G FLOPs and 27M parameters. Notably, QuEst reduces both FLOPs and parameters without compromising attack intensity estimation performance. Since the proposed method operates at the inference stage, the size of the dataset is not relevant to our runtime analysis. Furthermore, image sizes are standardized through preprocessing, ensuring consistent runtime performance.

4.7. Ablation Studies

Table 9 presents the influence of varying the number of top-ranked images on the adversarial detection accuracy and attack intensity estimation, measured by MAE. A rank range of 1 refers to using only the top-1 retrieval result, and as the rank range increases, both performance metrics generally improve. Performance peaked when using the top 10 ranked images, achieving the highest detection accuracy of 99.08% and the lowest estimation error of 0.747. Increasing the rank range up to 10 strengthened the correlation between the retrieved images, leading to better adversarial detection and attack intensity estimation. However, extending the range beyond 10 slightly degraded the performance. For instance, with a rank range of 15 and 20, the accuracy decreased to 98.85% and 98.77%, respectively, and MAE increased to 0.806 and 0.850. This performance degradation is likely due to the inclusion of less relevant images, weakening the coherence of the retrieval results.
Table 10 provides the results of an ablation study on QuEst, assessing its detection accuracy and mean absolute error on the Market-1501 dataset under the metric-FGSM attack. The study evaluates the effect of the different features, namely the top-$k$ similarities, inter-rank similarities, and response incoherence, with the presence of each feature marked by a checkmark. When using only a single feature (top-$k$ similarities, inter-rank similarities, or response incoherence), the model shows limited performance, whereas combining multiple features yields a clear improvement. Notably, response incoherence emerges as a key feature with a marked impact on performance: when paired with the other features, it consistently improves accuracy and reduces estimation errors. The best results are obtained when all three features are used together, with the model reaching 99.08% accuracy and a mean absolute error of 0.747. This highlights the benefit of integrating all features for both detection accuracy and attack intensity estimation.

5. Discussion

As mentioned, we employed the logarithmic curve derived from IntensPure to align the estimated attack intensity with the purification strength. This approach is an empirically determined transformation method, and there are instances in which the estimated attack intensity and optimal purification strength do not perfectly align. This finding indicates room for further refinement and improvement in this alignment process.
Furthermore, our experiments were conducted exclusively in an in-domain setting without exploring cross-domain scenarios. Applying cross-dataset testing may reveal limitations in the selected diffusion timestep, as it may not be optimally aligned with the characteristics of a new domain. The optimal diffusion timestep for purification can vary significantly between datasets, primarily due to the unique visual properties and data distributions within each dataset. For example, datasets may differ in terms of lighting conditions, camera resolution, background complexity, and subject appearance, all of which influence the image’s underlying feature distribution. When such variability exists, the diffusion-based purification method may struggle to apply a one-size-fits-all approach, as the characteristics that influence optimal purification strength vary across domains.
Considering this, a valuable direction for future research would involve the integration of domain adaptation techniques aimed at minimizing domain shift, particularly when the source and target domains exhibit notable differences in visual characteristics, camera angles, or environmental conditions. For instance, incorporating domain adaptation strategies that align feature distributions or adjust the diffusion timestep dynamically to suit cross-domain conditions could enhance both the robustness and generalizability of diffusion-based purification methods. By combining these techniques with our proposed approach, we could better handle the variability in adversarial perturbations and improve purification performance across diverse domains. This line of investigation holds potential not only to advance cross-domain effectiveness but also to offer a more comprehensive adversarial defense solution adaptable to a wider range of real-world scenarios.
The proposed method estimates the attack intensity by analyzing the inconsistencies in retrieval results. However, this approach faces limitations when dealing with targeted adversarial attacks. In targeted attacks, the objective is to manipulate the retrieval system to consistently return results for a specific identity, thereby maintaining a degree of consistency among the retrieval outcomes. Ideally, a successful targeted attack would result in the re-ID system retrieving incorrect identities while producing retrieval results that exhibit similar consistency to those obtained from a benign query. This consistency poses a significant challenge for QuEst, which primarily relies on discrepancies in retrieval results to infer the intensity of adversarial attacks. The effectiveness of QuEst is contingent upon the identification of variations in retrieval results that indicate the presence of adversarial perturbations. When a targeted attack is executed effectively, the manipulations may not produce the discrepancies that QuEst depends on, thereby impairing its ability to accurately assess the attack intensity. This limitation necessitates the development of additional mechanisms to enhance the robustness of the proposed method.
To address these challenges, we plan to integrate a mechanism that utilizes an auxiliary task model, such as attribute recognition. This auxiliary model operates under a different objective from the primary re-ID task, allowing us to evaluate the consistency between the query and the retrieval results from a new perspective. By analyzing how well the retrieved identities align with the expected attributes associated with the queried identity, we can enhance our ability to detect targeted attacks. This auxiliary task model would serve as a complementary tool, providing additional context that QuEst can leverage to identify inconsistencies that may not be apparent through retrieval discrepancies alone. For example, if a targeted attack successfully manipulates the retrieval system to return consistent but incorrect results, the auxiliary model could highlight mismatches in expected attributes—such as discrepancies in clothing color, hair length, or other identifiable traits.

6. Conclusions

This paper introduced an attack intensity estimation method based on query response analysis. The proposed approach effectively detects adversarial attacks and addresses the limitations of fixed-strength purification methods by dynamically adjusting the purification parameters based on the estimated intensity of the adversarial attacks. The method builds on the observation that adversarial attacks disrupt the relevance of retrieval results to the query and reduce their internal consistency, providing a robust basis for attack intensity estimation. This approach enhances the overall robustness and reliability of person re-ID systems, representing a significant advance in addressing adversarial challenges. In deployment, the detector would be provided alongside the re-ID model, mirroring the deployment scenario of the adversarial purification model. By integrating the re-ID model and the adversarial defense model, operators can effectively mitigate the impact of adversarial threats, ensuring the robustness of the deployed re-ID system.

Author Contributions

Conceptualization, E.G.L. and S.B.Y.; methodology, E.G.L. and S.B.Y.; software, E.G.L. and C.H.M.; validation, E.G.L. and C.H.M.; formal analysis, E.G.L. and S.B.Y.; investigation, E.G.L.; resources, E.G.L., C.H.M. and S.B.Y.; data curation, E.G.L.; writing—original draft preparation, E.G.L. and C.H.M.; writing—review and editing, E.G.L. and S.B.Y.; visualization, E.G.L. and C.H.M.; supervision, S.B.Y.; project administration, S.B.Y.; funding acquisition, S.B.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This paper was supported by Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0020536) and the IITP grant funded by the Korea government (MSIT) (No. 2021-0-02068, RS-2022-00156287, RS-2023-00256629, RS-2024-00437718).

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author. The source code is available at https://github.com/st0421/QuEst (accessed on 19 September 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bai, S.; Li, Y.; Zhou, Y.; Li, Q.; Torr, P.H. Adversarial metric attack and defense for person re-identification. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 2119–2126. [Google Scholar] [CrossRef] [PubMed]
  2. Wang, H.; Wang, G.; Li, Y.; Zhang, D.; Lin, L. Transferable, controllable, and inconspicuous adversarial attacks on person re-identification with deep mis-ranking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–19 June 2020; pp. 342–351. [Google Scholar]
  3. Yang, F.; Weng, J.; Zhong, Z.; Liu, H.; Wang, Z.; Luo, Z.; Sebe, N. Towards Robust Person Re-Identification by Defending Against Universal Attackers. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 5218–5235. [Google Scholar] [CrossRef] [PubMed]
  4. Gowal, S.; Qin, C.; Uesato, J.; Mann, T.; Kohli, P. Uncovering the limits of adversarial training against norm-bounded adversarial examples. arXiv 2020, arXiv:2010.03593. [Google Scholar]
  5. Kang, Q.; Song, Y.; Ding, Q.; Tay, W.P. Stable neural ode with lyapunov-stable equilibrium points for defending against adversarial attacks. In Proceedings of the Advances in Neural Information Processing Systems, Virtual, 6–14 December 2021; pp. 14925–14937. [Google Scholar]
  6. Jin, G.; Yi, X.; Wu, D.; Mu, R.; Huang, X. Randomized adversarial training via taylor expansion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 16447–16457. [Google Scholar]
  7. Bai, T.; Luo, J.; Zhao, J.; Wen, B.; Wang, Q. Recent Advances in Adversarial Training for Adversarial Robustness. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, Montreal, QC, Canada, 19–27 August 2021; pp. 4312–4321. [Google Scholar]
  8. Frosio, I.; Kautz, J. The Best Defense Is a Good Offense: Adversarial Augmentation Against Adversarial Attacks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 20–22 June 2023; pp. 4067–4076. [Google Scholar]
  9. Nie, W.; Guo, B.; Huang, Y.; Xiao, C.; Vahdat, A.; Anandkumar, A. Diffusion models for adversarial purification. arXiv 2022, arXiv:2205.07460. [Google Scholar]
  10. Samangouei, P.; Kabkab, M.; Chellappa, R. Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models. arXiv 2018, arXiv:1805.06605. [Google Scholar]
  11. Song, Y.; Kim, T.; Nowozin, S.; Ermon, S.; Kushman, N. PixelDefend: Leveraging Generative Models to Understand and Defend Against Adversarial Examples. arXiv 2017, arXiv:1710.10766. [Google Scholar]
  12. Yang, Z.; Xu, Z.; Zhang, J.; Hartley, R.; Tu, P. Adversarial Purification with the Manifold Hypothesis. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–28 February 2024; pp. 16379–16387. [Google Scholar]
  13. Grathwohl, W.; Wang, K.-C.; Jacobsen, J.-H.; Duvenaud, D.; Norouzi, M.; Swersky, K. Your Classifier Is Secretly an Energy Based Model and You Should Treat It Like One. arXiv 2019, arXiv:1912.03263. [Google Scholar]
  14. Schott, L.; Rauber, J.; Bethge, M.; Brendel, W. Towards the First Adversarially Robust Neural Network Model on MNIST. In Proceedings of the Seventh International Conference on Learning Representations, New Orleans, LA, USA, 6–9 May 2019; pp. 1–16. [Google Scholar]
  15. Yoon, J.; Hwang, S.J.; Lee, J. Adversarial Purification with Score-Based Generative Models. In Proceedings of the International Conference on Machine Learning, Virtual, 18–24 July 2021; pp. 12062–12072. [Google Scholar]
  16. Lee, I.; Yoo, S.B. Latent-per: Ica-latent code editing framework for portrait emotion recognition. Mathematics 2022, 10, 4260. [Google Scholar] [CrossRef]
  17. Wang, J.; Lyu, Z.; Lin, D.; Dai, B.; Fu, H. Guided diffusion model for adversarial purification. arXiv 2022, arXiv:2205.14969. [Google Scholar]
  18. Lee, M.; Kim, D. Robust Evaluation of Diffusion-Based Adversarial Purification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 134–144. [Google Scholar]
  19. Lee, E.G.; Lee, M.S.; Yoon, J.H.; Yoo, S.B. IntensPure: Attack Intensity-Aware Secondary Domain Adaptive Diffusion for Adversarial Purification. In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, Jeju, Republic of Korea, 3–9 August 2024; pp. 956–964. [Google Scholar]
  20. Deng, Z.; Yang, X.; Xu, S.; Su, H.; Zhu, J. LIBRE: A Practical Bayesian Approach to Adversarial Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, 19–25 June 2021; pp. 972–982. [Google Scholar]
  21. Zhang, S.; Liu, F.; Yang, J.; Yang, Y.; Li, C.; Han, B.; Tan, M. Detecting Adversarial Data by Probing Multiple Perturbations Using Expected Perturbation Score. In Proceedings of the International Conference on Machine Learning, Honolulu, HI, USA, 23–29 July 2023; pp. 41429–41451. [Google Scholar]
  22. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  23. Zheng, L.; Shen, L.; Tian, L.; Wang, S.; Wang, J.; Tian, Q. Scalable Person Re-Identification: A Benchmark. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1116–1124. [Google Scholar]
  24. Qian, X.; Fu, Y.; Jiang, Y.-G.; Xiang, T.; Xue, X. Multi-Scale Deep Learning Architectures for Person Re-Identification. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 5399–5408. [Google Scholar]
  25. Li, W.; Zhu, X.; Gong, S. Harmonious Attention Network for Person Re-Identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2285–2294. [Google Scholar]
  26. Lee, I.; Yun, J.S.; Kim, H.H.; Na, Y.; Yoo, S.B. Latentgaze: Cross-domain gaze estimation through gaze-aware analytic latent code manipulation. In Proceedings of the Asian Conference on Computer Vision, Macao, China, 4–8 December 2022; pp. 3379–3395. [Google Scholar]
  27. Zheng, F.; Deng, C.; Sun, X.; Jiang, X.; Guo, X.; Yu, Z.; Huang, F.; Ji, R. Pyramidal Person Re-Identification via Multi-Loss Dynamic Training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 8514–8522. [Google Scholar]
  28. Wu, W.; Tao, D.; Li, H.; Yang, Z.; Cheng, J. Deep Features for Person Re-Identification on Metric Learning. Pattern Recognit. 2021, 110, 107424. [Google Scholar] [CrossRef]
  29. Kim, M.H.; Yoo, S.B. Memory-Efficient Discrete Cosine Transform Domain Weight Modulation Transformer for Arbitrary-Scale Super-Resolution. Mathematics 2023, 11, 3954. [Google Scholar] [CrossRef]
  30. Mohammed, H.J.; Al-Fahdawi, S.; Al-Waisy, A.S.; Zebari, D.A.; Ibrahim, D.A.; Mohammed, M.A.; Kadry, S.; Kim, J. ReID-DeePNet: A Hybrid Deep Learning System for Person Re-Identification. Mathematics 2022, 10, 3530. [Google Scholar] [CrossRef]
  31. Hong, Y.; Kim, M.J.; Lee, I.; Yoo, S.B. Fluxformer: Flow-Guided Duplex Attention Transformer via Spatio-Temporal Clustering for Action Recognition. IEEE Robot. Autom. Lett. 2023, 8, 6411–6418. [Google Scholar] [CrossRef]
  32. Li, Q.; Yan, C.; Peng, X. Learning the Meta Feature Transformer for Unsupervised Person Re-Identification. Mathematics 2024, 12, 1812. [Google Scholar] [CrossRef]
  33. Yun, J.S.; Kim, M.H.; Kim, H.I.; Yoo, S.B. Kernel adaptive memory network for blind video super-resolution. Expert Syst. Appl. 2024, 238, 122252. [Google Scholar] [CrossRef]
  34. Zheng, Z.; Zheng, L.; Yang, Y. A Discriminatively Learned CNN Embedding for Person Re-Identification. ACM Trans. Multimed. Comput. Commun. Appl. 2017, 14, 1–20. [Google Scholar] [CrossRef]
  35. Wu, L.; Wang, Y.; Gao, J.; Li, X. Where-and-When to Look: Deep Siamese Attention Networks for Video-Based Person Re-Identification. IEEE Trans. Multimed. 2018, 21, 1412–1424. [Google Scholar] [CrossRef]
  36. Chung, D.; Tahboub, K.; Delp, E.J. A Two Stream Siamese Convolutional Neural Network for Person Re-Identification. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 1983–1991. [Google Scholar]
  37. Li, D.X.; Fei, G.Y.; Teng, S.W. Learning Large Margin Multiple Granularity Features with an Improved Siamese Network for Person Re-Identification. Symmetry 2020, 12, 92. [Google Scholar] [CrossRef]
  38. Gong, X.; Zhu, S. Person re-identification based on two-stream network with attention and pose features. IEEE Access 2019, 7, 131374–131382. [Google Scholar] [CrossRef]
  39. Zhang, W.; He, X.; Yu, X.; Lu, W.; Zha, Z.; Tian, Q. A multi-scale spatial-temporal attention model for person re-identification in videos. IEEE Trans. Image Process. 2019, 29, 3365–3373. [Google Scholar] [CrossRef]
  40. Yoon, J.H.; Jung, J.W.; Yoo, S.B. Auxcoformer: Auxiliary and Contrastive Transformer for Robust Crack Detection in Adverse Weather Conditions. Mathematics 2024, 12, 690. [Google Scholar] [CrossRef]
  41. Xu, Y.; Zhao, L.; Qin, F. Dual attention-based method for occluded person re-identification. Knowl.-Based Syst. 2021, 212, 106554. [Google Scholar] [CrossRef]
  42. Chen, G.; Gu, T.; Lu, J.; Bao, J.A.; Zhou, J. Person re-identification via attention pyramid. IEEE Trans. Image Process. 2021, 30, 7663–7676. [Google Scholar] [CrossRef] [PubMed]
  43. Lee, E.G.; Lee, I.; Yoo, S.B. ClueCatcher: Catching Domain-Wise Independent Clues for Deepfake Detection. Mathematics 2023, 11, 3952. [Google Scholar] [CrossRef]
  44. Yang, F.; Yan, K.; Lu, S.; Jia, H.; Xie, X.; Gao, W. Attention driven person re-identification. Pattern Recognit. 2019, 86, 143–155. [Google Scholar] [CrossRef]
  45. Lee, I.; Lee, E.; Yoo, S.B. Latent-OFER: Detect, mask, and reconstruct with latent vectors for occluded facial expression recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 1536–1546. [Google Scholar]
  46. Lu, Y.; Jiang, M.; Liu, Z.; Mu, X. Dual-branch adaptive attention transformer for occluded person re-identification. Image Vis. Comput. 2023, 131, 104633. [Google Scholar] [CrossRef]
  47. Jia, M.; Sun, Y.; Zhai, Y.; Cheng, X.; Yang, Y.; Li, Y. Semi-Attention Partition for Occluded Person Re-Identification. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; pp. 998–1006. [Google Scholar]
  48. Wu, Y.; Bourahla, O.E.F.; Li, X.; Wu, F.; Tian, Q.; Zhou, X. Adaptive graph representation learning for video person re-identification. IEEE Trans. Image Process. 2020, 29, 8821–8830. [Google Scholar] [CrossRef]
  49. Zhang, Y.; Qian, Q.; Wang, H.; Liu, C.; Chen, W.; Wang, F. Graph convolution based efficient re-ranking for visual retrieval. IEEE Trans. Multimed. 2023, 26, 1089–1101. [Google Scholar] [CrossRef]
  50. Kim, M.H.; Kim, M.J.; Yoo, S.B. Occluded Part-aware Graph Convolutional Networks for Skeleton-based Action Recognition. In Proceedings of the 2024 IEEE International Conference on Robotics and Automation, Yokohama, Japan, 13–17 May 2024; pp. 7310–7317. [Google Scholar]
  51. Pan, H.; Liu, Q.; Chen, Y.; He, Y.; Zheng, Y.; Zheng, F.; He, Z. Pose-aided video-based person re-identification via recurrent graph convolutional network. IEEE Trans. Circuits Syst. Video Technol. 2023, 33, 7183–7196. [Google Scholar] [CrossRef]
  52. Hong, X.; Adam, T.; Ghazali, M. Tran-GCN: A Transformer-Enhanced Graph Convolutional Network for Person Re-Identification in Monitoring Videos. arXiv 2024, arXiv:2409.09391. [Google Scholar]
  53. Lian, Y.; Huang, W.; Liu, S.; Guo, P.; Zhang, Z.; Durrani, T.S. Person re-identification using local relation-aware graph convolutional network. Sensors 2023, 23, 8138. [Google Scholar] [CrossRef] [PubMed]
  54. Jung, J.W.; Yoon, J.H.; Yoo, S.B. DenseSphere: Multimodal 3D Object Detection under a Sparse Point Cloud Based on Spherical Coordinate. Expert Syst. Appl. 2024, 251, 124053. [Google Scholar] [CrossRef]
  55. Huang, M.; Hou, C.; Yang, Q.; Wang, Z. Reasoning and tuning: Graph attention network for occluded person re-identification. IEEE Trans. Image Process. 2023, 32, 1568–1582. [Google Scholar] [CrossRef] [PubMed]
  56. Lv, Y.; Wang, G.; Zhao, W.; Zhao, W.; Guan, Z. Edge-weight-embedding Graph Convolutional Network for Person Re-identification. IEEE Intell. Syst. 2024, 39, 74–82. [Google Scholar] [CrossRef]
  57. Xian, Y.; Yang, J.; Yu, F.; Zhang, J.; Sun, X. Graph-based self-learning for robust person re-identification. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Honolulu, HI, USA, 3–7 January 2023; pp. 4789–4798. [Google Scholar]
  58. Zhang, H.; Liu, M.; Li, Y.; Yan, M.; Gao, Z.; Chang, X.; Nie, L. Attribute-Guided Collaborative Learning for Partial Person Re-Identification. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 14144–14160. [Google Scholar] [CrossRef]
  59. Zhang, J.; Peng, W.; Wang, R.; Lin, Y.; Zhou, W.; Lan, G. Enhance domain-invariant transferability of adversarial examples via distance metric attack. Mathematics 2022, 10, 1249. [Google Scholar] [CrossRef]
  60. Chen, Z.; Li, B.; Wu, S.; Ding, S.; Zhang, W. Query-efficient decision-based black-box patch attack. IEEE Trans. Inf. Forensics Secur. 2023, 18, 5522–5536. [Google Scholar] [CrossRef]
  61. Chen, Z.; Li, B.; Wu, S.; Jiang, K.; Ding, S.; Zhang, W. Content-based unrestricted adversarial attack. In Proceedings of the Advances in Neural Information Processing Systems, New Orleans, LA, USA, 10–16 December 2023; pp. 51719–51733. [Google Scholar]
  62. Wang, F.; Ma, Z.; Zhang, X.; Li, Q.; Wang, C. DDSG-GAN: Generative Adversarial Network with Dual Discriminators and Single Generator for Black-Box Attacks. Mathematics 2023, 11, 1016. [Google Scholar] [CrossRef]
  63. Zheng, Z.; Zheng, L.; Yang, Y.; Wu, F. Query Attack via Opposite-Direction Feature: Towards Robust Image Retrieval. arXiv 2018, arXiv:1809.02681. [Google Scholar]
  64. Subramanyam, A.V. Meta generative attack on person reidentification. IEEE Trans. Circuits Syst. Video Technol. 2023, 33, 4429–4434. [Google Scholar] [CrossRef]
  65. Zheng, Z.; Zheng, L.; Hu, Z.; Yang, Y. Open Set Adversarial Examples. arXiv 2018, arXiv:1809.02681. [Google Scholar]
  66. Yu, C.; Han, B.; Gong, M.; Shen, L.; Ge, S.; Du, B.; Liu, T. Robust weight perturbation for adversarial training. arXiv 2022, arXiv:2205.14826. [Google Scholar]
  67. Du, Y.; Mordatch, I. Implicit generation and modeling with energy based models. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; pp. 3608–3618. [Google Scholar]
  68. Hill, M.; Mitchell, J.; Zhu, S.C. Stochastic security: Adversarial defense using long-run dynamics of energy-based models. arXiv 2020, arXiv:2005.13525. [Google Scholar]
  69. Kang, M.; Tran, T.Q.; Cho, S.; Kim, D. CAP-GAN: Towards adversarial robustness with cycle-consistent attentional purification. In Proceedings of the 2021 International Joint Conference on Neural Networks, Shenzhen, China, 18–22 July 2021; pp. 1–8. [Google Scholar]
  70. Jin, G.; Shen, S.; Zhang, D.; Dai, F.; Zhang, Y. Ape-gan: Adversarial perturbation elimination with gan. In Proceedings of the ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing, Brighton, UK, 12–17 May 2019; pp. 3842–3846. [Google Scholar]
  71. Qin, H.; Fu, Y.; Zhang, H.; El-Yacoubi, M.A.; Gao, X.; Song, Q.; Wang, J. MsMemoryGAN: A Multi-scale Memory GAN for Palm-vein Adversarial Purification. arXiv 2024, arXiv:2408.10694. [Google Scholar]
  72. Ankile, L.L.; Midgley, A.; Weisshaar, S. Denoising diffusion probabilistic models as a defense against adversarial attacks. arXiv 2023, arXiv:2301.06871. [Google Scholar]
  73. Shi, Y.; Du, M.; Wu, X.; Guan, Z.; Sun, J.; Liu, N. Black-box backdoor defense via zero-shot image purification. In Proceedings of the Advances in Neural Information Processing Systems, New Orleans, LA, USA, 10–16 December 2023; pp. 57336–57366. [Google Scholar]
  74. Sun, J.; Wang, J.; Nie, W.; Yu, Z.; Mao, Z.; Xiao, C. A critical revisit of adversarial robustness in 3D point cloud recognition with diffusion-driven purification. In Proceedings of the International Conference on Machine Learning, Honolulu, HI, USA, 23–29 July 2023; pp. 33100–33114. [Google Scholar]
  75. Xiao, C.; Chen, Z.; Jin, K.; Wang, J.; Nie, W.; Liu, M.; Song, D. Densepure: Understanding diffusion models for adversarial robustness. In Proceedings of the Eleventh International Conference on Learning Representations, Kigali, Rwanda, 1–5 May 2023. [Google Scholar]
  76. Lee, E.; Lee, E.-J.; Anwar, S.M.; Yoo, S.B. Child FER: Domain-Agnostic Facial Expression Recognition in Children Using a Secondary Image Diffusion Model. In Proceedings of the ICASSP 2024—2024 IEEE International Conference on Acoustics, Speech and Signal Processing, Seoul, Republic of Korea, 14–19 April 2024; pp. 2750–2754. [Google Scholar]
  77. Carlini, N.; Tramer, F.; Dvijotham, K.D.; Rice, L.; Sun, M.; Kolter, J.Z. (Certified!!) Adversarial robustness for free! arXiv 2023, arXiv:2206.10550. [Google Scholar]
  78. He, Z.; Rakin, A.S.; Fan, D. Parametric noise injection: Trainable randomness to improve deep neural network robustness against adversarial attack. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 588–597. [Google Scholar]
  79. Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and harnessing adversarial examples. arXiv 2014, arXiv:1412.6572. [Google Scholar]
  80. Ristani, E.; Solera, F.; Zou, R.; Cucchiara, R.; Tomasi, C. Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 17–35. [Google Scholar]
  81. Wang, X.; Li, S.; Liu, M.; Wang, Y.; Roy-Chowdhury, A.K. Multi-expert adversarial attack detection in person re-identification using context inconsistency. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 15097–15107. [Google Scholar]
Figure 1. Top 10 retrieval results from a ResNet-50-based person re-identification model [22] on the Market-1501 dataset [23] under various adversarial attacks. Clean images produce plausible and consistent matches, whereas adversarial perturbations, such as those generated by Metric-FGSM [1], Deep mis-ranking [2], and MetaAttack [3], lead to more disordered and incoherent retrievals. Increasing the attack intensity causes the retrieved images to diverge from the query and diminishes consistency in the top 10 retrieval results. The facial area is obscured for privacy protection.
Figure 2. Visualization of similarities between retrievals, represented by connecting lines. Top 10 similarities are depicted as connections between the query and each of the top 10 retrievals. Inter-rank similarities are depicted by lines connecting the people in the top 10 list, displaying their relative similarities in ranking.
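As a concrete reading of Figure 2, the sketch below computes both similarity sets from L2-normalized query and gallery embeddings. It assumes cosine similarity and takes inter-rank similarities over all unordered pairs among the top-k retrievals; the paper's exact pairing may differ.

```python
import numpy as np

def query_response_similarities(query_feat, gallery_feats, k=10):
    """Compute the two similarity sets drawn in Figure 2: top-k similarities
    (query to each retrieval) and inter-rank similarities (between retrievals)."""
    q = query_feat / np.linalg.norm(query_feat)
    G = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    scores = G @ q                          # cosine similarity of each gallery item
    top_idx = np.argsort(-scores)[:k]       # indices of the top-k retrievals
    topk_sims = scores[top_idx]             # k query-to-retrieval similarities
    T = G[top_idx]
    pairwise = T @ T.T                      # similarities among the top-k items
    rows, cols = np.triu_indices(k, 1)      # unique unordered pairs
    inter_rank_sims = pairwise[rows, cols]  # k*(k-1)/2 inter-rank similarities
    return topk_sims, inter_rank_sims

# Example with random embeddings standing in for re-ID features.
rng = np.random.default_rng(0)
topk, inter = query_response_similarities(rng.normal(size=2048),
                                          rng.normal(size=(100, 2048)))
print(topk.shape, inter.shape)  # (10,) (45,)
```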
Figure 3. Histograms of response incoherence for top 10 similarities (blue) and inter-rank similarities (orange) using the Market-1501 dataset query set. (a) Response incoherence for the clean query set and its top 10 retrievals. (b–d) Response incoherence for perturbed query sets subjected to Metric-FGSM attacks with intensities of 4, 12, and 16, respectively.
Figure 4. Histograms of response incoherence for top 10 similarities (blue) and inter-rank similarities (orange) using the Market-1501 dataset query set. (a) Response incoherence for the clean query set and its top 10 retrievals. (b–d) Response incoherence for perturbed query sets subjected to Deep mis-ranking attacks with intensities of 4, 12, and 16, respectively.
Figure 5. Histograms of response incoherence for top 10 similarities (blue) and inter-rank similarities (orange) using the DukeMTMC-reID dataset query set. (a) Response incoherence for the clean query set and its top 10 retrievals. (bd) Response incoherence for perturbed query sets subjected to Metric-FGSM attacks with intensities of 4, 12, and 16, respectively.
Figure 6. Correlation between attack intensity and both top 10 similarity and inter-rank similarity for person re-ID attacks (Metric-FGSM, Deep mis-ranking, and MetaAttack) using the Market-1501 query set. (a) Top 10 similarity under varying attack intensities. (b) Inter-rank similarity under the same attack intensities.
Figure 7. Illustration of the attack intensity estimator based on the query response analysis. Identity features are extracted from the last convolutional layer of ResNet50, followed by global average pooling.
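Since the caption specifies the extractor, a minimal PyTorch sketch follows; the ImageNet-pretrained weights and the 256 × 128 input resolution are stand-ins for the re-ID-trained backbone and preprocessing used in the paper.

```python
import torch
from torchvision import models

# Identity-feature extraction as described in Figure 7: take the output of
# ResNet-50's last convolutional stage and apply global average pooling.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-2])  # drop GAP + fc
feature_extractor.eval()

@torch.no_grad()
def extract_identity_feature(img_batch):
    fmap = feature_extractor(img_batch)   # (N, 2048, H, W) feature maps
    return fmap.mean(dim=(2, 3))          # global average pooling -> (N, 2048)

x = torch.randn(4, 3, 256, 128)           # a typical re-ID input resolution
print(extract_identity_feature(x).shape)  # torch.Size([4, 2048])
```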
Table 1. Comparison of detection accuracy (Acc) and the area under the receiver operating characteristic curve (AUROC) of adversarial attack detection on the Market-1501 dataset. The best results are in boldface.
Method | Metric-FGSM (ϵ = 4) Acc / AUROC | Deep Mis-Ranking (ϵ = 16) Acc / AUROC | MetaAttack (ϵ = 8) Acc / AUROC
LiBRe [20] | - / 0.933 | - / 0.962 | - / 0.913
EPS-AD [21] | - / 0.955 | - / 0.972 | - / 0.941
MEAAD [81] | 97.30 / 0.980 | 98.50 / 1.000 | 94.02 / 0.944
IntensPure [19] | 98.85 / 1.000 | 99.55 / 1.000 | 95.88 / 0.985
QuEst (Ours) | 99.08 / 1.000 | 99.71 / 1.000 | 98.69 / 0.991
Table 2. Comparison of detection accuracy (Acc) and the area under the receiver operating characteristic curve (AUROC) of adversarial attack detection on the DukeMTMC-reID dataset. The best results are in boldface.
Method | Metric-FGSM (ϵ = 4) Acc / AUROC | Deep Mis-Ranking (ϵ = 16) Acc / AUROC | MetaAttack (ϵ = 8) Acc / AUROC
LiBRe [20] | - / 0.945 | - / 0.968 | - / 0.961
EPS-AD [21] | - / 0.962 | - / 0.986 | - / 0.975
MEAAD [81] | 93.75 / 0.964 | 95.34 / 0.992 | 90.80 / 0.972
IntensPure [19] | 96.50 / 0.961 | 97.62 / 0.995 | 91.50 / 0.985
QuEst (Ours) | 96.87 / 0.980 | 98.54 / 0.996 | 93.35 / 0.988
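For reference, the Acc and AUROC values reported in Tables 1 and 2 can be computed from detector outputs as sketched below; the labels, scores, and the 0.5 decision threshold are illustrative values, not the experimental data.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# labels: 1 = adversarial query, 0 = clean query; scores: detector confidence.
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
scores = np.array([0.04, 0.21, 0.55, 0.10, 0.91, 0.58, 0.76, 0.47])

auroc = roc_auc_score(labels, scores)        # threshold-free ranking quality
preds = (scores >= 0.5).astype(int)          # fixed threshold for a hard decision
acc = 100.0 * accuracy_score(labels, preds)  # reported in percent, as in the tables

print(f"Acc = {acc:.2f}, AUROC = {auroc:.3f}")  # Acc = 75.00, AUROC = 0.938
```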
Table 3. Comparison of the mean absolute error for estimating the attack intensity in person re-identification attacks on the Market-1501 dataset. The best results are in boldface.
Method | Metric-FGSM | Deep Mis-Ranking | MetaAttack
MEAAD [81] | 3.340 | 3.189 | 3.912
IntensPure [19] | 0.806 | 0.769 | 1.071
QuEst (Ours) | 0.747 | 0.720 | 0.994
Table 4. Comparison of the mean absolute error for estimating the attack intensity in person re-identification attacks on the DukeMTMC-reID dataset. The best results are in boldface.
Method | Metric-FGSM | Deep Mis-Ranking | MetaAttack
MEAAD [81] | 3.901 | 3.848 | 4.150
IntensPure [19] | 1.060 | 0.947 | 1.544
QuEst (Ours) | 0.852 | 0.798 | 1.039
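The mean absolute error in Tables 3 and 4 measures the average gap between the estimated and ground-truth attack intensities; a minimal example with illustrative values:

```python
import numpy as np

# Illustrative estimates against ground-truth intensities (epsilon values).
true_eps = np.array([4.0, 4.0, 8.0, 8.0, 12.0, 16.0])
pred_eps = np.array([4.6, 3.5, 8.9, 7.2, 11.3, 15.1])

mae = np.mean(np.abs(pred_eps - true_eps))
print(f"MAE = {mae:.3f}")  # 0.733; lower is better
```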
Table 5. Rank-1 accuracy of person re-identification on the Market-1501 dataset under various attacks and attack intensities (ϵ = 0, 4, 8, 12, 16), comparing the performance of adversarial purification methods with and without the proposed attack intensity estimator for adjusting the purification strength. The best results are in boldface.
Method | Clean (ϵ = 0) | Metric-FGSM (ϵ = 4 / 8 / 12 / 16) | Deep Mis-Ranking (ϵ = 4 / 8 / 12 / 16) | MetaAttack (ϵ = 4 / 8 / 12 / 16)
ResNet50 [22] (Baseline) | 88.84 | 52.95 / 20.86 / 4.59 / 0.00 | 66.48 / 29.63 / 11.67 / 5.70 | 67.13 / 38.69 / 8.14 / 3.00
DiffPure [9] | 74.73 | 65.08 / 51.54 / 47.12 / 38.93 | 70.54 / 52.02 / 21.29 / 13.00 | 71.59 / 66.33 / 46.02 / 39.73
GNSP [18] | 73.25 | 68.17 / 60.77 / 56.04 / 54.44 | 68.43 / 53.15 / 23.16 / 19.71 | 70.98 / 59.59 / 55.61 / 48.75
IntensPure [19] | 88.36 | 72.74 / 66.65 / 62.05 / 60.42 | 76.51 / 74.52 / 56.41 / 49.52 | 78.50 / 74.52 / 70.99 / 65.88
DiffPure with QuEst | 88.51 | 74.08 / 67.83 / 63.81 / 62.21 | 77.43 / 75.67 / 56.78 / 51.45 | 79.11 / 75.12 / 74.03 / 67.29
GNSP with QuEst | 88.51 | 75.16 / 68.24 / 64.03 / 62.84 | 77.20 / 75.33 / 56.45 / 51.78 | 79.22 / 76.59 / 73.30 / 67.05
IntensPure with QuEst | 88.40 | 73.15 / 67.00 / 62.98 / 62.56 | 76.98 / 75.02 / 57.83 / 52.07 | 78.83 / 75.96 / 73.04 / 68.11
Table 6. Rank-1 accuracy of person re-identification on the DukeMTMC-reID dataset under various attacks and attack intensities (ϵ = 0, 4, 8, 12, 16), comparing the performance of adversarial purification methods with and without the proposed attack intensity estimator for adjusting the purification strength. The best results are in boldface.
Method | Clean (ϵ = 0) | Metric-FGSM (ϵ = 4 / 8 / 12 / 16) | Deep Mis-Ranking (ϵ = 4 / 8 / 12 / 16) | MetaAttack (ϵ = 4 / 8 / 12 / 16)
ResNet50 [22] (Baseline) | 79.35 | 54.09 / 15.72 / 2.01 / 0.00 | 49.69 / 18.27 / 5.79 / 2.06 | 55.07 / 19.21 / 1.17 / 0.40
DiffPure [9] | 70.69 | 62.79 / 52.52 / 46.99 / 42.29 | 69.39 / 49.33 / 42.55 / 21.86 | 63.51 / 63.33 / 57.05 / 53.59
GNSP [18] | 69.40 | 63.89 / 50.39 / 45.60 / 41.79 | 64.95 / 51.66 / 43.20 / 35.84 | 68.31 / 64.86 / 59.91 / 56.94
IntensPure [19] | 78.95 | 64.99 / 57.59 / 54.62 / 54.13 | 71.32 / 58.35 / 44.83 / 44.61 | 70.60 / 66.42 / 60.57 / 59.69
DiffPure with QuEst | 79.15 | 65.20 / 58.12 / 56.15 / 55.00 | 71.54 / 59.61 / 46.57 / 47.50 | 71.62 / 67.91 / 61.50 / 61.30
GNSP with QuEst | 79.10 | 65.50 / 58.25 / 56.30 / 55.20 | 71.60 / 59.18 / 46.10 / 47.60 | 71.30 / 67.65 / 62.06 / 61.40
IntensPure with QuEst | 79.10 | 65.06 / 58.90 / 56.34 / 56.82 | 71.43 / 59.88 / 46.50 / 47.97 | 71.01 / 68.48 / 61.64 / 61.12
Table 7. Complexity comparison of diffusion-based adversarial purification methods on the Market-1501 dataset, with and without QuEst.
Purification Method | FLOPs (G) ↓ | Params (M) ↓ | Time (ms) ↓
DiffPure [9] | 583 | 190 | 366
GNSP [18] | 530 | 95 | 249
IntensPure [19] (only purifier) | 39 | 751 | 59
DiffPure with QuEst | 588 (583 + 5) | 217 (190 + 27) | 371 (366 + 5)
GNSP with QuEst | 535 (530 + 5) | 122 (95 + 27) | 254 (249 + 5)
IntensPure (only purifier) with QuEst | 44 (39 + 5) | 778 (751 + 27) | 64 (59 + 5)
Table 8. Complexity comparison of attack intensity estimation methods on the Market-1501 dataset.
Attack Intensity Estimation Method | FLOPs (G) ↓ | Params (M) ↓ | Time (ms) ↓
MEAAD [81] | 28 | 21 | 27
IntensPure [19] (only estimator) | 11 | 82 | 5
QuEst | 5 | 27 | 5
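Measurements of the kind reported in Tables 7 and 8 can be approximated with a generic profiling routine. The sketch below counts parameters and averages wall-clock inference time; FLOPs counting would need an additional tool such as fvcore or thop, and the resnet18 model is a hypothetical stand-in, not one of the compared estimators.

```python
import time
import torch
from torchvision import models

def profile(model, input_shape=(1, 3, 256, 128), warmup=5, iters=50):
    """Count parameters and average single-query inference time on CPU."""
    params_m = sum(p.numel() for p in model.parameters()) / 1e6
    x = torch.randn(*input_shape)
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):                  # warm-up runs stabilize timing
            model(x)
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        ms = 1000.0 * (time.perf_counter() - start) / iters
    return params_m, ms

params_m, ms = profile(models.resnet18())        # stand-in network, not an estimator
print(f"Params: {params_m:.1f} M, Time: {ms:.1f} ms per query")
```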
Table 9. Evaluation of adversarial detection accuracy and mean absolute error based on the number of top-rank images on the Market-1501 dataset under the Metric-FGSM attack. The best results are in boldface.
Number of Rank Images | 1 | 5 | 10 | 15 | 20
Accuracy ↑ | 81.68 | 97.49 | 99.08 | 98.85 | 98.77
Mean absolute error ↓ | 3.413 | 1.209 | 0.747 | 0.806 | 0.850
Table 10. Ablation study of QuEst, showing detection accuracy (Acc) and mean absolute error (MAE) on the Market-1501 dataset under the Metric-FGSM attack. A checkmark (✓) indicates the presence of the component. The best results are in boldface.
Top-k Similarities | Inter-Rank Similarities | Response Incoherence | Acc ↑ | MAE ↓
✓ |   |   | 92.13 | 4.094
  | ✓ |   | 93.77 | 3.614
  |   | ✓ | 91.80 | 3.928
✓ | ✓ |   | 94.89 | 3.508
✓ |   | ✓ | 95.18 | 2.480
  | ✓ | ✓ | 97.37 | 1.894
✓ | ✓ | ✓ | 99.08 | 0.747
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
