Article

An Improved Variable Step SAMP Method Based on Correlation Principle

College of Electrical and Information Engineering, Zhengzhou University of Light Industry, Zhengzhou 450002, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(22), 4502; https://doi.org/10.3390/electronics13224502
Submission received: 7 October 2024 / Revised: 31 October 2024 / Accepted: 14 November 2024 / Published: 15 November 2024

Abstract

The fixed step size in the sparsity adaptive matching pursuit (SAMP) algorithm can result in limited accuracy and sparsity overestimation. To address this, this paper proposes a variable-step sparsity adaptive matching pursuit algorithm based on the Spearman correlation coefficient. By measuring the Spearman correlation coefficient between the candidate set and the input signal, and introducing an adaptive step size adjustment method based on the value of the correlation coefficient, the performance of the SAMP algorithm is optimized and its adaptability is enhanced. Extensive experiments demonstrate that the proposed method achieves good reconstruction results for both one-dimensional sparse signals and two-dimensional images.

1. Introduction

In recent years, with the continuous development of artificial intelligence, smart devices have been continuously updated and applied across various industries, making a variety of sensors an important bridge connecting these smart devices. Sensors collect data information and perform calculations, extractions, reconstructions, and other processes to achieve control over smart devices. In light of the vast amounts of data that need to be stored and transmitted, efficient and high-quality storage and transmission have garnered the attention of researchers.
The Nyquist sampling theorem is one of the most important theoretical foundations of modern signal processing. To prevent distortion or aliasing in the reconstructed signal, the sampling rate must exceed twice the highest frequency of the sampled signal [1]. However, high sampling rates and the process of compressed transmission waste a significant amount of resources. The emergence of compressed sensing provides a new approach to effectively address this issue. Compressed sensing can capture all the information in the original signal effectively with a lower sampling rate, enabling the reconstruction of the original signal and significantly reducing the amount of data required for storage or transmission [2]. Therefore, it has been widely applied in fields such as image encryption [3], medical diagnosis [4], radar imaging [5], and superharmonic detection [6].
The key to reconstructing signals with compressed sensing technology lies in the choice of reconstruction algorithm [7]. Currently, common algorithms for compressed sensing reconstruction are divided into three categories: convex optimization algorithms [8,9], greedy algorithms [10], and combinatorial algorithms [11]. Convex optimization algorithms can accurately reconstruct the original signal under certain conditions; however, their high computational complexity makes them difficult to apply to large-scale or high-dimensional signals [12]. Combinatorial algorithms achieve the reconstruction of the original signal by combining different algorithms. However, compared to single algorithms, this approach requires more computational resources, making it challenging to implement in resource-constrained situations [13]. In contrast, greedy algorithms have garnered widespread attention due to their low computational complexity and short reconstruction time [14]. The Matching Pursuit (MP) algorithm has been the most widely used greedy algorithm for signal reconstruction in recent years, leading to the development of many improved algorithms based on MP, such as OMP, ROMP, and StOMP. However, these algorithms require the sparsity of the signal as prior information, which is often difficult to ascertain, thus reducing the applicability of many MP algorithms. The emergence of the SAMP algorithm has broken this deadlock, as it is widely used due to its characteristic of not requiring sparsity as prior information; researchers have also proposed many improved algorithms based on SAMP, such as CCSAMP and VSSAMP. Although the aforementioned methods yield good reconstruction results under certain conditions, they are somewhat limited by their need for prior information or fixed step sizes, often resulting in unsatisfactory reconstruction outcomes.
To address this, this paper proposes a variable-step sparse adaptive matching pursuit algorithm based on the Spearman correlation coefficient, which introduces an adaptive step size adjustment method without requiring prior information, enabling the high-quality reconstruction of one-dimensional sparse signals and two-dimensional image signals.
The organization of the rest of this paper is as follows: Section 2 discusses related work on reconstruction algorithms, while Section 3 covers the theoretical background of compressed sensing and the process of the SAMP algorithm. Section 4 presents the algorithm proposed in this paper, Section 5 details the experimental results of the proposed method under different signals, and finally, Section 6 concludes the paper.

2. Related Work

Reliable reconstruction algorithms not only improve reconstruction accuracy but also exhibit good efficiency, and greedy algorithms possess this characteristic; therefore, this section focuses on greedy algorithms.
The main idea of the greedy algorithm is to approximate the original signal during the iterative process, resulting in a reconstructed signal with high precision. The Matching Pursuit (MP) [15] and Orthogonal Matching Pursuit (OMP) [16] algorithms are among the earliest greedy algorithms; OMP effectively resolves the issue of atom redundancy in MP by ensuring the orthogonality between atoms selected in successive iterations, thus providing greater robustness. However, this orthogonality also introduces a series of problems; for instance, the OMP algorithm may become trapped in local optima and struggle to converge in certain cases [17]. To address this, Needell et al. proposed the Regularized Orthogonal Matching Pursuit (ROMP) algorithm [18], which introduces a regularized atom-selection step, thereby mitigating the issues arising from orthogonality and reducing computational complexity while ensuring algorithm performance. For the aforementioned algorithms, if the selected atoms do not contain information from the original signal, the reconstruction quality will diminish. In response, Needell and Tropp proposed the Compressive Sampling Matching Pursuit (CoSaMP) algorithm, which incorporates a backtracking approach to identify and reselect atoms that do not meet the selection criteria, effectively enhancing algorithm performance by choosing the columns most relevant to the current residual. Similarly, the Subspace Pursuit (SP) [19] algorithm also incorporates the backtracking concept.
However, the aforementioned algorithms require the sparsity of the signal as prior information, limiting their applicability. In response, Donoho et al. proposed the Stagewise Orthogonal Matching Pursuit (StOMP) algorithm [20], which does not require signal sparsity as an input parameter and is advantageous when sparsity is unknown. Subsequently, to address the issue of slow convergence speed due to strong orthogonality, Blumensath et al. introduced the Stagewise Weak Orthogonal Matching Pursuit (SWOMP) algorithm [21], which incorporates a weak selection process and sets a threshold for atom selection to ensure reconstruction accuracy, though the choice of threshold remains an unresolved issue. Similarly, the Sparsity Adaptive Matching Pursuit (SAMP) [22] algorithm also does not require sparsity as prior information; it is not constrained by preset thresholds and gradually approximates the original signal through continuous iteration with a fixed step size. However, the choice of an appropriate step size affects the reconstruction performance; a smaller step size may slow convergence, increasing iterations and time, while a larger step size may skip the optimal solution, reducing reconstruction accuracy [23]. In response, researchers improved the SAMP algorithm by introducing variable step size strategies to reduce iteration counts and enhance reconstruction accuracy. Huang et al. proposed the Adaptive Regularized Matching Pursuit Algorithm for Compressed Sensing with Sparsity and Step Size [24], which uses a coefficient α to dynamically adjust the step size. Wang et al. introduced the Improved Sparsity Adaptive Matching Pursuit Algorithm based on Compressed Sensing [25], which dynamically adjusts the step size by incorporating a nonlinear function and a variable step size correction factor, enhancing reconstruction accuracy.
The Variable Step Size Sparsity Adaptive Matching Pursuit (VSSAMP) algorithm estimates sparsity and determines the step size based on the energy difference between two residuals, optimizing reconstruction performance for one-dimensional signals and two-dimensional images while reducing reconstruction time [26]. The Correlation Coefficient Sparsity Adaptive Matching Pursuit (CCSAMP) Algorithm [27] identifies a linear relationship between the signal recovered from the support set and the candidate set. It uses the Pearson correlation coefficient (PCC) to measure the linear relationship between two vectors, enabling dynamic adjustment of the step size. However, the Pearson correlation coefficient requires variables to follow a normal distribution, which may limit its applicability to non-normally distributed data. Additionally, the Pearson correlation coefficient is highly sensitive to outliers, which may cause results to deviate from the true relationship.
To address the aforementioned issues, this paper proposes a variable step size sparsity adaptive Matching Pursuit Algorithm based on the Spearman correlation coefficient, which measures the similarity between the candidate set and the input signal, adapting the step size through correlation-based partitioning. This algorithm achieves high reconstruction accuracy for one-dimensional sparse signals and two-dimensional image signals.

3. Preliminaries

3.1. Compressed Sensing Theory

Compressed sensing offers unique advantages in signal processing [28]; the premise for signal reconstruction is that the original signal is sparse, yet most signals do not meet the sparsity condition in their original domain. Therefore, compressed sensing utilizes a sparse basis matrix Ψ = [Ψ_1, Ψ_2, …, Ψ_N], allowing the original signal x to be expressed as a sparse signal θ according to Formula (1) as follows:
x = Σ_{i=1}^{N} Ψ_i θ_i = Ψθ
where θ is a sparse signal under the sparse basis Ψ. The sparse basis is typically one of the discrete Fourier transform basis, discrete cosine transform basis, or wavelet transform basis. The essence of compressed sensing is to project high-dimensional signals into a low-dimensional space using a measurement matrix, expressed mathematically as follows:
y = Φ x
The compressed signal y is obtained through Formula (2), where y is a column vector of size M × 1, x is a column vector of size N × 1, and M << N. Φ is the measurement matrix. The measurement matrix can generally be divided into deterministic measurement matrices and random measurement matrices. Since Gaussian random matrices and Bernoulli random matrices in random measurement matrices can generally satisfy the restricted isometry property (RIP), random measurement matrices are typically chosen. Combining (1) and (2), we can conclude that
y = Φ x = Φ Ψ θ = A θ
As shown in Formula (3), the product of the measurement matrix and the sparse basis yields the sensing matrix A, expressed as A = ΦΨ. Thus, given the compressed signal y, the measurement matrix Φ, and the sparse basis Ψ, the sparse signal θ and the original signal x can be obtained from Formula (3). Generally, the accurate reconstruction of the sparse signal can be transformed into solving the following optimization problem:
min_θ ‖θ‖_0, subject to y = Φx
where ‖·‖0 represents the L0-norm, which counts the number of non-zero elements in a vector. However, the L0-norm seeks the subset with the fewest non-zero elements, which requires combinatorial optimization across all subsets, complicating the problem; thus, the L0-norm is NP-hard. Since the L1-norm has been shown to be the optimal convex approximation of the L0-norm, it is used to replace the L0-norm to solve for sparse solutions [29]. The formula is as follows:
min_θ ‖θ‖_1, subject to y = Φx
From the above, it is clear that successfully reconstructing a signal involves three important components: the selection of the sparse basis, the choice of the measurement matrix, and the design of the reconstruction algorithm. These also represent three distinct research directions in the study of compressed sensing [30].
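As a concrete illustration of Formulas (1)–(3), the sketch below builds a K-sparse signal, applies a Gaussian measurement matrix, and checks that y = Φx = Aθ. All sizes, the random seed, and the identity sparse basis are illustrative choices for this sketch, not a prescription of the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M, K = 256, 128, 10                          # signal length, measurements, sparsity (illustrative)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # Gaussian random measurement matrix
Psi = np.eye(N)                                 # identity sparse basis: signal is sparse as-is

# Build a K-sparse coefficient vector theta
theta = np.zeros(N)
support = rng.choice(N, K, replace=False)
theta[support] = rng.standard_normal(K)

x = Psi @ theta                                 # Formula (1): x = Psi * theta
y = Phi @ x                                     # Formula (2): y = Phi * x, with M << N
A = Phi @ Psi                                   # sensing matrix A = Phi * Psi

assert y.shape == (M,)                          # compressed to M measurements
assert np.allclose(y, A @ theta)                # Formula (3): y = A * theta
```

The dimensionality reduction is visible directly: x has 256 entries while y has only 128, yet y retains enough information to recover the 10 non-zero coefficients.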

3.2. Introduction to SAMP Algorithm

The SAMP algorithm is widely used in compressed sensing due to its characteristic of not requiring sparsity as prior information. The pseudo-code for the SAMP algorithm is expressed as follows (Algorithm 1) [22]:
Algorithm 1: SAMP
Input: compressed signal y, sensing matrix A, step size S.
Output: estimated sparse representation coefficient θ̂_t.
1: Initialization:
2:   r_0 = y, Λ_0 = ∅, A_0 = ∅, L = S, stage = 1, t = 1
3: Repeat:
4:   S_t = Max(A^T r_{t−1}, L)
5:   C_t = Λ_{t−1} ∪ S_t
6:   θ̂_t = argmin ‖y − A_t θ_t‖ = (A_t^T A_t)^{-1} A_t^T y
7:   F_t = Max(θ̂_t, L)
8:   r_new = y − A_{tL}(A_{tL}^T A_{tL})^{-1} A_{tL}^T y
9:   If ‖r_new‖_2 < 10^{-6} then
10:    Output θ̂_t
11:  else if ‖r_new‖_2 ≥ ‖r_{t−1}‖_2
12:    stage = stage + 1
13:    L = stage × S
14:    t = t + 1
15:  else
16:    r_t = r_new
17:    Λ_t = F_t
18:    t = t + 1
19:  end if
20: Until ‖r_new‖_2 < 10^{-6}
21: Output:
22:   θ̂_t = (A_t^T A_t)^{-1} A_t^T y
where Λt is the index set of the t-th iteration, aj represents the j-th column of the sensing matrix A, At represents the set of columns selected from the sensing matrix A under the index set Ct, and θt represents the estimated sparse representation coefficients of the signal.
From the steps of the algorithm, it can be seen that, during the step size update phase, in each iteration, the step size increases by a fixed S. This prevents the dynamic updating of the step size, affecting the reconstruction accuracy of the signal. When S is small, the accuracy of the sparse estimate is high, but the number of iterations increases, leading to inefficient reconstruction. Conversely, when S is large, the number of iterations decreases, but there is a risk of losing the optimal solution, resulting in reduced reconstruction accuracy.
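The fixed-step loop above can be sketched in NumPy as follows. This is an illustrative reading of Algorithm 1, not the authors' reference code; the function name `samp`, the `max_iter` safeguard, and the use of `numpy.linalg.lstsq` for the least squares steps are implementation choices of this sketch, while the 10⁻⁶ stopping tolerance and the stage/step-size update follow the pseudocode.

```python
import numpy as np

def samp(y, A, step, tol=1e-6, max_iter=None):
    """Minimal fixed-step SAMP sketch (illustrative reading of Algorithm 1)."""
    M, N = A.shape
    max_iter = max_iter or M
    r = y.copy()                       # r_0 = y
    F = np.array([], dtype=int)        # support set, initially empty
    L = step                           # L = S
    stage = 1
    for _ in range(max_iter):
        # Preliminary atoms: L columns most correlated with the residual
        Sk = np.argsort(np.abs(A.T @ r))[-L:]
        C = np.union1d(F, Sk)                          # candidate set C_t
        theta_C, *_ = np.linalg.lstsq(A[:, C], y, rcond=None)
        # Backtracking: keep the L candidates with the largest coefficients
        F_new = C[np.argsort(np.abs(theta_C))[-L:]]
        theta_F, *_ = np.linalg.lstsq(A[:, F_new], y, rcond=None)
        r_new = y - A[:, F_new] @ theta_F
        if np.linalg.norm(r_new) < tol:                # stopping condition
            F = F_new
            break
        if np.linalg.norm(r_new) >= np.linalg.norm(r):
            stage += 1                                 # residual stalled:
            L = stage * step                           # enlarge L by a fixed S
        else:
            F = F_new                                  # accept the new support
            r = r_new
    theta = np.zeros(N)
    theta[F], *_ = np.linalg.lstsq(A[:, F], y, rcond=None)
    return theta
```

Because L only ever grows in increments of the fixed S, this sketch exhibits exactly the trade-off described above: a small `step` needs many stage increases to reach the true sparsity, while a large `step` can overshoot it.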

4. Proposed Method

The SAMP algorithm estimates the sparsity in each iteration by comparing residual values, gradually approaching the optimal sparse solution and finally reconstructing the original signal. However, because a fixed step size is used during iteration, the accuracy of the sparsity estimate is compromised. Therefore, the Spearman correlation coefficient SAMP (SCCSAMP) algorithm is proposed, which dynamically changes the step size according to the correlation between the compressed signal and the candidate set.
The initial parameters are expressed as follows:
r_0 = y, F_0 = ∅, L = S
After parameter initialization, u is defined as the inner product of the sensing matrix A and the residual r, u = A^T r. The L largest values of u are then selected, and the corresponding column indices are stored in S_t:
S_t = Max(A^T r_{t−1}, L)
The index set Λ_{t−1} of the previous iteration is merged with the S_t of the current iteration to construct the candidate set C_t of the t-th iteration: C_t = Λ_{t−1} ∪ S_t. The sparse signal of the t-th iteration is then calculated using the least squares method as follows:
θ̂_t = argmin ‖y − A_t θ_t‖ = (A_t^T A_t)^{-1} A_t^T y
The L terms with the largest absolute values in θ̂_t are selected and labeled θ_{tL}; the corresponding L columns of A_t are denoted A_{tL}, and the index set of A_{tL} is stored in the support set F_t. The residual is obtained as follows:
r_new = y − A_{tL}(A_{tL}^T A_{tL})^{-1} A_{tL}^T y
The residual r_new is used to judge whether the stopping condition is satisfied; if it is, the iteration terminates, and otherwise the iteration continues.
In the SAMP algorithm, we construct the vector y_{tL}, which is generated from the support set F_t:
y_{tL} = y − r_new = A_{tL}(A_{tL}^T A_{tL})^{-1} A_{tL}^T y
As the number of iterations increases, r_new tends to zero; therefore, the vectors y and y_{tL} gradually approach each other. Compressed sensing theory states that a sparse signal x can be recovered from a small number of linear measurements y = Ax, provided that the sensing matrix A satisfies the restricted isometry property (RIP).
During the recovery of sparse signals, the atoms most relevant to the vector y are selected by maximizing |A_t^T y|; in other words, the projection of the vector y onto the column space of the sensing matrix A is calculated. Therefore, selecting the set of atoms F_t corresponding to the L components with the largest absolute values means selecting the atoms that contribute the most to the vector y. This selection strategy ensures that the atoms selected in each iteration contribute the most to the signal y, so that the vector y_{tL} contains more useful atoms and the reconstruction error at each step is minimized. This theoretically increases the representativeness and information content of the selected atoms and better preserves the geometric properties of the signal, making the selected atoms more orthogonal and independent, and thus increasing the likelihood that the sensing matrix A satisfies the RIP.
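The projection argument above can be verified numerically: y_{tL} = A_{tL}(A_{tL}^T A_{tL})^{-1} A_{tL}^T y is the orthogonal projection of y onto the span of the selected atoms, so the residual r_new is orthogonal to every selected column. The matrix sizes below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
M, L = 32, 5
A_tL = rng.standard_normal((M, L))   # L selected atoms (illustrative)
y = rng.standard_normal(M)

# Orthogonal projector onto the column space of A_tL
P = A_tL @ np.linalg.inv(A_tL.T @ A_tL) @ A_tL.T
y_tL = P @ y                         # y_tL = y - r_new
r_new = y - y_tL

# The residual is orthogonal to every selected atom ...
assert np.allclose(A_tL.T @ r_new, 0)
# ... and projecting twice changes nothing (P is idempotent)
assert np.allclose(P @ y_tL, y_tL)
```

As more informative atoms enter A_{tL}, the projection y_{tL} captures more of y and ‖r_new‖ shrinks, which is exactly why the similarity between y and y_{tL} is a usable progress signal.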
Therefore, the algorithm proposed in this paper deals with the step size in segments by measuring the similarity between the vectors y and ytL. A higher similarity means that atoms closer to the sparse solution are selected. By judging the similarity, different step sizes can be selected to effectively approximate the sparse solution.
Common correlation analysis methods include covariance, the Pearson correlation coefficient, and the Spearman correlation coefficient. Covariance only indicates whether two variables are correlated; its value does not readily quantify the degree of correlation. The Pearson coefficient can be used to analyze the degree of correlation, but it is only suitable for data that follow a normal distribution. The Spearman correlation coefficient can be used for the correlation analysis of any two groups of data with the same length. It reflects the correlation between variables through a monotonic function, is not affected by the data's scale, and is not sensitive to outliers [31]. Therefore, the Spearman correlation coefficient is selected for correlation analysis in this paper, and it is calculated as follows:
ρ = Σ_{i=1}^{N} (y_i − ȳ)((y_{tL})_i − ȳ_{tL}) / ( √(Σ_{i=1}^{N} (y_i − ȳ)²) · √(Σ_{i=1}^{N} ((y_{tL})_i − ȳ_{tL})²) )
where y_i and (y_{tL})_i are the ranks of the two ordered variables, ȳ and ȳ_{tL} are the average ranks of the two variables, and N is the number of observations in each variable.
The range of the Spearman’s correlation coefficient ρ is [–1, 1]. A negative value indicates a negative correlation between two variables, while a positive value indicates a positive correlation between two variables. The absolute value of the calculated results can be used to characterize the degree of correlation between the two variables. The usual judgment criteria are as follows: 0.8~1.0 indicates a very strong correlation, 0.6~0.8 indicates a strong correlation, 0.4~0.6 indicates a moderate correlation, 0.2~0.4 indicates a weak correlation, and 0.0~0.2 indicates a very weak correlation or no correlation.
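The formula above is the Pearson correlation applied to ranks, which can be sketched directly in NumPy. This minimal version assigns ranks by double `argsort` and therefore does not average tied ranks, which is sufficient for the continuous-valued vectors considered here; the function name `spearman` is our own.

```python
import numpy as np

def spearman(a, b):
    """Spearman coefficient: Pearson correlation of the ranks of a and b.
    Minimal sketch; ties are not averaged."""
    ra = np.argsort(np.argsort(a)).astype(float)  # rank of each element of a
    rb = np.argsort(np.argsort(b)).astype(float)  # rank of each element of b
    ra -= ra.mean()                               # center the ranks
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))
```

Because only ranks enter the formula, a single outlier does not break a perfect monotone relationship: `spearman([1, 2, 3, 100], [1, 2, 3, 4])` is exactly 1, whereas the Pearson coefficient of the same data is pulled well below 1 by the outlier.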
Assuming that ρ represents the Spearman correlation between y and ytL, the formula is as follows:
ρ = Spearman(y, y_{tL})
According to Spearman's criterion, we can process the step size in different bands. When 0 < ρ < 0.4, the correlation between the two is considered weak and a large step size is taken. If 0.4 < ρ < 0.8, the correlation between the two is moderate to strong, and a smaller step size is taken; if ρ > 0.8, the correlation between the two vectors is very strong, and the sparse solution can be approached gradually with step size 1.
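The three-band logic can be sketched as a small selection function. Note that the paper's exact round(·) step updates in Algorithm 2 are not reproduced here; the increments `large` and `small` below are placeholder values chosen only to show the coarse-to-fine structure of the rule.

```python
def choose_step(rho, S, large=4, small=1):
    """Map the Spearman coefficient to the next step size.
    Sketch of the three correlation bands; the +large/+small
    increments are hypothetical placeholders, not the paper's formulas."""
    if rho > 0.8:          # very strong correlation: creep toward the solution
        return 1
    if rho > 0.4:          # moderate-to-strong correlation: small step
        return S + small
    return S + large       # weak correlation: take a large step
```

The key design point survives the placeholder values: far from the sparse solution (low ρ) the support grows quickly, and near it (high ρ) the step collapses to 1 so the optimal sparsity is not skipped.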
The pseudocode of the proposed method is as follows (Algorithm 2, based on Algorithm 1); we then calculate the complexity of the method according to the steps of the pseudocode.
Algorithm 2: SCCSAMP
Input: compressed signal y, sensing matrix A, step size S.
Output: estimated sparse representation coefficient θ̂_t.
1: Initialization:
2:   r_0 = y, Λ_0 = ∅, A_0 = ∅, L = S, stage = 1, t = 1
3: Repeat:
4:   S_t = Max(A^T r_{t−1}, L)
5:   C_t = Λ_{t−1} ∪ S_t
6:   θ̂_t = argmin ‖y − A_t θ_t‖ = (A_t^T A_t)^{-1} A_t^T y
7:   F_t = Max(θ̂_t, L)
8:   r_new = y − A_{tL}(A_{tL}^T A_{tL})^{-1} A_{tL}^T y
9:   y_{tL} = y − r_new = A_{tL}(A_{tL}^T A_{tL})^{-1} A_{tL}^T y
10:  ρ = Spearman(y, y_{tL})
11:  If 0.4 < ρ < 0.8 then
12:    S = S + round(ρ(0.4))
13:  else if 0 < ρ < 0.4 then
14:    S = round(ρ(0.8))
15:  else
16:    S = 1
17:  end if
18:  If ‖r_new‖_2 < 10^{-6} then
19:    Output θ̂_t
20:  else if ‖r_new‖_2 ≥ ‖r_{t−1}‖_2
21:    stage = stage + 1
22:    L = stage × S
23:    t = t + 1
24:  else
25:    r_t = r_new
26:    Λ_t = F_t
27:    t = t + 1
28:  end if
29: Until ‖r_new‖_2 < 10^{-6}
30: Output:
31:   θ̂_t = (A_t^T A_t)^{-1} A_t^T y
In the code above, Step 4 computes the inner product and sorts to extract the top L values. The complexity of the inner product is O(MN), and the complexity of the descending sort is O(N log N); the overall complexity of this step is therefore O(MN + N log N).
In Step 5, constructing the candidate set Ct requires at most N operations, resulting in a complexity of O(N).
In Step 6, the calculation of the least squares solution involves the matrix product A_t^T A_t, with a complexity of O(ML²), and a matrix inversion, with a complexity of O(L³), leading to a total complexity of O(ML² + L³).
In Step 7, the complexity of constructing F is O(L).
In Steps 8 and 9, the calculation of the residual involves a matrix–vector multiplication with a time complexity of O(ML).
In Step 10, calculating the Spearman correlation coefficient has a complexity of O(MlogM).
Based on the above analysis, after k iterations, the overall complexity can be summarized as O(k(MN + N log N + N + ML² + L³ + L + ML + M log M)). Since L is significantly smaller than both M and N, the total complexity of the SCCSAMP algorithm can be approximated as O(k(MN + N log N)).

5. Experimental Results and Discussion

To verify the performance of the method proposed in this paper, we conducted the following work:
(1) Reconstructed one-dimensional signals and compared them with the OMP, SP, StOMP, ROMP, CoSaMP, SAMP, VSSAMP, and CCSAMP algorithms.
(2) Utilized the proposed method along with some SAMP-based algorithms and their improvements to reconstruct a set of commonly used two-dimensional images and present the visual results.

5.1. Performance Metrics

To evaluate the performance of the algorithm proposed in this paper, we selected the reconstruction success rate, relative error, and peak signal-to-noise ratio as performance metrics. The calculation methods are as follows:
P_sr = (n_s / n_i) × 100%
ERROR = (‖x − x_r‖_2 / ‖x‖_2) × 100%
MSE = (1/MN) Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} [x(i, j) − x_r(i, j)]²
PSNR(x, x_r) = 10 log_10(MAX² / MSE)
where the reconstruction success rate Psr represents the ratio of successful reconstructions ns to the total number of experiments ni; a higher success rate indicates better reconstruction performance of the algorithm. The relative error ERROR indicates the percentage deviation between the reconstructed signal and the original signal, used to measure the error of the reconstructed signal; smaller errors suggest that the reconstructed signal is closer to the original, reflecting superior algorithm performance. The quality of the restored image is assessed using the peak signal-to-noise ratio (PSNR, in dB) between the original and restored images, and MAX is the maximum pixel value.
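The relative error and PSNR metrics can be sketched as small NumPy helpers. The function names are our own, and the PSNR helper assumes the standard MAX²/MSE convention with an 8-bit default MAX of 255; adjust `max_val` if the images use a different bit depth.

```python
import numpy as np

def relative_error(x, x_r):
    """ERROR: relative L2 deviation between original and reconstruction, in percent."""
    return float(np.linalg.norm(x - x_r) / np.linalg.norm(x) * 100)

def mse(x, x_r):
    """Mean squared error over all pixels."""
    return float(np.mean((np.asarray(x) - np.asarray(x_r)) ** 2))

def psnr(x, x_r, max_val=255.0):
    """PSNR in dB, using the MAX^2 / MSE convention (max_val = peak pixel value)."""
    return 10 * np.log10(max_val ** 2 / mse(x, x_r))
```

For example, a reconstruction that misses every pixel of a 255-peak image by 16 gray levels has an MSE of 256 and a PSNR of about 24 dB, while a perfect reconstruction drives the MSE to zero and the PSNR to infinity.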

5.2. Experimental Results of One-Dimensional Signal

In this section, two types of signals are selected as the original signals for reconstruction: a one-dimensional Gaussian random sparse signal and a multicomponent cosine signal representing a complex signal with multiple frequency components. Both signals have a length of 256 and use a Gaussian matrix as the measurement matrix. The one-dimensional Gaussian random sparse signal utilizes an identity matrix as the sparse basis, and the OMP, SP, StOMP, ROMP, CoSaMP, SAMP, VSSAMP, and CCSAMP algorithms are chosen as the comparison group. For the multicomponent cosine signal, a discrete wavelet transform matrix serves as the sparse basis, and the SAMP, VSSAMP, and CCSAMP algorithms are used for comparison. The multicomponent cosine signal model is as follows:
x(n) = 0.3 cos(2π f_1 n t_s) + 0.6 cos(2π f_2 n t_s) + 0.1 cos(2π f_3 n t_s) + 0.9 cos(2π f_4 n t_s)
where n = 0, 1, 2, …, N − 1.
The relevant parameters are set as follows: f1 = 50 Hz, f2 = 100 Hz, f3 = 200 Hz, and f4 = 400 Hz, with fs = 800 Hz as the sampling frequency and ts = 1/fs. The frequencies f1, f2, f3, and f4 correspond to the four cosine components.
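The test signal above can be generated directly from its parameters; the sketch below follows the stated model and settings (N = 256, fs = 800 Hz).

```python
import numpy as np

N = 256                      # signal length
fs = 800.0                   # sampling frequency (Hz)
ts = 1 / fs                  # sampling interval
f1, f2, f3, f4 = 50, 100, 200, 400   # component frequencies (Hz)

n = np.arange(N)
x = (0.3 * np.cos(2 * np.pi * f1 * n * ts)
     + 0.6 * np.cos(2 * np.pi * f2 * n * ts)
     + 0.1 * np.cos(2 * np.pi * f3 * n * ts)
     + 0.9 * np.cos(2 * np.pi * f4 * n * ts))

# At n = 0 every cosine equals 1, so x[0] = 0.3 + 0.6 + 0.1 + 0.9 = 1.9
assert abs(x[0] - 1.9) < 1e-12
```

Note that f4 = 400 Hz sits exactly at the Nyquist frequency fs/2, so the fourth component samples to 0.9·(−1)ⁿ.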

5.2.1. The Reconstruction Success Rate of Sparse Signal Under Different Sparsity

To evaluate the algorithm's performance at different sparsity levels, we set the parameter M = 128 and repeated the measurements 5000 times at each sparsity level. A reconstruction is considered successful when the reconstruction error is less than 10⁻⁶, with the reconstruction success rate used as the evaluation metric. The experimental results are shown in Figure 1. As the sparsity level K increases, the reconstruction success rates of the various algorithms decline to different extents. Among the algorithms that use sparsity as prior information, the ROMP algorithm, which treats sparsity as the strongest prior, exhibits the fastest decline in the reconstruction success rate across different sparsity levels, while the SP algorithm shows the slowest decline. Notably, when K = 45, the reconstruction success rate of the SP algorithm is 91%, while the success rates of the other algorithms, which do not rely on sparsity as prior information, approach 100%. When the sparsity level is between 50 < K < 60, the proposed algorithm continues to demonstrate good performance, with a reconstruction success rate significantly higher than that of SAMP and VSSAMP.

5.2.2. Success Rate of Reconstruction of One-Dimensional Signal Under Different Measurement Values

To evaluate whether the reconstruction algorithms consistently maintain high performance under various measurement conditions and to ensure the accuracy and reliability of simulated measurements, this study assesses reconstruction success rates across different measurement values. Through multiple measurements and detailed analysis of the results, the credibility of the measurement outcomes is effectively enhanced.
At different measurement values M, we observed the reconstruction success rates for one-dimensional sparse signals and multi-component cosine signals, each reconstructed 5000 times. The experimental results are shown in Figure 2. As seen in Figure 2a, when the measurement values are within 75 < M < 95, the SCCSAMP algorithm exhibits a significantly higher reconstruction success rate for sparse signals compared to other algorithms. For M > 90, SCCSAMP consistently maintains a superior success rate relative to other algorithms, demonstrating the method’s robust reconstruction capability for sparse signals across various measurement values. Figure 2b further illustrates that with fewer measurements (e.g.,  M  between 10 and 25), SCCSAMP achieves higher reconstruction success rates than other algorithms, indicating that it provides stable reconstruction even with limited measurement data. This suggests that SCCSAMP is more effective in utilizing restricted measurement resources. When the measurement count increases to 30 or above, SCCSAMP’s reconstruction success rate consistently remains close to 100%, outperforming all comparison algorithms, particularly at higher measurement counts where its exceptional accuracy becomes evident.

5.2.3. Success Rate of Sparse Signal Reconstruction Under Different Step Sizes

To assess the performance of the proposed algorithm under different step sizes, we compared the reconstruction success rates of various algorithms across different step sizes. The experimental results are shown in Figure 3. As shown in Figure 3a–c, SCCSAMP demonstrates a marked advantage under various step sizes. For instance, with step sizes of S = 3 and S = 4, SCCSAMP achieves a reconstruction success rate approaching 80% with as few as 90 measurements, whereas other algorithms generally perform less effectively. This performance indicates that SCCSAMP can more accurately reconstruct signals with fewer measurements, showcasing superior robustness and precision. At low measurement counts, SCCSAMP's success rate notably surpasses that of CCSAMP, VSSAMP, and SAMP, especially when the step size is small (e.g., S = 3), where this gap is even more pronounced. This characteristic allows SCCSAMP to maintain high-quality signal reconstruction in data-constrained environments, highlighting one of its core strengths. These findings suggest that SCCSAMP exhibits robust reconstruction capabilities across different step sizes, with its improvement strategy enhancing reconstruction accuracy while minimizing the required number of measurements.

5.2.4. The Measurement Error of Sparse Signal Under Different Sparsity Level

Sparsity refers to the number of non-zero elements in a signal; higher sparsity indicates greater signal complexity, which increases reconstruction difficulty and leads to higher errors. Comparing the measurement errors of each algorithm at different sparsity levels can reflect their performance advantages, to some extent. As shown in Table 1, the measurement errors of all algorithms increase with higher sparsity levels. When the sparsity level is between [20, 40], the SCCSAMP algorithm achieves lower measurement errors compared to other algorithms. When sparsity exceeds 40, the increased difficulty in atom selection for reconstruction algorithms results in SCCSAMP’s reconstruction error not being the smallest. Its performance is comparable to that of other algorithms and remains within the same order of magnitude.

5.2.5. The Reconstruction Success Rate of Noisy Signal

According to the findings in [27], the CCSAMP algorithm demonstrates superior noise resistance compared to the SAMP algorithm and its improved versions. Therefore, this study uses CCSAMP as a benchmark to verify the noise resistance of SCCSAMP. As shown in Figure 4a, the reconstruction success rate of SCCSAMP for sparse signals significantly surpasses that of CCSAMP as SNR increases, particularly within the 30 dB–50 dB range. This indicates that SCCSAMP achieves superior reconstruction performance at higher SNRs, providing more effective sparse signal recovery. Figure 4b shows that, for multicomponent cosine signals, SCCSAMP also slightly outperforms CCSAMP under high SNR conditions, especially at 40 dB and above, where SCCSAMP’s success rate approaches 100%, slightly exceeding CCSAMP under the same conditions. This further confirms SCCSAMP’s enhanced reconstruction performance for multicomponent cosine signals at higher SNRs.

5.3. Experimental Results for Two-Dimensional Images

5.3.1. Experiment A

In this section, we used 512 × 512 grayscale images—Barbara, Boat, Couple, Lake, Lena, Pirate, Walkbridge, and Woman—as source signals. Under different compression ratios, we evaluated the reconstruction performance of the proposed algorithm against the SAMP, VSSAMP, and CCSAMP algorithms, based on the visual results and performance metrics obtained from reconstructing these eight images. In practice, when a two-dimensional image is stacked into a one-dimensional vector, its sparsity level in the transform domain is usually much greater than 1, so a larger step size is typically used for image processing; in this study, we set the step size to 64. During the experiments, the images were processed in blocks, using the discrete wavelet transform as the sparsifying basis for sparse representation and a Gaussian matrix as the observation matrix.
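The measurement pipeline just described can be sketched for one vectorised image block. The excerpt does not specify which wavelet the paper uses, so an orthonormal Haar matrix stands in for the DWT sparsifying basis; the block size and compression ratio below are illustrative:

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar wavelet matrix (n must be a power of two),
    built recursively from averaging and detail rows."""
    if n == 1:
        return np.eye(1)
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                   # averaging rows
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0])  # detail rows
    return np.vstack([top, bottom]) / np.sqrt(2.0)

rng = np.random.default_rng(0)
block = 16                       # e.g. 16x16 image blocks, column-stacked
n = block * block
m = int(0.4 * n)                 # compression ratio 0.4
phi = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian observation matrix
psi = haar_matrix(n)             # sparsifying basis (DWT in the paper)
x = rng.standard_normal(n)       # one vectorised image block
y = phi @ x                      # compressed measurements
s = psi @ x                      # transform coefficients: x = psi.T @ s
```

The reconstruction algorithm then recovers the sparse coefficient vector `s` from `y` and the combined matrix `phi @ psi.T`.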
To provide a more intuitive comparison of the proposed algorithm with the competing algorithms, the visual results for Woman and Pirate at different compression ratios are shown in Figure 5. From the figure, it is evident that, under the same initial step size, the SCCSAMP algorithm outperforms the other algorithms and delivers the best reconstruction quality, which remains stable as the compression ratio varies. For a more precise comparison of the algorithms, we used the reconstruction error and the peak signal-to-noise ratio (PSNR) to measure image reconstruction quality. The experimental results under different compression ratios are presented in Table 2, Table 3 and Table 4, with the best experimental data highlighted in bold.
As shown in Table 2, Table 3 and Table 4, in terms of reconstruction error, SCCSAMP consistently outperforms the other three algorithms, particularly on images such as Barbara, Couple, and Lake. At a compression ratio of 0.4, SCCSAMP achieves an error of only 0.0925 on Barbara, while the other algorithms range from 0.0955 to 0.1060, highlighting SCCSAMP's precision at lower compression ratios. At compression ratios of 0.6 and 0.8, SCCSAMP's errors remain markedly lower across multiple images. For instance, at a compression ratio of 0.8, SCCSAMP's error on Lake is 0.0200, while SAMP, VSSAMP, and CCSAMP yield 0.0374, 0.0267, and 0.0252, respectively, showing that SCCSAMP maintains low errors even at high compression rates. Overall, although the error of every algorithm grows with the compression ratio, SCCSAMP's increase is comparatively modest, underscoring its stability and robustness in preserving the essential characteristics of the reconstructed images.
Higher PSNR values indicate that the reconstructed image is closer in quality to the original, and SCCSAMP is also superior in PSNR under most conditions. At a compression ratio of 0.4, it achieves PSNR values of 29.37 and 28.88 on the Lake and Couple images, respectively, surpassing SAMP, VSSAMP, and CCSAMP and indicating clearer reconstructions under identical compression conditions. At a compression ratio of 0.6, SCCSAMP maintains a distinct PSNR advantage on images such as Barbara, Lena, and Walkbridge, reaching 29.60, 31.01, and 29.44, respectively, the highest values among the four algorithms. Even at the high compression ratio of 0.8, SCCSAMP continues to lead, achieving a PSNR of 34.64 on the Woman image, notably higher than SAMP (32.82), VSSAMP (33.08), and CCSAMP (34.12). Taken together, SCCSAMP achieves superior PSNR across all compression ratios, underscoring its effectiveness in maintaining visual quality.
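The two metrics used throughout these tables can be computed as follows. The excerpt does not give their exact definitions, so this sketch assumes the error is the relative l2 norm and that images are 8-bit:

```python
import numpy as np

def reconstruction_error(img, rec):
    """Relative l2 reconstruction error (assumed definition of 'ERROR')."""
    img, rec = np.asarray(img, float), np.asarray(rec, float)
    return np.linalg.norm(rec - img) / np.linalg.norm(img)

def psnr(img, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB; peak=255 for 8-bit grayscale."""
    img, rec = np.asarray(img, float), np.asarray(rec, float)
    mse = np.mean((rec - img) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```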

5.3.2. Experiment B

To validate the stability and generalizability of the proposed algorithm, we selected several types of images from the CVG-UGR dataset and the Linnaeus 5 dataset for experimentation. The selected images are shown in Figure 6, where images (a) and (b) are 256 × 256 and images (c) and (d) are 128 × 128, so the experiment accounts for the influence of both image type and image size on the results. Error and PSNR were again used as the evaluation metrics, and the experimental results of the different methods at various compression ratios are presented in Table 5, Table 6 and Table 7.
As shown in Table 5, Table 6 and Table 7, in terms of reconstruction error, SCCSAMP achieves the lowest values on every image at every compression ratio, indicating superior detail preservation after compression. For example, at a compression ratio of 0.4, SCCSAMP achieves an error of 0.0818 on image 220015, substantially lower than the 0.1757, 0.1859, and 0.1967 of SAMP, VSSAMP, and CCSAMP, respectively. The trend persists at higher compression ratios: at a ratio of 0.8, SCCSAMP's error on image 220005 is only 0.0642, compared with 0.0897 for SAMP, 0.0700 for VSSAMP, and 0.0750 for CCSAMP. These results indicate that SCCSAMP compresses more accurately at all compression levels. In terms of PSNR, SCCSAMP also achieves the highest values in most cases, suggesting superior image-quality retention after compression. For instance, at a compression ratio of 0.8 on image 220015, SCCSAMP reaches a PSNR of 32.41, outperforming SAMP (29.75), VSSAMP (31.15), and CCSAMP (29.38). The advantage holds at lower compression ratios as well: on image 220005 at a ratio of 0.4, SCCSAMP's PSNR reaches 25.67, exceeding the values obtained by the other algorithms. SCCSAMP thus not only minimizes error but also provides better perceptual image quality.

6. Conclusions

To address the limitations of the SAMP algorithm, this paper proposes a variable step-size sparsity adaptive matching pursuit algorithm based on the Spearman correlation coefficient. The algorithm adjusts the step size according to the Spearman correlation coefficient between the compressed signal and the signal estimated from the candidate set. Experimental simulations on one-dimensional sparse signals and two-dimensional images demonstrate that, compared with traditional methods, the proposed algorithm improves the reconstruction success rate while maintaining good reconstruction accuracy.
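The core idea can be sketched as follows, with the Spearman coefficient computed as the Pearson correlation of ranks (ties are ignored here, which is adequate for continuous-valued signals). The step-update thresholds and halving/doubling factors in `next_step` are placeholders for illustration, not the paper's exact rule:

```python
import numpy as np

def spearman(a, b):
    """Spearman correlation: Pearson correlation applied to the ranks."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / (np.linalg.norm(ra) * np.linalg.norm(rb)))

def next_step(step, rho, rho_hi=0.9, rho_lo=0.5):
    """Illustrative variable-step rule: once the candidate-set estimate is
    strongly correlated with the measurements, refine with a smaller step;
    while the correlation is still weak, grow the support quickly."""
    if rho >= rho_hi:
        return max(1, step // 2)
    if rho <= rho_lo:
        return step * 2
    return step
```

Inside a SAMP-style iteration, `rho = spearman(y, phi @ x_candidate)` would be evaluated each stage and the support size expanded by `next_step(...)` instead of a fixed step.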

Author Contributions

Conceptualization, methodology, writing—original draft preparation, writing—review and editing, Y.J.; writing—review and editing, validation, formal analysis, X.W.; writing—review and editing, validation, G.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhang, L.; Zhu, S.; Zhao, G.; Jin, M.; Yoo, S. Orthogonal Matching Pursuit Algorithms Based on Double Selection Strategy. In Proceedings of the 2019 9th International Conference on Information Science and Technology (ICIST), Hulunbuir, China, 2–5 August 2019; pp. 339–343. [Google Scholar]
  2. Zhao, Z.; Teng, D.; Liu, L.; Xiang, Y. Compressed Sensing for Full Matrix Capture Data Based on Optimal Reconstruction Algorithm. In Proceedings of the 2023 8th International Conference on Intelligent Computing and Signal Processing (ICSP), Xi’an, China, 21–23 April 2023; pp. 981–984. [Google Scholar]
  3. Wang, C.; Ling, S. An image encryption scheme based on chaotic system and compressed sensing for multiple application scenarios. Inf. Sci. 2023, 642, 119166. [Google Scholar] [CrossRef]
  4. Kami, Y.; Chikui, T.; Togao, O.; Kawano, S.; Fujii, S.; Ooga, M.; Kiyoshima, T.; Yoshiura, K. Usefulness of reconstructed images of Gd-enhanced 3D gradient echo sequences with compressed sensing for mandibular cancer diagnosis: Comparison with CT images and histopathological findings. Eur. Radiol. 2023, 33, 845–853. [Google Scholar] [CrossRef]
  5. Kang, M.S.; Baek, J.M. SAR image reconstruction via incremental imaging with compressive sensing. IEEE Trans. Aerosp. Electron. Syst. 2023, 59, 4450–4463. [Google Scholar] [CrossRef]
  6. Zhuang, S.; Zhao, W.; Wang, R.; Wang, Q.; Huang, S. New measurement algorithm for supraharmonics based on multiple measurement vectors model and orthogonal matching pursuit. IEEE Trans. Instrum. Meas. 2018, 68, 1671–1679. [Google Scholar] [CrossRef]
  7. Li, L.; Fang, Y.; Liu, L.; Peng, H.; Kurths, J.; Yang, Y. Overview of compressed sensing: Sensing model, reconstruction algorithm, and its applications. Appl. Sci. 2020, 10, 5909. [Google Scholar] [CrossRef]
  8. Figueiredo, M.A.; Nowak, R.D.; Wright, S.J. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1, 586–597. [Google Scholar] [CrossRef]
  9. Van den Berg, E.; Friedlander, M.P. Probing the Pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 2009, 31, 890–912. [Google Scholar]
  10. Needell, D.; Tropp, J.A. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 2009, 26, 301–321. [Google Scholar] [CrossRef]
  11. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef]
  12. Wang, H.; Yang, S.; Liu, Y.; Li, Q. Compressive sensing reconstruction for rolling bearing vibration signal based on improved iterative soft thresholding algorithm. Measurement 2023, 210, 112528. [Google Scholar] [CrossRef]
  13. Lu, X.; Su, Y.; Wu, Q.; Wei, Y.; Wang, J. An improved algorithm of segmented orthogonal matching pursuit based on wireless sensor networks. Int. J. Distrib. Sens. Netw. 2022, 18, 15501329221077165. [Google Scholar] [CrossRef]
  14. Shoitan, R.; Nossair, Z.; Ibrahim, I.I.; Tobal, A. Improving the reconstruction efficiency of sparsity adaptive matching pursuit based on the Wilkinson matrix. Front. Inf. Technol. Electron. Eng. 2018, 19, 503–512. [Google Scholar] [CrossRef]
  15. Mallat, S.G.; Zhang, Z. Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Process. 1993, 41, 3397–3415. [Google Scholar] [CrossRef]
  16. Tropp, J.A.; Gilbert, A.C. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 2007, 53, 4655–4666. [Google Scholar] [CrossRef]
  17. Thomas, T.J.; Rani, J.S. Recovery from compressed measurements using sparsity independent regularized pursuit. Signal Process. 2020, 172, 107508. [Google Scholar] [CrossRef]
  18. Needell, D.; Vershynin, R. Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit. IEEE J. Sel. Top. Signal Process. 2010, 4, 310–316. [Google Scholar] [CrossRef]
  19. Dai, W.; Milenkovic, O. Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory 2009, 55, 2230–2249. [Google Scholar] [CrossRef]
  20. Donoho, D.L.; Tsaig, Y.; Drori, I.; Starck, J.L. Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit. IEEE Trans. Inf. Theory 2012, 58, 1094–1121. [Google Scholar] [CrossRef]
  21. Blumensath, T.; Davies, M.E. Stagewise weak gradient pursuits. IEEE Trans. Signal Process. 2009, 57, 4333–4346. [Google Scholar] [CrossRef]
  22. Do, T.T.; Gan, L.; Nguyen, N.; Tran, T.D. Sparsity adaptive matching pursuit algorithm for practical compressed sensing. In Proceedings of the 2008 42nd Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 26–29 October 2008; pp. 581–587. [Google Scholar]
  23. Manur, V.B.; Ali, L. Compressed sensing channel estimation for STBC-SM based hybrid MIMO-OFDM system for visible light communication. Int. J. Commun. Syst. 2020, 33, e4403. [Google Scholar] [CrossRef]
  24. Huang, W.; Zhao, J.; Lv, Z.; Ding, X. Sparsity and step-size adaptive regularized matching pursuit algorithm for compressed sensing. In Proceedings of the 2014 IEEE 7th Joint International Information Technology and Artificial Intelligence Conference, Chongqing, China, 20–21 December 2014; pp. 536–540. [Google Scholar]
  25. Wang, C.; Zhang, Y.; Sun, L.; Han, J.; Chao, L.; Yan, L. Improved sparsity adaptive matching pursuit algorithm based on compressed sensing. Displays 2023, 77, 102396. [Google Scholar] [CrossRef]
  26. Zhang, Y.; Liu, Y.; Zhang, X. A variable stepsize sparsity adaptive matching pursuit algorithm. IAENG Int. J. Comput. Sci. 2021, 48, 770–775. [Google Scholar]
  27. Li, Y.; Chen, W. A correlation coefficient sparsity adaptive matching pursuit algorithm. IEEE Signal Process. Lett. 2023, 30, 190–194. [Google Scholar] [CrossRef]
  28. Zhang, X.; Liu, Y.; Wang, X. A sparsity preestimated adaptive matching pursuit algorithm. J. Electr. Comput. Eng. 2021, 2021, 5598180. [Google Scholar] [CrossRef]
  29. Chen, X. A new signal reconstruction method in compressed sensing. Comput. Electr. Eng. 2018, 69, 865–880. [Google Scholar] [CrossRef]
  30. Ji, Y.; Zhu, W.P.; Yan, J. Improved lorentzian greedy iterative algorithm based on bi-directional support estimation for compressed sensing. In Proceedings of the 2018 IEEE International Symposium on Circuits and Systems (ISCAS), Florence, Italy, 27–30 May 2018; pp. 1–5. [Google Scholar]
  31. Yu, H.; Hutson, A.D. A robust Spearman correlation coefficient permutation test. Commun. Stat. Theory Methods 2024, 53, 2141–2153. [Google Scholar] [CrossRef]
Figure 1. The reconstruction success rate of different algorithms under different sparsity levels.
Figure 2. The reconstruction success rate of different algorithms under different measurement values. (a) One-dimensional sparse signal; (b) Multi-component cosine signal.
Figure 3. The reconstruction success rate of different algorithms under different initial step sizes. (a) S = 3; (b) S = 4; (c) S = 5.
Figure 4. The reconstruction success rate of different algorithms at various SNR levels (dB). (a) One-dimensional sparse signal; (b) Multi-component cosine signal.
Figure 5. Visualization results of image reconstruction using different algorithms at different compression ratios. (a) The compression ratio is 0.4; (b) the compression ratio is 0.6; (c) the compression ratio is 0.8.
Figure 6. Different types of images. (a) 220005; (b) 220015; (c) 310011; (d) 310008.
Table 1. Measurement errors of different algorithms under different sparsity levels.

| Sparsity | SAMP | VSSAMP | CCSAMP | SCCSAMP |
|---|---|---|---|---|
| 20 | 5.88 × 10⁻¹⁶ | 5.89 × 10⁻¹⁶ | 6.03 × 10⁻¹⁶ | **5.86 × 10⁻¹⁶** |
| 25 | 6.98 × 10⁻¹⁶ | 6.64 × 10⁻¹⁶ | 6.70 × 10⁻¹⁶ | **6.45 × 10⁻¹⁶** |
| 30 | 7.54 × 10⁻¹⁶ | 7.10 × 10⁻¹⁶ | 7.43 × 10⁻¹⁶ | **6.99 × 10⁻¹⁶** |
| 35 | 8.16 × 10⁻¹⁶ | 8.14 × 10⁻¹⁶ | 8.44 × 10⁻¹⁶ | **7.90 × 10⁻¹⁶** |
| 40 | 8.01 × 10⁻¹⁶ | 8.17 × 10⁻¹⁶ | 8.95 × 10⁻¹⁶ | **7.99 × 10⁻¹⁶** |
| 45 | 1.01 × 10⁻¹⁵ | **9.28 × 10⁻¹⁶** | 9.85 × 10⁻¹⁶ | 9.63 × 10⁻¹⁶ |
| 50 | 1.13 × 10⁻¹⁵ | **1.06 × 10⁻¹⁵** | 1.12 × 10⁻¹⁵ | **1.06 × 10⁻¹⁵** |
| 55 | **1.07 × 10⁻¹⁵** | 1.08 × 10⁻¹⁵ | 1.17 × 10⁻¹⁵ | 1.14 × 10⁻¹⁵ |
| 60 | 1.07 × 10⁻¹⁵ | **1.05 × 10⁻¹⁵** | 1.10 × 10⁻¹⁵ | 1.11 × 10⁻¹⁵ |

In the table, the minimum measurement error is highlighted in bold.
Table 2. Experimental results of different algorithms at a compression ratio of 0.4.

| Image | ERROR (SAMP) | ERROR (VSSAMP) | ERROR (CCSAMP) | ERROR (SCCSAMP) | PSNR (SAMP) | PSNR (VSSAMP) | PSNR (CCSAMP) | PSNR (SCCSAMP) |
|---|---|---|---|---|---|---|---|---|
| Barbara | 0.0955 | 0.0978 | 0.1060 | **0.0925** | 26.21 | 26.00 | 25.31 | **26.48** |
| Boat | 0.0909 | 0.0901 | 0.0955 | **0.0892** | 26.17 | 26.24 | 25.75 | **26.34** |
| Couple | 0.1050 | 0.1002 | 0.1084 | **0.0975** | 25.46 | 25.86 | 25.23 | **26.15** |
| Lake | 0.0430 | 0.0452 | 0.0491 | **0.0421** | 29.19 | 28.75 | 28.03 | **29.37** |
| Lena | 0.0730 | 0.0753 | 0.0817 | **0.0691** | 28.05 | 27.77 | 27.07 | **28.52** |
| Pirate | 0.1034 | 0.1024 | 0.1094 | **0.0953** | 25.67 | 25.75 | 25.18 | **26.38** |
| Walkbridge | 0.0566 | 0.0549 | 0.0601 | **0.0526** | 27.01 | 27.27 | 26.47 | **27.64** |
| Woman | 0.0611 | 0.0613 | 0.0643 | **0.0586** | 30.53 | 30.50 | 30.09 | **30.89** |
Table 3. Experimental results of different algorithms at a compression ratio of 0.6.

| Image | ERROR (SAMP) | ERROR (VSSAMP) | ERROR (CCSAMP) | ERROR (SCCSAMP) | PSNR (SAMP) | PSNR (VSSAMP) | PSNR (CCSAMP) | PSNR (SCCSAMP) |
|---|---|---|---|---|---|---|---|---|
| Barbara | 0.0752 | 0.0709 | 0.0741 | **0.0645** | 28.28 | 28.80 | 28.41 | **29.60** |
| Boat | 0.0830 | 0.0747 | 0.0785 | **0.0712** | 27.55 | 28.46 | 28.03 | **28.88** |
| Couple | 0.0852 | 0.0779 | 0.0813 | **0.0764** | 27.27 | 28.06 | 27.69 | **28.22** |
| Lake | 0.0348 | 0.0333 | 0.0332 | **0.0303** | 31.02 | 31.41 | 31.44 | **32.24** |
| Lena | 0.0609 | 0.0567 | 0.0567 | **0.0519** | 29.61 | 30.23 | 30.24 | **31.01** |
| Pirate | 0.0867 | 0.0793 | 0.0813 | **0.0739** | 27.20 | 27.98 | 27.76 | **28.59** |
| Walkbridge | 0.0462 | 0.0427 | 0.0431 | **0.0396** | 28.77 | 29.44 | 29.36 | **30.09** |
| Woman | 0.0516 | 0.0475 | 0.0493 | **0.0447** | 31.99 | 32.72 | 32.38 | **33.24** |
Table 4. Experimental results of different algorithms at a compression ratio of 0.8.

| Image | ERROR (SAMP) | ERROR (VSSAMP) | ERROR (CCSAMP) | ERROR (SCCSAMP) | PSNR (SAMP) | PSNR (VSSAMP) | PSNR (CCSAMP) | PSNR (SCCSAMP) |
|---|---|---|---|---|---|---|---|---|
| Barbara | 0.0643 | 0.0560 | 0.0561 | **0.0523** | 29.65 | 26.69 | 30.83 | **31.44** |
| Boat | 0.0656 | 0.0590 | 0.0579 | **0.0571** | 29.00 | 29.92 | 30.08 | **30.21** |
| Couple | 0.0716 | 0.0627 | 0.0624 | **0.0588** | 28.84 | 29.99 | 30.03 | **30.54** |
| Lake | 0.0300 | 0.0273 | 0.0267 | **0.0250** | 32.31 | 33.15 | 33.34 | **33.90** |
| Lena | 0.0537 | 0.0478 | 0.0467 | **0.0445** | 30.72 | 31.73 | 31.92 | **32.35** |
| Pirate | 0.0758 | 0.0668 | 0.0667 | **0.0618** | 28.36 | 29.47 | 29.47 | **30.14** |
| Walkbridge | 0.0393 | 0.0355 | 0.0352 | **0.0334** | 30.17 | 31.05 | 31.14 | **31.59** |
| Woman | 0.0466 | 0.0404 | 0.0405 | **0.0381** | 32.89 | 34.12 | 34.09 | **34.64** |
Table 5. The experimental results on different types of images with compression ratio of 0.4.

| Image | ERROR (SAMP) | ERROR (VSSAMP) | ERROR (CCSAMP) | ERROR (SCCSAMP) | PSNR (SAMP) | PSNR (VSSAMP) | PSNR (CCSAMP) | PSNR (SCCSAMP) |
|---|---|---|---|---|---|---|---|---|
| 220005 | 0.1281 | 0.1266 | 0.1340 | **0.1061** | 24.03 | 24.14 | 23.64 | **25.67** |
| 220015 | 0.1757 | 0.1859 | 0.1967 | **0.0818** | 29.75 | 31.15 | 31.08 | **32.41** |
| 310011 | 0.1105 | 0.1092 | 0.1220 | **0.1052** | 25.85 | 25.95 | 24.99 | **26.28** |
| 310008 | 0.2070 | 0.2163 | 0.2418 | **0.1990** | 23.58 | 23.20 | 22.23 | **23.92** |
Table 6. The experimental results on different types of images with compression ratio of 0.6.

| Image | ERROR (SAMP) | ERROR (VSSAMP) | ERROR (CCSAMP) | ERROR (SCCSAMP) | PSNR (SAMP) | PSNR (VSSAMP) | PSNR (CCSAMP) | PSNR (SCCSAMP) |
|---|---|---|---|---|---|---|---|---|
| 220005 | 0.1041 | 0.0911 | 0.0994 | **0.0828** | 25.84 | 26.99 | 26.23 | **27.83** |
| 220015 | 0.0755 | 0.0700 | 0.0711 | **0.0596** | 28.72 | 29.38 | 29.25 | **30.78** |
| 310011 | 0.0892 | 0.0822 | 0.0836 | **0.0753** | 27.70 | 28.42 | 28.27 | **29.18** |
| 310008 | 0.1605 | 0.1545 | 0.1586 | **0.1313** | 25.79 | 26.12 | 25.90 | **27.54** |
Table 7. The experimental results on different types of images with compression ratio of 0.8.

| Image | ERROR (SAMP) | ERROR (VSSAMP) | ERROR (CCSAMP) | ERROR (SCCSAMP) | PSNR (SAMP) | PSNR (VSSAMP) | PSNR (CCSAMP) | PSNR (SCCSAMP) |
|---|---|---|---|---|---|---|---|---|
| 220005 | 0.0897 | 0.0700 | 0.0750 | **0.0642** | 27.12 | 29.27 | 28.68 | **30.03** |
| 220015 | 0.0670 | 0.0571 | 0.0576 | **0.0494** | 29.75 | 31.15 | 31.08 | **32.41** |
| 310011 | 0.0762 | 0.0674 | 0.0674 | **0.0630** | 29.07 | 30.14 | 30.13 | **30.73** |
| 310008 | 0.1331 | 0.1202 | 0.1228 | **0.1042** | 27.42 | 28.30 | 28.12 | **29.54** |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Citation: Wang, Xiaolei, Yingqi Jiang, and Guoqiang Ding. 2024. "An Improved Variable Step SAMP Method Based on Correlation Principle." Electronics 13, no. 22: 4502. https://doi.org/10.3390/electronics13224502
