Article

Deterministic Construction of Compressed Sensing Measurement Matrix with Arbitrary Sizes via QC-LDPC and Arithmetic Sequence Sets

Institute of Fiber-Optic Communication and Information Engineering, College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China
*
Author to whom correspondence should be addressed.
Electronics 2023, 12(9), 2063; https://doi.org/10.3390/electronics12092063
Submission received: 8 March 2023 / Revised: 24 April 2023 / Accepted: 28 April 2023 / Published: 29 April 2023

Abstract:
It is of great significance in Compressed Sensing (CS) to construct deterministic measurement matrices with good practical characteristics, including good reconstruction performance, low memory cost and low computational cost. Low-density parity-check (LDPC) codes and CS codes are closely related. This paper presents a method of constructing compressed sensing measurement matrices based on quasi-cyclic (QC) LDPC codes and arithmetic sequence sets. The cyclic shift factor in each submatrix of the QC-LDPC code is determined by arithmetic sequence sets. Compared with random matrices, the proposed method has great advantages because it is generated from cyclic shift matrices, which require less storage memory and fewer computing resources. Because the restricted isometry property (RIP) is difficult to verify, mutual coherence and girth are used as computationally tractable indicators of the reconstruction performance of the measurement matrix. Compared with several typical matrices, the proposed measurement matrix has the lowest mutual coherence and superior CS reconstruction capability according to one-dimensional (1D) signal and two-dimensional (2D) image simulation results. When the sampling rate is 0.2, over 10 test images the maximum (minimum) gain in peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) reaches 2.89 dB (0.33 dB) and 0.031 (0.016), respectively. Meanwhile, the reconstruction time is reduced by nearly half.

1. Introduction

Compressed Sensing (CS) theory [1,2,3,4] shows that sparse signals can be recovered from a small number of linear measurements by solving an underdetermined linear system of equations. Under certain conditions, sparse or compressible signals can be sampled and reconstructed with far fewer samples than the Nyquist theory requires. The core of the method is to use a measurement matrix to project the original high-dimensional signal into a low-dimensional space [3,4]. Since its introduction, the theory has been widely applied in fields such as single-pixel imaging, image encryption, synthetic aperture radar imaging, and nuclear magnetic resonance imaging [5,6,7,8]. The restricted isometry property (RIP) is an important criterion proposed by Candès and Tao [3]. Elad defined mutual coherence in 2007 [9] to measure the coherence within a matrix or between two matrices, and Bourgain et al. [10] linked mutual coherence with the RIP. The RIP [11,12,13] and mutual coherence [14,15,16] are important tools for analyzing the properties of measurement matrices. In this paper, mutual coherence is used to analyze the performance of the proposed measurement matrix because it is easier to calculate.
Measurement matrices fall into two main categories: random and deterministic [16,17]. Gaussian and Bernoulli random matrices have been proved suitable for CS, but they face practical bottlenecks [18]. All matrix elements must be stored, and a fresh matrix must be generated for every new implementation, which consumes considerable storage and places high demands on the system [3,4,5]. Deterministic measurement matrices, by contrast, are easy to control, convenient for hardware implementation, and require less storage space.
In recent years, researchers have put forward many principles and methods to construct [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33] and optimize [34,35,36] measurement matrices. This paper mainly studies the construction of deterministic measurement matrices. Dimakis et al. [19], Jiang et al. [20] and Xia et al. [23] used binary matrices obtained from LDPC codes as measurement matrices. Zeng et al. introduced a deterministic structure that combines orthogonal matrices and a chaos-based Toeplitz matrix [24]. Naidu et al. constructed a class of deterministic measurement matrices based on the Euler square principle [25]. Bryant et al. proposed a construction method based on a pair-balanced design [26]. Lu et al. studied how to construct a deterministic binary CS matrix of any size using the idea of a bipartite graph [27]. Liu et al. proposed a method based on the Bose balanced incomplete block design [28]. Fardad et al. designed low-complexity hardware for generating a deterministic measurement matrix based on Euclidean-geometry LDPC codes [29]. Torshizi et al. constructed a deterministic measurement matrix based on the Singer perfect difference set [30]. Tong et al. proposed the construction of a deterministic measurement matrix with low storage space and complexity [31]. Kazemi et al. proposed a class of sparse binary matrices constructed by a finite-field algorithm [32]. Liang et al. proposed a method based on combinatorial-design correlation matrix expansion [33].
Since single-point detectors are generally cheaper to fabricate than area-array detectors, and it is relatively easy to fabricate single-point detectors that operate in non-visible bands, single-pixel imaging technology is of great significance for non-visible-light imaging. During the imaging process, the digital micromirror device (DMD) displays sampling patterns taken from a Hadamard matrix. However, the size of the Hadamard matrix is relatively fixed (the number of columns must be a power of 2), so matrices of arbitrary size cannot be constructed. In this paper, we propose a method to construct a deterministic measurement matrix from arithmetic sequence sets. The LDPC code uses a QC structure, which makes the matrix well suited to hardware implementation with simple shift registers. The method expands a small base matrix into a matrix of any size, which is more flexible. In addition, we compare the proposed measurement matrix with several typical measurement matrices; the simulation results show that it outperforms the Gaussian, Bernoulli and Toeplitz matrices. The rest of the paper is organized as follows. Section 2 introduces basic CS theory and the basic concepts of LDPC and QC-LDPC codes. Section 3 describes how the shift factors of the QC-LDPC cyclic shift matrices are determined by arithmetic sequence sets and analyzes the resulting matrix through mutual coherence and girth. Section 4 presents reconstruction simulation results for 1D signals and 2D images and compares computational complexity, storage space and reconstruction time. Section 5 concludes.

2. Related Work

2.1. Compressed Sensing Theory

For a compressible original signal $x \in \mathbb{R}^N$, the purpose of CS is to obtain observed values by projecting the signal into a low-dimensional space through the measurement matrix, so that the original signal can be reconstructed efficiently. This process can be expressed as follows:

$$y = \Phi x \quad (1)$$

where $\Phi \in \mathbb{R}^{M \times N}$ is the measurement matrix, $R = M/N$ is the sampling rate, and $y$ is the measurement vector. To recover the $N$-dimensional signal from the $M$-dimensional measurement vector, the following reconstruction problem is solved:

$$\hat{x} = \arg\min \|x\|_p \quad \text{subject to} \quad y = \Phi x \quad (2)$$

where $p$ indicates the model: greedy reconstruction algorithms are based on the $\ell_0$ model, and convex optimization reconstruction algorithms are based on the $\ell_1$ model [37]. The first class includes orthogonal matching pursuit (OMP) [38] and its improved variants; the second turns recovery into an $\ell_1$-minimization problem.
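As an illustration of the greedy route, the following is a minimal Python sketch of OMP (our own illustrative code, not the implementation used in the experiments below): at each step it picks the column most correlated with the residual and re-fits by least squares.

```python
import numpy as np

def omp(Phi, y, k, tol=1e-12):
    """Minimal orthogonal matching pursuit: greedily pick the column of
    Phi most correlated with the residual, then re-fit by least squares."""
    M, N = Phi.shape
    support = []
    residual = y.astype(float)
    x_hat = np.zeros(N)
    for _ in range(k):
        # index of the column most correlated with the current residual
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit restricted to the chosen support
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x_hat = np.zeros(N)
        x_hat[support] = coef
        residual = y - Phi @ x_hat
        if np.linalg.norm(residual) < tol:
            break
    return x_hat

# Demo: recover a 3-sparse signal from 40 Gaussian measurements.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((40, 80)) / np.sqrt(40)
x = np.zeros(80)
x[[5, 23, 61]] = rng.standard_normal(3)
y = Phi @ x
x_rec = omp(Phi, y, k=3)
e = np.linalg.norm(x_rec - x) / np.linalg.norm(x)  # relative error
```

With these dimensions and sparsity, OMP typically recovers the support exactly, driving the relative error to numerical precision.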
To reconstruct the original signal from (2), the measurement matrix must meet some strict conditions. The RIP is a sufficient condition for accurate reconstruction in CS. Let $\Phi$ be an $M \times N$ matrix; $\Phi$ satisfies the RIP of order $k$ if, for every $k$-sparse signal $x \in \mathbb{R}^N$, there is a constant $0 < \delta_k < 1$ such that

$$(1 - \delta_k)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1 + \delta_k)\|x\|_2^2 \quad (3)$$

where $\delta_k$ is a non-negative restricted isometry constant (RIC). Because the RIP is a sufficient condition and difficult to verify, mutual coherence can be used as the evaluation criterion for the measurement matrix. Let $\Phi$ be an $M \times N$ matrix; the mutual coherence of $\Phi$ is defined as

$$\mu(\Phi) = \max_{1 \le i \ne j \le N} \frac{\left|\langle \phi_i, \phi_j \rangle\right|}{\|\phi_i\|_2 \,\|\phi_j\|_2} \quad (4)$$

where $\phi_1, \phi_2, \ldots, \phi_N$ are the column vectors of the matrix $\Phi$.
For an M × N matrix, there is a famous Welch bound [10,39]:
$$\mu(\Phi) \ge \sqrt{\frac{N - M}{M(N - 1)}} \quad (5)$$
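For concreteness, both the mutual coherence and the Welch bound can be computed directly from their definitions above. The following Python sketch (our own illustrative code, using an arbitrary random Gaussian matrix) does so:

```python
import numpy as np

def mutual_coherence(Phi):
    """Largest normalized inner product between distinct columns."""
    G = Phi / np.linalg.norm(Phi, axis=0)   # normalize columns
    gram = np.abs(G.T @ G)                  # pairwise coherences
    np.fill_diagonal(gram, 0.0)             # ignore the i == j entries
    return gram.max()

def welch_bound(M, N):
    """Lower bound on the coherence of any M x N matrix."""
    return np.sqrt((N - M) / (M * (N - 1)))

rng = np.random.default_rng(1)
Phi = rng.standard_normal((55, 110))        # random Gaussian matrix
mu = mutual_coherence(Phi)
print(f"mu = {mu:.4f}, Welch bound = {welch_bound(55, 110):.4f}")
```

A random Gaussian matrix of this size sits well above the Welch bound, which is one motivation for seeking structured deterministic constructions.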

2.2. LDPC Codes and QC-LDPC Codes

LDPC codes are linear block codes whose performance approaches the Shannon limit [40,41]. A matrix constructed from an LDPC code has good sparsity and low coherence. The check matrix [42] can be described by a Tanner graph: the M rows and N columns of a check matrix H correspond to M check nodes and N variable nodes, respectively. Variable nodes correspond to columns of the check matrix and represent the actual variables; check nodes correspond to rows and represent constraints between the variable nodes. Wherever an element of H is non-zero, there is an edge between the corresponding check node and variable node. Each column has λ non-zero elements and each row has ρ non-zero elements; since λ and ρ are much smaller than the numbers of rows M and columns N, respectively, H is a sparse matrix. A regular LDPC code with a column (row) weight of 2 (3) and its corresponding Tanner graph are shown in Figure 1.
In Figure 1, the red and blue edges correspond to the first row and the second column of H, respectively. If the Tanner graph of an LDPC code contains cycles, the length of the shortest cycle is called the girth; in the Tanner graph above, for example, the girth is 6. The girth strongly affects the performance of LDPC codes.
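The absence of the shortest possible cycles can be checked directly: two columns of H that share ones in two or more rows close a length-4 cycle in the Tanner graph. A small illustrative check (our own sketch, with a hypothetical helper name):

```python
import numpy as np

def has_four_cycle(H):
    """A length-4 cycle in the Tanner graph exists iff two columns of H
    share ones in two or more rows, i.e. some off-diagonal entry of
    H^T H is >= 2."""
    overlaps = H.T @ H
    np.fill_diagonal(overlaps, 0)
    return bool((overlaps >= 2).any())

# Columns 1 and 2 share ones in rows 1 and 2 -> girth 4.
H_bad = np.array([[1, 1, 0],
                  [1, 1, 0],
                  [0, 0, 1]])
# No pair of columns overlaps twice -> girth greater than 4.
H_good = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0]])
```

This pairwise-overlap test only certifies girth greater than 4; detecting 6-cycles requires walking the graph.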
QC-LDPC codes are an important subset of structured LDPC codes [43], and their parity-check matrices (PCM) can be divided into multiple square submatrices of equal size, each of which is either a cyclic shift of the identity matrix or an all-zero matrix; this makes them very convenient to store and address. The check matrix H of a QC-LDPC code is represented by a J × L array of circulants [42]. Each element I(p_{i,j}) of H is either a q × q circulant defined by the shift factor p_{i,j} or a q × q zero matrix, where I is the q × q identity matrix and p_{i,j} is the number of times each row of the identity matrix is cyclically shifted to the right. If p_{i,j} = 0, then I(p_{i,j}) is the identity matrix. The specific form is
$$H = \begin{bmatrix} I(p_{1,1}) & I(p_{1,2}) & \cdots & I(p_{1,L}) \\ I(p_{2,1}) & I(p_{2,2}) & \cdots & I(p_{2,L}) \\ \vdots & \vdots & \ddots & \vdots \\ I(p_{J,1}) & I(p_{J,2}) & \cdots & I(p_{J,L}) \end{bmatrix} \quad (6)$$
where the shifts matrix P can be expressed as
$$P = \begin{bmatrix} p_{1,1} & p_{1,2} & \cdots & p_{1,L} \\ p_{2,1} & p_{2,2} & \cdots & p_{2,L} \\ \vdots & \vdots & \ddots & \vdots \\ p_{J,1} & p_{J,2} & \cdots & p_{J,L} \end{bmatrix} \quad (7)$$
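The expansion from the shifts matrix P to the block matrix H above can be sketched in a few lines (our own illustrative Python; `expand_qc` is a hypothetical helper name):

```python
import numpy as np

def expand_qc(P, q):
    """Expand a J x L shifts matrix P into the (Jq) x (Lq) QC matrix:
    each entry p becomes the q x q identity with every row cyclically
    shifted right p times (so p = 0 gives the identity itself)."""
    blocks = [[np.roll(np.eye(q, dtype=int), p % q, axis=1) for p in row]
              for row in P]
    return np.block(blocks)

P = np.array([[0, 1],
              [2, 0]])
H = expand_qc(P, q=3)   # a 6 x 6 binary matrix of four circulants
```

Because each circulant is fully determined by one integer, only the J × L entries of P need to be stored, not the full Jq × Lq matrix.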

3. Determining QC-LDPC Codes Based on Arithmetic Sequence Sets

3.1. Constructing Elements in the Basis Matrix

We use a special arithmetic rule to obtain an arithmetic sequence and take its terms as the elements of the shifts matrix P. An arithmetic sequence is a common mathematical sequence from combinatorial mathematics: from the second term onwards, the difference between each term and its predecessor equals the same constant. The arithmetic sequences referred to in this paper are special ones whose common difference itself changes from row to row. The specific construction principle is:
$$\begin{cases} d_{1,j} = d, & d \ge 0 \\ d_{3k+2,j} = d_{3k+1,j} + 1, & k \ge 0 \\ d_{3k+3,j} = \max\left(L - j + d_{3k+2,j},\; j + d_{3k+2,j}\right), & k \ge 0 \\ d_{3k+1,j} = D_{3k,j} + 1, & k \ge 1 \end{cases} \quad (8)$$
where $d_{i,j}$ ($i = 1, 3k+1, \ldots, 3k+3$) is the difference between the shift factor at row $i$, column $j+1$ of the basis matrix $P$ and the adjacent element to its left in the same row; $d$ is the common difference of the first row and is a constant; $k$ is an integer; $D_{3k,j}$ denotes the sum of the differences $d_{3k,j}$ ($1 \le j \le L-1$) between consecutive elements of row $3k$; and $L$ is the number of columns of the basis matrix. The complete procedure is described in Algorithm 1.
Algorithm 1: Construction of the measurement matrix Φ
Input: constant d and positive integers J, L and q, where J < L.
Output: measurement matrix Φ.
Initialization: initialize a q × q identity matrix I.
1. Obtain the shifts matrix P from the chosen d via the recurrence (8).
2. For i = 1 to J, for j = 1 to L: if p_{i,j} = 0, then I(p_{i,j}) is the q × q identity matrix I; otherwise, cyclically shift each row of I to the right p_{i,j} times to obtain I(p_{i,j}).
3. With every I(p_{i,j}) determined, assemble

$$\Phi = \begin{bmatrix} I(p_{1,1}) & I(p_{1,2}) & \cdots & I(p_{1,L}) \\ I(p_{2,1}) & I(p_{2,2}) & \cdots & I(p_{2,L}) \\ \vdots & \vdots & \ddots & \vdots \\ I(p_{J,1}) & I(p_{J,2}) & \cdots & I(p_{J,L}) \end{bmatrix}$$

4. Measurement matrices of different sizes can be obtained by changing the values of J, L and q.
Storing a random measurement matrix takes a large amount of storage space. A deterministic structure, in contrast, consumes less memory and reduces computational complexity, so a deterministic structure is adopted in this paper. In a QC structure, the matrix consists of several square submatrices. Since each row of a submatrix is obtained by cyclically shifting the previous row to the right, each submatrix requires only one memory cell for storage [44]. In other words, the memory required to store a parity-check matrix equals the number of its submatrices, which saves a great deal of storage space. The goal of this work is to construct a measurement matrix with high memory efficiency, easy implementation and good performance that is suitable for portable applications such as single-pixel imaging. To achieve this, we adopt the QC-LDPC structure and use arithmetic sequences as the shift factors of the cyclic shift matrices. Different measurement matrices can be obtained by setting different values of d and different sizes of H.

3.2. Girth Analysis

Theoretically, a check matrix with a large girth is preferred, since a large girth implies that the nodes remain independent [42,45]. Fossorier [43] proved that the PCM of such a code cannot have a Tanner graph with girth greater than 12, and on this basis gave necessary and sufficient conditions for codes with different girth values. Khajehnejad et al. [46] used the girth to prove the good performance of sparse binary measurement matrices: if Φ is a CS measurement matrix with girth g and column weight γ, then any two distinct columns of Φ have at most λ ones in common, with λ = 1 when g ≥ 6, and the corresponding conditions on the CS measurement matrix follow. The girth of the proposed measurement matrix is at least 8; the proof is given in Appendix A. Notably, there is, on the one hand, a significant relationship between the size of the circulant permutation matrix (CPM) and the girth of the measurement matrix, because codes with large girth have better bit-error-rate performance; on the other hand, there is a significant relationship between the girth of the measurement matrix and the performance of the CS method [47,48,49].

3.3. Mutual Coherence Analysis

Given an M × N binary matrix Φ (M < N) with column weight J, the following bound holds:

$$\mu(\Phi) \ge \frac{1}{J} \quad (9)$$

Proof. 
Because M < N and each of the N columns contains J ≥ 2 ones, the average number of ones per row, NJ/M, exceeds 1, so some row contains ones from at least two columns. Hence

$$\langle \phi_i, \phi_j \rangle \ge 1 \quad (10)$$

for some pair of columns $\phi_i, \phi_j$ of Φ; since every column has norm $\sqrt{J}$, (9) follows. □
The relationship between mutual coherence and girth can be expressed as follows: given an M × N binary matrix Φ (M < N) with column weight J, if the girth satisfies g > 4, then Φ achieves the optimal coherence μ(Φ) = 1/J.
Proof. 
Suppose μ(Φ) > 1/J. Then there exist columns $\phi_i, \phi_j$ of Φ with $\langle \phi_i, \phi_j \rangle > 1$; since the elements of the proposed measurement matrix are only "0" and "1", this means $\langle \phi_i, \phi_j \rangle \ge 2$, i.e., the columns $\phi_i$ and $\phi_j$ share "1" elements in at least two rows, which creates a cycle of length 4 in the Tanner graph. The girth analysis in Section 3.2 shows that the girth of the proposed measurement matrix is at least 8, a contradiction. Hence, when the girth satisfies g > 4, μ(Φ) ≤ 1/J. Combined with (9), μ(Φ) ≥ 1/J, and we finally obtain μ(Φ) = 1/J. □

4. Experiment and Discussion

In this section, the performance of the proposed measurement matrix is compared with that of several typical measurement matrices through simulation experiments on 1D signals and 2D images. The simulation results show that the proposed measurement matrix outperforms the other typical matrices under the OMP algorithm.

4.1. Reconstruction Performance of One-Dimensional Signals

In our simulation, we conducted 1000 experiments for each k-sparse signal and obtained the relationship between signal sparsity and reconstruction success rate. The supports of the sparse signals are generated uniformly at random, and the values of the non-zero elements follow the standard Gaussian distribution. The relative reconstruction error can be expressed as
$$e = \frac{\|\hat{x} - x\|_2}{\|x\|_2} \quad (11)$$

where $\hat{x}$ is the reconstructed signal and $x$ is the original signal. An experiment with $e \le 0.001$ is counted as a "perfect reconstruction", and the OMP algorithm is used as the reconstruction algorithm when comparing reconstruction performance.
We construct the measurement matrix according to the construction rules (6)~(8). According to the construction rules, we take d = 1 or d = 0, with q > L. When d = 1, we set J = 5, L = 10 and q = 11, and obtain the basis matrix P of the CPM shown in (12):
$$P = \begin{bmatrix} 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ 0 & 2 & 4 & 6 & 8 & 10 & 12 & 14 & 16 & 18 \\ 0 & 11 & 21 & 30 & 38 & 45 & 53 & 62 & 72 & 83 \\ 0 & 84 & 168 & 252 & 336 & 420 & 504 & 588 & 672 & 756 \\ 0 & 85 & 170 & 255 & 340 & 425 & 510 & 595 & 680 & 765 \end{bmatrix} \quad (12)$$
Then, the measurement matrix based on the CPM can be obtained, which can be simplified, as shown in (13):
$$\Phi = \begin{bmatrix} I & I(1) & I(2) & I(3) & I(4) & I(5) & I(6) & I(7) & I(8) & I(9) \\ I & I(2) & I(4) & I(6) & I(8) & I(10) & I(1) & I(3) & I(5) & I(7) \\ I & I & I(10) & I(8) & I(5) & I(1) & I(9) & I(7) & I(6) & I(6) \\ I & I(7) & I(3) & I(10) & I(6) & I(2) & I(9) & I(5) & I(1) & I(8) \\ I & I(8) & I(5) & I(2) & I(10) & I(7) & I(4) & I(1) & I(9) & I(6) \end{bmatrix} \quad (13)$$

where $I(\cdot)$ denotes the identity matrix $I$ with each row cyclically shifted to the right the indicated number of times (the shift factors of (12) taken modulo q). The resulting measurement matrix is $\Phi_{55 \times 110}$.
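The construction of (12) and (13) can be reproduced programmatically. The sketch below (our own illustrative code, with hypothetical helper names) builds P from the difference rules (8) and then expands it into the binary matrix with shifts taken modulo q:

```python
import numpy as np

def shifts_matrix(J, L, d):
    """Build the J x L basis matrix P: each row starts at 0 and the gaps
    between consecutive entries follow the difference rules of eq. (8)."""
    diffs = np.zeros((J, L - 1), dtype=int)
    for i in range(J):          # 0-based row index; the paper's row is i + 1
        r = i + 1
        for j in range(1, L):   # gap between columns j and j + 1 (1-based j)
            if r == 1:
                diffs[i, j - 1] = d
            elif r % 3 == 2:                       # rows 3k + 2
                diffs[i, j - 1] = diffs[i - 1, j - 1] + 1
            elif r % 3 == 0:                       # rows 3k + 3
                diffs[i, j - 1] = max(L - j, j) + diffs[i - 1, j - 1]
            else:                                  # rows 3k + 1, k >= 1
                diffs[i, j - 1] = diffs[i - 1].sum() + 1
    P = np.zeros((J, L), dtype=int)
    P[:, 1:] = np.cumsum(diffs, axis=1)
    return P

def measurement_matrix(P, q):
    """Expand P into the binary QC matrix; shifts are taken modulo q."""
    return np.block([[np.roll(np.eye(q, dtype=int), p % q, axis=1)
                      for p in row] for row in P])

P = shifts_matrix(J=5, L=10, d=1)
Phi = measurement_matrix(P, q=11)   # 55 x 110 binary matrix, column weight 5
```

With these parameters the rows of P match (12), e.g. the third row is 0, 11, 21, 30, 38, 45, 53, 62, 72, 83.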
Setting J = 6, L = 18, d = 1, q = 19; J = 5, L = 16, d = 1, q = 17; and J = 5, L = 16, d = 1, q = 32, the proposed measurement matrices $\Phi_{114 \times 342}$, $\Phi_{85 \times 272}$ and $\Phi_{160 \times 512}$ are obtained through (6)~(8).
Table 1 compares the mutual coherence of the random Gaussian matrix, Bernoulli matrix, Toeplitz matrix, Hadamard matrix and the proposed matrix at four sizes. The mutual coherence of the proposed measurement matrix is lower than that of all the other matrices except the Hadamard matrix, and the random Gaussian matrix has the highest mutual coherence. Although the Hadamard matrix has the lowest mutual coherence, its size is relatively fixed: the number of columns must be a power of 2, so matrices of arbitrary size cannot be constructed. Because of this limitation, the Hadamard matrix is not used as a comparison object in the following simulation experiments.
When d = 0, the parameters J, L and q take the same values as above: J = 5, L = 10, d = 0, q = 11; J = 6, L = 18, d = 0, q = 19; J = 5, L = 16, d = 0, q = 17; and J = 5, L = 16, d = 0, q = 32. Through (6)~(8), the proposed measurement matrices $\Phi_{55 \times 110}$, $\Phi_{114 \times 342}$, $\Phi_{85 \times 272}$ and $\Phi_{160 \times 512}$ are obtained.
Figure 2 shows the relationship between signal sparsity and the percentage of one-dimensional signals recovered. Signal lengths are set to 110 × 1, 342 × 1, 272 × 1 and 512 × 1, and OMP is used as the reconstruction algorithm. From Figure 2, the performance of the proposed measurement matrix is significantly better than that of the Gaussian and Bernoulli matrices, and slightly better than that of the Toeplitz matrix. For the proposed measurement matrix, the girth when d = 0 is larger than when d = 1, and accordingly Figure 2 shows that the matrix with d = 0 performs better than the one with d = 1.
Figure 3 shows the relationship between the percentage of one-dimensional signals recovered and the number of measurements. Signal lengths are again 110 × 1, 342 × 1, 272 × 1 and 512 × 1, and OMP is used as the reconstruction algorithm. The recovery percentage increases with the number of measurements, and the proposed measurement matrix performs significantly better than the Gaussian, Bernoulli and Toeplitz matrices. As before, the girth of the proposed matrix when d = 0 is larger than when d = 1, so Figure 3 shows that the matrix with d = 0 performs better.

4.2. Reconstruction Performance of Two-Dimensional Images

In the experiment, we use MATLAB R2019a for simulation. Ten images (512 × 512 pixels) from the image database of the Signal and Image Processing Institute of the University of Southern California (USC-SIPI) are selected as test images, as shown in Figure 4. The discrete cosine transform (DCT) is used as the sparse basis, and block sampling is adopted with a block size of 32 × 32; the OMP algorithm is used as the reconstruction algorithm. We then compare the PSNR and SSIM of the reconstructed images at different sampling rates to assess reconstruction performance.
In Figure 5, the random Gaussian matrix, Bernoulli matrix, Toeplitz matrix and the proposed measurement matrix are compared on Barbara and Peppers in terms of PSNR and SSIM at sampling rates of 0.05, 0.1, 0.2, 0.3, 0.4 and 0.5. The dotted lines represent SSIM and the solid lines represent PSNR. As can be seen from Figure 5, both the PSNR and the SSIM of the proposed measurement matrix are higher than those of the other three matrices, i.e., its reconstruction performance is better. For a sampling rate of 0.2, four images (Barbara, Peppers, Resolution Chart and Camera Test) are selected to compare reconstruction quality, as shown in Figure 6. The red boxes in Figure 6 mark regions that are enlarged to show details more clearly, and the numbers above Figure 6 give the PSNR (dB) | SSIM values. For all four measurement matrices, the local details of the reconstructed images are basically visible and the texture is basically clear. However, the reconstructions from the Gaussian and Bernoulli matrices still contain artifacts of varying degrees, and those from the Toeplitz matrix contain small local artifacts. The proposed matrix yields higher PSNR and SSIM values, handles texture details better, and produces a better visual result than the other three matrices. Table 2 lists the PSNR and SSIM values of all 10 test images at sampling rates of 0.05, 0.1, 0.2, 0.3, 0.4 and 0.5, with the best values shown in bold. From Table 2, the proposed matrix significantly outperforms the other matrices at low sampling rates.
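For reference, PSNR follows the standard definition for 8-bit images. A minimal sketch (our own illustrative code, not the simulation code; SSIM is omitted here because its windowed computation is considerably longer):

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and its
    reconstruction, for 8-bit images (peak value 255)."""
    mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of 16 grey levels on an 8-bit image gives about 24.05 dB.
ref = np.full((32, 32), 100.0)
rec = ref + 16.0
print(f"{psnr(ref, rec):.2f} dB")
```

Identical images give infinite PSNR, and every additional 6 dB roughly corresponds to halving the root-mean-square error.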
At a sampling rate of 0.2, the data in Table 2 show that, compared with the other three types of measurement matrices, the proposed matrix achieves maximum gains in PSNR and SSIM of 2.89 dB and 0.031, and minimum gains of 0.33 dB and 0.016. At a sampling rate of 0.4, the PSNR | SSIM values of "Woman" and "Boat" are not the highest for the proposed matrix; numerically, the Toeplitz matrix reconstructs these two images slightly better, but the difference is small and visually there is no obvious difference.

4.3. Comparison of Computing Complexity, Storage Space and Reconstruction Time

In this section, we examine the numbers of additions and multiplications and the storage space required by different matrices in CS. We take the total number of additions and multiplications as the computational complexity. Table 3 compares the computational complexity and storage space of several measurement matrices, where J = M/q and L = N/q.
Table 3 shows that the proposed measurement matrix has the lowest computational complexity, because it is generated from CPMs, and it requires less storage space. In addition, thanks to its sparsity and QC structure, it can be implemented in hardware more easily, using shift registers for storage. The Gaussian matrix has the highest computational complexity because the corresponding CS encoder must perform both multiplications and additions; the Bernoulli and Toeplitz matrices reduce the complexity by dispensing with multiplications. In Landau notation, the computational complexity of the Gaussian matrix is $O(2q^2JL - Jq)$, that of the Bernoulli and Toeplitz matrices is $O(q^2JL - Jq)$, and that of our proposed measurement matrix is $O(qJL + JL)$. To compare the storage space of the four types of measurement matrices, we use the matrices $\Phi_{55 \times 110}$, $\Phi_{114 \times 342}$, $\Phi_{85 \times 272}$ and $\Phi_{160 \times 512}$; the quantified results are shown in Table 4. For a matrix element $a_{ij}$, 8 bytes are needed to store its value and 4 bytes each to store the subscripts i and j, so 16 bytes in total are needed per stored element (1 KB = 1024 bytes). From Tables 3 and 4, the storage space required by the proposed measurement matrix is significantly smaller than that of the Gaussian, Bernoulli and Toeplitz matrices, and the advantage of the proposed method becomes more pronounced as the matrix size grows.
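The storage accounting above can be sketched numerically (our own illustrative code using the paper's 16-bytes-per-entry convention; `storage_kb` is a hypothetical helper name):

```python
def storage_kb(M, N, q=None, bytes_per_entry=16):
    """Storage estimate following the paper's accounting: 16 bytes per
    stored entry (8 for the value, 4 each for the two subscripts). A dense
    matrix stores all M*N entries; the QC matrix stores one shift factor
    per q x q submatrix, i.e. (M/q)*(N/q) entries."""
    entries = M * N if q is None else (M // q) * (N // q)
    return entries * bytes_per_entry / 1024

# Dense (random) storage vs. QC storage for three of the paper's sizes.
for M, N, q in [(55, 110, 11), (114, 342, 19), (160, 512, 32)]:
    print(f"{M}x{N}: dense {storage_kb(M, N):.2f} KB, "
          f"QC {storage_kb(M, N, q):.3f} KB")
```

For the 55 × 110 case this gives about 94.5 KB for a dense random matrix versus under 1 KB for the QC representation, and the gap widens with matrix size.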
We consider reconstruction time as another performance index. By construction, the proposed measurement matrix is very sparse: most of its elements are zero, and zero elements transmit no information to the measurement vector at no computational cost, so the sparsity of the matrix is very important for reducing computational complexity and recovery time. Figure 7a shows the relationship between reconstruction time and sparsity for a 512 × 1 one-dimensional signal, with OMP as the reconstruction algorithm, $\Phi_{160 \times 512}$ as the measurement matrix, and sparsity in the range $10 \le K \le 80$; the reported time is the total over 1000 experiments. Figure 7b compares the reconstruction times of five test images (Cameraman, Peppers, Airplane, Barbara and Camera Test) at a sampling rate of 0.2. Figure 7 shows that the reconstruction time of the proposed measurement matrix is shorter than that of the random Gaussian, Bernoulli and Toeplitz matrices; in the one-dimensional case it saves nearly half the time. In the two-dimensional case, because the OMP algorithm itself is fast, the difference in reconstruction time between the measurement matrices is only 1~2 s.

5. Conclusions

This paper proposes a new method for designing a measurement matrix. We use arithmetic sequence sets and the QC-LDPC structure to construct a binary measurement matrix, analyze its performance through girth and mutual coherence, state the necessary conditions for the matrix to have girth g ≥ 6, and prove that its girth satisfies g ≥ 8. Besides being deterministic, sparse and quasi-cyclic, the proposed method expands a small base matrix into a matrix of any size, which makes it flexible and well suited to resource-constrained applications such as single-pixel imaging, while also reducing computational complexity. Theoretical analysis and numerical simulation show that the reconstruction performance of the proposed measurement matrix is better than that of several typical measurement matrices. In 2D image reconstruction at a sampling rate of 0.2, the maximum gains in PSNR and SSIM reach 2.89 dB and 0.031, and the minimum gains are 0.33 dB and 0.016, respectively; the proposed method is strongest at low sampling rates. Our group is studying deep-learning methods for compressed sensing image reconstruction, which can reconstruct images with good quality at very low sampling rates. This paper uses OMP as the reconstruction method; in future work we will add other algorithms and deep-learning methods.

Author Contributions

Concept and structure of this paper, Y.W.; resources, Y.Q.; writing original draft preparation, Y.W.; writing review and editing, Y.W., Y.Q. and H.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (NSFC) (61675184).

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Proof. 
Let 1 s < t L , where L is the number of columns of shifts matrix P, p represents the dimension of the CPM, and s and t represent a column of the check matrix H. Prove it by contradiction. If there are four loops, at least one of the items in (A1) is satisfied:
p 2 , t p 2 , s + p 1 , s p 1 , t = 0 mod p p 3 , t p 3 , s + p 1 , s p 1 , t = 0 mod p p 3 , t p 3 , s + p 2 , s p 2 , t = 0 mod p
When $L$ is odd,
$$(p_{3,s+1}-p_{3,s})-(p_{2,s+1}-p_{2,s}) \ge \min(d_{3,s}-d_{2,s}) = \frac{L+3}{2}+d-(d+1) > 0.$$
When $L$ is even,
$$(p_{3,s+1}-p_{3,s})-(p_{2,s+1}-p_{2,s}) \ge \min(d_{3,s}-d_{2,s}) = \frac{L}{2}+1+d-(d+1) = \frac{L}{2} > 0.$$
So, we can obtain:
$$p_{3,t}-p_{3,s} = (p_{3,t}-p_{3,t-1})+(p_{3,t-1}-p_{3,t-2})+\cdots+(p_{3,s+1}-p_{3,s}) > (p_{2,t}-p_{2,t-1})+(p_{2,t-1}-p_{2,t-2})+\cdots+(p_{2,s+1}-p_{2,s}) = p_{2,t}-p_{2,s} > p_{1,t}-p_{1,s}.$$
Let $3k+1 \le r \le 3k+3 < L$, where $r$ indexes a column of $H$; then we have:
$$\min(p_{l,s+1}-p_{l,s})-\max(p_{r,s+1}-p_{r,s}) = (p_{4,s+1}-p_{4,s})-(p_{3,s+1}-p_{3,s}) = p_{4,s+1}-p_{4,s}-L+d > 0.$$
Thus, the matrix contains no 4-cycles.
Let $1 \le i < j < l \le L$ with $L$ even; then four cases must be analyzed.
(1) When $3k+1 \le r < s < t \le 3k+3$, $k \ge 0$:
$$p_{k+2,l}-p_{k+2,j}+p_{k,j}-p_{k,i}+p_{k+1,i}-p_{k+1,l} \ge (l-j)\left(\tfrac{L}{2}+d_{k+1,1}\right)+(j-i)\,d_{k,1}+(i-l)\,d_{k+1,1} = (l-j)\tfrac{L}{2}+(i-j) > 0.$$
(2) When $3k+1 \le r < s \le 3k+3 < t$, $k \ge 0$:
$$\min(p_{t,l}-p_{t,j}) = (l-j)\,D_{3k+3,j}, \qquad \min(p_{r,j}-p_{r,i}) = (j-i)\,d, \qquad \max(p_{s,l}-p_{s,i}) = (l-i)\,d_{3k+3,1},$$
$$\min(p_{t,l}-p_{t,j})+\min(p_{r,j}-p_{r,i})-\max(p_{s,l}-p_{s,i}) > 0.$$
(3) When $r < 3k+1 \le s < t \le 3k+3$, $k \ge 1$:
$$\max(p_{r,j}-p_{r,i}) = (j-i)\,D_{3k,j}, \qquad \max(p_{t,l}-p_{t,j}) = (l-j)(d_{3k+1,j}+1), \qquad \min(p_{s,l}-p_{s,i}) = (l-i)\,d_{3k+1,j},$$
$$\max(p_{r,j}-p_{r,i})+\max(p_{t,l}-p_{t,j})-\min(p_{s,l}-p_{s,i}) < 0.$$
(4) When $r < 3k+1 \le s \le 3k+3 < t$, $k \ge 1$:
$$\min(p_{r,j}-p_{r,i}) = (j-i)\,d, \qquad \min(p_{t,l}-p_{t,j}) = (l-j)\,d_{3k+3,j}, \qquad \max(p_{s,l}-p_{s,i}) = (l-i)\,d_{3k+3,j},$$
$$\min(p_{r,j}-p_{r,i})+\min(p_{t,l}-p_{t,j})-\max(p_{s,l}-p_{s,i}) > 0.$$
If $L$ is odd, the same result can be proved analogously; hence, the matrix contains no 6-cycles.
In summary, the girth of the matrix is at least 8. □
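The 4-cycle condition in (A1) is also easy to check numerically: by the standard result for QC-LDPC codes (Fossorier, 2004), the QC matrix expanded from a shift matrix $P$ with CPM size $p$ contains a 4-cycle if and only if $p_{a,s}-p_{a,t}+p_{b,t}-p_{b,s} \equiv 0 \pmod p$ for some row pair $a<b$ and column pair $s<t$. A minimal sketch; the shift matrices below are toy examples, not the paper's construction:

```python
import itertools
import numpy as np

def has_4_cycle(P, p):
    """True iff the QC matrix expanded from shift matrix P (CPM size p) has a 4-cycle.
    Condition: P[a,s] - P[a,t] + P[b,t] - P[b,s] == 0 (mod p) for some a < b, s < t."""
    J, L = P.shape
    for a, b in itertools.combinations(range(J), 2):
        for s, t in itertools.combinations(range(L), 2):
            if (P[a, s] - P[a, t] + P[b, t] - P[b, s]) % p == 0:
                return True
    return False

p = 7
bad = np.zeros((2, 2), dtype=int)          # all-zero shifts always produce 4-cycles
good = np.array([[0, 0, 0], [0, 1, 3]])    # toy shift matrix with no 4-cycle for p = 7
print(has_4_cycle(bad, p), has_4_cycle(good, p))   # True False
```

A 6-cycle check is analogous but sums three alternating shift differences over a closed path through three block rows and three block columns.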

Figure 1. Tanner graph of LDPC code with column weight 2 and row weight 3.
Figure 2. Relationship between signal sparsity and percentage recovered with different measurement matrices by using OMP algorithm: (a) the signal length is 110 × 1, (b) the signal length is 342 × 1, (c) the signal length is 272 × 1 and (d) the signal length is 512 × 1.
Figure 3. Relationship between percentage recovered of signal and the number of measurements with different measurement matrices by using OMP algorithm: (a) the signal length is 110 × 1, (b) the signal length is 342 × 1, (c) the signal length is 272 × 1 and (d) the signal length is 512 × 1.
Figure 4. Ten test images: (a) Barbara, (b) Peppers, (c) Woman, (d) Mandrill, (e) House, (f) Boat, (g) Cameraman, (h) Couple, (i) Resolution chart and (j) Camera test.
Figure 5. The PSNR and SSIM of the reconstructed images versus the sampling rates: (a) Barbara; (b) Peppers.
Figure 6. When the sampling rate is R = 0.2, reconstructed image comparison under the condition of selecting different measurement matrices: (a) the original image, (b) random Gaussian matrix, (c) Bernoulli matrix, (d) Toeplitz matrix and (e) the proposed measurement matrix.
Figure 7. Comparison of reconstruction time with different measurement matrices: (a) one-dimensional signal; (b) two-dimensional images.
Table 1. Comparison of coherence values μ(Φ).

| Measurement Matrix | Φ₅₅ₓ₁₁₀ | Φ₁₁₄ₓ₃₄₂ | Φ₈₅ₓ₂₇₂ | Φ₁₆₀ₓ₅₁₂ |
|---|---|---|---|---|
| Gaussian Matrix | 0.5204 | 0.4878 | 0.3953 | 0.3657 |
| Bernoulli Matrix | 0.4909 | 0.4588 | 0.3860 | 0.3375 |
| Toeplitz Matrix | 0.4545 | 0.3882 | 0.3333 | 0.3000 |
| Hadamard Matrix | / | / | / | 0.1875 |
| Proposed Matrix | 0.2501 | 0.1667 | 0.2010 | 0.2000 |
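Coherence values like those in Table 1 follow directly from the definition μ(Φ) = max over distinct columns of |⟨φᵢ, φⱼ⟩| / (‖φᵢ‖‖φⱼ‖). A minimal sketch (the Gaussian matrix here is a freshly sampled example, so its coherence will only roughly match the tabulated value):

```python
import numpy as np

def mutual_coherence(Phi):
    """Largest normalized inner product between distinct columns of Phi."""
    G = Phi / np.linalg.norm(Phi, axis=0, keepdims=True)  # unit-norm columns
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)                           # ignore self-correlations
    return gram.max()

rng = np.random.default_rng(0)
gauss = rng.standard_normal((55, 110))
print(round(mutual_coherence(gauss), 4))  # typically around 0.5 for this size, cf. Table 1
```

Lower coherence gives better worst-case recovery guarantees, which is why the proposed matrix's smaller μ(Φ) translates into the reconstruction gains reported below.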
Table 2. PSNR (dB) and SSIM of 10 test images. Each cell is PSNR/SSIM; columns are sampling rates.

| Image | Measurement Matrix | 0.05 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 |
|---|---|---|---|---|---|---|---|
| Barbara | Gaussian matrix | 15.29/0.6933 | 18.90/0.8617 | 21.58/0.9253 | 24.31/0.9598 | 26.00/0.9727 | 27.63/0.9813 |
| | Bernoulli matrix | 15.99/0.7308 | 18.96/0.8638 | 21.61/0.9256 | 24.40/0.9606 | 26.16/0.9737 | 27.64/0.9813 |
| | Toeplitz matrix | 15.34/0.6978 | 18.93/0.8646 | 21.61/0.9256 | 24.30/0.9597 | 26.09/0.9732 | 27.96/0.9826 |
| | Proposed matrix | 17.92/0.8211 | 21.25/0.9187 | 23.91/0.9560 | 25.26/0.9676 | 27.98/0.9826 | 30.00/0.9901 |
| Peppers | Gaussian matrix | 16.33/0.7526 | 21.05/0.9130 | 24.79/0.9629 | 28.16/0.9829 | 30.19/0.9893 | 31.85/0.9927 |
| | Bernoulli matrix | 16.65/0.7735 | 21.16/0.9146 | 24.79/0.9628 | 28.32/0.9835 | 30.20/0.9893 | 31.96/0.9929 |
| | Toeplitz matrix | 16.21/0.7487 | 20.99/0.9119 | 24.89/0.9637 | 28.37/0.9837 | 30.37/0.9898 | 32.13/0.9931 |
| | Proposed matrix | 19.21/0.8765 | 23.64/0.9517 | 27.45/0.9799 | 29.06/0.9860 | 31.80/0.9914 | 33.86/0.9960 |
| Woman | Gaussian matrix | 16.98/0.6532 | 19.87/0.8189 | 22.18/0.8925 | 24.95/0.9429 | 26.73/0.9620 | 28.32/0.9736 |
| | Bernoulli matrix | 16.36/0.6012 | 19.80/0.8163 | 22.21/0.8935 | 25.04/0.9442 | 26.69/0.9616 | 28.28/0.9734 |
| | Toeplitz matrix | 16.47/0.6102 | 19.85/0.8174 | 22.16/0.8921 | 24.95/0.9430 | 26.91/0.9636 | 28.53/0.9749 |
| | Proposed matrix | 18.67/0.7654 | 21.99/0.8872 | 24.59/0.9379 | 25.87/0.9538 | 26.61/0.9609 | 29.71/0.9713 |
| Mandrill | Gaussian matrix | 15.28/0.4993 | 16.14/0.5787 | 17.26/0.6755 | 18.52/0.7531 | 19.50/0.8014 | 20.50/0.8411 |
| | Bernoulli matrix | 15.32/0.4919 | 16.05/0.5809 | 17.04/0.6616 | 18.51/0.7535 | 19.49/0.8014 | 20.46/0.8393 |
| | Toeplitz matrix | 15.44/0.4987 | 16.05/0.5757 | 17.11/0.6680 | 18.44/0.7506 | 19.54/0.8033 | 20.56/0.8433 |
| | Proposed matrix | 16.01/0.5801 | 15.67/0.5678 | 17.44/0.6810 | 18.44/0.7486 | 20.55/0.8431 | 21.45/0.8601 |
| House | Gaussian matrix | 17.12/0.7856 | 21.41/0.9036 | 23.79/0.9445 | 26.17/0.9677 | 27.64/0.9769 | 29.09/0.9835 |
| | Bernoulli matrix | 17.79/0.7864 | 21.25/0.9030 | 23.88/0.9454 | 26.15/0.9676 | 27.64/0.9769 | 28.99/0.9831 |
| | Toeplitz matrix | 17.82/0.7877 | 21.46/0.9055 | 23.85/0.9452 | 26.19/0.9678 | 27.83/0.9779 | 29.29/0.9842 |
| | Proposed matrix | 20.12/0.8945 | 23.36/0.9382 | 25.82/0.9649 | 26.92/0.9727 | 28.60/0.9813 | 30.01/0.9915 |
| Boat | Gaussian matrix | 15.83/0.6414 | 19.18/0.8226 | 21.72/0.9004 | 24.47/0.9469 | 26.19/0.9642 | 27.88/0.9757 |
| | Bernoulli matrix | 15.36/0.6398 | 19.36/0.8307 | 21.74/0.9012 | 24.38/0.9459 | 26.16/0.9639 | 27.88/0.9757 |
| | Toeplitz matrix | 15.01/0.6301 | 19.37/0.8302 | 21.84/0.9035 | 24.47/0.9469 | 26.42/0.9660 | 28.06/0.9766 |
| | Proposed matrix | 18.97/0.8119 | 21.55/0.8966 | 24.20/0.9434 | 25.45/0.9575 | 26.22/0.9644 | 28.21/0.9786 |
| Cameraman | Gaussian matrix | 14.86/0.7487 | 20.09/0.9176 | 24.01/0.9665 | 28.38/0.9877 | 30.99/0.9933 | 33.16/0.9959 |
| | Bernoulli matrix | 14.61/0.7412 | 20.00/0.9166 | 23.87/0.9654 | 28.34/0.9876 | 31.07/0.9934 | 33.30/0.9961 |
| | Toeplitz matrix | 14.58/0.7401 | 20.06/0.9178 | 23.77/0.9647 | 28.57/0.9883 | 31.06/0.9934 | 33.70/0.9964 |
| | Proposed matrix | 17.45/0.8231 | 22.74/0.9552 | 25.41/0.9747 | 29.61/0.9908 | 31.67/0.9947 | 33.66/0.9971 |
| Couple | Gaussian matrix | 16.61/0.6623 | 19.29/0.8077 | 21.64/0.8883 | 24.33/0.9394 | 26.07/0.9592 | 27.40/0.9700 |
| | Bernoulli matrix | 16.91/0.6812 | 19.26/0.8078 | 21.70/0.8896 | 24.29/0.9389 | 25.84/0.9571 | 27.42/0.9701 |
| | Toeplitz matrix | 16.98/0.6835 | 19.25/0.8096 | 21.66/0.8889 | 24.36/0.9397 | 26.10/0.9596 | 27.59/0.9712 |
| | Proposed matrix | 19.01/0.8011 | 21.64/0.8870 | 24.09/0.9360 | 25.13/0.9496 | 26.99/0.9634 | 28.89/0.9821 |
| Resolution chart | Gaussian matrix | 12.39/0.6292 | 14.18/0.7423 | 16.66/0.8899 | 18.93/0.9118 | 20.13/0.9330 | 22.33/0.9507 |
| | Bernoulli matrix | 12.14/0.6128 | 14.40/0.7491 | 16.09/0.8805 | 18.85/0.9189 | 20.37/0.9343 | 22.04/0.9563 |
| | Toeplitz matrix | 12.58/0.6425 | 14.59/0.7546 | 16.85/0.8894 | 18.47/0.9162 | 20.92/0.9346 | 22.34/0.9579 |
| | Proposed matrix | 14.61/0.7547 | 16.30/0.8372 | 18.46/0.9124 | 20.04/0.9310 | 22.55/0.9509 | 24.96/0.9732 |
| Camera test | Gaussian matrix | 9.45/0.1945 | 12.33/0.6462 | 14.74/0.8333 | 16.12/0.9220 | 18.11/0.9669 | 21.92/0.9827 |
| | Bernoulli matrix | 9.25/0.2229 | 12.03/0.6412 | 14.78/0.8353 | 16.22/0.9240 | 18.15/0.9668 | 21.50/0.9812 |
| | Toeplitz matrix | 10.62/0.2667 | 12.07/0.6256 | 14.87/0.8399 | 16.15/0.9229 | 18.45/0.9670 | 21.68/0.9823 |
| | Proposed matrix | 12.27/0.6410 | 14.82/0.7963 | 16.35/0.8831 | 18.36/0.9412 | 20.03/0.9696 | 22.65/0.9919 |
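The PSNR values in Table 2 follow the standard definition for 8-bit images, PSNR = 10·log₁₀(255²/MSE). A minimal sketch (SSIM is omitted since it is usually taken from a library such as scikit-image's `structural_similarity`):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of the same shape."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

img = np.full((64, 64), 100.0)
noisy = img + 10.0                 # constant error of 10 gray levels -> MSE = 100
print(round(psnr(img, noisy), 2))  # 10*log10(255^2/100) ≈ 28.13 dB
```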
Table 3. Computational complexity and storage space allocation of measurement matrices.

| Measurement Matrix | Number of Additions | Number of Multiplications | Required Storage Space |
|---|---|---|---|
| Gaussian Matrix | M × (N − 1) | M × N | 16 × M × N |
| Bernoulli Matrix | M × (N − 1) | 0 | 16 × M × N |
| Toeplitz Matrix | M × (N − 1) | 0 | 16 × M × N |
| Proposed Matrix | J × L × q | (J − 1) × (L − 1) | 16 × (J − 1) × (L − 1) |
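The quantized figures in the following tables can be reproduced from the formulas in Table 3. As an illustration for the Φ₅₅ₓ₁₁₀ case, we assume J = 5, L = 10, q = 11 (so that Jq = 55 and Lq = 110); these parameter values are inferred for illustration rather than stated in this section:

```python
# Operation counts from the Table 3 formulas, evaluated for the 55 x 110 case.
# J = 5, L = 10, q = 11 is an assumed parameterization (Jq = 55, Lq = 110).
M, N = 55, 110
J, L, q = 5, 10, 11

gaussian_ops = M * (N - 1) + M * N           # additions + multiplications
bernoulli_ops = M * (N - 1)                  # additions only (as for Toeplitz)
proposed_ops = J * L * q + (J - 1) * (L - 1)

print(gaussian_ops, bernoulli_ops, proposed_ops)  # 12045 5995 586
```

These totals match the first column of the quantized complexity table, showing an order-of-magnitude reduction for the proposed construction.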
Table 4. Quantization results of computational complexity with different measurement matrices.

| Measurement Matrix | Φ₅₅ₓ₁₁₀ | Φ₁₁₄ₓ₃₄₂ | Φ₈₅ₓ₂₇₂ | Φ₁₆₀ₓ₅₁₂ |
|---|---|---|---|---|
| Gaussian Matrix | 12,045 | 77,862 | 46,155 | 163,680 |
| Bernoulli Matrix | 5995 | 38,874 | 23,035 | 81,760 |
| Toeplitz Matrix | 5995 | 38,874 | 23,035 | 81,760 |
| Proposed Matrix | 586 | 2137 | 1420 | 2620 |
Table 5. Quantization results of storage space with different measurement matrices.

| Measurement Matrix | Φ₅₅ₓ₁₁₀ | Φ₁₁₄ₓ₃₄₂ | Φ₈₅ₓ₂₇₂ | Φ₁₆₀ₓ₅₁₂ |
|---|---|---|---|---|
| Gaussian Matrix | 95 | 610 | 362 | 1280 |
| Bernoulli Matrix | 95 | 610 | 362 | 1280 |
| Toeplitz Matrix | 95 | 610 | 362 | 1280 |
| Proposed Matrix | 0.625 | 1.4 | 1 | 1.94 |

Unit: KB.

Share and Cite

Wang, Y.; Qin, Y.; Ren, H. Deterministic Construction of Compressed Sensing Measurement Matrix with Arbitrary Sizes via QC-LDPC and Arithmetic Sequence Sets. Electronics 2023, 12, 2063. https://doi.org/10.3390/electronics12092063

