Article

Full-Process Adaptive Encoding and Decoding Framework for Remote Sensing Images Based on Compression Sensing

1
Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2
University of Chinese Academy of Sciences, Beijing 100049, China
3
Key Laboratory of Space-Based Dynamic & Rapid Optical Imaging Technology, Chinese Academy of Sciences, Changchun 130033, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(9), 1529; https://doi.org/10.3390/rs16091529
Submission received: 7 February 2024 / Revised: 26 March 2024 / Accepted: 2 April 2024 / Published: 26 April 2024

Abstract:
To address the incompatibility between traditional information acquisition modes and spaceborne earth observation tasks, this paper starts from the general mathematical model of compressed sensing, establishes a theoretical model of block compressed sensing, and proposes a full-process adaptive encoding and decoding compressed sensing framework for remote sensing images, comprising five parts: mode selection, feature factor extraction, adaptive shape segmentation, adaptive sampling rate allocation, and image reconstruction. Unlike previous semi-adaptive or locally adaptive methods, the advantages of the proposed adaptive encoding and decoding method are mainly reflected in four aspects: (1) it selects the encoding mode based on image content, maximizing the use of the richness of the image to choose an appropriate sampling method; (2) it exploits image texture details for adaptive segmentation, effectively separating complex and smooth regions; (3) it detects the sparsity of encoding blocks and adaptively allocates sampling rates, fully exploiting the compressibility of the image; (4) it adaptively selects the reconstruction matrix based on the size of the encoding block, alleviating block artifacts caused by the non-stationary characteristics of the image. Experimental results show that the proposed method is stable for remote sensing images with complex edge textures, with the peak signal-to-noise ratio and structural similarity remaining above 35 dB and 0.8, respectively. For ocean images with relatively simple content, at a sampling rate of 0.26 the peak signal-to-noise ratio reaches 50.8 dB and the structural similarity reaches 0.99. In addition, the recovered images have the smallest BRISQUE values, with better clarity and less distortion.
Subjectively, the reconstructed images have clear edge details and good reconstruction quality, while the block effect is effectively suppressed. The framework designed in this paper outperforms similar algorithms in both subjective visual quality and objective evaluation indices, which is of great significance for alleviating the incompatibility between traditional information acquisition methods and satellite-borne earth observation missions.

1. Introduction

Optical remote sensing imaging technology is an important branch of remote sensing technology, serving as the “eyes” for human exploration of the universe and the unknown world. Satellite remote sensing image data with the “four highs” characteristics of high spatial, temporal, spectral, and radiometric resolution provide important information and services for monitoring and researching the earth’s environment and resources [1,2,3,4,5]. The traditional remote sensing satellite image signal acquisition process uses an optical remote sensing camera to perform Nyquist full-sampling imaging of the target, then converts the collected analog signal to digital form, and finally compresses and encodes the image information through digital processing [6,7]. This way of obtaining image information is similar to that of a traditional ground camera, but in contrast to cameras operating on the ground, satellite resources are often tight while resources at the ground receiving end are relatively abundant. Therefore, a working mode with simple encoding and complex decoding is better suited to spaceborne earth observation missions. The emergence of compressive sensing (CS) theory provides a good solution for this pattern: it directly collects compressed data and senses compressed information, avoiding the redundancy of compressing data after collecting it, and compression is easier to process than decompression [8,9,10,11].
Traditional compressive sensing observes and reconstructs remote sensing images as a whole, which requires a large amount of space to store the measurement matrices and also increases the difficulty of reconstruction. The hybrid coding framework of block compressed sensing (BCS) divides images into blocks and performs downsampling, alleviating the high storage and computational complexity through independent measurement and non-overlapping block recovery. This reduces matrix storage requirements and algorithmic computational complexity to some extent, and increases the robustness of the transmission process. When BCS is used to process images, the images are mostly segmented evenly and a fixed sampling rate is used for random sampling. However, the texture distribution differs across spatial positions within each image frame. When the same observation matrix is used for all sampling measurements, the non-stationary characteristics of the image can cause block artifacts, especially at lower sampling rates: too many sampling points are spent on smooth areas with few detailed textures and too few on areas with rich texture details, which degrades the quality of subsequent image reconstruction [12,13,14].
To solve these problems, many studies have focused on optimizing the sampling measurement process, either by dynamically setting the sampling rates of different image blocks during sampling, or by classifying image blocks according to different parameters and then assigning different sampling rates to different categories. These methods adaptively select the number of samples for each block; the main difference lies in how the samples are selected or how the adaptive measurements are extracted. Common adaptive block compressed sensing (ABCS) methods include the following categories [15]. Adaptive reweighted compressive sensing assigns measurements to each block based on statistical information such as variance, entropy, and the number of significant coefficients, in order to recover the image effectively [16]. In reference [17], in addition to a fixed base sampling rate, the standard deviation is used to allocate an adaptive sampling rate to each block according to its own data structure, achieving true sampling rate allocation. In the ABCS coding system investigated by Li et al. [18], the structural complexity of blocks is determined by the error between blocks, and sampling rates are allotted according to the error values obtained. The spatial-entropy-based ABCS method proposed in reference [19] has complexity and reconstruction quality similar to the error-based method, differing only in the calculation process: resources are allocated according to the amount of information, where information-rich regions contain more edges and textures, so blocks with more information are allocated higher sampling rates, and vice versa. All the above methods are subclasses of ABCS, and all use some form of adaptive measurement extraction. However, all of them are based on uniform image segmentation and do not consider the texture feature information of the image.
Such adaptive sampling is limited by the size of the image block and cannot effectively separate complex and smooth regions. When the sampling rates of adjacent blocks differ greatly, block-level artifacts are usually observed, making the reconstruction look highly pixelated.
Based on the above analysis, starting from the general mathematical model of compressed sensing, a theoretical model of block compressed sensing is established. To address the incompatibility between traditional information acquisition modes and spaceborne earth observation tasks, a fully adaptive encoding and decoding compressed sensing framework for remote sensing images is proposed. Unlike previous semi-adaptive or locally adaptive methods, the advantages of the proposed framework are mainly reflected in four aspects: (1) the ability to select encoding modes according to the image content; (2) the ability to utilize image texture details for adaptive segmentation; (3) the ability to detect the sparsity of encoding blocks and adaptively allocate sampling rates; (4) the ability to adaptively select the reconstruction matrix based on the size of the encoding block.

2. BCS Mathematical Model

When compressive sensing theory is directly applied to large-scale remote sensing image reconstruction, the large-scale imaging signals lead to long encoding observation times, huge computational complexity in the image restoration algorithms, and large storage requirements for the observation matrices [20,21]. Constructing a mathematical model of block compressed sensing, which uniformly blocks the image and uses the same observation matrix to sample and measure each sub-block separately, can effectively reduce the size of the measurement matrix and the complexity of the image reconstruction algorithm, and improve image sampling efficiency and reconstruction performance. The mathematical model of the block compressed sensing observation process is as follows:
Assume that the size of the target image x is H × W. Decompose the image into non-overlapping coding tree blocks, where d represents the sub-block size, d = h × w, so the entire image is divided into B = (H × W)/(h × w) blocks. The original signal x defined on equal block sizes can then be expressed as:
x = \left[ \underbrace{x_1, \ldots, x_d}_{x[1]},\ \underbrace{x_{d+1}, \ldots, x_{2d}}_{x[2]},\ \ldots,\ \underbrace{x_{(B-1)d+1}, \ldots, x_{Bd}}_{x[B]} \right]^T
For each sub-block, the same observation matrix Φ_d is used for independent sampling, where Φ_d is a matrix of size n̂ × d, n̂ = nd/(HW), and n is the number of linear observation values. The measurement matrix of the entire image is composed as:
\Phi = \begin{bmatrix} \Phi_d & 0 & \cdots & 0 \\ 0 & \Phi_d & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \Phi_d \end{bmatrix}
The mathematical model of BCS can be expressed as:
y = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_B \end{bmatrix} = \begin{bmatrix} \Phi_d & 0 & \cdots & 0 \\ 0 & \Phi_d & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \Phi_d \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_B \end{bmatrix} = \Phi \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_B \end{bmatrix}
The set of observation vectors obtained is:
y = \left\{ y_i \mid y_i = \Phi_d x_i,\ i = 1, 2, \ldots, B \right\}
Assume the sparse signal x = [x_1, x_2, …, x_B]. Given the observation matrix Φ_d ∈ R^{n̂×d} and the sparse basis matrix Ψ ∈ R^{h×h}, the sparse representation coefficient of the block signal after the basis transformation is θ, where x_i = Ψθ_i. The sensing matrix is then Θ = Φ_dΨ, and the compressed observation signal is y_i = Φ_d x_i = Θθ_i. For any K-sparse block coefficient θ_i, if there exists a constant δ_K (0 < δ_K < 1) such that the following holds:
(1 - \delta_K) \left\| \theta_i \right\|_2^2 \le \left\| \Theta \theta_i \right\|_2^2 \le (1 + \delta_K) \left\| \theta_i \right\|_2^2
then the matrix Θ is said to satisfy the K-order block RIP condition, and the minimum δ_K satisfying the above condition is the block restricted isometry constant (block RIC) of the matrix Θ.
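As a numerical illustration of the block measurement model above, the following Python/NumPy sketch (an assumption-laden illustration, not the authors' MATLAB code; all function names are hypothetical) measures each non-overlapping block with the same matrix Φ_d and verifies that this equals multiplication by the full block-diagonal Φ:

```python
import numpy as np

def bcs_measure(x, phi_d):
    """Measure each non-overlapping length-d block of x with the same
    observation matrix phi_d (n_hat x d), as in the BCS model."""
    d = phi_d.shape[1]
    blocks = x.reshape(-1, d)                 # B blocks of length d
    return np.vstack([phi_d @ b for b in blocks])  # B x n_hat measurements

rng = np.random.default_rng(0)
phi_d = rng.standard_normal((4, 16))          # n_hat = 4, d = 16
x = rng.standard_normal(64)                   # B = 4 blocks in total
y = bcs_measure(x, phi_d)

# The same result via the explicit block-diagonal measurement matrix Phi
phi_full = np.kron(np.eye(4), phi_d)          # (B*n_hat) x (B*d)
```

Block-wise measurement avoids ever materializing `phi_full`, which is exactly the storage saving the BCS model provides.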

3. Remote Sensing Images Adaptive Encoding and Decoding Framework

The framework of adaptive coding and decoding of remote sensing images is shown in Figure 1, which consists of five parts: mode selection, feature factor extraction, adaptive shape segmentation, adaptive sampling rate allocation, and image reconstruction:
(1)
Mode selection stage: Design the maximum between-class variance (OTSU) method to calculate the optimal threshold between the background and target pixels, and adaptively select image processing modes (complex mode and simple mode) based on the optimal threshold of the image;
(2)
Feature factor extraction stage: Establish a texture roughness saliency model, take edge roughness as the salient feature, use the Roberts and Prewitt operators to describe the global feature saliency expression, and build the corresponding saliency models for the complex and simple patterns from the mode selection stage;
(3)
Adaptive morphological blocking stage: Introduce the idea of image quadtree partitioning; based on preset maximum and minimum coding blocks (CB), use feature saliency factors as constraints and adopt a top-down partitioning strategy to recursively divide the image space into tree structures of different levels;
(4)
Sampling rate adaptive allocation stage: Develop image sparsity judgment criteria, adaptively set sparsity as the information density function of a given coding block based on the different information densities between different coding blocks, and adaptively allocate sampling rates for a given image block;
(5)
Image adaptive reconstruction stage: Propose the adaptive block compressed sensing-orthogonal matching pursuit algorithm (ABCS-OMP), adaptively select the reconstruction matrix and sparse basis according to the encoding block category, set an iteration threshold based on the known sparsity, and use the OMP algorithm to reconstruct the encoding block.
Firstly, this framework uses the maximum between-class variance method to calculate the image content complexity and adaptively selects the coding mode. By extracting image feature factors, a saliency model of the corresponding mode is established to guide the adaptive morphological segmentation of the coding tree block. Then, by using the information density function to detect the sparsity of the encoding block, the sparsity of the image is explored as much as possible to achieve adaptive sampling rate allocation for the image. Finally, the reconstruction matrix and sparse basis are adaptively selected based on the coding block category, and the iteration threshold is set according to the known sparsity. OMP algorithm is used to reconstruct the coding block.

3.1. Adaptive Mode Selection Stage

The image adaptive mode selection stage proposed in this section mainly includes two parts: first, image content analysis, defining the global image complexity calculation function; second, scene model classification, setting threshold parameters for adaptive mode discrimination.
In view of the image content, this article defines an image complexity calculation method based on the maximum between-class variance. According to the grayscale characteristics of the image, the variance is used to measure the uniformity of the grayscale distribution, and the image is divided into two parts, foreground and background. The larger the between-class variance between foreground and background, the greater the difference between the two parts that make up the image, and the more obvious the contrast between them. When part of the background is mistakenly classified as foreground, or part of the foreground as background, the difference between the two parts decreases; maximizing the between-class variance therefore means minimizing the probability of misclassification [22]. Based on this analysis, the maximum between-class variance of a coding tree block is calculated to obtain the optimal threshold of the image block, and the optimal threshold is used for image segmentation. The image complexity C is defined as the number of foreground pixels of the coding tree block, and a complexity threshold α is set to complete the adaptive mode selection of the coding tree block.
The complexity of image content is evaluated using the maximum between-class variance; the specific mathematical formulation is as follows. Assume that the image pixels can be divided into background and foreground parts according to a threshold. For an image of size M × N, the total number of pixels is Sum = M × N, and N_1 and N_2 are the numbers of background and foreground pixels, respectively, so the proportions of background and foreground pixels, w_1 and w_2, can be expressed as:
w_1 = \frac{N_1}{Sum}, \qquad w_2 = \frac{N_2}{Sum}
If P_i represents the number of pixels with grayscale value i in the background, and μ_{t1} represents the mathematical expectation of the background pixels relative to the entire image, the average grayscale value of the background pixels can be expressed as:
\mu_1 = \frac{\sum_{i=0}^{t} i P_i}{N_1} = \frac{\sum_{i=0}^{t} i P_i / Sum}{N_1 / Sum} = \frac{\mu_{t1}}{w_1}
Similarly, μ_{t2} represents the mathematical expectation of the foreground pixels relative to the entire image, and the average grayscale value of the foreground pixels can be expressed as:
\mu_2 = \frac{\sum_{i=t+1}^{M} i P_i}{N_2} = \frac{\sum_{i=t+1}^{M} i P_i / Sum}{N_2 / Sum} = \frac{\mu_{t2}}{w_2}
Then the average gray value of the image within the 0 ~ M gray level range is:
\mu = \sum_{i=0}^{t} \frac{i P_i}{Sum} + \sum_{i=t+1}^{M} \frac{i P_i}{Sum} = \mu_{t1} + \mu_{t2} = w_1 \mu_1 + w_2 \mu_2
The between-class variance is:
G = w_1 (\mu - \mu_1)^2 + w_2 (\mu - \mu_2)^2 = w_1 w_2 (\mu_1 - \mu_2)^2
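The second equality above follows from μ = w_1μ_1 + w_2μ_2 and w_1 + w_2 = 1; spelling out the intermediate step:

```latex
% Substituting \mu = w_1\mu_1 + w_2\mu_2 and using w_1 + w_2 = 1:
\mu - \mu_1 = w_2(\mu_2 - \mu_1), \qquad \mu - \mu_2 = w_1(\mu_1 - \mu_2),
% hence
G = w_1 w_2^2 (\mu_1 - \mu_2)^2 + w_2 w_1^2 (\mu_1 - \mu_2)^2
  = w_1 w_2 (\mu_1 - \mu_2)^2 .
```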
Assuming that the between-class variance reaches its maximum G_max at gray level i = i_t, the image complexity C can be expressed as:
CTB(CTB < i_t) = 0, \qquad C = \mathrm{nonzeros}(CTB)
For the scene model, the main purpose of this section is to adaptively select an appropriate processing method based on the image content and to make maximum use of the richness of the image when choosing a sampling method. According to the global complexity model, the complexity threshold α is set as the judgment threshold for the image content complexity C, and the original image is adaptively divided into two categories: simple and complex. The complex type corresponds to scenes such as cities, forests, and clouds; the simple type corresponds to features such as oceans. In addition, as the resolution of remote sensing images has greatly increased, the image content represented by a single coding tree block is greatly reduced, and the foreground and background of a single coding tree block may be highly unbalanced (for example, when the content of the coding tree block is entirely ocean). This results in a bimodal or multimodal between-class variance function and misjudgment of the maximum between-class variance. For this reason, an optimal threshold constraint i_α is added during adaptive mode selection.
The specific steps of coding tree block adaptive mode selection based on the maximum between-class variance are as follows:
Step 1: Divide each frame/scene of the original remote sensing image into 512 × 512 -sized coding tree blocks (CTB).
Step 2: Calculate the histogram of the CTB and normalize it.
Step 3: Set the foreground/background classification threshold i, i ∈ [0, M], and iterate from 0.
Step 4: Calculate the between-class variance G = w 1 w 2 ( μ 1 μ 2 ) 2 .
Step 5: Set i = i + 1 and return to Step 4 until i > M.
Step 6: Calculate the maximum between-class variance G max and the optimal threshold i t .
Step 7: Calculate the image content complexity C .
Step 8: Adaptive mode selection. Define the adaptive mode selection threshold α and the optimal threshold constraint i_α. When C < α, the simple mode is used; when C > α and i_t < i_α, the simple mode is used; when C > α and i_t > i_α, the complex mode is used.
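The threshold search in Steps 3–7 can be sketched as follows (a Python/NumPy illustration under our reading of the procedure; the authors' implementation is in MATLAB, and the function name is hypothetical):

```python
import numpy as np

def otsu_complexity(ctb, levels=256):
    """Scan all thresholds, keep the one maximizing the between-class
    variance G = w1*w2*(mu1 - mu2)^2, then compute the complexity C as
    the number of nonzero pixels after zeroing values below i_t."""
    hist = np.bincount(ctb.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()                      # normalized histogram
    gray = np.arange(levels)
    best_g, i_t = -1.0, 0
    for t in range(levels - 1):
        w1, w2 = p[:t + 1].sum(), p[t + 1:].sum()
        if w1 == 0.0 or w2 == 0.0:             # one class empty: skip
            continue
        mu1 = (gray[:t + 1] * p[:t + 1]).sum() / w1
        mu2 = (gray[t + 1:] * p[t + 1:]).sum() / w2
        g = w1 * w2 * (mu1 - mu2) ** 2         # between-class variance
        if g > best_g:
            best_g, i_t = g, t
    fg = ctb.copy()
    fg[fg < i_t] = 0                           # CTB(CTB < i_t) = 0
    return i_t, int(np.count_nonzero(fg))      # C = nonzeros(CTB)

# A toy bimodal "coding tree block": half background 0, half foreground 200
ctb = np.concatenate([np.zeros(50, dtype=int),
                      np.full(50, 200)]).reshape(10, 10)
i_t, c = otsu_complexity(ctb)
```

In the framework, `c` would then be compared against the mode threshold α (and `i_t` against i_α) to pick the simple or complex mode.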

3.2. Feature Factor Extraction Stage

In addition to conventional statistical features such as variance and information entropy, two-dimensional images also have visual saliency features such as texture and edge information. The edges of an image are the most basic features, and feature maps can be extracted based on the mutation of grayscale, color, and texture structure. The saliency model studied in this section is mainly a pure mathematical calculation method. The purpose of establishing the saliency model is to provide a basis for the adaptive segmentation task of images. The edge detection operator is used to extract the edge roughness information of images, and the corresponding saliency models are constructed for the complex and simple modes in the pattern selection stage. By setting the saliency factor threshold of the global feature map, the quadtree is guided to adaptively partition the image in the next stage.
The edges of an image have two attributes: direction and amplitude. Edges can usually be detected through first-order or second-order derivatives, where the first derivative takes its maximum at the edge position and the second derivative has a zero crossing at the edge position. The remote sensing images studied in this article contain noise and radiation stripes, and the goal is to alleviate resource constraints on the satellite and speed up processing. Therefore, noise sensitivity and processing time are the primary considerations when selecting operators. To this end, this section mainly discusses the first-order gradient operators (the Sobel, Roberts, Prewitt, and Kirsch operators) and the LoG operator [23,24].
Figure 2 shows the edge texture maps obtained with different operators for remote sensing images of several typical scenes, such as cities, forests, clouds, and oceans. It can be observed that for images with rich content and texture details, such as cities, forests, and clouds, the edge details extracted by the Sobel, Roberts, and Prewitt operators are the most comprehensive. However, the Sobel and Prewitt operators produce overly wide edges when extracting natural scenes such as forests and clouds, which makes them unsuitable for expressing the saliency of these types of features. For scenes with relatively simple content, such as ocean images, the Sobel and Prewitt operators perform well in extracting sea-surface details, and for the edge contours of ships the Prewitt operator is superior. Based on the above analysis, with edge roughness as the salient feature, the Roberts and Prewitt operators are used to establish the saliency expressions for the complex (cities, forests, clouds) and simple (oceans) models, respectively.
Based on this, the Roberts operator is used to detect complex CTBs and construct the complex-pattern (urban and forest) saliency model. The correlation G between a CTB and the two Roberts operators can be expressed as:
G_x = R_x * f(x, y), \qquad G_y = R_y * f(x, y), \qquad G = \sqrt{G_x^2 + G_y^2} \approx |G_x| + |G_y|
Here, f(x, y) represents the CTB, and R_x and R_y are the horizontal and vertical Roberts operators. Given a threshold T_R, the local feature saliency factor of complex patterns can be expressed as:
s = \begin{cases} 1, & G > T_R \\ 0, & G \le T_R \end{cases}
Similarly, the Prewitt operator is used to detect simple CTBs and construct the simple-pattern saliency model. The correlation G between a CTB and the two Prewitt operators can be expressed as:
G_x = P_x * f(x, y), \qquad G_y = P_y * f(x, y), \qquad G = \sqrt{G_x^2 + G_y^2} \approx |G_x| + |G_y|
P_x and P_y are the two directional Prewitt operators. Given a threshold T_P, the local feature saliency factor of the ocean model can be expressed as:
s = \begin{cases} 1, & G > T_P \\ 0, & G \le T_P \end{cases}
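The thresholded gradient magnitude behind both saliency factors can be sketched in Python/NumPy (an illustration with hypothetical names; the Roberts cross kernels below are the commonly used definition, which may differ from the authors' exact convention):

```python
import numpy as np

R_X = np.array([[1, 0], [0, -1]])   # Roberts cross, x direction
R_Y = np.array([[0, 1], [-1, 0]])   # Roberts cross, y direction

def corr2_valid(img, k):
    """Plain 2-D 'valid' correlation, sufficient for small kernels."""
    kh, kw = k.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * k).sum()
    return out

def saliency_factor(img, kx, ky, thresh):
    gx = corr2_valid(img, kx)
    gy = corr2_valid(img, ky)
    g = np.abs(gx) + np.abs(gy)      # cheap stand-in for sqrt(gx^2+gy^2)
    return (g > thresh).astype(int)  # binary saliency factor s

# A vertical step edge: only windows straddling the edge are salient
img = np.zeros((4, 4))
img[:, 2:] = 10.0
s = saliency_factor(img, R_X, R_Y, 5.0)
```

Swapping in the 3 × 3 Prewitt kernels for `kx`/`ky` gives the simple-pattern (ocean) variant with threshold T_P.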

3.3. Adaptive Shape Blocking Stage

The purpose of adaptive blocking is to make full use of image texture features so that block segmentation is more accurate, improving the compression sampling of each image block and thus the overall image reconstruction. In standard BCS, the image is directly divided into blocks of the same scale; although simple, this ignores the content and structural texture of the image and inevitably degrades the reconstruction.
In order to improve the reconstruction effect and achieve adaptive image segmentation, this section proposes an image adaptive segmentation strategy based on quadtree, which sets the saliency factor as a constraint condition, fully utilizes the texture structure of the image itself, and guides the quadtree method to adaptively divide the image into encoding blocks of different scales, with large scales corresponding to smooth regions of the image and small scales corresponding to edge regions. The detailed process of dividing 512 × 512 -sized CTBs using the adaptive blocking algorithm is shown in Algorithm 1.
Algorithm 1. Adaptive morphological blocking algorithm based on quadtree images.
Task: Adaptive blocking of 512 × 512 coding tree blocks
Input: 512 × 512 coding tree block Y, blocking threshold σ, maximum encoding block l_max, minimum encoding block l_min
Initialization: block level L = 0, encoding block size l = 0.
Step:
(1) Extract the feature map of the 512 × 512 CTB and calculate the saliency factor s of the CTB features.
(2) Let L = L + 1 and calculate the saliency factor sum σ_C of the feature map of the C-th coding block (CB):
\sigma_C = \sum_{i=1}^{l} \sum_{j=1}^{l} s_{ij}
where l = 512 / 2^L.
(3) Compare σ_C from step (2) with the blocking threshold σ. If σ_C < σ and l_min ≤ l ≤ l_max, the division of this block is complete. If σ_C > σ and l_min < l, return to step (2).
Output: Adaptively blocked image Y
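The top-down recursion of Algorithm 1 can be sketched in Python (an illustrative reading of the procedure with hypothetical names; the map here is small for clarity rather than 512 × 512):

```python
import numpy as np

def quadtree_split(s_map, x, y, size, sigma, l_min, l_max, blocks):
    """Recursively split a square region of a binary saliency map:
    stop when the saliency sum falls below sigma (and the block is no
    larger than l_max), or when splitting would go below l_min."""
    total = s_map[y:y + size, x:x + size].sum()
    if (total < sigma and size <= l_max) or size // 2 < l_min:
        blocks.append((x, y, size))          # emit a coding block
        return
    half = size // 2
    for dy in (0, half):                     # recurse into 4 quadrants
        for dx in (0, half):
            quadtree_split(s_map, x + dx, y + dy, half,
                           sigma, l_min, l_max, blocks)

s_map = np.zeros((8, 8), dtype=int)
s_map[:2, :2] = 1                            # a salient (textured) corner
blocks = []
quadtree_split(s_map, 0, 0, 8, sigma=1, l_min=2, l_max=4, blocks=blocks)
```

The textured corner ends up in small 2 × 2 blocks while smooth regions remain 4 × 4, matching the intent that small scales cover edge regions and large scales cover smooth regions.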

3.4. Sampling Rate Adaptive Allocation Stage

After a CB undergoes the two-dimensional discrete cosine transform (DCT), most of the energy is concentrated in the low-frequency part in the upper-left corner of the coefficient matrix, while the high-frequency coefficients are distributed toward the lower-right corner. The expansion contains only cosine terms, so the CB can be represented by fewer spectral coefficients. Moreover, the larger the low-frequency coefficients are relative to the high-frequency coefficients, the greater the proportion of low-frequency energy and the sparser the CB. In this section, γ is used as the sparsity judgment threshold for the DCT coefficients T_{i,j}. After the DCT of a CB, the absolute values |T_{i,j}| of the coefficients are normalized by the maximum-amplitude coefficient T_max. If a normalized coefficient |T_{i,j}|/T_max is less than the threshold γ, the coefficient is judged to be small and set to 0; otherwise, it is regarded as a large DCT coefficient. k denotes the number of large coefficients among the DCT coefficients of each coding block. In this process, the threshold γ is constant across all encoding blocks, but k varies between coding blocks. k is used as a measure of sparsity and is defined as the information density function of the coding block.
From the above analysis, it can be seen that the smaller k is, the sparser the coding block is, and the better the image can be reconstructed from fewer sampling points. This section adaptively allocates the sampling rate to each image block by calculating the information density function of the CBs. However, in practice an image may contain many 512 × 512 CTBs, and each CTB can be adaptively divided into multiple CBs of different sizes, especially for high-resolution, large-swath remote sensing images. Assigning a sampling rate to every individual CB would undoubtedly increase the burden of on-board storage and transmission. Therefore, this article adopts a clustering approach: CBs of the same size in an image are clustered into one class, and the CBs in each class are assigned the same sampling rate according to their average information density. In fact, for large images there may be hundreds to thousands of CBs in a class. Considering compression timeliness, if the number of CBs in a class is less than 10, the average information density of all blocks is calculated; if it is greater than 10, 10 CBs are randomly selected to calculate the average information density.
Furthermore, according to the principles of compressed sensing, image reconstruction relies on both the measurement results and a prior on signal sparsity. Therefore, the number of measurements m depends largely on the sparsity k of the CB rather than on its length n × n. For high-quality reconstruction of the CB, the following constraint must be satisfied [25]:
c k \log\!\left( \frac{n \times n}{k} \right) \le m \ll n \times n
where c is a constant. In practice, as a rule of thumb, the number of measurements m should be at least three times the sparsity k.
Figure 3 is a flow chart of the sampling rate adaptive allocation method based on information density function and the specific algorithm is shown in Algorithm 2.
Algorithm 2. Sampling rate adaptive allocation algorithm based on the information density function.
Task: Adaptive allocation of sampling rates to CBs of different scales
Input: CBs of different scales n × n, n ∈ {4, 8, 16, 32, 64}
Step:
(1) CB clustering. After adaptive partitioning, CBs of different sizes are grouped into classes Sort_{n×n} according to size.
(2) Discrete cosine transform of the coding blocks. Count the number num of CBs in each class. When num ≤ 10, calculate the DCT coefficients coefs_i of all CBs in the class; when num > 10, randomly select 10 CBs and calculate their DCT coefficients.
(3) Calculate the CB information density function k:
normalized\_coefs = |coefs| / \max(|coefs|), \quad coefs(normalized\_coefs < \gamma) = 0, \quad k = \mathrm{nonzeros}(coefs)
(4) Calculate the adaptive sampling rate Sample_{n×n} of each class (the average sparsity, scaled by the factor of three from the rule of thumb above, is converted to a rate by dividing by the block length n × n):
Sample_{n \times n} = \frac{3 \sum_{i=1}^{num} k_i}{num \cdot (n \times n)}
(5) To prevent the adaptive sampling rates from differing too much, set an upper bound Sample_max and a lower bound Sample_min on the sampling rate:
Sample = \begin{cases} 0.2, & 0 < Sample < 0.2 \\ 0.9, & Sample > 0.9 \end{cases}
Output: The adaptive sampling rate Sample_{n×n} for each class Sort_{n×n}
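The sparsity detection and rate assignment of Algorithm 2 can be sketched as follows (Python/NumPy with hypothetical names; the orthonormal DCT matrix and the division by n × n in the rate formula are our assumptions, chosen so the result is a rate in (0, 1) consistent with the 0.2–0.9 bounds):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows index frequencies)."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2)
    return m * np.sqrt(2.0 / n)

def information_density(cb, gamma):
    """k: number of 2-D DCT coefficients whose magnitude is at least
    gamma times the largest magnitude (the rest count as zero)."""
    d = dct_matrix(cb.shape[0])
    mags = np.abs(d @ cb @ d.T)          # 2-D DCT coefficient magnitudes
    return int((mags / mags.max() >= gamma).sum())

def adaptive_rate(ks, n, lo=0.2, hi=0.9):
    """Rate = 3 * mean(k) / (n*n), clipped to [lo, hi]; the division
    by n*n is an assumption to turn a measurement count into a rate."""
    return float(np.clip(3.0 * np.mean(ks) / (n * n), lo, hi))

# A constant block has only the DC coefficient: maximally sparse (k = 1)
flat = np.full((8, 8), 5.0)
k = information_density(flat, gamma=0.1)
```

A flat ocean-like block thus gets the floor rate of 0.2, while a texture-rich block with many significant coefficients is pushed toward the 0.9 ceiling.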

3.5. Image Adaptive Reconstruction Stage

Most traditional compressed sensing reconstruction algorithms use a fixed observation matrix for the observation samples of all CBs, such as a Gaussian random matrix, Bernoulli random matrix, partial Fourier matrix, Toeplitz matrix, or Hadamard matrix. This ignores the image content and texture structure and does not fully exploit the sparsity of the image; since the sparsity differs between CBs, applying the same sampling rate to all of them can easily cause the reconstructed image to be highly pixelated.
Based on the image adaptive coding in Section 3.1, Section 3.2, Section 3.3 and Section 3.4, this section proposes an adaptive block compressed sensing-orthogonal matching pursuit algorithm (ABCS-OMP) for the decoding and reconstruction stage. The reconstruction matrix and sparse basis are adaptively selected for each CB category, the iteration threshold is set according to the known sparsity, and the OMP algorithm is used to reconstruct the image block. Figure 4 is the complete workflow diagram of the fully adaptive encoding and decoding compressed sensing framework for remote sensing images designed in this paper. The ABCS-OMP algorithm is shown in Algorithm 3; its input is the measured values of the image after adaptive sampling and observation, and its output is the original image restored from those measured values by the adaptive reconstruction algorithm.
Algorithm 3. Adaptive decoding-orthogonal matching pursuit algorithm based on CS.
Task: Coding block adaptive decoding reconstruction
Input: observation value y_i, measurement matrix Φ_nn, sparse basis Ψ_nn, coding-block sparsity k̄_nn
Step:
(1) Initialize. Residual r_0 = y_i, iteration count t = 1, sparse representation coefficient θ = 0 ∈ R^nn, encoding-block index set Λ_0 = ∅.
(2) Calculate the sensing matrix. According to the coding-block size nn, adaptively select the measurement matrix Φ_nn and the sparse basis Ψ_nn, and compute the sensing matrix T_i:
        T_i = Φ_nn · inv(Ψ_nn)
(3) Find the index j_t with the greatest correlation: j_t = argmax_j |⟨T_i^(j), r_{t-1}⟩|.
(4) Update the index set: Λ_t = Λ_{t-1} ∪ {j_t}.
(5) Find the least-squares solution θ̂_t = (T_{i,Λ_t}^T T_{i,Λ_t})^{-1} T_{i,Λ_t}^T y_i, and update the residual r_t = y_i − T_{i,Λ_t} θ̂_t.
(6) If t ≥ k̄_nn, stop and return the sparse representation coefficient θ̂_t; otherwise set t = t + 1 and return to Step 3.
(7) Calculate the reconstructed CB vector y:
        y = inv(Ψ_nn) · θ̂_t
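Algorithm 3's core loop is standard OMP with the iteration count fixed by the known sparsity k̄. A self-contained NumPy sketch of that loop (with a toy orthonormal sensing matrix for the check, not the paper's adaptively selected Φ_nn and Ψ_nn):

```python
import numpy as np

def omp(T, y, k):
    """Orthogonal matching pursuit, stopping after k iterations
    (k = known coding-block sparsity, Algorithm 3's threshold)."""
    n = T.shape[1]
    r = y.astype(float).copy()                    # Step 1: r0 = y
    support = []                                  # index set Lambda
    theta_s = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(T.T @ r)))       # Step 3: best-correlated atom
        if j not in support:                      # Step 4: grow the index set
            support.append(j)
        Ts = T[:, support]
        theta_s, *_ = np.linalg.lstsq(Ts, y, rcond=None)  # Step 5: LS solution
        r = y - Ts @ theta_s                      # residual update
    theta = np.zeros(n)
    theta[support] = theta_s
    return theta

# toy check: with an orthonormal sensing matrix, a 2-sparse coefficient
# vector is recovered exactly in 2 iterations
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(16, 16)))
theta_true = np.zeros(16)
theta_true[[2, 9]] = [1.0, -0.5]
theta_hat = omp(Q, Q @ theta_true, k=2)
```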

4. Simulation

The operating system used in the tests was Windows 10 flagship edition, 64-bit; the processor was an 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80 GHz; the test tool was MATLAB 2019b; and the timing functions were tic and toc. To verify the effectiveness of the proposed full-process adaptive encoding and decoding method for remote sensing images, the aerospace remote sensing data set NWPU VHR-10 [26] and the Landsat 7 Cloud Cover Assessment Validation Data [27] were selected as test images. This section contains four performance analysis tests: (1) discussion of the adaptive mode selection threshold α of Section 3.1; (2) verification of the simple and complex modes of Section 3.2 combined with the adaptive morphological blocking of Section 3.3; (3) discussion of the sparsity judgment threshold γ for the DCT coefficients T_{i,j} of Section 3.4; (4) performance comparison of different algorithms, in which the proposed algorithm is compared with others in terms of subjective reconstruction effects, objective evaluation indicators, and encoding and decoding computation time.

4.1. Analyze Adaptive Mode Selection Threshold α

In this section, the adaptive mode selection method based on the maximum between-class variance method proposed in Section 3.1 is simulated and verified, and the judgment criteria for its two constraint conditions, the image content complexity C and the optimal threshold i_t, are discussed.
Firstly, the aerospace remote sensing data sets NWPU VHR-10 and Landsat 7 Cloud Cover Assessment Validation Data were divided into blocks to form a set of 512 × 512-sized CTBs, from which three complex image-content data sets (clouds, cities and forests) and one simple ocean image-content data set were constructed. Each data set contains 30 512 × 512-sized CTBs. The image content complexity C and optimal threshold i_t of each CTB in the four data sets were calculated with the adaptive mode selection algorithm based on the maximum between-class variance method.
In this test, the image is quantized to 8 bit, and the maximum image content complexity of a single CTB is 262,144. The larger the complexity C, the richer the image content and the more texture details. Figure 5 is a two-dimensional scatter plot of the complexity and optimal threshold of the four datasets. The point distribution of the complexity C and the optimal threshold i_t for the cloud, city and forest complex-content datasets is relatively stable: the complexity C is almost always greater than 32,768 (one-eighth of the maximum complexity), and the optimal threshold i_t is almost always greater than 0.125 (one-eighth of the maximum optimal threshold).
Based on the above analysis, α = 32,768 is set as the judgment threshold for the image content complexity C, and i_α = 0.125 as the judgment threshold for the optimal threshold i_t. Combining the thresholds α and i_α, the maximum between-class variance method is used to adaptively select modes for the four datasets of clouds, cities, forests and oceans. The test results and accuracy are shown in Table 1, where “1” indicates the complex mode and “0” the simple mode. The results show that the adaptive mode selection thresholds determined here achieve a mode selection accuracy above 90% for ocean, cloud, city and forest CTBs; the image-adaptive mode selection algorithm based on the maximum between-class variance method is therefore suitable for adaptive mode selection on remote sensing images.
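The maximum between-class variance (Otsu) criterion used for mode selection can be sketched as follows. The complexity measure C and the decision thresholds α and i_α follow the paper; the function below only finds the optimal gray-level threshold by maximizing the between-class variance over an 8-bit histogram:

```python
import numpy as np

def otsu_threshold(img):
    """Maximum between-class variance (Otsu) threshold for an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                   # gray-level probabilities
    omega = np.cumsum(p)                    # class-0 probability up to level t
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b2 = np.nan_to_num(sigma_b2)      # undefined at empty classes
    return int(np.argmax(sigma_b2))

# two-level toy image: the threshold falls between the two gray levels
img = np.array([[30] * 8, [200] * 8] * 4, dtype=np.uint8)
t = otsu_threshold(img)
```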

4.2. Image Adaptive Morphological Blocking Test

To test the simple and complex saliency models constructed in Section 3.2, together with the two innovations proposed in Section 3.3 of using feature saliency factors to guide quadtrees in adaptively dividing images into shapes, this section selects typical remote sensing scenes such as cities, airports, forests, clouds and oceans. The effectiveness of the proposed method is tested and compared with several current typical image-adaptive blocking methods, and its advantages are analyzed. The test results are shown in Figure 6.
The test results show that, compared with the three methods based on difference, mean and variance, and grayscale entropy, the saliency-factor method proposed in this paper effectively performs adaptive image segmentation along edge textures for complex terrain such as cities, airports, forests and clouds; for simple ocean images, it can still detect sea-surface ships and ripples. This detection of edge details combined with adaptive morphological division is the main advantage of the proposed method. The method first uses the maximum between-class variance method to construct the two adaptive selection modes, simple and complex; it then establishes a feature saliency model for each mode; and finally it uses the feature saliency factor to guide the quadtree in adaptive morphological division of the image. Whether for a simple ocean image or a complex city, forest or cloud image, the image is partitioned according to its texture details, and the proposed method shows good stationarity.
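The quadtree division itself can be sketched independently of the saliency model: a square block is split into four while its saliency score exceeds a threshold. Here plain block variance stands in for the paper's feature saliency factor (an assumption for illustration only):

```python
import numpy as np

def quadtree_split(block, saliency, thresh, min_size=4, origin=(0, 0), out=None):
    """Recursively split a square block while saliency(block) > thresh;
    returns a list of (row, col, size) leaves."""
    if out is None:
        out = []
    n = block.shape[0]
    r0, c0 = origin
    if n <= min_size or saliency(block) <= thresh:
        out.append((r0, c0, n))          # smooth (or minimal) block: keep whole
        return out
    h = n // 2                           # textured block: split into quadrants
    for dr in (0, h):
        for dc in (0, h):
            quadtree_split(block[dr:dr + h, dc:dc + h], saliency, thresh,
                           min_size, (r0 + dr, c0 + dc), out)
    return out

# toy image: one checkerboard (textured) quadrant, the rest flat
img = np.zeros((16, 16))
img[:8, :8] = np.indices((8, 8)).sum(0) % 2 * 255
leaves = quadtree_split(img, np.var, thresh=10.0)
```

The textured quadrant is divided down to 4 × 4 leaves while the three smooth quadrants stay as single 8 × 8 blocks, so complex and smooth regions end up in differently sized CBs.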

4.3. Analyze Sparsity Judgment Threshold γ

In the sampling-rate adaptive allocation method based on the information density function, the sparsity k of a given block is set adaptively as a function of its information density through the sparsity threshold γ, and the sampling rate is allocated to each coding block according to its average information density, achieving adaptive downsampling of the image. The smaller the judgment threshold γ, the larger the sparsity k, the higher the sampling rate, and the smaller the reconstruction error; however, as the average percentage of coefficients used to reconstruct a CB increases, the compression effect of the algorithm decreases. To balance the reconstruction error e against the percentage of reconstruction coefficients, the following simulation test was performed:
(1) Set the sparsity judgment threshold γ to 30 points in [10^−4, 10^−1].
(2) For each γ, calculate the reconstruction error e and the percentage of reconstruction coefficients.
(3) Plot γ against the reconstruction error.
(4) Plot the percentage of reconstruction coefficients against the reconstruction error.
The test results are shown in Figure 7. As γ increases, the reconstruction error increases and the percentage of coefficients used for reconstruction decreases. To observe the reconstruction effect more closely, reconstructed images are displayed at four positions on the curve. Empirically, point (B) represents the highest error that can be tolerated while the reconstruction effect remains good. In summary, the sparsity judgment threshold γ = 0.02 is the optimal value (or close enough to it).
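The sweep over γ can be reproduced on a 1-D toy signal: keep only the DCT coefficients above γ·max|c| and measure the relative reconstruction error. Because the DCT is orthonormal, the error grows monotonically with γ, matching the trend in Figure 7 (helper names ours, signal arbitrary):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    D[0] /= np.sqrt(2)
    return D

def sweep_gamma(x, gammas):
    """For each threshold gamma, keep DCT coefficients with
    |c| > gamma * max|c| and record the relative reconstruction error."""
    D = dct_matrix(len(x))
    c = D @ x
    errs = []
    for g in gammas:
        c_kept = np.where(np.abs(c) > g * np.abs(c).max(), c, 0.0)
        x_hat = D.T @ c_kept          # inverse of an orthonormal transform
        errs.append(np.linalg.norm(x - x_hat) / np.linalg.norm(x))
    return np.array(errs)

# smooth carrier plus a small high-frequency ripple
t1 = np.linspace(0, 4 * np.pi, 64)
t2 = np.linspace(0, 40 * np.pi, 64)
x = np.cos(t1) + 0.05 * np.sin(t2)
errs = sweep_gamma(x, np.logspace(-4, -1, 30))   # 30 points in [1e-4, 1e-1]
```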

4.4. Performance Comparison of Different Algorithms

In this section, the classic compressed sensing reconstruction algorithm based on orthogonal matching pursuit (CS-OMP) [28], the widely used uniform block compressed sensing reconstruction algorithm (BCS-OMP) [29], and the BCS-SPL algorithm [30], which optimizes two-dimensional compression, block compression and the mathematical model, are selected for comparison. The algorithms are analyzed in terms of sampling rate, encoding time, decoding time, peak signal-to-noise ratio, structural similarity and subjective reconstruction effect. The full-process adaptive encoding and decoding framework based on compressed sensing proposed in this article has four capabilities: (1) it selects the encoding mode according to the image content; (2) it uses image texture details for adaptive blocking; (3) it detects the sparsity of each encoding block and adaptively allocates the sampling rate; (4) it adaptively selects the reconstruction matrix according to the coding-block size. The framework requires no manual preset compression ratio, which effectively avoids mismatch between the sampling rate and the redundancy of the image, and it explores the sparsity of the image to the maximum extent while allocating the sampling rate reasonably.
The sequence of experiments was as follows: Firstly, the ABCS-OMP algorithm was used for adaptive sampling of the input image to obtain the optimal or near-optimal sampling rate of the image; then, the same sampling rate was set for the three methods of CS-OMP, BCS-OMP and BCS-SPL to compare the reconstructed image quality of different algorithms. Under the same compression ratio and consistent test environment, this section selects five typical remote sensing images with different image complexities, such as cities, airports, forests, clouds, and oceans for performance testing, and compares them from both subjective and objective aspects. Among them, Table 2 shows the adaptive sampling results of 5 different types of images by the method proposed in this paper, where ‘SubR’ represents the subsampling rate of each type of image block, and ‘Num’ represents the number of each type of image block. Table 3 lists the test results of different indicators of the four algorithms for 5 images. The subjective test results are shown in Figure 8.
From both subjective and objective perspectives, the traditional CS-OMP algorithm is slow at high sampling rates and produces serious artifacts in reconstructed images at low sampling rates. The classic BCS-OMP algorithm is likewise slow at high sampling rates, and its block-level artifacts are obvious at low sampling rates, leaving reconstructed images highly pixelated. Compared with these two methods, the optimized BCS-SPL algorithm greatly improves operation speed and image reconstruction quality; however, because of the repeated Wiener filtering, image edges are not sharp enough and edge details are not kept intact, the structural similarity score is low, and pixel blocks at the edges are obvious when simple image content is reconstructed.
Compared with the first three algorithms, the ABCS-OMP algorithm proposed in this article has four main advantages: (1) in the mode selection stage, the maximum between-class variance method is used to calculate the image complexity from a comprehensive analysis of the image, the parameters are combined with the mode, and the observation mode is selected according to the content to improve compression efficiency; (2) in the adaptive segmentation stage, the edge feature saliency factor guides a quadtree to recursively divide the image space into tree structures of different levels, making full use of the image's own structural texture and separating smooth and edge regions for partitioned processing, which alleviates high pixelation in reconstructed images; (3) in the adaptive sampling-rate allocation stage, the sparsity is set adaptively as the information density function of each encoding block, and the blocks are sampled dynamically according to their differing information densities, mining the compressibility of the image as far as possible and effectively avoiding image artifacts; (4) in the adaptive reconstruction stage, OMP uses the prior knowledge of the known block sparsity to accelerate reconstruction. The simulation results are consistent with this analysis. Judged by the subjective and objective indicators of codec time, PSNR, SIMM and BRISQUE, the fully adaptive encoding and decoding compressed sensing framework designed in this paper maintains a peak signal-to-noise ratio above 35 dB and a structural similarity above 0.8 for remote sensing images with complex edge textures, showing good stability. Moreover, for ocean images with relatively simple content, the peak signal-to-noise ratio reaches 50.8 dB and the structural similarity reaches 0.99 at a sampling rate of 0.26.
Compared with the other three methods, the image recovered by the ABCS-OMP method has the smallest BRISQUE value, with better clarity and less distortion. Subjectively, the reconstructed image has clear edge details and a good reconstruction effect, while the block effect is effectively suppressed.
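Of the objective indicators above, PSNR is the simplest to state precisely. A NumPy sketch for 8-bit images (SSIM and BRISQUE require full reference implementations and are omitted here):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images:
    PSNR = 10 * log10(peak^2 / MSE)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 100, dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] += 16                 # single-pixel error: MSE = 256/64 = 4
print(round(psnr(ref, noisy), 2))  # prints 42.11
```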

5. Conclusions

To solve the incompatibility between traditional information acquisition modes and spaceborne earth observation tasks, this paper starts from the general mathematical model of compressed sensing, establishes a theoretical model of block compressed sensing, and proposes a full-process adaptive encoding and decoding framework for remote sensing images. The framework first uses the maximum between-class variance method to calculate the image content complexity and adaptively select the coding mode; image feature factors are extracted to establish a saliency model for the corresponding mode, which guides the adaptive morphological segmentation of the coding tree block. Next, the information density function is used to detect the sparsity of each encoding block, exploiting the sparsity of the image as fully as possible to achieve adaptive sampling-rate allocation. Finally, the reconstruction matrix and sparse basis are selected adaptively according to the coding-block category, the iteration threshold is set from the known sparsity, and the OMP algorithm reconstructs each coding block. Compared with CS-OMP, BCS-OMP, BCS-SPL and similar methods, the proposed framework requires no manually preset compression ratio: it selects the sampling mode according to the image content and adaptively sets the compression efficiency by detecting sparsity, meeting the requirements of fast compression on the satellite and high-fidelity reconstruction on the ground. Under a given compression ratio, the proposed method is more stable for images with complex textures, effectively alleviating block-level artifacts and enriching image texture details.
For simple image content, when the sampling rate is 0.26, the structural similarity increases by 52.3%, the peak signal-to-noise ratio has reached 50.8 dB, and the reconstructed image has better decoding image quality and visual effects.

Author Contributions

Conceptualization, H.H. and C.L.; methodology, H.H. and S.L.; software, H.H.; writing—original draft preparation, H.H., S.Y., C.W. and Y.D.; writing—review and editing, H.H., C.L., S.L., S.Y., C.W. and Y.D.; funding acquisition, C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant 62175236.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors would like to thank the anonymous reviewers for their valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CS: Compressive sensing
BCS: Block compressed sensing
ABCS: Adaptive block compressive sensing
CB: Coding block
CTB: Coding tree block
DCT: Discrete cosine transform
ABCS-OMP: Adaptive block compressed sensing-orthogonal matching pursuit algorithm

References

  1. Cavender-Bares, J.; Schneider, F.D.; Santos, M.J.; Armstrong, A.; Carnaval, A.; Dahlin, K.M.; Fatoyinbo, L.; Hurtt, G.C.; Schimel, D.; Townsend, P.A.; et al. Integrating remote sensing with ecology and evolution to advance biodiversity conservation. Nat. Ecol. Evol. 2022, 6, 506–519. [Google Scholar] [CrossRef] [PubMed]
  2. Ying, S.; Qu, H.; Tao, S.; Zheng, L.; Wu, X. Radiation Sensitivity Analysis of Ocean Wake Information Detection System Based on Visible Light Remote Sensing. Remote Sens. 2022, 14, 4054. [Google Scholar] [CrossRef]
  3. Monika, R.; Dhanalakshmi, S. An optimal adaptive reweighted sampling-based adaptive block compressed sensing for underwater image compression. Vis. Comput. 2023. [CrossRef]
  4. Chen, L.; Lan, Z.; Qian, S.; Hou, X.; Zhang, L.; He, J.; Chou, X. Real-Time Data Sensing for Microseismic Monitoring via Adaptive Compressed Sampling. IEEE Sens. J. 2023, 23, 10644–10655. [Google Scholar] [CrossRef]
  5. Shi, Y.; Chen, R.; Liu, D.; Wang, B. A visually secure image encryption scheme based on adaptive block compressed sensing and non-negative matrix factorization. Opt. Laser Technol. 2023, 163, 109345. [Google Scholar] [CrossRef]
  6. Fu, W.; Ma, J.; Chen, P.; Chen, F. Remote sensing satellites for digital earth. In Manual of Digital Earth; Springer: Singapore, 2020; pp. 55–123. [Google Scholar]
  7. Liu, S. Remote Sensing Satellite Image Acquisition Planning: Framework, Methods and Application. Ph.D. Thesis, University of South Carolina, Columbia, SC, USA, 2014. [Google Scholar]
  8. Rani, M.; Dhok, S.B.; Deshmukh, R.B. A Systematic Review of Compressive Sensing: Concepts, Implementations and Applications. IEEE Access 2018, 6, 4875–4894. [Google Scholar] [CrossRef]
  9. Belgaonkar, S.M.; Singh, V. Image compression and reconstruction in compressive sensing paradigm. Glob. Transit. Proc. 2022, 3, 220–224. [Google Scholar] [CrossRef]
  10. Jiao, L.; Huang, Z.; Liu, X.; Yang, Y.; Ma, M.; Zhao, J.; You, C.; Hou, B.; Yang, S.; Liu, F.; et al. Brain-Inspired Remote Sensing Interpretation: A Comprehensive Survey. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 2992–3033. [Google Scholar] [CrossRef]
  11. Upadhyaya, V.; Salim, M. Compressive Sensing: An Efficient Approach for Image Compression and Recovery. In Recent Trends in Communication and Intelligent Systems: Proceedings of ICRTCIS 2019; Springer: Singapore, 2020. [Google Scholar]
  12. Gan, L. Block Compressed Sensing of Natural Images. In Proceedings of the 2007 15th International Conference on Digital Signal Processing, Cardiff, UK, 1–4 July 2007; pp. 403–406. [Google Scholar]
  13. Gao, Z.; Xiong, C.; Ding, L.; Zhou, C. Image representation using block compressive sensing for compression applications. J. Vis. Commun. Image Represent. 2013, 24, 885–894. [Google Scholar] [CrossRef]
  14. Pan, J.S.; Li, W.; Yang, C.S.; Yan, L.J. Image steganography based on subsampling and compressive sensing. Multimed. Tools Appl. 2015, 74, 9191–9205. [Google Scholar] [CrossRef]
  15. Monika, R.; Samiappan, D.; Kumar, R. Adaptive block compressed sensing—A technological analysis and survey on challenges, innovation directions and applications. Multimed. Tools Appl. 2021, 80, 4751–4768. [Google Scholar] [CrossRef]
  16. Zhu, S.; Zeng, B.; Gabbouj, M. Adaptive reweighted compressed sensing for image compression. In Proceedings of the 2014 IEEE International Symposium on Circuits and Systems (ISCAS), Melbourne, VIC, Australia, 1–5 June 2014; pp. 1–4. [Google Scholar]
  17. Zhang, J.; Xiang, Q.; Yin, Y.; Chen, C.; Luo, X. Adaptive compressed sensing for wireless image sensor networks. Multimed. Tools Appl. 2016, 76, 4227–4242. [Google Scholar] [CrossRef]
  18. Li, R.; Duan, X.; Lv, Y. Adaptive compressive sensing of images using error between blocks. Int. J. Distrib. Sens. Netw. 2018, 14, 155014771878175. [Google Scholar] [CrossRef]
  19. Li, R.; Duan, X.; Guo, X.; He, W.; Lv, Y. Adaptive Compressive Sensing of Images Using Spatial Entropy. Comput. Intell. Neurosci. 2017, 2017, 9059204. [Google Scholar] [CrossRef] [PubMed]
  20. Zhu, Y.; Liu, W.; Shen, Q. Adaptive Algorithm on Block-Compressive Sensing and Noisy Data Estimation. Electronics 2019, 8, 753. [Google Scholar] [CrossRef]
  21. Chen, Q.; Chen, D.; Gong, J. Low-Complexity Adaptive Sampling of Block Compressed Sensing Based on Distortion Minimization. Sensors 2022, 22, 4806. [Google Scholar] [CrossRef] [PubMed]
  22. Peiyang, L.; Huacai, L.; Xiuyun, Z.; Hefeng, L. An improved method of maximum inter class variance for image shadow processing. In Proceedings of the 2021 International Conference on Big Data Analysis and Computer Science (BDACS), Kunming, China, 25–27 June 2021; pp. 237–241. [Google Scholar]
  23. Chaple, G.N.; Daruwala, R.D.; Gofane, M.S. Comparisions of Robert, Prewitt, Sobel operator based edge detection methods for real time uses on FPGA. In Proceedings of the 2015 International Conference on Technologies for Sustainable Development (ICTSD), Mumbai, India, 4–6 February 2015; pp. 1–4. [Google Scholar]
  24. Yang, G.; Xu, F. Research and analysis of Image edge detection algorithm Based on the MATLAB. Procedia Eng. 2011, 15, 1313–1318. [Google Scholar] [CrossRef]
  25. Stern, A.; Rivenson, Y.; Javidi, B. Optically compressed image sensing using random aperture coding. In Proceedings of the Enabling Photonics Technologies for Defense, Security, and Aerospace Applications IV, Orlando, FL, USA, 17–18 March 2008; Volume 6975, pp. 93–102. [Google Scholar]
  26. Available online: https://www.scidb.cn/en/detail?dataSetId=028975a398ea43e9a1adb3c827f5c91d (accessed on 20 November 2023).
  27. Available online: https://www.sciencebase.gov/catalog/item/62bf0f46d34e82c548ced83e (accessed on 20 November 2023).
  28. Zhang, T. Sparse Recovery with Orthogonal Matching Pursuit Under RIP. IEEE Trans. Inf. Theory 2011, 57, 6215–6221. [Google Scholar] [CrossRef]
  29. Wen, J.; Chen, H.; Zhou, Z. An Optimal Condition for the Block Orthogonal Matching Pursuit Algorithm. IEEE Access 2018, 6, 38179–38185. [Google Scholar] [CrossRef]
  30. Shoitan, R.; Nossair, Z.; Ibrahim, I.I.; Tobal, A. Performance improvement of the decoding side of the BCS-SPL technique. In Proceedings of the 2017 International Conference on Advanced Control Circuits Systems (ACCS) Systems & 2017 International Conference on New Paradigms in Electronics & Information Technology (PEIT), Alexandria, Egypt, 5–8 November 2017; pp. 5–9. [Google Scholar]
Figure 1. Framework diagram of image full process adaptive encoding and decoding based on compressive sensing.
Figure 2. Comparison of edges of several typical remote sensing images extracted by different operators. (14) are typical remote sensing images of cities, forests, clouds, oceans, etc. (ag) are the original images and edge schematic diagrams extracted from different images by 6 different operators, namely: Gradient, LOG, Krisch, Sobel, Roberts and Prewitt.
Figure 3. Flow chart of sample rate adaptive allocation algorithm.
Figure 4. Article method input to output complete workflow structure.
Figure 5. Scatter plot of image content complexity and optimal threshold degree for four datasets: (a) Scatter plot with complex image content; (b) Scatter plot of optimal threshold.
Figure 6. Comparison of test results for multiple image adaptive blocking methods. (15) are typical remote sensing images of cities, airports, forests, clouds, oceans, etc. (ae) are the original images and test results of four adaptive morphological division methods, namely difference method, mean and variance method, gray entropy method and the significant factor method in this article, etc.
Figure 7. The relationship between sparsity judgment threshold and reconstruction error.
Figure 8. Image reconstruction effects of different algorithms. (15) are typical remote sensing images of cities, airports, forests, clouds, oceans, etc. (ae) are the original images and the test results of four algorithms, namely CS-OMP, BCS-OMP, BCS-SPL, and the ABCS-OMP method proposed in this paper.
Table 1. Datasets adaptive mode selection test based on maximum between-class variance method.
| Dataset | Oceans | Clouds | Cities | Forests |
|---|---|---|---|---|
| The number of “1” | 3 | 30 | 29 | 29 |
| The number of “0” | 27 | 0 | 1 | 1 |
| Accuracy | 90% | 100% | 97% | 97% |
Table 2. Number of various image blocks and sub-sampling rate.
| Block size | | City | Airport | Forest | Cloud | Ocean |
|---|---|---|---|---|---|---|
| 4 × 4 | SubR | 0.9000 | 0.5625 | 0.9000 | 0.9000 | 0.8813 |
| | Num | 13732 | 4132 | 14596 | 10524 | 1460 |
| 8 × 8 | SubR | 0.2000 | 0.2000 | 0.2000 | 0.2000 | 0.2000 |
| | Num | 587 | 1131 | 343 | 665 | 591 |
| 16 × 16 | SubR | 0.2000 | 0.2000 | 0.2000 | 0.2000 | 0.2000 |
| | Num | 19 | 483 | 26 | 200 | 269 |
| 32 × 32 | SubR | 0 | 0 | 0 | 0 | 0.2000 |
| | Num | 0 | 0 | 0 | 0 | 89 |
| 64 × 64 | SubR | 0 | 0 | 0 | 0 | 0.2000 |
| | Num | 0 | 0 | 0 | 0 | 10 |
| Sampling rate | | 0.7867 | 0.2914 | 0.8263 | 0.6963 | 0.2624 |
Table 3. Objective evaluation index test results of different algorithms.
| Test Image | Algorithm | Sampling Rate | Coding Time | Decoding Time | PSNR | SIMM | BRISQUE |
|---|---|---|---|---|---|---|---|
| City | CS-OMP | 0.7867 | 0.0818 | 181.2283 | 26.1894 | 0.6794 | 43.4582 |
| | BCS-OMP | 0.7867 | 0.0566 | 86.3182 | 26.3191 | 0.7448 | 22.4270 |
| | BCS-SPL | 0.7867 | 0.5851 | 3.9024 | 31.5552 | 0.8226 | 39.7210 |
| | ABCS-OMP | 0.7867 | 0.8285 | 2.9680 | 35.9815 | 0.9150 | 15.0426 |
| Airport | CS-OMP | 0.2914 | 0.0824 | 10.2454 | 28.1141 | 0.2486 | 41.0038 |
| | BCS-OMP | 0.2914 | 0.0505 | 7.1032 | 26.6067 | 0.3481 | 32.3333 |
| | BCS-SPL | 0.2914 | 0.5697 | 6.8631 | 35.9875 | 0.6513 | 39.3433 |
| | ABCS-OMP | 0.2914 | 0.4773 | 3.2485 | 36.3416 | 0.8039 | 28.5709 |
| Forest | CS-OMP | 0.8236 | 0.0853 | 152.2923 | 30.4309 | 0.7029 | 41.5029 |
| | BCS-OMP | 0.8236 | 0.0580 | 99.2431 | 32.9563 | 0.8465 | 28.0624 |
| | BCS-SPL | 0.8236 | 0.5935 | 4.1734 | 37.8384 | 0.8236 | 31.0780 |
| | ABCS-OMP | 0.8236 | 0.8924 | 3.1422 | 39.0126 | 0.9298 | 25.9349 |
| Cloud | CS-OMP | 0.6963 | 0.0807 | 55.8707 | 33.9590 | 0.6893 | 20.9542 |
| | BCS-OMP | 0.6963 | 0.0499 | 60.1737 | 31.5285 | 0.7389 | 26.2792 |
| | BCS-SPL | 0.6963 | 0.5643 | 5.2329 | 42.0761 | 0.8310 | 38.5014 |
| | ABCS-OMP | 0.6963 | 0.5782 | 3.4125 | 40.4628 | 0.9586 | 19.7492 |
| Ocean | CS-OMP | 0.2624 | 0.0836 | 4.7509 | 43.8920 | 0.1684 | 49.5071 |
| | BCS-OMP | 0.2624 | 0.0580 | 4.5081 | 35.5093 | 0.4340 | 50.3378 |
| | BCS-SPL | 0.2624 | 0.5543 | 6.0009 | 40.2244 | 0.6557 | 49.1216 |
| | ABCS-OMP | 0.2624 | 0.2922 | 3.2660 | 50.8027 | 0.9925 | 46.5433 |
