Article

A Deconvolutional Deblurring Algorithm Based on Short- and Long-Exposure Images

1 Aerospace Information Research Institute, Chinese Academy of Sciences, No.9 Dengzhuang South Road, Haidian District, Beijing 100094, China
2 School of Optoelectronics, University of Chinese Academy of Sciences, No.19(A) Yuquan Road, Shijingshan District, Beijing 100039, China
3 Key Laboratory of Computational Optical Imaging Technology, CAS, No.9 Dengzhuang South Road, Haidian District, Beijing 100094, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(5), 1846; https://doi.org/10.3390/s22051846
Submission received: 7 December 2021 / Revised: 18 February 2022 / Accepted: 23 February 2022 / Published: 25 February 2022
(This article belongs to the Section Intelligent Sensors)

Abstract

An iterative image restoration algorithm for the image deblurring problem, based on the concept of long- and short-exposure deblurring, is proposed under the image deconvolution framework by investigating the imaging principle and existing algorithms, thereby realizing the restoration of degraded images. The effective a priori side information provided by the short-exposure image is used to improve the accuracy of kernel estimation and, in turn, the quality of image restoration. For kernel estimation, a priori-filtering non-dimensional Gaussianity measure (BID-PFNGM) regularization term is proposed, and the fidelity term is corrected using short-exposure image information, improving the kernel estimation accuracy. For image restoration, a p-norm-constrained relative gradient regularization model is put forward, and a restoration result that preserves image edges while recovering texture is acquired through further processing of the model output. The experimental results show that the proposed algorithm achieves a better restoration effect than comparable algorithms.

1. Introduction

In the exposure process of an imaging detector, relative motion between the image surface and the object surface leads to image motion and the degradation of image quality [1]. This problem is usually solved by two means: (1) the speed of the camera relative to the object surface is measured, and the image motion is compensated for by placing a fast steering mirror in the light path [2]; by correcting the image motion directly through the principle of relative motion, this method is effective, but it increases the cost and complexity of the camera hardware system. (2) The optical imaging mechanism is combined with a signal processing algorithm, and the image motion is compensated for by a deblurring algorithm [3]; this method is low-cost and easy to implement, but the correction effect depends heavily on the accuracy of the algorithm's motion kernel estimation.
A motion-blurred image can be expressed as the projection of the convolution of the imaging target and the image motion kernel onto the image surface. Therefore, a deblurring algorithm can be regarded as a deconvolution process with an ill-posed problem [3]. If the image motion kernel is estimated only from the blurred image itself [3,4,5,6], the ill-posed problem becomes serious in the equation-solving process, with a large kernel estimation error, and good restoration results are hard to acquire. To solve this problem, some scholars have proposed multichannel deconvolution algorithms using two or more blurred images, as in recent studies [7,8,9,10], and many scholars have proposed deblurring methods combining long- and short-exposure images [11,12,13,14,15,16,17,18,19].
A deconvolutional algorithm based on a framework using short- and long-exposure images was proposed in [11]. Long-exposure images are motion-blurred, while short-exposure images are obtained by limiting the exposure time until the relative motion is smaller than one image element. Because the exposure time is short, short-exposure images are not blurred in the way motion-blurred images are, but their contrast and signal-to-noise ratio (SNR) are poor. Therefore, the image motion of long-exposure blurred images can be corrected using the partial high-frequency information preserved by the short exposure, improving the estimation accuracy of the image motion kernel and mitigating the ill-posed nature of the deconvolution. The method of [11] proved that kernel estimation accuracy could be improved by including short-exposure images in the estimation process as a priori side information. However, the regularization term design in that kernel estimation algorithm was simple: the kernel was acquired directly through model derivation, the ill-posed problem was not mitigated by further constraints, and the estimation result was not accurate enough [16]. In the image restoration part of that algorithm, an improved Richardson–Lucy (RL) method was used, but the ringing effect still could not be well controlled. An image-block restoration algorithm was also explored in [17,18] for the estimation of a space-varying kernel; the authors restored the images via a hierarchical Bayesian approach, but in the kernel estimation the only constraint was that the energy sum of the kernel equals 1, and the model was solved directly by constrained least squares, so the estimation accuracy of the image motion kernel was poor.
Reference [19] proposed an improved RL algorithm to restore the images with short-exposure image information, to solve the deblurring problem of remote sensing images. In comparison with the restoration algorithm in [11], this algorithm improved the image restoration effect. However, the estimation method in the estimation part of the motion kernel was identical to that in [11], so, similarly, the estimation of the kernel was not accurate enough.
The problems in the existing long- and short-exposure algorithms above, such as low kernel estimation accuracy and poor image restoration, were studied in depth in this research. The priori side information was mined using the correlation between the two frames, and a new kernel estimation model and image restoration model were proposed, which relieve the ill-posed problem of the existing algorithms, improve the kernel estimation accuracy, and enhance the image restoration effect. The algorithm can also significantly improve imaging quality in deblurring scenarios that combine the imaging results of different camera lenses, such as on multi-lens smartphones.
Section 2 introduces the main structure of the algorithm. Section 3 introduces the details of the modified algorithm proposed in this paper. Section 4 uses two groups of experiments to verify the effect of the algorithm.

2. Design Principle and Framework

Basic Model of the Proposed Method

In general, imaging models for long- and short-exposure images can be expressed as below [11]:
$$Y_L^{Total} = H_M^{Total} * X^{Total} + N_L^{Total}, \qquad Y_S^{Total} = X^{Total} + N_S^{Total}$$
where "$*$" denotes the discrete convolution operator, $Y_L^{Total}$ is the long-exposure image, $Y_S^{Total}$ is the short-exposure image, $H_M^{Total}$ is the blur kernel of the long-exposure image, $X^{Total}$ is the ideal real image, and $N_L^{Total}$ and $N_S^{Total}$ denote the additive noise incurred in the long- and short-exposure imaging processes, respectively, both assumed to be zero-mean Gaussian by default. In the long- and short-exposure deconvolution framework, the ideal image can generally be expressed as below [11]:
$$X^{Total} = N_d^{Total} + \Delta X^{Total}$$
where $N_d^{Total}$ is the preprocessed image obtained after registration [20], variance balancing, and denoising [21] of the short-exposure image $Y_S^{Total}$; the aim is to enhance the information in the short-exposure image that is consistent with the corresponding part of the ideal image, e.g., the "cartoon" part (areas with large gradients). $\Delta X^{Total}$ is the residual error between $X^{Total}$ and $N_d^{Total}$, which is small enough to be ignored in the kernel estimation process. The deconvolutional algorithm model under the long- and short-exposure framework can then be written in the following form:
$$H_M^{Total} = \arg\min_{H_M^{Total} \in \mathcal{M}} \left\| H^{Total} * N_d^{Total} - Y_L^{Total} \right\|_2^2 \quad (\mathrm{Estimate\ Kernel})$$
$$X_{opt} = \arg\min_{X^{Total} \in \mathcal{X}} \left\| H^{Total} * X_{iter}^{Total} - Y_L^{Total} \right\|_2^2 \quad (\mathrm{Estimate\ Original\ Image})$$
where $\|\cdot\|_2$ is the norm operator, and $\mathcal{M}$, $\mathcal{X}$ denote the pools of allowed $H_M^{Total}$ and $X_{iter}^{Total}$, respectively. $X_{opt}$ is the preliminary restored image, $H^{Total}$ is the iterated kernel in the process, and $X_{iter}^{Total}$ is the iterated image in the process.
Under normal circumstances, this problem is an ill-posed problem, which cannot be directly solved, so it should be constrained by introducing the regularization method.
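As an illustration, the degradation model of Equation (1) can be simulated numerically. This is a minimal sketch, not the paper's code: the 45° linear motion kernel, the noise levels, and the 30% exposure ratio (which mirrors the experimental setup later in the paper) are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

def motion_kernel(length=6):
    """45-degree linear motion blur kernel H_M, normalized to unit energy."""
    k = np.eye(length)  # a diagonal streak approximates 45-degree motion
    return k / k.sum()

def simulate_pair(x, kernel, sigma_long=0.01, exposure=0.3, sigma_short=0.05):
    """Simulate Eq. (1): Y_L = H_M * X + N_L (blurred, low noise) and
    Y_S ~ X + N_S (sharp but dark and noisy short exposure)."""
    y_long = fftconvolve(x, kernel, mode="same") + rng.normal(0.0, sigma_long, x.shape)
    y_short = exposure * x + rng.normal(0.0, sigma_short, x.shape)
    return y_long, y_short

x = rng.random((64, 64))  # stand-in for the ideal image X
y_long, y_short = simulate_pair(x, motion_kernel())
```

The short-exposure frame keeps the scene's edges but loses SNR; the long-exposure frame keeps SNR but loses the edges to the kernel, which is exactly the complementarity the framework exploits.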
As Figure 1 shows, the problem to be solved by Equation (3) is divided into two parts following the steps in the figure: a kernel estimation part and an original image estimation part.
The kernel estimation model in gradient field was obtained from Equation (3):
$$H^{Total} = \arg\min_{h_{iter}^{Total} \in \mathcal{H}} \left\| h_{iter}^{Total} * \nabla N_d^{Total} - \nabla Y_L^{Total} \right\|_2^2 + \lambda\, \phi\!\left(h_{iter}^{Total}\right)$$
where $\lambda$ is the regularization coefficient of kernel estimation, $\mathcal{H}$ denotes the pool of allowed $H^{Total}$, $\phi(h_{iter}^{Total})$ is the regularization term function for kernel estimation, and $h_{iter}^{Total}$ is the iterated kernel in the process (the same as $H^{Total}$). Because of its higher calculation efficiency, the kernel estimation model, Equation (4), was solved by sparsity [22] in the image gradient domain; "$\nabla$" therefore denotes the gradient operator.
The main work of Section 3.1 is to use short-exposure images to reconstruct the kernel estimation model, Equation (4). By introducing the short-exposure images into the iterative image estimation process, with a newly proposed regularization term model and iterative optimization of the data term, the accuracy of the kernel estimation was improved.
In detail, according to the solution framework in [3], first of all, the paper built an image pyramid by processing the blurred and short-exposure images through a stepwise downsampling method. Then, the kernel was estimated from the low-resolution image in the bottom layer of the pyramid. Next, the kernel estimation result of the low-resolution layer was used as the initial term of the kernel in the kernel estimation process of the upper high-resolution layer. Finally, by calculating layer-by-layer, the kernel estimation result of the original image was obtained. Since the estimation process is the same for every layer, Section 3.1 provides an example of the kernel estimation process of any layer to introduce the kernel estimation method in this paper.
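The coarse-to-fine scheme above can be sketched as a skeleton. Here `estimate_kernel_at_level` is only a placeholder for the per-layer solver of Section 3.1, and the 2×2 block-mean downsampling and nearest-neighbour kernel upsampling are illustrative stand-ins for the paper's (unspecified) resampling.

```python
import numpy as np

def downsample(img):
    """2x2 block-mean downsampling (a stand-in for the paper's method)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def build_pyramid(img, levels=4):
    """Stepwise downsampling; the list is returned coarse-to-fine."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(downsample(pyr[-1]))
    return pyr[::-1]

def upsample_kernel(k, size):
    """Nearest-neighbour kernel upsampling, renormalized to unit energy."""
    idx = (np.arange(size) * k.shape[0] // size).astype(int)
    up = k[np.ix_(idx, idx)]
    return up / up.sum()

def estimate_kernel_at_level(level_img, k_init):
    """Placeholder for the Section 3.1 layer solver (identity here)."""
    return k_init

def coarse_to_fine(pyramid, k0):
    """Estimate at the coarsest layer first, then pass each result up
    as the initial kernel of the next (higher-resolution) layer."""
    k = k0
    for level in pyramid:
        k = estimate_kernel_at_level(level, k)
        if level is not pyramid[-1]:
            k = upsample_kernel(k, 2 * k.shape[0] - 1)  # grow support
    return k
```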
  • Part 2: original image estimation using the Part 1 result $H^{Total}$ (introduced in Section 3.2)
The image deblur model in the spatial domain was obtained from Equation (3):
$$X_{opt} = \arg\min_{X_{iter}^{Total} \in \mathcal{X}_{opt}} \left\| H_M^{Total} * X_{iter}^{Total} - Y_L^{Total} \right\|_2^2 + \gamma_X\, R\!\left(X_{iter}^{Total}\right)$$
where $\gamma_X$ is the regularization coefficient for the iterated restored image, $\mathcal{X}_{opt}$ is the pool of allowed $X_{opt}$, $X_{iter}^{Total}$ is the iterated image in the process, and $R(X_{iter}^{Total})$ is the regularization term function for image restoration. In Part 2, we used short-exposure images to design a new regularization term, which uses the short-exposure image and the iterated restored image to jointly constrain the iterative solution process, obtaining a better restoration effect.

3. Main Works: Estimate Blur Kernel and Restore the Image Using Priori Side Information

As shown in the above section, in order to solve ill-conditioned problems, such as Equations (4) and (5), it is necessary to design appropriate models to highlight certain features of the image.
Aimed at the kernel estimation problem in Equation (4), traditional algorithms usually solve the equation with an easily designed regularization term. However, it is difficult to obtain a good solution in this way because of the nonconvexity of the problem. Therefore, in this study, applying expectation maximization, we obtained the kernel estimate by alternating, layer-by-layer calculation of the layer iterative image and the layer iterative kernel. Section 3.1 introduces the details of the kernel estimation method in the selected layer with a priori side information from the short-exposure image.
Aimed at the problem of Equation (5), Section 3.2 proposes a regularization term design method that uses short-exposure image information. The proposed regularization term, which retains information from the short-exposure image, is used to improve the restoration effect by retaining the advantage of the Lp-norm gradient operator, which is closer to the characteristics of natural image distribution.

3.1. Kernel Estimation in This Layer: Based on a Priori Filter of Short-Exposure Images to Build the Model

In this section, the kernel estimation method is introduced for an example layer; the same procedure applies to any layer of the image pyramid. As mentioned above, the layer kernel estimation process applies the expectation maximization algorithm to estimate the layer iterative kernel and the layer iterative image alternately. To improve the accuracy of the layer kernel estimation, we introduced the preprocessed layer short-exposure image as a correction template and used the JBF algorithm to correct the current layer iterative image with this template; the result is the filtered iterative image for the current iteration step. As shown in Section 3.1.2 and Section 3.1.3, this a priori filtered iterative image can be applied to improve the regularization term design and optimize the fidelity term calculation.
For the layer-kernel estimation problem, Equation (4) can be written in the following iterative form:
$$H = \arg\min_{X,\,h} \left\| h_{iter} * X_{iter} - Y_L \right\|_2^2 + \lambda_x T(X_{iter}) + \lambda_h F(h_{iter})$$
where $H$ is the result of the kernel estimation in this layer, $Y_L$ is the blurred image in this layer, and $\arg\min_{X,h}$ represents the alternating iterative minimization over the layer iterative image $X_{iter}$ and the layer iterative kernel $h_{iter}$ in this layer. $\lambda_x$ and $\lambda_h$ are the regularization coefficients for the iterative image estimation and the kernel estimation in each layer. $T(X_{iter})$ and $F(h_{iter})$ are the regularization terms for the layer iterative image estimation and layer kernel estimation models, respectively.
As shown in Figure 2, the layer-kernel estimation process is mainly divided into three steps:
Step 1. Apply a joint bilateral filter (JBF) [23] algorithm to update the example layer’s priori filter image (Section 3.1.1).
Step 2. Apply the updated priori filter image to obtain a layer iterative image (Section 3.1.2).
Step 3. Apply the priori filter image to estimate the layer iterative kernel (Section 3.1.3).
The design requirement of the regularization term $T(X_{iter})$ in the layer iterative image solution process is to improve the similarity between the gradient map of the layer iterative result and the gradient map of the ideal image. In Section 3.1.1, we propose a layer iterative image estimation model with short-exposure images, optimizing the design of the regularization and data terms and improving the estimation accuracy of the iterative images.
The design of the regularization term $F(h_{iter})$ in the layer iterative kernel solution process usually needs to consider the features of the kernel itself. In this paper, the common regularization term $\|h_{iter}\|_2^2$ is used to constrain the iterative kernel solution; the calculation is simple, and the iterative result shows better overfitting control.
Therefore, by the improvements above, with the appropriate regularization term and continuously iterating alternately to convergence, the optimal solution of the iterative layer kernel H can be obtained.

3.1.1. Calculate the Priori Filter Iterative Image

In this example layer, the gradient of the priori filter iterative image, $\nabla Y_{JBL}^{iter}$, is obtained by joint bilateral filtering of the iterative image gradient $\nabla X_{iter}$ with the preprocessed image gradient $\nabla N_d$:
$$\nabla Y_{JBL}^{iter} = \frac{1}{w_p} \sum_{p \in \Omega,\, q \in \Phi} (\nabla X_{iter})_q\; f\!\left(\left|(\nabla X_{iter})_p - (\nabla X_{iter})_q\right|\right) g\!\left(\left|(\nabla N_d)_p - (\nabla N_d)_q\right|\right)$$
$$w_p = \sum_{p \in \Omega,\, q \in \Phi} f\!\left(\left|(\nabla X_{iter})_p - (\nabla X_{iter})_q\right|\right) g\!\left(\left|(\nabla N_d)_p - (\nabla N_d)_q\right|\right)$$
$$f(\cdot),\, g(\cdot) = \exp\!\left(-\left[(m_x - n_x)^2 + (m_y - n_y)^2\right] / 2\delta^2\right)$$
where $w_p$ is the normalization term; $f(\cdot)$ and $g(\cdot)$ are the spatial distance weight template function and the similarity weight template function, respectively, both Gaussian functions with default parameter $\delta$ in this algorithm. $\Phi$ is the block filter area (for example, $3 \times 3$), and $p$ and $q$ are pixel coordinates: $p$ is the center pixel of the block filter area, $q$ is a neighboring pixel of $p$ within the block, and $\Omega$ is the entire image. The values of $f(\cdot)$ and $g(\cdot)$ both lie in $[0, 1]$, so the result of the Hadamard product can be very small; using it directly would cause severe distortion, so it must be multiplied by a normalization factor, namely the inverse of the sum of all elements of the Hadamard product of the spatial distance weight and similarity weight matrices. $(\nabla X_{iter})_p$ and $(\nabla X_{iter})_q$ are the pixel gradient values of the iterative image, and $(\nabla N_d)_p$ and $(\nabla N_d)_q$ are the pixel gradient values at the corresponding positions in $N_d$.
As shown in Figure 3, assume there is a template matrix (a) and a target matrix (b); applying the same parameter settings yields the bilateral filtering result (c) and the joint bilateral filtering result (d). It is evident that the distribution of (d) is closer to template (a) than (c) is, while (d) retains more detail from (b) than template (a) does. This is determined by the principle of the joint bilateral algorithm. It can therefore be concluded that using $N_d$ images, whose distribution is closer to the real one, in the calculation brings the distribution of the iterative results closer to the real distribution. The main reason the $N_d$ image is not used directly as the priori filter image is that the signal-to-noise ratio of the short-exposure image is low, and detail loss is large in the smooth parts of the image intensity transitions. Using the joint bilateral filtering results in the iteration therefore provides a good correction effect on both the texture and edge parts of the image.
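A joint bilateral filter in this spirit can be sketched as below. This is a generic, illustrative implementation: `target` plays the role of the iterative image (or its gradient) and `guide` the preprocessed short-exposure image $N_d$; the parameter values are assumptions, not the paper's settings.

```python
import numpy as np

def joint_bilateral_filter(target, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Joint bilateral filter: spatial weights f(.) from pixel distance,
    range weights g(.) from the *guide* image's intensity differences,
    normalized by 1/w_p as in Eq. (7)."""
    H, W = target.shape
    out = np.zeros_like(target, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))  # f(.)
    pad_t = np.pad(target, radius, mode="edge")
    pad_g = np.pad(guide, radius, mode="edge")
    for i in range(H):
        for j in range(W):
            patch_t = pad_t[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            patch_g = pad_g[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-(patch_g - guide[i, j])**2 / (2 * sigma_r**2))  # g(.)
            w = spatial * rng_w
            out[i, j] = (w * patch_t).sum() / w.sum()  # 1/w_p normalization
    return out
```

Because the range weights come from the guide, edges present in $N_d$ survive the smoothing of the target, which is exactly the correction effect described above.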
Figure 3. The matrix of examples: (a) template; (b) target; (c) bilateral filter; and (d) joint bilateral filter.

3.1.2. Iterative Image Estimation

The main concept in this section is that the priori filter non-dimensional Gaussianity measure regularization term, designed to constrain the layer iterative image estimation process, enables iterative updating of the fidelity term using the priori filtering term during kernel estimation. Compared with common kernel estimation algorithms from the literature [11,15,18], this algorithm constrains the iteration process through the partial high-frequency information provided by short-exposure images, thus improving the kernel estimation accuracy.
The iterative process is divided into two steps:
Step 1: solve the iterative image model with BID-PFNGM regularization term and updated data item that introduces a priori filter image for reconstruction.
The BID-PFNGM is as below:
$$T(X_{iter}) = \frac{\left| \nabla X_{iter}^{pixel} \right|}{\left| \nabla X_{iter}^{pixel} \right| + E\!\left[ \nabla Y_{JBL}^{iter} \right]}$$
where $E[\nabla Y_{JBL}^{iter}]$ denotes the mathematical expectation, whose value is the mean of the gradients over all pixels of the priori filter image. As above, $Y_{JBL}^{iter}$ is the priori filter term, with initial value $N_d$, and $\nabla X_{iter}^{pixel}$ is the gradient value at the selected pixel location.
As the constraint term, BID-PFNGM judges which gradient information should be reserved or attenuated: if the gradient of a pixel is greater than the global mean of the relative gradient, the pixel is reserved; otherwise it is abandoned. Through repeated updating of the threshold and iteration of Equation (9), only a small portion of the pixels is reserved in the end, which improves the kernel estimation accuracy while ensuring sparsity.
The original NGM from [24] was recalculated in each iteration based on the previous result. The blur kernel used at the beginning is an artificial initial value, so the initial iterative image deviates greatly from the final result. Although the final iterative result converges, if the mean value is adjusted in each iteration using the more reliable pixel gradient distribution provided by the short-exposure image, and the priori distribution term is calculated together with the fidelity term (as shown in Equation (9)), the final blur kernel estimate is better. As described in Section 3.1.1, joint bilateral filtering makes the corrected image carry the characteristics of both the template image and the image being corrected; if the template image has high reliability, the corrected image is more credible than the uncorrected one.
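The reserve-or-attenuate behaviour of the measure can be sketched directly. This is a minimal stand-in (simple `np.gradient` magnitudes, hypothetical helper names), not the paper's exact computation:

```python
import numpy as np

def grad_mag(img):
    """Gradient magnitude via central differences."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def bid_pfngm(x_iter, y_jbl):
    """Per-pixel measure T = |grad x| / (|grad x| + E[grad y_jbl]).
    Values near 1 mark gradients above the prior-filtered mean (kept);
    values near 0 mark weak gradients (attenuated)."""
    g = grad_mag(x_iter)
    thresh = grad_mag(y_jbl).mean()  # E[grad Y_JBL]
    return g / (g + thresh + 1e-12)

def keep_mask(x_iter, y_jbl):
    """Binary selection: keep pixels whose gradient exceeds the prior mean."""
    return grad_mag(x_iter) > grad_mag(y_jbl).mean()
```

On a step-edge image, only the edge pixels exceed the global mean and survive, which is the sparsity behaviour the text describes.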
Step 2: update the iterative image model.
Applying the priori filter image $\nabla Y_{JBL}$ and the iterative image $X_{iter}$ to reconstruct the iterative image model of Equation (6) in the gradient domain, it can be written as:
$$X_{iter} = \arg\min_{X_{iter} \in \mathcal{X}_{iter}} \left\| h_{iter} * \nabla X_{iter} - \nabla Y_{JBL}^{iter} \right\|_2^2 + \lambda_x \frac{\left| \nabla X_{iter}^{pixel} \right|}{\left| \nabla X_{iter}^{pixel} \right| + E\!\left[ \nabla Y_{JBL}^{iter} \right]}$$
As in [24], $\lambda_x$ is a fixed value determined by experimental experience, and $\mathcal{X}_{iter}$ denotes the pool of allowed $X_{iter}$. As mentioned before, the physical meaning of this regularization term is to keep the portion of the result that is relatively large compared with the mean of the priori filtered image. If the selection weight is too small, the final result will contain more noise; if it is too large, image information will be lost. Different images therefore need different weight parameters according to their characteristics (sharpness, texture, etc.), and the selection is determined mainly by experimental experience.
To solve this problem, we used the iterative shrinkage-thresholding (IST) algorithm, which has high calculation accuracy and a good restoration effect for non-uniform blur kernels [25]. After the calculation, the iterative image solution $X_{iter}$ is obtained [24].
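The IST update can be sketched on a generic L1-regularized least-squares problem. The paper applies the same shrinkage idea to the gradient-domain model of Equation (9); this plain matrix form is only illustrative:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (the 'shrinkage' step)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ist(A, y, lam=0.1, iters=200):
    """IST for min_x 0.5*||A x - y||^2 + lam*||x||_1:
    a gradient step on the data term, then soft-thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz const of grad
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x
```

With `A` the identity, the fixed point is simply the soft-thresholded data, which makes the shrinkage behaviour easy to verify.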

3.1.3. Iterative Kernel Estimation Algorithm

The iterated kernel $h_{iter}$ can be obtained by updating the kernel iteration model [26] with the priori filter term $\nabla Y_{JBL}$ and the iterative image $\nabla X_{iter}$:
$$h_{iter} = \arg\min_{h_{temp} \in \mathcal{H}} \left\| h_{temp} * \nabla X_{iter} - \nabla Y_{JBL}^{iter} \right\|_2^2 + \lambda_h \left\| h_{temp} \right\|_2^2 \quad \mathrm{s.t.}\ \sum h_{temp}^{pixel} = 1 \ \text{and} \ h_{temp}^{pixel} \geq 0$$
where $\lambda_h$ is a hyperparameter of the regularization term, $\mathcal{H}$ denotes the pool of allowed $h$, $h_{temp}$ is the temporary iterate of $h_{iter}$, and the constraints are that the energy sum of all pixels of the kernel $h_{temp}^{pixel}$ is 1 and that the energy of each point in the kernel is not less than 0. We adopt an algorithm similar to [24], obtaining the iterative kernel with the gradient projection method [26]. It is worth noting that the algorithm uses the Dirichlet distribution to approximate the true distribution of the iterative kernel [27]. From a statistical perspective, solving the iterative kernel can be regarded as finding the distribution of $h_{iter}$ that maximizes the conditional probability $P(h \mid \nabla Y_{JBL})$. Because this model is very difficult to solve directly, the probability of a certain approximate Dirichlet distribution $P(D_{Dirichlet})$ is used to approximate $P(h \mid \nabla Y_{JBL})$, where $D_{Dirichlet}$ represents the Dirichlet distribution to be determined. The Kullback–Leibler (KL) divergence is used to judge the similarity of the two distributions. The flow of the method is shown in Table 1.
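A projected-gradient treatment of the constraints can be sketched on 1-D signals. This is only a stand-in for the paper's solver: the clip-and-renormalize step is a heuristic projection (not the exact simplex projection, and not the Dirichlet approximation described above), and the step size is a conservative assumption.

```python
import numpy as np

def project_kernel(k):
    """Heuristic projection onto the Eq. (11) constraint set:
    clip to nonnegative, then renormalize to unit sum."""
    k = np.maximum(k, 0.0)
    s = k.sum()
    return k / s if s > 0 else np.full_like(k, 1.0 / k.size)

def estimate_kernel_1d(x, y, ksize=3, iters=2000, lam_h=0.0):
    """Projected-gradient sketch of the kernel model on 1-D signals:
    min_k ||x * k - y||^2 + lam_h*||k||^2  s.t.  k >= 0, sum(k) = 1."""
    step = 1.0 / (2 * ksize * np.dot(x, x))  # conservative step size
    k = np.full(ksize, 1.0 / ksize)
    for _ in range(iters):
        resid = np.convolve(x, k) - y  # 'full' convolution
        # d/dk_j of the data term: correlate the residual with x
        grad = 2 * np.correlate(resid, x, mode="valid") + 2 * lam_h * k
        k = project_kernel(k - step * grad)
    return k
```

A feasible true kernel (nonnegative, unit sum) is a fixed point of this iteration, so on noiseless data the estimate converges back to it.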
The proposed algorithm was compared with the algorithms in [11,24] in terms of kernel estimation, to verify its estimation accuracy. (All results for [24,27] were calculated with the code from https://sites.google.com/site/fbdhsgp, accessed on 6 December 2021, under the GNU 3.0 license; the result for [11] was calculated with the code from https://github.com/jtyuan/deblur, accessed on 6 December 2021.) In the simulation of the long-exposure image, the original image was blurred using a kernel with a size of 6 pixels in the 45° direction. For the short-exposure image, the exposure of the original image was reduced to 30% of the original in Photoshop, and Gaussian noise with a mean of 0 and variance of 0.05 was added to simulate the low-SNR characteristic of the short exposure.
Figure 4 shows the kernel estimation results. It can be directly observed that the energy distribution of the kernel estimated by the proposed algorithm was closer to the original kernel. The results were evaluated using the quantitative indexes in Table 2; the comparative evaluation used two objective criteria, the peak signal-to-noise ratio (PSNR, unit: dB) and the kernel structural similarity (SSIM). Figure 4e,f show the normalized PSNR and SSIM comparisons of the blur kernel estimates during the estimation process. In the image pyramid iteration, due to down-sampling, the initial image was small and the values were high, so the overall trend was downward; this is an inevitable result of resolution normalization. However, it did not affect the estimation performance of this algorithm, which was better than the results in [24] throughout the iterative process. The kernel estimation based on BID-PFNGM was thus better than the kernel estimation based on NGM. According to Table 2, the proposed algorithm performed better on both indexes, indicating that its estimation result was superior to the algorithms in [11,24].

3.2. Image Deblur: Deconvolutional Restoration Based on Relative Gradient Operator

In the convolutional image restoration process, [17] used the global gradient function of the to-be-restored image as the constraint term; affected by the staircase effect, the texture in the image could not be well restored. References [11,18] used the RL algorithm to restore the image and obtain the original solution, but the RL algorithm has a poor image restoration effect, which also degraded the final result.
To solve the above problems, in this paper we put forward an Lp-norm constraint function called the relative gradient operator (RGO) to construct the regularization. A regularization term constrained by the Lp-norm has a good edge preservation effect [28,29]. When the RGO is applied to the image, besides the global gradient information, it contains the relative difference information between the iterative deblurred image and the short-exposure preprocessed image. Compared with the TV operator, the texture information of the restoration result is clearer and the effect better. The following first gives the definition and properties of the RGO.
Any bounded Lipschitz region $\Omega$ in a two-dimensional image is taken as the domain of definition [30], and the RGO can be written as follows:
$$CD(N_d, X_R^{iter}) = \sqrt[p]{\int_\Omega \left| \nabla N_d^{\,j} - \nabla X_R^{iter,\,j-1} \right|^p d\Omega} = \sqrt[p]{\int_\Omega \left| \left( \nabla N_d^{\,j} - \nabla X_R^{iter,\,j} \right) + \left( \nabla X_R^{iter,\,j} - \nabla X_R^{iter,\,j-1} \right) \right|^p d\Omega} = \sqrt[p]{\int_\Omega \left| \Delta \nabla X_R^{iter,\,j} - \nabla X_R^{iter,\,j} + \nabla N_d^{\,j} \right|^p d\Omega}$$
where $\Delta \nabla X_R^{iter,\,j} = \nabla X_R^{iter,\,j} - \nabla X_R^{iter,\,j-1}$ is the inter-iteration difference, $j$ is the location index of the pixel in $\Omega$, and $p$ is the power exponent used throughout Section 3.2. Observing the above equation, the RGO must be integrable on the two-dimensional bounded Lipschitz region (because this region is a two-dimensional real field and $\nabla N_d$ is a constant, the RGO has an upper bound on this region, satisfying integrability) [31]; that is, it satisfies $\int_\Omega \left| \Delta \nabla X_R^{iter,\,j} - \nabla X_R^{iter,\,j} + \nabla N_d^{\,j} \right|^p d\Omega < \infty$, and the RGO has homogeneity.
Therefore, the RGO is a function defined on the domain $\Omega$ belonging to the $L^p$ space: $CD \in L^p(\Omega)$, where $L^p(\Omega) = \{ u \mid \int_\Omega |u(x)|^p dx < \infty \}$ and $u$ is any function satisfying this condition [31].
From the above equations, the relative gradient operator can be expressed as the sum of a difference operator and a gradient operator; that is, the RGO contains both the gradient information of the image itself and the difference information between the iterative deblurred image and the priori image $N_d$.
So, it can be used to rewrite Equation (5) into the following form:
$$X_{opt} = \arg\min_{X_R^{iter} \in \mathcal{X}} \left\| H * X_{temp} - Y_L \right\|_2^2 + \gamma_R \left\| CD(X_R^{iter}, N_d) \right\|_p$$
where $\gamma_R$ is a hyperparameter of the regularization term, $\mathcal{X}$ denotes the pool of allowed $X_R^{iter}$, and $X_{temp}$ is the temporary iterate of $X_{opt}$. Equation (12) is the model of an objective function constrained by the $p$-norm relative gradient operator regularization term. It generally cannot be solved directly by gradient descent-type methods [32], so smooth approximation functions are usually used, such as the Huber function [33]:
$$\mu_\varepsilon(v) = \begin{cases} \dfrac{p}{2}\varepsilon^{p-2}v^2 + \dfrac{2-p}{2}\varepsilon^p & \text{if } |v| < \varepsilon \\ |v|^p & \text{if } |v| \geq \varepsilon \end{cases}$$
where $v$ is the independent variable and $\varepsilon$ is the limiting parameter, which determines the similarity between the Huber function and the original function.
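The smooth approximation of Equation (13) can be written out directly; a short sketch with illustrative parameter values (`p=0.8`, `eps=1e-3` are assumptions, not the paper's settings):

```python
import numpy as np

def huber_lp(v, p=0.8, eps=1e-3):
    """Huber-type smooth approximation of |v|^p from Eq. (13):
    quadratic inside [-eps, eps], exactly |v|^p outside.
    The constant (2-p)/2 * eps^p makes the two branches C1-continuous
    at |v| = eps."""
    v = np.asarray(v, dtype=float)
    quad = (p / 2.0) * eps ** (p - 2) * v ** 2 + ((2.0 - p) / 2.0) * eps ** p
    return np.where(np.abs(v) < eps, quad, np.abs(v) ** p)
```

At $|v| = \varepsilon$ both value and derivative of the two branches agree ($\varepsilon^p$ and $p\varepsilon^{p-1}$), which is what makes gradient-based solvers applicable to Equation (14).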
Equation (12) is solved by approximately expressing it as a first-order continuously differentiable 2-norm function:
$$X_{opt} = \arg\min_{X_R^{iter} \in \mathcal{X}} \left\| H * X_{temp} - Y_L \right\|_2^2 + \gamma_R\, \mu_\varepsilon\!\left( CD(X_R^{iter}) \right)$$
As $N_d$ is a constant, $CD(X_R^{iter}, N_d)$ can be written as a function $CD(X_R^{iter})$ of $X_R^{iter}$ alone. For any $p > 0$, Equation (14) is first-order continuously differentiable, and when $\varepsilon$ is sufficiently small, the solutions of Equations (12) and (14) are approximately equivalent. When Equation (14) reaches its minimum, the following is obtained through differentiation:
H^T H X_temp − H^T Y_L + γ_R M C_D(X_R^iter) = 0
where M is a diagonal matrix of the form diag( min( p ε^(p−2), p |C_D(X_R^iter)|^(p−2) ) ).
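As a concrete sketch, the Huber-type smoothing of |v|^p and the diagonal weight that forms M can be implemented directly; the values of p and ε below are illustrative, not the paper's settings:

```python
import numpy as np

def huber_p(v, p=0.8, eps=1e-3):
    """C^1 smooth approximation of |v|**p: quadratic for |v| < eps,
    exactly |v|**p elsewhere, with value and slope matching at |v| = eps."""
    v = np.asarray(v, dtype=float)
    inner = (p / 2.0) * eps ** (p - 2.0) * v ** 2 + ((2.0 - p) / 2.0) * eps ** p
    return np.where(np.abs(v) < eps, inner, np.abs(v) ** p)

def huber_weight(v, p=0.8, eps=1e-3):
    """Derivative weight min(p*eps**(p-2), p*|v|**(p-2)); these are the
    diagonal entries of the matrix M in the stationarity condition."""
    v = np.asarray(v, dtype=float)
    return np.minimum(p * eps ** (p - 2.0), p * np.abs(v) ** (p - 2.0))
```

The quadratic branch keeps the objective differentiable near zero, which is what allows a gradient-based inner solver to be applied at all.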
With the introduction of the auxiliary function T(a) [34], and based on the idea of majorization–minimization (MM) [35], this problem can be solved by the alternating direction method of multipliers (ADMM) [36] to obtain the initial solution X_opt.
After the initial solution was obtained, the high-frequency detail information of the image was extracted to enhance its texture presentation, using a method similar to that in [18].
First, the residual error image ΔY_L = Y_L − H N_d was used as the input, and the residual solution X_RRL and the gain-control solution X_gcl were acquired through the residual Richardson–Lucy (RRL) [37] and gain-control Richardson–Lucy (GCRL) [38] algorithms, respectively. Next, joint bilateral filtering of X_RRL guided by the initial solution X_opt produced the smooth image X̄, and the detail image was computed as X_detail = X_gcl − X̄. Finally, the initial and detail solutions were combined (X = X_opt + X_detail) to obtain the final restored image X. The flow of the whole algorithm in Section 3.2 is shown in Table 3.
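The detail-fusion step can be sketched as follows. This is a simplified illustration, not the authors' implementation: the joint bilateral filter is replaced by a plain Gaussian blur (an assumption), and X_RRL and X_gcl are taken as given inputs, whereas in the paper they come from the RRL and GCRL algorithms.

```python
import numpy as np

def cconv(x, k):
    """Circular 2-D convolution via FFT (assumes periodic boundaries)."""
    kp = np.zeros_like(x)
    kh, kw = k.shape
    kp[:kh, :kw] = k
    kp = np.roll(kp, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(kp)))

def gaussian_blur(x, sigma):
    """Gaussian smoothing, used here as a stand-in for the joint
    bilateral filter of the paper (a simplifying assumption)."""
    r = int(3 * sigma)
    g = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    k = np.outer(g, g)
    return cconv(x, k / k.sum())

def fuse_details(x_opt, x_gcl, x_rrl, sigma=2.0):
    """Detail-layer fusion: X_detail = X_gcl - X_bar, X = X_opt + X_detail."""
    x_bar = gaussian_blur(x_rrl, sigma)   # smooth image X_bar
    x_detail = x_gcl - x_bar              # high-frequency detail layer
    return x_opt + x_detail
```

The design point is that only the high-frequency residue of the gain-controlled solution is transferred, so the edge structure of the initial solution X_opt is preserved while fine textures are restored.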
As shown in Table 4 and Figure 5, with the same kernel, the PSNR and SSIM indicators were used to compare the restoration results of the proposed algorithm and the algorithm in [27]. Table 4 compares the parameters of the final restoration results, and Figure 5 compares the relevant parameters of the iteratively restored image at each iteration: panel (a) shows the difference in PSNR of the restored iterative images, and panel (b) compares their SSIM. Both results were better than those in [27], showing that the proposed algorithm works well.

4. Results and Discussion

In this paper, three groups of experiments were used to examine the performance of the proposed algorithm. The first group comprised simulation experiments, which examined the restoration effect by measuring full-reference quality indices against reference images. The second group, the live-shot group, verified the practicability of the algorithm on real photographs; it consisted of experiment 1, in which the algorithms in [11,17] formed the control group, and experiment 2, in which the algorithms in [11,27] formed the control group. The third group comprised modulation transfer function (MTF) measurement experiments [39], in which the MTF curve was measured on a restored simulation image.

4.1. Group 1: Simulation Experiment

Experiment 1: the first simulation experiment used remote sensing images of 512 × 512 pixels, and the long-exposure images were blurred manually. The short-exposure image was simulated by reducing the exposure of the original image in Photoshop to 30% of the original, and Gaussian noise with a mean of 0 and a variance of 0.05 was added to simulate the low signal-to-noise ratio of a short-exposure image.
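The degradation used in this simulation can be reproduced approximately as follows; intensity scaling stands in for the Photoshop exposure adjustment (an assumption), and the blur kernel is an arbitrary example rather than the one used in the paper.

```python
import numpy as np

def simulate_pair(img, kernel, exposure=0.30, noise_var=0.05, seed=0):
    """Simulate a long/short exposure pair as in the experiment:
    long exposure  = clean image blurred by the motion kernel;
    short exposure = intensities scaled to `exposure` of the original,
                     plus zero-mean Gaussian noise of variance `noise_var`.
    Intensity scaling is our stand-in for the Photoshop exposure edit."""
    rng = np.random.default_rng(seed)
    # blur via FFT-based circular convolution with a normalized kernel
    kp = np.zeros_like(img)
    kh, kw = kernel.shape
    kp[:kh, :kw] = kernel / kernel.sum()
    kp = np.roll(kp, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    long_exp = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kp)))
    short_exp = exposure * img + rng.normal(0.0, np.sqrt(noise_var), img.shape)
    return long_exp, np.clip(short_exp, 0.0, 1.0)
```

Note that the noise variance of 0.05 is large relative to the 30% intensity range, which is exactly the low-SNR regime the short-exposure branch of the algorithm is designed to tolerate.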
The experimental results are shown in Figure 6. Direct observation of the restoration results shows that the methods in [11,27] were not accurate enough in kernel estimation, whereas the proposed algorithm performed better in both edge preservation and texture restoration. Since this was a simulation group with an original reference image, the full-reference quality indices SSIM and PSNR were used to measure the quality of the restored images. As shown in Table 5, the proposed algorithm performed well in both metrics.
Experiment 2: the second simulation experiment used Levin's image library, and the long-exposure images were blurred manually. The short-exposure images were again simulated by reducing the exposure to 30% of the original in Photoshop and adding Gaussian noise with a mean of 0 and a variance of 0.05 to simulate the low signal-to-noise ratio of a short-exposure image. The main purpose of this experiment was to verify the generalizability of the algorithm, so eight kernels were used to blur all four images of Levin's image library. PSNR and SSIM were selected as the test metrics to measure the overall performance of each algorithm on each image: for each image, the PSNR and SSIM values of the images recovered under the eight kernels were summed individually, and the totals were used as the final evaluation results; the PSNR values were computed after histogram matching with the original image.
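The summed-metric aggregation can be sketched as below; `psnr` is the standard definition, and the summation mirrors how Tables 6 and 7 aggregate the eight kernels per image (the histogram-matching step is omitted here for brevity).

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def total_score(ref, restored_per_kernel, metric=psnr):
    """Sum the per-kernel scores for one image, as in Tables 6 and 7."""
    return sum(metric(ref, r) for r in restored_per_kernel)
```

Summing rather than averaging leaves the ranking of algorithms unchanged, but makes a single badly failed kernel (such as the [11] result on Image 4) clearly visible in the total.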
Figure 7 shows (a) the original images and (b) the self-designed simulation blur kernels. As shown in Table 6 and Table 7, the proposed algorithm performs well overall in both the PSNR and SSIM metrics; combined with the recovery effect shown in Figure 6, it can be concluded that, in most scenes, the proposed algorithm outperforms the comparison algorithms in both visual perception and objective metric measurements.

4.2. Group 2: Live Shot Group, Experiment 1

This group of experiments comprised real-shot experiments, so subjective evaluation was used to assess the restoration quality of the images. In addition, three no-reference image quality assessment (IQA) indicators were used to evaluate restoration quality from another perspective: the natural image quality evaluator (NIQE; the smaller the value, the closer the result is to a natural image and the better the quality) [40], the average gradient (the larger the value, the clearer the texture), and the cumulative probability of blur detection (CPBD; the larger the value, the sharper the image) [41].
The live-shot data in [17] were used for the first control group, and the results were compared with the processing results of [11,17]. The live-shot image is from [17], page 4, with a size of 498 × 498 pixels; the copyright belongs to the original author, Tallón.
The results are shown in Figure 8 and Table 8. Direct observation shows that the methods in [11,17] were not accurate enough in kernel estimation: the yellow spine on the left of the figure remains fuzzy, as does the pattern edge on the cup, and the characters in the upper right corner can hardly be identified. The proposed algorithm performed better in both edge preservation and texture restoration. Subjective observation, combined with the IQA indices, shows that the recovery performance of the proposed algorithm was better: the restored texture is clearer, high-gradient regions are sharper, and the visual effect is better.

4.3. Group 2: Live Shot Group Experiment 2

The second experimental picture was taken by an industrial camera (The Imaging Source DFK290). To test the restoration of a natural scene under poor imaging conditions, the exposure time and ISO of the long-exposure image were set to 1/47 s and 100, respectively. A single CMOS pixel measured 3 μm on each side, the focal length was 50 mm, and the object distance was 5 m. The image resolution was 1920 × 1080, and a 400 × 360 region was cropped for restoration. The exposure time and ISO of the short-exposure image were 1/1600 s and 400, respectively. From the camera and lens parameters, the upper speed limit at which no relative motion blur is generated within the short shutter time was determined to be about 1.7 m/s; in other words, no motion-induced fuzziness is produced if the relative motion speed stays below this limit. The test showed that the relative motion speed in the experiment stayed within this limit, so the selected shutter time was suitable and conformed to the short-exposure imaging requirement. The motion blur in the long-exposure image was large in this experiment, and the algorithm in [11] follows different principles from the proposed restoration algorithm, so an obvious relative position difference exists in its restoration result.
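The quoted speed limit can be checked with simple imaging geometry. The per-pixel figure below follows exactly from the stated camera parameters; the blur tolerance in pixels is our assumption, introduced only to relate it to the paper's ~1.7 m/s.

```python
# Back-of-envelope check of the speed limit for the short exposure.
# One CMOS pixel maps to pixel_pitch * distance / focal at the object
# plane; moving at most that far during the 1/1600 s shutter gives the
# speed for one pixel of motion blur.
pixel_pitch = 3e-6      # m (3 um CMOS pixel)
focal = 50e-3           # m (50 mm lens)
distance = 5.0          # m (object distance)
t_short = 1.0 / 1600.0  # s (short exposure time)

ground_sample = pixel_pitch * distance / focal   # 0.3 mm per pixel at the object
v_one_pixel = ground_sample / t_short            # speed for 1 pixel of blur
blur_tolerance_px = 1.7 / v_one_pixel            # pixels implied by 1.7 m/s
```

Here `v_one_pixel` comes to 0.48 m/s, so the paper's ~1.7 m/s limit corresponds to tolerating roughly 3–4 pixels of motion blur within the short exposure.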
The IQA results of this experiment are given in Table 9.
The experimental results are shown in Figure 9. The image restored by the algorithm in [11] is visibly over-smoothed, with a great loss of texture detail. The result of [27] is somewhat improved, but the characters on the bottle still cannot be identified. Subjective observation, combined with the IQA indices, shows that the recovery performance of the proposed algorithm was better: its restored image presents clearer edges than the previous two algorithms (for example, the character outlines on the photographed object are sharper), contains more texture detail, looks more natural without obvious ringing effects, and is richer in color.

4.4. Group 3: MTF Measurement Experiment

The sharpness of the image restored by the proposed algorithm was evaluated through the MTF in this group of experiments, measured according to ISO 12233 [39]. A blade-edge image (420 × 650) was selected as the original image, and the motion blur of a long-exposure image was simulated with a 45° kernel of 6 pixels. The short-exposure image was simulated by reducing the exposure of the original image in Photoshop: the input–output ratio of the brightness curve was adjusted to 36:89, the exposure was reduced to 30% of the original, and Gaussian noise with a mean of 0 and a variance of 0.05 was added to simulate the low SNR of a short-exposure image. The MTF was calculated with the slanted-edge SFR program (SFR_1.41) compiled by the MITRE Corporation in 2006.
The experimental results are displayed in Figure 10. The main goal of this experiment was to evaluate the restoration effect by measuring the MTF of the deblurred image. The test program determines the MTF-related values from the smoothness of the transition across the edge region in the longitudinal direction. Comparing the restored image (d) with the original image (c), the edge transition of the deblurred image is more abrupt, while the original image is smoother and fuzzier; accordingly, in the MTF comparison chart, the MTF of the image deblurred by the proposed algorithm is higher than that of the original image. The results show that the proposed algorithm improves edge sharpness effectively, with an excellent restoration effect that meets the design expectation.
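The edge-based MTF measurement can be sketched as follows. This is a minimal version of the ISO 12233 procedure, omitting the sub-pixel slant binning that the SFR program performs: rows are averaged into an edge-spread function, differentiated into a line-spread function, and Fourier-transformed.

```python
import numpy as np

def edge_mtf(edge_img):
    """Simplified slanted-edge MTF estimate (ISO 12233 style, without
    sub-pixel slant binning): ESF -> LSF -> windowed |FFT|, normalized."""
    esf = edge_img.mean(axis=0)        # edge spread function (row average)
    lsf = np.diff(esf)                 # line spread function
    lsf = lsf * np.hanning(lsf.size)   # window to reduce spectral leakage
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                # normalize so MTF(0) = 1
```

An ideal step edge yields a flat MTF of 1 at all frequencies, while a blurred edge yields an MTF that falls off toward the Nyquist frequency, which is the behavior compared in Figure 10e.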

5. Conclusions

The advantages and disadvantages of single-image blind deconvolution and multi-image deconvolution algorithms were first analyzed, and the merits of long- and short-exposure deblurring and blind deblurring were then combined into a deconvolutional deblurring solution of practical value. Through the joint kernel estimation from a short-exposure and a long-exposure image, the priori side information provided by the short-exposure image was fully utilized, and the morphology of the kernel was accurately estimated in the estimation part; in the image restoration part, a new regularization term was designed using the short-exposure image, the image details were enriched by introducing detail-image information, and the image edges were kept sharp, effectively improving the final restoration. The experimental results show that the proposed algorithm basically meets the design expectation and clearly improves on the restoration effect of existing algorithms.

Author Contributions

Conceptualization, Z.T., Y.B. and Q.L.; methodology, M.H.; software, Y.B.; investigation, Z.T., M.H. and Q.L.; resources, Z.T.; data curation, Y.B.; writing—original draft preparation, Y.B.; writing—review and editing, Y.B. and Z.T.; supervision, Z.T., M.H. and Q.L.; project administration, Z.T., M.H. and Q.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (no. 61635002); National Defense Science and Technology Innovation fund of the Chinese Academy of Sciences (no. CXJJ-20S016); Science Technology Foundation Strengthening Field fund project (no. 2020-JCJQ-JJ-492).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 4th ed.; Pearson: London, UK, 2018.
2. Catanzaro, B.E.; Thomas, J.A.; Cohen, E.J. Comparison of full-aperture interferometry to subaperture stitched interferometry for a large-diameter fast mirror. Optomech. Des. Eng. 2001, 4444, 224–237.
3. Freeman, W.T.; Fergus, R.; Singh, B.; Hertzmann, A.; Roweis, S.T. Removing Camera Shake from a Single Photograph Using Statistics of a Natural Image. U.S. Patent 7,616,826, 10 November 2009.
4. Bishop, T.E.; Babacan, S.D.; Amizic, B.; Katsaggelos, A.K.; Chan, T.; Molina, R. Blind image deconvolution: Problem formulation and existing approaches. In Blind Image Deconvolution: Theory and Applications; CRC Press: Boca Raton, FL, USA, 2007; pp. 1–41.
5. Qi, S.; Jia, J.; Agarwala, A. High-quality motion deblurring from a single image. ACM Trans. Graph. 2008, 27, 1–10.
6. Pan, Z.; Tan, Z.; Lv, Q.B. Improved joint deblurring algorithm in Fourier domain and wavelet domain. Acta Photonica Sin. 2017, 46, 171–178. (In Chinese)
7. Yang, L.; Ji, H. A Variational EM Framework With Adaptive Edge Selection for Blind Motion Deblurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019.
8. Li, W.; Zhang, J.; Dai, Q. Exploring aligned complementary image pair for blind motion deblurring. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011.
9. Li, Z.; Deshpande, A.; Xin, C. Denoising vs. deblurring: HDR imaging techniques using moving cameras. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010.
10. Šroubek, F.; Flusser, J. Resolution enhancement via probabilistic deconvolution of multiple degraded images. Pattern Recognit. Lett. 2005, 27, 287–293.
11. Yuan, L.; Sun, J.; Quan, L.; Shum, H.Y. Image deblurring with blurred/noisy image pairs. In ACM SIGGRAPH 2007 Papers, Proceedings of SIGGRAPH07, San Diego, CA, USA, 5–9 August 2007; Association for Computing Machinery: New York, NY, USA, 2007.
12. Bentum, M.J.; Arendse, R.G.; Slump, C.H.; Mistretta, C.A.; Peppler, W.W.; Zink, F.E. Design and realization of high speed single exposure dual energy image processing. In Proceedings of the Fifth Annual IEEE Symposium on Computer-Based Medical Systems, Durham, NC, USA, 14–17 June 1992.
13. Gao, Z.; Yao, S.; Yang, C.; Xu, J. A Dynamic Range Extension Technique for CMOS Image Sensors With In-Pixel Dual Exposure Synthesis. IEEE Sens. J. 2015, 15, 3265–3273.
14. Vengsarkar, A.M.; Zhong, Q.; Inniss, D.; Reed, W.A.; Lemaire, P.J.; Kosinski, S.G. Birefringence reduction in side-written photoinduced fiber devices by a dual-exposure method. Opt. Lett. 1994, 19, 1260–1262.
15. Tallón, M.; Mateos, J.; Babacan, S.D.; Katsaggelos, A.K. Space-variant blur deconvolution and denoising in the dual exposure problem. Inf. Fusion 2013, 14, 396–409.
16. Li, X.C.; Li, H.K.; Song, B. Application of energy functional regularization model in image restoration. J. Image Graph. 2014, 19, 1247–1259. (In Chinese)
17. Tallón, M.; Mateos, J.; Babacan, S.D.; Molina, R.; Katsaggelos, A.K. Space-variant kernel deconvolution for dual exposure problem. In Proceedings of the 19th European Signal Processing Conference, Barcelona, Spain, 29 August–2 September 2011.
18. Zhang, G.M.; Gao, S.; Yin, Z.S. Motion blur restoration method of remote sensing image based on fuzzy image and noise image. Electron. Des. Eng. 2017, 25, 82–86.
19. Cui, G.; Hua, W.; Zhao, J.; Gong, X.; Zhu, L. A motion deblurring method with long/short exposure image pairs. In Proceedings of the International Conference on Optical Instruments and Technology 2017, Beijing, China, 28–30 October 2017; International Society for Optics and Photonics: Bellingham, WA, USA, 2018.
20. Leutenegger, S.; Chli, M.; Siegwart, R.Y. BRISK: Binary Robust Invariant Scalable Keypoints. In Proceedings of the International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011.
21. Djurović, I. BM3D filter in salt-and-pepper noise removal. EURASIP J. Image Video Process. 2016, 2016, 13.
22. Osher, S.; Burger, M.; Goldfarb, D.; Xu, J.; Yin, W. An iterative regularization method for total variation-based image restoration. SIAM J. Multiscale Model. Simul. 2005, 4, 460–489.
23. Xiao, C.; Gan, J. Fast image dehazing using guided joint bilateral filter. Vis. Comput. 2012, 28, 713–721.
24. Xu, Z.; Fugen, Z.; Bei, X.Z. Blind deconvolution using a nondimensional Gaussianity measure. In Proceedings of the 2013 IEEE International Conference on Image Processing, Melbourne, Australia, 15–18 September 2013.
25. Beck, A.; Teboulle, M. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
26. Nocedal, J.; Wright, S. Numerical Optimization; Springer: New York, NY, USA, 1999.
27. Zhou, X.; Mateos, J.; Zhou, F.; Molina, R.; Katsaggelos, A.K. Variational Dirichlet Blur Kernel Estimation. IEEE Trans. Image Process. 2015, 24, 5127–5139.
28. Krishnan, D.; Fergus, R. Fast image deconvolution using Hyper-Laplacian priors. Adv. Neural Inf. Process. Syst. 2009, 22, 1033–1041.
29. Levin, A.; Fergus, R.; Durand, F.; Freeman, W.T. Image and Depth from a Conventional Camera with a Coded Aperture. ACM Trans. Graph. 2007, 26, 70-es.
30. Mumford, D.; Gidas, B. Stochastic Models for Generic Images. Q. Appl. Math. 2001, 59, 85–111.
31. Chen, F.C.; Shen, J.H.; Chen, W.B. Image Processing and Analysis: Variational, PDE, Wavelet and Stochastic Methods; Science Press: Beijing, China, 2011; pp. 15–43.
32. Kloft, M.; Brefeld, U.; Sonnenburg, S.; Zien, A. Lp-Norm Multiple Kernel Learning. J. Mach. Learn. Res. 2011, 12, 953–997.
33. Babacan, S.D.; Molina, R.; Do, M.N.; Katsaggelos, A.K. Bayesian Blind Deconvolution with General Sparse Image Priors. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2012; pp. 341–355.
34. Giusti, E.; Williams, G.H. Minimal Surfaces and Functions of Bounded Variation; Birkhäuser Verlag: Basel, Switzerland, 1984.
35. James, G.; Witten, D.; Hastie, T.; Tibshirani, R. An Introduction to Statistical Learning: With Applications in R; Springer: New York, NY, USA, 2013.
36. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Found. Trends Mach. Learn. 2010, 3, 1–122.
37. Faisal, M.; Lanterman, A.D.; Snyder, D.L.; White, R.L. Implementation of a modified Richardson–Lucy method for image restoration on a massively parallel computer to compensate for space-variant point spread of a charge-coupled-device camera. J. Opt. Soc. Am. A 1995, 12, 2593–2603.
38. Cui, G.; Zhao, J.; Gao, X.; Feng, H.; Chen, Y. High quality image-pair-based deblurring method using edge mask and improved residual deconvolution. Opt. Rev. 2017, 24, 128–138.
39. ISO 12233:2017; Photography—Electronic Still Picture Imaging—Resolution and Spatial Frequency Responses. ISO: Geneva, Switzerland, 2017.
40. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a "Completely Blind" Image Quality Analyzer. IEEE Signal Process. Lett. 2013, 20, 209–212.
41. Narvekar, N.D.; Karam, L.J. A No-Reference Image Blur Metric Based on the Cumulative Probability of Blur Detection (CPBD). IEEE Trans. Image Process. 2011, 20, 2678–2683.
Figure 1. Structure flow chart.
Figure 2. Flow chart of the kernel estimation model in the example layer.
Figure 4. Kernel estimation results: (a) original kernel; (b) result of the algorithm in [11]; (c) result of the algorithm in [24]; (d) result of this paper; (e) normalized SSIM compared with [24]; (f) normalized PSNR compared with [24].
Figure 5. Comparison over iterations: (a) PSNR difference; (b) SSIM result.
Figure 6. Remote sensing image restoration: (a) blurred image; (b) local blurred image; (c) local restored image of the algorithm in [11]; (e) local restored image of the algorithm in [27]; (f) local restoration result of the proposed algorithm; (g) the kernel estimated in this paper.
Figure 7. The Levin library: (a) the original images from the Levin library; (b) the kernels.
Figure 8. Real-image restoration experiment: (a) blurred image; (b) blurred image part; (c) restored image of the algorithm in [11]; (e) restored image of the algorithm in [17]; (f) result of this paper; (g) kernel estimated by the proposed algorithm.
Figure 9. Real image results, the important part of the image comprises the Chinese characters as shown in (l): (a) blurred image; (b) short exposure image; (c) algorithm restored image in [27]; (d) algorithm restored image in [11]; (e) the restoration result of this algorithm; (f) result of kernel; (g) blurred image; (h) short exposure image; (i) algorithm restored image in [27]; (j) algorithm restored image in [11]; (k) algorithm proposed in this paper; (l) details of the Chinese characters.
Figure 10. Edge image restoration results: (a) blurred image; (b) enlarged blurred image; (c) original image; (d) enlarged results; and (e) MTF comparison.
Table 1. Iterative estimation algorithm flow of kernel final.
The Steps of the Flow
Input: N d , Y , λ x and λ h
1. Conduct image down-sampling and establish an image pyramid consisting of n layers.
2. Estimate the convolution kernel at the current layer.
3. Update the fidelity term and regularization term through the united filtering algorithm.
4. Estimate the iterative image.
5. Estimate the convolution kernel.
6. Interpolate both kernel and image and extend to the next layer and repeat Step 2.
7. End the estimation and return to kernel H o p t .
Table 2. Kernel estimation quality evaluation table.
| Test Image | Evaluation Criterion | Algorithm in the Literature [11] | Algorithm in the Literature [24] | Proposed Algorithm |
| Convolutional kernel restoration result | SSIM | 0.6212 | 0.7835 | 0.8014 |
| | PSNR | 15.6289 | 16.3289 | 17.6426 |
Table 3. Deconvolutional algorithm flow based on long- and short-exposure.
Deconvolutional Algorithm Flow Based on Long- and Short-Exposure
Input: Y L , N d and H
1. Calculate the initial value of regularization term
2. Iteratively solve Equation (15) to acquire the initial solution X o p t
3. Calculate the result of the RRL algorithm “ X R R L
4. Calculate the result of GCRL algorithm “ X g c l
5. Calculate the image details “ X d e t a i l ” and acquire the restoration result “ X
Table 4. Final results of the RGO deblurring and the algorithm in [27].

| Test Image | Evaluation Criterion | Algorithm in the Literature [27] | Proposed Algorithm |
| Convolutional kernel restoration result | PSNR | 26.2330 | 26.5231 |
| | SSIM | 0.7519 | 0.7707 |
Table 5. Final results of the simulation experiment.

| Test Image | Evaluation Criterion | Algorithm in the Literature [11] | Algorithm in the Literature [27] | Proposed Algorithm |
| Jet | SSIM | 0.9768 | 0.9892 | 0.9992 |
| | PSNR | 25.4328 | 25.4251 | 26.8518 |
Table 6. Final results of total PSNR.

| Test Image | Algorithm in the Literature [11] | Algorithm in the Literature [27] | Proposed Algorithm |
| Image 1 | 144.4002 | 145.6224 | 159.2353 |
| Image 2 | 144.8377 | 152.0045 | 157.8592 |
| Image 3 | 140.7298 | 147.0867 | 153.9975 |
| Image 4 | 117.0562 | 143.5941 | 144.8717 |
Table 7. Final results of total SSIM.

| Test Image | Algorithm in the Literature [11] | Algorithm in the Literature [27] | Proposed Algorithm |
| Image 1 | 6.3098 | 6.7155 | 6.7284 |
| Image 2 | 6.6004 | 6.6171 | 6.6784 |
| Image 3 | 6.3357 | 6.7661 | 6.8571 |
| Image 4 | 4.3364 | 6.3227 | 6.5260 |
Table 8. Final results of the no. 1 experiment.

| Test Image | Evaluation Criterion | Algorithm in the Literature [11] | Algorithm in the Literature [17] | Proposed Algorithm |
| Bottle | NIQE | 5.3634 | 6.6016 | 4.4486 |
| | Average Gradient | 7.5376 | 5.0055 | 10.9265 |
| | CPBD | 0.4962 | 0.2182 | 0.5753 |
Table 9. Final results of the no. 2 experiment.

| Test Image | Evaluation Criterion | Algorithm in the Literature [11] | Algorithm in the Literature [27] | Proposed Algorithm |
| Bottle | NIQE | 4.3606 | 4.7263 | 4.1259 |
| | Average Gradient | 3.6899 | 3.6745 | 4.2400 |
| | CPBD | 0.4962 | 0.2182 | 0.5753 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Bai, Y.; Tan, Z.; Lv, Q.; Huang, M. A Deconvolutional Deblurring Algorithm Based on Short- and Long-Exposure Images. Sensors 2022, 22, 1846. https://doi.org/10.3390/s22051846
