Article

A Soft-Threshold Filtering Approach for Tomography Reconstruction from a Limited Number of Projections with Bilateral Edge Preservation

by Tiago T. Wirtti *,† and Evandro O. T. Salles †
Programa de Pós-graduação em Engenharia Elétrica (PPGEE), Departamento de Engenharia Elétrica, Universidade Federal do Espírito Santo, Campus Goiabeiras. Av. Fernando Ferrari, 514, Goiabeiras, Vitória-ES 29075-910, Brazil
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Sensors 2019, 19(10), 2346; https://doi.org/10.3390/s19102346
Submission received: 2 April 2019 / Revised: 16 May 2019 / Accepted: 18 May 2019 / Published: 21 May 2019
(This article belongs to the Section Intelligent Sensors)

Abstract: In X-ray tomography image reconstruction, one of the most successful approaches involves a statistical framework with an $\ell_2$ norm for the fidelity function and some regularization function with an $\ell_p$ norm, $1 < p < 2$. Among these, a technique that alternately minimizes an objective function with an $\ell_2$-norm fidelity term and a regularization term that uses the discrete gradient transform (DGT) sparsifying transformation, minimized by total variation (TV), stands out both for its results and for its computational performance. This work proposes an improvement to the reconstruction process by adding a bilateral edge-preserving (BEP) regularization term to the objective function. BEP is a noise reduction method whose purpose is to adaptively eliminate noise in the initial phase of reconstruction. The addition of BEP improves the optimization of the fidelity term and, as a consequence, improves the result of DGT minimization by total variation. For reconstructions with a limited number of projections (low-dose reconstruction), the proposed method can achieve higher peak signal-to-noise ratio (PSNR) and structural similarity index measurement (SSIM) results because it controls the noise better in the initial processing phase.

1. Introduction

X-ray computer tomography (CT) measures the attenuation of X-ray beams passing through an object, generating projections. Such projections are processed, resulting in an image (slice) of the examined object. This is known as CT image reconstruction. The CT scan, formed by concatenating a large number of adjacent reconstructed images, has proven to be of great value in delivering rapid and accurate diagnoses for many cases in modern medicine. Although CT scanning has evolved considerably since its creation in 1972 by Godfrey Hounsfield, only recently has concern with radiation levels in radiological examinations become important, leading to the “as-low-as-reasonably-achievable” principle, known as the ALARA principle. The ALARA principle states that only the minimum necessary amount of radiation must be applied to the patient. For this reason, ALARA is widely accepted in the medical CT community [1]. To reduce the patient's X-ray dose during the CT scan, there are two possibilities: (1) reduce the number of projections (the quantity of X-ray exposures) during the CT scan or (2) reduce the power of the X-ray source during image acquisition. Both cases generally lead to low-quality reconstructed CT images. Thus, a state-of-the-art problem is to propose a method that allows good-quality CT image reconstruction with a low-dose X-ray source. The central theme of this work is the reconstruction of images from the signal of the CT process, where the X-ray dosage is a concern. Before discussing CT image reconstruction approaches with low-dose X-rays, the next sections present the most important classical, iterative, and statistical techniques, to enable the reader to understand how CT image reconstruction has evolved to the current state of the art.

1.1. Classical, Iterative, and Statistical CT Image Reconstruction Approaches

The first approach to become popular, especially for its performance, was the filtered backprojection (FBP) reconstruction technique [2,3]. FBP is a classic method based on the Fourier central slice theorem and is implemented with the fast Fourier transform (FFT). Although it exhibits good performance, FBP is difficult to adapt to new CT scanner architectures. FBP requires high-dose radiation (in comparison to modern methods) and is not consistent with the ALARA (as-low-as-reasonably-achievable) principle [1].
The classical approaches, while successful, do not favor the incorporation of physical-statistical phenomena into the CT framework. For example, photon emission is a rare event and may be well described by the Poisson distribution [4]; beam behavior is best described by a response function that models the shadows cast onto detectors using a Gaussian model [5]; the beam-hardening phenomenon (lines and shadows adjacent to high-density reconstructed areas), which appears due to the polyenergetic nature of X-ray emissions, can be statistically corrected [6]; the loss of photons by sensors, known as photon read-out, is a Gaussian phenomenon [7]; data acquisition electronic noise and energy-dependent signals can be modeled as compound Poisson plus Gaussian noise [8]; etc. In this context, a statistical approach means adding to the mathematical model elements that describe the physical-statistical phenomena present in the CT image reconstruction process. However, the incorporation of detailed statistical models into CT reconstruction is not straightforward. Even so, many solutions for the CT image reconstruction problem use some form of statistical approach [9,10,11,12,13,14,15,16,17,18,19,20]. Adaptive statistical iterative reconstruction techniques have shown significant results compared to non-adaptive techniques [17,18,21,22]. In general, although the models incorporate part of the statistical phenomena, most of these phenomena are left unmodeled, since their practical effects are relatively insignificant and/or modeling them results in computationally expensive solutions.
An important method is proposed by Clark et al. [23], which consists of using rank-sparse kernel regression filtration with bilateral total variation (BTV) to map the reconstructed image into spectral and temporal contrast images. In this work, the authors strictly constrain the regularization problem while separating temporal and spectral dimensions, resulting in a highly compressed representation and enabling substantial undersampling of the acquired signals. The method (a 5D CT data acquisition and reconstruction protocol) efficiently exploits the rank-sparse nature of spectral and temporal CT data to provide high-fidelity reconstruction results without increased radiation dose or sampling time. However, a remark should be made regarding the use of BTV (regularization based on the $\ell_1$ norm): it often leads to piecewise-constant results and hence tends to produce artificial edges in smooth areas. To mitigate this drawback of $\ell_1$-norm regularization, Charbonnier et al. [24] developed an edge-preserving regularization scheme known as bilateral edge preservation (BEP), which allows the use of an $\ell_p$ norm, $1 < p < 2$, and is applied in this work. Sreehari et al. [25] proposed a plug-and-play (P&P) priors framework with a maximum a posteriori (MAP) estimation approach, used to design an algorithm for electron tomographic reconstruction and sparse image interpolation that exploits the non-local redundancy in images. The power of the P&P approach is that it allows a wide array of modern denoising algorithms to be used as prior models for tomography and image interpolation. Perelli et al. [26] propose the denoising CT generalized approximate message passing algorithm (DCT-GAMP), an adaptation of approximate message passing (AMP) techniques, which represents the state of the art for solving undersampled compressed sensing problems with random linear measurements. In contrast, this approach uses the minimum mean square error (MMSE) instead of MAP, and the authors show that using MMSE favors decoupling between the noise-conditioning effects and the system models.
The Bayesian statistical approach is widely applied, with some variations, to the reconstruction of X-ray tomographies [14,15,16,18,27,28] and makes it possible to insert prior knowledge into the CT system model. This approach promises two advantages. First, it steers the search toward more satisfactory (less noisy) solutions by limiting the searchable set of solutions with an a priori function (known as a restriction). This restriction may mean, for example, that very high variations (high frequencies) between neighboring pixels are considered noise (and therefore must be discarded), while moderate frequencies are considered edges (and therefore must be preserved). In the context of computed tomography, this means that large differences in intensity between neighboring pixels tend to be interpreted as noise, and such solutions should be disregarded [5]. Moreover, solving the CT reconstruction system, $y = A x + e$, is an inverse and ill-posed problem, and prior knowledge often ensures the stability of the solution. Second, we can adopt a simplified mathematical model for tomographic image reconstruction and compensate for its deficiencies (instability, noisy reconstruction, etc.) by adding the statistical component (prior knowledge) to the objective function. However, the model simplification has its limitations and should be used restrictively [5]. As a consequence of introducing prior knowledge, more satisfactory solutions (with a low noise level) can be found. Maximum a posteriori (MAP) is a useful statistical framework for CT reconstruction [12,15,16,27,28] and favors the incorporation of a regularization term with prior knowledge into the model. The MAP strategy provides an objective function composed of the sum of the likelihood term (also known as fidelity) and the regularization function that establishes the optimization restriction criteria, also known as a prior.

1.2. Signal Modeling and Error Considerations in CT Image Reconstruction

As previously discussed, statistical approaches can reduce deficiencies caused by classical mathematical modeling without having to literally incorporate the complexity of a real-world model. However, before proposing a statistical (non-deterministic) model that results in a lower-noise reconstruction, it is necessary to establish the process as a whole. As shown in Figure 1, the process begins with a synthetic image, $\mu$.
In this work, we use different synthetic images, namely the Shepp–Logan head phantom and the FORBILD head and abdomen phantom definitions [2,29]. The synthetic image is submitted to the Radon transform, $\mathcal{R}\{\cdot\}$, generating the ideal (noise-free) signal of the CT scan. The acquisition of data in the CT equipment depends on the number of photons reaching each detector. Since this is a particle-counting process, well described by Poisson statistics [8], it appears in the model of Figure 1 as the random variable $N_p$. However, due to the high number of photons, the acquisition process can be modeled as Gaussian thanks to the central limit theorem. In addition, the Gaussian model leads to additive algorithms, whereas the Poisson model leads to multiplicative, and therefore less efficient, algorithms [24]. In this work, we assume the signal arriving at the CT equipment detectors is influenced by additive Gaussian noise and, from among the dosage reduction methods presented in Section 1, we chose to emulate the low radiation dosage by reducing the number of projection angles processed. This means we assume that each detector absorbs an amount of photons that allows modeling the noise as additive Gaussian, and the low dosage is produced by reducing the number of projections captured by the detectors (by reducing scanning angles). Accordingly, even though the process has a Poissonian nature, additive Gaussian noise can be added to the process, as
$$ y_N = \mathcal{R}\{\mu\} + N_g, \qquad (1) $$
where $y_N$ is the resulting signal that approximates a tomography signal, $\mathcal{R}\{\mu\}$ is the result of applying the Radon transform to the synthetic image $\mu$, and $N_g$ is the additive Gaussian noise.
The remainder of this work is dedicated to the reconstruction of the CT image, $\mu_N$, from the signal, $y_N$, and to the reduction of the global error, i.e., the reduction of the difference between the synthetic and reconstructed images. As criteria for measuring the quality of image reconstruction, we use the peak signal-to-noise ratio (PSNR) and structural similarity, known as SSIM [30].
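As a rough illustration of this pipeline, the sketch below (our own code, not the authors') emulates the signal model of Equation (1) with scikit-image: it computes the Radon transform of a phantom over a reduced set of angles, adds Gaussian noise at a chosen SNR, and scores a baseline FBP reconstruction with the same PSNR/SSIM metrics used in this paper.

```python
# A minimal sketch of Eq. (1), y_N = R{mu} + N_g, using scikit-image.
# The 15-angle set and 60 dB SNR mirror the low-dose scenario of Section 4.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

mu = resize(shepp_logan_phantom(), (512, 512))      # synthetic image mu
theta = np.linspace(0.0, 179.0, 15)                 # g = 15 scan angles (low dose)

sinogram = radon(mu, theta=theta)                   # ideal signal R{mu}
snr_db = 60.0
noise_var = sinogram.var() / 10 ** (snr_db / 10)    # Gaussian noise power for the SNR
y_n = sinogram + np.random.normal(0.0, np.sqrt(noise_var), sinogram.shape)

mu_rec = iradon(y_n, theta=theta)                   # baseline FBP reconstruction
rng_val = mu.max() - mu.min()
print('PSNR:', peak_signal_noise_ratio(mu, mu_rec, data_range=rng_val))
print('SSIM:', structural_similarity(mu, mu_rec, data_range=rng_val))
```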

1.3. The Contribution of This Work

This work proposes a MAP solution with an adaptive regularization term modeled by a bilateral edge-preserving (BEP) function [31,32] to regularize the fidelity term ($\ell_2$ norm). The result of BEP regularization is then regularized by the total variation (TV) of the discrete gradient transform (DGT) function. The results of reconstruction with an adaptive norm, $\ell_a$, using BEP are compared, via structural similarity (SSIM) and peak signal-to-noise ratio (PSNR), with a simultaneous algebraic reconstruction technique (SART) reconstruction regularized via TV minimization of the DGT function. Image reconstruction is an iterative process, and SSIM is calculated at each step, making it possible to objectively compare the methods step by step. The assumption is that the better the reconstruction method, the higher the SSIM value associated with the reconstructed image. We also use the well-known PSNR metric to compare the reconstruction process step by step, and we check whether SSIM and PSNR give mutually consistent results. Both approaches use the synthetic images (the same ones used to generate the input signal, $y_N$) as references for comparison. The rest of this work is organized as follows. In Section 2, the MAP model is developed, resulting in the objective function. In Section 3, the optimization technique for the objective function is developed. In Section 4, experiments are performed, and the results are presented and analyzed. Finally, in Section 5, we present the conclusions and final considerations.

2. Modeling the Objective Function

Most modern CT scanners use energy-integration detectors whose photon counts are proportional to the total energy incident on them. Energy, in turn, is proportional to the number of X-ray photons that strike the detectors (sensors) of the tomograph. The denser the region traversed by X-ray photons, the lower the count $I_i$ of detected photons over the integration line $L_i$, $i = 1, \ldots, N_I$, where $N_I$ is the maximum number of projections acquired by the CT scanner. This is known as the Beer–Lambert law, defined as
$$ I_i = I_0 \exp\left( -\int_{L_i} \mu(x, y)\, dL \right), \qquad i = 1, \ldots, N_I, \qquad (2) $$
where $I_0$ is the number of detected photons when the beam finds no obstacle, and the exponential term is the integral of all linear attenuation coefficients $\mu(x, y)$ along the line $L_i$ (with $(x, y)$ being 2-D coordinates following $L_i$), which is the path of the beam. Equation (2) assumes that every X-ray emission has the same energy level, meaning that the process is monoenergetic. This approach is adopted in many works, such as [9,10,11,15,16,27,28,33,34,35], with the advantage of avoiding the beam-hardening problem. Moreover, the monoenergetic approach leads to a more tractable mathematical model. However, the emission of X-rays is, by nature, polyenergetic. As a consequence, the same object reacts differently when subjected to X-rays of different energy levels, generating unwanted artifacts in the reconstructed image. These defects can be avoided, but only by adopting complex models, as in [12,13,36], and at a high computational cost. This topic is complex and still subject to change because CT scanners using monoenergetic X-ray sources are beginning to emerge [37].
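As a quick numerical illustration of Equation (2) (our own example, with assumed values): a beam with $I_0 = 10^5$ source-level photons crossing a path whose attenuation integral equals 2.0 reaches the detector with

$$ I_i = 10^5 \, e^{-2.0} \approx 1.35 \times 10^4 \ \text{photons}, $$

so denser paths (larger attenuation integrals) yield exponentially lower counts.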
In favor of a better understanding of the purpose of this work, we first present a base solution that uses SART reconstruction regularized via TV minimization of the DGT function (SART+DGT), highlighting the relevant parts, and then we present our approach. This strategy is worthwhile because it makes the value of this work's contribution clear. The proposed method is abbreviated as SART+BEP+DGT.

2.1. Objective Function Modeling Using Soft-threshold Filtering for CT Image Reconstruction

This method consists of modeling an objective function with an $\ell_2$-norm fidelity function plus a DGT prior function regularized by an $\ell_1$ norm with TV minimization. Optimization of the objective function (Section 3) is performed using alternating minimization. The fidelity term is minimized by SART. The regularization (prior minimization) is performed by constructing a pseudo-inverse of the DGT and adapting a soft-threshold filtering algorithm whose convergence and efficiency have been theoretically proven in [38].
The key aspect of the modeling process is that reconstruction estimates the discrete attenuation, $\mu(x, y)$, for each pixel j of the image, with $j = 1, \ldots, N_J$. Thus, the line integral $p_i = \int_{L_i} \mu(x, y)\, dL$ can be discretized as
$$ p_i \approx \sum_{j=1}^{N_J} a_{ij}\, \mu_j = [A\mu]_i, \qquad i = 1, \ldots, N_I, \qquad (3) $$
where $A = (a_{ij}) \in \mathbb{R}^{N_I \times N_J}$ is the matrix representing the system geometry, $\mu = (\mu_1, \ldots, \mu_{N_J})^T$ is the linear attenuation coefficient vector with $\mu_j$ representing the j-th pixel, and $(\cdot)^T$ denotes matrix transposition. In this model, each $a_{ij}$ is defined as the normalized length of the intersection between the i-th projection beam and the j-th rectangular pixel centered at $(x, y)$. The emission of X-ray photons is a rare event, so a Poisson distribution is usually adopted to describe the probabilistic model, expressed as
$$ y_i \sim \mathrm{Poisson}\left\{ \bar{y}_i = I_0\, e^{-p_i} \right\}, \qquad i = 1, \ldots, N_I, \qquad (4) $$
where $y_i$ is the projection (measurement) along the i-th X-ray beam, and $\bar{y}_i$ is its expected value. Because the X-ray beams are independent from each other, taking into account Equation (4), the joint probability of $y = (y_1, y_2, \ldots, y_{N_I})$ given $\mu$, $P(y \mid \mu)$, observing $y_i$ countable events, may be expressed as
$$ P(y \mid \mu) = \prod_{i=1}^{N_I} P(y_i \mid \mu) = \prod_{i=1}^{N_I} \frac{\bar{y}_i^{\, y_i}}{y_i!}\, e^{-\bar{y}_i}. \qquad (5) $$
Using the MAP approach, as in [9,15,16,27], we have the objective function as follows:
$$ \Phi(\mu) = \sum_{i=1}^{N_I} \frac{y_i}{2} \left( [A\mu]_i - \hat{p}_i \right)^2 + R_{DGT}(\mu), \qquad (6) $$
where $\sum_{i=1}^{N_I} \frac{y_i}{2} ([A\mu]_i - \hat{p}_i)^2$ is the fidelity term, with $\hat{p}_i$ being an estimate of $p_i$, and $R_{DGT}(\mu)$ (multiplied by a factor $\beta = 1$ that is omitted in Equation (6)) is the regularization term with $\ell_1$ norm based on the DGT, defined as
$$ D_j(\mu) = D_{m,n}(\mu) = \sqrt{ \left( \mu_{m,n} - \mu_{m+1,n} \right)^2 + \left( \mu_{m,n} - \mu_{m,n+1} \right)^2 }, \qquad (7) $$
where $j = (m-1) \times W + n$, $m = 1, 2, \ldots, H$, $n = 1, 2, \ldots, W$, with W and H being, respectively, the width and height of the matrix representing the image with $N_J = W \times H$ pixels. By definition, TV is the sum of the DGT over all pixels of the image:
$$ TV(\mu) = \left\| D(\mu) \right\|_1 = R_{DGT}(\mu), \qquad (8) $$
with $D(\mu) = (D_1(\mu), \ldots, D_{N_J}(\mu))^T$. Thus, introducing the auxiliary variable $\nu = D(\mu)$ and applying the transformation
$$ A_\Lambda = \Lambda A = (a_{\Lambda ij}), \qquad \hat{p}_\Lambda = \Lambda \hat{p}, \qquad \hat{p} = (\hat{p}_1, \hat{p}_2, \ldots, \hat{p}_{N_I}), \qquad (9) $$
with $\Lambda = \mathrm{diag}\!\left( \sqrt{y_i/2} \right) \in \mathbb{R}^{N_I \times N_I}$ being a diagonal matrix, the objective function in Equation (6) can be rewritten as
$$ \Phi(\mu) = \left\| A_\Lambda \mu - \hat{p}_\Lambda \right\|_2^2 + \beta \| \nu \|_1, \qquad \nu = D(\mu), \qquad (10) $$
where $\beta$ is a positive adjustment parameter to balance the fidelity and TV terms and is usually set to 1 [12,16]. The ultimate goal is to minimize the objective function $\Phi(\mu)$, obtaining $\hat{\mu}$, as shown below:
$$ \hat{\mu} = \arg\min_{\mu} \left\{ F(\mu) + \beta R(\mu) \right\}, \qquad (11) $$
where the fidelity term, $F(\mu)$, represented both in the expanded version, as in Equation (6), and in the compact version, as in Equation (10), is shown below as
$$ F(\mu) = \sum_{i=1}^{N_I} \frac{y_i}{2} \left( [A\mu]_i - \hat{p}_i \right)^2 = \left\| A_\Lambda \mu - \hat{p}_\Lambda \right\|_2^2, \qquad (12) $$
and $R(\mu)$ is the restriction that drives the solution according to certain criteria (the $\ell_1$ norm in this case). The optimization of $F(\mu)$, although simple, is an important concept and can be defined as follows:
$$ \tilde{\mu} = \arg\min_{\mu} F(\mu). \qquad (13) $$
Inspired by the model in Equation (10), we propose in what follows a method for CT image reconstruction using adaptive soft-threshold filtering, which means, in brief, that the proposed method is intended to balance edge preservation and noise mitigation.
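Before moving on, the sketch below (our illustration, not the authors' code) shows how the DGT of Equation (7) and the TV of Equation (8) can be computed with forward differences; the wrap-around boundary handling via np.roll is an assumption of this sketch.

```python
# Discrete gradient transform (Eq. 7) and total variation (Eq. 8).
import numpy as np

def dgt(mu):
    dx = mu - np.roll(mu, -1, axis=0)   # mu_{m,n} - mu_{m+1,n}
    dy = mu - np.roll(mu, -1, axis=1)   # mu_{m,n} - mu_{m,n+1}
    return np.sqrt(dx**2 + dy**2)       # D_{m,n}(mu) at every pixel

def tv(mu):
    return dgt(mu).sum()                # TV(mu) = ||D(mu)||_1
```

A piecewise-constant image has a small TV value, which is why TV minimization favors flat regions separated by sharp edges.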

2.2. Objective Function Modeling by Using Bilateral Edge Preservation for CT Image Reconstruction

Regularization using the DGT ($\ell_1$ norm) works well for the CT reconstruction problem because it searches among the solutions of the fidelity ($\ell_2$ norm) optimization for the one with the lowest TV. However, it is well known that regularization based on the $\ell_1$ norm often introduces artificial edges in smooth transition areas. Moreover, a good regularization strategy must simultaneously perform noise suppression and edge preservation. With this motivation, Charbonnier et al. [24] proposed the bilateral edge-preserving (BEP) regularization function, inspired by the bilateral total variation (BTV) regularization technique [39]. BTV regularization is defined by
$$ R_{BTV}(X) = \sum_{l=-q}^{q} \sum_{\substack{m=0 \\ l+m \geq 0}}^{q} \alpha^{|l|+|m|} \left\| X - S_x^l S_y^m X \right\|_1, \qquad (14) $$
where q is a positive number, $S_x^l$ and $S_y^m$ are displacements by l and m pixels in the horizontal and vertical directions, respectively, X is the image under reconstruction/regularization, and $\alpha$, $0 < \alpha < 1$, creates a spatial decay effect over the sum of terms in the BTV regularization. The BEP regularization uses the same principle as BTV but with an adaptive norm (instead of the $\ell_1$ norm) defined by
$$ \rho(s, a) = a \sqrt{a^2 + s^2} - a^2, \qquad (15) $$
where a is a positive value and s is the difference that one wants to minimize. This function was initially proposed in [24] to preserve edges in the image regularization process. The parameter a specifies the error value at which the regularization changes from linear (growing with the error) to constant (saturated, regardless of the error). The same adaptive norm definition is also used in super-resolution problems [32]. The $\rho(s, a)$ function is an M-estimator, since it corresponds to a maximum likelihood (ML)-type estimation [40], and has its influence function given by
$$ \psi(s, a) = \frac{\partial \rho(s, a)}{\partial s} = \frac{a s}{\sqrt{a^2 + s^2}}. \qquad (16) $$
The influence function indicates how much a particular measure contributes to the solution [32]. We illustrate in Figure 2a the behavior of $\rho(s, a)$ (the error norm function) and in Figure 2b its influence function. It can be observed that, as the parameter a evolves from 0 to 1, the function changes its behavior from the $\ell_1$ to the $\ell_2$ norm. Thus, as mentioned in [31,32], Equation (15) behaves adaptively with respect to the norm that it implements.
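This adaptive behavior is easy to verify numerically; the sketch below (our illustration, with arbitrary values of a) plots Equations (15) and (16) and reproduces the qualitative behavior of Figure 2.

```python
# rho(s, a) = a*sqrt(a^2 + s^2) - a^2 (Eq. 15) and its influence
# function psi(s, a) = a*s / sqrt(a^2 + s^2) (Eq. 16).
import numpy as np
import matplotlib.pyplot as plt

def rho(s, a):
    return a * np.sqrt(a**2 + s**2) - a**2

def psi(s, a):
    return a * s / np.sqrt(a**2 + s**2)

s = np.linspace(-3.0, 3.0, 601)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3.5))
for a in (0.1, 0.5, 1.0):
    ax1.plot(s, rho(s, a), label=f'a = {a}')  # near-l1 for small a, near-l2 for large a
    ax2.plot(s, psi(s, a), label=f'a = {a}')  # saturates for |s| >> a (edge preservation)
ax1.set_title('error norm rho(s, a)')
ax2.set_title('influence function psi(s, a)')
ax1.legend(); ax2.legend()
plt.show()
```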
Therefore, combining Equations (14) and (15) for the particular case of tomography reconstruction, we propose an adaptive operator defined as
$$ R_{BEP}(\tilde{\mu}) = \sum_{l=-q}^{q} \sum_{\substack{m=0 \\ l+m \geq 0}}^{q} \sum_{j=1}^{N_J} \alpha^{|l|+|m|}\, \rho_a\!\left( \tilde{\mu}_j - \left[ S_x^l S_y^m \tilde{\mu} \right]_j \right), \qquad (17) $$
where q, $\alpha$, $S_x^l$, and $S_y^m$ are the same as in Equation (14); $\tilde{\mu}$, defined in Equation (13), is the estimated image obtained in the i-th iteration by $\ell_2$ minimization of the objective function in Equation (12); and $\tilde{\mu}_j$ is the j-th pixel of the image $\tilde{\mu}$, with $j = 1, \ldots, N_J$. It is noteworthy that the term $R_{BEP}(\tilde{\mu})$ imposes an $\ell_a$ regularization norm, $1 < a < 2$, on the image $\tilde{\mu}$. Thus, we can rewrite the objective function of Equation (10), $\Phi(\mu)$, so that a new regularization term, $R_{BEP}(\tilde{\mu})$, is introduced between the $\ell_2$-norm minimization and the TV minimization. As a consequence, the objective function proposed in this paper incorporates adaptive regularization, and, defining an auxiliary variable $\sigma = R_{BEP}(\tilde{\mu})$, we have
$$ \Phi(\mu) = \left\| A_\Lambda \mu - \hat{p}_\Lambda \right\|_2^2 + \gamma \| \sigma \|_\eta + \beta \| \nu \|_1, \qquad (18) $$
where $\gamma$ is a positive adjustment parameter to balance the fidelity and adaptive regularization terms. The other parameters are the same as in Equation (10), and $\eta$, $1 \leq \eta \leq 2$, is the norm the BEP method imposes on the regularization process.
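A direct way to read Equation (17) is as a weighted sum of adaptive-norm penalties over shifted copies of the image. The sketch below (our assumed implementation, not the authors' code) evaluates $R_{BEP}$ with the shift operators emulated by np.roll and the parameter values of Section 4.

```python
# BEP regularizer R_BEP (Eq. 17) over shifted copies of the image.
import numpy as np

def rho(s, a):
    return a * np.sqrt(a**2 + s**2) - a**2        # adaptive norm, Eq. (15)

def r_bep(mu, a=0.5, q=3, alpha=0.6):
    total = 0.0
    for l in range(-q, q + 1):
        for m in range(0, q + 1):
            if l + m < 0 or (l == 0 and m == 0):  # skip l+m<0 and the null shift
                continue
            shifted = np.roll(np.roll(mu, l, axis=1), m, axis=0)  # S_x^l S_y^m mu
            total += alpha ** (abs(l) + abs(m)) * rho(mu - shifted, a).sum()
    return total
```

Skipping the null shift is harmless, since $\rho(0, a) = 0$.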

3. Objective Function Optimization

The alternating minimization technique makes the simultaneous optimization of two or more terms of an objective function possible. Thus, for the proposed method, three stages are necessary: (1) minimizing $F(\mu)$ with SART; (2) applying the gradient descent (GD) method to the result of the first stage, using $R_{BEP}(\mu) = \gamma \|\sigma\|_\eta$ as a regularization term; and (3) applying DGT regularization to the previous result, minimizing $\beta \|\nu\|_1$ with soft-threshold filtering. The three stages are repeated iteratively until a satisfactory result is obtained or a certain number of steps is reached. For the purpose of better understanding, each of the three stages is presented in sequence.

3.1. First Stage: Minimization of the Fidelity Term with SART

The first step is to solve the optimization problem described by Equation (13). A popular solution was proposed by Ge and Ming [33], and it can be computationally expressed by the iterative equation
$$ \tilde{\mu}_j^{k} = \tilde{\mu}_j^{k-1} + \lambda_k \frac{1}{a_{+j}} \sum_{i=1}^{N_I} \frac{a_{ij}}{a_{+i}} \left( \hat{p}_i - A_i \mu^{k-1} \right), \qquad (19) $$
where $a_{+j} = \sum_{i=1}^{N_I} a_{ij} > 0$, $a_{+i} = \sum_{j=1}^{N_J} a_{ij} > 0$, $A_i$ is the i-th row of $A$, k is the iteration index, and $0 < \lambda_k < 2$ is an arbitrary relaxation parameter [15,41]. To simplify the notation, one can define $\Lambda_{+N_J} \in \mathbb{R}^{N_J \times N_J}$ as a diagonal matrix with $[\Lambda_{+N_J}]_{jj} = 1/a_{+j}$, and $\Lambda_{+N_I} \in \mathbb{R}^{N_I \times N_I}$ also as a diagonal matrix with $[\Lambda_{+N_I}]_{ii} = 1/a_{+i}$. Then, Equation (19) can be rewritten as
$$ \tilde{\mu}^{k} = \tilde{\mu}^{k-1} + \lambda_k \Lambda_{+N_J} A_\Lambda^{T} \Lambda_{+N_I} \left( \hat{p}_\Lambda - A_\Lambda \tilde{\mu}^{k-1} \right), \qquad (20) $$
where the term $\lambda_k$ is usually constant and equal to 1. The method described in Equation (19) is commonly known as SART. This method produces a relatively noisy reconstruction, as can be observed in Section 4. We reinforce here that $\tilde{\mu}$ is the input of the second stage in the reconstruction process.
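In matrix form, the update of Equation (20) is only a few lines of code. The sketch below (our own rendering, not the authors' code) uses a dense system matrix for clarity; practical implementations store A as a sparse matrix.

```python
# SART iterations (Eqs. 19-20): per-ray and per-pixel normalized backprojection.
import numpy as np

def sart(A, p, n_iter=100, lam=1.0):
    n_i, n_j = A.shape
    mu = np.zeros(n_j)
    col_sum = A.sum(axis=0)          # a_{+j}: per-pixel normalization (Lambda_{+N_J})
    row_sum = A.sum(axis=1)          # a_{+i}: per-ray normalization (Lambda_{+N_I})
    col_sum[col_sum == 0] = 1.0      # guard against pixels/rays never intersected
    row_sum[row_sum == 0] = 1.0
    for _ in range(n_iter):
        residual = (p - A @ mu) / row_sum        # Lambda_{+N_I} (p - A mu)
        mu += lam * (A.T @ residual) / col_sum   # Lambda_{+N_J} A^T (...)
    return mu
```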

3.2. Second Stage: Bilateral Edge-Preserving with a Gradient Descent Method

In the second stage, the goal is to solve the optimization problem defined by
$$ \hat{\mu} = \arg\min_{\mu} \left\{ \tilde{\mu} - \gamma R_{BEP}(\tilde{\mu}) \right\}, \qquad (21) $$
where $\gamma$ is a parameter that weights the contribution of the constraint $R_{BEP}$ (Equation (17)). The gradient descent method can be applied to solve this problem as
$$ \mu^{k} = \mu^{k-1} - \gamma \nabla R_{BEP}\!\left( \mu^{k-1} \right), \qquad (22) $$
resulting in an optimization problem written as follows (see Appendix A for details):
$$ \hat{\mu} = \arg\min_{\mu} \left\{ \rho_a(\tilde{\mu}) + \varphi \sum_{l=-q}^{q} \sum_{\substack{m=0 \\ l+m \geq 0}}^{q} \alpha^{|l|+|m|}\, \rho_c\!\left( \tilde{\mu} - S_x^l S_y^m \tilde{\mu} \right) \right\}, \qquad (23) $$
where $\tilde{\mu}$, as defined in Equation (13), is the result of the first-stage minimization; $\rho_a = \rho(s, a)$ is as in Equation (15) but with $s = \tilde{\mu}$; and $\rho_c = \rho(s, c)$ is the same as in Equation (15) but with a constant c instead of the constant a and $s = \tilde{\mu} - S_x^l S_y^m \tilde{\mu}$, where $S_x^l$ and $S_y^m$ are the same as in Equation (14). A computable matrix form was derived from Equation (23), and the result is shown below as
$$ \hat{\mu}^{k} = \tilde{\mu}^{k} - \gamma_k \left[ H_a\!\left( \tilde{\mu}^{k} \right) \odot \tilde{\mu}^{k} + \varphi \sum_{l=-q}^{q} \sum_{\substack{m=0 \\ l+m \geq 0}}^{q} \alpha^{|l|+|m|} \left( I - S_x^{-l} S_y^{-m} \right) \left[ H_c(M) \odot M \right] \right], \qquad (24) $$
where $\gamma_k$ is an adjustment parameter to balance the k-th value of $\tilde{\mu}^{k}$ against the gradient descent contribution, $\nabla R_{BEP}(\tilde{\mu}^{k-1})$; $\varphi$ is also an adjustment parameter, but it balances the terms inside the gradient descent; $\odot$ is the element-by-element product of two matrices of compatible dimensions; and $I$ is the identity matrix. The matrix $M = \tilde{\mu}^{k} - S_x^l S_y^m \tilde{\mu}^{k}$ is the difference between $\tilde{\mu}^{k}$ and its version shifted by $S_x^l S_y^m$, and the operators $H_a(\cdot)$ and $H_c(\cdot)$ are defined, respectively, as
$$ H_a(x) = \frac{a}{\sqrt{a^2 + x^2}}, \qquad H_c(x) = \frac{c}{\sqrt{c^2 + x^2}}. \qquad (25) $$
$H_a(\tilde{\mu}^{k}) \odot \tilde{\mu}^{k}$ and $H_c(M) \odot M$ are influence functions (as defined in Equation (16)) resulting from the application of the gradient descent method.
It is important to clarify that in Equation (22), $\mu$ appears with the upper index $k-1$ instead of k because the previous result of the gradient descent, $\mu^{k-1}$, feeds the calculation of the current value, $\mu^{k}$; this is simply how gradient descent works. In contrast, Equation (24) shows $\tilde{\mu}$ with upper index k (as in $\hat{\mu}$) rather than $k-1$ because $\tilde{\mu}$ is obtained in the same iteration step, k, as $\hat{\mu}$, but in a previous stage denoted by the upper mark “tilde” ($\tilde{\cdot}$), while the current stage is denoted by the upper mark “hat” ($\hat{\cdot}$).
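Putting Equations (24) and (25) together, one second-stage update can be sketched as below (our assumed implementation; shifts are emulated with np.roll, and the parameter defaults mirror Section 4).

```python
# One gradient-descent BEP step, Eq. (24).
import numpy as np

def h(x, a):
    return a / np.sqrt(a**2 + x**2)               # influence kernels, Eq. (25)

def bep_step(mu, gamma=1e-3, phi=0.15, a=0.5, c=0.1, q=3, alpha=0.6):
    grad = h(mu, a) * mu                          # H_a(mu) ⊙ mu
    for l in range(-q, q + 1):
        for m in range(0, q + 1):
            if l + m < 0 or (l == 0 and m == 0):
                continue
            shifted = np.roll(np.roll(mu, l, axis=1), m, axis=0)
            M = mu - shifted                      # M = mu - S_x^l S_y^m mu
            t = h(M, c) * M                       # H_c(M) ⊙ M
            # apply (I - S_x^{-l} S_y^{-m}) to t
            grad += phi * alpha ** (abs(l) + abs(m)) * (
                t - np.roll(np.roll(t, -l, axis=1), -m, axis=0))
    return mu - gamma * grad                      # mu_hat^k = mu_tilde^k - gamma_k * grad
```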

3.3. Third Stage: TV Minimization by Soft-threshold Filtering

The third stage (TV optimization, $R_{DGT}$) solves the problem $\nu = D(\mu)$, where $D$ is not invertible, as proposed by Yu and Wang [15] and shown below:
$$ \mu_{m,n}^{k} = \frac{1}{4} \left( 2\mu_{m,n}^{k,a} + \mu_{m,n}^{k,b} + \mu_{m,n}^{k,c} \right), \qquad (26) $$
with
$$ \mu_{m,n}^{k,a} = \begin{cases} \dfrac{2\tilde{\mu}_{m,n}^{k} + \tilde{\mu}_{m+1,n}^{k} + \tilde{\mu}_{m,n+1}^{k}}{4}, & D_{m,n}(\tilde{\mu}^{k}) < \omega, \\[6pt] \tilde{\mu}_{m,n}^{k} - \omega \dfrac{2\tilde{\mu}_{m,n}^{k} - \tilde{\mu}_{m+1,n}^{k} - \tilde{\mu}_{m,n+1}^{k}}{4 D_{m,n}(\tilde{\mu}^{k})}, & D_{m,n}(\tilde{\mu}^{k}) \geq \omega, \end{cases} \qquad (27) $$
$$ \mu_{m,n}^{k,b} = \begin{cases} \dfrac{\tilde{\mu}_{m,n}^{k} + \tilde{\mu}_{m-1,n}^{k}}{2}, & D_{m-1,n}(\tilde{\mu}^{k}) < \omega, \\[6pt] \tilde{\mu}_{m,n}^{k} - \omega \dfrac{\tilde{\mu}_{m,n}^{k} - \tilde{\mu}_{m-1,n}^{k}}{2 D_{m-1,n}(\tilde{\mu}^{k})}, & D_{m-1,n}(\tilde{\mu}^{k}) \geq \omega, \end{cases} \qquad (28) $$
$$ \mu_{m,n}^{k,c} = \begin{cases} \dfrac{\tilde{\mu}_{m,n}^{k} + \tilde{\mu}_{m,n-1}^{k}}{2}, & D_{m,n-1}(\tilde{\mu}^{k}) < \omega, \\[6pt] \tilde{\mu}_{m,n}^{k} - \omega \dfrac{\tilde{\mu}_{m,n}^{k} - \tilde{\mu}_{m,n-1}^{k}}{2 D_{m,n-1}(\tilde{\mu}^{k})}, & D_{m,n-1}(\tilde{\mu}^{k}) \geq \omega, \end{cases} \qquad (29) $$
where $\omega$ is a pre-established threshold; $\tilde{\mu}^{k} = [\tilde{\mu}^{k}]_{mn}$, with $m = 1, 2, \ldots, H$ and $n = 1, 2, \ldots, W$, where W and H are, respectively, the width and height of the reconstructed image; and $D_{m,n}(\tilde{\mu}^{k})$ is the DGT matrix as defined in Equation (7).
As explained in detail in [15] and observing Equation (27), when $D_{m,n}(\tilde{\mu}^{k}) < \omega$, the values of $\tilde{\mu}_{m,n}^{k}$, $\tilde{\mu}_{m+1,n}^{k}$, and $\tilde{\mu}_{m,n+1}^{k}$ must be adjusted so that $D_{m,n}(\tilde{\mu}^{k}) = 0$. This means that if neighboring pixels in the reconstructed image are very close in value, they are likely to have equal (or very close) values in the real image; the method then smooths the region around the pixel so that the values look alike. Alternatively, when $D_{m,n}(\tilde{\mu}^{k}) \geq \omega$, the goal is to reduce $(\tilde{\mu}_{m,n}^{k} - \tilde{\mu}_{m+1,n}^{k})^2$ and $(\tilde{\mu}_{m,n}^{k} - \tilde{\mu}_{m,n+1}^{k})^2$, but not to cancel them. In this case, the method recognizes the differences between the values of neighboring pixels as too significant to be totally eliminated; instead, the differences are just softened.
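The thresholded update of Equations (26)-(29) vectorizes naturally. The sketch below (our assumed implementation, not the authors' code) applies it to the whole image at once; the wrap-around boundaries via np.roll and the small epsilon guard are assumptions of this sketch.

```python
# Soft-threshold TV filtering stage, Eqs. (26)-(29).
import numpy as np

def dgt(mu):
    dx = mu - np.roll(mu, -1, axis=0)            # mu_{m,n} - mu_{m+1,n}
    dy = mu - np.roll(mu, -1, axis=1)            # mu_{m,n} - mu_{m,n+1}
    return np.sqrt(dx**2 + dy**2)                # D_{m,n}, Eq. (7)

def soft_threshold_tv(mu, omega, eps=1e-12):
    d = dgt(mu)
    down, right = np.roll(mu, -1, axis=0), np.roll(mu, -1, axis=1)
    up, left = np.roll(mu, 1, axis=0), np.roll(mu, 1, axis=1)

    mu_a = np.where(d < omega,                               # Eq. (27)
                    (2 * mu + down + right) / 4,
                    mu - omega * (2 * mu - down - right) / (4 * d + eps))
    d_up = np.roll(d, 1, axis=0)                             # D_{m-1,n}
    mu_b = np.where(d_up < omega,                            # Eq. (28)
                    (mu + up) / 2,
                    mu - omega * (mu - up) / (2 * d_up + eps))
    d_left = np.roll(d, 1, axis=1)                           # D_{m,n-1}
    mu_c = np.where(d_left < omega,                          # Eq. (29)
                    (mu + left) / 2,
                    mu - omega * (mu - left) / (2 * d_left + eps))
    return (2 * mu_a + mu_b + mu_c) / 4                      # Eq. (26)
```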

3.4. Convergence and Convexity Considerations

The model proposed in this work, represented by Equation (18), gives an important initial gain in terms of PSNR to the reconstruction of CT images, as reported in Section 4. However, it is important to know how this model behaves in long-term processing, so its convergence must be investigated. We do this empirically by comparing the PSNR values of the cost function between iterations k and $k-1$, with $1 < k \leq 5000$. We present in Figure 3 a chart for each image used in the simulations of Section 4, more specifically for the graphs and images of Figures 5 and 6 (Shepp–Logan head phantom), 7 and 8 (FORBILD head phantom), and 9 and 10 (FORBILD abdomen phantom). It is worth noting that in all cases shown in Figure 3, the SART+BEP+DGT method is consistent with respect to convergence and presents better error reduction in terms of the PSNR metric. This simulation uses the same initial values defined in Section 4. The same holds when we graphically analyze the convergence of the proposed method using the SSIM metric, as can be seen in Figure 4.
It is noteworthy that, according to Charbonnier et al. [24], the function described in Equation (15) is convex, and therefore, it would be possible to apply an iTV-style minimization procedure [42] to assure data consistency and regularization term improvement in each iteration step.

4. Experiments and Results

In the experiments, we used the synthetic images presented in Section 1.2, that is, the Shepp–Logan head phantom, the FORBILD head phantom, and the FORBILD abdomen phantom. The signal from the CT equipment is simulated according to the model in Equation (1), addressing the low-dosage scenario by considering a limited number of projections (meaning a limited number of scanning angles). On the image reconstruction side, we use the model $y = A x + e$, which, as discussed in Section 1.1, denotes an inverse and ill-posed problem, where $A$ ($N_I \times N_J$) is the matrix that describes the capture system, $x$ ($N_J \times 1$) is the phantom represented lexicographically, and $e$ ($N_J \times 1$) is the error, whose features were presented in Section 1.2. It is worth remembering that x is the image we intend to reconstruct from the noisy signal y; in the modeling process presented in Section 2 and Section 3, we use the variable $\mu$ to represent it. Describing the system further, $N_I = n_l n_\theta$ is the number of projections, where $n_l$ is the number of projection lines (i.e., the number of detectors) for each scan angle, and $n_\theta$ is the total number of scan angles. $n_\theta$ is the parameter whose value should be changed when the intention is to set a new dosage value, i.e., when we want to define a different (lower) number of projections, $N_I$. The image has dimensions $d \times d$, where $d = \sqrt{N_J} = 2^R$, $R \in \mathbb{N}^+$ (positive natural number). In this work, we use $R = 9$ ($d = 512$), and therefore $N_J = d^2 = 262{,}144$ and $N_I = n_l n_\theta = 300 n_\theta$, with $n_l = 300$ detectors. Thus, $A$ has dimensions $300 n_\theta \times 262{,}144$, which are compatible with the dimensions of y and $\mu$, respectively, i.e., $300 n_\theta \times 1$ and $262{,}144 \times 1$. It is important to note that the $N_I$ dimension of $A$ (and of y) is related to the number of scan angles, $n_\theta$, and this number of angles is what determines whether the signal is of low (or regular) dosage, as discussed in Section 1, according to the ALARA principle. For a low dosage, we consider subsets of $\Theta$, i.e., equally spaced sets of integer values between 0 and 179 degrees, named $\Theta_g$. For example, $\Theta_{g=5} = \{0, 44, 88, 132, 176\}$ would be a possible subset, in which the $g = 5$ angles are equally spaced at 44 degrees. Using this notation, $\Theta$ is equivalent to $\Theta_{180}$, meaning that there are $g = 180$ scan angles equally spaced by 1 degree (which represents a regular dosage). In the experiments with low dosage, the sets $\Theta_g$, with $g \in \{15, 30\}$, are used. $A$ was obtained for a parallel-architecture scanner.
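For concreteness, the sketch below (our own illustration; theta_g is a hypothetical helper reproducing the angle sets described above) enumerates the geometry for the dosages used in this section.

```python
# Experiment geometry of Section 4: image size, angle sets, and A's dimensions.
import numpy as np

R = 9
d = 2 ** R                          # image side: 512
N_J = d * d                         # 262,144 unknowns (pixels)
n_l = 300                           # detectors (projection lines) per scan angle

def theta_g(g):
    """g equally spaced integer angles in [0, 179]; theta_g(5) -> [0, 44, 88, 132, 176]."""
    step = 179 // (g - 1)
    return np.arange(g) * step

for g in (5, 15, 30, 180):
    N_I = n_l * g                   # number of projections at this dosage
    print(f'g={g:3d}  angles={theta_g(g)[:5]}...  A is {N_I} x {N_J}')
```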
By observing the objective function optimization process detailed in Section 3, and in alignment with the proposal in Section 1.3, we design a test framework that involves (1) the execution of the first stage (Section 3.1) alternating with the third stage (Section 3.3), which we will call here SART+DGT, and (2) execution of the first, second (Section 3.2), and third stages alternately and in this sequence, named SART+BEP+DGT. Simulations are shown in Table 1.
For both arrangements (SART+DGT and SART+BEP+DGT), the first stage is mandatory because it is the core of the reconstruction process; that is, it represents the optimization of the fidelity term in Equation (13). Because of its omnipresence, we also show simulations with SART only, which serve as a baseline for assessing how much the constraint terms actually contribute to the reconstruction process. The subsequent stages represent the application of the constraint terms described in Section 3.2 and Section 3.3, respectively Equations (24) and (26). For each of these test arrangements, it was arbitrarily established that the iterator, k, ranges from 1 to L, with approximately $350 \leq L < 1500$. Because the Gaussian noise and the Poisson process are random, each experiment is performed a considerable number of times, arbitrarily defined as 101 executions per experiment, and each result in Table 1 is the mean of the 101 SSIM and PSNR values. It is important to note that the result presented for each experiment (with a particular additive Gaussian noise or a certain number of projections) is the mean over the 101 executions performed. Each execution produces a particular SSIM and PSNR result. We do not average pixels in any reconstructed image, but rather the SSIM and PSNR of the 101 executions performed for each test case. The idea of averaging a considerable number of executions is based on the central limit theorem, which states that the arithmetic mean of a sufficiently large number of independent random variables will be approximately normally distributed, regardless of the underlying distribution, provided each variable has a finite expected value and variance.
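This averaging protocol is straightforward to express in code. The sketch below (our illustration) assumes a hypothetical reconstruct callable that runs one full randomized reconstruction for a fixed number of steps k and returns the reconstructed image.

```python
# Mean SSIM/PSNR over repeated randomized executions of one experiment.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def mean_metrics(reconstruct, phantom, n_runs=101):
    psnrs, ssims = [], []
    data_range = phantom.max() - phantom.min()
    for _ in range(n_runs):
        rec = reconstruct()      # one execution with fresh random noise
        psnrs.append(peak_signal_noise_ratio(phantom, rec, data_range=data_range))
        ssims.append(structural_similarity(phantom, rec, data_range=data_range))
    return np.mean(psnrs), np.mean(ssims)
```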

Low Dosage Tests and Results

As recommended by the ALARA principle, an alternative for reducing the total amount of radiation applied to a patient is decreasing the number of projections in the acquisition of the CT signal. According to the signal model proposed in Equation (1), we consider the projections as individually influenced by additive Gaussian noise, and the low-dosage signal is obtained by reducing the number of scanning angles. In the batch of tests with low-dosage projections, we use the sets of angles $\Theta_g$, with $g \in \{15, 30\}$, where g is the number of angles in $\Theta_g$ (recall that at a regular dosage we have 180 angles, from 0° to 179°). In our model, these values represent a reduced amount of photon emission, which can be understood as a low radiation dosage. All low-dosage tests presented in this section are performed with a signal-to-noise ratio (SNR) of 32, 46, and 60 dB. The SART stage used $\lambda = 1$, according to [15,41]. The DGT stage maintained $\beta = 1$, as discussed in Section 2.1, and used as the threshold $\omega$ the average of the DGT at each iteration k. The BEP stage used $\gamma = 0.001$, $\varphi = 0.150$ (Section 3.2), $a = 0.5$, $q = 3$, $\alpha = 0.6$, and $c = 0.1$ (Section 2.2). All parameters were set empirically.
A batch of experiments using the sets $\Theta_g$, with $g \in \{15, 30\}$, of projections for the SART+BEP+DGT (named here method A), SART+DGT (method B), and pure SART (method C) methods is shown in Table 1 for the PSNR and SSIM metrics, using the FORBILD abdomen phantom (FA), FORBILD head phantom (FH), and Shepp–Logan head phantom (SL) synthetic images. Analyzing the results for the PSNR metric with 15 and 30 projections, it is observed that for $k = 350$ steps, the proposed method generally presents a higher PSNR value. The exceptions are the FA and FH images with an SNR of 32 dB. However, for $k = 700$ and 1000 steps and 30 projections, the results favor the SART+DGT method according to the PSNR metric. For 15 projections, the results for $k = 700$ steps favor the proposed method in most tests performed with the PSNR metric. For the SSIM metric, the proposed method presents interesting results when compared to the SART+DGT method.
For the reconstruction of the Shepp–Logan head phantom with 15 projection angles, Figure 5 shows the evolution of the SSIM and PSNR values for the SART+BEP+DGT (proposed), SART+DGT, and pure SART methods for SNR = 60 dB. In this particular experiment, marker 1 in Figure 5a indicates the highest SSIM value, 0.9240, reached by the proposed method and corresponding to the highest PSNR value, 72.7103, indicated by marker 1 in Figure 5b. Marker 2 shows in Figure 5a,b, respectively, the SSIM (0.8819) and PSNR (71.7521) values obtained at step $k = 551$. Marker 3 in Figure 5b highlights the point at which the SART+DGT method reaches the same PSNR value as the proposed method, at approximately step $k = 1015$, and the graphs in Figure 5 agree with Table 1.
Looking closely at Figure 6b,c, it is possible to note the presence of random noise (indicated by the white arrows) that manifests as small white dots in Figure 6c, while in Figure 6b this phenomenon is not easily perceived. This is because the BEP regularization used in the proposed method (Figure 6b) tends to eliminate noise faster. However, the reconstruction performed by the SART+DGT method produces a more homogeneous image, as shown in Figure 6c. This is also related to the elimination of random noise by the introduction of BEP regularization in the reconstruction process. Figure 6d presents the result of the image reconstruction using the pure SART method.
For the reconstruction of the FORBILD head phantom with 30 projections and SNR = 46 dB, shown in Figure 7, we observe that the best PSNR (70.7700), obtained by the proposed method at step $k = 520$ (Figure 7b, marker 1), is reached by the SART+DGT method at step $k = 570$ (marker 3). Marker 2 shows, in Figure 7a,b, respectively, the SSIM (0.8728) and PSNR (70.6901) values obtained at step $k = 685$. The SSIM values remain higher for the proposed method, according to the graph of Figure 7a. We show the evolution of the pure SART method in terms of SSIM and PSNR for comparison purposes only. Observing the reconstructions shown in Figure 8b (with $k = 520$ steps, PSNR = 70.7700, SSIM = 0.9015) and Figure 8c (with $k = 570$ steps, PSNR = 70.7706, SSIM = 0.8774), the results are low in quality due to the number of projections, and the images show practically no difference, except for the better contrast level of Figure 8b. The pure SART reconstruction is shown in Figure 8d.
Figure 9 shows the evolution of the SSIM and PSNR values for the reconstruction with 15 projection angles and SNR = 60 dB for the FORBILD abdomen phantom using the SART+BEP+DGT, SART+DGT, and pure SART methods. In this particular experiment, marker 1 in Figure 9a indicates the highest SSIM value, 0.9419, reached by the proposed method and corresponding to the highest PSNR value, 77.7180, indicated by marker 1 in Figure 9b, both obtained at step $k = 685$. Marker 2 shows, in Figure 9a,b, respectively, the SSIM (0.9327) and PSNR (77.2279) values obtained at step $k = 685$. Marker 3 in Figure 9b highlights the point at which the SART+DGT method reaches the same PSNR value as the proposed method, at approximately step $k = 920$. The image of Figure 10b shows a tendency to eliminate the lines and bands characteristic of the reconstruction process with few scanning angles.
Each of the examples of reconstructed images shown in Figure 6, Figure 8, and Figure 10, and their SSIM and PSNR graphs in Figure 5, Figure 7, and Figure 9, respectively, results from a single execution drawn from a set of 101 executions. Figure 11 shows box plots for various situations, combining the image to be reconstructed, the SNR value in dB, the number of projections, the method used, and the step k of the execution. As an example, for the reconstruction of Shepp–Logan with 15 projections, method A, and SNR = 32 dB, the column (step) $k = 600$ of the $101 \times 1000$ processing matrix is highlighted in the graph of Figure 11a. Table 2 shows the details of each element of the box plots in Figure 11, and it is straightforward to note that the standard deviation (Table 2) increases with noise (Figure 11).

5. Conclusions and Final Comments

The proposed method, composed of the stages (1) SART reconstruction, (2) BEP adaptive minimization, and (3) TV minimization via DGT, synthesized in Equation (18), presents, in the first steps of processing, a more pronounced reduction in the noise level of the reconstructed image for both the SSIM and PSNR metrics, as can be seen in Section 4. It is important to emphasize that the tests were done with 15 and 30 projections, as shown in Table 1. At some point, the proposed method reaches its maximum PSNR value, and at this point the images are reasonably intelligible. From this point forward, the SART+DGT method gives higher PSNR values and, consequently, a less noisy reconstruction. Even after the apex of the proposed method with regard to PSNR, its SSIM value remains higher in many of the cases studied when compared to the result of the SART+DGT method. The best SSIM values generally correspond to images with better contrast, which is very important for artifact viewing and contour distinction in the reconstructed image. Structural similarity considers morphological features in the evaluation of reconstruction results and for this reason presents results better suited to human standards when compared with the PSNR metric. However, the main disadvantage of this method is that, in a practical application, we cannot know the maximum PSNR, since we do not have an original image for comparison. On the other hand, the advantage of the proposed method is that it delivers results earlier in the reconstruction process.
The use of BEP minimization soon after the SART reconstruction, as explained in Section 3, is intended to promote image noise reduction in the reconstruction process, delivering a less noisy image to the later stage (of minimization by TV using DGT). In fact, the noise reduction occurs up to a certain point, and although it is not possible in a practical application to define the ideal stopping point (maximum PSNR), it may be possible to estimate this point based on the type of image and the number of projections used.

Author Contributions

The contributions of the authors can be described as follows: conceptualization, T.T.W. and E.O.T.S.; formal analysis, T.T.W. and E.O.T.S.; funding acquisition, T.T.W.; investigation, T.T.W.; methodology, T.T.W. and E.O.T.S.; project administration, E.O.T.S.; software, T.T.W.; supervision, E.O.T.S.; validation, T.T.W. and E.O.T.S.; visualization, T.T.W.; writing–original draft, T.T.W.; writing–review and editing, T.T.W. and E.O.T.S.

Acknowledgments

Thanks to PPGEE—Programa de Pós-graduação em Engenharia Elétrica—UFES for the support provided during the preparation and submission of this work.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ALARA      As-low-as-reasonably-achievable
AMP        Approximate message passing
ART        Algebraic reconstruction technique
BEP        Bilateral edge preserving
CT         Computer tomography
DCT-GAMP   Denoising CT generalized approximate message passing
DGT        Discrete gradient transform
FBP        Filtered backprojection
FFT        Fast Fourier transform
MAP        Maximum a posteriori
MMSE       Minimum mean square error
OS-SART    Ordered subset simultaneous algebraic reconstruction technique
PSNR       Peak signal-to-noise ratio
TV         Total variation
SART       Simultaneous algebraic reconstruction technique
SSIM       Structural similarity
VW-SART    Variable weighted simultaneous algebraic reconstruction technique

Appendix A

With Equation (22) in mind and substituting Equation (15) into the core (within the braces) of Equation (23), we have
$$ R_{BEP}(\tilde{\mu}) = \left( a\sqrt{a^2 + \tilde{\mu}^2} - a^2 \right) + \varphi \sum_{l=-q}^{q} \sum_{\substack{m=0 \\ l+m \geq 0}}^{q} \alpha^{|l|+|m|} \left( c\sqrt{c^2 + \left( \tilde{\mu} - S_x^l S_y^m \tilde{\mu} \right)^2} - c^2 \right), \qquad (A1) $$
and taking the derivative with respect to $\tilde{\mu}$ results in
$$ \nabla R_{BEP}(\tilde{\mu}) = \frac{a\tilde{\mu}}{\sqrt{a^2 + \tilde{\mu}^2}} + \varphi \sum_{l=-q}^{q} \sum_{\substack{m=0 \\ l+m \geq 0}}^{q} \alpha^{|l|+|m|} \left( I - S_x^{-l} S_y^{-m} \right) \frac{c \left( \tilde{\mu} - S_x^l S_y^m \tilde{\mu} \right)}{\sqrt{c^2 + \left( \tilde{\mu} - S_x^l S_y^m \tilde{\mu} \right)^2}}. \qquad (A2) $$
Assuming the same considerations and notation presented in Section 3.2, Equation (A2) can be rewritten as
$$ \nabla R_{BEP}(\tilde{\mu}) = H_a(\tilde{\mu}) \odot \tilde{\mu} + \varphi \sum_{l=-q}^{q} \sum_{\substack{m=0 \\ l+m \geq 0}}^{q} \alpha^{|l|+|m|} \left( I - S_x^{-l} S_y^{-m} \right) \left[ M \odot H_c(M) \right], \qquad (A3) $$
and Equation (23) can be written in compact form as
$$ \hat{\mu}^{k} = \tilde{\mu}^{k} - \gamma\, \nabla R_{BEP}\!\left( \tilde{\mu}^{k} \right), \qquad (A4) $$
and in its expanded form as
$$ \hat{\mu}^{k} = \tilde{\mu}^{k} - \gamma_k \left[ H_a\!\left( \tilde{\mu}^{k} \right) \odot \tilde{\mu}^{k} + \varphi \sum_{l=-q}^{q} \sum_{\substack{m=0 \\ l+m \geq 0}}^{q} \alpha^{|l|+|m|} \left( I - S_x^{-l} S_y^{-m} \right) \left[ H_c(M) \odot M \right] \right]. \qquad (A5) $$

References

1. Brody, A.S.; Frush, D.P.; Huda, W.; Brent, R.L. Radiation Risk to Children From Computed Tomography. Am. Acad. Pediatrics 2007, 120, 677–682.
2. Shepp, L.A.; Logan, B.F. The Fourier reconstruction of a head section. IEEE Trans. Nucl. Sci. 1973, NS-21, 21–43.
3. Horn, B.K.P. Fan-beam reconstruction methods. Proc. IEEE 1979, 67, 1616–1623.
4. Hsieh, J. Computed Tomography—Principles, Design, Artifacts and Recent Advances, 2nd ed.; Wiley Inter-science: Bellingham, WA, USA, 2009; Chapter 2; pp. 29–30.
5. Hsieh, J. Computed Tomography—Principles, Design, Artifacts and Recent Advances, 2nd ed.; Wiley Inter-science: Bellingham, WA, USA, 2009; Chapter 3; pp. 102–112.
6. Lange, K.; Bahn, M.; Little, R. A Theoretical Study of Some Maximum Likelihood Algorithms for Emission and Transmission Tomography. IEEE Trans. Med. Imaging 1987, 6, 106–114.
7. Snyder, D.L.; Helstrom, C.W.; Lanterman, A.D.; Faisal, M.; White, R.L. Compensation for readout noise in CCD images. J. Opt. Soc. Am. 1995, 12, 272–283.
8. Whiting, B.R. Signal statistics in x-ray computed tomography. Proc. SPIE 2002, 4682, 53–60.
9. Yu, H.; Wang, G. SART-Type Half-Threshold Filtering Approach for CT Reconstruction. IEEE Access 2014, 2, 602–613.
10. Carson, R.E.; Lange, K. E-M Reconstruction Algorithms for Emission and Transmission Tomography. J. Comput. Assist. Tomogr. 1984, 8, 306–316.
11. Man, B.D.; Nuyts, J.; Dupont, P.; Marchal, G.; Suetens, P. Reduction of metal streak artifacts in X-ray computed tomography using a transmission maximum a posteriori algorithm. IEEE Trans. Nucl. Sci. 2000, 47, 977–981.
12. Man, B.D.; Nuyts, J.; Dupont, P.; Marchal, G.; Suetens, P. An iterative maximum-likelihood polychromatic algorithm for CT. IEEE Trans. Med. Imaging 2001, 20, 999–1008.
13. Elbakri, I.A.; Fessler, J.A. Statistical Image Reconstruction for Polyenergetic X-Ray Computed Tomography. IEEE Trans. Med. Imaging 2002, 21, 89–99.
14. Yu, H.; Wang, G. Compressed sensing based interior tomography. Phys. Med. Biol. 2009, 54, 2791–2805.
15. Yu, H.; Wang, G. A soft-threshold filtering approach for reconstruction from a limited number of projections. Phys. Med. Biol. 2010, 55, 3905–3916.
16. Xu, Q.; Mou, X.; Wang, G.; Sieren, J.; Hoffman, E.A.; Yu, H. Statistical Interior Tomography. IEEE Trans. Med. Imaging 2011, 30, 1116–1128.
17. Deák, Z.; Grimm, J.M.; Treitl, M.; Geyer, L.L.; Linsenmaier, U.; Körner, M.; Reiser, M.F.; Wirth, S. Filtered Back Projection, Adaptive Statistical Iterative Reconstruction, and a Model-based Iterative Reconstruction in Abdominal CT: An Experimental Clinical Study. Radiology 2013, 266, 197–206.
18. Choudhary, C.; Malik, R.; Choi, A.; Weigold, W.G.; Weissman, G. Effect of Adaptive Statistical Iterative Reconstruction on Coronary Artery Calcium Scoring. J. Am. Coll. Cardiol. 2014, 63, A1157.
19. Aday, A.W.; MacRae, C.A. Genomic Medicine in Cardiovascular Fellowship Training. Circulation 2017, 136, 345–346.
20. Zhu, Z.; Pang, S. Few-photon computed x-ray imaging. Appl. Phys. Lett. 2018, 113, 231109.
21. Kim, J.H.; Choo, K.S.; Moon, T.Y.; Lee, J.W.; Jeon, U.B.; Kim, T.U.; Hwang, J.Y.; Yun, M.J.; Jeong, D.W.; Lim, S.J. Comparison of the image qualities of filtered back-projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction for CT venography at 80 kVp. Eur. Radiol. 2016, 26, 2055–2063.
22. Zhang, G.; Tzoumas, S.; Cheng, K.; Liu, F.; Liu, J.; Luo, J.; Bai, J.; Xing, L. Generalized Adaptive Gaussian Markov Random Field for X-Ray Luminescence Computed Tomography. IEEE Trans. Biomed. Eng. 2018, 65, 2130–2133.
23. Clark, D.P.; Lee, C.L.; Kirsch, D.G.; Badea, C.T. Spectrotemporal CT data acquisition and reconstruction at low dose. Med. Phys. 2015, 42, 6317–6336.
24. Charbonnier, P.; Blanc-Feraud, L.; Aubert, G.; Barlaud, M. Deterministic edge-preserving regularization in computed imaging. IEEE Trans. Image Process. 1997, 6, 298–311.
25. Sreehari, S.; Venkatakrishnan, S.V.; Wohlberg, B.; Buzzard, G.T.; Drummy, L.F.; Simmons, J.P.; Bouman, C.A. Plug-and-Play Priors for Bright Field Electron Tomography and Sparse Interpolation. IEEE Trans. Comput. Imag. 2016, 2, 408–423.
26. Perelli, A.; Lexa, M.A.; Can, A.; Davies, M.E. Denoising Message Passing for X-ray Computed Tomography Reconstruction. arXiv 2016, arXiv:1609.04661.
27. Sun, Y.; Chen, H.; Tao, J.; Lei, L. Computed tomography image reconstruction from few views via Log-norm total variation minimization. Digital Signal Process. 2019, 88, 172–181.
28. Gu, P.; Jiang, C.; Ji, M.; Zhang, Q.; Ge, Y.; Liang, D.; Liu, X.; Yang, Y.; Zheng, H.; Hu, Z. Low-Dose Computed Tomography Image Super-Resolution Reconstruction via Random Forests. Sensors 2019, 19, 207.
29. Yu, Z.; Noo, F.; Dennerlein, F.; Wunderlich, A.; Lauritsch, G.; Hornegger, J. Simulation tools for two-dimensional experiments in x-ray computed tomography using the FORBILD head phantom. Phys. Med. Biol. 2012, 57, N237.
30. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
31. Zeng, X.; Yang, L. Mixed impulse and Gaussian noise removal using detail-preserving regularization. Opt. Eng. 2010, 49, 097002.
32. Zeng, X.; Yang, L. A robust multiframe super-resolution algorithm based on half-quadratic estimation with modified BTV regularization. Digital Signal Process. 2013, 1, 98–109.
33. Ge, W.; Ming, J. Ordered-subset simultaneous algebraic reconstruction techniques (OS-SART). J. X-Ray Sci. Technol. 2004, 12, 169–177.
34. Pan, J.; Zhou, T.; Han, Y.; Jiang, M. Variable Weighted Ordered Subset Image Reconstruction Algorithm. Int. J. Biomed. Imaging 2006, 2006, 10398.
35. Yu, H.; Wang, G. SART-Type Image Reconstruction from a Limited Number of Projections with the Sparsity Constraint. Int. J. Biomed. Imaging 2010, 2010, 3.
36. Gu, R.; Dogandžić, A. Polychromatic X-ray CT Image Reconstruction and Mass-Attenuation Spectrum Estimation. IEEE Trans. Comput. Imag. 2016, 2, 150–165.
37. Achterhold, K.; Bech, M.; Schleede, S.; Potdevin, G.; Ruth, R.; Loewen, R.; Pfeiffer, F. Monochromatic computed tomography with a compact laser-driven X-ray source. Sci. Rep. 2013, 13, 150–165.
38. Daubechies, I.; Defrise, M.; Mol, C.D. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 2004, 57, 1413–1457.
39. Farsiu, S.; Robinson, M.D.; Elad, M.; Milanfar, P. Fast and robust multiframe super resolution. IEEE Trans. Image Process. 2004, 13, 1327–1344.
40. Rabie, T. Robust estimation approach for blind denoising. IEEE Trans. Image Process. 2005, 14, 1755–1765.
41. Hansen, P.C.; Saxild-Hansen, M. AIR Tools—A MATLAB Package of Algebraic Iterative Reconstruction Methods. J. Comput. Appl. Math. 2012, 236, 2167–2178.
42. Ritschl, L.; Bergner, F.; Fleischmann, C.; Kachelrieß, M. Improved total variation-based CT image reconstruction applied to clinical data. Phys. Med. Biol. 2011, 56, 1545–1561.
Figure 1. From acquisition to reconstruction and measurement of error.
Figure 2. (a) Error norm function, Equation (15), and (b) influence function, Equation (16).
Figure 3. Peak signal-to-noise ratio (PSNR) difference along k iterations, $1 < k \leq 5000$, for the pure simultaneous algebraic reconstruction technique (SART), SART+discrete gradient transform (DGT), and SART+bilateral edge-preserving (BEP)+DGT reconstructions for (a) Shepp–Logan head phantom, (b) FORBILD head phantom, and (c) FORBILD abdomen phantom.
Figure 4. Structural similarity (SSIM) difference along k iterations, 1 < k ≤ 5000, for pure SART, SART+DGT, and SART+BEP+DGT reconstructions for (a) Shepp–Logan head phantom, (b) FORBILD head phantom, and (c) FORBILD abdomen phantom.
Figure 5. Evolution of (a) SSIM and (b) PSNR values for the reconstruction of the Shepp–Logan phantom with 15 projections for pure SART, SART+DGT, and SART+BEP+DGT.
Figure 6. (a) The original Shepp–Logan head phantom and particular reconstructions for 15 projections (b) from SART+BEP+DGT with k = 553 steps, PSNR: 72.7103, SSIM: 0.9240, (c) from SART+DGT with k = 1015 steps, PSNR: 72.7103, SSIM: 0.8920, and (d) from pure SART with k = 553 steps, PSNR: 63.4625, SSIM: 0.1662. All with SNR = 60 dB.
Figure 7. Evolution of (a) SSIM and (b) PSNR values for a particular reconstruction of the FORBILD head phantom with 30 projections using the pure SART, SART+DGT, and SART+BEP+DGT methods.
Figure 8. (a) The original FORBILD head phantom and particular reconstructions for 30 projections (b) from SART+BEP+DGT with k = 520 steps, PSNR: 70.7700, SSIM: 0.9015, (c) from SART+DGT with k = 570 steps, PSNR: 70.7706, SSIM: 0.8774, and (d) from pure SART with k = 520 steps, PSNR: 62.8476, SSIM: 0.1332. All with SNR = 46 dB.
Figure 9. Evolution of (a) SSIM and (b) PSNR values for a particular reconstruction of the FORBILD abdomen phantom with 15 projections for pure SART, SART+DGT, and SART+BEP+DGT methods.
Figure 10. (a) The original FORBILD abdomen phantom and particular reconstructions for 15 projections (b) from SART+BEP+DGT with k = 685 steps, PSNR: 77.7180, SSIM: 0.9419, (c) from SART+DGT with k = 920 steps, PSNR: 77.7197, SSIM: 0.9397, and (d) from pure SART with k = 685 steps, PSNR: 66.0972, SSIM: 0.0946.
Figure 11. (a) Box plot graph for the Shepp–Logan head reconstruction with 15 projections and k = 600, (b) box plot graph for the FORBILD head phantom with 30 projections and k = 400, and (c) box plot graph for the FORBILD abdomen phantom with 15 projections and k = 320.
Table 1. Comparison of computer tomography (CT) reconstruction methods A (SART+BEP+DGT), B (SART+DGT), and C (pure SART) for the Shepp–Logan (SL), FORBILD abdomen (FA), and FORBILD head (FH) phantom images using PSNR and SSIM metrics for 15 and 30 projections and signal-to-noise ratio (SNR) of 32, 46, and 60 dB. Each result is the mean of 101 executions of a particular testing case. The values in bold represent the highest value comparing methods A, B, and C for each metric (PSNR or SSIM), number of projections (15 or 30), and number of iterations (350, 700, or 1000).

Image  Noise (dB)  Method |  PSNR, 30 projections       |  PSNR, 15 projections       |  SSIM, 30 projections    |  SSIM, 15 projections
                          |  k=350    k=700    k=1000   |  k=350    k=700    k=1000   |  k=350   k=700   k=1000  |  k=350   k=700   k=1000
FA     32          A      |  75.9859  75.8311  75.5142  |  74.4987  74.5020  73.8998  |  0.8522  0.8568  0.8548  |  0.8424  0.8506  0.8480
                   B      |  76.1405  76.2478  76.2701  |  74.4034  74.5392  74.5870  |  0.8389  0.8452  0.8470  |  0.8277  0.8368  0.8403
                   C      |  67.9001  67.9001  67.9001  |  66.0238  66.0238  66.0238  |  0.1281  0.1281  0.1281  |  0.0905  0.0905  0.0905
       46          A      |  79.7397  79.7947  79.1235  |  76.7531  77.7001  76.6340  |  0.9589  0.9625  0.9572  |  0.9278  0.9418  0.9318
                   B      |  79.5257  80.7590  81.2113  |  76.2466  77.2650  77.8593  |  0.9506  0.9630  0.9662  |  0.9147  0.9334  0.9412
                   C      |  68.1199  68.1199  68.1199  |  66.0941  66.0941  66.0941  |  0.1418  0.1418  0.1418  |  0.0942  0.0942  0.0942
       60          A      |  80.0246  80.0840  79.3894  |  76.9335  77.9510  76.7656  |  0.9634  0.9665  0.9615  |  0.9328  0.9468  0.9356
                   B      |  79.7982  81.2432  81.8295  |  76.3869  77.5277  78.2139  |  0.9564  0.9692  0.9727  |  0.9194  0.9390  0.9473
                   C      |  68.1290  68.1290  68.1290  |  66.0973  66.0973  66.0973  |  0.1424  0.1424  0.1424  |  0.0947  0.0947  0.0947
FH     32          A      |  68.8491  68.7420  68.5457  |  67.4093  67.0085  66.2663  |  0.7530  0.7624  0.7607  |  0.7292  0.7313  0.7203
                   B      |  68.8981  69.0161  69.0430  |  67.3406  67.4181  67.4202  |  0.7236  0.7340  0.7368  |  0.6998  0.7112  0.7143
                   C      |  62.6576  62.6576  62.6576  |  61.3241  61.3241  61.3241  |  0.1096  0.1096  0.1096  |  0.1013  0.1013  0.1013
       46          A      |  70.5681  70.7268  70.5159  |  68.3913  68.4761  68.1521  |  0.8852  0.9028  0.8980  |  0.8288  0.8394  0.8361
                   B      |  70.3511  70.9038  71.0856  |  68.0724  68.4681  68.6376  |  0.8605  0.8815  0.8876  |  0.7985  0.8216  0.8292
                   C      |  62.8480  62.8480  62.8480  |  61.3975  61.3975  61.3975  |  0.1327  0.1327  0.1327  |  0.1130  0.1130  0.1130
       60          A      |  70.6733  70.8539  70.6321  |  68.4433  68.5563  68.2407  |  0.8919  0.9100  0.9049  |  0.8337  0.8450  0.8420
                   B      |  70.4422  71.0466  71.2518  |  68.1110  68.5327  68.7161  |  0.8679  0.8899  0.8964  |  0.8033  0.8274  0.8356
                   C      |  62.8551  62.8551  62.8551  |  61.4002  61.4002  61.4002  |  0.1342  0.1342  0.1342  |  0.1136  0.1136  0.1136
SL     32          A      |  74.0945  74.0583  73.8128  |  71.9549  72.1447  71.5547  |  0.9325  0.9358  0.9350  |  0.8938  0.9066  0.8989
                   B      |  73.5586  74.3921  74.6588  |  70.7452  71.8120  72.2027  |  0.9036  0.9151  0.9174  |  0.8433  0.8740  0.8831
                   C      |  64.5268  64.5268  64.5268  |  63.4489  63.4489  63.4489  |  0.1800  0.1800  0.1800  |  0.1604  0.1604  0.1604
       46          A      |  74.6383  74.6475  74.3422  |  72.2586  72.5832  72.0034  |  0.9542  0.9579  0.9572  |  0.9093  0.9251  0.9186
                   B      |  73.9486  75.1170  75.5809  |  70.8990  72.1570  72.6781  |  0.9298  0.9444  0.9477  |  0.8575  0.8917  0.9026
                   C      |  64.5592  64.5592  64.5592  |  63.4619  63.4619  63.4619  |  0.1940  0.1940  0.1940  |  0.1661  0.1661  0.1661
       60          A      |  74.6588  74.6647  74.3552  |  72.2741  72.6033  72.0213  |  0.9550  0.9587  0.9581  |  0.9099  0.9257  0.9193
                   B      |  73.9636  75.1504  75.6263  |  70.9047  72.1699  72.6975  |  0.9309  0.9458  0.9491  |  0.8580  0.8925  0.9036
                   C      |  64.5606  64.5606  64.5606  |  63.4625  63.4625  63.4625  |  0.1946  0.1946  0.1946  |  0.1663  0.1663  0.1663
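As a rough illustration of how PSNR and SSIM entries like those in Table 1 can be produced, the sketch below reconstructs a limited-projection Shepp–Logan slice with plain SART and scores it against the ground truth using scikit-image. This is a minimal sketch, not the authors' SART+BEP+DGT pipeline: the phantom size, angle set, iteration count, and data range are illustrative assumptions.

```python
# Sketch: limited-projection SART reconstruction scored with PSNR/SSIM.
# Assumes scikit-image >= 0.16; all parameters below are illustrative.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon_sart, resize
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Ground-truth phantom, downsampled for speed.
ground_truth = resize(shepp_logan_phantom(), (256, 256))

# 15 equally spaced projection angles: the limited-projection setting.
theta = np.linspace(0.0, 180.0, 15, endpoint=False)
sinogram = radon(ground_truth, theta=theta)

# A few SART sweeps, refining the previous estimate each time.
reconstruction = iradon_sart(sinogram, theta=theta)
for _ in range(4):
    reconstruction = iradon_sart(sinogram, theta=theta, image=reconstruction)

# Score the reconstruction; the data_range choice affects the PSNR scale.
data_range = ground_truth.max() - ground_truth.min()
psnr = peak_signal_noise_ratio(ground_truth, reconstruction, data_range=data_range)
ssim = structural_similarity(ground_truth, reconstruction, data_range=data_range)
print(f"PSNR: {psnr:.4f} dB, SSIM: {ssim:.4f}")
```

Note that the absolute PSNR values depend on the intensity range convention, so numbers from this sketch are not directly comparable to the table above.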
Table 2. Mean, median, maximum, minimum, standard deviation, and number of outliers of the box plot elements in Figure 11a.

Image  Method  SNR (dB)  Mean      Median    Max       Min       Std. Dev.
SL     B       32        71.60589  71.60400  71.67289  71.55672  0.03613
               46        71.90013  71.89770  71.91010  71.88857  0.00725
               60        71.91110  71.91103  71.91474  71.90779  0.00199
       A       32        72.25797  72.26654  72.30637  72.19037  0.04360
               46        72.68626  72.68664  72.70573  72.66494  0.01435
               60        72.70719  72.70709  72.71359  72.70282  0.00356
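The entries of Table 2 are the usual box-plot descriptors computed over the 101 executions of each test case. A minimal sketch of those descriptors follows, assuming the per-run PSNR values are already collected in an array; the synthetic psnr_runs data and the 1.5×IQR outlier rule (matplotlib's default whisker convention) are assumptions, not details taken from the paper.

```python
# Sketch: box-plot descriptors (mean, median, max, min, std, outlier count)
# over repeated reconstruction runs. psnr_runs is a synthetic stand-in for
# the 101 per-run PSNR values of one test case.
import numpy as np

rng = np.random.default_rng(0)
psnr_runs = 72.69 + 0.014 * rng.standard_normal(101)

mean, median = psnr_runs.mean(), np.median(psnr_runs)
q1, q3 = np.percentile(psnr_runs, [25, 75])
iqr = q3 - q1
# Points beyond 1.5*IQR from the quartiles count as box-plot outliers.
outliers = psnr_runs[(psnr_runs < q1 - 1.5 * iqr) | (psnr_runs > q3 + 1.5 * iqr)]

print(f"mean={mean:.5f} median={median:.5f} "
      f"max={psnr_runs.max():.5f} min={psnr_runs.min():.5f} "
      f"std={psnr_runs.std(ddof=1):.5f} outliers={outliers.size}")
```

The sample standard deviation (ddof=1) is used here, which is the common choice for a small set of repeated runs.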
