Article

Similarity-Driven Fine-Tuning Methods for Regularization Parameter Optimization in PET Image Reconstruction

Department of Electrical and Electronic Engineering, Pai Chai University, Daejeon 35345, Republic of Korea
*
Author to whom correspondence should be addressed.
Sensors 2023, 23(13), 5783; https://doi.org/10.3390/s23135783
Submission received: 8 May 2023 / Revised: 18 June 2023 / Accepted: 19 June 2023 / Published: 21 June 2023
(This article belongs to the Collection Biomedical Imaging and Sensing)

Abstract

We present an adaptive method for fine-tuning hyperparameters in edge-preserving regularization for PET image reconstruction. For edge-preserving regularization, in addition to the smoothing parameter that balances data fidelity and regularization, one or more control parameters are typically incorporated to adjust the sensitivity of edge preservation by modifying the shape of the penalty function. Although there have been efforts to develop automated methods for tuning the hyperparameters in regularized PET reconstruction, the majority of these methods primarily focus on the smoothing parameter. However, it is challenging to obtain high-quality images without appropriately selecting the control parameters that adjust the edge preservation sensitivity. In this work, we propose a method to precisely tune the hyperparameters, which are initially set with a fixed value for the entire image, either manually or using an automated approach. Our core strategy involves adaptively adjusting the control parameter at each pixel, taking into account the degree of patch similarities calculated from the previous iteration within the pixel’s neighborhood that is being updated. This approach allows our new method to integrate with a wide range of existing parameter-tuning techniques for edge-preserving regularization. Experimental results demonstrate that our proposed method effectively enhances the overall reconstruction accuracy across multiple image quality metrics, including peak signal-to-noise ratio, structural similarity, visual information fidelity, mean absolute error, root-mean-square error, and mean percentage error.

1. Introduction

Positron emission tomography (PET) is a non-invasive imaging technique that enables the visualization of biochemical processes in the patient's body by using a radioactive substance known as a radiotracer [1,2]. The patient undergoing the PET scan is injected with the radiotracer, which travels through the body and is absorbed by the targeted organ or tissue. Once injected, the radiotracer begins to emit positrons. When a positron collides with an electron in the surrounding tissue, the pair annihilates and produces two gamma rays traveling in nearly opposite directions. These gamma rays are detected by a ring of detectors surrounding the patient. The aim of image reconstruction is to accurately map the distribution of the radiotracer in the patient's body, which can provide valuable information about various physiological and biochemical processes. However, PET images are often characterized by low spatial resolution and high noise, which can limit their diagnostic accuracy. To address these limitations, various image reconstruction methods have been developed over the last decades that aim to improve the spatial resolution and signal-to-noise ratio of PET images and reduce the amount of radiation exposure required for accurate imaging [3].
Among the various reconstruction methods, the penalized-likelihood (PL) approach, which is also known as the model-based iterative reconstruction method, has been shown to offer remarkable advantages over the traditional filtered back-projection method by providing improved spatial resolution, reduced imaging noise, and increased detection sensitivity [3,4,5,6]. The PL approach is a statistical method that uses the measured data and a mathematical model of the imaging process to estimate the distribution of radioactivity in the patient’s body, while also applying a penalty function (or a regularizer) that promotes spatial smoothness and noise reduction.
Recently, inspired by the rapid development of artificial intelligence in a variety of research and industrial fields, efforts have been made to improve the quality of medical images using deep learning techniques [7,8,9,10,11,12,13]. For tomographic image reconstruction, deep learning methods have also been applied to the PL reconstruction methods [14,15]. However, the PL methods involve hard-to-find hyperparameters (also known as regularization parameters) that significantly affect the quality of reconstructed images. The selection of appropriate regularization parameters is a challenging task, as it involves balancing the trade-off between noise reduction and preservation of important features in the underlying image. Moreover, the optimal regularization parameters may vary depending on the specific imaging task and the characteristics of the data being reconstructed.
Over the years, several methods for automatic parameter adjustment have been developed [16,17,18,19,20]. The representative early methods include the L-curve [16] and generalized cross-validation (GCV) [17,18] methods. The L-curve method relies on the shape of the L-curve indicating the trade-off between data fidelity and regularization so that the corner point of the L-curve is chosen as the optimal regularization parameter. The GCV method relies on the mean squared error and effective degrees of freedom to determine the optimal parameters. While the L-curve method typically requires multiple reconstructions with different regularization parameters to obtain the L-curve, the GCV method is computationally efficient since it avoids the need for repeated reconstructions with different regularization parameters. It has also been reported that assessing image quality can guide hyperparameter adjustment [19,20].
Recently, deep learning-based hyperparameter-tuning methods have been proposed in the literature [21,22,23]. The method presented in [21] exhibits an intelligent approach by utilizing deep reinforcement learning to determine the direction and magnitude of parameter adjustment in a human-like manner. However, this method learns a hyperparameter tuning strategy based on feedback from intermediate image reconstruction results, which necessitates running multiple iterations of an image reconstruction algorithm before parameter adjustment. This process has the potential to significantly decrease the overall workflow efficiency. In contrast, the methods proposed in [22,23] employ convolutional neural network-based hyperparameter learning frameworks. These frameworks employ a training pair consisting of the sinogram as the input and the desirable hyperparameter as the output. Although these methods generate hyperparameters in a feedforward manner once the network is trained, their applicability is limited to simple quadratic smoothing regularization, rather than edge-preserving non-quadratic regularization.
Here, we note that, for edge-preserving regularization, in addition to the smoothing parameter that balances data fidelity and regularization, one or more control parameters are typically incorporated to adjust the sensitivity of edge preservation by modifying the shape of the penalty function [24]. Without appropriately selecting these control parameters, it is challenging to obtain high-quality images. Unfortunately, the parameter-tuning methods discussed in [16,17,18,19,20,21,22,23] primarily focus on the smoothing parameter. In this work, to enhance the efficacy of existing parameter-tuning methods, we propose a method to precisely tune the hyperparameters, which are initially set with a fixed value for the entire image, either manually or using an automated approach. The fundamental strategy involves adjusting the initial value of the control parameter at each pixel, either increasing or decreasing it, based on the degree of the patch similarities calculated from the previous iteration within the pixel’s neighborhood that is being updated. This approach allows our new method to integrate with a wide range of existing parameter-tuning techniques from prior research.
Our work is inspired by the well-known non-local means approach [25], which has been widely used for image denoising [25,26,27,28,29] and restoration/reconstruction [30,31,32,33,34] by exploiting the measure of similarity between the image patches. The nonlocal means approach is based on the idea that in an image, pixels that are similar to each other tend to have similar values. Therefore, instead of averaging the values of neighboring pixels to obtain an estimate of the value of a particular pixel, the nonlocal means approach takes into account the similarity between the patches centered on each pixel in the image. The weighted average of the patch values is then used to obtain an estimate of the value of the pixel of interest. While our work is inspired by the non-local means approach, our method for fine-tuning the control parameter differs from the non-local means denoising approach. Instead of using the similarity measure between patches to calculate the weighted average for edge-preserving smoothing, our approach applies the similarity measure to calculate the optimal value for the control parameter for each pixel. The experimental results demonstrate that our proposed method enables adaptive selection of the optimal control parameter for each pixel, leading to enhanced image quality in the reconstruction process.
The remainder of this paper is organized as follows: Section 2 first describes the PL approach to PET image reconstruction and illustrates the two representative edge-preserving convex non-quadratic (CNQ) penalty functions, which involve the hyperparameters controlling the sensitivity of edge preservation. The details about our main idea of using the similarity-driven method for hyperparameter tuning are then described. The optimization method for the PL reconstruction algorithm with the CNQ penalty functions is also derived. Section 3 shows our experimental results using both digital and physical phantoms, where our proposed method effectively enhances the overall reconstruction accuracy across multiple image quality metrics. Finally, Section 4 draws a conclusion.

2. Methods

2.1. Penalized Likelihood Approach

The PL approach to PET image reconstruction seeks the estimate $\hat{f}$ of the underlying source image $f$ from the emission measurement $g$ by using the following minimization:

$$\hat{f} = \arg\min_{f} \left[ -L(g|f) + \lambda R(f) \right],$$

where $L(g|f)$ is the log-likelihood term represented by the log of a Poisson distribution, $R(f)$ is the regularization term that penalizes the image roughness, and $\lambda$ is the smoothing parameter that controls the balance between the two terms. The regularization term is usually defined in such a way that it penalizes the roughness of the estimate via the intensity difference between neighboring pixels, which is given by

$$R(f) = \sum_{j} \sum_{j' \in N_j} \varphi\left( f_j - f_{j'} \right),$$

where $\varphi(\cdot)$ is the penalty function, $f_j$ is the j-th pixel in an image, $f_{j'}$ is a neighbor of $f_j$, and $N_j$ is the neighborhood system of the pixel $f_j$.
In this work, among many different convex non-quadratic (CNQ) penalty functions, we consider the following two most popular CNQ functions proposed by Lange [35] (denoted as LN hereafter) and Huber [36] (denoted as HB hereafter):
$$\varphi_{LN}(\xi) = \delta^2 \left[ \frac{|\xi|}{\delta} - \log\left( 1 + \frac{|\xi|}{\delta} \right) \right],$$

$$\varphi_{HB}(\xi) = \begin{cases} \xi^2, & |\xi| \le \sigma, \\ 2\sigma|\xi| - \sigma^2, & |\xi| > \sigma, \end{cases}$$

where $\delta$ and $\sigma$ are the positive hyperparameters that control the sensitivity of edge preservation by modifying the shapes of the penalty functions $\varphi_{LN}(\cdot)$ and $\varphi_{HB}(\cdot)$, respectively. The typical shapes of the LN and HB penalty functions are shown in Figure 1, where they are compared with the quadratic (QD) penalty function $\varphi_{QD}(\xi) = \xi^2$.
As observed in Figure 1a, the CNQ penalty functions exhibit lower penalization than the QD penalty for significant intensity differences between adjacent pixels. This characteristic enables the CNQ penalties to effectively preserve edges. The first-order derivative of the penalty function in Figure 1b indicates the strength of smoothing. In contrast to the QD penalty function, whose derivative grows linearly in magnitude with increasing intensity difference, the LN penalty exhibits a slower increase for large intensity differences, and the HB penalty remains constant once the intensity difference exceeds a large value. Therefore, both the LN and HB penalty functions satisfy the necessary condition for a CNQ penalty function to preserve edges, which is summarized as $\lim_{\xi \to \infty} \varphi'(\xi) = K$, where $\varphi'(\xi)$ is the first-order derivative of the penalty function and K is a positive constant [37]. For a given intensity difference between adjacent pixels, as the hyperparameter δ (or σ) decreases, K also decreases, which weakens the penalization of edges and thus preserves more of them, and vice versa. To effectively preserve edges while suppressing noise, selecting an appropriate value for the hyperparameter is crucial. In this work, we assume that all hyperparameters (λ, δ, and σ) are preselected for the entire image before the reconstruction process begins. We aim to refine the value of δ (or σ) for each pixel during the reconstruction process by using the patch similarities within the neighborhood of the pixel to be updated. This approach enables us to fine-tune the hyperparameter value on a per-pixel basis, optimizing edge preservation in the reconstructed image.
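To make the shapes in Figure 1 concrete, the two penalty functions and the bounded-derivative condition can be sketched in plain Python. This is an illustrative sketch only; the function names are ours, not from the paper.

```python
import math

def phi_ln(xi, delta):
    """Lange (LN) penalty: delta^2 * (|xi|/delta - log(1 + |xi|/delta))."""
    t = abs(xi) / delta
    return delta**2 * (t - math.log(1.0 + t))

def phi_hb(xi, sigma):
    """Huber (HB) penalty: quadratic near zero, linear beyond |xi| = sigma."""
    return xi**2 if abs(xi) <= sigma else 2.0 * sigma * abs(xi) - sigma**2

def dphi_ln(xi, delta):
    """First derivative of the LN penalty: delta*|xi|/(delta + |xi|),
    signed; it is bounded by K = delta as |xi| -> infinity, so smaller
    delta gives weaker penalization of large intensity differences."""
    return math.copysign(delta * abs(xi) / (delta + abs(xi)), xi)
```

For example, `dphi_ln(1e9, 2.0)` is very close to 2.0, illustrating that K equals δ for the LN penalty.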

2.2. Similarity-Driven Hyperparameter Tuning

In this work, inspired by the well-known non-local means (NLM) approach [25], which has shown great potential in removing noise while preserving image details such as edges and textures by exploiting the redundancy and self-similarity of the image structure, we propose a new method of fine-tuning the hyperparameter δ (or σ) by using the self-similarity of the underlying image structure. The NLM approach is based on the idea that pixels that are similar to each other tend to have similar values. Therefore, instead of simply averaging the values of neighboring pixels to estimate the value of a particular pixel, the NLM approach computes a weighted average in which the weights reflect the similarity between the patches centered on each pixel in the image.
In the NLM approach, the similarity between two patches is defined by [25]

$$W_{jj'} = \exp\left( -\frac{\rho_{jj'}}{h^2} \right),$$

where $\rho_{jj'}$ is the patch difference and $h$ is a positive parameter. The patch difference $\rho_{jj'}$ is defined as

$$\rho_{jj'} \equiv \left\| \rho(N_j) - \rho(N_{j'}) \right\|^2 = \sum_{p=1}^{P} \left( f_j(p) - f_{j'}(p) \right)^2,$$

where $\rho(N_j)$ and $\rho(N_{j'})$ are the patches centered at the pixels j and j′, respectively, $P$ is the total number of pixels in a patch, and $f_j(p)$ and $f_{j'}(p)$ are the p-th pixels in the patches $\rho(N_j)$ and $\rho(N_{j'})$, respectively. For a 3 × 3 patch window, $\rho_{jj'}$ defined in (6) can be calculated by visiting each of the 9 pixels ($p = 1, 2, \ldots, 9$). Figure 2 shows how the similarity matrix $W_j$ is calculated when the neighborhood system $N_j$ consists of four neighbors (north, south, east, and west) and one ($W_{jj} = 1$) at the center.
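A minimal sketch of the patch-difference and similarity computations in (5) and (6), assuming 3 × 3 patches with zero padding at the image borders. The helper names and the padding choice are our assumptions, not from the paper.

```python
import math
import numpy as np

def patch_diff(img, j, jp, half=1):
    """Squared L2 difference rho_jj' between the (2*half+1)^2 patches
    centered at pixels j and jp (each a (row, col) tuple), as in Eq. (6).
    The image is zero-padded so border patches are well defined."""
    pad = np.pad(img, half, mode='constant')
    (r, c), (rp, cp) = j, jp
    p1 = pad[r:r + 2 * half + 1, c:c + 2 * half + 1]
    p2 = pad[rp:rp + 2 * half + 1, cp:cp + 2 * half + 1]
    return float(np.sum((p1 - p2) ** 2))

def similarity(img, j, jp, h=1.0):
    """NLM-style weight W_jj' = exp(-rho_jj' / h^2), as in Eq. (5);
    identical patches give W = 1, dissimilar patches give W -> 0."""
    return math.exp(-patch_diff(img, j, jp) / h**2)
```

On a constant image the weight between any two interior pixels is exactly 1, and it decays toward 0 as the patches diverge.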
Note that the NLM approach in image denoising uses the similarity between two patches defined by (5) for weighted smoothing, which can be expressed as

$$R(f) = \sum_j \sum_{j' \in N_j} \omega_{jj'} \, \varphi\left( f_j - f_{j'} \right), \quad \text{where} \quad \omega_{jj'} = W_{jj'} \Big/ \sum_{j' \in N_j} W_{jj'}.$$
In contrast, our method uses the similarity in (5) to adjust the initially tuned value of δ (or σ). The basic strategy for refining the initially tuned parameter $\delta = \delta_0$ is to increase or decrease the value of δ at each pixel depending on the degree of the patch similarity. To incorporate the patch similarity $W_{jj'}$ into the adjustment of δ, we use the following formula:

$$\delta_{jj'} = \delta_0 \left( 1 + W_{jj'} + \alpha_j \bar{w} \right),$$

where $\delta_{jj'}$ is the fine-tuned value of δ using the patch similarity between the two patches centered at the pixels j and j′, $\bar{w}$ is the mean of $W_{jj'}$ evaluated for all pixels in the estimated image obtained from the previous iteration of the PL reconstruction process, and $\alpha_j \in [-1, 1]$ is also determined from the previous iteration by measuring the degree of roughness within the neighborhood of the pixel j. (The value of $\alpha_j$ approaches −1 when the pixel roughness is very low, whereas it approaches 1 when the roughness is very high.) In (8), a negative value of $\alpha_j$ decreases $\delta_{jj'}$, whereas a positive value of $\alpha_j$ increases $\delta_{jj'}$. In an extreme case, where $\alpha_j = -1$ and the patch similarities are relatively low due to irregular edges, the value of $\delta_{jj'}$ can be smaller than $\delta_0$. On the other hand, when $\alpha_j = -1$ but the similarities are high due to regular edges, $\delta_{jj'}$ can remain close to $\delta_0$. When $\alpha_j = 1$ in a flat region, $\delta_{jj'}$ is larger than $\delta_0$. In summary, $\delta_{jj'}$ adaptively varies around $\delta_0$ depending on the patch similarities in the neighborhood of the pixel j.
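The adjustment rule in (8) reduces to a one-line function. This sketch assumes the additive reading of (8) given above; the example argument values are hypothetical, chosen only to illustrate the direction of the adjustment.

```python
def tuned_delta(delta0, w_jjp, alpha_j, w_bar):
    """Per-pair fine-tuned control parameter, Eq. (8):
    delta_jj' = delta0 * (1 + W_jj' + alpha_j * w_bar).
    w_jjp  : patch similarity W_jj' in (0, 1]
    alpha_j: roughness score in [-1, 1] from the previous iteration
    w_bar  : mean similarity over the previous-iteration image"""
    return delta0 * (1.0 + w_jjp + alpha_j * w_bar)
```

With δ0 = 1, a low-similarity pair with α = −1 yields δ below δ0, while a flat region with α = 1 yields δ above δ0, matching the qualitative behavior described above.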
To avoid a sudden sign change in the adjustment term of (8), we define $\alpha_j$ using the following modified Butterworth polynomial:

$$\alpha(z_j) = \frac{2}{1 + \left( t / z_j \right)^{2r}} - 1, \quad \alpha \in [-1, 1],$$

where t is the turning point of the r-th order polynomial, and $z_j$ is the j-th pixel in the image z representing the pixel-wise roughness of the estimate obtained from the previous iteration of the reconstruction process. Various pixel-wise roughness measures may be used for z. In this work, we compare three different roughness measures: gradient (GR), standard deviation (SD), and mean of patch similarity (PS). The GR of an image is a vector field that represents the magnitude and direction of the change in intensity at each pixel; to measure the pixel-wise roughness, only the magnitude of the GR is used. The pixel-wise SD image is calculated as follows:

$$s_j = \sqrt{ \frac{1}{L-1} \sum_{k \in N_j} \left( f_k - \frac{1}{L} \sum_{j' \in N_j} f_{j'} \right)^2 }, \quad \forall j,$$

where $s_j$ is the j-th pixel in the SD image calculated within the 3 × 3 neighborhood system $N_j$ of the pixel $f_j$, and L = 9 in this case. The mean of patch similarity for the j-th pixel is defined by $\overline{W}_j = \sum_{j' \in N_j} W_{jj'}$. Figure 3 shows the shapes of $\alpha(z_j)$ in (9) for several different values of r. (In our experiments, the value of t was set to the mean of z, and r = 0.1λ.)
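The roughness-to-α mapping in (9) and the SD roughness measure in (10) can be sketched as follows. The zero padding at the borders, the function names, and the assumption z > 0 are our choices for illustration.

```python
import numpy as np

def alpha(z, t, r):
    """Modified Butterworth mapping of roughness z to [-1, 1], Eq. (9):
    alpha(z) = 2 / (1 + (t/z)^(2r)) - 1, with turning point t (z > 0).
    Low roughness (z << t) gives alpha near -1; high roughness gives +1."""
    return 2.0 / (1.0 + (t / z) ** (2.0 * r)) - 1.0

def sd_roughness(img):
    """Pixel-wise sample standard deviation over each 3x3 neighborhood
    (L = 9), Eq. (10), computed here with zero padding at the borders."""
    pad = np.pad(img, 1, mode='constant')
    out = np.empty_like(img, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            out[i, j] = np.std(pad[i:i + 3, j:j + 3], ddof=1)
    return out
```

Note that α(t) = 0 exactly at the turning point, so t (set to the mean roughness in the paper's experiments) separates the pixels whose δ is decreased from those whose δ is increased.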

2.3. Derivation of PL Reconstruction Algorithm

To derive a PL reconstruction algorithm that employs the similarity-driven fine-tuning method for hyperparameter optimization, we first use an accelerated version of the maximum-likelihood (ML) algorithm, namely, the complete-data ordered subsets expectation–maximization (COSEM) algorithm [38], and extend it to a PL algorithm that includes the regularization term. The well-known ordered subsets expectation–maximization (OSEM) algorithm [39] accelerates the original expectation–maximization (EM) algorithm [40] by subdividing the projection data into several subsets (or blocks) and progressively processing each subset with projection and back-projection operations in each iteration; however, it is not provably convergent because it lacks an underlying objective function. In contrast, the COSEM algorithm is both fast and convergent with respect to a well-defined objective function.
The COSEM algorithm applies the idea of ordered subsets used in the OSEM algorithm to the “complete data” C rather than to the projection data g. The complete data C, whose elements are denoted as $C_{ij}$, represents the number of coincidence events that originated at the j-th pixel in the underlying source and were recorded by the i-th detector pair, so that the following relationship holds: $\sum_j C_{ij} = g_i$.
The COSEM-ML algorithm can be extended to the COSEM-PL algorithm by including the regularization term. For our COSEM-PL algorithm, if C is fixed to $C = C^{(n)}$ at the n-th iteration in an alternating updating procedure, the overall energy function with the regularizer in (2) can be expressed as:

$$E\left(f; C^{(n)}\right) = \sum_{q=1}^{Q} \sum_{i \in S_q} \sum_{j} \left( -C_{ij}^{(n,q)} \log f_j \right) + \sum_i \sum_j H_{ij} f_j + \lambda R(f),$$
where $S_q$, q = 1, …, Q, is the q-th subset of the detector pairs, $C_{ij}^{(n,q)}$ denotes the update of $C_{ij}$ at outer iteration n and subset iteration q, and $H_{ij}$ is the (i, j)-th element of the system matrix. When the regularization term in (11) takes a CNQ form as described by (3) or (4), it is not possible to obtain a closed-form solution. Therefore, we employ the method of optimization transfer using paraboloidal surrogates [41,42,43], which can efficiently find the global minimum of a convex function, by using the following surrogate function for the penalty term [42]:

$$\hat{\varphi}(\xi) = \varphi\left(\xi^{n-1}\right) + \varphi'\left(\xi^{n-1}\right)\left(\xi - \xi^{n-1}\right) + \frac{1}{2}\,\psi\left(\xi^{n-1}\right)\left(\xi - \xi^{n-1}\right)^2 \ge \varphi(\xi),$$

where $\varphi'(\xi)$ is the first-order derivative of the penalty function, $\xi^{n-1}$ denotes the value of $\xi$ at the (n − 1)-th iteration, and $\psi(\xi) = \varphi'(\xi)/\xi$. By dropping the terms that are independent of the variable $\xi$, (12) can be written as

$$\hat{\varphi}(\xi) = \frac{1}{2}\,\psi\left(\xi^{n-1}\right)\,\xi^2.$$
To avoid the coupling problem of $f_j$ and $f_{j'}$ when $\xi$ is substituted with $f_j - f_{j'}$ in the quadratic term in (13), the regularization term is modified by using the separable paraboloidal surrogate (SPS) function [44,45] as follows:

$$\hat{R}\left(f; f^{n-1}\right) = \sum_j \sum_{j' \in N_j} \psi\left(f_j^{n-1} - f_{j'}^{n-1}\right) \left( 2 f_j - f_j^{n-1} - f_{j'}^{n-1} \right)^2.$$
By replacing the regularization term in (11) with $\hat{R}(f; f^{n-1})$, the overall energy function for each $f_j$ is expressed as

$$E\left(f_j; f^{(n,q-1)}, C^{(n,q)}\right) = -\sum_i C_{ij}^{(n,q)} \log f_j + \sum_i H_{ij} f_j + \lambda \sum_{j' \in N_j} \psi\left(f_j^{(n,q-1)} - f_{j'}^{(n,q-1)}\right) \left( 2 f_j - f_j^{(n,q-1)} - f_{j'}^{(n,q-1)} \right)^2,$$

where $f_j^{(n,q)}$ denotes the update of $f_j$ at outer iteration n and subset iteration q. Note that after the completion of the subset iterations at the n-th iteration, $f^{(n,Q)}$ is assigned to $f^{(n+1)}$. By setting the derivative of (15) to zero and solving for the positive root of the quadratic equation, the final update equation is given by

$$f_j^{(n,q)} = \frac{-b + \sqrt{b^2 - 4ac}}{2a},$$

where a, b, and c are given by

$$a = 8 \lambda \sum_{j' \in N_j} \psi\left(f_j^{(n,q-1)} - f_{j'}^{(n,q-1)}\right),$$
$$b = \sum_i H_{ij} - 4 \lambda \sum_{j' \in N_j} \psi\left(f_j^{(n,q-1)} - f_{j'}^{(n,q-1)}\right) \left( f_j^{(n,q-1)} + f_{j'}^{(n,q-1)} \right),$$
$$c = -\sum_i C_{ij}^{(n,q)}.$$
In the COSEM-PL algorithm, the C-update is the same as the C-update in the COSEM-ML algorithm. Therefore, the COSEM-PL algorithm is performed by alternately updating C i j ( n , q ) and f j ( n , q ) at outer iteration n and subset iteration q.
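A sketch of the per-pixel update in (16) and (17): the three quadratic coefficients are assembled from the neighborhood ψ terms and the positive root is kept. The data-structure layout for the neighborhood terms is our own simplification of the full algorithm.

```python
import math

def pl_pixel_update(c_sum, h_sum, lam, psi_pairs):
    """One COSEM-PL pixel update, Eqs. (16)-(17): solve a*f^2 + b*f + c = 0
    for the positive root.
    c_sum    : sum_i C_ij^(n,q)  (complete-data back-projection, >= 0)
    h_sum    : sum_i H_ij        (sensitivity term)
    lam      : smoothing parameter lambda
    psi_pairs: list of (psi_value, f_j_prev, f_jp_prev) over the
               neighborhood N_j from the previous sub-iteration."""
    a = 8.0 * lam * sum(p for p, _, _ in psi_pairs)
    b = h_sum - 4.0 * lam * sum(p * (fj + fjp) for p, fj, fjp in psi_pairs)
    c = -c_sum
    if a == 0.0:                     # no regularization: ML-style update
        return c_sum / h_sum
    return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
```

Since c ≤ 0, the discriminant satisfies b² − 4ac ≥ b², so the chosen root is always non-negative, which keeps the activity estimate physically valid.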
Figure 4 shows the schematic diagram of the COSEM-PL algorithm, where our parameter fine-tuning method is applied. Note that the control parameter δ (or σ) is updated using one of the three roughness measures (GR, SD, and PS) calculated from the image reconstructed in the previous iteration, and the initial values of the hyperparameters (λ, δ or σ) are preset either manually or using an automated method before the COSEM-PL iteration begins.

3. Results

3.1. Numerical Studies Using Digital Phantom

To test our idea, we first performed numerical studies using a 128 × 128 digital Hoffman brain phantom slice shown in Figure 5a. The activity ratio of the phantom is 4:1:0 in gray matter, white matter, and cerebrospinal fluid (CSF), respectively. For projection data, we used 128 projection angles over 180° with 128 detector pairs. To generate projection data with noise, we first scaled the phantom so that the total counts of its projection data could be approximately 500,000, and then added independent Poisson noise to the noiseless projection data obtained from the scaled phantom. Figure 5b provides a qualitative representation of the typical noise level observed in the 40th iteration of the EM-ML (or the COSEM-ML with a single subset) reconstruction from a noisy sinogram with approximately 500,000 photon counts.
For PL reconstruction, we compared two different methods: the standard PL method, which uses fixed hyperparameter values for all pixels in the entire image, and the similarity-driven PL (SDPL) method, which employs our proposed method of parameter fine-tuning on a per-pixel basis. To ensure convergence, we used 4 subsets and 80 iterations, which effectively corresponds to 320 iterations for a single subset. To assess the effectiveness of the SDPL algorithm across diverse hyperparameter configurations, we employed two distinct (high and low) levels of initial parameter values for both the smoothing parameter λ and the control parameter δ (or σ). Note that our approach can seamlessly integrate with a wide range of existing parameter-tuning methods, thereby eliminating the need for a specific criterion in selecting the initial parameter values.
Figure 6 shows the anecdotal PL and SDPL reconstructions using the LN penalty function. The figure comprises four groups of results, each corresponding to a different parameter setting. Specifically, Figure 6a–d shows the results obtained with high λ and high δ, Figure 6e–h with high λ and low δ, Figure 6i–l with low λ and high δ, and Figure 6m–p with low λ and low δ. Within each row, the reconstruction methods are displayed from left to right as PL-LN and SDPL-LN (GR, SD, and PS), respectively. A qualitative comparison of the results in Figure 6 clearly reveals that the SDPL method better preserves fine details than the standard PL method.
To elaborate further, when both λ and δ are excessively large (Figure 6a–d), the PL result in Figure 6a appears over-smoothed, whereas the SDPL results in Figure 6b–d exhibit enhanced detail. By reducing the value of δ while keeping λ fixed, the SDPL result in Figure 6e becomes sharper than its PL counterpart in Figure 6a. Similarly, the SDPL results in Figure 6f–h, like those in Figure 6b–d, demonstrate superior preservation of fine details compared to the result in Figure 6e. Based on the observations from Figure 6a–h, we tentatively conclude that the SDPL method effectively mitigates the over-smoothing issue of the PL method for relatively high λ values. As expected, when the smoothing parameter λ is decreased, the results become sharper and exhibit more details. However, even in this case, the SDPL method further enhances reconstruction accuracy by better preserving fine details, as evident in Figure 6i–p. In an extreme case, where the values of both λ and δ are very small, the results become noisy, a phenomenon that is not specific to the SDPL method but holds true for any regularization method. In conclusion, the SDPL method surpasses the standard PL method in effectively preserving fine details when the hyperparameters are chosen to be sufficiently large, ensuring effective noise suppression.
To evaluate and compare, in an ensemble sense, the quantitative performance of the reconstruction algorithms with the parameter settings used for Figure 6, we generated 50 independent noise realizations of projection data for the phantom shown in Figure 5a.
Table 1 presents a quantitative performance comparison between the PL-LN and SDPL-LN in terms of six different image quality assessments (IQAs): peak signal-to-noise ratio (PSNR); structural similarity (SSIM); visual information fidelity (VIF); mean absolute error (MAE); root-mean-square error (RMSE); and mean percentage error (MPE). All IQA metrics used in this work were evaluated from 50 independent Poisson noise trials. For example, the MPE is defined as
$$MPE = \frac{1}{K} \sum_{k=1}^{K} \sqrt{ \frac{\sum_j \left( \hat{f}_j^k - f_j \right)^2}{\sum_j f_j^2} } \times 100\%,$$

where $\hat{f}_j^k$ is the j-th pixel value of the reconstructed image for the k-th noise trial, $f_j$ is the j-th pixel value of the noiseless phantom, and K = 50 is the total number of noise trials. The PSNR [46] measures the ratio between the maximum possible signal power and the noise power. The SSIM [46,47] measures the structural similarity between the reconstructed image and the phantom. The VIF [48] evaluates image quality based on natural scene statistics and the image information extracted by the human visual system. The MAE [49] calculates the mean absolute error between the reconstructed image and the phantom. In Table 1, the best results, obtained from the SDPL-LN method using the SD roughness measure, are highlighted in bold.
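As a brief sketch, the MPE over K noise trials can be computed as follows, assuming the norm-ratio reading of (18); the function name is ours.

```python
import numpy as np

def mpe(recons, phantom):
    """Mean percentage error over K noise trials, Eq. (18): the average
    of ||f_hat^k - f|| / ||f|| over trials, expressed as a percentage.
    recons : list of K reconstructed arrays, one per noise realization
    phantom: the noiseless ground-truth array"""
    ratios = [np.linalg.norm(r - phantom) / np.linalg.norm(phantom)
              for r in recons]
    return 100.0 * float(np.mean(ratios))
```

A perfect reconstruction gives 0%, and a reconstruction whose error norm equals the phantom norm gives 100%.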
Figure 7 visualizes the quantitative results for the six IQAs presented in Table 1 through bar graphs, with each IQA depicted individually. The abscissa indexes the group number (1 to 4) for parameter settings (two distinct levels of initial parameter values for both λ and δ) used in Table 1. It is evident that the SDPL-LN methods clearly outperform the PL-LN method in all IQAs.
Figure 8 presents the anecdotal reconstructions using the HB penalty function, following the same layout as Figure 6 for the LN penalty function. Similar to the findings in Figure 6, the SDPL reconstructions consistently exhibit superior preservation of details compared to the standard PL reconstructions across all hyperparameter settings.
Table 2 presents a performance comparison between the PL-HB and SDPL-HB methods based on six different IQAs. Again, the SDPL methods demonstrate the best outcomes. Although the best results are distributed across three different roughness measures, the differences among them are practically negligible. Figure 9 presents bar graphs visualizing the quantitative results in Table 2. The results clearly demonstrate that the SDPL-HB methods outperform the PL-HB method across all IQAs.
To evaluate the regional performance of our method, we first selected regions of interest (ROIs) as shown in Figure 10 and performed regional studies using the PL-LN and SDPL-LN reconstructions obtained with the same initial values of λ and δ. Figure 11 shows the five zoomed-in rectangular regions R1–R5 in Figure 10a, where the images in Figure 11a are zoomed-in regions of the phantom, Figure 11b of the PL-LN reconstructions, Figure 11c of the SDPL-LN-GR reconstructions (with the GR roughness measure), Figure 11d of SDPL-LN-SD, and Figure 11e of SDPL-LN-PS. As already seen in Figure 6, the SDPL-based methods clearly outperform the standard PL method, which is also verified in terms of the regional MPEs represented by the bar graphs shown in Figure 12a.
Figure 10b shows the three circular ROIs and one circular background region used for calculating the contrast recovery coefficient (CRC). The CRC is a metric that evaluates how well the algorithm restores the contrast of an ROI with respect to its background. The regional CRC is defined as
$$CRC_R = CR_R / CR_R^0,$$

where $CR_R = \left( \hat{A}_R - \hat{A}_{Bg} \right) / \hat{A}_{Bg}$, $\hat{A}_R = (1/T) \sum_{j \in R} \hat{f}_j$ denotes the mean activity in the ROI R containing T pixels, $\hat{A}_{Bg}$ is the mean activity in the background region, and $CR_R^0$ is the true contrast in the phantom. Note that the CRC indicates the performance of the algorithm, with values closer to one indicating better performance.
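The regional CRC in (19) reduces to a few lines, given boolean masks for the ROI and the background; the mask-based interface is our assumption for illustration.

```python
import numpy as np

def crc(recon, roi_mask, bg_mask, true_contrast):
    """Contrast recovery coefficient, Eq. (19): the reconstructed ROI
    contrast (A_R - A_Bg) / A_Bg divided by the true phantom contrast.
    roi_mask, bg_mask: boolean arrays selecting ROI and background."""
    a_roi = float(recon[roi_mask].mean())
    a_bg = float(recon[bg_mask].mean())
    return ((a_roi - a_bg) / a_bg) / true_contrast
```

For a reconstruction that exactly reproduces the phantom, the recovered contrast equals the true contrast and the CRC is 1.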
Figure 12b presents the regional mean CRC (MCRC) calculated over K = 50 independent noise trials, which is defined as

$$MCRC_R = \frac{1}{K} \sum_{k=1}^{K} CRC_R^k,$$
where C R C R k stands for the regional CRC calculated from the k-th noise realization. It is evident that the SDPL-based methods with GR and SD roughness measures remarkably outperform the standard PL method in terms of the MCRC in all three ROIs.

3.2. Qualitative Validation Using Physically Acquired Data

To qualitatively assess the efficacy of our SDPL methods, we acquired physical data using a GE Advance PET scanner, which contains 18 detector rings yielding 35 slices at 4.25 mm center-to-center slice separation. We acquired 2D data from the physical Hoffman brain phantom using the scanner's high-sensitivity mode with septa in. The sinogram dimension was 145 detector pairs and 168 angles. The projection data were acquired for 10 min from an 18FDG scan, and the corresponding number of detected coincidence counts was approximately 1,000,000. Figure 13 shows the typical noise level observed in the EM-ML reconstruction with 40 iterations, obtained from the physical PET data. Since no ground truth is available for this experiment, the efficacies of the COSEM-PL and COSEM-SDPL methods can only be observed qualitatively by comparing their results with the EM-ML reconstruction. It is important to note that, compared to the reconstructions shown in Figure 6 and Figure 8, which were obtained from the digital phantom, the resolution of the EM-ML reconstruction for the real PET data is significantly lower, which may limit our qualitative observation of the efficacies of the SDPL methods in the real data experiments.
Figure 14 shows two groups of images, Figure 14a–f and Figure 14g–l, reconstructed by COSEM-PL with the LN and HB penalty functions, respectively. For the LN penalty, the smoothing parameter was set to 40, 20, and 10 for Figure 14a,b, Figure 14c,d, and Figure 14e,f, respectively. For the HB penalty, the smoothing parameter was set to 20, 10, and 5 for Figure 14g,h, Figure 14i,j, and Figure 14k,l, respectively. For each value of λ, a value of δ (or σ) was first chosen for the standard PL and then used as the initial value of δ (or σ) for the SDPL. A close inspection reveals that, for each value of λ, the SDPL method further improves the reconstruction of fine details, as already observed in Figure 6 and Figure 8 for the digital phantom. The visual improvement from the standard PL reconstruction to the SDPL reconstruction in Figure 14, however, is not as striking as that in Figure 6 and Figure 8. This is presumably because the physical factors that affect the quality of reconstruction were not modeled in our reconstruction algorithms. While attenuation correction was performed by a conventional method that uses the ratio of the measurements in the blank and transmission scans, the factors modeling scattered and random coincidences were not included in our reconstruction algorithms; in this case, the measurement is not strictly Poisson. (Our future work includes modeling the physical factors in the likelihood term and expanding accordingly the overall energy function described in (11).)
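The LN and HB penalties referred to above are edge-preserving convex nonquadratic functions whose control parameters (δ and σ) set the transition from quadratic (strong smoothing) to roughly linear (edge-preserving) behavior. The forms below are common textbook parameterizations, not necessarily the paper's exact constants.

```python
import numpy as np

def lange_penalty(t, delta):
    """Lange's convex nonquadratic (LN) penalty, one common form:
    phi(t) = delta * (|t|/delta - log(1 + |t|/delta)).
    Approximately quadratic for |t| << delta, linear for |t| >> delta."""
    u = np.abs(t) / delta
    return delta * (u - np.log1p(u))

def huber_penalty(t, sigma):
    """Huber (HB) penalty: quadratic inside [-sigma, sigma], linear outside:
    phi(t) = t^2/2 if |t| <= sigma, else sigma*|t| - sigma^2/2."""
    a = np.abs(t)
    return np.where(a <= sigma, 0.5 * t ** 2, sigma * a - 0.5 * sigma ** 2)
```

Because both penalties grow only linearly for large differences, large intensity jumps (edges) are penalized far less than under a quadratic penalty, which is what the control parameters δ and σ tune.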

4. Summary and Conclusions

We have presented similarity-driven hyperparameter fine-tuning methods for penalized-likelihood image reconstruction in PET. Our proposed method aims to optimize the regularization parameter by leveraging similarity information between neighboring patches, leading to improved image quality and quantitative accuracy.
The experimental results obtained from the digital phantom studies demonstrated the effectiveness of the proposed method in achieving superior image reconstruction performance compared to the conventional PL method with fixed hyperparameters. By incorporating similarity information into the hyperparameter optimization process, the proposed method effectively balanced the trade-off between noise reduction and preservation of fine details, resulting in visually enhanced images with reduced noise. Our numerical studies supported the visual comparison by showing better quantitative performance of the proposed method across multiple image quality metrics. Finally, the additional results from the physical experiments using real PET data also supported the good performance of the proposed method. However, to fully evaluate the clinical potential and generalizability of the proposed method, more comprehensive investigations are needed that incorporate the physical factors, such as attenuation, scatter, and random coincidences, into the reconstruction algorithms for real PET scans.
We acknowledge here that, besides the regularization approach employing CNQ penalties discussed in this study, there exist several other types of regularization methods used in PET reconstruction, which encompass total variation regularization [50,51,52], sparse coding-based regularization [53,54,55], and low-rank/sparse decomposition-based regularization [56]. These regularization methods also involve hyperparameters that significantly impact the quality of the reconstructed image. Since our proposed method specifically focuses on CNQ penalties, further investigation is required to determine the feasibility of integrating it with these diverse regularization methods.
We also note that, as the proposed method requires initially tuned hyperparameters for the entire image, it is not fully automated. For our future work, we will continue to seek a more advanced approach to optimizing the regularization parameter to fully automate the tuning process. One possible approach may be to use our method in conjunction with machine learning-based parameter tuning methods [21,22,23] so that the parameters initially tuned by machine learning for the entire image can be refined by our method for further improvements in reconstruction accuracy. However, we acknowledge the inherent challenge for machine learning methods to incorporate the additional control parameters responsible for adjusting edge preservation sensitivity by modifying the shape of the penalty function. Despite this challenge, we expect that our proposed method, in conjunction with more advanced machine learning-based approaches that can handle the control parameters, will substantially reduce the dependence on subjective trial-and-error hyperparameter tuning in regularized PET reconstruction.

Author Contributions

Supervision, S.-J.L.; conceptualization, S.-J.L.; methodology, S.-J.L. and W.Z.; article preparation, S.-J.L. and W.Z.; evaluation, W.Z.; writing—original draft preparation, S.-J.L. and W.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation (NRF) of Korea grant funded by the Korean government under NRF-2022R1F1A1060484.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request. Please contact the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Cherry, S.R.; Sorenson, J.A.; Phelps, M.E. Physics in Nuclear Medicine; Saunders: Philadelphia, PA, USA, 2012.
  2. Ollinger, J.M.; Fessler, J.A. Positron Emission Tomography. IEEE Signal Process. Mag. 1997, 14, 43–55.
  3. Lewitt, R.M.; Matej, S. Overview of methods for image reconstruction from projections in emission computed tomography. Proc. IEEE 2003, 91, 1588–1611.
  4. Tong, S.; Alessio, A.M.; Kinahan, P. Image reconstruction for PET/CT scanners: Past achievements and future challenges. Imaging Med. 2010, 2, 529–545.
  5. Reader, A.J.; Zaidi, H. Advances in PET image reconstruction. PET Clin. 2007, 2, 173–190.
  6. Qi, J.; Leahy, R.M. Iterative reconstruction techniques in emission computed tomography. Phys. Med. Biol. 2006, 51, R541–R578.
  7. Gong, K.; Berg, E.; Cherry, S.R.; Qi, J. Machine learning in PET: From photon detection to quantitative image reconstruction. Proc. IEEE 2020, 108, 51–68.
  8. Reader, A.J.; Corda, G.; Mehranian, A.; da Costa-Luis, C.; Ellis, S.; Schnabel, J.A. Deep learning for PET image reconstruction. IEEE Trans. Radiat. Plasma Med. Sci. 2021, 5, 1–25.
  9. Hashimoto, F.; Ote, K.; Onishi, Y. PET image reconstruction incorporating deep image prior and a forward projection model. IEEE Trans. Radiat. Plasma Med. Sci. 2022, 6, 841–846.
  10. Kim, K.; Wu, D.; Gong, K.; Dutta, J.; Kim, J.H.; Son, Y.D.; Kim, H.K.; El Fakhri, G.; Li, Q. Penalized PET reconstruction using deep learning prior and local linear fitting. IEEE Trans. Med. Imaging 2018, 37, 1478–1487.
  11. Hong, X.; Zan, Y.; Weng, F.; Tao, W.; Peng, Q.; Huang, Q. Enhancing the image quality via transferred deep residual learning of coarse PET sinograms. IEEE Trans. Med. Imaging 2018, 37, 2322–2332.
  12. Kang, E.; Min, J.; Ye, J.C. A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction. Med. Phys. 2017, 44, e360–e375.
  13. Pain, C.D.; Egan, G.F.; Chen, Z. Deep learning-based image reconstruction and post-processing methods in positron emission tomography for low-dose imaging and resolution enhancement. Eur. J. Nucl. Med. Mol. Imaging 2022, 49, 3098–3118.
  14. Mehranian, A.; Reader, A.J. Model-based deep learning PET image reconstruction using forward-backward splitting expectation-maximization. IEEE Trans. Radiat. Plasma Med. Sci. 2020, 5, 54–64.
  15. Adler, J.; Öktem, O. Learned primal-dual reconstruction. IEEE Trans. Med. Imaging 2018, 37, 1322–1332.
  16. Hansen, P.C. Analysis of discrete ill-posed problems by means of the L-curve. SIAM Rev. 1992, 34, 561–580.
  17. Golub, G.H.; Heath, M.; Wahba, G. Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics 1979, 21, 215–223.
  18. Ramani, S.; Liu, Z.; Nielsen, J.-F.; Fessler, J.A. Regularization parameter selection for nonlinear iterative image restoration and MRI reconstruction using GCV and SURE-based methods. IEEE Trans. Image Process. 2012, 21, 3659–3672.
  19. Zhu, X.; Milanfar, P. Automatic parameter selection for denoising algorithms using a no-reference measure of image content. IEEE Trans. Image Process. 2010, 19, 3116–3132.
  20. Liang, H.; Weller, D.S. Comparison-based image quality assessment for selecting image restoration parameters. IEEE Trans. Image Process. 2016, 25, 5118–5130.
  21. Shen, C.; Gonzalez, Y.; Chen, L.; Jiang, S.B.; Jia, X. Intelligent parameter tuning in optimization-based iterative CT reconstruction via deep reinforcement learning. IEEE Trans. Med. Imaging 2018, 37, 1430–1439.
  22. Xu, J.; Noo, F. Patient-specific hyperparameter learning for optimization-based CT image reconstruction. Phys. Med. Biol. 2021, 66, 19.
  23. Lee, J.; Lee, S.-J. Smoothing-parameter tuning for regularized PET image reconstruction using deep learning. In Proceedings of the SPIE 12463, Medical Imaging 2023: Physics of Medical Imaging, San Diego, CA, USA, 19–23 February 2023.
  24. Lee, S.-J. Performance comparison of convex-nonquadratic priors for Bayesian tomographic reconstruction. J. Electron. Imaging 2000, 9, 242–250.
  25. Buades, A.; Coll, B.; Morel, J.M. A review of image denoising algorithms, with a new one. Multiscale Model. Simul. 2005, 4, 490–530.
  26. Deledalle, C.-A.; Denis, L.; Tupin, F. Iterative weighted maximum likelihood denoising with probabilistic patch-based weights. IEEE Trans. Image Process. 2009, 18, 2661–2672.
  27. Sharifymoghaddam, M.; Beheshti, S.; Elahi, P.; Hashemi, M. Similarity validation based nonlocal means image denoising. IEEE Signal Process. Lett. 2015, 22, 2185–2188.
  28. Zhang, X.; Feng, X.; Wang, W. Two-direction nonlocal model for image denoising. IEEE Trans. Image Process. 2013, 22, 408–412.
  29. Leal, N.; Zurek, E.; Leal, E. Non-local SVD denoising of MRI based on sparse representations. Sensors 2020, 20, 1536.
  30. Nguyen, V.-G.; Lee, S.-J. Incorporating anatomical side information into PET reconstruction using nonlocal regularization. IEEE Trans. Image Process. 2013, 22, 3961–3973.
  31. Wang, G.; Qi, J. Penalized likelihood PET image reconstruction using patch-based edge-preserving regularization. IEEE Trans. Med. Imaging 2012, 31, 2194–2204.
  32. Tahaei, M.S.; Reader, A.J. Patch-based image reconstruction for PET using prior-image derived dictionaries. Phys. Med. Biol. 2016, 61, 6833–6855.
  33. Xie, N.; Chen, Y.; Liu, H. 3D tensor based nonlocal low rank approximation in dynamic PET reconstruction. Sensors 2019, 19, 5299.
  34. Ren, X.; Lee, S.-J. Penalized-likelihood PET image reconstruction using similarity-driven median regularization. Tomography 2022, 8, 158–174.
  35. Lange, K. Convergence of EM image reconstruction algorithms with Gibbs smoothing. IEEE Trans. Med. Imaging 1990, 9, 439–446.
  36. Huber, P.J. Robust Statistics; John Wiley & Sons: New York, NY, USA, 1981.
  37. Li, S.Z. Close-form solution and parameter selection for convex minimization-based edge-preserving smoothing. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 916–932.
  38. Hsiao, I.-T.; Rangarajan, A.; Gindi, G. An accelerated convergent ordered subset algorithm for emission tomography. Phys. Med. Biol. 2004, 49, 2145–2156.
  39. Hudson, H.M.; Larkin, R.S. Accelerated image reconstruction using ordered subsets of projection data. IEEE Trans. Med. Imaging 1994, 13, 601–609.
  40. Vardi, Y.; Shepp, L.A.; Kaufman, L. A statistical model for positron emission tomography. J. Am. Stat. Assoc. 1985, 80, 8–37.
  41. Erdoğan, H.; Fessler, J.A. Ordered subsets algorithms for transmission tomography. Phys. Med. Biol. 1999, 44, 2835–2851.
  42. Ahn, S.; Fessler, J.A. Globally convergent image reconstruction for emission tomography using relaxed ordered subsets algorithms. IEEE Trans. Med. Imaging 2003, 22, 613–626.
  43. Erdoğan, H.; Fessler, J.A. Monotonic algorithms for transmission tomography. IEEE Trans. Med. Imaging 1999, 18, 801–814.
  44. De Pierro, A.R. A modified expectation maximization algorithm for penalized likelihood estimation in emission tomography. IEEE Trans. Med. Imaging 1995, 14, 132–137.
  45. De Pierro, A.R. On the convergence of an EM-type algorithm for penalized likelihood estimation in emission tomography. IEEE Trans. Med. Imaging 1995, 14, 762–765.
  46. Seshadrinathan, K.; Pappas, T.N.; Safranek, R.J.; Chen, J.; Wang, Z.; Sheikh, H.R.; Bovik, A.C. Image quality assessment. In The Essential Guide to Image Processing, 2nd ed.; Bovik, A.C., Ed.; Academic Press: Burlington, MA, USA, 2009; pp. 535–595.
  47. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
  48. Sheikh, H.R.; Bovik, A.C. Image information and visual quality. IEEE Trans. Image Process. 2006, 15, 430–444.
  49. Willmott, C.J.; Matsuura, K. Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance. Clim. Res. 2005, 30, 79–82.
  50. Panin, V.Y.; Zeng, G.L.; Gullberg, G.T. Total variation regulated EM algorithm. IEEE Trans. Nucl. Sci. 1999, 46, 2202–2210.
  51. Burger, M.; Müller, J.; Papoutsellis, E.; Schönlieb, C.-B. Total variation regularization in measurement and image space for PET reconstruction. Inverse Probl. 2014, 30, 105003.
  52. Yu, H.; Chen, Z.; Zhang, H.; Wong, K.K.L.; Chen, Y.; Liu, H. Reconstruction for 3D PET based on total variation constrained direct Fourier method. PLoS ONE 2015, 10, e0138483.
  53. Tang, J.; Yang, B.; Wang, Y.; Ying, L. Sparsity-constrained PET image reconstruction with learned dictionaries. Phys. Med. Biol. 2016, 61, 6347–6368.
  54. Xie, N.; Gong, K.; Guo, N.; Qin, Z.; Wu, Z.; Liu, H.; Li, Q. Penalized-likelihood PET image reconstruction using 3D structural convolutional sparse coding. IEEE Trans. Biomed. Eng. 2022, 69, 4–14.
  55. Ren, X.; Lee, S.-J. Joint sparse coding-based super-resolution PET image reconstruction. In Proceedings of the IEEE Nuclear Science Symposium and Medical Imaging Conference, Boston, MA, USA, 31 October–7 November 2020.
  56. Chen, S.; Liu, H.; Hu, Z.; Zhang, H.; Shi, P.; Chen, Y. Simultaneous reconstruction and segmentation of dynamic PET via low-rank and sparse matrix decomposition. IEEE Trans. Biomed. Eng. 2015, 62, 1784–1795.
Figure 1. Three representative penalty functions: (a) typical shapes of the three (QD, LN, and HB) penalty functions; (b) first-order derivatives of the three penalty functions indicating the strength of smoothing.
Figure 2. Calculating the patch similarity matrix W_j using a 3 × 3 patch window. The similarity matrix W_j consists of the four elements in the neighbors and one (W_jj = 1) in the center.
Figure 3. Plot of the modified r-th order Butterworth polynomial α(z_j) with several different values of r.
Figure 4. Schematic diagram of the COSEM-PL algorithm with adaptive parameter tuning.
Figure 5. Digital phantom used in simulations and typical 500,000-count noise level for EM-ML reconstruction: (a) 128 × 128 digital Hoffman brain phantom; (b) EM-ML reconstruction (40 iterations) from noisy projection data with 500,000 photon counts.
Figure 6. Anecdotal reconstructions using PL-LN and SDPL-LN with two different (high and low) levels of λ and two different (high and low) levels of δ for each λ. (The results in the first column (a,e,i,m) are PL-LN reconstructions, whereas the rest are SDPL-LN reconstructions.) (a–h) λ = 40; (i–p) λ = 20. (a–d) δ = 0.1; (e–h) δ = 0.03; (i–l) δ = 0.15; (m–p) δ = 0.05.
Figure 7. Performance comparison of PL-LN and SDPL-LN in terms of six image quality assessments: (a) PSNR; (b) SSIM; (c) VIF; (d) MAE; (e) RMSE; (f) MPE.
Figure 8. Anecdotal reconstructions using PL-HB and SDPL-HB with two different (high and low) levels of λ and two different (high and low) levels of σ for each λ. (The results in the first column (a,e,i,m) are PL-HB reconstructions, whereas the rest are SDPL-HB reconstructions.) (a–h) λ = 20; (i–p) λ = 10. (a–d) σ = 0.06; (e–h) σ = 0.03; (i–l) σ = 0.1; (m–p) σ = 0.05.
Figure 9. Performance comparison of PL-HB and SDPL-HB in terms of six different image quality assessments: (a) PSNR; (b) SSIM; (c) VIF; (d) MAE; (e) RMSE; (f) MPE.
Figure 10. ROIs superimposed on the phantom image: (a) ROIs for regional percentage error; (b) ROIs for contrast recovery coefficient.
Figure 11. Zoomed-in images of PL-LN and SDPL-LN reconstructions using ROIs in Figure 10a: (a) phantom; (b) PL-LN; (c) SDPL-LN-GR; (d) SDPL-LN-SD; (e) SDPL-LN-PS.
Figure 12. Regional performance comparison between the PL-LN and SDPL-LN methods: (a) regional mean percentage error (MPE) for ROIs shown in Figure 10a; (b) mean contrast recovery coefficient (MCRC) for ROIs shown in Figure 10b.
Figure 13. EM-ML reconstruction (40 iterations) from physically acquired projection data.
Figure 14. COSEM-PL reconstructions from physically acquired data: (a) PL-LN with λ = 40; (b) SDPL-LN with λ = 40; (c) PL-LN with λ = 20; (d) SDPL-LN with λ = 20; (e) PL-LN with λ = 10; (f) SDPL-LN with λ = 10; (g) PL-HB with λ = 20; (h) SDPL-HB with λ = 20; (i) PL-HB with λ = 10; (j) SDPL-HB with λ = 10; (k) PL-HB with λ = 5; (l) SDPL-HB with λ = 5.
Table 1. Quantitative performance comparison of PL-LN and SDPL-LN.

  IQA Metric        PL-LN     SDPL-LN (GR)   SDPL-LN (SD)   SDPL-LN (PS)
λ = 40, δ = 0.1
  PSNR (dB)         13.943    15.569         15.621         14.999
  SSIM              0.821     0.872          0.874          0.854
  VIF               0.404     0.540          0.547          0.531
  MAE               0.090     0.068          0.068          0.071
  RMSE              0.201     0.167          0.166          0.178
  MPE               36.651    30.396         30.216         32.459
λ = 40, δ = 0.03
  PSNR (dB)         15.475    17.133         17.232         16.856
  SSIM              0.869     0.910          0.912          0.905
  VIF               0.539     0.675          0.688          0.672
  MAE               0.069     0.051          0.050          0.052
  RMSE              0.168     0.139          0.138          0.144
  MPE               30.728    25.387         25.101         26.210
λ = 20, δ = 0.15
  PSNR (dB)         14.659    16.186         16.222         15.756
  SSIM              0.843     0.887          0.887          0.876
  VIF               0.474     0.593          0.598          0.590
  MAE               0.079     0.061          0.061          0.063
  RMSE              0.185     0.155          0.155          0.163
  MPE               33.754    28.311         28.194         29.748
λ = 20, δ = 0.05
  PSNR (dB)         16.001    17.543         17.601         17.151
  SSIM              0.882     0.919          0.920          0.913
  VIF               0.587     0.720          0.730          0.706
  MAE               0.063     0.047          0.046          0.050
  RMSE              0.159     0.133          0.132          0.139
  MPE               28.922    24.217         24.056         25.335
Table 2. Quantitative performance comparison of PL-HB and SDPL-HB.

  IQA Metric        PL-HB     SDPL-HB (GR)   SDPL-HB (SD)   SDPL-HB (PS)
λ = 20, σ = 0.06
  PSNR (dB)         13.857    15.798         16.239         15.421
  SSIM              0.823     0.880          0.889          0.868
  VIF               0.401     0.574          0.611          0.567
  MAE               0.091     0.064          0.059          0.065
  RMSE              0.203     0.162          0.154          0.169
  MPE               37.018    29.605         28.142         30.920
λ = 20, σ = 0.03
  PSNR (dB)         15.838    17.277         17.353         17.134
  SSIM              0.880     0.914          0.916          0.912
  VIF               0.569     0.697          0.710          0.691
  MAE               0.064     0.049          0.049          0.050
  RMSE              0.162     0.137          0.136          0.139
  MPE               29.469    24.971         24.755         25.386
λ = 10, σ = 0.1
  PSNR (dB)         14.387    16.176         16.149         15.974
  SSIM              0.837     0.888          0.887          0.883
  VIF               0.450     0.606          0.607          0.615
  MAE               0.083     0.059          0.059          0.059
  RMSE              0.191     0.155          0.156          0.159
  MPE               34.826    28.346         28.434         29.011
λ = 10, σ = 0.05
  PSNR (dB)         15.564    17.086         17.086         16.938
  SSIM              0.872     0.909          0.909          0.908
  VIF               0.550     0.684          0.689          0.688
  MAE               0.067     0.050          0.050          0.051
  RMSE              0.167     0.140          0.140          0.142
  MPE               30.412    25.526         25.527         25.963
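Most of the scalar metrics reported in Tables 1 and 2 reduce to simple pixelwise formulas against the ground-truth phantom (SSIM and VIF require reference implementations and are omitted here). The sketch below uses common conventions; in particular, the MPE form is an assumption on our part (a percentage L1 error relative to the reference), since the paper's exact definition is given in its methods section.

```python
import numpy as np

def psnr(recon, ref):
    """Peak signal-to-noise ratio in dB against the ground-truth image."""
    mse = np.mean((recon - ref) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

def mae(recon, ref):
    """Mean absolute error."""
    return np.mean(np.abs(recon - ref))

def rmse(recon, ref):
    """Root-mean-square error."""
    return np.sqrt(np.mean((recon - ref) ** 2))

def mpe(recon, ref):
    # Assumed convention: percentage L1 error relative to the reference.
    return 100.0 * np.sum(np.abs(recon - ref)) / np.sum(np.abs(ref))
```

These definitions make the trends in the tables easy to read: PSNR, SSIM, and VIF are better when higher, while MAE, RMSE, and MPE are better when lower.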