Article

Low-Dose Computed Tomography Image Super-Resolution Reconstruction via Random Forests

by Peijian Gu, Changhui Jiang, Min Ji, Qiyang Zhang, Yongshuai Ge, Dong Liang, Xin Liu, Yongfeng Yang, Hairong Zheng and Zhanli Hu
1 Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
2 School of Information Engineering, Wuhan University of Technology, Wuhan 430070, China
3 Shanghai United Imaging Healthcare, Shanghai 201807, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(1), 207; https://doi.org/10.3390/s19010207
Submission received: 5 December 2018 / Revised: 5 January 2019 / Accepted: 6 January 2019 / Published: 8 January 2019
(This article belongs to the Special Issue Sensors Signal Processing and Visual Computing)

Abstract

Aiming at reducing computed tomography (CT) scan radiation while ensuring CT image quality, a new low-dose CT super-resolution reconstruction method that combines a random forest with coupled dictionary learning is proposed. The random forest classifier finds the optimal solution of the mapping relationship between low-dose CT (LDCT) images and high-dose CT (HDCT) images and CT image reconstruction is then completed by coupled dictionary learning. An iterative scheme is developed to improve robustness; the parameters that are important for the tree structure are discussed and the optimal settings are reported. The proposed method is further compared with a traditional interpolation method. The results show that the proposed algorithm obtains a higher peak signal-to-noise ratio (PSNR) and structural similarity index measurement (SSIM) and has a better ability to reduce noise and artifacts. The method can be applied to many different medical imaging fields in the future and multithreaded computation can reduce its time consumption.

1. Introduction

Computed tomography (CT) uses precisely collimated X-rays, gamma rays, ultrasonic waves, or other types of beams together with highly sensitive detectors to sequentially scan individual sections of the human body. CT has a fast scan time and produces clear images, so it is used in examinations for a variety of diseases. CT scanners are among the most commonly installed types of medical imaging diagnostic equipment and are widely used in various clinical fields. However, the radiation involved can damage the patient's body; for example, head scans may lead to headaches or insomnia [1]. Therefore, the radiation dose used in medical applications should be kept as low as possible [2]. Many methods currently exist for reducing radiation doses, such as reducing the tube voltage, the tube current, or the clinical scanning time. However, these approaches increase noise and granularity and introduce serious artifacts in the resulting CT images, which can result in misdiagnoses [3]. Many super-resolution methods for reducing these disadvantages of low-dose CT images have emerged in recent years [4,5,6].
Super-resolution (SR) reconstruction is a classical image recovery technique that is usually divided into three categories. The first category comprises traditional interpolation methods [7,8,9]. Simple interpolation methods such as bicubic interpolation can produce a smoother image that achieves a certain denoising effect and preserves edges in the zoomed image but they cannot remove artifacts. When dealing with visually complex real images (such as CT images), the effect of traditional interpolation is limited and it can even generate artifacts. The second category is model based [10,11,12,13]. Model-based techniques perform image reconstruction by projecting features of the image based on the simulated degradation process. When a priori knowledge of the image model is effectively applied, these techniques can guarantee the quality of the reconstructed image [10,13]. However, when no a priori knowledge is available, they tend to yield an ill-posed problem because of an insufficient number of low-resolution images. Conversely, using an excessive number of images in training leads to long runtimes and lengthy computation.
The third category of SR reconstruction is based on machine learning [14]. Machine learning algorithms learn a nonlinear mapping from a training database consisting of low-resolution (LR) and high-resolution (HR) image pairs to obtain connections between the LR images and HR images [4,15,16,17,18,19,20,21]. In recent years, the academic community has become increasingly interested in implementing SR based on sparse representation methods because this approach robustly preserves image features and suppresses noise and artifacts [15,18,21]. For example, Dong et al. [22] used adaptive sparse domain selection and adaptive regularization to cluster the training data and create a compact dictionary, obtaining good SR results. Yang et al. [15] proposed a novel coupled dictionary training method for SR based on patchwise sparse recovery. Jiang et al. [18] proposed a single CT image SR reconstruction scheme. However, these methods require sparse coding in both the training and inference phases; therefore, their processing speeds are slow. To address this problem, Timofte et al. [23,24] proposed anchored neighborhood regression SR algorithms and Schulter et al. [25] proposed a fast and accurate SR method based on a random forest classification mapping relationship.
Random forest (RF) is well suited to local linear multiple-regression problems [26,27,28]. RF has highly nonlinear learning characteristics, is usually very fast during training and evaluation and can easily adapt to inputs consisting of noisy low-resolution images; thus, RF is widely applied in the computer vision field. Inspired by coupled dictionary learning and RF, a method is proposed here that solves the SR problem for low-dose CT (LDCT) and reconstructs CT images whose quality approaches that of high-dose CT (HDCT) images. In addition, a series of iterations is added to the SR imaging process to improve the quality of the final reconstructed image. The proposed method is also compared with a traditional interpolation method and the important quality indicators are evaluated.
This paper is organized as follows. Section 2 provides background on the related sparse representation and dictionary learning techniques. Section 3 presents the proposed random forest-based solution for SR. Section 4 presents the experimental results. Finally, Section 5 discusses future work and concludes the paper.

2. Background

2.1. Sparse Representation

According to the principles of compressed sensing [29,30] and sparse representation [31], an image vector x can be represented as a sparse linear combination over a dictionary D, expressed mathematically as follows:

$$x = D\alpha \quad \text{for some } \alpha \in \mathbb{R}^{K} \text{ with } \|\alpha\|_{0} \ll K \quad (1)$$

where α is the sparse representation coefficient, the constraint $\|\alpha\|_{0} \ll K$ holds and K is the dimension of the image block x. The matrix D is a dictionary of dimension K × n. An overcomplete dictionary, that is, one in which the number of atoms n is larger than the dimension K of the image block, is often used for sparse representation; the sparse coefficient α can then be obtained by optimized estimation of a cost function. Generally, the cost function is expressed as follows:
$$F(\alpha) = \|x - D\alpha\|_{2}^{2} + \lambda\|\alpha\|_{1} \quad (2)$$
where λ is a constant parameter. The sparse representation is extended to the SR problem via the following function:
$$F(\alpha) = \|y - HD\alpha\|_{2}^{2} + \lambda\|\alpha\|_{1} \quad (3)$$
where the vector y is the LR image block and H is the sampling matrix. The matrix H models the degradation of the LR image y by geometric shift, blur and down-sampling. The cost function is minimized as follows:
$$I = \sum_{i} \min \left[ \|y_{i} - D\alpha_{i}\|_{2}^{2} + \lambda\|\alpha_{i}\|_{1} \right] \quad (4)$$
When solving the optimal vector problem in Equation (4), how the dictionary is established is highly important for mapping the LR and HR images.
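As a concrete illustration of the sparse-coding cost of Equations (2)–(4), the minimal NumPy sketch below solves one patch's LASSO problem with plain iterative soft-thresholding (ISTA). The dictionary size, step size and the value of λ (`lam`) are illustrative placeholders, not values used in the paper.

```python
import numpy as np

def sparse_code_ista(x, D, lam=0.1, n_iter=200):
    """Minimize F(alpha) = ||x - D@alpha||_2^2 + lam*||alpha||_1 (Equation (2)) by ISTA."""
    alpha = np.zeros(D.shape[1])
    L = 2.0 * np.linalg.norm(D, 2) ** 2                  # Lipschitz constant of the smooth term
    for _ in range(n_iter):
        grad = 2.0 * D.T @ (D @ alpha - x)               # gradient of ||x - D@alpha||^2
        z = alpha - grad / L                             # gradient step
        alpha = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold (l1 prox)
    return alpha

# toy usage with an overcomplete dictionary (n = 256 atoms > K = 64)
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                           # unit-norm atoms
x = rng.standard_normal(64)
print("nonzero coefficients:", np.count_nonzero(sparse_code_ista(x, D)))
```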

2.2. Coupled Dictionary Learning

The main approach to dictionary-based single-image super-resolution is coupled dictionary learning; the most effective method was proposed by Yang et al. [15,16]. N samples drawn from the LR and HR images are denoted $X_L \in \mathbb{R}^{D_L \times N}$ and $X_H \in \mathbb{R}^{D_H \times N}$, respectively. The symbols $X_L$ and $X_H$ represent the LR and HR data matrices, respectively and each column represents a sample pair $x_L$ and $x_H$. The coupled dictionary learning problem can be defined as follows:
$$\min_{D_L, D_H, E} \; \frac{1}{D_L}\left\|X_L - D_L E\right\|_{2}^{2} + \frac{1}{D_H}\left\|X_H - D_H E\right\|_{2}^{2} + \Gamma(E) \quad (5)$$
where $D_L \in \mathbb{R}^{D_L \times B}$ is the LR dictionary and $D_H \in \mathbb{R}^{D_H \times B}$ is the HR dictionary. The sparse code matrix connecting these two dictionaries is $E \in \mathbb{R}^{B \times N}$. The regularization term Γ(E) is usually a sparsity constraint on E using the $\ell_0$-norm or $\ell_1$-norm.
In the coupled dictionary learning of Equation (5), the mapping relationship between the LR and HR images is critical; it is defined as follows:
$$X_H = W(X_L) \cdot X_L \quad (6)$$
Equation (6) shows that dictionary training can be performed only when the mapping relation function W ( X L ) is known. Using a random forest, the method of learning this mapping is discussed below.
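To make the coupling in Equation (5) more tangible, the sketch below stacks normalized LR and HR samples so that both dictionaries share a single sparse code matrix E and then alternates between a crude sparse-coding step and a least-squares dictionary update. The atom count, the threshold value and the normalization scheme are illustrative assumptions; this is not the training procedure of Yang et al. or of this paper.

```python
import numpy as np

def learn_coupled_dictionaries(X_L, X_H, n_atoms=128, lam=0.05, n_outer=10, seed=0):
    """Rough alternating sketch of Equation (5). X_L: (D_L, N), X_H: (D_H, N); columns are samples."""
    d_L, N = X_L.shape
    d_H = X_H.shape[0]
    X = np.vstack([X_L / np.sqrt(d_L), X_H / np.sqrt(d_H)])   # joint data: one shared code matrix E
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((d_L + d_H, n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    for _ in range(n_outer):
        # sparse coding: least squares followed by soft-thresholding (a crude l1 proxy)
        E = np.linalg.lstsq(D, X, rcond=None)[0]
        E = np.sign(E) * np.maximum(np.abs(E) - lam, 0.0)
        # dictionary update: least-squares fit of X onto the fixed codes E, then renormalize atoms
        D = X @ np.linalg.pinv(E)
        D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    return D[:d_L] * np.sqrt(d_L), D[d_L:] * np.sqrt(d_H)     # D_L, D_H

# toy usage with random patch features (sizes are illustrative)
rng = np.random.default_rng(1)
D_L, D_H = learn_coupled_dictionaries(rng.standard_normal((36, 500)),
                                      rng.standard_normal((64, 500)))
print(D_L.shape, D_H.shape)   # (36, 128) (64, 128)
```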

3. Proposed Reconstruction Method

3.1. Mapping Relation Function Learning

This section discusses learning the mapping relation function W(X_L). First, consider an $\ell_2$-norm objective function, as follows:
$$\underset{W}{\operatorname{argmin}} \sum_{n=1}^{N} \left\| X_H^{n} - W(X_L^{n}) \cdot X_L^{n} \right\|_{2}^{2} \quad (7)$$
Using different basis functions ψ(x), Equation (7) is converted to

$$\underset{W_j}{\operatorname{argmin}} \sum_{n=1}^{N} \left\| X_H^{n} - \sum_{j=0}^{\gamma} W_j(X_L^{n}) \cdot \psi_j(X_L^{n}) \right\|_{2}^{2} \quad (8)$$
The goal is to find the regression matrix $W_j(X_L^{n})$ for each of the γ + 1 basis functions. One option is a linear basis function, such as $\psi_j(x) = x$; a polynomial function, such as $\psi_j(x) = x^{j}$, can also be chosen. Different settings have different effects. In either case, the target linear and nonlinear parameters can be learned through their data dependencies.
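The exact form of the basis functions is left open in the paper; the short sketch below shows one plausible reading, stacking γ + 1 element-wise basis functions (constant, linear, polynomial) for a single LR feature vector as in Equation (8). The function name and the quadratic example are illustrative assumptions.

```python
import numpy as np

def basis_expansion(x_L, gamma=2, kind="poly"):
    """Stack the gamma+1 basis functions psi_j of Equation (8) for one LR feature vector.
    kind="linear" uses psi_0(x)=1, psi_1(x)=x; kind="poly" uses psi_j(x)=x**j elementwise."""
    feats = [np.ones_like(x_L)]                  # psi_0: constant term
    degree = 1 if kind == "linear" else gamma
    for j in range(1, degree + 1):
        feats.append(x_L ** j)                   # psi_j(x) = x**j (elementwise)
    return np.concatenate(feats)

# example: a 4-dimensional LR feature expanded with a quadratic basis
x = np.array([0.5, -1.0, 2.0, 0.0])
print(basis_expansion(x, gamma=2))               # length 12: [1s, x, x**2]
```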
This paper uses random forests to create the data dependence. A random forest is an ensemble of binary trees and multivariate regression is performed over the dimensions of the dictionary $D_H$; that is, each tree independently partitions the data space and determines its leaf nodes and the partitions of multiple trees and multiple forests overlap, so that each leaf node learns a linear model:
$$m_{l}(x_L) = \sum_{j=0}^{\gamma} W_{j}^{l} \cdot \psi_{j}(x_L) \quad (9)$$
To find all the matrices $W_{j}^{l}$, a regularized least squares problem must be solved, whose closed-form solution is $W^{l\,T} = \left(\Psi(X_L)^{T}\Psi(X_L) + \eta I\right)^{-1}\Psi(X_L)^{T} \cdot X_H$. Here, all the data are stacked into the matrices $W^{l}$, $\Psi(X_L)$ and $X_H$ and the user specifies the regularization parameter η. Because all the binary trees are used for prediction during the inference process, the data dependency matrix $W(x_L)$ can be described as follows:
$$\hat{x}_H = m(x_L) = W(x_L) \cdot x_L = \frac{1}{T} \sum_{t=1}^{T} m_{l(t)}(x_L) \quad (10)$$
where $l(t)$ is the leaf node of tree t reached by the sample point $x_L$ and T is the number of trees.
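A minimal sketch of this step follows: the closed-form ridge solution for a leaf's regression matrix and the per-sample averaging of Equation (10). The data shapes and the way leaf models are passed in as a plain list are assumptions made for illustration; the routing of a sample to its leaf in each tree is not shown here.

```python
import numpy as np

def fit_leaf_model(Psi_XL, X_H, eta=0.01):
    """Closed-form regularized least squares of Section 3.1:
    W_l^T = (Psi(X_L)^T Psi(X_L) + eta*I)^(-1) Psi(X_L)^T X_H.
    Psi_XL: (N, d_feat) expanded LR features in this leaf; X_H: (N, d_HR) HR targets."""
    d = Psi_XL.shape[1]
    return np.linalg.solve(Psi_XL.T @ Psi_XL + eta * np.eye(d), Psi_XL.T @ X_H)

def forest_predict(psi_x, leaf_models):
    """Equation (10): average the predictions of the leaves reached in the T trees.
    leaf_models is the list of regression matrices W_l stored at those leaves."""
    return np.mean([psi_x @ W for W in leaf_models], axis=0)

# toy usage: two "leaves" fitted on random data, then averaged as in Equation (10)
rng = np.random.default_rng(0)
Phi, Y = rng.standard_normal((50, 8)), rng.standard_normal((50, 16))
W1 = fit_leaf_model(Phi[:25], Y[:25])
W2 = fit_leaf_model(Phi[25:], Y[25:])
print(forest_predict(Phi[0], [W1, W2]).shape)    # (16,)
```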

3.2. Tree Structure Learning

We obtain the leaf node model using Equation (9) and then train the trees to find the optimal solution of the mapping relation function. N samples $\{x_L^{n}, x_H^{n}\} \in X \times Y$ are taken, where X and Y represent the LR and HR image spaces, respectively. A single random tree is trained by finding a split function and recursively segmenting the training data into disjoint subsets. The split function is
$$\delta(x_L, \Theta) = \begin{cases} 0, & r_{\Theta}(x_L) < 0 \\ 1, & \text{otherwise} \end{cases} \quad (11)$$
For all internal tree nodes, splitting starts at the root node and continues down the tree in a greedy manner until the maximum depth $\xi_{max}$ is reached, at which point the leaf nodes are created.
To find a good parameter Θ for the split function δ(x_L, Θ), the general approach is to sample a random set of candidate values $\Theta_k$, score them with a quality metric and choose the best one. The quality of the split function δ(x_L, Θ) is defined as follows:
$$Q(\delta, \Theta, X_L, X_H) = \sum_{c \in \{Left, Right\}} \left|X^{c}\right| \cdot E\left(X_L^{c}, X_H^{c}\right) \quad (12)$$
where Left and Right denote the left and right child nodes, respectively and $|\cdot|$ is the cardinality operator. According to the split function in Equation (11), two new domains are defined:
$$\left[X_L^{Left}, X_H^{Left}\right] = \left\{[x_L, x_H] : \delta(x_L, \Theta) = 0\right\} \quad (13)$$

$$\left[X_L^{Right}, X_H^{Right}\right] = \left\{[x_L, x_H] : \delta(x_L, \Theta) = 1\right\} \quad (14)$$
The function $E(X_L, X_H)$ measures the purity of the data, so that similar data fall into the same leaf node, achieving the random forest classification goal.
A new regularization expression is thus defined:
$$E(X_L, X_H) = \frac{1}{N} \sum_{n=1}^{N} \left( \left\|x_H^{n} - m(x_L^{n})\right\|_{2}^{2} + k \cdot \left\|x_L^{n} - \bar{x}_L\right\|_{2}^{2} \right) \quad (15)$$
where $m(x_L^{n})$ is the prediction for sample $x_L^{n}$, $\bar{x}_L$ is the mean of the samples $x_L^{n}$ and k is a hyperparameter. Here, $\|x_H^{n} - m(x_L^{n})\|_{2}^{2}$ operates in the label space and $k \cdot \|x_L^{n} - \bar{x}_L\|_{2}^{2}$ operates in the data space (different k values produce different results, as discussed in the next section). The regularization in $E(X_L, X_H)$ of Equation (15) also simplifies the calculation of the linear regression model $m(x_L^{n})$. After the data in the current node are split and forwarded to the left and right child nodes, respectively, the tree continues to grow until the last leaf node has been created. Finally, classification is accomplished through voting to determine the optimal solution.
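The sketch below ties Equations (11), (12) and (15) together for one candidate split. It assumes a linear basis ψ(x) = x for the leaf model and a linear split response $r_\Theta(x) = \theta \cdot x$; both are illustrative simplifications, and lower quality values are treated as better because the purity here is an error measure.

```python
import numpy as np

def fit_leaf(X_L, X_H, eta=0.01):
    # ridge solution of Section 3.1, here with the linear basis psi(x) = x (assumption)
    d = X_L.shape[1]
    return np.linalg.solve(X_L.T @ X_L + eta * np.eye(d), X_L.T @ X_H)

def node_purity(X_L, X_H, k=1.0, eta=0.01):
    """Regularized purity E(X_L, X_H) of Equation (15): prediction error in the
    label (HR) space plus k times the spread of the LR samples in the data space."""
    W = fit_leaf(X_L, X_H, eta)
    fit = np.sum((X_H - X_L @ W) ** 2, axis=1)
    spread = np.sum((X_L - X_L.mean(axis=0)) ** 2, axis=1)
    return np.mean(fit + k * spread)

def split_quality(X_L, X_H, theta, k=1.0):
    """Equation (12): size-weighted purity of the two children induced by the split of
    Equation (11); a linear response r_Theta(x) = theta @ x is assumed for illustration."""
    left = (X_L @ theta) < 0
    q = 0.0
    for mask in (left, ~left):
        if mask.any():
            q += mask.sum() * node_purity(X_L[mask], X_H[mask], k)
    return q

# picking the best of several random candidates Theta_k, as described in Section 3.2
rng = np.random.default_rng(0)
X_L, X_H = rng.standard_normal((200, 8)), rng.standard_normal((200, 16))
thetas = rng.standard_normal((16, 8))
best_theta = min(thetas, key=lambda t: split_quality(X_L, X_H, t))
print(split_quality(X_L, X_H, best_theta))
```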

3.3. The Method Scheme

This section provides a brief description of the logic in the proposed algorithm, both the basic scheme for SR and the tree-structure construction algorithm for the random forest. These basic schemes are summarized in Table 1 and Figure 1 for clarity.
The first stage is the training stage (the red block in Figure 1). In this module, the LDCT images and the corresponding HDCT images are used as a training set; following Section 3.1 and Section 3.2, decision trees are generated from the training set and a random forest is trained to find the mapping relationship W(X_L) between the two image types. The second stage is the test stage (the blue block). A non-training-set LDCT image is used as the input image and with the learned mapping function and the LDCT image matrix $X_L$, the new image matrix $X_H$ is reconstructed. Finally, the coupled dictionary learning of $D_L$ and $D_H$ is performed according to Equation (5) and the inverse process of image down-sampling is performed according to Equation (4) to obtain the final reconstructed image.
Steps 3 and 4 of Table 1 mention training an individual tree and a random forest. Table 2 provides the algorithm for generating the random forest.

4. Experiments and Results

In this section, experiments based on clinical data are performed using the proposed random forest solution for SR. All the experiments were executed in MATLAB 2016a on an Ubuntu 18.04 operating system with an Intel® Core™ i5-7500 CPU @ 3.40 GHz and 64.0 GB of RAM.
All the CT images in the following experiments were provided by the United Imaging company. For this experiment, 100 LDCT images and the corresponding HDCT images were selected as the low-resolution and high-resolution training sets, respectively and the mapping relationship was learned from them; this constitutes the training phase. Here, HDCT denotes a full-dose CT image and LDCT denotes a quarter-dose CT image. In the testing phase, a non-training-set LDCT image is used as the input image, the learned mapping relationship is applied and a new CT image is then obtained by reconstruction via coupled dictionary learning. Finally, the CT image reconstructed by the proposed method is compared with the input LDCT image, the original HDCT image and the image reconstructed by the conventional interpolation method. The findings show that the proposed method is robust in reducing noise and artifacts.

4.1. Experimental Parameters and Evaluation Function

In the experiments, the main parameters are the number of trees T, the maximum tree depth $\xi_{max}$, the regularization parameter η for the linear regression in the leaf nodes and the regularization parameter k of the splitting target. Unless stated otherwise, these parameters are set to T = 10, $\xi_{max} = 15$, η = 0.01 and k = 1.
The resulting reconstructed image was evaluated using the peak signal-to-noise ratio (PSNR) and the structural similarity index measurement (SSIM) of the image as evaluation criteria.
The definition of PSNR is as follows:
$$PSNR = 10 \times \lg\left( \frac{255^{2}}{MSE} \right), \quad MSE = \frac{\sum_{j=1}^{height} \sum_{i=1}^{width} \left( I_{orig}(i,j) - I_{tar}(i,j) \right)^{2}}{height \times width} \quad (16)$$
where MSE is the mean square error, height and width are the height and width of the image, respectively, $I_{orig}$ is the source image and $I_{tar}$ is the image to be evaluated. The PSNR reflects the loss of high-frequency components from the image: higher PSNR values indicate smaller loss and a better reconstruction effect.
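For reference, a direct NumPy implementation of Equation (16) is sketched below (the experiments themselves were run in MATLAB; the toy inputs here are illustrative).

```python
import numpy as np

def psnr(orig, target):
    """PSNR of Equation (16) for 8-bit images (peak intensity 255)."""
    diff = orig.astype(np.float64) - target.astype(np.float64)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

# example: compare an image against a slightly perturbed copy
rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(64, 64))
b = np.clip(a + rng.integers(-5, 6, a.shape), 0, 255)
print(round(psnr(a, b), 2))
```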
The SSIM is defined as follows [32]:
$$SSIM(x, y) = \frac{(2 u_x u_y + C_1)(2 \sigma_{xy} + C_2)}{(u_x^{2} + u_y^{2} + C_1)(\sigma_x^{2} + \sigma_y^{2} + C_2)} \quad (17)$$
where $u_x$, $u_y$ and $\sigma_x$, $\sigma_y$ are the means and standard deviations of the images x and y, respectively; $\sigma_{xy}$ is the covariance of x and y; and $C_1$ and $C_2$ are constants (set to 1 in the experiments). The SSIM measures the structural similarity between the two images and corresponds more closely to the human eye's assessment of image quality; its value ranges from 0 to 1 and the closer the SSIM value is to 1, the more similar the two images are.
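A single-window implementation of Equation (17) follows; note that practical SSIM is usually computed over local windows and averaged, so this global version is a simplification matching the formula as written.

```python
import numpy as np

def ssim_global(x, y, C1=1.0, C2=1.0):
    """Global SSIM of Equation (17) with C1 = C2 = 1, as used in the experiments."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    ux, uy = x.mean(), y.mean()
    sx2, sy2 = x.var(), y.var()                      # sigma_x^2, sigma_y^2
    sxy = ((x - ux) * (y - uy)).mean()               # covariance of x and y
    return ((2 * ux * uy + C1) * (2 * sxy + C2)) / ((ux**2 + uy**2 + C1) * (sx2 + sy2 + C2))
```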

4.2. Clinical Data Experiments

In this experiment, clinical data were used to validate and test the performance and robustness of the proposed method. Taking a low-dose CT image outside the training set as input, the method proposed in this paper and the bicubic interpolation method were applied to reconstruct the input CT images. Figure 2 compares the image quality of the two methods based on the PSNR and SSIM metrics described in the previous section. Figure 2a shows the original image, a high-dose CT (HDCT) image, for reference and Figure 2b shows the corresponding input image, a low-dose CT (LDCT) image. Figure 2c shows the image reconstructed by the bicubic interpolation method (PSNR = 25.37 dB, SSIM = 0.79) and Figure 2d shows the image reconstructed by the method proposed in this paper (PSNR = 35.94 dB, SSIM = 0.91). The improvements in the two image quality indexes when using the proposed method are clear: a 41.66% improvement in PSNR and a 15.19% improvement in SSIM. These results demonstrate that the proposed method achieves significant improvements in high-frequency retention, denoising and image reconstruction quality compared with a traditional interpolation method.
The profile and residual images are also compared in Figure 3 and Figure 4. It can be concluded that the effect and performance of the proposed method in image reconstruction is superior to those of the traditional bicubic interpolation method.
Different numbers of iterations were employed in the proposed method and the reconstructed images obtained in Figure 3 were compared. To be more convincing, three representative parts were selected for comparison in Figure 5 and Figure 6 and the related data are shown in Table 3, Table 4 and Table 5.
As shown in Figure 7a,b and Table 3, Table 4 and Table 5, the image quality clearly changes with the number of iterations. The corresponding image quality parameters PSNR and SSIM are optimal after two iterations, after which they show a downward trend.

4.3. Parameter Evaluation

According to the analysis in Section 3, the factors that affect the random forest include the objective function for evaluating the candidate split functions and the inherent randomness. Therefore, during the statistical analysis of the reconstruction results, two factors are considered here: the number of trees T in the random forest and the maximum depth $\xi_{max}$ of each tree.
To control the variables and ensure authenticity during this experiment, all the following experiments involve only one iteration.
Random forest classifiers function similarly to voting. The construction of a random forest classifier [27] involves first generating decision trees; multiple decision trees then form the random forest. Each decision tree casts a ballot and all the trees vote to yield the final result. A larger number of trees tends to produce a better final result but increases the time required to reach a decision; therefore, an optimal trade-off must be found. Here, $\xi_{max} = 15$ is set as the default.
Figure 8a shows the effect of the parameter T on the experiment. The PSNR value increases steadily and eventually becomes saturated as T increases. As shown in Figure 8a, the PSNR is saturated when T = 10 . Figure 8b shows the relationship between the number of trees T and the total calculation time.
According to the graphs in Figure 8, it can be concluded that T = 10 is optimal, that is, the algorithm achieves good results and completes in a reasonable amount of time when 10 trees are used.
After determining the optimal number of trees (T = 10), the maximum depth of each decision tree can be discussed. Decision tree classification starts from the root node, splits the data into child nodes according to their features and then treats each child node as a new root; the data are thus sorted downward until the maximum depth is reached and the final result is obtained. The principle for the maximum depth is the same as that for the number of trees: greater depth provides a better classification effect but requires more time to generate the tree. Therefore, finding the best setting for the tree depth is also crucial.
Figure 9a shows the relationship between the maximum tree depth $\xi_{max}$ and the experimental outcome. The tree depth has a strong influence on the training. Figure 9a shows that a steady state is reached at a depth of $\xi_{max} = 15$; that is, the selected sample images are saturated. This behaviour is reflected by Equation (15), which directly affects the training on LDCT and HDCT images. Figure 9b shows the relationship between the maximum tree depth $\xi_{max}$ and the training time. It is concluded that the optimal maximum tree depth is $\xi_{max} = 15$.
The regularization parameter η of the linear regression in the leaf nodes mentioned in Section 3.1 and the regularization parameter k of the splitting target mentioned in Equation (15) also have a certain influence on the final random forest result but their influence is not as pronounced as that of the first two factors; consequently, comparisons are provided here without detailed explanations. As shown in Figure 10a, when $\eta > 10^{-2}$, the PSNR clearly declines and Figure 10b shows that a k value between 0.5 and 1 is most appropriate; that is, the PSNR remains highest within this interval.

5. Conclusions

In this paper, a new method for low-dose CT image SR reconstruction is proposed that avoids using sparse-coding dictionaries alone to learn the mapping from LR images to HR images, as in the general sparse representation of compressed sensing. Instead, the problem of mapping LDCT image blocks to HDCT image blocks is solved using a random forest, which is combined with coupled dictionary learning to complete the LDCT image reconstruction. CT images acquired from various parts of the human body have similar features; therefore, CT images of different parts of the body are included in the training set. To obtain a better reconstruction effect for a specific body part, CT images of that specific part can be used as the training set. An iterative capability is also incorporated to improve the robustness of the method. Compared with traditional interpolation methods, the proposed method greatly reduces noise and artifacts, improves the resolution of noisy images and produces larger PSNR and SSIM values. The proposed method can be applied in different CT settings, such as dual-source CT (DSCT) and can also be applied to other medical imaging fields, such as positron emission tomography (PET). In the training process, multithreaded computing is used to reduce the training time. Compared with deep learning-based CT super-resolution reconstruction methods, which are of great interest in the academic community, this method has a substantial advantage in running time but cannot handle large training sets because of CPU and memory limitations. In the future, the proposed method will be combined with deep learning for super-resolution imaging and a larger database will be used for training to improve the reconstruction effect.

Author Contributions

Data curation, C.J., M.J., Q.Z. and Y.G.; Funding acquisition, D.L., Y.Y. and Z.H.; Methodology, Z.H.; Supervision, D.L., X.L., Y.Y. and H.Z.; Writing – original draft, P.G.; Writing – review & editing, Z.H.

Funding

This work was supported by the National Natural Science Foundation of China (81871441), the Guangdong Special Support Program (2017TQ04R395), the Natural Science Foundation of Guangdong Province in China (2017A030313743), the Guangdong International Science and Technology Cooperation Project (2018A050506064), the Shenzhen Overseas High-Level Talent Peacock Team of China (KQTD2016053117113327), the Basic Research Program of Shenzhen in China (JCYJ20160608153434110, JCYJ20150831154213680) and the National Natural Science Foundation of China (81527804).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Brenner, D.J.; Hall, E.J. Computed tomography: An increasing source of radiation exposure. N. Engl. J. Med. 2007, 357, 2277–2284.
2. Hsieh, J. Adaptive streak artifact reduction in computed tomography resulting from excessive X-ray photon noise. Med. Phys. 1998, 25, 2139–2147.
3. Yun, S.J. Comparison of low- and standard-dose CT for the diagnosis of acute appendicitis: A meta-analysis. Am. J. Roentgenol. 2017, 208, W198–W207.
4. Hu, Z.; Liu, Q.; Zhang, N.; Zhang, Y.; Peng, X. Image reconstruction from few-view CT data by gradient-domain dictionary learning. J. X-Ray Sci. Technol. 2016, 24, 627–638.
5. Mouton, A.; Breckon, T.P. On the relevance of denoising and artefact reduction in 3D segmentation and classification within complex computed tomography imagery. J. X-Ray Sci. Technol. 2018.
6. Wang, Y.; Qi, Z. A new adaptive-weighted total variation sparse-view computed tomography image reconstruction with local improved gradient information. J. X-Ray Sci. Technol. 2018, 26, 957–975.
7. Hou, H.S.; Andrews, H.C. Cubic spline for image interpolation and digital filtering. IEEE Trans. Signal Process. 1978, 26, 508–517.
8. Sun, J.; Xu, Z.; Shum, H. Image super-resolution using gradient profile prior. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
9. Dai, S.; Han, M.; Xu, W.; Wu, Y.; Gong, Y. Soft edge smoothness prior for alpha channel super resolution. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–8.
10. Zhang, R.; Thibault, J.-B.; Bouman, C.A.; Sauer, K.D.; Hsieh, J. Model-based iterative reconstruction for dual-energy X-ray CT using a joint quadratic likelihood model. IEEE Trans. Med. Imag. 2014, 33, 117–134.
11. Yu, G.; Sapiro, G.; Mallat, S. Solving inverse problems with piecewise linear estimators: From Gaussian mixture models to structured sparsity. IEEE Trans. Image Process. 2012, 21, 2481–2499.
12. Peleg, T.; Elad, M. A statistical prediction model based on sparse representations for single image super-resolution. IEEE Trans. Image Process. 2014, 23, 2569–2582.
13. Yu, Z.; Thibault, J.-B.; Bouman, C.A.; Sauer, K.D.; Hsieh, J. Fast model-based X-ray CT reconstruction using spatially nonhomogeneous ICD optimization. IEEE Trans. Image Process. 2011, 20, 161–175.
14. Bishop, C.M. Pattern Recognition and Machine Learning; Springer: Berlin, Germany, 2007.
15. Yang, J.; Wright, J.; Huang, T.S.; Ma, Y. Image super-resolution via sparse representation. IEEE Trans. Image Process. 2010, 19, 2861–2873.
16. Yang, J.; Wang, Z.; Lin, Z.; Cohen, S.; Huang, T.S. Coupled dictionary training for image super-resolution. IEEE Trans. Image Process. 2012, 21, 3467–3478.
17. Wang, Z.; Yang, Y.; Wang, Z.; Chang, S.; Yang, J.; Huang, T.S. Learning super-resolution jointly from external and internal examples. IEEE Trans. Image Process. 2015, 24, 4359–4371.
18. Jiang, C.; Zhang, Q.; Fan, R.; Hu, Z. Super-resolution CT image reconstruction based on dictionary learning and sparse representation. Sci. Rep. 2018, 8, 8799.
19. Hu, Z.; Liang, D.; Xia, D.; Zheng, H. Compressive sampling in computed tomography: Method and application. Nucl. Instrum. Methods Phys. Res. A 2014, 748, 26–32.
20. Hu, Z.; Zhang, Y.; Liu, J.; Ma, J.; Zheng, H.; Liang, D. A feature refinement approach for statistical interior CT reconstruction. Phys. Med. Biol. 2016, 61, 5311–5334.
21. Li, T.; Jiang, C.; Gao, J.; Yang, Y.; Liang, D.; Liu, X.; Zheng, H.; Hu, Z. Low-count PET image restoration using sparse representation. Nucl. Instrum. Methods Phys. Res. A 2018, 888, 222–227.
22. Dong, W.S.; Zhang, L.; Shi, G.; Wu, X.L. Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization. IEEE Trans. Image Process. 2011, 20, 1838–1857.
23. Timofte, R.; Smet, V.D.; Gool, L.V. A+: Adjusted anchored neighborhood regression for fast super-resolution. In Proceedings of the Asian Conference on Computer Vision (ACCV 2014), Singapore, 1–5 November 2014.
24. Timofte, R.; Smet, V.D.; Gool, L.V. Anchored neighborhood regression for fast example-based super-resolution. In Proceedings of the 2013 IEEE International Conference on Computer Vision, Sydney, NSW, Australia, 1–8 December 2013.
25. Schulter, S.; Leistner, C.; Bischof, H. Fast and accurate image upscaling with super-resolution forests. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3791–3799.
26. Amit, Y.; Geman, D. Shape quantization and recognition with randomized trees. Neural Comput. 1997, 9, 1545–1588.
27. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
28. Criminisi, A.; Shotton, J. Decision Forests for Computer Vision and Medical Image Analysis; Springer: London, UK, 2013.
29. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
30. Wang, Z.; Yang, J.; Wang, Z.; Chang, S.; Yang, Y.; Liu, D. Sparse Coding and Its Application in Computer Vision; World Scientific: Singapore, 2016.
31. Freeman, W.T.; Jones, T.R.; Pasztor, E.C. Example-based super-resolution. IEEE Comput. Graph. Appl. 2002, 22, 56–65.
32. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
Figure 1. Flowchart of the SR algorithm.
Figure 2. From left to right, top to bottom: (a) HDCT image; (b) LDCT image; (c) reconstructed image obtained using the bicubic interpolation method; (d) image reconstructed by the proposed method.
Figure 3. Profiles of different results are shown for the 320th row of the image in Figure 2. The black curve represents the profile of the original CT image in Figure 2a. The red curve represents the profile of the reconstructed CT image obtained using the bicubic interpolation method in Figure 2c. The blue curve represents the profile of the reconstructed CT image obtained using the proposed method in Figure 2d.
Figure 4. From left to right, (a–c) respectively show the residual images of the LDCT image in Figure 2b, the result reconstructed by bicubic interpolation in Figure 2c and the result of the method proposed in this paper in Figure 2d.
Figure 5. (a) HDCT image; (b) LDCT image; (c) Reconstructed image obtained using the bicubic interpolation method; (d) The image reconstructed by the method of this paper with 1 iteration; (e) The image reconstructed by the method of this paper with 2 iterations; (f) The image reconstructed by the method of this paper with 5 iterations.
Figure 6. Images (a–f) show zoomed views of the portions marked with red squares in Figure 5a, providing more detail of the differences in reconstructed image quality under different numbers of iterations.
Figure 7. Changes in PSNR and SSIM values with the number of iterations for the simulation experiment using the proposed method.
Figure 8. As shown in (a), when T = 10, the PSNR is close to saturation; (b) shows that the time increases linearly as T increases.
Figure 9. (a) shows that when $\xi_{max} = 15$, the result is saturated; (b) shows the relationship between the maximum tree depth $\xi_{max}$ and the training time.
Figure 10. (a) The effect of the regularization parameter η on the results; (b) the effect of the regularization parameter k on the results.
Table 1. Basic Scheme for SR.
1. Input: an LDCT image x
2. Output: the final processed image y
3. Take N sample points {x_L^n, x_H^n} from the LDCT and HDCT images in the training set
4. Train individual random forest trees and then combine the trained trees into a random forest
5. Obtain the dependence matrix function W(x_L) from Equation (10)
6. Compute the mapping relationship function W(X_L) using Equations (7) and (8)
7. Obtain the relationship between the LR data matrix X_L and the HR data matrix X_H from Equation (6)
8. Complete the coupled dictionary learning of the LR dictionary D_L and the HR dictionary D_H by Equation (5)
9. Implement the inverse of image down-sampling by Equation (4) and obtain the final image y by Equation (3)
Table 2. Tree construction for a random forest.
1. for k = 1 to K
2.   randomly extract N samples to construct a feature vector set
3.   while (tree depth is below the maximum)
       (1) randomly select n eigenvectors from the set of feature vectors
       (2) select the optimal vector and the optimal split point from the feature vectors
       (3) split the data at the optimal split point into left and right child nodes
       (4) update the tree depth
4.   end while
5.   create a tree T_k(x)
6. end for
7. return the collection of trees {T_k(x)}, k = 1, ..., K
Table 3. Comparison of relevant data.
            LDCT    Bicubic   RFSR    RFSR 2nd   RFSR 5th
PSNR (dB)   21.65   26.23     36.05   37.03      34.08
SSIM        0.75    0.80      0.92    0.95       0.86
Table 4. The PSNR value of the four ROIs marked by red squares in Figure 5a.
ROI   LDCT    Bicubic   RFSR    RFSR 2nd   RFSR 5th
1     20.55   25.76     35.89   36.97      34.01
2     21.33   26.13     35.97   37.01      34.03
3     22.31   27.43     36.12   37.09      34.09
4     22.06   26.54     36.45   37.63      34.61
Table 5. The SSIM value of the four ROIs marked by red squares in Figure 5a.
ROI   LDCT    Bicubic   RFSR    RFSR 2nd   RFSR 5th
1     0.71    0.79      0.90    0.92       0.86
2     0.74    0.81      0.92    0.95       0.88
3     0.78    0.83      0.91    0.94       0.87
4     0.76    0.82      0.89    0.93       0.85
