Computed Tomography (CT) Image Quality Enhancement via a Uniform Framework Integrating Noise Estimation and Super-Resolution Networks
Abstract
1. Introduction
- We propose a dense-inception network integrating an inception structure and dense skip connections to estimate the noise level. Note that we follow the definition of noise level in [25,26,27,28] as the variance of the Gaussian noise added to the image. To the best of our knowledge, this is the first attempt to adopt a deep convolutional neural network to estimate noise in medical images. Different from traditional methods, which need to extract the noise from the image first, we train a network to classify a noisy medical image into the class labelled by the variance of its noise distribution.
- We propose a modified residual-dense network to reconstruct a high-resolution image with low noise. The inception block is applied on each skip connection of the dense-residual network so that structural image features, rather than noise and blurring features, are preferentially transferred through the network. Moreover, the noise level needed for reconstruction is directly calculated by the proposed noise level estimation network, so the reconstruction network can process noisy, blurred medical images with much less prior knowledge about the noise than state-of-the-art methods require.
- We propose a joint loss function to constrain the super-resolution network. Both the perceptual loss and the mean square error (MSE) loss are used to achieve better performance in the reconstruction of image edges and details.
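The combination of the two loss terms can be sketched in plain Python; the feature vectors standing in for the perceptual term and the weighting factor are illustrative assumptions, not values or code from the paper.

```python
def mse(a, b):
    """Mean squared error between two equal-length sequences (pixels or features)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def joint_loss(sr_pixels, hr_pixels, sr_feats, hr_feats, weight=0.1):
    """Joint loss = pixel-wise MSE + weighted perceptual loss.

    The perceptual term is the MSE between feature representations of the
    reconstructed and target images (e.g., taken from a pretrained CNN);
    `weight` balances the two terms and is an illustrative value.
    """
    return mse(sr_pixels, hr_pixels) + weight * mse(sr_feats, hr_feats)
```

In this formulation the MSE term drives pixel fidelity while the perceptual term penalizes differences in higher-level structure, which is why the combination recovers edges and details better than either term alone.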
2. Materials and Methods
2.1. Dense-Inception Network for Noise Estimation
2.1.1. Overall Network Structure
- Pre-processing module: We used a convolutional layer with 96 channels as the pre-processing layer of the degradation estimation network to convert the input image into a 96-dimensional feature space. A batch normalization (BN) layer and a rectified linear unit (ReLU) layer [34] followed, to avoid the problem of gradient vanishing. After that, a max-pooling layer was applied to reduce the size of each feature map and thereby decrease the amount of computation.
- Inception module: Three inception-residual blocks were connected in series to compose the inception module for noisy-image feature extraction. The inception module takes the feature maps from the pre-processing module as input and processes them with receptive fields of different sizes. Because the inception structure consists of convolution layers with different kernel sizes, image features at multiple scales are preserved as much as possible.
- Dense connection: The skip-connection scheme proposed in DenseNet [32] was used to improve the performance of our proposed degradation estimation network. The skip connections enable the reuse of image features that are not processed by the corresponding inception-residual modules, reducing the loss of image noise features during feature transmission and fusion. Moreover, bottlenecks were adopted in the dense connections, which keep the number of parameters reasonable and help avoid overfitting.
- Global average-pooling and fully connected layer: After the densely connected inception modules, we applied average pooling with a kernel size equal to the feature-map size to obtain the mean of each feature map, further reducing the number of parameters and inhibiting over-fitting. A fully connected layer then output the probability of each noise-level class through a softmax layer, and the class with the largest probability was taken as the estimate of the image degradation.
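The classification head described above can be sketched as follows; the ten class labels 0–18 follow the noise levels reported in the results tables, and the function names are illustrative, not the authors' code.

```python
import math

def softmax(logits):
    """Convert fully connected layer outputs into class probabilities."""
    m = max(logits)                      # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predict_noise_level(logits, levels=tuple(range(0, 19, 2))):
    """Return the noise level (Gaussian variance) whose class has the
    largest softmax probability."""
    probs = softmax(logits)
    return levels[probs.index(max(probs))]
```

For instance, if the fifth logit dominates, the predicted noise level is 8, the fifth entry of the label set.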
2.1.2. Inception Structure
2.1.3. Dense Connection of Inception Modules
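The bottleneck convolutions adopted in the dense connections keep the parameter count manageable. A rough weight count illustrates why; the channel width of 96 and the 4x squeeze are illustrative assumptions, not values stated in the paper.

```python
def conv_params(k, c_in, c_out):
    """Weight count of a k x k convolution layer (biases omitted)."""
    return k * k * c_in * c_out

C = 96  # illustrative channel width
direct = conv_params(3, C, C)                                       # plain 3x3 conv
bottleneck = conv_params(1, C, C // 4) + conv_params(3, C // 4, C)  # 1x1 squeeze, then 3x3
print(direct, bottleneck)  # 82944 23040
```

Squeezing the channels with a 1 x 1 convolution before the 3 x 3 convolution cuts the weight count by more than a factor of three in this example, which is how the dense connections stay affordable as features accumulate.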
2.2. Dense-Residual-Inception Network for Image Quality Enhancement
2.2.1. Concatenation of the Degraded Image and Degradation Maps
2.2.2. Residual-Dense-Inception Network
- Pre-processing module: Similar to its counterpart in the degradation estimation network, the pre-processing module here also roughly extracts shallow image features. However, since the input can be considered an image block with multiple layers containing the degraded image, a pre-set blurring kernel, and a noise map, the pre-processing module in the image reconstruction network plays the role of relating the image features and these degradation factors together. The input image block was mapped from the image space to a 128-dimensional feature space.
- Residual-dense-inception module: Figure 7 shows our proposed residual-dense-inception block, which extracts global dense features from the input shallow features. To make full use of the multiple shallow features extracted from the input image block, we adopted the residual-dense block proposed in [24] as the basis of this module. The skip connections in the residual-dense block preserve the features from all previous layers, and the residual connection combines shallow and deep features so that the edges, textures, and tiny structures in the degraded image can be recovered better. Furthermore, we embedded the inception block into all the skip-connection and residual-connection routes to suppress the noise information in the features, since it contributes little to the subsequent sub-pixel convolution and reconstruction modules. Figure 8 shows the inception block applied in our network, which was originally proposed in [31]. Three residual-dense-inception blocks were connected in series to obtain the accumulated dense features.
- Sub-pixel convolution module: After the pre-processing and residual-dense-inception modules, the spatial size of the feature maps was 40 × 40, much lower than the resolution of the target image. To increase the resolution, the values at each spatial position across a group of r² channels (r is the scale factor) were re-arranged into an r × r block corresponding to the r × r sub-region of the target high-resolution feature map. In this way, a feature map of size 128r² × H × W was re-arranged into one of size 128 × rH × rW. Different from traditional interpolation, each rH × rW region incorporates the degradation information adaptively, leading to a better super-resolution effect.
- Reconstruction module: The high-resolution feature maps produced by the sub-pixel convolution module were combined into a full image by a convolution layer and a sigmoid layer. In our work, a 3 × 3 convolution kernel was defined to map the 128-channel feature maps into a single-channel grayscale image with higher resolution and less degradation.
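The channel re-arrangement performed by the sub-pixel convolution module can be sketched in plain Python. The channel ordering used below is one common sub-pixel convention; the paper does not spell out its exact ordering, so treat this as an illustrative assumption.

```python
def pixel_shuffle(feature_maps, r):
    """Re-arrange C*r^2 feature maps of size H x W into C maps of size rH x rW.

    feature_maps: list of C*r*r maps, each a list of H rows of W values.
    Channel c*r*r + dy*r + dx supplies the pixel at offset (dy, dx) inside
    each r x r block of output map c (assumed ordering).
    """
    n, H, W = len(feature_maps), len(feature_maps[0]), len(feature_maps[0][0])
    C = n // (r * r)
    out = [[[0] * (r * W) for _ in range(r * H)] for _ in range(C)]
    for c in range(C):
        for dy in range(r):
            for dx in range(r):
                src = feature_maps[c * r * r + dy * r + dx]
                for y in range(H):
                    for x in range(W):
                        out[c][y * r + dy][x * r + dx] = src[y][x]
    return out
```

With r = 2, four 1 x 1 maps holding 1, 2, 3, 4 become a single 2 x 2 map [[1, 2], [3, 4]]: no interpolation is involved, the network's own learned channels supply every output pixel.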
2.2.3. Joint Loss of the Residual-Dense-Inception Network
2.3. Experimental Materials
2.3.1. Dataset
- Dataset for the degradation estimation network: In total, 42,000 image patches of size 100 × 100 were randomly cropped from the 1600 lung CT images. For every patch, one integer was randomly generated in the range 0 to 18 and used as the variance of the Gaussian noise added to that patch. Note that we treated each patch with random noise as an individual noisy image, and the variances of the different additive noises were used as the class labels of the images. As a result, we obtained 42,000 degraded images of size 100 × 100, of which 40,000 were used as the training set and 2000 as the testing set. Figure 10 shows some sampled degraded image patches with different noise levels. The noise level equals the variance of the Gaussian noise added to the patch; for example, noise level 8 means the variance of the added Gaussian noise is 8.
- Dataset for the image quality enhancement network: All 1600 lung CT images were corrupted with random Gaussian white noise (variance 5, 10, or 15) and down-sampled to 256 × 256. Then, 1400 pairs of degraded and target images were used as the training set, while the other 200 pairs were used as the testing set. Figure 11 shows some sampled degraded images with noise and low resolution.
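The patch degradation and labelling procedure for the estimation dataset can be sketched as follows; the helper names are illustrative and the patches are flattened to 1-D lists for brevity, unlike the 100 × 100 patches in the paper.

```python
import random

def make_noisy_patch(patch, variance, rng=random):
    """Add zero-mean Gaussian noise of the given variance to a flat patch;
    the variance itself becomes the patch's class label."""
    sigma = variance ** 0.5
    noisy = [p + rng.gauss(0.0, sigma) for p in patch]
    return noisy, variance

def build_dataset(patches, rng=random):
    """Label each patch with a random integer noise variance in [0, 18]."""
    return [make_noisy_patch(p, rng.randint(0, 18), rng) for p in patches]
```

Because the label is the variance that generated the noise, no noise extraction step is needed to produce ground truth: the degradation process itself supplies the supervision.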
2.3.2. Comparator Methods
2.4. Parameter Setting
2.5. Experiment Implementation
- The testing noisy image patches were processed to provide the estimated noise level by the proposed noise estimation network, as well as by the comparators.
- The accuracy and the confusion rate of the classification of noise levels by different methods were calculated to evaluate the performance of our proposed method quantitatively.
- The testing noisy low-resolution images were processed to produce clean high-resolution images by the proposed image quality enhancement network, with the noise level generated by the well-trained noise estimation network, as well as by the comparator methods.
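The reconstruction results in Section 3 are reported as PSNR and SSIM. PSNR follows the standard definition sketched below (SSIM is omitted here for brevity); the flat-list image representation is an illustrative simplification.

```python
import math

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two flat grayscale images:
    PSNR = 10 * log10(MAX^2 / MSE)."""
    n = len(reference)
    mse = sum((a - b) ** 2 for a, b in zip(reference, reconstructed)) / n
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher PSNR means a smaller pixel-wise error against the clean target, which is how the tables below rank the reconstruction methods.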
3. Results and Discussion
3.1. Performance on Noise Estimation
3.2. Performance on Image Super-Resolution
3.2.1. Examination of Design Strategies
3.2.2. Comparisons with Other Models
4. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Jalalian, A.; Mashohor, S.; Mahmud, R.; Karasfi, B.; Saripan, M.I.; Ramli, A.R. Computer-Assisted Diagnosis System for Breast Cancer in Computed Tomography Laser Mammography (CTLM). J. Digit. Imaging 2017, 30, 796–811.
- Zhao, L.; Dai, W.; Soman, S.; Hackney, D.B.; Wong, E.T.; Robson, P.M.; Alsop, D.C. Using Anatomic Magnetic Resonance Image Information to Enhance Visualization and Interpretation of Functional Images: A Comparison of Methods Applied to Clinical Arterial Spin Labeling Images. IEEE Trans. Med. Imaging 2017, 36, 487–496.
- Mosleh, A.; Sola, Y.E.; Zargari, F.; Onzon, E.; Langlois, J.M.P. Explicit Ringing Removal in Image Deblurring. IEEE Trans. Image Process. 2018, 27, 580–593.
- Foroozan, F.; O’Reilly, M.A.; Hynynen, K. Microbubble Localization for Three-Dimensional Superresolution Ultrasound Imaging Using Curve Fitting and Deconvolution Methods. IEEE Trans. Biomed. Eng. 2018, 65, 2692–2703.
- Allman, D.; Reiter, A.; Bell, M.A.L. Photoacoustic Source Detection and Reflection Artifact Removal Enabled by Deep Learning. IEEE Trans. Med. Imaging 2018, 37, 1464–1477.
- Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA, 20–25 June 2005; Volume 5, pp. 60–65.
- Elad, M.; Aharon, M. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 2006, 15, 3736–3745.
- Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095.
- Meng, T.; Wu, C.; Jia, T. Recombined Convolutional Neural Network for Recognition of Macular Disorders in SD-OCT Images. In Proceedings of the 2018 37th Chinese Control Conference (CCC), Wuhan, China, 25–27 July 2018; pp. 9362–9367.
- Jain, V.; Seung, S. Natural image denoising with convolutional networks. In Advances in Neural Information Processing Systems; The MIT Press: Cambridge, MA, USA, 2009; pp. 769–776.
- Chen, Y.; Pock, T. Trainable nonlinear reaction diffusion: A flexible framework for fast and effective image restoration. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1256–1272.
- Yang, Q.; Yan, P.; Zhang, Y.; Yu, H.; Shi, Y.; Mou, X.; Kalra, M.K.; Zhang, Y.; Sun, L.; Wang, G.; et al. Low-Dose CT Image Denoising Using a Generative Adversarial Network with Wasserstein Distance and Perceptual Loss. IEEE Trans. Med. Imaging 2018, 37, 1348–1357.
- Keys, R. Cubic convolution interpolation for digital image processing. IEEE Trans. Acoust. Speech Signal Process. 1981, 29, 1153–1160.
- Hong, S.; Wang, L.; Truong, T. An Improved Approach to the Cubic-Spline Interpolation. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 1468–1472.
- Schultz, R.R.; Stevenson, R.L. Extraction of high-resolution frames from video sequences. IEEE Trans. Image Process. 1996, 5, 996–1011.
- Nasir, H.; Stankovic, V.; Marshall, S. Singular value decomposition based fusion for super-resolution image reconstruction. In Proceedings of the 2011 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), Kuala Lumpur, Malaysia, 16–18 November 2011; pp. 393–398.
- Dian, R.; Fang, L.; Li, S. Hyperspectral Image Super-Resolution via Non-local Sparse Tensor Factorization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 3862–3871.
- Dong, C.; Loy, C.C.; Tang, X. Accelerating the super-resolution convolutional neural network. In Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016; pp. 391–407.
- Ledig, C.; Theis, L.; Huszár, F. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 105–114.
- Shi, W.; Caballero, J.; Huszár, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 1874–1883.
- Zhang, Y.; Chan, W.; Jaitly, N. Very deep convolutional networks for end-to-end speech recognition. In Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017; pp. 4845–4849.
- Zhang, K.; Zuo, W.; Zhang, L. Learning a single convolutional super-resolution network for multiple degradations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 3262–3271.
- Wu, J.; Yue, T.; Shen, Q.; Cao, X.; Ma, Z. Multiple-image super resolution using both reconstruction optimization and deep neural network. In Proceedings of the 2017 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Montreal, QC, Canada, 14–16 November 2017; pp. 1175–1179.
- Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; Fu, Y. Residual Dense Network for Image Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 2472–2481.
- Jiang, P.; Zhang, J. Fast and reliable noise level estimation based on local statistic. Pattern Recognit. Lett. 2016, 78, 8–13.
- Hashemi, M.; Beheshti, S. Adaptive noise variance estimation in BayesShrink. IEEE Signal Process. Lett. 2010, 17, 12–15.
- Tian, J.; Chen, L. Image noise estimation using a variation-adaptive evolutionary approach. IEEE Signal Process. Lett. 2012, 19, 395–398.
- Rakhshanfar, M.; Amer, M.A. Estimation of Gaussian, Poissonian–Gaussian, and processed visual noise and its level function. IEEE Trans. Image Process. 2016, 25, 4172–4185.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems; The MIT Press: Cambridge, MA, USA, 2012; pp. 1097–1105.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
- Szegedy, C.; Ioffe, S.; Vanhoucke, V. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017.
- Huang, G.; Liu, Z.; Van Der Maaten, L. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
- Chvetsov, A.; Palta, J. SU-GG-T-124: Probability Density Distribution of Proton Range as a Function of Noise in CT Images. Med. Phys. 2008, 35, 2754.
- Lim, B.; Son, S.; Kim, H. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; Volume 1, p. 4.
- Johnson, J.; Alahi, A.; Fei-Fei, L. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016; pp. 694–711.
- Clark, K.; Vendt, B.; Smith, K.; Freymann, J.; Kirby, J.; Koppel, P.; Moore, S.; Phillips, S.; Maffitt, D.; Pringle, M.; et al. The Cancer Imaging Archive (TCIA): Maintaining and operating a public information repository. J. Digit. Imaging 2013, 26, 1045–1057.
- Yue, H.; Liu, J.; Yang, J.; Nguyen, T.; Hou, C. Image noise estimation and removal considering the Bayer pattern of noise variance. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 2976–2980.
- Aja-Fernández, S.; Vegas-Sánchez-Ferrero, G.; Martín-Fernández, M. Automatic noise estimation in images using local statistics. Additive and multiplicative cases. Image Vis. Comput. 2009, 27, 756–770.
- Amer, A.; Dubois, E. Fast and reliable structure-oriented video noise estimation. IEEE Trans. Circuits Syst. Video Technol. 2005, 15, 113–118.
- Pyatykh, S.; Hesser, J.; Zheng, L. Image noise level estimation by principal component analysis. IEEE Trans. Image Process. 2013, 22, 687–699.
- Zhao, L.; Bai, H.; Liang, J.; Zeng, B.; Wang, A.; Zhao, Y. Simultaneous color-depth super-resolution with conditional generative adversarial networks. Pattern Recognit. 2019, 88, 356–369.
- Mao, X.; Shen, C.; Yang, Y. Image restoration using convolutional auto-encoders with symmetric skip connections. arXiv 2016, arXiv:1606.08921.
- Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. A comprehensive evaluation of full reference image quality assessment algorithms. In Proceedings of the 2012 IEEE International Conference on Image Processing (ICIP), Orlando, FL, USA, 30 September–3 October 2012; pp. 1477–1480.
| Method \ Noise Level | 0 | 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | 18 |
|---|---|---|---|---|---|---|---|---|---|---|
| Filter-based | 0.055 | 0.250 | 0.075 | 0.070 | 0.075 | 0.085 | 0.095 | 0.090 | 0.085 | 0.075 |
| ANPE | 0.045 | 0.245 | 0.075 | 0.065 | 0.065 | 0.050 | 0.055 | 0.025 | 0.050 | 0.010 |
| SOBA | 0.045 | 0.240 | 0.065 | 0.010 | 0.055 | 0.035 | 0.050 | 0.030 | 0.040 | 0.015 |
| PCA | 0.025 | 0.255 | 0.025 | 0.020 | 0.020 | 0.025 | 0.025 | 0.020 | 0.005 | 0.000 |
| Proposed | 0.040 | 0.160 | 0.000 | 0.000 | 0.020 | 0.010 | 0.000 | 0.000 | 0.005 | 0.000 |
| Actual \ Predicted | 0 | 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | 18 |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 192 | 8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 2 | 31 | 169 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 4 | 0 | 0 | 200 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 6 | 0 | 0 | 0 | 200 | 0 | 0 | 0 | 0 | 0 | 0 |
| 8 | 0 | 0 | 0 | 3 | 196 | 1 | 0 | 0 | 0 | 0 |
| 10 | 0 | 0 | 0 | 0 | 0 | 198 | 2 | 0 | 0 | 0 |
| 12 | 0 | 0 | 0 | 0 | 0 | 0 | 200 | 0 | 0 | 0 |
| 14 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 200 | 0 | 0 |
| 16 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 199 | 0 |
| 18 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 200 |
| P | 0.86 | 0.955 | 1.00 | 0.985 | 1.00 | 0.995 | 0.99 | 0.995 | 1.00 | 1.00 |
| R | 0.96 | 0.84 | 1.00 | 1.00 | 0.98 | 0.99 | 1.00 | 1.00 | 0.995 | 1.00 |
| F1-measure | 0.91 | 0.89 | 1.00 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 1.00 |
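The P, R, and F1 rows follow the standard per-class definitions (precision over each predicted column, recall over each actual row). A short sketch over the first two classes of the confusion matrix reproduces the tabulated values for class 0:

```python
def prf_from_confusion(matrix, cls):
    """Precision, recall, and F1 for one class of a confusion matrix,
    where matrix[i][j] counts samples of actual class i predicted as j."""
    tp = matrix[cls][cls]
    predicted = sum(row[cls] for row in matrix)   # column sum
    actual = sum(matrix[cls])                     # row sum
    p = tp / predicted
    r = tp / actual
    f1 = 2 * p * r / (p + r)
    return p, r, f1

# Top-left 2x2 corner of the confusion matrix above (classes 0 and 2):
m = [[192, 8], [31, 169]]
p, r, f1 = prf_from_confusion(m, 0)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.86 0.96 0.91
```

The confusion between classes 0 and 2 is the only notable failure mode, which matches the intuition that very low noise variances are hard to tell apart.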
| Loss | PSNR (noise 5) | SSIM (noise 5) | PSNR (noise 10) | SSIM (noise 10) | PSNR (noise 15) | SSIM (noise 15) |
|---|---|---|---|---|---|---|
| MSE | 29.12 ± 0.18 | 0.84 ± 0.01 | 28.53 ± 0.23 | 0.83 ± 0.01 | 28.09 ± 0.15 | 0.80 ± 0.01 |
| Perceptual | 28.09 ± 0.15 | 0.85 ± 0.01 | 27.85 ± 0.27 | 0.84 ± 0.01 | 27.12 ± 0.57 | 0.83 ± 0.01 |
| Perceptual-MSE | 30.79 ± 0.25 | 0.88 ± 0.01 | 30.23 ± 0.41 | 0.86 ± 0.01 | 29.25 ± 0.42 | 0.84 ± 0.01 |
| Method | PSNR (noise 5) | SSIM (noise 5) | PSNR (noise 10) | SSIM (noise 10) | PSNR (noise 15) | SSIM (noise 15) |
|---|---|---|---|---|---|---|
| Bicubic | 23.53 ± 0.17 | 0.70 ± 0.01 | 23.22 ± 0.15 | 0.53 ± 0.01 | 22.74 ± 0.15 | 0.40 ± 0.01 |
| SRMD | 28.89 ± 0.16 | 0.85 ± 0.01 | 28.62 ± 0.20 | 0.83 ± 0.01 | 28.13 ± 0.22 | 0.82 ± 0.01 |
| SRCGAN | 28.38 ± 0.11 | 0.86 ± 0.01 | 27.95 ± 0.26 | 0.85 ± 0.01 | 27.78 ± 0.13 | 0.83 ± 0.01 |
| RED-CNN | 29.14 ± 0.18 | 0.85 ± 0.01 | 28.71 ± 0.23 | 0.83 ± 0.01 | 28.54 ± 0.17 | 0.82 ± 0.01 |
| RDN | 29.27 ± 0.21 | 0.87 ± 0.01 | 28.89 ± 0.17 | 0.86 ± 0.01 | 28.67 ± 0.55 | 0.83 ± 0.01 |
| Proposed | 30.79 ± 0.25 | 0.88 ± 0.01 | 30.23 ± 0.41 | 0.86 ± 0.01 | 29.25 ± 0.42 | 0.84 ± 0.01 |
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Chi, J.; Zhang, Y.; Yu, X.; Wang, Y.; Wu, C. Computed Tomography (CT) Image Quality Enhancement via a Uniform Framework Integrating Noise Estimation and Super-Resolution Networks. Sensors 2019, 19, 3348. https://doi.org/10.3390/s19153348