Fast-MFQE: A Fast Approach for Multi-Frame Quality Enhancement on Compressed Video
Abstract
1. Introduction
2. The Proposed Fast-MFQE
2.1. Image Pre-Processing Building Modules (IPPB)
2.1.1. Mean Shift
2.1.2. Pixel Unshuffle
2.2. Spatio-Temporal Fusion Attention (STFA)
2.3. Feature Reconstruction Network (FRN)
2.4. Loss Function
3. Experiments
3.1. Settings
3.1.1. Datasets
3.1.2. Quality Enhancement Assessment Metrics
3.1.3. Parameter Settings
3.2. Performance Comparison
3.2.1. Quantitative Comparison
3.2.2. Subjective Comparison
3.2.3. Comparison of Inference Performance
3.2.4. Ablation Studies
3.2.5. Perceptual Quality Comparison
3.2.6. Subjective Quality and Inference Speed at Different Resolutions
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Definition
---|---
IPPB | Image pre-processing building module
STFA | Spatio-temporal fusion attention
FRN | Feature reconstruction network
UHD | Ultra-high-definition
QoE | Quality of experience
CNN | Convolutional neural network
PSNR | Peak signal-to-noise ratio
SSIM | Structural similarity index measure
DSC | Depthwise separable convolution
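For reference, the PSNR values (and the ΔPSNR gains reported in the comparison tables) can be computed as in the following minimal NumPy sketch; the function names are ours, not from the paper:

```python
import numpy as np

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a reference and a distorted image."""
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

def delta_psnr(raw, compressed, enhanced):
    """PSNR gain of the enhanced frame over the compressed frame,
    both measured against the uncompressed original (the quantity
    that quality-enhancement tables typically report)."""
    return psnr(raw, enhanced) - psnr(raw, compressed)
```

SSIM is computed analogously but over local windows of luminance, contrast, and structure statistics; library implementations (e.g., in scikit-image) are commonly used in practice.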
References
- Sullivan, G.J.; Ohm, J.-R.; Han, W.-J.; Wiegand, T. Overview of the High Efficiency Video Coding (HEVC) Standard. IEEE Trans. Circuits Syst. Video Technol. 2012, 22, 1649–1668. [Google Scholar] [CrossRef]
- Ohm, J.-R.; Sullivan, G.J.; Schwarz, H.; Tan, T.K.; Wiegand, T. Comparison of the coding efficiency of video coding standards—including High Efficiency Video Coding (HEVC). IEEE Trans. Circuits Syst. Video Technol. 2012, 22, 1669–1684. [Google Scholar]
- Li, S.; Xu, M.; Deng, X.; Wang, Z. Weight-based R-λ rate control for perceptual high efficiency video coding on conversational videos. Signal Process. Image Commun. 2015, 10, 127–140. [Google Scholar] [CrossRef]
- Lu, G.; Ouyang, W.; Xu, D.; Zhang, X.; Cai, C.; Gao, Z. An end-to-end deep video compression framework. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 10998–11007. [Google Scholar]
- Galteri, L.; Seidenari, L.; Bertini, M.; Bimbo, A.D. Deep generative adversarial compression artifact removal. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 4836–4845. [Google Scholar]
- Foi, A.; Katkovnik, V.; Egiazarian, K. Pointwise Shape-Adaptive DCT for High-Quality Denoising and Deblocking of Grayscale and Color Images. IEEE Trans. Image Process. 2007, 16, 1395–1411. [Google Scholar] [CrossRef]
- Zhang, X.; Xiong, R.; Fan, X.; Ma, S.; Gao, W. Compression artifact reduction by overlapped-block transform coefficient estimation with block similarity. IEEE Trans. Image Process. 2013, 22, 4613–4626. [Google Scholar] [CrossRef]
- Sheikh, H.R.; Bovik, A.C.; de Veciana, G. An information fidelity criterion for image quality assessment using natural scene statistics. IEEE Trans. Image Process. 2005, 14, 2117–2128. [Google Scholar] [CrossRef]
- Jancsary, J.; Nowozin, S.; Rother, C. Loss-specific training of non-parametric image restoration models: A new state of the art. In Proceedings of the 12th European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; pp. 112–125. [Google Scholar]
- Jung, C.; Jiao, L.; Qi, H.; Sun, T. Image deblocking via sparse representation. Signal Process. Image Commun. 2012, 27, 663–677. [Google Scholar] [CrossRef]
- Chang, H.; Ng, M.K.; Zeng, T. Reducing artifacts in JPEG decompression via a learned dictionary. IEEE Trans. Signal Process. 2014, 62, 718–728. [Google Scholar] [CrossRef]
- Dong, C.; Deng, Y.; Loy, C.C.; Tang, X. Compression Artifacts Reduction by a Deep Convolutional Network. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 576–584. [Google Scholar]
- Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155. [Google Scholar] [CrossRef]
- Han, W.; Zhao, B.; Luo, J. Towards Smaller and Stronger: An Edge-Aware Lightweight Segmentation Approach for Unmanned Surface Vehicles in Water Scenarios. Sensors 2023, 23, 4789. [Google Scholar] [CrossRef]
- Coates, W.; Wahlström, J. LEAN: Real-Time Analysis of Resistance Training Using Wearable Computing. Sensors 2023, 23, 4602. [Google Scholar] [CrossRef]
- Xiao, S.; Liu, Z.; Yan, Z.; Wang, M. Grad-MobileNet: A Gradient-Based Unsupervised Learning Method for Laser Welding Surface Defect Classification. Sensors 2023, 23, 4563. [Google Scholar] [CrossRef]
- Zhang, Y.; Li, K.; Li, K.; Zhong, B.; Fu, Y. Residual non-local attention networks for image restoration. arXiv 2019, arXiv:1903.10082. [Google Scholar]
- Tai, Y.; Yang, J.; Liu, X.; Xu, C. MemNet: A persistent memory network for image restoration. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 4549–4557. [Google Scholar]
- Jin, Z.; Iqbal, M.Z.; Zou, W.; Li, X.; Steinbach, E. Dual-Stream Multi-Path Recursive Residual Network for JPEG Image Compression Artifacts Reduction. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 467–479. [Google Scholar] [CrossRef]
- Lin, M.-H.; Yeh, C.-H.; Lin, C.-H.; Huang, C.-H.; Kang, L.-W. Deep Multi-Scale Residual Learning-based Blocking Artifacts Reduction for Compressed Images. In Proceedings of the IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), Hsinchu, Taiwan, 18–20 March 2019; pp. 18–19. [Google Scholar]
- Wang, T.; Chen, M.; Chao, H. A novel deep learning-based method of improving coding efficiency from the decoder-end for high efficiency video coding. In Proceedings of the Data Compression Conference (DCC), Snowbird, UT, USA, 4–7 April 2017; pp. 410–419. [Google Scholar]
- Yang, R.; Xu, M.; Wang, Z. Decoder-side high efficiency video coding quality enhancement with scalable convolutional neural network. In Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), Hong Kong, China, 10–14 July 2017; pp. 817–822. [Google Scholar]
- Yang, R.; Xu, M.; Wang, Z.; Li, T. Multi-frame quality enhancement for compressed video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 6664–6673. [Google Scholar]
- Guan, Z.; Xing, Q.; Xu, M.; Yang, R.; Liu, T.; Wang, Z. MFQE 2.0: A new approach for multi-frame quality enhancement on compressed video. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 43, 946–963. [Google Scholar] [CrossRef] [PubMed]
- Yang, R.; Sun, X.; Xu, M.; Zeng, W. Quality-gated convolutional lstm for enhancing compressed video. In Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), Shanghai, China, 8–12 July 2019; pp. 532–537. [Google Scholar]
- Deng, J.; Wang, L.; Pu, S.; Zhuo, C. Spatio-temporal deformable convolution for compressed video quality enhancement. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 10696–10703. [Google Scholar]
- Zhang, T.; Zhang, Y.; Xin, M.; Liao, J.; Xie, Q. A Light-Weight Network for Small Insulator and Defect Detection Using UAV Imaging Based on Improved YOLOv5. Sensors 2023, 23, 5249. [Google Scholar] [CrossRef]
- Han, N.; Kim, I.-M.; So, J. Lightweight LSTM-Based Adaptive CQI Feedback Scheme for IoT Devices. Sensors 2023, 23, 4929. [Google Scholar] [CrossRef]
- Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv 2016, arXiv:1602.07360. [Google Scholar]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
- Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. arXiv 2017, arXiv:1610.02357. [Google Scholar]
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. arXiv 2014, arXiv:1409.4842. [Google Scholar]
- Xie, S.; Girshick, R.; Dollar, P.; Tu, Z.; He, K. Aggregated Residual Transformations for Deep Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5987–5995. [Google Scholar]
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. arXiv 2019, arXiv:1801.04381. [Google Scholar]
- Zhang, X.; Zhou, X.; Lin, M.; Sun, J. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 6848–6856. [Google Scholar]
- Ma, N.; Zhang, X.; Zheng, H.-T.; Sun, J. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design. arXiv 2018, arXiv:1807.11164. [Google Scholar]
- Huang, G.; Liu, S.; van der Maaten, L.; Weinberger, K.Q. CondenseNet: An Efficient DenseNet Using Learned Group Convolutions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 2752–2761. [Google Scholar]
- Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image Super-Resolution Using Very Deep Residual Channel Attention Networks. arXiv 2018, arXiv:1807.02758. [Google Scholar]
- Zhang, K.; Zuo, W.; Zhang, L. FFDNet: Toward a Fast and Flexible Solution for CNN based Image Denoising. IEEE Trans. Image Process. 2018, 27, 4608–4622. [Google Scholar] [CrossRef]
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
- Wang, Z.; Simoncelli, E.P.; Bovik, A.C. Multiscale structural similarity for image quality assessment. In Proceedings of the Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 9–12 November 2003; pp. 1398–1402. [Google Scholar]
- Sheikh, H.R.; Bovik, A.C. Image information and visual quality. IEEE Trans. Image Process. 2006, 15, 430–444. [Google Scholar] [CrossRef]
- Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. arXiv 2018, arXiv:1801.03924. [Google Scholar]
- Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Change Loy, C.; Qiao, Y.; Tang, X. ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks. arXiv 2018, arXiv:1809.00219. [Google Scholar]
QP | Class | Video Sequence | AR-CNN [12] | DnCNN [13] | RNAN [17] | MFQE1.0 [23] | MFQE2.0 [24] | Fast-MFQE | | | | | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ΔPSNR↑ | ΔSSIM↑ | ΔPSNR↑ | ΔSSIM↑ | ΔPSNR↑ | ΔSSIM↑ | ΔPSNR↑ | ΔSSIM↑ | ΔPSNR↑ | ΔSSIM↑ | ΔPSNR↑ | ΔSSIM↑ | | |
37 | A | Traffic | 0.27 | 0.50 | 0.35 | 0.64 | 0.40 | 0.86 | 0.50 | 0.90 | 0.59 | 1.02 | 0.61 | 1.23 |
PeopleOnStreet | 0.37 | 0.76 | 0.54 | 0.94 | 0.74 | 1.30 | 0.80 | 1.37 | 0.92 | 1.57 | 0.97 | 1.67 | ||
B | Kimono | 0.20 | 0.59 | 0.27 | 0.73 | 0.33 | 0.98 | 0.50 | 1.13 | 0.55 | 1.18 | 0.66 | 1.23 | |
ParkScene | 0.14 | 0.44 | 0.17 | 0.52 | 0.20 | 0.77 | 0.39 | 1.03 | 0.46 | 1.23 | 0.53 | 1.33 | ||
Cactus | 0.20 | 0.41 | 0.28 | 0.53 | 0.35 | 0.76 | 0.44 | 0.88 | 0.50 | 1.00 | 0.64 | 1.16 | ||
BQTerrace | 0.23 | 0.43 | 0.33 | 0.53 | 0.42 | 0.84 | 0.27 | 0.48 | 0.40 | 0.67 | 0.52 | 0.86 | ||
BasketballDrive | 0.23 | 0.51 | 0.33 | 0.63 | 0.43 | 0.92 | 0.41 | 0.80 | 0.47 | 0.83 | 0.74 | 0.91 | ||
C | RaceHorses | 0.23 | 0.49 | 0.31 | 0.70 | 0.39 | 0.99 | 0.34 | 0.55 | 0.39 | 0.80 | 0.53 | 0.93 | |
BQMall | 0.28 | 0.69 | 0.38 | 0.87 | 0.45 | 1.15 | 0.51 | 1.03 | 0.62 | 1.20 | 0.72 | 1.23 | ||
PartyScene | 0.14 | 0.52 | 0.22 | 0.69 | 0.30 | 0.98 | 0.22 | 0.73 | 0.36 | 1.18 | 0.44 | 1.31 | ||
BasketballDrill | 0.23 | 0.48 | 0.42 | 0.89 | 0.50 | 1.07 | 0.48 | 0.90 | 0.58 | 1.20 | 0.63 | 1.26 | ||
D | RaceHorses | 0.26 | 0.59 | 0.34 | 0.80 | 0.42 | 1.02 | 0.51 | 1.13 | 0.59 | 1.43 | 0.68 | 1.47 | |
BQSquare | 0.21 | 0.30 | 0.30 | 0.46 | 0.32 | 0.63 | -0.01 | 0.15 | 0.34 | 0.65 | 0.47 | 0.68 | ||
BlowingBubbles | 0.16 | 0.46 | 0.25 | 0.76 | 0.31 | 1.08 | 0.39 | 1.20 | 0.53 | 1.70 | 0.61 | 1.89 | ||
BasketballPass | 0.26 | 0.63 | 0.38 | 0.83 | 0.46 | 1.08 | 0.63 | 1.38 | 0.73 | 1.55 | 0.88 | 1.67 | ||
E | FourPeople | 0.40 | 0.56 | 0.54 | 0.73 | 0.70 | 0.97 | 0.66 | 0.85 | 0.73 | 0.95 | 0.87 | 0.97 | |
Johnny | 0.24 | 0.21 | 0.47 | 0.54 | 0.56 | 0.88 | 0.55 | 0.55 | 0.60 | 0.68 | 0.71 | 0.73 | ||
KristenAndSara | 0.41 | 0.47 | 0.59 | 0.62 | 0.63 | 0.80 | 0.66 | 0.75 | 0.75 | 0.85 | 0.86 | 0.88 | ||
Average | 0.25 | 0.50 | 0.36 | 0.69 | 0.41 | 0.62 | 0.46 | 0.88 | 0.56 | 1.09 | 0.67 | 1.18 | ||
32 | Average | 0.19 | 0.17 | 0.33 | 0.41 | / | / | 0.43 | 0.58 | 0.52 | 0.68 | 0.63 | 0.69 | |
27 | Average | 0.16 | 0.09 | 0.33 | 0.26 | / | / | 0.40 | 0.34 | 0.49 | 0.42 | 0.55 | 0.48 |
Method | 120p (f/s) | 240p (f/s) | 480p (f/s) | 720p (f/s) | 1080p (f/s) | Param (k)
---|---|---|---|---|---|---
DnCNN [13] | 191.8 | 54.7 | 14.1 | 6.1 | 2.6 | 556
RNAN [17] | 5.6 | 3.2 | 1.4 | 0.6 | 0.08 | 8957
MFQE1.0 [23] | 34.3 | 12.6 | 3.8 | 1.6 | 0.7 | 1788
MFQE2.0 [24] | 56.5 | 25.3 | 8.4 | 3.7 | 1.6 | 255
STDF [26] | 13.27 | 36.4 | 9.1 | 3.8 | 1.6 | 365
Fast-MFQE | 162.1 | 60.3 | 43.1 | 32.3 | 25.7 | 243
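The inference speeds above are throughput figures (frames per second) measured over test sequences. A minimal timing harness of the kind commonly used for such measurements might look like the following sketch (the function and its names are ours, not from the paper; absolute numbers depend entirely on hardware and implementation):

```python
import time

def frames_per_second(enhance, frames, warmup=2):
    """Average throughput (frames/s) of `enhance` over `frames`.

    A few warm-up calls are made first so that one-time setup cost
    (e.g., model loading, GPU kernel compilation) is excluded.
    """
    for f in frames[:warmup]:
        enhance(f)
    start = time.perf_counter()
    for f in frames:
        enhance(f)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed
```

In practice, GPU pipelines also require device synchronization before reading the clock, otherwise asynchronous kernel launches make the measured time misleadingly short.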
Model | Model 1 | Model 2 | Model 3
---|---|---|---
IPPB | Yes | No | No
STFA | Yes | Yes | No
FRN | Yes | Yes | Yes
ΔPSNR/ΔSSIM↑ | 0.68/1.19 | 0.42/0.89 | 0.21/0.45
Inference speed (f/s) | 32.1 | 45.3 | 73.2
QP | Class | Video Sequence | AR-CNN [12] | DnCNN [13] | RNAN [17] | MFQE1.0 [23] | MFQE2.0 [24] | Fast-MFQE | | | | | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
LPIPS↓ | PI↓ | LPIPS↓ | PI↓ | LPIPS↓ | PI↓ | LPIPS↓ | PI↓ | LPIPS↓ | PI↓ | LPIPS↓ | PI↓ | |||
37 | A | Traffic | 0.028 | 0.720 | 0.027 | 0.653 | 0.026 | 0.644 | 0.027 | 0.593 | 0.023 | 0.572 | 0.018 | 0.569 |
PeopleOnStreet | 0.029 | 0.726 | 0.028 | 0.631 | 0.029 | 0.643 | 0.026 | 0.631 | 0.019 | 0.539 | 0.017 | 0.520 | ||
B | Kimono | 0.030 | 0.733 | 0.031 | 0.664 | 0.032 | 0.657 | 0.029 | 0.622 | 0.020 | 0.617 | 0.018 | 0.639 | |
ParkScene | 0.028 | 0.728 | 0.032 | 0.635 | 0.031 | 0.624 | 0.028 | 0.572 | 0.024 | 0.691 | 0.021 | 0.701 | ||
Cactus | 0.031 | 0.699 | 0.026 | 0.586 | 0.025 | 0.590 | 0.027 | 0.630 | 0.030 | 0.616 | 0.027 | 0.593 | ||
BQTerrace | 0.029 | 0.746 | 0.027 | 0.614 | 0.028 | 0.573 | 0.027 | 0.621 | 0.026 | 0.593 | 0.028 | 0.582 | ||
BasketballDrive | 0.030 | 0.732 | 0.031 | 0.720 | 0.029 | 0.680 | 0.030 | 0.675 | 0.025 | 0.623 | 0.021 | 0.641 | ||
C | RaceHorses | 0.027 | 0.751 | 0.026 | 0.709 | 0.025 | 0.716 | 0.031 | 0.695 | 0.018 | 0.641 | 0.019 | 0.611 | |
BQMall | 0.032 | 0.638 | 0.031 | 0.682 | 0.032 | 0.644 | 0.024 | 0.632 | 0.017 | 0.671 | 0.028 | 0.576 | ||
PartyScene | 0.027 | 0.699 | 0.028 | 0.627 | 0.027 | 0.609 | 0.025 | 0.594 | 0.026 | 0.617 | 0.030 | 0.548 | ||
BasketballDrill | 0.032 | 0.721 | 0.031 | 0.662 | 0.032 | 0.614 | 0.030 | 0.601 | 0.027 | 0.614 | 0.021 | 0.509 | ||
D | RaceHorses | 0.030 | 0.758 | 0.029 | 0.631 | 0.028 | 0.629 | 0.027 | 0.622 | 0.030 | 0.601 | 0.019 | 0.627 | |
BQSquare | 0.029 | 0.771 | 0.029 | 0.691 | 0.027 | 0.678 | 0.025 | 0.631 | 0.029 | 0.597 | 0.020 | 0.561 | ||
BlowingBubbles | 0.032 | 0.712 | 0.031 | 0.786 | 0.032 | 0.645 | 0.031 | 0.591 | 0.022 | 0.596 | 0.023 | 0.558 | ||
BasketballPass | 0.031 | 0.733 | 0.027 | 0.673 | 0.028 | 0.593 | 0.032 | 0.573 | 0.023 | 0.610 | 0.026 | 0.606 | ||
E | FourPeople | 0.026 | 0.726 | 0.028 | 0.765 | 0.027 | 0.712 | 0.026 | 0.670 | 0.022 | 0.632 | 0.021 | 0.617 | |
Johnny | 0.028 | 0.761 | 0.027 | 0.668 | 0.027 | 0.623 | 0.028 | 0.640 | 0.024 | 0.621 | 0.023 | 0.618 | ||
KristenAndSara | 0.031 | 0.715 | 0.030 | 0.719 | 0.031 | 0.639 | 0.030 | 0.627 | 0.026 | 0.615 | 0.024 | 0.597 | ||
Average | 0.029 | 0.726 | 0.028 | 0.673 | 0.028 | 0.639 | 0.027 | 0.623 | 0.023 | 0.614 | 0.022 | 0.592 | ||
32 | Average | 0.028 | 0.564 | 0.026 | 0.533 | 0.023 | 0.515 | 0.023 | 0.501 | 0.020 | 0.495 | 0.018 | 0.493 | |
27 | Average | 0.026 | 0.377 | 0.024 | 0.345 | 0.022 | 0.386 | 0.021 | 0.374 | 0.019 | 0.326 | 0.017 | 0.314 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Chen, K.; Chen, J.; Zeng, H.; Shen, X. Fast-MFQE: A Fast Approach for Multi-Frame Quality Enhancement on Compressed Video. Sensors 2023, 23, 7227. https://doi.org/10.3390/s23167227