Fusion of Deep Convolutional Neural Networks for No-Reference Magnetic Resonance Image Quality Assessment
Abstract
1. Introduction
2. Related Work
3. Proposed Method
Network Fusion
4. Results and Discussion
4.1. Experimental Data
4.2. Evaluation Methodology
4.3. Comparative Evaluation
4.4. Computational Complexity
4.5. Cross-Database Experiments
4.6. Ablation Tests
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Welvaert, M.; Rosseel, Y. On the Definition of Signal-To-Noise Ratio and Contrast-To-Noise Ratio for fMRI Data. PLoS ONE 2013, 8, e77089.
- Yu, S.; Dai, G.; Wang, Z.; Li, L.; Wei, X.; Xie, Y. A consistency evaluation of signal-to-noise ratio in the quality assessment of human brain magnetic resonance images. BMC Med. Imaging 2018, 18, 17.
- Baselice, F.; Ferraioli, G.; Grassia, A.; Pascazio, V. Optimal configuration for relaxation times estimation in complex spin echo imaging. Sensors 2014, 14, 2182–2198.
- Chow, L.S.; Paramesran, R. Review of medical image quality assessment. Biomed. Signal Process. Control 2016, 27, 145–156.
- Chow, L.S.; Rajagopal, H. Modified-BRISQUE as no reference image quality assessment for structural MR images. Magn. Reson. Imaging 2017, 43, 74–87.
- Jang, J.; Bang, K.; Jang, H.; Hwang, D.; Alzheimer’s Disease Neuroimaging Initiative. Quality evaluation of no-reference MR images using multidirectional filters and image statistics. Magn. Reson. Med. 2018, 80, 914–924.
- Obuchowicz, R.; Oszust, M.; Bielecka, M.; Bielecki, A.; Piórkowski, A. Magnetic Resonance Image Quality Assessment by Using Non-Maximum Suppression and Entropy Analysis. Entropy 2020, 22, 220.
- Oszust, M.; Piórkowski, A.; Obuchowicz, R. No-reference image quality assessment of magnetic resonance images with high-boost filtering and local features. Magn. Reson. Med. 2020, 84, 1648–1660.
- Esteban, O.; Birman, D.; Schaer, M.; Koyejo, O.O.; Poldrack, R.A.; Gorgolewski, K.J. MRIQC: Advancing the automatic prediction of image quality in MRI from unseen sites. PLoS ONE 2017, 12, e0184661.
- Pizarro, R.A.; Cheng, X.; Barnett, A.; Lemaitre, H.; Verchinski, B.A.; Goldman, A.L.; Xiao, E.; Luo, Q.; Berman, K.F.; Callicott, J.H.; et al. Automated Quality Assessment of Structural Magnetic Resonance Brain Images Based on a Supervised Machine Learning Algorithm. Front. Neuroinform. 2016, 10, 52.
- Küstner, T.; Liebgott, A.; Mauch, L.; Martirosian, P.; Bamberg, F.; Nikolaou, K.; Yang, B.; Schick, F.; Gatidis, S. Automated reference-free detection of motion artifacts in magnetic resonance images. Magn. Reson. Mater. Phys. Biol. Med. 2018, 31, 243–256.
- Sujit, S.J.; Gabr, R.E.; Coronado, I.; Robinson, M.; Datta, S.; Narayana, P.A. Automated Image Quality Evaluation of Structural Brain Magnetic Resonance Images using Deep Convolutional Neural Networks. In Proceedings of the 9th Cairo International Biomedical Engineering Conference (CIBEC), Cairo, Egypt, 20–22 December 2018; pp. 33–36.
- Moorthy, A.K.; Bovik, A.C. Blind Image Quality Assessment: From Natural Scene Statistics to Perceptual Quality. IEEE Trans. Image Process. 2011, 20, 3350–3364.
- Saad, M.A.; Bovik, A.C.; Charrier, C. Blind Image Quality Assessment: A Natural Scene Statistics Approach in the DCT Domain. IEEE Trans. Image Process. 2012, 21, 3339–3352.
- Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-Reference Image Quality Assessment in the Spatial Domain. IEEE Trans. Image Process. 2012, 21, 4695–4708.
- Ye, P.; Kumar, J.; Kang, L.; Doermann, D. Unsupervised feature learning framework for no-reference image quality assessment. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 1098–1105.
- Xu, J.; Ye, P.; Li, Q.; Du, H.; Liu, Y.; Doermann, D. Blind Image Quality Assessment Based on High Order Statistics Aggregation. IEEE Trans. Image Process. 2016, 25, 4444–4457.
- Xue, W.; Mou, X.; Zhang, L.; Bovik, A.C.; Feng, X. Blind Image Quality Assessment Using Joint Statistics of Gradient Magnitude and Laplacian Features. IEEE Trans. Image Process. 2014, 23, 4850–4862.
- Liu, L.; Hua, Y.; Zhao, Q.; Huang, H.; Bovik, A.C. Blind image quality assessment by relative gradient statistics and adaboosting neural network. Signal Process. Image Commun. 2016, 40, 1–15.
- Oszust, M. No-Reference Image Quality Assessment with Local Gradient Orientations. Symmetry 2019, 11, 95.
- Li, Q.; Lin, W.; Fang, Y. No-Reference Quality Assessment for Multiply-Distorted Images in Gradient Domain. IEEE Signal Process. Lett. 2016, 23, 541–545.
- Oszust, M. No-Reference Image Quality Assessment Using Image Statistics and Robust Feature Descriptors. IEEE Signal Process. Lett. 2017, 24, 1656–1660.
- Zhang, L.; Zhang, L.; Bovik, A.C. A Feature-Enriched Completely Blind Image Quality Evaluator. IEEE Trans. Image Process. 2015, 24, 2579–2591.
- Min, X.; Gu, K.; Zhai, G.; Liu, J.; Yang, X.; Chen, C.W. Blind Quality Assessment Based on Pseudo-Reference Image. IEEE Trans. Multimed. 2018, 20, 2049–2062.
- Bosse, S.; Maniry, D.; Wiegand, T.; Samek, W. A deep neural network for image quality assessment. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 3773–3777.
- Kim, J.; Lee, S. Fully Deep Blind Image Quality Predictor. IEEE J. Sel. Top. Signal Process. 2017, 11, 206–220.
- Ma, K.; Liu, W.; Liu, T.; Wang, Z.; Tao, D. dipIQ: Blind Image Quality Assessment by Learning-to-Rank Discriminable Image Pairs. IEEE Trans. Image Process. 2017, 26, 3951–3964.
- Zeng, H.; Zhang, L.; Bovik, A.C. A Probabilistic Quality Representation Approach to Deep Blind Image Quality Prediction. arXiv 2017, arXiv:1708.08190.
- Zhuang, F.; Qi, Z.; Duan, K.; Xi, D.; Zhu, Y.; Zhu, H.; Xiong, H.; He, Q. A Comprehensive Survey on Transfer Learning. Proc. IEEE 2021, 109, 43–76.
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2015, arXiv:1409.1556.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. arXiv 2015, arXiv:1512.03385.
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.E.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015.
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 2818–2826.
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. arXiv 2019, arXiv:1801.04381.
- Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. arXiv 2018, arXiv:1608.06993.
- Li, Y.; Ye, X.; Li, Y. Image quality assessment using deep convolutional networks. AIP Adv. 2017, 7, 125324.
- He, Q.; Li, D.; Jiang, T.; Jiang, M. Quality Assessment for Tone-Mapped HDR Images Using Multi-Scale and Multi-Layer Information. In Proceedings of the 2018 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), San Diego, CA, USA, 23–27 July 2018; pp. 1–6.
- Lin, H.; Hosu, V.; Saupe, D. DeepFL-IQA: Weak Supervision for Deep IQA Feature Learning. arXiv 2020, arXiv:2001.08113.
- Ieremeiev, O.; Lukin, V.; Okarma, K.; Egiazarian, K. Full-Reference Quality Metric Based on Neural Network to Assess the Visual Quality of Remote Sensing Images. Remote Sens. 2020, 12, 2349.
- Maqsood, M.; Nazir, F.; Khan, U.; Aadil, F.; Jamal, H.; Mehmood, I.; Song, O. Transfer Learning Assisted Classification and Detection of Alzheimer’s Disease Stages Using 3D MRI Scans. Sensors 2019, 19, 2645.
- Griswold, M.; Heidemann, R.; Jakob, P. Direct parallel imaging reconstruction of radially sampled data using GRAPPA with relative shifts. In Proceedings of the 11th Annual Meeting of the ISMRM, Toronto, ON, Canada, 10–16 July 2003.
- Breuer, F.; Griswold, M.; Jakob, P.; Kellman, P. Dynamic autocalibrated parallel imaging using temporal GRAPPA (TGRAPPA). Magn. Reson. Med. 2005, 53, 981–985.
- Reykowski, A.; Blasche, M. Mode Matrix—A Generalized Signal Combiner For Parallel Imaging Arrays. In Proceedings of the 12th Annual Meeting of the International Society for Magnetic Resonance in Medicine, Kyoto, Japan, 15–21 May 2004.
- Deshmane, A.; Gulani, V.; Griswold, M.; Seiberlich, N. Parallel MR imaging. J. Magn. Reson. Imaging 2012, 36, 55–72.
- Sheikh, H.R.; Sabir, M.F.; Bovik, A.C. A Statistical Evaluation of Recent Full Reference Image Quality Assessment Algorithms. IEEE Trans. Image Process. 2006, 15, 3440–3451.
- Ma, K.; Liu, W.; Zhang, K.; Duanmu, Z.; Wang, Z.; Zuo, W. End-to-End Blind Image Quality Assessment Using Deep Neural Networks. IEEE Trans. Image Process. 2018, 27, 1202–1213.
- Gu, K.; Zhai, G.; Yang, X.; Zhang, W. Using Free Energy Principle For Blind Image Quality Assessment. IEEE Trans. Multimed. 2015, 17, 50–63.
- Zhang, Z.; Dai, G.; Liang, X.; Yu, S.; Li, L.; Xie, Y. Can Signal-to-Noise Ratio Perform as a Baseline Indicator for Medical Image Quality Assessment. IEEE Access 2018, 6, 11534–11543.
- Gu, K.; Zhai, G.; Yang, X.; Zhang, W. Hybrid No-Reference Quality Metric for Singly and Multiply Distorted Images. IEEE Trans. Broadcast. 2014, 60, 555–567.
- Zhu, X.; Milanfar, P. Automatic Parameter Selection for Denoising Algorithms Using a No-Reference Measure of Image Content. IEEE Trans. Image Process. 2010, 19, 3116–3132.
- Leclaire, A.; Moisan, L. No-Reference Image Quality Assessment and Blind Deblurring with Sharpness Metrics Exploiting Fourier Phase Information. J. Math. Imaging Vis. 2015, 52, 145–172.
- Chang, C.C.; Lin, C.J. LIBSVM: A Library for Support Vector Machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 1–27.
Method | DB1 SRCC | DB1 KRCC | DB1 PLCC | DB1 RMSE | DB2 SRCC | DB2 KRCC | DB2 PLCC | DB2 RMSE | Overall SRCC | Overall KRCC | Overall PLCC | Overall RMSE |
---|---|---|---|---|---|---|---|---|---|---|---|---|
NOMRIQA * | 0.7030 | 0.5527 | 0.7978 | 0.4322 | 0.8040 | 0.6087 | 0.8737 | 0.4605 | 0.7535 | 0.5807 | 0.8358 | 0.4464 |
HOSA | 0.4804 | 0.3909 | 0.6997 | 0.5318 | 0.8756 | 0.7052 | 0.9276 | 0.3388 | 0.6780 | 0.5481 | 0.8137 | 0.4353 |
NOREQI | 0.4359 | 0.2922 | 0.8045 | 0.4182 | 0.8675 | 0.6984 | 0.9072 | 0.3833 | 0.6517 | 0.4953 | 0.8559 | 0.4008 |
IL-NIQE | 0.1695 | 0.1275 | 0.3619 | 0.6674 | 0.1197 | 0.0836 | 0.3090 | 0.8821 | 0.1446 | 0.1056 | 0.3355 | 0.7748 |
GM-LOG | 0.4673 | 0.3424 | 0.6515 | 0.4779 | 0.8854 | 0.7123 | 0.9010 | 0.4091 | 0.6764 | 0.5274 | 0.7763 | 0.4435 |
GWHGLBP | 0.5075 | 0.3935 | 0.6886 | 0.5257 | 0.8726 | 0.6927 | 0.8947 | 0.4080 | 0.6901 | 0.5431 | 0.7917 | 0.4669 |
BRISQUE | 0.4610 | 0.3648 | 0.6100 | 0.5311 | 0.8544 | 0.6738 | 0.8951 | 0.4076 | 0.6577 | 0.5193 | 0.7526 | 0.4694 |
SISBLIM | 0.3976 | 0.2776 | 0.6240 | 0.5449 | 0.7216 | 0.5419 | 0.7592 | 0.6047 | 0.5596 | 0.4098 | 0.6916 | 0.5748 |
metricQ | 0.2596 | 0.1657 | 0.2792 | 0.6709 | 0.5066 | 0.3701 | 0.5227 | 0.7791 | 0.3831 | 0.2679 | 0.4010 | 0.7250 |
BPRI | 0.2412 | 0.1890 | 0.4785 | 0.5756 | 0.1317 | 0.0973 | 0.4928 | 0.7883 | 0.1865 | 0.1432 | 0.4857 | 0.6820 |
SINDEX | 0.2939 | 0.2112 | 0.3243 | 0.7034 | 0.2673 | 0.1933 | 0.3185 | 0.8874 | 0.2806 | 0.2023 | 0.3214 | 0.7954 |
NFERM | 0.5073 | 0.4091 | 0.7662 | 0.4491 | 0.8833 | 0.7087 | 0.9157 | 0.3872 | 0.6953 | 0.5589 | 0.8410 | 0.4182 |
SEER | 0.4776 | 0.3574 | 0.7108 | 0.5267 | 0.8938 | 0.7335 | 0.9196 | 0.3594 | 0.6857 | 0.5455 | 0.8152 | 0.4431 |
MEON | 0.2518 | 0.1879 | 0.3439 | 0.6428 | 0.5851 | 0.4001 | 0.6194 | 0.7426 | 0.4185 | 0.2940 | 0.4817 | 0.6927 |
DEEPIQ | 0.1133 | 0.0827 | 0.5902 | 0.5707 | 0.2837 | 0.2078 | 0.5393 | 0.7822 | 0.1985 | 0.1453 | 0.5648 | 0.6765 |
SNRTOI * | 0.1321 | 0.0728 | 0.4094 | 0.6784 | 0.1016 | 0.0720 | 0.3169 | 0.8930 | 0.1169 | 0.0724 | 0.3632 | 0.7857 |
ENMIQA * | 0.3630 | 0.2479 | 0.5093 | 0.5873 | 0.7941 | 0.6119 | 0.8313 | 0.5130 | 0.5786 | 0.4299 | 0.6703 | 0.5502 |
R18GR50M * | 0.6299 | 0.5012 | 0.7999 | 0.4381 | 0.8998 | 0.7398 | 0.9270 | 0.3465 | 0.7649 | 0.6205 | 0.8635 | 0.3923 |
R50GR18 * | 0.7423 | 0.6039 | 0.8206 | 0.4238 | 0.9083 | 0.7490 | 0.9294 | 0.3479 | 0.8253 | 0.6765 | 0.8750 | 0.3859 |
MR50 * | 0.7036 | 0.5740 | 0.8576 | 0.3865 | 0.8919 | 0.7176 | 0.9241 | 0.3560 | 0.7978 | 0.6458 | 0.8909 | 0.3712 |
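The agreement criteria reported above (SRCC, KRCC, PLCC, RMSE) compare objective predictions against subjective quality scores. As a minimal NumPy sketch (function names are illustrative; note that published IQA protocols often pass predictions through a fitted logistic mapping before computing PLCC and RMSE, which this sketch omits):

```python
import numpy as np

def _ranks(x):
    # Average ranks, with ties sharing the mean rank.
    order = np.argsort(x)
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)
    for v in np.unique(x):
        mask = x == v
        ranks[mask] = ranks[mask].mean()
    return ranks

def plcc(a, b):
    # Pearson linear correlation coefficient.
    a, b = np.asarray(a, float), np.asarray(b, float)
    a, b = a - a.mean(), b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

def srcc(a, b):
    # Spearman rank-order correlation: Pearson correlation of the ranks.
    return plcc(_ranks(np.asarray(a, float)), _ranks(np.asarray(b, float)))

def krcc(a, b):
    # Kendall rank correlation (tau-a; no tie correction).
    a, b = np.asarray(a, float), np.asarray(b, float)
    n = len(a)
    s = sum(np.sign(a[i] - a[j]) * np.sign(b[i] - b[j])
            for i in range(n) for j in range(i + 1, n))
    return float(s / (n * (n - 1) / 2))

def rmse(a, b):
    # Root-mean-square error between predictions and subjective scores.
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))
```

Higher SRCC/KRCC/PLCC and lower RMSE indicate better agreement with the subjective ratings, which is how the methods in the tables are ranked.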
Method | Time (s) |
---|---|
NOMRIQA * | 0.1564 |
HOSA | 0.2992 |
NOREQI | 0.1315 |
IL-NIQE | 4.4956 |
GM-LOG | 0.0138 |
GWHGLBP | 0.0336 |
BRISQUE | 0.0232 |
SISBLIM | 0.7821 |
metricQ | 0.1994 |
BPRI | 0.1473 |
SINDEX | 0.0141 |
NFERM | 9.5449 |
SEER | 0.3473 |
MEON | 0.0775 |
DEEPIQ | 1.2746 |
SNRTOI * | 0.0018 |
ENMIQA * | 0.0737 |
R18GR50M * | 0.0293 |
R50GR18 * | 0.0237 |
MR50 * | 0.0226 |
Method | Train DB1/Test DB2: SRCC | KRCC | PLCC | RMSE | Train DB2/Test DB1: SRCC | KRCC | PLCC | RMSE | Overall: SRCC | KRCC | PLCC | RMSE |
---|---|---|---|---|---|---|---|---|---|---|---|---|
NOMRIQA * | 0.7348 | 0.5280 | 0.7861 | 0.5979 | 0.6116 | 0.4436 | 0.7113 | 0.5116 | 0.6732 | 0.4858 | 0.7487 | 0.5548 |
HOSA | 0.7625 | 0.5612 | 0.7968 | 0.5845 | 0.4550 | 0.3311 | 0.6428 | 0.5574 | 0.6088 | 0.4462 | 0.7198 | 0.5710 |
NOREQI | 0.7259 | 0.5312 | 0.7436 | 0.6468 | 0.5082 | 0.3719 | 0.7019 | 0.5183 | 0.6171 | 0.4516 | 0.7228 | 0.5826 |
IL-NIQE | 0.0050 | 0.0044 | 0.1773 | 0.9520 | 0.1796 | 0.1162 | 0.3465 | 0.6826 | 0.0923 | 0.0603 | 0.2619 | 0.8173 |
GM-LOG | 0.7064 | 0.5134 | 0.7420 | 0.6486 | 0.2721 | 0.1774 | 0.1379 | 0.7207 | 0.4893 | 0.3454 | 0.4400 | 0.6847 |
GWHGLBP | 0.6247 | 0.4315 | 0.6656 | 0.7220 | 0.5207 | 0.3694 | 0.6189 | 0.5716 | 0.5727 | 0.4005 | 0.6423 | 0.6468 |
BRISQUE | 0.6528 | 0.4640 | 0.7294 | 0.6618 | 0.4895 | 0.3353 | 0.6172 | 0.5725 | 0.5712 | 0.3997 | 0.6733 | 0.6172 |
SISBLIM | 0.6836 | 0.5037 | 0.6746 | 0.7140 | 0.2885 | 0.1820 | 0.5733 | 0.5962 | 0.4861 | 0.3429 | 0.6240 | 0.6551 |
metricQ | 0.4642 | 0.3271 | 0.3931 | 0.8942 | 0.2300 | 0.1520 | 0.2243 | 0.7091 | 0.3471 | 0.2396 | 0.3087 | 0.8017 |
BPRI | 0.0747 | 0.0558 | 0.4592 | 0.8593 | 0.1515 | 0.1120 | 0.3440 | 0.6832 | 0.1131 | 0.0839 | 0.4016 | 0.7713 |
SINDEX | 0.2807 | 0.1935 | 0.3604 | 0.9024 | 0.2802 | 0.1962 | 0.3307 | 0.6869 | 0.2805 | 0.1949 | 0.3456 | 0.7947 |
NFERM | 0.6718 | 0.4856 | 0.7240 | 0.6672 | 0.4660 | 0.3536 | 0.4637 | 0.6447 | 0.5689 | 0.4196 | 0.5939 | 0.6560 |
SEER | 0.7855 | 0.5960 | 0.8356 | 0.5314 | 0.5397 | 0.4053 | 0.7341 | 0.4941 | 0.6626 | 0.5007 | 0.7849 | 0.5128 |
MEON | 0.5314 | 0.3701 | 0.5148 | 0.8293 | 0.1247 | 0.0771 | 0.1401 | 0.7205 | 0.3281 | 0.2236 | 0.3275 | 0.7749 |
DEEPIQ | 0.3620 | 0.2528 | 0.5778 | 0.7895 | 0.3030 | 0.2037 | 0.4041 | 0.6656 | 0.3325 | 0.2283 | 0.4910 | 0.7276 |
SNRTOI * | 0.0681 | 0.0443 | 0.1033 | 0.9622 | 0.1828 | 0.1245 | 0.2262 | 0.7088 | 0.1255 | 0.0844 | 0.1648 | 0.8355 |
ENMIQA * | 0.7631 | 0.5736 | 0.8040 | 0.5753 | 0.3540 | 0.2428 | 0.6741 | 0.5375 | 0.5586 | 0.4082 | 0.7391 | 0.5564 |
R18GR50M * | 0.8451 | 0.6574 | 0.8911 | 0.4390 | 0.6098 | 0.4402 | 0.7231 | 0.5026 | 0.7275 | 0.5488 | 0.8071 | 0.4708 |
R50GR18 * | 0.8638 | 0.6684 | 0.8930 | 0.4354 | 0.6163 | 0.4502 | 0.7306 | 0.4968 | 0.7401 | 0.5593 | 0.8118 | 0.4661 |
MR50 * | 0.8568 | 0.6709 | 0.8941 | 0.4332 | 0.6299 | 0.4686 | 0.7345 | 0.4938 | 0.7434 | 0.5697 | 0.8143 | 0.4635 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Stępień, I.; Obuchowicz, R.; Piórkowski, A.; Oszust, M. Fusion of Deep Convolutional Neural Networks for No-Reference Magnetic Resonance Image Quality Assessment. Sensors 2021, 21, 1043. https://doi.org/10.3390/s21041043