No-Reference Hyperspectral Image Quality Assessment via Ranking Feature Learning
Abstract
1. Introduction
- We propose a novel no-reference quality assessment metric for HSIs. No-reference quality assessment methods for HSIs are currently scarce, and to the best of the authors' knowledge, the proposed metric is the first to use deep features for HSI quality assessment.
- An S-Transformer is proposed. The S-Transformer is designed to extract deep features suited to the characteristics of HSIs, capturing their inter-spectral similarity through spectral self-attention (a minimal sketch of this mechanism follows this list).
- We choose ranking feature learning as the pre-training task of the S-Transformer. This task ranks pairs of images by their quality, enabling the S-Transformer to better capture the quality-related differences between images (see the ranking-loss sketch after this list).
- The Wasserstein distance is introduced to measure the distance between the distributions of deep features. It reflects the discrepancy between feature distributions more accurately, even when the distributions do not overlap, thereby enhancing the assessment ability of the proposed method (a simplified scoring sketch is given at the end of Section 4).
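For concreteness, below is a minimal sketch of the spectral self-attention idea behind the S-Transformer, written in PyTorch. The module name `SpectralSelfAttention`, the single-head design, and the embedding dimensions are our own illustrative assumptions, not the paper's exact architecture; the point is that tokens correspond to spectral bands, so the attention map directly models inter-spectral similarity.

```python
import torch
import torch.nn as nn

class SpectralSelfAttention(nn.Module):
    """Illustrative single-head self-attention where tokens are spectral
    bands, so the (bands x bands) attention map models inter-spectral
    similarity. Hypothetical module, not the paper's exact design."""
    def __init__(self, dim: int):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_bands, dim); each band's spatial content is
        # assumed to have been embedded into a dim-sized token beforehand.
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.scale  # (batch, bands, bands)
        attn = attn.softmax(dim=-1)                    # inter-spectral weights
        return self.proj(attn @ v)

# Example: 28 spectral bands embedded into 64-dim tokens.
x = torch.randn(2, 28, 64)
print(SpectralSelfAttention(dim=64)(x).shape)  # torch.Size([2, 28, 64])
```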
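The ranking pre-training step can likewise be sketched with PyTorch's standard margin ranking loss. Here `backbone` is a placeholder for any feature network ending in a scalar quality head; only the relative ordering of the pair (e.g., a lightly versus heavily degraded version of the same scene) supervises training, which is the essence of ranking feature learning.

```python
import torch
import torch.nn as nn

rank_loss = nn.MarginRankingLoss(margin=1.0)

def ranking_step(backbone: nn.Module, better: torch.Tensor,
                 worse: torch.Tensor) -> torch.Tensor:
    """One pre-training step on an image pair whose relative quality is
    known (e.g., `worse` is a more heavily degraded copy of `better`)."""
    s_hi = backbone(better).squeeze(-1)  # predicted quality of better image
    s_lo = backbone(worse).squeeze(-1)   # predicted quality of worse image
    target = torch.ones_like(s_hi)       # +1: first input should rank higher
    return rank_loss(s_hi, s_lo, target)
```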
2. Materials and Methods
2.1. Related Work
2.1.1. Full-Reference Hyperspectral Image Quality Assessment
2.1.2. No-Reference Image Quality Assessment
2.1.3. Vision Transformer
2.2. Method
2.2.1. S-Transformer for Extracting Deep Features
2.2.2. Ranking Feature Learning for Pretraining
2.2.3. Wasserstein Distance for Measuring Non-Overlapping Distribution
3. Experiment Design and Results
3.1. Dataset and Experiment Setting
- lambda-Net (denoted as λ-Net) [7].
- Deep tensor admm-net (denoted as ADMM-Net) [8].
- High-resolution dual-domain learning for spectral compressive imaging (denoted as HDNet) [9].
- Mask-guided Spectral-wise Transformer (denoted as MST) [32].
- Coarse-to-fine sparse transformer (denoted as CST) [10].
- Degradation-Aware Unfolding Half-Shuffle Transformer (denoted as DAUHST) [11].
- Residual Degradation Learning Unfolding Framework-MixS2 Transformer (denoted as RDLUF-MixS2) [12].
3.2. Evidence of Quality Sensitivity of Our Deep Features
3.3. Consistency between R-NHSIQA, QSFL, and FR-IQA Metrics
3.4. Comparison with Different Feature Extracting Networks
4. Discussion
- From the results in Sections 3.2 and 3.4, it can be observed that a deep neural network trained with the ranking feature learning task captures feature distributions highly correlated with the degree of image distortion. Therefore, the extent to which a feature distribution deviates from the benchmark distribution can indicate the quality of the reconstructed image (a minimal scoring sketch follows this list).
- Extracting features from both the spectral and spatial domains is more indicative of HSI quality than extracting features from the spatial domain alone. For instance, as shown in Section 3.3, when using the S-Transformer, which extracts spatial and spectral information simultaneously, the quality scores of the reconstructed images are consistent with objective metrics such as SAM. In contrast, when using VGG16, which operates only in the spatial dimension, the quality scores are not entirely consistent with FR-IQA metrics such as SAM.
- The proposed method evaluates image quality faster, and more consistently with FR-IQA metrics, than previous hand-crafted-feature methods such as QSFL [24]. QSFL requires about 10 min to evaluate 350 reconstructed HSIs, whereas our method takes about 60 s. Furthermore, our method eliminates the cumbersome manual feature design process. Lastly, as shown in Section 3.3, our method achieves a stronger correlation with FR-IQA metrics than QSFL.
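The scoring mechanism discussed above can be sketched as follows, using SciPy's one-dimensional Wasserstein distance. This is a simplified per-channel approximation under our own assumptions about feature shapes; the paper's exact aggregation over the deep-feature distributions may differ.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def quality_score(test_feats: np.ndarray, bench_feats: np.ndarray) -> float:
    """No-reference score: mean 1-D Wasserstein distance between the test
    image's deep-feature distribution and a benchmark (high-quality)
    distribution, computed channel by channel. Larger = worse quality."""
    assert test_feats.shape[1] == bench_feats.shape[1]
    per_channel = [wasserstein_distance(test_feats[:, c], bench_feats[:, c])
                   for c in range(test_feats.shape[1])]
    return float(np.mean(per_channel))

# Example: 1000 feature vectors of dimension 64 from each distribution;
# shifting the test features yields a clearly nonzero distance.
rng = np.random.default_rng(0)
bench = rng.normal(size=(1000, 64))
test = rng.normal(loc=0.5, size=(1000, 64))
print(quality_score(test, bench))  # roughly 0.5
```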
5. Conclusions and Limitations
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Borengasser, M.; Hungate, W.S.; Watkins, R. Hyperspectral Remote Sensing: Principles and Applications; CRC Press: Boca Raton, FL, USA, 2007.
- Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790.
- Yuan, Y.; Zheng, X.; Lu, X. Hyperspectral image superresolution by transfer learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 1963–1974.
- Kim, M.H.; Harvey, T.A.; Kittle, D.S.; Rushmeier, H.; Dorsey, J.; Prum, R.O.; Brady, D.J. 3D imaging spectroscopy for measuring hyperspectral patterns on solid objects. ACM Trans. Graph. 2012, 31, 1–11.
- Pan, Z.; Healey, G.; Prasad, M.; Tromberg, B. Face recognition in hyperspectral images. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 1552–1560.
- Nguyen, H.V.; Banerjee, A.; Chellappa, R. Tracking via object reflectance using a hyperspectral video camera. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, San Francisco, CA, USA, 13–18 June 2010; pp. 44–51.
- Miao, X.; Yuan, X.; Pu, Y.; Athitsos, V. λ-net: Reconstruct hyperspectral images from a snapshot measurement. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 4059–4069.
- Ma, J.; Liu, X.-Y.; Shou, Z.; Yuan, X. Deep tensor ADMM-Net for snapshot compressive imaging. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 10223–10232.
- Hu, X.; Cai, Y.; Lin, J.; Wang, H.; Yuan, X.; Zhang, Y.; Timofte, R.; Van Gool, L. HDNet: High-resolution dual-domain learning for spectral compressive imaging. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022; pp. 17542–17551.
- Cai, Y.; Lin, J.; Hu, X.; Wang, H.; Yuan, X.; Zhang, Y.; Timofte, R.; Van Gool, L. Coarse-to-fine sparse transformer for hyperspectral image reconstruction. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; pp. 686–704.
- Cai, Y.; Lin, J.; Wang, H.; Yuan, X.; Ding, H.; Zhang, Y.; Timofte, R.; Van Gool, L. Degradation-aware unfolding half-shuffle transformer for spectral compressive imaging. In Proceedings of the Advances in Neural Information Processing Systems, New Orleans, LA, USA, 28 November–9 December 2022; pp. 37749–37761.
- Dong, Y.; Gao, D.; Qiu, T.; Li, Y.; Yang, M.; Shi, G. Residual degradation learning unfolding framework with mixing priors across spectral and spatial for compressive spectral imaging. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 22262–22271.
- Shi, Q.; Tang, X.; Yang, T.; Liu, R.; Zhang, L. Hyperspectral image denoising using a 3-D attention denoising network. IEEE Trans. Geosci. Remote Sens. 2021, 59, 10348–10363.
- Zhuang, L.; Ng, M.K.; Gao, L.; Wang, Z. Eigen-CNN: Eigenimages plus eigennoise level maps guided network for hyperspectral image denoising. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–18.
- Fu, G.; Xiong, F.; Lu, J.; Zhou, J.; Zhou, J.; Qian, Y. Hyperspectral image denoising via spatial–spectral recurrent transformer. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–14.
- Dong, W.; Zhou, C.; Wu, F.; Wu, J.; Shi, G.; Li, X. Model-guided deep hyperspectral image super-resolution. IEEE Trans. Image Process. 2021, 30, 5754–5768.
- Arun, P.V.; Buddhiraju, K.M.; Porwal, A.; Chanussot, J. CNN-based super-resolution of hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2020, 58, 6106–6121.
- Hu, J.; Zhao, M.; Li, Y. Hyperspectral image super-resolution by deep spatial-spectral exploitation. Remote Sens. 2019, 11, 1229.
- Wagadarikar, A.; John, R.; Willett, R.; Brady, D. Single disperser design for coded aperture snapshot spectral imaging. Appl. Opt. 2008, 47, B44–B51.
- Meng, Z.; Ma, J.; Yuan, X. End-to-end low cost compressive spectral imaging with spatial-spectral self-attention. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 187–204.
- Gehm, M.E.; John, R.; Brady, D.J.; Willett, R.M.; Schulz, T.J. Single-shot compressive spectral imaging with a dual-disperser architecture. Opt. Express 2007, 15, 14013–14027.
- Huang, T.; Dong, W.; Yuan, X.; Wu, J.; Shi, G. Deep Gaussian scale mixture prior for spectral compressive imaging. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual Event, UK, 19–25 June 2021; pp. 16216–16225.
- Huang, T.; Yuan, X.; Dong, W.; Wu, J.; Shi, G. Deep Gaussian scale mixture prior for image reconstruction. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 10778–10794.
- Yang, J.; Zhao, Y.-Q.; Yi, C.; Chan, J.C.-W. No-reference hyperspectral image quality assessment via quality-sensitive features learning. Remote Sens. 2017, 9, 305.
- Kang, L.; Ye, P.; Li, Y.; Doermann, D. Convolutional neural networks for no-reference image quality assessment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 24–27 June 2014; pp. 1733–1740.
- Bosse, S.; Maniry, D.; Müller, K.-R.; Wiegand, T.; Samek, W. Deep neural networks for no-reference and full-reference image quality assessment. IEEE Trans. Image Process. 2017, 27, 206–219.
- Lin, K.-Y.; Wang, G. Hallucinated-IQA: No-reference image quality assessment via adversarial learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 732–741.
- Liu, X.; van de Weijer, J.; Bagdanov, A.D. RankIQA: Learning from rankings for no-reference image quality assessment. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 1040–1049.
- Ou, F.; Wang, Y.; Li, J.; Zhu, G.; Kwong, S. A novel rank learning based no-reference image quality assessment method. IEEE Trans. Multimed. 2021, 24, 4197–4211.
- Liu, X.; van de Weijer, J.; Bagdanov, A.D. Exploiting unlabeled data in CNNs by self-supervised learning to rank. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 1862–1878.
- Zhang, W.; Ma, K.; Yan, J.; Deng, D.; Wang, Z. Blind image quality assessment using a deep bilinear convolutional neural network. IEEE Trans. Circuits Syst. Video Technol. 2018, 30, 36–47.
- Cai, Y.; Lin, J.; Hu, X.; Wang, H.; Yuan, X.; Zhang, Y.; Timofte, R.; Van Gool, L. Mask-guided spectral-wise transformer for efficient hyperspectral image reconstruction. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022; pp. 17502–17511.
- Rubner, Y.; Tomasi, C.; Guibas, L.J. The earth mover’s distance as a metric for image retrieval. Int. J. Comput. Vis. 2000, 40, 99–121.
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
- Zhu, R.; Zhou, F.; Xue, J. MvSSIM: A quality assessment index for hyperspectral images. Neurocomputing 2018, 272, 250–257.
- Das, S.; Bhattacharya, S.; Khatri, P.K. Feature extraction approach for quality assessment of remotely sensed hyperspectral images. J. Appl. Remote Sens. 2020, 14, 026514.
- Yuhas, R.H.; Boardman, J.W.; Goetz, A.F.H. Determination of Semi-Arid Landscape Endmembers and Seasonal Trends Using Convex Geometry Spectral Unmixing Techniques; NTRS: Chicago, IL, USA, 1993.
- Garzelli, A.; Nencini, F. Hypercomplex quality assessment of multi/hyperspectral images. IEEE Geosci. Remote Sens. Lett. 2009, 6, 662–665.
- Zhou, B.; Shao, F.; Meng, X.; Fu, R.; Ho, Y. No-reference quality assessment for pansharpened images via opinion-unaware learning. IEEE Access 2019, 7, 40388–40401.
- Agudelo-Medina, O.A.; Benitez-Restrepo, H.D.; Vivone, G.; Bovik, A.C. Perceptual quality assessment of pan-sharpened images. Remote Sens. 2019, 11, 877.
- Li, S.; Yang, Z.; Li, H. Statistical evaluation of no-reference image quality assessment metrics for remote sensing images. ISPRS Int. J. Geo-Inf. 2017, 6, 133.
- Badal, N.; Soundararajan, R.; Garg, A.; Patil, A. No reference pansharpened image quality assessment through deep feature similarity. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 7235–7247.
- Stępień, I.; Oszust, M. No-reference quality assessment of pan-sharpening images with multi-level deep image representations. Remote Sens. 2022, 14, 1119.
- Xu, L.; Chen, Q. Remote-sensing image usability assessment based on ResNet by combining edge and texture maps. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1825–1834.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is all you need. In Proceedings of the Thirty-First Annual Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017.
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929.
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 10012–10022.
- Zhao, Y.-Q.; Yang, J. Hyperspectral image denoising via sparse representation and low-rank constraint. IEEE Trans. Geosci. Remote Sens. 2014, 53, 296–308.
- Yang, J.; Zhao, Y.-Q.; Chan, J.C.-W.; Kong, S.G. Coupled sparse denoising and unmixing with low-rank constraint for hyperspectral image. IEEE Trans. Geosci. Remote Sens. 2015, 54, 1818–1833.
- Berisha, S.; Nagy, J.G.; Plemmons, R.J. Deblurring and sparse unmixing of hyperspectral images using multiple point spread functions. SIAM J. Sci. Comput. 2015, 37, S389–S406.
- Arad, B.; Timofte, R.; Yahel, R.; Morag, N.; Bernat, A.; Cai, Y.; Lin, J.; Lin, Z.; Wang, H.; Zhang, Y. NTIRE 2022 spectral recovery challenge and data set. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022; pp. 863–881.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
Noise_1 | Noise_2 | Blur_1 | Blur_2 |
---|---|---|---|
5.2114639 | 8.3326150 | 0.2056402 | 0.3490431 |
Metrics | ADMM-Net | CST | DAUHST | λ-Net | RDLUF-MixS2 | HDNet | MST |
---|---|---|---|---|---|---|---|
PSNR ↑ | 32.42039 | 34.90362 | 36.87901 | 28.18398 | 38.08767 | 33.94883 | 34.05283 |
SSIM [34]↑ | 0.85144 | 0.92686 | 0.94296 | 0.74009 | 0.95781 | 0.91139 | 0.91672 |
Q [38]↑ | 0.63312 | 0.71071 | 0.77288 | 0.44178 | 0.81445 | 0.66966 | 0.69030 |
MvSSIM [35]↑ | 0.89628 | 0.95191 | 0.96830 | 0.73761 | 0.97485 | 0.93707 | 0.93725 |
SAM [37]↓ | 14.73825 | 9.15665 | 6.82514 | 31.08199 | 4.83432 | 11.67378 | 10.41821 |
QSFL [24]↓ | 22.43255 | 22.63995 | 22.71708 | 37.24140 | 22.58776 | 23.25978 | 21.93860 |
R-NHSIQA↓ | 3.18004 | 2.25054 | 1.77165 | 7.55191 | 1.50417 | 2.79436 | 2.63320 |
Metrics | SROCC↓ | KROCC↓ | PLCC↓ |
---|---|---|---|
QSFL [24] | −0.43157 | −0.27368 | −0.49587 |
R-NHSIQA | −0.79098 | −0.63158 | −0.64128 |
Metrics | SROCC↓ | KROCC↓ | PLCC↓ |
---|---|---|---|
QSFL [24] | −0.56090 | −0.31578 | −0.48856 |
R-NHSIQA | −0.75188 | −0.54737 | −0.69440 |
Metrics | SROCC↑ | KROCC↑ | PLCC↑ |
---|---|---|---|
QSFL [24] | 0.60150 | 0.44210 | 0.48292 |
R-NHSIQA | 0.77669 | 0.60526 | 0.71525 |
Metrics | SROCC↓ | KROCC↓ | PLCC↓ |
---|---|---|---|
QSFL [24] | −0.61127 | −0.48421 | −0.52506 |
R-NHSIQA | −0.74098 | −0.53421 | −0.74859 |
Metrics | SROCC↓ | KROCC↓ | PLCC↓ |
---|---|---|---|
QSFL [24] | −0.54135 | −0.30526 | −0.33847 |
R-NHSIQA | −0.76616 | −0.59473 | −0.70100 |
Noise_1 | Noise_2 | Blur_1 | Blur_2 |
---|---|---|---|
8.4404264 | 62.4436738 | 0.6140266 | 1.3637162 |
Metrics | ADMM-Net | CST | DAUHST | λ-Net | RDLUF-MixS2 | HDNet | MST |
---|---|---|---|---|---|---|---|
PSNR↑ | 32.42039 | 34.90362 | 36.87901 | 28.18398 | 38.08767 | 33.94883 | 34.05283 |
SSIM [34]↑ | 0.85144 | 0.92686 | 0.94296 | 0.74009 | 0.95781 | 0.91139 | 0.91672 |
Q [38]↑ | 0.63312 | 0.71071 | 0.77288 | 0.44178 | 0.81445 | 0.66966 | 0.69030 |
MvSSIM [35]↑ | 0.89628 | 0.95191 | 0.96830 | 0.73761 | 0.97485 | 0.93707 | 0.93725 |
SAM [37]↓ | 14.73825 | 9.15665 | 6.82514 | 31.08199 | 4.83432 | 11.67378 | 10.41821 |
Ours (VGG16)↓ | 2.51364 | 2.56315 | 2.41836 | 3.35436 | 2.33156 | 2.64563 | 2.47669 |
Ours (S-Transformer)↓ | 3.18004 | 2.25054 | 1.77165 | 7.55191 | 1.50417 | 2.79436 | 2.63320 |
Metrics | SROCC↓ | KROCC↓ | PLCC↓ |
---|---|---|---|
w/VGG16 | −0.64552 | −0.48631 | −0.50153 |
w/S-Transformer | −0.79098 | −0.63158 | −0.64128 |
Metrics | SROCC↓ | KROCC↓ | PLCC↓ |
---|---|---|---|
w/VGG16 | −0.62685 | −0.45210 | −0.48303 |
w/S-Transformer | −0.75188 | −0.54737 | −0.69440 |
Metrics | SROCC↑ | KROCC↑ | PLCC↑ |
---|---|---|---|
w/VGG16 | 0.63541 | 0.46896 | 0.56314 |
w/S-Transformer | 0.77669 | 0.60526 | 0.71525 |
Metrics | SROCC↓ | KROCC↓ | PLCC↓ |
---|---|---|---|
w/VGG16 | −0.62563 | −0.49511 | −0.54630 |
w/S-Transformer | −0.74098 | −0.53421 | −0.74859 |
Metrics | SROCC↓ | KROCC↓ | PLCC↓ |
---|---|---|---|
w/VGG16 | −0.61335 | −0.47895 | −0.52604 |
w/S-Transformer | −0.76616 | −0.59473 | −0.70100 |