A Deep Unfolding Network for Multispectral and Hyperspectral Image Fusion
Abstract
1. Introduction
- We propose a new observation model for the MS/HS fusion task using the convolutional sparse coding (CSC) technique. Specifically, we separately model the HS and MS images with CSC, incorporating two types of features: common features and unique features. These follow from two key observations. First, the HS and MS images capture the same scene, indicating the presence of common features. Second, the images are generated by distinct sensors on the satellite, implying unique features specific to each image.
- We reformulate the observation model as an optimization problem and impose implicit priors on both the common and unique features, which allows us to leverage additional information and improve the quality of the fused image. We then develop a proximal gradient algorithm to solve the optimization problem efficiently and unfold it into a deep network. Each module of the network corresponds to a specific operation of the iterative algorithm, which significantly improves the interpretability of the network.
- Experimental results on benchmark datasets demonstrate the superiority of our network both quantitatively and qualitatively compared with other state-of-the-art methods.
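The proximal gradient scheme behind the second contribution follows the classic ISTA pattern: a gradient step on the data-fidelity term followed by a proximal step on the prior. A minimal sketch, assuming a generic linear observation model y = Ax with an explicit ℓ1 prior standing in for the paper's learned implicit priors (all names here are illustrative, not the authors' implementation):

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1 (elementwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def proximal_gradient(A, y, lam, step, iters=100):
    """ISTA: minimize 0.5 * ||A x - y||^2 + lam * ||x||_1.

    In a deep unfolding network, each pass of this loop becomes one network
    stage, and the proximal operator is replaced by a learnable module
    (e.g., a small CNN) that encodes an implicit prior on the features.
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)                         # data-fidelity gradient
        x = soft_threshold(x - step * grad, step * lam)  # proximal step
    return x
```

With A set to the identity and a unit step, the fixed point is simply the soft-thresholded observation, which is a convenient sanity check for the iteration.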
2. Proposed Method
2.1. Problem Formulation
2.2. Model Optimization
2.2.1. U-Subproblem
2.2.2. V-Subproblem
2.2.3. C-Subproblem
2.3. Network Architecture
2.4. Network Training
3. Experimental Results
3.1. Evaluation at Reduced Resolution
3.2. Evaluation at Full Resolution
3.3. Generalization to New Satellites
4. Parameters and Time Comparisons
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Yokoya, N.; Grohnfeldt, C.; Chanussot, J. Hyperspectral and Multispectral Data Fusion: A comparative review of the recent literature. IEEE Geosci. Remote Sens. Mag. 2017, 5, 29–56.
- Borsoi, R.; Imbiriba, T.; Bermudez, J. Super-Resolution for Hyperspectral and Multispectral Image Fusion Accounting for Seasonal Spectral Variability. IEEE Trans. Image Process. 2020, 29, 116–127.
- Hong, D.; Yao, J.; Li, C.; Meng, D.; Yokoya, N.; Chanussot, J. Decoupled-and-Coupled Networks: Self-Supervised Hyperspectral Image Super-Resolution with Subpixel Fusion. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5527812.
- Liu, N.; Li, W.; Tao, R. Geometric Low-Rank Tensor Approximation for Remotely Sensed Hyperspectral and Multispectral Imagery Fusion. In Proceedings of the ICASSP 2022—2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Virtual, 7–13 May 2022; pp. 2819–2823.
- Wang, K.; Wang, Y.; Zhao, X.; Chan, J.; Xu, Z.; Meng, D. Hyperspectral and Multispectral Image Fusion via Nonlocal Low-Rank Tensor Decomposition and Spectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2020, 58, 7654–7671.
- Jin, W.; Wang, M.; Wang, W.; Yang, G. FS-Net: Four-Stream Network With Spatial–Spectral Representation Learning for Hyperspectral and Multispectral Image Fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 8845–8857.
- Aiazzi, B.; Baronti, S.; Selva, M. Improving component substitution pansharpening through multivariate regression of MS + Pan data. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3230–3239.
- Liu, J. Smoothing filter-based intensity modulation: A spectral preserve image fusion technique for improving spatial details. Int. J. Remote Sens. 2000, 21, 3461–3472.
- Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A.; Selva, M. MTF-tailored multiscale fusion of high-resolution MS and Pan imagery. Photogramm. Eng. Remote Sens. 2006, 72, 591–596.
- Simoes, M.; Bioucas-Dias, J.; Almeida, L.; Chanussot, J. A convex formulation for hyperspectral image superresolution via subspace-based regularization. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3373–3388.
- Yokoya, N.; Yairi, T.; Iwasaki, A. Coupled nonnegative matrix factorization unmixing for hyperspectral and multispectral data fusion. IEEE Trans. Geosci. Remote Sens. 2012, 50, 528–537.
- Akhtar, N.; Shafait, F.; Mian, A. Sparse spatio-spectral representation for hyperspectral image super-resolution. In Proceedings of the Computer Vision—ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Proceedings, Part VII; pp. 63–78.
- Lanaras, C.; Baltsavias, E.; Schindler, K. Hyperspectral super-resolution by coupled spectral unmixing. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 3586–3594.
- Eismann, M. Resolution Enhancement of Hyperspectral Imagery Using Maximum a Posteriori Estimation with a Stochastic Mixing Model. Ph.D. Thesis, University of Dayton, Dayton, OH, USA, 2004.
- Wei, Q.; Dobigeon, N.; Tourneret, J. Fast fusion of multi-band images based on solving a Sylvester equation. IEEE Trans. Image Process. 2015, 24, 4109–4121.
- Li, S.; Dian, R.; Fang, L.; Bioucas-Dias, J. Fusing hyperspectral and multispectral images via coupled sparse tensor factorization. IEEE Trans. Image Process. 2018, 27, 4118–4130.
- Xu, T.; Huang, T.; Deng, L.; Zhao, X.; Huang, J. Hyperspectral image superresolution using unidirectional total variation with tucker decomposition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 4381–4398.
- Laben, C.; Brower, B. Process for Enhancing the Spatial Resolution of Multispectral Imagery Using Pan-Sharpening. U.S. Patent 6,011,875, 4 January 2000.
- Carper, W.; Lillesand, T.; Kiefer, R. The use of intensity-hue-saturation transformations for merging SPOT panchromatic and multispectral image data. Photogramm. Eng. Remote Sens. 1990, 56, 459–467.
- Selva, M.; Aiazzi, B.; Butera, F.; Chiarantini, L.; Baronti, S. Hyper-sharpening: A first approach on SIM-GA data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 3008–3024.
- Nascimento, J.; Dias, J. Vertex component analysis: A fast algorithm to unmix hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2005, 43, 898–910.
- Hong, D.; Yokoya, N.; Chanussot, J.; Zhu, X. CoSpace: Common Subspace Learning From Hyperspectral-Multispectral Correspondences. IEEE Trans. Geosci. Remote Sens. 2019, 57, 4349–4359.
- Lee, D.; Seung, H. Learning the parts of objects by non-negative matrix factorization. Nature 1999, 401, 788–791.
- Yang, J.; Zhao, Y.; Chan, J. Hyperspectral and multispectral image fusion via deep two-branches convolutional neural network. Remote Sens. 2018, 10, 800.
- Li, Y.; Hu, J.; Zhao, X.; Xie, W.; Li, J. Hyperspectral image super-resolution using deep convolutional neural network. Neurocomputing 2017, 266, 29–41.
- Wang, Q.; Li, Q.; Li, X. Hyperspectral image superresolution using spectrum and feature context. IEEE Trans. Ind. Electron. 2020, 68, 11276–11285.
- Wei, W.; Nie, J.; Li, Y.; Zhang, L.; Zhang, Y. Deep recursive network for hyperspectral image super-resolution. IEEE Trans. Comput. Imaging 2020, 6, 1233–1244.
- Hu, J.; Huang, T.; Deng, L.; Dou, H.; Hong, D.; Vivone, G. Fusformer: A transformer-based fusion network for hyperspectral image super-resolution. IEEE Geosci. Remote Sens. Lett. 2022, 19, 6012305.
- Han, X.; Yu, J.; Luo, J.; Sun, W. Hyperspectral and multispectral image fusion using cluster-based multi-branch BP neural networks. Remote Sens. 2019, 11, 1173.
- Dian, R.; Li, S.; Kang, X. Regularizing hyperspectral and multispectral image fusion by CNN denoiser. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 1124–1135.
- Hu, J.; Huang, T.; Deng, L.; Jiang, T.; Vivone, G.; Chanussot, J. Hyperspectral image super-resolution via deep spatiospectral attention convolutional neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 7251–7265.
- Palsson, F.; Sveinsson, J.; Ulfarsson, M. Sentinel-2 image fusion using a deep residual network. Remote Sens. 2018, 10, 1290.
- Chen, H.; Yokoya, N.; Wu, C.; Du, B. Unsupervised Multimodal Change Detection Based on Structural Relationship Graph Representation Learning. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5635318.
- Chen, H.; Yokoya, N.; Chini, M. Fourier domain structural relationship analysis for unsupervised multimodal change detection. ISPRS J. Photogramm. Remote Sens. 2023, 198, 99–114.
- Xie, Q.; Zhou, M.; Zhao, Q.; Meng, D.; Zuo, W.; Xu, Z. Multispectral and hyperspectral image fusion by MS/HS fusion net. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1585–1594.
- Dong, W.; Zhou, C.; Wu, F.; Wu, J.; Shi, G.; Li, X. Model-guided deep hyperspectral image super-resolution. IEEE Trans. Image Process. 2021, 30, 5754–5768.
- Ma, Q.; Jiang, J.; Liu, X.; Ma, J. Deep Unfolding Network for Spatiospectral Image Super-Resolution. IEEE Trans. Comput. Imaging 2022, 8, 28–40.
- Sun, Y.; Liu, J.; Yang, J.; Xiao, Z.; Wu, Z. A deep image prior-based interpretable network for hyperspectral image fusion. Remote Sens. Lett. 2021, 12, 1250–1259.
- Deng, X.; Dragotti, P. Deep convolutional neural network for multi-modal image restoration and fusion. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 3333–3348.
- Li, M.; Cao, X.; Zhao, Q.; Zhang, L.; Meng, D. Online rain/snow removal from surveillance videos. IEEE Trans. Image Process. 2021, 30, 2029–2044.
- Vivone, G.; Garzelli, A.; Xu, Y.; Liao, W.; Chanussot, J. Panchromatic and Hyperspectral Image Fusion: Outcome of the 2022 WHISPERS Hyperspectral Pansharpening Challenge. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 166–179.
- Yasuma, F.; Mitsunaga, T.; Iso, D.; Nayar, S. Generalized assorted pixel camera: Postcapture control of resolution, dynamic range, and spectrum. IEEE Trans. Image Process. 2010, 19, 2241–2253.
- Chakrabarti, A.; Zickler, T. Statistics of real-world hyperspectral images. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 193–200.
- Yokoya, N.; Iwasaki, A. Airborne Hyperspectral Data over Chikusei; Technical Report SAL-2016-05-27; Space Application Laboratory, the University of Tokyo: Tokyo, Japan, 2016; Volume 5, p. 5.
- Yuhas, R.; Boardman, J.; Goetz, A. Determination of semi-arid landscape endmembers and seasonal trends using convex geometry spectral unmixing techniques. In Summaries of the 4th Annual JPL Airborne Geoscience Workshop. Volume 1: AVIRIS Workshop; JPL: Pasadena, CA, USA, 1993.
- Wald, L. Quality of high resolution synthesised images: Is there a simple criterion? In Proceedings of the Third Conference “Fusion of Earth Data: Merging Point Measurements, Raster Maps and Remotely Sensed Images”, Sophia Antipolis, France, 26–28 January 2000; pp. 99–103.
- Wang, Z.; Bovik, A.; Sheikh, H.; Simoncelli, E. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
- Garzelli, A.; Nencini, F. Hypercomplex quality assessment of multi/hyperspectral images. IEEE Geosci. Remote Sens. Lett. 2009, 6, 662–665.
- Wang, Z.; Bovik, A. A universal image quality index. IEEE Signal Process. Lett. 2002, 9, 81–84.
- Vivone, G.; Alparone, L.; Chanussot, J.; Dalla Mura, M.; Garzelli, A.; Licciardi, G.; Restaino, R.; Wald, L. A Critical Comparison Among Pansharpening Algorithms. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2565–2586.
- Alparone, L.; Aiazzi, B.; Baronti, S.; Garzelli, A.; Nencini, F.; Selva, M. Multispectral and panchromatic data fusion assessment without reference. Photogramm. Eng. Remote Sens. 2008, 74, 193–200.
Method | PSNR | SSIM | SAM | ERGAS | SCC | Q2n |
---|---|---|---|---|---|---|
GSA [7] | 37.578 | 0.976 | 6.621 | 3.483 | 0.992 | 0.717 |
SFIM-HS [8] | 30.179 | 0.941 | 10.140 | 22.596 | 0.967 | 0.645 |
GLP-HS [9] | 32.956 | 0.962 | 6.106 | 5.136 | 0.985 | 0.670 |
CNMF [11] | 36.647 | 0.971 | 8.251 | 3.847 | 0.985 | 0.708 |
S3RHSR [12] | 31.564 | 0.922 | 19.770 | 7.500 | 0.977 | 0.657 |
HySure [10] | 35.879 | 0.968 | 7.274 | 4.056 | 0.989 | 0.742 |
FUSE [15] | 27.247 | 0.934 | 5.035 | 9.248 | 0.955 | 0.601 |
UTV [17] | 29.952 | 0.815 | 12.209 | 7.988 | 0.959 | 0.441 |
CSTF [16] | 33.694 | 0.954 | 7.326 | 4.948 | 0.985 | 0.621 |
HSRnet [31] | 42.796 | 0.994 | 2.814 | 2.147 | 0.995 | 0.817 |
MHFnet [35] | 41.308 | 0.987 | 5.573 | 2.802 | 0.989 | 0.732 |
Ours | 43.485 | 0.993 | 3.319 | 1.986 | 0.995 | 0.799 |
Method | PSNR | SSIM | SAM | ERGAS | SCC | Q2n |
---|---|---|---|---|---|---|
GSA [7] | 32.519 | 0.881 | 3.745 | 5.511 | 0.698 | 0.630 |
SFIM-HS [8] | 28.468 | 0.885 | 4.392 | 60.326 | 0.782 | 0.617 |
GLP-HS [9] | 32.418 | 0.917 | 3.288 | 5.717 | 0.922 | 0.647 |
CNMF [11] | 31.163 | 0.883 | 3.457 | 6.059 | 0.876 | 0.560 |
S3RHSR [12] | 25.936 | 0.751 | 10.115 | 12.367 | 0.749 | 0.441 |
HySure [10] | 30.464 | 0.842 | 3.716 | 7.020 | 0.606 | 0.555 |
FUSE [15] | 31.871 | 0.906 | 3.251 | 6.170 | 0.759 | 0.627 |
UTV [17] | 25.085 | 0.652 | 9.130 | 12.253 | 0.712 | 0.261 |
CSTF [16] | 29.701 | 0.829 | 4.853 | 7.376 | 0.871 | 0.460 |
HSRnet [31] | 35.685 | 0.948 | 2.131 | 3.668 | 0.955 | 0.778 |
MHFnet [35] | 33.611 | 0.938 | 3.587 | 5.345 | 0.936 | 0.659 |
Ours | 36.462 | 0.955 | 2.242 | 3.174 | 0.959 | 0.753 |
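The reference-based scores reported in the tables above (SAM in degrees, ERGAS with resolution ratio r) follow their standard definitions. A minimal sketch for images shaped (bands, height, width), assuming the usual conventions (not the authors' exact evaluation code):

```python
import numpy as np

def sam_degrees(ref, est, eps=1e-12):
    """Mean Spectral Angle Mapper over all pixels, in degrees (0 = identical spectra)."""
    r = ref.reshape(ref.shape[0], -1)   # (bands, pixels)
    e = est.reshape(est.shape[0], -1)
    num = np.sum(r * e, axis=0)
    den = np.linalg.norm(r, axis=0) * np.linalg.norm(e, axis=0) + eps
    angles = np.arccos(np.clip(num / den, -1.0, 1.0))  # per-pixel spectral angle
    return np.degrees(angles.mean())

def ergas(ref, est, ratio):
    """ERGAS: 100/ratio * sqrt(mean over bands of (RMSE_b / mean_b)^2); lower is better."""
    r = ref.reshape(ref.shape[0], -1)
    e = est.reshape(est.shape[0], -1)
    rmse = np.sqrt(np.mean((r - e) ** 2, axis=1))  # per-band RMSE
    means = np.mean(r, axis=1)                     # per-band reference mean
    return 100.0 / ratio * np.sqrt(np.mean((rmse / means) ** 2))
```

Note that SAM is invariant to per-pixel scaling of the spectra, which is why it is reported alongside intensity-sensitive measures such as PSNR and ERGAS.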
Method | D_λ | D_s | QNR |
---|---|---|---|
GSA [7] | 0.1096 | 0.0948 | 0.806 |
SFIM-HS [8] | 0.1712 | 0.0602 | 0.7789 |
GLP-HS [9] | 0.0778 | 0.0891 | 0.8401 |
CNMF [11] | 0.0704 | 0.0786 | 0.8565 |
S3RHSR [12] | 0.1512 | 0.1484 | 0.7229 |
HySure [10] | 0.0789 | 0.0885 | 0.8396 |
FUSE [15] | 0.0223 | 0.1022 | 0.8778 |
UTV [17] | 0.1046 | 0.0545 | 0.8466 |
CSTF [16] | 0.0741 | 0.0631 | 0.8675 |
HSRnet [31] | 0.0171 | 0.1027 | 0.882 |
MHFnet [35] | 0.1284 | 0.1124 | 0.7736 |
Ours | 0.0222 | 0.0967 | 0.8832 |
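In the no-reference evaluation above, the three score columns are the spectral distortion D_λ, the spatial distortion D_s, and the QNR index [76], which combines them as QNR = (1 − D_λ)^α (1 − D_s)^β with α = β = 1 by default. A one-line sketch, checked against the GSA row of the table:

```python
def qnr(d_lambda, d_s, alpha=1.0, beta=1.0):
    """Quality with No Reference: 1 means no spectral or spatial distortion."""
    return (1.0 - d_lambda) ** alpha * (1.0 - d_s) ** beta

# GSA row: D_lambda = 0.1096, D_s = 0.0948 -> QNR ≈ 0.806
```

Because QNR multiplies the two complements, a method must keep both distortions low at once to score well, which is why the deep methods dominate this table.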
Method | PSNR | SSIM | SAM | ERGAS | SCC | Q2n |
---|---|---|---|---|---|---|
GSA [7] | 31.926 | 0.952 | 3.350 | 5.796 | 0.988 | 0.738 |
SFIM-HS [8] | 28.701 | 0.916 | 3.446 | 7.200 | 0.971 | 0.665 |
GLP-HS [9] | 29.558 | 0.928 | 3.630 | 6.671 | 0.977 | 0.678 |
CNMF [11] | 31.440 | 0.949 | 3.065 | 6.027 | 0.984 | 0.734 |
S3RHSR [12] | 30.420 | 0.920 | 5.778 | 7.001 | 0.977 | 0.683 |
HySure [10] | 31.240 | 0.948 | 3.341 | 6.141 | 0.986 | 0.730 |
FUSE [15] | 27.803 | 0.876 | 3.242 | 7.636 | 0.952 | 0.516 |
UTV [17] | 33.170 | 0.931 | 4.722 | 4.351 | 0.975 | 0.647 |
CSTF [16] | 33.927 | 0.930 | 3.660 | 4.042 | 0.978 | 0.654 |
HSRnet [31] | 38.634 | 0.974 | 3.159 | 2.828 | 0.985 | 0.752 |
MHFnet [35] | 37.820 | 0.969 | 4.412 | 4.747 | 0.978 | 0.736 |
Ours | 38.930 | 0.975 | 3.405 | 3.585 | 0.986 | 0.755 |
Method | Image Size | HSRnet | MHFnet | Ours |
---|---|---|---|---|
Parameters | 128 × 128 × 31, 512 × 512 | 0.63 | 1.21 | 5.52 |
FLOPs | 128 × 128 × 31, 512 × 512 | 2.57 | 5.10 | 52.33 |
Running time (s) | 128 × 128 × 31, 512 × 512 | 0.47 | 12.91 | 23.24 |
Zhang, B.; Cao, X.; Meng, D. A Deep Unfolding Network for Multispectral and Hyperspectral Image Fusion. Remote Sens. 2024, 16, 3979. https://doi.org/10.3390/rs16213979