Complex-Valued U-Net with Capsule Embedded for Semantic Segmentation of PolSAR Image
Abstract
1. Introduction
- A lightweight CV U-Net is designed for semantic segmentation of PolSAR images. It takes the polarimetric coherence matrix as the network input, so that both the amplitude and phase information of the PolSAR data are exploited. The lightweight structure suits PolSAR datasets with only a small number of training samples.
- A CV capsule network is embedded between the encoder and decoder of the CV U-Net to extract rich features from the PolSAR image. To make the CV capsule network suitable for semantic segmentation, a segmentation capsule replaces the digit capsule used for image classification.
- Locally constrained CV dynamic routing is proposed for the connections between capsules in adjacent layers. The local constraint allows dynamic routing to scale to capsules with large spatial sizes, and the routing consistency between the real and imaginary parts of the CV capsules improves the correctness of the extracted entity properties.
- Experiments on two airborne datasets and one Gaofen-3 PolSAR dataset verify that the proposed network achieves better segmentation performance than other RV and CV networks, especially when the training set is small.
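The CV layers above build on complex-valued convolution, which can be assembled from four real convolutions via (A + iB) ∗ (X + iY) = (AX − BY) + i(AY + BX). The NumPy sketch below illustrates only this decomposition; the function names and the naive "valid" correlation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def real_conv2d_valid(x, k):
    """Naive real-valued 'valid' cross-correlation (illustrative, not fast)."""
    kh, kw = k.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def cv_conv2d(x, w):
    """Complex-valued convolution from four real ones:
    (A + iB) * (X + iY) = (AX - BY) + i(AY + BX)."""
    rr = real_conv2d_valid(x.real, w.real)
    ii = real_conv2d_valid(x.imag, w.imag)
    ri = real_conv2d_valid(x.real, w.imag)
    ir = real_conv2d_valid(x.imag, w.real)
    return (rr - ii) + 1j * (ri + ir)

rng = np.random.default_rng(0)
patch = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
kernel = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
feature = cv_conv2d(patch, kernel)
```

In a deep-learning framework the four real convolutions would share one set of trainable real/imaginary kernel weights, which is what keeps such a CV network lightweight.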
2. Related Work
2.1. PolSAR Data
2.2. U-Net
2.3. Capsule Network
3. Methodology
3.1. CV Encoder
3.2. CV Decoder
3.3. CV Capsule Network
3.4. Locally Constrained CV Dynamic Routing
Algorithm 1 Locally Constrained CV Dynamic Routing
1: Procedure Routing (, d, l, , )
2: for all CV child capsule types within a kernel in layer l and a CV parent capsule in layer l + 1:
3: while iteration < d do
4: for all CV child capsule types in layer l:
5: softmax computes Equation (6)
6: for the CV capsule in layer l + 1:
7:
8: for the CV capsule in layer l + 1:
9: squash computes Equation (7)
10: for all CV capsule types and the CV capsule :
11: computes Equation (9)
12: end while
13: return
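Since Equations (6), (7), and (9) are not reproduced in this excerpt, the sketch below shows generic dynamic routing in the spirit of Algorithm 1 for complex capsules: a single real-valued routing logit per child is shared by the real and imaginary parts, so both are routed consistently. The function names, the squash form, and the Hermitian agreement measure are our assumptions, not the authors' exact definitions.

```python
import numpy as np

def squash(s, eps=1e-9):
    """Shrink the norm of a complex capsule vector into [0, 1)
    while preserving its direction (and hence the phase of each entry)."""
    norm = np.linalg.norm(s)
    return (norm ** 2 / (1.0 + norm ** 2)) * s / (norm + eps)

def cv_dynamic_routing(u_hat, n_iter=3):
    """Route complex child predictions u_hat (n_children, dim) to one
    parent capsule. One real logit per child is shared by the real and
    imaginary parts, keeping their routing consistent."""
    b = np.zeros(u_hat.shape[0])                  # routing logits
    for _ in range(n_iter):
        c = np.exp(b) / np.exp(b).sum()           # softmax coupling weights
        s = (c[:, None] * u_hat).sum(axis=0)      # weighted sum of votes
        v = squash(s)                             # parent output capsule
        # agreement = real part of the Hermitian inner product <u_hat_i, v>
        b = b + np.real(np.sum(u_hat * np.conj(v), axis=1))
    return v, c

rng = np.random.default_rng(1)
votes = rng.standard_normal((9, 4)) + 1j * rng.standard_normal((9, 4))  # e.g., a 3x3 kernel
v, c = cv_dynamic_routing(votes)
```

The "locally constrained" aspect corresponds to restricting the children to those inside a convolutional kernel, as in line 2 of Algorithm 1, rather than fully connecting all capsules across the feature map.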
4. Experiments and Analysis
4.1. Experimental Datasets
- (1) Flevoland dataset: collected over the Flevoland area of the Netherlands in 1989. The Pauli RGB image of this L-band dataset is shown in Figure 6a, and its size is 1024 × 750 pixels. In the following experiments, 15 land cover classes are considered, and the remaining pixels are treated as background.
- (2) San Francisco dataset: collected over the San Francisco Bay area in 1988. The Pauli RGB image of this L-band dataset is shown in Figure 6b, and its size is 1024 × 900 pixels. In the following experiments, five land cover classes are considered, and the remaining pixels are treated as background.
- (3) Hulunbuir dataset: collected over the Hulunbuir area of China. The Pauli RGB image of this C-band dataset is shown in Figure 6c, and its size is 1265 × 1147 pixels. In the following experiments, eight land cover classes are considered, and the remaining pixels are treated as background.
4.2. Data Preprocessing and Experimental Setup
4.3. Experimental Results and Analysis
4.3.1. Experiments on Flevoland Dataset
4.3.2. Experiments on San Francisco Dataset
4.3.3. Experiments on Hulunbuir Dataset
4.3.4. Parameters and Training Time
4.3.5. Convergence Performance
5. Discussion
5.1. Influence of Training Set Size on Segmentation Performance
5.2. Advantages of the Capsule Network in Feature Extraction
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
Dataset | Training (Before Expansion) | Training (After Expansion) | Test
---|---|---|---
Flevoland dataset | 51 | 909 | 75
San Francisco dataset | 96 | 1710 | 144
Hulunbuir dataset | 56 | 1003 | 83
Class | U-Net | DeepLabv3+ | CV U-Net | L-CV-DeepLabv3+ | Proposed |
---|---|---|---|---|---|
1 | 97.52 | 88.04 | 92.33 | 89.07 | 98.58 |
2 | 74.24 | 30.40 | 92.04 | 90.82 | 92.97 |
3 | 98.22 | 65.00 | 88.83 | 93.47 | 99.47 |
4 | 67.43 | 53.30 | 97.99 | 95.84 | 93.26 |
5 | 85.47 | 45.20 | 88.05 | 82.18 | 95.88 |
6 | 96.15 | 71.85 | 97.79 | 97.07 | 98.14 |
7 | 96.36 | 51.07 | 88.96 | 96.75 | 98.51 |
8 | 62.51 | 62.68 | 96.98 | 93.84 | 99.70 |
9 | 64.86 | 11.65 | 97.05 | 94.48 | 89.10 |
10 | 75.15 | 30.33 | 91.52 | 90.03 | 98.02 |
11 | 99.96 | 71.09 | 99.25 | 99.18 | 99.42 |
12 | 79.20 | 67.85 | 82.58 | 78.08 | 98.64 |
13 | 99.35 | 72.97 | 96.31 | 98.98 | 98.91 |
14 | 86.80 | 82.90 | 97.76 | 94.78 | 96.08 |
15 | 92.70 | 62.63 | 83.97 | 80.43 | 88.63 |
MIOU | 85.99 | 60.32 | 93.20 | 92.13 | 96.57 |
OA | 97.65 | 90.39 | 98.80 | 98.52 | 99.43 |
MPA | 93.37 | 73.14 | 96.62 | 96.11 | 98.22 |
Class | U-Net | DeepLabv3+ | CV U-Net | L-CV-DeepLabv3+ | Proposed |
---|---|---|---|---|---|
1 | 88.95 | 72.24 | 92.15 | 94.13 | 96.76 |
2 | 87.55 | 77.13 | 94.21 | 94.23 | 97.53 |
3 | 98.18 | 96.14 | 99.08 | 99.05 | 99.83 |
4 | 88.75 | 68.63 | 90.34 | 93.43 | 96.06 |
5 | 70.35 | 45.25 | 87.06 | 96.08 | 91.46 |
MIOU | 88.96 | 74.72 | 93.80 | 95.86 | 96.93 |
OA | 96.57 | 90.89 | 97.93 | 98.24 | 99.18 |
MPA | 93.60 | 85.17 | 96.83 | 97.80 | 98.74 |
Class | U-Net | DeepLabv3+ | CV U-Net | L-CV-DeepLabv3+ | Proposed |
---|---|---|---|---|---|
1 | 96.41 | 70.26 | 98.80 | 96.14 | 99.16 |
2 | 99.27 | 91.17 | 99.69 | 97.93 | 99.94 |
3 | 99.68 | 93.43 | 99.86 | 98.53 | 99.97 |
4 | 96.00 | 74.76 | 97.73 | 93.91 | 97.97 |
5 | 85.10 | 31.55 | 89.82 | 78.09 | 92.94 |
6 | 87.66 | 34.31 | 86.30 | 88.50 | 95.30 |
7 | 94.70 | 3.38 | 94.91 | 88.20 | 95.97 |
8 | 56.03 | 36.12 | 90.32 | 88.56 | 96.48 |
MIOU | 90.53 | 59.29 | 95.27 | 92.15 | 97.53 |
OA | 99.64 | 96.23 | 99.80 | 99.27 | 99.87 |
MPA | 94.09 | 71.33 | 97.33 | 95.61 | 98.59 |
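The summary rows in the tables above (MIOU, OA, MPA) follow the standard segmentation-metric definitions. A minimal sketch of how they are computed from a confusion matrix is given below; the function name and the toy matrix are ours, not from the paper.

```python
import numpy as np

def segmentation_metrics(conf):
    """OA, MPA and MIoU from conf[i, j] = number of pixels of true
    class i predicted as class j (background excluded)."""
    conf = conf.astype(float)
    tp = np.diag(conf)
    oa = tp.sum() / conf.sum()                     # overall accuracy
    mpa = np.mean(tp / conf.sum(axis=1))           # mean pixel accuracy
    # IoU per class: TP / (TP + FP + FN)
    iou = tp / (conf.sum(axis=1) + conf.sum(axis=0) - tp)
    return oa, mpa, iou.mean()

# toy 2-class confusion matrix for illustration
conf = np.array([[8, 2],
                 [1, 9]])
oa, mpa, miou = segmentation_metrics(conf)
```

Note that MIoU is the strictest of the three, since it penalizes both false positives and false negatives per class, which is why it separates the methods more sharply than OA in the tables.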
Parameter | U-Net | DeepLabv3+ | CV U-Net | L-CV-DeepLabv3+ | Proposed |
---|---|---|---|---|---|
Trainable | 1,466,380 | 3,011,789 | 2,934,366 | 8,361,528 | 3,411,760 |
Non-trainable | 3212 | 41,724 | 8030 | 144,620 | 8080 |
Total | 1,469,592 | 3,053,513 | 2,942,396 | 8,506,148 | 3,419,840 |
Dataset | CV U-Net | L-CV-DeepLabv3+ | Proposed |
---|---|---|---|
Flevoland dataset | 9.28 | 29.81 | 9.68 |
San Francisco dataset | 12.98 | 35.59 | 16.5 |
Hulunbuir dataset | 9.44 | 32.68 | 10.76 |
Dataset | CV U-Net | Proposed
---|---|---
Flevoland dataset | 540 | 445
San Francisco dataset | 538 | 308
Hulunbuir dataset | 527 | 397
Yu, L.; Shao, Q.; Guo, Y.; Xie, X.; Liang, M.; Hong, W. Complex-Valued U-Net with Capsule Embedded for Semantic Segmentation of PolSAR Image. Remote Sens. 2023, 15, 1371. https://doi.org/10.3390/rs15051371