A Structurally Flexible Occupancy Network for 3-D Target Reconstruction Using 2-D SAR Images
Abstract
1. Introduction
- A SAR-tailored SFONet is proposed to reconstruct a 3-D target using one or more azimuthal images as the input. It includes a basic network and a pluggable module. In the basic network, a lightweight complex-valued (CV) encoder is designed to extract features from 2-D CV SAR images. The pluggable module is designed to include a CV long short-term memory (LSTM) submodule and a CV attention submodule. The former extracts structural features of the target from multiple azimuthal images, and the latter fuses these features.
- A two-stage training strategy is also proposed for the case where the two input modes of the SFONet coexist. The basic SFONet is first trained using one azimuthal image as input; the pluggable module is then trained using multiple azimuthal images as input. This strategy saves training time and allows the second stage to focus on mining the target structure information implied in multiple azimuthal SAR images.
- One dataset containing 2-D images and 3-D ground truth is constructed from the Gotcha echo dataset. Comparative experiments with other deep learning methods, as well as ablation experiments, are conducted. The number of CV LSTM layers and the number of refinement iterations in the reference are also analyzed. Additionally, the roles of the CV LSTM and CV attention submodules and the composition of the training samples are discussed.
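The contributions above center on complex-valued feature extraction and attention-based fusion of multi-azimuth features. The paper's exact layer definitions are not reproduced in this outline, so the following is a minimal NumPy sketch of one common convention: a modReLU activation that shrinks magnitudes while preserving phase, and attention weights derived from feature magnitudes via a softmax. The function names and the magnitude-softmax scoring are illustrative assumptions, not the SFONet definition.

```python
import numpy as np

def modrelu(z, b):
    # modReLU: shrink the magnitude by a learned bias b, keep the phase.
    # A common choice of activation for complex-valued networks (assumption here).
    mag = np.abs(z)
    return np.maximum(mag + b, 0.0) * z / np.maximum(mag, 1e-12)

def cv_attention_fuse(features):
    """Fuse per-azimuth complex feature vectors with magnitude-based attention.

    features: (K, D) complex array, one row per azimuthal image.
    Returns a single (D,) complex fused feature vector.
    """
    scores = np.linalg.norm(features, axis=1)   # (K,) real-valued view scores
    w = np.exp(scores - scores.max())
    w = w / w.sum()                             # softmax over the K views
    return (w[:, None] * features).sum(axis=0)  # weighted complex sum

rng = np.random.default_rng(0)
feats = rng.standard_normal((3, 8)) + 1j * rng.standard_normal((3, 8))
fused = cv_attention_fuse(modrelu(feats, b=-0.1))
```

With identical per-view features the softmax weights become uniform and the fused vector equals any single view, which is a quick sanity check for a fusion layer of this kind.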
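The two-stage strategy can be illustrated with a toy optimization in which stage 2 updates only the pluggable module's parameters while the basic network's parameters stay frozen. Everything below (the scalar "encoder" and "fusion" weights, the squared-error loss, the learning rate) is a hypothetical stand-in for the real network, chosen only to show the freezing mechanic.

```python
import numpy as np

def train(params, grads_fn, data, trainable, lr=0.01, iters=50):
    # Generic SGD loop that only updates the parameter names listed in `trainable`.
    for _ in range(iters):
        grads = grads_fn(params, data)
        for name in trainable:
            params[name] -= lr * grads[name]
    return params

# Toy stand-ins for the SFONet pieces: prediction = fusion_w * encoder_w * x.
def grads_fn(params, data):
    x, y = data
    err = params["fusion_w"] * params["encoder_w"] * x - y
    return {
        "encoder_w": np.mean(2 * err * params["fusion_w"] * x),
        "fusion_w": np.mean(2 * err * params["encoder_w"] * x),
    }

x = np.linspace(1, 2, 16)
y = 3.0 * x
params = {"encoder_w": 1.0, "fusion_w": 1.0}

# Stage 1: train the basic network (single-azimuth path).
params = train(params, grads_fn, (x, y), trainable=["encoder_w"])
frozen = params["encoder_w"]

# Stage 2: freeze the basic network, train only the pluggable module.
params = train(params, grads_fn, (x, y), trainable=["fusion_w"])
assert params["encoder_w"] == frozen  # basic network untouched in stage 2
```

The design benefit mirrors the one claimed in the bullet above: stage 2 touches a much smaller parameter set, so it trains faster and cannot degrade the already-converged single-image path.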
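The experiments below report IoU, Chamfer distance (CD), and normal consistency (NC). The first two have standard definitions that can be sketched directly for voxel grids and point sets; normal consistency additionally requires surface normals and is omitted here. This is a generic sketch of the standard metrics, not the paper's exact evaluation code.

```python
import numpy as np

def voxel_iou(a, b):
    # Intersection-over-Union between two boolean occupancy grids.
    a, b = a.astype(bool), b.astype(bool)
    return (a & b).sum() / max((a | b).sum(), 1)

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between two (N, 3) point sets.

    For each point, find the nearest neighbor in the other set,
    then average the two directed distances.
    """
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

Note the ×10⁻² scaling in the tables: a reported IoU of 94.27 corresponds to a raw ratio of 0.9427, and lower CD is better while higher IoU and NC are better.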
2. Related Work
2.1. Occupancy Network
2.2. LSTM
3. Methodology
3.1. Framework of the Structurally Flexible Occupancy Network
3.2. CV Encoder
3.3. CV LSTM
3.4. CV Attention
3.5. RV Perceptron and Decoder
3.6. Training
Algorithm 1 Two-Stage Training
Stage 1. Inputs: batch size B, number of sampling points M.
  for the number of iterations do …
  Output: …
Stage 2. Inputs: batch size B, number of sampling points M, parameter set.
  for the number of iterations do …
  Output: …
3.7. Inference
Algorithm 2 Inference
for the number of test samples do …
4. Experiments and Analysis
4.1. Dataset
4.2. Implementation Details and Evaluation Metrics
4.3. Comparative Experiments
4.4. Ablation Experiments
4.5. The Number of CV LSTM Layers
4.6. The Number of Refinement Iterations in the Reference
5. Discussion
5.1. Roles of CV LSTM and CV Attention
5.2. Influence of the Composition of Training Samples
5.2.1. Influence of the Azimuthal Interval
5.2.2. Influence of the Number of Images
5.2.3. Influence of the Number of Passes
5.2.4. Influence of the Number of Sub-Apertures per Pass
5.2.5. Discussion About the Composition of One Training Sample
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Zhu, X.X.; Bamler, R. Superresolving SAR tomography for multidimensional imaging of urban areas: Compressive sensing-based TomoSAR inversion. IEEE Signal Process. Mag. 2014, 31, 51–58.
- Zhang, H.; Lin, Y.; Teng, F.; Feng, S.; Yang, B.; Hong, W. Circular SAR incoherent 3D imaging with a NeRF-inspired method. Remote Sens. 2023, 15, 3322.
- Lei, Z.; Xu, F.; Wei, J.; Cai, F.; Wang, F.; Jin, Y.-Q. SAR-NeRF: Neural radiance fields for synthetic aperture radar multiview representation. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5221415.
- Liu, A.; Zhang, S.; Zhang, C.; Zhi, S.; Li, X. RaNeRF: Neural 3-D reconstruction of space targets from ISAR image sequences. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5107215.
- Reigber, A.; Moreira, A. First demonstration of airborne SAR tomography using multibaseline L-band data. IEEE Trans. Geosci. Remote Sens. 2000, 38, 2142–2152.
- Fornaro, G.; Lombardini, F.; Serafino, F. Three-dimensional multipass SAR focusing: Experiments with long-term spaceborne data. IEEE Trans. Geosci. Remote Sens. 2005, 43, 702–714.
- Zhu, X.X.; Bamler, R. Very high resolution spaceborne SAR tomography in urban environment. IEEE Trans. Geosci. Remote Sens. 2010, 48, 4296–4308.
- Wang, Z.; Ding, Z.; Sun, T.; Zhao, J.; Wang, Y.; Zeng, T. UAV-based P-band SAR tomography with long baseline: A multimaster approach. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5207221.
- Liu, M.; Wang, Y.; Ding, Z.; Li, L.; Zeng, T. Atomic norm minimization based fast off-grid tomographic SAR imaging with nonuniform sampling. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5203517.
- Lombardini, F.; Montanari, M.; Gini, F. Reflectivity estimation for multibaseline interferometric radar imaging of layover extended sources. IEEE Trans. Signal Process. 2003, 51, 1508–1519.
- Shi, Y.; Zhu, X.X.; Yin, W.; Bamler, R. A fast and accurate basis pursuit denoising algorithm with application to super-resolving tomographic SAR. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6148–6158.
- Ponce, O.; Prats-Iraola, P.; Scheiber, R.; Reigber, A.; Moreira, A. First airborne demonstration of holographic SAR tomography with fully polarimetric multicircular acquisitions at L-band. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6170–6196.
- Bao, Q.; Lin, Y.; Hong, W.; Shen, W.; Zhao, Y.; Peng, X. Holographic SAR tomography image reconstruction by combination of adaptive imaging and sparse Bayesian inference. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1248–1252.
- Wu, K.; Shen, Q.; Cui, W. 3-D tomographic circular SAR imaging of targets using scattering phase correction. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5221914.
- Bi, H.; Feng, J.; Jin, S.; Yang, W.; Xu, W. Mixed-norm regularization-based polarimetric holographic SAR 3-D imaging. IEEE Geosci. Remote Sens. Lett. 2024, 21, 4002305.
- Rambour, C.; Denis, L.; Tupin, F.; Oriot, H.M. Introducing spatial regularization in SAR tomography reconstruction. IEEE Trans. Geosci. Remote Sens. 2019, 57, 8600–8617.
- Wang, X.; Dong, Z.; Wang, Y.; Chen, X.; Yu, A. Three-dimensional reconstruction of partially coherent scatterers using iterative sub-network generation method. Remote Sens. 2024, 16, 3707.
- Tebaldini, S.; Rocca, F. Multibaseline polarimetric SAR tomography of a boreal forest at P- and L-bands. IEEE Trans. Geosci. Remote Sens. 2012, 50, 232–246.
- Ngo, Y.-N.; Minh, D.H.T.; Baghdadi, N.; Fayad, I.; Ferro-Famil, L.; Huang, Y. Exploring tropical forests with GEDI and 3D SAR tomography. IEEE Geosci. Remote Sens. Lett. 2023, 20, 2503605.
- Jin, S.; Bi, H.; Guo, Q.; Zhang, J.; Hong, W. Iterative adaptive based multi-polarimetric SAR tomography of the forested areas. Remote Sens. 2024, 16, 1605.
- Tebaldini, S.; Rocca, F.; Meta, A.; Coccia, A. 3D imaging of an alpine glacier: Signal processing of data from the AlpTomoSAR campaign. In Proceedings of the European Radar Conference (EuRAD), Paris, France, 9–11 September 2015; pp. 37–40.
- Nouvel, J.; Jeuland, H.; Bonin, G.; Roques, S.; Du Plessis, O.; Peyret, J. A Ka band imaging radar: DRIVE on board ONERA motorglider. In Proceedings of the IEEE International Symposium on Geoscience and Remote Sensing, Denver, CO, USA, 31 July–4 August 2006; pp. 134–136.
- Weiss, M.; Gilles, M. Initial ARTINO radar experiments. In Proceedings of the 8th European Conference on Synthetic Aperture Radar, Aachen, Germany, 7–10 June 2010; pp. 1–4.
- Peng, X.; Tan, W.; Hong, W.; Jiang, C.; Bao, Q.; Wang, Y. Airborne DLSLA 3-D SAR image reconstruction by combination of polar formatting and L1 regularization. IEEE Trans. Geosci. Remote Sens. 2016, 54, 213–226.
- Qiu, X.; Luo, Y.; Song, S.; Peng, L.; Cheng, Y.; Yan, Q.; ShangGuan, S.; Jiao, Z.; Zhang, Z.; Ding, C. Microwave vision three-dimensional SAR experimental system and full-polarimetric data processing method. J. Radars 2024, 13, 941–954.
- Zhang, F.; Liang, X.; Wu, Y.; Lv, X. 3D surface reconstruction of layover areas in continuous terrain for multi-baseline SAR interferometry using a curve model. Int. J. Remote Sens. 2015, 36, 2093–2112.
- Wang, J.; Chen, L.-Y.; Liang, X.-D.; Ding, C.-B.; Li, K. Implementation of the OFDM chirp waveform on MIMO SAR systems. IEEE Trans. Geosci. Remote Sens. 2015, 53, 5218–5228.
- Hu, F.; Wang, F.; Yu, H.; Xu, F. Asymptotic 3-D phase unwrapping for very sparse airborne array InSAR images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5234115.
- Li, H.; Liang, X.; Zhang, F.; Ding, C.; Wu, Y. A novel 3-D reconstruction approach based on group sparsity of array InSAR. Sci. Sin. Inform. 2018, 48, 1051–1064.
- Jiao, Z.; Ding, C.; Qiu, X.; Zhou, L.; Chen, L.; Han, D.; Guo, J. Urban 3D imaging using airborne TomoSAR: Contextual information-based approach in the statistical way. ISPRS J. Photogramm. Remote Sens. 2020, 170, 127–141.
- Cui, C.; Liu, Y.; Zhang, F.; Shi, M.; Chen, L.; Li, W.; Li, Z. A novel automatic registration method for array InSAR point clouds in urban scenes. Remote Sens. 2024, 16, 601.
- Li, Z.; Zhang, F.; Wan, Y.; Chen, L.; Wang, D.; Yang, L. Airborne circular flight array SAR 3-D imaging algorithm of buildings based on layered phase compensation in the wavenumber domain. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5213512.
- Wang, S.; Guo, J.; Zhang, Y.; Wu, Y. Multi-baseline SAR 3D reconstruction of vehicle from very sparse aspects: A generative adversarial network based approach. ISPRS J. Photogramm. Remote Sens. 2023, 197, 36–55.
- Budillon, A.; Johnsy, A.C.; Schirinzi, G.; Vitale, S. SAR tomography based on deep learning. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 3625–3628.
- Wang, M.; Zhang, Z.; Qiu, X.; Gao, S.; Wang, Y. ATASI-Net: An efficient sparse reconstruction network for tomographic SAR imaging with adaptive threshold. IEEE Trans. Geosci. Remote Sens. 2023, 61, 4701918.
- Qian, K.; Wang, Y.; Jung, P.; Shi, Y.; Zhu, X.X. Basis pursuit denoising via recurrent neural network applied to super-resolving SAR tomography. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4710015.
- Wang, M.; Wei, S.; Zhou, Z.; Shi, J.; Zhang, X.; Guo, Y. CTV-Net: Complex-valued TV-driven network with nested topology for 3-D SAR imaging. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 5588–5602.
- Wang, Y.; Liu, C.; Zhu, R.; Liu, M.; Ding, Z.; Zeng, T. MAda-Net: Model-adaptive deep learning imaging for SAR tomography. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5202413.
- Chen, J.; Peng, L.; Qiu, X.; Ding, C.; Wu, Y. A 3D building reconstruction method for SAR images based on deep neural network. Sci. Sin. Inform. 2019, 49, 1606–1625.
- Yang, Z.-L.; Zhou, R.-Y.; Wang, F.; Xu, F. A point clouds framework for 3-D reconstruction of SAR images based on 3-D parametric electromagnetic part model. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Brussels, Belgium, 11–16 July 2021; pp. 4818–4821.
- Yu, L.; Zou, J.; Liang, M.; Li, L.; Xie, X.; Yu, X.; Hong, W. Lightweight pixel2mesh for 3-D target reconstruction from a single SAR image. IEEE Geosci. Remote Sens. Lett. 2024, 21, 4000805.
- Choy, C.B.; Xu, D.; Gwak, J.; Chen, K.; Savarese, S. 3D-R2N2: A unified approach for single and multi-view 3-D object reconstruction. In Proceedings of the 2016 European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016; pp. 628–644.
- Achlioptas, P.; Diamanti, O.; Mitliagkas, I.; Guibas, L. Learning representations and generative models for 3-D point clouds. In Proceedings of the International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 40–49.
- Wang, N.; Zhang, Y.; Li, Z.; Fu, Y.; Liu, W.; Jiang, Y.-G. Pixel2mesh: Generating 3-D mesh models from single RGB images. In Proceedings of the 2018 European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 52–67.
- Mescheder, L.; Oechsle, M.; Niemeyer, M.; Nowozin, S.; Geiger, A. Occupancy networks: Learning 3-D reconstruction in function space. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 4460–4470.
- Chen, Z.; Zhang, H. Learning implicit fields for generative shape modeling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 5932–5941.
- Tatarchenko, M.; Dosovitskiy, A.; Brox, T. Octree generating networks: Efficient convolutional architectures for high-resolution 3D outputs. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2088–2096.
- Lorensen, W.E.; Cline, H.E. Marching cubes: A high resolution 3D surface construction algorithm. ACM SIGGRAPH Comput. Graph. 1987, 21, 163–169.
- Peng, S.; Niemeyer, M.; Mescheder, L.; Pollefeys, M.; Geiger, A. Convolutional occupancy networks. In Proceedings of the 2020 European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; pp. 523–540.
- Lionar, S.; Emtsev, D.; Svilarkovic, D.; Peng, S. Dynamic plane convolutional occupancy networks. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Virtual, 5–9 January 2021; pp. 1829–1838.
- Zhao, C.; Zhang, C.; Yan, Y.; Su, N. A 3D reconstruction framework of buildings using single off-nadir satellite image. Remote Sens. 2021, 13, 4434.
- Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
- Lattari, F.; Rucci, A.; Matteucci, M. A deep learning approach for change points detection in InSAR time series. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5223916.
- Kulshrestha, A.; Chang, L.; Stein, A. Use of LSTM for sinkhole-related anomaly detection and classification of InSAR deformation time series. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2022, 15, 4559–4570.
- Ding, J.; Wen, L.; Zhong, C.; Loffeld, O. Video SAR moving target indication using deep neural network. IEEE Trans. Geosci. Remote Sens. 2020, 58, 7194–7204.
- Zhou, Y.; Shi, J.; Wang, C.; Hu, Y.; Zhou, Z.; Yang, X.; Zhang, X.; Wei, S. SAR ground moving target refocusing by combining mRe³ network and TVβ-LSTM. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5200814.
- Wang, C.; Liu, X.; Pei, J.; Huang, Y.; Zhang, Y.; Yang, J. Multiview attention CNN-LSTM network for SAR automatic target recognition. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2021, 14, 12504–12513.
- Bai, X.; Xue, R.; Wang, L.; Zhou, F. Sequence SAR image classification based on bidirectional convolution-recurrent network. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9223–9235.
- Ni, J.; Zhang, F.; Yin, Q.; Zhou, Y.; Li, H.-C.; Hong, W. Random neighbor pixel-block-based deep recurrent learning for polarimetric SAR image classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 7557–7569.
- Yang, B.; Wang, S.; Markham, A.; Trigoni, N. Robust attentional aggregation of deep feature sets for multi-view 3D reconstruction. Int. J. Comput. Vis. 2019, 128, 53–73.
- Dungan, K.E.; Potter, L.C. Classifying vehicles in wide-angle radar using pyramid match hashing. IEEE J. Sel. Topics Signal Process. 2011, 5, 577–591.
- Yang, L.; Zhu, Z.; Lin, X.; Nong, J.; Liang, Y. Long-range grouping transformer for multi-view 3D reconstruction. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 2–6 October 2023; pp. 18257–18267.
| | Training Set | Validation Set | Test Set |
|---|---|---|---|
| Pass number | 1, 2, 3 | 4 | 5, 6, 7, 8 |
| Number of images | 3024 | 1008 | 4032 |
| Category | IoU ↑ (×10⁻²), 1 Azimuth | IoU ↑ (×10⁻²), 3 Azimuths | CD ↓ (×10⁻²), 1 Azimuth | CD ↓ (×10⁻²), 3 Azimuths | NC ↑ (×10⁻¹), 1 Azimuth | NC ↑ (×10⁻¹), 3 Azimuths |
|---|---|---|---|---|---|---|
| 1 | 95.63 | 97.44 | 4.17 | 3.05 | 9.53 | 9.58 |
| 2 | 87.05 | 89.40 | 13.99 | 12.45 | 8.51 | 8.59 |
| 3 | 94.28 | 93.49 | 5.29 | 5.55 | 9.40 | 9.37 |
| 4 | 90.11 | 93.39 | 7.08 | 4.88 | 9.26 | 9.35 |
| 5 | 90.60 | 94.16 | 6.82 | 4.92 | 8.85 | 8.91 |
| 6 | 94.13 | 97.17 | 4.75 | 2.94 | 9.63 | 9.72 |
| 7 | 91.78 | 94.90 | 5.60 | 3.65 | 9.49 | 9.58 |
| Mean | 91.94 | 94.27 | 6.81 | 5.35 | 9.24 | 9.30 |
| Methods | mIoU ↑ (×10⁻²) | mCD ↓ (×10⁻²) | mNC ↑ (×10⁻¹) |
|---|---|---|---|
| P2M (1 azimuth) | 68.17 | 18.39 | 8.35 |
| LP2M (1 azimuth) | 78.56 | 8.52 | 8.89 |
| ONet (1 azimuth) | 90.10 | 7.48 | 9.16 |
| SFONet (1 azimuth) | 91.94 | 6.81 | 9.24 |
| R2N2 (3 azimuths) | 75.14 | - | - |
| LRGT+ (3 azimuths) | 75.27 | - | - |
| SFONet (3 azimuths) | 94.27 | 5.35 | 9.30 |
| Baseline | CV Encoder | CV LSTM | CV Attention | mIoU ↑ (×10⁻²) | mCD ↓ (×10⁻²) | mNC ↑ (×10⁻¹) |
|---|---|---|---|---|---|---|
| ✓ | | | | 90.10 | 7.48 | 9.16 |
| ✓ | ✓ | | | 91.94 | 6.81 | 9.24 |
| ✓ | ✓ | ✓ | | 94.07 | 5.54 | 9.30 |
| ✓ | ✓ | | ✓ | 93.95 | 5.61 | 9.30 |
| ✓ | ✓ | ✓ | ✓ | 94.27 | 5.35 | 9.30 |
| Number of CV LSTM Layers | mIoU ↑ (×10⁻²) | mCD ↓ (×10⁻²) | mNC ↑ (×10⁻¹) |
|---|---|---|---|
| 1 | 93.51 | 5.74 | 9.27 |
| 2 | 94.27 | 5.35 | 9.30 |
| 3 | 94.22 | 5.52 | 9.30 |
| Number of Refinement Iterations | mIoU ↑ (×10⁻²) | mCD ↓ (×10⁻²) | mNC ↑ (×10⁻¹) | Elapsed Time | Parameters |
|---|---|---|---|---|---|
| 0 | 90.34 | 7.93 | 9.0 | 1.84 s | 81 KB |
| 1 | 93.59 | 5.84 | 9.23 | 1.92 s | 371 KB |
| 2 | 94.27 | 5.35 | 9.30 | 2.22 s | 1602 KB |
| 3 | 94.36 | 5.25 | 9.31 | 3.97 s | 6638 KB |
| 4 | 94.36 | 5.23 | 9.31 | 15.02 s | 28,252 KB |
| Azimuthal Interval | mIoU ↑ (×10⁻²) | mCD ↓ (×10⁻²) | mNC ↑ (×10⁻¹) |
|---|---|---|---|
| 10° | 92.92 | 6.10 | 9.25 |
| 30° | 93.32 | 5.85 | 9.26 |
| 60° | 93.90 | 5.56 | 9.28 |
| 90° | 94.10 | 5.41 | 9.29 |
| 120° | 94.27 | 5.35 | 9.30 |
| Number of Images | mIoU ↑ (×10⁻²) | mCD ↓ (×10⁻²) | mNC ↑ (×10⁻¹) |
|---|---|---|---|
| 1 | 91.94 | 6.81 | 9.24 |
| 2 | 92.78 | 6.26 | 9.24 |
| 3 | 92.92 | 6.10 | 9.25 |
| 4 | 93.19 | 5.93 | 9.26 |
| Number of Passes | mIoU ↑ (×10⁻²) | mCD ↓ (×10⁻²) | mNC ↑ (×10⁻¹) |
|---|---|---|---|
| 1 | 93.92 | 5.56 | 9.25 |
| 2 | 94.08 | 5.47 | 9.27 |
| 3 | 94.27 | 5.35 | 9.30 |
| Number of Sub-Apertures per Pass | mIoU ↑ (×10⁻²) | mCD ↓ (×10⁻²) | mNC ↑ (×10⁻¹) |
|---|---|---|---|
| 6 | 92.84 | 6.77 | 9.22 |
| 12 | 93.69 | 5.50 | 9.28 |
| 18 | 94.16 | 5.79 | 9.30 |
| 36 | 94.27 | 5.35 | 9.30 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Yu, L.; Liu, J.; Liang, M.; Yu, X.; Xie, X.; Bi, H.; Hong, W. A Structurally Flexible Occupancy Network for 3-D Target Reconstruction Using 2-D SAR Images. Remote Sens. 2025, 17, 347. https://doi.org/10.3390/rs17020347