Communication

3D Point Cloud Reconstruction Using Inversely Mapping and Voting from Single Pass CSAR Images

Shanshan Feng, Yun Lin, Yanping Wang, Fei Teng and Wen Hong
1 Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
2 Key Laboratory of Technology in Geospatial Information Processing and Application System, Chinese Academy of Sciences, Beijing 100190, China
3 School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
4 School of Electronic Information Engineering, North China University of Technology, Beijing 100144, China
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(17), 3534; https://doi.org/10.3390/rs13173534
Submission received: 4 August 2021 / Revised: 2 September 2021 / Accepted: 2 September 2021 / Published: 6 September 2021
(This article belongs to the Special Issue Advances in Synthetic Aperture Radar Remote Sensing)

Abstract

3D reconstruction has attracted much interest in the field of circular SAR (CSAR). However, 3D imaging results with single-pass CSAR data reveal that the 3D resolution of the system is poor for anisotropic scatterers. According to the imaging mechanism of CSAR, different targets located on the same iso-range line in the zero-Doppler plane fall into the same resolution cell, while the imaging point of a single target falls at different positions under different aspect angles. In this paper, we propose a method for 3D point cloud reconstruction using projections on 2D sub-aperture images. The target and background in the sub-aperture images are separated and binarized. Given a series of offsets, each projection point of the target is inversely mapped to the 3D mesh along the iso-range line, which yields a set of candidate points of the target. The intersection of iso-range lines can be regarded as a voting process: the more intersections a candidate point accumulates, the higher its number of votes, and candidates with enough votes are preserved. This fully exploits the information contained in the aspect-angle dimension of CSAR. The proposed approach is verified on the Gotcha Volumetric SAR Data Set.

1. Introduction

Traditional synthetic aperture radar (SAR) obtains a projection of the target on a two-dimensional plane, which loses the elevation information and limits the wide application of SAR imagery. 3D reconstruction of the observed object plays an important role in SAR image applications [1,2]. In the field of computer vision, reconstructing 3D structures from a single image [3,4] or from image sequences taken at different viewpoints [5,6,7] is a fundamental problem. Under different observation angles, different features of the target can be obtained. As the number of observation angles increases, more information about the target becomes available, which is more conducive to 3D reconstruction [8].
Circular SAR (CSAR), which moves along a circular trajectory [9], can provide features of the target from different aspect angles. For isotropic scatterers, a single circular flight can achieve very high resolution and 3D imaging [10]. For anisotropic scatterers, however, the 3D resolution is very poor: the impulse response function (IRF), and consequently the retrieved information, is distorted by undesired sidelobes [11,12], so it is almost impossible to achieve 3D imaging of anisotropic scatterers.
Digital elevation model (DEM) generation from CSAR images by radargrammetry has been investigated in depth [13,14]. The key step is to measure the parallax between two corresponding image points. However, due to the anisotropic nature of the object [15], it is difficult to find corresponding points in sub-aperture images under different aspect angles; moreover, the elevation information provided by a DEM is insufficient for a detailed description of the target. The work in [16,17] applies polarimetric processing to obtain a rough shape of the vehicle outline from multi-view layover. Interferometric SAR (IFSAR) is a method of sparse 3D image reconstruction [18]: for two SAR images obtained at slightly different elevation angles, it extracts the height of the target from the phase difference between the two images. These methods give a better description of the target, but they all require polarimetric information or multiple-trajectory data, which imposes requirements on data acquisition and increases the difficulty of data processing.
In fact, we believe that the aspect-angle dimension can provide a rich amount of information. The idea of this paper is to make full use of the image features under different aspect angles using a single pass of CSAR images, while avoiding the image registration step.
For an elevated target in the scene, the shape and location of the projection of the target are related to the instantaneous velocity vector of the SAR platform and to the target itself. Specifically, if the target is located at the scene center, the extent of its projection varies with the height of the target; if the target is away from the scene center and close to the radar platform, the projection offset becomes small. If the 3D coordinates of the target in the actual scene can be recovered from its projection in each sub-aperture image, it is feasible to retrieve the 3D shape of the target from CSAR images.
In this paper, we propose a 3D reconstruction algorithm that builds the 3D point cloud of the object from the projections in sub-aperture images provided by CSAR. Firstly, the target and background are separated by image segmentation and binarized. There is a certain offset between the projection point and the target in 3D space, so, for a series of offsets, each projection point in a sub-aperture image is inversely mapped into the 3D mesh along the iso-range line. These operations are performed on all sub-aperture images, and the intersection times are accumulated in the 3D mesh. This process is defined as "voting" for each grid point, consistent with the idea in [19]. If the number of votes is greater than a certain threshold, the point is considered to exist in 3D space; otherwise, it does not. Finally, we obtain the 3D point cloud from the CSAR images.
The separation of objects and scene affects the result of 3D reconstruction [20]. Due to inherent characteristics of SAR images, such as speckle noise, target segmentation for SAR images is quite different from that for optical images [21]. In this paper, the target and background are segmented based on the characteristics of CSAR images. The projection direction of the target in each sub-aperture image is perpendicular to the irradiation direction of the SAR; thus, each sub-aperture image can be viewed as the superposition of the inherent background and an aspect-angle-dependent projection [22]. Each sub-aperture image is reshaped into a vector, and the vectors form a composite matrix. Sparse and low-rank matrix decomposition techniques are used to separate the similar and unique parts of each sub-aperture image, that is, the background and the foreground. We keep the sparse matrix, which contains the foreground targets. Robust principal component analysis (RPCA), a robust and efficient method for solving this separation problem [23], is adopted in this paper.
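As a small illustration of this step, the composite matrix can be assembled as follows (a minimal sketch; the array names and shapes are our own assumptions, not from the paper):

```python
import numpy as np

def build_composite_matrix(sub_images):
    """Stack sub-aperture magnitude images as columns of one matrix D.

    sub_images: ndarray of shape (K, H, W), K sub-aperture images.
    Returns D of shape (H*W, K). Because the inherent background is
    similar across aspect angles, the columns are highly correlated
    and D is expected to be approximately low rank.
    """
    K, H, W = sub_images.shape
    # Reshape each H x W image into a column vector of length H*W.
    return sub_images.reshape(K, H * W).T
```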
The proposed method realizes 3D reconstruction of the target without additional information, describes the characteristics of the target more precisely, and makes full use of the information contained in the aspect-angle dimension of CSAR. It transforms the unavoidable image matching step of radargrammetry into a voting process, which is more efficient.
The remainder of this paper is organized as follows. Section 2.1 explains RPCA, and Section 2.2 describes the basic principle of the proposed method. In Section 3, the proposed method is applied to real Gotcha mission data. Finally, Section 4 discusses the results.

2. Methodology

2.1. Robust Principal Component Analysis

The low-rank decomposition model decomposes observation data D into real data A and interference data E. In practical applications, the observation matrix is the data actually acquired; the decomposition algorithm recovers the corresponding real data and interference data, i.e., D = A + E. The real data A must satisfy a low-rank constraint, while the interference data E must satisfy a sparsity constraint. Exploiting the similarity between sub-aperture images, each image is reshaped into a vector, and the vectors form a composite matrix of relatively low rank. In this way, the low-rank part can be extracted from the constructed observation matrix. The low-rank matrix contains the targets whose height equals the reference imaging height, such as the inherent scene, together with the speckle noise; the sparse matrix contains the targets at other heights, whose projections depend on the aspect angle of the SAR. Therefore, the objects and changes in the image appear in the sparse part. Speckle noise is an inherent feature of SAR images and is evenly distributed, so it shares the sparsity pattern of the scene, and the SAR image is denoised at the same time. RPCA [24] is adopted in this paper because of its stability and effectiveness.
The recovery of the two matrices can be described by the following optimization problem:
$$\min_{A,E}\ \operatorname{rank}(A) + \lambda \|E\|_0 \quad \text{subject to} \quad A + E = D \tag{1}$$
where $\lambda$ is the weight of the sparse part in the objective function, and $\|\cdot\|_0$ denotes the $\ell_0$-norm, i.e., the number of non-zero elements of a matrix.
In the above formula, the rank function and the $\ell_0$-norm are not convex, and the problem is NP-hard. The nuclear norm of a matrix is a reasonable convex relaxation of the rank function, and the $\ell_1$-norm is a relaxation of the $\ell_0$-norm.
With these relaxations, the final form of RPCA is:
$$\min_{A,E}\ \|A\|_* + \lambda \|E\|_1 \quad \text{subject to} \quad A + E = D \tag{2}$$
Equation (2) can be solved by an augmented Lagrangian method [25]. The Lagrange function is constructed as follows:
$$L(A, E, Y) = \|A\|_* + \lambda \|E\|_1 + \langle Y, D - A - E \rangle \tag{3}$$
where Y is the Lagrange multiplier.
The constrained problem is thus transformed into an unconstrained one, and a penalty term is added:
$$L(A, E, Y, \mu) = \|A\|_* + \lambda \|E\|_1 + \langle Y, D - A - E \rangle + \frac{\mu}{2} \|D - A - E\|_F^2 \tag{4}$$
where $\mu$ is a positive scalar. A coordinate descent method is then applied to find $Y$, $A$, $E$ and $\mu$ satisfying the conditions: in each iteration, the extremum is computed along one coordinate direction (e.g., $Y$) while the other coordinates (e.g., $A$, $E$ and $\mu$) are held fixed. $\rho$ is the update factor of $\mu$ in each iteration.
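A minimal sketch of this iteration (the inexact augmented Lagrangian scheme for Equation (2)) is given below; the initialization and stopping rule are common textbook choices, not taken from the paper:

```python
import numpy as np

def soft_threshold(X, tau):
    """Elementwise soft thresholding, the proximal operator of the l1-norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_ialm(D, lam=None, mu=None, rho=1.5, tol=1e-7, max_iter=500):
    """Solve min ||A||_* + lam*||E||_1 s.t. A + E = D by inexact ALM."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 1.0 / np.linalg.norm(D, 2)
    # Common dual initialization: Y = D / max(||D||_2, ||D||_inf / lam).
    Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)
    A = np.zeros_like(D)
    E = np.zeros_like(D)
    d_norm = np.linalg.norm(D, 'fro')
    for _ in range(max_iter):
        # A-step: singular value thresholding of D - E + Y/mu.
        U, s, Vt = np.linalg.svd(D - E + Y / mu, full_matrices=False)
        A = (U * soft_threshold(s, 1.0 / mu)) @ Vt
        # E-step: elementwise shrinkage with threshold lam/mu.
        E = soft_threshold(D - A + Y / mu, lam / mu)
        # Dual update, then grow the penalty by the factor rho.
        R = D - A - E
        Y = Y + mu * R
        mu = rho * mu
        if np.linalg.norm(R, 'fro') / d_norm < tol:
            break
    return A, E
```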

2.2. Inversely Mapping and Voting

In this paper, the sub-aperture images of CSAR are formed on a reference ground plane by the back-projection algorithm (BPA) [26], so that all sub-aperture images share the same ground coordinate system.
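For concreteness, the following is a minimal ground-plane back-projection sketch; the data layout and nearest-neighbour range interpolation are our own simplifications and not the toolbox of [26]:

```python
import numpy as np

C = 299792458.0  # speed of light [m/s]

def backproject(range_profiles, ant_pos, grid_xyz, fc, dr, r0):
    """Coherently back-project range-compressed pulses onto a grid.

    range_profiles: (Na, Nr) complex range-compressed pulses
    ant_pos:        (Na, 3) antenna position for each pulse [m]
    grid_xyz:       (Np, 3) grid points on the reference plane [m]
    fc:             carrier frequency [Hz]
    dr, r0:         range-bin spacing and range of bin 0 [m]
    """
    img = np.zeros(grid_xyz.shape[0], dtype=complex)
    for a in range(ant_pos.shape[0]):
        # Distance from this pulse's antenna phase center to each grid point.
        R = np.linalg.norm(grid_xyz - ant_pos[a], axis=1)
        # Nearest-neighbour lookup into the range profile.
        idx = np.clip(np.round((R - r0) / dr).astype(int), 0,
                      range_profiles.shape[1] - 1)
        # Compensate the two-way carrier phase and accumulate coherently.
        img += range_profiles[a, idx] * np.exp(1j * 4.0 * np.pi * fc * R / C)
    return img
```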
Consider a point $T(x_T, y_T, z_T)$ above the reference ground plane, where the height of the imaging plane is $z = z_p$. As shown in Figure 1, if the elevation of the imaging plane is not consistent with the actual elevation of the target, the two-dimensional coordinates of the target on the sub-aperture image deviate from the actual position, resulting in geometric deformation of the full-aperture image [27].
Here we take viewing angle B as an example to explain the imaging geometry of the target point, as shown in Figure 2.
The imaging mechanism of SAR [28] is described as follows:
$$\mathbf{V}_S \cdot \overrightarrow{ST} = 0, \qquad |\overrightarrow{ST}| = |\overrightarrow{SP}| \tag{5}$$
where $S$ denotes the zero-Doppler point corresponding to viewing angle B, $\mathbf{V}_S$ is the velocity of the SAR platform, and $\theta$ denotes the incidence angle. The projection point $P$ is located at the intersection of the zero-Doppler plane, the iso-range line $L$ and the imaging plane. Therefore, when the target is located on the plane perpendicular to the velocity direction of the SAR platform, its Doppler frequency is zero.
With Equation (5), any point $T$ can be projected from the 3D object space to the imaging plane along the iso-range line $L$. In Figure 2, line $L_T$ is the intersection of the imaging plane and the zero-Doppler plane, so $L_T$ is perpendicular to the instantaneous velocity at the center of the synthetic aperture. The aspect angle $\varphi$ refers to the angle between the intersection line $L_T$ and the positive direction of the X-axis. Unless explicitly noted, in the rest of this article the aspect angle is used to refer to different SAR imaging positions. The slope of the intersection line $L_T$ of a sub-aperture image satisfies:
$$\mathbf{k}_{L_T} \cdot \mathbf{V}_S = 0 \tag{6}$$
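For illustration, the projection point $P$ defined by Equation (5) can be computed numerically. The sketch below is our own construction: it uses the fact that $L_T$ has direction $(-V_{S,y}, V_{S,x}, 0)$ and solves the zero-Doppler and iso-range conditions on the plane $z = z_p$:

```python
import numpy as np

def project_to_plane(S, Vs, T, zp):
    """Projection point P of target T on the plane z = zp (Eq. (5)):
    Vs . (P - S) = 0 and |P - S| = |T - S|; the root nearer T is kept."""
    S, Vs, T = (np.asarray(v, dtype=float) for v in (S, Vs, T))
    R2 = np.sum((T - S) ** 2)
    # Particular point of L_T: move from S along (Vx, Vy) within z = zp.
    a = -Vs[2] * (zp - S[2]) / (Vs[0] ** 2 + Vs[1] ** 2)
    P0 = np.array([S[0] + a * Vs[0], S[1] + a * Vs[1], zp])
    d = np.array([-Vs[1], Vs[0], 0.0])  # direction of the line L_T
    # (P0 - S) is orthogonal to d, so |P - S|^2 = |P0 - S|^2 + t^2 |d|^2.
    t = np.sqrt(max(R2 - np.sum((P0 - S) ** 2), 0.0)) / np.linalg.norm(d)
    cands = (P0 + t * d, P0 - t * d)
    return min(cands, key=lambda P: np.sum((P - T) ** 2))
```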
Since the scene extent is small compared with the distance from the radar to the target, the iso-range line $L$ is approximately treated as a straight line $L'$. The height of the target $\Delta h$ can then be obtained by Equation (7):
$$\Delta h = \Delta r \tan\theta \tag{7}$$
where $\Delta h$ is the height difference between the target and the reference ground plane, and $\Delta r$ is the offset between the target and the projection point.
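For readers who want the step behind Equation (7), a one-line derivation under the straight-line approximation (our own addition, consistent with the geometry of Figure 2) reads:

$$\Delta r \sin\theta = \Delta h \cos\theta \quad\Longrightarrow\quad \Delta h = \Delta r \tan\theta$$

since raising the target by $\Delta h$ shortens the slant range by $\Delta h \cos\theta$, while a ground offset of $\Delta r$ lengthens it by $\Delta r \sin\theta$; the two must cancel for the point to stay on the same iso-range line.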
The above describes how a projection point is generated from a target. We now map the projection points back into 3D space by inverting the projection process step by step.
The candidate points are determined as follows. Firstly, along the intersection line $L_T$, any offset $\Delta x$ in the x-direction determines a unique $\Delta y$ in the y-direction. The offset between the imaging point and the candidate point is given by Equation (8), and the height value is obtained by Equation (7). Finally, corresponding to the different offsets, the projection point is inversely mapped to a set of candidate points $(P_1, P_2, P_3, \ldots, P_n)$ in the 3D object space. The inverse mapping of projection point $P$ is shown in Figure 3. Under another aspect angle, the projection point $Q$ is inversely mapped along its iso-range line to a set of candidate points $(Q_1, Q_2, Q_3, \ldots, Q_n)$. The two iso-range lines intersect at $P_3$ and $Q_3$. The intersection of iso-range lines can be regarded as a voting process for the candidate points: each candidate point obtained by inversely mapping the two-dimensional image is marked with a "1" in the 3D object space. The inverse mapping and voting are carried out under multiple aspect angles. As shown in Figure 3, the more iso-range lines intersect at a candidate point, the higher its number of votes. Finally, a candidate point whose number of votes is greater than a certain threshold is preserved and considered a point of the target in 3D space.
$$\Delta r = \sqrt{(\Delta x)^2 + (\Delta y)^2} \tag{8}$$
where $\Delta x$ and $\Delta y$ are the components of the offset along the X-axis and Y-axis, respectively.
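The rules above can be condensed into a short sketch (a hypothetical helper with our own variable names, combining Equations (7) and (8)):

```python
import numpy as np

def inverse_map_point(p_xy, phi, theta, dx_offsets):
    """Map one binarized projection point back to candidate 3D points.

    p_xy:       (x, y) of the projection point on the imaging plane
    phi:        aspect angle of the intersection line L_T [rad]
    theta:      incidence angle [rad]
    dx_offsets: signed candidate offsets along the x-axis [m]
    Returns an (N, 3) array of candidate points along the iso-range line.
    """
    dx = np.asarray(dx_offsets, dtype=float)
    dy = dx * np.tan(phi)      # stay on L_T, whose slope is tan(phi)
    dr = np.hypot(dx, dy)      # Eq. (8): offset between image and candidate
    dh = dr * np.tan(theta)    # Eq. (7): height above the reference plane
    return np.column_stack([p_xy[0] + dx, p_xy[1] + dy, dh])
```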
The procedure of the 3D point cloud reconstruction method is then described step by step; the algorithm flow chart is shown in Figure 4, and a code sketch of the voting step is given after the list of steps.
Step 1: The CSAR data are divided into several sub-apertures. Back-projection grids are used to form the SAR images.
Step 2: The target and background of each sub-aperture image are separated by RPCA. The image is then binarized: the projection part is set to 1 and the background part to 0.
Step 3: Under a given aspect angle, the projection direction of the sub-aperture image is fixed. Under the constraint of the iso-range line, each point in the binarized "1" region is inversely mapped with a given offset and projected into the 3D mesh, generating a series of candidate points.
Step 4: Under the same aspect angle, traverse all points of the projection part.
Step 5: Process the sub-aperture images of all aspect angles with Steps 3 and 4.
Step 6: Count the votes of each point in the 3D grid. Candidates whose vote count is greater than a certain threshold F are preserved.
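As an illustration of Steps 3–6, a minimal voting-accumulation sketch follows (the grid layout and names are our own assumptions; inverse_map_point is the hypothetical helper sketched above):

```python
import numpy as np

def vote_candidates(candidates, grid_origin, spacing, votes):
    """Add one vote per candidate point into the shared 3D grid `votes`.

    candidates:  (N, 3) candidate points from the inverse mapping
    grid_origin: (3,) coordinates of grid cell (0, 0, 0)
    spacing:     grid spacing [m]; votes: integer array over the 3D mesh
    """
    idx = np.round((candidates - grid_origin) / spacing).astype(int)
    # Discard candidates that fall outside the mesh.
    ok = np.all((idx >= 0) & (idx < np.array(votes.shape)), axis=1)
    for i, j, k in idx[ok]:
        votes[i, j, k] += 1

# After all aspect angles are processed (Step 6), keep grid points whose
# vote count exceeds the threshold F (F = 20 in the experiment):
# kept = np.argwhere(votes > F)
```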

3. Experiment and Results

In this section, the Gotcha volumetric SAR data set, collected at X-band with a 640 MHz bandwidth [29], is used to test the reconstruction performance of the proposed algorithm. The spotlighted scene is a parking lot. In this experiment, only the eighth-pass data are used.
BPA is used to process the phase history data and form the raw sub-image sequence. Each sub-aperture covers 1°. The pixel spacing of the sub-aperture images is 0.2 m × 0.2 m. In this paper, the height of the imaging plane is 0 m.
Quantitative analyses of the reconstruction result and the computational burden, with comparative results of the conventional CSAR approach [16] using single-pass CSAR data, are presented here. The experiments were conducted in MATLAB R2018b on a computer with an Intel Core 2.2 GHz processor and 64.0 GB of physical memory.

3.1. Results Obtained by the Proposed Method

Based on the aforementioned RPCA model, the targets and the background are separated from the raw sub-image sequence. The low-rank matrix contains the targets whose height equals the reference imaging height, and the sparse matrix contains the targets at other heights. It is assumed that the sparse portion contains the targets, whereas the low-rank portion contains the scene and the inherent noise. The full scene of the entire aperture after RPCA is shown in Figure 5.
The entire aperture is divided into 360 sub-apertures, and every 10 sub-aperture images after RPCA are non-coherently accumulated, so 36 image groups are obtained in this experiment. Two parameters, $\mu$ and $\rho$, are set to perform the RPCA processing: in Equation (4), $\mu$ is the weight of the penalty term and $\rho$ is the update factor of $\mu$ in each iteration; here $\mu = 1/\|D\|_2$ and $\rho = 1.5$. The weight of the sparse term is set to $\lambda = 1/\sqrt{\max(m,n)}$ for an $m \times n$ matrix $D$.
Under a certain aspect angle, some strong scattering points of the target form projection points in the 2D plane. As shown in Figure 6, without RPCA it is difficult to separate the target from the scene because the image is affected by speckle noise: the extracted projection of the target on the plane is flaky, the iso-range lines generated by the projection points in the inverse mapping become very dense, and the structure of the final target is blocky and hard to recognize. In Figure 7, by contrast, the aspect-angle-dependent projection of the target can be easily obtained from the image.
The cars marked by the yellow, red and blue boxes in Figure 7 are then selected to restore 3D point clouds from the SAR images with the proposed method.
Some targets are anisotropic, which means that they are only illuminated under some aspect angles, so the voting threshold is set to 20 in this paper. The grid spacing in the elevation direction is 0.2 m. The models of the three selected vehicles are different, as shown in Figure 8a, Figure 9a and Figure 10a.
The 3D reconstruction results of the selected targets are shown in Figure 8b, Figure 9b and Figure 10b. Compared with the optical images of the cars, the 3D point clouds and the headings are restored successfully. The structures of the three kinds of vehicles can be distinguished and are consistent with the models shown in the optical images, and the shapes of the vehicles can also be seen in the reconstruction results. Furthermore, different types of vehicles can be distinguished from the reconstructed images using only single-pass CSAR data.
Table 1, Table 2 and Table 3 show the actual size parameters of the vehicles and the estimates obtained by this method. Compared with the actual parameters, the estimated geometric parameter errors of the vehicles are less than 0.2 m, using only single-pass CSAR data. On this basis, without additional information, a more detailed description of the target is achieved, providing the shape and contour information of the target.

3.2. Comparison with 3D Imaging Method Using Single Pass of CSAR Data

Traditional 3D back-projection (3D BP) imaging [10] performs an interpolation operation and a phase compensation operation for each azimuth pulse, grid point by grid point. The number of full-aperture pulses is denoted $N_a$, and the size of the 3D imaging grid of CSAR is $N_x \times N_y \times N_z$. The 3D reconstruction image is obtained by coherently accumulating the 3D imaging results of each azimuth pulse. The final result can be obtained by a generalized likelihood ratio test (GLRT) [30] after 3D imaging of the full aperture: the maximum value at the corresponding grid position among the 3D imaging results of the sub-apertures is selected as the final imaging result at that grid point.
The 3D imaging results of the three vehicles are shown in Figure 11. As mentioned before, the retrieved information is distorted by undesired sidelobes, and it is almost impossible to achieve 3D CSAR imaging of anisotropic scatterers using only single-pass CSAR data. Therefore, the geometric size of the target cannot be obtained from the 3D imaging results of single-pass CSAR data.
The computational costs of the two methods are compared as follows:
For an $N_x \times N_y$ CSAR image, the size of the 3D imaging grid of CSAR is $N_x \times N_y \times N_z$. The operation time of one interpolation and phase compensation of the BP algorithm is denoted $t_1$.
Traditional 3D BP algorithm: the traditional 3D BP imaging algorithm needs to perform the interpolation and phase compensation operation $N_x \times N_y \times N_z \times N_a$ times, so the time consumed by the 3D BP algorithm is:
$$T_1 = N_x \times N_y \times N_z \times N_a \times t_1 \tag{9}$$
The proposed method decomposes the 3D BP imaging algorithm into two parts: 2D BPA, and restoring the height information from the 2D sub-aperture images. Therefore, the time consumed by our algorithm has two parts: the 2D BP algorithm, and the inverse mapping and voting process over the 3D mesh $N_x \times N_y \times N_z$ on the sub-aperture images. The full aperture is divided into $N_{sub}$ sub-apertures, so the second part is executed $N_x \times N_y \times N_z \times N_{sub}$ times. The operation time of one inverse mapping and voting operation is denoted $t_2$. The time consumed by the proposed algorithm is:
$$T_2 = N_x \times N_y \times N_a \times t_1 + N_x \times N_y \times N_z \times N_{sub} \times t_2 \tag{10}$$
Therefore, the ratio of the running times of the two methods is:
$$\frac{T_2}{T_1} = \frac{N_x \times N_y \times N_a \times t_1 + N_x \times N_y \times N_z \times N_{sub} \times t_2}{N_x \times N_y \times N_z \times N_a \times t_1} = \frac{N_x \times N_y \times N_a \times t_1 + N_x \times N_y \times N_z \times N_{sub} \times t_2}{N_x \times N_y \times N_a \times t_1 + N_x \times N_y \times (N_z - 1) \times N_a \times t_1} \tag{11}$$
where $t_2 < t_1$ and $N_{sub} \ll N_a$.
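A quick numeric check of Equations (9)–(11); the sizes below are illustrative assumptions, not the actual values of the Gotcha experiment:

```python
# Assumed illustrative sizes: image grid, height cells, pulses, sub-apertures.
Nx = Ny = 200; Nz = 30; Na = 100000; Nsub = 360
t1, t2 = 1.0, 0.5  # arbitrary units with t2 < t1

T1 = Nx * Ny * Nz * Na * t1                        # Eq. (9)
T2 = Nx * Ny * Na * t1 + Nx * Ny * Nz * Nsub * t2  # Eq. (10)
# The ratio collapses to 1/Nz + (Nsub*t2)/(Na*t1), far below 1.
print(T2 / T1)  # ~0.035 for these numbers
```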
According to the above analysis, the $(N_x \times N_y \times N_z \times N_a)$ interpolation and phase compensation operations along the elevation direction are transformed into $(N_x \times N_y \times N_z \times N_{sub})$ inverse mapping and voting operations, each requiring two multiplications and one addition, which greatly reduces the amount of computation. Hence, compared with the 3D BP imaging method, the proposed method realizes 3D point cloud reconstruction from a single-pass CSAR image with much higher efficiency.
The running-time comparison between traditional 3D BP imaging of a single-pass CSAR image and the proposed method is shown in Table 4. The time consumption of the proposed method is much lower than that of the 3D BP imaging method.

4. Discussion

The proposed method uses the relationship between the projection points in the circular SAR image sequence and the position of the target in 3D space to reconstruct the 3D point cloud of the target. The candidate points of the target are located on the iso-range lines of the projection points, and the intersection of iso-range lines is regarded as a voting process: the more intersections a candidate point accumulates, the higher its number of votes, and candidates with enough votes are preserved.
According to the 3D reconstruction results, not only can the geometric parameters of the target, such as length, width and height, be obtained, but different types of vehicles can also be distinguished from the reconstructed 3D point cloud. Under the constraint of the projection direction, the inverse mapping of the projection points is carried out within the projection area, which makes the reconstruction more accurate and fast. Without additional information, such as polarimetric scattering information, single-pass circular SAR data are used to realize 3D reconstruction; in this way, the information contained in the aspect-angle dimension is exploited to the greatest extent.
As can be seen in Figure 6, noise interferes with the measured SAR data, making it difficult to segment the target region. In this paper, RPCA is used to segment the target and background preliminarily, which also achieves a denoising effect, as shown in Figure 7; this helps preserve the details in the subsequent three-dimensional reconstruction. Comparison with the optical images and the actual geometric parameters of the targets shows that the proposed method is effective.
However, we also notice that the contour is incomplete when only single-pass CSAR data are used for processing, as shown in Figure 10b. Since the vehicle is close to the lawn, the projection information of the vehicle itself is affected, and erroneous points appear in the 3D reconstructed point cloud of the vehicle.
In this paper, only independent vehicle targets separated by gaps were selected for the experiment, to ensure that they are not affected by other vehicles. In future research, we will consider scenes with high target density, where the projection information of different targets overlaps; it may then be necessary to add other types of auxiliary information to extend the application scope of the proposed method.

Author Contributions

S.F. proposed the idea of the method and wrote the paper; Y.L. and W.H. supervised the work and provided suggestions; F.T. provided suggestions about the experiments; Y.W. provided suggestions about the revision of the paper. All authors read and approved the final manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant 61860206013.

Acknowledgments

The authors would like to thank the Air Force Research Laboratory, Washington, DC, USA, for providing the Gotcha volumetric SAR data set to the community.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, L.; An, D.; Huang, X.; Zhou, Z. A 3D Reconstruction Strategy of Vehicle Outline Based on Single-Pass Single-Polarization CSAR Data. IEEE Trans. Image Process. 2017, 26, 5545–5554.
  2. Fu, K.; Peng, J.; He, Q.; Zhang, H. Single image 3D object reconstruction based on deep learning: A review. Multimed. Tools Appl. 2020, 80, 463–498.
  3. Navaneet, K.; Mandikal, P.; Agarwal, M.; Babu, R. CAPNet: Continuous Approximation Projection for 3D Point Cloud Reconstruction Using 2D Supervision. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 8819–8826.
  4. Mandikal, P.; Babu, R. Dense 3D Point Cloud Reconstruction Using a Deep Pyramid Network. In Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 7–11 January 2019.
  5. Pollefeys, M.; Van Gool, L.J.; Vergauwen, M.; Verbiest, F.; Koch, R. Visual Modeling with a Hand-Held Camera. Int. J. Comput. Vis. 2004, 59, 207–232.
  6. Streckel, B.; Bartczak, B.; Koch, R.; Kolb, A. Supporting Structure from Motion with a 3D-Range-Camera; Springer: Berlin/Heidelberg, Germany, 2007.
  7. Kim, S.J.; Gallup, D.; Frahm, J.-M.; Pollefeys, M. Joint radiometric calibration and feature tracking system with an application to stereo. Comput. Vis. Image Underst. 2010, 114, 574–582.
  8. Zhou, Y.; Zhang, L.; Cao, Y.; Wu, Z. Attitude Estimation and Geometry Reconstruction of Satellite Targets Based on ISAR Image Sequence Interpretation. IEEE Trans. Aerosp. Electron. Syst. 2018, 55, 1698–1711.
  9. Soumekh, M. Reconnaissance with Slant Plane Circular SAR Imaging. IEEE Trans. Image Process. 1996, 5, 1252–1265.
  10. Ponce, O.; Prats-Iraola, P.; Scheiber, R.; Reigber, A.; Moreira, A. First Airborne Demonstration of Holographic SAR Tomography with Fully Polarimetric Multicircular Acquisitions at L-Band. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6170–6196.
  11. Moore, L.J.; Majumder, U.K. An analytical expression for the three-dimensional response of a point scatterer for circular synthetic aperture radar. In Proceedings of SPIE-The International Society for Optical Engineering, Beijing, China, 18–19 October 2010; Volume 7699, pp. 223–263.
  12. Ponce, O.; Prats-Iraola, P.; Pinheiro, M.; Rodriguez-Cassola, M.; Scheiber, R.; Reigber, A.; Moreira, A. Fully Polarimetric High-Resolution 3-D Imaging with Circular SAR at L-Band. IEEE Trans. Geosci. Remote Sens. 2014, 52, 3074–3090.
  13. Palm, S.; Oriot, H.M. Radargrammetric DEM Extraction over Urban Area Using Circular SAR Imagery. In Proceedings of the European Conference on Synthetic Aperture Radar, Aachen, Germany, 7–10 June 2010.
  14. Zhang, J.; Suo, Z.; Li, Z.; Zhang, Q. DEM Generation Using Circular SAR Data Based on Low-Rank and Sparse Matrix Decomposition. IEEE Geosci. Remote Sens. Lett. 2018, 15, 724–728.
  15. Teng, F.; Lin, Y.; Wang, Y.; Shen, W.; Hong, W. An Anisotropic Scattering Analysis Method Based on the Statistical Properties of Multi-Angular SAR Images. Remote Sens. 2020, 12, 2152.
  16. Ertin, E.; Austin, C.D.; Sharma, S.; Moses, R.L.; Potter, L.C.; Zelnio, E.G.; Garber, F.D. GOTCHA experience report: Three-dimensional SAR imaging with complete circular apertures. In Proceedings of the Algorithms for Synthetic Aperture Radar Imagery XIV, Orlando, FL, USA, 7 May 2007; SPIE: Bellingham, WA, USA, 2007; Volume 6568, p. 656802.
  17. Dungan, K.E.; Potter, L.C. 3-D Imaging of Vehicles Using Wide Aperture Radar. IEEE Trans. Aerosp. Electron. Syst. 2011, 47, 187–200.
  18. Austin, C.D.; Moses, R.L. Interferometric Synthetic Aperture Radar Detection and Estimation Based 3D Image Reconstruction. In Proceedings of the Algorithms for Synthetic Aperture Radar Imagery XIII, Kissimmee, FL, USA, 17 May 2006.
  19. Panagiotakis, C. Point clustering via voting maximization. J. Classif. 2015, 32, 212–240.
  20. Yu, D.; Ji, S.; Liu, J.; Wei, S. Automatic 3D building reconstruction from multi-view aerial images with deep learning. ISPRS J. Photogramm. Remote Sens. 2021, 171, 155–170.
  21. Salehi, H.; Vahidi, J.; Abdeljawad, T.; Khan, A.; Rad, S.Y.B. A SAR Image Despeckling Method Based on an Extended Adaptive Wiener Filter and Extended Guided Filter. Remote Sens. 2020, 12, 2371.
  22. Zhang, Z.; Lei, H.; Lv, Z. Vehicle Layover Removal in Circular SAR Images via ROSL. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1–5.
  23. Wen, F.; Ying, R.; Liu, P.; Qiu, R.C. Robust PCA Using Generalized Nonconvex Regularization. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 1497–1510.
  24. Bouwmans, T.; Javed, S.; Zhang, H.; Lin, Z.; Otazo, R. On the Applications of Robust PCA in Image and Video Processing. Proc. IEEE 2018, 106, 1427–1457.
  25. Bertsekas, D.P. Constrained Optimization and Lagrange Multiplier Methods; Academic Press: New York, NY, USA, 1982.
  26. Gorham, L.A.; Moore, L.J. SAR image formation toolbox for MATLAB. Proc. SPIE 2010, 7699, 769906.
  27. Feng, S.; Lin, Y.; Wang, Y.; Yang, Y.; Hong, W. DEM Generation with a Scale Factor Using Multi-Aspect SAR Imagery Applying Radargrammetry. Remote Sens. 2020, 12, 556.
  28. Leberl, F.W. Radargrammetric Image Processing; Artech House: London, UK, 1990.
  29. Casteel, C.H., Jr.; Gorham, L.A.; Minardi, M.J.; Scarborough, S.M.; Naidu, K.D.; Majumder, U.K. A challenge problem for 2D/3D imaging of targets from a volumetric data set in an urban environment. In Proceedings of the Algorithms for Synthetic Aperture Radar Imagery XIV, Orlando, FL, USA, 7 May 2007; SPIE: Bellingham, WA, USA, 2007.
  30. Voccola, K.; Yazıcı, B.; Ferrara, M.; Cheney, M. On the relationship between the generalized likelihood ratio test and backprojection for synthetic aperture radar imaging. In Proceedings of the Automatic Target Recognition XIX, Orlando, FL, USA, 22 May 2009; SPIE: Bellingham, WA, USA, 2009; Volume 7335.
Figure 1. The imaging diagram under two viewing angles. T is a target above the ground plane. P and Q are the projection points from different viewing angles.
Figure 2. The imaging geometry under viewing angle B. T is a target above the ground plane. P is the projection point.
Figure 3. Inverse mapping and voting process. P is a projection point. $P_i$ are the candidate target points in 3D space corresponding to different offsets along the iso-range line $L'$. Q is another projection point, inversely mapped along its iso-range line under another aspect angle.
Figure 4. The algorithm flow chart of the proposed method.
Figure 5. Full scene of the entire 360° aperture after RPCA.
Figure 6. A certain set of 10 sub-aperture images without RPCA, non-coherently accumulated.
Figure 7. Sparse portions of a certain set of 10 sub-aperture images after RPCA, non-coherently accumulated.
Figure 8. Comparison between the real target of Vehicle C and the 3D reconstruction result. (a) Optical image of Vehicle C. (b) Reconstructed 3D point cloud of Vehicle C.
Figure 9. Comparison between the real target of Vehicle B and the 3D reconstruction result. (a) Optical image of Vehicle B. (b) Reconstructed 3D point cloud of Vehicle B.
Figure 10. Comparison between the real target of Vehicle F and the 3D reconstruction result. (a) Optical image of Vehicle F. (b) Reconstructed 3D point cloud of Vehicle F.
Figure 11. 3D imaging using 3D BP with single-pass CSAR data. (a) 3D imaging result of Vehicle C. (b) 3D imaging result of Vehicle B. (c) 3D imaging result of Vehicle F.
Table 1. Quantitative evaluation of Vehicle C.

Dimensions        Length (m)   Width (m)   Height (m)
actual value      4.98         1.86        1.42
restored value    4.88         1.74        1.44
error             0.10         0.12        −0.02
Table 2. Quantitative evaluation of Vehicle B.

Dimensions        Length (m)   Width (m)   Height (m)
actual value      4.75         1.74        1.41
restored value    4.60         1.60        1.40
error             0.15         0.14        0.01
Table 3. Quantitative evaluation of Vehicle F.

Dimensions        Length (m)   Width (m)   Height (m)
actual value      4.45         1.77        1.44
restored value    4.40         1.80        1.60
error             0.05         −0.03       −0.16
Table 4. Running-time comparison between 3D BP imaging and the proposed method.

Target       3D Imaging (s)   The Proposed Method (s)
vehicle C    8111             1023
vehicle B    7608             970
vehicle F    8135             1037
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
