Article

Drone-Based 3D Synthetic Aperture Radar Imaging with Trajectory Optimization

by Jedrzej Drozdowicz * and Piotr Samczynski
Department of Electronics and Information Technology, Institute of Electronic Systems, Warsaw University of Technology, Nowowiejska 15/19, 00-665 Warsaw, Poland
* Author to whom correspondence should be addressed.
Sensors 2022, 22(18), 6990; https://doi.org/10.3390/s22186990
Submission received: 30 July 2022 / Revised: 2 September 2022 / Accepted: 13 September 2022 / Published: 15 September 2022
(This article belongs to the Special Issue Recent Advancements in Radar Imaging and Sensing Technology II)

Abstract

This paper presents a trajectory determination and optimization method for multirotors equipped with a single-channel radar to obtain 3D Synthetic Aperture Radar (SAR) imaging. The result is a realistic trajectory that allows imaging of the assumed quality to be obtained in less time than with a multi-pass trajectory. The optimization criteria, in addition to the cross-range resolution, are the Peak Sidelobe Ratio (PSLR), the Integrated Sidelobe Ratio (ISLR), and the time of flight. The algorithm is based on a realistic motion model of the radar platform. This paper presents all the steps of the algorithm and provides simulation results that demonstrate its practical applicability. The advantages of the presented approach over existing ones are indicated, and further research directions are proposed.

1. Introduction

Synthetic Aperture Radar (SAR) technology has made significant progress over the last decades, from simple two-dimensional (2D) aerial imagery [1] to high-resolution three-dimensional (3D) drone imagery [2], and it is successfully becoming complementary to lidar as a 3D imaging technique [3]. A commonly used 3D SAR imaging method is Interferometric SAR (InSAR), which is suitable for imaging the surface of the Earth. For urban area imaging, where InSAR fails due to large height differences, tomographic SAR and Circular SAR (CSAR) are commonly used. An important feature of these geometries is a very long, straight trajectory, so the high maneuverability of Unmanned Aerial Vehicles (UAVs), especially multirotors, is not fully utilized. The long flight time is a significant problem, especially for small UAVs with limited battery capacity and significant radar power consumption.
The solution to these issues is to use a non-rectilinear trajectory of the radar platform, one that allows the imaging to be achieved with the assumed quality while being as short as possible. Because a multirotor can fly along any trajectory and its motion model is known, it is possible to optimize the trajectory and reduce its length. The result is, first of all, energy savings and the ability to perform imaging without a break for battery charging, but also a reduction in the amount of recorded data, which speeds up processing.
This paper presents a method of determining the non-rectilinear trajectory of a radar platform to obtain three-dimensional (3D) images of specific areas with an assumed quality. The main contribution of this article is a method of determining the trajectory that takes into account the motion model of the radar platform (a multirotor), together with a method of optimizing this trajectory.
The remainder of this section introduces the existing 3D SAR imaging methods and covers the state of the art. The second section describes in detail the 3D SAR imaging method using a non-rectilinear trajectory of the radar platform, along with its derivation, trajectory determination, and optimization. The third section presents the simulation results of the presented method, developed using a realistic model of the motion of the radar platform. The last section is devoted to a discussion of the results and a summary.

1.1. Synthetic Aperture Radar Signal Model

Synthetic Aperture Radar (SAR) is a technique for obtaining radar images using a radar sensor placed on a moving radar platform. It should be stressed that the movement of the platform itself is not necessary to obtain a SAR image; what matters is only that the radar sensor is placed at different positions [4]. A synthetic aperture comprising those positions produces a Point Spread Function (PSF) of the form [5]:
$$E(\mathbf{p}) = \sum_{n=1}^{N} \exp\!\left( j\, \frac{2 \pi r_n}{\lambda} \right) \operatorname{sinc}\!\left( \frac{2 B r_n}{c} \right), \qquad (1)$$
where $\mathbf{p}$ denotes the point at which the PSF is calculated; $r_n = |\mathbf{p} - \mathbf{p}_{a_n}| - |\mathbf{p}_t - \mathbf{p}_{a_n}|$ is the difference between the distance from the point $\mathbf{p}$ to the $n$-th antenna position $\mathbf{p}_{a_n}$ and the distance from the point target $\mathbf{p}_t$ to the $n$-th antenna position $\mathbf{p}_{a_n}$; $\lambda$ is the wavelength; $c$ is the speed of light; and $B$ is the signal bandwidth. The most common waveform type for SAR is Frequency Modulated Continuous Wave (FMCW) [6], in the form of
$$x(t) = A\, \operatorname{rect}\!\left( \frac{t}{T} \right) \cos\!\left( 2 \pi f_c t + \pi \alpha t^2 \right), \qquad (2)$$
where $f_c$ is the carrier (center) frequency, $\alpha = B/T$ is the LFM slope, $B$ is the bandwidth, $T$ is the pulse duration, and $A$ is the signal amplitude. Other types of radar are also used, such as noise [7] and passive [8] radars. This paper is not specifically tied to any of these methods and assumes that all conditions for the validity of (1) are met and that the resolution of the SAR image depends solely on the trajectory of the radar carrier, the wavelength, and the signal bandwidth. Such conditions include, e.g., a time-bandwidth product higher than approximately 100 ($BT \gtrsim 100$) for FMCW [4].
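The PSF in (1) is straightforward to evaluate numerically. The following Python sketch is our own illustration (the function name, array layout, and example values are not from the paper); it computes the PSF magnitude for an arbitrary set of antenna positions:

```python
import numpy as np

def psf(p, p_t, p_a, lam, B, c=3e8):
    """PSF of Eq. (1): coherent sum over antenna positions.

    p   : (M, 3) points at which the PSF is evaluated [m]
    p_t : (3,)   point-target position [m]
    p_a : (N, 3) antenna (aperture sampling) positions [m]
    lam : wavelength [m];  B : signal bandwidth [Hz]
    """
    r_p = np.linalg.norm(p[:, None, :] - p_a[None, :, :], axis=-1)  # |p - p_an|, (M, N)
    r_t = np.linalg.norm(p_t[None, :] - p_a, axis=-1)               # |p_t - p_an|, (N,)
    r_n = r_p - r_t[None, :]                                        # range differences
    # phase term weighted by the range-compression sinc (np.sinc is sin(pi x)/(pi x))
    terms = np.exp(1j * 2 * np.pi * r_n / lam) * np.sinc(2 * B * r_n / c)
    return np.abs(terms.sum(axis=1))

# Example: PSF along a vertical line through a target at the origin, for an
# 11-element vertical aperture 100 m away (values as used in this paper)
z = np.linspace(-5, 5, 501)
pts = np.column_stack([np.zeros_like(z), np.zeros_like(z), z])
apert = np.column_stack([np.full(11, 100.0), np.zeros(11), np.linspace(-2.5, 2.5, 11)])
E = psf(pts, p_t=np.zeros(3), p_a=apert, lam=0.1, B=300e6)
```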

1.2. Existing 3D SAR Imaging Methods

A one-dimensional (1D) synthetic aperture (a straight line) produces a two-dimensional (2D) SAR image, and a two-dimensional (2D) synthetic aperture can produce a three-dimensional (3D) SAR image. The most popular approach to 3D SAR imaging is Interferometric SAR (InSAR) [6], which requires at least two receiving antennas or at least two consecutive passes of the radar carrier, as presented in Figure 1.
The vertical resolution of Interferometric Synthetic Aperture Radar (InSAR) is [6]:
$$\delta_V = \frac{\lambda r_0}{2 L_V}, \qquad (3)$$
where $r_0$ is the distance to the scene and $L_V$ is the distance between the antennas (the baseline). For the two-antenna or two-pass case, Equation (3) also defines the maximum unambiguous height of the scene [6]:
$$h_{max}^V = \frac{\lambda r_0}{2 d_V}, \qquad (4)$$
where $d_V$ is the distance between the antennas (note that in this case $d_V = L_V$). For example, with $\lambda = 0.1$ m and $r_0 = 100$ m, a baseline of $L_V = 5$ m yields $\delta_V = 1$ m. This can be seen in Figure 2, where the mainlobe is duplicated with a raster equal to its width, which means that the maximum unambiguous height is equal to the resolution (unless noted otherwise, simulations in this paper use the following radar parameters: carrier frequency $f_c = 3$ GHz, bandwidth $B = 300$ MHz, and range $r_0 = 100$ m; the simulations assume that there is no signal noise). To overcome height ambiguity, phase unwrapping can be used; however, in urban areas where height differences are significant, such unwrapping does not produce reliable results. In such cases, a compromise between resolution and height ambiguity has to be made: a large distance between antennas/passes (baseline) ensures high resolution, while a high unambiguous height requires a short baseline. To avoid the need for this compromise, CSAR or Multi-Baseline SAR (MBSAR) can be used [9,10]. An example of such a trajectory is presented in Figure 3.
In this case, the distance between passes ($d_V$) is not equal to the vertical length of the synthetic aperture ($L_V$). An example PSF is presented in Figure 4.
It can be seen that increasing the vertical length of the synthetic aperture ($L_V$) by adding more baselines increases the resolution without increasing the height ambiguity: the distance between the repeated lobes $h_{max}^V$ is the same in Figure 2 and Figure 4 (in the first case, $h_{max}^V = \delta_V$; see (3) and (4)), but the size of the lobe $\delta_V$ is smaller in the second case. The main drawback of these particular methods is the unnecessarily long scanning time and the large amount of data.
Another solution is the use of non-rectilinear trajectories. The literature available on this topic is scarce; however, two sources provide a decent basis for further development. The research presented in [11] focuses on bistatic 3D SAR imaging, with a UAV considered as the radar carrier. In [11], both parametric and nonparametric imaging are considered, and trajectory non-rectilinearity is introduced by adding curvature. This approach is an important step toward establishing the relationship between 3D image quality and the receiver flight path. Another study [12] focuses on a sinusoidal trajectory (see Figure 5) and presents numerous simulation results of imaging simple scenes with isotropically reflecting points for trajectories that are a single period of a sinusoidal function, a segment of a circle, and a combination (sum) of sinusoidal and circular trajectories. Although that article often emphasizes the strong dependence of the characteristics on the aspect angle, the simulations were carried out only for isotropic points. The problem of the feasibility of implementing a given trajectory with a radar carrier was raised, but the only conclusion presented was the need to minimize the height gradient, indicating that a fixed-wing aircraft was probably considered as the radar carrier.
Neither of the approaches mentioned above fully exploits the potential of a multirotor, which can fly along trajectories much more complex than a curved line [13]. What is more, the practical application of such trajectories can be difficult, because they are determined without taking the platform motion model into account. In the case of fixed-wing aircraft, following the curve accurately requires solving a series of equations, while in the case of multirotors, it requires determining many waypoints.

1.3. Imaging Algorithms

To obtain a Synthetic Aperture Radar (SAR) image from the reflected echoes (or the beat signal), an imaging algorithm must be applied. For a typical two-dimensional (2D) SAR, a matched filter method can be applied, which is often described as the fastest method producing high-resolution images [14]. An InSAR image is usually obtained by determining the phase difference between two images from two antennas or passes, and a similar approach can be applied to CSAR and MBSAR. In the case of a non-rectilinear trajectory, this method is not applicable, and other methods have to be used. The simplest but most computationally demanding method is the Back Projection Algorithm (BPA), which relies on the direct application of (1) [8]. An improvement in terms of computational complexity is provided by the Polar Format Algorithm (PFA) [15]. Both the BPA and PFA allow for an arbitrary image pixel size, whereas in the case of the matched filter method, the pixel size is determined by the radar parameters [6].
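As an illustration of the BPA, the sketch below backprojects range-compressed echoes onto an arbitrary set of image pixels. It is a minimal sketch under our own assumptions (nearest-bin range lookup and the phase convention of (1)); a practical implementation would interpolate between range bins and use the phase convention appropriate to the chosen waveform:

```python
import numpy as np

def backprojection(range_profiles, p_a, pixels, r_axis, lam):
    """Minimal BPA sketch.

    range_profiles : (N, R) complex range-compressed echoes, one row per position
    p_a            : (N, 3) antenna positions [m]
    pixels         : (M, 3) image pixel/voxel positions [m]
    r_axis         : (R,)  range axis of the profiles [m]
    lam            : wavelength [m]
    """
    image = np.zeros(len(pixels), dtype=complex)
    for prof, pos in zip(range_profiles, p_a):
        r = np.linalg.norm(pixels - pos, axis=1)               # pixel-to-antenna range
        idx = np.clip(np.searchsorted(r_axis, r), 0, len(r_axis) - 1)
        image += prof[idx] * np.exp(1j * 2 * np.pi * r / lam)  # phase compensation
    return np.abs(image)
```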

2. Method

The trajectories described in the previous section are simple, but because they are not sufficiently parameterizable, they are not suitable for advanced optimization. The trajectories presented so far are the result of changing the position of the radar platform over time. However, each trajectory can be considered in isolation from time, as a set of points in space, i.e., a synthetic aperture [4]. Following this line of reasoning, each synthetic aperture is an example of sampling a certain surface (synthetic aperture surface). The shape (outline) of the surface corresponds to the mainlobe of PSF, while the distribution of sampling points corresponds to the sidelobes (or grating lobes in the case of even sampling). An example is presented in Figure 6.
Evident grating lobes can be seen. If the uniform sampling is replaced by random sampling (Figure 7), the grating lobes become smeared. In general, a sufficiently densely sampled aperture of dimensions $L_H$ by $L_V$ provides $\delta_H$ by $\delta_V$ resolution when:
$$L_H = \frac{r_0 \lambda}{2 \delta_H}, \qquad (5)$$

$$L_V = \frac{r_0 \lambda}{2 \delta_V}, \qquad (6)$$
where $\delta_H$ and $\delta_V$ are the horizontal and vertical resolutions, respectively, and $r_0$ is the distance from the aperture to the target. Note that both multiple rectilinear flights at different altitudes (Figure 4) and a flight along some non-rectilinear trajectory are examples of aperture sampling (Figure 8).
This provides an introduction to the method described in this paper. It assumes the determination of a radar platform trajectory that samples the aperture surface in order to achieve the quality requirements. The block diagram of the algorithm is shown in Figure 9.
The algorithm starts with the determination of the synthetic aperture surface, and then the waypoints are placed on it through which the trajectory is carried out according to the motion model of the platform. The final step is optimization to improve the quality parameters and reduce the flight time. The subsequent stages are described in the following subsections.

2.1. Synthetic Aperture Surface Determination

The synthetic aperture surface outline is characterized by the distance from the imaged object and the span. The distance from the object can be taken arbitrarily, with the requirement that the entire object of size $D_H$ by $D_V$ fits within the antenna beam:

$$r_0 > \frac{D_V}{2 \tan \frac{\theta_V}{2}}, \qquad (7)$$

$$r_0 > \frac{D_H}{2 \tan \frac{\theta_H}{2}}, \qquad (8)$$

where $D_V$ and $D_H$ are the vertical and horizontal dimensions of the Region Of Interest (ROI), and $\theta_V$ and $\theta_H$ are the vertical and horizontal antenna beamwidths. Far-field conditions are assumed to be met:

$$r_0 \gg D, \qquad (9)$$

$$r_0 \gg \lambda, \qquad (10)$$

where $r_0$ is the distance to the object and $D$ is the largest ROI dimension. The aperture span is determined according to Formulas (5) and (6).
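A minimal numerical sketch of this sizing step is shown below; the target resolutions are our own assumption, while the remaining values follow the example parameters used in this paper:

```python
import numpy as np

c = 3e8
f_c = 3e9                                     # carrier frequency [Hz]
lam = c / f_c                                 # wavelength [m]
D_H = D_V = 10.0                              # ROI dimensions [m]
th_H, th_V = np.radians(10), np.radians(20)   # antenna beamwidths

# beam-fit conditions: the whole ROI must fit within the antenna beam
r0_min = max(D_H / (2 * np.tan(th_H / 2)), D_V / (2 * np.tan(th_V / 2)))
r0 = 100.0                                    # chosen standoff; must exceed r0_min

# aperture span per Formulas (5) and (6) for assumed target resolutions
delta_H = delta_V = 1.0                       # [m], our assumption
L_H = r0 * lam / (2 * delta_H)
L_V = r0 * lam / (2 * delta_V)
print(f"r0_min = {r0_min:.1f} m, L_H = {L_H:.1f} m, L_V = {L_V:.1f} m")
```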

2.2. Trajectory Determination

From a practical point of view, it is important that the trajectory is defined unambiguously by a minimum number of parameters, as the number of parameters determines the number of dimensions of the function to be optimized. If the motion model of the radar platform is known, and the influence of atmospheric conditions (wind) on the actual trajectory can be neglected, the trajectory is unambiguously determined by the locations of the waypoints. Waypoints are the navigation points used by the radar platform's autopilot to create the trajectory. They are visited by the platform in a predefined order, and a waypoint is considered visited when the distance between the platform and the waypoint is lower than a specified acceptance radius. In this paper, the number of waypoints is denoted by $N$. The waypoints are distributed along the horizontal dimension of the aperture plane, placed alternately on its lower and upper sides, and visited by the platform in a zigzag pattern (see Figure 8), where the positions of the initial ($P_{W_1}$) and final ($P_{W_N}$) points are given by:

$$P_{W_1} = \left[ r_0,\ -\tfrac{L_H}{2},\ -\tfrac{L_V}{2} \right]^T, \qquad (11)$$

$$P_{W_N} = \begin{cases} \left[ r_0,\ \tfrac{L_H}{2},\ \tfrac{L_V}{2} \right]^T & \text{if } 2 \mid N \\ \left[ r_0,\ \tfrac{L_H}{2},\ -\tfrac{L_V}{2} \right]^T & \text{if } 2 \nmid N, \end{cases} \qquad (12)$$

where $r_0$ is the distance to the object, $L_H$ and $L_V$ are the horizontal and vertical aperture dimensions, respectively, $2 \mid N$ means $N$ is divisible by 2, and $2 \nmid N$ means $N$ is not divisible by 2. The positions of the remaining points:

$$P_{W_n} = \left[ P_{W_n}^x,\ P_{W_n}^y,\ P_{W_n}^z \right]^T \qquad (13)$$

are determined by:

$$P_{W_n}^x = r_0, \qquad (14)$$

$$P(P_{W_n}^y < \gamma) = \begin{cases} 0 & \text{for } \gamma \leq -\frac{L_H}{2} \\ \frac{\gamma}{L_H} + \frac{1}{2} & \text{for } -\frac{L_H}{2} < \gamma \leq \frac{L_H}{2} \\ 1 & \text{for } \gamma > \frac{L_H}{2}, \end{cases} \qquad (15)$$

$$P_{W_i}^y \leq P_{W_j}^y \quad \forall\, i < j, \qquad (16)$$

$$P_{W_n}^z = \begin{cases} \frac{L_V}{2} & \text{for } 2 \mid n \\ -\frac{L_V}{2} & \text{for } 2 \nmid n, \end{cases} \qquad (17)$$

where the operator $P(\cdot)$ denotes probability and $\gamma$ is an independent variable used only for the definition. The horizontal positions of the points are random (non-uniform). If they were uniform, one would obtain a trajectory close to the sinusoidal one described in [12], which is characterized by very prominent sidelobes (see Figure 5). To counteract this, the points are randomly distributed, which reduces the prominence of the strongest sidelobe at the expense of introducing multiple weaker sidelobes, as seen in Figure 8.
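A minimal sketch of this waypoint generation procedure in Python is given below (the function name is our own; forcing the first and last y coordinates to the aperture edges follows the endpoint definitions above):

```python
import numpy as np

def generate_waypoints(N, r0, L_H, L_V, seed=None):
    """Zigzag waypoints of Section 2.2: y positions drawn uniformly at
    random and sorted; z alternating between the lower and upper edges."""
    rng = np.random.default_rng(seed)
    y = np.sort(rng.uniform(-L_H / 2, L_H / 2, N))
    y[0], y[-1] = -L_H / 2, L_H / 2                          # endpoints at the edges
    z = np.where(np.arange(N) % 2 == 0, -L_V / 2, L_V / 2)   # odd n low, even n high
    x = np.full(N, r0)                                       # aperture plane at x = r0
    return np.column_stack([x, y, z])

waypoints = generate_waypoints(N=5, r0=100.0, L_H=5.0, L_V=5.0, seed=42)
```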

2.3. Waypoints Following

In this paper, the following waypoint-following model is adopted: only one waypoint is active at each moment of platform movement, and the velocity and acceleration are chosen so that the platform can stop when reaching a waypoint. An acceptance radius is defined for each waypoint; if the distance of the platform from the waypoint is smaller than the acceptance radius, the next waypoint becomes active. A demonstration of the motion model is presented in Figure 10.
The trajectory calculation process has the following steps for each discrete time moment t:
  • Check if the current waypoint has been reached:

$$|P_P(t) - P_{W_n}| \leq r_a \ \Rightarrow\ n := n + 1; \qquad n > N \ \Rightarrow\ \text{trajectory complete}, \qquad (18)$$

    where $P_P(t)$ is the position of the platform at time $t$, $P_{W_n}$ is the $n$-th waypoint, $r_a$ is the acceptance radius, and $N$ is the total number of waypoints.
  • Determine the directional vector of the trajectory that points toward the current waypoint:

$$\mathbf{d} = \frac{P_{W_n} - P_P(t)}{|P_{W_n} - P_P(t)|}. \qquad (19)$$

  • Determine the maximum speed in that direction:

$$v_m = \sqrt{2\, a_{max}(\mathbf{d})\, s_b(\mathbf{d})}, \qquad (20)$$

    where $a_{max}(\cdot)$ is the maximum acceleration in the specified direction (motion model limit) and $s_b(\cdot)$ is the braking distance, i.e., the distance required for the radar platform to stop when moving with the maximum speed:

$$s_b = \frac{1}{2}\, \frac{v_{max}^2(\mathbf{d})}{a_{max}(\mathbf{d})}, \qquad (21)$$

    where $v_{max}(\cdot)$ is the maximum velocity in the specified direction (motion model limit).
  • Determine the target velocity:

$$v_t = \begin{cases} \sqrt{2\, a_{max}(\mathbf{d})\, |P_P(t) - P_{W_n}|}\ \mathbf{d} & \text{if } |P_P(t) - P_{W_n}| \leq r_a \\ \min(v_m,\, v_{max}(\mathbf{d}))\ \mathbf{d} & \text{otherwise}. \end{cases} \qquad (22)$$

  • Determine the difference between the target velocity and the current velocity:

$$\Delta v = v_t - v(t), \qquad (23)$$

    where $v(t)$ is the radar platform velocity at time $t$ (the current velocity).
  • Determine the target acceleration:

$$a_t = \frac{\Delta v}{|\Delta v|}\, a_{max}\!\left(\frac{\Delta v}{|\Delta v|}\right) \qquad (24)$$

    and clip it if it is too high:

$$a(t + \Delta t) = \begin{cases} \dfrac{\Delta v}{\Delta t} & \text{if } |a_t|\, \Delta t > |\Delta v| \\ a_t & \text{otherwise}, \end{cases} \qquad (25)$$

    where $\Delta t$ is the calculation time step.
  • Finally, determine the updated velocity and position:

$$v(t + \Delta t) = v(t) + a(t + \Delta t)\, \Delta t, \qquad (26)$$

$$P_P(t + \Delta t) = P_P(t) + v(t + \Delta t)\, \Delta t. \qquad (27)$$
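The steps above can be collected into a short simulation loop. The sketch below is our own illustration: the direction-dependent limits $a_{max}(\mathbf{d})$ and $v_{max}(\mathbf{d})$ are supplied as functions, and the braking branch is switched on the braking distance rather than the acceptance radius (our simplification, so that deceleration starts early enough):

```python
import numpy as np

def simulate(waypoints, a_max, v_max, r_a=0.5, dt=0.01, t_max=600.0):
    """Waypoint-following motion model of Section 2.3 (sketch)."""
    p = waypoints[0].astype(float).copy()      # start at the first waypoint
    v = np.zeros(3)
    n, path = 1, [p.copy()]
    for _ in range(int(t_max / dt)):
        if n >= len(waypoints):
            break                              # trajectory complete
        err = waypoints[n] - p
        dist = np.linalg.norm(err)
        if dist <= r_a:                        # waypoint reached: activate the next
            n += 1
            continue
        d = err / dist                         # unit vector toward the waypoint
        am, vlim = a_max(d), v_max(d)
        s_b = 0.5 * vlim ** 2 / am             # braking distance from v_max
        # decelerating profile on approach, cruise speed otherwise
        speed = np.sqrt(2 * am * dist) if dist <= s_b else vlim
        dv = speed * d - v                     # velocity error
        # clip the acceleration (reach the target velocity exactly if possible)
        a = dv / dt if np.linalg.norm(dv) < am * dt else am * dv / np.linalg.norm(dv)
        v = v + a * dt
        p = p + v * dt
        path.append(p.copy())
    return np.array(path)

# crude direction-dependent limits in the spirit of Table 1 (up/down/horizontal);
# `waypoints` comes from the previous sketch
a_max = lambda d: 3.0 if d[2] > 0.5 else (8.0 if d[2] < -0.5 else 5.0)
v_max = lambda d: 4.0 if d[2] > 0.5 else (8.0 if d[2] < -0.5 else 6.0)
path = simulate(waypoints, a_max, v_max)
```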

2.4. Quality Assessment

With the trajectory determined, its PSF can be calculated so that quality metrics can be assigned, allowing optimization. The PSF is determined according to Formula (1) in the $x = 0$ plane, in the range:

$$y \in \left[ -\frac{D_H}{2},\ \frac{D_H}{2} \right], \qquad (28)$$

$$z \in \left[ -\frac{D_V}{2},\ \frac{D_V}{2} \right], \qquad (29)$$

where $D_H$ and $D_V$ are the horizontal and vertical dimensions of the ROI, respectively. The PSF can be determined with any resolution; the finer, the better. In this study, it was found empirically that an acceptable compromise between the speed and accuracy of the calculation is a pixel size at least ten times smaller than the expected finest resolution.

2.5. Mainlobe Extraction

Mainlobe extraction is required to determine all quality parameters. For this purpose, the standard recursive flood-fill algorithm was used, which is as follows:
  1. Mark the center pixel as belonging to the mainlobe;
  2. Mark each neighboring pixel with a smaller value as belonging to the mainlobe;
  3. Repeat step 2 for each newly marked neighboring pixel.
With the mainlobe extracted, quality parameters can be estimated.
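An iterative (queue-based) variant of this flood fill, which avoids recursion-depth limits on large PSF images, could look as follows (a sketch; the function name and the boolean-mask representation are our own):

```python
import numpy as np
from collections import deque

def extract_mainlobe(psf_img):
    """Flood-fill mainlobe extraction of Section 2.5. psf_img is a 2-D PSF
    magnitude image whose centre pixel is the mainlobe peak; returns a
    boolean mask of mainlobe pixels."""
    mask = np.zeros(psf_img.shape, dtype=bool)
    start = (psf_img.shape[0] // 2, psf_img.shape[1] // 2)
    mask[start] = True
    queue = deque([start])
    while queue:
        i, j = queue.popleft()
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < psf_img.shape[0] and 0 <= nj < psf_img.shape[1]
                    and not mask[ni, nj] and psf_img[ni, nj] < psf_img[i, j]):
                mask[ni, nj] = True      # value decreases: still on the mainlobe
                queue.append((ni, nj))
    return mask
```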

2.6. Resolution Estimation

Because the mainlobe shape deviates from a rectangle when the synthetic aperture is sampled unevenly, it is necessary to develop an equivalent measure of resolution. For the purpose of this research, it is defined via a rectangle whose outline is most similar to that of the mainlobe. The vertical and horizontal resolutions are given by $\tilde{\delta}_V(\alpha_0)$ and $\tilde{\delta}_H(\alpha_0)$, respectively, where $\alpha_0$ denotes the angle at which the condition of best similarity is met:

$$\alpha_0 = \underset{\alpha \in (0, \frac{\pi}{2})}{\arg\min} \left| \arctan\!\left(\frac{\tilde{\delta}_V(\alpha)}{\tilde{\delta}_H(\alpha)}\right) - \alpha \right|, \qquad (30)$$

where $\tilde{\delta}_V(\alpha)$ and $\tilde{\delta}_H(\alpha)$ are the vertical and horizontal dimensions of the assumed rectangle, calculated as the averages of the $z$ and $y$ coordinates of the points belonging to the vertical and horizontal mainlobe outlines, respectively:

$$\tilde{\delta}_V(\alpha) = \overline{\left| p_{o_V}^z \right|}, \qquad (31)$$

$$\tilde{\delta}_H(\alpha) = \overline{\left| p_{o_H}^y \right|}. \qquad (32)$$

The vertical and horizontal outlines $p_{o_V}$ and $p_{o_H}$ for a given $\alpha$ are the sets of points defined as the "top" and "right" sides of the mainlobe outline:

$$p_{o_i} \in p_{o_V} \iff \arctan\left| \frac{p_{o_i}^z}{p_{o_i}^y} \right| > \alpha, \qquad (33)$$

$$p_{o_i} \in p_{o_H} \iff \arctan\left| \frac{p_{o_i}^z}{p_{o_i}^y} \right| < \alpha. \qquad (34)$$

The outline of the mainlobe is obtained by determining those PSF pixels $p_i$ that do not belong to the mainlobe (ML) but are adjacent to at least one pixel belonging to the mainlobe:

$$p_i \in p_o \iff p_i \notin p_{ML} \ \wedge\ \exists\, p_j \in p_{ML}: p_j = p_i + \delta,\quad \delta \in \{ (-1, 0), (0, 1), (1, 0), (0, -1) \}, \qquad (35)$$

where $p_o$ is the set of pixels belonging to the outline of the mainlobe and $p_{ML}$ is the set of pixels belonging to the mainlobe. This provides a measure equivalent to the resolution that would be achieved with a rectangular aperture whose PSF is most similar to the PSF under consideration.
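A sketch of this procedure is shown below; it is our own illustration (grid search over α, mainlobe centred in the image), following the outline and side definitions above:

```python
import numpy as np

def estimate_resolution(mask, dy, dz, n_alpha=90):
    """Equivalent-rectangle resolution of Section 2.6 (sketch).
    mask : boolean mainlobe mask (z along axis 0, y along axis 1)
    dy, dz : pixel sizes [m]; returns (delta_H, delta_V)."""
    # outline: pixels outside the mainlobe that are 4-adjacent to it
    grown = np.zeros_like(mask)
    grown[1:, :] |= mask[:-1, :]; grown[:-1, :] |= mask[1:, :]
    grown[:, 1:] |= mask[:, :-1]; grown[:, :-1] |= mask[:, 1:]
    iz, iy = np.nonzero(grown & ~mask)
    z = (iz - mask.shape[0] // 2) * dz
    y = (iy - mask.shape[1] // 2) * dy
    ang = np.arctan2(np.abs(z), np.abs(y))     # angular position of outline points
    best, best_err = None, np.inf
    for alpha in np.linspace(0.01, np.pi / 2 - 0.01, n_alpha):
        top, right = ang > alpha, ang < alpha  # "top" and "right" outline sides
        if not top.any() or not right.any():
            continue
        dV = np.abs(z[top]).mean()             # vertical rectangle dimension
        dH = np.abs(y[right]).mean()           # horizontal rectangle dimension
        err = abs(np.arctan2(dV, dH) - alpha)  # best-similarity condition
        if err < best_err:
            best, best_err = (dH, dV), err
    return best
```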

2.7. PSLR and ISLR Estimation

Peak Sidelobe Ratio (PSLR) describes how prominent the strongest sidelobe is and is determined by the formula [16]:

$$PSLR = 20 \log_{10} \frac{\max_{p \in p_{SL}} E(p)}{\max_{p \in p_{ML}} E(p)}, \qquad (36)$$

whereas Integrated Sidelobe Ratio (ISLR) describes the overall level of the sidelobes and is determined by the formula [16]:

$$ISLR = 20 \log_{10} \frac{\sum_{p \in p_{SL}} E(p)}{\sum_{p \in p_{ML}} E(p)}, \qquad (37)$$

where $E(p)$ is the PSF at the point $p$, $p \in p_{SL}$ denotes points in the sidelobes, and $p \in p_{ML}$ denotes points in the mainlobe.
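Given the PSF and the mainlobe mask from the flood fill, both metrics reduce to a few lines (a sketch following the formulas above):

```python
import numpy as np

def pslr_islr(psf_img, ml_mask):
    """PSLR and ISLR in dB; psf_img is the PSF magnitude image and
    ml_mask the boolean mainlobe mask."""
    sl = psf_img[~ml_mask]                 # sidelobe samples
    ml = psf_img[ml_mask]                  # mainlobe samples
    pslr = 20 * np.log10(sl.max() / ml.max())
    islr = 20 * np.log10(sl.sum() / ml.sum())
    return pslr, islr
```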

2.8. Trajectory Cost

Once the quality parameters are known, the value of the cost function can be determined. The value of the cost function for a given trajectory is the sum of the products of the transfer-function values for each parameter and their weights:

$$c = w_{\delta_H} c_{\delta_H} + w_{\delta_V} c_{\delta_V} + w_{PSLR}\, c_{PSLR} + w_{ISLR}\, c_{ISLR} + w_t\, c_t, \qquad (38)$$
where:
  • c δ H is the cost related to the horizontal resolution;
  • c δ V is the cost related to the vertical resolution;
  • c P S L R is the cost related to the Peak Sidelobe Ratio (PSLR);
  • c I S L R is the cost related to the Integrated Sidelobe Ratio (ISLR);
  • c t is the cost related to the trajectory length (flight time, energy consumption).
and $w_{\delta_H}$, $w_{\delta_V}$, $w_{PSLR}$, $w_{ISLR}$, $w_t$ are the corresponding weighting factors. This set of optimization parameters (resolution, Peak Sidelobe Ratio (PSLR), Integrated Sidelobe Ratio (ISLR), flight time) was chosen by the authors because these parameters are meaningful and directly linked with the trajectory shape. Other important image-quality parameters, such as Image Contrast (IC) and Image Entropy (IE), are not as strongly affected by the shape of the trajectory.
The weighting factors and fractional cost functions must be defined in a way that ensures the proper optimization goal: the optimizer's main goal is to ensure the proper imaging quality, and the other costs must not prevail.

2.9. Transfer Functions

Transfer functions are used to assign each trajectory parameter a cost value that is comparable in magnitude so that the cost of the trajectory can be determined. Two types of transfer functions are used in the model presented in this paper: a range-type transfer function and a step-type transfer function.

2.9.1. Range-Type Transfer Function

The range-type transfer function is defined as

$$c = \begin{cases} 0 & \text{if } v < v_L \\ \dfrac{v - v_L}{v_U - v_L} & \text{if } v_L \leq v \leq v_U \\ 1000\, \dfrac{v - v_U}{v_U - v_L} + 1 & \text{if } v_U < v, \end{cases} \qquad (39)$$

where $c$ is the cost, $v$ is the value (quality parameter), $v_L$ is the lower limit of the acceptable range of values, and $v_U$ is the upper limit. The purpose of this function is to ensure that the cost varies linearly from 0 to 1 in the range from $v_L$ to $v_U$. Below $v_L$, the cost is 0, so that the optimizer does not attempt to reduce the parameter below this value, while above $v_U$, the cost increases linearly with a factor of 1000 (chosen arbitrarily to be large, but not large enough to hinder the optimizer), so that the optimizer devotes considerable effort to reducing the parameter to an acceptable level. This function should be applied to parameters for which an acceptable range of values can be specified, namely the resolution, PSLR, and ISLR. An example graph of the range-type transfer function is shown in Figure 11.
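A direct transcription of this transfer function, with the penalty slope exposed as a parameter, might look as follows:

```python
def range_cost(v, v_L, v_U, penalty=1000.0):
    """Range-type transfer function: 0 below v_L, linear from 0 to 1 on
    [v_L, v_U], and a steep linear penalty above v_U."""
    if v < v_L:
        return 0.0
    if v <= v_U:
        return (v - v_L) / (v_U - v_L)
    return penalty * (v - v_U) / (v_U - v_L) + 1.0
```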

2.9.2. Step-Type Transfer Function

The step-type transfer function is used to determine the cost of the flight time and is defined as
$$c = \begin{cases} \left\lfloor \dfrac{v}{v_U} \right\rfloor (1 + c_S) + \dfrac{v \bmod v_U}{v_U} & \text{if } \dfrac{v \bmod v_U}{v_U} \leq 1 - c_S \\[1ex] \left\lfloor \dfrac{v}{v_U} \right\rfloor (1 + c_S) + 1000 \left( \dfrac{v \bmod v_U}{v_U} - 1 + c_S \right) + 1 - c_S & \text{otherwise}, \end{cases} \qquad (40)$$
where $c$ is the cost, $v$ is the value (flight time), $v_U$ is the platform operating time provided by a full battery (or other energy source), and $c_S$ is a parameter that specifies the cost of returning to base and charging the battery relative to the cost of consuming the entire battery. It was introduced to allow the analysis to take into account the situation when it is not possible to perform the entire measurement in one go, and it should be set to zero if this is not to be taken into account. The step-type transfer function is presented in Figure 12.
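The sketch below implements our reconstruction of this function. Since the printed formula is partially garbled in the source, the exact form of the penalized branch is an assumption and should be checked against Figure 12:

```python
def step_cost(v, v_U, c_S=0.0, penalty=1000.0):
    """Step-type transfer function (our reconstruction): each completed
    battery adds 1 + c_S to the cost; within a battery the cost grows
    linearly, with a steep penalty once the remaining time is shorter than
    the return-to-base margin c_S * v_U."""
    k, rem = divmod(v, v_U)
    frac = rem / v_U                      # fraction of the current battery used
    base = k * (1.0 + c_S)
    if frac <= 1.0 - c_S:
        return base + frac
    return base + (1.0 - c_S) + penalty * (frac - (1.0 - c_S))
```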

2.10. Optimization

Trajectory optimization involves minimizing a multidimensional, nonlinear, non-convex cost function whose argument is the set of waypoint locations:

$$P_{W_{Opt}} = \underset{P}{\arg\min}\ c(P), \qquad (41)$$
where P W O p t is the set of optimized waypoints and P is the superset of the set of waypoints covered by the optimization procedure. There are numerous algorithms for optimizing convex problems [17]. Some of them require the function to be differentiable, and others do not. A much more serious issue is non-convex optimization. There are various approaches, the most popular of which is to divide the function area into subareas, using different starting points, and applying convex optimization. Subarea partitioning can be implemented in a way that ensures that a global extremum is reached, but it requires a deep analysis of the function. In this paper, the approach of using different starting points is used.
The Nelder–Mead algorithm [18], available as part of the MATLAB package [19], was used to optimize (minimize) the cost function. This algorithm can successfully minimize non-differentiable functions, as it does not require the determination of a gradient. It uses a simplex, a shape with $2N + 1$ vertices in the $2N$-dimensional input data space (the two coordinates of $N$ waypoints form a $2N$-dimensional space), at whose vertices it evaluates the minimized function. Subsequent steps of the algorithm move the simplex or change its shape to reach a local minimum. Simplified steps of the algorithm minimizing the function $c(P)$ are shown below:
  1. Initial simplex: create a simplex consisting of $2N + 1$ points around a starting point, including the starting point.
  2. Sort the points so that $c(P_1) \leq c(P_2) \leq \ldots \leq c(P_{2N+1})$.
  3. Determine the centroid $P_0$ of the simplex, excluding the worst point $P_{2N+1}$: $P_0 = A(P_1, \ldots, P_{2N})$, where $A(\cdot)$ denotes the mean.
  4. Reflection: determine the reflected point $P_r = P_0 + k_\alpha (P_0 - P_{2N+1})$, where $k_\alpha$ is the reflection coefficient. If $P_r$ is the best point ($c(P_r) < c(P_1)$), go to step 5. If it is the worst point ($c(P_r) > c(P_{2N+1})$), go to step 6. If it is the second worst ($c(P_{2N}) < c(P_r) < c(P_{2N+1})$), go to step 7. Otherwise, add it to the simplex in place of $P_{2N+1}$ and go back to step 2.
  5. Expansion: determine the expanded point $P_e = P_0 + k_\gamma (P_r - P_0)$, where $k_\gamma$ is the expansion coefficient. If it is better than the reflected point, add it in place of $P_{2N+1}$; otherwise, add the reflected point. Return to step 2.
  6. Contract inside: determine the contracted point $P_c = P_0 + k_\rho (P_{2N+1} - P_0)$, where $k_\rho$ is the contraction coefficient. If it is not the worst point, put it in place of the worst point and return to step 2; otherwise, go to step 8.
  7. Contract outside: determine the contracted point $P_c = P_0 + k_\rho (P_r - P_0)$, where $k_\rho$ is the contraction coefficient. If it is better than the reflected point, replace the worst point with it and return to step 2; otherwise, go to step 8.
  8. Shrink: reduce the size of the simplex: $\forall\, i > 1: P_i = P_1 + k_\sigma (P_i - P_1)$, where $k_\sigma$ is the shrink coefficient. Return to step 2.
The algorithm stops on one of two conditions: the iteration limit has been reached, or both the simplex size and the difference between the function values are below certain thresholds: $\max(|P_i - P_j|) < T_P$ and $|c(P_{2N+1}) - c(P_1)| < T_c$, where $T_P$ is the argument step threshold and $T_c$ is the function step threshold. The values used in this experiment are as follows: the iteration limit is 3000, $T_P = T_c = 0.0001$, $k_\alpha = 1$, $k_\gamma = 2$, $k_\rho = 0.5$, $k_\sigma = 0.5$.
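In Python, an equivalent multi-start procedure can be built on SciPy's Nelder–Mead implementation (the counterpart of MATLAB's fminsearch is scipy.optimize.minimize with method='Nelder-Mead'); cost_of stands for the full waypoints-to-cost chain described in Sections 2.2 to 2.8 and is an assumption of this sketch:

```python
import numpy as np
from scipy.optimize import minimize

def optimize_trajectory(cost_of, yz0, n_starts=10, seed=0):
    """Multi-start Nelder-Mead over the y/z waypoint coordinates (sketch).
    cost_of : callable mapping a flat (2N,) coordinate vector to the cost c
    yz0     : (N, 2) initial y/z coordinates of the waypoints"""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        x0 = (yz0 + rng.normal(scale=0.5, size=yz0.shape)).ravel()  # perturbed start
        res = minimize(cost_of, x0, method="Nelder-Mead",
                       options={"maxiter": 3000, "xatol": 1e-4, "fatol": 1e-4})
        if best is None or res.fun < best.fun:
            best = res
    return best.x.reshape(yz0.shape), best.fun
```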

3. Results

3.1. Performance Analysis

The algorithm was tested for a test case with the radar and geometry parameters summarized in Table 1. A C-band radar with a low bandwidth was chosen as an example of a low-cost, lightweight sensor. However, while higher-band, low-cost, lightweight radars certainly exist, they require very high precision (sub-cm) GNSS positioning, which is neither low cost nor lightweight.
The optimization requirements were determined as follows. First, an MBSAR reference aperture was prepared so as to ensure that there are no grating lobes; that is, the vertical sampling distance (the distance between consecutive passes) is

$$d_V = \frac{\lambda r_0}{2 D_V}, \qquad (42)$$

where $r_0$ is the distance from the synthetic aperture to the object and $D_V$ is the vertical size of the object. For this aperture, the quality parameters, denoted further by the subscript "ref", were determined. Then, the initial aperture was generated, and its quality parameters, denoted further by the subscript "ini", were also determined. Based on these results, the optimization parameters were defined; they are collected in Table 2.
The next step is to perform the optimization, where the cost is determined according to (38), and the variables are the y and z coordinates of the waypoints. The optimization was performed in MATLAB using fminsearch [19], that is, using the derivative-free Nelder–Mead simplex algorithm [18]. The optimization was repeated ten times for different starting points. The results are summarized in Table 3.
A graph of the dependence of the cost on the number of function calls is shown in Figure 13.
The following observations can be made:
  • The first step of the algorithm requires 10 function calls because the function is 10-dimensional (five waypoints with two coordinates being optimized).
  • The relationship between the initial cost and the final cost is not obvious, but it does exist: in most cases, the lower the initial cost, the lower the final cost.
  • For call counts greater than 100, there is no significant further cost reduction in most cases.
A view of the mainlobe region for the worst initial situation, the worst final situation, and the best final situation is shown in Figure 14, Figure 15 and Figure 16, respectively. For comparison, the initial situation for the worst final situation is presented in Figure 17.
For the selected cases, a comparison between the initial and final trajectory was presented in Figure 18.
It can be observed that the best final case (#1) has the biggest change between the initial and final positions. Looking at Figure 13, it can be confirmed that this corresponds to a significant reduction in cost. In the worst final case (#8), however, the difference is almost unnoticeable, yet the cost reduction is also significant (albeit much lower than in the case discussed above). This leads to the conclusion that relatively small changes in waypoint positions can lead to observable changes in cost. Investigating this case further in Table 3 reveals that, in case #8, the optimization led to an improved vertical resolution at the price of a worsened Integrated Sidelobe Ratio (ISLR). This was due to a slight change in the PSF shape, which caused some portion of the energy originally classified as the mainlobe to be classified as a sidelobe at some point during the optimization (note the abrupt change in cost around step 400 of the optimization in Figure 13). This can be confirmed by comparing Figure 15b with Figure 17b, which are almost identical.
In summary, the following conclusions are drawn from this observation:
  • In some cases, a small change in the position of the points causes a noticeable change in the cost, but the actual change in the PSF is small and does not have a significant impact on the actual imaging parameters. Using at least a few initial values helps reduce the impact of such cases on the final result.
  • The appropriate formulation of the weights of the components of the cost function is key to achieving the expected results.
  • Minor inaccuracies in the navigation of the radar platform or external factors that alter the trajectory, such as wind, do not significantly affect the final quality of the image. It should be emphasized that this conclusion applies to the inaccuracy of platform guidance, not to the inaccuracy of platform position determination. The inaccuracy of the platform position determination results in a blurring of the imaging proportional to the position determination error and inversely proportional to the carrier wavelength [6].

3.2. Comparison with Reference

In this experiment, the best of the previously determined trajectories was compared with the MBSAR trajectories for two, three, four, and five baselines, demonstrated in Figure 19, Figure 20, Figure 21 and Figure 22, respectively. The optimized trajectory is shown in Figure 23. Compared to the previous section, the PSF is shown in a wider range so that the sidelobes, whose shapes differ significantly between the non-rectilinear and MBSAR trajectories, can be clearly seen.
Table 4 compares the PSF quality parameters for the previously mentioned scenarios. It should be noted that the time was determined only for the visible portions of the trajectory, which means that if the visible portion of the MBSAR trajectory was not treated as a slice of the CSAR trajectory, the flight time would be longer. The cost was determined according to the same criteria for each trajectory. The difference in cost for trajectory #1 compared to Table 3 is due to the wider range of the PSF determination which translates into a larger Integrated Sidelobe Ratio (ISLR) value.
By analyzing the above results, it can be seen that
  • The 2-pass and 3-pass trajectories have close and strong sidelobes (grating lobes), making it impossible to obtain three-dimensional (3D) imaging without additional processing, such as phase unwrapping.
  • In terms of the flight time, the optimized trajectory ranks between the 3-pass and 4-pass trajectories.
  • The optimized trajectory is characterized by the lowest Peak Sidelobe Ratio (PSLR), comparable to that of the 5-pass trajectory.
  • The optimized trajectory is characterized by the highest Integrated Sidelobe Ratio (ISLR), comparable to that of the 2-pass trajectory.
Based on the results presented, it can be concluded that the presented method can be applied in scenarios where the benefits of a shorter flight time outweigh the disadvantages of a higher Integrated Sidelobe Ratio (ISLR). It should be noted that the PSF and quality parameter values also depend on the radar parameters and the geometry of the specific scenario.

4. Discussion

This paper presents an algorithm to determine the optimized trajectories for drone-based three-dimensional (3D) Synthetic Aperture Radar (SAR). The simulation results are presented, which confirm that trajectories with unevenly spaced waypoints allow shorter flight times than the traditionally used CSAR/MBSAR [2] and a lower Peak Sidelobe Ratio (PSLR) than the sinusoidal/hat maneuver trajectory [12].
On the basis of the results presented, it can be concluded that determining a low-cost trajectory requires many calls to the function that evaluates the PSF for a given trajectory. This opens a field for further research, allowing the tradeoff between the number of function calls for a given starting point and the number of different starting points to be determined.
It should be noted that even the high computational complexity of the presented algorithm is not its significant drawback from a practical point of view, because the trajectory of the radar platform can be determined long before the measurement campaign, at the stage of its planning.
The algorithm presented can be placed in a broader context: on the one hand, it can be used in trajectory planning for a radar platform [20] to add three-dimensional (3D) imaging capability; on the other hand, existing knowledge of 3D SAR imaging using sparse apertures can be used to improve the quality of the acquired images [2,21,22,23]. Taking [20] into account, it may also be desirable to further develop the proposed algorithm for use by drone swarms [24].

Author Contributions

Conceptualization, J.D. and P.S.; Investigation, J.D.; Resources, P.S.; Software, J.D.; Supervision, P.S.; Validation, J.D.; Writing—original draft, J.D.; Writing—review & editing, P.S. All authors have read and agreed to the published version of the manuscript.

Funding

The author received funding under a doctoral fellowship from the National Science Centre, Poland, grant number 2019/32/T/ST7/00120.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Wiley, C.A. Synthetic aperture radars: A paradigm for technology evolution. IEEE Trans. Aerosp. Electron. Syst. 1985, AES-21, 440–443.
  2. Zhang, R.; Cao, S. 3D imaging millimetre wave circular synthetic aperture radar. Sensors 2017, 17, 1419.
  3. Hajeb, M.; Karimzadeh, S.; Matsuoka, M. SAR and LIDAR datasets for building damage evaluation based on support vector machine and random forest algorithms: A case study of Kumamoto earthquake, Japan. Appl. Sci. 2020, 10, 8932.
  4. Munson, D.C.; O'Brien, J.D.; Jenkins, W.K. A tomographic formulation of spotlight-mode synthetic aperture radar. Proc. IEEE 1983, 71, 917–925.
  5. Özdemir, C. Inverse Synthetic Aperture Radar Imaging with MATLAB Algorithms; Wiley Series in Microwave and Optical Engineering; John Wiley & Sons: Hoboken, NJ, USA, 2012.
  6. Skolnik, M.I. Radar Handbook, 3rd ed.; McGraw-Hill Education: New York, NY, USA, 2008.
  7. Lukin, K.A.; Mogyla, A.A.; Palamarchuk, V.P.; Vyplavin, P.L.; Zemlyaniy, O.V.; Shiyan, Y.A.; Zaets, M. Ka-band bistatic ground-based noise waveform SAR for short-range applications. IET Radar Sonar Navig. 2008, 2, 233–243.
  8. Gromek, D.; Radecki, K.; Drozdowicz, J.; Samczyński, P.; Szabatin, J. Passive SAR imaging using DVB-T illumination for airborne applications. IET Radar Sonar Navig. 2019, 13, 213–221.
  9. Pinheiro, M.; Prats, P.; Scheiber, R.; Nannini, M.; Reigber, A. Tomographic 3D reconstruction from airborne circular SAR. In Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS), Cape Town, South Africa, 12–17 July 2009; Volume 3.
  10. Góes, J.A.; Castro, V.; Sant'Anna Bins, L.; Hernandez-Figueroa, H.E. Spiral SAR imaging with fast factorized back-projection: A phase error analysis. Sensors 2021, 21, 5099.
  11. Rigling, B.D.; Moses, R.L. Flight path strategies for 3-D scene reconstruction from bistatic SAR. IEE Proc. Radar Sonar Navig. 2004, 151, 149–157.
  12. André, D. An analysis of 3D SAR from single pass nonlinear radar platform trajectories. Proc. SPIE, Algorithms for Synthetic Aperture Radar Imagery XVII 2010, 7699, 769908.
  13. Mueller, M.W.; Hehn, M.; D'Andrea, R. A computationally efficient motion primitive for quadrocopter trajectory generation. IEEE Trans. Robot. 2015, 31, 1294–1310.
  14. Oliver, C.; Quegan, S. Understanding Synthetic Aperture Radar Images; SciTech Publishing: Raleigh, NC, USA, 2004.
  15. Rigling, B.D.; Moses, R.L. Polar format algorithm for bistatic SAR. IEEE Trans. Aerosp. Electron. Syst. 2004, 40, 1147–1159.
  16. Zhang, H.; Li, Y.; Su, Y. SAR image quality assessment using coherent correlation function. In Proceedings of the 2012 5th International Congress on Image and Signal Processing (CISP 2012), Chongqing, China, 16–18 October 2012; pp. 1129–1133.
  17. Lagarias, J.C.; Reeds, J.A.; Wright, M.H.; Wright, P.E. Convergence properties of the Nelder–Mead simplex method in low dimensions. SIAM J. Optim. 1998, 9, 112–147.
  18. Nelder, J.A.; Mead, R. A simplex method for function minimization. Comput. J. 1965, 7, 308–313.
  19. MathWorks. Find Minimum of Unconstrained Multivariable Function Using Derivative-Free Method: MATLAB fminsearch. 2013. Available online: https://www.mathworks.com/help/matlab/ref/fminsearch.html (accessed on 30 July 2022).
  20. Stecz, W.; Gromada, K. UAV mission planning with SAR application. Sensors 2020, 20, 1080.
  21. Austin, C.D.; Ertin, E.; Moses, R.L. Sparse multipass 3D SAR imaging: Applications to the GOTCHA data set. Proc. SPIE, Algorithms for Synthetic Aperture Radar Imagery XVI 2009, 7337, 733703.
  22. Drozdowicz, J.; Wielgo, M.; Baczyk, M.K. Generalized SL0 algorithm for 3D circular SAR imaging. In Proceedings of MIKON 2018, the 22nd International Microwave and Radar Conference, Poznan, Poland, 14–17 May 2018; pp. 265–266.
  23. Hao, J.; Li, J.; Pi, Y. Three-dimensional imaging of terahertz circular SAR with sparse linear array. Sensors 2018, 18, 2477.
  24. Siemiatkowska, B.; Stecz, W. A framework for planning and execution of drone swarm missions in a hostile environment. Sensors 2021, 21, 4150.
Figure 1. Traditional Interferometric Synthetic Aperture Radar (InSAR) geometries: (a) Dual antenna, single pass. (b) Single antenna, dual pass. $v$ denotes the direction of motion of the radar platform, $L_V$ is the baseline, and $x$, $y$, $z$ are Cartesian coordinates.
Figure 2. InSAR example: (a) Trajectory. (b) PSF. $L_V$ is the baseline and $\delta_V$ is the vertical resolution.
Figure 3. Circular SAR geometry. $v$ denotes the direction of motion of the radar platform, $L_V$ is the vertical synthetic aperture size (the distance between the highest and lowest pass), and $d_V$ is the vertical sampling distance (the distance between consecutive passes).
Figure 4. CSAR/MBSAR example: (a) Trajectory. (b) PSF. $L_V$ is the vertical synthetic aperture size (the distance between the highest and lowest pass) and $d_V$ is the vertical sampling distance (the distance between consecutive passes).
Figure 5. Sinusoidal trajectory/HAT maneuver example: (a) Trajectory. (b) PSF.
Figure 6. Evenly sampled aperture example: (a) Sampling points ("Trajectory"). (b) PSF.
Figure 7. Randomly sampled aperture example: (a) Sampling points ("Trajectory"). (b) PSF.
Figure 8. Real trajectory example: (a) Trajectory with waypoints denoted by $P_W$ and the acceptance circles marked with a dotted line. (b) PSF.
Figure 9. General diagram of three-dimensional (3D) Synthetic Aperture Radar (SAR).
Figure 10. An example of the motion model used, with acceleration and velocity shown in the figure. Waypoints are denoted by $P_W$ and the acceptance circles are marked with a dotted line.
Figure 11. Range-type transfer function used to determine the cost related to the quality parameters.
Figure 12. Step-type transfer function used to determine the cost related to flight time.
Figure 13. Cost versus function calls. Different starting points are denoted with different colors.
Figure 14. Worst initial situation (run #5): (a) Trajectory with waypoints denoted by $P_W$ and the acceptance circles marked with a dotted line. (b) PSF.
Figure 15. Worst final situation (run #8): (a) Trajectory with waypoints denoted by $P_W$ and the acceptance circles marked with a dotted line. (b) PSF.
Figure 16. Best final situation (run #1): (a) Trajectory with waypoints denoted by $P_W$ and the acceptance circles marked with a dotted line. (b) PSF.
Figure 17. Initial state for the worst final situation (run #8): (a) Trajectory with waypoints denoted by $P_W$ and the acceptance circles marked with a dotted line. (b) PSF.
Figure 18. Comparison between the initial and final trajectory for selected cases. The dashed lines represent the initial trajectory, the solid lines the final trajectory, and the dashed and solid circles represent the initial and final acceptance circles, respectively.
Figure 19. Reference scenario with two passes: (a) Trajectory. (b) PSF.
Figure 20. Reference scenario with three passes: (a) Trajectory. (b) PSF.
Figure 21. Reference scenario with four passes: (a) Trajectory. (b) PSF.
Figure 22. Reference scenario with five passes: (a) Trajectory. (b) PSF.
Figure 23. Best final situation (run #1), wide PSF range: (a) Trajectory with waypoints denoted by $P_W$ and the acceptance circles marked with a dotted line. (b) PSF.
Table 1. Test scenario and radar parameters.

Parameter | Value
ROI horizontal dimension $D_H$ | 10 m
ROI vertical dimension $D_V$ | 10 m
distance to ROI $r_0$ | 100 m
max. upward acceleration $a_{max}$ | 3 m/s²
max. downward acceleration $a_{max}$ | 8 m/s²
max. horizontal acceleration $a_{max}$ | 5 m/s²
max. upward velocity $v_{max}$ | 4 m/s
max. downward velocity $v_{max}$ | 8 m/s
max. horizontal velocity $v_{max}$ | 6 m/s
carrier frequency $f_c$ | 3 GHz
bandwidth $B$ | 300 MHz
horizontal beamwidth $\theta_H$ | 10°
vertical beamwidth $\theta_V$ | 20°
PRF | 1 kHz
Table 2. Optimization parameters.

Parameter | Lower Value | Upper Value | Weight
$\delta_H^{L/U}$ | 0.9 $\delta_H$ | 1.1 $\delta_H$ | 0.3
$\delta_V^{L/U}$ | 0.9 $\delta_V$ | 1.1 $\delta_V$ | 0.3
$PSLR$ | 0.9 $PSLR_{ref}$ | 1.1 $PSLR_{ref}$ | 0.2
$ISLR$ | $ISLR_{ref}$ | $ISLR_{ini}$ | 0.1
$t$ | - | $t_{ref}$ | 0.1
Table 3. Trajectory generation result parameters for ten different start points.

#  | $\delta_H$ ini | $\delta_H$ fin | $\delta_V$ ini | $\delta_V$ fin | PSLR ini | PSLR fin | ISLR ini | ISLR fin | cost $c$ ini | cost $c$ fin
1  | 0.94 | 0.82 | 1.97 | 1.93 | 0.71 | 0.46 | 20.37 | 22.90 | 561.28 | 16.63
2  | 1.32 | 1.12 | 2.02 | 1.70 | 0.60 | 0.60 | 14.21 | 15.80 | 653.68 | 345.26
3  | 1.79 | 1.49 | 1.66 | 1.60 | 0.52 | 0.48 | 9.64 | 11.89 | 1145.60 | 650.58
4  | 1.24 | 1.21 | 1.85 | 1.73 | 0.66 | 0.50 | 14.12 | 14.15 | 653.42 | 234.86
5  | 1.19 | 1.14 | 3.68 | 2.09 | 0.59 | 0.47 | 12.08 | 13.68 | 1529.34 | 99.23
6  | 0.99 | 1.08 | 1.82 | 1.49 | 0.66 | 0.47 | 16.09 | 17.28 | 456.42 | 18.50
7  | 1.24 | 1.20 | 1.80 | 1.87 | 0.67 | 0.55 | 12.03 | 11.98 | 679.96 | 345.06
8  | 1.59 | 1.53 | 2.58 | 1.86 | 0.47 | 0.45 | 9.53 | 10.36 | 1037.71 | 656.06
9  | 1.25 | 1.12 | 1.69 | 2.11 | 0.57 | 0.43 | 13.64 | 12.68 | 467.29 | 27.27
10 | 1.39 | 1.27 | 1.68 | 1.87 | 0.51 | 0.45 | 11.85 | 11.59 | 543.47 | 253.53
Table 4. Quality parameter comparison for MBSAR trajectories and an optimized trajectory.

Scenario | $\delta_H$ | $\delta_V$ | PSLR | ISLR | Time $t$ | Cost $c$
MBSAR 2-pass | 0.83 | 1.02 | 0.99 | 23.93 | 3.05 | 1258.84
MBSAR 3-pass | 0.83 | 1.35 | 0.95 | 13.62 | 4.58 | 1131.92
MBSAR 4-pass | 0.89 | 1.52 | 0.79 | 9.66 | 6.10 | 766.22
MBSAR 5-pass | 0.87 | 1.62 | 0.48 | 7.55 | 7.63 | 37.84
optimized #1 | 0.57 | 1.81 | 0.46 | 25.95 | 5.22 | 35.94
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
