Technical Note

Pseudopolar Format Matrix Description of Near-Range Radar Imaging and Fractional Fourier Transform

Lilong Zou, Ying Li and Amir M. Alani
1 School of Computing and Engineering, University of West London, London W5 5RF, UK
2 Science and Engineering, Kaplan International College London, London SE1 9DE, UK
3 Faculty of Engineering, Computing and the Environment, Kingston University, London KT1 1LQ, UK
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(13), 2482; https://doi.org/10.3390/rs16132482
Submission received: 28 April 2024 / Revised: 30 June 2024 / Accepted: 4 July 2024 / Published: 6 July 2024
(This article belongs to the Special Issue State-of-the-Art and Future Developments: Short-Range Radar)

Abstract

Near-range radar imaging (NRRI) has evolved into a vital technology with diverse applications spanning fields such as remote sensing, surveillance, medical imaging and non-destructive testing. The Pseudopolar Format Matrix (PFM) has emerged as a promising technique for representing radar data in a compact and efficient manner. In this paper, we present a comprehensive PFM description of near-range radar imaging. Furthermore, this paper also explores the integration of the Fractional Fourier Transform (FrFT) with PFM for enhanced radar signal analysis. The FrFT—a powerful mathematical tool for signal processing—offers unique capabilities in analysing signals with time-frequency localization properties. By combining FrFT with PFM, we have achieved significant advancements in radar imaging, particularly in dealing with complex clutter environments and improving target detection accuracy. Meanwhile, this paper highlights the imaging matrix form of FrFT under the PFM, emphasizing the potential for addressing challenges encountered in near-range radar imaging. Finally, numerical simulation and real-world scenario measurement imaging results verify optimized accuracy and computational efficiency with the fusion of PFM and FrFT techniques, paving the way for further innovations in near-range radar imaging applications.

1. Introduction

Near-range radar imaging (NRRI) plays a crucial role in modern technology by enabling the generation of high-resolution images of objects or scenes located in close proximity to a radar system [1]. Unlike traditional radar systems that are designed to operate over long distances, NRRI focuses specifically on imaging targets within a range of a few metres to a few kilometres. This unique capability of NRRI has significantly broadened its applicability across various fields. One of the primary advantages of NRRI is its ability to produce detailed images with fine resolution, which is essential for applications that require precise and accurate imaging. For instance, in remote sensing, NRRI is used to monitor and analyse the Earth’s surface, vegetation and urban environments. The high resolution of NRRI allows the detailed observation of changes in these environments, making it invaluable for environmental monitoring and disaster management. In the field of surveillance, NRRI provides critical capabilities for security and defence. It enables the detection and identification of objects or individuals within close range, which is crucial for perimeter security, border control and urban surveillance. The ability to produce clear images in various conditions, including through obstacles and in poor visibility, enhances the effectiveness of NRRI in maintaining security. Medical imaging is another area in which NRRI has made significant contributions. Techniques such as radar-based breast imaging and the monitoring of vital signs benefit from the high-resolution capabilities of NRRI. These applications require the ability to image fine details within the human body, which NRRI can provide without the need for invasive procedures. Additionally, NRRI is widely used in non-destructive testing and evaluation (NDT/NDE). It allows the inspection of materials and structures for defects or damage without causing harm to the object under inspection. This is particularly useful in sectors such as the aerospace, automotive and civil engineering industries, where the integrity and safety of materials and structures are of the utmost importance.
The roots of near-range radar imaging can be traced back to the early 20th century, with pioneering experiments conducted by researchers such as Heinrich Hertz and Guglielmo Marconi [2]. Hertz’s experiments in the late 19th century demonstrated the existence and properties of electromagnetic waves, laying the groundwork for radar technology. Marconi’s study of wireless telegraphy further advanced the field, setting the stage for future developments in radar systems. However, significant advancements in NRRI techniques occurred in the latter half of the 20th century with the development of synthetic aperture radar (SAR) and interferometric SAR (InSAR) [3,4]. SAR revolutionized radar imaging by utilizing the motion of the radar platform to synthesize a large antenna aperture, thereby achieving high-resolution imaging capabilities. This innovation allowed the detailed and accurate imaging of the Earth’s surface and other targets from relatively short distances. SAR’s ability to produce high-resolution images regardless of weather conditions or daylight intensity made it an invaluable tool in remote sensing, surveillance and environmental monitoring. InSAR extended the capabilities of SAR by measuring the phase difference between radar signals acquired from multiple positions. This technique enabled precise elevation mapping and deformation monitoring, providing critical data for geophysical studies, earthquake monitoring and infrastructure assessment. InSAR’s ability to detect minute changes in the Earth’s surface has made it a powerful tool for understanding and mitigating natural disasters. The fundamental principles underlying NRRI include radar signal propagation, target reflection, signal processing and image reconstruction algorithms. These principles have been extensively studied and refined over the years, leading to the sophisticated NRRI systems in use today. Advances in digital signal processing, antenna design and computational algorithms have continually enhanced the accuracy, resolution and applicability of NRRI across various fields. As a result, NRRI has become an essential technology in modern science and industry, providing critical insights and data across a wide range of applications [5,6,7].
The evolution of near-range radar imaging (NRRI) techniques has been marked by significant advancements in signal processing algorithms, hardware design and system integration. Initially, NRRI systems relied on simple pulse radar configurations, which offered limited imaging capabilities and resolution. These early systems were constrained by their rudimentary technology and the analogue nature of their signal processing. The advent of digital signal processing (DSP) techniques and high-speed computing platforms transformed NRRI systems, greatly enhancing their performance. Modern NRRI systems provide significantly improved resolution, accuracy and imaging speeds due to these advancements. DSP allows sophisticated filtering, noise reduction and signal enhancement, which are critical for producing clear and detailed radar images. One of the most impactful developments has been the adaptation of synthetic aperture radar (SAR) techniques for near-range applications. SAR utilizes the motion of the radar platform to synthesize a large antenna aperture, enabling the generation of high-resolution images from multiple radar measurements. This adaptation has allowed NRRI systems to achieve detailed imaging over short distances, making them highly effective for various applications, from environmental monitoring to security surveillance [8,9,10]. Moreover, the integration of multiple-input multiple-output (MIMO) radar systems has further advanced NRRI technology. MIMO radar systems use multiple transmitting and receiving antennas to capture a greater amount of spatial information. This integration has led to the development of compact, cost-effective NRRI solutions with enhanced imaging capabilities. MIMO technology improves resolution, target detection and imaging accuracy, making NRRI systems more versatile and efficient [11,12,13].
Near-range radar imaging (NRRI) finds applications across a wide range of domains, significantly enhancing capabilities in remote sensing, surveillance, medical diagnostics and industrial inspection. In remote sensing, NRRI is used for high-resolution terrain mapping, land cover classification, vegetation monitoring and environmental monitoring. Its ability to produce detailed images regardless of weather conditions or lighting makes it invaluable for tracking changes in the environment, assessing agricultural fields and monitoring natural disasters. In the realm of surveillance and security, NRRI systems are deployed for perimeter monitoring, target detection and tracking in both indoor and outdoor environments. These systems provide real-time, high-resolution imaging that is essential for identifying and monitoring potential threats, securing sensitive areas and conducting law enforcement operations [14]. Medical imaging represents another promising application area for NRRI. This technology is used for the non-invasive imaging of biological tissues and organs, providing critical insights for diagnosis and treatment planning. NRRI can help visualize internal structures, detect abnormalities and monitor physiological processes without the risks associated with invasive procedures [15,16,17]. Additionally, NRRI is widely used in industrial settings for non-destructive testing (NDT). It allows the inspection of structures, the characterization of materials and quality control without damaging the objects under examination. This capability is crucial for ensuring the integrity and safety of critical infrastructure, such as bridges, pipelines and aircraft components, and also extends to planetary exploration [18,19,20].
Looking ahead, several promising research directions and technological advancements are expected to shape the future of near-range radar imaging. Continued efforts in signal processing, machine learning and artificial intelligence are likely to lead to the development of more robust and adaptive NRRI systems capable of operating in complex environments with minimal human intervention [21,22]. Furthermore, the integration of emerging technologies such as millimetre-wave radar, terahertz imaging and quantum radar could unlock new possibilities for high-resolution imaging and sensing in near-range applications [23,24]. Moreover, interdisciplinary collaborations between researchers from diverse fields, including radar engineering, computer science, physics and medicine, are essential for advancing the state-of-the-art in near-range radar imaging and realizing its full potential across various domains [25,26,27,28,29,30,31,32,33,34,35].
Near-range radar imaging has emerged as a versatile and powerful technology with a wide range of applications in various domains. Advancements in signal processing, hardware design and system integration have significantly enhanced the resolution, accuracy and imaging speeds of NRRI systems [36,37,38,39]. Despite its numerous applications, near-range radar imaging faces several challenges that must be addressed to further improve its performance and reliability. One of the primary challenges is the presence of clutter and noise in radar signals, which can significantly degrade image quality and hinder target detection and recognition. Clutter suppression techniques, such as adaptive beamforming, matched filtering and wavelet denoising, have been developed to mitigate this challenge [40,41]. Another critical aspect is the real-time processing and analysis of radar data, particularly in dynamic environments where targets may move or change rapidly. High-performance computing platforms and parallel processing algorithms are being explored to meet the computational demands of real-time NRRI systems. In the quest for the real-time processing and analysis of radar data, researchers have scrutinized advanced signal processing techniques. Among these, the Fourier Transform (FT) has served as a cornerstone, providing a powerful tool for analysing radar signals in the frequency domain [42,43,44,45].
The FT decomposes a signal into its constituent frequency components, enabling the representation of complex signals in terms of their frequency content. In radar imaging, this facilitates the extraction of valuable information regarding the location, velocity and characteristics of targets within the scene. However, traditional FT is limited in its ability to capture signals with non-stationary and time-varying characteristics, which are prevalent in near-range radar scenarios.
To address these limitations, the Fractional Fourier Transform (FrFT) has emerged as a versatile alternative. Unlike the traditional FT, the FrFT offers a parameterized transformation that allows the analysis of signals with varying degrees of time-frequency localization. By adjusting the fractional order of the transform, researchers can fine-tune the balance between time and frequency resolution, thereby accommodating the diverse signal characteristics encountered in near-range radar imaging [46,47,48].
In recent years, the integration of FrFT with near-range radar imaging has garnered significant attention due to its potential to enhance resolution, mitigate clutter and improve target detection performance. This integration leverages the complementary strengths of both techniques, enabling a more comprehensive analysis of radar signals in the time-frequency domain. In this paper, we derive the new discrete pseudopolar format for imaging and then expand this format to form the discrete FT and FrFT. Meanwhile, the matrix formulation of both the discrete pseudopolar format and the discrete FrFT is given. Images of two-dimensional example calculations that verify these algorithms are also shown.

2. Short-Range Discrete Fourier Imaging

As seen in Figure 1a, one-dimensional aperture radar imaging is commonly characterized by integrating the time domain signal with respect to the various radar positions $x$. As a result, the following estimate of the radar reflectivity map at point $P(x_p, z_p)$ is obtained:
$$P(x_p, z_p) = \int D(t,x)\cdot e^{j\frac{4\pi R}{\lambda}}\,dx, \tag{1}$$
where $D(t,x)$ represents the compressed time domain reflected signal and $t$ is the time domain received echo at each acquisition point $x$. Because electromagnetic waves travel significantly faster than the platform moves, $t$ is also referred to as the fast-time domain. The wavelength of the system's working frequency is indicated by $\lambda$, and $R$ indicates the range to the target in metres, with the factor $4\pi/\lambda$ accounting for the two-way propagation.
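As an illustration of how Equation (1) is evaluated in practice, the following minimal sketch (Python/NumPy) approximates the focusing integral as a coherent sum over discrete aperture positions for a single hypothetical point scatterer; the wavelength, aperture geometry and target location are illustrative assumptions rather than the configuration used later in this paper.

```python
import numpy as np

# Illustrative parameters (assumed for this sketch only)
lam = 3.9e-3                       # wavelength [m], roughly a 77 GHz carrier
xs = np.linspace(-0.05, 0.05, 41)  # aperture (acquisition) positions x [m]
xt, zt = 0.3, 2.0                  # hypothetical point target (x_p, z_p) [m]

# Simulated echo phase history of a unit point scatterer (two-way phase 4*pi*R/lambda)
R_target = np.sqrt((xs - xt) ** 2 + zt ** 2)
D = np.exp(-1j * 4.0 * np.pi * R_target / lam)

def focus_pixel(xp, zp):
    """Discrete form of Equation (1): coherent sum over the aperture positions x."""
    R = np.sqrt((xs - xp) ** 2 + zp ** 2)
    return np.sum(D * np.exp(1j * 4.0 * np.pi * R / lam))

print(abs(focus_pixel(xt, zt)))        # coherent peak at the true location (|P| = 41)
print(abs(focus_pixel(xt + 0.3, zt)))  # much weaker response away from the target
```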
For a radar system that moves slowly along its azimuth within a specific range, the amplitude term across the working frequency bandwidth can be disregarded. An appropriate expression for the distance between the target and the radar antenna can then be defined as follows, assuming the pseudopolar coordinate for the object space:
$$R(x; x_p, z_p) = \sqrt{\left(\rho\sin\theta - x\right)^2 + \left(\rho\cos\theta\right)^2} = \rho\sqrt{1 - \frac{2x\sin\theta}{\rho} + \frac{x^2}{\rho^2}}, \tag{2}$$
As seen in Figure 1b, the pseudopolar coordinate transform is used to express the azimuth resolution as $\delta_a$ and the range resolution as $\delta_r$. Using the first-order Taylor expansion, we can rewrite Equation (2) in the following way:
$$R(x; x_p, z_p) \approx \rho - x\sin\theta + \frac{x^2}{2\rho}, \tag{3}$$
This expansion is valid under the approximation:
$$\frac{2x\sin\theta}{\rho} - \frac{x^2}{\rho^2} < 1, \tag{4}$$
Therefore, the focusing formulation could be rewritten as follows:
$$P(\rho,\theta) = e^{j\frac{2\pi\rho}{\lambda}\left(2 - \sin^2\theta\right)}\cdot\int D(t,x)\cdot e^{j\frac{2\pi}{\lambda\rho}\left(\rho\sin\theta - x\right)^2}\,dx, \tag{5}$$
which can also be expanded and written as follows:
$$P(\rho,\theta) = e^{j\frac{4\pi\rho}{\lambda}}\cdot\int D(t,x)\cdot e^{j\frac{2\pi}{\lambda\rho}x^2}\cdot e^{-j\frac{2\pi}{\lambda\rho}2\rho\sin\theta\,x}\,dx, \tag{6}$$
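The following short numerical check, under assumed example values, compares the exact range of Equation (2) with the first-order expansion of Equation (3) and confirms that the three exponents of Equation (6), as reconstructed above, sum to the approximated two-way phase; all parameter values are illustrative.

```python
import numpy as np

# Assumed example geometry: a short aperture observing a near-range pixel
lam = 3.9e-3                          # wavelength [m]
rho, theta = 2.0, np.deg2rad(20.0)    # pseudopolar pixel (rho, theta)
x = np.linspace(-0.05, 0.05, 41)      # aperture positions [m]

# Exact range, Equation (2), and its first-order expansion, Equation (3)
R_exact = np.sqrt((rho * np.sin(theta) - x) ** 2 + (rho * np.cos(theta)) ** 2)
R_approx = rho - x * np.sin(theta) + x ** 2 / (2.0 * rho)

print(np.max(np.abs(R_exact - R_approx)))                      # worst range error [m]
print(np.max(4.0 * np.pi * np.abs(R_exact - R_approx) / lam))  # worst phase error [rad]

# The three exponents of Equation (6) reproduce the approximated two-way phase exactly
phase_eq6 = (4.0 * np.pi * rho / lam
             + 2.0 * np.pi * x ** 2 / (lam * rho)
             - 4.0 * np.pi * x * np.sin(theta) / lam)
print(np.max(np.abs(phase_eq6 - 4.0 * np.pi * R_approx / lam)))  # ~0 (same algebra)
```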
For simplicity, the constant phase $e^{j\frac{2\pi\rho}{\lambda}\left(2-\sin^2\theta\right)}$ will from this point be denoted as $\Theta$. To improve the readability of the notation, the focused formulation form seen in Equation (6) will be used. Now, consider the case in which the initial signal $D(t,x)$ is duplicated across the aperture to generate a new signal $D_s(t,x)$: the aperture, extending from $-d/2$ to $d/2$, is sampled at equidistant intervals of $d/(2N)$, giving $2N+1$ samples:
$$D_s(t,x) = D(t,x)\,\delta\!\left(x - \frac{nd}{2N}\right), \qquad n = -N, \ldots, 0, \ldots, N, \tag{7}$$
Then, the focusing formulation for $D_s(t,x)$ becomes:
$$P_s(\rho,\theta) = \Theta\cdot\sum_{n=-N}^{N}\int D\!\left(t,\, x - \frac{nd}{2N}\right)\cdot e^{j\frac{2\pi}{\lambda\rho}\left(\rho\sin\theta - x\right)^2}\,dx, \tag{8}$$
Create a new set of variables for substitution:
$$x' = x - \frac{nd}{2N}, \qquad x = x' + \frac{nd}{2N}, \qquad x^2 = x'^2 + \frac{2x'nd}{2N} + \frac{n^2d^2}{4N^2}, \tag{9}$$
Equation (9) allows for Equation (8) to be written as follows:
$$P_s(\rho,\theta) = \Theta\cdot\sum_{n=-N}^{N}\int D(t,x')\cdot e^{j\frac{2\pi}{\lambda\rho}\left(\rho\sin\theta - x' - \frac{nd}{2N}\right)^2}\,dx', \tag{10}$$
From Equations (6) and (10), we could have the following equation:
$$P_s(\rho,\theta) = \Theta\cdot\sum_{n=-N}^{N} P\!\left(\rho - \frac{nd}{2N},\ \theta\right) \tag{11}$$
Equation (11) shows that a replication pattern is produced by the near-range imaging process under the pseudopolar coordinate. In contrast, in the far field, the imaging process under the pseudopolar coordinate produces sampling in the Fourier domain. This can be interpreted as follows: in the far-range limit, the sampled FT pattern is formed by the interference of the spherical wave patterns from an infinite number of replications of the aperture. The shifting characteristic of the delta function allows the following representation of Equation (8):
$$P_s(\rho,\theta) = \Theta\sum_{n=-N}^{N} D\!\left(\rho,\ \frac{nd}{N}\right)\cdot e^{j\frac{2\pi}{\lambda\rho}\left(\rho^2\sin^2\theta - 2\rho\sin\theta\frac{nd}{2N} + \left(\frac{nd}{2N}\right)^2\right)}, \tag{12}$$
Select the value of ρ as follows:
$$\rho = \rho_{\min} = \frac{d^2}{\lambda N}, \tag{13}$$
Then, Equation (12) becomes:
$$P_s(\rho,\theta) = \Theta\sum_{n=-N}^{N} D\!\left(\frac{nd}{N}\right)\cdot e^{j\frac{2\pi}{N}\left(\frac{d\sin\theta}{\lambda} - n\right)^2}, \tag{14}$$
Equation (14) can also be expanded and written as follows:
$$P_s(\rho,\theta) = \Theta\cdot e^{j\frac{2\pi}{N}\frac{d^2\sin^2\theta}{\lambda^2}}\cdot\sum_{n=-N}^{N} D\!\left(\frac{nd}{N}\right)\cdot e^{-j\frac{4\pi n d\sin\theta}{N\lambda}}\cdot e^{j\frac{2\pi}{N}n^2}, \tag{15}$$
Once the discrete focusing formulation has been evaluated for $\rho = \rho_{\min} = d^2/(\lambda N)$, it can be evaluated repeatedly until the required value of $\rho$ is obtained.
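The discrete focusing formulation can also be evaluated numerically. The sketch below implements Equation (15) as reconstructed above for an assumed set of samples $D(nd/N)$ and a scan of output angles; the parameter values are illustrative, and the overall constant phase $\Theta$ is omitted because only magnitudes are examined.

```python
import numpy as np

# Assumed parameters for this sketch
lam = 3.9e-3                     # wavelength [m]
d = 0.10                         # aperture length [m]
N = 20                           # index bound (2N + 1 samples)
rho_min = d ** 2 / (lam * N)     # Equation (13)

n = np.arange(-N, N + 1)
rng = np.random.default_rng(0)
D = rng.standard_normal(n.size) + 1j * rng.standard_normal(n.size)  # assumed D(nd/N)

def focus(theta):
    """Equation (15): a DFT-like sum with a quadratic phase factor, at rho = rho_min."""
    u = d * np.sin(theta) / lam
    const = np.exp(1j * 2.0 * np.pi * u ** 2 / N)                 # leading phase term
    kernel = np.exp(-1j * 4.0 * np.pi * n * u / N) * np.exp(1j * 2.0 * np.pi * n ** 2 / N)
    return const * np.sum(D * kernel)

thetas = np.deg2rad(np.linspace(-30.0, 30.0, 121))
profile = np.abs([focus(t) for t in thetas])   # one azimuth profile evaluated at rho_min
print(rho_min, profile.max())
```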
We now examine the consequences of solving the focusing formulation in Equation (12) for $\rho$ values that differ from the one given in Equation (13). When $\rho$ is less than this value, the transform is essentially under-sampled and aliased. For an input signal of length $d$, a particular wavelength $\lambda$ and a specified or required value of $\rho$, the input signal must be sampled at equidistant intervals of $d/N$. In order to prevent aliasing issues, $N$ is now determined by:
$$N = \frac{d^2}{\lambda\rho}, \tag{16}$$
If the input signal has a fixed sampling rate, the algorithm can only determine the diffraction patterns without aliasing at a distance larger than or equal to $\rho_{\min} = d^2/(\lambda N)$. This aliasing can be better understood by comparing the maximum recorded spatial frequency (obtained through sampling) with the maximum recordable spatial frequency at a distance $\rho$ from an aperture of width $d$. If the synthetic aperture radar system acquires $N$ samples across the aperture width $d$, its spatial sampling frequency is:
$$f_s = \frac{N}{d}, \tag{17}$$
Consequently, the sampled object’s maximum recorded spatial frequency is as follows:
$$f_{s,\max} = \frac{N}{2d}, \tag{18}$$
The greatest recordable spatial frequency at a distance $\rho$ from an aperture of width $d$ is defined as follows:
$$f_{\rho,\max} = \frac{d}{\lambda\rho}, \tag{19}$$
It is well known that, in the original sampled space, no spatial frequency higher than $f_{s,\max}$ can be recorded at a distance $\rho$ without causing aliasing. The resulting relationship is as follows:
$$f_{s,\max} \geq f_{\rho,\max} \;\Longrightarrow\; \frac{N}{2d} \geq \frac{d}{\lambda\rho}, \tag{20}$$
In order to prevent aliasing, this yields a lower constraint on $\rho$:
$$\rho \geq \rho_{\min} = \frac{d^2}{\lambda N}, \tag{21}$$
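As an assumed worked example, evaluating the lower bound of Equation (21) with aperture and wavelength values of the same order as the simulation in Section 5.1 gives:

```python
# Assumed values of the same order as the Section 5.1 simulation
d = 0.10        # aperture width [m]
lam = 3.9e-3    # wavelength [m], roughly 77 GHz
N = 41          # number of samples across the aperture

rho_min = d ** 2 / (lam * N)   # Equation (21)
print(rho_min)                 # ~0.06 m, so ranges of 1.5-3 m comfortably satisfy the bound
```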
The discrete focusing formulation, as specified in Equation (15), may only be applied at integer multiples of the distance $\rho_{\min}$ when applied to a replicated signal.
All phase factors obtained from each of the $2N+1$ replications in Equation (15) at these discrete distances are equal to $\pm 1$ and, as a result, they add together to generate a near-range diffraction pattern identical to that obtainable had there been no replication. Subsequently, we suggest that the aforementioned process can be effectively employed to assess a different, but closely associated, integral transform.

3. Short-Range Discrete Fractional Fourier Imaging

The FT can be constructed to incorporate a fractional order $\alpha$, where $0 \leq \alpha \leq 1$, from both a mathematical and a physical perspective. The input has an order of $\alpha = 0$, while the full FT with a phase factor has an order of $\alpha = 1$. Up to unimportant constant factors, this integral transformation can be written as follows [49,50,51,52]:
$$F_\alpha(u) = \int G(t)\cdot e^{j\pi\cot\phi\left(u^2 + t^2\right)}\cdot e^{-j2\pi\csc\phi\,ut}\,dt, \tag{22}$$
where $\alpha = \sin\phi$ specifies the fractional order, with $0 \leq \phi \leq \frac{\pi}{2}$. Using the subsequent change of variables, we have the following equation:
$$s = u\left(\frac{4\pi}{\lambda\rho}\cot\phi\right)^{\frac{1}{2}}, \qquad t = v\left(\frac{4\pi}{\lambda\rho}\frac{\cos\phi}{\sin\phi}\right)^{\frac{1}{2}}, \qquad U(v) = G(t), \tag{23}$$
The definition in Equation (22) can be written as follows:
$$F_\alpha(u) = \int U(v)\cdot e^{j\frac{2\pi\left(u^2 + v^2\right)\cos^2\phi}{\lambda\rho}}\cdot e^{-j\frac{4\pi uv}{\lambda\rho}}\,dv, \tag{24}$$
By contrasting Equations (5) and (24), we can see that, if the input has the following form, the fractional-order Fourier transform may be connected to the near-range pseudopolar imaging as follows:
$$D(x) = U(x)\cdot e^{j\frac{2\pi x^2\sin^2\phi}{\lambda\rho}}, \tag{25}$$
where the fractional order is defined as follows:
$$\alpha = \sin\phi = \frac{\lambda\rho}{\sqrt{4 + \lambda^2\rho^2}}, \tag{26}$$
Put another way, we understand that the near-range pseudopolar imaging problem is comparable to the following: a complex amplitude transmittance $U(x)$ is illuminated by a spherical wavefront with a radius $R$, that is:
$$D(x) = U(x)\cdot e^{j\frac{2\pi x^2}{\lambda R}}, \tag{27}$$
Therefore, a fractional-order Fourier transform is applied as the radar signal moves from the synthetic aperture radar at $\rho = 0$ (where $\alpha = 0$) to the long range at $\rho \gg R$ (where $\alpha \to 1$). From our perspective, applying a fractional-order Fourier transform means taking into account the fact that the sampled version of the input in Equation (7) now becomes:
$$D_s(x) = \sum_{n=0}^{N-1} D(x)\cdot e^{j\frac{2\pi x^2}{\lambda R}}\,\delta\!\left(x - \frac{nd}{N}\right) = \sum_{n=0}^{N-1} D\!\left(\frac{nd}{N}\right)\cdot e^{j\frac{2\pi n^2 d^2}{\lambda R N^2}}\,\delta\!\left(x - \frac{nd}{N}\right), \tag{28}$$
It is now convenient to choose:
$$R = \frac{d^2}{\lambda N\sin^2\phi}, \tag{29}$$
To ensure that the condition in Equation (13) and the condition $0 \leq \sin\phi = x/R \leq 1$ are met, the sampled input of Equation (28) becomes:
$$D_s(x) = \sum_{n=-N}^{N} D\!\left(\frac{nd}{N}\right)\cdot e^{j\frac{2\pi}{N}n^2\sin^2\phi}\cdot\delta\!\left(x - \frac{nd}{N}\right), \tag{30}$$
We can thus replicate the steps from Equation (7) to Equation (14) once more to discover that the discrete form of the fractional-order Fourier transform is:
$$P_s(\rho,\theta) = \Theta\sum_{n=-N}^{N} D\!\left(\frac{nd}{N}\right)\cdot e^{j\frac{2\pi}{N}\left(\frac{d\sin\theta}{\lambda} - n\right)^2}\cdot e^{j\frac{2\pi}{N}n^2\sin^2\phi}, \tag{31}$$
When converting from the spherical wavefront to the fractional-order Fourier transform, the parameter $\sin\phi$ can be considered a correction factor. Setting $\sin\phi = 0$ corresponds to employing the spherical wavefront, while $\sin\phi = 1$ corresponds to the hypothetical plane wavefront case. Other values of $\sin\phi$ can be evaluated as a step-by-step development from the spherical wavefront to the plane wavefront, until the far field (the plane wavefront) is reached. The focusing procedure in the near range for a frequency modulated continuous wave (FMCW) or stepped frequency continuous wave (SFCW) radar system is shown in Figure 2.

4. Matrix Representation for Discrete Fourier and Fractional Fourier Transforms

4.1. Discrete Pseudopolar Format Matrix

Viewing the fractional-order Fourier transform and the newly developed discrete pseudopolar format imaging formulation in matrix form can be very helpful for computational speed and ease of use. The Pseudopolar Format Matrix $M_{PM}$ is represented as follows:
$$M_{PM} = \frac{1}{\sqrt{N}}\begin{bmatrix} 1 & \cdots & W^{\left(\frac{d\sin\theta}{\lambda}\right)^2} & \cdots & W^{N^2} \\ \vdots & & \vdots & & \vdots \\ W^{n^2} & \cdots & W^{\left(n-\frac{d\sin\theta}{\lambda}\right)^2} & \cdots & W^{\left(n-N\right)^2} \\ \vdots & & \vdots & & \vdots \\ W^{N^2} & \cdots & W^{\left(N-\frac{d\sin\theta}{\lambda}\right)^2} & \cdots & 1 \end{bmatrix}, \tag{32}$$
where $W$ is defined as in the discrete Fourier transform (DFT):
$$W = e^{j\frac{2\pi}{N}}, \tag{33}$$
The features of this matrix and of the DFT matrix are very similar. The $M_{PM}$ matrix, for instance, has a circular structure, and its columns are orthogonal. Because $M_{PM}$ is also unitary, its inverse can be obtained by taking its complex conjugate transpose, as shown below:
$$M_{PM}\,M_{PM}^{*} = M_{PM}^{*}\,M_{PM} = I, \tag{34}$$
where $I$ is the identity matrix and $*$ denotes the complex conjugate transpose. This indicates that the overall power of the input signal will not be affected by the discrete Pseudopolar Format Matrix. The Pseudopolar Format Matrix becomes convenient when examining the diffraction effects of propagation over particular distances, $\rho \geq \rho_{\min}$, from a target. We now offer an analysis of the discrete pseudopolar format operation and its implications for general target distributions. The pseudopolar format procedure is performed through matrix multiplication:
$$D' = M_{PM}\,D, \tag{35}$$
where $D$ and $D'$ are the raw data matrix and its image matrix, respectively, for a one-dimensional aperture.
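A minimal numerical sketch of Equations (32)-(35) is given below under one possible reading of the reconstructed matrix: the output is assumed to be evaluated on the integer grid $d\sin\theta/\lambda = m$, the normalization is chosen so that Equation (34) holds, and $N$ is assumed odd; none of these choices are prescribed by the original text.

```python
import numpy as np

# One reading of Equation (32): a chirp-type, DFT-like matrix with entries W^((m - n)^2),
# where m indexes the output grid d*sin(theta)/lambda and n indexes the aperture samples.
N = 41                                  # assumed (odd) number of samples
m = np.arange(N).reshape(-1, 1)         # output (pseudopolar azimuth) index
n = np.arange(N).reshape(1, -1)         # input (aperture sample) index
M_PM = np.exp(1j * 2.0 * np.pi * (m - n) ** 2 / N) / np.sqrt(N)   # W = exp(j*2*pi/N), Eq. (33)

# Equation (34): under this reading, M_PM is numerically unitary for odd N
print(np.allclose(M_PM @ M_PM.conj().T, np.eye(N)))               # True

# Equation (35): focusing of a raw data column by matrix multiplication
rng = np.random.default_rng(1)
D = rng.standard_normal(N) + 1j * rng.standard_normal(N)          # assumed raw data column
D_image = M_PM @ D
print(np.isclose(np.linalg.norm(D_image), np.linalg.norm(D)))     # power preserved: True
```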

4.2. Discrete Fractional Fourier Transform Matrix

A diagonal matrix containing the parameter $\sin\phi$ is multiplied by the discrete Pseudopolar Format Matrix to generate a matrix representation of the fractional-order DFT. The matrix of the $\sin\phi$-phase factors is expressed as follows [53,54,55]:
$$M_{\sin\phi} = \begin{bmatrix} 1 & \cdots & 0 & \cdots & 0 \\ \vdots & \ddots & \vdots & & \vdots \\ 0 & \cdots & W^{n^2\sin^2\phi} & \cdots & 0 \\ \vdots & & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 & \cdots & W^{N^2\sin^2\phi} \end{bmatrix}, \tag{36}$$
The matrix form of the FrFT for column vectors can be obtained by multiplying the matrices $M_{\sin\phi}$ and $M_{PM}$. Its precise form is:
$$M_{FrPM} = M_{\sin\phi}\,M_{PM}, \tag{37}$$
The fractional-order Fourier transform (the operation in Equation (33)) can thus be carried out because the matrix in Equation (37) may be used, via a straightforward matrix calculation, to determine the fractional-order Fourier transform of any given column vector of length $N$. It can be observed that the discrete Pseudopolar Format Matrix in Equation (34) acts as the DFT with a phase factor.
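Continuing the sketch above under the same assumptions, the diagonal phase matrix of Equation (36) and the product of Equation (37) can be formed and applied to a column vector as follows; the value of $\sin\phi$ is illustrative rather than an optimized one.

```python
import numpy as np

# Same assumed indexing and normalization as the M_PM sketch above
N = 41
n = np.arange(N)
M_PM = np.exp(1j * 2.0 * np.pi * (n.reshape(-1, 1) - n) ** 2 / N) / np.sqrt(N)

sin_phi = 0.6                                                        # assumed parameter
M_sin = np.diag(np.exp(1j * 2.0 * np.pi * n ** 2 * sin_phi ** 2 / N))   # Equation (36)
M_FrPM = M_sin @ M_PM                                                # Equation (37)

# Fractional-order transform of a column vector of length N (Equation (35) with M_FrPM)
rng = np.random.default_rng(2)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
y = M_FrPM @ x
print(y.shape, np.isclose(np.linalg.norm(y), np.linalg.norm(x)))     # (41,) True
```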

5. Results

5.1. Numerical Simulation Validation

The simulation parameters were carefully chosen to closely match those of a real automotive short-range radar system (SRR), ensuring that the results are representative and applicable to real-world scenarios. The radar operates within the frequency band of 77 to 81 GHz, a common choice for automotive radar due to its balance of resolution and range capabilities. This frequency band allows the radar to effectively detect objects at short ranges, which is crucial for automotive applications such as collision avoidance and parking assistance. In the simulation, a total of 101 frequency points were sampled. This level of sampling provides a detailed frequency spectrum, which is essential for accurate signal processing and object detection. To simplify the simulation and focus on the core aspects of radar signal processing, a one-dimensional (1D) scan was performed. This scan consisted of 41 acquisition points within a 10 cm aperture size. The aperture size and the number of acquisition points were selected to provide a balance between computational efficiency and the accuracy of the simulation results.
Before applying the discrete fractional Fourier imaging procedure, the data were pre-processed using the Hanning window function. This windowing technique was applied to the data in both the frequency domain and the time domain. The Hanning window helps to reduce spectral leakage, which can distort the results of the Fourier Transform (FT). By minimizing these distortions, windowing ensures that the subsequent signal processing steps are more accurate. After windowing, the FT was applied to convert the frequency domain data into time domain signals. This transformation is a crucial step in radar signal processing, as it allows the analysis of the signals in the time domain, where the echoes from targets are more easily interpreted. The combination of these techniques—careful parameter selection, windowing and Fourier transformation—ensures that the simulation closely replicates the performance of a real automotive SRR system, providing valuable insights for the development and testing of radar technologies.
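The pre-processing chain described above can be sketched as follows with the stated simulation parameters (77-81 GHz, 101 frequency points, 41 positions over a 10 cm aperture); the synthetic single-target echo, the zero-padding length and the use of an inverse FFT for the frequency-to-time conversion are assumptions made purely for illustration, and only the frequency-axis windowing is shown.

```python
import numpy as np

# Stated simulation parameters; the echo itself is synthetic and purely illustrative
f = np.linspace(77e9, 81e9, 101)          # sampled frequency points [Hz]
x = np.linspace(-0.05, 0.05, 41)          # acquisition positions over the 10 cm aperture [m]
c = 3e8                                   # propagation speed [m/s]

# Synthetic stepped-frequency echo of a single point target (assumed at x = 0.3 m, z = 2 m)
R = np.sqrt((x[:, None] - 0.3) ** 2 + 2.0 ** 2)        # one-way range, shape (41, 1)
S = np.exp(-1j * 4.0 * np.pi * f[None, :] * R / c)     # frequency-domain data, shape (41, 101)

# Hanning windowing along the frequency axis to reduce spectral leakage
S_win = S * np.hanning(f.size)[None, :]

# Transform the windowed frequency-domain data into (fast-)time-domain signals
d_time = np.fft.ifft(S_win, n=512, axis=1)             # range-compressed profiles
print(d_time.shape)                                    # (41, 512)
```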
Figure 3a depicts a carefully structured scene comprising nine distinct point scatterers. These scatterers are strategically positioned within a range extending from 1.5 m to 3 m in the range direction and spanning from −1.5 m to 1.5 m in the azimuth direction. This arrangement is designed to simulate a realistic environment for testing the performance of radar imaging techniques. The focused image resulting from the application of the discrete fractional Fourier imaging procedure, as detailed in Equation (36), is illustrated in Figure 3b. This procedure, which involves transforming the data into a different fractional domain, is particularly effective for focusing and resolving scatterers in radar imaging. The resulting image in Figure 3b demonstrates a high level of clarity and resolution, indicating that the procedure has successfully concentrated the energy of the scattered signals. A flowchart of the proposed imaging processing procedure is shown in Figure 4.
The point scatterers, when processed using the proposed method, exhibit excellent focus. This demonstrates that the technique is capable of accurately resolving closely spaced scatterers, which is a critical requirement for high-resolution radar imaging applications. The focusing condition for this technique is based on the optimization parameters outlined in [45] and is given by:
$$\sin\phi = \frac{\lambda_0\,\rho}{\sqrt{\lambda_0^2\rho^2 + 2N^2\Delta x^4}}. \tag{38}$$
where $\lambda_0$ indicates the wavelength of the starting frequency of the system and $\rho$ indicates the slant range. $N$ is the number of acquisition points, and $\Delta x$ is the step size of each acquisition point. According to their research, the optimal rotated angle is crucial for achieving the best possible focus in the discrete fractional Fourier domain. This angle is derived from a set of conditions that take into account the geometry and characteristics of the radar scene, ensuring that the imaging process is finely tuned for maximum clarity and resolution.
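As a rough numerical check, the sketch below evaluates the focusing condition of Equation (38), as reconstructed above, with parameters matching the 77 GHz simulation (41 acquisition points over a 10 cm aperture); the 2.5 mm step size and the use of the wavelength at the 77 GHz starting frequency are assumptions. The computed rotated angle is zero at zero range and grows towards 90 degrees with increasing slant range, which is consistent with the trend described for Figure 5.

```python
import numpy as np

# Assumed parameters consistent with the 77 GHz SRR simulation described above
lam0 = 3e8 / 77e9          # wavelength of the starting frequency [m]
N = 41                     # number of acquisition points
dx = 0.10 / (N - 1)        # acquisition step size [m] (assumed 2.5 mm)

rho = np.array([0.0, 0.5, 1.0, 2.0, 3.5])   # slant ranges [m]
sin_phi = lam0 * rho / np.sqrt(lam0 ** 2 * rho ** 2 + 2.0 * N ** 2 * dx ** 4)   # Equation (38)
print(np.rad2deg(np.arcsin(sin_phi)))       # 0 deg at zero range, increasing towards 90 deg
```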
Figure 5 illustrates the optimization of the rotated angle for the 77 GHz short-range radar (SRR) setting. This angle is crucial for accurately focusing the radar signals and is depicted across different ranges. At the starting point, or range 0, the rotated angle is 0 degrees, indicating that the radar wavefront is initially spherical. As the range increases, the rotated angle steadily grows, approaching 90 degrees at a distance of approximately 3.5 m. This progression highlights the transition from a spherical to a planar wavefront. The significance of this transformation lies in its impact on radar imaging accuracy. At shorter ranges, the wavefront is spherical due to the proximity of the radar to the scatterers, which means the signals need more precise focusing adjustments. As the distance increases, the wavefront gradually becomes planar, representing the far-field condition where the wavefronts are essentially parallel. This planar wavefront is easier to handle and requires less complex focusing techniques. The figure clearly demonstrates the methodical development from a spherical wavefront, which is typical at close ranges, to a plane wavefront characteristic of the far-field region. Understanding and optimizing this transition is critical for enhancing the performance of SRR systems, ensuring they provide accurate and reliable data across various distances.
Figure 6 illustrates the azimuth cut of a simulated point target, which is indicated by the red arrow in Figure 3b. The azimuth cut is presented using two methods: the conventional Fourier-based method and the newly proposed method. The solid line in the figure represents the azimuth cut of the point target as obtained by the proposed method, while the dashed line corresponds to the azimuth cut using the conventional method. The comparison between these two lines clearly demonstrates that the azimuth resolution achieved by the proposed method is superior to that of the conventional method. This improvement in resolution can be attributed to the way each method manages the wavefront of the signal. The proposed method utilizes the FrFT with a specific rotated angle, which allows it to fully focus the spherical wavefront. By accurately focusing the wavefront, the proposed method minimizes distortion and enhances resolution. On the other hand, the conventional Fourier-based approach assumes that the wavefront is a plane wave. This assumption leads to inaccuracies and results in a blurred azimuth cut.

5.2. Experimental Data Validation

A discrete fractional Fourier imaging technique has been evaluated using real experimental data acquired with the IBIS-L Ground-Based Synthetic Aperture Radar (GB-SAR) system. This evaluation took place at the Kawauchi Campus of Tohoku University in Sendai, Japan, as depicted in Figure 7a. The IBIS-L GB-SAR system used for this study features two vertically polarized horn antennas: one dedicated to transmitting signals and the other to receiving them. The system operates in the Ku-band, characterized by a centre frequency of 17.175 GHz and a bandwidth of 300 MHz, allowing high-resolution imaging capabilities. The radar system's frequency sampling sites were set using a stepped-frequency scheme tailored to the observational range, ensuring efficient data acquisition across the specified bandwidth. This setup facilitates precise measurements by incrementally adjusting the frequency, which enhances the resolution and clarity of the radar images. The entire radar and antenna assembly is mounted on a linear rail system, enabling it to perform a systematic scan over an area approximately 2 m in length.
During the data collection process, the system performed repeated scans of the 2-metre area at five-minute intervals. Each scan cycle lasted two minutes, providing a continuous stream of data that could be analysed to assess the effectiveness of the discrete fractional Fourier imaging technique. Data were recorded at 401 distinct azimuth positions, with measurements taken every 5 mm along the 2-metre scan length. This high density of data points is critical for generating detailed and accurate radar images, as it ensures that even small features within the scanned area can be detected and analysed. The application of the discrete fractional Fourier imaging technique to this data aims to enhance the focusing and resolution of the radar images. By transforming the data into a fractional Fourier domain, the technique can better handle the curvature of wavefronts, especially at different distances. This is particularly important for ground-based SAR systems, where the geometry of the scene can significantly impact the quality of the radar images.
The focused image produced by the discrete fractional Fourier imaging procedure is presented in Figure 7b. This image clearly demonstrates the effectiveness of the technique in achieving high-resolution radar imaging. In the scene, near-range targets such as tree trunks and building edges are distinctly visible and well-focused, highlighting the capability of this imaging method to resolve fine details in cluttered environments. The clarity of these features underscores the accuracy of the discrete fractional Fourier imaging procedure in processing radar signals and enhancing image quality. Figure 8 illustrates the optimization of the rotated angle for the Ku-band IBIS-L GB-SAR system. This angle is a critical parameter for achieving optimal focus in radar imaging. At the initial range of 0 m, the rotated angle is 0 degrees, indicating that the radar wavefronts are spherical due to their proximity to the source. As the range increases, the angle progressively adjusts, reaching close to 90 degrees at a distance of 35 m. This progression represents the transformation from a spherical wavefront—typically at shorter ranges—to a plane wavefront—a characteristic of the far-field region.
The focusing ability of both the conventional method and the proposed method is evaluated using a specific target, indicated by the red arrow in Figure 7b. Figure 9 depicts the azimuth cut of this real target, located in the measurement scene shown in Figure 7a. The azimuth cut is presented using two different methods: the conventional Fourier-based method and the newly proposed method. In Figure 9, the solid line represents the azimuth cut obtained by the proposed method, while the dashed line represents the azimuth cut obtained by the conventional method. This side-by-side comparison highlights the differences in azimuth resolution between the two methods. Around the 0.05 m mark, the superior performance of the proposed method becomes evident. The proposed method’s solid line demonstrates a much sharper and more defined azimuth cut compared to the conventional method’s dashed line. This improvement in azimuth resolution can be attributed to the advanced focusing capabilities of the proposed method. By employing a sophisticated algorithm, the proposed method can more accurately focus on the target, reducing blurriness and enhancing the overall clarity of the image. The conventional Fourier-based method, in contrast, shows a less precise focus, leading to a more blurred representation of the target. The significant enhancement in azimuth resolution achieved by the proposed method underscores its effectiveness in accurately capturing and processing target data. This improved focusing ability is crucial for applications requiring high precision and clarity, making the proposed method a valuable advancement over traditional Fourier-based techniques.

6. Discussions

Based on the numerical simulation and experimental validation, the ability of the proposed method to correctly focus the spherical wavefront using the FrFT makes it more effective in achieving better azimuth resolution than the traditional Fourier-based method. This significant improvement highlights the advantage of using advanced signal processing techniques to enhance the clarity and precision of imaging systems. The significance of this transition lies in its impact on the radar system's imaging performance. At shorter ranges, the spherical wavefront requires more complex focusing adjustments to accurately resolve targets. As the distance increases, the wavefront becomes planar, simplifying the focusing process and enhancing image clarity. This transition is crucial for applications that require precise imaging over varying distances. Figures 5 and 8 clearly show the development from a spherical wavefront to a plane wavefront, illustrating how the discrete fractional Fourier imaging technique adapts to different ranges. Understanding and optimizing this transition ensures that the radar system can maintain high resolution and focus accuracy across its entire operational range. This capability is particularly valuable for applications in environmental monitoring, infrastructure inspection and disaster management, where detailed and accurate radar images are essential.

7. Conclusions

In this paper, we have discussed the application of discrete Fourier and Fractional Fourier Transforms within the pseudopolar format imaging coordinate system. This approach offers significant advantages, particularly when the limitations on distances are considered acceptable. By accepting these limitations, one can simplify the computational processes involved, utilizing easily multiplied matrices instead of relying on continuous space calculus. This simplification is beneficial for practical implementations, making the process more efficient and accessible. The use of discrete Fractional Fourier Transform operators in near-range imaging can be effectively represented through a combination of Fourier-based matrices and diagonal phase matrices. This matrix-based representation is not only computationally efficient but also facilitates the manipulation and transformation of radar signals. The discrete Fractional Fourier Transform allows for precise adjustments in the phase patterns, which is crucial for accurate image focusing and resolution in near-range scenarios.
One of the key concepts explored in the paper is the transition of phase patterns from near-range to far-range imaging. This transition can be effectively captured by the rotated angle of the Fractional Fourier Transform. As the imaging range increases, the phase pattern evolves, necessitating adjustments in the transform to maintain focus and clarity. The rotated angle parameter provides a means to manage this evolution, ensuring that the radar system can adapt to different ranges while maintaining high-quality imaging performance. The transition from near-range to far-range phase patterns highlights the versatility of the Fractional Fourier Transform in radar imaging applications. By representing the phase adjustments through rotated angles, the system can seamlessly shift between different imaging conditions, optimizing focus and resolution as required. This adaptability is particularly valuable in dynamic environments where the range of targets can vary significantly.
Although the proposed approach provides a practical solution for high-resolution imaging across various ranges, making it a valuable tool for applications in fields such as remote sensing, surveillance and environmental monitoring, some limitations remain. The implementation of discrete FrFT within the pseudopolar coordinate system involves complex mathematical operations, which can be computationally intensive. This complexity can limit real-time applications and require significant computational resources, especially for high-resolution imaging tasks. Moreover, the existing FrFT transform formations are sensitive to noise. Any noise present in the data can be amplified during the transformation process, potentially leading to degraded image quality. Effective noise reduction techniques are essential but can add to the overall computational burden. Finally, the existing algorithms for FrFT in pseudopolar coordinates may not be fully optimized for all practical scenarios. Further refinement is needed to ensure robustness and efficiency across different applications and conditions.
Therefore, research can focus on developing more efficient algorithms to reduce the computational complexity associated with FrFT. Techniques such as parallel processing and optimized matrix operations could be explored to make the process faster and more efficient, and developing advanced noise reduction methods that can be integrated with FrFT processes will be crucial. This could involve adaptive filtering techniques that can dynamically adjust to varying noise levels and improve the overall robustness of the imaging system. More importantly, collaboration with hardware developers to design radar systems specifically tailored to support FrFT and pseudopolar coordinates would be beneficial. This could involve the creation of specialized antennas and signal processing units that optimize the performance of these advanced imaging techniques.

Author Contributions

Conceptualization, L.Z. and Y.L.; methodology, L.Z. and Y.L.; software, L.Z.; validation, L.Z., Y.L. and A.M.A.; formal analysis, L.Z. and Y.L.; investigation, L.Z. and Y.L.; resources, L.Z. and Y.L.; data curation, L.Z. and Y.L.; writing—original draft preparation, L.Z.; writing—review and editing, Y.L. and A.M.A.; visualization, L.Z. and Y.L.; supervision, Y.L. and A.M.A.; project administration, A.M.A.; funding acquisition, A.M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors have no conflicts of interest.

References

  1. Sato, M. Near range radar and its application to near surface geophysics and disaster mitigation. J. Earth Sci. 2015, 26, 858–863. [Google Scholar] [CrossRef]
  2. Dine, F. Evaluation of the Utility of Radar Data to Provide Model Parameters for Energy System Analysis. Master’s Thesis, University of Applied Sciences, Stuttgart, Germany, March 2022. [Google Scholar]
  3. Lu, Z.; Kwoun, O.; Rykhus, R. Interferometric synthetic aperture radar (InSAR): Its past, present and future. Photogramm. Eng. Remote Sens. 2007, 73, 217. [Google Scholar]
  4. Richter, N.; Froger, J.L. The role of Interferometric Synthetic Aperture Radar in detecting, mapping, monitoring, and modelling the volcanic activity of Piton de la Fournaise, La Réunion: A review. Remote Sens. 2020, 12, 1019. [Google Scholar] [CrossRef]
  5. Yue, L.; Shen, H.; Li, J.; Yuan, Q.; Zhang, H.; Zhang, L. Image super-resolution: The techniques, applications, and future. Signal Process. 2016, 128, 389–408. [Google Scholar] [CrossRef]
  6. Wang, G.; Ye, J.C.; De Man, B. Deep learning for tomographic image reconstruction. Nat. Mach. Intell. 2020, 2, 737–748. [Google Scholar] [CrossRef]
  7. Chen, Z.; Guo, W.; Feng, Y.; Li, Y.; Zhao, C.; Ren, Y.; Shao, L. Deep-learned regularization and proximal operator for image compressive sensing. IEEE Trans. Image Process. 2021, 30, 7112–7126. [Google Scholar] [CrossRef]
  8. Berger, Z. Satellite Hydrocarbon Exploration: Interpretation and Integration Techniques; Springer: New York, NY, USA, 2012. [Google Scholar]
  9. Rosu, F.; Anghel, A.; Ciochină, S.; Cacoveanu, R.; Datcu, M. Near-Range Multipath Mitigation Methodology for Multistatic SAR Applications Using Matched-Adaptive Filters. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 3204–3214. [Google Scholar] [CrossRef]
  10. Nitti, D.O.; Bovenga, F.; Chiaradia, M.T.; Greco, M.; Pinelli, G. Feasibility of using synthetic aperture radar to aid UAV navigation. Sensors 2015, 15, 18334–18359. [Google Scholar] [CrossRef]
  11. Zhang, W. Three-dimensional through-the-wall imaging with multiple-input multiple-output (MIMO) radar. J. Electromagn. Waves Appl. 2014, 28, 1935–1943. [Google Scholar] [CrossRef]
  12. Narayanan, R.M.; Gebhardt, E.T.; Broderick, S.P. Through-wall single and multiple target imaging using MIMO radar. Electronics 2017, 6, 70. [Google Scholar] [CrossRef]
  13. Hu, X.; Tong, N.; Zhang, Y.; Hu, G.; He, X. Multiple-input–multiple-output radar super-resolution three-dimensional imaging based on a dimension-reduction compressive sensing. IET Radar Sonar Navig. 2016, 10, 757–764. [Google Scholar] [CrossRef]
  14. Clark, J.; Fierro, R. Mobile robotic sensors for perimeter detection and tracking. ISA Trans. 2007, 46, 3–13. [Google Scholar] [CrossRef] [PubMed]
  15. Weber, W.A.; Grosu, A.L.; Czernin, J. Technology Insight: Advances in molecular imaging and an appraisal of PET/CT scanning. Nat. Clin. Pract. Oncol. 2008, 5, 160–170. [Google Scholar] [CrossRef]
  16. Najjar, R. Redefining radiology: A review of artificial intelligence integration in medical imaging. Diagnostics 2023, 13, 2760. [Google Scholar] [CrossRef]
  17. Łoginoff, J.; Augustynowicz, K.; Świąder, K.; Ostaszewska, S.; Morawski, P.; Pactwa, F.; Popińska, Z. Advancements in Radiology and Diagnostic Imaging. J. Educ. Health Sport. 2023, 33, 45–51. [Google Scholar] [CrossRef]
  18. Pham, T.H.; Kim, K.H.; Hong, I.P. A study on millimeter wave SAR imaging for non-destructive testing of rebar in reinforced concrete. Sensors 2022, 22, 8030. [Google Scholar] [CrossRef] [PubMed]
  19. Bandyopadhyay, A.; Sengupta, A. A review of the concept, applications and implementation issues of terahertz spectral imaging technique. IETE Tech. Rev. 2022, 39, 471–489. [Google Scholar] [CrossRef]
  20. Zou, L.; Liu, H.; Alani, A.M.; Fang, G. Surface Permittivity Estimation of Southern Utopia Planitia by High Frequency RoPeR in Tianwen-1 Mars Exploration. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–9. [Google Scholar] [CrossRef]
  21. Das, S.; Dey, A.; Pal, A.; Roy, N. Applications of artificial intelligence in machine learning: Review and prospect. Int. J. Comput. Appl. 2015, 115, 31–41. [Google Scholar] [CrossRef]
  22. Helal, S.; Sarieddeen, H.; Dahrouj, H.; Al-Naffouri, T.Y.; Alouini, M.S. Signal processing and machine learning techniques for terahertz sensing: An overview. IEEE Signal Process. Mag. 2022, 39, 42–62. [Google Scholar] [CrossRef]
  23. Malhotra, I.; Jha, K.R.; Singh, G. Terahertz antenna technology for imaging applications: A technical review. Int. J. Microw. Wirel. Technol. 2018, 10, 271–290. [Google Scholar] [CrossRef]
  24. Gao, B.; Zhang, F.; Zhao, E.; Zhang, D.; Pan, S. High-resolution phased array radar imaging by photonics-based broadband digital beamforming. Opt. Express 2019, 27, 13194–13203. [Google Scholar] [CrossRef]
  25. Oliver, C.; Quegan, S. Understanding Synthetic Aperture Radar Images; SciTech Publishing: Chennai, India, 2004. [Google Scholar]
  26. Zhuge, X.; Yarovoy, A.G. Study on two-dimensional sparse MIMO UWB arrays for high resolution near-field imaging. IEEE Trans. Antennas Propag. 2012, 60, 4173–4182. [Google Scholar] [CrossRef]
  27. Zhuge, X.; Yarovoy, A.G. Sparse multiple-input multiple-output arrays for high-resolution near-field ultra-wideband imaging. IET Microw. Antennas Propag. 2011, 5, 1552–1562. [Google Scholar] [CrossRef]
  28. Fortuny-Guasch, J. A novel 3-D subsurface radar imaging technique. IEEE Trans. Geosci. Remote Sens. 2002, 40, 443–452. [Google Scholar] [CrossRef]
  29. Fortuny, J.; Sieber, A.J. Three-dimensional synthetic aperture radar imaging of a fir tree: First results. IEEE Trans. Geosci. Remote Sens. 1999, 37, 1006–1014. [Google Scholar] [CrossRef]
  30. Yang, H.; Li, T.; Li, N.; He, Z.; Liu, Q.H. Efficient near-field imaging for single-borehole radar with widely separated transceivers. IEEE Trans. Geosci. Remote Sens. 2015, 53, 5327–5337. [Google Scholar] [CrossRef]
  31. Fortuny, J.; Sieber, A.J. Fast algorithm for a near-field synthetic aperture radar processor. IEEE Trans. Antennas Propag. 1994, 42, 1458–1460. [Google Scholar] [CrossRef]
  32. Zhuge, X.; Yarovoy, A.G.; Savelyev, T.; Ligthart, L. Modified Kirchhoff migration for UWB MIMO array-based radar imaging. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2692–2703. [Google Scholar] [CrossRef]
  33. Morrow, I.L.; van Genderen, P. Effective imaging of buried dielectric objects. IEEE Trans. Geosci. Remote Sens. 2002, 40, 943–949. [Google Scholar] [CrossRef]
  34. Zhu, R.; Zhou, J.; Jiang, G.; Fu, Q. Range migration algorithm for near-field MIMO-SAR imaging. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2280–2284. [Google Scholar] [CrossRef]
  35. Ahmad, F.; Amin, M.G.; Mandapati, G. Autofocusing of through-the-wall radar imagery under unknown wall characteristics. IEEE Trans. Image Process. 2007, 16, 1785–1795. [Google Scholar] [CrossRef]
  36. Cheney, M.; Borden, B. Fundamentals of Radar Imaging; CBMS-NSF Regional Conference Series in Applied Mathematics; SIAM: Philadelphia, PA, USA, 2009; Volume 79. [Google Scholar]
  37. Bracewell, R. Two-Dimensional Convolution. In Fourier Analysis and Imaging; Springer: Boston, MA, USA, 2003. [Google Scholar]
  38. Kidera, S.; Sakamoto, T.; Sato, T. Extended imaging algorithm based on aperture synthesis with double-scattered waves for UWB radars. IEEE Trans. Geosci. Remote Sens. 2011, 49, 5128–5139. [Google Scholar] [CrossRef]
  39. Cheney, M. A mathematical tutorial on synthetic aperture radar. SIAM Rev. 2001, 43, 301–312. [Google Scholar] [CrossRef]
  40. Fromenteze, T.; Yurduseven, O.; Berland, F.; Decroze, C.; Smith, D.R.; Yarovoy, A.G. A transverse spectrum deconvolution technique for MIMO short-range Fourier imaging. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6311–6324. [Google Scholar] [CrossRef]
  41. Bamler, R. Principles of synthetic aperture radar. Surv. Geophys. 2000, 21, 147–157. [Google Scholar] [CrossRef]
  42. Gini, F.; De Maio, A.; Patton, L. Waveform Design and Diversity for Advanced Radar Systems; Institution of Engineering and Technology: London, UK, 2012. [Google Scholar]
  43. Brigham, E.O. The Fast Fourier Transform and Its Applications; Pearson: Upper Saddle River, NJ, USA, 1988; Volume 448. [Google Scholar]
  44. Brandwood, D. Fourier Transforms in Radar and Signal Processing; Artech House: Norwood, MA, USA, 2012. [Google Scholar]
  45. Zou, L.; Sato, M. An efficient and accurate GB-SAR imaging algorithm based on the fractional Fourier transform. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9081–9089. [Google Scholar] [CrossRef]
  46. Almeida, L.B. The fractional Fourier transform and time-frequency representations. IEEE Trans. Signal Process. 1994, 42, 3084–3091. [Google Scholar] [CrossRef]
  47. Ozaktas, H.M.; Arikan, O.; Kutay, M.A.; Bozdagt, G. Digital computation of the fractional Fourier transform. IEEE Trans. Signal Process. 1996, 44, 2141–2150. [Google Scholar] [CrossRef]
  48. Yetik, I.S.; Nehorai, A. Beamforming using the fractional Fourier transform. IEEE Trans. Signal Process. 2003, 51, 1663–1668. [Google Scholar] [CrossRef]
  49. McBride, A.C.; Kerr, F.H. On Namias’s fractional Fourier transforms. IMA J. Appl. Math. 1987, 39, 159–175. [Google Scholar] [CrossRef]
  50. Namias, V. The fractional order Fourier transform and its application to quantum mechanics. IMA J. Appl. Math. 1980, 25, 241–265. [Google Scholar] [CrossRef]
  51. Bailey, D.H.; Swarztrauber, P.N. The fractional Fourier transform and applications. SIAM Rev. 1991, 33, 389–404. [Google Scholar] [CrossRef]
  52. Zayed, A.I. On the relationship between the Fourier and fractional Fourier transforms. IEEE Signal Process. Lett. 1996, 3, 310–311. [Google Scholar] [CrossRef]
  53. Candan, C.; Kutay, M.A.; Ozaktas, H.M. The discrete fractional Fourier transform. IEEE Trans. Signal Process. 2000, 48, 1329–1337. [Google Scholar] [CrossRef]
  54. Santhanam, B.; McClellan, J.H. The discrete rotational Fourier transform. IEEE Trans. Signal Process. 1996, 44, 994–998. [Google Scholar] [CrossRef]
  55. Atakishiyev, N.M.; Vicent, L.E.; Wolf, K.B. Continuous vs. discrete fractional Fourier transforms. J. Comput. Appl. Math. 1999, 107, 73–95. [Google Scholar] [CrossRef]
Figure 1. (a) Illustration of a 1D synthetic aperture radar imaging setup. (b) Illustration of a pseudopolar format imaging pixel in range and azimuth directions.
Figure 2. FMCW or SFCW radar system near-range imaging procedure.
Figure 3. Simulation and focusing results of 77 GHz SRR radar: (a) locations of nine simulated targets and (b) focusing image under discrete fractional Fourier imaging procedure.
Figure 4. Flowchart of the proposed imaging processing procedure.
Figure 5. Optimization rotated angle of 77 GHz SRR with slant range.
Figure 6. Analyses of the focusing capability comparison over the point target of the azimuth profile.
Figure 7. Simulation and focusing results of the real Ku-band GB-SAR system: (a) real experiment scene located in Kawauchi, Sendai, Japan, and (b) focusing image under a discrete fractional Fourier imaging procedure.
Figure 8. Optimization of the rotated angle of the Ku-band IBIS-L GB-SAR system with slant range.
Figure 9. Analyses of the focusing capability comparison over the real target of the azimuth profile.

