Article

Spatial Feature-Based ISAR Image Registration for Space Targets

by Lizhi Zhao 1, Junling Wang 2,*, Jiaoyang Su 1 and Haoyue Luo 3

1 School of Information Engineering, Minzu University of China, Beijing 100081, China
2 School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
3 China Aerospace Science and Industry Corp Second Research Institute, Institute No.25 of the Second Academy, Beijing 100854, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(19), 3625; https://doi.org/10.3390/rs16193625
Submission received: 31 July 2024 / Revised: 26 September 2024 / Accepted: 26 September 2024 / Published: 28 September 2024
(This article belongs to the Section Engineering Remote Sensing)

Abstract

Image registration is essential for applications requiring the joint processing of inverse synthetic aperture radar (ISAR) images, such as interferometric ISAR, image enhancement, and image fusion. Traditional image registration methods, developed for optical images, often perform poorly with ISAR images due to their differing imaging mechanisms. This paper introduces a novel spatial feature-based ISAR image registration method. The method encodes spatial information by utilizing the distances and angles between dominant scatterers to construct translation- and rotation-invariant feature descriptors. These feature descriptors are then used for scatterer matching, while the coordinate transformation of matched scatterers is employed to estimate the image registration parameters. To mitigate the glint effects of scatterers, the random sample consensus (RANSAC) algorithm is applied for parameter estimation. By extracting global spatial information, the constructed feature curves exhibit greater stability and reliability. Additionally, using multiple dominant scatterers ensures adaptability to low signal-to-noise ratio (SNR) conditions. The effectiveness of the method is validated through both simulated and real ISAR image sequences. Comparative performance results with traditional image registration methods, such as the SIFT, SURF, and SIFT+SURF algorithms, are also included.

1. Introduction

ISAR has been used for space target monitoring for many years; it can provide useful information about space targets, such as shape and structure, body attitude, and working mode [1,2,3,4]. To improve ISAR image quality and exploit more complete information, image fusion [5] and three-dimensional (3D) target reconstruction techniques [6,7,8] have been extensively investigated; these usually involve jointly processing radar data of the same target taken at different times, from different viewpoints, and/or by different sensors. Image registration is a crucial step in integrating or comparing different measurements in these applications. For example, in image fusion, different images need to be rotated to a common coordinate system prior to high-resolution image formation; in 3D reconstruction, the same scatterers from different images need to be correctly registered to compare position or phase differences. According to the imaging mechanism of radar systems, an ideal ISAR image can be represented by a two-dimensional (2D) point cloud, which is significantly different from an optical image [9]. Low SNR, sparse scatterer distribution, scatterer defocusing, scatterer glinting, and secondary reflections of targets in ISAR images all impact the performance of image registration algorithms. Therefore, image registration methods should be reevaluated and modified according to the characteristics of ISAR images, even if they work well in matching synthetic aperture radar (SAR) images [10], optical images [11], and medical images [12].
ISAR image registration algorithms based on joint translational compensation have been proposed to resolve the translation mismatch problem in the signal domain [13,14,15,16,17,18]. These algorithms utilize the relative rotation of the line of sight (LOS) to compensate for image shifts. In [13], the relationship between time-varying rotation parameters and the compensation phase is derived from the observation geometry in a multiple-antenna configuration, such as an L-shaped structure. Several studies have focused on image registration algorithms for non-cooperative space targets, where the relative rotation of the target is unknown. In [13,14,15], the rotation trajectory is estimated by tracking dominant scatterers in one-dimensional range profiles along the cross-range direction. To solve the problem of cross-term phase interference, the authors in [16] proposed measuring the phases of joint-channel phase difference profiles generated from the raw echo data of different receiving antennas. Reference [17] combined joint range alignment and joint Doppler centroid tracking to register multichannel ISAR images. In [18], a joint wave path difference compensation algorithm was developed to enable precise image registration for InISAR imaging with SFB-SA signals.
Registration techniques processed in the image domain have also been developed, which can avoid the impact of fluctuating quality in range profiles, especially in low SNR scenarios or when there are few dominant scatterers in the range profiles. Image alignment was achieved using cross-correlation techniques by iteratively searching for the translation and rotation parameters [19]. However, the drawback of this method is its high computational complexity. Since cross-correlation can be calculated in the frequency domain based on the Fourier theorem, computational efficiency can be improved by using the fast Fourier transform (FFT). The authors of [20] utilized phase correlation in the frequency domain to estimate the relative translation offset between two ISAR images. For image rotation parameter estimation, [21] decomposed the image rotation using a three-step FFT-based convolution interpolation process to improve computational efficiency, and [20] applied the polar mapping approach, which turns the rotation of an image into a translation. Since this type of method uses the intensity information of the image, the content of the image has a significant influence on the registration result. In [22], a registration method was proposed by tracking and matching scatterers based on the Kanade–Lucas–Tomasi (KLT) algorithm. The KLT algorithm makes use of spatial intensity information to directly search for the position that yields the best match. The rapid advancement of deep learning has led to significant progress in SAR image registration [23,24]. Deep learning’s feature extraction and matching capabilities have greatly improved image registration accuracy and efficiency. However, there has been limited research on the application of deep learning methods for ISAR image registration, which requires further investigation.
Feature-based methods are powerful tools in remote sensing, medical imaging, and computer vision applications. By focusing on distinctive features within images, these methods can effectively align images despite changes in viewpoint or partial overlaps. Researchers have explored feature-based registration techniques in the field of ISAR image registration [25,26,27,28,29,30,31]. For instance, line features [25] and rectangular features [26] are applied for the image registration of continuously changing ISAR image sequences. The extraction of these advanced features primarily relies on the target's contours. For a discrete ISAR image, image preprocessing techniques, such as morphological operations, are necessary to produce continuous contours. Reference [27] proposed an image registration method by integrating SIFT and a local correlation matching method. In [28], a combination of scale-invariant feature transform (SIFT) and speeded up robust features (SURF) was employed to align ISAR image sequences and achieve large-angle imaging of ISAR targets. While effective, SIFT and SURF are optimized for images with distinct, well-defined features such as corners, edges, and blobs, and they are sensitive to image noise [32,33]. A well-focused ISAR image can be taken as a composition of many 2D sinc functions, with slight blur, located at the projected positions of the 3D scatterers in the ISAR imaging plane [1]. The lack of rich textures in many ISAR images poses challenges, necessitating alternative feature detection methods to achieve optimal results.
Sparsely distributed scatterers resemble point clouds more than color patches. Thus, using scatterers rather than other features to describe an ISAR image is more appropriate. Additionally, using structural information instead of boundary or gradient information in ISAR image registration is more reasonable.
In this work, we propose utilizing the center points of the sinc envelope as feature points (FPs) and constructing a spatial-based feature descriptor by employing the characteristics of distance, angle, and SNR of scatterers. This descriptor exploits essential information from ISAR images and is expected to provide a more stable performance in parameter estimation for image registration.
The innovations of this article are listed as follows:
(1)
A spatial feature of dominant scatterers is proposed to describe the spatial information of ISAR images, offering a more fundamental approach compared to boundary-based or gradient-based features in ISAR image registration.
(2)
The proposed method projects the two-dimensional spatial structure of the target into a one-dimensional feature vector, translating image rotation into a shift of the feature vector, which can be easily matched through circular cross-correlation.
(3)
The traditional random sample consensus (RANSAC) method is enhanced with a consistency check on the rough rotation angle, allowing closer integration with the proposed features and thereby improving the accuracy and stability of mapping parameter estimation, even when the percentage of outliers approaches 50%.
The remainder of this paper is organized as follows: Section 2 analyzes the transform model of ISAR images from different viewpoints. Section 3 introduces the proposed ISAR image registration method, including spatial feature model construction, feature matching, and transform model estimation. Section 4 presents simulated and experimental results to validate the efficacy of the proposed method. Finally, Section 5 provides the discussion, and Section 6 presents the conclusions.

2. Transform Model of ISAR Images

The conventional ISAR imaging methodology generates two-dimensional representations of targets by projecting the reflectivity function onto an imaging projection plane (IPP) [34]. By analyzing the relationship between imaging coordinate systems from two distinct viewpoints, an ISAR image registration model can be derived. This section will analyze the transformation model for registering two ISAR images obtained from different arcs by a ground-based radar imaging system.
Figure 1 illustrates the imaging geometry of a spatial target. An ISAR image sequence is generated for the entire visible orbit. Without loss of generality, we assume that $t_n$ and $t_{n+1}$ represent the imaging moments of the n-th and (n+1)-th images in the sequence, with $\Delta t$ being the time interval between these moments. Let $T_n(\mathbf{i}_n, \mathbf{j}_n, \mathbf{k}_n)$ and $T_{n+1}(\mathbf{i}_{n+1}, \mathbf{j}_{n+1}, \mathbf{k}_{n+1})$ denote the imaging coordinate systems corresponding to these moments. $\mathbf{j}_n$ and $\mathbf{j}_{n+1}$ are the range axes, aligned with the imaging LOS. $\mathbf{i}_n$ and $\mathbf{i}_{n+1}$ are the Doppler axes, perpendicular to both the range axis and the effective rotation vector. $\mathbf{k}_n$ and $\mathbf{k}_{n+1}$ are the normal vectors of the IPP, consistent with the direction of the effective rotation vector.
During target observation, the LOS rotates with the target's movement, while the target adjusts its attitude to maintain three-axis stability or accomplish specific missions. This results in relative rotation between the radar and the target. In the imaging model described in [9], a relative LOS vector is defined within the target's body coordinate system, assuming that the target's body coordinate system aligns with the instantaneous imaging system $T_n$. The imaging LOS vector $\mathbf{j}_{n+1}$ can be expressed as
$$\mathbf{j}_{n+1}(\Delta t) = \left[\cos\beta(\Delta t)\sin\alpha(\Delta t),\ \cos\beta(\Delta t)\cos\alpha(\Delta t),\ \sin\beta(\Delta t)\right]^{T} \qquad (1)$$
where $\alpha$ is the azimuth angle of the LOS, $\beta$ is the elevation angle of the LOS, and the superscript $T$ denotes the matrix transpose.
By differentiating and normalizing the LOS vector, the Doppler vector is derived, as shown in Equation (2). This allows the instantaneous IPP at time $t_{n+1}$ to be determined. The normal vector of the IPP is obtained by taking the cross-product of the Doppler axis and the range axis, as shown in Equation (3).
$$\mathbf{i}_{n+1} = \frac{d\mathbf{j}_{n+1}}{d\Delta t} \Big/ \left\| \frac{d\mathbf{j}_{n+1}}{d\Delta t} \right\| \qquad (2)$$
$$\mathbf{k}_{n+1} = \mathbf{i}_{n+1} \times \mathbf{j}_{n+1} \qquad (3)$$
The transformation matrix between the imaging coordinate systems at these two different time instances is
$$\mathbf{R}_{t_{n+1}}^{t_n} = \left[\mathbf{i}_{n+1}, \mathbf{j}_{n+1}, \mathbf{k}_{n+1}\right] = \begin{bmatrix} \dfrac{-\dot{\beta}\sin\beta\sin\alpha + \dot{\alpha}\cos\beta\cos\alpha}{\sqrt{\dot{\beta}^{2} + \dot{\alpha}^{2}\cos^{2}\beta}} & \cos\beta\sin\alpha & \dfrac{-\dot{\beta}\cos\alpha - \dot{\alpha}\cos\beta\sin\beta\sin\alpha}{\sqrt{\dot{\beta}^{2} + \dot{\alpha}^{2}\cos^{2}\beta}} \\[2ex] \dfrac{-\dot{\beta}\sin\beta\cos\alpha - \dot{\alpha}\cos\beta\sin\alpha}{\sqrt{\dot{\beta}^{2} + \dot{\alpha}^{2}\cos^{2}\beta}} & \cos\beta\cos\alpha & \dfrac{\dot{\beta}\sin\alpha - \dot{\alpha}\cos\beta\sin\beta\cos\alpha}{\sqrt{\dot{\beta}^{2} + \dot{\alpha}^{2}\cos^{2}\beta}} \\[2ex] \dfrac{\dot{\beta}\cos\beta}{\sqrt{\dot{\beta}^{2} + \dot{\alpha}^{2}\cos^{2}\beta}} & \sin\beta & \dfrac{\dot{\alpha}\cos^{2}\beta}{\sqrt{\dot{\beta}^{2} + \dot{\alpha}^{2}\cos^{2}\beta}} \end{bmatrix} \qquad (4)$$
where $\dot{\alpha}$ and $\dot{\beta}$ are the derivatives of $\alpha$ and $\beta$, respectively.
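For concreteness, the construction above can be sketched numerically. The following minimal NumPy sketch (function names are ours, for illustration) builds the axes of Equations (1)-(3) from the closed-form derivative of the LOS vector and stacks them into the matrix of Equation (4):

```python
import numpy as np

def imaging_axes(alpha, beta, alpha_dot, beta_dot):
    """Axes of the imaging coordinate system T_{n+1} from the LOS azimuth
    and elevation angles and their rates (Equations (1)-(3)); radians."""
    # Range axis: the LOS vector of Equation (1).
    j = np.array([np.cos(beta) * np.sin(alpha),
                  np.cos(beta) * np.cos(alpha),
                  np.sin(beta)])
    # Doppler axis: the normalized derivative of the LOS vector (Equation (2)).
    dj = np.array([-beta_dot * np.sin(beta) * np.sin(alpha) + alpha_dot * np.cos(beta) * np.cos(alpha),
                   -beta_dot * np.sin(beta) * np.cos(alpha) - alpha_dot * np.cos(beta) * np.sin(alpha),
                   beta_dot * np.cos(beta)])
    i = dj / np.linalg.norm(dj)
    # IPP normal: cross-product of the Doppler and range axes (Equation (3)).
    k = np.cross(i, j)
    return i, j, k

def rotation_matrix(alpha, beta, alpha_dot, beta_dot):
    """Transformation matrix of Equation (4), with (i, j, k) as its columns."""
    i, j, k = imaging_axes(alpha, beta, alpha_dot, beta_dot)
    return np.column_stack([i, j, k])
```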
Suppose $P$ is an arbitrary scatterer on the target. Its coordinates are $\mathbf{p}_n = [x_n, y_n, z_n]^T$ in the imaging coordinate system $T_n$ and $\mathbf{p}_{n+1} = [x_{n+1}, y_{n+1}, z_{n+1}]^T$ in the imaging coordinate system $T_{n+1}$. Based on the analysis above, the coordinate transformation relationship for the scatterer $P$ between these two imaging systems is $\mathbf{p}_n = \mathbf{R}_{t_{n+1}}^{t_n}\,\mathbf{p}_{n+1}$.
In ISAR imaging, target motion compensation techniques, such as envelope alignment and phase focusing, are employed to compensate for target motion and enhance image quality. However, the self-focusing procedure in translational motion compensation causes misalignment of ISAR images. We denote the offsets induced in ISAR processing as $\mathbf{q} = [\Delta x, \Delta y, 0]^T$, where $\Delta x$ and $\Delta y$ represent the shifts in the cross-range and range dimensions of the image, respectively. The registration model can be represented as follows:
$$\begin{bmatrix} x_n \\ y_n \end{bmatrix} = \begin{bmatrix} \dfrac{-\dot{\beta}\sin\beta\sin\alpha + \dot{\alpha}\cos\beta\cos\alpha}{\sqrt{\dot{\beta}^{2} + \dot{\alpha}^{2}\cos^{2}\beta}} & \cos\beta\sin\alpha & \dfrac{-\dot{\beta}\cos\alpha - \dot{\alpha}\cos\beta\sin\beta\sin\alpha}{\sqrt{\dot{\beta}^{2} + \dot{\alpha}^{2}\cos^{2}\beta}} \\[2ex] \dfrac{-\dot{\beta}\sin\beta\cos\alpha - \dot{\alpha}\cos\beta\sin\alpha}{\sqrt{\dot{\beta}^{2} + \dot{\alpha}^{2}\cos^{2}\beta}} & \cos\beta\cos\alpha & \dfrac{\dot{\beta}\sin\alpha - \dot{\alpha}\cos\beta\sin\beta\cos\alpha}{\sqrt{\dot{\beta}^{2} + \dot{\alpha}^{2}\cos^{2}\beta}} \end{bmatrix} \begin{bmatrix} x_{n+1} \\ y_{n+1} \\ z_{n+1} \end{bmatrix} + \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} \qquad (5)$$
From Equation (5), it can be noted that the registration model is related to $\alpha$ and $\beta$. $\alpha$ is always the primary rotational component of the LOS, providing the foundation for achieving high resolution in the azimuth direction. In contrast, $\beta$ represents the spatial variation of the IPP, resulting in a height-dependent displacement.
Let $z_{n+1}^{\max}$ denote the target's maximum height. The cross-range displacement $\Delta x_z$ satisfies $\Delta x_z \approx z_{n+1}\tan\beta\sin\alpha \le \beta z_{n+1}^{\max}$, and the range displacement $\Delta y_z$ satisfies $\Delta y_z \approx z_{n+1}\sin\beta\cos\alpha \le \beta z_{n+1}^{\max}$. According to [35], when the coordinate error is less than one-eighth of a resolution unit, its impact on image correlation and the interferometric phase can be neglected. Thus, the constraint can be derived as $\beta \le 0.125\,\min(\rho_r, \rho_a)/z_{n+1}^{\max}$, where $\rho_r$ and $\rho_a$ are the range and cross-range resolutions, respectively. Assuming $z_{n+1}^{\max}$ is 10 m, with a bandwidth of 1 GHz and a resolution of 0.15 m, the permissible angular deviation is $\beta \le 0.1074°$.
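As a quick numeric check of this bound (a minimal sketch; variable names are ours):

```python
import numpy as np

# Bound from above: beta <= 0.125 * min(rho_r, rho_a) / z_max.
rho_r = rho_a = 0.15  # range / cross-range resolution [m] (1 GHz bandwidth)
z_max = 10.0          # maximum target height z_{n+1}^max [m]
beta_max = 0.125 * min(rho_r, rho_a) / z_max  # bound in radians
print(np.degrees(beta_max))                   # ~0.1074 deg, as quoted above
```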
For slow-rotating space targets, changes in IPP occur more gradually and do not shift rapidly over short periods. By limiting the interval between arc segments, the variation in IPP can be ignored. Section 4.1 provides a detailed analysis of α and β . Both α and β are influenced by various factors, such as the satellite’s orbital parameters, the radar station’s position, and the satellite’s attitude. Notably, α is much larger than β , particularly when the satellite is passing through the zenith. When the interval between two arcs is less than 10 s, the influence of β and β ˙ on image registration can be ignored.
Based on the previous analysis, it can be assumed that $\cos\beta \approx 1$, $\sin\beta \approx \beta$, and $\dot{\beta} \approx 0$. Therefore, the mathematical registration model between two ISAR images from adjacent imaging arcs can be simplified as a rigid transformation that includes both translation and rotation. This can be expressed as
$$\begin{bmatrix} x_n \\ y_n \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\alpha & \sin\alpha & \Delta x \\ -\sin\alpha & \cos\alpha & \Delta y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_{n+1} \\ y_{n+1} \\ 1 \end{bmatrix} \qquad (6)$$
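In code, the rigid model of Equation (6) and its application to scatterer coordinates can be sketched as follows (a minimal NumPy sketch; names are ours):

```python
import numpy as np

def rigid_transform(alpha, dx, dy):
    """Homogeneous rigid transform of Equation (6): rotation by alpha
    (radians) plus translation (dx, dy)."""
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[c,   s,  dx],
                     [-s,  c,  dy],
                     [0., 0., 1.]])

def apply_transform(H, points):
    """Map (x_{n+1}, y_{n+1}) coordinates into frame n.
    points: (K, 2) array; returns a (K, 2) array."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    return (H @ homo.T).T[:, :2]
```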
Long-interval image registration is ultimately achieved by sequentially registering consecutive images.

3. Method

ISAR image registration is the process of overlaying different ISAR images obtained from different viewpoints [11]. The principle of image registration involves extracting and matching representative features of ISAR images to establish mapping parameters between these images. In this section, a feature-based image registration method is proposed that exploits the concordance of the spatial structures of multiple scatterers across ISAR images.

3.1. Feature Detection

Unlike optical images, ISAR images are not composed of color blocks. Instead, they are a set of slightly blurred 2D sinc functions, the blur arising from system errors and imperfections in ISAR imaging processing. Assume that the radar transmits a broadband signal with bandwidth $B$ and center frequency $f_0$, and let $T_{obs}$ be the coherent processing time under the n-th viewing. After imaging processing, the obtained ISAR image can be expressed as:
$$I_n(\hat{t}, f_d) \approx \sum_{i=1}^{K} \rho_i B T_{obs}\, \mathrm{sinc}\!\left[B\!\left(\hat{t} - \frac{2 r_n^i}{c}\right)\right] \mathrm{sinc}\!\left[T_{obs}\!\left(f_d - f_n^i\right)\right] \exp\!\left(j\,\frac{4\pi f_0}{c}\,\mathbf{j}_n^{T}\mathbf{p}_i\right) \qquad (7)$$
where $\hat{t}$ is the fast time, $f_d$ is the Doppler frequency, $K$ represents the number of scatterers on the target, $\rho_i$ is the scattering coefficient, $c$ is the speed of light, $r_n^i$ and $f_n^i$ denote the 2D location of scatterer $P_i$ in the ISAR image, and $\mathbf{p}_i$ is its 3D coordinate in the body coordinate system. For in-orbit space targets, the cumulative rotation angle can be estimated using high-precision orbital measurements and attitude information. This ensures that two adjacent ISAR images have similar resolution. After cross-range scaling, the images are further adjusted to the same size through scaling and cropping.
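To illustrate Equation (7), a simulated magnitude image can be synthesized as a sum of 2D sinc responses (a minimal sketch; the phase term of Equation (7) is omitted because only the magnitude image is formed, and NumPy's normalized sinc is assumed to match the sinc convention here):

```python
import numpy as np

def simulate_isar_image(locs, rho, B, T_obs, t_hat, f_d):
    """Idealized ISAR magnitude image per Equation (7): each scatterer
    contributes a 2D sinc at its (fast-time, Doppler) position.
    locs: (K, 2) array of (r_i [m], f_i [Hz]); rho: (K,) scattering coefficients;
    t_hat, f_d: 1D fast-time [s] and Doppler [Hz] axes."""
    c = 3e8
    T, F = np.meshgrid(t_hat, f_d, indexing="ij")
    img = np.zeros_like(T)
    for (r_i, f_i), rho_i in zip(locs, rho):
        # np.sinc is the normalized sinc, sin(pi x) / (pi x).
        img += rho_i * B * T_obs * np.sinc(B * (T - 2 * r_i / c)) * np.sinc(T_obs * (F - f_i))
    return np.abs(img)
```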
Traditional salient structures and features, such as prominent regions, lines, region corners, line intersections, and points on curves with high curvature, used for optical image registration, are based on variations in color regions of images. In ISAR imaging, many edges form between dominant scatterers and the background. However, these edges are unstable due to noise, defocusing, and sidelobe interference from adjacent scatterers. Consequently, the performance of methods that rely on edge inflection points or corners as FPs degrades for ISAR image registration. This instability worsens when scatterers are sparsely distributed within well-focused ISAR images. Therefore, it is more appropriate to use the peak information of scatterers rather than their boundary or gradient information for ISAR image registration. In addition to amplitude and phase, the spatial structure of dominant scatterers is a key identifying feature in ISAR images. This paper extracts scatterers with high SNR as FPs and analyzes their relative distribution to construct a feature descriptor.

3.2. Spatial Feature Descriptor

A feature descriptor is a vector representation of an image feature, encapsulating essential information about the feature’s characteristics in a format useful for further processing or analysis. When a reference scatterer is selected for each set of FPs, each FP relative to this reference FP can be modeled as a distance vector. Considering counterclockwise as the positive direction, the angle-distance distribution of these distance vectors can describe the spatial relationships of these FPs.
Figure 2 illustrates a schematic diagram of the reference and sensed images used in image registration. The sets of FPs in the reference and sensed images are denoted as $A = \{A_1, A_2, A_3, \ldots, A_{N-1}, A_N\}$ and $B = \{B_1, B_2, B_3, \ldots, B_{M-1}, B_M\}$, where $N$ and $M$ correspond to the numbers of dominant scatterers extracted from each image.
Let FP $A_1$ be the reference. We denote by $\mathbf{H}_{ori}(A_1)$ the vector of distance vectors of the FPs relative to the reference $A_1$ in the set:
$$\mathbf{H}_{ori}(A_1) = \left[\overrightarrow{A_1A_1}, \overrightarrow{A_1A_2}, \ldots, \overrightarrow{A_1A_i}, \ldots, \overrightarrow{A_1A_N}\right] = \left[0,\; r_{A_1}^{2}e^{j\zeta_{A_1}^{2}}, \ldots,\; r_{A_1}^{i}e^{j\zeta_{A_1}^{i}}, \ldots,\; r_{A_1}^{N}e^{j\zeta_{A_1}^{N}}\right] \qquad (8)$$
where $r_{A_1}^{i}$ denotes the modulus of the vector $\overrightarrow{A_1A_i}$ and $\zeta_{A_1}^{i}$ denotes its angle.
FP $A_1$ and FP $B_8$ are the projections of the same scatterer onto the IPPs of the reference and sensed images, respectively. Similarly, the distance vectors of the FP set relative to the reference $B_8$ in the sensed image are
$$\mathbf{H}_{ori}(B_8) = \left[\overrightarrow{B_8B_1}, \overrightarrow{B_8B_2}, \ldots, \overrightarrow{B_8B_i}, \ldots, \overrightarrow{B_8B_M}\right] = \left[r_{B_8}^{1}e^{j\zeta_{B_8}^{1}},\; r_{B_8}^{2}e^{j\zeta_{B_8}^{2}}, \ldots,\; r_{B_8}^{i}e^{j\zeta_{B_8}^{i}}, \ldots,\; r_{B_8}^{M}e^{j\zeta_{B_8}^{M}}\right] \qquad (9)$$
where $r_{B_8}^{i}$ denotes the modulus of the vector $\overrightarrow{B_8B_i}$, and $\zeta_{B_8}^{i}$ denotes its angle.
The indices of the same scatterer in the two sets can differ significantly, and even the total numbers of FPs may not be the same. Elements with the same index in $\mathbf{H}_{ori}(A_1)$ and $\mathbf{H}_{ori}(B_8)$ may be different, namely $r_{A_1}^{i}e^{j\zeta_{A_1}^{i}} \ne r_{B_8}^{i}e^{j\zeta_{B_8}^{i}}$. Therefore, computing the conjugate cross-correlation of $\mathbf{H}_{ori}(A_1)$ and $\mathbf{H}_{ori}(B_8)$ directly may lead to failure in feature matching. To resolve this problem, the distance vectors of the FPs should be sorted and padded before feature matching; then we have
$$\mathbf{H}(A_1) = P_{sz}\!\left[\underset{\zeta_{A_1}^{i}}{\mathrm{sort}}\left(\mathbf{H}_{ori}(A_1)\right)\right] \qquad (10)$$
$$\mathbf{H}(B_8) = P_{sz}\!\left[\underset{\zeta_{B_8}^{i}}{\mathrm{sort}}\left(\mathbf{H}_{ori}(B_8)\right)\right] \qquad (11)$$
where $\mathrm{sort}(\cdot)$ is the sorting function that rearranges the distance vectors of the FPs in increasing order of the vector angle, $\zeta_{A_1}^{i}$ or $\zeta_{B_8}^{i}$, and $P_{sz}(\cdot)$ denotes scatterer superposition and zero-padding in the generation of $\mathbf{H}(A_1)$ and $\mathbf{H}(B_8)$. After this process, scatterers with the same angle have the same index in $\mathbf{H}(A_1)$ and $\mathbf{H}(B_8)$, enabling the use of cross-correlations of $\mathbf{H}(A_1)$ and $\mathbf{H}(B_8)$ to describe their similarity. In practical applications, the correlation of such discrete features is highly sensitive to angular errors. Therefore, an improved continuous feature is considered.
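A minimal sketch of the vector construction and angular sorting of Equations (8)-(11) is given below (names are ours; the explicit superposition and zero-padding step $P_{sz}$ is omitted, since the continuous descriptor introduced next removes the need for index alignment):

```python
import numpy as np

def distance_vectors(fps, ref_idx):
    """Distance vectors of all FPs relative to a reference FP (Equation (8)),
    returned sorted by angle as in Equations (10)-(11).
    fps: (N, 2) array of scatterer coordinates; returns (moduli, angles)."""
    d = np.delete(fps, ref_idx, axis=0) - fps[ref_idx]  # drop the zero self-vector
    r = np.hypot(d[:, 0], d[:, 1])                      # moduli r
    zeta = np.arctan2(d[:, 1], d[:, 0])                 # angles zeta in (-pi, pi]
    order = np.argsort(zeta)                            # counterclockwise ordering
    return r[order], zeta[order]
```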
Noise affects the estimation accuracy of the scatterer coordinates, which subsequently impacts the angular precision of the corresponding distance vectors of the FPs. According to [36], the coordinate errors in range and cross-range are independent Gaussian random variables. Under the small-error approximation, the angular error is a linear combination of the coordinate errors and can be approximated as Gaussian noise. Therefore, the angle value in the feature descriptor can be replaced with a parameterized Gaussian function to achieve an angular-error-tolerant feature. Finally, the continuous feature descriptor of the scatterer $A_1$ is expressed as the accumulation of $N-1$ Gaussian functions:
$$H_{A_1}^{G}(\zeta) = \sum_{i=1}^{N-1} \frac{1}{\sigma_{A_1}^{i}\sqrt{2\pi}}\, e^{-\frac{\left(\zeta - \zeta_{A_1}^{i}\right)^{2}}{2\left(\sigma_{A_1}^{i}\right)^{2}}} \qquad (12)$$
where the variable $\zeta$ ranges from $-\pi$ to $\pi$, $\zeta_{A_1}^{i}$ is the mean of the Gaussian function, and $\sigma_{A_1}^{i}$ is the standard deviation, calculated from the SNR and the modulus of the distance vectors of the FPs. According to [36], the 2D coordinate errors between two scatterers in the ISAR image are expressed as
$$\Delta x_{A_1}^{i} = \sqrt{\frac{1}{SNR_{A_1}} + \frac{1}{SNR_{A_i}}}\;\frac{\sqrt{3}}{\pi}\,\rho_a, \qquad \Delta y_{A_1}^{i} = \sqrt{\frac{1}{SNR_{A_1}} + \frac{1}{SNR_{A_i}}}\;\frac{\sqrt{3}}{\pi}\,\rho_r \qquad (13)$$
where $\Delta x_{A_1}^{i}$ and $\Delta y_{A_1}^{i}$ denote the cross-range error and range error, respectively, and $SNR_{A_1}$ and $SNR_{A_i}$ represent the peak SNRs of scatterers $A_1$ and $A_i$.
Figure 3 shows the schematic diagram of the angle error $\Delta\zeta_{A_1}^{i}$. The standard deviation of $\zeta_{A_1}^{i}$ can be derived as
$$\sigma_{A_1}^{i} = f\!\left(SNR_{A_i},\, SNR_{A_1},\, r_{A_1}^{i}\right) = \arcsin\!\left(\sqrt{\left(\Delta x_{A_1}^{i}\right)^{2} + \left(\Delta y_{A_1}^{i}\right)^{2}}\;\Big/\; r_{A_1}^{i}\right) \qquad (14)$$
We present an example that utilizes this new feature descriptor. We assume that each line segment in Figure 2 is 5 m long. The lengths and angles of each vector from $A_1$ to the other points are listed in Table 1. Given that the SNR of each scatterer is 10 dB and the resolutions $\rho_r$ and $\rho_a$ are both 1 m, we can calculate the standard deviation $\sigma_{A_1}^{i}$ of the angle $\zeta_{A_1}^{i}$ using Equations (13) and (14). The feature of scatterer $A_1$ is then obtained using Equation (12). Figure 4 illustrates the feature curve of scatterer $A_1$ along with its 10 components. It is worth noting that the modulus of the distance vector and the amplitude of the FPs are incorporated into the feature descriptor through $\sigma_{A_1}^{i}$. As shown, a longer modulus corresponds to a taller and narrower Gaussian waveform, while a shorter modulus results in a shorter and wider Gaussian waveform.
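The continuous descriptor of Equations (12)-(14) can be sketched as follows (a minimal sketch under the reconstructed formulas above; linear, not dB, SNR values are assumed, and the Gaussians are wrapped across the ±π seam):

```python
import numpy as np

def sigma_angle(snr_ref, snr_i, r, rho_r=1.0, rho_a=1.0):
    """Angular standard deviation per Equations (13)-(14).
    SNRs are linear (not dB); r is the distance-vector modulus [m]."""
    dx = np.sqrt(1.0 / snr_ref + 1.0 / snr_i) * np.sqrt(3.0) / np.pi * rho_a
    dy = np.sqrt(1.0 / snr_ref + 1.0 / snr_i) * np.sqrt(3.0) / np.pi * rho_r
    return np.arcsin(np.hypot(dx, dy) / r)

def gaussian_descriptor(zetas, sigmas, n_bins=720):
    """Continuous feature curve of Equation (12): one Gaussian per distance
    vector, accumulated over the angle axis (-pi, pi]."""
    zeta = np.linspace(-np.pi, np.pi, n_bins, endpoint=False)
    H = np.zeros(n_bins)
    for z_i, s_i in zip(zetas, sigmas):
        # Wrap the angular difference so Gaussians straddle the +/-pi seam.
        d = np.angle(np.exp(1j * (zeta - z_i)))
        H += np.exp(-d**2 / (2.0 * s_i**2)) / (s_i * np.sqrt(2.0 * np.pi))
    return H
```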
In Figure 5, the feature curve of $A_1$ is compared with those of $B_4$ and $B_8$. It can be observed that the $A_1$ and $B_4$ curves are quite different, whereas the $A_1$ and $B_8$ curves show a stronger correlation after an angle shift, despite some outlier pairings. The proposed method thus projects the two-dimensional structure into a one-dimensional vector and maps the image rotation into a translation of the one-dimensional feature vector.

3.3. Feature Matching

After obtaining the feature descriptors of the FPs, the next step is feature matching. Image rotation causes a shift in the feature curves. Therefore, this paper employs the circular cross-correlation method for feature matching [27,37]. The circular cross-correlation of scatterer $A_i$ and scatterer $B_j$ is
$$\chi_{A_iB_j}(\zeta) = \int_{-\pi}^{\pi} H_{A_i}^{G}(\varphi)\, H_{B_j}^{G}(\varphi - \zeta)\, d\varphi \qquad (15)$$
Performing circular cross-correlation on all the FPs yields a cross-correlation matrix $\boldsymbol{\gamma}_{N\times M}$:
$$\boldsymbol{\gamma}_{N\times M} = \begin{bmatrix} \gamma_{11} & \gamma_{12} & \cdots & \gamma_{1M} \\ \gamma_{21} & \gamma_{22} & \cdots & \gamma_{2M} \\ \vdots & \vdots & \ddots & \vdots \\ \gamma_{N1} & \gamma_{N2} & \cdots & \gamma_{NM} \end{bmatrix} \qquad (16)$$
where $\gamma_{ij} = \max(\chi_{A_iB_j})$. In this circular cross-correlation matrix, the row index corresponds to the scatterer index in the reference image, and the column index to the scatterer index in the sensed image. Furthermore, the circular cross-correlation can be computed via the FFT, which greatly improves computational efficiency.
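A minimal FFT-based sketch of Equations (15) and (16) is given below (names are ours). The row-wise matching of Equation (17) then reduces to `gamma.argmax(axis=1)`:

```python
import numpy as np

def circular_xcorr(Ha, Hb):
    """Circular cross-correlation of two feature curves (Equation (15)),
    computed via the FFT; returns its maximum and the shift (in bins)."""
    xcorr = np.real(np.fft.ifft(np.fft.fft(Ha) * np.conj(np.fft.fft(Hb))))
    return xcorr.max(), int(np.argmax(xcorr))

def correlation_matrix(curves_a, curves_b):
    """gamma_{N x M} of Equation (16): entry (i, j) holds the maximum circular
    cross-correlation of the i-th reference and j-th sensed feature curves."""
    N, M = len(curves_a), len(curves_b)
    gamma = np.zeros((N, M))
    shift = np.zeros((N, M), dtype=int)
    for i in range(N):
        for j in range(M):
            gamma[i, j], shift[i, j] = circular_xcorr(curves_a[i], curves_b[j])
    return gamma, shift
```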
For scatterer index $i$ in the reference image, the matching scatterer index $\hat{j}$ in the sensed image can be obtained by searching for the maximum value in the $i$-th row. Letting $\mathrm{argmax}(\cdot)$ stand for the argument of the maxima, this can be expressed as
$$\hat{j} = \underset{j}{\mathrm{argmax}}\; \gamma_{ij}, \qquad j = 1, 2, \ldots, M \qquad (17)$$
Feature matching is typically achieved by searching for the maximum value in each row. However, false matches are inevitable in image feature matching due to factors such as symmetrical structures, repetitive patterns, and noise. Additionally, viewpoint changes and occlusion contribute to these errors. Estimating registration parameters accurately is challenging in the presence of outliers because they introduce noise and bias, leading to incorrect estimates. To ensure precise registration, it is crucial to identify and remove outliers before estimating the registration parameters.

3.4. Two-Step Transform Model Estimation Method

In this subsection, we provide a two-step transform model estimation process, which includes outlier detection, removal, and parameter estimation.
The first step uses a statistical method for outlier filtering. Since the angular shift that corresponds to the maximum circular cross-correlation is close to the image’s rotation angle, a histogram analysis is applied to the angle shifts of all matching pairs. This histogram-based method automatically identifies incorrect matches, such as mirror mismatches caused by symmetrical structures. The angle intervals can be set based on a rough estimate of the image rotation. By adjusting the histogram’s central angle, most correct matches can be grouped within one interval, while matches outside this range are considered outliers. However, this method is less effective at excluding redundant adjacent scatterer pairs with similar rotation angles.
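This first step can be sketched as follows (a minimal sketch; the adjustment of the histogram's central angle is omitted for brevity, and names are ours):

```python
import numpy as np

def histogram_filter(shifts_deg, bin_width=20.0):
    """Keep only matched pairs whose angle shift falls in the dominant
    histogram bin; shifts_deg: per-pair angle shifts in degrees."""
    edges = np.arange(-180.0, 180.0 + bin_width, bin_width)
    counts, edges = np.histogram(shifts_deg, bins=edges)
    b = int(np.argmax(counts))
    keep = (shifts_deg >= edges[b]) & (shifts_deg < edges[b + 1])
    rough_angle = float(np.mean(shifts_deg[keep]))  # rough rotation estimate
    return keep, rough_angle
```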
In the second step, a model-based approach identifies adjacent scatterer pairs using the RANSAC algorithm. The RANSAC algorithm fits a model to the data and identifies redundant outliers based on their deviation from the model. The parameters of the transformation model are estimated from the residual inliers.
RANSAC is an iterative method used to estimate parameters of a mathematical model from a set of observed data containing outliers [38]. According to the RANSAC algorithm, a subset of matched scatterer pairs is randomly selected, and the related parameters are estimated by solving the transform model equation. This allows redundant matches and mismatches to be excluded through consistency checks. It iterates through two steps: generating a hypothesis from random samples and verifying it against the data [39]. The most probable hypothesis, supported by the most inlier candidates, is chosen during the iterations. The RANSAC algorithm can be formulated by
$$\left(\Delta\hat{x}, \Delta\hat{y}, \hat{\alpha}\right) = \underset{\Delta x, \Delta y, \alpha}{\mathrm{argmin}} \sum_{i=1}^{N_e} \mathrm{Loss}\!\left[\mathrm{Err}\!\left(A_i, B_{\hat{j}};\, \Delta x, \Delta y, \alpha\right)\right] \qquad (18)$$
where $\Delta x, \Delta y, \alpha$ are the registration parameters defined in Section 2; $\Delta\hat{x}, \Delta\hat{y}, \hat{\alpha}$ are their estimated values; $A_i$ and $B_{\hat{j}}$ are matched scatterer pairs from the reference and sensed images, respectively; $N_e$ represents the number of matched scatterer pairs after the initial outlier exclusion; $\mathrm{argmin}_{\Delta x, \Delta y, \alpha}(\cdot)$ denotes the arguments of the minima; $\mathrm{Err}(\cdot)$ is the error function; and $\mathrm{Loss}(\cdot)$ is the loss function. The explicit definitions of these functions and the parameter estimation procedure are provided below.
Assuming $L$ scatterer pairs are randomly selected, $L$ must be at least three because three parameters need to be estimated. The choice of $L$ should balance model accuracy and algorithm efficiency. By substituting the scatterers' coordinates into the transformation model in Equation (6), we can derive
$$\mathbf{A}_{2L\times 7}\,\mathbf{h}_{7\times 1} = \mathbf{0} \qquad (19)$$
where $\mathbf{h} = \left[\cos\alpha, \sin\alpha, \Delta x, -\sin\alpha, \cos\alpha, \Delta y, 1\right]^T$ is the column vector of the parameters to be solved, $\mathbf{A}$ is a matrix built from the scatterers' coordinates, and its $(2l-1)$-th and $2l$-th rows are
$$\mathbf{A}_{[2l-1,:\,;\;2l,:]} = \begin{bmatrix} x_{n+1}^{l} & y_{n+1}^{l} & 1 & 0 & 0 & 0 & -x_{n}^{l} \\ 0 & 0 & 0 & x_{n+1}^{l} & y_{n+1}^{l} & 1 & -y_{n}^{l} \end{bmatrix} \qquad (20)$$
where $x_n^l$ and $y_n^l$ denote the scatterer's coordinates in the $n$-th ISAR image, while $x_{n+1}^l$ and $y_{n+1}^l$ denote the coordinates of the corresponding scatterer in the $(n+1)$-th ISAR image.
Through singular value decomposition (SVD) of the matrix $\mathbf{A}$, we obtain $\mathbf{A}_{2L\times 7} = \mathbf{U}_{2L\times 2L}\boldsymbol{\Sigma}_{2L\times 7}\mathbf{V}_{7\times 7}^{T}$. The right singular vector $\mathbf{v}_{\min}$ corresponding to the smallest singular value is the non-zero solution that minimizes $\|\mathbf{A}\mathbf{h}\|$ [40]. Thus, $\mathbf{h} \propto \mathbf{v}_{\min}$. The mapping function parameters between different ISAR images can then be obtained as follows:
$$\Delta\hat{x} = \mathbf{v}_{\min}(3), \qquad \Delta\hat{y} = \mathbf{v}_{\min}(6), \qquad \hat{\alpha} = \arctan\!\left(\mathbf{v}_{\min}(2)\,/\,\mathbf{v}_{\min}(1)\right) \qquad (21)$$
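A minimal sketch of this SVD-based solve, following the sign conventions of Equations (19)-(21) as reconstructed above (the normalization by the last element of $\mathbf{v}_{\min}$ is a practical detail we add so that $\mathbf{h}$ ends in 1):

```python
import numpy as np

def estimate_parameters(src, dst):
    """Solve A h = 0 (Equations (19)-(21)) for (dx, dy, alpha) from L >= 3
    matched pairs. src: (L, 2) sensed coordinates (frame n+1); dst: (L, 2)
    reference coordinates (frame n)."""
    L = src.shape[0]
    A = np.zeros((2 * L, 7))
    for l in range(L):
        x1, y1 = src[l]
        x0, y0 = dst[l]
        A[2 * l]     = [x1, y1, 1.0, 0.0, 0.0, 0.0, -x0]
        A[2 * l + 1] = [0.0, 0.0, 0.0, x1, y1, 1.0, -y0]
    _, _, Vt = np.linalg.svd(A)
    v = Vt[-1]       # right singular vector of the smallest singular value
    v = v / v[6]     # scale so the last element of h equals 1
    return v[2], v[5], np.arctan2(v[1], v[0])  # dx, dy, alpha
```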
The coordinates of the scatterers in the sensed image can be recalculated from the estimate obtained with this randomly selected subset of matched scatterer pairs. Let $\mathbf{H}$ represent the estimated transformation matrix of Equation (6); then the new coordinates of scatterer $\tilde{B}_{\hat{j}}$ are obtained as $\tilde{B}_{\hat{j}} = \mathbf{H} B_{\hat{j}}$. The Euclidean distance between scatterers is used as the error function $\mathrm{Err}(\cdot)$, and the distance error $e$ is expressed as
$$e = \mathrm{Err}\!\left(A_i, \tilde{B}_{\hat{j}}\right) = \sqrt{\left(x_{n}^{A_i} - x_{n+1}^{\tilde{B}_{\hat{j}}}\right)^{2} + \left(y_{n}^{A_i} - y_{n+1}^{\tilde{B}_{\hat{j}}}\right)^{2}} \qquad (22)$$
where $x_n^{A_i}$ and $y_n^{A_i}$ denote the coordinates of scatterer $A_i$, and $x_{n+1}^{\tilde{B}_{\hat{j}}}$ and $y_{n+1}^{\tilde{B}_{\hat{j}}}$ denote the coordinates of scatterer $\tilde{B}_{\hat{j}}$.
A matched scatterer pair is considered an inlier candidate when its error $e$ is within a predefined threshold, formulated as
$$\mathrm{Loss}(e) = \begin{cases} 0, & e < TH \\ 1, & \text{otherwise} \end{cases} \qquad (23)$$
This threshold, $TH$, depends on the image resolution and SNR and can be set to $3\sigma_{dis}$ or $5\sigma_{dis}$, where $\sigma_{dis}$ is the standard deviation of the distance between the same scatterer in the two images under ideal registration; it can be calculated from Equation (13).
Once the outliers are identified, they are eliminated from the dataset before estimating the registration parameters. The remaining scatterers are taken as inliers and used to estimate the registration parameters via least squares estimation (LSE).
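Putting the pieces together, the RANSAC loop of Equation (18) with the error and loss of Equations (22) and (23) can be sketched as follows (reusing `estimate_parameters`, `rigid_transform`, and `apply_transform` from the earlier sketches; a final refit over the consensus set stands in for the LSE step):

```python
import numpy as np

def ransac_register(src, dst, n_iter=10000, subset=4, th=0.5, seed=None):
    """RANSAC loop of Equation (18): fit the rigid model to random subsets,
    score pairs with the distance error of Equation (22) and the 0/1 loss of
    Equation (23), and keep the largest consensus set. th is TH (e.g., 3*sigma_dis)."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=subset, replace=False)
        dx, dy, alpha = estimate_parameters(src[idx], dst[idx])
        pred = apply_transform(rigid_transform(alpha, dx, dy), src)
        e = np.hypot(*(pred - dst).T)   # Euclidean error per pair, Equation (22)
        inliers = e < th                # 0/1 loss of Equation (23)
        if inliers.sum() > best.sum():
            best = inliers
    # Final refit over the consensus set (stands in for the LSE step).
    return estimate_parameters(src[best], dst[best]), best
```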

3.5. Flowchart of the Proposed Method

The entire procedure of the proposed ISAR image registration method can be summarized as feature detection, feature matching, transform model estimation, and image transformation. The flowchart is shown in Figure 6.
The specific processing steps are as follows.
Step 1: Analyze the noise power in the noise region to set the threshold for scatterer extraction, and extract dominant scatterers as FPs in each image using the CLEAN technique.
Step 2: Calculate the distance vectors between any two scatterers within the image and construct the feature descriptor for each FP based on the spatial structure.
Step 3: Calculate the circular cross-correlation matrix and the angle shifts of all matching scatterer pairs via the FFT, and then obtain the feature matching results.
Step 4: Discard falsely matched pairs via a consistency check on the estimated rotation angle.
Step 5: Discard redundant adjacent outliers and estimate the parameters of the transform model by RANSAC.
Step 6: Register the images based on the estimated parameters.

4. Experiments and Results

In this section, several experiments based on both simulated and measured ISAR images are conducted to validate the effectiveness of the proposed method. Additionally, the well-known feature-based registration algorithms SIFT, SURF, and SIFT+SURF are compared with the proposed method. The real ISAR images used in this section are sourced from a report published by the German FGAN Lab in 2012 (@Fraunhofer FHR) [21]. This report provides an ISAR image sequence of the US space shuttle, obtained with the TIRA system after long-arc imaging.

4.1. Azimuth and Elevation Angles of LOS

In this subsection, the azimuth and elevation angles for different visible passes, imaging times, and time intervals are simulated. In this simulation, the radar station is set in Beijing (39.9° N, 116.4° E, 88 m), and the satellite orbit is that of the TIANHE space station. The two-line elements (TLE) of the TIANHE space station are listed in Table 2. The observation time for the first visible pass is from 2:22:07 to 2:32:31 on 1 March 2023, and for the second visible pass, from 3:59:00 to 4:09:26 on 1 March 2023.
Figure 7a,b illustrate the variations in the azimuth and elevation angles, while Figure 7c,d depict the derivatives of these angles. According to the definition in [9], the center moment of the coherent processing interval is defined as the imaging moment. In Figure 7, the time values on the horizontal axis represent the imaging moment of the reference image, while the imaging moment of the sensed image is offset by a time interval $\Delta t$ with respect to the reference moment. During the observation, the LOS vectors sweep out a curved surface in the target's body coordinate system, which expands more in the horizontal plane than in the vertical direction. As a result, the azimuth angle variation is significantly larger than the elevation angle variation, particularly near the zenith-passing time. When the arc segment interval is less than 10 s, with $\alpha$ less than 10° and $\beta$ less than 0.05°, and a resolution of 0.15 m, the displacement for a target at a height of 10 m is 0.0086 m. This displacement, being less than one-tenth of a resolution unit, is negligible.

4.2. Experiment with Simulated Images

This section presents the experimental results based on simulated ISAR images. The scatterer model contains 80 dominant scatterers, extracted from a real ISAR image of the US space shuttle shown in Figure 8a. A 20% outlier rate between the reference image and the sensed image is assumed to simulate phenomena such as angle glint and occlusion effects in ISAR imaging. Accordingly, the scatterers are randomly divided into three groups: set1 (54 scatterers), set2 (13 scatterers), and set3 (13 scatterers), as shown in Figure 8a. The scatterers in set1 and set2 are used to generate the reference image shown in Figure 8b, and the scatterers in set1 and set3 are used to generate the sensed image shown in Figure 8c. The sensed image has been affine-transformed according to the parameters listed in Table 3.
The proposed image registration algorithm is validated using the sensed and reference images from Figure 8. Complex Gaussian noise is added to each image to simulate a noise environment with a 24 dB SNR. After extracting the FPs, the feature descriptor of each point is constructed according to Equation (12), and the circular cross-correlation of the feature descriptors is calculated between every reference FP and every sensed FP.
Figure 9a shows the maximum circular cross-correlation matrix, where the brightness of each cell indicates the cross-correlation of two feature curves, reflecting the reliability of the matching. A few bright points are sparsely distributed in the matrix; these can be taken as candidates for matched scatterer pairs. Scatterer pairs are then obtained by searching for the maximum value row by row and are taken as the coarse matches before the consistency check. The histogram of the feature-curve shifts for the matched pairs is shown in Figure 9b, with an interval of 20° between adjacent bins. A tall column in the histogram is contributed by the correct matches and adjacent matches, while the scattered small values correspond to false matches.
Figure 10 shows the feature matching results: red lines mark the mismatched pairs identified from the histogram, green lines mark the redundant adjacent matches identified during the RANSAC process, and yellow lines mark the correctly matched pairs used to estimate the mapping parameters. After outlier detection, there are 8 false matches, 5 redundant adjacent matches, and 54 correct matches.
Figure 11a depicts the superposition of the reference image and the sensed image before registration, which shows an obvious displacement. Figure 11b depicts the superposition after registration, where the two images overlap perfectly. Furthermore, the estimation results listed in Table 3 are compared with those of classical image registration methods: the SIFT, SURF, and SIFT+SURF algorithms. To ensure the objectivity and fairness of the various experiments, all four methods used the RANSAC algorithm during the parameter estimation phase, randomly selecting four scatterer pairs per iteration with a total of 10,000 iterations. It can be seen that the estimation error of the proposed method is much smaller.

4.3. Experiment with Real Images

In this subsection, two different real ISAR images of the US space shuttle are used to test the effectiveness of the proposed method. The reference and sensed images are shown in Figure 12a,b, respectively. We extracted the 200 strongest dominant scatterers, ordered by amplitude, from each ISAR image.
The matched pairs are shown in Figure 13. There are 65 pairs of false matches marked with red lines, 112 pairs of redundant adjacent matches marked with green lines, and 23 pairs of best matches marked with yellow lines.
The sensed image is transformed based on the estimated parameters, and the ISAR images before and after registration are compared in Figure 14. Figure 14a shows the superposition of the two ISAR images before registration, and Figure 14b shows the registration results. It can be observed that the main body is perfectly overlapped, although the tails caused by secondary reflection are displaced. Finally, the proposed method is compared with the SIFT, SURF, and SIFT+SURF methods. The estimation errors and the normalized image correlation of the two images after registration are listed in Table 4. The filtered image, with its tail removed, ensures that the image correlation results are not affected by the secondary reflections in the ISAR images.

4.4. Robustness Analysis

To further evaluate the robustness of the proposed image registration algorithm, we conducted 1000 Monte Carlo simulations under varying SNR levels and different outlier ratios. In the robustness-versus-SNR experiments, noise was added to the original ISAR image, and the SNR was adjusted from 10 dB to 30 dB in 2 dB increments, using the scatterer model and mapping parameters described in Section 4.2. The mapping parameter estimation results and the image registration success rates of the four methods were compared.
Successful image registration is evaluated based on a preset estimation error threshold. The threshold for range and cross-range shifts is set to half a resolution unit, while the estimation error for the rotation angle is set at 1.6°, ensuring that the shift of the farthest scatterer remains within half a resolution unit. Figure 15 shows the success probabilities of the four image registration methods at different SNR levels. The results indicate that the proposed method achieves the highest success rate under low SNR conditions, whereas the SIFT method has the lowest success rate. The mean error (ME) and root mean squared error (RMSE) for range shift, cross-range shift, and rotation are depicted in Figure 16. Although the success rates of the SURF and SIFT + SURF methods are comparable to the proposed method when the SNR exceeds 18 dB, Figure 16 shows that the proposed method provides higher accuracy in parameter estimation, as indicated by the lower mean and variance of the registration parameters. The presence of noise reduces the number of detectable scatterers, introduces more outliers, and increases the localization error, consequently lowering both registration accuracy and method stability.
Similarly, the impact of outliers is analyzed using 1000 Monte Carlo simulations at 24 dB SNR. The ratio of outliers is varied from 0% to 50% in steps of 5%. Figure 17 illustrates the success probabilities of the four registration methods at different outlier ratios. It can be noted that the success rate of the SIFT+SURF method is lower than that of the SURF method when the proportion of outliers is high. Although the combination of SIFT and SURF increases the number of FPs, the mismatch pairs introduced by SIFT make it more difficult for RANSAC to find inliers when the same number of iterations is used, resulting in reduced registration accuracy. Figure 18 depicts the estimation ME and RMSE of the transformation parameters. It can be observed that the proposed method maintains its performance even as the outlier ratio increases to 50%.

4.5. Computational Complexity Analysis

In this subsection, the computational complexity of the proposed method is analyzed. It is evaluated for four core stages: feature point extraction, feature descriptor construction, feature matching, and model parameter estimation. Assuming the image size is $N_x \times N_y$ and the image contains $K_s$ scatterers, the computational complexities of these stages are denoted as $C_d$, $C_f$, $C_m$, and $C_e$, respectively.
(1)
At the feature point extraction stage, the sinc-based CLEAN method is used to extract scatterers. Assuming $Z_s$ is the number of pixels in the segmented area, the computational complexity is approximately $C_d \approx O(K_s N_x N_y + 10 K_s Z_s)$ [41].
(2)
At the feature descriptor construction stage, the computation is primarily spent on calculating the vector angles and magnitudes as well as on the summation of Gaussian functions. Assuming the length of the Gaussian function is $G_s$, the computational complexity is approximately $C_f \approx O(K_s^2 G_s)$.
(3)
Feature matching is implemented using the circular cross-correlation method. Assuming the feature vector length is $F_s$, the computational complexity of conventional time-domain correlation is approximately $C_m \approx O(0.5 K_s^2 F_s^2)$. By applying the FFT for frequency-domain processing, the complexity can be reduced to approximately $C_m \approx O(0.5 K_s^2 F_s + 0.5 K_s F_s \log F_s + 0.25 K_s^2 F_s \log F_s)$.
(4)
Model parameter estimation is performed iteratively. Each iteration involves singular value decomposition of the observation matrix, transformation of the feature point coordinates, and distance loss calculation. Assuming the observation matrix size is $A_m \times A_n$ and the number of scatterer pairs remaining after the consistency check is $K_{s2}$, the computational complexity for $I$ iterations is approximately $C_e \approx O(A_m A_n^2 I + A_m^2 A_n I + 11 K_{s2} I)$.
From the above analysis, it can be concluded that the total computational complexity is approximately $C_T = 2C_d + 2C_f + C_m + C_e$, where the factors of two account for processing both the reference and sensed images.
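As an illustration only, the stage complexities can be tallied under assumed parameter values (ours, not those of Table 5):

```python
import numpy as np

# Assumed parameter values (ours, for illustration; not those of Table 5).
Ks, Nx, Ny = 200, 512, 512   # scatterers, image size
Zs, Gs, Fs = 100, 720, 720   # segmented pixels, Gaussian length, feature length
Am, An, I = 8, 7, 10000      # observation-matrix size, RANSAC iterations
Ks2 = Ks // 2                # pairs surviving the consistency check (assumed)

Cd = Ks * Nx * Ny + 10 * Ks * Zs
Cf = Ks**2 * Gs
Cm = 0.5 * Ks**2 * Fs + 0.5 * Ks * Fs * np.log2(Fs) + 0.25 * Ks**2 * Fs * np.log2(Fs)
Ce = Am * An**2 * I + Am**2 * An * I + 11 * Ks2 * I
CT = 2 * Cd + 2 * Cf + Cm + Ce
print(f"C_T ~ {CT:.3g} operations")
```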
The computational complexities of different methods are compared through simulations. Table 5 lists the relevant parameters and the specific values used in Section 4.3. Table 6 presents the computational complexity formulas for each method at various stages. Figure 19 shows the simulation results of the computational complexity using different methods.
The SIFT method exhibits relatively high computational complexity, primarily because it requires generating a multi-scale Gaussian pyramid during the keypoint extraction stage. The SURF method effectively reduces computational complexity by employing integral images, Hessian matrix approximation, simplified feature descriptors, and accelerated orientation computation, making it the least complex of the four methods. The combination of SIFT and SURF retains relatively high computational complexity. The computational complexity of the proposed method lies between that of SURF and those of SIFT and SIFT+SURF. In the proposed method, owing to the longer feature descriptors and the circular cross-correlation matching, the feature matching stage contributes the second most to the overall computational complexity. However, with the introduction of preliminary outlier filtering, the complexity of the proposed method in the parameter estimation stage is relatively lower than that of the other three methods.

5. Discussion

The research above demonstrated that using the spatial distribution of dominant scatterers is a more effective approach to ISAR image registration. For sparse and discrete ISAR images, the effectiveness of the SIFT and SURF methods decreases. This is because SIFT and SURF identify inflection points, corners, and textured areas as FPs, but in ISAR images the edges formed between scatterers and the background are often unstable and unreliable, leading these methods to detect numerous invalid FPs. Additionally, SIFT and SURF rely on local gradient information around FPs for feature description, which restricts the amount of information that can be extracted from ISAR images. The proposed method uses the distance and angular information between all scatterers, adopting a global approach. Therefore, it is more effective for ISAR registration.
The proposed method encounters limitations in scenarios with significant IPP variations, such as target maneuvers, large viewing angle differences between distant stations, or images captured from widely separated arc segments at the same location. Variations in IPP result in projection discrepancies and image decorrelation. Improving radar image registration under such conditions remains a challenging but essential task and will be addressed in future research.

6. Conclusions

This paper introduces a novel ISAR image registration algorithm based on the spatial distribution of dominant scatterers. The method projects two-dimensional images into one-dimensional feature vectors by utilizing the distance vectors between scatterers. Image rotation is mapped to a shift of the feature vector, facilitating feature matching through circular cross-correlation. To mitigate the impact of errors, a continuous Gaussian-based feature descriptor is employed. Finally, consistency checks and the RANSAC algorithm are combined to estimate the registration parameters, enhancing efficiency by filtering out scatterer pairs with significant errors. Simulation results show that the proposed algorithm delivers more accurate parameter estimates than SIFT, SURF, and SIFT+SURF across various SNR levels. Additionally, it demonstrates greater stability in estimating the transformation parameters, even when outliers constitute up to 50% of the matches. Real ISAR image registration results further confirm the accuracy and effectiveness of the approach.

Author Contributions

Conceptualization, J.W., J.S., H.L. and L.Z.; methodology, L.Z.; software, L.Z., H.L. and J.S.; validation, L.Z.; formal analysis, L.Z.; investigation, J.W. and L.Z.; writing—original draft, L.Z.; writing—review and editing, L.Z., J.W., H.L. and J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers 62071041, 61701554, and 52374169.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors are grateful to the anonymous reviewers for their careful review and constructive comments.

Conflicts of Interest

Haoyue Luo was employed by China Aerospace Science and Industry Corp Second Research Institute, Institute No.25 of the Second Academy. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ISAR: Inverse synthetic aperture radar
SAR: Synthetic aperture radar
RANSAC: Random sample consensus
SNR: Signal-to-noise ratio
2D: Two-dimensional
3D: Three-dimensional
FFT: Fast Fourier transform
KLT: Kanade–Lucas–Tomasi
SIFT: Scale-invariant feature transform
SURF: Speeded up robust features
IPP: Imaging projection plane
LOS: Line of sight
FPs: Feature points
ME: Mean error
RMSE: Root mean squared error

References

  1. Vehmas, R.; Neuberger, N. Inverse Synthetic Aperture Radar Imaging: A Historical Perspective and State-of-the-Art Survey. IEEE Access 2021, 9, 113917–113943. [Google Scholar] [CrossRef]
  2. MacDonald, M.; Abouzahra, M.; Stambaugh, J. Overview of High-Power and Wideband Radar Technology Development at MIT Lincoln Laboratory. Remote Sens. 2024, 16, 1530. [Google Scholar] [CrossRef]
  3. Li, B.; Chen, D.; Cao, H.; Wang, J.; Li, H.; Fu, T.; Zhang, S.; Zhao, L. Estimating the Observation Area of a Stripmap SAR via an ISAR Image Sequence. Remote Sens. 2023, 15, 5484. [Google Scholar] [CrossRef]
  4. Anger, S.; Jirousek, M.; Dill, S.; Kempf, T.; Peichl, M. High-resolution inverse synthetic aperture radar imaging of satellites in space. IET Radar Sonar Navig. 2023, 18, 544–563. [Google Scholar] [CrossRef]
  5. Wang, H.; Liang, Y.; Xing, M.; Zhang, S. Subimage fusion for high-resolution ISAR imaging. In Proceedings of the International Congress on Image and Signal Processing, Yantai, China, 16–18 October 2010. [Google Scholar] [CrossRef]
  6. Tian, B.; Lu, Z.; Liu, Y.; Li, X. Review on interferometric ISAR 3D imaging: Concept, technology and experiment. Signal Process. 2018, 153, 164–187. [Google Scholar] [CrossRef]
  7. Shao, S.; Liu, H.; Zhang, L.; Wang, P.; Wei, J. Noise-robust interferometric ISAR imaging of 3-D maneuvering motion targets with fine image registration. Signal Process. 2022, 198, 515–524. [Google Scholar] [CrossRef]
  8. Zhou, Z.; Liu, L.; Du, R.; Zhou, F. Three-Dimensional Geometry Reconstruction Method for Slowly Rotating Space Targets Utilizing ISAR Image Sequence. Remote Sens. 2022, 14, 1144. [Google Scholar] [CrossRef]
  9. Yuan, Z.; Wang, J.; Zhao, L.; Gao, M. An MTRC-AHP Compensation Algorithm for Bi-ISAR Imaging of Space Targets. IEEE Sensors J. 2020, 20, 2356–2367. [Google Scholar] [CrossRef]
  10. Fan, Y.; Wang, F.; Wang, H. A Transformer-Based Coarse-to-Fine Wide-Swath SAR Image Registration Method under Weak Texture Conditions. Remote Sens. 2022, 14, 1175. [Google Scholar] [CrossRef]
  11. Zitova, B.; Flusser, J. Image registration methods: A survey. Image Vis. Comput. 2003, 21, 977–1000. [Google Scholar] [CrossRef]
  12. Sotiras, A.; Davatzikos, C.; Paragios, N. Deformable Medical Image Registration: A Survey. IEEE Trans. Med. Imaging 2013, 32, 1153–1190. [Google Scholar] [CrossRef] [PubMed]
  13. Zhang, Q.; Yeo, T.S. Novel Registration Technique for InISAR and InSAR. In Proceedings of the 2003 IEEE International Geoscience and Remote Sensing Symposium, Toulouse, France, 21–25 July 2003. [Google Scholar] [CrossRef]
  14. Tian, B.; Zou, J.; Xu, S.; Chen, Z. Squint model interferometric ISAR imaging based on respective reference range selection and squint iteration improvement. IET Radar Sonar Navig. 2015, 9, 1366–1375. [Google Scholar] [CrossRef]
  15. Rong, J.; Wang, Y.; Han, T. Interferometric ISAR Imaging of Maneuvering Targets with Arbitrary Three-Antenna Configuration. IEEE Trans. Geosci. Remote Sens. 2020, 58, 1102–1119. [Google Scholar] [CrossRef]
  16. Kang, B.; Lee, K.; Kim, K. Image Registration for 3D Interferometric-ISAR Imaging Through Joint-Channel Phase Difference Functions. IEEE Trans. Aerosp. Electron. Syst. 2021, 57, 22–38. [Google Scholar] [CrossRef]
  17. Tian, B.; Wu, W.; Liu, Y.; Xu, S.; Chen, Z. Interferometric ISAR Imaging of Space Targets Using Pulse-Level Image Registration Method. IEEE Trans. Aerosp. Electron. Syst. 2023, 59, 2188–2203. [Google Scholar] [CrossRef]
  18. Shao, S.; Zhang, L.; Liu, H.; Wang, P.; Chen, Q. Images of 3-D Maneuvering Motion Targets for Interferometric ISAR With 2-D Joint Sparse Reconstruction. IEEE Trans. Geosci. Remote Sens. 2021, 59, 9397–9423. [Google Scholar] [CrossRef]
  19. Luc, V. Multi-look autofocus in high resolution inverse SAR imaging. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Istanbul, Turkey, 5–9 June 2000; pp. 3029–3032. [Google Scholar] [CrossRef]
  20. Park, S.; Kim, H.; Kim, K. Cross-range scaling algorithm for ISAR images using 2-D Fourier transform and polar mapping. IEEE Trans. Geosci. Remote Sens. 2011, 49, 868–877. [Google Scholar] [CrossRef]
  21. Yeh, C.; Xu, J.; Peng, Y.; Wang, X. Cross-range scaling for ISAR based on image rotation correlation. IEEE Geosci. Remote Sens. Lett. 2009, 6, 597–601. [Google Scholar] [CrossRef]
  22. Wang, F.; Xu, F.; Jin, Y.Q. Three-Dimensional Reconstruction from a Multiview Sequence of Sparse ISAR Imaging of a Space Target. IEEE Trans. Geosci. Remote Sens. 2018, 56, 611–620. [Google Scholar] [CrossRef]
  23. Ye, Y.; Yang, C.; Gong, G.; Yang, P.; Quan, D.; Li, J. Robust Optical and SAR Image Matching Using Attention-Enhanced Structural Features. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–12. [Google Scholar] [CrossRef]
  24. Shi, J.; Zhou, Y.; Xie, Z.; Yang, X.; Guo, W.; Wu, F.; Li, C.; Zhang, X. Joint autofocus and registration for video-SAR by using sub-aperture point cloud. Int. J. Appl. Earth Obs. Geoinf. 2023, 118, 103295. [Google Scholar] [CrossRef]
  25. Wu, L.; Zhao, L.; Wang, J.; Su, J.; Cheng, W. ISAR Image Registration Based on Line Features. J. Electromagn. Eng. Sci. 2024, 24, 215–225. [Google Scholar] [CrossRef]
  26. Zhou, Y.; Zhang, L.; Cao, Y.; Wu, Z. Attitude Estimation and Geometry Reconstruction of Satellite Targets Based on ISAR Image Sequence Interpretation. IEEE Trans. Aerosp. Electron. Syst. 2019, 55, 1698–1711. [Google Scholar] [CrossRef]
  27. Zhang, L.; Li, Y. An Image Registration Method Based on Correlation Matching of Dominant Scatters for Distributed Array ISAR. Sensors 2022, 22, 1681. [Google Scholar] [CrossRef] [PubMed]
  28. Xu, Z.; Zhang, L.; Xing, M. Precise Cross-Range Scaling for ISAR Images Using Feature Registration. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1792–1796. [Google Scholar] [CrossRef]
  29. Li, P.; Zhou, F.; Zhao, B.; Liu, M.; Gu, H. A Novel Large-Angle ISAR Imaging Algorithm Based on Dynamic Scattering Model. IEICE Trans. Electron. 2020, 103, 524–532. [Google Scholar] [CrossRef]
  30. Ryu, B.; Kang, B.; Lee, M.; Kim, K. Robust ISAR Cross-Range Scaling via Two-Step Rotation Velocity Estimation. IEEE Access 2021, 9, 148132–148143. [Google Scholar] [CrossRef]
  31. Wang, Y.; Guo, R.; Tian, B.; Chen, C.; Xu, S.; Chen, Z. Feature point bidirectional matching and 3D reconstruction of sequence ISAR image based on SFIT and RANSAC method. In Proceedings of the International Conference on Radar, Haikou, China, 15–19 December 2021. [Google Scholar] [CrossRef]
  32. Lowe, D. Distinctive image features from scale-invariant key-points. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  33. Bay, H.; Ess, A.; Tuytelaars, T.; Gool, L. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
  34. Martorella, M.; Stagliano, D.; Salvetti, F.; Battisti, N. 3D interferometric ISAR imaging of noncooperative targets. IEEE Trans. Aerosp. Electron. Syst. 2014, 50, 3102–3114. [Google Scholar] [CrossRef]
35. Hanssen, R. Radar Interferometry: Data Interpretation and Error Analysis; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2001. [Google Scholar]
  36. Skolnik, M. Theoretical Accuracy of Radar Measurements. IRE Trans. Aeronaut. Navig. Electron. 1960, 4, 123–129. [Google Scholar] [CrossRef]
  37. Zhu, X. Radar Signal Analysis and Processing; National Defense Industry Press: Beijing, China, 2011. [Google Scholar]
  38. Choi, S.; Kim, T.; Yu, W. Performance Evaluation of RANSAC Family. In Proceedings of the British Machine Vision Conference, London, UK, 7–10 September 2009. [Google Scholar]
  39. Yang, J.; Huang, Z.; Quan, S.; Cao, Z.; Zhang, Y. RANSACs for 3D Rigid Registration: A Comparative Evaluation. IEEE/CAA J. Autom. Sin. 2022, 9, 1861–1878. [Google Scholar] [CrossRef]
  40. Golub, G.; Van Loan, C. Matrix Computations, 4th ed.; Johns Hopkins University Press: Baltimore, MD, USA, 2013. [Google Scholar]
  41. Zhang, X.; Cui, J.; Wang, J.; Sun, C.; Zhu, Z.; Wang, F.; Ma, Y. Parametric scatterer extraction method for space-target inverse synthetic aperture radar image CLEAN. IET Radar Sonar Navig. 2023, 17, 899–915. [Google Scholar] [CrossRef]
Figure 1. ISAR imaging geometry of two arcs.
Figure 2. FPs in the reference image and the sensed image: (a) Set A; (b) Set B. The black dots represent inner FPs, and the blue squares represent outer FPs.
Figure 3. Schematic diagram of angle error.
Figure 4. Feature curve of scatterer $A_1$.
Figure 5. Feature curves for different scatterers: (a) comparison of the feature curves of scatterer $A_1$ and scatterer $B_4$; (b) comparison of the feature curves of scatterer $A_1$ and scatterer $B_8$.
Figure 6. Flowchart of the proposed image registration method.
Figure 7. The variations in the LOS angles between two adjacent imaging arcs: (a) azimuth angle; (b) elevation angle; (c) derivative of the azimuth angle; (d) derivative of the elevation angle.
Figure 8. Different ISAR images: (a) original image; (b) reference image; (c) sensed image.
Figure 9. Statistical results of features: (a) maximum circular cross-correlation matrix of feature curves; (b) histogram of shift angle.
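The matrix in Figure 9a is built from the peak circular cross-correlation between every pair of feature curves from the two images, and Figure 9b histograms the circular shifts at which those peaks occur. A minimal FFT-based sketch of that computation follows; the normalization and array names are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def max_circular_xcorr(curve_a, curve_b):
    """Peak of the normalized circular cross-correlation of two feature
    curves, plus the circular shift (in samples) where it occurs; the
    shift maps to the relative rotation between the two images."""
    a = curve_a - curve_a.mean()
    b = curve_b - curve_b.mean()
    a /= np.linalg.norm(a) + 1e-12
    b /= np.linalg.norm(b) + 1e-12
    # Circular cross-correlation via FFT (O(F log F) per curve pair).
    corr = np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))
    k = int(np.argmax(corr))
    return corr[k], k

# Toy check: a circularly shifted copy correlates perfectly at its shift.
rng = np.random.default_rng(0)
c = rng.random(1024)                 # feature length F_s = 1024, as in Table 5
peak, shift = max_circular_xcorr(np.roll(c, 37), c)
print(peak, shift)                   # ~1.0, 37
```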
Figure 10. The result of feature matching with 67 pairs.
Figure 11. Superposition of two simulated ISAR images: (a) before registration; (b) after registration.
Figure 12. Two real ISAR images from different views: (a) reference image; (b) sensed image. Red dots represent the positions of dominant scatterers.
Figure 13. The result of feature matching with 200 pairs.
Figure 14. Superposition of two real ISAR images: (a) before registration; (b) after registration.
Figure 15. Probability of success with different SNRs.
Figure 16. Estimation error of mapping parameters with different SNRs: (a) ME of $\Delta x$; (b) RMSE of $\Delta x$; (c) ME of $\Delta y$; (d) RMSE of $\Delta y$; (e) ME of $\alpha$; (f) RMSE of $\alpha$.
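Here ME and RMSE are the standard Monte Carlo statistics of the estimated mapping parameters against ground truth; a brief sketch with illustrative variable names:

```python
import numpy as np

def me_rmse(estimates, truth):
    """Mean error and root-mean-square error over Monte Carlo trials."""
    err = np.asarray(estimates) - truth
    return err.mean(), np.sqrt(np.mean(err ** 2))

# e.g., 500 noisy estimates of the rotation angle (ground truth -15 deg)
est = -15.0 + 0.05 * np.random.default_rng(2).standard_normal(500)
print(me_rmse(est, -15.0))
```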
Figure 17. Probability of success with different outlier ratios.
Figure 18. Estimation error of mapping parameters with different outlier ratios: (a) ME of $\Delta x$; (b) RMSE of $\Delta x$; (c) ME of $\Delta y$; (d) RMSE of $\Delta y$; (e) ME of $\alpha$; (f) RMSE of $\alpha$.
Figure 19. Computational complexity of different methods.
Table 1. The length and angle of $A_1 A_i$.

Scatterer Number $i$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
$r_{A_1 i}$ (m) | 0 | 5.00 | 7.07 | 11.18 | 14.14 | 18.02 | 15.81 | 15.00 | 7.50 | 11.28
$\zeta_{A_1 i}$ (degree) | 0 | 90.00 | 45.00 | 63.43 | 45.00 | 33.69 | 18.43 | 0 | 0 | 12.80
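The entries of Table 1 follow directly from the scatterers' 2D coordinates. The coordinates below are not given in the paper; they are inferred for illustration and reproduce the tabulated lengths and angles to within rounding:

```python
import numpy as np

# Illustrative scatterer positions (metres); A1 is the reference at the origin.
pts = np.array([[0, 0], [0, 5], [5, 5], [5, 10], [10, 10],
                [15, 10], [15, 5], [15, 0], [7.5, 0], [11, 2.5]])

d = pts - pts[0]                                  # vectors A1 -> Ai
r = np.hypot(d[:, 0], d[:, 1])                    # lengths r_{A_1 i}
zeta = np.degrees(np.arctan2(d[:, 1], d[:, 0]))   # angles zeta_{A_1 i}
for i, (ri, zi) in enumerate(zip(r, zeta), start=1):
    print(f"A1A{i}: r = {ri:5.2f} m, zeta = {zi:6.2f} deg")
```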
Table 2. Two-line orbital elements (TLE) of the TIANHE space station.

1 48274U 21035A   23059.87944444  .00018297  00000-0  21063-3 0  9991
2 48274  41.4746  96.4827 0003993 340.8847  14.0436 15.61544460104882
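A TLE like Table 2's is normally propagated with SGP4 to obtain the target position from which LOS angle histories such as those in Figure 7 are derived. A minimal sketch using the open-source python-sgp4 package (the paper does not name its propagator, so this choice is an assumption; note the TLE must be in its standard fixed-column layout):

```python
from sgp4.api import Satrec, jday

# TLE of the TIANHE space station (Table 2), in standard fixed-column form.
L1 = "1 48274U 21035A   23059.87944444  .00018297  00000-0  21063-3 0  9991"
L2 = "2 48274  41.4746  96.4827 0003993 340.8847  14.0436 15.61544460104882"

sat = Satrec.twoline2rv(L1, L2)
jd, fr = jday(2023, 2, 28, 21, 6, 24)   # near the TLE epoch (day 59.879 of 2023)
err, r_teme, v_teme = sat.sgp4(jd, fr)  # position/velocity in TEME (km, km/s)
print(err, r_teme, v_teme)              # err == 0 indicates success
```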
Table 3. Estimation results of the mapping parameters.

Method | $\Delta x$ (Pixel) | $\Delta y$ (Pixel) | $\alpha$ (Degree) | Error $\Delta x$ | Error $\Delta y$ | Error $\alpha$
Ground Truth | −10 | 20 | −15 | – | – | –
SIFT | −9.5278 | 20.1025 | −14.8710 | 0.4721 | 0.1025 | 0.1284
SURF | −9.6035 | 19.6803 | −14.4706 | 0.3965 | −0.3197 | 0.5294
SIFT + SURF | −10.2273 | 20.0843 | −14.9511 | −0.2273 | 0.0843 | 0.0489
Proposed method | −10.0911 | 20.0119 | −14.9668 | −0.0911 | 0.0119 | 0.0332
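The parameters in Table 3 define a 2D rigid mapping, a rotation $\alpha$ plus a translation $(\Delta x, \Delta y)$, between matched scatterer coordinates, estimated robustly with RANSAC to suppress glinting scatterers and false matches. A minimal sketch, assuming a plain-vanilla RANSAC with an illustrative pixel tolerance (the paper's RANSAC variant and settings may differ):

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares 2D rigid fit dst ~ R @ src + t (Kabsch/Procrustes)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # enforce a proper rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def ransac_rigid(src, dst, n_iter=10_000, tol=1.0, seed=1):
    """Robust (R, t) from matched pairs; n_iter mirrors I = 10,000 in
    Table 5, and tol (pixels) is an illustrative inlier threshold."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=2, replace=False)  # 2 pairs suffice
        R, t = fit_rigid(src[idx], dst[idx])
        inliers = np.linalg.norm(src @ R.T + t - dst, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    R, t = fit_rigid(src[best], dst[best])                 # refit on inliers
    return R, t, best

# Reported parameters: alpha = degrees(arctan2(R[1, 0], R[0, 0])); (dx, dy) = t.
```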
Table 4. Estimation results of the mapping parameters and normalized image correlation.

Method | $\Delta x$ (Pixel) | $\Delta y$ (Pixel) | $\alpha$ (Degree) | Correlation (Whole Image) | Correlation (Filtered Image)
SIFT | 3.0767 | 4.0650 | −12.4428 | 0.6911 | 0.7136
SURF | 1.2862 | 5.2708 | −16.7880 | 0.6902 | 0.7181
SIFT + SURF | 5.2214 | 6.0782 | −16.4055 | 0.6907 | 0.7196
Proposed method | 3.6828 | 4.8772 | −14.3980 | 0.7360 | 0.7658
Table 5. Related parameters.

Parameter Description | Symbol | Value
Image Size | $N_x \times N_y$ | 487 × 487
Number of FPs | $K_i$; $K_u$; $K_s$; $K_{s2}$ | 168; 198; 200; 135
Feature Length | $F_i$; $F_u$; $F_s$ | 128; 128; 1024
Scales | $S_i$; $S_u$ | 5; 5
Octaves | $V_i$ | 3
Neighboring Region Size | $Z_i$; $Z_u$; $Z_s$ | 16; 20; 16
Gaussian Kernel Size | $G_i$; $G_s$ | 16; 512
Observation Matrix Size | $A_m \times A_n$ | 8 × 7
Iteration Number | $I$ | 10,000
Notes: Subscripts i, u, and s denote the SIFT, SURF, and proposed methods, respectively.
Table 6. The computational complexity formulas in the four stages.

Method | SIFT | SURF | Proposed Method
$C_d$ | $O(N_x N_y G_i^2 S_i V_i)$ | $O(3 N_x N_y S_u)$ | $O(K_s N_x N_y + 10 K_s Z_s)$
$C_f$ | $O(K_i Z_i^2 (2 + F_i))$ | $O(K_u Z_u^2 (1 + F_u))$ | $O(K_s^2 G_s)$
$C_m$ | $O(K_i^2 F_i)$ | $O(K_u^2 F_u)$ | $O(0.5 K_s^2 F_s + 0.5 K_s F_s \log F_s + 0.25 K_s^2 F_s \log F_s)$
$C_e$ | $O(A_m A_n^2 I + A_m^2 A_n I + 11 K_i I)$ | $O(A_m A_n^2 I + A_m^2 A_n I + 11 K_u I)$ | $O(A_m A_n^2 I + A_m^2 A_n I + 11 K_{s2} I)$
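Substituting the Table 5 values into these expressions gives the kind of per-stage cost comparison plotted in Figure 19. A sketch under the assumption that the big-O expressions can be read as literal operation counts (the groupings reconstructed above and the base-2 logarithm are taken at face value):

```python
import math

# Parameter values from Table 5.
Nx = Ny = 487
Ki, Ku, Ks, Ks2 = 168, 198, 200, 135
Fi, Fu, Fs = 128, 128, 1024
Si, Su, Vi = 5, 5, 3
Zi, Zu, Zs = 16, 20, 16
Gi, Gs = 16, 512
Am, An, I = 8, 7, 10_000

# Table 6 expressions, evaluated literally for each method and stage.
costs = {
    "SIFT": (Nx*Ny*Gi**2*Si*Vi, Ki*Zi**2*(2 + Fi), Ki**2*Fi,
             Am*An**2*I + Am**2*An*I + 11*Ki*I),
    "SURF": (3*Nx*Ny*Su, Ku*Zu**2*(1 + Fu), Ku**2*Fu,
             Am*An**2*I + Am**2*An*I + 11*Ku*I),
    "Proposed": (Ks*Nx*Ny + 10*Ks*Zs, Ks**2*Gs,
                 0.5*Ks**2*Fs + 0.5*Ks*Fs*math.log2(Fs)
                 + 0.25*Ks**2*Fs*math.log2(Fs),
                 Am*An**2*I + Am**2*An*I + 11*Ks2*I),
}
for name, (cd, cf, cm, ce) in costs.items():
    print(f"{name:9s} C_d={cd:.3g} C_f={cf:.3g} C_m={cm:.3g} C_e={ce:.3g}")
```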