Article

An Image Registration Method Based on Correlation Matching of Dominant Scatters for Distributed Array ISAR

1
University of Chinese Academy of Sciences, Beijing 100049, China
2
National Key Laboratory of Microwave Imaging Technology, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China
*
Author to whom correspondence should be addressed.
Sensors 2022, 22(4), 1681; https://doi.org/10.3390/s22041681
Submission received: 17 January 2022 / Revised: 15 February 2022 / Accepted: 18 February 2022 / Published: 21 February 2022
(This article belongs to the Section Sensing and Imaging)

Abstract

Distributed array radar provides new prospects for three-dimensional (3D) inverse synthetic aperture radar (ISAR) imaging. The accuracy of image registration, an essential step in 3D ISAR imaging, affects the performance of 3D reconstruction. In this paper, the imaging process of distributed array ISAR is presented according to the imaging model. The ISAR images of a distributed array radar at different APCs have different distributions of scatters. When the local distributions of scatters for the same target are quite different, the performance of existing ISAR image registration methods may not be optimal. Therefore, an image registration method is proposed that integrates the feature-based and area-based approaches. The proposed method consists of two stages: coarse registration and fine registration. In the first stage, a dominant scatters model is established based on the scale-invariant feature transform (SIFT). In the second stage, sub-pixel registration is achieved using a local correlation matching method. The effectiveness of the proposed method is verified by comparison with other image registration methods, and 3D reconstruction of the registered experimental data is carried out to assess its practicability.

1. Introduction

Inverse synthetic aperture radar (ISAR) is a type of radar used for the imaging of noncooperative moving targets. ISAR transmits wideband signals to obtain high-range resolution and achieves high azimuth resolution via synthetic aperture caused by relative motions [1]. ISAR technology enables radar systems to develop from target detection and ranging [2,3,4] to acquiring detailed features of the target [5,6,7]. Traditional ISAR imaging is the two-dimensional (2D) projection of the target onto the imaging plane, which only reflects the shape information of the target. In addition, 2D ISAR imaging suffers from problems such as information loss and image feature instability. A high-resolution three-dimensional (3D) image can reflect the 3D geometry of the target and provide more robust features.
Earlier researchers mainly used a single-station wideband radar to implement 3D ISAR imaging. The main methods include 3D imaging based on sum-diff beams [8,9] and 3D reconstruction based on sequence ISAR images [10,11]. The sequential ISAR imaging method is highly sensitive to the target’s motion posture, causing difficulties in ensuring the accuracy of the 3D reconstruction result, and thus limited practical value. Therefore, researchers have focused on multiview 3D ISAR imaging technology; multi-station radars are used to observe the target at the same time to obtain 3D images. In recent years, many 3D ISAR imaging methods, such as interferometric ISAR imaging [12,13,14], array ISAR [15,16] and MIMO ISAR [17], have been studied by scholars. Among these, distributed array radar combining MIMO radar real-aperture imaging and ISAR synthetic-aperture imaging has become a research hotspot, as it can shorten the imaging time and obtain a higher elevation resolution. In particular, distributed array ISAR imaging requires high-quality 2D ISAR images from each antenna. After image registration, a spectral analysis is conducted on the 2D ISAR images with different antenna phase centers (APCs) to obtain the 3D reconstruction results.
One of the keys to 3D ISAR imaging technology is the multiview 2D ISAR image registration, which is primarily achieved by area-based, feature-based or hybrid methods. Area-based methods focus on image intensity, including the correlation matching method [18] and the max-spectrum method [19]. In the case of a short-length baseline, the correlation matching method and the max-spectrum method are commonly used for ISAR image registration. Feature-based methods focus on feature extraction. These methods include the Harris corner detector method [20], the scale-invariant feature transform (SIFT) method and its improved versions [21,22,23,24]. A SIFT-like algorithm was first proposed in [25] for multiview SAR image registration. An automatic and fast image registration method was presented in [26] for GF-3 SAR images, which combined an adaptive sampling method with the SAR-SIFT algorithm. In [27], an improved SIFT method was proposed for single-station sequential ISAR image registration, and 3D reconstruction of the simulated aircraft data was carried out. However, distributions of scatters of the same target on different APCs are different due to the long baseline length of the distributed array antenna. Overall, area-based methods are subject to image mismatch, while feature-based methods suffer from insufficient registration accuracy [28].
Thus far, there is in general very little research that has been conducted on distributed array ISAR image registration. Inspired by the application of the SAR-SIFT algorithm in multiview SAR image registration, this paper proposes an image registration method that combines the advantages of feature-based and area-based methods to address the above problems. Our main contributions include:
  • Based on the SAR-SIFT algorithm, a dominant scatters model is proposed for multi-view ISAR image registration;
  • Compared with existing ISAR image registration methods, the superiority of the proposed method is verified;
  • Subpixel registration and 3D reconstruction are carried out on different experimental data to verify the effectiveness and practicability of the proposed method.
The rest of this paper is organized as follows. Section 2 outlines the 3D imaging model of the distributed array ISAR system. Our proposed image registration algorithm based on dominant scatters is described in Section 3. In Section 4, the registration analysis of different registration methods on the experimental data is presented, and the 3D reconstruction is carried out. Finally, the conclusions are drawn in Section 5.

2. Imaging Model of the Distributed Array ISAR System

Figure 1 shows the geometry of the distributed array ISAR for 3D ISAR imaging. The radar system consists of a central station (CO), N transmitting stations with the same structure (Tx) and N receiving stations with the same structure (Rx), which means that there are $N_p$ (= N × N) APCs. Here, we assume that the target moves with a constant velocity in the direction of v.
After deskewing, the echo signal is given by:
$$S_{Rm}(t) = V_{Rm}\,\mathrm{rect}\!\left[\frac{t-\tau_m+\Delta\tau_m/2}{T_p-\Delta\tau_m}\right]\cos\!\left[2\pi k_{RF}\Delta\tau_m(t-\tau)+2\pi f_{RF}\Delta\tau_m+\pi k_{RF}\Delta\tau_m^{2}\right]$$
where $\Delta\tau_m$ is the delay experienced by the radar signal travelling from the transmitting antenna to the receiving antenna via the target, and $V_{Rm}$, $k_{RF}$ and $T_p$ denote the amplitude, chirp rate and period of the echo signal, respectively.
A Fourier transform is applied after sampling the output deskew signal. This gives rise to the result of the range dimension pulse compression, expressed as follows:
$$I_{Rm}(f) = A_{Rm}\,\mathrm{sinc}\!\left[(T_p-\Delta\tau_m)(f-k_{RF}\Delta\tau_m)\right]\exp\!\left[-j\left(2\pi f_{RF}\Delta\tau_m+\Delta\varphi_m\right)\right]$$
where Δ φ m is the additional phase introduced into the signal transmission.
With the N × N APCs each forming an independent channel, $N_p$-channel 2D ISAR images are obtained by motion compensation after compensating for the additional phase with $S_c(f) = \exp(j\Delta\varphi)$, as follows:
$$I_{np}(f,f_m) = A_i\,\mathrm{sinc}\!\left\{T_p\!\left[f+\frac{k_{RF}}{c}\left(R_{Tn}+R_{Rm}-2R_{ref}\right)\right]\right\}\mathrm{sinc}\!\left\{T_A\!\left[f_m+\frac{V_{Tn}+V_{Rm}}{\lambda}\right]\right\}\exp\!\left\{-j\frac{2\pi f_{RF}}{c}\left(R_{Tn}+R_{Rm}-2R_{ref}\right)\right\}$$
where n is the index of the transmitting antenna, m is the index of the receiving antenna, $R_{Tn}$ and $R_{Rm}$ are the distances from the target to the n-th transmitting antenna and the m-th receiving antenna, respectively, $V_{Tn}$ and $V_{Rm}$ are the target's velocities relative to the n-th transmitting antenna and the m-th receiving antenna, respectively, $f_m$ is the Doppler frequency and $\lambda$ is the wavelength corresponding to the center frequency of the radar's transmitted signal.
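As an illustrative check of this dechirp-and-FFT range compression, the following sketch shows the deskewed echo collapsing to a spectral peak at the frequency $k_{RF}\Delta\tau_m$, which maps to the target's range cell. All parameter values here are made up for the sketch, not the actual system parameters.

```python
import numpy as np

# After deskewing, the echo is a tone at k_RF * delta_tau, so its FFT
# peaks at that frequency (the sinc-shaped compressed range profile).
fs = 1e6           # sampling rate of the deskewed output (Hz), assumed
N = 1000           # number of samples in one pulse
k_rf = 1e9         # chirp rate (Hz/s), assumed
delta_tau = 2e-5   # hypothetical bistatic delay (s)

t = np.arange(N) / fs
s = np.cos(2 * np.pi * k_rf * delta_tau * t)   # deskewed echo (real tone)

spectrum = np.abs(np.fft.rfft(s))
freqs = np.fft.rfftfreq(N, 1 / fs)
peak_freq = freqs[np.argmax(spectrum)]          # encodes the range cell
```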
Figure 2 presents a flowchart of the imaging process of the distributed array ISAR. The 16 echo signals of different APCs are passed through matched filtering, motion compensation and other signal processing steps to obtain 16 ISAR images. Among these steps, motion compensation mainly solves the problem of envelope shift caused by translational motion and the high-order phase error caused by rotational motion. Next, the proposed method is used to register the multiview ISAR images, and the registration area in the master image is used as the reference calibration area to correct the amplitude and phase errors. Finally, through super-resolution imaging processing in the elevation dimension, 3D reconstruction of the target is realized.

3. ISAR Image Registration Method Based on Correlation Matching of Dominant Scatters

Following the imaging model introduced in Section 2, an image registration method for the distributed array radar system is proposed in this section. Firstly, the SIFT algorithm is used to extract features between the master image and the slave images, and the random sample consensus (RANSAC) algorithm is adopted to eliminate mismatched point pairs. Then, the registration control points are determined by the dominant scatters model. Finally, the relative offset is determined via correlation matching to complete image registration. The flowchart of the proposed method is shown in Figure 3.

3.1. Feature Extraction

SIFT is an algorithm for detecting and describing local features in images; it is widely adopted in the field of computer vision. First proposed by David Lowe in 1999 and subsequently supplemented and improved in 2004, SIFT adopts a Gaussian convolution kernel for scale transformation to obtain the corresponding scale space of the image, which can be calculated by the following expression [21]:
$$L(x,y,\sigma) = G(x,y,\sigma) * I(x,y)$$
where $\sigma$ is the scale space factor, $(x,y)$ are the pixel coordinates of the image at the scale $\sigma$ and $*$ denotes the convolution operation in the x and y directions. The Gaussian convolution kernel $G(x,y,\sigma)$ is given by:
$$G(x,y,\sigma) = \frac{1}{2\pi\sigma^{2}}\,e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}}$$
The difference-of-Gaussian scale space is computed by convolving the difference-of-Gaussian kernel of different scales with the image:
$$D(x,y,\sigma) = \left[G(x,y,\eta\sigma)-G(x,y,\sigma)\right]*I(x,y) = L(x,y,\eta\sigma)-L(x,y,\sigma)$$
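The difference-of-Gaussians construction above can be sketched in a few lines of NumPy; the separable Gaussian filter and the values of $\sigma$ and $\eta$ here are illustrative choices, not SIFT's defaults.

```python
import numpy as np

# Minimal difference-of-Gaussians sketch using a separable 1D kernel.
def gaussian_kernel(sigma, radius=None):
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()               # normalized so blurring preserves sum

def gaussian_blur(img, sigma):
    k = gaussian_kernel(sigma)
    # Convolve rows, then columns (the 2D Gaussian kernel is separable).
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def difference_of_gaussians(img, sigma, eta=1.6):
    # D(x, y, sigma) = L(x, y, eta*sigma) - L(x, y, sigma)
    return gaussian_blur(img, eta * sigma) - gaussian_blur(img, sigma)

img = np.zeros((32, 32))
img[16, 16] = 1.0                    # a single point-scatterer image
dog = difference_of_gaussians(img, sigma=1.0)
```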
The image is sampled at different scales to improve the anti-noise performance of feature extraction, while achieving the invariant transformation of the image scales. The extrema of the scale space are acquired from the difference images of adjacent images in the same frequency order, and the exact feature points are obtained by 2D function fitting. Next, the gradient directions with high robustness are computed using the statistical properties of the image’s gradient direction histogram around the feature points. The calculated gradient directions are then used as the main direction of the feature points. The magnitude and direction of the corresponding gradient at ( x , y ) are given by:
$$m(x,y) = \sqrt{\left(L(x+1,y)-L(x-1,y)\right)^{2}+\left(L(x,y+1)-L(x,y-1)\right)^{2}}$$
$$\theta(x,y) = \arctan\!\left(\frac{L(x,y+1)-L(x,y-1)}{L(x+1,y)-L(x-1,y)}\right)$$
To describe each feature point, the axes are first rotated to the main direction of the feature point to ensure rotation invariance. Next, a 16 × 16 pixel window centered on the feature point is taken and divided into 4 × 4 sub-blocks. An eight-bin gradient direction histogram is computed over each sub-block, and the accumulated value of each gradient direction forms a seed point. Each feature point is thus described by 4 × 4 = 16 seed points, each carrying eight direction components, yielding a 16 × 8 = 128-dimensional SIFT feature vector.
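The matching of such 128-dimensional descriptors can be illustrated with synthetic vectors; the nearest-neighbour search with Lowe's ratio test below is a minimal sketch (the 0.8 ratio threshold is a common choice, not taken from this paper).

```python
import numpy as np

# Nearest-neighbour matching of SIFT-style descriptors with a ratio test.
rng = np.random.default_rng(0)

master = rng.random((20, 128))                 # master-image descriptors
noise = 0.01 * rng.standard_normal((20, 128))
slave = master + noise                         # nearly identical slave set

matches = []
for i, d in enumerate(master):
    dists = np.linalg.norm(slave - d, axis=1)
    order = np.argsort(dists)
    nearest, second = dists[order[0]], dists[order[1]]
    # Keep the match only if the nearest neighbour is clearly better than
    # the second nearest; the ratio test rejects ambiguous matches.
    if nearest < 0.8 * second:
        matches.append((i, int(order[0])))
```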
The 2D ISAR image of the distributed array ISAR is mainly composed of dominant scatters. Figure 4 shows the distribution of scatters in the master image and the slave image from different viewpoints. The engine of the target is selected for zoom-in analysis. The dominant scatters in the red circle have relatively robust characteristics, while the scatters of the same target part in the green circle differ across viewpoints. It can be seen that the feature point correspondence between the master image and the slave image is unstable after feature extraction with SIFT.
The feature points include corner points, edge points, bright spots in dark areas and dark points in bright areas, not all of which are directly relevant to image registration. Generally speaking, dominant scatters are used as registration control points in ISAR image registration. After feature extraction, the RANSAC algorithm is leveraged to establish a dominant scatters model that determines the registration control points through mapping relations.

3.2. Improved Correlation Matching Method Based on Dominant Scatters

3.2.1. Dominant Scatters Model

For the master image and the slave images, SIFT produces two groups of descriptor vectors. A pair of descriptor vectors whose Euclidean distance satisfies the matching threshold is selected as a candidate feature-point match. The Euclidean distance between the descriptor vectors of two feature points is computed as follows [29]:
$$Dis_{i,j}(\varepsilon) = \left\|X_{i,\varepsilon}^{1}-X_{j,\varepsilon}^{2}\right\| = \sqrt{\left(x_{i,\varepsilon}^{1}-x_{j,\varepsilon}^{2}\right)^{2}+\left(y_{i,\varepsilon}^{1}-y_{j,\varepsilon}^{2}\right)^{2}}$$
Based on the geometric similarity between feature points in the master image and the slave images, a dominant scatters model between different 2D images is established. RANSAC labels the correct matching points as inliers and the incorrect matching points as outliers. The steps of the RANSAC algorithm are shown in Table 1. The parameters of the model are estimated iteratively from a group of observed data containing incorrect matching points.
The Euclidean distances corresponding to the two sets of descriptor vectors of different feature points are taken as the dataset, and the proportion of inliers in the whole dataset is denoted by $\varpi$, given by:
$$\varpi = \frac{n_{inliers}}{n_{inliers}+n_{outliers}}$$
Assuming that n points are required to determine the above model (here n = 2) and that K is the number of iterations, the probability of obtaining the correct solution is as follows:
$$P = 1-\left(1-\varpi^{n}\right)^{K}$$
According to Equations (9) and (10), the number of iterations K is defined as:
$$K = \frac{\log(1-P)}{\log\left(1-\varpi^{n}\right)}$$
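Equation (11) can be evaluated directly. A small sketch, using illustrative values (99% target confidence, a 50% inlier ratio, a two-point model):

```python
import math

# Number of RANSAC iterations K needed to draw at least one all-inlier
# sample with probability P, for a model requiring n points.
def ransac_iterations(p_success, inlier_ratio, n_points):
    return math.ceil(math.log(1 - p_success)
                     / math.log(1 - inlier_ratio ** n_points))

k_iter = ransac_iterations(0.99, 0.5, 2)   # assumed example values
```

Rounding up guarantees the success probability is at least P.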

3.2.2. Calculating the Relative Offset

The proposed registration method involves two steps, namely coarse registration and fine registration. Based on the coarse registration, the value of the selected control point is interpolated, and the correlation matching method is used to perform fine registration with the subpixel unit. Through these steps, the registration accuracy is expected to reach the subpixel level.
The first step is coarse registration. The master image is gridded, and the extremum in each grid cell is taken as an alternative control point, since SIFT mainly extracts feature points at corner-like positions. The positions of the alternative control points in the slave image are determined by the dominant scatters model described in the previous section, and the alternative control points with high correlation between images are selected as control points, reaching pixel-level registration accuracy.
The second step is fine registration. A matching window around the control points is taken, and 16-fold 2D linear interpolation is performed to meet the subpixel registration accuracy. Let the two matching windows to be registered be $A_1(i,j)$ and $A_2(i,j)$. The normalized cross-correlation function of the two images is then:
$$X_{corr} = \frac{\sum_{(i,j)\in W}\left|\phi_1(i,j)\,\phi_2(i+r,j+c)\right|}{\sqrt{\sum_{(i,j)\in W}\phi_1(i,j)^{2}\sum_{(i,j)\in W}\phi_2(i+r,j+c)^{2}}},\qquad \phi_k(i,j) = A_k(i,j)-E\!\left[A_k(i,j)\right]$$
where r and c represent the offsets in the row and column directions of the two images, respectively, W is the image window and $E[\cdot]$ is the mean value of the image.
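A minimal sketch of this offset search, under assumptions of our own: the windows are synthetic, the slave image is the master shifted by a known integer amount, and the window size and search range are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
master = rng.random((24, 24))
# Slave image: the master content shifted down 2 rows and right 1 column.
slave = np.roll(np.roll(master, 2, axis=0), 1, axis=1)

def ncc(w1, w2):
    # Normalized cross-correlation of two zero-mean windows, cf. Eq. (12).
    p1, p2 = w1 - w1.mean(), w2 - w2.mean()
    return np.abs(p1 * p2).sum() / np.sqrt((p1**2).sum() * (p2**2).sum())

win, max_shift = 8, 4
ref = master[8:8 + win, 8:8 + win]        # window around a control point
best_score, best_rc = -1.0, (0, 0)
for r in range(-max_shift, max_shift + 1):
    for c in range(-max_shift, max_shift + 1):
        cand = slave[8 + r:8 + r + win, 8 + c:8 + c + win]
        score = ncc(ref, cand)
        if score > best_score:
            best_score, best_rc = score, (r, c)
```

The offset maximizing the score recovers the known shift; interpolating the windows first (as in the paper) refines this to subpixel precision.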
When the master image and the slave image are accurately registered, the relative offset of the center position of the correlation coefficient window is effectively the accurate offset of registration. This method can also be implemented in the frequency domain by Fourier transform.
Once the exact offset of each corresponding point is obtained, a polynomial model is employed for accurate correction. The polynomial model establishes a mapping between radar image coordinates and the target's physical coordinates, treating the global deformation of a 2D ISAR image as the combined effect of translation, scaling, rotation and higher-order deformations. The binary quadratic polynomial is as follows:
$$x_k = a_{k0}+a_{k1}x+a_{k2}y+a_{k3}x^{2}+a_{k4}y^{2}+a_{k5}xy$$
$$y_k = b_{k0}+b_{k1}x+b_{k2}y+b_{k3}x^{2}+b_{k4}y^{2}+b_{k5}xy$$
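Given a set of control-point correspondences, the coefficients $a_{ki}$ and $b_{ki}$ can be estimated by least squares. A sketch with synthetic points related by a pure translation (so the fitted higher-order terms vanish):

```python
import numpy as np

# Least-squares fit of the binary quadratic polynomial coefficients from
# control-point correspondences (synthetic data: slave = master + (3, -2)).
rng = np.random.default_rng(2)
x, y = rng.random(20) * 10, rng.random(20) * 10   # master coordinates
xk, yk = x + 3.0, y - 2.0                         # slave coordinates

# Design matrix with the polynomial terms [1, x, y, x^2, y^2, x*y].
A = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x * y])
a, *_ = np.linalg.lstsq(A, xk, rcond=None)
b, *_ = np.linalg.lstsq(A, yk, rcond=None)
```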
According to Equation (12), an offset is calculated in the master image $A_1(i,j)$, and its matching position is then found in the slave image $A_2(i,j)$. The value at this position is obtained by bilinear interpolation of the complex values of the pixels around the matching position in the slave image.
The bilinear interpolation is expressed as:
$$I(P) = \sum_{i=1}^{2}\sum_{j=1}^{2} I(i,j)\,W(i,j) = \begin{bmatrix} I_{11} & I_{12}\\ I_{21} & I_{22}\end{bmatrix}\odot\begin{bmatrix} W_{11} & W_{12}\\ W_{21} & W_{22}\end{bmatrix}$$
where $W_{ij} = W(x_i)\,W(y_j)$, with
$$W(x_1) = 1-\Delta x,\quad W(x_2) = \Delta x,\quad W(y_1) = 1-\Delta y,\quad W(y_2) = \Delta y$$
Substituting Equation (15) into Equation (14) yields:
$$I(P) = W_{11}I_{11}+W_{12}I_{12}+W_{21}I_{21}+W_{22}I_{22} = (1-\Delta x)(1-\Delta y)I_{11}+(1-\Delta x)\Delta y\,I_{12}+\Delta x(1-\Delta y)I_{21}+\Delta x\,\Delta y\,I_{22}$$
The slave image is registered with the master image after bilinear interpolation.
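The bilinear interpolation of Equation (16) can be sketched as follows; the test image is a synthetic plane, on which bilinear interpolation is exact.

```python
import numpy as np

# Bilinear interpolation at a fractional (row, col): the four neighbours
# are weighted by (1-dr)(1-dc), (1-dr)dc, dr(1-dc) and dr*dc.
def bilinear(img, row, col):
    r0, c0 = int(np.floor(row)), int(np.floor(col))
    dr, dc = row - r0, col - c0
    return ((1 - dr) * (1 - dc) * img[r0, c0] +
            (1 - dr) * dc * img[r0, c0 + 1] +
            dr * (1 - dc) * img[r0 + 1, c0] +
            dr * dc * img[r0 + 1, c0 + 1])

plane = np.add.outer(np.arange(5.0), 2 * np.arange(5.0))  # f(r, c) = r + 2c
val = bilinear(plane, 1.5, 2.25)
```

In the paper's pipeline the same weighting is applied to the complex pixel values of the slave image.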

4. Experimental Results

In this section, the performance of the proposed method is analyzed by using experimental data collected from distributed array radars. In Section 4.1, an experiment is described in which we obtained multiview 2D ISAR images of different channels. The registration details of the proposed method are also presented. The analysis and comparison of different registration methods for 16-channel 2D ISAR images are given in Section 4.2. Section 4.3 presents the 3D ISAR imaging results for a variety of aircraft based on 16-channel 2D ISAR images after registration, verifying the practicality of the proposed method.

4.1. Distributed Array Radar System and Image Registration

To assess the effectiveness of the proposed method, the ISAR imaging experiment of the distributed array radar was carried out near the Beijing Capital International Airport. Relevant system parameters are listed in Table 2. Figure 5 shows a distributed array radar system and the observation aircraft in foggy weather conditions. The system consists of four transmitters and four receivers.
The 16-channel echo signals are obtained through a reasonable layout of the equipment. As shown in Figure 6, a total of 16 2D ISAR images are obtained after 2D ISAR imaging of the 16-channel echo signals. The difference in the location of the transceiver antennas leads to the imaging planes of the target not coinciding with each other. The 2D ISAR images of adjacent equivalent phase centers between different channels at the same time are not entirely identical. Meanwhile, different transceiver antennas also have slightly different SNRs in their measurements, which affects the quality of the 2D ISAR images.
In this paper, the channel-8 image is taken as the master image. The 16-channel 2D ISAR images are registered, and the channel-2 image and channel-9 image are taken as examples for analysis. First, the proposed method uses the SIFT algorithm for feature extraction. Figure 7 shows the matching results of the master–slave pair.
After feature extraction, the RANSAC algorithm is used to reduce the influence of mismatched feature points. Compared with the least squares (LS) algorithm, RANSAC obtains a more robust mapping relation when building the dominant scatters model. The relative offset of coarse registration between the master image and the slave image is obtained by the dominant scatters model. Figure 8 shows the mapping relationship between the master image and the slave image.
After the master image is meshed, as shown in Figure 9 and Figure 10, dominant scatters are selected as registration control points, and the relative offset of fine registration is determined by the correlation matching method. Finally, a polynomial is used to fit the offset, and the image registration is completed by interpolating the slave image.

4.2. Result Analysis

The correlation coefficient of 2D ISAR images is taken as the metric to assess the image registration quality, and the registration method proposed in this paper is qualitatively analyzed. Different registration methods are used to register 16-channel 2D ISAR images. The following analysis reveals that the correlation matching method [18], max-spectrum method [19] and SAR-SIFT method [26] have poor registration performance when the SNR and distributions of scatters differ in distributed array ISAR imaging.

4.2.1. Correlation Coefficients between ISAR Images

The correlation coefficient is as follows:
$$\rho = \frac{E\!\left[A_1 A_2^{*}\right]}{\sqrt{E\!\left[|A_1|^{2}\right]E\!\left[|A_2|^{2}\right]}}$$
where $\rho$ represents the correlation coefficient, $A_1$ and $A_2$ are two 2D ISAR images, $*$ denotes the complex conjugate and $E[\cdot]$ denotes the mathematical expectation.
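A direct implementation of Equation (17) on synthetic complex images, taking the modulus of the numerator since the metric of interest is the modulus of ρ:

```python
import numpy as np

# Complex correlation coefficient of two registered ISAR images.
def corr_coeff(a1, a2):
    num = np.abs(np.mean(a1 * np.conj(a2)))
    den = np.sqrt(np.mean(np.abs(a1)**2) * np.mean(np.abs(a2)**2))
    return num / den

rng = np.random.default_rng(3)
img = rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))
rho_same = corr_coeff(img, img)                       # perfectly registered
rho_shift = corr_coeff(img, np.roll(img, 1, axis=0))  # misregistered copy
```

A perfectly registered pair gives modulus 1; any residual misregistration lowers it, which is why the modulus serves as the registration-quality metric.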
Figure 10 shows the correlation coefficient distribution of each channel’s 2D ISAR image after registration with the proposed method. Remarkably, the larger the modulus, the better the registration effect between the master image and the slave image.

4.2.2. Analysis and Comparison of Different Image Registration Methods

In this section, the correlation matching method, the max-spectrum method, the SAR-SIFT method and the proposed method are used to register the 2D ISAR images of the experimental data. The region with a correlation coefficient greater than 0.8 is regarded as the region of interest for image registration, and an interval of 0.05 is used to count the number of pixels in the region of interest above each threshold. Within the region of interest, the larger the correlation coefficient, the better the registration quality. An ISAR image is mainly composed of a small number of scatters and a large amount of noise, so a large part of the correlation coefficient distribution of an ISAR image corresponds to meaningless noise. Therefore, it is necessary to calculate the correlation coefficient only over the region containing scatters when comparing the registration effects of different methods.
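The thresholded counting described above can be sketched as follows; the correlation map here is synthetic, standing in for the per-pixel correlation coefficients of a registered image pair.

```python
import numpy as np

# Count pixels of a correlation-coefficient map above thresholds spaced
# 0.05 apart, starting from the 0.8 region-of-interest cutoff.
rng = np.random.default_rng(4)
corr_map = rng.random((64, 64))          # stand-in correlation coefficients

thresholds = [0.80, 0.85, 0.90, 0.95]
counts = [int((corr_map >= t).sum()) for t in thresholds]
```

Methods whose counts stay high at the upper thresholds (e.g. 0.95–1) register the image pair more accurately.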
As shown in Table 3 and Figure 11, when the slave image has a relatively low SNR, image mismatch occurs for the correlation matching method and the max-spectrum method, and the registration accuracy of the SAR-SIFT method is low. In the 0.95–1 interval of the region of interest, the number of pixels produced by our method is much greater than that of the other three methods. The proposed registration method yields more pixels with higher correlation coefficients through the dominant scatters model and thus achieves better registration outcomes.
As observed in Table 4 and Figure 12, when the slave image has a different distribution of scatters, the peak correlation coefficient of the correlation matching method and the max-spectrum method is 0.91 and that of the SAR-SIFT method is 0.89, whereas that of the proposed method is 0.97. Furthermore, the proposed method not only has a peak position closer to 1 but also a greater number of pixels with higher correlation coefficients in the region of interest, achieving higher registration accuracy.
It can be seen from the above analysis that the proposed registration method can achieve accurate registration for both images with a low SNR and images with different distributions of scatters. The next section will show the effect of different image registration methods on the elevation 3D reconstruction.

4.3. Three-Dimensional ISAR Imaging

The traditional and the proposed registration methods are used to register multiview 2D ISAR images. Based on the image registration results, the dominant scatters of the master image are selected to compensate for the amplitude and phase consistency of all 2D ISAR images.
As shown in [15], the echo signals obtained by the distributed array radar system are sparse in the elevation dimension and can therefore be used for super-resolution imaging via a compressive sensing algorithm. The sparse representation of the signal $x \in \mathbb{R}^{N}$ is as follows:
$$x = \sum_{i=1}^{N}\theta_i\varphi_i = \Psi\Theta$$
where $\Psi = \{\varphi_i \mid i = 1, 2, \ldots, N\}$ is the orthogonal basis matrix, $\theta_i = \langle x, \varphi_i \rangle$ is the projection coefficient and $\Theta = \Psi^{T}x$ is the projection coefficient vector.
When the signal x is K-sparse in the $\Psi$ domain, the observation matrix $A \in \mathbb{R}^{M \times N}$ can be used to measure the sparse coefficient $\Theta$ linearly, and the observation vector $y \in \mathbb{R}^{M}$ is obtained as follows:
$$y = A\Theta$$
When the signal contains noise e, the observation vector becomes:
$$y = A\Theta + e$$
Compressed sensing is a technique for the reconstruction of sparse signals. When $\Theta$ in Equation (19) or Equation (20) is K-sparse, sparse solutions can be obtained by solving the following optimization problem:
$$\min_{\Theta}\left\|y-A\Theta\right\|_{2}^{2}\quad \mathrm{s.t.}\quad \left\|\Theta\right\|_{0}\le K$$
Common reconstruction methods include the greedy tracking algorithm, the convex relaxation algorithm and the combination algorithm. The greedy tracking algorithm is widely used for its simple structure and low computation requirement. In this paper, OMP [30], one of the greedy tracking algorithms, is used to reconstruct the image sequence after amplitude and phase correction.
The OMP algorithm gradually approaches the original signal by greedily selecting a locally optimal solution in each iteration. First, the correlation principle is adopted to select the atom that best matches the current residual. Second, the selected atoms are Gram–Schmidt orthogonalized. Third, the signal is projected onto the space spanned by these orthogonal atoms to obtain the signal components on the selected atoms and the iteration residual. Finally, the residual is decomposed using the same procedure until the stopping criterion is met. The residual is expressed as follows:
$$r_k = y - A_k\hat{\Theta}_k$$
Table 5 describes the steps of the OMP algorithm.
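The steps above can be sketched as a minimal OMP implementation; a plain least-squares projection replaces the explicit Gram–Schmidt step (numerically equivalent for the projection), and the measurement matrix below is synthetic, not the radar steering matrix.

```python
import numpy as np

# Minimal orthogonal matching pursuit for y = A @ theta, theta K-sparse.
def omp(A, y, K):
    residual = y.copy()
    support = []
    for _ in range(K):
        # Pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        support.append(idx)
        # Project y onto the span of the selected atoms (least squares).
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    theta = np.zeros(A.shape[1])
    theta[support] = coeffs
    return theta

rng = np.random.default_rng(5)
A = rng.standard_normal((32, 64))
A /= np.linalg.norm(A, axis=0)            # unit-norm columns (atoms)
true_theta = np.zeros(64)
true_theta[[7, 20]] = [1.5, -2.0]         # a K = 2 sparse signal
theta_hat = omp(A, A @ true_theta, K=2)
```

With far more measurements than nonzero coefficients, the sparse support and amplitudes are recovered; in the paper the same principle reconstructs the elevation profile from the 16 registered channels.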
Figure 13 shows the 3D reconstruction results, in the form of 3D point clouds, of 2D ISAR images with different registration methods using the OMP algorithm. The 3D point cloud images obtained by the proposed method contain fewer outliers and clearer features.
Different experimental echoes are registered with our proposed method, followed by 3D ISAR imaging. Figure 14 shows the filtered 3D ISAR imaging results of different types of Airbus aircraft. As evident in Figure 14, the Airbus A321 has a longer fuselage than the Airbus A319, which is reflected in the different detailed features.

5. Conclusions

When the length of the baseline is non-negligible in the distributed array radar, the distributions of scatters differ across the 2D ISAR images. In addition, different transceiver antennas may cause inconsistent SNRs in actual experiments. For these reasons, the correlation matching method and the max-spectrum method suffer from image mismatch in distributed array ISAR systems. To solve these problems, a novel image registration method is proposed that leverages feature extraction to build a dominant scatters model for coarse registration and then uses local correlation matching for fine registration. After image registration, amplitude and phase correction is performed on the 16-channel 2D ISAR images, and the OMP algorithm is employed for super-resolution imaging in the third dimension. Compared with traditional image registration methods, the proposed method achieves better registration accuracy under a lower SNR and different distributions of scatters. The 3D reconstruction results of different aircraft offer more detailed features, laying the foundation for target identification. In future research, we will further explore the impact of a longer baseline and a longer observation distance on 2D image registration and 3D reconstruction of the distributed array ISAR.

Author Contributions

Conceptualization, L.Z. and Y.L.; methodology, L.Z.; software, L.Z.; validation, L.Z. and Y.L.; formal analysis, L.Z. and Y.L.; investigation, L.Z.; resources, Y.L.; data curation, Y.L.; writing—original draft preparation, L.Z.; writing—review and editing, L.Z. and Y.L.; visualization, L.Z. and Y.L.; supervision, Y.L.; project administration, Y.L.; funding acquisition, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank the anonymous reviewers for their valuable comments, which improved the paper’s quality.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Chen, C.-C.; Andrews, H.C. Target-Motion-Induced Radar Imaging. IEEE Trans. Aerosp. Electron. Syst. 1980, 16, 2–14.
2. Gao, Q.; Wei, X.; Wang, Z.N.; Na, D.T. An Imaging Processing Method for Linear Array ISAR Based on Image Entropy. Appl. Mech. Mater. 2012, 128–129, 525–529.
3. Chen, S.; Li, X.; Zhao, L. Multi-source remote sensing image registration based on SIFT and optimization of local self-similarity mutual information. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016.
4. Wen, H.; Sheng, X.Y. An improved SIFT operator-based image registration using cross-correlation information. In Proceedings of the 2011 4th International Congress on Image and Signal Processing, Shanghai, China, 15–17 October 2011; pp. 869–873.
5. Huang, Q.; Jian, Y.; Wang, C.; Chen, J.; Meng, Y. Improved registration method for infrared and visible remote sensing image using NSCT and SIFT. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012.
6. Paul, S.; Pati, U.C. Remote Sensing Optical Image Registration Using Modified Uniform Robust SIFT. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1300–1304.
7. Harris, C.G.; Stephens, M.J. A combined corner and edge detector. In Proceedings of the Alvey Vision Conference, Manchester, UK, 31 August–2 September 1988.
8. Gabriel, A.K.; Goldstein, R.M. Crossed Orbit Interferometry. In Proceedings of the International Geoscience and Remote Sensing Symposium, Edinburgh, UK, 12–16 September 1988.
9. Zhu, Y.; Su, Y.; Yu, W. An ISAR Imaging Method Based on MIMO Technique. IEEE Trans. Geosci. Remote Sens. 2010, 48, 3290–3299.
10. Mcfadden, F.E. Three-dimensional reconstruction from ISAR sequences. In Proceedings of AeroSense 2002, Orlando, FL, USA, 1–5 April 2002.
11. Mayhan, J.T.; Burrows, M.L.; Cuomo, K.M.; Piou, J.E. High resolution 3D "snapshot" ISAR imaging and feature extraction. IEEE Trans. Aerosp. Electron. Syst. 2001, 37, 630–642.
12. Xu, G.; Xing, M.; Xia, X.; Zhang, L.; Chen, Q.; Bao, Z. 3D Geometry and Motion Estimations of Maneuvering Targets for Interferometric ISAR With Sparse Aperture. IEEE Trans. Image Process. 2016, 25, 2005–2020.
13. Ma, C.; Yeo, T.S.; Guo, Q.; Wei, P. Bistatic ISAR Imaging Incorporating Interferometric 3D Imaging Technique. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3859–3867.
14. Martorella, M.; Stagliano, D.; Salvetti, F.; Battisti, N. 3D interferometric ISAR imaging of noncooperative targets. IEEE Trans. Aerosp. Electron. Syst. 2014, 50, 3102–3114.
15. Jiao, Z.; Ding, C.; Liang, X.; Chen, L.; Zhang, F. Sparse Bayesian Learning Based Three-Dimensional Imaging Algorithm for Off-Grid Air Targets in MIMO Radar Array. Remote Sens. 2018, 10, 369.
16. Jiao, Z.; Ding, C.; Chen, L.; Zhang, F. Three-Dimensional Imaging Method for Array ISAR Based on Sparse Bayesian Inference. Sensors 2018, 18, 3563.
17. Nasirian, M.; Bastani, M.H. A Novel Model for Three-Dimensional Imaging Using Interferometric ISAR in Any Curved Target Flight Path. IEEE Trans. Geosci. Remote Sens. 2014, 52, 3236–3245.
18. Lewis, J. Fast normalized cross-correlation. Vis. Interface 1995, 10, 120–123.
19. Wang, G.; Xia, X.G.; Chen, V.C. Three-dimensional ISAR imaging of maneuvering targets using three receivers. IEEE Trans. Image Process. 2001, 10, 436–447.
20. Bajcsy, R.; Kovačič, S. Multiresolution elastic matching. Comput. Vis. Graph. Image Process. 1989, 46, 1–21.
21. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
22. Bay, H.; Ess, A.; Tuytelaars, T.; Gool, L.V. Speeded-Up Robust Features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359.
23. Li, J.; Ling, H. Application of adaptive chirplet representation for ISAR feature extraction from targets with rotating parts. IEE Proc.-Radar Sonar Navig. 2003, 150, 284–291.
24. Martorella, M.; Giusti, E.; Demi, L.; Zhou, Z.; Cacciamano, A.; Berizzi, F.; Bates, B. Target Recognition by Means of Polarimetric ISAR Images. IEEE Trans. Aerosp. Electron. Syst. 2011, 47, 225–239.
25. Schwind, P.; Suri, S.; Reinartz, P.; Siebert, A. Applicability of the SIFT operator to geometric SAR image registration. Int. J. Remote Sens. 2010, 31, 1959–1980.
26. Xiang, Y.; Wang, F.; You, H. An Automatic and Novel SAR Image Registration Algorithm: A Case Study of the Chinese GF-3 Satellite. Sensors 2018, 18, 672.
27. Yang, S.; Jiang, W.; Tian, B. ISAR Image Matching and 3D Reconstruction Based on Improved SIFT Method. In Proceedings of the 2019 International Conference on Electronic Engineering and Informatics (EEI), Nanjing, China, 8–10 November 2019.
28. Tondewad, M.; Dale, M.P. Remote Sensing Image Registration Methodology: Review and Discussion. Procedia Comput. Sci. 2020, 171, 2390–2399.
29. Fischler, M.A.; Bolles, R.C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. In Readings in Computer Vision: Issues, Problems, Principles, and Paradigms; Morgan Kaufmann: Burlington, MA, USA, 1987; pp. 726–740.
30. Tropp, J.A.; Gilbert, A.C. Signal Recovery from Random Measurements via Orthogonal Matching Pursuit. IEEE Trans. Inf. Theory 2007, 53, 4655–4666.
Figure 1. A geometric overview of the distributed array ISAR imaging.
Figure 2. Flowchart of the imaging process of the distributed array ISAR.
Figure 3. Flowchart of the proposed image registration method.
Figure 4. The experimental ISAR 2D images. (a) The master image with 68 feature points selected. (b) The slave image with 92 feature points selected.
Figure 5. The radar system and our experiment setup. (a) The distributed array ISAR system. (b) The observed airplane. (c) Distribution of APCs. The distance between adjacent array elements is d; the maximal baseline length is D.
Figure 6. Examples of the 16-channel 2D ISAR images. The channel-2 and channel-15 images have a lower SNR. The radar cross-section (RCS) characteristics of each channel's 2D image are inconsistent across the different viewpoints.
Figure 7. The matching results of the master image and the slave image. (a) The mapping relationship with a low SNR. The SNR of the slave image is 16.3 dB lower than that of the master image. (b) The mapping relationship with different distributions of scatters. A small number of mismatches can be seen in the matching results, as marked by the yellow arrow in (b).
Figure 8. The mapping relationship of the master image and the slave image. (a) The mapping relationship of the cross-range direction with a low-SNR image. (b) The mapping relationship of the range direction with a low-SNR image. (c) The mapping relationship of the cross-range direction with different distributions of scatters. (d) The mapping relationship of the range direction with different distributions of scatters. A small number of mismatched points are marked by the yellow boxes in (c,d).
Figure 9. Grid backup control points of the master image.
Figure 10. The correlation coefficient distribution of each channel.
Figure 11. Correlation coefficients after image registration. (a) Correlation coefficient distribution of the channel-2 image. (b) Correlation coefficient distribution of the channel-2 image only containing the scatters area.
Figure 12. Correlation coefficients after image registration. (a) Correlation coefficient distribution of the channel-9 image. (b) Correlation coefficient distribution of the channel-9 image only containing the scatters area.
Figure 13. The 3D reconstruction results of 2D ISAR images with different registration methods. (a) Results of the correlation matching method. (b) Results of the max-spectrum method. (c) Results of the SAR-SIFT method. (d) Results of the proposed method.
Figure 14. The 3D reconstruction results of the airplanes with the X-band distributed array radar system. (a) Microwave image of Airbus A319. (b) Optical image of Airbus A319. (c) Microwave image of Airbus A321. (d) Optical image of Airbus A321.
Table 1. The RANSAC algorithm used to eliminate mismatching in feature extraction.
The RANSAC Algorithm Flow
1. Randomly select two points from the dataset and substitute them into the fitting equation.
2. Compute the Euclidean distance between each matching point and its position predicted by the fitted model.
3. Record the points whose Euclidean distance is less than the threshold as inliers, and count the number of inliers.
4. After repeating Steps 1 to 3 K times, take the model with the largest number of inliers as the final fitting parameters.
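The flow in Table 1 can be sketched in a few lines of code. The sketch below assumes the fitted model is a straight line through the matched feature coordinates; the function name, threshold, and iteration count are illustrative choices, not values from the paper.

```python
import numpy as np

def ransac_line(points, k=200, threshold=1.0, seed=0):
    """Fit y = a*x + b by RANSAC, following the four steps of Table 1.

    points : (N, 2) array of (x, y) matching-point coordinates.
    Returns ((a, b), n_inliers) for the model with the most inliers.
    """
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, -1
    for _ in range(k):
        # Step 1: randomly select two points and fit a line through them.
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:          # degenerate pair, cannot fit y = a*x + b
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # Step 2: distance of every matching point from the fitted model.
        dist = np.abs(points[:, 1] - (a * points[:, 0] + b))
        # Step 3: points closer than the threshold are inliers.
        n_inliers = int(np.sum(dist < threshold))
        # Step 4: keep the model with the largest number of inliers.
        if n_inliers > best_inliers:
            best_model, best_inliers = (a, b), n_inliers
    return best_model, best_inliers
```

In the registration context, `points` would hold the coordinates of the matched feature pairs, and only the surviving inliers would be passed on to the fine-registration stage.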
Table 2. Configuration of the distributed array radar system.
Parameter                    Symbol   Value
Carrier frequency            f_c      10 GHz
Bandwidth                    B_w      2 GHz
Pulse repetition frequency   PRF      2.5 kHz
Reference range              R_ref    850 m
Number of APCs               N_p      16
Maximum baseline             D        10.8 m
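From the parameters in Table 2, the nominal wavelength, slant-range resolution, and unambiguous range follow from the standard relations λ = c/f_c, δr = c/(2B), and R_u = c/(2·PRF). A quick sanity-check sketch (the parameter values are taken from the table; the variable names are illustrative):

```python
C = 299_792_458.0   # speed of light, m/s

f_c = 10e9          # carrier frequency, Hz (Table 2)
bandwidth = 2e9     # signal bandwidth, Hz (Table 2)
prf = 2.5e3         # pulse repetition frequency, Hz (Table 2)

wavelength = C / f_c                      # ~3 cm at X-band
range_resolution = C / (2 * bandwidth)    # ~7.5 cm
unambiguous_range = C / (2 * prf)         # ~60 km, well beyond the 850 m reference range

print(f"wavelength        : {wavelength * 100:.1f} cm")
print(f"range resolution  : {range_resolution * 100:.2f} cm")
print(f"unambiguous range : {unambiguous_range / 1000:.1f} km")
```

The 2 GHz bandwidth thus supports centimeter-level range resolution, consistent with resolving individual dominant scatters on the airplane.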
Table 3. The relation between pixels and correlation coefficients of the channel-2 image and the channel-8 image.
ρ                            0~0.80   0.80~0.85   0.85~0.90   0.90~0.95   0.95~1
Correlation Matching Method  43,619   1090        402         184         201
Max-Spectrum Method          43,638   813         440         381         224
SAR-SIFT Method              40,914   2362        1135        539         546
Proposed Method              37,211   2159        2086        2273        1767
Table 4. The relation between pixels and correlation coefficients of the channel-9 image and the channel-8 image.
ρ                            0~0.80   0.80~0.85   0.85~0.90   0.90~0.95   0.95~1
Correlation Matching Method  31,135   3475        4118        4122        2647
Max-Spectrum Method          30,907   3305        3967        4426        2891
SAR-SIFT Method              31,210   4524        4700        3860        1202
Proposed Method              29,902   3243        3409        3930        5012
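The correlation coefficients ρ tabulated above are the standard normalized cross-correlation between co-located patches of the registered images (cf. the fast NCC of [18]). A minimal sketch of that statistic, with an illustrative function name:

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation coefficient of two equal-size patches.

    Returns a value in [-1, 1]; values near 1 mean the local patches
    agree up to a linear intensity change, i.e., good registration.
    """
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0:            # flat patch: correlation undefined, report 0
        return 0.0
    return float((a * b).sum() / denom)
```

Counting how many pixels' local windows fall into each ρ bin yields histograms like those in Tables 3 and 4, so a larger 0.95~1 count indicates better registration.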
Table 5. The OMP algorithm used for super-resolution imaging.
The OMP Algorithm Flow
1. Initialize the residual r_0 = y, the index set Λ_0 = ∅, the atom set A_0 = ∅, and t = 1.
2. Find the index λ_t of the atom with the largest correlation to the residual: λ_t = arg max_{j=1,2,…,N} |⟨r_{t−1}, a_j⟩|.
3. Update Λ_t = Λ_{t−1} ∪ {λ_t} and A_t = A_{t−1} ∪ {a_{λ_t}}.
4. Find the least-squares solution Θ̂_t = arg min_{Θ_t} ‖y − A_t Θ_t‖ = (A_t^T A_t)^{−1} A_t^T y.
5. Update the residual r_t = y − A_t Θ̂_t = y − A_t (A_t^T A_t)^{−1} A_t^T y.
6. Set t = t + 1; if t ≤ K, return to Step 2; otherwise, stop the iteration.
7. After the last iteration, Θ̂_t gives the non-zero terms of the reconstruction at the indices in Λ_t.
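As a concrete sketch of the flow above, a minimal NumPy implementation could look as follows. The interface and variable names are assumptions; the measurement matrix and the sparsity level K would come from the super-resolution imaging setup and are not reproduced here.

```python
import numpy as np

def omp(A, y, K):
    """Orthogonal Matching Pursuit following the steps of Table 5.

    A : (M, N) measurement matrix (ideally with unit-norm columns).
    y : (M,) measurement vector.
    K : number of iterations (target sparsity level).
    Returns the sparse coefficient estimate theta of length N.
    """
    M, N = A.shape
    r = y.copy()                  # Step 1: residual r_0 = y
    support = []                  # Lambda_0 = empty index set
    for _ in range(K):            # Steps 2-6, K iterations
        # Step 2: atom most correlated with the current residual.
        lam = int(np.argmax(np.abs(A.T @ r)))
        # Step 3: enlarge the support (guard against re-selection).
        if lam not in support:
            support.append(lam)
        A_t = A[:, support]
        # Step 4: least-squares estimate on the current support.
        theta_t, *_ = np.linalg.lstsq(A_t, y, rcond=None)
        # Step 5: update the residual.
        r = y - A_t @ theta_t
    # Step 7: place the non-zero terms at the indices in Lambda_t.
    theta = np.zeros(N)
    theta[support] = theta_t
    return theta
```

Note that Step 4 is written with `lstsq` rather than the explicit normal-equation inverse (A_t^T A_t)^{−1} A_t^T y; the two are mathematically equivalent for full-rank A_t but the former is numerically safer.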
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Zhang, L.; Li, Y. An Image Registration Method Based on Correlation Matching of Dominant Scatters for Distributed Array ISAR. Sensors 2022, 22, 1681. https://doi.org/10.3390/s22041681