Article

A New Synthetic Aperture Radar (SAR) Imaging Method Combining Match Filter Imaging and Image Edge Enhancement

School of Electronics and Information Engineering, Beihang University, Beijing 102200, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2018, 18(12), 4133; https://doi.org/10.3390/s18124133
Submission received: 22 October 2018 / Revised: 17 November 2018 / Accepted: 22 November 2018 / Published: 26 November 2018
(This article belongs to the Section Remote Sensors)

Abstract

In general, synthetic aperture radar (SAR) image formation and image processing are carried out as two sequential steps. Due to the large size of SAR images, most image processing algorithms require image segmentation before further processing. However, the speckle noise in SAR images, as well as poor contrast and the uneven distribution of gray values within the same target, make SAR images difficult to segment. In order to facilitate the subsequent processing of SAR images, this paper proposes a new method that combines the back-projection algorithm (BPA) with a first-order gradient operator to enhance the edges of SAR images and thereby ease the segmentation problem. Because the signal is complex-valued, the gradient operator is applied directly within the imaging process. Experimental results on simulated and real images validate the proposed method. For the simulated scene, the supervised image segmentation evaluation indexes of our method show improvements of more than 1.18%, 11.2%, and 11.72% in the probabilistic Rand index (PRI), variability index (VI), and global consistency error (GCE), respectively. The proposed imaging method will make SAR image segmentation and related applications easier.

1. Introduction

In the last several decades, synthetic aperture radar (SAR) has attracted much attention for radar imaging due to the need for all-weather observation of the Earth. SAR imaging algorithms and image processing are two significant research topics. Studies on SAR imaging have produced many algorithms, such as the back-projection algorithm (BPA), chirp scaling algorithm (CSA), range-Doppler algorithm (RDA), and so on [1,2]. The BPA is commonly applied to obtain the most accurate result under arbitrary trajectories; in fact, for some special imaging geometries, the BPA is the only option. However, the BPA is quite time-consuming, and there have been many attempts to improve its speed [3,4,5]. Other algorithms, such as the range-Doppler and chirp scaling algorithms, use various approximations to accelerate the computation; they are faster than the BPA and can also achieve accurate results. Aside from image formation, there are various other SAR image applications, including edge detection, speckle reduction, image segmentation, and target recognition. Among them, SAR edge detection and image segmentation are two fundamental procedures. Different from most of the popular methods, the method proposed herein fuses SAR imaging with a first-order gradient operation to improve subsequent edge detection and image segmentation. In general, SAR image edge detection methods can be divided into filtering approaches, hypothesis testing methods, and other similar procedures. Commonly used edge detection operators include the Roberts, Sobel, Prewitt, Laplacian, and Canny operators, among many others [6]. Meanwhile, for SAR image segmentation, the most frequently utilized methods are the Markov random field method [7], the partial differential equation method [8], and the clustering method [9]. However, because of the characteristics of SAR images, it is still difficult for SAR edge detection and image segmentation to achieve satisfactory results directly on the original images. To overcome this issue, some algorithms have been proposed to improve the results. For instance, in [10,11,12], the authors suggested algorithms for SAR image edge detection, and in [13,14,15,16], the authors provided algorithms for SAR image segmentation. However, these algorithms were built upon characteristic SAR images and work only under specific conditions and assumptions. In addition, Yanik et al. proposed an approach for the direct segmentation of SAR images during imaging, which is a kind of filtered back-projection (FBP) [17]. Pena, Garza, and Qiao [18,19] successfully processed real echo data using this kind of algorithm.
In this paper, we propose a new imaging method that combines SAR image reconstruction and image processing. SAR echo data usually contain a lot of noise, so it is difficult to proceed with raw echo data directly. However, the combination of SAR imaging and image processing could mitigate this problem during SAR application. In this study, we used the BPA approach because of its high precision [20]. Furthermore, by using the mentioned SAR image edge detection and image segmentation methods on the obtained edge-enhanced SAR image, the processed result can be improved further. Although the obtained image contains complex-valued data, the edge detection and image segmentation algorithms were implemented using real-valued data, i.e., the amplitude of the complex-valued SAR image.
This paper is organized as follows. Section 2 reviews the traditional BPA, the gradient operation for edge detection, and the selected edge detection and image segmentation algorithms. Section 3 describes the proposed algorithm in detail. Section 4 presents the evaluation indexes and the simulation results that demonstrate the effectiveness of the algorithm. Section 5 discusses the advantages and limitations of the proposed algorithm, and Section 6 concludes the paper.

2. The Traditional Algorithm

In this section, the basic concepts and terminology related to SAR image processing are introduced, including the traditional back-projection algorithm (BPA), gradient operators, and selected SAR edge detection and segmentation algorithms.

2.1. Back-Projection Algorithm

To better understand the proposed algorithm, we first introduce the traditional BPA. The model of SAR echo data can be expressed by
d(t,s) = \int_{z \in \mathrm{target}} V(z)\, A[z, x_0(s), e(s)]\, e^{\,i\pi\gamma\,[t - t_n(s)]^2}\, dz
where t is the fast time, s is the slow time, V ( z ) is the reflectivity function of the target, A [ z , x 0 ( s ) , e ( s ) ] is the weight function corresponding to the antenna, x 0 ( s ) is the antenna position, e ( s ) is the normalized vector of the antenna, γ is the chirp ratio, and t n ( s ) is the time delay of the n-th target point. The issue of image reconstruction lies in solving V ( z ) from the echo data. Based on imaging theory, the model of a SAR image can be expressed by
I(z_t) = \iint e^{\,i\,2w\,|z_t - x_0(s)|/c}\, Q(w,s,y)\, D(w,s)\, dw\, ds
where z t is the point target position in the fast time domain, w is the frequency from the spectrum, y is the coordinate, and D ( w , s ) is the echo data expression ( d ( t , s ) ) in the frequency domain. The reflectivity function V ( z ) can be solved by Equation (2).
D ( w , s ) can be represented as follows:
D(w,s) = \int e^{-i\,2w\,|z - x_0(s)|/c}\, A(w,s,z)\, V(z)\, dz
where A ( w , s , z ) includes γ . Considering the echo data, we can achieve the following [21]:
I(z_t) = \iiint e^{\,2ik\left(|R_{s,z_t}| - |R_{s,z}|\right)}\, Q(w,s,z)\, A(w,s,z)\, V(z)\, dw\, ds\, dz
where $R_{s,z} = z - x_0(s)$, with $|R_{s,z}|$ the distance between the antenna and the target, while $R_{s,z_t} = z_t - x_0(s)$, with $|R_{s,z_t}|$ the distance between the antenna and a specific point in the imaging area. According to [21], one of the key properties of the imaging method is to form
\iint e^{\,2ik\left(|R_{s,z_t}| - |R_{s,z}|\right)}\, Q(w,s,z)\, A(w,s,z)\, dw\, ds \;\approx\; \int e^{\,i[z_t - x_0(s)]\cdot\xi}\, d\xi
Through a mathematical derivation, we can get
Q = \chi(w,s,z_t)\,\frac{\left|\partial \xi / \partial(s,w)\right|}{A(w,s,z_t)},
which is called a filter or a weight function corresponding to the antenna in the frequency domain. Here, $\chi(w,s,z_t)$ is a smooth cutoff function that prevents division by zero, and $\xi = 2k\,\rho\big(z_t - x_0(s)\big)$, with the operator $\rho$ projecting a vector onto its first two components. Furthermore, we have
I(z_t) = \iint e^{\,i[z_t - x_0(s)]\cdot\xi}\, d\xi\; V(z)\, dz
where x 0 ( s ) denotes the antenna location, and z t is the z-coordinate of the target. In the traditional BPA, the first step is range matching filtering. Then, the time delay of the doubled distance between the target and SAR platform is calculated to compensate for the phase factor. The last step is a summation of all signals to form the SAR image.
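For readers who prefer an algorithmic view, the following is a minimal sketch of the BPA loop just described: range-compressed data are read at the two-way delay of each pixel, the propagation phase is compensated, and all pulses are summed coherently. The geometry, parameter values, and the placeholder echo array are illustrative assumptions, not the configuration used in this paper.

```python
import numpy as np

c = 3e8                                   # speed of light [m/s]
fc = 9.6e9                                # carrier frequency [Hz] (assumed)
wavelength = c / fc

# Assumed straight-line trajectory: platform moves along x at 100 m/s, height 500 m.
slow_time = np.linspace(-0.5, 0.5, 256)
antenna_pos = np.stack([100.0 * slow_time,
                        np.zeros_like(slow_time),
                        500.0 * np.ones_like(slow_time)], axis=1)

# Range-compressed (matched-filtered) echo: data[pulse, range_bin], complex-valued.
n_rbins, r0, dr = 512, 400.0, 0.5         # bin count, first-bin range [m], bin spacing [m]
data = np.zeros((slow_time.size, n_rbins), dtype=complex)   # placeholder echo

# Imaging grid on the ground plane (assumed extent).
gx, gy = np.meshgrid(np.linspace(-50.0, 50.0, 200), np.linspace(450.0, 550.0, 200))
image = np.zeros(gx.shape, dtype=complex)

for p, xa in enumerate(antenna_pos):
    # Distance from this antenna position to every pixel of the grid.
    R = np.sqrt((gx - xa[0])**2 + (gy - xa[1])**2 + xa[2]**2)
    # Read the range-compressed pulse at the corresponding two-way delay (range bin).
    rbin = (R - r0) / dr
    samp = np.interp(rbin, np.arange(n_rbins), data[p].real) \
         + 1j * np.interp(rbin, np.arange(n_rbins), data[p].imag)
    # Compensate the two-way propagation phase and accumulate coherently.
    image += samp * np.exp(1j * 4.0 * np.pi * R / wavelength)

amplitude = np.abs(image)   # real-valued image used later for edge detection/segmentation
```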
After the SAR image is formed, it needs to be processed further before it can be used effectively. The image edge reflects border information, which mainly refers to discontinuities within a region, such as a change in gray value or in texture structure. Edge information is therefore of great help in utilizing the image.

2.2. SAR Image Edge Detection

In SAR image edge detection, the first-order differential and second-order differential are usually computed to detect edges. Many popular differential operators, such as the Roberts and Canny operators, are widely used for image edge detection. However, because of the intensity inhomogeneities and speckle noise of SAR images, direct application of these operators usually produces too many false edges; therefore, they cannot be used directly on SAR images in practice. Here, we mainly analyze the gradient method, considering the first-order gradient operator, the second-order Gauss–Laplace operator, and the Canny operator. The first-order gradient operator computes the vertical and horizontal gradients of the image. In practice, however, the edge direction is not always horizontal or vertical, so gradients in four directions are generally computed and combined to obtain the final result. The second-order operator combines Gaussian smoothing with the Laplacian and is called the LoG operator; it is obtained by applying the Laplacian detection to the Gaussian-smoothed image. The Canny operator uses multiple steps to complete edge extraction and is more complex than the other operators: Gaussian filtering is first performed to smooth the image and reduce the influence of noise on gradient extraction, and then the image gradient is obtained. However, since a large number of false and repetitive edges are generated during these calculations, subsequent image segmentation becomes very difficult. Figure 1 and Figure 2 show the simulation results.
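As a concrete illustration of why these classical operators struggle on speckled data, the short sketch below applies a first-order (Sobel) gradient, a LoG filter, and a Canny detector to a synthetic speckled image. The scikit-image/scipy function names and the single-look multiplicative speckle model are assumptions made for illustration; this is not the exact experiment shown in Figures 1 and 2.

```python
import numpy as np
from scipy import ndimage
from skimage import feature, filters

rng = np.random.default_rng(0)

# Synthetic scene: a bright rectangle on a darker background, corrupted by
# single-look multiplicative speckle (an assumed, simplified noise model).
scene = np.full((256, 256), 0.3)
scene[80:180, 60:200] = 1.0
speckled = scene * rng.exponential(1.0, scene.shape)

# First-order gradient magnitude (Sobel).
grad_mag = filters.sobel(speckled)

# Second-order LoG response (Gaussian smoothing followed by the Laplacian).
log_resp = ndimage.gaussian_laplace(speckled, sigma=2.0)

# Canny: Gaussian smoothing, gradient, non-maximum suppression, hysteresis thresholding.
canny_edges = feature.canny(speckled, sigma=2.0)

# Thresholding grad_mag or log_resp on such data typically produces many false edges,
# which motivates enhancing the edges during image formation instead.
```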
In order to detect the edges of a SAR image effectively, it is reasonable to first enhance the edges with a first-order differential operator and then apply a suitable algorithm to detect them. Moreover, here we combine the first-order differential operator with the imaging operator to reduce the influence of speckle noise. In this way, the first-order filter is used more effectively, and the obtained SAR image is easier to utilize. SAR image edge detection is then performed by employing the approach called the primal sketch [22].
Furthermore, to utilize SAR data more effectively, many SAR applications perform SAR image segmentation first. Like SAR image edge detection, SAR image segmentation also suffers from many problems, and classic methods may fail on a SAR image due to its characteristics. Enhancing the edges of a SAR image within the BPA can contribute to SAR image segmentation. In this study, the Chan–Vese (CV) level set method was employed to accomplish SAR image segmentation.

2.3. Edge Detection Using Multiscale Edge Features

SAR image edge detection is performed via a series of multiscale Gaussian filters through which we can get the edges for a specific scale:
L = f * g(:,:,t)
where $f$ is the image, $g(:,:,t)$ is a Gaussian kernel, and $t = \sigma^2$ is the scale (bandwidth) of the Gaussian filters, with $\sigma$ the standard deviation. Then, the gradient method is applied by combining the multiscale images. For a specific $t$, we seek the corresponding edges. Let $G_{norm}L = t^{\gamma}\,(L_x^2 + L_y^2)$ and $\gamma = 0.5$. The specific value of $\gamma$ used is derived from analytical edge models and the constraint that $G_{norm}L$ is maximized at the characteristic scale of the edge. Then, the edge detection condition can be defined by
L_{vv} = 0, \quad L_{vvv} < 0, \quad \frac{\partial (G_{norm}L)}{\partial t} = 0, \quad \frac{\partial^2 (G_{norm}L)}{\partial t^2} < 0
where v represents the gradient direction. For each edge in a single image, we can get the corresponding scale t, so the edge can be detected.
The detected multiscale edge features are then used to obtain the candidate lines, which are further refined to get the final detection result.
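A minimal sketch of the multiscale edge-strength computation is given below. It builds the Gaussian scale space and, per pixel, keeps the scale that maximizes the gamma-normalized gradient strength; this per-pixel maximization is a simplification of the full condition above, and the scale list and the use of Sobel derivatives are assumptions made for illustration rather than the detector of [22].

```python
import numpy as np
from scipy import ndimage

def multiscale_edge_strength(image, scales=(1.0, 2.0, 4.0, 8.0), gamma=0.5):
    """Gamma-normalized gradient strength over a Gaussian scale space (simplified)."""
    best_strength = np.zeros(image.shape, dtype=float)
    best_scale = np.zeros(image.shape, dtype=float)
    for t in scales:                                    # t = sigma^2 is the scale parameter
        L = ndimage.gaussian_filter(image.astype(float), np.sqrt(t))   # L = f * g(:,:,t)
        Lx = ndimage.sobel(L, axis=1)
        Ly = ndimage.sobel(L, axis=0)
        strength = t**gamma * (Lx**2 + Ly**2)           # G_norm(L)
        stronger = strength > best_strength
        best_strength[stronger] = strength[stronger]
        best_scale[stronger] = t                        # characteristic scale per pixel
    return best_strength, best_scale

# Usage sketch: thresholding `best_strength` (or applying non-maximum suppression along
# the gradient direction) on a SAR amplitude image yields candidate edge pixels.
```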

2.4. Level Set Method with Intensity Inhomogeneities

Level set methods for image segmentation have many successful applications. In order to deal with SAR images, we add some SAR-specific characteristics to the model. The original energy function of the level set method can be described as follows [13]:
E_{CV}(c_1, c_2, C) = \mu\,\mathrm{Length}(C) + \lambda_1 \int_{\mathrm{inside}(C)} \left|\mu_0(x,y) - c_1\right|^2 dx\,dy + \lambda_2 \int_{\mathrm{outside}(C)} \left|\mu_0(x,y) - c_2\right|^2 dx\,dy
where μ , λ 1 , and λ 2 are all positive values used to balance different items; c 1 is the mean value of the foreground area; c 2 is the mean value of the background area; and L e n g t h is the length of the curve. The physics-related meaning of this equation is that a curve is used to separate the image into foreground and background areas, and the selected curve is the one that minimizes the function.
In order to optimize the energy function, the function can be rewritten as
E_{CV}(c_1, c_2, C) = \mu \int_{\Omega} \delta_{\varepsilon}(\phi(x,y))\,\left|\nabla\phi(x,y)\right| dx\,dy + \lambda_1 \int_{\Omega} \left|\mu_0(x,y) - c_1\right|^2 H_{\varepsilon}(\phi(x,y))\,dx\,dy + \lambda_2 \int_{\Omega} \left|\mu_0(x,y) - c_2\right|^2 \left(1 - H_{\varepsilon}(\phi(x,y))\right) dx\,dy
where $\phi$ is a Lipschitz level set function whose zero level set represents the curve $C$, $\nabla$ is the gradient operator, and $H_{\varepsilon}(z)$ and $\delta_{\varepsilon}(z)$ are defined as
H_{\varepsilon}(z) = \frac{1}{2}\left[1 + \frac{2}{\pi}\arctan\left(\frac{z}{\varepsilon}\right)\right]
\delta_{\varepsilon}(z) = \frac{1}{\pi}\,\frac{\varepsilon}{\varepsilon^2 + z^2}
Applying a gradient to Equation (10), we get
\frac{\partial\phi}{\partial t} = \delta_{\varepsilon}(\phi)\left[\mu\,\mathrm{div}\!\left(\frac{\nabla\phi}{|\nabla\phi|}\right) - \lambda_1\left(\mu_0 - c_1\right)^2 + \lambda_2\left(\mu_0 - c_2\right)^2\right]
where d i v represents divergence. The final update functions are given by
c_1(\phi) = \frac{\int I(x,y)\,H_{\varepsilon}(\phi(x,y))\,dx\,dy}{\int H_{\varepsilon}(\phi(x,y))\,dx\,dy}, \qquad c_2(\phi) = \frac{\int I(x,y)\,\left[1 - H_{\varepsilon}(\phi(x,y))\right] dx\,dy}{\int \left[1 - H_{\varepsilon}(\phi(x,y))\right] dx\,dy}
To deal with intensity inhomogeneities, a bias field is added. For a SAR image, the bias field is defined as follows:
I = b \cdot J
where J is the true image without intensity inhomogeneities, and b is the bias field. We use the logarithmic transformation to change multiplication into addition. Considering the bias field model, Equation (10) can be rewritten as
E_{CV}(c_1, c_2, C) = \mu\,\mathrm{Length}(C) + \lambda_1 \int_{\mathrm{inside}(C)} \left|\mu_0(x,y) - b(x,y) - c_1\right|^2 dx\,dy + \lambda_2 \int_{\mathrm{outside}(C)} \left|\mu_0(x,y) - b(x,y) - c_2\right|^2 dx\,dy
Thus, the final update functions are defined by
c_1(\phi) = \frac{\int \left(\mu_0(x,y) - b(x,y)\right) H_{\varepsilon}(\phi(x,y))\,dx\,dy}{\int H_{\varepsilon}(\phi(x,y))\,dx\,dy}, \qquad c_2(\phi) = \frac{\int \left(\mu_0(x,y) - b(x,y)\right)\left[1 - H_{\varepsilon}(\phi(x,y))\right] dx\,dy}{\int \left[1 - H_{\varepsilon}(\phi(x,y))\right] dx\,dy}
b(x,y) = \left(\mu_0(x,y) - c_1\right) H_{\varepsilon}(\phi(x,y)) + \left(\mu_0(x,y) - c_2\right)\left(1 - H_{\varepsilon}(\phi(x,y))\right)
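The sketch below illustrates the alternating updates of this model: evolve the level set, re-estimate the region constants, and re-estimate the bias field on the log-transformed image. The initialization, step sizes, and the Gaussian smoothing used to keep the bias field slowly varying are assumptions of this sketch, not the exact scheme of the cited method.

```python
import numpy as np
from scipy import ndimage

def heaviside(z, eps=1.0):
    # Regularized Heaviside H_eps used by the CV model.
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(z / eps))

def dirac(z, eps=1.0):
    # Regularized delta, the derivative of the Heaviside above.
    return (eps / np.pi) / (eps**2 + z**2)

def curvature(phi):
    # div(grad(phi) / |grad(phi)|), computed with central differences.
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx**2 + gy**2) + 1e-8
    ny_y, _ = np.gradient(gy / norm)
    _, nx_x = np.gradient(gx / norm)
    return nx_x + ny_y

def cv_bias_segmentation(img, n_iter=300, mu=0.2, lam1=1.0, lam2=1.0, dt=0.5, eps=1.0,
                         bias_sigma=10.0):
    """Sketch of a Chan-Vese level set with an additive (log-domain) bias field."""
    u0 = np.log1p(img.astype(float))          # log transform: multiplicative bias -> additive
    h, w = u0.shape
    yy, xx = np.mgrid[:h, :w]
    phi = min(h, w) / 4.0 - np.sqrt((yy - h / 2.0)**2 + (xx - w / 2.0)**2)  # initial circle
    b = np.zeros_like(u0)                     # bias field estimate
    for _ in range(n_iter):
        H = heaviside(phi, eps)
        # Region means of the bias-corrected image.
        c1 = ((u0 - b) * H).sum() / (H.sum() + 1e-8)
        c2 = ((u0 - b) * (1 - H)).sum() / ((1 - H).sum() + 1e-8)
        # Bias update; the smoothing that keeps b slowly varying is an added assumption.
        b = ndimage.gaussian_filter((u0 - c1) * H + (u0 - c2) * (1 - H), bias_sigma)
        # Level set evolution with the bias-corrected data terms.
        force = mu * curvature(phi) - lam1 * (u0 - b - c1)**2 + lam2 * (u0 - b - c2)**2
        phi = phi + dt * dirac(phi, eps) * force
    return phi > 0                            # foreground mask
```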

3. New Method Combining Matched Filtering and Edge Enhancement

The differential operation should be performed on $I(z_t)$ to get the image edge; the resulting filter $Q^{*}$ embodies the influence of the differential operation within the imaging process. Computing the differential of both sides of Equation (6), we get
\nabla_z I(z_t) = \iint (i\xi)\, e^{\,i(z_t - x_0(s))\cdot\xi}\, d\xi\; V(z)\, dz
where $\nabla_z$ represents the differential (gradient) operator with respect to $z$. Considering the direction of the SAR image, we can add the direction vector $\hat{u}$ to Equation (18):
\hat{u}\cdot\nabla_z I(z_t) = \iint \hat{u}\cdot(i\xi)\, e^{\,i(z_t - x_0(s))\cdot\xi}\, d\xi\; V(z)\, dz
Thus, we get the expression for Q * :
Q^{*} = \hat{u}\cdot(i\xi)\,Q
where u ^ is defined as
\hat{u} = \hat{u}_{p1} \ \text{or}\ \hat{u}_{p2}, \qquad \hat{u}_{p1} = [1, 0], \qquad \hat{u}_{p2} = [0, 1]
Therefore, the final SAR image expression is defined by
\hat{u}\cdot\nabla_z I(z_t) = \iint \left[\hat{u}_{p1} + \hat{u}_{p2}\right]\cdot(i\xi)\, e^{\,i(z - z_t)\cdot\xi}\, d\xi\; V(z)\, dz
If the pixels of a target area all had the same value, the edge could be obtained directly; in practice, however, the gray level varies within the area. By exploiting $Q^{*}$, we can enhance the edges of the SAR image within the imaging algorithm, which contributes to SAR edge detection. By changing the direction vector $\hat{u}$, we can enhance either the azimuth direction or the range direction. Although the signal during imaging is complex-valued, the new operator can be added to the BPA directly; therefore, Equation (22) is in complex-valued form. In order to enhance the edges in all directions, we compute the differential operation in different directions and eventually synthesize the results into the final result. The image synthesis procedure combines the range-direction result and the azimuth-direction result by adding them pixel by pixel:
I = I_{range} + I_{azimuth}
where I is the final result, I r a n g e is the edge-enhanced result in the range direction, and I a z i m u t h is the edge-enhanced result in the azimuth direction.
The detailed flowchart of the proposed algorithm is presented in Figure 3, and the corresponding procedures are presented in Table 1.
In practice, the algorithm is performed in two steps. Firstly, after range compression, the image is filtered by matched filtering in the range direction; then, the phase factor is compensated for according to the time delay. Secondly, the echo data is filtered by matched filtering in the azimuth direction, and during the compensation of the phase factor, the first-order filter is added in the azimuth direction. Finally, these two results are combined to form the final edge-enhanced SAR image.
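The sketch below illustrates one way to realize the two-pass procedure just described, reusing the variable names from the BPA sketch in Section 2.1. Evaluating the spatial-frequency vector $\xi$ at the carrier wavenumber and taking the amplitudes of the two one-direction images before adding them are simplifying assumptions of this sketch; it is not the authors' exact filter $Q^{*}$.

```python
import numpy as np

def edge_enhanced_backprojection(data, antenna_pos, gx, gy, r0, dr, wavelength):
    """Sketch of the two-pass, edge-enhanced back-projection described above.

    Reuses the names of the BPA sketch in Section 2.1 (range-compressed `data`,
    `antenna_pos`, imaging grid `gx`/`gy`, first-bin range `r0`, bin spacing `dr`).
    """
    k = 2.0 * np.pi / wavelength
    n_rbins = data.shape[1]
    img_az = np.zeros(gx.shape, dtype=complex)   # along-track (azimuth, x) gradient image
    img_rg = np.zeros(gx.shape, dtype=complex)   # cross-track (range, y) gradient image
    for p, xa in enumerate(antenna_pos):
        R = np.sqrt((gx - xa[0])**2 + (gy - xa[1])**2 + xa[2]**2)
        rbin = (R - r0) / dr
        samp = np.interp(rbin, np.arange(n_rbins), data[p].real) \
             + 1j * np.interp(rbin, np.arange(n_rbins), data[p].imag)
        samp = samp * np.exp(1j * 4.0 * np.pi * R / wavelength)   # usual phase compensation
        # First-order filter u_hat . (i*xi), with xi ~ 2k*rho(z_t - x0(s)) evaluated at
        # the carrier wavenumber -- a simplifying assumption of this sketch.
        xi_x = 2.0 * k * (gx - xa[0]) / R
        xi_y = 2.0 * k * (gy - xa[1]) / R
        img_az += (1j * xi_x) * samp              # u_hat = [1, 0]
        img_rg += (1j * xi_y) * samp              # u_hat = [0, 1]
    # Pixel-by-pixel synthesis of the two one-direction results; taking amplitudes
    # before adding is an additional assumption of this sketch.
    return np.abs(img_rg) + np.abs(img_az)
```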
Through the proposed imaging algorithm, which combines the gradient operation and two-dimensional focus, we can enhance the edges of SAR images. However, the edge-enhanced images still need further processing to get the edge. Therefore, here, a specific algorithm was chosen to accomplish edge detection and image segmentation.

4. Simulations and Analysis

In this section, the experiments performed using the proposed algorithm are presented, and the obtained experimental results are compared with the results obtained by the traditional imaging procedure. In order to validate the effectiveness of the proposed algorithm, the proposed algorithm was used on both a geometric image and a real scene. SAR image edge detection was carried out by utilizing the approach called the primal sketch. Also, SAR image segmentation was applied to the obtained edge-enhanced image via the CV level set method.

4.1. Edge-Enhanced Back-Projection Algorithm

The results of a rectangular area obtained by performing the procedure for only one direction—the range direction or the azimuth direction—are presented in Figure 4, wherein it can be seen that the edge in the respective direction is enhanced. The final imaging result of the proposed algorithm was synthesized from the two one-direction results, so the edges in both directions were enhanced.
To evaluate the results further, we performed edge detection and image segmentation.

4.2. SAR Edge Detection

Figure 5 demonstrates the imaging and edge detection results for the rectangular area. The edge of the rectangle formed by the new algorithm is more distinct. As for edge detection, both results are acceptable because the edge of the rectangular area can be detected easily. Figure 6 shows similar phenomena for the circular area. Figure 7 is divided into nine blocks, each having a different grayscale and shape. Comparing Figure 7a,c, we can see that the edges of the SAR image generated by the proposed method are clearer. After edge detection, this advantage is more intuitive, especially for the edges inside the image. The red circle in Figure 7d marks areas that are absent from Figure 7b, which shows that the edges of the image generated by this algorithm are more easily detected. This is also reflected by the quantitative evaluation indexes in Table 2: the mean square error (MSE) is improved by 19.13% and Pratt's figure of merit (FOM) by 11.91%. At the same time, the problem of incomplete edge detection remains in Figure 7, but this is a general difficulty of SAR image edge detection. The above-mentioned scenes were simulated for the sake of supervised evaluation. In order to further validate the effectiveness of the proposed algorithm, we applied it to complicated real scenes. Figure 8 demonstrates the imaging and edge detection results of a complicated real area. The result for this scene has enhanced edges, which verifies the proposed method. The edge detection result shows that the edge-enhanced imaging result does contribute to edge detection: even in a complicated real scene, the edges are well detected compared with the edge detection result of the traditional method. For example, the edge in the red box in Figure 8d is more complete than that in Figure 8b. Figure 9 shows similar results.
To evaluate the edge detection of the simulated geometry, we used the concept of the mean square error and quality factor. The mean square error measures the difference between two images and is defined as
\mathrm{MSE} = \frac{1}{MN}\sum_{i,j}\left[X(i,j) - Y(i,j)\right]^2
where $X(i,j)$ is the true edge map of the image, $Y(i,j)$ is the detected edge map, and $M$ and $N$ are the numbers of columns and rows of the image. The quality factor is defined as
\mathrm{FOM} = \frac{1}{\max(X_N, Y_N)}\sum_{i=1}^{Y_N}\frac{1}{1 + \alpha d_i^2}
where $X_N$ is the number of edge points in the reference image, $Y_N$ is the number of edge points in the detection result, $\alpha$ is a constant, and $d_i$ is the geometric distance between the $i$-th detected edge point and the corresponding true edge point.
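A compact way to compute these two supervised indexes is sketched below. Using a Euclidean distance transform to obtain $d_i$ and setting $\alpha = 1/9$ are common conventions assumed here, not values stated in this paper.

```python
import numpy as np
from scipy import ndimage

def edge_mse(reference_edges, detected_edges):
    # Mean square error between a binary reference edge map and a detected edge map.
    X = reference_edges.astype(float)
    Y = detected_edges.astype(float)
    return np.mean((X - Y) ** 2)

def pratt_fom(reference_edges, detected_edges, alpha=1.0 / 9.0):
    # Pratt's figure of merit: each detected edge pixel is scored by its distance d_i
    # to the nearest reference edge pixel; alpha = 1/9 is the customary constant.
    ref = reference_edges.astype(bool)
    det = detected_edges.astype(bool)
    d = ndimage.distance_transform_edt(~ref)       # distance to nearest true edge pixel
    xn, yn = int(ref.sum()), int(det.sum())
    if max(xn, yn) == 0:
        return 0.0
    return float(np.sum(1.0 / (1.0 + alpha * d[det] ** 2)) / max(xn, yn))
```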
Table 2 shows the evaluation of the edge detection process using the supervised ground truth (the true edge of the image). It can be seen that the edge detection results obtained by the algorithm proposed in this section are significantly superior to the traditional BPA. For the rectangular area, the edge is easily detected and the edge-enhanced images achieve the same result compared to the ordinary one. For the circular area, the edge is a little bit difficult, and the edge-enhanced images have better results.
For the simulated scenes, we know exactly where the true edges are, so we used the ground truth to evaluate the edge detection results. However, for real echo data, the ground truth is unavailable. In order to evaluate the edge detection results in this case, we used the concepts of continuity and reconstruction similarity to describe the quality of the detected edges. Continuity describes how unbroken a specific detected edge is, and it is defined as
S_{C_i} = S(C_i) = 2\times\left[\frac{1}{1 + \exp(-C_i/\alpha)} - 0.5\right]
where
C_i = \sum_{k=1}^{n_i} c_k^i
c_k^i = \begin{cases} d_k^i / D, & d_k^i < D \\ 1, & d_k^i \geq D \end{cases}
and d k i is the distance between the pixels at the edge and the center.
Reconstruction similarity is used to compare the original image and the image reconstructed from the edge image. An interpolation algorithm was first implemented to reconstruct the image using the edge detection result. The similarity is defined to compare the differences between two images using structural similarity (SSIM). Similarity is defined as
s(X, Y) = \frac{\sigma_{XY} + C_3}{\sigma_X \sigma_Y + C_3}
where $\sigma_X$ and $\sigma_Y$ represent the standard deviations of the two images, $\sigma_{XY}$ represents their cross-covariance, and $C_3$ is a constant.
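The two unsupervised indexes can be sketched as follows. The sigmoid mapping of the per-edge measure $C_i$ follows the definition above, but the constant $\alpha$ and the way $C_i$ is accumulated from edge pixels are assumptions of this sketch; the similarity term follows the SSIM-style structure comparison just defined.

```python
import numpy as np

def continuity_score(C_i, alpha=10.0):
    # Sigmoid mapping of a per-edge accumulated measure C_i to the interval (0, 1);
    # the value of alpha is an illustrative assumption.
    C_i = np.asarray(C_i, dtype=float)
    return 2.0 * (1.0 / (1.0 + np.exp(-C_i / alpha)) - 0.5)

def reconstruction_similarity(original, reconstructed, C3=1e-3):
    # SSIM-style structure term comparing the original image with the image
    # reconstructed (e.g., by interpolation) from the detected edges.
    x = np.asarray(original, dtype=float).ravel()
    y = np.asarray(reconstructed, dtype=float).ravel()
    sigma_x, sigma_y = x.std(), y.std()
    sigma_xy = np.mean((x - x.mean()) * (y - y.mean()))
    return (sigma_xy + C3) / (sigma_x * sigma_y + C3)
```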
Table 3 presents the results of the unsupervised evaluation for the real SAR images. From the data, we can see that the edge detection results of the edge-enhanced images show better continuity and reconstruction similarity. For the complicated scenes, edge detection is much more difficult, and the edge detection of the edge-enhanced image is much better than that of the ordinary method.

4.3. SAR Image Segmentation

In order to further evaluate the effectiveness of the proposed imaging algorithm, we performed SAR image segmentation of the obtained SAR images to assess the improvement of the segmentation result. Similar to above, we used both simulated scenes and real scenes.
In Figure 10 and Figure 11, we can see that both SAR images are well segmented, but the edges in (d) are better delineated than those in (c). In Figure 12, it is clear that both the edges and the overall result in (d) are better than those in (c). Because we have the ground truth of the simulated area, we evaluated these segmentation results using the probabilistic Rand index (PRI), variability index (VI), and global consistency error (GCE). PRI measures the similarity between two segmentations, so the bigger the PRI, the better the result. In contrast, VI and GCE measure the difference between two segmentations, so the smaller the values, the better the results. Let n denote the number of pixels in an image. PRI is defined as
\mathrm{PRI}(S, S_{test}) = \frac{1}{C_n^2}\sum_{i}\sum_{j>i}\left[\mathbb{I}\left(l_i = l_j \ \&\&\ l_i' = l_j'\right) + \mathbb{I}\left(l_i \neq l_j \ \&\&\ l_i' \neq l_j'\right)\right]
where $\&\&$ represents the logical AND operator; $l_i, l_j$ represent the labels of two pixels in the ground truth result; and $l_i', l_j'$ represent the labels of the same two pixels in the detected result. VI is defined as
\mathrm{VI}(S, S_{test}) = H(S) + H(S_{test}) - 2\,I(S, S_{test})
where S represents the true edge and S t e s t represents the detected edge. GCE is defined as
\mathrm{GCE}(S, S_{test}) = \frac{1}{n}\min\left\{\sum_{i} E(S_K, S_{test,K}, p_i),\ \sum_{i} E(S_{test,K}, S_K, p_i)\right\}
where E ( S K , S t e s t K , p i ) is defined as
E(S_K, S_{test,K}, p_i) = \frac{\left|R(S_K, p_i)\setminus R(S_{test,K}, p_i)\right|}{\left|R(S_K, p_i)\right|}
where $R(S_K, p_i)$ denotes the set of pixels belonging to the segment of $S_K$ that contains pixel $p_i$; $|\cdot|$ is the number of elements in a set; $\setminus$ is the set difference of two sets; and $p_i$ is a pixel.
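For completeness, a sketch of these three segmentation indexes, computed from the joint label histogram of the two segmentations, is given below. With a single ground-truth segmentation, PRI reduces to the ordinary Rand index; the integer label encoding and the natural logarithm used for VI are implementation assumptions of this sketch.

```python
import numpy as np

def _contingency(seg_a, seg_b):
    # Joint label histogram of two segmentations (labels assumed to be non-negative ints).
    a = np.asarray(seg_a).ravel().astype(int)
    b = np.asarray(seg_b).ravel().astype(int)
    na, nb = a.max() + 1, b.max() + 1
    return np.bincount(a * nb + b, minlength=na * nb).reshape(na, nb).astype(float)

def probabilistic_rand_index(seg_true, seg_test):
    # Fraction of pixel pairs on which the two segmentations agree (same/same or diff/diff).
    M = _contingency(seg_true, seg_test)
    n = M.sum()
    pairs_both = (M * (M - 1)).sum() / 2.0
    pairs_true = (M.sum(axis=1) * (M.sum(axis=1) - 1)).sum() / 2.0
    pairs_test = (M.sum(axis=0) * (M.sum(axis=0) - 1)).sum() / 2.0
    total = n * (n - 1) / 2.0
    diff_both = total - pairs_true - pairs_test + pairs_both
    return (pairs_both + diff_both) / total

def variability_index(seg_true, seg_test):
    # VI = H(S) + H(S_test) - 2 I(S, S_test), computed from the joint label distribution.
    P = _contingency(seg_true, seg_test) / np.asarray(seg_true).size
    px, py = P.sum(axis=1), P.sum(axis=0)
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    nz = P > 0
    mi = np.sum(P[nz] * np.log(P[nz] / (px[:, None] * py[None, :])[nz]))
    return hx + hy - 2.0 * mi

def global_consistency_error(seg_true, seg_test):
    # Pixel-averaged local refinement error, taking the less strict of the two directions.
    M = _contingency(seg_true, seg_test)
    n = M.sum()
    marg_t = M.sum(axis=1, keepdims=True)     # |R(S, p_i)| for each ground-truth segment
    marg_s = M.sum(axis=0, keepdims=True)     # |R(S_test, p_i)| for each test segment
    e1 = np.sum(M * (marg_t - M) / np.maximum(marg_t, 1.0))
    e2 = np.sum(M * (marg_s - M) / np.maximum(marg_s, 1.0))
    return min(e1, e2) / n
```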
Table 4 presents the evaluation results for SAR image segmentation. From Table 4, we can see that the proposed algorithm improves SAR image segmentation: the improvements in PRI, VI, and GCE are more than 1.18%, 11.2%, and 11.72%, respectively.

5. Discussion

Focusing on the usage of SAR images, the new method described in this paper aims to improve SAR edge detection and image segmentation, which are fundamental procedures in SAR image processing. Instead of modifying the SAR image processing algorithms themselves, this method combines the gradient operation with the SAR imaging process, resulting in an edge-enhanced SAR image; all the operations are carried out within the imaging algorithm. The advantages can be described as follows:
  • The gradient operation is fused into the SAR image formation, which can overcome the problems related to speckle noise and intensity inhomogeneities to some extent; this is better than applying the operator directly to the formed SAR image;
  • Because all the operations are used in the imaging procedure, we can use all the promising SAR edge detection and image segmentation methods to process the edge-enhanced SAR images, which may make many existing algorithms more powerful;
  • We selected SAR edge detection and image segmentation methods to process the edge-enhanced SAR images, showing that the edge-enhanced SAR images can further improve these methods.
Despite the advantages mentioned above, there are some limitations to the proposed algorithm. Because the imaging procedure changes, the proposed algorithm may change the original SAR images, which may be unexpected. However, the proposed algorithm aims to better utilize SAR images, so we can use the edge detection result or image segmentation result as a mask to add to the traditional imaging result. In this way, this unexpected effect can be eliminated.

6. Conclusions

The characteristics of SAR images present certain difficulties for post-processing. In this paper, a new imaging algorithm is proposed that processes SAR images during the imaging process, which can alleviate the difficulty of SAR image interpretation. The new imaging algorithm contributes to edge detection and image segmentation via a first-order gradient operation in the imaging procedure, which can overcome the problems of speckle noise and intensity inhomogeneities to some extent. As a consequence, the edges of the SAR images are enhanced after the imaging process. Both geometric graphics and real scenes were used in experiments to validate the new approach, which combines SAR imaging and SAR image processing. Edge detection and image segmentation were performed, and the results were evaluated to validate the effectiveness of the proposed algorithm. The experimental results show that processing the edge-enhanced SAR image with the edge detection and image segmentation algorithms can further improve the processing results. For the simulated complex graphics scene, the edge detection evaluation indexes of our method show improvements of 19.13% in MSE and 11.91% in FOM; for the real scene, the method improves the continuity index by more than 9.41% and the reconstruction similarity by 3.8%. Finally, for the simulated scene, the supervised image segmentation indexes of our method show improvements of more than 1.18%, 11.2%, and 11.72% in PRI, VI, and GCE, respectively. Furthermore, the idea of incorporating image processing operations into the imaging algorithm can be instructive for further improvements in SAR imaging algorithms.

Author Contributions

B.S. proposed the methodology. C.F. and H.X. performed the simulation and wrote this manuscript. A.G. wrote and edited this manuscript.

Funding

This research was funded by the National Science Fund of China, grant number 61301187, and the Shanghai Space Science and Technology Innovation Fund Project, grant number SAST2017040.

Acknowledgments

The authors would like to express great appreciation to Qiao at UTRGV for revising this manuscript, and would also like to thank the reviewers for their good suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, C.; Yang, W.; Wang, P. A review of spaceborne SAR algorithm for image formation. J. Radars 2013, 2, 111–122.
  2. Pei, J.; Huang, Y.; Huo, W.; Miao, Y.; Zhang, Y.; Yang, J. Synthetic Aperture Radar Processing Approach for Simultaneous Target Detection and Image Formation. Sensors 2018, 18, 3377.
  3. Xin, Z.; Zhang, X.; Shi, J.; Zhe, L. GPU-based parallel back projection algorithm for the translational variant bisar imaging. In Proceedings of the 2011 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Vancouver, BC, Canada, 24–29 July 2011; pp. 2841–2844.
  4. Yocky, D.A. An implementation of a fast backprojection image formation algorithm for spotlight-mode SAR. Proc. SPIE 2008, 6970, 69700H-1–69700H-8.
  5. Shao, Y.; Wang, R.; Deng, Y.; Liu, Y.; Chen, R.; Liu, G. An N2.5 back-projection algorithm for SAR imaging. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Munich, Germany, 22–27 July 2012; pp. 2113–2116.
  6. Ji, Q.; Haralick, R.M. Efficient facet edge detection and quantitative performance evaluation. Pattern Recognit. 2002, 35, 689–700.
  7. Deng, H.; Clausi, D.A. Unsupervised image segmentation using a simple MRF model with a new implementation scheme. Pattern Recognit. 2004, 37, 2323–2335.
  8. Kass, M.; Witkin, A.; Terzopoulos, D. Snakes: Active contour models. Int. J. Comput. Vis. 1988, 1, 321–331.
  9. Xue, X.; Wang, H.; Xiang, F.; Wang, J.P. A new method of SAR image segmentation based on FCM and wavelet transform. In Proceedings of the 2012 5th International Congress on Image and Signal Processing (CISP), Chongqing, China, 16–18 October 2012; pp. 621–624.
  10. Chang, Y.; Zhou, Z.; Chang, W.; Jin, T. New edge detection method for high-resolution SAR images. J. Syst. Eng. Electron. 2006, 17, 316–320.
  11. Ilioudis, C.V.; Clemente, C.; Asghari, M.H.; Jalali, B.; Soraghan, J.J. Edge detection in SAR images using phase stretch transform. In Proceedings of the 2nd IET International Conference on Intelligent Signal Processing 2015 (ISP), London, UK, 1–2 December 2015; pp. 1–5.
  12. Liu, C.; Xiao, Y.; Yang, J. A coastline detection method in polarimetric SAR images mixing the region-based and edge-based active contour models. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3735–3747.
  13. Wang, X.; Li, C. Multiphase segmentation of SAR images with level set evolution. In Proceedings of the 2009 WRI Global Congress on Intelligent Systems, Xiamen, China, 19–21 May 2009; pp. 447–452.
  14. Tirandaz, Z.; Akbarizadeh, G. A two-phase algorithm based on kurtosis curvelet energy and unsupervised spectral regression for segmentation of SAR images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 1244–1264.
  15. Xu, H.; Sun, B.; Chen, J.; Guo, W.; Qiao, Z. SAR image segmentation based on BMFCM. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 1764–1767.
  16. Wan, L.; Zhang, T.; Xiang, Y.; You, H. A robust fuzzy c-means algorithm based on Bayesian nonlocal spatial information for SAR image segmentation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 896–906.
  17. Yanik, H.C.; Li, Z.; Yazici, B. Computationally efficient FBP-type direct segmentation of synthetic aperture radar images. Proc. SPIE 2011, 8051, 80510C-1–80510C-8.
  18. Pena, N.; Garza, G.; Cao, Y.; Qiao, Z. Edge detection of real synthetic aperture radar images through filtered back projection. In Proceedings of the 2012 International Conference on Systems and Informatics (ICSAI), Yantai, China, 19–20 May 2012; pp. 1910–1913.
  19. Mckee, A.C.; Nadig, P.; Kowall, N.W. Filtered back projection type direct edge detection of real synthetic aperture radar images. Proc. SPIE 2012, 8394, 83940N-1–83940N-7.
  20. Ding, Y.; Munson, D.C. A fast back-projection algorithm for bistatic SAR imaging. In Proceedings of the International Conference on Image Processing, Rochester, NY, USA, 22–25 September 2002; pp. II-449–II-452.
  21. Cheney, M.; Borden, B. Fundamentals of Radar Imaging; Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA, 2009; ISBN 978-0-898716-77-1.
  22. Kokkinos, I.; Maragos, P.; Yuille, A. Bottom-up top-down object detection using primal sketch features and graphical models. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), New York, NY, USA, 17–22 June 2006; pp. 1893–1900.
Figure 1. The edge detection results of synthetic aperture radar (SAR) image1. (a) Original image; (b) first-order result; (c) Gaussian–Laplacian operator result; (d) Canny operator result.
Figure 2. The edge detection results of SAR image2. (a) Original image; (b) first-order result; (c) Gaussian–Laplacian operator result; (d) Canny operator result.
Figure 3. The flowchart of the proposed algorithm.
Figure 4. The imaging results of the rectangular areas. (a) Original image; (b) edge-enhanced SAR image; (c) azimuth direction edge-enhanced SAR image; (d) range direction edge-enhanced SAR image.
Figure 5. The imaging and edge detection result for the rectangular area. (a) Original image; (b) edge detection result of the original image; (c) edge-enhanced SAR image; (d) edge detection result of the edge-enhanced SAR image.
Figure 6. The imaging and edge detection result for the circular area. (a) Original image; (b) edge detection result of the original image; (c) edge-enhanced SAR image; (d) edge detection result of the edge-enhanced SAR image.
Figure 7. The imaging and edge detection result of complex graphics area. (a) Original image; (b) edge detection result of the original image; (c) edge-enhanced SAR image; (d) edge detection result of the edge-enhanced SAR image.
Figure 8. The imaging and edge detection result of complex area1. (a) Original image; (b) edge detection result of the original image; (c) edge-enhanced SAR image; (d) edge detection result of the edge-enhanced SAR image.
Figure 9. The imaging and edge detection result of complex area2. (a) Original image; (b) edge detection result of the original image; (c) edge-enhanced SAR image; (d) edge detection result of the edge-enhanced SAR image.
Figure 10. The imaging and image segmentation result of the circular area. (a) Original image; (b) image segmentation result of the original image; (c) edge-enhanced SAR image; (d) image segmentation result of the edge-enhanced SAR image.
Figure 11. The imaging and image segmentation result of the rectangular area. (a) Original image; (b) image segmentation result of the original image; (c) edge-enhanced SAR image; (d) image segmentation result of the edge-enhanced SAR image.
Figure 12. The imaging and image segmentation result of the complicated area. (a) Original image; (b) image segmentation result of the original image; (c) edge-enhanced SAR image; (d) image segmentation result of the edge-enhanced SAR image.
Table 1. Algorithm procedures.
Procedure | Method
1 | Input the raw echo data and perform the improved range matched filtering based on Equation (18) in the range direction, using $\hat{u} = \hat{u}_{p1}$.
2 | Compensate for the phase factor in the range direction.
3 | Accumulate the signal in the range direction, and output the edge-enhanced image in the range direction, $I_{range}$.
4 | Input the raw echo data and perform range matched filtering in the azimuth direction.
5 | Compensate for the phase factor in the azimuth direction.
6 | Accumulate the azimuth gradient signal based on Equation (18) in the azimuth direction, using $\hat{u} = \hat{u}_{p2}$, and output the edge-enhanced image in the azimuth direction, $I_{azimuth}$.
7 | Synthesize the image by adding the two images pixel by pixel based on Equation (22), then output the edge-enhanced SAR image $I$.
Table 2. Supervised evaluation of edge detection. Mean square error (MSE), Pratt’s figure of merit (FOM).
Image | MSE | Improvement | FOM | Improvement
Rectangular area | 0.0027 | / | 0.98 | /
Edge-enhanced rectangle | 0.0027 | 0.00% | 0.98 | 0.00%
Circular area | 0.0353 | / | 0.55 | /
Edge-enhanced circular | 0.0042 | 88.10% | 0.98 | 78.18%
Complex graphics | 0.0115 | / | 0.42 | /
Edge-enhanced complex graphics | 0.0093 | 19.13% | 0.47 | 11.91%
Table 3. Unsupervised evaluation of edge detection.
Image | Continuity | Improvement | Reconstruction Similarity | Improvement
Complicated scene1 | 0.85 | / | 0.79 | /
Edge-enhanced complicated scene1 | 0.93 | 9.41% | 0.82 | 3.8%
Complicated scene2 | 0.80 | / | 0.78 | /
Edge-enhanced complicated scene2 | 0.89 | 11.25% | 0.81 | 12.5%
Table 4. Supervised evaluation of image segmentation. Probabilistic Rand index (PRI), variability index (VI), and global consistency error (GCE).
Image | PRI | Improvement | VI | Improvement | GCE | Improvement
Circular scene | 0.9477 | / | 0.5795 | / | 0.0534 | /
Edge-enhanced circular scene | 0.9589 | 1.18% | 0.5215 | 11.2% | 0.0478 | 11.72%
Rectangular scene | 0.9595 | / | 0.5348 | / | 0.0546 | /
Edge-enhanced rectangular scene | 0.9680 | 8.5% | 0.4598 | 16.31% | 0.0449 | 17.77%
