Article

Structure-Adaptive Clutter Suppression for Infrared Small Target Detection: Chain-Growth Filtering

1 School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
2 Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu 610054, China
3 Center for Information Geoscience, University of Electronic Science and Technology of China, Chengdu 611731, China
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(1), 47; https://doi.org/10.3390/rs12010047
Submission received: 20 November 2019 / Revised: 10 December 2019 / Accepted: 17 December 2019 / Published: 20 December 2019
(This article belongs to the Special Issue Object Based Image Analysis for Remote Sensing)

Abstract:
Robust detection of infrared small targets is an important and challenging task in many photoelectric detection systems. Exploiting the difference of a specific feature between the target and the background, various detection methods have been proposed in recent decades. However, most methods extract the feature in a region of fixed shape, typically a rectangle, which causes a problem: when faced with complex-shape clutters, the rectangular region covers pixels both inside and outside the clutter, and the significant grey-level difference among these pixels produces a relatively large feature value in the clutter area, interfering with target detection. In this paper, we propose a structure-adaptive clutter suppression method, called chain-growth filtering, for robust infrared small target detection. The filtering model can adjust its shape to fit various clutter structures such as lines, curves and irregular edges, and thus suppresses clutter more robustly than fixed-shape feature extraction strategies. In addition, the proposed method achieves considerable anti-noise ability by employing the guided filter for preprocessing, and supports multi-scale target detection without complex parameter tuning. In the experiments, we evaluate the detection performance on 12 typical infrared scenes containing different types of clutters. Compared with seven state-of-the-art methods, the proposed method shows superior clutter suppression for various types of clutters and excellent detection performance across the tested scenes.

1. Introduction

Infrared small target detection plays an important role in many applications such as infrared search and track (IRST), automatic target recognition (ATR) and early warning systems [1,2,3]. Due to the long imaging distance in these applications, targets are usually small and lack shape and structure information in infrared images, which makes it difficult to extract abundant distinctive features of the targets [4,5,6]. Moreover, in practical applications, small targets are usually drowned in heavy noise and complicated background clutters, which further interfere with stable detection [7,8]. Therefore, separating small targets from complicated backgrounds without false alarms in noisy infrared images is a challenging problem [9,10]. To solve it, many methods have been proposed in recent decades; they can generally be grouped into two categories: track-before-detect (TBD) methods and detect-before-track (DBT) methods.
The TBD-based methods, such as pipeline filtering [11], hypothesis testing [12], 3-D matched filtering [13], temporal profile filtering [14] and dynamic programming [15], try to use the grayscale consistency and the trajectory continuity of the targets in consecutive frames to discriminate small targets from noise [16,17,18,19]. These methods rest on two assumptions: that the motion model of the target is known, and that the background motion is slow. Under ideal conditions, both assumptions are satisfied; the energy of the targets is accumulated over adjacent frames, and the difference between targets and noise is increased. As a result, the TBD-based methods perform well in low signal-to-clutter conditions. However, in practical applications, we can hardly acquire the precise motion model of the targets, and the background can move fast when the infrared detector is on a moving platform. Therefore, both assumptions can fail in real situations, and the performance of the TBD-based methods can degrade significantly [20]. Meanwhile, high time and storage requirements also make these TBD approaches unsuited to large-scale engineering projects [21].
Compared with TBD-based methods, DBT-based methods require less prior knowledge of target motion and background motion [21]. Therefore, moving environments do not severely affect the performance of the DBT-based methods, which is an important advantage for processing sequences. Although there are still some studies on developing TBD-based methods, the DBT-based methods have become more popular and attracted more research attention in recent years.
Filtering-based methods are an important class of DBT approaches. Maxmean and Maxmedian filters [22], which involve only a few concise operations, are widely used to suppress the backgrounds. More complicated filters, such as the bilateral filter [23], the adaptive Butterworth high-pass filter [24] and the filter based on the least squares support vector machine (LS-SVM) [25], were then proposed to improve the background suppression. However, the fixed-scale strategy makes these filters ill-suited to multi-scale small target detection [26]. As a special kind of filtering method, morphological filtering-based methods such as the top-hat transform [27] and its improved variants [28,29,30] are also widely used to estimate the background in infrared images, and the targets are enhanced by subtracting the estimated background from the original image. In these morphological methods, effective clutter suppression requires a structuring element that matches the clutter shapes. Unfortunately, for complicated backgrounds, it is difficult to select a structuring element that matches various kinds of clutters, which leaves considerable clutter residuals in the filtering results and raises false alarms in the following decision-making stage.
The sparsity-based methods have developed rapidly and form another class of DBT-based methods. The infrared-patch-image (IPI) model [20] is the origin of these sparsity-based methods. Based on the low-rank property of the background and the sparsity of the targets, the IPI model transfers the original detection problem into a robust principal component analysis (RPCA) optimization problem, using the nuclear norm to depict the rank of the background components and the ℓ1 norm to depict the sparsity of the target components; the targets can be separated from backgrounds effectively in uncomplicated scenes. Since the IPI model was proposed, many efforts have been made to improve its performance. The weighted nuclear norm minimization (WNNM) [31], the capped norm [32], the truncated nuclear norm minimization (TNNM) [33], the Schatten-p norm [34] and the γ norm (NRAM) [3] were applied to approximate the rank more precisely. Moreover, the weighted ℓ1 norm [35], the capped ℓ1 norm [36], the ℓ2,1 norm [37], and the ℓp norm [3] have also been proposed to improve the sparse representation ability. However, strong local clutters may break the non-local self-correlation configuration and the low-rank assumption of the background, leaving clutter residuals in the target image. Specifically, the IPI model, which employs the nuclear norm to represent the rank of the background components, has been shown to leave considerable clutter residuals when processing scenes with strong edges [38]. Although subsequent methods [39,40,41,42,43] make remarkable progress in removing edge residuals, replacing the nuclear norm with a specific sophisticated norm can hardly eliminate strong local clutters of various shapes completely.
In recent years, the contrast mechanism of the human visual system (HVS) has been extensively introduced into DBT-based methods. In 2013, Chen et al. created a feature called the local contrast measure (LCM) to define the local contrast between the targets and the background in infrared images [44]. Since then, plenty of other definitions of local contrast have been proposed, such as the improved difference of Gabor filter [45], the multiscale patch-based contrast measure (MPCM) [46], the high-boost-based multiscale local contrast measure (HBMLCM) [47], the multiscale weighted local contrast measure (MWLCM) [48], the derivative entropy-based contrast measure (DECM) [49], the relative local contrast measure (RLCM) [50], the Gaussian scale-space enhanced local contrast measure (GSS-ELCM) [51], and the homogeneity-weighted local contrast measure (HWLCM) [52]. These models compute the local feature value at each position by sliding a rectangular window; they are usually concise and thus run fast. However, the rectangular window can hardly match the different clutter shapes very well, which may degrade the detection performance. For instance, for an irregular clutter region, the rectangular window covers pixels both inside and outside the clutter, and the wide grey-level gap between these pixels may produce a large feature value in the clutter region, interfering with the detection.
In addition, some DBT approaches exploit local features of the original image, including the self-information [53], the principal curvature [54], the entropy [49], the shearlet's kurtosis [55] and the multi-order directional derivatives [56], to distinguish the target regions from the background. These methods extract features over inflexible rectangular windows, which can cause high false alarm rates under complex conditions with various irregular clutters. Some anomaly detection methods such as the cluster kernel Reed-Xiaoli (CKRX) algorithm [57] were also proposed for small target detection, yet they are sensitive to abnormal background pixels. Lately, a novel approach via modified random walks (MRW) [58] was proposed to detect small IR targets with low signal-to-noise ratio. However, detecting small infrared targets with a high detection rate and a low false alarm rate under complicated backgrounds remains a challenging task.
As mentioned before, rectangle-window-based feature extraction strategies are not optimal for suppressing clutters with various irregular shapes. However, if a feature extraction model only involves the pixels inside the clutter region, the unfavourable influence of irregular clutter shapes on clutter suppression might be weakened. Based on this intuition, in this paper we propose a structure-adaptive clutter suppression method for infrared small target detection, called chain-growth filtering. Compared with the traditional feature extraction strategy based on rectangular windows, when encountering various types of irregularly shaped clutters, our filtering model can adjust its shape to involve only the pixels inside the clutter region, and the small grey-level difference among these pixels leads to a better clutter suppression effect. In addition, the proposed method supports multi-scale target detection and achieves considerable anti-noise ability as well. In the experiments, 12 infrared scenes under various conditions (different levels of noise, different target sizes, different types of clutters and so on) are tested, and the diversity of these scenes poses a challenge for infrared small target detection methods. To evaluate the performance of our method, we adopt seven small target detection algorithms as baseline methods for comparison. In the experimental results, our method obtains large values of the evaluation metrics signal-to-clutter-ratio gain (SCRg) and background suppression factor (BSF) on the different tested scenes, showing its superior clutter suppression for various types of clutters. Moreover, our method achieves the best receiver-operating-characteristic (ROC) curve in each infrared scene, which demonstrates both the excellent detection performance and the robustness of our proposed method.

2. Methodology

Figure 1 shows the diagram of the proposed detection method, which is mainly composed of 3 parts: preprocessing, chain-growth filtering and thresholding. We first preprocess the input images because random noise, which widely exists in infrared images, usually interferes with the detection process. Since the structure of clutters and targets is important in the following detection steps, we employ the guided filtering method [59] to keep as many structural details of the original images as possible while denoising (with the default parameters of the MATLAB command "imguidedfilter"). Then, we generate the chains at each pixel and perform the proposed chain-growth filtering model to suppress various clutters according to their structures; this procedure is depicted in detail in this section. Finally, a classic adaptive thresholding technique is used to produce the final detection results.

2.1. Chain-Growth Filtering

The chain-growth filtering model is designed based on the following intuition: if a filtering model only involves pixels inside the clutter region, then the influence of the clutter shape on clutter suppression will be weakened. Because of the relatively small grey-level difference between the pixels inside the clutter, the clutter region, similar to a flat region, can also produce a small filtering response, which benefits clutter suppression. Therefore, in this paper, we propose a chain-growth filtering model that can adjust its shape flexibly and involve only the pixels inside the clutter region when encountering clutters, resulting in a better clutter suppression effect.

2.1.1. Terminology

To illustrate the concept of chain-growth filtering clearly, we introduce the following terminology. A chain is a set of 8-connected pixels that follows a specific region growing criterion. A chain has a starting point and an end point. The initial state of a chain is a single pixel, which is both the starting point and the end point of the chain (marked in black in Figure 2) at this stage. In the following growth procedure, only one pixel adjacent to the end point (8-connected) can be subsumed into the chain at each step, and the newly joined pixel becomes the new end point of the chain. Figure 2 shows several types of chains with the starting point and the end point marked.
In this paper, we use numbers to denote the directions. The numbers 0, 1, 2, 3, 4, 5, 6 and 7 correspond to the north, northeast, east, southeast, south, southwest, west and northwest, respectively. An integer greater than 7 or less than 0 represents the same direction as its remainder modulo 8. Figure 3 marks the 8 directions with numbers intuitively. A chain has a search direction and a growth direction. The search direction determines the scope of the candidate pixels that may join the chain in the following growth step. We also define the search direction boundary (SDB) of a chain, which is the set consisting of the 3 neighboring pixels of the end point in the search direction. Figure 3 presents the SDB in the 8 different search directions in deep color. At each growth step, one pixel in the SDB is selected to be the new end point of the chain, and the direction from the previous end point to the new end point is the growth direction of the chain.
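The direction numbering and the SDB can be sketched in a few lines of Python. This is a hypothetical illustration (the function names are ours, not from the paper); we assume the SDB consists of the 3 neighbours of the end point centred on the search direction, matching the description above.

```python
# Directions 0..7 = N, NE, E, SE, S, SW, W, NW, as (row, col) offsets.
OFFSETS = [(-1, 0), (-1, 1), (0, 1), (1, 1),
           (1, 0), (1, -1), (0, -1), (-1, -1)]

def normalize(d):
    """An integer outside 0..7 denotes the same direction as d mod 8."""
    return d % 8

def sdb(end_point, search_dir):
    """SDB of a chain: the 3 neighbours of the end point centred on the
    search direction (a non-integer direction uses its closest integer)."""
    r, c = end_point
    base = round(search_dir)
    dirs = [normalize(base + k) for k in (-1, 0, 1)]
    return [(r + OFFSETS[d][0], c + OFFSETS[d][1]) for d in dirs]
```

For example, with the end point at (5, 5) and search direction 0 (north), the SDB is the three pixels directly above: (4, 4), (4, 5) and (4, 6).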

2.1.2. The Growth Process

The concept of chain growth, similar to that of other region growing approaches, is to start from a point and grow in a specific direction to extend the chain. The concrete growth process of a chain is depicted as follows. Here, we use C to denote the chain, d_s to denote its search direction, and d_g to denote its growth direction. Assume that the growth process starts from an arbitrary pixel p_0. The pixel p_0 is labeled as chain C, which then grows according to the growth strategy. Please note that we use C(n) to denote the chain C that has grown n steps, and d_s(n) and d_g(n) to represent the search direction and growth direction of C(n), respectively. Consequently, p_0 can be denoted as C(0), and the initial search direction as d_s(0). Please note that p_0 is both the starting point and the end point of C(0). When C(0) and d_s(0) are given, the chain grows step by step through the following strategy; therefore, the initial growth point and the initial search direction are the two initial growth conditions of a chain. In the first growth step, the neighboring pixel of p_0 in direction d_s(0) (denoted as p_1) is added to C(0) and becomes the new end point of C(1); the search direction is unchanged. This procedure can be formulated as
$$C(1) = \{p_0, p_1\}, \tag{1}$$
$$d_s(1) = d_s(0). \tag{2}$$
In each following growth step (taking the ith step as an example), the maximum grey-level pixel in the SDB of C(i) (denoted as p_{i+1}) is absorbed and becomes the new end point of C(i+1), which is formulated as
$$p_{i+1} = \arg\max_{p \in SDB(C(i))} g(p), \tag{3}$$
$$C(i+1) = C(i) \cup \{p_{i+1}\}, \tag{4}$$
where g(·) represents the grey level of a pixel. If there are multiple pixels with the maximum grey value in the SDB, we select the pixel closest to the current search direction d_s(i) as p_{i+1} (if two maximum grey-level pixels are equally close to d_s(i), we select the left one). The search direction is updated according to the following rule:
$$d_s(i+1) = d_s(i) + \frac{x}{m}, \tag{5}$$
$$x = \begin{cases} 1, & \text{if } d_g(i) > d_s(i) \\ 0, & \text{if } d_g(i) = d_s(i) \\ -1, & \text{if } d_g(i) < d_s(i). \end{cases} \tag{6}$$
The growth direction d_g(i) (representing the direction from p_i to p_{i+1}) can be any of infinitely many integers denoting the same direction; these integers form a set denoted D_g, and we choose the integer in D_g closest to d_s(i) and assign it to d_g(i). The m in Equation (5) is the bending factor that controls the flexibility of the chains. The search direction does not change if m = +∞, and the search direction equals the growth direction if m = 1. If the growth direction never changed, the generated chains could not match clutters that bend sharply; if the search direction always equaled the growth direction, the generated chains might bend greatly and grow around the target, leading to unwanted small outputs in the target region. Therefore, to balance the flexibility and the extensibility of the chains, we set m = 3 in this paper. Please note that when d_s(i) is not an integer, it represents the same direction as its closest integer when determining the SDB (see this concept in Section 2.1.1).
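The two update rules above, choosing the growth direction from D_g and nudging the search direction by x/m, can be sketched as follows. This is a hypothetical Python rendering (names are ours) under the assumption that "closest integer congruent mod 8" is resolved by ordinary rounding.

```python
def growth_direction(step_dir, d_s):
    """Among all integers congruent to step_dir (mod 8), i.e., the set D_g,
    pick the one closest to the current search direction d_s."""
    k = step_dir % 8
    t = round((d_s - k) / 8)   # shift by the multiple of 8 nearest to d_s
    return k + 8 * t

def update_search_direction(d_s, d_g, m=3):
    """Equations (5)-(6): move the search direction toward the growth
    direction by 1/m of a step; m is the bending factor (m = 3 in the paper)."""
    x = 1 if d_g > d_s else (-1 if d_g < d_s else 0)
    return d_s + x / m
```

For instance, a step to the northwest (direction 7) while searching north (direction 0) yields d_g = -1 rather than 7, so the search direction drifts down by 1/3 instead of jumping up, which is what keeps the chain from over-bending.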
In Figure 4, the concrete growth process of a chain is illustrated on a constructed matrix in which each number is regarded as the grey value of a pixel. Figure 4a shows the initial state of a chain, a single pixel: the pixel is marked in black and the initial search direction by a red arrow. Figure 4b–i present the successive states of the growing chain; in each subfigure, the starting point is marked in green, the end point in blue, the search direction by a red arrow, and the SDB of the chain in deep gray. We can see that the chain adjusts its search direction toward the high grey-level pixels. Despite the irregular shape of the high grey-level region, the chain occupies the pixels inside that region after growing, showing the effectiveness of the growth strategy.

2.1.3. Stop Criterion

We use the maximum number of growth steps N_g to stop the growing process. Obviously, N_g determines the length of the chain, which is closely related to the size of small targets. According to the Society of Photo-Optical Instrumentation Engineers (SPIE), a small target has a total spatial extent of less than 80 pixels [44,50], which means the target fits within a 9 × 9 pixel window and its radius is less than 5 pixels. To guarantee that a chain growing from the target center can exceed the scope of the target region, we set N_g = 5 in this paper so that the length of the chain is larger than the radius of small targets.

2.1.4. Filtering Model

Based on the chain, we develop an approach called chain-growth filtering to suppress various types of clutters. We perform the proposed filtering model at each pixel, but for illustration, let us assume that we calculate the filtering result of an arbitrary pixel p. Taking pixel p as the starting point and directions 0–7 as the initial search directions (the terminology is given in Section 2.1.1), we generate 8 chains denoted as C_0, C_1, …, C_7. In each chain C_j, we calculate the difference between the grey value of p and the minimum grey value in C_j, and record it as h_j(p):
$$h_j(p) = g(p) - \min_{q \in C_j} g(q), \quad j = 0, 1, \ldots, 7, \tag{7}$$
where g(·) represents the grey value of a pixel, and q represents another pixel. Then we adopt a minimum pooling strategy in the calculation of the final filtering response: we choose the minimum value from h_0(p), h_1(p), …, h_7(p) as the chain-growth filtering result of pixel p, denoted as r(p):
$$r(p) = \min_j h_j(p), \quad j = 0, 1, \ldots, 7. \tag{8}$$
For a point in a flat background region, the grey-level difference among the points in the eight chains is small, leading to a slight filtering response. For the central point of a target, because the target occupies only a small area, a chain can exceed the scope of the target region after growing, no matter how it grows. Therefore, in all 8 chains, the grey-level difference between the starting point and the end point is remarkable, resulting in a large final filtering response. For a point in a clutter region, because clutters (such as lines, curves and edges) usually occupy far more pixels than small targets (which occupy only a few pixels) in infrared images, the flexible growth rules guarantee that at least one chain ends up inside the clutter region despite its complex shape. In this chain, the grey-level difference among the points is small, leading to a small final filtering response similar to that in a flat region. This is why chain-growth filtering can suppress various types of complex-shape clutters.
Figure 5 shows the chain-growth filtering results at a pure background region, a line-shape clutter region and a target region, respectively. The first subfigure presents an input infrared image containing some clutters and a target. We pick a background region, a line-shape clutter (bridge) region and a target region in this image, marked as regions 1, 2 and 3 separately. The three selected regions all have a size of 11 × 11 pixels, and they are enlarged (in the order of regions 1, 2 and 3) in the second to fourth subfigures to show the chains more clearly. The generated chains in regions 1, 2 and 3 are painted yellow, and the chains that determine the final filtering response are painted orange. In region 1, all the chains have little grey-level difference, so the final filtering response is small too. In region 2, some chains have considerable grey-level difference, but the chains stretching along the bridge (clutter line) have small grey-level difference; the minimum pooling strategy thus leads to a weak final filtering response. In region 3, because the target has larger grey values than the neighboring pixels around it, the chains growing in all directions have large grey-level difference, resulting in a large final filtering response. In this way, the proposed chain-growth filtering method distinguishes the targets from background and clutters.
In summary, the chain-growth filtering model is depicted in Algorithm 1.
Algorithm 1: Chain-growth filtering.
Input: A pixel p_0 to be filtered.
Output: The chain-growth filtering response of the pixel p_0: r(p_0).
1: for j = 0 to 7 do
2:     Initialize the chain C_j(0) = {p_0} with the initial search direction d_j(0) = j.
3:     Add the neighboring pixel of p_0 in direction d_j(0) to C_j(0) to form C_j(1), and update the search direction by d_j(1) = d_j(0).
4:     for i = 1 to N_g − 1 do
5:         Find the maximum grey-level pixel (denoted as p_{i+1}) in SDB(C_j(i)) through Equation (3).
6:         Update the chain C_j(i+1) by Equation (4).
7:         Update the search direction d_j(i+1) by Equation (5).
8:     end for
9:     After the growth procedure of the chains, we have C_j = C_j(N_g).
10:    Calculate the difference between the gray value of p_0 and the minimum gray value in C_j through Equation (7).
11: end for
12: Calculate the chain-growth filtering response r(p_0) by Equation (8).
13: Replace the gray value of pixel p_0 with r(p_0).
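For concreteness, the per-pixel response computation can be sketched in Python as follows. This is a simplified, hypothetical implementation (the function name is ours): chains that would leave the image are skipped, and ties in the SDB are broken by the candidate nearest the search direction, which only approximates the paper's left-preference tie rule.

```python
import numpy as np

# Directions 0..7 = N, NE, E, SE, S, SW, W, NW, as (row, col) offsets.
OFFSETS = [(-1, 0), (-1, 1), (0, 1), (1, 1),
           (1, 0), (1, -1), (0, -1), (-1, -1)]

def chain_growth_response(img, p0, n_g=5, m=3):
    """Chain-growth filtering response r(p0) for one pixel (Algorithm 1)."""
    img = np.asarray(img, dtype=float)
    rows, cols = img.shape
    responses = []
    for j in range(8):                      # 8 chains, initial search dirs 0..7
        chain = [p0]
        d_s = float(j)
        # First step: the neighbour of p0 in direction j joins the chain.
        r, c = p0[0] + OFFSETS[j][0], p0[1] + OFFSETS[j][1]
        if not (0 <= r < rows and 0 <= c < cols):
            continue                        # chain would leave the image
        chain.append((r, c))
        for _ in range(n_g - 1):
            base = round(d_s) % 8
            # SDB: the 3 neighbours of the end point centred on d_s.
            cands = []
            for k in (base - 1, base, base + 1):
                kk = k % 8
                rr = chain[-1][0] + OFFSETS[kk][0]
                cc = chain[-1][1] + OFFSETS[kk][1]
                if 0 <= rr < rows and 0 <= cc < cols:
                    # Integer congruent to kk (mod 8) that is closest to d_s.
                    rep = kk + 8 * round((d_s - kk) / 8)
                    cands.append((img[rr, cc], -abs(rep - d_s), kk, rep, (rr, cc)))
            if not cands:
                break
            # Eq. (3): maximum grey level, ties broken by closeness to d_s.
            _, _, _, d_g, nxt = max(cands)
            chain.append(nxt)                           # Eq. (4)
            x = 1 if d_g > d_s else (-1 if d_g < d_s else 0)
            d_s += x / m                                # Eqs. (5)-(6)
        # Eq. (7): difference between p0 and the chain minimum.
        responses.append(img[p0] - min(img[q] for q in chain))
    return min(responses) if responses else 0.0         # Eq. (8): min pooling
```

On a flat patch the response is 0; on a bright line a chain that stretches along the line keeps the response at 0; and at an isolated bright pixel every chain leaves the target, so the response equals the grey-level gap to the neighbourhood, matching the behaviour described above.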

2.2. Adaptive Threshold for Target Segmentation

In the process of chain-growth filtering, the background region and clutter regions are suppressed, and the target region is relatively enhanced. Consequently, we can expect that the target region is the most salient region after background suppression. Based on this fact, we apply a segmentation operation to the output image of chain-growth filtering to get the final detection results. The segmentation threshold is obtained through an adaptive threshold
$$T = \mu + k \times \sigma, \tag{9}$$
where μ and σ are the mean and standard deviation of the chain-growth filtering response values in the output image, and k is a relative decision threshold. In practice, k usually ranges from 15 to 30, and this wide workable range benefits the robustness of our detection method across various scenes. Finally, any pixel whose chain-growth filtering output value is larger than T is regarded as a target pixel.
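The thresholding step is direct to implement; the sketch below (a hypothetical helper, not the authors' code) applies Equation (9) to a filtered response map and returns a boolean target mask.

```python
import numpy as np

def adaptive_threshold_mask(response, k=20):
    """Eq. (9): T = mu + k * sigma over the filtered output; k is the
    relative decision threshold (roughly 15-30 in the paper)."""
    response = np.asarray(response, dtype=float)
    T = response.mean() + k * response.std()
    return response > T   # True at pixels declared as target
```

Because μ and σ are computed from the whole response map, a single strong target pixel easily exceeds T while the suppressed background stays below it.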

2.3. Complexity Analysis

In the chain-growth filtering approach, both the chain growth procedure and the filtering computation cost a constant number of operations at each position. Therefore, for an image with N pixels, the chain-growth filtering operation has a complexity of O(N). Since the preprocessing module, guided filtering, can also be computed in O(N) time [59], the complexity of the whole proposed detection method is O(N), which represents a relatively low computation burden. We carry out a time consumption test, and the results are presented in Section 3.6. During this test, we perform the chain-growth filtering model at each pixel. It can be seen that our proposed method still needs some time to cope with large images. In practical applications, we can perform the chain-growth filtering only at candidate target points [2], which reduces the computation significantly; furthermore, parallel computing techniques can be applied to accelerate the chain-growth filtering procedure further.

3. Experimental Results

In this section, we carry out extensive experiments to test the performance of the proposed method. We introduce the test data and the baseline methods, and illustrate the evaluation metrics for infrared small target detection. Then, we present two experiments aiming to test the robustness to noise and the capability of multi-scale target detection. Finally, both the qualitative and quantitative experiments are conducted to test the clutter suppression effects and the detection performance of each method. Our proposed method performs well in these experiments, showing its superiority in comparison with the seven state-of-the-art baseline methods. It is worth mentioning that our experiment platform is Matlab 2016b running on a laptop with a 2.60-GHz Intel i5-7300U CPU processor and 8 GB memory.

3.1. Experimental Setup

In the experiment, we test the performance of the detection method through 12 infrared scenes that are under different conditions (different levels of noise, different target sizes, and different types of clutters). The diversity of these scenes can test the different properties of a detection method: different conditions of noise can test the anti-noise performance, different conditions of target size can test the multi-scale detection ability, and different types of clutters can test the robustness of clutter suppression. Thus, the 12 diverse scenes are exploited to form a challenging test set so that we can evaluate the performance of the detection methods objectively. Figure 6 shows the representative frame of each scene, where the targets are marked by red rectangles. Table 1 presents a brief description of these data.
To demonstrate the effectiveness of our proposed method, we employ seven baseline methods for comparison. Please note that most of these baseline methods were proposed in the last two years and represent the current state of the art in infrared small target detection. The employed baseline methods are the Min-Local-LoG method [60], the LS-SVM-based method [25], the multiscale patch-based contrast measure (MPCM) [46], the high-boost-based multiscale local contrast measure (HB-MLCM) [47], the multiscale weighted local contrast measure (MWLCM) [48], the derivative entropy-based contrast measure (DECM) [49], and the multiscale relative local contrast measure (RLCM) [50].

3.2. Evaluation Metrics

The background suppression factor (BSF) is introduced to evaluate the clutter suppression effects quantitatively, and the signal-to-clutter ratio gain (SCRg) is adopted to evaluate how much the prominence of the target increases relative to the background. The two metrics are defined as
$$SCRg = \frac{SCR_{out}}{SCR_{in}},$$
$$SCR = \frac{|\mu_t - \mu_b|}{\sigma},$$
$$BSF = \frac{\sigma_{in}}{\sigma_{out}},$$
where subscripts in and out denote the original image and the output image of chain-growth filtering, respectively; μ_t is the average pixel value of the target, and μ_b and σ are the average pixel value and standard deviation of the surrounding local background. From the above definitions, both metrics share the property that larger values are better.
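The two metrics can be computed directly from their definitions. The sketch below is a hypothetical helper set (names ours); we assume the "surrounding local background" is simply the non-target portion of a local patch, since the paper does not fix its exact extent.

```python
import numpy as np

def scr(patch, target_mask):
    """SCR = |mu_t - mu_b| / sigma, with mu_b and sigma taken over the
    non-target pixels of the local patch (our assumed neighbourhood)."""
    patch = np.asarray(patch, dtype=float)
    t, b = patch[target_mask], patch[~target_mask]
    return abs(t.mean() - b.mean()) / b.std()

def scr_gain(patch_in, patch_out, target_mask):
    """SCRg = SCR_out / SCR_in."""
    return scr(patch_out, target_mask) / scr(patch_in, target_mask)

def bsf(img_in, img_out):
    """BSF = sigma_in / sigma_out."""
    return np.asarray(img_in, dtype=float).std() / np.asarray(img_out, dtype=float).std()
```

For example, a filter that halves the background standard deviation while leaving the target untouched doubles the BSF.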
In contrast to the BSF and SCRg, the ROC curve can evaluate the final detection results directly. The ROC curve is plotted based on the true positive rate (TPR) and false positive rate (FPR):
$$TPR = \frac{TP}{AP},$$
$$FPR = \frac{FP}{AN},$$
TP (true positive) represents the number of detected true targets, and AP (actual positive) represents the total number of targets. FP (false positive) represents the number of detected false targets, and AN (actual negative) is commonly defined as the total number of pixels in one frame in this research field [49,50]. By choosing different segmentation thresholds, we get different points in the TPR-FPR space (also called the ROC space); connecting these points with lines yields a ROC curve. In the ROC space, a curve closer to the top-left corner represents better performance. To measure the ROC curves quantitatively, we also calculate the area under the curve (AUC): the larger the AUC, the better the detection performance.
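A ROC curve built this way can be sketched as follows. This is a hypothetical, pixel-level simplification (names ours): each target pixel above the threshold counts as a detection, and AN is the total number of pixels per frame as defined above.

```python
import numpy as np

def roc_points(response, target_mask, thresholds):
    """(FPR, TPR) pairs from a response map, one per threshold."""
    response = np.asarray(response, dtype=float)
    AP = int(target_mask.sum())     # actual positives
    AN = response.size              # actual negatives: all pixels per frame
    pts = []
    for T in thresholds:
        det = response > T
        TP = int((det & target_mask).sum())
        FP = int((det & ~target_mask).sum())
        pts.append((FP / AN, TP / AP))
    return sorted(pts)

def auc(pts):
    """Area under the ROC curve by the trapezoidal rule."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area
```

Sweeping the threshold from high to low traces the curve from the bottom-left toward the top-right; a method that detects the target before admitting false alarms keeps the curve near the top-left corner and the AUC near 1.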

3.3. Anti-Noise Performance

Infrared images usually contain some noise, which bears a certain resemblance to targets and can degrade the detection performance. Thus, infrared small target detection methods should have good anti-noise ability. Here, we evaluate the anti-noise ability of our proposed method through a designed experiment.
In this experiment, we chose two highly noisy scenes (Figure 6a,i) and added different levels of Gaussian noise to them, obtaining image samples with different signal-to-noise ratios (SNR) in each scene. By processing these artificial images, we can find the limit of our method's anti-noise ability. Figure 7 and Figure 8 present the processing results of our method in the two noisy scenes. The first column of Figure 7a shows the original image of the noisy scene Figure 6a. The second to fourth columns of Figure 7a show the image samples after adding different levels of noise, with SNR values of 4.0, 3.2 and 2.3, respectively. Figure 7b shows the images after denoising, and Figure 7c shows the processing results of our method. In this scene, there is little residual noise in the processing results when the SNR is higher than 3; however, when the SNR falls below 2, the residual noise becomes considerable and can exceed the target in intensity. The first column of Figure 8a shows the original image of the scene Figure 6i. The second to fourth columns of Figure 8a show the image samples after adding different levels of noise, with SNR values of 3.4, 2.8 and 2.2, respectively. The denoised images are shown in Figure 8b, and the processing results are shown in Figure 8c. In this scene, the residual noise in the processing results does not exceed the target in intensity when the SNR is higher than 2.2. In these anti-noise tests, although the quality of the detection result deteriorates as the noise increases, we still obtain correct final detection results when the SNR drops to around 2, which demonstrates that our method has a considerable degree of anti-noise ability.
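Generating the artificial samples amounts to injecting zero-mean Gaussian noise whose standard deviation is tied to the desired SNR. The sketch below assumes SNR is defined as target-background contrast over noise standard deviation; the paper does not spell out its exact formula, so this definition and the function name are our assumptions.

```python
import numpy as np

def add_noise_for_snr(image, target_contrast, snr, rng=None):
    """Add zero-mean Gaussian noise so that target_contrast / noise_std == snr.
    Pixel values are clipped to the 8-bit range afterwards."""
    rng = np.random.default_rng(0) if rng is None else rng
    sigma = target_contrast / snr          # noise level implied by the desired SNR
    noisy = image + rng.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0, 255)
```

Sweeping `snr` downward (e.g. 4.0 → 2.3) then reproduces a sequence of increasingly degraded samples like those in Figures 7 and 8.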

3.4. Multi-Scale Target Detection

In practical applications, small targets in different scenes can vary in size, and even within a single scene the target size can change greatly. Thus, detection methods should have a good ability for multi-scale target detection. In this subsection, we test the multi-scale target detection ability of our method without any parameter tuning.
Figure 9 presents the processing results of our method for four image samples in the scene of Figure 6j, where the target size changes from 8 × 6 to 3 × 2 (pixels). Figure 9a shows four typical image samples in which the target sizes are 8 × 6, 7 × 5, 5 × 3, and 3 × 2 (pixels), respectively. Figure 9b shows the processing results of our method for the four image samples, and Figure 9c shows the three-dimensional projections of the results in Figure 9b. In the processing results, the targets are enhanced while the clutters are suppressed. Furthermore, the target intensities in the four processing results are roughly the same, showing that target size has little influence on the output of our method. In other words, this test demonstrates that our method adapts well to different small target sizes and has a strong ability for multi-scale target detection.

3.5. Qualitative Comparison

Figure 10 and Figure 11 show the processing results of the various methods for the twelve scenes in Figure 6: Figure 10 covers Figure 6a–f, and Figure 11 covers Figure 6g–l. In Figure 10 and Figure 11, the first row shows the original images, and the second to ninth rows show the processing results of Min-Local-LoG, LS-SVM, MPCM, HB-MLCM, MWLCM, DECM, RLCM and our proposed method, respectively.
According to Figure 10, our method performs well in all six scenes. Despite the various types of clutter, such as noise, clouds, buildings, and bridges, our method suppresses these typical clutters effectively and produces clean backgrounds. Meanwhile, the targets are significantly enhanced by our method and become the most salient spots in the processing results. The comparison methods obtain satisfactory results in some scenes but lose effectiveness in others. For example, the scenes shown in the first rows of Figure 10b,c contain not only the boundaries of clouds and buildings but also considerable noise. These two scenes challenge two classic methods, Min-Local-LoG and LS-SVM: in their processing results (the second and third rows of Figure 10b and of Figure 10c), the clutter residuals near the boundaries are larger than the targets in intensity, leading to false alarms in the decision-making stage. The fourth and fifth rows of Figure 10a show the processing results of MPCM and HB-MLCM for the scene in the first row of Figure 10a, which is full of heavy noise; the clutter residuals there indicate that the anti-noise ability of MPCM and HB-MLCM still needs improvement. As shown in the sixth and eighth rows, although the targets have the largest intensity in the processing results of MWLCM and RLCM, the background intensity fluctuates considerably, which weakens the contrast between targets and backgrounds and reduces the robustness of these methods across scenes. The seventh row of Figure 10 shows the processing results of DECM, which achieves good clutter suppression for most scenes. However, for the scene in the first row of Figure 10e, which has a complicated ground background, DECM leaves remarkable clutter residuals in its result (the seventh row of Figure 10e), leading to false alarms in the final decisions.
For the scenes shown in Figure 11, our method also achieves superior processing results compared to the other baseline methods. For example, the first row of Figure 11a shows a scene with cloud clutters and obvious boundaries; moreover, the target is dim, obscure, and drowned in heavy random noise. Such a complicated scene poses a challenge to our method. From the detection result in the ninth row of Figure 11a, we can see that our method cleanly suppresses the cloud clutters, the boundaries, and the noise, while the target is significantly enhanced and becomes the most salient spot. Among the comparison methods, DECM also obtains a satisfactory result, shown in the seventh row of Figure 11a. Min-Local-LoG leaves obvious clutter residuals near the boundaries, and the noise is not removed. LS-SVM eliminates most clutters effectively, but some spot-like residuals near the cloud boundaries remain visible in the third row of Figure 11a. As shown in the sixth and eighth rows of Figure 11a, the noise is suppressed effectively in the processing results of MWLCM and RLCM, but the backgrounds fluctuate noticeably, which hampers robust detection across scenes. MPCM and HB-MLCM suppress the cloud clutters successfully, but their results still retain a high level of noise, as shown in the fourth and fifth rows of Figure 11a.
In conclusion, boundaries and noise are the main challenges to detection. The above qualitative experiment demonstrates that our method overcomes these challenges and achieves superior clutter-removal effects compared to the seven baseline methods. Moreover, the experiment over 12 different scenes also validates the robustness of our method.

3.6. Quantitative Comparison

We use two metrics, BSF and SCRg, to evaluate the clutter-suppression and target-enhancement effects of our method and the baseline methods; please note that a larger value of BSF or SCRg represents better performance. Table 2 shows the evaluation results of the 8 methods for the 12 infrared scenes in Figure 6, with the largest BSF and SCRg values in each scene displayed in bold. Our method obtains the largest SCRg value in most scenes, which shows that it separates targets from backgrounds more effectively despite the noise and the various types of clutter. Our method obtains the largest BSF value in nine scenes and the second largest in the other three, which demonstrates its superior clutter-suppression effectiveness in comparison with the baselines. DECM obtains the largest BSF value in the scenes of Figure 6a,c,l, showing the best clutter-suppression effect there; however, considering both clutter suppression and target enhancement, as the SCRg metric reveals, our method produces better overall results.
We also use ROC curves to evaluate the detection performance of our method and the comparison methods; recall that a curve closer to the top-left corner of the ROC space represents better detection performance. As shown in Table 1, scenes (a)–(f) each contain only one infrared image. In these scenes, a detection method obtains a perfect ROC curve as long as the target has larger intensity than the background in the processing result of that single image. This condition is easy to satisfy, so most methods produce perfect ROC curves and the curves cannot distinguish their performance. Thus, we only draw ROC curves for the six sequences marked as scenes (g)–(l) in Table 1. Figure 12 presents the ROC curves of the different methods for these six scenes, with the area under the curve (AUC) of each curve shown in the bottom-right corner. In each subgraph, the ROC curve of our method is closest to the top-left corner and its AUC value is the largest, which illustrates the best detection performance of our method.
We also measured the average processing time of each method for a single frame in each scene; the results are shown in Table 2. Our method is slower than Min-Local-LoG, LS-SVM, MPCM, HB-MLCM, and MWLCM, but much faster than DECM; in fact, its computational efficiency is comparable with that of RLCM, which is acceptable. Our method does not achieve superior efficiency because the growth procedure at each position is time-consuming. In practical applications, we could use a simple feature to select candidate targets and apply chain-growth filtering only at the potential target positions to reduce the overall processing time; parallel computing is another helpful measure to reduce the running time.
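The candidate-selection idea could be sketched as follows. This is our own illustration, not the paper's implementation: the cheap pre-screen is a simple brightness test, and `expensive_filter` is a hypothetical stand-in for chain-growth filtering at a single position.

```python
import numpy as np

def detect_with_prescreen(image, expensive_filter, k=3.0):
    """Run the expensive per-pixel filter only at candidate positions that pass
    a cheap test (pixels more than k standard deviations above the image mean).
    expensive_filter(image, r, c) stands in for chain-growth filtering at (r, c)."""
    out = np.zeros(image.shape, dtype=float)
    thresh = image.mean() + k * image.std()        # cheap global brightness test
    rows, cols = np.nonzero(image > thresh)        # candidate target positions
    for r, c in zip(rows, cols):                   # expensive step: candidates only
        out[r, c] = expensive_filter(image, r, c)
    return out
```

Since small targets occupy only a tiny fraction of a frame, such a pre-screen typically reduces the number of expensive filtering calls by orders of magnitude; the candidate loop is also trivially parallelizable.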

4. Conclusions

In this paper, we presented a novel structure-adaptive clutter suppression method, called chain-growth filtering, for infrared small target detection. Owing to its flexible growth strategy, the chain-growth filtering model involves only the pixels inside a clutter region, regardless of the clutter's shape. Because of the relatively small grey-level difference among the involved pixels, the proposed filtering model achieves a superior and robust suppression effect for various clutters with irregular shapes. Furthermore, the proposed detection method also provides a multi-scale target detection ability and a considerable anti-noise ability. Compared with seven state-of-the-art methods in extensive experiments, our proposed method shows excellent detection performance across diverse infrared scenes.
For generality, the filtering model exploits the pixels' grey values in Equation (7), but using other advanced pixel-wise features might yield better results in scenario-specific applications. Moreover, since our proposed method only uses local image characteristics, additional non-local characteristics could be employed in the future to further improve the detection performance. In addition, the proposed chain-growth filtering model might also be applied to other similar tasks; for instance, pulmonary nodule detection in CT images likewise requires suppressing line-shaped clutters caused by blood vessels. How the proposed idea performs in such applications is left for future study.

Author Contributions

S.H. proposed the original idea, performed the experiments and wrote the manuscript. Y.L., Y.H., T.Z. and Z.P. contributed to the content, writing and revising of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by National Natural Science Foundation of China (61571096, 61775030), Open Research Fund of Key Laboratory of Optical Engineering, Chinese Academy of Sciences (2017LBC003), and Sichuan Science and Technology Program (2019YJ0167, 2019YFG0307).

Acknowledgments

The authors would like to thank Landan Zhang, who is pursuing a master's degree in the IDIP lab, for providing part of the images; we are also grateful to Yingpin Chen, of the School of Physics and Information Engineering, Minnan Normal University, Zhangzhou 363000, China, for his constructive comments on the revision.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, X.Y.; Peng, Z.M.; Kong, D.H.; He, Y.M. Infrared Dim and Small Target Detection Based on Stable Multisubspace Learning in Heterogeneous Scene. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5481–5493. [Google Scholar] [CrossRef]
  2. Huang, S.Q.; Peng, Z.m.; Wang, Z.R.; Wang, X.Y.; Li, M.H. Infrared Small Target Detection by Density Peaks Searching and Maximum-Gray Region Growing. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1919–1923. [Google Scholar] [CrossRef]
  3. Zhang, T.F.; Wu, H.; Liu, Y.; Peng, L.B.; Yang, C.P.; Peng, Z.M. Infrared Small Target Detection Based on Non-Convex Optimization with Lp-Norm Constraint. Remote Sens. 2019, 11, 559. [Google Scholar] [CrossRef] [Green Version]
  4. Li, W.; Zhao, M.; Deng, X.; Li, L.; Li, L.; Zhang, W. Infrared Small Target Detection Using Local and Nonlocal Spatial Information. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2019, 12, 3677–3689. [Google Scholar] [CrossRef]
  5. Li, X.; Wang, J.; Li, M.H.; Peng, Z.M.; Liu, X.R. Investigating Detectability of Infrared Radiation Based on Image Evaluation for Engine Flame. Entropy. 2019, 21, 946. [Google Scholar] [CrossRef] [Green Version]
  6. Liu, D.; Cao, L.; Li, Z.; Liu, T.; Che, P. Infrared Small Target Detection Based on Flux Density and Direction Diversity in Gradient Vector Field. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2018, 11, 2528–2554. [Google Scholar] [CrossRef]
  7. Wang, X.Y.; Peng, Z.M.; Zhang, P.; He, Y.M. Infrared Small Target Detection via Nonnegativity-Constrained Variational Mode Decomposition. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1700–1704. [Google Scholar] [CrossRef]
  8. Liu, Y.; Peng, L.M.; Huang, S.; Wang, X.Y.; Wang, Y.Q.; Peng, Z.M. River detection in high-resolution SAR data using the Frangi filter and shearlet features. Remote Sens. Lett. 2019, 10, 949–958. [Google Scholar] [CrossRef]
  9. Zhang, L.D.; Peng, L.B.; Zhang, T.F.; Cao, S.Y.; Peng, Z.M. Infrared Small Target Detection via Non-Convex Rank Approximation Minimization Joint l2, 1 Norm. Remote Sens. 2018, 10, 1821. [Google Scholar] [CrossRef] [Green Version]
  10. Peng, Z.M.; Zhang, Q.H.; Wang, J.R.; Zhang, Q.P. Dim target detection based on nonlinear multifeature fusion by Karhunen-Loeve transform. Opt. Eng. 2004, 43, 2954–2959. [Google Scholar] [CrossRef]
  11. Wang, G.; Inigo, R.M.; Mcvey, E.S. A pipeline algorithm for detection and tracking of pixel-sized target trajectories. In Proceedings of the Signal and Data Processing of Small Targets, Orlando, FL, USA, 1 October 1990; pp. 167–178. [Google Scholar]
  12. Blostein, S.D.; Huang, T.S. Detecting small, moving objects in image sequences using sequential hypothesis testing. IEEE Trans. Signal Process. 1991, 39, 1611–1629. [Google Scholar] [CrossRef]
  13. Reed, I.S.; Gagliardi, R.M.; Stotts, L.B. Optical moving target detection with 3-D matched filtering. IEEE Trans. Aerosp. Electron. Syst. 1988, 24, 327–336. [Google Scholar] [CrossRef]
  14. Caefer, C.E.; Silverman, J.; Mooney, J.M. Optimization of point target tracking filters. IEEE Trans. Aerosp. Electron. Syst. 2000, 36, 15–25. [Google Scholar] [CrossRef]
  15. Barniv, Y. Dynamic programming solution for detecting dim moving targets. IEEE Trans. Aerosp. Electron. Syst. 1985, AES-21, 144–156. [Google Scholar] [CrossRef]
  16. Fan, X.S.; Xu, Z.Y.; Zhang, J.L.; Huang, Y.M.; Peng, Z.M. Infrared Dim and Small Targets Detection Method Based on Local Energy Center of Sequential Image. Math. Probl. Eng. 2017, 2017. [Google Scholar] [CrossRef] [Green Version]
  17. Fan, X.S.; Xu, Z.Y.; Zhang, J.L.; Huang, Y.M.; Peng, Z.M. Dim small targets detection based on self-adaptive caliber temporal-spatial filtering. Infrared Phys. Technol. 2017, 85, 465–477. [Google Scholar] [CrossRef]
  18. Fan, X.S.; Xu, Z.Y.; Zhang, J.L.; Huang, Y.M.; Peng, Z.M.; Wei, Z.R.; Guo, H.W. Dim small target detection based on high-order cumulant of motion estimation. Infrared Phys. Technol. 2019, 99, 86–101. [Google Scholar] [CrossRef]
  19. Sun, Y.; Yang, J.G.; Li, M.; An, W. Infrared small target detection via spatial–temporal infrared patch-tensor model and weighted Schatten p-norm minimization. Infrared Phys. Technol. 2019, 102, 103050. [Google Scholar] [CrossRef]
  20. Gao, C.Q.; Meng, D.; Yang, Y.; Wang, Y.T.; Zhou, X.F.; Hauptmann, A.G. Infrared patch-image model for small target detection in a single image. IEEE Trans. Image Process. 2013, 22, 4996–5009. [Google Scholar] [CrossRef]
  21. Zhang, L.D.; Peng, Z.M. Infrared Small Target Detection Based on Partial Sum of the Tensor Nuclear Norm. Remote Sens. 2019, 11, 382. [Google Scholar] [CrossRef] [Green Version]
  22. Deshpande, S.D.; Er, M.H.; Venkateswarlu, R.; Chan, P. Max-mean and max-median filters for detection of small targets. In Proceedings of the Signal and Data Processing of Small Targets, Denver, CO, USA, 4 October 1999; pp. 74–84. [Google Scholar] [CrossRef]
  23. Zhao, Y.; Pan, H.B.; Du, C.P.; Peng, Y.R.; Zheng, Y. Bilateral two-dimensional least mean square filter for infrared small target detection. Infrared Phys. Technol. 2014, 65, 17–23. [Google Scholar] [CrossRef]
  24. Yang, L.; Yang, J.; Yang, K. Adaptive detection for infrared small target under sea-sky complex background. Electron. Lett. 2004, 40, 1083–1085. [Google Scholar] [CrossRef]
  25. Wang, P.; Tian, J.W.; Gao, C.Q. Infrared small target detection using directional highpass filters based on LS-SVM. Electron. Lett. 2009, 45, 156–158. [Google Scholar] [CrossRef]
  26. Peng, L.B.; Zhang, T.F.; Huang, S.Q.; Pu, T.; Liu, Y.H.; Lv, Y.X.; Zheng, Y.C.; Peng, Z.M. Infrared small-target detection based on multi-directional multi-scale high-boost response. Opt. Rev. 2019, 26, 568–582. [Google Scholar] [CrossRef]
  27. Tom, V.T.; Peli, T.; Leung, M.; Bondaryk, J.E. Morphology-based algorithm for point target detection in infrared backgrounds. In Proceedings of the Signal and Data Processing of Small Targets, Orlando, FL, USA, 22 October 1993; pp. 2–12. [Google Scholar]
  28. Bai, X.Z.; Zhou, F.G. Analysis of new top-hat transformation and the application for infrared dim small target detection. Pattern Recogn. 2010, 43, 2145–2156. [Google Scholar] [CrossRef]
  29. Bai, X.Z.; Zhou, F.G.; Jin, T. Enhancement of dim small target through modified top-hat transformation under the condition of heavy clutter. Signal Process. 2010, 90, 1643–1654. [Google Scholar] [CrossRef]
  30. Meng, W.; Jin, T.; Zhao, X.W. Adaptive method of dim small object detection with heavy clutter. Appl. Optics. 2013, 52, D64–D74. [Google Scholar] [CrossRef]
  31. Gu, S.H.; Zhang, L.; Zuo, W.M.; Feng, X.C. Weighted nuclear norm minimization with application to image denoising. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2862–2869. [Google Scholar]
  32. Sun, Q.; Xiang, S.; Ye, J.P. Robust principal component analysis via capped norms. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Chicago, IL, USA, 11–14 August 2013; pp. 311–319. [Google Scholar]
  33. Hu, Y.; Zhang, D.B.; Ye, J.P.; Li, X.L.; He, X.F. Fast and accurate matrix completion via truncated nuclear norm regularization. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 2117–2130. [Google Scholar] [CrossRef]
  34. Nie, F.P.; Huang, H.; Ding, C. Low-rank matrix recovery via efficient schatten p-norm minimization. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, Toronto, ON, Canada, 22–26 July 2012; pp. 655–661. [Google Scholar]
  35. Guo, J.; Wu, Y.Q.; Dai, Y.M. Small target detection based on reweighted infrared patch-image model. IET Image Process. 2017, 12, 70–79. [Google Scholar] [CrossRef]
  36. Zhang, T. Analysis of multi-stage convex relaxation for sparse regularization. J. Mach. Learn. Res. 2010, 11, 1081–1107. [Google Scholar]
  37. He, Y.J.; Li, M.; Zhang, J.L.; An, Q. Small infrared target detection based on low-rank and sparse representation. Infrared Phys. Technol. 2015, 68, 98–109. [Google Scholar] [CrossRef]
  38. Dai, Y.M.; Wu, Y.Q.; Song, Y. Infrared small target and background separation via column-wise weighted robust principal component analysis. Infrared Phys. Technol. 2016, 77, 421–430. [Google Scholar] [CrossRef]
  39. Liu, X.G.; Chen, Y.P.; Peng, Z.M.; Wu, J. Total variation with overlapping group sparsity and Lp quasinorm for infrared image deblurring under salt-and-pepper noise. J. Electron. Imaging 2019, 28, 043031. [Google Scholar] [CrossRef] [Green Version]
  40. Li, M.; He, Y.J.; Zhang, J.L. Small infrared target detection based on low-rank representation. In Proceedings of the International Conference on Image and Graphics, Tianjin, China, 13–16 August 2015; pp. 393–401. [Google Scholar]
  41. Liu, X.G.; Chen, Y.P.; Peng, Z.M.; Wu, J.; Wang, Z.R. Infrared image super-resolution reconstruction based on quaternion fractional order total variation with Lp quasinorm. Appl. Sci. 2018, 8, 1864. [Google Scholar] [CrossRef] [Green Version]
  42. Dai, Y.M.; Wu, Y.Q. Reweighted infrared patch-tensor model with both nonlocal and local priors for single-frame small target detection. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2017, 10, 3752–3767. [Google Scholar] [CrossRef] [Green Version]
  43. Zhu, H.; Liu, S.M.; Deng, L.Z.; Li, Y.S.; Xiao, F. Infrared Small Target Detection via Low-Rank Tensor Completion With Top-Hat Regularization. IEEE Trans. Geosci. Remote Sens. 2019. [Google Scholar] [CrossRef]
  44. Chen, C.P.; Li, H.; Wei, Y.T.; Xia, T.; Tang, Y.Y. A local contrast method for small infrared target detection. IEEE Trans. Geosci. Remote Sens. 2013, 52, 574–581. [Google Scholar] [CrossRef]
  45. Han, J.H.; Ma, Y.; Huang, J.; Mei, X.G.; Ma, J.Y. An Infrared Small Target Detecting Algorithm Based on Human Visual System. IEEE Geosci. Remote Sens. Lett. 2016, 13, 452–456. [Google Scholar] [CrossRef]
  46. Wei, Y.T.; You, X.G.; Li, H. Multiscale patch-based contrast measure for small infrared target detection. Pattern Recogn. 2016, 58, 216–226. [Google Scholar] [CrossRef]
  47. Shi, Y.F.; Wei, Y.T.; Yao, H.; Pan, D.H.; Xiao, G.R. High-Boost-Based Multiscale Local Contrast Measure for Infrared Small Target Detection. IEEE Geosci. Remote Sens. Lett. 2018, 15, 33–37. [Google Scholar] [CrossRef]
  48. Liu, J.; He, Z.Q.; Chen, Z.L.; Shao, L. Tiny and Dim Infrared Target Detection Based on Weighted Local Contrast. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1780–1784. [Google Scholar] [CrossRef]
  49. Bai, X.Z.; Bi, Y.G. Derivative entropy-based contrast measure for infrared small-target detection. IEEE Trans. Geosci. Remote Sens. 2018, 56, 2452–2466. [Google Scholar] [CrossRef]
  50. Han, J.H.; Liang, K.; Zhou, B.; Zhu, X.Y.; Zhao, J.; Zhao, L.L. Infrared Small Target Detection Utilizing the Multiscale Relative Local Contrast Measure. IEEE Geosci. Remote Sens. Lett. 2018, 15, 612–616. [Google Scholar] [CrossRef]
  51. Guan, X.W.; Peng, Z.M.; Huang, S.Q.; Chen, Y.P. Gaussian Scale-Space Enhanced Local Contrast Measure for Small Infrared Target Detection. IEEE Geosci. Remote Sens. Lett. 2019. [Google Scholar] [CrossRef]
  52. Du, P.; Hamdulla, A. Infrared Small Target Detection Using Homogeneity-Weighted Local Contrast Measure. IEEE Geosci. Remote Sens. Lett. 2019. [Google Scholar] [CrossRef]
  53. Deng, H.; Liu, J.G. Infrared small target detection based on the self-information map. Infrared Phys. Technol. 2011, 54, 100–107. [Google Scholar] [CrossRef]
  54. Zhao, Y.; Pan, H.B.; Du, C.P.; Zheng, Y. Principal curvature for infrared small target detection. Infrared Phys. Technol. 2015, 69, 36–43. [Google Scholar] [CrossRef]
  55. Peng, L.B.; Zhang, T.F.; Liu, Y.H.; Li, M.H.; Peng, Z.M. Infrared Dim Target Detection Using Shearlet’s Kurtosis Maximization under Non-Uniform Background. Symmetry 2019, 11, 723. [Google Scholar] [CrossRef] [Green Version]
  56. Bi, Y.G.; Chen, J.Z.; Sun, H.; Bai, X.Z. Fast Detection of Distant, Infrared Targets in a Single Image Using Multi-Order Directional Derivatives. IEEE Trans. Aerosp. Electron. Syst. 2019. [Google Scholar] [CrossRef]
  57. Zhou, J.; Kwan, C.; Ayhan, B.; Eismann, M.T. A novel cluster kernel RX algorithm for anomaly and change detection using hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6497–6504. [Google Scholar] [CrossRef]
  58. Xia, C.Q.; Li, X.R.; Zhao, L.Y. Infrared small target detection via modified random walks. Remote Sens. 2018, 10, 2004. [Google Scholar] [CrossRef] [Green Version]
  59. He, K.M.; Sun, J.; Tang, X.O. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 1397–1409. [Google Scholar] [CrossRef] [PubMed]
  60. Kim, S. Min-local-LoG filter for detecting small targets in cluttered background. Electron. Lett. 2011, 47, 105–106. [Google Scholar] [CrossRef]
Figure 1. The whole diagram of the proposed infrared small target detection method.
Figure 2. Examples of chains (with the starting point marked in green and the end point marked in blue).
Figure 3. The 8 directions indicated by numbers and the SDB of the end point in each search direction.
Figure 4. The growth procedure starting from an initial pixel in an initial search direction. (ai) The different states of the growing chain. The initial pixel is marked in dark; the main body of the chain is marked in yellow; the starting point is marked in green and the end point is marked in blue; the SDB of the chains is marked in grey; we also indicate the search direction in each state by a red arrow.
Figure 5. Case analysis at pure background region, line-shape clutter region and target region (the chains are painted yellow and the chain to produce the filtering response is painted orange).
Figure 6. The image samples of the 12 infrared scenes (targets are marked by red rectangles). (a–l) The representative frame of each typical scene.
Figure 7. The processing results of the proposed method for the image samples with different levels of noise in the scene Figure 6a. (a) The image samples with different levels of noise. (b) The image samples after denoising. (c) The processing results of our proposed method.
Figure 8. The processing results of the proposed method for the image samples with different levels of noise in the scene Figure 6i. (a) The image samples with different levels of noise. (b) The image samples after denoising. (c) The processing results of our proposed method.
Figure 9. The multi-scale detection performance of the proposed method. (a) The typical image samples with different target sizes. (b) The corresponding processing results of our method. (c) The three-dimensional projections of the processing results.
Figure 10. (a–f) The processing results of scenes (a–f) in Figure 6. The first row shows the original images with targets marked by yellow rectangles; the second to ninth rows are the resulting images of Min-Local-LoG, LS-SVM, MPCM, HB-MLCM, MWLCM, DECM, RLCM and the proposed method, respectively.
Figure 11. (a–f) The processing results of scenes (g–l) in Figure 6. The first row shows the original images with targets marked by yellow rectangles; the second to ninth rows are the resulting images of Min-Local-LoG, LS-SVM, MPCM, HB-MLCM, MWLCM, DECM, RLCM and the proposed method, respectively.
Figure 12. (a–f) The ROC curve and AUC value of each method for scenes (g–l) in Figure 6.
Table 1. Details of the 12 test scenes.
| | Frame Number | Scene | Image Size (pixels) | Clutter Description |
|---|---|---|---|---|
| (a) | 1 | noisy sky | 128 × 128 | heavy noise; a few clouds |
| (b) | 1 | cloudy sky | 128 × 128 | strong edges of irregular cloud |
| (c) | 1 | building and sky | 128 × 128 | a circular building; heavy noise |
| (d) | 1 | bridge and sea | 320 × 240 | line-shaped clutters; complex background |
| (e) | 1 | grounds | 272 × 208 | complicated grounds; bright buildings |
| (f) | 1 | mountains | 320 × 240 | boundaries; bright rocks |
| (g) | 12 | cloudy sky | 128 × 128 | noise and some clouds |
| (h) | 60 | cloudy sky | 320 × 240 | heavy and irregular clouds |
| (i) | 67 | noisy sky | 320 × 240 | heavy noise; bright background |
| (j) | 400 | cloudy sky | 256 × 172 | complicated cloud clutters |
| (k) | 185 | trees and sky | 252 × 213 | curve-like clutters with irregular shapes |
| (l) | 200 | sky | 256 × 208 | pure background; bright halos |
Table 2. Evaluation metrics and running time of each detection method.
| Methods | Metrics | (a) | (b) | (c) | (d) | (e) | (f) | (g) | (h) | (i) | (j) | (k) | (l) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Min-Local-LoG | SCRg | 1.20 | 0.65 | 0.95 | 0.66 | 0.97 | 0.46 | 1.43 | 0.49 | 0.65 | 0.77 | 1.13 | 1.70 |
| | BSF | 1.01 | 1.01 | 1.00 | 1.00 | 1.00 | 1.02 | 1.00 | 1.00 | 1.00 | 1.00 | 1.01 | 1.00 |
| | Time (ms) | 35 | 38 | 36 | 90 | 71 | 89 | 35 | 92 | 87 | 62 | 79 | 77 |
| LS-SVM | SCRg | 2.37 | 2.51 | 1.35 | 2.91 | 6.25 | 1.37 | 7.07 | 1.26 | 0.38 | 3.17 | 1.87 | 7.70 |
| | BSF | 1.02 | 1.08 | 1.01 | 1.06 | 1.04 | 1.11 | 1.05 | 1.08 | 1.01 | 1.36 | 1.02 | 1.04 |
| | Time (ms) | 23 | 26 | 25 | 36 | 29 | 37 | 23 | 36 | 35 | 27 | 31 | 30 |
| MPCM | SCRg | 4.91 | 5.33 | 2.58 | 1.85 | 1.96 | 2.04 | 1.64 | 1.80 | 1.83 | 1.33 | 9.99 | 12.46 |
| | BSF | 1.07 | 1.06 | 1.00 | 1.11 | 1.00 | 1.71 | 1.01 | 1.54 | 1.19 | 1.51 | 1.44 | 1.04 |
| | Time (ms) | 54 | 52 | 59 | 22 | 15 | 23 | 59 | 22 | 24 | 11 | 13 | 13 |
| HB-MLCM | SCRg | 2.55 | 2.36 | 2.42 | 3.92 | 3.50 | 0.87 | 4.50 | 1.02 | 0.58 | 0.99 | 2.72 | 9.77 |
| | BSF | 1.04 | 1.13 | 1.05 | 1.06 | 1.01 | 1.16 | 1.02 | 1.12 | 1.01 | 1.21 | 1.06 | 1.07 |
| | Time (ms) | 56 | 59 | 56 | 24 | 16 | 25 | 57 | 24 | 23 | 11 | 12 | 12 |
| MWLCM | SCRg | 1.39 | 1.51 | 2.97 | 1.22 | 5.56 | 0.66 | 1.82 | 1.73 | 1.11 | 0.34 | 5.65 | 5.10 |
| | BSF | 1.01 | 1.08 | 1.02 | 1.05 | 1.01 | 1.06 | 1.00 | 1.01 | 1.00 | 1.17 | 1.05 | 1.01 |
| | Time (ms) | 50 | 52 | 57 | 21 | 16 | 21 | 51 | 22 | 22 | 11 | 13 | 12 |
| DECM | SCRg | 15.65 | 25.85 | 16.34 | 24.40 | 7.63 | 6.13 | 9.71 | 5.92 | 6.19 | 10.26 | 46.10 | 164.0 |
| | BSF | 1.89 | 3.55 | 1.58 | 2.67 | 1.03 | 2.63 | 1.17 | 1.20 | 1.14 | 1.15 | 3.04 | 3.24 |
| | Time (s) | 18.98 | 19.03 | 18.72 | 96.65 | 74.87 | 95.98 | 18.82 | 96.02 | 96.18 | 55.68 | 68.96 | 66.73 |
| RLCM | SCRg | 0.65 | 1.80 | 0.72 | 1.02 | 0.25 | 0.52 | 0.28 | 0.66 | 0.63 | 1.13 | 1.24 | 1.16 |
| | BSF | 1.04 | 1.40 | 1.02 | 1.11 | 1.00 | 1.33 | 1.00 | 1.06 | 1.03 | 1.05 | 1.18 | 1.02 |
| | Time (s) | 0.74 | 0.72 | 0.74 | 3.94 | 2.71 | 3.98 | 0.75 | 3.96 | 3.93 | 2.08 | 2.60 | 2.54 |
| Proposed Method | SCRg | 24.85 | 26.47 | 30.00 | 83.85 | 151.2 | 12.19 | 13.04 | 95.05 | 92.25 | 110.6 | 126.7 | 178.2 |
| | BSF | 1.61 | 9.15 | 1.41 | 2.93 | 4.07 | 2.77 | 4.74 | 2.62 | 1.19 | 3.13 | 3.44 | 1.61 |
| | Time (s) | 1.06 | 1.08 | 1.06 | 5.50 | 4.00 | 5.51 | 1.06 | 5.50 | 5.52 | 3.06 | 3.76 | 3.71 |
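The SCRg and BSF values in Table 2 follow the definitions standard in the infrared small-target literature: the signal-to-clutter-ratio gain SCRg = SCR_out / SCR_in, where SCR = |μ_t − μ_b| / σ_b (target mean, local background mean, and background standard deviation), and the background suppression factor BSF = σ_in / σ_out. A minimal illustrative sketch of these two metrics; the mask conventions and the small epsilon are assumptions, not the paper's exact protocol:

```python
import numpy as np

def scr(image, target_mask, background_mask):
    """Local signal-to-clutter ratio: |mu_t - mu_b| / sigma_b."""
    mu_t = image[target_mask].mean()
    mu_b = image[background_mask].mean()
    sigma_b = image[background_mask].std()
    return abs(mu_t - mu_b) / (sigma_b + 1e-12)

def scr_gain(raw, filtered, target_mask, background_mask):
    """SCRg: SCR of the filtered output over SCR of the input."""
    scr_in = scr(raw, target_mask, background_mask)
    scr_out = scr(filtered, target_mask, background_mask)
    return scr_out / (scr_in + 1e-12)

def bsf(raw, filtered, background_mask):
    """Background suppression factor: sigma_in / sigma_out."""
    sigma_in = raw[background_mask].std()
    sigma_out = filtered[background_mask].std()
    return sigma_in / (sigma_out + 1e-12)
```

Higher values of both metrics indicate a better result: a large SCRg means the target stands out more prominently after filtering, and a large BSF means the clutter has been flattened.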

Huang, S.; Liu, Y.; He, Y.; Zhang, T.; Peng, Z. Structure-Adaptive Clutter Suppression for Infrared Small Target Detection: Chain-Growth Filtering. Remote Sens. 2020, 12, 47. https://doi.org/10.3390/rs12010047