Article

Aberration Modulation Correlation Method for Dim and Small Space Target Detection

1 National Laboratory on Adaptive Optics, Chengdu 610209, China
2 Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
4 School of Aeronautics and Astronautics, Xihua University, Chengdu 610039, China
5 College of Physics and Electronic Engineering, Hainan Normal University, Haikou 571158, China
6 Key Laboratory of Laser Technology and Optoelectronic Functional Materials of Hainan Province, Academician Team Innovation Center of Hainan Province, College of Physics and Electronic Engineering, Hainan Normal University, Haikou 571158, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(19), 3729; https://doi.org/10.3390/rs16193729
Submission received: 10 September 2024 / Revised: 3 October 2024 / Accepted: 3 October 2024 / Published: 8 October 2024
(This article belongs to the Special Issue Recent Advances in Infrared Target Detection)

Abstract

The significance of detecting faint and diminutive space targets cannot be overstated, as it underpins the safety and long-term sustainability of Earth's orbital environment. Based on the different response characteristics of targets and backgrounds to aberrations, this paper proposes a novel aberration modulation correlation method (AMCM) for dim and small space target detection. By meticulously manipulating the light path with a wavefront corrector driven by a modulation signal, the target brightness fluctuates periodically while the background brightness remains essentially constant. Owing to the strong correlation between the targets' characteristic changes and the modulation signal, dim and small targets can be effectively detected. Rigorous simulations and practical experiments have validated the efficacy of AMCM. Compared to conventional algorithms, AMCM improves the signal-to-noise ratio (SNR) detection limit from 5 to approximately 2, with an area under the precision–recall curve of 0.9396, underscoring its ability to accurately identify targets while minimizing false positives. In essence, AMCM offers an effective method for detecting dim and small space targets and can also be conveniently integrated into other passive target detection systems.

1. Introduction

According to the European Space Agency, as of December 2023, there are about 11,500 artificial satellites in Earth orbit, of which approximately 2500 are in a faulty state. Uncontrolled satellites may deviate from their intended orbit, potentially colliding with other satellites and generating a large amount of space debris, posing a significant safety threat to the entire Earth orbit environment [1]. Due to the strong skylight background during the day, most existing space target observation devices work at night, which severely limits the effective observation time of space targets [2]. Therefore, it is necessary to conduct research on detection technologies for such space targets with small size and low SNR.
Traditional weak and small target detection methods can be divided into visual saliency methods (local information-based methods) and low-rank and sparse decomposition methods (data structure-based methods) [3]. Visual saliency methods simulate the way human vision focuses on weak and small targets in natural scenes and find potential target locations using evaluation indicators such as gradient and contrast. Nie et al. designed a multi-scale local uniformity measure by combining intra-block and inter-block features [4]. Li et al. proposed a local adaptive contrast measure based on a regularized LSK structure to distinguish target blocks from texture clutter blocks [5]. Xia et al. adopted a Laplacian model to capture global rarity and then combined two local descriptors that enhance local contrast and contrast consistency to avoid clutter interference [6]. To determine optimal parameters adaptively, Ren et al. used multi-objective particle swarm optimization to search for background suppression parameters, but the optimization function needs to be executed on real small targets [7]. In addition to local contrast, local gradient [8] and local standard deviation [9] have also been used to generate visual saliency maps and thereby achieve better detection performance. Inspired by the four-leaf model, a saliency map calculation method combining background suppression and texture collection was proposed to better highlight small targets [10]. Visual saliency methods are computationally efficient and can effectively detect weak and small targets against simple backgrounds, but their detection capability is limited in complex backgrounds. Because the background of sky images exhibits non-local autocorrelation [11], researchers have applied low-rank and sparse decomposition methods to small target detection, transforming the detection problem into a mathematical optimization problem of recovering low-rank and sparse components. Zhang et al. constructed image blocks as tensors instead of vectorizing them, effectively preserving the target's structural information [12]. Zhang et al. used an improved tensor kernel norm to characterize the low-rank nature of background tensors, which reduces low-rank redundancy and improves computational efficiency [13]. Because non-local prior and local prior methods are complementary [14], Pang et al. adopted directional derivatives to extract target priors and obtained a target saliency map with a clean background by fusing edge features from four directions [15]. To better utilize the target's motion information, Li et al. introduced a strengthened local feature map based on a temporally constrained Gaussian curvature filter and a 3D structure tensor, achieving infrared detection of small and dim moving targets [16]. Low-rank and sparse decomposition methods perform well on weak and small target detection in complex scenes, but their iterative computations are time-consuming, which precludes real-time use.
With the growth of computing power and data resources, weak and small target detection methods based on deep learning have become a new research hotspot. Since deep neural networks do not require manual feature design and have strong nonlinear representation capabilities, weak target detection based on them has achieved significant performance improvements. Some researchers have applied convolutional neural networks [17], Taylor finite differences [18], or multi-scale local differences [19] to extract small target features, enhancing the target response and suppressing background interference. Considering that targets are generally small and their corresponding high-level semantic features may not be extractable, Yao et al. improved the FCOS network by removing deep feature layers to improve computational efficiency [20]. Beyond feature extraction, self-attention mechanisms have been introduced into deep neural networks for detecting weak and small targets to accurately identify and exploit the useful features. Wang et al. used a center-point-guided circular-region local self-attention module (CCRS) to obtain multiple regions of interest and then extracted local feature information of small targets in a shared-parameter local self-attention (SPSA) module [21]. Zhang et al. proposed an attention-guided pyramid context network (AGPCNet) to estimate the pixel correlation within and between patches so as to highlight the target and suppress the background. Although deep neural networks have achieved certain results in detecting infrared small targets, their detection performance depends heavily on the training dataset. Existing infrared small target datasets (such as MDvsFA_cGAN [22], SIRST [23], IRSTD-1k [18] and SIRST-V2 [24]) mainly include drones, airplanes, birds, ships, and other targets, but do not include space targets such as Earth-orbit satellites and spacecraft debris. It can therefore be foreseen that deep neural networks trained on these existing datasets will not achieve satisfactory results.
Since space targets are located above the Earth's atmosphere, they cannot be observed through cloud cover; observation equipment such as ground-based telescopes therefore operates only in clear, cloudless weather. The problem of detecting space targets thus becomes one of detecting targets with extremely low SNR against a pure sky background. Because of the low SNR, most current target detection algorithms perform unsatisfactorily on space targets. Apart from research on target detection algorithms, there are few innovative studies on passive detection optical systems. Increasing the aperture of the optical system is another feasible way to enhance detection capability, but its volume, weight, and manufacturing difficulty increase accordingly. Based on the difference in response characteristics between targets and backgrounds to aberrations, this paper proposes an aberration modulation correlation method (AMCM) for dim and small space target detection. Unlike traditional algorithm-based target detection methods, AMCM drives a wavefront corrector to apply controlled aberrations to the optical path, so that the target brightness fluctuates periodically while the background brightness remains essentially constant. Then, by correlating the collected image frame sequence with the modulation signal, dim and small space targets can be effectively detected. Both simulations and experiments verify that AMCM achieves better detection results than current algorithm-based target detection methods.
The main contributions of this paper can be summarized as follows:
(1)
To detect dim and small space targets, we proposed a target detection method based on aberration modulation and signal correlation (AMCM).
(2)
By performing active aberration modulation using the adaptive optics system and employing matched filtering for target-related detection, the feasibility and application potential of AMCM were preliminarily validated based on a self-constructed database and experiments.
(3)
Compared to traditional algorithm-based methods, AMCM achieved effective detection of targets with an SNR of 2, showing significant performance improvement.

2. Method

2.1. Principle

For ground-based optical observation systems, the space target is typically perceived as a point light source, while the background can be considered a large, well-distributed extended source. Therefore, the background signal received by a single detector pixel is the superposition of sky background noise from different distances across the detector's full field of view, which generally follows a Poisson distribution, as shown in Figure 1. In the ideal case without aberration, the energy of the space target is most concentrated on the detector, forming an Airy disk pattern; the target in the image is characterized by abrupt edges and rapid changes in local grayscale values. As aberrations increase, the space target signal gradually disperses while its peak energy sharply decreases, and it eventually becomes submerged in the background noise. Background noise, owing to its continuous superposition characteristics, usually lacks complex textures and abrupt edges, and its intensity distribution changes little before and after aberrations are added.
Figure 2 shows the differences between the distribution characteristics of the target and background before and after adding different types of aberrations. This simulation analyzes the Zernike coefficients [25] from the second to the eleventh order, with each Zernike order having a Peak–Valley (PV) of 2λ. The target distribution characteristics are represented by the peak brightness of the spot, normalized to the highest value of the ideal Airy spot center brightness, while the background distribution characteristics are measured by mean value and standard deviation (Std. deviation).
After adding various orders of Zernike aberrations, the mean value and Std. deviation of the background noise change by less than 0.10%, indicating that the background noise retains its random Poisson distribution and shows almost no change compared with the aberration-free condition. The response characteristics of the target signal to different aberrations differ significantly from those of the background noise. X-tilt and Y-tilt aberrations mainly shift the position of the target spot on the detector focal plane, with a slight decrease in the central peak brightness of less than 2%. Other aberrations have almost no impact on the spot position but mainly affect the spot shape and central peak brightness. These aberrations disperse the spot to varying degrees, reducing its central peak brightness. Among them, defocus aberration (Z4) has the greatest impact, while coma (Z7 and Z8) has the least.
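To make this differential response concrete, the following NumPy sketch propagates a point target through a circular pupil with and without a 2λ-PV defocus phase and models the extended background as Poisson photon noise that the DM phase does not touch. It is not the authors' simulation code; the grid size, photon level, and scaling are illustrative assumptions.

```python
import numpy as np

# Pupil and Zernike defocus (Z4) phase, illustrative sampling
N = 256
y, x = np.mgrid[-1:1:1j*N, -1:1:1j*N]
r2 = x**2 + y**2
pupil = (r2 <= 1.0).astype(float)          # unit circular aperture
z4 = 2.0 * r2 - 1.0                        # defocus, ranges over [-1, 1] on the disk
pv_waves = 2.0                             # peak-valley of 2 lambda, as in the simulation
phase = np.pi * pv_waves * z4              # PV of pv_waves (in waves) inside the pupil

def psf(extra_phase):
    """Far-field intensity of the pupil with an added phase map."""
    field = pupil * np.exp(1j * extra_phase)
    return np.abs(np.fft.fftshift(np.fft.fft2(field)))**2

psf_ideal = psf(np.zeros_like(phase))      # Airy-like pattern, energy concentrated
psf_defocus = psf(phase)                   # defocused pattern, peak strongly reduced
print("peak ratio (defocused / ideal):", psf_defocus.max() / psf_ideal.max())

# Extended background: Poisson photon noise, statistically unaffected by the DM phase
rng = np.random.default_rng(0)
bg_mean = 100.0                            # mean photons per pixel (assumption)
bg0 = rng.poisson(bg_mean, (N, N))
bg1 = rng.poisson(bg_mean, (N, N))         # "aberrated" frame has the same statistics
print("background mean change: %.3f%%" % (100 * abs(bg1.mean() - bg0.mean()) / bg0.mean()))
print("background std change:  %.3f%%" % (100 * abs(bg1.std() - bg0.std()) / bg0.std()))
```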
Based on the different response characteristics of the target and background to aberrations, this paper proposes a target detection method based on aberration modulation and signal correlation, i.e., AMCM.

2.2. Process

Taking the classic adaptive optics system in ground-based astronomical observations as an example, wavefront sensors are employed to detect wavefront distortions caused by atmospheric turbulence, and a wavefront corrector (typically a deformable mirror) completes the closed-loop wavefront correction. So, the deformable mirror (DM) can be utilized for active aberration modulation, as shown in Figure 3. In this mode, the shape of the DM dynamically changes according to control signals to generate periodic wavefront aberrations of specified types and PV values, thus enabling periodic aberration modulation of the optical system. This method does not require hardware modifications to existing ground-based telescopes; instead, it involves modifications at the software control level to add the corresponding aberration modulation control functions, making it simple, efficient, and highly feasible.
The entire workflow of AMCM is shown in Figure 4. First, the computer sends an aberration modulation signal (AMS) to the DM of the ground-based optical telescope (devices unrelated to AMCM, such as the Hartmann sensor, are omitted), and the DM generates the corresponding additional aberrations. Simultaneously, the focal-plane array detector receives the synchronous acquisition signals sent by the computer and acquires the image frame sequence. After image pretreatment, the frame sequence undergoes a correlation operation (matched filtering [26]) with the AMS. Threshold segmentation is then performed on the correlation significance (a standardized form of covariance) to obtain a binary image. Finally, after binary statistic filtering, the connected regions in the binary image are identified as the targets detected by AMCM.

2.3. Image Algorithm

To enhance the effectiveness of matched filtering and reduce the false alarm rate, AMCM employs several pretreatment and post-treatment mechanisms.

2.3.1. Pretreatment

In the pretreatment stage, AMCM mainly includes three steps: difference of Gaussian (DoG) filtering [27], local contrast enhancement [28], and neighborhood Std. deviation.
(1)
DoG filter
The DoG filter is defined as follows:
$$\mathrm{DoG} \triangleq G_{\sigma_1} - G_{\sigma_2} = \frac{1}{2\pi}\left(\frac{1}{\sigma_1}e^{-\frac{x^2+y^2}{2\sigma_1^2}} - \frac{1}{\sigma_2}e^{-\frac{x^2+y^2}{2\sigma_2^2}}\right),$$
where $x$ and $y$ are the image coordinates, and $\sigma_1$ and $\sigma_2$ are the Std. deviations of the two Gaussian filters.
The waveform of the DoG filter is shown in Figure 5. Essentially, it is a band-pass filter that attenuates signal frequencies outside the range of interest. The DoG filter can therefore be used to denoise images, reducing low-frequency artifacts such as illumination non-uniformity while enhancing image features such as spots and edges, which facilitates the detection of dim and small space targets. Specifically, when $\sigma_1/\sigma_2$ is 1.6, the response characteristics of the DoG filter are comparable to those of the Laplacian of Gaussian filter. In this paper, guided by the detection performance, $\sigma_1/\sigma_2$ is set to 1/3; in other scenarios, this value may need to be adjusted adaptively.
(2)
Local contrast enhancement
The human visual system (HVS) possesses excellent complex-scene background-suppression capabilities, primarily utilizing local contrast variations to determine salient regions, thereby distinguishing targets from the background [29]. Since there is usually some contrast information between dim, small targets and their surrounding local background [30], local contrast is more effective than grayscale information in detecting space targets. This paper introduces the multiscale patch-based contrast measure (MPCM) method [28] to enhance local contrast, and subsequent correlation operations are employed to circumvent the deficiency of MPCM in adaptively selecting the optimal segmentation threshold.
According to the definition by the Society of Photo-Optical Instrumentation Engineers (SPIE), an infrared target with an area no larger than 9 pixels × 9 pixels is referred to as a small infrared target. In practical image processing, it is difficult to obtain the target size as prior information, so AMCM adopts 3, 5, 7, and 9 as the multi-scale patch sizes to accommodate targets of different sizes; these scales can be computed in parallel. For each scale $l$, the image is filtered by eight directional filters $DirF$ to compute the local differences in different directions.
$$D_{img}^{k,n} = DirF_n \otimes img^{k}, \quad n = 1, 2, \ldots, 8,$$
where $img$ represents the image filtered by the DoG filter, $k$ represents the frame position of $img$ in the sequence, and $\otimes$ represents the convolution operation.
Each directional filter has dimensions of $3l \times 3l$ and can be further divided into a central region of size $l \times l$ and 8 edge regions of size $l \times l$ each, as shown in Figure 6.
The following formula characterizes the differences between the center region and 8 different directional regions:
$$\tilde{d}^{\,k,t} = D_{img}^{k,t} \odot D_{img}^{k,t+4}, \quad t = 1, 2, 3, 4,$$
where $\odot$ represents element-wise (Hadamard) multiplication.
In the enhancement of dim and small space targets, the contrast between the target area and the background area should be as large as possible, so the minimum value of $\tilde{d}^{\,k,t}$ is taken as the patch-based contrast measure (PCM). Once the PCMs at all scales $l$ have been obtained, the maximum value among them is the MPCM of $img$, which is denoted as $M_{img}$ and has the same dimensions as $img$.
(3)
Neighborhood Std. deviation
A sliding window of block size $N \times N$ is used to perform a neighborhood operation on $M_{img}$, which further enhances the edge information of the target and expands the target area. The Std. deviation function is used for this operation, so the central pixel of the sliding window is replaced by the neighborhood Std. deviation [31]:
$$N_{img}(x_s, y_s) = \sqrt{\frac{1}{N^2 - 1}\sum_{i = x_s - \lfloor N/2 \rfloor}^{x_s + \lfloor N/2 \rfloor}\;\sum_{j = y_s - \lfloor N/2 \rfloor}^{y_s + \lfloor N/2 \rfloor}\bigl(M_{img}(x_i, y_j) - \mu\bigr)^2}$$
$$\mu = \frac{1}{N^2}\sum_{i = x_s - \lfloor N/2 \rfloor}^{x_s + \lfloor N/2 \rfloor}\;\sum_{j = y_s - \lfloor N/2 \rfloor}^{y_s + \lfloor N/2 \rfloor} M_{img}(x_i, y_j)$$
where $x_s$ and $y_s$ are the coordinates of the pixel at the center of the sliding window, $x_i$ and $y_j$ are the coordinates of pixels within the sliding window, and $\lfloor\cdot\rfloor$ denotes rounding down to the nearest integer.
The value of the sliding window size N depends on the target size: when N is significantly larger than the target size, the standard deviation of the pixels within the window is relatively small, which will weaken the enhancement effect on the target area; conversely, when N is significantly smaller than the target size, the standard deviation is also relatively small when the window is centered on the target. In those situations, the target is easily classified as a background region by correlation operation. Therefore, in this paper, the sliding window size N is set to a moderate size of 4, and the binary filter of the post-processing algorithm is used to fill the holes in the target area.
The above steps are repeated until $N_{img}$ has been obtained for every frame in the sequence. A condensed sketch of the three pretreatment steps is given below.
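The following is a condensed NumPy/SciPy sketch of this pretreatment chain, not the authors' MATLAB implementation. The Gaussian σ values, the roll-based shifts used to approximate the directional filters of MPCM, and the default window size are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter, generic_filter

def dog_filter(img, sigma1=1.0, sigma2=3.0):
    """Difference-of-Gaussian band-pass filtering (sigma1/sigma2 = 1/3, as in the paper)."""
    return gaussian_filter(img, sigma1) - gaussian_filter(img, sigma2)

def mpcm(img, scales=(3, 5, 7, 9)):
    """Simplified multiscale patch-based contrast measure in the spirit of [28]."""
    # 8 neighbour directions ordered so that index t and t+4 are opposite cells
    dirs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    out = np.full(img.shape, -np.inf)
    for l in scales:
        cell_mean = uniform_filter(img, size=l)               # mean of each l x l cell
        diffs = [cell_mean - np.roll(cell_mean, (dy * l, dx * l), axis=(0, 1))
                 for dy, dx in dirs]                          # centre minus 8 neighbours
        dtilde = [diffs[t] * diffs[t + 4] for t in range(4)]  # products of opposite pairs
        pcm = np.min(dtilde, axis=0)                          # patch contrast at scale l
        out = np.maximum(out, pcm)                            # maximum over scales
    return out

def neighborhood_std(img, n=4):
    """Replace each pixel by the Std. deviation of its n x n neighbourhood."""
    return generic_filter(img, lambda v: np.std(v, ddof=1), size=n)

def pretreat(frame):
    """Pretreatment chain: DoG filtering -> MPCM -> neighbourhood Std. deviation."""
    return neighborhood_std(mpcm(dog_filter(np.asarray(frame, dtype=float))))
```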

2.3.2. Matched Filtering

The grayscale values of the target pixels vary periodically with the periodic aberrations, showing a strong correlation with the AMS, while the background and noise pixels exhibit weak correlation. The preprocessed and enhanced image frame sequence $N_{img}$ therefore undergoes matched filtering with the AMS, and the filtering results are shown in Figure 7. The difference in correlation levels produces entirely different filtering results, and on this basis the target area can be effectively selected.
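A minimal sketch of the matched filtering step is given below, assuming the frame sequence is stored as an array of shape (frames, height, width); the resulting correlation map plays the role of the correlation significance that is later thresholded. The threshold value in the usage example is an assumption.

```python
import numpy as np

def correlation_significance(frames, ams):
    """
    Per-pixel matched filtering: correlate each pixel's temporal profile with the
    aberration modulation signal. `frames` has shape (T, H, W); `ams` has shape (T,).
    Returns a normalized correlation map in which target pixels approach 1.
    """
    frames = np.asarray(frames, dtype=float)
    ams = np.asarray(ams, dtype=float)
    f = frames - frames.mean(axis=0)                  # remove each pixel's temporal mean
    s = ams - ams.mean()
    cov = np.tensordot(s, f, axes=(0, 0)) / len(s)    # covariance with the AMS
    return cov / (f.std(axis=0) * s.std() + 1e-12)    # standardized form of covariance

# Example usage (threshold value is an illustrative assumption):
# corr = correlation_significance(nimg_sequence, ams)
# binary = corr > 0.6
```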

2.3.3. Post-Treatment

The cross-correlation operation depends on the sample size of the two random variables; that is, the detection performance of AMCM is positively correlated with the number of frames in the image sequence. However, more frames mean longer hardware acquisition and image processing times, affecting the efficiency and real-time performance of AMCM and thereby limiting its application range. Therefore, the required number of frames should be minimized as much as possible. In this case, the binarized images obtained after threshold segmentation may still contain some isolated noise points that happen to be highly correlated with the AMS; additionally, DoG filtering can produce a hollow effect within larger targets. In the post-processing steps of AMCM, binary statistical filtering [32] is therefore used to remove isolated noise points and fill in hollow areas within large targets.
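A possible realization of this post-treatment, sketched with SciPy connected-component tools rather than the binary statistical filter of [32], is shown below; the minimum-area parameter is an assumption.

```python
import numpy as np
from scipy import ndimage

def binary_statistic_filter(mask, min_area=2):
    """
    Post-treatment sketch: discard isolated detections smaller than `min_area` pixels
    and fill holes inside the remaining connected regions.
    """
    labels, num = ndimage.label(mask)                                 # connected components
    areas = ndimage.sum(mask, labels, index=np.arange(1, num + 1))    # area of each component
    keep = np.isin(labels, np.flatnonzero(areas >= min_area) + 1)     # drop tiny components
    return ndimage.binary_fill_holes(keep)                            # fill hollow interiors
```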

3. Results

3.1. Simulation

3.1.1. Data Generation

Existing datasets mostly focus on ground and aerial objects, whose characteristics may differ from those in space scene target detection. Additionally, existing datasets do not actively introduce aberrations during the imaging stage; if pseudo-diffusion effects are achieved through pure image algorithms, there is a significant difference from the actual physical process of aberrations, making it likely that the trained and validated AMCM will not function properly in real-world scenarios. Therefore, we built a dataset using Matlab by superimposing Zernike phase screens on the generalized pupil to achieve active aberration and atmospheric turbulence loading, which is more closely aligned with the actual imaging process.
Regarding background noise, the considerations in this paper are as follows. In a photoelectric detection system, the output noise of the photoelectric detector mainly includes spatial noise $v_{ns}^2$ and temporal noise $v_{nt}^2$. Spatial noise $v_{ns}^2$ is generated by the response non-uniformity of different pixels in the array detector and can be well suppressed after careful calibration. Temporal noise $v_{nt}^2$ mainly consists of background radiation photon noise $v_{ph}^2$, the detector's own thermal noise $v_{J}^2$, generation–recombination noise $v_{gr}^2$, and 1/f noise $v_{1/f}^2$. With the advancement of detector technology, current state-of-the-art detectors have reached the background-limited detection level [33], where the dominant noise is background radiation photon noise $v_{ph}^2$. The discrete-photon-number statistical distribution of background radiation photon noise can be described by a Poisson distribution, and the variance of the random fluctuations in the detector's output voltage, $v_{ph}^2$, is proportional to the mean number of photons $\bar{Q}_q$ absorbed by the detector over an integration period [34]. In long-range target detection applications (the targets in this paper are artificial satellites, space debris, and other space objects, typically tens to hundreds of kilometers from the detection system), the external environment mainly affects the photoelectric detection system through atmospheric turbulence-induced phase modulation of the incoming light and random fluctuations in background photon noise. Atmospheric turbulence can cause the target image to become distorted, blurred, or even torn apart, significantly reducing the detection signal-to-noise ratio; however, adaptive optics can greatly improve the imaging quality, bringing the telescope close to the diffraction limit [35]. Therefore, Poisson-distributed random noise can effectively represent the impact of the external environment on the photoelectric detection system, and it is used to model background radiation photon noise in the image generation stage of this paper.
Based on the above premise, to demonstrate the effectiveness of AMCM, multiple sets of test data are generated in MATLAB for simulation evaluation. The test data generation uses the classic OOK modulation format [36] as the AMS; that is, the image frame sequence alternates between aberration-free and aberrated frames. The aberration modulation type chosen is defocus aberration with a PV of 2λ to maximize the variation in target peak brightness. In total, 41 sets of test data are generated, covering an SNR range of 1 to 10. Each dataset contains 30 image frame sequences, with each single-frame image containing about 25 real targets. There is no overlap or contact between targets. Every real target is 3 × 3 pixels in size, with a frame-to-frame jump of 1 to 2 pixels to approximate real-world scenarios. The dimensions of each single-frame image are 128 × 128 pixels.
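The sketch below assembles such an OOK-modulated test sequence in simplified form: instead of propagating a Zernike phase screen through the generalized pupil as the paper does, the aberrated frames are approximated by blurring the target pixels, and the background is drawn from a Poisson distribution. All numerical settings are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_sequence(num_frames=30, size=128, num_targets=25, peak=30.0,
                  bg_mean=100.0, seed=0):
    """
    Simplified stand-in for the test-data generator: an OOK-modulated sequence in which
    targets are sharp in aberration-free frames and blurred in "aberrated" frames.
    """
    rng = np.random.default_rng(seed)
    ams = np.arange(num_frames) % 2                  # OOK: alternate clean / aberrated frames
    pos = rng.uniform(5, size - 5, (num_targets, 2))
    frames, truth = [], []
    for k in range(num_frames):
        pos = np.clip(pos + rng.integers(-2, 3, pos.shape), 2, size - 3)  # small jitter
        img = np.zeros((size, size))
        for r, c in pos.astype(int):
            img[r - 1:r + 2, c - 1:c + 2] += peak    # 3 x 3 pixel target
        if ams[k]:
            img = gaussian_filter(img, sigma=2.0)    # blur as a proxy for the defocused frame
        img = img + rng.poisson(bg_mean, (size, size))  # Poisson background photon noise
        frames.append(img)
        truth.append(pos.copy())
    return np.array(frames), ams, truth
```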
Figure 8 shows two consecutive image frames from an image sequence in the test data with an SNR of 6.01. The left image is an aberration-free frame in which all real targets are clearly visible and marked with red pentagrams; the right image is an aberrated frame in which defocus aberration with a PV of 2λ causes all real targets to become blurred and submerged in the background and noise.
The probability and sensitivity of detection are chosen as evaluation metrics to analyze the detection performance of AMCM. The probability of detection P d is calculated as the number ratio of correctly detected targets to the real targets, and the sensitivity of detection P s is calculated as the number ratio of correctly detected targets to total detected targets.
$$P_d = \frac{\text{number of correctly detected targets}}{\text{number of real targets}} \times 100\%$$
$$P_s = \frac{\text{number of correctly detected targets}}{\text{number of total detected targets}} \times 100\%$$
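A small helper for computing these two metrics from detected and ground-truth target positions might look as follows; the matching tolerance is an assumption, since the paper does not state how a detection is associated with a real target.

```python
import numpy as np

def detection_metrics(detected, truth, tol=2.0):
    """
    Probability of detection P_d and sensitivity P_s from detected and ground-truth
    target positions, each an array of (row, col). A detection counts as correct if it
    lies within `tol` pixels of a not-yet-matched real target (tolerance is assumed).
    """
    detected = np.atleast_2d(np.asarray(detected, dtype=float))
    truth = np.atleast_2d(np.asarray(truth, dtype=float))
    correct = 0
    if detected.size and truth.size:
        matched = np.zeros(len(truth), dtype=bool)
        for d in detected:
            dist = np.linalg.norm(truth - d, axis=1)
            dist[matched] = np.inf                   # each real target can be matched once
            j = int(np.argmin(dist))
            if dist[j] <= tol:
                matched[j] = True
                correct += 1
    p_d = 100.0 * correct / max(len(truth), 1)       # correctly detected / real targets
    p_s = 100.0 * correct / max(len(detected), 1)    # correctly detected / total detected
    return p_d, p_s
```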

3.1.2. Sample Size

Figure 9 and Figure 10 show the detection performance and time consumption of AMCM for different numbers of image frames per sequence. The method runs on a laptop equipped with a 13th Gen Intel Core i9-13900H, using CPU parallel processing for acceleration in MATLAB R2023b. As the number of frames increases, both the detection probability and the detection sensitivity of AMCM improve, confirming that the method's performance indeed depends on the signal correlation introduced by aberration modulation: the more frames there are, the less the uniformly distributed background and random noise fluctuations can maintain a high correlation with the AMS, and the more reliably they are filtered out, enabling effective target detection. The time consumption shows a roughly linear positive correlation with the number of frames. When the number is 8, there is a noticeable positive deviation in time consumption, indicating that most of the time is spent on MATLAB's memory allocation and on parts of the code unrelated to the number of frames. In summary, with 32 frames, AMCM achieves a detection probability of around 90%, a sensitivity greater than 90%, and a time consumption of less than 0.2 s at an SNR of 2, showing a balanced overall performance.
Figure 11 shows the time consumption ratio of each part of the image algorithm when the image sequence consists of 32 frames. The MPCM algorithm used for local contrast enhancement must calculate PCMs at four scales for each image, resulting in a significantly higher computational load than the other parts of the algorithm and hence the highest proportion. Relying solely on MATLAB's CPU multi-threaded parallel processing does not yield an effective speed-up.

3.1.3. Aberration Type with Different PV Values

Figure 12 and Figure 13 show the detection performance of AMCM at an SNR of approximately 1.95 when different orders and PV values of Zernike aberrations are used for aberration modulation. The aberration PV values in Figure 12 are fixed at 2λ. In terms of results, different aberration types and PV values affect AMCM's performance differently but are generally comparable, with no significant differences. Therefore, in Figure 13, considering that Z5 and Z6, Z7 and Z8, and Z9 and Z10 are the same types of aberration, only one of each pair is selected to analyze the impact of different PV values. Specifically, the optimal detection probability is 91.944% for Z6 aberration with a 0.2λ PV value, and the worst is 87.719% for Z8 aberration with a 0.4λ PV value, a difference of about 4.2%; the optimal detection sensitivity is 92.38% for Z11 aberration with a 1.0λ PV value, and the worst is 87.176% for Z4 aberration with a 0.6λ PV value, a difference of about 5.2%. If the impact of different aberration types and their PV values is evaluated comprehensively using the product of detection probability and detection sensitivity, the difference between the best and worst combinations in the tested parameter sets exceeds 7%, which we consider significant. However, because only a limited number of combinations of aberration types and PV values were tested, the entire parameter space has not been explored and no clear pattern can be summarized; this paper therefore cannot provide the bona fide optimal parameter combination. To facilitate a clear comparison of data between contexts, the subsequent parameter settings for aberration modulation still follow the previous ones, specifically Z4 aberration with a PV value of 2λ.

3.1.4. AMS

Since the selection of aberration PV values does not significantly affect the detection performance of the method, choosing OOK modulation for AMS remains more appropriate. However, it is still necessary to analyze different modulation cycles and duty cycles, as shown in Figure 14, Figure 15 and Figure 16. The number of frames in the frame sequence is set to 64. Analysis shows that when the duty cycle of the modulation signal is fixed at 0.5, the detection performance of the method decreases as the modulation cycle increases; when the modulation cycle is fixed at four frames and the duty cycle increases from 0.25 to 0.75, the detection probability of the method increases while the detection sensitivity decreases, making a duty cycle of 0.5 the most balanced. When both the duty cycle and the modulation cycle are random, the detection performance of the method is comparable to that of the OOK modulation signal with a modulation cycle of four frames and a duty cycle of 0.5. Therefore, the optimal performance of AMCM corresponds to the OOK modulation signal with a modulation cycle of two frames and a duty cycle of 0.5. The reason is that at this point, the modulation frequency is maximized, allowing for target detection using the difference between the noise randomness and the strong correlation of the target within the shortest number of frames.
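For reference, the two kinds of modulation signals discussed here, fixed-cycle OOK and a randomized schedule, can be generated as in the following sketch; frame counts, cycle ranges, and seeds are illustrative.

```python
import numpy as np

def ook_ams(num_frames, cycle=2, duty=0.5):
    """OOK aberration modulation signal with a given cycle (in frames) and duty cycle."""
    return ((np.arange(num_frames) % cycle) < duty * cycle).astype(int)

def random_ams(num_frames, seed=0):
    """Randomized schedule in which both the cycle length and the duty cycle vary."""
    rng, sig = np.random.default_rng(seed), []
    while len(sig) < num_frames:
        cycle = int(rng.integers(2, 9))              # random cycle of 2 to 8 frames
        on = int(rng.integers(1, cycle))             # random number of "on" frames
        sig.extend([1] * on + [0] * (cycle - on))
    return np.array(sig[:num_frames])

ams_fast = ook_ams(64, cycle=2, duty=0.5)  # best-performing setting reported above
ams_rand = random_ams(64)
```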

3.1.5. Classification Performance

The performance of AMCM is evaluated using the precision–recall (PR) curve [37]. On the PR curve, the horizontal axis represents the recall ratio and the vertical axis represents the precision ratio, depicting the precision performance at different recall levels. Compared with other classification evaluation tools, the PR curve focuses more on the accuracy of positive sample classification, making it suitable for the highly imbalanced positive and negative samples in this paper. The definitions of recall and precision are equivalent to the method's detection probability $P_d$ and detection sensitivity $P_s$, respectively.
Figure 17 shows the PR curve of AMCM when the SNR is 2. The area under the PR curve is the average precision at different recall levels, representing the overall quality of the prediction results. Additionally, the larger the balance point (where precision ratio is equal to recall ratio), the better the method’s performance. AMCM achieved an area under the curve of 0.9396 and a balance point value of 0.9, indicating good prediction performance.
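A pixel-level approximation of this evaluation, sweeping the segmentation threshold on the correlation-significance map and integrating the resulting curve with the trapezoidal rule, is sketched below; it simplifies the paper's target-level matching, and the threshold grid is an assumption.

```python
import numpy as np

def pr_curve(corr_map, truth_mask, thresholds=None):
    """Pixel-level precision-recall curve from a correlation-significance map."""
    if thresholds is None:
        thresholds = np.linspace(0.0, 1.0, 101)
    pos = truth_mask.astype(bool)
    precision, recall = [], []
    for t in thresholds:
        det = corr_map >= t
        tp = np.count_nonzero(det & pos)
        fp = np.count_nonzero(det & ~pos)
        fn = np.count_nonzero(~det & pos)
        precision.append(tp / max(tp + fp, 1))
        recall.append(tp / max(tp + fn, 1))
    r, p = np.array(recall), np.array(precision)
    order = np.argsort(r)                            # integrate in order of increasing recall
    auc = float(np.sum(np.diff(r[order]) * 0.5 * (p[order][1:] + p[order][:-1])))
    return r, p, auc
```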

3.1.6. Horizontal Comparison

Finally, AMCM is compared with several traditional algorithm-based detection methods, including the infrared patch-image model (IPI) [11], the double-neighborhood gradient method (DNGM) [38], and the MPCM method [28]. These methods represent different approaches to detecting dim and small targets: IPI is based on the image data structure, DNGM on local intensity and gradient, and MPCM on local contrast. They are classic or efficient methods in their respective fields, and their effectiveness has been validated in numerous studies. Moreover, most single-frame detection methods can serve as pretreatment for AMCM, such as the MPCM used in this paper; the comparison therefore makes the improvement that aberration modulation and signal correlation bring over traditional methods more evident. Figure 18 and Figure 19 show that AMCM achieves a higher detection probability owing to the active aberration modulation and signal cross-correlation operations, with detection sensitivity significantly outperforming the other methods. This also means that, by adjusting the parameters of AMCM, detection probability can be further improved at the expense of slightly reduced detection sensitivity. Considering both detection probability and detection sensitivity, AMCM can effectively detect targets with an SNR of around 2, whereas traditional algorithms (such as DNGM) typically can only effectively detect targets with an SNR of about 5. Thus, AMCM has a stronger capability for detecting dim and small targets.

3.2. Experiment

To further validate AMCM’s usability, an aberration modulation experimental device as depicted in Figure 20 is established. This device is adapted from Thorlabs’ adaptive optics kit AOK8/M. The laser (Thorlabs CPS635R), regarded as a point target at infinity, is transmitted to the DM (Thorlabs DMH40/M-P01) via a two-stage beam expansion system consisting of L1-L2 and L3-L4. The integrating sphere (LBTEK LBIS-LPS100-3) is employed to simulate an overall uniform background with random fluctuations. The light source is a halogen lamp (LBTEK LBHL2000-20W), connected to the integrating sphere through a fiber bundle illustrated in Figure 20b. The background light is combined with the laser in the shared optical path via a beam combiner, then transmitted to the DM through the beam expander system comprising L3 and L4. The deformable mirror directs the target and background lights to the receiving aperture (Thorlabs MVL35M1), which are then imaged by the focal plane array detector (Thorlabs CS2100M-USB).
By applying an aberration modulation signal to the deformable mirror and synchronously capturing image frames with the detector, a sequence of aberration-adjusted image frames is obtained. The experiment also employs OOK modulation format and defocus aberration, consistent with the simulation. It is worth noting that when generating defocus aberration using a deformable mirror, some coma aberration is additionally produced due to the structure of the deformable mirror. However, this has no adverse effect on AMCM. The control block diagram of aberration modulation and image acquisition is shown in Figure 21.
Due to limitations in the precision of target intensity control and SNR calculation, the experiment cannot collect image frame sequences at very small SNR intervals as the simulation does. Therefore, only four sets of image data with different SNRs are collected, namely 1.94, 2.9, 6.2, and 10.5, covering low-, medium-, and high-SNR scenarios. Two consecutive frames from the four image frame sequences with different SNRs are presented in Figure 22. Because each single-frame image contains only one target (marked with a red pentagram), 2000 sequences are collected for each SNR to evaluate the method's performance as accurately as possible. Each sequence consists of 100 frames, maintaining consistency with the simulation conditions. The different positions of the target in each sequence are simulated by adjusting the camera's region of interest (ROI) window. It is noteworthy that, owing to the position of the integrating sphere, the background of the obtained images shows significant non-uniformity, brighter at the center and darker toward the edges, which places greater demands on the detection method.
The processing results of the experimental data by different methods are shown in Figure 23 and Figure 24, with all adjustable parameters of the methods being the same as those used in simulations. AMCM still demonstrates superior detection performance compared to other methods, effectively detecting images with an SNR of around 2. Due to the background non-uniformity, which does not meet the assumption of non-local self-similarity, IPI’s effectiveness is greatly compromised, making it difficult to effectively detect targets. The detection performance of MPCM at low SNR (<3) is significantly worse than the simulation data. The reason is that the segmentation threshold of MPCM cannot be adaptively adjusted based on scene changes. Both the DNGM and AMCM exhibit good robustness. Since the experimental setup does not dynamically simulate atmospheric turbulence, the collected sequence frame data lack a dynamic correction process. As a result, the image frames are relatively ideal and maintain a high correlation with the AMS, potentially leading to an overestimation of AMCM’s performance. This issue needs to be further discussed and analyzed in future field experiments.
The adaptive optics system of the 1.8 m ground-based telescope at Lijiang Observatory has largely been modified to be compatible with the AMCM method. However, because of the rainy season in Lijiang from May to October each year, sufficient field data have not yet been obtained, so this paper cannot analyze the effectiveness and robustness of AMCM under actual atmospheric conditions, nor discuss the impact of various atmospheric factors (such as different levels of atmospheric turbulence and cloud cover) on its performance. In parallel, an indoor atmospheric turbulence simulation platform will be established; its core idea is to generate atmospheric turbulence phase screens by numerical simulation and load them onto a spatial light modulator, imposing the corresponding phase distortions on the beam. Given that the current work has largely demonstrated the feasibility and application potential of AMCM, research on atmospheric scenarios will be one of the key tasks in the next stage.

4. Discussion

4.1. Features and Applications of the Method

The core principle of the AMCM for target detection is phase-locked amplification. Phase-locked amplification technology has been widely applied in the fields of spectral analysis [39] and temperature measurement [40], amongst others. In optical signal measurement, phase-locked amplification typically uses an electronically controlled chopper [41] to modulate (chop) continuous light into periodic intermittent light at a certain frequency. The echo signal of the light source, along with the noise signal, is input into the phase-locked amplifier, and the frequency that matches the reference frequency will pass through the low-pass filter, while signals at other frequencies, such as noise signals, will be filtered out.
However, in the field of passive imaging, detection systems do not rely on external energy sources but merely receive the light energy emitted by the target object and background radiation. Therefore, traditional phase-locked amplification techniques cannot independently modulate the target signal periodically. This paper proposes a novel phase-locked amplification method, AMCM, by introducing wavefront correctors, such as DM and liquid crystal spatial light modulators, to perform periodic aberration modulation on optical detection systems. This method leverages the differences in aberration response characteristics between the target and background, introducing periodic features in addition to common features such as target intensity, morphology, and local contrast. It successfully utilizes the periodic nature of the target and the irregular, chaotic nature of noise to achieve effective detection of lower signal-to-noise ratio targets. The algorithm’s effectiveness has been validated through simulations and indoor experiments, and the next step will be to conduct field experiments. Additionally, due to the relative maturity of wavefront correctors, AMCM can be easily transferred from adaptive optics astronomical telescopes to other passive imaging systems, especially those used for long-distance target detection in the infrared band.
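The analogy with lock-in detection can be illustrated with a one-dimensional toy example: a weak OOK-modulated signal buried in noise is recovered by correlating with the reference, while an unmodulated pixel yields a near-zero response. The amplitudes and noise level are arbitrary choices for illustration.

```python
import numpy as np

# One-dimensional lock-in toy example: a weak OOK-modulated "target" signal buried in
# noise is recovered by correlating with the reference, while an unmodulated
# "background" pixel gives a near-zero response.
rng = np.random.default_rng(1)
T = 256
ref = (np.arange(T) % 2 == 0).astype(float)          # OOK reference (the AMS)
target_pixel = 0.5 * ref + rng.normal(0, 1.0, T)     # weak modulated signal + noise
background_pixel = rng.normal(0, 1.0, T)             # pure noise, no modulation

def lock_in(signal, reference):
    """Normalized correlation of a temporal signal with the reference."""
    s = signal - signal.mean()
    r = reference - reference.mean()
    return float(np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r) + 1e-12))

print("target correlation:    ", lock_in(target_pixel, ref))      # clearly positive
print("background correlation:", lock_in(background_pixel, ref))  # close to zero
```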

4.2. Factors Affecting Algorithms

The detection effectiveness of the cross-correlation operation depends on the sample size, which has been verified in Section 3.1. This is because, as the number of image frames increases, the target signal maintains a high positive correlation with the aberration modulation signal, while the correlation of randomly varying noise signals tends to zero, thereby enabling AMCM to effectively filter out noise even at higher bandwidths. However, a high frame count significantly increases hardware acquisition and algorithm processing time, making it difficult for AMCM to handle moving targets and thus limiting its application scope. Therefore, reducing the frame count requirement and increasing parallel processing speed are key optimization directions for AMCM to move towards practical application. The following optimization methods are worth studying: first, by designing non-local metasurfaces [42] to implement DoG filtering in the optical simulation domain, thereby alleviating the limitations on method speed and power consumption caused by the increase in image scale, and second, by using hardware accelerations [43] such as GPU and FPGA to improve the performance of image algorithms, especially the local contrast enhancement part. Speed improvement not only enhances real-time performance but also allows for target detection in a larger field of view within the same time frame, assuming the target resolution is the same.
Also, due to the limited repeat precision of the DM’s surface control, the introduced periodic aberrations fluctuate and atmospheric turbulence also varies over time, causing the residual aberrations after the DM compensates for wavefront distortions to be time-varying as well. The above restrictions lead to varying degrees of energy dispersion of the target in each cycle, meaning the actual reference signal does not perfectly match the target variations, thus affecting the cross-correlation effectiveness and reducing the detection performance. This may explain to some extent why AMCM’s performance in the experiments described in Section 3.2 is inferior to the simulations in Section 3.1.

4.3. Future Research Directions

Based on the characteristics and shortcomings of AMCM, future research directions mainly include the following.

4.3.1. Method Combination

On the image processing algorithm side in this paper, AMCM currently uses the MPCM to enhance dim and small targets and suppress the background and noise. The principle of the MPCM method is based on the human visual mechanism, and the advantages of MPCM have been briefly outlined above, especially compared to other methods with the same principle, such as the Local Contrast Method [44], Improved Local Contrast Method [45], and Accumulated Center–surround Difference Measure [46]. Considering that there are multiple principle routes available for detecting dim and small targets, to further enhance the detection performance of the AMCM method, future research could consider combining it with other passive target detection methods, such as the infrared patch-image (IPI) method based on image data structure [11], the three-dimensional collaborative filtering and spatial inversion (3DCFSI) method [47] based on spatiotemporal information, and the non-local means filtering (NLM) method based on background feature [48].

4.3.2. Optical Simulation Computing Device

The speed and power consumption of image processing algorithms in AMCM are limited by integrated circuit microelectronics [49], and these limitations increase rapidly with the scale of the sequence and the size of individual frames. Therefore, it is worth considering the introduction of a non-local metalens [50] to move steps like edge extraction from the algorithm side to the hardware side. Additionally, by using end-to-end optimization algorithms [51], jointly optimizing the metalens and image processing algorithms can enhance the processing speed and detection performance of AMCM.

4.3.3. Moving Object Detection

AMCM is currently focused on imaging scenarios mainly involving static or quasi-static targets, where the inter-frame displacement is still smaller than the target size. When the target is in motion, its trajectory is inherently continuous, while the noise in the image appears randomly and without regularity. Therefore, after introducing aberration modulation, it is possible to integrate the spatial and temporal information of the sequence images to achieve the detection of dim and small space targets. Based on the research of existing multi-frame motion target algorithms, aberration modulation can be considered in combination with dynamic programming [52], higher-order correlation [53], motion compensation [54], and adaptive filtering techniques [55] to further enhance the detection performance of dynamic dim and small targets. However, the computation time of the algorithms will inevitably increase significantly, making practical application difficult and improving algorithm efficiency important.

4.3.4. Hyperparameter Search

This paper only analyzes the impact on method performance of using a single aberration type for modulation, or of different periods and duty cycles when the modulation signal adopts the OOK mode. The selection criteria for the DoG filter ratio $\sigma_1/\sigma_2$ and the sliding window size $N$ in the image algorithm are also only briefly introduced. These analyses are relatively independent and discrete, with only one parameter changed at a time while the others remain fixed, which means the optimization process can hardly traverse the entire parameter space and the method may fall into a local optimum. Therefore, the adjustable algorithm parameters, aberration types, aberration PV values, and modulation signals (mode, period, duty cycle, etc.) can be regarded as hyperparameters, especially the combination of different aberration types and modulation parameters. Optimization algorithms such as Bayesian optimization [56], particle swarm optimization (PSO) [57], and the artificial bee colony (ABC) algorithm [58] can be introduced to perform hyperparameter optimization. A cost function combining multiple performance metrics, such as detection probability, false alarm rate, and computational efficiency, can also be developed to guide the optimization process.
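As a sketch of how such a search could be organized, the snippet below performs a simple random search over a hypothetical AMCM configuration space; the `evaluate` function, the parameter ranges, and the score (e.g., the product of detection probability and sensitivity) are all assumptions rather than part of the paper.

```python
import numpy as np

def random_search(evaluate, num_trials=50, seed=0):
    """
    Random search over a hypothetical AMCM configuration space. `evaluate` is a
    user-supplied function that runs the full pipeline on validation sequences and
    returns a score, e.g. detection probability x detection sensitivity.
    """
    rng = np.random.default_rng(seed)
    space = {
        "zernike_order": [4, 5, 7, 9, 11],      # aberration type (Zernike index)
        "pv_waves": [0.2, 0.6, 1.0, 2.0],       # modulation peak-valley in waves
        "cycle": [2, 4, 8],                     # OOK modulation cycle (frames)
        "duty": [0.25, 0.5, 0.75],              # OOK duty cycle
        "sigma_ratio": [1 / 3, 1 / 1.6],        # DoG sigma1 / sigma2
        "window": [3, 4, 5],                    # neighborhood Std. deviation window N
    }
    best_cfg, best_score = None, -np.inf
    for _ in range(num_trials):
        cfg = {k: v[rng.integers(len(v))] for k, v in space.items()}
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```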

Author Contributions

Methodology, C.J.; writing—original draft, J.L.; writing—review and editing, S.L. and H.X.; visualization, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Leibovich, M.; Papanicolaou, G.; Tsogka, C. Generalized correlation-based imaging for satellites. SIAM J. Imaging Sci. 2020, 13, 1331–1366. [Google Scholar] [CrossRef]
  2. Woods, D.; Shah, R.; Johnson, J.; Pearce, E.; Lambour, R.; Faccenda, W. Asteroid detection with the space surveillance telescope. In Proceedings of the AMOS Conference, Maui, HI, USA, 10–13 September 2013. [Google Scholar]
  3. Zhao, M.; Li, W.; Li, L.; Hu, J.; Ma, P.; Tao, R. Single-Frame Infrared Small-Target Detection: A survey. IEEE Geosci. Remote Sens. Mag. 2022, 10, 87–119. [Google Scholar] [CrossRef]
  4. Nie, J.; Qu, S.; Wei, Y.; Zhang, L.; Deng, L. An infrared small target detection method based on multiscale local homogeneity measure. Infrared Phys. Technol. 2018, 90, 186–194. [Google Scholar] [CrossRef]
  5. Li, Y.; Zhang, Y. Robust infrared small target detection using local steering kernel reconstruction. Pattern Recognit. 2018, 77, 113–125. [Google Scholar] [CrossRef]
  6. Xia, C.; Li, X.; Zhao, L.; Yu, S. Modified graph Laplacian model with local contrast and consistency constraint for small target detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 5807–5822. [Google Scholar] [CrossRef]
  7. Ren, X.; Yue, C.; Ma, T.; Wang, J.; Wang, J.; Wu, Y.; Weng, Z. Adaptive parameters optimization model with 3D information extraction for infrared small target detection based on particle swarm optimization algorithm. Infrared Phys. Technol. 2021, 117, 103838. [Google Scholar] [CrossRef]
  8. Zhou, D.; Wang, X. Research on high robust infrared small target detection method in complex background. IEEE Geosci. Remote Sens. Lett. 2023, 20, 6007705. [Google Scholar] [CrossRef]
  9. Lee, I.H.; Park, C.G. Infrared small target detection algorithm using an augmented intensity and density-based clustering. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5002714. [Google Scholar] [CrossRef]
  10. Zhou, D.; Wang, X. Robust Infrared Small Target Detection Using a Novel Four-Leaf Model. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 17, 1462–1469. [Google Scholar] [CrossRef]
  11. Gao, C.; Meng, D.; Yang, Y.; Wang, Y.; Zhou, X.; Hauptmann, A.G. Infrared patch-image model for small target detection in a single image. IEEE Trans. Image Process. 2013, 22, 4996–5009. [Google Scholar] [CrossRef]
  12. Zhang, X.; Ding, Q.; Luo, H.; Hui, B.; Chang, Z.; Zhang, J. Infrared small target detection based on an image-patch tensor model. Infrared Phys. Technol. 2019, 99, 55–63. [Google Scholar] [CrossRef]
  13. Zhang, C.; He, Y.; Tang, Q.; Chen, Z.; Mu, T. Infrared small target detection via interpatch correlation enhancement and joint local visual saliency prior. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5001314. [Google Scholar] [CrossRef]
  14. Dai, Y.; Wu, Y. Reweighted infrared patch-tensor model with both nonlocal and local priors for single-frame small target detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 3752–3767. [Google Scholar] [CrossRef]
  15. Pang, D.; Shan, T.; Li, W.; Ma, P.; Tao, R.; Ma, Y. Facet derivative-based multidirectional edge awareness and spatial–temporal tensor model for infrared small target detection. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5001015. [Google Scholar] [CrossRef]
  16. Li, Z.; Liao, S.; Wu, M.; Zhao, T.; Yu, H. Strengthened Local Feature-Based Spatial–Temporal Tensor Model for Infrared Dim and Small Target Detection. IEEE Sens. J. 2023, 23, 23221–23237. [Google Scholar] [CrossRef]
  17. Ju, M.; Luo, J.; Liu, G.; Luo, H. ISTDet: An efficient end-to-end neural network for infrared small target detection. Infrared Phys. Technol. 2021, 114, 103659. [Google Scholar] [CrossRef]
  18. Zhang, M.; Zhang, R.; Yang, Y.; Bai, H.; Zhang, J.; Guo, J. ISNet: Shape matters for infrared small target detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 877–886. [Google Scholar] [CrossRef]
  19. Hou, Q.; Wang, Z.; Tan, F.; Zhao, Y.; Zheng, H.; Zhang, W. RISTDnet: Robust infrared small target detection network. IEEE Geosci. Remote Sens. Lett. 2021, 19, 7000805. [Google Scholar] [CrossRef]
  20. Yao, S.; Zhu, Q.; Zhang, T.; Cui, W.; Yan, P. Infrared image small-target detection based on improved FCOS and spatio-temporal features. Electronics 2022, 11, 933. [Google Scholar] [CrossRef]
  21. Wang, W.; Xiao, C.; Dou, H.; Liang, R.; Yuan, H.; Zhao, G.; Chen, Z.; Huang, Y. CCRANet: A Two-Stage Local Attention Network for Single-Frame Low-Resolution Infrared Small Target Detection. Remote Sens. 2023, 15, 5539. [Google Scholar] [CrossRef]
  22. Wang, H.; Zhou, L.; Wang, L. Miss detection vs. false alarm: Adversarial learning for small object segmentation in infrared images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 8509–8518. [Google Scholar] [CrossRef]
  23. Dai, Y.; Wu, Y.; Zhou, F.; Barnard, K. Asymmetric contextual modulation for infrared small target detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Virtual, 5–9 January 2021; pp. 950–959. [Google Scholar] [CrossRef]
  24. Dai, Y.; Li, X.; Zhou, F.; Qian, Y.; Chen, Y.; Yang, J. One-stage cascade refinement networks for infrared small target detection. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5000917. [Google Scholar] [CrossRef]
  25. Conforti, G. Zernike aberration coefficients from Seidel and higher-order power-series coefficients. Opt. Lett. 1983, 8, 407–408. [Google Scholar] [CrossRef] [PubMed]
  26. Milanfar, P. Two-dimensional matched filtering for motion estimation. IEEE Trans. Image Process 2002, 8, 438–444. [Google Scholar] [CrossRef] [PubMed]
  27. Castleman, K.R. Digital Image Processing; Prentice Hall Press: Saddle River, NJ, USA, 1996. [Google Scholar]
  28. Wei, Y.; You, X.; Li, H. Multiscale patch-based contrast measure for small infrared target detection. Pattern Recognit. 2016, 58, 216–226. [Google Scholar] [CrossRef]
  29. Koz, A.; Alatan, A.A. Oblivious Spatio-Temporal Watermarking of Digital Video by Exploiting the Human Visual System. IEEE Trans. Circuits Syst. Video Technol. 2008, 18, 326–337. [Google Scholar] [CrossRef]
  30. Kim, S.; Lee, J. Scale invariant small target detection by optimizing signal-to-clutter ratio in heterogeneous background for infrared search and track. Pattern Recognit. 2012, 45, 393–406. [Google Scholar] [CrossRef]
  31. Wang, C.; Luo, M.; Su, X.; Lan, X.; Sun, Z.; Gao, J.; Ye, M.; Zhu, J. A sliding-window based signal processing method for characterizing particle clusters in gas-solids high-density CFB reactor. Chem. Eng. J. 2023, 452, 139141. [Google Scholar] [CrossRef]
  32. Sortino, M. Application of Statistical Filtering for Optical Detection of Tool Wear. Int. J. Mach. Tools Manuf. 2003, 43, 493–497. [Google Scholar] [CrossRef]
  33. Qin, T.; Mu, G.; Zhao, P.; Tan, Y.; Liu, Y.; Zhang, S.; Luo, Y.; Hao, Q.; Chen, M.; Tang, X. Mercury telluride colloidal quantum-dot focal plane array with planar p-n junctions enabled by in situ electric field–activated doping. Sci. Adv. 2023, 9, eadg7827. [Google Scholar] [CrossRef]
  34. Dudzik, M.C. Electro-Optical Systems Design, Analysis, and Testing. In The Infrared and Electro-Optical Systems Handbook; Environment Research Institute of Michigan & SPIE: Ann Arbor, MI, USA, 1993; Volume 4. [Google Scholar]
  35. Guo, Y.; Chen, K.; Zhou, J.; Li, Z.; Han, W.; Rao, X.; Bao, H.; Yang, J.; Fan, X.; Rao, C. High-resolution visible imaging with piezoelectric deformable secondary mirror: Experimental results at the 1.8-m adaptive telescope. Opto-Electron. Adv. 2023, 6, 230039-1. [Google Scholar] [CrossRef]
  36. Isautier, P.; Pan, J.; DeSalvo, R.; Ralph, S.E. Stokes Space-Based Modulation Format Recognition for Autonomous Optical Receivers. J. Light. Technol. 2015, 33, 5157–5163. [Google Scholar] [CrossRef]
  37. Cook, J.; Ramadas, V. When to consult precision-recall curves. Stata J. 2020, 20, 131–148. [Google Scholar] [CrossRef]
  38. Wu, L.; Ma, Y.; Fan, F.; Wu, M.; Huang, J. A Double-Neighborhood Gradient Method for Infrared Small Target Detection. IEEE Geosci. Remote Sens. Lett. 2020, 18, 1476–1480. [Google Scholar] [CrossRef]
  39. Liu, K.; Zhang, Y.; Gao, T.; Tong, F.; Liu, P.; Li, W.; Li, M. A handheld rapid detector of soil total nitrogen based on phase-locked amplification technology. Comput. Electron. Agric. 2024, 224, 109233. [Google Scholar] [CrossRef]
  40. Cheng, J.; Xu, Y.; Wu, L.; Wang, G. A Digital Lock-In Amplifier for Use at Temperatures of up to 200 °C. Sensors 2016, 16, 1899. [Google Scholar] [CrossRef] [PubMed]
  41. Enz, C.C.; Temes, G.C. Circuit Techniques for Reducing the Effects of Op-Amp Imperfections: Autozeroing, Correlated Double Sampling, and Chopper Stabilization. Proc. IEEE 1996, 84, 1584–1614. [Google Scholar] [CrossRef]
  42. Tanriover, I.; Dereshgi, S.A.; Aydin, K. Metasurface enabled broadband all optical edge detection in visible frequencies. Nat. Commun. 2023, 14, 6484. [Google Scholar] [CrossRef]
  43. Bustio-Martínez, L.; Cumplido, R.; Letras, M.; Hernández-León, R.; Feregrino-Uribe, C.; Hernández-Palancar, J. FPGA/GPU-based Acceleration for Frequent Itemsets Mining: A Comprehensive Review. ACM Comput. Surv. 2021, 54, 179. [Google Scholar] [CrossRef]
  44. Chen, C.L.P.; Li, H.; Wei, Y.; Xia, T.; Tang, Y.Y. A Local Contrast Method for Small Infrared Target Detection. IEEE Trans. Geosci. Remote Sens. 2014, 52, 574–581. [Google Scholar] [CrossRef]
  45. Han, J.; Ma, Y.; Zhou, B.; Fan, F.; Liang, K.; Fang, Y. A Robust Infrared Small Target Detection Algorithm Based on Human Visual System. IEEE Geosci. Remote Sens. Lett. 2014, 11, 2168–2172. [Google Scholar] [CrossRef]
  46. Xie, K.; Fu, K.; Zhou, T.; Zhang, J.; Yang, J.; Wu, Q. Small target detection based on accumulated center-surround difference measure. Infrared Phys. Technol. 2014, 67, 229–236. [Google Scholar] [CrossRef]
  47. Ren, X.; Wang, J.; Ma, T.; Bai, K.; Ge, M.; Wang, Y. Infrared dim and small target detection based on three-dimensional collaborative filtering and spatial inversion modeling. Infrared Phys. Technol. 2019, 101, 13–24. [Google Scholar] [CrossRef]
  48. Genin, L.; Champagnat, F.; Le Besnerais, G.; Coret, L. Point object detection using a NL-means type filter. In Proceedings of the 2011 18th IEEE International Conference on Image Processing, Brussels, Belgium, 11–14 September 2011; pp. 3533–3536. [Google Scholar] [CrossRef]
  49. Abdeldayem, H.; Frazier, D.O. Optical computing. Commun. ACM 2007, 50, 60–62. [Google Scholar] [CrossRef]
  50. Zhou, Y.; Zheng, H.; Kravchenko, I.I.; Valentine, J. Flat optics for image differentiation. Nat. Photonics 2020, 14, 316–323. [Google Scholar] [CrossRef]
  51. Xu, Y.; Huang, L.; Jiang, W.; Guan, X.; Hu, W.; Yi, L. End-to-End Learning for 100G-PON Based on Noise Adaptation Network. J. Light. Technol. 2024, 42, 2328–2337. [Google Scholar] [CrossRef]
  52. Qiang, Y.; Jiao, L.C.; Bao, Z. Study on mechanism of dynamic programming algorithm for dim target detection. In Proceedings of the 6th International Conference on Signal Processing, Beijing, China, 26–30 August 2002; Volume 2, pp. 1403–1406. [Google Scholar]
  53. Liou, R.J.; Azimi-Sadjadi, M.R. Dim target detection using high order correlation method. IEEE Trans. Aerosp. Electron. Syst. 1993, 29, 841–856. [Google Scholar] [CrossRef]
  54. Jin, D.; Lei, J.; Peng, B.; Li, W.; Ling, N.; Huang, Q. Deep Affine Motion Compensation Network for Inter Prediction in VVC. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 3923–3933. [Google Scholar] [CrossRef]
  55. Tang, K.; Astola, J.; Neuvo, Y. Nonlinear multivariate image filtering techniques. IEEE Trans. Image Process 1995, 4, 788–798. [Google Scholar] [CrossRef]
  56. Lancaster, J.; Lorenz, R.; Leech, R.; Cole, J.H. Bayesian Optimization for Neuroimaging Pre-processing in Brain Age Classification and Prediction. Front. Aging Neurosci. 2018, 10, 28. [Google Scholar] [CrossRef]
  57. Thangaraj, R.; Pant, M.; Abraham, A.; Bouvry, P. Particle swarm optimization: Hybridization perspectives and experimental illustrations. Appl. Math. Comput. 2011, 217, 5208–5226. [Google Scholar] [CrossRef]
  58. Ji, J.; Song, S.; Tang, C.; Gao, S.; Tang, Z.; Todo, Y. An artificial bee colony algorithm search guided by scale-free networks. Inf. Sci. 2019, 473, 142–165. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of the aberration response characteristics of the target and background.
Figure 2. The distribution characteristics under different aberrations. (a) Background mean and standard deviation; (b) target peak brightness.
Figure 3. Working modes of the DM. (a) Non-working mode; (b) closed-loop correction mode; (c) active aberration modulation mode.
Figure 4. Illustration of the entire AMCM workflow.
Figure 5. The waveform of the difference-of-Gaussians (DoG) filter.
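For readers who want to reproduce the general shape of the filter plotted in Figure 5, the following is a minimal Python sketch of a one-dimensional difference-of-Gaussians kernel. The kernel length and the two sigma values are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def dog_kernel_1d(length=31, sigma_inner=1.5, sigma_outer=3.0):
    """Difference-of-Gaussians (DoG) kernel: a narrow Gaussian minus a wider one.

    The length and sigma values here are illustrative placeholders only;
    the paper's actual filter parameters are not reproduced.
    """
    x = np.arange(length) - (length - 1) / 2.0
    g_inner = np.exp(-x**2 / (2 * sigma_inner**2))
    g_outer = np.exp(-x**2 / (2 * sigma_outer**2))
    # Normalize each Gaussian to unit sum before subtracting so the
    # resulting kernel has (approximately) zero mean.
    return g_inner / g_inner.sum() - g_outer / g_outer.sum()

kernel = dog_kernel_1d()
print(kernel.sum())  # close to zero: the filter suppresses flat backgrounds
```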
Figure 6. Eight directional filters with l = 3. (a–h) n = 1–8.
Figure 7. Matched filtering results (frequency domain).
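Figure 7 shows the output of frequency-domain matched filtering. As a rough illustration of that operation (not the paper's exact implementation), the sketch below correlates an image with a point-target template by multiplying the image spectrum with the conjugate of the template spectrum; the Gaussian template, noise level, and injected target are placeholders chosen only for demonstration.

```python
import numpy as np

def matched_filter_freq(image, template):
    """Frequency-domain matched filtering: correlation of `image` with
    `template`, computed as IFFT(FFT(image) * conj(FFT(template)))."""
    H = np.fft.fft2(template, s=image.shape)
    F = np.fft.fft2(image)
    return np.real(np.fft.ifft2(F * np.conj(H)))

# Illustrative use: a Gaussian spot as a stand-in for the point-target template.
y, x = np.mgrid[-7:8, -7:8]
template = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
image = np.random.default_rng(0).normal(size=(128, 128))
image[60:75, 60:75] += 5 * template  # inject a faint target into the noise
response = matched_filter_freq(image, template)
# The response peaks at the shift where the template best aligns with the data,
# i.e., near the injected target.
print(np.unravel_index(np.argmax(response), response.shape))
```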
Figure 8. Consecutive frames in the image frame sequence with an SNR of 6.01. (a) Aberration-free frame; (b) aberrated frame.
Figure 9. Probability of detection curve with different numbers of frames.
Figure 10. Sensitivity of detection curve with different numbers of frames.
Figure 11. Time consumption ratio of each part of the algorithm when the frame number is 32.
Figure 12. Detection performance at an SNR of approximately 1.95 with different aberration modulations.
Figure 13. Detection performance at an SNR of approximately 1.95 with different PV values. (a) Probability of detection; (b) sensitivity of detection.
Figure 14. OOK modulation signals with different periods and duty cycles.
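The OOK waveforms in Figure 14 are on-off keying sequences defined by a period and a duty cycle. A minimal sketch of how such a per-frame modulation sequence can be generated is given below; the frame count, period, and duty cycle are arbitrary example values, not the settings used in the experiments.

```python
import numpy as np

def ook_signal(num_frames, period, duty_cycle):
    """On-off keying (OOK) sequence: 1 during the 'on' fraction of each period,
    0 otherwise. `period` is in frames; `duty_cycle` lies in (0, 1)."""
    frame_idx = np.arange(num_frames)
    return ((frame_idx % period) < duty_cycle * period).astype(float)

# Example: 32 frames, period of 8 frames, 50% duty cycle.
print(ook_signal(32, 8, 0.5))
```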
Figure 15. Probability of detection curve with different OOK modulation signals.
Figure 16. Sensitivity of detection curve with different OOK modulation signals.
Figure 17. The precision–recall (PR) curve at an SNR of 2.
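The area under the precision–recall curve reported with Figure 17 can be computed from detection labels and scores in the usual way. The sketch below uses scikit-learn on synthetic labels and scores purely as placeholders; it does not reproduce the paper's data or its 0.9396 result.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, auc

# Synthetic per-candidate detections standing in for real outputs;
# labels and scores here are random placeholders, not the paper's data.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)              # 1 = target, 0 = background
scores = labels + rng.normal(scale=0.8, size=1000)  # noisy detector response

precision, recall, _ = precision_recall_curve(labels, scores)
pr_auc = auc(recall, precision)  # area under the PR curve
print(f"Area under the PR curve: {pr_auc:.4f}")
```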
Figure 18. Horizontal comparison of detection probabilities for different methods.
Figure 19. Horizontal comparison of detection sensitivities for different methods.
Figure 20. Aberration modulation experiment. (a) Beam path diagram; (b) physical illustration.
Figure 21. Control block diagram in the experiment.
Figure 22. Example of image frame sequence experimental data. (a–d) Different SNRs.
Figure 23. Horizontal comparison of detection probabilities on experimental data.
Figure 24. Horizontal comparison of detection sensitivities on experimental data.