1. Introduction
With the continuing development of deep-space exploration technology, the number of launches of various types of spacecraft is steadily increasing, and ever more accurate information is demanded from them. On board a spacecraft, the star sensor is the main attitude-measurement device [1,2], and its measurement accuracy is crucial to the navigational accuracy of the spacecraft. To improve the measurement accuracy of a star sensor, a simulator must be used so that the sensor can be calibrated on the ground. However, the minimum magnitudes of background stars generated by existing simulators fall well short of the limiting detection magnitudes of existing star sensors, and the irradiation of the image plane is nonuniform. This means that the maximum detection capability of star sensors cannot be calibrated, which inhibits any improvement in navigational accuracy. There is, therefore, an urgent need for a dark-and-weak-target simulator that can simulate the illumination of faint stars while maintaining energy uniformity in the image plane, so that the uniformity of the radiated light displayed in star charts is maintained and simulation input conditions are provided that meet the requirements for ground calibration of the star sensor.
To enhance full-field-of-view energy uniformity in dark-and-weak-target simulators, researchers have sought to optimize the structures of light sources and improve the performance of optical systems. Some published studies [3,4] have reported the use of free-form lenses, plane-mirror arrays, and interferometric structures to optimize light-source structures and thereby improve full-field-of-view energy uniformity. However, once light from the source passes through the optical system, the vignetting effect further degrades the energy uniformity of the image surface. In other studies [5,6], therefore, researchers have optimized and improved optical systems using Lambert's cosine law and aberration theory, resulting in improved output-image quality. Still other researchers [7] have described improvements in light transmission through new designs of frosted-glass material in optical systems.
Although the uniformity of illumination in the image plane of a dark-and-weak-target simulator can be improved by optimizing the design of the light source and the optical system, image quality and information extraction may be further improved through image-processing algorithms. Existing image-processing algorithms [8] are based on techniques such as response-curve fitting [9,10] and frequency-domain analysis [11]. By such means, the physical properties and response functions of the system that give rise to vignetting can be modeled, and a correction method adopted, such as coordinate transformation [12], mathematical fitting [13], marginal-distribution estimation [14], or parabolic fitting [15]. All of these methods are designed to compensate for the system's vignetting effect and improve image uniformity. However, they are either computationally complex or produce an inadequate fit, and most rely on additional information to achieve effective performance.
In the present study, we sought to address the problem of full-field-of-view energy nonuniformity that affects dark-and-weak-target simulators. By modeling energy transfer in such a simulator, we developed an adaptive compensation algorithm based on pixel-by-pixel fitting. We also established an experimental platform to validate the compensation method and found that it achieved effective compensation of full-field-of-view energy nonuniformity in dark-and-weak-target simulators.
2. Composition and Working Principle of the Dark-and-Weak-Target Simulator
The composition of the dark-and-weak-target simulator is shown in Figure 1. It comprised an attitude-and-orbit-dynamics simulation control computer, an LCD display driver circuit, a backlight light source, an LCD display, and a collimating optical system.
The attitude-and-orbit-dynamics simulation control computer, in conjunction with the target parameter settings and the mission requirements, was used to calculate and output image content in real time, including position information for the dark/weak targets, as well as environmental information. During output to the LCD display panel, by controlling the output image brightness $L_{\mathrm{LCD}}$ of the LCD display panel and the output brightness $L_{\mathrm{LED}}$ of the LEDs in the backlight light source, we increased or decreased the overall brightness of the final output image to harmonize and optimize the final imaging effect, while satisfying the following relationships:

$$L_{\mathrm{LED}} = k_{1} L_{\mathrm{LED,0}}, \qquad L_{\mathrm{LCD}} = k_{2} L_{\mathrm{LCD,0}},$$

where $L_{\mathrm{LED,0}}$ and $L_{\mathrm{LCD,0}}$ are the initial brightness of the backlight source and LCD display panel, respectively, and $k_{1}$ and $k_{2}$ are adjustable parameters. Synergistic control between the two units was achieved through image-sensing and feedback mechanisms. When the output image brightness was weak, we increased the values of $k_{1}$ and $k_{2}$; when the output image brightness was excessive, we lowered the values of $k_{1}$ and $k_{2}$. This adjustment completed the multi-stage optimization of the output image brightness.
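As an illustration, a minimal sketch of this two-parameter feedback adjustment is given below (Python/NumPy). The function name, gain, and target mean brightness are illustrative assumptions introduced here, not parameters of the actual system.

```python
import numpy as np

def adjust_brightness(captured, k1, k2, target_mean=128.0, gain=0.05):
    """One feedback step for the backlight (k1) and LCD (k2) scaling factors.

    captured    : 2-D array of sensed image gray levels (0-255)
    target_mean : desired mean output brightness (assumed value)
    gain        : relative step size per iteration (assumed value)
    """
    error = target_mean - captured.mean()      # >0: image too weak, <0: too bright
    scale = 1.0 + gain * np.sign(error)        # raise or lower both parameters together
    k1 = np.clip(k1 * scale, 0.0, 1.0)         # keep scaling factors in a physical range
    k2 = np.clip(k2 * scale, 0.0, 1.0)
    return k1, k2

# Example: a dim captured frame drives k1 and k2 upward over a few iterations.
k1, k2 = 0.5, 0.5
frame = np.full((1024, 1280), 60.0)            # synthetic dim image
for _ in range(3):
    k1, k2 = adjust_brightness(frame, k1, k2)
print(k1, k2)
```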
The image content output from the attitude-and-orbit-dynamics simulation control computer was collimated by the optical system, so that positional and environmental information on the dark/weak targets was projected onto a specified light-sensitive surface. By optically mapping the output image content under different parameters, a wide variety of dark/weak targets could be simulated and visualized effectively and with high fidelity, thereby fulfilling the requirement to display a wide range of dark/weak-target simulation effects.
4. Adaptive Compensation Algorithm Based on Pixel-by-Pixel Fitting
For the present study, we introduced an adaptive compensation algorithm based on pixel-by-pixel fitting. Provided that the uniformity of the light source satisfied our study requirements, the algorithm used a sensor to acquire the output images of different gray levels displayed by the dark-and-weak-target simulator. The response error surface of the dark-and-weak-target simulator was then calculated from the acquired images. The brightness of each pixel of the simulator was adaptively compensated and adjusted using this error surface, and the compensated output image was then acquired. Finally, the grayscale standard display function (SDF) values [17] of the captured images before and after compensation were calculated, so that any improvement in the uniformity of the light-intensity distribution on the image surface of the dark-and-weak-target simulator could be evaluated. The overall compensation process is set out in flowchart form in Figure 3.
Initial working conditions were established, the uniformity of the light source was measured, and images were captured. It is worth noting that the specific region over which a sensor can ultimately detect a dark-and-weak-target simulator depends on the field of view of the simulator and the field of view of the sensor, and these are usually not the same. In order to ensure complete detection of the dark-and-weak-target simulator by the sensor, the field-of-view angle of the sensor should be greater than or equal to that of the simulator, i.e., $2\omega_{c} \ge 2\omega$. This may be demonstrated using Equation (3), as follows:

$$2\omega_{c} = 2\arctan\frac{h_{c}}{2f_{c}'} = 2\arctan\frac{a_{c}\sqrt{m_{c}^{2}+n_{c}^{2}}}{2f_{c}'} \;\ge\; 2\omega = 2\arctan\frac{h}{2f'} = 2\arctan\frac{a\sqrt{m^{2}+n^{2}}}{2f'},\tag{3}$$

where $f'$ is the focal length of the dark-and-weak-target simulator, $f_{c}'$ is the focal length of the sensor, $h$ is the image height of the dark-and-weak-target simulator, $h_{c}$ is the image height of the sensor, $a$ is the image-element size of the dark-and-weak-target simulator, $a_{c}$ is the image-element size of the sensor, $m$ is the number of transverse elements, and $n$ is the number of longitudinal elements (the subscript $c$ denoting the corresponding quantities for the sensor). Thus, by calculating the field-of-view angles of the simulator and the sensor, the percentage of the simulator's field of view that can be detected by the sensor may be identified. In other words, by detecting localized areas within the imaging range of the dark-and-weak-target simulator and using these as a microcosm of overall system performance, the distribution and spread of dark-and-weak-target simulator imaging over the detection area of the sensor can be accurately described.
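For reference, this field-of-view check can be sketched as follows (Python); the focal lengths, pixel sizes, and pixel counts used here are illustrative assumptions, not the parameters of the instruments described in this paper.

```python
import math

def full_fov_deg(pixel_size_mm, n_x, n_y, focal_length_mm):
    """Full diagonal field-of-view angle 2*omega, in degrees."""
    image_height = pixel_size_mm * math.hypot(n_x, n_y)   # diagonal image height h
    return 2.0 * math.degrees(math.atan(image_height / (2.0 * focal_length_mm)))

# Illustrative values only (not the instruments used in the paper).
fov_sim = full_fov_deg(pixel_size_mm=0.008, n_x=1920, n_y=1080, focal_length_mm=100.0)
fov_sen = full_fov_deg(pixel_size_mm=0.0055, n_x=2048, n_y=2048, focal_length_mm=35.0)

print(f"simulator 2*omega   = {fov_sim:.2f} deg")
print(f"sensor    2*omega_c = {fov_sen:.2f} deg")
print("sensor fully covers simulator:", fov_sen >= fov_sim)
# Fraction of the simulator's field of view seen by the sensor (capped at 100%).
print(f"coverage ≈ {min(fov_sen / fov_sim, 1.0):.1%}")
```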
The uniformity of the backlight source can be evaluated by measuring its irradiance inhomogeneity using Equation (4), as follows:

$$\varepsilon = \pm\frac{E_{\max}-E_{\min}}{2\bar{E}}\times 100\%,\tag{4}$$

where $E_{\max}$ is the maximum value of irradiance within the irradiated surface, $E_{\min}$ is the minimum value of irradiance within the irradiated surface, and $\bar{E}$ is the average irradiance. When the irradiance inhomogeneity of the backlight source is better than ±10%, as specified in the CIE standard [18], it can be assumed that the impact of the inhomogeneity of the backlight source on the system is controlled within an acceptable range.
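A minimal sketch of this uniformity check, assuming the measured irradiance map is available as a 2-D array in consistent units, is given below; the synthetic ripple in the example is purely illustrative, and the formula follows Equation (4) as written above.

```python
import numpy as np

def irradiance_inhomogeneity(irradiance):
    """Relative inhomogeneity of an irradiance map, following Equation (4):
    +/- (E_max - E_min) / (2 * E_mean), expressed in percent."""
    e_max = float(irradiance.max())
    e_min = float(irradiance.min())
    e_mean = float(irradiance.mean())
    return 100.0 * (e_max - e_min) / (2.0 * e_mean)

# Example with a synthetic, mildly nonuniform irradiance map (arbitrary units).
x = np.tile(np.arange(640), (480, 1))
field = 1.0 + 0.05 * np.cos(2 * np.pi * x / 640)      # ~5% ripple
eps = irradiance_inhomogeneity(field)
print(f"inhomogeneity = ±{eps:.2f}%  ->  acceptable: {eps <= 10.0}")
```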
Error fitting was then performed, while ensuring that the uniformity of the backlight source met the study requirements. We let the dark-and-weak-target simulator output 256 images spanning the grayscale range from 0 to 255, and denoted the initial grayscale matrix of the $k$th output image as $\mathbf{I}_{k}$ and the grayscale of each of its pixels as $I_{k}(x,y)$ (with $x$ and $y$ being the coordinates of the horizontal and vertical positions, respectively, of each pixel point), so that

$$\mathbf{I}_{k} = \left[I_{k}(x,y)\right],\qquad I_{k}(x,y) = k,\qquad k = 0,1,\dots,255.$$
The grayscale matrix of the $k$th captured image from the sensor was then denoted as $\mathbf{C}_{k}$, and the grayscale value of each of its pixels as $C_{k}(x,y)$, so that

$$\mathbf{C}_{k}=\begin{bmatrix} C_{k}(1,1) & C_{k}(1,2) & \cdots & C_{k}(1,N)\\ C_{k}(2,1) & C_{k}(2,2) & \cdots & C_{k}(2,N)\\ \vdots & \vdots & \ddots & \vdots\\ C_{k}(M,1) & C_{k}(M,2) & \cdots & C_{k}(M,N)\end{bmatrix},$$

where $M$ and $N$ are the numbers of rows and columns, respectively, of the captured image.
Next, we calculated the grayscale mean of matrix $\mathbf{C}_{k}$ to obtain the grayscale mean matrix of the $k$th captured image. We denoted this matrix as $\bar{\mathbf{C}}_{k}$, and the grayscale of each of its pixels as $\bar{C}_{k}(x,y)$, so that

$$\bar{C}_{k}(x,y)=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}C_{k}(i,j),$$

i.e., every element of $\bar{\mathbf{C}}_{k}$ is equal to the mean grayscale value of $\mathbf{C}_{k}$.
We then calculated and superimposed the mean value of the difference between the grayscale matrix of each captured image and its grayscale mean matrix. This value was denoted as $\mathbf{D}_{n}$, as shown in Equation (9):

$$\mathbf{D}_{n}=\frac{1}{n}\sum_{k=1}^{n}\frac{\mathbf{C}_{k}-\bar{\mathbf{C}}_{k}}{\bar{\mathbf{C}}_{k}},\tag{9}$$

where the division is performed element by element and $n$ is the number of superimposed images. It should be noted that, when $D_{n}(x,y)<0$, the actual response of the pixel is lower than the ideal average response. In such an event, the grayscale of the pixel should be brightened, to bring it closer to the ideal average grayscale distribution. If $\Delta\mathbf{D}_{n}$ is then taken to be the relative-difference matrix between the $n$th superposition and the $(n-1)$th superposition, and $\Delta D_{\max}$ the maximum value of that relative-difference matrix, then

$$\Delta\mathbf{D}_{n}=\left|\mathbf{D}_{n}-\mathbf{D}_{n-1}\right|,\qquad \Delta D_{\max}=\max_{x,y}\,\Delta D_{n}(x,y).$$
A contrast-sensitivity function curve for the human eye reveals that the luminance threshold for visual sensitivity in the human eye is about 1.2% [19]. However, the luminance-detection sensitivity of the sensor used in the present study was much higher than that of the human eye. Consequently, when $\Delta D_{\max}$ satisfied the condition of Equation (11), it could be assumed that the sensor was unable to distinguish any difference in luminance, and the iteration could be terminated.
Finally, we took the difference-mean matrix obtained from the $n$th (final) cycle as the response error matrix of the dark-and-weak-target simulator, denoted as $\mathbf{E}$, i.e., $\mathbf{E}=\mathbf{D}_{n}$.
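The error-surface fitting step can be sketched as follows (Python/NumPy), assuming the flat-field captures are supplied as a list of 2-D arrays; the stopping threshold and the synthetic fall-off in the example are illustrative assumptions, and the formulas follow the reconstruction given above rather than the authors' exact implementation.

```python
import numpy as np

def fit_response_error(captures, tol=1e-3):
    """Pixel-by-pixel fit of the response error surface E.

    captures : iterable of 2-D float arrays (captured flat-field images C_k)
    tol      : stopping threshold on the maximum change of the running mean
               (illustrative value; the paper terminates the iteration when the
               sensor can no longer distinguish the change in luminance)
    Returns the relative response error matrix E (same shape as the captures).
    """
    running_sum = None
    prev_mean = None
    for n, c in enumerate(captures, start=1):
        c = np.asarray(c, dtype=float)
        rel_diff = (c - c.mean()) / c.mean()          # (C_k - mean(C_k)) / mean(C_k)
        running_sum = rel_diff if running_sum is None else running_sum + rel_diff
        d_n = running_sum / n                          # D_n: mean of the superimposed differences
        if prev_mean is not None:
            delta_max = np.abs(d_n - prev_mean).max()  # max change between successive superpositions
            if delta_max < tol:                        # sensor can no longer tell the difference
                return d_n
        prev_mean = d_n
    return prev_mean                                   # E = D_n from the final cycle

# Example with synthetic captures sharing a fixed ~3% corner fall-off.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:64, 0:64]
falloff = 1.0 - 0.03 * ((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 32 ** 2)
caps = [k * falloff + rng.normal(0, 0.2, falloff.shape) for k in range(1, 256)]
E = fit_response_error(caps)
print(E.shape, float(E.min()), float(E.max()))
```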
To determine the uniformity compensation for pixel brightness in the dark-and-weak-target simulator, we obtained the compensated image $\mathbf{I}'$ by taking the output-image matrix $\mathbf{I}$ of the dark-and-weak-target simulator and subtracting from it the element-wise product of the error matrix $\mathbf{E}$ and the output-image matrix $\mathbf{I}$ of the simulator, as shown in Equation (13), as follows:

$$\mathbf{I}' = \mathbf{I} - \mathbf{E}\circ\mathbf{I},\tag{13}$$

where $\circ$ denotes the element-wise (Hadamard) product.
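A minimal sketch of this inverse compensation is given below; the clipping of the result back to the 8-bit display range is an assumed implementation detail, not something stated in the text.

```python
import numpy as np

def compensate(image, error_surface):
    """Apply Equation (13): I' = I - E∘I, element-wise, then clip to the 8-bit range."""
    corrected = image.astype(float) * (1.0 - error_surface)   # I - E∘I == (1 - E)∘I
    return np.clip(np.rint(corrected), 0, 255).astype(np.uint8)

# Example: brighten pixels whose response error E is negative, dim those where E > 0.
I_out = np.full((64, 64), 128, dtype=np.uint8)
E = np.zeros((64, 64)); E[:, :32] = -0.05; E[:, 32:] = 0.05
print(compensate(I_out, E)[0, 0], compensate(I_out, E)[0, -1])   # ~134 vs ~122
```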
Having acquired a second, compensated image, we were able to substitute both the initial and compensated images into Equation (14). The grayscale SDF of the acquired image before and after compensation was then calculated, in order to analyze any improvement in the uniformity of the light-intensity distribution on the image surface of the dark-and-weak-target simulator, so that

$$\mathrm{SDF} = \sqrt{\frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N}\left[g(x,y)-\bar{g}\right]^{2}},\tag{14}$$

where $M$ and $N$ are the numbers of rows and columns, respectively, in the image, $g(x,y)$ is the grayscale value of the captured image (before or after compensation) at the coordinates of point $(x,y)$, and $\bar{g}$ is the average grayscale value of the image. The lower the grayscale SDF value, the more concentrated and uniform the grayscale distribution in the image; conversely, the higher the value, the more scattered and uneven the distribution. Because the algorithm was designed to fit the combined effect of the various influencing factors into a unified response error surface, this surface became the 'total factor' affecting the uniformity of the picture and was utilized to inversely compensate the dark-and-weak-target simulator. Any change in the distribution before and after compensation could therefore be evaluated using the SDF, which indirectly reflects the extent to which the combined effect of the various influencing factors has been eliminated.
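The evaluation metric can be sketched as follows, assuming 2-D grayscale arrays for the captures before and after compensation; the root-mean-square form follows Equation (14) as written above, and the synthetic images are purely illustrative.

```python
import numpy as np

def grayscale_sdf(image):
    """Grayscale SDF per Equation (14): RMS deviation of gray levels from the image mean."""
    g = image.astype(float)
    return float(np.sqrt(np.mean((g - g.mean()) ** 2)))

# Example: the compensated capture should give a noticeably lower SDF.
rng = np.random.default_rng(1)
before = 128 + 12 * rng.standard_normal((480, 640))    # wider spread of gray levels
after = 128 + 6 * rng.standard_normal((480, 640))      # tighter spread after compensation
print(f"SDF before: {grayscale_sdf(before):.2f}, after: {grayscale_sdf(after):.2f}")
```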
It can be seen that, by using the image's pixel distribution to reflect the characteristics of the light-intensity distribution, using the compensation algorithm to obtain the response error surface, and using that surface to inversely compensate the image of the dark-and-weak-target simulator, an image with a uniform grayscale distribution may be obtained. Distortion and nonuniformity in the boundary region are thereby alleviated, so that the grayscale distribution there becomes smoother and more concentrated. The algorithm does not need to model the various influencing factors accurately; it only needs a sufficient number of sampled images, which simplifies processing. Through appropriate expansion of the sampling region to cover the boundary of the field of view, energy compensation of the image surface of the dark-and-weak-target simulator is ultimately achieved.
6. Conclusions and Outlook
In the study reported in this paper, we used an adaptive compensation algorithm based on pixel-by-pixel fitting to deal effectively with the full-field-of-view energy inhomogeneity problem that affects dark-and-weak-target simulators as a result of light-source nonuniformity, light leakage from LCD display panels, and the vignetting of collimating optical systems. The experimental results showed that, after compensation, the grayscale SDF value of the image was reduced by about 50% overall, which effectively eliminated the effects of the uneven light source, light leakage from the LCD display panel, and gradual vignetting of the optical system. Using our method, the uniformity of the energy on the image surface was greatly improved, laying a foundation for high-quality image processing and feature extraction.
Researchers may now seek to optimize light distribution at the hardware level by dividing light sources into zones and regulating light for different regions. Image uniformity might be further improved by studying stationary-point image correction algorithms, as well as by image processing at the software level. Indeed, comprehensive zonal regulation of the light source and stationary-point image correction are two methods which are expected to deliver further control and correction of full-field-of-view energy distribution, and thus address still more comprehensively the problem of full-field-of-view energy nonuniformity which affects dark-and-weak-target simulators.