Article

A Compensation Method for Full-Field-of-View Energy Nonuniformity in Dark-and-Weak-Target Simulators

Aerospace Ground Simulation Test and Testing Technology Institute, Changchun University of Science and Technology, Changchun 130012, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(13), 4147; https://doi.org/10.3390/s24134147
Submission received: 15 April 2024 / Revised: 19 June 2024 / Accepted: 20 June 2024 / Published: 26 June 2024
(This article belongs to the Section Navigation and Positioning)

Abstract
Dark-and-weak-target simulators are used as ground-based calibration devices to test and calibrate the performance metrics of star sensors. However, these simulators are affected by full-field-of-view energy nonuniformity. This problem impacts the quality of output images and the calibration accuracy of sensors and inhibits further improvements in navigational accuracy. In the study reported in this paper, we sought to analyze the factors which affect full-field-of-view energy uniformity in dark-and-weak-target simulators. These include uneven irradiation in backlight sources, the leakage of light from LCD display panels, and the vignetting of collimating optical systems. We then established an energy transfer model of a dark-and-weak-target simulator based on the propagation of a point light source and proposed a self-adaptive compensation algorithm based on pixel-by-pixel fitting. This algorithm used a sensor to capture the output image of a dark-and-weak-target simulator and iteratively calculated the response error matrix of the simulator. Finally, we validated the feasibility and effectiveness of the compensation algorithm by acquiring images using a self-built test system. The results showed that, after compensating an output image of the dark-and-weak-target simulator, the grayscale standard display function (SDF) of the acquired sensor image was reduced by about 50% overall, indicating that the acquired image was effectively compensated and that the desired grayscale distribution was obtained. This study provides a reference for improving the quality of output images from dark-and-weak-target simulators, so that the working environments of star sensors may be more realistically simulated, and their detection performance improved.

1. Introduction

With the continuing development of deep-space exploration technology, the number of launches of various types of spacecraft is steadily increasing. At the same time, there is a demand for ever more accurate information from spacecraft. In spacecraft, the star sensor is the main attitude measurement device [1,2]. The measurement accuracy of the star sensor is crucial to the navigational accuracy of the spacecraft. In order to improve the measurement accuracy of a star sensor, a simulator must be used so that the sensor may be calibrated on the ground. However, the minimum magnitudes of background stars generated by existing simulators are much lower than the limiting detection magnitudes of existing star sensors, and the irradiation of the image plane is nonuniform. This means that the maximum detection capability of star sensors cannot be calibrated, and this inhibits any improvements in navigational accuracy. There is, therefore, an urgent need for a dark-and-weak-target simulator that can simulate the illumination of faint stars while maintaining energy uniformity in the image plane, so that the uniformity of radiated light displayed in star charts is maintained, and simulation input conditions are provided which meet the requirements for ground calibration of the star sensor.
To enhance full-field-of-view energy uniformity in dark-and-weak-target simulators, researchers have sought to optimally adjust the structures of light sources and improve the performance of optical systems. In some published studies [3,4], researchers have reported the use of free-form lenses, plane mirror arrays, and interferometric structures to optimally adjust the structures of light sources, for the purpose of improving full-field-of-view energy uniformity. However, once light from a light source enters an optical system, vignetting has an even greater impact on the energy uniformity of the image surface. In other studies [5,6], therefore, researchers have optimized and improved optical systems using Lambert's cosine theory and aberration theory, resulting in improved output-image quality. Still other researchers [7] have described improvements in light transmission through new designs of frosted-glass material in optical systems.
Although the uniformity of illumination in the image plane of a dark-and-weak-target simulator can be improved by optimizing the design of the light source and the optical system, image quality and information extraction may be further improved by the use of image-processing algorithms. Existing image-processing algorithms [8] are based on different techniques such as response curve-fitting [9,10] and frequency-domain analysis [11]. By such means, the physical properties and response functions of the system which result in vignetting may be modeled, and a correction method adopted, such as coordinate transformation [12], mathematical fitting [13], marginal distribution estimation [14], or parabolic fitting [15]. All these methods are designed to compensate for the system’s vignetting effect and improve image uniformity. However, such methods are either computationally complex or result in an inadequate fit, and most rely on additional information to achieve effective performance.
In the present study, we sought to address the problem of full-field-of-view energy uniformity affecting dark-and-weak-target simulators. By modeling energy transfer in such a simulator, we developed an adaptive compensation algorithm based on pixel-by-pixel fitting. We also established an experimental platform to experimentally validate the compensation method and found that it achieved effective compensation of full-field-of-view energy nonuniformity in dark-and-weak-target simulators.

2. Composition and Working Principle of the Dark-and-Weak-Target Simulator

The composition of the dark-and-weak-target simulator is shown in Figure 1. It can be seen that it comprised an attitude-and-orbit-dynamics simulation control computer, an LCD display driver circuit, a backlight light source, an LCD display, and a collimating optical system.
The attitude-and-orbit-dynamics simulation control computer used the target parameter settings and mission requirements to calculate and output image content in real time, including position information for the dark/weak targets as well as environmental information. During output to the LCD display panel, we controlled the output image brightness L of the LCD display panel and the output brightness B of the LEDs in the backlight source, increasing or decreasing the overall brightness of the final output image to harmonize and optimize the final imaging effect while satisfying the following relationships:
$$B = B_0 \beta, \qquad L = L_0 \alpha \quad (1)$$
where $B_0$ and $L_0$ are the initial brightness of the backlight source and LCD display panel, respectively, and $\beta$ and $\alpha$ are adjustable parameters. Synergistic control between the two units was achieved through image-sensing and feedback mechanisms. When the output image brightness was weak, we increased the values of $\beta$ and $\alpha$; when the output image brightness was excessive, we lowered the values of $\beta$ and $\alpha$. This adjustment completed the multi-stage optimization of the output image brightness.
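The feedback adjustment described above can be sketched as a single control iteration. This is a minimal illustration, not the authors' implementation; the function name, the step size, and the target gray level are all hypothetical:

```python
def adjust_brightness(b0, l0, beta, alpha, mean_gray, target_gray, step=0.05):
    """One feedback iteration of the brightness control of Eq. (1).

    b0, l0: initial backlight and LCD brightness, so B = b0*beta, L = l0*alpha.
    mean_gray: mean gray level measured by the feedback sensor (hypothetical input).
    Returns the updated (beta, alpha) and the resulting (B, L).
    """
    if mean_gray < target_gray:      # output too dim: raise both factors
        beta, alpha = beta + step, alpha + step
    elif mean_gray > target_gray:    # output too bright: lower both factors
        beta, alpha = beta - step, alpha - step
    return beta, alpha, b0 * beta, l0 * alpha
```

Iterating this update until the measured mean gray level settles near the target mirrors the multi-stage optimization described in the text.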
The image content output from the attitude-and-orbit-dynamics simulation control computer was collimated by the optical system, to project positional and environmental information from dark/weak targets onto a specified light-sensitive surface. The output image content under different parameters could be optically mapped to achieve a high degree of simulation and effective visualization of a variety of dark/weak targets, thereby fulfilling the requirement to display a wide range of simulated dark/weak targets.

3. Energy Transfer Modeling and Analysis of Influencing Factors

3.1. Energy Transfer Model

The process of energy transfer from a pixel point on an LCD display panel to an image plane can be understood as a mapping relationship in which a pixel point $n_p$ on the LCD display is eventually mapped to a corresponding position $m_p$ on the image plane. On this basis, each pixel point on the LCD display panel can be regarded as a point light source, and a transfer function can be used to represent changes in the light ray during the propagation process. We therefore established an energy transfer model, as shown in Figure 2. We then calculated the light intensity received at different positions on the image plane using Equation (2), and finally obtained the light intensity on the image plane, $I(m_p)$.
$$I(m_p) = \frac{1}{f^2} \int_{m_p - p/2}^{m_p + p/2} E(\lambda)\, \alpha(n_p)\, L(n_p)\, \bigl(1 + \Delta L(n_p)\bigr)\, T(r_{n_p}, \lambda, f)\, dr \quad (2)$$
In Equation (2), $p$ is the pixel width, $E(\lambda)$ is the luminous intensity of the backlight source, $\alpha(n_p)$ is the modulation factor of the LCD display panel for the $n$th pixel, $L(n_p)$ is the transmittance function of the $n$th pixel of the LCD display panel, $\Delta L(n_p)$ is the variation in light leakage between pixels, $f$ is the focal length of the collimating optical system, $r$ represents the radial coordinate of the transmitted light rays, and $T(r_{n_p}, \lambda, f)$ is the transfer function of the light rays through the optical system.
With the energy transfer model established using the above equation, an integral value for light intensity at each position could be calculated. By repeating this process for all positions, the distribution of light intensity across the whole image plane could be obtained.
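The per-position evaluation of Equation (2) can be sketched numerically as follows. All the component functions here are hypothetical stand-ins (the real $E$, $\alpha$, $L$, $\Delta L$, and $T$ depend on the hardware); the pixel width and focal length default to the experimental values given later in the paper, and the integral is approximated by a simple midpoint rule over the pixel width:

```python
import numpy as np

# Hypothetical component functions for Eq. (2)
E = lambda lam: 1.0                          # backlight luminous intensity
alpha_mod = lambda n: 0.9                    # LCD modulation factor of pixel n
L_trans = lambda n: 0.8                      # pixel transmittance
dL = lambda n: 0.02                          # light-leakage variation between pixels
T = lambda r, lam, f: np.exp(-(r / f) ** 2)  # assumed Gaussian-like transfer function

def intensity(m_p, n_p, p=0.014, f=238.19, lam=0.55, samples=101):
    """Approximate Eq. (2): integrate over one pixel width p centred at m_p."""
    r = np.linspace(m_p - p / 2, m_p + p / 2, samples)
    integrand = E(lam) * alpha_mod(n_p) * L_trans(n_p) * (1 + dL(n_p)) * T(r, lam, f)
    return integrand.mean() * p / f ** 2     # mean value * width = integral estimate
```

Repeating this evaluation for every position on the image plane, as the text describes, yields the full light-intensity distribution.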

3.2. Analysis of Influencing Factors

From the above description of the energy transfer model, it can be understood that the light intensity received at each position on the image plane is affected by a variety of factors, such as the luminous intensity of the backlight source, the intensity of the light transmitted by each pixel in the LCD display panel, and the efficiency of the optical system in transmitting light. However, due to uneven irradiation in the backlight source, leakage of light from the LCD display panel [16], and vignetting of the collimating optical system, the image-plane energy of the dark-and-weak-target simulator is not uniform. In addition, the transfer process can also be affected by variations in temperature. However, the operating temperature of the dark-and-weak-target simulator can be controlled by physical cooling, to ensure that it operates properly at operating and storage temperatures.
In summary, once the light source and collimating optical system are determined, the luminous intensity of the light source and the transfer function of the light passing through the optical system are also uniquely determined, so the image-plane energy of the dark-and-weak-target simulator can be adjusted and controlled using the LCD display panel.

4. Adaptive Compensation Algorithm Based on Pixel-by-Pixel Fitting

For the present study, we introduced an adaptive compensation algorithm based on pixel-by-pixel fitting. Provided that the uniformity of the light source satisfied our study requirements, the algorithm was able to use sensors to acquire output images with different gray levels; these were displayed by the dark-and-weak-target simulator. The response error surface of the dark-and-weak-target simulator could then be calculated, based on the acquired images. The brightness of each pixel of the dark-and-weak-target simulator was adaptively compensated and adjusted using the error surface, and the compensated output image was then acquired. Finally, grayscale standard display function (SDF) values [17] in the captured images before and after compensation were calculated, so that any improvement in uniformity in the light intensity distributed on the image surface of the dark-and-weak-target simulator could be evaluated. The overall compensation process is set out in flowchart form in Figure 3.
1. Initial working conditions were established, the uniformity of the light source was measured, and images were captured. It is worth noting that the specific region in which a sensor can ultimately detect a dark-and-weak-target simulator depends on the field of view $\omega_1$ of the simulator and the field of view $\omega_2$ of the sensor, and these are usually not the same. In order to ensure the complete detection of the dark-and-weak-target simulator by the sensor, the field-of-view angle of the sensor should be greater than or equal to that of the simulator, i.e., $\omega_2 \ge \omega_1$. This may be demonstrated using Equation (3), as follows:
$$\omega_1 = 2\arctan\left(\frac{h_1}{f_1}\right) = 2\arctan\left(\frac{\sqrt{(np_1)^2 + (mp_1)^2}}{2f_1}\right), \qquad \omega_2 = 2\arctan\left(\frac{h_2}{f_2}\right) = 2\arctan\left(\frac{\sqrt{(np_2)^2 + (mp_2)^2}}{2f_2}\right) \quad (3)$$
where $f_1$ is the focal length of the dark-and-weak-target simulator, $f_2$ is the focal length of the sensor, $h_1$ is the image height of the dark-and-weak-target simulator, $h_2$ is the image height of the sensor, $p_1$ is the pixel size of the dark-and-weak-target simulator, $p_2$ is the pixel size of the sensor, $n$ is the number of transverse pixels, and $m$ is the number of longitudinal pixels. Thus, by calculating the field-of-view angles of the simulator and the sensor, the percentage of the simulator's field of view that can be detected by the sensor may be identified. In other words, by detecting localized areas within the imaging range of the dark-and-weak-target simulator and using these as a microcosm of overall system performance, the distribution and spread of dark-and-weak-target simulator imaging over the detection area of the sensor can be accurately described.
The uniformity of the backlight source can be evaluated by measuring the irradiance inhomogeneity of the backlight source using Equation (4), as follows:
$$\varepsilon = \frac{\Delta E_i}{\bar{E}_i} = \pm\frac{E_{i\max} - E_{i\min}}{E_{i\max} + E_{i\min}} \times 100\% \quad (4)$$
where $E_{i\max}$ is the maximum value of irradiance within the irradiated surface, $E_{i\min}$ is the minimum value of irradiance within the irradiated surface, and $\bar{E}_i$ is the average irradiance. When irradiance inhomogeneity in the backlight source is better than ±10%, as specified in the CIE standard [18], it can be assumed that the impact of the inhomogeneity of the backlight source on the system is controlled within an acceptable range.
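The two checks of step 1 (the field-of-view condition of Equation (3) and the irradiance criterion of Equation (4)) can be sketched together. The pixel counts and irradiance readings below are hypothetical; the paper specifies only the pixel sizes and focal lengths used in the experiment:

```python
import math

def field_of_view_deg(n, m, p, f):
    """Diagonal field of view from Eq. (3): 2*arctan(half image diagonal / f)."""
    half_diag = math.sqrt((n * p) ** 2 + (m * p) ** 2) / 2
    return 2 * math.degrees(math.atan(half_diag / f))

def irradiance_inhomogeneity(readings):
    """Eq. (4): (Emax - Emin) / (Emax + Emin) * 100, in percent."""
    e_max, e_min = max(readings), min(readings)
    return (e_max - e_min) / (e_max + e_min) * 100.0

# Full-detection condition omega_2 >= omega_1; pixel counts n, m are hypothetical
omega_1 = field_of_view_deg(1280, 1024, 0.014, 238.19)   # simulator (p in mm, f in mm)
omega_2 = field_of_view_deg(2448, 2048, 0.0074, 111.12)  # sensor
assert omega_2 >= omega_1

# CIE criterion: inhomogeneity better than +/-10% (readings hypothetical)
assert irradiance_inhomogeneity([102.1, 99.8, 98.7, 100.4]) <= 10.0
```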
2. Error fitting was performed while ensuring that the uniformity of the backlight source met the study requirements. We let the dark-and-weak-target simulator output 256 images in a grayscale range from 0 to 255, denoting the initial grayscale matrix of the $i$th output image as $G_i(x, y)$ and the grayscale of each pixel as $g_{xy}$ (with $x$ and $y$ being the horizontal and vertical coordinates, respectively, of each pixel point), so that
$$G_i = \begin{pmatrix} g_{11} & \cdots & g_{1y} & \cdots & g_{1n} \\ \vdots & & \vdots & & \vdots \\ g_{x1} & \cdots & g_{xy} & \cdots & g_{xn} \\ \vdots & & \vdots & & \vdots \\ g_{m1} & \cdots & g_{my} & \cdots & g_{mn} \end{pmatrix} \quad (5)$$
The grayscale matrix of the $i$th captured image of the sensor was then denoted as $S_i(x, y)$, and the grayscale value of each pixel as $s_{xy}$, so that
$$S_i = \begin{pmatrix} s_{11} & \cdots & s_{1y} & \cdots & s_{1n} \\ \vdots & & \vdots & & \vdots \\ s_{x1} & \cdots & s_{xy} & \cdots & s_{xn} \\ \vdots & & \vdots & & \vdots \\ s_{m1} & \cdots & s_{my} & \cdots & s_{mn} \end{pmatrix} \quad (6)$$
Next, we calculated the grayscale mean of matrix $S_i(x, y)$ to obtain the grayscale mean matrix of the $i$th captured image. We denoted this as $A_i(x, y)$, with each element equal to the mean grayscale $a_i$, so that
$$A_i = \begin{pmatrix} a_i & \cdots & a_i \\ \vdots & \ddots & \vdots \\ a_i & \cdots & a_i \end{pmatrix} \quad (7)$$
$$a_i = \frac{1}{mn} \sum_{x=1}^{m} \sum_{y=1}^{n} S_i(x, y) \quad (8)$$
We then calculated and superimposed the mean value of the difference between the grayscale matrix of the captured image and the grayscale mean matrix. This value was denoted as $\mu_i$, as shown in Equation (9):
$$\mu_i = \frac{1}{i} \sum_{k=1}^{i} \bigl(S_k - A_k\bigr) \quad (i = 1, 2, \ldots, n) \quad (9)$$
It should be noted that, when $S_i - A_i < 0$, the actual response of the pixel is lower than the ideal average response. In such an event, the grayscale of the pixel should be brightened, to make it closer to the ideal average grayscale distribution. If $\nabla_i$ is then taken to be the relative-difference matrix between the $i$th superposition and the $(i-1)$th superposition, and $\nabla_{i\max}$ the maximum element of the relative-difference matrix, then
$$\nabla_i = \mu_i - \mu_{i-1}, \qquad \nabla_{i\max} = \max(\nabla_i) \quad (10)$$
A contrast-sensitivity function curve for the human eye reveals that the luminance threshold for visual sensitivity in the human eye is about 1.2% [19]. However, the luminance-detection sensitivity of the sensor used in the present study was much higher than that of the human eye. Consequently, when $\nabla_{i\max}$ satisfied the conditions of Equation (11), it could be assumed that the sensor was unable to distinguish any difference in luminance, and iteration could be terminated.
$$\nabla_{i\max} \le 0.12\%, \qquad \nabla_{(i+2)\max} \le \nabla_{(i+1)\max} \le \nabla_{i\max} \quad (11)$$
Finally, we denoted the difference-mean matrix $\mu_{i+2}$ obtained from the $(i+2)$th cycle as the response error matrix of the dark-and-weak-target simulator, $P(x, y)$:
$$P(x, y) = \mu_{i+2} \quad (12)$$
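The iterative error fitting of Equations (7)-(11) can be sketched as a single loop over the captured frames. This is a minimal sketch: the stopping threshold is expressed here in the same units as the accumulated gray differences, whereas the paper states it as a relative percentage (0.12%), and the function name is hypothetical:

```python
import numpy as np

def fit_error_surface(captured, threshold=0.12):
    """Iteratively fit the response error matrix P(x, y) from captured frames.

    captured: list of sensor grayscale images S_i (2-D arrays), one per gray level.
    Accumulates the running mean mu_i of (S_i - A_i), where A_i is the per-frame
    mean (Eqs. (8)-(9)), and stops when the largest change between successive
    mu_i (Eq. (10)) falls below `threshold` (Eq. (11)).
    """
    acc = np.zeros_like(captured[0], dtype=float)
    mu_prev = None
    mu = acc
    for i, s in enumerate(captured, start=1):
        acc += s - s.mean()                 # S_i - A_i, summed over frames
        mu = acc / i                        # running mean mu_i
        if mu_prev is not None:
            grad_max = np.max(np.abs(mu - mu_prev))   # nabla_i,max
            if grad_max <= threshold:
                return mu                   # response error surface P(x, y)
        mu_prev = mu
    return mu
```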
To determine the uniformity compensation for pixel brightness in the dark-and-weak-target simulator, we obtained the compensated image $G'_i(x, y)$ by subtracting the product of the error matrix $P(x, y)$ and the output-image matrix $G_i(x, y)$ from the output-image matrix $G_i(x, y)$ of the simulator, as shown in Equation (13):
$$G'_i(x, y) = G_i(x, y) - G_i(x, y)\, P(x, y) \quad (13)$$
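Applying Equation (13) to an 8-bit output image can be sketched as follows. The clipping and rounding to the 0-255 display range are our assumptions, not stated in the paper:

```python
import numpy as np

def compensate(G, P):
    """Apply Eq. (13): G' = G - G * P, i.e. G * (1 - P).

    G: output image of the simulator (gray levels 0-255).
    P: response error matrix, with positive entries marking pixels that
       respond too strongly (assumed normalized to the image gray scale).
    The result is rounded and clipped to the valid 8-bit range (our assumption).
    """
    Gc = G.astype(float) * (1.0 - P)
    return np.clip(np.rint(Gc), 0, 255).astype(np.uint8)
```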
Having acquired a second, compensated image, we were able to bring both the initial and compensated images into Equation (14). The grayscale SDF of the acquired image before and after compensation was then calculated, in order to analyze any improvement in the uniformity of light-intensity distribution on the image surface of the dark-and-weak-target simulator, before and after compensation, so that
$$std = \sqrt{\frac{1}{MN} \sum_{x=1}^{M} \sum_{y=1}^{N} \bigl(S_i(x, y) - \bar{S}_i\bigr)^2} \quad (14)$$
where $M$ and $N$ are the numbers of rows and columns, respectively, in the image, $S_i(x, y)$ is the grayscale value of the captured image before and after compensation at the coordinates of point $(x, y)$, and $\bar{S}_i$ is the average grayscale value of the image. The lower the grayscale SDF value, the more concentrated and uniform the grayscale distribution in the image; conversely, a higher value means that the grayscale distribution is more scattered and uneven. Because the algorithm was designed to fit the combined effect of the various influencing factors into a unified response error surface, this surface became the 'total factor' affecting the uniformity of the picture, and it was used to inversely compensate the dark-and-weak-target simulator. Any change in distribution before and after compensation could therefore be evaluated using the SDF, which indirectly reflects the elimination of the combined effect of the various influencing factors.
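The evaluation metric of Equation (14) is the standard deviation of the image gray levels and can be sketched directly:

```python
import numpy as np

def grayscale_sdf(img):
    """Grayscale SDF of Eq. (14): standard deviation of the image gray levels.

    A lower value indicates a more concentrated, uniform gray distribution.
    """
    img = img.astype(float)
    return np.sqrt(np.mean((img - img.mean()) ** 2))
```

Comparing this value for the images captured before and after compensation quantifies the improvement in image-plane uniformity.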
Thus, by using the image pixel distribution to reflect the characteristics of the light-intensity distribution, using the compensation algorithm to obtain the response error surface, and using that surface to inversely compensate the dark-and-weak-target simulator image, an image with a uniform grayscale distribution can be obtained, including in the boundary region. Distortion and nonuniformity are thereby alleviated, so that the grayscale distribution in the boundary region is smoother and more centralized. The algorithm does not need to accurately model the complex combination of influencing factors; it only needs a sufficient number of sampled images, which simplifies processing. By appropriately expanding the sampling region to cover the boundary of the field of view, energy compensation over the whole image surface of the dark-and-weak-target simulator is ultimately achieved.

5. Experimental Validation and Results

To verify the feasibility of the compensation method, and as shown in Figure 4, a test environment was constructed by placing the LCD with a backlight source inside the head of the dark-and-weak-target simulator. The LCD panel displaying the image was controlled by the display driver circuit. Specifically, an Epson L3P14U-51G00R LCD display panel was used. For the sensor, a FLIR Blackfly S BFS-PGE-50S5M-C camera was used, with an exposure time of $1.36119 \times 10^{6}\ \mu s$ and an exposure gain of 2 dB.

5.1. Establishment of Initial Conditions

The initial conditions were set as follows. Since the sensor needs to be calibrated on the ground under dark conditions without stray-light interference, a dark-room environment was built to isolate all ambient stray light apart from the experimental instruments and equipment. At the same time, the simulator pixel size was $p_1 = 0.014$ mm, its optical system focal length $f_1 = 238.19$ mm, and its field of view $\omega_1 = 7.4°$; the sensor pixel size was $p_2 = 0.0074$ mm, its optical system focal length $f_2 = 111.12$ mm, and its field of view $\omega_2 = 24°$. These conditions satisfied the requirement that $\omega_2 \ge \omega_1$, ensuring that the sensor completely detected the imaging area of the dark-and-weak-target simulator. Irradiation inhomogeneity in the backlight source was then measured to determine whether it met the uniformity requirements of the dark-and-weak-target simulator with respect to the backlight source.
With the backlight source activated, we placed the irradiance meter on the irradiated surface of the light source. We adjusted the current of the backlight power supply so that it reached a maximum value when the light-sensitive surface was facing the backlight source. When the light source had stabilized, the irradiance meter was used to take rapid measurements at 16 different positions on the surface of the light source, to eliminate any errors arising from instability in the light intensity. As a side-entry backlight panel was used, with the LED beads placed along the upper edge of the panel, horizontal irradiance did not change much, while vertical irradiance decreased incrementally due to light attenuation along the propagation path. Figure 5 shows a schematic diagram of the method used for testing irradiance inhomogeneity. Figure 6 shows the irradiance meter used.
The results of the irradiation inhomogeneity testing are shown in Table 1. Irradiation inhomogeneity was calculated by bringing the test results into Equation (4); these data were normalized and plotted to show variations in illuminance corresponding to different positions on the irradiated surface, as shown in Figure 7.
It can be seen that the relative illuminance of the backlight source decreases incrementally. This result was mainly due to the fact that the LED was placed on the side of the backlight source, and light attenuation occurred during the transmission process. The irradiance inhomogeneity of the backlight source was found to be 3.62%, and thus satisfied the requirements of the CIE standard for irradiance inhomogeneity, being better than ±10%.

5.2. Error Fitting

Dark-field correction was used to compensate for the dark current of the sensor, and normal operation of the LCD at operating and storage temperatures was ensured by hardware cooling under constant-temperature conditions. The dark-and-weak-target simulator then output a total of 256 images $G_i(x, y)$ covering grayscale levels 0 to 255, and the sensor acquired them one by one to obtain $S_i(x, y)$. The grayscale SDF curve of the initially acquired images is shown in Figure 8, in which it can be seen that, as the gray value of the image increases, the grayscale SDF value becomes larger and the image-plane energy more uneven. Initial-acquisition images with grayscale values of 0, 109, and 255 are shown in Figure 9, in which a gradual weakening trend from the center to the edges is clearly evident, along with irregularly bright areas at individual locations.
All the initial acquisitions were then brought into the compensation algorithm, the differences between each image and the mean image were calculated, and the cycle was superimposed. At the 245th superposition, $\nabla_{245\max} = 0.119\%$, $\nabla_{246\max} = 0.118\%$, and $\nabla_{247\max} = 0.118\%$. This satisfied the threshold condition and terminated the superposition, and $\mu_{i+2}$ was therefore taken as the compensation surface $P(x, y)$. These results allowed us to determine that the surface had eliminated the luminance differences. The response error surface of the dark-and-weak-target simulator is shown in Figure 10.
In Figure 10, it can be seen that the obtained response error surface is not flat and exhibits a trend similar to a Gaussian distribution; this may be explained by the fact that vignetting of an optical system is itself characterized by a Gaussian-like distribution, while light leakage in LCD display panels is characterized by a random distribution.

5.3. Uniformity Compensation

The response error surface was used to compensate the output image of the dark-and-weak-target simulator to obtain $G'_i(x, y)$, and a secondary acquisition was performed to obtain a grayscale SDF comparison curve for the acquired images before and after compensation, as shown in Figure 11. It can be seen in the figure that there is a significant difference in the grayscale SDF values of the acquired images before and after compensation, with the SDF values of the images acquired before compensation continuing to rise with increases in grayscale value, finally reaching a high level. After compensation, the overall reduction in the grayscale SDF values is about 50%, and the curve tends to be flat. Figure 12 shows a histogram of gray levels of a captured image with a grayscale value of 200, before and after compensation. It can be seen that, overall, the pixels of the captured image converge to an ideal linear response relationship after compensation, and differences between the pixel points with respect to luminescence response are largely eliminated.

5.4. Discussion

It can be seen that the grayscale SDF curve of the captured image after compensation is significantly lower than the grayscale SDF curve of the initially captured image. The difference is not significant at grayscale values below 109 because less light is transmitted from the LCD display panel in this grayscale range, and image-plane irradiance does not change significantly. However, when the grayscale value reaches 109, the influence of various factors on the energy uniformity of the image surface gradually becomes larger, so that the compensation effect becomes obvious, and the two curves begin to exhibit significant differences. The grayscale SDF curve of the initially captured image shows a steady increase, reaching a maximum value of 3.3% at a grayscale value of 255, while the grayscale SDF curve of the compensated captured image reaches a peak value of 1.42% at a grayscale value of 120 and then starts to decrease, falling to 0.49% at a grayscale value of 255. Additionally, Figure 12 clearly shows that the compensation algorithm was able to accurately concentrate and correct an original grayscale distribution which was severely deviated and scattered, and thus greatly improved the uniformity of the image surface energy.
Overall, by using an adaptive compensation algorithm to compensate the output image of the dark-and-weak-target simulator, the grayscale SDF value of the sensor-acquired image was reduced by about 50% overall, and the image energy distribution converged to the ideal linear response relationship. At the same time, compared with traditional methods, this method is computationally simpler, achieves an obvious compensation effect, and does not need to rely on additional information.

6. Conclusions and Outlook

In the study reported in this paper, we used an adaptive compensation algorithm based on pixel-by-pixel fitting to effectively deal with the full-field-of-view energy inhomogeneity problem which affects dark-and-weak-target simulators as a result of light-source nonuniformity, the leakage of light from LCD display panels, and the vignetting of collimating optical systems. The experimental results proved that, after compensation, the grayscale SDF curve of the image was reduced by about 50% in general, which effectively eliminated the effects of uneven light sources, light leakage from the LCD display panel, and gradual vignetting of the optical system. Using our method, the uniformity of the energy of the image surface was greatly improved, and this may be seen as laying a foundation for high-quality image processing and feature extraction.
Researchers may now seek to optimize light distribution at the hardware level by dividing light sources into zones and regulating light for different regions. Image uniformity might be further improved by studying stationary-point image correction algorithms, as well as by image processing at the software level. Indeed, comprehensive zonal regulation of the light source and stationary-point image correction are two methods which are expected to deliver further control and correction of full-field-of-view energy distribution, and thus address still more comprehensively the problem of full-field-of-view energy nonuniformity which affects dark-and-weak-target simulators.

Author Contributions

Conceptualization, F.Q. and S.L.; methodology, F.Q.; software, B.Z.; validation, F.Q. and J.Z. (Jian Zhang); formal analysis, B.Z. and G.S.; investigation, J.Z. (Jian Zhang) and J.Z. (Jierui Zhang); resources, G.S. and J.Z. (Jierui Zhang); data curation, B.Z. and F.Q.; writing—original draft preparation, F.Q. and S.L.; writing—review and editing, J.Z. (Jian Zhang) and G.S.; visualization, S.L. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 61703057; the Program for Innovative and Entrepreneurial Talents in Jilin Province of China, grant number 2023QN13; and the Jilin Province Science and Technology Development Plan Project of China, grant number 20210201034GX.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data in this study have been approved for publication.

Acknowledgments

We would like to express our gratitude for the support and help received throughout the process of completing the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Shen, C.; Ma, C.; Gao, W.; Wang, Y. Hybrid Grid Pattern Star Identification Algorithm Based on Multi-Calibration Star Verification. Sensors 2024, 24, 1661.
  2. Wu, L.; Xu, Q.; Wang, H.; Lyu, H.; Li, K. Guide Star Selection for the Three-FOV Daytime Star Sensor. Sensors 2019, 19, 1457.
  3. Gong, J.; Wang, C.; Gong, C.; Liang, F.; Ma, J. Study on the uniformity of high concentration photovoltaic system with array algorithm. Sol. Energy 2017, 153, 181–187.
  4. Zhong, Z.; Sui, Z.; Zhang, B.; Zhang, X. Improvement of irradiation uniformity by dynamic interference structures of laser array in inertial-confinement-fusion facilities. Opt. Commun. 2020, 455, 124558.
  5. Wang, L. Bispectral panoramic corona detection optical system. J. Photonics 2010, 39, 1770–1774.
  6. Li, B.; Feng, S.; Mao, J. Image plane illumination uniformity of an online image-visible iron spectrum imaging system. Opt. Precis. Eng. 2016, 24, 2347–2356.
  7. Zheng, R.; Li, C.; Gao, Y.; Li, G.; Liu, B.; Sun, K. Design of an infrared star simulator optical system with uniform radiation at a specific irradiance. J. Photonics 2022, 51, 24–32.
  8. Leong, F.J.W.-M.; Brady, M.; McGee, J.O. Correction of uneven illumination (vignetting) in digital microscopy images. J. Clin. Pathol. 2003, 56, 619–621.
  9. Kim, S.J.; Pollefeys, M. Robust Radiometric Calibration and Vignetting Correction. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 562–576.
  10. Goldman, D.B. Vignette and Exposure Calibration and Compensation. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 2276–2288.
  11. Zheng, Y.; Lin, S.; Kang, S.B. Single-Image Vignetting Correction. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), New York, NY, USA, 17–22 June 2006; pp. 461–468.
  12. Kim, Y.-N.; Sim, D.-G. Vignetting and Illumination Compensation for Omni-Directional Image Generation on Spherical Coordinate. In Proceedings of the 16th International Conference on Artificial Reality and Telexistence (ICAT 2006) Workshops, Hangzhou, China, 29 November–1 December 2006; IEEE: New York, NY, USA, 2006.
  13. Yu, W.; Chung, Y.; Soh, J. Vignetting distortion correction method for high quality digital imaging. In Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), Cambridge, UK, 26 August 2004; Volume 3, pp. 666–669.
  14. Lin, S.; Gu, J.; Yamazaki, S.; Shum, H.-Y. Radiometric calibration from a single image. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), Washington, DC, USA, 27 June–2 July 2004; p. II.
  15. Kordecki, A.; Palus, H.; Bal, A. Practical vignetting correction method for digital camera with measurement of surface luminance distribution. Signal Image Video Process. 2016, 10, 1417–1424.
  16. Li, W.-C.; Tsai, D.-M. Defect Inspection in Low-Contrast LCD Images Using Hough Transform-Based Nonstationary Line Detection. IEEE Trans. Ind. Inform. 2009, 7, 136–147.
  17. Liu, R.; Wang, D.; Jia, P.; Che, X. Gradient sky scene based nonuniformity correction and local weighted filter based denoising. Optik 2018, 174, 748–756.
  18. Liu, S. Research on High-Precision Collimated Solar Simulator and Its Key Technology; Changchun University of Science and Technology: Jilin, China, 2014.
  19. Wang, S.-C.; Zheng, X.-F.; Mao, X.-Y.; Cheng, H.-B.; Chen, Y. Vignetting compensation during camera acquisition for LED display. Liq. Cryst. Disp. 2019, 34, 778–786.
Figure 1. Composition of the dark-and-weak-target simulator.
Figure 2. A model for energy transfer in dark-and-weak-target simulators.
Figure 3. Flowchart of the energy compensation process for dark/weak targets.
Figure 4. Test environment.
Figure 5. Numbering of sample positions on irradiated surfaces.
Figure 6. Irradiance meter.
Figure 7. Backlight source irradiance distribution surface.
Figure 8. Grayscale SDF curves for initial-acquisition images.
Figure 9. Initial-acquisition maps with different shades of gray.
Figure 10. Response error surface.
Figure 11. Grayscale SDF curves for initial image and compensated image.
Figure 12. Histogram of 200-grayscale image before and after compensation.
Table 1. Irradiation inhomogeneity test results.

Test Point | Irradiance (µW/cm²) | Test Point | Irradiance (µW/cm²)
1 | 50.9 | 9 | 52.9
2 | 51.1 | 10 | 52.7
3 | 51.3 | 11 | 53.6
4 | 50.6 | 12 | 53.7
5 | 51.2 | 13 | 54.3
6 | 51.3 | 14 | 54.4
7 | 51.6 | 15 | 54.1
8 | 51.7 | 16 | 54.2
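The uniformity of the backlight source can be summarized from the 16 sample points in Table 1. As an illustrative sketch (not the paper's stated procedure), the following snippet computes the irradiation nonuniformity using the common (E_max − E_min)/(E_max + E_min) metric for simulator uniformity; the choice of this formula is an assumption here.

```python
# Irradiance readings (µW/cm²) for the 16 sample points listed in Table 1.
irradiance = [50.9, 51.1, 51.3, 50.6, 51.2, 51.3, 51.6, 51.7,
              52.9, 52.7, 53.6, 53.7, 54.3, 54.4, 54.1, 54.2]

e_max, e_min = max(irradiance), min(irradiance)
# Assumed uniformity metric: (E_max - E_min) / (E_max + E_min).
nonuniformity = (e_max - e_min) / (e_max + e_min)
print(f"Irradiation nonuniformity: {nonuniformity:.2%}")  # ≈ 3.62%
```

The readings cluster between about 50.6 and 54.4 µW/cm², so the nonuniformity under this metric is a few percent, which is the residual error the compensation algorithm targets.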
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Quan, F.; Liu, S.; Sun, G.; Zhang, J.; Zhang, Y.; Zhao, B.; Zhang, J. A Compensation Method for Full-Field-of-View Energy Nonuniformity in Dark-and-Weak-Target Simulators. Sensors 2024, 24, 4147. https://doi.org/10.3390/s24134147


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
