Communication

Illumination Calibration for Computational Ghost Imaging

School of Instrumentation and Opto-Electronic Engineering, Beihang University, Beijing 100191, China
* Author to whom correspondence should be addressed.
Photonics 2021, 8(2), 59; https://doi.org/10.3390/photonics8020059
Submission received: 16 January 2021 / Revised: 9 February 2021 / Accepted: 21 February 2021 / Published: 22 February 2021
(This article belongs to the Special Issue Smart Pixels and Imaging)

Abstract

We propose a fast calibration method to compensate for non-uniform illumination in computational ghost imaging. Inspired by the procedure used to calibrate pixel-response differences of detector arrays in conventional digital cameras, the proposed method acquires one image of an all-white paper to determine the non-uniformity of the illumination and uses this information to calibrate any further images reconstructed under the same illumination. The numerical and experimental results are in good agreement, and the experiments show that the root mean square error of the reconstructed image was reduced by 79.94% after calibration.

1. Introduction

Over the past two decades, ghost imaging has been one of the most rapidly developing computational imaging schemes [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21]. Ghost imaging reconstructs images by illuminating an object with a series of varying light intensity distributions and correlating these distributions with the corresponding total light intensities measured by a bucket detector [2]. In a standard pseudothermal two-detector ghost imaging scheme [2,3,4,5], the light intensity distributions are usually obtained by a scanning single-pixel detector or a detector array. With the development of micro-opto-electromechanical systems, computational ghost imaging (CGI) was proposed [7]. In CGI, the intensity distributions are generated by illuminating a spatial light modulator (SLM) displaying programmable masks. The image is then reconstructed by correlating the calculated intensity distributions of the masks at the object plane with their corresponding light intensities measured by a single-pixel detector. CGI significantly simplifies ghost imaging systems and reduces acquisition time by calculating the intensity distributions numerically rather than measuring them experimentally. However, differences inevitably exist between the numerical calculations and the experimental measurements, and one major cause of such differences is the non-uniformity of illumination sources [20,21,22,23,24].
To enhance the quality of an image degraded by non-uniform illumination, the Retinex algorithm [22,23] is commonly used. It is well known that an image is the pixelwise multiplication of an illumination component and a reflectance component. The Retinex algorithm estimates the uneven illumination component, extracts it from the image, and then normalizes the result. An effective method for estimating the illumination component is therefore the key to calibrating non-uniform illumination under Retinex theory [24]. The Retinex algorithm posits that the illumination component is a smoothed version of the degraded image. Several techniques have been reported in the literature [24,25], such as illumination estimation algorithms based on a filtering strategy [24,26,27,28], PDE-based Retinex methods, in which the illumination is obtained by solving a partial differential equation [29,30], and variational-model-based Retinex methods [25,31]. These algorithms are collectively called retrospective calibration: an a posteriori calibration applied after the acquisition. However, their calibration accuracy is unsatisfactory [24], because the illumination component is only estimated rather than measured.
At present, the imaging performance of CGI is nowhere near that of image sensors using detector arrays, especially silicon-based charge-coupled devices and complementary metal oxide semiconductors [32,33]. Besides the intensive investment in the research and manufacture of detector arrays driven by global market demand, sophisticated calibration procedures, such as dark current noise suppression and pixel-response non-uniformity compensation, are applied before these image sensors are put into actual use [34,35,36,37,38,39]. Therefore, it would be beneficial to investigate the same concept to improve the image quality of CGI.
In this work, we propose a calibration method to compensate for the non-uniformity of illumination in CGI. The proposed calibration acquires the knowledge of illumination non-uniformity by reconstructing an image of an all-white paper. The knowledge is then used to calibrate further reconstructed images and to improve their image qualities. Theoretical analysis and experimental results indicated that the proposed method is feasible. Specifically, the root mean square error (RMSE) of the experimentally reconstructed image was reduced by 79.94%, from 0.2618 to 0.0525, after the non-uniformity was compensated for using the proposed method.

2. Theory

2.1. The Principle of CGI

The scheme of CGI is shown in Figure 1. Beams from the laser source are modulated by an SLM, which is controlled by the computer to generate a series of binary patterns and provide structured illumination. An imaging lens projects the patterns onto the object, forming the conjugation between the SLM and the object. A collection lens and a bucket detector are used to collect the measured signal. The signal is then transferred to the computer for reconstruction via a high-speed analogue-to-digital converter.
In CGI, the measured signal $S_i$ is the illuminating light intensity distribution $I_L$ modulated by the SLM mask pattern $P_i$ and transmitted or reflected by the pixelated object $I_o$; i.e.,

$$S_i = K \iint I_L(x,y)\, P_i(x,y)\, I_o(x,y)\, \mathrm{d}x\, \mathrm{d}y, \tag{1}$$
where $K$ is a scaling constant, and $x$ and $y$ refer to the spatial coordinates in the transverse plane. After many measurements, the reconstructed image $I_r$ can be calculated using the knowledge of $S_i$ and $P_i$ [7,9]. If the patterns form an orthonormal basis, then an $N$-pixel object can be completely sampled with $N$ measurements. The reconstructed image can be obtained by using [13,14,15,16,17,18,19]

$$I_r = \sum_{i=1}^{N} S_i \cdot P_i. \tag{2}$$
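As a concrete illustration (a sketch, not the authors' code), Equations (1) and (2) can be simulated with a complete ±1 Hadamard basis; in experiments such patterns are realized as binary pairs with differential measurement [11]. All variable names below are hypothetical, and the illumination is taken as uniform for now.

```python
import numpy as np
from scipy.linalg import hadamard

n = 8                                # image is n x n, so N = n * n pixels
N = n * n
H = hadamard(N)                      # rows P_i form an orthogonal +/-1 basis

rng = np.random.default_rng(0)
I_o = rng.random(N)                  # hypothetical object, flattened to N pixels
I_L = np.ones(N)                     # uniform illumination for this sketch

S = H @ (I_L * I_o)                  # Eq. (1): one bucket value per pattern
I_r = (H.T @ S) / N                  # Eq. (2), normalized by the basis size

assert np.allclose(I_r, I_L * I_o)  # exact recovery with a complete basis
```

Because the rows of a Hadamard matrix satisfy $H^T H = N I$, the correlation sum recovers the illuminated object exactly once divided by $N$.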

2.2. The Calibration of Non-Uniform Illumination in CGI

In most CGI works, the illuminating light intensity $I_L$ is presumed to be uniformly distributed, treated as a constant, and therefore not included in Equation (1). In this work, however, we address the non-uniformity of the illumination intensity distribution $I_L(x,y)$, so it must be considered explicitly.
If the illumination were uniform, the reconstructed image $I_r$ would be strictly proportional to the object $I_o$. Here, by substituting Equation (1) into Equation (2), it can be shown that $I_r$ is not a scaled $I_o$, but rather a scaled pixelwise product of $I_o$ and $I_L$; i.e.,

$$I_r = \sum_{i=1}^{N} \left\{ \left( K \iint I_L(x,y)\, P_i(x,y)\, I_o(x,y)\, \mathrm{d}x\, \mathrm{d}y \right) \cdot P_i \right\} = K' \cdot I_L \cdot I_o, \tag{3}$$
where $K'$ is another scaling constant ensuring that Equation (3) holds. If the illuminating intensity distribution $I_L$ were known, an authentic image $I_c$ of the object could be obtained as

$$I_c = I_r / I_L = K' \cdot I_o. \tag{4}$$
To retrieve the non-uniform illuminating intensity distribution $I_L$, the easiest way is to use an object $I_o$ with a constant reflectivity $R$, such as a sheet of white paper. It is worth mentioning that a white paper is, in general, not perfectly uniform in its reflectivity. However, due to the quasi-Lambertian nature of the sheet of paper, combined with the fact that the half angular size of the paper with respect to the detection point is small (7° in the experiment), the sheet of white paper was treated as having a constant reflectivity $R$ in this work. Consequently, the image $I_{WP}$ of the white paper is proportional to $I_L$ as

$$I_{WP} = \sum_{i=1}^{N} \left\{ \left( K \iint I_L(x,y)\, P_i(x,y) \cdot R \cdot \mathrm{Ones}(x,y)\, \mathrm{d}x\, \mathrm{d}y \right) \cdot P_i \right\} = K'' \cdot I_L, \tag{5}$$
where $R$ and $K''$ are also scaling constants and $\mathrm{Ones}$ is an all-one matrix. The authentic image $I_c$ can then be obtained by

$$I_c = I_r / I_{WP} = (K'/K'') \cdot I_o. \tag{6}$$
It is worth mentioning that all scaling constants, $K$, $K'$, and $K''$, are irrelevant to the reconstructions and their image-quality evaluation, because unity normalization is performed on all reconstructions.
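The whole calibration pipeline of Equations (3), (5), and (6) reduces to a pixelwise division followed by normalization. A minimal sketch, assuming the reconstructions are already available and taking the scaling constants as unity:

```python
import numpy as np

rng = np.random.default_rng(1)
I_L = 0.2 + rng.random((128, 128))   # unknown non-uniform illumination
I_o = rng.random((128, 128))         # object reflectivity (ground truth)

I_r = I_L * I_o                      # Eq. (3): uncalibrated reconstruction (K' = 1)
I_WP = I_L                           # Eq. (5): white-paper image (K'' = 1)

I_c = I_r / I_WP                     # Eq. (6): pixelwise division
I_c = I_c / I_c.max()                # unity normalization, as in Section 2.2

assert np.allclose(I_c, I_o / I_o.max())   # illumination divided out exactly
```

The division cancels $I_L$ pixel by pixel, so the calibrated image depends only on the object, up to the overall scale removed by normalization.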

3. Simulation and Experiment

3.1. Numerical Simulation Results

Numerical simulation was performed to demonstrate the proposed calibration method; the procedure is shown in Figure 1. The illumination light distribution $I_L$ was set to be a Gaussian function, representing an expanded laser beam, as

$$I_L(x,y) = \alpha \frac{1}{2\pi\sigma} e^{-\frac{(x-\mu_1)^2 + (y-\mu_2)^2}{2\sigma}} + \beta, \tag{7}$$
where $\mu_1$ and $\mu_2$ are the means in the $x$ and $y$ dimensions, $\sigma$ is the variance, and $\alpha$ and $\beta$ are coefficients adjusting the relative value.
In the simulation, the parameters for the Gaussian illumination were $\mu_1 = 30$, $\mu_2 = 30$, $\sigma = 2000$, $\alpha = 0.0089$, and $\beta = 0.1$, so the illumination was decentered, as shown in Figure 2a. An alphabet with 128 × 128 pixel resolution was used as the object, as shown in Figure 2b. Hadamard patterns were used for modulation purposes [14,15,16,17,18,19]. Detector noise was added to the measured signals, and the averaged signal-to-noise ratio (SNR) of the measured signal was 46 dB. The uncalibrated reconstruction is shown in Figure 2c, exhibiting the influence of non-uniform illumination, such as missing letters at the bottom-right corner. After calibration by the proposed method, the image non-uniformity was significantly suppressed and the missing letters were recovered, as shown in Figure 2d, though with some noise. Since the proposed method calibrates the global error caused by non-uniform illumination, we chose RMSE as the main evaluation indicator. The RMSE was reduced by 39.21%, from 0.2548 to 0.0999.
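The illumination map of Equation (7) with the parameters above can be generated as follows. This is a sketch: the exponent normalization assumes $\sigma$ denotes the variance, and the 128 × 128 grid matches the stated object resolution.

```python
import numpy as np

mu1, mu2, sigma = 30, 30, 2000
alpha, beta = 0.0089, 0.1

y, x = np.mgrid[0:128, 0:128]                 # 128 x 128 pixel grid
I_L = alpha / (2 * np.pi * sigma) * np.exp(
    -((x - mu1) ** 2 + (y - mu2) ** 2) / (2 * sigma)) + beta

# The Gaussian peak is decentered at (30, 30), as in Figure 2a
assert np.unravel_index(I_L.argmax(), I_L.shape) == (30, 30)
```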

3.2. Experimental Results

An experiment was performed, as shown in Figure 3, using the same CGI system setup as in the numerical simulation. Beams from the laser source (Viasho VA-I_LNS-532, 532 ± 0.1 nm, 200 mW) were expanded and then modulated by a digital micromirror device (DMD, Texas Instruments V-7000, 1024 × 768, operating at 2 kHz). A camera lens (Nikon AF Nikkor, f = 35 mm, F = 1.8 G) imaged the DMD patterns onto the object. A single-pixel detector (Thorlabs PDA100A-EC, 320–1100 nm, operating at 20 dB) and a high-speed analogue-to-digital converter (ADC, PicoScope 6404D, operating at a 100 MS/s sampling rate and 500 MHz bandwidth) were used to measure the intensity signals and transfer them to the computer for reconstruction.
To calibrate the non-uniformity of the illumination, a sheet of white paper was used as the object, and its image, $I_{WP}$, was obtained using Equation (5); this image served as the estimate of the illumination distribution $I_L$. Hadamard patterns [14,15,16,17,18,19] with 128 × 128 pixel resolution and differential measurement [11] were used for the image reconstruction. As shown in Figure 4a, the laser-sourced illumination exhibited a Gaussian distribution, laser speckles, and other non-uniformity.
Under the same illumination condition and parameter configuration, a standard CGI experiment was performed with an alphabet object. The reconstructed image $I_r$ was influenced by the non-uniform illumination. As a result, some letters could not be distinguished, and some letters had speckles on them, as shown in Figure 4b. To eliminate the influence of the non-uniform illumination on the reconstructed image, the proposed calibration was performed using Equation (6) with the measured $I_{WP}$. The quality of the calibrated image $I_c$, shown in Figure 4c, was significantly improved, and all letters became distinguishable. The RMSEs, calculated against the ground truth, of the images before and after calibration were 0.2518 and 0.0525, respectively, indicating a 79.94% improvement by the proposed calibration method. Both the ground truth and the reconstructed images were normalized and aligned so that they had the same dynamic range and the same field of view. To demonstrate the whole procedure, the intensities (the normalized greyscale values) of the same line in Figure 4a–d are illustrated in Figure 4e. The intensity of the reconstructed image $I_r$ (red line) was enveloped by the non-uniform illumination $I_L$ (black dashed line). The intensity of the calibrated image $I_c$ (green line) is in good agreement with the ground truth (blue dotted line).
It is worth mentioning that the experiment contained two major non-uniform illumination scenarios: the global non-uniformity due to the laser's Gaussian distribution, and the local non-uniformity caused by laser speckles, such as those on letters "H" and "I" in Figure 4b.
For comparison, a retrospective calibration method was applied to calibrate the non-uniform illumination of Figure 4b. The retrospective calibration method performs calibration by estimating the non-uniform illumination with the assumption that the illumination distribution is smooth [24]. However, such an assumption is invalid for local non-uniformity, such as laser speckles. On the contrary, the calibration method proposed here calibrates both global and local non-uniform illumination. The comparison presented in Figure 5 shows that the laser speckles on letters “H” and “I” were not eliminated after retrospective Gaussian filtering calibration. The RMSE improved from 0.0689 to 0.0525 by using the proposed calibration method rather than the existing retrospective one.
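The retrospective baseline above can be sketched as follows: the illumination is estimated as a heavily blurred copy of the degraded image (the smoothness assumption of [24]) and then divided out. The filter width here is a hypothetical choice; because a smooth estimate cannot capture pixel-scale speckle, local artifacts survive this kind of calibration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
I_r = 0.5 + rng.random((128, 128))           # stand-in for the uncalibrated image

I_L_est = gaussian_filter(I_r, sigma=20)     # smooth illumination estimate
I_c_retro = I_r / np.maximum(I_L_est, 1e-6)  # retrospective calibration

assert I_c_retro.shape == (128, 128)
```

In contrast, the proposed method replaces `I_L_est` with the measured white-paper image, which contains the speckle structure and therefore removes it as well.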

4. Discussion

Interestingly, there are existing works [20,21] to eliminate the influence of non-uniform illumination in traditional ghost imaging schemes, i.e., ghost imaging with signal and reference paths. However, due to the nature of traditional ghost imaging, these works had to use charge-coupled devices to record the non-uniformity of the illumination by accumulating many frames of speckle patterns, which jeopardizes the real-time performance of a ghost imaging system.
It is worth noting that the noise became obvious in the corner areas after the calibration. This is because the SNR in these areas was low due to the weak illumination intensities; the proposed method only reduces the non-uniformity of the reconstructed image caused by the illumination and does not improve the SNR of the image. The SNR was calculated using the following equation:

$$\mathrm{SNR} = \frac{\langle I_f \rangle - \langle I_b \rangle}{(\sigma_f + \sigma_b)/2}, \tag{8}$$
where $\langle I_f \rangle$ and $\langle I_b \rangle$ are the average intensities of the image feature and background, respectively (here calculated from the data within the white part of the letter and the black part around the letter), and $\sigma_f$ and $\sigma_b$ are the standard deviations of the intensities in the feature and the background, respectively [40].
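Equation (8) can be computed directly from two pixel masks. The function below is a sketch with hypothetical names, where the masks select the white part of a letter and the black region around it:

```python
import numpy as np

def snr(image, feature_mask, background_mask):
    """Eq. (8): feature-background contrast, scaled by the average
    of the two standard deviations."""
    f = image[feature_mask]
    b = image[background_mask]
    return (f.mean() - b.mean()) / ((f.std() + b.std()) / 2)

# Tiny worked example: feature pixels {2, 4}, background pixels {0, 2};
# means are 3 and 1, both standard deviations are 1, so SNR = 2
img = np.array([2.0, 4.0, 0.0, 2.0])
fm = np.array([True, True, False, False])
assert snr(img, fm, ~fm) == 2.0
```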
The image-quality improvement yielded by the proposed calibration is fundamentally a global dynamic-range normalization of the reconstructed image, achieved by the pixelwise division in Equation (6). Importantly, the proposed illumination calibration method cannot improve the SNR of a local area in the reconstructed image. However, the calibrated images are more suitable for global observation and analysis.

5. Conclusions

In this work, an illumination calibration procedure was proposed to address the non-uniform illumination problem in computational ghost imaging. Without any extra device, the proposed procedure acquires one image of an all-white paper to determine the non-uniformity of the illumination and uses the acquired information to calibrate any further images reconstructed under the same illumination condition. Numerical and experimental results demonstrated that, without the proposed calibration, certain areas in the reconstructed images became indistinguishable and image information was missing due to the non-uniform illumination. The missing information was recovered after the proposed calibration, and the quality of the reconstructed images was improved by approximately 80%. The proposed calibration method can be applied to other ghost imaging techniques.

Author Contributions

Conceptualization, M.-J.S.; methodology, M.-J.S., W.C., and S.-M.Y.; validation, S.-M.Y.; writing—original draft preparation, S.-M.Y. and W.C.; writing—review and editing, M.-J.S. and L.-J.L.; project administration, M.-J.S.; funding acquisition, M.-J.S. All authors have read and agreed to the published version of the manuscript.

Funding

National Natural Science Foundation of China (Grant No. 61922011); Open Research Projects of Zhejiang Lab (Grant No. 2021MC0AB03).

Data Availability Statement

Data available upon request.

Acknowledgments

The authors gratefully acknowledge the financial support of the National Natural Science Foundation of China and Open Research Projects of Zhejiang Lab.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pittman, T.B.; Shih, Y.H.; Strekalov, D.V.; Sergienko, A.V. Optical imaging by means of two-photon quantum entanglement. Phys. Rev. A 1995, 52, R3429–R3432.
  2. Bennink, R.S.; Bentley, S.J.; Boyd, R.W. "Two-photon" coincidence imaging with a classical source. Phys. Rev. Lett. 2002, 89, 113601.
  3. Valencia, A.; Scarcelli, G.; D'Angelo, M.; Shih, Y.H. Two-photon imaging with thermal light. Phys. Rev. Lett. 2005, 94, 063601.
  4. Ferri, F.; Magatti, D.; Gatti, A.; Bache, M.; Brambilla, E.; Lugiato, L.A. High-resolution ghost image and ghost diffraction experiments with thermal light. Phys. Rev. Lett. 2005, 94, 183602.
  5. Basano, L.; Ottonello, P. A conceptual experiment on single-beam coincidence detection with pseudothermal light. Opt. Express 2007, 15, 12386–12394.
  6. Zhai, Y.H.; Chen, X.H.; Zhang, D.; Wu, L.A. Two-photon interference with true thermal light. Phys. Rev. A 2005, 72, 043805.
  7. Shapiro, J.H. Computational ghost imaging. Phys. Rev. A 2008, 78, 061802.
  8. Duarte, M.F.; Davenport, M.A.; Takhar, D.; Laska, J.N.; Sun, T.; Kelly, K.F.; Baraniuk, R.G. Single-pixel imaging via compressive sampling. IEEE Signal Process. Mag. 2008, 25, 83–91.
  9. Bromberg, Y.; Katz, O.; Silberberg, Y. Ghost imaging with a single detector. Phys. Rev. A 2009, 79, 053840.
  10. Chen, X.-H.; Liu, Q.; Luo, K.-H.; Wu, L.-A. Lensless ghost imaging with true thermal light. Opt. Lett. 2009, 34, 695–697.
  11. Ferri, F.; Magatti, D.; Lugiato, L.A.; Gatti, A. Differential ghost imaging. Phys. Rev. Lett. 2010, 104, 253603.
  12. Aßmann, M.; Bayer, M. Compressive adaptive computational ghost imaging. Sci. Rep. 2013, 3, 1545.
  13. Sun, B.; Edgar, M.P.; Bowman, R.; Vittert, L.E.; Welsh, S.; Bowman, A.; Padgett, M.J. 3D computational imaging with single-pixel detectors. Science 2013, 340, 844–847.
  14. Radwell, N.; Mitchell, K.J.; Gibson, G.M.; Edgar, M.P.; Bowman, R.; Padgett, M.J. Single-pixel infrared and visible microscope. Optica 2014, 1, 285–289.
  15. Sun, M.J.; Edgar, M.P.; Gibson, G.M.; Sun, B.Q.; Radwell, N.; Lamb, R.; Padgett, M.J. Single-pixel three-dimensional imaging with time-based depth resolution. Nat. Commun. 2016, 7, 12010.
  16. Sun, M.J.; Edgar, M.P.; Phillips, D.B.; Gibson, G.M.; Padgett, M.J. Improving the signal-to-noise ratio of single-pixel imaging using digital microscanning. Opt. Express 2016, 24, 10476–10485.
  17. Zhang, Z.B.; Wang, X.Y.; Zheng, G.; Zhong, J.G. Hadamard single-pixel imaging versus Fourier single-pixel imaging. Opt. Express 2017, 25, 19619–19639.
  18. Sun, M.J.; Meng, L.T.; Edgar, M.P.; Padgett, M.J.; Radwell, N. A Russian Dolls ordering of the Hadamard basis for compressive single-pixel imaging. Sci. Rep. 2017, 7, 3464.
  19. Sun, M.-J.; Chen, W.; Liu, T.-F.; Li, L.-J. Image retrieval in spatial and temporal domains with a quadrant detector. IEEE Photonics J. 2017, 9.
  20. Li, H.; Shi, J.H.; Zeng, G.H. Ghost imaging with nonuniform thermal light fields. J. Opt. Soc. Am. A 2013, 30, 1854–1861.
  21. Sun, S.; Liu, W.-T.; Gu, J.-H.; Lin, H.-Z.; Jiang, L.; Chen, P.-X. Ghost imaging normalized by second-order coherence. Opt. Lett. 2019, 44, 5993–5996.
  22. Land, E.H.; McCann, J.J. Lightness and retinex theory. J. Opt. Soc. Am. 1971, 61, 1–11.
  23. Land, E.H. The retinex theory of color vision. Sci. Am. 1977, 237, 108–128.
  24. Dey, N. Uneven illumination correction of digital images: A survey of the state-of-the-art. Optik 2019, 183, 483–495.
  25. Wang, W.; He, C.; Tang, L.; Ren, Z. Total variation based variational model for the uneven illumination correction. Neurocomputing 2018, 281, 106–120.
  26. Jobson, D.J.; Rahman, Z.; Woodell, G.A. Properties and performance of a center/surround retinex. IEEE Trans. Image Process. 1997, 6, 451–462.
  27. Shen, H.; Li, H.; Qian, Y.; Zhang, L.; Yuan, Q. An effective thin cloud removal procedure for visible remote sensing images. ISPRS J. Photogramm. Remote Sens. 2014, 96, 224–235.
  28. Gao, Y.; Hu, H.-M.; Li, B.; Guo, Q. Naturalness preserved nonuniform illumination estimation for image enhancement based on Retinex. IEEE Trans. Multimed. 2018, 20, 335–344.
  29. Morel, J.M.; Petro, A.B.; Sbert, C. A PDE formalization of Retinex theory. IEEE Trans. Image Process. 2010, 19, 2825–2837.
  30. Liang, Z.; Liu, W.; Yao, R. Contrast enhancement by nonlinear diffusion filtering. IEEE Trans. Image Process. 2016, 25, 673–686.
  31. Ng, M.K.; Wang, W. A total variation model for Retinex. SIAM J. Imaging Sci. 2011, 4, 345–365.
  32. Bigas, M.; Cabruja, E.; Forest, J.; Salvi, J. Review of CMOS image sensors. Microelectron. J. 2006, 37, 433–451.
  33. Fossum, E.R.; Hondongwa, D.B. A review of the pinned photodiode for CCD and CMOS image sensors. IEEE J. Electron Devices Soc. 2014, 2, 33–43.
  34. Janesick, J.R. Photon Transfer; SPIE: Bellingham, WA, USA, 2007; pp. 1–200.
  35. Schulz, M.; Caldwell, L. Non-uniformity correction and correctability of infrared focal plane arrays. Infrared Phys. Technol. 1995, 36, 763–777.
  36. Bosco, A.; Bruna, A.; Messina, G.; Spampinato, G. Fast method for noise level estimation and integrated noise reduction. IEEE Trans. Consum. Electron. 2005, 51, 1028–1033.
  37. Liu, Z.; Xu, J.; Wang, X.; Nie, K.; Jin, W. A fixed-pattern noise correction method based on gray value compensation for TDI CMOS image sensor. Sensors 2015, 15, 23496–23513.
  38. Singh, S.; Bray, M.A.; Jones, T.R.; Carpenter, A.E. Pipeline for illumination correction of images for high-throughput microscopy. J. Microsc. 2014, 256, 231–236.
  39. Model, M. Intensity calibration and flat-field correction for fluorescence microscopes. Curr. Protoc. Cytom. 2014, 68, 10.14.1–10.14.10.
  40. Redding, B.; Choma, M.A.; Cao, H. Speckle-free laser imaging using random laser illumination. Nat. Photonics 2012, 6, 355–359.
Figure 1. Numerical simulation flow chart. The light source was non-uniform, with a Gaussian intensity distribution. 128 × 128 Hadamard masks were used as sampling masks.
Figure 2. Numerical simulation. (a) Non-uniform illumination. (b) Object. (c) Uncalibrated reconstructed image; its root mean square error (RMSE) is 0.2548. (d) Calibrated reconstructed image; its RMSE is 0.0999. (e) Intensity distributions comparison.
Figure 3. Experiment system setup. The object was illuminated by a laser beam modulated by a DMD. The camera lens projected the Hadamard masks onto the object. A high-speed analogue-to-digital converter (ADC) was used to acquire the reflected light intensity from a single-pixel detector.
Figure 4. Experiment results. (a) Calibration result of light source, which was a reconstructed image of a white paper. (b) Uncalibrated reconstructed image; the peripheral area of the image was ambiguous due to the weak illumination. The RMSE of the uncalibrated image is 0.2518. (c) Calibrated reconstructed image; the letters in the peripheral area became visible. The RMSE after calibration is 0.0525. (d) Ground truth image. (e) Gray scale distributions of the images highlighted by the black dashed line in (a), the solid red line in (b), the solid green line in (c), and the blue dotted line in (d).
Figure 5. Experimental comparison. (a) Image calibrated using retrospective Gaussian filtering. Its illumination was estimated as a Gaussian distribution. The RMSE after calibration is 0.0689, and the local speckles on letters “H” and “I” were not eliminated. (b) Image calibrated using proposed method. Its illumination was measured. The RMSE after calibration is 0.0525, and the local speckles were eliminated.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
