Article

Computational Imaging at the Infrared Beamline of the Australian Synchrotron Using the Lucy–Richardson–Rosen Algorithm

Soon Hock Ng, Vijayakumar Anand, Molong Han, Daniel Smith, Jovan Maksimovic, Tomas Katkus, Annaleise Klein, Keith Bambery, Mark J. Tobin, Jitraporn Vongsvivut and Saulius Juodkazis
1 Optical Sciences Centre, Computing and Engineering Technologies, Swinburne University of Technology, Hawthorn, VIC 3122, Australia
2 ARC Training Centre in Surface Engineering for Advanced Materials (SEAM), School of Science, Computing and Engineering Technologies, Swinburne University of Technology, Hawthorn, VIC 3122, Australia
3 Institute of Physics, University of Tartu, 50411 Tartu, Estonia
4 Infrared Microspectroscopy (IRM) Beamline, ANSTO—Australian Synchrotron, Clayton, VIC 3168, Australia
5 Tokyo Tech World Research Hub Initiative (WRHI), School of Materials and Chemical Technology, Tokyo Institute of Technology, 2-12-1, Ookayama, Meguro-ku, Tokyo 152-8550, Japan
* Author to whom correspondence should be addressed.
† The authors contributed equally to this work.
Appl. Sci. 2023, 13(23), 12948; https://doi.org/10.3390/app132312948
Submission received: 11 October 2023 / Revised: 30 October 2023 / Accepted: 6 November 2023 / Published: 4 December 2023
(This article belongs to the Collection Optical Design and Engineering)

Abstract
The Fourier transform infrared microspectroscopy (FTIRm) system of the Australian Synchrotron has a unique optical configuration with a peculiar beam profile consisting of two parallel lines. The beam is tightly focused by a 36× Schwarzschild objective to a point on the sample, and the sample is scanned pixel by pixel to record an image of a single plane using a single pixel mercury cadmium telluride detector. A computational stitching procedure is used to obtain a 2D image of the sample. However, if the imaging condition is not satisfied, the recorded object information is distorted. Unlike commonly observed blurring, the case of a Schwarzschild objective is unique, with a donut-like intensity distribution with three distinct lobes. Consequently, commonly used deblurring methods are not efficient for image reconstruction. In this study, we have applied a recently developed computational reconstruction method called the Lucy–Richardson–Rosen algorithm (LRRA) in the online FTIRm system for the first time. The method involves two steps: a training step and an imaging step. In the training step, the point spread function (PSF) library is recorded by temporal summation of the intensity patterns obtained by scanning a pinhole in the x–y directions across the beam path with the single pixel detector, and this is repeated along the z direction. In the imaging step, the process is repeated for a complicated object along only a single plane. This new technique is named coded aperture scanning holography. Different types of samples, such as two pinholes, the digit '3' from a USAF resolution target, a cross-shaped object on a barium fluoride substrate, and a silk sample, are used to demonstrate both image recovery and 3D imaging applications.

1. Introduction

Incoherent digital holography (IDH) has a rich history and has developed in two directions: non-scanning digital holography, developed by Rosen, Kim, Tahara and others, and scanning digital holography, developed by Poon and co-workers [1]. In IDH, light from an object point is interfered with light from the same object point but with a different modulation to create a self-interference hologram. Unlike regular holography with coherent light sources, the self-interference hologram contains only the 3D location information of the object and not the phase information. Coded aperture imaging (CAI) is another imaging technique, originally developed to extend imaging technologies to non-visible regions of the electromagnetic spectrum where the manufacturing of lenses and the associated materials engineering were challenging [2,3]. In CAI, light from an object is modulated by a coded mask, and the scattered beam is recorded by an image sensor. From this scattered pattern and another recording, called the point spread function (PSF), made with the same mask but with a point object, the object information is reconstructed. While IDH's recording and reconstruction principles were adapted from conventional holography, CAI laid the foundations for modern-day computational imaging. In 2016 and 2017, two techniques called coded aperture correlation holography (COACH), with and without two-beam interference, were developed, connecting CAI with IDH [4,5]. CAI techniques usually employ special mask patterns to encode spatial and spectral information in the intensity distribution [2,3,6,7,8,9]. The intensity distribution is then processed in the computer with the PSFs recorded using a pinhole, or with the transfer function of the coded mask, to reconstruct the information from different spatio-spectral channels [8,9]. In CAI methods, the entire spatio-spectral information is captured in one or a few camera shots, unlike scanning holography methods where the interference pattern is scanned pixel by pixel [10].
This research work began when we attempted to apply the CAI method to the Fourier transform infrared microspectroscopy (FTIRm) system of the Australian Synchrotron [11,12]. The FTIRm system receives the high-brilliance IR beam from the synchrotron with a peculiar intensity distribution consisting of two parallel lines, caused by the gold coated extraction mirror with a central slit used for rejecting X-rays, as shown in Figure 1. Consequently, the optical configuration and imaging requirements are stringent in order to achieve high-quality imaging. The FTIRm system contains two detectors, namely a single pixel mercury cadmium telluride (MCT) detector and a focal plane array (FPA) detector (64 × 64 pixels), both cooled by liquid nitrogen. One of the stringent requirements of the FTIRm system is the use of a matching 36× Schwarzschild IR reflecting objective and condenser (NA = 0.5). During our first attempt to apply the non-scanning CAI method, the 36× Schwarzschild objective lens (SOL) was replaced by a 15× Cassegrain objective lens (COL) to increase the beam diameter in the sample plane, and the single pixel MCT detector was replaced by the FPA detector. However, due to the peculiar beam profile, the experiment could not be extended beyond a pinhole. Therefore, in that case, only a semi-synthetic analysis was carried out, using synthetic objects together with recorded and averaged point spread functions (PSFs) [12]. The outcome of the experiment revealed the possibility of achieving 3D imaging based on the following facts and observations. To achieve 3D imaging using CAI, a sharp autocorrelation and a low cross-correlation along depth (SALCAD) are needed. This SALCAD property is easily achieved using a scattering mask, which is one of the reasons why scattering masks have been widely used for 3D imaging [13]. Based on the observations made in [12], even deterministic intensity distributions may exhibit SALCAD properties.
Thereafter, the CAI method was repeated in the replica off-line FTIRm system of the Australian Synchrotron, which is not connected to the synchrotron beam but uses a Globar source [14]. The light from a Globar source is uniform, and so it supported the application of the CAI method significantly better than the synchrotron beam. The FPA detector was used for recording the intensity distribution, and the COL was used as the coded aperture. A new computational reconstruction algorithm called the Lucy–Richardson–Rosen algorithm (LRRA), integrating the Lucy–Richardson algorithm (LRA) [15,16] with non-linear reconstruction (NLR) [17], was developed for reconstructing three-dimensional (3D) information from the recorded intensity distributions of objects and the PSF library. The preliminary results were promising. However, the recorded intensity distributions were of low resolution due to the limited number of pixels of the FPA. Moreover, the intensity of the Globar source is far lower than that of the synchrotron beam. Since most of the measurements at the Australian Synchrotron involve the use of the synchrotron beam, it is crucial to extend this idea to the online FTIRm system. In a recent study with the LRRA, it was found that it is possible to use a wide range of optical beams with peculiar intensity and phase distributions [18]. In another recent study, the high resilience of the LRRA to detector noise was verified [19]. In this manuscript, we report for the first time computational imaging using the IR synchrotron beam at the Australian Synchrotron with the LRRA. The method is unique because coded aperture holography has been implemented with a scanning approach for the first time. We call this technique coded aperture scanning holography (CASH). CASH can be used for deblurring, aberration correction, and 3D imaging; it is suitable for extreme imaging scenarios, such as low-resolution detectors and imaging through highly scattering media, and can improve almost all facets of imaging except time resolution. This manuscript consists of five sections. The methodology is described in the next section, the experimental results are presented in the third section, and the final two sections present the discussion and conclusions of the study.

2. Materials and Methods

In this study, we have developed a modified CASH approach for extending the CAI method to the online FTIRm system of the Australian Synchrotron, with the 36× SOL as the coded aperture. In this method, the 36× SOL of the FTIRm system was retained, unlike in [12], and a scanning approach was introduced using the single pixel MCT detector. The schematic of the FTIRm system is shown in Figure 1. The IR beam is extracted along with X-rays from the synchrotron, and the X-rays are removed by a gold coated mirror with a central slit, as shown in Figure 1. This unique extraction results in a fork-shaped beam, as shown in the same figure. The FTIRm system provides 765 spectral channels ranging from 899 to 3845 cm−1. The intensity of the IR synchrotron beam is >10⁵ times that of a Globar source. The FTIRm system includes a visible lamp generating white light, which is aligned collinearly with the IR beam to assist optical alignment, as the IR beam is invisible to the human eye. Both the visible and IR beams are collected and focused on the sample plane using a 36× SOL. The light from the sample plane is collected by another 36× SOL and reimaged using a high-resolution visible camera and an IR sensor. The IR sensor can be either a single pixel MCT detector or a 64 × 64 FPA detector; in this study, the single pixel MCT detector was used. In the FTIRm system, two microscopes (visible and MIR) are, in effect, integrated. Since the MIR beam is not visible to the human eye, visible light is used as a guiding beam for sample mounting and the calibration procedure. Microscopy with visible light is well established with sophisticated refractive lenses, which are indicated by the letter L in Figure 1. However, microscopy with an IR beam is complicated, because manufacturing lenses on calcium fluoride and barium fluoride substrates is challenging, and so reflective optics are used for IR microscopy. The mirrors indicated by the letter M in Figure 1 are all concave mirrors. The motorized sliding plate (MSP) is used to switch between the visible and IR imaging modes for sample alignment and calibration. The visible microscope uses both refractive and reflective optics, whereas the IR microscope uses only reflective optics. The aperture stops indicated by the letter 'A' are used to reduce aberrations at the expense of light throughput.
In the conventional direct imaging mode, there is point-to-point mapping, i.e., every object point creates an image point. Since the synchrotron beam is spatially incoherent, there is only intensity addition between neighbouring points, unlike the complex addition that occurs with coherent light sources. The image formation can be expressed as IO = O ⊗ IPSF, where O is the object information, IO is the recorded image, IPSF is the diffraction-limited spot, and '⊗' is a 2D convolution operator. As can be seen, the object information O is sampled by the diffraction-limited spot. If the NA is large, the spot size is small and better sampling is obtained. If the NA is small, the spot size is large, resulting in poor sampling and a blurred image. When there is an axial aberration because the object distance does not meet the imaging condition 1/u + 1/v = 1/f, a blurred image of the object is obtained even with a large NA, where u, v, and f are the object distance, image distance, and focal length, respectively. This condition prevails in all planes which do not satisfy the imaging condition. Therefore, in order to image such planes, it is necessary to repeat the imaging process after achieving the imaging condition for those planes. Consequently, in direct imaging mode the imaging process is time consuming, requiring a total of M²N camera shots, where M is the number of pixels along the x and y directions and N is the number of planes. If the FPA is used, then only N camera shots are required. With coded aperture holography, only a single shot is needed. However, the fork-shaped beam profile precludes the application of non-scanning methods. In this study, in order to extend CAI to FTIRm with the synchrotron beam, CAI is implemented as CASH, where the number of measurements needed is M², which is the same as scanning-based 2D imaging but yields 3D information.
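To make the forward model concrete, the following minimal sketch (Python/NumPy, not part of the beamline software; the three-lobe donut PSF is a synthetic stand-in for the measured IPSF of the SOL) simulates the recording IO = O ⊗ IPSF by 2D convolution.

```python
import numpy as np
from scipy.signal import fftconvolve

def synthetic_three_lobe_psf(size=128, radius=30, width=6):
    """Synthetic stand-in for the donut-shaped SOL PSF with three lobes."""
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    ring = np.exp(-((r - radius) ** 2) / (2 * width ** 2))   # donut ring
    lobes = 0.5 * (1.0 + np.cos(3 * theta))                  # three angular lobes
    psf = ring * lobes
    return psf / psf.sum()

def record_intensity(obj, psf):
    """Spatially incoherent image formation: IO = O (2D-convolved with) IPSF."""
    return fftconvolve(obj, psf, mode="same")

# Example: a two-point object blurred by the synthetic PSF
obj = np.zeros((128, 128))
obj[50, 50] = obj[80, 90] = 1.0
I_PSF = synthetic_three_lobe_psf()
I_O = record_intensity(obj, I_PSF)
```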
In indirect imaging mode, as demonstrated in [1,2,3,4,5,6,7,8,9], the imaging process is different from that of direct imaging mode, consisting in general of two steps. In the first step, the intensity pattern is recorded as IO = O ⊗ IPSF. When the imaging condition is satisfied, IO is an image of the object as governed by the NA of the system. However, at other planes, IO does not form an image of the object but a blurred intensity distribution. Therefore, in the second step, the image IR is reconstructed or recovered from IO. This process is also called deconvolution or deblurring. It is achieved by processing IO with IPSF as IR = IO ∗ IPSF, which can be rewritten as IR = (O ⊗ IPSF) ∗ IPSF, where '∗' is a 2D correlation operator. Depending upon the nature of the autocorrelation function IPSF ∗ IPSF, the object information O is sampled. With scattering masks, it is possible to obtain sharp autocorrelation functions. In [14], it was shown that even with deterministic masks such as the COL, a sharp autocorrelation function can be obtained. In this study, it is also possible to obtain sharp autocorrelation functions, provided the synchrotron beam has a uniform intensity distribution.
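A minimal sketch of this correlation step (continuing the synthetic arrays from the forward-model sketch above; 2D correlation computed via FFTs, not the authors' code) is shown below, together with a simple check of how sharp the PSF autocorrelation IPSF ∗ IPSF is, since that autocorrelation governs how faithfully O is sampled.

```python
import numpy as np

def cross_correlate_2d(a, b):
    """2D cross-correlation via FFT; the zero-shift term is moved to the array centre."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    return np.fft.fftshift(np.real(np.fft.ifft2(A * np.conj(B))))

# I_O and I_PSF are the recording and PSF from the forward-model sketch above.
I_R = cross_correlate_2d(I_O, I_PSF)                 # IR = IO correlated with IPSF
autocorr = cross_correlate_2d(I_PSF, I_PSF)          # IPSF correlated with IPSF
peak_to_mean = autocorr.max() / autocorr.mean()      # larger value -> sharper sampling
```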
The blurring with the SOL is not symmetrical, and the recording of PSFs at different axial planes was complicated by the peculiar intensity distribution of the beam. In all previous studies, the fundamental building block of the object is either ~λ/NA or the diameter of the pinhole; it is not possible to achieve a resolution better than the size of the pinhole. In this study, to enable 3D imaging, the lateral resolution is set by the size of the pinhole, even though the smaller focal spot obtained with the SOL would allow imaging below that limit. A larger pinhole creates larger features in the PSF, and vice versa, which are transferred to the resolving power through the autocorrelation function IPSF ∗ IPSF that samples the object function. Therefore, in the proposed system, the pinhole is scanned by the highly focused IR beam from the 36× objective lens, and the PSF is formed by the summation of the intensity distributions obtained during scanning, given as IPSF = Σ ID(xp, yq), where the summation runs over the scan positions p, q = 1 to N and ID is the intensity distribution obtained for a point (xp, yq) in the sample plane. Consequently, the PSF itself is recorded as a 2D object in a spatially incoherent imaging system by the summation of light from different points. This procedure allows the recording of 3D PSFs even with the peculiar intensity distribution and complicated optical configuration. The approach reduces the lateral resolution, as the pinhole is larger than the focal spot obtained from the SOL. After recording the 3D PSF, the scanning procedure was repeated for a single-plane or multi-plane object. The 3D image of the object can be reconstructed by processing the PSF library, catalogued against the axial location, with the object intensity distributions. Unlike optical scanning holography methods [10], where it is necessary to reconstruct at all planes, the use of the SOL as a coded aperture enables direct imaging and holography to co-exist. At the image plane, the CASH technique performs as a direct imaging system; when defocused, the system behaves as a CAI method and a reconstruction method is required.
The intensity distribution recorded for the object O can be expressed as IO = O ⊗ IPSF, where IPSF is the intensity distribution obtained by temporally summing the intensity distributions during pixel-by-pixel scanning. There are two main mechanisms to extract O from IO using IPSF: direct cross-correlation [4] and iterative maximum likelihood approaches such as the LRA and the LRRA [14,15,16]. In our recent study [14], it was found that the LRRA, whose schematic is shown in Figure 2, performed better than the LRA. The (n + 1)th reconstructed image is given as IR(n+1) = IRn × {[IO/(IRn ⊗ IPSF)] ⊛(α,β) IPSF}, where '⊛(α,β)' refers to NLR, which is defined for two functions A and B as F⁻¹{|Ã|^α exp[j·arg(Ã)] |B̃|^β exp[−j·arg(B̃)]}, where X̃ is the Fourier transform of X. The values of α and β are tuned between −1 and 1. The number of iterations was increased from 1 until a minimum entropy was obtained. The algorithm begins with an initial guess, which is the recorded object intensity IO, and convolves it with IPSF. The ratio between IO and the resulting distribution IO′ is correlated with IPSF using NLR, and the result is then multiplied with the current solution IRn (= IO when n = 1) to generate the next solution IR(n+1). This process is iterated with the new solution until the maximum likelihood solution is obtained. Unlike the LRA, however, the LRRA converges rapidly and gives a better estimate of the solution.
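The iteration described above can be sketched as follows. This is a schematic NumPy implementation written from the description in the text, not the authors' released code; the assignment of α and β to the two spectra inside NLR, the sign convention in the exponentials, and the entropy-based stopping rule are assumptions consistent with the text.

```python
import numpy as np
from scipy.signal import fftconvolve

def nlr(a, b, alpha, beta):
    """Non-linear reconstruction (NLR): Fourier-domain correlation of a with b,
    with the spectral magnitudes raised to the tunable exponents alpha and beta."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    C = (np.abs(A) ** alpha) * np.exp(1j * np.angle(A)) \
        * (np.abs(B) ** beta) * np.exp(-1j * np.angle(B))
    return np.fft.fftshift(np.real(np.fft.ifft2(C)))

def entropy(img):
    """Entropy of the normalised intensity, used as a convergence measure."""
    p = np.abs(img) / (np.abs(img).sum() + 1e-12)
    return float(-np.sum(p * np.log(p + 1e-12)))

def lrra(I_O, I_PSF, alpha=0.4, beta=1.0, max_iter=10):
    """Lucy-Richardson-Rosen algorithm: LRA-style multiplicative updates with
    the correlation step replaced by NLR; stops when the entropy stops falling."""
    I_R = I_O.astype(float).copy()          # initial guess is the recording itself
    previous_entropy = np.inf
    for _ in range(max_iter):
        I_Op = fftconvolve(I_R, I_PSF, mode="same")       # forward-project the guess
        ratio = I_O / (I_Op + 1e-12)                      # residual ratio IO / IO'
        correction = nlr(ratio, I_PSF, alpha, beta)       # NLR instead of plain correlation
        I_R = I_R * correction                            # multiplicative update
        current_entropy = entropy(I_R)
        if current_entropy > previous_entropy:            # minimum entropy reached
            break
        previous_entropy = current_entropy
    return I_R

# Example with the synthetic recording from the earlier sketches:
# reconstruction = lrra(I_O, I_PSF, alpha=0.4, beta=1.0, max_iter=10)
```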

3. Results

The simplified optical configuration of the FTIRm system is shown in Figure 1. The spectral range of the MIR beam from the synchrotron is 899–3845 cm−1 with 765 channels. The MIR beam from the synchrotron is extracted using a special mirror, with a gold coating and a central slit to reject X-rays, and enters the FTIRm system. The size of the beam at the entrance port of the FTIR spectrometer is 12 mm × 12 mm [20]. A visible light source is collinearly aligned with the MIR beam and used as a reference for preliminary studies and alignment of the MIR beam. Extensive information on the calibration of the FTIRm system and the resolution limits is presented in [20,21]. For the 36× SOL, the spatial resolution is ~8 μm at 1600 cm−1 and varies with the wavelength. After the matching 36× SOL, a pinhole was used to define the spot size measured at the sample plane. Consequently, in this computational imaging method, the wavelength-dependent spatial resolution limit is replaced by a wavelength-independent one set by the pinhole size.
For each recorded image, the condenser and objective were aligned in free space, or on an equivalent blank substrate, before the pinhole (25 µm pinhole, Thorlabs Inc.) or sample was placed onto the sample stage. Each point spectrum was taken with a 4 cm−1 resolution and averaged over four scans per point. The pinhole discussed throughout the manuscript is the one mounted on the sample stage. During scanning, the stage moved the sample such that the static IR beam was aligned over each measurement point. For the IPSFs, a 40 × 40 grid with a 5 µm grid spacing was selected, totalling a 200 × 200 µm scan area. A 5.6 µm beam spot was used to ensure a slight overlap between scans. A 40 × 40 grid with a 15 µm spacing and a 16.7 µm beam size were used for the silk fibre in Figure 3. The above spot sizes are the built-in scanning options of the Bruker system and do not match the actual spot sizes. The FTIRm system uses the OPUS software (version 8.0) for recording and processing information. The OPUS data structure stores the spectral information for every pixel. This data structure is converted into spectral-wise images using a MATLAB linking code. The images are then averaged over the entire spectrum to obtain low-noise images. This process is conducted for both the IPSF and IO.
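The conversion from per-pixel spectra to spectrum-averaged images can be sketched as follows (hypothetical array shapes and scan-point ordering; the actual OPUS-to-MATLAB linking code is not reproduced here).

```python
import numpy as np

def spectra_to_image(spectra, ny, nx):
    """Convert per-pixel spectra into a spectrum-averaged intensity image.

    spectra : array of shape (ny * nx, n_channels), one spectrum per scan point,
              with scan points assumed to be ordered row by row.
    Returns an (ny, nx) image; averaging over all channels suppresses
    channel-wise noise, and the same conversion is applied to IPSF and IO.
    """
    cube = spectra.reshape(ny, nx, -1)      # (ny, nx, n_channels) spectral cube
    return cube.mean(axis=2)                # average over the 765 spectral channels

# Example with placeholder data: a 40 x 40 scan grid and 765 channels
spectra = np.random.rand(40 * 40, 765)
image = spectra_to_image(spectra, ny=40, nx=40)
```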
A silk fibre was mounted in the sample plane, held by a 400 µm pinhole. As the silk fibre extends in 3D space, the focus changes with depth, as shown in Figure 3a (Δz = 100 μm) and Figure 3e (Δz = 200 μm). When the plane of interest is changed along z, a different region in (x,y) appears sharp and intense. The IPSF has a unique donut-shaped intensity distribution with three lobes arising from the support structure consisting of three spokes in the SOL, as shown in the inset of Figure 1. A similar structure, with four spokes in a Cassegrain objective lens, was investigated in [14]. Using an IPSF similar to the one shown in the inset of Figure 3, the images in Figure 3a,e were reconstructed as shown in Figure 3b,f, respectively, with α = 0.5, β = 1 and eight iterations. As seen from the figure, the out-of-focus features are brought into focus after reconstruction. A single fibre strand can be clearly seen in both cases. The magnified versions of Figure 3a,b,e,f are shown in Figure 3c,d,g,h, respectively. The normalised absorbance spectrum was obtained and is plotted in Figure 3i, which shows a distinct absorbance peak at 1600 cm−1. By choosing an IPSF recorded at a different distance, different parts of the images at different depths can be reconstructed. The high-intensity regions seen in the reconstructed images correspond to the axial plane of the pinhole when the IPSF was recorded. When an IPSF of another axial plane was used for reconstruction, it increased the intensity of the region corresponding to that plane. To further evaluate the method, a two-plane object was constructed using two pinholes in one plane and the digit '3' from the USAF resolution target in another plane, separated by a distance of 300 µm. This was achieved by summing the corresponding intensity distributions from the two planes. Since the synchrotron beam is spatially incoherent, there is only intensity addition between light from two points and not a complex addition as in spatially coherent light sources. In this case, there was a slight misalignment, which is indicated by the intensity difference in one of the three lobes of the IPSF. The images of the IPSF, the recorded intensity distribution, and the reconstructed image are shown in Figure 4a–c, respectively. It can be seen that the reconstructed image shows the two pinholes and distorts the digit '3', which was in focus. The distortion also follows the nature of the IPSF, bringing more reconstructed intensity to the middle. The above digital refocusing shows the possibility of 3D imaging with the IR synchrotron beam. The reconstruction conditions of the LRRA are quite similar to the previous case, with α = 0.4, β = 1 and seven iterations.
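Digital refocusing with a depth-indexed PSF library can be sketched as follows. This is illustrative only: the z values, the placeholder arrays, and the reuse of the lrra helper from the sketch in Section 2 are assumptions, not the beamline code.

```python
import numpy as np

# Placeholders standing in for measured data: PSFs recorded at two axial
# planes (catalogued against z in micrometres) and the two-plane recording.
psf_library = {0: np.random.rand(200, 200), 300: np.random.rand(200, 200)}
I_O_two_plane = np.random.rand(200, 200)

def refocus_stack(I_O, psf_library, reconstruct, alpha=0.4, beta=1.0, n_iter=7):
    """Reconstruct one recording against every PSF in the library; each
    reconstruction emphasises the axial plane at which its PSF was recorded."""
    return {z: reconstruct(I_O, psf, alpha, beta, n_iter)
            for z, psf in psf_library.items()}

# `lrra` is the LRRA sketch given in Section 2.
reconstructions = refocus_stack(I_O_two_plane, psf_library, reconstruct=lrra)
```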
In Figure 4b, it is seen that the digit '3' is well focused, while the two pinholes are blurred, indicating that the two objects lie in different planes. When Figure 4b is reconstructed with the LRRA using the IPSF shown in Figure 4a, which corresponds to the plane of the two pinholes, the information of the two pinholes is deblurred while the digit '3' is blurred. The blurring and deblurring behaviours depend upon the nature of the IPSF. The IPSF of the SOL is quite different from the blurring commonly seen with a lens. Consequently, unusual distributions such as the 'hot spots' seen in Figure 4c are obtained during blurring. Considering the fundamental concepts of image formation and image reconstruction, the above behaviour can be clearly understood. The physical meaning of the convolution operation, which yields the object intensity IO, is that every object point in O is replaced by the IPSF, and in the overlapping regions there is an addition. This can be seen in Figure 4b: there are two points, each point is replaced by the pattern of Figure 4a, and the overlapping region has a higher intensity, indicating an addition. The image reconstruction or deblurring from IO and IPSF is achieved by pattern recognition, where the IPSF is scanned over IO and a peak is generated whenever there is a pattern match. Consequently, when Figure 4a is scanned over Figure 4b, two peaks are generated for the two pinholes, as there is perfect pattern matching at those locations. Furthermore, there is some similarity between the structure of the digit '3' and Figure 4a, which gives rise to hot spots. The above results are therefore not an anomaly but the natural blurring and deblurring behaviour corresponding to a peculiar IPSF such as the one obtained for the SOL. The direct imaging results of the two pinholes obtained with the single pixel MCT detector in the IR channel and with the high-resolution camera in the visible channel are shown in Figure 4d,e, respectively.
A cross-shaped object with a width of 50 µm, manufactured by femtosecond ablation of a 100 nm chromium layer coated on a 1 mm thick barium fluoride substrate, was used as a test object. Of all the CASH recordings, the IR images of the cross-shaped object had the largest spatial aberrations and noise. The images of the IPSF and the object intensity distribution, together with the reconstruction results using a matched filter, phase-only filter, NLR, LRA, LRRA and Wiener filter, are shown in Figure 5a–h, respectively. The reconstruction conditions for the matched filter, phase-only filter and NLR in F⁻¹{|Ã|^α exp[j·arg(Ã)] |B̃|^β exp[−j·arg(B̃)]} are (α = 1, β = 1), (α = 0, β = 1) and (α = 0, β = 0.7), respectively. The reconstruction condition for the LRA is n = 100, for the LRRA it is (α = 0.4, β = 1 and n = 10), and for the Wiener filter the ratio of noise variance to signal variance is σ = 1000. As can be seen from Figure 5, the performance of the LRRA is better than that of the other methods. The LRA also shows high resilience to detector noise and aberrations, but it requires 10 times as many iterations as the LRRA. Moreover, as investigated in [14,19], the performance of the LRA for complicated objects is not as good as for simple objects, such as the cross-shaped object used here. The Wiener filter has the poorest performance of all, with significant background noise.
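The matched filter, phase-only filter and NLR differ only in the exponents α and β applied to the Fourier magnitudes in the correlation, while the Wiener filter divides by the PSF spectrum with a noise-to-signal regulariser. The sketch below illustrates both families; the assignment of α to the PSF spectrum and β to the recording is inferred from the parameter values quoted above, and I_O and I_PSF stand for the recorded arrays (not reproduced here).

```python
import numpy as np

def nlr_filter(I_O, I_PSF, alpha, beta):
    """Generalised correlation filter: alpha scales the PSF magnitude spectrum and
    beta the recording's, so alpha = beta = 1 gives the matched filter,
    alpha = 0, beta = 1 the phase-only filter, and tuned values the NLR."""
    P, A = np.fft.fft2(I_PSF), np.fft.fft2(I_O)
    C = (np.abs(P) ** alpha) * np.exp(-1j * np.angle(P)) \
        * (np.abs(A) ** beta) * np.exp(1j * np.angle(A))
    return np.fft.fftshift(np.real(np.fft.ifft2(C)))

def wiener_filter(I_O, I_PSF, nsr=1000.0):
    """Wiener deconvolution with a scalar noise-to-signal ratio (sigma)."""
    A, P = np.fft.fft2(I_O), np.fft.fft2(I_PSF)
    H = np.conj(P) / (np.abs(P) ** 2 + nsr)
    return np.fft.fftshift(np.real(np.fft.ifft2(A * H)))

# Settings corresponding to Figure 5 (I_O, I_PSF: recorded 2D arrays):
# matched    = nlr_filter(I_O, I_PSF, alpha=1.0, beta=1.0)
# phase_only = nlr_filter(I_O, I_PSF, alpha=0.0, beta=1.0)
# nlr_rec    = nlr_filter(I_O, I_PSF, alpha=0.0, beta=0.7)
# wiener_rec = wiener_filter(I_O, I_PSF, nsr=1000.0)
```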

4. Discussion

In this study, the LRRA has been used for deconvolution and 3D reconstruction. In this section, we compare the performance of the LRRA with that of the LRA, NLR, matched filter and phase-only filter for the averaged donut-shaped IPSF with three lobes. A test object, the logo of ANSTO, was used for the study. The object intensity IO was synthesized by convolving the test object with the experimental IPSF. The images of the test object, IPSF, IO and the reconstruction results from the matched filter, phase-only filter, NLR (α = 0, β = 0.5), LRA (n = 100) and LRRA (α = 0, β = 0.95, n = 9) are shown in Figure 6a–h, respectively. As can be seen, the LRRA performs better than the LRA, NLR and the other methods. Furthermore, the LRRA is faster than the LRA by at least an order of magnitude in both simulations and experiments. The value of α was between zero and one, and the value of β was one for most cases, while the number of iterations n was less than 10. With more advanced computational reconstruction algorithms, it may be possible to enhance the reconstruction results further.

5. Summary and Conclusions

The online FTIRm system of the Australian Synchrotron uses an IR beam with a peculiar intensity distribution in the shape of two lines, which results in a complicated optical configuration. The development of detectors in the MIR wavelength region is challenging, unlike in the visible region, where megapixels of information can be detected with high-resolution sensors. The online FTIRm system at the Australian Synchrotron is equipped with two image sensors: a single pixel detector and an FPA detector, both made of mercury cadmium telluride and cooled by liquid nitrogen. As the FPA has only 64 × 64 pixels, a single-shot recording with that detector results in a low-resolution image. Therefore, the single pixel detector is used for pixel-by-pixel scanning of the sample, followed by a computational stitching procedure, to record a 2D image of the object with a better resolution.
Even though a thin slice of the sample is used in most FTIRm measurements, with a strong (36×) objective an axial aberration or a thickness variation of a few micrometres can significantly affect the imaging results. The running cost of the Australian Synchrotron is high, with most user projects awarded limited beamtime ranging from 48 h to 96 h and valued at ~AUD 100 K. Therefore, it is crucial to avoid aberrations during measurements and to utilize the beamtime efficiently. The focusing elements used in FTIRm are not regular refractive lenses but reflective Schwarzschild objectives, which have a unique spatial aberration in the form of a donut with three distinct intensity lobes. When there is an axial aberration, the images are highly distorted. Consequently, existing deblurring methods are not suitable for deblurring the images.
In this study, a recently developed computational reconstruction method called the LRRA has been applied to realise CASH. Both standard samples, such as USAF objects and pinholes, and biochemical samples, such as silk, were studied. The LRRA method was able to reconstruct not only 2D but also 3D information of the object. One of the challenges with the LRRA is that it is not possible to know the values of α, β and n beforehand; they can only be predicted by an expert user from the outcomes of NLR and the LRA. We plan to address this problem in future work. We believe that the proposed and demonstrated method is a significant step forward for the FTIRm system of the Australian Synchrotron. The impact of the study is high, as the developed technique will enable rapid imaging, aberration correction and 3D imaging, benefiting the users of the Australian Synchrotron. The proposed method is not limited to the FTIRm system of the Australian Synchrotron but can be applied to other advanced imaging systems with complicated optical configurations. Recently, single-shot phase imaging was demonstrated at the Australian Synchrotron [22]. Scanning holography is a milestone in the history of holography [10]. We believe that the CASH technique developed in this study and the recent phase imaging demonstration will lead to further developments and wider applications of FTIRm. The computational reconstruction methods used in CAI are based on pattern recognition using some type of correlation or maximum likelihood estimation; in this study, a maximum likelihood estimation method, the LRRA, has been used. One of the challenges in applying pattern recognition-based algorithms to the reconstruction of images modulated by deterministic optical fields is that they give rise to artefacts in the form of hot spots when the PSF's shape in a plane matches the object's shape. However, this is a rare event and can happen with any CAI method when there is a match between a PSF and an object. Further studies are needed to develop advanced computational methods that can avoid such artefacts during reconstruction.

Author Contributions

Conceptualization, V.A., S.H.N., J.V. and S.J.; methodology, V.A., S.H.N., J.V., M.J.T., K.B., S.J. and A.K.; software, V.A., S.H.N. and J.V.; validation, M.H., D.S., J.M. and T.K.; formal analysis, V.A. and S.H.N.; investigation, S.J., J.V. and M.J.T.; resources, S.J., M.J.T., J.V., K.B. and A.K.; writing—original draft preparation, V.A. and S.H.N.; writing—review and editing, all the authors; supervision, S.J., J.V. and M.J.T.; project administration, S.J.; funding acquisition, V.A., S.J., S.H.N. and J.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 857627 (CIPHR) and by the ARC Linkage project LP190100505. This research was undertaken on the IRM beamline at the Australian Synchrotron (Victoria, Australia), part of ANSTO (Proposal ID 15775, Reference No. AS1/IRM/15775 and Proposal ID M17333, Reference No. AS2/IRM/17333).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

S. No.   Acronym   Description
1        3D        Three dimensions
2        2D        Two dimensions
3        CAI       Coded Aperture Imaging
4        CASH      Coded Aperture Scanning Holography
5        COL       Cassegrain Objective Lens
6        FPA       Focal Plane Array
7        FTIRm     Fourier Transform Infrared microspectroscopy
8        LRA       Lucy–Richardson Algorithm
9        LRRA      Lucy–Richardson–Rosen Algorithm
10       MCT       Mercury Cadmium Telluride
11       ML        Maximum Likelihood
12       MIR       Mid-Infrared
13       NA        Numerical Aperture
14       NLR       Non-Linear Reconstruction
15       OTF       Optical Transfer Function
16       PSF       Point Spread Function
17       SALCAD    Sharp Autocorrelation and a Low Cross-correlation Along Depth
18       SOL       Schwarzschild Objective Lens
19       USAF      United States Air Force

References

1. Tahara, T.; Zhang, Y.; Rosen, J.; Anand, V.; Cao, L.; Wu, J.; Koujin, T.; Matsuda, A.; Ishii, A.; Kozawa, Y.; et al. Roadmap of incoherent digital holography. Appl. Phys. B 2022, 128, 193.
2. Dicke, R.H. Scatter-hole cameras for X-rays and gamma rays. Astrophys. J. 1968, 153, L101.
3. Ables, J.G. Fourier transform photography: A new method for X-ray astronomy. Publ. Astron. Soc. Aust. 1968, 1, 172–173.
4. Vijayakumar, A.; Kashter, Y.; Kelner, R.; Rosen, J. Coded aperture correlation holography—A new type of incoherent digital holograms. Opt. Express 2016, 24, 12430–12441.
5. Vijayakumar, A.; Rosen, J. Interferenceless coded aperture correlation holography—A new technique for recording incoherent digital holograms without two-wave interference. Opt. Express 2017, 25, 13883–13896.
6. Fenimore, E.E.; Cannon, T.M. Coded aperture imaging with uniformly redundant arrays. Appl. Opt. 1978, 17, 337–347.
7. Rosen, J.; Vijayakumar, A.; Kumar, M.; Rai, M.R.; Kelner, R.; Kashter, Y.; Bulbul, A.; Mukherjee, S. Recent advances in self-interference incoherent digital holography. Adv. Opt. Photonics 2019, 11, 1–66.
8. Anand, V.; Ng, S.H.; Maksimovic, J.; Linklater, D.; Katkus, T.; Ivanova, E.P.; Juodkazis, S. Single shot multispectral multidimensional imaging using chaotic waves. Sci. Rep. 2020, 10, 13902.
9. Boominathan, V.; Robinson, J.T.; Waller, L.; Veeraraghavan, A. Recent advances in lensless imaging. Optica 2022, 9, 1–16.
10. Poon, T.-C. Optical Scanning Holography with MATLAB; Springer: New York, NY, USA, 2007.
11. Tobin, M.; Vongsvivut, J.; Martin, D.; Sizeland, K.; Hackett, M.; Takechi, R.; Fimorgnari, N.; Lam, V.; Mamo, J.; Carter, E.; et al. Focal plane array IR imaging at the Australian Synchrotron. Infrared Phys. Technol. 2018, 94, 85–90.
12. Anand, V.; Ng, S.H.; Katkus, T.; Maksimovic, J.; Klein, A.; Vongsvivut, J.; Bambery, K.; Tobin, M.J.; Juodkazis, S. Exploiting spatio-spectral aberrations for rapid synchrotron infrared imaging. J. Synchrotron Rad. 2021, 28, 1616–1619.
13. Xie, X.; Zhuang, H.; He, H.; Xu, X.; Liang, H.; Liu, Y.; Zhou, J. Extended depth-resolved imaging through a thin scattering medium with PSF manipulation. Sci. Rep. 2018, 8, 4585.
14. Anand, V.; Han, M.; Maksimovic, J.; Ng, S.H.; Katkus, T.; Klein, A.; Bambery, K.; Tobin, M.J.; Vongsvivut, J.; Juodkazis, S. Single-shot mid-infrared incoherent holography using Lucy–Richardson–Rosen algorithm. Opto-Electron. Sci. 2022, 1, 210006.
15. Richardson, W.H. Bayesian-based iterative method of image restoration. J. Opt. Soc. Am. 1972, 62, 55–59.
16. Lucy, L.B. An iterative technique for the rectification of observed distributions. Astron. J. 1974, 79, 745.
17. Rai, M.R.; Anand, V.; Rosen, J. Non-linear adaptive three-dimensional imaging with interferenceless coded aperture correlation holography (I-COACH). Opt. Express 2018, 26, 18143–18154.
18. Ignatius Xavier, A.P.; Arockiaraj, F.G.; Gopinath, S.; John Francis Rajeswary, A.S.; Reddy, A.N.K.; Ganeev, R.A.; Singh, M.S.A.; Tania, S.D.M.; Anand, V. Single-shot 3D incoherent imaging using deterministic and random optical fields with Lucy–Richardson–Rosen algorithm. Photonics 2023, 10, 987.
19. Rosen, J.; Anand, V. Incoherent nonlinear deconvolution using an iterative algorithm for recovering limited-support images from blurred digital photographs. Opt. Express 2023, in press.
20. Cheeseman, S.; Truong, V.K.; Vongsvivut, J.; Tobin, M.J.; Crawford, R.; Ivanova, E.P. Applications of synchrotron-source IR spectroscopy for the investigation of insect wings. In Synchrotron Radiation; IntechOpen: London, UK, 2019.
21. Diffraction Limited Resolution (Theoretical). Available online: https://asuserwiki.atlassian.net/wiki/spaces/UO/pages/443449385/Diffraction+limited+resolution+theoretical (accessed on 10 October 2023).
22. Han, M.; Smith, D.; Ng, S.H.; Katkus, T.; John Francis Rajeswary, A.S.; Praveen, P.A.; Bambery, K.R.; Tobin, M.J.; Vongsvivut, J.; Juodkazis, S.; et al. Single Shot Lensless Interferenceless Phase Imaging of Biochemical Samples Using Synchrotron near Infrared Beam. Biosensors 2022, 12, 1073.
Figure 1. Schematic of the FTIRm system in transmission mode. BS—beam splitter, M—mirror, L—lens, MSP—Motorized sliding plate, A—aperture, MIR—mid-infrared. The synchrotron beam is extracted using the gold coated mirror with a central slit and enters the FTIR spectrometer and then the IR/VISIBLE transmission microscope. The image of the beam entering the FTIRm is shown with a dotted blue line. The normalised intensity distribution for different wavelengths is shown. The scanning mode is shown for a pinhole.
Figure 2. Schematic of the LRRA. ML—maximum likelihood; OTF—optical transfer function; n—number of iterations; NLR—non-linear reconstruction; ⊗—2D convolution operator; O—object; IO—object intensity; IO′—estimated object intensity; F—Fourier transform; F*—complex conjugate operation following a Fourier transform; F⁻¹—inverse Fourier transform. IRn and IR(n+1) are the nth and (n+1)th solutions; IO was used as the initial guess solution. '~' denotes the Fourier transform of a variable. α and β are tuned between −1 and +1 to obtain the optimal reconstruction.
Figure 3. Experimental 2D imaging results. Recorded images of the silk fibre (a,e). The reconstructed images of (a,e) are (b,f). Magnified versions of sections of (a,b,e,f) are shown in (c,d,g,h), respectively. The image of the IPSF is given as an inset in the left-most part of the figure. (i) The normalised absorbance of the silk fibre. The scale bar is 100 µm.
Figure 4. Experimental 3D imaging results. (a) Image of the IPSF, (b) recorded intensity image of the two-plane object, (c) reconstructed image using the LRRA, (d) reference image of the two pinholes recorded in the IR channel using the single pixel sensor, and (e) reference image of the two pinholes recorded in the visible channel using the high-resolution visible camera, when the imaging condition is satisfied. The scale bar is 100 µm.
Figure 5. Experimental imaging results. (a) Image of the IPSF, and (b) recorded intensity image of the cross-shaped object. Reconstruction results using the (c) matched filter, (d) phase-only filter, (e) NLR (α = 0, β = 0.7), (f) LRA (n = 100), (g) LRRA (α = 0.4, β = 1 and n = 10) and (h) Wiener filter (σ = 1000). The scale bar is 50 µm.
Figure 6. Simulation results. (a) Test object. (b) IPSF. (c) IO. Reconstruction results using (d) matched filter, (e) phase-only filter, (f) NLR (α = 0, β = 0.5), (g) LRA (n = 100) and (h) LRRA (α = 0, β = 0.95, n = 9).