1. Introduction
Structural properties of an object, such as curvature, micro-scale shape, and layer depth, are carried by the phase of the light field. However, because light waves oscillate at frequencies far beyond the response of electronic sensors, general detectors such as charge-coupled devices (CCDs) record only the intensity. The phase lost in the detection process can be retrieved by reconstruction methods, which fall mainly into three categories: interferometry [1,2], the transport of intensity equation (TIE) [3,4], and iterative phase recovery algorithms (IPR) [5,6]. Interferometry introduces a reference beam into the imaging system, and the strict registration required between the object beam and the reference beam places high demands on the stability of the experimental environment. TIE solves for the 2D phase distribution from the intensity difference of the target beam along the optical axis; however, it is not suitable for general complex wavefronts. Compared with the former two methods, IPR relies on the strength of the reconstruction algorithm combined with a simple imaging system. Considerable progress in IPR has driven applications in super-resolution imaging [7,8], wavefront sensing [9,10], and optical encryption [11]. IPR was first proposed by Gerchberg and Saxton [12]. In this method, the target wavefront propagates alternately between the object plane and the imaging plane, and a corresponding amplitude constraint is imposed on each plane. The process is repeated until the difference between the calculated result and the desired intensity distribution reaches an acceptable error. However, this method uses only a single recorded light field and is very sensitive to the initial phase guess, resulting in poor imaging quality and slow reconstruction, which limits its further application.
To tackle this problem, multiple-intensity phase retrieval was proposed. As a representative imaging modality, the ptychographical iterative engine (PIE) [13] can reconstruct the complete complex amplitude distribution of a sample by laterally scanning a probe over multiple overlapping regions. The large amount of data redundancy ensures the convergence and robustness of the PIE technique. However, PIE requires several tens of images to be captured, takes a long time to record, and is therefore not suitable for detecting fast-moving or changing samples. Unlike lateral scanning, single-beam multi-intensity reconstruction (SBMIR) records a smaller number of diffraction intensities along the optical axis and remains valid for reconstructing wavefronts. SBMIR is a lower-cost, more compact, and procedurally simpler method, and it has been successfully applied in coherent diffraction imaging [14], lens-free imaging [15,16], etc.
In SBMIR, the convergence speed is determined by the intensity differences among the captured diffraction patterns. Hence, it is difficult to recover an object with a slowly varying wavefront. To deal with this problem, many methods have been proposed, such as speckle illumination [17,18,19], where a diffuser is placed upstream of the sample to introduce a speckle field that enhances the diversity of the axial measurements. Similar methods, such as microlens array modulation [20], multimode optical fiber illumination [21], and spherical wave illumination [22,23], were proposed to achieve rapid changes of the diffraction field within a short distance. In addition, algorithmic improvements such as disordered wavefront propagation [24], non-equal-interval propagation [25], adaptive support [26], and relaxed constraints [27] were also proposed to overcome convergence stagnation.
In this paper, an enhanced single-beam multiple-intensity phase retrieval method is proposed. Instead of plane wave illumination, holographic illumination is used to provide considerable intensity variation in the axial direction, which is beneficial for stable and unique phase retrieval without problems of ambiguity or stagnation. The idea of holographic illumination is inspired by the principle of 3D holographic projection [28,29,30], which can generate multi-plane intensity distributions from a single hologram.
In this work, several holograms are designed to generate different types of images in order to compare their capability of reconstructing the complex amplitude light field. The normalized cross-correlation (NCC) is introduced to guide the selection of images for generating holograms. The effects of various parameters on the reconstruction accuracy and speed of the proposed method are also investigated. Simulations show that our method can successfully recover both rough and smooth objects. In experiments, amplitude and phase objects are used to verify the method.
2. Principle
Figure 1 shows the imaging system of the phase retrieval method with holographic illumination. A collimated incident beam illuminates the hologram, and the exiting wave propagates forward to modulate the sample. A CCD camera is first located downstream of the sample at a distance z0. After one recording, the camera is moved by a fixed interval to capture the next diffraction pattern. These measurement steps are repeated until the nth pattern is recorded. Using an iterative phase retrieval algorithm, the complex amplitude of the sample can be reconstructed from the n collected diffraction intensities. The function of the hologram is also illustrated at the top of Figure 1. When a laser beam passes through this hologram, a series of images is generated at certain distances. It is worth noting that the positions of the n projected images coincide with the n recording planes one by one, which introduces significant intensity changes in the recorded diffraction patterns of the sample.
In order to produce different types of images, two hologram design methods are adopted. One is the random superposition method [31]: the target amplitude in each plane is multiplied by a random phase and then back-propagated to the hologram plane. This technique is well suited to generating a series of simple images composed of points or lines, such as Group 1 and Group 2 in Figure 3a. When a hologram is designed to produce complicated images like Group 3, another method called noniterative projection (NIP) should be employed [32]. In this method, the estimated phase hologram is propagated through each plane one by one, and a relaxed constraint is imposed on the estimated field. The wave-field then propagates backward through each plane using the same procedure. The final hologram is obtained from the phase of the wave-field at the hologram plane. The diffraction propagation above is implemented using the angular spectrum method:
\[
U_n'(x,y)=\mathcal{F}^{-1}\left\{\mathcal{F}\left\{U_n(x,y)\right\}\exp\left[j\frac{2\pi}{\lambda}Z\sqrt{1-\left(\lambda f_x\right)^2-\left(\lambda f_y\right)^2}\,\right]\right\}
\]
where F{} and F−1{} denote the Fourier transform operation and its inverse, respectively, λ is the wavelength of the incident laser beam, fx and fy are the spatial frequencies, Un(x, y) is the wave-field to be propagated, Un′(x, y) is the propagated wave-field, and Z is the diffraction distance.
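For reference, a minimal NumPy sketch of this propagation operator (written Prop{U, Z} below) is given here; the function name, default sampling parameters, and the clipping of evanescent components are our own illustrative choices rather than part of the original implementation.

```python
import numpy as np

def prop_angular_spectrum(U, Z, wavelength=658e-9, dx=7.4e-6):
    """Propagate a complex wave-field U over a distance Z (in meters)
    using the angular spectrum method; dx is the sampling interval."""
    ny, nx = U.shape
    fx = np.fft.fftfreq(nx, d=dx)          # spatial frequencies along x
    fy = np.fft.fftfreq(ny, d=dx)          # spatial frequencies along y
    FX, FY = np.meshgrid(fx, fy)
    # Argument of the square root in the transfer function; evanescent
    # components (negative argument) are suppressed by clipping to zero.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.clip(arg, 0.0, None))
    H = np.exp(1j * kz * Z)                 # angular spectrum transfer function
    return np.fft.ifft2(np.fft.fft2(U) * H)
```

A negative Z performs back-propagation, which is used in steps (4) and (5) of the procedure below.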
With the known hologram distribution Holo(x,y) and the recorded diffraction intensities of the sample I1, I2,…, In, the phase retrieval process can be conducted using the following procedure. For simplicity, the wave propagation is shortened as Prop{U, Z}, where U represents the wave-field and Z represents the propagation distance.
- (1)
The plane wave passes through the hologram and propagates to the sample over a diffraction distance of z − z0. The initial guess of the sample is given by U0, with constant amplitude and phase. Then, the emitted wave-field from the sample can be expressed as US = U0 ∗ Prop{Holo, z − z0}.
- (2)
The transmitted wavefront US propagates forward to the first recording plane, where the complex amplitude can be written as U1 = Prop{US, z0}. The phase is kept and the amplitude is constrained by the square root of the measured intensity. The modified light field is defined as U1′ = I1^(1/2) ∗ exp[j(angle{U1})].
- (3)
The updated wave-field U1′ passes through each recording plane one-by-one and the same constraint is applied.
- (4)
The light field Un′ at the last recording plane is backpropagated to the sample plane over a distance of z0 + (n − 1) ∗ d. The estimated complex wave-field at the sample plane can be written as US′ = Prop{Un′, −[z0 + (n − 1) ∗ d]}.
- (5)
The modulation of the hologram can then be removed. The distribution of the sample is updated by U0′ = Prop{Un′, −[z0 + (n − 1) ∗ d]}./Prop{Holo, z − z0}, which completes one round of iteration (a minimal sketch of this loop is given below).
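The following is a condensed sketch of one round of the procedure above, assuming the prop_angular_spectrum function defined earlier, a hologram field holo, and the n recorded intensities; the variable names and structure are illustrative and not the authors' original code.

```python
import numpy as np

def sbmir_hi_iteration(U0, holo, intensities, z, z0, d):
    """One round of the multi-plane update with holographic illumination.
    U0:          current estimate of the sample transmission function
    holo:        complex field at the hologram plane
    intensities: list of n recorded diffraction intensities I1..In
    z:           hologram-to-first-projection-plane distance
    z0:          sample-to-first-recording-plane distance
    d:           interval between recording planes
    """
    n = len(intensities)
    illum = prop_angular_spectrum(holo, z - z0)   # illumination at the sample plane
    U = U0 * illum                                # step (1): exit wave of the sample
    U = prop_angular_spectrum(U, z0)              # step (2): to the first recording plane
    for i, I in enumerate(intensities):
        # amplitude constraint: keep the phase, impose the measured amplitude
        U = np.sqrt(I) * np.exp(1j * np.angle(U))
        if i < n - 1:                             # step (3): move to the next plane
            U = prop_angular_spectrum(U, d)
    # step (4): back-propagate from the last recording plane to the sample plane
    Us = prop_angular_spectrum(U, -(z0 + (n - 1) * d))
    # step (5): remove the hologram modulation to update the sample estimate
    return Us / illum
```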
The iteration is terminated when the difference between the desired target and the reconstructed result reaches an acceptable error, which is evaluated by the structural similarity index measure (SSIM), calculated as:
\[
\mathrm{SSIM}=\frac{\left(2u_Tu_R+c_1\right)\left(2\sigma_{T,R}+c_2\right)}{\left(u_T^2+u_R^2+c_1\right)\left(\sigma_T^2+\sigma_R^2+c_2\right)}
\]
where uT and uR are the mean values of the target and the reconstructed result, respectively, σT,R is the covariance between the target and the reconstructed result, σT and σR are the standard deviations of the target and the reconstructed result, respectively, and c1 and c2 are constants that prevent division by zero.
Peak signal-to-noise ratio (PSNR) is also introduced to evaluate the reconstructed image quality:
\[
\mathrm{PSNR}=10\log_{10}\left\{\frac{i_{\max}^2(x,y)}{\frac{1}{m\times m}\sum_{x=1}^{m}\sum_{y=1}^{m}\left[i(x,y)-i'(x,y)\right]^2}\right\}
\]
where m is the image size, i(x, y) is the target image, imax(x, y) is the maximal value of i(x, y), and i′(x, y) is the reconstructed image. Higher SSIM and PSNR values indicate better reconstruction quality.
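As an illustration, both metrics can be computed directly from the formulas above. The sketch below uses global image statistics for SSIM (rather than the windowed variant) and assumes images normalized to a common scale; the small constants are conventional defaults and are our assumption.

```python
import numpy as np

def ssim_global(target, recon, c1=1e-4, c2=9e-4):
    """Global-statistics SSIM following the formula in the text."""
    uT, uR = target.mean(), recon.mean()
    sT, sR = target.std(), recon.std()
    sTR = ((target - uT) * (recon - uR)).mean()   # covariance of target and reconstruction
    return ((2 * uT * uR + c1) * (2 * sTR + c2)) / \
           ((uT**2 + uR**2 + c1) * (sT**2 + sR**2 + c2))

def psnr(target, recon):
    """PSNR with the peak value taken from the target image."""
    mse = np.mean((target - recon) ** 2)
    return 10 * np.log10(target.max() ** 2 / mse)
```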
3. Simulation
To investigate the performance of the proposed method, two types of objects are used for simulation, as shown in Figure 2. One is a rough object with the “baboon” image as the amplitude and the “peppers” image as the phase. The amplitude is normalized and the phase is scaled to the range [0, π]. The object is sampled with 200 × 200 pixels and padded with zeros to form 300 × 300 pixels. The other is a smooth object, which consists of a constant amplitude of 300 × 300 pixels and a vortex phase of 200 × 200 pixels in the center. The topological charge of the optical vortex is set to two. The sampling intervals of the object plane and the recording planes are all 7.4 μm. The initial distance between the sample and the recording plane is z0 = 10 mm. The plane interval is set to d = 2 mm. The number of recording planes is n = 4. The wavelength of the laser is 658 nm.
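For concreteness, the smooth test object described above (constant amplitude with a centered vortex phase of topological charge two) could be generated as follows; the array sizes and padding mirror the stated parameters, while the helper name is our own.

```python
import numpy as np

def make_vortex_object(size=300, inner=200, charge=2):
    """Constant-amplitude object with a centered vortex phase of the
    given topological charge (200 x 200 vortex inside a 300 x 300 frame)."""
    y, x = np.mgrid[-inner // 2:inner // 2, -inner // 2:inner // 2]
    vortex = np.exp(1j * charge * np.arctan2(y, x))   # spiral phase, charge = 2
    obj = np.ones((size, size), dtype=complex)        # constant amplitude
    start = (size - inner) // 2
    obj[start:start + inner, start:start + inner] *= vortex
    return obj
```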
Firstly, three holograms are designed to produce three groups of images as different holographic illuminations (HI), in order to reveal the influence of the hologram target image selection on the phase retrieval capability. The images in Group 1 are simple and contain only one point at different positions. By contrast, the images in Group 2 and Group 3 are more complicated, with those in Group 2 containing more zero values. The distance between the hologram and the first projection plane is z = 100 mm. The sampling interval and resolution of the hologram are the same as those of the object. The obtained holograms and their reconstructed images are shown in Figure 3b.
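As a rough illustration of the random superposition design mentioned in Section 2, a phase-only hologram for n target amplitudes located at the projection planes could be sketched as below (assuming the prop_angular_spectrum function above); this is a simplified outline, not the authors' exact design code.

```python
import numpy as np

def random_superposition_hologram(targets, z, d):
    """targets: list of n target amplitude images (one per projection plane);
    z: hologram-to-first-plane distance; d: interval between planes.
    Returns a phase-only hologram."""
    rng = np.random.default_rng(0)
    field = np.zeros_like(targets[0], dtype=complex)
    for i, amp in enumerate(targets):
        # multiply each target amplitude by a random phase ...
        rand_phase = np.exp(1j * 2 * np.pi * rng.random(amp.shape))
        # ... back-propagate it to the hologram plane, and superpose
        field += prop_angular_spectrum(amp * rand_phase, -(z + i * d))
    return np.exp(1j * np.angle(field))               # keep only the phase
```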
When the vortex phase object is illuminated by these holograms, the recorded diffraction patterns are as presented in Figure 4. The three holographic illuminations are denoted HI-group1, HI-group2, and HI-group3. For comparison, the captured intensity distributions under plane wave illumination (PWI) are also given in Figure 4. As can be seen, under PWI the diffraction patterns show no obvious change. In contrast, a significant intensity difference is observed with the HI modes, since the diffracted light of the sample is superimposed with the modulated light from the hologram. The diffraction patterns of object 1 are not shown, owing to the larger, visually apparent intensity changes that exist in both the HI and PWI modes for the rough object.
The iteration convergence curves and the reconstructed results of the two objects are shown in Figure 5. It can be clearly seen that, as the number of iterations increases, the three proposed HI modes converge much faster than traditional PWI for both objects. After 100 iterations, PWI can only reconstruct the contour of the rough object and cannot recover the smooth vortex phase. Using HI, both objects are well retrieved. Among them, HI-group3 has the fastest convergence rate and the highest accuracy, followed by HI-group2 and then HI-group1. To quantitatively compare the results, Table 1 is presented. The mean normalized cross-correlation (MNCC) is introduced as a new parameter to represent the similarity of the recorded diffraction patterns of the sample, and it is defined by:
\[
\mathrm{MNCC}=\frac{1}{n-1}\sum_{i=1}^{n-1}\mathrm{corr2}\left(I_i,I_{i+1}\right)
\]
where Ii (i = 1, 2, …, n) are the recorded diffraction intensities and the operator corr2() calculates the cross-correlation. The smaller the value of MNCC, the more obvious the change in the diffraction intensity.
It can be found in Table 1 that the MNCC value of PWI is around 0.9, meaning the differences among the recorded intensities are very small. In contrast, the MNCC values of the three HI methods are all less than 0.5, indicating large differences among the recorded intensity maps. Moreover, HI-group3 has the smallest MNCC, which suggests that complex target images are more suitable for holographic illumination. The iteration time in Table 1 represents the number of iterations required to achieve convergence. For the rough object, traditional PWI needed 2687 iterations, while HI requires at most 246 iterations; the iteration speed is thus increased by more than ten times, and even 30 times with HI-group3. For the vortex phase, PWI cannot reconstruct the correct result because the differences among the recorded images are too small; the SSIM value of its reconstructed image is only 0.1661. In contrast, the SSIM values of the reconstruction results with the HI methods are all greater than 0.9. From the relationship of the above three parameters in Table 1, it can be concluded that a smaller MNCC value leads to a higher SSIM in fewer iterations, which proves that MNCC is a feasible measure of the diversity of the intensity variation. It also provides a numerical index for hologram design: holograms should be selected so as to yield a small MNCC value.
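A possible way to evaluate this index from a stack of recorded intensities is sketched below, assuming the adjacent-pair form of the definition above; np.corrcoef on flattened arrays reproduces the 2D correlation coefficient of MATLAB's corr2. This is an illustrative helper, not the authors' code.

```python
import numpy as np

def mncc(intensities):
    """Mean normalized cross-correlation of adjacent recorded intensities."""
    ncc = [np.corrcoef(a.ravel(), b.ravel())[0, 1]
           for a, b in zip(intensities[:-1], intensities[1:])]
    return float(np.mean(ncc))
```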
We also investigate the influence of the number of measurement planes on the reconstruction performance. Figure 6a,b show the phase convergence curves of object 1 and object 2, respectively, for different numbers of captured diffraction patterns, with HI-group3 adopted. It can be seen that when the number of measurements is two, the captured intensities cannot provide enough redundant information for the algorithm to converge. When the number of planes is three or more, convergence is ensured, and the more planes are used, the faster the convergence. In general, the convergence speed and imaging quality can be guaranteed by recording at least four planes. Figure 6c shows the phase convergence accuracy of object 1 under different illumination modes and different numbers of measurement planes after 100 iterations. With an increasing number of intensity measurements, the convergence accuracy improves. However, for plane wave illumination the SSIM remains very low. Increasing the number of planes further may be feasible, but it would increase the measurement time.
Figure 6e shows the difference between the reconstructed and the ideal phase of object 1 using HI-group3; the maximum error is 10⁻³ rad. In comparison, the maximum phase error with the PWI method, shown in Figure 6f, is 1.0134 rad. For the vortex phase object, the reconstructed phase error using PWI, shown in Figure 6h, is up to 2.9818 rad, while the error with HI-group3, shown in Figure 6g, is on the order of 10⁻⁹ rad, which proves the effectiveness of our method.
Subsequently, the influence of the measurement interval d on the reconstruction result is discussed. Four recording planes are used. Figure 7a,c show the phase convergence accuracy of object 1 and object 2 under different types of illumination at intervals of d = 1, 1.5, 2, 2.5, and 3 mm. When the measurement interval is greater than 2 mm, HI can reconstruct the phase accurately, while PWI needs more than 7 mm to achieve accurate reconstruction for object 1 and fails for object 2 at any interval. Figure 7b,d show the evolution of the reconstructed image and the SSIM value with measurement interval and iteration number using HI-group3. From left to right, as the number of iterations increases from 10 to 100, the reconstruction quality at each interval improves. Similarly, from top to bottom, as the measurement interval grows from 1 to 3 mm under the same number of iterations, the accuracy of the recovered phase also improves gradually. The rightmost column shows the reconstruction result of PWI for comparison; the images are blurry, indicating that our method can successfully recover the sample with fast calculation and short-distance measurement.
4. Experiments
To further prove the effectiveness of the proposed method, experiments are conducted with the imaging setup shown in Figure 8. A collimated laser beam with a wavelength of 658 nm from a parallel light tube is incident on the SLM (HOLOEYE PLUTO VIS, 1920 × 1080 pixels with a pixel size of 8 μm) after being shaped by an aperture and a polarizer. The computer-generated hologram described above is loaded on the SLM, and the modulated light from the SLM illuminates the sample through a beam splitting prism and then projects onto a CCD (HR16000CTLGEC, 8-bit, 4896 × 3248 pixels with a pixel size of 7.4 μm). The CCD is mounted on a translation stage and moved with an interval step of 2 mm. Four diffraction intensity patterns are recorded. The initial distance between the sample and the CCD is 90.3 mm, which is measured by the principle of grating diffraction [33]. The distance between the SLM and the CCD is 200 mm, which is calibrated by the position of the clearest first holographic projection plane.
A “rabbit” pattern is first used as the amplitude sample, and the reconstruction results are shown in Figure 9. In Figure 9a, as the number of iterations increases, the original method using PWI continues to suffer from image blur, and the improvement in image quality is not significant. With the proposed method using HI, the result after 60 iterations is visually better than that of the conventional method after 300 iterations. Meanwhile, the contour of the rabbit is clearly formed from the beginning with HI, and increasing the number of iterations effectively reduces the speckle noise. In order to quantitatively compare the two methods, the marked line profiles of the results of the two methods at 300 iterations are extracted and plotted in Figure 9b. It can be observed that the recovered result of the proposed method has a higher imaging contrast. In addition, the logarithm of the mean square error (LMSE) is adopted to evaluate the convergence performance of the two methods; it measures the difference between the square root of the recorded intensity and the retrieved amplitude on the first plane. The lower the LMSE value, the better the reconstruction quality. The LMSE curves of the two methods are shown in Figure 9c. It can be found that, as the number of iterations increases, the reconstruction error decreases with HI, whereas it is almost unchanged with PWI, which proves the effectiveness of our method.
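For clarity, the LMSE metric described above could be computed as in the short sketch below; the base-10 logarithm and the mean over pixels are our assumptions about the exact form.

```python
import numpy as np

def lmse(I1, U1):
    """Logarithm of the mean square error between the measured amplitude
    (square root of the recorded intensity I1 on the first plane) and the
    retrieved amplitude |U1| on that plane."""
    return float(np.log10(np.mean((np.sqrt(I1) - np.abs(U1)) ** 2)))
```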
Secondly, a quantitative phase resolution plate is tested to evaluate the ability to retrieve a smooth wavefront. The retrieved phase distributions of the sample with the traditional method and the proposed method are shown in Figure 10a,b, respectively. Group 4 is enlarged at the bottom, and the cross-sectional phase distributions of groups 4-2 and 4-3 are plotted in Figure 10c,d, respectively. It can be seen that the lines in group 4-1, with a resolution of 31.1 μm, can be distinguished with the original PWI method. In contrast, the lines in group 4-3, with a resolution of 24.8 μm, can be discerned with the proposed HI method. Moreover, in order to quantitatively evaluate the accuracy of the phase recovery, the lines recovered in groups 3-5 and 3-3 are selected and the corresponding phase values are converted into depth values. The phase value φ and the depth value h are related by φ = 2πh(1.4565 − 1)/λ, where the constant 1.4565 is the refractive index of the phase plate. The real depth measured by the step profiler (Stylus Profiler System, Dektak XT, Bruker, Karlsruhe, Germany) is 477 nm. The two sets of depth data are shown in Figure 10e,f. Several retrieved depths marked by the red dashed lines using HI are 454.4, 455.7, and 457.7 nm; the depth error is about 4.5%, while the depth recovered by PWI is obviously much smaller than the true value and its error is larger. It is worth noting that the reconstructed results can be influenced by noise in the experiment, such as Gaussian noise, speckle noise, and Poisson noise. In addition, the displacement accuracy of the motorized translation stage and the dark current noise of the CCD affect the imaging results. We will strive to overcome these limitations in future work.
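Inverting the relation above, the retrieved phase can be converted to depth as h = φλ/[2π(1.4565 − 1)]; a small helper illustrating this conversion (using the wavelength and refractive index stated in the text) is given below.

```python
import numpy as np

def phase_to_depth(phi, wavelength=658e-9, n_plate=1.4565):
    """Convert a retrieved phase value (rad) to a depth value (m)
    using phi = 2*pi*h*(n_plate - 1)/wavelength."""
    return phi * wavelength / (2 * np.pi * (n_plate - 1))
```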
5. Discussion
To date, a number of hologram design methods for generating multiple planar images have been reported, such as noniterative projection (NIP), sequential GS (SGS) [34], global GS (GGS) [35], IFTA [36], non-convex optimization [37], binary optimization [38], and others [39,40]. To reveal the influence of the hologram reconstruction accuracy on the multi-distance phase retrieval, three hologram design methods are employed and compared. The parameters are the same as in the simulation section, and the number of iterations in SGS and GGS is 100. Figure 11 shows the holographic reconstruction results. To the naked eye, the reconstruction results of NIP and SGS exhibit strong speckle noise, while the results of GGS are clearer and have higher image contrast. According to the NCC values, the reconstruction accuracy from low to high is NIP, SGS, and GGS.
Then, the above three holograms are used as the holographic illumination to reconstruct the sample (the vortex phase in Figure 2b). The iteration convergence curves are shown in Figure 12. It can be observed that all the methods make the reconstruction algorithm converge, and the convergence speeds differ only slightly. In order to quantitatively compare the reconstruction performance, the MNCC of the recorded intensities, the iteration time, and the SSIM of the reconstructed sample are listed in Table 2. The hologram designed by the GGS algorithm reduces the MNCC value of the recorded diffraction intensities to as low as 0.0590. This means that the differences among the intensities are obvious, which yields a rapid convergence rate: it takes only 117 iterations to converge, while NIP takes 140. It appears that using holograms with high reconstruction quality enables faster phase recovery; however, the improvement is slight. Therefore, general methods can be used to design holograms for holographic illumination.