2.1. System Setup
The flow cytometer used a lens-less imaging technique based on an in-line holographic configuration; its overall structure is shown in Figure 1.
As shown in
Figure 1, the flow cytometer comprised a greyscale CIS (Aptina MT9P031, Micron Technology, Boise, ID, USA), a PDMS microfluidic chip and a blue light-emitting diode (LED) light source (central wavelength of ~465 nm). The pixel size of the CIS was 2.2 μm, the effective resolution was 2592 H × 1944 V (5.7 mm × 4.2 mm), and the imaging area reached ~24.4 mm². To obtain holographic diffraction patterns on the surface of the CIS, the blue LED was located 5 cm above the surface of the image sensor. In addition, a plate with a pinhole (diameter of 0.1 mm) was placed in front of the LED to obtain a near-coherent light source. To exploit the large FOV of the CIS, an S-type micro-channel was designed, from which the volume of a liquid sample can easily be determined and the maximum possible number of cells can be counted in a frame. Moreover, the concentration of cells in a specimen could be calculated accurately, as with a classic cell-counting chamber. We bonded a PDMS channel to a piece of thin glass to obtain a microfluidic chip that captures the holograms of cells (the diffractive shadow images of the cells), and fixed the microfluidic chip on the surface of the CIS. We briefly introduce the fabrication process of the microfluidic chip below.
A photoresist (SU-8 2015, Microchem, Westborough, MA, USA) and a silicon wafer (4 inches in diameter) were used to fabricate the positive mold. Three millilitres of photoresist were dispensed at the centre of the wafer, and the photoresist film was 30 μm thick after spin coating at 1500 r/min for 15 s. Then, the silicon wafer was pre-baked for 15 min at 95 °C. The pre-designed channel photolithography plate was used for a 125 s exposure on the lithography machine. Next, the exposed wafer was post-baked for 3 min at 95 °C and developed for 3 min. Then, we poured 30 g of liquid PDMS onto the positive mold and placed it in an oven for 40 min at 95 °C to solidify. The solidified PDMS layer and a piece of thin glass were bonded by a vacuum plasma technique. Finally, inlet and outlet holes were drilled in the PDMS layer to complete the microfluidic chip.
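As noted above, the fixed channel geometry lets the cell concentration be computed as in a counting chamber. A minimal sketch of that count-per-volume calculation (the function name and all numeric values below are illustrative assumptions, not measurements from this work):

```python
# Estimate the cell concentration from a counted frame, as in a classic
# counting chamber. All numbers here are illustrative assumptions.
def concentration_per_ml(cell_count, area_mm2, height_um, dilution):
    """Cells per mL of the undiluted specimen (sketch)."""
    volume_ul = area_mm2 * (height_um / 1000.0)   # mm^2 * mm = mm^3 = uL
    return cell_count / (volume_ul * 1e-3) * dilution  # uL -> mL

# Hypothetical example: 2000 cells counted over a 24.4 mm^2 FOV with a
# 30 um channel height at a 1:400 dilution.
c = concentration_per_ml(2000, 24.4, 30.0, 400)
```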
However, since a microfluidic chip was used rather than a static chamber, a cell sample could be detected continuously, similar to a flow cytometer, as shown in Figure 2.
Next, we prepared an experimental platform to obtain the features and parameters of the proposed system. We found that the exposure time of the image sensor in this system was greater than 400 ms. Unfortunately, any movement of cells in the sample during the exposure time causes motion blur. Therefore, instead of having cells flow through the detection area at high speed, we had a large number of cells pass through the exposure region at one time. In other words, the system utilized the large FOV of the CIS to obtain a large number of cell images from each frame. To avoid the motion blur caused by cell flow, we periodically controlled the flow velocity of the specimen. There was only one inlet and one outlet in the micro-channel, ensuring that flushing all tested cells out of the micro-channel and injecting new cells took only a short time. To leave sufficient processing time for the image processing algorithm, the new cells were injected into the micro-channel during the image processing period. All tested cells then flowed out of the micro-channel, after which the flow stopped and the cell images were captured by the image sensor. With several repetitions, the device was able to collect the maximum possible number of cell signatures to improve the accuracy of the analysis.
2.3. Reconstruction of Lens-Less Holographic Images
The lens-less imaging technique utilizes an in-line holographic structure proposed by Gabor [
33] to reconstruct the image of the cell plane. The lens-less holographic imaging system is mainly composed of a blue LED light source, a pinhole plate, a microfluidic chip and a CIS, as shown in
Figure 3.
Because of the small size of blood cells (~2–15 μm), the shadows of the blood cells on the surface of the CIS are diffraction images. Owing to diffraction, the shorter the wavelength of the light source, the higher the spatial resolution of the microscopic image. Among the most commonly used single-frequency LED light sources, blue has the shortest wavelength, so we chose a blue LED as the light source. For convenience, we assumed that the cell plane was the object plane and that the surface of the CIS was the image plane. The distance from the pinhole to the object plane was
d1, and the distance from the object plane to the image plane was d (d1 >> d). According to the angular spectrum theory of diffraction, we can reconstruct an image of the object plane by recording the image plane. We assumed that the transmittance of the sample was O(x, y) and that the complex amplitude of the wavefront through the object plane was as below:
Here, the image plane is assumed to be the plane z = 0, and the object plane is assumed to be the plane z = d. According to the Rayleigh–Sommerfeld diffraction theory, the transfer function of light waves between two planes separated by a distance d is defined as:
Here, ε and η denote the frequency-domain coordinates corresponding to the spatial coordinates x and y, and λ is the wavelength of the light source. According to the transfer function, we can obtain the complex amplitude of the image plane:
where the optical forward and backward propagation operators carry out a fast Fourier transform and an inverse fast Fourier transform, respectively, and act as convolution operators. d denotes the distance of light propagation, in other words, the distance between the object plane and the image plane; + and − denote forward and backward propagation along the z-axis, respectively. The light intensity in the holographic plane recorded by the image sensor is the square of the amplitude of the light wave:
In Equation (4), U0(x, y) is the complex amplitude of the actual light wave in the image plane, but the image sensor can record only the light intensity, I0; the phase is discarded. Normally, the image sensor's acquisition of the holographic-plane light intensity is a linear process, so the light intensity information collected by the image sensor can be expressed as:
The amplitude of the light wave in the object plane can be obtained by back-propagating the recorded image a distance d from the image plane:
With Equations (1)–(6), we obtain Equation (7):
In Equation (7), the first term is the direct current (DC) component; the second term is the focus image; the third term is the holographic image, which is the focus image back-propagated a distance of 2d; and the fourth term is the intermodulation term. The second and third terms constitute the twin image, which still appears after the forward-transfer reconstruction of the diffraction plane and is difficult to separate. In fact, the twin-image phenomenon, which is caused by the absence of the light phase, is a major problem in in-line holographic systems. In addition, we used micro-bead images obtained with a 10× objective lens to simulate the twin-image problem (Figure 4).
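The forward and backward propagation used throughout Equations (2)–(7) is the standard angular-spectrum method, which can be sketched in a few lines of NumPy. The function name is ours, and the example parameters simply reuse the system values quoted in this section (465 nm wavelength, 0.875 mm propagation distance, 2.2 μm pixels):

```python
import numpy as np

def angular_spectrum_propagate(u, d, wavelength, dx):
    """Propagate a complex field u over distance d (forward if d > 0,
    backward if d < 0) with the angular-spectrum transfer function."""
    ny, nx = u.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(nx, dx), np.fft.fftfreq(ny, dx))
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    arg = np.clip(arg, 0.0, None)   # drop evanescent components
    H = np.exp(1j * 2 * np.pi * d / wavelength * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(u) * H)

# Example: propagate a plane wave; the transfer function is unit
# magnitude, so the field intensity is preserved.
u0 = np.ones((128, 128), dtype=complex)
u1 = angular_spectrum_propagate(u0, 0.875e-3, 465e-9, 2.2e-6)
```

Because |H| = 1 for the propagating components, back-propagating with −d inverts the operation, which is what the reconstruction in Equation (6) relies on.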
According to Gabriel Koren’s research [32], we can use a single diffraction pattern to reconstruct a focused image of the object plane and suppress the twin-image phenomenon. In our proposed algorithm, only a holographic diffraction image and a cell-free background image were needed to reconstruct the phase and obtain a focused image of the object plane. The general steps were as follows:
Step 1: Using the square root of the light intensity and an initial phase (generally 0), back-propagate the diffraction pattern of the image plane to the object plane with the transfer function to obtain a focused image. However, this initial estimate of the object plane suffers seriously from the twin-image phenomenon, so the following steps are needed to suppress the twin image. The main operation of the reconstruction process is similar to frequency-domain filtering in digital image processing: the transfer function of the filter is given by Equation (2), and the reconstruction algorithm for the object plane is shown below:
Step 2: The region information of the object is extracted from the preliminary estimated object image and used as the object-plane constraint. Classic image segmentation algorithms, such as gradient-based boundary extraction and threshold segmentation, can be used to find the object-plane constraint. Because of the low signal-to-noise ratio (SNR) of the image captured by the CIS, threshold segmentation is more reliable. The threshold is 0.34 in this manuscript; in other words, the grey value of the cell regions on the object plane is usually less than 0.34.
Step 3: The cell region is the C region, and the background is the non-C region. Through an iterative algorithm, the cell regions approach the real image, and the twin-image phenomenon is weakened on the object plane. The algorithm is
where D(x, y) is the background image, which is obtained by the image sensor without cells, and m is given by
Step 4: The new complex amplitude of the image is obtained by the forward transfer operation. The phase of the newly calculated complex amplitude is retained, and the amplitude is replaced by the original known image plane amplitude. This process is called the image plane constraint:
The iteration is completed by repeating the third and fourth steps and converges after 5–6 iterations. To recover the missing phase, the algorithm iterates between the two planes (object plane and image plane) through the amplitude and is made convergent by the object-plane constraint (Equation (9)) and the image-plane constraint (Equation (11)). However, the algorithm converges rapidly in the first several iterations, after which convergence almost stagnates. Furthermore, there is a large error in the estimate of the initial phase when the distance between the object plane and the image plane deviates in an actual system. Therefore, the classic phase recovery algorithm must be improved for an actual system. This manuscript proposes an initial-phase-constraint algorithm based on the classic algorithm, in which Equation (8) is replaced by Equation (12):
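Steps 1–4 above form a support-constrained phase-retrieval loop. The following sketch shows one plausible implementation under our own assumptions: we approximate the background field D(x, y) by the square root of a background intensity image, and the variable names and structure are ours, not the authors' code.

```python
import numpy as np

def propagate(u, d, wl, dx):
    """Angular-spectrum propagation over distance d (backward if d < 0)."""
    ny, nx = u.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(nx, dx), np.fft.fftfreq(ny, dx))
    arg = np.clip(1 - (wl * FX) ** 2 - (wl * FY) ** 2, 0, None)
    return np.fft.ifft2(np.fft.fft2(u) * np.exp(2j * np.pi * d / wl * np.sqrt(arg)))

def reconstruct(intensity, background, d, wl, dx, n_iter=6, thr=0.34):
    """Support-constrained phase retrieval (a sketch of Steps 1-4)."""
    amp = np.sqrt(intensity)              # measured image-plane amplitude
    u_img = amp.astype(complex)           # Step 1: zero initial phase
    for _ in range(n_iter):
        u_obj = propagate(u_img, -d, wl, dx)   # back to the object plane
        support = np.abs(u_obj) < thr          # Step 2: assumed cell regions
        # Step 3: object-plane constraint -- outside the support, replace
        # the estimate with the (assumed) cell-free background field.
        u_obj = np.where(support, u_obj, np.sqrt(background))
        u_img = propagate(u_obj, d, wl, dx)    # forward to the image plane
        # Step 4: image-plane constraint -- keep the new phase but restore
        # the measured amplitude.
        u_img = amp * np.exp(1j * np.angle(u_img))
    return propagate(u_img, -d, wl, dx)        # final object-plane estimate
```

On a featureless field (uniform intensity and background), the loop leaves the unit-amplitude plane wave unchanged, which is a quick sanity check that the constraints only act on the cell regions.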
In general, there is no linear relationship between the amplitude and phase of a complex number. However, the phase change of near-coherent light passing through a cell is related to the cell transmittance, and the cell transmittance is also expressed in the amplitude. Therefore, there is a weak correlation between amplitude and phase. Using this property, we can estimate the initial phase of the iteration from the transmittance. With the initial phase constraint, the iteration converges faster, the reconstruction precision is higher, and the robustness to interference is stronger.
To test the performance of the algorithm, we used a dyed leucocyte image captured through a 20× objective lens microscope to perform a simulation. Using Equations (1)–(5) to establish a diffractive degradation model, we obtained the diffraction pattern of the leucocyte. To replicate our flow cytometer, we chose the same parameters as the actual system for the simulation: the central wavelength of the light source was 465 nm, the distance between the object plane and the image plane was 0.875 mm, and the pixel size was 2.2 μm. The iterative algorithm without the initial phase constraint was compared with the iterative algorithm with the initial phase constraint, and the result is shown in
Figure 5.
To compare the performance of the two methods, we calculated the root-mean-square error (RMSE) between the reconstructed object-plane image and the original image. Finally, the proposed algorithm was used to reconstruct the cell image on the object plane, which was compared with the original image to calculate the RMSE:
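A standard definition of the RMSE between two images, for reference (the authors' exact normalization is not reproduced in the text):

```python
import numpy as np

def rmse(reconstructed, original):
    """Root-mean-square error between two equally sized images."""
    diff = np.asarray(reconstructed, float) - np.asarray(original, float)
    return np.sqrt(np.mean(diff ** 2))

# Small illustrative example: mean squared difference (0 + 1)/2 = 0.5.
err = rmse([[0.0, 1.0]], [[0.0, 0.0]])
```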
According to the distance between the object plane and the image plane, we conducted two groups of comparative experiments: the first without deviation, and the second with a 20% deviation. The RMSEs of the two methods were calculated in Matlab (Version 2016b, MathWorks, Natick, MA, USA) and are shown in
Figure 6.
In
Figure 6, the ‘phase constraint’ is our proposed method, and the ‘non-phase constraint’ is the classic method. The proposed method has a faster convergence rate and a lower error rate, making it more conducive to counting and analyzing cells. As shown in
Figure 5 and
Figure 6, comparing the two groups under the two methods shows that the iteration method with the initial phase constraint converged faster. In the case of a 20% distance deviation, which strongly affects an actual system, the proposed method was able to restore the cell image, whereas the original method could not restore it effectively. Moreover, when all the parameters were accurate, the proposed method converged faster, and the RMSE of the image reconstruction was smaller. The results in
Figure 6 show that our proposed method can greatly reduce the time consumption of the image processing algorithm and provide a guarantee for the real-time implementation of the system.
Finally, we used a frame image of whole blood cells captured by the lens-less flow cytometer to test the computation time. We used Matlab to reconstruct the holographic image, and the hardware was a graphics workstation (Xeon E5-2600, 16 GB DIMM DDR4, Intel, Santa Clara, CA, USA). One iteration took about 2 s with the classic method of phase-iterative reconstruction, and the proposed method took ~0.1 s longer per iteration than the classic method. However, the proposed method needed only 5 iterations, whereas the classic method needed 10 iterations to achieve the same reconstruction effect. Therefore, the time consumed by the proposed method was ~12.82 s, versus ~24.42 s for the classic one; in other words, the proposed method reduced the computational time by ~48%. In general, the two algorithms have almost the same computational complexity: our algorithm only adds one phase constraint to the first image reconstruction, and this constraint costs only ~0.19 s.
2.4. Blood Cell Analysis Method
In the on-chip flow cytometer, the blood cells flow in the micro-channel above the image sensor, and their holographic diffraction images are projected onto the sensor surface by the near-coherent light source. To reduce the cost and volume of the device, an ordinary blue LED with limited light intensity was used. The light on the plane of the cells is further weakened because it passes through a pinhole. Therefore, the exposure time of the image sensor needs to be longer than 400 ms to capture a sufficiently bright hologram. If the blood cells move during the exposure time of the CIS, there is motion blur, as shown in
Figure 7a. To solve this problem, we used a pulse injection method, which is shown in
Figure 7b.
In Figure 7, t1 is the exposure time, and t2 is the injection time. This process can be controlled by a micro-pump. Owing to the high precision of micro-pump control, the injection time and the stationary time of the blood cells can be fixed, so the algorithm can be run with fixed parameters. Experiments confirmed that, in micro-pump mode, accurate cell-image collection and injection of new samples can be ensured.
In addition, instead of the micro-pump method, a hand-push mode can be used to further reduce the cost and volume of the device. In the hand-push mode, the motion state of the cells in the microfluidic chip is detected by the image processing algorithm, using the RMSE between two frames. The system acquisition accuracy can generally be guaranteed with t1 > 5 s and t2 > 20 s.
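The inter-frame RMSE check described above can be sketched as follows; the stillness threshold used here is a hypothetical value, not one reported in the text:

```python
import numpy as np

def is_static(frame_a, frame_b, threshold=0.01):
    """Declare the sample static when two consecutive frames differ by
    less than `threshold` RMSE (the threshold is an assumed value)."""
    diff = np.asarray(frame_a, float) - np.asarray(frame_b, float)
    return np.sqrt(np.mean(diff ** 2)) < threshold

# Identical frames -> no motion detected, so capture can proceed.
still = is_static(np.zeros((4, 4)), np.zeros((4, 4)))
```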
In addition, there are two important problems related to cell overlap. The first is cell overlap in the holographic image. As mentioned earlier, the raw image captured by the lens-less platform is a holographic image, so a cell's diffraction image is ~4 times larger than its focused image. This inevitable overlap of cell images was solved by the phase-iterative reconstruction algorithm. The other problem is physical overlap: because of the 3D structure of the micro-channel, cells at different heights can overlap, producing overlapping shadow images. This problem is difficult to solve with digital image processing algorithms, so we used a diluted cell sample instead. According to our experiments, a 1:400 dilution ratio is acceptable for blood cells, and cell overlap is almost impossible at a 1:1000 dilution. Considering the counting speed, we chose a 1:400 dilution.
The microfluidic chip was mounted above the CIS so that we could easily obtain the background image without cells. We then injected a fluid sample of cells into the micro-channel to record the holograms and reconstruct the focused images of the cells. The location and size of the cells were determined by threshold segmentation on images with the background interference removed. In our experiment, the flow velocity was 100 μL/min. Because of the small volume of the micro-channel (0.246 μL), the digital injection pump was able to replace all the cells in the channel in less than 1 s. However, it took ~15 s to 20 s for the cells in the fluid to become static. Fortunately, we were able to use this time to process the cell image: since the CIS has about 5.04 million pixels, the computer took ~16 s to process the full-resolution image.
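The claim that the pump can replace all the cells in the channel in under a second follows directly from the stated channel volume and flow rate:

```python
# Time to flush the micro-channel at the stated flow rate
# (values from the text: 0.246 uL channel volume, 100 uL/min flow).
channel_volume_ul = 0.246
flow_rate_ul_per_min = 100.0
flush_time_s = channel_volume_ul / flow_rate_ul_per_min * 60.0
# flush_time_s is about 0.15 s, consistent with "less than 1 s".
```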
The on-chip flow cytometry capability of this method, together with its ease of use, may offer a highly precise and lower-cost alternative to existing whole-blood, urine, and plankton analysis tools, especially for point-of-care biological and medical tests.