Article

Full-Aperture Reflective Remote Fourier Ptychography with Sample Matching

Dayong Wang, Jiahao Meng, Jie Zhao, Renyuan Wang, Yunxin Wang, Lu Rong, Shufeng Lin and Ling Li

1 School of Physics and Optoelectronic Engineering, Beijing University of Technology, 100 Ping Le Yuan, Beijing 100124, China
2 Beijing Engineering Research Center of Precision Measurement Technology and Instruments, Beijing University of Technology, Beijing 100124, China
3 Beijing Institute of Space Mechanics & Electricity, 104 YouYi Road, Beijing 100094, China
4 Beijing Key Laboratory of Advanced Optical Remote Sensing Technology, 104 YouYi Road, Beijing 100094, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(22), 4276; https://doi.org/10.3390/rs16224276
Submission received: 24 September 2024 / Revised: 10 November 2024 / Accepted: 15 November 2024 / Published: 16 November 2024

Abstract

Fourier ptychography (FP) can break through the limitations of a single-aperture optical system and realize large field-of-view (FOV) and high-resolution (HR) imaging simultaneously by aperture synthesis in the frequency domain. The method has potential applications in remote sensing and space-based imaging. However, in previous remote FP systems, the aperture stop was generally set much smaller than the maximum aperture available from the adjustable diaphragm, so the imaging capability of the system was not fully exploited. In this paper, a full-aperture reflective remote FP is proposed, in which the optical aperture of the camera is set to the maximum allowed by the sample-matching condition, further improving the imaging resolution by exploiting the full capability of the system. Firstly, the physical model of remote FP is established using oblique illumination with a convergent spherical wave. Then, the sampling characteristics of the low-resolution (LR) intensity images are analyzed. Assuming diffraction-limited imaging, the aperture size of the optical system needs to match the sampling of the detector. An experimental setup with an imaging distance of 2.4 m is built, and a series of LR images of diffused samples, including a USAF resolution test target and a banknote, is collected by moving the camera, where the diameter of the single aperture is set to the maximum that matches the CCD pixel size at the camera's practical minimum F-number of 2.8. The high-resolution image is reconstructed by applying the iterative phase retrieval algorithm. The experimental results show that the reconstructed resolution is improved by a factor of 2.5. This verifies that remote FP with full aperture can effectively improve the imaging resolution using only the existing single-aperture optical system.

1. Introduction

With the continuous progress of space exploration and remote sensing technology, higher imaging resolution is increasingly required, and improving the spatial resolution of space-based telescopes has become an urgent mission. The angular resolution is given by θ = 1.22λ/D [1], where λ is the center wavelength and D is the diameter of the primary mirror. Increasing the diameter of a single aperture is one way to improve the spatial resolution. However, the manufacturing cost, the weight of the optics, and the difficulty of manufacturing and launching severely limit the maximum physical size of a single-aperture mirror [2]. To overcome these limitations, spatial-domain optical synthetic-aperture imaging methods have been investigated, such as segmented mirror technology [3,4] and optical synthetic-aperture interferometric imaging [5]. However, this type of method requires a high degree of co-phasing and confocality between the sub-apertures, and the resulting system is complex. With the development of advanced computational optical imaging [6,7], many super-resolution imaging techniques have been proposed [8,9,10] that provide new possibilities, among which Fourier ptychography (FP) is an important representative.
Fourier ptychography was first proposed for microscopic imaging [11]. A series of low-resolution (LR) intensity images is recorded by illuminating the sample with quasi-plane waves at different tilt angles, lighting the elements of an LED array one by one [12,13]. For the reconstruction, the Fourier spectra of the multiple low-resolution images are then merged in the frequency domain by an iterative algorithm to recover the high-resolution (HR) complex-amplitude image. FP overcomes the physical limitation of the numerical aperture of the microscope objective and uses non-mechanical scanning to expand the space-bandwidth product of the imaging system [14,15].
Subsequently, remote FP was developed. In 2014, FP was first applied to the macroscopic domain [16], where the Fourier spectrum plane of an object was obtained using far-field Fraunhofer diffraction in free space. A series of low-resolution images was acquired by camera scanning with a single aperture of approximately 3.1 mm and an overlapping ratio of 61% between adjacent sub-spectra. In total, 7 × 7 positions were acquired with a synthesized aperture of approximately 10 mm, enabling long-range imaging at 0.7 m. To further improve the imaging resolution, a multi-camera array scanning scheme was proposed with a single aperture of 2.3 mm to acquire images at an imaging distance of 1.5 m; in experiments, imaging resolution gains of 4×~7× were achieved for real scenes [17]. In 2017, a reflective imaging setup was first built to acquire the Fourier spectrum of a sample at a finite distance using convergent spherical wave illumination [18]. Imaging of a diffused object was achieved by camera scanning with a single aperture of 2.5 mm, where the overlapping ratio of adjacent sub-spectra was 76%, the synthetic aperture was about 15.1 mm, and the imaging distance was 1 m. The coherence of the laser light field and the effect of different material surface properties on remote FP imaging were then analyzed [19]; a single-aperture lens with a diameter of 60 mm was used to image coins and a USAF resolution test target at a working distance of 3.31 m, and the resolution improvement was 2×. To increase the imaging field-of-view (FOV), quasi-plane wave illumination of the sample was proposed, and a spade-poker sample at a distance of 15 m was imaged [20]. The single-aperture diameter was about 0.7 mm, the overlapping ratio of the sub-spectra was 70%, and 27 × 27 positions were acquired, with a synthesized aperture of about 18.4 mm. In addition to camera scanning, a laser-scanning scheme for reflective remote FP has been proposed [21], in which illumination of the sample at different tilt angles is achieved by moving the illumination laser, similar to the FPM scheme. In 2023, a diverging spherical wave was proposed to further extend the imaging FOV, realizing an imaging distance of 10 m, an equivalent aperture of 31 mm, and a FOV of 1 m × 0.7 m [22]. With the rapid development of deep learning in recent years [23,24,25], its application to FP has achieved remarkable results, reducing the requirements on the overlapping ratio while guaranteeing the imaging quality; this provides new possibilities for improving imaging efficiency and reducing computational cost [26,27]. However, the aperture diameters used in remote FP are generally set small, without considering sample matching, so the imaging capability of the system is not fully utilized.
In order to meet the needs of remote sensing and fully exploit the advantages of the FP method, this paper studies remote FP with full aperture. Firstly, the physical model under convergent spherical wave illumination with a small tilt angle is established. Then, the spatial-domain sampling characteristics are analyzed. It is pointed out that the size of the diaphragm of the optical system needs to be matched to the sampling of the CCD detector, and the methods of spectrum interception under the critical sampling condition are discussed. Finally, an experimental system is designed and built, in which the diameter of the single aperture is 35.71 mm. A standard reflective diffused USAF resolution test target and a banknote are used as the experimental samples to realize remote FP at 2.4 m. By fully utilizing the imaging capability of the existing system, the resolution is improved and the imaging fidelity is enhanced.

2. Principle

2.1. Imaging Theory of FP

The single-exposure imaging process of FP is generally regarded as a coherent imaging process. When applying the method to far-field imaging, the scanning acquisition must be performed on the Fourier spectrum plane of the object. From wave-optics theory, when a plane wave is incident on an object, its far-field Fraunhofer diffraction is equivalent to the Fourier transform of the object up to a scale factor; the optical-field distribution after long-distance free-space propagation is therefore given by the optical Fourier transform of the object. A camera placed in this far-field plane can thus capture different portions of the spectrum and record the corresponding LR intensity images. Under the thin-lens approximation, the frame of the camera lens is assumed to act as the aperture stop. When the camera lens is located in the Fourier spectrum plane of the object, its aperture determines the physical size of the sub-spectrum that can be transmitted in a single exposure, and a series of LR intensity images is recorded by moving the camera, where adjacent sub-spectra need to have a certain overlapping ratio.
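As a simple numerical illustration of this relation (a minimal sketch, not the authors' code; the object, grid size, and illumination are arbitrary assumptions), the far-field intensity of a coherently illuminated object can be approximated by the squared magnitude of its Fourier transform, and a camera aperture in that plane selects only a band-limited sub-spectrum:

```python
import numpy as np

# Hypothetical object: a small reflective rectangle on a 512 x 512 grid.
N = 512
obj = np.zeros((N, N))
obj[N // 2 - 8:N // 2 + 8, N // 2 - 24:N // 2 + 24] = 1.0

# Under plane-wave illumination, the Fraunhofer (far-field) pattern is the
# Fourier transform of the object up to a scale factor and a phase term,
# so the recorded far-field intensity is proportional to |FT{object}|^2.
far_field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(obj)))
intensity = np.abs(far_field) ** 2

# A camera lens placed in this plane passes only the sub-spectrum inside its
# aperture; moving the camera selects different sub-spectra of the object.
```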
Considering that the proof of principle is usually carried out in the laboratory, the free-space diffraction distance is limited, and the far-field Fraunhofer condition is not easy to satisfy. Therefore, a convergent spherical wave is used to illuminate the object so that the Fourier spectrum of the object can be obtained at a finite distance. A schematic diagram of the reflective FP system with oblique convergent spherical wave illumination is shown in Figure 1.
Assume that a spherical wave from a point source propagates a distance d_0 and is incident on a focusing lens L_1. The wave then passes through the lens and becomes a convergent spherical wave whose optical axis makes an angle α with the normal of the sample. The image of the point source is formed at a distance d = d_1 + d_2 behind the lens and is offset from the z axis by b in the negative y_0 direction. Here, d_1 is the distance between the lens L_1 and the center of the object plane, and d_2 is the distance between the center of the object plane and the point image. Therefore, combining the complex-amplitude reflection function of the object with the oblique illumination by the convergent spherical wave, the optical field immediately after the object plane can be expressed as follows:
U(x_0, y_0) = o(x_0, y_0) \cdot \exp\left[-\frac{jk}{2z_0}\left(x_0^2 + y_0^2\right)\right] \cdot \exp\left(j\frac{2\pi y_0 b}{\lambda z_0}\right)
where o(x_0, y_0) is the complex-amplitude reflection function of the object, k = 2π/λ is the wave number, and λ is the wavelength of the incident wave. The linear phase term is introduced by the tilted incidence of the spherical wave, with b = d_2 sin α and z_0 = d_2 cos α.
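A minimal numerical sketch of this object-plane field is given below (not the authors' code; the grid, field size, and random reflectance are assumptions, while the wavelength, distance, and tilt angle follow the experimental values quoted later):

```python
import numpy as np

# Geometry (wavelength, distance, and tilt angle as quoted in the experiments).
wavelength = 532e-9                  # m
alpha = np.deg2rad(13.0)             # tilt angle of the convergent illumination
z0 = 2.4                             # m, object-to-Fourier-plane distance
d2 = z0 / np.cos(alpha)              # object-to-point-image distance, z0 = d2*cos(alpha)
b = d2 * np.sin(alpha)               # lateral offset of the point image
k = 2 * np.pi / wavelength

# Assumed object-plane grid: a 5 mm field sampled on 512 x 512 points.
N, L = 512, 5e-3
x0 = np.linspace(-L / 2, L / 2, N)
X0, Y0 = np.meshgrid(x0, x0)

# Assumed complex reflectance o(x0, y0): unit amplitude with a random
# rough-surface phase, standing in for a diffused sample.
rng = np.random.default_rng(0)
o = np.exp(1j * rng.uniform(-np.pi, np.pi, (N, N)))

# Equation (1): the reflectance multiplied by the oblique convergent spherical
# wave, i.e. a quadratic (converging) phase and a linear (tilt) phase.  Note
# that on this coarse illustrative grid the tilt phase is under-sampled; the
# field is built only to make the structure of the equation explicit.
U = o * np.exp(-1j * k / (2 * z0) * (X0 ** 2 + Y0 ** 2)) \
      * np.exp(1j * 2 * np.pi * Y0 * b / (wavelength * z0))
```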
When this convergent spherical wave illuminates the sample, the reflected optical wave carrying the information of the sample propagates a distance z_0 to reach the camera lens L_2. Here, it is assumed that the aperture P of the camera coincides with the pupil plane. Then, the wave propagation from the object plane to the pupil plane can be described by the Fresnel diffraction integral in its single-Fourier-transform form, where the quadratic-phase factor inside the integral cancels the quadratic-phase factor produced by the focusing lens L_1 in Equation (1). The complex-amplitude distribution on the pupil plane is thus obtained as:
U_f(x_f, y_f) = \frac{\exp(jkz_0)}{j\lambda z_0} \cdot \exp\left[\frac{jk}{2z_0}\left(x_f^2 + y_f^2\right)\right] \times \mathcal{F}\left\{ o(x_0, y_0) \cdot \exp\left(j\frac{2\pi y_0 b}{\lambda z_0}\right) \right\}_{u = \frac{x_f}{\lambda z_0},\; v = \frac{y_f}{\lambda z_0}} = \frac{\exp(jkz_0)}{j\lambda z_0} \cdot \exp\left[\frac{jk}{2z_0}\left(x_f^2 + y_f^2\right)\right] \times O\left(\frac{x_f}{\lambda z_0}, \frac{y_f - b}{\lambda z_0}\right),
where \mathcal{F} represents the Fourier transform with a scale factor of 1/(λz_0), O represents the Fourier spectrum of the object o, and (u, v) are the frequency-domain coordinates on this plane. From Equation (2), the complex-amplitude distribution of the optical field in the (x_f, y_f) plane is the Fraunhofer diffraction pattern of the object, except for the quadratic-phase factor preceding the integral. In addition, the oblique spherical illumination shifts the Fourier spectrum of the object along the y_f direction. Both Equations (1) and (2) differ from previous work [18,19].
A diaphragm is applied so that only part of the Fourier spectrum passes through and then experiences the phase modulation given by the lens L_2. Through free-space diffraction, the optical wave then propagates a distance z_i to the image plane, which is again described by the Fresnel diffraction integral in its single-Fourier-transform form. The distances z_0 and z_i satisfy the object–image relation of the lens L_2, 1/z_0 + 1/z_i = 1/f, where f is the focal length of the lens L_2 and the imaging magnification is M. On the image plane, only the intensity of the light field can be recorded. In the case that the center of the lens L_2 coincides with the z axis, the intensity distribution, ignoring constant phase factors, is as follows:
I(x_i, y_i) = \left| \mathcal{F}^{-1}\left\{ O\left(u, v - \frac{b}{\lambda z_0}\right) \cdot P(\lambda z_0 u, \lambda z_0 v) \right\} \right|^2
where P(λz_0 u, λz_0 v) is the coherent transfer function (CTF) of the imaging system, and P(x_f, y_f) is the pupil function determined by the diaphragm, which is a circle function of diameter D and can be expressed as:
P(x_f, y_f) = \begin{cases} 1, & \sqrt{x_f^2 + y_f^2} \le D/2 \\ 0, & \text{otherwise} \end{cases}
The optical cutoff frequency in object space is f_c = D/(2λz_0). At this point, the optical field on the image plane is an LR intensity image of the object, formed by the part of the Fourier spectrum of the object passing through the limited aperture of L_2. In order to break through the limitation of the imaging lens and achieve high resolution, it is necessary to collect a series of LR images formed by different Fourier spectrum components of the object. This is realized by moving the scanning module in the x–y plane to different positions. Each sub-spectrum, whose center position is denoted by (u_i, v_i), needs to have a certain overlapping ratio with its neighbors so as to ensure that the collected data contain enough redundant information. Then, in the frequency domain, the synthetic Fourier spectrum with phase information can be retrieved by a phase retrieval algorithm, and finally the high-resolution complex-amplitude image can be obtained.
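The forward model of a single capture can be summarized in a short sketch (a minimal illustration under assumed parameters, not the authors' code): the camera aperture acts as a circular CTF centered at the sub-spectrum position, and the LR intensity is the squared magnitude of the inverse Fourier transform of the intercepted spectrum, as in Equation (3):

```python
import numpy as np

def lr_intensity(obj_spectrum, du, wavelength, z0, D, u_shift=0.0, v_shift=0.0):
    """Forward model of one low-resolution capture (sketch of Equations (3)-(4)).

    obj_spectrum : centered 2-D Fourier spectrum O(u, v) of the object
    du           : frequency-sampling interval of the spectrum grid (cycles/m)
    D            : diameter of the camera aperture (m)
    u_shift, v_shift : center (u_i, v_i) of the sub-spectrum selected by the
                       camera position, in the same units as u and v
    """
    n = obj_spectrum.shape[0]
    u = (np.arange(n) - n // 2) * du
    U, V = np.meshgrid(u, u)
    # Circular pupil of Equation (4): object-space cutoff fc = D / (2*wavelength*z0).
    fc = D / (2 * wavelength * z0)
    ctf = ((U - u_shift) ** 2 + (V - v_shift) ** 2) <= fc ** 2
    field = np.fft.ifft2(np.fft.ifftshift(obj_spectrum * ctf))
    return np.abs(field) ** 2

# Example with assumed values: a random diffused object over a 5 mm field,
# aperture D = 35.71 mm, wavelength 532 nm, object distance z0 = 2.4 m.
rng = np.random.default_rng(1)
o = rng.random((256, 256)) * np.exp(1j * rng.uniform(-np.pi, np.pi, (256, 256)))
O = np.fft.fftshift(np.fft.fft2(o))
du = 1.0 / 5e-3                      # spectrum sampling interval for a 5 mm field
I0 = lr_intensity(O, du, 532e-9, 2.4, 35.71e-3)
```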
In practice, a two-dimensional CCD detector is used to record the LR intensity images. Each photosensitive pixel has a finite size over which the intensity is integrated to obtain its value, and the pixels are assumed to be sampled at equal intervals. The recording process can then be expressed as a convolution followed by sampling, giving the recorded LR intensity I_s(ξ, η) [1]:
I_s(\xi, \eta) = \left[ \iint I(x_i, y_i)\, g_{CCD}(\xi - x_i, \eta - y_i)\, dx_i\, dy_i \right] \cdot \mathrm{comb}\left(\frac{\xi}{\Delta\xi}\right) \cdot \mathrm{comb}\left(\frac{\eta}{\Delta\eta}\right)
where g_{CCD}(ξ, η) represents the impulse response (pixel aperture) function of the CCD detector, and Δξ and Δη are the two-dimensional discrete sampling periods of the detector.
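A compact way to emulate Equation (5) numerically (a sketch under the assumption of a 100% fill-factor square pixel, i.e., a box-shaped g_CCD) is to average the finely sampled image-plane intensity over non-overlapping pixel blocks, which is equivalent to the convolution-plus-sampling model:

```python
import numpy as np

def ccd_sample(intensity, pixel_ratio):
    """Sketch of Equation (5): integrate the image-plane intensity over each
    photosensitive pixel (box impulse response g_CCD) and then sample on a
    regular grid.  `pixel_ratio` is the assumed number of fine simulation
    samples per detector pixel.
    """
    h, w = intensity.shape
    h2, w2 = h // pixel_ratio, w // pixel_ratio
    cropped = intensity[:h2 * pixel_ratio, :w2 * pixel_ratio]
    # Convolution with the pixel aperture followed by decimation equals
    # averaging over non-overlapping pixel_ratio x pixel_ratio blocks.
    return cropped.reshape(h2, pixel_ratio, w2, pixel_ratio).mean(axis=(1, 3))

# Example: a finely sampled intensity pattern binned onto a coarser CCD grid.
fine = np.random.default_rng(2).random((512, 512))
coarse = ccd_sample(fine, pixel_ratio=4)      # 128 x 128 detector samples
```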

2.2. Reconstruction Algorithm

The iterative phase retrieval algorithm of remote FP is used to recover the high-resolution complex-amplitude information of the object from the recorded LR intensity images corresponding to different Fourier sub-spectra. The specific process, shown in Figure 2, is as follows:
Step 1: Initialization of the high-resolution Fourier spectrum. Take the intensity image I_0 acquired at the center FOV of the CCD detector and divide by the magnification M to transform it to the object plane. The amplitude is taken as the square root of I_0, and bilinear interpolation is applied to obtain an initial guess of the HR amplitude image. The initial phase of the HR image is set to a constant value, φ(x_0, y_0) = 0. Then, the Fourier transform of the guessed HR complex-amplitude distribution is applied to obtain the HR Fourier spectrum O(u, v):
O(u, v) = \mathcal{F}\left\{ \mathcal{B}\left[\sqrt{I_0(x_0, y_0)}\right] \exp\left[j\varphi(x_0, y_0)\right] \right\}
where B represents the interpolation operation, and the interpolation magnification can be estimated according to the number of LR intensity images and the overlapping ratio of the adjacent sub-spectrum.
Step 2: For the ith camera position, use the CTF P(λz_0 u, λz_0 v) to intercept the guessed HR Fourier spectrum. Then, the inverse Fourier transform is applied to obtain the LR complex amplitude of the optical field φ_m(x_0, y_0):
\tilde{\phi}_m(u - u_i, v - v_i) = O_i(u, v) \cdot P\left(\lambda z_0 (u - u_i), \lambda z_0 (v - v_i)\right)
\phi_m(x_0, y_0) = \mathcal{F}^{-1}\left[\tilde{\phi}_m(u - u_i, v - v_i)\right]
where m is the number of the iteration.
Step 3: Retain the phase of the LR complex amplitude of the sample and replace its amplitude with the square root of the intensity image collected by the camera at the corresponding position. This gives the updated complex-amplitude distribution φ_{m+1}(x_0, y_0).
Step 4: Apply the Fourier transform to the updated complex-amplitude distribution, \tilde{\phi}_{m+1}(u - u_i, v - v_i) = \mathcal{F}[\phi_{m+1}(x_0, y_0)], and update the Fourier spectrum corresponding to the sub-aperture according to the following equations:
\tilde{O}_i^{m+1}(u, v) = \tilde{O}_i^{m}(u, v) + \alpha \frac{P_m^{*}\left(\lambda z_0 (u - u_i), \lambda z_0 (v - v_i)\right)}{\left| P_m\left(\lambda z_0 (u - u_i), \lambda z_0 (v - v_i)\right) \right|^2} \times \left[ \tilde{\phi}_{m+1}(u - u_i, v - v_i) - \tilde{\phi}_m(u - u_i, v - v_i) \right],
and
P_{m+1}\left(\lambda z_0 u, \lambda z_0 v\right) = P_m\left(\lambda z_0 u, \lambda z_0 v\right) + \beta \frac{\tilde{O}_m^{*}(u - u_i, v - v_i)}{\left| \tilde{O}_m(u - u_i, v - v_i) \right|^2} \times \left[ \tilde{\phi}_{m+1}(u - u_i, v - v_i) - \tilde{\phi}_m(u - u_i, v - v_i) \right],
where α and β are the search step sizes of the algorithm, usually set to 1, and * denotes the complex conjugate.
Step 5: Repeat steps 2–4 until the Fourier spectrum information has been updated for all positions, completing one iteration.
Step 6: The Fourier spectrum obtained from the previous round is used as the initial guess for the next iteration. The updating process of steps 2–5 is repeated until convergence, giving the optimized solution of the HR Fourier spectrum. Then, the inverse Fourier transform is applied to obtain the HR complex-amplitude image of the object. It is worth noting that in step 2, applying the CTF to intercept the different Fourier spectrum components of the object is one of the key steps.
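The loop of steps 2–5 can be written compactly as follows (a minimal sketch, not the authors' implementation: it assumes pre-registered, noise-free LR images, a known binary pupil so that the pupil update of step 4 is omitted, unit step size α = 1, and nearest-neighbor upsampling of the measured amplitudes):

```python
import numpy as np

def remote_fp_reconstruct(lr_images, centers, radius, n_iter=50, upsample=2):
    """Minimal sketch of the iterative recovery of Section 2.2 (steps 1-6).

    lr_images : list of measured LR intensity images, all N x N and pre-aligned
    centers   : list of (row, col) sub-spectrum centers in HR-spectrum pixels
    radius    : CTF radius in HR-spectrum pixels
    """
    n = lr_images[0].shape[0]
    m = n * upsample                                    # HR grid size
    # Step 1: initialize the HR spectrum from the upsampled central image,
    # with zero initial phase.
    amp0 = np.kron(np.sqrt(lr_images[0]), np.ones((upsample, upsample)))
    O = np.fft.fftshift(np.fft.fft2(amp0))

    yy, xx = np.mgrid[:m, :m]
    for _ in range(n_iter):
        for img, (r0, c0) in zip(lr_images, centers):
            mask = (yy - r0) ** 2 + (xx - c0) ** 2 <= radius ** 2
            # Step 2: intercept the sub-spectrum with the CTF, go to the space domain.
            sub = np.where(mask, O, 0)
            phi = np.fft.ifft2(np.fft.ifftshift(sub))
            # Step 3: keep the phase, replace the amplitude by the measurement.
            meas = np.kron(np.sqrt(img), np.ones((upsample, upsample)))
            phi_new = meas * np.exp(1j * np.angle(phi))
            # Step 4: transform back and update the spectrum inside the CTF
            # (for a binary pupil, P*/|P|^2 = 1 on its support).
            sub_new = np.fft.fftshift(np.fft.fft2(phi_new))
            O[mask] += sub_new[mask] - sub[mask]
    # Step 6: inverse transform the synthesized spectrum to the HR image.
    return np.fft.ifft2(np.fft.ifftshift(O))
```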

3. Analysis of Sample-Matching Conditions

The FP system uses a two-dimensional area-array CCD detector to record the intensity distribution on the image plane. Since the sampling of the detector is discrete, how to choose the sampling interval of the detector is an important issue. The principle is to avoid both under-sampling, which leads to loss of information, and oversampling, which wastes the capability of the detector. A continuous band-limited function can be accurately retrieved from a discrete sampling sequence without loss of information only if the Nyquist sampling condition is satisfied. At the same time, the sampling of the CCD needs to be matched to the diaphragm of the imaging system in order to fully utilize the imaging capability of the optical system. For analytical convenience, it is assumed here that g_{CCD}(ξ, η) in Equation (5) is a delta function δ(ξ, η), so that only the discrete sampling affects the imaging.

3.1. Analysis of Sampling Conditions in the Spatial Domain

The single thin-lens imaging model from the object plane to the image plane is considered, as shown in Figure 1. According to coherent imaging theory, for a diffraction-limited coherent optical-imaging system, the radius of the Airy pattern can be expressed as δ = 1.22λz_i/D, where z_i is the image distance and D is the diaphragm diameter of the imaging lens. According to the Nyquist sampling theorem, the Airy-pattern radius has to be larger than the pixel size of the detector in order to avoid under-sampling; otherwise, aliasing will occur in the frequency domain. Here, only the one-dimensional case is analyzed as an example, and the following relation needs to be satisfied:
\Delta\xi \le \frac{1.22\,\lambda z_i}{D}
where Δξ is the pixel size of the detector in the horizontal direction.
In order to study the sampling relationship, the other parameters of the imaging system are fixed and only the size of the diaphragm is changed to produce different Airy-pattern radii. The specific parameters are as follows: the illumination wavelength λ is 532 nm, the object distance is z_0 = 2.4 m, the focal length of the camera lens L_2 is 100 mm, the image distance is z_i ≈ 104 mm, the imaging magnification M is 1/23, and the pixel size of the detector is 1.67 μm × 1.67 μm. The USAF resolution test target is employed as the simulated object, and its physical size is 5 mm. In the following, two cases with different diaphragms of the camera are analyzed. When the diameter of the diaphragm is set to 50 mm, according to Equation (8), the radius of the Airy pattern can be calculated to be 1.35 μm. It is smaller than the detector pixel size, so this is an under-sampling case. In this case, according to the diaphragm of the imaging system, the theoretical resolution in the object plane can be calculated to be δx = 1/(2f_c) = λz_0/D ≈ 25.5 μm, which corresponds to group 6, element 4 (line width 26.1 μm) of the target, as shown in Figure 3a. However, because of the under-sampling caused by the CCD, the smallest resolvable line pair is actually group 5, element 6 (line width 39.2 μm), as shown in Figure 3b. So, in this case, it is the pixel size of the camera that determines the achievable imaging resolution. In the other case, the diameter of the diaphragm is set to 25 mm, which leads to oversampling, and the theoretical resolution is λz_0/D ≈ 52.1 μm. This is verified in Figure 3c, where the resolvable line width is group 5, element 4 (line width 53.9 μm); here, the actual imaging resolution is determined by the size of the diaphragm of the imaging system. It should be noted that reducing the diameter of the diaphragm also decreases the light energy passing through the optical system, and so in Figure 3c the image plane becomes a little darker.
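These values can be checked with a few lines (using the rounded parameters quoted above; the object-plane resolution for the 25 mm case comes out near 51 μm with these inputs, close to the 52.1 μm quoted in the text):

```python
# Numerical check of the sampling analysis in Section 3.1 (parameters from the text).
wavelength = 532e-9      # m
z_i = 104e-3             # image distance, m
z_0 = 2.4                # object distance, m
pixel = 1.67e-6          # detector pixel size, m

for D in (50e-3, 25e-3):
    airy = 1.22 * wavelength * z_i / D       # Airy-pattern radius on the detector
    res_obj = wavelength * z_0 / D           # diffraction-limited object-plane resolution
    regime = "under-sampled" if airy < pixel else "oversampled"
    print(f"D = {D * 1e3:.0f} mm: Airy radius {airy * 1e6:.2f} um, "
          f"object-plane resolution {res_obj * 1e6:.1f} um ({regime})")

# Critical (matched) aperture, where the Airy radius equals the pixel size:
D_crit = 1.22 * wavelength * z_i / pixel
print(f"Critical aperture diameter = {D_crit * 1e3:.1f} mm")    # ~40.4 mm
```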

3.2. Interception of the Fourier Spectrum Under the Critical Sampling Condition

Assume that the pixels of the CCD detector are square and sampled at equal intervals in two dimensions, and keep the simulation parameters of Section 3.1. The amplitude and phase images of the simulated object are shown in Figure 4(a1,a2). According to the properties of the discrete Fourier transform, the highest spatial frequency in the Fourier spectrum of an image with sampling interval Δξ is f_max = 0.5/Δξ. The critical sampling case corresponds to the equality in Equation (8), for which the diameter of the diaphragm can be calculated to be D = 1.22λz_i/Δξ ≈ 40.4 mm. At this point, the CTF is a circle function, as shown in Figure 4(b1,b2), whose radius is the optical cutoff frequency f_c = 0.61/Δξ. It can be seen that the radius of the CTF exceeds the maximum spatial frequency determined by the pixel size of the CCD. For this particular case, the following two ways of processing the spectrum interception are proposed:
(a) Ignoring the excess part of the CTF at four corners in the Fourier spectrum. The Fourier spectrum is intercepted directly by using a 128 × 128 matrix with a bandwidth of 2 f max , as shown in Figure 4(b1).
(b) Upsampling the LR intensity image by interpolation processing. The number of sampling pixels is increased to 160 × 160. Then, the range of the corresponding Fourier spectrum is enlarged, and it is intercepted using the CTF, as shown in Figure 4(b2).
To simulate the remote FP, a total of 5 × 5 LR images is acquired with an overlapping ratio of 60%, and the phase retrieval algorithm is used for reconstruction. The simulated results are shown in Figure 4(c1–e2). It can be seen that the synthesized Fourier spectrums obtained by the two processing methods, shown in Figure 4(c1,c2), differ only near the boundary; the result of the interpolation method has an extra wave-like fluctuation of the spectrum at the boundary. The peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) [28] are used as the evaluation parameters for the retrieved HR complex-amplitude images shown in Figure 4(d1–e2). The interpolation method gives slightly better PSNR and SSIM values, and it is therefore applied in the following. In conclusion, when designing the remote FP configuration by selecting the camera lens and CCD detector, the matching between the diaphragm of the system and the discrete sampling of the detector should be considered. The size of the diaphragm should be as large as possible under the premise of satisfying the discrete sampling condition of the CCD detector, so as to make full use of the imaging capability of each optical-imaging module.
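The two interception options (a) and (b) can be sketched numerically as follows (an illustration with a placeholder measurement, not the authors' code; the 128 → 160 upsampling factor follows the text):

```python
import numpy as np
from scipy.ndimage import zoom

# Assumed input: one 128 x 128 LR intensity image recorded at the critical
# sampling condition (here a placeholder array), with pixel pitch 1.67 um.
pixel = 1.67e-6
lr = np.random.default_rng(3).random((128, 128))
f_max = 0.5 / pixel                   # highest frequency supported by the sampling
f_c = 0.61 / pixel                    # CTF radius at the critical aperture

# Method (a): keep the native 128 x 128 grid, i.e. intercept a square window of
# bandwidth 2*f_max and ignore the parts of the CTF beyond f_max at the corners.
spec_a = np.fft.fftshift(np.fft.fft2(np.sqrt(lr)))          # 128 x 128 window

# Method (b): upsample the LR image to 160 x 160 so that its spectrum spans
# +/- 1.25*f_max > f_c, then intercept with the full circular CTF.
lr_up = zoom(lr, 160 / 128, order=1)                         # bilinear interpolation
spec_b = np.fft.fftshift(np.fft.fft2(np.sqrt(lr_up)))
n = spec_b.shape[0]
u = (np.arange(n) - n // 2) / (128 * pixel)   # frequency step is unchanged: 1/(128*pixel)
U, V = np.meshgrid(u, u)
spec_b_ctf = spec_b * ((U ** 2 + V ** 2) <= f_c ** 2)
```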

4. Simulation and Experimental Results of the Reflected Diffuse Samples

4.1. Simulation Results and Analysis

The simulation parameters are chosen as follows, in accordance with the practical experimental conditions. A laser with a 532 nm wavelength passes through a lens L_1 with a focal length of 300 mm, and the focal length of the camera lens L_2 is 100 mm. The diameter of the diaphragm is 35.71 mm according to the actual experimental parameter, and the CCD detector pixel is 1.67 μm × 1.67 μm, so the sample-matching condition is satisfied. In order to generate a reflective diffused sample, the amplitude is set to be an airfield image, and the phase is set to be a random distribution within the range [−π, π], with a resolution of 512 × 512. In order to ensure the reconstruction quality and reduce the acquisition time, the moving step of the camera is selected to be 7.14 mm, for which the overlapping ratio in the frequency domain is calculated to be 80%. A total of 11 × 11 LR intensity images is acquired, the object distance is 2.4 m, and the imaging magnification M is still 1/23. In this configuration, the diameter of the CTF of the imaging system is larger than the spectrum size determined by the sampling interval of the LR image, so the interpolation method described in Section 3.2 is used for the spectrum interception.
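The scan-step and overlap figures, as well as the diffused sample, can be reproduced with a short sketch (the linear-overlap definition for two adjacent circular sub-apertures and the placeholder amplitude array are assumptions; the diameter and step follow the text):

```python
import numpy as np

# Overlap ratio implied by the scan step, using the linear overlap of two
# adjacent circular sub-apertures of diameter D separated by `step`.
D, step = 35.71e-3, 7.14e-3
overlap = 1 - step / D
print(f"frequency-domain overlap ratio ~ {overlap:.0%}")     # ~80%

# Reflective diffused sample for the simulation: an amplitude image combined
# with a uniformly random phase in [-pi, pi] (a random array stands in here
# for the 512 x 512 airfield amplitude image).
rng = np.random.default_rng(4)
amplitude = rng.random((512, 512))
phase = rng.uniform(-np.pi, np.pi, (512, 512))
sample = amplitude * np.exp(1j * phase)
```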
The simulation results are shown in Figure 5, where Figure 5a is the simulated sample and Figure 5b is the Fourier spectrum of the sample. Figure 5c shows the LR images corresponding to the different regions of the amplitude distribution of the Fourier spectrum, denoted by the blue, red, green, and yellow circles. There is speckle noise in the acquired LR images, but there is no obvious distinction between bright and dark fields corresponding to the different spectrum components. This is due to scattering from the rough surface of the object, whose concave and convex microstructures are larger than the wavelength of the light. When the light irradiates the rough surface, the irregularity causes the incident light to be reflected at different angles and directions, so the reflected light is distributed nearly uniformly over all directions. The reconstructed results are shown in Figure 5d; the resolution has been improved, and the profiles of the buildings and the plane can be distinguished. The quantitative evaluation using the correlation coefficient (c.c.) [29] shows an improvement in the quality of the reconstructed HR amplitude image. The phase of the sample, shown in Figure 5e, is random due to the scattering of the diffused object. The simulation results reveal that remote FP based on camera scanning is able to image a reflective diffused object: by acquiring a series of LR images, the image resolution can be improved, and the effect of scattering noise can be suppressed to a certain extent.

4.2. The Experimental Results and Analysis

According to the imaging schematic shown in Figure 1, a reflective experimental setup was built in the laboratory, as shown in Figure 6, where α ≈ 13°. The experimental parameters are consistent with those in Section 4.1. A focusing lens L_1 converts the diverging spherical wave into a convergent spherical wave so that the Fourier spectrum of the object is obtained at a finite distance. The camera lens L_2 has a fixed focal length of 100 mm and an F-number range of 2.8~32, where F# = f/D, f is the focal length, and D is the diameter of the adjustable diaphragm. The detector is an 8-bit CMOS camera (MER-1070-14U3C, 3840 × 2748 pixels, 1.67 μm × 1.67 μm pixel size). Based on the above parameters, the F# of the camera would need to be set to 2.57 for the camera sampling interval to satisfy the critical sampling condition exactly. Experimentally, the F# is set to its minimum value of 2.8, and the camera aperture is correspondingly 35.71 mm, which deviates slightly from the sample-matching condition and results in slight oversampling. The overall resolution of the imaging system is then determined by the size of the diaphragm. Because the camera combines several lenses into one imaging lens, it is not easy to measure the specific object distance and image distance directly, so a standard USAF test target is used as the object to calibrate the imaging system. By adjusting the focus of the camera, a focused image is obtained, and the known pixel size of the CCD is used as the standard ruler on the imaging plane. The system magnification is then obtained by dividing the measured size of particular bars by the true size of the same bars on the USAF test target, and it is calibrated as 1/23. Because the focal length of the lens L_2 is fixed at 100 mm, applying the Gaussian lens formula gives an object distance of 2.4 m and an image distance of 104 mm. The object distance is in good accordance with the value measured in the laboratory.
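A quick check of the aperture and sampling margin at this setting (values from the text) is:

```python
# Relating the camera F-number to the aperture diameter and checking the
# sampling margin at the experimental setting.
f_lens = 100e-3          # focal length of L2, m
f_number = 2.8           # minimum F# of the camera lens
wavelength = 532e-9      # m
z_i = 104e-3             # image distance, m
pixel = 1.67e-6          # detector pixel size, m

D = f_lens / f_number                    # ~35.71 mm aperture diameter
airy = 1.22 * wavelength * z_i / D       # Airy-pattern radius on the detector
print(f"D = {D * 1e3:.2f} mm, Airy radius = {airy * 1e6:.2f} um, pixel = {pixel * 1e6:.2f} um")
# The Airy radius (~1.89 um) slightly exceeds the pixel size, i.e. slight
# oversampling, consistent with operating just below the critical condition.
```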
According to Equation (2), the oblique spherical wave illumination shifts the Fourier spectrum of the sample. In order to eliminate this shift and ensure that the camera aperture intercepts the spectrum at the correct location, the following procedure is used during the experiment. First, a mirror-like reflective object is used to find the position reached by the specularly reflected beam, which obeys the law of reflection and corresponds to the center of the Fourier spectrum of the object; the camera is placed there so that the image falls exactly in the center of the FOV. Then, the camera is rotated so that its photosensitive surface is perpendicular to the z axis, keeping the sample parallel to the camera sensor. Finally, the mirror-like object is replaced with the reflective diffused sample. It is noted that for an illumination-scanning FP system, the relative positions of the sample and the imaging system are fixed, so the position of the image on the camera is always the same. In comparison, for the camera-scanning remote FP system, the image shifts within the FOV of the camera during image acquisition. Therefore, image alignment processing is required before performing the reconstruction algorithm [30], and the accuracy of the image alignment seriously affects the quality of the reconstructed image. At the same time, the available number of scanning positions of the camera is limited by the size of the photosensitive surface of the CCD detector and the FOV. If the imaging resolution needs to be further improved, a CCD with a larger photosensitive area and higher sensitivity is needed, or the CCD and the imaging lens could be moved separately.
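The subpixel registration algorithm of [30] is available, for example, in scikit-image; a usage sketch with a synthetically shifted frame (the arrays and shift values are assumptions, not experimental data) is:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

# Two LR captures of the same scene region (here simulated by shifting a frame).
img_ref = np.random.default_rng(5).random((256, 256))
img_moving = nd_shift(img_ref, (3.4, -1.7), order=1)         # synthetic misalignment

# Estimate the subpixel shift with the efficient upsampled cross-correlation
# algorithm of Guizar-Sicairos et al. [30], then register the moving frame.
shift_est, error, diffphase = phase_cross_correlation(
    img_ref, img_moving, upsample_factor=100)
aligned = nd_shift(img_moving, shift_est, order=1)
```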
In order to quantitatively characterize the resolution enhancement of the system, a standard USAF resolution test target (Newport, RES-2) is used. One of the low-resolution images is shown in Figure 7(a1). The reconstructed amplitude image is further denoised using the block-matching 3D filtering algorithm (BM3D) [31], and the result is shown in Figure 7(a2). Three different regions in the LR intensity image and in the reconstructed image, marked by red, green, and blue rectangular boxes, are enlarged for comparison. The target bars in the LR intensity image are very blurred, and it can hardly resolve group 1, element 1 (line width of 250 μm). In contrast, the reconstructed image presents the target bars clearly, as shown by the blue box in Figure 7(a2), reaching the resolving capability of group 2, element 3 (line width of 99 μm). The reconstructed resolution has thus been improved by a factor of 2.5, and the image quality has been greatly improved. Taking the effect of scattering into account, the contrast of each group of line pairs, denoted as C, is calculated as follows:
C = \frac{\bar{w} - \bar{b}}{\bar{w} + \bar{b}} \cdot \bar{w}\,(1 - \bar{b})
where w ¯ and b ¯ are the average intensities of the white and black bars, respectively. The contrast of each group of underlined regions is shown in Figure 7b, and the reconstructed image has a higher contrast than the original captured LR image for all three regions.
Then, experiments are conducted with a reflective diffused banknote sample that contains the pattern of a deer. One of the captured LR intensity images is shown in Figure 8(a1), where an overall dark-field appearance can be observed. The reconstructed HR amplitude is shown in Figure 8(a2). The pattern at the position of the deer's eye is very blurred in the original LR image, while the reconstructed HR image effectively improves the resolution and image quality. Furthermore, the contrast curves C shown in Figure 8b confirm that the quality of the HR image is better than that of the LR image.

5. Conclusions and Discussion

In order to fully utilize the capability of an existing optical-imaging system and meet the demand for large FOV and high resolution in far-field imaging, this paper has carried out full-aperture remote FP with sample matching. Firstly, the physical model under oblique convergent spherical wave illumination is established, and a quantitative formula for the shift of the object spectrum is deduced. Then, the discrete sampling of the CCD detector is taken into consideration. By varying the diameter of the diaphragm of the imaging system, the resolution of the LR images in both the under-sampling and oversampling situations is analyzed. It is shown that the parameters of each imaging module need to be matched in order to fully utilize the imaging capability of the optical system, and spectrum interception methods under the critical sampling condition are presented. Finally, a reflective remote FP system is designed and built. According to the sample-matching condition with the full aperture, based on a CCD pixel size of 1.67 μm × 1.67 μm and a camera lens F# of 2.8, the diameter of the aperture stop is set to 35.71 mm. HR imaging of the diffused samples is successfully achieved, with the resolution improved by a factor of 2.5. The experimental results demonstrate that remote FP can improve imaging resolution and quality, and it has great potential for remote sensing applications.
With regard to applying the proposed method to remote sensing scenarios, the following issues should be considered in the system design. Firstly, after long-distance free-space propagation, the optical-field distribution is already the Fraunhofer diffraction pattern, which represents the Fourier spectrum of the object; in this case, other illumination modes should be studied in order to obtain a proper FOV. Secondly, since a laser is adopted to illuminate the object actively, sufficient illumination energy must be guaranteed after propagation over hundreds of kilometers, and the influence of atmospheric turbulence should be taken into consideration. Additionally, it is worth noting that the camera may need to be moved along a spherical surface, beyond the paraxial approximation, if the spectrum acquisition region is very large and a higher resolution improvement is required; the movement of the camera mounted on the satellite should then be accurately controlled to record the LR images. Meanwhile, samples with different surface characteristics (such as an oil painting, a leaf, composite materials, or building materials) should be employed to expand the applications of the proposed method. These improvements will help make the FP method practical for remote sensing tasks.

Author Contributions

Methodology, D.W.; data curation, J.M.; software, J.M.; validation, J.Z.; writing—original draft preparation, J.M.; writing—review and editing, D.W. and J.Z.; supervision, D.W.; formal analysis, R.W., Y.W., L.R., S.L. and L.L.; funding acquisition, D.W. and J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

Beijing Natural Science Foundation (4222061); National Natural Science Foundation of China (62220106005, 62075001, 62175004); Beijing Key Laboratory of Advanced Optical Remote Sensing Technology Fund (AORS20235); Beijing Natural Science Foundation (4222063).

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Goodman, J.W. Introduction to Fourier Optics; Roberts and Company Publishers: Greenwood Village, CO, USA, 2005. [Google Scholar]
  2. Van Belle, G.T.; Meinel, A.B.; Meinel, M.P. The scaling relationship between telescope cost and aperture size for very large telescopes. Ground-Based Telesc. 2004, 5489, 563–570. [Google Scholar]
  3. Daukantas, P. James Webb Space Telescope: A Sparkling Optical Success. Opt. Photonics News 2023, 34, 28–35. [Google Scholar] [CrossRef]
  4. McElwain, M.W.; Feinberg, L.D.; Kimble, R.A.; Bowers, C.W.; Knight, J.S.; Niedner, M.B.; Perrin, M.D.; Rigby, J.R.; Smith, E.C.; Stark, C.C. Status of the James Webb Space Telescope mission. Proc. SPIE 2020, 11443, 173–181. [Google Scholar]
  5. Cassaing, F.; Sorrente, B.; Fleury, B.; Laubier, D. Optical design of a Michelson wide-field multiple-aperture telescope. Opt. Des. Eng. 2004, 5249, 220–229. [Google Scholar]
  6. Mait, J.N.; Euliss, G.W.; Athale, R.A. Computational imaging. Adv. Opt. Photonics 2018, 10, 409–483. [Google Scholar] [CrossRef]
  7. Liu, J.; Feng, Y.; Wang, Y.; Liu, J.; Zhou, F.; Xiang, W.; Zhang, Y.; Yang, H.; Cai, C.; Liu, F. Future-proof imaging: Computational imaging. Adv. Imaging 2024, 1, 012001. [Google Scholar] [CrossRef]
  8. Gustafsson, M.G. Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy. J. Microsc. 2000, 198, 82–87. [Google Scholar] [CrossRef]
  9. Hell, S.W.; Wichmann, J. Breaking the diffraction resolution limit by stimulated emission: Stimulated-emission-depletion fluorescence microscopy. Opt. Lett. 1994, 19, 780–782. [Google Scholar] [CrossRef]
  10. Willig, K.I.; Rizzoli, S.O.; Westphal, V.; Jahn, R.; Hell, S.W. STED microscopy reveals that synaptotagmin remains clustered after synaptic vesicle exocytosis. Nature 2006, 440, 935–939. [Google Scholar] [CrossRef]
  11. Zheng, G.; Horstmeyer, R.; Yang, C. Wide-field, high-resolution Fourier ptychographic microscopy. Nat. Photonics 2013, 7, 739–745. [Google Scholar] [CrossRef]
  12. Wang, D.; Han, Y.; Zhao, J.; Rong, L.; Wang, Y.; Lin, S. Enhanced image reconstruction of Fourier ptychographic microscopy with double-height illumination. Opt. Express 2021, 29, 41655–41669. [Google Scholar] [CrossRef]
  13. Shu, Y.; Sun, J.; Lyu, J.; Fan, Y.; Zhou, N.; Ye, R.; Zheng, G.; Chen, Q.; Zuo, C. Adaptive optical quantitative phase imaging based on annular illumination Fourier ptychographic microscopy. PhotoniX 2022, 3, 24. [Google Scholar] [CrossRef]
  14. Zheng, G.; Shen, C.; Jiang, S.; Song, P.; Yang, C. Concept, implementations and applications of Fourier ptychography. Nat. Rev. Phys. 2021, 3, 207–223. [Google Scholar] [CrossRef]
  15. Zuo, C.; Sun, J.; Li, J.; Asundi, A.; Chen, Q. Wide-field high-resolution 3D microscopy with Fourier ptychographic diffraction tomography. Opt. Lasers Eng. 2020, 128, 106003. [Google Scholar] [CrossRef]
  16. Dong, S.; Horstmeyer, R.; Shiradkar, R.; Guo, K.; Ou, X.; Bian, Z.; Xin, H.; Zheng, G. Aperture-scanning Fourier ptychography for 3D refocusing and super-resolution macroscopic imaging. Opt. Express 2014, 22, 13586–13599. [Google Scholar] [CrossRef]
  17. Holloway, J.; Asif, M.S.; Sharma, M.K.; Matsuda, N.; Horstmeyer, R.; Cossairt, O.; Veeraraghavan, A. Toward long-distance subdiffraction imaging using coherent camera arrays. IEEE Trans. Comput. Imaging 2016, 2, 251–265. [Google Scholar] [CrossRef]
  18. Holloway, J.; Wu, Y.; Sharma, M.K.; Cossairt, O.; Veeraraghavan, A. SAVI: Synthetic apertures for long-range, subdiffraction-limited visible imaging using Fourier ptychography. Sci. Adv. 2017, 3, e1602564. [Google Scholar] [CrossRef]
  19. Yang, M.; Fan, X.; Wang, Y.; Zhao, H. Experimental study on the exploration of camera scanning reflective Fourier ptychography technology for far-field imaging. Remote Sens. 2022, 14, 2264. [Google Scholar] [CrossRef]
  20. Li, S.; Wang, B.; Liang, K.; Chen, Q.; Zuo, C. Far-Field Synthetic Aperture Imaging via Fourier Ptychography with Quasi-Plane Wave Illumination. Adv. Photonics Res. 2023, 4, 2300180. [Google Scholar] [CrossRef]
  21. Xiang, M.; Pan, A.; Zhao, Y.; Fan, X.; Zhao, H.; Li, C.; Yao, B. Coherent synthetic aperture imaging for visible remote sensing via reflective Fourier ptychography. Opt. Lett. 2020, 46, 29–32. [Google Scholar] [CrossRef]
  22. Tian, Z.; Zhao, M.; Yang, D.; Wang, S.; Pan, A. Optical remote imaging via Fourier ptychography. Photonics Res. 2023, 11, 2072–2083. [Google Scholar] [CrossRef]
  23. Dong, C.; Loy, C.C.; He, K.; Tang, X. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 295–307. [Google Scholar] [CrossRef] [PubMed]
  24. Yanny, K.; Monakhova, K.; Shuai, R.W.; Waller, L. Deep learning for fast spatially varying deconvolution. Optica 2022, 9, 96–99. [Google Scholar] [CrossRef]
  25. Wu, J.; Boominathan, V.; Veeraraghavan, A.; Robinson, J.T. Real-time, deep-learning aided lensless microscope. Biomed. Opt. Express 2023, 14, 4037–4051. [Google Scholar] [CrossRef] [PubMed]
  26. Wang, C.; Hu, M.; Takashima, Y.; Schulz, T.J.; Brady, D.J. Snapshot ptychography on array cameras. Opt. Express 2022, 30, 2585–2598. [Google Scholar] [CrossRef]
  27. Wang, B.; Li, S.; Chen, Q.; Zuo, C. Learning-based single-shot long-range synthetic aperture Fourier ptychographic imaging with a camera array. Opt. Lett. 2023, 48, 263–266. [Google Scholar] [CrossRef]
  28. Channappayya, S.S.; Bovik, A.C.; Caramanis, C.; Heath, R.W. Design of linear equalizers optimized for the structural similarity index. IEEE Trans. Image Process. 2008, 17, 857–872. [Google Scholar] [CrossRef]
  29. Mukaka, M.M. A guide to appropriate use of correlation coefficient in medical research. Malawi Med. J. 2012, 24, 69–71. [Google Scholar]
  30. Guizar-Sicairos, M.; Thurman, S.T.; Fienup, J.R. Efficient subpixel image registration algorithms. Opt. Lett. 2008, 33, 156–158. [Google Scholar] [CrossRef]
  31. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef]
Figure 1. Schematic of the remote FP system under convergent spherical wave illumination with a tilted angle α .
Figure 2. Flowchart of the reconstruction algorithm for remote FP.
Figure 3. The simulated sample and the LR intensity images. (a) The simulated-amplitude image of the sample, (b) the under-sampling case with a 50 mm diaphragm, and (c) the oversampling case with a 25 mm diaphragm.
Figure 4. Comparison of the results with direct processing and interpolation processing under the critical sampling condition. (a1,a2) Ground truth, (b1,b2) the CTF and the interception matrix, (c1,c2) the synthetic Fourier spectrum, (d1,d2) the reconstructed amplitude images, (e1,e2) the reconstructed phase images.
Figure 5. Simulation results of remote FP with a reflective diffused object. (a) The original amplitude image of the sample, (b) the amplitude distribution of the Fourier spectrum of the sample, (c) four LR intensity images corresponding to different sub-spectrums, (d) the reconstructed amplitude image, (e) the reconstructed phase image.
Figure 6. The experimental setup for the reflective remote FP using the camera-scanning mode.
Figure 7. Experimental results of the USAF resolution test target. (a1) One of the LR intensity images, (a2) the reconstructed amplitude image, (b) the contrast curves for the underlined parts of (a1,a2).
Figure 8. Experimental results for the banknote. (a1) One of the LR intensity images, (a2) the reconstructed HR amplitude image, (b) the contrast curves for the underlined parts of (a1,a2).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

