1. Introduction
Ghost imaging (GI) is an imaging technique that reconstructs images of unknown objects through the intensity correlation between the object beam and the reference beam. The total intensity of the object beam, which contains the object’s information, is collected by a bucket detector, while the reference beam is directly detected by a detector with spatial resolution. GI was first proposed by Pittman et al. [1,2]. This technique offers higher sensitivity in detection and greater efficiency in information extraction compared to traditional optical imaging. Additionally, GI has garnered increasing interest for applications such as remote sensing [3,4,5], super-resolution [6], and optical encryption [7].
In 2008, Shapiro et al. [8] theoretically proposed a computational ghost imaging (CGI) scheme and achieved single-arm ghost imaging. Subsequently, Bromberg et al. [9] experimentally validated its feasibility. Compared to traditional ghost imaging, CGI has the advantages of a simpler optical path and enhanced usability. It allows for the artificial design of speckle patterns with various characteristics to improve imaging quality, thereby further advancing the practical application of ghost imaging technology.
In recent years, differential computational ghost imaging (DCGI) [10], singular value decomposition ghost imaging (SVDGI) [11], deep learning ghost imaging (DLGI), and several other methods [12,13,14,15,16,17] have further improved the computational efficiency and imaging quality of CGI. However, current research primarily focuses on grayscale imaging, with relatively few studies on multi-wavelength computational ghost imaging (MWCGI). In practical applications, most images are in color, making the study of MWCGI particularly necessary. Compared to monochromatic CGI, MWCGI recovers not only the spatial information of the target object but also its color- and wavelength-related information. However, its more complex imaging system results in a time-consuming process and increased computational and storage demands. This is especially true when the three RGB bands are extended to many bands, where the system’s consumption of memory and computational resources becomes even more significant. Additionally, multi-wavelength systems often face the challenge of poor reconstructed image quality under undersampling conditions.
To overcome the challenges faced by MWCGI, various research teams have proposed a range of innovative methods. Welsh et al. [17,18] analyzed multi-wavelength compressed computational ghost imaging and demonstrated that the system can produce full-color, multi-band, high-quality images of real objects. Duan et al. demonstrated that rotating ground glass and spatial light modulators can be used to produce color images from monochrome components through incoherent superposition. Zhang et al. [19] proposed a wavelength-multiplexing ghost imaging technique, successfully reducing the required number of measurements. Huang et al. [20] proposed a computational ghost imaging scheme based on spectral encoding technology, achieving multispectral imaging. Other researchers have proposed further methods, including color ghost imaging schemes based on optimized random speckles and truncated singular value decomposition [21], as well as color ghost imaging schemes based on deep learning [22,23]. While these studies have enhanced the imaging efficiency and quality of MWCGI, they have also increased computational and storage demands as more wavelength bands are added. Addressing these limitations is crucial for advancing the technology’s application and development.
In this paper, we propose a multi-wavelength computational ghost imaging method based on feature dimensionality reduction, which can achieve high-quality image reconstruction at low sampling rates and significantly reduce computational complexity and storage requirements. We optimize multi-scale speckles as illumination patterns and use the second-order correlation function to reconstruct the component images of the target object. Furthermore, we apply principal component analysis (PCA) for feature dimensionality reduction on these reconstructed component images, which are then fused to form a color image. This method effectively improves imaging quality at low sampling rates while successfully retaining key information from each wavelength band, significantly reducing computational and storage burdens. The rest of this article is arranged as follows. In Section 2, we introduce the feature dimensionality reduction-based MWCGI scheme. In Section 3, we carry out numerical simulations and discuss the results. In Section 4, we present experimental results and discuss their significance. Finally, conclusions are drawn in Section 5.
2. Theory
The principle diagram of MWCGI is shown in Figure 1. In this setup, computer-generated random illumination speckle patterns are used as the light source. These speckle patterns are projected onto the surface of the object using a projector. In the signal light path, three bucket detectors equipped with red, green, and blue filters are set up to individually capture the intensity signals of the three color channels. Subsequently, second-order correlation operations are used to obtain the reconstructed image of the target object. By fusing the intensity information at corresponding positions in the images from each wavelength band, a reconstructed image containing the color information of the object can be obtained.
In the MWCGI system, at the $p$th measurement, the illumination speckle pattern with dimensions $m \times n$ is denoted as $I_p^{\lambda}(x, y)$, and the bucket signal received by the bucket detector is denoted as $B_p^{\lambda}$, where $\lambda \in \{R, G, B\}$, $x = 1, 2, \ldots, m$, $y = 1, 2, \ldots, n$, $p = 1, 2, \ldots, M$, and $M$ is the number of measurements. After $M$ measurements, the second-order correlation reconstruction is performed for each wavelength band. The second-order correlation function can be expressed as:
$$O_{\lambda}(x, y) = \left\langle B^{\lambda} I^{\lambda}(x, y) \right\rangle - \left\langle B^{\lambda} \right\rangle \left\langle I^{\lambda}(x, y) \right\rangle, \tag{1}$$
where $\langle \cdot \rangle = \frac{1}{M}\sum_{p=1}^{M}(\cdot)$ denotes the average over the $M$ measurements, and $O_{\lambda}(x, y)$ represents the three RGB components obtained by detecting the target object, corresponding to the imaging results in the red, green, and blue wavelength bands. Here, each speckle pattern pre-generated by the computer is reconfigured into a row vector of length $mn$ to form a row of the matrix $A_{\lambda}$, resulting in a measurement matrix of size $M \times mn$. The measurement matrix is represented as:
$$A_{\lambda} = \begin{bmatrix} I_1^{\lambda}(1, 1) & I_1^{\lambda}(1, 2) & \cdots & I_1^{\lambda}(m, n) \\ I_2^{\lambda}(1, 1) & I_2^{\lambda}(1, 2) & \cdots & I_2^{\lambda}(m, n) \\ \vdots & \vdots & \ddots & \vdots \\ I_M^{\lambda}(1, 1) & I_M^{\lambda}(1, 2) & \cdots & I_M^{\lambda}(m, n) \end{bmatrix}. \tag{2}$$
The matrix form of the signal $B_{\lambda}$ collected by the bucket detectors can be represented as:
$$B_{\lambda} = A_{\lambda} O_{\lambda}, \tag{3}$$
where $O_{\lambda}$ is the corresponding band of the target object reshaped into a column vector of length $mn$, and $B_{\lambda} = [B_1^{\lambda}, B_2^{\lambda}, \ldots, B_M^{\lambda}]^{T}$.
Referring to the second-order correlation operation, Equation (1) can be rewritten as follows:
$$O_{\lambda} = \frac{1}{M} A_{\lambda}^{T} \left( B_{\lambda} - \left\langle B_{\lambda} \right\rangle \right). \tag{4}$$
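To make the matrix formulation concrete, the following is a minimal NumPy sketch of Equations (2)–(4) for a single wavelength band. The object, the pattern count M, and the image size are illustrative choices for this sketch, not parameters of the actual system.

```python
import numpy as np

# Illustrative sizes: a 32 x 32 object imaged with M = 800 random speckle patterns.
m, n, M = 32, 32, 800
rng = np.random.default_rng(0)

# Ground-truth object for one wavelength band, flattened to a column vector of length m*n.
obj = rng.random((m, n))
o_true = obj.reshape(-1)

# Measurement matrix A: each row is one speckle pattern reshaped into a row vector (Eq. (2)).
A = rng.random((M, m * n))

# Bucket signals B = A @ O (Eq. (3)): each entry is the total transmitted intensity.
B = A @ o_true

# Second-order correlation reconstruction (Eq. (4)):
# correlate the bucket-signal fluctuations with the speckle patterns.
o_rec = (A.T @ (B - B.mean())) / M
print(o_rec.reshape(m, n).shape)
```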
It is evident that when $A_{\lambda}^{T} A_{\lambda}$ is the identity matrix, the target object can be accurately reconstructed. However, this condition is difficult to achieve with a low number of measurements. To address this issue, we introduce a measurement matrix optimization method to achieve high-quality multi-wavelength computational ghost imaging. Specifically, we perform singular value decomposition (SVD) on the constructed random measurement matrix $A_{\lambda}$ to obtain the optimized measurement matrix. The decomposition of the measurement matrices for the RGB three wavelength bands can be expressed as follows:
$$A_{\lambda} = U_{\lambda} \Sigma_{\lambda} V_{\lambda}^{T}, \tag{5}$$
where $U_{\lambda}$ and $V_{\lambda}$ are two orthogonal matrices and $\Sigma_{\lambda}$ is a diagonal matrix. We choose $A_{\lambda}' = U_{\lambda} V_{\lambda}^{T}$ as the optimized illumination speckles. In our method, the second-order correlation function for the RGB three wavelength bands can be rewritten as follows:
$$O_{\lambda} = \left( A_{\lambda}' \right)^{T} B_{\lambda}', \tag{6}$$
where $B_{\lambda}'$ is the bucket signal collected under the optimized illumination speckles.
In our design method, the generated measurement matrix is an orthogonal matrix, whereas the random measurement matrix used in traditional ghost imaging generally does not satisfy orthogonality. Therefore, we can significantly improve the imaging quality of multi-wavelength computational ghost imaging while maintaining the same number of measurements. Subsequently, by fusing the intensity information at corresponding positions in the RGB three-band images, we can successfully reconstruct an image containing the color information of the target object. However, as the number of wavelength bands increases, this method also correspondingly increases the computational complexity and storage requirements of the system. To address this issue, we introduce a dimensionality reduction technique into the MWCGI system.
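A sketch of the SVD optimization in Equations (5) and (6) is given below, assuming the same NumPy setting as the previous snippet. Replacing the singular values with ones, i.e., taking $U_{\lambda} V_{\lambda}^{T}$, yields a measurement matrix with orthonormal rows, and the check at the end simply verifies that property; the sizes and variable names are illustrative.

```python
import numpy as np

M, N = 800, 1024                     # illustrative: M measurements, N = m*n pixels
rng = np.random.default_rng(1)
A = rng.random((M, N))               # random measurement matrix for one band

# Thin SVD, Eq. (5): A = U @ diag(s) @ Vt, with U of size M x M and Vt of size M x N.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Optimized measurement matrix: drop the singular values so the rows become orthonormal.
A_opt = U @ Vt

# Verify orthogonality: A_opt @ A_opt.T should be (numerically) the identity matrix.
print(np.allclose(A_opt @ A_opt.T, np.eye(M)))

# With the optimized speckles, Eq. (6) reduces the reconstruction to a single product,
# O = A_opt.T @ B_opt, where B_opt is the bucket signal measured under A_opt.
```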
Firstly, the RGB reconstructed component image is standardized using the following formula:
$$Z = \frac{X - \mu}{\sigma}, \tag{7}$$
where $X$ represents the RGB reconstructed component image matrix (of size $m \times n$), and $\mu$ and $\sigma$ are the mean and standard deviation of the data, respectively. Next, the covariance matrix of the standardized data is calculated using the following formula:
$$C = \frac{1}{m - 1} Z^{T} Z. \tag{8}$$
By performing eigenvalue decomposition on the covariance matrix $C$, a set of eigenvalues and corresponding eigenvectors can be obtained. These eigenvectors define a new coordinate system for the data. The mathematical expression for eigenvalue decomposition is as follows:
$$C w_i = \lambda_i w_i, \quad i = 1, 2, \ldots, n, \tag{9}$$
where $w_i$ is the $i$th eigenvector and $\lambda_i$ is the corresponding eigenvalue.
The eigenvalue $\lambda_i$ quantifies the variance contribution in the direction of each eigenvector, thereby determining its importance. In practical applications, the eigenvectors corresponding to the top p largest eigenvalues are often selected as the principal components to achieve dimensionality reduction. The number of principal components p selected is usually determined by a threshold on the cumulative contribution rate, which is chosen based on actual needs to ensure that most of the information is retained. It is important to note that as the number of principal components p increases, the reconstruction quality typically improves. Increasing p allows more variance information to be captured, thereby making the reconstruction results more accurate and closer to the original data. Using the selected eigenvectors, the original data can be transformed into a new feature space, which is typically achieved using the following formula:
$$Y = Z W_p, \tag{10}$$
where $Y$ is the transformed data matrix and $W_p$ is the matrix of selected eigenvectors. Subsequently, through the inverse transformation process, we can approximately reconstruct the low-dimensional principal component representation $Y$ back to the original high-dimensional data form $\hat{X}$, retaining as much of the original information as possible. The formula for this reconstruction process is:
$$\hat{X} = Y W_p^{T}, \tag{11}$$
after which the standardization in Equation (7) is reversed to return to the original intensity scale.
After dimensionality reduction, we obtain the reconstructed images for the three bands, $\hat{X}_R$, $\hat{X}_G$, and $\hat{X}_B$. Finally, by fusing the intensity information at the same positions from the RGB band images, the final color reconstructed image is generated.
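The PCA steps of Equations (7)–(11) can be summarized in a short routine. The sketch below assumes NumPy; the 64 × 64 image size and p = 9 components mirror the simulations reported later, and the random "band images" merely stand in for the second-order correlation reconstructions of the R, G, and B channels.

```python
import numpy as np

def pca_compress(X, p):
    """Reduce an m x n band image X to p principal components and reconstruct it (Eqs. (7)-(11))."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0) + 1e-12        # small offset avoids division by zero on flat columns
    Z = (X - mu) / sigma                 # standardization, Eq. (7)

    C = (Z.T @ Z) / (X.shape[0] - 1)     # n x n covariance matrix, Eq. (8)
    eigvals, eigvecs = np.linalg.eigh(C) # eigen-decomposition, Eq. (9), ascending eigenvalues

    order = np.argsort(eigvals)[::-1]    # re-sort eigenvalues in descending order
    W_p = eigvecs[:, order[:p]]          # n x p matrix of the leading eigenvectors

    Y = Z @ W_p                          # projection onto the principal components, Eq. (10)
    X_hat = (Y @ W_p.T) * sigma + mu     # inverse transform and de-standardization, Eq. (11)
    return Y, W_p, X_hat

# Illustrative use: three random 64 x 64 "band images" stand in for the R, G, and B reconstructions.
rng = np.random.default_rng(2)
bands = [rng.random((64, 64)) for _ in range(3)]
results = [pca_compress(X, p=9) for X in bands]
color = np.stack([X_hat for _, _, X_hat in results], axis=-1)   # fuse into an m x n x 3 color image
print(color.shape)
```

Note that np.linalg.eigh returns the eigenvalues in ascending order, which is why they are re-sorted before the leading p eigenvectors are selected.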
Next, we conduct a theoretical analysis of computational complexity and storage requirements. In the MWCGI system, the RGB images for the three bands are all of size $m \times n$. By applying PCA for dimensionality reduction, the subsequent computational complexity and storage requirements can be significantly reduced.
Firstly, we analyze the impact of the dimensionality reduction method on computational complexity. For each band, the complexity of calculating the covariance matrix is $O(mn^2)$, and the complexity of eigenvalue decomposition is $O(n^3)$. Since there are three bands, the total computational complexity is $O(3(mn^2 + n^3))$.
By applying PCA for dimensionality reduction, we transform the data matrix X for each band into a lower-dimensional space of dimension p. In this case, the complexities of the covariance calculation and eigenvalue decomposition in the subsequent processing are reduced to $O(mnp)$ and $O(p^3)$, respectively. The total computational complexity is $O(3(mnp + p^3))$. Thus, the complexities of these calculations are significantly reduced because $p \ll n$.
Next, we analyze the impact of the dimensionality reduction method on storage requirements. Storing each image data matrix X requires $mn$ storage units. For three bands, the total storage requirement is $3mn$.
By applying PCA for dimensionality reduction, the data matrix for each band is reduced to size $m \times p$, requiring $mp$ storage units. Additionally, the principal component eigenvector matrix $W_p$ needs to be stored, which requires $np$ storage units. For three bands, the total storage requirement is $3(mp + np)$. Considering that $p \ll n$, even with the additional storage requirement of $3np$ for the principal component eigenvector matrices, the storage needs after dimensionality reduction are still significantly reduced.
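To put numbers on these expressions, the short calculation below evaluates the storage counts for an illustrative case matching the later simulations (64 × 64 band images and p = 9 principal components). These are simple element counts under the formulas above, not measured memory footprints.

```python
# Illustrative element counts (m = n = 64, p = 9, three bands), matching the simulation settings below.
m, n, p, bands = 64, 64, 9, 3

full_storage = bands * m * n              # 3mn: the raw band images
pca_storage = bands * (m * p + n * p)     # 3(mp + np): principal component scores plus eigenvectors

print(full_storage, pca_storage)          # 12288 vs. 3456 storage units in this example
```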
To more intuitively demonstrate the comparison of the aforementioned complexities and storage requirements, we summarize them in Table 1. This makes the significant advantages of the dimensionality reduction method in terms of computational complexity and storage requirements clearer and more evident. In PCA, a small number of principal components p typically suffices to retain most of the information, as the majority of the data’s variance is concentrated in the first few principal components. For smaller data matrices, p may not be dramatically smaller relative to n, though p is still generally much smaller than n. For larger data matrices, p becomes noticeably smaller in relation to n, making the dimensionality reduction effect of PCA more pronounced. As n increases, the gap between p and n widens, and the advantages of dimensionality reduction in terms of computational complexity and storage requirements become more pronounced. Additionally, as the number of bands increases, these advantages become even more significant.
Therefore, our method ensures the quality of the color image reconstruction even under undersampling conditions, while reducing the subsequent computational complexity and storage requirements, thereby achieving efficient data processing.
3. Numerical Simulations and Discussion
To validate the effectiveness of the proposed theory, we conduct a series of numerical simulations to obtain multi-wavelength computational ghost imaging results. We select a 64 × 64 pixel color “house” image as the target object for testing. First, the original color image is decomposed into three separate wavelength bands: red, green, and blue (RGB), resulting in a grayscale image for each wavelength band. Then, the second-order correlation algorithm is used to reconstruct the image for each of these wavelengths independently. Finally, the reconstructed single-wavelength band images are fused to form a color image. The schematic diagram of the numerical simulation process for MWCGI is shown in Figure 2.
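The simulation pipeline can be sketched as follows, assuming NumPy and reusing the correlation reconstruction described in Section 2. The random color array is only a placeholder for the 64 × 64 “house” test image, and the pattern count of 1500 mirrors the measurement number used later.

```python
import numpy as np

def reconstruct_band(A, band_image):
    """Second-order correlation reconstruction of a single band (see Section 2)."""
    o_true = band_image.reshape(-1)
    B = A @ o_true                          # simulated bucket signals for this band
    return (A.T @ (B - B.mean())).reshape(band_image.shape) / A.shape[0]

# Placeholder input: a random 64 x 64 x 3 color image stands in for the real "house" target.
rng = np.random.default_rng(3)
color_obj = rng.random((64, 64, 3))
A = rng.random((1500, 64 * 64))             # 1500 speckle patterns, each reshaped to a row

# Decompose into R, G, B bands, reconstruct each independently, then fuse back into a color image.
recon = np.stack([reconstruct_band(A, color_obj[:, :, k]) for k in range(3)], axis=-1)
print(recon.shape)
```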
We use three types of speckle patterns: random, SVD, and multi-scale speckle patterns [24], and compare the quality of reconstructed color images using the second-order correlation algorithm. Figure 3 shows the results, with Figure 3a–c corresponding to the numerical simulation results of random, SVD, and multi-scale speckle patterns, respectively. From Figure 3, it can be observed that as the number of measurements increases, the reconstructed image information becomes clearer. Although Figure 3a successfully reconstructs the target object, the image quality is relatively poor with noticeable noise issues. In contrast, Figure 3b shows a significant improvement in image quality. Figure 3c shows that the optimized multi-scale speckle patterns significantly enhance the image reconstruction quality, especially at low sampling rates. With 1500 measurements, high-quality image reconstruction is achieved, highlighting the potential of the multi-scale speckle method in improving imaging efficiency.
To quantitatively evaluate the quality of reconstructed images, we use the Peak Signal-to-Noise Ratio (PSNR) as the evaluation standard. The definition of PSNR is shown as follows:
$$\mathrm{PSNR} = 10 \log_{10} \left( \frac{L^2}{\mathrm{MSE}} \right), \tag{12}$$
where $L$ represents the grayscale level of the image, which is 255 for an 8-bit image, and $\mathrm{MSE}$ denotes the mean squared error between the reconstructed image and the original image.
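As a reference, the PSNR definition above can be transcribed directly; the sketch below assumes 8-bit images stored as NumPy arrays, and the function name is our own.

```python
import numpy as np

def psnr(reference, reconstruction, L=255.0):
    """Peak signal-to-noise ratio (in dB) between two images of the same size, per Eq. (12)."""
    diff = reference.astype(np.float64) - reconstruction.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:                      # identical images: PSNR is unbounded
        return np.inf
    return 10.0 * np.log10(L ** 2 / mse)
```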
Figure 4 presents the PSNR curves for the color images reconstructed using random speckle patterns, SVD speckle patterns, and multi-scale speckle patterns under different numbers of measurements. These curves clearly demonstrate that the quality of the reconstructed images gradually improves as the number of measurements increases. Notably, the PSNR values achieved with the multi-scale speckle patterns are significantly higher than those obtained with random speckle patterns and SVD speckle patterns at low sampling rates. This result is consistent with the previous numerical simulation reconstructions, further confirming the effectiveness of multi-scale speckle patterns in enhancing reconstructed image quality.
Although the multi-wavelength computational ghost imaging method based on optimized multi-scale speckles effectively improves image quality, it also introduces significant challenges in computational complexity and storage space as the number of wavelength bands increases. To address this issue, we introduce dimensionality reduction methods into the multi-wavelength computational ghost imaging system. We provide the numerical simulation results of multi-wavelength computational ghost imaging based on feature dimensionality reduction, as shown in Figure 5. In this method, we first perform dimensionality reduction on the preliminary reconstructed images for each wavelength band until the cumulative contribution rate of each wavelength band reaches 100%, resulting in optimized single-wavelength band images. The cumulative contribution rate is the ratio of the cumulative eigenvalues of the selected eigenvectors to the total eigenvalues, used to measure the amount of original data information retained by the selected eigenvectors. Subsequently, we perform fusion calculations on these dimensionally reduced images. This process effectively reduces the data dimensionality of multi-wavelength color images while ensuring that the most critical visual information is retained, facilitating subsequent analysis or visualization. This method holds significant practical value in image processing, storage, and analysis.
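The cumulative contribution rate used here can be computed directly from the eigenvalues of the covariance matrix in Equation (8). A small helper is sketched below; the function names and the default threshold are illustrative (the simulations in this work use a 100% threshold).

```python
import numpy as np

def cumulative_contribution(eigvals):
    """Cumulative contribution rate of the eigenvalues, sorted in descending order."""
    vals = np.sort(np.asarray(eigvals, dtype=float))[::-1]
    return np.cumsum(vals) / np.sum(vals)

def components_needed(eigvals, threshold=0.99):
    """Smallest number of principal components whose cumulative contribution reaches the threshold."""
    rate = cumulative_contribution(eigvals)
    idx = np.searchsorted(rate, threshold)          # first index where the rate reaches the threshold
    return int(min(idx, len(rate) - 1) + 1)
```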
To validate our method, we conduct PCA numerical simulations on the reconstruction results of multi-scale speckles with 1500 measurements, as shown in Figure 6. In the figure, the p values represent the number of principal components, and the percentages indicate the corresponding cumulative contribution rates. As can be clearly seen from Figure 6, the image quality gradually improves as the number of principal components increases. When the first nine principal components are selected, the cumulative contribution rate reaches 100%, indicating that there is a large amount of redundant information in the data. This also validates that our method effectively preserves the main information of the image while removing redundant data.
Subsequently, we perform dimensionality reduction simulations on the data from Figure 3c, with the results shown in Figure 7. Figure 7a displays the multi-wavelength computational ghost imaging reconstruction results using multi-scale speckles, while Figure 7b presents the fused multi-wavelength color ghost imaging reconstruction results after dimensionality reduction. From the figure, we can observe that there is no significant visual difference in image quality before and after dimensionality reduction. This result indicates that by combining multi-scale speckle optimization methods with PCA, we not only reconstruct high-quality color images with a lower number of measurements but also effectively extract key information from the reconstructed images. Additionally, this method significantly reduces the dimensionality of the feature space, thereby decreasing the required storage space and computational complexity. These results further validate the effectiveness and practicality of our approach.
Then, we use the Structural Similarity Index (SSIM) [25] to evaluate the dimensionality-reduced images (as shown in Figure 8), quantifying the similarity between the reconstructed images in Figure 7a,b. The results show that when the number of principal components is 9, the SSIM values for all reconstructed images are 1. This indicates that during the compression and reconstruction process using PCA, the images after dimensionality reduction are very similar to the images before dimensionality reduction, with no significant quality loss. These results further validate the effectiveness and practicality of our method.
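For the structural similarity comparison, the sketch below relies on scikit-image; `channel_axis=-1` treats the last axis as the color channel (older scikit-image releases used the `multichannel` flag instead), and the helper name is ours.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def color_ssim(before, after):
    """SSIM between the fused color reconstructions before and after PCA dimensionality reduction."""
    data_range = float(max(before.max(), after.max()) - min(before.min(), after.min()))
    return ssim(before, after, data_range=data_range, channel_axis=-1)
```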