Article

Multi-Dimensional Fusion of Spectral and Polarimetric Images Followed by Pseudo-Color Algorithm Integration and Mapping in HSI Space

1 Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Shaanxi Key Laboratory of Information Photonic Technique, Xi’an Jiaotong University, Xi’an 710049, China
2 Non Equilibrium Condensed Matter and Quantum Engineering Laboratory, The Key Laboratory of Ministry of Education, School of Physics, Xi’an Jiaotong University, Xi’an 710049, China
3 Key Laboratory of Spectral Imaging Technology, Xi’an Institute of Optics and Precision Mechanics of CAS, Xi’an 710119, China
4 National and Local Joint Engineering Research Center of Space Optoelectronics Technology, Changchun University of Science and Technology, Changchun 130022, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Remote Sens. 2024, 16(7), 1119; https://doi.org/10.3390/rs16071119
Submission received: 1 February 2024 / Revised: 12 March 2024 / Accepted: 20 March 2024 / Published: 22 March 2024
(This article belongs to the Special Issue Remote Sensing Cross-Modal Research: Algorithms and Practices)

Abstract

Spectral–polarization imaging technology plays a crucial role in remote sensing detection, enhancing target identification and tracking capabilities by capturing both spectral and polarization information reflected from object surfaces. However, the acquisition of multi-dimensional data often leads to extensive datasets that necessitate comprehensive analysis, thereby impeding the convenience and efficiency of remote sensing detection. To address this challenge, we propose a fusion algorithm based on spectral–polarization characteristics, incorporating principal component analysis (PCA) and energy weighting. This algorithm effectively consolidates multi-dimensional features within the scene into a single image, enhancing object details and enriching edge features. The robustness and universality of our proposed algorithm are demonstrated through experimentally obtained datasets and verified with publicly available datasets. Additionally, to meet the requirements of remote sensing tracking, we meticulously designed a pseudo-color mapping scheme consistent with human vision. This scheme maps polarization degree to color saturation, polarization angle to hue, and the fused image to intensity, resulting in a visual display aligned with human visual perception. We also discuss the application of this technique in processing data generated by the channel-modulated static birefringent Fourier transform imaging spectropolarimeter (CSBFTIS). Experimental results demonstrate a significant enhancement in the information entropy and average gradient of the fused image compared to the optimal image before fusion, achieving maximum increases of 88% and 94%, respectively. This provides a solid foundation for target recognition and tracking in airborne remote sensing detection.

1. Introduction

Since the early 1960s, the utilization of spectroscopy has undergone a transformative evolution, emerging as a pivotal tool for material analysis and driving decades of progression in the maturation of spectral imaging technology [1]. This innovative methodology seamlessly integrates spectroscopy with two-dimensional imaging, giving rise to an “all-in-one” observational technique that spans a multitude of spectral bands. These images yield continuous spectral curves at each pixel, facilitating the nuanced identification and separation of diverse targets within intricate scenes through meticulous curve analysis [2,3]. Spectral imaging technology is widely employed in scientific endeavors, encompassing applications such as color enhancement [4,5], composition analysis [6,7], vegetation phenology [8,9], material identification [10,11], and object feature identification [12,13].
In parallel, the property of polarization, which changes with material states during reflection and scattering (e.g., surface roughness and conductivity), can be detected through polarimetric sensing, capturing the polarization characteristics of material surfaces. The development of polarization imaging technology offers a novel perspective for optical detection. Polarization images are not only intuitively visible but also contribute to scene analysis, bolstering target detection capabilities [14,15]. Specific photoreceptors dedicated to polarized light vision play a crucial role in numerous fields, including the separation of specular and diffuse reflections [16,17], material classification [18,19], three-dimensional reconstruction [20,21], anomaly detection [22,23], and the separation of man-made and camouflaged objects [24,25].
As potent tools for target detection, spectral and polarization images reveal material composition and surface characteristics. Their integration forms spectral–polarization imaging technology, providing four-dimensional information (spatial, spectral, polarization, and radiometric information). This approach deepens target understanding and enhances contrast with backgrounds, strengthening detection capabilities. In remote sensing, spectral–polarization imaging stands out with significant advantages [26,27]. Pronounced differences in spectral and polarization properties underscore its superiority over traditional methods. By combining polarization and hyperspectral imaging benefits, this technology offers detailed information, amplifying the contrast between targets and backgrounds. Extracting polarization spectra curves enhances detection capabilities, positioning spectral–polarization imaging with vast potential in complex remote sensing scenarios, and target detection [28,29,30].
The integration of spectral and polarization images involves establishing a mapping relationship between polarization and spectral images, amalgamating their complementary information into a cohesive representation. Fusion algorithms are classified into pixel-level, feature-level, and decision-level fusion [31,32,33]. Pixel-level fusion, a fundamental approach that processes image pixels directly, ensures diversity and integrity despite the data volume, making it the predominant method used, albeit one resulting in grayscale images. To leverage the human eye’s capacity for color discrimination, researchers have proposed various pseudo-color fusion schemes for polarization images, such as the Red–Green–Blue (RGB) color space applications in references [34,35]. The Hue–Saturation–Value (HSV)/Hue–Saturation–Intensity (HSI) color space, with independent channels, allows for single-channel transformations without disrupting relationships.
In the 1970s, studies in the literature pioneered the mapping of intensity, linear polarization intensity, and polarization angles to the HSV space, laying the foundation for subsequent polarization fusion [36]. In 1997, Wolff proposed a scheme mapping polarization degree to color saturation, polarization angle to hue, and synthetic light intensity to brightness, providing guidance for subsequent work on polarization fusion in the HSV space [37]. The general fusion process involves selecting a rule to map to a specific color space, followed by color transfer, as outlined in the literature [38,39,40,41]. Jihad et al. introduced a pseudo-color fusion method from the Bingham sphere to the color space [42], and Zhao et al. proposed the fusion of spectral and polarization image information using linear polarization degree modulation and the HSI color model [43]. In Ye Song’s study, pseudo-color fusion effectively distinguished land, sea surfaces, and buildings in polarized aerial remote sensing images at 665 nm [44]. This perceptual alignment with the human visual system enhances information richness in remote sensing images, improving target detection and land cover classification accuracy, particularly in complex scenes and varying lighting conditions. Pseudo-color fusion not only enhances visual effects but also captures scene details, providing a reliable data foundation for scientific research and practical applications across diverse fields.
In this paper, we present the fusion of multi-dimensional spectral–polarization information in remote sensing detection and the technology of pseudo-color display for the fusion results. The main contributions are twofold. Firstly, we propose a novel spectral–polarization fusion algorithm designed to merge spectral images with different polarization directions. To achieve this, a hyperspectral camera with a rotating polarizer captures spectrally polarized images of complex scenes in low-light environments. The acquired spectral information undergoes dimensionality reduction through principal component analysis to extract the first principal component. Subsequently, energy-weighted fusion is applied to various polarization feature characterizations, including polarization images, Stokes images, polarization degree maps, and polarization angle maps. Concurrently, public spectral–polarization datasets were also used to verify the algorithm. Evaluation parameters are employed to assess and compare the fusion effects, revealing that the fused image not only effectively integrates polarization information and enhances target edge details but also improves the overall image quality. Secondly, a spectral–polarization pseudo-color mapping scheme is devised. This scheme maps polarization degree to color saturation, polarization angle to hue, and the fused image information to intensity. This mapping not only enhances the visual effects of the image but also improves the capture of scene details. Additionally, we discuss the robustness of using this technology to process CSBFTIS image data. These spectral–polarization multi-dimensional information fusion and pseudo-color display technologies have laid a solid foundation for target recognition and tracking in airborne remote sensing detection.
This paper is organized as follows. Section 2 provides a brief review of spectral–polarization imaging and the color spaces employed in pseudo-color fusion. Section 3 delves into the spectral–polarization fusion algorithm, along with a novel polarization pseudo-color display method based on the fusion results, offering a comprehensive mathematical and logical explanation. Section 4 describes the acquisition of spectral–polarization images using a hyperspectral camera with a rotating polarizer and evaluates the proposed fusion algorithm using a publicly available spectral–polarization image dataset. In Section 5, we assess the effectiveness of the proposed pseudo-color fusion algorithm by applying it to spectral–polarization images obtained through the CSBFTIS principle prototype. Finally, Section 6 summarizes the research findings of this paper and proposes future research directions and suggestions.

2. Theoretical Basis

2.1. Spectral–Polarization Imaging Theory

Spectral–polarization imaging technology plays a pivotal role in capturing spectral and polarization information intricately linked to target characteristics. Through a meticulous analysis of spectral–polarization image data, we discern variations in spectral–polarization information among distinct objects. Spectral imaging aims to quantify the intensity distribution of light across different wavelengths, whereas polarization imaging distinguishes image intensities through different polarization states. Figure 1 illustrates the hierarchical structure of spectral–polarization imaging technology. Ensuring accurate spectral–polarization imaging involves two main steps: spectral tuning and polarization adjustment. The technology leverages dispersion, channel tuning, or interference to acquire spectral information, with polarization data measured using a rotating polarizer or a micro-polarization array.
Spectral imaging integrates principles from photography and spectroscopy to produce image data where individual elements (pixels) are intricately linked to spectral features. The spectral information delivered by each pixel is important for identifying, detecting, and classifying elements and structures in images. Each pixel within a spectral image typically comprises a narrow spectral band of the electromagnetic spectrum. Represented as I, the spectral image forms a three-dimensional data cube with two spatial dimensions (x-axis and y-axis, capturing image details) and a spectral dimension (z-axis), as depicted in Figure 2.
The spectral–polarization data cubes for the three experimental devices outlined in this paper are acquired employing distinct methodologies. In the first device, a polarizer is combined with a push-broom spectral camera. The procedure involves rotating the polarizer to the desired polarization angle while scanning line by line as the platform traverses the scene. This enables the capture of spectral–polarization information across various wavelengths, culminating in the construction of an image data volume with spatial and spectral–polarization dimensions. The second experimental device utilizes an imaging spectropolarimeter based on a liquid crystal tunable filter (LCTF). The operation of the LCTF and CCD camera is controlled by a program, allowing for the automatic selection of the filter’s transmission band in each polarization direction. Images are displayed, collected, and saved within each band before transitioning to the next. The entire experimental process takes approximately 1 min [45,46]. The third device involves data acquisition through the window-scanning channel-modulated static birefringent Fourier transform imaging spectropolarimeter (CSBFTIS). Initially, light reflected by the scene is collimated by the front optical system and subsequently modulated by the phase module. The modulated light then traverses the spectroscopic interference module, resulting in the emission of two coherent light beams. These beams converge onto the CCD for imaging after passing through the imaging lens, where interference phenomena occur. The signal captured by the CCD is then transmitted to the signal acquisition and processing system for subsequent analysis, details of which are elaborated upon in Section 5.
The polarization characteristics of objects are conventionally expressed through the Jones vectors, Stokes vectors, and Mueller matrices. In the realm of polarization imaging, the significance of the Stokes vectors lies in their capacity to offer a comprehensive and succinct representation. In our experimental setup, a stable polarizer affixed to a cage structure undergoes rotation through four distinct polarization directions (0°, 45°, 90°, and 135°), yielding the measured image components I0(x, y), I45(x, y), I90(x, y), and I135(x, y). Using these four linear polarization angles for polarization measurement is effective because they constitute a complete linear polarization basis, covering the horizontal, diagonal, and vertical directions; this enables an accurate description of the linear polarization state of light waves. Through measurement and analysis of polarized light at these angles, a comprehensive understanding of the light’s polarization properties, including its direction and magnitude of vibration, can be achieved. Additionally, these four standardized angle configurations yield the linear polarization components of the Stokes definition directly. Solomon’s review of single-parameter polarization imaging principles introduces the concept of multi-parameter Stokes vector imaging, demonstrating the effectiveness of measurements using linear polarizers oriented at 45° increments in remote sensing applications [47]. The Stokes vector of the image at wavelength λ is obtained through Equation (1):
$$S_{\lambda}(x,y) = \begin{bmatrix} S_{0,\lambda}(x,y) \\ S_{1,\lambda}(x,y) \\ S_{2,\lambda}(x,y) \\ S_{3,\lambda}(x,y) \end{bmatrix} = \begin{bmatrix} I_{0,\lambda}(x,y) + I_{90,\lambda}(x,y) \\ I_{0,\lambda}(x,y) - I_{90,\lambda}(x,y) \\ I_{45,\lambda}(x,y) - I_{135,\lambda}(x,y) \\ I_{R,\lambda}(x,y) - I_{L,\lambda}(x,y) \end{bmatrix} \tag{1}$$
In this context, $S_{0,\lambda}$ signifies the overall radiation intensity received by the system at wavelength λ, while $S_{1,\lambda}$ represents the difference in radiation intensity between the 0° and 90° polarization directions. $S_{2,\lambda}$ denotes the differential component between the 45° and 135° linear directions, and $S_{3,\lambda}$ indicates the intensity difference between right-handed and left-handed circular polarization at wavelength λ. A positive $S_{3,\lambda}$ value implies the dominance of right-handed circular polarization. In practical scenarios, the difference between the right- and left-handed circular polarization intensities is typically negligible and can be disregarded. For linear polarization analysis, the degree of linear polarization (DoLP) is expressed as a fraction ranging from 0 to 1, where 0 denotes no polarization and 1 represents complete linear polarization. Additionally, the Angle of Polarization (AoP) signifies the angle of the major axis of the polarization ellipse with respect to the reference direction (the optical axis) [48,49].
$$\mathrm{DoLP}_{\lambda}(x,y) = \frac{\sqrt{S_{1,\lambda}^{2}(x,y) + S_{2,\lambda}^{2}(x,y)}}{S_{0,\lambda}(x,y)} \tag{2}$$
$$\mathrm{AoP}_{\lambda}(x,y) = \frac{1}{2}\tan^{-1}\!\left(\frac{S_{2,\lambda}(x,y)}{S_{1,\lambda}(x,y)}\right) \tag{3}$$
In summary, the amalgamation of spatial, spectral, and polarization data encompasses seven distinct variables: the spatial coordinates (x, y), the wavelength (λ), and the four Stokes parameters (S0, S1, S2, S3). The mathematical description of spectral–polarization necessitates generating multiple images to thoroughly characterize the spectral–polarization states within scenes.
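To make the workflow concrete, the following minimal Python/NumPy sketch computes the linear Stokes parameters, DoLP, and AoP of Equations (1)–(3) from four polarization-channel images. The function and array names are our own illustrative choices, the epsilon guard against division by zero is an addition, and S3 is omitted because it requires the circular measurements I_R and I_L; the two-argument arctan2 is used as a numerically robust form of the arctangent in Equation (3).

```python
import numpy as np

def stokes_dolp_aop(i0, i45, i90, i135, eps=1e-12):
    """Linear Stokes parameters, DoLP, and AoP (Equations (1)-(3))
    from four polarization-channel images of identical shape."""
    i0, i45, i90, i135 = (np.asarray(a, dtype=np.float64) for a in (i0, i45, i90, i135))
    s0 = i0 + i90                                # total intensity
    s1 = i0 - i90                                # 0 deg / 90 deg difference
    s2 = i45 - i135                              # 45 deg / 135 deg difference
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)   # degree of linear polarization, Eq. (2)
    aop = 0.5 * np.arctan2(s2, s1)               # angle of polarization (radians), Eq. (3)
    return s0, s1, s2, dolp, aop
```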

2.2. Color Spaces

With the derived fused data, color spaces are usually further applied to incorporate more information about polarization into the resulting image. The foundation of HSI pseudo-color mapping for polarization lies in translating the physical properties of polarized light into visualization parameters within the color space. This approach harmonizes the unique traits of the HSI color model with the defining parameters of polarization, thereby facilitating the incorporation of polarization information into the resulting image. Specifically, the angle information of the AoP is assigned to the hue channel, depicting the direction of polarization, while the DoLP is correlated with saturation, symbolizing the purity of polarized light. Meanwhile, light intensity is linked to brightness, conveying the variations in lightness and darkness across the image. Through this mapping process, polarization data are vividly represented in color images, facilitating easier analysis and interpretation of polarized light characteristics.
The HSI color space emerges as a pivotal tool in image processing, surpassing both its RGB and HSV counterparts due to its inherent advantages. By dissecting color description into three distinct components (hue, saturation, and brightness), the HSI model aligns seamlessly with human perception, fostering an intuitive and natural approach to color manipulation. This alignment facilitates a deeper understanding and more precise manipulation of colors, resulting in visual effects that resonate more closely with human perception. Moreover, the independence of hue, saturation, and brightness within the HSI framework affords unparalleled convenience and efficiency in color adjustments. Unlike the interlinked channels of the RGB space or the partially intertwined nature of HSV components, the HSI model allows for the independent manipulation of brightness without impinging upon hue or saturation. This autonomy not only simplifies color adjustments but also enhances precision and control over image enhancement processes. In essence, the HSI color space offers researchers and practitioners a versatile and powerful toolset for achieving superior visual outcomes with enhanced efficiency and precision.
Further details regarding color spaces can be found in the literature; here, the HSI color space is summarized for completeness.
The HSI color space provides an intuitive representation of colors, depicted within a conical space model, as shown in Figure 3. The conical visualization is practical in most cases because computers typically store RGB values within a finite precision range and human color perception is itself limited. What distinguishes HSI is its separation of the intensity component from the color information encapsulating hue and saturation. The HSI space characterizes each color based on these physiological criteria:
  • Hue (H): The attribute of color perception that identifies the dominant color, expressed as an angle within a range of 0 to 360°. We designate 0° as red, with 240–360° encompassing the non-spectral colors discernible to the human eye. Conical longitudinal sections elucidate the diverse relationships between brightness and saturation for a given hue.
  • Saturation (S): It quantifies the degree to which pure color is diluted by white light, with a numerical range from 0 to 1. A color ring is delineated around a conical section, where saturation serves as the transverse axis of the radius extending through the center. Along the circumference, colors are fully saturated solids, while the center of the circle represents a neutral color with 0 saturation.
  • Intensity (I): Serving as the achromatic component, intensity gauges the amount of light in the color, providing a range from light to dark. The brightness value is measured along the axis of the cone, with values between 0 and 1; points along this axis represent completely unsaturated colors. Across the grayscale levels, the brightest point is pure white, while the darkest point is pure black.
The HSI color space presents a perceptive method for analyzing and manipulating colors, facilitating a more intuitive grasp of color attributes in various applications.
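For display, HSI channels must eventually be converted back to RGB. The sketch below implements the standard three-sector HSI-to-RGB formulas for image arrays; it is a common textbook formulation supplied here for completeness, not code from the original work.

```python
import numpy as np

def hsi_to_rgb(h_deg, s, i):
    """Convert HSI image arrays to RGB via the standard sector formulas.
    h_deg: hue in degrees [0, 360); s, i: saturation and intensity in [0, 1]."""
    h = np.deg2rad(np.asarray(h_deg, float) % 360.0)
    s = np.asarray(s, float)
    i = np.asarray(i, float)
    r, g, b = np.empty_like(i), np.empty_like(i), np.empty_like(i)
    d60, d120, d240 = np.deg2rad(60), np.deg2rad(120), np.deg2rad(240)

    m = h < d120                                   # sector 0: red -> green
    hh = h[m]
    b[m] = i[m] * (1 - s[m])
    r[m] = i[m] * (1 + s[m] * np.cos(hh) / np.cos(d60 - hh))
    g[m] = 3 * i[m] - (r[m] + b[m])

    m = (h >= d120) & (h < d240)                   # sector 1: green -> blue
    hh = h[m] - d120
    r[m] = i[m] * (1 - s[m])
    g[m] = i[m] * (1 + s[m] * np.cos(hh) / np.cos(d60 - hh))
    b[m] = 3 * i[m] - (r[m] + g[m])

    m = h >= d240                                  # sector 2: blue -> red
    hh = h[m] - d240
    g[m] = i[m] * (1 - s[m])
    b[m] = i[m] * (1 + s[m] * np.cos(hh) / np.cos(d60 - hh))
    r[m] = 3 * i[m] - (g[m] + b[m])

    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)
```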

3. The Proposed Method

3.1. Overall Process

The overall algorithm architecture proposed in this article is shown in Figure 4. Firstly, we propose a pixel-level fusion method for spectral–polarization information. The four acquired polarization images are subjected to dimensionality reduction through principal component analysis and spectral energy weighting. Subsequently, the four channels are processed using Equations (1)–(3), and the Stokes vector images are weighted by polarization energy, resulting in a fused image. Secondly, a novel pseudo-color fusion algorithm is introduced to map the image. This algorithm maps the spectral–polarization fused image, the DoLP image, and the AoP image into the HSI color space model. This approach not only enhances the visual appeal of the images but also ensures a more comprehensive understanding of the spectral–polarization information, leading to an optimized fusion outcome.

3.2. Fusion of Spectral and Polarimetric Imagery

The spectral–polarization image that we acquired encapsulated spectral–polarization information within each pixel. In comparison to window-based fusion methods, energy-weighted fusion stands out for its ability to more accurately preserve the original image’s information integrity. By assigning weights to each pixel, this method not only effectively retains information but also enhances the fused image’s quality, accentuating crucial features. It further offers flexibility by adjusting according to the energy distribution in the image, catering to diverse application scenarios.
Furthermore, to address issues related to information redundancy and spectrum duplication, we introduced the PCA method. This technique reduces data complexity and boosts computational efficiency by transforming raw spectral data into principal components. PCA achieves efficient feature extraction, preserving the most representative spectral features by selecting directions of maximum variance. Simultaneously, it diminishes noise impact in the data through correlation removal, ensuring more stable and reliable processing outcomes. Lastly, PCA’s visualization capability proved instrumental in intuitively understanding data structure and distribution. This clarity provides a lucid perspective for subsequent analyses.
When dealing with spectral images, the initial step involves preprocessing to remove potential noise and normalize the image, ensuring that subsequent analyses remain unaffected by unnecessary interference. Specifically, the bilateral filtering method was employed for denoising, leveraging both spatial proximity and pixel value similarity to preserve edge clarity while reducing noise. Additionally, image normalization was conducted by converting the image to the uint8 format and scaling it within the range [0–255], thus ensuring uniformity in brightness and contrast across different images. This step was crucial for subsequent PCA analysis, given PCA’s sensitivity to data scaling. Normalization ensured that all image data were standardized, facilitating the extraction of more representative principal components. Next, spectral curves of the targets were extracted from the processed images, representing the spectral characteristics of the targets at different angles (0°, 45°, 90°, 135°). After obtaining these curves, an analysis was performed for each channel, with the entire spectral range divided into three regions. PCA was then applied to each region.
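A minimal sketch of this preprocessing step, assuming OpenCV’s bilateral filter; the filter parameters (neighborhood diameter and the two sigmas) are illustrative values that would be tuned to the sensor, not settings reported in the paper.

```python
import cv2
import numpy as np

def preprocess_band(band):
    """Edge-preserving denoising followed by uint8 normalization
    for one spectral band (2-D array)."""
    band = band.astype(np.float32)
    denoised = cv2.bilateralFilter(band, 5, 25, 5)   # spatial/range smoothing, keeps edges
    normalized = cv2.normalize(denoised, None, 0, 255, cv2.NORM_MINMAX)
    return normalized.astype(np.uint8)               # uniform [0, 255] scale for PCA
```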
In the PCA process, the data matrix S, structured as $[x_{ij}]_{M \times N}$, captures information from N pixels across M original spectral dimensions of the original image. Each row of S corresponds to a distinct band of the spectral data. PCA is mathematically represented by the equation $P_c = WS$, where W is the transformation matrix, S is the original image matrix, and $P_c$ is the matrix representing the dimension-reduced image. The transformation matrix W is computed by solving the eigenvalue problem $(\lambda I - C)E = 0$, where λ is an eigenvalue, I is the identity matrix, and C is the covariance matrix of the input image. The covariance matrix C is calculated as follows:
$$C = \frac{1}{N-1} S S^{T} \tag{4}$$
Subsequently, we solved for the eigenvectors E and eigenvalues λ of the matrix C, with the eigenvalues ordered in descending order: $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_M$. The eigenvectors satisfy the orthogonality condition $E^{T}E = EE^{T} = I$, which facilitates the formation of the transformation matrix W, with each column representing an eigenvector. The transformation matrix W is the inverse of the eigenvector matrix E:
$$W = E^{-1} \tag{5}$$
In practice, for the dimension-reduced image Pc, we selected only the first ‘k’ eigenvectors corresponding to the ‘k’ largest eigenvalues. This resulted in a subset of W, denoted as Wₖ. Thus, the dimension-reduced image is represented as follows:
$$P_c = W_k S \tag{6}$$
where Wk is the matrix containing the first ‘k’ eigenvectors, and Pc embodies the principal components that capture the most significant variance in the data. In a concise representation, the essence of the PCA method is encapsulated in this expression:
$$[P_c, W] = \mathrm{PCA}(S, k) \tag{7}$$
Here, Pc encapsulates the principal components, and W signifies the transformation matrix that projects the original data onto the space defined by these components. This approach effectively distills the critical information within the image data, paving the way for enhanced analysis or subsequent image processing tasks.
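The following sketch realizes this decomposition with NumPy’s symmetric eigensolver. Mean-centering of each band before forming the covariance is assumed (a standard PCA step the text leaves implicit), and because the eigenvector matrix is orthonormal, the inversion in Equation (5) reduces to a transpose.

```python
import numpy as np

def pca_reduce(S, k=1):
    """PCA on a band-by-pixel matrix S (M bands x N pixels):
    eigendecompose C = S S^T / (N - 1) and keep the k leading components."""
    S = np.asarray(S, dtype=np.float64)
    Sc = S - S.mean(axis=1, keepdims=True)       # center each band
    C = Sc @ Sc.T / (S.shape[1] - 1)             # covariance, Equation (4)
    eigvals, eigvecs = np.linalg.eigh(C)         # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]            # sort descending
    W_k = eigvecs[:, order[:k]].T                # k leading eigenvectors as rows
    P_c = W_k @ Sc                               # reduced image, Equation (6)
    return P_c, W_k

# Usage on a (M, H, W) spectral cube: flatten to (M, H*W), reduce, reshape back.
# P_c, _ = pca_reduce(cube.reshape(cube.shape[0], -1), k=1)
# pc1 = P_c[0].reshape(cube.shape[1], cube.shape[2])   # first principal component
```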
After the PCA dimensionality reduction process, the first principal component was utilized as the output result, followed by energy weighting processing. The computation involves determining the sum of the squares of the gray values at each pixel location across all images. A weight is then assigned to each pixel by dividing the square of its gray value by the sum of squares at that location. The ultimate fusion result is achieved by multiplying the pixel value in each image by the weight assigned to its corresponding position. The mathematical expression for this process is given in Equation (8).
$$F(i,j) = \sum_{\lambda} \frac{a_{\lambda}^{2}(i,j)}{E(i,j)}\, a_{\lambda}(i,j) \tag{8}$$
where λ represents the wavelength of light, and $a_{\lambda}(i,j)$ represents the grayscale value of the pixel at location $(i,j)$ in the image at wavelength λ. The normalization term $E(i,j)$ is calculated as follows:
$$E(i,j) = \sum_{\lambda} a_{\lambda}^{2}(i,j) \tag{9}$$
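A compact sketch of Equations (8) and (9), assuming the band images are stacked along the first axis of a single array; the epsilon guard for all-zero pixels is our addition.

```python
import numpy as np

def energy_weighted_fusion(bands, eps=1e-12):
    """Fuse a (L, H, W) stack of band images per Equations (8)-(9):
    each pixel is weighted by its squared gray value over the
    sum of squares across bands at that location."""
    a = np.asarray(bands, dtype=np.float64)
    E = np.sum(a**2, axis=0) + eps         # E(i, j), Equation (9)
    return np.sum((a**2 / E) * a, axis=0)  # F(i, j), Equation (8)
```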
After fusing each polarization channel, the Stokes images, DoLP image, and AoP image are generated using Equations (1)–(3). Subsequently, the energy weighting of the Stokes vector images is performed using Equation (10). This process adjusts the relative importance of the different Stokes parameters based on their energy contributions, with the aim of enhancing valuable information while suppressing noise. By prioritizing components with higher energy levels, this approach refines the representation of polarization information, emphasizing important polarization features and weakening irrelevant ones. It not only preserves the information of the original spectral–polarization image to the greatest extent possible in the fused result but also increases the contrast with the calculated DoLP and AoP images, providing more information for the subsequent color mapping step. This comprehensive process ensures the deliberate fusion of spectral information from all angles, resulting in a more intricate representation of target spectral features and information.
$$F_{\mathrm{Stokes}} = \frac{E_{S_0} \times S_0 + E_{S_1} \times S_1 + E_{S_2} \times S_2}{S_0 + S_1 + S_2} \tag{10}$$
$$E_{D,\lambda} = \frac{1}{M \times N} \sum_{i=1}^{M \times N} Y_{D,\lambda}^{2}(i) \tag{11}$$
Here, $Y_{D,\lambda}(i)$ is the gray value of pixel $i$ in band $D$, where $D \in \{S_0, S_1, S_2\}$.
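A corresponding sketch for Equations (10) and (11); the denominator follows Equation (10) as written, and the epsilon guard against zero-valued pixels is our addition.

```python
import numpy as np

def stokes_energy_fusion(s0, s1, s2, eps=1e-12):
    """Energy-weight the Stokes images per Equations (10)-(11):
    E_D is the mean squared gray value of band D in {S0, S1, S2}."""
    E = {name: np.mean(np.asarray(img, np.float64)**2)
         for name, img in (("S0", s0), ("S1", s1), ("S2", s2))}
    num = E["S0"] * s0 + E["S1"] * s1 + E["S2"] * s2
    return num / (s0 + s1 + s2 + eps)   # denominator as written in Equation (10)
```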

3.3. HSI Space Fusion

Having obtained the fused image data, we further applied the HSI color space method to include more polarization information, which allows for better contrast at object edges. Our starting point was Wolff’s fusion approach [37], which maps images captured from partially polarized light onto images encoded using the HSI color scheme. By establishing a pertinent relationship between polarized and spectral information and devising distinct mapping rules based on the unique physical meanings of the H, S, and I channels, we introduce a novel fusion method aimed at optimizing the overall fusion effectiveness. Figure 5 shows the mapping function.
The fusion rules within the three channels are outlined as follows (a minimal mapping sketch follows the list):
(a) The AoP image, containing wavelength information, is mapped to the H channel, determining the pixel’s color.
(b) The DoLP image, also containing wavelength information, is mapped to the S channel, and pixel values in the S channel are subjected to thresholding. The threshold is adjusted to normalize and maximize the saturation value in the target area while minimizing the saturation in the non-target area. This maximizes the saturation difference between different targets, thereby enhancing the fusion effect.
(c) The spectral–polarization fused image is assigned to the I channel. I-channel fusion aims to improve the overall brightness of the image, facilitating image visualization.
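A minimal sketch of rules (a)–(c); the saturation threshold value is an illustrative stand-in for the tuned threshold described in rule (b), and the returned channels can be rendered with the hsi_to_rgb() sketch from Section 2.2.

```python
import numpy as np

def map_to_hsi(aop, dolp, fused, sat_thresh=0.8):
    """Map AoP -> H, DoLP -> S, and the fused image -> I per rules (a)-(c)."""
    h = ((aop + np.pi / 2) / np.pi * 360.0) % 360.0      # AoP (-pi/2, pi/2] -> hue [0, 360)
    s = np.clip(dolp / sat_thresh, 0.0, 1.0)             # thresholded, normalized saturation
    i = (fused - fused.min()) / (np.ptp(fused) + 1e-12)  # fused image -> intensity [0, 1]
    return h, s, i
```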

4. Experiments and Results

In this section, the initial segment focuses on employing hyperspectral and rotating polarizer methods to capture spectral–polarization data in complex multi-target scenes. The subsequent part provides an overview of online public datasets. The robustness and applicability of the algorithm across diverse scenarios are verified through the utilization of these two datasets. The evaluation of the proposed algorithm incorporates assessments based on visual quality and objective evaluation indicators. Finally, the fused image and polarization parameter maps are transposed to the HSI space using the pseudo-color mapping scheme introduced in the preceding section. This concluding step enhances interpretability and visually presents the algorithm’s output within the specified color scheme.

4.1. Hyperspectral–Polarization Camera Dataset

A cage structure has been incorporated in front of the hyperspectral camera, and a rotating polarizer is employed to capture spectral–polarization images in complex scenes. The configuration of the hyperspectral camera is depicted in Figure 6. The initial step involves selecting specific points for each target and background within the scene to generate the spectral curve graph. The schematic representation of spectral data points in the complex scene is presented in Figure 7. In this illustration, data points 0 and 1 represent cement roads, data points 2 and 3 represent an asphalt road, data point 4 represents a manhole cover, point 5 represents a deep green board, and data point 6 represents light green cardboard. Furthermore, data points 7, 9, and 11 represent real grass; data point 8 represents fake lawn; data point 10 represents the presence of a green car; and data point 12 represents a stone brick road.
After rotating the polarizer, we employ software to govern the gimbal push-scan device, imaging the scene and acquiring a data cube containing spectral information at various wavelengths. Subsequently, for each wavelength, intensity data from each point in the image are extracted and amalgamated to construct a spectral curve. Firstly, we conducted a preprocessing step involving smoothing and denoising on the spectral intensity information obtained from the selected points. Our goal was to enhance the quality and interpretability of the data. In this preprocessing procedure, we deliberately opted for a spectral curve processing technique that integrates both the moving average smoothing method and the wavelet transform method, commonly referred to as “wavelet threshold denoising”. We applied spectral curve smoothing and wavelet transformation specifically using the “sym8” wavelet. The wavelet transform method excels at capturing the local characteristics of the signal, while the moving average contributes to further smoothing the overall trend. This dual approach facilitates denoising while retaining the crucial features of the signal. Such a strategic choice not only aids in noise reduction but also provides a more comprehensive overview of the intrinsic features embedded in the spectral data. Figure 8 showcases the results of smoothing using the wavelet threshold denoising method.
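A sketch of this denoising step using PyWavelets with the ‘sym8’ wavelet; the decomposition level, the universal-threshold rule for setting the threshold, and the moving-average window length are illustrative assumptions rather than the paper’s exact settings.

```python
import numpy as np
import pywt

def denoise_spectral_curve(curve, wavelet="sym8", level=4, window=5):
    """Wavelet threshold denoising followed by moving-average smoothing
    of a 1-D spectral curve."""
    coeffs = pywt.wavedec(curve, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise estimate, finest scale
    thr = sigma * np.sqrt(2.0 * np.log(len(curve)))           # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    denoised = pywt.waverec(coeffs, wavelet)[: len(curve)]    # back to signal domain
    kernel = np.ones(window) / window
    return np.convolve(denoised, kernel, mode="same")         # moving-average smoothing
```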
Upon inspecting the spectral curve, it is evident that our detector exhibits a response range from 400 to 1000 nm. Different target types display distinct wave peaks, with artificial targets and backgrounds exhibiting characteristic peaks, while natural backgrounds showcase a relatively smooth distribution across the entire spectrum. Notably, a peak at 570 nm (indicated by the red line in Figure 8) is observed for grassland (points 7 and 9), aligning with the well-known fact that chlorophyll absorption peaks in plants predominantly occur in the blue-light region (around 430–450 nanometers) and the red-light region (around 640–680 nanometers). The intervening green-spectrum region, approximately at 570 nm, demonstrates weaker chlorophyll absorption, leading to a higher reflection of green light and, consequently, the characteristic green color of plants.
Additionally, another concentrated peak emerges at 780 nm. We can discern three distinct segments in the overall spectral trend, as illustrated in the figure. This segmentation will serve as a foundational reference for subsequent partition fusion, aiding in the nuanced analysis of the spectral characteristics for various targets.
We then rotate the polarizer to capture hyperspectral images corresponding to four polarization channels (0°, 45°, 90°, 135°). The spectral sampling interval is 1.2 nm, resulting in a total of 478 images. To provide a specific example, let us focus on the 0° polarization channel, illustrated in Figure 9. These images are segmented into three parts based on the delineated boxes in Figure 8. This segmentation strategy enables the concurrent processing of the three parts, optimizing processing speed while retaining the intrinsic spectral–polarization characteristics to the greatest extent possible.
In Figure 10, the outcomes of polarization image processing, grounded in the polarization angle, and the fusion of distinct spectral regions are depicted. The figure comprises 12 sub-pictures arranged in four rows and three columns. Each row signifies a specific polarization angle (0°, 45°, 90°, and 135°, respectively), while each column represents the PCA processing of three parts within the corresponding polarization channel. The first principal component is then selected and designated as the output for each corresponding segment. This partition design better exploits the spectral characteristics of the targets within the imaging scene. First, dividing the entire band into three parts decomposes complex spectral information into easier-to-process segments, reducing processing complexity. Second, the partition design reduces the data dimension, and the three parts can be processed in parallel, improving the efficiency and speed of data processing. Most importantly, the information near each spectral peak is retained as much as possible, so that the data features remain accurate even after dimensionality reduction. An observation from the figure reveals that this design effectively retains the information of each target within every spectral region. It can also be seen that the types of targets highlighted in different regions differ, which is consistent with our theoretical expectations. Subsequently, the energy weighting method is applied to merge the three segments into a spectral–polarization image, as illustrated in Figure 11. The visual representation accentuates distinct brightness levels for targets within each polarization channel, particularly emphasizing artificial targets and making them notably conspicuous.
Following this, Equation (1) is employed to derive the Stokes vector images, presented in Figure 12. Equations (2) and (3) are utilized to generate the DoLP image and AoP image of the complex scene, both displayed in Figure 13. Significantly, the fused S0 image, which represents intensity, exhibits markedly improved clarity, yielding visually striking results. The differential S1 and S2 images exhibit a darkening effect attributable to the minimal disparity in the overall polarization characteristics of the scene’s background. The polarization parameter maps clearly show sharp transitions where pixels of different target types meet, reflecting the edge-enhancing effect of polarization. Moreover, the amalgamation of spectral information imparts diverse colors to targets, manifesting through distinct grayscale values in the image. In summary, the proposed approach notably enhances the visual quality of images.
This paper adopts four quantitative indicators widely used in the field of spectral–polarization image visualization, namely $\bar{\mu}$, σ, EN, and AG, to measure the performance of different methods [50]. For convenience, a brief description of the four objective indicators follows:
(i) Gray Mean denotes the average of all pixel values in an image. The calculation formula is expressed as follows:
$$\bar{\mu} = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} F(i,j) \tag{12}$$
Here, $\bar{\mu}$ represents the average grayscale value; M and N, respectively, represent the numbers of rows and columns of the image; and $F(i,j)$ represents the grayscale value of the pixel located at position $(i,j)$. A moderate mean value contributes to maintaining a favorable visual effect.
(ii) Gray Standard Deviation gauges the dispersion of pixel grayscale values, akin to the standard deviation in statistics. The formula is defined as follows:
$$\sigma = \sqrt{\sum_{g=0}^{L-1} \left(g - \bar{\mu}\right)^{2} p(g)} \tag{13}$$
Here, σ represents the standard deviation, g represents a grayscale level, L represents the number of grayscale levels (e.g., 256 for the range 0–255), and $p(g)$ represents the ratio of the number of pixels with grayscale level g to the total number of pixels in the image. A larger standard deviation indicates more dispersed grayscale values, enhancing the observation of image contrast.
(iii) Entropy quantifies the information content within an image. The calculation formula is given as follows:
$$EN = -\sum_{g=0}^{L-1} p(g) \log_{2} p(g) \tag{14}$$
Here, EN represents the entropy of the image. Higher entropy implies richer information content and superior visual effects.
(iv) Average Gradient centers on the change trend between adjacent pixels, providing insights into small details and texture structure. The formula is articulated as follows:
$$AG = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \sqrt{\Delta_{x} G^{2}(i,j) + \Delta_{y} G^{2}(i,j)} \tag{15}$$
Here, AG represents the average gradient of the image, while $\Delta_{x}G$ and $\Delta_{y}G$, respectively, represent the gradients of the image in the x and y directions. A larger average gradient signifies greater disparities between adjacent pixels, thereby enhancing image clarity and detail representation.
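The four indicators follow directly from their definitions. The sketch below assumes an 8-bit single-channel image and uses a simple numerical gradient for Equation (15).

```python
import numpy as np

def image_metrics(img, levels=256):
    """Gray mean, gray standard deviation, entropy, and average gradient
    (Equations (12)-(15)) of a 2-D uint8 image."""
    f = img.astype(np.float64)
    mu = f.mean()                                    # Equation (12)
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()                            # gray-level probabilities
    g = np.arange(levels)
    sigma = np.sqrt(np.sum((g - mu) ** 2 * p))       # Equation (13)
    nz = p > 0
    en = -np.sum(p[nz] * np.log2(p[nz]))             # Equation (14)
    gy, gx = np.gradient(f)                          # finite-difference gradients
    ag = np.mean(np.sqrt(gx**2 + gy**2))             # Equation (15)
    return mu, sigma, en, ag
```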
We conducted a thorough calculation of parameter values for each image based on the specific formulas associated with each evaluation parameter. The selection of the original image was centered on the 0° polarization image at 522 nm, primarily due to its optimal performance in spectral curves and visual perception. The resulting parameter values for each image are meticulously detailed in the table below.
The improvements in parameters post-fusion are evidently highlighted in Table 1. A meticulous examination of the grayscale average parameters necessitates careful consideration of changes in the grayscale standard deviation. Specifically, an increase in the grayscale mean holds significance only if the grayscale standard deviation remains constant or rises. In such cases, the heightened gray average results in a brighter overall image, contributing to a more favorable visual outcome. Despite the polarization parameter image presenting an overall darker tone and lower entropy value, the concurrently high σ and AG can be attributed to its heightened sensitivity to edge changes, particularly in the AoP image. This sensitivity leads to substantial variations in local details. However, a comprehensive comparison underscores the superior performance of fused S0 images. Post-fusion, noticeable enhancements manifest across various parameters, indicating an overall improvement in image quality. These improvements are instrumental in facilitating the clear distinction of targets by the human eye.
Figure 14 depicts the pseudo-color representation results obtained through three distinct mapping strategies. The RGB method, proposed in reference [35], assigns the fused image to the red channel, DoLP to the green channel, and S1 to the blue channel. Similarly, the HSV method, introduced in reference [37], maps AoP to H, DoLP to S, and the fused image to V. Our proposed approach involves mapping AoP, DoLP, and the fused image to the H, S, and I channels, respectively, facilitating the comprehensive visualization of various polarization data. By amalgamating spectral–polarization images in the HSI color space, detailed target analysis can be conducted based on the image’s brightness characteristics. The spectral intensity features of target reflection serve as the basis for meticulously examining the polarization angle characteristics through color features. Changes in hue indicate alterations in the polarization angle, particularly noticeable at boundaries, which appear greenish. Saturation variations denote the degree of polarization, clearly distinguishing between different targets, especially artificial cardboard, fake grass, and real grass. Brightness changes indicate light intensity, with the brightest areas corresponding to the highest light intensity. The large-scale color variations in the image are primarily attributed to the irregular surfaces of objects such as weeds and road surfaces. These irregular surfaces alter the structure at observation points, consequently affecting the polarization characteristics. In contrast, artificial materials exhibit smoother surfaces, resulting in more uniform colors in the image. The visualization outcomes of the RGB and HSV methods evidently differ from those of our approach: they inadequately capture the polarization information and lack a comprehensive description of details. This observation aligns with the theoretical analysis of the three methods’ strengths and limitations presented in Section 2.2. Figure 14 visually demonstrates that the fusion of polarization images within the HSI color space not only preserves the low-frequency characteristics inherent in the spectral image but also accentuates the polarization features. This enhancement significantly improves image target recognition capabilities.

4.2. Online Public Datasets

To further validate the applicability and robustness of our algorithm, experiments were conducted utilizing the online public dataset released by Zhao from Northwestern Polytechnical University. The dataset was obtained using the LCTF polarization spectrum imaging method, with the acquisition and processing procedure detailed in references [45,46]. The original dataset comprises 33 images in each of four polarization channels, covering wavelengths from 400 to 720 nm at 10 nm intervals.
Figure 15 displays four polarization images at 600 nm for angles 0°, 45°, 90°, and 135°. The overall imaging appears under low illumination, with the 45° and 90° polarized images appearing brighter, while the 0° and 135° polarized images are darker. This phenomenon is attributed to the liquid crystal modulation achieving a higher extinction ratio. By utilizing the algorithm proposed in this article, a fusion-enhanced map containing spectral–polarization information is obtained.
Figure 16 presents the results following the processing of Zhao’s public dataset. As illustrated in the figure, artificial targets are more pronounced in the DoLP and AoP images. Although the target outlines are clearly depicted, the details appear significantly blurred, accompanied by an overall low grayscale, which proves less conducive to human eye observation. In contrast, the fused S0 image exhibits a rich display of details and delivers a visually effective outcome, as substantiated by the objective evaluation data shown in Table 2. Additionally, when comparing the three pseudo-color mapping methods, it becomes evident that while the RGB method yields a clearer overall resolution, the resulting color tends to be reddish and does not accurately reflect the target. Although the image obtained through the HSV method exhibits improved details, the presence of V-channel information blended into the H and S channels compromises the accuracy of the image’s detail presentation. Conversely, the HSI image, via post-color mapping, enhances the target’s color, effectively highlighting both the target’s outline and details. The distinct color contrast facilitates easy target identification, enabling the human eye to quickly focus on the target within the image.

5. Discussion

The effectiveness of the fusion and display algorithms proposed in this paper has been validated in the previous section, emphasizing their significant impacts on image processing. Building on this success, we plan to integrate this approach into the subsequent processing of results obtained from our research group’s CSBFTIS [51]. This algorithm aims to enhance the target recognition and tracking capabilities of airborne CSBFTIS during remote sensing detection.
The structural composition of CSBFTIS, as depicted in Figure 17, includes a front lens system, phase retarders R1 and R2, polarizers P1 and P2, double Wollaston prisms WP1 and WP2, an imaging mirror L, and a CCD array detector. The optical axes of P1 and P2 form a 45° angle with the positive x-axis. The fast axes of R1 and R2 are oriented at angles of 45° and 0°, respectively, with the positive x-axis. The optical axes of WP1’s left wedge and WP2’s right wedge are parallel to the paper, while those of WP1’s right wedge and WP2’s left wedge are perpendicular to the paper. The CCD is positioned at the back focal plane of the imaging lens L.
After passing through the front lens system and collimation, the target light enters the phase modulation module consisting of R1 and R2 for phase modulation. The modulated light transforms into linearly polarized light through P1 and is then split into two beams with a certain lateral shearing amount by the double Wollaston prisms WP1 and WP2, propagating in parallel with equal amplitude and perpendicular vibration directions. After passing through P2, the two beams of linearly polarized light become parallel light with equal amplitudes and the same polarization direction, eventually converging onto the CCD to form an interference pattern.
In the CCD plane, the optical path difference Δ is equal in the x-axis direction, while it varies with the incident angle in the y-axis direction. By employing a window scanning method (spatiotemporal mixed modulation), the entire system uniformly translates in the y-axis direction, ensuring that the change in the optical path difference is synchronized with the uniform variation in the incident angle along the y-axis. As a result, the CCD captures the interference patterns of the target at different optical path differences. The acquired interference patterns are then reorganized to obtain the complete interference pattern of each spatial element. Finally, background subtraction, noise reduction, thresholding, and Fourier inverse transformation operations are applied to obtain the spectral and polarization information of the target. The distinctive feature of this dataset, setting it apart from the previous ones, lies in the inclusion of a complete polarization spectrum. Notably, it encompasses an additional circular polarization component, denoted as S3.
The specific process by which the device obtains the spectral–polarization data cube is as follows: assume the Stokes vector of the incident light is $[S_0(\sigma), S_1(\sigma), S_2(\sigma), S_3(\sigma)]^{T}$ and that the system operates within the spectral range $[\sigma_1, \sigma_2]$. The intensity of the imaging interference pattern directly acquired on the CCD is expressed as follows:
$$I(\Delta) = \frac{1}{8} \int_{\sigma_1}^{\sigma_2} \left[1 + \cos(2\pi\sigma\Delta)\right] \left[S_0 + S_1 \sin\varphi_1 \sin\varphi_2 + S_2 \cos\varphi_2 + S_3 \cos\varphi_1 \sin\varphi_2\right] \mathrm{d}\sigma \tag{16}$$
The first term is a constant unrelated to the optical path difference (the angle of incident light). It represents the background signal and contains no interference information. This term can be eliminated using a background removal algorithm. The second term corresponds to the interference caused by the optical path difference. By applying Euler’s formula to convert the trigonometric functions associated with phase modulation into exponentials and rearranging the equation, we obtain the following equation:
$$I(\Delta) = \frac{1}{8} \int_{\sigma_1}^{\sigma_2} \cos(2\pi\sigma\Delta) \left[S_0 + \frac{1}{2} S_2 \left(e^{i\varphi_2} + e^{-i\varphi_2}\right) + \frac{1}{4} \left(S_{13} e^{i(\varphi_1 - \varphi_2)} + S_{13}^{*} e^{-i(\varphi_1 - \varphi_2)}\right) - \frac{1}{4} \left(S_{13} e^{i(\varphi_1 + \varphi_2)} + S_{13}^{*} e^{-i(\varphi_1 + \varphi_2)}\right)\right] \mathrm{d}\sigma \tag{17}$$
Here, $S_{13} = S_1 + iS_3$, where * denotes complex conjugation. The equation indicates that, owing to the action of the phase modulation module, the four components of the incident light’s Stokes vector are modulated by different phase factors, forming seven independent channels with center frequencies of $0, \pm\varphi_2, \pm(\varphi_1 - \varphi_2), \pm(\varphi_1 + \varphi_2)$. The interference patterns of the seven channels are segmented accordingly. The expressions for each channel are as follows:
$$C_0 = \frac{1}{8} \int_{\sigma_1}^{\sigma_2} \cos(2\pi\sigma\Delta)\, S_0 \, \mathrm{d}\sigma \tag{18}$$
$$C_1 = \frac{1}{32} \int_{\sigma_1}^{\sigma_2} \cos(2\pi\sigma\Delta)\, S_{13}\, e^{i(\varphi_1 - \varphi_2)} \mathrm{d}\sigma, \qquad C_1^{*} = \frac{1}{32} \int_{\sigma_1}^{\sigma_2} \cos(2\pi\sigma\Delta)\, S_{13}^{*}\, e^{-i(\varphi_1 - \varphi_2)} \mathrm{d}\sigma \tag{19}$$
$$C_2 = \frac{1}{16} \int_{\sigma_1}^{\sigma_2} \cos(2\pi\sigma\Delta)\, S_2\, e^{i\varphi_2} \mathrm{d}\sigma, \qquad C_2^{*} = \frac{1}{16} \int_{\sigma_1}^{\sigma_2} \cos(2\pi\sigma\Delta)\, S_2\, e^{-i\varphi_2} \mathrm{d}\sigma \tag{20}$$
$$C_3 = -\frac{1}{32} \int_{\sigma_1}^{\sigma_2} \cos(2\pi\sigma\Delta)\, S_{13}\, e^{i(\varphi_1 + \varphi_2)} \mathrm{d}\sigma, \qquad C_3^{*} = -\frac{1}{32} \int_{\sigma_1}^{\sigma_2} \cos(2\pi\sigma\Delta)\, S_{13}^{*}\, e^{-i(\varphi_1 + \varphi_2)} \mathrm{d}\sigma \tag{21}$$
Based on the principles of Fourier transform spectroscopy, the channels are segmented and then subjected to individual inverse Fourier transforms to obtain the reconstructed spectral expressions. Here, $F^{-1}$ denotes the inverse Fourier transform, while real(·) and imag(·) represent the real and imaginary parts, respectively. Considering the conjugate relationship, both C1 and C3 contain information from S1 and S3. Hence, it is sufficient to perform spectral segmentation and the inverse Fourier transform only on channels C0, C1, and C2 to obtain all the information regarding the Stokes polarization spectrum. The demodulation process essentially entails performing an inverse Fourier transform of the associated phase modulation data, as outlined below:
$$S_0 = 8 F^{-1}\{C_0\}, \qquad S_2 = 16 F^{-1}\{C_2\}\, e^{-i\varphi_2}, \qquad S_1 = 32\,\mathrm{real}\!\left(F^{-1}\{C_1\}\, e^{-i(\varphi_1 - \varphi_2)}\right), \qquad S_3 = 32\,\mathrm{imag}\!\left(F^{-1}\{C_1\}\, e^{-i(\varphi_1 - \varphi_2)}\right) \tag{22}$$
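The sketch below is a hypothetical rendering of this demodulation for a single interferogram line: the channel band slices and the phase arrays phi1 and phi2 stand in for calibrated values that a real instrument would supply, and background subtraction, noise reduction, and thresholding are assumed to have been applied beforehand.

```python
import numpy as np

def demodulate_stokes(interferogram, c0_band, c1_band, c2_band, phi1, phi2):
    """Channel-splitting demodulation per Equation (22): transform the
    interferogram to the spectral domain, segment C0, C1, and C2, and
    strip the phase factors to recover the Stokes spectra."""
    spec = np.fft.fft(np.asarray(interferogram, np.float64))
    s0 = 8.0 * np.abs(spec[c0_band])                          # unmodulated channel -> S0
    s2 = np.real(16.0 * spec[c2_band] * np.exp(-1j * phi2))   # phase-corrected C2 -> S2
    c1 = 32.0 * spec[c1_band] * np.exp(-1j * (phi1 - phi2))   # C1 carries S13 = S1 + i*S3
    return s0, np.real(c1), s2, np.imag(c1)                   # (S0, S1, S2, S3)
```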
As shown in Figure 18a, we conducted experiments using CSBFTIS in natural light environments. The scene covered various targets, including trees, grass, cars, and roads. The scanning process effectively gathered target information by adjusting the spectrometer’s position along the scanning direction. In all interference patterns, the position of the interference fringes remains consistent, while the intensity image of the target shifts. The subtle curvature observed in the image is attributed to the optical path difference (OPD) distribution of the Wollaston prism, which is reflected in the interference pattern. According to the principle of spatiotemporal mixed modulation scanning, the interference data of a specific point in the image are solely affected by the angular change in the scanning direction and are independent of the vertical direction of scanning. Each imaging session specifically captures information about a column of geographic elements at a particular angle. The image captured by the CSBFTIS system is a combination of intensity information and interference fringes. Figure 18b–d illustrate three example interference patterns.
The acquired interference patterns undergo data processing to extract spectral curves corresponding to each target point, so as to facilitate the use of the target spectral curve for partition processing. The operational band of this prototype spans 400–1000 nm; however, due to weakened light intensity at both ends of the range, numerous noise points are present. In this study, only spectral–polarization images within the central 500–900 nm band are utilized for fusion. As depicted in Figure 19, the DoLP and AoP images distinctly outline the target contours, yet details appear notably blurred, accompanied by an overall low grayscale, limiting favorable human eye observation. It is worth noting that the fused S0 image reveals intricate details, delivering visually effective outcomes. This finding is corroborated by the objective evaluation data presented in Table 3. The images obtained via the RGB and HSV methods still exhibit uniform overall coloration, lacking additional display details. In the final step, the HSI image undergoes color mapping to enhance target color, effectively accentuating both the outline and details of the target. Given the device’s ability to detect S3, the polarization characteristics of the target are further highlighted, resulting in a sharp color contrast that facilitates easy object identification. This feature allows the human eye to swiftly focus on objects within the image.

6. Conclusions and Perspectives

In conclusion, this paper presents a novel spectral–polarization fusion algorithm, incorporating PCA and energy weighting, to produce information-rich fused images. The subsequent introduction of a pioneering pseudo-color mapping scheme enhances the visual representation by mapping polarization degree to color saturation, polarization angle to hue, and overall fused image information to intensity. Validation through three distinct spectral–polarization image data acquisition methods underscores the efficiency and robustness of the proposed technology. The processing results of the three datasets demonstrate that, following the application of the spectral–polarization fusion algorithm, the information entropy of the fused image and the original optimal single-wavelength single-polarization images increase by approximately 3%, 88%, and 14%, respectively. Additionally, the average gradient experiences increases of approximately 68%, 3%, and 94% across the three datasets. Notably, the application of the pseudo-color fusion scheme results in superior visual effects and image quality compared to the original dataset. Furthermore, our proposed algorithm stands out for its ability to efficiently consolidate complex large datasets into a single comprehensive image, offering a novel perspective in data processing. This fusion technology not only enhances the visual appeal of images but also augments the capture of scene details, presenting innovative possibilities in the field of remote sensing data processing. Overall, the findings affirm the high efficiency, robustness, and enhanced performance of the presented spectral–polarization fusion algorithm and its potential to significantly contribute to the advancement of image processing in remote sensing applications.
Future research will focus on developing a more advanced fusion algorithm that adaptively represents diverse target features within a scene, thereby improving spatial resolution, spectral fidelity, and noise suppression. In particular, adaptive partitioning can be applied to specific scenes, with different processing methods employed for different regions to better highlight target features. We also plan to integrate this algorithm with the CSBFTIS, with special attention to the challenge of real-time data acquisition during airborne operations; this integration is crucial for bringing true polarization display technology to remote sensing detection, with significant potential to improve target discrimination, environmental monitoring, and land cover classification accuracy. Further avenues include multi-modal fusion techniques, machine learning algorithms for automated feature extraction, and the development of novel sensors and imaging platforms. By pursuing these directions, we aim to advance remote sensing technology and its applications across various domains.

Author Contributions

Conceptualization, F.G. and J.Z.; methodology, F.G.; software, F.G.; validation, F.G., H.L. and J.D.; formal analysis, F.G.; investigation, F.G.; resources, F.G.; data curation, F.G.; writing—original draft preparation, F.G.; writing—review and editing, L.H., F.L., N.Z., H.J., X.Z., Y.Z. and X.H.; visualization, F.G.; supervision, F.G.; project administration, J.Z.; funding acquisition, J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers 61890961, 62127813, 62001382, and 62201568, and by the Shaanxi Natural Science Basic Research Program, grant number 2022JQ-693.

Data Availability Statement

All data are contained within the article.

Acknowledgments

The authors thank the China Xi’an Satellite Control Center State Key Laboratory of Astronautic Dynamics for the space debris experiment sample materials.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gat, N. Imaging spectroscopy using tunable filters: A review. In Proceedings of the Conference on Wavelet Applications VII, Orlando, FL, USA, 26–28 April 2000; pp. 50–64. [Google Scholar]
  2. Cheng, G.; Han, J.; Lu, X. Remote sensing image scene classification: Benchmark and state of the art. Proc. IEEE 2017, 105, 1865–1883. [Google Scholar] [CrossRef]
  3. Kang, X.; Duan, P.; Li, S. Hyperspectral image visualization with edge-preserving filtering and principal component analysis. Inf. Fusion 2020, 57, 130–143. [Google Scholar] [CrossRef]
  4. Berns, R.S.; Imai, F.H.; Burns, P.D.; Tzeng, D.-Y. Multispectral-based color reproduction research at the Munsell Color Science Laboratory. In Proceedings of the Electronic Imaging: Processing, Printing, and Publishing in Color, Zurich, Switzerland, 18–20 May 1998; pp. 14–25. [Google Scholar]
  5. Thomas, J.-B. Illuminant estimation from uncalibrated multispectral images. In Proceedings of the 2015 Colour and Visual Computing Symposium (CVCS), Gjovik, Norway, 25–26 August 2015; pp. 1–6. [Google Scholar]
  6. Rüfenacht, D.; Fredembach, C.; Süsstrunk, S. Automatic and accurate shadow detection using near-infrared information. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 36, 1672–1678. [Google Scholar] [CrossRef]
  7. Sobral, A.; Javed, S.; Ki Jung, S.; Bouwmans, T.; Zahzah, E.-h. Online stochastic tensor decomposition for background subtraction in multispectral video sequences. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Santiago, Chile, 7–13 December 2015; pp. 106–113. [Google Scholar]
  8. Dandois, J.P.; Ellis, E.C. Remote sensing of vegetation structure using computer vision. Remote Sens. 2010, 2, 1157–1176. [Google Scholar] [CrossRef]
  9. Motohka, T.; Nasahara, K.N.; Oguma, H.; Tsuchida, S. Applicability of green-red vegetation index for remote sensing of vegetation phenology. Remote Sens. 2010, 2, 2369–2387. [Google Scholar] [CrossRef]
  10. Li, F.; Ng, M.K.; Plemmons, R.; Prasad, S.; Zhang, Q.A. Hyperspectral image segmentation, deblurring, and spectral analysis for material identification. In Proceedings of the Conference on Visual Information Processing XIX, Orlando, FL, USA, 6–7 April 2010. [Google Scholar]
  11. Li, N.; Gong, C.G.; Zhao, H.J.; Ma, Y. Space Target Material Identification Based on Graph Convolutional Neural Network. Remote Sens. 2023, 15, 27. [Google Scholar] [CrossRef]
  12. Bosman, H.H.; Iacca, G.; Tejada, A.; Wörtche, H.J.; Liotta, A. Spatial anomaly detection in sensor networks using neighborhood information. Inf. Fusion 2017, 33, 41–56. [Google Scholar] [CrossRef]
  13. Kang, X.; Zhang, X.; Li, S.; Li, K.; Li, J.; Benediktsson, J.A. Hyperspectral anomaly detection with attribute and edge-preserving filters. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5600–5611. [Google Scholar] [CrossRef]
  14. Shi, H.; Zhao, H.; Wang, J.; Zhang, Y.-L.; Wu, Y.; Wang, C.; Fu, Q.; Jiang, H. Analysis and experiment of polarization characteristics of Off-axis freeform optical system. Opt. Laser Technol. 2023, 163, 109383. [Google Scholar] [CrossRef]
  15. Wang, J.; Shi, H.; Liu, J.; Li, Y.; Fu, Q.; Wang, C.; Jiang, H. Compressive space-dimensional dual-coded hyperspectral polarimeter (CSDHP) and interactive design method. Opt. Express 2023, 31, 9886–9903. [Google Scholar] [CrossRef]
  16. Nayar, S.K.; Fang, X.-S.; Boult, T. Separation of reflection components using color and polarization. Int. J. Comput. Vis. 1997, 21, 163–186. [Google Scholar] [CrossRef]
  17. Wen, S.J.; Zheng, Y.Q.; Lu, F. Polarization Guided Specular Reflection Separation. IEEE Trans. Image Process. 2021, 30, 7280–7291. [Google Scholar] [CrossRef] [PubMed]
  18. Wolff, L.B. Polarization-based material classification from specular reflection. IEEE Trans. Pattern Anal. Mach. Intell. 1990, 12, 1059–1071. [Google Scholar] [CrossRef]
  19. Guo, F.; Zhu, J.; Huang, L.; Li, H.; Deng, J.; Jiang, H.; Hou, X. Enhancing Spatial Debris Material Classifying through a Hierarchical Clustering-Fuzzy C-Means Integration Approach. Appl. Sci. 2023, 13, 4754. [Google Scholar] [CrossRef]
  20. Partridge, M.; Saull, R. Three-dimensional surface reconstruction using emission polarization. In Proceedings of the Image and Signal Processing for Remote Sensing II, Paris, France, 25–28 September 1995; pp. 92–103. [Google Scholar]
  21. Li, X.; Liu, Z.; Cai, Y.; Pan, C.; Song, J.; Wang, J.; Shao, X. Polarization 3D imaging technology: A review. Front. Phys. 2023, 11, 341. [Google Scholar] [CrossRef]
  22. Goudail, F.; Terrier, P.; Takakura, Y.; Bigué, L.; Galland, F.; DeVlaminck, V. Target detection with a liquid-crystal-based passive Stokes polarimeter. Appl. Opt. 2004, 43, 274–282. [Google Scholar] [CrossRef]
  23. Romano, J.M.; Rosario, D.; McCarthy, J. Day/night polarimetric anomaly detection using SPICE imagery. IEEE Trans. Geosci. Remote Sens. 2012, 50, 5014–5023. [Google Scholar] [CrossRef]
  24. Zhou, P.-C.; Liu, C.-C. Camouflaged target separation by spectral-polarimetric imagery fusion with shearlet transform and clustering segmentation. In Proceedings of the International Symposium on Photoelectronic Detection and Imaging 2013: Imaging Sensors and Applications, Beijing, China, 21 August 2013; pp. 376–383. [Google Scholar]
  25. Islam, M.N.; Tahtali, M.; Pickering, M. Man-made object separation using polarimetric imagery. In Proceedings of the SPIE Future Sensing Technologies, Tokyo, Japan, 12 November 2019; pp. 190–196. [Google Scholar]
  26. Sano, I.; Mukai, S.; Takashima, T. Multispectral polarization measurements of atmospheric aerosols. Adv. Space Res. 1997, 19, 1379–1382. [Google Scholar] [CrossRef]
  27. Guo, H.; Gu, X.-F.; Xie, D.-H.; Yu, T.; Meng, Q.-Y. A review of atmospheric aerosol research by using polarization remote sensing. Spectrosc. Spectr. Anal. 2014, 34, 1873–1880. [Google Scholar]
  28. Zhao, Y.; Zhang, L.; Pan, Q. Spectropolarimetric imaging for pathological analysis of skin. Appl. Opt. 2009, 48, D236–D246. [Google Scholar] [CrossRef]
  29. Bartlett, B.D.; Schlamm, A. Anomaly detection with varied ground sample distance utilizing spectropolarimetric imagery collected using a liquid crystal tunable filter. Opt. Eng. 2011, 50, 081207–081209. [Google Scholar] [CrossRef]
  30. Ibrahim, I.; Yuen, P.; Hong, K.; Chen, T.; Soori, U.; Jackman, J.; Richardson, M. Illumination invariance and shadow compensation via spectro-polarimetry technique. Opt. Eng. 2012, 51, 107004. [Google Scholar] [CrossRef]
  31. Joshi, N.; Baumann, M.; Ehammer, A.; Fensholt, R.; Grogan, K.; Hostert, P.; Jepsen, M.R.; Kuemmerle, T.; Meyfroidt, P.; Mitchard, E.T.A.; et al. A Review of the Application of Optical and Radar Remote Sensing Data Fusion to Land Use Mapping and Monitoring. Remote Sens. 2016, 8, 23. [Google Scholar] [CrossRef]
  32. Ghassemian, H. A review of remote sensing image fusion methods. Inf. Fusion 2016, 32, 75–89. [Google Scholar] [CrossRef]
  33. Mo, Y.J.; Wu, Y.; Yang, X.N.; Liu, F.L.; Liao, Y.J. Review the state-of-the-art technologies of semantic segmentation based on deep learning. Neurocomputing 2022, 493, 626–646. [Google Scholar] [CrossRef]
  34. Le Hors, L.; Hartemann, P.; Breugnot, S. Multispectral polarization active imager in the visible band. In Proceedings of the Laser Radar Technology and Applications V, Orlando, FL, USA, 5 September 2000; pp. 380–389. [Google Scholar]
  35. Olsen, R.C.; Eyler, M.; Puetz, A.M.; Esterline, C. Initial results and field applications of a polarization imaging camera. In Proceedings of the Polarization Science and Remote Sensing IV, San Diego, CA, USA, 3–4 August 2009; pp. 121–130. [Google Scholar]
  36. Azzam, R.; Coffeen, D.L. Optical Polarimetry: Instrumentation & Applications. In Proceedings of the Society of Photo-Optical Instrumentation Engineers in Conjunction with the IEEE Computer Society International Optical Computing Conference 77, San Diego, CA, USA, 23–24 August 1977. [Google Scholar]
  37. Wolff, L.B. Polarization vision: A new sensory approach to image understanding. Image Vis. Comput. 1997, 15, 81–93. [Google Scholar] [CrossRef]
  38. Toet, A. Natural colour mapping for multiband nightvision imagery. Inf. Fusion 2003, 4, 155–166. [Google Scholar] [CrossRef]
  39. Shen, H.; Zhou, P. Near natural color polarization imagery fusion approach. In Proceedings of the 2010 3rd International Congress on Image and Signal Processing, Yantai, China, 16–18 October 2010; pp. 2802–2805. [Google Scholar]
  40. Tyo, J.S.; Ratliff, B.M.; Alenin, A.S. Adapting the HSV polarization-color mapping for regions with low irradiance and high polarization. Opt. Lett. 2016, 41, 4759–4762. [Google Scholar]
  41. Yang, F.; Xie, C. Color contrast enhancement method of infrared polarization fused image. In Proceedings of the AOPC 2015: Image Processing and Analysis, Beijing, China, 5–7 May 2015; pp. 537–541. [Google Scholar]
  42. Aïnouz, S.; Zallat, J.; de Martino, A.; Collet, C. Physical interpretation of polarization-encoded images by color preview. Opt. Express 2006, 14, 5916–5927. [Google Scholar] [CrossRef]
  43. Zhao, Y.-q.; Zhang, L.; Zhang, D.; Pan, Q. Object separation by polarimetric and spectral imagery fusion. Comput. Vis. Image Underst. 2009, 113, 855–866. [Google Scholar] [CrossRef]
  44. Song, Y.E.; Weiping, T.; Xiaobing, S.U.N.; Yonghua, F. Characterization of the Polarized Remote Sensing Images Using IHS Color System. Remote Sens. Inf. 2006, 11–13. [Google Scholar] [CrossRef]
  45. Zhao, Y.; Gong, P.; Pan, Q. Unsupervised spectropolarimetric imagery clustering fusion. J. Appl. Remote Sens. 2009, 3, 033535. [Google Scholar]
  46. Zhao, Y.; Zhang, G.; Jie, F.; Gao, S.; Chen, C.; Pan, Q. Unsupervised classification of spectropolarimetric data by region-based evidence fusion. IEEE Geosci. Remote Sens. Lett. 2011, 8, 755–759. [Google Scholar] [CrossRef]
  47. Solomon, J.E. Polarization imaging. Appl. Opt. 1981, 20, 1537–1544. [Google Scholar] [CrossRef] [PubMed]
  48. Fu, Q.; Liu, X.; Wang, L.; Zhan, J.; Zhang, S.; Zhang, T.; Li, Z.; Duan, J.; Li, Y.; Jiang, H. Analysis of target surface polarization characteristics and inversion of complex refractive index based on three-component model optimization. Opt. Laser Technol. 2023, 162, 109225. [Google Scholar] [CrossRef]
  49. Fu, Q.; Liu, X.; Yang, D.; Zhan, J.; Liu, Q.; Zhang, S.; Wang, F.; Duan, J.; Li, Y.; Jiang, H. Improvement of pBRDF model for target surface based on diffraction and transmission effects. Remote Sens. 2023, 15, 3481. [Google Scholar] [CrossRef]
  50. Qu, G.; Zhang, D.; Yan, P. Information measure for performance of image fusion. Electron. Lett. 2002, 38, 1. [Google Scholar] [CrossRef]
  51. Zhang, X.; Zhu, J.; Huang, L.; Zhang, Y.; Wang, H.; Li, H.; Guo, F.; Deng, J. Hyperspectral Channel-Modulated Static Birefringent Fourier Transform Imaging Spectropolarimeter with Zoomable Spectral Resolution. In Proceedings of the Photonics, Orlando, FL, USA, 12–16 November 2023; p. 950. [Google Scholar]
Figure 1. A hierarchical structure of spectral–polarization imaging technology.
Figure 2. Data cube of spectral images.
Figure 3. Geometrical representation of the HSI color space.
Figure 4. Flowchart of the proposed method.
Figure 5. HSI representation of spectral–polarization images.
Figure 6. Imaging spectral polarimeter.
Figure 7. Schematic illustration of data points.
Figure 8. Results of spectral curve smoothing.
Figure 9. Six randomly selected original images from the 0° polarization spectrum image dataset.
Figure 10. Enhanced result visualization following partitioned PCA processing.
Figure 11. Diagram of spectral–polarization fusion processing: (a) 0°, (b) 45°, (c) 90°, (d) 135°.
Figure 12. Stokes vector maps: (a) Fused; (b) S1; (c) S2.
Figure 13. Polarization feature parameter maps: (a) DoLP; (b) AoP.
Figure 14. Resulting images of different mapping methods: (a) RGB method [35], (b) HSV method [37], and (c) proposed HSI method.
Figure 15. Polarization images at 600 nm for angles (a) 0°, (b) 45°, (c) 90°, and (d) 135° [45,46].
Figure 16. Processing results of Zhao's publicly available data images: (a) DoLP, (b) AoP, (c) fused, (d) RGB method [35], (e) HSV method [37], and (f) proposed HSI method.
Figure 17. The schematic of the CSBFTIS.
Figure 18. (a) Experimental scene; (b–d) sample interferograms obtained through scanning [51].
Figure 19. Processing results of CSBFTIS-obtained images: (a) DoLP, (b) AoP, (c) fused, (d) RGB method [35], (e) HSV method [37], and (f) proposed HSI method.
Remotesensing 16 01119 g019
Table 1. Quantitative metrics for different images in scene one.

Image             μ        σ       EN     AG
Original Image    88.04    53.38   7.29    4.87
PCA               85.79    47.73   7.22    6.68
EWP               88.07    47.16   7.21    6.11
DoLP              27.43    23.71   5.93    8.58
AoP               44.08    96.42   0.67   36.85
S0 Fused          96.42    71.47   7.46    8.17
Table 2. Quantitative metrics for different images in scene two.

Image             μ        σ       EN     AG
Original Image    57.93    42.96   3.88    3.99
PCA               54.61    43.08   6.89    2.09
EWP               55.24    42.21   6.76    2.59
DoLP              23.22    27.98   5.47    4.25
AoP              190.17   111.03   0.82   32.93
S0 Fused         100.99    60.77   7.29    4.10
Table 3. Quantitative metrics for different images in scene three.

Image             μ        σ       EN     AG
Original Image    43.06    40.42   6.40    6.04
PCA               37.91    44.47   5.99    2.45
EWP               75.39    39.66   6.68    8.56
DoLP              48.39    29.01   6.65   13.50
AoP              218.62    89.17   0.59   24.74
S0 Fused         130.25    50.98   7.29   11.71