Article

An Image Fusion Algorithm for Sustainable Development Goals Satellite-1 Night-Time Light Images Based on Optimized Image Stretching and Dual-Domain Fusion

1 Aerospace Information Research Institute, Chinese Academy of Sciences (CAS), Beijing 100094, China
2 College of Resources and Environment, University of Chinese Academy of Sciences, Beijing 100049, China
3 Key Laboratory of Earth Observation of Hainan Province, Hainan Aerospace Information Research Institute, Wenchang 571399, China
4 Hainan Aerospace Technology Innovation Center, Wenchang 571333, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(22), 4298; https://doi.org/10.3390/rs16224298
Submission received: 27 September 2024 / Revised: 11 November 2024 / Accepted: 16 November 2024 / Published: 18 November 2024

Abstract

The Glimmer Imager for Urbanization (GIU) on SDGSAT-1 provides high-resolution, global-coverage night-time light (NL) images consisting of 10 m panchromatic (PAN) and 40 m multispectral (MS) imagery. Well-fused 10 m MS NL images can be used to better study subtle manifestations of human activities. However, most existing remote sensing image-fusion methods were developed for daytime optical imagery and are not directly applicable to the losslessly compressed images of the GIU. To address this limitation, we propose a novel approach for 10 m NL data fusion, namely, a GIU NL image fusion model for SDGSAT-1 high-resolution products based on optimized image stretching (OIS) of the PAN image and dual-domain fusion (DDF). OIS is an optimized stretching method that integrates linear and gamma stretching, while DDF is a fusion process that merges the dark and light regions of NL images separately using different fusion methods and then stitches them together. Fusion experiments were conducted in four study areas—Beijing, Shanghai, Moscow, and New York—and the proposed method was compared with traditional methods using visual evaluation and five quantitative evaluation metrics. The results demonstrate that the proposed method achieves superior visual quality and outperforms conventional methods across all quantitative metrics. Additionally, an ablation study confirmed the necessity of each methodological step.

1. Introduction

NL remote sensing detects the intensity of light emitted from the ground surface at night, such as city lights and natural firelight [1,2]. The primary light sources include city lights, oil and gas flaring, ship glow, fires, and aurorae, with lights from human activity constituting the majority. Therefore, compared with traditional optical satellite remote sensing images, NL remote sensing reflects human activities more directly [3], and the use of NL imagery is becoming increasingly common in urban studies [4,5,6,7,8,9,10,11].
Although many NL remote sensing satellites exist worldwide, most of their data are not currently shared; five main data types are openly accessible and widely used. The earliest NL products were derived from the Operational Linescan System of the Defense Meteorological Satellite Program (DMSP/OLS); the second were derived from the Day/Night Band (DNB) of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (NPP) satellite; the third comprise the irregularly photographed data from the International Space Station (ISS); the fourth comprise data from the Luojia-1 Scientific Experiment Night Light Satellite (Luojia-01); and the fifth comprise SDGSAT-1 data. On 5 November 2021, the world's first Sustainable Development Goals Satellite (SDGSAT-1) was successfully launched. SDGSAT-1 aims to finely delineate the "traces of human activities" through all-day synergistic observation with its three payloads—thermal infrared, NL, and multispectral—and to support the study of SDG indicators that characterize the interactions between human activities and the natural environment. The satellite carries the Glimmer Imager for Urbanization (GIU), which acquires 10 m PAN and 40 m MS NL images, operates at an orbital altitude of 505 km, and revisits ground targets approximately every 11 days. The "SDGSAT-1 Open Science Program" was launched in September 2022 to share the satellite data openly worldwide [12,13,14]. Table 1 shows some critical parameters of the GIU and other available NL images.
The parameters in Table 1 show that the openly available NL images from other sources are PAN images with low spatial resolution, which limits their use in scholarly research. The spectral and spatial resolution of the NL images provided by SDGSAT-1's GIU sensor is significantly improved: the GIU provides 10 m NL PAN images and 40 m NL MS images with red, green, and blue bands. Using fusion technology, the high spatial resolution of the PAN image can be combined with the spectral information of the three MS bands, giving the MS image greater spatial detail while maintaining its spectral characteristics. The fused high-resolution MS NL images can identify urban elements such as buildings, roads, and commercial areas more clearly and accurately and can therefore be better applied to the fine depiction of human activities.
Most existing remote sensing image-fusion methods focus primarily on daytime remote sensing images. Remote sensing image fusion can be performed at the pixel, feature, and decision levels. Pixel-level methods fuse the pixels of each source image, which retains all kinds of information in the source images, and this is the dominant approach for remote sensing image fusion. Pixel-level fusion methods for optical images mainly include Component Substitution (CS), Multi-Resolution Analysis (MRA), Variational Optimization (VO), and deep learning (DL) methods [15]. The essence of CS-based fusion is to separate an MS image into multiple components and then use the PAN image to replace one of these components. Such methods mainly include the Intensity-Hue-Saturation (IHS) method [16], the hue-saturation-value (HSV) method [17], the Principal Component Analysis (PCA) method [18], the Gram-Schmidt (GS) method [19], and so on. The higher the correlation between the replaced component of the MS image and the PAN image, the better the fusion effect; conversely, a lower correlation causes more significant distortion. As the PAN spectral range of the SDGSAT-1 sensor exactly covers the spectral range of the three RGB bands, the resulting distortion is relatively small. Figure 1 shows the sensor's spectral response function, sourced from the SDGSAT-1 open data system (http://124.16.184.48:6008/home (accessed on 20 March 2024)).
MRA-based methods improve image quality by fusing remote sensing images at multiple levels: spatial details are extracted from the PAN image using a multiscale transform and then injected into the MS image. These methods mainly include the Laplacian pyramid transform [20], the wavelet transform (WT) [21], and so on. MRA-based fusion methods have better spectral retention performance; however, they require accurate registration between the PAN image and the up-sampled MS image, and misregistration causes spatial distortion and image shift. VO-based methods [22] define an energy function from the relationship between the original images and the fused image and obtain the fused image by optimizing this objective function. Their solution process is more complicated and has higher time complexity, usually involving many iterations and complex calculations, and the regularization, weighting, and other parameters must be set by the user, which strongly influences the results; hence, it is often difficult to achieve the best fusion effect with these methods. Deep-learning-based fusion methods mainly include Generative Adversarial Network (GAN) [23,24], Convolutional Neural Network (CNN) [25,26], and Autoencoder (AE) [27,28] approaches. Because deep learning is data-driven, the results depend heavily on the training data, and there is currently no unified public NL remote sensing image dataset to serve as a training set. Deep-learning-based image fusion has strong research prospects, but some problems remain unsolved.
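To make the MRA idea concrete, the following Python sketch (an illustrative example, not the fusion scheme used later in this paper) keeps the low-frequency approximation of an up-sampled MS band and takes the high-frequency detail sub-bands from the PAN image via a wavelet decomposition; the arrays pan and ms_band are assumed to be co-registered 2-D arrays of equal size.

```python
import numpy as np
import pywt


def wavelet_detail_injection(pan, ms_band, wavelet="haar", level=2):
    """Generic MRA-style fusion: keep the MS approximation (low-frequency)
    coefficients and take the detail (high-frequency) coefficients from the PAN image."""
    pan_coeffs = pywt.wavedec2(pan, wavelet, level=level)
    ms_coeffs = pywt.wavedec2(ms_band, wavelet, level=level)
    # Approximation from MS preserves spectral content; details from PAN add spatial detail.
    fused_coeffs = [ms_coeffs[0]] + list(pan_coeffs[1:])
    fused = pywt.waverec2(fused_coeffs, wavelet)
    # waverec2 may pad odd-sized inputs by one row/column; crop back to the input shape.
    return fused[: ms_band.shape[0], : ms_band.shape[1]]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pan = rng.random((256, 256))       # stand-in for a co-registered PAN image
    ms_band = rng.random((256, 256))   # stand-in for one up-sampled MS band
    print(wavelet_detail_injection(pan, ms_band).shape)
```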
Given these drawbacks—the prior knowledge required by VO-based methods and the lack of an experimental dataset for DL-based methods—we used only CS- and MRA-based fusion methods in our NL image fusion experiments.
GIU night light imagery is very different from daytime remote sensing imagery. NL imagery is usually acquired at night, and its main function is to capture light information in urban areas. Areas without light, such as mountains, wasteland, and rivers, are therefore difficult to distinguish, and their texture information is not visible in NL imagery. These unlit dark areas tend to be large while lit areas tend to be small, so a large number of dark pixels are distributed across an NL image; at the same time, the brightest areas on the ground produce clusters of extremely high pixel values. This extreme contrast between light and dark makes it impossible for traditional daytime image fusion methods to account for both the overall brightness and the texture details of the image, meaning that previously developed remote sensing image fusion algorithms may not apply directly to GIU images. To address this issue, we propose a dual-domain fusion (DDF) method that incorporates optimized image stretching (OIS) for panchromatic images and employs different fusion techniques at different luminance levels. We first combined traditional linear stretching with power-function (gamma) stretching, and the four stretching parameters of the OIS were determined using MATLAB's fmincon objective optimization function. After PAN image stretching, the image was divided into light and dark values according to the characteristics of the NL image; the light values were fused by combining the Gram-Schmidt (GS) method with the wavelet transform (WT), and the dark values were fused by combining the hue-saturation-value (HSV) method with the WT. We selected urban NL images from different geographical locations around the world, applied the fusion method to them, and evaluated the fusion results through visual inspection and quantitative metrics, which showed that the fusion method proposed in this study is more effective and applicable to GIU NL images. The main contributions of this paper are as follows:
  • An OIS method combining linear stretching and gamma stretching is proposed; the stretched PAN image is brought closer to the MS image, and the method is applied to GIU images from different regions of the world.
  • A DDF method that uses different fusion methods at different luminance levels is proposed and applied to GIU images from different regions of the world.
  • A comprehensive evaluation of different fusion methods on different NL images using multiple evaluation metrics shows that the fusion method proposed in this study is more effective.

2. Materials and Analysis

2.1. Datasets and Study Area

This study used the level 4 product of the GIU sensor NL imagery, which is generated from the level 0 product after relative radiometric correction, band alignment, high dynamic range (HDR) fusion, rational polynomial coefficient (RPC) processing, orthorectification with ground control points and a digital elevation model, and output according to the format specification. The GIU consists of one panchromatic band and three color bands; the panchromatic band contains three types of data—panchromatic low gain (PL), panchromatic high gain (PH), and high dynamic range (HDR, in which the ground system fuses the PL and PH data with 50% weight each)—and the pixel values of each band are in the range 0–4095. This study used the PAN high- and low-gain fusion band (HDR) together with the three color bands for remote sensing image fusion to obtain 10 m MS NL images.
Four study areas were selected for this experiment. The original MS images of these study areas are shown in Figure 2, and the locations, acquisition times, and image sizes of the study areas are shown in Table 2. We performed fusion experiments on eight NL images of these four study areas separately. We obtained 10 m MS NL images of the four study areas and visually evaluated and quantitatively assessed the fusion results.

2.2. Dark Value Analysis of NL Images

GIU images differ greatly from other types of remote sensing images. NL images are usually taken at night and mainly record light information in urban areas. Compared with daytime remote sensing images, a large part of an NL image is black background, and within urban areas most of the black-background regions are barren land, parks, rivers, lakes, and similar areas where there is no light or the light is faint. To verify whether the values in these areas are related to light, we conducted field sampling at Zhongguancun Forest Park in Beijing. The 27 sampling points were categorized into 3 types—1 for no-light areas, 2 for dark-light areas, and 3 for bright-light areas—because the sampling site is a park in which some trees shade the street lights. In this paper, the no-light area refers to areas without lights, the dark-light area refers to areas where lights are blocked by trees in the park, and the bright-light area refers to areas where lights are not blocked. The field survey was conducted at 13:30 UTC on 27 May 2024 and 28 May 2024, close to the satellite transit time. Figure 3 shows the geographic location of the field survey, the specific locations of the sampling sites, and the corresponding GIU image.
After sampling, GIU images from six dates—6 September 2023, 31 October 2023, 21 November 2023, 15 January 2024, 20 March 2024, and 16 April 2024—were selected, and their MS data were read and analyzed. If the DN values of an NL image were related to light, the samples should cluster according to the no-light, dark-light, and bright-light categories. We first assumed that the NL values in the dark areas were also related to light, divided the samples into the three categories, and applied the Silhouette Coefficient [29] and the Davies–Bouldin Index (DBI) [30] to determine whether this clustering was reasonable. The Silhouette Coefficient evaluates the effectiveness of clustering: the best value is 1, the worst is −1, values close to 0 indicate overlapping clusters, and negative values usually indicate that samples have been assigned to the wrong clusters. The DBI is a metric for evaluating clustering performance, especially the separation and compactness of clusters, and is usually used to compare different clustering methods or parameter settings: a low DBI means the clusters are compact and well separated, while a high DBI means the clustering is less effective, possibly because the clusters are scattered or too close to one another. Counting all the NL sample points together yielded a silhouette coefficient of −0.072 and a DBI of 14.484, which indicates that the clustering effect was poor and that the dark-value areas mainly represent noise caused by tree shading and moonlight reflection. Figure 4 shows a three-dimensional plot of the NL survey results for Zhongguancun Forest Park; it also shows that the way the sample points cluster bears little relationship to the field classification. Therefore, the analysis in this section makes it clear that dark-area pixel values are not necessarily related to light, and this conclusion is used in Section 3.2.
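For reference, a minimal sketch of the clustering-validity check described above is given below, using scikit-learn's silhouette_score and davies_bouldin_score as stand-ins for the indices cited in [29,30]; the dn_values array and field_labels are synthetic placeholders, not the actual survey data.

```python
import numpy as np
from sklearn.metrics import silhouette_score, davies_bouldin_score

# Synthetic stand-in for the field data: RGB DN values of the 27 sampling points
# over the 6 selected dates, and their field labels
# (1 = no-light, 2 = dark-light, 3 = bright-light).
rng = np.random.default_rng(42)
dn_values = rng.integers(0, 60, size=(27 * 6, 3)).astype(float)
field_labels = np.tile(np.repeat([1, 2, 3], 9), 6)

# How well do the field labels explain the observed DN values?
sil = silhouette_score(dn_values, field_labels)       # ~1 good, ~0 overlapping, <0 mislabeled
dbi = davies_bouldin_score(dn_values, field_labels)   # lower = more compact, better separated

print(f"Silhouette coefficient: {sil:.3f}")
print(f"Davies-Bouldin index:   {dbi:.3f}")
```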

3. Methods

This section describes the overall framework of our research methodology. Figure 5 shows the detailed flowchart of the overall method, which consists of three parts: optimized image stretching, dual-domain fusion, and fusion result evaluation. These three parts serve as pre-fusion preparation, the fusion process itself, and the evaluation of the fusion results, respectively.

3.1. Optimized Image Stretching

Due to the characteristics of the GIU itself, there is a large gap between the pixel values of its PAN and MS images. Table 3 compares the pixel values of the PAN and MS images in this paper’s four study areas.
When performing remote sensing image fusion, too large a gap between the PAN and MS imagery can cause information distortion; because the PAN spectral range of the SDGSAT-1 sensor exactly covers the spectral range of the three RGB bands, it is both necessary and reasonable to stretch the panchromatic band of the GIU image so that its mean and standard deviation match those of the multispectral image. In the past, image stretching was typically performed using simple linear or gamma stretching. Both have an obvious shortcoming: they stretch the whole image to the same extent, and because the PAN pixel values of GIU images are much smaller than the MS pixel values, either method inevitably makes the maximum value of the stretched image very large; linear stretching additionally increases the minimum value of the image, which is unreasonable. Therefore, we propose an optimized stretching method that combines linear and gamma stretching, calculated using Equation (1):
$P_{out} = a \cdot P_{in}^{b} + c \cdot P_{in} + d$,  (1)
$P_{in}$ is the input image; $P_{out}$ is the output image; and a, b, c, and d are the stretching coefficients. The OIS method combines linear stretching with gamma stretching, using these four coefficients. The coefficients are re-estimated for every image, so the algorithm is self-adaptive. When selecting the coefficients, the mean and standard deviation of the MS image corresponding to the PAN image are first calculated; since the MS image has three bands, it is first converted to a grayscale image. The coefficients are then selected using MATLAB's nonlinear constrained minimization function fmincon. Specifically, we set the optimization objective so that the mean, standard deviation, maximum, and minimum of the stretched image are close to those of the MS image, and we add a nonlinear inequality constraint requiring the stretching function to increase monotonically. Since fmincon cannot guarantee that the mean, standard deviation, maximum, and minimum of the stretched image all match those of the MS image exactly, the objective is set to satisfy the three conditions other than the maximum value as closely as possible. Finally, the high-value part of the optimized stretched PAN image is linearly rescaled so that the maximum value remains 4095 and the value range of the image is unchanged. As most GIU pixel values are low, this rescaling of the high-value part has little effect on the mean and standard deviation of the image. The steps of the OIS algorithm are shown in Algorithm 1, and the variables that appear in the algorithm are shown in italics and bold.
Algorithm 1. OIS Algorithm
Input: SDGSAT-1 GIU MS and PAN images original_ms, original_pan
1: Use the rgb2gray function to convert original_ms to a grayscale image and calculate its mean and standard deviation target_mean, target_std
2: Set initial parameters a, b, c, d = 1, 1, 0, 0
3: Calculate the mean, standard deviation, minimum, and maximum of the stretched image under the current parameters: current_mean, current_std, current_min, current_max
4: Set the penalty for not meeting the optimization objective: penalty = (current_mean − target_mean)^2 + (current_std − target_std)^2 + (current_min − 1)^2 + ((current_max − 4095)/10)^2
5: Calculate the derivative of the optimized stretching formula: derivative = a·b·P_in^(b−1) + c
6: Set the nonlinear inequality constraint derivative > 0
7: Apply the fmincon objective optimization function and iteratively update the parameters under the deviation penalty and nonlinear inequality constraint until the penalty no longer decreases
8: Apply the final parameters to obtain the optimized stretched image P_trans and calculate its current maximum value transmax
9: Calculate the number of pixels N of original_pan
10: for i in range(N):
11:    get the current pixel value X_i
12:    if X_i > 2730 ¹:
13:       X_i = 2730 + (X_i − 2730)·(4095 − 2730)/(transmax − 2730)
14:    elif X_i < 1:
15:       X_i = 1
Output: PAN image after optimized stretching, OIS_pan
¹ The threshold of 2730 was chosen so that pixel values exceeding 4095 can be compressed back into range without changing the lower-valued pixels too much.
Table 4 shows the statistical values of the original MS, original PAN, and stretched PAN images in the four study areas. After the original PAN image is processed by the OIS algorithm, the mean and standard deviation of the image become very close to those of the original MS image while the range of values of the image remains unchanged.
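The coefficients in this study were computed with MATLAB's fmincon; the following Python sketch reproduces the same penalty-and-constraint search using scipy.optimize.minimize (SLSQP) as a rough stand-in. Variable names mirror Algorithm 1, the monotonicity constraint is checked on a sample of pixel values, and the input arrays are assumed to be the co-registered GIU PAN and MS images; this is a sketch of the procedure, not the authors' exact implementation.

```python
import numpy as np
from scipy.optimize import minimize


def ois_stretch(original_pan, original_ms):
    """Sketch of Algorithm 1: fit P_out = a*P_in^b + c*P_in + d, then rescale the tail."""
    # Step 1: grayscale statistics of the MS image (band average as an rgb2gray stand-in).
    ms_gray = original_ms.astype(float).mean(axis=2)
    target_mean, target_std = ms_gray.mean(), ms_gray.std()

    pan = np.clip(original_pan.astype(float), 1e-6, None)   # avoid 0**b issues
    samples = np.linspace(pan.min(), pan.max(), 256)         # for the monotonicity check

    def stretch(params, x):
        a, b, c, d = params
        return a * np.power(x, b) + c * x + d

    # Steps 3-4: penalty for deviating from the optimization targets.
    def penalty(params):
        out = stretch(params, pan)
        return ((out.mean() - target_mean) ** 2 + (out.std() - target_std) ** 2
                + (out.min() - 1.0) ** 2 + ((out.max() - 4095.0) / 10.0) ** 2)

    # Steps 5-6: derivative a*b*P_in^(b-1) + c must stay positive (monotonic stretch).
    def derivative(params):
        a, b, c, _ = params
        return a * b * np.power(samples, b - 1.0) + c

    # Step 7: constrained minimization (scipy stand-in for MATLAB's fmincon).
    res = minimize(penalty, x0=[1.0, 1.0, 0.0, 0.0], method="SLSQP",
                   constraints=[{"type": "ineq", "fun": derivative}])

    # Step 8: apply the optimized stretch.
    stretched = stretch(res.x, pan)
    trans_max = stretched.max()

    # Steps 10-15: rescale only the high-value tail so the maximum stays at 4095,
    # and clip values below 1.
    if trans_max > 2730:
        high = stretched > 2730
        stretched[high] = 2730 + (stretched[high] - 2730) * (4095 - 2730) / (trans_max - 2730)
    return np.clip(stretched, 1, 4095)
```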

3.2. Dual-Domain Fusion

As described in Section 2.2, the field survey showed that, owing to complex effects such as tree shading and moonlight, most of the dark-value portion of an NL image is noise and is not suitable for light analysis. At the same time, the HSV fusion method directly uses the PAN image as the value (intensity) component of the fused image, and Table 3 indicates that the overall luminance of the PAN image is low; hence, HSV fusion is better suited to the dark regions of NL images. Therefore, this paper proposes a dual-domain fusion method based on bright and dark values, with the following steps. First, the maximum inter-class variance (Otsu) method [31] is used to threshold the MS image and divide it into dark-value and bright-value regions. In the dark-value regions, the HSV fusion method is used; when replacing the intensity component with the panchromatic image, instead of a direct substitution, the low-frequency part of the panchromatic image is weighted and averaged with the low-frequency part of the intensity component via wavelet decomposition, followed by wavelet reconstruction, which better preserves the spectral information. Similarly, in the bright-value regions, the GS fusion method is adopted; instead of directly replacing the first component of the Gram-Schmidt transform with the panchromatic image, wavelet decomposition is used to obtain a weighted average of the low-frequency parts of the first component and the panchromatic image, followed by wavelet reconstruction, which again better preserves the spectral information. Finally, the dark-value and bright-value regions are processed separately and mosaicked together to output the DDF fusion result. Figure 6 shows the flowchart of the DDF algorithm.
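A structural sketch of this procedure is given below. It shows only the Otsu split of the up-sampled MS image and the final mosaicking; hsv_wt_fuse and gs_wt_fuse are hypothetical callables standing in for the HSV + wavelet and GS + wavelet fusion routines described above, which are not implemented here.

```python
import numpy as np
from skimage.filters import threshold_otsu


def dual_domain_fusion(ms_up, pan, hsv_wt_fuse, gs_wt_fuse):
    """DDF skeleton: fuse dark and bright regions with different methods, then mosaic.

    ms_up       : (H, W, 3) MS image up-sampled to the PAN grid.
    pan         : (H, W) OIS-stretched PAN image.
    hsv_wt_fuse : callable (ms_up, pan) -> (H, W, 3), HSV + wavelet fusion (dark regions).
    gs_wt_fuse  : callable (ms_up, pan) -> (H, W, 3), GS + wavelet fusion (bright regions).
    """
    # 1. Otsu threshold on the MS intensity separates dark-value and bright-value regions.
    intensity = ms_up.astype(float).mean(axis=2)
    bright = intensity > threshold_otsu(intensity)

    # 2. Fuse the whole scene with both methods (simple, but memory-hungry for large tiles).
    fused_dark = hsv_wt_fuse(ms_up, pan)
    fused_bright = gs_wt_fuse(ms_up, pan)

    # 3. Mosaic the two results according to the brightness mask.
    return np.where(bright[..., None], fused_bright, fused_dark)
```

The plain hard mosaic used here can produce the abrupt bright/dark boundary noted for DDF in Section 4.2.2, which the DO, DW, and DWO variants alleviate.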

4. Experiments and Results

4.1. Image Fusion Quality Assessment Methods

To verify the effect of the proposed fusion method realistically and objectively and to judge the quality of the fused NL images, we conducted a comparative analysis between our method and several common remote sensing image-fusion methods. We used two evaluation modes—subjective visual evaluation and objective quantitative evaluation—to conduct a comprehensive assessment.

4.1.1. Subjective Evaluation

Subjective evaluation involves observing the texture details and spectral information of the image with the naked eye and judging the fusion quality from the edge contours, detail information, texture structure, color information, and other contents of the fused NL image. However, such evaluation requires professionals with rich experience and knowledge, and it is affected by human subjectivity and individual differences; different observers, or the same observer at different times or in different states, can easily reach inconsistent conclusions for the same fused image. Moreover, GIU images take values in the range 0–4095, whereas computer displays can only render colors in the range 0–255, so a more scientific and objective form of evaluation is also needed to measure image quality.

4.1.2. Objective Evaluation

As no native 10 m multispectral GIU image exists to serve as a reference for objective evaluation, we followed the Wald protocol [32] and performed fusion experiments at both the degraded and original scales. Namely, the original MS and PAN images were first down-sampled to 160 m and 40 m, respectively, and then fused to generate a 40 m image (MS image1); the original MS and PAN images were also fused to produce a 10 m MS image, which was then down-sampled to 40 m (MS image2); and the original 40 m MS image was used as the reference to objectively evaluate MS image1 and MS image2. The following are some commonly used fusion evaluation metrics.
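A minimal sketch of this protocol is shown below, assuming a factor-4 resolution ratio and a generic fuse(ms, pan) callable as a placeholder for any of the fusion methods; block averaging stands in for whichever down-sampling operator is actually used.

```python
import numpy as np


def block_downsample(img, factor):
    """Down-sample by block averaging (a simple stand-in for the degradation operator)."""
    h = img.shape[0] // factor * factor
    w = img.shape[1] // factor * factor
    img = img[:h, :w]
    if img.ndim == 2:
        return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return img.reshape(h // factor, factor, w // factor, factor, -1).mean(axis=(1, 3))


def wald_protocol_pair(ms_40m, pan_10m, fuse):
    """Produce the two 40 m test images that are compared against the original 40 m MS."""
    # Degraded scale: down-sample MS to 160 m and PAN to 40 m, fuse -> 40 m MS image1.
    ms_image1 = fuse(block_downsample(ms_40m, 4), block_downsample(pan_10m, 4))
    # Original scale: fuse the originals to 10 m, then down-sample to 40 m -> MS image2.
    ms_image2 = block_downsample(fuse(ms_40m, pan_10m), 4)
    return ms_image1, ms_image2   # both are evaluated against ms_40m as the reference
```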
Root Mean Square Error (RMSE) [33]: The RMSE measures the quality of the fused image by calculating the pixel-value difference between the fused image and the reference image. A smaller RMSE indicates that the fused image preserves the spectral information better; its ideal value is 0. The RMSE is given by Equation (2):
$\mathrm{RMSE} = \sqrt{\dfrac{1}{N}\sum_{i=1}^{N}\left(X_i - Y_i\right)^2}$,  (2)
Here, N is the total number of pixels in the image, X is the original (reference) image, and Y is the fused image.
Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS) [34]: ERGAS is mainly used to evaluate the spectral quality of the fused image and reflects its spectral distortion. The smaller the ERGAS value, the better the quality of the fused image and the better its spectral preservation; its ideal value is 0. ERGAS is calculated by Equation (3):
$\mathrm{ERGAS} = 100 \times \dfrac{1}{r}\sqrt{\dfrac{1}{K}\sum_{k=1}^{K}\dfrac{\mathrm{RMSE}\left(X_k, Y_k\right)^2}{\mu\left(X_k\right)^2}}$,  (3)
Here, r is the ratio of the resolution of the merged image to that of the reference image, K is the number of bands, and μ is the mean value of the image. The RMSE is calculated by Equation (2).
Mutual information (MI) [35] measures the amount of information that one variable contains about another and can thus measure the degree of similarity between two images, that is, how much information the fused image acquires from the source image. The greater the mutual information, the better. MI is calculated by Equation (4):
$\mathrm{MI} = \sum_{x \in X}\sum_{y \in Y} p_{XY}(x, y)\log\dfrac{p_{XY}(x, y)}{p_X(x)\,p_Y(y)}$,  (4)
Here, $p_{XY}$ is the joint probability distribution of the two images, and $p_X$ and $p_Y$ are their marginal distributions.
Structural similarity index (SSIM) [36]: The SSIM is a comprehensive index that measures the similarity between the fused image and the reference image in terms of brightness, contrast, and structure. The SSIM takes values in the range [0, 1], and the closer the value is to 1, the better the fusion effect. The SSIM is calculated using Equation (5):
$\mathrm{SSIM} = \dfrac{\left(2\mu_X\mu_Y + C_1\right)\left(2\sigma_{XY} + C_2\right)}{\left(\mu_X^2 + \mu_Y^2 + C_1\right)\left(\sigma_X^2 + \sigma_Y^2 + C_2\right)}$,  (5)
Here, $\sigma$ is the standard deviation of an image, $\sigma^2$ is its variance, $\sigma_{XY}$ is the covariance of the two images, and $C_1$ and $C_2$ are stability constants.
Similarity Measure (SM) [37]: The SM calculates the degree of similarity between the fused image and the reference image from the gradient changes between neighboring pixels. The closer the SM value is to 1, the better the fusion. SM is calculated by Equations (6)–(8):
$\mathrm{SM} = 1 - \dfrac{\sum_{i,j,k}\left(G_X(i,j,k) - G_Y(i,j,k)\right)^2}{\sum_{i,j,k} G_X(i,j,k)^2 + \sum_{i,j,k} G_Y(i,j,k)^2}$,  (6)
$G_X(i,j) = 0.5\left(X_{i,j} - X_{i+1,j+1} + X_{i,j+1} - X_{i+1,j}\right)$,  (7)
$G_Y(i,j) = 0.5\left(Y_{i,j} - Y_{i+1,j+1} + Y_{i,j+1} - Y_{i+1,j}\right)$,  (8)
The Universal Image Quality Index (UIQI) [38], also known as the Q-indicator: The UIQI is an overall evaluation index; the closer the Q-value is to 1, the better the quality of the fusion will be. The UIQI is calculated by Equation (9):
$Q = \dfrac{\sigma_{XY}}{\sigma_X \sigma_Y} \cdot \dfrac{2\mu_X\mu_Y}{\mu_X^2 + \mu_Y^2} \cdot \dfrac{2\sigma_X\sigma_Y}{\sigma_X^2 + \sigma_Y^2}$,  (9)
The above metrics all evaluate fusion results at the degraded scale; for full-scale images, which lack a reference, the generalized Quality with No Reference (QNR) index [39] can be used. The QNR measures the degree of spatial and spectral distortion between the fused image and the source images. The QNR ranges from 0 to 1; the higher the index, the better the fusion effect. QNR is calculated using Equation (10):
$\mathrm{QNR} = \left(1 - D_S\right)^{\alpha}\left(1 - D_{\lambda}\right)^{\beta}$,  (10)
Here, $D_S$ represents the degree of spatial information distortion, $D_{\lambda}$ represents the degree of spectral information distortion, and $\alpha$ and $\beta$ are weighting parameters. $D_S$ and $D_{\lambda}$ are calculated by Equations (11) and (12):
$D_S = \sqrt[q]{\dfrac{1}{K}\sum_{k=1}^{K}\left|Q\left(F_k, P\right) - Q\left(M_k, P_L\right)\right|^{q}}$,  (11)
$D_{\lambda} = \sqrt[p]{\dfrac{1}{K(K-1)}\sum_{i=1}^{K}\sum_{j=1,\, j \neq i}^{K}\left|Q\left(M_i, M_j\right) - Q\left(F_i, F_j\right)\right|^{p}}$,  (12)
Here, $M$ is the original multispectral image, $P$ is the original panchromatic image, $F$ is the fused image, $P_L$ is the down-sampled PAN image, and $Q$ is the UIQI introduced in Equation (9).
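For reference, a compact sketch of three of the metrics defined above (RMSE, ERGAS, and the Q-index), following Equations (2), (3), and (9); X is the reference image and Y the fused image, passed as (H, W, K) arrays for ERGAS and as single bands for the Q-index, and r is the resolution ratio.

```python
import numpy as np


def rmse(x, y):
    """Equation (2): root mean square error over all pixels."""
    return float(np.sqrt(np.mean((x.astype(float) - y.astype(float)) ** 2)))


def ergas(x, y, r):
    """Equation (3): band-wise RMSE relative to the band mean, scaled by 100/r."""
    terms = [(rmse(x[..., k], y[..., k]) / x[..., k].mean()) ** 2 for k in range(x.shape[2])]
    return 100.0 / r * float(np.sqrt(np.mean(terms)))


def uiqi(x, y):
    """Equation (9): universal image quality index Q, computed globally on one band."""
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my, sx, sy = x.mean(), y.mean(), x.std(), y.std()
    sxy = np.mean((x - mx) * (y - my))
    return float((sxy / (sx * sy))
                 * (2 * mx * my / (mx ** 2 + my ** 2))
                 * (2 * sx * sy / (sx ** 2 + sy ** 2)))
```

In practice the Q-index is usually computed over sliding windows and averaged, and $D_S$ and $D_{\lambda}$ in the QNR are built from such band-pair Q values; the global version here only mirrors the equations as written.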

4.2. Results of the Four Datasets

To make existing optical image fusion methods suitable for NL images, we introduced two main improvements, the OIS and the DDF fusion, each of which can also be combined with the wavelet transform in separate experiments. To verify the reasonableness and validity of these steps, we designed an ablation study [40]. Fusion experiments were first performed using the HSV and GS methods from the widely used CS family and the WT and Laplacian methods from the MRA family. Then, DDF fusion, DDF fusion with OIS (DO), DDF fusion with WT (DW), and our proposed integrated DDF fusion with WT and OIS (DWO) were performed as described in this paper. We evaluated the above eight fusion methods qualitatively and quantitatively on the experimental sets in Beijing, China; Shanghai, China; Moscow, Russia; and New York, USA. We chose ERGAS, the SSIM, the SM, and the MI at the degraded scale and the QNR at the full scale as the evaluation indices. As each fusion method has two evaluation results at the degraded scale, we took the average of the two as the final result to avoid redundancy. In the following sections, each fusion method is analyzed and evaluated for each study area considering both the visual results and the quantitative metrics of NL image fusion.

4.2.1. Results for the Beijing Dataset

Table 5 shows the results of five evaluation metrics for the eight fusion methods and Figure 7 shows the radar comparison chart after normalization of evaluation indicators using the Beijing dataset. Figure 8 and Figure 9 show the original MS image, the original PAN image, and the fusion result maps using the eight fusion methods. Figure 8 shows the overall effect and Figure 9 shows the detailed effects.
Table 5 shows that our proposed DWO fusion method is the optimal fusion method for all evaluation indices, and our proposed DW and DO fusion methods also take second or third place for all indices. In the ablation study, we also found that each of our three proposed improvements is beneficial for the fusion evaluation indices, so all the fusion steps in this paper are necessary. We also find that the DDF fusion method is better than the GS and HSV methods in the ERGAS, SSIM, and MI indices and is close to the better of the GS and HSV methods in the SM and QNR indices. The DDF fusion method is essentially a mosaic of the GS and HSV results at different brightness values; in general, its metrics should lie between those of the two, but DDF performs better than both, which demonstrates the feasibility of the DDF fusion algorithm for NL images. In addition, we find that the WT fusion method is optimal among all the remaining fusion methods (excluding the DO, DW, and DWO methods) for several evaluation metrics.
The image fusion results in Figure 8 and Figure 9 show that the GS and DDF fusion methods have different degrees of color aberration, with red, blue, and green shifts. The HSV and Laplacian fusion methods have a darker overall brightness, with some color loss and inconspicuous texture details. The WT and DW fusion methods show some blurring, with inconspicuous texture details. DO and DWO fusion methods have the best fusion effects from the visual point of view; both of them have no obvious color distortion, no obvious color loss, and obvious texture features. The fusion results of the two methods are not much different from the visual point of view, and their performances are relatively similar.
Considering both the fusion metrics and the fusion result images, the DW method scores well on the metrics but has a poor visual effect, whereas the DO method has a good visual effect but weaker metrics than the DW method. The DWO method combines the advantages of both, achieving the best metric results and the best visual results, which shows that the DWO method is suitable for fusing the PAN and MS NL images of the GIU sensor.

4.2.2. Results for the Shanghai Dataset

Table 6 shows the results of five evaluation metrics for the eight fusion methods and Figure 10 shows the radar comparison chart after the normalization of evaluation indicators when using the Shanghai dataset. Figure 11 and Figure 12 show the original MS image, the original PAN image, and the fusion result maps when using the eight fusion methods. Figure 11 shows the overall effect and Figure 12 shows the detailed effects.
Table 6 shows that our proposed DWO fusion method is optimal for all evaluation metrics, and the DW, DO, and WT methods take second or third place. The DDF fusion method is better than the GS and HSV methods in the ERGAS and SSIM metrics, and for the SM, MI, and QNR metrics it is close to the better of the GS and HSV methods.
The image fusion results in Figure 11 and Figure 12 show that the GS and DDF fusion methods exhibit some color aberration, with a green shift on the two sides of roads. The HSV and Laplacian fusion methods have a darker overall brightness, with a certain amount of color loss, and texture details are not obvious. The WT and DW fusion methods show some blurring, the light sources in higher-brightness areas are dispersed outward, and texture features in the detail areas are not obvious. The DDF method is a mosaic of the HSV and GS fusion results; the boundary between the bright and dark areas in the image is very abrupt, lacking a natural transition, and the difference in pixel values at the transition is too large. The DO, DW, and DWO methods do not show this phenomenon. The DO and DWO methods have the best fusion results from a visual point of view: both show no obvious color aberration or color loss and retain clear texture characteristics, and their fusion results do not differ much from each other visually.
Combining the fusion metrics and fusion result images, the WT and DW methods have good metric results but poor visual results, while the DO and DWO methods have good visual results, but the metrics of the DO method are not as good as those of the DWO method. The DWO method has both the best metric and the best visual results, proving that the DWO method is suitable for the PAN and MS fusion of NL images from the GIU sensor.

4.2.3. Results for the Moscow Dataset

Table 7 shows the results of five evaluation metrics for the eight fusion methods, and Figure 13 shows the radar comparison chart after the normalization of evaluation indicators when using the Moscow dataset. Figure 14 and Figure 15 show the original MS image, the original PAN image, and the fusion result maps when using the eight fusion methods. Figure 14 shows the overall effect and Figure 15 shows the detailed effects.
Table 7 shows that the DWO method is the optimal fusion method for the SSIM, SM, and QNR evaluation indices, while the DW method is optimal for the ERGAS and MI indices. Unlike in the Beijing and Shanghai datasets, the DW method is optimal for ERGAS and MI in the Moscow dataset, and we also find that the evaluation indices of the WT method are high whenever those of the DW method are high; the two vary together. The DDF fusion method outperforms the HSV and GS methods in the ERGAS, SSIM, SM, and MI metrics and is close to the better of the two in the remaining metric.
The image fusion results in Figure 14 and Figure 15 show that the GS and DDF fusion methods have different degrees of color aberration, with blue and red shifts in the image colors. The HSV and Laplacian fusion methods have a darker overall brightness, with a certain degree of color loss, and the texture details are not obvious. The WT and DW fusion methods show some blurring and distortion, and the image edges and details are not obvious, giving poor visual quality. The DO and DWO methods have the best fusion effect from the visual point of view: both show no obvious color aberration or color loss and retain clear texture features, and their fusion results do not differ much from each other visually; their performances are relatively similar.
Combining the fusion metrics and fusion result maps with the original image information: the mean value of each band of the Moscow image is larger than that of the other three areas, and WT fusion suffers from improper mixing of the high-frequency and low-frequency parts. The overall high luminance amplifies this problem, which makes the fused image closer to the original image and thus leads to higher evaluation indices. Overall, the DWO method proposed in this paper has high evaluation indices and the best visual effect, proving that the DWO method is suitable for the PAN and MS fusion of NL images from the GIU sensor.

4.2.4. Results for the New York Dataset

Table 8 shows the results of five evaluation metrics for the eight fusion methods, and Figure 16 shows the radar comparison chart after the normalization of evaluation indicators when using the New York dataset. Figure 17 and Figure 18 show the original MS image, the original PAN image, and the fusion result maps when using the eight fusion methods. Figure 17 shows the overall effect and Figure 18 shows the detailed effects.
Table 8 shows that our proposed DWO fusion method is the optimal fusion method for all evaluation metrics except the MI. The DW method is optimal for the MI metric and ranks in the top three for several metrics, and the WT and DO fusion methods also obtain several top-three positions. The DDF fusion method outperforms the HSV and GS methods in the ERGAS, SSIM, and MI metrics and is close to the better of the two in the remaining metrics.
The image fusion results in Figure 17 and Figure 18 show that the GS and DDF fusion methods exhibit color distortion; the fused images show higher color saturation, with red, blue, and green shifts. The HSV and Laplacian fusion methods produce a low overall brightness, accompanied by a certain degree of color loss and insufficient expression of texture details. The WT and DW fusion methods show a certain degree of blur and distortion, with poorly rendered edges and details and poor visual quality. The DO and DWO methods have the best fusion effect from the visual point of view: both show no obvious color aberration or color loss and retain clear texture features, and their fusion results do not differ much from each other visually; their performances are relatively similar.
Combining the fusion metrics and fusion result images with the original image information: the overall luminance of the Moscow study area is higher, and the mean value of each band of its original image is larger than that of the other three study areas; WT fusion suffers from improper mixing of the high-frequency and low-frequency parts, and the overall high luminance amplifies this problem, making the fused image closer to the original image and leading to a higher evaluation. Overall, the DWO method proposed in this study has high evaluation indices and the best visual effect, proving that the DWO method is suitable for the PAN and MS fusion of NL images from the GIU sensor.

5. Discussion

The DWO algorithm proposed in this study consists of OIS and DDF. OIS processes the original panchromatic image by combining linear and gamma stretching so that its pixel value range, mean, and standard deviation become closer to those of the MS image. As NL images are usually acquired at night and non-illuminated areas occupy a large proportion of each image, the DDF method, which applies different fusion algorithms at different luminance levels, is needed for NL image fusion. The experimental results on the four datasets show that the DWO algorithm has the best visual effect and evaluation metrics, and the ablation studies with the DDF, DO, DW, and DWO methods demonstrate that the algorithmic steps proposed in this study are necessary: each step improves the visual effect or the evaluation metrics of the fused images. Therefore, the DWO method can be applied well to GIU NL image fusion.
The DWO method proposed in this paper is mainly a specific fusion method applicable to NL images based on the spectral distribution characteristics of GIU sensors and the pixel value distribution characteristics of NL images. Therefore, the method only applies to NL image fusion and not traditional daytime optical and thermal infrared images.
Although the above experimental results reveal that the DWO algorithm proposed in this study has excellent fusion performance, there are still some possible defects and uncertainties in the design of the algorithm:
  • Using a single Otsu threshold to separate the light and dark regions is not always reliable.
  • The OIS algorithm proposed in this study involves several stretching parameters and relies on MATLAB's objective optimization function to compute them, which introduces some uncertainty.
  • The image fusion evaluation metrics used in this study are those commonly used for daytime optical images; no metrics specifically designed for NL images were applied.

6. Conclusions

In this study, the characteristics of SDGSAT-1 NL images were investigated, and a new DWO fusion method was proposed based on the characteristics of the GIU sensor and the distribution of NL image pixel values. Fusion experiments on four datasets (Beijing, Shanghai, Moscow, and New York) revealed that the proposed DWO method performs best in terms of both visual effect and evaluation metrics when compared with the traditional CS and MRA methods. The ERGAS values of the proposed method ranged from 11.697 to 20.161, the SSIM values from 0.786 to 0.874, the SM values from 0.683 to 0.734, the MI values from 0.983 to 1.315, and the QNR values from 0.973 to 0.978. Across the 20 evaluation results obtained on the four datasets, the DWO method was the optimal fusion method 17 times and presented the best fused visual effect for each dataset, indicating that the DWO method is well suited to the fusion of GIU NL images. At the same time, our fusion method was designed around the characteristics of GIU NL images and has not been tested on daytime images or NL images from other sensors; therefore, it should currently be applied only to GIU NL image fusion.

Author Contributions

K.L. designed the algorithm and experiments and wrote the manuscript. B.C. provided the original data, supervised the study, and reviewed the draft paper. X.L., X.Z. and G.W. revised the manuscript and provided some constructive suggestions. J.G., Q.H. and Y.G. processed the GI images and reviewed the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Hainan Province Science and Technology Special Fund (Grant No. ATIC-2023010001) and the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant Number XDA19010401).

Data Availability Statement

The data acquired by SDGSAT-1 are available to the scientific community globally free of charge through the SDGSAT-1 Open Science Program (www.sdgsat.ac.cn (accessed on 8 September 2022)).

Acknowledgments

It is acknowledged that the SDGSAT-1 GI data were kindly provided by the International Research Center of Big Data for Sustainable Development Goals (CBAS). The authors also thank the anonymous reviewers and the editors for their insightful comments and helpful suggestions that improved the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ma, T.; Zhou, C.; Pei, T.; Haynie, S.; Fan, J. Quantitative estimation of urbanization dynamics using time series of DMSP/OLS nighttime light data: A comparative case study from China’s cities. Remote Sens. Environ. 2012, 124, 99–107. [Google Scholar] [CrossRef]
  2. Stokes, E.C.; Seto, K.C. Characterizing urban infrastructural transitions for the Sustainable Development Goals using multi-temporal land, population, and nighttime light data. Remote Sens. Environ. 2019, 234, 111430. [Google Scholar] [CrossRef]
  3. Bailang, Y.; Congxiao, W.; Wenkang, G.; Zuoqi, C.; Kaifang, S.; Bin, W.; Yuchen, H.; Qiaoxuan, L.; Jianping, W. Nighttime light remote sensing and urban studies: Data, methods, applications, and prospects. Natl. Remote Sens. Bull. 2021, 25, 342–364. [Google Scholar]
  4. Li, C.; Chen, F.; Wang, N.; Yu, B.; Wang, L. SDGSAT-1 nighttime light data improve village-scale built-up delineation. Remote Sens. Environ. 2023, 297, 113764. [Google Scholar] [CrossRef]
  5. Jia, M.; Zeng, H.; Chen, Z.; Wang, Z.; Ren, C.; Mao, D.; Zhao, C.; Zhang, R.; Wang, Y. Nighttime light in China’s coastal zone: The type classification approach using SDGSAT-1 Glimmer Imager. Remote Sens. Environ. 2024, 305, 114104. [Google Scholar] [CrossRef]
  6. Lin, Z.; Jiao, W.; Liu, H.; Long, T.; Liu, Y.; Wei, S.; He, G.; Portnov, B.A.; Trop, T.; Liu, M. Modelling the public perception of urban public space lighting based on SDGSAT-1 glimmer imagery: A case study in Beijing, China. Sustain. Cities Soc. 2023, 88, 104272. [Google Scholar] [CrossRef]
  7. Liu, S.; Zhou, Y.; Wang, F.; Wang, S.; Wang, Z.; Wang, Y.; Qin, G.; Wang, P.; Liu, M.; Huang, L. Lighting characteristics of public space in urban functional areas based on SDGSAT-1 glimmer imagery: A case study in Beijing, China. Remote Sens. Environ. 2024, 306, 114137. [Google Scholar] [CrossRef]
  8. Quan, X.; Song, X.; Miao, J.; Huang, C.; Gao, F.; Li, J.; Ying, L. Study on the substitutability of nighttime light data for SDG indicators: A case study of Yunnan Province. Front. Environ. Sci. 2023, 11, 1309547. [Google Scholar] [CrossRef]
  9. Xie, Q.; Li, H.; Jing, L.; Zhang, K. Road Extraction Based on Deep Learning Using Sdgsat-1 Nighttime Light Data. In Proceedings of the IGARSS 2024—2024 IEEE International Geoscience and Remote Sensing Symposium, Athens, Greece, 7–12 July 2024; pp. 8033–8036. [Google Scholar]
  10. Guo, B.; Hu, D.; Zheng, Q. Potentiality of SDGSAT-1 glimmer imagery to investigate the spatial variability in nighttime lights. Int. J. Appl. Earth Obs. Geoinf. 2023, 119, 103313. [Google Scholar] [CrossRef]
  11. Chang, D.; Wang, Q.; Yang, J.; Xu, W. Research on road extraction method based on sustainable development goals satellite-1 nighttime light data. Remote Sens. 2022, 14, 6015. [Google Scholar] [CrossRef]
  12. Chen, J.; Cheng, B.; Zhang, X.; Long, T.; Chen, B.; Wang, G.; Zhang, D. A TIR-visible automatic registration and geometric correction method for SDGSAT-1 thermal infrared image based on modified RIFT. Remote Sens. 2022, 14, 1393. [Google Scholar] [CrossRef]
  13. Zhang, D.; Cheng, B.; Shi, L.; Gao, J.; Long, T.; Chen, B.; Wang, G. A destriping algorithm for SDGSAT-1 nighttime light images based on anomaly detection and spectral similarity restoration. Remote Sens. 2022, 14, 5544. [Google Scholar] [CrossRef]
  14. Guo, H.; Chen, H.; Chen, L.; Fu, B. Progress on CASEarth satellite development. Chin. J. Space Sci. 2020, 40, 707–717. [Google Scholar] [CrossRef]
  15. Jiang, M.; Shen, H.; Li, J.; Yuan, Q.; Zhang, L. A differential information residual convolutional neural network for pansharpening. ISPRS J. Photogramm. Remote Sens. 2020, 163, 257–271. [Google Scholar] [CrossRef]
  16. González-Audícana, M.; Saleta, J.L.; Catalán, R.G.; García, R. Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1291–1299. [Google Scholar] [CrossRef]
  17. Chavez, P.; Sides, S.C.; Anderson, J.A. Comparison of three different methods to merge multiresolution and multispectral data- Landsat TM and SPOT panchromatic. Photogramm. Eng. Remote Sens. 1991, 57, 295–303. [Google Scholar]
  18. Shah, V.P.; Younan, N.H.; King, R.L. An efficient pan-sharpening method via a combined adaptive PCA approach and contourlets. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1323–1335. [Google Scholar] [CrossRef]
  19. Laben, C.A.; Brower, B.V. Process for Enhancing the Spatial Resolution of Multispectral Imagery Using Pan-Sharpening. Google Patents US6011875A, 4 January 2000. [Google Scholar]
  20. Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A.; Selva, M. MTF-tailored multiscale fusion of high-resolution MS and Pan imagery. Photogramm. Eng. Remote Sens. 2006, 72, 591–596. [Google Scholar] [CrossRef]
  21. Li, S.; Kwok, J.T.; Wang, Y. Using the discrete wavelet frame transform to merge Landsat TM and SPOT panchromatic images. Inf. Fusion 2002, 3, 17–23. [Google Scholar] [CrossRef]
  22. Wei, Q.; Dobigeon, N.; Tourneret, J.-Y. Fast fusion of multi-band images based on solving a Sylvester equation. IEEE Trans. Image Process. 2015, 24, 4109–4121. [Google Scholar] [CrossRef]
  23. Ma, J.; Yu, W.; Chen, C.; Liang, P.; Guo, X.; Jiang, J. Pan-GAN: An unsupervised pan-sharpening method for remote sensing image fusion. Inf. Fusion 2020, 62, 110–120. [Google Scholar] [CrossRef]
  24. Chen, J.; Pan, Y.; Chen, Y. Remote sensing image fusion based on Bayesian GAN. arXiv 2020, arXiv:2009.09465. [Google Scholar]
  25. Masi, G.; Cozzolino, D.; Verdoliva, L.; Scarpa, G. Pansharpening by convolutional neural networks. Remote Sens. 2016, 8, 594. [Google Scholar] [CrossRef]
  26. Shakya, A.; Biswas, M.; Pal, M. CNN-based fusion and classification of SAR and Optical data. Int. J. Remote Sens. 2020, 41, 8839–8861. [Google Scholar] [CrossRef]
  27. Huang, W.; Xiao, L.; Wei, Z.; Liu, H.; Tang, S. A new pan-sharpening method with deep neural networks. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1037–1041. [Google Scholar] [CrossRef]
  28. Azarang, A.; Manoochehri, H.E.; Kehtarnavaz, N. Convolutional autoencoder-based multispectral image fusion. IEEE Access 2019, 7, 35673–35683. [Google Scholar] [CrossRef]
  29. Rousseeuw, P.J. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. J. Comput. Appl. Math. 1987, 20, 53–65. [Google Scholar] [CrossRef]
  30. Davies, D.L.; Bouldin, D.W. A cluster separation measure. IEEE Trans. Pattern Anal. Mach. Intell. 1979, PAMI-1, 224–227. [Google Scholar] [CrossRef]
  31. Otsu, N. A threshold selection method from gray-level histograms. Automatica 1975, 11, 23–27. [Google Scholar] [CrossRef]
  32. Wald, L.; Ranchin, T.; Mangolini, M. Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images. Photogramm. Eng. Remote Sens. 1997, 63, 691–699. [Google Scholar]
  33. Jagalingam, P.; Hegde, A.V. A review of quality metrics for fused image. Aquat. Procedia 2015, 4, 133–142. [Google Scholar] [CrossRef]
  34. Vivone, G.; Alparone, L.; Chanussot, J.; Dalla Mura, M.; Garzelli, A.; Licciardi, G.A.; Restaino, R.; Wald, L. A critical comparison among pansharpening algorithms. IEEE Trans. Geosci. Remote Sens. 2014, 53, 2565–2586. [Google Scholar] [CrossRef]
  35. Qu, G.; Zhang, D.; Yan, P. Information measure for performance of image fusion. Electron. Lett. 2002, 38, 313–315. [Google Scholar] [CrossRef]
  36. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
  37. Santini, S.; Jain, R. Similarity measures. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 871–883. [Google Scholar] [CrossRef]
  38. Wang, Z.; Bovik, A.C. A universal image quality index. IEEE Signal Process. Lett. 2002, 9, 81–84. [Google Scholar] [CrossRef]
  39. Alparone, L.; Aiazzi, B.; Baronti, S.; Garzelli, A.; Nencini, F.; Selva, M. Multispectral and panchromatic data fusion assessment without reference. Photogramm. Eng. Remote Sens. 2008, 74, 193–200. [Google Scholar] [CrossRef]
  40. Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; Torralba, A. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2921–2929. [Google Scholar]
Figure 1. GIU spectral response function.
Figure 2. The 40 m SDGSAT-1 GIU color imagery of Beijing, China (a); Shanghai, China (b); Moscow, Russia (c); and New York, USA (d).
Figure 3. Images of field trips. (a) The general location of sampling sites; (b) the specific locations of sampling sites; (c) the GIU image.
Figure 4. Three-dimensional plot of the NL field survey results for Zhongguancun Forest Park.
Figure 5. The overall flowchart of the research methodology.
Figure 5. The overall flowchart of the research methodology.
Remotesensing 16 04298 g005
Figure 6. The flowchart of the DDF algorithm.
Figure 6. The flowchart of the DDF algorithm.
Remotesensing 16 04298 g006
Figure 7. A radar comparison chart after normalization of evaluation indicators using the Beijing dataset.
Figure 7. A radar comparison chart after normalization of evaluation indicators using the Beijing dataset.
Remotesensing 16 04298 g007
Figure 8. Original images and fusion results using the Beijing dataset. (a) Original MS image; (b) original PAN image; and fused products of HSV (c), GS (d), WT (e), Laplacian (f), DDF (g), DO (h), DW (i), and DWO (j) methods.
Figure 8. Original images and fusion results using the Beijing dataset. (a) Original MS image; (b) original PAN image; and fused products of HSV (c), GS (d), WT (e), Laplacian (f), DDF (g), DO (h), DW (i), and DWO (j) methods.
Remotesensing 16 04298 g008
Figure 9. Detailed images of the original and fused images using the Beijing dataset. Detailed images of (a) original MS; (b) original PAN; detailed fused images of HSV (c), GS (d), WT (e), Laplacian (f), DDF (g), DO (h), DW (i), and DWO (j) methods.
Figure 9. Detailed images of the original and fused images using the Beijing dataset. Detailed images of (a) original MS; (b) original PAN; detailed fused images of HSV (c), GS (d), WT (e), Laplacian (f), DDF (g), DO (h), DW (i), and DWO (j) methods.
Remotesensing 16 04298 g009
Figure 10. A radar comparison chart after the normalization of evaluation indicators when using the Shanghai dataset.
Figure 10. A radar comparison chart after the normalization of evaluation indicators when using the Shanghai dataset.
Remotesensing 16 04298 g010
Figure 11. Original images and fusion results when using the Shanghai dataset. (a) Original MS image; (b) original PAN image; and fused products of HSV (c), GS (d), WT (e), Laplacian (f), DDF (g), DO (h), DW (i), and DWO (j) methods.
Figure 11. Original images and fusion results when using the Shanghai dataset. (a) Original MS image; (b) original PAN image; and fused products of HSV (c), GS (d), WT (e), Laplacian (f), DDF (g), DO (h), DW (i), and DWO (j) methods.
Remotesensing 16 04298 g011
Figure 12. Detailed images of the original and fused images when using the Shanghai dataset. Detailed images of (a) original MS; (b) original PAN; detailed fused images of HSV (c), GS (d), WT (e), Laplacian (f), DDF (g), DO (h), DW (i), and DWO (j) methods.
Figure 12. Detailed images of the original and fused images when using the Shanghai dataset. Detailed images of (a) original MS; (b) original PAN; detailed fused images of HSV (c), GS (d), WT (e), Laplacian (f), DDF (g), DO (h), DW (i), and DWO (j) methods.
Remotesensing 16 04298 g012
Figure 13. A radar comparison chart after the normalization of evaluation indicators when using the Moscow dataset.
Figure 13. A radar comparison chart after the normalization of evaluation indicators when using the Moscow dataset.
Remotesensing 16 04298 g013
Figure 14. Original images and fusion results when using the Moscow dataset. (a) Original MS image; (b) original PAN image; and fused products of HSV (c), GS (d), WT (e), Laplacian (f), DDF (g), DO (h), DW (i), and DWO (j) methods.
Figure 14. Original images and fusion results when using the Moscow dataset. (a) Original MS image; (b) original PAN image; and fused products of HSV (c), GS (d), WT (e), Laplacian (f), DDF (g), DO (h), DW (i), and DWO (j) methods.
Remotesensing 16 04298 g014
Figure 15. Detailed images of the original and fused images when using the Moscow dataset. Detailed images of (a) original MS; (b) original PAN; and detail fused images of HSV (c), GS (d), WT (e), Laplacian (f), DDF (g), DO (h), DW (i), DWO (j) methods.
Figure 15. Detailed images of the original and fused images when using the Moscow dataset. Detailed images of (a) original MS; (b) original PAN; and detail fused images of HSV (c), GS (d), WT (e), Laplacian (f), DDF (g), DO (h), DW (i), DWO (j) methods.
Remotesensing 16 04298 g015
Figure 16. A radar comparison chart after the normalization of evaluation indicators when using the New York dataset.
Figure 16. A radar comparison chart after the normalization of evaluation indicators when using the New York dataset.
Remotesensing 16 04298 g016
Figure 17. Original images and fusion results when using the New York dataset. (a) Original MS image; (b) original PAN image; and fused products of HSV (c), GS (d), WT (e), Laplacian (f), DDF (g), DO (h), DW (i), and DWO (j) methods.
Figure 17. Original images and fusion results when using the New York dataset. (a) Original MS image; (b) original PAN image; and fused products of HSV (c), GS (d), WT (e), Laplacian (f), DDF (g), DO (h), DW (i), and DWO (j) methods.
Remotesensing 16 04298 g017
Figure 18. Detailed images of the original and fused images when using the New York dataset. Detailed images of (a) original MS; (b) original PAN; and detailed fused images of HSV (c), GS (d), WT (e), Laplacian (f), DDF (g), DO (h), DW (i), and DWO (j) methods.
Figure 18. Detailed images of the original and fused images when using the New York dataset. Detailed images of (a) original MS; (b) original PAN; and detailed fused images of HSV (c), GS (d), WT (e), Laplacian (f), DDF (g), DO (h), DW (i), and DWO (j) methods.
Remotesensing 16 04298 g018
Table 1. A comparison of GIU NL data with other available data.
Satellite/Sensor | Operational Years | Spatial Resolution (m) | Temporal Resolution | Spectral Bands
DMSP/OLS | 1992 to 2013 | 2700 | Daily global coverage | PAN: 400–1100 nm
NPP/VIIRS | 2011 to present | 750 | Daily global coverage | PAN: 505–890 nm
ISS | 2003 to present | 5–200 | Irregular image acquisition | RGB
Luojia1-01 | 2018 to 2019 | 130 | Global coverage in 15 days | PAN: 460–980 nm
SDGSAT-1/GIU | 2021 to present | PAN: 10; MS: 40 | Global coverage in 11 days | PAN: 430–900 nm; B: 430–520 nm; G: 520–615 nm; R: 615–900 nm
Table 2. The details of the four SDGSAT-1 NL datasets.
ID | Location | Date | Time (UTC) | Image Size (MS/PAN)
1 | Beijing, China | 16 April 2024 | 13:13 | 800 × 800 / 3200 × 3200
2 | Shanghai, China | 18 May 2024 | 13:05 | 800 × 800 / 3200 × 3200
3 | Moscow, Russia | 27 February 2024 | 18:22 | 800 × 800 / 3200 × 3200
4 | New York, USA | 7 February 2024 | 02:00 | 800 × 800 / 3200 × 3200
Table 3. A comparison of pixel values in the examined areas.
ID | Location | Band | Mean | Standard Deviation | Minimum | Maximum
1 | Beijing, China | R | 342 | 509 | 1 | 4095
1 | Beijing, China | G | 319 | 500 | 1 | 4095
1 | Beijing, China | B | 98 | 263 | 1 | 4095
1 | Beijing, China | PAN | 25 | 120 | 1 | 4095
2 | Shanghai, China | R | 439 | 690 | 1 | 4095
2 | Shanghai, China | G | 478 | 710 | 1 | 4095
2 | Shanghai, China | B | 186 | 427 | 1 | 4095
2 | Shanghai, China | PAN | 47 | 228 | 1 | 4095
3 | Moscow, Russia | R | 850 | 922 | 1 | 4095
3 | Moscow, Russia | G | 847 | 883 | 1 | 4095
3 | Moscow, Russia | B | 265 | 377 | 1 | 4095
3 | Moscow, Russia | PAN | 58 | 142 | 1 | 4095
4 | New York, USA | R | 281 | 441 | 1 | 4095
4 | New York, USA | G | 334 | 544 | 1 | 4095
4 | New York, USA | B | 110 | 254 | 1 | 4095
4 | New York, USA | PAN | 22 | 106 | 1 | 4095
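For readers who wish to reproduce the per-band statistics in Table 3 directly from the GIU image files, a minimal Python sketch is given below. It assumes the rasterio package and a hypothetical file name, and it is not tied to the processing chain used in this study.

```python
import numpy as np
import rasterio  # assumed available; any GeoTIFF reader would work equally well


def band_stats(path):
    """Per-band mean, standard deviation, minimum, and maximum of a GeoTIFF image."""
    with rasterio.open(path) as src:
        data = src.read().astype(np.float64)  # shape: (bands, rows, cols)
    return [
        {
            "band": i + 1,
            "mean": float(band.mean()),
            "std": float(band.std()),
            "min": float(band.min()),
            "max": float(band.max()),
        }
        for i, band in enumerate(data)
    ]


# Hypothetical file name; substitute the actual GIU MS or PAN product.
# print(band_stats("SDGSAT1_GIU_MS_Beijing_20240416.tif"))
```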
Table 4. A comparison of the original MS, original PAN, and OIS PAN images.
Location | Image | Mean | Standard Deviation | Minimum | Maximum
Beijing, China | Original MS | 254 | 401 | 1 | 4095
Beijing, China | Original PAN | 25 | 120 | 1 | 4095
Beijing, China | OIS PAN | 253 | 391 | 1 | 4095
Shanghai, China | Original MS | 368 | 591 | 1 | 4095
Shanghai, China | Original PAN | 47 | 228 | 1 | 4095
Shanghai, China | OIS PAN | 359 | 527 | 1 | 4095
Moscow, Russia | Original MS | 654 | 700 | 1 | 4095
Moscow, Russia | Original PAN | 58 | 142 | 1 | 4095
Moscow, Russia | OIS PAN | 646 | 655 | 1 | 4095
New York, USA | Original MS | 242 | 399 | 1 | 4095
New York, USA | Original PAN | 22 | 106 | 1 | 4095
New York, USA | OIS PAN | 241 | 391 | 1 | 4095
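Table 4 shows that the OIS step brings the mean and standard deviation of the stretched PAN band close to those of the corresponding MS image, whereas the original PAN statistics are markedly lower. As an illustration only, the following minimal sketch implements a generic gamma-plus-linear stretch toward MS statistics under our own assumptions; it is not the exact OIS implementation evaluated here.

```python
import numpy as np


def stretch_pan_to_ms(pan, ms_mean, ms_std, gamma=0.7, max_dn=4095):
    """Illustrative gamma + linear stretch of a PAN band toward MS statistics.

    pan: PAN digital numbers (1-4095 for the 12-bit GIU products).
    gamma < 1 lifts the dark end typical of night-time scenes; the linear
    step then matches the mean and standard deviation of the MS reference.
    """
    pan = pan.astype(np.float64)
    # Gamma stretch on normalized values to brighten dark night-time pixels.
    g = (pan / max_dn) ** gamma * max_dn
    # Linear stretch to the target mean / standard deviation.
    g_std = g.std() if g.std() > 0 else 1.0
    out = (g - g.mean()) / g_std * ms_std + ms_mean
    return np.clip(out, 0, max_dn)


# Example with the Beijing statistics of Table 4 (MS mean 254, MS standard deviation 401):
# pan_stretched = stretch_pan_to_ms(pan_dn, ms_mean=254.0, ms_std=401.0)
```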
Table 5. The fusion evaluation results when using the Beijing dataset. (1), (2), and (3) denote the three best-performing fusion methods for each evaluation metric.
Method | ERGAS | SSIM | SM | MI | QNR
HSV | 48.333 | 0.289 | 0.307 | 0.654 | 0.212
GS | 30.786 | 0.672 | 0.564 | 0.559 | 0.941
WT | 27.486 | 0.822 | 0.624 | 0.865 | 0.915
Laplacian | 49.797 | 0.262 | 0.281 | 0.280 | 0.367
DDF | 30.239 | 0.747 | 0.562 | 0.770 | 0.886
DO | 22.858 (3) | 0.827 (2) | 0.709 (2) | 0.922 (3) | 0.960 (3)
DW | 21.005 (2) | 0.806 (3) | 0.626 (3) | 0.961 (2) | 0.963 (2)
DWO | 20.161 (1) | 0.855 (1) | 0.724 (1) | 0.983 (1) | 0.973 (1)
Table 6. The fusion evaluation results when using the Shanghai dataset. (1), (2), and (3) denote the three best-performing fusion methods for each evaluation metric.
Method | ERGAS | SSIM | SM | MI | QNR
HSV | 43.491 | 0.223 | 0.372 | 0.786 | 0.182
GS | 29.802 | 0.594 | 0.533 | 0.585 | 0.927
WT | 18.026 (3) | 0.759 | 0.584 | 0.999 | 0.960 (3)
Laplacian | 45.767 | 0.182 | 0.338 | 0.260 | 0.377
DDF | 29.273 | 0.665 | 0.531 | 0.783 | 0.881
DO | 19.598 | 0.788 (2) | 0.680 (2) | 1.014 (3) | 0.963 (2)
DW | 17.912 (2) | 0.778 (3) | 0.611 (3) | 1.065 (2) | 0.948
DWO | 16.822 (1) | 0.827 (1) | 0.698 (1) | 1.085 (1) | 0.974 (1)
Table 7. The fusion evaluation results when using the Moscow dataset. (1), (2), and (3) denote the three best-performing fusion methods for each evaluation metric.
Method | ERGAS | SSIM | SM | MI | QNR
HSV | 35.844 | 0.199 | 0.241 | 0.927 | 0.216
GS | 21.471 | 0.598 | 0.496 | 0.794 | 0.948
WT | 12.315 (3) | 0.728 | 0.543 | 1.323 (2) | 0.969 (3)
Laplacian | 37.318 | 0.171 | 0.212 | 0.257 | 0.531
DDF | 20.737 | 0.660 | 0.494 | 0.974 | 0.936
DO | 13.451 | 0.737 (3) | 0.669 (2) | 1.199 | 0.968
DW | 11.627 (1) | 0.754 (2) | 0.588 (3) | 1.364 (1) | 0.971 (2)
DWO | 11.697 (2) | 0.786 (1) | 0.683 (1) | 1.315 (3) | 0.978 (1)
Table 8. The fusion evaluation results when using the New York dataset. (1), (2), and (3) denote the three best-performing fusion methods for each evaluation metric.
Method | ERGAS | SSIM | SM | MI | QNR
HSV | 47.109 | 0.376 | 0.305 | 0.600 | 0.188
GS | 32.282 | 0.699 | 0.541 | 0.614 | 0.939
WT | 19.345 (3) | 0.815 | 0.558 | 1.047 (3) | 0.971 (2)
Laplacian | 48.688 | 0.352 | 0.280 | 0.230 | 0.383
DDF | 31.281 | 0.791 | 0.540 | 0.879 | 0.901
DO | 20.264 | 0.841 (3) | 0.719 (2) | 0.973 | 0.969 (3)
DW | 18.068 (2) | 0.847 (2) | 0.622 (3) | 1.123 (1) | 0.969
DWO | 17.189 (1) | 0.874 (1) | 0.734 (1) | 1.083 (2) | 0.978 (1)
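Among the five metrics reported in Tables 5–8, ERGAS is an error measure, so the values marked (1)–(3) are the lowest in that column, whereas for SSIM, SM, MI, and QNR the marked values are the highest. For reference, the minimal sketch below computes ERGAS in its standard form; the 10 m / 40 m GIU resolution ratio is assumed, and this is not the evaluation code used in this study.

```python
import numpy as np


def ergas(reference, fused, ratio=10 / 40):
    """Relative dimensionless global error (ERGAS); lower values are better.

    reference, fused: arrays of shape (bands, height, width) on the same grid.
    ratio: PAN-to-MS pixel-size ratio (10 m / 40 m for the SDGSAT-1 GIU).
    """
    reference = reference.astype(np.float64)
    fused = fused.astype(np.float64)
    n_bands = reference.shape[0]
    acc = 0.0
    for k in range(n_bands):
        rmse_k = np.sqrt(np.mean((reference[k] - fused[k]) ** 2))
        mean_k = reference[k].mean()
        acc += (rmse_k / mean_k) ** 2
    return 100.0 * ratio * np.sqrt(acc / n_bands)


# The two inputs must be co-registered at the same resolution, e.g., the fused
# product degraded back to 40 m and compared with the original MS bands.
```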
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
