1. Introduction
Recently, multispectral remote sensing images have been widely used in agriculture, forestry, mineral exploration, the military, and many other fields, yielding substantial social and economic benefits [1]. However, owing to the limitations of sensors and atmospheric scattering, the visual quality and spatial resolution of multispectral remote sensing images cannot fully meet practical demands; therefore, image enhancement is usually applied before image analysis and interpretation to highlight useful information and enlarge the differences between features [2,3,4]. Multispectral remote sensing images are generated by collecting several bands of the same region over different spectral sampling intervals [5]; the resulting data contain information from multiple channels.
At present, single-channel image enhancement methods mainly comprise spatial-domain and frequency-domain algorithms [6,7,8]. Common spatial-domain algorithms include histogram matching [9,10,11], Retinex algorithms [12,13,14,15], morphological methods [16], differential filtering algorithms, dark channel prior algorithms, and deep learning algorithms.
Differential filtering algorithms include integer-order and fractional-order differential algorithms. Among integer-order algorithms, several operators have been proposed to sharpen images, including first-order operators, such as the Sobel and Prewitt operators, and second-order operators, such as the Laplacian operator [17]. With the development of fractional calculus theory, fractional differential algorithms have been widely used [18]. Compared with integer-order differential algorithms, fractional differential algorithms can preserve the low-frequency information of smooth regions as well as high-frequency edge features [19]. Classic fractional differential algorithms process images with filtering masks; however, the masks usually cannot make full use of the autocorrelation between neighboring pixels [20]. Pixels in an image are strongly correlated, and these correlations are mainly reflected in their spatial position relations; therefore, some researchers proposed dividing the nonzero coefficients of the fractional differential mask equally within the pixel neighborhood [20,21]. However, such differential filtering algorithms cannot accurately represent the spatial position relations in the neighborhood and cannot improve the detailed information of smooth areas.
The dark channel prior algorithm was proposed by He et al. [22]; it mainly utilizes the deviation of each pixel from the minimum brightness point in the three basic color channels to enhance images and achieves an effective defogging effect for a single image. Dong et al. [23] found a similarity between inverted low-illuminance images and haze images and applied the dark channel prior theory to low-illuminance image enhancement. Researchers have also proposed methods combining dark channel priors with other algorithms [24,25,26,27,28]; however, these enhancement algorithms are designed for color or grey images, whereas a multispectral remote sensing image contains more than three bands. We expand the RGB channels and optimize the transmittance through a fractional differentiation algorithm in the spatial domain.
Furthermore, traditional algorithms have seen relatively limited use in the field of image enhancement, especially for remote sensing images. In recent years, deep learning algorithms have been widely applied to image enhancement [29]; however, deep learning algorithms still have some disadvantages, such as long training times, large data demands, and low universality.
Therefore, an improved enhancement method based on the dark channel prior and fractional differential filtering is proposed. We implement the improved dark channel prior technology to improve the clarity and brightness of the original multispectral image and then use the bilateral fractional differential algorithm to further enhance the edge and textural details of the image.
Figure 1 illustrates the flowchart of the proposed method, which contains four steps. First, the original multispectral image is inverted to make it suitable for dark channel theory. Second, the improved dark channel prior technology is used to enhance the inverted image: the guide image is obtained by spatial-domain fractional differential filtering and applied to estimate the transmittance of the dark channel prior, and the original channels are simultaneously extended to multiple channels. Third, a bilateral fractional differential algorithm, comprising a spatial-domain fractional differential model and an intensity-domain fractional differential model, is proposed to enhance image details; each band of the multispectral image is enhanced by both the spatial-domain and intensity-domain algorithms. Finally, the combined image is inverted and synthesized to obtain the final enhanced image. To verify the effectiveness of the method, we test and analyze it on multispectral image datasets.
Compared with previous works, our proposed method mainly offers the following contributions.
- (1)
By synthesizing the multiband information of multispectral remote sensing images, our algorithm obtains more accurate and clearer images than enhancement based on a single band alone.
- (2)
A bilateral fractional differential model is proposed for the first time; it effectively improves the edge and textural details of multispectral images.
- (3)
By expanding the bands and optimizing the transmittance of the dark channel prior model, an improved image with higher contrast and brightness is further obtained.
The remainder of this paper is organized as follows.
Section 2 introduces the related multispectral image enhancement methods.
Section 3 describes the principle and implementation steps of the proposed method.
Section 4 discusses the experimental datasets and experimental results.
Section 5 presents the conclusion.
2. Related Work
In this section, we briefly review previous works on multispectral remote sensing image enhancement.
At present, some enhancement algorithms for multispectral images have been proposed. Tian et al. [30] introduced the extended offset sparsity decomposition (OSD) algorithm into multispectral image enhancement and applied it to the hue-saturation-value (HSV) transform and the principal component analysis (PCA) transform, forming the HSV-OSD and PCA-OSD algorithms, respectively. OSD is performed on the brightness component of HSV and the selected principal component of PCA to maintain the original image information and improve the image details. A. K. Bhandari et al. [31] applied a method combining the discrete wavelet transform (DWT) and singular value decomposition (SVD) to multispectral color image enhancement. The original image was decomposed by the wavelet transform, and the subbands were normalized by SVD so that the image after the inverse discrete wavelet transform (IDWT) had higher contrast. In addition, A. K. Bhandari et al. [32] presented a combination of the discrete cosine transform (DCT) and SVD to highlight the contrast of color multispectral remote sensing images. In [33], Shilpa Suresh et al. developed a novel framework for the enhancement of multispectral images, which primarily aimed to highlight the contrast of color-synthesis remote sensing images through a modified linking synaptic computation network (MLSCN). Wang et al. [34] exploited a color constancy algorithm, which used an improved linear transformation function to improve the brightness while avoiding color distortion. Shan-long Lu et al. [35] introduced a multispectral satellite remote sensing image enhancement algorithm based on the combination of PCA and the intensity-hue-saturation (IHS) transform. The intensity component of the IHS transform was replaced by the first principal component of the PCA transform, and the inverse IHS transform was applied to obtain an enhanced image. T. Venkatakrishnamoorthy et al. [36] expounded a method based on spatial enhancement and spectral enhancement, which was applied to false-color-synthetic satellite cloud images. The algorithm was used in image processing after extracting useful features using independent component analysis (ICA) and PCA.
Nevertheless, the above research was mainly applied to color-synthetic multispectral images; that is, the images included only three bands of the original multispectral data and could not effectively combine the information of the other bands. To solve this problem, some multiband image enhancement algorithms have been proposed. In [37], Afshan Mulla et al. proposed a multispectral image enhancement scheme for specific bands, which performed selective region enhancement to improve the resolution of all bands. Chen Yang et al. [38] proposed a fuzzy PCA algorithm and tested it on a multispectral image dataset with six bands. The algorithm improved the accuracy of surface feature identification by introducing fuzzy statistics. However, the abovementioned studies do not show a satisfactory enhancement effect on image brightness and contrast and cannot sufficiently enhance the edge and local details of images.
In this paper, our proposed method is different from the previous multispectral remote sensing image enhancement methods in the following ways.
- (1)
The method of combining the dark channel prior algorithm with the fractional differential algorithm is applied to multispectral remote sensing image enhancement for the first time. While improving the overall brightness and detail characteristics of the image, the method combines the information of the spatial and spectral dimensions to make full use of all wave bands of multispectral images.
- (2)
Unlike the previous fractional differential algorithm, we propose a new fractional differential framework that enhances the edge and textural details of images. Considering the influence of the spatial distance on the pixel autocorrelation, we modify the fractional differential coefficients and propose the spatial domain fractional mask. Furthermore, according to the ability of the pixel similarity to judge the image edge, we propose the intensity domain fractional mask. Then the two above domains are fused to compose the bilateral fractional differential framework, and the enhanced images can be obtained by the framework. This framework can fully combine the information of spatial domain and intensity domain to maintain the details of the image in the smooth region and texture region and improve the image definition.
- (3)
An improved dark channel prior algorithm is proposed, which extends the RGB channels to multiple channels and optimizes the transmittance through our proposed spatial-domain fractional differentiation algorithm, thereby making full use of the ground features of different spectral bands and enhancing the overall brightness and contrast of the image.
3. Proposed Method
In this section, we review the definitions of the classical dark channel prior algorithm and fractional differential algorithm and discuss the principles and steps of the proposed method.
3.1. Dark Channel Prior
An inverted low-illumination image and its histogram are highly similar to a hazy image and its histogram [23]; therefore, the low-illumination image is first inverted as

\[ I^{c}(x) = 255 - R^{c}(x), \]

where \(I = \{I^{c}\}\) denotes the pseudohaze image vector containing the 3-channel images \(I^{c}\) (\(c \in \{r, g, b\}\)), and \(R = \{R^{c}\}\) denotes the input image vector containing the 3-channel images \(R^{c}\). The physical model of atmospheric scattering describing haze images can be expressed as

\[ I^{c}(x) = J^{c}(x)\,t(x) + A^{c}\,(1 - t(x)), \]

where \(t(x)\) is the transmittance, which represents the degree of scattering of the incident light; \(A^{c}\) represents the total intensity of atmospheric light; and \(J = \{J^{c}\}\) represents the image vector to be restored containing the 3-channel images \(J^{c}\).
He et al. [22] stated that in most local areas of fog-free images, there are some pixels with very low values in at least one of the color channels; therefore, the dark channel prior theory was proposed. That is, for a fog-free image, the dark channel can be defined as

\[ J^{dark}(x) = \min_{y \in \Omega(x)} \Bigl( \min_{c \in \{r,g,b\}} J^{c}(y) \Bigr), \]

where \(J^{dark}\) represents the dark channel; \(J^{c}\) is the intensity of the color channel \(c\), with \(c\) among \(r\), \(g\), and \(b\); and \(\Omega(x)\) represents the filtering window centered on the pixel point \(x\).

The rule of the dark channel prior can be defined as

\[ J^{dark}(x) \rightarrow 0. \]
The core of the dark channel prior model is to obtain the atmospheric light and transmittance by handling the dark channel map and then realize image enhancement according to the image defogging model.
To estimate the initial transmittance, the atmospheric scattering model is normalized as

\[ \frac{I^{c}(x)}{A^{c}} = t(x)\,\frac{J^{c}(x)}{A^{c}} + 1 - t(x). \]

Assuming that the transmittance is constant in the same filtering window, minimum filtering is conducted at both ends of the above formula to obtain the transmittance with errors \(\tilde{t}(x)\), which can be written as follows:

\[ \min_{y \in \Omega(x)} \min_{c} \frac{I^{c}(y)}{A^{c}} = \tilde{t}(x) \min_{y \in \Omega(x)} \min_{c} \frac{J^{c}(y)}{A^{c}} + 1 - \tilde{t}(x). \]

Assuming that the image to be restored is similar to a clear image under normal weather conditions, the dark channel prior rule can be generated as

\[ \min_{y \in \Omega(x)} \min_{c} J^{c}(y) \rightarrow 0. \]

Dividing by the constant \(A^{c}\), we can obtain

\[ \min_{y \in \Omega(x)} \min_{c} \frac{J^{c}(y)}{A^{c}} \rightarrow 0. \]

Continuing with the derivation, the transmittance is computed by

\[ \tilde{t}(x) = 1 - \min_{y \in \Omega(x)} \min_{c} \frac{I^{c}(y)}{A^{c}}. \]

Under normal weather conditions, incident light inevitably undergoes some scattering, so close objects are displayed more clearly and distant objects are usually blurrier. To preserve this depth-of-field effect and avoid over-enhancement of images, a constant parameter \(\omega\) is introduced to adjust the transmittance. The final transmittance is modified as

\[ t(x) = 1 - \omega \min_{y \in \Omega(x)} \min_{c} \frac{I^{c}(y)}{A^{c}}, \]

where \(\omega\) is the haze removal factor, which is generally set to 0.95.
In the above deduction, it is assumed that the ambient light intensity is known. In practice, we can obtain the value from the dark channel of foggy images. The smaller the reflectivity of a pixel in the image, the greater the attenuation of the incident light, and the greater the superposition effect of ambient light; thus, the grey value of the corresponding pixel is higher in the final dark channel map. The larger the reflectivity of the pixel is, the smaller the attenuation of incident light and the weaker the effect of ambient light. Generally, the grey value of the corresponding pixel is lower. In He’s work, the first 0.1% pixels with the largest brightness values were screened in a dark channel image, that is, the area where the transmittance is close to zero; and then the pixel value of the point with the highest brightness was found as the ambient light value in the corresponding position of the area.
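The selection rule above can be sketched as follows (a minimal NumPy illustration under our own naming; the 0.1% fraction follows He's description, but the function and its signature are assumptions, not the authors' code):

```python
import numpy as np

def estimate_atmospheric_light(image, dark, top_fraction=0.001):
    """Pick ambient light from the brightest 0.1% of dark-channel pixels.

    image: (H, W, C) haze image; dark: (H, W) dark-channel map.
    """
    n_top = max(1, int(dark.size * top_fraction))
    # Indices of the brightest dark-channel pixels (transmittance close to zero).
    idx = np.argsort(dark.ravel())[-n_top:]
    candidates = image.reshape(-1, image.shape[2])[idx]
    # Among the candidates, take the pixel with the highest overall brightness.
    return candidates[np.argmax(candidates.sum(axis=1))]
```

Sorting the dark channel rather than thresholding it keeps the selection robust when many pixels share the same value.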
After calculating the atmospheric light and transmittance, the reconstructed image of the real scene can be obtained by substituting them into the image defogging model:

\[ J^{c}(x) = \frac{I^{c}(x) - A^{c}}{t(x)} + A^{c}. \]

A smaller value of the transmission map will result in a larger value of \(J^{c}(x)\), which will make the entire image shift toward the white field; therefore, a lower bound written as \(t_{0}\) is introduced. When \(t(x)\) is less than \(t_{0}\), we set \(t(x) = t_{0}\), and the typical value of \(t_{0}\) is 0.1. The final haze-free image is recovered by

\[ J^{c}(x) = \frac{I^{c}(x) - A^{c}}{\max(t(x), t_{0})} + A^{c}. \]
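As a compact sketch of the prior-to-recovery chain (NumPy, our own helper names; the window size and the brute-force minimum filter are illustrative choices, not the authors' implementation):

```python
import numpy as np

def dark_channel(image, win=15):
    """Per-pixel minimum over channels, then a min filter over a win x win window."""
    pad = win // 2
    mins = np.pad(image.min(axis=2), pad, mode='edge')
    h, w = image.shape[:2]
    return np.array([[mins[i:i + win, j:j + win].min() for j in range(w)]
                     for i in range(h)])

def recover(image, A, omega=0.95, t0=0.1, win=15):
    """t(x) = 1 - omega * dark_channel(I / A); then invert the haze model."""
    t = 1.0 - omega * dark_channel(image / A, win)
    t = np.maximum(t, t0)              # clamp so J does not blow up where t -> 0
    return (image - A) / t[..., None] + A
```

On a uniformly hazy patch equal to the ambient light, the recovery leaves the values unchanged, which is the expected fixed point of the model.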
3.2. Improved Dark Channel Prior
Multispectral remote sensing images usually have the characteristics of low contrast and low illumination; therefore, we invert the multispectral image and then substitute it into the haze removal model:

\[ I^{\lambda}(x) = 255 - R^{\lambda}(x), \quad \lambda = 1, \ldots, z, \]

where \(I = \{I^{\lambda}\}\) denotes the pseudofoggy image vector containing the multi-channel images \(I^{\lambda}\), \(R = \{R^{\lambda}\}\) denotes the input image vector containing the multi-channel images \(R^{\lambda}\), and \(z\) represents the number of bands of the multispectral image. The atmospheric scattering model of multispectral images can be modified as

\[ I^{\lambda}(x) = J^{\lambda}(x)\,t(x) + A^{\lambda}\,(1 - t(x)), \]

where \(J = \{J^{\lambda}\}\) represents the image vector to be restored containing the multi-channel images \(J^{\lambda}\).

For a fog-free image, the dark channel can be defined by

\[ J^{dark}(x) = \min_{y \in \Omega(x)} \Bigl( \min_{\lambda \in [1, z]} J^{\lambda}(y) \Bigr), \]

where \(J^{\lambda}\) is the intensity of any channel of the multispectral image; therefore, the atmospheric scattering model is generated as

\[ \frac{I^{\lambda}(x)}{A^{\lambda}} = t(x)\,\frac{J^{\lambda}(x)}{A^{\lambda}} + 1 - t(x). \]
In addition, we improve the guided filtering operation in the refinement of the transmittance. We use the image enhanced by the spatial-domain fractional differential algorithm as the guide image, which can better identify the edge and texture areas of the image, suppress redundant details, and make the guided filtering more accurate.
The improved dark channel image is introduced as

\[ J^{dark}_{s} = J^{dark} \ast M_{s}, \]

where \(J^{dark}_{s}\) represents the refined dark channel image obtained by the spatial-domain fractional differential algorithm, and \(J^{dark}_{s}(i, j)\) denotes the intensity value at pixel coordinate \((i, j)\) in the refined dark channel image \(J^{dark}_{s}\). That is, Equation (17) expresses the convolution of the dark channel image with the spatial-domain fractional differential mask, where \(J^{dark}\) is the original dark channel image, \(M_{s}\) represents the spatial-domain fractional mask in Table 3 of Section 3.4.1, and \(J^{dark}(i, j)\) is the intensity value at pixel coordinate \((i, j)\) in the original dark channel image.
Accordingly, the transmittance can be written as follows:

\[ \min_{y \in \Omega(x)} \min_{\lambda} \frac{I^{\lambda}(y)}{A^{\lambda}} = \tilde{t}(x) \min_{y \in \Omega(x)} \min_{\lambda} \frac{J^{\lambda}(y)}{A^{\lambda}} + 1 - \tilde{t}(x). \]

Dividing by a constant \(A^{\lambda}\), we can obtain the dark channel rule as follows:

\[ \min_{y \in \Omega(x)} \min_{\lambda} \frac{J^{\lambda}(y)}{A^{\lambda}} \rightarrow 0. \]

The estimated transmittance can be defined as

\[ t(x) = 1 - \omega \min_{y \in \Omega(x)} \min_{\lambda} \frac{I^{\lambda}(y)}{A^{\lambda}}. \]

Therefore, the final enhanced image \(J^{\lambda}\) can be expressed as

\[ J^{\lambda}(x) = \frac{I^{\lambda}(x) - A^{\lambda}}{\max(t(x), t_{0})} + A^{\lambda}. \]
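A hedged end-to-end sketch of the multiband variant follows (the band count z is arbitrary; the guided-filter refinement of the transmittance via the spatial-domain mask is omitted here, and all helper names are ours):

```python
import numpy as np

def _min_filter(img2d, win):
    """Brute-force minimum filter over a win x win window with edge padding."""
    pad = win // 2
    p = np.pad(img2d, pad, mode='edge')
    h, w = img2d.shape
    return np.array([[p[i:i + win, j:j + win].min() for j in range(w)]
                     for i in range(h)])

def enhance_multispectral(R, omega=0.95, t0=0.1, win=7):
    """R: (H, W, z) multispectral image in [0, 255]; z may exceed 3."""
    I = 255.0 - R                                    # invert to a pseudo-foggy image
    dark = _min_filter(I.min(axis=2), win)           # dark channel over all z bands
    idx = np.unravel_index(np.argmax(dark), dark.shape)
    A = np.maximum(I[idx], 1.0)                      # per-band ambient light
    t = 1.0 - omega * _min_filter((I / A).min(axis=2), win)
    J = (I - A) / np.maximum(t, t0)[..., None] + A   # restore each band
    return 255.0 - np.clip(J, 0.0, 255.0)            # invert back
```

The only structural change relative to the 3-channel case is that both minima run over all z bands.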
3.3. G-L Fractional Differential Model
The Grünwald–Letnikov (G-L) definition [39] of the fractional-order derivative comes from the operating rules of the classical integer-order derivative of a continuous function, with the fractional order derived from the integer order. Suppose the function \(f(x)\) within the interval \([a, b]\) is continuously differentiable; the first-order derivative of \(f(x)\) is generated as

\[ f'(x) = \lim_{h \to 0} \frac{f(x) - f(x - h)}{h}, \]

where \(h\) denotes the step size of variable \(x\) in the interval \([a, b]\); and the value of \(h\), which is normally set to 1, is unchanged. Furthermore, according to the theories of the integer-order derivative, the second-order derivative of the function can be deduced as follows:

\[ f''(x) = \lim_{h \to 0} \frac{f(x) - 2f(x - h) + f(x - 2h)}{h^{2}}. \]

By analogy, the nth derivative of the function can be defined by

\[ f^{(n)}(x) = \lim_{h \to 0} \frac{1}{h^{n}} \sum_{m = 0}^{n} (-1)^{m} \binom{n}{m} f(x - mh), \]

where \(m\) is an integer from \(0\) to \(n\). The integer \(n\)-order (\(n \in \mathbb{Z}^{+}\)) can be extended to the fractional \(v\)-order (\(v \in \mathbb{R}^{+}\)). When \(h \to 0\), the number of terms is taken to be at least the integer portion \([v]\) of \(v\); therefore, the G-L definition of the fractional derivative is given by [40]

\[ {}_{a}D_{x}^{v} f(x) = \lim_{h \to 0} \frac{1}{h^{v}} \sum_{m = 0}^{[(x - a)/h]} (-1)^{m} \frac{\Gamma(v + 1)}{m!\,\Gamma(v - m + 1)} f(x - mh), \]

where \(v\) is the fractional order, \([v]\) is the integer portion of \(v\), and \(\Gamma(\cdot)\) is the gamma function, which can be computed by

\[ \Gamma(z) = \int_{0}^{\infty} e^{-t} t^{z - 1}\,dt = (z - 1)!. \]
The continuous period of the unary signal \(s(t)\) is \([a, t]\). To make the limit in the above definition approach a nonzero value, when \(h \to 0\), the number of terms \(n\) needs to approach \(\infty\); therefore, the duration \([a, t]\) is equally divided according to the unit interval \(h = 1\). Let \(h = 1\); then \(n = [t - a]\). The approximate expression of the fractional-order differential of the unary signal can be deduced as follows:

\[ \frac{d^{v} s(t)}{dt^{v}} \approx s(t) + (-v)\,s(t - 1) + \frac{(-v)(-v + 1)}{2}\,s(t - 2) + \cdots + \frac{\Gamma(-v + 1)}{n!\,\Gamma(-v + n + 1)}\,s(t - n). \]

The fractional differential expression defined for the unary signal can be generalized to two-dimensional functions. Thus, the two-dimensional expressions in the x-direction and y-direction can be obtained [41]:

\[ \frac{\partial^{v} s(x, y)}{\partial x^{v}} \approx s(x, y) + (-v)\,s(x - 1, y) + \frac{(-v)(-v + 1)}{2}\,s(x - 2, y) + \cdots, \]
\[ \frac{\partial^{v} s(x, y)}{\partial y^{v}} \approx s(x, y) + (-v)\,s(x, y - 1) + \frac{(-v)(-v + 1)}{2}\,s(x, y - 2) + \cdots. \]

The nonzero coefficient values can be written in order as

\[ a_{0} = 1, \quad a_{1} = -v, \quad a_{2} = \frac{(-v)(-v + 1)}{2} = \frac{v^{2} - v}{2}, \quad \ldots, \quad a_{n} = \frac{\Gamma(-v + 1)}{n!\,\Gamma(-v + n + 1)}. \]

Therefore, the classic G-L fractional differential mask is designed based on the nonzero coefficients. The first three partial differential coefficients are used to define a 3 × 3 fractional differential mask with eight symmetrical directions, in which the 8 sub-masks correspond to the positive x-direction, negative x-direction, positive y-direction, negative y-direction, upper-left diagonal direction, lower-left diagonal direction, upper-right diagonal direction, and lower-right diagonal direction of the target pixel \(s(i, j)\) in the image \(S\). The masks of the negative x-direction, negative y-direction, and upper-right diagonal direction are illustrated in Table 1 [19].
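The coefficient sequence behind these masks is easy to generate programmatically. The following sketch (our own helper names; the recurrence a_m = a_{m-1}(m - 1 - v)/m is algebraically equivalent to the gamma-function form and avoids evaluating Gamma at negative arguments) computes the first coefficients and applies them along the negative x-direction:

```python
import numpy as np

def gl_coefficients(v, n):
    """First n+1 G-L coefficients: a_0 = 1, a_1 = -v, a_2 = (v*v - v)/2, ..."""
    coeffs = [1.0]
    for m in range(1, n + 1):
        coeffs.append(coeffs[-1] * (m - 1 - v) / m)  # a_m = a_{m-1}(m-1-v)/m
    return np.array(coeffs)

def fractional_diff_x(S, v=0.5):
    """Apply the first three coefficients along the negative x-direction."""
    a = gl_coefficients(v, 2)
    out = a[0] * S.astype(float)
    out[:, 1:] += a[1] * S[:, :-1]   # neighbour one step in the -x direction
    out[:, 2:] += a[2] * S[:, :-2]   # neighbour two steps in the -x direction
    return out
```

For v = 0.5 the coefficients are 1, -0.5, and -0.125, matching the closed-form values above.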
3.4. Bilateral Fractional Differential Model
In recent research, some fractional differential algorithms that preserve pixel autocorrelation have been proposed [20,21]. The value of each pixel is related to the values of its adjacent pixels. These algorithms integrate spatial correlation into the construction of fractional masks, but they fail to measure the spatial position relations between adjacent pixels accurately. In addition, these algorithms still fail to improve the details of flat areas.
Bilateral filtering is a nonlinear filtering method that can maintain edges, reduce noise, and smooth images. This filtering considers both the spatial distance between pixels and the differences of the pixel value in the intensity range domain while sampling. We propose a bilateral fractional differential algorithm inspired by bilateral filtering, which considers the autocorrelation of pixels in the spatial domain and the similarity between pixel values in the intensity domain, for image sharpening. In the spatial domain, we accurately calculate the spatial position relations between adjacent pixels and reconstruct the coefficients of the mask. In the intensity domain, we accurately calculate the difference between the target pixel and the neighborhood pixels and design the new fractional coefficients to highlight the detail of the flat area and preserve the edges of the image.
3.4.1. Spatial Domain Fractional Differential Model
We propose a spatial domain fractional differential model that mainly modifies the fractional differential coefficients by using spatial distance weights. Considering that the distances from the surrounding pixels to the target pixel are different, we suggest that the correlations between these pixels and the target pixel are also different. The closer a pixel is to the target pixel, the more similar it is to the target pixel. We need to assign various weights to the adjacent pixels according to the distances; that is, we give higher weights to the closer pixels and lower weights to the farther pixels; therefore, we define the spatial distance weight as

\[ w_{s}(d) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\Bigl(-\frac{(d - \mu)^{2}}{2\sigma^{2}}\Bigr), \quad d = \lVert (i, j) - (k, l) \rVert, \]

where \((i, j)\) is the two-dimensional vector coordinate of the central pixel and \((k, l)\) is the two-dimensional vector coordinate of the neighborhood pixel. \(w_{s}\) is a Gaussian function of the distance \(d\), which can ensure the maximum weight of the central pixel. In addition, it can be seen from Figure 2 that, compared with other values of \(\sigma\), the pixels farther from the central pixel are given lower weight and the pixels closer to the central pixel are given higher weight when \(\sigma = 1\); therefore, \(w_{s}\) is set to a standard Gaussian function, in which \(\mu\) is 0 and \(\sigma\) is 1.
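Under the standard-Gaussian choice (mu = 0, sigma = 1), the weights for any set of neighbourhood distances can be sketched as follows (illustrative only; the distance set shown is an example, and the actual set depends on the chosen mask geometry):

```python
import numpy as np

def spatial_weight(d, mu=0.0, sigma=1.0):
    """Gaussian weight of the centre-to-neighbour distance d."""
    return np.exp(-(d - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

# Example distances arising in a 3x3 neighbourhood and its 5x5 extension.
distances = np.array([1.0, np.sqrt(2.0), 2.0, np.sqrt(5.0), 2.0 * np.sqrt(2.0)])
weights = spatial_weight(distances)
# Closer pixels receive strictly larger weights, as the model requires.
```

Because the Gaussian is monotonically decreasing for d > mu, the ordering of weights always mirrors the ordering of distances.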
For instance, the distances from the eight surrounding pixels to the target pixel are, in order, \(\sqrt{5}\), \(\sqrt{2}\), \(1\), \(2\), \(1\), \(\sqrt{5}\), \(\sqrt{2}\), and \(1\) from left to right and from top to bottom in the negative x-direction mask, respectively; and the distances from the eight surrounding pixels to the target pixel are, in order, \(1\), \(2\), \(1\), \(\sqrt{2}\), \(\sqrt{5}\), \(2\), \(\sqrt{5}\), and \(2\sqrt{2}\) from left to right and from top to bottom in the bottom-right diagonal mask, respectively. Suppose variable \(d\) denotes the distance between a surrounding pixel and the target pixel in the 3 × 3 neighborhood. When \(d\) is sorted in ascending order as \(1\), \(\sqrt{2}\), \(2\), \(\sqrt{5}\), and \(2\sqrt{2}\), we substitute \(d\) into the function, and then the defined weights can be obtained by

\[ w_{s}(d) = \frac{1}{\sqrt{2\pi}} \exp\!\Bigl(-\frac{d^{2}}{2}\Bigr), \quad d \in \{1, \sqrt{2}, 2, \sqrt{5}, 2\sqrt{2}\}. \]
If the 3 × 3 fractional differential masks in Table 1 are used on each pixel of the original image, the pixels with a coefficient of 0 will be ignored. To make full use of the correlation between pixels, we use the above spatial weights to improve the G-L mask. It should be noted that the pixel with a constant coefficient of 1 is the target pixel. We divide the coefficient \(a_{1}\) unevenly over the pixels at a distance of 1 unit from the target pixel according to the weights, and divide the coefficient \(a_{2}\) unevenly over the pixels at a distance of 2 units from the target pixel according to the weights.
Table 2 shows the G-L masks in negative x-direction, negative y-direction, and upper-right diagonal direction on the left, as well as the normalized spatial domain masks in the negative x-direction, negative y-direction, and upper-right diagonal direction on the right.
The 3 × 3 masks in eight directions (0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315°) are obtained; the 5 × 5 masks in eight directions are extended from the 3 × 3 masks by padding with zeros centered around the target pixel; and the 5 × 5 masks in eight directions are then stacked to obtain the new 5 × 5 mask, as illustrated in Table 3.
Moreover, we divide each item in the improved mask by the sum of all the coefficients to obtain the normalized mask. Finally, we use the spatial-domain mask to filter the image and implement histogram equalization (HE) to enhance the contrast of the image with a small dynamic range:

\[ S_{s} = \mathrm{HE}(S \ast M_{s}), \]

where \(S \ast M_{s}\) is the image after spatial-domain fractional differential filtering with mask \(M_{s}\), and \(S_{s}\) is the enhanced image.
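A minimal sketch of this normalize-filter-equalize step follows (the mask contents and function names are placeholders; a real run would pass in the normalized spatial-domain mask of Table 3):

```python
import numpy as np

def hist_equalize(img, levels=256):
    """Plain histogram equalization for an 8-bit single-band image."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum() / img.size
    lut = np.round((levels - 1) * cdf).astype(np.uint8)
    return lut[img]

def spatial_enhance(band, mask):
    """Normalize the mask, filter the band with it, then stretch contrast via HE."""
    m = np.asarray(mask, dtype=float)
    m = m / m.sum()                          # divide by the sum of all coefficients
    pad = m.shape[0] // 2
    p = np.pad(band.astype(float), pad, mode='edge')
    h, w = band.shape
    out = np.array([[np.sum(p[i:i + m.shape[0], j:j + m.shape[1]] * m)
                     for j in range(w)] for i in range(h)])
    return hist_equalize(np.clip(out, 0, 255).astype(np.uint8))
```

With an identity mask the filtering stage is a no-op, which makes the equalization stage easy to verify in isolation.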
3.4.2. Intensity Domain Fractional Differential Model
Considering the difference in similarity and brightness between the neighboring pixels and the center pixel, we modify the fractional-order differential coefficients by using intensity-domain weights. When the difference between the center pixel value and an adjacent pixel value is relatively small, the change of pixel values in the area is not obvious; that is, the area is usually flat. We need to give higher weights to the points whose grey values are closer to the grey value of the center point, thereby highlighting the textural details of the flat area. When the difference between the center pixel value and adjacent pixel values is large, the change of pixel values in the area is relatively obvious, and the area contains boundary information. We need to give lower weights to the points whose grey values are farther from that of the center point so that the current pixel is less affected, which preserves the edge information. The grey distance weight formula is generated by the standard Gaussian function as follows:

\[ w_{r}(k, l) = \frac{1}{\sqrt{2\pi}} \exp\!\Bigl(-\frac{\bigl(S(i, j) - S(k, l)\bigr)^{2}}{2}\Bigr), \]

where \(S(i, j)\) is the intensity value at the two-dimensional vector coordinate of the central pixel and \(S(k, l)\) is the intensity value at the two-dimensional vector coordinate of the pixel adjacent to the central pixel.
Therefore, the two-dimensional fractional differential expression can be written as follows:

\[ \frac{\partial^{v} S(x, y)}{\partial x^{v}} \approx \sum_{m = 0}^{2} \sum_{(k, l) \in N_{m}} a_{m}\,\frac{w_{r}(k, l)}{W_{m}}\,S(k, l). \]

The nonzero coefficients are in order as follows:

\[ b_{m}(k, l) = a_{m}\,\frac{w_{r}(k, l)}{W_{m}}, \quad (k, l) \in N_{m}, \quad m = 0, 1, 2, \]

where \(N_{m}\) represents the neighborhood that contains the \(m\)th nonzero coefficient in the fractional differential mask, \(w_{r}(k, l)\) is the weight of pixel \((k, l)\) in \(N_{m}\), and \(W_{m}\) is the sum of all the weights in neighborhood \(N_{m}\).
The anisotropic filter is constructed according to (38) to obtain the intensity domain fractional differential mask, which is shown in
Table 4.
After fractional differential filtering using the above mask, we use a linear transformation and contrast limited adaptive histogram equalization (CLAHE) [42] to further enhance the global brightness and contrast of the image:

\[ S_{r} = \mathrm{CLAHE}\bigl(K \cdot (S \ast M_{r})\bigr), \]

where \(K\) is the adjustment parameter, \(S \ast M_{r}\) is the image after intensity-domain fractional differential filtering with mask \(M_{r}\), and \(S_{r}\) is the final enhanced image. The above enhanced image of the spatial-domain fractional differential filter and the enhanced image of the intensity-domain fractional differential filter are synthesized to obtain the bilateral fractional differential enhanced image.
After inverting the bilateral fractional differential image as (40), the final enhanced image can be obtained.
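The intensity-domain weight and the final fusion/inversion can be sketched as follows (the grey values are assumed scaled to [0, 1] before the standard-Gaussian weight is applied, and the equal-weight fusion of the two enhanced results is our own assumption, since the exact fusion rule lives in the referenced equations):

```python
import numpy as np

def intensity_weight(center, neighbor):
    """Standard-Gaussian weight of a grey-level difference (values in [0, 1])."""
    diff = float(center) - float(neighbor)
    return np.exp(-diff ** 2 / 2.0) / np.sqrt(2.0 * np.pi)

def bilateral_fuse_and_invert(S_s, S_r):
    """Fuse spatial- and intensity-domain results, then invert as in (40)."""
    # Equal-weight fusion is an illustrative choice, not the authors' rule.
    fused = 0.5 * (np.asarray(S_s, dtype=float) + np.asarray(S_r, dtype=float))
    return 255.0 - fused
```

Similar grey values get near-maximal weight, while large differences (edges) are down-weighted, matching the flat-area/edge behaviour described above.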
The above content is summarized, and the proposed multispectral image enhancement algorithm is described in Algorithm 1.

Algorithm 1. Multispectral image enhancement based on IDCP_BFD.
Input: An original multispectral image of the test dataset.
- (1)
Improved Dark Channel Prior step: Through (13), the multispectral image R is inverted as I for use by the subsequent algorithm. The enhanced multispectral image is generated via (14)–(22).
- (2)
Bilateral Fractional Differential step: For λ = 1 to z, do: let I^λ be any band of the multispectral image; enhance it with the spatial-domain fractional differential filter followed by HE; enhance it with the intensity-domain fractional differential filter followed by the linear transformation and CLAHE; synthesize the two enhanced results; end for. Calculate the final enhanced image S by considering (40).
Output: The enhanced result of the original image.
4. Experimental Results and Analysis
To verify the effectiveness of our proposed algorithm, we evaluate the visual effects and objective indicators of the algorithm. All experiments in the paper are performed on a personal computer with an Intel (R) Core (TM) i5-11300H 3.10 GHz CPU. Retinex-Net method uses Python 3.6 and other methods use MATLAB R2020b. Code is available at
https://1drv.ms/u/s!ApVw-FZtwy2wfkXeeKpkh2HLvtY?e=YBgDiq (3 January 2022).
By referring to the relevant works of multispectral remote sensing image enhancement [30,31,32,33,34,35,36,37,38], five well-known evaluation indexes, including the contrast, image intensity, information entropy, average gradient, and execution time, are used to evaluate the performance of different methods.
Contrast represents the difference in brightness levels between the brightest and darkest areas of an image. The greater the contrast is, the better the image quality; the smaller the contrast is, the less obvious the image variation. Contrast is defined by

\[ C = \sum_{\delta} \delta(i, j)^{2}\,P_{\delta}(i, j), \]

where \(C\) represents the image contrast, \(\delta(i, j)\) represents the intensity difference between adjacent pixels, and \(P_{\delta}(i, j)\) indicates the distribution probability of the pixels with intensity difference \(\delta\).
The image intensity denotes the average value of an image:

\[ \mu = \frac{1}{MN} \sum_{i = 1}^{M} \sum_{j = 1}^{N} S(i, j), \]

where \(\mu\) represents the image intensity, \(M\) and \(N\) represent the dimensions of the image, and \(S(i, j)\) represents the intensity of the pixel \((i, j)\).
Entropy reflects the information that an image carries:

\[ E = -\sum_{k} p_{k} \log_{2} p_{k}, \]

where \(S\) represents the input image, and \(p_{k}\) represents the proportion of pixels with gray value \(k\) in the image \(S\). The larger the information entropy, the richer the information of the image.
The average gradient represents the ability of an image to express minute details and textural changes:

\[ \bar{g} = \frac{1}{(M - 1)(N - 1)} \sum_{i = 1}^{M - 1} \sum_{j = 1}^{N - 1} \sqrt{\frac{\Delta S_{x}(i, j)^{2} + \Delta S_{y}(i, j)^{2}}{2}}, \]

where \(S(i, j)\) represents the intensity value at pixel coordinate \((i, j)\) of the image, and \(\Delta S_{x}\) and \(\Delta S_{y}\) are the intensity differences along the x-direction and y-direction, respectively. Generally, the greater the average gradient is, the richer the image hierarchy and the clearer the image.
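The four image-quality indexes can be sketched directly from these definitions (NumPy, our own function names; the contrast index here averages squared 4-neighbour grey differences, i.e., weights each squared difference by its frequency of occurrence):

```python
import numpy as np

def image_intensity(S):
    """Mean intensity over the M x N image."""
    return float(S.mean())

def entropy(S):
    """Shannon entropy (base 2) of the grey-level histogram."""
    p = np.bincount(S.ravel().astype(np.uint8), minlength=256) / S.size
    p = p[p > 0]                       # drop empty bins before taking the log
    return float(-(p * np.log2(p)).sum())

def average_gradient(S):
    """Mean of sqrt((dSx^2 + dSy^2) / 2) over the differentiable region."""
    S = S.astype(float)
    dx = S[1:, :-1] - S[:-1, :-1]      # forward difference along x
    dy = S[:-1, 1:] - S[:-1, :-1]      # forward difference along y
    return float(np.sqrt((dx ** 2 + dy ** 2) / 2.0).mean())

def contrast(S):
    """Frequency-weighted sum of squared grey differences of 4-neighbours."""
    S = S.astype(float)
    diffs = np.concatenate([(S[1:, :] - S[:-1, :]).ravel(),
                            (S[:, 1:] - S[:, :-1]).ravel()])
    return float((diffs ** 2).mean())
```

A binary checkerboard is a convenient sanity check: its histogram has two equal bins (entropy 1 bit) and every neighbouring pair differs by the full grey range.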
To confirm the performance of the proposed algorithm, we conducted experiments in three ways. In Section 4.3, we compare our bilateral fractional differential algorithm with five differential filtering algorithms, namely, the Sobel operator, the Laplacian operator, the Yipufei approach [19], the MGL approach [20], and the approach of Wadhwa et al. [21], on the premise of the proposed dark channel prior algorithm. The Yipufei approach and the MGL approach are set to the same fixed fractional order as in this paper. The approach of Wadhwa et al. uses an adaptive fractional order; thus, its parameters are taken from the original literature. In Section 4.4, we compare our IDCP algorithm with the original 3-band DCP algorithm and with the IDCP algorithm using 3, 4, 5, 6, and 7 bands, on the premise of the proposed BFD algorithm. In Section 4.5, we compare the proposed IDCP_BFD algorithm with the Retinex-Net approach [29], the LIME approach [43], and the ACSEA approach [44] to evaluate the overall algorithm. All parameters are obtained from the settings of each author.
4.1. Dataset
The Landsat-5 satellite, launched in March 1984, is the fifth satellite in the Landsat series, with an orbit altitude of 705 km. It is an Earth observation satellite that carries a thematic mapper (TM) and a multispectral scanner (MSS). To date, the images delivered by the Landsat-5 satellite have been widely used in many fields and have provided a great deal of effective information worldwide. Landsat-5 is also the longest-serving optical remote sensing satellite to have operated in orbit.
In this paper, we chose thirty multispectral remote sensing images from Changji, Xinjiang in 2011 and Turpan, Xinjiang in 2009 obtained by the Landsat-5 satellite and then produced the multispectral image dataset. Each image of the dataset contains 500 × 500 pixels. A Landsat TM image contains 7 bands ranging from visible to thermal infrared wavelengths. Band 1 is the blue band with a wavelength range from 0.45 μm to 0.52 μm and a spatial resolution of 30 m, which has strong penetration into water and can effectively distinguish soil and vegetation. Band 2 is the green band with a wavelength range from 0.52 μm to 0.60 μm and a spatial resolution of 30 m, which is relatively sensitive to the response of healthy and lush plants. Band 3 is the red band with a wavelength range from 0.63 μm to 0.69 μm and a spatial resolution of 30 m, which is the main band of chlorophyll absorption and is usually used to observe bare soil and vegetation. Band 4 is the near-infrared band with a wavelength range from 0.76 μm to 0.90 μm and a spatial resolution of 30 m, which is a general band for plants and is usually used for the analysis of crop growth. Band 5 is the shortwave infrared band with a wavelength range from 1.55 μm to 1.75 μm and a spatial resolution of 30 m, which is used to distinguish the characteristics of roads, bare soil, and water systems. Band 6 is the thermal infrared band with a wavelength range from 10.40 μm to 12.50 μm and a spatial resolution of 120 m, which is used to respond to targets emitting thermal radiation. Band 7 is the mid-infrared band with a wavelength range from 2.08 μm to 2.35 μm and a spatial resolution of 30 m, which is used to distinguish the characteristics of roads, bare soil, and water.
Figure 3,
Figure 4 and
Figure 5 present three subsets of the multispectral dataset. Each subset shows Bands 1–7 and the 7-band composite image, which represents the superposition of all bands in the original multispectral data and shows the overall visual effect of the unenhanced image. The multispectral remote sensing images reflect the ground information of different spectral bands in the same region. From
Figure 4a, we can see that the characteristics of water bodies and urban colonies are obvious in band 1.
Figure 4d and
Figure 5d show that band 4 has relatively clear terrace characteristics.
Figure 4e reflects the clear characteristics of urban settlements and roads in band 5. In
Figure 3a, the ridge and other lithologic characteristics of band 7 are more prominent; therefore, we integrate the seven bands to generate a single-band image in the subsequent enhancement processing. Compared with enhancement algorithms using only one band, the 7-band combination can make better use of the detailed information of the different bands, restore the real and rich features of ground objects, and thus improve the visual interpretation effect.
4.2. Parameter Analysis
In this part, we discuss the free parameters of the proposed algorithm, K and v, which need to be set in advance. We carry out two sets of experiments to verify the impact of these parameters on the performance of the algorithm.
The parameter K is introduced to control the degree of brightness adjustment. The fractional filtering in the intensity domain lowers the overall brightness of the image; therefore, K is used to constrain the linear transformation and raise the brightness values of the multiband images. For this reason, we vary K from 0 to 10 with 0.5 intervals and test on the multispectral image dataset of 30 images. The influence of K on image information is given in
Figure 6a. From
Figure 6a, we can see that as K increases, the information entropy gradually increases and then stabilizes. To this end, K is set to 6 in this paper to retain the abundance of image information.
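The selection of K can be reproduced with a simple parameter sweep. The exact form of the linear transformation is defined earlier in the paper; the sketch below models it as a plain gain (a hypothetical stand-in) and measures Shannon entropy over the 8-bit histogram, so the numbers are only illustrative:

```python
import numpy as np

def entropy8(img: np.ndarray) -> float:
    """Shannon entropy (bits) of an 8-bit image histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def sweep_K(filtered: np.ndarray, ks=np.arange(0.0, 10.5, 0.5)) -> dict:
    """Entropy of the brightness-adjusted image for each candidate K.

    Assumption: the brightness adjustment is modeled as a simple gain
    clip(filtered * (1 + K/10)); the paper's exact transform may differ.
    """
    return {float(k): entropy8(np.clip(filtered * (1 + k / 10.0), 0, 255))
            for k in ks}
```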
In addition, we need to set the order v of the fractional differential algorithm. In our fractional differential framework, the importance of neighborhood points changes with the order. If the order is too small, the target pixel is not prominent enough and the gray values at image edges may change abruptly, making local regions too bright or too dark. If the order is too large, the gray values may become too large or too small and exceed the displayable gray range. We vary the order v from 0 to 1 with 0.05 intervals and test on the multispectral image dataset of 30 images. From
Figure 6b, we can see that the information entropy increases with the order but decreases sharply when v approaches 1, which has been noted in Luo's work [45]; therefore, we set v to 0.8 to ensure the enhancement effect.
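The order v enters the filter through the fractional differential coefficients. A common construction is the Grünwald–Letnikov recurrence shown below (the paper's bilateral variant additionally applies spatial and range weighting on top of these base weights):

```python
def gl_coefficients(v: float, n: int) -> list:
    """First n Grünwald–Letnikov fractional differential coefficients.

    c_0 = 1 and c_k = c_{k-1} * (k - 1 - v) / k, i.e. c_k = (-1)^k * C(v, k).
    For 0 < v < 1 the first weight is 1, the second is -v, and the
    remaining weights decay toward zero.
    """
    coeffs = [1.0]
    for k in range(1, n):
        coeffs.append(coeffs[-1] * (k - 1 - v) / k)
    return coeffs

# For the chosen order v = 0.8 the leading mask weights are
# 1.0, -0.8, -0.08, -0.032, ...
```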
4.3. Comparative Experiments of Other Methods Based on Differential Filtering
In this part, we compare the proposed BFD algorithm with image enhancement methods based on differential filtering: the Sobel operator, the Laplacian operator, the Yipufei approach [
19], MGL approach [
20], and the approach of Wadhwa et al. [
21]. The enhanced visual effects of test images are shown in
Figure 7,
Figure 8 and
Figure 9. The ground objects of the original images differ across bands; that is, the same ground objects are clear in some bands and blurred in others. By synthesizing the information of all bands, an enhanced image with more complete features can be obtained.
Figure 3,
Figure 4 and
Figure 5 and
Figure 7,
Figure 8 and
Figure 9 show that most of the enhanced images can improve the clarity and texture details of the original image to some extent. Comparing
Figure 3a,e,
Figure 4a,e and
Figure 5a,e with
Figure 7a,e,
Figure 8a,e and
Figure 9a,e, we can see that although the contrasts of the images using the Sobel method and the approach of Wadhwa et al. [
21] are higher than that of the original image, the brightness adjustment effects are not obvious, and the processed images still have fuzzy edge features.
Figure 7b–d,
Figure 8b–d and
Figure 9b–d show that the results generated by the Laplacian algorithm, the Yipufei algorithm [
19], and the MGL approach [
20] can enhance the edge and textural area of the original images to a certain extent, but the overall visual effects of the images are generally dark, and the image contrast is not effectively improved. Furthermore,
Figure 9c,d show that the Yipufei algorithm [
19] and the MGL approach [
20] do not recover some details of dark areas. The proposed method shows good visual effects on three sets of images, as shown in
Figure 7f,
Figure 8f and
Figure 9f. It is evident from the enhanced images that our fractional differential method performs exceedingly well in maintaining the details. Simultaneously, our method can fully enhance the edge and textural details and improve the contrast and brightness of multispectral images.
Table 5 shows the quantitative results of each band from
Figure 3,
Figure 4 and
Figure 5 and the average quantitative results of 30 original multispectral images.
Table 6 shows the quantitative results of six differential methods used in
Figure 7,
Figure 8 and
Figure 9 and the average quantitative results of the comparison methods used in 30 multispectral images. As shown in
Table 5, band 6 has relatively high brightness but less information, while band 5 has relatively high information entropy and average gradient; however, as a whole, the original multispectral images have low contrast and weak edge retention. By comparing the data in
Table 5 and
Table 6, it can be observed that the proposed BFD method effectively alleviates these problems. The enhanced images using the BFD method have higher information entropy and average gradient than each band of the original multispectral images, especially band 5, which has the best comprehensive index; this reflects that the enhanced images contain richer information, clearer edge and textural features, and a stronger ability to preserve details. In addition, the proposed method effectively improves the global contrast, thereby enhancing the visibility of the original multispectral images; therefore, the enhanced images have moderate brightness, clear edges, and rich details, which improve on the basic features of the original multispectral images.
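The quantitative indexes used in Tables 5 and 6 can be computed from standard definitions. The sketch below implements the usual forms of these metrics (mean brightness, standard-deviation contrast, Shannon entropy, and average gradient); we assume, but cannot confirm, that the paper uses these exact definitions:

```python
import numpy as np

def image_metrics(img: np.ndarray) -> dict:
    """Standard no-reference quality indexes for an 8-bit grayscale image."""
    img = img.astype(np.float64)
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    # Average gradient over the interior where both differences exist.
    gx = np.diff(img, axis=1)[:-1, :]
    gy = np.diff(img, axis=0)[:, :-1]
    avg_grad = np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))
    return {
        "mean": float(img.mean()),                  # overall brightness
        "contrast": float(img.std()),               # global contrast (std)
        "entropy": float(-(p * np.log2(p)).sum()),  # information entropy
        "avg_gradient": float(avg_grad),            # edge/detail sharpness
    }
```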
Table 6 shows that the method proposed in this paper has a higher mean value and contrast than the other differential algorithms, which shows that the enhanced image has good global visual quality. Moreover, the proposed method achieves the highest entropy and average gradient, implying that it retains the advantages of the original image while highlighting local detail features. However, the running time of the proposed algorithm is longer than that of the other methods, apart from the approach of Wadhwa et al. [
21]. This is because our bilateral fractional filtering algorithm needs to calculate the pixel differences in the neighborhood of each pixel, which lengthens the time consumption of the algorithm. Although the BFD algorithm takes longer, the other four indexes of our method are higher than those of the five comparison methods; therefore, our algorithm better meets the needs of human vision.
4.4. Comparative Experiments Based on Dark Channel Prior
In this paper, we extend and improve the original dark channel algorithm; therefore, we further experiment with and discuss the dark channel algorithm on different band combinations. According to the average values of normalized information entropy and normalized image intensity, we rank the seven bands in descending order: Band 5, Band 6, Band 4, Band 7, Band 1, Band 3, and Band 2. We then combine the bands in this order into groups of the first three, four, five, six, and seven bands. The original DCP algorithm is used to test the 3-band combination, and the proposed IDCP algorithm is used to test all band combinations. The experimental results are given in
Figure 10 and
Figure 11. It is readily observed that the histogram of
Figure 10e makes fuller use of the whole dynamic range than the histograms of
Figure 10b,c. This shows that the 7-band algorithm used in this paper can obtain more detailed information. In addition, compared with
Figure 10a,b,
Figure 10e is closer to the normal distribution, and therefore closer to the ideal enhanced image [
46]. Moreover,
Figure 11 shows the information entropy and average gradient obtained by applying the 3-band DCP algorithm and the IDCP algorithm with 3 to 7 bands on the multispectral dataset. As shown in
Figure 11a, the 7-band algorithm has relatively higher information entropy, indicating that the proposed algorithm can maintain the image information and enhance the clarity of the images. From
Figure 11b, it is clear that the average gradient of the 7-band combined algorithm is relatively high, which shows that the image contains more detailed information. Theoretically, multispectral images exhibit the "same object, different spectra" phenomenon; that is, the same type of ground object has different gray values in different bands, so different electromagnetic wavebands reflect different features. The 7-band algorithm can use the spectral information of all bands. On the whole, the 7-band IDCP algorithm better integrates the available information and improves image quality.
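The band-generalized dark channel at the core of the IDCP experiments can be sketched as follows. This is a minimal version assuming the standard construction (per-pixel minimum over the combined bands, then a local minimum over a square patch); the window size here is a hypothetical choice, not the paper's:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def dark_channel(bands: np.ndarray, window: int = 3) -> np.ndarray:
    """Dark channel of a (B, H, W) multiband stack (valid region only).

    Generalizes the RGB dark channel prior to however many bands are
    combined (3 to 7 in the experiments): take the per-pixel minimum
    across bands, then a local minimum over a window x window patch.
    """
    per_pixel_min = bands.min(axis=0)                     # spectral minimum
    patches = sliding_window_view(per_pixel_min, (window, window))
    return patches.min(axis=(-2, -1))                     # spatial minimum
```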
4.5. Comparative Experiments of Other Methods Based on Image Enhancement
We compare our whole algorithm with recent image enhancement methods, namely the Retinex-Net approach, the LIME approach, and the ACSEA approach, as shown in
Figure 12 and
Table 7. As shown in
Figure 12a,b, the Retinex-Net approach and the LIME approach make the overall image brighter, but the sharpness is relatively low. The Retinex-Net result suffers from low contrast and loss of detail, while saturation artifacts appear in the LIME result. This is because these methods are mainly designed for low-illumination image enhancement: although they greatly improve image brightness, they cannot preserve the edge details of the image well. In addition,
Figure 12c shows that the ACSEA approach has uniform brightness but relatively blurred texture details. As shown in
Figure 12d, the proposed image enhancement method provides clearer edge details and texture features effectively.
Table 7 shows that the proposed IDCP_BFD achieves better results in terms of contrast, entropy, and average gradient than the other comparison algorithms; therefore, its enhanced images have more texture details and can significantly highlight urban settlements, roads, and other ground features. It is clear that the proposed method is superior to the other algorithms in maintaining overall image quality.
5. Discussion
Due to the influence of sensors, the atmospheric environment, and weather, multispectral remote sensing images usually suffer from problems such as distortion, blur, low contrast, and loss of details, which bring difficulties to visual interpretation. The above experiments show that the proposed algorithm can effectively improve image quality and clearly express ground features. At the same time, the superiority of the proposed IDCP_BFD algorithm is also shown by the objective evaluation. These results mainly depend on the combination of the bilateral fractional differential algorithm and the improved dark channel prior algorithm. In particular, the BFD algorithm plays an important role in improving image texture detail and visual quality. In this paper, we mainly apply the comparison methods to the Landsat-5 TM dataset. The proposed algorithm can also be used for other multispectral remote sensing datasets; however, the free parameters may change. In particular, for datasets with a high overall brightness, the brightness adjustment parameter K can be appropriately reduced. In addition, the execution time of the proposed method is relatively long, and the processing rate needs to be improved. In the future, we will further optimize the algorithm to achieve a higher image processing rate and try to use parameter adaptation to improve its robustness.