Article

Nighttime Image Stitching Method Based on Guided Filtering Enhancement

Mengying Yan, Danyang Qin, Gengxin Zhang, Ping Zheng, Jianan Bai and Lin Ma
1 Department of Electronic Engineering, Heilongjiang University, Harbin 150080, China
2 Department of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150080, China
* Author to whom correspondence should be addressed.
Entropy 2022, 24(9), 1267; https://doi.org/10.3390/e24091267
Submission received: 9 August 2022 / Revised: 5 September 2022 / Accepted: 7 September 2022 / Published: 9 September 2022
(This article belongs to the Special Issue Information Theory in Signal Processing and Image Processing)

Abstract

Image stitching refers to stitching two or more images with overlapping areas into a panoramic image through feature point matching; it plays an important role in geological survey, military reconnaissance, and other fields. Existing image stitching technologies mostly assume images captured under good lighting conditions, but in weakly lit scenes such as early morning or night, the lack of feature points degrades the stitching result and makes it difficult to meet practical application requirements. When a nighttime image contains concentrated bright areas, such as lights, together with large dark areas, image details are further lost and feature point matching becomes unreliable; the resulting perspective transformation matrix then cannot reflect the mapping relationship of the entire image, producing a poor stitching result. Therefore, an adaptive image enhancement algorithm based on guided filtering is proposed to preprocess nighttime images, and the enhanced images are used for feature registration. The experimental results show that images preprocessed by the proposed enhancement algorithm have better detail rendition and color restoration, and their quality is greatly improved. By performing feature registration on the enhanced images, the number of matched feature pairs increases, so that high-accuracy image stitching is achieved.

1. Introduction

The panorama image is a seamless wide-view image generated by stitching multiple narrow-view images with overlapping areas of the same scene using image stitching technology [1]. When stitching images, one of the source images is selected as the reference image, the other adjacent images are transformed to match the coordinate system of the reference image, and the homography between adjacent images is computed to construct the panoramic image. In recent years, image stitching has become an active research area in the field of image processing; it plays an important role in computer vision and computer graphics and has been widely used in applications such as image rendering, medical imaging, image stabilization, 2D and 3D image mapping, satellite imaging [2], soil water balance assessment [3], and disaster prevention and control [4]. Moreover, image stitching provides support for unmanned aerial vehicle (UAV) hyperspectral remote sensing technology [5].
Most mature image stitching techniques are based on clear, easy-to-process images taken in scenes with good lighting conditions, while stitching techniques for scenes with uneven lighting, such as morning and evening, are not yet perfect. High-quality images are the basis for stitching. Due to the limitations of image capture equipment and the capture environment, excessively high or low illumination can cause serious image degradation; captured nighttime images often have a low signal-to-noise ratio, low brightness, and low contrast. As shown in Figure 1, because of street lights or building lights, the captured nighttime images are not evenly illuminated: the bright regions are concentrated, while the surrounding scene is often very dark, making it difficult to observe information in the dark parts of the images and causing a serious loss of image detail [6]. When feature extraction is performed on such an image, too few feature points are extracted, so stitching of nighttime images is prone to failure. In addition, nighttime images suffer from poor visibility, weak recognizability, and serious loss of detail, so the stitched image does not meet practical application requirements. In order to improve the image quality and the stitching success rate, this paper uses image enhancement techniques to preprocess nighttime images. The main contributions are summarized as follows:
  • An adaptive enhancement algorithm based on guided filtering is proposed to obtain nighttime images with a good enhancement effect.
  • A nighttime image stitching method based on the enhancement algorithm is constructed to increase the number of matching pairs in nighttime images, so as to achieve high-accuracy image stitching.

2. Related Work

The low-illumination image enhancement algorithm mainly improves the overall contrast and brightness of an image by increasing the brightness of dark regions and suppressing the gray values of over-bright regions. As a classic problem in the field of digital image processing, low-illumination image enhancement has been developing continuously for a long time. Common enhancement methods for low-illumination color images include Retinex-theory-based methods, gray-scale transformations, and so on.
Retinex theory is a classic low-light image enhancement method. Multi-scale retinex (MSR) [7] and multi-scale retinex with color restoration (MSRCR) [8] are representative Retinex algorithms. However, these algorithms are prone to problems such as color distortion, halos, and over-enhancement. Aiming at the problem of blurred image details under low-light conditions, Liu et al. [9] proposed a low-illumination image enhancement algorithm that combines homomorphic filtering and Retinex. In the RGB color space, the original image is processed using the wavelet transform and an improved Butterworth filter to obtain a detail-enhanced image; then, in the HSV space of the original image, a color-enhanced image is obtained by processing the V channel with an improved bilateral filter function, and a high-quality image is obtained by the weighted fusion of the detail-enhanced and color-enhanced images. Tang et al. [10] proposed an illumination map estimation method based on Retinex theory. First, the initial illumination map is estimated by taking the maximum value over the R, G, and B channels, and anisotropic filtering is used to refine it. The illumination map is then processed with an adaptive gamma function; finally, the reflection image is calculated according to the Retinex model and sharpened with unsharp masking to enhance the details.
The gamma correction function is a commonly used gray-level transformation method. It is simple to implement, but the parameters usually have to be set manually according to the characteristics of the low-illumination image, so the image cannot be enhanced adaptively. Al-Ameen [11] proposed a new illumination enhancement algorithm that employs specialized logarithmic and exponential functions to process the image and fuses the two processed results through a logarithmic image processing (LIP) method. A modified S-curve function is used to improve the overall brightness of the image, and finally, low-light image enhancement is achieved by applying a linear scaling function that redistributes the image intensities to a standard dynamic range. However, the algorithm requires a manually set threshold, and it is difficult to choose an optimal enhancement parameter for different scenarios.
In recent years, intelligent algorithms have developed rapidly and have also been applied to image enhancement. Qian et al. [12] proposed an adaptive image enhancement algorithm based on visual saliency, introducing the cuckoo search algorithm and a bilateral gamma adjustment function in the Hue-Saturation-Intensity (HSI) color space. This method improves the overall brightness of the image by finding the best parameter values for different scenes. In addition, a brightness-preserving bi-histogram construction method based on visual saliency (BBHCVS) is proposed to enhance the contrast of the region of interest while maintaining the image brightness. Finally, the image is adjusted with an improved saturation stretch function, which enriches the color information of the image. Considering the characteristics of low-illumination color images, Li et al. [13] used a proposed adaptive particle swarm optimization algorithm combined with gamma correction to improve the overall brightness of the image; furthermore, to enhance the saturation of the image, the image is processed with an adaptive stretching function. This method can not only improve the contrast of low-illumination color images and avoid color distortion, but also effectively raise the brightness and recover more detail while maintaining the naturalness of the image. Processing low-light images with intelligent algorithms improves image quality; however, introducing them undoubtedly increases the complexity of the enhancement algorithm. Image filtering algorithms have also been used in image enhancement. Shan et al. [14] proposed a globally optimized linear windowed (GOLW) tone mapping algorithm, which introduces a novel high-dynamic-range compression method based on local linear filtering and realizes the enhancement of high-dynamic-range (HDR) images. Noise in low-light images cannot be ignored. Hamza and Krim [15] proposed a variational approach to maximum a posteriori estimation for image denoising, which improves the filtering performance for Gaussian noise. Ben Hamza et al. [16] also presented a variational maximum a posteriori (MAP) estimation approach that uses geometric insight to construct regularization functions yielding well-denoised images.
These algorithms are commonly validated on images from publicly available datasets rather than on actually captured low-light images. Since there is a large amount of noise in the dark regions of actually captured nighttime images, an enhancement algorithm is very likely to amplify this noise while enhancing the brightness, which affects the subsequent stitching. In addition, this paper uses enhancement techniques to preprocess images for image stitching, and if the complexity of the enhancement algorithm is too high, the stitching speed will be affected. Therefore, an adaptive image enhancement algorithm based on guided filtering is proposed. First, the V component is extracted by converting the color space, and the illumination component is then estimated by multi-scale guided filtering. The illumination component is corrected by an improved enhancement function based on the Weber–Fechner law with an introduced adaptive factor; the illumination components before and after correction are combined by a fusion technique, and the result is finally transferred back to the RGB color space. This algorithm achieves fast, adaptive nighttime image enhancement and obtains higher-quality, more detailed nighttime images, which benefits the subsequent image stitching. The algorithm framework of this paper is shown in Figure 2.
The remaining contents of this paper are arranged as follows. Section 3 presents the proposed enhancement algorithm. Section 4 presents the stitching method based on the proposed enhancement algorithm preprocessing. Section 5 contains experimental results and discussions. Finally, Section 6 presents the conclusions.

3. The Proposed Nighttime Image Enhancement Method

3.1. Space Conversion

Enhancement performed directly in the RGB color space easily causes color distortion, so this paper chooses the HSV color space, which is closer to human visual perception, to enhance the image. The RGB space of the image is converted into the HSV space [17], and three components are obtained: H (hue), S (saturation), and V (luminance). The mathematical expressions are as follows:
$$V = Y_{\max} \tag{1}$$

$$S = \begin{cases} 0, & Y_{\max} = 0 \\ \dfrac{Y_{\max} - Y_{\min}}{Y_{\max}} = 1 - \dfrac{Y_{\min}}{Y_{\max}}, & \text{otherwise} \end{cases} \tag{2}$$

$$H = \begin{cases} H', & H' \geq 0 \\ H' + 360, & \text{otherwise} \end{cases} \tag{3}$$

$$H' = \begin{cases} 60 \times \dfrac{G - B}{Y_{\max} - Y_{\min}}, & V = R \\ 60 \times \left( \dfrac{B - R}{Y_{\max} - Y_{\min}} + 2 \right), & V = G \\ 60 \times \left( \dfrac{R - G}{Y_{\max} - Y_{\min}} + 4 \right), & V = B \end{cases} \tag{4}$$

where $Y_{\max} = \max(R, G, B)$, $Y_{\min} = \min(R, G, B)$, and $H'$ can be represented by Equation (4). Through this spatial transformation, the H, S, and V components of the image are obtained, denoted $I_H(x, y)$, $I_S(x, y)$, and $I_V(x, y)$, respectively.
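For illustration, the color-space split can be sketched in Python with OpenCV. The paper's experiments were run in MATLAB, so the function name below and the use of cv2.cvtColor are illustrative assumptions, not the authors' code; for floating-point input, cv2.cvtColor performs the same conversion as Equations (1)-(4), with H in degrees and S, V in [0, 1].

```python
import cv2
import numpy as np

def rgb_to_hsv_components(rgb_u8):
    """Split an RGB image into H, S, V components as in Eqs. (1)-(4).

    rgb_u8 : uint8 RGB image of shape (H, W, 3).
    Returns H in degrees [0, 360) and S, V in [0, 1] as float32 arrays.
    """
    rgb = rgb_u8.astype(np.float32) / 255.0
    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV)  # float input -> H in [0,360), S,V in [0,1]
    I_H, I_S, I_V = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return I_H, I_S, I_V
```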

3.2. Estimation of Illumination Components Based on Guided Filtering

In Retinex-based image enhancement algorithms, Gaussian filtering and bilateral filtering are usually used as surround functions to estimate the illumination components [18]. Gaussian filtering can extract the illumination components, but its computational cost increases significantly with the size of the filtering window. The time complexity of bilateral filtering is $O(Nr^2)$, where $r$ is the filter window radius and $N$ is the total number of pixels in the image. When the window radius $r$ is large or large-resolution images are processed, the calculation time becomes excessive, so the bilateral filtering method is less efficient. In addition, when a color image is smoothed by bilateral filtering, gradient inversion occurs near the edges of objects in the image, resulting in halos, which degrade the output image and interfere with subsequent image processing [19].
In this paper, a linear guided filter with smoothing and edge-preserving properties is used to estimate the illumination components. Guided filtering draws on the idea of least squares and is implemented with box filtering and integral image techniques. Its time complexity is only $O(N)$, and its execution speed is independent of the filter window size. Compared with bilateral filtering and Gaussian filtering, it estimates the illumination component more efficiently.
Guided filtering [20] represents the output image q as a linear model related to the guide image I, the formula is as follows:
$$q_j = a_k I_j + b_k, \quad \forall j \in \omega_k \tag{5}$$
where $q_j$ is the linearly transformed gray value of image $I$ at pixel $j$ in the window $\omega_k$; $k$ is the center pixel of the window $\omega_k$; $a_k$ and $b_k$ are the linear coefficients of the guide image within the window $\omega_k$ of radius $r$ centered on pixel $k$. The cost function is set as follows:
$$E(a_k, b_k) = \sum_{j \in \omega_k} \left[ \left( a_k I_j + b_k - g_j \right)^2 + \delta a_k^2 \right] \tag{6}$$
where $\delta$ is a regularization parameter that prevents $a_k$ from becoming too large and adjusts the filtering effect of the filter. The local linear coefficients $a_k$ and $b_k$ can be solved by the least squares method:
$$a_k = \dfrac{\dfrac{1}{N_{\omega_k}} \sum_{j \in \omega_k} I_j g_j - \mu_k \bar{g}_k}{\sigma_k^2 + \delta} \tag{7}$$

$$b_k = \bar{g}_k - a_k \mu_k \tag{8}$$
where $\mu_k$ and $\sigma_k$ are the mean and standard deviation of the pixels in the window $\omega_k$ with radius $r$ and center pixel $k$, respectively; $\bar{g}_k$ is the mean value of the image to be filtered in the window $\omega_k$; $N_{\omega_k}$ is the total number of pixels in the window $\omega_k$.
When calculating the linear coefficients of each window, note that a pixel $j$ can be covered by $N_{\omega_k}$ windows at the same time, that is, each pixel is described by multiple linear functions. Therefore, when solving the output at a given pixel, all the linear function values covering that pixel are averaged, which finally gives:
$$q_j = \dfrac{1}{N_{\omega_k}} \sum_{k : j \in \omega_k} \left( a_k I_j + b_k \right) = \bar{a}_j I_j + \bar{b}_j \tag{9}$$
Taking the gradient of both sides of Equation (5) gives $\nabla q = a_k \nabla I$. It can be seen that the guided filtering model has edge-preservation characteristics: the coefficient $a_k$ determines how much of the gradient, and thus of the image edges, is preserved in the final image. When $a_k$ equals 1, the output and input images have the same gradient change; the smaller $a_k$ is, the less gradient information remains in $q_j$, the stronger the smoothing, and the more blurred the image edges become. The $\delta$ in Equation (6) is a fixed regularization parameter that prevents $a_k$ from being too large and takes a value between 0 and 1; the smaller $\delta$ is, the weaker the imposed smoothing. Therefore, guided filtering uses $a_k$ and $\delta$ together to determine the degree of edge retention and smoothing of the output image [20,21]. Guided filtering adopts a linear model to realize the filtering process, which ensures that the output image has a gradient structure similar to that of the input image, and finally achieves the edge-preserving effect.
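As a concrete illustration of Equations (5)-(9), the following NumPy/OpenCV sketch implements the standard guided filter with normalized box filters; the function and parameter names are assumptions for illustration, not the authors' code.

```python
import cv2
import numpy as np

def guided_filter(I, g, r, delta):
    """Guided filtering of image g with guide I, following Eqs. (5)-(9).

    I, g : float32 images in [0, 1]; r : window radius; delta : regularization.
    Box (mean) filters give O(N) cost, independent of the window radius.
    """
    def box(x):
        # normalized box filter = mean over the (2r+1) x (2r+1) window
        return cv2.boxFilter(x, -1, (2 * r + 1, 2 * r + 1))

    mu = box(I)                     # mean of the guide in each window
    g_bar = box(g)                  # mean of the input in each window
    var_I = box(I * I) - mu * mu    # sigma_k^2
    cov_Ig = box(I * g) - mu * g_bar

    a = cov_Ig / (var_I + delta)    # Eq. (7)
    b = g_bar - a * mu              # Eq. (8)

    a_bar = box(a)                  # average over all windows covering each pixel
    b_bar = box(b)
    return a_bar * I + b_bar        # Eq. (9)
```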
The framework of the estimation of illumination components based on guided filtering is shown in Figure 3. In this paper, the luminance component $I_V(x, y)$ is used as both the input image and the guide image. Considering that the illumination changes slowly in most areas but abruptly in local areas due to factors such as artificial lighting, two guided filtering passes are performed on the brightness component, and their outputs are combined by weighted fusion as the final illumination component estimate.
$$F_1(x, y) = GF_{r_1, \delta_1}\left( I_V(x, y), I_V(x, y) \right) \tag{10}$$

$$F_2(x, y) = GF_{r_2, \delta_2}\left( I_V(x, y), F_1(x, y) \right) \tag{11}$$

$$I_V^{gif} = \eta_1 \times F_1(x, y) + \eta_2 \times F_2(x, y) \tag{12}$$
where $GF_{r, \delta}(\cdot)$ represents the guided filter function with window radius $r$ and regularization parameter $\delta$. The weighting coefficients are $\eta_1 = \frac{r_1}{r_1 + r_2}$ and $\eta_2 = \frac{r_2}{r_1 + r_2}$, and $I_V^{gif}$ denotes the filtered illumination component.
After two guided filtering processes, the illumination component image is obtained. The processed illumination component removes texture details and retains edge information, and the effect is better than Gaussian filtering and bilateral filtering. The comparison results are shown in Figure 4.
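Building on the guided_filter() sketch above, the two-pass illumination estimate of Equations (10)-(12) could look as follows; the default parameter values mirror the settings reported in Section 5.1, and the function name is illustrative.

```python
def estimate_illumination(I_V, r1=3, r2=5, delta1=0.14, delta2=0.14):
    """Two-pass guided-filtering estimate of the illumination component (Eqs. (10)-(12)).

    I_V : V channel in [0, 1]. Relies on the guided_filter() sketch above.
    """
    F1 = guided_filter(I_V, I_V, r1, delta1)   # Eq. (10): I_V guides itself
    F2 = guided_filter(I_V, F1, r2, delta2)    # Eq. (11): I_V guides the first result
    eta1 = r1 / (r1 + r2)                      # weights derived from the window radii
    eta2 = r2 / (r1 + r2)
    return eta1 * F1 + eta2 * F2               # Eq. (12)
```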

3.3. Adaptive Brightness Enhancement

The human eye is able to distinguish different objects because they reflect light with different intensities, creating contrasts in brightness and color between them. The Weber–Fechner law describes the relationship between psychological and physical quantities and expresses how the human visual system perceives the intensity of light.
The Weber–Fechner law shows that a change in a visual stimulus must reach a certain ratio of the background intensity before the human eye can distinguish it; this ratio is called the discrimination threshold of the human eye. When the brightness change is smaller than the discrimination threshold, the human eye cannot detect it. The threshold is not fixed; it varies with the brightness of the object's background. Its mathematical relationship is:
$$\Delta S = \dfrac{\Delta V}{V} \tag{13}$$
After integrating Equation (13), the subjective visual luminance of the human eye is obtained as
$$S = k \times \log V + c \tag{14}$$
where $S$ is the perceived quantity, $k$ is a constant, $V$ is the physical brightness, and $c$ is the constant of integration.
From Equation (14), it can be seen that there is a logarithmic relationship between the subjective perception of the intensity of light by the human visual system and the intensity of the stimulus change of light.
The Weber–Fechner law shows that the human visual system performs nonlinear processing. By designing the enhancement function according to the Weber–Fechner law, the resulting image is more consistent with human vision. Because logarithmic operations are computationally expensive, Ref. [22] proposed approximating Equation (14) with Equation (15) for fitting the illumination component.
$$I_V' = \dfrac{I_V \left( 255 + k \right)}{I_V + k} \tag{15}$$
where $I_V'$ is the enhanced luminance, $I_V$ is the luminance before enhancement, and the value 255 is the gray-level range of the image. $k$ is the adjustment coefficient; the adjustment amplitude decreases as $k$ increases. Ref. [22] sets the magnitude of $k$ as the product of a weight coefficient $\alpha$ and the mean value of the S component. The weight coefficient $\alpha$ is set empirically, and the enhancement amplitude is adjusted by choosing different values of $\alpha$. Obviously, this method cannot achieve adaptive enhancement, and the enhancement effect varies significantly across different types of low-light images.
To address this problem, this paper introduces $\bar{I}_V$ as an adaptive enhancement factor, so that the magnitude of enhancement is determined by the average brightness of the image. When the image brightness is low, the brightness adjustment applied by the enhancement function is increased; when the image brightness is high, the enhancement is automatically weakened to prevent the image from being over-enhanced.
In this paper, the average luminance value is introduced as the adaptive factor of the enhancement function to realize the adaptive enhancement of the image. The adaptive enhancement function formula used is as follows:
$$I_V' = \dfrac{I_V \left( 255 + \bar{I}_S \times \bar{I}_V \right)}{\max\left( I_V, I_V^{gif} \right) + \bar{I}_S \times \bar{I}_V} \tag{16}$$
where $\bar{I}_S = \frac{1}{N} \sum_{i=1}^{N} I_S$, $\bar{I}_V = \frac{1}{N} \sum_{i=1}^{N} I_V$, and $N$ is the number of pixels of image $I_V$.
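A minimal sketch of Equation (16) follows. The paper does not fully specify the value ranges of the channels, so the sketch assumes the V channel and the illumination estimate are normalized to [0, 1] and rescales them to [0, 255] to match the gray-level constant 255; this scaling is an assumption.

```python
import numpy as np

def adaptive_enhance(I_V, I_V_gif, I_S):
    """Sketch of the adaptive brightness correction in Eq. (16).

    I_V : original V channel in [0, 1]; I_V_gif : illumination estimate in [0, 1];
    I_S : S channel in [0, 1]. V channels are rescaled to [0, 255] as an assumed
    convention so that the constant 255 in Eq. (16) applies directly.
    """
    V = I_V * 255.0
    V_gif = I_V_gif * 255.0
    k = np.mean(I_S) * np.mean(V)                       # adaptive factor: mean(S) * mean(V)
    out = V * (255.0 + k) / (np.maximum(V, V_gif) + k)  # Eq. (16)
    return np.clip(out / 255.0, 0.0, 1.0)               # back to [0, 1]
```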

3.4. Image Fusion

The image fusion technique enables the extraction of effective information from the image. In this paper, the enhanced brightness image is fused by weighted fusion and the maximum value method. The maximum value method performs fusion by comparing the size of the pixel values of the corresponding points in the image.
The maximum pixel method is used to further enhance the image when the average brightness of the input image is too low. Conversely, the average weighting method is used to prevent over-enhancement. Therefore, it is reasonable to use the average brightness value as the threshold to determine the fusion algorithm. Experiments verify that a threshold of 0.2 can achieve better enhancement effects for nighttime images.
$$I_{VF}(x, y) = \begin{cases} \max\left( I_V^{gif}(x, y),\ I_V'(x, y) \right), & \bar{I}_V \leq 0.2 \\ 0.5 \times I_V^{gif}(x, y) + 0.5 \times I_V'(x, y), & \text{otherwise} \end{cases} \tag{17}$$
where $I_{VF}(x, y)$ represents the fused image, and $I_V^{gif}$ and $I_V'$ denote the images to be fused.
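The fusion rule of Equation (17) reduces to a simple branch on the mean brightness; the sketch below assumes normalized inputs and the threshold of 0.2 mentioned above, and the function name is illustrative.

```python
import numpy as np

def fuse_luminance(I_V_gif, I_V_enh, mean_V, thresh=0.2):
    """Luminance fusion of Eq. (17).

    I_V_gif : illumination estimate; I_V_enh : adaptively enhanced V channel;
    mean_V : mean of the original normalized V channel.
    """
    if mean_V <= thresh:                       # very dark image: push brightness further
        return np.maximum(I_V_gif, I_V_enh)    # maximum-value fusion
    return 0.5 * I_V_gif + 0.5 * I_V_enh       # otherwise: average to avoid over-enhancement
```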

3.5. Saturation Enhancement

After the brightness of the image is increased, the saturation of the image is reduced to a certain extent. In order to prevent the brightness increase from affecting the saturation, an adaptive nonlinear stretching function was constructed in Ref. [12] to stretch the saturation of the image. However, its coefficient value is too small, which often leaves low-light images under-saturated and yields images with poor visual effects, while experiments show that oversaturation appears as the coefficient value increases. Therefore, an improved adaptive nonlinear stretching function is proposed to enrich the image details.
The improved stretch function used in this paper is as follows:
$$I_S' = \left[ 0.5 + 0.5 \times \dfrac{\max(R, G, B) + \min(R, G, B) + 1}{2 \times \mathrm{mean}(R, G, B) + 1} \right] I_S \tag{18}$$
where $I_S$ and $I_S'$ denote the saturation of the image before and after stretching, $\max(R, G, B)$ is the maximum pixel value over the R, G, and B color channels, $\min(R, G, B)$ is the minimum value over the three channels, and $\mathrm{mean}(R, G, B)$ is the average value over the three channels.
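A short sketch of the stretch in Equation (18), assuming an RGB image and a saturation channel normalized to [0, 1]; the final clipping to [0, 1] is an added safeguard rather than part of the paper's formulation.

```python
import numpy as np

def stretch_saturation(I_S, rgb):
    """Adaptive saturation stretch of Eq. (18).

    I_S : saturation channel in [0, 1]; rgb : float RGB image in [0, 1].
    The stretch factor is computed per pixel from the channel extrema and mean.
    """
    mx = rgb.max(axis=2)
    mn = rgb.min(axis=2)
    mean = rgb.mean(axis=2)
    factor = 0.5 + 0.5 * (mx + mn + 1.0) / (2.0 * mean + 1.0)  # Eq. (18)
    return np.clip(factor * I_S, 0.0, 1.0)
```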
Figure 5 shows the image comparison results processed by the improved saturation stretching function. It can be seen that after stretching the S component, the image has higher saturation, and the color information of the image is more abundant.

4. Image Stitching Based on the Proposed Enhancement Algorithm Preprocessing

The main steps of image stitching are image preprocessing, image registration, and image fusion. After the nighttime images are preprocessed by the enhancement algorithm, the SIFT algorithm is used to extract features, the RANSAC algorithm is used to eliminate mismatched pairs, and the transformation matrix is then solved to obtain the transformation relationship between the images. Finally, a position-weighted fusion algorithm blends the pixels of the stitched images to remove stitching seams and generate a panoramic image.
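For reference, the registration stage can be sketched with OpenCV as follows. The paper's experiments used the VLFeat SIFT implementation in MATLAB, so the OpenCV calls, the Lowe ratio threshold of 0.75, the RANSAC reprojection threshold, and the output canvas size below are illustrative assumptions rather than the authors' settings.

```python
import cv2
import numpy as np

def stitch_pair(ref_bgr, tgt_bgr, ratio=0.75, ransac_thresh=3.0):
    """Sketch of the Section 4 pipeline: SIFT features, ratio-test matching,
    RANSAC homography estimation, and perspective warping. Both images are
    assumed to be already enhanced by the method of Section 3."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(cv2.cvtColor(ref_bgr, cv2.COLOR_BGR2GRAY), None)
    kp2, des2 = sift.detectAndCompute(cv2.cvtColor(tgt_bgr, cv2.COLOR_BGR2GRAY), None)

    # Lowe's ratio test gives the rough matching set
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(des2, des1, k=2)
    good = [m for m, n in raw if m.distance < ratio * n.distance]

    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC removes mismatched pairs and yields the global homography H
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    if H is None:
        raise RuntimeError("homography estimation failed")

    h, w = ref_bgr.shape[:2]
    canvas = cv2.warpPerspective(tgt_bgr, H, (w * 2, h))  # canvas width is a rough guess
    canvas[:, :w] = ref_bgr    # naive overlay; Section 4.2 blends the overlap instead
    return canvas, H, len(good), int(inlier_mask.sum())
```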

4.1. Elimination of Mismatch Points by Ransac Algorithm

Considering the large number of mismatched pairs in the rough matching obtained by the SIFT algorithm, this paper uses the RANSAC (Random Sample Consensus) algorithm to eliminate them. The RANSAC algorithm regards the data that fit the estimated model as interior points and the data that do not as exterior points. By repeatedly estimating the model parameters from random samples and testing them, the probability of obtaining a reasonable result increases with each iteration; when the number of iterations is sufficient, the true model is estimated from the dataset.
Assuming that the global homography matrix to be solved is $H$, the error threshold is $\varepsilon$, and the number of iterations is $k$, the steps of the RANSAC algorithm for eliminating mismatched points are as follows:
  • Randomly select 4 groups of non-collinear matching point pairs from the rough matching results;
  • Solve the projective transformation matrix $H$ from the selected matched point pairs;
  • For the remaining matching pairs, apply the $H$ derived in the previous step; pairs whose reprojection error is less than the threshold $\varepsilon$ are recorded as interior points, and their number is counted;
  • If the number of current interior points is greater than that of the previous best projective transformation, record the current transformation as the optimal one;
  • If the current probability is within the range allowed by the model, or the number of iterations exceeds the specified number, the calculation is completed; otherwise, the above process is repeated until the model requirements are met or the specified number of iterations is reached.
Through the RANSAC procedure, the homography matrix of the global projective transformation is obtained while the mismatched pairs are eliminated; it represents the optimal spatial transformation relationship between the two images to be stitched.
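A minimal sketch of the iteration described above is given below; the threshold eps, the iteration count k, and the use of cv2.getPerspectiveTransform to solve H exactly from four pairs are illustrative choices (in practice, cv2.findHomography with the RANSAC flag performs the same job).

```python
import cv2
import numpy as np

def ransac_homography(pts_src, pts_dst, eps=3.0, k=2000):
    """Sketch of the RANSAC steps listed above.

    pts_src, pts_dst : (N, 2) arrays of roughly matched point coordinates.
    The collinearity check of step 1 is omitted for brevity.
    """
    n = len(pts_src)
    best_H, best_inliers = None, np.zeros(n, dtype=bool)
    src_h = np.hstack([pts_src, np.ones((n, 1))])          # homogeneous coordinates

    for _ in range(k):
        idx = np.random.choice(n, 4, replace=False)        # step 1: 4 random pairs
        H = cv2.getPerspectiveTransform(                   # step 2: exact H from 4 pairs
            pts_src[idx].astype(np.float32), pts_dst[idx].astype(np.float32))

        proj = src_h @ H.T                                  # step 3: reproject all points
        proj = proj[:, :2] / proj[:, 2:3]
        err = np.linalg.norm(proj - pts_dst, axis=1)
        inliers = err < eps                                 # interior points under threshold eps

        if inliers.sum() > best_inliers.sum():              # step 4: keep the best model
            best_H, best_inliers = H, inliers
    return best_H, best_inliers                             # step 5: finish after k iterations
```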

4.2. Fusion of Stitched Images

Image fusion is the process of combining two images to be stitched together in a common coordinate system. In order to make the resulting stitched image more natural, it is necessary to fuse the overlapping parts of the two images to be stitched together.
This paper adopts the position-weighted fusion algorithm, a gradual-in, gradual-out fusion scheme. When calculating the pixels of the fusion transition area, the pixels of the overlapping area are generated with linear weights. The formula is as follows:
$$f(x, y) = \begin{cases} f_1(x, y), & (x, y) \in f_1 \setminus f_2 \\ \omega_1 f_1(x, y) + \omega_2 f_2(x, y), & (x, y) \in f_1 \cap f_2 \\ f_2(x, y), & (x, y) \in f_2 \setminus f_1 \end{cases} \tag{19}$$
where $\omega_1$ and $\omega_2$ are the pixel weighting coefficients corresponding to the images $f_1$ and $f_2$, respectively, which control the smooth transition of the overlapping area from the left border to the right border. The calculation formula is as follows:
$$\omega_1 = \dfrac{x - L}{R - L}, \quad \omega_2 = 1 - \omega_1 \tag{20}$$
where L and R are the left and right boundaries of the overlapping region, respectively. The weight of the position-weighted fusion algorithm changes with the width of the overlapping area, so as to realize the smoothness of the pixel change in the fusion area, which can effectively improve the hard boundary effect of the stitched image, and realize the slow transition from the reference image to the target image in the overlapping part.
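A sketch of the blend in Equations (19)-(20) for images already warped onto a common canvas; the column bounds L and R of the overlap are assumed to be known, and the ramp follows Equation (20) literally, so the weight of f1 is 0 at the left border and 1 at the right border of the overlap. If f1 occupies the other side of the panorama, the ramp direction should be reversed.

```python
import numpy as np

def blend_overlap(f1, f2, L, R):
    """Position-weighted fusion of Eqs. (19)-(20).

    f1, f2 : float images on a shared canvas (same shape);
    L, R : left and right column indices bounding their overlapping region.
    Clipping the ramp to [0, 1] reproduces the non-overlap cases of Eq. (19):
    left of the overlap only f2 contributes, right of it only f1.
    """
    h, w = f1.shape[:2]
    x = np.arange(w, dtype=np.float32)
    w1 = np.clip((x - L) / float(R - L), 0.0, 1.0)   # Eq. (20)
    w1 = w1.reshape(1, w, 1) if f1.ndim == 3 else w1.reshape(1, w)
    return w1 * f1 + (1.0 - w1) * f2
```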

5. Experiments and Discussions

5.1. Experiment Setting

For the proposed image enhancement algorithm, specific images are used for validation, followed by feature matching and stitching for comparison. All experiments in this research were run on MATLAB R2018a on a PC with 1.6 GHz CPU and 8 GB RAM.
To evaluate the effectiveness of the proposed enhancement algorithm, we compare the proposed method with conventional image enhancement algorithms and state-of-the-art technologies, i.e., multi-scale retinex (MSR) [7], multi-scale retinex with color restoration (MSRCR) [8], retinex-based Multiphase algorithm (RBMP) [23], and adaptive image enhancement method (AIEM) [22]. Six representative images with uneven illumination (image #1–6) are selected from the MEF [24] and NPE [18] image sets and combined with four nighttime images actually taken as the experimental test images (image #7–10). The pictures collected in this article were taken in front of the tennis court and dormitory building of Heilongjiang University. This experiment evaluates the proposed enhancement algorithm and other comparison algorithms in terms of both subjective evaluation and objective evaluation metrics. The subjective visual evaluation of images can truly reflect the image quality from the visual perspective, and the evaluation is simple and reliable. The objective evaluation metrics judge the image quality from the specific metric level.
The relevant parameters of the algorithm are set as follows:
  • In order to balance the smoothness of the image and the edge-preserving effect, this paper sets the guided filtering parameters as $r_1 = 3$, $r_2 = 5$, $\delta_1 = 0.14$, $\delta_2 = 0.14$.
  • The AIEM algorithm uses the parameters from the authors' original paper: the three Gaussian scale parameters are $\sigma_1 = 15$, $\sigma_2 = 80$, $\sigma_3 = 250$, and the weights are set as $\alpha_1 = 0.1$, $\alpha_2 = 1$.

5.2. Subjective Evaluation of Image Enhancement

The unevenly illuminated images in the public low-light dataset are processed using different enhancement algorithms, and the results are shown in Figure 6. The brightness of the image processed by the MSR algorithm is improved, but there is an over-enhancement phenomenon, and the overall image appears white, such as the clouds in image #2 (b) and yellow houses in image #4 (b). Image details are lost due to excessive image brightness enhancement. The MSRCR algorithm can improve the brightness of the image, but the color preservation effect of the image is still poor. For example, the sky color of image #1 (c) and image #6 (c) cannot maintain the color effect in the original image. The overall color of the image is lighter, with obvious color distortion. The brightness of the dark areas of the image processed by the RBMP algorithm is not significantly improved, and the color retention ability is slightly insufficient, such as the street signs in image #1 (d) and the balloons in image #5 (d). The color preservation effect of the image processed by the AIEM algorithm is good, but the halo phenomenon occurs in the alternating light and dark areas, such as around the street lights in image #3 (e). In addition, the images processed by the AIEM algorithm have artifacts on the edges of foreground objects, which affect the visual effect of the image, such as the edges of buildings and the edges of alternating light and dark clouds in image #2 (e). The brightness of the dark area of the image processed by the algorithm proposed in this paper is improved, and there is no overexposure phenomenon, and the color preservation effect is close to that of the AIEM algorithm. Due to the introduction of guided filtering, the edge of the image processed by the proposed method is sharper, such as the edge of the house in image #4 (f) and the edge of the lighthouse in image #6 (f). The image processed by the proposed algorithm has more natural brightness processing at the intersection of light and dark, without halos and artifacts. As shown in image #1 (f), the edge of the sign is clear and the color transition is natural.
The collected nighttime images (images #7–10) were enhanced using different algorithms, and the results are shown in Figure 7. The MSR algorithm improves the overall brightness of the image, but it also brightens already-bright areas, so overexposure occurs around the light sources, as shown in the bright areas of images #6–7 (b). The MSRCR algorithm also shows overexposure; the overall picture is bluish, and the "block effect" in dark areas is obvious, which affects the visual effect of the image, such as the window areas of image #7 and image #8. Compared with the MSR and MSRCR algorithms, the enhancement effect of the RBMP algorithm is improved, and the brightness of the dark areas of the image is raised, such as the steps and trees in image #8 (d). This algorithm alleviates over-enhancement, but detail preservation in bright areas is still poor; for example, the brightness enhancement is unnatural in the window of image #8 (d) and the light-sign area in image #9 (d). The AIEM algorithm has a better effect on color retention, but in edge areas where light and dark alternate, such as in image #7 (e) and image #8 (e), there are artifacts around the windows, which affect the visual effect. In addition, the colors produced by the AIEM algorithm are unnatural, such as the color of the light sign in image #9 (e) and image #10 (e). The image processed by the proposed algorithm maintains good brightness in strongly illuminated areas and improves the brightness in dark areas. As shown in image #7 (f) and image #8 (f), the edges of the windows are sharp, and the images have moderate brightness and good color retention. The brightness and color of the lights in image #9 (f) and image #10 (f) are natural, with no over-enhancement.

5.3. Objective Evaluation of Image Enhancement

In order to objectively reflect the enhancement effect of each algorithm in processing low-light images, this paper uses average value (AVG), average gradient (AG), information entropy (IE), and peak signal-to-noise ratio (PSNR) to measure the quality of the enhanced low-light images [25,26,27].
The mean of the image is used to represent the average brightness of the image. The calculation formula is given by Equation (21).
$$\mathrm{AVG} = \dfrac{\sum_{i=1}^{M} \sum_{j=1}^{N} I(i, j)}{M \times N} \tag{21}$$
where M is the image height, N is the image width.  I ( i , j )  refers to the gray value of the pixels in row i and column j of the image.
The average gradient is used to measure the sharpness of the image. The larger the average gradient, the richer the image's gradations and the clearer the image. AG is calculated by Equation (22).
$$AG = \dfrac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \sqrt{\dfrac{\nabla f_x^2 + \nabla f_y^2}{2}} \tag{22}$$

where $\nabla f_x$ and $\nabla f_y$ represent the horizontal and vertical gradients of the $M \times N$ image, respectively.
Information entropy is an index used to measure the richness of image information. The greater the image information entropy, the better the detail performance of the image. The Information entropy (IE) is calculated by Equation (23).
$$H = -\sum_{x=1}^{k} q(x) \ln q(x) \tag{23}$$

where $q(x)$ represents the distribution density of the image gray level $x$, and $k$ is the number of gray levels of the image.
The peak signal-to-noise ratio is used to measure the degree of image distortion or the anti-noise level. The larger the value, the smaller the image distortion and the higher the anti-noise level. PSNR is calculated by Equation (24).
$$PSNR = 10 \log_{10} \dfrac{\max(I_i)^2}{MSE} \tag{24}$$

where $\max(I_i)$ is the maximum gray-level value of the input image $I_i$, and MSE is the mean square error between the enhanced image and the input image, given by Equation (25).
$$MSE = \dfrac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( x(i, j) - y(i, j) \right)^2 \tag{25}$$
where $x(i, j)$ is the gray value of the pixel in row $i$ and column $j$ of the original image, and $y(i, j)$ is the gray value of the pixel in row $i$ and column $j$ of the enhanced image.
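The four metrics can be computed as sketched below for 8-bit grayscale images. Note that Equation (23) is written with a natural logarithm, while the reported IE values are consistent with a base-2 logarithm (bits), which is what the sketch uses; this choice is an assumption.

```python
import numpy as np

def avg_ag_ie_psnr(orig_gray, enh_gray):
    """Compute AVG, AG, IE, and PSNR (Eqs. (21)-(25)) for uint8 grayscale images."""
    x = orig_gray.astype(np.float64)
    y = enh_gray.astype(np.float64)

    avg = y.mean()                                          # Eq. (21): mean brightness

    gy, gx = np.gradient(y)                                 # vertical and horizontal gradients
    ag = np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))        # Eq. (22): average gradient

    hist, _ = np.histogram(enh_gray, bins=256, range=(0, 256), density=True)
    p = hist[hist > 0]
    ie = -np.sum(p * np.log2(p))                            # Eq. (23), base-2 log assumed

    mse = np.mean((x - y) ** 2)                             # Eq. (25)
    psnr = 10.0 * np.log10((x.max() ** 2) / mse)            # Eq. (24)
    return avg, ag, ie, psnr
```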
Table 1 lists the comparison of various indicators of the 6 dataset images enhanced by different algorithms. It can be seen from Table 1 that the average value of the processed images is improved, indicating that the brightness of the image is enhanced, but because the MSR and MSRCR over-enhance the image, the image is white, so the average value is too large. The average value of the image enhanced by the proposed algorithm is moderate, which shows that the brightness of the image is adaptively enhanced, and there is no over-enhancement phenomenon, which is in line with the human eye observation effect. From the point of view of the average gradient, the five enhancement algorithms all improve the image clarity to a certain extent, among which the proposed algorithm and AIEM have better average gradient values. The image information entropy values processed by each enhancement algorithm are improved, among which AIEM and the proposed algorithm obtain relatively high values. It can be seen from the PSNR value that the AIEM and RBMP algorithms and the proposed algorithm have better effect on image noise suppression.
Table 2 lists the comparison of the evaluation indexes of the four actually captured nighttime images processed by the five different enhancement algorithms. As shown in Table 2, the mean value of the images enhanced by MSR and MSRCR is still too high, indicating that the images have been over-enhanced, and their peak signal-to-noise ratios are low. After the image is enhanced by the proposed algorithm, the mean value is increased compared with the original image, but the brightness of the enhanced image is moderate and there is no excessive enhancement. The image processed by the proposed algorithm has the highest PSNR value, indicating that its suppression of nighttime image noise is better than that of the other algorithms. Although the IE or AG values of individual images processed by AIEM are higher than those obtained by the proposed method, the comprehensive performance of our method is much better than that of the other methods.
In general, the proposed image enhancement algorithm can effectively improve image brightness and clarity. In addition, more detailed texture information of the image can be recovered, the color information is also protected, and the noise in the dark place is suppressed, resulting in a higher quality image, which is conducive to subsequent stitching.

5.4. Time Complexity

Table 3 shows the processing time comparison of each algorithm. The MSRCR algorithm requires Gaussian filtering of the logarithmic domains of the R, G, and B components of the original image to estimate the illumination components, so its complexity is higher. RBMP uses gamma-corrected sigmoid function processing for image enhancement, which is simple and less complex than the MSR algorithm. The AIEM algorithm is less time-consuming than MSR and MSRCR, but it employs multi-scale Gaussian filtering to extract the illumination components, which increases the running time, and the complexity rises sharply as the Gaussian window grows. Compared with the AIEM algorithm, the proposed algorithm uses guided filtering to estimate the illumination components, which reduces the complexity of the algorithm and shortens the processing time, laying the foundation for subsequent fast stitching.

5.5. Feature Matching

For the sake of description, image #7 and #8 are named ‘building1’, ‘building2’, image #9 and #10 are named ‘light plate1’, ‘light plate2’. After enhancing the images with different enhancement algorithms, the SIFT algorithm in the VLFeat library was used for feature extraction and matching. The comparison of the number of feature points extracted and the number of matched pairs are shown in Figure 8 and Figure 9, and the matching results are shown in Figure 10 and Figure 11.
As can be seen from the data comparison in Figure 8, the number of feature points extracted from the enhanced nighttime image increases significantly, among which the feature extraction effect of the proposed algorithm is more significant for four nighttime images. The extraction ability is relatively stable, and will not fluctuate greatly due to different images. Figure 9 shows that the number of correctly matched feature pairs is greatly improved for the images enhanced by the proposed algorithm.
Figure 10a and Figure 11a show that the matched feature points of the images before enhancement are few and mainly concentrated in regions with stronger lighting, while there are almost no successfully matched feature points in dark regions. When stitching nighttime images with uneven illumination, the feature points are clustered in bright places, which makes the error of the obtained transformation matrix large and eventually leads to poor stitching. As shown in Figure 10f and Figure 11f, with the proposed enhancement algorithm, feature points are matched even in the dark road-surface area. This experiment proves that the proposed enhancement algorithm is beneficial to the feature extraction and registration of nighttime images, and provides a guarantee for subsequent stitching.

5.6. Image Stitching

The two groups of images, 'building' and 'light plate', are stitched. The stitched images are shown in Figure 12 and Figure 13. The comparison of evaluation indicators is shown in Table 4.
After the image is preprocessed by the enhancement algorithm, the details of the image are more abundant, and the information in the dark areas of the image is enhanced. Objects originally in dark areas, such as the steps and trees in Figure 12f, can be clearly observed after enhancement. It can be observed from Figure 13a that when stitching the original images, there is an obvious ghost at the steps, which is caused by the inaccuracy of the transformation matrix due to an insufficient number of matching pairs. After stitching with the comparison enhancement algorithms, as shown in Figure 13b–e, the ghosting is reduced but not eliminated. It can be seen from Figure 13f that after enhancement by the proposed algorithm, the ghosting at the steps disappears in the stitched image, indicating that the proposed algorithm obtains matching pairs of better quality and thus solves a more accurate transformation matrix, which improves the stitching accuracy.
As indicated in Table 4, the stitched images processed by the enhancement algorithm have improved in mean, average gradient, information entropy, and signal-to-noise ratio, indicating that the quality of the stitched image can be effectively improved by using the enhancement algorithm to preprocess the image. The MSR and MSRCR algorithms over-enhance bright areas, resulting in too large average image values and dazzling images. The five enhancement algorithms have little difference in the improvement of information entropy, indicating that the enhancement algorithms all enrich the image details. The AG value of the images processed by the proposed algorithm is slightly lower than that of the AIEM algorithm. The image processed by the proposed enhancement algorithm has the highest PSNR value, indicating that the proposed algorithm can improve the brightness while suppressing noise. Overall, the proposed algorithm improves nighttime image quality and achieves better image quality, which supports practical applications.

6. Conclusions

Aiming at the problem of the poor nighttime image stitching effect, an enhancement algorithm applicable to nighttime image stitching is proposed. The V component obtained by converting the color space of the image is used to extract the lighting component of the scene via multi-scale guided filtering. Then, the correction function based on the Weber–Fechner law is used to enhance the light component, and an adaptive factor is introduced to realize the adaptive brightness enhancement. Additionally, the S component is processed using a nonlinear stretching function. Finally, a nighttime image with better enhancement effect is obtained through color space conversion.
In this paper, the proposed method is verified on selected low-illumination dataset images and the collected nighttime images, and compared with four other enhancement algorithms. The experimental results show that the images processed by the proposed enhancement algorithm have rich details, good color retention, a high signal-to-noise ratio, and rich texture information. Compared with the other algorithms, the proposed algorithm has the lowest complexity and can meet the demand of fast stitching. By performing feature matching on the enhanced images, more matching pairs can be obtained, and the proposed method achieves higher stitching accuracy. In conclusion, the proposed adaptive enhancement method based on guided filtering can meet the requirements of fast and efficient nighttime image stitching, which is valuable for the application of nighttime surveillance image stitching.

Author Contributions

Software, writing—original draft preparation, writing—review and editing, M.Y.; formal analysis, D.Q.; project administration, D.Q.; investigation, G.Z.; validation, P.Z. and J.B.; supervision, D.Q. and L.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (61771186), Outstanding Youth Project of Provincial Natural Science Foundation of China in 2020 (YQ2020F012), and Graduate Innovative Science Research Project of Heilongjiang University in 2022 (YJSCX2022-080HLJU, YJSCX2022-205HLJU).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, X.; Yu, M.; Song, Y. Optimized Seam-Driven Image Stitching Method Based on Scene Depth Information. Electronics 2022, 11, 1876. [Google Scholar] [CrossRef]
  2. Garg, A.; Dung, L.R. Stitching Strip Determination for Optimal Seamline Search. In Proceedings of the 2020 4th International Conference on Imaging, Signal Processing and Communications (ICISPC), Virtual, 23–25 October 2020; pp. 29–33. [Google Scholar] [CrossRef]
  3. Huang, Z.; Hui, B.; Sun, S. An Automatic Image Stitching Method for Infrared Image Series. In Proceedings of the 2021 International Conference on Control, Automation and Information Sciences (ICCAIS), Xi’an, China, 14–17 October 2021; pp. 887–891. [Google Scholar]
  4. Qu, Z.; Li, J.; Bao, K.H.; Si, Z.C. An unordered image stitching method based on binary tree and estimated overlapping area. IEEE Trans. Image Process. 2020, 29, 6734–6744. [Google Scholar] [CrossRef] [PubMed]
  5. Peng, Z.; Ma, Y.; Mei, X.; Huang, J.; Fan, F. Hyperspectral Image Stitching via Optimal Seamline Detection. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5. [Google Scholar] [CrossRef]
  6. Saad, N.H.; Isa, N.A.M.; Saleh, H.M. Nonlinear Exposure Intensity Based Modification Histogram Equalization for Non-Uniform Illumination Image Enhancement. IEEE Access 2021, 9, 93033–93061. [Google Scholar] [CrossRef]
  7. Rahman, Z.; Jobson, D.; Woodell, G. Multi-scale retinex for color image enhancement. In Proceedings of the 3rd IEEE International Conference on Image Processing, Lausanne, Switzerland, 16–19 September 1996; Volume 3, pp. 1003–1006. [Google Scholar] [CrossRef]
  8. Jobson, D.; Rahman, Z.; Woodell, G. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976. [Google Scholar] [CrossRef] [PubMed]
  9. Liu, F.; Xue, Y.; Dou, X.; Li, Z. Low Illumination Image Enhancement Algorithm Combining Homomorphic Filtering and Retinex. In Proceedings of the 2021 International Conference on Wireless Communications and Smart Grid (ICWCSG), Hangzhou, China, 13–15 August 2021; pp. 241–245. [Google Scholar] [CrossRef]
  10. Tang, S.; Li, C.; Pan, X. A simple illumination map estimation based on Retinex model for low-light image enhancement. In Proceedings of the 2021 14th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Online, 23–25 October 2021; pp. 1–5. [Google Scholar] [CrossRef]
  11. Al-Ameen, Z. Nighttime image enhancement using a new illumination boost algorithm. IET Image Process. 2019, 13, 1314–1320. [Google Scholar] [CrossRef]
  12. Qian, S.; Shi, Y.; Wu, H.; Liu, J.; Zhang, W. An adaptive enhancement algorithm based on visual saliency for low illumination images. Appl. Intell. 2022, 52, 1770–1792. [Google Scholar] [CrossRef]
  13. Li, C.; Liu, J.; Wu, Q.; Bi, L. An adaptive enhancement method for low illumination color images. Appl. Intell. 2021, 51, 202–222. [Google Scholar] [CrossRef]
  14. Shan, Q.; Jia, J.; Brown, M.S. Globally Optimized Linear Windowed Tone Mapping. IEEE Trans. Vis. Comput. Graph. 2010, 16, 663–675. [Google Scholar] [CrossRef]
  15. Hamza, A.B.; Krim, H. A variational approach to maximum a posteriori estimation for image denoising. In Proceedings of the International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, Sophia Antipolis, France, 3–5 September 2001; Springer: Berlin/Heidelberg, Germany, 2001; pp. 19–34. [Google Scholar]
  16. Ben Hamza, A.; Krim, H.; Zerubia, J. A nonlinear entropic variational model for image filtering. EURASIP J. Adv. Signal Process. 2004, 2004, 540425. [Google Scholar] [CrossRef]
  17. Niu, G.; Wang, C.; Meng, H. Driver Face Image Enhancement Based on Guided Filter. In Proceedings of the 2015 3rd International Symposium on Computational and Business Intelligence (ISCBI), Bali, Indonesia, 7–9 December 2015; pp. 100–104. [Google Scholar] [CrossRef]
  18. Wang, S.; Zheng, J.; Hu, H.M.; Li, B. Naturalness Preserved Enhancement Algorithm for Non-Uniform Illumination Images. IEEE Trans. Image Process. 2013, 22, 3538–3548. [Google Scholar] [CrossRef]
  19. Zhang, Y.; Huang, W.; Bi, W.; Gao, G. Colorful image enhancement algorithm based on guided filter and Retinex. In Proceedings of the 2016 IEEE International Conference on Signal and Image Processing (ICSIP), Beijing, China, 13–15 August 2016; pp. 33–36. [Google Scholar] [CrossRef]
  20. Shi, Z.; Guo, B.; Zhao, M.; Zhang, C. Nighttime low illumination image enhancement with single image using bright/dark channel prior. EURASIP J. Image Video Process. 2018, 2018, 13. [Google Scholar] [CrossRef]
  21. Liang, W.; Long, J.; Li, K.C.; Xu, J.; Lei, X. A Fast Defogging Image Recognition Algorithm based on Bilateral Hybrid Filtering. ACM Trans. Multimed. Comput. Commun. Appl. 2020, 17, 42. [Google Scholar] [CrossRef]
  22. Wang, W.; Chen, Z.; Yuan, X.; Wu, X. Adaptive image enhancement method for correcting low-illumination images. Inf. Sci. 2019, 496, 25–41. [Google Scholar] [CrossRef]
  23. Al-Hashim, M.A.; Al-Ameen, Z. Retinex-Based Multiphase Algorithm for Low-Light Image Enhancement. Trait. Signal 2020, 37, 733–743. [Google Scholar] [CrossRef]
  24. Lee, C.; Lee, C.; Lee, Y.Y.; Kim, C.S. Power-Constrained Contrast Enhancement for Emissive Displays Based on Histogram Equalization. IEEE Trans. Image Process. 2012, 21, 80–93. [Google Scholar] [CrossRef] [PubMed]
  25. Luo, J.; Zhang, Y. Infrared Image Enhancement Algorithm based on Weighted Guided Filtering. In Proceedings of the 2021 IEEE 2nd International Conference on Information Technology, Big Data and Artificial Intelligence (ICIBA), Chongqing, China, 17–19 December 2021; Volume 2, pp. 332–336. [Google Scholar] [CrossRef]
  26. Li, W.; Yi, B.; Huang, T.; Yao, W.; Peng, H. A Tone Mapping Algorithm Based on Multi-scale Decomposition. KSII Trans. Internet Inf. Syst. 2016, 10, 1846–1863. [Google Scholar] [CrossRef]
  27. Tan, S.F.; Isa, N.A.M. Exposure Based Multi-Histogram Equalization Contrast Enhancement for Non-Uniform Illumination Images. IEEE Access 2019, 7, 70842–70861. [Google Scholar] [CrossRef]
Figure 1. Non-uniform illumination image at night and its feature extraction image.
Figure 2. Overall framework of proposed enhancement method.
Figure 3. A framework for estimating illumination components based on guided filtering.
Figure 4. Comparison of filtering methods.
Figure 5. (a) Original image; (b) saturation component of original image; (c) nonlinear stretching result image; (d) saturation component of nonlinear stretching result image.
Figure 6. Comparison with various methods on the dataset image. (a) Original images. (b) MSR results. (c) MSRCR results. (d) RBMP results. (e) AIEM results. (f) Our results.
Figure 7. Comparison with various methods on the collecting image. (a) Original images. (b) MSR results. (c) MSRCR results. (d) RBMP results. (e) AIEM results. (f) Our results.
Figure 8. Comparison of the number of feature points.
Figure 9. Comparison of the number of matched feature point pairs.
Figure 10. Comparison of feature matching for ‘building’ image. (a) Original image. (b) MSR result. (c) MSRCR result. (d) RBMP result. (e) AIEM result. (f) Our result.
Figure 11. Comparison of feature matching for ‘light plate’ image. (a) Original image. (b) MSR result. (c) MSRCR result. (d) RBMP result. (e) AIEM result. (f) Our result.
Figure 12. Comparison of stitching of ‘building’ image. (a) Original image. (b) MSR result. (c) MSRCR result. (d) RBMP result. (e) AIEM result. (f) Our result.
Figure 13. Comparison of stitching of ‘light plate’ image. (a) Original image. (b) MSR result. (c) MSRCR result. (d) RBMP result. (e) AIEM result. (f) Our result.
Table 1. Objective Evaluation Metrics for dataset Images.
Image Index | Method | AVG | AG | IE | PSNR
Image#1 | Unprocessed | 106.2343 | 8.3369 | 6.9185 | –
Image#1 | MSR | 175.9547 | 8.7906 | 6.4680 | 10.3850
Image#1 | MSRCR | 168.3584 | 8.2684 | 7.0153 | 11.3675
Image#1 | RBMP | 135.2965 | 8.9986 | 6.9556 | 17.2985
Image#1 | AIEM | 147.3297 | 14.3700 | 7.5040 | 14.0416
Image#1 | OURS | 144.5374 | 13.2811 | 7.3775 | 14.5312
Image#2 | Unprocessed | 48.2641 | 2.0051 | 6.8375 | –
Image#2 | MSR | 145.1272 | 3.3525 | 6.5330 | 7.8575
Image#2 | MSRCR | 143.5546 | 3.3842 | 7.3012 | 8.0074
Image#2 | RBMP | 104.2055 | 2.7525 | 6.7742 | 12.4023
Image#2 | AIEM | 148.2635 | 5.5219 | 7.4767 | 7.3040
Image#2 | OURS | 120.7855 | 4.4995 | 7.4282 | 10.1089
Image#3 | Unprocessed | 48.2371 | 1.3825 | 6.7409 | –
Image#3 | MSR | 166.7816 | 2.4035 | 7.0180 | 6.6202
Image#3 | MSRCR | 149.7294 | 2.3618 | 7.0570 | 7.8877
Image#3 | RBMP | 119.2852 | 2.1157 | 7.2369 | 11.0260
Image#3 | AIEM | 144.3214 | 4.1107 | 7.4897 | 8.2008
Image#3 | OURS | 106.7472 | 2.9637 | 7.2551 | 12.5561
Image#4 | Unprocessed | 42.8373 | 2.1314 | 6.0142 | –
Image#4 | MSR | 155.9016 | 3.5015 | 6.7860 | 6.9969
Image#4 | MSRCR | 154.5647 | 3.4134 | 7.0101 | 7.3259
Image#4 | RBMP | 103.6729 | 3.5788 | 6.8242 | 11.9598
Image#4 | AIEM | 117.5364 | 5.9300 | 7.1468 | 10.0802
Image#4 | OURS | 152.4410 | 7.2409 | 7.2610 | 6.8059
Image#5 | Unprocessed | 41.5932 | 2.6622 | 6.0279 | –
Image#5 | MSR | 132.5138 | 5.4725 | 6.1532 | 8.6901
Image#5 | MSRCR | 128.1398 | 6.6046 | 5.7902 | 9.1541
Image#5 | RBMP | 91.5238 | 4.3335 | 7.4906 | 13.3060
Image#5 | AIEM | 95.9058 | 5.2659 | 7.4171 | 12.0187
Image#5 | OURS | 108.3902 | 6.0585 | 7.4950 | 10.1912
Image#6 | Unprocessed | 68.7553 | 2.9095 | 7.4913 | –
Image#6 | MSR | 163.0143 | 4.0134 | 7.1714 | 7.2353
Image#6 | MSRCR | 172.4398 | 3.9305 | 7.4066 | 7.9513
Image#6 | RBMP | 121.1455 | 3.5583 | 7.5343 | 13.2318
Image#6 | AIEM | 120.7220 | 5.2254 | 7.7427 | 12.5621
Image#6 | OURS | 119.3843 | 4.8976 | 7.8425 | 12.6054
Table 2. Objective evaluation index of collected images.
Image Index | Method | AVG | AG | IE | PSNR
Image#7 | Unprocessed | 41.3997 | 2.9346 | 6.4685 | –
Image#7 | MSR | 150.8871 | 2.1873 | 7.0354 | 7.0697
Image#7 | MSRCR | 134.3781 | 2.1236 | 7.1479 | 8.2082
Image#7 | RBMP | 105.6925 | 2.6895 | 7.1079 | 11.8398
Image#7 | AIEM | 123.9911 | 4.3936 | 7.3721 | 9.3905
Image#7 | OURS | 102.2740 | 3.5814 | 7.1135 | 12.1219
Image#8 | Unprocessed | 41.0353 | 2.5788 | 6.3299 | –
Image#8 | MSR | 157.8570 | 1.9959 | 6.8064 | 6.5846
Image#8 | MSRCR | 143.3581 | 1.9618 | 6.9119 | 7.6316
Image#8 | RBMP | 110.1061 | 2.4717 | 6.9227 | 11.1967
Image#8 | AIEM | 135.3832 | 4.2874 | 7.2146 | 8.3248
Image#8 | OURS | 110.3024 | 3.2543 | 6.9137 | 11.0764
Image#9 | Unprocessed | 48.8969 | 2.8815 | 6.8573 | –
Image#9 | MSR | 158.1191 | 2.4676 | 7.3004 | 7.0982
Image#9 | MSRCR | 143.4791 | 2.5297 | 7.3939 | 8.2073
Image#9 | RBMP | 109.2115 | 2.8726 | 7.0313 | 12.3069
Image#9 | AIEM | 133.7210 | 4.9648 | 7.5166 | 8.9848
Image#9 | OURS | 92.9747 | 4.0274 | 7.2507 | 14.6012
Image#10 | Unprocessed | 48.8969 | 3.9367 | 7.0113 | –
Image#10 | MSR | 165.9230 | 3.4260 | 7.2678 | 7.0697
Image#10 | MSRCR | 149.2280 | 3.4045 | 7.3891 | 8.2082
Image#10 | RBMP | 117.8824 | 4.1512 | 7.3876 | 12.0753
Image#10 | AIEM | 141.2429 | 7.0486 | 7.5878 | 8.9907
Image#10 | OURS | 99.3570 | 5.6238 | 7.4180 | 14.8902
Table 3. Comparison of different methods on computational complexity.
Image Index | Size | MSR (s) | MSRCR (s) | RBMP (s) | AIEM (s) | OURS (s)
Image#1 | 533 × 800 | 0.5618 | 1.1672 | 0.7213 | 0.5870 | 0.3854
Image#2 | 399 × 700 | 0.5216 | 0.9731 | 0.5211 | 0.3967 | 0.2399
Image#3 | 960 × 1280 | 1.1909 | 1.7947 | 1.1530 | 0.8450 | 0.9827
Image#4 | 1728 × 2592 | 3.4892 | 5.8912 | 3.2968 | 3.8341 | 3.5704
Image#5 | 339 × 512 | 0.4320 | 0.7815 | 0.5012 | 0.3445 | 0.2308
Image#6 | 340 × 512 | 0.2708 | 0.7146 | 0.5434 | 0.3635 | 0.2313
Image#7 | 1280 × 916 | 1.4676 | 2.8420 | 1.3880 | 1.2532 | 0.9239
Image#8 | 1280 × 916 | 1.4965 | 2.8091 | 1.2366 | 1.1743 | 0.9160
Image#9 | 1280 × 916 | 1.4500 | 2.8855 | 1.2483 | 1.2714 | 0.9767
Image#10 | 1280 × 916 | 1.6385 | 2.8574 | 1.2577 | 1.2801 | 0.9079
Table 4. Objective evaluation metrics of the stitched images.
Image Index | Method | AVG | AG | IE | PSNR
building | Unprocessed | 37.0289 | 1.8757 | 5.9837 | –
building | MSR | 127.5154 | 1.6164 | 6.3534 | 7.7301
building | MSRCR | 115.3205 | 1.5652 | 6.3684 | 8.9447
building | RBMP | 90.8140 | 1.8731 | 6.4962 | 11.9741
building | AIEM | 108.2368 | 3.0621 | 6.6693 | 9.5082
building | OURS | 88.4005 | 2.3571 | 6.4294 | 12.3991
light plate | Unprocessed | 36.1945 | 1.8956 | 6.1002 | –
light plate | MSR | 122.8710 | 1.9617 | 6.5805 | 8.0107
light plate | MSRCR | 109.6623 | 1.8978 | 6.5875 | 9.1976
light plate | RBMP | 83.1930 | 2.1151 | 6.6198 | 13.0639
light plate | AIEM | 109.0777 | 4.0151 | 7.0048 | 9.1821
light plate | OURS | 72.6646 | 2.9110 | 6.6502 | 13.9791
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
