Article

Thermal Infrared-Image-Enhancement Algorithm Based on Multi-Scale Guided Filtering

College of Building Environmental Engineering, Zhengzhou University of Light Industry, Zhengzhou 450001, China
* Author to whom correspondence should be addressed.
Fire 2024, 7(6), 192; https://doi.org/10.3390/fire7060192
Submission received: 11 April 2024 / Revised: 2 June 2024 / Accepted: 7 June 2024 / Published: 8 June 2024

Abstract:
Obtaining thermal infrared images with prominent details, high contrast, and minimal background noise has long been a focus of infrared technology research. To address the blurred details and low contrast typical of thermal infrared images, an enhancement algorithm based on multi-scale guided filtering is proposed. The algorithm exploits both the excellent edge-preserving property of guided filtering and the multi-scale nature of edge details in thermal infrared images: multi-scale guided filtering decomposes each thermal infrared image into detail layers at multiple scales and a base layer. Contrast-limited adaptive histogram equalization (CLAHE) is then employed to compress the grayscale range and enhance the contrast of the base layer, while detail-enhancement processing is applied to the multi-scale detail layers. Finally, the base layer and the multi-scale detail layers are linearly fused to obtain the enhanced thermal infrared image. Our experimental results indicate that, compared with other methods, the proposed method effectively enhances image contrast and enriches image detail, achieving higher image quality and stronger scene adaptability.

1. Introduction

Infrared thermal imaging technology plays a crucial role in medical examinations [1], firefighting and rescue [2], and target detection [3], among others. Infrared thermal imaging technology is characterized by its independence from illumination and its ability to operate around the clock [4]. It can compensate for the shortcomings of visible-light cameras, which struggle to function normally in special environments such as smoke, night-time, and strong light [5]. Limited by the performance constraints of infrared systems and disturbances from external environments, thermal infrared images often suffer from low overall contrast, poor visual effects, and blurred details compared to visible-light images [6]. These issues significantly impact the subsequent observation and target detection of thermal infrared images. Therefore, it becomes particularly necessary to enhance the detail features of thermal infrared images, increase image contrast, and reduce noise through algorithms.
In the field of image enhancement, histogram equalization (HE) is one of the most classical algorithms. HE redistributes the gray values of an image so that the pixels are spread more evenly across the gray levels, enlarging the average gray-level difference between pixels and thus enhancing contrast [7]. However, conventional HE may lose important target details while enhancing contrast, shift the image brightness, and amplify noise in the image background [8]. To address these shortcomings, many scholars have refined the HE algorithm. Kim [9] proposed Brightness-Preserving Bi-Histogram Equalization (BBHE), which splits the image into two sub-images and equalizes each separately; this avoids the under-enhancement of fine targets but introduces noise during enhancement. Stark [10] proposed Adaptive Histogram Equalization (AHE), which equalizes local histograms computed over the image; this better preserves image details but still amplifies image noise. To further reduce the noise generated by histogram equalization, Zuiderveld [11] proposed Contrast-Limited Adaptive Histogram Equalization (CLAHE). By limiting contrast, CLAHE prevents excessive local contrast enhancement during equalization, effectively avoiding detail loss and noise in certain regions of the image. Additionally, to eliminate the boundary effects caused by local processing, CLAHE uses bilinear interpolation for smooth transitions between adjacent processing windows, ensuring the visual continuity of the image. Zhang [12] proposed a plateau histogram equalization algorithm based on brightness segmentation: in accordance with how the human visual system perceives brightness, the image is divided into brightness regions and each region is enhanced separately, effectively balancing the display of details in the dark and bright areas of the image. These improvements have significantly mitigated the drawbacks of the traditional HE algorithm.
Although histogram equalization-based algorithms can enhance the overall contrast of an infrared image, they do not consider the edge characteristics of the image, so edges in the processed image are often poor [13]. The Unsharp Mask (UM) method, based on the idea of image layering, has therefore been widely used for thermal infrared-image enhancement. By decomposing the image into multiple levels and reconstructing it, this method effectively highlights details and improves the visual effect. Despite its significant advantages in detail enhancement, however, the linear filtering it uses can blur the edge areas of the image; this blurring produces halos around edges, making parts of the image appear unnatural and degrading the overall effect. To solve this problem, many researchers have proposed edge-preserving nonlinear filters that combine additional information to adjust the gain of different regions for better enhancement. Branchitta [14] proposed replacing the traditional linear filter with an edge-preserving bilateral filter (BF) for the layered processing of image data: the separated detail and background layers undergo Gamma transformations with different parameters and are then fused to achieve enhancement. Although bilateral filtering has achieved certain results, it suffers from a gradient-reversal problem.
Zuo et al. [15] introduced adaptive Gaussian filtering into bilateral filtering and proposed a method based on bilateral filtering and adaptive detail enhancement (BF&DDE). This method effectively suppresses the gradient reversal effect that occurs at strong edges of the image after bilateral filtering. However, it still does not completely eliminate this phenomenon and has a high computational cost. He et al. [16] proposed the Guided Filter (GF) technique, which enhances the role of the guide image, ensuring that the output image better aligns with the gradient information of the guide image. This effectively preserves edge details and avoids the occurrence of artifacts that may arise from bilateral filtering. Subsequently, Ren [17] used a weighted-variance-guided filter for image decomposition and Non-Local Means (NLM) for detail enhancement. Although this method eliminates noise, it does not effectively improve contrast. Jiang et al. [18] proposed a method for enhancing maritime infrared target images based on guided filtering. This method addresses the issue of edge blurring by using Gaussian filtering and utilizes the target feature-extraction image as the guided image, thereby effectively enhancing the detectability of maritime targets and improving image clarity. However, the process of extracting the target feature image requires a significant amount of computational time. Ouyang et al. [19] proposed an infrared-image detail-enhancement algorithm based on parameter-adaptive guided filtering. This algorithm suppresses noise through a noise mask function and improves the adaptability of guided filtering to different scenes using adaptive parameters. Although this method effectively enhances images, the guided filtering with a single parameter size is insufficient to fully represent the complex spatial structure information of infrared images.
To overcome the shortcomings of the above algorithms, to avoid under- and over-enhancement while improving the contrast and brightness of the image, and to further highlight image detail, multi-scale guided filtering for thermal infrared-image enhancement (MSGF-TIR) is proposed. The method uses multi-scale guided filtering to decompose a thermal infrared image into detail layers at small, medium, and large scales, plus a base layer. For the base layer, CLAHE is used to effectively raise the contrast level; for the detail layers, dynamic linear transformation is used to enhance edges and details. Finally, the multi-scale detail layers and the base layer are weight-fused to generate a detail-enhanced image. The algorithm flow chart is shown in Figure 1.

2. Materials and Methods

2.1. Multi-Scale Guided Filtering

Guided filtering is a filtering method based on the local linear model [20]. It assumes that any pixel has a certain linear relationship with the pixels in its local neighborhood. During the construction of the convolution kernel, a guidance image is introduced to determine the weights in the weighted average operation. The effectiveness of guided filtering largely depends on the choice of the guidance image, which can be an image related to the original image or the original image itself to be filtered. When the guidance image is chosen as the image to be processed itself, guided filtering can effectively preserve the edge information of the image. Compared to bilateral filtering, guided filtering can more effectively avoid the gradient reversal phenomenon while preserving image edges, offering superior edge preservation.
Through the guided-filtering technique, the original image $p$ can be decomposed into a base layer $q$ containing the image contours and a detail layer $e$ containing details and noise, expressed as
$$p = q + e$$
In this formula, the base layer $q$ is the result of guided filtering.
It is assumed that there is a local linear model between the base layer $q$ and the guidance image, which can be expressed as
$$q_i = a_k I_i + b_k, \quad i \in \omega_k$$
In this formula, $k$ denotes a pixel in the guidance image $I$, $\omega_k$ is the neighborhood centered at $k$ with radius $r$, and $a_k$ and $b_k$ are the parameters of the model.
Differentiating this linear relationship gives the gradient relationship
$$\nabla q = a_k \nabla I$$
This indicates that the gradient of the filtered base layer $q$ is highly consistent with the gradient of the guidance image $I$, with the parameter $a_k$ playing the decisive role in the similarity of the gradients. When $a_k$ is close to 1, the filtering preserves the gradient details of the image; when $a_k$ is close to 0, it mainly smooths the image.
The core of the guided-filter algorithm lies in the accurate calculation of the linear-model coefficients $a_k$ and $b_k$, ensuring that the filtered output image $q$ is as close as possible to the original input image $p$. This is accomplished by minimizing a cost function within the window $\omega_k$, expressed as
$$E(a_k, b_k) = \sum_{i \in \omega_k} \left[ (a_k I_i + b_k - p_i)^2 + \varepsilon a_k^2 \right]$$
In this formula, $\varepsilon$ is a regularization factor that prevents $a_k$ from becoming too large and causing overfitting. Solving for $a_k$ and $b_k$ by least squares yields
$$a_k = \frac{\frac{1}{|\omega|} \sum_{i \in \omega_k} I_i p_i - \mu_k \bar{p}_k}{\sigma_k^2 + \varepsilon}$$
$$b_k = \bar{p}_k - a_k \mu_k$$
In these formulas, $|\omega|$ is the total number of pixels within the local window, $\sigma_k^2$ and $\mu_k$ are, respectively, the variance and mean gray level of the guidance image $I$ within the window $\omega_k$, and $\bar{p}_k$ is the mean gray level of the input image $p$ within the same window $\omega_k$.
Since a given pixel appears in multiple local windows, its output value is determined jointly by all of them. To compute the final value at a pixel, the linear function values of all windows containing that pixel are first calculated and then averaged:
$$q_i = \frac{1}{|\omega|} \sum_{k :\, i \in \omega_k} (a_k I_i + b_k) = \bar{a}_i I_i + \bar{b}_i$$
When the guidance image is the original image itself, the coefficients $a_k$ and $b_k$ of the filter simplify to
$$a_k = \frac{\sigma_k^2}{\sigma_k^2 + \varepsilon}$$
$$b_k = (1 - a_k)\,\mu_k$$
In guided filtering, $\varepsilon$ is a key parameter. When $\varepsilon = 0$, the output image $q_i$ equals the original image $p_i$, i.e., no filtering is applied. As $\varepsilon$ increases, the output image $q_i$ tends toward the regional mean gray level $\mu_k$, achieving an effect similar to mean filtering. Choosing an appropriate $\varepsilon$ is therefore crucial: in the smooth areas of the image, the output $q_i$ is mainly governed by the regional mean $\mu_k$, while in high-contrast areas, the coefficient $a_k$ approaches 1, keeping the output $q_i$ consistent with the original image $p_i$ and effectively preserving the edge information in the image [21].
The edge details in thermal infrared images exist at multiple scales; if details of all scales are mixed together and enhanced uniformly, the processing lacks robustness and inevitably destroys some image information. This paper therefore adopts a layer-by-layer decomposition, as shown in Figure 2. The thermal infrared image is processed by multi-scale guided filtering to obtain the base layers B1, B2, and B3, from which the edge details at small scale (D1), medium scale (D2), and large scale (D3) are extracted for subsequent enhancement.
Using the multi-scale layer-by-layer decomposition method shown in Figure 2, the thermal infrared image I is decomposed into detail layers of small, medium, and large scales, as well as a base layer, with the decomposition effect shown in Figure 3.
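As an illustrative sketch (not the authors' code), the self-guided filter derived above and the layer-by-layer decomposition of Figure 2 might be implemented as follows; the radii (2, 4, 8) and the regularization ε = 0.01 are assumed values, since the paper does not specify them.

```python
import numpy as np

def box_filter(img, r):
    """Mean over a (2r+1) x (2r+1) window via an integral image, edge-padded."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode='edge')
    s = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    s = np.pad(s, ((1, 0), (1, 0)))           # zero row/column for the integral image
    h, w = img.shape
    win = s[k:k + h, k:k + w] - s[:h, k:k + w] - s[k:k + h, :w] + s[:h, :w]
    return win / (k * k)

def guided_filter(p, r, eps):
    """Self-guided filter (guidance I = p): a_k = var / (var + eps),
    b_k = (1 - a_k) * mu_k, with coefficients averaged over covering windows."""
    mu = box_filter(p, r)
    var = box_filter(p * p, r) - mu ** 2
    a = var / (var + eps)
    b = (1.0 - a) * mu
    return box_filter(a, r) * p + box_filter(b, r)

def decompose(img, radii=(2, 4, 8), eps=0.01):
    """Layer-by-layer decomposition into detail layers D1 (small), D2 (medium),
    D3 (large) and the final base layer B3, so img == D1 + D2 + D3 + B3."""
    base = img.astype(np.float64)
    details = []
    for r in radii:
        smoothed = guided_filter(base, r, eps)
        details.append(base - smoothed)        # detail = previous base - new base
        base = smoothed
    return details, base
```

By construction, summing the detail layers and the final base layer reconstructs the input image exactly, which is what makes the later weighted fusion well defined.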

2.2. Enhancement of the Base Layer Based on CLAHE

The CLAHE algorithm is an extension of the AHE algorithm, primarily aimed at addressing the noise and over-enhancement issues that may arise when the AHE algorithm processes images [22]. The core of the CLAHE algorithm involves limiting the contrast to redistribute the histogram of the image and performing histogram equalization on each small block separately. Finally, these blocks are reconnected through bilinear interpolation, thereby eliminating the blocky effect. The steps are as follows:
  1. Segment the original image into $n \times n$ non-overlapping, equal-sized blocks.
  2. Calculate the histogram of each block separately.
  3. Compute the clipping limit $T$:
$$T = c \times \frac{n_x n_y}{K}$$
In this formula, $n_x$ and $n_y$ are the numbers of pixels in the $x$ and $y$ directions within a block, $K$ is the number of gray levels, and $c$ is the clipping coefficient.
  4. Clip the histogram and redistribute the pixels. In each block, clip the histogram $h(x)$ according to the clipping limit, and then distribute the clipped pixels evenly across the gray levels, as shown in Figure 4.
$$S = \sum_{x=0}^{K-1} \max\left(h(x) - T,\ 0\right)$$
$$A = \frac{S}{K}$$
In these formulas, $S$ is the total number of pixels exceeding the clipping limit $T$, $A$ is the number of pixels allocated on average to each gray level, and $h'(x)$ is the histogram after redistribution. Then, we have:
$$h'(x) = \begin{cases} T + A, & h(x) \geq T \\ h(x) + A, & h(x) < T \end{cases}$$
  5. Perform histogram equalization on each sub-block after the pixels have been redistributed.
  6. To avoid block artifacts in the processed image, determine the value of each pixel in each sub-block by interpolation, as shown in Figure 5.
The pixels in the four corner areas marked in black are calculated using the mapping function of their own sub-blocks. For the pixels in the four edge areas marked in white, the mapping functions of the two adjacent sub-blocks are applied, followed by linear interpolation between the two resulting mapping values:
$$f(x, y) = \frac{x_2 - x}{x_2 - x_1} f_1 + \frac{x - x_1}{x_2 - x_1} f_2$$
In this formula, $f(x, y)$ is the pixel value being calculated, $f_1$ and $f_2$ are the mapping values obtained from the mapping functions of the two adjacent sub-blocks, and $(x_1, y_1)$ and $(x_2, y_2)$ are the center pixel coordinates of the two adjacent sub-blocks.
The pixel values in the center area marked in gray are transformed using the mapping functions of the four surrounding sub-blocks, and the four resulting mapping values are combined by bilinear interpolation:
$$f(x, y) = \frac{(x_2 - x)(y_2 - y)}{(x_2 - x_1)(y_2 - y_1)} f_1 + \frac{(x_2 - x)(y - y_1)}{(x_2 - x_1)(y_2 - y_1)} f_2 + \frac{(x - x_1)(y_2 - y)}{(x_2 - x_1)(y_2 - y_1)} f_3 + \frac{(x - x_1)(y - y_1)}{(x_2 - x_1)(y_2 - y_1)} f_4$$
In this formula, $f_1$, $f_2$, $f_3$, and $f_4$ are the mapping values obtained for the point from the mapping functions of the four surrounding sub-blocks, and $(x_1, y_1)$, $(x_2, y_2)$, $(x_3, y_3)$, and $(x_4, y_4)$ are, respectively, the center pixel coordinates of the four surrounding sub-blocks.
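The histogram clipping and redistribution of step 4 can be sketched as follows; this is an illustrative fragment, not the authors' implementation, and the histogram values and limit in the example are arbitrary.

```python
import numpy as np

def clip_histogram(hist, T):
    """Clip a block histogram at limit T and spread the excess S evenly,
    i.e. h'(x) = min(h(x), T) + A with A = S / K (Sec. 2.2, step 4)."""
    excess = np.maximum(hist - T, 0.0).sum()          # S: pixels above the limit
    return np.minimum(hist, T) + excess / hist.size   # min(h, T) + A

hist = np.array([10.0, 3.0, 0.0, 7.0])   # toy 4-bin histogram (K = 4)
print(clip_histogram(hist, 5.0))          # [6.75 4.75 1.75 6.75]
```

Note that the total pixel count is preserved: the clipped mass is returned to the histogram, only spread uniformly, which is what bounds the slope of the equalization mapping.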

2.3. Detail Layer Enhancement Based on Dynamic Linear Enhancement

Since the detail layers retain only texture and edge information, without the wide-range brightness distribution of the image, applying CLAHE to them as well could produce inconsistent brightness and contrast changes across different areas of the image. To overcome this, a new dynamic enhancement method is proposed that adjusts contrast according to the image's average brightness, thereby better maintaining the image's naturalness and global consistency. The steps of the algorithm are as follows:
(1) Average brightness calculation. Calculate the average intensity $\mu$ of the input image $I$:
$$\mu = \frac{1}{N} \sum_{i=1}^{N} I_i$$
In this formula, $N$ is the total number of pixels in the image, and $I_i$ is the brightness value of the $i$-th pixel.
(2) Contrast adjustment. Adjust the contrast of the image using the contrast factor $\alpha$: a factor greater than 1 enhances contrast, while a factor less than 1 reduces it. The adjusted image $I_{\mathrm{adj}}$ is calculated as
$$I_{\mathrm{adj}}(x, y) = \alpha \times \left( I(x, y) - \mu \right) + \mu$$
In this formula, $I(x, y)$ is the original brightness value of the pixel at coordinates $(x, y)$, and $I_{\mathrm{adj}}(x, y)$ is the adjusted brightness value of the corresponding pixel. Adjusting pixel brightness relative to the image's average intensity changes the contrast of the image.
(3) Return the adjusted image. After the above steps, an image with adjusted contrast is obtained, in which the intensity of each pixel has been dynamically linearly transformed according to the user-supplied contrast factor, offering a degree of flexibility.
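The three steps above amount to an affine stretch about the mean; a minimal sketch, in which the default contrast factor is an assumed value:

```python
import numpy as np

def dynamic_linear_enhance(detail, alpha=1.5):
    """I_adj(x, y) = alpha * (I(x, y) - mu) + mu: stretch each pixel about the
    layer's mean brightness mu; alpha > 1 boosts contrast, alpha < 1 reduces it."""
    mu = detail.mean()                  # step (1): average brightness
    return alpha * (detail - mu) + mu   # step (2): contrast adjustment
```

Because the transform is centered on $\mu$, the mean brightness of the layer is preserved while deviations from it are scaled, which is why the result stays globally consistent.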

2.4. Image Fusion

The primary objective of the image-detail-enhancement algorithm is to address issues such as low contrast and poor detail in traditional thermal infrared images. In this process, the CLAHE algorithm effectively compresses the image’s grayscale range by limiting contrast enhancement during histogram equalization, thereby improving the overall visibility of the image. Dynamic linear enhancement is applied to multi-scale detail layers to intensify the edge detail information of the image and enhance the presentation of the detail layer. Through weighted fusion of the base layer processed with CLAHE and the enhanced multi-scale detail layers, an enhanced image is obtained. The weighted fusion formula is as follows:
$$I = p I_B + q I_D$$
In this formula, $I$ is the final fused output image, $I_B$ is the enhanced base layer, $I_D$ is the enhanced multi-scale detail layer, and $p$ and $q$ are fusion coefficients that can be selected according to the scenario and requirements.
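A sketch of the weighted fusion; the fusion coefficients here and the final clip to the 8-bit display range are illustrative choices, not values taken from the paper:

```python
import numpy as np

def fuse(base_enh, detail_layers, p=1.0, q=1.5):
    """I = p * I_B + q * I_D, where I_D is the sum of the enhanced detail
    layers; the result is clipped back to the valid 8-bit gray range."""
    fused = p * base_enh + q * sum(detail_layers)
    return np.clip(fused, 0.0, 255.0)
```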

3. Results

To verify and evaluate the effectiveness of the proposed algorithm in enhancing thermal infrared images, we selected three groups of thermal infrared images for experimental analysis. The first group of scenes was captured with an iRay T3-Pro camera, while the second and third groups of scenes were derived from the FLIR dataset [24]. The experiment was conducted on a Windows 10 operating system, using hardware configured with an AMD Ryzen 7 5800H processor equipped with Radeon Graphics, clocked at 3.20 GHz. We compared these three groups of thermal infrared images from both subjective and objective perspectives. The subjective evaluation was based on direct visual observation of the processed images by the human eye. The objective assessment was conducted using image quality assessment metrics, including the Peak Signal-to-Noise Ratio (PSNR), Information Entropy (IE), Average Gradient (AG), and the Perception-based Image Quality Evaluator (PIQE) [25]. The experiment compared the algorithm proposed in this paper with HE, Detail Enhancement based on Guided Filtering (GF&DDE) [26], and Detail Enhancement based on Bilateral Filtering (BF&DDE) [14].
  • Peak Signal-to-Noise Ratio
PSNR is a metric commonly used to evaluate the quality of images and videos. It measures the quality loss of an image by comparing the original image to a compressed or processed image. Specifically, PSNR is the logarithmic value of the ratio between the peak signal energy and the average noise energy. The higher the PSNR value, the better the image quality [27]. The calculation formula is as follows:
$$\mathrm{PSNR} = 10 \lg \frac{n^2 \times M \times N}{\sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \left( I(i, j) - I_m(i, j) \right)^2}$$
In this formula, $I(i, j)$ is the pixel value of the original image, $I_m(i, j)$ is the processed pixel value, and $n$ is the number of gray levels in the image.
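The formula above reduces to 10·lg(peak² / MSE); a sketch in which the peak gray value is assumed to be 255, as for 8-bit images:

```python
import numpy as np

def psnr(ref, proc, peak=255.0):
    """PSNR in dB: 10 * lg(peak^2 / MSE); higher means less distortion."""
    mse = np.mean((ref.astype(np.float64) - proc.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```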
  • Information Entropy
IE is a metric used to measure the amount of information and the complexity of details contained in an image [28]. Based on information theory, it determines the average information content of an image by calculating the probability distribution of its grayscale levels or color values. The higher the value of information entropy, the more information the image contains. The formula for calculating information entropy is as follows:
$$H(x) = -\sum_{i=1}^{n} p(x_i) \log_2 p(x_i)$$
In this formula, $n$ is the number of gray levels in the image, and $p(x_i)$ is the probability of occurrence of gray level $i$ in the image.
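A direct translation of the entropy formula over an 8-bit gray-level histogram (an illustrative sketch):

```python
import numpy as np

def information_entropy(img):
    """Shannon entropy of the gray-level distribution of an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                        # skip empty gray levels (0 * log 0 = 0)
    return -np.sum(p * np.log2(p))
```

For instance, an image split evenly between two gray levels has an entropy of exactly 1 bit, while a constant image has an entropy of 0.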
  • Average Gradient
The average gradient can be used to assess the edge information and clarity of an image [29]. It represents the average rate of change in grayscale or color between adjacent pixels in the image, reflecting the variation in small details in multiple dimensions [30]. The higher the value, the greater the clarity of the image details. The specific calculation steps are as follows:
Assume the image has $M$ rows and $N$ columns; the average gradient can then be expressed as
$$G_{\mathrm{avg}} = \frac{1}{M \times N} \sum_{i=1}^{M-1} \sum_{j=1}^{N-1} \sqrt{\frac{\left( H(i+1, j) - H(i, j) \right)^2 + \left( H(i, j+1) - H(i, j) \right)^2}{2}}$$
In this formula, $H(i, j)$ is the grayscale value of the image at the $i$-th row and $j$-th column.
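The average gradient can be computed with forward differences; in this sketch the mean is taken over the $(M-1) \times (N-1)$ interior positions where both differences exist, which is one common normalization choice:

```python
import numpy as np

def average_gradient(img):
    """Mean of sqrt(((dH/di)^2 + (dH/dj)^2) / 2) using forward differences."""
    f = img.astype(np.float64)
    di = f[1:, :-1] - f[:-1, :-1]       # H(i+1, j) - H(i, j)
    dj = f[:-1, 1:] - f[:-1, :-1]       # H(i, j+1) - H(i, j)
    return np.mean(np.sqrt((di ** 2 + dj ** 2) / 2.0))
```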

3.1. Subjective Evaluation

The enhancement effects of various algorithms on Scene 1 are shown in Figure 6. Scene 1 features buildings, with the original image being dark and lacking in contrast. After processing with HE, the overall image becomes overexposed and whitened. The images processed with GF&DDE and BF&DDE have improved brightness, but the contrast in dark areas remains low. The contrast enhancement of the algorithm proposed in this paper is significant without overexposure; the details in the dark areas are clear and visible, achieving a better result.
The enhancement effects of the various algorithms on Scene 2 are shown in Figure 7. Scene 2 includes information about vehicles and trees. The image processed with HE shows overexposure, especially in the main body of the vehicle. GF&DDE, BF&DDE, and the algorithm proposed in this paper have similar effects on Scene 2, but the algorithm proposed in this paper performs better on the vehicle headlights and emblems, with clearer object contours.
The enhancement effects of the various algorithms on Scene 3 are shown in Figure 8, which includes pedestrians, trees, and buildings. After processing with HE, it is difficult to distinguish pedestrians from the background in the image. Both GF&DDE and BF&DDE increase the brightness of the image but do not effectively enhance the contrast, making it hard to distinguish pedestrians from background objects, and the overall image is too bright. The algorithm proposed in this article improves the details in dark areas (trees, grass) and avoids increasing the overall brightness of the image, making it easier to distinguish people from the background and providing a better visual effect.
Overall, Figure 6a, Figure 7a, and Figure 8a respectively depict scenes of high-rise buildings, cars, and a person walking. After processing with the proposed algorithm, the boundaries between windows and buildings in the first scene become clearer; the contours and details of the headlights and body in the second scene are more distinct, and the texture details of the background trees are preserved; and in the third scene, the contour of the person is clear, while the surrounding buildings and trees also exhibit distinct texture features. Each image exhibits different textures and details, indicating that the proposed algorithm performs well across a variety of scenarios.

3.2. Objective Assessment

The quality of algorithms cannot be solely judged by subjective human evaluation [31]; objective evaluation parameters are also necessary, such as PSNR, AG, and IE. PSNR is used to measure image quality, IE measures the amount of information contained in the image, and AG measures the clarity of the image. Although these metrics have their advantages in evaluating image-enhancement effects, they do not fully reflect the perceptual quality of the image. Therefore, this paper introduces PIQE as a complementary indicator. While AG and IE focus on specific aspects of the image, such as clarity and information content, PIQE provides a holistic quality assessment approach. It is based on human visual perception to evaluate image quality and can better reflect the subjective quality of the image [25]. PIQE considers the degree of distortion in the image, such as noise, blur, and block effects, making the evaluation results more aligned with the actual perception of the human eye. The evaluation results are presented in Table 1.
From the PSNR metric, in all scenarios, our algorithm and the BF&DDE algorithm show higher PSNR values, indicating that they are better at preserving image quality and information. The HE algorithm has the lowest PSNR values, suggesting its reconstructed image quality is relatively low. Looking at the IE metric, our algorithm exhibits higher values, indicating that it can retain more information and details in the image. From the AG metric, due to the HE algorithm’s over-enhancement, it results in a too-high average gradient, whereas our algorithm generally performs well in terms of average gradient. Regarding the PIQE metric, our algorithm achieves the best results in most cases.
Overall, our algorithm performs well in most cases, with higher PSNR, IE, and AG values, and lower PIQE values. This indicates that for different types of thermal infrared images, our method demonstrates excellent enhancement effects with high robustness. Subjective and objective experimental data prove that our method not only improves the overall image quality but also optimizes the texture feature differences in various regions of different images, showcasing superior image-enhancement performance.

4. Discussion

To further enhance the brightness and contrast of thermal infrared images while avoiding detail loss, this paper proposes multi-scale guided filtering for thermal infrared-image enhancement, building on the foundation of guided filtering. Exploiting the multi-scale character of edge details, the image is decomposed by guided filtering into small-, medium-, and large-scale detail layers and a base layer. The base layer undergoes CLAHE to improve contrast and brighten dark areas, while the multi-scale detail layers undergo dynamic linear enhancement to highlight texture edges. Experimental verification shows that thermal infrared images enhanced by the proposed method have a better visual effect and perform well on all evaluation indicators, with higher PSNR, IE, and AG values and lower PIQE values. Compared with the other algorithms, the proposed method delivers better thermal infrared-image-enhancement performance.

Author Contributions

Conceptualization, H.L. and S.L.; methodology, S.W. (Shuaijun Wang); software, S.L.; validation, H.W.; formal analysis, H.W.; investigation, S.W. (Shupei Wen); resources, H.L. and S.L.; data curation, S.W. (Shuaijun Wang); writing—original draft preparation, H.L.; writing—review and editing, S.L. and F.L.; visualization, H.L.; supervision, S.L.; project administration, H.L.; funding acquisition, H.L. and S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Science and Technology Research Project of Henan Province (232102321021, 232102211050).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data underlying the results presented in this paper, which were collected at H.L.’s Laboratory, are not publicly available at this time but may be obtained from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. She, X.; Lu, H.; Liu, Q.; Xie, P.; Xia, Q. Dermatological infrared thermal imaging with human-machine interaction image diagnostics interface using DenseNet. J. Radiat. Res. Appl. Sci. 2024, 17, 100826.
  2. Hahn, B. Research and Conceptual Design of Sensor Fusion for Object Detection in Dense Smoke Environments. Appl. Sci. 2022, 12, 11325.
  3. Jiang, C.; Ren, H.; Ye, X.; Zhu, J.; Zeng, H.; Nan, Y.; Sun, M.; Ren, X.; Huo, H. Object detection from UAV thermal infrared images and videos using YOLO models. Int. J. Appl. Earth Obs. Geoinf. 2022, 112, 102912.
  4. Yeom, S. Thermal Image Tracking for Search and Rescue Missions with a Drone. Drones 2024, 8, 53.
  5. Zhang, Y.; Zhai, B.; Wang, G.; Lin, J. Pedestrian Detection Method Based on Two-Stage Fusion of Visible Light Image and Thermal Infrared Image. Electronics 2023, 12, 3171.
  6. Zhang, Q.; Li, Y.; Yang, L.; Zhang, Y.; Li, Z.; Chen, X.; Han, J. Thermal-visible stereo matching at night based on Multi-Modal Autoencoder. Infrared Phys. Technol. 2024, 136, 105010.
  7. Han, Y.; Chen, X.; Zhong, Y.; Huang, Y.; Li, Z.; Han, P.; Li, Q.; Yuan, Z. Low-Illumination Road Image Enhancement by Fusing Retinex Theory and Histogram Equalization. Electronics 2023, 12, 990.
  8. Wang, J.; Li, Y.; Cao, L.; Li, Y.; Li, N.; Gao, H. Range-restricted pixel difference global histogram equalization for infrared image contrast enhancement. Opt. Rev. 2021, 28, 145–158.
  9. Kim, Y.-T. Contrast enhancement using brightness preserving bi-histogram equalization. IEEE Trans. Consum. Electron. 1997, 43, 1–8.
  10. Stark, J.A. Adaptive image contrast enhancement using generalizations of histogram equalization. IEEE Trans. Image Process. 2000, 9, 889–896.
  11. Zuiderveld, K. Contrast limited adaptive histogram equalization. In Graphics Gems IV; Academic Press Professional, Inc.: Cambridge, MA, USA, 1994; pp. 474–485.
  12. Zhang, F.; Dai, Y.; Peng, X.; Wu, C.; Zhu, X.; Zhou, R.; Wu, Y. Brightness segmentation-based plateau histogram equalization algorithm for displaying high dynamic range infrared images. Infrared Phys. Technol. 2023, 134, 104894.
  13. Wang, B.; Chen, L.; Liu, Y. New results on contrast enhancement for infrared images. Optik 2019, 178, 1264–1269.
  14. Branchitta, F.; Diani, M.; Corsini, G.; Romagnoli, M. New technique for the visualization of high dynamic range infrared images. Opt. Eng. 2009, 48, 096401.
  15. Zuo, C.; Chen, Q.; Liu, N.; Ren, J.; Sui, X. Display and detail enhancement for high-dynamic-range infrared images. Opt. Eng. 2011, 50, 127401.
  16. He, K.; Sun, J.; Tang, X. Guided Image Filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409.
  17. Ren, L.; Pan, Z.; Cao, J.; Liao, J.; Wang, Y. Infrared and visible image fusion based on weighted variance guided filter and image contrast enhancement. Infrared Phys. Technol. 2021, 114, 103662.
  18. Jiang, Y.; Dong, L.; Liang, J. Image Enhancement of Maritime Infrared Targets Based on Scene Discrimination. Sensors 2022, 22, 5873.
  19. Ouyang, H.; Xia, L.; Li, Z.; He, Y. An Infrared Image Detail Enhancement Algorithm Based on Parameter Adaptive Guided Filtering. Infrared Technol. 2022, 44, 1324–1331.
  20. Tian, F.; Wang, M.; Liu, X. Low-Light Mine Image Enhancement Algorithm Based on Improved Retinex. Appl. Sci. 2024, 14, 2213.
  21. Zhang, J.; Chen, C.; Chen, K.; Ju, M.; Zhang, D. Local Adaptive Image Filtering Based on Recursive Dilation Segmentation. Sensors 2023, 23, 5776.
  22. Liu, C.; Sui, X.; Kuang, X.; Liu, Y.; Gu, G.; Chen, Q. Adaptive Contrast Enhancement for Infrared Images Based on the Neighborhood Conditional Histogram. Remote Sens. 2019, 11, 1381.
  23. Liu, J.; Zhou, X.; Wan, Z.; Yang, X.; He, W.; He, R.; Lin, Y. Multi-Scale FPGA-Based Infrared Image Enhancement by Using RGF and CLAHE. Sensors 2023, 23, 8101.
  24. Lewis, J. FLIR releases machine learning thermal dataset for advanced driver assistance systems. Vis. Syst. Des. 2018, 9, 23.
  25. Venkatanath, N.; Praneeth, D.; Maruthi Chandrasekhar, B.; Channappayya, S.S.; Medasani, S.S. Blind image quality evaluation using perception based features. In Proceedings of the 2015 Twenty First National Conference on Communications (NCC), Mumbai, India, 27 February–1 March 2015; pp. 1–6.
  26. Ge, P.; Yang, B.; Mao, W.; Chen, S.; Zhang, Q.; Han, Q. High Dynamic Range Infrared Image Enhancement Algorithm Based on Guided Image Filter. Infrared Technol. 2017, 39, 1092–1097.
  27. Lu, P.; Huang, Q. Robotic Weld Image Enhancement Based on Improved Bilateral Filtering and CLAHE Algorithm. Electronics 2022, 11, 3629.
  28. Tsai, D.-Y.; Lee, Y.; Matsuyama, E. Information Entropy Measure for Evaluation of Image Quality. J. Digit. Imaging 2008, 21, 338–347.
  29. Zhang, F.; Xie, W.; Ma, G.; Qin, Q. High dynamic range compression and detail enhancement of infrared images in the gradient domain. Infrared Phys. Technol. 2014, 67, 441–454.
  30. Cheng, T.; Lu, X.; Yi, Q.; Tao, Z.; Zhang, Z. Research on Infrared Image Enhancement Method Combined with Single-scale Retinex and Guided Image Filter. Infrared Technol. 2021, 43, 1081–1088.
  31. Tian, K.; Ma, X.; He, H. Global double gamma correction with improved SSA for low-light image enhancement. Electron. Meas. Technol. 2023, 46, 124–133.
Figure 1. Overall framework of the proposed MSGF-TIR algorithm.
Figure 2. Multi-scale extraction of detail layers and base layers of thermal infrared images.
Figure 3. Extraction of the multi-scale detail layers and the base layer. (a) Original image; (b) small-scale detail layer; (c) medium-scale detail layer; (d) large-scale detail layer; (e) base layer. Panels (b–d) are normalized for ease of observation.
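The decomposition shown in Figure 3 splits the input into small-, medium-, and large-scale detail layers plus a base layer by repeated guided filtering. A minimal NumPy/SciPy sketch of such a multi-scale guided-filter decomposition; the radii and the regularization term `eps` below are illustrative assumptions, not the parameters used in the paper:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r, eps):
    """Guided filter (He et al., Ref. [16]) using box-filter means.
    I: guidance image, p: input image, r: window radius, eps: regularization."""
    size = 2 * r + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    corr_Ip = uniform_filter(I * p, size)
    corr_II = uniform_filter(I * I, size)
    var_I = corr_II - mean_I * mean_I          # local variance of the guide
    cov_Ip = corr_Ip - mean_I * mean_p         # local covariance of guide and input
    a = cov_Ip / (var_I + eps)                 # per-window linear coefficients
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)

def multiscale_decompose(img, radii=(2, 8, 16), eps=0.01):
    """Split an image into detail layers (small to large scale) and a base layer.
    Each detail layer is the residual removed by one self-guided filtering pass."""
    details = []
    base = img.astype(np.float64)
    for r in radii:
        smoothed = guided_filter(base, base, r, eps)
        details.append(base - smoothed)        # detail at this scale
        base = smoothed                        # keep smoothing for coarser scales
    return details, base
```

By construction, summing all detail layers with the base layer reconstructs the input exactly, which is what allows the paper's final linear fusion of enhanced layers.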
Figure 4. Histogram clipping and redistribution [23].
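Figure 4 depicts CLAHE's contrast-limiting step: histogram bins above a clip limit are truncated and the clipped excess is redistributed uniformly across all bins. A single-pass NumPy sketch of that step (production implementations such as OpenCV may iterate, since redistribution can push bins back over the limit):

```python
import numpy as np

def clip_and_redistribute(hist, clip_limit):
    """One pass of CLAHE's histogram clipping: truncate bins at clip_limit
    and spread the clipped excess evenly over all bins."""
    hist = hist.astype(np.float64)
    excess = np.maximum(hist - clip_limit, 0.0).sum()  # total clipped mass
    clipped = np.minimum(hist, clip_limit)             # truncated histogram
    return clipped + excess / hist.size                # uniform redistribution
```

The total pixel count is preserved, so the subsequent CDF-based mapping stays a valid gray-level transform while its slope, and hence local contrast amplification and noise gain, is bounded.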
Figure 5. Interpolation operation.
Figure 6. (a) Original image; (b) HE; (c) GF&DDE; (d) BF&DDE; (e) our algorithm.
Figure 7. (a) Original image; (b) HE; (c) GF&DDE; (d) BF&DDE; (e) our algorithm.
Figure 8. (a) Original image; (b) HE; (c) GF&DDE; (d) BF&DDE; (e) our algorithm.
Table 1. Objective evaluation indicators.
Scene        | Algorithm     | PSNR  | IE   | AG    | PIQE
Scene One    | HE            | 13.75 | 6.95 | 86.20 | 38.79
             | GF&DDE        | 23.42 | 6.93 | 48.72 | 38.91
             | BF&DDE        | 27.07 | 7.05 | 49.22 | 66.21
             | Our algorithm | 27.07 | 7.34 | 54.15 | 34.45
Scene Two    | HE            | 12.33 | 5.91 | 61.83 | 48.54
             | GF&DDE        | 22.83 | 6.09 | 18.43 | 11.32
             | BF&DDE        | 19.56 | 6.11 | 19.48 | 42.97
             | Our algorithm | 21.20 | 6.47 | 19.84 | 9.35
Scene Three  | HE            | 11.80 | 5.66 | 86.92 | 46.18
             | GF&DDE        | 18.57 | 5.70 | 15.88 | 9.38
             | BF&DDE        | 21.46 | 5.79 | 15.79 | 47.56
             | Our algorithm | 30.74 | 6.16 | 19.05 | 8.02
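The indicators in Table 1 follow standard definitions: PSNR measures fidelity against a reference image, IE is the Shannon entropy of the gray-level histogram (Ref. [28]), and AG is the mean local gradient magnitude; PIQE is the no-reference metric of Ref. [25] and is omitted here. A NumPy sketch of the common formulations, which may differ in detail from the authors' exact implementations:

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB against a reference image."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def information_entropy(img):
    """IE: Shannon entropy (bits) of the 8-bit gray-level histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def average_gradient(img):
    """AG: mean of sqrt((dx^2 + dy^2) / 2) over forward differences."""
    f = img.astype(np.float64)
    dx = f[:-1, 1:] - f[:-1, :-1]   # horizontal difference
    dy = f[1:, :-1] - f[:-1, :-1]   # vertical difference
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))
```

Higher IE and AG indicate richer gray levels and sharper detail, which is how Table 1 supports the claim of enhanced contrast and detail.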
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Li, H.; Wang, S.; Li, S.; Wang, H.; Wen, S.; Li, F. Thermal Infrared-Image-Enhancement Algorithm Based on Multi-Scale Guided Filtering. Fire 2024, 7, 192. https://doi.org/10.3390/fire7060192

Chicago/Turabian Style

Li, Huaizhou, Shuaijun Wang, Sen Li, Hong Wang, Shupei Wen, and Fengyu Li. 2024. "Thermal Infrared-Image-Enhancement Algorithm Based on Multi-Scale Guided Filtering" Fire 7, no. 6: 192. https://doi.org/10.3390/fire7060192

APA Style

Li, H., Wang, S., Li, S., Wang, H., Wen, S., & Li, F. (2024). Thermal Infrared-Image-Enhancement Algorithm Based on Multi-Scale Guided Filtering. Fire, 7(6), 192. https://doi.org/10.3390/fire7060192
