Article

A Low-Delay Dynamic Range Compression and Contrast Enhancement Algorithm Based on an Uncooled Infrared Sensor with Local Optimal Contrast

1 School of Optics and Photonics, Beijing Institute of Technology, 5 South Zhongguancun Street, Haidian District, Beijing 100081, China
2 Kunming Institute of Physics, No. 31, Jiaochang East Road, Wuhua District, Kunming 650221, China
3 School of Life Science, Beijing Institute of Technology, 5 South Zhongguancun Street, Haidian District, Beijing 100081, China
4 College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310023, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(21), 8860; https://doi.org/10.3390/s23218860
Submission received: 23 August 2023 / Revised: 7 October 2023 / Accepted: 27 October 2023 / Published: 31 October 2023
(This article belongs to the Special Issue Applications of Manufacturing and Measurement Sensors)

Abstract:
Real-time compression of images with a high dynamic range into those with a low dynamic range while preserving the maximum amount of detail is still a critical technology in infrared image processing. We propose a dynamic range compression and enhancement algorithm for infrared images with local optimal contrast (DRCE-LOC). The algorithm has four steps. The first blocks the original image to determine the optimal stretching coefficient using the information of each local block. In the second, the algorithm combines the original image with a low-pass filter to create the background and detailed layers, compressing the dynamic range of the background layer with adaptive gain and enhancing the detailed layer according to the visual characteristics of the human eye. Third, the original image is used as the input, the compressed background layer is used as a brightness-guided image, and the local optimal stretching coefficient is used for dynamic range compression. Fourth, an 8-bit image is created (from typical 14-bit input) by merging the enhanced details and the compressed background. Implemented on an FPGA, the algorithm used 2.2554 Mb of Block RAM, five dividers, and a root calculator, with a total image delay of 0.018 s. We analyzed mainstream algorithms in various scenarios (rich scenes, small targets, and indoor scenes), confirming the proposed algorithm's superiority in real-time processing, resource utilization, preservation of image detail, and visual effect.

1. Introduction

Because of their advantages in passive imaging and night imaging, infrared imaging systems have been widely used in military, aerospace, security, and other fields. Obtaining high-quality imaging results has always been a goal. Infrared imaging systems typically use 14-bit or higher ADC (analog-to-digital converter) acquisition circuits to gather information from an infrared scene with a high dynamic range, while the majority of image display devices are 8-bit displays with a low dynamic range. This may result in some details being lost during the display process. To suit the application scenarios of infrared imaging devices and better adapt their output to the visual characteristics of the human eye, we need to perform high-dynamic-range compression of the image. Of course, some loss of information in this process is inevitable, but different processing methods preserve different details, and the quality of a processing method is judged mostly by human vision. In addition, in recent years, uncooled infrared sensor technology has improved significantly, and its applications have become increasingly widespread, especially in civilian products. Therefore, how to compress images with a high dynamic range into those with a low dynamic range while preserving the majority of the detailed information is one of the critical technologies for infrared systems; at the same time, low-delay image processing is also essential for back-end target detection and tracking.
Contrast is a measure of the difference in brightness between the brightest white and the darkest black in an image: the greater the difference, the higher the contrast. It can be global or local (i.e., concentrated in a small area).
Appropriate local contrast can give the image a sense of depth and three-dimensionality, which improves its visual effect for the human eye. In addition, enhancing the local contrast can minimize the halo effect, making the picture clearer.
To address the problem of dynamic range compression, many researchers have developed algorithms for displaying high dynamic ranges and enhancing details [1]. Overall, high-dynamic-range image compression algorithms can be classified into traditional mapping-based algorithms, gradient domain-based compression algorithms, and image layering-based compression algorithms. Besides these, there are also a few methods [2,3] that cannot be classified into any of these three classes. Different algorithms are applicable for various application contexts and vary in their complexity and performance.

Related Work

Mapping-based enhancement algorithms for high-dynamic-range IR (infrared) images are the simplest and most widely used, and they noticeably improve the visual effect for the human eye. These algorithms include self-gain-based linear mapping, Gamma curve correction, histogram projection, etc. To make the distribution of the histogram as uniform as possible, the earliest histogram equalization method redistributed the image's grayscale using a cumulative distribution function. A uniform probability density distribution can be obtained with this method, but it has drawbacks as well, including overenhancement, higher noise levels, the loss of some details, and fading. To solve these issues, a threshold-based plateau histogram equalization (PHE) algorithm was proposed in the literature [4]. The plateau histogram modifies ordinary histogram algorithms by adding a threshold: when the density of a gray value exceeds the threshold, it is clipped, thereby improving the global contrast and preventing a small number of outliers from affecting the global distribution. The literature [5] proposed an adaptive histogram equalization (AHE) algorithm, which computes a histogram function over a local window and can enhance the local contrast of an image while further highlighting its finer details. However, the AHE algorithm is prone to creating a great deal of noise. In light of this, a contrast-limited adaptive histogram equalization (CLAHE) algorithm was proposed in the literature [6], which reduces the amplitude of the noise with a clipping threshold. Some researchers [7,8] have also made improvements to the CLAHE algorithm. Generally, the self-gain-based algorithms perform noticeably worse than the histogram-based equalization algorithms.
Nevertheless, the histogram-based algorithms lack flexibility in some particular applications and are also susceptible to overenhancement, block effects, etc., because they are entirely dependent on the histogram’s statistics.
A gradient-domain dynamic range compression algorithm compresses large-scale gradients while keeping low-amplitude information. Fattal [9] proposed an algorithm that became the framework of gradient-domain-based high-dynamic-range image compression (GDHDRC). Based on this, a detail-preserving algorithm (GDHDRC-DPS) was proposed, and further research [10,11,12,13] refined the GDHDRC-DPS algorithms even more. On most occasions, these algorithms produce rather acceptable outcomes; however, they have significant limitations in certain situations. On the one hand, the factors used to smooth the data and gradient terms need to be properly designed to prevent oversaturation; on the other hand, the details of the output images are still not sharp enough to meet some of the requirements of displaying infrared images with an HDR (high dynamic range).
The algorithms based on image layering [14,15,16,17,18,19,20,21,22,23,24,25,26,27] decompose the original image into various components, such as the background layer of the image and the detailed layer of the image, and then process each component separately; afterwards, the compressed background layer image and the stretched detailed layer image are reintegrated into an image in which the dynamic range is compressed and the detailed layer image is enhanced. The basic framework is shown in Figure 1, where the original image is decomposed into a background layer image and a detailed layer image after being processed by a low-pass filter; the background layer image undergoes dynamic range compression, and the detailed layer image is enhanced. A final image with a low dynamic range is obtained after re-fusing. Although the choice of algorithms can differ for low-pass filtering, the background layer, the detailed layer, and image fusion, they are basically improved and promoted in this framework.
Specifically, an infrared image enhancement algorithm based on bilateral filtering (bilateral filter and dynamic range partitioning; BF-DRP) was proposed in the literature [14]. The algorithm divides the original image into a background layer image and a detailed layer image using bilateral filtering, then performs Gamma curve processing (compression or expansion, respectively) on the background layer image and the detailed layer image, and finally reintegrates the background layer and the detailed layer image to obtain the detail-enhanced image. The bilateral filter is a nonlinear filter that takes both the spatial proximity and the pixels’ grayscale difference into account, smoothing out uniform areas of the input image such as the sky and sea while preserving strong edges such as floors. However, the drawbacks of the BF-DRP algorithm are also obvious: (1) several parameters of the algorithm require fine-tuning to achieve better enhancement; (2) the original image minus the image after bilateral filtering is used to obtain a detailed layer image, the edges of which will be sharper than the edges of the original image or even have gradient inversion, while the highlighted noise appears in flat areas [15,16].
Based on this, an improved version of the BF-DRP algorithm, namely the bilateral filter and digital detail enhancement (BF-DDE) algorithm, was proposed in the literature [17,18], with the goal of obtaining a corrected background layer image that was closer to the original image. In addition, research on the human visual system [17] showed that human eyes are more sensitive to noise in uniform regions than in complex regions. The literature [19] first described the noise mask function, and the literature [20] used a noise mask function based on local variance, both of which clearly improved the image’s overall contrast and improved the information on the target and the detail. The algorithms based on a bilateral filter have two problems: (1) the detailed layer image obtained with the bilateral filter is prone to gradient flipping because the mean filter method based on Gaussian weights is unstable when one pixel has a large difference from its adjacent pixels; (2) the computational complexity of bilateral filter is O(Nr2), and as the filter window increases, the computational time will increase quadratically. To solve these problems, a linear filter, i.e., a guided filter (GF), for the detail-enhancement algorithm was proposed in the literature [21], while the studies of both [22,23] introduced an algorithm for enhancing images with a high dynamic range (GF-DDE) based on a guided filter. The guided filter was used instead of the bilateral filter in the GF-DDE method, which significantly decreased the algorithm’s computing complexity but worsened its edge retention. An algorithm for enhancing images with a high dynamic range (LEPF-DDE) based on a local edge-preserving filter was proposed by [24]. The local edge-preserving filter (LEPF) was used to separate the original image into a background layer image and one or more detailed layer images. 
Next, the multi-scale background layer image undergoes maximum entropy-based Gamma curve correction, and the detailed layer undergoes artifact elimination and detail amplification. Finally, the detailed layer image and the background layer image are recombined. The local edge-preserving filter (LEPF) proposed in the literature [24,25] aims to filter low-amplitude noise while maintaining strong edges.
On the whole, traditional image mapping methods offer only mediocre retention of detail and image contrast. Gradient-domain compression algorithms retain details better and enhance the contrast, but their parameters need to be carefully adjusted for different scenes. Algorithms based on image layering achieve greater improvements in detail retention and noise suppression, but traditional methods are still used for the background layer, so the contrast enhancement is mediocre and can easily cause halos and other problems.
The DRCE-LOC algorithm proposed in this study focused on improving image delay, storage resources, FPGA-based implementation, and the trade-off between global contrast and local contrast while applying the framework of dynamic range compression based on image layering (Figure 1).
At the same time, the algorithm was optimized and implemented on an FPGA (field-programmable gate array). For the low-delay image processing requirements of offline applications [26], it is often necessary to design a hardware processing method based on an FPGA. This is different from many CPU (central processing unit)-based algorithms. Multiplication can be implemented on a parallel pipelined FPGA with only one clock cycle, but exponential calculation that is simple on a CPU is difficult to implement on an FPGA. Therefore, it is necessary to effectively utilize the characteristics of the FPGA to study the design and improvement of algorithms. We conducted experiments on an infrared imager equipped with the FPGA and uncooled infrared sensors and evaluated the effect of the algorithm.

2. Methods

2.1. DRCE-LOC

An uncooled infrared sensor does not need a refrigeration device and can work at room temperature. It has many advantages, such as fast start-up, low power consumption, small size, light weight, long life, and low cost. In recent years, such sensors have been widely used in military and civil night vision products. The proposed algorithm was applied to an infrared imager equipped with 1024 × 768 uncooled infrared sensors, and all experiments were carried out on this system.
The framework of the DRCE-LOC algorithm is shown in Figure 2. Its basic principle is as follows. Firstly, the brightness value and contrast value are re-corrected within each local block to achieve the best possible local contrast. Secondly, a Gaussian model [28] is used to smooth the contrast values of the local blocks to eliminate the block effect. Then, a brightness adjustment algorithm guided by the global brightness is applied: the background layer obtained by the guided filter is passed through a global mapping to obtain the brightness-guided image. This step preserves the brightness relationships of the original image as much as possible. Finally, the detailed layer obtained via the guided filter and the image obtained after brightness and contrast correction are re-fused to obtain the final image. The implementation of the algorithm comprises the following eight steps.
  • Step 1: Image segmentation.
To reduce the amount of storage, the original image $I_{in}$ (with a size of $M \times N$) is divided into $X \times Y$ local blocks, each of size $\frac{M}{X} \times \frac{N}{Y}$. The mean value ($Block_{mean}$) and standard deviation ($Block_{std}$) of each block are calculated. $Block_{mean}$ reflects the brightness level of the local blocks of the original image, and $Block_{std}$ reflects the richness of the detail in the local blocks. The sizes of $Block_{mean}$ and $Block_{std}$ are both $X \times Y$.
  • Step 2: Calculation of the local block’s stretching coefficient.
The stretching coefficient of each local block is determined by its standard deviation, as shown in Equation (1)

$$Stretch_{para} = \frac{255}{Block_{std} + Bias} \tag{1}$$

where $Stretch_{para}$ is the stretching coefficient of the local block and $Bias$ is a constant that limits the gain coefficient from becoming too large. After processing, the size of $Stretch_{para}$ is $X \times Y$.
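As a concrete illustration of Steps 1 and 2, the block statistics and the stretching coefficient of Equation (1) can be sketched in NumPy as follows (the function name and the assumption that the image dimensions divide evenly into the block grid are ours):

```python
import numpy as np

def block_statistics(img, X, Y, bias=256.0):
    """Steps 1-2: per-block mean (brightness level), standard deviation
    (detail richness), and stretching coefficient 255 / (Block_std + Bias)."""
    M, N = img.shape
    # Split the M x N image into an X x Y grid of (M/X) x (N/Y) tiles;
    # assumes M % X == 0 and N % Y == 0.
    blocks = img.reshape(X, M // X, Y, N // Y)
    block_mean = blocks.mean(axis=(1, 3))       # shape (X, Y)
    block_std = blocks.std(axis=(1, 3))         # shape (X, Y)
    stretch_para = 255.0 / (block_std + bias)   # Bias caps the gain
    return block_mean, block_std, stretch_para
```

For a flat block, the standard deviation is 0 and the coefficient saturates at $255/Bias$, which is exactly the gain limit discussed in Section 2.2.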
  • Step 3: Upsampling of the parameters of the local block.
$Stretch_{para}$ and $Block_{mean}$ are upsampled to $M \times N$ dimensions, the basic principle of which is shown in Figure 3. The stretching parameter and the mean value of each local block are projected to the center of that block in the original image, and the stretching and mean parameters at each pixel of the original image are then computed from the parameters of every local block, weighted by Gaussian weights of the distance to each block's center.
$$NStretch_{para}(i,j) = \frac{\sum_{x=1}^{X}\sum_{y=1}^{Y} Stretch_{para}(x,y)\,\exp\!\left(-\frac{d_{(i,j)}(x,y)}{2\delta^2}\right)}{\sum_{x=1}^{X}\sum_{y=1}^{Y} \exp\!\left(-\frac{d_{(i,j)}(x,y)}{2\delta^2}\right)} \tag{2}$$

$$NMean_{para}(i,j) = \frac{\sum_{x=1}^{X}\sum_{y=1}^{Y} Block_{mean}(x,y)\,\exp\!\left(-\frac{d_{(i,j)}(x,y)}{2\delta^2}\right)}{\sum_{x=1}^{X}\sum_{y=1}^{Y} \exp\!\left(-\frac{d_{(i,j)}(x,y)}{2\delta^2}\right)} \tag{3}$$
where $NStretch_{para}$ and $NMean_{para}$ are the stretching parameter and the mean parameter after upsampling, respectively, and their sizes are both $M \times N$; $\delta^2$ is a constant used to control the Gaussian weights; and $d_{(i,j)}(x,y)$ represents the distance from the coordinates of the original image $(i,j)$ to the coordinates of the local block's center $(x,y)$.
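A direct (unoptimized) NumPy sketch of the Gaussian-weighted upsampling of Equations (2) and (3) follows; the block-center placement and the function name are our assumptions:

```python
import numpy as np

def upsample_block_params(param, M, N, delta_sq=3600.0):
    """Step 3: upsample an (X, Y) per-block parameter map to full
    resolution (M, N) as a normalized Gaussian-weighted sum over all
    block centers, following Eqs. (2)-(3)."""
    X, Y = param.shape
    # Coordinates of each block's center in the full-resolution image
    cx = (np.arange(X) + 0.5) * (M / X)
    cy = (np.arange(Y) + 0.5) * (N / Y)
    ii, jj = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    num = np.zeros((M, N))
    den = np.zeros((M, N))
    for x in range(X):
        for y in range(Y):
            d = np.hypot(ii - cx[x], jj - cy[y])  # distance to block center
            w = np.exp(-d / (2.0 * delta_sq))     # Gaussian weight
            num += w * param[x, y]
            den += w
    return num / den
```

Because the weights are normalized by their sum, a constant parameter map is reproduced exactly, and smoothly varying maps are interpolated without visible block boundaries.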
  • Step 4: Layering of the guided filter.
Guided filters [19] are edge-preserving linear filters that have been widely used in image enhancement. Here, they are used to divide the original image into a background layer image and a detailed layer image. We assume that the input image is $p$, the output image is $q$, and the guide image is $I$. The fundamental idea is to combine an original image with a guide image to produce a filtered image that resembles the guide image. The equation is displayed in Equation (4)
$$\left\{\begin{aligned} q_i &= a_k I_i + b_k, \quad \forall i \in w_k \\ a_k &= \frac{\overline{p_k I_k} - \bar{p}_k \bar{I}_k}{\overline{I_k^2} - \bar{I}_k^2 + \varepsilon} = \frac{\frac{1}{|w|}\sum_{i \in w_k} I_i p_i - u_k \bar{p}_k}{\sigma_k^2 + \varepsilon} \\ b_k &= \bar{p}_k - a_k u_k \end{aligned}\right. \tag{4}$$
where $w_k$ is a square window of width $w$ centered at pixel $k$; $|w|$ represents the number of pixels in the window; $a_k$ and $b_k$ are linear coefficients that are constant within the local window $w_k$; $u_k$ and $\sigma_k^2$ are the mean and variance of the guide image $I$ in $w_k$; and $i$ indexes the pixels in $w_k$. The linear coefficients must satisfy the formulas above so that the difference between $q$ and $p$ is minimized.
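Step 4 can be sketched with the standard box-filter formulation of the guided filter, using the original image as its own guide; the helper names and the edge-replication padding are our choices:

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1) x (2r+1) window with edge replication."""
    pad = np.pad(x, r, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(pad, (2 * r + 1, 2 * r + 1))
    return win.mean(axis=(2, 3))

def guided_filter_layers(p, r=8, eps=1e-2):
    """Step 4: split image p into a background layer q = a_k*I + b_k and a
    detailed layer p - q, with the image serving as its own guide (I = p)."""
    I = p
    mean_I = box_mean(I, r)
    mean_p = box_mean(p, r)
    cov_Ip = box_mean(I * p, r) - mean_I * mean_p
    var_I = box_mean(I * I, r) - mean_I ** 2        # sigma_k^2
    a = cov_Ip / (var_I + eps)                      # a_k of Eq. (4)
    b = mean_p - a * mean_I                         # b_k of Eq. (4)
    q = box_mean(a, r) * I + box_mean(b, r)         # background layer
    return q, p - q, a                              # a is reused in Step 7
```

In a perfectly uniform region the local variance is 0, so $a_k \to 0$ and the filter reduces to a local average, which is exactly why the detailed layer is noise-dominated there.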
  • Step 5: Calculation of the brightness guide map.
In order to increase the computational speed of the FPGA and maintain the light–dark relationship of the original image, a linear mapping method is used. Gamma curves, histogram equalization, and other methods can be used in some specialized applications. The mapping method is shown in Equation (5)
$$I_{gc}(i,j) = \frac{255\left(I_{in}(i,j) - \bar{I}_{in}\right)}{Std(I_{in}) + \lambda} + Bright \tag{5}$$

where $Std(I_{in})$ is the standard deviation of the original image, $Bright$ is the desired brightness of the image, $\lambda$ is a constant that prevents excessive gain, and $I_{gc}(i,j)$ is the mapped 8-bit global brightness guide.
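Equation (5) is a plain linear mapping; a minimal sketch, with illustrative defaults for $\lambda$ and $Bright$ and with clipping to the 8-bit range added for display:

```python
import numpy as np

def brightness_guide(img, lam=256.0, bright=128.0):
    """Step 5: linear mapping to an 8-bit global brightness guide, Eq. (5).
    lam limits the gain; bright is the desired mean display brightness."""
    Igc = 255.0 * (img - img.mean()) / (img.std() + lam) + bright
    return np.clip(Igc, 0.0, 255.0)
```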
  • Step 6: Dynamic range compression and contrast enhancement.
Four inputs are needed for dynamic range compression and contrast enhancement: $I_{in}$, $I_{gc}$, $NMean_{para}$, and $NStretch_{para}$. The corresponding equation is

$$Base_{out}(i,j) = k_1 \times NStretch_{para}(i,j) \times \left[ I_{in}(i,j) - NMean_{para}(i,j) \right] + k_2 \times I_{gc}(i,j) \tag{6}$$

where $Base_{out}(i,j)$ is the output image after contrast enhancement, $k_1$ is the parameter that controls the local contrast, $k_2$ is the parameter that controls the global contrast, $I_{gc}$ is the brightness-guided image calculated in Step 5, and $NMean_{para}$ and $NStretch_{para}$ are the local mean and stretching parameters calculated in Step 3, respectively.
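Step 6 combines the four inputs in a single weighted expression; a one-line sketch of Equation (6) (names and defaults are ours):

```python
import numpy as np

def compress_contrast(I_in, I_gc, n_mean, n_stretch, k1=1.0, k2=0.5):
    """Step 6, Eq. (6): local stretch about the upsampled local mean,
    plus the k2-weighted global brightness guide."""
    return k1 * n_stretch * (I_in - n_mean) + k2 * I_gc
```

With `n_mean` equal to `I_in` (a perfectly flat scene), the local term vanishes and the output reduces to `k2 * I_gc`, i.e., the globally guided brightness alone.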
  • Step 7: Filtering of the detailed layer.
The detailed layer obtained through the previous steps contains noise. A noise mask function based on human vision [17] is utilized to reduce noise. The basic principle of the function is that the human eye is sensitive to noise in a uniform scene but is insensitive to noise in areas with large amounts of detail.
$$Detail_{out} = Detail_{image} \times \left[ g_L + |a_k| \left( g_H - g_L \right) \right] \tag{7}$$

where $Detail_{image}$ is the detailed layer image obtained after guided-filter layering, $a_k$ is the coefficient calculated during guided-filter layering, which reflects the quantity of local information, and $g_L$ and $g_H$ are the lower and upper bounds of the gain.
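A sketch of the visibility-based mask of Equation (7); the gain bounds `g_low` and `g_high` and their default values are illustrative assumptions, since only their role is fixed by the text:

```python
import numpy as np

def mask_detail(detail, a_k, g_low=0.5, g_high=2.0):
    """Step 7: attenuate the detailed layer where |a_k| is small (uniform
    areas, where the eye is noise-sensitive) and amplify it where |a_k| is
    large (edges and busy regions)."""
    gain = g_low + np.abs(a_k) * (g_high - g_low)
    return detail * gain
```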
  • Step 8: Image enhancement.
The final image is obtained by re-integrating the contrast-enhanced image and the detailed layer image.
$$I_{out} = Base_{out} + DDE \times Detail_{out} \tag{8}$$

where $I_{out}$ is the final output image and $DDE$ is the detail enhancement factor.
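Step 8 is the re-integration of Equation (8); a sketch with the final quantization to the 8-bit display range made explicit (the clipping step is implied rather than stated in the text):

```python
import numpy as np

def fuse_layers(base_out, detail_out, dde=2.0):
    """Step 8, Eq. (8): recombine the layers and quantize to 8 bits."""
    out = base_out + dde * detail_out
    return np.clip(np.round(out), 0, 255).astype(np.uint8)
```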

2.2. Noise Suppression

The algorithm includes two sources of noise: noise in the detailed layer and noise in the background layer. The detailed layer's noise is handled with the principle of [17]. Over-enhancement of the local contrast, which is the primary source of noise in the background layer, is suppressed by $Bias$ in Equation (1). To analyze the influence of stretching the background layer on local noise, we set $I_{gc} = 128$. At this point, the global contrast stretch of the image is 0, and all the noise in the image comes from the stretching of local contrast. The algorithm then divides the 1024 × 768 infrared image into 4 × 3 blocks, with $\delta^2$ set to 3600 (the selection of this parameter is introduced in the next section). The effect on $Base_{out}$ under different values of $Bias$ is shown in Figure 4.
AGC (adaptive gain control) is a method of compressing high dynamic ranges that maps the dynamic range of an input image to a specified dynamic range. Since a 14-bit image cannot be visualized on most display devices, we considered the image processed by AGC as the input image and compared it with images processed by other algorithms.
Figure 4 illustrates how the final image's noise level varied significantly with different values of $Bias$. When $Bias = 0$, uniform areas of the scene were overamplified, producing a significant amount of noise. As $Bias$ increased, the noise in the uniform regions steadily decreased, while the local contrast of the image diminished along with the noise. When $Bias$ was around 256, the image's noise was low and the local contrast performed well.

2.3. Halo Suppression and Contrast Control

A halo is a virtual shadow that extends from the edges in an image, a problem that can easily arise during image blocking. Suppressing halos keeps the image closer to its original state. In this study, we combined local and global information, but the halo problem can still arise if the parameters are selected improperly. In Equation (6), $k_2$ is used to control the halo problem and the global contrast. When $k_2 = 0$, the algorithm becomes a stretching algorithm with purely local contrast; as $k_2$ increases, the weight of the global contrast in the final image gradually increases. In particular, in Equation (6), the mean component of the locally stretched image is subtracted, rather than simply adding the locally stretched image and the global image; the global image therefore guides the global contrast in Equation (6). The final image's global contrast approaches that of the globally compressed image as $k_2$ increases, and the halo becomes weaker, while better local contrast can still be achieved by controlling $k_1$. Figure 5 shows the effect of the algorithm under different values of $k_2$ ($k_1 = 1$; the basis for selecting this parameter is presented in the next section). The red rectangles in the images facilitate a comparison of the halo suppression effect under different values of $k_2$. It can be seen from Figure 5 that when $k_2$ was large, the contrast of the final image was excessively high and some local information was even obscured, as shown in Figure 5a. As $k_2$ decreased, the overall contrast of the image steadily declined, and when $k_2 = 0.1$, the brightness of the local blocks of the image was nearly the same, losing the overall brightness relationships of the original image. Moreover, when $k_2 = 0.1$, if a continuous video sequence enters the algorithm module proposed in this study, a shifting halo will develop as the local information changes.
It can be seen from Figure 5d that when $k_2 = 0.5$ or so, the final image is close to the global contrast image, the halo phenomenon almost disappears, and good global and local contrast are achieved simultaneously.

2.4. Parameter Settings

The choice of parameters determines how well the algorithm performs. The selection of the parameters in the algorithm was as follows:
$\frac{M}{X} \times \frac{N}{Y}$ is the local block size of the image. In general, the smaller the local block, the better the local contrast; however, the computational effort rises as the block size falls, since more local blocks are required. Very small blocks also tend to produce local halo variations as objects move in real-world scenes. For continuous video, $\frac{M}{X} \times \frac{N}{Y} = 64 \times 64$ achieves better results.
$Bias$ is used to suppress the local overenhancement phenomenon, as described in Section 2.2, and generally achieves better results when it is between 128 and 384.
The parameter $\delta^2$ is used to control the spread of the Gaussian kernels during downsampling and upsampling of the image. The selection of $\delta^2$ is strongly influenced by the size of the image block. Better results are achieved when $\delta^2 \approx \frac{M}{X} \times \frac{N}{Y}$, which eliminates the obvious block effect that may otherwise be present in the image.
The parameter $\lambda$ is set to prevent the global contrast from being overenhanced. It performs the same function as $Bias$, except that $Bias$ controls the local contrast while $\lambda$ controls the global contrast. Similarly, better results are achieved when $\lambda$ is between 128 and 384.
The parameter $k_1$ is used to control the local contrast. Since Equation (2) already limits the range of the image, better results can be achieved when $k_1 = 1$.
The parameter $k_2$ is used to suppress halos and control the global contrast; a detailed description is given in Section 2.3. Better results can be achieved when $k_2$ is between 0.2 and 0.7.
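For convenience, the recommended settings of this section can be gathered into one configuration table; the specific values are illustrative starting points within the stated ranges, and the key names are ours:

```python
# Parameter summary for DRCE-LOC (Section 2.4); values are starting
# points inside the recommended ranges, not normative constants.
DRCE_LOC_PARAMS = {
    "block_size": (64, 64),  # M/X x N/Y, recommended for continuous video
    "bias": 256,             # local gain limit, typically 128-384
    "delta_sq": 3600,        # Gaussian spread, roughly the block area
    "lambda": 256,           # global gain limit, typically 128-384
    "k1": 1.0,               # local contrast weight
    "k2": 0.5,               # global contrast / halo suppression, 0.2-0.7
}
```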

2.5. Complexity of the Algorithm

The algorithm proposed in this study is built on a framework based on guided filter layering, which does not make the original guided filter any more computationally complex. Similarly, the computational complexity of the algorithm proposed in this study is O(n) and depends solely on the number of pixels. However, the following improvements can be applied during the construction of the algorithm, particularly for real-time videos, to further reduce the method’s resource consumption:
  • Calculating the local information of the image's local blocks can be achieved by reusing the local information from the previous frame of a continuous sequence, which gives the algorithm a delay of less than one frame without requiring storage of a complete image frame.
  • In the calculations of upsampling and downsampling of the image, the Gaussian kernel can be saved in advance as a parameter to avoid exponential calculations during the execution of the algorithm.
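The second optimization can be sketched as follows: the Gaussian weight maps of Equations (2) and (3) depend only on the image geometry, so they can be computed once at start-up and reused for every frame. This software sketch trades memory for clarity; an FPGA implementation would store a compact kernel table rather than full-resolution maps:

```python
import numpy as np

def precompute_gaussian_weights(M, N, X, Y, delta_sq=3600.0):
    """Precompute one (M, N) weight map per block center, plus their sum
    for normalization, so no exp() is evaluated during frame processing."""
    cx = (np.arange(X) + 0.5) * (M / X)
    cy = (np.arange(Y) + 0.5) * (N / Y)
    ii, jj = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    weights = np.stack([
        np.exp(-np.hypot(ii - cx[x], jj - cy[y]) / (2.0 * delta_sq))
        for x in range(X) for y in range(Y)
    ])                                     # shape (X*Y, M, N)
    return weights, weights.sum(axis=0)    # maps and their normalizer
```

Per frame, each upsampled parameter map is then just a weighted sum of the X·Y stored maps divided by the precomputed normalizer.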

3. Results and Discussion

3.1. Quantitative Assessment

To analyze and evaluate the effect of the algorithm, four image indicators, namely the root mean square (RMS) contrast [29], image entropy [30], structural similarity (SSIM) [31], and the Tenengrad clarity index [32], as well as the algorithm's running time, were used. The running-time experiments were all carried out in the same environment: operating system, Windows 11; CPU, 12th Gen Intel(R) Core(TM) i9-12900HX, 2.30 GHz; RAM, 32.0 GB.
$RMS$ is defined as

$$RMS = \sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I(i,j) - \bar{I}\right)^2} \tag{9}$$

where $I$ denotes the image to be evaluated, $\bar{I}$ represents the mean value of the image, $M$ is the width of the image, and $N$ is the height. The larger the $RMS$, the better the overall contrast of the image.
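Equation (9) is simply the standard deviation of the image about its mean; a minimal sketch:

```python
import numpy as np

def rms_contrast(img):
    """RMS contrast, Eq. (9): root mean square deviation from the mean."""
    return np.sqrt(np.mean((img - img.mean()) ** 2))
```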
Image entropy is defined as

$$Entropy = -\sum_{i=0}^{255} p_i \log p_i \tag{10}$$

where $p_i$ represents the probability of each grayscale value. A greater entropy indicates that the grayscale has been stretched further.
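A sketch of Equation (10) for 8-bit images; the paper does not state the logarithm base, so base 2 (bits) is assumed here:

```python
import numpy as np

def image_entropy(img_u8):
    """Shannon entropy of the 256-bin grayscale histogram, Eq. (10)."""
    hist = np.bincount(img_u8.ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                     # convention: 0 * log 0 = 0
    return -np.sum(p * np.log2(p))
```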
$SSIM$ is defined as

$$SSIM(I, ref) = \frac{\left(2 u_I u_{ref} + C_1\right)\left(2 \sigma_{I,ref} + C_2\right)}{\left(u_I^2 + u_{ref}^2 + C_1\right)\left(\sigma_I^2 + \sigma_{ref}^2 + C_2\right)} \tag{11}$$

where $ref$ represents the reference image; $I$ is the image obtained by stretching the original image with adaptive gain; $u$ represents the mean value; $\sigma^2$ represents the variance and $\sigma_{I,ref}$ the covariance of the two images; and $C_1$ and $C_2$ are small constants that prevent the denominator from becoming 0. A larger $SSIM$ indicates that the two images are more similar in structure.
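A single-window sketch of Equation (11) computed over the whole image; library implementations such as scikit-image instead average SSIM over local windows, and the $C_1$/$C_2$ defaults below follow the common choice for 8-bit images:

```python
import numpy as np

def ssim_global(I, ref, C1=6.5025, C2=58.5225):
    """Global SSIM, Eq. (11), with one window spanning the whole image."""
    uI, uR = I.mean(), ref.mean()
    vI, vR = I.var(), ref.var()              # sigma^2 terms
    cov = ((I - uI) * (ref - uR)).mean()     # sigma_{I,ref}
    return ((2 * uI * uR + C1) * (2 * cov + C2)) / \
           ((uI ** 2 + uR ** 2 + C1) * (vI + vR + C2))
```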
Tenengrad is a gradient-based function that extracts the gradient values in the horizontal and vertical directions with the Sobel operator [33]. It is defined as

$$S(i,j) = \sqrt{\left(G_x * I(i,j)\right)^2 + \left(G_y * I(i,j)\right)^2} \tag{12}$$

$$Tenengrad = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} S(i,j)^2 \tag{13}$$

$$G_x = \frac{1}{4}\begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad G_y = \frac{1}{4}\begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix} \tag{14}$$

where $S(i,j)$ represents the gradient magnitude of image $I$ at the point $(i,j)$, and $G_x$ and $G_y$ represent the Sobel convolution kernels in the horizontal and vertical directions, respectively. A larger Tenengrad value means a clearer image.
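A dependency-free sketch of Equations (12)-(14); the Sobel responses are computed by explicit correlation over the valid interior pixels, so the border handling is our only assumption:

```python
import numpy as np

def tenengrad(img):
    """Tenengrad clarity, Eqs. (12)-(14): mean squared Sobel gradient
    magnitude over the interior pixels of the image."""
    Gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]) / 4.0
    Gy = Gx.T                              # Eq. (14) kernels
    M, N = img.shape
    gx = np.zeros((M - 2, N - 2))
    gy = np.zeros((M - 2, N - 2))
    for di in range(3):
        for dj in range(3):
            patch = img[di:di + M - 2, dj:dj + N - 2]
            gx += Gx[di, dj] * patch
            gy += Gy[di, dj] * patch
    return np.mean(gx ** 2 + gy ** 2)      # average of S(i,j)^2
```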
To evaluate the processing effect of the DRCE-LOC algorithm in this study, we used the typical AGC, HE, CLAHE, GF-DDE, and BF-DRP algorithms as a comparison in three typical scenarios: a rich scene, a scene with a small target, and an indoor scene. The following can be seen from Figure 6, Figure 7 and Figure 8.
(1) The CLAHE algorithm and the proposed algorithm had the best results in terms of image local contrast. CLAHE had two problems. First, it easily produced an overenhancement phenomenon, leading to the noise being amplified, as seen in Figure 6c, where the red rectangle has a significant amount of noise. Second, the scene’s energy was weak when the overall contrast was low, as shown in Figure 7c, where the thermal radiation of the indoor scene was weak and the overall contrast of the whole image was low.
(2) The proposed algorithm had the best results in terms of retaining small targets and details, as shown in Figure 6, where the two small dots at the top right are aircraft targets. The proposed algorithm could highlight the aircraft targets and other detailed information without overexposure.
(3) The proposed algorithm had the best global contrast, as shown in Figure 6 and Figure 8. As shown in Figure 6, each algorithm maintained the details well, but the proposed algorithm was more transparent and more informative. In Figure 8, HE, BF-DRP, and the proposed algorithm all had good global contrast, but HE and BF-DRP both showed overenhancement. The proposed algorithm achieved better global contrast while suppressing overenhancement.
Table 1 shows the evaluation results for the three scenarios shown in Figure 6, Figure 7 and Figure 8. For the RMS index, no algorithm was clearly superior; that is, the algorithms did not differ significantly in global contrast. For entropy, the HE algorithm performed best in all three scenarios; that is, the gray-level distribution after HE processing was the most uniform, which follows from the principle of the HE algorithm. However, HE-based algorithms are prone to excessive stretching, which can also be seen in Figure 6, Figure 7 and Figure 8. For the SSIM index, the closer the value is to 1, the greater the similarity to the original image. None of the algorithms scored well on this index, because the original image was a 14-bit image and its structure changed after compression. Tenengrad represents the clarity of the image. The proposed algorithm showed very obvious superiority in all three scenarios, indicating that it retained the most detailed information and produced the best image clarity after dynamic range compression.
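The exact definitions behind the RMS and entropy indices are not reproduced in this excerpt; assuming the standard forms (RMS contrast as the standard deviation of the gray levels, and entropy as the Shannon entropy of the 8-bit histogram), they can be sketched as:

```python
import numpy as np

def rms_contrast(img8):
    # RMS contrast: standard deviation of the gray levels (assumed form)
    return float(np.std(img8.astype(np.float64)))

def entropy(img8):
    # Shannon entropy (in bits) of the 8-bit gray-level histogram
    hist = np.bincount(np.asarray(img8, dtype=np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]  # skip empty bins; 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))
```

An 8-bit image that uses all 256 levels equally often reaches the maximum entropy of 8 bits, which is why HE, whose goal is a uniform histogram, leads on this index.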
In terms of the operation time, the AGC-based algorithm had the lowest delay, followed by the algorithm proposed in this study. However, AGC had the worst enhancement effect on all image indicators. Therefore, the proposed algorithm has a clear advantage in terms of operation time.

3.2. Implementation of the Algorithm on an FPGA

The proposed algorithm’s FPGA implementation is depicted in Figure 9. It is made up of five parts, each indicated by a green circle. The algorithm is applied to a 1024 × 768 infrared imager with an image frame rate of 25 Hz, a pixel clock of 30 MHz, and 200 pixel clocks of line blanking. The algorithm was implemented on a Xilinx (San Jose, CA, USA) Artix-7 (A7) FPGA chip and was written in VHDL. Each component's implementation and resource utilization are examined in turn.
Part 1: The first part buffers 256 lines of the image, since the calculation of the local stretching and bias factors must wait for 256 lines (for a local block size of 64, where δ2 = 25). This module's operation was straightforward, but its line buffer consumed the most resources and incurred the longest image delay, with a total of 2.1875 Mb of Block RAM and an image delay of 313,344 pixel clocks.
Part 2: The second part calculates the stretching and bias coefficients of the image's local blocks. Each local block was 64 × 64 pixels in size (4096 pixels), and a single image frame contained 192 local blocks. To store the values of the data accumulation, square accumulation, and gain coefficient within each local block, the computation module built three arrays with a depth of 192; the data were 26 bits wide. After all the data in a local block had been accumulated, the sums were shifted 12 bits to the right (dividing by 4096) to produce the block's mean value (BXm) and mean square value (BXX). The module accumulated the image data according to the row and column numbers corresponding to the relevant positions in the arrays; the local block variance Bstd was then computed as the difference between BXX and the square of BXm, and the local standard deviation was generated by taking the square root of Bstd. Finally, the gain coefficient G of the local block could be calculated by Equation (1); a divider was needed for this stage. Overall, this part used 0.0142 Mb of Block RAM, one divider, and some other auxiliary computing resources. The local block calculation required a delay of 64 rows of data, and the calculations of the mean, variance, standard deviation, and gain coefficient required 1, 1, 34, and 26 pixel clocks, respectively, for a total delay of 78,398 pixel clocks, or about 0.0026 s. Moreover, since Part 1 and Part 2 run simultaneously, they added no extra delay to the system.
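The per-block statistics of Part 2 can be sketched in software as follows (a NumPy sketch; the gain formula of Equation (1) lies outside this excerpt and is not reproduced). Note that a 64 × 64 block holds 4096 = 2^12 pixels, which is why the FPGA replaces the division by a 12-bit right shift:

```python
import numpy as np

def block_stats(img, bs=64):
    """Per-block mean (BXm) and standard deviation for bs x bs tiles."""
    H, W = img.shape
    tiles = img.astype(np.float64).reshape(H // bs, bs, W // bs, bs)
    n = bs * bs                                   # 4096 = 2**12 -> 12-bit right shift
    mean = tiles.sum(axis=(1, 3)) / n             # BXm
    mean_sq = (tiles ** 2).sum(axis=(1, 3)) / n   # BXX (mean of squares)
    var = mean_sq - mean ** 2                     # Bstd (variance) = BXX - BXm^2
    return mean, np.sqrt(np.maximum(var, 0.0))
```

For a 1024 × 768 frame this yields a 12 × 16 grid of blocks, i.e. the 192 local blocks mentioned above.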
Part 3: The third part was image layering based on a guided filter with a 5 × 5 window; the data stream was divided into two main steps:
(1) Calculating the coefficients a and b. To create a real-time 5 × 5 image window, the original image (after the 256-line delay) was entered successively into 5-line shift registers, and the data from each line were entered into 5-pixel buffer registers. The mean and variance were calculated for the numbers in the window using the same method as in Part 2. Specifically, to reduce the computational volume and image delay, only the 16 neighborhood pixels highlighted in red were used for the mean and standard deviation. The mean of the 16 data points was computed by simply shifting the sum 4 bits to the right instead of using a divider, and the local values of a and b were calculated by Equation (4). This step used pixel shift registers of 5 lines + 25 pix and one divider; the image delay was 3 lines + 36 pix.
(2) Image layering. To create a 5 × 5 real-time image window, a and b were entered into 5-line shift registers, and the data from each line were entered into 5-pixel buffer registers. The same 16 neighborhood values were chosen to average a and b, yielding the mean values Ma and Mb. Finally, the background layer image Base was calculated by applying Equation (4), and the detailed layer image was Detail = q − Base, where q denotes the input image. To match the alignment of a and b, this step required a delay of 3 lines + 8 pix of the original image. The image delay in this stage was 3 lines + 8 pixel clocks, or 3680 clocks, and it used pixel shift registers of 13 lines + 58 pix, as well as other auxiliary computing resources.
All of Part 3 required pixel shift registers of 15 lines + 75 pix (0.0147 Mb of Block RAM), one divider, and several auxiliary calculation resources; the image delay was 6 lines + 44 pixel clocks, or 7388 clocks.
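Equation (4) is outside this excerpt, so the sketch below uses the standard guided-filter update with the image as its own guide (the regularization constant eps is an assumed parameter, not taken from the paper). It mirrors the two steps above: compute a and b per window, average them, then split the image into Base and Detail:

```python
import numpy as np

def box5(x):
    """5 x 5 box mean with edge padding (the FPGA uses only 16 of the 25 pixels)."""
    p = np.pad(x, 2, mode="edge")
    w = np.lib.stride_tricks.sliding_window_view(p, (5, 5))
    return w.mean(axis=(2, 3))

def guided_layering(img, eps=100.0):
    I = img.astype(np.float64)
    m = box5(I)
    v = box5(I * I) - m * m        # local variance
    a = v / (v + eps)              # standard guided-filter coefficients (eps assumed)
    b = (1.0 - a) * m
    base = box5(a) * I + box5(b)   # background layer: Ma * I + Mb
    return base, I - base          # detail layer = input - base
```

In flat regions the variance is near zero, so a ≈ 0 and the base collapses to the local mean; near strong edges a ≈ 1 and the edge stays in the base layer, which is what suppresses halos.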
Part 4: The fourth part involved the calculation of the global brightness guide map, using the standard-deviation computation of the global image as described in Part 2. To reduce the amount of calculation, the image summation was downsampled by discarding one out of every three pixels. Meanwhile, in continuous video processing, the standard deviation of the previous frame was used as a parameter to calculate the guide image of the current frame. Finally, Equation (5) was used to calculate the global brightness guide image of the frame in real time. This part needed a divider, a root calculator, and some additional computing resources, and the image delay, caused by the divider, was only 26 clocks.
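Equation (5) is not shown in this excerpt, but the downsampled accumulation of Part 4 can be sketched as follows (a NumPy sketch of the "discard one out of every three pixels" subsampling; the guide-map formula itself is omitted):

```python
import numpy as np

def global_std_downsampled(img):
    """Global standard deviation over a 2-of-3 subsample of the pixels,
    mimicking the 'discard one out of every three' accumulation of Part 4."""
    flat = img.astype(np.float64).ravel()
    keep = np.ones(flat.size, dtype=bool)
    keep[2::3] = False                       # drop every third pixel
    sub = flat[keep]
    # same mean-of-squares minus square-of-mean form as the block statistics
    return float(np.sqrt(np.mean(sub * sub) - np.mean(sub) ** 2))
```

Because the previous frame's value is reused during continuous video, this quantity only needs to be produced once per frame and adds no per-pixel latency.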
Part 5: The fifth part was to calculate the final output image, which included four main steps:
(1) Determining the brightness values of the original image under the different local block stretching factors; the 192 local blocks were calculated simultaneously to yield B1, B2, …, Bn. The calculation used Equation (6), and the delay was only one clock.
(2) Determining the weight values using the formula in Equation (2). The weights of the different local blocks were based on the distance of the current pixel from each local block, which could be obtained from the row and column counters. Since this step involves exponential calculations, it was optimized as follows
$$\exp\!\left(-\frac{d_{(i,j)(x,y)}^2}{\delta^2}\right) = \exp\!\left(-\frac{d_{(i)(x)}^2 + d_{(j)(y)}^2}{\delta^2}\right) = \exp\!\left(-\frac{d_{(i)(x)}^2}{\delta^2}\right) \times \exp\!\left(-\frac{d_{(j)(y)}^2}{\delta^2}\right) = e_x \times e_y$$
where $d_{(i,j)(x,y)}$ denotes the Euclidean distance from the pixel with coordinates $(i,j)$ to the center of the local block with coordinates $(x,y)$, $d_{(i)(x)}$ denotes the horizontal distance from the pixel in column $i$ to the block center in column $x$, and $d_{(j)(y)}$ denotes the vertical distance from the pixel in row $j$ to the block center in row $y$. Given $\delta^2$, when $d_{(i)(x)} \geq 256$ or $d_{(j)(y)} \geq 256$, $e_x$ ($e_x = \exp(-d_{(i)(x)}^2/\delta^2)$) and $e_y$ ($e_y = \exp(-d_{(j)(y)}^2/\delta^2)$) have almost decayed to zero, so they can be ignored. Therefore, $e_x$ and $e_y$ each take at most 256 values, at distances $d = 0, 1, \ldots, 255$. To avoid exponential calculations in the FPGA, these 256 values were precomputed and saved in Block RAM. During real-time computation, only a table lookup was needed to obtain $e_x$ and $e_y$, and a simple multiplication then yielded the weights. Reading the lookup table delays this step by only one clock but requires a Block RAM with a depth of 256 and a width of 16 bits.
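The lookup-table optimization can be sketched as follows ($\delta^2 = 25$ as in Part 1; the 16-bit quantization of the stored values is omitted for clarity):

```python
import numpy as np

DELTA2 = 25.0
# 256-entry table of exp(-d^2 / delta^2) for d = 0..255; on the FPGA this
# lives in a Block RAM with a depth of 256 (a width of 16 bits in hardware)
LUT = np.exp(-np.arange(256, dtype=np.float64) ** 2 / DELTA2)

def weight(dx, dy):
    """Separable Gaussian weight: one lookup per axis plus one multiply."""
    ex = LUT[min(abs(int(dx)), 255)]
    ey = LUT[min(abs(int(dy)), 255)]
    return ex * ey
```

For example, weight(3, 4) equals exp(−(3² + 4²)/δ²) = exp(−1), showing that the factorization into e_x × e_y is exact.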
(3) Calculating the final B by weighted averaging. The final B was obtained by weighting B1, B2, …, Bn. This step involved a divider and was delayed by 26 pixel clocks.
(4) The last step was to calculate the final image. The detailed layer image was adaptively enhanced and delayed before being added to the background layer image to obtain the final image. The total delay was 55 clocks, with the detailed layer requiring a delay of 53 clocks.
In total, Part 5 required two dividers, 0.0039 Mb of Block RAM, and 55 clocks of image delay.
Overall, the entire process used 2.2554 Mb of Block RAM, five dividers, a root calculator, and some other auxiliary computational resources, with a total image delay of 0.018 s. The specific resource utilization of the FPGA is shown in Table 2.
The proposed algorithm can be widely used in infrared imaging modules for various military and civilian thermal imagers and scientific research equipment. The infrared imager used in this study was small, as shown in Figure 10a, and its display effects are shown in Figure 10b. It achieved good imaging results in different scenes. Figure 10c is a schematic diagram of the imaging module mounted on the optomechanical system.

4. Conclusions

In this study, an algorithm for enhancing the contrast and compressing the dynamic range of infrared images with local optimal contrast was proposed. The focus was to strengthen and improve the following aspects:
(1) Low-delay image processing technology. Most conventional algorithms for background-layer processing are Gamma correction algorithms or histogram-equalization-based algorithms. Gamma-based correction algorithms generally yield images with limited contrast, while histogram-based algorithms and their variants must accumulate statistics over one full frame before outputting the final image, which means a delay of at least one whole frame and possible phenomena such as overenhancement. In particular, for continuous video sequences, our experiments showed that when the histogram statistics of the previous frame are used as the current mapping curve, the brightness of the continuous images flickers because of the mismatch between the real image and the mapping curve. A low-delay (less than one frame) image processing algorithm therefore hinges on how the background layer is compressed, so this study proposed a local contrast enhancement framework based on guided global compression of the image.
(2) Minimal storage resources. Block RAM is one of the most crucial resources for FPGA-based image processing algorithms: an algorithm whose delay exceeds one frame must store more than one frame of image data. The block-based local optimal contrast stretching algorithm proposed in this study uses fewer storage resources, needing only two parameters per local block.
(3) A pipelined algorithm structure and efficient FPGA implementation. A pipelined structure is conducive to FPGA implementation; however, exponentiation, division, and other such calculations consume a significant amount of FPGA resources. Therefore, this study designed the entire algorithm as a pipeline and optimized these calculations to allow the FPGA to run efficiently.
(4) Consideration of both global and local contrast. Algorithms based on local histograms are prone to block effects, while global algorithms have poor local contrast. This study therefore improved the local contrast while keeping the global contrast close to that of the original image; specifically, the compressed global background layer is used as the guide image to achieve optimal enhancement of the local contrast.
In general, to effectively suppress local noise and the halo effect while maintaining both global and local contrast, the proposed algorithm combines local information with global information and uses the global contrast-compressed map as the guide image. This study used infrared images with a resolution of 1024 × 768 as an example for the optimization and implementation of the FPGA-based algorithm; the entire algorithm used 2.2554 Mb of Block RAM, five dividers, one root calculator, and some other auxiliary computing resources, with a total image delay of 0.018 s. Finally, the image processing results for different scenes showed that the proposed algorithm performed well in rich scenes, scenes with small targets, and indoor scenes. At the same time, the algorithm has low complexity and low delay, which makes it suitable for applications with strict real-time requirements.

Author Contributions

Conceptualization, Y.Z. (Youpan Zhu), W.J. and Y.Z. (Yongkang Zhou); methodology, Y.Z. (Youpan Zhu), Y.Z. (Yongkang Zhou) and W.J.; software, Y.Z. (Youpan Zhu), Y.Z. (Yongkang Zhou), L.Z. and G.W.; validation, Y.Z. (Youpan Zhu), Y.Z. (Yongkang Zhou), L.Z. and Y.S.; formal analysis, G.W. and Y.S.; investigation, Y.Z. (Youpan Zhu), W.J. and G.W.; resources, W.J.; data curation, Y.Z. (Youpan Zhu) and Y.Z. (Yongkang Zhou); writing—original draft preparation, Y.Z. (Youpan Zhu), Y.Z. (Yongkang Zhou), W.J., L.Z., G.W. and Y.S.; writing—review and editing, Y.Z. (Youpan Zhu), Y.Z. (Yongkang Zhou), W.J., L.Z., G.W. and Y.S.; visualization, G.W. and Y.S.; supervision, W.J. and Y.S.; project administration, W.J.; funding acquisition, W.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Defense Science and Technology Foundation Strengthening Plan (grant number 2021-JCJQ-JJ-1020).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We thank the National Defense Science and Technology Foundation Strengthening Plan for help in identifying collaborators for this work.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

ADC: Analog-to-digital converter
IR: Infrared radiation
HDR: High dynamic range
PHE: Plateau histogram equalization
CLAHE: Contrast-limited adaptive histogram equalization
GDHDRC: Gradient domain-based high dynamic range image compression
FPGA: Field-programmable gate array
CPU: Central processing unit
DRCE: Dynamic range compression and enhancement algorithm
LOC: Local optimal contrast
BF-DRP: Bilateral filter and dynamic range partitioning
LPF: Low-pass filter
GF: Guided filter
LEPF: Local edge-preserving filter
DDE: Digital detail enhancement
pix: Pixel
AGC: Adaptive gain control

References

1. Jamrozik, W.; Górka, J.; Batalha, G.F. Dynamic Range Compression of Thermograms for Assessment of Welded Joint Face Quality. Sensors 2023, 23, 1995.
2. Lv, Z.; Li, J.; Li, X.; Wang, H.; Wang, P.; Li, L.; Shu, L.; Li, X. Two adaptive enhancement algorithms for high gray-scale RAW infrared images based on multi-scale fusion and chromatographic remapping. Infrared Phys. Technol. 2023, 133, 104774.
3. Lang, Y.Z.; Qian, Y.S.; Wang, H.G.; Kong, X.Y.; Wu, S. A real-time high dynamic range intensified complementary metal oxide semiconductor camera based on FPGA. Opt. Quantum Electron. 2022, 54, 304.
4. Vickers, V.E. Plateau equalization algorithm for real-time display of high-quality infrared imagery. Opt. Eng. 1996, 35, 1921–1926.
5. Pizer, S.M.; Amburn, E.P.; Austin, J.D.; Cromartie, R.; Geselowitz, A.; Greer, T.; ter Haar Romeny, B.; Zimmerman, J.B.; Zuiderveld, K. Adaptive histogram equalization and its variations. Comput. Vis. Graph. Image Process. 1987, 39, 355–368.
6. Zuiderveld, K. Contrast limited adaptive histogram equalization. In Graphics Gems, 4th ed.; Heckbert, P., Ed.; Elsevier Inc.: Amsterdam, The Netherlands, 1994; pp. 474–485.
7. Schatz, V. Low-latency histogram equalization for infrared image sequences: A hardware implementation. J. Real-Time Image Process. 2013, 8, 193–206.
8. Ashiba, H.I.; Mansour, H.M.; Ahmed, H.M.; Dessouky, M.I.; El-Kordy, M.F.; Zahran, O.; Abd El-Samie, F.E. Enhancement of IR images using histogram processing and the undecimated additive wavelet transform. Multimed. Tools Appl. 2019, 78, 11277–11290.
9. Fattal, R.; Lischinski, D.; Werman, M. Gradient domain high dynamic range compression. In Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, San Antonio, TX, USA, 12 July 2002.
10. Kim, J.H.; Kim, J.H.; Jung, S.W.; Noh, C.K.; Ko, S.J. Novel contrast enhancement scheme for infrared image using detail-preserving stretching. Opt. Eng. 2011, 50, 077002.
11. Zhang, F.; Xie, W.; Ma, G.; Qin, Q. High dynamic range compression and detail enhancement of infrared images in the gradient domain. Infrared Phys. Technol. 2014, 67, 441–454.
12. Wang, Z.; Sun, S.; Li, Y.; Yue, Z.; Ding, Y. Distributed Compressive Sensing for Wireless Signal Transmission in Structural Health Monitoring: An Adaptive Hierarchical Bayesian Model-Based Approach. Sensors 2023, 23, 5661.
13. Zhu, J.; Jin, W.; Li, L.; Han, Z.; Wang, X. Multiscale infrared and visible image fusion using gradient domain guided image filtering. Infrared Phys. Technol. 2018, 89, 8–19.
14. Branchitta, F.; Diani, M.; Corsini, G.; Romagnoli, M. New technique for the visualization of high dynamic range infrared images. Opt. Eng. 2009, 48, 096401.
15. Durand, F.; Dorsey, J. Fast bilateral filtering for the display of high-dynamic-range images. In Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, San Antonio, TX, USA, 12 July 2002.
16. Bae, S.; Paris, S.; Durand, F. Two-scale tone management for photographic look. ACM Trans. Graph. 2006, 25, 637–645.
17. Zuo, C.; Chen, Q.; Liu, N.; Ren, J.; Sui, X. Display and detail enhancement for high-dynamic-range infrared images. Opt. Eng. 2011, 50, 127401.
18. Liu, N.; Chen, X. Infrared image detail enhancement approach based on improved joint bilateral filter. Infrared Phys. Technol. 2016, 77, 405–413.
19. Anderson, G.L.; Netravali, A.N. Image restoration based on a subjective criterion. IEEE Trans. Syst. Man Cybern. 1976, 6, 845–853.
20. Katsaggelos, A.K.; Biemond, J.; Schafer, R.W.; Mersereau, R.M. A regularized iterative image restoration algorithm. IEEE Trans. Signal Process. 1991, 39, 914–929.
21. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 1397–1409.
22. Liu, N.; Zhao, D. Detail enhancement for high-dynamic-range infrared images based on guided image filter. Infrared Phys. Technol. 2014, 67, 138–147.
23. Song, Q.; Wang, Y.; Bai, K. High dynamic range infrared images detail enhancement based on local edge preserving filter. Infrared Phys. Technol. 2016, 77, 464–473.
24. Farbman, Z.; Fattal, R.; Lischinski, D.; Szeliski, R. Edge-preserving decompositions for multi-scale tone and detail manipulation. ACM Trans. Graph. 2008, 27, 1–10.
25. Gu, B.; Li, W.; Zhu, M.; Wang, M. Local edge-preserving multiscale decomposition for high dynamic range image tone mapping. IEEE Trans. Image Process. 2012, 22, 70–79.
26. Kwan, C.; Chou, B.; Yang, J.; Rangamani, A.; Tran, T.; Zhang, J.; Etienne-Cummings, R. Deep Learning-Based Target Tracking and Classification for Low Quality Videos Using Coded Aperture Cameras. Sensors 2019, 19, 3702.
27. Shao, Y.; Xu, F.; Chen, J.; Lu, J.; Du, S. Engineering surface topography analysis using an extended discrete modal decomposition. J. Manuf. Process. 2023, 90, 367–390.
28. Shao, Y.; Du, S.; Tang, H. An extended bi-dimensional empirical wavelet transform based filtering approach for engineering surface separation using high definition metrology. Measurement 2021, 178, 109259.
29. Peli, E. Contrast in complex images. JOSA A 1990, 7, 2032–2040.
30. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212.
31. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
32. Tenenbaum, J.M. Accommodation in Computer Vision; Stanford University: Stanford, CA, USA, 1971.
33. Pratt, W.K. Digital Image Processing, 3rd ed.; John Wiley & Sons, Inc.: New York, NY, USA, 2001.
Figure 1. Framework of the dynamic range compression algorithm based on image layering. LPF, low-pass filter; I_base, the background layer image; I_detail, the detailed layer image; p, parameter.
Figure 2. Framework of the DRCE-LOC algorithm.
Figure 3. Schematic of the image upsampling process.
Figure 4. Effects of local noise suppression. (a) AGC; (b) I g c = 128, bias = 0; (c) I g c = 128, bias = 128; (d) I g c = 128, bias = 256; (e) I g c = 128, bias = 384; (f) I g c = 128, bias = 512.
Figure 5. The halo suppression effect under different values of k 2 . (a) k2 = 1.5; (b) k2 = 1; (c) k2 = 0.8; (d) k2 = 0.5; (e) k2 = 0.2; (f) k2 = 0.1.
Figure 6. Comparison of the effects for the rich scene (Scene 1). (a) AGC; (b) HE; (c) CLAHE; (d) GF-DDE; (e) BF-DRP; (f) proposed method.
Figure 7. Comparison of the effects for the scene with a small target (Scene 2). (a) AGC; (b) HE; (c) CLAHE; (d) GF-DDE; (e) BF-DRP; (f) proposed method.
Figure 8. Comparison of the effects for the indoor scene (Scene 3). (a) AGC; (b) HE; (c) CLAHE; (d) GF-DDE; (e) BF-DRP; (f) proposed method.
Figure 9. Diagram of FPGA-based implementation of the algorithm.
Figure 10. Infrared imager and intended display effects. (a) The infrared imager; (b) The different display effects in different scenes; (c) experimental environment.
Table 1. Evaluation results of the image quality of different algorithms.

| Scene | Index | AGC | HE | CLAHE | GF-DDE | BF-DRP | Proposed Method |
|---|---|---|---|---|---|---|---|
| Scene 1 | RMS | 31 | 49 | 42 | 51 | 50 | 48 |
| | Entropy | 6.989 | 7.991 | 7.799 | 7.711 | 7.929 | 7.744 |
| | SSIM | 1 | 0.8061 | 0.561 | 0.6092 | 0.6980 | 0.6168 |
| | Tenengrad | 3.257 | 7.581 | 19.642 | 14.813 | 11.818 | 128.561 |
| | Time (s) | 0.0020 | 0.6856 | 4.6780 | 0.9384 | 24.8654 | 0.3168 |
| Scene 2 | RMS | 1 | 31 | 16 | 22 | 32 | 33 |
| | Entropy | 2.972 | 7.990 | 7.322 | 7.609 | 7.888 | 7.393 |
| | SSIM | 1 | 0.4653 | 0.344 | 0.408 | 0.386 | 0.454 |
| | Tenengrad | 0.283 | 15.371 | 29.906 | 23.515 | 18.754 | 43.940 |
| | Time (s) | 0.0024 | 0.6670 | 4.6160 | 0.9418 | 25.3805 | 0.3651 |
| Scene 3 | RMS | 3 | 31 | 8 | 28 | 32 | 17 |
| | Entropy | 4.587 | 7.953 | 6.278 | 7.142 | 7.924 | 7.378 |
| | SSIM | 1 | 0.556 | 0.774 | 0.483 | 0.656 | 0.547 |
| | Tenengrad | 1.411 | 7.625 | 6.042 | 7.878 | 5.489 | 9.661 |
| | Time (s) | 0.0018 | 0.6843 | 4.6874 | 0.9400 | 23.3256 | 0.3358 |
The results of running the software on a computer and its implementation on the FPGA were consistent.
Table 2. Utilization report.

| Resource | Used | Utilization |
|---|---|---|
| Slice LUTs (total: 134,600) | 39,045 | 29% |
| Slice registers (total: 269,200) | 38,919 | 14.46% |
| Block RAM (total: 365) | 105 | 28.77% |
| DSPs (total: 740) | 319 | 43.1% |

Share and Cite

MDPI and ACS Style

Zhu, Y.; Zhou, Y.; Jin, W.; Zhang, L.; Wu, G.; Shao, Y. A Low-Delay Dynamic Range Compression and Contrast Enhancement Algorithm Based on an Uncooled Infrared Sensor with Local Optimal Contrast. Sensors 2023, 23, 8860. https://doi.org/10.3390/s23218860

