Article

Polarimetric Dehazing Method Based on Image Fusion and Adaptive Adjustment Algorithm

College of Advanced Interdisciplinary Studies, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(21), 10040; https://doi.org/10.3390/app112110040
Submission received: 30 August 2021 / Revised: 19 October 2021 / Accepted: 21 October 2021 / Published: 27 October 2021
(This article belongs to the Section Optics and Lasers)

Abstract

To improve the robustness of current polarimetric dehazing schemes under conditions of low degree of polarization, we report a polarimetric dehazing method based on an image fusion technique and an adaptive adjustment algorithm that operates well under many different conditions. A division-of-focal-plane linear polarization camera was employed to capture images in four polarization directions, and the haze was roughly separated from the hazy images by low-pass filtering. An image fusion technique was then used to improve the estimation of the transmittance map. To improve the quality of the dehazed images, an adaptive adjustment algorithm was introduced to adjust their illumination distribution. Outdoor experiments were carried out, and the results indicate that the presented method clearly restores target information and enhances both the visual effect and the quantitative evaluation.

1. Introduction

Image dehazing has long been a question of great interest in target detection and automatic recognition. Many methods have been proposed to improve the quality of hazy images, and they can generally be divided into two classes. The first class consists of methods based on image processing, such as contrast-limited adaptive histogram equalization [1], the wavelet transform [2,3], Retinex [4,5] and homomorphic filtering [6,7]. These methods do not consider the physical principle of the image degradation caused by haze; instead, target identification is improved through image enhancement techniques. The second class consists of methods based on the physical model of atmospheric scattering, such as the scene depth method [8], the dark channel prior method [9], the non-local dehazing algorithm [10], neural networks [11], the refined transmission map [12] and polarimetric dehazing methods [13,14,15,16,17,18,19,20,21]. These methods improve image quality by establishing a physical model of the degradation process and recovering the dehazed images from that model. Previous studies have shown that polarimetric dehazing methods are efficient in many different scattering environments. The basic principle of these methods is to collect the polarimetric information of the scene first and then estimate the key parameters of the dehazing model from that information. Schechner et al. [13,14] studied the polarimetric dehazing method in 2001; they obtained the polarimetric information by capturing two images at orthogonal polarization orientations, estimated the parameters of the dehazing model and removed the haze. Mudge et al. [15] reported a dehazing method based on an infrared camera that could dehaze in real time. Liang et al. [16,17] added the AOP (angle of polarization) to the dehazing method, which improved the accuracy of the parameter estimation. Hu et al. [18,19] employed the polarimetric method in the underwater environment and mitigated the influence of the underwater scattering medium. Shen et al. [20] optimized the scattering model and proposed an iterative polarimetric image dehazing method. Wang et al. [22] designed a real-time optical sensing and detection system composed of four polarizers with different polarization directions integrated into independent cameras aligned parallel to the optical axis, and they adopted an improved image enhancement algorithm using CLAHE-based bilinear interpolation to generate real-time high-contrast and high-definition images. Zhang et al. [21] presented an end-to-end method combining polarimetric dehazing with lane detection, which provides a valuable reference for driving safety in dense fog. You et al. [23] proposed a polarization image dehazing enhancement algorithm: they obtained the polarimetric information from the polarization images, automatically extracted the sky region with a region-growing algorithm, and then estimated the key parameters using the dark channel prior principle. Liang et al. [24] optimized the AOP with regularization constraints and then estimated all the key parameters automatically without relying on the sky region.
However, the DOP (degree of polarization) of the background scattering light is very low in some hazy weather; in heavy haze in particular, the DOP can be less than 0.01. Most of the polarimetric information is then buried in noise and is very difficult to recover, and the resulting inaccurate information leads to inaccurate estimation of the key parameters of the dehazing model, so conventional polarimetric dehazing methods cannot operate effectively.
To overcome the disadvantages of the conventional methods and improve the quality of the dehazed images, a polarimetric dehazing method based on image fusion and an adaptive adjustment algorithm is proposed. First, images in four polarization directions are captured by a division-of-focal-plane linear polarization camera (FLIR, BFS-U3-51S5P-C), and a low-pass filtering operation is used to roughly separate the haze from these images and thereby reduce the adverse influence of noise. Second, the key parameters of the scattering model are estimated from the filtered images, and a new transmittance map is obtained by an image fusion technique, which enhances the visual range of the dehazed images. Third, the dehazed images are obtained by substituting these parameters into the scattering model. Finally, to improve the visual effect of the dehazed images, the adaptive adjustment algorithm is applied to adjust their illumination distribution. Several hazy images were captured and dehazed by the conventional method and by our scheme, respectively, and three evaluation indicators were applied to assess the dehazed images quantitatively. The experimental results indicate that this method solves the problem that the conventional method cannot operate effectively when the DOP of the background scattering light is very low, and that the adverse influence of the haze is reduced. Both the visual effect and the quantitative evaluation are ultimately enhanced.

2. Theoretical Model

2.1. The Physical Model of Atmospheric Scattering

According to the atmospheric scattering model proposed by McCartney et al. [25], the processes of scattering and polarimetric imaging are shown in Figure 1, and the irradiance I(i,j) received by the camera can be divided into two parts: the direct light D(i,j) and the scattered light (called the airlight) A(i,j). The irradiance can be written as:
I(i,j) = D(i,j) + A(i,j)   (1)
Furthermore, D(i,j) and A(i,j) can be expressed, respectively, as:
D(i,j) = L(i,j)\, t(i,j)   (2)
A(i,j) = A_\infty [1 - t(i,j)]   (3)
where L(i,j) is the target irradiance without scattering, A_∞ is the airlight at infinity, and t(i,j) is the transmittance map of the atmosphere, which is related to the distance between the targets and the camera:
t(i,j) = \exp[-\beta z(i,j)]   (4)
where β is the extinction coefficient of the scattering medium. Based on Equations (1)–(3), the target irradiance L(i,j) and the atmospheric transmittance map t(i,j) can be written, respectively, as:
L(i,j) = \frac{I(i,j) - A(i,j)}{t(i,j)}   (5)
t(i,j) = 1 - \frac{A(i,j)}{A_\infty}   (6)
As shown in Equations (5) and (6), the dehazed images can be obtained once A(i,j), A_∞ and t(i,j) have been estimated.
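As an illustration of Equations (5) and (6), a minimal Python/NumPy sketch is given below; the function name, the lower clamp t_min and the assumption that A(i,j) and A_∞ are already known are illustrative choices, not part of the paper.

```python
import numpy as np

def dehaze_scattering_model(I, A, A_inf, t_min=0.1):
    """Recover the scene radiance from Eqs. (5)-(6), given the airlight A(i, j)
    and the airlight at infinity A_inf (their estimation is described in
    Section 2.2). The lower clamp t_min is an added safeguard against division
    by near-zero transmittance, not part of the paper."""
    t = 1.0 - A / A_inf          # Eq. (6): transmittance map
    t = np.clip(t, t_min, 1.0)
    L = (I - A) / t              # Eq. (5): dehazed radiance
    return L, t
```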

2.2. Polarimetric Imaging Dehazing Method

2.2.1. Airlight A(i,j) and Airlight at Infinity A_∞

In a hazy background, A(i,j) is partially polarized light while L(i,j) is non-polarized light [13], so the airlight can be obtained by the polarimetric imaging method. To estimate A(i,j), four polarization images in the directions of 0°, 45°, 90° and 135° are captured by a linear polarization camera. Nevertheless, the polarimetric information of the images cannot be obtained accurately because of noise. In the frequency domain, the low-frequency component mainly comes from the scattering light (the airlight), while the high-frequency component is mainly contributed by noise and target texture [17], so they can be roughly separated by low-pass filtering:
I_0^F = \mathcal{F}^{-1}[\mathcal{F}(I_0) \times R(r)]
I_{45}^F = \mathcal{F}^{-1}[\mathcal{F}(I_{45}) \times R(r)]
I_{90}^F = \mathcal{F}^{-1}[\mathcal{F}(I_{90}) \times R(r)]
I_{135}^F = \mathcal{F}^{-1}[\mathcal{F}(I_{135}) \times R(r)]   (7)
where F(·) and F^{-1}(·) denote the Fourier transform and the inverse Fourier transform, respectively, and R(r) is the low-pass filter whose passband radius is r. The filtered Stokes parameters S_0^F(i,j), S_1^F(i,j) and S_2^F(i,j) are then given by:
S_0^F(i,j) = I_0^F(i,j) + I_{90}^F(i,j)
S_1^F(i,j) = I_0^F(i,j) - I_{90}^F(i,j)
S_2^F(i,j) = I_{45}^F(i,j) - I_{135}^F(i,j)   (8)
After obtaining the Stokes parameters, the DOP and AOP of each pixel can be calculated by:
p(i,j) = \frac{\sqrt{[S_1^F(i,j)]^2 + [S_2^F(i,j)]^2}}{S_0^F(i,j)}   (9)
\theta(i,j) = \frac{1}{2} \tan^{-1} \frac{S_2^F(i,j)}{S_1^F(i,j)}   (10)
Only haze is present in the sky region, so the DOP and AOP of the scattering light (denoted p_A and θ_A, respectively) can be estimated as the mean values of p(i,j) and θ(i,j) over the sky region. The method for locating the sky region is described in [13].
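A minimal Python/NumPy sketch of Equations (7)–(10) is given below; the ideal circular pass-band, the radius r = 30 and the boolean sky_mask (the sky region itself is located with the method of Ref. [13]) are assumptions made for illustration.

```python
import numpy as np

def low_pass(img, r):
    """Ideal circular low-pass filter R(r) applied in the frequency domain (Eq. 7)."""
    F = np.fft.fftshift(np.fft.fft2(img))
    rows, cols = img.shape
    y, x = np.ogrid[:rows, :cols]
    mask = (y - rows / 2) ** 2 + (x - cols / 2) ** 2 <= r ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

def sky_polarization(I0, I45, I90, I135, sky_mask, r=30):
    """Filtered Stokes parameters (Eq. 8), per-pixel DOP and AOP (Eqs. 9-10),
    and their mean values over the sky region (p_A, theta_A)."""
    I0F, I45F, I90F, I135F = (low_pass(I, r) for I in (I0, I45, I90, I135))
    S0F = I0F + I90F
    S1F = I0F - I90F
    S2F = I45F - I135F
    p = np.sqrt(S1F**2 + S2F**2) / (S0F + 1e-12)   # Eq. (9), small epsilon for safety
    theta = 0.5 * np.arctan2(S2F, S1F)             # Eq. (10)
    return S0F, I0F, p[sky_mask].mean(), theta[sky_mask].mean()
```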
Denoting by A_p the polarized part of the airlight, its components in the 0° and 90° directions can be written as A_px = A_p cos²θ_A and A_py = A_p sin²θ_A, respectively. Then, A_p can be expressed as [16]:
A_p(i,j) = \frac{I_0^F(i,j) - S_0^F(i,j)(1 - p_A)/2}{\cos^2\theta_A} = \frac{I_{90}^F(i,j) - S_0^F(i,j)(1 - p_A)/2}{\sin^2\theta_A}   (11)
The airlight A(i,j) can then be estimated by Equation (12):
A(i,j) = \frac{A_p(i,j)}{p_A}   (12)
A_∞ is a scene-independent parameter [20], which can be obtained by calculating the average irradiance of the sky region.
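The estimation of A(i,j) and A_∞ in Equations (11) and (12) can be sketched as follows; the 0° branch of Equation (11) is used, and taking A_∞ as the mean filtered irradiance over the sky region is an illustrative reading of the text above.

```python
import numpy as np

def estimate_airlight(I0F, S0F, p_A, theta_A, sky_mask):
    """Polarized airlight A_p from the 0-degree branch of Eq. (11), the airlight
    A(i, j) from Eq. (12), and A_inf as the mean sky irradiance."""
    A_p = (I0F - S0F * (1.0 - p_A) / 2.0) / (np.cos(theta_A) ** 2 + 1e-12)
    A = A_p / p_A                              # Eq. (12)
    A_inf = S0F[sky_mask].mean()               # scene-independent airlight at infinity
    return A, A_inf
```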

2.2.2. The Atmospheric Transmittance Map t(i,j)

As shown in Equation (4), the atmospheric transmittance map is related to the distance between the targets and the camera, and in the conventional dehazing method it is obtained from Equation (6). However, A(i,j) and t(i,j) are estimated from the filtered images, which contain no target texture or distance information. To solve this problem, a method based on image fusion is presented in this paper.
In a hazy background, sunlight irradiates the targets homogeneously after being scattered by the hazy medium, and the camera receives irradiance attenuated by the haze. As shown in Equations (1)–(4), the longer the distance, the larger A(i,j) and the smaller D(i,j). The scattering light is always white in hazy weather, and the irradiance I(i,j) received by the camera consists of A(i,j) and D(i,j) according to the scattering model, so the pixel value I(i,j) increases with the target distance, which means that I(i,j) can also reflect the distance relationship between the targets and the camera. To fuse conveniently with t(i,j), a similar equation is employed to obtain an image that contains this distance relationship:
q(i,j) = 1 - \frac{I(i,j)}{\max[I(i,j)]}   (13)
As shown in Figure 2, the new transmittance map can be constructed by combining q(i,j) and t(i,j):
t_1(i,j) = [t(i,j) + q(i,j)]/2   (14)
Replacing t(i,j) with t_1(i,j) in Equation (5), the dehazed image can be obtained as:
L(i,j) = \frac{I(i,j) - A(i,j)}{t_1(i,j)}   (15)
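A compact sketch of Equations (13)–(15) is given below; the clamp t_min is an added safeguard and not part of the paper.

```python
import numpy as np

def fused_transmittance_dehaze(I, A, A_inf, t_min=0.1):
    """Fuse the brightness-based distance cue q (Eq. 13) with the transmittance
    of Eq. (6) to build t1 (Eq. 14), then recover the dehazed image (Eq. 15)."""
    t = 1.0 - A / A_inf                        # Eq. (6)
    q = 1.0 - I / I.max()                      # Eq. (13): brighter pixels ~ more distant targets
    t1 = np.clip((t + q) / 2.0, t_min, 1.0)    # Eq. (14)
    L = (I - A) / t1                           # Eq. (15)
    return L, t1
```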

2.2.3. The Dehazed Images L(i,j)

It has been found that the pixel values of short-distance targets are large in t_1(i,j), so short-distance targets appear dark in L(i,j) according to Equation (15). To solve this problem, an adaptive adjustment algorithm for non-uniform illumination images [26], based on the 2D Gamma function, is used to adjust the illumination distribution of L(i,j).
Generally speaking, the low-frequency component also describes the illumination distribution of an image, and a more homogeneous illumination distribution means a better visual effect. To improve the quality of the dehazed images, their illumination distribution therefore needs to be adjusted. The illumination component can be obtained by low-pass filtering:
G(i,j) = \mathcal{F}^{-1}[\mathcal{F}(L(i,j)) \times R(r)]   (16)
where G(i,j) is the illumination component image.
Then, the illumination distribution of the image is adjusted adaptively with the 2D Gamma function:
L_1(i,j) = 255 \left( \frac{L(i,j)}{255} \right)^{\gamma}, \quad \gamma = \left( \frac{1}{2} \right)^{\frac{G(i,j) - m}{m}}   (17)
where L_1(i,j) is the adjusted image, γ is the exponent used to adjust the brightness, and m is the average value of G(i,j). As shown in Equation (17) and Figure 3, when the value of G(i,j) is larger than m, γ is less than 1 and the value of L(i,j) is decreased, and vice versa.
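A short sketch of Equations (16) and (17) is given below, assuming L(i,j) lies in the 8-bit range [0, 255] and reusing the low-pass routine sketched in Section 2.2.1 (the radius r is again an assumed value).

```python
import numpy as np

def adaptive_gamma_adjust(L, low_pass, r=30):
    """Adjust the illumination distribution of the dehazed image with the
    2D Gamma scheme of Eqs. (16)-(17), as written above."""
    G = low_pass(L, r)                 # Eq. (16): illumination component
    m = G.mean()
    gamma = 0.5 ** ((G - m) / m)       # Eq. (17): per-pixel exponent
    return 255.0 * (np.clip(L, 0.0, 255.0) / 255.0) ** gamma
```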
To show the details of the proposed method clearly, the corresponding flowchart is presented in Figure 4.

3. Experimental Results and Discussion

In hazy weather, a polarization camera was used to capture several hazy images outdoors. These images were acquired at different distances and in different hazy backgrounds, and they were dehazed by Schechner's method [13] and by our method, respectively. A division-of-focal-plane linear polarization camera (FLIR, BFS-U3-51S5P-C) was employed, which can capture four polarization images in the directions of 0°, 45°, 90° and 135° simultaneously. The software was developed and run on an office computer with the following configuration: an Intel(R) Core(TM) i7-10700F CPU, an NVIDIA GeForce GTX 1660 SUPER graphics card and 16 GB of memory. Processing a hazy image takes about 10 to 11 s.
To evaluate the dehazed images quantitatively, three indicators are introduced in this section. Image contrast (C) reflects the contrast among the gray levels; image entropy (H) describes the amount of information in an image; and the average gradient (G), the mean value of the gradient image, describes the variation of textures. These three indicators [27,28] are used to evaluate the dehazed images comprehensively.
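Since the formulas for these indicators are not spelled out here, the following Python/NumPy sketch shows common textbook definitions (contrast as the standard deviation of the normalized gray levels, Shannon entropy of the gray-level histogram, and the mean gradient magnitude); the exact definitions used in the paper follow Refs. [27,28] and may differ in detail.

```python
import numpy as np

def contrast_entropy_gradient(img):
    """Illustrative C, H and G indicators for an 8-bit grayscale image."""
    g = np.asarray(img, dtype=np.float64)
    C = (g / 255.0).std()                                 # contrast
    hist, _ = np.histogram(g, bins=256, range=(0, 255))
    p = hist / hist.sum()
    H = -np.sum(p[p > 0] * np.log2(p[p > 0]))             # entropy (bits)
    gx, gy = np.gradient(g)
    G = np.mean(np.sqrt((gx**2 + gy**2) / 2.0))           # average gradient
    return C, H, G
```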
Figure 5a,d,g are hazy images captured in a light hazy background, for which p_A is less than 0.02. As shown in Figure 5, the visual effect of the original images is poor, and far-distance targets are blurred owing to the adverse influence of the haze. After dehazing by Schechner's method, the improvement in visual effect is limited, and the details obscured by the haze are not restored well. The main reason is that p_A is very low and the polarimetric information is easily buried in noise, which makes it difficult to estimate A(i,j) and t(i,j) accurately. After dehazing by our method, the visual effect of the images becomes better, and the improvements are especially obvious for far-distance targets.
The regions in the red boxes of Figure 5d–i are enlarged and shown in Figure 6. As shown in Figure 6c,f,i, the details of the dehazed images become richer and the textures are clearer. Some buildings obscured by haze in the original images become visible after dehazing. The outlines of far-distance buildings are clearer in Figure 6c,f than in Figure 6a,d. Moreover, a building in the middle part of Figure 6g is obscured by haze, but it can be recognized in Figure 6i after our dehazing operation.
The gray level histogram shows the statistics of the gray level distribution of an image and can illustrate the improvement of the dehazed images intuitively. The gray level histograms of Figure 5a–c are shown in Figure 7. For the original image, most pixel values are distributed in the range 100 to 200 (Figure 7a). After dehazing by Schechner's method, the histogram distribution is wider, but many pixel values are still higher than 150 (Figure 7b), which means that the disturbance of the haze is not eliminated completely. After dehazing by our method (Figure 7c), the pixel values are distributed mainly in the middle of the histogram, quite unlike the original image.
The quantitative results of the dehazed images in the light hazy background are shown in Table 1. For Schechner's method, the greatest improvements in C, H and G are 9%, 9% and 125%, respectively; for our method, they are 26%, 11% and 190%, respectively. The comparison indicates that our method achieves better enhancement in the quantitative evaluation than the conventional method. However, the improvements in C and H are limited in the light hazy background, for two main reasons: first, the adjustment algorithm adjusts the illumination distribution adaptively according to the average value of the illumination component, which is close to the average value of the dehazed image, so many pixel values of the dehazed image end up close to it, limiting the enhancement of image contrast; second, most pixels in these images belong to the sky region, and their values change little after dehazing, which reduces the contrast and entropy of the dehazed image.
To test the dehazing capability of our method in different hazy environments, several images were captured and dehazed in heavy hazy backgrounds, and the experimental results are shown in Figure 8. Figure 8a,d are the original images, for which p_A is less than 0.01. In Figure 8a, the Chinese characters in the red box are obscured by the haze; they can be recognized only roughly in Figure 8b, but after the dehazing operation by our method they can be recognized without difficulty. Similar dehazing results are shown in Figure 8d–f.
The gray level histograms of Figure 8a–c are shown in Figure 9. The histogram of the original image is narrow because of the adverse influence of haze, and Schechner's method broadens it only slightly. After dehazing by our method, the histogram becomes much wider, with gray values distributed mainly in the range 0 to 250.
The quantitative results of the dehazed images in the heavy hazy background are shown in Table 2. For Schechner's method, the greatest improvements in C, H and G are 119%, 23% and 73%, respectively; for our method, they are 362%, 42% and 439%, respectively. The data in Table 2 indicate that our method outperforms the conventional method on the quantitative evaluation indicators and verify that our method can operate effectively in a heavy hazy background.

4. Conclusions

To solve the problem that many polarimetric dehazing methods cannot operate effectively when the DOP of the background is very low, a new dehazing method based on image fusion and an adaptive adjustment algorithm is proposed and verified in this paper. The main idea of the scheme is that the haze and the targets can be roughly separated by low-pass filtering, which helps to obtain the polarization information accurately. A new atmospheric transmittance map containing the distance relationship between the targets and the camera is obtained by the image fusion technique, and this operation evidently enhances the visual range of the images. Meanwhile, an adaptive adjustment algorithm is applied to adjust the illumination distribution of the dehazed images, which further enhances their quality. Compared with other polarimetric dehazing methods, the outdoor experimental results indicate that our method achieves better improvement in both visual effect and quantitative evaluation when the DOP is very low, and it still operates very well even when the DOP is less than 0.01, which means that the proposed method is effective, generalizable and robust in many different hazy backgrounds. This paper can provide a reference for the optimization of dehazing methods.
However, there is an obvious disadvantage that cannot be ignored in the current implementation: the algorithm is very time-consuming and takes about 10–11 s to finish a typical dehazing operation. Analysis shows that most of the time is spent in the low-pass filtering and adaptive adjustment processes, and the running time can be shortened by optimizing these steps. It can also be expected that the polarimetric dehazing method will be improved in a future version to achieve better dehazing effectiveness in a shorter time.

Author Contributions

Conceptualization, B.L.; methodology, B.L. and Y.L.; validation, B.L., Y.L., C.G., Y.C. and F.W.; investigation, Y.L.; resources, B.L. and Y.L.; data curation, Y.L.; writing—original draft preparation, Y.L.; writing—review and editing, B.L., Y.L. and C.G.; visualization, Y.L.; supervision, B.L.; project administration, B.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (NSFC) (61975235) and the Natural Science Foundation of Hunan Province (2019JJ40342).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Written informed consent has been obtained from the patient(s) to publish this paper if applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Reza, A.M. Realization of the Contrast Limited Adaptive Histogram Equalization (CLAHE) for Real-Time Image Enhancement. J. VLSI Signal Process. Syst. 2004, 38, 35–44. [Google Scholar] [CrossRef]
  2. Wang, L.; Zhu, R. Image Defogging Algorithm of Single Color Image Based on Wavelet Transform and Histogram Equalization. Appl. Math. Sci. 2013, 7, 3913–3921. [Google Scholar]
  3. Khmag, A.; Al-Haddad, S.A.R.; Ramli, A.R.; Kalantar, B. Single Image Dehazing Using Second-generation Wavelet Transforms and The Mean Vector L2-norm. Vis. Comput. 2018, 34, 675–688. [Google Scholar] [CrossRef]
  4. Hu, X.; Gao, X.; Wang, H. A Novel Retinex Algorithm and Its Application to Fog-degraded Image Enhancement. Sens. Transducers 2014, 175, 138–143. [Google Scholar]
  5. Yang, W.; Wang, R.; Fang, S.; Zhang, X. Variable Filter Retinex Algorithm for Foggy Image Enhancement. J. Comput.-Aided Des. Comput. Graph. 2010, 22, 965–971. [Google Scholar] [CrossRef]
  6. Seow, M.-J.; Asari, V.K. Ratio Rule and Homomorphic Filter for Enhancement of Digital Colour Image. Neurocomputing 2006, 69, 954–958. [Google Scholar] [CrossRef]
  7. Xiao, L.; Li, C.; Wu, Z.; Wang, T. An Enhancement Method for X-ray Image Via Fuzzy Noise Removal and Homomorphic Filtering. Neurocomputing 2016, 195, 56–64. [Google Scholar] [CrossRef]
  8. Yang, Y.; Liu, C. Single Image Dehazing Using Elliptic Curve Scattering Model. Signal Image Video Process. 2021, 15, 1443–1451. [Google Scholar] [CrossRef]
  9. He, K.; Sun, J.; Tang, X. Single Image Haze Removal Using Dark Channel Prior. IEEE. Trans. Pattern Anal. 2011, 33, 2341–2353. [Google Scholar]
  10. Berman, D.; Treibitz, T.; Avidan, S. Non-Local Image Dehazing. In Proceedings of the 2016 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 1674–1682. [Google Scholar]
  11. Haouassi, S.; Wu, D. Image Dehazing Based on (CMTnet) Cascaded Multi-scale Convolutional Neural Networks and Efficient Light Estimation Algorithm. Appl. Sci. 2020, 10, 1190. [Google Scholar] [CrossRef] [Green Version]
  12. Musunuri, Y.R.; Kwon, O. Haze Removal Based on Refined Transmission Map for Aerial Image Matching. Appl. Sci. 2021, 11, 6917. [Google Scholar] [CrossRef]
  13. Schechner, Y.Y.; Narasimhan, S.G.; Nayar, S.K. Instant Dehazing of Images Using Polarization. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition(CVPR), Kauai, HI, USA, 8–14 May 2001; pp. 325–332. [Google Scholar]
  14. Schechner, Y.Y.; Narasimhan, S.G.; Nayar, S.K. Polarization-based Vision Through Haze. Appl. Opt. 2003, 42, 511–525. [Google Scholar] [CrossRef] [PubMed]
  15. Mudge, J.; Virgen, M. Real Time Polarimetric Dehazing. Appl. Opt. 2013, 52, 1932–1938. [Google Scholar] [CrossRef]
  16. Liang, J.; Ren, L.; Ju, H.; Zhang, W.; Qu, E. Polarimetric Dehazing Method for Dense Haze Removal Based on Distribution Analysis of Angle of Polarization. Opt. Express 2015, 23, 26146–26157. [Google Scholar] [CrossRef] [PubMed]
  17. Liang, J.; Ju, H.; Ren, L.; Yang, L.; Liang, R. Generalized Polarimetric Dehazing Method Based on low-pass Filtering in Frequency Domain. Sensors 2020, 20, 1729. [Google Scholar] [CrossRef] [Green Version]
  18. Hu, H.; Zhao, L.; Li, X.; Wang, H.; Yang, J.; Li, K.; Liu, T. Polarimetric Image Recovery in Turbid Media Employing Circularly Polarized Light. Opt. Express 2018, 26, 25047–25059. [Google Scholar] [CrossRef] [PubMed]
  19. Hu, H.; Qi, P.; Li, X.; Cheng, Z.; Liu, T. Underwater Imaging Enhancement Based on A Polarization Filter and Histogram Attenuation Prior. J. Phys. D Appl. Phys. 2021, 54, 175102–175111. [Google Scholar] [CrossRef]
  20. Shen, L.; Zhao, Y.; Peng, Q.; Chan, J.C.; Kong, S.G. An Iterative Image Dehazing Method with Polarization. IEEE. Trans. Multimed. 2019, 21, 1093–1107. [Google Scholar] [CrossRef]
  21. Zhang, L.; Yin, Z.; Zhao, K.; Tian, H. Lane detection in dense fog using a polarimetric dehazing method. Appl. Opt. 2020, 59, 5702–5707. [Google Scholar] [CrossRef]
  22. Wang, X.; Ouyang, J.; Wei, Y.; Liu, F.; Zhang, G. Real-Time Vision through Haze Based on Polarization Imaging. Appl. Sci. 2019, 9, 142. [Google Scholar] [CrossRef] [Green Version]
  23. You, J.; Liu, P.; Rong, X.; Li, B.; Xu, T. Dehazing and enhancement research of polarized image based on dark channel priori principle. Laser Infrared 2020, 50, 493–500. [Google Scholar]
  24. Liang, Z.; Ding, X.; Mi, Z.; Wang, Y.; Fu, X. Effective Polarization-Based Image Dehazing With Regularization Constraint. IEEE Geosci. Remote Sens. Lett. 2020, 1, 1–5. [Google Scholar] [CrossRef]
  25. McCartney, E.J.; Hall, F.F. Optics of The Atmosphere: Scattering by Molecules and Particles. Phys. Today 1977, 30, 76–77. [Google Scholar] [CrossRef]
  26. Liu, Z.; Wang, D.; Liu, Y.; Liu, X. Adaptive Adjustment Algorithm for Non-uniform Illumination Images Based on 2D Gamma Function. Trans. Beijing Inst. Technol. 2016, 36, 191–196. [Google Scholar]
  27. Zhang, Y.; Luo, L.; Zhao, H.; Qiu, R.; Ying, Y. Image Dehazing Based on Multispectral Polarization Imaging Method in Different Detection Modes. In Proceedings of the 2018 International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (ISPRS), Beijing, China, 7–10 May 2018; pp. 615–620. [Google Scholar]
  28. Ren, W.; Guan, J. Investigation on Principle of Polarization-difference Imaging in Turbid Conditions. Opt. Commun. 2018, 413, 30–38. [Google Scholar] [CrossRef]
Figure 1. The physical model of atmospheric scattering and polarimetric imaging.
Figure 2. The process of combining q(i,j) with t(i,j).
Figure 3. The comparison between the dehazed image and the adjusted image.
Figure 4. The flowchart of the proposed method.
Figure 5. Comparisons between the original images and the dehazed images in light hazy background. (a,d,g) are the original images; (b,e,h) are the images dehazed by Schechner’s method; (c,f,i) are the images dehazed by our method.
Figure 6. The enlarged images within the red box in Figure 5d–i. (a–f) are the enlarged images cut from Figure 5d–f, respectively; (g–i) are the enlarged images cut from Figure 5g–i, respectively.
Figure 7. The gray level histogram of the original images and dehazed images. (a–c) are the gray level histograms of Figure 5a–c, respectively.
Figure 8. Comparisons between the original images and the dehazed images in heavy hazy background. (a,d) are the original images; (b,e) are the images dehazed by Schechner’s method; (c,f) are the images dehazed by our method.
Figure 9. The gray level histogram of the original images and the dehazed images. (a–c) are the gray level histograms of Figure 8a–c, respectively.
Table 1. The quantitative evaluation results of the original images and the dehazed images in Figure 5.

Images     Original Images                Schechner's Method             Our Method
           (a)      (d)      (g)          (b)      (e)      (h)          (c)      (f)      (i)
C          0.4108   0.4275   0.4795       0.3368   0.4628   0.5225       0.4718   0.5390   0.5672
H          7.2357   6.9007   7.1322       7.4431   7.5076   7.5211       7.7117   7.6813   7.6690
G          0.8100   0.6983   1.2433       1.5012   1.5722   1.9944       2.0991   2.0262   2.7462
Table 2. The quantitative evaluation results of the original images and the dehazed images in Figure 8.

Images     Original Images       Schechner's Method    Our Method
           (a)      (d)          (b)      (e)          (c)      (f)
C          0.0729   0.0696       0.1502   0.1527       0.2578   0.3218
H          5.5614   5.2639       6.5483   6.5091       7.4163   7.4525
G          0.7424   1.0146       1.2224   1.7595       2.4986   5.4732
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
