Article

Using Haze Level Estimation in Data Cleaning for Supervised Deep Image Dehazing Models

Department of Computer Science and Information Engineering, Chaoyang University of Technology, No. 168, Jifong E. Rd., Wufeng District, Taichung 413, Taiwan
* Author to whom correspondence should be addressed.
Electronics 2023, 12(16), 3485; https://doi.org/10.3390/electronics12163485
Submission received: 25 June 2023 / Revised: 7 August 2023 / Accepted: 14 August 2023 / Published: 17 August 2023
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications, 3rd Edition)

Abstract
Recently, supervised deep learning methods have been widely used for image haze removal. These methods rely on training data that are assumed to be appropriate. However, this assumption does not always hold: some data sets contain hazy ground truth (GT) images, which can lead supervised deep image dehazing (SDID) models to learn an inappropriate mapping between hazy images and GT images and thus degrade the dehazing performance. Addressing this problem requires solving two difficulties: estimating the haze level in an image, and devising a haze level indicator to discriminate clear images from hazy ones. To this end, we propose a haze level estimation (HLE) scheme based on the dark channel prior, together with a corresponding haze level indicator, for training data cleaning, i.e., to exclude image pairs with hazy GT images from the data set. With the HLE-based data cleaning, we introduce an SDID framework that avoids inappropriate learning and thus improves the dehazing performance. To verify the framework, experiments were conducted on the RESIDE data set with three types of SDID models: GCAN, REFN, and cGAN. The results show that our method significantly improves the dehazing performance of all three models. Subjectively, the proposed method generally provides better visual quality. Objectively, our method, using fewer training image pairs, improved the PSNR of the GCAN, REFN, and cGAN models by 3.10 dB, 5.74 dB, and 6.44 dB, respectively. Furthermore, our method was evaluated on a real-world data set, KeDeMa. The results indicate that models trained with the proposed data cleaning scheme generally produce dehazed images of better visual quality. Overall, the proposed method effectively and efficiently enhances the dehazing performance in the given examples. The practical significance of this research is to provide an easy yet effective way, namely the proposed data cleaning scheme, to improve the performance of SDID models.

1. Introduction

In the field of image restoration, image haze removal has been a topic of increasing interest. Haze is mainly due to light scattering from particles in the air. Hazy images generally have reduced contrast and visibility, which degrades the performance of image-based applications such as automatic driving and surveillance. Consequently, haze removal methods are sought to improve image quality. In general, there are two types of dehazing methods. The first type uses physical models whose parameters are estimated based on assumptions or statistical priors, e.g., [1,2,3,4,5]. A popular method of this type is based on the dark channel prior reported in ref. [1], where the hazy image captured by a camera is modeled as
$$I(x) = J(x)\,t(x) + A\,\bigl(1 - t(x)\bigr) \qquad (1)$$
where $I(x)$ is the captured image; $J(x)$ is the clear scene radiance; $A$ is the global atmospheric light, or simply atmospheric light; and $t(x) = e^{-\beta d(x)}$ is the transmittance, which represents the portion of non-scattered light reaching the camera, with $\beta$ the scattering coefficient of the atmosphere and $d(x)$ the scene depth at position $x$. The main challenge for model-based dehazing approaches is to estimate the atmospheric light $A$ and the transmittance $t(x)$ from a single hazy image.
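To make Equation (1) concrete, the following is a minimal Python sketch that renders a hazy image from a clear one; the depth map, atmospheric light, and scattering coefficient are assumed given (synthetic data sets generate hazy images from clear images and depth maps in essentially this way), and the default values of A and beta are illustrative only.

```python
import numpy as np

def synthesize_haze(J, d, A=0.9, beta=1.0):
    """Render a hazy image via Eq. (1):
    I(x) = J(x) t(x) + A (1 - t(x)), with t(x) = exp(-beta d(x)).

    J: clear RGB image as floats in [0, 1], shape (H, W, 3).
    d: scene depth map, shape (H, W). A, beta: scalars.
    """
    t = np.exp(-beta * d)[..., None]  # transmittance, broadcast over RGB
    return J * t + A * (1.0 - t)
```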
The second type of image dehazing employs deep learning models. Deep learning has demonstrated its advantages in image processing and has recently been introduced in the field of image haze removal. The existing deep learning methods in image dehazing can be divided into two categories: model-based methods and end-to-end methods. Model-based methods use deep learning to estimate the model parameters $A$, $t(x)$, or both, based on Equation (1), such as [6,7,8,9,10,11,12]. End-to-end methods, which are currently more popular in the image dehazing community, directly learn the mapping between hazy and ground truth (GT) images from a training data set. This study concentrates on end-to-end methods, that is, supervised deep image dehazing (SDID) models.
To date, many deep learning methods have been proposed for image dehazing. For example, based on the encoder–decoder framework, the densely connected pyramid dehazing network [13] and the enhanced context aggregation network [14] were proposed. Based on the retinex decomposition, the deep retinex dehazing network [15] was presented. Using residual learning or generative adversarial networks, the gated context aggregation network (GCAN) [16] and the conditional generative adversarial network [17,18] were developed. By introducing novel modules or domains, many deep models have been reported. Some examples are the self-supporting dehazing network [19] with a self-filtering block and a self-support module; the wavelet hybrid network [20] with a convolutional neural network in the wavelet domain; and the dynamic multi-attention dehazing network [21] with dynamic feature attention and adaptive feature fusion modules. In [22], a deep model was presented in which guide information and progressive feature fusion were used. Basically, the main differences among SDID models lie in network structures, learning rules, and cost functions. For a complete survey, see [23,24].
The data set is an important aspect of SDID models. At present, SDID models use the data set as is, assuming that all GT images are clear. However, this assumption is not always true. Hazy GT images may degrade dehazing performance, since an SDID model learns the mapping between the hazy images and their corresponding GT images. Unfortunately, little attention has been paid so far to the suitability of the data sets used in SDID models. Distinguishing clear GT images requires a scheme to estimate the haze level in an image, but no such scheme exists to date, so haze level information is rarely used in SDID models. Several researchers have attempted to use haze level information in SDID models, such as the level aware progressive network [25], the knowledge distillation dehazing network [26], and the adaptive network for haze concentration [27]. However, these methods have at least two problems. First, they use haze level information implicitly inside the networks, and consequently it is hard to transfer them to other models. Second, they do nothing about the image pairs with hazy GT images that negatively affect dehazing performance. To this end, in this study we devise a haze level estimation scheme that explicitly uses haze level information to filter out image pairs with hazy GT images in the data set. This process is called data cleaning in this study. The cleaned data are then used to train SDID models, and we therefore expect the dehazing performance of SDID models to improve.
The challenge of this study is to devise a scheme that can adequately estimate the haze level in an image. To this end, we develop a haze level estimation scheme and a haze level indicator based on the dark channel in ref. [1]. In ref. [1], the dark channel was related to the haze level in an image and was used to estimate the atmospheric light and transmittance in Equation (1). Although that work relates the haze level to the dark channel, no further research has been performed on a haze level indicator. Therefore, developing a haze level estimation scheme is not trivial, and at least two problems must be solved. First, as described in ref. [1], sky regions and white objects do not satisfy the assumption of the dark channel prior, and a scheme is needed to handle this case. Second, an appropriate haze level indicator must be introduced to distinguish clear and hazy images. Both problems are solved in this study. Compared with the way the dark channel is used in ref. [1], our proposed haze level estimation (HLE) has at least two fundamental differences. First, in ref. [1] the dark channel was used to estimate the model parameters, whereas we use it to develop a haze level indicator and apply it to training data cleaning. Second, in ref. [1] the dark channel was used in model-based dehazing, whereas we use it in an end-to-end SDID framework.
Based on the proposed HLE, a data cleaning scheme is introduced to exclude image pairs with hazy GT images from a data set and thereby improve the dehazing performance of SDID models. To verify our method, experiments were conducted on the RESIDE data set [28] with three different types of SDID models, that is, the GCAN [16], the RefineDNet (REFN) [29], and the cGAN [30]. The results show that the proposed data cleaning improved PSNR by 3.10 dB, 5.74 dB, and 6.44 dB for the GCAN, REFN, and cGAN models, respectively. The proposed method was further evaluated using a real-world data set, KeDeMa. The results indicate that, objectively and subjectively, the models with our data cleaning generally outperform those without it. In summary, the dehazing performance of the GCAN, REFN, and cGAN models can be improved by the proposed data cleaning scheme, which implies that it may also benefit other SDID models. The three main contributions of this study are listed below.
  • We introduced a haze level estimation (HLE) scheme in which a haze level indicator was devised based on the dark channel in ref. [1]. In ref. [1], it was observed that the haze level in an image is related to its dark channel, but no further work on haze level estimation followed. To date, no research has used haze level information in a data set for SDID models. Thus, our HLE is a pioneering work in the SDID field.
  • We presented a data cleaning scheme based on the proposed HLE for SDID models. The haze level indicator is used to distinguish clear and hazy GT images in a data set. When a GT image is hazy, all of its corresponding hazy images are excluded from the data set along with it. This process of discarding image pairs with hazy GT images is called data cleaning in this study. The experiments showed that the proposed data cleaning scheme can significantly improve the dehazing performance of SDID models. So far, no such work has been reported on SDID methods. Therefore, this paper is a contribution in this field.
  • We proposed an SDID framework that uses a data cleaning scheme to exclude image pairs with hazy GT images. This prevents an SDID model from learning an inappropriate mapping that degrades the dehazing performance. The proposed framework requires fewer training image pairs, yet achieves better dehazing performance. The framework can be easily applied to any SDID model. This is another contribution to the research community.
This paper is organized as follows. Section 2 introduces the proposed HLE and then explains how it applies to data cleaning. Section 3 presents a deep dehazing framework that uses our data cleaning method as a preprocessing step before training an SDID model. Section 4 evaluates the proposed SDID framework with three SDID models. Section 5 concludes this study and describes further research.

2. Haze Level Estimation and Data Cleaning

In this section, a haze level estimation (HLE) scheme is introduced in Section 2.1. Then, the application of the proposed HLE to data cleaning is described in Section 2.2.

2.1. Haze Level Estimation

This section proposes an HLE scheme in which a haze level indicator is introduced based on the dark channel in ref. [1]. In ref. [1], the haze level in an image was related to its dark channel, but no further work has been conducted on developing haze level estimation with a haze level indicator. Thus, although the proposed HLE is based on the dark channel, there are at least two main differences. First, in ref. [1] the dark channel was applied to estimate the model parameters, that is, the atmospheric light and transmittance in Equation (1), whereas in this study it is developed into a haze level indicator. Second, in ref. [1] the dark channel was used in a model-based dehazing method, whereas here it is used for data cleaning within an end-to-end dehazing framework. Consequently, our use of the proposed HLE is essentially different from the way the dark channel was used in ref. [1].
To date, haze level information has rarely been used in SDID models. Although some researchers, e.g., [25,26,27], implicitly incorporated it into SDID models, they did not explicitly apply it to the data set itself. The reason could be that there has been no way to extract an accurate estimate of the haze level from an image. Therefore, data cleaning using haze level information has rarely been the subject of SDID research, and current SDID researchers use the data set as is. However, not every image pair is good for an SDID model; in particular, an image pair with a hazy GT image is not. Such pairs negatively affect the dehazing performance of SDID models, since an inappropriate mapping is learned from them. To eliminate this problem, in this study an HLE scheme is developed and applied to data cleaning, which excludes image pairs with hazy GT images for SDID models. The details are described in the following.
Based on the dark channel prior [1], the HLE is developed. The dark channel prior states that in a block of a haze-free image, excluding sky regions and white objects, some pixels have very low or zero values in at least one of the RGB components. The dark channel can be obtained by a block minimum filter. For a haze-free image, the dark channel is dark, while for a hazy image it is not; the brightness of the dark channel is positively correlated with the haze level of the image. Figure 1a shows a clear image, "Cage", as an example of the dark channel prior. Figure 1b is the dark channel of Figure 1a obtained by the 15 × 15 minimum filter. As described, the dark channel in Figure 1b is dark except for the sky region. Figure 1c is a hazy image generated from Figure 1a, and its dark channel is shown in Figure 1d; it is brighter than that in Figure 1b. The results in Figure 1 demonstrate the properties of the dark channel prior.
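A minimal sketch of the dark channel computation described above is given below. It assumes a float RGB image and uses scipy's minimum_filter for the block minimum; any 15 × 15 minimum filter implementation would serve equally well.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel of an RGB image (floats in [0, 1]): the per-pixel
    minimum over the color channels, followed by a patch x patch
    minimum filter, as in ref. [1]."""
    per_pixel_min = img.min(axis=2)          # min over R, G, B
    return minimum_filter(per_pixel_min, size=patch)
```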
Although Figure 1 shows that the haze level of an image is related to its dark channel, it provides no further information about the haze level. Thus, an indicator for haze level estimation is required. Fortunately, Figure 1 gives us a hint: if sky regions and white objects can be ignored, the average of the remaining pixel values in the dark channel can be a good indicator of the haze level. To achieve this, a threshold $\tau$ is introduced to exclude pixels whose dark channel values are greater than $\tau$. From the viewpoint of the histogram, this is a truncated mean in which pixel values greater than $\tau$ are not included in the calculation. The truncated mean is used as the haze level indicator and is denoted as $\tilde{\mu}_{dc}(\tau)$. When $\tau$ is properly set, $\tilde{\mu}_{dc}(\tau)$ works well in most cases. However, we observe that a clear image may still be considered hazy in some cases, so a further check is required. Fortunately, we find that for a clear image, $\tilde{\mu}_{dc}(\tau)$ changes little as $\tau$ varies, whereas for a hazy image it does not. Thus, we use the difference $\Delta\tilde{\mu}_{dc} = \tilde{\mu}_{dc}(\tau_1) - \tilde{\mu}_{dc}(\tau)$, where $\tau_1 > \tau$, for a second check. With the above ideas, for a given image $I$, the proposed HLE scheme is implemented as follows.
Step 1.
Obtain the dark channel using the $15 \times 15$ minimum filter:
$$I^{dark}_{15}(x) = \min_{y \in \Omega(x)} \Bigl( \min_{c \in \{R,G,B\}} I^c(y) \Bigr)$$
where $I^c$ is a color component of $I$ and $\Omega(x)$ is a $15 \times 15$ window centered at $x$.
Step 2.
Calculate the haze level indicator $\tilde{\mu}_{dc}(\tau)$, a truncated mean of $I^{dark}_{15}(x)$:
$$\tilde{\mu}_{dc}(\tau) = \operatorname{mean}\bigl\{\, I^{dark}_{15}(x) \mid I^{dark}_{15}(x) \le \tau \,\bigr\}$$
where $0 < \tau < 1$ is a user-defined threshold that excludes larger pixel values from the calculation.
Step 3.
Check whether the inequality $\tilde{\mu}_{dc}(\tau) < \eta$ holds, where $0 < \eta < 1$ is a user-defined threshold. If $\tilde{\mu}_{dc}(\tau) < \eta$ is true, then image $I$ is considered clear. Otherwise, go to Step 4 for the second check.
Step 4.
Calculate the difference $\Delta\tilde{\mu}_{dc} = \tilde{\mu}_{dc}(\tau_1) - \tilde{\mu}_{dc}(\tau)$ and check whether the inequality $\Delta\tilde{\mu}_{dc} < \epsilon$ holds, where $\tau_1 > \tau$ and $0 < \epsilon < 1$ are user-defined thresholds. If $\Delta\tilde{\mu}_{dc} < \epsilon$ is true, then image $I$ is considered clear. Otherwise, it is hazy.
Figure 2 summarizes the steps of the proposed HLE. As Figure 2 shows, the HLE consists of two checks. The first check is performed in Step 3. If image $I$ is considered hazy there, a second check is performed in Step 4 to avoid misclassifying a clear image as hazy.
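The four steps translate directly into code. The sketch below builds on the dark_channel function given earlier; the threshold defaults are the RESIDE values determined empirically in Section 2.2 and would need retuning for other data sets.

```python
def truncated_mean(dark, tau):
    """Haze level indicator: the mean of dark-channel values not
    exceeding tau, so sky regions and white objects are excluded."""
    kept = dark[dark <= tau]
    return float(kept.mean()) if kept.size else 0.0

def is_clear(img, tau=0.4, eta=0.05, tau1=0.9, eps=0.1):
    """HLE Steps 1-4. Returns True if the image is judged clear.
    Defaults are the RESIDE thresholds from Section 2.2."""
    dark = dark_channel(img)                  # Step 1
    mu = truncated_mean(dark, tau)            # Step 2
    if mu < eta:                              # Step 3: first check
        return True
    delta = truncated_mean(dark, tau1) - mu   # Step 4: second check
    return delta < eps
```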

2.2. Application of the HLE to Data Cleaning

This section applies the HLE scheme developed in Section 2.1 to data cleaning, that is, to excluding training image pairs with hazy GT images. In the training process, an SDID model learns the mapping from training image pairs. However, an inappropriate mapping may be learned when image pairs with hazy GT images are involved in the training process, which degrades the dehazing performance of the SDID model. Conversely, this implies that better dehazing performance can be expected when image pairs with hazy GT images are excluded from the training process. Here, we explain how the proposed HLE scheme is applied to data cleaning for SDID models, using the RESIDE data set [28] as an example.
The RESIDE data set is widely used in the image dehazing field. It includes 8970 GT images, from which 313,950 hazy images were generated; that is, each GT image produced 35 hazy images. To date, the image pairs of the RESIDE data set have not been investigated. By inspection, we can find hazy GT images in the RESIDE data set. To see the whole picture, we used the proposed haze level indicator $\tilde{\mu}_{dc}(0.9)$ to examine the haze level distribution of the GT images. The distribution is shown in Figure 3 and indicates that many GT images have a large $\tilde{\mu}_{dc}(0.9)$ value; a larger $\tilde{\mu}_{dc}(0.9)$ means more haze in a GT image. By observation, we find that GT images with $\tilde{\mu}_{dc}(0.9) > 0.4$ are hazy. Five sample images with $\tilde{\mu}_{dc}(0.9) > 0.4$ are given in Figure 4. The results show that many hazy GT images exist in the RESIDE data set and that $\tilde{\mu}_{dc}(\tau)$ is a good measure of the haze level in an image.
Although GT images with $\tilde{\mu}_{dc}(0.9) > 0.4$ are hazy, this does not imply that GT images with $\tilde{\mu}_{dc}(0.9) \le 0.4$ are clear. Therefore, further investigation was conducted. By experiment, we find that GT images with $\tilde{\mu}_{dc}(0.9) \le 0.4$ may be clear or hazy. Three examples selected from the RESIDE data set are given in Table 1, where $I_1$ and $I_2$ are clear and $I_3$ is hazy. To identify a clear GT image, a further check is required in which $\tilde{\mu}_{dc}(0.4)$ is used as the haze level indicator, based on the above observation. Through experiments, it is observed that a GT image is clear if $\tilde{\mu}_{dc}(0.4) < 0.05$ in the first check, and also clear if $\Delta\tilde{\mu}_{dc} = \tilde{\mu}_{dc}(0.9) - \tilde{\mu}_{dc}(0.4) < 0.1$ in the second check. In summary, we empirically set $\tau = 0.4$, $\eta = 0.05$, $\tau_1 = 0.9$, and $\epsilon = 0.1$ in the HLE described in Section 2.1.
To verify the feasibility of data cleaning by the HLE, the three GT images in Table 1 are given as examples. Table 1 shows the three GT images and the corresponding $\tilde{\mu}_{dc}(\tau)$, $\Delta\tilde{\mu}_{dc}$, and discrimination results. In Table 1, the subscript of $I_i$ denotes the order of the examples, that is, 1 for the first example, and so on. In Table 1, $I_1$ is considered clear because $\tilde{\mu}_{dc}(0.4) < 0.05$. For image $I_2$, $\tilde{\mu}_{dc}(0.4) > 0.05$, so $\Delta\tilde{\mu}_{dc} < 0.1$ was further checked; it is clear because $\Delta\tilde{\mu}_{dc} < 0.1$ holds. For image $I_3$, $\tilde{\mu}_{dc}(0.4) > 0.05$ and $\Delta\tilde{\mu}_{dc} \ge 0.1$ hold; thus, it is considered hazy. The HLE discrimination results are consistent with the visual results, which suggests that the proposed HLE is feasible. In this study, we discard image pairs with hazy GT images; this process is called data cleaning in this paper.
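With the thresholds fixed, data cleaning reduces to filtering image pairs by the verdict on their GT image. The sketch below assumes the is_clear function above; load_rgb is a stand-in loader (any image-reading routine would do), and verdicts are cached per GT image since each GT image generates 35 hazy images in RESIDE.

```python
import imageio.v3 as iio
import numpy as np

def load_rgb(path):
    """Stand-in loader: read an image and scale to floats in [0, 1]."""
    return iio.imread(path).astype(np.float32) / 255.0

def clean_dataset(pairs):
    """Keep only (hazy, GT) pairs whose GT image passes the HLE.
    `pairs` is an iterable of (hazy_path, gt_path) tuples."""
    verdicts = {}                     # cache: one HLE check per GT image
    kept = []
    for hazy_path, gt_path in pairs:
        if gt_path not in verdicts:
            verdicts[gt_path] = is_clear(load_rgb(gt_path))
        if verdicts[gt_path]:
            kept.append((hazy_path, gt_path))
    return kept
```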

3. The Proposed SDID Framework

In this section, we present an SDID framework, shown in Figure 5, that includes the proposed data cleaning scheme. Denote the original data set as $S_o$, which includes a hazy image set $S_h$ and a GT image set $S_g$. The set $S_o$ consists of a collection of image pairs, each comprising a hazy image from $S_h$ and its corresponding GT image from $S_g$. Before training an SDID model, $S_g$ is processed with the proposed data cleaning scheme, and the image pairs with hazy GT images are discarded. The resulting set is denoted as $S_o^c$; that is, the proposed data cleaning preprocesses $S_o$ into $S_o^c$. Then, $S_o^c$ is divided into a training data set $S_{tr}^c$ and a testing data set $S_{tt}^c$, and the SDID model is trained with $S_{tr}^c$ as in Figure 5a. In this way, better dehazing performance is expected, since a better mapping is learned from $S_{tr}^c$ than from the original set $S_o$. The trained SDID model is tested with $S_{tt}^c$ as shown in Figure 5b, where $I_h$ is a hazy image in $S_{tt}^c$ and $\hat{I}_g$ is the predicted GT image, and the performance is evaluated. Note that the performance evaluation may be biased by hazy GT images when a full reference metric is used; for example, a less dehazed $\hat{I}_g$ is considered a better result when a hazy GT image is used as the reference. Thus, in later experiments, we use the cleaned data set $S_{tt}^c$ in the testing stage to avoid bias in the performance evaluation.
It should be mentioned that the only difference between the proposed SDID framework and a common SDID framework is the preprocessing step, i.e., the data cleaning applied to $S_o$. Thus, the proposed framework can be applied to any SDID model, and we expect any SDID model to benefit from it through better dehazing performance. This is verified in Section 4.
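Schematically, the whole framework is a thin wrapper around an arbitrary SDID model. In the sketch below, train_sdid and evaluate are hypothetical placeholders for the training and testing routines of whichever model is plugged in; only the data cleaning step is specific to this work.

```python
def run_framework(pairs, train_sdid, evaluate, split=0.8):
    """The SDID framework of Figure 5: clean, split, train, test.
    train_sdid and evaluate are supplied by the chosen SDID model."""
    cleaned = clean_dataset(pairs)                      # S_o -> S_o^c
    n_train = int(split * len(cleaned))
    s_tr, s_tt = cleaned[:n_train], cleaned[n_train:]   # S_tr^c, S_tt^c
    model = train_sdid(s_tr)                            # Figure 5a: training
    return evaluate(model, s_tt)                        # Figure 5b: testing
```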

4. Results and Discussion

In this section, the proposed framework in Figure 5 is evaluated using three SDID models: the GCAN model [16], the REFN model [29], and the cGAN model [30]. Section 4.1 explains how the parameter $\eta$ in the HLE was empirically determined. The comparison results for the three models are given in Section 4.2 for the GCAN, Section 4.3 for the REFN, and Section 4.4 for the cGAN. The purpose of this study is not to compare the dehazing performance of the three models with one another, but to demonstrate the effectiveness of the proposed framework by comparing the performance with and without training data cleaning.

4.1. Determination of η in the HLE

In this section, the parameter $\eta$ in the HLE is empirically determined. As given in Section 2, there are four thresholds in the HLE: $\tau$, $\eta$, $\tau_1$, and $\epsilon$. For the RESIDE data set, three of them were set by experiment: $\tau = 0.4$, $\tau_1 = 0.9$, and $\epsilon = 0.1$. In the experiments, we find that the threshold $\eta$ significantly affects the cleaning result, that is, the number of image pairs excluded from the data set; thus, only $\eta$ is considered here. The RESIDE data set [28] was used as the original set $S_o$, which contains 313,950 hazy images generated from 8970 GT images; each GT image generated 35 hazy images with different model parameters, that is, different haze levels. As shown in Figure 4, some GT images in the RESIDE data set are hazy and degrade the dehazing performance of an SDID model, since the model learns an inappropriate mapping. Therefore, image pairs with hazy GT images should be excluded before training an SDID model. To this end, the HLE was used to discriminate clear and hazy GT images, and data cleaning was then used to discard image pairs with hazy GT images in the RESIDE data set. Table 2 summarizes the number of hazy images $N_h^c$ and the number of GT images $N_g^c$ remaining in the RESIDE data set for various $\eta$ values, together with the ratio $R$ (%) of the number of cleaned images to the original number.
Table 2 shows that only 53.01% of the GT images were clear even when $\eta = 0.1$ was used in the HLE. This suggests that using a cleaned training data set can also save a lot of training time for an SDID model. Table 2 indicates only a small difference in $N_h^c$ between $\eta = 0.05$ and $\eta = 0.025$. Therefore, in the following experiments, the cleaned data set with $\eta = 0.05$, denoted as $S_o^{c,0.05}$, was used and was split into a training set $S_{tr}^{c,0.05}$ and a testing set $S_{tt}^{c,0.05}$.

4.2. The GCAN Results for RESIDE Data Set

This section uses the cleaned data set $S_{tr}^{c,0.05}$ to train the fully supervised deep image dehazing model GCAN with the same training settings as reported in [16]. The trained GCAN was then tested with $S_{tt}^{c,0.05}$. The experiment was conducted as follows.

4.2.1. Objective Comparison

In the experiment, we trained the GCAN with 10,000 (10 K), 20,000 (20 K), and 30,000 (30 K) image pairs randomly selected from $S_{tr}^{c,0.05}$. Then, we evaluated the three trained GCANs with three testing subsets of $S_{tt}^{c,0.05}$ containing 10 K, 30 K, and 50,000 (50 K) image pairs. Table 3 shows the PSNR results, where the subscripts o, 10 K, 20 K, and 30 K refer to the original GCAN and the GCANs trained with 10 K, 20 K, and 30 K image pairs, respectively. The results in Table 3 reveal three points. First, GCAN_10K, GCAN_20K, and GCAN_30K have, on average, higher PSNR than GCAN_o by 2.61 dB, 3.05 dB, and 3.10 dB, respectively, which indicates better performance for the models with the proposed data cleaning scheme. Second, the PSNR increases as the number of training image pairs increases for the GCANs with cleaned training data; moreover, the differences are small, 0.44 dB between GCAN_10K and GCAN_20K and 0.05 dB between GCAN_20K and GCAN_30K, which suggests that a good trade-off can be made between training time and PSNR loss. Third, each compared model performs similarly when different numbers of test image pairs are used, which implies that the performance is stable and is not related to the number of test image pairs. In summary, Table 3 indicates that the proposed SDID framework can improve the dehazing performance of the GCAN model, in both efficiency and effectiveness. In the following experiments, GCAN_30K was used because it has the highest PSNR.
To better understand the performance of our method, six more objective image quality assessments were used to compare GCAN_o and GCAN_30K: three full-reference metrics (SSIM [31], TMQI [32], FSITM [33]) and three no-reference metrics (BRISQUE [34], ILNIQE [35], and DHQI [36]). Since each objective assessment has its own preference, the average rank was used as the overall performance index for a fair comparison. In the experiment, only 10 K test image pairs were used, since similar PSNRs were obtained with 30 K and 50 K image pairs. Table 4 shows the comparison results with the rank in parentheses and the average rank $\bar{R}$, where the arrow indicates the direction of better performance. The results show that GCAN_30K has a better overall performance than GCAN_o, as expected. This suggests that our method works well for the GCAN model.

4.2.2. Subjective Comparison

Here, a subjective comparison was made between the GCAN_o and GCAN_30K models. We selected six images with different scenes and haze levels for this evaluation. The six images are shown in Table 5, where the corresponding GT image, PSNR, and $\tilde{\mu}_{dc}(0.9)$ are given for reference. In Table 5, the subscripts g and h of $I$ denote the GT image and the hazy image, and the numeric subscript of $I_i$ is related to the haze level of $I_h$; for example, $I_1$ has the lowest haze level among the given examples. Table 5 shows that GCAN_o has satisfactory dehazed results except for $I_6$. GCAN_30K has better results than GCAN_o except for $I_4$, which has a lower PSNR; even so, $I_4$ still has excellent visual quality. In addition, GCAN_30K consistently provides high quality dehazed images at different haze levels, while GCAN_o has difficulty in the high haze level case, i.e., $I_6$. Besides, all GT images ($I_g$) in Table 5 selected by the proposed data cleaning are clear, which confirms that the proposed data cleaning is effective in the selection of GT images. In summary, GCAN_30K generally produces dehazed images of better visual quality than GCAN_o, even though it uses fewer training image pairs. This implies that our data cleaning can improve the GCAN model.

4.3. The REFN Results for RESIDE Data Set

This section uses RefineDNet (REFN) [29], a weakly supervised deep image dehazing model, as the second example to verify our method. In the experiment, the REFN was trained with three subsets of 10 K, 20 K, and 30 K image pairs from the training data set $S_{tr}^{c,0.05}$. As before, the trained REFNs were then tested with three randomly selected subsets of 10 K, 30 K, and 50 K image pairs from $S_{tt}^{c,0.05}$. Objective and subjective comparisons are presented in Section 4.3.1 and Section 4.3.2, respectively.

4.3.1. Objective Comparison

In this section, we tested the REFNs trained with 10 K, 20 K, and 30 K image pairs, using three subsets of 10 K, 30 K, and 50 K image pairs randomly selected from $S_{tt}^{c,0.05}$. Table 6 shows the PSNRs of the REFNs, where the subscripts o, 10 K, 20 K, and 30 K represent the original REFN and the REFNs trained with 10 K, 20 K, and 30 K image pairs, respectively. The results in Table 6 have three implications. First, REFN_10K, REFN_20K, and REFN_30K improved the average PSNR of REFN_o by 5.12 dB, 5.74 dB, and 5.42 dB, respectively. This shows that training an SDID model with hazy GT images degrades the dehazing performance due to the inappropriate mapping learned. Second, a larger number of training image pairs does not necessarily improve the dehazing performance; REFN_30K performs no better than REFN_20K, which uses fewer training image pairs, so it is not necessary to use all image pairs in $S_{tr}^{c,0.05}$ to train the REFN. Third, a small PSNR loss occurs when REFN_10K is used in exchange for a lower training cost. In summary, the proposed SDID framework can improve the performance of the REFN model effectively and efficiently. Since REFN_20K has the best performance, it was used in the following experiments.
As before, REFN_o and REFN_20K were further evaluated with the six objective metrics described in Section 4.2.1. Table 7 shows the results for SSIM, TMQI, FSITM, BRISQUE, ILNIQE, and DHQI, along with the average rank $\bar{R}$. The results indicate that REFN_20K has a better $\bar{R}$ than REFN_o, suggesting that the proposed data cleaning scheme is able to improve the dehazing performance of the REFN model, as expected.

4.3.2. Subjective Comparison

This section compares the subjective visual quality of REFN_o and REFN_20K. Six images with different scenes and haze levels were selected as examples; the notation used here is the same as in Section 4.2.2. The six images are shown in Table 8 together with their dehazed images from REFN_o and REFN_20K. For reference, the corresponding PSNR and $\tilde{\mu}_{dc}(0.9)$ are also given in Table 8, except for $I_g$. Table 8 indicates that REFN_o has excellent visual quality, while REFN_20K generally gives an even better result. As expected, the GT images selected by the proposed data cleaning are clear in the given examples. The results in Table 8 indicate that our method is capable of improving the performance of the REFN model.

4.4. The cGAN Results for RESIDE Data Set

In this section, the cGAN [30] is used as the final example to justify our method. The cGAN is a type of SDID model that uses a generative adversarial scheme. Section 4.4.1 shows the objective comparison results, while Section 4.4.2 gives the subjective comparison results.

4.4.1. Objective Comparison

In the experiment, the conditional generative adversarial network (cGAN) in [30] was trained using 10 K, 20 K, and 30 K image pairs randomly selected from $S_{tr}^{c,0.05}$. The trained cGAN models were then tested with 10 K, 30 K, and 50 K image pairs randomly selected from $S_{tt}^{c,0.05}$. Table 9 shows the PSNR of the compared models, where the subscripts e, 10 K, 20 K, and 30 K represent the enhanced cGAN in [18] and the cGANs trained with 10 K, 20 K, and 30 K image pairs, respectively. Since no SDID model using the cGAN [30] directly is available, the enhanced cGAN in [18] was used for comparison. Three points should be mentioned about the results in Table 9. First, the PSNR of cGAN_10K, cGAN_20K, and cGAN_30K is better than that of cGAN_e by 5.80 dB, 6.11 dB, and 6.44 dB, respectively, which confirms that the proposed data cleaning is effective. Second, the PSNR increases as the number of training image pairs increases, although the differences are not significant; this implies that more training image pairs benefit the dehazing performance of the cGAN. Third, the PSNR differences among cGAN_10K, cGAN_20K, and cGAN_30K are within 0.64 dB, so a good trade-off can be made between PSNR and the number of training image pairs. In summary, our method improves both the effectiveness and efficiency of the cGAN model.
cGAN_e and cGAN_30K were further evaluated with SSIM, TMQI, FSITM, BRISQUE, ILNIQE, and DHQI, as before. Table 10 shows the results and the average rank $\bar{R}$. Table 10 indicates that cGAN_30K has an $\bar{R}$ equal to that of cGAN_e. Even so, cGAN_30K has better SSIM, TMQI, and FSITM, which are full-reference metrics; they are more reliable and suggest that better visual quality is obtained with cGAN_30K. This is verified in the following.

4.4.2. Subjective Comparison

For a subjective comparison, six images were selected from the dehazed images of cGAN_e and cGAN_30K. As in the previous sections, these images were selected with different scenes and haze levels. They are shown in Table 11 with the corresponding GT images, the dehazed images from cGAN_e and cGAN_30K, and their haze levels measured by $\tilde{\mu}_{dc}(0.9)$. As before, the numeric subscript of $I_i$ relates to its haze level. In the given examples, cGAN_e works well except for $I_5$, while cGAN_30K, which used fewer training image pairs, has better visual quality in all cases. Although cGAN_30K has the same $\bar{R}$ as cGAN_e, it has a better subjective result, as described above. In summary, the results show that our method can improve the cGAN effectively and efficiently.

4.5. The KeDeMa Results

In this section, GCAN_o, GCAN_30K, REFN_o, REFN_20K, cGAN_e, and cGAN_30K were further evaluated with a real-world image data set, KeDeMa [37]. The KeDeMa data set consists of 25 natural images with different scenarios that were collected from several pioneering papers. The purpose of this experiment was to assess the robustness of our method; in other words, the dehazing performance of the proposed method was investigated using a data set outside $S_o^{c,0.05}$. Objective and subjective comparisons for the GCAN, REFN, and cGAN models are given in Section 4.5.1, Section 4.5.2, and Section 4.5.3, respectively.

4.5.1. The GCAN Results for KeDeMa Data Set

Here, GCAN_o and GCAN_30K were evaluated. Since PSNR, SSIM, TMQI, and FSITM require GT images, which KeDeMa does not provide, they were not used in the objective comparison; only BRISQUE, ILNIQE, and DHQI were used. The objective results, shown in Table 12, indicate that GCAN_30K is objectively superior to GCAN_o by the average rank $\bar{R}$.
Next, a subjective comparison was made between GCAN_o and GCAN_30K. Six images were selected from the dehazed images, as shown in Table 13, where the numeric subscript of $I_i$ denotes the order of presentation and is not related to the haze level. In terms of visual quality, GCAN_o suffers from halos, artifacts, and color distortion. Color distortion in an image refers to hue distortion, saturation distortion, or both; all six dehazed images from GCAN_o show severe color distortion to different degrees. In a dehazed image, halos generally occur at large depth discontinuities. In the given examples, $I_3$ and $I_5$ show the halo problem for GCAN_o, in the front tree areas of $I_3$ and in the branches of the trees in $I_5$. An artifact refers to any feature introduced by the dehazing process that is not in the original image. By this definition, GCAN_o has artifact problems in $I_2$ (sky region), $I_3$ (dark areas on the road), $I_4$ (sky region), and $I_6$ (lower right corner). In contrast, GCAN_30K has none of the above problems. However, GCAN_30K has difficulty removing haze in $I_3$; for the other dehazed images, it gives satisfactory visual quality. Although our method shows an improvement in the GCAN model, its dehazing effect should be improved in future research.

4.5.2. The REFN Results for KeDeMa Data Set

In this section, objective and subjective comparisons of REFN_o and REFN_20K were made. The objective results for BRISQUE, ILNIQE, and DHQI are recorded in Table 14 together with $\bar{R}$. Table 14 shows that REFN_o has a better $\bar{R}$ than REFN_20K. Even so, the visual quality is better for REFN_20K because of its better DHQI, a metric specifically developed to evaluate dehazed images; that is, a better DHQI generally means a better dehazed image. This is justified in the following.
Next, we compared the visual quality of the dehazed images obtained from REFN_o and REFN_20K. As before, we selected six images, which are shown in Table 15. Table 15 indicates that REFN_o has a stronger dehazing effect than REFN_20K. However, it introduces artifacts in $I_1$, i.e., contours in the sky region, and color distortion in $I_3$, $I_4$, $I_5$, and $I_6$. On the other hand, REFN_20K retains the mood of the hazy image $I_h$ and gives a less dehazed result in $I_2$ and $I_5$; for the other images, it yields good visual quality. In summary, REFN_20K generally provides satisfactory results in the given examples, which implies that our method can improve the REFN model. Nevertheless, better dehazing performance should be sought for our method to avoid under-dehazed cases.

4.5.3. The cGAN Results for KeDeMa Data Set

In this section, cGAN_e and cGAN_30K were objectively and subjectively compared. The objective comparison results are given in Table 16. cGAN_30K has a worse $\bar{R}$ than cGAN_e; however, cGAN_30K has a much better DHQI, which implies better visual quality of the dehazed images. This is verified in the following.
Next, a subjective comparison was made with six selected dehazed images from cGAN_e and cGAN_30K. The six images and the corresponding dehazed images are given in Table 17. The results show that cGAN_e has a much stronger dehazing effect than cGAN_30K. However, its dehazed images have problems: severe color distortion is observed in all six images, and halos are found in $I_3$ (around the policeman), $I_4$ (around the woman), $I_5$ (around the front hill), and $I_6$ (the ridge of the front mountain). For cGAN_30K, none of these problems are observed, and the dehazed image generally retains a tone similar to that of the corresponding input image. However, cGAN_30K gives a less dehazed result in $I_3$ and $I_5$. This suggests that the approach proposed in this study can improve the cGAN model, although further research is needed to improve it.
The experimental results show that the proposed method generally performs better in the given GCAN, REFN, and cGAN models. However, they also indicate that a few images in the KeDeMa data set are not dehazed well. The reason might be that only the RESIDE data set was used to train the models in this study. It would be helpful to include more data sets in training, that is, to increase the diversity of the training data for better generalization. This will be addressed in our further research.

5. Conclusions and Further Research

This paper has presented a data cleaning scheme for supervised deep image dehazing (SDID) models that uses haze level estimation (HLE). Based on the dark channel, the HLE was developed and applied to detect hazy GT images in a data set, e.g., the RESIDE data set [28]. Image pairs with hazy GT images were excluded when training an SDID model; this preprocessing scheme was called data cleaning in this study. Furthermore, an SDID framework was given in which data cleaning is integrated to improve the dehazing performance. The proposed framework was verified with three SDID models, the GCAN, REFN, and cGAN, using the RESIDE data set. The results indicate that the dehazing performance was improved by 3.10 dB, 5.74 dB, and 6.44 dB in PSNR for the GCAN, REFN, and cGAN, respectively. Our method was further verified with the real-world data set KeDeMa [37]. The results show that the models with our data cleaning scheme generally produce better visual quality than those without it. This suggests that our method is able to improve the dehazing performance of the given example models, and it could also benefit other SDID models. Although our method generally works well, four points should be mentioned. First, this method is suited to large-scale data sets, e.g., the RESIDE data set, since data cleaning may significantly reduce the data size, which could otherwise hurt the dehazing performance. Second, the parameters in the proposed HLE should be adjusted when a data set other than RESIDE is used. Third, the HLE works well in most cases for discriminating clear and hazy images, but it fails on some images; thus, the HLE should be improved. Fourth, the models that use our method give less dehazed results on the KeDeMa data set, and more data sets may be needed when training an SDID model for better performance. These points will guide our further research.

Author Contributions

Conceptualization, C.-H.H.; methodology, C.-H.H.; software, Z.-Y.C.; validation, C.-H.H. and Z.-Y.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the National Science and Technology Council of Taiwan, grant number MOST 110-2221-E-324-012.

Data Availability Statement

The RESIDE data set is available at https://sites.google.com/view/reside-dehaze-datasets/reside-v0, and the KeDeMa data set can be downloaded at https://ivc.uwaterloo.ca/database/dehaze.html.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. He, K.; Sun, J.; Tang, X. Single Image Haze Removal Using Dark Channel Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353. [Google Scholar] [CrossRef] [PubMed]
  2. Ju, M.; Ding, C.; Ren, W.; Yang, Y.; Zhang, D.; Guo, Y.J. IDE: Image Dehazing and Exposure Using an Enhanced Atmospheric Scattering Model. IEEE Trans. Image Process. 2021, 30, 2180–2192. [Google Scholar] [CrossRef] [PubMed]
  3. Zhu, Q.; Mai, J.; Shao, L. A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior. IEEE Trans. Image Process. 2015, 24, 3522–3533. [Google Scholar] [CrossRef]
  4. Fattal, R. Single image dehazing. ACM Trans. Graph. 2008, 27, 1–9. [Google Scholar] [CrossRef]
  5. Fattal, R. Dehazing Using Color-Lines. ACM Trans. Graph. 2014, 34, 1–14. [Google Scholar] [CrossRef]
  6. Xie, L.; Wang, H.; Wang, Z.; Cheng, L. DHD-Net: A Novel Deep-Learning-based Dehazing Network. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 1–7. [Google Scholar] [CrossRef]
  7. Li, B.; Peng, X.; Wang, Z.; Xu, J.; Feng, D. AOD-Net: All-in-One Dehazing Network. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 4780–4788. [Google Scholar] [CrossRef]
  8. Zhu, H.; Cheng, Y.; Peng, X.; Zhou, J.T.; Kang, Z.; Lu, S.; Fang, Z.; Li, L.; Lim, J.-H. Single-Image Dehazing via Compositional Adversarial Network. IEEE Trans. Cybern. 2019, 51, 829–838. [Google Scholar] [CrossRef] [PubMed]
  9. Cai, B.; Xu, X.; Jia, K.; Qing, C.; Tao, D. DehazeNet: An End-to-End System for Single Image Haze Removal. IEEE Trans. Image Process. 2016, 25, 5187–5198. [Google Scholar] [CrossRef]
  10. Ren, W.; Liu, S.; Zhang, H.; Pan, J.; Cao, X.; Yang, M.-H. Single Image Dehazing via Multi-scale Convolutional Neural Networks. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 154–169. [Google Scholar]
  11. Yin, S.; Yang, X.; Wang, Y.; Yang, Y.-H. Visual Attention Dehazing Network with Multi-level Features Refinement and Fusion. Pattern Recognit. 2021, 118, 108021. [Google Scholar] [CrossRef]
  12. Jiao, L.; Hu, C.; Huo, L.; Tang, P. Guided-Pix2Pix+: End-to-end spatial and color refinement network for image dehazing. Signal Process. Image Commun. 2022, 107, 116758. [Google Scholar] [CrossRef]
  13. Zhang, H.; Patel, V.M. Densely Connected Pyramid Dehazing Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 3194–3203. [Google Scholar] [CrossRef]
  14. Bai, H.; Pan, J.; Xiang, X.; Tang, J. Self-Guided Image Dehazing Using Progressive Feature Fusion. IEEE Trans. Image Process. 2022, 31, 1217–1229. [Google Scholar] [CrossRef]
  15. Li, P.; Tian, J.; Tang, Y.; Wang, G.; Wu, C. Deep Retinex Network for Single Image Dehazing. IEEE Trans. Image Process. 2020, 30, 1100–1115. [Google Scholar] [CrossRef]
  16. Chen, D.; He, M.; Fan, Q.; Liao, J.; Zhang, L.; Hou, D.; Yuan, L.; Hua, G. Gated Context Aggregation Network for Image Dehazing and Deraining. In Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 7–11 January 2019; pp. 1375–1383. [Google Scholar] [CrossRef]
  17. Li, R.; Pan, J.; Li, Z.; Tang, J. Single Image Dehazing via Conditional Generative Adversarial Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 8202–8211. [Google Scholar]
  18. Qu, Y.; Chen, Y.; Huang, J.; Xie, Y. Enhanced Pix2pix Dehazing Network. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 8152–8160. [Google Scholar] [CrossRef]
  19. Huang, P.; Zhao, L.; Jiang, R.; Wang, T.; Zhang, X. Self-filtering image dehazing with self-supporting module. Neurocomputing 2020, 432, 57–69. [Google Scholar] [CrossRef]
  20. Dharejo, F.A.; Zhou, Y.; Deeba, F.; Jatoi, M.A.; Khan, M.A.; Mallah, G.A.; Ghaffar, A.; Chhattal, M.; Du, Y.; Wang, X. A deep hybrid neural network for single image dehazing via wavelet transform. Optik 2021, 231, 166462. [Google Scholar] [CrossRef]
  21. Zhao, D.; Mo, B.; Zhu, X.; Zhao, J.; Zhang, H.; Tao, Y.; Zhao, C. Dynamic Multi-Attention Dehazing Network with Adaptive Feature Fusion. Electronics 2023, 12, 529. [Google Scholar] [CrossRef]
  22. Cui, Z.; Wang, N.; Su, Y.; Zhang, W.; Lan, Y.; Li, A. ECANet: Enhanced context aggregation network for single image dehazing. Signal Image Video Process. 2022, 17, 471–479. [Google Scholar] [CrossRef]
  23. Agrawal, S.C.; Jalal, A.S. A Comprehensive Review on Analysis and Implementation of Recent Image Dehazing Methods. Arch. Comput. Methods Eng. 2022, 29, 4799–4850. [Google Scholar] [CrossRef]
  24. Gui, J.; Cong, X.; Cao, Y.; Ren, W.; Zhang, J.; Zhang, J.; Cao, J.; Tao, D. A Comprehensive Survey and Taxonomy on Single Image Dehazing Based on Deep Learning. ACM Comput. Surv. 2022, 55, 1–37. [Google Scholar] [CrossRef]
  25. Li, Y.; Miao, Q.; Ouyang, W.; Ma, Z.; Fang, H.; Dong, C.; Quan, Y. LAP-Net: Level-Aware Progressive Network for Image Dehazing. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 3275–3284. [Google Scholar] [CrossRef]
  26. Hong, M.; Xie, Y.; Li, C.; Qu, Y. Distilling Image Dehazing With Heterogeneous Task Imitation. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 3459–3468. [Google Scholar] [CrossRef]
  27. Wang, T.; Zhao, L.; Huang, P.; Zhang, X.; Xu, J. Haze concentration adaptive network for image dehazing. Neurocomputing 2021, 439, 75–85. [Google Scholar] [CrossRef]
  28. Li, B.; Ren, W.; Fu, D.; Tao, D.; Feng, D.; Zeng, W.; Wang, Z. Benchmarking Single-Image Dehazing and Beyond. IEEE Trans. Image Process. 2019, 28, 492–505. [Google Scholar] [CrossRef]
  29. Zhao, S.; Zhang, L.; Shen, Y.; Zhou, Y. RefineDNet: A Weakly Supervised Refinement Framework for Single Image Dehazing. IEEE Trans. Image Process. 2021, 30, 3391–3404. [Google Scholar] [CrossRef]
  30. Isola, P.; Zhu, J.-Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  31. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
  32. Yeganeh, H.; Wang, Z. Objective Quality Assessment of Tone-Mapped Images. IEEE Trans. Image Process. 2013, 22, 657–667. [Google Scholar] [CrossRef] [PubMed]
  33. Nafchi, H.Z.; Shahkolaei, A.; Moghaddam, R.F.; Cheriet, M. FSITM: A Feature Similarity Index for Tone-Mapped Images. IEEE Signal Process. Lett. 2015, 22, 1026–1029. [Google Scholar] [CrossRef]
  34. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-Reference Image Quality Assessment in the Spatial Domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef]
  35. Zhang, L.; Zhang, L.; Bovik, A.C. A Feature-Enriched Completely Blind Image Quality Evaluator. IEEE Trans. Image Process. 2015, 24, 2579–2591. [Google Scholar] [CrossRef]
  36. Min, X.; Zhai, G.; Gu, K.; Yang, X.; Guan, X. Objective Quality Evaluation of Dehazed Images. IEEE Trans. Intell. Transp. Syst. 2019, 20, 2879–2892. [Google Scholar] [CrossRef]
  37. Ma, K.; Liu, W.; Wang, Z. Perceptual evaluation of single image dehazing algorithms. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 3600–3604. [Google Scholar] [CrossRef]
Figure 1. (a) Original clear image, (b) dark channel of (a), (c) a hazy image generated from (a), (d) dark channel of (c).
Figure 2. Flowchart of the proposed HLE scheme.
Figure 3. The distribution of $\tilde{\mu}_{dc}(0.9)$ for GT images in the RESIDE data set.
Figure 4. Five sample hazy GT images in the RESIDE data set.
Figure 5. The proposed SDID framework.
(Figure images omitted.)
Table 1. Three RESIDE GT images and the related $\tilde{\mu}_{dc}(0.4)$, $\tilde{\mu}_{dc}(0.9)$, and $\Delta\tilde{\mu}_{dc}$ (images omitted).

                            I_1      I_2      I_3
$\tilde{\mu}_{dc}(0.4)$     0.0252   0.2461   0.2312
$\tilde{\mu}_{dc}(0.9)$     0.2528   0.2742   0.4661
$\Delta\tilde{\mu}_{dc}$    0.2276   0.0281   0.2349
Discrimination result       clear    clear    hazy
Table 2. The number of hazy images and GT images after the data cleaning by the HLE.

$\eta$      0.025     0.05      0.075     0.1       Original
$N_h^c$     104,440   113,295   136,570   166,425   313,950
$N_g^c$     2984      3237      3902      4755      8970
$R$ (%)     33.27     36.09     43.50     53.01     100
Table 3. The PSNR comparison of the GCAN_o, GCAN_10K, GCAN_20K, and GCAN_30K models (RESIDE).

Testing Set   GCAN_o   GCAN_10K   GCAN_20K   GCAN_30K
10 K          24.89    27.53      27.93      27.99
30 K          24.96    27.56      28.01      28.07
50 K          24.97    27.57      28.02      28.08
Average       24.94    27.55      27.99      28.04
Table 4. Objective comparison of the GCAN_o and GCAN_30K models (RESIDE); ranks in parentheses.

            SSIM↑     TMQI↑     FSITM↑    BRISQUE↑   ILNIQE↓    DHQI↑      $\bar{R}$
GCAN_o      0.91(2)   0.92(2)   0.73(2)   20.46(1)   21.31(2)   54.96(2)   1.83
GCAN_30K    0.93(1)   0.93(1)   0.74(1)   19.84(2)   20.88(1)   55.36(1)   1.17
Table 5. Subjective comparison of the GCAN_o and GCAN_30K models (RESIDE). Each row gives $\tilde{\mu}_{dc}(0.9)$ of the hazy image $I_h$, the PSNR of $I_h$ where reported, and the PSNR of the dehazed results (images omitted).

       $\tilde{\mu}_{dc}(0.9)$   PSNR ($I_h$)   PSNR (GCAN_o)   PSNR (GCAN_30K)
I_1    0.11                      n/a            28.09           28.69
I_2    0.26                      21.03          26.23           28.70
I_3    0.36                      14.22          25.61           28.68
I_4    0.45                      13.97          25.67           25.48
I_5    0.54                      n/a            28.07           29.48
I_6    0.63                      n/a            19.72           25.44
Table 6. Comparison of PSNR for the REFN_o, REFN_10K, REFN_20K, and REFN_30K models (RESIDE).

Testing Set   REFN_o   REFN_10K   REFN_20K   REFN_30K
10 K          23.49    28.56      29.17      28.86
30 K          23.43    28.57      29.19      28.87
50 K          23.45    28.58      29.20      28.87
Average       23.45    28.57      29.19      28.87
Table 7. Objective comparison of the REFN_o and REFN_20K models (RESIDE); ranks in parentheses.

            SSIM↑     TMQI↑     FSITM↑    BRISQUE↑   ILNIQE↓    DHQI↑      $\bar{R}$
REFN_o      0.93(2)   0.93(2)   0.77(2)   16.28(1)   19.78(1)   52.67(2)   1.67
REFN_20K    0.97(1)   0.94(1)   0.80(1)   17.66(2)   20.24(2)   56.26(1)   1.33
Table 8. Subjective comparison of the REFN_o and REFN_20K models (RESIDE). Each row gives $\tilde{\mu}_{dc}(0.9)$ of the hazy image $I_h$ and the PSNR of the dehazed results (images omitted).

       $\tilde{\mu}_{dc}(0.9)$   PSNR (REFN_o)   PSNR (REFN_20K)
I_1    0.15                      23.05           31.50
I_2    0.25                      27.19           33.69
I_3    0.36                      19.49           24.48
I_4    0.49                      25.31           25.40
I_5    0.53                      22.30           27.89
I_6    0.64                      20.42           22.66
Table 9. Comparison of PSNR for the cGAN_e, cGAN_10K, cGAN_20K, and cGAN_30K models (RESIDE).

Testing Set   cGAN_e   cGAN_10K   cGAN_20K   cGAN_30K
10 K          22.34    28.11      28.41      28.75
30 K          22.33    28.14      28.45      28.78
50 K          22.31    28.15      28.46      28.79
Average       22.33    28.13      28.44      28.77
Table 10. Objective comparison of the cGAN_e and cGAN_30K models (RESIDE); ranks in parentheses.

            SSIM↑     TMQI↑     FSITM↑    BRISQUE↑   ILNIQE↓    DHQI↑      $\bar{R}$
cGAN_e      0.91(2)   0.89(2)   0.74(2)   10.99(1)   20.39(1)   57.90(1)   1.50
cGAN_30K    0.94(1)   0.94(1)   0.76(1)   11.88(2)   33.77(2)   57.75(2)   1.50
Table 11. Subjective comparison of the cGAN_e and cGAN_30K models (RESIDE). Each row gives $\tilde{\mu}_{dc}(0.9)$ of the hazy image $I_h$ and the PSNR of the dehazed results (images omitted).

       $\tilde{\mu}_{dc}(0.9)$   PSNR (cGAN_e)   PSNR (cGAN_30K)
I_1    0.12                      23.44           30.78
I_2    0.27                      27.41           27.70
I_3    0.35                      15.49           24.14
I_4    0.43                      26.30           24.92
I_5    0.55                      16.45           27.63
I_6    0.60                      23.57           26.58
Table 12. Objective comparison of the GCAN_o and GCAN_30K models (KeDeMa); ranks in parentheses.

            BRISQUE↑   ILNIQE↓   DHQI↑      $\bar{R}$
GCAN_o      19.27(2)   26.30(2)  50.23(2)   2
GCAN_30K    17.28(1)   25.93(1)  50.90(1)   1
Table 13. Subjective comparison of the GCAN_o and GCAN_30K models (KeDeMa). (Image grid for I_1 to I_6, showing the hazy input I_h and the dehazed results of GCAN_o and GCAN_30K; images omitted.)
Table 14. Objective comparison of the REFN_o and REFN_20K models (KeDeMa); ranks in parentheses.

            BRISQUE↑   ILNIQE↓   DHQI↑      $\bar{R}$
REFN_o      14.33(1)   23.43(1)  48.25(2)   1.33
REFN_20K    16.97(2)   26.11(2)  51.06(1)   1.67
Table 15. Subjective comparison of the REFN_o and REFN_20K models (KeDeMa). (Image grid for I_1 to I_6, showing the hazy input I_h and the dehazed results of REFN_o and REFN_20K; images omitted.)
Table 16. Objective comparison of the cGAN_e and cGAN_30K models (KeDeMa); ranks in parentheses.

            BRISQUE↑   ILNIQE↓   DHQI↑      $\bar{R}$
cGAN_e      10.50(1)   24.84(1)  63.46(2)   1.33
cGAN_30K    29.97(2)   37.81(2)  55.31(1)   1.67
Table 17. Subjective comparison of the cGAN_e and cGAN_30K models (KeDeMa). (Image grid for I_1 to I_6, showing the hazy input I_h and the dehazed results of cGAN_e and cGAN_30K; images omitted.)