Article

No-Reference Quality Assessment Based on Dual-Channel Convolutional Neural Network for Underwater Image Enhancement

1 Faculty of Information Science and Engineering, Ningbo University, Ningbo 315211, China
2 College of Science and Technology, Ningbo University, Ningbo 315212, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(22), 4451; https://doi.org/10.3390/electronics13224451
Submission received: 17 October 2024 / Revised: 2 November 2024 / Accepted: 12 November 2024 / Published: 13 November 2024

Abstract

Underwater images are important for underwater vision tasks, yet their quality often degrades during imaging, which has motivated the development of Underwater Image Enhancement (UIE) algorithms. This paper proposes a Dual-Channel Convolutional Neural Network (DC-CNN)-based quality assessment method to evaluate the performance of different UIE algorithms. Specifically, inspired by intrinsic image decomposition, the enhanced underwater image is decomposed, based on the Retinex theory, into reflectance carrying color information and illumination carrying texture information. Afterward, we design a DC-CNN with two branches to learn color and texture features from the reflectance and illumination, respectively, reflecting the distortion characteristics of enhanced underwater images. To integrate the learned features, a feature fusion module and attention mechanisms are employed so that the fused features align efficiently with human visual perception. Finally, a quality regression module is used to establish the mapping relationship between the extracted features and quality scores. Experimental results on two public enhanced underwater image datasets (i.e., UIQE and SAUD) show that the proposed DC-CNN method outperforms a variety of existing quality assessment methods.

1. Introduction

Underwater visual information acquisition is crucial for tasks such as underwater resource detection, intelligent perception of the underwater environment, underwater archeology, and many others [1,2]. High-quality underwater images convey valuable information about the underwater world. However, the complex imaging environment often introduces distortions that significantly degrade the quality of raw underwater images, including color deviation, detail blurring, and contrast degradation caused by light absorption, forward scattering, and backward scattering, respectively, resulting in low-visibility imaging scenes [3,4,5].
To address these challenges, numerous Underwater Image Enhancement (UIE) algorithms have been developed to attain high-quality underwater images [6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25]. While a human-assisted judgment process can be used to select a well-performing UIE algorithm, it is costly, time-consuming, and impractical for real-time systems. Hence, it is necessary to design an effective and objective quality assessment method for comparing the performance of UIE algorithms and predicting the quality of enhanced underwater images. Existing objective Image Quality Assessment (IQA) methods can be classified into three types according to whether the assessment process requires reference information, i.e., Full-Reference (FR), Reduced-Reference (RR), and No-Reference (NR) [26]. FR-IQA and RR-IQA methods primarily identify discrepancies between the reference image and the distorted one, using full or partial reference information, respectively. Conversely, NR-IQA methods do not require any reference information, making them more suitable for underwater IQA tasks, where pristine reference images are unavailable.
Most of the prevailing NR-IQA methods are tailored to natural images [27,28] and exhibit relatively good performance. Among them, Moorthy and Bovik [28] devised a simple NR-IQA method named the Blind Image Quality Index (BIQI) by dividing the IQA task into two steps, i.e., distortion classification and quality assessment. Building upon BIQI, Moorthy and Bovik [29] introduced the Distortion Identification-based Image Verity and Integrity Evaluation (DIIVINE) method. Mittal et al. [30] proposed the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), achieving excellent performance by fitting the extracted Mean Subtracted Contrast Normalized (MSCN) coefficients. Li et al. [31] introduced the Gradient Local Binary Pattern (GLBP) method, which extracts LBP features based on gradient information and weights them with the gradient amplitude. Liu et al. [32] combined local spatial entropy and spectral entropy features to develop the Spatial and Spectral Entropy Quality Index (SSEQ). Moreover, Min et al. [33] presented the Blind Multiple Pseudo-Reference Image Quality Index (BMPRI), which further degrades distorted images to generate multiple pseudo-reference images for quality-aware feature extraction. As CNNs have found wide application in image processing, several CNN-based NR-IQA methods [34,35,36,37,38] have been developed, demonstrating good performance on natural images. However, due to the vastly different imaging environments of underwater and natural images, these IQA methods cannot be directly applied to underwater images.
To objectively measure the quality of underwater images, numerous IQA methods have been specifically proposed for this purpose, taking into account the unique characteristics of underwater imaging. Among them, Panetta et al. [39] introduced an underwater IQA method called the Underwater Image Quality Measure (UIQM), which employs a linear combination of color, sharpness, and contrast to perceive distortion. Yang and Sowmya [40] developed another method named Underwater Color Image Quality Evaluation (UCIQE), which combines the chromaticity standard deviation, luminance contrast, and saturation average. Yang et al. [41] constructed a large-scale underwater raw dataset and introduced the Frequency Domain Underwater Metric (FDUM), a new reference-free underwater IQA method that combines color, contrast, and sharpness. With the development of UIE algorithms, some methods have emerged that capture the distortion introduced by UIE. For instance, Li et al. [42] proposed the Underwater Image Quality Evaluation Index (UIQEI), which takes into account the gradients in dark and bright channels. Jiang et al. [43] developed a method named the No-Reference Underwater Image Quality metric (NUIQ), based on luminance and chromaticity. Additionally, Liu et al. [44] and Yi et al. [45] designed six quality-aware features and three quality-sensitive statistical features, respectively, to characterize various distortions more comprehensively. However, these methods rely heavily on handcrafted features and thus have limitations.
Given its powerful feature representation capability, deep learning has been widely applied across many fields. CNNs have shown their advantages in natural-image IQA, and in recent years, Generative Adversarial Networks (GANs) have gradually been used to complement their feature extraction capability [46,47,48]. Inspired by these works, Fu et al. [49] introduced a rank-learning framework based on self-supervised mechanisms to predict the quality of enhanced underwater images. Guo et al. [50] also proposed a rank-based method, called Uranker, utilizing color histogram priors for global degradation focusing and a dynamic cross-scale consistency module for local degradation modeling. However, CNNs struggle with long-range dependencies. Dong et al. [51] tackled this by proposing a Transformer-based multiscale residual self-attention network for global feature representation and later developed a wavelet-integrated frequency attention network [52] to enhance training efficiency. These advancements suggest that combining attention-based modeling strategies may drive progress in NR-IQA. Nevertheless, designing effective objective NR-IQA methods for comparing the performance of various UIE algorithms remains a significant challenge.
In fact, enhanced underwater images are susceptible to color and texture distortions caused by underwater imaging. Color distortion can be considered inherent to the enhanced image, while texture distortion may arise from variations in illumination caused by factors such as light absorption. To address the aforementioned challenges, this paper proposes a Dual-Channel Convolutional Neural Network (DC-CNN)-based NR-IQA method to predict the quality of enhanced underwater images. Specifically, considering the color and texture distortions present in such images and drawing on the Retinex theory [53], the enhanced underwater image can be decomposed into two components, i.e., reflectance and illumination. A dual-channel network is devised to extract distinct features from these decomposed layers, thereby mitigating the extraction of redundant or invalid features. To effectively integrate color- and texture-related features, a feature fusion module is designed, incorporating global pooling and channel interaction during the feature extraction process. Finally, channel attention and spatial attention mechanisms are introduced to consolidate the final features for quality regression. In summary, the main contributions of this paper can be outlined as follows:
(1) Inspired by intrinsic image decomposition principles, the input enhanced underwater image is decomposed, based on the Retinex theory, into a reflectance component containing color information and an illumination component containing texture information. This decomposition aims to characterize color and texture degradations separately.
(2) Building upon the two decomposition components, a DC-CNN is devised to extract features related to color and texture to describe the distortion characteristics. Additionally, a feature fusion module is implemented to facilitate interaction between the two factors.
(3) Given that the human visual system has inherent viewing habits, attention mechanisms comprising channel attention and spatial attention are introduced to emphasize the regions that attract visual attention. Experimental results on two enhanced underwater image datasets demonstrate the superiority of the proposed method.
The rest of this paper is organized as follows. Section 2 describes the proposed method in detail. Section 3 presents and analyzes the experimental results. Section 4 discusses the limitations and possible improvements, and Section 5 concludes the paper.

2. Proposed Method

Inspired by the intrinsic decomposition mechanism, firstly, in the image preprocessing stage, we decompose the enhanced underwater image into its reflectance component and illumination component based on the Retinex theory. Subsequently, we design a feature extraction module with DC-CNN to learn color-related and texture-related features. Finally, attention modules are employed to refine the extracted features, followed by a quality regression module to obtain final quality scores. Figure 1 illustrates the framework of the proposed method, and each stage is elaborated upon in the following subsections.

2.1. Image Preprocessing

2.1.1. Retinex Decomposition

Based on the Retinex theory [53], the reflectance and illumination components of an image can be obtained through a decomposition process. The specific steps of this decomposition process are described as follows.
Let S denote the observed enhanced underwater image, which can be represented as S = R·I, where R and I denote the reflectance component and the illumination component, respectively, and · denotes element-wise multiplication. However, it is difficult to estimate R and I using only elaborate handcrafted constraints. Following [53], a Decom-Net is utilized to accomplish the decomposition task, trained on pairs of low-light and normal-light images. Specifically, the architecture comprises a 3 × 3 convolutional layer for feature extraction from the input image, followed by several 3 × 3 convolutional layers with ReLU activation that map the input image into the reflectance and illumination components. Additionally, a 3 × 3 convolutional layer is employed to project the reflectance and illumination from the feature space. The resulting reflectance and illumination values are constrained to the [0, 1] range using the sigmoid function.
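As a rough illustration of this decomposition step, the following PyTorch sketch shows a Decom-Net-style network in the spirit of [53]; the layer width, depth, and the 3 + 1 channel split between reflectance and illumination are assumptions rather than the exact configuration used in this paper.

```python
import torch
import torch.nn as nn

class DecomNet(nn.Module):
    """Minimal Decom-Net-style sketch: maps an RGB image to a 3-channel
    reflectance map and a 1-channel illumination map, both constrained to [0, 1]."""
    def __init__(self, channels=64, num_layers=5):
        super().__init__()
        layers = [nn.Conv2d(3, channels, kernel_size=3, padding=1)]      # shallow feature extraction
        for _ in range(num_layers):
            layers += [nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]                            # mapping layers with ReLU
        self.body = nn.Sequential(*layers)
        self.proj = nn.Conv2d(channels, 4, kernel_size=3, padding=1)     # project back from feature space

    def forward(self, s):
        feat = self.body(s)
        out = torch.sigmoid(self.proj(feat))                             # sigmoid keeps outputs in [0, 1]
        reflectance, illumination = out[:, :3], out[:, 3:]
        return reflectance, illumination

# usage: decompose an enhanced underwater image S (values in [0, 1]), S ≈ R · I
net = DecomNet()
S = torch.rand(1, 3, 256, 256)
R, I = net(S)
```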
Figure 2 illustrates the decomposition results of different underwater images enhanced by three UIE algorithms, i.e., RD-based [9], Retinex [10], and Water-Net [17]. In Figure 2, the first column shows three enhanced underwater images, the second column displays their reflectance components, and the third column exhibits their illumination components. It can be noticed that Figure 2a is more vividly colored than Figure 2d,g, but its details are less distinct; Figure 2d looks relatively normal, while Figure 2g has a distinct green background. The reflectance component reflects the color characteristics of the enhanced underwater image, indicating that it contains the true colors of the underwater scene. From the illumination components, it is evident that the texture details in Figure 2f are clearer than those of the other two. This consistency with the appearance of the corresponding enhanced underwater image indicates that the illumination component effectively represents texture detail information.

2.1.2. Patch Partition and Local Normalization

The obtained R and I components are divided into non-overlapping patches of size 64 × 64. These patches are denoted as Rn and In (where n ∈ {1, 2, …, m}), respectively, with m representing the total number of patches. Then, to expedite convergence, a local contrast normalization process is performed on each patch. Let G(i,j,d) denote the pixel value at position (i,j,d) in a patch G, and \hat{G}(i,j,d) its normalized result. The normalization process can be expressed as follows:
\hat{G}(i,j,d) = \frac{G(i,j,d) - \xi(i,j,d)}{\sigma(i,j,d) + C}
\xi(i,j,d) = \sum_{k=-K}^{K} \sum_{l=-L}^{L} \omega_{k,l}\, G(i+k, j+l, d)
\sigma(i,j,d) = \sqrt{\sum_{k=-K}^{K} \sum_{l=-L}^{L} \omega_{k,l}\, \left( G(i+k, j+l, d) - \xi(i,j,d) \right)^{2}}
where d indexes the color channel and C is a small constant to prevent division by zero. ξ(i,j,d) and σ(i,j,d) represent the local mean and the local deviation of the intensity values in each patch, respectively, and are calculated within a (2K + 1) × (2L + 1) window. ω = {ω_{k,l} | k = −K, …, K, l = −L, …, L} is a two-dimensional circularly symmetric Gaussian weighting function.
As the window size increases, the performance tends to degrade [27]. Therefore, based on empirical evidence, we set K = L = 3, which is smaller than the input size.
Based on the above elaboration, the patch Rn and patch In can be locally normalized for subsequent utilization.
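To make the normalization concrete, here is a minimal PyTorch sketch of the local contrast normalization above; the Gaussian width and the value of the constant C are assumed here, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def local_contrast_normalize(patch, K=3, L=3, sigma=7/6, C=1e-6):
    """Local contrast normalization of a batch of patches [B, C, H, W].
    A circularly symmetric Gaussian window of size (2K+1) x (2L+1) provides
    the local weighted mean xi and local deviation used in the equations above."""
    ch = patch.shape[1]
    ys = torch.arange(-K, K + 1, dtype=torch.float32)
    xs = torch.arange(-L, L + 1, dtype=torch.float32)
    yy, xx = torch.meshgrid(ys, xs, indexing="ij")
    w = torch.exp(-(yy ** 2 + xx ** 2) / (2 * sigma ** 2))
    w = (w / w.sum()).view(1, 1, 2 * K + 1, 2 * L + 1).repeat(ch, 1, 1, 1)  # one kernel per channel

    xi = F.conv2d(patch, w, padding=(K, L), groups=ch)                      # local weighted mean
    var = F.conv2d(patch ** 2, w, padding=(K, L), groups=ch) - xi ** 2
    dev = torch.sqrt(var.clamp(min=0.0))                                    # local deviation
    return (patch - xi) / (dev + C)
```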

2.2. Feature Extraction Module

After obtaining the decomposed results (i.e., reflectance and illumination patches), it is crucial to characterize the distortion reflected by these two components. In this paper, a dual-channel feature extraction network is designed to achieve this distortion representation goal. This network comprises a reflectance sub-network and an illumination sub-network, as illustrated in Figure 1.
Specifically, the local normalized patches Rn serve as input to the reflectance sub-network, tasked with learning the color distortion of the enhanced underwater images. The reflectance sub-network includes five stages of feature extraction, each composed of two 3 × 3 convolutional layers and ReLU activation. A maximum pooling layer and a feature fusion module are integrated into the first and third stages to merge features from the corresponding stages of the illumination sub-network. Additionally, to address gradient disappearance and network degradation issues, we incorporate three residual modules into the reflectance sub-network to ensure data fidelity. The residual module consists of four 3 × 3 convolutional layers and four ReLU layers, as depicted in Figure 3. The stride for all convolutional layers is set to 1, and the padding is set to 0.
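For illustration, a minimal sketch of such a residual module (Figure 3) is given below; padding of 1 is assumed here (rather than 0) purely so that the identity skip connection matches the convolution output in shape, which may differ from the authors' exact configuration.

```python
import torch.nn as nn

class ResidualModule(nn.Module):
    """Sketch of the residual module in Figure 3: four 3x3 convolutional layers,
    each followed by ReLU, wrapped by an identity skip connection."""
    def __init__(self, channels):
        super().__init__()
        layers = []
        for _ in range(4):
            layers += [nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1),
                       nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x + self.body(x)   # residual connection eases gradient flow
```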
Similarly, the local normalized patches In are used as input to the illumination sub-network to learn the texture distortion of the enhanced underwater images. The structure of the illumination sub-network is identical to that of the reflectance sub-network, except that it contains no feature fusion module, since fusion is performed within the reflectance sub-network. The final features of the two sub-networks are concatenated to form the final feature, denoted as M. The details of the feature fusion module are described below.
For a holistic image, color variation and texture loss jointly influence visual perception. To simulate this relationship, we design two fusion modules during feature extraction for integrating color-related and texture-related features. These modules are integrated into the reflectance sub-network. Specifically, the fusion modules allow interaction between color-related and texture-related features to better perceive color changes and texture distortions. The main idea is to calculate the weights of these two types of features during feature extraction and merge them effectively. The detailed architecture is depicted in Figure 4. Taking the color-related feature FR1 and the texture-related feature FI1 as examples, they are combined by element-wise addition to obtain a sum, denoted as X1. Then, X1 is fed into two parallel pooling paths, i.e., global average pooling and global max pooling. To facilitate interaction between the different pooling channels, two point-wise convolutional layers and one ReLU activation layer are utilized. Finally, two sigmoid functions are used to compute the weights for FR1 and FI1, denoted as ω1 and ω2, respectively. The specific process can be defined as follows:
T_R = \mathrm{pwconv}\left( \mathrm{ReLU}\left( \mathrm{pwconv}(X_{R1}) \right) \right)
T_I = \mathrm{pwconv}\left( \mathrm{ReLU}\left( \mathrm{pwconv}(X_{I1}) \right) \right)
\omega_1, \omega_2 = \sigma(T_R), \sigma(T_I)
where pwconv(·) denotes the point-wise convolution and σ(·) is the sigmoid function.
Finally, FR1 and FI1 are multiplied with the weights ω1 and ω2, respectively, and the respective results are added to each other to obtain the final fused feature, marked as F1, which can be defined as:
F_1 = F_{R1} \times \omega_1 + F_{I1} \times \omega_2
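The following PyTorch sketch illustrates one plausible reading of this fusion module, under the assumption that the average-pooled and max-pooled descriptors of X1 drive the weights ω1 and ω2 for the color and texture branches, respectively; the channel reduction ratio r is also an assumption.

```python
import torch
import torch.nn as nn

class FeatureFusionModule(nn.Module):
    """Sketch of the feature fusion module (Figure 4), interpreting the two
    pooling paths as producing the weights for the reflectance (color) and
    illumination (texture) features."""
    def __init__(self, channels, r=4):
        super().__init__()
        def bottleneck():
            return nn.Sequential(
                nn.Conv2d(channels, channels // r, kernel_size=1),   # point-wise conv
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // r, channels, kernel_size=1))   # point-wise conv
        self.avg_branch = bottleneck()
        self.max_branch = bottleneck()

    def forward(self, f_r, f_i):
        x = f_r + f_i                                    # element-wise sum X1
        avg = torch.mean(x, dim=(2, 3), keepdim=True)    # global average pooling
        mx = torch.amax(x, dim=(2, 3), keepdim=True)     # global max pooling
        w1 = torch.sigmoid(self.avg_branch(avg))         # weight for color-related features
        w2 = torch.sigmoid(self.max_branch(mx))          # weight for texture-related features
        return f_r * w1 + f_i * w2                       # fused feature F1
```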

2.3. Attention Module

Reflectance and illumination belong to two different modalities, describing different properties of the enhanced underwater image. Features extracted from these two components contribute differently to visual perception. Therefore, we employ a convolutional attention module [54] as an attention mechanism to consider the contribution of both components. This attention mechanism includes channel attention and spatial attention. Specifically, channel attention is used to determine the interdependency between features extracted from different modalities, while spatial attention is employed to differentiate regions of human visual attention.

2.3.1. Channel Attention

The channel attention mechanism adaptively weights the importance of each channel, enhancing the capability of the DC-CNN to capture correlations among channels. In this way, the DC-CNN can extract the most relevant features from the different modalities, i.e., reflectance and illumination, which is essential for accurately representing the severe channel degradation of underwater images.
We aggregate spatial information of the feature map M through global average pooling and global max pooling. The resulting outputs are denoted as Fm1 and Fa1, respectively, calculated as follows:
F_{a1} = \mathrm{AvgPool}(M)
F_{m1} = \mathrm{MaxPool}(M)
where AvgPool(·) and MaxPool(·) are global average pooling and global max pooling, respectively.
These two pooling operations compress global information of each channel into two scalars, serving as representations of spatial features. To establish correlations between channels, we employ two fully connected layers to propagate Fm1 and Fa1. Then, we merge the two feature vectors element-wise and transform the merged feature vector into channel attention weights, denoted as WCA, through the sigmoid function. The channel attention weights WCA are calculated as follows:
W_{CA} = \sigma\left( W_2\left( \delta\left( W_1 \times F_{m1} \right) \right) + W_2\left( \delta\left( W_1 \times F_{a1} \right) \right) \right)
where σ(·) is the sigmoid function, δ(·) is the ReLU function, and W1 and W2 are the weights of the two fully connected layers.
Finally, WCA and M are multiplied element-wise to obtain the feature map C adjusted by channel attention, which can be expressed as follows:
C = W_{CA} \times M
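A minimal sketch of this CBAM-style channel attention [54] is given below; the channel reduction ratio r of the shared fully connected layers is an assumption.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Sketch of the channel attention in Section 2.3.1: shared fully connected
    layers W1, W2 applied to the avg- and max-pooled channel descriptors."""
    def __init__(self, channels, r=16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // r)   # W1
        self.fc2 = nn.Linear(channels // r, channels)   # W2
        self.relu = nn.ReLU(inplace=True)

    def forward(self, m):                               # m: [B, C, H, W]
        avg = torch.mean(m, dim=(2, 3))                 # Fa1, shape [B, C]
        mx = torch.amax(m, dim=(2, 3))                  # Fm1, shape [B, C]
        w = torch.sigmoid(self.fc2(self.relu(self.fc1(mx)))
                          + self.fc2(self.relu(self.fc1(avg))))   # W_CA
        return m * w.unsqueeze(-1).unsqueeze(-1)        # C = W_CA x M, broadcast over H, W
```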

2.3.2. Spatial Attention

The spatial attention mechanism models spatial dependencies within feature maps, enabling DC-CNN to emphasize important regions in underwater images. Specifically, the spatial attention mechanism captures spatial context that is crucial for visual perception features in underwater environments.
Similarly to channel attention, for spatial attention, we utilize global average pooling and global max pooling along the channel direction to aggregate channel information from the feature map C. The resulting outputs are denoted as Fm2 and Fa2, respectively. They can be calculated using the following equations:
F_{a2} = \mathrm{AvgPool}(C)
F_{m2} = \mathrm{MaxPool}(C)
These two pooling operations compress all channel information into a single channel, serving as representations of channel features. Next, Fm2 and Fa2 are concatenated, and passed through a convolutional layer and a sigmoid function to obtain the spatial attention weights, WSA:
W_{SA} = \sigma\left( \mathrm{conv}\left( \mathrm{concat}(F_{m2}; F_{a2}) \right) \right)
where conv(·) denotes a convolution with a 3 × 3 kernel, and concat(·) denotes channel-wise concatenation.
Finally, the spatial attention weights, WSA, are multiplied element-wise with C to obtain the feature map, MS, adjusted based on spatial attention. This can be represented as:
M_S = W_{SA} \times C
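The corresponding spatial attention can be sketched as follows; only the 3 × 3 convolution kernel size is taken from the text, and the rest follows the usual CBAM formulation [54].

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Sketch of the spatial attention in Section 2.3.2: channel-wise average
    and max pooling, concatenation, a 3x3 convolution, and a sigmoid."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=3, padding=1)

    def forward(self, c):                                 # c: [B, C, H, W]
        avg = torch.mean(c, dim=1, keepdim=True)          # Fa2: [B, 1, H, W]
        mx, _ = torch.max(c, dim=1, keepdim=True)         # Fm2: [B, 1, H, W]
        w = torch.sigmoid(self.conv(torch.cat([mx, avg], dim=1)))  # W_SA
        return c * w                                       # M_S = W_SA x C
```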

2.4. Quality Regression Module

The main task of the quality regression module is to establish a mapping relationship between features and subjective opinion scores. Here, a simple and efficient architecture is employed, consisting of two convolutional layers, two max pooling layers, two average pooling layers, one global max pooling layer, one global average pooling layer, and one fully connected layer. Specifically, the global max pooling and global average pooling are utilized for final feature filtering to enhance the robustness of quality regression. Finally, the quality scores for all patches are averaged to obtain the predicted quality of the entire enhanced underwater image.
The loss function for both the feature extraction module and the quality regression module is the l1 loss, which can be represented as follows:
L = \frac{1}{N} \sum_{n=1}^{N} \left\| Q_n - q_n \right\|_{1}
where Qn is the predicted quality of the n-th patch by the proposed method and qn is the subjective opinion score.
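As a usage sketch, the l1 loss and the patch-score averaging described above can be written as follows; the patch shapes and the model interface are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# l1 loss between predicted patch scores Q_n and subjective opinion scores q_n
criterion = nn.L1Loss()

def predict_image_quality(model, r_patches, i_patches):
    """Average patch-level predictions to obtain the image-level quality score.
    `model` is assumed to map batches of reflectance and illumination patches
    (e.g., [m, 3, 64, 64] each) to a vector of m patch scores."""
    model.eval()
    with torch.no_grad():
        patch_scores = model(r_patches, i_patches).flatten()  # [m] patch-level scores
    return patch_scores.mean().item()                         # image-level quality
```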

3. Experimental Results

In this section, we will present experimental results and conduct experimental analysis. Firstly, two publicly available datasets and three criteria used for experimental performance verification are described in detail. In addition, the specific details of the training-testing experiments are also elaborated. Next, the experimental results are compared with different NR-IQA methods. Finally, the effectiveness of the proposed method is validated through ablation experiments.

3.1. Experimental Setups

3.1.1. Benchmark Datasets

To validate the performance of the proposed method, we conducted comparative experiments on the SAUD dataset [43] and the UIQE dataset [42].
The SAUD dataset comprises 100 underwater raw images from different scenes and 1000 enhanced underwater images. Each underwater raw image is enhanced using 10 mainstream enhancement algorithms, including BL-TM [6], HP-based [7], RD-based [9], Retinex [10], RGHS [11], TS-based [12], and UDCP [13], as well as the latest deep learning-based algorithms UWCNN [14], GL-Net [15], and Water-Net [17].
In contrast, the UIQE dataset comprises 45 underwater raw images from diverse scenes and 405 enhanced underwater images. Each underwater raw image is enhanced using 9 algorithms, namely Retinex [10], UDCP [13], UIBLA [18], RED [19], CycleGAN [20], WSCT [21], UGAN [22], FGAN [23], and UWCNN-SD [24]. All of these algorithms are distinct from those used in the SAUD dataset, except for Retinex and UDCP.

3.1.2. Evaluation Protocols and Performance Criteria

To evaluate the performance of the proposed method in comparison, we employ three widely used metrics in IQA, i.e., Spearman Rank Order Correlation Coefficient (SROCC), Kendall Rank-Order Correlation Coefficient (KROCC), and Pearson Linear Correlation Coefficient (PLCC). SROCC and KROCC measure the rank correlation between predicted scores and ground truth, which is particularly important for assessing the consistency of quality rankings in underwater environments where image quality can vary significantly. PLCC assesses the linear correlation between predicted scores and ground truth scores, reflecting the accuracy of predictions in capturing subtle variations in underwater image quality. A value closer to 1 for SROCC, KROCC, and PLCC indicates higher levels of monotonicity, consistency, and accuracy of the method, respectively.
We implemented the proposed method using the PyTorch 2.3.0 [55] framework. For each dataset, 80% of the images were randomly selected for the training set according to scene, so that images of the same scene do not appear in both sets, while the remaining 20% were allocated to the testing set. This random train-test splitting process was repeated 10 times, with SROCC, KROCC, and PLCC recorded for each iteration. Training ran for 150 epochs with the Adam optimizer and a learning rate of 0.0005. Finally, the mean values of the 10 SROCC, KROCC, and PLCC scores were taken as the reported performance results.
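For reference, a minimal sketch of how SROCC, KROCC, and PLCC can be computed for one split with SciPy is shown below; note that no nonlinear mapping is applied to the predicted scores in this sketch.

```python
import numpy as np
from scipy import stats

def evaluate_split(pred, mos):
    """SROCC, KROCC, and PLCC between predicted scores and MOS values for one
    train-test split; the paper reports the mean over 10 random splits."""
    pred, mos = np.asarray(pred, dtype=float), np.asarray(mos, dtype=float)
    srocc = stats.spearmanr(pred, mos)[0]    # rank monotonicity
    krocc = stats.kendalltau(pred, mos)[0]   # rank consistency
    plcc = stats.pearsonr(pred, mos)[0]      # linear accuracy
    return srocc, krocc, plcc
```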

3.2. Performance Evaluation

To validate the effectiveness of the proposed method, 16 mainstream NR-IQA methods are selected for performance comparison, and categorized into In-air IQA methods and UIQA methods. The In-air IQA methods consist of five general NR-IQA methods (i.e., DIIVINE [29], BRISQUE [30], GLBP [31], SSEQ [32], and BMPRI [33]), along with three deep learning-based NR-IQA methods (i.e., CNN-IQA [34], MUSIQ [37], VCRNet [38]). The UIQA methods include four commonly used underwater raw image quality assessment methods (i.e., UIQM [39], UCIQE [40], CCF [56], and FDUM [41]), as well as four UIE-IQA methods (i.e., NUIQ [43], Twice-Mixing [49], Uranker [50], and UIQI [44]). To ensure the fairness and consistency of the experiments, all NR-IQA methods used for comparison were implemented based on their official source codes. The parameter settings for each method were strictly configured according to the recommended settings in their original publications or the default parameters provided in the source codes. It should be noted that the PLCC values of the Twice-Mixing and Uranker methods are calculated by linearly fitting their predicted ranking scores to the MOS values.

3.2.1. Performance Comparisons on SAUD

The proposed method is experimentally compared with all the aforementioned methods on the SAUD dataset, and the results are shown in Table 1, with the best performance highlighted in bold. From the results, the following conclusions can be drawn. Firstly, deep learning-based methods (i.e., CNN-IQA [34], MUSIQ [37], VCRNet [38]) outperform most handcrafted feature-based NR-IQA methods in terms of performance metrics. This is reasonable because deep neural networks have strong feature learning capabilities, enabling them to directly establish a mapping relationship between features and objective quality scores end-to-end. Secondly, the BRISQUE method performs well in assessing structural degradation but lacks consideration of color information, resulting in overall mediocre performance. GLBP extracts LBP features in the gradient domain, but its performance is not ideal because the extracted gradient domain features cannot effectively represent the color distortion of enhanced underwater images. Although BMPRI considers many different distortions, it still falls short, and its performance is not satisfactory. Thirdly, the four methods (i.e., UCIQE, UIQM, CCF, and FDUM) used for underwater raw image quality assessment perform poorly. This is because different UIE algorithms have different focuses, and besides inheriting distortion from underwater raw images, they also introduce red-shifts, artifacts, and other issues. Fourthly, apart from the proposed method, NUIQ and Uranker stand out among various UIQA methods. The NUIQ method performs the best as it considers luminance and chrominance, capturing the most pertinent information. The Uranker method achieves the second-highest performance, benefiting from its use of color priors and dynamic scale features to capture global and local degradation. Finally, in all cases, the proposed method outperforms other competing methods, which may be attributed to our decomposition of enhanced underwater images into reflectance and illumination components, where reflectance primarily provides color information and illumination primarily provides texture information. This enables targeted learning of different distortion features, avoiding the learning of redundant features.

3.2.2. Performance Comparisons on UIQE

A well-performing UIE-IQA method should be capable of evaluating the performance of different UIE algorithms. To this end, the performance of various NR-IQA methods is evaluated on another enhanced underwater image dataset (i.e., UIQE), and the results are shown in Table 2. Again, the best performance is highlighted in bold. Note that the UIQEI method was released alongside the UIQE database; its authors naturally designed it around the characteristics of their own dataset, which tends to yield superior performance on it. To better validate the superiority of our proposed method, the UIQEI method is also included in the comparison. From Table 2, it can be observed that the proposed method outperforms the other competing quality assessment methods. Except for the BMPRI method and the deep learning-based CNN-IQA, MUSIQ, and VCRNet, methods designed for general images perform poorly in predicting the quality of enhanced underwater images. Considering the distortion characteristics of enhanced underwater images, the UIQEI method achieves relatively good performance, ranking second among the compared UIQA methods. Overall, the performance trends on the UIQE database are mostly similar to those on SAUD, but there are also significant differences for some methods, such as DIIVINE, UIQM, UCIQE, and NUIQ. This underscores the significance of further research on methods for evaluating the quality of enhanced underwater images.
To visually present the performance comparison results, we also provide a scatter plot in Figure 5 showing the fitting between predicted scores (predicted by NR-IQA methods) and subjective quality scores (MOS values provided in the dataset). The x-axis represents the predicted scores by NR-IQA methods, while the y-axis represents the subjective MOS values of enhanced underwater images. A good NR-IQA method will generate scattered points close to the fitting curve. It can be clearly observed from Figure 5 that the proposed method achieved the best results on the UIQE dataset.
To further demonstrate the performance of the proposed method, we compared the subjective MOS values of enhanced underwater images obtained from different UIE algorithms with the objective quality scores predicted by the proposed method, as shown in Figure 6. From Figure 6, it can be observed that the objective quality scores produced by the proposed method are closely aligned with the subjective MOS values, accurately reflecting human visual perception of image quality. Specifically, Figure 6b,f,h, which received higher MOS values, exhibit improved color accuracy, enhanced contrast, and greater clarity, which are effectively captured by the DC-CNN quality predictions. In contrast, images with lower MOS values, such as Figure 6e,g, display less vibrant colors and reduced contrast, consistent with the corresponding objective scores. These results indicate that the proposed method can effectively distinguish the performance of various UIE algorithms in a way that aligns with subjective quality assessments, further illustrating its superiority in underwater image enhancement evaluation.

3.2.3. Model Efficiency

To illustrate the efficiency of the proposed method, Table 3 presents a comparison of inference times across various IQA models. Although the proposed method is not the fastest, it achieves an effective balance between computational efficiency and assessment accuracy. With an inference time of 0.0719 s, the proposed method operates faster than several well-established methods, such as DIIVINE and FDUM, while maintaining competitive performance.

3.3. Ablation Experiment

This section investigates the impact of different modules on the overall performance of the proposed method. To validate the importance of the dual-channel scheme, we use the same backbone structure as the proposed method and perform quality prediction for enhanced underwater images using different inputs. Specifically, the network model using enhanced underwater images as input is referred to as Model-1, the one using reflectance as input is referred to as Model-2, and the one using illumination as input is referred to as Model-3. The experimental results are shown in Table 4, with the best performance highlighted in bold. It can be observed that the performance of Model-1 is not the worst, but it still lags behind the proposed method. This also suggests that the color-related and texture-related features learned directly from enhanced underwater images may be redundant. The results of Model-3 are the worst, demonstrating the importance of color distortion perception. The proposed method, which combines illumination and reflectance information, achieves better performance, indicating the effectiveness of the dual-channel scheme.
To investigate the impact of the feature fusion module and attention module on the performance of the proposed method, we conduct corresponding ablation experiments. Specifically, the network model without the feature fusion module is denoted as No_FFM, and the one without the attention module is denoted as No_Attention. From Table 5, it can be found that No_FFM and No_Attention still have room for improvement compared to the proposed method. The results of No_FFM indicate the importance of the feature fusion module in aggregating color and texture features. On the other hand, the results of No_Attention suggest that the attention module enhances the features of regions attended by human vision.

4. Discussion

Inspired by Retinex theory, this paper decomposes enhanced underwater images into two components: illumination and reflectance, proposing a novel UIE-IQA method based on a dual-channel network. Through a series of experiments, the proposed method achieves excellent performance on the SAUD dataset and UIQE dataset, mainly attributed to the effective feature extraction and fusion from the reflectance and illumination maps. However, the proposed method lacks targeted feature extraction methods for different branches in the reflectance and illumination sub-networks, which could be further improved for overall performance and generalization. In general, the proposed method can be optimized in the following aspects: (1) Designing a better image decomposition network to obtain feature maps that better represent enhanced underwater image distortions; (2) Designing targeted feature extraction networks for different branches to adapt to different image information; (3) Designing feature fusion modules and attention modules that better align with human visual characteristics to enhance network performance; (4) Creating enhanced underwater image datasets with more UIE algorithms to better train deep learning networks.

5. Conclusions

In this paper, a Dual-Channel Convolutional Neural Network (DC-CNN)-based No-Reference Image Quality Assessment (NR-IQA) method is introduced, tailored to Underwater Image Enhancement (UIE) algorithms. The proposed method comprises four main components: an image pre-processing stage, a feature extraction module, attention modules, and a quality regression module. Firstly, the image pre-processing stage decomposes the input enhanced underwater image into two distinct components, namely reflectance and illumination. This separation enables better learning of the color and texture distortions present in the enhanced underwater image. Secondly, in the feature extraction module, two branches are utilized to capture degradation features related to color and texture separately, and feature fusion modules are employed to integrate these diverse features effectively. Next, attention mechanisms are incorporated to ensure that the extracted features align well with human visual perception, enhancing the overall performance of the proposed method. Lastly, the quality regression module predicts the quality score based on the extracted features. Experimental results on two benchmark datasets validate the effectiveness and superiority of the proposed method in assessing the quality of enhanced underwater images.

Author Contributions

Conceptualization, R.H. and G.J.; Formal analysis, Z.L.; Funding acquisition, T.L. and Z.H.; Investigation, Z.H.; Methodology, R.H. and G.J.; Supervision, T.L.; Validation, Z.L.; Writing—original draft, R.H. and G.J.; Writing—review and editing, T.L. and Z.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by National Natural Science Foundation of China under Grant No. 62471264, Natural Science Foundation of Zhejiang Province under Grant No. LY22F020020, Zhejiang Provincial Postdoctoral Research Excellence Foundation under Grant ZJ2022130, Laboratory of Intelligent Home Appliances, College of Science and Technology, Ningbo University.

Data Availability Statement

The original data presented in this study are openly available in [UIEB Database] at [https://li-chongyi.github.io/proj_benchmark.html (accessed on 15 October 2023)], and [U45 Database] at [https://github.com/IPNUISTlegal/underwater-test-dataset-U45- (accessed on 15 October 2023)].

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Xue, C.; Liu, Q.; Huang, Y.; Cheng, E.; Yuan, F. A Dual-Branch Autoencoder Network for Underwater Low-Light Polarized Image Enhancement. Remote Sens. 2024, 16, 1134. [Google Scholar] [CrossRef]
  2. Zhang, W.; Li, X.; Xu, S.; Li, X.; Yang, Y.; Xu, D.; Liu, T.; Hu, H. Underwater Image Restoration via Adaptive Color Correction and Contrast Enhancement Fusion. Remote Sens. 2023, 15, 4699. [Google Scholar] [CrossRef]
  3. Kang, Y.; Jiang, Q.; Li, C.; Ren, W.; Liu, H.; Wang, P. A perception-aware decomposition and fusion framework for underwater image enhancement. IEEE Trans. Circuits Syst. Video Technol. 2022, 33, 988–1002. [Google Scholar] [CrossRef]
  4. Zhou, J.; Pang, L.; Zhang, D.; Zhang, W. Underwater image enhancement method via multi-interval subhistogram perspective equalization. IEEE J. Ocean. Eng. 2023, 48, 474–488. [Google Scholar] [CrossRef]
  5. Lu, Y.; Yang, D.; Gao, Y.; Liu, R.W.; Liu, J.; Guo, Y. AoSRNet: All-in-One Scene Recovery Networks via Multi-knowledge Integration. Knowl. Based Syst. 2024, 294, 111786. [Google Scholar] [CrossRef]
  6. Song, W.; Wang, Y.; Huang, D.; Liotta, A.; Perra, C. Enhancement of underwater images with statistical model of background light and optimization of transmission map. IEEE Trans. Broadcast. 2020, 66, 153–169. [Google Scholar] [CrossRef]
  7. Li, C.-Y.; Guo, J.-C.; Cong, R.-M.; Pang, Y.-W.; Wang, B. Underwater image enhancement by dehazing with minimum information loss and histogram distribution prior. IEEE Trans. Image Process. 2016, 25, 5664–5677. [Google Scholar] [CrossRef]
  8. Wang, Z.; Li, C.; Mo, Y.; Shang, S. RCA-CycleGAN: Unsupervised underwater image enhancement using Red Channel attention optimized CycleGAN. Displays 2023, 76, 102359. [Google Scholar] [CrossRef]
  9. Abdul Ghani, A.S.; Mat Isa, N.A. Underwater image quality enhancement through composition of dual-intensity images and Rayleigh-stretching. SpringerPlus 2014, 3, 757. [Google Scholar] [CrossRef]
  10. Fu, X.; Zhuang, P.; Huang, Y.; Liao, Y.; Zhang, X.-P.; Ding, X. A retinex-based enhancing approach for single underwater image. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 4572–4576. [Google Scholar]
  11. Huang, D.; Wang, Y.; Song, W.; Sequeira, J.; Mavromatis, S. Shallow-water image enhancement using relative global histogram stretching based on adaptive parameter acquisition. In Proceedings of the MultiMedia Modeling: 24th International Conference, MMM 2018, Bangkok, Thailand, 5–7 February 2018; Proceedings, Part I 24. pp. 453–465. [Google Scholar]
  12. Fu, X.; Fan, Z.; Ling, M.; Huang, Y.; Ding, X. Two-step approach for single underwater image enhancement. In Proceedings of the 2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), Xiamen, China, 6–9 November 2017; pp. 789–794. [Google Scholar]
  13. Drews, P.L.; Nascimento, E.R.; Botelho, S.S.; Campos, M.F.M. Underwater depth estimation and image restoration based on single images. IEEE Comput. Graph. Appl. 2016, 36, 24–35. [Google Scholar] [CrossRef]
  14. Li, C.; Anwar, S.; Porikli, F. Underwater scene prior inspired deep underwater image and video enhancement. Pattern Recognit. 2020, 98, 107038. [Google Scholar] [CrossRef]
  15. Fu, X.; Cao, X. Underwater image enhancement with global–local networks and compressed-histogram equalization. Signal Process. Image Commun. 2020, 86, 115892. [Google Scholar] [CrossRef]
  16. Ding, D.; Gan, S.; Chen, L.; Wang, B. Learning-based underwater image enhancement: An efficient two-stream approach. Displays 2023, 76, 102337. [Google Scholar] [CrossRef]
  17. Li, C.; Guo, C.; Ren, W.; Cong, R.; Hou, J.; Kwong, S.; Tao, D. An underwater image enhancement benchmark dataset and beyond. IEEE Trans. Image Process. 2019, 29, 4376–4389. [Google Scholar] [CrossRef]
  18. Galdran, A.; Pardo, D.; Picón, A.; Alvarez-Gila, A. Automatic red-channel underwater image restoration. J. Vis. Commun. Image Represent. 2015, 26, 132–145. [Google Scholar] [CrossRef]
  19. Peng, Y.-T.; Cosman, P.C. Underwater image restoration based on image blurriness and light absorption. IEEE Trans. Image Process. 2017, 26, 1579–1594. [Google Scholar] [CrossRef]
  20. Zhu, J.-Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232. [Google Scholar]
  21. Fabbri, C.; Islam, M.J.; Sattar, J. Enhancing underwater imagery using generative adversarial networks. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 7159–7165. [Google Scholar]
  22. Li, C.; Guo, J.; Guo, C. Emerging from water: Underwater image color correction based on weakly supervised color transfer. IEEE Signal Process. Lett. 2018, 25, 323–327. [Google Scholar] [CrossRef]
  23. Li, H.; Li, J.; Wang, W. A fusion adversarial underwater image enhancement network with a public test dataset. arXiv 2019, arXiv:1906.06819. [Google Scholar]
  24. Wu, S.; Luo, T.; Jiang, G.; Yu, M.; Xu, H.; Zhu, Z.; Song, Y. A two-stage underwater enhancement network based on structure decomposition and characteristics of underwater imaging. IEEE J. Ocean. Eng. 2021, 46, 1213–1227. [Google Scholar] [CrossRef]
  25. Wang, B.; Xu, H.; Jiang, G.; Yu, M.; Chen, Y.; Ding, L.; Zhang, X.; Luo, T. Underwater image co-enhancement based on physical-guided transformer interaction. Displays 2023, 79, 102505. [Google Scholar] [CrossRef]
  26. Zhai, G.; Min, X. Perceptual image quality assessment: A survey. Sci. China Inf. Sci. 2020, 63, 211301. [Google Scholar] [CrossRef]
  27. Zhang, C.; Huang, Z.; Liu, S.; Xiao, J. Dual-channel multi-task CNN for no-reference screen content image quality assessment. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 5011–5025. [Google Scholar] [CrossRef]
  28. Moorthy, A.K.; Bovik, A.C. A two-step framework for constructing blind image quality indices. IEEE Signal Process. Lett. 2010, 17, 513–516. [Google Scholar] [CrossRef]
  29. Moorthy, A.K.; Bovik, A.C. Blind image quality assessment: From natural scene statistics to perceptual quality. IEEE Trans. Image Process. 2011, 20, 3350–3364. [Google Scholar] [CrossRef] [PubMed]
  30. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef]
  31. Li, Q.; Lin, W.; Fang, Y. No-reference quality assessment for multiply-distorted images in gradient domain. IEEE Signal Process. Lett. 2016, 23, 541–545. [Google Scholar] [CrossRef]
  32. Liu, L.; Liu, B.; Huang, H.; Bovik, A.C. No-reference image quality assessment based on spatial and spectral entropies. Signal Process. Image Commun. 2014, 29, 856–863. [Google Scholar] [CrossRef]
  33. Min, X.; Zhai, G.; Gu, K.; Liu, Y.; Yang, X. Blind image quality estimation via distortion aggravation. IEEE Trans. Broadcast. 2018, 64, 508–517. [Google Scholar] [CrossRef]
  34. Kang, L.; Ye, P.; Li, Y.; Doermann, D. Convolutional neural networks for no-reference image quality assessment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1733–1740. [Google Scholar]
  35. Yang, S.; Jiang, Q.; Lin, W.; Wang, Y. SGDNet: An end-to-end saliency-guided deep neural network for no-reference image quality assessment. In Proceedings of the 27th ACM International Conference on Multimedia, New York, NY, USA, 21–25 October 2019; pp. 1383–1391. [Google Scholar]
  36. Liu, X.; Van De Weijer, J.; Bagdanov, A.D. Rankiqa: Learning from rankings for no-reference image quality assessment. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 1040–1049. [Google Scholar]
  37. Ke, J.; Wang, Q.; Wang, Y.; Milanfar, P.; Yang, F. Musiq: Multi-scale image quality transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 5148–5157. [Google Scholar]
  38. Pan, Z.; Yuan, F.; Lei, J.; Fang, Y.; Shao, X.; Kwong, S. VCRNet: Visual compensation restoration network for no-reference image quality assessment. IEEE Trans. Image Process. 2022, 31, 1613–1627. [Google Scholar] [CrossRef]
  39. Panetta, K.; Gao, C.; Agaian, S. Human-visual-system-inspired underwater image quality measures. IEEE J. Ocean. Eng. 2015, 41, 541–551. [Google Scholar] [CrossRef]
  40. Yang, M.; Sowmya, A. An underwater color image quality evaluation metric. IEEE Trans. Image Process. 2015, 24, 6062–6071. [Google Scholar] [CrossRef] [PubMed]
  41. Yang, N.; Zhong, Q.; Li, K.; Cong, R.; Zhao, Y.; Kwong, S. A reference-free underwater image quality assessment metric in frequency domain. Signal Process. Image Commun. 2021, 94, 116218. [Google Scholar] [CrossRef]
  42. Li, W.; Lin, C.; Luo, T.; Li, H.; Xu, H.; Wang, L. Subjective and objective quality evaluation for underwater image enhancement and restoration. Symmetry 2022, 14, 558. [Google Scholar] [CrossRef]
  43. Jiang, Q.; Gu, Y.; Li, C.; Cong, R.; Shao, F. Underwater image enhancement quality evaluation: Benchmark dataset and objective metric. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 5959–5974. [Google Scholar] [CrossRef]
  44. Liu, Y.; Gu, K.; Cao, J.; Wang, S.; Zhai, G.; Dong, J.; Kwong, S. UIQI: A comprehensive quality evaluation index for underwater images. IEEE Trans. Multimed. 2023, 13, 600–612. [Google Scholar] [CrossRef]
  45. Yi, X.; Jiang, Q.; Zhou, W. No-reference quality assessment of underwater image enhancement. Displays 2024, 81, 102586. [Google Scholar] [CrossRef]
  46. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. Commun. ACM 2020, 63, 139–144. [Google Scholar] [CrossRef]
  47. Wang, X.; Jiang, H.; Mu, M.; Dong, Y. A trackable multi-domain collaborative generative adversarial network for rotating machinery fault diagnosis. Mech. Syst. Signal Process. 2025, 224, 111950. [Google Scholar] [CrossRef]
  48. Wang, X.; Jiang, H.; Wu, Z.; Yang, Q. Adaptive variational autoencoding generative adversarial networks for rolling bearing fault diagnosis. Adv. Eng. Inform. 2023, 56, 102027. [Google Scholar] [CrossRef]
  49. Fu, Z.; Fu, X.; Huang, Y.; Ding, X. Twice mixing: A rank learning based quality assessment approach for underwater image enhancement. Signal Process. Image Commun. 2022, 102, 116622. [Google Scholar] [CrossRef]
  50. Guo, C.; Wu, R.; Jin, X.; Han, L.; Zhang, W.; Chai, Z.; Li, C. Underwater ranker: Learn which is better and how to be better. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; pp. 702–709. [Google Scholar]
  51. Dong, Y.; Jiang, H.; Wu, Z.; Yang, Q.; Liu, Y. Digital twin-assisted multiscale residual-self-attention feature fusion network for hypersonic flight vehicle fault diagnosis. Reliab. Eng. Syst. Saf. 2023, 235, 109253. [Google Scholar] [CrossRef]
  52. Dong, Y.; Jiang, H.; Liu, Y.; Yi, Z. Global wavelet-integrated residual frequency attention regularized network for hypersonic flight vehicle fault diagnosis with imbalanced data. Eng. Appl. Artif. Intell. 2024, 132, 107968. [Google Scholar] [CrossRef]
  53. Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep retinex decomposition for low-light enhancement. arXiv 2018, arXiv:1808.04560. [Google Scholar]
  54. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. Cbam: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
  55. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L. Pytorch: An imperative style, high-performance deep learning library. Adv. Neural Inf. Process. Syst. 2019, 32, 8024–8035. [Google Scholar]
  56. Wang, Y.; Li, N.; Li, Z.; Gu, Z.; Zheng, H.; Zheng, B.; Sun, M. An imaging-inspired no-reference underwater color image quality assessment metric. Comput. Electr. Eng. 2018, 70, 904–913. [Google Scholar] [CrossRef]
Figure 1. General framework of the proposed DC-CNN method.
Figure 2. The decomposition results of different underwater images enhanced by three UIE algorithms. (a) RD-based [9]; (b) the reflectance of (a); (c) the illumination of (a); (d) Retinex [10]; (e) the reflectance of (d); (f) the illumination of (d); (g) Water-Net [17]; (h) the reflectance of (g); (i) the illumination of (g).
Figure 3. Schematic of the residual module.
Figure 4. Structure of the feature fusion module.
Figure 5. Fitted scatter plots of predicted scores (predicted by the NR-IQA methods) versus subjective mean opinion score (MOS) values (provided by the UIQE dataset). (a–r) correspond to DIIVINE [29], BRISQUE [30], GLBP [31], SSEQ [32], BMPRI [33], CNN-IQA [34], MUSIQ [37], VCRNet [38], UIQM [39], UCIQE [40], CCF [56], FDUM [41], NUIQ [43], UIQEI [42], Twice-Mixing [49], Uranker [50], UIQI [44], and the proposed method, respectively.
Figure 6. Subjective MOS of different enhanced underwater images in the UIQE database compared to the objective quality scores predicted by the proposed method.
Table 1. Performance comparison of different methods on the SAUD dataset.
Methods                 SROCC     KROCC     PLCC
In-air IQA
  DIIVINE [29]          0.6198    0.4533    0.6487
  BRISQUE [30]          0.5866    0.4244    0.6066
  GLBP [31]             0.4583    0.3218    0.5011
  SSEQ [32]             0.5475    0.4249    0.7417
  BMPRI [33]            0.5209    0.3939    0.7326
  CNN-IQA [34]          0.7722    0.5799    0.7594
  MUSIQ [37]            0.6361    0.4539    0.6165
  VCRNet [38]           0.6829    0.5054    0.6454
UIQA
  UIQM [39]             0.4085    0.3098    0.6957
  UCIQE [40]            0.5184    0.3863    0.7219
  CCF [56]              0.4640    0.3080    0.4791
  FDUM [41]             0.1947    0.1321    0.1520
  NUIQ [43]             0.7900    0.6555    0.8679
  Twice-Mixing [49]     0.4729    0.3295    0.4428
  Uranker [50]          0.7366    0.5636    0.7252
  UIQI [44]             0.4901    0.3447    0.5695
Proposed                0.8741    0.6971    0.8744
Note: The best performance is highlighted in bold.
Table 2. Performance comparison of different methods on the UIQE dataset.
Methods                 SROCC     KROCC     PLCC
In-air IQA
  DIIVINE [29]          0.1278    0.1084    0.0997
  BRISQUE [30]          0.7278    0.5000    0.7507
  GLBP [31]             0.5870    0.4150    0.6236
  SSEQ [32]             0.6580    0.4695    0.6772
  BMPRI [33]            0.7200    0.5399    0.7380
  CNN-IQA [34]          0.7840    0.5849    0.7765
  MUSIQ [37]            0.7497    0.5654    0.7383
  VCRNet [38]           0.8727    0.6870    0.8712
UIQA
  UIQM [39]             0.1556    0.1984    0.3112
  UCIQE [40]            0.2838    0.1381    0.3851
  CCF [56]              0.2556    0.1517    0.3010
  FDUM [41]             0.2733    0.2474    0.1759
  NUIQ [43]             0.4433    0.3067    0.4023
  UIQEI [42]            0.8568    0.6456    0.8705
  Twice-Mixing [49]     0.5690    0.4142    0.5506
  Uranker [50]          0.8188    0.6504    0.8172
  UIQI [44]             0.7131    0.5157    0.7270
Proposed                0.8850    0.7095    0.8765
Note: The best performance is highlighted in bold.
Table 3. Computational time (measured in seconds) comparison of IQA methods.
Method          Time (s)
DIIVINE         2.0374
BRISQUE         0.0134
BMPRI           0.2068
CNN-IQA         0.0572
UCIQE           0.0186
UIQM            0.0500
CCF             0.1060
FDUM            0.2550
UIQI            0.2358
Twice-Mixing    0.1065
Uranker         0.0786
Proposed        0.0719
Note: The fastest computational time is highlighted in bold.
Table 4. Ablation results for different inputs on the UIQE dataset.
Model       SROCC     PLCC
Model-1     0.8256    0.8177
Model-2     0.8329    0.8242
Model-3     0.8119    0.8003
Proposed    0.8850    0.8765
Note: The best performance is highlighted in bold.
Table 5. Ablation results for the role of the feature fusion module and attention module.
Model           SROCC     PLCC
No_FFM          0.8768    0.8714
No_Attention    0.8743    0.8697
Proposed        0.8850    0.8765
Note: The best performance is highlighted in bold.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
