Article

Blind Image Quality Assessment Based on Classification Guidance and Feature Aggregation

1 School of Electronic Information, Wuhan University, Wuhan 430072, China
2 National Engineering Laboratory for Public Safety Risk Perception and Control by Big Data (NEL-PSRPC), Beijing 100041, China
* Author to whom correspondence should be addressed.
Electronics 2020, 9(11), 1811; https://doi.org/10.3390/electronics9111811
Submission received: 29 September 2020 / Revised: 23 October 2020 / Accepted: 24 October 2020 / Published: 2 November 2020
(This article belongs to the Section Computer Science & Engineering)

Abstract

In this work, we present a convolutional neural network (CNN) named CGFA-CNN for blind image quality assessment (BIQA). A two-stage strategy is adopted: Sub-Network I first identifies the distortion type in an image, and Sub-Network II then quantifies this distortion. Unlike most deep neural networks, we extract hierarchical features as descriptors to enrich the image representation and design a feature aggregation layer, trained in an end-to-end manner, that applies Fisher encoding to visual vocabularies modeled by Gaussian mixture models (GMMs). To account for both authentic and synthetic distortions, the hierarchical features combine the characteristics of a CNN trained on our self-built dataset and of a CNN trained on ImageNet. We evaluated our algorithm on four publicly available databases, and the results demonstrate that CGFA-CNN achieves superior performance over other methods on both synthetic and authentic databases.

1. Introduction

Digital pictures may suffer from various distortions during acquisition, transmission, and compression, leading to unsatisfactory perceived visual quality or a certain level of annoyance. Thus, it is crucial to predict the quality of digital pictures in many applications, such as compression, communication, printing, display, analysis, registration, restoration, and enhancement [1,2,3]. Generally, image quality assessment approaches can be classified into three kinds according to the additional information they need. Specifically, full-reference image quality assessment (FR-IQA) [4,5,6,7] and reduced-reference image quality assessment (RR-IQA) [8,9,10] need full and partial information of the reference image, respectively, while blind image quality assessment (BIQA) [11,12,13,14] measures quality without any information from the reference image. BIQA methods are therefore more attractive in many practical applications because the reference image is usually unavailable or hard to obtain.
Early studies mainly focused on one or more specific distortion types, such as Gaussian blur [15], blockiness from JPEG compression [16], or ringing arising from JPEG2000 compression [17]. However, images may be affected by unknown distortions in many practical scenarios. In contrast, general BIQA methods aim to work well for arbitrary distortions; they can be classified into two categories according to the features extracted, i.e., natural scene statistics (NSS)-based methods and training-based methods.
NSS-based methods [18] assume that non-distorted natural images obey certain perceptually relevant statistical laws that are violated by the presence of common image distortions, and they attempt to describe an image by its scene statistics in different domains. For example, BRISQUE [19] derives features from locally normalized luminance coefficients in the spatial domain. M3 [20] utilizes joint local contrast features from the gradient magnitude (GM) map and the Laplacian of Gaussian (LOG) response. Later, a perceptually motivated and feature-driven model was deployed in FRIQUEE [21], in which a large collection of features defined in various complementary, perceptually relevant color and transform-domain spaces is drawn from among the most successful BIQA models produced to date.
However, knowledge-driven feature extraction and data-driven quality prediction are separated in the above methods. It has been demonstrated that training-based methods outperform NSS-based methods by a large margin because a fully data-driven BIQA solution becomes possible. For example, CORNIA [22] constructs a codebook in an unsupervised manner, using raw image patches as local descriptors and soft assignment for encoding. Considering that the feature sets adopted in previous methods are derived from zero-order statistics and are insufficient for BIQA, HOSA [23] constructs a much smaller codebook using K-means clustering [24] and introduces higher-order statistics. In contrast to these methods, which rely on spatially normalized coefficients and codebook-based features, CNN-based methods learn features automatically in an end-to-end manner. For example, TSCN [25] aims to learn the complicated relationship between visual appearance and perceived quality via a two-stream convolutional neural network. DIQA [26] defines two separate CNN branches to learn objective distortion and human visual sensitivity, respectively.
In this work, we propose an end-to-end BIQA method based on classification guidance and feature aggregation, which is accomplished by two sub-networks that share features in the early layers. Due to the lack of training data, we construct a large-scale dataset by synthesizing distortions and pre-train Sub-Network I to classify an image into one of a set of pre-defined distortion types. We find it much harder for the proposed method to achieve high accuracy on authentic images if it is only exposed to synthetic distortions during training. We therefore extract hierarchical features from the shared layers of the two sub-networks and from another CNN (VGG-16 [27]) pre-trained on ImageNet [28], whose pictures contain distortions that occur as a natural consequence of photography, and form a unified feature group.
Sub-Network II takes the hierarchical features and the classification information as inputs to predict the perceptual quality. The combination of the two sub-networks provides the learning framework with a distortion probability that aids quality perception and with a proper parameter initialization, all in an end-to-end training manner. We design a feature aggregation layer that converts inputs of arbitrary size into a fixed-length representation. A fully connected layer is then exploited as a linear regression model to map the high-dimensional features onto quality scores. This allows the proposed CGFA-CNN to accept an image of any size as input, so there is no need to perform any transformation of the images (such as cropping or scaling) that would affect the perceptual quality scores.
The paper is structured as follows. In Section 2, previous work on CNN-based BIQA related to our work is briefly reviewed. In Section 3, details of the proposed method are described. In Section 4, experimental results on the public IQA databases and the corresponding analysis are presented. In Section 5, the work of this paper is concluded.

2. Related Work

In this section, we provide a brief survey of the major solutions to the lack of training data in BIQA and review recent studies related to our work.
Because the number of parameters to be trained in a CNN is usually very large, the training set needs to contain sufficient data to avoid over-fitting. However, the number of samples and image contents in the public quality-annotated image databases is rather limited, which cannot meet the needs of end-to-end training of a deep network. Currently, there are two main methods to tackle this challenge.
The first method is to train the model on image patches. For example, deepIQA [29] randomly samples image patches from the entire image as inputs and predicts the quality score on local regions by assigning the mean opinion score (MOS) of the whole image to all patches within it. Although taking small patches as inputs for data augmentation is superior to using the whole image in a given dataset, this method still suffers from limitations because local image quality varies with content across spatial locations even when the distortion is homogeneous. To resolve this problem, BIECON [30] makes use of existing FR-IQA algorithms to assign quality labels to sampled image patches, but the performance of such a network depends highly on that of the FR-IQA models. Other methods such as dipIQ [31], which generates discriminable image pairs by involving FR-IQA models, may suffer from similar problems.
The second method is to pre-train a network on large-scale datasets from other fields. For each pre-trained architecture, two back-end training strategies are available: replacing the last layer of the pre-trained CNN with a regression layer and fine-tuning it on the IQA database to conduct quality prediction, or using SVR to regress the features extracted by the pre-trained networks onto subjective scores. For instance, DeepBIQ [32] reports on the use of features extracted from CNNs pre-trained for image classification tasks on ImageNet [28] and Places365 [33] as a generic image description. Kim et al. [34] selected the well-known deep CNN models AlexNet [35] and ResNet50 [36], pre-trained for image classification on ImageNet [28], as baseline architectures. Methods that directly inherit the weights of models pre-trained for general image classification suffer from low relevance to BIQA and unnecessary complexity.
To better address the shortage of training data, MEON [37] proposes a cascaded multi-task framework, which first trains a distortion type identification network on large-scale pre-defined samples and then trains a quality prediction network that takes advantage of the distortion information obtained in the first stage. Furthermore, DB-CNN [38] not only constructs a pre-training set based on the Waterloo Exploration Database [39] and PASCAL VOC [40] for synthetic distortions, but also uses ImageNet [28] to pre-train another CNN for authentic distortions. Motivated by MEON [37] and DB-CNN [38], we likewise construct a pre-training set based on the Waterloo Exploration Database [39] and PASCAL VOC [40] for synthetic distortions. In addition, both the distortion type and the distortion level are considered at the same time, which results in better quality-aware initializations and richer distortion information.
Although previous DNN-based BIQA methods have achieved significant performance, they usually comprise convolutional and pooling layers for feature extraction and fully connected layers for regression, which leads to three limitations. First, techniques such as average or maximum pooling are too simple to be accurate for long sequences of descriptors. Second, a fully connected layer is destructive to the high-dimensional disorder and spatial invariance of local features. Third, such CNNs typically require a fixed image size; to feed the network, images have to be resized or cropped, and either scaling or cropping causes a perceptual difference from the assigned quality labels. To tackle these challenges, we explore more sophisticated pooling techniques based on clustering approaches such as Bag-of-visual-words (BOW) [41], the Vector of Locally Aggregated Descriptors (VLAD) [42], and Fisher Vectors [43]. Studies have shown that integrating VLAD as a differentiable module in a neural network can significantly improve the aggregated representation for place recognition [44] and video classification [45]. Our proposed feature aggregation layer acts as a pooling layer on top of the convolutional layers and converts inputs of arbitrary size into a fixed-length representation. Afterward, using a fully connected layer for regression does not require any preprocessing of the input image.

3. The Proposed Method

The framework of CGFA-CNN is illustrated in Figure 1. Sub-Network I, which is first pre-trained on a self-built dataset, classifies an image into a specific distortion type and initializes the shared layers for the further learning process. Sub-Network II predicts the perceptual quality of the same image; it is fine-tuned on the IQA databases and takes advantage of the distortion information obtained from Sub-Network I. The feature aggregation layer (FV layer) and the classification-guided gating unit (CGU) are described in Section 3.3 and Section 3.4.

3.1. Distortion Type Identification

3.1.1. Construction of the Pre-Training Dataset

Due to the deficiency of available quality-annotated samples, we first construct a large-scale dataset based on the Waterloo Exploration Database [39] and the PASCAL VOC Database [40]. The former contains 4744 images that can be loosely categorized into seven classes; the latter contains 17,125 images covering 20 categories. In this paper, we merge the two databases and obtain 21,869 pristine images with various contents. Then, nine types of distortion are introduced: JPEG compression, JPEG2000 compression, Gaussian blur, white Gaussian noise, contrast stretching, pink noise, image quantization with color dithering, over-exposure, and under-exposure. Each distortion is synthesized at five levels for every image following Ma et al. [39], except for over-exposure and under-exposure, for which only three levels are generated according to Ma et al. [46]. The constructed dataset consists of 896,629 images, which are organized into 41 subcategories according to the distortion type and degradation level. We label these images by the subcategory they belong to.
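To make the construction concrete, the sketch below shows how three of the nine distortion types might be synthesized at several severity levels with Pillow and NumPy. The level parameters (JPEG quality factors, blur radii, noise variances) are illustrative placeholders, not the exact settings of Ma et al. [39].

```python
import numpy as np
from io import BytesIO
from PIL import Image, ImageFilter

# Illustrative level parameters only; the actual settings follow Ma et al. [39].
JPEG_QUALITY = [43, 25, 12, 7, 4]            # five hypothetical JPEG quality factors
BLUR_RADIUS  = [1.2, 2.5, 6.5, 15.0, 33.0]   # five hypothetical Gaussian-blur radii
NOISE_VAR    = [0.001, 0.006, 0.022, 0.088, 0.2]  # five hypothetical white-noise variances

def jpeg_compress(img: Image.Image, level: int) -> Image.Image:
    # Re-encode through an in-memory JPEG buffer at the chosen quality factor.
    buf = BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=JPEG_QUALITY[level])
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def gaussian_blur(img: Image.Image, level: int) -> Image.Image:
    return img.filter(ImageFilter.GaussianBlur(radius=BLUR_RADIUS[level]))

def white_noise(img: Image.Image, level: int) -> Image.Image:
    x = np.asarray(img.convert("RGB"), dtype=np.float32) / 255.0
    x = x + np.random.normal(0.0, np.sqrt(NOISE_VAR[level]), x.shape)
    return Image.fromarray((np.clip(x, 0.0, 1.0) * 255).astype(np.uint8))
```

Each synthesized image would then be labeled with its (distortion type, level) subcategory, giving the 41 classes that supervise Sub-Network I.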

3.1.2. Sub-Network I Architecture

Inspired by the VGG-16 network architecture [27], we design a similar structure with some modifications to identify the distortion type of the input image. Details are given in Table 1. The tailored VGG-16 network comprises a stack of convolutions (Conv) for feature extraction, one maximum pooling (MaxPool) layer for feature fusion, and three fully connected layers (FC) for feature regression. All hidden layers are equipped with the Rectified Linear Unit (ReLU) [35] and Batch Normalization (BN) [47]. We denote the input mini-batch training data by $\{(X^{(n)}, p^{(n)})\}_{n=1}^{N}$, where $X^{(n)}$ is the $n$th input image and $p^{(n)}$ is a multi-class indicator vector of the ground-truth distortion type. We append a soft-max layer at the end and define the soft-max function as
$\hat{p}_i^{(n)}(X^{(n)}; W) = \dfrac{\exp\left(y_i^{(n)}(X^{(n)}; W)\right)}{\sum_{j=1}^{C} \exp\left(y_j^{(n)}(X^{(n)}; W)\right)},$
where $\hat{p}^{(n)} = \left[\hat{p}_1^{(n)}, \ldots, \hat{p}_C^{(n)}\right]^T$ is a $C$-dimensional probability vector of the $n$th input in a mini-batch, indicating the probability of each distortion type. The model parameters of Sub-Network I are collectively denoted as $W$. A cross-entropy loss is used to train this sub-network:
$\ell_s\left(\{X^{(n)}\}; W\right) = -\sum_{n=1}^{N} \sum_{i=1}^{C} p_i^{(n)} \log \hat{p}_i^{(n)}(X^{(n)}; W).$
Notably, in the fine-tuning phase, except for the shared layers, the rest of Sub-Network I only participates in the forward propagation and the parameters are fixed.
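As a minimal sketch, assuming the backbone of Table 1 produces a 128-channel feature map and treating its MaxPool row as global pooling, the classification head and loss of Sub-Network I could be written in PyTorch as follows; this is illustrative rather than the released implementation.

```python
import torch
import torch.nn as nn

class DistortionClassifier(nn.Module):
    """Classification head of Sub-Network I: global max pooling plus three FC layers (Table 1)."""
    def __init__(self, num_classes: int = 41, feat_dim: int = 128):
        super().__init__()
        self.pool = nn.AdaptiveMaxPool2d(1)                       # "Pool" row, output 1 x 1 x 128
        self.fc = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(inplace=True),      # FC-1
            nn.Linear(256, 256), nn.ReLU(inplace=True),           # FC-2
            nn.Linear(256, num_classes),                          # FC-3 -> logits y_i
        )

    def forward(self, shared_features: torch.Tensor) -> torch.Tensor:
        z = self.pool(shared_features).flatten(1)
        return self.fc(z)          # raw logits; the soft-max is folded into the loss below

# CrossEntropyLoss combines the soft-max with the cross-entropy loss defined above.
criterion = nn.CrossEntropyLoss()
logits = DistortionClassifier()(torch.randn(4, 128, 14, 14))      # dummy shared feature map
loss = criterion(logits, torch.randint(0, 41, (4,)))              # dummy distortion labels
```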

3.2. Feature Extraction and Fusion

As Figure 2 shows, the representation of different distortion types varies across convolutional layers. Therefore, using only the features extracted from the last convolution is not enough to predict the quality of an image. Inspired by the idea of combining complementary features and the hierarchical feature extraction strategy of our previous work [48], we extract features from low-level, middle-level, and high-level convolutional layers as descriptors by rescaling and concatenating them. Sub-Network I, pre-trained on the synthesized dataset, identifies the distortion type of a given image; we find that this takes advantage of synthetic images but fails to handle authentically distorted ones (more details can be found in Section 4.5). We therefore model synthetic and authentic distortions with two separate CNNs and fuse the two feature sets into a unified representation for the final quality prediction. The tailored VGG-16 pre-trained on ImageNet, which contains many realistic natural images of different perceptual quality, is added to extract relevant features for authentic images. The proposed CGFA-CNN index takes a raw image of size $H \times W \times 3$ as input and predicts its perceptual quality. The fused feature group then has size $\frac{H}{16} \times \frac{W}{16} \times D$, where $D$ is the number of channels of the hierarchical features. Sub-Network II takes the fused feature group and the estimated probability vector $\hat{p}^{(n)}$ as inputs.
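A minimal sketch of the fusion step is given below, assuming the selected low-, middle-, and high-level feature maps are already available as a list of tensors; rescaling them bilinearly to the deepest map's resolution is an assumption about how the rescaling is performed.

```python
import torch
import torch.nn.functional as F

def fuse_hierarchical_features(feats: list[torch.Tensor]) -> torch.Tensor:
    """Rescale hierarchical feature maps to the deepest resolution and concatenate channels."""
    h, w = feats[-1].shape[-2:]                  # spatial size of the deepest map, i.e. H/16 x W/16
    resized = [F.interpolate(f, size=(h, w), mode="bilinear", align_corners=False) for f in feats]
    return torch.cat(resized, dim=1)             # (batch, D, H/16, W/16), D = sum of channel counts
```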

3.3. Feature Aggregation Layer and Encoding

In this paper, we design a feature aggregation layer that employs Fisher Vectors (FV) [43] to perform the feature aggregation and encoding procedures. Because the GMM [49] and FV are non-differentiable and do not permit theoretically valid backpropagation, we define an FV layer to yield a quality-aware feature vector $f$. The implementation is shown in Figure 3.
As illustrated in Figure 1, the fused feature group is an $\frac{H}{16} \times \frac{W}{16} \times D$ map, which can be considered a set of $D$-dimensional descriptors extracted at $\frac{H}{16} \times \frac{W}{16}$ spatial locations. We then utilize a GMM to obtain the cluster centers $C$ of $K$ components and the encoding vector $f$ of the image descriptors $X$.

3.3.1. GMM Clustering

A Gaussian mixture model $p(x|\theta)$ is a mixture of $K$ multivariate Gaussian distributions [49], which can be formulated as
$p(x|\theta) = \sum_{k=1}^{K} \pi_k\, p(x|\mu_k, \Sigma_k),$
$p(x|\mu_k, \Sigma_k) = \dfrac{\exp\left(-\frac{1}{2}(x-\mu_k)^T \Sigma_k^{-1}(x-\mu_k)\right)}{\sqrt{(2\pi)^D \det \Sigma_k}},$
$\theta = (\pi_1, \mu_1, \Sigma_1, \ldots, \pi_K, \mu_K, \Sigma_K),$
where $\theta$ is the vector of parameters of the model. For each Gaussian component, $\pi_k$ is the prior probability, $\mu_k$ is the mean, and $\Sigma_k$ is the diagonal covariance matrix. The parameters are learned from a training set of descriptors $x_1, \ldots, x_N$. The GMM defines the soft assignments $q_{ki}$ ($k = 1, \ldots, K$, $i = 1, \ldots, N$) of the $N$ descriptors to the $K$ Gaussian components:
$q_{ki} = \dfrac{\pi_k\, p(x_i|\mu_k, \Sigma_k)}{\sum_{j=1}^{K} \pi_j\, p(x_i|\mu_j, \Sigma_j)}, \quad k = 1, \ldots, K.$
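For illustration, the GMM fit and the soft assignments $q_{ki}$ can be obtained with scikit-learn as sketched below; the descriptor matrix is a random placeholder, and using the fitted parameters to initialize the FV layer is an assumption consistent with Figure 3 rather than a documented step.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# N descriptors of dimension D (placeholder data standing in for the fused feature group).
X = np.random.randn(10000, 256).astype(np.float64)

# Diagonal-covariance GMM with K = 32 components, the FV setting reported in Section 4.5.
gmm = GaussianMixture(n_components=32, covariance_type="diag").fit(X)

q = gmm.predict_proba(X)                                    # soft assignments q_ki, shape (N, K)
pi, mu, var = gmm.weights_, gmm.means_, gmm.covariances_    # priors, means, diagonal covariances
```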

3.3.2. Fisher Encoding

Fisher encoding captures both the first- and second-order differences between the image descriptors and the centers of a GMM. The construction of the encoding begins by learning a GMM model $\theta$. For each $k = 1, \ldots, K$, define the vectors
$u_k = \dfrac{1}{N\sqrt{\pi_k}} \sum_{i=1}^{N} q_{ki}\, \Sigma_k^{-\frac{1}{2}} (x_i - \mu_k),$
$v_k = \dfrac{1}{N\sqrt{2\pi_k}} \sum_{i=1}^{N} q_{ki} \left[ (x_i - \mu_k)^T \Sigma_k^{-1} (x_i - \mu_k) - 1 \right].$
The Fisher encoding of the set of local descriptors is then given by the concatenation of $u_k$ and $v_k$ for all $K$ components, giving an encoding of size $2 \times D \times K$:
$f_{Fisher} = \left[ u_1^T, \ldots, u_K^T, v_1^T, \ldots, v_K^T \right]^T.$
To integrate the Fisher vector as a differentiable module in a neural network, we rewrite the hard assignment of descriptor $x_i$ to cluster $k$ as a soft assignment
$a_k(x_i) = \dfrac{e^{-\alpha \|x_i - c_k\|^2}}{\sum_{j=1}^{K} e^{-\alpha \|x_i - c_j\|^2}}.$
Then, we can write the FV representation as
$FV_1(j, k) = \sum_{i=1}^{N} a_k(x_i)\, \dfrac{x_i(j) - c_k(j)}{\sigma_k(j)},$
$FV_2(j, k) = \sum_{i=1}^{N} a_k(x_i) \left[ \left( \dfrac{x_i(j) - c_k(j)}{\sigma_k(j)} \right)^2 - 1 \right],$
where $FV_1$ and $FV_2$ capture the first- and second-order statistics, respectively, $x_i(j)$ is the $j$th dimension of the $i$th descriptor, and $c_k(j)$ is the $j$th dimension of the $k$th cluster center. $c_k$ and $\sigma_k$ ($k \in \{1, \ldots, K\}$) are the learnable clusters and the clusters' diagonal covariances. We define $\alpha$ as positive, ranging between 0 and 1.
Let $\omega_k = 2\alpha c_k$ and $b_k = -\alpha \|c_k\|^2$; Equation (10) can then be written as
$a_k(x_i) = \dfrac{e^{\omega_k^T x_i + b_k}}{\sum_{j=1}^{K} e^{\omega_j^T x_i + b_j}},$
where $\omega_k$, $b_k$, and $c_k$ are sets of trainable parameters for each cluster $k$.
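The FV layer described above can be sketched as a PyTorch module: a 1 × 1 convolution followed by a soft-max produces the soft assignments, and the first- and second-order residual statistics are aggregated over all spatial locations. The default dimensions and the final L2 normalization are assumptions made for illustration, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FisherVectorLayer(nn.Module):
    """Differentiable FV aggregation: soft assignment plus first/second-order residual statistics."""
    def __init__(self, dim: int = 256, num_clusters: int = 32):
        super().__init__()
        self.assign = nn.Conv2d(dim, num_clusters, kernel_size=1)      # implements w_k, b_k
        self.centers = nn.Parameter(torch.randn(num_clusters, dim))    # learnable c_k
        self.log_sigma = nn.Parameter(torch.zeros(num_clusters, dim))  # learnable log sigma_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, d, h, w = x.shape
        a = F.softmax(self.assign(x), dim=1).flatten(2)                # (B, K, N) soft assignments
        xf = x.flatten(2).transpose(1, 2)                              # (B, N, D) descriptors
        diff = xf.unsqueeze(1) - self.centers[None, :, None, :]        # (B, K, N, D) residuals
        diff = diff / self.log_sigma.exp()[None, :, None, :]           # scale by sigma_k
        fv1 = (a.unsqueeze(-1) * diff).sum(dim=2)                      # (B, K, D) first-order stats
        fv2 = (a.unsqueeze(-1) * (diff ** 2 - 1.0)).sum(dim=2)         # (B, K, D) second-order stats
        f = torch.cat([fv1, fv2], dim=1).flatten(1)                    # (B, 2*K*D) Fisher vector
        return F.normalize(f, dim=1)                                   # optional L2 normalization
```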

3.3.3. Beyond the FV Aggregation

The source of discontinuity in the traditional Bag-of-visual-words (BOW) [41] and Vector of Locally Aggregated Descriptors (VLAD) [42] is the hard assignment $q_{ki}$ of descriptors $x_i$ to cluster centers $c_k$. To make this operation differentiable, we replace the hard assignment of descriptor $x_i$ to a cluster with a soft assignment and reuse the same soft assignment established in Equation (12) to obtain differentiable representations, which we denote as the BOW layer and the VLAD layer, respectively. The differentiable BOW and VLAD representations can be written as
$BOW(k) = \sum_{i=1}^{N} a_k(x_i),$
$VLAD(j, k) = \sum_{i=1}^{N} a_k(x_i) \left( x_i(j) - c_k(j) \right),$
where $a_k(x_i)$ denotes the membership of descriptor $x_i$ to cluster $k$. BOW is the histogram of the number of image descriptors assigned to each visual word; it therefore produces a $K$-dimensional vector, while VLAD is a simplified non-probabilistic version of the FV and produces a $D \times K$-dimensional vector.
The soft assignment $a_k(x_i)$ can be regarded as a two-step process: (i) perform a $1 \times 1$ convolution with a set of $K$ filters $\omega_k$ and biases $b_k$, producing the output $\omega_k^T x_i + b_k$; (ii) apply a soft-max function to obtain the soft assignment of descriptor $x_i$ to cluster $k$. Notably, for BOW encoding, there is no need to store the sum of residuals for each visual word, i.e., the difference vectors between the descriptors and their corresponding cluster center.
The advantage of the BOW aggregation is that it aggregates the descriptors into a more compact representation, and fewer parameters (only $\omega_k$ and $b_k$) are trained in a discriminative manner. The drawback is that significantly more clusters are needed to obtain a rich representation. VLAD computes the first-order residuals between the descriptors and the cluster centers, so its representation is relatively rich and the number of parameters to be learned ($\omega_k$, $b_k$, and $c_k$) is moderate. In contrast, the FV aggregation concatenates both the first- and second-order aggregated residuals, but many more parameters need to be learned, including $\omega_k$, $b_k$, $c_k$, and $\sigma_k$.
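For comparison, the differentiable BOW and VLAD alternatives reduce to the short sketch below, reusing the soft assignments and residuals computed as in the FV layer sketch above.

```python
import torch

def bow_pool(a: torch.Tensor) -> torch.Tensor:
    """BOW layer: a is (B, K, N) soft assignments -> (B, K) soft histogram of visual words."""
    return a.sum(dim=2)

def vlad_pool(a: torch.Tensor, diff: torch.Tensor) -> torch.Tensor:
    """VLAD layer: diff is (B, K, N, D) residuals x_i - c_k -> (B, K*D) first-order aggregation."""
    return (a.unsqueeze(-1) * diff).sum(dim=2).flatten(1)
```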
As discussed in Section 4.5, we also experimented with average and maximum pooling of the image descriptors $X$. The results show that FV is superior to the reference BOW and VLAD approaches, and that simply using average or maximum pooling results in poor performance.

3.4. Classification-Guided Gating Unit and Quality Prediction

We pre-train Sub-Network I to identify the distortion type of the input, and Sub-Network II takes the estimated probability vector $\hat{p}$ from Sub-Network I as partial input. To introduce this prior classification information, a classification-guided gating unit (CGU) is utilized to emphasize informative features and suppress less useful ones. The CGU combines $\hat{p}$ and $f_{Fisher}$ to produce a score vector $\hat{f}$:
$\hat{f} = \hat{p} \cdot \sigma\left( W \cdot f_{Fisher} + b \right),$
where $\sigma$ is a mapping with learnable parameters $W$ and $b$. A regression mapping then yields an overall quality score $q$; to increase nonlinearity, two fully connected layers are applied to implement this mapping.
For Sub-Network II, the $L_1$ function is used as the empirical loss
$\ell = \dfrac{1}{N} \sum_{i=1}^{N} \left| q_i - \hat{q}_i \right|,$
where $q_i$ is the MOS of the $i$th image in a mini-batch and $\hat{q}_i$ is the quality score predicted by CGFA-CNN.
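A minimal sketch of the CGU and the quality regressor follows; interpreting $\sigma$ as a sigmoid gate and the hidden width of the two fully connected layers are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class CGUQualityHead(nn.Module):
    """Classification-guided gating of the Fisher vector followed by two FC layers -> score q."""
    def __init__(self, fv_dim: int, num_classes: int = 41, hidden: int = 512):
        super().__init__()
        self.gate = nn.Linear(fv_dim, num_classes)        # W, b of the gating mapping
        self.regress = nn.Sequential(                     # two fully connected layers
            nn.Linear(num_classes, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
        )

    def forward(self, f_fisher: torch.Tensor, p_hat: torch.Tensor) -> torch.Tensor:
        f_hat = p_hat * torch.sigmoid(self.gate(f_fisher))   # element-wise gating by p_hat
        return self.regress(f_hat).squeeze(-1)               # predicted quality score q

criterion = nn.L1Loss()        # the empirical L1 loss used to fine-tune Sub-Network II
```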

4. Experimental Results and Discussions

4.1. Database Description and Experimental Protocol

(1) IQA databases: The experiments were conducted on three singly distorted synthetic IQA databases, namely LIVE [50], CSIQ [51], and TID2013 [52], and on the authentic LIVE Challenge database [53]. LIVE contains five distortion types—JPEG compression (JPEG), JPEG2000 compression (JP2K), white noise (WN), Gaussian blurring (GB), and fast-fading error (FF)—at 7–8 degradation levels. CSIQ contains six distortion types—JPEG compression (JPEG), JPEG2000 compression (JP2K), global contrast decrements (GC), additive pink Gaussian noise (PN), additive white Gaussian noise (WN), and Gaussian blurring (GB)—at 3–5 degradation levels. TID2013 contains 24 distortion types: additive Gaussian noise, additive noise in color components, spatially correlated noise, masked noise, high-frequency noise, impulse noise, quantization noise, Gaussian blur, image denoising, JPEG compression, JPEG2000 compression, JPEG transmission errors, JPEG2000 transmission errors, non-eccentricity pattern errors, local block-wise distortions, mean shift, contrast change, change of color saturation, multiplicative Gaussian noise, comfort noise, lossy compression of noisy images, color quantization with dither, chromatic aberrations, and sparse sampling and reconstruction, which are denoted as #01–#24, respectively.
(2) Evaluation Criteria: Two evaluation criteria are adopted as follows to benchmark BIQA models:
  • Spearman’s rank-order correlation coefficient (SRCC) is a nonparametric measure:
    $\mathrm{SRCC} = 1 - \dfrac{6 \sum_i d_i^2}{I (I^2 - 1)},$
    where $I$ is the number of test images and $d_i$ is the rank difference between the MOS and the model prediction of the $i$th image.
  • Pearson linear correlation coefficient (PLCC) measures the linear correlation between predictions and subjective scores:
    $\mathrm{PLCC} = \dfrac{\sum_i (q_i - q_m)(\hat{q}_i - \hat{q}_m)}{\sqrt{\sum_i (q_i - q_m)^2}\, \sqrt{\sum_i (\hat{q}_i - \hat{q}_m)^2}},$
    where $q_i$ and $\hat{q}_i$ stand for the MOS and the model prediction of the $i$th image, respectively, and $q_m$ and $\hat{q}_m$ are their respective means.
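Both criteria can be computed directly with SciPy, as in the short sketch below (SciPy's Spearman implementation additionally handles ties).

```python
import numpy as np
from scipy import stats

def evaluate(mos: np.ndarray, pred: np.ndarray) -> tuple[float, float]:
    """Return (SRCC, PLCC) between subjective MOS values and model predictions."""
    srcc = stats.spearmanr(mos, pred).correlation
    plcc = stats.pearsonr(mos, pred)[0]
    return srcc, plcc
```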
For the synthetic databases LIVE, CSIQ, and TID2013, we divided the distorted images into two splits of non-overlapping content, 80% of which were used as fine-tuning samples and the remaining 20% as testing samples. For the LIVE Challenge database, the distorted images were likewise divided into two groups, 80% for training and 20% for testing. This random process was repeated ten times, and the average SRCC and PLCC are reported as the final results. In addition, the three synthetic databases were used for cross-database experiments, with one database serving as the training set and another as the testing set.
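A minimal sketch of the content-level split is given below, assuming each distorted image carries the identifier of its reference image; the 80/20 ratio and the ten repetitions follow the protocol above.

```python
import random

def split_by_content(ref_ids: list[str], seed: int) -> tuple[set[str], set[str]]:
    """80/20 split over reference contents, so all distorted versions of one reference
    end up on the same side; repeat with seeds 0..9 for the ten random sessions."""
    rng = random.Random(seed)
    refs = sorted(set(ref_ids))
    rng.shuffle(refs)
    cut = int(0.8 * len(refs))
    return set(refs[:cut]), set(refs[cut:])      # training references, testing references
```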
We compared the proposed CGFA-CNN against several state-of-the-art BIQA methods, including three based on NSS (BRISQUE [19], M3 [20], and FRIQUEE [21]), two based on manual feature learning (CORNIA [22] and HOSA [23]), and eight based on CNNs (BIECON [30], dipIQ [31], deepIQA [29], ResNet50+ft [34], MEON [37], DIQA [26], TSCN [25], and DB-CNN [38]). Because the source code of some methods is not publicly available, we copy their metrics from the corresponding papers.

4.2. Experimental Settings

Parameters in Sub-Network I were initialized by He's method [54], and Adam was adopted as the optimizer with its default parameters and a mini-batch size of 64. The learning rate decayed logarithmically from $10^{-4}$ to $10^{-6}$ over 30 epochs. The construction details of the pre-training dataset are described in Section 3.1. The dataset was randomly divided into two subsets, 80% for training and 20% for testing. All images were first scaled to $256 \times 256 \times 3$ and then cropped to $224 \times 224 \times 3$ as inputs. The top-1 and top-5 errors were 3.842% and 0.026%, respectively.
In the fine-tuning phase, the shared layers were directly initialized with the parameters of Sub-Network I. Adam was used as the optimizer with its default parameters for 20 epochs, and the learning rate was set to $10^{-5}$. Except for the LIVE database, images were input without any pre-processing during training, with a mini-batch size of 8. Since the LIVE database contains images of different sizes, its images were randomly cropped to $320 \times 320$ during training, and each crop was assigned the quality annotation of the corresponding image. All images were input without any preprocessing during testing. We implemented all of our models using the PyTorch 0.4.1 deep learning framework, and the numerical calculations presented in this paper were performed on the supercomputing system at the Supercomputing Center of Wuhan University. We will release the code at https://github.com/Cwp1107/CGFA-CNN.
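For reference, the pre-training schedule described above might be set up as in the following sketch; the placeholder model and the per-epoch update of the learning rate are illustrative assumptions.

```python
import numpy as np
import torch

model = torch.nn.Linear(10, 1)                      # placeholder standing in for Sub-Network I
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # Adam with default betas/eps
lrs = np.logspace(-4, -6, num=30)                   # log decay from 1e-4 to 1e-6, one value per epoch

for epoch, lr in enumerate(lrs):
    for group in optimizer.param_groups:            # update the learning rate at each epoch
        group["lr"] = float(lr)
    # ... run one training epoch over the 41-class pre-training set ...
```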

4.3. Consistency Experiment

We investigated the effectiveness of CGFA-CNN on LIVE, TID2013, CSIQ and LIVE Challenge databases and the results are presented in Table 2. The results of each specific distortion type on LIVE, CSIQ, and TID2013 databases are reported in Table 3, Table 4 and Table 5. The top three SRCC and PLCC results are highlighted in red, green, and blue, respectively.
Based on the results in Table 2, we make the following observations. First, on LIVE, DIQA [26] achieves state-of-the-art accuracy, surpassing CGFA-CNN by about 0.004 in both SRCC and PLCC, and most methods obtain high scores; however, their results on CSIQ and TID2013 are rather diverse. Second, CGFA-CNN achieves accuracy on LIVE Challenge comparable with DB-CNN [38] and ResNet50+ft [34], which are pre-trained on ImageNet [28]. This suggests that CNNs pre-trained on ImageNet [28] can extract relevant features for authentically distorted images.
Performance on individual distortion types on LIVE, CSIQ, and TID2013 is shown in Table 3, Table 4 and Table 5. On LIVE, CGFA-CNN is superior to the other methods on most distortions except fast-fading error, which is not included in the pre-training dataset because there is no open-source implementation or detailed description of it. On CSIQ, CGFA-CNN has obvious advantages over the other methods, especially for contrast change and pink noise. On TID2013, CGFA-CNN achieves state-of-the-art performance on 10 of the 24 distortions, and its overall accuracy stands out against the other methods. In addition, we find that CGFA-CNN performs well when a distortion shares similar artifacts with a distortion synthesized in the pre-training dataset. For example, additive Gaussian noise, additive noise in color components, and high-frequency noise are all grainy noise; quantization noise and image color quantization with dither exhibit similar appearances; and Gaussian blur, image denoising, and sparse sampling and reconstruction all introduce blur effects. Therefore, although the pre-training dataset constructed in this paper does not cover all distortion types, CGFA-CNN still achieves impressive gains in performance.

4.4. Cross-Database Experiment

To analyze the generalization ability of the proposed method, we trained CGFA-CNN on one full database and evaluated it on another database. Specifically, a model was trained on CSIQ and evaluated on either LIVE or TID2013. The results are reported in Table 6. It can be concluded that CGFA-CNN can easily be generalized to distortions that have not been seen during training.

4.5. Comparison among Different Experimental Settings

In this section, we first investigate the performance of the different feature aggregation layers considered in this paper as a function of the number of GMM components K. Experiments were conducted on LIVE, and the results are shown in Figure 4. We observe that SRCC gradually increases and eventually stabilizes as K increases. Moreover, CGFA-CNN FV, CGFA-CNN VLAD, and CGFA-CNN BOW attain highly competitive prediction accuracy when K is set to 32, 64, and 1024, respectively. Overall, CGFA-CNN FV is superior to CGFA-CNN VLAD and CGFA-CNN BOW.
Additionally, we report ablation studies to evaluate the design rationality of CGFA-CNN; the following comparative experiments were conducted: (1) to evaluate the effectiveness of the proposed FV layer, we used maximum pooling (denoted as CGFA-CNN (MaxPool)) and average pooling (denoted as CGFA-CNN (AvgPool)) instead; (2) to examine the validity of the CGU, we predicted the quality score directly by regressing the output feature vector without the CGU (denoted as CGFA-CNN (w/o CGU)); (3) to verify the necessity of hierarchical feature extraction, we extracted features only from high-level convolutional layers (Conv 5-2 of the shared layers and Conv 4-3 of VGG-16) as descriptors (denoted as CGFA-CNN (single feature)); (4) to discuss the optimal settings of the feature aggregation layer, we set BOW with K = 1024 (denoted as CGFA-CNN (BOW layer (K = 1024))), VLAD with K = 64 (denoted as CGFA-CNN (VLAD layer (K = 64))), and FV with K = 32 (denoted as CGFA-CNN (proposed)); and (5) to demonstrate the prediction accuracy on authentic distortions gained by involving VGG-16, we included only Sub-Network I pre-trained on the self-built dataset to extract features (denoted as CGFA-CNN (w/o VGG-16)). The results are presented in Table 7. We empirically found that the proposed CGFA-CNN achieves state-of-the-art prediction accuracy on both synthetic and authentic image quality databases. Moreover, CGFA-CNN (w/o VGG-16) delivers promising performance only on the synthetic databases, and its results on LIVE Challenge are inferior to CGFA-CNN (proposed), suggesting that authentic distortions cannot be fully fitted by synthetic distortions.

5. Conclusions

In this work, we propose an end-to-end learning framework for BIQA based on classification guidance and feature aggregation, named CGFA-CNN. In the fine-tuning phase, except for the shared convolutional layers, the rest of Sub-Network I only participates in the forward propagation, and its parameters are fixed. The fused feature group is aggregated and encoded by the FV layer to obtain a Fisher vector. The Fisher vector is then corrected by the CGU to obtain a quality-aware feature, which is mapped to a quality score by the regression model. In the test phase, only forward propagation is required to obtain the quality score. The results on four public IQA databases demonstrate that the proposed method indeed benefits image quality assessment. However, CGFA-CNN is not a unified learning framework because it takes two steps of pre-training and fine-tuning. A promising future direction is to optimize CGFA-CNN for both distortion identification and quality prediction at the same time. For example, an autoencoder could be designed to perform k-means clustering, and a VAE framework could be introduced for decoding; such an approach could replace the two-stage procedure. We also look forward to designing an objective function that could in principle reduce the need to rely on external procedures.
CGFA-CNN is versatile and extensible. For example, more distortion types and levels can be added to the pre-training dataset, and the framework could be fused with other approaches to form a new backbone network.

Author Contributions

W.C., C.F. and Y.M. conceived the idea; W.C., C.F. and L.Z. performed the experiments, analyzed the data, and wrote the paper; Y.L. and M.W. developed the proofs. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded partly by the National Key R&D Program of China (Project No. 2017YFC0821603).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, Z.; Sheikh, H.R.; Bovik, A.C. Objective video quality assessment. In The Handbook of Video Databases: Design and Applications; CRC Press: Boca Raton, FL, USA, 2003; Volume 41, pp. 1041–1078. [Google Scholar]
  2. Panetta, K.; Samani, A.; Agaian, S. A Robust No-Reference, No-Parameter, Transform Domain Image Quality Metric for Evaluating the Quality of Color Images. IEEE Access 2018, 6, 10979–10985. [Google Scholar] [CrossRef]
  3. Jian, M.; Ping, A.; Shen, L.; Kai, L. Reduced-Reference Stereoscopic Image Quality Assessment Using Natural Scene Statistics and Structural Degradation. IEEE Access 2017, 6, 2768–2780. [Google Scholar]
  4. Wang, Z.; Simoncelli, E.P.; Bovik, A.C. Multiscale structural similarity for image quality assessment. In Proceedings of the Thrity-Seventh Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 9–12 November 2003; Volume 2, pp. 1398–1402. [Google Scholar]
  5. Sheikh, H.R.; Bovik, A.C. Image information and visual quality. In Proceedings of the 2004 IEEE International Conference on Acoustics, Speech, and Signal Processing, Montreal, QC, Canada, 17–21 May 2004; Volume 3, p. 709. [Google Scholar]
  6. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. Chandler, D.M.; Hemami, S.S. VSNR: A wavelet-based visual signal-to-noise ratio for natural images. IEEE Trans. Image Process. 2007, 16, 2284–2298. [Google Scholar] [CrossRef]
  8. Li, Q.; Wang, Z. Reduced-reference image quality assessment using divisive normalization-based image representation. IEEE J. Sel. Top. Signal Process. 2009, 3, 202–211. [Google Scholar] [CrossRef]
  9. Liu, D.; Li, F.; Song, H. Image Quality Assessment Using Regularity of Color Distribution. IEEE Access 2016, 4, 4478–4483. [Google Scholar] [CrossRef]
  10. Wu, J.; Liu, Y.; Li, L.; Shi, G. Attended Visual Content Degradation Based Reduced Reference Image Quality Assessment. IEEE Access 2018, 6, 2169–3536. [Google Scholar] [CrossRef]
  11. Brandão, T.; Queluz, M.P. No-reference image quality assessment based on DCT domain statistics. Signal Process. 2008, 88, 822–833. [Google Scholar] [CrossRef]
  12. Saad, M.A.; Bovik, A.C.; Charrier, C. A DCT statistics-based blind image quality index. IEEE Signal Process. Lett. 2010, 17, 583–586. [Google Scholar] [CrossRef] [Green Version]
  13. Moorthy, A.; Bovik, A. A Two-Step Framework for Constructing Blind Image Quality Indices. IEEE Signal Process. Lett. 2010, 17, 513–516. [Google Scholar] [CrossRef]
  14. Li, J.; Yan, J.; Deng, D.; Shi, W.; Deng, S. No-reference image quality assessment based on hybrid model. Signal Image Video Process. 2017, 11, 985–992. [Google Scholar] [CrossRef]
  15. Ferzli, R.; Karam, L.J. A no-reference objective image sharpness metric based on the notion of just noticeable blur (JNB). IEEE Trans. Image Process. 2009, 18, 717–728. [Google Scholar] [CrossRef]
  16. Wang, Z.; Sheikh, H.R.; Bovik, A.C. No-reference perceptual quality assessment of JPEG compressed images. In Proceedings of the International Conference on Image Processing, Rochester, NY, USA, 22–25 September 2002; Volume 1. [Google Scholar]
  17. Marziliano, P.; Dufaux, F.; Winkler, S.; Ebrahimi, T. Perceptual blur and ringing metrics: Application to JPEG2000. Signal Process. Image Commun. 2004, 19, 163–172. [Google Scholar] [CrossRef] [Green Version]
  18. Bovik, A.C. Automatic prediction of perceptual image and video quality. Proc. IEEE 2013, 101, 2008–2024. [Google Scholar]
  19. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef]
  20. Xue, W.; Mou, X.; Zhang, L.; Bovik, A.C.; Feng, X. Blind image quality assessment using joint statistics of gradient magnitude and Laplacian features. IEEE Trans. Image Process. 2014, 23, 4850–4862. [Google Scholar] [CrossRef] [PubMed]
  21. Ghadiyaram, D.; Bovik, A.C. Perceptual quality prediction on authentically distorted images using a bag of features approach. J. Vis. 2017, 17, 32. [Google Scholar] [CrossRef]
  22. Ye, P.; Kumar, J.; Kang, L.; Doermann, D. Unsupervised feature learning framework for no-reference image quality assessment. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 1098–1105. [Google Scholar]
  23. Xu, J.; Ye, P.; Li, Q.; Du, H.; Liu, Y.; Doermann, D. Blind Image Quality Assessment Based on High Order Statistics Aggregation. IEEE Trans. Image Process. 2016, 25, 4444–4457. [Google Scholar] [CrossRef]
  24. Lloyd, S. Least squares quantization in PCM. IEEE Trans. Inf. Theory 1982, 28, 129–137. [Google Scholar] [CrossRef]
  25. Yan, Q.; Gong, D.; Zhang, Y. Two-Stream Convolutional Networks for Blind Image Quality Assessment. IEEE Trans. Image Process. 2018, 28, 2200–2211. [Google Scholar] [CrossRef]
  26. Kim, J.; Nguyen, A.D.; Lee, S. Deep CNN-based blind image quality predictor. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 11–24. [Google Scholar] [CrossRef] [PubMed]
  27. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  28. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  29. Bosse, S.; Maniry, D.; Müller, K.R.; Wiegand, T.; Samek, W. Deep neural networks for no-reference and full-reference image quality assessment. IEEE Trans. Image Process. 2018, 27, 206–219. [Google Scholar] [CrossRef] [Green Version]
  30. Kim, J.; Lee, S. Fully deep blind image quality predictor. IEEE J. Slected Top. Signal Process. 2017, 11, 206–220. [Google Scholar] [CrossRef]
  31. Ma, K.; Liu, W.; Liu, T.; Wang, Z.; Tao, D. dipIQ: Blind Image Quality Assessment by Learning-to-Rank Discriminable Image Pairs. IEEE Trans. Image Process. 2017, 26, 3951–3964. [Google Scholar] [CrossRef] [Green Version]
  32. Bianco, S.; Celona, L.; Napoletano, P.; Schettini, R. On the use of deep learning for blind image quality assessment. Signal Image Video Process. 2018, 12, 355–362. [Google Scholar] [CrossRef]
  33. Zhou, B.; Lapedriza, A.; Khosla, A.; Oliva, A.; Torralba, A. Places: A 10 million image database for scene recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 1452–1464. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Kim, J.; Zeng, H.; Ghadiyaram, D.; Lee, S.; Zhang, L.; Bovik, A.C. Deep convolutional neural models for picture-quality prediction: Challenges and solutions to data-driven image quality assessment. IEEE Signal Process. Mag. 2017, 34, 130–141. [Google Scholar] [CrossRef]
  35. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems 25 (NIPS 2012), Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
  36. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  37. Ma, K.; Liu, W.; Zhang, K.; Duanmu, Z.; Wang, Z.; Zuo, W. End-to-end blind image quality assessment using deep neural networks. IEEE Trans. Image Process. 2018, 27, 1202–1213. [Google Scholar] [CrossRef]
  38. Zhang, W.; Ma, K.; Yan, J.; Deng, D.; Wang, Z. Blind Image Quality Assessment Using A Deep Bilinear Convolutional Neural Network. IEEE Trans. Circ. Syst. Video Technol. 2018. [Google Scholar] [CrossRef] [Green Version]
  39. Ma, K.; Duanmu, Z.; Wu, Q.; Wang, Z.; Yong, H.; Li, H.; Zhang, L. Waterloo exploration database: New challenges for image quality assessment models. IEEE Trans. Image Process. 2017, 26, 1004–1016. [Google Scholar] [CrossRef]
  40. Everingham, M.; Van Gool, L.; Williams, C.K.I.; Winn, J.; Zisserman, A. The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results. Int. J. Comput. Vision 2010, 88, 303–338. [Google Scholar] [CrossRef] [Green Version]
  41. Jégou, H.; Douze, M.; Schmid, C.; Pérez, P. Aggregating local descriptors into a compact image representation. In Proceedings of the CVPR 2010-23rd IEEE Conference on Computer Vision & Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 3304–3311. [Google Scholar]
  42. Csurka, G.; Dance, C.; Fan, L.; Willamowski, J.; Bray, C. Visual categorization with bags of keypoints. In Proceedings of the Workshop on Statistical Learning in Computer Vision, ECCV, Prague, Czech Republic, 11–14 May 2004; Volume 1, pp. 1–2. [Google Scholar]
  43. Perronnin, F.; Dance, C. Fisher kernels on visual vocabularies for image categorization. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–8. [Google Scholar]
  44. Arandjelovic, R.; Gronat, P.; Torii, A.; Pajdla, T.; Sivic, J. NetVLAD: CNN architecture for weakly supervised place recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 40, 5297–5307. [Google Scholar]
  45. Miech, A.; Laptev, I.; Sivic, J. Learnable pooling with context gating for video classification. arXiv 2017, arXiv:1706.06905. [Google Scholar]
  46. Ma, K.; Zeng, K.; Wang, Z. Perceptual Quality Assessment for Multi-Exposure Image Fusion. IEEE Trans. Image Process. 2015, 24, 3345–3356. [Google Scholar] [CrossRef]
  47. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv 2015, arXiv:1502.03167. [Google Scholar]
  48. Ma, Y.; Zhang, W.; Yan, J.; Fan, C.; Shi, W. Blind image quality assessment in multiple bandpass and redundancy domains. Digit. Signal Process. 2018, 80, 37–47. [Google Scholar] [CrossRef]
  49. Chatfield, K.; Lempitsky, V.S.; Vedaldi, A.; Zisserman, A. The devil is in the details: An evaluation of recent feature encoding methods. BMVC 2011, 2, 8. [Google Scholar]
  50. Sheikh, H.R.; Sabir, M.F.; Bovik, A.C. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Trans. Image Process. 2006, 15, 3440–3451. [Google Scholar] [CrossRef]
  51. Larson, E.C.; Chandler, D.M. Most apparent distortion: Full-reference image quality assessment and the role of strategy. J. Electron. Imaging 2010, 19, 011006. [Google Scholar]
  52. Ponomarenko, N.; Ieremeiev, O.; Lukin, V.; Egiazarian, K.; Jin, L.; Astola, J.; Vozel, B.; Chehdi, K.; Carli, M.; Battisti, F.; et al. Color image database TID2013: Peculiarities and preliminary results. In Proceedings of the European Workshop on Visual Information Processing (EUVIP), Paris, France, 10–12 June 2013; pp. 106–111. [Google Scholar]
  53. Ghadiyaram, D.; Bovik, A.C. Crowdsourced study of subjective image quality. In Proceedings of the 2014 48th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 2–5 November 2014; pp. 84–88. [Google Scholar]
  54. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1026–1034. [Google Scholar]
Figure 1. Illustration of CGFA-CNN configurations for BIQA, highlighting the feature aggregation layer (denoted as FV layer) and classification-guided gating unit (denoted as CGU). Features are extracted from the distorted image by Sub-Network I.
Figure 2. A comparison of several distortion types identified by Sub-Network I.
Figure 3. The configurations of the proposed FV layer. Convolution kernel size is 1 × 1 .
Figure 4. Relationship between the SRCC on LIVE of different feature aggregation layer and the number of K.
Table 1. Architecture of Sub-Network I.
Layer Name | Type | Patch Size | Stride | Output Size
Conv 1-1 | Conv + ReLU + BN | 3 × 3 × 48 | 1 | H × W × 48
Conv 2-1 | Conv + ReLU + BN | 3 × 3 × 48 | 2 | H/2 × W/2 × 48
Conv 2-2 | Conv + ReLU + BN | 3 × 3 × 64 | 1 | H/2 × W/2 × 64
Conv 3-1 | Conv + ReLU + BN | 3 × 3 × 64 | 2 | H/4 × W/4 × 64
Conv 3-2 | Conv + ReLU + BN | 3 × 3 × 64 | 1 | H/4 × W/4 × 64
Conv 4-1 | Conv + ReLU + BN | 3 × 3 × 64 | 2 | H/8 × W/8 × 64
Conv 4-2 | Conv + ReLU + BN | 3 × 3 × 128 | 1 | H/8 × W/8 × 128
Conv 5-1 | Conv + ReLU + BN | 3 × 3 × 128 | 2 | H/16 × W/16 × 128
Conv 5-2 | Conv + ReLU + BN | 3 × 3 × 128 | 1 | H/16 × W/16 × 128
Pool | MaxPool | 1 × 1 × 128 | 1 | 1 × 1 × 128
FC-1 | FC + ReLU | 1 × 1 × 256 | 1 | 1 × 1 × 256
FC-2 | FC + ReLU | 1 × 1 × 256 | 1 | 1 × 1 × 256
FC-3 | FC | 1 × 1 × 41 | 1 | 1 × 1 × 41
Classifier | Soft-max | 1 × 1 × 41 | 1 | 1 × 1 × 41
Table 2. Performance comparison on the four IQA databases by two evaluation metrics. The top three SRCC and PLCC results are highlighted in red, green, and blue, respectively.
Method | LIVE SRCC | LIVE PLCC | CSIQ SRCC | CSIQ PLCC | TID2013 SRCC | TID2013 PLCC | LIVE Challenge SRCC | LIVE Challenge PLCC
BRISQUE [19] | 0.940 | 0.945 | 0.777 | 0.817 | 0.573 | 0.651 | 0.603 | 0.641
M3 [20] | 0.950 | 0.954 | 0.804 | 0.835 | 0.679 | 0.705 | 0.595 | 0.620
FRIQUEE [21] | 0.948 | 0.955 | 0.844 | 0.889 | 0.668 | 0.705 | 0.694 | 0.710
CORNIA [22] | 0.943 | 0.946 | 0.730 | 0.800 | 0.550 | 0.613 | 0.618 | 0.665
BIECON [30] | 0.958 | 0.960 | 0.815 | 0.823 | 0.717 | 0.762 | 0.595 | 0.613
deepIQA [29] | 0.960 | 0.972 | 0.803 | 0.821 | 0.671 | 0.680 | – | –
ResNet50+ft [34] | 0.950 | 0.954 | 0.876 | 0.905 | 0.712 | 0.756 | 0.819 | 0.849
DIQA [26] | 0.975 | 0.977 | 0.884 | 0.915 | 0.825 | 0.850 | 0.703 | 0.704
TSCN [25] | 0.969 | 0.972 | – | – | – | – | – | –
DB-CNN [38] | 0.968 | 0.971 | 0.946 | 0.959 | 0.816 | 0.865 | 0.851 | 0.869
CGFA-CNN | 0.971 | 0.973 | 0.953 | 0.965 | 0.841 | 0.858 | 0.837 | 0.846
Table 3. Average SRCC and PLCC results of individual distortion types across ten sessions on LIVE database. The top three SRCC and PLCC results are highlighted in red, green, and blue, respectively.
SRCC | JPEG | JP2K | WN | GB | FF
BRISQUE [19] | 0.965 | 0.929 | 0.982 | 0.964 | 0.828
M3 [20] | 0.966 | 0.930 | 0.986 | 0.935 | 0.902
FRIQUEE [21] | 0.947 | 0.919 | 0.983 | 0.937 | 0.884
CORNIA [22] | 0.947 | 0.924 | 0.958 | 0.951 | 0.921
HOSA [23] | 0.954 | 0.935 | 0.975 | 0.954 | 0.954
dipIQ [31] | 0.969 | 0.956 | 0.975 | 0.940 | –
DIQA [26] | 0.961 | 0.976 | 0.988 | 0.962 | 0.912
TSCN [25] | 0.966 | 0.950 | 0.979 | 0.963 | 0.911
DB-CNN [38] | 0.972 | 0.955 | 0.980 | 0.935 | 0.930
CGFA-CNN | 0.973 | 0.975 | 0.986 | 0.968 | 0.912

PLCC | JPEG | JP2K | WN | GB | FF
BRISQUE [19] | 0.971 | 0.940 | 0.989 | 0.965 | 0.894
M3 [20] | 0.977 | 0.945 | 0.992 | 0.947 | 0.920
FRIQUEE [21] | 0.955 | 0.935 | 0.991 | 0.949 | 0.943
CORNIA [22] | 0.962 | 0.944 | 0.974 | 0.961 | 0.943
HOSA [23] | 0.967 | 0.949 | 0.983 | 0.967 | 0.967
dipIQ [31] | 0.980 | 0.964 | 0.983 | 0.948 | –
DIQA [26] | – | – | – | – | –
TSCN [25] | 0.966 | 0.963 | 0.995 | 0.950 | 0.949
DB-CNN [38] | 0.986 | 0.967 | 0.988 | 0.956 | 0.961
CGFA-CNN | 0.972 | 0.976 | 0.981 | 0.974 | 0.947
Table 4. Average SRCC and PLCC results of individual distortion types across ten sessions on CSIQ database. The top three SRCC and PLCC results are highlighted in red, green, and blue, respectively.
SRCC | JPEG | JP2K | WN | GB | PN | CC
BRISQUE [19] | 0.806 | 0.840 | 0.732 | 0.820 | 0.378 | 0.804
M3 [20] | 0.740 | 0.911 | 0.741 | 0.868 | 0.663 | 0.770
FRIQUEE [21] | 0.869 | 0.846 | 0.748 | 0.870 | 0.753 | 0.838
CORNIA [22] | 0.513 | 0.831 | 0.664 | 0.836 | 0.493 | 0.462
HOSA [23] | 0.733 | 0.818 | 0.604 | 0.841 | 0.500 | 0.716
dipIQ [31] | 0.936 | 0.944 | 0.904 | 0.932 | – | –
MEON [37] | 0.948 | 0.898 | 0.951 | 0.918 | – | –
DIQA [26] | 0.835 | 0.931 | 0.927 | 0.893 | 0.870 | 0.718
DB-CNN [38] | 0.940 | 0.953 | 0.948 | 0.947 | 0.940 | 0.870
CGFA-CNN | 0.950 | 0.939 | 0.956 | 0.941 | 0.952 | 0.897

PLCC | JPEG | JP2K | WN | GB | PN | CC
BRISQUE [19] | 0.828 | 0.887 | 0.742 | 0.891 | 0.496 | 0.835
M3 [20] | 0.768 | 0.928 | 0.728 | 0.917 | 0.717 | 0.787
FRIQUEE [21] | 0.885 | 0.883 | 0.778 | 0.905 | 0.769 | 0.864
CORNIA [22] | 0.563 | 0.883 | 0.778 | 0.905 | 0.632 | 0.543
HOSA [23] | 0.759 | 0.899 | 0.656 | 0.912 | 0.601 | 0.744
dipIQ [31] | 0.975 | 0.959 | 0.927 | 0.958 | – | –
MEON [37] | 0.979 | 0.925 | 0.958 | 0.846 | – | –
DIQA [26] | – | – | – | – | – | –
DB-CNN [38] | 0.982 | 0.971 | 0.956 | 0.969 | 0.950 | 0.895
CGFA-CNN | 0.972 | 0.953 | 0.969 | 0.955 | 0.942 | 0.893
Table 5. Average SRCC results of individual distortion types across ten sessions on TID2013 database. The top three SRCC results are highlighted in red, green, and blue, respectively.
Method | #01 | #02 | #03 | #04 | #05 | #06 | #07 | #08 | #09 | #10 | #11 | #12
BRISQUE [19] | 0.852 | 0.709 | 0.491 | 0.575 | 0.753 | 0.630 | 0.798 | 0.813 | 0.586 | 0.852 | 0.893 | 0.315
M3 [20] | 0.748 | 0.591 | 0.769 | 0.491 | 0.875 | 0.693 | 0.833 | 0.878 | 0.721 | 0.823 | 0.872 | 0.400
FRIQUEE [21] | 0.730 | 0.573 | 0.866 | 0.345 | 0.345 | 0.847 | 0.730 | 0.764 | 0.881 | 0.839 | 0.813 | 0.498
CORNIA [22] | 0.756 | 0.750 | 0.727 | 0.726 | 0.769 | 0.767 | 0.016 | 0.921 | 0.832 | 0.874 | 0.910 | 0.686
HOSA [23] | 0.833 | 0.551 | 0.842 | 0.468 | 0.897 | 0.809 | 0.815 | 0.883 | 0.854 | 0.891 | 0.730 | 0.710
MEON [37] | 0.813 | 0.722 | 0.926 | 0.728 | 0.911 | 0.901 | 0.888 | 0.887 | 0.797 | 0.860 | 0.891 | 0.746
DIQA [26] | 0.915 | 0.755 | 0.878 | 0.734 | 0.939 | 0.843 | 0.858 | 0.920 | 0.788 | 0.892 | 0.912 | 0.861
DB-CNN [38] | 0.790 | 0.700 | 0.826 | 0.646 | 0.879 | 0.708 | 0.825 | 0.859 | 0.865 | 0.894 | 0.916 | 0.772
CGFA-CNN | 0.812 | 0.804 | 0.851 | 0.845 | 0.910 | 0.794 | 0.867 | 0.933 | 0.866 | 0.914 | 0.922 | 0.763

Method | #13 | #14 | #15 | #16 | #17 | #18 | #19 | #20 | #21 | #22 | #23 | #24
BRISQUE [19] | 0.359 | 0.145 | 0.224 | 0.124 | 0.040 | 0.109 | 0.724 | 0.008 | 0.685 | 0.764 | 0.616 | 0.784
M3 [20] | 0.731 | 0.190 | 0.318 | 0.119 | 0.224 | −0.121 | 0.701 | 0.202 | 0.664 | 0.886 | 0.648 | 0.915
FRIQUEE [21] | 0.660 | 0.076 | 0.032 | 0.254 | 0.585 | 0.589 | 0.704 | 0.318 | 0.641 | 0.768 | 0.737 | 0.891
CORNIA [22] | 0.805 | 0.286 | 0.219 | 0.065 | 0.182 | 0.081 | 0.644 | 0.534 | 0.862 | 0.272 | 0.792 | 0.862
MEON [37] | 0.716 | 0.116 | 0.500 | 0.177 | 0.252 | 0.684 | 0.849 | 0.406 | 0.772 | 0.857 | 0.779 | 0.855
DIQA [26] | 0.812 | 0.659 | 0.407 | 0.299 | 0.687 | −0.151 | 0.904 | 0.655 | 0.930 | 0.936 | 0.756 | 0.909
DB-CNN [38] | 0.773 | 0.270 | 0.444 | 0.646 | 0.548 | 0.631 | 0.711 | 0.752 | 0.860 | 0.833 | 0.732 | 0.902
CGFA-CNN | 0.757 | 0.335 | 0.649 | 0.441 | 0.573 | 0.657 | 0.819 | 0.785 | 0.897 | 0.940 | 0.711 | 0.938
Table 6. SRCC comparison on cross-database. The top three SRCC results are highlighted in red, green, and blue, respectively.
Method | CSIQ→LIVE | CSIQ→TID2013 | TID2013→LIVE | TID2013→CSIQ
BRISQUE [19] | 0.847 | 0.454 | 0.790 | 0.590
M3 [20] | 0.797 | 0.328 | 0.873 | 0.605
FRIQUEE [21] | 0.879 | 0.463 | 0.755 | 0.635
CORNIA [22] | 0.853 | 0.312 | 0.846 | 0.672
HOSA [23] | 0.773 | 0.329 | 0.594 | 0.462
DB-CNN [38] | 0.877 | 0.540 | 0.891 | 0.807
CGFA-CNN | 0.891 | 0.533 | 0.898 | 0.774
Table 7. SRCC with different settings.
Method | LIVE | CSIQ | TID2013 | LIVE Challenge
CGFA-CNN (MaxPool) | 0.915 | 0.893 | 0.778 | 0.766
CGFA-CNN (AvgPool) | 0.909 | 0.876 | 0.755 | 0.761
CGFA-CNN (w/o CGU) | 0.948 | 0.919 | 0.783 | 0.799
CGFA-CNN (single feature) | 0.931 | 0.890 | 0.757 | 0.765
CGFA-CNN (BOW layer (K = 1024)) | 0.955 | 0.936 | 0.808 | 0.791
CGFA-CNN (VLAD layer (K = 64)) | 0.966 | 0.945 | 0.819 | 0.810
CGFA-CNN (w/o VGG-16) | 0.970 | 0.950 | 0.836 | 0.672
CGFA-CNN (proposed) | 0.973 | 0.953 | 0.841 | 0.837
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
