Article

On the Application LBP Texture Descriptors and Its Variants for No-Reference Image Quality Assessment

by Pedro Garcia Freitas 1,*, Luísa Peixoto Da Eira 2, Samuel Soares Santos 2 and Mylene Christine Queiroz de Farias 2
1 Department of Computer Science, University of Brasília, Brasília 73345-010, Brazil
2 Department of Electrical Engineering, University of Brasília, Brasília 73345-010, Brazil
* Author to whom correspondence should be addressed.
J. Imaging 2018, 4(10), 114; https://doi.org/10.3390/jimaging4100114
Submission received: 16 July 2018 / Revised: 23 September 2018 / Accepted: 26 September 2018 / Published: 4 October 2018
(This article belongs to the Special Issue Image Quality)

Abstract: Automatically assessing the quality of an image is a critical problem for a wide range of applications in the fields of computer vision and image processing. For example, many computer vision applications, such as biometric identification, content retrieval, and object recognition, rely on input images with a specific range of quality. Therefore, an effort has been made to develop image quality assessment (IQA) methods that are able to automatically estimate quality. Among the possible IQA approaches, no-reference IQA (NR-IQA) methods are of fundamental interest, since they can be used in most real-time multimedia applications. NR-IQA methods are capable of assessing the quality of an image without using the reference (or pristine) image. In this paper, we investigate the use of texture descriptors in the design of NR-IQA methods. The premise is that visible impairments alter the statistics of texture descriptors, making it possible to estimate quality. To investigate whether this premise is valid, we analyze the use of a set of state-of-the-art Local Binary Patterns (LBP) texture descriptors in IQA methods. In particular, we present a comprehensive review with a detailed description of the considered methods. Additionally, we propose a framework for using texture descriptors in NR-IQA methods. Our experimental results indicate that, although not all texture descriptors are suitable for NR-IQA, many can be used for this purpose, achieving good accuracy with the advantage of low computational complexity.

1. Introduction

With the fast growth of imaging systems, a large number of digital images are being generated every day. These images are often altered in the acquisition, transmission, or compression stages. These alterations can introduce distortions that may affect how humans and machines understand the image content. Therefore, multimedia and computer vision applications can greatly benefit from automatic tools that are capable of assessing image quality. More specifically, image quality assessment (IQA) methods can be used, for example, to determine optimal codec parameters [1], find the best perceptual coding schemes [2,3,4,5,6], and design efficient image watermarking algorithms [7,8]. Moreover, a recent report by Conviva® shows that viewers are demanding a higher quality of delivered multimedia content [9]. As users’ demands increase, the importance of designing automatic tools to predict the quality of the visual stimuli also increases.
In the context of computer vision (CV), the quality of input images can affect the performance of the algorithms. For instance, Kupyn et al. [10] have shown that object detection methods based on deep learning approaches are greatly affected by the quality of the input images, as can be seen in the images depicted in Figure 1. Moreover, Dodge and Karam [11] demonstrated that deep neural networks are susceptible to image quality distortions, particularly to blur and noise. Other examples of known CV algorithms that are affected by the quality of the input images include finger vein detection [12], biometric sensor spoofing [13], face recognition [14], video stream recognition systems [15], deep learning reconstruction of magnetic resonance imaging (MRI) [16], and multi-view activity recognition [17].
There are mainly two ways of measuring image quality. The first consists of performing psychophysical experiments in which humans rate the quality of a set of images. These experiments use standardized experimental methodologies to obtain quality scores for a broad range of images processed with a diverse number of algorithms and procedures. Since these experiments use human subjects, this approach is known as subjective quality assessment and it is considered the most accurate method to estimate quality [20]. Unfortunately, subjective methods are expensive and time-consuming and, therefore, are unsuitable for most real-time applications. The second approach consists of using computer algorithms to obtain a quality estimate. Since this does not require human subjects, this approach is often called ‘objective quality assessment’. If a given objective method produces results that are well correlated with the quality scores provided by human viewers, it can be used to replace subjective methods.
Objective IQA methods are classified according to the amount of reference information they require. If the full reference image (pristine content) is required to estimate quality, the method is classified as full-reference (FR). If the method only requires a limited amount of information regarding the reference image, the method is a reduced-reference (RR) method. Since requiring full or limited reference information can be a severe impediment for applications, one solution is to use no-reference (NR) methods, which evaluate the quality of images without requiring any information about the reference image. Objective methods can also be classified according to their target applications. Methods designed for specific applications are known as distortion-specific (DS) methods. DS methods can be designed to estimate the amount of sharpness [21,22,23], JPEG/JPEG2000 degradations [24,25], blockiness artifacts [26], contrast distortions [27], and enhancement [28] in an image. Although DS methods can be useful for specific scenarios, they have limited applicability in the real world. An alternative to DS methods is the distortion-generic (DG) methods, which do not require prior knowledge of the type of distortion and, therefore, are more adequate for diverse scenarios. As expected, the design of DG methods is more challenging [29,30].
According to Hemami and Reibman [31], the design of IQA methods requires three major steps: measuring, pooling, and mapping. Measuring refers to the extraction of a set of specific physical attributes of the image. In other words, the method must compute a set of image features that describes visual quality. Pooling refers to the combination of these measurements to create a link between the image features and its quality. Mapping refers to the model of correspondence between the result of the pooling and the subjective scores. Most existing works focus on the measuring stage, where quality-aware features are designed to measure the level of image distortion. These features are usually based on the natural scene statistics (NSS) [32,33,34,35], assuming that pristine natural images have particular statistical properties that are disturbed by the distortions. NSS-based methods can extract features in different domains, such as discrete cosine transform (DCT) domain [36,37,38], discrete wavelet transform (DWT) domain [39,40,41], spatial domain [42], etc. More recently, convolutional neural networks (CNN) have also been used in the design of NR-IQA methods [43,44,45]. CNN-based methods use the direct correspondence between the hierarchy of the human visual system and the layers of a CNN [46,47,48,49,50].
Another trend has been the use of saliency models (SM) [51,52,53,54]. SMs provide a measure of the perceptual importance of each image region, which allows quality assessment methods to weight the distortion according to region importance. In other words, quality and saliency are inherently associated because both of them depend on how the human visual system perceives the content and, consequently, on how (suprathreshold) distortions are detected [53]. Some investigators have studied how to include saliency information into existing visual quality metrics in order to boost their performance [52,55,56,57,58]. Nevertheless, most of these investigations are targeted at either FR or DS image quality metrics.
In this paper, we investigate the suitability of texture descriptors to assess image quality. This paper is inspired by the studies of Ciocca et al. [59] and Larson and Chandler [60]. The premise is that visible impairments alter the statistics of texture descriptors, making it possible to estimate image quality. To investigate this premise, we analyze the use of a set of state-of-the-art texture descriptors in quality assessment methods. Additionally, we propose a framework to use these texture descriptors for NR-IQA. The framework is based on a supervised machine learning (ML) approach that takes into account how impairments affect the statistics of the texture descriptors. These statistics are used as feature vectors of a random forest regression algorithm that learns the predictive quality model via regression [61].
The rest of this paper is organized as follows. Section 2 presents a brief review of the texture descriptors investigated in this paper. Section 3 describes the proposed framework, the experimental setup, all simulation results, and a discussion of these results. Finally, Section 4 presents the conclusions.

2. Texture Descriptors

Texture is a fundamental attribute of images, but there is no consensus on its definition. Petrou and Garcia-Sevilla, for instance, define texture as a variation of the visual stimuli at scales smaller than the scale of interest [62]. Davies associates texture to patterns with both randomness and regularity [63]. In this paper, texture refers to area characteristics that are perceived as combinations of basic image patterns. These basic patterns present a certain regularity that is captured by statistical measures.
To characterize a texture, texture analysis methods identify and select a set of relevant texture features. Over the years, several texture analysis methods have been proposed, using a variety of approaches [62,63], including gray level run-length (GLRLM) [64], gray level co-occurrence matrices (GLCM) [65], texture spectrum [66], and textons [67]. Among the popular texture operators is the local binary patterns (LBP) [68], which describes the local textures of an image by performing simple operations. More specifically, the textures are labeled according to the relationships between each pixel and its neighbors. One of the advantages of the LBP descriptor is that it unifies traditional texture analysis models.
There are several modifications of the LBP operator [69,70]. Most of them try to improve the performance of the LBP in specific applications (e.g., texture classification, face recognition, object detection, etc.). However, few works have compared the performance of the LBP and its variants within a given application. This paper is inspired by the work of Hadid et al. [69], who compared the performance of 13 different LBP-based methods in gender recognition applications. Our focus is to test the performance of LBP-based descriptors in IQA applications. This section describes the basic LBP descriptor and the state-of-the-art LBP variants considered in this work.

2.1. Basic Local Binary Patterns (LBP)

The Local Binary Pattern (LBP) is arguably one of the most powerful texture descriptors. It was first proposed by Ojala et al. [68] and it has since been proven to be an effective feature extractor for texture-based problems. The traditional LBP descriptor takes the following form:
$$LBP_{R,P}(I_c) = \sum_{p=0}^{P-1} S(I_p - I_c)\, 2^p,$$
where
$$S(t) = \begin{cases} 1, & \text{if } t \geq 0, \\ 0, & \text{otherwise.} \end{cases}$$
In Equation (1), $I_c = I(x, y)$ is an arbitrary central pixel at the position $(x, y)$ and $I_p = I(x_p, y_p)$ is a neighboring pixel surrounding $I_c$, where
$$x_p = x + R \cos\!\left(\frac{2\pi p}{P}\right)$$
and
$$y_p = y - R \sin\!\left(\frac{2\pi p}{P}\right).$$
$P$ is the total number of neighboring pixels $I_p$, sampled at a distance $R$ from $I_c$. Figure 2 illustrates examples of symmetric samplings with different numbers of neighboring points ($P$) and radius ($R$) values.
Figure 3 illustrates the steps for applying the LBP descriptor on a single pixel ($I_c = 8$) located in the center of a 3 × 3 image block, as shown in the bottom-left of this figure. The numbers in the yellow squares of the block represent the order in which the descriptor is computed (counter-clockwise direction starting from 0). In this figure, we use a unitary neighborhood radius (R = 1) and eight neighboring pixels (P = 8). After calculating $S(t)$ (Equation (2)) for each neighboring pixel $I_p$ ($0 \leq p \leq 7$), we obtain a binary output for each $I_p$, as illustrated in the upper-left position of Figure 3. The black circles correspond to ‘0’ and the white circles to ‘1’. These binary outputs are stored as a binary number, according to their positions (yellow squares). The LBP output for $I_c$ is the decimal number obtained by converting this binary number. After the LBP is applied to all pixels in an image, we get a set of labels that compose the LBP channel. Figure 4 shows examples of LBP channels for the image ‘Baboon’, obtained using different radius values and different numbers of neighbors.
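To make the computation above concrete, the following Python sketch (our own illustrative code, not part of the original implementation) computes the LBP label of a single pixel and the full LBP channel of a grayscale NumPy image, following Equations (1)–(4); neighbor positions that fall off the integer grid are bilinearly interpolated.

```python
import numpy as np

def lbp_label(img, x, y, R=1, P=8):
    """LBP label of the pixel at (x, y), following Equations (1)-(4)."""
    Ic = img[y, x]
    label = 0
    for p in range(P):
        xp = x + R * np.cos(2 * np.pi * p / P)
        yp = y - R * np.sin(2 * np.pi * p / P)
        # Bilinear interpolation of the neighbor intensity I_p.
        x0, y0 = int(np.floor(xp)), int(np.floor(yp))
        dx, dy = xp - x0, yp - y0
        Ip = (img[y0, x0] * (1 - dx) * (1 - dy) + img[y0, x0 + 1] * dx * (1 - dy)
              + img[y0 + 1, x0] * (1 - dx) * dy + img[y0 + 1, x0 + 1] * dx * dy)
        # Step function S(t) of Equation (2): 1 if t >= 0, 0 otherwise.
        if Ip - Ic >= 0:
            label += 1 << p
    return label

def lbp_channel(img, R=1, P=8):
    """Apply the descriptor to every interior pixel, producing the LBP channel."""
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.int64)
    for y in range(R, h - R - 1):
        for x in range(R, w - R - 1):
            out[y, x] = lbp_label(img, x, y, R, P)
    return out
```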
When an image is rotated, the sampled values $I_p$ move along the perimeter of the circle around $I_c$, producing a circular shift of the resulting binary number. As a consequence, a different decimal $LBP_{R,P}(I_c)$ value is obtained. To remove this effect, we can use the following rotation invariant (ri) descriptor, defined as:
$$LBP^{ri}_{R,P}(I_c) = \min_{k} \left\{ ROTR\!\left(LBP_{R,P}(I_c), k\right) \right\},$$
where $k \in \{0, 1, 2, \ldots, P-1\}$ and $ROTR(x, k)$ is the circular bit-wise right shift operator that shifts the tuple $x$ by $k$ positions.
Due to the crude quantization of the angular space and to the occurrence of specific frequencies in individual patterns, the $LBP_{R,P}$ and $LBP^{ri}_{R,P}$ descriptors do not always provide a good discrimination [71]. To improve the discriminability, Ojala et al. [68] proposed a ‘uniform’ descriptor that captures fundamental pattern properties:
$$LBP^{u}_{R,P}(I_c) = \begin{cases} \sum_{p=0}^{P-1} S(I_p - I_c), & \text{if } U(LBP^{ri}_{R,P}) \leq 2, \\ P + 1, & \text{otherwise,} \end{cases}$$
where
$$U(LBP_{P,R}) = \Delta(I_{P-1}, I_0) + \sum_{p=1}^{P-1} \Delta(I_p, I_{p-1}),$$
and
$$\Delta(I_x, I_y) = \left| S(I_x - I_c) - S(I_y - I_c) \right|.$$
In addition to a better discriminability, the uniform LBP descriptor has the advantage of generating fewer distinct LBP labels. While the ‘nonuniform’ descriptor (Equation (1)) produces $2^P$ different output values, the ‘uniform’ descriptor produces only $P + 2$ distinct output values. Finally, once the LBP mask is calculated using any of the LBP approaches described above, we compute its histogram. Next, we present some of the LBP variants that have been proposed to improve the robustness and discriminability of the original descriptor.
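As an illustration of the ‘uniform’ rule, the short helper below (hypothetical, for exposition only) turns the circular list of binary outputs $S(I_p - I_c)$ of a neighborhood into the uniform LBP label of Equations (6)–(8): the number of ‘1’ bits when the pattern has at most two 0/1 transitions, and $P + 1$ otherwise.

```python
def uniform_lbp_label(bits):
    """Uniform LBP label from the circular list of S(I_p - I_c) values (0/1)."""
    P = len(bits)
    # U(LBP) of Equation (7): number of 0/1 transitions along the circle.
    transitions = sum(abs(bits[p] - bits[p - 1]) for p in range(P))
    return sum(bits) if transitions <= 2 else P + 1
```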

2.2. Local Ternary Patterns (LTP)

The LTP operator is an extension of the LBP descriptor that uses up to three coded values ($\{-1, 0, 1\}$). This is achieved by changing the step function $S$ in the following manner:
$$\hat{S}(t) = \begin{cases} 1, & t \geq \tau, \\ 0, & -\tau < t < \tau, \\ -1, & t \leq -\tau, \end{cases}$$
where $\tau$ is a threshold that determines how sharp an intensity change should be in order to be considered an edge. After computing the ternary codes, each ternary pattern is split into two codes: a positive (upper pattern) code and a negative (lower pattern) code, which are treated as two separate channels.
Figure 5 illustrates the basic feature extraction procedure for a single pixel using the LTP descriptor. The numbers in yellow squares represent the order in which the step function is computed (Equations (2) and (9)). In this example, we consider a unitary neighborhood radius (R = 1), eight neighboring pixels (P = 8), and a threshold $\tau$ equal to five. While in the LBP the binary code takes only two values (0 or 1, represented by the colors black and white), the LTP descriptor generates three possible values (see Equation (9)), represented by the colors black ($\hat{S}(t) = 1$), white ($\hat{S}(t) = 0$), and red ($\hat{S}(t) = -1$).
We split the LTP code into two LBP codes (with only non-negative values). First, we create the upper pattern by converting the negative codes to zero. Next, we create the lower pattern by setting the positive values to zero and converting the negative values to positive. Comparing Figure 5 and Figure 3, we notice that the LTP descriptor generates two texture information maps, which are treated as two separate LBP channels. Finally, we compute independent histograms and similarity measures for each of these maps and combine these histograms to generate the feature vector.
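The splitting into upper and lower patterns can be sketched as follows (illustrative Python; R = 1 and P = 8 with integer-grid neighbors assumed for brevity):

```python
def ltp_upper_lower(img, x, y, tau=5):
    """Ternary code of one pixel split into upper/lower binary patterns (Eq. (9))."""
    # Counter-clockwise neighbors for R = 1, P = 8.
    offsets = [(1, 0), (1, -1), (0, -1), (-1, -1),
               (-1, 0), (-1, 1), (0, 1), (1, 1)]
    Ic = int(img[y, x])
    upper, lower = 0, 0
    for p, (dx, dy) in enumerate(offsets):
        t = int(img[y + dy, x + dx]) - Ic
        if t >= tau:        # ternary code +1 -> bit of the upper pattern
            upper |= 1 << p
        elif t <= -tau:     # ternary code -1 -> bit of the lower pattern
            lower |= 1 << p
        # |t| < tau -> ternary code 0, contributes to neither pattern
    return upper, lower
```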

2.3. Local Phase Quantization (LPQ)

A limitation of the LBP is its sensitivity to blur. To tackle this problem, the local phase quantization (LPQ) descriptor was proposed [72]. The LPQ descriptor performs a quantization of the Fourier transform phase in local neighborhoods. Let $G(u)$ and $F(u)$ be the discrete Fourier transforms (DFT) of the blurred image $g(z)$ and the original image $f(z)$, which are related by the following equation:
$$G(u) = F(u) \cdot H(u).$$
Assuming that $h(x) = h(-x)$, its DFT is always real and the phase assumes only two values, namely:
$$\angle H(u) = \begin{cases} 0, & H(u) \geq 0, \\ \pi, & \text{otherwise.} \end{cases}$$
For the LPQ descriptor, the phase is computed in the local neighborhood $N_z$, for each pixel position of $f(z)$. The local spectrum is computed with the following equation:
$$F(u, x) = \sum_{y \in N_z} f(y) \cdot w_R(y - x) \cdot e^{-j 2\pi u^T y},$$
where $u$ is the frequency and $w_R$ is a window given by:
$$w_R(x) = \begin{cases} 1, & |x| < \frac{N_R}{2}, \\ 0, & \text{otherwise.} \end{cases}$$
The local Fourier coefficients are computed at four frequencies for each pixel position, i.e.,
$$\mathbf{F}(x) = \left[ F(u_1, x),\; F(u_2, x),\; F(u_3, x),\; F(u_4, x) \right],$$
where $u_1 = [a, 0]^T$, $u_2 = [0, a]^T$, $u_3 = [a, a]^T$, and $u_4 = [a, -a]^T$. In these cases, $a$ is sufficiently small to satisfy $H(u_i) > 0$.
The phase of the Fourier coefficients is given by the signs of the real and imaginary parts of each component of $\mathbf{F}(x)$, computed by scalar quantization:
$$q_j = \begin{cases} 1, & g_j \geq 0, \\ 0, & \text{otherwise,} \end{cases}$$
where $g_j$ is the $j$-th component of $\mathbf{G}(x) = \left[ \mathrm{Re}\{\mathbf{F}(x)\},\; \mathrm{Im}\{\mathbf{F}(x)\} \right]$. After generating the binary coefficients $q_j$, the feature vector is generated using the same technique used in the LBP.
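A minimal LPQ sketch is given below (our own illustration, assuming a uniform window and the frequency $a = 1/\mathrm{win}$; kernel orientation details are simplified relative to the original formulation). It evaluates the local spectrum of Equation (12) at the four frequencies of Equation (14) via 2-D convolution and binarizes the signs of the real and imaginary parts to obtain an 8-bit label per pixel.

```python
import numpy as np
from scipy.signal import convolve2d

def lpq_labels(img, win=3):
    """LPQ labels (values 0-255); histogram them to obtain the feature vector."""
    img = img.astype(float)
    a = 1.0 / win                         # small scalar frequency (assumption)
    n = np.arange(win) - (win - 1) / 2.0  # window coordinates centered at zero
    w0 = np.ones(win)                     # uniform (all-pass) 1-D window
    wa = np.exp(-2j * np.pi * a * n)      # complex exponential at frequency a
    # Separable kernels for u1 = [a,0], u2 = [0,a], u3 = [a,a], u4 = [a,-a].
    freq_kernels = [np.outer(wa, w0), np.outer(w0, wa),
                    np.outer(wa, wa), np.outer(wa, np.conj(wa))]
    labels = np.zeros(img.shape, dtype=np.int32)
    bit = 0
    for kernel in freq_kernels:
        coeff = convolve2d(img, kernel, mode='same')       # local spectrum F(u, x)
        for part in (coeff.real, coeff.imag):
            labels |= (part >= 0).astype(np.int32) << bit  # scalar quantization q_j
            bit += 1
    return labels
```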

2.4. Binarized Statistical Image Features (BSIF)

The binarized statistical image features (BSIF) is a descriptor proposed by Kannala and Rahtu [73], which does not use a manually predefined set of filters. Instead, it learns the filters using the statistics of natural images. BSIF is among the best texture descriptors for face recognition and texture classification applications [69,73]. Differently from the previous descriptors, which operate on pixels, BSIF works on patches of pixels. Given an image patch $X$ of size $l \times l$ pixels and a linear symmetric filter $W_i$ of the same size, the filter response $s_i$ is obtained by computing the following expression:
$$s_i = \sum_{u,v} W_i(u, v)\, X(u, v) = \mathbf{w}_i^T \mathbf{x},$$
where the vectors $\mathbf{w}_i$ and $\mathbf{x}$ contain the pixels of $W_i$ and $X$, respectively. The binarized feature is obtained using the following function:
$$b_i = \begin{cases} 1, & s_i > 0, \\ 0, & \text{otherwise.} \end{cases}$$
The filters $W_i$ are learned via independent component analysis (ICA). The binarized features $b_i$ are aggregated following the same procedure described for generating the LBP labels. The descriptive features are obtained by computing the histogram of the aggregated data.
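The coding step can be sketched as follows (illustrative Python; the ICA filter learning is not reproduced here, so the `filters` argument is assumed to hold pre-learned, or for testing purposes random, $l \times l$ filters):

```python
import numpy as np
from scipy.signal import convolve2d

def bsif_coded_image(img, filters):
    """BSIF coded image from a stack of n_bits filters (Equations (16)-(17))."""
    img = img.astype(float)
    code = np.zeros(img.shape, dtype=np.int32)
    for i, W in enumerate(filters):
        s = convolve2d(img, W, mode='same')    # filter response s_i
        code |= (s > 0).astype(np.int32) << i  # binarized feature b_i
    return code

# Example with stand-in random filters instead of ICA-learned ones:
# filters = np.random.default_rng(0).standard_normal((8, 3, 3))
# coded = bsif_coded_image(image, filters)
```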
Similarly to the LBP, which generates LBP channels, the BSIF generates coded images. These coded images are the set of labels generated after the binarized features are computed using Equation (17) and aggregated using Equation (1). The aggregation of the BSIF results is based on a selected number of bits, instead of the number of neighbors of the labeled pixel. The labeling depends on the relationship between the patch size $l$ and the number of binarized features $b_i$. Figure 6 shows the BSIF coded images corresponding to the same reference image using different BSIF parameters. As can be seen in this figure, the texture information depends on the patch size $l$ and on the number of bits. The number of bits is less than or equal to $l^2 - 1$. This is the reason why the second column does not contain BSIF coded images for 9, 10, 11, or 12 bits. Figure 6 shows that the choice of the number of bits and patch sizes is important for texture analysis algorithms. Therefore, multiscale approaches that incorporate several combinations of these parameters are interesting [74,75,76,77].

2.5. Rotated Local Binary Patterns (RLBP)

For some applications, image rotation affects the LBP results because of the fixed order of its weights. Since the weights are distributed in a circular way, the effect of rotation can be eliminated by rotating the weights by the same angle. When the rotation angle is not known, an adaptive arrangement of weights, based on a locally computed reference direction, can be used. Mehta and Egiazarian [78] proposed the rotated local binary patterns (RLBP) descriptor, which considers that, if an image is rotated, the descriptor should be rotated by the same angle.
The RLBP makes the LBP invariant to rotation by circularly shifting the weights according to the dominant direction ($D$). In the neighborhood of a pixel $I_c$, $D$ is the index of the neighbor whose difference to $I_c$ is maximum, i.e.,
$$D = \operatorname*{argmax}_{p \in \{0, 1, \ldots, P-1\}} |I_p - I_c|.$$
Since $D$ is taken as a reference, the weights are assigned with respect to it. The RLBP descriptor is computed as follows:
$$RLBP_{R,P} = \sum_{p=0}^{P-1} S(I_p - I_c)\, 2^{(p - D) \bmod P},$$
where $i \bmod j$ is the remainder of the division of $i$ by $j$.
Figure 7 depicts the effect of a rotation on the LBP and RLBP descriptors. Notice that the LBP changes under rotation. The red color indicates pixels with values above the threshold, while the yellow color indicates the pixel with the maximum difference to $I_c$ (i.e., the dominant direction $D$). The position $D$ takes the smallest weight, while the other positions get weights that correspond to circular shifts with relation to $D$. From Figure 7g, we notice that the weight corresponding to $D$ is the same for both the original and rotated images, even when these pixels are at different angles. Therefore, the RLBP values for two rotated neighborhoods are the same.
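The weight shifting can be illustrated with the following sketch (our own code, R = 1 and P = 8 with integer-grid neighbors), which implements Equations (18) and (19):

```python
import numpy as np

def rlbp_label(img, x, y):
    """RLBP label of one pixel: weights are circularly shifted so that the
    dominant direction D receives the smallest weight (2^0)."""
    offsets = [(1, 0), (1, -1), (0, -1), (-1, -1),
               (-1, 0), (-1, 1), (0, 1), (1, 1)]
    P = len(offsets)
    Ic = int(img[y, x])
    diffs = [int(img[y + dy, x + dx]) - Ic for dx, dy in offsets]
    D = int(np.argmax(np.abs(diffs)))      # dominant direction (Equation (18))
    label = 0
    for p, d in enumerate(diffs):
        if d >= 0:                         # S(I_p - I_c)
            label += 1 << ((p - D) % P)    # circularly shifted weight
    return label
```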
Figure 8 shows the effect of rotation after generating the LBP and RLBP channels. The first row shows the LBP and RLBP maps of the original images and their corresponding histograms. The second row shows the same information for a version of the original image rotated by 90 degrees. To compare the differences between the LBP and RLBP histograms before and after the rotation, we use three statistical divergence measures: the Kullback–Leibler divergence (KLD) [80], the Jensen–Shannon divergence (JSD) [81], and the chi-square distance (CSD) [82]. The KLD, JSD, and CSD of the LBP histograms are $2.92 \times 10^{-2}$, $6.96 \times 10^{-3}$, and $2.11 \times 10^{-2}$, respectively. These divergences for the RLBP histograms are $2.06 \times 10^{-4}$, $5.12 \times 10^{-5}$, and $1.57 \times 10^{-4}$, respectively. Therefore, the LBP statistical divergences are roughly two orders of magnitude higher than the RLBP statistical divergences.

2.6. Complete Local Binary Patterns (CLBP)

The LBP descriptor considers only the local differences between each pixel and its neighbors. The complete local binary patterns (CLBP) descriptor considers both the sign (S) and the magnitude (M) of the local differences, as well as the original intensity value of the center pixel [83]. Therefore, the CLBP feature is a combination of three descriptors, namely $CLBP_S$, $CLBP_M$, and $CLBP_C$. Figure 9 illustrates the computation of the CLBP feature.
The $CLBP_S$ and $CLBP_M$ components are computed using the local difference sign-magnitude transform (LDSMT), which is defined as:
$$LDSMT_p = s_p \cdot m_p,$$
where $s_p = S(I_p - I_c)$ and $m_p = |I_p - I_c|$. The sign $s_p$ is the descriptor used to compute $CLBP_S$, i.e., $CLBP_S$ is the same as the original LBP and it is used to code the sign information of the local differences. $CLBP_M$ is used to code the magnitude information of the local differences:
$$CLBP_M = \sum_{p=0}^{P-1} thresh(m_p, c) \cdot 2^p,$$
where
$$thresh(x, c) = \begin{cases} 1, & x \geq c, \\ 0, & \text{otherwise.} \end{cases}$$
In the above equation, $c$ is a threshold set as the mean value of the input image $I$. Finally, $CLBP_C$ is used to code the information of the original center gray level value:
$$CLBP_C = thresh(I_c, c).$$
The three descriptors, $CLBP_S$, $CLBP_M$, and $CLBP_C$, are then combined: individual histograms are computed and concatenated. This joint histogram is used as the CLBP feature.
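The three components can be computed jointly as in the sketch below (illustrative Python, R = 1 and P = 8; following the text above, the threshold c is taken as the mean intensity of the whole image):

```python
def clbp_components(img, x, y, c):
    """CLBP_S, CLBP_M and CLBP_C labels of one pixel (Equations (20)-(23))."""
    offsets = [(1, 0), (1, -1), (0, -1), (-1, -1),
               (-1, 0), (-1, 1), (0, 1), (1, 1)]
    Ic = float(img[y, x])
    clbp_s, clbp_m = 0, 0
    for p, (dx, dy) in enumerate(offsets):
        d = float(img[y + dy, x + dx]) - Ic
        if d >= 0:               # sign component s_p
            clbp_s |= 1 << p
        if abs(d) >= c:          # magnitude component m_p thresholded by c
            clbp_m |= 1 << p
    clbp_c = 1 if Ic >= c else 0
    return clbp_s, clbp_m, clbp_c

# c = img.mean()  # threshold set as the mean value of the input image
```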

2.7. Local Configuration Patterns (LCP)

Local configuration patterns (LCP) is a rotation invariant image descriptor proposed by Guo et al. [84], which is more discriminative than the original LBP. LCP decomposes the image information into two levels: local structural information and microscopic configuration information. The local structural information is composed of LBP features, while the microscopic configuration (MiC) information is determined by the image configuration and by the pixel-wise interaction relationships.
To model the image configuration, we estimate the optimal weights, associated with the neighboring pixels, that linearly reconstruct the central pixel intensity for each pattern type. This can be expressed by the following equation:
$$E(a_0, a_1, \ldots, a_{P-1}) = \left| I_c - \sum_{p=0}^{P-1} a_p I_p \right|,$$
where $I_c$ and $I_p$ denote the intensity values of the center pixel and the neighboring pixels, $a_p$ are the weighting parameters associated with $I_p$, and $E(a_0, a_1, \ldots, a_{P-1})$ is the reconstruction error with respect to the model parameters. To minimize the reconstruction error, the optimal parameters for each pattern are determined by a least squares estimation.
Suppose the number of occurrences of a particular pattern type $j$ is $f_j$, i.e., there are $f_j$ pixels in the image with the pattern $j$. We denote the intensities of those $f_j$ pixels as $c_{j,i}$, where $i = 0, 1, \ldots, f_j - 1$. These intensities are organized into a vector:
$$\mathbf{c}_j = \begin{bmatrix} c_{j,0} \\ c_{j,1} \\ \vdots \\ c_{j,f_j-1} \end{bmatrix}.$$
We denote the intensities of the neighboring pixels with respect to each $c_{j,i}$ as $v_{i,0}, \ldots, v_{i,P-1}$, which are organized into a matrix with the following form:
$$\mathbf{V}_j = \begin{bmatrix} v_{0,0} & v_{0,1} & \cdots & v_{0,P-1} \\ v_{1,0} & v_{1,1} & \cdots & v_{1,P-1} \\ \vdots & \vdots & \ddots & \vdots \\ v_{f_j-1,0} & v_{f_j-1,1} & \cdots & v_{f_j-1,P-1} \end{bmatrix}.$$
To minimize the reconstruction error (Equation (24)), the unknown parameters $a_p$ are organized as a vector:
$$\mathbf{A}_j = \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_{P-1} \end{bmatrix},$$
and the optimal parameters are determined by solving the following equation:
$$\mathbf{A}_j = \left( \mathbf{V}_j^T \mathbf{V}_j \right)^{-1} \mathbf{V}_j^T \mathbf{c}_j.$$
After determining $\mathbf{A}_j$, we apply the Fourier transform to the estimated parameter vector, which can be expressed by:
$$H_j(k) = \sum_{p=0}^{P-1} A_j(p)\, e^{-i 2\pi k p / P},$$
where $H_j(k)$ is the $k$-th element of $\mathbf{H}_j$ and $A_j(p)$ is the $p$-th element of $\mathbf{A}_j$. The magnitude of each element of the vector $\mathbf{H}_j$ is taken as the resulting MiC, which is defined by:
$$|\mathbf{H}_j| = \left[ |H_j(0)|, |H_j(1)|, \ldots, |H_j(P-1)| \right].$$
The LCP feature is formed by both the pixel-wise interaction relationships and the local shape information, and is expressed as:
$$LCP = \left[ \left[ |\mathbf{H}_0|; O_0 \right], \left[ |\mathbf{H}_1|; O_1 \right], \ldots, \left[ |\mathbf{H}_{P-1}|; O_{P-1} \right] \right],$$
where $|\mathbf{H}_j|$ is computed using Equation (30) with respect to the $j$-th pattern and $O_j$ is the number of occurrences of the $j$-th LBP label.
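The MiC part can be sketched as below (illustrative Python; the grouping of pixels by LBP pattern type into the matrices $\mathbf{V}_j$ and vectors $\mathbf{c}_j$ is assumed to be done beforehand):

```python
import numpy as np

def lcp_mic_features(Vs, cs):
    """Magnitude spectra |H_j| of the least-squares weights A_j (Eqs. (24)-(30))."""
    mic = []
    for V, c in zip(Vs, cs):
        # Least-squares solution of V_j A_j ~= c_j (Equations (27)-(28)).
        A, *_ = np.linalg.lstsq(V, c, rcond=None)
        H = np.fft.fft(A)            # Fourier transform of A_j (Equation (29))
        mic.append(np.abs(H))        # rotation-invariant magnitude |H_j|
    return mic
```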

2.8. Opposite Color Local Binary Patterns (OCLBP)

To combine both texture and color information into a joint descriptor, Maenpaa [85] proposed the opponent color local binary pattern (OCLBP) descriptor. This descriptor improves the descriptor proposed by Jain and Healey [86] by substituting the Gabor filters with a variant of the LBP descriptor, decreasing its computational cost. The OCLBP descriptor uses two approaches. In the first, the LBP descriptor is applied individually on each color channel, instead of being applied only on a single luminance channel. This approach is called ‘intra-channel’ because the central pixel and the corresponding sampled neighboring points belong to the same color channel. In the second approach, called ‘inter-channel’, the central pixel belongs to one color channel and its corresponding sampled neighboring points belong to another color channel. More specifically, for an OCLBP$_{MN}$ descriptor, the central pixel is positioned in the channel M, while the neighborhood is sampled in the channel N. For a three-channel color space, such as RGB, there are six possible combinations of channels: OCLBP$_{RG}$, OCLBP$_{GR}$, OCLBP$_{RB}$, OCLBP$_{BR}$, OCLBP$_{GB}$, and OCLBP$_{BG}$.
Figure 10 depicts the sampling approach of OCLBP when the central pixel is sampled in the R channel. From this figure, we can notice that two combinations are possible: OCLBP$_{RG}$ (left) and OCLBP$_{RB}$ (right). In OCLBP$_{RG}$, the gray circle in the red channel is the central point, while the green circles in the green channel correspond to ‘0’ sampling points and the white circles correspond to ‘1’ sampling points. Similarly, in OCLBP$_{RB}$, the blue circles correspond to ‘0’ sampling points and the white circles correspond to ‘1’ sampling points.
After computing the OCLBP descriptor for all pixels, a total of six texture channels are generated. As depicted in Figure 11, three LBP intra-channels (LBP$_R$, LBP$_G$, and LBP$_B$) and three LBP inter-channels (OCLBP$_{RG}$, OCLBP$_{RB}$, and OCLBP$_{GB}$) are generated. Although all possible combinations of the opposite color channels allow six distinct inter-channels, we observed that the symmetric opposing pairs are very redundant (e.g., OCLBP$_{RG}$ is equivalent to OCLBP$_{GR}$). Due to this redundancy, only the three most descriptive inter-channels are used.
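An inter-channel label can be computed as in the sketch below (illustrative Python, R = 1 and P = 8); applying it to the channel pairs (R,G), (R,B), and (G,B), together with the three intra-channel LBPs, yields the six texture channels described above.

```python
def oclbp_inter_channel_label(center_ch, neigh_ch, x, y):
    """Inter-channel OCLBP label: the central pixel comes from one color
    channel and the neighborhood is sampled in another."""
    offsets = [(1, 0), (1, -1), (0, -1), (-1, -1),
               (-1, 0), (-1, 1), (0, 1), (1, 1)]
    Ic = int(center_ch[y, x])
    label = 0
    for p, (dx, dy) in enumerate(offsets):
        if int(neigh_ch[y + dy, x + dx]) - Ic >= 0:
            label |= 1 << p
    return label

# Example for OCLBP_RG at pixel (x, y) of an RGB image `rgb`:
# label = oclbp_inter_channel_label(rgb[..., 0], rgb[..., 1], x, y)
```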

2.9. Three-Patch Local Binary Patterns (TPLBP)

Wolf et al. [87] proposed a family of LBP-related descriptors designed to encode additional types of local texture information. While variants of LBP descriptor use short binary strings to encode information about local micro-texture pixel-by-pixel, the authors considered capturing information which is complementary to that computed pixel-by-pixel. These patch-based descriptors are named Three-Patch LBP (TPLBP) and Four-Patch-LBP (FPLBP).
TPLBP considers a $w \times w$ patch centered on a pixel and $S$ additional patches distributed uniformly on a ring of radius $r$ around it, as illustrated in Figure 12. For an angle $\alpha$, we get a set of neighboring patches along a circle and compare their values with those of the central patch. More specifically, the TPLBP is given by:
$$TPLBP_{r,S,w,\alpha}(p) = \sum_{i=0}^{S} f\!\left( d(C_i, C_p) - d(C_{(i+\alpha) \bmod S}, C_p) \right) \cdot 2^i,$$
where
$$f(t) = \begin{cases} 1, & \text{if } t \geq \tau, \\ 0, & \text{otherwise.} \end{cases}$$
The function d ( x , y ) is any distance function between two patches under a vector representation. Examples of d ( x , y ) are Manhattan [88], Mahalanobis [89], Minkowski [90], etc. The parameter τ is slightly larger than zero to provide some stability in uniform regions.

2.10. Four-Patch Local Binary Patterns (FPLBP)

In the FPLBP, two rings centered on the pixel are used, instead of only one ring as in the TPLBP. As depicted in Figure 13, two rings of radii $r_1$ and $r_2$ (centered on the central pixel) are considered, with $S$ patches of size $w \times w$ equally distributed on each ring, positioned $\alpha$ patches away along the circle. We compare the two center-symmetric patches in the inner ring with the two center-symmetric patches in the outer ring. The bit in each coded pixel is set according to which of the two pairs is being compared. Therefore, the FPLBP code is computed as follows:
$$FPLBP_{r,S,w,\alpha}(p) = \sum_{i=0}^{S/2} f\!\left( d\!\left(C_{1,i},\, C_{2,(i+\alpha) \bmod S}\right) - d\!\left(C_{1,i+S/2},\, C_{2,(i+S/2+\alpha) \bmod S}\right) \right) \cdot 2^i.$$

2.11. Multiscale Local Binary Patterns (MLBP)

The Multiscale local binary pattern (MLBP) is an extension of the LBP, designed with the goal of extracting image quality information [91]. A block diagram of the MLBP descriptor is depicted in Figure 14 and it is computed as follows. First, we generate several LBP channels, by varying the parameters R and P and performing a symmetrical sampling. For the smallest possible radius, R = 1, there are two possible P values that produce rotational symmetrical sampling (P = 4 and P = 8). When R = 2, there are three possible P values (P = 4, P = 8, and P = 16). In general, for a given radius R, there is a total of R + 1 distinct LBP channels.
Figure 14a depicts the feature extraction for R = 1. The unitary radius generates only two distinct symmetrical patterns (P = 4 and P = 8). Each pattern generates a distinct LBP channel (see Figure 4). For a radius $R$, the LBP maps are generated and combined:
$$\mathcal{L}_R = \left\{ LBP^u_{R,4},\, LBP^u_{R,8},\, LBP^u_{R,16},\, \ldots,\, LBP^u_{R,8R} \right\},$$
where $LBP^u_{R,P}$ is computed according to Equation (6) and $\mathcal{L}_R$ contains $R + 1$ elements. From these LBP channels, the texture features are obtained by computing the histogram of each member of $\mathcal{L}_R$:
$$H_{R,P} = \left[ h_{R,P}(l_1),\, h_{R,P}(l_2),\, \ldots,\, h_{R,P}(l_{P+2}) \right],$$
where
$$h_{R,P}(l_i) = \sum_{x,y} \delta\!\left( LBP^u_{R,P}(x, y),\, i \right),$$
and
$$\delta(s, t) = \begin{cases} 1, & s = t, \\ 0, & \text{otherwise.} \end{cases}$$
In the above equations, $(x, y)$ indicates the position of a given point of $LBP^u_{R,P}$ and $l_i$ is the $i$-th LBP label. Notice that we are using ‘uniform’ LBP descriptors (Equation (6)) since their histograms provide a better discrimination of the texture properties.
To obtain the feature vector, we vary the radius and compute all possible symmetric LBP patterns and their histograms, as illustrated in Figure 14b. For a radius $R$, we generate a vector of histograms by concatenating all individual LBP histograms:
$$H_R = H_{R,4} \oplus H_{R,8} \oplus H_{R,16} \oplus \cdots \oplus H_{R,8R},$$
where $\oplus$ denotes the concatenation operator.
The steps for computing the multiscale LBP histogram are summarized in Figure 15. For R = N, the final feature vector is generated by concatenating the histograms of the LBP channels with radius values smaller than or equal to N:
$$\mathbf{x} = \mathbf{x}_N = H_1 \oplus H_2 \oplus H_3 \oplus \cdots \oplus H_N,$$
where $N$ is the maximum radius value and $\mathbf{x}_N$ is the resulting feature vector.
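The multiscale concatenation can be sketched as follows (illustrative Python; `uniform_lbp_channel(img, R, P)` is a placeholder for a routine that returns the uniform-LBP channel with labels in [0, P+1], and the P schedule 4, 8, 16, ..., 8R follows the description above):

```python
import numpy as np

def mlbp_feature(img, max_radius, uniform_lbp_channel):
    """Concatenated multiscale LBP histograms x_N (Equations (35)-(40))."""
    feature = []
    for R in range(1, max_radius + 1):
        # R + 1 symmetric samplings per radius: P = 4, 8, 16, ..., 8R.
        for P in [4] + [8 * k for k in range(1, R + 1)]:
            channel = uniform_lbp_channel(img, R, P)
            hist, _ = np.histogram(channel, bins=P + 2, range=(0, P + 2))
            feature.append(hist / max(hist.sum(), 1))   # normalized H_{R,P}
    return np.concatenate(feature)
```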

2.12. Multiscale Local Ternary Patterns (MLTP)

In general, the LTP threshold $\tau$ is adjusted for the target application. Anthimopoulos et al. [92] demonstrated that the $\tau$ values are related to the image gradient. The choice of $\tau$ may affect the discrimination between edge and non-edge pixels, which is an important step in texture analysis. In [93], we proposed an optimal set of thresholds to be used in the multilevel edge description operation, which makes it possible to cluster the gradient PDFs. The procedure is described as follows. First, the image gradients are fit using an exponential distribution:
$$PDF_e(z) = \lambda e^{-\lambda z},$$
where $\lambda$ is the rate parameter of the distribution. Then, the average value of the image gradient, $\lambda^{-1}$, is computed. The inverse cumulative distribution function of $PDF_e$ is then obtained using the following equation:
$$F_e(\Delta_i) = -\lambda^{-1} \ln(1 - \Delta_i),$$
where:
$$\Delta_i = \frac{i}{L+1}, \quad \Delta_i \in [0, 1),$$
$i \in \{1, 2, \ldots, L\}$, and $L$ is the number of levels. To select the thresholds, we take
$$\tau_i = F_e(\Delta_i)$$
for equally spaced values of $\Delta_i$.
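The threshold selection can be sketched as follows (illustrative Python using simple finite-difference gradients):

```python
import numpy as np

def mltp_thresholds(img, L=4):
    """Thresholds tau_i from the exponential fit of the gradient magnitudes
    (Equations (42)-(44))."""
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)
    lam_inv = grad.mean()                      # 1/lambda: mean image gradient
    deltas = np.arange(1, L + 1) / (L + 1.0)   # Delta_i = i / (L + 1)
    return -lam_inv * np.log(1.0 - deltas)     # tau_i = F_e(Delta_i)
```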
The feature extraction process is illustrated in Figure 16. We decompose the image into LTP channels. These channels are generated by varying the $\tau$ values according to Equations (42)–(44). Since for a single image the LTP descriptor produces two channels, for $L$ values of $\tau_i$, $2L$ LTP channels are produced. For example, in Figure 17, we use $L = 4$, generating eight distinct LTP channels. In the proposed LTP approach, instead of computing the differences between the central pixel and its neighbors on the grayscale image, we take the maximum difference over the R, G, and B channels.
After the aforementioned steps are completed, we obtain a set of LTP channels with $2 \times L$ elements: $\{C_1^{up}, C_1^{lo}, C_2^{up}, C_2^{lo}, \ldots, C_L^{up}, C_L^{lo}\}$. In this set, the subscript index corresponds to the $i$-th $\tau$ value, while the superscript index indicates whether the element is an upper (up) or lower (lo) pattern. For each LTP channel $C_i^j$, where $j \in \{up, lo\}$, we compute the corresponding LTP histogram $H_i^j$. These histograms are used to build the feature vector. If we simply concatenate these histograms, we generate a feature vector of dimension $2^P \times 2 \times L$. Depending on the $L$ and $P$ parameters, the number of features can be very high, which has a direct impact on the performance of the proposed algorithm.
In order to limit the number of dimensions, the number of bins of the LTP histograms is reduced according to the following formula:
$$k_i^j = \left[ \frac{\max H_i^j - \min H_i^j}{n} \right],$$
where $[\cdot]$ is the operation of rounding to the nearest integer, $n$ defines the number of equal-width bins in the given range, and $k_i^j$ is the reduced number of bins of the histogram $H_i^j$. After this quantization, we obtain a set of quantized histograms $\{h_1^{up}, h_1^{lo}, h_2^{up}, h_2^{lo}, \ldots, h_L^{up}, h_L^{lo}\}$. This new set is used to generate the feature vector associated with the image $I$. More specifically, the feature vector $\breve{\mathbf{x}}$ is generated by concatenating the quantized histograms $h_i^j$, i.e.,
$$\breve{\mathbf{x}} = h_1^{up} \oplus h_1^{lo} \oplus h_2^{up} \oplus h_2^{lo} \oplus \cdots \oplus h_L^{up} \oplus h_L^{lo},$$
where $\oplus$ is the concatenation operator and $\breve{\mathbf{x}}$ is the resulting feature vector.

2.13. Local Variance Patterns (LVP)

The local variance pattern (LVP) is an extension of the LBP descriptor proposed in this work. This descriptor was developed specifically for quality assessment tasks. The LVP descriptor computes the local texture energy using the following formula:
$$LVP^u_{R,P}(I_c) = \frac{P \cdot V_{R,P}(I_c) - \left( LBP_{R,P}(I_c) \right)^2}{P^2},$$
where:
$$V_{R,P}(I_c) = \sum_{p=0}^{P-1} \left( S(I_p - I_c) \cdot 2^p \right)^2.$$
The LVP descriptor estimates the spread of the local texture energy. By measuring the texture energy, the LVP descriptor is able to estimate the effect that specific impairments have on the texture. For example, a Gaussian blurring impairment decreases the local texture energy, while a noise impairment increases it. Figure 18 shows a comparison of the steps used to extract texture information using the LBP and LVP descriptors, assuming that R = 1 and P = 8. The numbers in the yellow squares represent the order in which the steps are computed. The LBP descriptor generates two possible values (see Equation (2)), represented by the colors white ($S(t) = 1$) and black ($S(t) = 0$). Next, we use Equation (6) to compute the LBP label and Equation (47) to compute the LVP label.
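The label computation can be sketched as follows (illustrative Python, R = 1 and P = 8):

```python
def lvp_label(img, x, y):
    """LBP and LVP labels of one pixel (Equations (47)-(48))."""
    offsets = [(1, 0), (1, -1), (0, -1), (-1, -1),
               (-1, 0), (-1, 1), (0, 1), (1, 1)]
    P = len(offsets)
    Ic = float(img[y, x])
    terms = []
    for p, (dx, dy) in enumerate(offsets):
        s = 1.0 if float(img[y + dy, x + dx]) - Ic >= 0 else 0.0
        terms.append(s * 2 ** p)               # S(I_p - I_c) * 2^p
    lbp = sum(terms)                           # LBP_{R,P}(I_c)
    v = sum(t ** 2 for t in terms)             # V_{R,P}(I_c), Equation (48)
    lvp = (P * v - lbp ** 2) / P ** 2          # spread of the local texture energy
    return lbp, lvp
```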
After computing the LBP and LVP labels for all pixels of a given image, we obtain two channels for each image. These channels, $C_{LBP}$ and $C_{LVP}$, correspond to the LBP and LVP patterns, respectively. Examples of these channels are shown in Figure 19. The first row of this figure shows the unimpaired reference image and three impaired images, degraded with different types of distortions. The second and third rows show the $C_{LBP}$ and $C_{LVP}$ channels for each image, respectively. Observing the $C_{LBP}$ and $C_{LVP}$ patterns in Figure 19, we notice that textures are affected differently by the different impairments. Comparing the $C_{LBP}$ channels corresponding to the noisy, blurry, and JPEG2k-compressed images (second row of Figure 19), we can notice that they are very different among themselves. The $C_{LBP}$ channels corresponding to the blurry and JPEG2k images are also very different from the $C_{LBP}$ channel corresponding to the reference (unimpaired) image. Nevertheless, the $C_{LBP}$ channels corresponding to the noisy and reference images are visually similar. This similarity makes it difficult to discriminate between unimpaired and impaired images, which affects the quality prediction. The $C_{LVP}$ channels, on the other hand, clearly show the differences between impaired and reference images, as can be seen in the third row of Figure 19.

2.14. Orthogonal Color Planes Patterns (OCPP)

The orthogonal color planes pattern (OCPP) descriptor extends the LBP to make it more sensitive to color and contrast distortions. Consider a pixel $\tau_c = \mathbf{I}(x, y, z)$ of a tri-dimensional (XYZ) color image $\mathbf{I}$. This image can be decomposed into a set of individual XY planes stacked along the Z-axis, a set of YZ planes stacked along the X-axis, or a set of XZ planes stacked along the Y-axis. In this work, we concatenate the LBP descriptors corresponding to the XY, XZ, and YZ planes to build the orthogonal color planes pattern (OCPP) texture descriptor.
As can be noticed from the aforementioned formulation, the LBP descriptors corresponding to the XY, XZ, and YZ planes can be computed independently to generate three LBP maps: LBP$_{XY}$, LBP$_{XZ}$, and LBP$_{YZ}$. However, since the spatial dimensions of the XY, XZ, and YZ planes are generally different, the radii ($R_X$, $R_Y$, and $R_Z$) and the numbers of sampled points ($P_{XY}$, $P_{XZ}$, and $P_{YZ}$) corresponding to each of the LBP maps can vary. Figure 20a illustrates how the points along the tri-dimensional HSV color space are sampled, while Figure 20b–d illustrate how each of the XY, XZ, and YZ planes is sampled.
Considering $R_Z = 1$ and $R_X = R_Y = R$, the coordinates of the neighboring points in the XY, XZ, and YZ orthogonal planes are given by:
$$x_{XY} = x + R \cos\!\left(\frac{2\pi p_{XY}}{P_{XY}}\right), \quad y_{XY} = y - R \sin\!\left(\frac{2\pi p_{XY}}{P_{XY}}\right),$$
$$x_{XZ} = x + R \cos\!\left(\frac{2\pi p_{XZ}}{P_{XZ}}\right), \quad z_{XZ} = z - \sin\!\left(\frac{2\pi p_{XZ}}{P_{XZ}}\right),$$
and
$$y_{YZ} = y + R \cos\!\left(\frac{2\pi p_{YZ}}{P_{YZ}}\right), \quad z_{YZ} = z - \sin\!\left(\frac{2\pi p_{YZ}}{P_{YZ}}\right).$$
We compute the LBP for each plane using the following equations:
$$L_{XY} = LBP_{R,P_{XY}}(\tau_c) = \sum_{p_{XY}=0}^{P_{XY}-1} S(\tau_c - \tau_{XY})\, 2^{p_{XY}},$$
$$L_{XZ} = LBP_{R,P_{XZ}}(\tau_c) = \sum_{p_{XZ}=0}^{P_{XZ}-1} S(\tau_c - \tau_{XZ})\, 2^{p_{XZ}},$$
and
$$L_{YZ} = LBP_{R,P_{YZ}}(\tau_c) = \sum_{p_{YZ}=0}^{P_{YZ}-1} S(\tau_c - \tau_{YZ})\, 2^{p_{YZ}}.$$
The OCPP descriptor is built by concatenating these individual LBP descriptors:
$$OCPP_{R,P}(\tau_c) = \left[ L_{XY},\, L_{XZ},\, L_{YZ} \right]^T.$$

2.15. Salient Local Binary Patterns (SLBP)

The salient local binary pattern (SLBP) is an extension of the LBP designed to be used in image quality assessment methods. The descriptor incorporates visual saliency information, given that recent results show that visual attention models improve the performance of visual quality assessment methods [52,58].
To estimate the saliency of the different areas of an image I, we use a computational visual attention model. More specifically, to keep the computational complexity low, we chose the Boolean map-based saliency (BMS) model [94]. When compared with other state-of-the-art visual attention models, BMS is noticeably faster, while still providing a good performance.
After computing the LBP descriptor of all pixels of an image $I$, we obtain an LBP map $\mathbf{L}$, where each $\mathbf{L}[x, y]$ gives the local texture associated with the pixel $I[x, y]$. Similarly, the output of BMS is a saliency map $\mathbf{W}$, where each element $\mathbf{W}[x, y]$ corresponds to the probability that the pixel $I[x, y]$ attracts the attention of a human observer. The first, second, and third columns of Figure 21 depict a set of original images $I$, their corresponding LBP maps $\mathbf{L}$, and their corresponding saliency maps $\mathbf{W}$, respectively.
We generate the feature vector by computing the histogram of $\mathbf{L}$ weighted by $\mathbf{W}$. The histogram $H = \{h[0], h[1], \ldots, h[P+1]\}$ is given by the following expression:
$$h[\phi] = \sum_{i} \sum_{j} \mathbf{W}[i, j]\, \Delta\!\left(\mathbf{L}[i, j], \phi\right),$$
where
$$\Delta(v, u) = \begin{cases} 1, & v = u, \\ 0, & \text{otherwise.} \end{cases}$$
The number of bins of this histogram is equal to the number of distinct LBP patterns of $\mathbf{L}$. Thus, we can remap each $\mathbf{L}[i, j]$ to its weighted form, generating the map $\mathbf{S}$ displayed in Figure 21d. This figure depicts a heatmap representing the importance of each local texture. We name this weighted LBP map the “salient local binary patterns” (SLBP) map.
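The saliency-weighted histogram of Equations (56)–(57) can be sketched as follows (illustrative Python; any non-negative weight map with the same shape as the LBP map can stand in for the BMS saliency map):

```python
import numpy as np

def slbp_histogram(lbp_map, saliency_map, n_labels):
    """Histogram of LBP labels, each occurrence weighted by its pixel saliency."""
    hist = np.zeros(n_labels)
    for phi in range(n_labels):
        hist[phi] = saliency_map[lbp_map == phi].sum()
    return hist / max(hist.sum(), 1e-12)       # normalized feature vector
```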

2.16. Multiscale Salient Local Binary Patterns (MSLBP)

The multiscale salient local binary patterns (MSLBP) descriptor is an extension of the SLBP in combination with the MLBP. The idea behind MSLBP is to obtain finer information about the image texture by varying the parameters of the LBP and combining the multiple generated LBP maps with saliency maps. In other words, we vary the SLBP parameters to obtain multiple maps, as illustrated in Figure 22. For each combination of radius ($R$) and sampled points ($P$), we have an associated histogram $H_{R,P}$.

3. No-Reference Image Quality Assessment Using Texture Descriptors

In the previous section, we presented a series of texture descriptors. Most of them were designed for pattern recognition and computer vision applications. We also presented a set of proposed descriptors (MLBP, MLTP, LVP, OCPP, and SLBP), which were specially designed for visual quality assessment. Our goal is to investigate which descriptors are more suitable for no-reference (blind) image quality assessment (NR-IQA) methods. Moreover, we are interested in the relation between the type of descriptor and the performance accuracy of the IQA method.

3.1. Training and Testing Stages

Figure 23 depicts the training stage of the set of IQA methods proposed in this work. First, we collect subjective scores corresponding to each image of a training set. This procedure generates a set of labeled images, where each training set entry is composed of an image and its associated MOS (mean opinion score). In other words, for the $k$-th unlabeled image $I_k$, the algorithm associates a real value $v_k$, which corresponds to the overall quality of $I_k$.
After generating the labeled database formed by the set of pairs ($I_k$, $v_k$), the features are extracted in order to generate the IQA model. For each image $I_k$, we compute the histogram of the given LBP variant, $H_k$, and concatenate all histograms to produce the feature vector. Therefore, the training data is composed of the set ($H_k$, $v_k$). The model is created using ($H_k$, $v_k$), which is formed by a matrix $H \in \mathbb{R}^{K \times Q}$ and a vector $v \in \mathbb{R}^{1 \times K}$. In this case, $K$ is the number of training entries (rows of $H$) and $Q$ is the number of features (columns of $H$, i.e., the number of bins of $H_k$).
The prediction model is built using a regression model, which maps each $H_k$ into a real value $v_k$ that predicts the corresponding quality score. The chosen regression model is the random forest (RF) regressor [61]. RF was chosen based on the results of Fernandez-Delgado et al. [95], who conducted an exhaustive evaluation of several machine learning methods and concluded that the best results are achieved by the family of RF methods.
The quality assessment task is depicted in Figure 24. After generating the prediction model, the image quality can be estimated using the model trained in the previous stage. The procedure is the same used for the images in the training set. In other words, the same feature (LBP histogram) is computed using the test image as input and, using this feature, the trained model predicts the quality score.
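A minimal sketch of the training and prediction stages is shown below (illustrative Python using scikit-learn; `H` is the K × Q matrix of concatenated texture histograms and `v` the vector of MOS values, both assumed to be available):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_quality_model(H, v):
    """Fit the random forest regressor that maps texture histograms to MOS."""
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(H, v)
    return model

def predict_quality(model, histogram):
    """Predict the quality score of a test image from its texture histogram."""
    return float(model.predict(np.asarray(histogram).reshape(1, -1))[0])
```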

3.2. Test Setup

Results were generated using an Intel i7-4790 processor at 3.60 GHz. To assess the performance of the proposed NR-IQA method, we compute the Spearman rank-ordered correlation coefficient (SROCC) between the mean opinion scores (MOS) and the predicted scores. Although other correlation coefficients (such as the KRCC and PCC) could be added to the analysis, we decided to report the results using only the SROCC to prevent this article from becoming too lengthy. The proposed method is compared with the fastest state-of-the-art NR-IQA methods, including BRISQUE [42], CORNIA [96], CQA [97], SSEQ [98], and LTP [93]. These methods were chosen because they are all based on machine learning techniques, making the comparison with the proposed method straightforward. Moreover, the codes of these methods are publicly available for download.
For all machine learning NR-IQA methods, we use the same procedure for training and testing. In order to avoid overlapping between content detection and quality prediction, we divide the benchmark databases into content-independent training and testing subsets. Specifically, image content in the training subset was not used in the testing subset, and vice-versa. This division is made in a way that 80% of images are used for training and 20% are used for testing. This split is a common procedure used by several ML-based NR-IQA methods [42,96,97]. For the machine learning NR-IQA methods that are based on SVR, we use a LibSVR implementation accessed via Python interface and provided by the Sklearn library [99]. The optimal SVR meta parameters (C, γ , ν , etc.) are found using exhaustive grid search methods provided by Sklearn’s API. No optimized search methods are used for the RF version of the proposed method.
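The content-independent 80/20 split and the SROCC evaluation can be sketched as follows (illustrative Python; `content_ids` identifies the reference image from which each distorted image was generated, and `model_factory` returns a fresh regressor such as the RF sketch above):

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.model_selection import GroupShuffleSplit

def content_independent_srocc(H, mos, content_ids, model_factory, seed=0):
    """One 80/20 content-independent split followed by SROCC evaluation."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=seed)
    train_idx, test_idx = next(splitter.split(H, mos, groups=content_ids))
    model = model_factory()
    model.fit(H[train_idx], mos[train_idx])              # training subset (80%)
    predicted = model.predict(H[test_idx])               # testing subset (20%)
    return spearmanr(mos[test_idx], predicted).correlation
```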
The tests were performed using three image quality databases, which include subjective scores collected from psychophysical experiments. These databases are:
  • LIVE2 [100] database has 982 test images, including 29 originals. This database includes 5 categories of distortions: JPEG, JPEG 2000 (JPEG2k), white noise (WN), Gaussian blur (GB), fast fading (FF).
  • CSIQ [101] database has a total of 866 test images, consisting of 30 originals and 6 different categories of distortions: JPEG, JPEG 2000 (JPEG2k), white noise (WN), Gaussian blur (GB), global contrast decrements (CD), and additive Gaussian pink noise (PN).
  • TID2013 [102] database contains 25 reference images with the following distortions: Additive Gaussian noise (AGN), Additive noise in color components (AGC), Spatially correlated noise (SCN), Masked noise (MN), High frequency noise (HFN), Impulse noise (IN), Quantization noise (QN), Gaussian blur (GB), Image denoising (ID), JPEG, JPEG2k, JPEG transmission errors (JPEGTE), JPEG2k transmission errors (JPEG2kTE), Non eccentricity pattern noise (NEPN), Local block-wise distortions (LBD), Intensity shift (IS), Contrast change (CC), Change of color saturation (CCS), Multiplicative Gaussian noise (MGN), Comfort noise (CN), Lossy compression (LC), Image color quantization with dither (ICQ), Chromatic aberration (CA), and Sparse sampling and reconstruction (SSR).

3.3. Results for Basic Descriptor with Varying Parameters

In order to test the LBP and its variants, we vary some parameters of each algorithm. Specifically, we vary the parameters of LBP, BSIF, CLBP, and LPQ. For the other tested variants, we choose the parameters R = 1 and P = 8. Table 1 depicts the parameters used by the tested algorithms.
To investigate the suitability of the basic LBP descriptor, we vary the parameters R and P using the rotation invariant LBP (LBP$^{ri}$), the uniform LBP (LBP$^{u}$), and the uniform LBP with rotation invariance (LBP$^{riu2}$), which are described in Section 2.1. Figure 25 depicts the distribution of the SROCC over the simulations for the general case (i.e., when all distortions are considered). Table 2 shows the average SROCC values for 100 simulations following the aforementioned protocol. In this table, STD represents the standard deviation and $\Delta$ is the difference between the maximum and minimum values in a given row or column.
From Table 2, we can notice that the basic LBP descriptor is suitable for predicting quality. This suitability is indicated by the high correlation indices obtained on the LIVE2 database. On this database, the average SROCC varies from 0.8034 to 0.9532 in the general case, from 0.6459 to 0.9054 for the FF distortion, from 0.8771 to 0.9666 for the GB distortion, from 0.9285 to 0.9794 for the WN distortion, from 0.7812 to 0.9423 for the JPEG2k distortion, and from 0.7716 to 0.9306 for the JPEG distortion. These values suggest that the basic LBP variations are well suited to modeling the quality of images under WN and GB distortions. Regarding the LBP parameters, the prediction performance for WN and GB is less affected by these parameters than for the other distortions (see the variance and $\Delta$ values).
Although the basic LBP works well for WN and GB distortions independently of its parameters, the performance for other distortions varies according to the parameters. This variation is also observed in the CSIQ and TID2013 databases. For example, on the CSIQ database, the SROCC values vary from 0.8073 to 0.8912 in the best case (JPEG) and from 0.2093 to 0.5901 in the worst case (CD). These values indicate that the prediction performance is affected by the basic LBP parameters. In fact, this is the premise used by Freitas et al. [91], who assume that different LBP parameters can be used to achieve a better performance. In their work, an aggregation of features obtained with different LBP parameters results in a more robust quality assessment model.

3.4. Results for Variants of Basic Descriptors

Having demonstrated that the basic LBP variants are suitable descriptors for image quality, we now check the performance of the other LBP variants described in Section 2. To perform the tests, we vary the parameters of the BSIF, LPQ, and CLBP descriptors. For the remaining extensions (i.e., LCP, LTP, RLBP, TPLBP, FPLBP, LVP, OCLBP, OCPP, SLBP, MLBP, MLTP, and MSLBP), we do not vary the parameters. Figure 26 depicts the distribution of the SROCC for the general case using the tested LBP variants (100 simulations).
To investigate the suitability of the BSIF descriptor, we performed simulations varying the patch size and the number of selected binarized features (see Section 2.4). The results of these simulations on the LIVE2, CSIQ, and TID2013 databases are depicted in Table 3. Based on the results of Table 3, we notice that BSIF is a valuable descriptor for IQA. On the LIVE2 database, BSIF performs well for almost all configurations. However, the results are better for smaller patch sizes. In these cases, the average SROCC values are higher and have a low variance. As shown in Table 3, the performance of BSIF decreases for the CSIQ database. When compared with the LIVE2 database, the average SROCC values are lower and the variance is higher. The values in Table 3 indicate that there is a relationship between the patch size and the number of bits. More specifically, the larger the patch size, the higher the number of bits required to obtain a good quality prediction. For example, in both the LIVE2 and CSIQ databases, using a 3 × 3 patch, the best performance is obtained using 8 bits and the worst performance is obtained when only 5 bits are used.
Table 4 shows the results of simulations using seven different LPQ configurations, corresponding to different LPQ parameters. The main parameters of the LPQ descriptor are the size of the local window and the method used for local frequency estimation. The size of the local window was fixed on 3 × 3 and the tests were performed by varying the method used for local frequency estimation. The LPQ configurations are the following:
  • C1: Short-term Fourier transform (STFT) with uniform window (basic version of LPQ);
  • C2: STFT with Gaussian window;
  • C3: Gaussian derivative quadrature filter pair;
  • C4: STFT with uniform window + STFT with Gaussian window (concatenation of feature vectors produced by C1 and C2);
  • C5: STFT with uniform window + STFT with Gaussian derivative quadrature filter pair (concatenation of feature vectors produced by C1 and C3);
  • C6: STFT with Gaussian window + Gaussian derivative quadrature filter pair (concatenation of feature vectors produced by C2 and C3);
  • C7: Concatenation of feature vectors produced by C1, C2, and C3.
Table 4 shows that the performance of LPQ is high for the LIVE2 database, with mean SROCC values above 0.9 for all distortions, independently of the configuration. The low variance and the high average value of the SROCC values for the LIVE2 indicate that LPQ is a suitable descriptor for measuring the quality of JPEG, JPEG2k, WN, GB, and FF distortions. However, the performance of the prediction decreases for the CSIQ and TID2013 databases. This is probably due to the presence of the contrast and color distortions on the CSIQ and TID2013 databases.
Table 5 shows the average SROCC of the simulations using CLBP as the texture descriptor. For this descriptor, we tested the influence of each combination of feature sets (see CLBP$_S$, CLBP$_M$, and CLBP$_C$ in Figure 9) on the image quality prediction. From Table 5, we can notice that the feature sets CLBP$_M$ and CLBP$_C$ are individually unsatisfactory for measuring image quality, as indicated by the low SROCC scores obtained on the three tested databases. On the other hand, CLBP$_S$ is the dominant feature set for quality description, since it presents the highest SROCC values in almost all cases.
Interestingly, the combination of CLBP feature sets produces a better performance, as indicated by the results for CLBP-SM (CLBP-S + CLBP-M) and CLBP-SMC (CLBP-S + CLBP-M + CLBP-C). From Table 5, we can observe that, for the LIVE2 database, the mean SROCC value of the overall case increases from 0.91 (CLBP-S) to 0.93 (CLBP-MC and CLBP-SMC). The combination of feature sets also improves the average SROCC values for the TID2013 database, increasing from 0.35 (CLBP-S) to 0.44 (CLBP-MC and CLBP-SMC). The average values for the CSIQ database show that the best performance is obtained using CLBP-MC. Based on these SROCC values, we can conclude that CLBP-MC is the best combination of features for assessing image quality, since the incorporation of CLBP-C does not improve, and may even deteriorate, the general prediction performance.
Table 6 depicts the mean SROCC values of simulations using other LBP variants. From this table, we can notice that almost all variants present an acceptable performance for the LIVE2 database. The exceptions are TPLBP and FPLBP, which present mean SROCC values below 0.65, poorer than the other methods. Based on the mean SROCC values for LIVE2, the methods LTP, RLBP, LCP, LVP, MLTP, SLBP, OCLBP, MLBP, MSLBP, and OCPP are listed in ascending order of performance. For the CSIQ and TID2013 databases, the methods perform similarly, although RLBP performs worse than LTP on CSIQ.
It is noticeable that the multiscale approaches (MLBP, MLTP, and MSLBP) are among the best-performing descriptors. For the three tested databases, the results agree with the assumptions made by Freitas et al. [91], who demonstrated that combining multiple LBP descriptor parameters increases the prediction performance. Nevertheless, the OCPP descriptor presents the best performance among all tested descriptors, including the multiscale approaches. Although OCPP and MSLBP perform similarly on the LIVE2 database, MSLBP does not match OCPP on the other databases: MSLBP presents an average SROCC value of 0.8147 for the CSIQ database, whereas OCPP presents an average SROCC value of 0.9140 for the same database. Similarly, for the TID2013 database, the average SROCC values obtained with MSLBP and OCPP are 0.5919 and 0.7035, respectively.
When we examine the per-distortion results for the CSIQ database, we can notice that the superiority of OCPP is due to its good performance on the contrast distortions. While the quality prediction of contrast-distorted images has a mean SROCC value of 0.5299 when using MSLBP, the mean SROCC value for the same images is 0.7753 when using OCPP. Similarly, for the TID2013 database, OCPP presents a superior performance for several types of distortions, especially the color- and contrast-related distortions (AGC, AGN, CA, CC, CCS, etc.).

3.5. Comparison with Other IQA Methods

Figure 27 depicts the SROCC box plots for different no-reference IQA methods. Moreover, Table 7 depicts the results of six IQA methods, including two established full-reference metrics (PSNR and SSIM) and four state-of-the-art no-reference metrics (BRISQUE, CORNIA, CQA, and SSEQ). From this table, we can notice that CORNIA and SSEQ present the best performance on the LIVE2 database, even when compared with full-reference approaches such as PSNR and SSIM. On the LIVE2 database, the average SROCC values of CORNIA and SSEQ are around 0.92, similar to the scores of some LBP-based descriptors, such as CLBP-SM and BSIF. However, several LBP-based descriptors (LPQ, MLBP, MSLBP, and OCPP) present a notable performance, surpassing the state-of-the-art methods and achieving average SROCC values above 0.94 for the LIVE2 database.
By comparing Table 7 with Table 4, Table 5 and Table 6, we can notice that LBP-based NR-IQA approaches also present a better performance for the CSIQ and TID2013 databases. For the CSIQ database, we can observe that, on average, the best state-of-the-art NR-IQA method is BRISQUE, followed by SSEQ and CORNIA, with average SROCC scores of 0.7406, 0.6979, and 0.6886, respectively. However, the LPQ, BSIF, LVP, OCLBP, OCPP, SLBP, MLBP, MLTP, and MSLBP descriptors present better results for the CSIQ database. Similarly, for the TID2013 database, the best state-of-the-art method is CORNIA, which presents an average SROCC of 0.5361. This value is outperformed by several LBP-based descriptors, such as LVP (0.5428), OCLBP (0.5902), OCPP (0.7035), MLBP (0.5284), MLTP (0.5652), MSLBP (0.5919), and LPQ (0.5518).

3.6. Prediction Performance on Cross-Database Validation

To investigate the generalization capability of the studied methods, we performed a cross-database validation. This validation consists of training the ML algorithm using all images of one database and testing it on the other databases. Table 8 depicts the SROCC values obtained using LIVE2 as the training database and TID2013 and CSIQ as the testing databases. To allow a straightforward cross-database comparison, only the subset of distortions shared by the databases is selected from each database.
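A minimal sketch of this protocol is given below, assuming that the descriptor histograms and subjective scores of each database have already been computed, and assuming a random forest regressor as the ML model; the regressor choice, array sizes, and variable names are placeholders rather than the exact setup used in the experiments.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor


def cross_database_srocc(feats_train, mos_train, feats_test, mos_test):
    """Fit the regressor on all images of the training database and return
    the SROCC between predicted and subjective scores on the testing database."""
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(feats_train, mos_train)
    rho, _ = spearmanr(model.predict(feats_test), mos_test)
    return rho


# Placeholder data standing in for descriptor histograms and MOS values of
# two databases (e.g., LIVE2 for training and CSIQ for testing).
rng = np.random.default_rng(0)
X_live2, y_live2 = rng.random((779, 256)), rng.random(779)
X_csiq, y_csiq = rng.random((866, 256)), rng.random(866)
print(cross_database_srocc(X_live2, y_live2, X_csiq, y_csiq))
```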
Based on the results in Table 8, we can notice that OCPP outperforms the other methods for almost all types of distortions. For TID2013, OCPP outperforms the other methods for 3 of the 5 distortions, while for CSIQ it outperforms the other methods for 4 of the 5 distortions. OCPP is followed by MSLBP, which achieves the best results in the cases where OCPP is not the best. The cross-database validation test indicates that, in general, texture descriptors have a better generalization capacity than the tested state-of-the-art methods.

3.7. Simulation Statistics

In order to investigate the stability of the mean SROCC over the simulations, we generated the box plots depicted in Figure 28. We chose BSIF, LCP, CLBP, and LPQ because these descriptors were among the best performing in the previous sections. Based on this figure, we can notice that the mean changes over the simulations. More specifically, the inter-quartile ranges increase with the number of simulations for BSIF and LPQ on the LIVE2 database, whereas LCP and CLBP do not show the same behavior. The patterns on CSIQ and TID2013 are more similar across descriptors. Further studies concerning the number of simulations required to obtain a stable distribution are suggested as future work.
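For reference, the sketch below illustrates how such a stability check can be set up: the random train/test split and model fitting are repeated, the SROCC of each run is stored, and the running mean and inter-quartile range are inspected after different numbers of simulations, as summarized by the box plots in Figure 28. The split ratio, regressor, per-image splitting, and reduced number of runs used here are assumptions made for brevity.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split


def srocc_runs(features, mos, n_runs=100, test_size=0.2):
    """Repeat random train/test splits and collect one SROCC value per run."""
    scores = []
    for run in range(n_runs):
        X_tr, X_te, y_tr, y_te = train_test_split(
            features, mos, test_size=test_size, random_state=run)
        model = RandomForestRegressor(n_estimators=50, random_state=run)
        model.fit(X_tr, y_tr)
        rho, _ = spearmanr(model.predict(X_te), y_te)
        scores.append(rho)
    return np.asarray(scores)


# Placeholder features and scores standing in for a real database.
rng = np.random.default_rng(0)
scores = srocc_runs(rng.random((200, 64)), rng.random(200), n_runs=100)
for n in (10, 50, 100):
    q1, q3 = np.percentile(scores[:n], [25, 75])
    print(f"after {n:3d} runs: mean={scores[:n].mean():.4f}, IQR={q3 - q1:.4f}")
```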

4. Conclusions

In this paper, we compared three basic LBP variants (LBP-ri, LBP-u, and LBP-riu2), each with eight different parameter combinations. This comparison was performed to verify whether LBP can be used as a feature descriptor in image quality assessment applications. Preliminary results show that, although LBP can be used for image quality assessment, the performance varies greatly with the type of distortion and the descriptor parameters. Based on these results, we investigated 14 other texture descriptors, which are variants of the basic LBP. When tested using the proposed framework, BSIF, LPQ, LVP, and CLBP present a good mean correlation value for the LIVE2 database, but their performance decreases for the CSIQ and TID2013 databases due to color and contrast distortions. The results show that multiscale approaches have a substantially better quality prediction performance. Among the tested multiscale approaches, the MSLBP descriptor, which incorporates visual saliency, has the best performance. While MSLBP has a performance similar to that of the OCPP descriptor for the LIVE2 database, OCPP presents the best performance for the remaining databases.

Author Contributions

P.G.F. wrote most of the text, produced the figures and analysis, and developed the methods described in this manuscript. L.P.d.E. and S.S.S. were responsible for performing the experiments. M.C.Q.d.F. is the principal investigator of this project and guided the research and helped to write and revise this manuscript.

Funding

No funding information.

Acknowledgments

This work is supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), and by the University of Brasília (UnB).

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Chen, Q.H.; Xie, X.F.; Cao, J.; Cui, X.C. Research of ROI image compression based on visual attention model. In Proceedings of the International Conference on Image Processing and Pattern Recognition in Industrial Engineering, Xi’an, China, 7–8 August 2010; Volume 7820, p. 78202W. [Google Scholar]
  2. Wang, Z.; Li, Q.; Shang, X. Perceptual image coding based on a maximum of minimal structural similarity criterion. In Proceedings of the IEEE International Conference on Image Processing, ICIP 2007, San Antonio, TX, USA, 16–19 September 2007; Volume 2, p. II-121. [Google Scholar]
  3. Chen, Z.; Guillemot, C. Perceptually-friendly H. 264/AVC video coding based on foveated just-noticeable-distortion model. IEEE Trans. Circuits Syst. Video Technol. 2010, 20, 806–819. [Google Scholar] [CrossRef]
  4. Ou, T.S.; Huang, Y.H.; Chen, H.H. SSIM-based perceptual rate control for video coding. IEEE Trans. Circuits Syst. Video Technol. 2011, 21, 682–691. [Google Scholar]
  5. Wang, Z.; Baroud, Y.; Najmabadi, S.M.; Simon, S. Low complexity perceptual image coding by just-noticeable difference model based adaptive downsampling. In Proceedings of the Picture Coding Symposium (PCS), Nuremberg, Germany, 4–7 December 2016; pp. 1–5. [Google Scholar]
  6. Wu, H.R.; Rao, K.R. Digital Video Image Quality and Perceptual Coding; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
  7. Zhang, F.; Liu, W.; Lin, W.; Ngan, K.N. Spread spectrum image watermarking based on perceptual quality metric. IEEE Trans. Image Process. 2011, 20, 3207–3218. [Google Scholar] [CrossRef] [PubMed]
  8. Urvoy, M.; Goudia, D.; Autrusseau, F. Perceptual DFT watermarking with improved detection and robustness to geometrical distortions. IEEE Trans. Inf. Forensics Secur. 2014, 9, 1108–1119. [Google Scholar] [CrossRef]
  9. Conviva. Viewer Experience Report. 2015. Available online: http://www.conviva.com/convivaviewer-experience-report/vxr-2015/ (accessed on 19 July 2017).
  10. Kupyn, O.; Budzan, V.; Mykhailych, M.; Mishkin, D.; Matas, J. DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks. arXiv, 2017; arXiv:1711.07064. [Google Scholar]
  11. Dodge, S.; Karam, L. Understanding how image quality affects deep neural networks. In Proceedings of the 2016 IEEE Eighth International Conference on Quality of Multimedia Experience (QoMEX), Lisbon, Portugal, 6–8 June 2016; pp. 1–6. [Google Scholar]
  12. Bhogal, A.P.S.; Söllinger, D.; Trung, P.; Hämmerle-Uhl, J.; Uhl, A. Non-reference Image Quality Assessment for Fingervein Presentation Attack Detection. In Proceedings of the Scandinavian Conference on Image Analysis, Tromsø, Norway, 12–14 June 2017; pp. 184–196. [Google Scholar]
  13. Söllinger, D.; Trung, P.; Uhl, A. Non-reference image quality assessment and natural scene statistics to counter biometric sensor spoofing. IET Biom. 2018, 7, 314–324. [Google Scholar] [CrossRef]
  14. Karahan, S.; Yildirum, M.K.; Kirtac, K.; Rende, F.S.; Butun, G.; Ekenel, H.K. How image degradations affect deep CNN-based face recognition? In Proceedings of the 2016 International Conference of the Biometrics Special Interest Group (BIOSIG), Darmstadt, Germany, 21–23 September 2016; pp. 1–5. [Google Scholar]
  15. Chernov, T.S.; Razumnuy, N.P.; Kozharinov, A.S.; Nikolaev, D.P.; Arlazarov, V.V. Image quality assessment for video stream recognition systems. In Proceedings of the Tenth International Conference on Machine Vision (ICMV 2017), Vienna, Austria, 13–15 November 2017; Volume 10696, p. 106961U. [Google Scholar]
  16. Jeelani, H.; Martin, J.; Vasquez, F.; Salerno, M.; Weller, D.S. Image quality affects deep learning reconstruction of MRI. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 357–360. [Google Scholar]
  17. Choi, J.H.; Cheon, M.; Lee, J.S. Influence of Video Quality on Multi-view Activity Recognition. In Proceedings of the 2017 IEEE International Symposium on Multimedia (ISM), Taichung, Taiwan, 11–13 December 2017; pp. 511–515. [Google Scholar]
  18. Redmon, J.; Divvala, S.K.; Girshick, R.B.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  19. Nah, S.; Kim, T.H.; Lee, K.M. Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, 21–26 July 2017; pp. 257–265. [Google Scholar]
  20. Seshadrinathan, K.; Bovik, A.C. Automatic prediction of perceptual quality of multimedia signals—A survey. Multimed. Tools Appl. 2011, 51, 163–186. [Google Scholar] [CrossRef]
  21. Ferzli, R.; Karam, L.J. A no-reference objective image sharpness metric based on the notion of just noticeable blur (JNB). IEEE Trans. Image Process. 2009, 18, 717–728. [Google Scholar] [CrossRef] [PubMed]
  22. Maheshwary, P.; Shirvaikar, M.; Grecos, C. Blind image sharpness metric based on edge and texture features. In Proceedings of the Real-Time Image and Video Processing 2018, Orlando, FL, USA, 15–19 April 2018; Volume 10670, p. 1067004. [Google Scholar]
  23. Li, L.; Xia, W.; Lin, W.; Fang, Y.; Wang, S. No-reference and robust image sharpness evaluation based on multiscale spatial and spectral features. IEEE Trans. Multimed. 2017, 19, 1030–1040. [Google Scholar] [CrossRef]
  24. Ong, E.; Lin, W.; Lu, Z.; Yao, S.; Yang, X.; Jiang, L. No-reference JPEG-2000 image quality metric. In Proceedings of the 2003 International Conference on Multimedia and Expo, ICME’03, Baltimore, MD, USA, 6–9 July 2003; Volume 1, pp. I–545. [Google Scholar]
  25. Barland, R.; Saadane, A. Reference free quality metric for JPEG-2000 compressed images. In Proceedings of the Eighth International Symposium on Signal Processing and Its Applications, Sydney, Australia, 28–31 August 2005; Volume 1, pp. 351–354. [Google Scholar]
  26. Li, L.; Zhou, Y.; Lin, W.; Wu, J.; Zhang, X.; Chen, B. No-reference quality assessment of deblocked images. Neurocomputing 2016, 177, 572–584. [Google Scholar] [CrossRef]
  27. Gu, K.; Lin, W.; Zhai, G.; Yang, X.; Zhang, W.; Chen, C.W. No-Reference Quality Metric of Contrast-Distorted Images Based on Information Maximization. IEEE Trans. Cybern. 2017, 47, 4559–4565. [Google Scholar] [CrossRef] [PubMed]
  28. Gu, K.; Zhai, G.; Yang, X.; Zhang, W.; Chen, C.W. Automatic contrast enhancement technology with saliency preservation. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 1480–1494. [Google Scholar]
  29. Chandler, D.M. Seven challenges in image quality assessment: Past, present, and future research. ISRN Signal Process. 2013, 2013, 905685. [Google Scholar] [CrossRef]
  30. Chandler, D.M.; Alam, M.M.; Phan, T.D. Seven challenges for image quality research. In Proceedings of the IS&T/SPIE Electronic Imaging, San Francisco, CA, USA, 2–6 February 2014; p. 901402. [Google Scholar]
  31. Hemami, S.S.; Reibman, A.R. No-reference image and video quality estimation: Applications and human-motivated design. Signal Process. Image Commun. 2010, 25, 469–481. [Google Scholar] [CrossRef]
  32. Cheng, G.; Huang, J.; Liu, Z.; Lizhi, C. Image quality assessment using natural image statistics in gradient domain. AEU Int. J. Electron. Commun. 2011, 65, 392–397. [Google Scholar] [CrossRef]
  33. Appina, B.; Khan, S.; Channappayya, S.S. No-reference Stereoscopic Image Quality Assessment Using Natural Scene Statistics. Signal Process. Image Commun. 2016, 43, 1–14. [Google Scholar] [CrossRef] [Green Version]
  34. Zhang, Y.; Wu, J.; Xie, X.; Li, L.; Shi, G. Blind image quality assessment with improved natural scene statistics model. Digit. Signal Process. 2016, 57, 56–65. [Google Scholar] [CrossRef]
  35. Fang, Y.; Ma, K.; Wang, Z.; Lin, W.; Fang, Z.; Zhai, G. No-reference quality assessment of contrast-distorted images based on natural scene statistics. IEEE Signal Process. Lett. 2015, 22, 838–842. [Google Scholar] [CrossRef]
  36. Saad, M.; Bovik, A.C.; Charrier, C. A DCT statistics-based blind image quality index. IEEE Signal Process. Lett. 2010, 17, 583–586. [Google Scholar] [CrossRef]
  37. Ma, L.; Li, S.; Ngan, K.N. Reduced-reference image quality assessment in reorganized DCT domain. Signal Process. Image Commun. 2013, 28, 884–902. [Google Scholar] [CrossRef]
  38. Saad, M.A.; Bovik, A.C.; Charrier, C. Blind image quality assessment: A natural scene statistics approach in the DCT domain. IEEE Trans. Image Process. 2012, 21, 3339–3352. [Google Scholar] [CrossRef] [PubMed]
  39. Moorthy, A.K.; Bovik, A.C. A two-step framework for constructing blind image quality indices. IEEE Signal Process. Lett. 2010, 17, 513–516. [Google Scholar] [CrossRef]
  40. Moorthy, A.K.; Bovik, A.C. Blind image quality assessment: From natural scene statistics to perceptual quality. IEEE Trans. Image Process. 2011, 20, 3350–3364. [Google Scholar] [CrossRef] [PubMed]
  41. He, L.; Tao, D.; Li, X.; Gao, X. Sparse representation for blind image quality assessment. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012; pp. 1146–1153. [Google Scholar]
  42. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef] [PubMed]
  43. Kang, L.; Ye, P.; Li, Y.; Doermann, D. Convolutional neural networks for no-reference image quality assessment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1733–1740. [Google Scholar]
  44. Li, J.; Zou, L.; Yan, J.; Deng, D.; Qu, T.; Xie, G. No-reference image quality assessment using Prewitt magnitude based on convolutional neural networks. Signal Image Video Process. 2016, 10, 609–616. [Google Scholar] [CrossRef]
  45. Bosse, S.; Maniry, D.; Wiegand, T.; Samek, W. A deep neural network for image quality assessment. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 3773–3777. [Google Scholar]
  46. Kuzovkin, I.; Vicente, R.; Petton, M.; Lachaux, J.P.; Baciu, M.; Kahane, P.; Rheims, S.; Vidal, J.R.; Aru, J. Frequency-Resolved Correlates of Visual Object Recognition in Human Brain Revealed by Deep Convolutional Neural Networks. bioRxiv 2017, 133694. [Google Scholar] [CrossRef]
  47. Yamins, D.L.; DiCarlo, J.J. Using goal-driven deep learning models to understand sensory cortex. Nat. Neurosci. 2016, 19, 356–365. [Google Scholar] [CrossRef] [PubMed]
  48. Bianco, S.; Celona, L.; Napoletano, P.; Schettini, R. On the use of deep learning for blind image quality assessment. Signal Image Video Process. 2018, 12, 355–362. [Google Scholar] [CrossRef]
  49. Scott, E.T.; Hemami, S.S. No-Reference Utility Estimation with a Convolutional Neural Network. Electron. Imaging 2018, 2018, 1–6. [Google Scholar] [CrossRef]
  50. Jia, S.; Zhang, Y. Saliency-based deep convolutional neural network for no-reference image quality assessment. Multimed. Tools Appl. 2018, 77, 14859–14872. [Google Scholar] [CrossRef]
  51. Zhang, L.; Shen, Y.; Li, H. VSI: A visual saliency-induced index for perceptual image quality assessment. IEEE Trans. Image Process. 2014, 23, 4270–4281. [Google Scholar] [CrossRef] [PubMed]
  52. Farias, M.C.; Akamine, W.Y. On performance of image quality metrics enhanced with visual attention computational models. Electron. Lett. 2012, 48, 631–633. [Google Scholar] [CrossRef]
  53. Engelke, U.; Kaprykowsky, H.; Zepernick, H.J.; Ndjiki-Nya, P. Visual attention in quality assessment. IEEE Signal Process. Mag. 2011, 28, 50–59. [Google Scholar] [CrossRef]
  54. Gu, K.; Wang, S.; Yang, H.; Lin, W.; Zhai, G.; Yang, X.; Zhang, W. Saliency-guided quality assessment of screen content images. IEEE Trans. Multimed. 2016, 18, 1098–1110. [Google Scholar] [CrossRef]
  55. You, J.; Perkis, A.; Hannuksela, M.M.; Gabbouj, M. Perceptual quality assessment based on visual attention analysis. In Proceedings of the 17th ACM international conference on Multimedia, Beijing, China, 19–24 October 2009; pp. 561–564. [Google Scholar]
  56. Le Meur, O.; Ninassi, A.; Le Callet, P.; Barba, D. Overt visual attention for free-viewing and quality assessment tasks: Impact of the regions of interest on a video quality metric. Signal Process. Image Commun. 2010, 25, 547–558. [Google Scholar] [CrossRef]
  57. Le Meur, O.; Ninassi, A.; Le Callet, P.; Barba, D. Do video coding impairments disturb the visual attention deployment? Signal Process. Image Commun. 2010, 25, 597–609. [Google Scholar] [CrossRef]
  58. Akamine, W.Y.; Farias, M.C. Video quality assessment using visual attention computational models. J. Electron. Imaging 2014, 23, 061107. [Google Scholar] [CrossRef]
  59. Ciocca, G.; Corchs, S.; Gasparini, F. A complexity-based image analysis to investigate interference between distortions and image contents in image quality assessment. In Proceedings of the International Workshop on Computational Color Imaging, Milan, Italy, 29–31 March 2017; pp. 105–121. [Google Scholar]
  60. Larson, E.C.; Chandler, D.M. Most apparent distortion: full-reference image quality assessment and the role of strategy. J. Electron. Imaging 2010, 19, 011006. [Google Scholar]
  61. Liu, M.; Wang, M.; Wang, J.; Li, D. Comparison of random forest, support vector machine and back propagation neural network for electronic tongue data classification: Application to the recognition of orange beverage and Chinese vinegar. Sens. Actuators B Chem. 2013, 177, 970–980. [Google Scholar] [CrossRef]
  62. Petrou, M.; Sevilla, P.G. Image Processing: Dealing with Texture; John Wiley and Sons: Hoboken, NJ, USA, 2006. [Google Scholar]
  63. Davies, E.R. Introduction to Texture Analysis. In Handbook of Texture Analysis; Mirmehdi, M., Xie, X., Suri, J., Eds.; Imperial College Press: London, UK, 2008; Chapter 1; pp. 1–31. [Google Scholar] [Green Version]
  64. Galloway, M.M. Texture analysis using gray level run lengths. Comput. Graph. Image Process. 1975, 4, 172–179. [Google Scholar] [CrossRef]
  65. Soh, L.K.; Tsatsoulis, C. Texture analysis of SAR sea ice imagery using gray level co-occurrence matrices. IEEE Trans. Geosci. Remote Sens. 1999, 37, 780–795. [Google Scholar] [CrossRef] [Green Version]
  66. He, D.C.; Wang, L. Texture unit, texture spectrum, and texture analysis. IEEE Trans. Geosci. Remote Sens. 1990, 28, 509–512. [Google Scholar]
  67. Julesz, B. Textons, the elements of texture perception, and their interactions. Nature 1981, 290, 91–97. [Google Scholar] [CrossRef] [PubMed]
  68. Ojala, T.; Pietikäinen, M.; Mäenpää, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef] [Green Version]
  69. Hadid, A.; Ylioinas, J.; Bengherabi, M.; Ghahramani, M.; Taleb-Ahmed, A. Gender and texture classification: A comparative analysis using 13 variants of local binary patterns. Pattern Recognit. Lett. 2015, 68, 231–238. [Google Scholar] [CrossRef]
  70. Brahnam, S.; Jain, L.C.; Lumini, A.; Nanni, L. Introduction to Local Binary Patterns: New Variants and Applications. In Local Binary Patterns; Studies in Computational Intelligence; Brahnam, S., Jain, L.C., Nanni, L., Lumini, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 506, pp. 1–13. [Google Scholar]
  71. Pietikäinen, M.; Ojala, T.; Xu, Z. Rotation-invariant texture classification using feature distributions. Pattern Recognit. 2000, 33, 43–52. [Google Scholar] [CrossRef] [Green Version]
  72. Ojansivu, V.; Heikkilä, J. Blur insensitive texture classification using local phase quantization. In Proceedings of the International Conference on Image and Signal Processing, Cherbourg-Octeville, France, 1–3 July 2008; pp. 236–243. [Google Scholar]
  73. Kannala, J.; Rahtu, E. Bsif: Binarized statistical image features. In Proceedings of the 2012 21st International Conference on Pattern Recognition (ICPR), Tsukuba, Japan, 11–15 November 2012; pp. 1363–1366. [Google Scholar]
  74. Arashloo, S.R.; Kittler, J. Dynamic texture recognition using multiscale binarized statistical image features. IEEE Trans. Multimed. 2014, 16, 2099–2109. [Google Scholar] [CrossRef]
  75. Raja, K.B.; Raghavendra, R.; Busch, C. Binarized statistical features for improved iris and periocular recognition in visible spectrum. In Proceedings of the 2014 International Workshop on Biometrics and Forensics (IWBF), Valletta, Malta, 27–28 March 2014; pp. 1–6. [Google Scholar]
  76. Arashloo, S.R.; Kittler, J.; Christmas, W. Face spoofing detection based on multiple descriptor fusion using multiscale dynamic binarized statistical image features. IEEE Trans. Inf. Forensics Secur. 2015, 10, 2396–2407. [Google Scholar] [CrossRef]
  77. Raghavendra, R.; Busch, C. Robust scheme for iris presentation attack detection using multiscale binarized statistical image features. IEEE Trans. Inf. Forensics Secur. 2015, 10, 703–715. [Google Scholar] [CrossRef]
  78. Mehta, R.; Egiazarian, K.O. Rotated Local Binary Pattern (RLBP)-Rotation Invariant Texture Descriptor; ICPRAM: Barcelona, Spain, 2013; pp. 497–502. [Google Scholar]
  79. Mehta, R.; Egiazarian, K. Dominant rotated local binary patterns (DRLBP) for texture classification. Pattern Recognit. Lett. 2016, 71, 16–22. [Google Scholar] [CrossRef]
  80. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86. [Google Scholar] [CrossRef]
  81. Briët, J.; Harremoës, P. Properties of classical and quantum Jensen-Shannon divergence. Phys. Rev. A 2009, 79, 052311. [Google Scholar] [CrossRef]
  82. Ye, N.; Borror, C.M.; Parmar, D. Scalable Chi-Square Distance versus Conventional Statistical Distance for Process Monitoring with Uncorrelated Data Variables. Qual. Reliab. Eng. Int. 2003, 19, 505–515. [Google Scholar] [CrossRef]
  83. Guo, Z.; Zhang, L.; Zhang, D. A completed modeling of local binary pattern operator for texture classification. IEEE Trans. Image Process. 2010, 19, 1657–1663. [Google Scholar] [PubMed]
  84. Guo, Y.; Zhao, G.; Pietikäinen, M. Texture Classification using a Linear Configuration Model based Descriptor. In Proceedings of the British Machine Vision Conference; BMVA Press: Dundee, UK, 2011; pp. 119.1–119.10. [Google Scholar]
  85. Mäenpää, T. The Local Binary Pattern Approach to Texture Analysis: Extensions and Applications; Oulun Yliopisto: Oulu, Finland, 2003. [Google Scholar]
  86. Jain, A.; Healey, G. A multiscale representation including opponent color features for texture recognition. IEEE Trans. Image Process. 1998, 7, 124–128. [Google Scholar] [CrossRef] [PubMed]
  87. Wolf, L.; Hassner, T.; Taigman, Y. Descriptor Based Methods in the Wild. In Workshop on Faces in ‘Real-Life’ Images: Detection, Alignment, and Recognition; Erik Learned-Miller and Andras Ferencz and Frédéric Jurie: Marseille, France, 2008. [Google Scholar]
  88. Chang, D.J.; Desoky, A.H.; Ouyang, M.; Rouchka, E.C. Compute pairwise manhattan distance and pearson correlation coefficient of data points with gpu. In Proceedings of the 10th ACIS International Conference on Software Engineering, Artificial Intelligences, Networking and Parallel/Distributed Computing, Daegu, Korea, 27–29 May 2009; pp. 501–506. [Google Scholar]
  89. De Maesschalck, R.; Jouan-Rimbaud, D.; Massart, D.L. The mahalanobis distance. Chemom. Intell. Lab. Syst. 2000, 50, 1–18. [Google Scholar] [CrossRef]
  90. Merigó, J.M.; Casanovas, M. A new Minkowski distance based on induced aggregation operators. Int. J. Comput. Intell. Syst. 2011, 4, 123–133. [Google Scholar] [CrossRef]
  91. Freitas, P.G.; Akamine, W.Y.; Farias, M.C. Blind Image Quality Assessment Using Multiscale Local Binary Patterns. J. Imaging Sci. Technol. 2016, 60, 60405-1. [Google Scholar] [CrossRef]
  92. Anthimopoulos, M.; Gatos, B.; Pratikakis, I. Detection of artificial and scene text in images and video frames. Pattern Anal. Appl. 2013, 16, 431–446. [Google Scholar] [CrossRef]
  93. Freitas, P.G.; Akamine, W.Y.; Farias, M.C. No-reference image quality assessment based on statistics of Local Ternary Pattern. In Proceedings of the 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX), Lisbon, Portugal, 6–8 June 2016; pp. 1–6. [Google Scholar]
  94. Zhang, J.; Sclaroff, S. Exploiting surroundedness for saliency detection: A Boolean map approach. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 889–902. [Google Scholar] [CrossRef] [PubMed]
  95. Fernández-Delgado, M.; Cernadas, E.; Barro, S.; Amorim, D. Do we need hundreds of classifiers to solve real world classification problems? J. Mach. Learn. Res. 2014, 15, 3133–3181. [Google Scholar]
  96. Ye, P.; Kumar, J.; Kang, L.; Doermann, D. Unsupervised feature learning framework for no-reference image quality assessment. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012; pp. 1098–1105. [Google Scholar]
  97. Liu, L.; Dong, H.; Huang, H.; Bovik, A.C. No-reference image quality assessment in curvelet domain. Signal Process. Image Commun. 2014, 29, 494–505. [Google Scholar] [CrossRef]
  98. Liu, L.; Liu, B.; Huang, H.; Bovik, A.C. No-reference image quality assessment based on spatial and spectral entropies. Signal Process. Image Commun. 2014, 29, 856–863. [Google Scholar] [CrossRef]
  99. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  100. Sheikh, H.R.; Wang, Z.; Cormack, L.; Bovik, A.C. LIVE Image Quality Assessment Database Release 2. 2015. Available online: http://live.ece.utexas.edu/research/quality (accessed on 30 September 2016).
  101. Larson, E.C.; Chandler, D. Categorical Image Quality (CSIQ) Database. 2010. Available online: http://vision.okstate.edu/csiq (accessed on 30 September 2016).
  102. Ponomarenko, N.; Jin, L.; Ieremeiev, O.; Lukin, V.; Egiazarian, K.; Astola, J.; Vozel, B.; Chehdi, K.; Carli, M.; Battisti, F.; et al. Image database TID2013: Peculiarities, results and perspectives. Signal Process. Image Commun. 2015, 30, 57–77. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Object detection using YOLO [18] on the distorted (left) and pristine (right) images, taken from GoPro [19] dataset. The detection effectiveness of YOLO is remarkably impaired by the quality of the input image.
Figure 2. Circularly symmetric P neighbors extracted from a distance R.
Figure 3. Calculation of LBP labels.
Figure 4. Reference image and its correspondent Local Binary Pattern (LBP) channels computed using three different radius (R) values.
Figure 5. Illustration of the basic Local Ternary Pattern descriptor.
Figure 6. BSIF code images at different scales.
Figure 7. Rotation effect on LBP and RLBP descriptors: (a) original image and its rotated version, (b) Illustration of the neighbors rotation for the same pixel ‘63’, (c) Thresholded neighbors, values above threshold are shown in red color, (d) The weights corresponding to the thresholded neighbors, (e) LBP values, (f) Thresholded neighbors for RLBP with reference denoted in yellow color, (g) The weights of the thresholded neighbors, (h) The RLBP values for the original and rotated image is same [79].
Figure 8. Effect of rotation on LBP and RLBP information.
Figure 9. Framework of CLBP descriptor [83].
Figure 10. Sampling scheme for the OCLBP R G and OCLBP R B descriptors.
Figure 11. Original images and their output channels, computed using the OCLBP descriptor.
Figure 12. The Three-Patch LBP code with α = 2 , w = 3 and S = 8 [87].
Figure 13. The Four-Patch LBP code with α = 1 , w = 3 and S = 8 [87].
Figure 14. Feature extraction steps. (a) Multipoint LBP sampling. (b) Multiple histogram generation from LBP.
Figure 15. Feature extraction using MLBP histograms.
Figure 16. Illustration of process of extracting the feature vector x with L = 2 .
Figure 17. The reference image and its upper and lower patterns generated using the Local Ternary Pattern (LTP) descriptor with four different threshold values.
Figure 18. Pattern extraction process for a given pixel using LBP and LVP descriptors with R = 1, P = 8, t c = 35 , and t p = { 71 , 32 , 91 , 103 , 21 , 10 , 34 , 13 } .
Figure 19. Reference image, its impaired versions, and their respective LBP and LVP maps ( C L B P and C L V P ).
Figure 20. (a) General view of OCPP, (b) XY (P X Y = 16 ) plane, (c) XZ (P X Z = 8 ) plane, and (d) YZ (P Y Z = 10 ) plane.
Figure 21. Example of original images (a), their LBP (b), BMS (c), and SLBP (d) maps.
Figure 22. Multiple histogram generation from SLBP.
Figure 23. Training the quality metric.
Figure 24. Predicting quality scores.
Figure 25. SROCC distribution on LIVE2 using basic LBP.
Figure 26. Distribution of average SROCC after 100 simulations using different LBP variations. (a) LIVE2. (b) CSIQ. (c) TID2013.
Figure 27. Distribution of average SROCC after 100 simulations using different state-of-the-art methods. (a) LIVE2. (b) CSIQ. (c) TID2013.
Figure 28. Stability of the mean SROCC over the simulations after 10, 50, 100, 500, and 1000 simulations using different descriptors.
Table 1. Tested LBP variants.
Abbreviation | Name | Parameters
LBP-ri | Basic Local Binary Patterns with rotation invariance | Radius (R) and number of neighbors (P)
LBP-u | Uniform Local Binary Patterns | Radius (R) and number of neighbors (P)
LBP-riu2 | Uniform Local Binary Patterns with rotation invariance | Radius (R) and number of neighbors (P)
BSIF | Binarized Statistical Image Features | Window size and number of bits
LPQ | Local Phase Quantization | Local frequency estimation
CLBP | Complete Local Binary Patterns | CLBP-S, CLBP-C, and CLBP-M
LCP | Local Configuration Patterns | Radius (R) and number of neighbors (P)
LTP | Local Ternary Patterns | Threshold (τ), radius (R), and number of neighbors (P)
RLBP | Rotated Local Binary Patterns | Radius (R) and number of neighbors (P)
TPLBP | Three-Patch Local Binary Patterns | Patch size (w), radius (R), and angle between neighboring patches
FPLBP | Four-Patch Local Binary Patterns | Patch size (w), radius of first ring (R1), radius of second ring (R2), and angle between neighboring patches
LVP | Local Variance Patterns | Radius (R) and number of neighbors (P)
OCLBP | Opposite Color Local Binary Patterns | Radius (R) and number of neighbors (P)
OCPP | Orthogonal Color Planes Patterns | Radius (R) and number of neighbors (P)
SLBP | Salient Local Binary Patterns | Radius (R) and number of neighbors (P)
MLBP | Multiscale Local Binary Patterns | Multiple values of radius (R) and number of neighbors (P)
MLTP | Multiscale Local Ternary Patterns | Multiple values of radius (R) and number of neighbors (P)
MSLBP | Multiscale Salient Local Binary Patterns | Multiple values of radius (R) and number of neighbors (P)
Table 2. Average SROCC of simulations on tested image databases using basic LBP variations.
Columns: DB, DIST, then LBP-ri, LBP-u, and LBP-riu2, each with eight (R, P) configurations in the order (R = 1, P = 4), (R = 1, P = 8), (R = 2, P = 4), (R = 2, P = 8), (R = 2, P = 16), (R = 3, P = 4), (R = 3, P = 8), (R = 3, P = 16).
LIVE 2JPEG0.89590.93060.82380.90580.91240.77590.86830.90650.89210.92750.83760.90630.91760.81760.83010.90690.89550.92040.83430.84810.89060.77160.79710.8813
JPEG2k0.90620.94230.87720.91610.93240.78120.89990.92380.90560.93530.86910.91490.92770.81810.84640.90230.90880.92450.87420.87240.88950.78570.82410.8816
WN0.97530.97940.95210.96710.96940.93090.95530.96760.97430.97820.93560.96610.97030.92850.94650.96870.97530.97710.95380.96070.96610.92940.94070.9642
GB0.91230.96210.91690.94740.95510.88730.93310.94790.92530.96110.91680.94940.96320.87710.93490.96660.91370.94810.91970.91440.93170.88080.89460.9134
FF0.83410.88710.78780.86870.90540.64590.80270.85390.85210.87550.76920.84930.90260.65880.67560.87140.83250.89740.78210.79590.87550.64880.75850.8672
ALL0.90150.95320.87130.92880.94220.80380.89880.92740.91010.94170.86310.92350.94270.82080.85010.92820.90480.93660.87040.88260.91740.80340.84930.9079
CSIQJPEG0.82450.88610.81350.89080.87050.81420.86820.88060.82410.89120.85130.86310.87250.84460.85060.87010.81760.85210.80730.85180.86420.80830.83230.8688
JPEG2k0.76950.85320.78670.83790.84140.69640.82720.83390.76540.82660.76580.80650.81230.70250.74520.79770.76990.78510.77380.75710.76250.68440.70630.7524
WN0.70790.84520.64040.79260.82290.52410.79840.89050.63280.91330.76580.71850.74990.68010.71760.65880.71490.81730.64030.66150.74280.67930.70310.6704
GB0.85920.90780.83780.88910.91250.78890.88080.91410.86690.88560.82730.87380.89720.79460.84550.88730.85470.89230.83350.87180.87780.79690.84570.8828
PN0.57860.88270.52890.83330.87680.66540.73310.85410.78210.85110.61840.74460.76480.58570.66980.68010.57350.82580.53230.75710.71910.52380.63010.6729
CD0.30660.59010.31590.47910.49680.26150.38570.45770.38840.47140.45610.29290.35360.40510.36070.30520.26610.37880.29670.32450.31450.27310.39760.2093
ALL0.67350.82780.64710.79460.80190.62740.75610.79610.68540.80280.66350.73410.74570.63650.70590.70860.66380.77180.64210.70910.71810.62110.67960.6861
TID 2013AGC0.47810.61350.23530.10840.47130.37030.21310.35540.19540.34960.17420.13090.25190.25090.21540.29120.46070.32730.19750.16650.14690.36810.20610.2746
AGN0.78610.77570.43460.58810.67990.56420.44260.69690.62010.61380.36260.28730.67260.57260.22070.45810.76190.53530.44340.41460.56730.53420.47530.5957
CA0.21860.20520.36740.22110.24530.29670.26930.20610.20350.21860.27970.22160.24750.30320.29620.27710.20650.24070.41550.36510.28280.25050.37810.2939
CC0.12870.10070.11780.11810.08690.09710.14760.16960.12840.17420.11310.06230.09570.12380.06070.07490.15510.14380.11610.09960.07730.09380.10980.1073
CCS0.18910.12410.16660.12550.23090.18980.21310.14730.17510.13190.19380.21950.29030.17540.18810.23110.16990.17860.16840.15870.15990.16710.23740.1852
CN0.30520.19790.16550.14250.32530.19590.11810.13840.18340.18510.14910.13840.17420.14650.12570.18420.36450.14730.17480.14670.13650.17010.23010.2325
GB0.82160.83840.80410.80060.81220.77810.82080.82610.81390.83410.80270.82530.80380.73910.81520.80750.80730.81990.80230.80950.82530.77660.79690.8276
HFN0.79340.81260.69680.76480.83650.77930.61210.84730.79010.75410.64310.67190.86480.72480.52310.78210.78910.65110.70480.67170.67010.76040.64150.7375
ICQ0.77410.77150.76380.82460.79730.67480.80880.81960.76420.76330.74980.79040.81830.70990.77030.81730.76340.79080.75540.79110.80110.63830.75420.7818
ID0.35030.81070.62110.76310.72380.60840.68920.69380.27380.53460.55230.74150.79190.53490.57420.70810.35340.41920.63840.60380.50190.59010.47490.4761
IN0.13840.34230.13940.53960.54310.13270.38730.59540.11690.09320.15510.42520.40210.12690.21880.30590.16650.13840.13230.58940.47220.12020.21690.4401
IS0.13780.06310.12010.09770.06920.11830.07950.08940.10680.05980.09950.07430.06590.10750.07420.10540.16520.09360.13220.13280.09820.10250.08660.1271
JPEG0.72410.83920.66780.80160.79730.62650.78140.78610.69120.80350.65230.76150.76570.63110.67510.74480.68880.75190.67620.66310.68310.64310.63670.6941
JPEGTE0.12730.29420.13610.33610.27840.13530.30070.28690.14340.19880.12610.30260.35990.14520.10920.25230.17070.15340.13510.21030.28030.15910.14530.1888
JPEG2k0.79490.86690.68760.80570.83840.77510.81530.83730.81030.80570.81510.85110.83230.76340.81260.79960.78880.84110.83110.82180.81070.76730.75150.7673
JPEG2kTE0.38880.50150.83260.60490.59340.55260.72030.70730.41420.49810.61490.70990.71310.58230.58880.70070.37650.40570.68530.62380.51210.55810.65840.5642
LBD0.16340.17390.14620.16570.11750.11840.14420.18940.15020.16050.15690.13310.13320.15660.14110.13230.17530.13430.12630.15620.13920.13350.12880.1556
LC0.44190.55810.28690.27310.45070.27690.29960.48070.15420.15150.18650.21070.15530.19010.14760.17690.35960.27340.34730.15190.12840.30920.29840.1146
MGN0.69770.69470.52390.49770.77660.58710.35190.70020.59710.41910.28480.20840.47960.47310.16050.40140.72140.49160.51390.46580.48930.59110.44830.4081
MN0.26770.42950.19520.34690.18320.14480.15010.16150.16670.32360.15310.12860.13980.14490.30870.12880.24380.26520.16310.15730.13190.16110.22520.1771
NEPN0.14130.20540.21070.23580.33830.16110.27210.37080.13290.12730.17950.28620.27060.13910.29170.20940.12540.14160.16670.27870.33730.15330.22520.1996
QN0.77330.85840.78710.80730.83530.73060.79650.81150.82540.86310.80690.82260.87570.79570.80190.83840.77690.80420.77720.77640.80530.74310.78280.8242
SCN0.63990.66030.71030.64260.53570.54110.60030.68070.61110.63030.56810.62570.74960.58110.41690.60840.66730.68030.69650.54420.42380.55380.54570.6853
SSR0.82460.88460.81510.85070.91420.70420.79110.88730.71260.77760.75960.76030.81880.75030.74310.78840.82150.69810.82030.71420.73380.69310.66530.7431
ALL0.45930.58590.46180.51740.53560.41710.47810.51980.42530.46610.40310.47510.52810.38480.40590.46820.44130.44310.46210.46880.46040.41690.42240.4728
Table 3. Average SROCC of 1000 runs of simulations on tested databases using BSIF variations.
Columns: DB, DIST, then eight BSIF configurations defined by the patch SIZE (3 × 3, 5 × 5, 7 × 7) and the number of BITS (5 to 12), followed by Average, STD, MAX, MIN, and Δ (MAX - MIN).
LIVE2JPEG0.88640.90150.88570.89310.87990.89690.88740.86700.88720.01070.90150.86700.0345
JPEG2k0.88030.90460.90190.85850.90590.88650.91380.86200.88920.02090.91380.85850.0553
WN0.94400.95900.96090.96140.96200.96300.96390.94690.95760.00770.96390.94400.0200
GB0.86440.92930.92030.93300.93690.92640.95050.92130.92280.02550.95050.86440.0862
FF0.82700.82080.81740.80750.84860.85870.86740.77280.82750.03060.86740.77280.0946
ALL0.88870.91160.91270.90990.92360.92510.93080.89470.91210.01470.93080.88870.0422
Average0.88180.90450.89980.89390.90950.90950.91900.8775
STD0.03800.04620.04750.05490.04070.03650.03700.0605
MAX0.94400.95900.96090.96140.96200.96300.96390.9469
MIN0.82700.82080.81740.80750.84860.85870.86740.7728
CSIQJPEG0.85410.86380.86620.87800.88250.88000.88000.86870.87170.01000.88250.85410.0284
JPEG2k0.80400.81110.85490.85640.81050.82820.82370.81820.82590.02000.85640.80400.0525
WN0.54680.66490.76680.78820.80890.82170.81930.77540.74900.09590.82170.54680.2749
GB0.69830.79650.78710.80810.87990.87620.87900.87240.82470.06480.87990.69830.1816
PN0.33250.53910.65380.79900.78750.76630.76990.74600.67430.16330.79900.33250.4665
CD0.15500.14430.24280.29520.07410.09070.07710.09780.14710.08190.29520.07410.2210
_ALL0.59770.69040.72340.76640.73170.73250.73110.70740.71010.05040.76640.59770.1687
Average0.56980.64430.69930.74160.71070.71370.71150.6980
STD0.25230.24590.21420.20070.28560.27990.28490.2716
MAX0.85410.86380.86620.87800.88250.88000.88000.8724
MIN0.15500.14430.24280.29520.07410.09070.07710.0978
TIDAGC0.25990.22730.38680.48880.40420.39310.39230.47870.37890.09280.48880.22730.2615
AGN0.50460.53880.72500.74620.66150.64000.68080.69540.64900.08590.74620.50460.2415
CA0.57270.67200.67290.67710.50790.50570.53510.58240.59070.07410.67710.50570.1714
CC0.12190.09460.11850.13620.08150.09650.08850.08380.10270.02020.13620.08150.0546
CCS0.14310.14150.14350.11920.18810.18060.22460.17620.16460.03390.22460.11920.1054
CN0.13380.31650.27350.37690.29420.42150.46000.48380.34500.11490.48380.13380.3500
GB0.75460.82770.81540.84160.88320.88620.90510.89530.85110.05120.90510.75460.1505
HFN0.65800.76280.80470.83130.77570.78780.81310.80680.78000.05390.83130.65800.1733
ICQ0.71170.77770.77000.79390.76320.77420.78540.81230.77350.02930.81230.71170.1006
ID0.56270.68040.69460.68230.73380.74460.75380.77850.70380.06720.77850.56270.2158
IN0.41000.73850.67120.70080.77620.75920.77140.69950.69090.11960.77620.41000.3662
IS0.11420.10920.11460.09380.12910.14350.16890.15190.12810.02500.16890.09380.0750
JPEG0.76250.81680.76970.77080.81770.80260.80910.82150.79630.02460.82150.76250.0591
JPEGTE0.10480.37700.42310.47230.47500.49080.57520.50000.42730.14240.57520.10480.4704
JPEG2k0.76220.83620.79230.82080.82080.82460.81150.84450.81410.02620.84450.76220.0823
JPEG2kTE0.33620.40450.36460.52500.54760.67430.69220.77460.53990.16350.77460.33620.4385
LBD0.28080.28640.29680.37750.33420.33000.33430.36520.32570.03550.37750.28080.0967
LC0.25690.27310.47770.53230.57960.63150.63000.65650.50470.15900.65650.25690.3996
MGN0.37540.51730.66920.69240.65080.64690.65270.69190.61210.11050.69240.37540.3170
MN0.18330.32780.23370.35910.18120.18620.18120.16580.22730.07480.35910.16580.1933
NEPN0.12010.13440.14560.14000.23830.26100.25690.20910.18820.05930.26100.12010.1410
QN0.64540.70460.76150.74690.72810.70010.72960.75850.72180.03830.76150.64540.1162
SCN0.46270.62380.69040.71380.82150.80690.81310.88150.72670.13560.88150.46270.4188
SSR0.72310.79620.87000.91080.88230.89380.91920.90080.86200.06790.91920.72310.1962
_ALL0.42520.53640.58090.61770.59650.61260.62470.59640.57380.06610.62470.42520.1995
Average0.41540.50090.53060.56670.55490.56780.58430.5924
STD0.23720.25590.25600.24950.25660.25010.25090.2624
MAX0.76250.83620.87000.91080.88320.89380.91920.9008
MIN0.10480.09460.11460.09380.08150.09650.08850.0838
Table 4. Average SROCC of 1000 runs of simulations on tested databases using LPQ variations.
Columns: DB, DIST, C1, C2, C3, C4, C5, C6, C7, Average, STD, MAX, MIN, Δ (MAX - MIN).
LIVE2JPEG0.89990.93240.91400.91860.91300.91970.91740.91640.00970.93240.89990.0325
JPEG2k0.88650.89160.88320.87970.89000.88280.88410.88540.00420.89160.87970.0119
WN0.94450.96050.93230.95660.94220.95650.95560.94980.01020.96050.93230.0282
GB0.89240.91260.90420.91160.90370.92580.92310.91050.01160.92580.89240.0333
FF0.86590.85360.83940.85960.85230.84500.85770.85330.00900.86590.83940.0265
ALL0.90510.91410.89980.91490.90470.91630.91670.91020.00690.91670.89980.0169
Average0.89910.91080.89550.90680.90100.90770.9091
STD0.02610.03630.03190.03370.02950.03870.0339
MAX0.94450.96050.93230.95660.94220.95650.9556
MIN0.86590.85360.83940.85960.85230.84500.8577
CSIQJPEG0.84150.88010.86510.87010.85380.88120.87060.86600.01420.88120.84150.0397
JPEG2k0.73230.76770.81720.76980.80290.80290.79480.78390.02910.81720.73230.0849
WN0.35860.58050.56580.60080.55540.65050.63720.56410.09720.65050.35860.2919
GB0.80470.84830.83430.86470.83050.87230.86870.84620.02460.87230.80470.0676
PN0.66810.72100.78670.73280.77400.81250.79440.75560.05070.81250.66810.1444
CD0.30170.32480.32640.43280.33210.44620.44360.37250.06480.44620.30170.1445
ALL0.68580.72230.73600.74490.73080.76230.76040.73470.02610.76230.68580.0765
Average0.62750.69210.70450.71650.69710.74680.7385
STD0.21280.18910.19380.15460.18880.15340.1519
MAX0.84150.88010.86510.87010.85380.88120.8706
MIN0.30170.32480.32640.43280.33210.44620.4436
TIDAGC0.35260.21150.37920.36350.39000.40830.43040.36220.07150.43040.21150.2189
AGN0.51490.53000.52920.66930.53990.71450.70920.60100.09180.71450.51490.1996
CA0.66010.64510.62560.64440.64100.64550.64130.64330.01010.66010.62560.0344
CC0.09960.09920.09460.11000.10270.10150.10370.10160.00470.11000.09460.0154
CCS0.13440.10880.13000.13620.13420.12580.13460.12920.00960.13620.10880.0273
CN0.38460.25710.36960.31310.38810.35310.36040.34660.04670.38810.25710.1310
GB0.67620.77200.74310.81620.72570.84310.82040.77090.05990.84310.67620.1669
HFN0.79630.78620.82540.81230.82350.84620.83620.81800.02130.84620.78620.0600
ICQ0.76620.78080.82000.78310.80230.81650.80850.79680.02030.82000.76620.0538
ID0.62770.71920.72310.78770.70620.80880.79380.73810.06370.80880.62770.1812
IN0.76770.60230.72690.71580.74260.69920.75230.71530.05480.76770.60230.1654
IS0.10310.09500.11890.08230.11150.08310.09000.09770.01410.11890.08230.0365
JPEG0.69850.80770.73990.83600.72480.84720.83120.78360.06090.84720.69850.1487
JPEGTE0.52040.38850.49380.45120.51150.43540.45810.46550.04660.52040.38850.1319
JPEG2k0.79150.76920.79680.81200.80720.80820.80850.79910.01500.81200.76920.0427
JPEG2kTE0.45540.44190.51580.38230.51650.49310.50380.47270.04930.51650.38230.1342
LBD0.33620.36270.35600.35910.35480.38620.36350.35980.01480.38620.33620.0499
LC0.72120.30880.68080.49960.71690.58380.58150.58470.14640.72120.30880.4123
MGN0.64060.58230.67070.73460.67530.77050.77130.69220.07030.77130.58230.1890
MN0.36780.62900.41740.46810.40950.54660.51370.47890.09070.62900.36780.2612
NEPN0.16390.13290.17580.21460.19680.20730.21890.18720.03130.21890.13290.0860
QN0.83620.81460.81270.84690.83080.83230.84420.83110.01330.84690.81270.0342
SCN0.74920.70380.82650.72690.81310.76350.80270.76940.04620.82650.70380.1227
SSR0.72310.88460.72650.88150.74690.88350.88310.81850.08110.88460.72310.1615
ALL0.59100.55930.60310.63580.60750.65190.65450.61470.03470.65450.55930.0951
Average0.53910.51970.55610.56330.56080.58620.5886
STD0.23610.25930.24080.25730.23880.25880.2563
MAX0.83620.88460.82650.88150.83080.88350.8831
MIN0.09960.09500.09460.08230.10270.08310.0900
Table 5. Average SROCC of 1000 runs of simulations on tested databases using CLBP variations.
Columns: DB, DIST, then CLBP configurations with radius R = 1 and R = 2, each sampled with P = 4, 8, 12, and 16 points, followed by Average, STD, MAX, MIN, and Δ (MAX - MIN).
LIVE2JPEG0.90430.90740.90560.90490.88890.88260.90860.86170.89550.01660.90860.86170.0469
JPEG2k0.90950.91070.92510.90090.91440.89800.91640.88880.90800.01150.92510.88880.0363
WN0.97470.97140.97990.97300.95540.95850.98360.95150.96850.01190.98360.95150.0321
GB0.91870.93430.92630.93830.91570.91680.92850.91510.92420.00900.93830.91510.0232
FF0.89640.85200.84060.85770.82610.80030.85530.76340.83650.04040.89640.76340.1329
ALL0.92640.92270.92520.91890.90530.89900.92420.87990.91270.01660.92640.87990.0465
Average0.92170.91640.91710.91560.90100.89250.91940.8767
STD0.02800.03910.04500.03870.04270.05220.04110.0637
MAX0.97470.97140.97990.97300.95540.95850.98360.9515
MIN0.89640.85200.84060.85770.82610.80030.85530.7634
TIDAGC0.46420.23960.28920.31960.42470.16420.16480.16380.27880.11850.46420.16380.3004
AGN0.78270.68310.67580.73960.65650.51920.55460.53730.64360.09710.78270.51920.2635
CA0.52750.62650.57360.45190.66400.60350.49160.53400.55910.07110.66400.45190.2121
CC0.12580.09890.08650.11920.11380.09120.12040.09540.10640.01510.12580.08650.0393
CCS0.17040.14960.16230.14150.18530.15410.11380.13170.15110.02250.18530.11380.0715
CN0.23730.26850.30960.40120.16580.42620.40350.33690.31860.09140.42620.16580.2604
GB0.87080.89150.88460.86310.88670.87770.86560.87220.87650.01030.89150.86310.0285
HFN0.84960.82630.81840.82320.84120.76540.77680.72200.80290.04390.84960.72200.1276
ICQ0.82050.83650.82770.81850.82080.81030.84760.83920.82770.01250.84760.81030.0372
ID0.56920.59920.61380.60000.72960.58150.64850.67310.62690.05370.72960.56920.1604
IN0.61500.68920.73280.77920.58310.50580.62530.61540.64320.08700.77920.50580.2735
IS0.10660.21230.18790.12460.14080.14710.10460.11440.14230.03930.21230.10460.1077
JPEG0.81010.83450.80880.80460.77460.80540.82810.80260.80860.01800.83450.77460.0599
JPEGTE0.24870.38460.35180.41250.24640.37620.44620.40150.35850.07380.44620.24640.1998
JPEG2k0.82000.85690.85150.82230.87860.84420.86150.86380.84990.02030.87860.82000.0586
JPEG2kTE0.56960.55380.57230.58120.70150.69770.64770.66230.62330.06080.70150.55380.1477
LBD0.19080.27230.18940.17870.14730.18750.26540.28810.21490.05220.28810.14730.1407
LC0.65850.55690.54150.55650.58460.53350.48270.46810.54780.05930.65850.46810.1904
MGN0.72750.69460.67580.72910.70910.56070.57610.55830.65390.07570.72910.55830.1707
MN0.41850.42340.35700.39370.18830.15980.19460.19080.29080.11700.42340.15980.2635
NEPN0.14520.22680.28010.36130.14860.20970.23600.24850.23200.07000.36130.14520.2160
QN0.76460.81080.83230.81540.72040.81030.76180.78000.78690.03700.83230.72040.1119
SCN0.78770.79270.74920.76460.71230.64080.65120.65770.71950.06290.79270.64080.1519
SSR0.88420.86080.88380.85190.87310.78920.76770.79620.83840.04670.88420.76770.1165
_ALL0.59250.60920.60700.59830.57470.56260.59040.58460.58990.01580.60920.56260.0466
Average0.55030.55990.55450.56210.53890.51290.52110.5175
STD0.27020.25980.26010.25410.28100.26020.25740.2578
MAX0.88420.89150.88460.86310.88670.87770.86560.8722
MIN0.10660.09890.08650.11920.11380.09120.10460.0954
Table 6. Average SROCC of 100 runs of simulations on tested image databases using other LBP variations.
Columns: DB, DIST, LCP, LTP, RLBP, TPLBP, FPLBP, LVP, OCLBP, OCPP, SLBP, MLBP, MLTP, MSLBP.
LIVE 2JPEG0.89210.82780.80520.70470.66260.93630.93120.96780.91510.92490.93950.9373
JPEG2k0.89130.80290.82990.64910.55520.94610.94110.95970.93340.93420.93720.9406
WN0.96280.93580.92250.63540.67740.97640.97310.98610.98250.98220.96460.9831
GB0.93040.88240.91110.59230.58840.95310.95710.96120.94320.95240.95300.9619
FF0.80340.70040.78210.67240.64430.88480.89360.91410.90790.94870.87580.9364
ALL0.90060.82510.84870.63080.61710.93760.94180.95620.94050.92380.93160.9528
Average0.89680.82910.84990.64750.62420.93910.93970.95750.93710.94440.93360.9520
STD0.05340.07940.05660.03840.04650.03030.02690.02380.02630.02200.03070.0182
MAX0.96280.93580.92250.70470.67740.97640.97310.98610.98250.98220.96460.9831
MIN0.80340.70040.78210.59230.55520.88480.89360.91410.90790.92380.87580.9364
CSIQJPEG0.84120.80110.71860.75240.71790.92210.89430.95960.87540.88470.92920.9064
JPEG2k0.77460.63710.65520.56990.61180.89460.88650.93310.79130.80950.88770.8156
WN0.81520.50570.60640.19310.35990.70630.84410.91860.84950.90140.64540.8939
GB0.77240.79010.79390.85170.69720.91370.92030.93900.85390.91590.92440.8816
PN0.70490.53560.20780.08150.33670.70910.83610.94710.75020.88720.78280.8431
CD0.13820.22460.10720.31740.10250.26590.49140.77530.45150.51720.20820.5299
ALL0.66720.58040.51090.48150.32140.82380.84210.92530.79710.83990.82800.8324
Average0.67340.58210.51430.46390.44960.74790.81640.91400.76700.82230.74370.8147
STD0.24350.19580.26070.28470.23000.23120.14680.06260.14570.13950.25590.1300
MAX0.84120.80110.79390.85170.71790.92210.92030.95960.87540.91590.92920.9064
MIN0.13820.22460.10720.08150.10250.26590.49140.77530.45150.51720.20820.5299
TID 2013AGC0.36830.36540.22730.19420.12070.46880.53150.83080.39990.57080.59630.6018
AGN0.39030.42110.59030.17310.21110.60690.72530.86340.63690.78840.66310.7811
CA0.28440.22670.33560.28840.16040.69440.42540.88210.23790.31440.67490.3891
CC0.10890.18570.08160.09530.13310.17560.08460.47850.12610.08810.18860.2161
CCS0.12510.15030.19340.21480.12960.19970.57040.55770.14020.13750.23840.2757
CN0.47690.28960.26820.11010.19420.21010.58490.53090.27250.32490.38800.5229
GB0.84550.57950.80840.80720.40960.85510.86070.89140.82150.87690.74650.8721
HFN0.62260.66780.71250.27350.35030.81810.81180.94450.73610.86760.76260.9031
ICQ0.72730.63340.49510.55920.51230.82610.78490.83500.83290.81340.76030.8302
ID0.53070.22490.49690.36230.27380.86940.77190.91020.56840.64340.70630.7488
IN0.43420.42570.46490.11070.15340.28660.50690.66960.18420.45510.64840.5838
IS0.07460.08210.10580.07570.05270.14060.10610.16990.09920.11650.32910.2092
JPEG0.68230.69140.66530.35060.57380.89610.82010.91580.71230.79640.66310.7907
JPEGTE0.43610.11380.25230.10240.08960.29250.51530.37950.25110.21310.23140.4353
JPEG2k0.80570.56920.71380.65570.36610.90990.87690.94070.86610.85070.77800.9369
JPEG2kTE0.60150.75310.34760.37690.15310.43940.59840.65520.50460.67110.65940.7388
LBD0.09690.10460.14530.12150.11350.19440.13110.18850.23740.14640.38130.2365
LC0.32420.18190.32260.27760.08760.52890.56920.83260.25650.37110.65330.3819
MGN0.42110.12810.54880.30850.15410.53240.67530.84710.63350.66660.62090.7512
MN0.14360.19880.19810.15460.29590.41680.51460.72900.33290.15350.42430.1638
NEPN0.15830.10090.12070.26030.09080.15340.21980.15450.30260.25580.12560.3712
QN0.79610.77110.65240.36180.56760.78690.82070.78900.87690.86230.73610.9173
SCN0.65460.65760.79110.13310.11260.65840.71920.89140.58030.74340.70150.6042
SSR0.75880.57810.65690.66230.59880.90880.88920.93910.66380.84880.84570.8357
ALL0.46310.34370.40720.25120.13770.69970.64170.76210.59010.63390.60780.7012
Average0.45320.37780.42410.29120.24170.54280.59020.70350.47460.52840.56520.5919
STD0.24600.23530.23080.19580.17050.27670.24180.25240.25620.28730.20980.2530
MAX0.84550.77110.80840.80720.59880.90990.88920.94450.87690.87690.84570.9369
MIN0.07460.08210.08160.07570.05270.14060.08460.15450.09920.08810.12560.1638
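As stated in the caption of Table 6, each entry is the SROCC between predicted and subjective scores averaged over 100 simulation runs, where each run uses a different random split of the database into training and testing data. The sketch below illustrates this type of protocol; it is not the authors' exact pipeline. The SVR regressor, the 80/20 split ratio, and the variable names (features, mos) are placeholder assumptions, and in practice the split is usually performed over reference contents so that the same scene never appears in both sets.

import numpy as np
from scipy.stats import spearmanr
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

def average_srocc(features, mos, runs=100, test_size=0.2, seed=0):
    # Average SROCC over repeated random train/test splits (a sketch of the
    # "100 runs" protocol; the regressor and split ratio are assumptions).
    rng = np.random.RandomState(seed)
    sroccs = []
    for _ in range(runs):
        x_tr, x_te, y_tr, y_te = train_test_split(
            features, mos, test_size=test_size,
            random_state=rng.randint(0, 2**31 - 1))
        predicted = SVR(kernel='rbf').fit(x_tr, y_tr).predict(x_te)
        rho, _ = spearmanr(predicted, y_te)
        sroccs.append(rho)
    return float(np.mean(sroccs)), float(np.std(sroccs))

# Example with placeholder data (random features stand in for LBP histograms):
# features = np.random.rand(200, 59)
# mos = np.random.rand(200)
# mean_srocc, std_srocc = average_srocc(features, mos)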
Table 7. Average SROCC over 100 simulation runs on the tested image databases, using state-of-the-art IQA methods.
| DB | DISTORTION | PSNR | SSIM | BRISQUE | CORNIA | CQA | SSEQ |
|----|------------|------|------|---------|--------|-----|------|
| LIVE2 | JPEG | 0.8515 | 0.9481 | 0.8641 | 0.9002 | 0.8257 | 0.9122 |
| | JPEG2k | 0.8822 | 0.9438 | 0.8838 | 0.9246 | 0.8366 | 0.9388 |
| | WN | 0.9851 | 0.9793 | 0.9750 | 0.9500 | 0.9764 | 0.9544 |
| | GB | 0.7818 | 0.8889 | 0.9304 | 0.9465 | 0.8377 | 0.9157 |
| | FF | 0.8869 | 0.9335 | 0.8469 | 0.9132 | 0.8262 | 0.9038 |
| | ALL | 0.8013 | 0.8902 | 0.9098 | 0.9386 | 0.8606 | 0.9356 |
| | Average | 0.8648 | 0.9306 | 0.9017 | 0.9289 | 0.8605 | 0.9268 |
| | STD | 0.0726 | 0.0353 | 0.0469 | 0.0197 | 0.0582 | 0.0192 |
| | MAX | 0.9851 | 0.9793 | 0.9750 | 0.9500 | 0.9764 | 0.9544 |
| | MIN | 0.7818 | 0.8889 | 0.8469 | 0.9002 | 0.8257 | 0.9038 |
| CSIQ | JPEG | 0.9009 | 0.9309 | 0.8525 | 0.8319 | 0.6506 | 0.8066 |
| | JPEG2k | 0.9309 | 0.9251 | 0.8458 | 0.8405 | 0.8214 | 0.7302 |
| | WN | 0.9345 | 0.8761 | 0.6931 | 0.6187 | 0.7276 | 0.7876 |
| | GB | 0.9358 | 0.9089 | 0.8337 | 0.8526 | 0.7486 | 0.7766 |
| | PN | 0.9315 | 0.8871 | 0.7740 | 0.5340 | 0.5463 | 0.6661 |
| | CD | 0.8862 | 0.8128 | 0.4255 | 0.4458 | 0.5383 | 0.4172 |
| | ALL | 0.8088 | 0.8116 | 0.7597 | 0.6969 | 0.6369 | 0.7007 |
| | Average | 0.9041 | 0.8789 | 0.7406 | 0.6886 | 0.6671 | 0.6979 |
| | STD | 0.0462 | 0.0495 | 0.1502 | 0.1624 | 0.1053 | 0.1335 |
| | MAX | 0.9358 | 0.9309 | 0.8525 | 0.8526 | 0.8214 | 0.8066 |
| | MIN | 0.8088 | 0.8116 | 0.4255 | 0.4458 | 0.5383 | 0.4172 |
| TID2013 | AGC | 0.8568 | 0.7912 | 0.4166 | 0.2605 | 0.3964 | 0.3949 |
| | AGN | 0.9337 | 0.6421 | 0.6416 | 0.5689 | 0.6051 | 0.6040 |
| | CA | 0.7759 | 0.7158 | 0.7310 | 0.6844 | 0.4380 | 0.4366 |
| | CC | 0.4608 | 0.3477 | 0.1849 | 0.1400 | 0.2043 | 0.2006 |
| | CCS | 0.6892 | 0.7641 | 0.2715 | 0.2642 | 0.2461 | 0.2547 |
| | CN | 0.8838 | 0.6465 | 0.2176 | 0.3553 | 0.1623 | 0.1642 |
| | GB | 0.8905 | 0.8196 | 0.8063 | 0.8341 | 0.7019 | 0.7058 |
| | HFN | 0.9165 | 0.7962 | 0.7103 | 0.7707 | 0.7104 | 0.7061 |
| | ICQ | 0.9087 | 0.7271 | 0.7663 | 0.7044 | 0.6829 | 0.6834 |
| | ID | 0.9457 | 0.8327 | 0.5243 | 0.7227 | 0.6711 | 0.6716 |
| | IN | 0.9263 | 0.8055 | 0.6848 | 0.5874 | 0.4231 | 0.4272 |
| | IS | 0.7647 | 0.7411 | 0.2224 | 0.2403 | 0.2011 | 0.2013 |
| | JPEG | 0.9252 | 0.8275 | 0.7252 | 0.7815 | 0.6317 | 0.6284 |
| | JPEGTE | 0.7874 | 0.6144 | 0.3581 | 0.5679 | 0.2221 | 0.2195 |
| | JPEG2k | 0.8934 | 0.7531 | 0.7337 | 0.8089 | 0.7219 | 0.7205 |
| | JPEG2kTE | 0.8581 | 0.7067 | 0.7277 | 0.6113 | 0.6529 | 0.6529 |
| | LBD | 0.1301 | 0.6213 | 0.2833 | 0.2157 | 0.2382 | 0.2290 |
| | LC | 0.9386 | 0.8311 | 0.5726 | 0.6682 | 0.4561 | 0.4460 |
| | MGN | 0.9085 | 0.7863 | 0.5548 | 0.4393 | 0.4969 | 0.4897 |
| | MN | 0.8385 | 0.7388 | 0.2650 | 0.2342 | 0.2506 | 0.2575 |
| | NEPN | 0.6931 | 0.5326 | 0.1821 | 0.2855 | 0.1308 | 0.1275 |
| | QN | 0.8636 | 0.7428 | 0.5383 | 0.4922 | 0.7242 | 0.7214 |
| | SCN | 0.9152 | 0.7934 | 0.7238 | 0.7043 | 0.7121 | 0.7064 |
| | SSR | 0.9241 | 0.7774 | 0.7101 | 0.8594 | 0.8115 | 0.8084 |
| | ALL | 0.6869 | 0.5758 | 0.5416 | 0.6006 | 0.4925 | 0.4900 |
| | Average | 0.8126 | 0.7172 | 0.5238 | 0.5361 | 0.4794 | 0.4779 |
| | STD | 0.1814 | 0.1135 | 0.2145 | 0.2258 | 0.2191 | 0.2186 |
| | MAX | 0.9457 | 0.8327 | 0.8063 | 0.8594 | 0.8115 | 0.8084 |
| | MIN | 0.1301 | 0.3477 | 0.1821 | 0.1400 | 0.1308 | 0.1275 |
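In the tables above, SROCC is reported separately for each distortion type, for all distortions pooled together (the ALL rows), and as per-method summary statistics (Average, STD, MAX, and MIN computed over the rows of each database block). The short sketch below shows how such per-distortion figures can be obtained from a single set of predictions; the function and variable names are illustrative and not taken from the paper.

import numpy as np
from scipy.stats import spearmanr

def srocc_per_distortion(predicted, mos, distortion_labels):
    # SROCC for each distortion subset, for the pooled ALL case, and the
    # Average/STD/MAX/MIN summary over all rows of the database block.
    predicted = np.asarray(predicted)
    mos = np.asarray(mos)
    labels = np.asarray(distortion_labels)
    rows = {}
    for dist in sorted(set(labels)):
        mask = labels == dist
        rows[dist], _ = spearmanr(predicted[mask], mos[mask])
    rows['ALL'], _ = spearmanr(predicted, mos)
    values = np.array(list(rows.values()))
    summary = {'Average': values.mean(), 'STD': values.std(),
               'MAX': values.max(), 'MIN': values.min()}
    return rows, summary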
Table 8. Cross-database SROCC validation: models are trained on LIVE2 and tested on CSIQ and TID2013.
| Database | Distortion | BRISQUE | CORNIA | CQA | SSEQ | LVP | OCPP | MLBP | MLTP | MSLBP |
|----------|------------|---------|--------|-----|------|-----|------|------|------|-------|
| TID2013 | JPEG | 0.8058 | 0.7423 | 0.8071 | 0.7823 | 0.7827 | 0.8875 | 0.8378 | 0.8472 | 0.8779 |
| | JPEG2k | 0.8224 | 0.8837 | 0.7724 | 0.8258 | 0.8718 | 0.9246 | 0.9219 | 0.9046 | 0.9293 |
| | WN | 0.8621 | 0.7403 | 0.8692 | 0.6959 | 0.7781 | 0.9001 | 0.8351 | 0.6881 | 0.8766 |
| | GB | 0.8245 | 0.8133 | 0.8214 | 0.8624 | 0.8873 | 0.8651 | 0.8849 | 0.8693 | 0.8958 |
| | ALL | 0.7965 | 0.7599 | 0.8214 | 0.7955 | 0.8365 | 0.8814 | 0.8661 | 0.8137 | 0.8776 |
| CSIQ | JPEG | 0.8209 | 0.7062 | 0.7129 | 0.8141 | 0.8334 | 0.9091 | 0.9012 | 0.8784 | 0.9151 |
| | JPEG2k | 0.8279 | 0.8459 | 0.6957 | 0.7862 | 0.7716 | 0.9101 | 0.8744 | 0.8914 | 0.8846 |
| | WN | 0.6951 | 0.8627 | 0.6596 | 0.4613 | 0.8229 | 0.9107 | 0.8498 | 0.7739 | 0.8809 |
| | GB | 0.8311 | 0.8815 | 0.7648 | 0.7758 | 0.8753 | 0.9188 | 0.9047 | 0.8712 | 0.9115 |
| | ALL | 0.8022 | 0.7542 | 0.7114 | 0.7403 | 0.8359 | 0.8921 | 0.8608 | 0.8628 | 0.8723 |
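Table 8 follows a cross-database protocol: the model is trained once on the whole LIVE2 database and then evaluated, with no retraining, on CSIQ and TID2013. The sketch below outlines this procedure under the same assumptions as before (an SVR regressor and placeholder variable names); because SROCC is rank-based, differences in the subjective-score scales of the databases do not affect this comparison.

from scipy.stats import spearmanr
from sklearn.svm import SVR

def cross_database_srocc(live2_features, live2_mos, target_sets):
    # Fit a single model on LIVE2 and report SROCC on each target database.
    model = SVR(kernel='rbf').fit(live2_features, live2_mos)
    results = {}
    for name, (features, mos) in target_sets.items():
        results[name], _ = spearmanr(model.predict(features), mos)
    return results

# Example usage with placeholder arrays:
# targets = {'CSIQ': (csiq_features, csiq_mos),
#            'TID2013': (tid_features, tid_mos)}
# print(cross_database_srocc(live2_features, live2_mos, targets))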
