Article

Image Quality Assessment Based on Three Features Fusion in Three Fusion Steps

1 School of Artificial Intelligence, Anhui Polytechnic University, Wuhu 241000, China
2 School of Mechanical Engineering, Anhui Polytechnic University, Wuhu 241000, China
3 Department of Light Sources and Illuminating Engineering, Fudan University, Shanghai 200433, China
4 Academy for Engineering & Technology, Fudan University, Shanghai 200433, China
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(4), 773; https://doi.org/10.3390/sym14040773
Submission received: 21 March 2022 / Revised: 2 April 2022 / Accepted: 6 April 2022 / Published: 8 April 2022
(This article belongs to the Section Computer)

Abstract

Objective image quality assessment (IQA) methods were developed to replace subjective observer evaluations of image quality in various applications. This article presents a reliable full-reference color IQA method that compares reference and distorted images in a symmetric way via three fusion steps: luminance channels fusion, similarity maps fusion, and features fusion. A fusion weight coefficient is designed to fuse the luminance channels of the input images, producing an enhancement operator for feature extraction. The spectral residual (SR), gradient, and chrominance features, extracted by symmetric calculations on the reference and distorted images, are then combined through similarity fusion processing. Next, based on how the human visual system (HVS) receives achromatic and chromatic information, a features fusion map is formed as the weighted sum of the three similarity fusion maps. Finally, a deviation pooling strategy is used to export the quality score after features fusion. The resulting method is called the features fusion similarity index (FFS). Various experiments based on statistical evaluation criteria are carried out to optimize the parameters of FFS, after which FFS is compared with other state-of-the-art IQA methods on large-scale benchmark single-distortion databases. The results show that FFS is highly consistent with subjective scores in terms of prediction accuracy; for example, its PLCC ranges from at least 0.9116 to at most 0.9774 over the four databases. In addition, the average running time of FFS is 0.0657 s, indicating high computational efficiency.

1. Introduction

Perceptual image quality assessment (IQA) has become an important issue in many fields and applications [1], for instance, image acquisition, transmission, compression, and enhancement. To assess image quality, a large number of objective IQA methods have been designed in the last few decades [2]. Among them, the most developed are the methods that compare all the information of the processed (distorted) image to the original (reference) image in a symmetric way; these are called full reference IQA (FR-IQA) methods [3]. According to the availability of the reference image, IQA methods can be categorized into three well-established types: (1) full reference (FR) [4,5], (2) reduced reference (RR), and (3) no reference (NR) IQA methods [6]. The main scope of this research is FR-IQA. The most reliable approach to IQA is human opinion scoring, because the human visual system (HVS) is an excellent image information analyzer [7,8]. However, since psycho-visual experiments under standard protocols are laborious, human opinion scoring is infeasible in most practical settings [9]. To solve these problems, objective IQA methods are designed to predict human observer ratings. Human observer ratings typically come in two forms: mean opinion scores (MOSs) and difference mean opinion scores (DMOSs) [10].
The conventional IQA methods are mean squared error (MSE) and peak signal-to-noise ratio (PSNR), which were widely used because of their simplicity [11]. However, their accuracy does not match their efficiency because both ignore the visual mechanisms of the HVS. Hence, numerous IQA methods have been developed by mimicking the HVS to achieve better performance. A representative method for assessing image quality is the structural similarity (SSIM) method, which was proposed based on the assumption that the HVS is more sensitive to structural information [12]. Although the accuracy of SSIM is better than that of MSE and PSNR, it still needs improvement to meet practical demands. Recently, learning-based methods have been proposed. The independent feature similarity (IFS) method was introduced by Chang et al. and consists of feature and luminance components [13]; a FastICA (fast independent component analysis) algorithm [14] is used to train the data in IFS. In addition, Wang et al. proposed a local linear model (LLM) for IQA using a convolutional neural network (CNN) [15]. These methods represent another direction of FR-IQA development. Although learning-based methods can reach higher prediction accuracies, they are over-reliant on training data.
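For reference, a minimal Python sketch of the conventional MSE and PSNR measures for 8-bit images follows; the function names are illustrative and not taken from any cited implementation.

```python
import numpy as np

def mse(ref, dist):
    """Mean squared error between two images of equal size."""
    ref = ref.astype(np.float64)
    dist = dist.astype(np.float64)
    return np.mean((ref - dist) ** 2)

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio in dB (peak = 255 for 8-bit images)."""
    err = mse(ref, dist)
    if err == 0:
        return float('inf')
    return 10.0 * np.log10(peak ** 2 / err)
```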
Additionally, many IQA methods have been proposed using different types of image feature extraction. The feature similarity index (FSIM) [16] combines phase congruency (PC) and gradient magnitude (GM) similarity maps to calculate IQA scores. A mean deviation similarity index (MDSI) [17] uses gradient and chrominance fusion similarity; in MDSI, the pooling strategy is a deviation calculation based on the Minkowski pooling method. Recently, singular value decomposition (SVD) has become a useful tool for assessing image quality, and structural SVD (SSVD) was proposed [18]. These three IQA methods pay more attention to grayscale image features, so they cannot be used to assess color images. An improved SPSIM (SuperPixel-based SIMilarity) method computes similarity maps via MDSI calculations in the YCbCr color space [3]. Therefore, MDSI-style calculation has proved suitable for color image feature computation. In addition, visual saliency has been a hotspot in image processing research, and some state-of-the-art IQA methods have been proposed around it.
Visual saliency has become an effective feature for IQA because it expresses how "salient" a local region of an image is to the HVS, which acts as a receiver of suprathreshold distortions. Some IQA methods have been proposed based on the influence of visual saliency on image quality and have achieved better prediction results. In [19], a saliency detection index in the spatial domain was introduced based on the spectral residual (SR) of an image in the spectral domain, namely, the SR visual saliency index. Based on this index, a spectral residual based similarity (SR-SIM) IQA method was proposed [20]. This method was designed to deal with grayscale images and cannot reflect the real HVS, since the visual detector receives color information. Therefore, a good objective IQA method should take chromatic components into consideration in the feature-extraction procedure. In [21], visual saliency, processed with SDSP (saliency detection by combining simple priors), was integrated with gradient and chromatic features in the visual saliency-based index (VSI) in LMN color space. Hence, VSI yielded a better performance than SR-SIM by considering the chrominance distortion.
Recently, the visual saliency feature has played an important role in IQA methods. In [22,23], SDSP was chosen as the visual saliency extractor for global and double-random window similarity (GDRW) and edge feature-based image segmentation (EFS). However, the efficiency of these SDSP-based methods does not match their accuracy. Shi et al. proposed an IQA method combining visual saliency with color appearance and gradient similarity (VCGS) [9]. In VCGS, visual saliency is computed by applying a log-Gabor filter to two new color appearance indices in CIELAB color space. Although CIELAB is more closely related to the HVS, the RGB-to-CIELAB transform accounts for almost half of the total evaluation time. To achieve better agreement with subjective evaluation scores, a visual saliency feature can be regarded as an indispensable component of an IQA method. Moreover, an IQA method based on gradient, visual saliency, and color information (GSC) has been proposed; in GSC, only the gradient feature is calculated via MDSI-style computation [24].
Based on the above analysis, transforming an RGB image into another color space to extract features is more consistent with the HVS, and MDSI calculation is a useful measurement tool for image feature computing, achieving both high correlation coefficients and high efficiency.
In this article, a reliable FR-IQA method involving similarity calculation is developed without any learning stage. The proposed method connects three feature information processing components, i.e., visual saliency, gradient, and chromatic features, by using an MDSI fusion strategy in LMN color space. The fusion strategy consists of three steps: luminance channels fusion, similarity maps fusion, and features fusion. Experimental comparisons with other outstanding methods show that the proposed method is less complex and offers better quality predictions.

2. Proposed IQA Methods

In this section, an FR-IQA method to evaluate color image quality is introduced. The proposed method is designed for general purposes, which means that it performs consistently across commonly encountered distortions. Three fusion steps are included. The first fusion step fuses the luminance channels of the two images with a fusion weight coefficient for SR and gradient feature extraction. The second fusion step calculates the SR and gradient similarity maps between pairs of the reference, distorted, and fusion images and combines them into SR and gradient [17] fusion similarity maps. In addition, a chrominance fusion similarity map is extracted from the chrominance channels and utilized to represent the color distortion at the pixel level [17]. Finally, these three similarity maps are combined with different weights and pooled based on the Minkowski pooling method [25].

2.1. Luminance Channels Fusion

Based on related research [20], SR and gradient features are both extracted from the luminance channel of an image. For a color image [20], SR and gradient maps alone cannot handle color distortion types well. Hence, to deal with color distortion, chrominance features should be computed separately in an IQA method. Consequently, an RGB color image is transformed into an opponent color space by Equation (1) [26], which is more compatible with HVS intuition.
$$\begin{bmatrix} L \\ M \\ N \end{bmatrix} = \begin{bmatrix} 0.06 & 0.63 & 0.27 \\ 0.30 & 0.04 & -0.35 \\ 0.34 & -0.60 & 0.17 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \quad (1)$$
In LMN color space, L is the luminance channel and M and N are the chrominance channels. Because of the shortcomings of conventional similarity maps, the similarity should not be computed from the two images independently in the usual way. Inspired by [17], in the first fusion step of this research the luminance channels of the reference and the distorted images are fused into an FL map for enhanced feature extraction. The fusion strategy is based on a weight coefficient and is calculated by:
$$FL = \alpha \cdot (R_L + D_L) \quad (2)$$
where RL and DL are the luminance channels of the reference and the distorted images, respectively, α is the fusion weight, and FL is the fusion map. In Figure 1, example images selected from the TID2008 database illustrate the validity of the FL maps. Figure 1a is a reference image R and Figure 1(b1–e1) are four JPEG compression images of increasing distortion level, while Figure 1(b2–e2) show the corresponding FL maps. It can be clearly seen that the quality of the FL map is lower when the distortion level is higher. In the FL maps, some weaker edges in the background region are smoothed, especially in Figure 1(e2). Figure 1(b3–e3) show the SR maps of the images in the second row after normalization, and Figure 1(b4–e4) are the corresponding gradient maps. There are no obvious differences among the SR maps, whereas the gradient maps lose more structural features as the image becomes more distorted. As is well known, the SR map indicates the visual attention of an image, and the gradient map represents edge features well. After luminance channels fusion, the stronger edges in textured regions do not change obviously, while some information in flat regions can change. Therefore, changes in the visual saliency features of the SR map cannot be distinguished easily, and flat regions in the gradient maps may exhibit apparent differences. In the next subsection, the differences in the SR maps are revealed by the similarity maps calculation.
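For illustration, a minimal Python sketch of the color transform of Equation (1) and the luminance fusion of Equation (2) is given below; the released implementation of FFS is in MATLAB, and the default α = 0.52 anticipates the value fixed later in Section 3.2.

```python
import numpy as np

# RGB -> LMN transform of Equation (1); rows give the L, M, N channels.
RGB2LMN = np.array([[0.06,  0.63,  0.27],
                    [0.30,  0.04, -0.35],
                    [0.34, -0.60,  0.17]])

def rgb_to_lmn(img_rgb):
    """img_rgb: H x W x 3 array in RGB order. Returns the (L, M, N) channels."""
    lmn = img_rgb.astype(np.float64) @ RGB2LMN.T
    return lmn[..., 0], lmn[..., 1], lmn[..., 2]

def luminance_fusion(ref_L, dist_L, alpha=0.52):
    """FL map of Equation (2): weighted fusion of the two luminance channels."""
    return alpha * (ref_L + dist_L)
```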

2.2. Similarity Maps Fusion

The second fusion step is similarity maps fusion. In this subsection, SR, gradient, and chrominance similarity fusion maps are computed in a symmetric way for the reference and distorted images. To extract visual saliency features, the SR operator is selected to process the input images. The prominent advantage of this index is its higher computing efficiency. Different from other SR-based IQA methods [20,24], the fusion similarity map of SR will be calculated with the following equations:
$$S_{SR1} = \frac{2\,SR_R \cdot SR_D + K_{SR1}}{SR_R^2 + SR_D^2 + K_{SR1}} \quad (3)$$
$$S_{SR2} = \frac{2\,SR_R \cdot SR_F + K_{SR2}}{SR_R^2 + SR_F^2 + K_{SR2}} \quad (4)$$
$$S_{SR3} = \frac{2\,SR_F \cdot SR_D + K_{SR3}}{SR_F^2 + SR_D^2 + K_{SR3}} \quad (5)$$
$$S_{SR} = S_{SR1} + S_{SR3} - S_{SR2} \quad (6)$$
where parameters KSR1, KSR2, and KSR3 are constants that control numerical stability. These three parameters are set as KSR1 = 2KSR2 = 2KSR3 in the experimental calculation. SRR, SRD, and SRF are the SR maps of the reference image, the distorted image, and the FL map, respectively. Figure 2c–f are the SR similarity maps between R and D, R and the FL map (RF), D and the FL map (DF), and DF–RF, respectively. Figure 2g is the fused similarity map of the SR visual saliency feature. After the DF–RF computation, the main differences, located in the flat regions, are enlarged. It can be seen that the fused SR similarity map contains more information than the non-fused ones, especially in the flat regions. Based on this, the fused SR similarity map can serve as a useful feature operator in designing the proposed method.
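A minimal Python sketch of the SR saliency operator [19] and the fusion of Equations (3)–(6) is given below; the averaging window size, the smoothing scale, and the omission of the usual image downsampling are simplifications of this sketch, not values taken from this article.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual(lum, avg_size=3, sigma=2.5):
    """Spectral residual saliency map of a luminance channel (after Hou & Zhang [19])."""
    f = np.fft.fft2(lum)
    log_amp = np.log(np.abs(f) + 1e-12)
    phase = np.angle(f)
    residual = log_amp - uniform_filter(log_amp, size=avg_size)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = gaussian_filter(sal, sigma=sigma)
    return sal / (sal.max() + 1e-12)   # the article uses normalized SR maps

def similarity(a, b, k):
    """SSIM-style similarity map with stability constant k."""
    return (2.0 * a * b + k) / (a ** 2 + b ** 2 + k)

def sr_fusion_similarity(sr_r, sr_d, sr_f, k_sr1=0.25):
    """Fused SR similarity of Equations (3)-(6); K_SR1 = 2*K_SR2 = 2*K_SR3."""
    k_sr2 = k_sr3 = k_sr1 / 2.0
    s1 = similarity(sr_r, sr_d, k_sr1)   # reference vs. distorted
    s2 = similarity(sr_r, sr_f, k_sr2)   # reference vs. FL map
    s3 = similarity(sr_f, sr_d, k_sr3)   # FL map vs. distorted
    return s1 + s3 - s2
```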
To compute the image gradient, several operators can be selected, such as the Prewitt operator [27], the Sobel operator [27], the Roberts operator [28], and the Scharr operator [28]. The horizontal gradient of an image X is calculated as Gx = gx ∗ X (see Equation (7)), and the vertical gradient as Gy = gy ∗ X (see Equation (8)), where gx and gy are the horizontal and vertical gradient operators and ∗ denotes convolution. The gradient magnitude of the image is then defined as $G = \sqrt{G_x^2 + G_y^2}$.
$$G_x = \frac{1}{3}\begin{bmatrix} 1 & 0 & -1 \\ 1 & 0 & -1 \\ 1 & 0 & -1 \end{bmatrix} * X \quad (7)$$
$$G_y = \frac{1}{3}\begin{bmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ -1 & -1 & -1 \end{bmatrix} * X \quad (8)$$
In this article, the Prewitt operator is applied to the L channel in LMN color space to obtain the gradient features of the reference image, the distorted image, and the FL map, denoted GR, GD, and GF, respectively. The gradient fusion similarity map (SG) is then computed by the following SSIM-based equations and the simple fusion strategy:
$$S_{G1} = \frac{2\,G_R \cdot G_D + K_{G1}}{G_R^2 + G_D^2 + K_{G1}} \quad (9)$$
$$S_{G2} = \frac{2\,G_R \cdot G_F + K_{G2}}{G_R^2 + G_F^2 + K_{G2}} \quad (10)$$
$$S_{G3} = \frac{2\,G_F \cdot G_D + K_{G3}}{G_F^2 + G_D^2 + K_{G3}} \quad (11)$$
$$S_G = S_{G1} + S_{G3} - S_{G2} \quad (12)$$
where parameters KG1, KG2, and KG3 are constants that control numerical stability, with KG2 and KG3 set to the same value. Gradient similarity has been widely used in the related literature [3,8,9,16,17,20,21,22,23,24]; it was extensively investigated in [29], and gradient fusion similarity was proposed in [17]. Figure 2h–k are the gradient similarity maps between R and D, R and the FL map (RF), D and the FL map (DF), and DF–RF, respectively, and Figure 2l is the fused gradient similarity map. After the DF–RF calculation, the main differences, located at the weak edge regions, are enlarged. The fused gradient similarity map contains more information than the non-fused ones, especially in the weaker edge regions. Overall, the gradient similarity fusion map is a useful evaluator of structural distortions.
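For illustration, a minimal Python sketch of the Prewitt gradient magnitude (Equations (7) and (8)) and the fused gradient similarity (Equations (9)–(12)) follows; the boundary handling and the default KG values (which anticipate the settings fixed in Section 3.2) are assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import convolve

# Prewitt operators of Equations (7) and (8), scaled by 1/3.
PREWITT_X = np.array([[1, 0, -1],
                      [1, 0, -1],
                      [1, 0, -1]], dtype=np.float64) / 3.0
PREWITT_Y = PREWITT_X.T

def gradient_magnitude(lum):
    """Gradient magnitude G = sqrt(Gx^2 + Gy^2) of a luminance channel."""
    gx = convolve(lum, PREWITT_X, mode='nearest')
    gy = convolve(lum, PREWITT_Y, mode='nearest')
    return np.sqrt(gx ** 2 + gy ** 2)

def gradient_fusion_similarity(g_r, g_d, g_f, k_g1=160.0, k_g2=90.0):
    """Fused gradient similarity of Equations (9)-(12); K_G2 = K_G3."""
    sim = lambda a, b, k: (2.0 * a * b + k) / (a ** 2 + b ** 2 + k)
    s1 = sim(g_r, g_d, k_g1)   # reference vs. distorted
    s2 = sim(g_r, g_f, k_g2)   # reference vs. FL map
    s3 = sim(g_f, g_d, k_g2)   # FL map vs. distorted
    return s1 + s3 - s2
```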
The last fusion similarity map is that of the chromatic components in LMN color space, and it can be simply defined as [17]:
$$S_C = \frac{2\,M_R \cdot M_D + 2\,N_R \cdot N_D + K_C}{M_R^2 + M_D^2 + N_R^2 + N_D^2 + K_C} \quad (13)$$
where the parameter KC is a constant to control numerical stability. Figure 2m is the chrominance fusion similarity map of the reference and the distorted images.

2.3. Features Fusion

The last fusion step combines the above three similarity fusion maps. The SR, gradient, and chrominance similarity fusion maps are combined by the following summation scheme:
$$S = 0.4 \cdot S_{SR} + 0.4 \cdot S_G + 0.2 \cdot S_C \quad (14)$$
In Equation (14), the three components of the features fusion map are given different weights in the computation of the S map. These values are determined by the visual mechanism, since the HVS is generally more sensitive to achromatic features than to chromatic features [30]. The weights should sum to 1, and the weight of the achromatic features can be set twice as high as that of the chromatic features. Since the SR and gradient features are both achromatic features extracted from the luminance channel, their weights are set equal. The weights of the three parts in Equation (14) follow from these considerations. Figure 2n is the features fusion map. After the three similarity fusion maps are combined, the features fusion map represents the difference between R and D well.
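A minimal Python sketch of the chrominance similarity of Equation (13) and the weighted fusion of Equation (14) follows; the default KC = 270 anticipates the value fixed in Section 3.2.

```python
import numpy as np

def chrominance_similarity(m_r, n_r, m_d, n_d, k_c=270.0):
    """Chrominance similarity map of Equation (13) on the M and N channels."""
    num = 2.0 * m_r * m_d + 2.0 * n_r * n_d + k_c
    den = m_r ** 2 + m_d ** 2 + n_r ** 2 + n_d ** 2 + k_c
    return num / den

def features_fusion(s_sr, s_g, s_c):
    """Weighted features fusion map of Equation (14)."""
    return 0.4 * s_sr + 0.4 * s_g + 0.2 * s_c
```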

2.4. Pooling Strategy

After the three fusion steps described above, the next step of the proposed method is to choose the pooling strategy. Minkowski pooling has proved to be an efficient method for IQA score calculation [17]. With the SR, gradient, and chrominance fused similarity maps combined, a novel IQA method is defined and named the features fusion similarity index (FFS). It is described by the following formula:
$$FFS = \left[ \frac{1}{n} \sum_{i=1}^{n} \left| S_i^{0.5} - \frac{1}{n} \sum_{i=1}^{n} S_i^{0.5} \right| \right]^{0.15} \quad (15)$$
where n is the total number of pixels in the S map and Si is the pixel value of the S map. The open-source MATLAB code of FFS is publicly available online at https://github.com/AlAlien/FFS (accessed on 20 March 2022). The computational framework of FFS is illustrated in Figure 3. Since FFS consists of different features from other IQA methods, the parameters of the pooling method are set to different values, as shown in Equation (15). Figure 2o is the square root of the S map, which contains more distinguishable information than the S map itself. To demonstrate the effectiveness of FFS, the IQA scores of FFS were computed for Figure 1(b1–e1). The MOSs for Figure 1(b1–e1) are 6.3438, 5.2500, 3.8065, and 2.2500, respectively, and the corresponding FFS scores are 0.3470, 0.4065, 0.4876, and 0.5410. The FFS scores are thus consistent with the distortion levels of the images.
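A minimal Python sketch of the deviation pooling of Equation (15) follows; as with the examples above, it is an illustration rather than the released MATLAB implementation.

```python
import numpy as np

def ffs_score(s_map):
    """Deviation pooling of Equation (15): mean absolute deviation of sqrt(S), raised to 0.15.
    Larger FFS values correspond to stronger distortion, as in Figure 1(b1-e1)."""
    root = np.sqrt(s_map).ravel()
    return np.mean(np.abs(root - root.mean())) ** 0.15
```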
In this paper, to apply the proposed method to all databases, KSR1–KSR3, KG1–KG3, and KC should be fixed, and α also needs to be defined for all databases. Based on previous related research, trial and error is the most common way of solving such parameter optimization problems. In the following section, these parameters are defined using a trial-and-error method.

3. Experiments and Performance Analysis

3.1. Databases and Assessment Criteria

In this article, four large-scale, publicly available, single-distortion databases are selected for performance optimization and comparison, i.e., TID2013 [31], TID2008 [32], CSIQ [33], and LIVE [34]. Some representative information for these databases is provided in Table 1. These databases are designed with some ordinarily encountered distortions in real-world applications of IQA. They are annotated with subjective scores, i.e., MOS or DMOS, as suitable benchmarks between the proposed method and others.
In order to test the IQA performance, comparisons are made between the computed scores and the ratings given by humans. Four widely used criteria for the performance comparison of IQA methods are employed: the Spearman rank-order correlation coefficient (SROCC), the Pearson linear correlation coefficient (PLCC), the Kendall rank-order correlation coefficient (KROCC), and the root mean squared error (RMSE) [2,35]. SROCC and KROCC are calculated from the rank of the scores, while PLCC takes the relative distance between scores into consideration. PLCC indicates the correlation between subjective and objective evaluations after logistic regression, whereas SROCC and KROCC measure the consistency between objective and subjective evaluation values [24]. The closer these three criteria are to 1, the higher the prediction performance of an objective method. For RMSE, a smaller value represents better performance.
Before computing the PLCC and RMSE, a logistic regression should be utilized to process subjective judgments by means of the following equation:
$$p(x) = \beta_1 \left[ \frac{1}{2} - \frac{1}{1 + \exp\left(\beta_2 (x - \beta_3)\right)} \right] + \beta_4 x + \beta_5 \quad (16)$$
where β1, …, β5 are the parameters to be fitted, x represents the scores computed by the IQA method, and p(x) is the rating after logistic regression [34].
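For illustration, a minimal Python sketch of the five-parameter logistic mapping of Equation (16) and the four criteria follows; the initial parameter guesses passed to the optimizer are assumptions of this sketch, not values prescribed by the cited protocol.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr, kendalltau

def logistic5(x, b1, b2, b3, b4, b5):
    """Five-parameter logistic mapping of Equation (16)."""
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (x - b3)))) + b4 * x + b5

def evaluate(obj_scores, mos):
    """Return PLCC, SROCC, KROCC, and RMSE between objective scores and MOS."""
    obj_scores, mos = np.asarray(obj_scores, float), np.asarray(mos, float)
    p0 = [np.max(mos), 1.0, np.mean(obj_scores), 1.0, 0.0]  # illustrative initial guess
    params, _ = curve_fit(logistic5, obj_scores, mos, p0=p0, maxfev=20000)
    fitted = logistic5(obj_scores, *params)
    plcc = pearsonr(fitted, mos)[0]
    srocc = spearmanr(obj_scores, mos)[0]
    krocc = kendalltau(obj_scores, mos)[0]
    rmse = np.sqrt(np.mean((fitted - mos) ** 2))
    return plcc, srocc, krocc, rmse
```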

3.2. Parameter Setting for FFS

In this work, there are five main parameters to be determined, namely α, KSR1, KC, KG1, and KG2. In the optimization procedure, when one parameter is tested, the others are held fixed. PLCC is selected as the main criterion for defining the parameters because the four criteria behave similarly in the parameter optimization experiments.
Parameter α serves as the fusion weight between the luminance channels of the reference and distorted images. Figure 4 shows how PLCC changes with α on the four single-distortion databases. The optimal intervals for α on TID2013, TID2008, CSIQ, and LIVE are [0.5, 0.53], [0.51, 0.55], [0.47, 0.52], and [0.5, 0.53], respectively. The best fusion weight differs slightly from database to database, but the optimal α values lie in similar intervals, which is consistent with visual perception under certain fusion weights in IQA. In this research, α is fixed at 0.52.
Parameters KSR1 and KC are the numerical stability controllers for SSR and SC. Figure 5a shows the PLCC and SROCC curves against KSR1 for the TID2013 database. For TID2013, the performance is stable and high when KSR1 lies in the interval [0.25, 0.75]; in this work, KSR1 is set to 0.25. Figure 5b shows the SROCC and PLCC curves against KC for the TID2013 database. When KC lies in [260, 280], the performance is stable and high; in this research, KC is fixed at 270.
KG1 and KG2 are the last two parameters, set as the numerical stability controllers for SG. Their influence on the performance of FFS is studied in Figure 6, which illustrates the results as a contour map. The optimal KG1 and KG2 for TID2013 lie in the intervals [140, 180] × [70, 110]. In this article, KG1 and KG2 are set to 160 and 90, respectively.
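The trial-and-error optimization described above can be organized as an exhaustive sweep. The following minimal Python sketch illustrates such a sweep over KG1 and KG2; the helper functions compute_ffs_scores and evaluate, as well as the search ranges and step sizes, are hypothetical and not part of the published procedure.

```python
import numpy as np

def grid_search_kg(ref_dist_pairs, mos, compute_ffs_scores, evaluate):
    """Exhaustive sweep over (K_G1, K_G2); returns the pair maximizing PLCC.
    compute_ffs_scores and evaluate are assumed helpers (not from the paper's code)."""
    best = (None, None, -1.0)
    for k_g1 in np.arange(100, 221, 20):
        for k_g2 in np.arange(50, 131, 20):
            scores = compute_ffs_scores(ref_dist_pairs, k_g1=k_g1, k_g2=k_g2)
            plcc = evaluate(scores, mos)[0]
            if plcc > best[2]:
                best = (k_g1, k_g2, plcc)
    return best
```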

3.3. Overall Performance Comparison

Overall performance comparisons are conducted to test the ability of an IQA method across different databases. In this subsection, the performance of the proposed method is compared with twelve typical methods: SSIM [12], FSIMc [16] (the improved FSIM method with color space transformation), VSI [21], IFS [13], LLM [15], MDSI [17], GDRW [22], EFS [23], and the more recent SSVD [18], VCGS [9], SPSIM (YCbCr_MDSI) [3], and GSC [24], published in 2019, 2020, 2021, and 2022, respectively. To highlight the best performance, the highest three values for each criterion are marked in boldface in Table 2, Table 3, Table 4 and Table 5. In addition, the weighted average (W. A.) and direct average (D. A.) values of the SROCC, PLCC, and KROCC results over these databases are included to assess overall performance, following Wang and Li [36]. The weight of each database is given by the number of distorted images it contains.
As shown in Table 2, Table 3, Table 4 and Table 5, the proposed method performs consistently across all the selected databases. Specifically, the proposed method always ranks in the top three for the TID2008 and LIVE databases, and for the TID2013 and CSIQ databases the gap between the proposed method and the top three results is very small. Meanwhile, no method performs best for all databases, based on the distribution of boldfaced figures in Table 2, Table 3, Table 4 and Table 5. From the individual criteria, the most effective methods on SROCC are the proposed method, SPSIM(YCbCr_MDSI), and MDSI; for PLCC, the proposed method, MDSI, and SPSIM(YCbCr_MDSI) give precise results; on KROCC, the proposed method, MDSI, and SPSIM(YCbCr_MDSI) evaluate quality most consistently with human opinion scores; as for RMSE, the proposed method, GSC, GDRW, and MDSI perform better than the others. Furthermore, the proposed method also has the best performance for the weighted and direct average values. From Table 2, the SROCC of the proposed method is at least 0.8926 and at most 0.9768 across all databases. From Table 3, the PLCC of the proposed method is at least 0.9116 and at most 0.9774 for the four databases. Moreover, the proposed method yields the best rank for the weighted average values of PLCC and KROCC and the direct average values of SROCC, PLCC, and KROCC. In total, the proposed method achieves the highest count of top-three rankings (18 times) among the chosen IQA methods, followed by MDSI (13 times) and SPSIM(YCbCr_MDSI) (13 times).
From Table 2, Table 3, Table 4 and Table 5, it can be seen that the proposed method yields better performance than the learning-based methods, i.e., IFS and LLM. Meanwhile, the proposed method performs better than the methods without color space transformation, i.e., SSIM and SSVD. Among the selected methods with color space transformation, some contain normal gradient maps, i.e., FSIMc, VSI, GDRW, EFS, and VCGS; compared with these, the proposed method with a fusion gradient map has remarkable advantages for all databases. As for the methods consisting of fusion gradient maps, i.e., MDSI, SPSIM(YCbCr_MDSI), and GSC, the results show that IQA performance is improved by fusing the SR, gradient, and chrominance features in three fusion steps.

3.4. Performance Comparison among Different Distortion Types and Statistical Significance Comparisons

The performance comparison among different distortion types should be carried out to check an IQA method's ability to predict image quality. Table 6 summarizes the comparison results for different distortion types. The tests on TID2008 are not displayed, since all distortion types of TID2008 are contained in the TID2013 database. SROCC was chosen as the performance measure because it behaves similarly to the other criteria, i.e., PLCC, RMSE, and KROCC. The three databases together contain 35 distortion types to be compared. Due to the lack of open-source code, the results of SPSIM(YCbCr_MDSI) are not included in Table 6, and the results of GSC are taken from the published paper. The top three SROCC values for each distortion type are highlighted in bold. From Table 6, the proposed method achieves a top SROCC rank most often (17 times), followed by GSC (15 times), EFS (14 times), VCGS (11 times), GDRW (10 times), and MDSI (9 times); their performances are much better than those of the other IQA methods. Meanwhile, no method performs best for all distortion types. The proposed method cannot deal well with some distortion types, e.g., MN, NEPN, Block, or CTC in TID2013. Compared with MDSI, the proposed method performs much better across the different distortion types. In a comparison of the proposed method and GSC, GSC performs much better for MN, NEPN, Block, and CTC in TID2013, while the proposed method performs much better for AGN, QN, CCS, and LCNI in TID2013. In summary, whether the proposed IQA method outperforms the others depends on the distortion type.
Figure 7 shows scatter plots for the TID2013 database obtained with the open-source code referenced in the respective articles. Due to the lack of open-source code, the scatter plots of SPSIM(YCbCr_MDSI) and GSC are not included in Figure 7. To compare the visual performance between the proposed method and the comparison methods, the scatter plot for the proposed method is shown in Figure 8. It can be concluded that the proposed method agrees with the subjective ratings more consistently than most IQA methods, including MDSI.
Moreover, statistical significance comparisons were performed, and the results are displayed in Table 7. These values were computed by a series of hypothesis tests on the residuals of all methods after logistic regression [2,35]. In particular, a left-tailed F-test at the 0.05 significance level was used for pairwise tests between the proposed method and the other methods. A result of H = 1 (green) means that the first method (the proposed method) yields a better IQA performance than the second method (the method in the first row of Table 7) with a confidence larger than 95%, while H = 0 (orange) indicates that the two competing methods have similar IQA performance. As shown in Table 7, the total number of statistical tests between two methods is 40, and the proposed method statistically surpasses the others in 31 of them. Therefore, the proposed method yields significant improvement in 77.5% of the cases and shows a very promising statistical performance compared with most of the other methods.
In this subsection, with the fusion gradient map, the proposed method shows an obvious improvement for the different distortion types and statistical significance tests, compared with the methods containing the normal gradient map, including FSIMc, GSM, VSI, GDRW, EFS, and VCGS. Compared with MDSI, it can be concluded that IQA performance is improved by the proposed method among different distortion type comparisons, and the proposed method has a similar performance among the statistical significance tests. Compared with GSC, the performance with different distortion types has been obviously improved by the fusion strategy utilized in the proposed method. Therefore, it can be concluded that the proposed method has better predictive accuracy than the other IQA methods with respect to the widely used databases.

3.5. Computational Cost

Computational cost, which reflects computational efficiency, is another criterion for assessing IQA methods. All the experiments in this research, including the running time comparison, were conducted on a PC with a 2.5 GHz Intel Core i5 CPU and 8 GB RAM running MATLAB R2013b. The average running time of each method on the TID2013 database, with a resolution of 512 × 384, is listed in Table 8 (the running time of FFS is in bold). It can be observed that FFS is less computationally complex than most IQA methods. The running times of MDSI and SSIM are lower than that of the proposed method, but the proposed method has higher predictive accuracy. In the experiments, the time cost of the proposed method is about 0.0657 s. Hence, FFS can be used in real-time automated system applications requiring high computational efficiency. To deal with the IQA problem in real settings, computational cost and prediction accuracy should be given equal importance.

4. Conclusions

In this research, a novel FR-IQA method with good performance was proposed, namely, the features fusion similarity (FFS) method. The method consists of three fusion steps, i.e., luminance channels fusion, similarity maps fusion, and features fusion. First, the luminance channels of the two images are fused with a fusion weight for enhanced SR and gradient feature extraction. Second, the SR, gradient, and chrominance similarity fusion maps are calculated in a symmetric way from the reference image, the distorted image, and the fusion map. Last, these three feature similarity maps are fused with different weights based on the HVS mechanism, and a deviation pooling strategy is applied to the features fusion map to obtain the image quality score. After the design of the IQA method, the main parameters were defined by optimization tests. Twelve state-of-the-art or newly published IQA methods were selected as competitors to the proposed method on four popular databases. The experimental results show that the PLCC of FFS reaches at least 0.9116 and at most 0.9774 for the four databases, and the time cost of the proposed method is about 0.0657 s. These comparative results illustrate that FFS yields statistically better predictive accuracy than the other methods with higher computational efficiency. In the future, all IQA methods, including the proposed one, need to be improved to yield better performance for the IQA problem in real settings.

Author Contributions

Conceptualization, C.S. and Y.L.; methodology, C.S.; software, C.S.; validation, C.S. and Y.L.; investigation, Y.L.; resources, C.S.; data curation, C.S.; writing—original draft preparation, C.S.; writing—review and editing, Y.L.; visualization, C.S.; supervision, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China, grant number 2017YFB0403700, and the Research Start-up Foundation for Introduction of Talents of AHPU, grant number 2021YQQ027.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhan, Y.B.; Zhang, R.; Wu, Q. A Structural Variation Classification Model for Image Quality Assessment. IEEE Trans. Multimed. 2017, 19, 1837–1847. [Google Scholar] [CrossRef]
  2. Athar, S.; Wang, Z. A Comprehensive Performance Evaluation of Image Quality Assessment Algorithms. IEEE Access 2019, 7, 140030–140070. [Google Scholar] [CrossRef]
  3. Frackiewicz, M.; Szolc, G.; Palus, H. An Improved SPSIM Index for Image Quality Assessment. Symmetry 2021, 13, 518. [Google Scholar] [CrossRef]
  4. Saha, A.; Wu, Q.M.J. Full-reference image quality assessment by combining global and local distortion measures. Signal Process. 2016, 128, 186–197. [Google Scholar] [CrossRef] [Green Version]
  5. Saha, A.; Wu, Q.M.J. Perceptual image quality assessment using phase deviation sensitive energy features. Signal Process. 2013, 93, 3182–3191. [Google Scholar] [CrossRef]
  6. Zhou, B.Z.; Shao, F.; Meng, X.C.; Fu, R.D.; Ho, Y.S. No-Reference Quality Assessment for Pansharpened Images via Opinion-Unaware Learning. IEEE Access 2019, 7, 40388–40401. [Google Scholar] [CrossRef]
  7. Lin, W.S.; Kuo, C.C.J. Perceptual visual quality metrics: A survey. J. Vis. Commun. Image Represent. 2011, 22, 297–312. [Google Scholar] [CrossRef]
  8. Shi, C.-Y.; Lin, Y.-D. Objective image quality assessment based on image color appearance and gradient features. Acta Phys. Sin. 2020, 69, 228701. [Google Scholar] [CrossRef]
  9. Shi, C.Y.; Lin, Y.D. Full Reference Image Quality Assessment Based on Visual Salience With Color Appearance and Gradient Similarity. IEEE Access 2020, 8, 97310–97320. [Google Scholar] [CrossRef]
  10. Reisenhofer, R.; Bosse, S.; Kutyniok, G.; Wiegand, T. A Haar wavelet-based perceptual similarity index for image quality assessment. Signal Process-Image 2018, 61, 33–43. [Google Scholar] [CrossRef] [Green Version]
  11. Wang, R.F.; Yang, H.; Pan, Z.K.; Huang, B.X.; Hou, G.J. Screen Content Image Quality Assessment With Edge Features in Gradient Domain. IEEE Access 2019, 7, 5285–5295. [Google Scholar] [CrossRef]
  12. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Chang, H.W.; Zhang, Q.W.; Wu, Q.G.; Gan, Y. Perceptual image quality assessment by independent feature detector. Neurocomputing 2015, 151, 1142–1152. [Google Scholar] [CrossRef]
  14. Hyvarinen, A. Fast and robust fixed-point algorithms for independent component analysis. IEEE Trans. Neural Netw. 1999, 10, 626–634. [Google Scholar] [CrossRef] [Green Version]
  15. Wang, H.; Fu, J.; Lin, W.; Hu, S.; Kuo, C.-C.J.; Zuo, L. Image quality assessment based on local linear information and distortion-specific compensation. IEEE Trans. Image Process. 2016, 26, 915–926. [Google Scholar] [CrossRef]
  16. Zhang, L.; Zhang, L.; Mou, X.Q.; Zhang, D. FSIM: A Feature Similarity Index for Image Quality Assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386. [Google Scholar] [CrossRef] [Green Version]
  17. Nafchi, H.Z.; Shahkolaei, A.; Hedjam, R.; Cheriet, M. Mean Deviation Similarity Index: Efficient and Reliable Full-Reference Image Quality Evaluator. IEEE Access 2016, 4, 5579–5590. [Google Scholar] [CrossRef]
  18. Mansouri, A.; Mahmoudi-Aznaveh, A. SSVD: Structural SVD-based image quality assessment. Signal Process-Image 2019, 74, 54–63. [Google Scholar] [CrossRef]
  19. Hou, X.; Zhang, L. Saliency Detection: A Spectral Residual Approach. In Proceedings of the 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2007), Minneapolis, MN, USA, 17–22 June 2007; pp. 2280–2287. [Google Scholar] [CrossRef]
  20. Zhang, L.; Li, H.Y. SR-SIM: A fast and high performance IQA index based on spectral residual. In Proceedings of the 2012 IEEE International Conference on Image Processing (ICIP), Orlando, FL, USA, 30 September–3 October 2012; pp. 1473–1476. [Google Scholar] [CrossRef]
  21. Zhang, L.; Shen, Y.; Li, H.Y. VSI: A Visual Saliency-Induced Index for Perceptual Image Quality Assessment. IEEE Trans. Image Process. 2014, 23, 4270–4281. [Google Scholar] [CrossRef] [Green Version]
  22. Shi, Z.F.; Chen, K.X.; Pang, K.; Zhang, J.P.; Cao, Q.J. A perceptual image quality index based on global and double-random window similarity. Digit. Signal Process. 2017, 60, 277–286. [Google Scholar] [CrossRef]
  23. Shi, Z.; Zhang, J.; Cao, Q.; Pang, K.; Luo, T. Full-reference image quality assessment based on image segmentation with edge feature. Signal Process. 2018, 145, 99–105. [Google Scholar] [CrossRef]
  24. Chang, H.-W.; Bi, X.-D.; Du, C.-Y.; Mao, C.-W.; Wang, M.-H. Image Quality Evaluation Based on Gradient, Visual Saliency, and Color Information. Int. J. Digit. Multimed. Broadcast. 2022, 2022, 7540810. [Google Scholar] [CrossRef]
  25. Wang, Z.; Shang, X.L. Spatial pooling strategies for perceptual image quality assessment. In Proceedings of the 2006 IEEE International Conference on Image Processing, ICIP 2006, Atlanta, GA, USA, 8–11 October 2006; pp. 2945–2948. [Google Scholar] [CrossRef] [Green Version]
  26. Geusebroek, J.M.; van den Boomgaard, R.; Smeulders, A.W.M.; Geerts, H. Color invariance. IEEE Trans. Pattern Anal. 2001, 23, 1338–1350. [Google Scholar] [CrossRef] [Green Version]
  27. Jain, R.C.; Kasturi, R.; Schunck, B.G. Machine Vision; McGraw-Hill: New York, NY, USA, 1995. [Google Scholar]
  28. Sonka, M.; Hlavac, V.; Boyle, R. Image Processing, Analysis and Machine Vision, 3rd ed.; Cengage Learning: Stanford, CT, USA, 2008. [Google Scholar]
  29. Xue, W.F.; Zhang, L.; Mou, X.Q.; Bovik, A.C. Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index. IEEE Trans. Image Process. 2014, 23, 684–695. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  30. Lee, D.; Plataniotis, K.N. Towards a Full-Reference Quality Assessment for Color Images Using Directional Statistics. IEEE Trans. Image Process. 2015, 24, 3950–3965. [Google Scholar] [CrossRef]
  31. Ponomarenko, N.; Jin, L.; Ieremeiev, O.; Lukin, V.; Egiazarian, K.; Astola, J.; Vozel, B.; Chehdi, K.; Carli, M.; Battisti, F.; et al. Image database TID2013: Peculiarities, results and perspectives. Signal Process-Image 2015, 30, 57–77. [Google Scholar] [CrossRef] [Green Version]
  32. Ponomarenko, N.; Lukin, V.; Zelensky, A.; Egiazarian, K.; Carli, M.; Battisti, F. TID2008—A Database for Evaluation of Full-Reference Visual Quality Assessment Metrics. Adv. Mod. Radioelectron. 2009, 10, 30–45. [Google Scholar]
  33. Larson, E.C.; Chandler, D.M. Categorical Image Quality (CSIQ) Database. Available online: http://vision.eng.shizuoka.ac.jp/mod/page/view.php?id=23 (accessed on 19 January 2009).
  34. Sheikh, H.R.; Sabir, M.F.; Bovik, A.C. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Trans. Image Process. 2006, 15, 3440–3451. [Google Scholar] [CrossRef]
  35. Zhai, G.T.; Min, X.K. Perceptual image quality assessment: A survey. Sci. China-Inf. Sci. 2020, 63, 52. [Google Scholar] [CrossRef]
  36. Wang, Z.; Li, Q. Information Content Weighting for Perceptual Image Quality Assessment. IEEE Trans. Image Process. 2011, 20, 1185–1198. [Google Scholar] [CrossRef]
Figure 1. Typical images extracted from TID2008. (a) Reference image. (b1–e1) Distorted images of JPEG compression with four levels of distortion. (b2–e2) The FL maps of these images. (b3–e3) The SR maps of the FL maps. (b4–e4) The gradient maps of the FL maps.
Figure 2. Typical images extracted from TID2008 after similarity maps fusion and features fusion. (a,b) are the reference and distorted images, respectively. (c–f) are the SR similarity maps between R and D, R and the FL map (RF), D and the FL map (DF), and DF–RF, respectively. (h–k) are the gradient similarity maps between R and D, R and the FL map (RF), D and the FL map (DF), and DF–RF, respectively. (g,l,m) are the fused SR similarity map, the fused gradient similarity map, and the chrominance similarity map, respectively. (n,o) are the features fusion map and the square root of the features fusion map, respectively.
Figure 3. Framework of the proposed IQA method, FFS.
Figure 4. Performances with different α values for four databases.
Figure 5. Performance of FFS in terms of SROCC and PLCC against (a) KSR1 and (b) KC for the TID2013 database, respectively.
Figure 6. Performance of FFS in terms of PLCC against KG1 and KG2 for the TID2013 database.
Figure 7. Scatter plots of subjective MOSs against scores calculated on the basis of the compared methods' predictions for the TID2013 database.
Figure 8. Scatter plot of subjective MOSs against scores calculated on the basis of the proposed method's predictions for the TID2013 database.
Table 1. Benchmark test databases for IQA.
Database | Source Images | Distorted Images | Distortion Types | Observers
TID2013 | 25 | 3000 | 24 | 971
TID2008 | 25 | 1700 | 17 | 838
CSIQ | 30 | 866 | 6 | 35
LIVE | 29 | 779 | 5 | 161
Table 2. Comparison of SROCC values for selected IQA methods.
Method | TID2013 | TID2008 | CSIQ | LIVE | W. A. | D. A.
SSIM | 0.7417 | 0.7749 | 0.8756 | 0.9601 | 0.8008 | 0.8381
FSIMc | 0.8510 | 0.8840 | 0.9310 | 0.9807 | 0.8896 | 0.9117
VSI | 0.8965 | 0.8979 | 0.9423 | 0.9708 | 0.9141 | 0.9269
IFS | 0.8697 | 0.8903 | 0.9581 | 0.9599 | 0.9003 | 0.9195
LLM | 0.9037 | 0.9077 | 0.9050 | 0.9608 | 0.9135 | 0.9193
MDSI | 0.8899 | 0.9208 | 0.9569 | 0.9667 | 0.9183 | 0.9336
GDRW | 0.8803 | 0.8971 | 0.9590 | 0.9752 | 0.9093 | 0.9279
EFS | 0.8948 | 0.8925 | 0.9371 | 0.9550 | 0.9088 | 0.9199
SSVD | 0.8112 | 0.8846 | 0.8975 | 0.9740 | 0.8661 | 0.8918
VCGS | 0.8926 | 0.8975 | 0.9443 | 0.9768 | 0.9133 | 0.9278
SPSIM(YCbCr_MDSI) | 0.9067 | 0.9150 | 0.9434 | 0.9625 | 0.9221 | 0.9319
GSC | 0.8657 | 0.8906 | 0.9598 | 0.9589 | 0.8986 | 0.9188
Proposed | 0.8926 | 0.9166 | 0.9550 | 0.9768 | 0.9197 | 0.9353
Table 3. Comparison of PLCC values for selected IQA methods.
Method | TID2013 | TID2008 | CSIQ | LIVE | W. A. | D. A.
SSIM | 0.7895 | 0.7732 | 0.8613 | 0.9508 | 0.8190 | 0.8437
FSIMc | 0.8769 | 0.8762 | 0.9192 | 0.9729 | 0.8967 | 0.9113
VSI | 0.8999 | 0.8762 | 0.9279 | 0.9659 | 0.9073 | 0.9175
IFS | 0.8791 | 0.8810 | 0.9576 | 0.9586 | 0.9019 | 0.9191
LLM | 0.9068 | 0.8971 | 0.9000 | 0.9578 | 0.9110 | 0.9154
MDSI | 0.9080 | 0.9160 | 0.9530 | 0.9659 | 0.9247 | 0.9358
GDRW | 0.8913 | 0.8821 | 0.9541 | 0.9752 | 0.9098 | 0.9257
EFS | 0.9067 | 0.8810 | 0.9287 | 0.9506 | 0.9095 | 0.9168
SSVD | 0.8198 | 0.8940 | 0.8878 | 0.9687 | 0.8704 | 0.8926
VCGS | 0.9000 | 0.8776 | 0.9301 | 0.9676 | 0.9083 | 0.9188
SPSIM(YCbCr_MDSI) | 0.9173 | 0.9051 | 0.9334 | 0.9583 | 0.9224 | 0.9285
GSC | 0.8766 | 0.8801 | 0.9601 | 0.9577 | 0.9007 | 0.9186
Proposed | 0.9116 | 0.9168 | 0.9530 | 0.9774 | 0.9283 | 0.9397
Table 4. Comparison of KROCC values for selected IQA methods.
Method | TID2013 | TID2008 | CSIQ | LIVE | W. A. | D. A.
SSIM | 0.5588 | 0.5768 | 0.6907 | 0.8314 | 0.6218 | 0.6644
FSIMc | 0.6665 | 0.6991 | 0.7690 | 0.8881 | 0.7217 | 0.7557
VSI | 0.7183 | 0.7123 | 0.7857 | 0.8517 | 0.7457 | 0.7670
IFS | 0.6785 | 0.7009 | 0.8158 | 0.8254 | 0.7245 | 0.7552
LLM | 0.7209 | 0.7368 | 0.7238 | 0.8230 | 0.7407 | 0.7511
MDSI | 0.7123 | 0.7515 | 0.8130 | 0.8395 | 0.7549 | 0.7791
GDRW | 0.6978 | 0.7125 | 0.8169 | 0.8660 | 0.7426 | 0.7733
EFS | 0.7200 | 0.7091 | 0.7789 | 0.8109 | 0.7386 | 0.7547
SSVD | 0.6467 | 0.7105 | 0.7255 | 0.8601 | 0.7057 | 0.7357
VCGS | 0.7166 | 0.7171 | 0.7906 | 0.8752 | 0.7503 | 0.7749
SPSIM(YCbCr_MDSI) | 0.7306 | 0.7393 | 0.7877 | 0.8307 | 0.7554 | 0.7721
GSC | 0.6764 | 0.7017 | 0.8190 | 0.8212 | 0.7235 | 0.7546
Proposed | 0.7159 | 0.7443 | 0.8111 | 0.8700 | 0.7590 | 0.7853
Table 5. Comparison of RMSE values for selected IQA methods.
Method | TID2013 | TID2008 | CSIQ | LIVE | W. A. | D. A.
SSIM | 0.7608 | 0.8511 | 0.1334 | 9.6902 | 2.0404 | 2.8589
FSIMc | 0.5959 | 0.6468 | 0.1034 | 7.2367 | 1.5399 | 2.1457
VSI | 0.5404 | 0.6466 | 0.0979 | 8.1036 | 1.6437 | 2.3471
IFS | 0.5909 | 0.6349 | 0.0757 | 7.7764 | 1.6118 | 2.2695
LLM | 0.5277 | 0.5982 | 0.1232 | 7.7678 | 1.5783 | 2.2542
MDSI | 0.5181 | 0.5383 | 0.0796 | 7.0790 | 1.4493 | 2.0538
GDRW | 0.5621 | 0.6322 | 0.0786 | 6.9288 | 1.4712 | 2.0504
EFS | 0.5230 | 0.6349 | 0.0973 | 8.4794 | 1.6890 | 2.4337
SSVD | 0.7099 | 0.6013 | 0.1208 | 7.7709 | 1.6627 | 2.3007
VCGS | 0.5404 | 0.6433 | 0.0964 | 7.9035 | 1.6127 | 2.2959
SPSIM(YCbCr_MDSI) | 0.4935 | 0.5705 | 0.0942 | 7.8048 | 1.5572 | 2.2408
GSC | 0.5879 | 0.6332 | 0.0721 | 4.7564 | 1.1566 | 1.5124
Proposed | 0.5096 | 0.5360 | 0.0796 | 6.6203 | 1.3760 | 1.9364
Table 6. SROCC values of IQA methods for each type of distortion.
Database | Distortion Type | SSIM | FSIMc | VSI | IFS | LLM | MDSI | GDRW | EFS | SSVD | VCGS | GSC | Proposed
TID2013 | AGN | 0.8671 | 0.9101 | 0.9460 | 0.9382 | 0.9462 | 0.9477 | 0.9470 | 0.9427 | 0.9221 | 0.9368 | 0.9120 | 0.9503
TID2013 | ANC | 0.7726 | 0.8537 | 0.8705 | 0.8537 | 0.8975 | 0.8794 | 0.8675 | 0.8680 | 0.8289 | 0.8559 | 0.9352 | 0.8820
TID2013 | SCN | 0.8515 | 0.8900 | 0.9367 | 0.9340 | 0.9349 | 0.9459 | 0.9386 | 0.9369 | 0.9369 | 0.9315 | 0.9435 | 0.9491
TID2013 | MN | 0.7767 | 0.8094 | 0.7697 | 0.7960 | 0.7545 | 0.8004 | 0.7116 | 0.7943 | 0.7325 | 0.8070 | 0.8898 | 0.7962
TID2013 | HFN | 0.8634 | 0.9094 | 0.9200 | 0.9140 | 0.9524 | 0.9147 | 0.9178 | 0.9209 | 0.8995 | 0.9162 | 0.9258 | 0.9192
TID2013 | IN | 0.7503 | 0.8251 | 0.8741 | 0.8389 | 0.8326 | 0.8600 | 0.8038 | 0.8861 | 0.7671 | 0.8682 | 0.9046 | 0.8653
TID2013 | QN | 0.8657 | 0.8807 | 0.8748 | 0.8335 | 0.9055 | 0.9015 | 0.8955 | 0.8682 | 0.8572 | 0.8831 | 0.7585 | 0.8841
TID2013 | GB | 0.9668 | 0.9551 | 0.9612 | 0.9658 | 0.9451 | 0.9523 | 0.9206 | 0.9631 | 0.9499 | 0.9549 | 0.9704 | 0.9565
TID2013 | DEN | 0.9254 | 0.9330 | 0.9484 | 0.9183 | 0.9478 | 0.9498 | 0.9539 | 0.9473 | 0.9486 | 0.9454 | 0.9497 | 0.9508
TID2013 | JPEG | 0.9200 | 0.9339 | 0.9541 | 0.9290 | 0.9544 | 0.9504 | 0.9513 | 0.9520 | 0.9318 | 0.9597 | 0.9627 | 0.9487
TID2013 | JP2K | 0.9468 | 0.9589 | 0.9706 | 0.9611 | 0.9702 | 0.9635 | 0.9657 | 0.9707 | 0.9688 | 0.9686 | 0.9542 | 0.9641
TID2013 | JPTE | 0.8493 | 0.8610 | 0.9216 | 0.8925 | 0.8459 | 0.8897 | 0.8847 | 0.9241 | 0.8441 | 0.8955 | 0.9003 | 0.8990
TID2013 | J2TE | 0.8828 | 0.8919 | 0.9228 | 0.9010 | 0.9176 | 0.9098 | 0.9174 | 0.9233 | 0.9332 | 0.9204 | 0.8790 | 0.9167
TID2013 | NEPN | 0.7821 | 0.7937 | 0.8060 | 0.7839 | 0.7967 | 0.8217 | 0.8137 | 0.8201 | 0.8085 | 0.7887 | 0.8976 | 0.8123
TID2013 | Block | 0.5720 | 0.5532 | 0.1713 | 0.1004 | 0.6273 | 0.6931 | 0.2627 | 0.5581 | 0.4768 | 0.4326 | 0.7455 | 0.6455
TID2013 | MS | 0.7752 | 0.7487 | 0.7700 | 0.6575 | 0.7586 | 0.7424 | 0.7597 | 0.7821 | 0.7504 | 0.7646 | 0.7683 | 0.7752
TID2013 | CTC | 0.3775 | 0.4679 | 0.4754 | 0.4469 | 0.4634 | 0.4378 | 0.3795 | 0.4748 | 0.4494 | 0.4687 | 0.7819 | 0.4913
TID2013 | CCS | 0.4141 | 0.8359 | 0.8100 | 0.8257 | 0.3117 | 0.8001 | 0.7980 | 0.8261 | 0.3562 | 0.7816 | 0.6366 | 0.8096
TID2013 | MGN | 0.7803 | 0.8569 | 0.9117 | 0.8790 | 0.9097 | 0.8897 | 0.8904 | 0.9051 | 0.8600 | 0.8948 | 0.9266 | 0.8987
TID2013 | CN | 0.8566 | 0.9135 | 0.9243 | 0.9037 | 0.9455 | 0.9190 | 0.9302 | 0.9192 | 0.9263 | 0.9268 | 0.9114 | 0.9241
TID2013 | LCNI | 0.9057 | 0.9485 | 0.9564 | 0.9433 | 0.9588 | 0.9559 | 0.9631 | 0.9550 | 0.9649 | 0.9556 | 0.9172 | 0.9594
TID2013 | ICQD | 0.8542 | 0.8815 | 0.8839 | 0.9007 | 0.9155 | 0.9134 | 0.9044 | 0.9007 | 0.8948 | 0.9050 | 0.9002 | 0.9112
TID2013 | CHA | 0.8775 | 0.8925 | 0.8906 | 0.8862 | 0.8682 | 0.8824 | 0.8609 | 0.8954 | 0.8816 | 0.8861 | 0.8837 | 0.8861
TID2013 | SSR | 0.9461 | 0.9576 | 0.9628 | 0.9556 | 0.9676 | 0.9638 | 0.9667 | 0.9630 | 0.9700 | 0.9635 | 0.9345 | 0.9635
CSIQ | AWGN | 0.8974 | 0.9359 | 0.9636 | 0.9593 | 0.9403 | 0.9676 | 0.9686 | 0.9671 | 0.9474 | 0.9671 | 0.9588 | 0.9661
CSIQ | JPEG | 0.9543 | 0.9664 | 0.9618 | 0.9660 | 0.9602 | 0.9572 | 0.9669 | 0.9672 | 0.9593 | 0.9643 | 0.9660 | 0.9581
CSIQ | JP2K | 0.9605 | 0.9704 | 0.9694 | 0.9712 | 0.9730 | 0.9732 | 0.9751 | 0.9772 | 0.9692 | 0.9747 | 0.9713 | 0.9772
CSIQ | AGPN | 0.8924 | 0.9370 | 0.9638 | 0.9526 | 0.9241 | 0.9665 | 0.9623 | 0.9604 | 0.9392 | 0.9624 | 0.9389 | 0.9640
CSIQ | GB | 0.9608 | 0.9729 | 0.9679 | 0.9621 | 0.9650 | 0.9733 | 0.9730 | 0.9774 | 0.9713 | 0.9734 | 0.9789 | 0.9778
CSIQ | CTC | 0.7925 | 0.9438 | 0.9504 | 0.9485 | 0.9418 | 0.9446 | 0.9282 | 0.9557 | 0.8690 | 0.9553 | 0.9145 | 0.9488
LIVE | JP2K | 0.9614 | 0.9724 | 0.9604 | 0.9694 | 0.9668 | 0.9703 | 0.9697 | 0.9678 | 0.9681 | 0.9841 | 0.9726 | 0.9786
LIVE | JPEG | 0.9764 | 0.9840 | 0.9761 | 0.9778 | 0.9735 | 0.9762 | 0.9805 | 0.9767 | 0.9805 | 0.9849 | 0.9867 | 0.9758
LIVE | AWGN | 0.9694 | 0.9716 | 0.9835 | 0.9883 | 0.9765 | 0.9871 | 0.9811 | 0.9841 | 0.9816 | 0.9896 | 0.9850 | 0.9905
LIVE | GB | 0.9517 | 0.9708 | 0.9527 | 0.9665 | 0.9529 | 0.9673 | 0.9575 | 0.9663 | 0.9387 | 0.9763 | 0.9765 | 0.9787
LIVE | FF | 0.9556 | 0.9519 | 0.9430 | 0.9404 | 0.9452 | 0.9487 | 0.9450 | 0.9490 | 0.9392 | 0.9683 | 0.9489 | 0.9669
Table 7. Statistical significance comparison of different IQA methods.
Database | SSIM | FSIMc | VSI | IFS | LLM | MDSI | GDRW | EFS | SSVD | VCGS
CSIQ | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 1
LIVE | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 1
TID2008 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 1
TID2013 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 1
Table 8. Time cost of each IQA method.
Methods | SSIM | FSIMc | VSI | IFS | MDSI | GDRW | EFS | SSVD | VCGS | FFS
Time (s) | 0.0601 | 0.3110 | 0.2727 | 0.0771 | 0.0255 | 4.6298 | 1.6161 | 0.1886 | 0.6240 | 0.0657
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
