Article

Quality Assessment of HDR/WCG Images Using HDR Uniform Color Spaces

1 Harmonic Inc., ZAC des Champs Blancs, 57 Rue Clément Ader, 35510 Cesson-Sévigné, France
2 Institut de Recherche en Informatique et Systèmes Aléatoires (IRISA), Campus Universitaire de Beaulieu, 35042 Rennes CEDEX, France
* Author to whom correspondence should be addressed.
J. Imaging 2019, 5(1), 18; https://doi.org/10.3390/jimaging5010018
Submission received: 31 October 2018 / Revised: 21 December 2018 / Accepted: 4 January 2019 / Published: 14 January 2019
(This article belongs to the Special Issue Multimedia Content Analysis and Applications)

Abstract:
High Dynamic Range (HDR) and Wide Color Gamut (WCG) screens are able to render brighter and darker pixels with more vivid colors than ever. To assess the quality of images and videos displayed on these screens, new quality assessment metrics adapted to this new content are required. Because most SDR metrics assume that the representation of images is perceptually uniform, we study the impact of three uniform color spaces developed specifically for HDR and WCG images, namely ICtCp, Jzazbz and HDR-Lab, on 12 SDR quality assessment metrics. Moreover, as the existing databases of images annotated with subjective scores use a standard gamut, two new HDR databases using WCG are proposed. Results show that MS-SSIM and FSIM are among the most reliable metrics. This study also highlights the fact that the diffuse white of HDR images plays an important role when adapting SDR metrics to HDR content. Moreover, the adapted SDR metrics do not perform well in predicting the impact of chrominance distortions.

1. Introduction

Screen technologies have made incredible progress in recent years. They are able to display brighter and darker pixels with more vivid colors than ever and, thus, create more impressive and realistic images.
Indeed, the new generation of screens can display a luminance that can go below 0.01 cd/m² and up to 10,000 cd/m², thus allowing them to handle images and video with a High Dynamic Range (HDR) of luminance. For comparison, screens with a standard dynamic range (SDR) are traditionally able to display luminance between 1 and 100 cd/m². To handle HDR images, new transfer functions have to be used to transform the true linear light to perceptually linear light (the Opto-Electronic Transfer Function (OETF)). The function used to transform the perceptually linear light back to true linear light is called the Electro-Optic Transfer Function (EOTF). The OETF and the EOTF are not exactly the inverse of each other: this non-linearity compensates for the differences in tonal perception between the environment of the camera and that of the display. The legacy SDR transfer functions, called gamma functions, are normalized in BT.709 [1] and BT.1886 [2]. For HDR video compression, two transfer functions were standardized: PQ (Perceptual Quantizer) [3] and HLG (Hybrid-Log-Gamma) [4].
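As an illustration, the PQ transfer pair can be sketched in a few lines of Python. The constants are the standard SMPTE ST 2084 / BT.2100 values; the function names and the clipping behaviour are our own choices for this sketch:

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants, as carried into BT.2100.
M1 = 2610 / 16384          # ~0.1593
M2 = 2523 / 4096 * 128     # ~78.8438
C1 = 3424 / 4096           # ~0.8359
C2 = 2413 / 4096 * 32      # ~18.8516
C3 = 2392 / 4096 * 32      # ~18.6875

def pq_inverse_eotf(luminance_cd_m2):
    """Map absolute luminance (cd/m^2) to a perceptually uniform PQ
    signal in [0, 1]; 10,000 cd/m^2 maps to exactly 1.0."""
    y = np.clip(np.asarray(luminance_cd_m2, dtype=float) / 10000.0, 0.0, 1.0)
    return ((C1 + C2 * y**M1) / (1.0 + C3 * y**M1)) ** M2

def pq_eotf(signal):
    """Inverse mapping: PQ signal in [0, 1] back to luminance (cd/m^2)."""
    e = np.clip(np.asarray(signal, dtype=float), 0.0, 1.0) ** (1.0 / M2)
    y = np.maximum(e - C1, 0.0) / (C2 - C3 * e)
    return 10000.0 * y ** (1.0 / M1)
```

Applying `pq_inverse_eotf` to the luminance and then `pq_eotf` to the result round-trips the original value, which is the sanity check used throughout the conversions below.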
Screen enhancements do not only focus on increasing the luminance range but also on the size of the color space they can cover. Indeed, the color space that a screen can display is limited by the chromatic coordinates of its three primary colors (Red, Green and Blue), corresponding to the three kinds of display photo-transmitters. The gamut, i.e., the subset of visible colors that can be represented by a color space, used to encode SDR images (normalized in BT.709 [1]) is not wide enough to cover the gamut that could be displayed by a Wide Color Gamut (WCG) screen. The BT.2020 recommendation [5] defines how to handle this wider gamut. For the moment, no screen can cover this gamut in its totality, but some come close. The standard BT.2100 [6] sums up all the aforementioned HDR/WCG standards.
For these new images and videos, new quality assessment metrics are required. Indeed, quality metrics are key tools to assess the performance of diverse image processing applications such as image and video compression. Unfortunately, SDR image quality metrics are not appropriate for HDR/WCG contents. To overcome this problem, we can follow two strategies. The first one is to adapt existing SDR metrics to a higher dynamic range. For instance, instead of using a classical gamma transfer function, Aydin et al. [7] defined a transfer function, called the Perceptually Uniform (PU) function, which corresponds to the gamma non-linearity (defined in BT.1886 [2]) for luminance values between 0.1 and 80 cd/m² while retaining perceptual linearity above. This method can be used for any metric relying on the gamma-corrected luminance, such as PSNR, SSIM [8], VIF [9] and Multiscale-SSIM (MS-SSIM) [10]. In this paper, the metrics using the Perceptually Uniform (PU) function have the prefix PU- (PU-PSNR, PU-SSIM). The second strategy is to design dedicated metrics for HDR contents. We can mention HDR-VDP2 [11,12] for still images and HDR-VQM [13] for videos.
Several studies have already benchmarked the performance of HDR metrics. In [14], the authors assessed the performance of 35 quality metrics over 240 HDR images compressed with JPEG XT [15]. They concluded that HDR-VDP2 (version 2.2.1 [12]) and HDR-VQM were the best performing metrics, closely followed by PU-MS-SSIM. In [16], the authors came to the conclusion that HDR-VDP2 (version 2.1.1) can successfully predict the quality of compared video pairs, contrary to HDR-VQM. In [17], the authors showed that HDR-VDP2, HDR-VQM, PU-VIF and PU-SSIM provide similar performance. In [18], results indicate that PU-VIF and HDR-VDP2 have similar performance, although PU-VIF is slightly more reliable than HDR-VDP2 for lower quality scores. More recently, Zerman et al. [19] demonstrated that HDR-VQM is the best full-reference HDR quality metric.
The above studies have two major limitations. First, as all of these metrics are color-blind, they only address the increase of the luminance range and do not consider the WCG gamut. Second, the databases used to evaluate the different metrics were most of the time created with an HDR display only capable of displaying the BT.709 [1] gamut. The WCG BT.2020 gamut [5] is currently addressed neither by current metrics nor by current databases.
To overcome these limitations, in this paper, we adapt existing SDR metrics to HDR/WCG images using uniform color spaces adapted to HDR. Indeed, most SDR metrics assume that the representation of images is perceptually linear. To be able to evaluate HDR metrics that include both luminance and chromatic information, we propose two new image databases that include chrominance artifacts within the BT.2020 wide color gamut.
This paper is organized as follows. First, we describe the adaptation of SDR metrics to HDR/WCG images using perceptually uniform color spaces. Second, we present the methodology used to evaluate the performances of these metrics. In a third part, the performances of the considered metrics are presented. Results are discussed in a fourth part. A fifth section describes our recommendation to assess the quality of HDR/WCG images. The last section concludes this paper.

2. From State-of-the-Art SDR Quality Assessment Metrics to HDR/WCG Quality Assessment Metrics

In this section, we first present the perceptually uniform color spaces able to encode HDR/WCG content. Then, in a second part, we elaborate on the color difference metrics associated with these color spaces. In a third part, we describe a selection of SDR quality metrics. Finally, we present how we tailor SDR quality metrics to HDR/WCG content.

2.1. Perceptually Uniform Color Spaces

For many image processing applications such as compression and quality assessment, pixels are encoded with a three-dimensional representation: one dimension corresponds to an achromatic component, the luminance, and the two others correspond to the chromatic information. An example of this kind of representation is the linear color space CIE XYZ, where Y represents the luminance and X and Z the chromatic information. This color space is often used as a reference from which many other color spaces are derived (including most RGB color spaces). However, this space is not a uniform (or perceptually uniform) color space. A uniform color space is defined so that the difference between two values always corresponds to the same amount of visually perceived change.
Three uniform color spaces are considered in this article: HDR-Lab [20], the HDR extension of the CIE 1976 Lab [21], and two other HDR/WCG color spaces designed to be perceptually linear and simple to use: ICtCp [6] and Jzazbz [22]. Unlike the XYZ color space, in which all components are always non-negative, these three uniform color spaces represent the chromatic information using a color-opponent model, which is coherent with the Human Visual System (HVS) and the opponent color theory.
In this article, the luminance component of the uniform color spaces is called the uniform luminance instead of, depending on the case, lightness, brightness or luma, to avoid unnecessary complexity. For example, the uniform luminance of HDR-Lab should be called lightness, while the uniform luminance of Jzazbz should be called brightness.

2.1.1. HDR-Lab

One of the most popular uniform color spaces is the CIE 1976 Lab, or CIELAB, which is suited for SDR content. An extension of this color space for HDR images was proposed in [20]. The idea is to tailor CIELAB for HDR by changing the non-linear function applied to the pixel XYZ values. This color space is calculated as follows:
L_HDR = f(Y/Y_n)
a_HDR = 5 [f(X/X_n) − f(Y/Y_n)]
b_HDR = 2 [f(Y/Y_n) − f(Z/Z_n)]
where X_n, Y_n and Z_n are the XYZ coordinates of the diffuse white. The non-linear function f is used to output perceptually linear values. f is defined for HDR as follows:
f(ω) = 247 ω^ε / (ω^ε + 2^ε) + 0.02
ε = 0.58 / (sf × lf)
sf = 1.25 − 0.25 (Y_s / 0.184)
lf = log(318) / log(Y_n)
where Y_s is the relative luminance of the surround and Y_n is the absolute luminance of the diffuse white, or reference white. The diffuse white corresponds to the chromatic coordinates, in the XYZ domain, of a 100% reflectance white card without any specular highlight. In HDR imaging, the luminance Y of the diffuse white is different from the peak brightness: light coming from specular reflections or emissive light sources can reach much higher luminance values. The luminance of the diffuse white is often chosen during the color grading of the images.
The HDR-Lab color space is somewhat difficult to use since it requires knowing the relative luminance of the surround, Y_s, as well as the diffuse white luminance, Y_n. Unfortunately, these two parameters are most of the time unknown for HDR contents. To cope with this issue, we consider two different diffuse whites to compute the HDR-Lab color space, i.e., 100 cd/m² and 1000 cd/m². These two color spaces are named HDR-Lab_100 and HDR-Lab_1000, respectively.
In addition to the HDR-Lab color space, Fairchild et al. [20] also proposed the HDR-IPT color space, which aims to extend the IPT color space [23] to HDR content. This color space is not studied in this article due to its high similarity with HDR-Lab.
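The HDR-Lab equations above can be sketched as follows. The function and argument names are ours, and the default surround value of 0.2 is only an illustrative assumption:

```python
import numpy as np

def hdr_lab(X, Y, Z, Xn, Yn, Zn, Ys=0.2):
    """Sketch of the HDR-Lab conversion.
    Xn, Yn, Zn: diffuse-white coordinates (Yn in cd/m^2 for the lf term);
    Ys: relative luminance of the surround (0.2 is an assumption)."""
    sf = 1.25 - 0.25 * (Ys / 0.184)      # surround factor
    lf = np.log(318.0) / np.log(Yn)      # luminance-level factor
    eps = 0.58 / (sf * lf)

    def f(w):
        # Non-linear function outputting perceptually linear values.
        return 247.0 * w**eps / (w**eps + 2.0**eps) + 0.02

    L = f(Y / Yn)
    a = 5.0 * (f(X / Xn) - f(Y / Yn))
    b = 2.0 * (f(Y / Yn) - f(Z / Zn))
    return L, a, b
```

For an achromatic pixel (X/Xn = Y/Yn = Z/Zn), both opponent components are zero, which is a quick way to check the implementation.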

2.1.2. ICtCp

ICtCp has a better chrominance/luminance decorrelation and a better hue linearity than the classical YCrCb color space [24]. This color space is calculated in three steps:
  • First, the linear RGB values (in the BT.2020 gamut) are converted into LMS values, which correspond to the quantity of light absorbed by the cones:
    L = 0.41210938 × R + 0.52392578 × G + 0.06396484 × B
    M = 0.16674805 × R + 0.72045898 × G + 0.11279297 × B
    S = 0.02416992 × R + 0.07543945 × G + 0.90039062 × B
  • Second, the inverse EOTF PQ [6] is applied to each LMS component:
    L′ = EOTF_PQ⁻¹(L)
    M′ = EOTF_PQ⁻¹(M)
    S′ = EOTF_PQ⁻¹(S)
  • Finally, the luminance component I and the chrominance components Ct and Cp are deduced as follows:
    I = 0.5 × L′ + 0.5 × M′
    Ct = 1.61376953 × L′ − 3.32348632 × M′ + 1.70971679 × S′
    Cp = 4.37817383 × L′ − 4.24560547 × M′ − 0.13256836 × S′
The ICtCp color space [6] is particularly well adapted to video compression and, more importantly, to the PQ EOTF as defined in BT.2100 [6].
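The three steps above can be sketched as follows. This is a simplified implementation: the input RGB is assumed linear BT.2020, normalized so that 1.0 corresponds to 10,000 cd/m², the PQ constants are the BT.2100 values, and the function names are ours:

```python
import numpy as np

# PQ (SMPTE ST 2084) constants.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_nonlinearity(y):
    """Inverse PQ EOTF: normalized linear light in [0, 1] -> PQ signal."""
    y = np.clip(np.asarray(y, dtype=float), 0.0, 1.0)
    return ((C1 + C2 * y**M1) / (1.0 + C3 * y**M1)) ** M2

def rgb2020_to_ictcp(rgb):
    """Three-step ICtCp conversion (rgb: linear BT.2020 triplet,
    1.0 == 10,000 cd/m^2). Coefficients as in BT.2100."""
    r, g, b = rgb
    # Step 1: linear RGB -> LMS (cone responses).
    L = 0.41210938*r + 0.52392578*g + 0.06396484*b
    M = 0.16674805*r + 0.72045898*g + 0.11279297*b
    S = 0.02416992*r + 0.07543945*g + 0.90039062*b
    # Step 2: PQ non-linearity on each component.
    Lp, Mp, Sp = pq_nonlinearity(L), pq_nonlinearity(M), pq_nonlinearity(S)
    # Step 3: opponent transform.
    I  = 0.5*Lp + 0.5*Mp
    Ct = 1.61376953*Lp - 3.32348632*Mp + 1.70971679*Sp
    Cp = 4.37817383*Lp - 4.24560547*Mp - 0.13256836*Sp
    return I, Ct, Cp
```

An achromatic input (R = G = B) yields Ct = Cp = 0, since each LMS row and each opponent row sums to 1 and 0 respectively.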

2.1.3. J z a z b z

Jzazbz [22] is a uniform color space designed to improve hue uniformity and to accurately predict both small and large color differences, while keeping a low computational cost. It is computed from the XYZ values (with a standard illuminant D65) in five steps:
  • First, the XYZ values are adjusted to remove a deviation in the blue hue:
    X′ = b X − (b − 1) Z
    Y′ = g Y − (g − 1) X
    where b = 1.15 and g = 0.66.
  • Second, the X′Y′Z values are converted to LMS values:
    L = 0.41478972 × X′ + 0.579999 × Y′ + 0.0146480 × Z
    M = −0.2015100 × X′ + 1.120649 × Y′ + 0.0531008 × Z
    S = −0.0166008 × X′ + 0.264800 × Y′ + 0.6684799 × Z
  • Third, as for ICtCp, the inverse EOTF PQ is applied to each LMS component:
    L′ = EOTF_PQ⁻¹(L)
    M′ = EOTF_PQ⁻¹(M)
    S′ = EOTF_PQ⁻¹(S)
  • Fourth, the luminance Iz and the chrominances az and bz are calculated:
    Iz = 0.5 × L′ + 0.5 × M′
    az = 3.524000 × L′ − 4.066708 × M′ + 0.542708 × S′
    bz = 0.199076 × L′ + 1.096799 × M′ − 1.295875 × S′
  • Finally, to handle the highlights, the luminance is adjusted:
    Jz = ((1 + d) × Iz) / (1 + d × Iz) − d0
    where Jz is the adjusted luminance, d = −0.56 and d0 is a small constant: d0 = 1.6295499532812566 × 10⁻¹¹.
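The five steps above can be sketched as follows. Following the description in this section, the standard PQ curve is used for step three (the original Jzazbz paper uses a slightly boosted PQ exponent, so treat this as a simplification); input XYZ is assumed normalized so that 1.0 corresponds to 10,000 cd/m², and the function names are ours:

```python
import numpy as np

M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_curve(y):
    """Inverse PQ EOTF (standard ST 2084 exponents, a simplification here)."""
    y = np.clip(np.asarray(y, dtype=float), 0.0, 1.0)
    return ((C1 + C2 * y**M1) / (1.0 + C3 * y**M1)) ** M2

def xyz_to_jzazbz(X, Y, Z):
    """Five-step Jzazbz conversion (XYZ under D65, 1.0 == 10,000 cd/m^2)."""
    b, g = 1.15, 0.66
    # Step 1: blue-hue correction.
    Xp = b * X - (b - 1.0) * Z
    Yp = g * Y - (g - 1.0) * X
    # Step 2: X'Y'Z -> LMS.
    L = 0.41478972*Xp + 0.579999*Yp + 0.0146480*Z
    M = -0.2015100*Xp + 1.120649*Yp + 0.0531008*Z
    S = -0.0166008*Xp + 0.264800*Yp + 0.6684799*Z
    # Step 3: PQ non-linearity.
    Lp, Mp, Sp = pq_curve(L), pq_curve(M), pq_curve(S)
    # Step 4: opponent transform.
    Iz = 0.5*Lp + 0.5*Mp
    az = 3.524000*Lp - 4.066708*Mp + 0.542708*Sp
    bz = 0.199076*Lp + 1.096799*Mp - 1.295875*Sp
    # Step 5: highlight adjustment of the luminance.
    d, d0 = -0.56, 1.6295499532812566e-11
    Jz = (1.0 + d) * Iz / (1.0 + d * Iz) - d0
    return Jz, az, bz
```

Jz grows monotonically with the luminance of an achromatic pixel, which provides a simple sanity check.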

2.2. Color Difference Metrics

In this section, we present the color difference metrics associated with each HDR color space. Because these color spaces are uniform, it is possible to calculate the perceptual difference between two colors.
  • For the HDR-Lab color space, the Euclidean distance ΔE_HDR-Lab is used:
    ΔE_HDR-Lab = √((ΔL)² + (Δa)² + (Δb)²)
  • For Jzazbz, Safdar et al. [22] proposed the following formula:
    ΔE_Jzazbz = √((ΔJz)² + (ΔCz)² + (ΔHz)²)
    where Cz corresponds to the color saturation and hz to the hue:
    Cz = √(az² + bz²)
    hz = arctan(bz / az)
    ΔHz = 2 √(Cz1 × Cz2) × sin(Δhz / 2)
    where Cz1 and Cz2 correspond to the saturations of the two compared colors.
  • For ICtCp, a weighted Euclidean distance formula was proposed in [25]:
    ΔE_ICtCp = 720 √((ΔI)² + 0.25 (ΔCt)² + (ΔCp)²)
    Then, to obtain an ICtCp color space that is truly perceptually linear, the coefficient 0.25 is applied to the Ct component before using any SDR metric.
These color difference metrics work well for measuring perceptual differences between uniform patches. Although we do not perceive color differences in the same way in textured images as in large uniform patches, they are often used to compare the distortion between two images. The mean of the difference between the distorted and the reference images can be used as an indicator of image quality:
ΔĒ_colorspace = (1 / (I × J)) Σ_{i=1..I} Σ_{j=1..J} ΔE_colorspace(i, j)
where I and J correspond to the dimensions of the image and (i, j) corresponds to the spatial coordinates of the pixel.
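For the Euclidean case (HDR-Lab), the per-pixel distance and its average over the image can be sketched as follows; the function name and array layout (H × W × 3 planes) are our own conventions:

```python
import numpy as np

def mean_delta_e_euclidean(ref_lab, dist_lab):
    """Per-pixel Euclidean color difference averaged over the image,
    i.e., the mean ΔE indicator of the equation above.
    ref_lab, dist_lab: arrays of shape (H, W, 3) holding the three planes."""
    diff = np.asarray(ref_lab, dtype=float) - np.asarray(dist_lab, dtype=float)
    per_pixel = np.sqrt(np.sum(diff**2, axis=-1))  # ΔE at each (i, j)
    return float(per_pixel.mean())
```

The Jzazbz and ICtCp variants only change the per-pixel distance; the averaging step is identical.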

2.3. SDR Quality Assessment Metrics

We have selected 12 SDR metrics commonly used in academic research, standardization or industry. There are six achromatic or color-blind metrics (PSNR, SSIM, MS-SSIM, FSIM, PSNR-HVS-M and PSNR-HMA) and six metrics including chrominance information (ΔĒ, ΔĒS, SSIMc, CSSIM, FSIMc, PSNR-HMAc). Table 1 summarizes the principle of each metric. More detailed information about these metrics can be found in the Supplementary Materials.

2.4. Adapting SDR Metrics to HDR/WCG Images

To adapt SDR metrics to HDR/WCG images, the reference and distorted images are first converted into a perceptually linear color space. A remapping function is then applied. Finally, the SDR metric is used to determine the quality score. Figure 1 presents the diagram of the proposed method.

2.4.1. Color Space Conversion

Most SDR metrics were designed with the assumption that the images are encoded in the legacy YCrCb color space [1]; this color space is approximately perceptually uniform for SDR content.
To use SDR metrics with HDR images, we propose to leverage perceptually uniform color spaces adapted to HDR and WCG images (ICtCp, Jzazbz, HDR-Lab_100 and HDR-Lab_1000).
To illustrate the importance of using a uniform color space, we also consider two non-uniform color spaces, namely the XYZ and YCrCb color spaces as defined in the BT.2020 recommendation [5]. This last space cannot be considered approximately uniform for HDR content as it uses the classical gamma function. This function is applied to the RGB components of an image:
E′ = 4.5 E                  if 0 ≤ E < β
E′ = α E^0.45 − (α − 1)     otherwise
where α = 1.099, β = 0.018 and E is one of the R, G and B channels normalized by the reference white level. In SDR, this reference is supposed to be equal to the peak brightness of the display, so we choose it as the maximum value taken by our own HDR images: 4250 cd/m².
From the non-linear RGB color space, the YCrCb color space can be easily deduced:
Y = 0.2627 × R + 0.6780 × G + 0.0593 × B
Cr = (R − Y) / 1.4746
Cb = (B − Y) / 1.8814
In addition to the previous color spaces, for the color-blind metrics, we use the PU-mapping function for the luminance [7]. As mentioned earlier, this transfer function behaves like the gamma encoding with a reference white of 80 cd/m² (which is perceptually linear within an SDR range) and retains perceptual linearity above. Thus, any color-blind metric can be used thanks to this mapping.

2.4.2. Remapping Function

The six aforementioned color spaces, i.e., XYZ, YCrCb, HDR-Lab_100, HDR-Lab_1000, ICtCp and Jzazbz, have different ranges of values. As most SDR metrics have constants defined for pixel values between 0 and 255, the color spaces have to be adapted. We remap them so that their respective perceptually linear luminances fit a similar range as the luminances encoded with the PU transfer function between 0 and 100 cd/m². We choose 100 cd/m² as a normalization point because it roughly corresponds to the peak brightness of an SDR screen. Moreover, the PU-encoding is used as a reference to remap the color spaces because it is already adapted to SDR metrics. The goal of this process is to obtain HDR images with the same luminance scale as SDR images in the range 0 to 100 cd/m², while preserving the perceptual uniformity of the color spaces. The remapping of the perceptual color spaces is done as follows:
Ĵzazbz(i, j) = (α_PU / β_Jz) × Jzazbz(i, j)
where Jzazbz(i, j) corresponds to the value in the Jzazbz domain of the pixel with spatial coordinates i and j, and Ĵzazbz(i, j) corresponds to the same pixel value after the remapping. α_PU is the value in the PU space when the linear luminance is 100 cd/m², and β_Jz is the same value but for the luminance component Jz of the Jzazbz color space. A similar operation is applied to the ICtCp, HDR-Lab, XYZ and YCrCb color spaces. The resulting luminances for the aforementioned color spaces, as well as the PU-encoded luminance, are plotted in Figure 2. For these figures, we chose a surround luminance of 20 cd/m² for the two HDR-Lab color spaces.
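The remapping for Jzazbz can be sketched as follows. Computing β_Jz reuses the simplified Jz of an achromatic pixel (standard PQ curve, Iz taken as the PQ signal itself, which is an approximation); α_PU must be evaluated once from the PU curve of [7] and is passed in as a parameter here:

```python
import numpy as np

M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_curve(y):
    return ((C1 + C2 * y**M1) / (1.0 + C3 * y**M1)) ** M2

def jz_luminance(lum_cd_m2):
    """Uniform luminance Jz of an achromatic pixel (approximation:
    L' = M' = S' = PQ signal, standard PQ exponents)."""
    iz = pq_curve(lum_cd_m2 / 10000.0)
    d, d0 = -0.56, 1.6295499532812566e-11
    return (1.0 + d) * iz / (1.0 + d * iz) - d0

def remap_jzazbz(jzazbz_planes, alpha_pu):
    """Scale Jzazbz so that its uniform luminance at 100 cd/m^2 equals
    the PU-encoded luminance at 100 cd/m^2 (alpha_pu)."""
    beta_jz = jz_luminance(100.0)
    return (alpha_pu / beta_jz) * np.asarray(jzazbz_planes, dtype=float)
```

By construction, a gray pixel of 100 cd/m² is mapped exactly onto α_PU, and the chrominance planes are scaled by the same factor so the space stays uniform.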
Remark 1.
  • Note that, to adapt the ΔĒS metric, the blurring model used in this metric is first applied to the XYZ representation of the images, and then the different color difference metrics are calculated. In the case of the Jzazbz color space, instead of the color difference metric presented in Equation (15), we use a simpler Euclidean distance between the pixel values.
  • In the following sections, the naming convention used for all metrics is Metric_ColorSpace. For example, the PSNR metric used with the ICtCp color space is called PSNR_ICtCp.

3. Methodology for the Quality Assessment Metrics Evaluation

In this section, we describe the methodology used to evaluate the performances of the adapted metrics described in the previous section. First, we present the HDR image databases annotated with subjective scores, or Mean Opinion Scores (MOS). They are used to compare the objective metrics' quality scores to a ground truth. Then, we present the performance indexes used to perform this comparison.

3.1. Databases Presentation

Five image databases are used for carrying out the comparison. Three were already available online and two were created to handle WCG image quality assessment. To objectively describe the images of each database, we use four indicators:
  • the image dynamic range (DR):
    DR = log(Y_max) − log(Y_min)
    where Y_min and Y_max are the minimum and the maximum luminance (in the linear XYZ domain), respectively. They are computed after excluding 1% of the brightest and darkest pixels to be more robust to outliers.
  • the key of the picture, a measure of its overall brightness (in the range [0, 1]):
    key = (mean[log(Y)] − log(Y_min)) / (log(Y_max) − log(Y_min))
    where mean[log(Y)] is computed as follows:
    mean[log(Y)] = (1/N) Σ_{i=1..I} Σ_{j=1..J} log(Y(i, j) + δ)
    where Y(i, j) is the luminance of pixel (i, j), N the total number of pixels and δ a small offset to avoid a singularity when the luminance is null. Y_min and Y_max are calculated as for the dynamic range.
  • the spatial information (SI) [32], which describes the complexity of an image. It corresponds to the standard deviation of the image luminance plane filtered by a Sobel filter:
    SI = std[Sobel(Y)]
    On SDR images, this metric is computed after the OETF, usually a gamma function, which makes the luminance approximately perceptually linear. To obtain values comparable to SDR, the PU function is applied to the luminance of the HDR images.
  • the colorfulness, a metric of the perceived saturation of an image [33]. The M̂(1) version of the metric is used because the image is first converted into the CIELab space, a space that can be adapted to HDR (cf. Section 2.1). This metric is computed as follows:
    M̂(1) = σ_ab + 0.37 × μ_ab
    where
    σ_ab = √(σ_a² + σ_b²)
    and
    μ_ab = √(μ_a² + μ_b²)
    where σ_a and σ_b are the standard deviations along the a and b axes, respectively, and μ_a and μ_b are the means of the a and b components, respectively.
Before calculating these indicators, the image luminance is limited to the available dynamic range of the display used in the respective subjective test.
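The first two indicators can be sketched as follows. The percentile-based exclusion of the 1% brightest and darkest pixels is one reasonable reading of the robustness step described above, and log base 10 is assumed (dynamic range in orders of magnitude):

```python
import numpy as np

def dynamic_range_and_key(Y, exclude=0.01, delta=1e-6):
    """Compute the DR and key indicators of a linear luminance plane Y
    (cd/m^2). The 1% brightest/darkest pixels are excluded via
    percentiles; delta avoids log(0) for null luminances."""
    y = np.asarray(Y, dtype=float).ravel()
    y_min = np.percentile(y, 100.0 * exclude)          # robust minimum
    y_max = np.percentile(y, 100.0 * (1.0 - exclude))  # robust maximum
    dr = np.log10(y_max) - np.log10(y_min)
    log_mean = np.mean(np.log10(y + delta))
    key = (log_mean - np.log10(y_min)) / (np.log10(y_max) - np.log10(y_min))
    return float(dr), float(key)
```

A log-uniform luminance ramp spanning four orders of magnitude should yield a DR close to 4 and a key close to 0.5.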
Table 2 presents the main characteristics of the studied databases and a representation of each image of each database can be found in Appendix A.

3.1.1. Existing Databases

In this section, three databases available online are presented. They all include compression artifacts. Some of them use a backward compatible compression. This method allows the images to be displayable with SDR equipment while preserving the dynamic range for the display on HDR screens. A Tone Mapping Operator (TMO) is used to tone map the HDR images into SDR range. These tone-mapped images are then compressed using different codecs. After the decoding process, they are tone expanded to recover their HDR characteristics. These three databases use the same HDR SIM2 display (HDR47ES4MB) which has a measured dynamic range going from 0.03 to 4250 cd / m 2 .
  • Narwaria et al.'s database [34] (available at http://ivc.univ-nantes.fr/en/databases/JPEG_HDR_Images/) is composed of 10 source images, which have been distorted by a backward-compatible compression with the JPEG codec and the iCam TMO [36]. This database was used, along with others, to tune HDR-VDP2. The angular resolution used during the subjective test was 60 pix/degree and the surround luminance was 130 cd/m². For this database, due to its surround luminance above 100 cd/m², we consider a surround luminance of 20 cd/m² to obtain the HDR-Lab_100 color space. The characteristics of each reference image are reported in Figure 3.
  • Korshunov et al.'s database [35] (available at http://mmspg.epfl.ch/jpegxt-hdr) consists of images distorted with a backward-compatible compression scheme using the JPEG-XT standard and either the Mantiuk et al. [37] or the Reinhard et al. [38] TMO. The angular resolution used during the subjective test was 60 pix/degree and the surround luminance was 20 cd/m². The characteristics of each reference image are reported in Figure 4.
  • Zerman et al.'s database [19] (available at http://webpages.l2s.centralesupelec.fr/perso/giuseppe.valenzise/) is partially composed of images from [39]. The distorted images are generated using both a backward-compatible compression with the TMO proposed by Mai et al. [40] and a non-backward-compatible compression using the PQ function as the EOTF. The compression was performed with the JPEG and JPEG2000 codecs. The angular resolution used during the subjective test was 40 pix/degree and the surround luminance was 20 cd/m². The characteristics of each reference image are reported in Figure 5.
The main limitation of these databases is the limited BT.709 gamut used during their creation. The wider BT.2020 color gamut is more and more associated with HDR images and videos. In addition, Standards Developing Organizations such as the DVB recommend the use of the BT.2020 gamut with HDR content [41].

3.1.2. Proposed Databases

To deal with the limitations of existing databases, we propose two new databases (available at www-percept.irisa.fr/software/). The first one contains BT.709 content encapsulated in a BT.2020 gamut, and the second one is composed of native BT.2020 content. We used the same display for both of them: the SONY BVM-X300. This is a professional HDR video monitor able to faithfully display the brightness of signals [42]. It has a peak brightness of 1000 cd/m² and a black-pixel luminance that was too low to be measured by our equipment (<0.2 cd/m²). In this paper, we assume a black-pixel luminance of 0.001 cd/m². This monitor also allows us to force the use of a chosen EOTF without having to consider the image metadata.
To display the images on the screen, we used the b<>com *Ultra Player*, which can distribute uncompressed YUV content with 10-bit quantization and 4:2:0 chroma sub-sampling. The connection to the screen was done using 3G-SDI cables.
To create the distortions, we used HDRTools (v0.16) (Available at https://gitlab.com/standards/HDRTools/) to apply format conversion, chrominance sub-sampling or gamut conversion. For the compression and decompression of the images, we used the reference software of HEVC, the HEVC test Model (v16.17) (Available at https://hevc.hhi.fraunhofer.de/).
For both subjective tests, we used the same methodology: the Double-Stimulus Impairment Scale (DSIS) variant I methodology [43] with a side-by-side comparison. Pairs of images were presented to the viewers, one side always being the reference. Half of the participants had the reference always on the right-hand side, the other half always on the left-hand side. To avoid a bias due to the order of presentation, the pairs of images were randomized for each participant, with the condition that the same content was never shown twice consecutively. Each image pair was shown for 10 s and the voting time was 5 s. The test session lasted 35 min (including instructions and training time) with a 5 min pause in the middle of the test.
To obtain realistic distortions, we compressed the images with the HEVC codec using the recommendation ITU-T H Suppl.15 [44], i.e., a 10-bit quantization for the images, the PQ EOTF and the YCrCb color space. Moreover, this recommendation proposes different processes for increasing the quality of the compression, such as a 4:2:0 chroma subsampling using a luma-adjustment process and a chroma Qp adaptation. The latter is of special interest for this study because it corrects errors due to the compression of the chrominances. In WCG, most chroma values tend to be near their mean value (i.e., 512), while the Y component tends to use most of its range. This is even more significant for BT.709 content encapsulated in a BT.2020 gamut. This behaviour creates a shift in the bitrate allocation from the chrominance to the luminance, which can potentially create visible chrominance artifacts. The chroma Qp adaptation proposed in the recommendation is a method to counter this effect. Because we wanted to create realistic color artifacts, we distorted the images using the compression with and without the chroma Qp adaptation to study the sensitivity of the color metrics.
The first database we propose was already presented in [45]. Eight images were selected from three collections: two are from the MPEG HDR sequences (FireEater and Market) [46], one is from the Stuttgart HDR Video Database [47] and the remaining five images are from the HDR photographic survey [48]. Note that these images also belong to Zerman et al.'s database [19]. The characteristics of the images are not exactly the same as in Zerman et al. [19] because we used only half of each image (944 × 1080) to be able to display simultaneously both the reference image and the distorted image on the same screen. The characteristics of the images can be found in Figure 6.
Fifteen naive subjects participated in this test (11 males, 4 females) with an average age of 25.8. All declared normal or corrected-to-normal vision. One participant was removed from the analysis using the methodology described in [43].
Because we used the display in HD mode, we chose to call this database HDdtb in this paper. Four kinds of distortions were chosen:
  • HEVC compression using the recommendation ITU-T H Suppl.15 [44]. Four different quantization parameters (Qp) were selected for each image for this distortion.
  • HEVC compression without the chroma Qp adaptation, leading to more chrominance distortions. Three Qp values were selected for each image.
  • Gaussian noise on the chroma components: three levels of noise were selected.
  • Gamut mismatch: two kinds of distortion were created. On the one hand, the BT.709 images were considered as if they had already been encapsulated in a BT.2020 gamut, leading to more saturated images. On the other hand, we took images already encapsulated in a BT.2020 gamut, considered them as BT.709 images, and re-encapsulated them in a BT.2020 gamut. This creates less saturated images.
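The gamut mismatch distortion can be illustrated with the linear-RGB conversion between BT.709 and BT.2020 primaries. The matrix coefficients below are the commonly cited rounded values (an assumption of this sketch, not taken from the text):

```python
import numpy as np

# Commonly cited linear-RGB matrix from BT.709 primaries to BT.2020
# primaries (rounded to 4 decimals; assumed values for illustration).
M_709_TO_2020 = np.array([
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
])

def encapsulate_709_in_2020(rgb709):
    """Correct encapsulation of BT.709 linear RGB into BT.2020.
    The 'gamut mismatch' distortions correspond to skipping this step
    (over-saturated result) or applying it one time too many
    (de-saturated result)."""
    return M_709_TO_2020 @ np.asarray(rgb709, dtype=float)
```

Since each row sums to 1, reference white is preserved by the conversion; only chromatic pixels are affected by a mismatch.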
For the second database, we used eight 4K images produced by Harmonic Inc. The characteristics of the images are given in Figure 7. We used only part of these images so the reference and the distorted image could fit on our display ( 1890 × 2160 ) with a band of 60 black pixels. We called this new database 4Kdtb. We used the same visualization distance as in the previous database. Thirteen experts or sensitized subjects participated in this test (11 males, 2 females) with an average age of 40. All declared normal or corrected-to-normal vision. With this database we aim to create more visible color artifacts induced by compression than in the HDdtb database. We compressed the images with four different Q p with three different options for the compression:
  • HEVC compression using the recommendation ITU-T H Suppl.15 [44].
  • HEVC compression without the chroma Q p offset algorithm.
  • HEVC compression with an 8-bit quantization of the chroma instead of 10 bits. The chroma components were re-sampled to 10 bits before displaying the images on screen.
Because of the higher spatial resolution, the angular resolution increases as well and becomes 120 pix/degree. Because most quality metrics are not adapted to such a resolution, we chose to downsample the images to 60 pix/degree before running the different quality metrics. Using a downsampled image can improve the performance of some metrics, such as MS-SSIM or HDR-VDP2, which were tuned for lower resolutions.
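As a sanity check on this reasoning, the relation between viewing distance, pixel pitch and angular resolution can be sketched as follows. The panel width and the derived distance are illustrative assumptions, not the exact geometry of our setup; the point is only that downsampling by a factor of two halves the angular resolution:

```python
import math

def pixels_per_degree(distance_m, pixel_pitch_m):
    """Number of pixels subtending one degree of visual angle: the arc
    length of 1 degree at the viewing distance divided by the physical
    width of one pixel."""
    return (2 * distance_m * math.tan(math.radians(0.5))) / pixel_pitch_m

# Illustrative geometry (assumed numbers): a UHD image area about
# 0.6 m wide, i.e. 3840 pixels across.
pitch = 0.6 / 3840
# Viewing distance chosen so that the native image gives 120 pix/degree.
distance = 120 * pitch / (2 * math.tan(math.radians(0.5)))

native = pixels_per_degree(distance, pitch)           # ~120 pix/degree
downsampled = pixels_per_degree(distance, 2 * pitch)  # 2x downsampling -> ~60
```

Doubling the effective pixel pitch (a 2× downsample in each dimension) is what brings the 4K content back to the 60 pix/degree regime the metrics were tuned for.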

3.2. Performance Indexes

We present the performance of the different quality metrics on each database with four indicators. Before computing these indicators, a non-linear regression is applied to the quality scores using a logistic function:
$\tilde{Q}_i = a + \dfrac{b}{1 + e^{-(Q_i - c)/d}}$
where $Q_i$ is the score of the quality metric on image $i$ and $\tilde{Q}_i$ is the mapped quality score. The parameters $a$, $b$, $c$ and $d$ are determined by a least-squares regression (the lsqcurvefit function of Matlab).
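The same fit can be sketched with SciPy's curve_fit, the counterpart of Matlab's lsqcurvefit. The data below are synthetic and the starting point p0 is our own heuristic, not a value from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(q, a, b, c, d):
    # Mapped score: Q~ = a + b / (1 + exp(-(Q - c) / d))
    return a + b / (1.0 + np.exp(-(q - c) / d))

# Synthetic example: objective scores q and MOS generated from the model.
rng = np.random.default_rng(0)
q = np.linspace(0.0, 1.0, 50)
mos = logistic(q, 1.0, 4.0, 0.5, 0.15) + rng.normal(0.0, 0.05, q.size)

# Non-linear least squares; p0 is a rough data-driven initial guess.
p0 = [mos.min(), np.ptp(mos), float(np.median(q)), 0.1]
params, _ = curve_fit(logistic, q, mos, p0=p0, maxfev=10000)
q_mapped = logistic(q, *params)
```

The mapped scores q_mapped, not the raw metric outputs, are what the four indicators below are computed on.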
The four performance indicators are given below:
  • the Pearson correlation coefficient (PCC) (cf. Table A1 and Table A2) measures the linearity between two variables:
    $\mathrm{PCC}(S, Q) = \dfrac{\mathrm{cov}(S, Q)}{\sigma_S \, \sigma_Q}$
    where $S$ corresponds to the subjective scores (MOS), $Q$ to the predicted quality scores, $\mathrm{cov}(S, Q)$ is the covariance of $S$ and $Q$, and $\sigma_S$ (resp. $\sigma_Q$) is the standard deviation of $S$ (resp. $Q$).
  • the Spearman Rank Order Correlation Coefficient (SROCC) (cf. Table A3 and Table A4) measures the monotonicity between two variables. The raw scores $S$ and $Q$ are first converted to ranks $rg_S$ and $rg_Q$; the SROCC is the PCC of these two new variables:
    $\mathrm{SROCC}(S, Q) = \dfrac{\mathrm{cov}(rg_S, rg_Q)}{\sigma_{rg_S} \, \sigma_{rg_Q}}$
  • the Outlier Ratio (OR) [49] (cf. Table A5 and Table A6) measures the consistency of a quality metric as the ratio of outlier points to the total number of points $N$:
    $\mathrm{OR} = \dfrac{\text{Total of outliers}}{N}$
    An outlier is defined as a point for which the prediction error exceeds the 95% confidence interval of the mean MOS value, as defined in [43].
  • the Root Mean Square Error (RMSE) (cf. Table A7 and Table A8) measures the accuracy of a quality metric:
    $\mathrm{RMSE} = \sqrt{\dfrac{1}{N} \sum_{i=1}^{N} \left( S(i) - Q(i) \right)^2}$

4. Results

In this section, we present the performance of the metrics introduced in the previous sections. For the sake of completeness, we also study the performance of the following color-blind HDR metrics: PU-VIF [9], HDR-VDP2 [11] (version 2.2.1 [12]) and HDR-VQM [13]. Note that HDR-VDP2 requires several parameters, such as the angular resolution, the surround luminance and the spectral emission of the screen. For these parameters, we used the values corresponding to the different subjective tests. We measured the spectra of the Sony BVM-X300 and the SIM2 HDR47ES4MB monitors using an "X-Rite Eye One Pro 2" probe (more details are given in [45]). All these parameters are summarized in Table 3.
Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12 present the SROCC performance for each database and each metric. Numerical values of the performance indexes (SROCC, PCC, OR, RMSE) can be found in Appendix B.

4.1. 4Kdtb Database

On our proposed 4Kdtb (cf. Figure 8), for every color-blind metric, the best color spaces are always ICtCp, HDR-Lab100 and the PU-encoding; Jzazbz and HDR-Lab1000 provide the lowest performance. The best-performing color-blind metric is FSIM used with the PU-encoding, closely followed by FSIM ICtCp and FSIM HDR-Lab100. MS-SSIM used with the PU-encoding, ICtCp and HDR-Lab100 is almost on par with the second-best metric, HDR-VDP2 (cf. Appendix B). The only color space that provides good performance with all color metrics is ICtCp.

4.2. Zerman et al. Database

On the Zerman et al. database, as previously, the ICtCp, HDR-Lab100 and PU-encoding color spaces provide the best performance for almost all color-blind metrics (cf. Figure 9). The exception is FSIM: used with Jzazbz, HDR-Lab100 and HDR-Lab1000, it provides slightly better performance than with ICtCp and the PU-encoding. The best-performing color-blind metrics are, with almost identical performance, HDR-VDP2, HDR-VQM and MS-SSIM ICtCp.

4.3. HDdtb Database

On our proposed HDdtb (cf. Figure 10), for the color-blind metrics, the Jzazbz color space provides slightly lower performance for all metrics except FSIM, for which the performance with Jzazbz is higher. The best-performing color-blind metrics for this database are FSIM Jzazbz, FSIM HDR-Lab1000 and MS-SSIM HDR-Lab100. Among the color metrics, those based on color difference formulas (ΔĒ, ΔĒS and CSSIM) perform very poorly. This is partially due to the presence of the gamut-mismatch artifact: as shown in Table 4, discarding this artifact increases the performance of these metrics. For the participants of our subjective test, the distortions on these images were clearly visible but were not directly associated with a loss of perceived quality.

4.4. Korshunov et al. Database

The Korshunov et al. database is the least selective one (cf. Figure 11). Most of the metrics have high correlation coefficients, and the choice of color space has almost no impact on the performance, especially for the color-blind metrics. Even using a non-perceptually-uniform color space such as YCrCb only moderately impacts the performance of MS-SSIM, FSIM, PSNR-HVS-M and PSNR-HMA. For this database, the best-performing color-blind metrics are FSIM Jzazbz, FSIM HDR-Lab1000 and MS-SSIM Jzazbz.

4.5. Narwaria et al. Database

On the Narwaria et al. database (cf. Figure 12), Jzazbz is the best color space for SSIM and MS-SSIM, while the PU-encoding and HDR-Lab100 are the best color spaces for FSIM. The best metrics for this database are MS-SSIM Jzazbz, HDR-VDP2 and HDR-VQM. The good performance of HDR-VDP2 was expected, because this database was part of its training set. For this database, the performance of PSNR and PSNR-HVS-M is relatively low compared to the other databases. The fact that PSNR-HMA with an adequate color space significantly outperforms PSNR-HVS-M suggests that the backward-compatible compression used by Narwaria et al. (Section 3.1.1) creates distortions that affect the mean luminance and the contrast of the images. Indeed, PSNR-HMA is an improvement of PSNR-HVS-M that takes these two kinds of artifacts into account [50].

4.6. Results Summary

For all the studied databases, HDR-VDP2 generally performs well, although it is not always among the top three metrics (cf. Appendix B). FSIM and MS-SSIM with an appropriate perceptually uniform color space are often on par with, if not better than, HDR-VDP2.
Among all metrics, FSIM is the least sensitive to the choice of color space, provided that this color space is perceptually uniform.
The color extension of FSIM, namely FSIMc, does not improve the performance of FSIM, even on our proposed database 4Kdtb, which focuses on chromatic distortions. Worse, the metric becomes much more sensitive to the choice of color space. We observe the same behavior for the color extension of PSNR-HMA, PSNR-HMAc, which decreases the performance of the metric for every color space.
When using the two non-uniform color spaces XYZ and YCrCb, the performance of all metrics drops significantly compared to the other color spaces, for all databases and especially for our proposed 4Kdtb, the Zerman et al. database and the Narwaria et al. database. This emphasizes the importance of perceptually uniform color spaces for predicting the quality of HDR images.

5. General Discussion

We separate our general discussion into two parts. First, we study the impact of the color space on metric performance, emphasizing the influence of the diffuse white luminance. As a reminder, the luminance of the diffuse white corresponds to the luminance of a 100% reflectance white card without any specular highlight; in HDR imaging, it differs from the peak brightness. In the second part of our analysis, we discuss the sensitivity of the color metrics to chrominance artifacts, using our proposed database 4Kdtb.

5.1. Impact of the Diffuse White Luminance

Our results suggest that the best color space for assessing the quality of HDR images depends on the test database. Indeed, some of the color spaces are adapted and tuned for one particular viewing condition.
The HDR-Lab color space takes two important parameters into account, namely the diffuse white and the surround luminance. Moreover, the final equation of the Jzazbz color space (Equation (13)) was tuned on the experimental dataset SL2 [20], which was obtained for a diffuse white at 997 cd/m². This explains why the Jzazbz luminance behaves similarly to the HDR-Lab1000 luminance (cf. Figure 2).
The PU function and the ICtCp color space were not obtained through the same kind of training: they were created using Daly's contrast sensitivity function model [51] and Barten's contrast sensitivity function model [52], respectively. However, Figure 2, which plots the encoded luminance of the different color spaces as a function of the linear luminance, suggests that the ICtCp and PU-encoded luminances behave more like the HDR-Lab100 luminance than like the Jzazbz or HDR-Lab1000 luminances.
Because the color spaces are adapted to different viewing conditions, it is not easy to determine the best one.
  • On the proposed database 4Kdtb, the color spaces with a diffuse white around 100 cd/m² (ICtCp, HDR-Lab100 and the PU-mapping) give better performance than the Jzazbz and HDR-Lab1000 spaces. We also observe that the performance of the color metrics is more sensitive to the choice of color space.
  • We draw a similar conclusion for the Zerman et al. database, except for FSIM and FSIMc (cf. Figure 9): these two metrics are less sensitive to the color space on this database.
  • On the proposed database HDdtb (cf. Figure 10), the Jzazbz color space provides the lowest performance for the PSNR, SSIM, MS-SSIM and PSNR-HMA metrics, but the highest performance with FSIM and FSIMc. However, the PSNR, SSIM and PSNR-HMA metrics based on the HDR-Lab1000 and HDR-Lab100 color spaces perform better than the same metrics using the Jzazbz color space. This suggests that the low performance of these metrics is not due to the diffuse white characteristics of the images, but to the design of the Jzazbz color space, which corrects a deviation in the perception of the blue hue (cf. Equation (9)). To test this hypothesis, we measured the SROCC of these metrics on the HDdtb database with the Jzazbz color space without the blue deviation correction; we call this new space J̃zazbz. The results, shown in Table 5, indicate that the SROCC values of the three aforementioned metrics increase with the J̃zazbz color space. In addition, metrics using this modified color space provide performance similar to metrics based on the HDR-Lab1000 color space, which is consistent with the fact that HDR-Lab1000 and Jzazbz are adapted to almost the same diffuse white luminance. The remaining difference might be due to the presence of the gamut-mismatch artifact in this database: this artifact creates visible distortions that were not associated with a subjective quality loss during our test, and we suspect that the blue hue deviation correction makes the Jzazbz color space more sensitive to this distortion. However, this is difficult to demonstrate because few images in this database exhibit this kind of artifact.
  • On the Narwaria et al. database, it is difficult to draw a conclusion (cf. Figure 12). The MS-SSIM and SSIM metrics perform better with the Jzazbz color space, whereas the FSIM and PSNR-HMA metrics perform better with the ICtCp color space. This mixed result might be due to the fact that the diffuse white luminance is likely not homogeneous across the entire database.
To go further into the analysis, we evaluate the impact of the diffuse white on the performance of the HDR-Lab metrics. The SROCC of three metrics (FSIM, MS-SSIM and SSIM) is evaluated for a diffuse white ranging from 80 to 1000 cd/m². The results are plotted in Figure 13. For FSIM, the performance decreases slightly as the diffuse white luminance increases on the 4Kdtb and Narwaria et al. databases, while it increases with the diffuse white on HDdtb. The impact of the diffuse white is stronger on MS-SSIM: for example, on the Zerman et al. database, the SROCC drops from 0.9143 to 0.7791. The impact on SSIM is of the same order of magnitude as on MS-SSIM.
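The structure of this sweep can be sketched as follows. The lightness function below is only a stand-in (a log encoding anchored at the diffuse white), not the actual HDR-Lab formula, and the data, image sizes and the simple MSE-based distortion score are all illustrative assumptions; the harness only mirrors the experimental loop behind Figure 13:

```python
import numpy as np
from scipy.stats import spearmanr

def lightness(y, diffuse_white):
    # Stand-in perceptual lightness anchored at the diffuse white.
    # NOT the HDR-Lab formula; it only mimics the role of the diffuse
    # white as the normalization point of the encoding.
    return np.log1p(np.clip(y, 0.0, None) / diffuse_white)

def srocc_per_diffuse_white(refs, dists, mos, whites):
    """Re-encode all image pairs for each candidate diffuse white and
    report the SROCC between a simple distortion score and the MOS."""
    result = []
    for w in whites:
        scores = [-np.mean((lightness(r, w) - lightness(d, w)) ** 2)
                  for r, d in zip(refs, dists)]          # higher = better
        result.append(spearmanr(scores, mos)[0])
    return result

# Synthetic data: 6 "images" with increasing amounts of noise, so the
# MOS decreases with distortion strength (all values are illustrative).
rng = np.random.default_rng(1)
refs = [rng.uniform(0.1, 1000.0, (32, 32)) for _ in range(6)]
dists = [r + rng.normal(0.0, 5.0 * (i + 1), r.shape)
         for i, r in enumerate(refs)]
mos = np.array([4.8, 4.2, 3.6, 3.0, 2.4, 1.8])
curve = srocc_per_diffuse_white(refs, dists, mos, whites=[80, 100, 500, 1000])
```

In the real experiment, the encoding step is the HDR-Lab conversion at each candidate diffuse white and the distortion score is the metric under test (FSIM, MS-SSIM or SSIM).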

5.2. Sensitivity to Chrominance Distortions

In this section, we discuss the ability of the color metrics to take chrominance artifacts into account. The discussion focuses on the 4Kdtb database, which is the only database providing significant chrominance artifacts. We only consider metrics using the ICtCp color space, since the best performance is observed with this color space. Figure 14 presents the Mean Opinion Scores (MOS) and the objective scores of the distorted versions (compressed with HEVC) of the reference image "Regatta_24s". The objective scores are given after applying the logistic regression presented in Section 4. Results for the other reference images can be found in Appendix C.
There is a clear difference in perceived quality between the images compressed with the chroma Qp adaptation (cf. Section 3.1.2) (red line) and the images compressed without the chroma Qp adaptation and with an 8-bit quantization of the chrominance (blue line). The MOS of the images compressed without the chroma Qp adaptation but with a 10-bit quantization (green line) lie between those of the two previous encodings.
As expected, the color-blind metrics, i.e., HDR-VDP2 and FSIM, are not sensitive at all to the chrominance distortions. More surprisingly, the color extension of FSIM, namely FSIMc, is not sensitive to the generated chrominance artifacts either. The metric was tailored for images in a BT.709 gamut with an SDR range; its insensitivity to chrominance might be due to the pre-defined constants used for the color comparisons [29].
The other color metrics, i.e., ΔĒS, SSIMc and CSSIM, are more sensitive to the chrominance artifacts. However, SSIMc and CSSIM tend to underestimate the influence of chrominance artifacts for images compressed with a low Qp (hence low distortion in the luminance channel) and an 8-bit quantization of the chrominance (cf. Figure A7, Figure A9, Figure A10, Figure A11 and Figure A13).

6. Recommendations

In this section, we give some recommendations for assessing the quality of HDR/WCG content in the context of image/video compression:
  • For assessing the impact of luminance distortions, we recommend using the FSIM metric. It is one of the best-performing metrics and the least sensitive to the choice of color space and to the diffuse white of the images. Using the color extension of the metric (FSIMc) does not bring significant added value; moreover, FSIMc is sensitive to the choice of color space (cf. Figure 8).
  • For assessing the impact of chrominance distortions, we recommend using the ΔĒS ICtCp metric.
  • For assessing the impact of both luminance and chrominance distortions, we recommend using both the FSIM and ΔĒS ICtCp metrics.
  • To choose the color space, we recommend taking into account the diffuse white used during the color grading of the images. If the producer of the content follows ITU recommendation BT.2408 [53], which defines the diffuse white luminance at 203 cd/m², we recommend using the ICtCp color space, which is well adapted to a low diffuse white value. Conversely, the Jzazbz color space is appropriate for a diffuse white luminance of 997 cd/m². Another benefit of the ICtCp color space is its direct compatibility with popular compression codecs such as HEVC.
  • For applications where computation time and complexity are critical, we recommend being very careful with the choice of color space. The simplest metrics, such as PSNR and SSIM, are much more sensitive than FSIM to the diffuse white luminance.
  • If the chosen metric is PSNR, we recommend first verifying that the tested image/video processing application, such as a compression codec, does not create mean luminance shifts or contrast changes. These artifacts can be induced by backward-compatible compression (where the image is first tone-mapped, then compressed using a legacy codec and finally tone-expanded).
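To illustrate the ICtCp-based color difference recommended above, the sketch below converts linear BT.2020 RGB to ICtCp following ITU-R BT.2100 (PQ constants from SMPTE ST 2084) and computes a plain Euclidean ΔE averaged over pixels. The helper names are ours, and the ΔĒS metric used in this article adds spatial processing that is not reproduced here; this is only the color-space core:

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

# ITU-R BT.2100 matrices (input: linear BT.2020 RGB).
RGB_TO_LMS = np.array([[1688, 2146, 262],
                       [683, 2951, 462],
                       [99, 309, 3688]]) / 4096
LMS_TO_ICTCP = np.array([[2048, 2048, 0],
                         [6610, -13613, 7003],
                         [17933, -17390, -543]]) / 4096

def pq(y):
    """PQ non-linearity; y is absolute luminance / 10000 cd/m^2."""
    yp = np.power(np.clip(y, 0.0, 1.0), M1)
    return np.power((C1 + C2 * yp) / (1.0 + C3 * yp), M2)

def rgb_to_ictcp(rgb):
    """Linear BT.2020 RGB (1.0 = 10000 cd/m^2) -> ICtCp, per BT.2100."""
    lms = rgb @ RGB_TO_LMS.T
    return pq(lms) @ LMS_TO_ICTCP.T

def delta_e(ref_rgb, dist_rgb):
    # Plain Euclidean distance in ICtCp, averaged over pixels -- a sketch
    # of a DeltaE-style color metric, not the paper's exact formulation.
    d = rgb_to_ictcp(ref_rgb) - rgb_to_ictcp(dist_rgb)
    return np.sqrt((d ** 2).sum(axis=-1)).mean()
```

A useful property for checking an implementation: for achromatic pixels (R = G = B), the rows of the matrices guarantee Ct = Cp = 0, so any non-zero chroma for gray input signals a conversion bug.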
Because of the characteristics of the tested databases, these recommendations apply to the context of image/video compression. Different subjective tests would be required to extend the analysis to other kinds of distortion.

7. Conclusions

In this article, we reviewed the relevance of using SDR metrics with perceptually uniform color spaces to assess the quality of HDR/WCG content. We studied twelve different metrics along with six different color spaces. To evaluate the performance of these metrics, we used three existing HDR image databases annotated with MOS and created two new databases specifically dedicated to WCG and chrominance artifacts. We showed that using perceptually uniform color spaces increases, in most cases, the performance of SDR metrics on HDR/WCG content.
This study also highlights two weaknesses of state-of-the-art metrics. First, the relationship between the diffuse white used for grading an image and the diffuse white assumed by the color space is not always easy to define. In many cases, the value of the diffuse white used for grading is unknown, and choosing an arbitrary diffuse white for the color space may significantly alter the objective quality assessment. Further analysis of this relationship is required; a better understanding could help to evaluate the compression of images using the HLG EOTF, for which the diffuse white depends on the display. Second, to the best of our knowledge, the quality assessment of HDR/WCG images with chrominance distortions is still an open issue, because of the lack of relevant objective metrics.
From a broader perspective, the relevance of the subjective tests themselves can also be questioned. For example, on the proposed database HDdtb, viewers did not perceive the gamut-mismatch artifact as a loss of quality, although this kind of artifact completely changes the appearance of the images. Other artifacts could also alter the image appearance, such as the tone mapping/tone expansion used in backward-compatible compression. In some cases, asking viewers to assess not only the quality of the images but also their fidelity to the original appearance can be valuable to fully evaluate image processing algorithms.

Supplementary Materials

The following are available online at https://www.mdpi.com/2313-433X/5/1/18/s1.

Author Contributions

Conceptualization, M.R.; Data curation, M.R.; Formal analysis, M.R.; Investigation, M.R.; Methodology, M.R.; Project administration, O.L.M., R.C. and X.D.; Resources, M.R., O.L.M., R.C. and X.D.; Software, M.R.; Supervision, O.L.M., R.C. and X.D.; Validation, O.L.M., R.C. and X.D.; Writing—original draft, M.R.; Writing—review & editing, O.L.M., R.C. and X.D.

Funding

This research received no external funding.

Acknowledgments

Thanks to all the participants who accepted to take part in the two proposed subjective tests.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HDR: High Dynamic Range
SDR: Standard Dynamic Range
WCG: Wide Color Gamut
HVS: Human Visual System
EOTF: Electro-Optical Transfer Function
OETF: Opto-Electronic Transfer Function
PU: Perceptually Uniform
PQ: Perceptual Quantizer
HLG: Hybrid Log Gamma
CSF: Contrast Sensitivity Function
DSIS: Double-Stimulus Impairment Scale
ACR-HR: Absolute Category Rating with Hidden Reference
SI: Spatial Information
DR: Dynamic Range
SROCC: Spearman Rank Order Correlation Coefficient
PCC: Pearson Correlation Coefficient
OR: Outlier Ratio
MSE: Mean Square Error
RMSE: Root Mean Square Error
PSNR: Peak Signal-to-Noise Ratio
SSIM: Structural SIMilarity
MS-SSIM: MultiScale Structural SIMilarity
FSIM: Feature SIMilarity
VIF: Visual Information Fidelity
HDR-VDP: High Dynamic Range Visual Difference Predictor
HDR-VQM: High Dynamic Range Video Quality Metric
TMO: Tone Mapping Operator
JPEG: Joint Photographic Experts Group
HEVC: High Efficiency Video Coding
ITU: International Telecommunication Union
DVB: Digital Video Broadcasting

Appendix A. Database Images

The following images are tone-mapped versions of all the reference images used. The tone mapping operator (TMO) used to produce these images is the Reinhard et al. TMO [54]; we used its Matlab implementation from the HDR Toolbox [55]. The same content can have a different rendering across databases (like FireEater) because the Reinhard TMO was applied indifferently to BT.709 and BT.2020 content.

Appendix A.1. Narwaria et al.

Figure A1. Tone-mapped version (Reinhard et al. TMO [54]) of Narwaria et al. reference images. From left to right and from top to bottom: (a) Apartment_float_o15C (b) bausch_lot (c) carpark_ivc (d) CD1_serie2 (e) forest_path (f) lake (g) LightHouse072 (h) moto (i) office_ivc (j) outro022168.

Appendix A.2. Zerman et al.

Figure A2. Tone-mapped version (Reinhard et al. TMO [54]) of Zerman et al. reference images. From left to right and from top to bottom: (a) AirBellowsGap (b) Balloon (c) FireEater (d) LasVegasStore (e) Market3 (f) MasonLake(1) (g) RedwoodSunset (h) Showgirl (i) Typewriter (j) UpheavalDome.

Appendix A.3. Korshunov et al.

Figure A3. Tone-mapped version (Reinhard et al. TMO [54]) of Korshunov et al. reference images. From left to right and from top to bottom: (a) 507 (b) BloomingGorse2 (c) CanadianFalls (d) DevilsBathtub (e) dragon_3 (f) HancockKitchenInside (g) LabTypewriter (h) LasVegasStore (i) McKeesPub (j) MtRushmore2 (k) set18 (l) set22 (m) set23 (n) set24 (o) set31 (p) set33 (q) set70 (r) showgirl (s) sintel_2 (t) WillyDesk.

Appendix A.4. HDdtb

Figure A4. Tone-mapped version (Reinhard et al. TMO [54]) of the HDdtb reference images. From left to right and from top to bottom: (a) FireEater (b) LasVegasStore (c) Market3 (d) MasonLake(1) (e) RedwoodSunset (f) Showgirl (g) Typewriter (h) UpheavalDome.

Appendix A.5. 4Kdtb

Figure A5. Tone-mapped version (Reinhard et al. TMO [54]) of the 4Kdtb reference images. From left to right and from top to bottom: (a) Bike_20s (b) Bike_30s (c) Bike_81s (d) Bike_110s (e) Regatta_11s (f) Regatta_24s (g) Regatta_80s (h) Regatta_95s.

Appendix B. Performance Indexes

In this appendix, we present the numerical values of the performance indexes defined in Section 3.2 for each quality metric on each database. The metric with the best SROCC and PCC performance is shown in red, the second in blue and the third in green.
Table A1. PCC of the different color-blind quality metrics on the considered databases.

Quality Metric | 4Kdtb | HDdtb | Narwaria et al. | Korshunov et al. | Zerman et al.
PU-PSNR | 0.6964 | 0.8002 | 0.5831 | 0.8597 | 0.8188
PSNR HDR-Lab100 | 0.6724 | 0.7950 | 0.5562 | 0.8602 | 0.8042
PSNR HDR-Lab1000 | 0.4711 | 0.7320 | 0.5344 | 0.8004 | 0.7023
PSNR ICtCp | 0.7231 | 0.7948 | 0.6036 | 0.8697 | 0.8546
PSNR Jzazbz | 0.5431 | 0.6899 | 0.5533 | 0.7999 | 0.7002
PSNR XYZ | 0.1890 | 0.6416 | 0.4627 | 0.7017 | 0.5612
PSNR YCrCb | 0.2674 | 0.6755 | 0.4996 | 0.7635 | 0.6428
PU-SSIM | 0.6962 | 0.8520 | 0.7567 | 0.9265 | 0.8262
SSIM HDR-Lab100 | 0.6966 | 0.8448 | 0.7805 | 0.9243 | 0.8010
SSIM HDR-Lab1000 | 0.6025 | 0.7677 | 0.7247 | 0.9021 | 0.6596
SSIM ICtCp | 0.6838 | 0.8366 | 0.7572 | 0.9296 | 0.8522
SSIM Jzazbz | 0.6580 | 0.7851 | 0.7990 | 0.9174 | 0.6923
SSIM XYZ | 0.2755 | 0.6174 | 0.6167 | 0.7786 | 0.4863
SSIM YCrCb | 0.3435 | 0.6729 | 0.6718 | 0.8423 | 0.5343
PU-MS-SSIM | 0.8479 | 0.8881 | 0.8756 | 0.9631 | 0.9324
MS-SSIM HDR-Lab100 | 0.8451 | 0.8899 | 0.8448 | 0.9606 | 0.9253
MS-SSIM HDR-Lab1000 | 0.7792 | 0.8395 | 0.8680 | 0.9068 | 0.7633
MS-SSIM ICtCp | 0.8382 | 0.8763 | 0.8846 | 0.9575 | 0.9410
MS-SSIM Jzazbz | 0.8337 | 0.8603 | 0.9157 | 0.9694 | 0.8013
MS-SSIM XYZ | 0.4377 | 0.6422 | 0.6316 | 0.8619 | 0.6258
MS-SSIM YCrCb | 0.5409 | 0.7062 | 0.7091 | 0.9126 | 0.6443
PU-FSIM | 0.9000 | 0.8443 | 0.8773 | 0.9568 | 0.8988
FSIM HDR-Lab100 | 0.8950 | 0.8476 | 0.8726 | 0.9540 | 0.9120
FSIM HDR-Lab1000 | 0.8416 | 0.8923 | 0.8195 | 0.9733 | 0.9133
FSIM ICtCp | 0.8992 | 0.8234 | 0.8654 | 0.9471 | 0.8883
FSIM Jzazbz | 0.8829 | 0.9187 | 0.8466 | 0.9724 | 0.9059
FSIM XYZ | 0.5817 | 0.7372 | 0.6546 | 0.9015 | 0.7402
FSIM YCrCb | 0.7054 | 0.8066 | 0.7445 | 0.9215 | 0.8148
PU-PSNR-HVS-M | 0.8169 | 0.7963 | 0.6090 | 0.9210 | 0.9120
PSNR-HVS-M HDR-Lab100 | 0.8200 | 0.8023 | 0.5942 | 0.9218 | 0.9110
PSNR-HVS-M HDR-Lab1000 | 0.6674 | 0.7088 | 0.5874 | 0.9252 | 0.8588
PSNR-HVS-M ICtCp | 0.8187 | 0.7762 | 0.6269 | 0.9297 | 0.9226
PSNR-HVS-M Jzazbz | 0.7316 | 0.6624 | 0.5807 | 0.9120 | 0.8310
PSNR-HVS-M XYZ | 0.3016 | 0.6320 | 0.4517 | 0.8102 | 0.6716
PSNR-HVS-M YCrCb | 0.4214 | 0.6540 | 0.5009 | 0.8972 | 0.7915
PU-PSNR-HMA | 0.8244 | 0.7873 | 0.7625 | 0.9364 | 0.8893
PSNR-HMA HDR-Lab100 | 0.8129 | 0.7693 | 0.7647 | 0.9360 | 0.8850
PSNR-HMA HDR-Lab1000 | 0.6676 | 0.7545 | 0.7221 | 0.9368 | 0.8516
PSNR-HMA ICtCp | 0.8410 | 0.8140 | 0.7904 | 0.9310 | 0.9230
PSNR-HMA Jzazbz | 0.7298 | 0.7592 | 0.7427 | 0.9261 | 0.8322
PSNR-HMA XYZ | 0.3022 | 0.6750 | 0.5069 | 0.8574 | 0.6794
PSNR-HMA YCrCb | 0.4216 | 0.7203 | 0.6406 | 0.8942 | 0.7933
HDR-VDP2 | 0.8605 | 0.8715 | 0.9130 | 0.9518 | 0.9385
HDR-VQM | 0.7714 | 0.8676 | 0.9061 | 0.9612 | 0.9304
PU-VIF | 0.8722 | 0.7645 | 0.7571 | 0.9215 | 0.8919
Table A2. PCC of the different color quality metrics on the considered databases.

Quality Metric | 4Kdtb | HDdtb | Narwaria et al. | Korshunov et al. | Zerman et al.
ΔĒ HDR-Lab100 | 0.4582 | 0.2502 | 0.6407 | 0.7629 | 0.6012
ΔĒ HDR-Lab1000 | 0.2280 | 0.2559 | 0.6106 | 0.7024 | 0.5133
ΔĒ ICtCp | 0.6783 | 0.2548 | 0.6277 | 0.8065 | 0.7508
ΔĒ Jzazbz | 0.4058 | 0.2952 | 0.6436 | 0.5536 | 0.5392
ΔĒ XYZ | 0.2184 | 0.3438 | 0.3375 | 0.5993 | 0.3564
ΔĒ YCrCb | 0.2336 | 0.3083 | 0.3220 | 0.6873 | 0.4287
ΔĒS HDR-Lab100 | 0.7513 | 0.1135 | 0.6686 | 0.7334 | 0.7464
ΔĒS HDR-Lab1000 | 0.4662 | 0.1027 | 0.5763 | 0.7470 | 0.6723
ΔĒS ICtCp | 0.7885 | 0.1355 | 0.5541 | 0.7639 | 0.7655
ΔĒS Jzazbz | 0.6103 | 0.1331 | 0.5520 | 0.7556 | 0.6980
ΔĒS XYZ | 0.2748 | 0.2698 | 0.2761 | 0.7152 | 0.4372
ΔĒS YCrCb | 0.3093 | 0.2047 | 0.2527 | 0.7250 | 0.6325
SSIMc HDR-Lab100 | 0.5246 | 0.7050 | 0.7485 | 0.8845 | 0.7126
SSIMc HDR-Lab1000 | 0.3120 | 0.6618 | 0.7886 | 0.8664 | 0.5507
SSIMc ICtCp | 0.7376 | 0.7764 | 0.7505 | 0.9176 | 0.8275
SSIMc Jzazbz | 0.5108 | 0.6842 | 0.8153 | 0.8914 | 0.6253
SSIMc XYZ | 0.2596 | 0.6020 | 0.6292 | 0.7713 | 0.4811
SSIMc YCrCb | 0.2851 | 0.6187 | 0.7471 | 0.8354 | 0.5185
CSSIM HDR-Lab100 | 0.7991 | 0.5193 | 0.7784 | 0.8929 | 0.7762
CSSIM HDR-Lab1000 | 0.5440 | 0.4249 | 0.6696 | 0.8828 | 0.6644
CSSIM ICtCp | 0.7699 | 0.5137 | 0.7354 | 0.9152 | 0.8372
CSSIM Jzazbz | 0.6812 | 0.4872 | 0.6180 | 0.9197 | 0.7025
CSSIM XYZ | 0.2174 | 0.3671 | 0.4213 | 0.7590 | 0.4856
CSSIM YCrCb | 0.3065 | 0.4367 | 0.5317 | 0.8788 | 0.5649
FSIMc HDR-Lab100 | 0.8473 | 0.8598 | 0.8680 | 0.9360 | 0.9131
FSIMc HDR-Lab1000 | 0.6891 | 0.8654 | 0.8343 | 0.9633 | 0.9055
FSIMc ICtCp | 0.9080 | 0.8086 | 0.8687 | 0.9453 | 0.8894
FSIMc Jzazbz | 0.8355 | 0.9162 | 0.8566 | 0.9717 | 0.9092
FSIMc XYZ | 0.5848 | 0.7337 | 0.6609 | 0.9029 | 0.7415
FSIMc YCrCb | 0.6783 | 0.7923 | 0.7656 | 0.9541 | 0.8162
PSNR-HMAc HDR-Lab100 | 0.5602 | 0.4489 | 0.6744 | 0.7533 | 0.7229
PSNR-HMAc HDR-Lab1000 | 0.3573 | 0.3986 | 0.6701 | 0.7275 | 0.6457
PSNR-HMAc ICtCp | 0.7540 | 0.6467 | 0.7899 | 0.8608 | 0.8195
PSNR-HMAc Jzazbz | 0.4963 | 0.4749 | 0.7319 | 0.8196 | 0.7213
PSNR-HMAc XYZ | 0.2057 | 0.5711 | 0.5884 | 0.8328 | 0.6327
PSNR-HMAc YCrCb | 0.3580 | 0.6153 | 0.7185 | 0.8942 | 0.7432
Table A3. SROCC of the different color-blind quality metrics on the considered databases.

Quality Metric | 4Kdtb | HDdtb | Narwaria et al. | Korshunov et al. | Zerman et al.
PU-PSNR | 0.7261 | 0.7802 | 0.5331 | 0.8597 | 0.8266
PSNR HDR-Lab100 | 0.6596 | 0.7751 | 0.4975 | 0.8602 | 0.8147
PSNR HDR-Lab1000 | 0.4673 | 0.7587 | 0.4197 | 0.8078 | 0.7086
PSNR ICtCp | 0.7419 | 0.7745 | 0.5736 | 0.8742 | 0.8508
PSNR Jzazbz | 0.5531 | 0.6933 | 0.4634 | 0.8102 | 0.7001
PSNR XYZ | 0.2131 | 0.6311 | 0.4601 | 0.7216 | 0.5682
PSNR YCrCb | 0.2519 | 0.6687 | 0.4157 | 0.7756 | 0.6493
PU-SSIM | 0.7066 | 0.8430 | 0.7240 | 0.9280 | 0.8316
SSIM HDR-Lab100 | 0.6977 | 0.8355 | 0.7494 | 0.9253 | 0.8090
SSIM HDR-Lab1000 | 0.6054 | 0.7904 | 0.7247 | 0.9085 | 0.6851
SSIM ICtCp | 0.6752 | 0.8245 | 0.7231 | 0.9307 | 0.8618
SSIM Jzazbz | 0.6492 | 0.7813 | 0.7721 | 0.9181 | 0.7073
SSIM XYZ | 0.1965 | 0.6488 | 0.5938 | 0.7746 | 0.5065
SSIM YCrCb | 0.3027 | 0.6926 | 0.6376 | 0.8421 | 0.5563
PU-MS-SSIM | 0.8517 | 0.8640 | 0.8656 | 0.9583 | 0.9165
MS-SSIM HDR-Lab100 | 0.8448 | 0.8708 | 0.8200 | 0.9567 | 0.9143
MS-SSIM HDR-Lab1000 | 0.7684 | 0.8505 | 0.8528 | 0.9600 | 0.7791
MS-SSIM ICtCp | 0.8447 | 0.8464 | 0.8714 | 0.9529 | 0.9260
MS-SSIM Jzazbz | 0.8306 | 0.8557 | 0.9088 | 0.9648 | 0.8109
MS-SSIM XYZ | 0.4334 | 0.6864 | 0.6092 | 0.8646 | 0.6104
MS-SSIM YCrCb | 0.5202 | 0.7296 | 0.6846 | 0.9124 | 0.6499
PU-FSIM | 0.9054 | 0.8149 | 0.8773 | 0.9553 | 0.8912
FSIM HDR-Lab100 | 0.8994 | 0.8239 | 0.8726 | 0.9553 | 0.9091
FSIM HDR-Lab1000 | 0.8420 | 0.8799 | 0.8195 | 0.9692 | 0.9087
FSIM ICtCp | 0.9049 | 0.8099 | 0.8654 | 0.9477 | 0.8863
FSIM Jzazbz | 0.8849 | 0.9069 | 0.8466 | 0.9663 | 0.9031
FSIM XYZ | 0.5732 | 0.7546 | 0.6316 | 0.8986 | 0.7393
FSIM YCrCb | 0.7052 | 0.8153 | 0.7264 | 0.9415 | 0.8190
PU-PSNR-HVS-M | 0.8401 | 0.7803 | 0.5624 | 0.9331 | 0.9035
PSNR-HVS-M HDR-Lab100 | 0.8110 | 0.7856 | 0.5455 | 0.9333 | 0.9028
PSNR-HVS-M HDR-Lab1000 | 0.6607 | 0.7508 | 0.4557 | 0.9311 | 0.8486
PSNR-HVS-M ICtCp | 0.8452 | 0.7554 | 0.5846 | 0.9308 | 0.9066
PSNR-HVS-M Jzazbz | 0.7315 | 0.6501 | 0.4963 | 0.9230 | 0.8286
PSNR-HVS-M XYZ | 0.2891 | 0.6314 | 0.4392 | 0.8449 | 0.6793
PSNR-HVS-M YCrCb | 0.3922 | 0.6670 | 0.4157 | 0.9102 | 0.7954
PU-PSNR-HMA | 0.8403 | 0.8218 | 0.7634 | 0.9369 | 0.9041
PSNR-HMA HDR-Lab100 | 0.8114 | 0.8167 | 0.7458 | 0.9365 | 0.9034
PSNR-HMA HDR-Lab1000 | 0.6607 | 0.7984 | 0.6907 | 0.9384 | 0.8493
PSNR-HMA ICtCp | 0.8455 | 0.8011 | 0.7696 | 0.9343 | 0.9076
PSNR-HMA Jzazbz | 0.7287 | 0.7664 | 0.7094 | 0.9339 | 0.8294
PSNR-HMA XYZ | 0.2895 | 0.6773 | 0.4979 | 0.8692 | 0.6831
PSNR-HMA YCrCb | 0.3926 | 0.7160 | 0.6246 | 0.9236 | 0.7954
HDR-VDP2 | 0.8678 | 0.8685 | 0.8906 | 0.9516 | 0.9289
HDR-VQM | 0.7735 | 0.8330 | 0.8995 | 0.9572 | 0.9170
PU-VIF | 0.8658 | 0.7464 | 0.7704 | 0.9322 | 0.8863
Table A4. SROCC of the different color quality metrics on the considered databases.

| Quality Metric | 4Kdtb | HDdtb | Narwaria et al. | Korshunov et al. | Zerman et al. |
|---|---|---|---|---|---|
| ΔĒ HDR-Lab100 | 0.4807 | 0.2578 | 0.6490 | 0.7523 | 0.6394 |
| ΔĒ HDR-Lab1000 | 0.2123 | 0.2418 | 0.6179 | 0.6945 | 0.5259 |
| ΔĒ ICtCp | 0.6846 | 0.3401 | 0.6218 | 0.8448 | 0.7599 |
| ΔĒ Jzazbz | 0.3008 | 0.2994 | 0.6339 | 0.6694 | 0.5602 |
| ΔĒ XYZ | 0.0739 | 0.3949 | 0.3896 | 0.6850 | 0.4436 |
| ΔĒ YCrCb | 0.1549 | 0.3992 | 0.4377 | 0.7297 | 0.4999 |
| ΔĒS HDR-Lab100 | 0.7827 | 0.2784 | 0.7181 | 0.8508 | 0.7559 |
| ΔĒS HDR-Lab1000 | 0.4898 | 0.2585 | 0.6207 | 0.8524 | 0.6851 |
| ΔĒS ICtCp | 0.7911 | 0.2606 | 0.6195 | 0.8635 | 0.7892 |
| ΔĒS Jzazbz | 0.6396 | 0.2804 | 0.5855 | 0.8771 | 0.7283 |
| ΔĒS XYZ | 0.1706 | 0.3651 | 0.3911 | 0.8130 | 0.5665 |
| ΔĒS YCrCb | 0.2392 | 0.3485 | 0.3927 | 0.8674 | 0.6365 |
| SSIMc HDR-Lab100 | 0.5184 | 0.7085 | 0.7212 | 0.8873 | 0.7535 |
| SSIMc HDR-Lab1000 | 0.2991 | 0.6641 | 0.7643 | 0.8943 | 0.6047 |
| SSIMc ICtCp | 0.7437 | 0.7748 | 0.7273 | 0.9174 | 0.8545 |
| SSIMc Jzazbz | 0.5059 | 0.7134 | 0.7926 | 0.8943 | 0.6740 |
| SSIMc XYZ | 0.1785 | 0.6325 | 0.6064 | 0.7670 | 0.5065 |
| SSIMc YCrCb | 0.2259 | 0.6393 | 0.7044 | 0.8392 | 0.5443 |
| CSSIM HDR-Lab100 | 0.7972 | 0.4065 | 0.7605 | 0.8981 | 0.7834 |
| CSSIM HDR-Lab1000 | 0.5369 | 0.3257 | 0.6322 | 0.8857 | 0.6813 |
| CSSIM ICtCp | 0.7712 | 0.4696 | 0.7212 | 0.9173 | 0.8464 |
| CSSIM Jzazbz | 0.6730 | 0.4242 | 0.6181 | 0.9197 | 0.7037 |
| CSSIM XYZ | 0.1717 | 0.3592 | 0.3054 | 0.7713 | 0.4995 |
| CSSIM YCrCb | 0.2657 | 0.4307 | 0.3830 | 0.8805 | 0.5805 |
| FSIMc HDR-Lab100 | 0.8510 | 0.8531 | 0.8548 | 0.9332 | 0.9068 |
| FSIMc HDR-Lab1000 | 0.6835 | 0.8560 | 0.8196 | 0.9575 | 0.9025 |
| FSIMc ICtCp | 0.9127 | 0.7892 | 0.8585 | 0.9449 | 0.8852 |
| FSIMc Jzazbz | 0.8371 | 0.9065 | 0.8485 | 0.9657 | 0.9046 |
| FSIMc XYZ | 0.5784 | 0.7483 | 0.6376 | 0.8999 | 0.7413 |
| FSIMc YCrCb | 0.6799 | 0.7966 | 0.7512 | 0.9500 | 0.8196 |
| PSNR-HMAc HDR-Lab100 | 0.5533 | 0.4042 | 0.6592 | 0.7664 | 0.7337 |
| PSNR-HMAc HDR-Lab1000 | 0.3534 | 0.3394 | 0.7138 | 0.7446 | 0.6589 |
| PSNR-HMAc ICtCp | 0.7618 | 0.6373 | 0.7585 | 0.8638 | 0.8418 |
| PSNR-HMAc Jzazbz | 0.4893 | 0.4293 | 0.7065 | 0.8287 | 0.7301 |
| PSNR-HMAc XYZ | 0.2282 | 0.5431 | 0.5565 | 0.8455 | 0.6315 |
| PSNR-HMAc YCrCb | 0.3443 | 0.5669 | 0.6851 | 0.9025 | 0.7486 |
Table A5. OR of the different color-blind quality metrics on the considered databases.

| Quality Metric | 4Kdtb | HDdtb | Narwaria et al. | Korshunov et al. | Zerman et al. |
|---|---|---|---|---|---|
| PU-PSNR | 0.6354 | 0.5729 | 0.7714 | 0.5833 | 0.6400 |
| PSNR HDR-Lab100 | 0.6458 | 0.5833 | 0.8857 | 0.6042 | 0.6800 |
| PSNR HDR-Lab1000 | 0.6563 | 0.5625 | 0.7714 | 0.6667 | 0.7000 |
| PSNR ICtCp | 0.5938 | 0.5833 | 0.7786 | 0.5958 | 0.6100 |
| PSNR Jzazbz | 0.6563 | 0.6042 | 0.8143 | 0.6500 | 0.7400 |
| PSNR XYZ | 0.6875 | 0.6354 | 0.8429 | 0.8125 | 0.7700 |
| PSNR YCrCb | 0.6979 | 0.6042 | 0.7714 | 0.7583 | 0.7300 |
| PU-SSIM | 0.6354 | 0.4792 | 0.7786 | 0.4792 | 0.6500 |
| SSIM HDR-Lab100 | 0.5938 | 0.4792 | 0.7857 | 0.4792 | 0.6500 |
| SSIM HDR-Lab1000 | 0.6458 | 0.4896 | 0.7571 | 0.5875 | 0.7100 |
| SSIM ICtCp | 0.5833 | 0.5417 | 0.7929 | 0.4875 | 0.6700 |
| SSIM Jzazbz | 0.6771 | 0.5000 | 0.7500 | 0.5542 | 0.6900 |
| SSIM XYZ | 0.7083 | 0.6562 | 0.8071 | 0.7333 | 0.8700 |
| SSIM YCrCb | 0.6979 | 0.5625 | 0.8286 | 0.7000 | 0.8000 |
| PU-MS-SSIM | 0.4063 | 0.5104 | 0.6786 | 0.3667 | 0.5000 |
| MS-SSIM HDR-Lab100 | 0.4375 | 0.4792 | 0.7357 | 0.3915 | 0.5400 |
| MS-SSIM HDR-Lab1000 | 0.5000 | 0.4792 | 0.7143 | 0.3708 | 0.7000 |
| MS-SSIM ICtCp | 0.4479 | 0.5521 | 0.6500 | 0.3958 | 0.4600 |
| MS-SSIM Jzazbz | 0.4271 | 0.5729 | 0.6857 | 0.3500 | 0.6900 |
| MS-SSIM XYZ | 0.6875 | 0.6250 | 0.8143 | 0.6333 | 0.8100 |
| MS-SSIM YCrCb | 0.6667 | 0.5625 | 0.7929 | 0.5667 | 0.7900 |
| PU-FSIM | 0.3545 | 0.5000 | 0.6143 | 0.4167 | 0.5000 |
| FSIM HDR-Lab100 | 0.3750 | 0.5208 | 0.6714 | 0.4500 | 0.4400 |
| FSIM HDR-Lab1000 | 0.4479 | 0.5313 | 0.6357 | 0.3333 | 0.5300 |
| FSIM ICtCp | 0.3229 | 0.5000 | 0.6714 | 0.4667 | 0.5200 |
| FSIM Jzazbz | 0.4167 | 0.5104 | 0.6500 | 0.3250 | 0.5900 |
| FSIM XYZ | 0.6562 | 0.5938 | 0.7714 | 0.5625 | 0.7800 |
| FSIM YCrCb | 0.5833 | 0.5625 | 0.6786 | 0.4667 | 0.6900 |
| PU-PSNR-HVS-M | 0.4896 | 0.6563 | 0.7500 | 0.5875 | 0.6600 |
| PSNR-HVS-M HDR-Lab100 | 0.4583 | 0.6250 | 0.7643 | 0.5875 | 0.6400 |
| PSNR-HVS-M HDR-Lab1000 | 0.5938 | 0.6458 | 0.8071 | 0.5333 | 0.6600 |
| PSNR-HVS-M ICtCp | 0.4583 | 0.7083 | 0.7857 | 0.5125 | 0.5900 |
| PSNR-HVS-M Jzazbz | 0.5521 | 0.6875 | 0.7571 | 0.5750 | 0.6700 |
| PSNR-HVS-M XYZ | 0.6979 | 0.6354 | 0.8500 | 0.7042 | 0.7600 |
| PSNR-HVS-M YCrCb | 0.6667 | 0.6771 | 0.8214 | 0.5875 | 0.7200 |
| PU-PSNR-HMA | 0.4167 | 0.7188 | 0.8071 | 0.5125 | 0.6200 |
| PSNR-HMA HDR-Lab100 | 0.4792 | 0.7083 | 0.7786 | 0.5292 | 0.6100 |
| PSNR-HMA HDR-Lab1000 | 0.5833 | 0.6667 | 0.7571 | 0.4625 | 0.6500 |
| PSNR-HMA ICtCp | 0.4271 | 0.6667 | 0.7643 | 0.5375 | 0.5700 |
| PSNR-HMA Jzazbz | 0.6032 | 0.5938 | 0.7643 | 0.5292 | 0.6800 |
| PSNR-HMA XYZ | 0.6979 | 0.6146 | 0.8429 | 0.6833 | 0.7500 |
| PSNR-HMA YCrCb | 0.6667 | 0.5833 | 0.7857 | 0.6333 | 0.7100 |
| HDR-VDP2 | 0.3545 | 0.4576 | 0.6250 | 0.3708 | 0.4400 |
| HDR-VQM | 0.5313 | 0.5616 | 0.9061 | 0.392 | 0.5300 |
| PU-VIF | 0.4063 | 0.5938 | 0.7571 | 0.5833 | 0.5500 |
Table A6. OR of the different color quality metrics on the considered databases.

| Quality Metric | 4Kdtb | HDdtb | Narwaria et al. | Korshunov et al. | Zerman et al. |
|---|---|---|---|---|---|
| ΔĒ HDR-Lab100 | 0.7083 | 0.7604 | 0.7714 | 0.7167 | 0.8100 |
| ΔĒ HDR-Lab1000 | 0.7083 | 0.7604 | 0.7929 | 0.7500 | 0.8000 |
| ΔĒ ICtCp | 0.6354 | 0.7708 | 0.8357 | 0.7833 | 0.6900 |
| ΔĒ Jzazbz | 0.6458 | 0.7604 | 0.8429 | 0.8542 | 0.8000 |
| ΔĒ XYZ | 0.6562 | 0.7500 | 0.8643 | 0.8333 | 0.8500 |
| ΔĒ YCrCb | 0.6875 | 0.8714 | 0.8429 | 0.7756 | 0.8700 |
| ΔĒS HDR-Lab100 | 0.5417 | 0.7813 | 0.8214 | 0.8083 | 0.7100 |
| ΔĒS HDR-Lab1000 | 0.6667 | 0.7708 | 0.8143 | 0.8000 | 0.7600 |
| ΔĒS ICtCp | 0.4792 | 0.7813 | 0.8214 | 0.8583 | 0.6800 |
| ΔĒS Jzazbz | 0.6563 | 0.7813 | 0.8429 | 0.7792 | 0.7400 |
| ΔĒS XYZ | 0.6875 | 0.7500 | 0.8714 | 0.8000 | 0.8600 |
| ΔĒS YCrCb | 0.6771 | 0.7396 | 0.8500 | 0.7792 | 0.7500 |
| SSIMc HDR-Lab100 | 0.6979 | 0.5625 | 0.7929 | 0.6208 | 0.6900 |
| SSIMc HDR-Lab1000 | 0.7083 | 0.6458 | 0.7429 | 0.8458 | 0.8000 |
| SSIMc ICtCp | 0.5729 | 0.5729 | 0.8429 | 0.5292 | 0.6400 |
| SSIMc Jzazbz | 0.6979 | 0.5625 | 0.6929 | 0.6167 | 0.7300 |
| SSIMc XYZ | 0.7188 | 0.6667 | 0.8000 | 0.7292 | 0.8700 |
| SSIMc YCrCb | 0.7188 | 0.6667 | 0.7643 | 0.6542 | 0.8000 |
| CSSIM HDR-Lab100 | 0.5417 | 0.6979 | 0.8071 | 0.6458 | 0.6700 |
| CSSIM HDR-Lab1000 | 0.6771 | 0.7604 | 0.7500 | 0.6667 | 0.7700 |
| CSSIM ICtCp | 0.5000 | 0.8021 | 0.8143 | 0.6458 | 0.6900 |
| CSSIM Jzazbz | 0.6667 | 0.7396 | 0.8286 | 0.6125 | 0.7600 |
| CSSIM XYZ | 0.6979 | 0.7708 | 0.7929 | 0.7167 | 0.6900 |
| CSSIM YCrCb | 0.7083 | 0.7708 | 0.8000 | 0.6167 | 0.7600 |
| FSIMc HDR-Lab100 | 0.4375 | 0.5625 | 0.5928 | 0.5083 | 0.5600 |
| FSIMc HDR-Lab1000 | 0.6563 | 0.5000 | 0.6357 | 0.3625 | 0.6200 |
| FSIMc ICtCp | 0.2917 | 0.5938 | 0.5857 | 0.4833 | 0.5500 |
| FSIMc Jzazbz | 0.4375 | 0.4271 | 0.6714 | 0.3208 | 0.5500 |
| FSIMc XYZ | 0.6458 | 0.5729 | 0.7786 | 0.5667 | 0.7700 |
| FSIMc YCrCb | 0.6562 | 0.5729 | 0.6786 | 0.3625 | 0.6900 |
| PSNR-HMAc HDR-Lab100 | 0.6354 | 0.7500 | 0.7571 | 0.7500 | 0.7300 |
| PSNR-HMAc HDR-Lab1000 | 0.6458 | 0.7813 | 0.7643 | 0.7417 | 0.7400 |
| PSNR-HMAc ICtCp | 0.5104 | 0.7188 | 0.7500 | 0.6375 | 0.7600 |
| PSNR-HMAc Jzazbz | 0.6458 | 0.7396 | 0.7357 | 0.6542 | 0.7300 |
| PSNR-HMAc XYZ | 0.6667 | 0.6771 | 0.7571 | 0.6875 | 0.7500 |
| PSNR-HMAc YCrCb | 0.6771 | 0.6979 | 0.7000 | 0.5958 | 0.7500 |
Table A7. RMSE of the different color-blind quality metrics on the considered databases.

| Quality Metric | 4Kdtb | HDdtb | Narwaria et al. | Korshunov et al. | Zerman et al. |
|---|---|---|---|---|---|
| PU-PSNR | 15.87 | 17.09 | 20.43 | 16.00 | 17.08 |
| PSNR HDR-Lab100 | 16.44 | 17.45 | 24.95 | 15.97 | 17.66 |
| PSNR HDR-Lab1000 | 19.52 | 19.79 | 21.27 | 18.77 | 21.14 |
| PSNR ICtCp | 15.32 | 16.81 | 20.05 | 15.46 | 15.49 |
| PSNR Jzazbz | 18.66 | 20.02 | 20.92 | 18.79 | 21.21 |
| PSNR XYZ | 21.80 | 21.24 | 22.36 | 22.31 | 24.58 |
| PSNR YCrCb | 21.38 | 20.42 | 21.79 | 20.23 | 22.75 |
| PU-SSIM | 15.92 | 14.48 | 16.44 | 11.75 | 16.69 |
| SSIM HDR-Lab100 | 15.91 | 14.81 | 15.73 | 11.86 | 17.78 |
| SSIM HDR-Lab1000 | 17.70 | 17.08 | 16.12 | 13.52 | 22.41 |
| SSIM ICtCp | 15.91 | 15.16 | 16.43 | 11.54 | 15.28 |
| SSIM Jzazbz | 16.70 | 17.14 | 15.13 | 12.46 | 21.20 |
| SSIM XYZ | 21.32 | 21.79 | 19.80 | 19.65 | 25.95 |
| SSIM YCrCb | 20.89 | 20.49 | 18.63 | 16.88 | 25.11 |
| PU-MS-SSIM | 11.76 | 12.73 | 12.15 | 8.43 | 10.73 |
| MS-SSIM HDR-Lab100 | 11.86 | 12.62 | 13.46 | 8.71 | 11.26 |
| MS-SSIM HDR-Lab1000 | 13.62 | 15.03 | 12.49 | 8.12 | 19.18 |
| MS-SSIM ICtCp | 12.10 | 13.33 | 11.73 | 9.04 | 10.53 |
| MS-SSIM Jzazbz | 12.25 | 14.11 | 10.11 | 7.69 | 17.88 |
| MS-SSIM XYZ | 19.95 | 21.23 | 19.65 | 15.88 | 23.17 |
| MS-SSIM YCrCb | 18.66 | 19.61 | 17.74 | 12.81 | 22.71 |
| PU-FSIM | 9.67 | 14.84 | 12.07 | 9.11 | 13.02 |
| FSIM HDR-Lab100 | 9.89 | 14.70 | 12.29 | 9.39 | 12.18 |
| FSIM HDR-Lab1000 | 11.98 | 12.50 | 14.41 | 7.19 | 12.09 |
| FSIM ICtCp | 9.70 | 15.70 | 12.60 | 10.05 | 13.64 |
| FSIM Jzazbz | 10.42 | 10.94 | 13.39 | 7.31 | 12.58 |
| FSIM XYZ | 18.04 | 18.71 | 19.01 | 13.55 | 19.97 |
| FSIM YCrCb | 15.72 | 16.37 | 16.79 | 11.60 | 17.22 |
| PU-PSNR-HVS-M | 12.80 | 16.75 | 19.95 | 15.39 | 12.19 |
| PSNR-HVS-M HDR-Lab100 | 12.70 | 16.53 | 20.23 | 12.14 | 14.20 |
| PSNR-HVS-M HDR-Lab1000 | 16.52 | 19.54 | 20.36 | 11.88 | 15.21 |
| PSNR-HVS-M ICtCp | 12.74 | 17.45 | 19.60 | 11.54 | 11.45 |
| PSNR-HVS-M Jzazbz | 15.12 | 20.75 | 20.48 | 12.85 | 16.52 |
| PSNR-HVS-M XYZ | 21.15 | 21.46 | 22.44 | 18.36 | 22.01 |
| PSNR-HVS-M YCrCb | 20.12 | 20.95 | 21.77 | 13.83 | 18.15 |
| PU-PSNR-HMA | 12.55 | 17.07 | 16.28 | 10.99 | 13.60 |
| PSNR-HMA HDR-Lab100 | 12.92 | 17.70 | 16.21 | 11.10 | 14.10 |
| PSNR-HMA HDR-Lab1000 | 16.51 | 18.20 | 17.40 | 10.96 | 15.62 |
| PSNR-HMA ICtCp | 12.00 | 16.09 | 15.41 | 11.60 | 11.43 |
| PSNR-HMA Jzazbz | 15.16 | 18.11 | 16.84 | 11.82 | 16.47 |
| PSNR-HMA XYZ | 21.14 | 20.43 | 21.68 | 16.12 | 21.79 |
| PSNR-HMA YCrCb | 20.11 | 19.21 | 19.31 | 14.04 | 18.08 |
| HDR-VDP2 | 11.3 | 12.55 | 11.27 | 9.60 | 9.50 |
| HDR-VQM | 14.11 | 14.72 | 10.64 | 8.57 | 10.88 |
| PU-VIF | 10.85 | 17.85 | 15.74 | 12.17 | 13.43 |
Table A8. RMSE of the different color quality metrics on the considered databases.

| Quality Metric | 4Kdtb | HDdtb | Narwaria et al. | Korshunov et al. | Zerman et al. |
|---|---|---|---|---|---|
| ΔĒ HDR-Lab100 | 19.72 | 26.81 | 19.31 | 20.25 | 23.73 |
| ΔĒ HDR-Lab1000 | 21.60 | 26.77 | 19.92 | 22.29 | 25.49 |
| ΔĒ ICtCp | 16.30 | 27.11 | 19.58 | 19.99 | 19.62 |
| ΔĒ Jzazbz | 20.28 | 26.46 | 19.25 | 26.08 | 25.01 |
| ΔĒ XYZ | 21.64 | 26.01 | 23.68 | 27.51 | 28.31 |
| ΔĒ YCrCb | 21.57 | 26.36 | 23.82 | 22.77 | 26.83 |
| ΔĒS HDR-Lab100 | 14.64 | 27.51 | 18.70 | 21.30 | 19.77 |
| ΔĒS HDR-Lab1000 | 19.67 | 27.68 | 20.56 | 20.82 | 21.99 |
| ΔĒS ICtCp | 13.64 | 27.61 | 20.94 | 28.91 | 18.10 |
| ΔĒS Jzazbz | 17.58 | 27.44 | 21.45 | 20.53 | 20.35 |
| ΔĒS XYZ | 21.33 | 26.67 | 24.18 | 21.90 | 26.71 |
| ΔĒS YCrCb | 21.09 | 27.11 | 24.34 | 21.59 | 23.00 |
| SSIMc HDR-Lab100 | 18.88 | 19.62 | 16.68 | 14.61 | 19.95 |
| SSIMc HDR-Lab1000 | 21.07 | 20.74 | 15.46 | 15.92 | 24.30 |
| SSIMc ICtCp | 14.98 | 17.44 | 16.62 | 13.50 | 15.77 |
| SSIMc Jzazbz | 19.07 | 20.18 | 14.56 | 14.20 | 22.63 |
| SSIMc XYZ | 21.42 | 22.11 | 19.55 | 19.93 | 26.04 |
| SSIMc YCrCb | 21.26 | 21.76 | 16.72 | 17.22 | 25.40 |
| CSSIM HDR-Lab100 | 13.34 | 23.59 | 15.79 | 14.10 | 18.72 |
| CSSIM HDR-Lab1000 | 18.61 | 25.05 | 18.68 | 14.71 | 22.20 |
| CSSIM ICtCp | 14.15 | 23.74 | 17.04 | 12.62 | 16.24 |
| CSSIM Jzazbz | 16.24 | 24.16 | 20.41 | 16.47 | 21.14 |
| CSSIM XYZ | 21.65 | 25.76 | 22.81 | 20.39 | 26.21 |
| CSSIM YCrCb | 21.11 | 24.91 | 21.30 | 14.94 | 24.51 |
| FSIMc HDR-Lab100 | 11.78 | 14.14 | 12.80 | 11.02 | 12.11 |
| FSIMc HDR-Lab1000 | 16.07 | 13.88 | 13.87 | 8.41 | 12.0 |
| FSIMc ICtCp | 9.29 | 16.29 | 12.46 | 10.22 | 13.57 |
| FSIMc Jzazbz | 12.19 | 11.09 | 12.98 | 7.40 | 12.36 |
| FSIMc XYZ | 17.99 | 18.82 | 18.88 | 13.47 | 19.93 |
| FSIMc YCrCb | 16.30 | 16.90 | 16.18 | 9.38 | 17.17 |
| PSNR-HMAc HDR-Lab100 | 18.37 | 24.75 | 18.57 | 20.60 | 20.52 |
| PSNR-HMAc HDR-Lab1000 | 20.72 | 25.40 | 18.70 | 21.49 | 22.68 |
| PSNR-HMAc ICtCp | 14.57 | 21.12 | 19.02 | 15.94 | 17.03 |
| PSNR-HMAc Jzazbz | 19.26 | 24.37 | 17.14 | 17.94 | 20.57 |
| PSNR-HMAc XYZ | 21.71 | 22.75 | 20.34 | 17.34 | 23.00 |
| PSNR-HMAc YCrCb | 20.71 | 21.83 | 17.49 | 14.02 | 19.87 |
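The PLCC, SROCC, OR and RMSE values tabulated above are standard performance indices that compare objective scores against subjective MOS after a non-linear regression (cf. ITU-T P.1401 [49]). The sketch below shows one common way to compute them, assuming SciPy is available; the four-parameter logistic and the fallback outlier threshold of twice the RMSE are illustrative assumptions, not necessarily the exact regression of Section 4.

```python
import numpy as np
from scipy import optimize, stats

def logistic(x, a, b, c, d):
    # Four-parameter logistic mapping objective scores to the subjective scale
    return a + (b - a) / (1.0 + np.exp(-(x - c) / d))

def evaluate_metric(objective, mos, mos_ci=None):
    """Return (PLCC, SROCC, RMSE, OR) for one metric on one database."""
    objective = np.asarray(objective, float)
    mos = np.asarray(mos, float)
    # Non-linear least-squares fit of the logistic regression
    p0 = [mos.min(), mos.max(), np.median(objective),
          (objective.max() - objective.min()) / 4.0]
    params, _ = optimize.curve_fit(logistic, objective, mos, p0=p0, maxfev=10000)
    pred = logistic(objective, *params)
    plcc = stats.pearsonr(pred, mos)[0]          # linear correlation after regression
    srocc = stats.spearmanr(objective, mos)[0]   # rank correlation (regression-free)
    rmse = np.sqrt(np.mean((pred - mos) ** 2))
    # Outlier ratio: predictions farther from the MOS than its confidence interval;
    # with no CI available we fall back to twice the RMSE (an assumption)
    if mos_ci is None:
        mos_ci = np.full_like(mos, 2.0 * rmse)
    outlier_ratio = float(np.mean(np.abs(pred - mos) > mos_ci))
    return plcc, srocc, rmse, outlier_ratio
```

When per-image MOS confidence intervals are available, they should be passed as `mos_ci` rather than relying on the RMSE-based fallback.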

Appendix C. Metric Sensitivity to Chrominance Artifacts on Our Proposed Database 4Kdtb

This appendix extends Section 5.2, in which the impact of chrominance artifacts on quality metric performance is studied. For each reference image of the 4Kdtb database, the subjective and objective scores of each distorted image are plotted as a function of the HEVC QP. The objective scores are displayed after applying the logistic regression presented in Section 4.
Images compressed with the chroma QP offset algorithm (cf. Section 3.1.2) are represented with a red line, images compressed without the chroma QP offset but with a 10-bit quantization of the chrominance with a green line, and images compressed without the chroma QP offset algorithm and without the 10-bit quantization with a blue line.
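A plot with this layout can be reproduced with matplotlib as sketched below; the QP values, scores, and output file name are placeholder data for illustration only, not values from the 4Kdtb database.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Placeholder per-QP scores for one reference image (illustrative only)
qp = np.array([22, 27, 32, 37])
conditions = {
    "with chroma QP offset": ("red", [78, 66, 52, 38]),
    "without offset, 10-bit chroma quantization": ("green", [80, 69, 55, 41]),
    "without offset, no extra quantization": ("blue", [82, 71, 57, 43]),
}

fig, ax = plt.subplots()
for label, (color, scores) in conditions.items():
    ax.plot(qp, scores, color=color, marker="o", label=label)
ax.set_xlabel("HEVC QP")
ax.set_ylabel("Score (MOS scale after logistic mapping)")
ax.legend()
fig.savefig("scores_vs_qp.png")  # hypothetical output file
```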
Figure A6. Subjective and objective scores for the image Bike_20s.
Figure A7. Subjective and objective scores for the image Bike_30s.
Figure A8. Subjective and objective scores for the image Bike_81s.
Figure A9. Subjective and objective scores for the image Bike_110s.
Figure A10. Subjective and objective scores for the image Regatta_11s.
Figure A11. Subjective and objective scores for the image Regatta_24s.
Figure A12. Subjective and objective scores for the image Regatta_80s.
Figure A13. Subjective and objective scores for the image Regatta_95s.

References

1. Parameter Values for the HDTV Standard for Production and International Program Exchange; Rec BT.709-6, ITU-R; International Telecommunication Union (ITU): Geneva, Switzerland, 2015.
2. Reference Electro-Optical Transfer Function for Flat Panel Displays Used in HDTV Studio Production; Rec BT.1886-0, ITU-R; International Telecommunication Union (ITU): Geneva, Switzerland, 2015.
3. High Dynamic Range Electro-Optical Transfer Function of Mastering Reference Displays; Standard ST.2084; Society of Motion Picture & Television Engineers (SMPTE): White Plains, NY, USA, 2014.
4. Essential Parameter Values for the Extended Image Dynamic Range Television System for Programme Production; Standard STD-B67; Association of Radio Industries and Businesses (ARIB): Tokyo, Japan, 2015.
5. Parameter Values for Ultra-High Definition Television Systems for Production and International Programme Exchange; Rec BT.2020-2, ITU-R; International Telecommunication Union (ITU): Geneva, Switzerland, 2016.
6. Image Parameter Values for High Dynamic Range Television for Use in Production and International Programme Exchange; Rec BT.2100-1, ITU-R; International Telecommunication Union (ITU): Geneva, Switzerland, 2017.
7. Aydın, T.O.; Mantiuk, R.; Seidel, H.P. Extending quality metrics to full luminance range images. Proc. SPIE 2008, 6806.
8. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
9. Sheikh, H.R.; Bovik, A.C. Image information and visual quality. In Proceedings of the 2004 IEEE International Conference on Acoustics, Speech, and Signal Processing, Montreal, QC, Canada, 17–21 May 2004; Volume 3, pp. iii–709.
10. Wang, Z.; Simoncelli, E.P.; Bovik, A.C. Multiscale structural similarity for image quality assessment. In Proceedings of the Thirty-Seventh Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 9–12 November 2003; Volume 2, pp. 1398–1402.
11. Mantiuk, R.; Kim, K.J.; Rempel, A.G.; Heidrich, W. HDR-VDP-2: A Calibrated Visual Metric for Visibility and Quality Predictions in All Luminance Conditions. ACM Trans. Graph. 2011, 30, 40:1–40:14.
12. Narwaria, M.; Mantiuk, R.; Da Silva, M.P.; Le Callet, P. HDR-VDP-2.2: A calibrated method for objective quality prediction of high-dynamic range and standard images. J. Electron. Imaging 2015, 24, 010501.
13. Narwaria, M.; Da Silva, M.P.; Le Callet, P. HDR-VQM: An objective quality measure for high dynamic range video. Signal Process. Image Commun. 2015, 35, 46–60.
14. Hanhart, P.; Bernardo, M.V.; Pereira, M.; Pinheiro, A.M.G.; Ebrahimi, T. Benchmarking of objective quality metrics for HDR image quality assessment. EURASIP J. Image Video Process. 2015, 2015, 39.
15. Richter, T. On the standardization of the JPEG XT image compression. In Proceedings of the 2013 Picture Coding Symposium (PCS), San Jose, CA, USA, 8–11 December 2013; pp. 37–40.
16. Hanhart, P.; Řeřábek, M.; Ebrahimi, T. Towards high dynamic range extensions of HEVC: Subjective evaluation of potential coding technologies. Proc. SPIE 2015, 9599.
17. Hanhart, P.; Řeřábek, M.; Ebrahimi, T. Subjective and objective evaluation of HDR video coding technologies. In Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), Lisbon, Portugal, 6–8 June 2016; pp. 1–6.
18. Vigier, T.; Krasula, L.; Milliat, A.; Perreira Da Silva, M.; Le Callet, P. Performance and robustness of HDR objective quality metrics in the context of recent compression scenarios. In Proceedings of the Digital Media Industry and Academic Forum, Santorini, Greece, 4–6 July 2016; pp. 59–64.
19. Zerman, E.; Valenzise, G.; Dufaux, F. An extensive performance evaluation of full-reference HDR image quality metrics. Qual. User Exp. 2017, 2, 5.
20. Fairchild, M.D.; Chen, P.H. Brightness, lightness, and specifying color in high-dynamic-range scenes and images. Proc. SPIE 2011, 7867.
21. Colorimetry—Part 4: CIE 1976 L*A*B* Colour Space; Standard CIE S014-4/E:2007; Commission Internationale de l’Eclairage: Vienna, Austria, 1976.
22. Safdar, M.; Cui, G.; Kim, Y.J.; Luo, M.R. Perceptually uniform color space for image signals including high dynamic range and wide gamut. Opt. Express 2017, 25, 15131–15151.
23. Ebner, F.; Fairchild, M.D. Development and testing of a color space (IPT) with improved hue uniformity. In Proceedings of the Color and Imaging Conference, Scottsdale, AZ, USA, 17–20 November 1998; Society for Imaging Science and Technology: Springfield, VA, USA, 1998; Volume 1998, pp. 8–13.
24. What Is ICtCp? White Paper Version 7.2; Dolby: San Francisco, CA, USA, 2017.
25. Pieri, E.; Pytlarz, J. Hitting the Mark—A New Color Difference Metric for HDR and WCG Imagery. SMPTE Motion Imaging J. 2018, 127, 18–25.
26. Zhang, X.; Wandell, B.A. A spatial extension of CIELAB for digital color-image reproduction. J. Soc. Inf. Disp. 1997, 5, 61–63.
27. Wang, Z.; Lu, L.; Bovik, A.C. Video quality assessment based on structural distortion measurement. Signal Process. Image Commun. 2004, 19, 121–132.
28. Hassan, M.A.; Bashraheel, M.S. Color-based structural similarity image quality assessment. In Proceedings of the 2017 8th International Conference on Information Technology (ICIT), Amman, Jordan, 17–18 May 2017; pp. 691–696.
29. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A Feature Similarity Index for Image Quality Assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386.
30. Ponomarenko, N.; Silvestri, F.; Egiazarian, K.; Carli, M.; Astola, J.; Lukin, V. On between-coefficient contrast masking of DCT basis functions. In Proceedings of the Third International Workshop on Video Processing and Quality Metrics, Scottsdale, AZ, USA, 25–26 January 2007; Volume 4.
31. Ponomarenko, N.; Ieremeiev, O.; Lukin, V.; Egiazarian, K.; Jin, L.; Astola, J.; Vozel, B.; Chehdi, K.; Carli, M.; Battisti, F.; et al. Color image database TID2013: Peculiarities and preliminary results. In Proceedings of the 2013 4th European Workshop on Visual Information Processing (EUVIP), Paris, France, 10–12 June 2013.
32. Subjective Video Quality Assessment Methods for Multimedia Applications; Rec P.910, ITU-T; International Telecommunication Union (ITU): Geneva, Switzerland, 2008.
33. Hasler, D.; Suesstrunk, S.E. Measuring colorfulness in natural images. Proc. SPIE 2003, 5007.
34. Narwaria, M.; Da Silva, M.P.; Le Callet, P.; Pepion, R. Tone mapping-based high-dynamic-range image compression: Study of optimization criterion and perceptual quality. Opt. Eng. 2013, 52, 102008.
35. Korshunov, P.; Hanhart, P.; Richter, T.; Artusi, A.; Mantiuk, R.; Ebrahimi, T. Subjective quality assessment database of HDR images compressed with JPEG XT. In Proceedings of the 2015 Seventh International Workshop on Quality of Multimedia Experience (QoMEX), Pylos-Nestoras, Greece, 26–29 May 2015; pp. 1–6.
36. Kuang, J.; Johnson, G.M.; Fairchild, M.D. iCAM06: A refined image appearance model for HDR image rendering. J. Vis. Commun. Image Represent. 2007, 18, 406–414.
37. Mantiuk, R.; Myszkowski, K.; Seidel, H.P. A Perceptual Framework for Contrast Processing of High Dynamic Range Images. ACM Trans. Appl. Percept. 2006, 3, 286–308.
38. Reinhard, E.; Stark, M.; Shirley, P.; Ferwerda, J. Photographic Tone Reproduction for Digital Images. In Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ’02), San Antonio, TX, USA, 23–26 July 2002; ACM: New York, NY, USA, 2002; pp. 267–276.
39. Valenzise, G.; De Simone, F.; Lauga, P.; Dufaux, F. Performance evaluation of objective quality metrics for HDR image compression. Proc. SPIE 2014, 9217.
40. Mai, Z.; Mansour, H.; Mantiuk, R.; Nasiopoulos, P.; Ward, R.; Heidrich, W. Optimizing a Tone Curve for Backward-Compatible High Dynamic Range Image and Video Compression. IEEE Trans. Image Process. 2011, 20, 1558–1571.
41. Specification for the Use of Video and Audio Coding in Broadcast and Broadband Applications; Technical Specification ETSI TS 101 154 v2.4.1; Digital Video Broadcasting (DVB): Sophia Antipolis, France, 2018.
42. Operation Manuals: BVM-X300; Manual Version 2.2; Sony: Tokyo, Japan, 2017.
43. Methodology for the Subjective Assessment of the Quality of Television Pictures; Rec BT.500-13, ITU-R; International Telecommunication Union (ITU): Geneva, Switzerland, 2012.
44. Conversion and Coding Practices for HDR/WCG Y’CbCr 4:2:0 Video with PQ Transfer Characteristics; Rec H-Suppl.15, ITU-T; International Telecommunication Union (ITU): Geneva, Switzerland, 2017.
45. Rousselot, M.; Auffret, E.; Ducloux, X.; Le Meur, O.; Cozot, R. Impacts of Viewing Conditions on HDR-VDP2. In Proceedings of the 2018 26th European Signal Processing Conference (EUSIPCO), Rome, Italy, 3–7 September 2018; pp. 1442–1446.
46. Lasserre, S.; LeLéannec, F.; Francois, E. Description of HDR Sequences Proposed by Technicolor; ISO/IEC JTC1/SC29/WG11 JCTVC-P0228; IEEE: San Jose, CA, USA, 2013.
47. Froehlich, J.; Grandinetti, S.; Eberhardt, B.; Walter, S.; Schilling, A.; Brendel, H. Creating cinematic wide gamut HDR-video for the evaluation of tone mapping operators and HDR-displays. Proc. SPIE 2014, 9023.
48. Fairchild, M.D. The HDR photographic survey. In Proceedings of the Color and Imaging Conference, Albuquerque, NM, USA, 5–9 November 2007; Society for Imaging Science and Technology: Springfield, VA, USA, 2007; Volume 2007, pp. 233–238.
49. Methods, Metrics and Procedures for Statistical Evaluation, Qualification and Comparison of Objective Quality Prediction Models; Rec P.1401, ITU-T; International Telecommunication Union (ITU): Geneva, Switzerland, 2012.
50. Ponomarenko, N.; Ieremeiev, O.; Lukin, V.; Egiazarian, K.; Carli, M. Modified image visual quality metrics for contrast change and mean shift accounting. In Proceedings of the 2011 11th International Conference, The Experience of Designing and Application of CAD Systems in Microelectronics (CADSM), Polyana-Svalyava, Ukraine, 23–25 February 2011.
51. Daly, S.J. Visible differences predictor: An algorithm for the assessment of image fidelity. Proc. SPIE 1992, 1666.
52. Barten, P.G.J. Formula for the contrast sensitivity of the human eye. Proc. SPIE 2003, 5294.
53. Operational Practices in HDR Television Production; Rec BT.2408-0, ITU-R; International Telecommunication Union (ITU): Geneva, Switzerland, 2017.
54. Reinhard, E.; Stark, M.; Shirley, P.; Ferwerda, J. Photographic tone reproduction for digital images. ACM Trans. Graph. (TOG) 2002, 21, 267–276.
55. Banterle, F.; Artusi, A.; Debattista, K.; Chalmers, A. Advanced High Dynamic Range Imaging: Theory and Practice, 2nd ed.; AK Peters (CRC Press): Natick, MA, USA, 2017.
Figure 1. Diagram of the proposed method to adapt SDR metrics to HDR/WCG contents.
Figure 2. Different perceptually uniform luminances as a function of the linear luminance: (a) for the range 0–1000 cd/m²; (b) for the range 0–150 cd/m².
Figure 3. Characteristics of the Narwaria et al. [34] images: (a) dynamic range; (b) key; (c) spatial information; (d) colorfulness.
Figure 4. Characteristics of the Korshunov et al. [35] images: (a) dynamic range; (b) key; (c) spatial information; (d) colorfulness.
Figure 5. Characteristics of the Zerman et al. [19] images: (a) dynamic range; (b) key; (c) spatial information; (d) colorfulness.
Figure 6. Characteristics of the HDdtb images: (a) dynamic range; (b) key; (c) spatial information; (d) colorfulness.
Figure 7. Characteristics of the 4Kdtb images: (a) dynamic range; (b) key; (c) spatial information; (d) colorfulness.
Figure 8. SROCC performances for the 4Kdtb database for color-blind quality metrics (a) and for color quality metrics (b).
Figure 9. SROCC performances for the Zerman et al. database for color-blind quality metrics (a) and for color quality metrics (b).
Figure 10. SROCC performances for the HDdtb database for color-blind quality metrics (a) and for color quality metrics (b).
Figure 11. SROCC performances for the Korshunov et al. database for color-blind quality metrics (a) and for color quality metrics (b).
Figure 12. SROCC performances for the Narwaria et al. database for color-blind quality metrics (a) and for color quality metrics (b).
Figure 13. SROCC of (a) FSIM HDR-Lab, (b) MS-SSIM HDR-Lab, (c) SSIM HDR-Lab as a function of the diffuse white luminance.
Figure 14. Subjective and objective scores for the image Regatta_24s and for 6 metrics based on the ICtCp color space.
Table 1. Selected SDR quality metrics.

| Name | Color | Reference | Main Principle |
|---|---|---|---|
| PSNR | – | – | Ratio between the range of the signal and the mean square error |
| ΔĒ | yes | – | Mean of the color difference metric |
| ΔĒS | yes | Zhang et al. [26] | Mean of the color difference metric, taking into account the blurring effect of the HVS; also known as S-CIELab |
| SSIM | – | Wang et al. [8] | Comparison of three characteristics of the images: luminance, contrast, and structure |
| SSIMc | yes | Wang et al. [27] | Linear combination of the SSIM applied on the three components Y, Cr, and Cb of the images |
| CSSIM | yes | Hassan et al. [28] | Combination of SSIM and ΔES |
| MS-SSIM | – | Wang et al. [10] | Multi-scale SSIM |
| FSIM | – | Zhang et al. [29] | Comparison of the phase congruency and the gradient magnitude |
| FSIMc | yes | Zhang et al. [29] | Color extension of FSIM; adds two comparisons corresponding to the two chrominance components |
| PSNR-HVS-M | – | Ponomarenko et al. [30] | PSNR on the DCT blocks of the images using a CSF and visual masking |
| PSNR-HMA | – | Ponomarenko et al. [31] | Improvement of PSNR-HVS-M; takes into account the particularities of mean-shift and contrast-change distortions |
| PSNR-HMAc | yes | Ponomarenko et al. [31] | Linear combination of the PSNR-HMA on the three components Y, Cr, and Cb of the images |
Table 2. Number of observers, number of images, subjective test protocol, kind of distortion and used display for the 3 existing HDR image quality databases and the two databases proposed in this paper.

| Name | #Obs | #Img | Protocol | Distortion | Display | Gamut | Size |
|---|---|---|---|---|---|---|---|
| Narwaria et al. [34] | 27 | 140 | ACR-HR | JPEG | SIM2 HDR47ES4MB | BT.709 | 1920 × 1080 |
| Korshunov et al. [35] | 24 | 240 | DSIS (side by side) | JPEG-XT | SIM2 HDR47ES4MB | BT.709 | 944 × 1080 |
| Zerman et al. [19] | 15 | 100 | DSIS | JPEG, JPEG-XT, JPEG2000 | SIM2 HDR47ES4MB | BT.709 | 1920 × 1080 |
| Proposed HDdtb | 15 | 96 | DSIS (side by side) | HEVC, Gaussian noise, Gamut mismatch | Sony BVM-X300 | BT.2020 | 944 × 1080 |
| Proposed 4Kdtb | 13 | 96 | DSIS (side by side) | HEVC, Quantization | Sony BVM-X300 | BT.2020 | 1890 × 2160 |
Table 3. Parameters used for HDR-VDP2.

| Name | Angular Resolution (Pixel/Degree) | Surround Luminance (cd/m²) | Spectral Emission |
|---|---|---|---|
| Narwaria et al. | 60 | 130 | SIM2 HDR47ES4MB |
| Korshunov et al. | 60 | 20 | SIM2 HDR47ES4MB |
| Zerman et al. | 40 | 20 | SIM2 HDR47ES4MB |
| HDdtb | 60 | 40 | Sony BVM-X300 |
| 4Kdtb | 60 | 40 | Sony BVM-X300 |
Table 4. SROCC for the HDdtb database with and without the gamut mismatch artifact.

| Quality Metric | All Images | Without the “Gamut Mismatch” Distortion | Compression Artifacts Only |
|---|---|---|---|
| ΔĒ HDR-Lab100 | 0.2578 | 0.3905 | 0.6190 |
| ΔĒS HDR-Lab100 | 0.2784 | 0.5687 | 0.6946 |
| CSSIM HDR-Lab100 | 0.4065 | 0.6453 | 0.7714 |
Table 5. SROCC for the HDdtb database for three metrics based on Jzazbz, J̃zazbz and HDR-Lab1000.

| Metric | Jzazbz | J̃zazbz | HDR-Lab1000 |
|---|---|---|---|
| PSNR | 0.6933 | 0.7463 | 0.7587 |
| SSIM | 0.7831 | 0.7973 | 0.7904 |
| PSNR-HMA | 0.7664 | 0.7949 | 0.7984 |

Share and Cite

MDPI and ACS Style

Rousselot, M.; Le Meur, O.; Cozot, R.; Ducloux, X. Quality Assessment of HDR/WCG Images Using HDR Uniform Color Spaces. J. Imaging 2019, 5, 18. https://doi.org/10.3390/jimaging5010018
