Article

Measuring 3D Video Quality of Experience (QoE) Using A Hybrid Metric Based on Spatial Resolution and Depth Cues

1 Department of Electrical-Electronics Engineering, Graduate School of Natural and Applied Sciences, Gazi University, Ankara 06560, Turkey
2 Department of Computer Engineering, TED University, Ankara 06420, Turkey
3 Department of Information Engineering, University of Padova, 35131 Padova, Italy
4 Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
* Author to whom correspondence should be addressed.
J. Imaging 2023, 9(12), 281; https://doi.org/10.3390/jimaging9120281
Submission received: 18 October 2023 / Revised: 10 December 2023 / Accepted: 12 December 2023 / Published: 18 December 2023
(This article belongs to the Special Issue Modelling of Human Visual System in Image Processing)

Abstract: A three-dimensional (3D) video is a special video representation with an artificial stereoscopic vision effect that increases the depth perception of the viewer. The quality of a 3D video is generally measured by subjective tests that assess its similarity to the stereoscopic vision obtained with the human visual system (HVS). These high-cost and time-consuming subjective tests are used because there is no objective video Quality of Experience (QoE) evaluation method that models the HVS. In this paper, we propose a hybrid 3D-video QoE evaluation method based on spatial resolution associated with depth cues (i.e., motion information, blurriness, retinal-image size, and convergence). The proposed method successfully models the HVS by considering the 3D-video parameters that directly affect depth perception, which is the most important element of stereoscopic vision. Experimental results show that the proposed hybrid method outperforms the widely used existing methods in measuring the 3D-video QoE and has a high correlation with the HVS. Consequently, the results suggest that the proposed hybrid method can be conveniently utilized for 3D-video QoE evaluation, especially in real-time applications.

1. Introduction

The Quality of Experience (QoE) of a video is a measure of the satisfaction level from a viewer’s perspective. Hence, this measurement is viewer-centric and focuses on the overall satisfaction with and acceptability of a video, taking a holistic approach that evaluates all QoE factors that can affect a viewer’s appreciation positively and/or negatively [1].
The QoE is based on real end-user experiences. Therefore, the QoE is directly affected by objective and subjective parameters. The objective parameters originate from Quality of Service (QoS) factors and depend mostly on network performance and software and hardware features. The subjective parameters, on the contrary, are determined by the viewers’ individual preferences, expectations, previous video experiences, and so on. They are therefore more difficult to categorize than the objective parameters and are more likely to arise from the different perception characteristics that people have (e.g., age, eyesight, mobility, perspective, etc.). For this reason, measuring the subjective parameters is clearly more arduous because they are more abstract. A further challenge is the design of a comprehensive QoE metric, which requires a sufficient number of QoE factors; these factors may be controlled, measured, or simply collected and reported [2].
The video quality perceived by a viewer is considered the most important part of the QoE [3]. Three-dimensional videos are a special type of video representation that can give viewers the feeling of being in the same space as the scene because depth cues are brought to the foreground, adding depth perception. Clearly, in addition to the quality itself, a vital factor affecting the QoE of 3D videos is the depth perception experienced by the viewer. Therefore, the key to increasing the QoE of 3D videos is to provide 3D-video representations that create a plausible depth perception in the viewer.
As is known, depth perception is the main result of the stereoscopic vision process carried out by the HVS [4,5]. Except for people with various visual impairments or losses, every person with normal vision has an HVS that combines monocular and binocular depth cues to achieve stereoscopic vision. Although this system works according to the same principles in every human, the perceived depth may be relative because of different perception characteristics and visual experiences. In other words, different viewers may arrive at different QoE evaluations of the same 3D video because the credibility of the depth impression they perceive differs. This is a major obstacle to performing an accurate QoE evaluation for 3D videos.
Currently, the QoE evaluation for 3D videos can be performed by using two methods [5]. One of them relies on subjective evaluation, in which real human observers assess the 3D-video QoE. Subjective quality evaluation is vital for accurately assessing the 3D-video QoE. However, it is difficult to perform because it is time consuming and costly and is unsuitable for real-time applications [6,7]. The other relies on objective QoE evaluation, in which iterative mathematical and statistical metrics are utilized during the evaluation process. The objectivity of these metrics stems from their mathematical formulations, which are accepted by researchers and enable reliable and repeatable evaluations. However, these metrics mostly do not consider the most important characteristics of the HVS for 3D-video perception. Therefore, they generally do not achieve a high correlation with real human quality evaluations [8].
The subjective and objective image- or video-QoE metrics existing in the literature can be categorized as full reference (FR), reduced reference (RR), and no reference (NR) [9,10,11,12]. The FR metrics cannot be used without the original video, and the RR metrics require video features obtained from the original video. Therefore, it is not possible to run the FR and RR metrics simultaneously with a streaming video. On the other hand, the NR metrics do not require the original video or features obtained from it, which means they can run simultaneously with a streaming video. However, the FR and RR metrics can incorporate more information for the QoE evaluation than the NR metrics. Hence, the FR, RR, and NR metrics each have advantages over the others in terms of the QoE evaluation. Another problem is that researchers feel obliged to select one of these three approaches when developing metrics. In addition, pseudo reference image (PRI) quality-evaluation metrics have been developed in recent years. Contrary to conventional FR, RR, and NR metrics, the PRI metrics use a new type of reference. In conventional metrics, the reference is the original image, which is assumed to have perfect quality, or some derived characteristics of the original image. In the PRI metrics, however, the reference, called the pseudo reference image, is generated from the distorted image by further degrading it in several ways and to certain degrees [13,14]. With this approach, the PRI metrics have breathed new life into the image-quality-evaluation field.
On the other hand, image-quality-evaluation approaches need to be developed according to the characteristics of the digital images obtained by different rendering methods. In general, digital images can be classified into three types according to the rendering method: Natural Scene Images (NSIs), Computer Graphic Images (CGIs), and Screen Content Images (SCIs). NSIs are digital images captured from the real world and may be degraded by physical causes such as a low-quality lens, being out of focus, motion blur, and insufficient or inappropriate lighting and atmospheric conditions. CGIs are created or animated by using computer software and are widely used in video games, animations, simulators, etc.; they may be degraded by rendering artifacts. SCIs are composite images consisting of texts, graphics, icons, etc. They sometimes also contain NSI and CGI regions, so they may suffer from NSI and CGI degradations. Computer-generated SCIs and CGIs have more noise-free smooth areas, high-saturation color content, repeated patterns, and low- or high-frequency content [15,16]. Consequently, it is important to measure the quality of images that are likely to be dominated by different defects, due to differences in rendering methods, by using a quality-evaluation method specific to the image type. Otherwise, the quality measurements may be inaccurate.
There are also metrics developed to deal with some physical drawbacks that reduce the visual quality and the viewer’s depth perception. One of them is the class of metrics developed for measuring the light field. What exactly we can see depends on our precise position in the light field, which records the total of all light rays in 3D space flowing through every point and in every direction; therefore, the light field contains very rich information. A light-field image contains many depth cues that make depth estimation possible. The light-field quality metrics measure the light-field qualities of the light-field images [17,18,19,20].
Additionally, developing objective quality-evaluation metrics for dehazed images, with the aim of increasing the viewer’s depth perception, has recently attracted considerable attention. There are many image-dehazing algorithms that remove the haze from images captured in hazy conditions while preserving the intrinsic image structures. Subjective and objective methods can be used to assess and compare the image-dehazing algorithms. Since subjective evaluation is a time-consuming process and difficult to apply, objective quality-evaluation metrics are preferable for researchers [21,22].
Audio–visual content-quality-evaluation issues have also been researched for decades because visual signals are rarely presented without accompanying audio. The distortions that may separately (or conjointly) afflict the visual and audio signals collectively shape the user-perceived Quality of Experience (QoE) [23,24,25].
Lastly, with the recent rapid developments in the field of virtual reality, developing quality-evaluation metrics for 360-degree images (also known as omnidirectional, panoramic, or virtual-reality images) has become a remarkable research area. Three-hundred-and-sixty-degree images and videos include visual information covering the entire 180° × 360° viewing sphere. Hence, compared to conventional 2D content, there are many challenges in developing a quality metric for immersive multimedia; the two main ones are the ultrahigh (or even higher) resolution requirements and the degradations specific to 360-degree images/videos. In the quality-evaluation field of 360-degree images and videos, multichannel convolutional neural networks (CNNs) have been used successfully due to their good performance [26,27,28].
In light of the above explanations, it can be clearly seen that the objective QoE metrics that are frequently used today are not adequate for the 3D-video QoE evaluation. Hence, there is a need to develop a 3D-video QoE evaluation metric that has a high correlation with the HVS. While developing this metric, a QoE-based approach that examines the effects of real visual experiences and the different perception characteristics of humans on depth perception should be utilized. On the other hand, designing a hybrid 3D-video QoE evaluation metric that combines the strengths of the FR, RR, and NR metrics is a remarkable advantage. The development of a 3D-video QoE evaluation metric with all these properties contributes to the production of more scientific studies on ubiquitous 3D-video technologies.
Considering all of these facts, this study proposes a hybrid 3D-video QoE evaluation metric relying on the depth cues associated with the spatial-resolution feature of a 3D video, which is quite effective at influencing the depth-perception experience of a 3D viewer. These depth cues are the blurriness and motion information extracted from the 2D-texture videos and the retinal-image size and convergence extracted from the depth maps (DMs). As the first step of the metric-development process, prediction models are developed for these depth cues. Because nonobjective features of 3D videos, such as the perceived depth and naturalness, differ from person to person, subjective tests are applied to evaluate the QoE of the 3D videos. Then, the depth cues and the Mean Opinion Score (MOS) values obtained from the subjective tests are subjected to a correlation analysis to form the proposed hybrid 3D-video QoE evaluation metric. The performance-evaluation results derived by using the proposed metric prove its effectiveness in assessing the 3D-video QoE.
The rest of this paper is organized as follows: Section 2 reviews the state-of-the-art studies. Section 3 explains the proposed hybrid 3D-video QoE evaluation metric, and Section 4 describes the modeling of its two components. Section 5 presents the results and discussions. The paper ends with the conclusions and future works given in Section 6.

2. State-of-the-Art Studies

In this section, we provide an overview of the existing studies in the literature in two parts, adhering to the reference-classification approach in Section 1. In the first part, the FR and the RR metrics are presented together, which need to take the original video or some features of the original video as a reference, respectively. The second part includes the NR metrics, which do not need any references for the video-quality-measurement process. Finally, we present an evaluation of the state-of-the-art studies to identify the literature gap.

2.1. Reference-Based Metrics

In [29], the use of objective two-dimensional (2D) video-quality metrics for the 3D-video-quality assessment (VQA) is discussed, and a perceptual-based objective metric that mimics the HVS is proposed. In this study, the luminance component is taken as an input parameter in the development of the metric. According to the experimental results, the proposed Perceptual Quality Metric (PQM) mimics the MOS and has greater alignment with it compared to the Video Quality Metric (VQM), which makes it appropriate for 2D- and 3D-video-quality evaluations. In the FR metric in [30], HVS properties such as the contrast-sensitivity function and luminance masking are taken into account, and the 3D-DCT transform is used to analyze the perceptual similarity of the blocks in the left and right views of the stereoscopic video frames. In [31], a 3D structural-similarity (3D-SSIM) approach is proposed. The proposed algorithm regards a video signal as a 3D volume image and combines a local SSIM-based quality measure with local information content and distortion-based pooling methods. The proposed metric in [32] uses blocking artifacts, blurring in edge regions, and the video-quality difference between two views. The proposed metric in [33] uses the color-video information and the depth information as the input parameters. The color-quality metric (CQM) for 3D videos proposed in [34] takes the luminance coefficient into consideration as it is much more sensitive than the chrominance coefficient of a frame for the HVS. In [35], the proposed metric focuses on the inter-view correlation of the spatial–temporal structural information extracted from adjacent frames. In [36,37], the proposed FR 3D-video-quality metric is modeled around the HVS, fusing the information of both the left and right channels and considering color components, the cyclopean views of the two videos, and the disparity. Since the metric also considers the screen size, video resolution, and the distance of the viewer from the screen, it can be used in different applications. In [38,39], an RR stereoscopic VQA metric is proposed, which comprises spatial neighboring information from the contrast of grey-level co-occurrence matrices for both color and depth as well as edge properties.
In [40], an FR stereoscopic video-quality-assessment (SVQA) metric based on the Stereo Just-Noticeable Difference (SJND) model that works by using contrast, spatial masking, temporal masking, and binocular masking factors to mimic the HVS is proposed. In [41], an FR stereoscopic VQA metric is proposed by using measurements of structural distortions, blurring artifacts, and content complexity. In the FR metric proposed in [42], human stereoscopic vision is modeled by combining left-eye-view and right-eye-view information through 3D-DCT transformation, and the contrast sensitivity of the HVS is considered as well as the depth information of the scene. The metric proposed in [43] is developed by incorporating the stereoscopic visual-attention (SVA) metric into the stereoscopic video-quality-assessment (SVQA) metric in order to benefit the image-quality-evaluation metrics. The proposed metric in [44], in which the SSIM metric is adapted to stereoscopic videos, is the product of approaches that combine SSIM maps and depth maps with local and global weighting methods. In [45], with the approach that the 3D distortions affecting the 3D video quality should also be taken into account when developing a 3D VQA metric, the proposed metric uses texture distortions (i.e., ghost effects and contour artifacts) and depth distortions as the input parameters. In [46], an FR 3D VQA metric based on the dependencies between motion and its binocular disparities was developed. This metric calculates the spatial, temporal, and depth features and uses them in the ultimate quality calculation. The proposed metric in [47] is used for the quality evaluation of various asymmetrically compressed stereoscopic 3D videos. It is observed that the results obtained from the proposed 2D-to-3D metric are more successful than the results obtained from the direct averaging method. The metric proposed in [48] uses two important phenomena (i.e., binocular suppression and recurrent excitation) to model the HVS better and improve depth perception. The FR 3D-video-quality metric proposed in [49] is based on measuring the directional dependency between the motion and depth sub-band coefficients of stereoscopic 3D videos. The proposed metric in [50] evaluates the quality of 3D videos synthesized with DIBR from three aspects: the quality of unoccluded regions, quality of first-order similarity, and quality of second-order similarity using an energy-based sequence-mapping strategy. Another SSIM-based metric in [51] uses the perceptually significant features, contrast, and motion characteristics that have an impact on the HVS.

2.2. NR Metrics

In [52,53], an objective metric (3VQM) is proposed for Depth-Image-Based Rendering (DIBR)-based stereoscopic 3D videos. According to this metric, firstly, the ideal depth map is estimated, which is then used to derive three distortion measures (temporal outliers—TO, temporal inconsistencies—TI, and spatial outliers—SO) to objectify the visual discomfort in the stereoscopic videos. The combination of the three measures constitutes a vision-based quality measure for 3D DIBR-based videos. In the metric proposed in [54], the four factors (temporal variance, disparity variation in the intraframes, disparity variation in the interframes, and disparity distribution in the frame-boundary areas) that affect human perception and visual comfort are examined. In [55], motion and parallax information obtained from depth maps and their histograms are the main parameters of the proposed stereoscopic VQA metric. The results show good performance for video sequences that contain annoying effects for the human eye.
In [56], an NR stereoscopic VQA metric that considers the correlation between the packet loss and perceptual video quality in the network is proposed. The metric yields better results than existing objective metrics so that it can be used in real time when monitoring network statistics. The NR metric proposed in [57], which can be used in the quality measurement of 3D videos that are corrupted or degraded after transmission, uses disparity-index-based dissimilarity measurements and edge-detection-based perceptual-difference measurements. Experimental results demonstrate the effectiveness of the proposed metric. In [58], a stereoscopic VQA metric is proposed to quantify the perceived quality of transmitted and degraded stereoscopic videos. The extracted features are accumulated according to the binocular suppression that is performed by measuring dissimilarity based on the disparity index and perceptual-difference measurement based on edge detection. According to the results, considering the effect of binocular rivalry in a stereoscopic video-quality metric seems to be effective at reflecting the HVS sensitivity and increasing the overall quality.
The proposed NR metric in [59], which examines the effect of the variable network conditions on the 3D-video quality, uses the frame rate, bit rate, and network-packet-loss rate. In [60], the proposed NR metric considers the motion vector lengths and depth information for the 3D-video-quality evaluation. In [61], an NR 3D objective VQA metric that estimates the 3D quality by taking into account the spatial distortions, excessive disparity, depth representation, and temporal information of the video is proposed. The metric is resolution- and frame-rate-independent. To estimate the amount of spatial distortion in the video, the proposed metric computes blockiness. In [62], an extended NR objective 3D VQA metric that can run in real time is proposed. For this purpose, the network-packet loss, video-transmission bit rate, and frame-rate parameters are used as the input parameters.
In [63], a stereo VQA metric is proposed by modeling the binocular perception effect in multiviews, including the spatial domain, the temporal domain, and the spatial–temporal domain. In [5], a depth-perception quality metric is applied to a blind stereoscopic video-quality evaluator to obtain an NR stereoscopic video-quality metric. The proposed NR metric in [64] is based on modeling the joint statistical dependencies between the motion and depth sub-band coefficients. In the proposed metric in [65], the components in the spatial and frequency domains associated with the HVS are used for the 3D VQA. In [66], the proposed NR stereoscopic VQA metric first utilizes the 3D saliency map of the sum map, then uses sparse representation to decompose the sum map of 3D saliency into coefficients, and calculates features based on the sparse coefficients to obtain an effective representation of the video content.
The study in [67] introduces a 3D convolutional-neural-network-based SVQA framework that can model not only local spatiotemporal information but also global temporal information with cubic-difference video patches as the input. In [68], a blind NR 3D VQA metric, which is based on the HVS mechanism and natural video statistics of 3D-video characteristics, is proposed. In [69], a stereoscopic VQA metric based on motion perception is proposed. In [70], a comprehensive stereoscopic VQA metric based on the joint contribution of multiple-domain information and a new interframe cross about spatiotemporal information is proposed.
Apart from these studies, the study in [71] examines the added value of using stereo saliency prediction in FR and NR quality-evaluation cases.

2.3. Evaluation of the State-of-the-Art Studies

As can be seen from the elucidations above, there are three important limitations regarding the QoE evaluation of 3D videos from the depth-perception perspective. The first is that it is very difficult to measure the depth cues in 3D videos with current rational methods and scientific approaches. Only a limited number of the many factors affecting the human 3D-video QoE can be considered, and these factors are evaluated only within the limits permitted by well-known scientific approaches. Another limitation is that the results obtained from objective 3D-video QoE metrics do not correspond exactly to the 3D-viewing perception of an end user. Therefore, it would not be wrong to state that the most important problem with objective 3D QoE evaluation metrics is the lack of a high correlation with human depth-viewing perception. The last major problem is the researchers’ habit of designing their proposed metrics solely around the traditional FR, RR, or NR approaches.
Considering the handicaps elucidated above, the 2D + DM-formed 3D-video QoE evaluation metric proposed in this study is designed by using spatial-resolution-associated depth cues, which directly affect the depth perception of the viewer (i.e., the blurriness and motion information measured on the 2D-texture videos and the retinal-image size and convergence measured on the DM sequences). Moreover, while developing the proposed metric with an innovative approach, the NR and RR types are integrated to make a hybrid metric. In light of these facts, it can be stated that the proposed study develops a remarkable hybrid 3D-video QoE evaluation metric that uses depth cues from two different sources and departs from the routine FR, RR, and NR classification approach to which researchers usually adhere.

3. Proposed Hybrid 3D-Video QoE Evaluation Method

In this paper, we propose a hybrid 3D-video QoE evaluation metric that utilizes depth cues associated with spatial resolution (i.e., the blurriness and motion information extracted from the 2D-texture videos, and the retinal-image size and convergence extracted from the depth maps).
We have a salient reason for focusing on the depth cues associated with spatial resolution in this study. In 3D videos, the depth-perception satisfaction of the viewer is at the forefront, and the viewer unwittingly encounters many depth cues. Because of this high depth-cue density, it is a rule of thumb in developing a 3D-video QoE evaluation metric to design it around the QoE factors that increase the depth perception of the viewer. Since a significant number of these factors are closely related to spatial resolution, it is appropriate to start with the spatial resolution.
Spatial resolution can be defined as the number of pixels used for displaying a certain area of a digital image that shows a plane defining a finite volume in unlimited space. In a digital image, the smaller the area a pixel occupies on an object, the more pixels are used to represent that object. Accordingly, as the number of pixels per area (i.e., the spatial resolution) in a digital image increases, it is possible to display more detail [72].
Considering a digital image with versions at different spatial resolutions, objects are represented with a greater number of pixels in the higher-spatial-resolution version of the image. Therefore, the pixel-related losses in the objects are smaller, and the lines that delineate the objects are more visible. As the objects become more apparent, it becomes easier to distinguish them from their background and from other objects, and the viewer’s depth perception increases. In contrast, in the lower-spatial-resolution version, objects are represented with fewer pixels due to the use of larger pixels. The increase in pixel-related losses in objects results in the loss of detail in the image and a decrease in the viewer’s depth perception [72].
On the other hand, the depth cues in 2D color videos and their associated DM sequences cause the viewer to perceive more or less depth depending on the spatial resolution. As a matter of fact, the HVS obtains better-quality stereoscopic vision in the high-spatial-resolution version by perceiving the monocular and binocular cues that create depth perception more clearly and more comfortably. On the contrary, in the low-spatial-resolution version, the cues that create depth perception disappear or become unnoticeable to the viewer, and it is not possible to obtain superior-quality stereoscopic vision [73].
For the reasons explained above, the spatial resolution of a 3D video is an important factor that directly affects a viewer’s depth-perception experience. Therefore, the development of a 3D-video QoE evaluation metric that considers the role of this factor in obtaining stereoscopic vision in the HVS is the focus of this research study.
The framework of the proposed 3D-video QoE evaluation metric is illustrated in Figure 1. As shown in the framework, we prefer using a 3D-video representation that is the product of the 2D + DM method. The 2D + DM method has become one of the most preferred 3D-video-creation techniques due to its support for coding, transmission, and compression technologies [74].
As can also be seen from Figure 1, due to the usage of 3D videos obtained with the 2D + DM method in this study, the proposed metric has two main elements, one from the 2D-texture video (M_C) and the other from the DM (M_D). It is clear that these elements have their own effects on the viewer’s perception of depth, and each contributes separately to the artificial stereoscopic vision. A change in one of these elements directly causes the viewer’s depth perception to change, and the reflection of this change in the artificial stereoscopic vision occurs independently of the other element. Therefore, there is an additive relationship between these elements, and this relationship can be modeled in a metric based on superposition. In light of these explanations, the proposed metric combines these two elements as follows:
M_{3D} = M_C + M_D        (1)
where M_3D denotes the overall value of the proposed metric.
The M_C element captures the effects on the viewers’ depth perception of two depth cues in the texture video, blurriness and motion information, together with the spatial resolution of the 2D-texture video. The M_D element captures the contributions of the two monocular cues in the DM (i.e., the retinal-image size and convergence) together with the spatial resolution to the depth perception of the viewers. The M_3D value ranges from 0 to 15.

3.1. Proposed Models for the Depth Cues

As we state in Section 3, while constructing the proposed metric, we prefer using the 3D-video representation form that is the product of the 2D + DM method. The 2D-texture videos are the main components of the 3D videos, and the main QoE factors that create depth perception in the viewer are depth cues hidden in these 2D-texture videos; the DM sequences, which serve as the helping component, contain depth-information pixels corresponding to each pixel in the associated 2D-texture video. The quality of the viewers’ 3D-video viewing experience increases significantly when these QoE factors are used effectively or brought into the foreground. Therefore, it is indisputable that the QoE of 3D videos that succeed in showing more realistic scenes to the viewer, because they are equipped with depth cues, is high.
The 2D + DM-formed 3D videos are tailor-made for measuring the depth cues. They allow for measuring the depth cues in the 2D-texture videos and DM sequences separately and provide the possibility to measure depth cues from two separate sources.

3.2. Blurriness

The blur defect, which directly affects the quality of digital images and videos, manifests itself as a reduction in the high-frequency components that contain edge information in the image. Accordingly, in digital images, the values of neighboring pixels in the blurred parts of the image converge.
The ambiguity that occurs especially in the edge information of the objects causes the shapes of the objects to not be understood by the viewer or the objects to be indistinguishable from each other or the background. This situation dramatically reduces the perception of depth of the viewer. Therefore, blurriness is an unacceptable flaw in 3D videos that can be associated with the spatial resolution of these videos.
In this study, to scale the blurriness, the total standard deviation of the 2D-texture videos is normalized by the spatial resolution and frame rate as follows:
B = \sum_{i=1}^{f} \sqrt{\frac{1}{N} \sum_{j=1}^{N} (x_j - \bar{x})^2} \cdot \frac{F}{S}        (2)
where B is the blurriness, i is the frame index, f is the total number of frames, j is the pixel index, N is the total number of pixels, x_j is the pixel value, x̄ is the mean of the pixel values in the frame, F is the frame rate, and S is the spatial resolution. Table 1 presents the blurriness measurements of the 2D-texture videos calculated by using Equation (2). According to Table 1, the amount of blurriness in the versions of a selected 2D-texture video (e.g., Breakdance) with a specific spatial resolution (e.g., SD) and a gradually increasing compression ratio from QP = 25 to QP = 45 remains close. It is also seen that the amount of blurriness in the versions of a selected 2D-texture video (e.g., Ballet) encoded with a specific compression ratio (e.g., QP = 25) and whose spatial resolution changes gradually from SD to QCIF fluctuates. These observations clearly indicate the correlation between blurriness and spatial resolution.
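To make Equation (2) concrete, the following is a minimal Python sketch (not part of the original paper); the function name, the NumPy-based frame representation, and the way frames are supplied are illustrative assumptions:

```python
import numpy as np

def blurriness(gray_frames, frame_rate, spatial_resolution):
    """Blurriness B of Equation (2): per-frame standard deviation of the
    2D-texture video, summed over all frames and scaled by F/S.

    gray_frames: iterable of 2D arrays, one grayscale texture frame each
    frame_rate: F, in frames per second
    spatial_resolution: S, the number of pixels per frame (e.g., 720 * 576 for SD)
    """
    # np.std with the default ddof=0 computes sqrt((1/N) * sum (x_j - x_mean)^2)
    total_std = sum(float(np.std(frame)) for frame in gray_frames)
    return total_std * frame_rate / spatial_resolution
```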

3.3. Motion Information

One of the most remarkable parameters affecting the depth perception of a viewer is the motion information of a 3D video. The motion information is a parameter that depends on the motion density of video frames.
The motion density in a frame is directly proportional to the spatial resolution of the frame: the higher the spatial resolution, the more pixel displacements are represented, and hence the higher the motion density of the frame.
Optical-flow vectors are used to measure the motion density of the frames. In the calculation of optical-flow vectors, dense or sparse optical-flow algorithms are used. The dense optical flow is based on the global calculation of the amount of displacement of each pixel in an image sequence that occurs between the current frame and the previous frame. Therefore, every pixel that is displaced and not displaced is included in the calculation. The sparse optical flow, on the other hand, is based on the local calculation of the displacement of only displaced pixels in an image sequence between the current frame and the previous frame.
In this study, we use an optical-flow vector calculated by using the Horn and Schunck method, which is a dense optical-flow algorithm, to measure the motion information.
The motion information is calculated by normalizing the average of the total motion density in a video sequence as follows [75]:
M = \frac{\sum_{i=1}^{f} \Pi(i)}{f} \cdot \frac{F}{S}        (3)
where M is the motion information, i is the frame index, f is the total number of frames, Π(i) is the motion density of the i-th video frame, F is the frame rate, and S is the spatial resolution. Π(i) is calculated according to the following equation [75]:
\Pi(i) = \sum_{d=1}^{n} V_d(x_i, y_i)        (4)
where d is a feature point in the frame, n is the number of feature points in the frame, and V_d(x_i, y_i) is the motion vector of the i-th frame at feature point d. Table 2 shows the motion-information measurements of the 2D-texture videos computed by using Equation (3). In Table 2, it is noticeable that as the compression ratio gradually increases (from QP = 25 to QP = 45) in the SD, CIF, or QCIF spatial-resolution forms of each 3D video, the motion amount gradually decreases; a strong relationship between the motion information and the compression ratio can be observed clearly. In addition, as the spatial resolution gradually changes (from QCIF to SD) at any QP value, the motion amount gradually increases. As can be seen, there is another strong relationship between the motion information and spatial resolution.
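A hedged sketch of Equations (3) and (4) is given below. The paper uses the Horn and Schunck dense optical flow; core OpenCV does not ship that algorithm, so Farneback dense flow is used here as a stand-in, and Π(i) is interpreted as the sum of flow-vector magnitudes over the frame:

```python
import cv2
import numpy as np

def motion_information(gray_frames, frame_rate, spatial_resolution):
    """Motion information M of Equation (3): average per-frame motion
    density Pi(i), scaled by F/S.

    gray_frames: list of 2D uint8 arrays (grayscale texture frames).
    Farneback dense optical flow stands in for Horn-Schunck here.
    """
    densities = []
    for prev, curr in zip(gray_frames[:-1], gray_frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Pi(i): sum of the motion-vector magnitudes over the frame (Equation (4))
        densities.append(float(np.linalg.norm(flow, axis=2).sum()))
    avg_density = sum(densities) / len(densities)
    return avg_density * frame_rate / spatial_resolution
```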

3.4. Retinal-Image Size

According to Emmert’s law [76], the distance between an object and its viewer can be calculated by using the actual size of the object and the size of its image on the viewer’s retina (see Figure 2).
The mathematical expression of this law is given by the following equation:
P = R \times D        (5)
where P is the size of the object, D is the distance of the object from the viewer’s eye, and R is the size of the image of the object formed on the retina. Since P does not change, R decreases when D increases and vice versa. In other words, when the object moves away from the viewer and the depth increases, the retinal-image size of the object decreases, and the viewer perceives it as smaller. On the contrary, when the object moves nearer to the viewer and the depth decreases, the retinal-image size of the object increases, and the viewer perceives it as larger. This phenomenon is reflected in the pixel values of the DM sequences of 3D videos: as an object moves farther away or nearer, the depth-pixel values change between 0 and 255 depending on the depth of the object, and the depth-pixel colors take gray tones, where white corresponds to the nearest distance and black corresponds to the farthest distance (see Figure 3).
In light of this information, it is proposed to use the change in the depth-pixel values in the DM to compute the retinal-image size in this study. This change can be calculated with the Mean Absolute Deviation (MAD) method for each DM frame as follows:
R = \frac{\sum_{i=1}^{m} \sum_{j=1}^{n} \left| X_{i,j} - \bar{X} \right|}{m \times n}        (6)
where X_{i,j} is the depth-pixel value at point (i, j); X̄ is the average depth-pixel value of the frame of the DM sequence; and m and n are the frame width and height, respectively. Table 3 presents the retinal-image-size measurements of the DM sequences calculated by using Equation (6). It shows that the retinal-image-size measurements in the versions of a selected DM (e.g., Advertisement) with a specific spatial resolution (e.g., SD) and a gradually increasing compression ratio from QP = 25 to QP = 45 gradually increase or tend to increase. This behavior suggests that there is no significant relationship between the retinal-image size and the compression ratio; it is highly likely that this lack of relationship is caused by spatial and temporal distortions due to encoding, compressing, resizing, upsampling, downsampling, or other similar operations on the DM sequences. Table 3 also shows that the retinal-image-size measurements in the versions of a selected DM (e.g., Butterfly) encoded with a specific compression ratio (e.g., QP = 25) and whose spatial resolution changes gradually from SD to QCIF gradually increase or tend to increase. This indicates a strong relationship between the retinal-image size and spatial resolution.
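A minimal sketch of Equation (6) follows. Equation (6) is defined per DM frame; averaging the per-frame values over the sequence is an assumption made here to report one value per DM sequence:

```python
import numpy as np

def retinal_image_size(depth_frames):
    """Retinal-image size R of Equation (6): Mean Absolute Deviation (MAD)
    of the depth-pixel values of a DM frame, averaged over the sequence.

    depth_frames: iterable of 2D arrays with depth pixels in [0, 255]
    """
    mads = []
    for frame in depth_frames:
        frame = np.asarray(frame, dtype=np.float64)
        mads.append(float(np.abs(frame - frame.mean()).mean()))  # sum|X_ij - mean| / (m*n)
    return float(np.mean(mads))
```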

3.5. Convergence

The position of objects affects the viewing angle of the eyes. Convergence refers to the fact that an object moving closer to the viewer’s eyes is seen with a greater angle. Therefore, convergence is a factor that directly increases the depth perception of the viewer. As seen in Figure 4, the viewing angle for an object positioned at a distance d from the viewer is calculated as follows:
\alpha = 2 \tan^{-1}\left(\frac{x}{2d}\right)        (7)
where α is the viewing angle and x is the distance between two human eyes. In the literature, the x distance between two human eyes is adopted as 65 mm [77].
The viewing angles differ for objects located at the same distance from the eye but with different volumes and surface areas. In Figure 4, two objects with different surface areas (S_1 > S_2) are positioned at the same distance d from the viewer. Accordingly, the viewing angle for the object with a larger surface area (α_1) will be smaller than the viewing angle for the object with a smaller surface area (α_2). The situation is similar for DM sequences with different spatial resolutions.
According to the geometric analysis in Figure 4, if the viewers watch SD-, CIF-, and QCIF-sized DM sequences of a 2D-texture video, they perceive that the distance of the objects does not change, but the objects in the DM sequences are reduced in size and settle in farther locations. This means that DM sequences with a lower spatial resolution are viewed with a larger viewing angle.
In order to obtain convergence in this study, the viewing angles are calculated by using Equation (7) for each frame of the DM sequences, and the total viewing angle is normalized as follows:
C = \frac{\sum_{i=1}^{f} \alpha_i}{f \times S}        (8)
where C is the convergence, i is the frame index, f is the total number of frames, S is the spatial resolution, and α_i is the viewing angle of the i-th frame. Table 4 presents the convergence measurements of the DM sequences computed by using Equation (8). It shows that the convergence measurements in the versions of a selected DM (e.g., Interview) with a specific spatial resolution (e.g., SD) and a gradually increasing compression ratio from QP = 25 to QP = 45 fluctuate. Similar to the retinal-image-size case, this fluctuation means that there is no significant relationship between convergence and the compression ratio, for the reasons explained before. Table 4 also shows that the convergence measurements in the versions of a selected DM sequence (e.g., Windmill) encoded with a specific compression ratio (e.g., QP = 25) and whose spatial resolution changes gradually from SD to QCIF gradually increase or tend to increase. So, a strong relationship between convergence and the spatial resolution can be observed.
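The sketch below illustrates Equations (7) and (8). How a representative object distance d is extracted from each DM frame is not detailed here, so it is passed in as an input; this, the function name, and the millimeter units are assumptions of the sketch:

```python
import numpy as np

EYE_SEPARATION_MM = 65.0  # inter-pupillary distance x adopted in the paper [77]

def convergence(object_distances_mm, spatial_resolution):
    """Convergence C of Equation (8): per-frame viewing angles of
    Equation (7), averaged over the frames and normalized by S.

    object_distances_mm: one representative object distance d per DM frame
    spatial_resolution: S, the number of pixels per frame
    """
    angles = [2.0 * np.arctan(EYE_SEPARATION_MM / (2.0 * d))
              for d in object_distances_mm]
    return sum(angles) / (len(angles) * spatial_resolution)
```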

3.6. Subjective Tests

Subjective tests, conducted within the framework of standards adopted by major standardization bodies, derive their results directly from the human visual system. Thus, it becomes possible to consider the relative effect of the depth cues on the viewers by using the subjective test results represented by the MOS values. In this study, subjective tests are conducted to construct a relationship between the MOS values and the proposed metric. After the tests, the 95% confidence intervals [78] are also computed together with the MOS values.
The subjective tests were carried out independently of the metric design by using 10 different 2D + DM-formed 3D videos (Breakdance, Ballet, Windmill, Newspaper, Interview, Advertisement, Butterfly, Chess, Farm, and Football) in different spatial resolutions (i.e., SD, CIF, and QCIF) encoded with Quantization Parameters (QPs) of 25, 30, 35, 40, and 45. A 23″ autostereoscopic display was utilized to present the 2D + DM-form-based 3D videos during the experiments.
Before the subjective tests, the participants are sufficiently informed about the features of the test and scoring. The scores given by the observers range from one to five. A five indicates that perception is at the highest level, and a one indicates that it is at the lowest level. The observers are not informed about the order, coding parameters, and features of the test videos.
The observers participating in the tests do not have expertise in 3D videos. The observers participated in the test sitting 3 m away from the autostereoscopic screen. The tests are always carried out in the same test environment. To create the 3D videos, the same sized and encoded DM sequences and 2D-texture videos are used.
During the tests, the Single Stimulus Continuous Quality Evaluation (SSCQE) method is used for quality evaluation. The observers evaluate the quality and depth perception of the encoded 3D video and the overall 3D-video quality separately, without taking a reference 3D video; while making this evaluation, the observers rely on their previous viewing experiences. All test results were screened for inconsistent scores based on the ITU-R BT.500-13 standard [78], and the results of 2 of the 23 observers who participated in the test were determined to be inconsistent. The test results of the remaining 21 observers are used to calculate the MOS values.
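For illustration, a minimal sketch of how the MOS and its 95% confidence interval could be computed for one test condition is given below; the BT.500-style observer screening is assumed to have been applied beforehand, and the function name is illustrative:

```python
import numpy as np
from scipy import stats

def mos_with_ci(scores, confidence=0.95):
    """Mean Opinion Score and its confidence interval for one test condition.

    scores: ratings (1-5) given by the retained observers to one video version.
    """
    scores = np.asarray(scores, dtype=float)
    mos = float(scores.mean())
    # Student-t half-width of the confidence interval around the mean
    half_width = stats.sem(scores) * stats.t.ppf((1.0 + confidence) / 2.0,
                                                 len(scores) - 1)
    return mos, (mos - half_width, mos + half_width)
```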

4. Modeling of M_C and M_D

4.1. Modeling of M_C

As stated above, the M_C element, which represents the 2D-texture-video QoE evaluation component, combines two depth cues, namely the blurriness and motion information existing in the 2D (i.e., texture) video, with the spatial resolution of the 2D video. In order to form a model for this element, the results of the subjective tests are integrated with the M_C element to obtain a more efficient 3D-video-quality metric. During this integration, the form of M_C that yields the best correlation with the subjective test results is taken as its mathematical equation, and the Pearson correlation method is used for this correlation calculation. The common feature of the depth cues is that they change when the spatial resolution changes. Therefore, a multiplicative relationship between the depth cues and the spatial resolution is considered to best reflect the viewers’ depth perception in the proposed model. With this approach, the mathematical equation of M_C is determined as follows:
M_C = k_C \times B \times M \times S_C        (9)
where B and M are the blurriness and motion-information depth cues, respectively, and S_C is the spatial resolution of the 2D-texture video. In addition, the constant coefficient k_C in Equation (9) is selected as 10^4 for all 2D videos in order to keep the M_3D values within the specified interval.
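A minimal sketch of Equation (9), reusing the depth-cue sketches above; the constant k_C is left as a parameter here, and the function name is illustrative:

```python
def m_c(blur, motion, spatial_resolution_2d, k_c):
    """Texture-video component M_C of Equation (9).

    blur and motion are the outputs of the blurriness() and
    motion_information() sketches above; spatial_resolution_2d is S_C,
    and k_c is the constant coefficient of Equation (9).
    """
    return k_c * blur * motion * spatial_resolution_2d
```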

4.2. Modeling of M_D

As discussed above, the M_D element, which represents the DM QoE evaluation component, captures the contributions of the two monocular cues in the DM (i.e., the retinal-image size and convergence) and the spatial resolution to the depth perception of a viewer. To construct a model for this element, similar to the process carried out for the M_C element, the results of the subjective tests are integrated with the M_D element, which is adopted as the product of the two monocular cues and the spatial resolution of the DM sequences, so as to make a further contribution to the proposed metric. Similar to the M_C model, the common feature of the monocular cues is that they vary when the spatial resolution varies, and a multiplicative relationship between the monocular cues and the spatial resolution is a useful assumption to reflect the viewers’ depth perception in the proposed model.
In this sense, the M_D element’s mathematical model is formulated as follows:
M_D = k_D \times R \times C \times S_D        (10)
where R and C are the retinal-image-size and convergence monocular-depth cues, respectively, and S_D is the spatial resolution of the DM. Also, the constant coefficient k_D in Equation (10) is selected as 2.5 × 10^8 for all DMs in order to keep the M_3D values within the specified interval.
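Similarly, a sketch of Equation (10) and of the additive combination of Equation (1); as above, the constant k_D is kept as a parameter and the function names are illustrative:

```python
def m_d(retinal_size, conv, spatial_resolution_dm, k_d):
    """Depth-map component M_D of Equation (10).

    retinal_size and conv are the outputs of the retinal_image_size() and
    convergence() sketches above; spatial_resolution_dm is S_D, and k_d is
    the constant coefficient of Equation (10).
    """
    return k_d * retinal_size * conv * spatial_resolution_dm

def m_3d(m_c_value, m_d_value):
    """Overall hybrid metric of Equation (1): M_3D = M_C + M_D."""
    return m_c_value + m_d_value
```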

5. Results and Discussions

In this study, the last 150 frames of ten different 2D + DM-formed 3D videos (Breakdance, Ballet, Windmill, Newspaper, Interview, Advertisement, Butterfly, Chess, Farm, and Football) with different spatial resolutions (i.e., SD, CIF, and QCIF) and encoded with 25, 30, 35, 40, and 45 QPs are used to derive results from the proposed metric. The publicly available original versions of these videos were provided by the I-Lab, Center for Vision, Speech, and Signal Processing at the University of Surrey, UK, for research purposes. In order to evaluate the performance of the proposed metric, the MOS values and the quality-evaluation results of widely used 2D-video quality-evaluation metrics, namely the VQM, Peak Signal-to-Noise Ratio (PSNR), and structural-similarity metric (SSIM), are also calculated by using the same 3D videos. All video-quality measurements are set at a precision of four digits after the decimal point.
Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11, Table 12, Table 13 and Table 14 show the quality measurements of the videos used in terms of the MOS, VQM, PSNR, and SSIM results. The confidence-interval values for the MOS results are also presented in the tables. According to the MOS, VQM, PSNR, and SSIM results, it can be clearly observed that as the compression ratio gradually increases (from QP = 25 to QP = 45) in the SD, CIF, or QCIF spatial-resolution forms of each 3D video, the 3D-video QoE perceived by the viewer decreases. This clearly shows the effects of the video spatial resolution and video compression ratio on the 3D-video QoE. The results obtained from the proposed metric bear a resemblance to the MOS results as well as the VQM, PSNR, and SSIM results. As can also be observed in the tables, the highest quality measurements calculated by the objective VQM, PSNR, and SSIM methods are obtained from the lowest-compression-ratio (QP = 25) versions of the SD, CIF, and QCIF spatial-resolution videos. As the compression ratio increases gradually, the video quality decreases slightly at each compression level compared to the previous one. A similar gradual decrease is observed in the MOS measurements obtained from the subjective tests. From this point on, we will discuss the M_3D measurements of the proposed metric.
Table 5 shows the quality measurements of the video “Breakdance”. The M_3D measurement values obtained from the proposed metric are similar to both the objective video-quality measurements and the subjective MOS measurements. In other words, as with the other video-quality-measurement methods, the highest quality measurements in the proposed metric are obtained from the lowest-compression-ratio (QP = 25) versions of the SD, CIF, and QCIF spatial-resolution videos. As the compression ratio gradually increases, the M_3D measurement decreases. The same situation is observed for the video “Interview” in Table 7.
Table 6 gives the quality measurements of the video “Ballet”. The M_3D measurements obtained from the proposed metric generally show similarity to both the objective video-quality measurements and the subjective MOS values. Only the M_3D measurements for the QP = 30 and QP = 35 compression ratios at the CIF spatial resolution are equal. Here, the M_3D measurement for QP = 35 is expected to be lower than that for QP = 30 but higher than that for QP = 40, and the M_3D measurement for QP = 30 is expected to be higher than that for QP = 35 but lower than that for QP = 25. These expectations hold for all of the other QP-related results, and the observed equality arises from the fact that there are no large deviations in the M_3D measurement values for the QP = 30 and QP = 35 compression ratios.
In Table 8, the quality measurements for the video “Newspaper” are given. The M_3D measurements taken at the SD spatial resolution show a similar variation to the other objective video-quality measurements and especially the MOS values. However, the M_3D measurements taken at the CIF spatial resolution for QP = 25, QP = 30, and QP = 35 are equal, and some deviations are observed in the M_3D and SSIM measurements at the QCIF spatial resolution. These equalities at the CIF spatial resolution and deviations at the QCIF spatial resolution result from the compression and downsampling processes applied to this video. However, these effects look insignificant considering the numerical precision used.
Table 9 demonstrates the quality measurements for the video “Windmill”. According to Table 9, only the M_3D measurements taken at the SD and CIF spatial resolutions for the QP = 25 and QP = 30 compression ratios show insignificant deviations that cannot be perceived by the HVS. Also, some insignificant deviations are observed at the QCIF spatial resolution.
Table 10, which gives the quality measurements of the video “Advertisement”, shows that the M_3D measurements are not compatible with the other objective video-quality measurements and the MOS values. The M_3D measurements have large deviations at all spatial resolutions for all compression ratios. However, the video “Advertisement” is a CGI-based video, so the deviations most likely arise from the rendering method; NSI-based video-quality-evaluation metrics do not give accurate results in the quality measurements of CGI-based videos.
The quality measurements of the video “Butterfly” are given in Table 11. According to this table, only the M_3D measurements taken at the SD spatial resolution for the QP = 25 and QP = 30 compression ratios show deviation. This issue is most likely caused by errors in the compression process. The rest of the M_3D measurements are aligned with the other objective quality measurements and the MOS values.
The measurements of the video “Chess” in Table 12 show that only the M_3D measurements taken at the QCIF spatial resolution show variations similar to the other objective video-quality measurements and the MOS values, while there are significant deviations at the SD and CIF spatial resolutions for all the compression ratios. These deviations are most likely caused by the rendering method, since “Chess” is a CGI-based video, and the quality of a CGI-based video should be measured by using a CGI-based video-quality-evaluation metric.
Table 13 gives the quality measurements of the video “Farm”. This table shows that only the M_3D measurements taken at the SD spatial resolution are aligned with the other objective video-quality measurements and the MOS values. Although there are bias-like deviations at the CIF and QCIF spatial resolutions, these deviations are too insignificant to be perceived by the HVS. On the other hand, the VQM, PSNR, and SSIM measurements have deviations at all spatial resolutions and for all compression ratios because of the errors in the encoding, compressing, and resizing processes.
Lastly, it is observed in Table 14, showing the measurements of the video “Football”, that the M_3D measurements taken at the SD spatial resolution for the QP = 25 or QP = 30 compression ratios show deviations. Although there are further deviations at the CIF and QCIF spatial resolutions, they are very small and thus cannot be perceived by the HVS.
As a general assessment of the M_3D measurements, the quality estimates of the proposed metric show significant similarities with the VQM, PSNR, and SSIM measurements and especially the MOS values. Approximately 80% of the results obtained from the proposed metric vary in accordance with the MOS, VQM, PSNR, and SSIM variations. The majority of the remaining 20% show insignificant variations that cannot be noticed by the HVS. It is considered that these cases are caused by spatial and temporal distortions due to encoding, compressing, resizing, upsampling, downsampling, pixel losses, or other similar operations on the 2D-texture videos and DM sequences of the 3D videos used. In particular, the effects of the change in the compression ratio on the DM sequences are remarkable. In addition, the artifacts observed in some DM sequences led to inaccurate calculations of the depth cues and had disruptive effects on the M_3D measurements (see Figure 5).
Moreover, the 3D-video QoE evaluation performance of the M_3D compared with the VQM, PSNR, and SSIM metrics can be observed from the correlation coefficient (CC) results calculated by using the MOS results. The CC results calculated by using the Pearson method, showing the relationship between the M_3D quality estimations and the MOS values, are given in Table 15. The average CC between the M_3D and the MOS is computed as 0.775 over all the 3D videos, QPs, and spatial resolutions, while the CC results between the M_3D and the VQM, PSNR, and SSIM metrics are computed as 0.784, 0.772, and 0.838, respectively. From this point on, we will take a deeper look at Table 15.
For the videos “Breakdance”, “Ballet”, “Interview”, “Football”, and “Butterfly”, the M_3D measurements have high correlation coefficients with the objective VQM, PSNR, and SSIM metrics and the subjective MOS measurements. This means that there are strong linear relationships between the M_3D measurements and the other video-quality measurements used.
The lowest correlation coefficients between the M3D measurements and the other video-quality measurements are observed for the video “Advertisement”. They are generally below 0.3, indicating weak linear relationships between the M3D measurements and the other video-quality measurements; an increase in any of these quality measurements therefore does not imply a higher M3D measurement, and vice versa.
For the videos “Farm” and “Chess”, half of the CC results lie between 0.3 and 0.7 and the remaining half are above 0.7, indicating moderate and strong linear relationships, respectively, between the M3D measurements and the VQM, PSNR, SSIM, and MOS measurements.
For the videos “Windmill” and “Newspaper”, the CC results of the QCIF versions are generally below 0.3, indicating weak linear relationships between the M3D measurements and the VQM, PSNR, SSIM, and MOS measurements. As mentioned above, this situation arises from errors introduced during encoding, resizing, and downsampling, which are reflected negatively in the M3D. The CIF and SD versions have high CC results, indicating strong linear relationships between the M3D measurements and the VQM, PSNR, SSIM, and MOS measurements.
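The qualitative labels used in the discussion above (weak, moderate, strong) follow the CC thresholds of 0.3 and 0.7 adopted here. A small helper that applies these thresholds is sketched below; the function name and the handling of values exactly at the boundaries are our own assumptions.

```python
def cc_strength(cc: float) -> str:
    """Classify a Pearson CC magnitude using the thresholds adopted above:
    below 0.3 -> weak, 0.3-0.7 -> moderate, above 0.7 -> strong."""
    magnitude = abs(cc)
    if magnitude < 0.3:
        return "weak"
    if magnitude <= 0.7:
        return "moderate"
    return "strong"

# Example with three CC(M3D, MOS) values from Table 15:
for case, cc in [("Windmill QCIF", 0.255), ("Chess SD", 0.525), ("Ballet SD", 0.956)]:
    print(f"{case}: CC = {cc:.3f} -> {cc_strength(cc)}")
```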
In light of the CC results in Table 15 and the explanations above, there is a useful correlation between the M3D quality estimations and the MOS, VQM, PSNR, and SSIM measurements, and this correlation is worth considering when developing a new hybrid 3D-video-quality metric based on spatial resolution and depth cues.

6. Conclusions and Future Work

Researchers generally use subjective tests to evaluate the quality of 3D videos. However, subjective tests have significant disadvantages, such as high cost, long duration, and unsuitability for real-time applications. For this reason, there is a great need for an objective, hybrid 3D-video QoE evaluation metric that is highly correlated with the HVS and aligns closely with the MOS. To develop such a metric, the effects of depth cues and spatial resolution, which directly affect the viewer’s depth perception, must be considered.
In this study, a hybrid 3D-video QoE evaluation metric was developed that employs the effects of spatial-resolution-associated blurriness, motion information, retinal-image size, and convergence on the depth perception of the viewers, for use in the quality evaluation of 3D videos obtained with the 2D + DM method, which is widely preferred by researchers. Blurriness and motion information are derived from the 2D color-texture video, while retinal-image size and convergence are derived from the DM; spatial resolution is derived from both the color-texture video and the DM. A schematic sketch of how these inputs can be pooled into a single score is given below.
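The sketch below illustrates one possible way of combining the five inputs. The linear pooling form, the weight values, and the function and parameter names are illustrative assumptions made for exposition only; they are not the exact formulation or coefficients of the proposed M3D.

```python
import numpy as np

def m3d_sketch(blurriness, motion_info, retinal_image_size, convergence,
               spatial_resolution_factor,
               weights=(0.25, 0.25, 0.25, 0.25)):
    """Schematic pooling of the depth cues into a single 3D-video QoE score.

    blurriness, motion_info          : derived from the 2D colour-texture video
    retinal_image_size, convergence  : derived from the depth map (DM)
    spatial_resolution_factor        : scaling derived from both components
    weights                          : illustrative pooling weights (assumed)
    """
    # In practice each cue would need to be normalised to a comparable range
    # before pooling, since the raw cue magnitudes differ by several orders.
    cues = np.array([blurriness, motion_info, retinal_image_size, convergence])
    w = np.asarray(weights, dtype=float)
    return float(spatial_resolution_factor * np.dot(w, cues))

# Hypothetical usage with already-normalised cue values:
print(m3d_sketch(0.62, 0.48, 0.55, 0.51, spatial_resolution_factor=1.0))
```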
This study emphasizes the critical role of the depth cues associated with spatial resolution in designing an effective 3D-video QoE metric. The results show that the proposed hybrid metric is quite successful and can be used to predict the 3D-video QoE. These results also prove that using depth cues and spatial resolution together as input parameters is an appropriate approach when developing a 3D-video QoE evaluation metric. In particular, the high correlation with the HVS supports the validity of the proposed metric’s estimations. The proposed metric allows researchers to avoid the high cost of subjective tests and to save time, and, being a hybrid metric, it is also feasible to use in real-time applications. For these reasons, it is expected to accelerate studies on 3D-video technologies and to encourage future work.
It should be noted that the predicted MOS values can be further improved. In future work, the formulas can be fine-tuned by optimizing the coefficients, developing different models for measuring the depth cues, replacing existing depth cues with other cues, and/or adding extra depth-cue elements to the proposed metric; a sketch of such a coefficient-fitting step is given below.
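As an example of the coefficient-optimization direction mentioned above, the following sketch fits linear pooling coefficients by least squares so that the predicted scores best match the MOS over a training set. The linear model, the synthetic data, and the variable names are assumptions made for illustration; this is not the refinement procedure used in this work.

```python
import numpy as np

# Hypothetical training data: each row holds the cue values measured for one
# video configuration (blurriness, motion, retinal-image size, convergence,
# spatial-resolution factor); mos holds the corresponding subjective scores.
rng = np.random.default_rng(0)
cues = rng.random((40, 5))
mos = cues @ np.array([0.8, 1.2, 0.5, 0.3, 0.9]) + 0.05 * rng.standard_normal(40)

# Least-squares fit of linear pooling coefficients (with an intercept term).
design = np.hstack([cues, np.ones((cues.shape[0], 1))])
coeffs, *_ = np.linalg.lstsq(design, mos, rcond=None)
predicted = design @ coeffs

print("fitted coefficients:", np.round(coeffs, 3))
print("Pearson CC with MOS:", np.round(np.corrcoef(predicted, mos)[0, 1], 3))
```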

Author Contributions

Conceptualization, S.C. and G.N.Y.; methodology, S.C., G.N.Y., F.B. and S.I.; software, S.C. and G.N.Y.; validation, S.C., G.N.Y., F.B., M.A. and S.I.; investigation, S.C., G.N.Y. and F.B.; resources, S.C. and G.N.Y.; data curation, S.C., G.N.Y. and F.B.; writing—original draft preparation, S.C., G.N.Y., F.B. and S.I.; writing—review and editing, G.N.Y., F.B., M.A. and S.I.; supervision, G.N.Y., F.B. and S.I.; project administration, S.C. and G.N.Y.; funding acquisition, F.B., S.I. and M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Research Supporting Project (project number RSPD2023R553), King Saud University, Riyadh, Saudi Arabia.

Data Availability Statement

The publicly available videos were provided by the I-Lab, Center for Vision, Speech, and Signal Processing at the University of Surrey, UK, for research purposes.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ghadiyaram, D.; Pan, J.; Bovik, A.C. Learning a Continuous-Time Streaming Video QoE Model. IEEE Trans. Image Process. 2018, 27, 2257–2271. [Google Scholar] [CrossRef] [PubMed]
  2. International Telecommunication Union—Telecommunication Standardization Sector. Vocabulary for performance, quality of service and quality of experience. In Recommendation ITU-T P.10/G.100 (2017)—Amendment 1 (06/2019); International Telecommunication Union: Geneva, Switzerland, 2019. [Google Scholar]
  3. Suárez, F.J.; García, A.; Granda, J.C.; García, D.F.; Nuño, P. Assessing the QoE in Video Services Over Lossy Networks. J. Netw. Syst. Manag. 2016, 24, 116–139. [Google Scholar] [CrossRef]
  4. Su, Z.; Li, D.; Ren, H.; Chen, L. Evaluation of depth perception based on binocular stereo vision. In Proceedings of the 13th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), Guilin, China, 29–31 July 2017; pp. 2892–2896. [Google Scholar] [CrossRef]
  5. Chen, Z.; Zhou, W.; Li, W. Blind Stereoscopic Video Quality Assessment: From Depth Perception to Overall Experience. IEEE Trans. Image Process. 2018, 27, 721–734. [Google Scholar] [CrossRef] [PubMed]
  6. Zhai, G.; Min, X. Perceptual image quality assessment: A survey. Sci. China Inf. Sci. 2020, 63, 211301. [Google Scholar] [CrossRef]
  7. Barman, N.; Jammeh, E.; Ghorashi, S.A.; Martini, M.G. No-Reference Video Quality Estimation Based on Machine Learning for Passive Gaming Video Streaming Applications. IEEE Access 2019, 7, 74511–74527. [Google Scholar] [CrossRef]
  8. Vlaović, J.; Vranješ, M.; Grabić, D.; Samardžija, D. Comparison of Objective Video Quality Assessment Methods on Videos with Different Spatial Resolutions. In Proceedings of the International Conference on Systems, Signals and Image Processing (IWSSIP), Osijek, Croatia, 6–7 June 2019; pp. 287–292. [Google Scholar] [CrossRef]
  9. Yilmaz, G.N. A no reference depth perception assessment metric for 3D video. Multimed. Tools Appl. 2015, 74, 6937–6950. [Google Scholar] [CrossRef]
  10. Varga, D. No-Reference Image Quality Assessment with Global Statistical Features. J. Imaging 2021, 7, 29. [Google Scholar] [CrossRef] [PubMed]
  11. Rajchel, M.; Oszust, M. No-reference image quality assessment of authentically distorted images with global and local statistics. SIViP 2021, 15, 83–91. [Google Scholar] [CrossRef]
  12. Dost, S.; Saud, F.; Shabbir, M.; Khan, A.G.; Shahid, M.; Lovstrom, B. Reduced reference image and video quality assessments: Review of methods. J. Image Video Proc. 2022, 2022, 1. [Google Scholar] [CrossRef]
  13. Min, X.; Zhai, G.; Gu, K.; Liu, Y.; Yang, X. Blind Image Quality Estimation via Distortion Aggravation. IEEE Trans. Broadcast. 2018, 64, 508–517. [Google Scholar] [CrossRef]
  14. Min, X.; Zhai, G.; Gu, K.; Liu, Y.; Yang, X.; Chen, C.W. Blind Quality Assessment Based on Pseudo-Reference Image. IEEE Trans. Multimed. 2018, 20, 2049–2062. [Google Scholar] [CrossRef]
  15. Min, X.; Ma, K.; Gu, K.; Zhai, G.; Wang, Z.; Lin, W.; Chen, C.W. Unified Blind Quality Assessment of Compressed Natural, Graphic, and Screen Content Images. IEEE Trans. Image Process. 2017, 26, 5462–5474. [Google Scholar] [CrossRef] [PubMed]
  16. Min, X.; Ma, K.; Gu, K.; Zhai, G.; Yang, X.; Zhang, W.; Le Callet, P.; Chen, C.W. Screen Content Quality Assessment: Overview, Benchmark, and Beyond. ACM Comput. Surv. Dec. 2022, 54, 1–36. [Google Scholar] [CrossRef]
  17. Mahmoudpour, S.; Schelkens, P. On the performance of objective quality metrics for lightfields. Signal Process. Image Commun. 2021, 93, 116179. [Google Scholar] [CrossRef]
  18. Min, X.; Zhou, J.; Zhai, G.; Le Callet, P.; Yang, X.; Guan, X. A Metric for Light Field Reconstruction, Compression, and Display Quality Evaluation. IEEE Trans. Image Process. 2020, 29, 3790–3804. [Google Scholar] [CrossRef] [PubMed]
  19. Zhou, S.; Zhu, T.; Shi, K.; Li, Y.; Zheng, W.; Yong, J. Review of light field technologies. Vis. Comput. Ind. Biomed. Art. 2021, 4, 29. [Google Scholar] [CrossRef] [PubMed]
  20. Adhikarla, V.K.; Vinkler, M.; Sumin, D.; Mantiuk, R.K.; Myszkowski, K.; Seidel, H.; Didyk, P. Towards a Quality Metric for Dense Light Fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 3720–3729. [Google Scholar] [CrossRef]
  21. Min, X.; Zhai, G.; Gu, K.; Yang, X.; Guan, X. Objective Quality Evaluation of Dehazed Images. IEEE Trans. Intell. Transp. Syst. 2019, 20, 2879–2892. [Google Scholar] [CrossRef]
  22. Min, X.; Zhai, G.; Gu, K.; Zhu, Y.; Zhou, J.; Guo, G.; Yang, X.; Guan, X.; Zhang, W. Quality Evaluation of Image Dehazing Methods Using Synthetic Hazy Images. IEEE Trans. Multimed. 2019, 21, 2319–2333. [Google Scholar] [CrossRef]
  23. Min, X.; Zhai, G.; Zhou, J.; Farias, M.C.Q.; Bovik, A.C. Study of Subjective and Objective Quality Assessment of Audio-Visual Signals. IEEE Trans. Image Process. 2020, 29, 6054–6068. [Google Scholar] [CrossRef]
  24. Min, X.; Zhai, G.; Zhou, J.; Zhang, X.; Yang, X.; Guan, X. A Multimodal Saliency Model for Videos with High Audio-Visual Correspondence. IEEE Trans. Image Process. 2020, 29, 3805–3819. [Google Scholar] [CrossRef]
  25. Min, X.; Zhai, G.; Hu, C.; Gu, K. Fixation prediction through multimodal analysis. In Proceedings of the Visual Communications and Image Processing (VCIP), Singapore, 13–16 December 2015; pp. 1–4. [Google Scholar] [CrossRef]
  26. Sun, W.; Luo, W.; Min, X.; Zhai, G.; Yang, X.; Gu, K.; Ma, S. MC360IQA: The Multi-Channel CNN for Blind 360-Degree Image Quality Assessment. In Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS), Sapporo, Japan, 26–29 May 2019; pp. 1–5. [Google Scholar] [CrossRef]
  27. Zhou, W.; Xu, J.; Jiang, Q.; Chen, Z. No-Reference Quality Assessment for 360-Degree Images by Analysis of Multifrequency Information and Local-Global Naturalness. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 1778–1791. [Google Scholar] [CrossRef]
  28. Sendjasni, A.; Larabi, M. PW-360IQA: Perceptually-Weighted Multichannel CNN for Blind 360-Degree Image Quality Assessment. Sensors 2023, 23, 4242. [Google Scholar] [CrossRef] [PubMed]
  29. Joveluro, P.; Malekmohamadi, H.; Fernando, W.A.C.; Kondoz, A.M. Perceptual Video Quality Metric for 3D video quality assessment. In Proceedings of the 3DTV-Conference: The True Vision—Capture, Transmission and Display of 3D Video, Tampere, Finland, 7–9 June 2010; pp. 1–4. [Google Scholar] [CrossRef]
  30. Jin, L.; Boev, A.; Gotchev, A.; Egiazarian, K. 3D-DCT based perceptual quality assessment of stereo video. In Proceedings of the 18th IEEE International Conference on Image Processing, Brussels, Belgium, 11–14 September 2011; pp. 2521–2524. [Google Scholar] [CrossRef]
  31. Zeng, K.; Wang, Z. 3D-SSIM for video quality assessment. In Proceedings of the 19th IEEE International Conference on Image Processing, Orlando, FL, USA, 30 September–3 October 2012; pp. 621–624. [Google Scholar] [CrossRef]
  32. Seo, J.; Liu, X.; Kim, D.; Sohn, K. An Objective Video Quality Metric for Compressed Stereoscopic Video. Circuits Syst. Signal Process 2012, 31, 1089–1107. [Google Scholar] [CrossRef]
  33. Sun, C.; Liu, X.; Xu, X.; Yang, W. An Efficient Quality Assessment Metric for 3D Video. In Proceedings of the IEEE 12th International Conference on Computer and Information Technology, Chengdu, China, 27–29 October 2012; pp. 209–213. [Google Scholar] [CrossRef]
  34. Sun, C.; Liu, X.; Yang, W. An Efficient Quality Metric for DIBR-based 3D Video. In Proceedings of the IEEE 14th International Conference on High Performance Computing and Communication & IEEE 9th International Conference on Embedded Software and Systems, Liverpool, UK, 25–27 June 2012; pp. 1391–1394. [Google Scholar] [CrossRef]
  35. Han, J.; Jiang, T.; Ma, S. Stereoscopic video quality assessment model based on spatial-temporal structural information. In Proceedings of the Visual Communications and Image Processing, San Diego, CA, USA, 27–30 November 2012; pp. 1–6. [Google Scholar] [CrossRef]
  36. Banitalebi-Dehkordi, A.; Pourazad, M.T.; Nasiopoulos, P. A human visual system-based 3D video quality metric. In Proceedings of the International Conference on 3D Imaging (IC3D), Liege, Belgium, 3–5 December 2012; pp. 1–5. [Google Scholar] [CrossRef]
  37. Banitalebi-Dehkordi, A.; Pourazad, M.T.; Nasiopoulos, P. 3D video quality metric for mobile applications. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 3731–3735. [Google Scholar] [CrossRef]
  38. Malekmohamadi, H.; Fernando, W.A.C.; Kondoz, A.M. A new reduced reference objective quality metric for stereoscopic video. In Proceedings of the IEEE Globecom Workshops, Anaheim, CA, USA, 3–7 December 2012; pp. 1325–1328. [Google Scholar] [CrossRef]
  39. Malekmohamadi, H.; Fernando, W.A.C.; Kondoz, A.M. Reduced reference metric for compressed stereoscopic videos. Electron. Lett. 2013, 49, 701–702. [Google Scholar] [CrossRef]
  40. Qi, F.; Jiang, T.; Fan, X.; Ma, S.; Zhao, D. Stereoscopic video quality assessment based on stereo just-noticeable difference model. In Proceedings of the IEEE International Conference on Image Processing, Melbourne, VIC, Australia, 15–18 September 2013; pp. 34–38. [Google Scholar] [CrossRef]
  41. De Silva, V.; Arachchi, H.K.; Ekmekcioglu, E.; Kondoz, A. Toward an Impairment Metric for Stereoscopic Video: A Full-Reference Video Quality Metric to Assess Compressed Stereoscopic Video. IEEE Trans. Image Process. 2013, 22, 3392–3404. [Google Scholar] [CrossRef] [PubMed]
  42. Banitalebi-Dehkordi, A.; Pourazad, M.T.; Nasiopoulos, P. An efficient human visual system based quality metric for 3D video. Multimed. Tools Appl. 2015, 75, 4187–4215. [Google Scholar] [CrossRef]
  43. Qi, F.; Zhao, D.; Fan, X.; Jiang, T. Stereoscopic video quality assessment based on visual attention and just-noticeable difference models. SIViP 2015, 10, 737–744. [Google Scholar] [CrossRef]
  44. Genco, M.L.; Adas, T.; Ozbek, N. Stereo Video Quality assessment using SSIM and depth maps. In Proceedings of the 24th Signal Processing and Communication Application Conference (SIU), Zonguldak, Turkey, 16–19 May 2016; pp. 1325–1328. [Google Scholar] [CrossRef]
  45. Lee, P.J.; Yang, H.P.; Hsu, C.C. 3D video quality assessment based on visual perception. In Proceedings of the IEEE 6th Global Conference on Consumer Electronics (GCCE), Nagoya, Japan, 24–27 October 2017; pp. 1–2. [Google Scholar] [CrossRef]
  46. Appina, B.; Manasa, K.; Channappayya, S.S. A full reference stereoscopic video quality assessment metric. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–7 March 2017; pp. 2012–2016. [Google Scholar] [CrossRef]
  47. Wang, J.; Wang, S.; Wang, Z. Asymmetrically Compressed Stereoscopic 3D Videos: Quality Assessment and Rate-Distortion Performance Evaluation. IEEE Trans. Image Process. 2017, 26, 1330–1343. [Google Scholar] [CrossRef]
  48. Galkandage, C.; Calic, J.; Dogan, S.; Guillemaut, J. Stereoscopic Video Quality Assessment Using Binocular Energy. IEEE J. Sel. Top. Signal Process. 2017, 11, 102–112. [Google Scholar] [CrossRef]
  49. Appina, B.; Channappayya, S.S. Full-Reference 3-D Video Quality Assessment Using Scene Component Statistical Dependencies. IEEE Signal Process. Lett. 2018, 25, 823–827. [Google Scholar] [CrossRef]
  50. Huang, Y.; Zhou, Y.; Hu, B.; Tian, S.; Yan, J. DIBR-synthesised video quality assessment by measuring geometric distortion and spatiotemporal inconsistency. Electron. Lett. 2020, 56, 1314–1317. [Google Scholar] [CrossRef]
  51. Yilmaz, G.N.; Akar, G.B. 3D Video Quality Evaluation Based on SSIM Model Improvement. In Proceedings of the 6th International Conference on Computer Science and Engineering (UBMK), Ankara, Turkey, 15–17 September 2021; pp. 425–428. [Google Scholar] [CrossRef]
  52. Solh, M.; AlRegib, G.; Bauza, J.M. 3VQM: A vision-based quality measure for DIBR-based 3D videos. In Proceedings of the IEEE International Conference on Multimedia and Expo, Barcelona, Spain, 11–15 July 2011; pp. 1–6. [Google Scholar] [CrossRef]
  53. Solh, M.; AlRegib, G. A no-reference quality measure for DIBR-based 3D videos. In Proceedings of the IEEE International Conference on Multimedia and Expo, Barcelona, Spain, 11–15 July 2011; pp. 1–6. [Google Scholar] [CrossRef]
  54. Ha, K.; Kim, M. A perceptual quality assessment metric using temporal complexity and disparity information for stereoscopic video. In Proceedings of the 18th IEEE International Conference on Image Processing, Brussels, Belgium, 11–14 September 2011; pp. 2525–2528. [Google Scholar] [CrossRef]
  55. López, J.P.; Rodrigo, J.A.; Jiménez, D.; Menéndez, J.M. Stereoscopic 3D video quality assessment based on depth maps and video motion. EURASIP J. Image Video Proc. 2013, 2013, 62. [Google Scholar] [CrossRef]
  56. Han, Y.; Yuan, Z.; Muntean, G. No reference objective quality metric for stereoscopic 3D video. In Proceedings of the IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, Beijing, China, 25–27 June 2014; pp. 1–6. [Google Scholar] [CrossRef]
  57. Hasan, M.M.; Arnold, J.F.; Frater, M.R. No-reference quality assessment of 3D videos based on human visual perception. In Proceedings of the International Conference on 3D Imaging (IC3D), Liege, Belgium, 9–10 December 2014; pp. 1–6. [Google Scholar] [CrossRef]
  58. Hasan, M.M.; Arnold, J.F.; Frater, M.R. A novel quality assessment of transmitted 3D videos based on binocular rivalry impact. In Proceedings of the Picture Coding Symposium (PCS), Cairns, QLD, Australia, 31 May–3 June 2015; pp. 297–301. [Google Scholar] [CrossRef]
  59. Han, Y.; Yuan, Z.; Muntean, G. Extended no reference objective Quality Metric for stereoscopic 3D video. In Proceedings of the IEEE International Conference on Communication Workshop (ICCW), London, UK, 8–12 June 2015; pp. 1729–1734. [Google Scholar] [CrossRef]
  60. Mahmood, S.A.; Ghani, R.F. Objective quality assessment of 3D stereoscopic video based on motion vectors and depth map features. In Proceedings of the 7th Computer Science and Electronic Engineering Conference (CEEC), Colchester, UK, 24–25 September 2015; pp. 179–183. [Google Scholar] [CrossRef]
  61. Silva, A.R.; Melgar, M.E.V.; Farias, M.C.Q. A no-reference stereoscopic quality metric. In Proceedings of SPIE 9393, Three-Dimensional Image Processing, Measurement (3DIPM), and Applications; Proceedings of Electronic Imaging Science and Technology: San Francisco, CA, USA, 2015; p. 9393. [Google Scholar] [CrossRef]
  62. Han, Y.; Yuan, Z.; Muntean, G. An Innovative No-Reference Metric for Real-Time 3D Stereoscopic Video Quality Assessment. IEEE Trans. Broadcast. 2016, 62, 654–663. [Google Scholar] [CrossRef]
  63. Yang, J.; Wang, H.; Lu, W.; Li, B.; Badii, A.; Meng, Q. A no-reference optical flow-based quality evaluator for stereoscopic videos in curvelet domain. Inf. Sci. 2017, 414, 133–146. [Google Scholar] [CrossRef]
  64. Appina, B.; Jalli, A.; Battula, S.S.; Channappayya, S.S. No-Reference Stereoscopic Video Quality Assessment Algorithm Using Joint Motion and Depth Statistics. In Proceedings of the 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 2800–2804. [Google Scholar] [CrossRef]
  65. Bayrak, H.; Yilmaz, G.N. No-reference evaluation of 3 dimensional video quality using spatial and frequency domain components. In Proceedings of the 26th Signal Processing and Communications Applications Conference (SIU), Izmir, Turkey, 2–5 May 2018; pp. 1–4. [Google Scholar] [CrossRef]
  66. Yang, J.; Ji, C.; Jiang, B.; Lu, W.; Meng, Q. No Reference Quality Assessment of Stereo Video Based on Saliency and Sparsity. IEEE Trans. Broadcast. 2018, 64, 341–353. [Google Scholar] [CrossRef]
  67. Yang, J.; Zhu, Y.; Ma, C.; Lu, W.; Meng, Q. Stereoscopic video quality assessment based on 3D convolutional neural networks. Neurocomputing 2018, 309, 83–93. [Google Scholar] [CrossRef]
  68. Wang, Y.; Shuai, Y.; Zhu, Y.; Zhang, J.; An, P. Jointly learning perceptually heterogeneous features for blind 3D video quality assessment. Neurocomputing 2019, 332, 298–304. [Google Scholar] [CrossRef]
  69. Yang, J.; Zhao, Y.; Jiang, B.; Lu, W.; Gao, X. No-Reference Quality Evaluation of Stereoscopic Video Based on Spatio-Temporal Texture. IEEE Trans. Multimed. 2020, 22, 2635–2644. [Google Scholar] [CrossRef]
  70. Yang, J.; Zhao, Y.; Jiang, B.; Meng, Q.; Lu, W.; Gao, X. No-Reference Quality Assessment of Stereoscopic Videos With Inter-Frame Cross on a Content-Rich Database. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 3608–3623. [Google Scholar] [CrossRef]
  71. Banitalebi-Dehkordi, A.; Nasiopoulos, P. Saliency inspired quality assessment of stereoscopic 3D video. Multimed. Tools Appl. 2018, 77, 26055–26082. [Google Scholar] [CrossRef]
  72. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 4th ed.; Pearson: New York, NY, USA, 2018; ISBN 9780133356724. [Google Scholar]
  73. Yilmaz, G.N.; Cimtay, Y. Depth Perception Assessment of a 3D Video Based on Spatial Resolution. J. Artif. Intell. Data Sci. 2022, 2, 1–7. [Google Scholar]
  74. Liu, W.; Ma, L.; Qiu, B.; Cui, M.; Ding, J. An efficient depth map preprocessing method based on structure-aided domain transform smoothing for 3D view generation. PLoS ONE 2017, 12, e0175910. [Google Scholar] [CrossRef] [PubMed]
  75. Yilmaz, G.N. A Bit Rate Adaptation Model for 3D Video. Multidimens. Syst. Sign Process. 2016, 27, 201–215. [Google Scholar] [CrossRef]
  76. Emmert, E. Größenverhältnisse der Nachbilder. Klin. Monatsblätter Augenheilkd. Augenärztliche Fortbild. 1881, 19, 443–450. [Google Scholar]
  77. Zeiss, C.; Goersch, H. Handbuch für Augenoptik, 4th ed.; C. Maurer Druck + Verlag: Geislingen, Germany, 2000. [Google Scholar]
  78. International Telecommunication Union—Radiocommunication Sector. Methodology for the subjective assessment of the quality of television pictures. In Recommendation ITU-R BT.500-13; Electronic Publication: Geneva, Switzerland, 2012. [Google Scholar]
Figure 1. The framework of the proposed 3D-video QoE evaluation metric.
Figure 2. The geometry of Emmert’s law.
Figure 3. Change in depth values in the Breakdance DM sequence.
Figure 4. The geometry of convergence. The green and orange circles represent two objects having different sizes. The grey circles represent the left and right eyes.
Figure 5. Some artifacts in the DMs of (a) Newspaper, (b) Breakdance, (c) Chess, and (d) Farm. The red rectangles/squares highlight some remarkable artifacts in the DM sequences.
Table 1. Blurriness measurements per QP and spatial resolution for the 2D-video sequences.
2D Video | Video Size | QP = 25 | QP = 30 | QP = 35 | QP = 40 | QP = 45
BreakdanceOriginal0.2600.2590.2580.2560.236
SD0.2580.2580.2570.2550.253
CIF0.2580.2570.2560.2550.253
QCIF0.2560.2550.2550.2540.251
ButterflyOriginal0.0920.0920.0920.0910.090
SD0.0890.0890.0890.0880.088
CIF0.0860.0860.0860.0860.086
QCIF0.0830.0830.0830.0830.083
WindmillOriginal0.2000.2000.1990.1990.197
SD0.1980.1980.1970.1970.195
CIF0.1960.1960.1960.1950.194
QCIF0.1920.1920.1920.1920.191
ChessOriginal0.3350.3350.3350.3350.335
SD0.3650.3650.3650.3650.365
CIF0.3790.3790.3790.3790.379
QCIF0.3780.3780.3780.3780.379
InterviewOriginal0.1990.1990.1990.1980.197
SD0.1970.1970.1970.1960.195
CIF0.1900.1900.1900.1900.189
QCIF0.1830.1830.1840.1830.183
AdvertisementOriginal0.2820.2810.2810.2810.280
SD0.2890.2880.2880.2880.287
CIF0.2880.2880.2880.2870.286
QCIF0.2860.2860.2850.2850.285
FarmOriginal0.2980.2980.2980.2980.298
SD0.2970.2970.2970.2970.297
CIF0.2960.2960.2960.2960.296
QCIF0.2930.2930.2940.2930.293
FootballOriginal0.2060.2070.2060.2060.204
SD0.2050.2050.2050.2040.203
CIF0.2020.2020.2020.2020.201
QCIF0.1970.1970.1980.1980.197
NewspaperOriginal0.2780.2780.2780.2780.278
SD0.3560.3560.3560.3650.356
CIF0.3630.3630.3630.3620.362
QCIF0.3620.3620.3620.3620.361
BalletOriginal0.6410.6400.6380.6350.628
SD0.6550.6530.6500.6440.631
CIF0.6560.6540.6500.6450.632
QCIF0.6570.6540.6510.6450.632
Table 2. Motion-information measurements per QP and spatial resolution for the 2D-video sequences.
2D Video | Video Size | QP = 25 | QP = 30 | QP = 35 | QP = 40 | QP = 45
BreakdanceOriginal0.3110.3330.3090.2820.224
SD0.3260.3260.2970.2700.210
CIF0.2650.2600.2400.2220.169
QCIF0.2160.2140.2030.1870.143
ButterflyOriginal1.6101.6271.5961.5201.426
SD1.6731.6871.6561.5811.478
CIF1.1461.1441.1141.0811.027
QCIF0.7910.7830.7640.7370.701
WindmillOriginal0.1530.1520.1450.1370.110
SD0.1300.1300.1250.1190.101
CIF0.0670.0670.0670.0670.061
QCIF0.0350.0360.0360.0360.033
ChessOriginal0.2800.2810.2790.2760.301
SD0.2540.2540.2510.2520.270
CIF0.1680.1700.1700.1680.172
QCIF0.1200.1220.1210.1170.116
InterviewOriginal0.1140.1090.1010.0900.078
SD0.1190.1140.1060.0950.082
CIF0.0660.0640.0610.0570.050
QCIF0.0380.0370.0360.0330.031
AdvertisementOriginal0.2920.3000.3040.2980.283
SD0.3080.3140.3210.3140.307
CIF0.2310.2360.2410.2400.234
QCIF0.1760.1790.1800.1800.174
FarmOriginal1.0331.0431.0260.9700.878
SD1.1421.1411.1191.0530.956
CIF0.7670.7670.7550.7230.661
QCIF0.4650.4670.4650.4590.432
FootballOriginal1.3361.4431.3371.2571.296
SD1.4661.5201.3191.1721.170
CIF1.2711.2711.0430.8560.812
QCIF0.6360.6400.5660.4680.440
NewspaperOriginal0.4650.4550.4370.4000.347
SD0.3550.3520.3430.3190.281
CIF0.1920.1920.1910.1850.169
QCIF0.1020.1020.1030.1020.099
BalletOriginal0.2470.2650.2550.2210.203
SD0.2480.2390.2320.2100.190
CIF0.1800.1680.1630.1530.140
QCIF0.1190.1140.1140.1030.090
Table 3. Retinal-image-size measurements per QP and spatial resolution for the DM sequences.
DM Sequence | DM Size | QP = 25 | QP = 30 | QP = 35 | QP = 40 | QP = 45
BreakdanceOriginal3.291 × 10−73.290 × 10−73.279 × 10−73.244 × 10−73.246 × 10−7
SD6.364 × 10−76.361 × 10−76.343 × 10−76.278 × 10−76.282 × 10−7
CIF2.520 × 10−62.521 × 10−62.513 × 10−62.498 × 10−62.501 × 10−6
QCIF9.979 × 10−69.979 × 10−69.939 × 10−69.861 × 10−69.893 × 10−6
ButterflyOriginal6.835 × 10−86.857 × 10−86.952 × 10−87.076 × 10−87.245 × 10−8
SD9.518 × 10−89.536 × 10−89.600 × 10−89.802 × 10−81.005 × 10−7
CIF3.767 × 10−73.793 × 10−73.810 × 10−73.918 × 10−74.019 × 10−7
QCIF1.489 × 10−61.503 × 10−61.513 × 10−61.529 × 10−61.580 × 10−6
WindmillOriginal1.232 × 10−71.232 × 10−71.235 × 10−71.235 × 10−71.254 × 10−7
SD1.573 × 10−71.582 × 10−71.579 × 10−71.578 × 10−71.604 × 10−7
CIF6.272 × 10−76.312 × 10−76.294 × 10−76.297 × 10−76.404 × 10−7
QCIF2.495 × 10−62.509 × 10−62.502 × 10−62.509 × 10−62.550 × 10−6
ChessOriginal9.743 × 10−89.670 × 10−89.753 × 10−89.787 × 10−81.008 × 10−7
SD1.252 × 10−71.243 × 10−71.252 × 10−71.253 × 10−71.291 × 10−7
CIF5.027 × 10−74.996 × 10−75.040 × 10−75.025 × 10−75.164 × 10−7
QCIF2.030 × 10−62.009 × 10−62.032 × 10−62.022 × 10−62.065 × 10−6
InterviewOriginal7.915 × 10−87.929 × 10−88.014 × 10−88.236 × 10−88.405 × 10−8
SD8.240 × 10−88.259 × 10−88.340 × 10−88.556 × 10−88.731 × 10−8
CIF3.287 × 10−73.296 × 10−73.333 × 10−73.425 × 10−73.491 × 10−7
QCIF1.309 × 10−61.320 × 10−61.337 × 10−61.369 × 10−61.389 × 10−6
AdvertisementOriginal4.246 × 10−84.285 × 10−84.348 × 10−84.451 × 10−84.577 × 10−8
SD6.156 × 10−86.191 × 10−86.248 × 10−86.385 × 10−86.595 × 10−8
CIF2.470 × 10−72.474 × 10−72.492 × 10−72.567 × 10−72.632 × 10−7
QCIF1.011 × 10−61.023 × 10−61.034 × 10−61.056 × 10−61.084 × 10−6
FarmOriginal1.217 × 10−71.214 × 10−71.206 × 10−71.203 × 10−71.226 × 10−7
SD1.562 × 10−71.558 × 10−71.546 × 10−71.543 × 10−71.568 × 10−7
CIF6.233 × 10−76.216 × 10−76.167 × 10−76.156 × 10−76.254 × 10−7
QCIF2.472 × 10−62.467 × 10−62.443 × 10−62.446 × 10−62.483 × 10−6
FootballOriginal1.214 × 10−71.224 × 10−71.223 × 10−71.231 × 10−71.241 × 10−7
SD1.551 × 10−71.563 × 10−71.563 × 10−71.573 × 10−71.586 × 10−7
CIF6.220 × 10−76.269 × 10−76.268 × 10−76.309 × 10−76.363 × 10−7
QCIF2.497 × 10−62.517 × 10−62.519 × 10−62.534 × 10−62.557 × 10−6
NewspaperOriginal1.646 × 10−71.659 × 10−71.670 × 10−71.692 × 10−71.718 × 10−7
SD3.258 × 10−73.268 × 10−73.278 × 10−73.311 × 10−73.354 × 10−7
CIF1.310 × 10−61.314 × 10−61.317 × 10−61.329 × 10−61.348 × 10−6
QCIF5.285 × 10−65.298 × 10−65.312 × 10−65.358 × 10−65.438 × 10−6
BalletOriginal4.583 × 10−74.585 × 10−74.577 × 10−74.568 × 10−74.546 × 10−7
SD8.847 × 10−78.849 × 10−78.835 × 10−78.820 × 10−78.773 × 10−7
CIF3.559 × 10−63.560 × 10−63.555 × 10−63.547 × 10−63.535 × 10−6
QCIF1.412 × 10−51.413 × 10−51.411 × 10−51.410 × 10−51.407 × 10−5
Table 4. Convergence measurements per QP and spatial resolution for the DM sequences.
DM Sequence | DM Size | QP = 25 | QP = 30 | QP = 35 | QP = 40 | QP = 45
BreakdanceOriginal2.035 × 10−102.035 × 10−102.034 × 10−102.030 × 10−102.031 × 10−10
SD3.949 × 10−103.950 × 10−103.946 × 10−103.939 × 10−103.940 × 10−10
CIF1.582 × 10−91.582 × 10−91.581 × 10−91.577 × 10−91.578 × 10−9
QCIF6.362 × 10−96.362 × 10−96.350 × 10−96.329 × 10−96.335 × 10−9
ButterflyOriginal1.618 × 10−101.162 × 10−101.627 × 10−101.637 × 10−101.635 × 10−10
SD2.138 × 10−102.140 × 10−102.145 × 10−102.162 × 10−102.157 × 10−10
CIF8.559 × 10−108.577 × 10−108.599 × 10−108.672 × 10−108.648 × 10−10
QCIF3.420 × 10−93.433 × 10−93.442 × 10−93.452 × 10−93.448 × 10−9
WindmillOriginal2.795 × 10−102.799 × 10−102.781 × 10−102.772 × 10−102.790 × 10−10
SD3.578 × 10−103.583 × 10−103.559 × 10−103.548 × 10−103.573 × 10−10
CIF1.432 × 10−91.435 × 10−91.425 × 10−91.421 × 10−91.433 × 10−9
QCIF5.750 × 10−95.759 × 10−95.721 × 10−95.702 × 10−95.767 × 10−9
ChessOriginal3.151 × 10−103.114 × 10−103.104 × 10−103.047 × 10−103.003 × 10−10
SD4.041 × 10−103.994 × 10−103.977 × 10−103.900 × 10−103.844 × 10−10
CIF1.617 × 10−91.599 × 10−91.593 × 10−91.560 × 10−91.536 × 10−9
QCIF6.482 × 10−96.401 × 10−96.390 × 10−96.249 × 10−96.134 × 10−9
InterviewOriginal1.972 × 10−101.975 × 10−101.981 × 10−101.989 × 10−101.998 × 10−10
SD2.028 × 10−102.031 × 10−102.034 × 10−102.045 × 10−102.054 × 10−10
CIF8.117 × 10−108.129 × 10−108.156 × 10−108.189 × 10−108.226 × 10−10
QCIF3.244 × 10−93.248 × 10−93.261 × 10−93.275 × 10−93.287 × 10−9
AdvertisementOriginal1.178 × 10−101.183 × 10−101.193 × 10−101.207 × 10−101.233 × 10−10
SD1.524 × 10−101.530 × 10−101.542 × 10−101.560 × 10−101.594 × 10−10
CIF6.094 × 10−106.118 × 10−106.166 × 10−106.242 × 10−106.374 × 10−10
QCIF2.437 × 10−92.449 × 10−92.469 × 10−92.497 × 10−92.553 × 10−9
FarmOriginal2.182 × 10−102.181 × 10−102.178 × 10−102.185 × 10−102.200 × 10−10
SD2.799 × 10−102.800 × 10−102.795 × 10−102.804 × 10−102.823 × 10−10
CIF1.122 × 10−91.122 × 10−91.120 × 10−91.124 × 10−91.131 × 10−9
QCIF4.518 × 10−94.517 × 10−94.504 × 10−94.520 × 10−94.551 × 10−9
FootballOriginal2.694 × 10−102.685 × 10−102.681 × 10−102.666 × 10−102.646 × 10−10
SD3.445 × 10−103.433 × 10−103.428 × 10−103.400 × 10−103.384 × 10−10
CIF1.378 × 10−91.373 × 10−91.371 × 10−91.364 × 10−91.353 × 10−9
QCIF5.514 × 10−95.494 × 10−95.485 × 10−95.458 × 10−95.415 × 10−9
NewspaperOriginal7.960 × 10−117.980 × 10−117.990 × 10−118.020 × 10−118.050 × 10−11
SD1.554 × 10−101.555 × 10−101.556 × 10−101.560 × 10−101.566 × 10−10
CIF6.223 × 10−106.225 × 10−106.225 × 10−106.240 × 10−106.261 × 10−10
QCIF2.493 × 10−92.492 × 10−92.493 × 10−102.497 × 10−92.505 × 10−9
BalletOriginal2.313 × 10−102.316 × 10−102.315 × 10−102.313 × 10−102.316 × 10−10
SD4.493 × 10−104.498 × 10−104.496 × 10−104.492 × 10−104.496 × 10−10
CIF1.803 × 10−91.805 × 10−91.804 × 10−91.803 × 10−91.805 × 10−9
QCIF7.240 × 10−97.247 × 10−97.246 × 10−97.239 × 10−97.243 × 10−9
Table 5. Breakdance 3D-video QoE measurements.
Video | QP | Spatial Resolution (2D + DM) | M3D | VQM | PSNR | SSIM | MOS
Breakdance25SD3.44014.879656.56180.99893.032 ± 0.32
303.43324.829153.58680.99842.844 ± 0.29
353.11404.749750.41480.99752.688 ± 0.35
402.82134.626247.11430.99602.500 ± 0.33
452.17444.427143.67350.99342.375 ± 0.28
25CIF0.79204.903756.96230.99913.032 ± 0.34
300.77774.863854.10260.99872.844 ± 0.35
350.72484.807850.99810.99782.688 ± 0.31
400.67454.713347.68290.99622.500 ± 0.27
450.53424.560844.21030.99312.375 ± 0.32
25QCIF0.54224.912556.88380.99933.032 ± 0.31
300.54064.880954.22200.99892.844 ± 0.33
350.53084.834651.30940.99832.688 ± 0.36
400.51564.757948.13420.99692.500 ± 0.34
450.48834.635244.69450.99422.375 ± 0.37
Table 6. Ballet 3D-video QoE measurements.
Video | QP | Spatial Resolution (2D + DM) | M3D | VQM | PSNR | SSIM | MOS
Ballet25SD6.62674.877353.76050.99913.407 ± 0.27
306.37594.823953.20950.99873.250 ± 0.29
356.14004.741649.92500.99793.126 ± 0.32
405.53724.609146.46180.99653.032 ± 0.35
454.90824.421742.91270.99412.907 ± 0.36
25CIF1.36054.903956.50660.99933.407 ± 0.42
301.27354.858853.55670.99893.250 ± 0.39
351.27354.799250.27660.99813.126 ± 0.37
401.16404.702746.76130.99683.032 ± 0.33
451.06114.568843.22640.99422.907 ± 0.35
25QCIF0.84644.915056.53250.99953.407 ± 0.29
300.83804.877353.62130.99923.250 ± 0.32
350.83544.830350.51810.99873.126 ± 0.34
400.81434.748146.93280.99763.032 ± 0.28
450.78914.635643.48910.99562.907 ± 0.35
Table 7. Interview 3D-video QoE measurements.
Video | QP | Spatial Resolution (2D + DM) | M3D | VQM | PSNR | SSIM | MOS
Interview25SD0.94744.220041.49980.99664.001 ± 0.27
300.90754.190341.31800.99593.907 ± 0.33
350.84484.131740.92470.99443.813 ± 0.31
400.75924.053840.16470.99173.688 ± 0.29
450.65303.844938.79460.98643.625 ± 0.32
25CIF0.13454.327242.77060.99754.001 ± 0.28
300.13034.305842.55610.99693.907 ± 0.26
350.12534.268742.12890.99583.813 ± 0.31
400.11644.206641.26020.99363.688 ± 0.29
450.10374.076039.76090.98913.625 ± 0.33
25QCIF0.04474.415142.38090.99734.001 ± 0.34
300.04454.401742.22830.99703.907 ± 0.31
350.04434.377941.90240.99623.813 ± 0.29
400.04394.334341.24900.99483.688 ± 0.30
450.04324.247040.00280.99153.625 ± 0.28
Table 8. Newspaper 3D-video QoE measurements.
Video | QP | Spatial Resolution (2D + DM) | M3D | VQM | PSNR | SSIM | MOS
Newspaper25SD5.13424.068435.82100.99333.688 ± 0.41
305.08374.033935.69920.99183.626 ± 0.40
354.95363.970935.46850.98893.500 ± 0.38
404.73253.870835.02510.98363.344 ± 0.36
454.06183.713234.24980.97653.250 ± 0.34
25CIF0.72524.066035.65770.99173.688 ± 0.33
300.72524.050735.59440.99153.626 ± 0.29
350.72524.020535.43490.99023.500 ± 0.31
400.69993.964535.08340.98663.344 ± 0.32
450.64223.851534.41940.98043.250 ± 0.30
25QCIF0.17684.055835.27810.98923.688 ± 0.29
300.17734.050435.25390.98943.626 ± 0.31
350.17814.034335.16580.98923.500 ± 0.34
400.17814.006634.92850.98793.344 ± 0.33
450.17703.930334.90870.98433.250 ± 0.36
Table 9. Windmill 3D-video QoE measurements.
Video | QP | Spatial Resolution (2D + DM) | M3D | VQM | PSNR | SSIM | MOS
Windmill25SD1.04344.435845.14540.99633.969 ± 0.26
301.04454.392343.56930.99533.844 ± 0.23
351.00454.305241.66180.99343.751 ± 0.28
400.95694.187939.77360.99023.594 ± 0.31
450.80333.988037.67510.98453.500 ± 0.29
25CIF0.15664.455245.41610.99613.969 ± 0.30
300.15704.427644.03460.99553.844 ± 0.32
350.15534.361642.30030.99403.751 ± 0.34
400.15464.274340.48560.99133.594 ± 0.36
450.14334.130838.38910.98593.500 ± 0.31
25QCIF0.10794.462645.20370.99553.969 ± 0.29
300.10914.440743.90340.99503.844 ± 0.27
350.10824.390642.35300.99403.751 ± 0.32
400.10804.318140.71040.99213.594 ± 0.33
450.10894.204338.82370.98793.500 ± 0.30
Table 10. Advertisement 3D-video QoE measurements.
Video | QP | Spatial Resolution (2D + DM) | M3D | VQM | PSNR | SSIM | MOS
Advertisement25SD3.60424.887356.60450.99954.219 ± 0.42
303.67784.828953.18580.99904.063 ± 0.39
353.74514.736049.67900.99823.844 ± 0.33
403.66534.583845.79160.99663.719 ± 0.35
453.56794.320441.57410.99323.563 ± 0.38
25CIF0.67774.910756.76550.99964.219 ± 0.36
300.69094.865453.40210.99934.063 ± 0.33
350.70724.799949.98540.99863.844 ± 0.31
400.70174.690846.15550.99743.719 ± 0.29
450.68294.497142.04310.99453.563 ± 0.27
25QCIF0.14334.917656.58430.99974.219 ± 0.34
300.14544.877453.28470.99954.063 ± 0.37
350.14614.821550.01200.99903.844 ± 0.35
400.14684.727246.21160.99813.719 ± 0.38
450.14304.570642.23220.99623.563 ± 0.33
Table 11. Butterfly 3D-video QoE measurements.
Video | QP | Spatial Resolution (2D + DM) | M3D | VQM | PSNR | SSIM | MOS
Butterfly25SD6.02564.887356.60450.99954.219 ± 0.35
306.07104.828953.18580.99904.063 ± 0.38
355.95884.736049.67900.99823.844 ± 0.32
405.66844.583845.79160.99663.719 ± 0.34
455.26474.320441.57410.99323.563 ± 0.37
25CIF1.00644.910756.76550.99964.219 ± 0.40
301.00324.865453.40210.99934.063 ± 0.42
350.97864.799949.98540.99863.844 ± 0.39
400.94984.690846.15550.99743.719 ± 0.43
450.89984.497142.04310.99453.563 ± 0.37
25QCIF0.19854.917656.58430.99974.219 ± 0.35
300.19704.877453.28470.99954.063 ± 0.33
350.19364.821550.01200.99903.844 ± 0.37
400.18864.727246.21160.99813.719 ± 0.32
450.18214.570642.23220.99623.563 ± 0.30
Table 12. Chess 3D-video QoE measurements.
Video | QP | Spatial Resolution (2D + DM) | M3D | VQM | PSNR | SSIM | MOS
Chess25SD3.77154.878954.58790.99914.407 ± 0.33
303.77144.816650.97100.99824.188 ± 0.30
353.71814.728347.32750.99654.032 ± 0.29
403.73734.573343.56730.99293.875 ± 0.32
454.00904.305539.68010.98593.751 ± 0.35
25CIF0.66654.900054.77850.99934.407 ± 0.36
300.67314.840551.21660.99864.188 ± 0.38
350.67184.768547.67910.99734.032 ± 0.41
400.66634.661144.04730.99443.875 ± 0.37
450.68094.461340.22140.98863.751 ± 0.39
25QCIF0.19874.908954.68480.99964.407 ± 0.33
300.19844.852451.17960.99904.188 ± 0.31
350.19824.783047.68530.99804.032 ± 0.35
400.19234.683444.16760.99593.875 ± 0.34
450.19134.526140.49340.99153.751 ± 0.38
Table 13. Farm 3D-video QoE measurements.
Video | QP | Spatial Resolution (2D + DM) | M3D | VQM | PSNR | SSIM | MOS
Farm25SD13.74204.859255.33040.99894.063 ± 0.31
3013.73564.388244.32430.99563.938 ± 0.34
3513.48224.692449.18770.99693.844 ± 0.36
4012.67594.525845.93080.99443.719 ± 0.29
4511.51304.268142.28880.98973.563 ± 0.35
25CIF2.31614.873053.61090.99904.063 ± 0.32
302.31764.437244.60490.99573.938 ± 0.29
352.31764.757849.16740.99743.844 ± 0.35
402.18574.641646.31530.99543.719 ± 0.31
452.00144.442642.90540.99103.563 ± 0.28
25QCIF0.41624.849651.07680.99884.063 ± 0.32
300.41794.453044.45860.99553.938 ± 0.33
350.41564.757147.67910.99763.844 ± 0.29
400.41094.667645.58330.99583.719 ± 0.27
450.39284.505942.83980.99203.563 ± 0.30
Table 14. Football 3D-video QoE measurements.
Video | QP | Spatial Resolution (2D + DM) | M3D | VQM | PSNR | SSIM | MOS
Football25SD12.16414.873455.28680.99873.407 ± 0.23
3012.62004.805752.15080.99763.251 ± 0.26
3510.94624.712549.01730.99593.157 ± 0.28
409.69974.573345.79450.99352.969 ± 0.22
459.63184.322941.92820.98902.844 ± 0.25
25CIF2.62624.901855.94880.99913.407 ± 0.27
302.62884.851653.00470.99843.251 ± 0.25
352.62884.789049.97810.99733.157 ± 0.22
401.77454.690346.77400.99532.969 ± 0.24
451.67854.515144.40510.99162.844 ± 0.28
25QCIF0.40514.906655.82130.99933.407 ± 0.30
300.40774.860453.06270.99883.251 ± 0.32
350.37084.811950.23110.99803.157 ± 0.35
400.32194.734247.19550.99652.969 ± 0.38
450.30764.592943.47360.99352.844 ± 0.37
Table 15. Correlation between the M3D measurements and the values of the MOS, VQM, PSNR, and SSIM.
3D Video | Spatial Resolution | CC (M3D, MOS) | CC (M3D, VQM) | CC (M3D, PSNR) | CC (M3D, SSIM)
Breakdance | SD | 0.924 | 0.994 | 0.953 | 0.996
Breakdance | CIF | 0.919 | 0.994 | 0.952 | 0.998
Breakdance | QCIF | 0.920 | 0.996 | 0.956 | 0.998
Ballet | SD | 0.956 | 0.998 | 0.990 | 0.993
Ballet | CIF | 0.956 | 0.984 | 0.971 | 0.970
Ballet | QCIF | 0.925 | 0.992 | 0.957 | 0.995
Windmill | SD | 0.887 | 0.979 | 0.916 | 0.987
Windmill | CIF | 0.785 | 0.920 | 0.840 | 0.952
Windmill | QCIF | 0.255 | 0.273 | 0.260 | 0.332
Newspaper | SD | 0.907 | 0.982 | 0.991 | 0.978
Newspaper | CIF | 0.854 | 0.976 | 0.981 | 0.986
Newspaper | QCIF | 0.286 | 0.106 | 0.282 | 0.264
Interview | SD | 0.977 | 0.978 | 0.985 | 0.983
Interview | CIF | 0.962 | 0.993 | 0.994 | 0.990
Interview | QCIF | 0.944 | 0.998 | 0.997 | 0.993
Advertisement | SD | 0.134 | 0.450 | 0.239 | 0.513
Advertisement | CIF | 0.328 | 0.007 | 0.228 | 0.106
Advertisement | QCIF | 0.111 | 0.187 | 0.033 | 0.289
Butterfly | SD | 0.880 | 0.984 | 0.922 | 0.989
Butterfly | CIF | 0.940 | 0.997 | 0.966 | 0.992
Butterfly | QCIF | 0.959 | 0.998 | 0.982 | 0.987
Chess | SD | 0.525 | 0.783 | 0.601 | 0.826
Chess | CIF | 0.562 | 0.684 | 0.590 | 0.699
Chess | QCIF | 0.876 | 0.926 | 0.905 | 0.912
Farm | SD | 0.939 | 0.675 | 0.646 | 0.936
Farm | CIF | 0.901 | 0.540 | 0.651 | 0.914
Farm | QCIF | 0.862 | 0.405 | 0.620 | 0.855
Football | SD | 0.911 | 0.880 | 0.914 | 0.880
Football | CIF | 0.906 | 0.902 | 0.882 | 0.903
Football | QCIF | 0.959 | 0.939 | 0.959 | 0.925