Article

Assessment of the Quality of Video Sequences Performed by Viewers at Home and in the Laboratory

by Janusz Klink 1,*, Stefan Brachmański 2 and Michał Łuczyński 2

1 Department of Telecommunications and Teleinformatics, Faculty of Information and Telecommunication Technology, Wrocław University of Science and Technology, Wybrzeże Wyspiańskiego 27, 50-370 Wrocław, Poland
2 Department of Acoustics, Multimedia and Signal Processing, Faculty of Electronics, Photonics and Microsystems, Wrocław University of Science and Technology, Wybrzeże Wyspiańskiego 27, 50-370 Wrocław, Poland
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(8), 5025; https://doi.org/10.3390/app13085025
Submission received: 27 February 2023 / Revised: 4 April 2023 / Accepted: 14 April 2023 / Published: 17 April 2023
(This article belongs to the Special Issue Advance in Digital Signal, Image and Video Processing)


Featured Application

The results of the research may be helpful in setting up video quality assessment procedures so that the results are as close as possible to the quality experienced by the end users of video streaming services.

Abstract

The paper presents the results of subjective and objective quality assessments of H.264-, H.265-, and VP9-encoded video. Most of the literature is devoted to subjective quality assessment in well-defined laboratory circumstances. However, end users usually watch films in their home environments, which may differ from the conditions recommended for laboratory measurements. This may cause significant differences in the quality assessment scores. Thus, the aim of the research is to show the impact of environmental conditions on the video quality perceived by the user. The subjective assessment was made in two different environments: in the laboratory and in users’ homes, where people often watch movies on their laptops. The video signal was assessed by young viewers who were not experts in the field of quality assessment. The tests were performed taking into account different image resolutions and different bit rates. The research showed strong correlations between the obtained results and the coding bit rates used, and revealed a significant difference between the quality scores obtained in the laboratory and at home. In conclusion, it must be underlined that laboratory tests are necessary for comparative purposes, while the assessment of the video quality experienced by end users should be performed under circumstances that are as close as possible to the user’s home environment.

1. Introduction

For many years, television played the role of the most important everyday medium. Significant changes are currently being observed, especially among the young generation. Today’s youth increasingly watch TV broadcasts, including movies, via the Internet using mobile devices or laptops. An important issue is the quality of the video delivered to the users. There are two general approaches to quality assessment, namely, subjective and objective. The recommendations of the International Telecommunication Union (ITU) [1] specify, in detail, the conditions for performing measurements related to the subjective assessment of video quality. In general, the assessment should be conducted under laboratory conditions, possibly simulating home conditions. A universal measurement room should be able to meet the requirements of an idealized room as well as a domestic one. Recommendation BT.500 [1] defines the general viewing conditions for subjective assessments in a laboratory and in the home environment. However, it should be taken into account that the general viewing conditions for the home environment do not guarantee that they will match the specific home conditions of each user. An Internet user usually watches a video transmission under conditions that do not always meet the requirements of the ITU BT.500 recommendation. Consequently, the evaluation of video quality performed under real home conditions and under home conditions emulated in the laboratory will not necessarily be the same. Moreover, conducting the research in real users’ locations allows the test environment to be spread across a much wider population of service customers. In general, the viewing conditions can significantly impact the results obtained. The purpose of the evaluation and the audience to whom it is addressed may determine the acceptable circumstances of the test. A further issue with subjective evaluation is its high cost, because many factors must be included in the experimental design and many subjects (human testers) must be involved. Therefore, many studies try to find a replacement for these methods by modeling, simulating the real world in an artificial environment, or using objective approaches to quality assessment [2]. In the next step, different approaches to quality modeling may be applied, taking into account the different points of view of various stakeholders in the media streaming process [3]. However, the results obtained from objective methods may not always correlate with subjective users’ scores, especially when the circumstances of the subjective quality assessment change. The authors anticipate that a video quality assessment performed in an artificial environment (even if the conditions emulate an ‘average’ home) may give different results from the scores obtained under real home conditions. Examining this may answer the question of whether the assessment of video quality experienced by users can always be replaced by laboratory tests. A further issue arises when comparing the subjective and objective results of quality assessment. This is not a trivial task, given that there are plenty of objective methods and quality metrics, and an extensive body of literature describes their characteristics.
Some studies discuss their strengths and weaknesses, as well as their usability for predicting the video quality assessed by end users, especially with regard to metrics such as the mean squared error (MSE) and peak signal-to-noise ratio (PSNR) [4,5,6]. Others show that the correspondence between objective and subjective scores also depends on the video content, and that some metrics, such as the structural similarity (SSIM) index, exhibit characteristics similar to the human visual system (HVS), so their results are closer to users’ subjective scores [7,8]. Finally, there are papers that give a comprehensive view of the different factors that degrade the video content delivered to the user, present a broad review of objective video quality assessment methods, their classification, and performance comparison [9], and survey the evolution of these methods, analyzing their characteristics, advantages, and drawbacks [10,11]. Most of them are good enough for comparison and benchmarking purposes [12,13,14,15], but some give results that correlate more strongly than others with the quality of experience (QoE) scores given by users during subjective quality assessment. Mapping the quality of service (QoS) onto QoE allows proper QoE models to be built. However, finding general relationships between QoS and QoE is not an easy task. Sometimes, the content of the video may influence the perceptual-based quality assessment in specific circumstances [16,17,18,19]. This is why big content providers and streaming platforms, which use Dynamic Adaptive Streaming over HTTP (DASH) mechanisms to provide their content via the Internet, use different coding bit rate ladders according to the video content provided [20,21]. Furthermore, the bit rate ladder for specific video content may depend on the video codec [22]. The authors chose three objective quality metrics, namely, the PSNR [23], SSIM [24], and video multimethod assessment fusion (VMAF) [25], from the long list of metrics proposed in the literature. The PSNR metric is often used because it has a clear physical meaning and is simple to calculate. It gives good results when assessing the influence of some degradation factors on the quality of specific video footage, e.g., before and after compression. However, it may not always be sufficiently correlated with subjective quality assessment scores. PSNR is memoryless, which means that it is calculated pixel by pixel, independently, for each pair of corresponding frames of the two compared videos, and it assumes that the video quality is independent of the spatial and temporal relationships between the samples of the source footage. Reordering the pixels of the reference and examined videos in the same way does not change the PSNR values, although the subjective quality may change. Moreover, it can be found in the literature that video signals are highly structured and that the ordering of pixels carries important perceptual information about the contents of the visual scene [4]. This motivated the additional use of video quality metrics such as SSIM and VMAF, which exploit the fact that natural image signals are highly structured and may therefore correlate better with subjective quality assessment scores [11,24,25,26]. When the original footage is not accessible, no-reference (NR) image or video quality assessment methods can be used to evaluate the quality of the material delivered to the end user.
The original footage may be distorted at any stage of the media delivery chain, that is, during acquisition, processing, compression, transmission, decoding, or presentation at the receiver’s site. Therefore, it is important to use quality assessment methods that are based on a good representation of the different types of distortions and can use it for a proper evaluation. Early NR quality assessment methods usually addressed specific distortion types, such as blur [27], blocking [28], and ringing artifacts [29]. In real situations, the distortion types are usually not known in advance; thus, recently, more attention has been paid to general-purpose NR methods. These methods attempt to learn, while evaluating the quality of images, the general rules governing image distortions. On the basis of this knowledge, image quality prediction models can be established and adapted to unknown distortions [30]. There are many approaches to NR image quality assessment (IQA) based on deep convolutional neural networks (DCNN) [31,32,33]. They emphasize a good distortion representation, which is crucial for the performance of NR-IQA, also called blind image quality assessment (BIQA). In [34], the relationship between different distortion levels and their types is analyzed; the authors proposed a new approach, named ‘GraphIQA’, a deep learning BIQA method based on a distortion graph representation. General-purpose BIQA models suffer from catastrophic forgetting, i.e., the tendency of a neural network to ‘forget’ previously learned distortions when it is trained on new ones. A solution to this problem may be the lifelong blind image quality assessment (LIQA) approach, which not only learns new distortions, but can also mitigate the catastrophic forgetting of previously identified ones [35]. The main purpose of our work was to assess the influence of the environment on the video quality experienced by the user and to find correlations with the results of the objective quality assessment. The objective evaluation was based on the full-reference (FR) method, in which not only the distorted video, but also the reference footage is available.
The goals of the research were to:
  • Conduct a comparative analysis of the video quality assessment results obtained under laboratory and real home (not lab-emulated) conditions;
  • Find correlations between objective results and subjective assessment scores, taking into account the influence of the test environment.
The results of the research should answer the question of whether laboratory tests can replace the video quality assessment conducted in users’ homes and reduce testing costs. Furthermore, the research should show which type of subjective quality assessment is more closely correlated with objective quality assessment methods and which metric is worth using.
The video quality assessment was made taking into account:
  • H.264, H.265, and VP9 encodings [36,37,38];
  • The bit rate (from 300 kbps to 6000 kbps);
  • Resolutions (640 × 360—one-ninth high definition (nHD), 858 × 480—standard definition (SD), 1280 × 720—high definition (HD), and 1920 × 1080—full high definition (Full HD)).
The paper is organized as follows. After the introduction, Section 2 describes the video test sample preparation procedure and the methods used in the research. In the next section, the results of the subjective and objective quality assessment are presented and discussed. At the end, the results are summarized and the conclusions drawn.

2. Materials and Methods

The first step of the research consisted of a subjective video quality assessment. From the many different video quality assessment methods [1,39,40,41,42], the comparative double stimulus impairment scale (DSIS) method was used in the study. The DSIS method is recommended by the International Telecommunication Union (ITU), and the measurement technique is described in the BT.500 recommendation [1]. The evaluation consists of comparing the reference video sequence (reference signal) with the evaluated sequence. The reference signal was presented first and the evaluated sequence second. The task of the observer (viewer) was to assess the degree of deterioration of the second signal in relation to the first. The rating was given on a five-point mean opinion score (MOS) scale, where 5 means imperceptible quality deterioration, 4—noticeable but not annoying, 3—slightly annoying, 2—annoying, and 1—very annoying [39]. The video sequences were presented to the observers in single pairs (reference–evaluated sequence). Each pair was assessed separately. The reference and evaluated video sequences were separated by a gray screen presented to the observers for about 2 s.
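To make the procedure concrete, the following is a minimal sketch of the presentation logic described above: test conditions in random order, reference first, a roughly 2 s gray separator, and a vote on the five-point impairment scale. The playback and rating routines (show_clip, show_gray, collect_score) are hypothetical placeholders, not software used in the study.

```python
import random

# Five-point impairment scale of the double stimulus method (ITU-R BT.500).
IMPAIRMENT_SCALE = {
    5: "imperceptible",
    4: "perceptible, but not annoying",
    3: "slightly annoying",
    2: "annoying",
    1: "very annoying",
}

def run_session(reference, test_clips, show_clip, show_gray, collect_score):
    """Present reference/test pairs in random order and collect one vote each."""
    random.shuffle(test_clips)        # bit-rate variants in random order
    votes = {}
    for clip in test_clips:
        show_clip(reference)          # reference sequence is shown first
        show_gray(2.0)                # ~2 s gray screen between sequences
        show_clip(clip)               # evaluated sequence is shown second
        votes[clip] = collect_score(IMPAIRMENT_SCALE)
    return votes
```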
Measurements were made for two cases:
  • Evaluation in the laboratory;
  • Ratings at the viewer’s home.
The evaluation of video quality for Case 1, that is, in the laboratory, was carried out in a room adapted for the evaluation of video signals, equipped with a 60-inch TV screen. The laboratory room met the requirements of the recommendations of the International Telecommunication Union [1,39,43,44]. An additional advantage was that all participants already knew the room, so no distraction arose from having to adapt to an unfamiliar test location. In turn, the video quality assessment for Case 2 was made under home conditions, i.e., not ideal ones, but still representative of how consumers actually assess quality. All participants in the measurements evaluated the video sequences on high-definition television (HDTV) monitors with a resolution of 1920 × 1080 [45]. The standard test material was a 20 s video sequence (without sound) with a resolution of 1920 × 1080 pixels in AVI format. The footage was twice the minimum length proposed in [1]. This decision does not negatively affect the results of the subjective assessment; in the case of the objective evaluation, it allows the quality metrics to be calculated on a larger dataset, which, for films with varied dynamics and scene content, should make the metrics more representative and better correlated with the subjective assessment. However, there are studies that use longer video samples. Such a case is described in [46], where the authors considered 180 s samples for the evaluation of QoE in adaptive video streaming over wireless networks. Longer samples allow a better evaluation of the quality perceived by users, especially when transmission disturbances occur irregularly and at relatively long intervals. The test footage included horse racing start scenes (see Figure 1) [47].
The original sequence was encoded in H.264, H.265, and VP9 with different resolutions and different bit rates. Four resolutions were taken into account in the research: 640 × 360 (360p), 858 × 480 (480p), 1280 × 720 (720p), and 1920 × 1080 (1080p). For each coding technique and resolution, various transmission conditions were simulated with 18 bit rates: 300, 400, 500, 600, 700, 800, 900, 1000, 1500, 2000, 2500, 3000, 3500, 4000, 4500, 5000, 5500, and 6000 kbps. The test material was presented to viewers in groups divided by encoding technique and resolution. The test videos with different transmission conditions (bit rates) were presented to the viewers in random order. Each group of viewers evaluated the video signal subjected to one encoding technique for all resolutions and bit rates. In both cases, the team of observers consisted of second-year electronics students at the Faculty of Electronics, Photonics, and Microsystems of the Wrocław University of Science and Technology, aged 20–21, with normal visual acuity and correct color discrimination. As recommended by the International Telecommunication Union in BT.500 [1], the minimum number of observers should be 15. In the presented studies, three groups were created for home measurements and three groups for laboratory measurements. Each group evaluated a different type of coding. The sizes of the individual test groups examining individual codecs were as follows:
  • H.264—25 people under home conditions and 45 people under laboratory conditions;
  • H.265—35 people under home conditions and 35 people under laboratory conditions;
  • VP9—30 people under home conditions and 40 people under laboratory conditions.
The different group sizes resulted, among other reasons, from the different numbers of individuals willing to participate in a given measurement session and from a statistical analysis of the observers’ ratings, which led to the elimination of observers who showed no engagement (they gave the same quality rating regardless of the bit rate or resolution). Before beginning the measurements, the participants were familiarized with the assessment method and had one training session. During the training, the observers became acquainted with the technique of presenting the test material and with how to assess the changes in video quality. After the training, the actual measurements began. After viewing the original and encoded sequences, each study participant recorded their assessment of the quality deterioration on a special form. In the second part of the research, the authors performed an objective video quality assessment using the double stimulus approach, which relies on a comparison of the encoded video samples with the original reference video (see Figure 2).
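As an aside, the screening criterion mentioned above (rejecting observers who gave the same rating regardless of bit rate or resolution) reduces to a simple variance check. A minimal sketch, assuming the ratings are arranged as one row per observer; this covers only that single criterion, not the full observer-screening procedure of BT.500:

```python
import numpy as np

def screen_observers(ratings):
    """Keep only observers whose ratings vary across test conditions.

    `ratings` is a 2D array of shape (observers, conditions), where each row
    holds one observer's votes over all bit rate/resolution combinations.
    Observers with zero variance (a constant rating) are discarded.
    """
    ratings = np.asarray(ratings, dtype=float)
    keep = ratings.std(axis=1) > 0.0
    return ratings[keep]
```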
The original video footage was encoded using the FFmpeg tool [48] with its H.264, H.265, and VP9 video codec implementations. The four spatial resolutions and the coding bit rates in the range from 300 to 6000 kbps mentioned above were taken into account, giving 216 video samples in total (3 codecs × 4 spatial resolutions × 18 coding bit rates). Each set of video samples of a specific spatial resolution should be compared with source video footage of the same resolution. This way, the quality of each set of videos is objectively assessed independently of the other sets. When it comes to subjective quality assessment, each set of videos should be presented on a display whose resolution matches that of the assessed video. This may be difficult to achieve when the quality assessment is performed by many different users in their home environments, where a specific display resolution may be set by default. Thus, the authors assumed that the objective assessment should be conducted using one display resolution. The most popular spatial resolution of the displays used by end users was 1920 × 1080 pixels (FHD). Therefore, the sample preparation process was somewhat more complicated than encoding alone: it also included upscaling all of the videos of smaller spatial resolution, i.e., 640 × 360, 858 × 480, and 1280 × 720, to FHD (Figure 3). In this way, the authors reproduced the effect observed on end-user equipment, which usually resizes lower-resolution videos to the maximum display size, with FHD resolution set by default. A sketch of the encoding step is given below.
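The paper does not list the exact FFmpeg invocations, so the following sketch shows one plausible way to generate the 216 samples; the encoder names (libx264, libx265, libvpx-vp9), the single-pass target bit rate, the source file name, and the output naming scheme are assumptions. Upscaling to FHD is deferred to the assessment step shown in the next sketch.

```python
import subprocess

CODECS = {  # codec label -> (assumed FFmpeg encoder, container)
    "h264": ("libx264", "mp4"),
    "h265": ("libx265", "mp4"),
    "vp9": ("libvpx-vp9", "webm"),
}
RESOLUTIONS = ["640x360", "858x480", "1280x720", "1920x1080"]
BITRATES_KBPS = [300, 400, 500, 600, 700, 800, 900, 1000,
                 1500, 2000, 2500, 3000, 3500, 4000, 4500, 5000, 5500, 6000]

def encode_all(source="reference_1080p.avi"):
    """Produce the 3 x 4 x 18 = 216 test samples described in the text."""
    for label, (encoder, ext) in CODECS.items():
        for res in RESOLUTIONS:
            w, h = res.split("x")
            for kbps in BITRATES_KBPS:
                subprocess.run(
                    ["ffmpeg", "-y", "-i", source,
                     "-vf", f"scale={w}:{h}",       # scale to target resolution
                     "-c:v", encoder, "-b:v", f"{kbps}k",
                     "-an",                          # test clips carry no sound
                     f"{label}_{res}_{kbps}k.{ext}"],
                    check=True)
```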
After this preparation, the tested video samples were objectively assessed by comparison with the reference video (denoted in Figure 3 as ‘1920 × 1080 /ref./’) using three metrics, i.e., PSNR, SSIM, and VMAF. Finally, these results could be compared with the subjective user scores. A detailed description of the methodology, in the form of a flow diagram, is presented in Figure 4.
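Under the same caveat, the three metrics can be computed with FFmpeg’s psnr, ssim, and libvmaf filters (the latter requires an FFmpeg build with libvmaf enabled); the sub-FHD clips are upscaled inside the filter graph, mirroring Figure 3. The bicubic flag is an assumption, as the paper does not specify the scaling algorithm:

```python
import subprocess

METRIC_FILTERS = {"psnr": "psnr", "ssim": "ssim", "vmaf": "libvmaf"}

def assess(distorted, reference="reference_1080p.avi", metric="vmaf"):
    """Compare a test clip against the FHD reference with an FFmpeg filter.

    The distorted clip is upscaled to 1920x1080 inside the filter graph, so
    the comparison always runs at the reference resolution. FFmpeg prints
    the resulting score to its log; parsing is omitted for brevity.
    """
    graph = (f"[0:v]scale=1920:1080:flags=bicubic[dist];"
             f"[dist][1:v]{METRIC_FILTERS[metric]}")
    subprocess.run(
        ["ffmpeg", "-i", distorted, "-i", reference,
         "-lavfi", graph, "-f", "null", "-"],
        check=True)
```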

3. Results and Discussion

The results of the subjective assessment of the quality of the video were entered into a spreadsheet and subjected to statistical analysis according to the procedure described in the ITU-R BT.500 recommendation [1]. In accordance with this recommendation, a 95% confidence interval was adopted. The mean value of the MOS score in the group of observers was calculated separately for each encoding technique, screen resolution, and bit rate.
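For reference, the per-condition statistics reported in the tables below can be computed as in the following sketch; the 95% confidence interval coefficient δ = 1.96·S/√N follows the BT.500 formula mentioned above:

```python
import numpy as np

def mos_with_ci(votes):
    """Mean opinion score with the 95% confidence interval of ITU-R BT.500.

    `votes` holds one rating per observer for a single codec/resolution/
    bit-rate combination. delta = 1.96 * S / sqrt(N), so the reported
    interval is [MOS - delta, MOS + delta].
    """
    votes = np.asarray(votes, dtype=float)
    n = votes.size
    mos = votes.mean()
    s = votes.std(ddof=1)          # sample standard deviation
    delta = 1.96 * s / np.sqrt(n)  # 95% confidence interval coefficient
    return mos, s, delta
```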

3.1. Subjective Quality Assessment of Video Encoded Using H.264 Standard

The H.264 standard [36], also known as MPEG-4 Part 10 or AVC (advanced video coding), was introduced in 2003 as a result of cooperation between the ITU-T Q.6/SG16 Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG); the team formed in this way is known as the Joint Video Team (JVT). The H.264 standard uses differential compression, in which the current image is predicted from one or more previous images, encoding only the differences between them. Compared to earlier solutions, the H.264 standard introduces a number of improvements that, on the one hand, allow a reduction in the bit rate at unchanged image quality and, on the other hand, significantly increase the computing power required for encoding. The degradation of the quality of the video signal encoded in the H.264 standard was assessed by a group of 25 people at home and 45 people in the laboratory. The results of the measurements made under home conditions are presented in Table 1 and graphically in Figure 5, and those obtained under laboratory conditions in Table 2 and Figure 6. In addition to the mean MOS value, the tables also include the standard deviation (S) and the confidence interval coefficient (δ) calculated according to the ITU BT.500 recommendation [1].
The statistical analysis of the results showed that, up to a bit rate of 1000 kbps, the resolution does not affect the assessment of video quality made at home, while the laboratory measurements show a slightly lower assessment for the resolution of 1920 × 1080. Above this bit rate, the video quality depends on the resolution and, as expected, the video signal with a resolution of 1920 × 1080 is rated the highest. For this resolution, an MOS rating of at least 4 was achieved under home conditions at bit rates starting from 2500 kbps and under laboratory conditions from approximately 1500 kbps. On the other hand, for a resolution of 1280 × 720, an MOS value of 4 was achieved at home at 3500 kbps and under laboratory conditions at 2000 kbps. Comparing the MOS results obtained under laboratory and home conditions, it can be seen that viewers rated the video presented under laboratory conditions more highly; only for the highest resolution at bit rates up to 600 kbps was the opposite result found. Table 3 and Figure 7 show the difference ΔMOS between the video quality scores obtained in the laboratory and at home, calculated according to Formula (1):
ΔMOS = MOS_L − MOS_H, (1)
where MOS_L is the score obtained under laboratory conditions and MOS_H is the score obtained under home conditions.
A t-test analysis showed that, at the significance level α = 0.05, the hypothesis that the results obtained under laboratory and home conditions are identical must be rejected. The t-test values obtained using the Statistica tool for each resolution are as follows:
  • 640 × 360: t = 17.6 > tα = 2.1, at α = 0.05;
  • 858 × 480: t = 14.9 > tα = 2.1, at α = 0.05;
  • 1280 × 720: t = 10.7 > tα = 2.1, at α = 0.05;
  • 1920 × 1080: t = 2.9 > tα = 2.1, at α = 0.05.
It can be concluded that the differences between the MOS values obtained under the laboratory and home conditions are significant.
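The paper reports t-values computed in Statistica; assuming a paired t-test over the 18 MOS values shared by both environments (consistent with the quoted critical value tα ≈ 2.1 at 17 degrees of freedom), an equivalent check could look as follows:

```python
from scipy import stats

def compare_environments(mos_lab, mos_home, alpha=0.05):
    """Paired t-test of laboratory vs. home MOS over the 18 shared bit rates.

    With 18 pairs the two-tailed critical value is about 2.11 (17 degrees of
    freedom), matching the t_alpha = 2.1 threshold quoted in the text.
    """
    t, p = stats.ttest_rel(mos_lab, mos_home)
    return t, p, p < alpha  # True -> the lab/home difference is significant
```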

3.2. Subjective Quality Assessment of Video Encoded Using H.265 Standard

The H.265 standard [37], also known as high-efficiency video coding (HEVC), was originally published on 13 April 2013 and was, at the time of the study, among the most recent and most efficient video coding systems. This standard was created in cooperation between the Video Coding Experts Group (VCEG) and the Moving Picture Experts Group (MPEG). The H.265 standard supports the compression of video at very high resolutions (2K, 4K, 8K, etc.) and also enables the use of increasingly high resolutions on mobile devices. The H.265 standard offers up to twice the compression of H.264. Video compression is based on motion prediction; that is, when a pixel does not change between frames, the codec references that pixel instead of reproducing it. The motion prediction and compensation procedures have also been improved. Another improvement is the enlargement of the basic coding block from 16 × 16 pixel macroblocks (H.264) to 64 × 64 pixels, which is especially important in high-definition movies. The quality degradation of the video signal encoded in the H.265 standard was evaluated in groups of 35 people each, under both home and laboratory conditions. The results of the measurements taken at home are presented in Table 4 and graphically in Figure 8, and those obtained under laboratory conditions in Table 5 and Figure 9. In addition to the mean MOS value, the tables also include the standard deviation (S) and the confidence interval coefficient (δ) calculated in accordance with the ITU BT.500 recommendation [1]. The statistical analysis of the results showed that, up to a bit rate of 600 kbps, the resolution does not affect the evaluation of video quality. In turn, comparing the quality ratings for the 1280 × 720 and 1920 × 1080 resolutions, no significant difference in the quality rating can be observed for bit rates up to 900 kbps in the home measurements and up to 1000 kbps under laboratory conditions.
Above these bit rates, the video quality is clearly resolution dependent. For a video signal with a resolution of 1920 × 1080, the MOS rating exceeds 4.0 for bit rates starting from 2000 kbps in the home measurements and from approximately 1500 kbps in the laboratory measurements. On the other hand, for the resolution of 1280 × 720, an MOS value of 4.0 is reached at a bit rate of 3000 kbps in the home measurements and 2000 kbps in the laboratory measurements. The 4.0 level was also exceeded for a resolution of 858 × 480, at bit rates of at least 4500 kbps in the home measurements and 2500 kbps in the laboratory measurements. A video signal with a resolution of 640 × 360 does not reach MOS = 4.0 under home conditions, while under laboratory conditions the MOS reaches 4.0 starting at 4500 kbps.
Compared to the H.264 encoding standard, much higher MOS ratings are observed for the H.265 standard. Comparing the MOS results obtained under laboratory and home conditions, it can be seen that the viewers rated the video presented under laboratory conditions more highly. Table 6 and Figure 10 show the difference ΔMOS between the video quality scores obtained in the laboratory and at home, calculated according to Formula (1).
A t-test analysis showed that, at the significance level α = 0.05, the hypothesis that the results obtained under laboratory and home conditions are identical must be rejected. The t-test values obtained with the Statistica tool for each resolution are as follows:
  • 640 × 360: t = 9.1 > tα = 2.1, at α = 0.05;
  • 858 × 480: t = 10.4 > tα = 2.1, at α = 0.05;
  • 1280 × 720: t = 10.9 > tα = 2.1, at α = 0.05;
  • 1920 × 1080: t = 7.3 > tα = 2.1, at α = 0.05.
It can be concluded that the differences between the MOS values obtained under the laboratory and home conditions are significant.

3.3. Subjective Quality Assessment of Video Encoded Using VP9 Standard

The VP9 standard, developed by Google, was the last coding technique evaluated. The VP9 codec is used, among other places, on YouTube. It is released under an open-source license and uses the WebM container, which is essentially a subset of MKV (Matroska), whereas the H.264 and H.265 codecs use the MP4 container [40]. The degradation of the quality of the video signal encoded in the VP9 standard was assessed by a group of 30 people at home and 40 people in the laboratory. The results of the measurements made under home conditions are presented in Table 7 and graphically in Figure 11, and those obtained under laboratory conditions in Table 8 and Figure 12. In addition to the mean MOS value, the tables also include the standard deviation (S) and the confidence interval coefficient (δ) calculated according to the ITU BT.500 recommendation [1]. The statistical analysis of the results showed that, up to a bit rate of 2000 kbps, there is no difference in the home measurements between the quality ratings for the 1920 × 1080 and 1280 × 720 resolutions; for higher bit rates, slight differences can be observed in favor of the higher resolution. However, these differences lie within the designated confidence interval. For both resolutions, the MOS value of 4.0 is exceeded at 3000 kbps. The quality ratings made at home for the other resolutions are comparable to those for the H.265 standard, which is probably related to young people’s viewing habits, as that standard is very popular, among other places, on YouTube. In turn, the statistical analysis of the laboratory measurements showed that, up to a bit rate of 1000 kbps, there is no difference in the quality ratings across all of the assessed resolutions. Above this bit rate, the video quality depends slightly on the resolution and, as expected, the video signal with a resolution of 1920 × 1080 is rated the highest; for it, MOS = 4 was already achieved at a bit rate of 2000 kbps. An MOS value of 4.0 was obtained for 1280 × 720 at a bit rate of 2500 kbps and for 858 × 480 at 3000 kbps. The smallest resolution, i.e., 640 × 360, achieves the worst MOS values, but starting from 4500 kbps its quality rating reaches 4.0, just as in the home measurements.
Comparing the MOS results obtained under laboratory and home conditions, it can be seen that the viewers rated the video presented under laboratory conditions more highly. Table 9 and Figure 13 show the difference ΔMOS between the video quality scores obtained in the laboratory and at home, calculated according to Formula (1).
A t-test analysis showed that, at the significance level α = 0.05, the hypothesis that the results obtained under laboratory and home conditions are identical must be rejected. The t-test values obtained with the Statistica tool for each resolution are as follows:
  • 640 × 360: t = 11.1 > tα = 2.1, at α = 0.05;
  • 858 × 480: t = 11.3 > tα = 2.1, at α = 0.05;
  • 1280 × 720: t = 13.0 > tα = 2.1, at α = 0.05;
  • 1920 × 1080: t = 10.5 > tα = 2.1, at α = 0.05.
It can be concluded that the differences between the MOS values obtained under the laboratory and home conditions are significant.

3.4. Objective Quality Assessment of Video Encoded Using H.264, H.265, and VP9 Standards

The results of the objective video quality assessment are presented using three metrics: PSNR, SSIM, and VMAF (see Figure 14, Figure 15 and Figure 16).
It can be noted that, just as in the subjective quality assessment, the objective video quality scores increase monotonically with the coding bit rate, which holds for all of the presented metrics and video codecs. The most significant changes in quality are observed at low bit rates, while at higher bit rates the quality changes are very small or imperceptible. The results are also consistent with the literature, in which the H.265 and VP9 codecs are reported to be more efficient than the H.264 codec. A very important issue here is the spatial resolution of the examined videos. Each set of video samples of a specific resolution was compared (double stimulus method) with reference footage of the same resolution, i.e., the 360p reference with the 360p test sample, the 480p reference with the 480p test sample, etc. This resulted in higher quality values for videos with higher spatial resolution, which was consistent with the results of the subjective assessment. The correlation coefficients between these objective results and the subjective quality scores obtained in the laboratory and in users’ homes, for each codec and video spatial resolution, were determined and are presented in Table 10, Table 11 and Table 12.
All tests and correlations were conducted for the selected video resolutions and a limited number of coding bit rates (i.e., 18 coding bit rates for each video sample of a specific resolution). To validate these results and check how well this research describes the whole population of different cases, the coefficient of determination (R²) was calculated for each previously determined correlation.
Taking into account each codec, it can be stated that the determination coefficients fluctuated as follows:
  • For the H.264 codec: from 0.9 to 0.996;
  • For the H.265 codec: from 0.931 to 0.998;
  • For the VP9 codec: from 0.905 to 0.998.
This means that the obtained correlations explain 90 to 99 percent of the variance in the scores. This leads to the conclusion that the correlations are very strong and that they can be considered representative of a population much wider than the video set used during the research.
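The paper does not name the correlation measure used; assuming a Pearson correlation with a linear fit, R² is simply the squared correlation coefficient, as in this sketch:

```python
import numpy as np

def correlation_and_r2(objective_scores, mos_scores):
    """Pearson correlation between objective scores and MOS, plus R^2.

    For a simple linear relationship, the coefficient of determination is
    the square of the Pearson correlation coefficient.
    """
    r = np.corrcoef(objective_scores, mos_scores)[0, 1]
    return r, r ** 2
```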

4. Conclusions

The authors presented the problem of subjective quality assessment conducted in different environments. Most papers and formal regulations recommend performing such tests in a laboratory under special circumstances. It is understandable that the test conditions must be strictly determined, especially when the procedure must be repeatable and should give representative results that are comparable with those of other laboratories. However, when a video is watched at home, the environment may not meet the laboratory conditions described in the formal recommendations. This may cause the quality experienced by the home user to differ from the quality measured in the laboratory. The results of our investigations confirmed these assumptions and showed statistically significant differences. This implies the need to treat these two types of environments separately and to conduct tests in both, depending on the purpose. The second part of the research was devoted to objective video quality evaluation and to identifying the relationships between its results and the results of the subjective assessments conducted in the different environments. The authors observed very high correlations between all three sets of results, i.e., the objective scores, the subjective scores obtained in the laboratory, and the subjective scores obtained at home. The very high determination coefficients imply that results obtained from testing a limited number of video samples may support conclusions that generalize to the entire population. Obviously, QoS/QoE models can be built on this basis, but their parameters must be determined separately for the laboratory and home environments. Searching for better quality models for environments other than the laboratory may help to better tailor the video delivered to its recipients.

Author Contributions

Conceptualization and methodology, J.K. and S.B.; objective quality assessment, J.K.; subjective quality assessment, S.B. with M.Ł.’s support; writing—original draft preparation, J.K. and S.B.; visualization, J.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data sharing is not applicable to this article.

Acknowledgments

The paper presents the results of the statutory research carried out at Wroclaw University of Science and Technology. The authors would like to thank Wroclaw Centre for Networking and Supercomputing for providing the computing resources that were used for the digital processing of the tested video samples.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. ITU-R BT.500-14; Methodologies for the Subjective Assessment of the Quality of Television Images. ITU: Geneva, Switzerland, 2020.
  2. Fela, R.F.; Zacharov, N.; Forchhammer, S. Comparison of Full Factorial and Optimal Experimental Design for Perceptual Evaluation of Audiovisual Quality. J. Audio Eng. Soc. 2023, 71, 4–19. [Google Scholar] [CrossRef]
  3. Barman, N.; Martini, M.G. QoE Modeling for HTTP Adaptive Video Streaming–A Survey and Open Challenges. IEEE Access 2019, 7, 30831–30859. [Google Scholar] [CrossRef]
  4. Wang, Z.; Bovik, A.C. Mean squared error: Love it or leave it? A new look at Signal Fidelity Measures. IEEE Signal Process. Mag. 2009, 26, 98–117. [Google Scholar] [CrossRef]
  5. Huynh-Thu, Q.; Ghanbari, M. The accuracy of PSNR in predicting video quality for different video scenes and frame rates. Telecommun. Syst. 2012, 49, 35–48. [Google Scholar] [CrossRef]
  6. Klink, J.; Uhl, T. Video Quality Assessment: Some Remarks on Selected Objective Metrics. In Proceedings of the 2020 28th International Conference on Software, Telecommunications and Computer Networks (SoftCOM), Split, Croatia, 17–19 September 2020. [Google Scholar]
  7. Vranjes, M.; Rimac-Drlje, S.; Zagar, D. Objective video quality metrics. In Proceedings of the ELMAR 2007, Zadar, Croatia, 12–14 September 2007; pp. 45–49. [CrossRef]
  8. Kotevski, Z.; Mitrevski, P. Performance Assessment of Metrics for Video Quality Estimation. In Proceedings of the International Scientific Conference on Information, Communication and Energy Systems and Technologies, Macedonia, Greece, 23–26 June 2010; pp. 693–696. [Google Scholar]
  9. Chikkerur, S.; Sundaram, V.; Reisslein, M.; Karam, L.J. Objective video quality assessment methods: A classification, review, and performance comparison. IEEE Trans. Broadcast. 2011, 57, 165–182. [Google Scholar] [CrossRef]
  10. Akramullah, S.; Akramullah, S. Video quality metrics. In Digital Video Concepts, Methods, and Metrics; Apress: New York, NY, USA, 2014; pp. 101–160. [Google Scholar]
  11. Chen, Y.; Wu, K.; Zhang, Q. From QoS to QoE: A Tutorial on Video Quality Assessment. IEEE Commun. Surv. Tutor. 2015, 17, 1126–1165. [Google Scholar] [CrossRef]
  12. Hanhart, P.; Korshunov, P.; Ebrahimi, T. Benchmarking of quality metrics on ultra-high definition video sequences. In Proceedings of the 2013 18th International Conference on Digital Signal Processing (DSP), Santorini, Greece, 1–3 July 2013; pp. 1–8. [Google Scholar]
  13. Hanhart, P.; Bernardo, M.V.; Pereira, M.; Pinheiro, A.M.G.; Ebrahimi, T. Benchmarking of objective quality metrics for HDR image quality assessment. EURASIP J. Image Video Process. 2015, 2015, 39. [Google Scholar] [CrossRef]
  14. Klink, J. A Method of Codec Comparison and Selection for Good Quality Video Transmission Over Limited-Bandwidth Networks. Sensors 2021, 21, 4589. [Google Scholar] [CrossRef]
  15. Barman, N.; Martini, M.G. H.264/MPEG-AVC, H.265/MPEG-HEVC and VP9 codec comparison for live gaming video streaming. In Proceedings of the 2017 Ninth International Conference on Quality of Multimedia Experience (QoMEX), Erfurt, Germany, 31 May–2 June 2017; pp. 1–6. [Google Scholar]
  16. You, J.; Reiter, U.; Hannuksela, M.M.; Gabbouj, M.; Perkis, A. Perceptual-based quality assessment for audio–visual services: A survey. Signal Process. Image Commun. 2010, 25, 482–501. [Google Scholar] [CrossRef]
  17. Akhtar, Z.; Siddique, K.; Rattani, A.; Lutfi, S.L.; Falk, T.H. Why is Multimedia Quality of Experience Assessment a Challenging Problem? IEEE Access 2017, 7, 117897–117915. [Google Scholar] [CrossRef]
  18. Rassool, R. VMAF reproducibility: Validating a perceptual practical video quality metric. In Proceedings of the 2017 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), Cagliari, Italy, 7–9 June 2017; pp. 1–2. [Google Scholar]
  19. Moldovan, A.-N.; Ghergulescu, I.; Muntean, C.H. VQAMap: A Novel Mechanism for Mapping Objective Video Quality Metrics to Subjective MOS Scale. IEEE Trans. Broadcast. 2016, 62, 610–627. [Google Scholar] [CrossRef]
  20. Bentaleb, A.; Taani, B.; Begen, A.C.; Timmerer, C.; Zimmermann, R. A Survey on Bitrate Adaptation Schemes for Streaming Media Over HTTP. IEEE Commun. Surv. Tutor. 2018, 21, 562–585. [Google Scholar] [CrossRef]
  21. Sani, Y.; Mauthe, A.; Edwards, C. Adaptive Bitrate Selection: A Survey. IEEE Commun. Surv. Tutor. 2017, 19, 2985–3014. [Google Scholar] [CrossRef]
  22. Zabrovskiy, A.; Feldmann, C.; Timmerer, C. Multi-codec DASH dataset. In Proceedings of the 9th ACM Multimedia Systems Conference, Amsterdam, The Netherlands, 12–15 June 2018; pp. 438–443. [Google Scholar]
  23. Tanchenko, A. Visual-PSNR measure of image quality. J. Vis. Commun. Image Represent. 2014, 25, 874–878. [Google Scholar] [CrossRef]
  24. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  25. Bampis, C.G.; Li, Z.; Bovik, A.C. Spatiotemporal Feature Integration and Model Fusion for Full Reference Video Quality Assessment. IEEE Trans. Circuits Syst. Video Technol. 2018, 29, 2256–2270. [Google Scholar] [CrossRef]
  26. Wang, Z.; Bovik, A.C. A universal image quality index. IEEE Signal Process. Lett. 2002, 9, 81–84. [Google Scholar] [CrossRef]
  27. Li, L.; Lin, W.; Wang, X.; Yang, G.; Bahrami, K.; Kot, A.C. No-Reference Image Blur Assessment Based on Discrete Orthogonal Moments. IEEE Trans. Cybern. 2015, 46, 39–50. [Google Scholar] [CrossRef]
  28. Li, L.; Zhu, H.; Yang, G.; Qian, J. Referenceless Measure of Blocking Artifacts by Tchebichef Kernel Analysis. IEEE Signal Process. Lett. 2013, 21, 122–125. [Google Scholar] [CrossRef]
  29. Liu, H.; Klomp, N.; Heynderickx, I. A No-Reference Metric for Perceived Ringing Artifacts in Images. IEEE Trans. Circuits Syst. Video Technol. 2009, 20, 529–539. [Google Scholar] [CrossRef]
  30. Zhu, H.; Li, L.; Wu, J.; Dong, W.; Shi, G. MetaIQA: Deep meta-learning for no-reference image quality assessment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 14143–14152. [Google Scholar]
  31. Kang, L.; Ye, P.; Li, Y.; Doermann, D. Convolutional neural networks for no-reference image quality assessment. In Proceedings of the 2014 IEEE Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1733–1740. [Google Scholar]
  32. Bosse, S.; Maniry, D.; Müller, K.R.; Wiegand, T.; Samek, W. Deep neural networks for no-reference and full-reference image quality assessment. IEEE Trans. Image Process. 2017, 27, 206–219. [Google Scholar] [CrossRef] [PubMed]
  33. Zhang, W.; Ma, K.; Yan, J.; Deng, D.; Wang, Z. Blind Image Quality Assessment Using a Deep Bilinear Convolutional Neural Network. IEEE Trans. Circuits Syst. Video Technol. 2018, 30, 36–47. [Google Scholar] [CrossRef]
  34. Sun, S.; Yu, T.; Xu, J.; Zhou, W.; Chen, Z. GraphIQA: Learning Distortion Graph Representations for Blind Image Quality Assessment. IEEE Trans. Multimedia 2022, 14, 1–14. [Google Scholar] [CrossRef]
  35. Liu, J.; Zhou, W.; Li, X.; Xu, J.; Chen, Z. LIQA: Lifelong Blind Image Quality Assessment. IEEE Trans. Multimedia 2022, 14, 1–13. [Google Scholar] [CrossRef]
  36. ITU-T Rec. H.264; Audiovisual and Multimedia Systems: Infrastructure of Audiovisual Services—Coding of Moving Video, Advanced Video Coding for Generic Audiovisual Services. International Telecommunication Union: Geneva, Switzerland, 2021.
  37. ITU-T Rec. H.265; Infrastructure of Audiovisual Services—Coding of Moving Video: High Efficiency Video Coding. International Telecommunication Union: Geneva, Switzerland, 2021.
  38. Grange, A.; De Rivaz, P.; Hunt, J. VP9 Bitstream Decoding Process Specification. WebM Project. 2016. Available online: http://downloads.webmproject.org.storage.googleapis.com/docs/vp9/vp9-bitstream-specification-v0.6-20160331-draft.pdf (accessed on 25 February 2023).
  39. ITU-T Rec. P.910; Subjective Video Quality Assessment Methods for Multimedia Applications. International Telecommunication Union: Geneva, Switzerland, 2021.
  40. Mukherjee, D.; Bankoski, J.; Grange, A.; Han, J.; Koleszar, J.; Wilkins, P.; Xu, Y.; Bultje, R. The latest open-source video codec VP9—An overview and preliminary results. In Proceedings of the 2013 Picture Coding Symposium, San Jose, CA, USA, 8–11 December 2013; pp. 390–393. [Google Scholar] [CrossRef]
  41. Winkler, S. Video quality measurement standards—Current status and trends. In Proceedings of the 2009 7th International Conference on Information, Communications and Signal Processing (ICICS), Macau, China, 8–10 December 2009; pp. 1–5. [Google Scholar]
  42. Winkler, S. On the properties of subjective ratings in video quality experiments. In Proceedings of the 2009 International Workshop on Quality of Multimedia Experience, San Diego, CA, USA, 29–31 July 2009; pp. 139–144. [Google Scholar] [CrossRef]
  43. ITU-T Recommendation P.913; Methods for the Subjective Assessment of Video Quality, Audio Quality and Audiovisual Quality of Internet Video and Distribution Quality Television in Any Environment. ITU: Geneva, Switzerland, 2021.
  44. Harysandi, D.K.; Oktaviani, R.; Meylani, L.; Vonnisa, M.; Hashiguchi, H.; Shimomai, T.; Aris, N.A.M. International Telecommunication Union-Radiocommunication Sector P. 837-6 and P. 837-7 performance to estimate Indonesian rainfall. Telkomnika 2020, 18, 2292–2303. [Google Scholar]
  45. ITU-R BT.709-6; Parameter Values for the HDTV Standards for Production and International Programme Exchange. BT Series: Broadcasting Service. ITU: Geneva, Switzerland, 2015.
  46. Taha, M.; Ali, A.; Lloret, J.; Gondim, P.R.L.; Canovas, A. An automated model for the assessment of QoE of adaptive video streaming over wireless networks. Multimedia Tools Appl. 2021, 80, 26833–26854. [Google Scholar] [CrossRef]
  47. Mercat, A.; Viitanen, M.; Vanne, J. UVG dataset: 50/120fps 4K sequences for video codec analysis and development. In Proceedings of the 11th ACM Multimedia Systems Conference, Istanbul, Turkey, 8–11 June 2020; pp. 297–302. [Google Scholar]
  48. FFmpeg: A Complete, Cross-Platform Solution to Record, Convert and Stream Audio and Video. Available online: https://ffmpeg.org/ (accessed on 25 February 2023).
  49. Brachmański, S.; Klink, J. Subjective Assessment of the Quality of Video Sequences by the Young Viewers. In Proceedings of the 30th International Conference on Software, Telecommunications and Computer Networks (SoftCOM 2022), Split, Croatia, 22–24 September 2022; pp. 1–6. [Google Scholar]
Figure 1. An example frame from the original video.
Figure 2. Video quality assessment using double stimulus method.
Figure 3. Video sample preparation procedure.
Figure 4. Flow diagram of the work.
Figure 5. Results of the subjective quality assessment (MOS) for the H.264-encoded video as a function of bit rate for different spatial resolutions—measurements at home [49].
Figure 6. Results of the subjective quality assessment (MOS) for the H.264-encoded video as a function of bit rate for different spatial resolutions—measurements in the laboratory.
Figure 7. Difference between the results of the subjective evaluation of the H.264-encoded video (ΔMOS), conducted in the laboratory and at home, as a function of bit rate for different resolutions.
Figure 8. Results of the subjective quality assessment (MOS) for the H.265-encoded video as a function of bit rate for different spatial resolutions—measurements at home.
Figure 9. Results of the subjective quality assessment (MOS) for the H.265-encoded video as a function of bit rate for different spatial resolutions—measurements in the laboratory.
Figure 10. Difference between the results of the subjective evaluation of the H.265-encoded video (ΔMOS), conducted in the laboratory and at home, as a function of bit rate for different resolutions.
Figure 11. Results of the subjective quality assessment (MOS) for the VP9-encoded video as a function of bit rate for different spatial resolutions—measurements at home.
Figure 12. Results of the subjective quality assessment (MOS) for the VP9-encoded video as a function of bit rate for different spatial resolutions—measurements in the laboratory.
Figure 13. Difference between the results of the subjective evaluation of the VP9-encoded video (ΔMOS), conducted in the laboratory and at home, as a function of bit rate for different resolutions.
Figure 14. Relationship of the objective assessment of video quality encoded in the H.264 standard vs. bit rate for the resolutions 640 × 360, 858 × 480, 1280 × 720, and 1920 × 1080.
Figure 15. Relationship of the objective assessment of video quality encoded in the H.265 standard vs. bit rate for the resolutions 640 × 360, 858 × 480, 1280 × 720, and 1920 × 1080.
Figure 16. Relationship of the objective assessment of video quality encoded in the VP9 standard vs. bit rate for the resolutions 640 × 360, 858 × 480, 1280 × 720, and 1920 × 1080.
Table 1. Mean value of the video quality assessment (MOS) for the H.264 codec, standard deviation (S), and confidence interval coefficient (δ) for four resolutions—measurements at home [49].

Bit Rate (kbps)   640 × 360 (MOS / S / δ)   858 × 480 (MOS / S / δ)   1280 × 720 (MOS / S / δ)   1920 × 1080 (MOS / S / δ)
300               1.00 / 0.00 / 0.00        1.00 / 0.00 / 0.00        1.00 / 0.00 / 0.00         1.05 / 0.23 / 0.10
400               1.11 / 0.32 / 0.15        1.21 / 0.42 / 0.19        1.26 / 0.45 / 0.20         1.26 / 0.45 / 0.20
500               1.50 / 0.51 / 0.24        1.42 / 0.69 / 0.31        1.58 / 0.69 / 0.31         1.42 / 0.51 / 0.23
600               1.74 / 0.73 / 0.33        1.72 / 0.57 / 0.27        1.84 / 0.60 / 0.27         1.68 / 0.48 / 0.21
700               1.94 / 0.73 / 0.34        2.11 / 0.58 / 0.27        2.11 / 0.57 / 0.26         1.89 / 0.32 / 0.14
800               2.33 / 0.49 / 0.22        2.26 / 0.65 / 0.29        2.39 / 0.70 / 0.32         2.21 / 0.54 / 0.24
900               2.42 / 0.51 / 0.23        2.47 / 0.62 / 0.30        2.61 / 0.70 / 0.32         2.58 / 0.51 / 0.23
1000              2.71 / 0.92 / 0.44        2.79 / 0.54 / 0.24        2.83 / 0.62 / 0.29         2.84 / 0.50 / 0.23
1500              2.89 / 0.88 / 0.39        3.00 / 0.49 / 0.22        3.16 / 0.76 / 0.34         3.37 / 0.50 / 0.22
2000              2.95 / 0.71 / 0.32        3.11 / 0.68 / 0.31        3.50 / 0.51 / 0.24         3.79 / 0.63 / 0.28
2500              3.00 / 0.67 / 0.30        3.24 / 0.75 / 0.36        3.67 / 0.59 / 0.27         4.06 / 0.73 / 0.34
3000              3.05 / 0.62 / 0.28        3.33 / 0.77 / 0.35        3.88 / 0.78 / 0.37         4.28 / 0.67 / 0.31
3500              3.17 / 0.62 / 0.29        3.42 / 0.77 / 0.35        4.06 / 0.68 / 0.33         4.41 / 0.51 / 0.24
4000              3.22 / 0.55 / 0.25        3.58 / 0.84 / 0.38        4.11 / 0.58 / 0.27         4.56 / 0.51 / 0.24
4500              3.33 / 0.59 / 0.27        3.68 / 0.89 / 0.40        4.19 / 0.66 / 0.32         4.67 / 0.49 / 0.22
5000              3.38 / 0.72 / 0.35        3.84 / 0.76 / 0.34        4.25 / 0.58 / 0.28         4.78 / 0.43 / 0.20
5500              3.44 / 0.62 / 0.28        3.95 / 0.71 / 0.32        4.32 / 0.58 / 0.26         4.83 / 0.38 / 0.18
6000              3.57 / 0.65 / 0.34        4.06 / 0.73 / 0.34        4.37 / 0.50 / 0.22         4.94 / 0.24 / 0.11
Table 2. Mean value of the video quality assessment (MOS) for the H.264 codec, standard deviation (S), and confidence interval coefficient (δ) for four resolutions—measurements in the laboratory.

Bit Rate (kbps)   640 × 360 (MOS / S / δ)   858 × 480 (MOS / S / δ)   1280 × 720 (MOS / S / δ)   1920 × 1080 (MOS / S / δ)
300               1.33 / 0.48 / 0.15        1.21 / 0.42 / 0.13        1.16 / 0.43 / 0.13         1.05 / 0.32 / 0.10
400               1.54 / 0.51 / 0.16        1.51 / 0.55 / 0.16        1.33 / 0.57 / 0.17         1.19 / 0.45 / 0.13
500               1.85 / 0.71 / 0.22        1.95 / 0.49 / 0.15        1.81 / 0.55 / 0.16         1.35 / 0.57 / 0.17
600               2.18 / 0.79 / 0.25        2.26 / 0.62 / 0.19        2.24 / 0.66 / 0.20         1.58 / 0.59 / 0.18
700               2.69 / 0.52 / 0.16        2.63 / 0.66 / 0.20        2.71 / 0.56 / 0.17         1.98 / 0.64 / 0.19
800               2.90 / 0.55 / 0.17        2.95 / 0.49 / 0.15        2.86 / 0.64 / 0.19         2.40 / 0.54 / 0.16
900               2.97 / 0.49 / 0.15        3.00 / 0.62 / 0.19        3.12 / 0.54 / 0.16         2.81 / 0.55 / 0.16
1000              3.08 / 0.62 / 0.20        3.16 / 0.69 / 0.21        3.26 / 0.54 / 0.16         3.16 / 0.57 / 0.17
1500              3.38 / 0.49 / 0.15        3.60 / 0.69 / 0.21        3.74 / 0.66 / 0.20         3.86 / 0.47 / 0.14
2000              3.49 / 0.64 / 0.20        3.81 / 0.70 / 0.21        4.07 / 0.70 / 0.21         4.26 / 0.49 / 0.15
2500              3.54 / 0.79 / 0.25        3.91 / 0.53 / 0.16        4.19 / 0.59 / 0.18         4.40 / 0.54 / 0.16
3000              3.59 / 0.64 / 0.20        4.05 / 0.58 / 0.17        4.28 / 0.55 / 0.16         4.49 / 0.51 / 0.15
3500              3.61 / 0.72 / 0.23        4.07 / 0.74 / 0.22        4.33 / 0.47 / 0.14         4.54 / 0.55 / 0.17
4000              3.68 / 0.66 / 0.21        4.14 / 0.74 / 0.22        4.40 / 0.49 / 0.15         4.63 / 0.49 / 0.15
4500              3.74 / 0.60 / 0.19        4.21 / 0.60 / 0.18        4.49 / 0.51 / 0.15         4.70 / 0.46 / 0.14
5000              3.79 / 0.77 / 0.24        4.26 / 0.62 / 0.19        4.56 / 0.50 / 0.15         4.79 / 0.41 / 0.12
5500              3.82 / 0.51 / 0.16        4.36 / 0.48 / 0.15        4.65 / 0.48 / 0.14         4.84 / 0.37 / 0.11
6000              3.85 / 0.49 / 0.15        4.40 / 0.49 / 0.15        4.72 / 0.45 / 0.14         4.91 / 0.29 / 0.09
Table 3. Values of the difference ΔMOS (for the H.264 codec) between the results obtained under laboratory and home conditions.

Bit Rate                      ΔMOS
(kbps)     640 × 360   858 × 480   1280 × 720   1920 × 1080
300        0.33        0.21        0.16          0.00
400        0.43        0.30        0.06         −0.08
500        0.35        0.53        0.24         −0.07
600        0.44        0.53        0.40         −0.10
700        0.75        0.52        0.60          0.08
800        0.56        0.69        0.47          0.18
900        0.55        0.53        0.51          0.24
1000       0.37        0.37        0.42          0.32
1500       0.49        0.60        0.58          0.49
2000       0.54        0.70        0.57          0.47
2500       0.54        0.67        0.52          0.34
3000       0.54        0.71        0.40          0.21
3500       0.44        0.65        0.26          0.13
4000       0.46        0.56        0.28          0.07
4500       0.40        0.53        0.30          0.03
5000       0.42        0.41        0.31          0.01
5500       0.37        0.41        0.34          0.00
6000       0.27        0.34        0.35         −0.04
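The ΔMOS entries in Tables 3, 6, and 9 are element-wise differences MOS(lab) − MOS(home) taken at matching bit rates and resolutions. For example, the 300 kbps row of Table 3 follows directly from the corresponding rows of Tables 1 and 2:

```python
# ΔMOS for the 300 kbps row of the H.264 tables (values taken from Tables 1 and 2).
import numpy as np

mos_home = np.array([1.00, 1.00, 1.00, 1.05])  # Table 1, 300 kbps: 360p..1080p
mos_lab = np.array([1.33, 1.21, 1.16, 1.05])   # Table 2, 300 kbps: 360p..1080p
print(np.round(mos_lab - mos_home, 2))         # -> [0.33 0.21 0.16 0.  ], as in Table 3
```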
Table 4. Mean value of the video quality assessment (MOS) for H.265 codec, standard deviation (S), and confidence interval coefficient (δ) for four resolutions—measurements at home.

Bit Rate |    640 × 360     |    858 × 480     |    1280 × 720    |   1920 × 1080
(kbps)   | MOS   S     δ    | MOS   S     δ    | MOS   S     δ    | MOS   S     δ
300      | 1.09  0.34  0.16 | 1.09  0.33  0.11 | 1.09  0.28  0.10 | 1.12  0.38  0.13
400      | 1.33  0.56  0.22 | 1.35  0.50  0.17 | 1.48  0.59  0.20 | 1.42  0.65  0.22
500      | 1.73  0.79  0.33 | 1.74  0.85  0.29 | 1.76  0.74  0.25 | 1.79  0.73  0.24
600      | 1.97  0.73  0.26 | 1.97  0.71  0.24 | 2.16  0.62  0.21 | 2.18  0.71  0.24
700      | 2.09  0.72  0.26 | 2.23  0.77  0.27 | 2.42  0.67  0.24 | 2.50  0.76  0.26
800      | 2.33  0.66  0.22 | 2.55  0.82  0.28 | 2.69  0.65  0.22 | 2.82  0.72  0.25
900      | 2.47  0.56  0.22 | 2.76  0.74  0.25 | 2.94  0.55  0.19 | 3.06  0.55  0.19
1000     | 2.56  0.59  0.27 | 2.94  0.73  0.25 | 3.13  0.29  0.10 | 3.35  0.57  0.19
1500     | 2.75  0.72  0.23 | 3.18  0.58  0.19 | 3.44  0.51  0.18 | 3.79  0.57  0.19
2000     | 2.93  0.84  0.22 | 3.35  0.56  0.19 | 3.64  0.49  0.17 | 4.03  0.57  0.19
2500     | 3.06  0.86  0.33 | 3.58  0.58  0.20 | 3.85  0.53  0.18 | 4.26  0.68  0.23
3000     | 3.24  0.81  0.21 | 3.68  0.60  0.20 | 4.00  0.60  0.21 | 4.44  0.65  0.22
3500     | 3.33  0.75  0.16 | 3.82  0.85  0.29 | 4.18  0.54  0.18 | 4.53  0.58  0.19
4000     | 3.44  0.72  0.15 | 3.88  0.61  0.21 | 4.25  0.58  0.20 | 4.65  0.46  0.15
4500     | 3.52  0.78  0.00 | 4.03  0.62  0.21 | 4.31  0.60  0.21 | 4.71  0.41  0.14
5000     | 3.63  0.65  0.20 | 4.09  0.63  0.22 | 4.39  0.65  0.23 | 4.79  0.41  0.14
5500     | 3.69  0.63  0.16 | 4.18  0.76  0.26 | 4.53  0.58  0.20 | 4.85  0.37  0.13
6000     | 3.84  0.80  0.19 | 4.21  0.78  0.27 | 4.59  0.51  0.18 | 4.88  0.37  0.13
Table 5. Mean value of the video quality assessment (MOS) for H.265 codec, standard deviation (S), and confidence interval coefficient (δ) for four resolutions—measurements in laboratory.

Bit Rate |    640 × 360     |    858 × 480     |    1280 × 720    |   1920 × 1080
(kbps)   | MOS   S     δ    | MOS   S     δ    | MOS   S     δ    | MOS   S     δ
300      | 1.15  0.37  0.16 | 1.22  0.42  0.17 | 1.09  0.29  0.12 | 1.13  0.34  0.14
400      | 1.55  0.51  0.22 | 1.43  0.51  0.22 | 1.61  0.50  0.20 | 1.65  0.49  0.20
500      | 1.85  0.75  0.33 | 1.96  0.71  0.29 | 2.09  0.42  0.17 | 2.04  0.64  0.26
600      | 2.15  0.59  0.26 | 2.26  0.54  0.22 | 2.48  0.59  0.24 | 2.48  0.51  0.21
700      | 2.35  0.59  0.26 | 2.57  0.66  0.27 | 2.87  0.34  0.14 | 2.83  0.58  0.24
800      | 2.50  0.51  0.22 | 2.87  0.55  0.22 | 3.09  0.42  0.17 | 3.13  0.76  0.31
900      | 2.60  0.50  0.22 | 3.05  0.49  0.20 | 3.26  0.45  0.18 | 3.35  0.49  0.20
1000     | 2.80  0.62  0.27 | 3.17  0.49  0.20 | 3.43  0.51  0.21 | 3.48  0.59  0.24
1500     | 3.20  0.52  0.23 | 3.57  0.51  0.21 | 3.78  0.42  0.17 | 3.91  0.51  0.21
2000     | 3.40  0.50  0.22 | 3.91  0.60  0.24 | 4.09  0.67  0.27 | 4.22  0.52  0.21
2500     | 3.55  0.76  0.33 | 4.04  0.47  0.19 | 4.30  0.56  0.23 | 4.43  0.51  0.21
3000     | 3.70  0.47  0.21 | 4.17  0.39  0.16 | 4.43  0.59  0.24 | 4.57  0.51  0.21
3500     | 3.80  0.41  0.18 | 4.26  0.45  0.18 | 4.48  0.59  0.24 | 4.70  0.47  0.19
4000     | 3.89  0.32  0.14 | 4.30  0.47  0.19 | 4.57  0.51  0.21 | 4.78  0.42  0.17
4500     | 4.00  0.00  0.00 | 4.35  0.49  0.20 | 4.61  0.50  0.20 | 4.83  0.39  0.16
5000     | 4.05  0.39  0.17 | 4.39  0.50  0.20 | 4.70  0.47  0.19 | 4.87  0.34  0.14
5500     | 4.10  0.31  0.13 | 4.39  0.50  0.20 | 4.74  0.45  0.18 | 4.91  0.29  0.12
6000     | 4.15  0.37  0.16 | 4.41  0.50  0.21 | 4.78  0.42  0.17 | 4.91  0.29  0.12
Table 6. Values of the difference ΔMOS (for the H.265 codec) between the results obtained under laboratory and home conditions.

Bit Rate                      ΔMOS
(kbps)     640 × 360   858 × 480   1280 × 720   1920 × 1080
300        0.06        0.13        0.00          0.01
400        0.22        0.08        0.12          0.23
500        0.12        0.22        0.33          0.25
600        0.18        0.29        0.32          0.30
700        0.26        0.34        0.45          0.33
800        0.17        0.32        0.40          0.31
900        0.13        0.29        0.32          0.29
1000       0.24        0.23        0.31          0.13
1500       0.45        0.39        0.35          0.12
2000       0.47        0.56        0.45          0.19
2500       0.49        0.47        0.46          0.17
3000       0.46        0.50        0.43          0.12
3500       0.47        0.44        0.30          0.17
4000       0.46        0.43        0.32          0.14
4500       0.48        0.32        0.30          0.12
5000       0.43        0.30        0.31          0.08
5500       0.41        0.21        0.21          0.06
6000       0.31        0.20        0.19          0.03
Table 7. Mean value of the video quality assessment (MOS) for VP9 codec, standard deviation (S), and confidence interval coefficient (δ) for four resolutions—measurements at home.

Bit Rate |    640 × 360     |    858 × 480     |    1280 × 720    |   1920 × 1080
(kbps)   | MOS   S     δ    | MOS   S     δ    | MOS   S     δ    | MOS   S     δ
300      | 1.12  0.35  0.14 | 1.24  0.46  0.18 | 1.26  0.45  0.13 | 1.29  0.48  0.19
400      | 1.48  0.51  0.20 | 1.52  0.51  0.20 | 1.69  0.56  0.17 | 1.62  0.76  0.29
500      | 1.76  0.43  0.17 | 1.80  0.71  0.28 | 2.07  0.64  0.19 | 2.00  0.86  0.32
600      | 1.92  0.56  0.22 | 1.96  0.71  0.28 | 2.41  0.55  0.17 | 2.25  0.75  0.28
700      | 2.09  0.57  0.23 | 2.04  0.74  0.30 | 2.71  0.55  0.17 | 2.43  0.92  0.34
800      | 2.30  0.62  0.24 | 2.29  0.58  0.23 | 2.93  0.71  0.22 | 2.64  0.85  0.32
900      | 2.38  0.66  0.25 | 2.46  0.60  0.24 | 3.12  0.50  0.15 | 2.85  0.80  0.30
1000     | 2.48  0.51  0.20 | 2.60  0.50  0.20 | 3.26  0.45  0.13 | 3.00  0.76  0.28
1500     | 2.81  0.74  0.28 | 2.96  0.71  0.27 | 3.64  0.48  0.15 | 3.37  0.81  0.31
2000     | 2.96  0.85  0.33 | 3.19  0.72  0.28 | 3.90  0.43  0.13 | 3.71  0.56  0.21
2500     | 3.07  0.72  0.27 | 3.35  0.70  0.27 | 4.07  0.51  0.16 | 3.93  0.57  0.21
3000     | 3.19  0.72  0.27 | 3.50  0.59  0.23 | 4.29  0.60  0.18 | 4.14  0.60  0.22
3500     | 3.30  0.64  0.24 | 3.64  0.66  0.26 | 4.43  0.55  0.17 | 4.36  0.56  0.21
4000     | 3.44  0.59  0.22 | 3.77  0.62  0.24 | 4.52  0.55  0.17 | 4.46  0.51  0.19
4500     | 3.52  0.51  0.19 | 3.85  0.58  0.22 | 4.57  0.55  0.17 | 4.61  0.51  0.19
5000     | 3.63  0.65  0.24 | 3.96  0.71  0.27 | 4.64  0.48  0.15 | 4.67  0.49  0.19
5500     | 3.70  0.62  0.24 | 4.04  0.71  0.27 | 4.69  0.47  0.14 | 4.70  0.48  0.18
6000     | 3.74  0.61  0.23 | 4.15  0.63  0.24 | 4.74  0.45  0.13 | 4.75  0.46  0.17
Table 8. Mean value of the video quality assessment (MOS) for VP9 codec, standard deviation (S), and confidence interval coefficient (δ) for four resolutions—measurements in laboratory.

Bit Rate |    640 × 360     |    858 × 480     |    1280 × 720    |   1920 × 1080
(kbps)   | MOS   S     δ    | MOS   S     δ    | MOS   S     δ    | MOS   S     δ
300      | 1.32  0.47  0.14 | 1.27  0.45  0.14 | 1.26  0.45  0.13 | 1.50  0.55  0.17
400      | 1.62  0.58  0.18 | 1.64  0.58  0.17 | 1.69  0.56  0.17 | 1.83  0.66  0.20
500      | 2.00  0.44  0.13 | 1.98  0.51  0.15 | 2.07  0.64  0.19 | 2.17  0.61  0.20
600      | 2.24  0.48  0.15 | 2.37  0.62  0.18 | 2.41  0.55  0.17 | 2.49  0.63  0.19
700      | 2.50  0.51  0.15 | 2.63  0.49  0.15 | 2.71  0.55  0.17 | 2.74  0.76  0.23
800      | 2.71  0.46  0.14 | 2.84  0.43  0.13 | 2.93  0.71  0.22 | 3.02  0.47  0.14
900      | 2.90  0.30  0.09 | 3.02  0.34  0.10 | 3.12  0.50  0.15 | 3.23  0.43  0.13
1000     | 3.05  0.44  0.13 | 3.14  0.41  0.12 | 3.26  0.45  0.13 | 3.43  0.50  0.15
1500     | 3.48  0.51  0.15 | 3.53  0.50  0.15 | 3.64  0.48  0.15 | 3.84  0.48  0.14
2000     | 3.69  0.56  0.17 | 3.81  0.45  0.13 | 3.90  0.43  0.13 | 4.05  0.43  0.13
2500     | 3.81  0.45  0.14 | 3.93  0.40  0.12 | 4.07  0.51  0.16 | 4.26  0.44  0.13
3000     | 3.88  0.33  0.10 | 4.07  0.40  0.12 | 4.29  0.60  0.18 | 4.44  0.50  0.15
3500     | 3.95  0.38  0.11 | 4.19  0.50  0.15 | 4.43  0.55  0.17 | 4.60  0.49  0.15
4000     | 3.95  0.44  0.13 | 4.28  0.50  0.15 | 4.52  0.55  0.17 | 4.72  0.45  0.14
4500     | 4.00  0.58  0.18 | 4.35  0.53  0.16 | 4.57  0.55  0.17 | 4.77  0.43  0.13
5000     | 4.05  0.44  0.14 | 4.44  0.50  0.15 | 4.64  0.48  0.15 | 4.81  0.39  0.12
5500     | 4.10  0.43  0.13 | 4.51  0.51  0.15 | 4.69  0.47  0.14 | 4.83  0.38  0.11
6000     | 4.15  0.48  0.15 | 4.56  0.50  0.15 | 4.74  0.45  0.13 | 4.84  0.37  0.11
Table 9. Values of the difference ΔMOS (for the VP9 codec) between the results obtained under laboratory and home conditions.

Bit Rate                      ΔMOS
(kbps)     640 × 360   858 × 480   1280 × 720   1920 × 1080
300        0.20        0.03       −0.01          0.21
400        0.14        0.12        0.17          0.22
500        0.24        0.18        0.29          0.17
600        0.32        0.41        0.38          0.24
700        0.41        0.59        0.39          0.32
800        0.42        0.55        0.39          0.38
900        0.52        0.56        0.40          0.38
1000       0.57        0.54        0.44          0.43
1500       0.66        0.57        0.32          0.47
2000       0.73        0.62        0.31          0.33
2500       0.74        0.58        0.29          0.33
3000       0.70        0.57        0.32          0.30
3500       0.66        0.55        0.35          0.25
4000       0.51        0.51        0.30          0.26
4500       0.48        0.50        0.28          0.16
5000       0.42        0.48        0.30          0.15
5500       0.39        0.47        0.31          0.13
6000       0.41        0.40        0.28          0.09
Table 10. Correlations between QoS and QoE values (in the lab and at home) for H.264-encoded video.

QoS vs. QoE Correlations
                     Lab                                Home
QoS metric      360p    480p    720p    1080p      360p    480p    720p    1080p
PSNR  360p      0.981   0.996   0.995   0.986      0.986   0.988   0.995   0.985
PSNR  480p      0.968   0.989   0.990   0.987      0.980   0.989   0.998   0.994
PSNR  720p      0.951   0.978   0.981   0.982      0.970   0.986   0.995   0.996
PSNR  1080p     0.949   0.976   0.979   0.981      0.968   0.985   0.994   0.996
SSIM  360p      0.989   0.997   0.995   0.977      0.988   0.983   0.987   0.970
SSIM  480p      0.987   0.997   0.996   0.981      0.988   0.985   0.990   0.975
SSIM  720p      0.985   0.997   0.996   0.984      0.988   0.987   0.993   0.980
SSIM  1080p     0.989   0.997   0.995   0.978      0.988   0.982   0.987   0.970
VMAF  360p      0.974   0.992   0.994   0.992      0.983   0.988   0.998   0.992
VMAF  480p      0.973   0.992   0.993   0.992      0.982   0.987   0.998   0.993
VMAF  720p      0.971   0.990   0.992   0.993      0.979   0.984   0.996   0.992
VMAF  1080p     0.981   0.994   0.994   0.990      0.982   0.979   0.989   0.980
Table 11. Correlations between QoS and QoE values (in the lab and at home) for H.265-encoded video.

QoS vs. QoE Correlations
                     Lab                                Home
QoS metric      360p    480p    720p    1080p      360p    480p    720p    1080p
PSNR  360p      0.998   0.997   0.997   0.998      0.991   0.995   0.996   0.997
PSNR  480p      0.999   0.991   0.990   0.993      0.993   0.994   0.995   0.996
PSNR  720p      0.995   0.981   0.980   0.983      0.991   0.989   0.991   0.989
PSNR  1080p     0.991   0.975   0.973   0.977      0.989   0.986   0.987   0.985
SSIM  360p      0.993   0.997   0.999   0.998      0.986   0.991   0.992   0.993
SSIM  480p      0.995   0.998   0.999   0.999      0.988   0.993   0.994   0.995
SSIM  720p      0.995   0.998   0.999   0.999      0.988   0.993   0.994   0.995
SSIM  1080p     0.994   0.996   0.998   0.998      0.987   0.991   0.993   0.993
VMAF  360p      0.998   0.995   0.992   0.995      0.989   0.994   0.995   0.998
VMAF  480p      0.997   0.995   0.992   0.995      0.988   0.993   0.995   0.997
VMAF  720p      0.995   0.996   0.992   0.995      0.985   0.991   0.993   0.996
VMAF  1080p     0.976   0.989   0.991   0.989      0.965   0.976   0.977   0.981
Table 12. Correlations between QoS and QoE values (in the lab and at home) for VP9-encoded video.

QoS vs. QoE Correlations
                     Lab                                Home
QoS metric      360p    480p    720p    1080p      360p    480p    720p    1080p
PSNR  360p      0.997   0.998   0.999   0.998      0.994   0.990   0.996   0.993
PSNR  480p      0.990   0.996   0.997   0.997      0.997   0.997   0.999   0.999
PSNR  720p      0.980   0.989   0.990   0.991      0.995   0.999   0.997   0.999
PSNR  1080p     0.951   0.966   0.968   0.972      0.978   0.989   0.984   0.987
SSIM  360p      0.998   0.997   0.997   0.995      0.988   0.981   0.989   0.985
SSIM  480p      0.998   0.998   0.998   0.996      0.991   0.984   0.991   0.988
SSIM  720p      0.998   0.998   0.998   0.997      0.991   0.986   0.993   0.990
SSIM  1080p     0.988   0.991   0.992   0.995      0.990   0.994   0.999   0.995
VMAF  360p      0.995   0.996   0.997   0.998      0.993   0.993   0.998   0.995
VMAF  480p      0.994   0.995   0.996   0.998      0.992   0.993   0.998   0.996
VMAF  720p      0.994   0.994   0.995   0.997      0.989   0.990   0.997   0.993
VMAF  1080p     0.993   0.988   0.987   0.991      0.978   0.980   0.991   0.983
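Each coefficient in Tables 10–12 relates one objective (QoS) series, a metric computed across the eighteen coding bit rates at one resolution, to one subjective (QoE) series of MOS values. The sketch below computes such a coefficient using Pearson's r, which is assumed here to be the correlation measure used; the VMAF numbers are hypothetical placeholders, while the MOS series is the 640 × 360 laboratory column of Table 2.

```python
# Sketch: correlation between one objective series and one MOS series.
from scipy.stats import pearsonr

vmaf_360p = [25.1, 33.8, 40.2, 45.6, 50.1, 54.0, 57.3, 60.2,      # hypothetical
             70.5, 76.8, 81.0, 84.1, 86.4, 88.2, 89.7, 91.0, 92.0, 92.9]
mos_lab_360p = [1.33, 1.54, 1.85, 2.18, 2.69, 2.90, 2.97, 3.08,   # Table 2, 360p lab
                3.38, 3.49, 3.54, 3.59, 3.61, 3.68, 3.74, 3.79, 3.82, 3.85]

r, p = pearsonr(vmaf_360p, mos_lab_360p)
print(f"Pearson r = {r:.3f} (p = {p:.1e})")
```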