Article

Research on Quantization Parameter Decision Scheme for High Efficiency Video Coding

School of Computer and Information Engineering, Harbin University of Commerce, Harbin 150028, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(23), 12758; https://doi.org/10.3390/app132312758
Submission received: 28 October 2023 / Revised: 18 November 2023 / Accepted: 24 November 2023 / Published: 28 November 2023

Abstract

High-Efficiency Video Coding (HEVC) is one of the most widely studied coding standards. It retains the block-based hybrid coding framework of Advanced Video Coding (AVC) and, compared to AVC, can double the compression ratio while maintaining the same reconstructed video quality. Quantization is an important module in video coding, and the quantization parameter chosen during quantization is a decisive factor for the bitrate, especially when channel bandwidth is limited. It is therefore particularly important to select a reasonable quantization parameter that brings the bitrate as close as possible to the target bitrate. To address the unreasonable selection of quantization parameters in codecs, this paper proposes using a differential evolution algorithm to assign quantization parameter values to the coding tree units (CTUs) in each frame of a 360-degree panoramic video based on HEVC, so as to strike a balance between bitrate and distortion. Firstly, the number of CTU rows in a 360-degree panoramic video frame is taken as the dimension of the optimization problem. Then, a trial vector is obtained by randomly selecting vectors in the population for mutation and crossover. In the mutation step, the algorithm generates a new parameter vector by adding the weighted difference between two population vectors to a third vector; the elements of the new parameter vector are then selected according to the crossover rate. Finally, each trial vector is used as the quantization parameters of the CTUs in the corresponding CTU row for encoding, and the vector with the lowest rate–distortion cost is selected. The algorithm produces the optimal quantization parameter combination for the current video. Experimental results show that, compared to the benchmark algorithm of the HEVC reference software HM-16.20, the proposed algorithm saves 1.86% of the bitrate while improving the peak signal-to-noise ratio (PSNR) by 0.07 dB.

1. Introduction

With the development of communication, video capture, computation, storage, and transmission technologies, the types of video are gradually diversifying. Video can be divided into 2DoF, 3DoF, and 6DoF video according to its degrees of freedom (DoF). Different from traditional 2DoF video, 360-degree panoramic video has three degrees of freedom. It is a spherical video that captures the entire surrounding environment, provides users with an immersive experience, and is expected to be widely used in the future. To ensure a better immersive experience, 360-degree panoramic video has a higher resolution [1], which greatly increases the amount of video data and poses great challenges to both storage and transmission. Therefore, panoramic videos must be compressed before being transmitted over the Internet. However, there is no framework for directly compressing and encoding panoramic videos; 360-degree panoramic videos must first be converted into two-dimensional videos for encoding. Hence, the compression of 360-degree panoramic video is of great significance.
Up to now, researchers have developed a variety of coding standards. In 1990, the International Telecommunication Union-Telecommunication Standardization Sector (ITU-T) developed the H.261 video coding standard. The block-based hybrid coding framework proposed by this standard is still in use today. The hybrid coding framework combines prediction with transform: during encoding, residual information is obtained by performing intra-frame or inter-frame prediction on the source video, followed by transform and quantization; after a series of inverse quantization and inverse transform operations at the decoding end, the decoded video sequence is generated. However, as technology has advanced, the H.261 standard retains only a few applications in transmission scenarios with extremely low bitrate requirements.
In order to be suitable for application scenarios such as video conferencing and television communication, ITU-T promulgated the H.263 encoding standard in 1996 [2]. Compared to the H.261 encoding standard, the H.263 standard has better compression performance and more advanced features, but the encoding delay is slightly higher than the H.261 standard.
In 1998, ITU-T released the H.263+ coding standard [3] based on the H.263 coding standard. The H.263+ standard achieves the purpose of improving video quality by adding some new coding tools. Among them, the new technologies include more accurate motion estimation, higher entropy coding efficiency, and better coding mode selection. These technologies allow H.263+ to provide better video quality at the same bitrate as H.263 or to consume a lower bitrate at the same video quality as H.263. The H.263+ standard has been widely used in fields such as video conferencing and video streaming and also provides basic technology for subsequent international video coding standards.
With the development of information technology and multimedia applications, the H.263+ standard could no longer meet the demands of video coding, transmission, and communication, and the H.264/AVC coding standard [4] was introduced. The H.264 encoding standard greatly improves video compression performance over the H.263+ standard and is widely used in various video applications, attracting many video coding scholars to conduct optimization research on it. At present, H.264 is still the most widely used video encoding standard in industry, mainly in fields such as digital TV broadcasting, real-time video communication, and video on demand.
Today, video applications are becoming increasingly diversified. A large number of videos in different formats with rich content can satisfy various needs, such as entertainment and leisure. Moreover, higher resolutions and wider viewing angles bring users better visual effects, so the pursuit of ultra-high-definition video has become a trend. At present, the resolution of videos widely used on the Internet has reached 1080p, and the range of scene viewing angles is also constantly expanding. These trends have inspired researchers to continuously advance video coding standards. In 2013, the Video Coding Experts Group (VCEG), together with the Moving Picture Experts Group (MPEG), finalized a new video coding standard, H.265/HEVC. The H.265/HEVC [5] coding standard continues to use the six modules of intra-prediction, inter-prediction, transformation, quantization, entropy coding, and loop filtering, and comprehensively accounts for the current network environment and computer processing capabilities. Depending on the type of video and other factors, the technology in each module is optimized or replaced to meet the needs of different industries and users. Compared to the H.264 encoding standard, H.265/HEVC reduces mismatch, improves the reliability of real-time transmission, reduces bandwidth overhead, and doubles the compression ratio.
In order to meet the challenges brought by the massive increase in the resolution of the video and to promote the development of the next generation of international video coding standards, the Joint Video Exploration Team (JVET) was established in October 2015. On the evening of 1 July 2020, Versatile Video Coding (VVC) version 1 was formally finalized at the 19th Joint Video Expert Group meeting. Compared to HEVC, VVC adds more than 30 new coding tools, covering all modules of the hybrid video codec system framework, making it not only more widely applicable to all links of the video industry chain but also able to widely cover all kinds of devices and terminals, such as mobile phones, computers, head-mounted virtual devices, and so on. In addition, according to the iterative requirements of the video coding standard, the data compression performance of VVC is doubled while keeping the decoder complexity not more than twice that of HEVC.
Different from traditional two-dimensional videos, 360-degree panoramic videos are presented as a three-dimensional sphere, which existing video codecs cannot encode and decode directly [6]. To solve this problem, JVET developed a test platform for panoramic video encoding along with the 360Lib software library [7], which can be used on its own for the projection, conversion, and quality assessment of panoramic video, or combined with the H.265/HEVC or H.266/VVC test platforms for panoramic video encoding and decoding. To suit existing codecs, the panoramic video encoding process mainly consists of projection and encoding. Panoramic video is first captured and stitched into a spherical video; during encoding, it must be converted into a flat video through a projection transformation before it can be encoded. Many projection conversion methods already exist, such as Equi-Rectangular Projection (ERP), Cube Map Projection (CMP) [8], Equi-angular Cubemap (EAC) [9], Octahedron Projection (OHP), Icosahedron Projection (ISP), Segmented Sphere Projection (SSP), and Rotated Sphere Projection (RSP). Restricted by factors such as acquisition and stitching, most panoramic video content is converted to the ERP projection format. ERP projects the pixels of each frame of the panoramic video onto a rectangular area according to a one-to-one mapping relationship. This introduces some distortion into the projected video, which affects coding efficiency. The CMP projection format is another format supported by video encoders; it maps each pixel of the panoramic sphere onto the faces of a cube centered at the observation point (viewport).
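To make the one-to-one ERP mapping concrete, here is a minimal sketch under the standard equirectangular convention. The longitude/latitude ranges, the north-pole-at-top orientation, and the function name are our own assumptions for illustration; the paper gives no formulas.

```python
def sphere_to_erp(lon_deg, lat_deg, width=3840, height=1920):
    """Map a point on the sphere (longitude in [-180, 180], latitude in
    [-90, 90], north pole at the top of the frame) to an ERP pixel.
    Every sphere point lands on exactly one pixel of the rectangle."""
    u = (lon_deg + 180.0) / 360.0      # horizontal fraction of the frame
    v = (90.0 - lat_deg) / 180.0       # vertical fraction of the frame
    x = min(int(u * width), width - 1)   # clamp the right/bottom edge
    y = min(int(v * height), height - 1)
    return x, y

# The equator/prime-meridian point maps to the centre of a 3840x1920 frame.
print(sphere_to_erp(0.0, 0.0))   # -> (1920, 960)
```

Because rows of equal latitude map to rows of equal height, pixels near the poles are stretched horizontally, which is the sampling-density distortion discussed above.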
Compared to the ERP projection format, the CMP projection format keeps the image height fixed and provides a more realistic panoramic effect, but it causes obvious distortion at the edges of the image. To further reduce the degree of distortion, researchers have continued to increase the number of projection planes. For example, the OHP projection format projects the sphere onto an octahedron and then performs extended splicing; the ISP projection format projects the sphere onto an icosahedron and then performs splicing. However, an increase in the number of projection planes increases the number of discontinuity boundaries and also adversely affects the prediction process, resulting in a decrease in coding performance. To address the coding problems caused by severe distortion in the polar areas of the ERP projection format, Yu et al. [10] divided the ERP projection plane into different tiles according to latitude and adjusted the sampling density dynamically by changing the height and width of the tiles, thereby solving the problem of bit allocation at different latitudes. Similarly, Lee et al. [11] also conducted in-depth research along the latitude direction of the ERP projection format and proposed downsampling pixel by pixel according to the latitude value: the higher the latitude, the greater the downsampling ratio, and the lower the latitude, the smaller the downsampling ratio. Although this method achieves substantial bitrate savings and is suitable for situations where the transmission bandwidth is limited, its computational complexity is very high. In addition, during the quantization process, the degree to which the transformed prediction residual is quantized has a great impact on the bitrate. If some areas can be coarsely quantized with little impact on visual quality, the bitrate can be further reduced. Therefore, Racapé et al. [12] proposed performing coarse quantization on high-latitude areas and fine quantization on low-latitude areas of the ERP projection plane, thereby further reducing the bitrate while maintaining stable visual quality.
The H.265/HEVC coding standard adds new technologies to many coding modules. For example, the number of angular intra-prediction modes has increased to 35; advanced motion vector prediction (AMVP) and merge technology are provided for inter-prediction [13,14,15,16,17,18]; and the quantization process introduces quantization groups [19,20,21]. These new tools have greatly improved coding efficiency. However, the increase in new coding technologies inevitably leads to an increase in coding parameters, and the selection of some coding parameters in the H.265/HEVC standard is unreasonable. For example, the quantization process requires a quantization parameter for each coding tree unit (CTU), yet in H.265/HEVC the quantization parameter value of each CTU is fixed. Such an allocation method lacks adaptability and inevitably leads to wasted bitrate and reduced rate–distortion performance. Since quantization introduces distortion into the compressed video data, it is particularly important to develop efficient, low-complexity quantizers and to study efficient quantization algorithms [22,23]. During the HEVC standardization process, many improvements to the quantization method were proposed. Ref. [24] fit the encoded frame data to a Laplacian distribution and proposed an adaptive quantization method based on inverse quantization parameters; however, the accuracy of the results is affected to some extent. Ref. [25] proposed a quantization algorithm for improving subjective quality, named block-level adaptive quantization (BLAQ). In BLAQ, each block has its own quantization parameters, which better adapt to the local content of the video sequence.
In addition, BLAQ can effectively reduce the non-uniformity of the video source in the quadtree-based hybrid video codec, which divides each CTU into sub-blocks and then applies a uniform reconstruction quantizer (URQ). However, in the HEVC encoder, BLAQ must find an adaptive QP for each block through rate–distortion optimization (RDO), so the computational complexity of the encoder increases greatly when BLAQ is enabled. In [26], Mo et al. proposed a quantization matrix (QM) method to improve subjective quality control and avoid hypothetical reference decoder (HRD) overflow in HEVC. QM methods, which exploit the spatial- and frequency-domain sensitivity of the human visual system (HVS), have long been used in image and video coding. However, a challenge they face is that the distortion cost is usually measured by the peak signal-to-noise ratio (PSNR), which correlates poorly with HVS models [27]. The quantization step size is also a key parameter affecting the quantization process. Some studies [28,29] use rate–distortion-optimized quantization to select the optimal quantization step size, but the computational complexity increases accordingly. However, the video sources in the above HEVC-based research are ordinary two-dimensional planar videos; optimization for the emerging 360-degree panoramic video with three degrees of freedom is relatively rare, even though the latter has more distinctive video characteristics that can be exploited to improve coding efficiency.
VVC is the latest video coding standard jointly developed by ITU-T VCEG and ISO/IEC MPEG. Compared to HEVC, VVC provides higher compression performance, but this improvement is accompanied by an increase in coding complexity. For example, VVC provides a more complex CU partitioning method and many new intra-prediction coding tools to adapt to different video scenes. Therefore, a considerable amount of current research focuses on reducing coding time while maintaining video quality. Ref. [30] introduces the types of VVC block partitions and the handling of image boundaries in detail. To address the high time complexity of VVC encoding, the main idea of [31] is to filter out intra block copy (IBC) and intra sub-partition (ISP) modes that are unlikely to be used. By rearranging the common prediction modes so that the most likely candidate becomes the optimal mode, and by adopting some neural network methods, this approach accounts for the newly added VVC intra-mode tools and achieves some performance improvement. Ref. [32] uses a histogram of oriented gradients (HOG) algorithm to prune the prediction and reports an improvement of 36.61%; this method is well suited to hardware implementation, but its performance loss is large. There is also research on fast algorithms in the inter-prediction module. Ref. [33] skips part of the affine search process of the CU through an early-termination strategy; it reduces coding time by 63% without a large loss in video quality. In the hardware direction, [34] reduces the area of the interpolation circuit and the hardware cost by reducing the number of filter taps in the fractional motion estimation (FME) search, with acceptable performance loss.
The differential evolution (DE) algorithm is a parameter optimization method based on evolutionary theory, proposed by Storn and Price in 1995 [35]. It is widely used in global optimization problems in science and engineering [36]. Researchers in various fields have proposed many improved DE algorithms by adjusting the parameters of the traditional DE algorithm and adapting it to specific problems. Ali et al. [37] added auxiliary populations alongside the original population and proposed rules for automatically calculating the scaling parameter to improve efficiency and robustness. Liu and Lampinen [38] proposed an adaptive scheme for the two key operations of mutation and crossover, so that the mutation factor and crossover rate change adaptively during execution. Mezura-Montes et al. [39] limited the range of some parameters in the algorithm, used different control parameters to solve certain specific problems, and selected the best control parameters for each problem through experiments. Das et al. [40] proposed applying a larger mutation factor in the initial iterations, giving the algorithm strong search capabilities at the beginning, and then decreasing the mutation factor linearly as the number of iterations increases. Although the DE algorithm has been studied for nearly thirty years and many improved variants exist, most of them apply only to low-dimensional problems; for high-dimensional problems, there are currently few optimization solutions. For example, some researchers have used co-evolutionary optimization algorithms [41,42], whose core idea is a divide-and-conquer strategy: decompose the original problem into multiple subproblems until each subproblem can be solved individually, and then fuse the results to obtain the optimal solution to the original problem.
To accelerate convergence on large-scale, high-dimensional problems, Liu et al. [43] combined fast evolutionary programming (FEP) with co-evolution. Recently, researchers have made different improvements to the differential evolution algorithm for different practical applications. The original algorithm treats the individuals in a population equally and evolves all of them in each generation. To overcome this shortcoming, ref. [44] proposed a new population evolution strategy (PES) that decreases the population size based on the differences among individuals during evolution and applied it to waveform inversion; the improved algorithm reduced the runtime by 50% compared to the original. For object tracking, Foo [45] chose fast compressive tracking, which builds on compressed-sensing theory, for rapid real-time tracking, and used a DE algorithm to search for the optimal parameters by iteratively improving a solution under a proper objective function; the proposed model enhances the ability to handle occlusion and out-of-plane rotation. With the rapid development of machine learning and deep learning, differential evolution has also been applied to artificial intelligence (AI) methods. Sun [46] proposed a novel personalized recommendation algorithm for learning resources based on DE and graph neural networks (GNNs), in which the differential evolution algorithm optimizes the model hyperparameters, improving recommendation performance. In conclusion, the differential evolution algorithm has become a stable and efficient parameter optimization algorithm. A reasonable idea, therefore, is to use it to decide the encoding parameters of 360-degree panoramic video based on the data characteristics of the video, so as to improve coding efficiency.

2. Research Significance

As an efficient parameter optimization algorithm, the DE algorithm has been widely applied to various optimization problems by researchers around the world. However, research that integrates differential evolution into video coding is relatively rare. Video coding involves many coding parameters, such as the CU split depth and prediction mode for intra-prediction, the motion vectors for inter-prediction, and the quantization parameters of the quantization process. How to use the differential evolution algorithm to find optimal coding parameters in each module of video coding therefore needs to be studied. Quantization parameters are important parameters in video encoding. However, in the current encoding standard, under the default encoding configuration, the quantization parameter of each CTU is fixed and does not change adaptively with the video content or scene complexity, leaving room for improvement in the compression ratio. Since the differential evolution algorithm is easy to implement and robust, it can be used outside the H.265/HEVC encoder, in combination with the data characteristics of the panoramic video, to select quantization parameters for the CTUs of each frame of the 360-degree panoramic video, ensuring a reasonable allocation of bitrate and an improvement in rate–distortion performance.

3. Materials and Methods

The differential evolution algorithm is an optimization algorithm that uses real-number coding to perform a heuristic random search in a continuous space. Because it can better overcome some of the difficulties that traditional genetic algorithms encounter on complex problems, it has received widespread attention and application. In video coding, differential evolution algorithms can be used to find optimal coding parameters to achieve higher video quality and compression rates. Quantization is a key module in video coding, and the quantization parameters in this process are crucial. However, under the current encoding standard with the default configuration, the quantization parameter of each CTU is a fixed value and does not change adaptively with the video content or scene complexity. This flaw leaves room for improvement in the compression rate. This section applies the differential evolution algorithm to the quantization parameter decision process in video coding, proposes a quantization parameter decision scheme based on the differential evolution algorithm, and allocates quantization parameters to each CTU, achieving higher coding efficiency while reducing the mismatch rate. The process of the quantization parameter decision scheme improved by the differential evolution algorithm is shown in Figure 1.

3.1. Initialization

In population initialization, the dimension of the individuals must be determined. This dimension is usually proportional to the dimension of the problem and affects the convergence speed and accuracy of the algorithm. The larger the population, the stronger its diversity, the wider the search space, and the greater the possibility of obtaining the optimal solution; however, a large population inevitably increases the number of fitness function evaluations. If the fitness function is complex and time-consuming to evaluate, the time complexity of the entire algorithm increases dramatically. Conversely, if the population is small, its diversity is relatively low and the search space is narrow; this accelerates convergence but easily leads to local optima or stagnation [47]. Therefore, choosing the population size generally requires a balance between the convergence speed and the search capability of the algorithm, while also considering the size and complexity of the problem. The resolution of 360-degree panoramic video is large, resulting in a large number of CTUs in each frame. The 360-degree panoramic video test sequences used in this article have a resolution of 3840 × 1920, and the size of each CTU is 64 × 64, so the number of CTUs in each frame is 1800. In the differential evolution algorithm, the dimension of the problem is closely related to the required population size and number of iterations. Making the quantization parameter decision directly for all 1800 CTUs, i.e., with a dimension of 1800 per individual, would greatly increase the population size and number of iterations required by the algorithm, and thus its time complexity.
In the ERP projection format, CTUs at the same latitude have the same sampling degree; that is, the CTUs in each row of the projected two-dimensional video share the same sampling degree. By analyzing the 360-degree panoramic video in the ERP projection format, the number of CTU rows of the projected two-dimensional video is therefore taken as the problem dimension of the differential evolution algorithm.
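As a quick sanity check on the numbers above (a trivial computation using only figures stated in the text):

```python
# CTU grid for a 3840x1920 ERP frame with 64x64 CTUs, as stated above.
frame_w, frame_h, ctu_size = 3840, 1920, 64

ctus_per_row = frame_w // ctu_size   # CTUs in one row
ctu_rows     = frame_h // ctu_size   # rows = problem dimension for DE
total_ctus   = ctus_per_row * ctu_rows

print(ctus_per_row, ctu_rows, total_ctus)   # -> 60 30 1800
```

Optimizing one QP per CTU row thus reduces the problem dimension from 1800 to 30.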
After determining the dimension, the population size and number of iterations are decided based on the complexity of the problem. Since the mutation operation in the differential evolution algorithm requires randomly selecting, for each individual, three other individuals different from it, the population size cannot be less than 4. By varying the population size over multiple experimental runs, and taking both convergence speed and algorithm performance into account, the proposed algorithm sets the population size to 50 and the maximum number of iterations to 75.
In H.265/HEVC, the quantization parameter QP ranges from 0 to 51, giving 52 optional values. The initial population is established by randomly assigning an optional quantization parameter value to each component of each individual, expressed as (X1, X2, …, X50), where X1 = (QP1,1, QP1,2, …, QP1,N). Figure 2 shows the correspondence between individual X1 in the population and the CTUs in each row.
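As an illustrative sketch (the variable names and the use of Python's `random` module are our own, not the paper's implementation), the initialization step can be written as:

```python
import random

POP_SIZE  = 50          # population size chosen in the paper
DIMENSION = 30          # one QP per CTU row of a 3840x1920 frame
QP_MIN, QP_MAX = 0, 51  # legal HEVC QP range (52 values)

def init_population(rng=random):
    """Build the initial population: each individual X_i is a vector of
    DIMENSION random QPs, one per CTU row."""
    return [[rng.randint(QP_MIN, QP_MAX) for _ in range(DIMENSION)]
            for _ in range(POP_SIZE)]

population = init_population()
print(len(population), len(population[0]))   # -> 50 30
```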

3.2. Mutation

The mutation operation is the core of the differential evolution algorithm, and the mutation factor is an important parameter for controlling the diversity and convergence of the population. The value range of the mutation factor is usually [0, 1], with common values of 0.5 and 0.75. The mutation factor is related to the characteristics of the problem and the nature of the objective function; the optimal value can generally be determined through experiments. Adaptive strategies can also be used to adjust the mutation factor dynamically to suit different optimization problems and population states; for example, it can be adjusted based on the average fitness, standard deviation, and other indicators of the population to achieve a better-balanced search. When the mutation factor is too small, the disturbance range is small, the search range of the algorithm is too narrow, and the population's exploration ability is poor; the algorithm tends to converge prematurely and may fall into a local optimum. When the mutation factor is too large, the disturbance range increases and the exploration ability of the population is enhanced, but the search process becomes too random, which easily leads to instability and oscillation. Differently from the fixed mutation factor used by traditional algorithms, the proposed method uses a different mutation factor in each mutation process, disturbing the population individuals to varying degrees to maintain population diversity and thereby improve the search efficiency of the algorithm. The mutation operation is formulated in (1) and (2):
Vi,t = round(Xa,t + MF × (Xb,t − Xc,t))    (1)
MF = rand(0.1, 0.9)    (2)
where i represents the index of the current individual in the population, t represents the iteration number, and a, b, c are mutually distinct positive integers that also differ from i; for example, Xa,t represents the a-th individual in the t-th iteration. To avoid the floating-point numbers generated during the mutation operation, the mutation results are rounded with the rounding function. MF represents the mutation factor, which takes a random value between 0.1 and 0.9 in each mutation process to ensure that the population individuals receive varying degrees of disturbance.
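A minimal sketch of this mutation step (Eqs. (1) and (2)); the clipping of the result to the legal QP range [0, 51] is our added assumption, since Eq. (1) alone can leave that range:

```python
import random

QP_MIN, QP_MAX = 0, 51

def mutate(population, i, rng=random):
    """DE/rand/1 mutation: V_i = round(X_a + MF * (X_b - X_c)), with a
    fresh random mutation factor MF in [0.1, 0.9] on every call and
    a, b, c mutually distinct indices different from i."""
    a, b, c = rng.sample([k for k in range(len(population)) if k != i], 3)
    mf = rng.uniform(0.1, 0.9)
    # Round to an integer QP and clip to the legal HEVC range (assumption).
    return [min(QP_MAX, max(QP_MIN, round(xa + mf * (xb - xc))))
            for xa, xb, xc in zip(population[a], population[b], population[c])]
```

Drawing MF anew on each call is what realizes the varying degrees of disturbance described above.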

3.3. Crossover

The crossover operation randomly selects some elements of the mutated vector and exchanges them with elements of the original vector to generate a new trial vector. The crossover operation helps the algorithm jump out of local optima in the search space and thus explore the entire search space better. Choosing an appropriate crossover rate (CR) for a specific problem is crucial, since different values can have a large impact on the target problem. The crossover rate controls the probability of the crossover operation in the differential evolution algorithm, the degree to which each dimension of an individual participates in crossover, and the balance between global and local search. The value range of the crossover rate is usually [0, 1], with common values of 0.9 and 1.0, and the choice needs to be adjusted according to the characteristics of the problem and the nature of the objective function. The smaller the CR, the lower the diversity of the population, which weakens the search capability of the algorithm and causes premature convergence; the larger the CR, the faster the convergence, but if it is too large, convergence may instead slow down because the disturbance exceeds the differences within the population. Through verification and analysis of different values of CR, the CR value is set to 0.5. The crossover operation used in this section is shown in (3), where i represents the index of the current individual in the population, j represents the current dimension, and t represents the iteration number; n is a random integer in [1, N], where N is the dimension of each individual; and r is a random number in [0, 1]. The specific processes of the mutation and crossover operations are shown in Figure 3.
$$Y_{i,t}(j) = \begin{cases} V_{i,t}(j), & \text{if } r \le CR \text{ or } j = n \\ X_{i,t}(j), & \text{otherwise} \end{cases} \quad (3)$$
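A possible realization of the binomial crossover in (3), as a Python sketch (the index n is drawn per trial vector so that at least one mutant component always survives; all function names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def crossover(x, v, cr=0.5):
    """Binomial crossover of (3): take the mutant component when r <= CR
    or when the dimension index equals the random index n; otherwise keep
    the original component."""
    x, v = np.asarray(x), np.asarray(v)
    n = rng.integers(len(x))  # guarantees at least one mutant component
    r = rng.random(len(x))
    return np.where((r <= cr) | (np.arange(len(x)) == n), v, x)
```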

3.4. Selection

The selection operation is the last step of the differential evolution algorithm; it carries the best solutions found by the search into the next iteration. In the selection operation, the original solution vector and the solution vector produced by mutation and crossover are compared by fitness value, and the vector with the better fitness is selected into the next generation's population. In general, the selection operation can use the roulette wheel selection method (RWS) [48] or the tournament selection method [49]. In the algorithm of this paper, the actual distortion produced by encoding the 360-degree panoramic video with the quantization parameter vector generated by the algorithm is used as the fitness value. By comparing the fitness of the original quantization parameter vector with that of the vector after mutation and crossover, we retain the vector with smaller distortion and discard the one with larger distortion. The selection operation is shown in (4).
$$X_{i,t+1} = \begin{cases} Y_{i,t}, & \text{if } D(X_{i,t}) > D(Y_{i,t}) \text{ and } R(Y_{i,t}) \le R_T \\ X_{i,t}, & \text{otherwise} \end{cases} \quad (4)$$
where RT is the target bitrate, i is the index of the current individual in the population, t is the number of iterations, Xi,t is the original quantization parameter vector, and Yi,t is the vector after the mutation and crossover operations. If the actual bitrate of the 360-degree panoramic video encoded with Yi,t as the quantization parameter meets the target bitrate and its distortion is smaller than that of Xi,t, then Yi,t is retained; otherwise, Xi,t is kept.
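Sketched in Python, the greedy selection rule of (4) might look like this (the distortion and bitrate callables stand in for a real encoder run and are purely illustrative):

```python
def select(x_curr, y_trial, distortion, bitrate, r_target):
    """Selection rule of (4): accept the trial vector only if it reduces
    distortion AND its bitrate stays within the target; otherwise keep
    the current vector."""
    if distortion(y_trial) < distortion(x_curr) and bitrate(y_trial) <= r_target:
        return y_trial
    return x_curr
```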
After repeating mutation, crossover, and selection operations several times, if the current number of iterations is greater than the maximum number of iterations set in the initialization step, the algorithm terminates and the current optimal quantization parameter vector is obtained.
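Putting the three operators together, the whole search can be sketched as a single loop (a toy Python sketch: `evaluate` stands in for an actual encoder run returning distortion, the bitrate constraint of (4) is omitted for brevity, and all names and default values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def de_qp_search(evaluate, dim, pop_size=30, max_iter=40, cr=0.5,
                 qp_min=0, qp_max=51):
    """Full DE loop: repeat mutation, crossover, and selection until the
    maximum iteration count fixed at initialization is reached, then
    return the best QP vector found."""
    pop = rng.integers(qp_min, qp_max + 1, size=(pop_size, dim))
    fit = np.array([evaluate(x) for x in pop], dtype=float)
    for _ in range(max_iter):
        for i in range(pop_size):
            # mutation: DE/rand/1 with a freshly drawn mutation factor
            a, b, c = rng.choice([k for k in range(pop_size) if k != i],
                                 size=3, replace=False)
            mf = rng.uniform(0.1, 0.9)
            v = np.clip(np.rint(pop[a] + mf * (pop[b] - pop[c])),
                        qp_min, qp_max)
            # binomial crossover as in (3)
            n = rng.integers(dim)
            mask = (rng.random(dim) <= cr) | (np.arange(dim) == n)
            y = np.where(mask, v, pop[i]).astype(int)
            # greedy selection: keep the trial only if it lowers distortion
            fy = evaluate(y)
            if fy < fit[i]:
                pop[i], fit[i] = y, fy
    return pop[np.argmin(fit)]
```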

4. Results

4.1. Experimental Environment and Evaluation Indicators

In order to evaluate the quantization parameter decision scheme proposed in this paper, the algorithm is compared against the HEVC reference software HM-16.20. The resolution of all test sequences is 4K, and their names are AerialCity, DrivingInCity, DrivingInCountry, and PoleVault. The sampling format is 4:2:0, the bit depth is 8, and the all-intra encoding mode is used. The test sequence information and the relevant parameters of the DE algorithm are shown in Table 1 and Table 2, respectively. All test sequences can be downloaded from the JVET Common Test Conditions (CTC) website [50], and the differential evolution algorithm was implemented in MATLAB R2020a.
Figure 4 shows the basic characteristics of the four test sequences. As can be seen from the figure, most of the upper part of a 360-degree panoramic video is sky or clouds, and most of the lower part is ground. The differential evolution algorithm can therefore be expected to derive higher QPs in these background areas, so that they are coarsely quantized, while for regions closer to the equator it can be expected to derive smaller QPs for fine quantization.
BD-rate and BD-PSNR are important metrics for evaluating the performance of video-coding algorithms [51]; they indicate the changes in bitrate and peak signal-to-noise ratio (PSNR) of videos encoded with the new algorithm relative to the original algorithm. In video coding, a low bitrate indicates strong compression, and a high PSNR indicates good objective quality. Therefore, if the compressed bitrate is reduced while the PSNR increases, the algorithm performs well. Usually, however, video coding algorithms lose quality as they increase compression; that is, when the bitrate decreases, the PSNR decreases as well. In this case, BD-rate and BD-PSNR are needed as measurements.
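For reference, the Bjøntegaard measurement [51] fits each rate–PSNR curve with a cubic polynomial in the logarithm of the bitrate and averages the gap between the two fits over the overlapping bitrate range. A Python sketch of the BD-PSNR side of this computation (a standard formulation, not this paper's exact code; it assumes four rate points per curve, as in the common test conditions):

```python
import numpy as np

def bd_psnr(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta-PSNR: fit each R-D curve with a cubic polynomial
    in log10(bitrate), then average the vertical gap between the two fits
    over the overlapping bitrate range."""
    lr_a, lr_t = np.log10(rate_anchor), np.log10(rate_test)
    p_a = np.polyfit(lr_a, psnr_anchor, 3)
    p_t = np.polyfit(lr_t, psnr_test, 3)
    lo, hi = max(lr_a.min(), lr_t.min()), min(lr_a.max(), lr_t.max())
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    return (int_t - int_a) / (hi - lo)
```

A curve that is uniformly 0.5 dB above the anchor at the same rate points yields a BD-PSNR of 0.5 dB, which is a useful sanity check.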

4.2. Comparison of Experimental Results

Table 3 shows the encoding results of the proposed algorithm and the HM-16.20 benchmark algorithm, including the video sequence name, target bitrate, actual bitrate, and YUV-PSNR. Table 4 shows the average BD-rate and BD-PSNR, and Figure 5 shows the corresponding rate–distortion curves.
Table 4 shows that compared with the algorithm in the all-intra mode of HM-16.20, the encoder can save an average of 1.86% of the bitrate by encoding using the optimal quantization parameter values obtained with the proposed algorithm. Furthermore, the average quality improvement is about 0.07 dB.
In addition, mismatch denotes the difference between the actual and target coding bitrates. Comparing the actual bitrate Ractual with the target bitrate Rtarget via (5) for the detailed coding information in Table 3, it can be seen that the algorithm proposed in this paper does not exceed the predefined target under the constraints of the differential evolution algorithm, and that its mismatch is significantly lower than that of the HM-16.20 benchmark algorithm. Table 5 compares the mismatch of the two algorithms, and Figure 6 presents it intuitively as a histogram.
$$Mismatch = \frac{\left| R_{actual} - R_{target} \right|}{R_{actual}} \times 100\% \quad (5)$$
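As a quick sketch of (5) in Python (the numbers in the usage note are made up for illustration):

```python
def mismatch(r_actual, r_target):
    """Mismatch of (5): relative deviation of the actual bitrate from the
    target, expressed as a percentage of the actual bitrate."""
    return abs(r_actual - r_target) / r_actual * 100.0
```

For instance, an encode that lands at 1020 kbps against a 1000 kbps target has a mismatch of about 1.96%.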
During video transmission, problems arise if the actual bitrate does not match the network bandwidth. If the actual bitrate exceeds the network bandwidth, packet loss or stuttering may occur during transmission; if it falls well below the available bandwidth, the bandwidth is underused and the video quality is lower than it could be, appearing blurred and distorted. It is therefore very important to design an algorithm that keeps the actual bitrate close to the target bitrate. As can be seen from Figure 6, the proposed algorithm yields a lower mismatch than the HM-16.20 benchmark, which effectively improves coding performance. The rate–distortion curves of the four video sequences in Figure 5 show that the curve of the proposed algorithm lies slightly above that of the HM-16.20 benchmark, demonstrating that the proposed algorithm achieves excellent rate–distortion performance.

5. Discussion

In this work, we proposed a quantization parameter decision algorithm for HEVC based on differential evolution that achieves a bitrate saving. To the best of our knowledge, it is the first attempt to use a differential evolution algorithm to decide the quantization parameters of 360-degree panoramic video. However, because video data are highly diverse, with videos of various resolutions and complex scenes emerging rapidly, this work still needs further improvement. The video sequence to be encoded contains many CTUs, so the problem dimension in the differential evolution algorithm is correspondingly large, which leads to high time complexity. As video diversifies into 4K, 8K, and various immersive formats with six degrees of freedom, this complexity problem will become increasingly serious. It could be addressed by improving the differential evolution algorithm: as mentioned in Section 1, differential evolution has matured into a stable and efficient algorithm after years of research, so future work could adapt different variants of it to different video formats. Moreover, the algorithm involves many parameters; tuning them, such as the mutation factor or crossover rate, and trying different parameter combinations is another possible optimization. The quantization parameter is an important parameter in video coding: a lower quantization parameter improves the quality of the reconstructed video but correspondingly increases the bitrate. It is therefore very important to choose quantization parameters effectively so as to reduce the bitrate and save network bandwidth while maintaining good video quality. In previous studies, however, there have been few schemes for optimizing quantization parameters in 360-degree panoramic video.
This paper makes full use of the data characteristics of 360-degree panoramic video and allocates quantization parameter values to each CTU row of each frame, which saves bitrate and thus saves transmission resources.

6. Conclusions

This paper treats the quantization parameter decision problem in video coding as a combinatorial optimization problem. Firstly, the video is encoded under the default configuration to obtain the target bitrate; secondly, the projected 360-degree panoramic video is divided by latitude, which yields the problem dimension of the differential evolution algorithm; finally, after several iterations under the target bitrate, the differential evolution algorithm obtains an optimal combination of quantization parameters for the current video. Under the same encoding configuration, the proposed algorithm achieves a bitrate saving of 1.86% compared to the HEVC reference software HM-16.20, with a PSNR improvement of 0.07 dB. At the same time, the actual bitrate produced by encoding is nearly equal to the target bitrate, and the mismatch is reduced. Since the DE algorithm is conceptually simple and robust, it can be widely applied to optimization problems in video coding.

Author Contributions

Conceptualization, X.J. and Y.C.; methodology, X.J.; software, Y.C.; validation, X.J. and Y.C.; formal analysis, X.J.; investigation, Y.C.; resources, X.J.; data curation, Y.C.; writing—original draft preparation, Y.C.; writing—review and editing, X.J.; visualization, Y.C.; supervision, X.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, L.; Yan, N.; Li, Z.; Liu, S.; Li, H. λ-Domain Perceptual Rate Control for 360-Degree Video Compression. IEEE J. Sel. Top. Signal Process. 2019, 14, 130–145. [Google Scholar] [CrossRef]
  2. Rijkse, K. H.263: Video coding for low-bit-rate communication. IEEE Commun. Mag. 1996, 34, 42–45. [Google Scholar] [CrossRef]
  3. Gardos, T.R. H.263+: The new ITU-T recommendation for video coding at low bit rates. In Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP’98 (Cat. No. 98CH36181), Seattle, WA, USA, 15 May 1998; IEEE: Piscataway, NJ, USA, 1998; Volume 6, pp. 3793–3796. [Google Scholar]
  4. Richardson, I.E. H.264 and MPEG-4 Video Compression: Video Coding for Next-Generation Multimedia; John Wiley & Sons: Hoboken, NJ, USA, 2004. [Google Scholar]
  5. Sullivan, G.J.; Ohm, J.R.; Han, W.J.; Wiegand, T. Overview of the high efficiency video coding (HEVC) standard. IEEE Trans. Circuits Syst. Video Technol. 2012, 22, 1649–1668. [Google Scholar] [CrossRef]
  6. Yu, M.; Lakshman, H.; Girod, B. A framework to evaluate omnidirectional video coding schemes. In Proceedings of the 2015 IEEE International Symposium on Mixed and Augmented Reality, Fukuoka, Japan, 29 September–3 October 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 31–36. [Google Scholar]
  7. He, Y.; Xiu, X.; Ye, Y.; Zakharchenko, V.; Alshina, E.; Dsouza, A. JVET 360Lib Software Manual; Document ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11. Joint Video Explor. Team, ITU-T: Geneva, Switzerland, 2016. Available online: https://jvet.hhi.fraunhofer.de/svn/svn_360Lib/trunk/ (accessed on 17 July 2017).
  8. Duanmu, F.; He, Y.; Xiu, X.; Hanhart, P.; Ye, Y.; Wang, Y. Hybrid cubemap projection format for 360-degree video coding. In Proceedings of the 2018 Data Compression Conference, Snowbird, UT, USA, 27–30 March 2018; IEEE: Piscataway, NJ, USA, 2018; p. 404. [Google Scholar]
  9. Du, J.; Kim, B.C.; Zhao, D. Cost performance as a stochastic process: EAC projection by Markov Chain simulation. J. Constr. Eng. Manag. 2016, 142, 04016009. [Google Scholar] [CrossRef]
  10. Yu, M.; Lakshman, H.; Girod, B. Content adaptive representations of omnidirectional videos for cinematic virtual reality. In Proceedings of the 3rd International Workshop on Immersive Media Experiences, Brisbane, Australia, 26–30 October 2015; pp. 1–6. [Google Scholar]
  11. Lee, S.H.; Kim, S.T.; Yip, E.; Choi, B.D.; Song, J.; Ko, S.J. Omnidirectional video coding using latitude adaptive down-sampling and pixel rearrangement. Electron. Lett. 2017, 53, 655–657. [Google Scholar] [CrossRef]
  12. Racapé, F.; Galpin, F.; Rath, G.; Francois, E. AHG8: Adaptive QP for 360 video coding. Signal Process. 2018, 146, 66–78. [Google Scholar]
  13. Zhu, S.; Ma, K.K. A new diamond search algorithm for fast block-matching motion estimation. IEEE Trans. Image Process. 2000, 9, 287–290. [Google Scholar] [CrossRef]
  14. Tang, X.; Dai, S.; Cai, C. An analysis of TZSearch algorithm in JMVC. In Proceedings of the 2010 International Conference on Green Circuits and Systems, Shanghai, China, 21–23 June 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 516–520. [Google Scholar]
  15. Helle, P.; Oudin, S.; Bross, B.; Marpe, D.; Bici, M.O.; Ugur, K.; Jung, J.; Clare, G.; Wiegand, T. Block merging for quadtree-based partitioning in HEVC. IEEE Trans. Circuits Syst. Video Technol. 2012, 22, 1720–1731. [Google Scholar] [CrossRef]
  16. Oudin, S.; Helle, P.; Stegemann, J.; Bartnik, C.; Bross, B.; Marpe, D.; Schwarz, H.; Wiegand, T. Block merging for quadtree-based video coding. In Proceedings of the 2011 IEEE International Conference on Multimedia and Expo, Barcelona, Spain, 11–15 July 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 1–6. [Google Scholar]
  17. Purnachand, N.; Alves, L.N.; Navarro, A. Fast motion estimation algorithm for HEVC. In Proceedings of the 2012 IEEE Second International Conference on Consumer Electronics-Berlin (ICCE-Berlin), Berlin, Germany, 3–5 September 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 34–37. [Google Scholar]
  18. Laroche, G.; Jung, J.; Pesquet-Popescu, B. RD optimized coding for motion vector predictor selection. IEEE Trans. Circuits Syst. Video Technol. 2008, 18, 1247–1257. [Google Scholar] [CrossRef]
  19. Schwarz, H.; Nguyen, T.; Marpe, D.; Wiegand, T. Hybrid video coding with trellis-coded quantization. In Proceedings of the 2019 Data Compression Conference (DCC), Snowbird, UT, USA, 26–29 March 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 182–191. [Google Scholar]
  20. Pfaff, J.; Schwarz, H.; Marpe, D.; Bross, B.; De-Luxán-Hernández, S.; Helle, P.; Helmrich, C.R.; Hinz, T.; Lim, W.Q.; Ma, J.; et al. Video compression using generalized binary partitioning, trellis coded quantization, perceptually optimized encoding, and advanced prediction and transform coding. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 1281–1295. [Google Scholar] [CrossRef]
  21. Ki, S.; Kim, M.; Ko, H. Just-noticeable-quantization-distortion based preprocessing for perceptual video coding. In Proceedings of the 2017 IEEE Visual Communications and Image Processing (VCIP), St. Petersburg, FL, USA, 10–13 December 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–4. [Google Scholar]
  22. Bernatin, T.; Sundari, G. Video compression based on Hybrid transform and quantization with Huffman coding for video codec. In Proceedings of the 2014 International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT), Kanyakumari, India, 10–11 July 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 452–456. [Google Scholar]
  23. Liu, Y.; Sidaty, N.; Hamidouche, W.; Déforges, O.; Valenzise, G.; Zerman, E. An adaptive perceptual quantization method for HDR video coding. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1027–1031. [Google Scholar]
  24. Abrahamsson, A. Variance Adaptive Quantization and Adaptive Offset Selection in High Efficiency Video Coding. IEEE Trans. Circuits Syst. Video Technol. 2016, 15, 1037–1074. [Google Scholar]
  25. Chuang, T.-D.; Chen, C.-Y.; Chang, Y.-L.; Huang, Y.-W.; Lei, S. AhG Quantization: Sub-LCU Delta QP; JCTVC-E051; Joint Collaborative Team on Video Coding: Geneva, Switzerland, 2011. [Google Scholar]
  26. Mo, Y.; Xiong, J.; Chen, J.; Xu, F. Quantization matrix coding for high efficiency video coding. In Proceedings of the Advances on Digital Television and Wireless Multimedia Communications: 9th International Forum on Digital TV and Wireless Multimedia Communication, IFTC 2012, Shanghai, China, 9–10 November 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 244–249. [Google Scholar]
  27. Wang, Z.; Bovik, A.C. Mean squared error: Love it or leave it? A new look at signal fidelity measures. IEEE Signal Process. Mag. 2009, 26, 98–117. [Google Scholar] [CrossRef]
  28. Lee, H.; Yang, S.; Park, Y.; Jeon, B. Fast quantization method with simplified rate–distortion optimized quantization for an HEVC encoder. IEEE Trans. Circuits Syst. Video Technol. 2015, 26, 107–116. [Google Scholar] [CrossRef]
  29. Zhao, T.; Wang, Z.; Chen, C.W. Adaptive quantization parameter cascading in HEVC hierarchical coding. IEEE Trans. Image Process. 2016, 25, 2997–3009. [Google Scholar] [CrossRef] [PubMed]
  30. Huang, Y.W.; An, J.; Huang, H.; Li, X.; Hsiang, S.T.; Zhang, K.; Gao, H.; Ma, J.; Chubach, O. Block partitioning structure in the VVC standard. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 3818–3833. [Google Scholar]
  31. Dong, X.; Shen, L.; Yu, M.; Yang, H. Fast intra mode decision algorithm for versatile video coding. IEEE Trans. Multimed. 2021, 24, 400–414. [Google Scholar] [CrossRef]
  32. Gou, A.; Sun, H.; Katto, J.; Li, T.; Zeng, X.; Fan, Y. Fast intra mode decision for VVC based on histogram of oriented gradient. In Proceedings of the 2022 IEEE International Symposium on Circuits and Systems (ISCAS), Austin, TX, USA, 27 May–1 June 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 3028–3032. [Google Scholar]
  33. Park, S.H.; Kang, J.W. Fast affine motion estimation for versatile video coding (VVC) encoding. IEEE Access 2019, 7, 158075–158084. [Google Scholar] [CrossRef]
  34. Mahdavi, H.; Azgin, H.; Hamzaoglu, I. Approximate versatile video coding fractional interpolation filters and their hardware implementations. IEEE Trans. Consum. Electron. 2021, 67, 186–194. [Google Scholar] [CrossRef]
  35. Storn, R.; Price, K. Differential evolution-a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341. [Google Scholar] [CrossRef]
  36. Das, S.; Suganthan, P.N. Differential evolution: A survey of the state-of-the-art. IEEE Trans. Evol. Comput. 2010, 15, 4–31. [Google Scholar] [CrossRef]
  37. Ali, M.M.; Törn, A. Population set-based global optimization algorithms: Some modifications and numerical studies. Comput. Oper. Res. 2004, 31, 1703–1725. [Google Scholar] [CrossRef]
  38. Liu, J.; Lampinen, J. A fuzzy adaptive differential evolution algorithm. Soft Comput. 2005, 9, 448–462. [Google Scholar] [CrossRef]
  39. Mezura-Montes, E.; Velázquez-Reyes, J.; Coello Coello, C.A. A comparative study of differential evolution variants for global optimization. In Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, Seattle, WA, USA, 8–12 July 2006; pp. 485–492. [Google Scholar]
  40. Huffman, D.A. A method for the construction of minimum-redundancy codes. Proc. IRE 1952, 40, 1098–1101. [Google Scholar] [CrossRef]
  41. Potter, M.A.; Jong, K.A.D. A cooperative coevolutionary approach to function optimization. In Proceedings of the International Conference on Parallel Problem Solving from Nature, Jerusalem, Israel, 9–14 October 1994; Springer: Berlin/Heidelberg, Germany, 1994; pp. 249–257. [Google Scholar]
  42. Sofge, D.; De Jong, K.; Schultz, A. A blended population approach to cooperative coevolution for decomposition of complex problems. In Proceedings of the 2002 Congress on Evolutionary Computation, CEC’02 (Cat. No. 02TH8600), Honolulu, HI, USA, 12–17 May 2002; IEEE: Piscataway, NJ, USA, 2002; Volume 1, pp. 413–418. [Google Scholar]
  43. Liu, Y.; Yao, X.; Zhao, Q.; Higuchi, T. Scaling up fast evolutionary programming with cooperative coevolution. In Proceedings of the 2001 Congress on Evolutionary Computation (IEEE Cat. No. 01TH8546), Seoul, Republic of Korea, 27–30 May 2001; IEEE: Piscataway, NJ, USA, 2001; Volume 2, pp. 1101–1108. [Google Scholar]
  44. Gao, Z.; Pan, Z.; Gao, J. A new highly efficient differential evolution scheme and its application to waveform inversion. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1702–1706. [Google Scholar]
  45. Foo, C.Y.; Rajendran, P.; Aswini, N.; Raja, V.; Natarajan, E.; Ang, C.K. A Fast-Compressive Tracking Integrated with Differential Evolution to Optimize Object Tracking Performance. In Proceedings of the 2023 IEEE 19th International Conference on Automation Science and Engineering (CASE), Auckland, New Zealand, 26–30 August 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–6. [Google Scholar]
  46. Sun, T.; Wen, J.; Gong, J. Personalized Learning Resource Recommendation using Differential Evolution-Based Graph Neural Network: A Graph SAGE Approach. In Proceedings of the 2023 4th International Symposium on Computer Engineering and Intelligent Communications (ISCEIC), Nanjing, China, 18–20 August 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 636–639. [Google Scholar]
  47. Lampinen, J.; Zelinka, I. On stagnation of the differential evolution algorithm. In Proceedings of the MENDEL, Brno, Czech Republic, 7–10 March 2000; pp. 76–83. [Google Scholar]
  48. Lipowski, A.; Lipowska, D. Roulette-wheel selection via stochastic acceptance. Phys. A Stat. Mech. Its Appl. 2012, 391, 2193–2196. [Google Scholar] [CrossRef]
  49. Blickle, T. Tournament selection. Evol. Comput. 2000, 1, 181–186. [Google Scholar]
  50. Boyce, J.; Alshina, E.; Abbas, A.; Ye, Y. JVET common test conditions and evaluation procedures for 360-degree video. In Proceedings of the Joint Video Exploration Team of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, JVET-D1030, Torino, Italy, 13–21 July 2017. [Google Scholar]
  51. Bjøntegaard, G. Calculation of Average PSNR Differences between RD curves. In Proceedings of the ITU-T SG16/Q6, 13th VCEG Meeting, Austin, TX, USA, 2–4 April 2001; pp. 210–214. [Google Scholar]
Figure 1. Quantization parameter decision scheme based on differential evolution.
Figure 2. The relationship between population individuals and CTU rows.
Figure 3. Mutation operation and crossover operation.
Figure 4. Frames of four test sequences. (a) AerialCity; (b) DrivingInCity; (c) DrivingInCountry; (d) PoleVault.
Figure 5. R-D curves of different test sequences. (a) AerialCity; (b) DrivingInCity; (c) DrivingInCountry; (d) PoleVault.
Figure 6. Histogram of mismatch comparison results.
Table 1. Test sequence information.

| Test Sequence | Resolution | Frame Rate | Frame Number |
| --- | --- | --- | --- |
| AerialCity | 3840 × 1920 | 30 | 100 |
| DrivingInCity | 3840 × 1920 | 30 | 100 |
| DrivingInCountry | 3840 × 1920 | 30 | 100 |
| PoleVault | 3840 × 1920 | 30 | 100 |
Table 2. Related parameters of the proposed algorithm.

| Parameter | Value |
| --- | --- |
| Population size | 50 |
| Iteration number | 50 |
| Mutation factor | rand(0.1, 0.9) |
| Crossover rate | 0.5 |
| Terminal condition | max iteration number |
Table 3. Comparison result between the proposed algorithm and HM-16.20.

| Video Sequence | Target Bitrate (kbps) | YUV-PSNR (dB), Proposed | YUV-PSNR (dB), HM-16.20 | Actual Bitrate (kbps), Proposed | Actual Bitrate (kbps), HM-16.20 |
| --- | --- | --- | --- | --- | --- |
| AerialCity | 2000 | 29.94 | 29.89 | 1999.57 | 2005.86 |
| | 4000 | 31.79 | 31.86 | 4000.35 | 4005.72 |
| | 8000 | 34.23 | 34.16 | 7998.11 | 8009.88 |
| | 10,000 | 35.01 | 34.98 | 9994.78 | 9995.82 |
| DrivingInCity | 2000 | 31.56 | 31.50 | 1998.12 | 1999.98 |
| | 4000 | 34.13 | 33.89 | 3998.26 | 3991.92 |
| | 8000 | 36.68 | 36.59 | 7999.07 | 7987.32 |
| | 10,000 | 37.58 | 37.52 | 9986 | 9974.28 |
| DrivingInCountry | 2000 | 31.08 | 30.68 | 1999.65 | 1997.76 |
| | 4000 | 31.85 | 31.95 | 3978.79 | 3962.52 |
| | 8000 | 33.47 | 33.37 | 7990.43 | 7991.52 |
| | 10,000 | 33.98 | 33.94 | 9995.03 | 9992.88 |
| PoleVault_le | 2000 | 26.81 | 26.81 | 2655.14 | 2676.00 |
| | 4000 | 27.82 | 27.78 | 3987.43 | 4008.00 |
| | 8000 | 30.06 | 29.93 | 7990.38 | 8010.84 |
| | 10,000 | 30.68 | 30.66 | 9965.52 | 10,018.08 |
Table 4. BD-Rate and BD-PSNR results.

| Video Sequence | BD-Rate (%) | BD-PSNR (dB) |
| --- | --- | --- |
| AerialCity | 0.11 | −0.01 |
| DrivingInCity | −3.33 | 0.17 |
| DrivingInCountry | −1.14 | 0.03 |
| PoleVault_le | −3.07 | 0.09 |
| Average | −1.86 | 0.07 |
Table 5. Mismatch result between the proposed algorithm and HM-16.20.

| Video Sequence | Target Bitrate (kbps) | Actual Bitrate (kbps), HM-16.20 | Mismatch (%), HM-16.20 | Actual Bitrate (kbps), Proposed | Mismatch (%), Proposed |
| --- | --- | --- | --- | --- | --- |
| AerialCity | 2000 | 2005.86 | 0.0293 | 1999.57 | 0.0002 |
| | 4000 | 4005.72 | 0.0143 | 4000.35 | 0.0001 |
| | 8000 | 8009.88 | 0.0012 | 7998.11 | 0.0002 |
| | 10,000 | 9995.82 | 0.0004 | 9994.78 | 0.0005 |
| DrivingInCity | 2000 | 1999.98 | 0.0000 | 1998.12 | 0.0009 |
| | 4000 | 3991.92 | 0.0020 | 3998.26 | 0.0004 |
| | 8000 | 7987.32 | 0.0016 | 7999.07 | 0.0001 |
| | 10,000 | 9974.28 | 0.0026 | 9986 | 0.0014 |
| DrivingInCountry | 2000 | 1997.76 | 0.0011 | 1999.65 | 0.0002 |
| | 4000 | 3962.52 | 0.0094 | 3978.79 | 0.0028 |
| | 8000 | 7991.52 | 0.0011 | 7990.43 | 0.0012 |
| | 10,000 | 9992.88 | 0.0007 | 9995.03 | 0.0005 |
| PoleVault_le | 2000 | 2676.00 | 0.3380 | 1995.14 | 0.0024 |
| | 4000 | 4008.00 | 0.0020 | 3987.43 | 0.0031 |
| | 8000 | 8010.84 | 0.0014 | 7990.38 | 0.0012 |
| | 10,000 | 10,018.08 | 0.0018 | 9965.52 | 0.0034 |
| Average | | | 0.0254 | | 0.0016 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
