Image Virtual Viewpoint Generation Method under Hole Pixel Information Update
Abstract
1. Introduction
2. Analysis of the Difficulties of Virtual Viewpoint Generation and Preprocessing
2.1. Analysis and Treatment of Overlapping Problems
2.2. Analysis and Treatment of the Hole Problem
2.3. Analysis and Treatment of Cracks and Artifacts
3. Virtual-Viewpoint Generation
3.1. Virtual-Viewpoint Generation for Simple Textured Backgrounds
- During hole preprocessing, the holes present in the virtual-viewpoint image are detected and extracted, together with their edge locations and edge poles. As shown in Figure 3, the shaded part indicates the hole. Hole-edge points fall into two types: the first is the uppermost or lowermost position of the hole edge, i.e., the pole position of the hole edge; the second is an ordinary hole-edge position. Both types are marked in the figure.
- The virtual-viewpoint image is converted to grayscale, and the image is masked with the vertical-line detection template and the symmetric ±45° line detection templates, and the detection results are recorded. Formula (2) is the vertical-line mask operator, Formula (3) is the +45° line detection mask, and Formula (4) is the −45° line detection mask (a sketch of such detection masks follows this list). When detection is finished, the results are saved in a matrix. Horizontal-line detection is not carried out, because horizontal filling has already been performed in Section 2.2.
- Combine the detection results with the hole location to determine whether a background demarcation exists. For the first type of hole-edge point in Item 1, i.e., the edge pole of the hole, vertical-line and ±45° line detection is carried out at the neighboring non-hole positions; details are shown in Figure 4. In the figure, the marked point represents the acquired hole-edge pole, and its relationship to the hole is judged from their relative positions. The point is tested for straight lines at 90°, 45°, and 135°: Formula (2) determines the vertical line, and Formulas (3) and (4) determine the ±45° diagonal lines. The mean value along a candidate straight-line position is then calculated according to Formulas (5) and (6) to establish whether that position needs to be straightened. For ordinary hole-edge points, no vertical-line determination is required; only the 45° and 135° straight-line determinations are carried out, following the same process as above.
- To make the final virtual-viewpoint image more realistic, the filter of Formula (1) is applied for smoothing (the second sketch after this list illustrates the straight-line decision and this smoothing step).
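To make the line-detection step above concrete, the following is a minimal sketch that uses the classical 3×3 vertical and ±45° line-detection templates as stand-ins for Formulas (2)–(4), whose exact coefficients are not reproduced in this section. The function name, the NumPy/SciPy-based implementation, and the boundary mode are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

# Classical 3x3 line-detection templates (assumed stand-ins for Formulas (2)-(4)).
VERTICAL_MASK = np.array([[-1, 2, -1],
                          [-1, 2, -1],
                          [-1, 2, -1]], dtype=float)   # 90-degree lines
PLUS45_MASK = np.array([[-1, -1,  2],
                        [-1,  2, -1],
                        [ 2, -1, -1]], dtype=float)    # +45-degree lines
MINUS45_MASK = np.array([[ 2, -1, -1],
                         [-1,  2, -1],
                         [-1, -1,  2]], dtype=float)   # -45-degree (135) lines

def line_responses(gray, hole_mask):
    """Apply the three templates to the grayscale virtual-viewpoint image
    and zero out responses inside the hole, so that only non-hole pixels
    contribute to the straight-line decision. No horizontal template is
    used, mirroring the horizontal filling of Section 2.2."""
    out = {}
    for name, mask in (("vertical", VERTICAL_MASK),
                       ("+45", PLUS45_MASK),
                       ("-45", MINUS45_MASK)):
        r = np.abs(convolve(gray.astype(float), mask, mode="nearest"))
        r[hole_mask] = 0.0
        out[name] = r
    return out
```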
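The straight-line decision at hole-edge points and the final smoothing could then be sketched roughly as follows. The window size, the response threshold standing in for Formulas (5) and (6), and the Gaussian sigma are assumptions made only for illustration; a 2-D Gaussian is used for the final filter, in line with the Gaussian filtering mentioned in Section 4.

```python
from scipy.ndimage import gaussian_filter

def is_demarcation(responses, y, x, is_pole, win=3, threshold=80.0):
    """Decide whether the hole-edge point (y, x) lies on a background
    demarcation line. Edge poles are tested against the vertical and both
    diagonal templates; ordinary edge points only against +/-45 degrees.
    The local mean response stands in for Formulas (5)/(6); the threshold
    is an assumed value."""
    names = ("vertical", "+45", "-45") if is_pole else ("+45", "-45")
    means = {}
    for n in names:
        r = responses[n]
        patch = r[max(y - win, 0):y + win + 1, max(x - win, 0):x + win + 1]
        means[n] = patch.mean()
    best = max(means, key=means.get)
    return means[best] > threshold, best

def finalize(filled_img, sigma=1.0):
    """Final smoothing (the filter of Formula (1)); a 2-D Gaussian is used
    here with an assumed sigma. Colour channels are not blurred across."""
    if filled_img.ndim == 3:
        return gaussian_filter(filled_img, sigma=(sigma, sigma, 0))
    return gaussian_filter(filled_img, sigma=sigma)
```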
3.2. Virtual-Viewpoint Generation for Complex Textured Backgrounds
- Extraction of the virtual-viewpoint depth map. The basic principles of 3D image transformation are used to complete the conversion from the reference viewpoint.
- Projection to the auxiliary reference viewpoint position. At this point, the virtual-viewpoint image containing holes and a correspondingly complete depth map have been obtained, which satisfies the conditions for projection to the auxiliary reference viewpoint position [23,24,25]. The hole area is recorded, the depth information of the hole pixels is updated using the inverse 3D transformation, and the projection elements are fused to obtain the auxiliary reference viewpoint information, thus producing the auxiliary-viewpoint hole map. The left and right viewpoints on either side of the virtual viewpoint are selected as reference viewpoints for jointly generating the virtual view. Each pixel is projected onto its point in 3D space (e.g., with the top-right corner of the image as the origin), and the virtual-viewpoint pixel coordinates are mapped, with the help of the depth map, to the corresponding coordinates at the reference viewpoint position (a simplified warping sketch follows this list).
- Find the matching pixel points to complete the filling. Hole pixels are compared with the known pixels at the auxiliary reference viewpoint position to find the auxiliary reference viewpoint pixel corresponding to each hole point, and the pixel information is obtained according to the hole map. Multiple reference viewpoints are used to generate the target virtual-viewpoint image separately, and the generated virtual-viewpoint images are then fused: the images of the two reference viewpoints selected on either side of the target viewpoint in the previous step are reprojected to generate virtual images at the target viewpoint position, and the two generated virtual images are fused to complete the matching and filling, achieving virtual-viewpoint optimization for complex textured backgrounds (see the fusion sketch after this list).
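Below is a minimal sketch of the depth-based projection step, under the simplifying assumption of a rectified setup in which the virtual and reference cameras differ only by a horizontal baseline, so each pixel shifts by a disparity derived from its depth. The focal length `f`, the `baseline`, the disparity sign, and the z-buffer occlusion handling are assumptions; the full 3D transform with camera matrices described in the paper is not reproduced here.

```python
import numpy as np

def warp_to_virtual(ref_img, ref_depth, f, baseline):
    """Forward-warp one reference view to the virtual viewpoint.
    Returns the warped image and a boolean hole mask marking pixels
    that received no projection."""
    h, w = ref_depth.shape
    warped = np.zeros_like(ref_img)
    z_buf = np.full((h, w), np.inf)
    hole = np.ones((h, w), dtype=bool)
    # Disparity from depth under a pinhole model: closer pixels move farther.
    disp = np.round(f * baseline / np.maximum(ref_depth, 1e-6)).astype(int)
    for y in range(h):
        for x in range(w):
            xv = x - disp[y, x]            # sign depends on which side the camera is
            if 0 <= xv < w and ref_depth[y, x] < z_buf[y, xv]:
                warped[y, xv] = ref_img[y, x]   # keep the surface closest to the camera
                z_buf[y, xv] = ref_depth[y, x]
                hole[y, xv] = False
    return warped, hole
```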
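The fusion of the two warped reference views and the borrowing of pixel information for hole points might then look as follows; averaging doubly covered pixels is an illustrative choice, not necessarily the weighting used by the authors.

```python
def fuse_views(left, left_hole, right, right_hole):
    """Fuse virtual images warped from the left and right reference views.
    Pixels visible in both views are averaged; pixels missing in one view
    are filled from the other; pixels missing in both remain as holes to
    be handled by the texture-aware filling described above."""
    both = ~left_hole & ~right_hole
    only_right = left_hole & ~right_hole
    fused = left.astype(float).copy()
    fused[both] = (left[both].astype(float) + right[both].astype(float)) / 2.0
    fused[only_right] = right[only_right]
    remaining_holes = left_hole & right_hole
    return fused.astype(left.dtype), remaining_holes
```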
4. Experimental Results and Analysis
- Comparison of preprocessing capacity. In the process of generating the virtual viewpoint of an overlapped-pixel image, problem detection and preprocessing of the image are carried out first, as shown in Figure 5. On this basis, the image preprocessing ability was tested: the preprocessed images generated by the proposed method were compared with those generated by the methods in [6,7,8], and the comparison results are shown in Figures 6–10. These figures show that the method in [6] ignored the artifact problem when processing the image; the method in [7] had deviations in handling the overlap problem, and the pixel-point parallax values were not handled properly; and the images produced by the method in [8] were blurred, with inadequate preprocessing of the pole positions of the hole edges. The proposed method achieves comprehensive preprocessing for overlaps, holes, cracks, artifacts, and other problems that tend to arise during virtual-viewpoint generation, and, based on background texture recognition, it uses different filling methods for simple and complex textured backgrounds. The quality of the obtained virtual viewpoints was significantly improved over that of general virtual-viewpoint generation methods, which supports the feasibility of the proposed method.
- Target viewpoint fidelity. As virtual-viewpoint generation is one of the most critical steps in overlapping-image generation, the peak signal-to-noise ratio (PSNR) is used for a more objective comparison of the quality of the generated virtual viewpoints. This metric measures the fidelity of the target viewpoint: a larger value indicates less distortion, i.e., a closer approximation of the virtual viewpoint to the actual image (the metric sketch after this list shows how PSNR is computed). Figure 11 shows a schematic diagram of the objective quality assessment model. To objectively evaluate the quality of virtual-viewpoint generation, Frames 1 to 20 of a sequence were selected for testing, and the PSNR comparison results for the rendered target-viewpoint planar images are shown in Figure 12. The analysis of these results shows that the overall performance of the proposed virtual-viewpoint generation method for overlapping-pixel images based on background texture recognition was the best among the compared methods. The image-processing results generally exhibit high definition and high resolution, and the PSNR was higher and more reliable than that reported in the literature.
- Overall image-processing performance comparison. The quality achieved by a virtual-viewpoint generation method is ultimately expressed in the rendered image itself, so the overall image-processing performance was compared. Figure 13 shows a video image selected from a standard image-testing library, and Figures 14–17 show the images after processing with the literature methods and the proposed method. Image quality indicators are specific means of evaluating image deviation and readability and of measuring image quality; they are usually divided into relative and absolute evaluations. Relative evaluations are performed in comparison with reference images, whereas absolute evaluations are conducted on the basis of conventional evaluation scales and experience. In this comparison test, both absolute and relative evaluations of the proposed method were carried out. Relative evaluation: as the figures show, the image-processing effect of the proposed method was significantly better than that of the other literature methods. Absolute evaluation: the image processed by the proposed method had higher clarity, normal exposure and color contrast, and uniform illumination. These evaluations indicate that the image quality of the proposed method is high.
- Comparison of image-processing structural similarity performance. The structural similarity (SSIM) index was used to test the effect of virtual-viewpoint rendering; the closer this indicator is to 1, the higher the quality of the rendered virtual-viewpoint image (computed as in the metric sketch after this list). Frames 1 to 20 of a sequence were selected for testing, and the SSIM comparison results for the rendered target-viewpoint planar images are shown in Figure 18. Figure 18 shows that the proposed method had the highest SSIM values, all above 0.9, compared with the other research results. The image-processing results therefore show the advantages of high structural similarity and greater overall reliability.
- Image-processing time comparison. After 600 images were randomly selected from the image database and subjected to virtual-viewpoint processing of holes, overlaps, cracks, and artifacts, the image-processing time of the proposed method was compared with that of the methods in [6,7,8]. The test results are shown in Figure 19. The proposed method uses two-dimensional Gaussian filtering for the deep processing of the image to improve the restoration effect, so its processing time was shorter, only 0.6 s, and it could effectively process the 600 defective images. Its efficiency was significantly higher than that of the other three comparison methods, and it has strong applicability.
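For reference, the two objective metrics reported in this section (PSNR in Figure 12 and SSIM in Figure 18) can be computed as in the sketch below using scikit-image; the `channel_axis` argument assumes a recent scikit-image version, and the data range of 255 assumes 8-bit images.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_frame(rendered, reference):
    """PSNR (higher = less distortion) and SSIM (closer to 1 = more
    structurally similar) between a rendered virtual view and the real
    view captured at the same position."""
    psnr = peak_signal_noise_ratio(reference, rendered, data_range=255)
    ssim = structural_similarity(reference, rendered,
                                 channel_axis=-1, data_range=255)
    return psnr, ssim
```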
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Zhang, X.; Yu, Y.; Wang, X. Research and simulation of 3D image virtual viewpoint generation optimization. Comput. Simulation 2021, 39, 205–209.
- Dong, W. Research on algorithm of virtual try-on based on single picture. Sci. Technol. Innov. 2021, 32, 82–84.
- Shi, H.; Wang, L.; Wang, G. Blind quality prediction for view synthesis based on heterogeneous distortion perception. Sensors 2022, 22, 7081.
- Chen, J.Z.; Chen, Y.; Song, J.R. Panoramic video virtual view synthesis based on viewing angle. Chin. J. Liq. Cryst. Displays 2019, 34, 63–69.
- Cui, S.N.; Peng, Z.J.; Zou, W.H.; Chen, F.; Chen, H. Quality assessment of synthetic viewpoint stereo image with multi-feature fusion. Telecommun. Sci. 2019, 35, 104–112.
- Cai, L.; Li, X.; Tian, X. Virtual viewpoint image post-processing method using background information. J. Chin. Comput. Syst. 2022, 43, 1178–1184.
- Wang, C.; Peng, Z.; Zhang, L.; Chen, F.; Lu, Z. No-reference quality assessment for virtual view images based on skewness and structural feature. J. Comput. Appl. 2021, 41, 226–233.
- Guo, Q. Restoration of moving image contour pixels based on optical network. Laser J. 2021, 42, 78–82.
- Wang, H.; Chen, F.; Jiao, R.; Peng, Z.; Yu, M. Virtual view rendering algorithm based on spatial weighting. Comput. Eng. Appl. 2016, 52, 174–195.
- Lou, D.; Wang, X.; Fu, X.; Zhang, L. Virtual view point rendering based on range image segmentation. Comput. Eng. 2016, 42, 12–19.
- Zhang, Q.; Li, S.; Guo, W.; Chen, J.; Wang, B.; Wang, P.; Huang, J. High quality virtual view synthesis method based on geometrical model. Video Eng. 2016, 40, 22–25.
- Chen, Y.; Ding, W.; Xu, X.; Hu, S.; Yan, X.; Xu, N. Image super-resolution generative adversarial network based on light loss. J. Tianjin Univ. Sci. Technol. 2022, 37, 55–63.
- Zhu, H.; Li, H.; Li, W.; Li, F. Single image super-resolution reconstruction based on generative adversarial network. J. Jilin Univ. 2021, 59, 1491–1498.
- Zhu, C.; Li, S. Depth image based view synthesis: New insights and perspectives on hole generation and filling. IEEE Trans. Broadcast. 2015, 62, 82–92.
- Zambanini, S.; Loghin, A.M.; Pfeifer, N.; Soley, E.M.; Sablatnig, R. Detection of parking cars in stereo satellite images. Remote Sens. 2020, 12, 2170.
- Wu, L.; Yu, L.; Zhu, C. The camera arrangement algorithm based on the central attention in light field rendering. China Sci. 2017, 12, 180–184.
- Le, T.H.; Long, V.T.; Duong, D.T.; Jung, S.W. Reduced reference quality metric for synthesized virtual views in 3DTV. ETRI J. 2016, 38, 1114–1123.
- Ma, J.; Li, S.; Qin, H.; Hao, A. Unsupervised multi-class co-segmentation via joint-cut over L1-manifold hyper-graph of discriminative image regions. IEEE Trans. Image Process. 2017, 26, 1216–1230.
- Serafin, S.; Erkut, C.; Kojs, J.; Nilsson, N.C.; Rolf, N. Virtual reality musical instruments: State of the art, design principles, and future directions. Comput. Music J. 2016, 40, 22–40.
- Guo, Q.; Liang, X. High-quality virtual viewpoint rendering for 3D warping. Comput. Eng. Appl. 2019, 55, 84–90.
- Huang, H.; Huang, S. Fast hole filling for view synthesis in free viewpoint video. Electronics 2020, 9, 906.
- Zhang, J.; Hou, Y.; Zhang, Z.; Jin, D.; Zhang, P.; Lo, G. Deep region segmentation-based intra prediction for depth video coding. Multimed. Tools Appl. 2022, 81, 35953–35964.
- Liu, D.; Wang, G.; Wu, J.; Ai, L. Light field image compression method based on correlation of rendered views. Laser Technol. 2019, 43, 115–120.
- Zhou, G.; Song, H.; Wu, Y.; Ren, P. A non-feature fast 3D rigid-body image registration method. Acta Electron. Sin. 2018, 46, 7.
- Amanjot, S.; Jagroop, S. Content adaptive deblocking of artifacts for highly compressed images. Multimed. Tools Appl. 2022, 81, 18375–18396.