
An Intelligent Virtual Reference Frame Generation Method

A virtual reference frame technology applied in the fields of deep learning and 3D video coding. It addresses the lack of deep-learning-based multi-view video coding methods and the need to further improve coding efficiency, so as to facilitate storage and transmission, reduce coding bits, and improve quality.

Active Publication Date: 2021-12-07
TIANJIN UNIV
View PDF · 6 Cites · 0 Cited by

AI Technical Summary

Problems solved by technology

[0006] Compared with the successful application of deep learning in 2D video coding, multi-view video coding methods based on deep learning are still lacking. The existing inter-view predictive coding tools of 3D-HEVC do not make full use of the disparity relationship between adjacent viewpoints, so coding efficiency still needs further improvement.

Method used



Examples


Embodiment Construction

[0036] To make the purpose, technical solution, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below.

[0037] An embodiment of the present invention provides an intelligent virtual reference frame generation method. It builds a virtual reference frame generation network based on viewpoint synthesis and generates a high-quality virtual reference frame that serves as a high-quality reference for the current coded frame, thereby improving prediction accuracy and, in turn, coding efficiency. For the method flow, see Figure 1; the method includes the following steps:

[0038] 1. Feature extraction

[0039] The temporal virtual reference frame F_t and the inter-view reference frame F_iv serve as the inputs of the virtual reference frame generation network. Their features f_t and f_iv are extracted separately by the same parameter-shared branch; for the extraction process, see Figure 2. ...
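The feature extraction and fusion pipeline described above can be sketched in PyTorch. This is a minimal illustrative sketch, not the patent's actual architecture: the layer sizes, the `VirtualRefFrameNet` name, and the simple convolutional alignment and fusion stages are all assumptions. It only demonstrates the structure the text describes: one parameter-shared branch extracts features from both F_t and F_iv, an alignment stage combines them, and a fusion stage produces the output frame.

```python
# Hypothetical sketch of the described pipeline (module names and layer
# sizes are illustrative assumptions, not taken from the patent).
import torch
import torch.nn as nn

class VirtualRefFrameNet(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # One feature-extraction branch; applying the SAME module to both
        # inputs realizes the parameter sharing described in step 1.
        self.extract = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # Alignment stage: combines both feature maps (stand-in for the
        # viewpoint-synthesis-based alignment in the patent).
        self.align = nn.Conv2d(2 * channels, channels, 3, padding=1)
        # Fusion stage: merges complementary information into one frame.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, f_t, f_iv):
        feat_t = self.extract(f_t)    # features of temporal virtual ref frame
        feat_iv = self.extract(f_iv)  # features of inter-view frame (shared weights)
        shifted = self.align(torch.cat([feat_t, feat_iv], dim=1))
        return self.fuse(torch.cat([feat_t, shifted], dim=1))

net = VirtualRefFrameNet()
f_t = torch.randn(1, 1, 64, 64)   # temporal virtual reference frame (toy size)
f_iv = torch.randn(1, 1, 64, 64)  # inter-view reference frame (toy size)
out = net(f_t, f_iv)              # generated virtual reference frame
```

Because `self.extract` is a single module invoked twice, both feature maps are computed with identical weights, which is the essence of a parameter-shared (Siamese) branch.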



Abstract

The invention discloses an intelligent virtual reference frame generation method. The method comprises the following steps: according to the characteristics of the temporal virtual reference frame and the inter-view reference frame, a viewpoint-synthesis-based alignment step learns the disparity relationship, and according to this relationship the inter-view reference frame is aligned with the temporal virtual reference frame to generate a shifted inter-view reference frame; the complementary information of the temporal virtual reference frame and the shifted inter-view reference frame is fused to generate a high-quality virtual reference frame; a virtual reference frame generation network based on viewpoint synthesis is built, and two MSE loss functions constrain the training of the network. The invention provides a high-quality reference for the current coded frame, thereby improving prediction accuracy and further improving coding efficiency.
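The abstract's two-MSE-loss training constraint can be sketched as follows. The exact supervision targets of the two terms are not specified in the text, so this assumes one term supervises the shifted (aligned) inter-view frame and the other the final fused virtual frame, both against the same ground-truth frame; the function name `two_mse_loss` and the `align_weight` parameter are illustrative.

```python
# Hypothetical two-term MSE training loss; which quantities each term
# supervises is an assumption, since the patent text does not specify it.
import torch
import torch.nn.functional as F

def two_mse_loss(aligned_iv, virtual_frame, target, align_weight=1.0):
    """Sum of two MSE terms: one constrains the alignment stage's
    shifted inter-view frame, one constrains the final fused frame."""
    loss_align = F.mse_loss(aligned_iv, target)
    loss_final = F.mse_loss(virtual_frame, target)
    return loss_final + align_weight * loss_align

# Toy check: all-zero predictions vs. an all-ones target give an MSE of
# 1.0 for each term, so the combined loss is 2.0.
loss = two_mse_loss(torch.zeros(1, 1, 8, 8),
                    torch.zeros(1, 1, 8, 8),
                    torch.ones(1, 1, 8, 8))
print(loss.item())  # 2.0
```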

Description

Technical field

[0001] The invention relates to the fields of 3D video coding and deep learning, and in particular to an intelligent virtual reference frame generation method.

Background technique

[0002] Multi-view video is a typical representation of 3D video that can provide viewers with an immersive stereoscopic experience. To improve the compression efficiency of multi-view video, the international standards organization JCT-VC launched the multi-view plus depth video coding standard 3D-HEVC. For multi-view video coding, 3D-HEVC not only uses HEVC (High Efficiency Video Coding) technology to eliminate temporal and spatial redundancy within the same view, but also introduces a variety of inter-view coding technologies to eliminate inter-view redundancy, effectively improving overall coding efficiency.

[0003] The data volume of multi-view video is much larger than that of 2D video, which brings great challeng...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): H04N13/161; H04N13/194; H04N19/132; H04N19/182; H04N19/31; H04N19/50; G06K9/46; G06K9/62; G06N3/04; G06T7/00
CPC: H04N13/161; H04N13/194; H04N19/132; H04N19/182; H04N19/31; H04N19/50; G06T7/97; H04N2013/0081; G06T2207/20081; G06T2207/20084; G06T2207/20228; G06V10/454; G06N3/045; G06F18/253
Inventors: 雷建军 (Lei Jianjun), 刘祥瑞 (Liu Xiangrui), 张宗千 (Zhang Zongqian), 潘兆庆 (Pan Zhaoqing), 彭勃 (Peng Bo)
Owner: TIANJIN UNIV