
Quick depth video coding method

A depth video encoding method, applied to the field of coding multi-viewpoint video signals. It addresses problems such as poor temporal continuity, depth discontinuity, and high acquisition cost, and achieves the effects of preserving compression efficiency, reducing computational complexity, and maintaining depth-map accuracy.

Inactive Publication Date: 2011-03-16
NINGBO UNIV

AI Technical Summary

Problems solved by technology

[0004] There are two ways to obtain depth information. The first is to capture it with a depth camera. A depth camera can produce a fairly accurate depth map, but it has notable limitations: its acquisition range is only about 1 to 10 meters, so it cannot capture depth information for outdoor scenes, and its high cost is a major obstacle to wide adoption.
The second is to compute a depth map with a depth estimation algorithm, which is currently the main approach. Such algorithms estimate depth through disparity matching across multi-viewpoint color video, but the accuracy of the resulting depth maps is far from ideal; they suffer from poor temporal continuity and depth discontinuities, which in turn degrade the compression efficiency of depth-map coding.
Many scholars have proposed fast coding methods for multi-viewpoint color video, but these methods target color video and cannot be applied directly to coding multi-viewpoint depth video.
In addition, some researchers have studied depth video preprocessing and depth video compression. These methods can improve the accuracy and compression efficiency of depth maps, but because they ignore the inter-view redundancy of multi-view video sequences, they cannot effectively reduce the computational complexity of multi-view depth video coding.

Method used




Embodiment Construction

[0032] The present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments.

[0033] The fast depth video coding method proposed by the present invention divides all viewpoints in the multi-viewpoint depth video predictive coding structure into three categories: the main viewpoint, first-level auxiliary viewpoints, and second-level auxiliary viewpoints. The main viewpoint performs only temporal prediction and no inter-viewpoint prediction. A first-level auxiliary viewpoint is one in which key frames perform inter-viewpoint prediction while non-key frames perform only temporal prediction. A second-level auxiliary viewpoint is one in which key frames perform inter-viewpoint prediction and non-key frames perform both temporal prediction and inter-viewpoint prediction. The set of key frames of all viewpoints in the multi-viewpoint depth video predictive coding structur...
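The three-way viewpoint classification above can be sketched as a simple decision rule. This is a minimal illustration, not the patent's implementation; the names `ViewClass` and `uses_interview_prediction` are assumptions introduced here for clarity.

```python
from enum import Enum


class ViewClass(Enum):
    """Viewpoint categories described in paragraph [0033]."""
    MAIN = 0        # temporal prediction only
    AUX_LEVEL1 = 1  # inter-view prediction on key frames only
    AUX_LEVEL2 = 2  # inter-view prediction on all frames


def uses_interview_prediction(view_class: ViewClass, is_key_frame: bool) -> bool:
    """Return True if a frame of this viewpoint class performs
    inter-viewpoint prediction, per the classification in [0033]."""
    if view_class is ViewClass.MAIN:
        return False            # main viewpoint: never inter-view
    if view_class is ViewClass.AUX_LEVEL1:
        return is_key_frame     # level-1 auxiliary: key frames only
    return True                 # level-2 auxiliary: all frames
```

Frames that skip inter-viewpoint prediction under this rule need only a temporal motion search, which is where the method's complexity savings come from.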



Abstract

The invention discloses a fast depth video coding method. The method divides all viewpoints in a multi-viewpoint depth video predictive coding structure into main viewpoints, first-level auxiliary viewpoints, and second-level auxiliary viewpoints, so that different fast coding strategies can be applied to different frame types in different viewpoints. The inter-view correlation of the depth video signal is estimated from the information of already-coded frames to decide whether the current frame should undergo inter-viewpoint prediction; and the motion vectors of coded adjacent blocks, or the search mode of the best matching block of the current macroblock within the current frame, are used to decide whether bidirectional search is performed when coding the current macroblock. Time-consuming, low-payoff searches are thereby avoided during coding, and the computational complexity of multi-viewpoint depth video coding is effectively reduced while the accuracy and compression efficiency of the depth map are preserved.
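The macroblock-level decision described in the abstract — using the motion vectors of coded adjacent blocks to skip bidirectional search — can be illustrated with a toy heuristic. The threshold value and the averaging rule below are assumptions for illustration only; the patent's actual decision criterion is not reproduced here.

```python
def skip_bidirectional_search(neighbor_mvs, threshold=1.0):
    """Decide whether the costly bidirectional search can be skipped
    for the current macroblock.

    neighbor_mvs : list of (mv_x, mv_y) motion vectors from already-coded
                   adjacent blocks.
    threshold    : illustrative cutoff (an assumption, not from the patent).

    If the coded neighbors all show near-zero motion, the current
    macroblock is likely static, so bidirectional search is unlikely
    to pay off and can be skipped.
    """
    if not neighbor_mvs:
        return False  # no neighbor information: fall back to full search
    avg_magnitude = sum(abs(mx) + abs(my) for mx, my in neighbor_mvs) / len(neighbor_mvs)
    return avg_magnitude < threshold
```

In a real encoder this check would sit inside the mode-decision loop, before the motion-estimation call for the backward reference list.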

Description

Technical Field

[0001] The present invention relates to a coding technique for multi-viewpoint video signals, and in particular to a fast depth video coding method.

Background Technique

[0002] Multi-viewpoint video provides information about a scene or subject from different angles and depth levels, generating multi-angle, all-round stereoscopic vision. Multi-viewpoint video systems therefore have broad application prospects in fields such as video surveillance, audio-visual entertainment, immersive conferencing, and special-effect advertising. The combination of multi-view color video and multi-view depth video (MVD, Multiview Video plus Depth) is the core data representation for 3D scenes. The MVD format describes the geometric information of a scene well, especially for scenes with a wide viewing-angle range and rich depth levels, where the video information can be given completely, and it can also achieve flex...

Claims


Application Information

IPC(8): H04N13/00, H04N7/26, H04N7/32, H04N19/50
Inventors: 郁梅, 姒越后, 蒋刚毅, 陈恳, 彭宗举, 邵枫
Owner NINGBO UNIV