3D Video Depth Image Prediction Mode Selection Method Based on Viewpoint Correlation

A prediction-mode selection technology for depth images, applied in the field of video coding and decoding. It addresses the problems that existing methods insufficiently exploit inter-view correlation and that the complexity of depth-map mode selection algorithms still needs to be reduced, achieving the effect of lowering coding complexity.

Active Publication Date: 2020-06-05
NANJING UNIV OF SCI & TECH


Problems solved by technology

[0008] In summary, existing fast depth-map coding techniques do not fully exploit the correlation between viewpoints, and the complexity of existing depth-map mode selection algorithms still needs to be reduced.



Examples


Embodiment

[0041] This embodiment presents a fast 3D video depth-image prediction-mode selection method based on viewpoint correlation. Its flow is shown in Figure 3 and comprises the following steps:

[0042] Step 1: For an input dependent-view coding block, judge whether the 5 adjacent reference coding blocks in the base view have selected Merge mode as their predictive coding mode; if yes, go to step 2; if not, skip to step 5.

[0043] Step 2: Calculate the rate-distortion cost RD-cost of the current coding block under Skip mode, 2N×2N Merge mode, and DIS mode, using:

[0044] J(m) = D_VSO(m) + λ·B(m),  m ∈ C

where D_VSO(m) is the view-synthesis-optimized distortion of mode m, B(m) is the number of bits needed to code the block with mode m, λ is the Lagrange multiplier, and C is the candidate mode set.
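As a hedged illustration, the cost J(m) above can be sketched in Python. The distortion, bit-count, and λ values below are made up for demonstration; in a real encoder they would come from the view-synthesis-optimization and entropy-coding stages.

```python
def rd_cost(d_vso, lam, bits):
    """Rate-distortion cost J(m) = D_VSO(m) + lambda * B(m) for a candidate mode m.

    d_vso : synthesized-view distortion of mode m (View Synthesis Optimization)
    lam   : Lagrange multiplier coupling distortion and rate
    bits  : bit cost B(m) of coding the block with mode m
    """
    return d_vso + lam * bits

# Evaluate the candidate set C = {Skip, 2Nx2N Merge, DIS}
# with illustrative (made-up) distortion/bit figures:
candidates = {
    "Skip":        rd_cost(d_vso=120.0, lam=0.85, bits=4),
    "2Nx2N Merge": rd_cost(d_vso=110.0, lam=0.85, bits=30),
    "DIS":         rd_cost(d_vso=105.0, lam=0.85, bits=48),
}
best = min(candidates, key=candidates.get)  # mode with the smallest J(m)
```

With these example numbers, Skip costs 123.4 versus 135.5 and 145.8 for the other two modes, so it would be chosen.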

[0045] Step 3: Determine whether the RD cost of Skip mode is smaller than those of both 2N×2N Merge mode and DIS mode; if yes, go to step 4; if not, go to step 5.

[0046] Step 4: Set Skip mode as the prediction mode of the current coding block, terminate the prediction-mode selection process early, and skip to step 6.

[0047] Step 5: ...
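The decision flow of steps 1-5 can be sketched as follows. This is a minimal sketch, not the patented implementation: the source text does not say whether step 1 requires all 5 base-view reference blocks to have chosen Merge (assumed here), and the full mode search of step 5 is elided in the excerpt, so it is represented only by a `None` return.

```python
def select_depth_mode(base_view_neighbor_modes, rd_cost_of):
    """Fast prediction-mode selection for a dependent-view depth coding block.

    base_view_neighbor_modes : modes chosen by the 5 adjacent reference
                               coding blocks in the base view
    rd_cost_of               : callable mapping a mode name to its RD cost
                               J(m) = D_VSO(m) + lambda * B(m)
    Returns "Skip" on early termination, or None to signal falling back to
    the full mode-selection process (step 5, elided in the source text).
    """
    # Step 1: check the base-view neighbors (assumed: all 5 chose Merge).
    if all(m == "Merge" for m in base_view_neighbor_modes):
        # Step 2: evaluate the three candidate modes.
        j_skip = rd_cost_of("Skip")
        j_merge = rd_cost_of("2Nx2N Merge")
        j_dis = rd_cost_of("DIS")
        # Steps 3-4: early-terminate with Skip if it beats both rivals.
        if j_skip < j_merge and j_skip < j_dis:
            return "Skip"
    # Step 5 (not shown in the excerpt): full mode search.
    return None
```

For example, `select_depth_mode(["Merge"] * 5, {"Skip": 123.4, "2Nx2N Merge": 135.5, "DIS": 145.8}.get)` would return `"Skip"`, while any non-Merge neighbor or a losing Skip cost falls through to the full search.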



Abstract

The invention discloses a 3D video depth-image prediction-mode selection method based on viewpoint correlation. The method first uses viewpoint correlation to make a preliminary judgment of how likely the current coding block is to select Skip mode. Then, for coding blocks with high likelihood, it compares the rate-distortion cost RD-cost of Skip mode with those of 2N×2N Merge mode and DIS mode, determines whether the current block selects Skip mode, and ends the prediction-mode selection process early. The method reduces the complexity of depth-image predictive coding and the coding time required for prediction, while preserving the quality of the views synthesized at the decoding end.

Description

Technical field

[0001] The invention belongs to the technical field of video encoding and decoding, and in particular relates to a method for quickly selecting a prediction mode for 3D video depth maps based on viewpoint correlation.

Background technique

[0002] The emerging multi-view-plus-depth video format is the most important format for next-generation 3D video systems. It represents a 3D video scene using texture-map information from a small number of viewpoints together with the corresponding viewpoints' additional depth-map information; further views can then be synthesized with depth-image-based rendering. Since the depth map plays a key role in providing disparity information and guiding the synthesis process in current 3D video systems, research on depth-map coding has important practical significance.

[0003] Figure 1 lists all possible predictive coding modes for depth maps. It is worth noting that each prediction-mode selection process in 3...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): H04N19/597, H04N19/147, H04N19/103
CPC: H04N19/103, H04N19/147, H04N19/597
Inventors: 伏长虹 (Fu Changhong), 陈浩 (Chen Hao)
Owner: NANJING UNIV OF SCI & TECH