Method for reconstructing depth image and decoder for reconstructing depth image

A method and decoder for reconstructing a depth image, applied in the field of efficient representation of depth video, which can solve problems such as errors in depth information

Active Publication Date: 2012-05-02
MITSUBISHI ELECTRIC CORP

AI Technical Summary

Problems solved by technology

[0003] One problem with synthesizing virtual images is errors in depth information



Examples


Embodiment Construction

[0015] Virtual view synthesis

[0016] Our virtual image synthesis uses the camera parameters and the depth information within the frames to determine the texture values of pixels in an image synthesized from pixels in the images of adjacent viewpoints (adjacent images).

[0017] Typically, two adjacent images are used to synthesize a virtual image for any viewpoint between the two adjacent images.

[0018] Each pixel in the two adjacent images is projected to the corresponding pixel in the virtual image plane. We use the pinhole camera model and apply

[0019] [u, v, w]^T = R_c · A_c^{-1} · [x, y, 1]^T · d[c, x, y] + T_c,  (1)

[0020] to project the pixel at location (x, y) in the adjacent image c into the world coordinates [u, v, w],

[0021] where d[c, x, y] is the depth with respect to the optical center of the camera at image c, A_c, R_c, and T_c are the intrinsic matrix, rotation matrix, and translation vector of camera c, and the superscript T is the transpose operator.

[0022] The world coordinates are then mapped into the coordinates of the virtual image according to

[0023] x_v = [x', y', z']^T = A_v · R_v^{-1} · ([u, v, w]^T − T_v).  (2)
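The two-step warping of equations (1) and (2) can be sketched numerically as follows. This is a minimal illustration, not the patent's implementation: the function names are invented for clarity, and the parenthesization of equation (2) is taken as the inverse of equation (1), which matches the standard depth-image-based rendering formulation.

```python
import numpy as np

def pixel_to_world(x, y, depth, A_c, R_c, T_c):
    """Eq. (1): project pixel (x, y) of adjacent image c, with depth
    d[c, x, y], into world coordinates [u, v, w]^T."""
    p = np.array([x, y, 1.0])                       # homogeneous pixel
    return R_c @ np.linalg.inv(A_c) @ p * depth + T_c

def world_to_virtual(uvw, A_v, R_v, T_v):
    """Eq. (2): map world coordinates into the virtual camera; dividing
    by z' yields the pixel location in the virtual image plane."""
    xyz = A_v @ np.linalg.inv(R_v) @ (uvw - T_v)    # [x', y', z']^T
    return xyz[:2] / xyz[2]                         # (x, y) in virtual image
```

With identity intrinsics/rotation and zero translation for both cameras, a pixel maps back to itself, which is a quick sanity check on the pair of equations.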

[0024...



Abstract

A method reconstructs a depth image encoded as a base layer bitstream and a set of enhancement layer bitstreams. The base layer bitstream is decoded to produce the pixels of a reconstructed base layer image corresponding to the depth image. Each enhancement layer bitstream is decoded, in low-to-high order, to produce a reconstructed residual image. During the decoding of each enhancement layer bitstream, a context model is maintained using an edge map, and the enhancement layer bitstream is entropy decoded using the context model to determine a significance value for each pixel of the reconstructed residual image and a sign bit for each significant pixel; a pixel value of the reconstructed residual image is then reconstructed according to the significance value, the sign bit, and an uncertainty interval. Finally, the reconstructed residual images are added to the reconstructed base layer image to produce the reconstructed depth image.
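The final reconstruction step described above can be sketched as follows. The entropy decoding and edge-map context modeling are outside this sketch; `reconstruct_residual` uses one common convention (the midpoint of the uncertainty interval, signed by the sign bit), which is an assumption for illustration, not the patent's exact reconstruction rule, and both function names are invented.

```python
import numpy as np

def reconstruct_residual(significance, sign, interval):
    """Map per-pixel significance (0/1) and sign (+1/-1) to a residual
    value; significant pixels take the midpoint of the uncertainty
    interval [lo, hi], others take zero (assumed convention)."""
    lo, hi = interval
    mid = (lo + hi) / 2.0
    return np.where(significance == 1, sign * mid, 0.0)

def reconstruct_depth(base, residual_layers):
    """Add the reconstructed residual images, decoded in low-to-high
    order, to the reconstructed base layer image."""
    depth = base.astype(np.float64)
    for residual in residual_layers:
        depth = depth + residual
    return depth
```

Each enhancement layer narrows the uncertainty interval, so summing the residual layers in order progressively refines the base layer toward the original depth values.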

Description

technical field [0001] The present invention relates generally to the efficient representation of depth video, and more particularly to the accurate encoding of depth video for the purpose of synthesizing virtual images from new viewpoints. Background technique [0002] Three-dimensional (3D) video applications such as 3D-TV and free viewpoint TV (FTV) require depth information to generate virtual images. Virtual images can be used for free viewpoint navigation of scenes or for various other display processing purposes. [0003] One problem with synthesizing virtual images is errors in the depth information. This is a particular problem around edges and can cause disturbing artifacts in the synthesized images; see Merkle et al., "The Effect of Depth Compression on Multiview Rendering Quality", 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video, 28-30 May 2008, pp. 245-248. Contents of the invention [0004] Embodiments of the pr...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04N7/26
CPC: H04N7/26978, H04N19/00751, H04N19/00303, H04N7/26079, H04N7/26265, H04N19/00272, H04N7/26313, H04N7/26106, H04N19/00448, H04N7/26127, H04N19/00157, H04N19/0009, H04N19/00769, H04N19/00545, H04N7/26351, H04N19/00121, H04N19/00969, H04N7/26877, G06T2207/10028, H04N13/0022, H04N19/00296, H04N7/26255, H04N7/2625, H04N2213/003, H04N19/124, H04N19/13, H04N19/14, H04N19/174, H04N19/18, H04N19/182, H04N19/36, H04N19/46, H04N19/587, H04N19/597, H04N19/96, H04N13/128
Inventors: 芮世薰 (Sehoon Yea), 安东尼·韦特罗 (Anthony Vetro)
Owner MITSUBISHI ELECTRIC CORP