
Method for processing multi-viewpoint depth video

A depth video processing technology, applied to digital video signal modification, television, electrical components, etc.

Inactive Publication Date: 2012-11-07
NINGBO UNIV
Cites 4 · Cited by 21

AI Technical Summary

Problems solved by technology

In addition, new noise is introduced when the depth image passes through the H.264 encoder. Because of the special nature of depth images, these coding distortions are most visible in depth edge regions, and the edge regions are precisely the areas to which virtual viewpoint rendering is most sensitive. Depth restoration techniques are therefore needed to recover the true depth boundaries and improve the subjective and objective quality of rendered virtual viewpoint images.



Examples


Embodiment Construction

[0044] The present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments.

[0045] In the multi-viewpoint depth video processing method proposed by the present invention, the processing flow is as follows: first, the original multi-viewpoint depth video to be processed is pre-processed to reduce the coding bit rate; the pre-processed multi-viewpoint depth video is then encoded and compressed, and subsequently decoded and reconstructed; next, depth restoration and spatial smoothing are applied to the decoded and reconstructed multi-viewpoint depth video; finally, the processed multi-viewpoint depth video is used to render virtual viewpoint video images.
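The flow above can be illustrated with a minimal, self-contained Python sketch. The filters below are deliberately simplified stand-ins (a temporal median for the pre-processing, requantization for the codec, a spatial median for restoration and smoothing) and are not the patent's actual algorithms; only the ordering of the stages follows the description, and the final virtual-view rendering stage is omitted.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess(depth):
    """Stand-in pre-processing: 3-frame temporal median over a (T, H, W) depth video."""
    return median_filter(depth, size=(3, 1, 1))

def encode_decode(depth, qp=32):
    """Crude stand-in for H.264 encoding/decoding: uniform requantization noise."""
    step = max(1, qp // 4)
    return np.clip(np.round(depth / step) * step, 0, 255).astype(np.uint8)

def restore_and_smooth(depth):
    """Stand-in depth restoration plus spatial smoothing: per-frame 3x3 median."""
    return median_filter(depth, size=(1, 3, 3))

def process_view(depth):
    """Apply the stages in the order described above to one depth view."""
    return restore_and_smooth(encode_decode(preprocess(depth)))

# Example with a random (T=10, H=64, W=64) depth video of one viewpoint.
depth_video = np.random.randint(0, 256, (10, 64, 64), dtype=np.uint8)
processed = process_view(depth_video)
print(processed.shape, processed.dtype)
```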

[0046] In this embodiment, the specific procedure for pre-processing the original multi-viewpoint depth video to be processed is as follows:

[0047] A1. Use an existing edge detection operator to perform edge detection on each frame of the color image in the original multi-viewpoint color video corresponding to t...
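Step A1 is truncated in this excerpt. As a hedged illustration only, the sketch below uses OpenCV's Canny operator as the "existing edge detection operator"; the choice of operator and its thresholds are assumptions, not taken from the patent.

```python
import cv2
import numpy as np

def detect_edges(color_frame, low=50, high=150):
    """Edge map of one color frame; the operator and thresholds are illustrative assumptions."""
    gray = cv2.cvtColor(color_frame, cv2.COLOR_BGR2GRAY)
    return cv2.Canny(gray, low, high)

# Example: edge maps for each frame of one viewpoint's color video (random frames here).
frames = [np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8) for _ in range(5)]
edge_maps = [detect_edges(f) for f in frames]
print(edge_maps[0].shape)  # (480, 640), values 0 or 255
```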



Abstract

The invention discloses a method for processing multi-viewpoint depth video. The method comprises the following steps: first, a reorganization conversion operation is performed on the depth video; second, the converted depth video is smoothed in the time domain; third, a reorganization conversion operation is performed on the smoothed depth video to obtain the pre-processed depth video; fourth, the pre-processed multi-viewpoint depth video is encoded and compressed, then decoded and reconstructed; fifth, depth restoration and spatial-domain smoothing are applied to the decoded and reconstructed multi-viewpoint depth video; finally, virtual viewpoint video images are rendered from the processed multi-viewpoint depth video. The advantage of the method is that the pre-processing improves the temporal correlation of the depth video sequence and thereby effectively improves depth video coding efficiency: for coding quantization parameters (QP) of 22, 27, 32 and 37, 17.07% to 38.29% of the coding bit rate can be saved, and the rendering quality of the resulting virtual viewpoint video images can be improved by about 0.05 dB.
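The claim that time-domain smoothing raises the temporal correlation of the depth sequence (and thus helps the encoder) can be illustrated with a toy measurement. The 3-frame average below is a generic smoother, not the patent's reorganization/conversion scheme, and the frame-difference metric is only a rough proxy for temporal correlation.

```python
import numpy as np

def mean_frame_diff(seq):
    """Mean absolute difference between consecutive frames (lower -> easier temporal prediction)."""
    return float(np.mean(np.abs(np.diff(seq, axis=0))))

# Toy depth sequence: a static scene plus per-frame noise.
rng = np.random.default_rng(0)
base = rng.integers(0, 256, (64, 64)).astype(np.float32)
seq = np.clip(base + rng.normal(0.0, 5.0, (20, 64, 64)), 0, 255)

# Generic 3-frame temporal average as an illustrative smoother.
smoothed = seq.copy()
smoothed[1:-1] = (seq[:-2] + seq[1:-1] + seq[2:]) / 3.0

print(mean_frame_diff(seq), mean_frame_diff(smoothed))  # the smoothed sequence varies less frame to frame
```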

Description

Technical field

[0001] The invention relates to video signal processing technology, and in particular to a multi-viewpoint depth video processing method.

Background technique

[0002] A depth video sequence corresponds to a color video sequence and reflects the distance between objects and the camera. The actual distance between each pixel of the color image and the camera is quantized to the range 0~255, and assigning this quantized value to the corresponding pixel position yields the depth map (see the sketch following this excerpt). The larger the brightness value in the depth video sequence, the closer the pixel is to the camera; conversely, the smaller the value, the farther the pixel is from the camera. Depth video sequences are either captured by depth cameras or obtained by depth estimation. Since depth cameras are relatively expensive and limited in working range and accuracy, current depth video sequences are generally obtained by depth estimation.

[0003] The inaccurate calculation of depth will lead...
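The mapping of actual distance to the 0~255 range described above can be sketched as below. The excerpt does not give the exact quantization formula, so the inverse-depth mapping used here (a common choice in depth-map generation, where nearer objects receive larger values) is an assumption rather than the patent's own formula.

```python
import numpy as np

def quantize_depth(z, z_near, z_far):
    """Map camera-to-object distance z to an 8-bit depth value (closer -> brighter).
    The inverse-depth formula is an assumed, commonly used mapping, not the patent's."""
    v = 255.0 * (1.0 / z - 1.0 / z_far) / (1.0 / z_near - 1.0 / z_far)
    return np.clip(np.round(v), 0, 255).astype(np.uint8)

# Example: distances from 1 m to 10 m mapped to depth values.
distances = np.array([1.0, 2.5, 5.0, 10.0])
print(quantize_depth(distances, z_near=1.0, z_far=10.0))  # -> [255  85  28   0]
```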


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04N7/26, H04N7/30, H04N13/00, H04N19/154, H04N19/186, H04N19/597
Inventor: 蒋刚毅, 钱健, 郁梅, 朱林卫, 邵枫, 彭宗举, 白翠霞
Owner: NINGBO UNIV