
Detection and processing method for boundary inconsistency of depth and color videos

A color-video and depth-video technology in the field of 3D video coding, which addresses the problems of large prediction residuals and reduced coding efficiency, and achieves the effect of lowering coding cost and improving rate-distortion performance.

Active Publication Date: 2014-04-23
UNIV OF ELECTRONICS SCI & TECH OF CHINA

AI Technical Summary

Problems solved by technology

[0006] In color-video-based depth video coding, pixels at positions where the boundaries of the color video and the depth video are inconsistent produce large prediction residuals after prediction. After the DCT transform, these residuals generate a large number of high-frequency components in the frequency domain, which severely reduces coding efficiency.
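As a rough numerical illustration of this effect (not taken from the patent: the block size, residual values, and the use of SciPy's DCT are assumptions), a residual block containing a sharp step, as arises at a misaligned boundary, spreads its energy into high-frequency (AC) DCT coefficients, while a flat residual keeps almost all of its energy in the DC coefficient:

```python
# Illustrative only: why a step-shaped prediction residual (as at an
# inconsistent boundary) yields many high-frequency DCT coefficients.
# The 8x8 block size and residual values are assumed for the example.
import numpy as np
from scipy.fft import dctn  # 2-D type-II DCT

flat_residual = np.full((8, 8), 2.0)   # well-predicted block: small, flat residual
step_residual = np.zeros((8, 8))
step_residual[:, 4:] = 40.0            # misaligned boundary: sharp step in the residual

for name, block in [("flat", flat_residual), ("step", step_residual)]:
    coeffs = dctn(block, norm="ortho")
    dc_energy = coeffs[0, 0] ** 2
    ac_energy = float(np.sum(coeffs ** 2) - dc_energy)
    print(f"{name} residual: DC energy = {dc_energy:.1f}, AC (high-frequency) energy = {ac_energy:.1f}")
```

The flat block's energy sits entirely in the DC term, while about half of the step block's energy lands in AC coefficients, which is what makes such blocks expensive to code.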


Examples


Specific embodiments

[0031] As shown in Figure 3:

[0032] S1. Identify areas where boundary inconsistencies occur between the depth video and the color video:

[0033] Use a boundary detection operator, such as the Canny operator (a multi-stage edge detection operator), to detect the boundary of the depth map, and denote this boundary as E_d. Because the boundary inconsistency between the depth video and the color video mainly appears within a neighborhood of 2 to 3 pixels around the object boundary in the depth video, errors of more than 3 pixels are no longer considered boundary inconsistency. Therefore, the region of H pixels centered on the object boundary detected in the depth map is the region where the boundary between the depth video and the color video is inconsistent, where 2 ≤ H ≤ 3. Accordingly, E_d is dilated with a 3×3 rectangular structuring element to obtain a region 3 pixels wide centered on E_d, which is the region where the boundary between the depth video and the color video is inconsistent.
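A minimal sketch of this step using OpenCV (the use of cv2, the Canny thresholds, and the input file name are assumptions; the text itself only specifies a Canny-type operator and dilation with a 3×3 rectangle):

```python
# Sketch of step S1: detect the depth-map boundary E_d and dilate it into the
# candidate region where depth/color boundary inconsistency may occur.
# Canny thresholds and the input file name are assumed for illustration.
import cv2
import numpy as np

depth_map = cv2.imread("depth_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical depth frame

# Boundary of the depth map, denoted E_d above (non-zero pixels are edge pixels).
E_d = cv2.Canny(depth_map, threshold1=50, threshold2=150)

# Dilate E_d with a 3x3 rectangular structuring element, producing a band about
# 3 pixels wide centered on the detected object boundary (matching 2 <= H <= 3).
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
inconsistency_region = cv2.dilate(E_d, kernel, iterations=1)

# Boolean mask of candidate pixels for the subsequent per-pixel detection step.
candidate_mask = inconsistency_region > 0
print("candidate pixels:", int(np.count_nonzero(candidate_mask)))
```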



Abstract

The invention belongs to the field of 3D (three-dimensional) video coding and relates to a color-video-boundary-based method for detecting and processing boundary inconsistencies of the depth video in color-video-assisted depth video coding. The method comprises the concrete steps of determining the area where the boundaries of the depth video and the color video are inconsistent, detecting the inconsistent boundary pixels between the depth video and the color video, and processing the inconsistent boundary pixels before entropy coding. Exploiting the structural similarity between the color video and the depth video, the method detects and processes the boundary inconsistency during depth video prediction, so the rate-distortion performance of coding and the virtual viewpoint synthesis quality at the decoding end are improved, and the coding cost is effectively reduced.

Description

Technical field

[0001] The invention belongs to the field of 3D video coding, and relates to a method for detecting and processing depth video boundary inconsistencies based on color video boundaries in color-video-assisted depth video coding.

Background technique

[0002] Multi-view plus depth video can use depth-based view synthesis technology to generate virtual viewpoints at any position, similar to capturing images with a virtual camera at the generated position. This format consists of color video and depth video of multiple viewpoints, where the color video records the color information of the scene and the depth video records the depth information of the scene. Depth video can represent the geometric structure of the scene and the geometric relationship between viewpoints. Combined with the camera configuration parameters, the video obtained at the capture camera position can be mapped to the virtual viewpoint position to form a virtual viewpoint image.

[0003] The exist...
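As a rough illustration of the viewpoint mapping described in [0002] (a simplified sketch, not the patent's method: it assumes rectified horizontal cameras, an assumed focal length and baseline, 8-bit depth maps quantised between z_near and z_far, and omits occlusion and hole filling):

```python
# Simplified depth-image-based rendering sketch: warp a color frame to a
# virtual viewpoint using its depth map. The rectified 1-D camera setup,
# focal length, baseline, and depth quantisation range are all assumptions.
import numpy as np

def warp_to_virtual_view(color, depth8, focal_px=200.0, baseline_m=0.02,
                         z_near=0.3, z_far=10.0):
    h, w = depth8.shape
    # Recover metric depth Z from 8-bit values (inverse-depth quantisation,
    # with 255 = nearest, 0 = farthest).
    inv_z = depth8 / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far
    z = 1.0 / inv_z
    # Horizontal disparity in pixels: d = f * b / Z (nearer pixels shift more).
    disparity = np.round(focal_px * baseline_m / z).astype(np.int64)

    virtual = np.zeros_like(color)
    cols = np.arange(w)
    for y in range(h):
        target = cols - disparity[y]            # shift each pixel by its disparity
        valid = (target >= 0) & (target < w)
        virtual[y, target[valid]] = color[y, cols[valid]]  # no explicit occlusion handling
    return virtual                              # unfilled positions are disocclusion holes

# Tiny synthetic example: a 4x16 frame whose right half is a nearer object.
color = np.tile(np.arange(16, dtype=np.uint8) * 15, (4, 1))
depth8 = np.full((4, 16), 40, dtype=np.uint8)
depth8[:, 8:] = 200                             # nearer object -> larger disparity
print(warp_to_virtual_view(color, depth8))
```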

Claims


Application Information

IPC(8): H04N17/00; H04N19/597; H04N19/137; H04N13/00
Inventor 朱策, 雷建军, 李帅, 高艳博, 王勇, 李贞贞
Owner UNIV OF ELECTRONICS SCI & TECH OF CHINA