
Depth image preprocessing method

A depth image preprocessing technology, applied in the field of image processing, which can solve the problems of poor temporal continuity and depth discontinuity in depth images.

Publication status: Inactive; Publication date: 2013-05-15
NANTONG GUOMIQI MASCH EQUIP CO LTD

AI Technical Summary

Problems solved by technology

[0004] Compared with a color image, a depth image has simple texture and contains more flat areas. However, due to the limitations of depth image acquisition algorithms, depth images generally suffer from problems such as poor temporal continuity and depth discontinuity.




Detailed Description of the Embodiments

[0075] The present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments.

[0076] The depth image preprocessing method proposed by the present invention mainly comprises the following steps:

[0077] ① Acquire K color images of the K reference viewpoints at time t, whose color space is YUV, together with their corresponding K depth images; record the color image and the depth image of the kth reference viewpoint at time t accordingly. Here 1 ≤ k ≤ K, the initial value of k is 1, and i = 1, 2, 3 respectively denote the three components of the YUV color space: the first component is the luminance component, denoted Y; the second component is the first chrominance component, denoted U; and the third component is the second chrominance component, denoted V. (x, y) represents the coordinate position of a pixel in the color ...
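As a minimal sketch of the per-frame input described in step ①, the following Python snippet allocates K reference viewpoints at time t, each carrying a YUV color image and a depth image. The names (ViewData, make_views), array shapes, and bit depths are illustrative assumptions, not taken from the patent text.

```python
# Sketch of the step-① input: K reference views at time t, each with a
# YUV color image and a depth image. Shapes/dtypes are assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class ViewData:
    color_yuv: np.ndarray  # H x W x 3; components i = 1, 2, 3 -> Y, U, V
    depth: np.ndarray      # H x W; per-pixel depth values

def make_views(K: int, height: int, width: int) -> list[ViewData]:
    """Allocate placeholder buffers for the K reference viewpoints at time t."""
    views = []
    for k in range(1, K + 1):          # 1 <= k <= K, initial value of k is 1
        color = np.zeros((height, width, 3), dtype=np.uint8)
        depth = np.zeros((height, width), dtype=np.uint8)
        views.append(ViewData(color_yuv=color, depth=depth))
    return views

# Example: three reference viewpoints of a 1024x768 sequence
views_t = make_views(K=3, height=768, width=1024)
y_component = views_t[0].color_yuv[:, :, 0]   # luminance component Y (i = 1)
```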



Abstract

The invention discloses a depth image preprocessing method. A maximum tolerable distortion distribution image of the depth image is obtained from the influence of depth distortion on virtual viewpoint image rendering, in combination with the visual characteristics of the human eye. The depth image is then divided into a reliable content region and an unreliable content region according to this distribution image, and two groups of bilateral filters with different filtering strengths are designed to filter the depth values of the pixels in the reliable and unreliable content regions respectively. The method has the advantage that, because the filtering strength is selected according to the maximum tolerable distortion distribution image of the depth image, the compression efficiency of the depth image is greatly improved while the rendering performance of the virtual viewpoint image is preserved.
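The following is a rough sketch, under stated assumptions, of the processing flow the abstract describes: a per-pixel maximum-tolerable-distortion map is thresholded to split the depth image into a reliable content region and an unreliable content region, and each region is smoothed with a bilateral filter of a different strength. How the tolerance map is actually computed from human visual characteristics, which region receives the stronger filter, and all filter parameters are not specified here; the values below are placeholders.

```python
# Hypothetical region-dependent bilateral filtering of a depth image.
# tol_threshold, filter parameters, and the tolerance-to-region mapping
# are assumptions for illustration only.
import cv2
import numpy as np

def preprocess_depth(depth: np.ndarray, tolerance: np.ndarray,
                     tol_threshold: float = 5.0) -> np.ndarray:
    """Filter a depth image with region-dependent bilateral filter strength.

    depth     : H x W uint8 depth image
    tolerance : H x W map of maximum tolerable depth distortion per pixel
    """
    # Assumed rule: pixels with a small tolerable distortion are treated as
    # reliable content and filtered gently; the rest are filtered strongly.
    reliable = tolerance <= tol_threshold

    weak = cv2.bilateralFilter(depth, d=5, sigmaColor=10, sigmaSpace=3)
    strong = cv2.bilateralFilter(depth, d=9, sigmaColor=50, sigmaSpace=15)

    out = np.where(reliable, weak, strong)
    return out.astype(depth.dtype)
```

In the patent, the two filter strengths and the partition rule are derived from the maximum tolerable distortion distribution image; the constants above only illustrate the structure of the pipeline.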

Description

technical field

[0001] The present invention relates to an image processing method, and in particular to a depth image preprocessing method.

Background technique

[0002] Three-dimensional video (Three-Dimensional Video, 3DV) is an advanced visual mode that gives viewers a sense of depth and immersion when watching images on a screen and can meet people's needs for viewing three-dimensional (3D) scenes from different angles. A typical 3D video system, as shown in figure 1, mainly includes modules such as video capture, video encoding, transmission and decoding, virtual viewpoint rendering, and interactive display.

[0003] Multi-view video plus depth (MVD) is the representation of 3D scene information currently recommended by ISO / MPEG. MVD data adds the depth information of the corresponding viewpoints to the multi-viewpoint color images. There are currently two basic ways to obtain depth information: 1) through depth cameras; 2) through algorithms...
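To make the link between depth accuracy and virtual viewpoint rendering concrete, the small example below (not from the patent) shows how a depth level maps to a disparity in a parallel-camera MVD setup: a depth distortion of a few levels changes the disparity and therefore shifts where the pixel lands in the synthesized view. The quantization convention and all camera parameters are assumptions chosen for illustration.

```python
# Illustrative depth-to-disparity mapping for parallel cameras, assuming the
# common 8-bit inverse-depth quantization; f, baseline, z_near, z_far are
# user-chosen camera parameters, not values from the patent.
def disparity_from_depth(depth_level: int, f: float, baseline: float,
                         z_near: float, z_far: float) -> float:
    """Map an 8-bit depth level to a horizontal disparity in pixels."""
    # Inverse depth (1/Z) reconstructed from the 8-bit depth level
    inv_z = depth_level / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far
    return f * baseline * inv_z

# A depth error of a few levels changes the disparity, and hence the
# rendering position of the pixel in the virtual view:
d_ok = disparity_from_depth(128, f=1000.0, baseline=0.05, z_near=1.0, z_far=100.0)
d_err = disparity_from_depth(133, f=1000.0, baseline=0.05, z_near=1.0, z_far=100.0)
shift_error = abs(d_err - d_ok)   # displacement caused by the depth distortion
```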


Application Information

Patent Type & Authority: Patent (China)
IPC (8): H04N7/26, H04N7/32, H04N19/117, H04N19/154, H04N19/186, H04N19/597
Inventor: 邵枫 (Shao Feng), 蒋刚毅 (Jiang Gangyi), 郁梅 (Yu Mei)
Owner: NANTONG GUOMIQI MASCH EQUIP CO LTD