
K-means clustering based depth image encoding method

A depth-map coding technology in the field of clustering-based depth-map coding, addressing problems such as high bandwidth pressure during transmission, sharp gray-level differences at object edges, and insufficient analysis of depth-map characteristics.

Active Publication Date: 2015-09-02
TAIYUAN UNIVERSITY OF SCIENCE AND TECHNOLOGY

Problems solved by technology

However, these current methods still analyze insufficiently the characteristics that distinguish the depth map from the texture image. For example, object surfaces in a depth map carry no texture information, so the regions within a single object have very similar depth values; sharp edges, with obvious gray-level differences, appear only at object boundaries. Moreover, in the MVD structure each color image has a corresponding depth image, which plays a vital role in synthesizing and displaying the image at the virtual viewpoint. Because the data volume of the depth images is huge, the bandwidth pressure during transmission is also high. The depth-image coding schemes currently in use perform poorly and cannot well guarantee the edge integrity of the depth image.



Embodiment Construction

[0028] As shown in Figure 2, this example uses three 1024×768 images, Kendo, BookArrival and Ballet, as test images. Kendo and BookArrival are taken from viewpoint 1 and viewpoint 8 of their multi-view depth map sequences, and Ballet is taken from viewpoint 4 of its multi-view depth map sequence. 3D warping and median filtering are used when synthesizing viewpoints with DIBR technology.
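The median filtering mentioned here is commonly applied after 3D warping in DIBR to smooth small holes and speckle in the synthesized view. A minimal pure-Python sketch of a 3×3 median filter is shown below; the function name `median_filter3` is illustrative and not taken from the patent.

```python
def median_filter3(img):
    """3x3 median filter (illustrative sketch): replaces each interior
    pixel with the median of its 3x3 neighborhood, as is typically done
    to clean up small artifacts after DIBR 3-D warping."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # border pixels are left unchanged
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            window = sorted(img[rr][cc]
                            for rr in (r - 1, r, r + 1)
                            for cc in (c - 1, c, c + 1))
            out[r][c] = window[4]          # median of 9 values
    return out
```

Applied to a region containing an isolated outlier (e.g. a warping hole), the filter replaces it with the surrounding depth value.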

[0029] The specific steps are as follows:

[0030] Step 1: Read in the depth map Kendo and cluster it into 5 classes, with the clustering level set to 5 and the number of cluster centers C = 5. The clustered depth map is then divided into 5 new images. The specific method is: create a zero matrix A1 with the same dimensions as the original image, and assign the depth values of the original image at the pixel positions belonging to the first cluster to the zero matrix A1, forming the first new depth map D1; proceed likewise until all classes...
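Step 1 above can be sketched as follows: a 1-D K-means over the depth values, then one masked layer per cluster (the zero matrices A1…Ak of the embodiment). This is a minimal pure-Python illustration, not the patented implementation; the function names `kmeans_1d` and `split_into_layers` are assumptions.

```python
def kmeans_1d(values, k, iters=20):
    """Cluster scalar depth values into k groups (Lloyd's algorithm).
    Returns the cluster centers and a label per value."""
    lo, hi = min(values), max(values)
    # spread the initial centers evenly over the value range
    centers = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # assignment step: nearest center
        labels = [min(range(k), key=lambda j: abs(v - centers[j]))
                  for v in values]
        # update step: mean of the assigned values
        for j in range(k):
            member = [v for v, l in zip(values, labels) if l == j]
            if member:
                centers[j] = sum(member) / len(member)
    return centers, labels

def split_into_layers(depth, k=5):
    """Split a depth map (list of rows) into k layers: layer i keeps the
    original depth where the pixel belongs to cluster i, zero elsewhere."""
    h, w = len(depth), len(depth[0])
    flat = [depth[r][c] for r in range(h) for c in range(w)]
    _, labels = kmeans_1d(flat, k)
    layers = [[[0] * w for _ in range(h)] for _ in range(k)]
    for idx, lab in enumerate(labels):
        r, c = divmod(idx, w)
        layers[lab][r][c] = depth[r][c]
    return layers
```

Because the clusters partition the pixels, summing (overlaying) the k layers reproduces the original depth map, which is exactly what the decoder relies on after reconstructing each layer.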


Abstract

The invention relates to a K-means clustering based depth image encoding method, and belongs to the field of depth image encoding and decoding in 3D video. The method is characterized by comprising the steps of: segmenting a depth image into n classes by K-means clustering; extracting the boundary of each new depth image formed from one class of the segmented depth image, entropy-encoding the boundaries, and transmitting them to the decoding terminal; down-sampling the non-boundary region pixel values and entropy-encoding the down-sampled values; transmitting the encoded bit stream to the decoding terminal; recovering each class of data at the decoding terminal with a partial differential equation (PDE) method to acquire n reconstructed depth images; overlaying the n recovered depth images to form a complete depth image; and synthesizing the required virtual viewpoint image using depth-image-based view synthesis technology. The advantage is that the quality of a virtual viewpoint synthesized under the guidance of a depth image compressed by this scheme is higher than that achieved with the JPEG and JPEG2000 compression standards.
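The decoder-side PDE recovery described in the abstract can be illustrated with the simplest such scheme: harmonic inpainting, where unknown pixels are iteratively set to the average of their neighbors while the transmitted pixels (boundary plus down-sampled samples) stay fixed. This is a minimal stand-in sketch, assuming a Jacobi iteration of the discrete Laplace equation; the patent's actual PDE method may differ.

```python
def pde_reconstruct(known, h, w, iters=500):
    """Fill an h x w image from a sparse set of known pixels.

    known: dict {(row, col): value} of transmitted pixels
           (layer boundary + down-sampled interior samples).
    Unknown pixels are recovered by iterating the discrete Laplace
    equation: each is replaced by the mean of its 4-neighbors."""
    img = [[0.0] * w for _ in range(h)]
    for (r, c), v in known.items():
        img[r][c] = float(v)
    for _ in range(iters):
        nxt = [row[:] for row in img]
        for r in range(h):
            for c in range(w):
                if (r, c) in known:
                    continue                       # transmitted pixels are fixed
                nbrs = [img[rr][cc]
                        for rr, cc in ((r - 1, c), (r + 1, c),
                                       (r, c - 1), (r, c + 1))
                        if 0 <= rr < h and 0 <= cc < w]
                nxt[r][c] = sum(nbrs) / len(nbrs)
        img = nxt
    return img
```

Because depth maps are piecewise-smooth, this kind of diffusion from the transmitted boundary values recovers the flat interior regions well; the final depth map is then the overlay of the n reconstructed layers.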

Description

Technical Field

[0001] The invention belongs to the field of coding and decoding of depth maps in 3D video, and in particular relates to a method for coding depth maps based on clustering.

Background Technique

[0002] Currently, 3D video has become a research hotspot in the field of video coding and communication, because in 3D video users can freely choose viewing angles and perceive stereoscopic depth. "Multi-viewpoint video + depth" (MVD) is a commonly used three-dimensional video representation at present. MVD adds a depth sequence to each video signal on the basis of the original multi-viewpoint video. The depth map is only used for viewpoint synthesis, and is not displayed directly to the user. A depth map combined with its corresponding texture map can be used to synthesize images of virtual viewpoints at any position. Distortion of the depth map will lead to distortion of the chromaticity or brightness of the synt...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04N13/00; H04N19/597
Inventors: 王安红, 刘瑞珍
Owner: TAIYUAN UNIVERSITY OF SCIENCE AND TECHNOLOGY