Depth map intra prediction method based on linear model

A technology of intra-frame prediction based on a linear model, applied in the field of communication. It addresses problems such as the low coding efficiency of existing methods that fail to consider the inherent characteristics of the depth map, and achieves the effects of improved coding efficiency, accurate prediction values, and a reduced coding rate.

Publication Date: 2011-10-05 (Inactive)
SHANDONG UNIV
Cites: 2 · Cited by: 46

AI Technical Summary

Problems solved by technology

[0024] Aiming at the low coding efficiency that results when a depth map is coded with the existing H.264/AVC intra prediction method, which does not take the inherent characteristics of the depth map into account, the present invention proposes a depth map intra-frame prediction method based on a linear model that exploits the spatial characteristics of the depth map and achieves high coding efficiency.



Examples


Detailed Description of the Embodiments

[0049] The depth map intra-frame prediction method based on a linear model of the present invention first calculates the linear model parameters from the gray values and coordinates of the pixels adjacent to the current coding block, and then calculates the predicted gray value of each pixel of the current coding block from those model parameters and the pixel coordinates of the block. The method requires changes to both the encoder and the decoder, so the implementation covers both the encoding end and the decoding end.
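As an illustration of the second stage (forming the prediction once the parameters are known), the following minimal Python sketch evaluates the linear model formalized in Step 1 below, L = a·x + b·y + c, at every pixel position of the current block. The function name and argument layout are hypothetical and are not taken from the patent.

```python
import numpy as np

def predict_block(a, b, c, x0, y0, size):
    """Evaluate the plane L = a*x + b*y + c at every pixel of a size x size
    block whose top-left pixel lies at image coordinates (x0, y0)."""
    ys, xs = np.mgrid[y0:y0 + size, x0:x0 + size]   # per-pixel (y, x) coordinates
    return a * xs + b * ys + c                       # predicted depth gray values
```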

[0050] The implementation process at the encoding end is shown in Figure 2 and includes the following steps:

[0051] Step 1: analysis shows that the spatial distribution characteristics of the depth map can be expressed by the following model:

[0052] L = a·x + b·y + c,

[0053] where L represents the gray value of a pixel in the depth map, (x, y) represents the pixel coordinates, and a, b and c are the linear model parameters ...
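The remainder of the derivation is truncated in this excerpt, so the sketch below only illustrates one plausible way of obtaining a, b and c: a least-squares fit of the plane to the already-reconstructed neighbours in the row above and the column to the left of the current block (the inclusion of the top-left corner neighbour is likewise an assumption, not taken from the patent).

```python
import numpy as np

def fit_plane_parameters(recon, x0, y0, size):
    """Estimate (a, b, c) of L = a*x + b*y + c from the reconstructed pixels
    in the row above and the column to the left of the current block.

    recon    : 2-D array of already reconstructed depth samples, indexed [y, x]
    (x0, y0) : image coordinates of the block's top-left pixel
    size     : block width/height (a square block is assumed)
    """
    xs, ys, vals = [], [], []
    for x in range(x0 - 1, x0 + size):               # row above the block
        xs.append(x); ys.append(y0 - 1); vals.append(recon[y0 - 1, x])
    for y in range(y0, y0 + size):                   # column to the left
        xs.append(x0 - 1); ys.append(y); vals.append(recon[y, x0 - 1])
    A = np.column_stack([xs, ys, np.ones(len(xs))]).astype(float)
    a, b, c = np.linalg.lstsq(A, np.asarray(vals, dtype=float), rcond=None)[0]
    return a, b, c
```

Under these assumptions, the encoder would derive (a, b, c) with a routine of this kind, form the prediction with something like the predict_block sketch above, and code only the prediction residual.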



Abstract

The invention discloses a depth map intra prediction method based on a linear model. The gray values and coordinates of the adjacent pixels in the row above and the column to the left of the current coded block are used to determine the linear model parameters; the pixel gray values of the current coded block are then predicted from these parameters and the pixel coordinates of the block. Because the method follows the spatial character of a depth map, the prediction is accurate. Moreover, because the model parameters are computed from the row above and the column to the left of the current coded block, the encoder does not need to code the model parameters, and the decoder can derive them directly. The method can be applied in three-dimensional video coding standards.
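To make the "no parameter signalling" point concrete, the toy check below (hypothetical, and again assuming a least-squares fit rather than the patent's exact derivation) shows that encoder and decoder, fitting the same causal neighbours of a 4x4 block, recover identical parameters.

```python
import numpy as np

def derive_params(nx, ny, nval):
    """Least-squares fit of L = a*x + b*y + c to the causal neighbour samples."""
    A = np.column_stack([nx, ny, np.ones(len(nx))]).astype(float)
    return np.linalg.lstsq(A, np.asarray(nval, dtype=float), rcond=None)[0]

# Causal neighbours of a 4x4 block whose top-left pixel sits at (x0, y0) = (4, 4):
# the row above (y = 3, x = 3..7) and the column to the left (x = 3, y = 4..7).
xs = list(range(3, 8)) + [3] * 4
ys = [3] * 5 + list(range(4, 8))
vals = [2 * x + 3 * y + 10 for x, y in zip(xs, ys)]   # a noiseless planar depth patch

enc = derive_params(xs, ys, vals)   # computed at the encoder
dec = derive_params(xs, ys, vals)   # recomputed at the decoder from the same pixels
assert np.allclose(enc, dec)        # identical parameters, nothing to transmit
print(np.round(enc, 3))             # -> [ 2.  3. 10.]
```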

Description

Technical Field

[0001] The invention relates to a method for intra-frame prediction of a depth map in a three-dimensional stereoscopic video coding standard, and belongs to the technical field of communication.

Background Technique

[0002] As the main video application technology of the future, 3D stereoscopic video allows users to enjoy real 3D stereoscopic video content through a 3D stereoscopic display device. Technologies related to 3D video, such as 3D stereoscopic video acquisition, coding, and display, have received extensive attention. To promote the standardization of 3D stereoscopic video technology, the Moving Picture Experts Group (MPEG) proposed the concept of Free Viewpoint Television (FTV) in 2002, which provides a vivid, realistic and interactive 3D stereoscopic audiovisual system. Users can watch the three-dimensional video from different viewing angles, giving them a realistic sense of ...


Application Information

Patent Type & Authority: Application (China)
IPC (8): H04N7/32, H04N7/26, H04N19/11, H04N19/147, H04N19/593, H04N19/597
Inventor: 元辉 (Yuan Hui), 刘琚 (Liu Ju), 孙建德 (Sun Jiande)
Owner: SHANDONG UNIV