Video encoding, decoding method and video encoder, decoder

A video encoding and decoding technology, applied in the fields of video encoding and decoding methods, video encoders, and decoders, which can achieve the effect of improving transmission efficiency.

Inactive Publication Date: 2010-08-25
GLOBAL INNOVATION AGGREGATORS LLC

AI Technical Summary

Problems solved by technology

[0005] The existing stereoscopic video encoding and decoding method only realizes separate encoding for two-dimensional and three-dimensional display: one of the views in the binocular video is taken as the reference view and encoded with a standard encoding method, and the other view is encoded with reference to the reference view. In this way, two-dimensional display can be achieved by decoding only the content of the reference view on the display side, and three-dimensional display can be achieved by decoding all the content; however, this cannot meet the different levels of stereoscopic display requirements of the various stereoscopic display devices connected to different networks.



Examples


Embodiment 1

[0070] FIG. 1 is a flowchart of a video encoding method according to an embodiment of the present invention. In this embodiment, depth/disparity information is used as the prediction information. Before the steps shown in FIG. 1 are performed, the number of layers and the level of detail of the depth/disparity information to be extracted may be preset. This embodiment takes the extraction of three layers of depth/disparity information as an example, ordered from coarse to fine as sparse depth/disparity information, dense depth/disparity information, and fine depth/disparity information, to further introduce the technical solution of this embodiment. The video encoding method of this embodiment performs the following steps (an illustrative sketch follows the step listing):

[0071] Step 101: Use two or more cameras to shoot the same scene from different angles to obtain two views, namely the left-eye view and the right-eye view;

[0072] Step 102: Select one of the left-eye view and the right-eye view as a reference view to perform b...
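The following Python sketch is a minimal, runnable illustration of the Embodiment 1 structure as far as it is visible here (Steps 101-102 plus the layered extraction described in paragraph [0070]). Encoding is mocked with labelled strings, and every function name is a hypothetical placeholder rather than an API defined by the patent.

LAYERS = ("sparse", "dense", "fine")  # preset depth/disparity layers, coarse to fine

def encode_base_layer(view):
    """Standard base-layer encoding of the reference view (Step 102)."""
    bitstream = f"base({view})"
    locally_decoded = f"decoded({view})"  # local reconstruction reused for extraction
    return bitstream, locally_decoded

def extract_depth_disparity(decoded_ref, other_view, detail):
    """Derive one layer of depth/disparity prediction information."""
    return f"{detail}_depth({decoded_ref},{other_view})"

def encode_enhancement_layer(prediction_info):
    return f"enh({prediction_info})"

def encode_stereo_scalable(left_view, right_view):
    # Step 101 supplied the two views; take the left-eye view as the reference view.
    base_bits, decoded_ref = encode_base_layer(left_view)
    # Extract and enhancement-layer encode each preset layer separately.
    enh_bits = [
        encode_enhancement_layer(extract_depth_disparity(decoded_ref, right_view, d))
        for d in LAYERS
    ]
    # Multiplex the base layer and all enhancement layers into one scalable stream.
    return {"base": base_bits, "enhancement": enh_bits}

print(encode_stereo_scalable("left_view", "right_view"))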

Embodiment 2

[0104] FIG. 5 is a flowchart of a video encoding method according to Embodiment 2 of the present invention. In this embodiment, depth/disparity information is used as the prediction information. Before the steps shown in FIG. 5 are performed, the number of layers and the level of detail of the depth/disparity information to be extracted may be preset. This embodiment takes the extraction of three layers of depth/disparity information as an example, ordered from coarse to fine as sparse depth/disparity information, dense depth/disparity information, and fine depth/disparity information, to further introduce the technical solution of this embodiment. The video encoding method of this embodiment performs the following steps (an illustrative sketch follows the step listing):

[0105] Step 301: Use two or more cameras to shoot the same scene from different angles, and obtain two views, namely the left-eye view and the right-eye view;

[0106] Step 302: Select one of the left-eye view and the right-eye view as a reference view to perfo...
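A corresponding sketch for Embodiment 2, under the same assumptions (mocked encoding, hypothetical function names): after the first layer of depth/disparity information is enhancement-layer coded, only increments of the prediction information are extracted and coded for the later layers.

LAYERS = ("sparse", "dense", "fine")  # preset depth/disparity layers, coarse to fine

def extract_depth_disparity(ref_view, other_view, detail):
    return f"{detail}_depth({ref_view},{other_view})"

def prediction_increment(current_layer, previous_layer):
    """Prediction-information increment: what the finer layer adds over the previous one."""
    return f"delta({current_layer} - {previous_layer})"

def encode_enhancement_layer(data):
    return f"enh({data})"

def encode_stereo_incremental(left_view, right_view):
    base_bits = f"base({left_view})"  # Step 302: base-layer encode the reference view
    first = extract_depth_disparity(left_view, right_view, LAYERS[0])
    enh_bits = [encode_enhancement_layer(first)]  # first layer coded directly
    previous = first
    for detail in LAYERS[1:]:
        current = extract_depth_disparity(left_view, right_view, detail)
        enh_bits.append(encode_enhancement_layer(prediction_increment(current, previous)))
        previous = current
    # Multiplex the base layer and all enhancement layers into the encoded information.
    return {"base": base_bits, "enhancement": enh_bits}

print(encode_stereo_incremental("left_view", "right_view"))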


Abstract

The invention relates to a video encoding and decoding method and a video encoder and decoder. In one video encoding method, base-layer encoding is performed on a first view; at least one layer of prediction information is extracted by combining the locally decoded first view with a second view; enhancement-layer encoding is performed separately on each layer of the prediction information; and the enhancement-layer encoding and the base-layer encoding of the first view are multiplexed to obtain the encoded information. In the other video encoding method, base-layer encoding is performed on the first view, and a first layer of prediction information is extracted by combining the first view with the second view; enhancement-layer encoding is performed on the first layer of prediction information; at least one layer of prediction-information increments is then extracted and further enhancement-layer encoded; and the encoded information is obtained by multiplexing the base-layer encoding and the enhancement-layer encoding. The invention realizes scalable encoding and decoding of stereoscopic video content and meets the different levels of stereoscopic display requirements of various stereoscopic display devices on different networks.
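To make the scalability claim in the abstract concrete, the toy decoder-side sketch below shows how a two-dimensional display could decode only the base layer while stereoscopic displays decode the base layer plus as many enhancement layers as the device and network allow. The bitstream layout matches the encoder sketches above, and all names are illustrative rather than taken from the patent.

def decode_scalable(bitstream, enhancement_layers_to_use):
    """bitstream: {"base": ..., "enhancement": [...]} as produced by the encoder sketches above."""
    reference_view = f"decode({bitstream['base']})"
    if enhancement_layers_to_use == 0:
        # 2-D display: the reference view alone is enough.
        return {"mode": "2D", "view": reference_view}
    used = bitstream["enhancement"][:enhancement_layers_to_use]
    depth_layers = [f"decode({layer})" for layer in used]
    # The second view is reconstructed from the reference view plus the decoded
    # depth/disparity layers; more layers give a finer stereoscopic reconstruction.
    return {"mode": "3D", "reference": reference_view, "depth_layers": depth_layers}

stream = {"base": "base(left_view)", "enhancement": ["enh(sparse)", "enh(dense)", "enh(fine)"]}
print(decode_scalable(stream, 0))  # low-capability device or 2-D display
print(decode_scalable(stream, 3))  # full-quality stereoscopic display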

Description

Technical field
[0001] The present invention relates to the field of video technology, and in particular to a video encoding and decoding method, a video encoder, and a decoder.
Background technique
[0002] Traditional two-dimensional video is a carrier of planar information: it can show the content of a scene, but not the depth information of the scene. When humans watch the surrounding world, they not only see the width and height of objects but also perceive their depth, and can judge the distance between objects or between the viewer and an object. The reason for this three-dimensional visual characteristic is that people view objects with both eyes at the same time. Because of the distance between the two eyes, the visual images received by the left eye and the right eye when looking at an object at a certain distance are different, and this difference creates a three-dimensional impression in the brain. With the development of video technology, p...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): H04N7/26, H04N7/32, H04N19/102, H04N19/166, H04N19/30, H04N19/50, H04N19/503, H04N19/597, H04N19/61, H04N19/625, H04N19/70
CPC: H04N7/26351, H04N19/00757, H04N19/00751, H04N21/234327, H04N21/2365, H04N7/465, H04N21/4347, H04N19/00424, H04N19/00769, H04N7/467, H04N19/597, H04N19/30, H04N19/587, H04N19/40, H04N19/59, H04N13/161, H04N13/167, H04N2213/007
Inventor: 方平
Owner: GLOBAL INNOVATION AGGREGATORS LLC