Inter-frame prediction method in hybrid video coding standard

A technique for inter-frame prediction in video coding, applicable to digital video signal processing, electrical components, image communication, and related fields. It addresses the poor prediction performance of existing motion models.

Active Publication Date: 2015-09-23
HARBIN INST OF TECH


Problems solved by technology

However, neither model predicts well when a block contains multiple objects moving in different directions.



Specific Embodiment 1

[0039] Embodiment 1: The inter-frame prediction method in the hybrid video coding standard described in this embodiment is used to describe deformation motion present in a video sequence, and the prediction method is applied in merge mode, skip mode, or inter mode. The method is implemented as follows:

[0040] Step 1: Obtain the motion information of several adjacent coded blocks around the current coded block. The size of the current coded block is W*H, where W is its width and H is its height. The motion information includes a reference index and a motion vector. The surrounding coded blocks are referred to as adjacent coded blocks.
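As a concrete illustration of the motion information gathered in Step 1, the sketch below models each neighbouring block's reference index and motion vector. The class name, field names, and sample values are illustrative assumptions, not taken from the patent text:

```python
from dataclasses import dataclass


@dataclass
class MotionInfo:
    """Motion information of one adjacent coded block (Step 1)."""
    ref_idx: int              # reference index into the reference picture list
    mv: tuple                 # motion vector (mv_x, mv_y)


# Hypothetical adjacent coded blocks of a W*H current block, keyed by
# position; real codecs would fetch these from the decoded-block buffer.
neighbors = {
    "top_left": MotionInfo(ref_idx=0, mv=(4, -2)),
    "top_right": MotionInfo(ref_idx=0, mv=(6, -1)),
    "bottom_left": MotionInfo(ref_idx=1, mv=(3, 0)),
    "bottom_right": MotionInfo(ref_idx=0, mv=(5, -2)),
}
```

Each entry pairs a reference index with a motion vector, matching the two components of motion information named in Step 1.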

[0041] Step 2: Obtain the reference index of each division unit in the current coding block from the reference indexes of the adjacent coding blocks obtained in Step 1.

[0042] Step 3: Process the motion vectors of the adjacent coding blocks according to the reference ...

Specific Embodiment 2

[0043] Specific Embodiment 2: The inter-frame prediction method in the hybrid video coding standard described in this embodiment is characterized in that:

[0044] In Step 1, the adjacent coding blocks are the adjacent blocks at the four corner positions of the current coding block, or the adjacent blocks at the four corner positions plus the center point. The adjacent coding blocks are spatial neighbouring blocks located in the current frame or temporal neighbouring blocks located in a temporal reference frame.

[0045] Selecting a larger number of adjacent blocks to obtain the motion information of the current coding block is also supported; in addition to the four corner positions and the center position, adjacent blocks at other positions may be used.

[0046] For example, the four adjacent coded blocks at the four corners are selected from the upper-left, upper-right, lower-left, and lower-right corners of the current...
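The corner and center positions described in this embodiment can be sketched as below. The exact pixel offsets (one sample outside each corner) follow a common convention in block-based codecs and are my assumption; the patent text does not specify them:

```python
def corner_and_center_positions(x, y, w, h):
    """Return sample positions used to locate the adjacent blocks at the
    four corners of a w*h current block whose top-left sample is (x, y),
    plus its center point. Offsets are illustrative assumptions."""
    return {
        "top_left": (x - 1, y - 1),       # just outside the upper-left corner
        "top_right": (x + w, y - 1),      # just outside the upper-right corner
        "bottom_left": (x - 1, y + h),    # just outside the lower-left corner
        "bottom_right": (x + w, y + h),   # just outside the lower-right corner
        "center": (x + w // 2, y + h // 2),
    }
```

For a 8x8 block at (16, 16), this yields corner positions such as (15, 15) and (24, 24) and a center at (20, 20).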

Specific Embodiment 3

[0054] Specific Embodiment 3: In the inter-frame prediction method in the hybrid video coding standard described in this embodiment, the motion vector of each division unit in the current coding block is obtained in Step 3 as follows:

[0055] The motion vector is computed with a bilinear interpolation model. The calculation proceeds as follows: select four adjacent blocks from the several adjacent blocks of Step 1 such that the motion information of all four exists and at least one selected adjacent block has motion information different from that of the other selected blocks. Using the method described in Embodiment 2, the reference indexes of the selected adjacent blocks yield the target reference index of each division unit in the current block; again following Embodiment 2, the motion vectors of the selected adjacent blocks are preprocessed, and then it...
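The bilinear interpolation model named above can be sketched as follows: each division unit at a normalised position (a, b) inside the block receives a weighted blend of the four corner motion vectors. The weighting is the standard bilinear formula; the function name and the use of normalised coordinates are illustrative assumptions, not the patent's exact derivation:

```python
def bilinear_mv(mv_tl, mv_tr, mv_bl, mv_br, a, b):
    """Bilinearly interpolate a motion vector at normalised position
    (a, b) inside the block, where a is the horizontal fraction and b
    the vertical fraction, both in [0, 1]. mv_* are the (preprocessed)
    motion vectors of the four selected corner neighbours."""
    # Interpolate horizontally along the top and bottom edges...
    top = tuple((1 - a) * tl + a * tr for tl, tr in zip(mv_tl, mv_tr))
    bot = tuple((1 - a) * bl + a * br for bl, br in zip(mv_bl, mv_br))
    # ...then vertically between the two intermediate vectors.
    return tuple((1 - b) * t + b * bo for t, bo in zip(top, bot))
```

A division unit at the block center (a = b = 0.5) thus receives the average of the four corner vectors, while units near a corner track that corner's vector, which is how the model captures deformation motion within one block.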



Abstract

An inter-frame prediction method in a hybrid video coding standard belongs to the field of video coding. In order to effectively handle deformation motion present in a video sequence and further improve coding performance, the present invention puts forward an inter-frame prediction method in a hybrid video coding standard. The method comprises the steps of: obtaining motion information of a plurality of adjacent coded blocks around a current coding block; obtaining a reference index of each division unit in the current coding block according to the obtained reference indexes of the adjacent coded blocks; and processing motion vectors of the adjacent coded blocks according to the obtained reference indexes of the adjacent coded blocks and the obtained reference index of each division unit in the current coding block, so as to obtain a motion vector of each division unit in the current coding block. According to the inter-frame prediction method of the present invention, the motion information of the current coding block is predicted from the motion information of its adjacent coded blocks, so that deformation motion present in the video sequence can be effectively described and coding efficiency further improved.

Description

technical field

[0001] The invention relates to an inter-frame prediction method in a hybrid video coding standard.

Background technique

[0002] With rising demands on video display quality, new video applications such as high-definition and ultra-high-definition video have emerged. As high-resolution, high-quality video becomes ever more widely consumed, improving video compression efficiency becomes crucial.

[0003] The digitization of images and videos produces a large amount of data redundancy, which is what makes video compression possible. Generally speaking, this redundancy includes at least spatial redundancy, temporal redundancy, and information-entropy redundancy. Temporal redundancy is typically removed by prediction-based methods, that is, inter-frame predictive coding. The basic idea is to find the block that best matches the current block from the encode...


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04N19/51, H04N19/503, H04N19/56
Inventors: 范晓鹏 (Fan Xiaopeng), 张娜 (Zhang Na), 赵德斌 (Zhao Debin)
Owner: HARBIN INST OF TECH