
A Method of Inter-frame Prediction in Hybrid Video Coding Standard

An inter-frame prediction method, applied in digital video signal processing, electrical components, image communication, etc. It addresses the problem that existing models predict poorly when a block contains multiple objects moving in different directions, and achieves improved inter-frame prediction performance.

Active Publication Date: 2018-03-30
HARBIN INST OF TECH

Problems solved by technology

However, neither model predicts well when a block contains multiple objects moving in different directions.

Method used


Examples


Specific Embodiment 1

[0039] Specific Embodiment 1: The inter-frame prediction method in the hybrid video coding standard described in this embodiment is used to describe the deformation motion present in a video sequence, and the prediction method is applied in merge mode, skip mode, or inter mode. The method is implemented as follows:

[0040] Step 1: Obtain the motion information of several adjacent coded blocks around the current coding block. The size of the current coding block is W×H, where W is the width and H is the height of the current coding block. The motion information includes a reference index and a motion vector. The surrounding coded blocks are referred to as adjacent coded blocks.

[0041] Step 2: Derive the reference index of each division unit in the current coding block from the reference indices of the adjacent coded blocks obtained in Step 1.

[0042] Step 3: Process the motion vectors of the adjacent coded blocks according to the reference ...
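The three steps above can be sketched in code. This is only an illustrative outline under simplifying assumptions: the names (`MotionInfo`, `predict_block_motion`) are hypothetical, Step 2 is reduced to a majority vote over neighbour reference indices, and Step 3 is reduced to averaging, whereas the patent derives a distinct vector per division unit (see the bilinear model of Specific Embodiment 3).

```python
from dataclasses import dataclass

@dataclass
class MotionInfo:
    ref_idx: int   # index into the reference picture list
    mv: tuple      # motion vector (mv_x, mv_y)

def predict_block_motion(neighbors, num_units_w, num_units_h):
    """Sketch of the three-step flow: gather neighbour motion info,
    pick a target reference index, derive per-unit motion vectors."""
    # Step 1: keep only neighbours whose motion information exists.
    available = [n for n in neighbors if n is not None]
    if not available:
        return None  # fall back to another merge/skip candidate

    # Step 2 (simplified): take the most common neighbour reference
    # index as the target reference index for every division unit.
    ref_idx = max({n.ref_idx for n in available},
                  key=lambda r: sum(1 for n in available if n.ref_idx == r))

    # Step 3 (simplified): average the neighbour motion vectors; the
    # patent instead interpolates a distinct vector per division unit.
    avg_x = sum(n.mv[0] for n in available) / len(available)
    avg_y = sum(n.mv[1] for n in available) / len(available)
    return [[MotionInfo(ref_idx, (avg_x, avg_y))
             for _ in range(num_units_w)] for _ in range(num_units_h)]
```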

Specific Embodiment 2

[0043] Specific Embodiment 2: The inter-frame prediction method in the hybrid video coding standard described in this embodiment is characterized as follows:

[0044] In Step 1, the adjacent coded blocks are the neighboring blocks at the four corner positions of the current coding block, or the neighboring blocks at the four corner positions plus the center point of the current coding block. An adjacent coded block is either a spatial neighboring block located in the current frame or a temporal neighboring block located in a temporal reference frame.

[0045] Selecting a larger number of adjacent blocks to obtain the motion information of the current coding block is also supported; besides the four corner positions and the center position, neighboring blocks at other positions may be used.

[0046] For example, the four adjacent coded blocks at the four corners are selected from the upper-left, upper-right, lower-left, and lower-right corners of the current...
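The neighbour positions described above can be made concrete with a small helper. This is a hypothetical sketch, not the patent's exact rule: it assumes 4×4 minimum-block granularity and returns the top-left sample coordinate of each candidate neighbour block around a W×H block whose top-left corner is at (x0, y0).

```python
def corner_neighbor_positions(x0, y0, W, H):
    """Hypothetical helper: top-left sample coordinates of candidate
    neighbouring blocks at the four corners and the centre of the
    current W x H block, assuming a 4x4 minimum block size."""
    return {
        "top_left":     (x0 - 4, y0 - 4),       # spatial, above-left
        "top_right":    (x0 + W, y0 - 4),       # spatial, above-right
        "bottom_left":  (x0 - 4, y0 + H),       # spatial, below-left
        "bottom_right": (x0 + W, y0 + H),       # typically temporal
        "center":       (x0 + W // 2, y0 + H // 2),
    }
```

The lower-right neighbour usually has no decoded spatial motion yet, so in practice it would come from a temporal reference frame, consistent with paragraph [0044].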

Specific Embodiment 3

[0054] Specific Embodiment 3: In the inter-frame prediction method in the hybrid video coding standard described in this embodiment, the motion vector of each division unit in the current coding block is obtained in Step 3 as follows:

[0055] The motion vector is computed with a bilinear interpolation model. The calculation proceeds as follows: select four adjacent blocks from the several adjacent blocks of Step 1; the motion information of all four must exist, and at least one selected adjacent block must have motion information different from that of the other selected adjacent blocks. Following the method of Specific Embodiment 2, the reference indices of the selected adjacent blocks are used to obtain the target reference index of each division unit in the current block; also following that method, the motion vectors of the selected adjacent blocks are preprocessed, and then it...
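A standard bilinear interpolation over four corner motion vectors can be sketched as follows. This illustrates the general bilinear model only; the patent's exact preprocessing, weighting, and rounding are not reproduced here, and the function name is an assumption.

```python
def bilinear_mv(mv_tl, mv_tr, mv_bl, mv_br, x, y, W, H):
    """Bilinearly interpolate a motion vector at position (x, y) inside
    a W x H block from the four corner motion vectors (top-left,
    top-right, bottom-left, bottom-right), each an (mv_x, mv_y) pair."""
    wx = x / W   # horizontal weight in [0, 1]
    wy = y / H   # vertical weight in [0, 1]
    # Interpolate along the top and bottom edges first.
    top = ((1 - wx) * mv_tl[0] + wx * mv_tr[0],
           (1 - wx) * mv_tl[1] + wx * mv_tr[1])
    bottom = ((1 - wx) * mv_bl[0] + wx * mv_br[0],
              (1 - wx) * mv_bl[1] + wx * mv_br[1])
    # Then blend the two edge results vertically.
    return ((1 - wy) * top[0] + wy * bottom[0],
            (1 - wy) * top[1] + wy * bottom[1])
```

Because each division unit gets its own interpolated vector, units in different parts of the block can move differently, which is what lets the model describe deformation (non-translational) motion.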


Abstract

The invention relates to an inter-frame prediction method in a hybrid video coding standard, belonging to the field of video coding. The purpose of the invention is to effectively handle the deformation motion present in video sequences and thereby further improve video coding performance. The method obtains the motion information of several adjacent coded blocks around the current coding block; derives the reference index of each division unit in the current coding block from the reference indices of those adjacent blocks; and processes the motion vectors of the adjacent blocks according to the derived reference indices to obtain the motion vector of each division unit. Because the motion information of the current block is predicted from the motion information of the coded blocks adjacent to it, the deformation motion present in the video sequence can be described effectively, and coding efficiency is further improved.

Description

Technical field

[0001] The invention relates to an inter-frame prediction method in a hybrid video coding standard.

Background

[0002] As requirements on video display quality rise, new video application forms such as high-definition and ultra-high-definition video have emerged. With high-resolution, high-quality video viewing becoming ever more widespread, enhancing video compression efficiency becomes crucial.

[0003] The digitization of images and video produces a large amount of data redundancy, which is what makes video compression possible. Generally speaking, the redundancy types include at least spatial redundancy, temporal redundancy, and information-entropy redundancy. Temporal redundancy is generally removed by prediction-based methods, i.e., inter-frame predictive coding. The basic idea is to find the block that best matches the current block from the encode...
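The block-matching idea in paragraph [0003] can be illustrated with a minimal exhaustive motion search. This is a generic textbook sketch, not the patent's method: frames are plain 2-D lists of luma samples, the cost is the sum of absolute differences (SAD), and all names are hypothetical.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-size blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def full_search(cur, ref, bx, by, bs, sr):
    """Exhaustive block matching: find the displacement (dx, dy) within
    a +/- sr search window that minimises SAD between the bs x bs block
    at (bx, by) in the current frame and the reference frame."""
    h, w = len(ref), len(ref[0])
    cur_block = [row[bx:bx + bs] for row in cur[by:by + bs]]
    best = (0, 0, float("inf"))
    for dy in range(-sr, sr + 1):
        for dx in range(-sr, sr + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or x + bs > w or y + bs > h:
                continue  # candidate block falls outside the frame
            cand = [row[x:x + bs] for row in ref[y:y + bs]]
            cost = sad(cur_block, cand)
            if cost < best[2]:
                best = (dx, dy, cost)
    return best[:2]
```

The returned displacement is the motion vector of classical translational inter prediction; the patent's contribution is a richer per-unit motion field on top of this basic framework.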

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): H04N19/51, H04N19/503, H04N19/56
Inventors: 范晓鹏, 张娜, 赵德斌
Owner: HARBIN INST OF TECH