
An inter-frame prediction method based on deep neural network

A deep-neural-network-based inter-frame prediction technology, applied in the field of inter-frame prediction, which addresses problems such as insufficient accuracy and achieves the effects of improved prediction accuracy, improved coding efficiency, and reduced temporal redundancy.

Active Publication Date: 2020-09-11
HARBIN INST OF TECH

AI Technical Summary

Problems solved by technology

Traditional inter-frame prediction methods suffer from insufficient prediction accuracy.

Method used



Examples


Embodiment 1

[0041] An inter-frame prediction method based on a deep neural network; the flow of the method is as shown in Figure 1:

[0042] Step 1: Obtain the surrounding adjacent pixels of the current block, the reference block, and the surrounding adjacent pixels of the reference block. The current block and the reference block are rectangular or non-rectangular regions; when they are rectangular, both have size W*H, where W is the width and H is the height of the current block and the reference block;
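The inputs gathered in Step 1 can be sketched as follows. This is a minimal numpy sketch, not the patent's implementation: the L-shaped template thickness `t`, the block size, and the block positions are illustrative assumptions.

```python
import numpy as np

def extract_l_template(frame, x, y, w, h, t=4):
    """Return the L-shaped template of t rows above and t columns to the
    left of the w*h block at (x, y), flattened into one vector."""
    top = frame[y - t:y, x - t:x + w]   # rows above, incl. top-left corner
    left = frame[y:y + h, x - t:x]      # columns to the left of the block
    return np.concatenate([top.ravel(), left.ravel()])

# toy 16x16 "frames" with reproducible content
rng = np.random.default_rng(0)
cur_frame = rng.integers(0, 256, size=(16, 16))
ref_frame = rng.integers(0, 256, size=(16, 16))

W, H = 4, 4        # current/reference block size W*H (assumed)
cx, cy = 8, 8      # current block position in the current frame
rx, ry = 7, 9      # reference block position given by the motion vector

cur_block = cur_frame[cy:cy + H, cx:cx + W]
ref_block = ref_frame[ry:ry + H, rx:rx + W]
cur_nbrs = extract_l_template(cur_frame, cx, cy, W, H)
ref_nbrs = extract_l_template(ref_frame, rx, ry, W, H)
```

The template covers only already-decoded pixels (above and to the left), which is why an L shape rather than a full ring is the natural choice in a block-based codec.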

[0043] Step 2: Input the surrounding adjacent pixels of the current block, the reference block, and the surrounding adjacent pixels of the reference block obtained in Step 1 into the deep neural network, and learn the relationship between the current block and the reference block, or learn the relationship between the reference block and the…
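The multi-input network of Step 2 can be sketched as a single fully connected hidden layer mapping the three inputs to a refined prediction block. This is a hedged sketch: the patent does not fix layer sizes or weights, so `predict_block`, the layer widths, and the random stand-in parameters are all assumptions for illustration.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def predict_block(cur_nbrs, ref_block, ref_nbrs, params, w, h):
    """Map the three Step-2 inputs (current-block neighbours, reference
    block, reference-block neighbours) to a w*h prediction block via one
    fully connected hidden layer. Weights are random stand-ins here."""
    x = np.concatenate([cur_nbrs.ravel(), ref_block.ravel(), ref_nbrs.ravel()])
    hdn = relu(params["W1"] @ x + params["b1"])
    out = params["W2"] @ hdn + params["b2"]
    return out.reshape(h, w)

rng = np.random.default_rng(1)
W, H, T = 4, 4, 4
nbr_len = T * (T + W) + H * T                 # length of one L-template
n_in = 2 * nbr_len + W * H                    # two templates + one block
params = {
    "W1": rng.standard_normal((32, n_in)) * 0.01,
    "b1": np.zeros(32),
    "W2": rng.standard_normal((W * H, 32)) * 0.01,
    "b2": np.zeros(W * H),
}
pred = predict_block(rng.standard_normal(nbr_len),
                     rng.standard_normal((H, W)),
                     rng.standard_normal(nbr_len),
                     params, W, H)
```

Concatenating the flattened inputs is what lets the network accept the non-square template regions alongside the rectangular block, which the abstract highlights as the departure from a plain convolutional network.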

Embodiment 2

[0069] Embodiment 2 differs from Embodiment 1 in that the process of obtaining a more accurate prediction block in Step 2 is:

[0070] First step: input the reference block obtained in Step 1 and the surrounding adjacent pixels of the reference block into a neural network to learn the relationship between the reference block and its surrounding adjacent pixels. The neural network consists of fully connected layers, convolutional layers, or a combination of both;

[0071] Second step: input the surrounding adjacent pixels of the current block obtained in Step 1 and the relationship obtained in the first step into a neural network, and learn to obtain a more accurate prediction block of the current block. The neural network consists of fully connected layers, convolutional layers, or a combination of both.
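The two-step decomposition of Embodiment 2 can be sketched as two small fully connected stages. Again a hedged sketch: the relation-vector width, layer shapes, and function names are illustrative assumptions, not values from the patent.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def stage1_relation(ref_block, ref_nbrs, p):
    """First step: encode the relationship between the reference block
    and its surrounding adjacent pixels as a feature vector."""
    x = np.concatenate([ref_block.ravel(), ref_nbrs.ravel()])
    return relu(p["R1"] @ x + p["r1"])

def stage2_predict(cur_nbrs, relation, p, w, h):
    """Second step: combine the current block's neighbours with the
    learned relationship to produce the refined prediction block."""
    x = np.concatenate([cur_nbrs.ravel(), relation])
    return (p["R2"] @ x + p["r2"]).reshape(h, w)

rng = np.random.default_rng(2)
W, H, NBR = 4, 4, 48          # block size and template length (assumed)
p = {
    "R1": rng.standard_normal((24, W * H + NBR)) * 0.01,
    "r1": np.zeros(24),
    "R2": rng.standard_normal((W * H, NBR + 24)) * 0.01,
    "r2": np.zeros(W * H),
}
rel = stage1_relation(rng.standard_normal((H, W)), rng.standard_normal(NBR), p)
pred2 = stage2_predict(rng.standard_normal(NBR), rel, p, W, H)
```

The intuition is that the reference side teaches the network how a block relates to its template, and that learned relationship is then transferred to the current block's template, whose pixels are available before the block itself is decoded.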

Embodiment 3

[0073] Embodiment 3 differs from Embodiment 1 in that, in the inter-frame prediction method of this embodiment's hybrid video codec system, the deep neural networks of the first, second, and third steps within Step 2 can be integrated into a single deep neural network through simple transformations. In principle, the distinction between the first, second, and third steps is made only for convenience of description, according to their functions. During training and deployment the entire network operates end to end, so conceptually distinguishing the network modules is a special case of Embodiment 1.



Abstract

The invention provides an inter-frame prediction method based on a deep neural network in a hybrid video coding and decoding system. The inter-frame prediction method belongs to the technical field of inter-frame prediction. The method obtains a more accurate prediction block by applying the deep neural network to the surrounding adjacent pixels of the current block and the reference block, thereby improving inter-frame prediction performance. Its beneficial advantages are: unlike conventional inter-frame prediction methods, this method is based on a deep neural network; and unlike existing deep-neural-network methods in hybrid video coding and decoding systems that take only an image block as input, the deep neural network of this method has multiple inputs, including a non-square region, which is a contribution distinct from a common convolutional neural network.

Description

Technical field

[0001] The invention relates to an inter-frame prediction method based on a deep neural network in a hybrid video codec system, and belongs to the technical field of inter-frame prediction.

Background technique

[0002] With the rapid development of portable devices and self-media, applications related to video coding are developing rapidly and gradually maturing, such as short-video sharing, video calling, Internet live streaming, TV broadcasting, and so on. Inter-frame prediction can effectively remove redundant information between adjacent video frames, so improving the accuracy of inter-frame prediction can improve the compression performance of video coding.

[0003] Generally, in traditional inter-frame prediction algorithms, the prediction value of the current block is directly copied or interpolated from the reference frame. There are many variations between adjacent frames of video, including brightness changes, fades in and out, blurri...
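The copy-or-interpolate behaviour described in [0002]–[0003] can be sketched as simple motion compensation. This is a simplified sketch assuming bilinear half-pel interpolation; real codecs such as HEVC use longer separable interpolation filters, and the function name and half-pel motion-vector convention are assumptions for illustration.

```python
import numpy as np

def mc_predict(ref_frame, x, y, w, h, mvx2, mvy2):
    """Traditional motion compensation: copy the w*h block at (x, y)
    displaced by a motion vector given in half-pel units (mvx2, mvy2),
    averaging neighbouring samples for half-pel positions."""
    fx, fy = x + mvx2 // 2, y + mvy2 // 2   # integer part of the MV
    hx, hy = mvx2 % 2, mvy2 % 2             # half-pel flags
    blk = ref_frame[fy:fy + h + 1, fx:fx + w + 1].astype(float)
    if hx:                                  # horizontal half-pel average
        blk = (blk[:, :-1] + blk[:, 1:]) / 2
    else:
        blk = blk[:, :-1]
    if hy:                                  # vertical half-pel average
        blk = (blk[:-1, :] + blk[1:, :]) / 2
    else:
        blk = blk[:-1, :]
    return blk

ref = np.arange(64, dtype=np.int64).reshape(8, 8)   # toy 8x8 reference frame
pred_copy = mc_predict(ref, 2, 2, 2, 2, 0, 0)       # integer MV: plain copy
pred_half = mc_predict(ref, 2, 2, 2, 2, 1, 0)       # half-pel horizontal MV
```

Because this pipeline only copies and linearly interpolates, it cannot model brightness changes, fades, or blur between frames, which is exactly the limitation the patent's learned predictor is aimed at.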

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): H04N19/573, H04N19/587, H04N19/61
CPC: H04N19/573, H04N19/587, H04N19/61
Inventors: Fan Xiaopeng (范晓鹏), Wang Yang (王洋), Zhao Debin (赵德斌)
Owner: HARBIN INST OF TECH