
Video hybrid encoding and decoding method and device based on deep learning, and medium

A deep-learning-based video hybrid coding technology, applied in the field of video coding. It addresses the problem that the compression rate of existing video hybrid coding schemes is difficult to improve further, and achieves the effects of improving video coding performance and video compression performance.

Active Publication Date: 2020-11-06
PEKING UNIV

AI Technical Summary

Problems solved by technology

[0004] To solve the problem that the compression rate of existing video hybrid coding schemes is difficult to improve further, the present invention provides a video hybrid coding and decoding method, device, and medium based on deep learning, which better addresses the above-mentioned problems of existing video hybrid coding schemes.



Examples


Embodiment 1

[0051] As shown in Figures 1 and 2, this embodiment provides a deep-learning-based video hybrid encoding method in which intra-frame coding is performed by a deep-learning autoencoder. Specifically, the video hybrid encoding method may include, but is not limited to, the following steps.

[0052] First, the intra-frame coding process is performed to extract bottleneck layer features from the designated frame image of the current video. In some preferred embodiments of the present invention, extracting the bottleneck layer features from the designated frame image includes: grouping all frame images of the current video in order from front to back to obtain multiple groups of images, with the first frame image of each group taken as the designated frame image. For example, if the current video has 1600 frames, it can be divided into 100 groups of 16 frames each. Each group of images includes the first subsequent frame image and ...
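The frame-grouping step described above can be sketched as follows. This is an illustrative reconstruction, not code from the patent; the function name `group_frames` and its interface are assumptions. It splits the frame indices of a video, in display order, into fixed-size groups, where the first index of each group is the designated (intra-coded) frame.

```python
def group_frames(num_frames, group_size=16):
    """Split frame indices 0..num_frames-1 into consecutive groups.

    The first frame of each group is the "designated" frame that goes
    through the deep intra-frame (autoencoder) coding path.
    """
    groups = []
    for start in range(0, num_frames, group_size):
        groups.append(list(range(start, min(start + group_size, num_frames))))
    return groups

# The embodiment's example: 1600 frames -> 100 groups of 16 frames
groups = group_frames(1600, 16)
designated = [g[0] for g in groups]  # intra frames: 0, 16, 32, ...
```

With these numbers the sketch reproduces the embodiment's figures exactly: 100 groups, each of 16 frames, with every 16th frame designated for intra coding.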

Embodiment 2

[0059] As shown in Figures 3 and 4, this embodiment provides a deep-learning-based video decoding method that corresponds to the video encoding method of any embodiment of the present invention; this embodiment is used to decode the data generated by the video hybrid encoding method of any embodiment of the present invention.

[0060] Specifically, the video decoding method based on deep learning in this embodiment includes but is not limited to the following steps.

[0061] First, entropy decoding is performed on the intra-frame coded data in the received code stream to obtain the bottleneck layer features, and then the designated frame image is decoded according to the bottleneck layer features.

[0062] Then, entropy decoding, inverse quantization, inverse transformation, and compensation with the designated frame image are performed on the first prediction residual data in the received code stream, after which loop filtering is applied to the compensated ...
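The residual-decoding chain of paragraphs [0061]-[0062] can be sketched with a toy 1-D example. Everything here is an illustrative stand-in: the quantization step, the identity "transform", and the 3-tap averaging "loop filter" are placeholders for the real 2-D transforms and filters, chosen only to show the order of operations (dequantize, inverse-transform, compensate with the prediction, loop-filter).

```python
def dequantize(levels, step=4):
    # inverse quantization: scale the entropy-decoded levels back up
    return [lvl * step for lvl in levels]

def inverse_transform(coeffs):
    # placeholder for the real inverse transform (e.g. an inverse DCT);
    # here the "transform" is the identity, so its inverse is too
    return list(coeffs)

def compensate(residual, prediction):
    # add the prediction derived from the decoded designated frame
    return [r + p for r, p in zip(residual, prediction)]

def loop_filter(samples):
    # 3-tap moving average as a stand-in for a deblocking/loop filter
    out = []
    for i in range(len(samples)):
        window = samples[max(0, i - 1):i + 2]
        out.append(sum(window) / len(window))
    return out

prediction = [10, 12, 14, 16]  # from the decoded designated frame
levels = [1, 0, -1, 2]         # entropy-decoded quantized residual
recon = loop_filter(compensate(inverse_transform(dequantize(levels)), prediction))
```

The point of the sketch is the pipeline order, which mirrors the decoder steps in the embodiment; each stage undoes the corresponding encoder stage in reverse.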

Embodiment 3

[0065] Based on the same inventive concept as the first embodiment, this embodiment provides a deep-learning-based video hybrid encoding device. The device provides a video coding framework that integrates a deep intra-frame autoencoder with inter-frame motion-compensated prediction, and can select the intra-frame mode or a non-intra-frame mode at the encoding end to realize efficient video coding and further video compression. The video hybrid encoding device includes, but is not limited to, the following modules.

[0066] Analysis network module: its input is the original signal and its output is the bottleneck layer features. The analysis network module is used to extract the bottleneck layer features from the designated frame image of the current video. In this embodiment, all frame images of the current video are grouped in order from front to back to obtain multiple groups of images, and the first frame image of each grou...
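The encoder-side mode decision described above (intra mode for designated frames, non-intra mode otherwise) can be sketched as below. The function `select_mode` and the mode labels are assumptions for illustration; only the rule itself, with the group size of 16 from the embodiment, follows the text: a frame at a group boundary takes the deep intra path (analysis network producing bottleneck features), every other frame takes the inter, motion-compensated path.

```python
def select_mode(frame_index, group_size=16):
    """Return the coding mode for a frame, per the grouping rule:
    the first frame of each group is intra-coded by the autoencoder,
    the rest are coded with inter-frame motion-compensated prediction.
    """
    return "intra" if frame_index % group_size == 0 else "inter"

# first 18 frames of a video: intra at frames 0 and 16, inter elsewhere
modes = [select_mode(i) for i in range(18)]
```

This keeps the mode decision trivially cheap; in the patent's framework the interesting work happens inside the two paths, not in choosing between them.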



Abstract

The invention discloses a video hybrid encoding and decoding method and device based on deep learning, and a medium. The encoding method comprises the following steps: extracting bottleneck layer features from a designated frame image; reconstructing a first frame image according to the bottleneck layer features; performing quantization and entropy coding on the bottleneck layer features to obtain intra-frame coded data; and performing compensation, transformation, quantization, and entropy coding on the first subsequent frame image of the current video to obtain first prediction residual data. The decoding method comprises the following steps: performing entropy decoding on the intra-frame coded data to obtain the bottleneck layer features, and decoding the designated frame image; performing entropy decoding, inverse quantization, inverse transformation, and compensation on the first prediction residual data, then performing loop filtering on the compensated data, and decoding the first subsequent frame image. The encoding and decoding devices correspond to the respective encoding and decoding methods. The invention provides a brand-new video encoding and decoding scheme that realizes efficient compression and rapid decoding of video, greatly improving video compression performance.

Description

Technical field
[0001] The present invention relates to the technical field of video coding, and more specifically to a video hybrid coding and decoding method and device, and a computer storage medium.
Background technique
[0002] At present, traditional hybrid coding frameworks mainly perform predictive transform coding on image blocks of different sizes, and improved schemes often focus on local rate-distortion optimization of each coding tool. In addition, a probability estimation scheme based on a Gaussian hyperprior entropy model has been proposed and combined with a context model built on an autoregressive coding framework, helping end-to-end image coding frameworks obtain higher coding gain. However, when faced with higher video compression requirements, existing video hybrid coding schemes often cannot meet them.
[0003] Therefore, how to further improve the video compression rate thr...

Claims


Application Information

IPC(8): H04N19/124, H04N19/13, H04N19/184, H04N19/60, H04N19/82
CPC: H04N19/124, H04N19/13, H04N19/184, H04N19/60, H04N19/82
Inventor 贾川民, 马思伟, 王苫社
Owner PEKING UNIV