Video compression method based on deep learning

A video compression method based on deep learning, applied in the field of deep-learning video compression. It addresses the problem of compressing and storing video, achieving smaller storage together with a good video restoration effect.

Active Publication Date: 2020-06-16
山东新一代信息产业技术研究院有限公司

AI Technical Summary

Problems solved by technology

How to achieve video compression and storage so that smaller storage is realized while a better video restoration effect is obtained.




Embodiment

[0051] The video compression method based on deep learning of the present invention uses a Spynet motion estimation network, composed of an optical flow network (Optical Flow Net), a motion vector encoding network (MV Encoder Net) and a motion vector decoding network (MV Decoder Net), to perform motion estimation and motion compensation calculations, achieving a better motion estimation and motion compensation effect. A residual network is then applied; as shown in Figure 3, it contains two Resblock modules, allowing the network to be trained at a deeper level. Finally, an arithmetic entropy coding operation completes the encoding, and the result is stored as a Pickle file, so that video compression and storage are achieved with smaller storage and a better video restoration effect. As shown in Figure 1, the details are as follows:
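The stages named above (motion estimation, motion vector encode/decode, motion compensation, residual coding through two Resblock modules, and Pickle storage) can be sketched as a data-flow skeleton. This is a minimal illustration only: the real Spynet, MV Encoder/Decoder and Resblock components are trained deep networks, and the stand-in functions below (`estimate_motion`, `encode_decode_mv`, `resblock`, etc.) are hypothetical placeholders showing how the pieces connect.

```python
# Hypothetical sketch of the patent's pipeline; every "network" here is a
# trivial stand-in so that only the data flow between stages is visible.
import pickle

def estimate_motion(prev, cur):
    """Stand-in for the Spynet optical-flow network: one (dx, dy) motion
    vector per pixel (all zero in this placeholder)."""
    return [[(0, 0) for _ in row] for row in cur]

def encode_decode_mv(flow):
    """Stand-in for the MV Encoder Net + MV Decoder Net round trip
    (identity here; the real encoder/decoder pair is lossy)."""
    return flow

def motion_compensate(prev, flow):
    """Warp the previous reconstruction with the decoded motion vectors.
    With zero flow the prediction is simply the previous frame."""
    return [row[:] for row in prev]

def resblock(x):
    """Stand-in residual block: identity skip connection plus a transform
    (the transform is zero here, so the block acts as the identity)."""
    return [[v + 0.0 for v in row] for row in x]

def compress_frame(prev_recon, cur):
    """One frame through the pipeline: motion, prediction, residual coding."""
    flow = estimate_motion(prev_recon, cur)
    flow_hat = encode_decode_mv(flow)
    pred = motion_compensate(prev_recon, flow_hat)
    residual = [[c - p for c, p in zip(cr, pr)] for cr, pr in zip(cur, pred)]
    coded = resblock(resblock(residual))  # two Resblock modules, per the patent
    return {"flow": flow_hat, "residual": coded}

def store(bitstream, path):
    """The patent stores the entropy-coded result as a Pickle file."""
    with open(path, "wb") as f:
        pickle.dump(bitstream, f)
```

With zero motion, the residual degenerates to the plain frame difference, which makes the role of motion compensation easy to see: the better the prediction, the smaller the residual that must be entropy coded.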

[0052] S1. The video is divided into individual frame pictures, and the current frame picture x_t is input together with the reconstructed image ...
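Step S1 pairs each current frame x_t with a previously reconstructed image. A minimal sketch of that pairing, with a hypothetical `frame_pairs` generator (the real codec would feed back the decoded frame, not the original, as the next reconstruction):

```python
# Hypothetical sketch of step S1: iterate over the frames of a video and
# pair each current frame x_t with the previous reconstruction.
def frame_pairs(frames, first_recon):
    """Yield (x_t, previous reconstruction) pairs.

    `first_recon` seeds the loop (e.g. an intra-coded first frame); here
    the raw frame stands in for the decoded frame as a placeholder."""
    recon = first_recon
    for x_t in frames:
        yield x_t, recon
        recon = x_t  # placeholder: a real codec uses the decoded frame
```

Each yielded pair is exactly the input the motion estimation stage of [0051] expects: the frame to code and the reference to predict it from.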



Abstract

The invention discloses a video compression method based on deep learning, belonging to the fields of video compression and deep learning. The technical problem to be solved is how to compress and store video so that smaller storage is achieved and a better video recovery effect is obtained. The technical scheme is as follows: motion estimation and motion compensation calculations are performed by a Spynet motion estimation network composed of an optical flow network, a motion vector encoding network and a motion vector decoding network, achieving a better motion estimation and motion compensation effect; a residual network comprising two Resblock modules is then used, so that network training is carried out at a deeper level; finally, encoding is completed through an arithmetic entropy coding operation and the codes are stored as a Pickle file, so that compression and storage of the video are achieved, smaller storage is realized, and a better video restoration effect is obtained.
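The final storage step in the abstract quantizes and entropy-codes the latents, then writes them to a Pickle file. A minimal round-trip sketch, under the assumption of a simple uniform quantizer (the helper names `quantize`, `save_pickle`, etc. are hypothetical, and plain pickled bytes stand in for the arithmetic entropy coder):

```python
# Hypothetical sketch: uniform quantization plus Pickle-file storage,
# standing in for the arithmetic entropy coding stage of the patent.
import pickle

def quantize(values, step=0.1):
    """Map real-valued latents to integer symbols (lossy)."""
    return [round(v / step) for v in values]

def dequantize(symbols, step=0.1):
    """Recover approximate values from the integer symbols."""
    return [s * step for s in symbols]

def save_pickle(symbols, path):
    """Serialize the coded symbols to a Pickle file, as the patent does."""
    with open(path, "wb") as f:
        pickle.dump(symbols, f)

def load_pickle(path):
    """Read the symbols back for decoding."""
    with open(path, "rb") as f:
        return pickle.load(f)
```

The integer symbols are what an arithmetic coder would compress further by exploiting their probability distribution; quantization is where the scheme trades reconstruction quality for smaller storage.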

Description

Technical field

[0001] The invention relates to the fields of video compression and deep learning, and in particular to a video compression method based on deep learning.

Background technique

[0002] Today, video is the primary medium for the mass communication of information. Especially with the development of self-media, video data is growing explosively. In China, surveillance video accounts for a large proportion of all video data, so how to compress and store video, achieving a better restoration effect with smaller storage, has become an urgent problem to be solved. Video compression methods based on deep learning have become the mainstream direction of recent research, and a strong competitor to the current mainstream standards H.264 and H.265. The traditional video compression frameworks H.264 and H.265 use algorithms such as motion estimation, nonlinear transform, motion compensation, and ent...


Application Information

IPC(8): H04N19/91, H04N19/172, H04N19/124, H04N19/42, H04N19/51, H04N19/44, G06N3/08, G06N3/04
CPC: H04N19/91, H04N19/172, H04N19/124, H04N19/42, H04N19/51, H04N19/44, G06N3/08, G06N3/045
Inventor: 冯落落 (Feng Luoluo), 李锐 (Li Rui), 金长新 (Jin Changxin)
Owner: 山东新一代信息产业技术研究院有限公司