
A method and system for generating high frame rate video based on deep learning

A deep-learning-based high frame rate video technology in the field of computer vision. It addresses problems such as frame loss and the resulting degradation of video quality, and overcomes the weak results and the time-consuming, labor-intensive work of prior approaches.

Active Publication Date: 2019-04-26
HUAZHONG UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

[0003] In view of the above defects or improvement needs of the prior art, the present invention provides a method for generating high frame rate video based on deep learning. Its purpose is to convert low frame rate video into high frame rate video, thereby solving the technical problem that frame loss during network transmission of low frame rate video degrades video quality and harms the viewing experience.



Embodiment Construction

[0033] In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it. In addition, the technical features involved in the embodiments described below may be combined with each other as long as they do not conflict.

[0034] The technical terms used in the present invention are first explained below:

[0035] Convolutional Neural Network (CNN): a neural network that can be used for image classification, regression and other tasks. Its distinctive properties lie in two aspects: on the one hand, its neurons are not fully connected, each neuron being connected only to a local region of the input; on the other hand, the weights of ...
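As a reading aid only (not part of the patent), the following is a minimal PyTorch sketch of those two properties, local connectivity and weight sharing, contrasting a small convolutional layer with a fully connected layer of the same output size; all sizes are arbitrary illustration values.

```python
# Minimal illustration (not from the patent): local connectivity and weight sharing
# in a convolutional layer versus a fully connected layer of the same output size.
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)                 # one RGB image, 32x32 pixels

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
fc = nn.Linear(3 * 32 * 32, 16 * 32 * 32)     # dense layer with an equally sized output

n_conv = sum(p.numel() for p in conv.parameters())
n_fc = sum(p.numel() for p in fc.parameters())
print(n_conv)   # 448: one shared 3x3x3 kernel per output channel, plus biases
print(n_fc)     # about 50 million: every input value connects to every output value

y = conv(x)     # each output value depends only on a 3x3 neighborhood of the input
print(y.shape)  # torch.Size([1, 16, 32, 32])
```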



Abstract

The invention discloses a method for generating high frame rate video based on deep learning, comprising: using one or more original high frame rate video clips to generate a training sample set; using multiple video frame subsets from the training sample set to train a dual-channel convolutional neural network model, obtaining an optimized dual-channel convolutional neural network, where the dual-channel model is a convolutional neural network formed by fusing two convolution channels; and using the optimized dual-channel convolutional neural network to generate an interpolated frame between any two adjacent frames of a low frame rate video, thereby producing a video with a higher frame rate than the input. The whole process is end-to-end and requires no post-processing of video frames; the frame rate conversion effect is good, the synthesized video is highly fluent, and the method is robust to jitter, scene switching and other problems that arise during video shooting.
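The abstract only states that two convolution channels are fused into one network, so the following is a rough sketch of how such a dual-channel interpolation model could be wired up in PyTorch; the class name `DualChannelInterp`, the layer counts, kernel sizes and fusion scheme are illustrative assumptions, not the patented design.

```python
# Illustrative sketch only: a dual-channel CNN that takes two adjacent frames and
# predicts the in-between frame. Layer sizes and the fusion scheme are assumptions.
import torch
import torch.nn as nn

class DualChannelInterp(nn.Module):                  # hypothetical name
    def __init__(self):
        super().__init__()
        def channel():                               # one convolution channel per input frame
            return nn.Sequential(
                nn.Conv2d(3, 32, 7, padding=3), nn.ReLU(),
                nn.Conv2d(32, 32, 5, padding=2), nn.ReLU(),
            )
        self.chan_prev = channel()                   # processes the earlier frame
        self.chan_next = channel()                   # processes the later frame
        self.fuse = nn.Sequential(                   # fuse both channels, output the interpolated frame
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, frame_prev, frame_next):
        f1 = self.chan_prev(frame_prev)
        f2 = self.chan_next(frame_next)
        return self.fuse(torch.cat([f1, f2], dim=1))

# Training would pair frames (t, t+2) taken from high frame rate clips with the real
# middle frame (t+1) as the target, e.g. with an L1 reconstruction loss:
model = DualChannelInterp()
prev, nxt, target = (torch.rand(1, 3, 128, 128) for _ in range(3))
loss = nn.functional.l1_loss(model(prev, nxt), target)
loss.backward()
```

At inference time such a model would be applied to every pair of adjacent frames in the low frame rate video, and the predicted frames interleaved with the originals to produce the higher frame rate output.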

Description

Technical field

[0001] The invention belongs to the technical field of computer vision, and more specifically relates to a method and system for generating high frame rate video based on deep learning.

Background technique

[0002] With the development of technology, it is increasingly convenient for people to obtain videos. However, for hardware reasons most videos are captured with non-professional equipment, and the frame rate is generally only 24fps-30fps. High frame rate video is extremely smooth and gives viewers a better visual experience. If people upload high frame rate videos to the Internet directly, the increase in traffic consumption also increases their costs. If a low frame rate video is uploaded directly, frame loss during transmission is inevitable due to network conditions; the larger the video, the more likely this is to occur, so that the remote vi...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): H04N21/845; H04N19/587; H04N7/01; G06N3/04; G06N3/08
CPC: H04N7/0127; H04N19/587; H04N21/845; G06N3/084; G06N3/045
Inventors: 王兴刚 (Wang Xinggang), 罗浩 (Luo Hao), 姜玉静 (Jiang Yujing), 刘文予 (Liu Wenyu)
Owner: HUAZHONG UNIV OF SCI & TECH