
A video generation method for two non-adjacent images based on deep learning

A deep-learning technology for two non-adjacent images, applied in image communication, biological neural network models, selective content distribution, etc. It addresses the poor quality and short duration of generated or predicted video, and achieves the effects of reducing the dimension of the solution space, easy generation, and strong similarity.

Inactive Publication Date: 2019-08-09
HUAZHONG UNIV OF SCI & TECH
Cites: 11 · Cited by: 0
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

[0006] In view of the above defects or improvement needs of the prior art, the present invention provides a deep-learning-based video generation method for two non-adjacent images, thereby solving the technical problem in the prior art that generated or predicted video is of poor quality and short duration.



Examples


Embodiment Construction

[0034] In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention, not to limit it. In addition, the technical features involved in the various embodiments of the present invention described below can be combined with each other provided they do not conflict.

[0035] As shown in figure 1, a deep-learning-based video generation method for two non-adjacent images includes:

[0036] (1) Perform linear interpolation on the two non-adjacent image frames to obtain N input frames; feed the N input frames into the trained first generator to obtain N video frames between the two non-adjacent ...
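Step (1) can be sketched as ordinary per-pixel linear interpolation between the two endpoint images; this is only an illustrative reading of the patent's "linear interpolation processing" (the function name and array layout are assumptions, not taken from the patent), producing the coarse frames that the first generator would then refine:

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, n):
    """Linearly interpolate n intermediate frames between two images.

    frame_a, frame_b: float arrays of identical shape (H, W, C).
    Returns an array of shape (n, H, W, C): the coarse in-between
    frames that would serve as input to the first generator.
    """
    # Interior interpolation weights, excluding the two endpoints themselves.
    ts = np.linspace(0.0, 1.0, n + 2)[1:-1]
    return np.stack([(1.0 - t) * frame_a + t * frame_b for t in ts])
```

For example, with n = 3 the weights are 0.25, 0.5, and 0.75, so the middle frame is the pixel-wise average of the two endpoint images.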



Abstract

The invention discloses a deep-learning-based video generation method for two non-adjacent images, belonging to the fields of adversarial learning and video generation. The method includes: performing linear interpolation on two non-adjacent images to obtain N input frames; feeding the N input frames into the first generator to obtain N blurred video frames between the two non-adjacent images; feeding those N video frames into the trained second generator to obtain N new, clear video frames; and concatenating the two non-adjacent images with the new N video frames to generate a video. Fully convolutional layers are used to construct the first deep convolutional autoencoder network, and adversarial training yields the trained first generator; fully convolutional layers with cross-layer (skip) connections are used to construct the second deep convolutional autoencoder network, and adversarial training yields the trained second generator. The video generated by the invention has good quality and long duration.
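The "adversarial training" used to obtain both generators can be illustrated with the standard GAN objective: a discriminator is trained to separate real frames from generated ones, while the generator is trained to fool it. The sketch below shows only the loss computation on given discriminator logits, in NumPy; the patent does not specify its exact loss, so the non-saturating formulation here is an assumption, and all names are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator_loss(d_real_logits, d_fake_logits):
    # Binary cross-entropy: real frames are labeled 1, generated frames 0.
    real_loss = -np.log(sigmoid(d_real_logits)).mean()
    fake_loss = -np.log(1.0 - sigmoid(d_fake_logits)).mean()
    return real_loss + fake_loss

def generator_loss(d_fake_logits):
    # Non-saturating generator loss: push the discriminator's output
    # on generated frames toward 1.
    return -np.log(sigmoid(d_fake_logits)).mean()
```

At the start of training, when the discriminator is maximally uncertain (logits near 0, i.e. output 0.5), both losses equal log 2 per term, which is a common sanity check for a GAN implementation.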

Description

technical field [0001] The invention belongs to the fields of adversarial learning and video generation, and more specifically relates to a deep-learning-based method for generating video from two non-adjacent images. Background technique [0002] Video generation and prediction have always been difficult problems in computer vision. Traditional non-deep-learning algorithms struggle to generate continuous high-quality video, yet video generation and prediction are useful in many fields, such as behavior analysis, intelligent monitoring, video forecasting, animation production and more. [0003] In the 1980s, Yann LeCun and others had already proposed the basic theory of deep learning, but the hardware of the time could not meet its computing requirements, so the development of artificial intelligence was slow. With the improvement of hardware and the rise of deep learning, methods using the learned features ...

Claims


Application Information

Patent Timeline: no application data
Patent Type & Authority: Patent (China)
IPC(8): H04N21/85, H04N21/44, H04N21/845, G06N3/04
CPC: H04N21/44016, H04N21/845, H04N21/85, G06N3/045
Inventors: 温世平, 刘威威
Owner HUAZHONG UNIV OF SCI & TECH