Time-dimension video super-resolution method based on deep learning

A deep-learning and super-resolution technology applied in the field of image processing. It addresses the problems that reconstructed video images have unsatisfactory stability and accuracy and that structural similarity is insufficiently exploited, achieving the effects of reduced computational complexity and improved stability and accuracy.

Inactive Publication Date: 2017-09-05
XIDIAN UNIV
Cites: 2 · Cited by: 35

AI Technical Summary

Problems solved by technology

Video image frame-interpolation reconstruction is an ill-posed inverse problem: the temporal information of the video image is combined with its spatial information to reconstruct the interpolated frames, but the algo...




Embodiment Construction

[0028] Embodiments and effects of the present invention will be further described in detail below in conjunction with the accompanying drawings.

[0029] Referring to Figure 1, the deep-learning-based time-dimension video super-resolution method of the present invention is implemented in the following steps:

[0030] Step 1: Obtain the color video image set S.

[0031] (1a) From a given database, select a color video image set with 464814 samples, $S = \{S_1, S_2, \ldots, S_i, \ldots, S_{464814}\}$, and convert $S$ to a grayscale video image set, i.e., the original video image set $X = \{X_1, X_2, \ldots, X_i, \ldots, X_{464814}\}$, where $X_i$ denotes the $i$-th original video image sample, $1 \le i \le 464814$; $M$ denotes the size of each original video image block, $M = 576$; and $L_h$ denotes the number of image blocks in each sample of the original video image set, $L_h = 6$;
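The following is a minimal sketch of step (1a), assuming M = 576 corresponds to a flattened 24×24 block and that each sample stacks L_h = 6 grayscale frames; the helper name, the 24×24 interpretation of M, and the use of NumPy/OpenCV are illustrative assumptions, not the patent's own code.

```python
import numpy as np
import cv2  # OpenCV, used here only for color-to-grayscale conversion and resizing

M = 576   # flattened block size (assumed to be 24 x 24 pixels)
L_h = 6   # number of image blocks (frames) per sample

def to_sample(color_frames):
    """Turn L_h consecutive color frames into one grayscale sample of shape (M, L_h)."""
    assert len(color_frames) == L_h
    gray = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in color_frames]
    # Resize each frame to a 24x24 block and flatten it into an M-vector
    # (a stand-in for whatever block extraction the patent actually uses).
    blocks = [cv2.resize(g, (24, 24)).reshape(M) for g in gray]
    return np.stack(blocks, axis=1).astype(np.float32)  # columns are frames
```

Repeating this over all 464814 clips would yield the original video image set X described above.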

[0032] (1b) Use the downsampling matrix F to directly downsample the original video image set X to obtain the downsampled video ...
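The text is truncated here, but for time-dimension super-resolution a natural reading is that F removes intermediate frames along the time axis. The sketch below builds F as a frame-dropping selection matrix that keeps every other frame; this interpretation of F is an assumption, not a detail confirmed by the excerpt.

```python
import numpy as np

M, L_h = 576, 6
keep = np.arange(0, L_h, 2)           # indices of retained frames: 0, 2, 4 (assumed pattern)
F = np.zeros((len(keep), L_h))
F[np.arange(len(keep)), keep] = 1.0   # selection matrix of shape (L_h/2, L_h)

def downsample(X_i):
    """Apply F along the time axis: (M, L_h) -> (M, L_h // 2)."""
    return X_i @ F.T
```

Each downsampled sample then pairs with its original X_i to form one training example for the network.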



Abstract

The invention discloses a time-dimension video super-resolution method based on deep learning, aimed mainly at solving the prior-art problems that reconstructed interpolated video frames are poor in stability and low in precision. The technical key of the method is to fit the nonlinear mapping between an original video image and a down-sampled video image through neural network training. The method comprises the following steps: 1) obtaining an original video image set and a down-sampled video image set and taking them as training samples for a neural network; 2) constructing a neural network model and training its parameters on the training samples; and 3) taking any given video as a test sample and inputting it into the trained neural network model, whose output is the reconstructed video image. The method reduces the computational complexity of video frame-interpolation reconstruction and improves the stability and precision of the reconstructed interpolated frames. It can be used for scene interpolation and animation production, as well as for time-domain frame interpolation of low-frame-rate videos.
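As a rough illustration of the three steps in the abstract, the sketch below trains a small fully connected network to fit the nonlinear mapping from a flattened downsampled sample to its flattened original. The layer sizes, the MSE loss, and the use of PyTorch are assumptions for illustration; the excerpt does not disclose the patent's actual architecture or training setup.

```python
import torch
import torch.nn as nn

M, L_h = 576, 6
# Nonlinear mapping from a downsampled sample (M * L_h/2 values)
# to an original sample (M * L_h values).
net = nn.Sequential(
    nn.Linear(M * (L_h // 2), 2048), nn.ReLU(),
    nn.Linear(2048, 2048), nn.ReLU(),
    nn.Linear(2048, M * L_h),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(Y, X):
    """One gradient step on a batch of (downsampled, original) pairs."""
    opt.zero_grad()
    loss = loss_fn(net(Y), X)
    loss.backward()
    opt.step()
    return loss.item()

# Step 3: at test time, a flattened downsampled video sample Y goes through
# net(Y); the output is reshaped back to (M, L_h) as the reconstructed sample.
```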

Description

technical field

[0001] The invention belongs to the field of image processing, and in particular relates to a time-dimension video super-resolution method that can be used for scene interpolation, animation production, and time-domain frame interpolation of low-frame-rate videos.

Background technique

[0002] A video image contains not only the spatial information of the observed target but also its motion information over time, so it has the property of "integration of space and time". Because a video image jointly preserves the spatial and temporal information that reflects the nature of an object, it greatly improves humans' ability to recognize the objective world, and it has been shown to have great application value in remote sensing, military affairs, agriculture, medicine, biochemistry, and other fields.

[0003] Using video imaging equipment to obtain precise video images is very costly, and is...


Application Information

IPC(8): G06T3/40; H04N19/587; G06N3/08; G06N3/04
CPC: G06T3/4053; G06N3/04; G06N3/08; H04N19/587
Inventors: 董伟生, 巨丹, 石光明, 谢雪梅, 吴金建, 李甫
Owner: XIDIAN UNIV