
Video super-resolution method based on convolutional neural network and mixed resolution

A convolutional neural network and super-resolution technology, applied in the field of video super-resolution based on a convolutional neural network and mixed resolution. It can solve problems such as unsatisfactory super-resolution results, the lack of high-frequency information containing texture details, and increased overall computational redundancy across the video sequence, and achieves the effects of increasing nonlinear mapping capability, improving processing capability, and improving recovery capability.

Active Publication Date: 2019-08-13
UNIV OF ELECTRONICS SCI & TECH OF CHINA

AI Technical Summary

Problems solved by technology

[0004] At present, many video super-resolution methods have been proposed, but due to the characteristics of video image frames and the diversity of video scenes, the super-resolution results are not completely satisfactory.
Existing video super-resolution methods generally exploit the redundant information between adjacent frames to restore the high-frequency information missing from each image frame. However, both the current frame and its adjacent frames are low-resolution images, and they equally lack the high-frequency information containing texture details, which makes this part of the information difficult to recover during the super-resolution process. At the same time, existing video super-resolution methods generally take 5 consecutive low-resolution frames as the network input, so each frame is input repeatedly, which increases the overall computational redundancy of restoring the entire video sequence. In addition, in practical applications the degradation of image frames is complex and diverse, while most current video super-resolution methods consider only bicubic downsampling; once the video involves other degradation factors, the results of these super-resolution methods deteriorate.
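The gap between bicubic-only degradation and real-world degradation can be illustrated with a toy degradation model. The sketch below is illustrative only: average pooling stands in for bicubic downsampling, a 3×3 mean filter stands in for a blur kernel, and the Gaussian noise level is an arbitrary choice, none of which are specified by the patent.

```python
import numpy as np

def degrade(frame, scale=4, blur=False, noise_sigma=0.0):
    """Toy degradation model: optional blur, downsample, optional noise.

    Average pooling is a stand-in for bicubic downsampling; a real
    pipeline would use a proper bicubic kernel.
    """
    x = frame.astype(np.float64)
    if blur:  # 3x3 mean blur as a crude stand-in for a blur kernel
        pad = np.pad(x, 1, mode="edge")
        x = sum(pad[i:i + x.shape[0], j:j + x.shape[1]] / 9.0
                for i in range(3) for j in range(3))
    h = x.shape[0] // scale * scale
    w = x.shape[1] // scale * scale
    x = (x[:h, :w]
         .reshape(h // scale, scale, w // scale, scale)
         .mean(axis=(1, 3)))
    if noise_sigma > 0:  # sensor/compression noise stand-in
        x = x + np.random.default_rng(0).normal(0.0, noise_sigma, x.shape)
    return x

hr = np.arange(64, dtype=np.float64).reshape(8, 8)
lr_clean = degrade(hr, scale=4)                          # downsampling only
lr_hard = degrade(hr, scale=4, blur=True, noise_sigma=5.0)  # extra factors
print(lr_clean.shape)  # (2, 2)
```

A model trained only on `lr_clean`-style inputs has never seen the blur and noise in `lr_hard`-style inputs, which is the mismatch the passage describes.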



Examples


Embodiment Construction

[0032] The present invention will be further described in detail below in conjunction with the accompanying drawings and specific embodiments.

[0033] This embodiment provides a video super-resolution method based on a convolutional neural network and a mixed resolution model, and the specific steps are as follows:

[0034] Step 1, data preprocessing stage: collect Internet video sequences to form a data set containing different scenes such as sports, natural scenery, animal migration and building movement. In this embodiment, some video sequences in the data set are entirely uncompressed videos with a resolution of 3840×2160, which are down-sampled by a factor of 4 to a resolution of 960×540; the resolution of the other videos is around 1080×720.
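The 4× downsampling in Step 1 can be sketched as follows. The embodiment does not name its resampling filter, so average pooling is used here purely as a simple stand-in; the tiny demo frame is likewise only a placeholder for a 2160×3840×3 image.

```python
import numpy as np

SCALE = 4  # the embodiment's factor: 3840x2160 -> 960x540

def downsample(frame, scale=SCALE):
    """Average-pool a frame by `scale` in each spatial dimension.
    (Stand-in for whatever resampling filter the embodiment uses.)"""
    h, w, c = frame.shape
    h, w = h - h % scale, w - w % scale  # crop to a multiple of scale
    return (frame[:h, :w]
            .reshape(h // scale, scale, w // scale, scale, c)
            .mean(axis=(1, 3)))

# Target resolution implied by the embodiment: 3840/4 x 2160/4
print(3840 // SCALE, 2160 // SCALE)  # 960 540

tiny = np.ones((8, 16, 3))  # tiny stand-in for a full 4K frame
print(downsample(tiny).shape)  # (2, 4, 3)
```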

[0035] Step 2, data set division: all video sequences in the data set are randomly sampled. In this embodiment, 70 scenes are selected as the training data set, of which 64 scenes are used for training the network model, ...



Abstract

The invention belongs to the technical field of computer vision, and particularly provides a video super-resolution method based on a convolutional neural network and mixed resolution. The method comprises the following steps: firstly, periodically reserving a part of the frames in a video sequence to serve as high-resolution frame data, subjecting the other frames to degradation processing to serve as low-resolution image frames, and combining the two to form a mixed-resolution video; secondly, training a degradation network based on the convolutional neural network to extract the feature information of the degradation factors, and obtaining a degradation feature map of each low-resolution image frame with the trained model; and then, taking a low-resolution image frame together with its related high-resolution frames and degradation feature map as input data, performing training based on a convolutional neural network to obtain a super-resolution network model, and obtaining the output high-resolution video. The convolutional neural network and the mixed-resolution model are combined, and image texture details and degradation factors can be analyzed in a targeted manner, so that the super-resolution accuracy is improved.
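The first step of the abstract, forming the mixed-resolution video, can be sketched as below. The reservation period of 5 and the 4× nearest-neighbour degradation are illustrative assumptions, not values given in the abstract, and the real method would use a learned or filtered degradation rather than plain subsampling.

```python
import numpy as np

def build_mixed_resolution(frames, period=5):
    """Sketch of the mixed-resolution video: every `period`-th frame is
    reserved at high resolution; the rest are degraded to low resolution."""
    seq = []
    for i, f in enumerate(frames):
        if i % period == 0:
            seq.append(("HR", f))            # periodically reserved HR frame
        else:
            seq.append(("LR", f[::4, ::4]))  # crude 4x degradation stand-in
    return seq

frames = [np.zeros((8, 8)) for _ in range(10)]
seq = build_mixed_resolution(frames)
print([tag for tag, _ in seq])
```

The reserved HR frames then act as in-sequence references whose texture detail the super-resolution network can transfer to the degraded frames.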

Description

Technical Field

[0001] The invention belongs to the technical field of computer vision, and in particular relates to a video super-resolution method based on a convolutional neural network and mixed resolution.

Background Technology

[0002] With the continuous development of multimedia technology, video applications such as online live streaming and high-resolution television have gradually become mainstream media for people's daily life and entertainment. However, video systems are often limited by various objective conditions, including video acquisition equipment with insufficient precision, limited network bandwidth, and terminals with insufficient processing capability, which make it difficult for a video system to provide sufficient high-resolution video sources.

[0003] In order to solve the above problems, super-resolution technology can be applied in the video system, so that video applications under limited objective conditions can also provide high-quality video ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T3/40; G06K9/62
CPC: G06T3/4007; G06T3/4053; G06F18/2135; G06F18/214
Inventors: 傅志中, 敬琳萍, 徐莉娟, 李晓峰, 徐进, 刘洪盛
Owner: UNIV OF ELECTRONICS SCI & TECH OF CHINA