Video super-resolution reconstruction method based on deep learning and self-similarity

A super-resolution reconstruction and self-similarity technology, applied in the field of video super-resolution reconstruction based on deep learning and self-similarity, which can solve problems such as long training time, low reconstruction magnification, and poor reconstruction quality.

Inactive Publication Date: 2016-12-21
BEIJING UNIV OF POSTS & TELECOMM

AI Technical Summary

Problems solved by technology

The self-similarity feature provides internal examples that are highly correlated with the low-resolution input, so super-resolution methods based on this internal similarity require neither an additional training set nor a long training time. However, when the internal similar blocks are insufficient, mismatched internal examples tend to cause visual artifacts.
[0006] In short, the video super-resolution reconstruction methods in the prior art have disadvantages such as poor reconstruction effect and low reconstruction magnification.
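
To make the internal-similarity idea above concrete, here is a minimal Python sketch, not taken from the patent, of matching a low-resolution patch against internal examples drawn from a downscaled copy of the same frame. The function names, the correlation measure, and the `min_similarity` threshold are all assumptions introduced for illustration; when even the best internal example scores below the threshold, the search reports failure, which corresponds to the insufficient-internal-similarity case that tends to produce artifacts.

```python
import numpy as np

def extract_patches(img, patch_size, stride):
    """Collect all patch_size x patch_size patches from a 2-D image."""
    patches, coords = [], []
    h, w = img.shape
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(img[y:y + patch_size, x:x + patch_size].ravel())
            coords.append((y, x))
    return np.array(patches), coords

def find_internal_match(lr_frame, query_patch, scale=0.5, patch_size=5, min_similarity=0.9):
    """Search a downscaled copy of the same frame for the patch most similar to
    query_patch (normalized correlation). Returns None when even the best internal
    example is a poor match -- the 'insufficient internal similarity' case."""
    # Naive subsampling of the frame itself provides the internal examples.
    step = int(1 / scale)
    small = lr_frame[::step, ::step]
    candidates, coords = extract_patches(small, patch_size, stride=1)
    q = query_patch.ravel().astype(np.float64)
    q = (q - q.mean()) / (q.std() + 1e-8)
    c = candidates.astype(np.float64)
    c = (c - c.mean(axis=1, keepdims=True)) / (c.std(axis=1, keepdims=True) + 1e-8)
    scores = c @ q / q.size          # normalized cross-correlation per candidate
    best = int(np.argmax(scores))
    if scores[best] < min_similarity:
        return None                   # no sufficiently similar internal block
    return coords[best], scores[best]

# Example: a synthetic frame with repeated structure
frame = np.tile(np.arange(16, dtype=np.float64).reshape(4, 4), (8, 8))
print(find_internal_match(frame, frame[:5, :5]))
```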

Method used



Examples


Embodiment Construction

[0049] In order to make the object, technical solution and advantages of the present invention clearer, the present invention will be described in further detail below in conjunction with specific embodiments and with reference to the accompanying drawings.

[0050] In order to make the method of the embodiment of the present invention easy to understand, and to make the following discussion more convenient, some basic concepts are defined here first; these concepts will not be described in detail again below.

[0051] Temporal neighborhood: If audio is not considered, a video can be regarded as a sequence of video frames, each corresponding to a moment in time. The so-called temporal neighborhood refers to a neighborhood selected in the time dimension, centered on the moment of a given video frame.

[0052] Spatial neighborhood: A video frame is a two-dimensional image. If a point is chosen on the image, a neighborhood can be selected in the spatial dimensions of the video frame, centered on that point.
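
To make the two neighborhood definitions concrete, the following is a small Python sketch, not part of the patent text; the helper names (`temporal_neighborhood`, `spatial_neighborhood`) and the radius parameters are invented for illustration.

```python
import numpy as np

def temporal_neighborhood(video, t, radius):
    """Frames within `radius` time steps of frame t.
    `video` is an array of shape (num_frames, height, width)."""
    start = max(0, t - radius)
    end = min(video.shape[0], t + radius + 1)
    return video[start:end]

def spatial_neighborhood(frame, y, x, radius):
    """Square window of side 2*radius+1 centered on pixel (y, x),
    clipped at the frame borders."""
    h, w = frame.shape
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    return frame[y0:y1, x0:x1]

# Example: 30 frames of 64x64 video
video = np.random.rand(30, 64, 64)
print(temporal_neighborhood(video, t=10, radius=2).shape)           # (5, 64, 64)
print(spatial_neighborhood(video[10], y=32, x=32, radius=3).shape)  # (7, 7)
```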



Abstract

The invention discloses a video super-resolution reconstruction method based on deep learning and self-similarity, which belongs to the field of video processing technologies. The method comprises the steps of video frame extraction, preliminary estimation, preliminary reconstruction, secondary reconstruction, and video frame integration. It jointly exploits the prior constraints provided by an external training set and the internal self-similarity of the video, so it reconstructs well smooth regions, irregular structures that seldom occur in the video frame sequence, and unique or singular features that rarely occur in the external training set but recur repeatedly in the video frame sequence. In addition, the method does not rely on precise sub-pixel motion estimation, so it can adapt to complex motion scenes and achieve super-resolution reconstruction at large magnification factors.
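
The abstract names five stages: video frame extraction, preliminary estimation, preliminary reconstruction, secondary reconstruction, and video frame integration. The skeleton below is only a hedged sketch of how such stages might be chained in Python; the stage functions are placeholders (the deep-learning pass and the self-similarity pass are identity stand-ins), and the simple pixel replication used for the preliminary estimate is an assumption, not a detail confirmed by this summary.

```python
import numpy as np

def extract_frames(video):
    """Video frame extraction: split the clip into individual frames."""
    return [frame for frame in video]

def preliminary_estimate(frame, scale):
    """Preliminary estimation: cheap upscaling to the target size.
    Pixel replication (nearest-neighbor) is used purely as a placeholder."""
    return np.repeat(np.repeat(frame, scale, axis=0), scale, axis=1)

def preliminary_reconstruction(frame):
    """Preliminary reconstruction: placeholder for the learning-based pass
    (e.g. a deep model trained on an external set)."""
    return frame  # identity stand-in

def secondary_reconstruction(frame, neighborhood):
    """Secondary reconstruction: placeholder for the self-similarity pass that
    would refine the frame using internal examples from its neighborhood."""
    return frame  # identity stand-in

def integrate_frames(frames):
    """Video frame integration: reassemble frames into a video array."""
    return np.stack(frames)

def super_resolve(video, scale=2, radius=2):
    frames = extract_frames(video)
    out = []
    for t, frame in enumerate(frames):
        neighborhood = frames[max(0, t - radius): t + radius + 1]
        est = preliminary_estimate(frame, scale)
        rec1 = preliminary_reconstruction(est)
        rec2 = secondary_reconstruction(rec1, neighborhood)
        out.append(rec2)
    return integrate_frames(out)

lr_video = np.random.rand(10, 32, 32)
print(super_resolve(lr_video).shape)  # (10, 64, 64)
```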

Description

Technical Field

[0001] The invention relates to the technical field of video processing, in particular to a video super-resolution reconstruction method based on deep learning and self-similarity.

Background Technique

[0002] Super-resolution reconstruction refers to a technology that uses a computer to process a low-resolution (Low Resolution, LR) image or video to obtain a high-resolution (High Resolution, HR) image or video. Super-resolution reconstruction can provide more detailed information than traditional interpolation methods, which can greatly improve the quality of images or videos.

[0003] The current super-resolution reconstruction methods mainly include reconstruction methods based on learning mechanism and reconstruction methods based on self-similarity.

[0004] The super-resolution method based on the learning mechanism can adapt to a large super-resolution multiple, but because it relies on a large-scale external training set, it cannot guarantee that an...
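
As a point of reference for paragraph [0002], the snippet below contrasts a traditional interpolation upscale (cubic-spline resampling via scipy) with a placeholder restoration step standing in for a learning-based or self-similarity-based method; it is an illustrative sketch only, not the patent's method.

```python
import numpy as np
from scipy.ndimage import zoom

def interpolation_upscale(lr_image, factor):
    """Traditional interpolation baseline: cubic-spline upscaling.
    This only resamples existing pixels and adds no new detail."""
    return zoom(lr_image, factor, order=3)

def learned_restoration(sr_estimate):
    """Stand-in for the step that a learning-based or self-similarity-based
    method would use to restore high-frequency detail on top of the estimate."""
    return sr_estimate  # identity placeholder

lr = np.random.rand(32, 32)
hr_estimate = learned_restoration(interpolation_upscale(lr, factor=4))
print(hr_estimate.shape)  # (128, 128)
```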

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T3/40, G06T5/50
CPC: G06T3/4053, G06T5/50
Inventor: 杜军平, 梁美玉, 李玲慧
Owner: BEIJING UNIV OF POSTS & TELECOMM