Video super-resolution reconstruction method based on multi-memory and mixed loss

A super-resolution reconstruction technology based on a mixed loss, applied in the field of super-resolution reconstruction constrained by a mixed loss function. It addresses problems such as the limited performance of shallow network structures, the limited effect of existing motion compensation, and the heavy video-memory consumption of the sub-pixel motion compensation layer, and achieves fast convergence and enhanced feature expressiveness.

Active Publication Date: 2019-01-01
WUHAN UNIV
Cites: 3; Cited by: 52

AI Technical Summary

Problems solved by technology

However, the sub-pixel motion compensation layer consumes a large amount of video memory, and its effect is limited.
Liu et al. designed a temporally adaptive neural network to adaptively learn the optimal scale of temporal dependence, but it uses only a simple three-layer convolutional structure, which limits its performance.

Examples

Detailed Description of the Embodiments

[0017] To help those of ordinary skill in the art understand and implement the present invention, it is described in further detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the embodiments described here are only intended to illustrate and explain the present invention, not to limit it.

[0018] Please refer to Figure 1, which shows the flow of the super-resolution reconstruction method provided by the present invention.

[0019] The video super-resolution reconstruction method based on multi-memory and mixed loss comprises the following steps:

[0020] Step 1: Select a number of video sequences as training samples. From the same position in each video frame, crop an image patch of N×N pixels as the high-resolution learning target, and downsample it by a factor of r to obtain the corresponding low-resolution input.
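As a concrete illustration of Step 1, the sketch below crops an N×N high-resolution target patch from a fixed position in each frame and downsamples it by a factor of r. The OpenCV calls, the default values N = 64 and r = 4, the crop position, and the file naming are illustrative assumptions, not values taken from the patent.

```python
import cv2
import numpy as np


def make_training_pair(frame: np.ndarray, top: int, left: int,
                       N: int = 64, r: int = 4):
    """Crop an N x N high-resolution target at a fixed position and
    downsample it by a factor of r to get the low-resolution input."""
    hr_patch = frame[top:top + N, left:left + N]           # HR learning target
    lr_patch = cv2.resize(hr_patch, (N // r, N // r),
                          interpolation=cv2.INTER_CUBIC)   # LR network input
    return hr_patch, lr_patch


# Apply the same crop window to every frame of a clip so that
# consecutive frames stay spatially aligned.
frames = [cv2.imread(f"clip/frame_{i:03d}.png") for i in range(5)]
pairs = [make_training_pair(f, top=100, left=100) for f in frames]
```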

Abstract

The invention discloses a video super-resolution reconstruction method based on multi-memory and mixed loss, which comprises two parts: an optical flow network and an image reconstruction network. In the optical flow network, the optical flow between each current frame and the reference frame is calculated for the input frames, and this flow is used as motion compensation so that the current frame is warped to be as similar to the reference frame as possible. In the image reconstruction network, the compensated frames are input into the network in sequence, and the network adopts multi-memory residual blocks to extract image features, so that later input frames can receive the feature-map information of earlier frames. Finally, the output low-resolution feature image is magnified by sub-pixel convolution and added to the image magnified by bicubic interpolation to obtain the final high-resolution video frame. A hybrid loss function is used to train the optical flow network and the image reconstruction network simultaneously during training. The invention greatly enhances the feature expression ability of inter-frame information fusion and can reconstruct high-resolution video with rich details.
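To make the abstract's pipeline easier to follow, here is a minimal PyTorch-style sketch of that flow under stated assumptions: frames are warped toward the reference frame with optical flow, the compensated frame is passed through residual blocks, the low-resolution features are magnified by sub-pixel convolution (PixelShuffle), and a bicubic-interpolated copy of the input is added as a skip branch. The channel counts, block count, flow convention, and the plain residual block are assumptions; the patent's multi-memory residual blocks additionally pass feature maps from earlier input frames to later ones, and the optical flow network itself is not shown here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp(frame, flow):
    """Warp `frame` (B,C,H,W) toward the reference frame using `flow` (B,2,H,W).

    Assumes flow[:, 0] is the horizontal and flow[:, 1] the vertical
    displacement in pixels.
    """
    _, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(frame.device)   # (2,H,W)
    coords = base.unsqueeze(0) + flow                              # add motion
    # normalize pixel coordinates to [-1, 1] as required by grid_sample
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                           # (B,H,W,2)
    return F.grid_sample(frame, grid, align_corners=True)


class ResBlock(nn.Module):
    """Plain residual block (a stand-in for the patent's multi-memory block)."""
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)


class Reconstructor(nn.Module):
    """Residual feature extraction -> sub-pixel upscaling + bicubic skip branch."""
    def __init__(self, in_ch=3, ch=64, r=4, n_blocks=5):
        super().__init__()
        self.head = nn.Conv2d(in_ch, ch, 3, padding=1)
        self.blocks = nn.Sequential(*[ResBlock(ch) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(ch, in_ch * r * r, 3, padding=1)
        self.shuffle = nn.PixelShuffle(r)   # sub-pixel magnification by r
        self.r = r

    def forward(self, lr_frame):
        feat = self.blocks(self.head(lr_frame))
        detail = self.shuffle(self.tail(feat))
        # bicubic skip branch: the network only has to learn the missing detail
        base = F.interpolate(lr_frame, scale_factor=self.r, mode="bicubic",
                             align_corners=False)
        return base + detail
```

Under the same reading, the hybrid loss would combine a reconstruction term on the final high-resolution output with a warping term on the flow-compensated frames, so that both networks receive gradients in one training pass; the exact terms and weights are defined in the full patent text, not here.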

Description

Technical Field

[0001] The invention belongs to the technical field of digital image processing and relates to a video super-resolution reconstruction method, in particular to a super-resolution reconstruction method constrained by a multi-memory mixed loss function.

Background Technique

[0002] In recent years, with the emergence of high-definition display devices (such as HDTV) and of ultra-high-definition video formats such as 4K (3840×2160) and 8K (7680×4320), the demand for reconstructing high-resolution video from low-resolution video has been growing day by day. Video super-resolution refers to the technology of reconstructing high-resolution video from a given low-resolution video, and it is widely used in high-definition television, satellite imaging, video surveillance and other fields.

[0003] Currently, the most widely used super-resolution methods are interpolation-based methods, such as nearest-neighbor interpolation, bilinear interpolation, bicubic interpolation, etc.
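For reference, the interpolation-based baseline mentioned in the background amounts to a single resize call. The sketch below upscales one low-resolution frame by 4× with bicubic interpolation using OpenCV; the factor and file names are only illustrative.

```python
import cv2

# Interpolation-only baseline (no learning): upscale one low-resolution
# frame by 4x with bicubic interpolation.
lr = cv2.imread("frame_lr.png")
h, w = lr.shape[:2]
sr_bicubic = cv2.resize(lr, (w * 4, h * 4), interpolation=cv2.INTER_CUBIC)
cv2.imwrite("frame_bicubic_x4.png", sr_bicubic)
```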

Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T3/40
CPC: G06T3/4007; G06T3/4076
Inventors: 王中元 (Wang Zhongyuan), 易鹏 (Yi Peng), 江奎 (Jiang Kui), 韩镇 (Han Zhen)
Owner: WUHAN UNIV