
Image super-resolution reconstruction method

A super-resolution image reconstruction technology applied in the field of image processing. It addresses problems such as the inability to handle multiple magnification factors, poor reconstruction quality, and blurred edge information in the generated image, with the effects of enhancing reconstruction quality and improving convergence speed.

Active Publication Date: 2018-09-18
CHINA UNIV OF MINING & TECH
Cited by: 37

AI Technical Summary

Problems solved by technology

[0006] In view of the above analysis, the embodiments of the present invention aim to provide an image super-resolution reconstruction method that solves the prior art's problems of blurred edge information in the generated image, inability to handle multiple magnification factors, and poor reconstruction quality.



Examples


Embodiment 1

[0056] A specific embodiment of the present invention discloses a method for image super-resolution reconstruction, comprising the following steps:

[0057] S1. Construct a convolutional neural network for training and learning.

[0058] The convolutional neural network comprises, from top to bottom, an LR feature extraction layer, a nonlinear mapping layer, and an HR reconstruction layer. Specifically, the LR feature extraction layer performs gradient feature extraction on the input LR image to obtain an LR feature map; the nonlinear mapping layer applies multiple nonlinear mappings to the LR feature map to obtain an HR feature map; and the HR reconstruction layer performs image reconstruction on the HR feature map to obtain an HR reconstruction image.
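The three-stage pipeline described above (LR feature extraction → nonlinear mapping → HR reconstruction) can be sketched as follows. This is a minimal NumPy illustration, not the patent's actual network: the channel counts, kernel sizes, ReLU activation, and pre-upscaled input are all assumptions for demonstration, and a real implementation would use a deep-learning framework.

```python
import numpy as np

def conv2d_same(x, kernels, bias):
    """Naive 'same'-padded 2-D convolution over a stack of feature maps.

    x: (C_in, H, W), kernels: (C_out, C_in, k, k), bias: (C_out,).
    Returns (C_out, H, W). Slow, but dependency-free.
    """
    c_out, c_in, k, _ = kernels.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    h, w = x.shape[1:]
    out = np.zeros((c_out, h, w))
    for o in range(c_out):
        for i in range(c_in):
            for dy in range(k):
                for dx in range(k):
                    out[o] += kernels[o, i, dy, dx] * xp[i, dy:dy + h, dx:dx + w]
        out[o] += bias[o]
    return out

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
def layer(c_out, c_in, k):
    return rng.normal(0, 0.1, (c_out, c_in, k, k)), np.zeros(c_out)

# Hypothetical channel/kernel sizes -- the patent does not fix them here.
w1, b1 = layer(8, 1, 3)   # LR feature extraction
w2, b2 = layer(8, 8, 3)   # nonlinear mapping
w3, b3 = layer(1, 8, 3)   # HR reconstruction

lr = rng.random((1, 16, 16))              # LR input (assumed pre-upscaled)
feat = relu(conv2d_same(lr, w1, b1))      # LR feature map
mapped = relu(conv2d_same(feat, w2, b2))  # HR feature map
sr = conv2d_same(mapped, w3, b3)          # HR reconstruction image
print(sr.shape)  # (1, 16, 16)
```

With 'same' padding at every stage, the reconstruction keeps the spatial size of its input, which is why the LR image is assumed pre-upscaled here.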

[0059] S2. Use the convolutional neural network to train on the paired training LR images and training HR images in the input training library, and carry out training and learning of at ...

Embodiment 2

[0069] In another embodiment based on the above method, the convolutional neural network further includes an HR feature extraction layer, a loss function layer, a logic judgment module, and an HR gradient prior extraction layer. The HR feature extraction layer, the loss function layer, and the logic judgment module are arranged in sequence after the HR reconstruction layer; the HR gradient prior extraction layer is placed before the loss function layer, in parallel with the HR feature extraction layer.

[0070] The HR feature extraction layer performs gradient feature extraction on the HR reconstruction image output by the HR reconstruction layer to obtain the HR gradient feature map.
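Gradient feature extraction of this kind is commonly implemented with fixed derivative filters. The sketch below uses 3×3 Sobel filters as a stand-in; the patent does not specify its filters at this point, so the kernel choice is an assumption.

```python
import numpy as np

# Sobel derivative kernels (an assumed choice of gradient filter).
SOBEL_X = np.array([[-1., 0., 1.],
                    [-2., 0., 2.],
                    [-1., 0., 1.]])
SOBEL_Y = SOBEL_X.T

def gradient_features(img):
    """Horizontal/vertical gradient maps via 3x3 filters, 'same' padding."""
    h, w = img.shape
    p = np.pad(img, 1)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            win = p[dy:dy + h, dx:dx + w]
            gx += SOBEL_X[dy, dx] * win
            gy += SOBEL_Y[dy, dx] * win
    return gx, gy

# A vertical step edge: left half 0, right half 1.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
gx, gy = gradient_features(img)
print(float(np.abs(gx).max()))  # 4.0 -- strong response across the edge
```

The horizontal-gradient map responds strongly where intensity changes across columns, which is exactly the edge information the loss in this embodiment is meant to preserve.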

[0071] The HR gradient prior extraction layer extracts the gradient prior information from the training HR images in the training database (the resolution is the same as that of the HR reconstruction image, which is only used in the training process), and obtains the HR gr...
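A loss that combines pixel fidelity with the gradient prior described above might look like the following sketch. The forward-difference gradient and the weight `lam` are illustrative assumptions, not the patent's actual loss function.

```python
import numpy as np

def grad(img):
    """Forward-difference gradients (a simple stand-in for the patent's
    gradient extraction layers)."""
    gx = img[:, 1:] - img[:, :-1]
    gy = img[1:, :] - img[:-1, :]
    return gx, gy

def sr_loss(sr, hr, lam=0.1):
    """MSE between reconstruction and ground truth, plus a gradient-prior
    term comparing their gradient maps. `lam` is a hypothetical weight."""
    pixel = np.mean((sr - hr) ** 2)
    sgx, sgy = grad(sr)
    hgx, hgy = grad(hr)
    gradient = np.mean((sgx - hgx) ** 2) + np.mean((sgy - hgy) ** 2)
    return pixel + lam * gradient

rng = np.random.default_rng(1)
hr = rng.random((8, 8))
print(sr_loss(hr, hr))            # perfect reconstruction -> 0.0
print(sr_loss(hr + 0.1, hr) > 0)  # any pixel error raises the loss -> True
```

Because the second term penalizes mismatched gradient maps directly, minimizing this loss pushes the network toward reconstructions with sharp, correctly placed edges rather than merely low average pixel error.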

Embodiment 3

[0110] As shown in Figure 3, in another embodiment based on the above method, the convolutional network paths at the ×2, ×3, and ×4 scales share the nonlinear mapping layer. By sharing the weights and receptive field of the nonlinear mapping layer, the same set of filters can be used for each path, and information transfer across the scales provides mutual regularization guidance, which greatly simplifies the convolutional neural network and reduces the number of parameters.
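The parameter saving from sharing one nonlinear mapping trunk across the ×2/×3/×4 paths can be illustrated with a quick count. The layer sizes below are hypothetical; only the sharing structure follows the text.

```python
def n_params(c_out, c_in, k):
    """Weights plus biases for one convolutional layer."""
    return c_out * c_in * k * k + c_out

scales = [2, 3, 4]
# Hypothetical sizes: per-scale feature-extraction and reconstruction layers,
# plus three 3x3 mapping layers (convolutional layers 21, 22, 23).
per_scale = n_params(8, 1, 3) + n_params(1, 8, 3)  # extraction + reconstruction
mapping = 3 * n_params(8, 8, 3)                    # the nonlinear mapping trunk

shared = len(scales) * per_scale + mapping         # one trunk for all scales
unshared = len(scales) * (per_scale + mapping)     # one trunk per scale
print(shared, unshared)  # 2211 5715
```

Even at these toy sizes the shared design needs well under half the parameters, and the gap widens as the mapping trunk deepens, since the trunk dominates the parameter count.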

[0111] Preferably, the nonlinear mapping layer includes three convolutional layers; for example, these three convolutional layers are denoted convolutional layers 21, 22, and 23. The other layers are numbered in the same way and are not described one by one here.

[0112] In the LR feature extraction layer (convolutional layers 11, 12, 13), the output is expressed as:

[0113]

[01...



Abstract

The invention relates to an image super-resolution reconstruction method, belongs to the field of image processing technology, and solves the prior-art problems that the edge information of the generated image is blurred, multiple magnification factors cannot be handled, and the reconstruction quality is poor. The method comprises the steps of: constructing a convolutional neural network for training and learning, the network comprising, from top to bottom, an LR feature extraction layer, a nonlinear mapping layer, and an HR reconstruction layer; training the network on input paired LR and HR images at no fewer than two magnification scales simultaneously, and acquiring the optimal parameter set of the convolutional neural network and the scale adjustment factors at the corresponding magnification scales; and, after training is completed, inputting the target LR image and the target magnification factor into the convolutional neural network to obtain the target HR image. The method has the advantages that the convolutional neural network trains quickly and that, once training is completed, HR images at any magnification factor within the trained scales can be acquired in real time.

Description

technical field

[0001] The invention relates to the technical field of image processing, in particular to an image super-resolution reconstruction method.

Background technique

[0002] Image super-resolution reconstruction is a method of directly restoring high-resolution (HR) images from low-resolution (LR) images, which is needed in many practical applications, such as medical image analysis, computer vision, and remote sensing. At present, image super-resolution reconstruction methods are mainly divided into three categories: interpolation-based, reconstruction-based, and learning-based.

[0003] Interpolation-based image super-resolution reconstruction methods are generally simple and easy to implement, but they struggle to reproduce detailed information such as texture, so the generated image is relatively blurred.

[0004] Reconstruction-based image super-resolution reconstruction methods are based on degradation models and use prior knowled...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06T3/40; G06N3/04; G06N3/08
CPC: G06N3/08; G06N3/084; G06T3/4053; G06N3/045
Inventor 程德强蔡迎春陈亮亮赵凯姚洁于文洁赵广源刘海
Owner CHINA UNIV OF MINING & TECH