A super-resolution reconstruction method based on feature fusion of dual-channel convolution network

A super-resolution reconstruction and convolutional network technology, applied in the field of super-resolution reconstruction based on dual-channel convolutional network feature fusion. It addresses problems such as insufficient extraction of local image features, degraded image reconstruction quality, and reduced robustness, and achieves the effects of eliminating preprocessing, simple and convenient reconstruction, and improved robustness.

Pending Publication Date: 2019-03-22
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

The problem of image super-resolution is an ill-posed problem. On the one hand, super-resolution aims to recover more high-frequency features, so that the texture and contours of the reconstructed image are clearer and the details are richer; on the other hand, super-resolution must not create false details at the expense of image accuracy.
[0004] However, as far as deep learning is concerned, image features can be abstracted into two concepts: local features and global features, and networks such as VGG and ResNet do not fully extract the local features of images.
Although these classical networks show excellent performance in classification, detection and other tasks, the super-resolution problem requires more complete local features to provide a reference for prediction, which has become a constraint on traditional networks in the field of super-resolution.
[0005] At the same time, traditional classic networks adopt a single-channel architecture and rarely use feature fusion. Considering that the features extracted by different convolution kernels of a convolutional network are complementary, the lack of a reasonable feature fusion module leads to a decline in image reconstruction quality and a decrease in robustness.



Examples


Embodiment 1

[0038] The embodiment of the present invention proposes a super-resolution reconstruction method based on dual-channel convolutional network feature fusion, see Figures 1 and 2. The method includes the following steps:

[0039] 101: Build a dual-channel convolutional network based on a dense convolutional network with different convolution kernels;

[0040] Among them, the dual-channel convolutional network includes two sub-channels. Each sub-channel adopts a densely connected network structure generated by cascading multiple densely connected blocks. Each densely connected block consists of a 1×1 convolutional layer, a 3×3 convolutional layer and a skip connection, and a PReLU layer is used as the nonlinear activation function before each convolutional layer.
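As a concrete illustration of this structure, the following is a minimal PyTorch sketch of one densely connected block, assuming pre-activation PReLU, a 1×1 bottleneck followed by a 3×3 convolution, and channel concatenation as the skip connection; the channel counts, growth rate and number of layers are illustrative assumptions, not values taken from the patent.

```python
# Minimal sketch of one densely connected block (channel counts, growth rate and
# number of layers are illustrative assumptions).
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """PReLU -> 1x1 conv (bottleneck) -> PReLU -> 3x3 conv, pre-activation style."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.body = nn.Sequential(
            nn.PReLU(),
            nn.Conv2d(in_channels, 4 * growth_rate, kernel_size=1),
            nn.PReLU(),
            nn.Conv2d(4 * growth_rate, growth_rate, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # Dense (skip) connection: concatenate the input with the new features.
        return torch.cat([x, self.body(x)], dim=1)

class DenseBlock(nn.Module):
    """A cascade of dense layers; each layer sees all preceding feature maps."""
    def __init__(self, in_channels, growth_rate=16, num_layers=4):
        super().__init__()
        layers = []
        channels = in_channels
        for _ in range(num_layers):
            layers.append(DenseLayer(channels, growth_rate))
            channels += growth_rate
        self.block = nn.Sequential(*layers)
        self.out_channels = channels

    def forward(self, x):
        return self.block(x)

if __name__ == "__main__":
    x = torch.randn(1, 64, 48, 48)
    block = DenseBlock(64)
    print(block(x).shape)  # torch.Size([1, 128, 48, 48]) with growth_rate=16, 4 layers
```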

[0041] 102: Use the weighted L1 norm as the loss function; the image is super-resolution reconstructed after each sub-channel, and the loss function is calculated...
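A minimal sketch of how such a weighted loss could be computed is given below, assuming two sub-channel outputs and one fused output; the weights 0.25/0.25/0.5 are illustrative assumptions, since the patent text here does not state concrete values.

```python
# Sketch of a weighted L1 loss: the total loss is a weighted sum of the L1 losses of
# the two sub-channel reconstructions and of the fused output (weights are assumed).
import torch
import torch.nn.functional as F

def weighted_l1_loss(out_channel1, out_channel2, out_fused, target,
                     w1=0.25, w2=0.25, w_fused=0.5):
    loss1 = F.l1_loss(out_channel1, target)
    loss2 = F.l1_loss(out_channel2, target)
    loss_fused = F.l1_loss(out_fused, target)
    return w1 * loss1 + w2 * loss2 + w_fused * loss_fused

if __name__ == "__main__":
    hr = torch.randn(2, 3, 96, 96)
    sr1, sr2, sr = torch.randn_like(hr), torch.randn_like(hr), torch.randn_like(hr)
    print(weighted_l1_loss(sr1, sr2, sr, hr).item())
```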

Embodiment 2

[0055] The scheme in Embodiment 1 is further described below in conjunction with specific mathematical formulas and examples; see the following description for details:

[0056] 201: constructing a data set;

[0057] Wherein, the step 201 includes:

[0058] Step 1: Divide the data set. The data set used is DIV2K (DIVerse 2K resolution images). Each sample includes high-resolution and low-resolution images at different scales (used as training images); the low-resolution image is generated by a degradation method. The DIV2K data set as divided in the embodiment of the present invention includes 800 training images and 100 validation images.

[0059] Wherein, the degradation method may be a commonly used algorithm such as bicubic interpolation downsampling or bilinear downsampling.
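A minimal sketch of this degradation step, assuming bicubic downsampling with Pillow and an illustrative ×4 scale factor (file name and scale are placeholders):

```python
# Sketch of generating a low-resolution training image by bicubic downsampling,
# one of the degradation methods mentioned above (scale factor x4 is an assumption).
from PIL import Image

def degrade_bicubic(hr_path, scale=4):
    hr = Image.open(hr_path).convert("RGB")
    w, h = hr.size
    lr = hr.resize((w // scale, h // scale), resample=Image.BICUBIC)
    return hr, lr

# Example (hypothetical file name):
# hr_img, lr_img = degrade_bicubic("0001.png", scale=4)
```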

[0060] Step 2: Image cropping. The training images are cropped into several 96×96 image blocks, which are used as the input of t...

Embodiment 3

[0085] The schemes in Embodiments 1 and 2 are further described below in conjunction with specific experimental data; see the following description for details:

[0086] 301: Data preparation:

[0087] Among them, this step includes:

[0088] (a) Divide the dataset:

[0089] This embodiment uses the DIV2K data set, including 800 training images, 100 validation images, and 100 test images. Since the labels of the test set are not publicly available, this embodiment uses the validation set as the test evaluation data.

[0090] (b) Randomly crop 800 training images into 96×96 image blocks, which are used as network input in the training phase.
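A minimal sketch of this cropping step is shown below; since the text here does not specify whether the 96×96 patch is taken from the low-resolution or the high-resolution image, the sketch only shows the basic random crop.

```python
# Sketch of step (b): randomly cropping a 96x96 patch from a training image to use
# as network input during training.
import random
from PIL import Image

def random_patch(img: Image.Image, size: int = 96) -> Image.Image:
    x = random.randint(0, img.width - size)
    y = random.randint(0, img.height - size)
    return img.crop((x, y, x + size, y + size))
```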

[0091] 302: Network structure construction;

[0092] The network structure in the embodiment of the present invention can be divided into: a feature pre-extraction module, two sub-channels (each containing several dense convolution blocks), a feature fusion module, a skip connection, and a feature reconstruction module (including feature u...
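For orientation, the following is a high-level PyTorch sketch of such an architecture, with plain convolution stacks standing in for the dense blocks of each sub-channel; the channel counts, kernel sizes, 1×1 fusion convolution and pixel-shuffle upsampling are illustrative assumptions, not details taken from the patent.

```python
# High-level sketch: feature pre-extraction, two sub-channels (different kernel
# sizes), feature fusion, a long skip connection, and a reconstruction module.
import torch
import torch.nn as nn

class SubChannel(nn.Module):
    """Plain conv stack standing in for the dense blocks of one sub-channel."""
    def __init__(self, channels, kernel_size, num_blocks=4):
        super().__init__()
        pad = kernel_size // 2
        layers = []
        for _ in range(num_blocks):
            layers += [nn.Conv2d(channels, channels, kernel_size, padding=pad), nn.PReLU()]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)

class DualChannelSR(nn.Module):
    def __init__(self, channels=64, scale=4):
        super().__init__()
        self.pre = nn.Conv2d(3, channels, 3, padding=1)      # feature pre-extraction
        self.branch_a = SubChannel(channels, kernel_size=3)  # sub-channel, 3x3 kernels
        self.branch_b = SubChannel(channels, kernel_size=5)  # sub-channel, 5x5 kernels
        self.fuse = nn.Conv2d(2 * channels, channels, 1)     # feature fusion (1x1 conv)
        self.upsample = nn.Sequential(                       # feature reconstruction
            nn.Conv2d(channels, channels * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x):
        feat = self.pre(x)
        fused = self.fuse(torch.cat([self.branch_a(feat), self.branch_b(feat)], dim=1))
        fused = fused + feat                                 # long skip connection
        return self.upsample(fused)

if __name__ == "__main__":
    lr = torch.randn(1, 3, 24, 24)
    print(DualChannelSR()(lr).shape)  # torch.Size([1, 3, 96, 96])
```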



Abstract

The invention discloses a super-resolution reconstruction method based on feature fusion of a dual-channel convolution network, which comprises the following steps. A dual-channel convolution network is built based on dense convolution networks with different convolution kernels. The dual-channel convolution network comprises two sub-channels; each sub-channel adopts a densely connected network structure generated by cascading a plurality of dense connection blocks, each dense connection block is composed of a 1×1 convolution layer, a 3×3 convolution layer and a skip connection, and a PReLU layer is used as the nonlinear activation function in front of each convolution layer. The weighted L1 norm is used as the loss function: after each sub-channel, the image is reconstructed with super-resolution, the loss function is calculated, and the parameters of the model are optimized; the total loss function is the weighted sum of the loss computed on each sub-channel output image and the loss computed on the whole output. A low-resolution image of any size is input, the trained model is loaded, and the reconstructed high-resolution image is output.
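A minimal sketch of this final inference step, reusing the hypothetical DualChannelSR module from the earlier sketch; the checkpoint and image paths are placeholders.

```python
# Sketch of inference: load a trained model and reconstruct a low-resolution image
# of arbitrary size (model class, checkpoint and file names are assumptions).
import torch
import torchvision.transforms.functional as TF
from PIL import Image

def super_resolve(model, lr_path, device="cpu"):
    model.eval().to(device)
    lr = TF.to_tensor(Image.open(lr_path).convert("RGB")).unsqueeze(0).to(device)
    with torch.no_grad():
        sr = model(lr).clamp(0, 1)
    return TF.to_pil_image(sr.squeeze(0).cpu())

# Example (hypothetical paths):
# model = DualChannelSR()
# model.load_state_dict(torch.load("model.pth", map_location="cpu"))
# super_resolve(model, "input_lr.png").save("output_sr.png")
```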

Description

Technical field
[0001] The invention relates to the technical field of image processing, in particular to a super-resolution reconstruction method based on dual-channel convolution network feature fusion.
Background technique
[0002] Image super-resolution reconstruction technology refers to the technology of reconstructing small-scale low-resolution images into large-scale high-resolution images through computer processing. The problem of image super-resolution is an ill-posed problem. On the one hand, super-resolution aims to recover more high-frequency features, so that the texture and contours of the reconstructed image are clearer and the details are richer; on the other hand, super-resolution must not create pseudo-details at the expense of image accuracy. Since similar but different images may be down-sampled into the same low-resolution image, super-resolution is an ill-posed problem that does not converge to a unique solution. In order to improve the recons...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T 3/40, G06N 3/04
CPC: G06T 3/4076, G06N 3/045
Inventor: 褚晶辉, 李晓川, 吕卫
Owner: TIANJIN UNIV