Image super-resolution reconstruction method based on convolutional neural network

A super-resolution reconstruction technology based on convolutional neural networks, applied in the field of image super-resolution reconstruction, achieving the effects of improved image quality, higher accuracy and an enhanced reconstruction result

Active Publication Date: 2022-07-29
WEIHAI VOCATIONAL COLLEGE

AI Technical Summary

Problems solved by technology

At present, quite a few models (such as EDSR, MSRN, SAN, etc.) have demonstrated that convolutional neural networks can perform super-resolution reconstruction of low-resolution images well, but there is still a certain gap between the image quality achieved by existing algorithms and methods after reconstruction and the desired goal, so image super-resolution reconstruction methods based on convolutional neural networks need to be further improved.



Examples


Embodiment 1

[0041] Using the Python language and a deep learning framework, build the image super-resolution reconstruction convolutional neural network shown in Figure 1. Here, the preliminary convolution layer 3 is an ordinary convolution operation layer whose convolution kernel size is 3×3. The deep feature mapping unit 4 includes 6 integrated feature extraction modules 5, and the internal structure of the integrated feature extraction module 5 is shown in Figure 2. The internal structures of the pre-residual module 51 and the post-residual module 52 are both shown in Figure 3, and the internal structures of the front-stage spatial modulation module 53, the rear-stage channel modulation module 54 and the feature integration unit 7 are shown in Figure 4. The function of the image reconstruction unit 6 is to perform upsampling and super-resolution reconstruction on the feature map and to output the reconstructed image 2. The image reconstruction unit 6 can d...
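As an illustration only, the following is a minimal sketch of the overall architecture described in this embodiment, assuming PyTorch as the deep learning framework and 64 feature channels (neither is specified in the text above); the integrated feature extraction module 5 is reduced to a plain residual block as a stand-in for modules 51-54 and the feature integration unit 7, whose internal structures are only given in the figures.

```python
# Hypothetical sketch of the network of Embodiment 1 (PyTorch and the channel
# width are assumptions; module internals are simplified placeholders).
import torch
import torch.nn as nn

class IntegratedFeatureExtraction(nn.Module):
    """Stand-in for integrated feature extraction module 5."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual learning, in the spirit of modules 51/52

class SRNetwork(nn.Module):
    def __init__(self, channels=64, num_modules=6, scale=2):
        super().__init__()
        # Preliminary convolution layer 3: ordinary 3x3 convolution.
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        # Deep feature mapping unit 4: six integrated feature extraction modules 5.
        self.mapping = nn.Sequential(
            *[IntegratedFeatureExtraction(channels) for _ in range(num_modules)])
        # Image reconstruction unit 6: upsampling and reconstruction (PixelShuffle assumed).
        self.reconstruct = nn.Sequential(
            nn.Conv2d(channels, channels * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x):
        feat = self.head(x)            # preliminary feature map
        feat = self.mapping(feat)      # comprehensive feature map
        return self.reconstruct(feat)  # reconstructed image 2

lr = torch.randn(1, 3, 48, 48)
print(SRNetwork()(lr).shape)  # torch.Size([1, 3, 96, 96]) for a 2x scale
```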

Embodiment 2

[0052] In order to illustrate how the nearest-neighbor incoming connection 55 and the nearest-neighbor outgoing connection 56 in the integrated feature extraction module 5 improve the image reconstruction result, the overall network architecture and components such as the pre-residual module 51, the post-residual module 52, the front-stage spatial modulation module 53, the rear-stage channel modulation module 54 and the image reconstruction unit 6 are kept the same as in Embodiment 1, except that the nearest-neighbor incoming connection 55 and the nearest-neighbor outgoing connection 56 of Embodiment 1 are removed. In Embodiment 2, the structure of the integrated feature extraction module 5 is shown in Figure 6. Using the same dataset and model training process, the test results are shown in the following table:

[0053]

[0054] It can be seen from the results that after setting the nearest neighbor incoming connection 55 and the nearest neighbor outgoing connecti...
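The exact wiring of connections 55 and 56 is only shown in the figures, but one plausible reading is sketched below in PyTorch (an assumption, as is the fusion by concatenation): each integrated feature extraction module exports an auxiliary feature tensor to its successor (outgoing connection 56) and fuses the tensor received from its predecessor (incoming connection 55).

```python
# Illustrative sketch of nearest-neighbor connections between adjacent modules
# (the concrete fusion scheme is an assumption, not taken from the patent).
import torch
import torch.nn as nn

class ModuleWithNeighborLinks(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.fuse = nn.Conv2d(channels * 2, channels, 1)  # merges the incoming link
        self.body = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x, neighbor_in):
        # Incoming connection 55: combine features handed over by the previous module.
        fused = self.fuse(torch.cat([x, neighbor_in], dim=1))
        out = x + self.body(fused)
        # Outgoing connection 56: expose intermediate features to the next module.
        return out, fused

class DeepFeatureMapping(nn.Module):
    def __init__(self, channels=64, num_modules=6):
        super().__init__()
        self.blocks = nn.ModuleList(
            [ModuleWithNeighborLinks(channels) for _ in range(num_modules)])

    def forward(self, x):
        neighbor = torch.zeros_like(x)  # the first module has no predecessor
        for block in self.blocks:
            x, neighbor = block(x, neighbor)
        return x

feat = torch.randn(1, 64, 48, 48)
print(DeepFeatureMapping()(feat).shape)  # torch.Size([1, 64, 48, 48])
```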

Embodiment 3

[0056] Similar to Embodiment 2, in order to illustrate the effect of the feature integration unit 7 in the integrated feature extraction module 5, only the feature integration unit 7 of Embodiment 2 is removed, and the other parts of the network in Embodiment 3 are exactly the same as those in Embodiment 2. The structure of the integrated feature extraction module 5 in Embodiment 3 is shown in Figure 7. Using the same dataset and training process, the test results are shown in the following table:

[0057]

[0058] The above test results clearly illustrate the effectiveness of the feature integration unit 7 in improving the quality of the network's super-resolution reconstructed images.
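How the feature integration unit 7 routes part of the spatial modulation information into the channel branch is only given in the figures; the sketch below shows one way such a coupling could look in PyTorch (the layer layouts of modules 53, 54 and unit 7 here are assumptions, not the patented structures).

```python
# Hypothetical coupling of spatial and channel modulation via an integration unit.
import torch
import torch.nn as nn

class SpatialModulation(nn.Module):
    """Stand-in for the front-stage spatial modulation module 53."""
    def __init__(self, channels=64):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, 7, padding=3)

    def forward(self, x):
        mask = torch.sigmoid(self.conv(x))  # spatial attention map
        return x * mask, mask               # modulated features + modulation info

class FeatureIntegration(nn.Module):
    """Stand-in for the feature integration unit 7."""
    def __init__(self, channels=64):
        super().__init__()
        self.project = nn.Conv2d(1, channels, 1)

    def forward(self, feat, spatial_mask):
        # Inject the spatial modulation information into the channel branch.
        return feat + self.project(spatial_mask)

class ChannelModulation(nn.Module):
    """Stand-in for the rear-stage channel modulation module 54."""
    def __init__(self, channels=64, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.fc(x)  # channel attention

feat = torch.randn(1, 64, 48, 48)
modulated, mask = SpatialModulation()(feat)
out = ChannelModulation()(FeatureIntegration()(modulated, mask))
print(out.shape)  # the channel branch now also sees the spatial modulation signal
```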


Abstract

The invention discloses an image super-resolution reconstruction method based on a convolutional neural network, belonging to the technical field of artificial intelligence and image processing. The method comprises the steps of: obtaining a primary image whose resolution needs to be improved and a super-resolution reconstruction convolutional neural network; receiving the primary image as input through a preliminary convolution layer to obtain a preliminary feature map; inputting the preliminary feature map into a deep feature mapping unit, so that each integrated feature extraction module performs a feature extraction operation on the feature map in sequence; and performing super-resolution reconstruction on the comprehensive feature map with an image reconstruction unit. Average pooling, maximum pooling and median pooling are set in the two attention mechanisms to perceive important information in the image, and a feature integration unit feeds part of the modulation information of the front-stage spatial modulation module into the rear-stage channel modulation module. The rear-stage channel modulation module therefore has receptive fields in both the spatial direction and the channel direction, and the network has advantages such as a good reconstruction effect and high robustness.
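As a rough illustration of the pooling choices named in the abstract, the sketch below combines global average, maximum and median pooling in a channel-attention block (PyTorch is assumed, and summing the three descriptors through a shared MLP is an assumption; the abstract does not specify how they are merged).

```python
# Channel attention driven by average, maximum and median pooling statistics
# (the merge-by-summation and the shared MLP are assumptions for illustration).
import torch
import torch.nn as nn

class TriplePoolChannelAttention(nn.Module):
    def __init__(self, channels=64, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels))

    def forward(self, x):
        b, c, _, _ = x.shape
        flat = x.flatten(2)                  # (B, C, H*W)
        avg = flat.mean(dim=2)               # global average pooling
        mx = flat.amax(dim=2)                # global maximum pooling
        med = flat.median(dim=2).values      # global median pooling
        weights = torch.sigmoid(self.mlp(avg) + self.mlp(mx) + self.mlp(med))
        return x * weights.view(b, c, 1, 1)  # reweight the channels

x = torch.randn(2, 64, 32, 32)
print(TriplePoolChannelAttention()(x).shape)  # torch.Size([2, 64, 32, 32])
```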

Description

Technical Field

[0001] The invention belongs to the technical field of artificial intelligence, and in particular relates to an image super-resolution reconstruction method based on a convolutional neural network.

Background Technique

[0002] The resolution of an image is an important indicator for judging its quality. When the resolution of an obtained image is lower than expected, the image needs to be reconstructed to raise its resolution to the target size, which gives rise to image super-resolution reconstruction algorithms. This technology has important application value in many fields such as medicine, public safety, and film and television. Since its birth, it has gone through roughly three stages of development: interpolation-based super-resolution reconstruction, reconstruction-based super-resolution reconstruction, and learning-based super-resolution reconstruction. At present, quite a few models (such as EDSR, MSRN,...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T3/40; G06N3/04; G06N3/08
CPC: G06T3/4053; G06N3/08; G06N3/045
Inventor: 张淑红
Owner: WEIHAI VOCATIONAL COLLEGE