
Image super-resolution reconstruction method and electronic equipment based on gradual fusion of features

A technology of super-resolution reconstruction and feature fusion, applied to an image super-resolution reconstruction method with gradual feature fusion and to the field of electronic equipment. It addresses problems such as the low efficiency with which the model extracts effective information, which limits the quality of images reconstructed by the neural network, and achieves an improved super-resolution reconstruction effect, reduced redundancy of useless information, and a small computer memory footprint.

Active Publication Date: 2022-01-28
SHENZHEN SALUS BIOMED CO LTD
Cites: 8 · Cited by: 0

AI Technical Summary

Problems solved by technology

However, after an existing super-resolution reconstruction neural network extracts features, it fuses features from different depths all at once. The fused features contain a large amount of information that is useless for super-resolution reconstruction, so the model extracts effective information inefficiently, which limits the quality of the images the neural network reconstructs.

Method used



Examples


Embodiment 1

[0053] The model is built according to the super-resolution reconstruction network structure shown in Figure 1. The code uses Python 3.7 and the PyTorch framework. For hardware, model training and testing use an Intel i9 CPU with 128 GB of memory and an NVIDIA 2080 Ti graphics card with 11 GB of memory.

[0054] In this embodiment, the first feature extraction module 3 is implemented as a convolution layer with a 3*3 convolution kernel, and the structure of the second feature extraction module 4 is as shown in Figure 2, where the local dimensionality reduction layer 44 is a convolution layer with a 1*1 convolution kernel. The second feature extraction modules 4 and the third feature extraction modules 5 correspond one to one, and there are three of each. After the feature map input into the second feature...
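
A minimal PyTorch sketch of these extraction stages is given below. Only the 3*3 first extraction convolution, the 1*1 local dimensionality reduction layer, and the count of three second feature extraction modules are taken from the text; the channel widths, the internal body of the second module, and the fusion-by-concatenation step are illustrative assumptions, not the patent's verified layout.

```python
# Sketch of the feature extraction stages of Embodiment 1 (assumed details marked below).
import torch
import torch.nn as nn

class SecondFeatureExtraction(nn.Module):
    def __init__(self, channels=64):  # channel count is an assumption
        super().__init__()
        # placeholder body convolutions; the exact internal layout is not given in the excerpt
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        # local dimensionality reduction layer 44: a 1*1 convolution, as stated in the text
        self.local_reduce = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):
        feat = self.body(x)
        # fuse incoming features with newly extracted ones, then reduce channels (assumed fusion rule)
        return self.local_reduce(torch.cat([x, feat], dim=1))

class FeatureExtractor(nn.Module):
    def __init__(self, in_channels=3, channels=64, n_blocks=3):
        super().__init__()
        # first feature extraction module 3: a single 3*3 convolution, as stated in the text
        self.first_extract = nn.Conv2d(in_channels, channels, kernel_size=3, padding=1)
        # three second feature extraction modules, matching the count given in the text
        self.second_extracts = nn.ModuleList(
            [SecondFeatureExtraction(channels) for _ in range(n_blocks)]
        )

    def forward(self, x):
        feat = self.first_extract(x)
        for block in self.second_extracts:
            feat = block(feat)
        return feat
```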

Embodiment 2

[0065] As a comparison, this embodiment separately removes the channel attention module 45 and the AM pooling layer 452 on the basis of Embodiment 1; the remaining parts and experimental conditions are exactly the same as in Embodiment 1. The experimental results are compared in the following table:

[0066]
Model      Gain   Set5 (PSNR / SSIM)   Set14 (PSNR / SSIM)   BSDS100 (PSNR / SSIM)
Example 1  2      39.13 / 0.9644       32.76 / 0.9074        33.57 / 0.9395
Model A    2      38.26 / 0.9602       32.41 / 0.9031        33.35 / 0.9390
Model B    2      38.57 / 0.9609       32.62 / 0.9371        33.35 / 0.9395
Example 1  4      33.16 / 0.9013       28.05 / 0.7448        27.53 / 0.8195
Model A    4      32.73 / 0.9004       27.90 / 0.7441        27.22 / 0.8171
Model B    4      32.92 / 0.9010       27.90 / 0.7440        28.25 / 0.8188

[0067] In the above table, Model A is obtained by removing the channel attention module 45 alone on the basis of Example 1, and Model B is obtained by removing the AM pooling layer 452 on the basis of Example 1 while leaving the variance pooling...
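
For reference, the sketch below outlines one plausible form of a channel attention block in the spirit of module 45, assuming that the AM pooling layer 452 combines average and max pooled descriptors and that a variance-based descriptor is what remains in Model B; the patent's actual internal layout may differ.

```python
# Hedged sketch of a channel attention block with optional average/max ("AM") pooling.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels=64, reduction=16, use_am_pooling=True):
        super().__init__()
        self.use_am_pooling = use_am_pooling
        # shared bottleneck that maps a per-channel descriptor to per-channel weights
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )
        self.gate = nn.Sigmoid()

    def forward(self, x):
        # variance pooling: per-channel variance over the spatial dimensions
        descriptor = x.var(dim=(2, 3), keepdim=True)
        if self.use_am_pooling:
            # add average- and max-pooled descriptors (assumed meaning of "AM pooling")
            descriptor = descriptor + x.mean(dim=(2, 3), keepdim=True)
            descriptor = descriptor + x.amax(dim=(2, 3), keepdim=True)
        weights = self.gate(self.mlp(descriptor))
        return x * weights
```

Setting `use_am_pooling=False` leaves only the variance descriptor, which mirrors the Model B ablation under the assumptions stated above.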

Embodiment 3

[0069] On the basis of the image super-resolution reconstruction network in Embodiment 1, a skip connection module 9 is added for a comparative experiment. Its structure is as shown in Figure 6, and all other parts are exactly the same as in Embodiment 1.

[0070] In this embodiment, the original feature map passes through the first skip-connection convolution layer 91 and the fifth ReLU activation function 96 to generate the first skip-connection feature map. The last feature map output by the second feature extraction module 4 is extracted and passed through the second skip-connection convolution layer 92 and the sixth ReLU activation function 97 to generate the second skip-connection feature map. After the first and second skip-connection feature maps are spliced, they pass in sequence through the third skip-connection convolution layer 94, the second sub-pixel convolution layer 93, the fourth skip-connection convolution layer 95 a...
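
The following is a minimal sketch of the skip connection path just described, assuming hypothetical channel counts and a x2 sub-pixel upscaling factor; because the paragraph is truncated, the output head after layer 95 is a guess rather than the patent's confirmed design.

```python
# Sketch of skip connection module 9: two branches, concatenation, then conv -> sub-pixel -> conv.
import torch
import torch.nn as nn

class SkipConnectionModule(nn.Module):
    def __init__(self, channels=64, out_channels=3, scale=2):  # widths and scale are assumptions
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)               # layer 91
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)               # layer 92
        self.relu5 = nn.ReLU(inplace=True)                                                 # activation 96
        self.relu6 = nn.ReLU(inplace=True)                                                 # activation 97
        self.conv3 = nn.Conv2d(2 * channels, channels * scale ** 2, kernel_size=3, padding=1)  # layer 94
        self.subpixel = nn.PixelShuffle(scale)                                             # sub-pixel layer 93
        self.conv4 = nn.Conv2d(channels, out_channels, kernel_size=3, padding=1)           # layer 95

    def forward(self, original_feat, last_second_feat):
        skip1 = self.relu5(self.conv1(original_feat))      # first skip-connection feature map
        skip2 = self.relu6(self.conv2(last_second_feat))   # second skip-connection feature map
        fused = torch.cat([skip1, skip2], dim=1)           # splice the two skip feature maps
        return self.conv4(self.subpixel(self.conv3(fused)))
```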



Abstract

The invention discloses an image super-resolution reconstruction method and electronic equipment that gradually fuse features. The image super-resolution reconstruction method includes acquiring an original image and a reconstruction network, using the first feature extraction module to perform feature extraction, passing the original feature map sequentially through multiple second feature extraction modules and multiple third feature extraction modules, and using the image reconstruction module to perform super-resolution reconstruction on the intermediate feature map to obtain a higher-resolution target image. The present invention screens out features useful for image super-resolution reconstruction step by step through multiple feature fusions, so that useful feature information accounts for a larger proportion of the input to the image reconstruction module. The model extracts features more effectively, the loss of useful information and the redundancy of useless information are reduced, and the computer memory occupied during model training and running is also smaller.
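
As a rough illustration of the overall pipeline described in the abstract, the sketch below wires together a first extraction convolution, paired second and third extraction modules that fuse features step by step, and a sub-pixel reconstruction head. The fusion rule, channel widths, and module internals shown here are assumptions for illustration, not the patent's verified design.

```python
# Illustrative end-to-end forward pass for gradual feature fusion (assumed details throughout).
import torch
import torch.nn as nn

class GradualFusionSR(nn.Module):
    def __init__(self, in_channels=3, channels=64, n_pairs=3, scale=2):
        super().__init__()
        self.first_extract = nn.Conv2d(in_channels, channels, kernel_size=3, padding=1)
        # second feature extraction modules (placeholder conv blocks)
        self.seconds = nn.ModuleList(
            [nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
             for _ in range(n_pairs)]
        )
        # matching third feature extraction modules, each fusing two feature maps
        self.thirds = nn.ModuleList(
            [nn.Conv2d(2 * channels, channels, kernel_size=1) for _ in range(n_pairs)]
        )
        # image reconstruction module: sub-pixel upsampling followed by an output convolution
        self.reconstruct = nn.Sequential(
            nn.Conv2d(channels, channels * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
            nn.Conv2d(channels, in_channels, 3, padding=1),
        )

    def forward(self, x):
        feat = self.first_extract(x)              # original feature map
        fused = feat
        for second, third in zip(self.seconds, self.thirds):
            feat = second(feat)
            # fuse accumulated features with newly extracted ones, step by step
            fused = third(torch.cat([fused, feat], dim=1))
        return self.reconstruct(fused)            # intermediate feature map -> larger target image
```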

Description

Technical field

[0001] The invention belongs to the technical field of artificial intelligence, and in particular relates to an image super-resolution reconstruction method and electronic equipment that gradually fuse features.

Background technique

[0002] Single-image super-resolution is a classic task in the field of computer vision. Using algorithms to increase the resolution of a specific image can reconstruct some details in the image and thereby improve image quality. This technology has important applications in film and television, medicine, public security, and other fields.

[0003] Due to its powerful data-fitting ability, the neural network is far superior to traditional algorithms in image super-resolution reconstruction. Therefore, super-resolution technology based on deep learning has become the mainstream. However, after the existing super-resolution reconstruction neural network extracts features, it fuses features at dif...

Claims


Application Information

Patent Type & Authority: Patents (China)
IPC (8): G06T3/40; G06T5/50; G06V10/44; G06V10/82; G06N3/04; G06N3/08
CPC: G06T3/4053; G06T5/50; G06N3/08; G06T2207/20221; G06T2207/20081; G06N3/048; G06N3/045
Inventor: 张世龙
Owner: SHENZHEN SALUS BIOMED CO LTD