Flexible separable convolution framework and feature extraction method and its application in VGG and ResNet

A convolution framework technology applied in the field of image processing, addressing problems such as long model inference time, difficulty of practical deployment, and difficulty in meeting low-latency application requirements, with the effects of reducing filter depth, reducing information loss, and reducing computational cost.

Active Publication Date: 2021-01-26
SICHUAN UNIV

AI Technical Summary

Problems solved by technology

However, traditional deep convolutional neural networks require a large number of parameters and floating-point operations to achieve satisfactory accuracy, and their model inference time is long.
In some real application scenarios, such as mobile or embedded devices, the limited memory and computing resources of the device make it difficult to deploy traditional deep convolutional neural networks on small devices; at the same time, the practical requirement for low latency is difficult to satisfy.



Examples


Embodiment 1

[0037] This embodiment provides a flexible and separable deep learning convolution framework. While maintaining accuracy, this framework improves network performance, reduces the computational load, and reduces the number of network parameters.

[0038] As shown in Figure 1, the flexible and separable deep learning convolution framework provided in this embodiment includes a feature map clustering and division module, a first convolution operation module, a second convolution operation module, a feature map fusion module, and an attention mechanism SE module, with M input channels and N output channels.

[0039] The feature map clustering and division module divides the M input feature maps into feature maps representing main information and feature maps representing supplementary information according to the supplementary feature information proportion α, which is a defined hyperparameter with α ∈ (0, 1). In thi...
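To make the structure concrete, the following is a minimal PyTorch sketch of such a block. It assumes a plain channel split in place of the clustering-and-division rule (whose details are not given in this excerpt), a proportional split of the output channels, and an SE module in its standard squeeze-and-excitation form; the names FSConv, alpha, and groups are illustrative, not the patent's.

```python
# Minimal PyTorch sketch of the flexible separable convolution (FSConv) block
# of Embodiment 1. The plain channel split, the proportional output split, and
# the names FSConv / alpha / groups are assumptions made for illustration.
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Standard squeeze-and-excitation channel attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))              # squeeze + excitation
        return x * w.unsqueeze(-1).unsqueeze(-1)     # reweight channels


class FSConv(nn.Module):
    """Flexible separable convolution block (sketch).

    alpha  -- proportion of channels treated as supplementary information.
    groups -- group count of the supplementary (group) convolution; it must
              divide both supplementary channel counts.
    """
    def __init__(self, in_channels, out_channels, alpha=0.5, groups=4, kernel_size=3):
        super().__init__()
        assert 0.0 < alpha < 1.0
        pad = kernel_size // 2
        self.supp_in = int(round(in_channels * alpha))    # supplementary maps
        self.main_in = in_channels - self.supp_in         # main-information maps
        supp_out = int(round(out_channels * alpha))
        main_out = out_channels - supp_out
        # first convolution module: ordinary convolution on main-information maps
        self.main_conv = nn.Conv2d(self.main_in, main_out, kernel_size, padding=pad)
        # second convolution module: group convolution on supplementary maps
        self.supp_conv = nn.Conv2d(self.supp_in, supp_out, kernel_size,
                                   padding=pad, groups=groups)
        # residual branch: identity when shapes match, else a 1x1 projection
        # (the projection is an assumption for the in != out channel case)
        self.shortcut = (nn.Identity() if in_channels == out_channels
                         else nn.Conv2d(in_channels, out_channels, 1))
        self.act = nn.ReLU(inplace=True)
        self.se = SEBlock(out_channels)

    def forward(self, x):
        # division module: a plain channel split stands in for clustering
        main, supp = torch.split(x, [self.main_in, self.supp_in], dim=1)
        y = torch.cat([self.main_conv(main), self.supp_conv(supp)], dim=1)  # stitch
        y = self.act(y + self.shortcut(x))        # add original maps, activate
        return self.se(y)                         # channel-attention output
```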

Embodiment 2

[0050] As shown in Figure 2, this embodiment provides a VGG convolutional neural network, including convolutional layers, pooling layers, and fully connected layers. This embodiment only changes the structure of the convolutional layers, replacing them with the convolution framework provided by Embodiment 1; the resulting VGG convolutional neural network is called FSConv_VGG.
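A hedged sketch of FSConv_VGG follows, reusing the FSConv block sketched under Embodiment 1. The VGG-16 layer configuration and the choice to keep the very first layer as an ordinary convolution (its 3 input channels are too few to divide) are assumptions; the excerpt does not give the exact configuration used in the patent.

```python
# Sketch of Embodiment 2 (FSConv_VGG): a VGG-16-style network whose
# convolutional layers are replaced by the FSConv block, assuming the common
# VGG-16 layout ("M" marks a max-pooling layer).
import torch.nn as nn

VGG16_CFG = [64, 64, "M", 128, 128, "M", 256, 256, 256, "M",
             512, 512, 512, "M", 512, 512, 512, "M"]


def make_fsconv_vgg(cfg=VGG16_CFG, in_channels=3, num_classes=10,
                    alpha=0.5, groups=4):
    layers, c_in = [], in_channels
    for v in cfg:
        if v == "M":
            layers.append(nn.MaxPool2d(2))
        elif c_in == in_channels:
            # the first layer stays an ordinary convolution: its 3 input
            # channels cannot be split into main/supplementary parts (assumption)
            layers += [nn.Conv2d(c_in, v, 3, padding=1), nn.ReLU(inplace=True)]
            c_in = v
        else:
            # FSConv already ends with its own activation and SE module
            layers.append(FSConv(c_in, v, alpha=alpha, groups=groups))
            c_in = v
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(c_in, num_classes)]
    return nn.Sequential(*layers)
```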

Embodiment 3

[0052] As shown in Figure 3, this embodiment provides a ResNet-20 network containing residual blocks, each of which includes a first convolutional layer and a second convolutional layer connected in sequence; both layers adopt the convolution framework provided by Embodiment 1 with the same structure. This embodiment introduces a hyperparameter channel scaling factor β on the output channels of the first convolutional layer and the input channels of the second convolutional layer, whose value depends on the memory and computing resources of the device; this ResNet-20 network is called FSBneck_ResNet-20.
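The following is a minimal sketch of such a residual block, reusing the FSConv block from Embodiment 1. It assumes β scales the intermediate channel count between the two convolutional layers and that the block's input and output channel counts are equal; stride and stage-transition handling of ResNet-20 are omitted, and the name FSBneckBlock and the default β = 0.5 are illustrative.

```python
# Sketch of Embodiment 3 (FSBneck residual block): both convolutional layers
# use the FSConv block, and the channel scaling factor beta shrinks the
# intermediate channel count. beta is chosen according to the memory and
# compute budget of the target device; 0.5 here is only an illustrative default.
import torch.nn as nn


class FSBneckBlock(nn.Module):
    def __init__(self, channels: int, beta: float = 0.5,
                 alpha: float = 0.5, groups: int = 4):
        super().__init__()
        mid = max(int(round(channels * beta)), 2 * groups)  # scaled inner width
        self.conv1 = FSConv(channels, mid, alpha=alpha, groups=groups)
        self.conv2 = FSConv(mid, channels, alpha=alpha, groups=groups)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # block-level identity shortcut, as in an ordinary ResNet basic block
        return self.act(x + self.conv2(self.conv1(x)))
```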

[0053] The original convolutional layers of VGG-16 and the original residual modules of ResNet-20 are replaced as in Embodiment 2 and Embodiment 3, respectively, to verify the effectiveness of the convolution framework provided by Embodiment 1 on different public datasets (CIFAR-10 and CIFAR-100).
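As a rough usage sketch under the same assumptions, the modified network can be instantiated and exercised on CIFAR-10 via torchvision; the data path, batch size, and parameter-count printout below are placeholders, not the experimental protocol of the patent.

```python
# Usage sketch: build FSConv_VGG, report its parameter count, and run a
# forward pass on CIFAR-10 test images loaded with torchvision. The root
# path and batch size are placeholder values.
import torch
import torchvision
import torchvision.transforms as T

model = make_fsconv_vgg(num_classes=10)                 # FSConv_VGG for CIFAR-10
n_params = sum(p.numel() for p in model.parameters())
print(f"FSConv_VGG parameters: {n_params / 1e6:.2f}M")

test_set = torchvision.datasets.CIFAR10(root="./data", train=False,
                                        download=True, transform=T.ToTensor())
loader = torch.utils.data.DataLoader(test_set, batch_size=128)
images, _ = next(iter(loader))
logits = model(images)                                  # shape: (128, 10)
print(logits.shape)
```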



Abstract

The invention discloses a flexible and separable convolution framework and feature extraction method and its application in VGG and ResNet, comprising a feature map clustering and division module, a first convolution operation module, a second convolution operation module, a feature map fusion module, and an attention mechanism SE module. The feature map clustering and division module divides the feature maps into feature maps representing main information and feature maps representing supplementary information; the first convolution operation module performs an ordinary convolution operation on the feature maps representing main information; the second convolution operation module performs a group convolution operation on the feature maps representing supplementary information; the feature map fusion module first concatenates the convolved feature maps, then adds the original feature maps and applies an activation; the attention mechanism SE module multiplies the extracted channel weights with the feature maps to generate the output feature maps. The invention combines ordinary convolution, group convolution, a residual branch, and the attention mechanism SE, reducing the amount of computation and the number of network parameters while maintaining accuracy, and can be used as a plug-and-play replacement for convolutional layers in neural networks.

Description

Technical field
[0001] The invention relates to the technical field of image processing, in particular to a flexible and separable convolution framework and feature extraction method and its application in VGG and ResNet.
Background technique
[0002] In recent years, deep convolutional neural networks have shown excellent performance on different computer vision tasks such as image recognition, object detection, and semantic segmentation. However, traditional deep convolutional neural networks require a large number of parameters and floating-point operations to achieve satisfactory accuracy, and their model inference time is long. In some real application scenarios, such as mobile or embedded devices, the limited memory and computing resources of the device make it difficult to deploy traditional deep convolutional neural networks on small devices; at the same time, the practical requirement for low latency is difficult to satisfy. Although the h...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/62, G06N3/04
CPC: G06N3/045, G06F18/2321, G06F18/241, G06F18/253
Inventors: 谢罗峰, 朱杨洋, 谢政峰, 殷鸣, 殷国富
Owner: SICHUAN UNIV