Flexible separable convolution framework, feature extraction method and application thereof in VGG and ResNet

A convolution framework technology applied in the field of image processing. It addresses problems such as difficult deployment in practice, difficulty meeting low-latency application requirements, and long model inference time, with the effects of reducing filter depth, reducing information loss, and reducing computation cost.

Active Publication Date: 2020-12-01
SICHUAN UNIV

AI Technical Summary

Problems solved by technology

However, traditional deep convolutional neural networks require a large number of parameters and floating-point operations to achieve satisfactory accuracy, and their inference time is long.
In some real application scenarios, such as mobile or embedded devices, the limited memory and computing resources of the device make it difficult to deploy traditional deep convolutional neural networks on small devices; at the same time, practical low-latency requirements are difficult to satisfy.



Examples


Embodiment 1

[0037] This embodiment provides a flexible, separable deep-learning convolution framework. While preserving accuracy, the module improves network performance, reduces the computation load, and reduces the number of network parameters.

[0038] As shown in Figure 1, the flexible, separable deep-learning convolution framework provided in this embodiment includes a feature-map clustering and division module, a first convolution operation module, a second convolution operation module, a feature-map fusion module, an attention mechanism (SE) module, M input channels, and N output channels.

[0039] The feature-map clustering and division module divides the M input feature maps into feature maps representing main information and feature maps representing supplementary information, according to the supplementary-feature-information ratio α, which is a user-defined hyperparameter with α ∈ (0, 1). In thi...
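
The paragraphs above (together with the abstract) outline the data flow of the block: split the input channels according to α, run an ordinary convolution on the main part and a grouped convolution on the supplementary part, concatenate the results, add a residual branch, activate, and reweight the channels with an SE module. A minimal PyTorch sketch of that flow is given below; the class and parameter names (FSConv, SEBlock, groups, the index-based channel split standing in for the clustering step, and the 1×1 projection shortcut) are illustrative assumptions, not the patent's exact construction.

```python
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-Excitation attention: global average pooling, two fully
    connected layers, sigmoid, then multiply the channel weights back onto
    the feature maps."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w


class FSConv(nn.Module):
    """Sketch of the flexible separable convolution block.

    The input feature maps are split into a main-information part and a
    supplementary-information part according to the ratio alpha; the main
    part goes through an ordinary convolution, the supplementary part through
    a grouped convolution, the two outputs are concatenated, added to a 1x1
    projection of the input (residual branch), activated, and reweighted by
    an SE module. The index-based split below is a simplification of the
    clustering-based division described in the embodiment."""

    def __init__(self, in_channels, out_channels, alpha=0.5, groups=4,
                 kernel_size=3, stride=1):
        super().__init__()
        # round the supplementary widths to multiples of `groups` so the
        # grouped convolution is valid
        self.supp_in = max(groups, int(round(in_channels * alpha)) // groups * groups)
        self.main_in = in_channels - self.supp_in
        supp_out = max(groups, int(round(out_channels * alpha)) // groups * groups)
        main_out = out_channels - supp_out
        pad = kernel_size // 2

        # ordinary convolution on the main-information feature maps
        self.main_conv = nn.Conv2d(self.main_in, main_out, kernel_size,
                                   stride=stride, padding=pad, bias=False)
        # grouped convolution on the supplementary-information feature maps
        self.supp_conv = nn.Conv2d(self.supp_in, supp_out, kernel_size,
                                   stride=stride, padding=pad, groups=groups,
                                   bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        # 1x1 projection so the residual branch matches the output shape
        self.shortcut = nn.Conv2d(in_channels, out_channels, 1,
                                  stride=stride, bias=False)
        self.act = nn.ReLU(inplace=True)
        self.se = SEBlock(out_channels)

    def forward(self, x):
        x_main, x_supp = torch.split(x, [self.main_in, self.supp_in], dim=1)
        y = torch.cat([self.main_conv(x_main), self.supp_conv(x_supp)], dim=1)
        y = self.act(self.bn(y) + self.shortcut(x))
        return self.se(y)
```

For example, `FSConv(64, 64, alpha=0.5)(torch.randn(1, 64, 32, 32))` returns a tensor of shape `(1, 64, 32, 32)`; because part of the 3×3 work is done by a grouped convolution, the block uses fewer multiply-accumulates and parameters than a plain 64-to-64 3×3 convolution.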

Embodiment 2

[0050] As shown in Figure 2, this embodiment provides a VGG convolutional neural network comprising convolutional layers, pooling layers, and fully connected layers. Only the structure of the convolutional layers is changed: each convolutional layer is replaced with the convolution framework provided in Embodiment 1. The VGG convolutional neural network of this embodiment is called FSConv_VGG.
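
As a concrete illustration of changing only the convolutional layers, the sketch below builds a small VGG-style feature extractor in which every 3×3 convolution after the stem is replaced by the FSConv block from the sketch in Embodiment 1. The layer configuration `cfg` is a shortened, hypothetical example rather than the full VGG-16 configuration, and the fully connected head is omitted.

```python
import torch.nn as nn


def fsconv_vgg_features(cfg=(64, 64, "M", 128, 128, "M", 256, 256, "M"),
                        in_channels=3, alpha=0.5):
    """Build a VGG-style feature extractor; integers are output widths,
    "M" denotes 2x2 max pooling. Every 3x3 convolution except the first
    is replaced by the FSConv block sketched in Embodiment 1."""
    layers, c_in, first = [], in_channels, True
    for v in cfg:
        if v == "M":
            layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        elif first:
            # keep an ordinary stem convolution: 3 input channels are too
            # few to split into main and supplementary groups
            layers += [nn.Conv2d(c_in, v, 3, padding=1), nn.ReLU(inplace=True)]
            c_in, first = v, False
        else:
            layers.append(FSConv(c_in, v, alpha=alpha))
            c_in = v
    return nn.Sequential(*layers)
```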

Embodiment 3

[0052] As shown in Figure 3, this embodiment provides a ResNet-20 network containing residual blocks, each of which includes a first convolutional layer and a second convolutional layer connected in sequence; both convolutional layers adopt the convolution framework provided in Embodiment 1 with the same structure. This embodiment introduces a hyperparameter channel scaling factor β on the output channels of the first convolutional layer and the input channels of the second convolutional layer; the value of β depends on the memory and computing resources of the device. This ResNet-20 network is called FSBneck_ResNet-20.
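
A hedged sketch of such a residual block is shown below, reusing the FSConv class from the Embodiment 1 sketch. The way β scales the intermediate width (output channels of the first layer and input channels of the second) follows the paragraph above; the class name FSBneck, the minimum width, and the 1×1 projection shortcut are illustrative assumptions.

```python
import torch.nn as nn


class FSBneck(nn.Module):
    """Residual block for FSBneck_ResNet-20: two FSConv layers in sequence,
    with the intermediate width scaled by the channel scaling factor beta
    (chosen according to the memory and compute budget of the target device)."""

    def __init__(self, in_channels, out_channels, beta=0.5, alpha=0.5, stride=1):
        super().__init__()
        mid = max(8, int(round(out_channels * beta)))  # beta-scaled width
        self.conv1 = FSConv(in_channels, mid, alpha=alpha, stride=stride)
        self.conv2 = FSConv(mid, out_channels, alpha=alpha)
        # projection shortcut when the spatial size or channel count changes
        if stride == 1 and in_channels == out_channels:
            self.shortcut = nn.Identity()
        else:
            self.shortcut = nn.Conv2d(in_channels, out_channels, 1,
                                      stride=stride, bias=False)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.conv2(self.conv1(x)) + self.shortcut(x))
```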

[0053] The original convolutional layers of VGG-16 and the original residual modules of ResNet-20 are replaced with Embodiment 2 and Embodiment 3, respectively, and different public datasets (CIFAR-10 and CIFAR-100) are used to verify the effectiveness of the convoluti...
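
The accuracy numbers from those experiments are not reproduced here, but the parameter savings the embodiments aim at can be checked directly. The snippet below, reusing the FSConv sketch from Embodiment 1 with an arbitrarily chosen 256-channel layer, compares the parameter count of a plain 3×3 convolution with that of an FSConv block of the same shape.

```python
import torch.nn as nn


def n_params(m):
    """Total number of learnable parameters in a module."""
    return sum(p.numel() for p in m.parameters())


plain = nn.Conv2d(256, 256, 3, padding=1)    # standard 3x3 convolution
fs = FSConv(256, 256, alpha=0.5, groups=4)   # FSConv block of the same shape

print(f"plain 3x3 conv: {n_params(plain):,} parameters")
print(f"FSConv block:   {n_params(fs):,} parameters")
```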



Abstract

The invention discloses a flexible separable convolution framework, a feature extraction method, and their application in VGG and ResNet. The flexible separable convolution framework comprises a feature-map clustering division module, a first convolution operation module, a second convolution operation module, a feature-map fusion module, and an attention mechanism (SE) module. The feature-map clustering division module divides the feature maps into main-information feature maps and supplementary-information feature maps; the first convolution operation module performs an ordinary convolution on the main-information feature maps; the second convolution operation module performs a grouped convolution on the supplementary-information feature maps; the feature-map fusion module first concatenates the convolved feature maps, then adds them to the original feature maps and applies an activation; and the SE attention module multiplies the extracted channel weights by the feature maps to generate the output feature maps. The method combines ordinary convolution, grouped convolution, residual branches, and an SE attention mechanism; it reduces the computation and the parameter count of the network while maintaining accuracy, and the framework can be used in a neural-network convolutional layer in a plug-and-play manner.

Description

Technical field [0001] The invention relates to the technical field of image processing, and in particular to a flexible and separable convolution framework, a feature extraction method, and their application in VGG and ResNet. Background [0002] In recent years, deep convolutional neural networks have shown excellent performance on different computer vision tasks such as image recognition, object detection, and semantic segmentation. However, traditional deep convolutional neural networks require a large number of parameters and floating-point operations to achieve satisfactory accuracy, and their inference time is long. In some real application scenarios, such as mobile or embedded devices, the limited memory and computing resources of the device make it difficult to deploy traditional deep convolutional neural networks on small devices, and practical low-latency requirements are difficult to satisfy. Although the h...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K 9/62; G06N 3/04
CPC: G06N 3/045; G06F 18/2321; G06F 18/241; G06F 18/253
Inventors: 谢罗峰, 朱杨洋, 谢政峰, 殷鸣, 殷国富
Owner: SICHUAN UNIV