
A Mixed Granularity Based Joint Sparsity Method for Neural Networks

A joint sparsity technique based on mixed granularity, applied in the engineering field, which addresses the relatively low sparsity of the weight structures produced by block-sparse schemes while preserving model accuracy and reducing inference overhead.

Active Publication Date: 2021-03-26
ZHEJIANG UNIV
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

However, block-sparsity schemes that retain acceptable model accuracy are usually only able to generate weight structures with relatively low sparsity.



Examples


Embodiment Construction

[0030] The present invention will be described in further detail below in conjunction with the accompanying drawings and specific embodiments.

[0031] As shown in Figures 1(a), 1(b) and 1(c), the present invention proposes a mixed-granularity-based joint sparsity method for neural networks, applied to image recognition tasks such as the automatic grading of machine-readable answer sheets. First, image data are collected and manually labeled to produce an image dataset, which is split into a training dataset and a test dataset. The training dataset is fed into a convolutional neural network whose layer weight matrices are randomly initialized; the network is trained iteratively while the joint sparsity procedure prunes it. The test dataset is used to cross-validate the training effect, and each layer's weight matrix is updated through the back-propagation algorithm ...
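The train-then-prune loop described above can be sketched as follows. This is a minimal toy illustration on a single linear layer, not the patent's actual procedure: the function name `train_with_joint_sparsity` and the simple median-magnitude mask update are hypothetical stand-ins for the patent's joint fine/coarse mask construction, used only to show masked retraining (gradients of pruned weights are zeroed so pruned positions stay zero between mask updates).

```python
import numpy as np

def train_with_joint_sparsity(X, y, epochs=50, lr=0.1, prune_every=10):
    """Toy masked-retraining loop for one linear layer.

    Every `prune_every` epochs the pruning mask is rebuilt and pruned
    weights are zeroed; gradient updates are masked so pruned weights
    remain zero until the next mask update.
    """
    rng = np.random.default_rng(0)
    W = rng.standard_normal((X.shape[1], y.shape[1])) * 0.1
    mask = np.ones_like(W, dtype=bool)
    for epoch in range(epochs):
        pred = X @ W
        grad = X.T @ (pred - y) / len(X)   # gradient of mean-squared error
        W -= lr * grad * mask              # masked update: pruned weights stay 0
        if (epoch + 1) % prune_every == 0:
            # Hypothetical mask update: keep weights at or above the median
            # magnitude (a stand-in for the joint fine/coarse-grained mask).
            thresh = np.median(np.abs(W))
            mask = np.abs(W) >= thresh
            W *= mask
    return W, mask
```

In the patent's pipeline the mask-update step would instead be the joint mask obtained from the vector-level and block-level branches; the surrounding loop structure (iterative training, periodic pruning, back-propagation on the surviving weights) is the same.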



Abstract

The invention discloses a mixed-granularity-based joint sparsity method for neural networks. The joint sparsity method combines independent vector-level fine-grained sparsity with block-level coarse-grained sparsity: a bitwise logical AND is performed on the two branch masks to obtain the final pruning mask, which is then applied to produce the weight matrix of the sparse neural network. The joint sparsity of the present invention always achieves an inference speed between the block-sparse and balanced-sparse modes, regardless of the vector row size used for vector-level fine-grained sparsification and the block size used for block-level coarse-grained sparsification. The method is used to prune the convolutional and fully connected layers of a neural network, and has the advantages of variable sparsity granularity, inference acceleration on general-purpose hardware, and high model inference accuracy.
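The two-branch mask construction described in the abstract can be sketched in NumPy. This is a minimal sketch under stated assumptions, not the patent's exact algorithm: the function names, the magnitude-based selection criterion, and the L1 block score are illustrative choices; only the overall shape (fine-grained vector mask AND coarse-grained block mask, then element-wise application to the weights) comes from the source.

```python
import numpy as np

def vector_fine_mask(w, row_len, keep_ratio):
    """Fine-grained branch: within each length-`row_len` vector row,
    keep the largest-magnitude `keep_ratio` fraction of weights."""
    flat = np.abs(w).reshape(-1, row_len)
    k = max(1, int(round(keep_ratio * row_len)))
    thresh = np.sort(flat, axis=1)[:, -k][:, None]  # k-th largest per row
    return (flat >= thresh).reshape(w.shape)

def block_coarse_mask(w, bh, bw, keep_ratio):
    """Coarse-grained branch: score each bh x bw block by its L1 norm
    (an assumed criterion) and keep the top `keep_ratio` fraction."""
    H, W = w.shape
    blocks = np.abs(w).reshape(H // bh, bh, W // bw, bw)
    scores = blocks.sum(axis=(1, 3))                # one score per block
    k = max(1, int(round(keep_ratio * scores.size)))
    thresh = np.sort(scores, axis=None)[-k]
    keep = scores >= thresh                         # (H//bh, W//bw) booleans
    return np.repeat(np.repeat(keep, bh, axis=0), bw, axis=1)

def joint_prune(w, row_len=4, bh=2, bw=2, fine_keep=0.5, coarse_keep=0.5):
    """Bitwise AND of the two branch masks gives the final pruning mask."""
    mask = vector_fine_mask(w, row_len, fine_keep) & \
           block_coarse_mask(w, bh, bw, coarse_keep)
    return w * mask, mask
```

Because the final mask is the AND of the two branches, a weight survives only if both the fine-grained and coarse-grained branches keep it, so the joint pattern is at least as sparse as either branch alone.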

Description

Technical field

[0001] The invention relates to engineering fields such as structured sparsity, lightweight network architecture, and convolutional neural networks, and in particular to a mixed-granularity-based joint sparsity method for neural networks.

Background technique

[0002] In recent years, deep learning, and the convolutional neural network (CNN) in particular, has achieved great success with high accuracy in computer vision, speech recognition, and language processing. Driven by the growth of data volume, deep neural networks have grown ever larger in order to acquire general feature-extraction ability. On the other hand, because deep neural networks are over-parameterized, large models usually require substantial computing and storage resources during training and inference. Facing these challenges, increasing attention has been paid to techniques that compress and accelerate neural networks at minimal computational cost, such as tensor decomposition, da...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/082; G06N3/084; G06N3/045; G06F18/214; G06N3/063; G06N3/04
Inventors: 卓成, 郭楚亮, 尹勋钊
Owner: ZHEJIANG UNIV