
A compression method of a deep neural network

A compression method for deep neural networks, applicable to neural learning methods, biological neural network models, neural architectures, and related areas. It addresses the problems of existing methods that do not consider the correlation between weights and therefore suffer from low compression accuracy and poor compression effect; the method reduces computation and memory usage while achieving a high compression ratio.

Inactive Publication Date: 2019-05-07
SICHUAN UNIV

AI Technical Summary

Problems solved by technology

However, this method only decides independently whether to prune each weight and does not consider the correlation between weights.
As a result, the achievable compression ratio is limited, and both the compression accuracy and the compression effect are poor.

Method used



Embodiment Construction

[0048] To make the purpose, technical solution, and advantages of the present invention clearer, the invention is further elaborated below with reference to the accompanying drawings.

[0049] In this embodiment, as shown in Figure 1, the present invention proposes a compression method for a deep neural network, comprising the steps:

[0050] S100, network parameter pruning: prune the network by deleting redundant connections and retaining the connections that carry the largest amount of information;
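The pruning in step S100 can be illustrated with a minimal magnitude-based sketch, assuming (since the S100 details are truncated in this excerpt) that absolute weight value serves as the proxy for a connection's amount of information. The function name and `sparsity` parameter are hypothetical:

```python
import numpy as np

def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the smallest-magnitude connections.

    Hypothetical illustration: keep roughly the (1 - sparsity) fraction
    of connections with the largest absolute value, treating magnitude
    as a proxy for the information a connection carries.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of connections to drop
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold            # survivors
    return weights * mask, mask

W = np.array([[0.1, -2.0, 0.03],
              [1.5, -0.2, 0.8]])
pruned, mask = prune_by_magnitude(W, sparsity=0.5)  # keeps -2.0, 1.5, 0.8
```

The returned mask would normally be kept so that subsequent retraining only updates the surviving connections.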

[0051] S200, trained quantization and weight sharing: quantize the weights so that multiple connections share the same weight, and store only the effective weights and their indices;
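Step S200 can be sketched as 1-D k-means clustering of the weights into a small codebook of shared values, with each connection then storing only a centroid index. This is an illustrative assumption about how the quantization might be performed, not the patent's exact procedure; all names are hypothetical, and in a real pipeline the pruned (zero) positions would stay masked out rather than being mapped to a centroid:

```python
import numpy as np

def quantize_weights(weights, n_clusters=4, n_iter=20):
    """Cluster the nonzero weights into a codebook of shared values.

    A simple 1-D k-means sketch: after clustering, each connection
    stores a small integer index into the centroid codebook instead
    of a full-precision weight.
    """
    vals = weights[weights != 0]
    # initialize centroids evenly across the observed weight range
    centroids = np.linspace(vals.min(), vals.max(), n_clusters)
    for _ in range(n_iter):
        # assign each weight to its nearest centroid, then recompute means
        idx = np.argmin(np.abs(vals[:, None] - centroids[None, :]), axis=1)
        for c in range(n_clusters):
            members = vals[idx == c]
            if members.size:
                centroids[c] = members.mean()
    # map every weight to its nearest centroid index
    indices = np.argmin(np.abs(weights[..., None] - centroids), axis=-1)
    return centroids, indices

W = np.array([[0.0, -1.1, 0.9],
              [1.0, -0.9, 2.1]])
centroids, indices = quantize_weights(W, n_clusters=3)
shared = centroids[indices]  # weights reconstructed from the codebook
```

Storing a 2-bit index per connection plus a tiny codebook is where the memory saving of weight sharing comes from.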

[0052] S300, exploiting the biased distribution of the effective weights, apply Huffman coding to obtain the compressed network.
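Step S300 can be illustrated with a standard Huffman coder applied to the quantization indices; this is a generic sketch rather than the patent's specific implementation. Because the index distribution is biased, frequent indices receive short bit strings:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code table for a sequence of symbols.

    Frequent symbols get shorter bit strings, so a heavily biased
    distribution of quantization indices compresses well.
    """
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    # heap entries: [frequency, tie-breaker, partial code table]
    heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, [f1 + f2, tie, merged])
        tie += 1
    return heap[0][2]

# e.g. quantization indices with a biased distribution
indices = [0, 0, 0, 0, 1, 1, 2]
codes = huffman_code(indices)
bits = sum(len(codes[s]) for s in indices)  # total encoded length in bits
```

For the example above the most frequent index gets a 1-bit code, versus the 2 or 3 bits per symbol a fixed-width encoding would need.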

[0053] As an optimization of the above embodiment, the network parameter pruning in step S100, as shown in Figure 2, includes the steps:

...



Abstract

The invention discloses a compression method for a deep neural network, comprising the following steps: network parameter pruning: prune the network by deleting redundant connections and retaining the connections with the largest amount of information; trained quantization and weight sharing: quantize the weights so that multiple connections share the same weight, and store the effective weights and their indices; and obtain a compressed network by applying Huffman coding to the biased distribution of the effective weights. Through pruning, weight sharing, and related techniques, the method improves the precision of the compressed network, greatly reduces the memory required for computation, and greatly increases running speed. The computation and memory footprint of a large-scale network are thereby effectively reduced, allowing it to run efficiently on limited hardware.

Description

Technical field

[0001] The invention belongs to the technical field of neural network optimization, and in particular relates to a compression method for a deep neural network.

Background technique

[0002] Deep neural networks contain massive numbers of weights, which occupy large amounts of storage and memory bandwidth. Because the networks are so large, their operation depends heavily on high-performance graphics cards, which greatly limits deployment of the models and leads to excessive hardware overhead for large-scale network computing. Compressing deep neural networks has therefore become an urgent problem. Although current neural network compression methods can compress a network to a certain extent, most of them rely on pruning alone.

[0003] The traditional pruning method prunes the network weights, that is, it reduces the size of the CNN model by cutting weights while maintaining accuracy. Specifically, some con...

Claims


Application Information

IPC(8): G06N3/04, G06N3/08
Inventor 苟旭
Owner SICHUAN UNIV