
Deep neural network compression method

A deep neural network and network-layer technology, applied to neural learning methods, biological neural network models, and neural architectures. It addresses problems such as the very large size of DNN models, the limited improvement large-scale models achieve on IoT platforms and edge devices, and the memory and battery-life constraints of those devices.

Pending Publication Date: 2021-11-19
ACER INC


Problems solved by technology

However, DNN models are usually very large, sometimes approaching 100 million bytes, which limits the deployment of large-scale models on IoT platforms and edge devices. Mobile computing platforms, for example, are constrained in CPU speed, memory, and battery life.

Method used




Embodiment Construction

[0059] The embodiments of the present invention are described below with reference to Figure 1A through Figure 10. This description is not intended to limit the embodiments of the present invention; it is merely one example of the present invention.

[0060] As shown in Figure 7 and Figure 10, a deep neural network compression method according to an embodiment of the present invention performs branch pruning of optional local conventional weights. The steps include Step 11 (S11): a processor 100 obtains at least one weight of a deep neural network, where each weight sits between two adjacently connected network layers, an input layer (11, 21) and a corresponding output layer; the input value of each node of the input layer (11, 21) is multiplied by the corresponding weight to yield the output value of the corresponding node of the output layer; the value of the P parameter is set, and the wei...
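The weight relation described in S11 (each output-node value is the sum of input-node values multiplied by the corresponding weights) can be sketched as a small dense-layer forward pass. The layer sizes and values below are hypothetical, chosen only to illustrate the relation:

```python
# Hypothetical 3-node input layer and 2-node output layer; the patent's
# S11 only fixes the relation, not the sizes or values.
x = [1.0, 2.0, 3.0]            # input-layer node values
W = [[1.0, 0.0],               # W[i][j]: weight from input node i
     [0.0, 1.0],               #          to output node j
     [2.0, 3.0]]
# Each output value = sum over inputs of (input value * weight).
y = [sum(x[i] * W[i][j] for i in range(len(x))) for j in range(len(W[0]))]
print(y)  # [7.0, 11.0]
```

It is these per-connection weights that the later steps group by P and prune.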



Abstract

The invention relates to a deep neural network compression method comprising the following steps: setting the value of a P parameter for at least one weight of a deep neural network; grouping the weights so that every P weights form one group; and executing branch-pruning retraining so that the branches are uniformly grouped, with only one weight of each group non-zero and the other weights zero. The compression rate of the deep neural network is thereby adjustable, as is its reduction rate.
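The abstract's grouping rule (every P weights form one group; after branch-pruning retraining only one weight per group is non-zero) can be sketched as a single magnitude-based pruning pass. The retraining loop and the survivor-selection criterion (largest magnitude) are assumptions here, since the source text does not specify them:

```python
def group_prune(weights, p):
    """One pruning pass of the grouped scheme from the abstract:
    every P consecutive weights form a group, and only one weight per
    group survives; the rest are zeroed. Survivor choice by largest
    magnitude is an assumption; retraining between passes is not modeled."""
    pruned = []
    for start in range(0, len(weights), p):
        group = weights[start:start + p]
        keep = max(range(len(group)), key=lambda i: abs(group[i]))
        pruned.extend(w if i == keep else 0.0 for i, w in enumerate(group))
    return pruned

w = [0.1, -0.9, 0.3, 0.2, 0.05, 0.7, -0.6, 0.0]
print(group_prune(w, 4))  # [0.0, -0.9, 0.0, 0.0, 0.0, 0.7, 0.0, 0.0]
```

With one survivor per group of P, the non-zero weight count shrinks by roughly a factor of P, which is how the P parameter controls the compression rate.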

Description

technical field

[0001] A compression method, in particular a compression method for a deep neural network with uniform branch groupings.

Background technique

[0002] Deep learning achieves the highest accuracy in image recognition, speech recognition, autonomous driving, machine translation, and similar tasks. Almost all successful deep learning techniques are built on top of neural networks. The DNN is the simplest neural network, and existing techniques usually combine it with CNN, RNN, LSTM, and other neural networks. Once a DNN is deployed on an embedded platform, it has a wide application impact on mobile computing. However, the size of DNN models is usually very large, sometimes close to 100 million bytes, which limits large-scale models on IoT platforms and edge devices. For example, mobile computing platforms are limited in CPU speed, memory, and battery life. This makes it difficult to apply these neural networks to such edge devices. Although the prior art can reduce parameters and reduce memory usage...

Claims


Application Information

Patent Type & Authority: Applications (China)
IPC(8): G06N3/08; G06N3/04
CPC: G06N3/08; G06N3/045; G06N3/082; G06N3/04; G06N3/063; G06F7/49; G06F7/491
Inventors: 黄俊达, 张雅筑, 林暐宸
Owner ACER INC