
Compression and acceleration method based on a deep neural network model

A technology of deep neural network and model, applied in the field of neural network, can solve the problems of reducing the accuracy loss of neural network and not solving it, and achieve the effect of improving the accuracy of the model

Publication Date: 2019-03-19 (Inactive)

AI Technical Summary

Problems solved by technology

[0005] Although the above-mentioned patent document discloses a method for neural network compression and acceleration, it does not solve the problem of pruning the neural network while reducing the resulting accuracy loss, nor the problem of reducing the computation and size of the network model severalfold while ensuring that the algorithm's accuracy is essentially preserved.



Examples

Detailed Description of the Embodiments

[0055] The invention is described in further detail below with reference to the accompanying drawings and embodiments.

[0056] The models handled by general global and dynamic pruning and acceleration methods (DGP) usually contain a large amount of redundant information; there is room for reduction both in the number of model parameters and in the precision with which those parameters are represented. Building on leading research results in the field of neural network model compression, redundant filters can be pruned to address both issues: the pruned network runs substantially faster, its accuracy loss is reduced, and, on the premise that the algorithm's accuracy is essentially preserved, the computation and size of the network model can be cut severalfold.

[0057] The present invention evaluates the significance of each filter globally across all network layers, then dynamically and iteratively prunes and adjusts the network ac...
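To make the idea of a global, cross-layer filter evaluation concrete, the sketch below scores every convolutional filter in a network and derives a single pruning threshold over all layers, so that weak filters are identified network-wide rather than layer by layer. This is a minimal illustration only: the use of PyTorch, the L2-norm as the significance measure, and the prune_ratio parameter are assumptions, not details taken from the patent.

# Minimal sketch (PyTorch assumed; L2-norm significance and prune_ratio are
# illustrative choices, not specified by the patent).
import torch
import torch.nn as nn

def global_filter_scores(model: nn.Module):
    """One significance score per output filter of every Conv2d layer."""
    scores = []
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            # weight shape: [out_channels, in_channels, kH, kW]
            per_filter = module.weight.detach().flatten(1).norm(p=2, dim=1)
            scores += [(name, i, s) for i, s in enumerate(per_filter.tolist())]
    return scores

def global_prune_threshold(model: nn.Module, prune_ratio: float = 0.5) -> float:
    """A single threshold computed over ALL layers at once (global evaluation)."""
    values = torch.tensor([s for _, _, s in global_filter_scores(model)])
    k = int(prune_ratio * values.numel())
    return values.sort().values[k].item() if k > 0 else float("-inf")

Filters whose score falls below this global threshold would be the pruning candidates; because the threshold can be recomputed as training continues, the set of pruned filters may change from one iteration to the next.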

Abstract

The invention discloses a compression and acceleration method based on a deep neural network model. The method comprises the following steps: 1) initializing a pre-trained convolutional network; 2) applying a global mask, initially equal to 1, to all filters; and 3) iteratively adjusting the sparse network and updating the filter significance. With this method, the accuracy loss introduced by pruning the neural network is reduced, and the computation and size of the network model can be compressed severalfold on the premise that the algorithm accuracy of the network is essentially preserved.
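Read literally, the three steps in the abstract suggest a mask-based loop: start from a pre-trained convolutional network, attach to every filter a global mask initialized to 1, then alternate between re-evaluating filter significance, rebuilding the masks, and fine-tuning the resulting sparse network. The sketch below is one possible reading under assumed details (PyTorch, an L2-norm significance measure, a fixed prune ratio, and the hypothetical helper fine_tune_one_epoch); it is not the patent's reference implementation.

# Hedged sketch of steps 1)-3) of the abstract. PyTorch, the significance
# measure, prune_ratio and the fine-tuning routine are assumptions.
import torch
import torch.nn as nn

def init_masks(model: nn.Module) -> dict:
    """Step 2: a mask of ones, one entry per output filter of each Conv2d layer."""
    return {name: torch.ones(m.out_channels)
            for name, m in model.named_modules() if isinstance(m, nn.Conv2d)}

def update_masks(model: nn.Module, masks: dict, prune_ratio: float = 0.5) -> None:
    """Step 3: re-evaluate filter significance globally and rebuild the masks.
    Because significance is recomputed each time, a previously masked filter
    can be re-activated, which is one reading of 'dynamic' pruning."""
    sig = {name: m.weight.detach().flatten(1).norm(p=2, dim=1)
           for name, m in model.named_modules() if isinstance(m, nn.Conv2d)}
    scores = torch.cat(list(sig.values()))
    k = int(prune_ratio * scores.numel())
    thresh = scores.sort().values[k] if k > 0 else scores.min() - 1
    for name in masks:
        masks[name] = (sig[name] > thresh).float()

def apply_masks(model: nn.Module, masks: dict) -> None:
    """Zero out the weights of masked-off filters before the next fine-tuning pass."""
    with torch.no_grad():
        for name, m in model.named_modules():
            if isinstance(m, nn.Conv2d):
                m.weight.mul_(masks[name].view(-1, 1, 1, 1))

# Step 1: start from a pre-trained network, then alternate pruning and tuning.
# model = torchvision.models.resnet18(weights="IMAGENET1K_V1")  # assumed example
# masks = init_masks(model)
# for _ in range(num_iterations):          # num_iterations: assumed hyperparameter
#     update_masks(model, masks)
#     apply_masks(model, masks)
#     fine_tune_one_epoch(model)           # hypothetical fine-tuning helper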

Description

Technical Field

[0001] The invention relates to the technical field of neural networks, and in particular to a compression and acceleration method based on a deep neural network model.

Background

[0002] Various deep learning neural network models are applied in computer vision. Convolutional neural networks (CNNs) have achieved remarkable success in many different applications. These models rely on deep networks with millions or even billions of parameters, and such large networks can only run and train quickly on GPUs with high computing power. This outstanding performance comes with a significant computational cost: deploying these CNNs to real-time applications would be difficult without the support of an efficient graphics processing unit (GPU). Compared to the cloud, mobile systems are limited in computing resources, yet deep learning models are known to be resource-intensive. To enable on-device deep learn...

Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06N3/04G06N3/08
CPCG06N3/08G06N3/048G06N3/045
Inventor 吴土孙
Owner 深圳市友杰智新科技有限公司