
Compression method and device of deep neural network model, storage medium and terminal

A compression method and device for a deep neural network model, a storage medium, and a terminal. The technology addresses the low accuracy and poor effectiveness of compressed deep neural network models, avoiding or reducing precision loss and improving performance.

Publication status: Inactive
Publication date: 2018-11-02
Applicant: SPREADTRUM COMM (SHANGHAI) CO LTD


Problems solved by technology

However, the compressed deep neural network model suffers from low accuracy and poor effectiveness.




Detailed Description of Embodiments

[0025] As described in the Background, current simplification and compression methods for deep neural network models fall mainly into two categories: methods that change the density of the deep neural network model, and methods that change the diversity of its parameters.

[0026] Methods that change the density of the deep neural network model achieve compression by changing the sparseness of the network. Some algorithms set a relatively small threshold and delete the small-valued parameters of the deep neural network model that fall below it; this choice of threshold is highly subjective, and obtaining an acceptable simplification requires extensive parameter tuning for networks with different structures. Other algorithms screen input nodes based on the contribution of each input node to the output response; such algorithms only target single-hidden-layer neural networks and do not sufficiently proce…
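The following is a minimal sketch of the magnitude-threshold pruning criticized in paragraph [0026], not the patented method. The function name, the layer dictionary, and the threshold value are hypothetical and chosen only for illustration; in practice the threshold is the subjective knob the paragraph refers to.

```python
import numpy as np

def prune_by_threshold(model_params, threshold=1e-3):
    """Zero out parameters whose absolute value falls below `threshold`.

    model_params: dict mapping layer name -> np.ndarray of weights.
    Returns a new dict with small-magnitude weights set to zero.
    """
    pruned = {}
    for name, weights in model_params.items():
        mask = np.abs(weights) >= threshold  # keep only "large" weights
        pruned[name] = weights * mask
    return pruned

# Toy example: the threshold typically has to be re-tuned for every
# network structure, which is the drawback noted above.
toy_model = {
    "fc1": np.random.randn(4, 4) * 0.01,
    "fc2": np.random.randn(4, 2),
}
pruned_model = prune_by_threshold(toy_model, threshold=0.05)
```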



Abstract

The invention discloses a compression method and device for a deep neural network model, a terminal, and a storage medium. The method comprises the following steps: acquiring a trained deep neural network model; traversing all layers of the deep neural network to obtain the parameters of the currently traversed layer; and quantizing the parameters of the currently traversed layer based on their density and value range, until all layers of the deep neural network have been traversed, so as to obtain a quantized deep neural network model. Through this scheme, the accuracy and effectiveness of the deep neural network model can be improved when the model is compressed.
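As a rough illustration of the per-layer traversal-and-quantization flow described in the abstract, the sketch below quantizes each layer's weights onto a uniform grid spanning that layer's value range. The patent does not disclose its actual quantization formula here, so the density heuristic (the `spread` ratio) and the choice of 64 or 256 levels are assumptions, and all names are hypothetical.

```python
import numpy as np

def quantize_layer(weights, num_levels=256):
    """Map weights onto `num_levels` evenly spaced values over their range."""
    w_min, w_max = weights.min(), weights.max()
    if w_max == w_min:                      # constant layer: nothing to quantize
        return weights.copy()
    step = (w_max - w_min) / (num_levels - 1)
    return np.round((weights - w_min) / step) * step + w_min

def quantize_model(model_params):
    """Traverse all layers and quantize each layer's parameters in turn."""
    quantized = {}
    for name, weights in model_params.items():
        # Hypothetical density heuristic: tightly clustered layers get
        # fewer levels, widely spread layers keep more resolution.
        spread = weights.std() / (np.abs(weights).mean() + 1e-12)
        levels = 256 if spread > 1.0 else 64
        quantized[name] = quantize_layer(weights, num_levels=levels)
    return quantized

toy_model = {"conv1": np.random.randn(3, 3, 16), "fc": np.random.randn(16, 10)}
quantized_model = quantize_model(toy_model)
```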

Description

Technical Field

[0001] The invention relates to information processing technology, and in particular to a compression method and device for a deep neural network model, a terminal, and a storage medium.

Background

[0002] With the rapid development of research on deep neural network technology, a large number of related techniques have emerged, such as convolutional neural networks in the field of vision and recurrent neural networks in the fields of speech recognition and natural language processing. These neural network technologies have greatly improved processing accuracy in their corresponding fields.

[0003] Compared with shallow learning, deep neural networks have great development potential. Through the multi-layer processing structure of a deep neural network model, the characteristic features of a sample can be extracted and analyzed, and the sample features can be transformed and calcu…


Application Information

IPC(8): G06N3/02
CPC: G06N3/02
Inventors: 赵晓辉, 林福辉
Owner: SPREADTRUM COMM (SHANGHAI) CO LTD