Deep neural network compression method, system and device based on multi-group tensor decomposition and storage medium

A deep neural network compression technology, applied to compression methods, systems, devices and storage media. It addresses the problems that deep neural networks have high computational complexity and large storage requirements and are therefore difficult to deploy on mobile devices, achieving a good parameter ratio with the effect of reducing the number of parameters.

Status: Inactive | Publication Date: 2019-11-12
SHENZHEN UNIV

AI Technical Summary

Problems solved by technology

[0005] In summary, deep neural networks are typically characterized by high computational complexity and large storage requirements, which makes them difficult to deploy on mobile devices.




Embodiment Construction

[0033] The invention discloses a deep neural network compression method based on multi-group tensor decomposition, specifically a combined low-rank and sparse compression model. TT decomposition is used for the low-rank part, and the sparse structure retains the 0.6 percent of entries with the largest absolute values. Adding sparsity in this way has little effect on the compression ratio. In addition, a Multi-TT structure is constructed, which captures the characteristics of existing models well and improves the accuracy of the model. Furthermore, when this method is used the sparse structure is not essential, as the Multi-TT structure alone can explore the structure of the model well.
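To make the construction concrete, the following is a minimal sketch, assuming a 256×256 weight matrix folded into an order-8 tensor and an illustrative TT rank of 8: a TT-SVD gives the low-rank part, and the 0.6% largest-magnitude entries of the residual are kept as a sparse correction. All shapes, ranks, and function names are our assumptions; in particular, taking the sparse term from the residual rather than from the raw weights is one reading of the text, not something the passage pins down.

# Low-rank (TT) plus sparse approximation of one layer's weight matrix.
import numpy as np

def tt_svd(tensor, max_rank):
    # Classical TT-SVD: sweep over modes, truncating each SVD to `max_rank`.
    shape, cores, r_prev = tensor.shape, [], 1
    C = tensor.reshape(shape[0], -1)
    for k in range(len(shape) - 1):
        C = C.reshape(r_prev * shape[k], -1)
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(max_rank, len(S))
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        C = S[:r, None] * Vt[:r]
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1))
    return cores

def tt_reconstruct(cores, shape):
    # Multiply the cores back together into a dense tensor.
    res = cores[0].reshape(cores[0].shape[1], -1)            # (n1, r1)
    for G in cores[1:]:
        r, n, r_next = G.shape
        res = (res @ G.reshape(r, n * r_next)).reshape(-1, r_next)
    return res.reshape(shape)

W = np.random.randn(256, 256)                  # stand-in for one layer's weights
T = W.reshape((4,) * 8)                        # fold into an order-8 tensor
W_lr = tt_reconstruct(tt_svd(T, max_rank=8), T.shape).reshape(W.shape)

resid = W - W_lr
k = max(1, int(0.006 * resid.size))            # top 0.6% by absolute value
thresh = np.partition(np.abs(resid).ravel(), -k)[-k]
W_sp = np.where(np.abs(resid) >= thresh, resid, 0.0)
W_hat = W_lr + W_sp                            # low-rank + sparse approximation
print("relative error:", np.linalg.norm(W - W_hat) / np.linalg.norm(W))

With rank 8 on an order-8 folding, the TT cores store far fewer values than the 65,536 entries of the dense matrix, which is where the compression comes from; the sparse term adds back only 0.6% of the entries, consistent with the claim that it barely affects the compression ratio.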

[0034] 1. Symbols and definitions

[0035] First, the notation and preliminaries of the present invention are defined. Scalars, vectors, matrices, and tensors are denoted by italic, bold lowercase, bold uppercase, and bold calligraphic symbols, respectively. This means that the dime...
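As an illustration of this convention (with assumed symbol names, since the passage is truncated), the four kinds of objects, together with the standard TT (tensor-train) format that the method relies on, can be written as:

% Notation (assumed illustrative symbols):
a \in \mathbb{R}, \qquad
\mathbf{a} \in \mathbb{R}^{n}, \qquad
\mathbf{A} \in \mathbb{R}^{m \times n}, \qquad
\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_d}

% Standard TT (tensor-train) format for a d-way tensor:
\mathcal{A}(i_1, \ldots, i_d) = \mathbf{G}_1(i_1)\,\mathbf{G}_2(i_2)\cdots\mathbf{G}_d(i_d),
\qquad \mathbf{G}_k(i_k) \in \mathbb{R}^{r_{k-1} \times r_k},\; r_0 = r_d = 1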



Abstract

The invention provides a deep neural network compression method, system and device based on multi-group tensor decomposition, and a storage medium. The method comprises the following steps: a neural network structure is built, in which the first convolution layer and the last fully connected layer do not use TT decomposition, while the weight matrices of the remaining layers are expressed in TT format; on the fully connected layers, operations are carried out directly on the core tensors, while the convolution layers must finally be restored to the size of the original weight matrix before convolution; multi-TT decomposition is carried out on the convolution layers; and a sparse value is added on the basis of the TT decomposition to form a new compressed network structure. The beneficial effects of the method, system and device are that the tensor-train model is adopted to reconstruct the original weight matrix into several high-dimensional tensor compression models, and a new network structure is then established on the basis of the decomposition, reducing the number of parameters. Experiments show that the robustness of the compression models increases with the number of modes in the deep model, and that the compression method achieves a good parameter ratio.
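The following is a minimal sketch of the two execution paths the abstract distinguishes, under assumed mode sizes and TT ranks: a fully connected layer in TT format contracts the input with the core tensors directly, never materializing the dense weight matrix, whereas a convolution layer would first restore the kernel to its original size (shown here as the explicit reconstruction used in the sanity check). All names and shapes are illustrative.

# TT-format fully connected forward pass vs. explicit reconstruction.
import numpy as np

# TT cores for a weight matrix W of size (8*8*8) x (4*4*4), with TT ranks
# (1, 3, 3, 1); the boundary ranks of 1 are squeezed away for readability.
m, n, r = (8, 8, 8), (4, 4, 4), 3
G1 = np.random.randn(m[0], n[0], r)        # (m1, n1, r1)
G2 = np.random.randn(r, m[1], n[1], r)     # (r1, m2, n2, r2)
G3 = np.random.randn(r, m[2], n[2])        # (r2, m3, n3)

def tt_fc_forward(x):
    """y = W x computed core by core; the dense 512 x 64 matrix is never built."""
    X = x.reshape(-1, *n)                                  # (batch, n1, n2, n3)
    Y = np.einsum('bxyz,ixp,pjyq,qkz->bijk', X, G1, G2, G3)
    return Y.reshape(x.shape[0], -1)                       # (batch, m1*m2*m3)

x = np.random.randn(32, 4 * 4 * 4)
y = tt_fc_forward(x)                                       # (32, 512)

# Sanity check against the explicitly restored matrix (the restoration step a
# TT-format convolution layer would perform before convolving):
W = np.einsum('ixp,pjyq,qkz->ijkxyz', G1, G2, G3).reshape(8 * 8 * 8, 4 * 4 * 4)
assert np.allclose(y, x @ W.T)

Operating on the cores directly is what makes the fully connected path cheap: the contraction cost scales with the TT ranks and mode sizes rather than with the full 512×64 matrix, which is why the abstract reserves the restore-then-convolve path for the convolution layers.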

Description

Technical Field

[0001] The present invention relates to the technical field of data processing, and in particular to a deep neural network compression method, system, device and storage medium based on the decomposition of multiple groups of tensors.

Background Technique

[0002] Although deep neural networks have been used successfully and widely in practical applications, their complex structure and large number of parameters waste resources and increase training time. When deep neural networks are applied to specific devices such as smartphones, wearables, and embedded devices, those devices impose tight limits on model size, power consumption, and so on. These constraints make it difficult to deploy deep neural networks on such devices, which has prompted researchers to look for the inherent redundancy in the parameters and feature maps of deep models. By eliminating redundancy, resources can be saved without compromising the capacity and performance of most deep ...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/04; G06N3/063
CPC: G06N3/063; G06N3/045
Inventors: 孙维泽, 杨欣, 黄均浩, 黄磊, 张沛昌, 包为民
Owner: SHENZHEN UNIV