
Deep learning model compression method and device

A deep learning model technology applied in the field of artificial intelligence, which addresses problems such as redundancy in neural networks and achieves the effects of enhanced usability and good performance.

Status: Inactive | Publication Date: 2020-08-28
ALIPAY (HANGZHOU) INFORMATION TECH CO LTD
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

Studies have found that significant redundancy exists in many deep neural networks.




Embodiment Construction

[0036] Various non-limiting embodiments provided in this specification will be described in detail below with reference to the accompanying drawings.

[0037] Methods for compressing deep learning models can include, but are not limited to: low-rank approximation, network pruning, network quantization, knowledge distillation, and compact network design. Among them, network pruning is one of the most commonly used model compression methods.

[0038] The core idea of network pruning is as follows: after the weight matrices of the deep learning model have been trained, that is, after a deep learning model that meets the accuracy requirements has been obtained, find the relatively "unimportant" weight parameters in the model and delete them, and then fine-tune the deep learning model to obtain a compressed deep learning model. Specifically, the trained deep learning model may include multiple network layers, and each network layer may include multiple neurons; ...
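To make this flow concrete, below is a minimal NumPy sketch of a prune-then-fine-tune loop. The magnitude-based criterion, the 50% pruning ratio, and the helper name `magnitude_mask` are illustrative assumptions for this sketch, not the exact procedure claimed in this patent.

```python
import numpy as np

def magnitude_mask(weights, prune_ratio=0.5):
    """Mark the smallest-magnitude weights of one layer as 'unimportant'."""
    threshold = np.quantile(np.abs(weights).ravel(), prune_ratio)
    return np.abs(weights) > threshold

# Per-layer weight matrices of an already trained model (toy data here).
layers = [np.random.randn(8, 8) for _ in range(3)]

# Step 1 - prune: delete (zero out) the relatively unimportant weights.
masks = [magnitude_mask(w) for w in layers]
layers = [w * m for w, m in zip(layers, masks)]

# Step 2 - fine-tune: retrain the remaining weights while keeping the
# pruned entries at zero; a real implementation would run gradient
# updates here and re-apply each mask after every optimizer step.
```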



Abstract

The embodiments of the invention provide a compression method and device for a deep learning model. The method may comprise: first, acquiring a deep learning model obtained through training and multiple pieces of training data used to train the deep learning model, wherein the deep learning model is used for service prediction and comprises multiple weight parameters; determining gradient values corresponding to the multiple weight parameters according to a loss function corresponding to the deep learning model and the multiple pieces of training data; determining importance measurement values respectively corresponding to the multiple weight parameters, wherein the importance measurement value corresponding to a weight parameter is positively correlated with the absolute value of that weight parameter and with the absolute value of its corresponding gradient value; and performing network pruning on the deep learning model according to the importance measurement values corresponding to the multiple weight parameters.
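Read literally, the abstract only requires the importance measure to be positively correlated with both the absolute weight and the absolute gradient. The sketch below uses the product |w|·|∂L/∂w| as one plausible instantiation of such a measure; this formula, the function names, and the pruning ratio are assumptions for illustration, not necessarily the patent's exact definition.

```python
import numpy as np

def importance_scores(weights, grads):
    """One measure that grows with both |w| and |dL/dw| (assumed form)."""
    return np.abs(weights) * np.abs(grads)

def prune_by_importance(weights, grads, prune_ratio=0.5):
    """Zero out the weights whose importance falls below a quantile threshold."""
    scores = importance_scores(weights, grads)
    threshold = np.quantile(scores.ravel(), prune_ratio)
    return np.where(scores > threshold, weights, 0.0)

# Toy usage: in practice the gradients would come from back-propagating
# the model's loss function over the training data, as in the abstract.
w = np.random.randn(3, 3)
g = np.random.randn(3, 3)
print(prune_by_importance(w, g, prune_ratio=0.3))
```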

Description

Technical field

[0001] One or more embodiments of this specification relate to the field of artificial intelligence, and in particular to a compression method and device for a deep learning model.

Background technique

[0002] Deep learning enables many services to be executed through trained deep learning models. For example, speech recognition tasks, computer vision tasks, and natural language processing tasks can be implemented through corresponding deep learning models. To improve task execution performance, deep learning models are usually relatively large in scale, which demands substantial storage and computing resources and may make it difficult to deploy the models efficiently on various hardware devices. For example, the convolutional neural network VGG-16 used for image recognition tasks has more than 130 million weight parameters, occupies 500 MB of storage resources, and needs 30.9 billion floating-...

Claims


Application Information

IPC(8): G06N 3/08, G06N 3/04
CPC: G06N 3/082, G06N 3/045
Inventors: 杨新星, 周俊, 李龙飞
Owner: ALIPAY (HANGZHOU) INFORMATION TECH CO LTD