
Automatic model compression method based on Q-Learning algorithm

A model compression method in the field of deep learning, achieving reduced energy consumption and inference time and enabling efficient embedded inference.

Active Publication Date: 2019-07-02
NORTHWEST UNIV(CN)

AI Technical Summary

Problems solved by technology

At present, most research on model compression focuses narrowly on individual compression algorithms, rather than on how to effectively fuse multiple algorithms to maximize the performance of the compressed model.


Examples


Embodiment

[0037] Embodiment, see Figure 1:

[0038] 1) Build a deep learning environment on the JD Cloud server and the NVIDIA Jetson TX2 embedded mobile platform, and select five classic deep neural network models from GitHub: MobileNet, InceptionV3, ResNet50, VGG16, and an NMT model.

[0039] 2) Design the state set, action set, and reward function of the Q-Learning algorithm according to the constraint conditions, and implement the algorithm code and the model performance test scripts.
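The patent does not publish its exact state/action/reward design in this excerpt, but the step above can be sketched as a standard tabular Q-Learning agent whose actions are candidate compression techniques. The state labels, action names, and hyperparameters below are illustrative assumptions, not the patent's specification:

```python
import random
from collections import defaultdict

# Candidate compression techniques as the action set (assumed names).
ACTIONS = ["prune", "quantize", "distill", "low_rank"]

class QLearningCompressor:
    """Tabular Q-Learning over compression choices (illustrative sketch)."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.alpha = alpha            # learning rate
        self.gamma = gamma            # discount factor
        self.epsilon = epsilon        # exploration probability

    def choose(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-Learning temporal-difference update:
        # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```

In such a design, a state might encode the current network structure or compression stage, and the reward would be derived from the measured performance constraints (inference time, size, energy, accuracy) named in the abstract.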

[0040] 3) Integrate and modify the code of the different model compression techniques, choose MobileNet for testing on the NVIDIA Jetson TX2, and make a preliminary judgment on the performance of the different compression algorithms.
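Judging compression performance as in step 3 requires measuring at least model size and inference latency. A minimal, generic benchmarking helper might look like the following; the function names and the warm-up/run counts are assumptions, not the patent's test script:

```python
import os
import time

def model_size_mb(path):
    """On-disk size of a (compressed) model file, in megabytes."""
    return os.path.getsize(path) / (1024 * 1024)

def mean_latency_ms(predict_fn, sample, runs=50, warmup=5):
    """Average wall-clock latency of predict_fn(sample), in milliseconds.

    A few warm-up calls are made first so caches and lazy
    initialization do not distort the timed runs.
    """
    for _ in range(warmup):
        predict_fn(sample)
    start = time.perf_counter()
    for _ in range(runs):
        predict_fn(sample)
    return (time.perf_counter() - start) * 1000.0 / runs
```

On an embedded target such as the Jetson TX2, energy consumption would additionally be read from the board's power sensors, which this sketch does not cover.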

[0041] 4) Port the code to the JD Cloud server, set different demand coefficients to select compression algorithms for the five network models, and save all the compressed models.
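The "demand coefficients" in step 4 suggest a weighted trade-off among the four constraints the abstract names (accuracy, inference time, energy, size). One plausible form, purely as a hedged sketch (the patent's actual weighting scheme is not given in this excerpt), is a linear combination:

```python
def reward(metrics, coeffs):
    """Illustrative weighted reward over the four constraint metrics.

    metrics: accuracy_delta (signed change in accuracy, %) and the
             percentage reductions in latency, energy, and size.
    coeffs:  user-set demand coefficients expressing which constraint
             matters most for the deployment target.
    """
    return (coeffs["accuracy"] * metrics["accuracy_delta"]
            + coeffs["latency"] * metrics["latency_reduction"]
            + coeffs["energy"] * metrics["energy_reduction"]
            + coeffs["size"] * metrics["size_reduction"])
```

With equal coefficients and the averages reported in the abstract (accuracy loss 3.04%, latency -12.8%, energy -30.2%, size -55.4%), a latency-sensitive user would simply raise `coeffs["latency"]` to steer the algorithm toward faster compressed models.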

[0042] 5) Port all models, before and after compression, to...



Abstract

The invention provides an automatic model compression method based on the Q-Learning algorithm. Taking the performance of the deep neural network, including inference time, model size, energy consumption, and accuracy, as constraint conditions, an algorithm is designed that automatically selects a model compression method according to the network structure, thereby obtaining the compression scheme with optimal performance. By applying the automatic model compression framework to five different network structures, it is achieved that, with an average accuracy loss of 3.04%, the inference time of the models is reduced by 12.8% on average, energy consumption is reduced by 30.2%, and model size is reduced by 55.4%. The invention thus provides an automatic compression algorithm for neural network model compression, and an approach toward effective compression and inference of deep neural networks.

Description

Technical field

[0001] The invention belongs to the technical field of deep learning, and in particular relates to an automatic model compression method based on the Q-Learning algorithm.

Background technique

[0002] Deep neural networks have developed rapidly in recent years; their powerful computing capability makes them an effective tool for solving complex problems. To reduce latency and protect user privacy, it is often necessary to perform model inference on mobile or edge devices. For mobile terminals, however, limited resources and energy-consumption constraints are the biggest bottleneck for model deployment. Research has found that model compression technology makes deep inference on embedded mobile devices feasible. Model compression is not a free lunch: a reduction in model size usually comes at the expense of prediction accuracy. This means that the model compression technique and its parameters must ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N5/04; G06F17/50
CPC: G06N5/04; G06F2111/04; G06F30/20; Y02D10/00
Inventor 高岭秦晴袁璐党鑫于佳龙王海郑杰刘瑞献杨建锋
Owner NORTHWEST UNIV(CN)