
Progressive block knowledge distillation method for neural network acceleration

A neural-network distillation technology, applied in the field of deep network model compression and acceleration, which addresses problems such as the instability and difficulty of the non-joint optimization process and the inability of existing designs to preserve the receptive-field information of the atomic network blocks, thereby achieving a simple implementation and reduced optimization difficulty.

Publication status: Inactive. Publication date: 2018-11-30
ZHEJIANG UNIV
Cites: 0 | Cited by: 27

AI Technical Summary

Problems solved by technology

Most existing strategies obtain the student model from the teacher model in a single step. Finding a student network function that approximates the teacher network function in such a huge search space requires trying a large number of network configurations, and in practice this non-joint optimization process is unwieldy and unstable.
Block-wise distillation schemes are easy to optimize, but they cannot effectively preserve the sequential dependencies between layer-specific sub-network blocks.
In addition, existing design guidelines for student sub-network blocks do not adequately preserve the receptive-field information of the atomic network blocks during feature extraction.

Method used


Examples


Embodiment

[0063] The following simulation experiments were carried out based on the above method. This embodiment is implemented as described above, so the specific steps are not repeated here; only the effects are demonstrated on the basis of the experimental results.

[0064] This embodiment uses the original complex VGG-16 network for image classification tasks on the CIFAR100 and ImageNet data sets. First, VGG-16 is divided into 5 teacher sub-network blocks, and then compression and acceleration are carried out according to the method of the present invention.
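As an illustration of this partitioning step, the sketch below shows one plausible way to split VGG-16 into 5 teacher sub-network blocks by cutting at its max-pooling boundaries; it relies on torchvision's VGG-16, and the split points and function name are assumptions made for illustration rather than the exact partition prescribed by the patent.

```python
# A minimal sketch (not taken from the patent text) of dividing VGG-16 into
# 5 teacher sub-network blocks by cutting at the max-pooling boundaries, so each
# block corresponds to one spatial-resolution stage of the feature extractor.
import torch.nn as nn
from torchvision.models import vgg16

def split_vgg16_into_blocks():
    features = vgg16(weights=None).features        # VGG-16 convolutional layers (untrained here)
    blocks, current = [], []
    for layer in features:
        current.append(layer)
        if isinstance(layer, nn.MaxPool2d):        # close a block at each pooling stage
            blocks.append(nn.Sequential(*current))
            current = []
    return blocks                                  # VGG-16 has 5 pooling stages, hence 5 blocks

teacher_blocks = split_vgg16_into_blocks()
print(len(teacher_blocks), [len(b) for b in teacher_blocks])
```

Each student sub-network block would then be designed as a cheaper counterpart of its teacher block (fewer layers or channels) before the progressive block distillation is applied.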

[0065] The implementation effects are shown in Table 1 and Table 2. As shown in Table 1, on the CIFAR100 data set the present invention compresses the original model (Original VGG): when the parameters of the original model are reduced by 40% and the amount of computation is reduced by 169%, the Top-1 accuracy of the model drops by only 2.22% and the Top-5 accuracy drops by only 1.89%...



Abstract

The invention discloses a progressive block knowledge distillation method for neural network acceleration. The method specifically comprises the following steps: inputting an original complex network and related parameters; dividing the original complex network into a plurality of sub-network blocks, designing the student sub-network blocks and randomly initializing their parameters; taking the input original complex network as the teacher network of the first block distillation process and obtaining a student network after that block distillation process is completed, wherein the first student sub-network block has the optimum parameters; taking the student network obtained in the last block distillation process as the teacher network of the next block distillation process so as to obtain the next student network, wherein the student sub-network blocks whose block distillation is finished have the optimum parameters; and obtaining a final simple student network with optimum parameters after all the sub-network block distillation processes are completed. The method achieves model compression and acceleration on common hardware architectures, is simple to implement, and is an effective, practical and simple deep network model compression and acceleration algorithm.
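To make the progressive procedure described above concrete, the following is a minimal PyTorch sketch, assuming the teacher and student networks are given as aligned lists of nn.Sequential blocks; the MSE loss on network outputs, the Adam optimizer and the helper name are illustrative assumptions rather than the patent's exact training scheme.

```python
# A minimal sketch of progressive block-wise distillation: the student network
# obtained in one block distillation step becomes the teacher of the next step.
# Loss, optimizer and all names here are illustrative assumptions.
import copy
import torch
import torch.nn as nn

def progressive_block_distillation(teacher_blocks, student_blocks, loader,
                                   epochs_per_block=1, lr=1e-3, device="cpu"):
    current = [b.to(device) for b in teacher_blocks]
    for b in current:
        b.requires_grad_(False)                       # teacher blocks stay fixed
    teacher = copy.deepcopy(nn.Sequential(*current)).eval()   # teacher of the first step

    for k, student_blk in enumerate(student_blocks):
        student_blk = student_blk.to(device)
        # Hybrid network: already-distilled student blocks, the block being trained,
        # and the remaining teacher blocks.
        hybrid = nn.Sequential(*current[:k], student_blk, *current[k + 1:])
        opt = torch.optim.Adam(student_blk.parameters(), lr=lr)

        for _ in range(epochs_per_block):
            for x, _ in loader:
                x = x.to(device)
                with torch.no_grad():
                    target = teacher(x)               # supervision from the previous network
                loss = nn.functional.mse_loss(hybrid(x), target)
                opt.zero_grad()
                loss.backward()
                opt.step()

        student_blk.requires_grad_(False)             # this block now keeps its optimum parameters
        current[k] = student_blk
        teacher = copy.deepcopy(nn.Sequential(*current)).eval()   # becomes the next teacher

    return nn.Sequential(*current)                    # final simple student network
```

Calling this function with the teacher blocks, the designed student blocks and a training data loader returns the final simple student network once every sub-network block has been distilled.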

Description

Technical field
[0001] The invention relates to the field of deep network model compression and acceleration, and in particular to a progressive block knowledge distillation method for neural network acceleration.
Background technique
[0002] Since 2016, the artificial intelligence boom has swept the world. Major companies at home and abroad, including Google, Microsoft, Baidu, Alibaba and Tencent, have invested heavily in artificial intelligence research; the Chinese government has also recently released the "New Generation Artificial Intelligence Development Plan", setting out goals for the future development of artificial intelligence in China. In the past few years, the rapid development of deep learning has brought leapfrog progress to state-of-the-art algorithm performance in a series of fields such as computer vision and natural language processing. In the field of artificial intelligence, the traditional chip computing architecture cannot support the n...

Claims


Application Information

IPC(8): G06N3/08
CPC: G06N3/08
Inventors: 李玺 (Li Xi), 赵涵斌 (Zhao Hanbin), 汪慧 (Wang Hui)
Owner: ZHEJIANG UNIV