
Neural network model compression and acceleration method based on entropy attention

A neural network model compression and acceleration technology based on convolutional neural networks, applied in the field of neural networks, addressing problems such as a large number of parameters, long inference time, and long training time.

Pending Publication Date: 2019-08-06
Applicant: 电科瑞达(成都)科技有限公司

AI Technical Summary

Problems solved by technology

Another example: in the 2014 ImageNet Challenge, the VGGNet series of models achieved very good results that year. The VGG16 model, for instance, contains 13 convolutional layers and 3 fully connected layers, for over a hundred million parameters. Although this huge parameter count improves performance, it costs a great deal of training time, and inference also takes a long time.
Although increasing the number of parameters in a model can improve performance, it makes the model unsuitable for low-power, low-storage, low-bandwidth embedded devices. A model with too many parameters is therefore severely limited in engineering applications.
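
To make the scale concrete, a back-of-the-envelope parameter count in Python (a minimal sketch, assuming the standard VGG16 configuration: 3×3 kernels, 224×224 inputs, and a 1000-class classifier) reproduces the roughly 138 million parameters referred to above:

```python
# Rough parameter count for VGG16 (13 conv + 3 FC layers).
# Conv params = k*k*in_ch*out_ch + out_ch (bias); FC params = in*out + out.

conv_cfg = [  # (in_channels, out_channels); all kernels are 3x3
    (3, 64), (64, 64),
    (64, 128), (128, 128),
    (128, 256), (256, 256), (256, 256),
    (256, 512), (512, 512), (512, 512),
    (512, 512), (512, 512), (512, 512),
]
fc_cfg = [(512 * 7 * 7, 4096), (4096, 4096), (4096, 1000)]

conv_params = sum(3 * 3 * cin * cout + cout for cin, cout in conv_cfg)
fc_params = sum(cin * cout + cout for cin, cout in fc_cfg)

print(f"conv: {conv_params / 1e6:.1f}M  fc: {fc_params / 1e6:.1f}M  "
      f"total: {(conv_params + fc_params) / 1e6:.1f}M")
# -> conv: 14.7M  fc: 123.6M  total: 138.4M
```

Most of VGG16's weight sits in the fully connected layers; the total matches the "over a hundred million parameters" figure above.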

Examples


Detailed Description of Embodiments

[0047] In order to make the purpose, technical solution, and advantages of the present invention clearer, the Cifar10 object recognition task is taken as an example to describe the present invention further.

[0048] The Cifar10 training samples are 32×32 optical images; example image data are shown in Figure 5.
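
As an illustrative aside (the patent does not name a framework), loading these 32×32 Cifar10 samples with torchvision might look like the following minimal sketch; the augmentations are common practice, not taken from the patent:

```python
# Minimal sketch: load the 32x32 Cifar10 training images.
import torchvision
import torchvision.transforms as T

transform = T.Compose([
    T.RandomCrop(32, padding=4),   # common Cifar10 augmentation (assumed)
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])

train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transform)

print(len(train_set), train_set[0][0].shape)  # 50000 torch.Size([3, 32, 32])
```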

[0049] In the experiments on the Cifar10 dataset, ResNet-series networks are used, with networks of different depths and widths serving as the teacher network and the student network respectively. The specific experimental results are shown in Table 1.

[0050] Table 1. Comparison experiments of knowledge transfer based on information-entropy attention on Cifar10

[0051]
teacher | param (M) | student | param (M) | teacher (%) | student (%) | F_AT | EAT | KD | F_AT+KD | EAT+KD
R-16-2 | 0.69 | R-16-1 | 0.18 | 93.83 | 90.85 | 91.41 | 91.31 | 91.33 | 91.31 | 91.33
R-40-2 | 2.2 | R-16-1 | 0.18 | 94.82 | 90.85 | 91.17 | 91.3...
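
In the table, "KD" is classic knowledge distillation, "F_AT" is feature-based attention transfer, and "EAT" is the entropy-attention transfer of the invention. The excerpt does not give the exact entropy-attention formula, so the following PyTorch sketch is only a plausible reading: treat the Shannon entropy of the channel-wise activation distribution at each spatial location as the attention map, and train the student to match the teacher's normalized map, following the usual attention-transfer recipe:

```python
# Hedged sketch of an entropy-attention transfer (EAT) loss; the map
# definition is an assumption, not the patent's exact formulation.
import torch
import torch.nn.functional as F

def entropy_attention_map(feat: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """feat: (N, C, H, W) -> per-location entropy map, flattened to (N, H*W)."""
    n, c, h, w = feat.shape
    p = F.softmax(feat.reshape(n, c, h * w), dim=1)  # channel distribution
    ent = -(p * (p + eps).log()).sum(dim=1)          # Shannon entropy per pixel
    return F.normalize(ent, dim=1)                   # L2-normalize per sample

def eat_loss(f_student: torch.Tensor, f_teacher: torch.Tensor) -> torch.Tensor:
    """Mean squared distance between student and teacher entropy maps."""
    return (entropy_attention_map(f_student)
            - entropy_attention_map(f_teacher)).pow(2).mean()

# Toy usage: the entropy map collapses channels, so a wide teacher
# (e.g. R-40-2) and a narrow student (e.g. R-16-1) compare directly.
loss = eat_loss(torch.randn(8, 16, 32, 32), torch.randn(8, 32, 32, 32))
```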


Abstract

The invention belongs to the technical field of neural networks and relates to a neural network model compression and acceleration method based on entropy attention. According to the invention, a teacher network model with a large number of parameters, a large computation cost, and excellent performance is constructed and used to apply strong supervised learning to a student network with few parameters, a small computation cost, and weaker performance. Through this learning process, a small model with few parameters, a small computation cost, and excellent performance is finally obtained, and this small model can meet the real-time and accuracy requirements of real-world scenarios.
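
For context on the "KD" baseline in Table 1 and the teacher-to-student supervision described above, here is a minimal sketch of the standard soft-target distillation loss; the temperature T and mixing weight alpha are illustrative assumptions, not values from the patent:

```python
# Minimal sketch: classic knowledge distillation ("KD") loss.
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft targets: match the teacher's temperature-softened distribution.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    # Hard targets: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

In the combined rows of Table 1 (F_AT+KD, EAT+KD), an attention-transfer term of the kind sketched earlier would be added to this loss.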

Description

Technical Field

[0001] The invention belongs to the technical field of neural networks and relates to a method for compressing and accelerating neural network models based on entropy attention.

Background Technique

[0002] In recent years, convolutional neural networks have developed very rapidly. With continuous theoretical improvements and the support of modern large-scale computing platforms, convolutional neural networks have made great progress, found applications in different fields, and shown very good performance in those applications.

[0003] The convolutional neural network is a computationally intensive model whose superior performance depends on containing millions or even tens of millions of parameters. Training such a model involves a large number of matrix operations, so the demands on the computing platform are high. Due to the advantages of large-scale parallel computing of...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/04, G06N3/08
CPC: G06N3/082, G06N3/045
Inventors: 闵锐, 蒋霆
Owner: 电科瑞达(成都)科技有限公司