
Self-distillation training method and scalable dynamic prediction method of convolutional neural network

A convolutional neural network training technology, applied in the field of self-distillation training and scalable dynamic prediction of convolutional neural networks. It solves problems such as hidden safety hazards, low accuracy, and cumbersome workflows, and achieves the effects of improved accuracy and improved performance.

Pending Publication Date: 2019-11-19
INST FOR INTERDISCIPLINARY INFORMATION CORE TECH XIAN CO LTD

AI Technical Summary

Problems solved by technology

Another problem is how to design and train an appropriate teacher model: existing distillation frameworks require considerable effort and experimentation to find the best teacher architecture, which takes a relatively long time.
The third problem is that the teacher model and the student model each work in their own way, and knowledge transfer flows between different models; this requires building multiple models, which is cumbersome and yields low accuracy.
[0005] In the prior art, efficient training is carried out through the proposed self-distillation training method, but the accuracy of the classifiers during the self-distillation process is low, and their functions cannot be automatically separated, which impairs the classifiers and thereby reduces the accuracy of the training method.
[0006] At the same time, neural networks have unmatched advantages in handling nonlinear problems, and predictive control is well suited to operation near constraint boundaries. Their respective advantages provide a good solution for controlling nonlinear, time-varying, strongly constrained industrial processes with large lags, so convolutional neural networks are widely used in the field of prediction. In the prior art, prediction based on convolutional neural networks must weigh response speed against the confidence of the prediction results. Therefore, to meet different prediction requirements, the algorithms of multiple models are stored at the same time, and different models are swapped in according to the required response speed and accuracy. A vacuum period is formed during the switching process, which brings security risks to practical applications.



Examples


Detailed Description of the Embodiments

[0062] The present invention will be described in further detail below in conjunction with specific embodiments, which are intended to explain rather than limit the present invention.

[0063] As shown in Figure 1, the present invention proposes a self-distillation training method for convolutional neural networks, which achieves the highest possible accuracy while overcoming the shortcomings of traditional distillation when training compact models. Instead of the two steps of traditional distillation, i.e., first training a large teacher model and then distilling knowledge from the teacher model into the student model, the method of the present invention provides a one-step self-distillation framework whose training is directed at the student model itself. The proposed self-distillation not only requires less training time (4.6 times shorter, from 26.98 hours to 5.87 hours on CIFAR100), but also achieves higher accuracy (79.33 times fr...
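To make the one-step framework concrete, the following sketch (PyTorch-style Python is an assumption; the patent does not name a framework) shows one training step in which a backbone is split into sections, each section feeds a shallow exit head, and every shallow exit is supervised by the hard labels, by the softened logits of the deepest exit, and by an L2 hint on the deepest features. The names sections, exit_heads, alpha, beta and T are illustrative placeholders, not values from the patent.

    # Minimal sketch of one self-distillation training step (assumed
    # PyTorch interface); hyperparameters alpha, beta, T are illustrative.
    import torch
    import torch.nn.functional as F

    def self_distillation_step(sections, exit_heads, x, y,
                               alpha=0.3, beta=0.03, T=3.0):
        """sections:   list of nn.Module blocks that make up one backbone
           exit_heads: one (bottleneck + classifier) head per section,
                       the last head being the deepest ("teacher") exit;
                       each head is assumed to return (feature, logits)."""
        feats, logits = [], []
        h = x
        for block, head in zip(sections, exit_heads):
            h = block(h)                    # forward through this section
            f, z = head(h)                  # shallow exit on this section
            feats.append(f)
            logits.append(z)

        teacher_f, teacher_z = feats[-1], logits[-1].detach()
        loss = 0.0
        for f, z in zip(feats, logits):
            # (1) hard-label cross entropy for every exit
            loss = loss + F.cross_entropy(z, y)
            if z is not logits[-1]:
                # (2) KL divergence against the softened deepest-exit logits
                loss = loss + alpha * T * T * F.kl_div(
                    F.log_softmax(z / T, dim=1),
                    F.softmax(teacher_z / T, dim=1),
                    reduction="batchmean")
                # (3) L2 hint loss pulling shallow features toward deep ones
                loss = loss + beta * F.mse_loss(f, teacher_f.detach())
        return loss

In this sketch the deepest exit acts as the teacher for all shallow exits, so no separate teacher model ever has to be designed or trained.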


Abstract

According to the self-distillation training method for convolutional neural networks, the performance of a convolutional neural network is significantly enhanced by reducing the size of the network rather than expanding it. When knowledge is distilled within the network, the network is first divided into several sections; knowledge in the deeper sections of the network is then squeezed into the shallow sections. Without sacrificing response time, self-distillation greatly improves the performance of the convolutional neural network, yielding an average accuracy improvement of 2.65%, from a minimum increase of 0.61% on ResNeXt to a maximum increase of 4.07% on VGG19. With this method, a convolutional neural network with multiple outputs can be regarded as several convolutional neural networks, and the output of each shallow classifier can be used according to different requirements; combined with the strengthened feature extraction provided to each shallow classifier by an attention layer, the accuracy of the shallow classifiers is remarkably improved.
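As a rough illustration of how the multiple outputs could serve different response-speed and accuracy requirements at inference time, the sketch below (same assumed PyTorch-style interface as above; the confidence threshold is a hypothetical parameter, not a value from the patent) returns the prediction of the shallowest exit whose softmax confidence clears the threshold, so a single multi-exit network covers the whole speed/accuracy range and no model switching, and hence no vacuum period, occurs.

    # Illustrative sketch of scalable dynamic prediction with early exits.
    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def dynamic_predict(sections, exit_heads, x, threshold=0.9):
        h = x
        for block, head in zip(sections, exit_heads):
            h = block(h)
            _, z = head(h)
            probs = F.softmax(z, dim=1)
            conf, pred = probs.max(dim=1)
            # stop at the first (shallowest) exit that is confident enough;
            # lowering `threshold` trades accuracy for faster response.
            # .item() assumes a batch of one sample in this sketch.
            if conf.item() >= threshold:
                return pred, conf
        return pred, conf  # fall back to the deepest exit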

Description

Technical field

[0001] The invention relates to the training of convolutional neural networks, and in particular to a self-distillation training method and a scalable dynamic prediction method for convolutional neural networks.

Background technique

[0002] Convolutional neural networks have been widely deployed in various application scenarios. To extend their range of application to areas where accuracy is critical, researchers have studied methods to improve accuracy through deeper or wider network structures, which bring exponential growth in computing and storage costs and thus delay the response time.

[0003] Applications such as image classification, object detection, and semantic segmentation are currently developing at an unprecedented rate with the help of convolutional neural networks. However, in some applications that require fault tolerance, such as autonomous driving and medical image analysis, further improvements in prediction and ...

Claims


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06N3/04G06N3/12G06K9/62
CPCG06N3/126G06N3/045G06F18/241
Inventor 马恺声张林峰
Owner INST FOR INTERDISCIPLINARY INFORMATION CORE TECH XIAN CO LTD