Image incremental learning method based on dynamic correction vector

An incremental learning method based on a dynamic correction vector, applied in the fields of knowledge distillation techniques and representative memory methods

Pending Publication Date: 2020-05-26
ZHEJIANG UNIV OF TECH
Cites: 0 | Cited by: 22

AI Technical Summary

Problems solved by technology

[0006] In order to solve the problem of training deep models on dynamically changing data sets in practical application scenarios, to reduce the dependence on distributed computing systems, and to save substantial computing overhead and system memory, the present invention proposes a method based on the 32-layer residual network ResNet-32 that introduces knowledge distillation and representative memory and uses a dynamic correction vector to alleviate catastrophic forgetting and improve incremental learning performance.




Embodiment Construction

[0044] The present invention will be further described below in conjunction with the accompanying drawings.

[0045] Referring to Figures 1 to 3, an image incremental learning method based on a dynamic correction vector is provided. It solves the problem of training deep models on dynamically changing data sets, reduces the dependence on distributed computing systems, and saves substantial computing overhead and system memory. The invention uses the 32-layer residual network ResNet-32 as its basis, introduces knowledge distillation and a representative memory method, and applies a dynamic correction vector technique to alleviate catastrophic forgetting and improve incremental learning performance.
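As a concrete illustration of this scheme, the following is a minimal sketch of one incremental training phase in PyTorch-style Python. The exemplar handling, the exact form of the dynamic correction vector (modeled here as a learnable per-class offset added to the logits), the helper names (`incremental_phase`, `_interleave`, `loss_fn`), and all hyperparameters are illustrative assumptions rather than details fixed by the patent text.

```python
import copy
import torch
import torch.nn as nn


def incremental_phase(model, exemplar_loader, new_loader, n_old, n_new, loss_fn):
    """Train one incremental phase on new-class data plus stored exemplars.
    Assumes the classifier head already has n_old + n_new outputs."""
    old_model = copy.deepcopy(model).eval()      # frozen copy: teacher for distillation
    for p in old_model.parameters():
        p.requires_grad_(False)

    # Dynamic correction vector (illustrative form): one learnable entry per
    # class, added to the logits to counteract the bias toward new classes.
    correction = nn.Parameter(torch.zeros(n_old + n_new))
    optimizer = torch.optim.Adam(list(model.parameters()) + [correction], lr=1e-3)

    model.train()
    for images, labels in _interleave(exemplar_loader, new_loader):
        logits = model(images) + correction      # corrected predictions
        with torch.no_grad():
            old_logits = old_model(images)       # soft targets from the old model
        loss = loss_fn(logits, old_logits, labels, n_old)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model


def _interleave(*loaders):
    """Alternate batches drawn from the exemplar memory and the new data."""
    for batch_group in zip(*loaders):
        yield from batch_group
```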

[0046] The present invention comprises the following steps:

[0047] S1: Construct a backbone network modeled on the ResNet-32 layer structure to recognize both the new and old categories that appear in each incremental-phase task. The Adam optimizer is adopted for training, and the Kullback-Leibler divergence (relative entropy) is used as the basic classification loss function.
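A minimal sketch of step S1 follows, assuming a PyTorch implementation. The exact ResNet-32 construction (`resnet32_cifar`, `BasicBlock`), the learning rate, and the initial class count are illustrative assumptions; only the use of a 32-layer ResNet backbone, the Adam optimizer, and a KL-divergence classification loss comes from the text above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BasicBlock(nn.Module):
    """Residual block of the CIFAR-style ResNet family."""
    def __init__(self, in_planes, planes, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_planes, planes, 3, stride, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, 3, 1, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.shortcut = nn.Sequential()
        if stride != 1 or in_planes != planes:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_planes, planes, 1, stride, bias=False),
                nn.BatchNorm2d(planes),
            )

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + self.shortcut(x))


def resnet32_cifar(num_classes):
    """32 layers: stem conv + 3 stages x 5 blocks x 2 convs + final fc."""
    layers, in_planes = [nn.Conv2d(3, 16, 3, 1, 1, bias=False),
                         nn.BatchNorm2d(16), nn.ReLU()], 16
    for planes, stride in [(16, 1), (32, 2), (64, 2)]:
        for i in range(5):
            layers.append(BasicBlock(in_planes, planes, stride if i == 0 else 1))
            in_planes = planes
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes)]
    return nn.Sequential(*layers)


model = resnet32_cifar(num_classes=10)                 # classes seen so far (assumed)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# KL-divergence ("relative entropy") classification loss: expects the model's
# log-probabilities as input and a target probability distribution as target.
kl_loss = nn.KLDivLoss(reduction="batchmean")
```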



Abstract

The invention discloses an image incremental learning method based on a dynamic correction vector. The method comprises the following steps. S1: a backbone network modeled on the ResNet-32 layer structure is constructed, the Adam training optimizer is adopted, and the Kullback-Leibler divergence (relative entropy) is used as the basic classification loss function. S2: knowledge distillation is introduced into the loss function and combined with the classification loss, helping the new model learn the knowledge of the old categories and alleviating the catastrophic forgetting problem. S3: the ResNet-32 model is trained using the representative memory method and the dynamic correction vector method. S4: the optimal model trained in the previous incremental stage is reloaded and steps S2 to S3 are repeated, with performance evaluated on all test sets, until all incremental data has been trained. The method improves recognition ability on incremental learning tasks and has high practical value.
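The combined loss of steps S1 and S2 can be sketched as follows in PyTorch-style Python. The temperature `T`, the distillation weight `lambda_d`, the use of plain cross-entropy for the classification term (identical to the KL divergence against one-hot targets), and the function name `incremental_loss` are assumptions for illustration; the abstract only states that a distillation term is combined with the classification loss to preserve old-category knowledge.

```python
import torch
import torch.nn.functional as F


def incremental_loss(new_logits, old_logits, targets, n_old, T=2.0, lambda_d=1.0):
    """new_logits: current model outputs over all classes (old + new).
    old_logits:  frozen previous-phase model outputs over the old classes.
    targets:     ground-truth labels for the current batch.
    n_old:       number of classes learned in earlier phases."""
    # Classification term: cross-entropy, which equals the KL divergence
    # between the one-hot target distribution and the predicted distribution.
    cls = F.cross_entropy(new_logits, targets)

    # Distillation term: the new model's softened outputs on the old classes
    # should match the old model's softened outputs, preserving old-class
    # knowledge and mitigating catastrophic forgetting.
    log_p_new = F.log_softmax(new_logits[:, :n_old] / T, dim=1)
    p_old = F.softmax(old_logits / T, dim=1)
    distill = F.kl_div(log_p_new, p_old, reduction="batchmean") * (T * T)

    return cls + lambda_d * distill
```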

Description

Technical field

[0001] The present invention relates to the Knowledge Distillation technique and the Representative Memory method. By using a Dynamic Correction Vector, the classification accuracy on new-category data is improved while the classification and recognition accuracy on old categories is maintained, thereby realizing incremental learning recognition tasks on the original data set.

Background technique

[0002] In recent years, Deep Convolutional Neural Networks (DCNNs) have been widely used in detection, segmentation, object recognition, and other image-related fields. Although convolutional neural networks existed earlier, it was the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) that brought them to the attention of computer vision and machine learning teams: in 2012, AlexNet won the challenge by a clear margin with a deep CNN, pushing DCNNs into the public eye. Since then, DCNNs have dominated ILSVRC and ...


Application Information

IPC(8): G06K 9/62; G06N 3/04; G06N 3/08
CPC: G06N 3/08; G06N 3/045; G06F 18/241; G06F 18/214
Inventors: 宣琦 (Xuan Qi), 缪永彪 (Miao Yongbiao), 陈晋音 (Chen Jinyin), 翔云 (Xiang Yun)
Owner: ZHEJIANG UNIV OF TECH