
Meta-learning-based domain increment method

A meta-learning-based domain increment technology, applied to domain-incremental learning, which addresses the problems of reduced model accuracy on old-domain data, the difficulty of reconciling accuracy on old and new domains, and the large data-storage and training-time overhead of retraining, achieving the effect of maintaining accuracy while reducing overhead.

Publication Date (application pending): 2021-02-02
中国科学院计算技术研究所厦门数据智能研究院 (Xiamen Data Intelligence Research Institute, Institute of Computing Technology, Chinese Academy of Sciences)
Cites: 0 · Cited by: 5

AI Technical Summary

Problems solved by the technology

However, methods based on convolutional neural networks also have a drawback: when the distribution of the test image data is inconsistent with that of the training image data, for example due to changes in lighting, background, or pose, the accuracy of the model drops.
[0003] At present, the most intuitive domain incremental learning method is to continue training the model on new-domain data, but its accuracy often cannot meet requirements: if training is insufficient, accuracy on the new-domain data is low; if training goes further, accuracy on the old-domain data degrades, and the two are difficult to reconcile.
However, if the old-domain and new-domain data are simply mixed together to retrain the convolutional neural network, the data-storage and training-time overhead is huge, and in practice this overhead keeps growing as new-domain data accumulates.



Examples


Embodiment

[0029] In this embodiment, a batch of old data D_old is given first, where D_old consists of Mobike bicycles and golden retrievers; later a batch of new data D_new arrives, where D_new consists of little yellow bicycles and huskies. The goal of this embodiment is to achieve high accuracy on both the old and the new data.
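As a hypothetical illustration of this setup (the sample ids, dataset sizes, and label strings below are invented for the sketch; the 5% memory reservation comes from the abstract):

```python
import random

# Illustrative stand-ins for the embodiment's data: D_old mixes Mobike
# bicycles and golden retrievers, D_new mixes little yellow bicycles and
# huskies. Each element is a (sample_id, label) pair.
D_old = [(f"old_{i}", random.choice(["mobike", "golden_retriever"]))
         for i in range(1000)]
D_new = [(f"new_{i}", random.choice(["yellow_bike", "husky"]))
         for i in range(1000)]

# Per the abstract, randomly reserve 5% of the old data as "memory" and mix
# it with the new data to form the fine-tuning set for the new model.
memory = random.sample(D_old, k=int(0.05 * len(D_old)))
finetune_set = memory + D_new
random.shuffle(finetune_set)
```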

As shown in figure 1, the method of this embodiment is as follows:

[0030] S1. Build a pre-training model: using the meta-learning method iTAML, select several public datasets as metadata, construct meta-tasks, and learn a pre-training model. For example, from the CIFAR-10 dataset select airplane and bird (task 1), truck and deer (task 2), and car and horse (task 3). MobileNetV2 is chosen as the classification model structure, and the parameters φ of the pre-training model are obtained; the pre-training model is a convolutional neural classification network. It should be pointed out that iTAML is model-agnostic, so any convolutional neural classif...
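The excerpt names iTAML but does not reproduce its algorithm, so the following is only a minimal PyTorch sketch of S1 under stated assumptions: it builds the three CIFAR-10 pair tasks from the embodiment and applies a simplified first-order (Reptile-style) meta-update to MobileNetV2 as a stand-in for iTAML's actual meta-learning procedure; all hyperparameters are illustrative.

```python
import copy
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, Dataset
from torchvision import datasets, models, transforms

# Meta-tasks from CIFAR-10 class pairs, as in the embodiment:
# task 1 airplane(0)/bird(2), task 2 truck(9)/deer(4), task 3 car(1)/horse(7).
TASKS = [(0, 2), (9, 4), (1, 7)]

cifar = datasets.CIFAR10(root="./data", train=True, download=True,
                         transform=transforms.ToTensor())

class PairTask(Dataset):
    """Binary task over two CIFAR-10 classes: label 1 for `pos`, 0 for `neg`."""
    def __init__(self, base, pos, neg):
        self.base, self.pos = base, pos
        self.idx = [i for i, y in enumerate(base.targets) if y in (pos, neg)]
    def __len__(self):
        return len(self.idx)
    def __getitem__(self, i):
        x, y = self.base[self.idx[i]]
        return x, int(y == self.pos)

model = models.mobilenet_v2(num_classes=2)     # phi: pre-training parameters
meta_lr, inner_lr, inner_steps = 0.1, 1e-3, 5  # illustrative hyperparameters

for epoch in range(3):
    for pos, neg in TASKS:
        loader = DataLoader(PairTask(cifar, pos, neg),
                            batch_size=64, shuffle=True)
        # Inner loop: adapt a copy of phi to the current meta-task.
        fast = copy.deepcopy(model)
        opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
        batches = iter(loader)
        for _ in range(inner_steps):
            x, y = next(batches)
            loss = F.cross_entropy(fast(x), y)
            opt.zero_grad(); loss.backward(); opt.step()
        # Outer (Reptile-style) update: move phi toward the adapted weights.
        with torch.no_grad():
            for p, q in zip(model.parameters(), fast.parameters()):
                p.add_(q - p, alpha=meta_lr)
```

Since iTAML is model-agnostic, as the text notes, the MobileNetV2 here could be swapped for any convolutional classification network.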



Abstract

The invention discloses a meta-learning-based domain increment method. The method comprises the following steps: S1, constructing a pre-training model; S2, training an old model by using the pre-training model; S3, training a new model. According to the invention, a randomly reserved 5% of the old data ("memory" data) is mixed with the new data for fine-tuning to train the new model, and a cross-entropy loss function is combined with a knowledge distillation loss function to guide the learning of the new model, so that the new model learns the classification knowledge of the new-domain data while retaining the classification knowledge of the old domain, greatly reducing the data-storage and training-time overhead.
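A minimal sketch of the combined objective described above, assuming standard Hinton-style distillation; the temperature T and mixing weight alpha are illustrative choices, not values fixed by this excerpt:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Soften both distributions with temperature T and match them with KL
    # divergence; the T*T factor keeps gradient magnitudes comparable.
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

def combined_loss(student_logits, teacher_logits, labels, alpha=0.5):
    # Cross-entropy on the mixed memory + new-domain batch teaches the new
    # model the new domain; distilling against the frozen old model's outputs
    # preserves the old-domain classification knowledge.
    ce = F.cross_entropy(student_logits, labels)
    kd = distillation_loss(student_logits, teacher_logits)
    return (1 - alpha) * ce + alpha * kd
```

During S3, each mini-batch drawn from the mixed 5%-memory-plus-new-data set would be passed through both the frozen old model (producing teacher_logits) and the new model being trained (producing student_logits).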

Description

Technical Field

[0001] The invention relates to the field of computer technology, and in particular to a meta-learning-based domain increment method.

Background Technique

[0002] With the rise of deep learning, object classification methods based on convolutional neural networks have developed rapidly, and recognition accuracy has greatly improved. However, methods based on convolutional neural networks also have drawbacks: when the distribution of the test image data is inconsistent with that of the training image data, for example due to changes in lighting, background, or pose, the accuracy of the model decreases. Therefore, when new-domain data appears, that is, data whose distribution is inconsistent with the original training data, the model needs to be able to incrementally learn the new domain: to learn new-domain classification knowledge while remembering the classifications of the old-domain data.

[0003] At present, the most intuitive ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/04, G06N3/08, G06K9/62
CPC: G06N3/08, G06N3/045, G06F18/214, G06F18/241
Inventors: 王杰龙 (Wang Jielong), 安竹林 (An Zhulin), 程坦 (Cheng Tan), 徐勇军 (Xu Yongjun)
Owner: 中国科学院计算技术研究所厦门数据智能研究院