Meta-knowledge fine-tuning method and platform based on domain-invariant features

A technology in the field of meta-knowledge fine-tuning methods and platforms based on domain-invariant features; it addresses the problem that the resulting compressed model is effective only on a limited data set, and achieves the effect of improving compression efficiency.

Active Publication Date: 2021-02-12
ZHEJIANG LAB

AI Technical Summary

Problems solved by technology

In the fine-tuning stage, existing compression methods for pre-trained language models fine-tune on the specific data set of the downstream task, so the effect of the trained compression model is limited to the specific data set of that type of task.




Embodiment Construction

[0033] The invention discloses a meta-knowledge fine-tuning method and platform for a general language model based on domain-invariant features, built on a general compression framework for pre-trained language models. Instead of fine-tuning the pre-trained language model on a single task-specific data set, the method fine-tunes it on a cross-domain data set of the downstream task, so that the resulting compression model is suitable for data scenarios from different domains of the same kind of task.
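
The patent excerpt does not include reference code. As a minimal illustrative sketch (not the patented implementation), cross-domain fine-tuning can be pictured as drawing one batch from every domain of the same downstream task at each optimization step; the model, data loaders, and function names below are assumptions.

    # Illustrative cross-domain fine-tuning loop; all names are hypothetical.
    import torch
    from itertools import cycle

    def cross_domain_finetune(model, domain_loaders, task_loss_fn, steps, lr=2e-5):
        """Fine-tune one model on batches drawn from several domains of one task."""
        optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
        iters = [cycle(loader) for loader in domain_loaders]  # endless per-domain streams
        for _ in range(steps):
            optimizer.zero_grad()
            total_loss = 0.0
            for it in iters:                       # one batch from every domain per step
                inputs, labels = next(it)
                logits = model(inputs)             # assumes the model maps inputs to logits
                total_loss = total_loss + task_loss_fn(logits, labels)
            total_loss.backward()
            optimizer.step()
        return model

Because every step mixes batches from all domains, the shared parameters are pushed toward features that work across the whole cross-domain data set rather than any single domain.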

[0034] As shown in Figure 1, the present invention designs a meta-knowledge fine-tuning learning method: a learning method based on domain-invariant features. The present invention learns highly transferable shared knowledge, i.e., domain-invariant features, on different data sets of similar tasks. By introducing domain-invariant features, the common domain features learned by the network on the different domains corresponding to the different data sets of similar tasks are fine-tuned, so that the model can quickly adapt to any different domain.
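
The excerpt states that a domain-invariant feature objective is minimized but does not give its concrete form. The sketch below is a hedged stand-in: it measures a linear-kernel MMD-style discrepancy between pooled encoder features from different domains and would be added to the task loss with an illustrative weight; none of these choices are confirmed by the patent text.

    # Stand-in domain-invariant feature loss (assumed form, not the patented one).
    import torch

    def mmd_penalty(feat_a, feat_b):
        """Squared distance between the mean pooled features of two domains."""
        return (feat_a.mean(dim=0) - feat_b.mean(dim=0)).pow(2).sum()

    def domain_invariant_loss(domain_features):
        """Average pairwise discrepancy over the domains' pooled encoder features.

        Expects a list of at least two tensors of shape (batch, hidden).
        """
        pair_losses = [
            mmd_penalty(domain_features[i], domain_features[j])
            for i in range(len(domain_features))
            for j in range(i + 1, len(domain_features))
        ]
        return torch.stack(pair_losses).mean()

    # Combined objective per step (lambda_inv is an illustrative hyperparameter):
    # total_loss = task_loss + lambda_inv * domain_invariant_loss(per_domain_features)

Minimizing such a discrepancy drives the encoder to produce features that cannot be told apart by domain, which is one concrete way to realize domain-invariant feature encoding.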



Abstract

The invention discloses a meta-knowledge fine-tuning method and platform based on domain-invariant features. The method learns highly transferable common knowledge, i.e., domain-invariant features, on different data sets of similar tasks; the common domain features learned by the network on the different domains corresponding to those data sets are fine-tuned, so that the model can quickly adapt to any different domain. The method improves the parameter-initialization ability and generalization ability of a general language model for similar tasks, and finally obtains, through fine-tuning, a general compression architecture of the language model for similar downstream tasks. In the meta-knowledge fine-tuning network, a domain-invariant feature loss function is designed and domain-independent general knowledge is learned; that is, the learning objective for the domain-invariant features is minimized, which drives the language model to acquire domain-invariant feature encoding ability.

Description

Technical Field

[0001] The invention belongs to the field of language model compression, and in particular relates to a meta-knowledge fine-tuning method and platform based on domain-invariant features.

Background Technique

[0002] Pre-trained neural language models improve the performance of a variety of natural language processing tasks by fine-tuning on task-specific training sets. In the fine-tuning stage, existing compression methods for pre-trained language models fine-tune on the specific data set of the downstream task, so the effect of the trained compression model is limited to the specific data set of that type of task.

Contents of the Invention

[0003] The purpose of the present invention is to provide a meta-knowledge fine-tuning method and platform based on domain-invariant features to address the deficiencies of the prior art. The present invention introduces meta-knowledge based on domain-invariant features, and learns common domain features on the different domains corresponding to different data sets of similar tasks.
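
As a usage illustration of the claimed cross-domain applicability (not part of the patent text), a single fine-tuned compressed model could be evaluated unchanged on several domains of the same downstream task; the evaluation helper below is hypothetical.

    # Hypothetical per-domain evaluation of one compressed model checkpoint.
    import torch

    @torch.no_grad()
    def evaluate_across_domains(model, domain_eval_loaders):
        """Return accuracy of the same model on each domain's evaluation set."""
        model.eval()
        accuracies = {}
        for name, loader in domain_eval_loaders.items():
            correct, total = 0, 0
            for inputs, labels in loader:
                preds = model(inputs).argmax(dim=-1)
                correct += (preds == labels).sum().item()
                total += labels.numel()
            accuracies[name] = correct / max(total, 1)
        return accuracies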


Application Information

IPC(8): G06K 9/62; G06N 3/04; G06N 3/08; G06N 5/04; G06N 20/00
CPC: G06N 3/08; G06N 20/00; G06N 5/041; G06N 3/045; G06F 18/2414
Inventors: 王宏升, 单海军, 梁元, 邱启仓
Owner: ZHEJIANG LAB