
Compression method and platform of pre-trained language model based on knowledge distillation

A language-model compression technology applied in the field of compression methods and platforms for pre-trained language models. It addresses the difficulty of generalizing from small samples, the growing scale of deep-learning networks, and the rising computational complexity, thereby improving the efficiency of model compression.

Active Publication Date: 2020-12-08
ZHEJIANG LAB

AI Technical Summary

Problems solved by technology

[0002] With the popularization of smart devices, large-scale language models are increasingly deployed on embedded devices such as smartphones and wearables. For applications on smart devices such as mobile phones, the prevailing approach is still one-way knowledge distillation from a teacher model to a compressed student model, but the difficulty of generalizing from small samples during large-scale language-model compression remains unsolved.

Method used



Examples


Embodiment Construction

[0026] As shown in Figure 1, a compression method for a pre-trained language model based on knowledge distillation comprises a feature-map knowledge distillation module, a self-attention cross knowledge distillation module, and a linear learning module based on the Bernoulli probability distribution. The feature-map knowledge distillation module implements a universal knowledge distillation strategy for feature transfer: while distilling knowledge from the teacher model to the student model, the feature map of each layer of the student model is driven to approximate the corresponding teacher features, with particular attention to the teacher model's intermediate-layer features, which are used to guide the student model. The self-attention cross knowledge distillation module cross-connects the self-attention modules of the teacher and student networks to realize deep mutual learning between the teacher model and the student model ...
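A minimal sketch of the two distillation losses described above, assuming a PyTorch setting in which both models expose per-layer hidden states and attention probabilities. The function names, the one-to-one layer pairing, `proj`, and `eps` are illustrative assumptions, not the patent's actual implementation.

```python
import torch
import torch.nn.functional as F

def feature_map_distillation_loss(student_feats, teacher_feats, proj=None):
    """MSE between each student layer's feature map and the paired
    (intermediate-layer) teacher feature map. `proj` is an optional linear
    layer that aligns hidden sizes when student and teacher widths differ."""
    loss = 0.0
    for s_f, t_f in zip(student_feats, teacher_feats):
        s_f = proj(s_f) if proj is not None else s_f
        loss = loss + F.mse_loss(s_f, t_f)
    return loss / max(len(student_feats), 1)

def self_attention_distillation_loss(student_attn, teacher_attn, eps=1e-12):
    """KL divergence pushing the student's self-attention distributions toward
    the teacher's; a simplified stand-in for the cross-connected
    teacher/student self-attention modules described above."""
    loss = 0.0
    for s_a, t_a in zip(student_attn, teacher_attn):
        loss = loss + F.kl_div((s_a + eps).log(), t_a, reduction="batchmean")
    return loss / max(len(student_attn), 1)
```

In practice the two losses would be weighted and added to the student's task loss; the weights and the choice of which teacher layers to pair with each student layer are design choices the text shown here does not pin down.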



Abstract

The invention discloses a compression method and platform for a pre-trained language model based on knowledge distillation. The method first designs a universal knowledge distillation strategy for feature transfer: while distilling knowledge from the teacher model to the student model, the feature map of each layer of the student model is driven to approximate the teacher's features, focusing on the expressive ability of the teacher model's intermediate layers on small samples and using those features to guide the student model. Next, the self-attention distribution of the teacher model, which captures semantic and syntactic knowledge, is exploited to build a cross knowledge distillation method based on self-attention. Finally, to improve the learning quality of the student model in the early stage of training and its generalization ability in the later stage, a linear transfer strategy based on the Bernoulli probability distribution is designed to gradually complete the transfer of feature maps and self-attention distributions from teacher to student. Through the present invention, multi-task-oriented pre-trained language models are automatically compressed, and the compression efficiency of language models is improved.
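A minimal sketch of the Bernoulli-based linear transfer idea, assuming the transfer probability is interpolated linearly over training steps and sampled once per step. The direction of the schedule and the hyperparameter names (`p_start`, `p_end`) are assumptions for illustration, not taken from the patent.

```python
import torch

def bernoulli_transfer_gate(step, total_steps, p_start=1.0, p_end=0.0):
    """Linearly interpolate the probability p of applying the teacher-to-student
    transfer losses (feature maps and self-attention distributions) at the
    current step, then draw a Bernoulli sample to decide whether to apply them.
    With p_start > p_end the student leans on the teacher early in training and
    increasingly learns on its own later; swapping the endpoints reverses this."""
    frac = min(step / max(total_steps, 1), 1.0)
    p = p_start + (p_end - p_start) * frac
    use_transfer = bool(torch.bernoulli(torch.tensor(p)).item())
    return use_transfer, p

# Hypothetical usage inside a training loop:
# use_transfer, p = bernoulli_transfer_gate(step, total_steps)
# loss = task_loss + (distill_loss if use_transfer else 0.0)
```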

Description

Technical field

[0001] The invention belongs to the field of automatic compression of multi-task-oriented pre-trained language models, and in particular relates to a compression method and platform for a pre-trained language model based on knowledge distillation.

Background technique

[0002] With the popularization of smart devices, large-scale language models are increasingly deployed on embedded devices such as smartphones and wearables. For applications on smart devices such as mobile phones, the prevailing approach is still one-way knowledge distillation from a teacher model to a compressed student model, but the difficulty of generalizing from small samples during large-scale language-model compression remains unsolved.

Contents of the invention

[0003] The object of the present invention is to provide a compression method and platform for a pre-trained language model based ...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06F40/211, G06F40/30, G06K9/62, G06N5/02, G06N20/00
CPC: G06F40/211, G06F40/30, G06N5/02, G06N20/00, G06F18/24, G06F40/20, G06N3/08, G06N3/045
Inventor: 王宏升, 单海军, 鲍虎军
Owner: ZHEJIANG LAB