
Pre-trained language model compression method and platform based on knowledge distillation

A technology for compressing pre-trained language models, applied in the field of compression methods and platforms for pre-trained language models, which can solve the problems of poor small-sample generalization, limited applicability, and the ever-growing scale of deep learning networks, and achieves the effect of improving model compression efficiency.

Active Publication Date: 2020-10-13
ZHEJIANG LAB
Cites: 3 · Cited by: 28

AI Technical Summary

Problems solved by technology

[0002] With the popularization of smart devices, large-scale language models are increasingly applied on embedded devices such as smartphones and wearables. To enable such applications on mobile phones and other smart devices, the current approach remains one-way knowledge distillation from a teacher model into a compressed student model; however, the difficulty of generalizing from small samples during the compression of large-scale language models persists.


Examples


Embodiment Construction

[0026] As shown in figure 1, the compression method for pre-trained language models based on knowledge distillation comprises a feature-map knowledge distillation module, a self-attention cross knowledge distillation module, and a linear learning module based on the Bernoulli probability distribution. The feature-map knowledge distillation module implements a universal knowledge distillation strategy for feature transfer: while distilling knowledge from the teacher model into the student model, the feature map of each layer of the student model is driven to approximate the corresponding features of the teacher, paying particular attention to the intermediate-layer features of the teacher model and using these features to guide the student model. The self-attention cross knowledge distillation module cross-connects the self-attention modules of the teacher and student networks, realizing deep mutual learning between the teacher model and the student model...
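The feature-map transfer and cross self-attention transfer described in [0026] can be illustrated with a short sketch. This is not code from the patent: the function names, the MSE/KL loss choices, and the per-layer projection used to align hidden sizes are assumptions for a PyTorch-style setup.

```python
import torch
import torch.nn.functional as F

def feature_map_distillation_loss(student_feats, teacher_feats, projections):
    """Drive each student layer's feature map toward the mapped teacher
    layer's feature map (hypothetical MSE formulation)."""
    loss = 0.0
    for s_feat, t_feat, proj in zip(student_feats, teacher_feats, projections):
        loss = loss + F.mse_loss(proj(s_feat), t_feat)  # proj aligns hidden sizes
    return loss / len(projections)

def cross_attention_distillation_loss(student_attn, teacher_attn):
    """Match student and teacher self-attention distributions for the
    cross-connected attention modules (hypothetical KL formulation)."""
    loss = 0.0
    for s_attn, t_attn in zip(student_attn, teacher_attn):
        log_s = s_attn.clamp_min(1e-12).log()
        loss = loss + F.kl_div(log_s, t_attn, reduction="batchmean")
    return loss / len(student_attn)
```

In this sketch, student_feats and teacher_feats would be lists of hidden states from the layers chosen for the teacher-to-student mapping, and the attention tensors would be the softmax-normalized self-attention maps of the cross-connected layers.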



Abstract

The invention discloses a pre-trained language model compression method and platform based on knowledge distillation. The method first designs a universal knowledge distillation strategy of feature migration: while distilling knowledge from a teacher model into a student model, the feature map of each layer of the student model is mapped to and driven to approximate the corresponding features of the teacher, focusing on the ability of the teacher model's intermediate layers to represent small samples, and these features are used to guide the student model. Next, a distillation method based on self-attention cross knowledge is constructed, exploiting the ability of the teacher model's self-attention distributions to detect semantics and syntax among words. Finally, in order to improve the learning quality of the student model in the early stage of training and its generalization ability in the later stage, a linear migration strategy based on the Bernoulli probability distribution is designed to gradually complete the teacher-to-student migration of feature maps and self-attention distributions. By automatically compressing multi-task-oriented pre-trained language models, the method and platform improve the compression efficiency of language models.
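One way to read the "linear migration strategy based on Bernoulli probability distribution" is as a gate whose probability of transferring teacher knowledge grows linearly over training, so the student focuses on its own task loss early on and receives more teacher guidance later. The sketch below is an assumed illustration of that idea; the schedule, weights, and names are not taken from the patent text.

```python
import torch

def bernoulli_transfer_gate(step, total_steps, p_start=0.1, p_end=1.0):
    """Draw a Bernoulli variable whose success probability increases
    linearly with training progress (hypothetical schedule)."""
    progress = min(step / max(total_steps, 1), 1.0)
    p = p_start + (p_end - p_start) * progress
    return bool(torch.bernoulli(torch.tensor(p)).item())

# Hypothetical use inside a training loop:
# loss = task_loss
# if bernoulli_transfer_gate(step, total_steps):
#     loss = loss + alpha * feature_map_loss + beta * cross_attention_loss
```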

Description

Technical field

[0001] The invention belongs to the field of automatic compression of multi-task-oriented pre-trained language models, and in particular relates to a compression method and platform for pre-trained language models based on knowledge distillation.

Background technique

[0002] With the popularization of smart devices, large-scale language models are increasingly applied on embedded devices such as smartphones and wearables. To enable such applications on mobile phones and other smart devices, the current approach remains one-way knowledge distillation from a teacher model into a compressed student model; however, the difficulty of generalizing from small samples during the compression of large-scale language models persists.

Contents of the invention

[0003] The object of the present invention is to provide a compression method and platform for a pre-trained language model based on knowledge distillation...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (IPC-8): G06F40/211, G06F40/30, G06K9/62, G06N5/02, G06N20/00
CPC: G06F40/211, G06F40/30, G06N5/02, G06N20/00, G06F18/24, G06F40/20, G06N3/08, G06N3/045
Inventor: 王宏升, 单海军, 鲍虎军
Owner: ZHEJIANG LAB