Pre-trained language model quantization method and device

A language model quantization method and technology, applied in neural learning methods, biological neural network models, natural language data processing, etc.; it addresses the problems of low model accuracy, poor compression effect, and large performance drops in quantized models.

Pending Publication Date: 2020-10-23
AISPEECH CO LTD


Problems solved by technology

[0007] The compression effect of linear quantization is limited: the performance of the quantized model drops sharply at low precision, so the model cannot be compressed to very low bit-widths.

Method used



Examples


Embodiment approach

[0133] In one implementation, the above-mentioned electronic device is applied to a pre-trained language model quantization apparatus, which includes:

[0134] at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, enable the at least one processor to:

[0135] Perform the first fine-tuning of the pre-trained language model on downstream tasks;

[0136] Using k-means clustering, cluster the data in the weight matrices of all embedding layers and all linear layers of the fine-tuned model, except the classification layer, setting the number of clusters to 2^n, where n is the number of bits occupied by each data element of the compressed target model (see the illustrative sketch after these steps);

[0137] Fine-tune the quantized model a second time on the downstream task while maintaining the quantization, finally obtaining the quantized network.
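The clustering in [0136] can be illustrated with a short, hypothetical sketch. It assumes scikit-learn's KMeans and a single weight matrix; layer selection, the exclusion of the classification layer, and both fine-tuning passes are not shown. Each weight value is assigned to one of 2^n centroids so it can later be stored as an n-bit index:

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_quantize_weight(weight: np.ndarray, n_bits: int = 4):
    """Cluster a weight matrix into 2**n_bits centroids (illustrative sketch only).

    Returns the per-weight cluster indices and the centroid codebook;
    the dequantized matrix is codebook[indices].
    """
    n_clusters = 2 ** n_bits                      # e.g. 16 centroids for 4-bit storage
    flat = weight.reshape(-1, 1)                  # k-means over individual weight values
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(flat)
    indices = km.labels_.astype(np.uint8).reshape(weight.shape)
    codebook = km.cluster_centers_.reshape(-1)    # one float per cluster
    return indices, codebook

# Usage: quantize one layer and rebuild its (approximate) weights.
w = np.random.randn(768, 768).astype(np.float32)  # stand-in for an embedding/linear weight
idx, codebook = kmeans_quantize_weight(w, n_bits=4)
w_quantized = codebook[idx]                        # dequantized weights used by the model
```

The compression comes from storing only the n-bit index matrix plus a tiny codebook; the model still computes with the dequantized float values.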



Abstract

The invention discloses a pre-trained language model quantization method and device. The method comprises the steps of: performing a first fine-tuning of a pre-trained language model on a downstream task; using k-means clustering to cluster the data in the weight matrices of all embedding layers and all linear layers of the fine-tuned model, except the classification layer, with the number of clusters set to 2^n, where n is the number of bits occupied by each data element of the compressed target model; and performing a second fine-tuning of the quantized model on the downstream task while maintaining the quantization, finally obtaining a quantized network. The scheme provided by the embodiments of the invention shows that the influence of improving the underlying quantization scheme on the overall quantization effect has been greatly underestimated and ignored; at the same time, it shows that a very good compression effect can be achieved through simple k-means quantization without any additional tricks, indicating that the k-means compression method has very large room for development and broad application prospects.
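The abstract's "second fine-tuning while maintaining the quantization" is not spelled out here; one plausible reading, sketched below with PyTorch, is to re-project every quantized weight matrix onto its nearest codebook centroid after each optimizer step. The projection rule and the loop fragment are assumptions for illustration, not a statement of the patented procedure.

```python
import torch

@torch.no_grad()
def snap_to_codebook(weight: torch.Tensor, codebook: torch.Tensor) -> None:
    """Replace each weight in-place with its nearest codebook centroid.

    Assumed projection used to keep the model quantized between optimizer
    steps of the second fine-tuning; the patent does not prescribe this rule.
    """
    # Distances from every weight value to every centroid: shape (numel, n_clusters)
    dists = (weight.reshape(-1, 1) - codebook.reshape(1, -1)).abs()
    nearest = codebook[dists.argmin(dim=1)].reshape(weight.shape)
    weight.copy_(nearest)

# Hypothetical fine-tuning loop fragment (names are placeholders):
# for batch in loader:
#     loss = model(**batch).loss
#     loss.backward()
#     optimizer.step(); optimizer.zero_grad()
#     for layer, codebook in quantized_layers:   # embedding/linear layers except the classifier
#         snap_to_codebook(layer.weight.data, codebook)
```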

Description

technical field

[0001] The invention belongs to the field of language model quantization, and in particular relates to a method and device for quantizing a pre-trained language model.

Background technique

[0002] In the prior art, some quantization methods for pre-trained language models have appeared, including 8-bit fixed-precision quantization and mixed-precision quantization based on the Hessian matrix.

[0003] 8-bit fixed-precision quantization: quantize all layers of the model that need to be quantized to 8 bits, and then fine-tune.

[0004] Mixed-precision quantization based on the Hessian matrix: use the Hessian matrix of each layer's parameters to determine that layer's quantization precision; the larger the eigenvalues of the Hessian matrix, the higher the quantization precision assigned to the layer, and vice versa. The model is fine-tuned after quantization.

[0005] The underlying quantization scheme in both of the above methods is linear quantization. In o...
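For contrast with the k-means scheme, the linear quantization named in [0003]-[0005] maps weights onto 2^n evenly spaced levels between a layer's minimum and maximum value. The sketch below uses that common definition; the patent does not give the exact formula, so the function names and rounding choice are assumptions:

```python
import numpy as np

def linear_quantize(weight: np.ndarray, n_bits: int = 8):
    """Uniform (linear) quantization: 2**n_bits evenly spaced levels over [min, max]."""
    levels = 2 ** n_bits - 1
    w_min, w_max = weight.min(), weight.max()
    scale = (w_max - w_min) / levels if w_max > w_min else 1.0
    codes = np.round((weight - w_min) / scale).astype(np.uint8)   # integer codes, 0..levels
    return codes, scale, w_min

def linear_dequantize(codes: np.ndarray, scale: float, w_min: float) -> np.ndarray:
    return codes.astype(np.float32) * scale + w_min

# At 8 bits the uniform grid is fine enough; at very low precision (e.g. 2-3 bits)
# the evenly spaced levels fit the bell-shaped weight distribution poorly, which is
# the accuracy drop described in [0007] and what data-driven k-means centroids avoid.
```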

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F40/205, G06N3/04, G06N3/08, G06K9/62
CPC: G06F40/205, G06N3/084, G06N3/045, G06F18/23213, G06F18/24
Inventors: 俞凯, 赵梓涵, 刘韫聪, 陈露, 刘奇, 马娆
Owner: AISPEECH CO LTD