Text similarity matching model compression method and system for knowledge distillation

A text similarity matching model technology, applied in the field of text matching, which addresses problems such as overly complex models, heavy computation, and the computation-speed bottleneck, achieving the effect of improving accuracy while avoiding a decrease in computation speed.

Active Publication Date: 2020-05-19
BEIJING UNISOUND INFORMATION TECH

AI Technical Summary

Problems solved by technology

In order to achieve better results, deep learning models have become more and more complex, and their computational cost keeps increasing.
Moreover, since the retrieval module returns N candidate results, N matching computations are required. Therefore, when deep matching algorithms are deployed in real products, a simple model with low computational cost is still preferred in order to guarantee speed, and computation speed has become the biggest bottleneck.




Embodiment Construction

[0046] The preferred embodiments of the present invention will be described below in conjunction with the accompanying drawings. It should be understood that the preferred embodiments described here are only used to illustrate and explain the present invention, and are not intended to limit the present invention.

[0047] An embodiment of the present invention provides a text similarity matching model compression method based on knowledge distillation. As shown in Figure 1, the method performs the following steps:

[0048] Step 1: Obtain training data;

[0049] Step 2: According to the training data, use a first deep text matching algorithm to determine a first training model;

[0050] Step 3: Distill the prior knowledge of the first training model into the training data of a second training model, and use a second deep text matching algorithm to determine the second training model, wherein the calculation amount of the first deep text matching algorithm is greater than that of the second deep text matching algorithm.
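A minimal sketch of Steps 1 to 3 is given below. It assumes both training models are PyTorch modules that map encoded query-candidate pairs to a matching logit; the function name distill_student, the mixing weight alpha, and the loss choice are illustrative assumptions, not details specified by the patent.

```python
# Hypothetical sketch of Steps 1-3; architectures, loss, and the mixing
# weight alpha are assumptions, not details given in the patent.
import torch
import torch.nn as nn

def distill_student(teacher: nn.Module, student: nn.Module,
                    pairs: torch.Tensor, labels: torch.Tensor,
                    epochs: int = 3, alpha: float = 0.5, lr: float = 1e-3) -> nn.Module:
    """Distill the first (large) model's prior knowledge into the training
    targets of the second (small) model, then train the small model."""
    teacher.eval()
    with torch.no_grad():
        # Run the expensive first model once, offline, over the training data.
        soft_scores = torch.sigmoid(teacher(pairs))

    # Fuse the large model's scores with the ground-truth labels.
    fused_targets = alpha * labels + (1.0 - alpha) * soft_scores

    optimizer = torch.optim.Adam(student.parameters(), lr=lr)
    criterion = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        # Train the cheap second model on the fused targets.
        loss = criterion(student(pairs), fused_targets)
        loss.backward()
        optimizer.step()
    return student
```

Per the abstract, only the small second model is then evaluated at online prediction time, which is how the scheme avoids a drop in operation speed while still benefiting from the large model's prior results.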



Abstract

The invention provides a text similarity matching model compression method and system based on knowledge distillation. The method comprises the following steps: obtaining training data; determining a first training model by adopting a first deep text matching algorithm according to the training data; distilling the prior knowledge of the first training model into the training data of a second training model and adopting a second deep text matching algorithm to determine the second training model, wherein the calculation amount of the first deep text matching algorithm is larger than that of the second deep text matching algorithm; and predicting a text similarity matching result by adopting the second training model. The method adopts a knowledge distillation based text matching approach that fuses the calculation results of the large model into the training process of the small model. When the second training model is adopted for online prediction, a decrease in operation speed is avoided; meanwhile, the prior results of the first (large) training model are utilized, so prediction accuracy can be improved.
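One way to write the fusion described in the abstract, as an illustrative formulation rather than the patent's own notation (the symbols alpha, f_teacher, f_student, and the binary cross-entropy CE are assumptions):

$$\tilde{y}_i = \alpha\, y_i + (1-\alpha)\,\sigma\big(f_{\mathrm{teacher}}(q_i, c_i)\big), \qquad \mathcal{L}_{\mathrm{student}} = \frac{1}{M}\sum_{i=1}^{M} \mathrm{CE}\big(\sigma(f_{\mathrm{student}}(q_i, c_i)),\, \tilde{y}_i\big)$$

Here $(q_i, c_i)$ is the $i$-th query-candidate training pair, $y_i$ its ground-truth label, $\sigma$ the sigmoid, and $\alpha \in [0,1]$ a mixing weight. Only $f_{\mathrm{student}}$ is evaluated online, which is why the operation speed does not decrease.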

Description

Technical field

[0001] The invention relates to the technical field of text matching, and in particular to a text similarity matching model compression method and system based on knowledge distillation.

Background technique

[0002] At present, in open-domain question answering within human-computer dialogue systems, the mainstream solution combines a retrieval module with a matching module. The steps of the current mainstream text matching scheme are: Step 1: first obtain a fixed number N (for example, 20) of candidate results through the retrieval module; Step 2: obtain a score for each candidate result through a deep text matching algorithm (for example, a Siamese network based on long short-term memory); Step 3: take the candidate scores from Step 2 and use the candidate with the highest score as the final matching result.

[0003] At present, the mainstream scheme for the matching module is the deep learning approach. In order to achieve better results, deep learning models have become more and more complex and their computational cost keeps growing...
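The baseline retrieve-then-match pipeline of paragraph [0002] can be sketched as follows; the function names retrieve and score are hypothetical placeholders, and the deep matching model (for example, an LSTM-based Siamese network) is assumed to be wrapped by score.

```python
# Illustrative sketch of the baseline scheme in [0002]; names are hypothetical,
# the patent describes only the steps, not an implementation.
from typing import Callable, List, Tuple

def match_best_candidate(query: str,
                         retrieve: Callable[[str, int], List[str]],
                         score: Callable[[str, str], float],
                         n_candidates: int = 20) -> Tuple[str, float]:
    """Retrieve-then-match: the matching model runs once per candidate,
    so its cost is multiplied by N -- the bottleneck the patent targets."""
    # Step 1: the retrieval module returns a fixed number N of candidates.
    candidates = retrieve(query, n_candidates)
    # Step 2: a deep text matching model scores each (query, candidate) pair.
    scores = [score(query, c) for c in candidates]
    # Step 3: the highest-scoring candidate is the final matching result.
    best_idx = max(range(len(scores)), key=scores.__getitem__)
    return candidates[best_idx], scores[best_idx]
```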


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F40/194; G06N3/04; G06N3/08
CPC: G06N3/08; G06N3/045; Y04S10/50
Inventor: 张勇
Owner: BEIJING UNISOUND INFORMATION TECH