Grammar error correction model training method and device and grammar error correction method and device

A grammar error correction model training technology in the computer field, addressing problems such as the lack of labeled data and the time-consuming, labor-intensive nature of manual data labeling, to achieve the effects of expanding the training data, increasing accuracy, and reducing manual labor.

Publication status: Inactive · Publication date: 2020-10-13
BEIJING YUANLI WEILAI SCI & TECH CO LTD
Cites: 6 · Cited by: 12

AI Technical Summary

Problems solved by technology

To address the lack of labeled data, annotators are often hired to label the data, but manual data labeling is time-consuming and labor-intensive.
[0003] The technical problem in the prior art is that having a machine automatically correct Chinese sentences containing grammatical errors often fails to achieve the expected effect, and one of the most important reasons is the lack of a large amount of labeled data.



Examples


Embodiment 1

[0103] This embodiment provides a training method for an error correction model. As shown in Figure 1, the training method includes steps S101 to S105, each of which is described in detail below.

[0104] S101. Perform data expansion processing based on the first training set to obtain a second training set.

[0105] In this embodiment, the first training set is an existing training set with relatively little data. It is divided into a source end (source sentence end) and a target end (target sentence end). In the first training set, the source end includes the first source sample sentence and the target end includes the first target sample sentence. The first source sample sentence is a sentence with grammatical errors, and the first target sample sentence is the grammatically correct target sample sentence corresponding to the first source sample sentence.

[0106] For example, in the first training set, the first source sample sentence i...
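The concrete data expansion rule of step S101 is truncated in this excerpt, so the following is only a minimal sketch under a common assumption: new (erroneous source, correct target) pairs are synthesized by injecting random character-level noise (deletion, duplication, swapping) into the grammatically correct target sentences. The function names, noise operations, and probability p are illustrative and are not taken from the patent.

```python
import random

def add_noise(sentence: str, p: float = 0.1) -> str:
    """Return a noisy copy of `sentence` to serve as a synthetic erroneous source."""
    chars = list(sentence)
    out = []
    i = 0
    while i < len(chars):
        r = random.random()
        if r < p:                                   # delete this character
            i += 1
        elif r < 2 * p:                             # duplicate this character
            out.extend([chars[i], chars[i]])
            i += 1
        elif r < 3 * p and i + 1 < len(chars):      # swap with the next character
            out.extend([chars[i + 1], chars[i]])
            i += 2
        else:                                       # keep this character unchanged
            out.append(chars[i])
            i += 1
    return "".join(out)

def expand_training_set(first_set):
    """first_set: list of (source, target) pairs; returns the enlarged second training set."""
    second_set = list(first_set)
    for _, target in first_set:
        # pair a synthetically corrupted copy of the correct sentence with the original
        second_set.append((add_noise(target), target))
    return second_set
```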

Embodiment 2

[0134] This embodiment provides a training method for a grammatical error correction model. As shown in Figure 2, it specifically includes steps S201 to S207.

[0135] S201. Perform data expansion processing based on the first training set to obtain a second training set.

[0136] In this embodiment, the first training set includes a first source sample sentence and a first target sample sentence.

[0137] The first training set is an existing training set with relatively little data. It is divided into a source end (source sentence end) and a target end (target sentence end). In the first training set, the source end includes the first source sample sentence and the target end includes the first target sample sentence. The first source sample sentence is a sentence with grammatical errors, and the first target sample sentence is the grammatically correct target sample sentence corresponding to the first source sample sentence.

[0138] Speci...
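Steps S202 to S207 are truncated above, but the Abstract later in this document summarizes the flow: expand the data, obtain second source and target sample pairs, generate error-correction sample sentences with the model, determine a loss value, and train iteratively until a stop condition is met. The sketch below follows that summary; the model interface (correct/loss/update), the epoch cap, and the target-loss stop condition are assumed placeholders rather than the patent's actual design, and expand_training_set refers to the sketch after Embodiment 1.

```python
def train_correction_model(model, first_training_set, max_epochs=10, target_loss=0.05):
    # obtain the second training set by data expansion (step S201 / S101)
    second_training_set = expand_training_set(first_training_set)
    for epoch in range(max_epochs):                      # iterative training
        epoch_loss = 0.0
        for source, target in second_training_set:       # second source/target sample pairs
            corrected = model.correct(source)             # generate error-correction sample sentence
            loss = model.loss(corrected, target)          # loss vs. second target sample sentence
            model.update(loss)                            # one parameter update
            epoch_loss += loss
        epoch_loss /= len(second_training_set)
        if epoch_loss <= target_loss:                     # training stop condition met
            break
    return model
```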

Embodiment 3

[0230] This embodiment provides a grammatical error correction model training method, as shown in Figure 3, including the following steps:

[0231] S301. Data preprocessing.

[0232] Specifically, preprocessing the first source sample sentence and the first target sample sentence in the existing first training set includes:

[0233] performing word segmentation processing on the first source sample sentence and the first target sample sentence, and separating each word unit;

[0234] removing from the first training set sentences whose length is greater than a preset threshold;

[0235] removing sentence pairs in which the first source sample sentence and the first target sample sentence are identical.

[0236] Further, the word units in all sentences included in the first source sample sentence and the first target sample sentence are separated from each other by spaces; sentences in the first training set that are too long or too short are then deleted, for example sentences whose number of word units is more th...
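As a minimal sketch of the preprocessing in step S301, the code below follows the steps listed above: segment each sentence into word units separated by spaces, remove sentence pairs whose length falls outside a preset range, and remove pairs in which the source and target sentences are identical. The character-level segment function and the min_len/max_len thresholds are stand-ins; the excerpt does not name the actual segmentation tool or the preset threshold values.

```python
def segment(sentence: str) -> list:
    """Stand-in word segmentation: treats each non-space character as one word unit."""
    return [ch for ch in sentence if not ch.isspace()]

def preprocess(first_training_set, min_len=2, max_len=100):
    """first_training_set: list of (source, target) sentence pairs."""
    processed = []
    for source, target in first_training_set:
        src_units, tgt_units = segment(source), segment(target)
        # remove sentences that are too long or too short (preset thresholds)
        if not (min_len <= len(src_units) <= max_len and min_len <= len(tgt_units) <= max_len):
            continue
        # remove pairs in which the source and target sentences are identical
        if src_units == tgt_units:
            continue
        # separate each word unit from the next by a space
        processed.append((" ".join(src_units), " ".join(tgt_units)))
    return processed
```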



Abstract

The invention relates to a grammar error correction model training method and device, a grammar error correction method and device, computing equipment and a computer-readable storage medium. The training method comprises the steps of performing data extension processing based on a first training set to obtain a second training set; obtaining a second source sample statement and a second target sample statement based on the second training set; inputting the second source sample statement into a grammar error correction model to generate an error correction sample statement; determining a loss value based on the error correction sample statement and the second target sample statement; and performing iterative training on the grammar error correction model based on the loss value until a training stop condition is met. By performing data enhancement processing on an existing training set, the purpose of automatically expanding the training set is achieved, and manual labor is effectively reduced.
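As a brief usage sketch of the corresponding grammar error correction method, the code below applies the training sketch from Embodiment 2 and then uses the resulting model to correct a new sentence. SimpleCorrectionModel is a hypothetical stand-in that only implements the assumed correct/loss/update interface; it is not the patent's actual sequence-to-sequence network.

```python
class SimpleCorrectionModel:
    def correct(self, sentence: str) -> str:
        # a real implementation would run the trained sequence-to-sequence network
        return sentence

    def loss(self, corrected: str, target: str) -> float:
        # toy loss: fraction of character positions where output and target disagree
        length = max(len(corrected), len(target), 1)
        mismatch = sum(a != b for a, b in zip(corrected, target)) + abs(len(corrected) - len(target))
        return mismatch / length

    def update(self, loss: float) -> None:
        # a real implementation would back-propagate the loss; omitted in this sketch
        pass

model = train_correction_model(SimpleCorrectionModel(),
                               [("他昨天去了了学校", "他昨天去了学校")])
print(model.correct("我很喜欢看书书"))
```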

Description

Technical field

[0001] The present application relates to the field of computer technology, and in particular to a training method and device for a grammatical error correction model, a grammatical error correction method and device, computing equipment, and a computer-readable storage medium.

Background technique

[0002] When using a neural network model for Chinese grammar error correction, a large amount of labeled data is often required. To address the lack of labeled data, annotators are often hired to label the data, but manual data labeling is time-consuming and labor-intensive.

[0003] The technical problem in the prior art is that having a machine automatically correct Chinese sentences with grammatical errors often fails to achieve the desired effect. One of the most important reasons is the lack of a large amount of labeled data. This is because there are many types of Chinese grammatical errors, and different annotators may have different a...

Claims


Application Information

Patent Type & Authority: Applications (China)
IPC(8): G06F40/289; G06N3/04
CPC: G06F40/289; G06N3/045
Inventors: 何苏, 王亮, 赵薇, 刘金龙, 柳景明, 郭常圳
Owner: BEIJING YUANLI WEILAI SCI & TECH CO LTD