
Causal event extraction method based on self-training and noise model

A noise model and event extraction technology, applied in the fields of neural learning methods, biological neural network models, and natural language data processing, which addresses the limited effectiveness of existing methods and achieves improved extraction performance

Active Publication Date: 2020-09-11
HARBIN INST OF TECH

AI Technical Summary

Problems solved by technology

[0006] The purpose of the present invention is to solve the problem that existing deep-learning-based causal event extraction methods rely on large amounts of labeled data, which limits their effectiveness in domains or scenarios where labeled data is insufficient, and to propose a causal event extraction method based on self-training and a noise model.



Examples


Specific Embodiment 1

[0024] Specific Embodiment 1: This embodiment describes a causal event extraction method based on self-training and a noise model. The specific process is as follows:

[0025] Step 1. Collect a small amount of labeled text in the target domain, or annotate a small amount of unlabeled target-domain text, labeling causal event pairs. Annotation follows the sequence labeling convention: each word in the text is assigned a label indicating whether it belongs to a cause event, an effect event, or other constituents (a concrete tagging scheme and worked example appear in Specific Embodiment 2 below);

[0026] Step 2. First segment the labeled text from Step 1 with an existing word segmentation tool, then use a neural network, such as a pre-trained language model based on the self-attention mechanism, to compute a vector representation for each word in the segmented labeled text;
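As an illustrative sketch only (the patent does not prescribe a specific model), the contextual word vectors of Step 2 could be obtained from a pre-trained self-attention language model such as BERT via the Hugging Face transformers library; the model name, example words, and mean pooling are assumptions:

```python
# Sketch: contextual word vectors from a pre-trained self-attention
# language model (assumed: Hugging Face transformers + bert-base-chinese).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")  # assumed model
model = AutoModel.from_pretrained("bert-base-chinese")

# Words produced by an existing word segmentation tool (Step 2); illustrative.
words = ["货币", "超发", "导致", "了", "房价", "上涨"]

# Tokenize pre-segmented words; is_split_into_words keeps word boundaries.
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**enc).last_hidden_state  # (1, num_subtokens, hidden_size)

# Pool sub-token vectors back to one vector per word (mean pooling, assumed).
word_ids = enc.word_ids()
word_vecs = []
for i in range(len(words)):
    idx = [j for j, w in enumerate(word_ids) if w == i]
    word_vecs.append(hidden[0, idx].mean(dim=0))
```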

[0027] Step 3. Use a conditional random field (CRF) model to compute the label sequence with the highest probability from the vector representations...
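A minimal sketch of the decoding side of Step 3, assuming per-word emission scores (e.g., a linear layer over the Step 2 vectors) and a learned label-transition matrix; this is standard Viterbi decoding, not code from the patent:

```python
# Sketch: Viterbi decoding of the most probable label sequence
# given emission scores and a transition matrix (assumed shapes).
import numpy as np

def viterbi_decode(emissions, transitions):
    """emissions: (seq_len, num_labels) per-word label scores.
    transitions: (num_labels, num_labels) score of moving label i -> label j.
    Returns the highest-scoring sequence of label indices."""
    seq_len, num_labels = emissions.shape
    score = emissions[0].copy()                     # best score ending in each label
    backptr = np.zeros((seq_len, num_labels), dtype=int)
    for t in range(1, seq_len):
        # Score of extending each previous label to each current label.
        total = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    # Follow back-pointers from the best final label.
    best = [int(score.argmax())]
    for t in range(seq_len - 1, 0, -1):
        best.append(int(backptr[t, best[-1]]))
    return best[::-1]
```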

Specific Embodiment 2

[0036] Specific Embodiment 2: This embodiment differs from Specific Embodiment 1 in that the tagging method of the sequence labeling task in Step 1 uses a tagging specification such as BIO or BIOES. For example, for a segmented sentence whose words gloss as "money / over-issuance / caused / (particle) / house prices / ...", the label sequence under the BIO specification is "B-cause / I-cause / O / O / B-effect / I-effect / I-effect / I-effect", where B-cause marks the beginning of a cause, I-cause the inside of a cause, B-effect the beginning of an effect, I-effect the inside of an effect, and O marks text belonging to neither a cause nor an effect.
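To make the scheme concrete, here is a small illustrative helper (not from the patent) that converts word-level cause/effect span annotations into BIO labels; the sentence and span indices are hypothetical:

```python
# Sketch: converting cause/effect span annotations into BIO labels.
def to_bio(num_words, spans):
    """spans: list of (start, end, role) with end exclusive and
    role in {"cause", "effect"}. Returns one BIO label per word."""
    labels = ["O"] * num_words
    for start, end, role in spans:
        labels[start] = f"B-{role}"
        for i in range(start + 1, end):
            labels[i] = f"I-{role}"
    return labels

# Hypothetical example: words 0-1 form the cause, words 4-7 the effect.
words = ["money", "over-issuance", "caused", "(particle)",
         "house-prices", "to", "rise", "sharply"]
print(to_bio(len(words), [(0, 2, "cause"), (4, 8, "effect")]))
# -> ['B-cause', 'I-cause', 'O', 'O', 'B-effect', 'I-effect', 'I-effect', 'I-effect']
```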

[0037] Other steps and parameters are the same as those in Specific Embodiment 1.

Specific Embodiment 3

[0038] Specific Embodiment 3: This embodiment differs from Specific Embodiments 1 and 2 in Step 2: first segment the labeled text from Step 1 with an existing word segmentation tool, then use a neural network, such as a pre-trained language model based on the self-attention mechanism, to compute a vector representation for each word in the segmented labeled text. The specific process is:

[0039] Look up the word vector corresponding to each word of the segmented labeled text in the pre-trained word vector matrix (each word's vector is a row of that matrix), and feed these word vectors into a neural network to obtain, for each word, a vector representation that fuses contextual information;

[0040] The neural network is a recurrent neural network, a long short-term memory (LSTM) network...
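As an illustrative sketch of this step (assuming PyTorch and a bidirectional LSTM, one of the network types the embodiment names), context-fused word vectors could be computed as follows; the vocabulary size, dimensions, and word indices are placeholders:

```python
# Sketch: pre-trained word vectors + a BiLSTM context encoder (assumed setup).
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 50000, 300, 256  # placeholder sizes

# Embedding initialized from a pre-trained word vector matrix;
# each row is one word's vector, as in paragraph [0039].
pretrained = torch.randn(vocab_size, embed_dim)  # stand-in for real vectors
embedding = nn.Embedding.from_pretrained(pretrained, freeze=False)

# A bidirectional LSTM fuses left and right context into each word's vector.
encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)

word_ids = torch.tensor([[11, 42, 7, 3, 905, 18]])   # one segmented sentence
context_vecs, _ = encoder(embedding(word_ids))       # (1, 6, 2 * hidden_dim)
```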



Abstract

The invention discloses a causal event extraction method based on self-training and a noise model, and relates to causal event extraction. The objective of the invention is to solve the limited effectiveness of existing deep-learning-based causal event extraction methods in domains or scenarios with insufficient annotated data. The method comprises: 1, collecting labeled target-domain text; 2, computing a vector representation; 3, computing the label sequence with the maximum probability; 4, training the model of step 3 and fine-tuning the model of step 2; 5, obtaining a large amount of self-labeled data; 6, computing a vector representation for each word and computing the probability of generating each possible label sequence from the word sequence; 7, computing a noise matrix for each word in the self-labeled text; 8, obtaining the probability of generating the self-labeled label sequence from the word sequence; and 9, jointly training the overall model of steps 2, 3, 6, and 7 using the labeled data of step 1 and the self-labeled data of step 5. The method is applied to the field of causal event extraction.
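Steps 6 through 8 describe a noise-channel view of the self-labeled data: a per-word noise matrix maps the model's clean-label distribution to a distribution over the (possibly noisy) self-assigned labels. The following is a minimal sketch of that marginalization, simplified to per-word independence rather than the full sequence-level probability; all names and shapes are illustrative, not the patent's implementation:

```python
# Sketch: scoring a noisy (self-assigned) label sequence through a
# per-word noise matrix, per steps 6-8 (illustrative, per-word factorized).
import torch
import torch.nn.functional as F

num_labels = 5  # e.g., B-cause, I-cause, B-effect, I-effect, O

def noisy_label_log_prob(clean_logits, noise_logits, noisy_labels):
    """clean_logits: (seq_len, num_labels) model scores for clean labels.
    noise_logits:  (seq_len, num_labels, num_labels) per-word noise scores,
                   where [t, c, n] scores clean label c observed as noisy
                   label n at position t.
    noisy_labels:  (seq_len,) self-assigned label indices.
    Returns the total log-probability of the noisy labels."""
    p_clean = F.softmax(clean_logits, dim=-1)   # (T, C)
    p_noise = F.softmax(noise_logits, dim=-1)   # (T, C, N); rows sum to 1
    # Marginalize over the unknown clean label at each position:
    # p(noisy | word) = sum_c p(clean=c | word) * p(noisy | clean=c, word)
    p_noisy = torch.einsum("tc,tcn->tn", p_clean, p_noise)
    return torch.log(p_noisy[torch.arange(len(noisy_labels)), noisy_labels]).sum()

# Hypothetical usage with random tensors:
T = 6
clean = torch.randn(T, num_labels)
noise = torch.randn(T, num_labels, num_labels)
labels = torch.randint(0, num_labels, (T,))
print(noisy_label_log_prob(clean, noise, labels))
```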

Description

Technical Field

[0001] The invention relates to a method for extracting causal events based on self-training and a noise model.

Background Technique

[0002] In recent years, deep learning methods have achieved impressive results on various challenging natural language processing tasks, such as machine translation (Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP).) and open-domain question answering (Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1870-1879.). Deep learning methods use deep neural networks to automatically learn the functional relationship between input and output data. Compar...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F40/284, G06N3/04, G06N3/08
CPC: G06F40/284, G06N3/08, G06N3/047, G06N3/045
Inventor: 丁效, 刘挺, 秦兵, 廖阔
Owner: HARBIN INST OF TECH