
Key evidence extraction method for text prediction result

A key evidence extraction technology for text prediction, applied in the field of explaining text prediction results, which can solve problems such as wasted time and resources

Active Publication Date: 2019-08-02
HARBIN INST OF TECH

Problems solved by technology

[0010] The purpose of the present invention is to solve the problems in the prior art that, when extracting key evidence capable of explaining a prediction result, evidence searching relies on manual annotation and wastes time and resources, and to propose a key evidence extraction method for text prediction results.

Examples


Specific Embodiment 1

[0041] Specific Embodiment 1: this embodiment is described with reference to Figure 1. This embodiment is a key evidence extraction method for a text prediction result (see Figure 4 for details); the specific process is:

[0042] Step 1: For each word, look up the corresponding word vector in the GloVe word vectors (a pre-trained vector matrix), and then encode each sentence with a convolutional neural network to obtain a sentence-level vector;

[0043] Step 2: Average the sentence-level vectors obtained in Step 1, and use the average value as the initial value of the external memory unit;

[0044] The external memory unit is used to record and accumulate information that supports the final prediction;

[0045] Inspired by the success of memory networks in the field of question answering, an external memory block is proposed to record information. The external memory block ca...
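As a minimal sketch of Step 2, assuming the sentence-level vectors are stacked as rows of a matrix, the external memory unit is initialized to their mean:

```python
import numpy as np

def init_memory(sentence_vecs):
    """Initialize the external memory unit as the mean of the
    sentence-level vectors (Step 2 of the method)."""
    return np.mean(sentence_vecs, axis=0)

# Three toy 4-dimensional sentence-level vectors.
S = np.array([[1.0, 0.0, 2.0, 0.0],
              [3.0, 0.0, 0.0, 4.0],
              [2.0, 3.0, 1.0, 2.0]])
m0 = init_memory(S)   # -> array([2., 1., 1., 2.])
```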

Specific Embodiment 2

[0052] Specific Embodiment 2: this embodiment differs from Specific Embodiment 1 in that, in Step 1, the word vector corresponding to each word is looked up in the GloVe word vectors (a pre-trained vector matrix), and the sentences are then encoded with a convolutional neural network to obtain sentence-level vectors; the specific process is:

[0053] Find the word vector corresponding to each word in the pre-trained vector matrix, and then encode the sentence through the convolutional neural network to obtain the vector representation of the sentence. The sentence encoder is not task-specific; it can be any algorithm that semantically composes words into a dense vector representation. To improve efficiency, a convolutional neural network is adopted, which performs strongly on various sentence classification tasks, such as sentence-level sentiment analysis. Empirical studies show that convolutional filters with different window widths can capt...
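The encoder described above can be sketched roughly as follows; the single filter per window width, the tanh nonlinearity, and max-over-time pooling are illustrative assumptions, not the patent's exact configuration:

```python
import numpy as np

def conv_encode(word_vecs, filters):
    """Encode a sentence with 1-D convolutions of several window
    widths followed by max-over-time pooling; the pooled responses
    are concatenated into one sentence-level vector.
    word_vecs: (n_words, dim) array of pre-trained word vectors;
    filters:   list of (width, dim) filter arrays (one per width)."""
    n, _ = word_vecs.shape
    pooled = []
    for W in filters:
        width = W.shape[0]
        # Slide the window over the word sequence.
        feats = [np.sum(W * word_vecs[i:i + width])
                 for i in range(n - width + 1)]
        # Max-over-time pooling keeps the strongest filter response.
        pooled.append(max(np.tanh(f) for f in feats))
    return np.array(pooled)

rng = np.random.default_rng(0)
words = rng.normal(size=(6, 4))                      # 6 words, 4-dim vectors
filters = [rng.normal(size=(w, 4)) for w in (2, 3)]  # window widths 2 and 3
sent_vec = conv_encode(words, filters)               # one feature per width
```

In practice each width would have many filters, so the sentence vector would concatenate all pooled responses; one filter per width keeps the sketch short.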

Specific Embodiment 3

[0062] Specific Embodiment 3: this embodiment differs from Embodiment 1 or 2 in that, in Step 3, the updated external memory unit corresponding to the first sentence-level vector is obtained based on the first sentence-level vector obtained in Step 1, the initial external memory unit obtained in Step 2, and the hard extraction network model;

[0063] Based on the second sentence-level vector obtained in Step 1, the updated external memory unit, and the hard extraction network model, the updated external memory unit corresponding to the second sentence-level vector is obtained;

[0064] This continues until, based on the nth sentence-level vector obtained in Step 1, the updated external memory unit, and the hard extraction network model, the final external memory unit corresponding to the nth sentence-level vector is obtained, yielding the document-level vector; the specific process is:
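A hypothetical sketch of the Step 3 loop, under the assumption that a binary gate (here a cosine-similarity threshold, purely illustrative) decides whether each sentence is written into the external memory unit; the gating score and update rule are this sketch's assumptions, not the patent's exact formulation:

```python
import numpy as np

def hard_update(memory, sent_vec, score_fn, threshold=0.5):
    """One hard-extraction step: a binary gate decides whether the
    sentence is written into the external memory unit."""
    gate = 1.0 if score_fn(memory, sent_vec) > threshold else 0.0
    # Only extracted (gate == 1) sentences are accumulated into memory.
    return memory + gate * sent_vec, bool(gate)

def encode_document(sent_vecs, score_fn):
    memory = np.mean(sent_vecs, axis=0)       # Step 2: initial memory
    extracted = []
    for i, s in enumerate(sent_vecs):         # Step 3: sequential updates
        memory, kept = hard_update(memory, s, score_fn)
        if kept:
            extracted.append(i)
    return memory, extracted                  # final memory = document vector

# Toy gate: keep sentences roughly aligned with the current memory.
sim = lambda m, s: float(m @ s / (np.linalg.norm(m) * np.linalg.norm(s) + 1e-8))
S = np.array([[1.0, 0.0], [0.9, 0.1], [-1.0, 0.0]])
doc_vec, evidence = encode_document(S, sim)   # evidence -> [0, 1]
```

The indices of the sentences that passed the gate double as the extracted key evidence, which matches the method's goal of explaining the prediction.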

[0065] Hard Extraction Network Hard Extra-iNet: The sentence extraction representation module in Hard...


Abstract

The invention discloses a key evidence extraction method for a text prediction result. The objective of the invention is to solve the problem that, in the prior art, when key evidence capable of explaining a prediction result is extracted from a text, evidence searching depends on manual annotation. The method comprises the following steps: 1, obtaining sentence-level vectors; 2, taking the average value of the sentence-level vectors as the initial value of an external memory unit; 3, obtaining an updated external memory unit corresponding to the first sentence-level vector, and continuing until a final external memory unit corresponding to the nth sentence-level vector is obtained, yielding the document-level vector; 4, outputting the probability of each category of the document; 5, obtaining a trained hard extraction network model, and inputting the to-be-classified documents into the trained model to obtain the probability that each document belongs to each category, together with the set of sentences supporting that assignment. The method is applied to the field of evidence extraction for text prediction results.

Description

Technical field

[0001] The invention relates to a key evidence extraction method for text prediction results.

Background technique

[0002] Recently deep learning models have achieved impressive results in various challenging natural language processing tasks, such as machine translation (Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 (2014).) and reading comprehension (Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. arXiv preprint arXiv:1704.00051 (2017).). An advantage of deep neural network models is their ability to automatically induce effective features for the final task without relying on feature engineering. However, as application scenarios grow gradually more complex and new ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F17/27, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/084, G06F40/211, G06F40/30, G06N3/048, G06N3/045, G06F18/24
Inventors: 丁效 (Xiao Ding), 刘挺 (Ting Liu), 段俊文 (Junwen Duan)
Owner HARBIN INST OF TECH