Language model pre-training method for commonsense concept enhancement
A language model pre-training technology, applied in neural learning methods, biological neural network models, natural language data processing, and related fields. It solves the problem that existing models cannot explicitly model commonsense concept information, thereby enhancing commonsense understanding and improving question-answering accuracy.
Embodiment
[0076] As shown in Figure 1, the embodiment of the present invention proposes a language model pre-training method for commonsense concept enhancement, which mainly includes the following steps:
[0077] Step 1) Construct an unsupervised corpus based on commonsense concepts, specifically as follows. First, traverse a given commonsense knowledge graph G to obtain a concept list C = {c_1, ..., c_i, ..., c_n}, where c_i is the i-th commonsense concept and n is the number of commonsense concepts. Then traverse the collected unlabeled corpus T = {t_1, ..., t_j, ..., t_m}, where t_j is the j-th sentence and m is the number of sentences. For each sentence t_j in the corpus, perform hard matching against each commonsense concept c_i in the list C to obtain the set of all commonsense concepts that appear in the sentence, together with the start and end positions of each concept in the sentence, thereby obtaining a single training sample u_j for that sentence...
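The hard-matching procedure of Step 1 can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function and variable names (build_samples, concepts, sentences) are hypothetical, and the toy concept list and corpus are invented for demonstration.

```python
def build_samples(concepts, sentences):
    """For each sentence, hard-match every concept from the list C
    and record the (concept, start, end) character span of each
    occurrence, producing one training sample per sentence."""
    samples = []
    for sentence in sentences:
        matches = []
        for concept in concepts:
            # Scan for every occurrence of the concept string.
            start = sentence.find(concept)
            while start != -1:
                matches.append((concept, start, start + len(concept)))
                start = sentence.find(concept, start + 1)
        samples.append({"text": sentence, "concepts": matches})
    return samples

# Toy concept list C and unlabeled corpus T.
concepts = ["bird", "fly"]
sentences = ["most birds can fly"]
print(build_samples(concepts, sentences))
```

In practice, exact substring matching like this can over-match (e.g. "bird" inside "birds"); a real pipeline would typically match on token boundaries, but the span-recording idea is the same.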


