
Word vector and context information-based short text topic model

A topic model using word vector technology, applied in the field of short text topic models. It addresses the problems of sparse word co-occurrence, low topic semantic consistency, and the inability of existing topic models to extract high-quality topics, thereby improving semantic consistency, efficiency, and overall effect.

Inactive Publication Date: 2018-08-17
DALIAN UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0003] Since word co-occurrence is sparse in short text data, traditional topic models cannot effectively extract high-quality topics from short texts, and the semantic consistency of the extracted topics is low.




Embodiment Construction

[0021] Specific embodiments of the present invention are described below to further illustrate its starting point and the corresponding technical solutions.

[0022] The present invention is a short text topic model based on word vectors and context information, whose main goal is to automatically extract high-quality topic information from short text data sets. The method comprises the following four steps:

[0023] (1) Obtain the semantic similarity between words:

[0024] First, use Google's open-source tool word2vec to train word vectors on a Wikipedia data set. For English training data, the English Wikipedia data set is used to train the word vectors; for Chinese training data, the Chinese Wikipedia data set is used to train Chinese word vectors. Here English training data is taken as the example; the training data used are the Amazon review data set (Amazon Reviews) and...
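Once word vectors have been trained, the semantic similarity between two words is conventionally measured by the cosine of the angle between their vectors. The patent text does not specify the similarity function, so the following is only a minimal sketch of this standard measure; the toy three-dimensional vectors stand in for real word2vec output (which is typically 100-300 dimensional), and the word choices are illustrative assumptions.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy vectors standing in for trained word2vec embeddings.
vec = {
    "university": [0.9, 0.1, 0.3],
    "student":    [0.8, 0.2, 0.4],
    "banana":     [0.1, 0.9, 0.0],
}

# Semantically related words should score higher than unrelated ones.
print(cosine_similarity(vec["university"], vec["student"]) >
      cosine_similarity(vec["university"], vec["banana"]))  # True
```

In practice a tool such as gensim exposes this same computation directly on a trained model, but the arithmetic above is all that is involved.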


Abstract

The invention discloses a word vector and context information-based short text topic model. A semantic relationship between words is extracted from word vectors; by obtaining this semantic relationship explicitly, the lack of word co-occurrence in short text data is compensated for. The semantic relationships between words are further filtered using the training set data, so that the data set can be trained better. A background topic is added to the generation process, and noise words in a document are modeled through this background topic. The model is solved with a Gibbs sampling method during model inference, and the sampling strategy of a generalized Pólya urn model is used during sampling to increase the probability, within a topic, of words with high semantic correlation; in this way the semantic consistency of the words in each topic is greatly improved. A series of experiments shows that the proposed method improves the semantic consistency of topics to a large extent and provides a new approach to short text topic modeling.
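The generalized Pólya urn strategy mentioned in the abstract replaces the simple count increment of standard Gibbs sampling: when a word is assigned to a topic, semantically related words also receive a fractional count under that topic. The sketch below illustrates only this count-update idea, not the patent's full inference procedure; the function name, the `related` map, and the promotion weight 0.3 are all assumptions for illustration.

```python
from collections import defaultdict

def gpu_increment(topic_word_count, topic, word, related, weight=0.3):
    """Generalized Polya urn update: assigning `word` to `topic` adds a
    full count for the word itself and a fractional count for each
    semantically related word, promoting related words under the topic."""
    topic_word_count[(topic, word)] += 1.0
    for w in related.get(word, []):
        topic_word_count[(topic, w)] += weight

counts = defaultdict(float)
related = {"university": ["student", "education"]}
gpu_increment(counts, topic=0, word="university", related=related)
print(counts[(0, "university")], counts[(0, "student")])  # 1.0 0.3
```

Because "student" and "education" gain probability mass whenever "university" is sampled, the topic's top words become more semantically coherent, which is the effect the abstract claims.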

Description

Technical field

[0001] The invention belongs to the field of natural language processing and relates to a short text topic model based on word vectors and context information.

Background technique

[0002] With the development of social networks, short texts have become one of the main channels of Internet information dissemination. Short text data contains rich information, and it is very valuable to extract topic information from it. The probabilistic topic model is an effective method for extracting topic information from document data sets. A topic model is an unsupervised learning method: its input is document data, and its output is the topic information contained in that data. Each topic can be viewed as a distribution over words, and the words with the highest probability under a topic reflect its semantic characteristics, such as "education", "university", "student", and other words with a higher probability under a...
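The description's notion of a topic as a distribution over words can be made concrete with a small hypothetical example; the particular words and probabilities below are invented for illustration and are not from the patent.

```python
# A hypothetical "education" topic represented as a word distribution.
# The highest-probability words reveal the topic's semantics.
topic = {
    "education":  0.12,
    "university": 0.10,
    "student":    0.09,
    "teacher":    0.05,
    "banana":     0.0003,
}

# The top-ranked words characterize the topic.
top_words = sorted(topic, key=topic.get, reverse=True)[:3]
print(top_words)  # ['education', 'university', 'student']
```

A topic model outputs one such distribution per topic, and improving "semantic consistency" means making these top-ranked words more mutually related.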

Claims


Application Information

Patent Timeline
no application
IPC(8): G06F17/27
CPC: G06F40/258; G06F40/30
Inventor: 梁文新, 冯然, 张宪超
Owner: DALIAN UNIV OF TECH