Multi-view language recognition method based on unidirectional self-tagging auxiliary information

An auxiliary-information and multi-view technology, applied in speech recognition, speech analysis, instruments, etc. It addresses problems of existing multi-view language models: word-level auxiliary features that are flat and of a single type, and the inability to improve the speech recognition rescoring task.

Active Publication Date: 2017-12-08
AISPEECH CO LTD

Problems solved by technology

[0004] In existing multi-view language models, the word-level auxiliary information contains following-text information, so this future information biases the prediction results and the models cannot improve the speech recognition rescoring (ASR rescore) task; moreover, their auxiliary features are of a single type, and the word-level auxiliary features are flat. The present invention therefore proposes a multi-view language recognition method based on unidirectional self-tagging auxiliary information. The word-level auxiliary features in the multi-view neural network are converted from a state containing bidirectional context to one containing only preceding-text information, which eliminates the negative impact of following-text information. On this basis, the invention also adopts several kinds of word-level auxiliary information, introduces tree-structured word-level auxiliary features into multi-view language model training, and uses stable operators in the tagging model and the language model to adjust their respective adaptive learning rates and other characteristics.
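The patent does not say what these "stable operators" are concretely. Purely as a hedged illustration of the idea of giving the tagging model and the language model their own adaptive learning rates, the sketch below uses PyTorch optimizer parameter groups; every module name and rate value is a made-up stand-in, not the patent's implementation.

```python
# Hedged illustration only: the patent says "stable operators" adjust the
# tagging model's and language model's adaptive learning rates separately.
# One common way to realize per-module adaptive rates is optimizer
# parameter groups (the concrete operator in the patent is not specified).
import torch
import torch.nn as nn

tagger = nn.LSTM(32, 64, batch_first=True)        # stands in for the tagger
lm = nn.LSTM(40, 64, batch_first=True)            # stands in for the LM

optimizer = torch.optim.Adam([
    {"params": tagger.parameters(), "lr": 1e-3},  # tagger-specific rate
    {"params": lm.parameters(), "lr": 5e-4},      # LM-specific rate
])
print([g["lr"] for g in optimizer.param_groups])  # [0.001, 0.0005]
```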




Embodiment Construction

[0021] As shown in Figure 1, this embodiment comprises a tagging model and a multi-view language model, which together generate word-level auxiliary vectors containing only preceding-text information. The tagging model converts the bidirectional tagging features in the information to be recognized into unidirectional features: it determines the class label of each input word, and its output, together with the word vector, serves as the input of the language model, forming the multi-view structure.
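A minimal sketch of this multi-view structure, assuming a PyTorch-style implementation (all class names and dimensions below are hypothetical; the patent publishes no code): the tag predicted for the current word is embedded and concatenated with the word vector, and the joint vector drives the language-model LSTM.

```python
# Minimal sketch (hypothetical names/dimensions) of the multi-view input:
# the tag for the current word is embedded and concatenated with the word
# embedding, and the joint vector drives the language-model LSTM.
import torch
import torch.nn as nn

class MultiViewLM(nn.Module):
    def __init__(self, vocab_size=1000, num_tags=10, emb_dim=32,
                 tag_dim=8, hid_dim=64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.tag_emb = nn.Embedding(num_tags, tag_dim)      # auxiliary view
        self.lstm = nn.LSTM(emb_dim + tag_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, words, tags):
        x = torch.cat([self.word_emb(words), self.tag_emb(tags)], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)               # next-word logits at each position

lm = MultiViewLM()
words = torch.randint(0, 1000, (1, 5))   # toy 5-word input
tags = torch.randint(0, 10, (1, 5))      # tags from the unidirectional tagger
print(lm(words, tags).shape)             # torch.Size([1, 5, 1000])
```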

[0022] The information to be recognized, w_t, is a one-hot vector: a one-dimensional array in which exactly one position is 1 and every other position is 0, where t denotes the current time step. The information to be recognized is fed simultaneously to the tagging model and the language model.
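In other words, w_t is a one-hot vector over the vocabulary. A toy illustration (the five-word vocabulary is invented for the example):

```python
# One-hot encoding of the word at time step t (toy vocabulary of 5 words).
import numpy as np

vocab = ["<s>", "the", "cat", "sat", "</s>"]
w_t = np.zeros(len(vocab))
w_t[vocab.index("cat")] = 1.0
print(w_t)   # [0. 0. 1. 0. 0.]  -- exactly one position is 1
```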

[0023] The tagging model adopts a recurrent neural network (RNN) with long short-term memory (LSTM) units...
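A hedged sketch of such a tagging model (hypothetical layer sizes): an LSTM reads the word sequence strictly left to right, so the tag for w_t never depends on w_{t+1}, and a softmax layer turns each hidden state into a probability distribution over the self-tagging labels. These per-step distributions are what the Viterbi step described in the Abstract later decodes.

```python
# Minimal unidirectional LSTM tagging model (hypothetical sizes): each
# hidden state, which sees only w_1..w_t, is mapped to a probability
# distribution over self-tagging labels.
import torch
import torch.nn as nn

class Tagger(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=32, hid_dim=64, num_tags=10):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)  # causal
        self.head = nn.Linear(hid_dim, num_tags)

    def forward(self, words):
        h, _ = self.lstm(self.emb(words))
        return torch.softmax(self.head(h), dim=-1)  # per-step tag distribution

tagger = Tagger()
dist = tagger(torch.randint(0, 1000, (1, 5)))
print(dist.shape, float(dist[0, 0].sum()))  # (1, 5, 10), each row sums to ~1
```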



Abstract

The invention discloses a multi-view language recognition method based on unidirectional self-tagging auxiliary information. The method comprises the following steps: first, a tagging model self-tags the current word and its word-level auxiliary information, yielding a probability distribution over the current word's self-tagging auxiliary features; next, this distribution is decoded with the Viterbi algorithm to obtain relatively accurate auxiliary features, converting bidirectional auxiliary information into unidirectional auxiliary information; finally, the unidirectional auxiliary information is input, together with the current word, into a multi-view language model for analysis, so that the accurate semantics of the current word are obtained. Building on the word-level auxiliary features of a multi-view neural network, the method eliminates the adverse influence of following-text information, adopts several kinds of word-level auxiliary information, introduces word-level auxiliary features represented as a tree structure into multi-view language model training, and uses stable operators in the tagging model and the language model to adjust their respective adaptive learning rates and the like.
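The Viterbi step in this pipeline can be sketched as follows (the transition matrix and all numbers are invented for illustration; the patent does not publish its transition model): dynamic programming turns the tagger's per-step tag distributions into the single best tag sequence, which then serves as the unidirectional auxiliary feature.

```python
# Minimal Viterbi decoder (toy numbers): turns per-step tag probability
# distributions into a single best tag sequence. The transition matrix is
# made up for illustration; the patent does not specify one.
import numpy as np

def viterbi(emissions, transitions):
    """emissions: (T, K) per-step tag probabilities; transitions: (K, K)."""
    T, K = emissions.shape
    log_e = np.log(emissions + 1e-12)
    log_t = np.log(transitions + 1e-12)
    score = log_e[0].copy()            # best log-prob ending in each tag
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_t  # (prev_tag, cur_tag) scores
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_e[t]
    path = [int(score.argmax())]       # backtrack from the best final tag
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

emissions = np.array([[0.7, 0.2, 0.1],
                      [0.1, 0.6, 0.3],
                      [0.2, 0.2, 0.6]])
transitions = np.full((3, 3), 1.0 / 3.0)  # uniform toy transitions
print(viterbi(emissions, transitions))     # [0, 1, 2]
```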

Description

Technical field

[0001] The invention relates to a technology in the field of speech recognition, in particular to a multi-view language recognition method based on unidirectional self-tagging auxiliary information.

Background technique

[0002] In recent years, recurrent neural networks (RNN) and long short-term memory (LSTM) networks built on memory cells have been widely used in language modeling. Among the many existing LSTM-based language models, the multi-view neural network language model improves performance on the perplexity criterion, but it does not improve the speech recognition rescoring task.

[0003] This is because the word-level vector information in the auxiliary feature vectors used by these models is bidirectional, that is, it contains preceding-text and following-text information simultaneously. Future (cheating) information is thereby introduced into the perplexity evaluation, so perplexity improves, while the speech recognition rescoring task is not...
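For reference, the perplexity criterion mentioned above is standardly defined over a held-out sequence w_1, ..., w_N as:

```latex
% Standard definition of perplexity for a language model P:
\mathrm{PPL} = \exp\Bigl(-\frac{1}{N}\sum_{i=1}^{N}\ln P(w_i \mid w_1,\dots,w_{i-1})\Bigr)
```

A bidirectional auxiliary feature effectively lets the model peek at words after w_i when scoring it, which lowers measured perplexity without helping a strictly left-to-right rescoring pass; this is the "cheating" described in paragraph [0003].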


Application Information

Patent Type & Authority: Application (China)
IPC (8): G10L15/14; G10L15/16; G10L15/18; G10L17/04
CPC: G10L15/14; G10L15/16; G10L15/18; G10L17/04
Inventor: 俞凯, 钱彦旻, 吴越, 贺天行, 陈哲怀
Owner: AISPEECH CO LTD