
Method and device for training language model of neural network and voice recognition method and device

A neural network language model technology applied to speech recognition, speech analysis, instruments, and the like. It addresses the problem that training a neural network language model is time-consuming, with long training times, and achieves high classification accuracy, few changes to existing training, and simple implementation.

Inactive Publication Date: 2018-03-16
KK TOSHIBA

AI Technical Summary

Problems solved by technology

[0004] Training a neural network language model is very time-consuming. To obtain a good model, a large amount of training corpus must be used, and the training time is correspondingly long.




Embodiment Construction

[0074] Various preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.


[0076] Figure 1 is a flowchart of a method for training a neural network language model according to an embodiment of the present invention.

[0077] The method for training a neural network language model in this embodiment includes: calculating the probabilities of n-gram entries based on a training corpus; and training the neural network language model based on the n-gram entries and their probabilities.

[0078] As shown in Figure 1, first, in step S105, the probabilities of the n-gram entries are calculated based on the training corpus 10.

[0079] In this embodiment, the training corpus 10 is a word-segmented corpus. An n-gram entry refers to an n-gram word sequence. For example, when n is 4, the n-gram entry is "w1w2w3w4". The probability of an n-gram entry refers to the probability ...
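The excerpt is truncated before it specifies how the entry probability is estimated, so the following is only a minimal sketch assuming plain maximum-likelihood counting over a word-segmented corpus: the probability of an entry such as "w1 w2 w3 w4" is taken as the count of the full 4-gram divided by the count of its 3-word context. The function name and toy corpus below are illustrative, not from the patent.

```python
from collections import Counter

def ngram_probabilities(sentences, n=4):
    """Estimate P(w_n | w_1 ... w_{n-1}) for every n-gram entry in a
    word-segmented corpus by simple maximum-likelihood counting."""
    ngram_counts = Counter()
    context_counts = Counter()
    for words in sentences:                      # each sentence is a list of words
        for i in range(len(words) - n + 1):
            entry = tuple(words[i:i + n])        # e.g. ("w1", "w2", "w3", "w4")
            ngram_counts[entry] += 1
            context_counts[entry[:-1]] += 1
    return {entry: count / context_counts[entry[:-1]]
            for entry, count in ngram_counts.items()}

# Toy word-segmented corpus (each sentence already split into words).
corpus = [["i", "like", "speech", "recognition"],
          ["i", "like", "speech", "synthesis"]]
probs = ngram_probabilities(corpus, n=4)
print(probs[("i", "like", "speech", "recognition")])   # 0.5
```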



Abstract

The invention provides a method and device for training a neural network language model, and a speech recognition method and device. According to one embodiment, the device for training the neural network language model includes: a calculation unit that calculates probabilities of n-gram entries based on a training corpus; and a training unit that trains the neural network language model based on the n-gram entries and their probabilities.
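The abstract does not say how the entry probabilities enter the training procedure, so the sketch below makes two explicit assumptions: a small feed-forward language model (context words embedded, concatenated, and mapped to a vocabulary distribution), and the entry probabilities used as per-example loss weights. Both the weighting scheme and all names are illustrative, not taken from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeedForwardLM(nn.Module):
    """Minimal feed-forward neural network language model: the (n-1)-word
    context is embedded, concatenated, and mapped to vocabulary logits."""
    def __init__(self, vocab_size, context_len, emb_dim=32, hidden_dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.hidden = nn.Linear(context_len * emb_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, context):                      # context: (batch, n-1) word ids
        e = self.emb(context).flatten(1)             # (batch, (n-1)*emb_dim)
        return self.out(torch.tanh(self.hidden(e)))  # logits over the vocabulary

def train_on_ngram_entries(model, entries, probs, epochs=10, lr=0.1):
    """Each n-gram entry (w1..wn) yields one training example: the first
    n-1 word ids form the input context, the last word id is the target,
    and the entry's probability weights its loss term (assumed scheme)."""
    contexts = torch.tensor([e[:-1] for e in entries])
    targets = torch.tensor([e[-1] for e in entries])
    weights = torch.tensor(probs, dtype=torch.float)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        logits = model(contexts)
        loss = (F.cross_entropy(logits, targets, reduction="none") * weights).mean()
        loss.backward()
        opt.step()

# Hypothetical usage: feed the entries and probabilities produced by the
# calculation step (as integer word ids) into one training call.
entries = [(0, 1, 2, 3), (0, 1, 2, 4)]   # ids standing in for "w1 w2 w3 w4"-style entries
probs = [0.5, 0.5]
model = FeedForwardLM(vocab_size=5, context_len=3)
train_on_ngram_entries(model, entries, probs)
```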

Description

Technical field

[0001] The invention relates to speech recognition, and in particular to a method for training a neural network language model, a device for training a neural network language model, a speech recognition method, and a speech recognition device.

Background technique

[0002] A speech recognition system generally includes two parts: an acoustic model (AM) and a language model (LM). The acoustic model models the probability distribution from speech features to phoneme units. The language model models the occurrence probability of word sequences (lexical context). The speech recognition process combines the probability scores of the two models as a weighted sum and selects the hypothesis with the highest score.

[0003] In recent years, the neural network language model (NN LM) has been introduced into speech recognition systems as a new method and has greatly improved speech recognition performance.

[0004] The training of neural network language mod...
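To illustrate the weighted combination of the two scores described in paragraph [0002], here is a minimal N-best rescoring sketch. The log-domain scores, dictionary keys, and LM weight value are all hypothetical; the patent text only states that the hypothesis with the highest weighted combined score is chosen.

```python
def best_hypothesis(hypotheses, lm_weight=0.8):
    """Pick the recognition hypothesis with the highest combined score.
    Each hypothesis carries an acoustic-model log-score and a
    language-model log-score; the LM weight is an illustrative value."""
    return max(hypotheses,
               key=lambda h: h["am_score"] + lm_weight * h["lm_score"])

# Hypothetical N-best list with log-domain scores.
nbest = [
    {"text": "recognize speech",   "am_score": -120.4, "lm_score": -8.1},
    {"text": "wreck a nice beach", "am_score": -119.7, "lm_score": -14.6},
]
print(best_hypothesis(nbest)["text"])   # "recognize speech"
```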


Application Information

IPC(8): G10L15/06, G10L15/16, G10L15/26
CPC: G10L15/063, G10L15/16, G10L15/26, G10L15/183, G10L15/197
Inventors: 雍坤, 丁沛, 贺勇, 朱会峰, 郝杰
Owner KK TOSHIBA