
Speech recognition model establishing method based on bottleneck characteristics and multi-scale and multi-headed attention mechanism

A speech recognition model establishing method, applied in the field of training models, that solves the problems of a single attention scale and poor recognition performance in traditional attention models, achieves powerful time-series modeling and discrimination ability, and improves recognition accuracy.

Active Publication Date: 2019-09-06
HARBIN INST OF TECH

AI Technical Summary

Problems solved by technology

[0003] The purpose of the present invention is to solve the problems of poor recognition performance and a single attention scale in existing traditional attention models, and to propose a method for establishing a speech recognition model based on bottleneck features and a multi-scale multi-head attention mechanism.



Examples


Specific Embodiment 1

[0018] The method for establishing a speech recognition model based on bottleneck features and a multi-scale multi-head attention mechanism of the present embodiment comprises the following steps:

[0019] Step 1: use the input sample FBank speech feature vectors X = (x1, x2, ..., xT) to perform unsupervised training of the RBM networks in the DBN, obtaining the connection weight matrices W1, W2, W3 of the first three layers of the initialized encoding network; these three weight matrices, together with a randomly initialized output-layer weight matrix W4, constitute the DBN-based bottleneck feature extraction network at the front end of the encoding network. RBM stands for Restricted Boltzmann Machine; DBN stands for Deep Belief Network; FBank denotes a filter bank; sample FBank speech feature vec...
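The greedy layer-wise pre-training described in this step can be sketched as follows. This is an illustrative numpy implementation of CD-1 training for stacked RBMs; the layer sizes, the number of output units, and the training hyperparameters are assumptions, not values taken from the patent.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=5, lr=0.01, rng=None):
    """Train one RBM with one-step contrastive divergence (CD-1)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n_visible = data.shape[1]
    W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
    b_v = np.zeros(n_visible)
    b_h = np.zeros(n_hidden)
    for _ in range(epochs):
        v0 = data
        h0_p = sigmoid(v0 @ W + b_h)                 # hidden probabilities
        h0 = (rng.random(h0_p.shape) < h0_p).astype(float)
        v1_p = sigmoid(h0 @ W.T + b_v)               # reconstruction
        h1_p = sigmoid(v1_p @ W + b_h)
        W += lr * ((v0.T @ h0_p) - (v1_p.T @ h1_p)) / len(data)
        b_v += lr * (v0 - v1_p).mean(axis=0)
        b_h += lr * (h0_p - h1_p).mean(axis=0)
    return W, b_h

# Stack three RBMs greedily: each layer's hidden activations feed the next.
rng = np.random.default_rng(0)
X = rng.random((64, 123))        # batch of 123-dim FBank feature frames
sizes = [1024, 40, 1024]         # hypothetical; the middle layer is the bottleneck
weights, layer_in = [], X
for n_h in sizes:
    W, b_h = train_rbm(layer_in, n_h, rng=rng)
    weights.append(W)
    layer_in = sigmoid(layer_in @ W + b_h)
W1, W2, W3 = weights
# Randomly initialized output layer W4 (e.g. one unit per tied-triphone state).
W4 = rng.normal(0, 0.01, size=(sizes[-1], 3000))
```

The three pre-trained matrices W1, W2, W3 plus the random W4 then form the DBN-based front end, which is subsequently fine-tuned with supervised back-propagation.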

Specific Embodiment 2

[0027] Differing from Specific Embodiment 1, in step 1 of the present embodiment's method for establishing a speech recognition model based on bottleneck features and a multi-scale multi-head attention mechanism, the input speech feature vectors X = (x1, x2, ..., xT) use 40-dimensional FBank features plus energy, concatenated with the corresponding first-order and second-order differences, for a total of 123 dimensions. The extracted features are first normalized over the training set so that each component follows a standard normal distribution, and the training-set normalization parameters are then used to normalize the features of the test set and development set.
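The feature construction above (41 static dimensions × 3 = 123) and the train-set-only normalization can be sketched as follows; the delta window width and the random frames standing in for real FBank output are assumptions for illustration.

```python
import numpy as np

def delta(feat, N=2):
    """First-order difference features via the standard regression formula."""
    T = len(feat)
    padded = np.pad(feat, ((N, N), (0, 0)), mode="edge")
    denom = 2 * sum(n * n for n in range(1, N + 1))
    return np.stack([
        sum(n * (padded[t + N + n] - padded[t + N - n]) for n in range(1, N + 1)) / denom
        for t in range(T)
    ])

# Hypothetical frame sequence: 40 FBank coefficients + 1 energy term per frame.
rng = np.random.default_rng(0)
static = rng.random((100, 41))
d1 = delta(static)                 # first-order differences
d2 = delta(d1)                     # second-order differences
X = np.hstack([static, d1, d2])    # 41 * 3 = 123 dimensions per frame

# Normalize with training-set statistics; the same mu/sigma would be reused
# on the development and test sets rather than recomputed.
mu, sigma = X.mean(axis=0), X.std(axis=0)
X_norm = (X - mu) / sigma
```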

Specific Embodiment 3

[0029] Differing from Specific Embodiments 1 and 2, in steps 1 and 2 of the present embodiment's method for establishing a speech recognition model based on bottleneck features and a multi-scale multi-head attention mechanism, the RBM networks in the DBN undergo an unsupervised training process; specifically, training the RBM networks combines unsupervised pre-training with supervised training via the back-propagation gradient algorithm. The input of the RBM network is the FBank speech features, and its output layer is a softmax layer in which each unit corresponds to the posterior probability of a tied-triphone state. There are three hidden layers between the input layer and the output layer; the second hidden layer is the bottleneck layer, and it has fewer units than the other hidden layers.
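The network topology this embodiment describes (three hidden layers, narrow second layer, softmax over tied-triphone states) can be sketched as a forward pass; the concrete layer widths and the 3000-state output size are assumptions, and the weights here are random placeholders rather than trained parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical sizes: the second hidden layer (40 units) is the bottleneck,
# deliberately narrower than the other hidden layers.
rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.01, (123, 1024))
W2 = rng.normal(0, 0.01, (1024, 40))
W3 = rng.normal(0, 0.01, (40, 1024))
W4 = rng.normal(0, 0.01, (1024, 3000))   # one unit per tied-triphone state

def forward(x):
    h1 = sigmoid(x @ W1)
    bottleneck = sigmoid(h1 @ W2)        # 40-dim bottleneck features
    h3 = sigmoid(bottleneck @ W3)
    posteriors = softmax(h3 @ W4)        # state posterior probabilities
    return bottleneck, posteriors

x = rng.random((5, 123))                 # five normalized FBank frames
bn, post = forward(x)
```

After supervised fine-tuning, the bottleneck activations (not the posteriors) are what the encoder consumes as robust features.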



Abstract

The invention provides a speech recognition model establishing method based on bottleneck features and a multi-scale multi-head attention mechanism, and belongs to the field of model establishing methods. Traditional attention models suffer from poor recognition performance and a single attention scale. In the proposed method, bottleneck features extracted by a deep belief network serve as the front end, which improves the robustness of the model; a multi-scale multi-head attention model built from convolution kernels of different scales serves as the back end, modeling speech units at the phoneme, syllable, word and other levels, and computing the recurrent-neural-network hidden-layer state sequences and output sequences step by step. The elements at each position of the output sequences are computed by the decoding networks corresponding to the attention networks of each head, and finally all output sequences are integrated into a new output sequence. The method can improve the recognition performance of a speech recognition system.
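One way to read "attention heads built from convolution kernels of different scales" is that each head smooths the encoder states with a different kernel width before scoring, so different heads attend at phoneme-, syllable-, or word-like granularities. The sketch below is an interpretation under that assumption, using simple averaging kernels and dot-product scoring; the kernel widths and dimensions are illustrative, not from the patent.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def conv1d(seq, width):
    """Averaging 1-D convolution along time with a given kernel width."""
    T, d = seq.shape
    pad = width // 2
    padded = np.pad(seq, ((pad, pad), (0, 0)), mode="edge")
    kernel = np.ones(width) / width
    return np.stack([
        (padded[t:t + width] * kernel[:, None]).sum(axis=0) for t in range(T)
    ])

def multi_scale_attention(query, enc_states, widths=(1, 3, 5, 7)):
    """One attention head per kernel width; per-head contexts are concatenated."""
    contexts = []
    for w in widths:
        keys = conv1d(enc_states, w)               # smooth at this head's scale
        scores = keys @ query / np.sqrt(len(query))
        alpha = softmax(scores)                    # attention weights over time
        contexts.append(alpha @ enc_states)        # this head's context vector
    return np.concatenate(contexts)

rng = np.random.default_rng(0)
enc = rng.random((50, 64))     # encoder hidden-state sequence (T=50, d=64)
q = rng.random(64)             # decoder state at one output step
ctx = multi_scale_attention(q, enc)   # 4 heads x 64 dims = 256-dim context
```

Each head's decoding network would consume its own context; integrating the per-head outputs into one output sequence is the final step the abstract describes.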

Description

Technical field
[0001] The invention relates to a training model in the technical field of speech recognition, in particular to a method that increases the robustness of the model by extracting bottleneck features and establishes a multi-scale multi-head model of speech units at the phoneme, syllable, word and other levels to improve recognition performance.
Background technique
[0002] The speech signal is one of the most common signals in human society and an important medium for people to express, communicate and disseminate information. In today's era of information explosion, massive amounts of voice data are generated every moment over the Internet and telephone channels. To recognize, classify and retrieve large-scale voice signals more efficiently, the demand for Automatic Speech Recognition (ASR) has become increasingly urgent. Compared with traditional Hidden Markov Model (HMM) speech recognition systems, the...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G10L15/06, G10L15/16, G10L15/02
CPC: G10L15/063, G10L15/16, G10L15/02
Inventors: 韩纪庆, 唐海桃, 郑铁然, 郑贵滨
Owner: HARBIN INST OF TECH