
End-to-end speech recognition method based on rotation position coding

A speech recognition technology based on rotary position encoding, applied in the field of pattern recognition. It addresses the problem that the matrix operations of relative position encoding are cumbersome to implement, and achieves the effects of simple implementation, few parameters, and good performance.

Pending Publication Date: 2022-01-04
NORTHWESTERN POLYTECHNICAL UNIV +1

AI Technical Summary

Problems solved by technology

However, relative position encoding increases the model's parameter count, and its matrix operations are cumbersome to implement.

Method used



Examples


Specific Embodiment

[0068] 1. Data preparation:

[0069] The experiments use the Mandarin corpus AISHELL-1 and the English speech corpus LibriSpeech. The former contains 170 hours of labeled speech; the latter includes 970 hours of labeled speech plus an additional text-only corpus of 800 million words for building language models.

[0070] 2. Data processing:

[0071] Extract 80-dimensional logarithmic mel filter bank features with a frame length of 25 ms and a frame shift of 10 ms, and normalize the features so that each speaker's features have zero mean and unit variance. The AISHELL-1 dictionary contains 4231 labels; the LibriSpeech dictionary contains 5000 labels generated by the byte pair encoding algorithm. In addition, both vocabularies include the padding symbol "PAD", the unknown symbol "UNK", and the end-of-sentence symbol "EOS".
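The per-speaker normalization in step [0071] can be sketched as follows. This is a minimal illustration, assuming the speaker's log-mel features have already been extracted and pooled into a single array; the function name `speaker_cmvn` and the random stand-in data are illustrative, not part of the patent.

```python
import numpy as np

def speaker_cmvn(features):
    """Per-speaker mean and variance normalization.

    features: (num_frames, 80) log-mel filter bank features pooled
    over all utterances of one speaker. Returns features with zero
    mean and unit variance in each dimension, as in step [0071].
    """
    mean = features.mean(axis=0, keepdims=True)
    std = features.std(axis=0, keepdims=True)
    return (features - mean) / (std + 1e-8)  # eps avoids divide-by-zero

# Random stand-in for one speaker's pooled 80-dim features
rng = np.random.default_rng(0)
feats = rng.normal(loc=3.0, scale=2.0, size=(1000, 80))
norm = speaker_cmvn(feats)
print(norm.mean(), norm.std())  # ~0 and ~1
```

In practice this statistic is computed once per speaker over all of that speaker's utterances, then applied to each utterance before training.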

[0072] 3. Build a network:

[0073] The model proposed by the present invention contains 12 enc...



Abstract

The invention discloses an end-to-end speech recognition method based on rotary position encoding. The method uses rotary position encoding to enhance the modeling capability of a convolution-augmented self-attention network (Conformer) on acoustic features. First, the absolute position information of elements in the input sequence is encoded by a rotation matrix; the relative position information then enters through the inner product of the input vectors of the multi-head self-attention module. On this basis, an end-to-end speech recognition model based on the convolution-augmented self-attention network is constructed, and input speech is converted into text by the model. Experiments on the AISHELL-1 and LibriSpeech corpora show that the Conformer enhanced by rotary position encoding outperforms the original Conformer on the speech recognition task. A word error rate of 4.69% is achieved on the AISHELL-1 test set, and word error rates of 2.1% and 5.1% are achieved on the test-clean and test-other sets of LibriSpeech, respectively.
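The key property the abstract describes, that rotating query and key vectors by position-dependent angles makes their inner product depend only on the relative offset, can be demonstrated in a few lines. This is a minimal numpy sketch of rotary position encoding, not the patent's actual model; the function name `rope`, the dimension 64, and the base 10000 are conventional but illustrative choices.

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Apply rotary position encoding to vector x at position pos.

    x: (d,) with d even. Each consecutive pair (x[2i], x[2i+1]) is
    rotated by angle pos * theta_i, with theta_i = base**(-2i/d).
    """
    d = x.shape[0]
    theta = base ** (-np.arange(0, d, 2) / d)  # (d/2,) frequencies
    ang = pos * theta
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin  # 2-D rotation per pair
    out[1::2] = x1 * sin + x2 * cos
    return out

rng = np.random.default_rng(1)
q, k = rng.normal(size=64), rng.normal(size=64)
# The inner product depends only on the relative offset m - n:
a = rope(q, 7) @ rope(k, 3)    # offset 4
b = rope(q, 12) @ rope(k, 8)   # offset 4
print(np.isclose(a, b))  # True
```

Because each pair of dimensions is rotated by an angle proportional to the absolute position, the attention score between positions m and n reduces to a function of m - n, injecting relative position information without any extra learned parameters.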

Description

Technical field

[0001] The invention belongs to the technical field of pattern recognition, and in particular relates to a speech recognition method.

Background technique

[0002] The temporal information of the input sequence plays a crucial role in many sequence learning tasks, especially in speech recognition. Models based on recurrent neural networks can learn the temporal information of sequences by recursively computing their hidden states along the time dimension. Models based on convolutional neural networks can implicitly learn the position information of input sequences through padding operations. In recent years, Transformer-based models have shown great superiority in various sequence learning tasks such as machine translation, language modeling, and speech recognition. A Transformer-based model uses the self-attention mechanism to model the dependencies between different elements in the input sequence, which provides more efficient parallel computation than recurrent neural ...
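As background for the self-attention mechanism mentioned above, the pairwise-dependency computation can be sketched as follows. This is a single-head illustration without learned projections, not the patent's model; the function name and array shapes are hypothetical.

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a sequence.

    X: (seq_len, d). Every output frame is a weighted mixture of
    all input frames, so dependencies between any two positions
    are modeled in a single parallel step.
    """
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)  # pairwise similarity scores
    # Row-wise softmax turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ X

X = np.random.default_rng(2).normal(size=(5, 8))
Y = self_attention(X)
print(Y.shape)  # (5, 8)
```

Note that the score matrix itself is order-invariant: permuting the input frames permutes the output identically, which is exactly why position information must be injected separately, e.g. by the rotary encoding the invention proposes.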

Claims


Application Information

IPC(8): G10L15/16; G10L15/26
CPC: G10L15/16; G10L15/26
Inventor: 张晓雷, 李盛强
Owner NORTHWESTERN POLYTECHNICAL UNIV