System and method for end-to-end speech recognition with triggered attention

A speech recognition and attention technology, applied in speech recognition, speech analysis, neural learning methods, and the like, that addresses the problem of attention-based models being poorly suited to online/streaming ASR

Status: Pending
Publication Date: 2021-10-29
MITSUBISHI ELECTRIC CORP

AI Technical Summary

Problems solved by technology

However, attention-based neural networks suffer from output latency and are therefore less suitable for online/streaming ASR, where low latency is required.




Description of Embodiments

[0047] Figure 1 shows a schematic diagram of a speech recognition (ASR) system 100 configured for end-to-end speech recognition, according to some embodiments. The speech recognition system 100 takes an input acoustic sequence and processes it to generate a transcription output sequence. Each transcription output sequence is a transcription of the utterance, or part of an utterance, represented by the corresponding input acoustic signal. For example, the speech recognition system 100 may take an input acoustic signal 102 and generate a corresponding transcription output 110, which is a transcription of the utterance represented by the input acoustic signal 102.
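As a minimal sketch of the input/output contract described in paragraph [0047], the following Python stub maps an acoustic signal (e.g., signal 102) to a transcription output (e.g., output 110). The class and method names are illustrative assumptions, not taken from the patent text.

from typing import Protocol, Sequence

class EndToEndASR(Protocol):
    """Sketch of the contract of system 100 (names are assumptions)."""

    def transcribe(self, acoustic_signal: Sequence[float]) -> str:
        """Process the input acoustic sequence and return the transcription
        of the utterance (or part of an utterance) it represents."""
        ...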

[0048] The input acoustic signal 102 may include a multi-frame sequence of audio data, e.g., a continuous data stream, that is a digital representation of an utterance. The sequence of frames of audio data may correspond to a sequence of time steps, e.g., where each frame of audio data is associated w...
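As a rough illustration of the framing described in paragraph [0048] (the paragraph itself is truncated in this record), the sketch below splits a digital audio signal into overlapping frames, one per time step. The 25 ms window and 10 ms shift are common defaults assumed here, not values stated in the patent.

import numpy as np

def frame_signal(samples: np.ndarray, sample_rate: int = 16000,
                 frame_ms: float = 25.0, shift_ms: float = 10.0) -> np.ndarray:
    """Split audio samples into overlapping frames, one frame per time step."""
    frame_len = int(sample_rate * frame_ms / 1000)   # e.g. 400 samples
    shift = int(sample_rate * shift_ms / 1000)       # e.g. 160 samples
    n_frames = 1 + max(0, (len(samples) - frame_len) // shift)
    frames = np.stack([samples[i * shift: i * shift + frame_len]
                       for i in range(n_frames)])
    return frames  # shape: (n_frames, frame_len), one row per time step

# Usage: a one-second utterance at 16 kHz yields 98 frames (time steps).
utterance = np.random.randn(16000)
print(frame_signal(utterance).shape)  # (98, 400)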



Abstract

A speech recognition system includes an encoder to convert an input acoustic signal into a sequence of encoder states, an alignment decoder to identify locations of encoder states in the sequence of encoder states that encode transcription outputs, a partition module to partition the sequence of encoder states into a set of partitions based on the locations of the identified encoder states, and an attention-based decoder to determine the transcription outputs for each partition of encoder states submitted to the attention-based decoder as an input. Upon receiving the acoustic signal, the system uses the encoder to produce the sequence of encoder states, partitions the sequence of encoder states into the set of partitions based on the locations of the encoder states identified by the alignment decoder, and submits the set of partitions sequentially into the attention-based decoder to produce a transcription output for each of the submitted partitions.
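To make the data flow in the abstract concrete, here is a minimal, hedged sketch of the four steps it names (encode, locate, partition, decode). The component implementations, the function names, and the choice of partition boundaries are illustrative placeholders, not the patent's actual networks.

from typing import Callable, List, Sequence

def triggered_attention_asr(
    acoustic_signal: Sequence[float],
    encoder: Callable[[Sequence[float]], List[list]],       # signal -> encoder states
    alignment_decoder: Callable[[List[list]], List[int]],   # states -> trigger locations
    attention_decoder: Callable[[List[list]], str],         # partition -> transcription output
) -> List[str]:
    # 1. Encoder: convert the input acoustic signal into a sequence of encoder states.
    states = encoder(acoustic_signal)

    # 2. Alignment decoder: identify the locations of encoder states that
    #    encode transcription outputs.
    trigger_locations = alignment_decoder(states)

    # 3. Partition module: partition the encoder-state sequence based on the
    #    identified locations. Here each partition runs from the start of the
    #    sequence up to and including the trigger location, one simple assumed
    #    choice of boundary.
    partitions = [states[: loc + 1] for loc in trigger_locations]

    # 4. Attention-based decoder: submit the partitions sequentially and
    #    collect one transcription output per partition.
    return [attention_decoder(p) for p in partitions]

# Toy usage with dummy stand-in components (illustration only):
dummy_encoder = lambda sig: [[x] for x in sig]
dummy_aligner = lambda states: [i for i, s in enumerate(states) if s[0] > 0]
dummy_decoder = lambda part: "token@%d" % (len(part) - 1)
print(triggered_attention_asr([0.1, -0.2, 0.3],
                              dummy_encoder, dummy_aligner, dummy_decoder))
# -> ['token@0', 'token@2']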

Description

Technical Field

[0001] The present invention relates generally to a system and method for speech recognition and, more particularly, to a method and system for end-to-end speech recognition.

Background

[0002] Automatic speech recognition (ASR) systems are widely used in various interface applications, such as voice search. However, it is challenging to build a speech recognition system that achieves high recognition accuracy, because doing so requires in-depth linguistic knowledge of the target language accepted by the ASR system. For example, phoneme sets, vocabularies, and pronunciation dictionaries are essential to building such an ASR system. Phoneme sets need to be carefully defined by linguists of the language. Pronunciation dictionaries need to be created manually by assigning one or more phoneme sequences to each word in a vocabulary comprising more than 100,000 words. Moreover, some languages do not have clear lexical boundaries, so we may nee...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G10L15/16; G06N3/04; G10L15/32
CPC: G10L15/16; G10L15/32; G06N3/08; G06N3/048; G06N7/01; G06N3/044; G06N3/045; G10L15/22; G10L19/00; G10L15/08; G10L15/02; G10L25/30; G06N3/02
Inventors: N. Moritz; Takaaki Hori; J. Le Roux
Owner: MITSUBISHI ELECTRIC CORP