
Cross-modal multi-feature fusion audio and video speech recognition method and system

A multi-feature fusion and speech recognition technology, applied in speech recognition, character and pattern recognition, speech analysis, etc.; it addresses problems such as speech interaction being easily affected by complex environmental noise.

Pending Publication Date: 2020-12-08
HUNAN UNIV
View PDF · 0 Cites · 13 Cited by

AI Technical Summary

Problems solved by technology

[0004] The technical problem to be solved by the present invention: in view of the above-mentioned problems in the prior art, and considering that in practical robot applications voice interaction is easily affected by complex environmental noise while facial motion information acquired from video is relatively stable, the present invention provides a cross-modal multi-feature fusion audio-video speech recognition method and system. The invention fuses speech information, visual information and visual motion information through an attention mechanism, and exploits the correlation between different modalities to acquire the speech content expressed by the user more accurately. This improves the accuracy of speech recognition under complex background noise, improves speech recognition performance in human-computer interaction, and effectively overcomes the low accuracy of audio-only speech recognition in noisy environments.

Method used



Examples


Embodiment Construction

[0035] As shown in Figure 1 and Figure 2, a cross-modal multi-feature fusion audio-video speech recognition method includes:

[0036] 1) Preprocess the speaker's audio data to obtain the spectrogram sequence Xa; preprocess the speaker's video data to extract the lip-region image sequence Xv, and extract the lip motion information to obtain the optical-flow map sequence Xo;
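The audio half of step 1) can be sketched as follows. The patent does not specify frame parameters, so the sample rate (16 kHz), 25 ms window, and 10 ms hop below are conventional ASR assumptions, not values from the source; the log-magnitude STFT serves as the spectrogram sequence Xa.

```python
import numpy as np

def spectrogram_sequence(waveform, frame_len=400, hop=160, n_fft=512):
    """Split a mono waveform into overlapping windowed frames and take
    the log-magnitude STFT, giving the spectrogram sequence Xa."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(waveform) - frame_len) // hop
    frames = np.stack([waveform[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    spectra = np.abs(np.fft.rfft(frames, n=n_fft, axis=1))
    return np.log(spectra + 1e-8)        # shape: (n_frames, n_fft // 2 + 1)

wave = np.random.randn(16000)            # 1 s of dummy audio at 16 kHz
X_a = spectrogram_sequence(wave)
print(X_a.shape)                         # (98, 257)
```

The lip-region crop and optical-flow maps (Xv, Xo) would typically come from a face/landmark detector and a dense optical-flow estimator applied to consecutive lip frames; the patent does not name specific algorithms for these.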

[0037] 2) Perform feature extraction on the spectrogram sequence Xa to obtain the speech temporal features Ha; on the lip-region image sequence Xv to obtain the lip temporal features Hv; and on the optical-flow map sequence Xo to obtain the lip-motion temporal features Ho;
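Step 2) turns each preprocessed sequence into temporal features. The patent does not disclose the extractor architecture here, so the sketch below uses a minimal 1-D temporal convolution with random weights purely to illustrate the shapes involved (T frames in, T - k + 1 feature vectors out); a real system would use a trained network.

```python
import numpy as np

def temporal_conv(X, W, b):
    """1-D convolution along time: X is (T, d_in), W is (k, d_in, d_out);
    returns a feature sequence of shape (T - k + 1, d_out)."""
    k, T = W.shape[0], X.shape[0]
    return np.stack([np.tensordot(X[t:t + k], W, axes=([0, 1], [0, 1])) + b
                     for t in range(T - k + 1)])

rng = np.random.default_rng(0)
X_a = rng.standard_normal((98, 257))            # spectrogram sequence Xa
W = rng.standard_normal((3, 257, 64)) * 0.01    # untrained demo weights
b = np.zeros(64)
H_a = np.maximum(temporal_conv(X_a, W, b), 0.0) # ReLU -> speech features Ha
print(H_a.shape)                                # (96, 64)
```

The same pattern applies to Xv and Xo (with image frames flattened or passed through a visual frontend first), yielding Hv and Ho of matching feature width.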

[0038] 3) Use the multi-head attention mechanism on the obtained speech temporal features Ha, lip temporal features Hv, and lip-motion temporal features Ho to calculate the association representations across the different modalitie...
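The cross-modal association in step 3) can be illustrated with scaled dot-product multi-head attention, where queries come from one modality and keys/values from another. The head count and the identity (non-learned) projections below are simplifying assumptions for the sketch; the patent's actual projection matrices would be learned.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multihead_cross_attention(Q_in, KV_in, n_heads=4):
    """Queries from one modality, keys/values from another; each head
    attends over a d/n_heads slice, outputs are concatenated."""
    d = Q_in.shape[1]
    d_h = d // n_heads
    heads = []
    for h in range(n_heads):
        q = Q_in[:, h * d_h:(h + 1) * d_h]
        kv = KV_in[:, h * d_h:(h + 1) * d_h]
        attn = softmax(q @ kv.T / np.sqrt(d_h))   # (T_q, T_kv) weights
        heads.append(attn @ kv)
    return np.concatenate(heads, axis=1)

rng = np.random.default_rng(1)
H_a = rng.standard_normal((96, 64))   # speech features (queries)
H_v = rng.standard_normal((96, 64))   # lip features (keys/values)
A_av = multihead_cross_attention(H_a, H_v)
print(A_av.shape)                     # (96, 64)
```

Analogous attention blocks between (Ha, Ho) and (Hv, Ho) would give the remaining pairwise association representations before fusion.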



Abstract

The invention relates to audio-video speech recognition technology and provides a cross-modal multi-feature fusion audio-video speech recognition method and system, in consideration of the situation that, in a practical robot application environment, speech interaction is easily affected by complex environmental noise while facial motion information acquired through video is relatively stable. According to the method, speech information, visual information and visual motion information are fused through an attention mechanism, and the speech content expressed by a user is acquired more accurately by using the relevance among different modalities, so that the speech recognition precision under complex background noise is improved, the speech recognition performance in human-computer interaction is improved, and the problem of low audio-only recognition accuracy in a noisy environment is effectively solved.

Description

Technical field

[0001] The invention relates to audio-video speech recognition technology, and in particular to a cross-modal multi-feature fusion audio-video speech recognition method and system.

Background technique

[0002] The purpose of Automatic Speech Recognition (ASR) technology is to enable machines to "understand" human speech by converting spoken information into readable text, which is the key technology for realizing human-computer speech interaction. Among the various forms of human expression, language carries the most abundant and precise information. With the gradual development of deep learning, the speech recognition rate in a quiet environment now exceeds 95%, surpassing human recognition accuracy.

[0003] However, in practical human-computer interaction, complex background noise greatly degrades speech quality and speech intelligibility, seriously affecting speech recognition performance...


Application Information

Patent Timeline: no application data available
Patent Type & Authority: Application (China)
IPC (IPC8): G10L15/25, G10L15/26, G10L25/30, G10L15/02, G10L15/20, G06K9/00, G06K9/62, G06T7/269
CPC: G10L15/25, G10L15/02, G10L15/20, G10L15/26, G10L25/30, G06T7/269, G06T2207/10016, G06T2207/20081, G06T2207/20084, G06T2207/30196, G06V40/20, G06F18/253
Inventors: Li Shutao, Song Qiya, Sun Bin
Owner: HUNAN UNIV