
Complex scene voice recognition method and device based on multiple modes

A speech recognition technology for complex scenes, applied in speech recognition, speech analysis, instruments, etc., addressing the problem of the limited applicability of single-modal speech recognition technology.

Pending Publication Date: 2020-12-29
NAT INNOVATION INST OF DEFENSE TECH PLA ACAD OF MILITARY SCI +1

AI Technical Summary

Problems solved by technology

[0005] In order to solve the problem that single-modal speech recognition technology has limited applicability in complex scenes, the present invention proposes a multimodal complex-scene speech recognition method and device.




Embodiment Construction

[0077] To aid understanding of the contents of the present invention, an example is given here.

[0078] In one aspect, the present invention proposes a multimodal speech recognition method for complex scenes; Figure 1 is an overall schematic diagram of the multimodal complex-scene speech recognition device. The method includes:

[0079] S1: The change of the lip image collected by the image sensor is used as the trigger for multimodal data input; that is, the lip-image acquisition device monitors whether the user's lip image changes. If a change in the collected lip image is detected, the user is considered to have begun a voice input, and the audio signal, lip-image signal, and facial EMG signal corresponding to the voice input are collected synchronously;
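Step S1 can be sketched as a simple trigger-then-capture loop. This is an illustrative sketch only: the change threshold, the frame-difference test, and the device interfaces (`mic`, `camera`, `emg_sensor` with a `record` method) are assumptions, not details published in the patent.

```python
import numpy as np

# Assumed threshold on mean absolute pixel difference for "lip moved".
LIP_CHANGE_THRESHOLD = 12.0

def lip_region_changed(prev_frame: np.ndarray, curr_frame: np.ndarray,
                       threshold: float = LIP_CHANGE_THRESHOLD) -> bool:
    """Return True if the lip region differs enough to signal speech onset."""
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    return bool(diff.mean() > threshold)

def capture_multimodal(mic, camera, emg_sensor, seconds: float) -> dict:
    """Once speech onset is detected, collect the three streams synchronously.
    The device objects are hypothetical interfaces with a record(seconds) method."""
    return {
        "audio": mic.record(seconds),
        "lip_images": camera.record(seconds),
        "facial_emg": emg_sensor.record(seconds),
    }
```

In a real system the three streams would share a common clock or hardware trigger so their samples can be aligned frame-by-frame downstream.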

[0080] S2: According to the audio signal, the lip-image signal, and the facial electromyographic signal, determine the multi-source data features of the signal in the space a...
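One common way to realize the spirit of step S2 for a 1-D stream (audio or EMG) is to compute per-frame features in both domains: a time-domain statistic such as RMS energy, and frequency-domain spectral magnitudes. The frame sizes and feature choices below are assumptions for demonstration, not the patent's method.

```python
import numpy as np

def frame_signal(x: np.ndarray, frame_len: int, hop: int) -> np.ndarray:
    """Split a 1-D signal into overlapping frames of length frame_len."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop: i * hop + frame_len] for i in range(n_frames)])

def time_freq_features(x: np.ndarray, frame_len: int = 256, hop: int = 128) -> np.ndarray:
    """Per-frame features: RMS energy (time domain) concatenated with
    log spectral magnitudes (frequency domain)."""
    frames = frame_signal(x, frame_len, hop)
    rms = np.sqrt((frames ** 2).mean(axis=1, keepdims=True))      # time domain
    spec = np.abs(np.fft.rfft(frames, axis=1))                    # frequency domain
    return np.concatenate([rms, np.log1p(spec)], axis=1)
```

The lip-image stream would instead use spatial features (e.g. a learned embedding per frame); aligning all three streams on a common frame rate yields the multi-source feature sequence fed to the encoder.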



Abstract

The invention discloses a multimodal speech recognition method for complex scenes. The method comprises the following steps: if a change in the collected lip image of the user is detected, synchronously collecting the audio signal, lip-image signal, and facial electromyogram (EMG) signal corresponding to the voice input; determining the multi-source data features of the signals in the space and time domains; encoding and modeling the multi-source data features with a speech recognition model to obtain the information common to the contents expressed by the different modalities, and thereby the multimodal speech information; and synthesizing text with a language model. The invention further discloses a multimodal speech recognition device for complex scenes, comprising a data acquisition module, a feature extraction module, a coding and decoding module, a text synthesis module, and an interaction module. The invention realizes efficient, accurate, and robust speech recognition in complex scene environments such as vocal cord damage, high noise, high closure, and high privacy requirements, providing a more reliable voice interaction technology and system for complex human-machine interaction scenes.
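The five modules named in the abstract form a linear pipeline, which can be made concrete with a minimal structural sketch. All class and method names here are assumptions for illustration; the patent does not publish an API.

```python
class MultimodalASRDevice:
    """Structural sketch of the five-module device described in the abstract."""

    def __init__(self, acquisition, feature_extractor, codec, synthesizer, interaction):
        self.acquisition = acquisition            # collects audio / lip / EMG signals
        self.feature_extractor = feature_extractor  # space- and time-domain features
        self.codec = codec                        # encodes/decodes cross-modal information
        self.synthesizer = synthesizer            # language-model text synthesis
        self.interaction = interaction            # presents the result to the user

    def recognize(self, seconds: float) -> str:
        signals = self.acquisition.collect(seconds)
        features = self.feature_extractor.extract(signals)
        speech_info = self.codec.encode_decode(features)
        text = self.synthesizer.to_text(speech_info)
        self.interaction.present(text)
        return text
```

Keeping each stage behind a narrow interface like this is a natural fit for the claimed modularity: any single modality's front end can be swapped without touching the rest of the pipeline.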

Description

Technical Field

[0001] The present invention relates to the technical field of speech recognition, and in particular to a multimodal-fusion-based collaborative interactive speech recognition method and device for complex scenes.

Background

[0002] Voice interaction is one of the most common and direct communication methods between people. Speech recognition technology based on the sound medium emerged during the machine translation research of the 1950s. In recent years, with the development of artificial neural networks and machine learning algorithms, acoustic models based on deep learning have gradually been adopted in speech recognition. Speech recognition technology has made remarkable progress, has been widely applied in industry, communications, medicine, and other fields, and has opened a new era of intelligent speech recognition and interaction.

[0003] Traditional speech recognition technology that relies on the sound medium cannot...

Claims


Application Information

IPC(8): G10L15/22; G10L15/06; G10L15/16; G10L15/25; G10L25/24; G10L25/30; G10L25/45
CPC: G10L15/22; G10L15/25; G10L25/24; G10L25/45; G10L25/30; G10L15/16; G10L15/063; G10L15/06
Inventors: 印二威, 吴竞寒, 闫慧炯, 谢良, 邓宝松, 范晓丽, 罗治国, 闫野
Owner NAT INNOVATION INST OF DEFENSE TECH PLA ACAD OF MILITARY SCI