
Audio/video keyword identification method based on decision-making level fusion

An audio/video keyword recognition method in the information technology field. It addresses two problems of existing approaches: appearance-based visual feature extraction considers the mouth region only globally, ignoring local variation in the time and spatial domains, and feature-level audio-video fusion cannot allocate the relative contributions of the visual and acoustic modalities under different acoustic signal-to-noise ratios. Both shortcomings degrade recognition performance, which the proposed method improves.

Active Publication Date: 2014-07-23
PEKING UNIV SHENZHEN GRADUATE SCHOOL

AI Technical Summary

Problems solved by technology

[0006] To sum up, current keyword recognition technology based on audio-video fusion mainly uses appearance features as visual features. Existing appearance feature extraction methods consider the characteristics of the mouth region only from a global perspective, ignoring local changes in the time and spatial domains, even though this local information is crucial.
In addition, the audio-video fusion strategy in use is feature-level fusion. This approach requires more training data to fully train a classifier, and it cannot solve the problem of allocating the visual and acoustic contributions under different acoustic signal-to-noise ratio environments, which degrades recognition performance.
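The contrast drawn above can be made concrete with a minimal sketch. The function names and the toy numbers below are illustrative assumptions, not taken from the patent: feature-level fusion concatenates the per-frame feature vectors into one input for a single classifier, while decision-level fusion scores each modality separately and combines the log-likelihoods with a weight that can depend on the acoustic signal-to-noise ratio.

```python
import numpy as np

def feature_level_fusion(acoustic_feat, visual_feat):
    """Feature-level fusion: concatenate per-frame feature vectors and
    feed one joint classifier (which then needs more training data)."""
    return np.concatenate([acoustic_feat, visual_feat], axis=-1)

def decision_level_fusion(acoustic_loglik, visual_loglik, weight):
    """Decision-level fusion: each modality is scored by its own model,
    then the log-likelihoods are combined with an SNR-dependent weight."""
    return weight * acoustic_loglik + (1.0 - weight) * visual_loglik

# Toy example: at low SNR the acoustic score is unreliable, so a smaller
# acoustic weight shifts the decision toward the visual modality.
joint = feature_level_fusion(np.zeros(39), np.zeros(20))
print(joint.shape)                                      # (59,)
print(decision_level_fusion(-120.0, -80.0, weight=0.3))  # -92.0
```

The decision-level variant is what lets the weight be retuned per noise condition without retraining either single-modality model.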



Embodiment Construction

[0078] The technical solutions in the embodiments of the present invention will be clearly and completely described below in conjunction with the accompanying drawings. It should be understood that the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the present invention.

[0079] First, the keyword table is defined. The task in the embodiment of the present invention is oriented to human-computer interaction, so 30 keywords commonly used in human-computer interaction are defined to form the keyword table. According to the defined keyword table, transcripts containing the keywords were designed, with 5 sentences per keyword, for a total of 150 transcripts.

[0080] Synchronously record audi...



Abstract

The invention relates to an audio/video keyword recognition method based on decision-level fusion. The method mainly includes the following steps: (1) keyword audio/video is recorded, keyword and non-keyword acoustic feature vector sequences and visual feature vector sequences are obtained, and keyword and non-keyword acoustic templates and visual templates are trained from them; (2) acoustic and visual likelihoods are obtained from audio/video recorded in different acoustic noise environments, from which the acoustic modality reliability, visual modality reliability, and optimal fusion weight are derived, and an artificial neural network is trained on these; (3) two-pass parallel keyword recognition based on the acoustic and visual modalities is performed on the audio/video under test, using the acoustic templates, the visual templates, and the artificial neural network. By fusing the acoustic and visual cues at the decision level and performing two-pass parallel bimodal keyword recognition on the audio/video under test, the method fully exploits the contribution of visual information in acoustically noisy environments and thereby improves recognition performance.
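Step (2) of the abstract can be sketched as follows. The patent text does not specify how reliability is measured or how the neural network is structured, so both choices below are assumptions for illustration: reliability is taken as the gap between the best and the mean N-best log-likelihood of a modality, and a sigmoid over the reliability difference stands in for the trained artificial neural network that maps reliabilities to the acoustic fusion weight.

```python
import numpy as np

def modality_reliability(nbest_logliks):
    """Assumed reliability measure: the gap between the best hypothesis
    and the mean of the N-best log-likelihoods. A wide gap means the
    modality discriminates well under the current noise condition."""
    nbest = np.asarray(nbest_logliks, dtype=float)
    return float(nbest.max() - nbest.mean())

def fusion_weight(acoustic_rel, visual_rel):
    """Stand-in for the trained artificial neural network: map the two
    reliabilities to an acoustic weight in (0, 1) via a sigmoid."""
    return 1.0 / (1.0 + np.exp(-(acoustic_rel - visual_rel)))

# Clean audio: acoustic N-best scores are well separated -> high weight.
w_clean = fusion_weight(modality_reliability([-50, -80, -85, -90]),
                        modality_reliability([-60, -65, -66, -67]))
# Noisy audio: acoustic scores bunch together -> weight drops toward vision.
w_noisy = fusion_weight(modality_reliability([-50, -52, -53, -54]),
                        modality_reliability([-60, -65, -66, -67]))
print(w_clean > w_noisy)  # True
```

In the actual method the weight would be produced by the network trained in step (2) and applied in the decision-level combination of step (3).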

Description

Technical Field [0001] The invention belongs to the field of information technology and relates to audio and video processing technology applied in the field of human-computer interaction, in particular to an audio/video keyword recognition method based on decision-level fusion. Background Technology [0002] As an important branch of continuous speech recognition, keyword recognition technology aims to detect preset keywords in a continuous, unrestricted speech stream. Since there is no need to decode the complete speech stream, keyword recognition is more flexible than continuous speech recognition and is well suited to specific application fields such as defense monitoring, human-computer interaction, and audio document retrieval. In order to improve the robustness of speech recognition systems in noisy environments, audio-video speech recognition has in recent years become a popular research direction by fusin...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G10L15/26, G10L15/06, G06F17/30
Inventor: 刘宏, 范婷, 吴平平
Owner PEKING UNIV SHENZHEN GRADUATE SCHOOL