
Audio recognition device and audio recognition method

A technology of audio recognition, applied in the field of speech recognition. It addresses the problem that spoken language is difficult to recognize at high precision, for example because at fast speaking rates the mouth movement cannot follow the speech and the voice becomes deformed, and achieves the effect of improved speech recognition precision.

Active Publication Date: 2010-12-23
NEC CORP

AI Technical Summary

Benefits of technology

The present invention relates to a speech recognition technique that improves the precision of recognizing spoken language. It proposes a method for learning models according to a speaking length, which indicates the length of a speech section in speech data, together with a speech recognition apparatus that includes a speech recognition unit and a model learning unit for implementing the speech recognition process. The technical effect of the invention is to improve the recognition precision of speech whose accurate feature quantities are hard to grasp, such as spoken language.

Problems solved by technology

It is difficult to recognize spoken language at high precision because of various causes, such as acoustic vagueness due to lazy articulation and the diversity of word arrangement.
Especially when the speaking rate is fast, the mouth movement has difficulty following the speech, and the voice becomes deformed.
Such deformation is considered to contribute largely to the degradation of recognition precision.



Examples


first embodiment

[0048]FIG. 2 shows a configuration of a speech recognition unit 100B_1 in the first embodiment. The speech recognition unit 100B_1 includes section detection means 103, speaking length decision means 201, speaking length classified models 107, model selection means 202, and recognition means 203.

[0049]The section detection means 103 has a function which is basically the same as the function of the section detection means 103 in the model learning unit 100A_1. The section detection means 103 detects a speech section from speech data which is input, and outputs start time and end time of the speech section as section information.

[0050]The speaking length decision means 201 calculates the speaking length, that is, the length of the section, based on the section information. The speaking length decision means 201 then decides which of the prescribed classes described above, such as “one second or less”, “between one second and three seconds”, and “at least three seconds”, corresponds to the calculated speak...
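The speaking length decision and model selection described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the class boundaries follow the example classes quoted in the text, while the function names and the placeholder model identifiers are assumptions made for the sketch.

```python
def speaking_length_class(start_time: float, end_time: float) -> str:
    """Decide which prescribed class the section's speaking length falls into
    (sketch of speaking length decision means 201)."""
    length = end_time - start_time  # speaking length from section information
    if length <= 1.0:
        return "one second or less"
    elif length < 3.0:
        return "between one second and three seconds"
    else:
        return "at least three seconds"

# Speaking length classified models (107): one model per class.
# The string identifiers are purely illustrative placeholders.
MODELS = {
    "one second or less": "model_short",
    "between one second and three seconds": "model_medium",
    "at least three seconds": "model_long",
}

def select_model(start_time: float, end_time: float) -> str:
    """Pick the model matching the section's speaking length class
    (sketch of model selection means 202)."""
    return MODELS[speaking_length_class(start_time, end_time)]
```

Recognition means 203 would then decode the section's speech data with the selected model.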

second embodiment

[0057]A second embodiment of the present invention will now be described. In the present embodiment, model learning and speech recognition are implemented with attention paid not only to the above-described speaking length but also to speaking time, which is time measured from the head of the speech section, as the feature quantity of speech.

[0058]FIG. 3 shows a configuration of a model learning unit in the second embodiment. A model learning unit 100A_2 in this embodiment includes speaking length classified data 105 obtained from the above-described common element 110 shown in FIG. 1, speaking time decision means 301, speaking length & speaking time classified data 302, the model learning means 106, and speaking length & speaking time classified models 303.

[0059]The speaking time decision means 301 further classifies speech data and written data in the speaking length classified data 105 classified by speaking length into three parts: a part of one second from the head, a part of the last one second, and rema...



Abstract

Acoustic models and language models are learned according to a speaking length, which indicates the length of a speaking section in speech data, and a speech recognition process is implemented using the learned acoustic models and language models. A speech recognition apparatus includes means (103) for detecting a speaking section in speech data (101) and generating section information which indicates the detected speaking section, means (104) for recognizing the data part corresponding to the section information in the speech data, together with text data (102) transcribed from the speech data, and for classifying the data part based on its speaking length, and means (106) for learning acoustic models and language models (107) by using the classified data part (105).
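The learning-side pipeline summarized in the abstract, classifying transcribed data parts by speaking length so that a separate model can be learned per class, can be sketched as follows. The function name and the short class labels are illustrative assumptions; the thresholds reuse the example classes from the description.

```python
from collections import defaultdict

def group_by_speaking_length(sections, transcripts):
    """Group (section, transcript) pairs by speaking length class so that
    acoustic/language models can be learned per class (sketch of means
    104 feeding model learning means 106)."""
    groups = defaultdict(list)
    for (start, end), text in zip(sections, transcripts):
        length = end - start
        if length <= 1.0:
            cls = "short"        # "one second or less"
        elif length < 3.0:
            cls = "medium"       # "between one second and three seconds"
        else:
            cls = "long"         # "at least three seconds"
        groups[cls].append(((start, end), text))
    return dict(groups)
```

Each resulting group corresponds to one entry of the speaking length classified data (105), from which one speaking-length-classified model (107) would be trained.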

Description

TECHNICAL FIELD

[0001]The present invention relates to a speech recognition technique, and in particular to a speech recognition technique using acoustic models and language models, and a technique for learning those models.

BACKGROUND ART

[0002]In recent years, speech recognition of spoken language has been studied vigorously. It is difficult to recognize spoken language at high precision because of various causes, such as acoustic vagueness due to lazy articulation and the diversity of word arrangement. As a technique for improving the recognition precision of spoken language, techniques that utilize phenomena observed in spoken language have been proposed. One example is a technique that pays attention to the speaking rate, as described in Non Patent Literature 1 below.

[0003]Unlike mechanical read-aloud speech or isolated-word speech, human spoken language is rarely vocalized at a constant speaking rate. Therefore, the rate of the...

Claims


Application Information

Patent Type & Authority: Application (United States)
IPC(8): G10L15/06, G10L15/04, G10L15/065, G10L15/183, G10L15/187, G10L15/197
CPC: G10L15/04, G10L15/063, G10L15/197, G10L15/183, G10L15/187, G10L15/142
Inventors: EMORI, TADASHI; ONISHI, YOSHIFUMI
Owner NEC CORP