Voice recognition method and system based on deep neural network acoustic model

A technology based on deep neural networks and acoustic models, applied in the field of speech recognition methods and systems based on deep neural network acoustic models. It addresses problems such as gradient explosion and gradient dispersion (vanishing gradients) in the acoustic model, and recognition accuracy that still needs to be improved.

Pending Publication Date: 2021-06-08
XI AN JIAOTONG UNIV
View PDF · 0 Cites · 8 Cited by
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

[0004] However, some challenges remain in current research. Acoustic models trained with computationally intensive deep neural networks can suffer from gradient dispersion and gradient explosion, and recognition accuracy in low-resource task scenarios still needs to be improved.

Method used


Image

  • Voice recognition method and system based on deep neural network acoustic model

Examples

Experimental program
Comparison scheme
Effect test

Embodiment Construction

[0060] The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on these embodiments, all other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

[0061] It should be understood that when used in this specification and the appended claims, the terms "comprising" and "comprises" indicate the presence of the described features, integers, steps, operations, elements, and/or components, but do not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or collections thereof.

[0062] It should also be understood that the terminology used ...


Abstract

The invention discloses a speech recognition method and system based on a deep neural network acoustic model. The method comprises the steps of: performing sliding-window preprocessing on the speech to be recognized and extracting acoustic features; constructing and training a deep neural network acoustic model; using the trained model to compute the likelihood probabilities corresponding to the extracted acoustic features; constructing a static decoding graph, with which the decoder, applying the dynamic-programming Viterbi algorithm to the static graph and the likelihood probabilities, builds a directed acyclic graph containing all candidate recognition results as the decoding network; obtaining a state-level word graph (lattice) from the decoding network and determinizing it to obtain the word-level word graph; and extracting the best-cost path from the word-level word graph to obtain the word sequence corresponding to the optimal state sequence, which is taken as the final recognition result, completing the speech recognition. The method alleviates the gradient dispersion and gradient explosion caused by structurally complex network models, reduces the word error rate while maintaining decoding speed, and improves recognition accuracy.
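The first step of the pipeline above, sliding-window preprocessing, can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the 25 ms / 10 ms frame parameters, the Hamming window, and the per-frame log-energy "feature" are common placeholder choices standing in for whatever acoustic features the method actually extracts.

```python
import numpy as np

def frame_and_window(signal, sample_rate=16000,
                     frame_len_ms=25.0, frame_shift_ms=10.0):
    """Split a 1-D waveform into overlapping, Hamming-windowed frames.

    The 25 ms frame length and 10 ms shift are common defaults; the
    patent does not specify its exact windowing parameters.
    """
    frame_len = int(sample_rate * frame_len_ms / 1000)    # samples per frame
    frame_shift = int(sample_rate * frame_shift_ms / 1000)
    num_frames = 1 + max(0, (len(signal) - frame_len) // frame_shift)
    window = np.hamming(frame_len)
    return np.stack([
        signal[i * frame_shift : i * frame_shift + frame_len] * window
        for i in range(num_frames)
    ])

def log_energy_features(frames, eps=1e-10):
    """A stand-in acoustic feature: per-frame log energy."""
    return np.log(np.sum(frames ** 2, axis=1) + eps)

# Example: one second of synthetic audio at 16 kHz.
rng = np.random.default_rng(0)
wave = rng.standard_normal(16000)
frames = frame_and_window(wave)
feats = log_energy_features(frames)
print(frames.shape)  # (98, 400): 98 frames of 400 samples each
print(feats.shape)   # (98,): one feature value per frame
```

In a real front end, each frame would be mapped to a feature vector (e.g. filterbank or MFCC coefficients) before being fed to the acoustic model.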

Description

technical field

[0001] The invention belongs to the technical field of speech recognition, and in particular relates to a speech recognition method and system based on a deep neural network acoustic model.

Background technique

[0002] In recent years, with the rapid development of the artificial intelligence industry, speech recognition technology has received more and more attention from academia and industry. As a front-end technology in the field of speech interaction, speech recognition plays a vital role. It is widely used in many human-computer interaction systems, such as intelligent customer service systems, chat robots, personal intelligent assistants, and smart homes.

[0003] In the classic speech recognition framework, an acoustic model is a set of HMMs (Hidden Markov Models). Generally, the parameters of an HMM consist of three parts: the initial probability, the transition probability, and the observation probability. According to the acoustic model, the logarithmic obser...
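The three HMM parameter groups named above (initial, transition, and observation probabilities) are exactly what the dynamic-programming Viterbi algorithm mentioned in the abstract operates on. A minimal sketch with a toy two-state HMM, using illustrative values not taken from the patent, and working in the log domain for numerical stability:

```python
import numpy as np

# Toy HMM (illustrative values, not from the patent):
log_init = np.log(np.array([0.6, 0.4]))          # pi: initial probabilities
log_trans = np.log(np.array([[0.7, 0.3],
                             [0.4, 0.6]]))       # A[i, j] = P(state j | state i)
log_obs = np.log(np.array([[0.5, 0.4, 0.1],
                           [0.1, 0.3, 0.6]]))    # B[i, o] = P(obs o | state i)

def viterbi(observations):
    """Dynamic-programming Viterbi: the state sequence that maximizes
    the joint log-probability of states and observations."""
    T, S = len(observations), len(log_init)
    delta = np.empty((T, S))               # best log-prob ending in each state
    backptr = np.zeros((T, S), dtype=int)  # argmax predecessor states
    delta[0] = log_init + log_obs[:, observations[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans        # S x S: from -> to
        backptr[t] = np.argmax(scores, axis=0)
        delta[t] = scores[backptr[t], np.arange(S)] + log_obs[:, observations[t]]
    # Trace back the optimal path from the best final state.
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]

print(viterbi([0, 1, 2]))  # [0, 0, 1]
```

In the method of the invention, the observation log-probabilities are not table lookups as here but likelihoods produced by the deep neural network acoustic model, and the search runs over the static decoding graph rather than a single small HMM.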

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G10L15/16; G10L15/06; G10L15/02; G10L15/08; G10L25/87
CPC: G10L15/02; G10L15/063; G10L15/08; G10L15/16; G10L25/87; G10L2015/025; G10L2015/0631; G10L2015/088
Inventor: 范建存, 马一航, 周世豪, 景海婷, 杨涛, 左良玉
Owner XI AN JIAOTONG UNIV