
A deep hybrid generative network adaptive method and system

A deep neural network and adaptation technology, applied in the field of deep hybrid generative network adaptation methods and systems, which solves the problems of untargeted training, low speech recognition accuracy, and low efficiency, and achieves the effects of simplifying the adaptation process and improving adaptation efficiency.

Active Publication Date: 2020-06-30
AISPEECH CO LTD

AI Technical Summary

Problems solved by technology

At present, the acoustic features used to train the acoustic model are speaker-independent filter-bank (fBank) features, so the trained acoustic model is merely a general deep network model that is independent of the speaker. Because such a general model lacks speaker-related personal characteristics during training, speech recognition accuracy is low and the adaptability of speech recognition is poor.
[0003] To solve this problem, adaptive training must be carried out on the already-trained general deep network model. The inventors found, in the course of realizing the present invention, that the prior-art adaptation method uses the speech data of the speaker to be recognized to train the entire deep network model; this training is not targeted and its efficiency is low.



Embodiment Construction

[0027] In order to make the purpose, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

[0028] It should be noted that, provided there is no conflict, the embodiments of the present application and the features of those embodiments may be combined with each other.

[0029] The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program mo...



Abstract

The invention discloses a deep hybrid generative network adaptation method and system. The method comprises: using training audio data acquired from a training data set and the training text data corresponding to that audio data as input and output respectively, and training a deep hybrid generative network so as to obtain global phoneme means from a Gaussian mixture model; determining the speaker phoneme means of a speaker according to the speaker's registered audio data; determining an adaptive transformation matrix that converts the global phoneme means to the speaker phoneme means; and adjusting the Gaussian mixture model on the basis of the adaptive transformation matrix so as to realize adaptation of the deep hybrid generative network. Because a deep neural network is combined with the Gaussian mixture model, only the Gaussian mixture model needs to be adjusted during adaptation and the whole network does not need to be retrained. The adaptation process is thereby simplified, and adaptation efficiency is improved.
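The abstract leaves open how the adaptive transformation matrix is estimated. Below is a minimal sketch of one plausible realization: an affine transform fitted by least squares from the global phoneme means to the speaker phoneme means, then applied to the Gaussian means of the mixture model. The names (mu_global, mu_speaker, adapt_means) and the dimensions are illustrative assumptions, not values or procedures taken from the patent.

```python
import numpy as np

# Illustrative shapes: P phoneme classes, D-dimensional acoustic feature space (assumptions).
rng = np.random.default_rng(0)
P, D = 60, 39
mu_global = rng.normal(size=(P, D))                          # global phoneme means from the trained GMM
mu_speaker = mu_global + rng.normal(scale=0.1, size=(P, D))  # means estimated from the speaker's enrollment audio

# Fit an affine map  mu_speaker ~ [mu_global, 1] @ W  by least squares.
X = np.hstack([mu_global, np.ones((P, 1))])                  # (P, D+1)
W, *_ = np.linalg.lstsq(X, mu_speaker, rcond=None)           # (D+1, D) adaptive transformation

def adapt_means(means):
    """Map Gaussian means of shape (..., D) through the speaker transform."""
    ones = np.ones(means.shape[:-1] + (1,))
    return np.concatenate([means, ones], axis=-1) @ W

# Adaptation then only touches the GMM means; the deep network itself stays fixed.
adapted = adapt_means(mu_global)
print("max residual:", np.abs(adapted - mu_speaker).max())
```

In this reading, the cost of adaptation is a single small least-squares solve per speaker, which matches the claimed effect of avoiding retraining of the whole network.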

Description

Technical Field
[0001] The invention relates to the technical field of speech recognition, and in particular to a deep hybrid generative network adaptation method and system.
Background
[0002] In recent years, with the deepening of deep learning, the performance of speech recognition systems has improved significantly. In the existing speech recognition systems that have proven best, most acoustic models are DNN-based deep network models trained on hundreds or thousands of hours of collected data. At present, the acoustic features used to train the acoustic model are speaker-independent filter-bank (fBank) features, so the trained acoustic model is merely a general deep network model that is independent of the speaker. Such a general model lacks speaker-related personal characteristics during training, resulting in low speech recognition accuracy and poor adaptability. [0003] In o...
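For reference, the speaker-independent fBank features mentioned above are log mel filter-bank energies. A minimal extraction sketch is shown below, assuming 16 kHz audio, a 25 ms window, a 10 ms shift and 40 mel filters; these parameters and the use of librosa are common choices, not specifics from the patent.

```python
import numpy as np
import librosa

def log_fbank(wav_path, sr=16000, n_fft=400, hop=160, n_mels=40):
    """Log mel filter-bank (fBank) features, shape (frames, n_mels)."""
    y, _ = librosa.load(wav_path, sr=sr)
    power = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop)) ** 2  # power spectrogram
    mel_fb = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=n_mels)    # (n_mels, 1 + n_fft // 2)
    return np.log(mel_fb @ power + 1e-10).T                            # log mel energies per frame
```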


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G10L15/065
CPC: G10L15/065
Inventor: 钱彦旻, 丁文, 谭天
Owner: AISPEECH CO LTD