
Spoken language pronunciation evaluation method based on deep neural network posterior probability algorithm

A deep neural network and posterior probability technology applied in the field of pronunciation evaluation. It addresses the low phoneme recognition rate of conventional acoustic models, the correspondingly low accuracy of the resulting likelihoods, and the inaccurate scoring that follows, and achieves the effect of improving the phoneme recognition rate.

Inactive Publication Date: 2018-08-03
苏州声通信息科技有限公司 (Suzhou Shengtong Information Technology Co., Ltd.)

AI Technical Summary

Problems solved by technology

However, under normal circumstances the acoustic model has a relatively low recognition rate for phonemes, so the accuracy of the likelihood obtained by FP decoding is also relatively low, leading to inaccurate scoring results.




Embodiment Construction

[0019] The present invention will be described in further detail below in conjunction with the accompanying drawings.

[0020] When using the spoken language pronunciation evaluation method based on the deep neural network posterior probability algorithm of the present invention, first select a certain amount of audio from the one or more relevant voices to be evaluated. The number of audio clips is preferably no more than 10,000, and the number of words in each clip is limited to a certain range, preferably 1-20, where each word contains multiple phonemes.

[0021] Suppose word W contains k phonemes, the set {P 1 , P 2 , … P k }, where the likelihood of each phoneme is loglik(P i ). The characteristic formula used by the traditional GOP (Goodness Of Pronunciation) method to measure pronunciation is loglik(numerator) - loglik(denominator), that is, the difference between the average likelihood obtained in the FA process and the average likelihood obtained in the FP decoding process...
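The traditional GOP formula described above can be sketched in a few lines. This is an illustrative sketch, not the patent's implementation: the function names and the assumption that per-frame log-likelihoods are available for both the FA and FP passes are mine.

```python
def gop_score(fa_frame_logliks, fp_frame_logliks):
    """Traditional GOP for one phoneme segment (a sketch).

    GOP = mean FA log-likelihood - mean FP log-likelihood,
    computed over the frames that forced alignment assigned
    to the phoneme. Both inputs are lists of per-frame
    log-likelihoods for the same frame span.
    """
    if not fa_frame_logliks or not fp_frame_logliks:
        raise ValueError("need at least one frame of likelihoods")
    fa_avg = sum(fa_frame_logliks) / len(fa_frame_logliks)
    fp_avg = sum(fp_frame_logliks) / len(fp_frame_logliks)
    return fa_avg - fp_avg


def word_gop(phoneme_gops):
    """Average the per-phoneme GOP scores over a word's k phonemes."""
    return sum(phoneme_gops) / len(phoneme_gops)
```

A well-pronounced phoneme has FA and FP likelihoods that nearly agree, so its GOP is close to zero; a large negative gap suggests the unconstrained FP pass preferred a different phoneme.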


Abstract

The present invention discloses a spoken language pronunciation evaluation method based on a deep neural network posterior probability algorithm. The method comprises the following steps: selecting a certain amount of audio clips from the speech, where the number of words in each clip falls within a certain range; calculating, for each clip, the average likelihood of the phonemes of a word, the average EGOP of the phonemes of a word, and the average duration probability of the phonemes of a word; and feeding these three values as input items into a neural network, which outputs scores for the words. The method starts from the acoustic model: LSTM modeling is employed to improve the phoneme recognition rate, the FA likelihood is compared against the likelihoods of all similar phonemes, the GOP method is extended to an EGOP method, and an artificial neural network scoring model performs the scoring, so as to obtain an accurate speech evaluation result.
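The abstract's final scoring step, three per-word features fed into a neural network that outputs a word score, can be sketched as a tiny feed-forward network. The patent does not disclose the architecture; the layer sizes, activations, and random weights below are assumptions purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 3-feature scoring network. Sizes and weights are
# assumptions; the patent does not specify the architecture.
W1 = rng.normal(size=(3, 8)) * 0.1   # hidden-layer weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)) * 0.1   # output-layer weights
b2 = np.zeros(1)

def score_word(avg_loglik, avg_egop, avg_duration_prob):
    """Map the three per-word features named in the abstract
    to a pronunciation score in (0, 1) via a small MLP sketch."""
    x = np.array([avg_loglik, avg_egop, avg_duration_prob])
    h = np.tanh(x @ W1 + b1)          # hidden activation
    z = float((h @ W2 + b2)[0])       # scalar logit
    return 1.0 / (1.0 + np.exp(-z))   # sigmoid score
```

In practice such a network would be trained on human-rated pronunciations; here the weights are random, so only the input/output shape of the scoring step is illustrated.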

Description

Technical Field

[0001] The invention relates to the field of pronunciation evaluation, and in particular to a spoken language pronunciation evaluation method based on a deep neural network posterior probability algorithm.

Background Technique

[0002] Commonly used speech evaluation technologies, such as the speech evaluation used in oral English teaching, generally employ intelligent scoring to evaluate learners' spoken English, and current intelligent scoring is mainly based on the GOP (Goodness Of Pronunciation) method. The GOP method relies on two processes: Forced Alignment (FA for short) and Free Phoneme (FP for short) decoding. FA uses the acoustic model and the reference text (that is, the text the learner reads along) to find the time boundary of each word and obtain the likelihood of each word; FP decoding uses the same audio, but the unit of decoding is the phoneme level, and each phoneme can be compared with any oth...
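The contrast in [0002] between FA (constrained by the reference text) and FP decoding (any phoneme allowed at any position) can be illustrated with a toy sketch. Real FP decoding uses a phoneme-loop recognizer with transition constraints; the per-frame argmax below is only a simplified upper bound, and all names and shapes are assumptions.

```python
import numpy as np

def fp_decode(frame_logliks, phoneme_labels):
    """Toy free-phoneme decoding: with no reference text, each
    frame may take any phoneme, so a simple upper bound on the
    FP likelihood picks the best-scoring phoneme per frame.

    frame_logliks: (T, P) array of per-frame, per-phoneme
    log-likelihoods; phoneme_labels: list of P phoneme names.
    """
    best = frame_logliks.argmax(axis=1)            # best phoneme per frame
    avg_loglik = frame_logliks.max(axis=1).mean()  # average FP likelihood
    return [phoneme_labels[i] for i in best], avg_loglik
```

Because FP decoding is free to choose the best phoneme everywhere, its average likelihood is always at least the FA likelihood, which is what makes the FA-minus-FP difference a usable pronunciation measure.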


Application Information

IPC(8): G10L15/00; G10L15/16; G10L15/06; G10L25/51
CPC: G10L15/005; G10L15/06; G10L15/063; G10L15/16; G10L25/51; G10L2015/0631
Inventor: 徐祥荣 (Xu Xiangrong)
Owner: 苏州声通信息科技有限公司 (Suzhou Shengtong Information Technology Co., Ltd.)