
Information acquisition method for English pronunciation

A technology for speech information collection, applied in speech analysis, speech recognition, and instruments. It addresses the problems of incoherent speech, inaccurate recognition, and output distortion, and achieves the effect of smoother speech.

Inactive Publication Date: 2021-06-15
ZHENGZHOU RAILWAY VOCATIONAL & TECH COLLEGE

AI Technical Summary

Problems solved by technology

[0006] The object of the present invention is to provide an information collection method for English speech, so as to solve the problems mentioned in the background art: the essential differences between a speaker's signal and the standard speech signal lead to inaccurate recognition, incoherent speech, output distortion, and the like.



Examples


Embodiment 1

[0039] As shown in Figure 4, an information collection method for English speech is provided; the specific steps of the information collection method are as follows:

[0040] S1, collecting audio signals and amplifying them;

[0041] S2, performing analog filtering on the amplified audio signal;

[0042] S3, converting the analog-filtered signal into a digital signal and extracting audio characteristic parameters of the digital audio signal, such as attack time, spectral centroid, spectral flux, pitch frequency, and sharpness;

[0043] S4, matching the above audio feature parameters against the sound source models in a standard sound source database, then matching the digital audio signal against the syllables and phonemes in the matched sound source model to obtain a matching degree, and performing phoneme correction according to the matching-degree gap;

[0044] S5. Combining the corrected phonemes into the digital audio signal;

[0045] S6. Perform fuzzy filtering on...
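The feature names in step S3 correspond to standard short-time spectral descriptors. The patent does not disclose its extraction formulas, so the sketch below is only an illustration of two of them, spectral centroid and spectral flux, using their textbook definitions; the function names and frame/sample-rate parameters are assumptions introduced here:

```python
import numpy as np

def spectral_features(frame: np.ndarray, sample_rate: int) -> dict:
    """Illustrative extraction of one feature named in step S3.

    `frame` is one windowed frame of the digitized audio signal.
    """
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    total = spectrum.sum()
    # Spectral centroid: magnitude-weighted mean frequency of the frame.
    centroid = float((freqs * spectrum).sum() / total) if total > 0 else 0.0
    return {"centroid_hz": centroid, "energy": float(total)}

def spectral_flux(prev_frame: np.ndarray, frame: np.ndarray) -> float:
    """Spectral flux: L2 distance between consecutive magnitude spectra."""
    a = np.abs(np.fft.rfft(prev_frame))
    b = np.abs(np.fft.rfft(frame))
    return float(np.sqrt(((b - a) ** 2).sum()))
```

For a pure 1 kHz tone, the centroid sits at roughly 1 kHz, and the flux between a silent frame and a voiced frame is positive, which is the behavior a matching stage like S4 would rely on.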

Embodiment 2

[0051] As shown in Figure 1, a specific embodiment provided by the present invention is an English pronunciation information collection system, including an audio collection device 1, a pre-filter module 2, an audio matching module 3, an audio synthesis module 4, and a post-filter output module 5;

[0052] The audio collection device 1 is used to collect audio signals and amplify them,

[0053] The pre-filter module 2 is used to perform analog filtering on the amplified audio signal,

[0054] The audio matching module 3 converts the analog-filtered signal into a digital signal, extracts audio features of the digital audio signal such as attack time, spectral centroid, spectral flux, pitch frequency, and sharpness, matches these features against the sound source models in the standard sound source database, and then matches the digital audio signal against the syllables and phonemes in the sound source model to obtain a matching degree, and the phoneme i...
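The five-module chain of Embodiment 2 can be pictured as a simple sequential pipeline. The sketch below is purely illustrative: the class and function names are invented here, and all stage bodies except the amplifier are placeholders, since the patent text does not disclose the modules' internal algorithms:

```python
from dataclasses import dataclass
from typing import Callable, List

Signal = List[float]

@dataclass
class Pipeline:
    """Runs a signal through the module chain of Figure 1 in order."""
    stages: List[Callable[[Signal], Signal]]

    def run(self, signal: Signal) -> Signal:
        for stage in self.stages:
            signal = stage(signal)
        return signal

def amplify(s: Signal) -> Signal:            # audio collection device (1)
    return [2.0 * x for x in s]              # fixed gain, for illustration

def pre_filter(s: Signal) -> Signal:         # pre-filter module (2)
    return s                                 # placeholder: analog filtering

def match_and_correct(s: Signal) -> Signal:  # audio matching module (3)
    return s                                 # placeholder: A/D + phoneme matching

def synthesize(s: Signal) -> Signal:         # audio synthesis module (4)
    return s                                 # placeholder: recombine phonemes

def post_filter(s: Signal) -> Signal:        # post-filter output module (5)
    return s                                 # placeholder: fuzzy filtering

pipeline = Pipeline([amplify, pre_filter, match_and_correct,
                     synthesize, post_filter])
```

The design point the embodiment makes is the fixed ordering: correction (module 3) happens in the digital domain, bracketed by an analog pre-filter and a smoothing post-filter.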



Abstract

The invention discloses an information acquisition method for English pronunciation, comprising the following steps: S1, acquiring an audio signal and amplifying it; S2, performing analog filtering on the amplified audio signal; S3, converting the analog-filtered signal into a digital signal and extracting audio characteristic parameters of the digital audio signal, such as attack time, spectral centroid, spectral flux, fundamental tone (pitch) frequency, and sharpness; S4, matching the audio characteristic parameters against a sound source model in a standard sound source database, then matching the digital audio signal against the syllables and phonemes in the sound source model to obtain matching degrees, and carrying out phoneme correction according to the differences in matching degree; S5, combining the corrected phonemes into the digital audio signal; and S6, performing fuzzy filtering on the synthesized digital audio signal and outputting the audio signal.
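The "fundamental tone frequency" named in the abstract is, in common practice, estimated by autocorrelation. The sketch below shows that textbook method, not the patent's own estimator (which is not disclosed); the function name and the frequency search bounds are assumptions made here:

```python
import numpy as np

def pitch_autocorrelation(frame: np.ndarray, sample_rate: int,
                          fmin: float = 60.0, fmax: float = 400.0) -> float:
    """Estimate fundamental frequency of a voiced frame by autocorrelation.

    Searches for the strongest autocorrelation peak between the lags
    corresponding to `fmax` (short period) and `fmin` (long period).
    """
    frame = frame - frame.mean()
    # Full autocorrelation; keep the non-negative-lag half.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
    return sample_rate / lag
```

For a 200 Hz sine sampled at 16 kHz, the strongest peak in the search range falls at a lag of 80 samples, giving 200 Hz, which is the kind of per-frame value step S4 would compare against the standard sound source model.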

Description

Technical field

[0001] The invention relates to the technical field of audio information collection and processing, and in particular to an information collection method for English speech.

Background technique

[0002] With the popularization of distance education, the "online class" has played a very important role as a substitute for and supplement to on-site courses. In English teaching especially, teachers usually hope to deliver perfect pronunciation in classroom or training teaching; correcting pronunciation in real time through speech intelligence can therefore solve this pain point for teachers.

[0003] In the prior art, speech evaluation or correction is generally achieved by comparing the teaching speech with standard speech and giving a score or beautifying the sound. For example, CN202010891349.4 discloses a method for generating adaptive English speech: collecting target speech signals; performing signal analysis and processing on the collected targ...

Claims


Application Information

IPC(8): G10L15/183; G10L15/26; G10L15/02
CPC: G10L15/02; G10L15/183; G10L2015/025
Inventors: 张敏, 李琦, 丁桂芝, 牛明敏, 王晓靖, 李静
Owner ZHENGZHOU RAILWAY VOCATIONAL & TECH COLLEGE