
A Speech Synthesis Method Based on Generative Adversarial Networks

A speech synthesis method using generative adversarial networks, applicable to speech synthesis, biological neural network models, speech analysis, etc. It addresses the problems of slow synthesis speed, the limited recent improvement in vocoder speed, and the difficulty of deploying such models in real products, and achieves fast synthesis with a small number of model parameters while preserving clarity.

Active Publication Date: 2021-08-06
成都启英泰伦科技有限公司
Cites: 10 | Cited by: 0

AI Technical Summary

Problems solved by technology

WaveNet is an autoregressive convolutional neural network. As one of the first deep learning models applied to speech synthesis, it greatly improved synthesis quality; however, its model structure makes inference very slow and difficult to apply in actual products.
In recent years, research on speech synthesis vocoders has focused mainly on increasing computation speed and reducing model parameters, yet synthesis speed has seen no significant improvement.

Method used


Image

  • A Speech Synthesis Method Based on Generative Adversarial Networks

Examples


Embodiment Construction

[0047] DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

[0048] The present invention provides a speech synthesis method based on a generative adversarial network, comprising the following steps:

[0049] S1. Prepare training samples, including real audio data, and extract the Mel spectrum features of the real audio data;

[0050] S2. According to the Mel spectrum extraction method and the sampling rate, set the initialized generator parameter group, including the one-dimensional transposed-convolution parameters and the one-dimensional convolution parameters; set the initialized discriminator parameter group, including the parameters of the multidimensional discriminator and the pooling discriminator;

[0051] S3. Input the Mel spectrum features to the generator, which produces the corresponding output synthetic audio;

[0052] S4. Input the real audio data from S1 and the corresponding output synthetic audio into the multidimensional discriminator and the pooling discriminator at the same time; w...
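The excerpt above cuts off before the loss functions are given. As a hedged illustration only: GAN vocoders of this general family (e.g. MelGAN, HiFi-GAN) commonly score real audio toward 1 and synthetic audio toward 0 with a least-squares adversarial loss, which would fit steps S4 and the subsequent loss computation. The following NumPy sketch assumes that formulation; it is not taken from the patent text.

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # Least-squares GAN loss: push scores on real audio toward 1
    # and scores on generator output toward 0.
    return np.mean((d_real - 1.0) ** 2) + np.mean(d_fake ** 2)

def generator_loss(d_fake):
    # The generator is updated to make the discriminator score
    # its synthetic audio as if it were real (toward 1).
    return np.mean((d_fake - 1.0) ** 2)

# Toy scores, standing in for the outputs of the multidimensional
# and pooling discriminators on a batch of audio segments.
d_real = np.array([0.9, 1.1, 0.95])
d_fake = np.array([0.1, -0.2, 0.05])

loss_d = discriminator_loss(d_real, d_fake)
loss_g = generator_loss(d_fake)
```

With these toy scores the discriminator loss is small (it already separates real from fake well) while the generator loss is large, which is the signal that drives the generator update in S5/S6.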


PUM

No PUM

Abstract

A speech synthesis method based on a generative adversarial network, comprising the following steps: S1. Prepare training samples, including real audio data, and extract their Mel spectral features; S2. Set the initialized generator parameter group and discriminator parameter group; S3. Input the Mel spectrum features to the generator to obtain the output synthetic audio; S4. Input the real audio data from S1 and the corresponding output synthetic audio from S3 into the multidimensional discriminator and the pooling discriminator; S5. Input the discriminator outputs into the loss-function formulas to calculate the generator loss and the discriminator loss respectively; S6. Update the generator and the discriminator; S7. After each update, return to step S3 and repeat with the updated generator and discriminator until the set maximum number of updates M is reached; S8. Use the trained generator for speech synthesis. The generator of the present invention adopts one-dimensional convolution operations, so the model has few parameters and runs fast.
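The abstract says the generator is built from one-dimensional (transposed) convolutions that turn Mel frames into waveform samples. The patent excerpt does not give the kernel/stride schedule, so the 8×8×2×2 upsampling stages below are an assumption borrowed from typical GAN vocoder configurations; the length formula itself is the standard one for a 1-D transposed convolution.

```python
def conv_transpose1d_out_len(l_in, kernel, stride, pad=0):
    # Standard output-length formula for a 1-D transposed convolution.
    return (l_in - 1) * stride - 2 * pad + kernel

# Hypothetical upsampling schedule: 80 Mel frames -> waveform samples.
# Stages of stride 8, 8, 2, 2 give an overall 256x upsampling, i.e. a
# hop length of 256 samples per Mel frame (an assumed configuration).
l = 80
for kernel, stride in [(16, 8), (16, 8), (4, 2), (4, 2)]:
    # pad = (kernel - stride) // 2 makes each stage upsample exactly
    # by its stride: l -> l * stride.
    l = conv_transpose1d_out_len(l, kernel, stride, pad=(kernel - stride) // 2)
```

Because each stage is a plain 1-D convolution, the parameter count stays small relative to an autoregressive model like WaveNet, which is the speed/size advantage the abstract claims.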

Description

Technical field [0001] The present invention belongs to the field of artificial-intelligence speech synthesis, and in particular relates to a speech synthesis method based on a generative adversarial network. Background technique [0002] Speech is the most direct and fastest means of communication and plays a very important role in artificial intelligence; it has been widely applied in robots, automobiles, synthetic news anchors, and other fields. With the broad adoption of artificial-intelligence products, the requirements on the naturalness, clarity, and intelligibility of synthesized speech keep rising, and deep learning has made speech synthesis technology advance rapidly. [0003] Commonly used speech synthesis schemes are mainly divided into two stages: first, predict acoustic features, such as Mel spectrograms, from the text; second, predict the original audio waveform from the acoustic features, i.e. vocoder model learning. The first stage is...
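The second stage above consumes the Mel spectrogram produced by the first, matching step S1's feature extraction. As a minimal sketch of what "extracting Mel spectral features" involves (the patent does not specify FFT size, hop length, or mel count, so the values below are illustrative assumptions):

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mel_spectrogram(wav, sr=16000, n_fft=512, hop=128, n_mels=80):
    # Frame, window, FFT -> power spectrum -> mel projection -> log.
    n_frames = 1 + (len(wav) - n_fft) // hop
    win = np.hanning(n_fft)
    frames = np.stack([wav[i * hop : i * hop + n_fft] * win
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    mel = power @ mel_filterbank(n_mels, n_fft, sr).T
    return np.log(mel + 1e-6)

# One second of noise standing in for real audio data (step S1).
wav = np.random.default_rng(0).standard_normal(16000)
M = mel_spectrogram(wav)  # shape: (frames, n_mels)
```

In a real pipeline this feature matrix is what the first-stage acoustic model predicts from text, and what the second-stage vocoder (the generator here) converts back into a waveform.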

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G10L13/02; G10L25/24; G10L25/30; G06N3/08; G06N3/04
CPC: G10L13/02; G10L25/24; G10L25/30; G06N3/08; G06N3/045
Inventors: 曹艳艳, 陈佩云
Owner: 成都启英泰伦科技有限公司