
Speech synthesis method based on generative adversarial network

A speech synthesis method based on a generative adversarial network, applied to speech synthesis, biological neural network models, speech analysis, and related fields. It addresses the problems that existing vocoders are slow, show no significant improvement in synthesis speed, and are difficult to apply in practical products, and it achieves a small model parameter count, fast synthesis, and guaranteed clarity.

Active Publication Date: 2021-07-02
Applicant: 成都启英泰伦科技有限公司

AI Technical Summary

Problems solved by technology

WaveNet is an autoregressive convolutional neural network. As one of the first deep learning algorithms applied to speech synthesis, it greatly improved the quality of synthesized speech; however, its autoregressive model structure makes inference very slow and difficult to apply in actual products.
In recent years, research on speech synthesis vocoders has focused mainly on increasing computation speed and reducing model parameters, yet synthesis speed has still not improved significantly.



Examples


Embodiment Construction

[0048] Specific embodiments of the present invention will be further described in detail below.

[0049] The speech synthesis method based on a generative adversarial network of the present invention comprises the following steps:

[0050] S1. Prepare training samples, including real audio data, and extract the Mel spectrum features of the real audio data;
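The excerpt does not specify how the Mel spectrum features of step S1 are extracted. The following is a minimal sketch using librosa; the sampling rate, FFT size, hop length, and number of mel bands are assumptions chosen as typical vocoder settings, not values taken from the patent.

```python
# Minimal sketch of step S1: extracting Mel spectrum features from real audio.
# The sampling rate, FFT size, hop length and number of mel bands below are
# illustrative assumptions; the patent does not disclose the exact values here.
import librosa
import numpy as np

def extract_mel(wav_path, sr=22050, n_fft=1024, hop_length=256, n_mels=80):
    # Load the real audio at the chosen sampling rate.
    audio, _ = librosa.load(wav_path, sr=sr)
    # Compute a mel-scaled power spectrogram.
    mel = librosa.feature.melspectrogram(
        y=audio, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels
    )
    # Log compression is commonly applied before feeding a vocoder.
    log_mel = np.log(np.clip(mel, a_min=1e-5, a_max=None))
    return audio, log_mel  # waveform and its (n_mels, frames) feature matrix
```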

[0051] S2. According to the Mel spectral feature extraction method and the sampling rate, set the initialized generator parameter group, including the one-dimensional deconvolution parameters and the one-dimensional convolution parameters; set the initialized discriminator parameter group, including the parameters of the multi-dimensional discriminator and the pooling discriminator;
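The patent states that the generator is built from one-dimensional deconvolution and one-dimensional convolution layers whose parameters depend on the Mel extraction settings and sampling rate, but the excerpt gives no layer counts or kernel sizes. The PyTorch sketch below is therefore only an illustrative interpretation; the channel widths and the upsampling factors (8 × 8 × 4 = 256 samples per mel frame, matching the hop length assumed above) are assumptions.

```python
# Illustrative sketch of step S2's generator: transposed 1-D convolutions
# upsample the mel frames toward the audio sampling rate, interleaved with
# ordinary 1-D convolutions. Channel sizes and upsample factors are assumptions,
# not the values claimed in the patent.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, n_mels=80, base_channels=256, upsample_factors=(8, 8, 4)):
        super().__init__()
        layers = [nn.Conv1d(n_mels, base_channels, kernel_size=7, padding=3)]
        ch = base_channels
        for f in upsample_factors:
            layers += [
                nn.LeakyReLU(0.2),
                # One-dimensional deconvolution: upsamples the time axis by f.
                nn.ConvTranspose1d(ch, ch // 2, kernel_size=2 * f, stride=f, padding=f // 2),
                # One-dimensional convolution refining the upsampled signal.
                nn.Conv1d(ch // 2, ch // 2, kernel_size=7, padding=3),
            ]
            ch //= 2
        layers += [nn.LeakyReLU(0.2), nn.Conv1d(ch, 1, kernel_size=7, padding=3), nn.Tanh()]
        self.net = nn.Sequential(*layers)

    def forward(self, mel):           # mel: (batch, n_mels, frames)
        return self.net(mel)          # audio: (batch, 1, frames * 256)
```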

[0052] S3. Input the Mel spectrum features into the generator, which outputs the corresponding synthetic audio;
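A brief usage sketch of step S3 under the same assumptions, showing the tensor shapes involved when the illustrative generator above converts mel frames into a waveform.

```python
# Usage sketch for step S3, using the illustrative Generator above. Shapes are
# assumptions: a batch of 80-band mel features with 100 frames yields
# 100 * 256 = 25600 audio samples under the assumed upsampling factors.
import torch

generator = Generator(n_mels=80)
mel = torch.randn(1, 80, 100)          # (batch, n_mels, frames), dummy features
with torch.no_grad():
    synthetic_audio = generator(mel)   # (1, 1, 25600) synthetic waveform
```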

[0053] S4. Correspondingly input the real audio data from S1 and the synthetic audio output obtained in S3...
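The excerpt names a multi-dimensional discriminator and a pooling discriminator but does not describe their internals. The sketch below is one hedged interpretation: the multi-dimensional discriminator reshapes the 1-D waveform into a 2-D representation before applying 2-D convolutions, and the pooling discriminator judges an average-pooled copy of the waveform. Both interpretations and all layer sizes are assumptions for illustration only.

```python
# Hedged sketch of the two discriminators referenced in S2/S4; architectures are
# assumed, not taken from the patent text.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiDimensionalDiscriminator(nn.Module):
    def __init__(self, period=4):
        super().__init__()
        self.period = period
        self.convs = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=(5, 1), stride=(3, 1), padding=(2, 0)),
            nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, kernel_size=(5, 1), stride=(3, 1), padding=(2, 0)),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, kernel_size=(3, 1), padding=(1, 0)),
        )

    def forward(self, audio):                      # audio: (batch, 1, samples)
        b, c, t = audio.shape
        if t % self.period:                        # pad so the length divides evenly
            audio = F.pad(audio, (0, self.period - t % self.period))
            t = audio.shape[-1]
        audio_2d = audio.view(b, c, t // self.period, self.period)
        return self.convs(audio_2d)                # map of real/fake scores

class PoolingDiscriminator(nn.Module):
    def __init__(self, pool_factor=4):
        super().__init__()
        self.pool = nn.AvgPool1d(pool_factor)      # downsample before judging
        self.convs = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=15, stride=4, padding=7),
            nn.LeakyReLU(0.2),
            nn.Conv1d(64, 128, kernel_size=15, stride=4, padding=7),
            nn.LeakyReLU(0.2),
            nn.Conv1d(128, 1, kernel_size=3, padding=1),
        )

    def forward(self, audio):
        return self.convs(self.pool(audio))
```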



Abstract

The invention discloses a speech synthesis method based on a generative adversarial network. The method comprises the following steps: S1, preparing training samples, including real audio data, and extracting their Mel spectrum features; S2, setting an initialized generator parameter set and an initialized discriminator parameter set; S3, inputting the Mel spectrum features into the generator to obtain synthetic audio as output; S4, simultaneously inputting the real audio data from S1 and the synthetic audio obtained in S3 into a multi-dimensional discriminator and a pooling discriminator; S5, inputting the discriminator outputs into the loss function formulas and respectively calculating the generator loss function and the discriminator loss function; S6, updating the generator and the discriminators; S7, repeating the process with the updated generator and discriminators until the set maximum number of updates M is reached; S8, returning to step S3 after each update; and S9, performing speech synthesis with the trained generator. Because the generator uses one-dimensional convolution operations, the model has few parameters and synthesis is fast.
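The abstract describes alternating updates of the generator and the two discriminators for up to M rounds, driven by loss functions computed from the discriminator outputs, but the excerpt does not reproduce the loss formulas. The training-loop sketch below therefore assumes least-squares GAN losses and is only a schematic of steps S3 to S9, not the patent's exact procedure.

```python
# Schematic training loop: synthesize audio from mel features, score real and
# synthetic audio with both discriminators, compute generator/discriminator
# losses, and update both networks for up to M rounds. Least-squares GAN losses
# are an assumption; the patent's loss formulas are not given in this excerpt.
import torch

def train(generator, discriminators, dataloader, max_updates_M, device="cpu"):
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_params = [p for d in discriminators for p in d.parameters()]
    d_opt = torch.optim.Adam(d_params, lr=2e-4)

    step = 0
    while step < max_updates_M:
        for mel, real_audio in dataloader:         # S1 samples: (mel, waveform) pairs
            mel, real_audio = mel.to(device), real_audio.to(device)
            fake_audio = generator(mel)            # S3: synthetic audio

            # S4-S5: discriminator loss (least-squares form, assumed).
            d_loss = 0.0
            for d in discriminators:
                d_loss = d_loss + ((d(real_audio) - 1) ** 2).mean() \
                                + (d(fake_audio.detach()) ** 2).mean()
            d_opt.zero_grad(); d_loss.backward(); d_opt.step()

            # S5-S6: generator loss pushes discriminator scores of fakes toward 1.
            g_loss = sum(((d(fake_audio) - 1) ** 2).mean() for d in discriminators)
            g_opt.zero_grad(); g_loss.backward(); g_opt.step()

            step += 1                              # S7-S8: repeat until M updates
            if step >= max_updates_M:
                break
    return generator                               # S9: synthesize with the trained generator
```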

Description

Technical field
[0001] The invention belongs to the technical field of artificial intelligence speech synthesis, and in particular relates to a speech synthesis method based on a generative adversarial network.
Background technique
[0002] Voice, as the most direct and fast means of communication, plays a very important role in the field of artificial intelligence and has been widely used in robots, cars, synthetic anchors and other fields. With the wide application of artificial intelligence products, the requirements for the naturalness, clarity, and intelligibility of synthesized speech are getting higher and higher. Deep learning has enabled the rapid development of speech synthesis technology.
[0003] Commonly used deep-learning-based speech synthesis schemes are mainly divided into two stages: predicting acoustic features, such as mel-spectrograms, from text information; and predicting the original audio waveform from the acoustic features, that is, the vocoder model...
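For context, a minimal sketch of the two-stage pipeline described in [0003]. Here acoustic_model is a hypothetical placeholder for the text-to-mel stage, and vocoder stands for a trained waveform generator such as the one sketched above; neither name comes from the patent.

```python
# Two-stage synthesis pipeline from the background section: text -> mel -> waveform.
def synthesize(text, acoustic_model, vocoder):
    mel = acoustic_model(text)        # stage 1: predict acoustic features (mel-spectrogram)
    waveform = vocoder(mel)           # stage 2: predict the audio waveform from the features
    return waveform
```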


Application Information

Patent Type & Authority: Application (China)
IPC (8): G10L13/02; G10L25/24; G10L25/30; G06N3/08; G06N3/04
CPC: G10L13/02; G10L25/24; G10L25/30; G06N3/08; G06N3/045
Inventors: 曹艳艳, 陈佩云
Owner: 成都启英泰伦科技有限公司