
Voice synthesis model training method, and voice synthesis method and device

A speech synthesis and model training technology applied in the computer field. It addresses the problems of a complicated training process, unsatisfactory synthesized speech, and flat sound quality, and achieves the effects of reducing pronunciation errors, ensuring correctness, and improving accuracy.

Active Publication Date: 2019-09-27
BEIJING XINTANG SICHUANG EDUCATIONAL TECH CO LTD

AI Technical Summary

Problems solved by technology

[0004] It can be seen that the above method requires designing and training multiple models, so the training process is very complicated; moreover, the synthesized speech is not ideal and its sound quality is flat.



Examples


[0114] As shown in the figure, the speech synthesis method provided by the embodiment of the present invention includes:

[0115] Step S30: Obtain the third character vector sequence of the Chinese character sentence to be speech-synthesized.

[0116] For the specific content of step S30, refer to step S10 in Figure 1; it will not be repeated here.

[0117] Step S31: Encode the third character vector sequence using the trained encoding module to obtain the third linguistic encoding feature.

[0118] After the third character vector sequence is obtained, the trained encoding module encodes it to obtain the third linguistic encoding feature.

[0119] For the specific content of step S31, refer to step S11 in Figure 1; it will not be repeated here.

[0120] Step S32: Decode the third linguistic encoding feature using the trained speech feature decoding module to obtain the third speech feature.

[0121] After obtaining the third linguistic coding feature, use the trained speech...
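Steps S30–S32 above can be sketched as a toy inference pipeline. The vocabulary, dimensions, and linear encoder/decoder below are illustrative assumptions for this sketch, not the patent's actual model architecture:

```python
import numpy as np

# Minimal sketch of the synthesis pipeline (Steps S30-S32).
# The embedding table and the two weight matrices are random stand-ins
# for the trained encoding module and speech feature decoding module.
rng = np.random.default_rng(0)

EMBED_DIM, ENC_DIM, SPEECH_DIM = 8, 16, 4
vocab = {"你": 0, "好": 1}  # toy two-character vocabulary
embedding = rng.normal(size=(len(vocab), EMBED_DIM))
W_enc = rng.normal(size=(EMBED_DIM, ENC_DIM))   # stands in for the trained encoding module
W_dec = rng.normal(size=(ENC_DIM, SPEECH_DIM))  # stands in for the trained speech feature decoder

def synthesize_features(sentence: str) -> np.ndarray:
    # Step S30: character vector sequence of the sentence to be synthesized
    char_vectors = embedding[[vocab[c] for c in sentence]]
    # Step S31: encode into the linguistic encoding feature
    linguistic_feature = np.tanh(char_vectors @ W_enc)
    # Step S32: decode into the speech feature (e.g. spectral frames)
    return linguistic_feature @ W_dec

feats = synthesize_features("你好")
print(feats.shape)  # one speech-feature vector per input character
```

In a real system the speech feature would then be passed to a vocoder to produce a waveform; the sketch stops at the feature level, as the excerpt does.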



Abstract

The embodiments of the invention provide a speech synthesis model training method, and a speech synthesis method and device. The training method comprises: obtaining a first character vector sequence of a Chinese character sentence used for encoder training; encoding the first character vector sequence with an encoding module to obtain a first linguistic encoding feature; decoding the first linguistic encoding feature with a linguistic feature decoding module to obtain a linguistic decoding feature; and adjusting the model parameters of the encoding module of the speech synthesis model according to the linguistic feature loss between the linguistic decoding feature and a reference linguistic decoding feature, until the linguistic feature loss meets a linguistic feature loss threshold, thereby obtaining the trained encoding module of the speech synthesis model. The method and device provided by the embodiments can reduce the complexity of speech synthesis while improving the training accuracy of the encoder, thereby guaranteeing the quality of the synthesized speech.
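The training loop the abstract describes can be sketched as follows. The linear encoder, the identity stand-in for the (frozen) linguistic feature decoding module, the mean-squared loss, and the gradient step are all illustrative assumptions; the patent does not specify these details:

```python
import numpy as np

# Hedged sketch: adjust the encoding module's parameters until the loss
# between its linguistically decoded output and the reference linguistic
# decoding features falls below a threshold.
rng = np.random.default_rng(1)
EMBED_DIM, ENC_DIM = 8, 8

char_vecs = rng.normal(size=(5, EMBED_DIM))          # first character vector sequence
reference = rng.normal(size=(5, ENC_DIM))            # reference linguistic decoding features
W_enc = rng.normal(size=(EMBED_DIM, ENC_DIM)) * 0.1  # encoding-module parameters (trainable)
W_ling = np.eye(ENC_DIM)  # trivial stand-in for the frozen linguistic feature decoding module
LOSS_THRESHOLD, LR = 1e-3, 0.05

def linguistic_loss(W: np.ndarray) -> float:
    decoded = (char_vecs @ W) @ W_ling               # encode, then linguistic decode
    return float(np.mean((decoded - reference) ** 2))

steps = 0
while linguistic_loss(W_enc) > LOSS_THRESHOLD:
    decoded = (char_vecs @ W_enc) @ W_ling
    # gradient of the mean-squared linguistic feature loss w.r.t. W_enc
    grad = 2 * char_vecs.T @ (decoded - reference) @ W_ling.T / reference.size
    W_enc -= LR * grad                               # adjust only the encoder's parameters
    steps += 1

print(linguistic_loss(W_enc) <= LOSS_THRESHOLD)  # True once training stops
```

The key design point the abstract emphasizes is that only the encoding module is updated against the linguistic feature loss, which is what lets the encoder be trained without the full multi-model pipeline.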

Description

Technical Field

[0001] The embodiments of the present invention relate to the field of computers, and in particular to a speech synthesis model training method, apparatus, device, and storage medium, and a speech synthesis method, apparatus, device, and storage medium.

Background

[0002] With the development of artificial intelligence technology, people pay more and more attention to speech synthesis technology. Using speech synthesis technology together with speech recognition technology, computers and other devices can generate spoken language that people understand and, in turn, human-machine voice communication can be realized.

[0003] To realize speech synthesis, the traditional parametric speech synthesis method can be used. It is divided into multiple parts, such as linguistic feature prediction, duration prediction, and acoustic feature prediction, and a model must be built and trained for each part.

[0004] It can be ...
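The multi-stage parametric pipeline criticized in [0003] can be sketched as a chain of three separately built models. The function names and toy outputs below are hypothetical stand-ins, not any real system's API:

```python
# Sketch of the traditional parametric speech synthesis pipeline:
# three models, each built and trained separately, chained at inference time.

def predict_linguistic_features(text: str) -> list[str]:
    # model 1: text -> linguistic features (toy: one unit per character)
    return list(text)

def predict_durations(linguistic: list[str]) -> list[int]:
    # model 2: per-unit duration prediction (toy: a fixed 3 frames each)
    return [3 for _ in linguistic]

def predict_acoustic_features(linguistic: list[str], durations: list[int]) -> list[str]:
    # model 3: expand each unit into its predicted number of acoustic frames
    return [unit for unit, d in zip(linguistic, durations) for _ in range(d)]

ling = predict_linguistic_features("你好")
durs = predict_durations(ling)
frames = predict_acoustic_features(ling, durs)
print(len(frames))  # 2 characters x 3 frames each
```

Because each stage is a separate model with its own training data and loss, errors compound across stages; this is the complexity the invention's single encoder-centric training aims to avoid.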

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G10L13/02; G10L25/03; G10L25/30
CPC: G10L13/02; G10L25/03; G10L25/30
Inventors: 智鹏鹏, 杨嵩, 杨非, 刘子韬
Owner: BEIJING XINTANG SICHUANG EDUCATIONAL TECH CO LTD