
Voice recognition and voice synthesis model training method and device and computer equipment

A speech recognition and speech synthesis technology applied in the computer field. It addresses the problem that separate one-way mapping systems are very expensive to build and train, which hinders the wide adoption of speech recognition and speech synthesis systems, and it aims to save construction and training costs and improve training effectiveness.

Active Publication Date: 2020-08-25
深圳市友杰智新科技有限公司

AI Technical Summary

Problems solved by technology

[0003] The main purpose of this application is to provide a model training method for speech recognition and speech synthesis, aiming to solve the technical problem that the construction and training costs of existing one-way mapping systems are very high, which is not conducive to the wide adoption of "speech recognition" and "speech synthesis" systems.




Embodiment Construction

[0055] In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit it.

[0056] Referring to FIG. 1, a model training method for speech recognition and speech synthesis according to an embodiment of the present application is provided. The model includes an audio processing network, an audio recovery network, a text processing network, and a text recovery network, and the method includes:

[0057] S1: Obtain the first high-dimensional vector output after the audio processing network processes the voice data of the first data pair in the training set, and obtain the second high-dimensional vector output after the text processing network processes the text...
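Step S1 above can be pictured as two modality-specific encoders mapping one (speech, text) data pair into a shared latent space, with a loss that pulls the paired vectors together. The sketch below is a minimal illustration of that idea; the linear projections, dimensions, and the MSE loss are all assumptions for demonstration, since the patent does not specify concrete architectures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the audio/text processing networks: a mean-pool
# over the sequence followed by a linear projection into a shared
# 256-dimensional latent space. Dimensions are illustrative only.
W_audio = rng.normal(scale=0.1, size=(80, 256))   # speech features -> latent
W_text  = rng.normal(scale=0.1, size=(50, 256))   # text features  -> latent

def audio_processing_network(speech_feats):
    """Map a (frames, 80) speech feature sequence to one latent vector."""
    return speech_feats.mean(axis=0) @ W_audio    # first high-dim vector

def text_processing_network(text_feats):
    """Map a (tokens, 50) text embedding sequence to one latent vector."""
    return text_feats.mean(axis=0) @ W_text       # second high-dim vector

# One data pair from the training set: an utterance and its transcript.
speech = rng.normal(size=(120, 80))   # 120 frames of 80-dim features
text   = rng.normal(size=(20, 50))    # 20 token embeddings of 50 dims

v1 = audio_processing_network(speech)
v2 = text_processing_network(text)

# A plausible pairing loss (assumed): mean squared distance between the
# two vectors, so paired speech and text converge to nearby latents.
loss = np.mean((v1 - v2) ** 2)
print(v1.shape, v2.shape)
```

Training both encoders against such a loss over the whole training set is what would drive the two modalities into a common representation, which the later steps then exploit.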


PUM

No PUM

Abstract

The invention relates to a voice recognition and voice synthesis model training method, which comprises the steps of obtaining a first high-dimensional vector output after an audio processing network processes voice data of a first data pair in a training set, and obtaining a second high-dimensional vector output after a text processing network processes text data of the first data pair; training the audio processing network and the text processing network on the training set through a loss function until training convergence; after training convergence, fixing a first parameter set corresponding to the audio processing network and a second parameter set corresponding to the text processing network; under the first parameter set and the second parameter set, training a text recovery network and an audio recovery network to convergence; sequentially combining and connecting the audio processing network and the text recovery network to obtain an acoustic pre-training model for speech recognition, and sequentially combining and connecting the text processing network and the audio recovery network to obtain an acoustic pre-training model for speech synthesis. The model construction and training cost is saved.
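The two-phase schedule in the abstract — train the two processing networks, freeze their parameter sets, train the two recovery networks, then cross-compose the halves — can be sketched as follows. This is a toy illustration under stated assumptions: the networks are stand-in linear maps, and freezing is mimicked with numpy's write flag; none of these names or shapes come from the patent itself.

```python
import numpy as np

rng = np.random.default_rng(1)
D_LAT, D_SPEECH, D_TEXT = 256, 80, 50   # assumed dimensions

# Phase 1 result: the first and second parameter sets (here, plain
# matrices standing in for the trained processing networks).
P1 = rng.normal(scale=0.1, size=(D_SPEECH, D_LAT))  # audio processing net
P2 = rng.normal(scale=0.1, size=(D_TEXT, D_LAT))    # text processing net

# "Fixing" the first and second parameter sets after convergence:
P1.setflags(write=False)
P2.setflags(write=False)

# Phase 2: with P1/P2 frozen, the recovery (decoder) networks are
# trained to map latents back to each modality.
R_text  = rng.normal(scale=0.1, size=(D_LAT, D_TEXT))    # text recovery net
R_audio = rng.normal(scale=0.1, size=(D_LAT, D_SPEECH))  # audio recovery net

# Cross-compose the halves into the two pre-training models.
def asr_pretrain_model(speech_vec):
    # audio processing network -> text recovery network
    return speech_vec @ P1 @ R_text

def tts_pretrain_model(text_vec):
    # text processing network -> audio recovery network
    return text_vec @ P2 @ R_audio

speech_vec = rng.normal(size=D_SPEECH)
text_vec   = rng.normal(size=D_TEXT)
print(asr_pretrain_model(speech_vec).shape)  # text-side output
print(tts_pretrain_model(text_vec).shape)    # speech-side output
```

The cost saving claimed in the abstract follows from this structure: the expensive latent-space training is done once and shared, and each direction (recognition, synthesis) only needs its own lightweight recovery half rather than a full independent one-way mapping system.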

Description

Technical Field

[0001] This application relates to the computer field, and in particular to a model training method, device, and computer equipment for speech recognition and speech synthesis.

Background Technique

[0002] "Speech recognition" and "speech synthesis" are two "sequence-to-sequence" prediction tasks in a dual relationship, and both can be modeled with the encoder-decoder framework. Since the training data for "speech recognition" and "speech synthesis" are not interchangeable, existing speech recognition systems only achieve a one-way mapping from speech information to text information, and speech synthesis systems only achieve a one-way mapping from text information to speech information. Due to the diversity of sequences, each one-way mapping system is very large in scale and the amount of data required to train it is also very large, so the construction and training costs of each one-way mapping system are very high, which is not conducive to "spee...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G10L15/06, G10L15/26, G10L13/047, G10L15/16, G10L25/30
CPC: G10L13/047, G10L15/063, G10L15/16, G10L15/26, G10L25/30, G10L2015/0631
Inventors: 徐泓洋, 太荣鹏, 温平
Owner: 深圳市友杰智新科技有限公司