Many-to-many voice conversion method based on double voiceprint feature vectors and sequence-to-sequence modeling

A voiceprint-feature and voice-conversion technology, applied in speech analysis, speech recognition, instruments, etc. It addresses the problems of feature-space discontinuity, ignored inter-frame information, and unsatisfactory conversion quality, thereby reducing practical cost and difficulty and achieving good conversion results.

Active Publication Date: 2020-12-11
SUN YAT SEN UNIV


Problems solved by technology

However, because this method introduces discontinuities into the feature space during quantization and ignores inter-frame information, the conversion quality is unsatisfactory.



Embodiment Construction

[0055] As shown in Figures 1 and 2, a many-to-many voice conversion method based on dual voiceprint feature vectors and sequence-to-sequence modeling includes the following steps:

[0056] S1. Data augmentation: use a multi-speaker text-to-speech synthesis module to generate a parallel corpus. A parallel corpus means that the source speaker and the target speaker utter the same content; because large amounts of parallel corpus data are lacking, multi-speaker text-to-speech synthesis technology is used to generate the parallel corpus;
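Step S1 can be sketched as pairing synthesized utterances of the same text from two speakers. The `synthesize` function below is a hypothetical stand-in for any multi-speaker TTS front end (the patent does not name a specific one); it returns a dummy waveform of plausible length.

```python
# Sketch of step S1: building a parallel corpus with a multi-speaker TTS system.
import numpy as np

def synthesize(text: str, speaker_id: int, sr: int = 16000) -> np.ndarray:
    """Hypothetical TTS stub: returns a waveform for `text` in `speaker_id`'s
    voice. Here it just produces a dummy signal whose length scales with the
    text, standing in for a real multi-speaker synthesizer."""
    rng = np.random.default_rng(abs(hash((text, speaker_id))) % (2**32))
    n_samples = sr * max(1, len(text) // 10)  # rough duration heuristic
    return rng.standard_normal(n_samples).astype(np.float32)

def build_parallel_corpus(texts, source_spk, target_spk):
    """Each entry pairs source and target utterances of the SAME text,
    which is what 'parallel corpus' means in the description."""
    return [(synthesize(t, source_spk), synthesize(t, target_spk)) for t in texts]

pairs = build_parallel_corpus(["hello world", "voice conversion"],
                              source_spk=0, target_spk=3)
print(len(pairs))  # one (source, target) waveform pair per sentence
```

Because both utterances of a pair share the text, a seq2seq model can later learn a frame-level mapping between them without any manual recording of parallel data.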

[0057] S2. Feature extraction from the speech signal: for the generated parallel corpus, extract the acoustic features of the source audio and the target audio;
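The patent names acoustic features only generally; as an assumption, the sketch below uses a log-magnitude STFT, a common frame-level representation, to illustrate what step S2 produces.

```python
# Sketch of step S2: frame-level acoustic feature extraction from a waveform.
import numpy as np

def log_spectrogram(wav, n_fft=512, hop=128):
    """Frame the signal, apply a Hann window, FFT each frame, and take the
    log magnitude. Returns an array of shape (n_frames, n_fft // 2 + 1)."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(wav) - n_fft) // hop
    frames = np.stack([wav[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))
    return np.log(mag + 1e-6)  # small floor avoids log(0)

wav = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s, 440 Hz tone
feats = log_spectrogram(wav)
print(feats.shape)  # (122, 257): 122 frames, 257 frequency bins
```

For the pure tone, the spectral peak lands near bin 14 (440 Hz / 31.25 Hz per bin), a quick sanity check that the framing and FFT are wired correctly.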

[0058] S3. Encode the speaker's identity to obtain a voiceprint feature vector representing that identity;
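The abstract states that the voiceprint vector comes from a model trained on speaker-verification tasks; the sketch below shows the d-vector-style shape of such an encoder (frame encoding, temporal average pooling, L2 normalization) with random placeholder weights instead of trained ones.

```python
# Sketch of step S3: a fixed-length voiceprint vector from frame features.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((257, 64)) * 0.1   # placeholder frame-encoder weights

def voiceprint(frame_feats):
    """Encode each frame, average over time, then L2-normalize, so every
    utterance maps to a unit-length speaker-identity vector."""
    h = np.tanh(frame_feats @ W)           # (n_frames, 64) frame embeddings
    v = h.mean(axis=0)                     # temporal average pooling
    return v / np.linalg.norm(v)

feats = rng.standard_normal((122, 257))    # stand-in for step-S2 features
v = voiceprint(feats)
print(v.shape)  # (64,), unit norm
```

In a trained system the encoder would be optimized so that vectors from the same speaker cluster together, which is what lets the vector stand in for speaker identity downstream.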

[0059] S4. Use the sequence-to-sequence voice conversion model to train on the acoustic features of step S2 and the voiceprint feature vectors of step S3. The s...
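One plausible way to combine the two inputs of step S4 is to tile the source and target voiceprint vectors across time and append them to every acoustic frame before it enters the seq2seq model. The concatenation layout is an assumption; the abstract only says the two vectors are added as auxiliary conditions.

```python
# Sketch of step S4's conditioning: dual voiceprint vectors per frame.
import numpy as np

def condition_frames(frames, src_vp, tgt_vp):
    """Tile both voiceprint vectors across time and concatenate them onto
    each frame, so one model can serve many (source, target) speaker pairs."""
    n = frames.shape[0]
    src = np.tile(src_vp, (n, 1))          # (n_frames, 64)
    tgt = np.tile(tgt_vp, (n, 1))          # (n_frames, 64)
    return np.concatenate([frames, src, tgt], axis=1)

frames = np.zeros((122, 257))              # step-S2 acoustic features
src_vp = np.ones(64)                       # step-S3 source voiceprint
tgt_vp = -np.ones(64)                      # step-S3 target voiceprint
x = condition_frames(frames, src_vp, tgt_vp)
print(x.shape)  # (122, 385) = 257 acoustic + 64 source + 64 target dims
```

Swapping in different voiceprint pairs at inference time is what makes the conversion many-to-many without retraining the model per pair.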



Abstract

The invention relates to the fields of speech synthesis and voice conversion, and more specifically to a many-to-many voice conversion method based on dual voiceprint feature vectors and sequence-to-sequence modeling. In this method, a large parallel corpus is generated using multi-speaker speech synthesis technology, which greatly facilitates model training; a sequence-to-sequence neural network then models the input source-speaker features and maps them to target-speaker features. To realize many-to-many voice conversion, voiceprint feature vectors representing the speakers' identities are generated by a model trained on speaker-verification tasks, and the voiceprint feature vectors of the source and target speakers are added to the sequence-to-sequence model as auxiliary conditions. Model training and testing show that good results can be achieved.

Description

Technical field

[0001] The invention relates to the fields of speech synthesis and voice conversion, and more particularly to a many-to-many voice conversion method based on dual voiceprint feature vectors and sequence-to-sequence modeling.

Background technique

[0002] With the rapid development of artificial intelligence, technologies such as intelligent voice interaction and personalized voice generation have attracted widespread attention. Voice conversion, as one of the key technologies, involves signal processing, deep learning, phonetics and other disciplines, and is currently both a hot topic and a difficult problem in voice interaction. Voice conversion usually refers to transforming the personalized characteristics of a source speaker into those of a target speaker while keeping the linguistic content of the speech unchanged. Personalized characteristics include information such as the spectrum and prosody of speech, and the essence is to make...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G10L21/003; G10L21/007; G10L17/02; G10L15/02; G10L15/16; G10L25/18
CPC: G10L21/003; G10L21/007; G10L17/02; G10L15/02; G10L15/16; G10L25/18
Inventors: 杨耀根, 张东
Owner: SUN YAT SEN UNIV