Voice converting method based on deep learning

A voice conversion method based on deep learning, applied in voice analysis, instruments, etc. It addresses problems such as unsuitability for devices with limited computing resources, poor converted voice quality, and sharp increases in computation, thereby saving terminal computing and storage resources and improving the voice conversion effect.

Active Publication Date: 2018-01-05
NANJING UNIV OF POSTS & TELECOMM

AI Technical Summary

Problems solved by technology

However, this type of algorithm still has some defects. In speech conversion experiments using the Gaussian model, the converted speech quality is poor, and improper settings of parameters such as the number of mixture components often lead to underfitting or overfitting. When the Gaussian mixture model is used to train the mapping function, global variables are considered and the training data is iterated over, causing a sharp increase in computation. Moreover, the Gaussian mixture model achieves a good conversion effect only when training data is sufficient, making it unsuitable for devices with limited computing resources.
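For context, the conventional GMM-based mapping criticized above can be sketched as follows. This is a minimal illustration of the joint-density GMM baseline (not the patent's method), using scikit-learn and synthetic toy frames; the feature dimension, component count, and data are all made up for the example. A joint GMM is fit on stacked source/target frames, and a source frame is converted via the conditional expectation E[y | x].

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
D = 2                                    # toy feature dimension
X = rng.normal(size=(500, D))            # aligned source frames (synthetic)
Y = 0.8 * X + 0.3 + 0.05 * rng.normal(size=(500, D))  # aligned target frames

# Fit a GMM on the joint vectors z = [x; y].
gmm = GaussianMixture(n_components=4, covariance_type='full', random_state=0)
gmm.fit(np.hstack([X, Y]))

def convert(x):
    """Map one source frame to the target space via E[y | x]."""
    mu_x = gmm.means_[:, :D]             # per-component source means
    mu_y = gmm.means_[:, D:]             # per-component target means
    S_xx = gmm.covariances_[:, :D, :D]
    S_yx = gmm.covariances_[:, D:, :D]
    # Posterior responsibilities p(m | x) under the marginal GMM over x
    # (additive constants dropped before normalization).
    logp = np.array([
        -0.5 * (x - mu_x[m]) @ np.linalg.solve(S_xx[m], x - mu_x[m])
        - 0.5 * np.log(np.linalg.det(S_xx[m]))
        for m in range(gmm.n_components)
    ]) + np.log(gmm.weights_)
    w = np.exp(logp - logp.max())
    w /= w.sum()
    # Mix the component-wise conditional means by responsibility.
    return sum(w[m] * (mu_y[m] + S_yx[m] @ np.linalg.solve(S_xx[m], x - mu_x[m]))
               for m in range(gmm.n_components))

y_hat = convert(X[0])
```

Even in this toy form, every converted frame touches all components and their full covariances, which hints at why the computation grows quickly with the mixture count and feature dimension.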




Embodiment Construction

[0041] The technical scheme of the present invention is described in further detail below in conjunction with the accompanying drawings:

[0042] Those skilled in the art will understand that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It should also be understood that terms such as those defined in commonly used dictionaries should be interpreted as having meanings consistent with their meaning in the context of the prior art and, unless defined as herein, should not be interpreted in an idealized or overly formal sense.

[0043] The AHOcoder feature parameter extraction model is a speech codec (speech analysis/synthesis system) developed by Daniel Erro in the AHOLAB signal processing laboratory at the University of the Basque Country. AHOcoder decomposes 16 kHz, 16-bit mono WAV speech into three parts: fundamental ...
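Before any AHOcoder analysis, the input waveform must match the 16 kHz, 16-bit, mono format stated above. The sketch below, using only the Python standard library plus NumPy, validates that format and normalizes the samples; the file name and helper are hypothetical, and the actual AHOcoder command-line invocation is not shown here.

```python
import wave
import numpy as np

def load_mono_16k(path):
    """Load a WAV file and check it matches AHOcoder's expected input
    format (16 kHz sample rate, 16-bit samples, single channel)."""
    with wave.open(path, 'rb') as w:
        assert w.getframerate() == 16000, "AHOcoder expects 16 kHz speech"
        assert w.getsampwidth() == 2, "AHOcoder expects 16-bit samples"
        assert w.getnchannels() == 1, "AHOcoder expects mono speech"
        raw = w.readframes(w.getnframes())
    # Little-endian 16-bit PCM, normalized to [-1, 1).
    return np.frombuffer(raw, dtype='<i2').astype(np.float64) / 32768.0

# Write a short synthetic 16 kHz tone so the sketch is self-contained.
sr, dur = 16000, 0.1
t = np.arange(int(sr * dur)) / sr
tone = (0.5 * np.sin(2 * np.pi * 200 * t) * 32767).astype('<i2')
with wave.open('demo.wav', 'wb') as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(sr)
    w.writeframes(tone.tobytes())

x = load_mono_16k('demo.wav')
```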



Abstract

The invention discloses a voice conversion method based on deep learning, belonging to the technical field of voice signal processing. The method comprises the following steps: configuring the speech codec AHOcoder as the feature extraction terminal and the voice synthesis terminal; training voice features with deep learning to separately obtain deep features of the source speaker and the target speaker, along with the ability to decode the deep features back into the original features; and mapping between the source speaker and the target speaker with a BP neural network, thereby realizing voice conversion. The method splices the original voice features so that the combined feature parameters obtained from splicing contain the dynamic characteristics of the speaker's voice features; training of the deep neural network is accelerated by pre-training the deep autoencoders; and by converting the deep features, the method obtains high-quality converted voice even when little voice material is available for training. The method also supports offline learning and saves the computing resources and memory of terminal devices.
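The pipeline in the abstract can be illustrated with a deliberately small NumPy sketch: each speaker gets an autoencoder whose bottleneck activations serve as "deep features", and a BP (back-propagation) network maps source deep features to target deep features, which the target decoder turns back into spectral frames. All dimensions, hyperparameters, and data below are made up, and the single-hidden-layer networks (with a plain linear BP mapper and no layer-wise pre-training) are a simplification of whatever architecture the patent actually uses.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, hidden, epochs=300, lr=0.5):
    """One-hidden-layer autoencoder: X -> sigmoid(X W1) -> X_hat."""
    d = X.shape[1]
    W1 = rng.normal(0, 0.1, (d, hidden))
    W2 = rng.normal(0, 0.1, (hidden, d))
    for _ in range(epochs):
        H = sigmoid(X @ W1)              # deep features (encoder)
        X_hat = H @ W2                   # reconstruction (decoder)
        E = X_hat - X
        W2 -= lr * H.T @ E / len(X)
        W1 -= lr * X.T @ (E @ W2.T * H * (1 - H)) / len(X)
    return W1, W2

def train_bp_mapper(Hs, Ht, epochs=500, lr=0.5):
    """Linear BP mapping from source to target deep features."""
    M = rng.normal(0, 0.1, (Hs.shape[1], Ht.shape[1]))
    for _ in range(epochs):
        M -= lr * Hs.T @ (Hs @ M - Ht) / len(Hs)
    return M

# Toy aligned spectral frames for the source and target speakers.
Xs = rng.normal(size=(200, 8))
Xt = 0.5 * Xs + 0.2                      # synthetic "target speaker" frames

W1s, _   = train_autoencoder(Xs, hidden=4)   # source encoder
W1t, W2t = train_autoencoder(Xt, hidden=4)   # target encoder + decoder

Hs = sigmoid(Xs @ W1s)                   # source deep features
Ht = sigmoid(Xt @ W1t)                   # target deep features
M = train_bp_mapper(Hs, Ht)

# Conversion: encode source frames, map, decode with the target decoder.
Xt_hat = (Hs @ M) @ W2t
err = np.mean((Xt_hat - Xt) ** 2)
```

The point of the sketch is the division of labor: the autoencoders only need each speaker's own data, so they can be trained offline, and the mapping network operates in the compact deep-feature space rather than on raw spectral parameters.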

Description

Technical field

[0001] The invention relates to a voice conversion and voice synthesis method, belonging to the technical field of voice signal processing.

Background technique

[0002] Speech conversion technology is a research branch of speech signal processing. It covers the fields of speaker recognition, speech recognition, and speech synthesis. It aims to change the personalized information of speech while keeping the original semantic information unchanged, so that the speech of a particular speaker (i.e., the source speaker) sounds like the speech of another particular speaker (i.e., the target speaker). The main tasks of speech conversion are to extract the characteristic parameters of the two specific speakers' voices, perform the mapping transformation, and then decode and reconstruct the transformed parameters into converted speech. In this process, it is necessary to ensure the auditory quality of the converted speech and the accuracy of...

Claims


Application Information

Patent Type & Authority Applications(China)
IPC(8): G10L25/24; G10L25/30; G10L17/04; G10L17/18
Inventor: 李燕萍, 凌云志
Owner NANJING UNIV OF POSTS & TELECOMM