
Training method, device and computer equipment for speaker information extraction model

A technology in the field of information extraction and model training, applied to the training of speaker information extraction models. It addresses the problem that the generalization ability of existing voiceprint recognition networks cannot adequately meet practical needs, and achieves the effects of joint speech recognition and speech synthesis, enriched training data, and strong generalization ability.

Active Publication Date: 2020-09-29
深圳市友杰智新科技有限公司
Cites: 9 | Cited by: 0

AI Technical Summary

Problems solved by technology

[0003] The main purpose of this application is to provide a training method for the speaker information extraction model, aiming to solve the technical problem that the generalization ability of existing voiceprint recognition networks cannot adequately meet practical needs.

Embodiment Construction

[0059] In order to make the purpose, technical solutions, and advantages of this application clearer, this application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the application and are not intended to limit it.

[0060] Referring to figure 1, the training method of the speaker information extraction model in an embodiment of the present application includes:

[0061] S1: Associate a speech synthesis system with a speech recognition system as a training system through the speaker information extraction model, wherein the speech synthesis system includes a text processing network and an audio recovery network connected in sequence, the speech recognition system includes an audio processing network and a text recovery network connected in sequence, and the speaker information extraction model is respectively associated with the au...
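
The wiring described in step S1 can be pictured with a small PyTorch-style sketch. This is only an illustrative assumption of the architecture: the module names (TrainingSystem, SpeakerInfoExtractor), layer choices and dimensions are invented for the example and are not taken from the patent.

```python
# Illustrative sketch of the association in step S1 (PyTorch). All module
# names, layer choices and dimensions are assumptions made for the example,
# not details taken from the patent.
import torch
import torch.nn as nn


class SpeakerInfoExtractor(nn.Module):
    """Stand-in for the speaker information extraction model."""

    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))

    def forward(self, audio_repr):
        # Speaker-dependent (residual) part of the audio processing result.
        return self.net(audio_repr)


class TrainingSystem(nn.Module):
    """Speech recognition and speech synthesis systems associated into one
    training system through the speaker information extraction model."""

    def __init__(self, n_mels=80, vocab=1000, dim=256):
        super().__init__()
        # Speech recognition system: audio processing network -> text recovery network.
        self.audio_proc = nn.GRU(n_mels, dim, batch_first=True)
        self.text_recovery = nn.Linear(dim, vocab)
        # Speech synthesis system: text processing network -> audio recovery network.
        self.text_emb = nn.Embedding(vocab, dim)
        self.text_proc = nn.GRU(dim, dim, batch_first=True)
        self.audio_recovery = nn.Linear(dim, n_mels)
        # The extraction model is associated with both systems.
        self.extractor = SpeakerInfoExtractor(dim)

    def forward(self, mel, tokens):
        _, h_audio = self.audio_proc(mel)               # audio processing result
        audio_repr = h_audio[-1]                        # (B, dim)
        _, h_text = self.text_proc(self.text_emb(tokens))
        text_repr = h_text[-1]                          # (B, dim)
        speaker_repr = self.extractor(audio_repr)       # extracted speaker information
        text_logits = self.text_recovery(audio_repr)    # recognition branch output
        mel_frame = self.audio_recovery(text_repr)      # synthesis branch output
        return audio_repr, text_repr, speaker_repr, text_logits, mel_frame


# Example usage with random data: 8 utterances of 120 mel frames, 20 tokens each.
model = TrainingSystem()
out = model(torch.randn(8, 120, 80), torch.randint(0, 1000, (8, 20)))
```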


Abstract

The invention relates to a training method for a speaker information extraction model. The method comprises the steps of: correlating a speech synthesis system and a speech recognition system into a training system through the speaker information extraction model; removing, from an audio processing result, the residual data information that remains after the text content information corresponding to the voice data is extracted by the speaker information extraction model, to obtain a first high-dimensional vector, and obtaining a second high-dimensional vector output by a text processing network after processing the text data of a first data pair; training the audio processing network, the text processing network and the speaker information extraction model, with training converging when the loss function reaches its minimum value; combining the audio processing network and an audio recovery network into a new audio processing network, and combining the text processing network and a text recovery network into a new text processing network; and training the audio processing network and the speaker information extraction model until convergence to obtain the parameter set of the speaker information extraction model. The generalization ability of the speaker information extraction model is thereby improved.
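
Read literally, the training criterion in the abstract can be sketched as follows. This is a minimal, hedged interpretation: it assumes the first high-dimensional vector is the audio processing result with the extractor's residual (speaker) information subtracted out, the second is the text processing network's output for the paired text, and the loss is a plain mean-squared error; the three networks are stand-in linear layers, and all names are illustrative.

```python
# Minimal sketch of the training criterion described in the abstract. The
# subtraction and the MSE loss are assumed interpretations, not details
# confirmed by the patent; all names and shapes are illustrative.
import torch
import torch.nn as nn

dim, batch = 256, 8
audio_net = nn.Linear(dim, dim)      # stand-in audio processing network
text_net = nn.Linear(dim, dim)       # stand-in text processing network
extractor = nn.Linear(dim, dim)      # stand-in speaker information extraction model
optimizer = torch.optim.Adam(
    list(audio_net.parameters()) + list(text_net.parameters()) + list(extractor.parameters()),
    lr=1e-3,
)

audio_feat = torch.randn(batch, dim)  # voice data of a data pair (pre-extracted features)
text_feat = torch.randn(batch, dim)   # text data of the same data pair

audio_result = audio_net(audio_feat)         # audio processing result
speaker_info = extractor(audio_result)       # residual (speaker) information
first_vec = audio_result - speaker_info      # first high-dimensional vector
second_vec = text_net(text_feat)             # second high-dimensional vector

# Training converges when this loss reaches its minimum.
loss = nn.functional.mse_loss(first_vec, second_vec)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In this reading, driving the two vectors together forces the extractor to absorb everything in the audio that is not explained by the text content, which is precisely the speaker information.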

Description

Technical field

[0001] This application relates to the field of voiceprint recognition, and in particular to a training method, device and computer equipment for a speaker information extraction model.

Background technique

[0002] Voiceprint recognition is a technology that extracts information uniquely representing the identity of a speaker. It is divided into two cases: text-dependent and text-independent. Text-dependent recognition means the speaker must say specified content to be recognized, while text-independent speaker recognition requires no specific content: as long as there is speech, it can be recognized. The model is generally trained using supervised learning methods. In addition, there are open-set and closed-set settings, which mainly concern the recognition range of the model. Open set means that the objects recognizable by the voiceprint recognition model are not limited to the training data set, while closed set refers to the voic...

Claims


Application Information

Patent Type & Authority: Patents (China)
IPC (8): G10L17/04, G10L17/02, G10L17/18, G10L15/26, G10L13/04
CPC: G10L17/02, G10L17/04, G10L17/18
Inventor: 徐泓洋, 太荣鹏, 温平
Owner: 深圳市友杰智新科技有限公司