
Training method and device for speaker information extraction model and computer equipment

A technology relating to information extraction and training methods, applied in the field of speaker information extraction model training. It can solve the problem that the generalization ability of existing voiceprint recognition networks cannot adequately meet practical needs, and achieves the effects of strong generalization ability and rich training data by drawing on both speech recognition and speech synthesis.

Active Publication Date: 2020-07-17
深圳市友杰智新科技有限公司 (Shenzhen Youjie Zhixin Technology Co., Ltd.)
9 Cites · 6 Cited by

AI Technical Summary

Problems solved by technology

[0003] The main purpose of this application is to provide a training method for a speaker information extraction model, aiming to solve the technical problem that the generalization ability of existing voiceprint recognition networks cannot adequately meet practical needs.




Detailed Description of the Embodiments

[0059] In order to make the purposes, technical solutions, and advantages of the present application clearer, the present application is described in further detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present application, not to limit it.

[0060] Referring to Figure 1, the training method of the speaker information extraction model according to an embodiment of the present application comprises:

[0061] S1: Associate a speech synthesis system with a speech recognition system through the speaker information extraction model to form a training system, wherein the speech synthesis system includes a sequentially connected text processing network and audio recovery network, and the speech recognition system includes a sequentially connected audio processing network and text recovery network; the speaker inform...
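
The visible text is truncated here, but the architecture that step S1 describes can be sketched as follows. This is a minimal, hedged sketch only: the layer types, dimensions, and module names are illustrative assumptions rather than the networks specified in the application, and the audio recovery and text recovery networks used in the later training stage are omitted.

```python
# Illustrative sketch of the S1 training-system wiring. Layer choices and
# dimensions are assumptions for demonstration; the application does not
# specify concrete network structures in the visible text.
import torch
import torch.nn as nn

FEAT_DIM, HID_DIM, VOCAB = 80, 256, 5000   # assumed mel bins, hidden size, vocabulary


class AudioProcessingNet(nn.Module):
    """Front half of the speech recognition system: audio -> high-dimensional features."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(FEAT_DIM, HID_DIM, batch_first=True)

    def forward(self, mel):                   # mel: (batch, frames, FEAT_DIM)
        out, _ = self.rnn(mel)
        return out                            # (batch, frames, HID_DIM)


class TextProcessingNet(nn.Module):
    """Front half of the speech synthesis system: text -> high-dimensional features."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, HID_DIM)
        self.rnn = nn.GRU(HID_DIM, HID_DIM, batch_first=True)

    def forward(self, tokens):                # tokens: (batch, length)
        out, _ = self.rnn(self.emb(tokens))
        return out                            # (batch, length, HID_DIM)


class SpeakerExtractor(nn.Module):
    """The speaker information extraction model that links the two systems."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(HID_DIM, HID_DIM)

    def forward(self, audio_feats):
        # Pool the audio features over time, then project to a fixed-length speaker vector.
        return self.proj(audio_feats.mean(dim=1))   # (batch, HID_DIM)


if __name__ == "__main__":
    mel = torch.randn(2, 50, FEAT_DIM)
    tokens = torch.randint(0, VOCAB, (2, 20))
    audio_out = AudioProcessingNet()(mel)
    text_out = TextProcessingNet()(tokens)
    speaker_vec = SpeakerExtractor()(audio_out)
    print(audio_out.shape, text_out.shape, speaker_vec.shape)
```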



Abstract

The invention relates to a training method for a speaker information extraction model. The method comprises the following steps: correlating a speech synthesis system and a speech recognition system into a training system through the speaker information extraction model; removing, from an audio processing result, the residual data information that remains after the speaker information extraction model extracts the text content information corresponding to the speech data, so as to obtain a first high-dimensional vector, and obtaining a second high-dimensional vector output by a text processing network that processes the text data of a first data pair; training the audio processing network, the text processing network, and the speaker information extraction model, with training converging when the loss function reaches its minimum value; combining the audio processing network and an audio recovery network into a combined audio processing network, and combining the text processing network and a text recovery network into a combined text processing network; and training the combined audio processing network and the speaker information extraction model until convergence, to obtain a parameter set of the speaker information extraction model. The generalization ability of the speaker information extraction model is thereby improved.
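
As a reading aid, the first training objective summarized above can be sketched as follows. The subtraction used to remove the speaker-related residual and the mean-squared-error loss are assumptions about how "removing residual data information" and "the loss function reaches its minimum value" might be realized; the abstract does not fix these operators, and all shapes and tensors below are placeholders.

```python
# Hedged sketch of the stage-one objective only. The subtraction-style removal
# and the MSE loss are illustrative assumptions, not the patent's stated
# operators; every tensor here is a random placeholder.
import torch
import torch.nn.functional as F

B, T, H = 4, 50, 256                  # assumed batch size, frame count, hidden size

audio_out = torch.randn(B, T, H)      # output of the audio processing network
text_out = torch.randn(B, T, H)       # output of the text processing network on the paired text
spk_vec = torch.randn(B, H)           # speaker-related information extracted from audio_out

# First high-dimensional vector: the audio representation with the speaker-related
# residual removed, ideally leaving only the text content information.
first_vec = audio_out - spk_vec.unsqueeze(1)   # broadcast the speaker vector over time

# Second high-dimensional vector: the text processing network's output.
second_vec = text_out

# Train the audio processing network, the text processing network and the speaker
# information extraction model until this loss converges to its minimum.
loss = F.mse_loss(first_vec, second_vec)
print(loss.item())
```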

Description

Technical Field

[0001] This application relates to the field of voiceprint recognition, and in particular to a training method, device, and computer equipment for a speaker information extraction model.

Background Technique

[0002] Voiceprint recognition is a technology for extracting information that can uniquely represent a speaker's identity. It comes in two types: text-dependent and text-independent. Text-dependent recognition means that the speaker must say specified content in order to be recognized; text-independent speaker recognition means that no specific content is required, and any speech from the speaker can be recognized. The model is generally trained with a supervised learning method. In addition, there are open-set and closed-set settings, which mainly concern the recognition range of the model: open set means that the objects the voiceprint recognition model can recognize are not limited to the training data set, while closed set means that the...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G10L17/04, G10L17/02, G10L17/18, G10L15/26, G10L13/04
CPC: G10L17/02, G10L17/04, G10L17/18
Inventors: 徐泓洋 (Xu Hongyang), 太荣鹏 (Tai Rongpeng), 温平 (Wen Ping)
Owner: 深圳市友杰智新科技有限公司 (Shenzhen Youjie Zhixin Technology Co., Ltd.)