
Voiceprint feature extraction method and device based on attention mechanism

A voiceprint feature extraction technology, applied in speech analysis and related instruments, that addresses the problem of ignoring the differing contributions of speech frames to voiceprint authentication, with the effect of improving user experience, increasing the pass rate, and making features easier to extract.

Publication Date: 2019-05-24 (Inactive)
SOUNDAI TECH CO LTD

AI Technical Summary

Problems solved by technology

At present, the commonly used voiceprint feature extraction method is to compute voiceprint features with a trained deep neural network model. This method treats the target speaker's speech frames equally during voiceprint computation and does not take into account that different speech frames contribute differently to voiceprint authentication.




Detailed Description of the Embodiments

[0047] To account for the differing contributions of speech frames in voiceprint feature extraction, the present disclosure provides a voiceprint feature extraction method and device based on an attention mechanism. The attention mechanism is introduced to estimate a weight for each speech frame, and the voiceprint feature is then obtained from the weighted hidden-layer activations, replacing the prior approach in which every speech frame contributes equally.
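As a sketch of the weighting described above (the scoring function below is an assumption made for illustration; the excerpt does not specify the patent's exact form), let h_t be the last hidden layer's activation for speech frame t of a T-frame utterance. An attention layer can score each frame, normalize the scores into weights, and pool the activations into the voiceprint feature:

```latex
e_t = \mathbf{v}^{\top}\tanh(\mathbf{W}\mathbf{h}_t + \mathbf{b}), \qquad
\alpha_t = \frac{\exp(e_t)}{\sum_{k=1}^{T}\exp(e_k)}, \qquad
\mathbf{d} = \sum_{t=1}^{T}\alpha_t\,\mathbf{h}_t
```

With this pooling, frames judged more informative about speaker identity receive larger weights α_t, instead of every frame contributing the uniform weight 1/T.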

[0048] To make the objectives, technical solutions, and advantages of the present disclosure clearer, the present disclosure is described in further detail below in conjunction with specific embodiments and with reference to the accompanying drawings.

[0049] Certain embodiments of the present disclosure will be described more fully hereinafter with reference to the accompanying drawings, in which some but not all embodiments are shown. Indeed, va...



Abstract

The disclosure provides a voiceprint feature extraction method and device based on an attention mechanism. The voiceprint feature extraction method includes: inputting a target speaker's speech spectral features into a deep neural network, adding an attention layer to the deep neural network, and estimating the weights of different speech frames through the attention mechanism; and extracting an activation from the last hidden layer of the deep neural network and weighting the activation to obtain the voiceprint feature (d-vector). Because the attention mechanism is introduced to estimate the weight of each speech frame, the voiceprint features are more discriminative: the pass rate for the target speaker can be increased while voiceprint authentication accuracy is maintained, the misrecognition rate for non-target speakers is decreased, and user experience is improved.
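A minimal PyTorch sketch of these steps follows; the layer sizes, the scalar attention scoring, and all names (AttentionDVectorNet, feat_dim, hidden_dim) are assumptions made for illustration rather than the patent's exact architecture. Frame-level spectral features pass through hidden layers, an added attention layer assigns a weight to each frame, and the last hidden layer's activations are pooled with those weights to produce the d-vector:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionDVectorNet(nn.Module):
    def __init__(self, feat_dim=40, hidden_dim=256, num_speakers=1000):
        super().__init__()
        # Frame-level hidden layers (sizes are illustrative)
        self.hidden = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Attention layer: produces one score per frame
        self.attention = nn.Linear(hidden_dim, 1)
        # Speaker classification head, used only during training
        self.classifier = nn.Linear(hidden_dim, num_speakers)

    def forward(self, x):
        # x: (batch, frames, feat_dim) spectral features of an utterance
        h = self.hidden(x)                      # (batch, frames, hidden_dim)
        scores = self.attention(h).squeeze(-1)  # (batch, frames)
        weights = F.softmax(scores, dim=-1)     # frame weights sum to 1
        # Weighted sum of last-hidden-layer activations -> d-vector
        d_vector = torch.sum(weights.unsqueeze(-1) * h, dim=1)
        return d_vector, self.classifier(d_vector)

# Usage: extract a d-vector for a 200-frame utterance of 40-dim features
model = AttentionDVectorNet()
feats = torch.randn(1, 200, 40)
d_vec, _ = model(feats)
print(d_vec.shape)  # torch.Size([1, 256])
```

During training, the classifier head would be optimized for speaker classification; at enrollment and verification time only the pooled d-vector is used, for example by comparing the cosine similarity between d-vectors.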

Description

Technical Field

[0001] The present disclosure relates to the field of automatic speech recognition, and in particular to a method and device for extracting voiceprint features based on an attention mechanism.

Background Technique

[0002] At present, with the popularization of information technology, automatic speech recognition technology is playing an increasingly important role, and its application prospects are broadening. A speech signal mainly carries three kinds of information: who is speaking, what language is being spoken, and what is being said. The corresponding automatic speech recognition technologies are speaker recognition, language recognition, and semantic recognition. Speaker recognition, also known as voiceprint recognition, mainly studies techniques for authenticating a speaker's identity based on an input voice signal. Like other recognition technologies, speaker recognition identifies the input speaker audio through certain features, so as ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G10L17/02, G10L17/18, G10L17/22, G10L25/24
Inventors: 冯大航, 陈孝良, 苏少炜, 常乐
Owner: SOUNDAI TECH CO LTD