
Speaker recognition method based on depth learning

A speaker recognition technology based on deep learning, applied in the field of speech processing. It addresses the problems that a single speech feature cannot fully characterize the speaker's vocal tract, that hand-set model parameters cannot automatically learn deeper feature information, and that the recognition effect is unsatisfactory, with the effects of improving the system recognition rate, reducing computational complexity, and providing good classification performance.

Active Publication Date: 2014-11-19
DALIAN UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0006] In the prior art, a single speech feature is mostly used in the speaker recognition process, which cannot fully characterize the speaker's vocal tract and has poor robustness; moreover, the recognition models adopted usually rely on artificially set feature parameters and cannot automatically learn deeper feature information, so the recognition effect is unsatisfactory. To address these problems, the present invention discloses a speaker recognition method based on deep learning. The speaker's voice signal is processed and a deep belief network model is established; a layer-by-layer greedy algorithm, combined with the speaker's voice feature parameters, is used to train the established deep belief network model and determine the model parameters; the voice signal is then input again to complete the recognition process.
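The layer-by-layer greedy training mentioned above can be sketched as follows. This is a minimal illustration, not the patented implementation: each restricted Boltzmann machine (RBM) is trained on the activations of the layer below using one-step contrastive divergence, and the learning rate, epoch count, and random seed are assumptions chosen for illustration (the hidden-layer sizes mirror the 3-by-50 configuration used in the experiment below).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=50, lr=0.05, seed=0):
    """Train one RBM with one-step contrastive divergence (CD-1).
    `data` has shape (n_samples, n_visible) with values in [0, 1]."""
    rng = np.random.default_rng(seed)
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)   # visible biases
    b_h = np.zeros(n_hidden)    # hidden biases
    for _ in range(epochs):
        # positive phase: hidden probabilities and a binary sample
        p_h = sigmoid(data @ W + b_h)
        h = (rng.random(p_h.shape) < p_h).astype(float)
        # negative phase: one Gibbs step back down and up again
        p_v = sigmoid(h @ W.T + b_v)
        p_h2 = sigmoid(p_v @ W + b_h)
        # CD-1 parameter updates
        W += lr * (data.T @ p_h - p_v.T @ p_h2) / len(data)
        b_v += lr * (data - p_v).mean(axis=0)
        b_h += lr * (p_h - p_h2).mean(axis=0)
    return W, b_h

def pretrain_dbn(features, hidden_sizes=(50, 50, 50)):
    """Greedy layer-by-layer pretraining: each RBM is trained on the
    hidden activations of the RBM below it."""
    layers, x = [], features
    for n_hidden in hidden_sizes:
        W, b_h = train_rbm(x, n_hidden)
        layers.append((W, b_h))
        x = sigmoid(x @ W + b_h)   # feed activations to the next layer
    return layers
```

After this unsupervised pretraining, the stacked weights would initialize the deep belief network before the supervised fine-tuning with the Softmax output layer described in the abstract.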



Examples


Embodiment

[0146] In the experiment, the following parameters are used: 16 kHz sampling rate, 16-bit PCM-coded speech, 16-millisecond frame length, and pre-emphasis coefficient a = 0.9375; the short-time energy and short-time zero-crossing rate thresholds are 67108864 and 30, respectively. Ten speakers are selected; about 10 seconds of speech per speaker is used for training, and the test utterance lengths are 0.4, 0.8, 1.2, 1.6 and 2.0 seconds. The speech feature parameters are 16-dimensional MFCC, 16-dimensional GFCC, and the combination of MFCC and GFCC into a 32-dimensional feature vector. The deep belief network model has 3 hidden layers with 50 neurons per hidden layer, and the number of training iterations is 500. The speaker recognition results are shown in Table 3, and the system recognition results for the different speech features are plotted as the line graph in Figure 9.
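For reference, the sketch below collects the experimental settings quoted in this paragraph and the pre-emphasis and framing they imply. The 50% frame overlap and the Hamming window are assumptions not stated here, and the feature extraction (MFCC/GFCC) and DBN training are omitted.

```python
import numpy as np

# Experimental settings quoted above
CONFIG = {
    "sample_rate": 16000,           # 16 kHz, 16-bit PCM speech
    "frame_ms": 16,                 # 16 ms frames -> 256 samples per frame
    "pre_emphasis": 0.9375,         # pre-emphasis coefficient a
    "energy_threshold": 67108864,   # short-time energy threshold
    "zcr_threshold": 30,            # short-time zero-crossing rate threshold
    "feature_dim": 32,              # 16-dim MFCC + 16-dim GFCC
    "hidden_layers": (50, 50, 50),  # 3 hidden layers, 50 neurons each
    "epochs": 500,                  # training iterations
}

def pre_emphasize(signal, a=CONFIG["pre_emphasis"]):
    """y[n] = x[n] - a * x[n-1]; boosts high frequencies before framing."""
    signal = np.asarray(signal, dtype=float)
    return np.append(signal[0], signal[1:] - a * signal[:-1])

def frame_signal(signal, frame_len=256, hop=128):
    """Overlapping Hamming-windowed frames (50% overlap is an assumption)."""
    window = np.hamming(frame_len)
    starts = range(0, len(signal) - frame_len + 1, hop)
    return np.stack([signal[s:s + frame_len] * window for s in starts])
```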

[0147] Table 3 Speaker recogn...



Abstract

The invention discloses a speaker recognition method based on deep learning. The method comprises the following steps: S1) carrying out pre-emphasis and overlapping framing with windowing on the collected voice signals; S2) carrying out endpoint detection on the collected voice signals by utilizing a dual-threshold endpoint detection method based on short-time energy and short-time zero-crossing rate, and identifying the starting moment, transition stage, noise section and ending moment of the voice; S3) carrying out feature extraction on the voice signals; S4) forming a deep belief network model based on a restricted Boltzmann machine hierarchy, training the established deep belief network model by utilizing a layer-by-layer greedy algorithm combined with the speaker voice feature parameters, and adding a Softmax classifier to the top layer of the deep belief network model; and S5) inputting the voice features of a speaker into the trained deep belief network model, calculating the probability that the input features belong to each enrolled speaker, and selecting the speaker corresponding to the maximum probability as the recognition result.
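A minimal sketch of the dual-threshold endpoint detection in step S2 is given below, assuming frame-level decisions: frames whose short-time energy exceeds the energy threshold are treated as speech, low-energy frames with a high zero-crossing rate are kept as the transition stage, and the rest are discarded as noise. The exact decision logic of the patented method may differ; the default thresholds are taken from the experiment section.

```python
import numpy as np

def short_time_energy(frames):
    """Short-time energy: sum of squared samples in each frame."""
    return np.sum(frames.astype(np.float64) ** 2, axis=1)

def short_time_zcr(frames):
    """Short-time zero-crossing rate: number of sign changes per frame."""
    signs = np.sign(frames)
    return np.sum(np.abs(np.diff(signs, axis=1)) > 0, axis=1)

def detect_endpoints(frames, energy_thr=67108864, zcr_thr=30):
    """Boolean mask over frames: True for speech or transition frames."""
    energy = short_time_energy(frames)
    zcr = short_time_zcr(frames)
    speech = energy > energy_thr              # clearly voiced frames
    transition = (~speech) & (zcr > zcr_thr)  # low energy but high ZCR (word edges)
    return speech | transition
```

The first and last True frames of the mask give the starting and ending moments of the utterance; only the retained frames are passed on to feature extraction in step S3.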

Description

Technical Field

[0001] The present invention relates to the technical field of speech processing, and in particular to a speaker recognition method based on deep learning.

Background Technique

[0002] Speaker recognition is usually called voiceprint recognition. Compared with other biometric technologies, it offers natural convenience, high user acceptance, and low equipment cost. Speaker recognition technology has been widely applied to identity verification, access control systems, human-computer interaction, forensic identification, communication networks, mobile terminals, banking systems, national defense and military, and other fields. Speaker recognition technology mainly comprises speech feature parameter extraction and speaker pattern classification. Speech feature extraction captures the speaker's speech features and vocal tract characteristics. At present, the mainstream feature parameters, including MFCC, LPCC, pitch period, etc., are all based...


Application Information

IPC(8): G10L17/02, G10L17/04
Inventor 陈喆, 殷福亮, 耿国胜
Owner DALIAN UNIV OF TECH