
CNN (Convolutional Neural Network)-based voiceprint recognition method for anti-record attack detection

A convolutional-neural-network technology, applied in the field of voiceprint authentication for anti-recording attack detection. It addresses the problem of high model resource consumption and achieves the effect of reducing computation and model size.

Inactive Publication Date: 2019-05-14
SOUTH CHINA UNIV OF TECH
Cites: 6 · Cited by: 15

AI Technical Summary

Problems solved by technology

However, problems remain: the model consumes substantial resources, and the accuracy of feature extraction can be further improved.

Method used



Examples


Embodiment 1

[0025] As shown in Figure 1, the voiceprint authentication method based on a convolutional neural network proposed in this embodiment mainly includes the following steps:

[0026] Step 101: Acquire the audio to be detected, perform pre-emphasis processing and endpoint detection, and extract the MFCC feature vector of the audio to be detected. The audio to be detected includes both real human speech and recordings played back through different recording devices.
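As a rough sketch of the preprocessing in Step 101 (not the patent's actual implementation), the pre-emphasis filter and a crude energy-based endpoint detector can be written in NumPy. The frame length, hop size, and energy threshold below are illustrative assumptions; full MFCC extraction would typically be delegated to a library such as librosa.

```python
import numpy as np

def pre_emphasis(signal, alpha=0.97):
    """Standard pre-emphasis filter: y[n] = x[n] - alpha * x[n-1]."""
    return np.append(signal[0], signal[1:] - alpha * signal[:-1])

def energy_endpoints(signal, frame_len=400, hop=160, threshold_ratio=0.1):
    """Crude endpoint detection: mark frames whose short-time energy
    exceeds a fraction of the maximum frame energy as speech."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    energies = np.array([
        np.sum(signal[i * hop : i * hop + frame_len] ** 2)
        for i in range(n_frames)
    ])
    return energies > threshold_ratio * energies.max()  # boolean mask

# Toy example: silence-speech-silence at 16 kHz
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
sig = np.concatenate([np.zeros(sr // 4),
                      0.5 * np.sin(2 * np.pi * 220 * t[: sr // 2]),
                      np.zeros(sr // 4)])
emphasized = pre_emphasis(sig)
mask = energy_endpoints(emphasized)
```

Pre-emphasis boosts high frequencies before framing; the speech frames selected by `mask` would then be passed to the MFCC extractor.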

[0027] Step 102: Construct a new convolutional neural network by combining the depthwise separable convolution of MobileNet with the Unet-style connection between the first and last layers. In this network structure, the input layer is connected to a standard convolutional layer, followed by four downsampling convolutional layers with a stride of 2, and then four upsampling deconvolution layers with a stride of 2. The first convolutional layer is directly connected to the last la...
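The MobileNet building block referenced in Step 102 is the depthwise separable convolution: a per-channel spatial convolution followed by a 1x1 pointwise convolution that mixes channels. A minimal NumPy sketch (assuming 'valid' padding, no bias, no nonlinearity, which the real network would add) illustrates the operation and its parameter saving:

```python
import numpy as np

def depthwise_separable_conv(x, depthwise_k, pointwise_k, stride=1):
    """Depthwise separable convolution as used in MobileNet.
    x: (H, W, C_in); depthwise_k: (kh, kw, C_in); pointwise_k: (C_in, C_out).
    Parameter count is kh*kw*C_in + C_in*C_out instead of the
    kh*kw*C_in*C_out of a standard convolution."""
    H, W, C = x.shape
    kh, kw, _ = depthwise_k.shape
    out_h = (H - kh) // stride + 1
    out_w = (W - kw) // stride + 1
    # Depthwise stage: each input channel gets its own spatial filter
    dw = np.zeros((out_h, out_w, C))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[i*stride:i*stride+kh, j*stride:j*stride+kw, :]
            dw[i, j, :] = np.sum(patch * depthwise_k, axis=(0, 1))
    # Pointwise stage: 1x1 convolution = per-pixel matrix multiply
    return dw @ pointwise_k  # (out_h, out_w, C_out)

x = np.random.randn(32, 32, 8)
y = depthwise_separable_conv(x, np.random.randn(3, 3, 8),
                             np.random.randn(8, 16), stride=2)
# y.shape == (15, 15, 16): stride-2 downsampling as in the patent's encoder
```

The Unet-style skip connection described in the text would then concatenate an encoder layer's output with the matching decoder layer's output along the channel axis, e.g. `np.concatenate([enc, dec], axis=-1)`.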

Embodiment 2

[0030] As shown in Figure 2, the anti-recording-attack voiceprint authentication method based on a convolutional neural network in this embodiment mainly includes the following steps:

[0031] Step 201: Acquire the audio to be detected, perform pre-emphasis processing and endpoint detection, and extract the MFCC feature vector of the audio to be detected. The audio to be detected includes both real human speech and audio recorded and played back by different recording devices.

[0032] Step 202: Using the feature vectors extracted in step 201, train a fully connected neural network whose input and output are both the MFCC features from step 201; that is, train an autoencoder.

[0033] Step 203: Pass all audio through the fully connected neural network trained in step 202, and take the output of its bottleneck layer as the feature input to the new network.
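Steps 202-203 can be sketched as a tiny one-hidden-layer autoencoder in NumPy. The dimensions (39-dim MFCC frames, 16-dim bottleneck), learning rate, and architecture here are illustrative assumptions, not the patent's actual configuration; the point is that after training on reconstruction error, the bottleneck activations serve as compact features for the CNN.

```python
import numpy as np

rng = np.random.default_rng(0)

IN_DIM, BOTTLENECK = 39, 16   # hypothetical MFCC and bottleneck sizes

# Encoder weights W1, decoder weights W2 (biases omitted for brevity)
W1 = rng.standard_normal((IN_DIM, BOTTLENECK)) * 0.1
W2 = rng.standard_normal((BOTTLENECK, IN_DIM)) * 0.1

def encode(x):
    """Bottleneck activations: the features fed to the CNN in Step 203."""
    return np.tanh(x @ W1)

def decode(h):
    return h @ W2

def train_step(x, lr=1e-2):
    """One gradient-descent step on mean squared reconstruction error."""
    global W1, W2
    h = encode(x)
    err = decode(h) - x                   # (N, IN_DIM)
    gW2 = h.T @ err / len(x)
    gh = err @ W2.T * (1 - h**2)          # backprop through tanh
    gW1 = x.T @ gh / len(x)
    W2 -= lr * gW2
    W1 -= lr * gW1
    return np.mean(err**2)

X = rng.standard_normal((256, IN_DIM))    # stand-in for MFCC frames
losses = [train_step(X) for _ in range(50)]
features = encode(X)                      # bottleneck features, (256, 16)
```

Because the target equals the input, the network is forced to compress each frame through the 16-dim bottleneck, and those compressed representations are what Step 203 extracts.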

[0034] Step 204: Construct the network by combining the depthwise separable convolution of MobileNet with the Unet-style connection between the first layer an...


Abstract

The invention discloses a CNN (Convolutional Neural Network)-based voiceprint recognition method for anti-recording attack detection. The method comprises the following steps: step S101, acquiring the audio to be detected and establishing a voiceprint recognition data set; step S102, performing feature extraction on the audio in the data set, where the extracted features comprise MFCC (Mel Frequency Cepstrum Coefficient) features and bottleneck-layer features; step S103, establishing a CNN by combining MobileNet and Unet; step S104, inputting the voiceprint recognition data set into the CNN for training; step S105, inputting the bottleneck-layer features of the test audio into the trained CNN to obtain a test score for judging whether the audio is genuine speech or a recording. By combining the characteristics of the Unet and MobileNet models, the disclosed method has lower model complexity, i.e., a smaller model size, lower computational resource consumption, and higher recognition accuracy, and can be ported to mobile phones and embedded devices.

Description

Technical Field

[0001] The invention relates to the fields of deep learning and voiceprint recognition, and in particular to a convolutional-neural-network-based voiceprint authentication method for anti-recording attack detection.

Background Technology

[0002] Voiceprint recognition is a common and practical biometric authentication technology. However, as recognition technology advances, cracking techniques develop as well. Common methods of attacking voiceprint recognition systems include human imitation and machine imitation. Human imitation is a method in which experienced personnel impersonate the speaker by mimicking the speaker's voice and vocal technique, while machine imitation includes machine synthesis, machine recording-and-playback attacks, and other methods.

[0003] Among these, the recording attack uses high-fidelity recording equipment to record the voice of the speaker, and then uses the ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G10L17/00, G10L17/02, G10L17/04, G10L17/18, G10L25/03, G10L25/24
Inventor: 谢志峰, 张伟彬, 徐向民
Owner SOUTH CHINA UNIV OF TECH