
A Deep Neural Network and Acoustic Target Voiceprint Feature Extraction Method

A deep neural network and voiceprint feature technology, applied in the field of target recognition, that addresses problems such as the difficulty of achieving good training results and poor local optima, and achieves the effect of reducing the impact of interference line spectra.

Active Publication Date: 2019-04-09
CSSC SYST ENG RES INST

AI Technical Summary

Problems solved by technology

In neural network research, stochastic gradient descent and error backpropagation are the typical algorithms for training multi-layer networks, but they struggle to achieve good results on networks with many hidden layers.
One of the main difficulties stems from the ubiquitous local optima of a deep network's non-convex objective function, which make a randomly initialized network prone to falling into a poor local optimum during training.



Examples


Specific Embodiment

[0058] The autoencoder network used here has three hidden layers; the number of nodes in each layer is shown in Table 1. Of the input-layer nodes, 500 correspond to the frequency points of the original signal spectrum, 51 correspond to all frequencies within the value range of the fundamental frequency, and 5 correspond to the harmonic orders 3 through 7.

[0059] Table 1

[0060]

| | Input layer | Hidden layer 1 | Hidden layer 2 | Hidden layer 3 | Output layer |
| --- | --- | --- | --- | --- | --- |
| Number of nodes | 500+51+5 | 200 | 50 | 200 | 500 |
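The layer sizes above (556 inputs through a 50-node bottleneck back to 500 outputs) can be sketched as a plain NumPy forward pass. This is a hypothetical illustration: the patent specifies neither the weights nor the activation function, so a sigmoid activation and random initialization are assumed here.

```python
import numpy as np

# Layer sizes from Table 1: 556 = 500 spectrum bins + 51 F0 bins + 5 harmonic orders
LAYER_SIZES = [556, 200, 50, 200, 500]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def init_params(sizes, seed=0):
    """Small random weights and zero biases for each layer (assumed scheme)."""
    rng = np.random.default_rng(seed)
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Propagate a 556-dim input through all layers; returns the 500-bin reconstruction."""
    h = x
    for W, b in params:
        h = sigmoid(h @ W + b)
    return h

params = init_params(LAYER_SIZES)
x = np.random.default_rng(1).random(556)
print(forward(params, x).shape)  # (556,) in, (500,) out
```

The bottleneck layer of 50 nodes is what forces the network to learn a compressed representation of the spectrum, which is the usual design rationale for an autoencoder of this shape.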

[0061] Using the training data, a single-hidden-layer neural network is trained. The network has 556 input nodes, 500 output nodes, and 100 hidden-layer nodes. Figure 5 gives the reconstruction error as a function of the number of iterations. From Figure 5 it can be seen that when the number of nodes is less than 100, the reconstruction error decreases exponent...
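The pre-training step described in paragraph [0061] can be sketched with a minimal NumPy training loop. Everything beyond the layer sizes (556 in, 100 hidden, 500 out) is an assumption: the real training data and Figure 5 are not available, so a synthetic batch stands in, and squared reconstruction error with plain gradient descent is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((32, 556))   # synthetic stand-in for the spectrum inputs
T = X[:, :500]              # target: reconstruct the 500-bin spectrum part

W1 = rng.normal(0.0, 0.1, (556, 100)); b1 = np.zeros(100)
W2 = rng.normal(0.0, 0.1, (100, 500)); b2 = np.zeros(500)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

errors = []
for it in range(200):
    H = sigmoid(X @ W1 + b1)        # hidden activations (100 nodes)
    Y = sigmoid(H @ W2 + b2)        # reconstruction (500 bins)
    E = Y - T
    errors.append(np.mean(E ** 2))  # reconstruction error at this iteration
    # Backpropagate the squared-error gradient through both sigmoid layers
    dY = E * Y * (1.0 - Y)
    dH = (dY @ W2.T) * H * (1.0 - H)
    W2 -= lr * H.T @ dY / len(X); b2 -= lr * dY.mean(axis=0)
    W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(axis=0)
```

Plotting `errors` against the iteration index would reproduce the kind of curve Figure 5 reports: the reconstruction error falls as training proceeds.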



Abstract

A deep neural network and an underwater acoustic target voiceprint feature extraction method. The deep neural network comprises an input layer, a hidden layer, and an output layer and is used to extract voiceprint features of underwater acoustic targets. The number of input-layer nodes is the sum of the number of frequency points of the underwater acoustic target's original signal spectrum, the number of frequency points within the value range of the fundamental frequency, and the number of harmonic orders; the number of output-layer nodes equals the number of frequency points of the original signal spectrum; and the number of hidden-layer nodes is smaller than the number of input-layer nodes. The voiceprint feature extraction method comprises a signal acquisition step, a fundamental frequency and harmonic acquisition step, and a reconstruction step. Accurate extraction of the fundamental frequency and harmonics and reconstruction of the original signal spectrum weaken the noise line spectra contained in the original spectrum, purify it, and reduce the impact of interference line spectra on the final identification of individual ship targets. The method can also adapt to frequency drift.
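The input layout the abstract describes (spectrum bins + fundamental-frequency bins + harmonic orders) can be sketched as a vector-assembly step. The encodings below are assumptions: the patent does not say how the 51 fundamental-frequency bins or the 5 harmonic-order slots are filled, so a one-hot F0 bin and binary harmonic-presence flags are used for illustration.

```python
import numpy as np

# Sizes from the embodiment: 500 spectrum bins, 51 F0 bins, harmonic orders 3-7
N_SPECTRUM, N_F0, HARMONIC_ORDERS = 500, 51, range(3, 8)

def build_input(spectrum, f0_bin, present_orders):
    """Concatenate the spectrum, a one-hot F0 indicator, and harmonic-order flags
    into the 556-dim network input (hypothetical encoding)."""
    assert spectrum.shape == (N_SPECTRUM,)
    f0 = np.zeros(N_F0)
    f0[f0_bin] = 1.0
    harm = np.array([1.0 if k in present_orders else 0.0 for k in HARMONIC_ORDERS])
    return np.concatenate([spectrum, f0, harm])  # 500 + 51 + 5 = 556

x = build_input(np.random.default_rng(0).random(500), f0_bin=12,
                present_orders={3, 5, 7})
print(x.shape)  # (556,)
```

Feeding the fundamental frequency and harmonics alongside the raw spectrum is what lets the network's reconstruction emphasize the target's own line spectra over interference, which is the purification effect the abstract claims.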

Description

Technical field

[0001] The invention relates to a neural network and a feature extraction method, in particular to a deep neural network and an underwater acoustic target voiceprint feature extraction method, belonging to the field of target recognition.

Background technique

[0002] Underwater acoustic signals contain voiceprint features that, like fingerprints, can distinguish individuals. Ship-radiated noise is mainly generated by on-board sound sources such as generators, propulsion systems, and auxiliary equipment, and can be detected and acquired by detection equipment. The detected underwater acoustic signal differs according to its multiple sound sources. Compared with the voiceprint features of other types of ships, voiceprint features include simple features and complex features. The line spectrum in the voiceprint features is a simple feature; these line spectra can be described by frequency, amplitude, and width, while th...
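The background notes that line spectra can be described by frequency, amplitude, and width. A toy sketch of that description (not the patent's method) picks local maxima above a threshold and measures width at half the peak height, a common convention assumed here.

```python
import numpy as np

def describe_lines(spectrum, freqs, threshold):
    """Return each line-spectrum component as frequency, amplitude, and width.
    Peaks are local maxima above `threshold`; width is the span where the
    spectrum stays above half the peak height (assumed convention)."""
    lines = []
    for i in range(1, len(spectrum) - 1):
        if (spectrum[i] > threshold
                and spectrum[i] >= spectrum[i - 1]
                and spectrum[i] > spectrum[i + 1]):
            half = spectrum[i] / 2.0
            lo = i
            while lo > 0 and spectrum[lo - 1] > half:
                lo -= 1
            hi = i
            while hi < len(spectrum) - 1 and spectrum[hi + 1] > half:
                hi += 1
            lines.append({"freq": freqs[i], "amp": spectrum[i],
                          "width": freqs[hi] - freqs[lo]})
    return lines

freqs = np.linspace(0, 500, 500)  # Hz, hypothetical frequency grid
spec = np.full(500, 0.1)
spec[100] = 1.0                   # one narrow line near 100 Hz
print(describe_lines(spec, freqs, threshold=0.5))
```

Real ship spectra would of course require smoothing and noise-floor estimation first; the point here is only the (frequency, amplitude, width) parameterization the text mentions.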

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06K9/00, G06N3/08
CPC: G06N3/088, G06F2218/08
Inventor 潘悦吴玺宏李江乔皇甫立
Owner CSSC SYST ENG RES INST