
Quick training method of large-scale data recurrent neural network (RNN)

A recurrent neural network and large-scale data technology, applied in the field of speech recognition, which can solve problems such as underutilized gradient information, many iteration steps, and slow convergence

Inactive Publication Date: 2015-05-06
TSINGHUA UNIV
Cites 2, Cited by 29

AI Technical Summary

Problems solved by technology

However, the update method in step 6 uses only the average gradient, and the update step size depends on a preset learning factor. Although this approach has strong regularization ability, it does not make full use of the gradient and objective-function information at each training sample, so convergence is slow and many iteration steps are required.
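To make the criticism concrete, the following is a minimal sketch (with hypothetical function and variable names, not the patent's notation) of the plain averaged-gradient update being described: per-sample gradients are collapsed into a single mean, and the step size is fixed by a preset learning factor, so all per-sample variation is discarded.

```python
import numpy as np

def mean_gradient_step(w, sample_gradients, eta=0.1):
    """Update coefficients w using only the average of the per-sample gradients.

    All per-sample gradient information is averaged away, and the step size
    is fixed by the preset learning factor eta.
    """
    g_avg = np.mean(sample_gradients, axis=0)
    return w - eta * g_avg

# Tiny illustration with two training samples and three coefficients.
w = np.array([1.0, -2.0, 0.5])
grads = np.array([[0.2, 0.1, -0.3],
                  [0.4, -0.1, 0.1]])
w_new = mean_gradient_step(w, grads, eta=0.5)  # w_new == [0.85, -2.0, 0.55]
```

Because only the mean direction is used, differences between sample gradients (which the patented method exploits via residual principal components) contribute nothing to the update.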

Method used



Examples


Embodiment Construction

[0084] The present invention proposes a fast training method for a large-scale data recurrent neural network, which is further explained below in conjunction with the accompanying drawings and embodiments:

[0085] The present invention proposes a rapid training method for a large-scale data recurrent neural network, characterized in that the method updates the internal coefficients simultaneously along the average gradient direction and the principal component direction of the gradient residuals, so as to train the recurrent neural network quickly on large-scale data. After the gradient of the objective function with respect to the internal coefficients is obtained at each training sample, the training samples are grouped, and the gradients of the entire training sample set and of each group are weighted-averaged according to the objective function value at each training sample, yielding the global average gradient and the group average gradients; the direction of the residual principal component...
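The steps described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the patent's exact formulas: all function names, the choice of objective values as weights, the SVD-based principal component, and the two step sizes `eta` and `mu` are assumptions for illustration, and the sign of a principal component is inherently ambiguous.

```python
import numpy as np

def two_direction_step(w, grads, obj_vals, groups, eta=0.1, mu=0.05):
    """One coefficient update along the weighted global average gradient and
    along the leading principal component of the group-mean gradient residuals.

    grads    : (n_samples, n_coeffs) per-sample gradients of the objective
    obj_vals : per-sample objective values, used here as averaging weights
    groups   : list of index arrays partitioning the training samples
    """
    obj_vals = np.asarray(obj_vals, dtype=float)
    weights = obj_vals / obj_vals.sum()          # weight by objective value
    g_global = weights @ grads                   # weighted global average gradient

    # Weighted average gradient of each group.
    g_groups = []
    for idx in groups:
        wg = weights[idx] / weights[idx].sum()
        g_groups.append(wg @ grads[idx])
    g_groups = np.array(g_groups)

    # Residuals: group means minus their projection onto the global mean,
    # so each residual is orthogonal to g_global.
    denom = g_global @ g_global
    resid = g_groups - np.outer(g_groups @ g_global / denom, g_global)

    # Leading principal component of the residuals (unit vector; sign is
    # ambiguous and would be fixed by the objective in a full method).
    _, _, vt = np.linalg.svd(resid, full_matrices=False)
    p = vt[0]

    return w - eta * g_global - mu * p

# Tiny illustration: four samples, two coefficients, two groups.
w = np.zeros(2)
grads = np.array([[1., 0.], [0., 1.], [1., 1.], [-1., 1.]])
obj_vals = [1., 1., 1., 1.]
groups = [np.array([0, 1]), np.array([2, 3])]
w_new = two_direction_step(w, grads, obj_vals, groups, eta=0.1, mu=0.05)
```

The second update direction is orthogonal to the global average gradient by construction, which is how the method injects per-group gradient information that a plain mean-gradient step would discard.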



Abstract

The invention relates to a quick training method for a large-scale data recurrent neural network (RNN), and belongs to the technical field of machine learning. The method comprises the steps of: updating the internal coefficients simultaneously along the average gradient direction and the principal component direction of the gradient residuals to quickly train the recurrent neural network; performing error back-propagation to obtain the gradient of the objective function with respect to the internal coefficients at each training sample; grouping the training samples; weighted-averaging the gradients of the entire training sample set and of each group according to the objective function value at each training sample; and updating the internal coefficients along the global average gradient direction and along the residual principal component direction, in which the group average gradients are orthogonal to the global average gradient. With this method, the gradient information of each training sample can be utilized effectively at relatively small computational cost, reducing the number of iteration steps and thereby increasing the computational efficiency of the RNN training process.

Description

Technical field

[0001] The invention belongs to the technical field of machine learning, and in particular relates to large-scale data information processing applications such as speech recognition, natural language processing, and high-dimensional time-series analysis.

Background technique

[0002] Contemporary data acquisition technology generates a large amount of complex data, which contains rich information and has great potential value for various application fields in production and scientific research. Extracting useful information from large-scale data requires effective data processing methods. The artificial neural network is one of the most widely used data information extraction methods, and has shown outstanding performance in computer vision, speech recognition, and natural language processing.

[0003] An Artificial Neural Network (ANN), referred to as a Neural Network (NN), is a computational model that imitates the structure and function of biological neural ne...

Claims


Application Information

Patent Timeline
no application
Patent Type & Authority: Application (China)
IPC(8): G06N3/02
Inventor: 杨广文, 李连登, 付昊桓, 袁龙
Owner TSINGHUA UNIV