Deep learning oriented sparse self-adaptive neural network, algorithm and implementation device

A neural network algorithm and deep learning technology, applied to biological neural network models and their physical implementation; it addresses problems such as loss of accuracy and achieves the effect of reduced storage requirements.

Inactive Publication Date: 2016-04-13
CHONGQING UNIV


Problems solved by technology

However, using traditional methods to quantify floating-point data and ...



Examples


Example Embodiment

[0037] Example one

[0038] Generally, a traditional artificial neural network includes a visible layer with a certain number of input nodes and a hidden layer with a certain number of output nodes. In some designs, a label layer is used as the highest layer of the network; it is an optional component of the present invention, not an essential one. Each node of a hidden layer is connected to the input nodes of the visible layer through weighted connections. Note that when there are two or more hidden layers, each hidden layer is connected to the next. Once the hidden layer of the lower-level network has been trained, it serves as the visible layer of the higher-level network.

[0039] Figure 1 is a schematic diagram of the classic DBN model. In the DBN network, the parameters describing the connections are dense real numbers, and the computation at each layer is a matrix multiplication between the interconnected units and ...
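The dense per-layer computation and the layer stacking described above can be sketched as follows. This is a minimal illustration only: the layer sizes, random weights, and sigmoid activation are assumptions, not values taken from the patent.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical layer sizes: visible -> hidden 1 -> hidden 2.
layer_sizes = [784, 500, 200]
rng = np.random.default_rng(0)
weights = [rng.normal(0.0, 0.01, size=(a, b))
           for a, b in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(b) for b in layer_sizes[1:]]

v = rng.integers(0, 2, size=layer_sizes[0]).astype(float)  # binary input
for W, b in zip(weights, biases):
    # Each layer is a dense matrix multiplication with real-valued weights;
    # the hidden output becomes the "visible" input of the next layer.
    v = sigmoid(v @ W + b)
print(v.shape)  # → (200,)
```

Each such dense weight matrix is exactly the storage burden the invention's sparse 1-bit connections are meant to avoid.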

Example Embodiment

[0076] Example two

[0077] Early sparse-DBN research focused only on extracting sparse features rather than on using sparse connections to generate efficient network architectures for hardware models. Recent neuromorphic hardware models for deep learning place an increasing number of neurons on chip, but integrating one million neurons and one billion synapses on a single chip remains no small challenge. Figure 4 shows a device for optimizing and implementing a deep-learning-oriented sparse self-adaptive neural network; its MAP table and TABLE table are obtained by the DAN sparse algorithm of the present invention.

[0078] The specific workflow is as follows:

[0079] 1) Check whether the input bit axon[i] is 1. If it is 1, a synaptic event is present, and the corresponding position in the MAP table is accessed according to the value of i. If it is 0, the next input bit is examined.

[0080] 2) Read the corresponding start address and length value in the MAP. If the length va...



Abstract

The invention discloses a deep-learning-oriented sparse self-adaptive neural network. The network comprises at least one layer of self-adaptive restricted Boltzmann machine; each such layer comprises a visible layer and a hidden layer, and the visible layer and the hidden layer are sparsely connected. In the disclosed network, the visible layer and the hidden layer are sparsely connected; at the same time, each connection represented by a 32-bit real number is optimized into a connection represented by a 1-bit integer. This optimization does not affect pattern recognition and still satisfies precision requirements, and a large-scale neural network can be realized on a single chip using only fixed-point arithmetic and a small amount of multiplication.
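A back-of-the-envelope calculation shows the storage effect of the 32-bit-to-1-bit optimization described in the abstract. The connection count below is a hypothetical illustration value, not a figure from the patent.

```python
def weight_storage_bytes(n_connections, bits_per_weight):
    """Total weight storage for a network with the given connection count."""
    return n_connections * bits_per_weight // 8

n = 1_000_000_000                             # e.g. one billion synapses
dense_32bit = weight_storage_bytes(n, 32)     # 4,000,000,000 bytes (4 GB)
sparse_1bit = weight_storage_bytes(n, 1)      # 125,000,000 bytes (125 MB)
print(dense_32bit // sparse_1bit)             # → 32
```

A 32x reduction in weight storage per connection, on top of the sparse connectivity itself, is what makes single-chip realization plausible.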

Description

technical field [0001] The invention relates to the field of integrated circuit / neural network / big data computing, and in particular to the field of model construction and optimization of on-chip deep self-adaptive neural networks. Background technique [0002] In this technical field, existing technology for realizing neural network models is mainly divided into software implementation and hardware implementation. [0003] Software implementation: usually a specific neural network algorithm is run on a general-purpose processing unit (CPU) or a general-purpose graphics processing unit (GPGPU) based on the von Neumann architecture. In neural network models such as the classic DBN model, the connections between neurons are realized by a matrix storing weight values. As the number of neurons increases, the size of the weight matrix grows explosively as O(n²), which means that a large amount of storage resources (such as memory) must be consumed. Li...
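The O(n²) growth mentioned above can be made concrete with a quick calculation of dense weight-matrix storage at 4 bytes per 32-bit weight. The neuron counts are hypothetical illustration values.

```python
def dense_weight_bytes(n_neurons):
    """Storage for a fully connected n x n weight matrix of 32-bit floats."""
    return n_neurons * n_neurons * 4

# A 10x increase in neurons means a 100x increase in weight storage.
for n in (1_000, 10_000, 100_000):
    gb = dense_weight_bytes(n) / 1e9
    print(f"{n:>7} neurons -> {gb:10.3f} GB")
```

At 100,000 neurons the dense matrix already needs 40 GB, far beyond on-chip memory, which is the motivation for the sparse representation the invention introduces.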

Claims


Application Information

IPC(8): G06N3/02; G06N3/06
CPC: G06N3/02; G06N3/061
Inventors: 周喜川, 李胜力, 余磊, 李坤平, 赵昕, 杨帆, 谭跃, 唐枋, 胡盛东, 甘平
Owner: CHONGQING UNIV