
Method and device for optimizing neural network

A neural network optimization method and device, applied in the field of artificial neural networks, addressing problems such as the large memory and storage occupation, long running time, and high computational complexity of image data processing

Inactive Publication Date: 2017-05-10
GUANGZHOU SHIYUAN ELECTRONICS CO LTD
Cites: 0 | Cited by: 23

AI Technical Summary

Problems solved by technology

When a neural network model is used to carry out face recognition, the existing problems are as follows: 1. The computational complexity of image data processing is high, which lengthens the running time (for example, processing a face image on an electronic device equipped with a Core i7 processor often takes more than 1 second); 2. A large amount of memory space or graphics-card video memory space is occupied during processing; 3. A large amount of storage space is also required to store the entire neural network model.
[0003] Existing neural network model optimization methods cannot completely solve the above problems. For example, optimization in the form of Huffman coding can preserve the processing and calculation accuracy of the optimized neural network model and effectively reduce the storage space of the neural network model, but it cannot reduce the complexity of the processing operations or shorten the running time, nor can it reduce the memory or video memory space occupied during processing.

Method used


Image

  • Patent drawings (×3): Method and device for optimizing neural network

Examples

Experimental program
Comparison scheme
Effect test

Embodiment 1

[0023] Figure 1 is a schematic flowchart of a neural network optimization method provided in Embodiment 1 of the present invention. The method is applicable to compressing and optimizing a neural network after training and learning, and can be executed by a neural network optimization device. The device can be implemented in software and/or hardware and is generally integrated on the terminal device or server platform where the neural network model is located.

[0024] Generally speaking, a neural network here mainly refers to an artificial neural network, which can be regarded as an algorithmic mathematical model that imitates the behavioral characteristics of animal neural networks and performs distributed, parallel information processing. The unit nodes in a neural network are divided into at least three layers, including the input layer, the hidden layer and the output layer. The input layer and the output layer each contain only one layer of unit nodes...
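As a concrete illustration of this layered structure, the following is a minimal sketch (hypothetical, not taken from the patent) of the weight parameter matrices between adjacent layers of a small fully connected network; the layer sizes and the use of NumPy are assumptions made purely for illustration.

```python
import numpy as np

# Hypothetical three-layer network: input (4 nodes) -> hidden (8 nodes) -> output (2 nodes).
rng = np.random.default_rng(0)
w_input_hidden = rng.normal(size=(4, 8))   # weight parameter matrix, input layer -> hidden layer
w_hidden_output = rng.normal(size=(8, 2))  # weight parameter matrix, hidden layer -> output layer

# Column j of w_input_hidden holds every incoming weight of hidden unit node j,
# and row j of w_hidden_output holds its outgoing weights, so per-node statistics
# can be read straight off the weight parameter matrices.
incoming = np.abs(w_input_hidden).sum(axis=0)   # shape (8,), one value per hidden unit node
outgoing = np.abs(w_hidden_output).sum(axis=1)  # shape (8,), one value per hidden unit node
print(incoming.shape, outgoing.shape)
```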

Embodiment 2

[0038] Figure 2 is a schematic flowchart of a neural network optimization method provided by Embodiment 2 of the present invention. This embodiment is optimized on the basis of the above-mentioned embodiment. In this embodiment, the step of determining the weight parameter matrix between two adjacent layers of unit nodes in the initial neural network is further refined as follows: if the connection between the unit nodes of two adjacent layers in the initial neural network is a fully connected connection, a two-dimensional weight parameter matrix is formed from the weight parameter values corresponding to the lines between the unit nodes; if the connection between the unit nodes of two adjacent layers in the initial neural network is a convolutional connection, a multi-dimensional weight parameter matrix is formed from the weight parameter arrays corresponding to the connections between the unit nodes.
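To make the fully connected versus convolutional distinction concrete, here is a small, hypothetical NumPy sketch; the layer sizes, channel counts and kernel shape are illustrative assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fully connected link between two adjacent layers: one scalar weight per line
# between unit nodes, which forms a two-dimensional weight parameter matrix.
fc_weights = rng.normal(size=(256, 128))        # (nodes in layer k, nodes in layer k+1)

# Convolutional link: each connection carries a kernel (an array of weight
# parameters), which forms a multi-dimensional weight parameter matrix.
conv_weights = rng.normal(size=(64, 32, 3, 3))  # (out_channels, in_channels, kernel_h, kernel_w)

print(fc_weights.ndim)    # 2 -> two-dimensional weight parameter matrix
print(conv_weights.ndim)  # 4 -> multi-dimensional weight parameter matrix
```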

[0039] Further, the weight parameter matrix...

Embodiment 3

[0063] Figure 3a is a schematic flowchart of a neural network optimization method provided by Embodiment 3 of the present invention. This embodiment is optimized on the basis of the above-mentioned embodiments. In this embodiment, a further step is added: determining whether the current processing accuracy of the target neural network meets the set accuracy conditions, and, based on the determination result, performing training and learning or deep optimization on the target neural network.

[0064] On the basis of the above optimization, the step of training or deeply optimizing the target neural network based on the determination result is further refined as follows: if the current processing accuracy does not meet the set accuracy conditions, the target neural network is trained until the set accuracy condition is met or the set number of training iterations is reached; otherwise, a self-increment operation is performed on the deletion threshold, and the target neural network is used as a ...
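A hedged sketch of the prune/retrain loop this embodiment describes is given below; the helper callables (prune, train, accuracy) and the exact stopping rule are assumptions filled in for illustration, since the paragraph above is truncated, and they are not APIs defined by the patent.

```python
def optimize(network, prune, train, accuracy, target, max_rounds, threshold, step):
    """Iteratively delete unit nodes, retraining whenever accuracy drops.

    prune, train and accuracy are caller-supplied callables (hypothetical
    stand-ins, not functions defined in the patent text).
    """
    while True:
        candidate = prune(network, threshold)        # delete unit nodes below the deletion threshold
        rounds = 0
        while accuracy(candidate) < target and rounds < max_rounds:
            candidate = train(candidate)             # training / learning pass to recover accuracy
            rounds += 1
        if accuracy(candidate) < target:
            return network                           # accuracy not recovered: keep the last good network
        network = candidate
        threshold += step                            # self-increment the deletion threshold and prune again
```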


PUM

No PUM

Abstract

The embodiment of the invention discloses a method and a device for optimizing a neural network. The method comprises the steps of acquiring an initial neural network conforming to set accuracy conditions and determining a weight parameter matrix between two adjacent layers of unit nodes in the initial neural network; processing the weight parameter matrix according to a preset deletion threshold and determining unit nodes to be deleted in the initial neural network; and deleting the unit nodes so as to form an optimized target neural network. By using the method disclosed by the invention, the neural network can be compressed simply and efficiently, thereby realizing optimization of the neural network, so that when face recognition is performed based on the optimized neural network, recognition processing is accelerated, recognition processing time is shortened, and the space occupied by storage, running memory, display memory and the like is reduced.
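To make the abstract concrete, the following is a minimal, hypothetical NumPy sketch of threshold-based deletion of unit nodes for one pair of fully connected layers; the per-node scoring rule (mean absolute incoming weight) and the function name are assumptions for illustration, not the criterion claimed by the patent.

```python
import numpy as np

def delete_weak_nodes(w_in, w_out, deletion_threshold):
    """Delete hidden unit nodes whose weights fall below a preset deletion threshold.

    w_in  : (n_prev, n_hidden) weight parameter matrix into the hidden layer
    w_out : (n_hidden, n_next) weight parameter matrix out of the hidden layer
    The per-node score (mean absolute incoming weight) is an illustrative
    assumption, not the patent's claimed criterion.
    """
    score = np.abs(w_in).mean(axis=0)          # one score per hidden unit node
    keep = score >= deletion_threshold         # unit nodes to keep
    return w_in[:, keep], w_out[keep, :]       # smaller target neural network

# Toy usage: a 4 -> 8 -> 2 network pruned with a deletion threshold of 0.5.
rng = np.random.default_rng(0)
w1, w2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))
w1p, w2p = delete_weak_nodes(w1, w2, deletion_threshold=0.5)
print(w1.shape, "->", w1p.shape)               # fewer hidden unit nodes after deletion
```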

Description

technical field [0001] Embodiments of the present invention relate to the technical field of artificial neural networks, and in particular to a neural network optimization method and device. Background technique [0002] At present, face recognition is usually performed based on a trained neural network model (such as a deep convolutional neural network model). When the neural network model is used to carry out face recognition, the existing problems are as follows: 1. The computational complexity of image data processing is high, which lengthens the running time (for example, processing a face image on an electronic device equipped with a Core i7 processor often takes more than 1 second); 2. A large amount of memory space or graphics-card video memory space is occupied during processing; 3. A large amount of storage space is also required to store the entire neural network model. [0003] Existing neural network model optimization methods cannot completely solve...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/08
CPC: G06N3/082
Inventor: 张玉兵
Owner: GUANGZHOU SHIYUAN ELECTRONICS CO LTD