
Neural network optimizing method and device

A neural network optimization technology applied in the field of computer vision. It addresses the problems of slow neural network calculation speed and poor real-time performance, and achieves the effects of reducing memory overhead, reducing data volume, and improving convolution operation speed.

Active Publication Date: 2017-09-08
BEIJING TUSEN ZHITU TECH CO LTD

AI Technical Summary

Problems solved by technology

[0004] In view of the above problems, the present invention provides a neural network optimization method and device to solve the problems of slow calculation speed and poor real-time performance of neural networks in the prior art.


Examples


Embodiment 1

[0037] Referring to Figure 1, a flow chart of the neural network optimization method provided by an embodiment of the present invention. In this embodiment, the convolutional layer of the neural network is processed, and the method includes:

[0038] Step 101: Perform binarization and bit packing operations on the input data of the convolutional layer along the channel direction to obtain compressed input data.

[0039] The input data of a convolutional layer is generally three-dimensional, characterized by its height, width, and number of channels; the channel count is usually large, generally a multiple of 32. Figure 2 shows a schematic diagram of the input data and the corresponding compressed input data, where H denotes the height of the input data, W its width, and C its number of channels; the height and width of the compressed input data are not changed…
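As a rough illustration of step 101, here is a minimal NumPy sketch of binarizing along the channel direction and packing every 32 sign bits into one 32-bit word. The function name, the `>= 0` sign convention, and the bit order are assumptions for illustration; the patent does not fix these details.

```python
import numpy as np

def binarize_and_pack(x):
    """Sign-binarize an (H, W, C) input along the channel axis and pack
    every 32 bits into one uint32 word, giving shape (H, W, C // 32).
    Hypothetical helper; bit order and sign convention are assumed."""
    H, W, C = x.shape
    assert C % 32 == 0, "channel count assumed to be a multiple of 32"
    bits = (x >= 0).astype(np.uint64).reshape(H, W, C // 32, 32)
    weights = np.uint64(1) << np.arange(32, dtype=np.uint64)  # bit i <- channel i
    return (bits * weights).sum(axis=-1).astype(np.uint32)
```

Packing 32 channels into one 32-bit word cuts the stored data volume by a factor of 32 (relative to one byte per binarized value, by a factor of 8), which is consistent with the reduced memory overhead described above.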

Embodiment 2

[0082] Referring to Figure 7, a schematic flowchart of a neural network optimization method provided by an embodiment of the present invention. The method includes steps 701 to 709. Steps 701 to 705 process the convolutional layers in the neural network and correspond one-to-one to steps 101 to 105 in Figure 1; for their specific implementation, refer to Embodiment 1, which is not repeated here. Steps 706 to 709 process the fully connected layers in the neural network. The order of steps 706 to 709 relative to steps 701 to 705 is not strictly limited and is determined by the structure of the neural network. For example, if the network layers contained in the neural network are, in sequence, convolutional layer A, convolutional layer B, fully connected layer C, convolutional layer D, and fully connected layer E, then steps 701 to 705 are applied to each convolutional layer in turn according to the order of the network layers contained in the neural network…
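The excerpt above does not spell out steps 706 to 709, but a binarized fully connected layer is typically computed the same way as the binarized convolutions: with sign bits packed into machine words, each dot product in the +1/-1 domain reduces to XNOR followed by a population count. A hypothetical NumPy sketch under that assumption (all names and shapes are illustrative, not from the patent):

```python
import numpy as np

# Byte-wise popcount table; real implementations would use a hardware
# popcount instruction instead.
POPCOUNT = np.array([bin(i).count("1") for i in range(256)], dtype=np.int64)

def binary_dense(x_packed, w_packed, n_bits):
    """Hypothetical binarized fully connected layer.
    x_packed: (K,) uint32 packed activations; w_packed: (M, K) uint32
    packed weight rows; n_bits = 32 * K valid bits per row.
    Returns the (M,) dot products in the +1/-1 domain."""
    xnor = ~(x_packed[None, :] ^ w_packed)  # bit is 1 where the signs agree
    matches = POPCOUNT[xnor.view(np.uint8)].reshape(len(w_packed), -1).sum(axis=1)
    return 2 * matches - n_bits             # agreements minus disagreements
```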

Embodiment 3

[0115] Based on the same concept as the neural network optimization method provided in Embodiments 1 and 2 above, Embodiment 3 of the present invention provides a neural network optimization device, whose structural diagram is shown in Figure 10.

[0116] The first data processing unit 11 is configured to perform binarization and bit packing operations on the input data of the convolutional layer along the channel direction to obtain compressed input data;

[0117] The second data processing unit 12 is configured to perform binarization and bit packing operations on the convolution kernels of the convolution layer along the channel direction to obtain corresponding compressed convolution kernels;

[0118] The division unit 13 is configured to divide the compressed input data sequentially, in the order of the convolution operations, into data blocks of the same size as the compressed convolution kernel, where the input data involved in one convolution op…
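A minimal sketch of what such a division unit might compute, assuming valid padding and a row-major operation order; the function and parameter names are illustrative, not from the patent:

```python
import numpy as np

def divide_into_blocks(packed, kh, kw, stride=1):
    """Cut the packed (H, W, C32) input into (kh, kw, C32) blocks, one per
    convolution position, in row-major sliding-window order (valid padding).
    Hypothetical helper illustrating the division unit."""
    H, W, C32 = packed.shape
    blocks = [
        packed[i:i + kh, j:j + kw, :]
        for i in range(0, H - kh + 1, stride)
        for j in range(0, W - kw + 1, stride)
    ]
    return np.stack(blocks)  # (num_positions, kh, kw, C32)
```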



Abstract

The invention discloses a neural network optimization method and device to solve the prior-art problems of slow neural network calculation and poor real-time performance. The method comprises: performing binarization and bit packing on the input data of a convolutional layer along the channel direction to obtain compressed input data; performing binarization and bit packing on the convolution kernels of the convolutional layer along the channel direction to obtain corresponding compressed convolution kernels; sequentially dividing the compressed input data, in the order of the convolution operations, into data blocks of the same size as the compressed convolution kernels, the input data involved in one convolution operation forming one data block; performing a convolution operation between each data block of the compressed input data and the compressed convolution kernels in turn to obtain convolution results; and obtaining the output data of the convolutional layer from the convolution results. The technical solutions of the invention increase the calculation speed and improve the real-time performance of the neural network.
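Under the common binary-network convention that binarized values represent +1/-1 and a dot product becomes XNOR plus popcount, the pipeline in the abstract can be sketched end to end as follows. The sign convention, bit order, valid padding, and stride 1 are assumptions for illustration, not details fixed by the abstract:

```python
import numpy as np

# Byte-wise popcount table (real code would use a hardware popcount).
POPCOUNT = np.array([bin(i).count("1") for i in range(256)], dtype=np.int64)

def pack_bits(x):
    """Sign-binarize (H, W, C) along the channel axis (C a multiple of 32)
    and pack into (H, W, C // 32) uint32 words."""
    H, W, C = x.shape
    bits = (x >= 0).astype(np.uint64).reshape(H, W, C // 32, 32)
    weights = np.uint64(1) << np.arange(32, dtype=np.uint64)
    return (bits * weights).sum(axis=-1).astype(np.uint32)

def binary_conv2d(x, k):
    """Slide the packed kernel over the packed input (valid padding,
    stride 1) and replace each multiply-accumulate with XNOR + popcount;
    outputs are dot products in the +1/-1 domain."""
    xp, kp = pack_bits(x), pack_bits(k)
    kh, kw, _ = kp.shape
    n_bits = kh * kw * k.shape[2]
    oh, ow = xp.shape[0] - kh + 1, xp.shape[1] - kw + 1
    out = np.empty((oh, ow), dtype=np.int64)
    for i in range(oh):
        for j in range(ow):
            xnor = ~(xp[i:i + kh, j:j + kw, :] ^ kp)   # 1 where signs agree
            matches = POPCOUNT[xnor.view(np.uint8)].sum()
            out[i, j] = 2 * matches - n_bits           # matches minus mismatches
    return out
```

The speedup claimed above comes from this substitution: 32 multiply-accumulates collapse into one XNOR and one popcount on a single machine word.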

Description

Technical Field

[0001] The invention relates to the field of computer vision, and in particular to a neural network optimization method and device.

Background

[0002] In recent years, deep neural networks have achieved great success in various computer vision applications, such as image classification, object detection, and image segmentation. [0003] However, deep neural network models often contain a large number of parameters, involve a large amount of computation, and process slowly; they cannot run in real time on devices with low power consumption and low computing capability (such as embedded devices and integrated devices).

Summary of the Invention

[0004] In view of the above problems, the present invention provides a neural network optimization method and device to solve the problems of slow calculation speed and poor real-time performance of neural networks in the prior art. [0005] Embodiments of the p…


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/04; G06F17/15
CPC: G06N3/063; G06F17/153; G06N3/045; G06N3/08; G06F12/0207; H03M7/30; G06F17/16; G06N20/10
Inventors: 胡玉炜, 李定华, 苏磊, 靳江明
Owner: BEIJING TUSEN ZHITU TECH CO LTD