
Double-layer same-or binary neural network compression method based on lookup table calculation

A binary neural network compression technology applied in the field of digital image processing. It addresses the high power consumption and logic resource consumption of existing structures, and achieves the effect of reducing the number of parameters and the computational complexity.

Active Publication Date: 2019-07-09
SOUTHEAST UNIV
Cites: 3 · Cited by: 5

AI Technical Summary

Problems solved by technology

[0007] However, the power consumption and logic resource consumption of this structure are relatively large




Embodiment Construction

[0025] The technical solution of the present invention will be further described below in conjunction with the accompanying drawings.

[0026] A double-layer same-or (XNOR) binary neural network compression method based on lookup table calculation. The compression method is carried out by a double-layer convolution structure, and its algorithm comprises the following steps: first, the input feature map undergoes nonlinear activation, batch normalization, and binary activation, and the first-layer convolutions are then performed in groups with different convolution kernel sizes to obtain the first-layer output. Next, a second-layer convolution with a 1×1 kernel is applied to the first-layer output to obtain the output feature map.
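The two-layer flow above (binarize, grouped first-layer convolutions with different kernel sizes, then a 1×1 second-layer convolution) can be sketched in plain Python. The function names, the number of groups, and the 3×3/5×5 kernel sizes below are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def binarize(x):
    # Binary activation: +1 for non-negative values, -1 for negative values.
    return np.where(x >= 0, 1.0, -1.0)

def conv2d_same(x, w):
    # Naive "same"-padded 2-D convolution on a single channel (illustration only).
    kh, kw = w.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * w)
    return out

def double_layer_block(x, group_kernels, w_1x1):
    # First layer: grouped convolutions, one kernel size per group,
    # applied to the binarized input feature map with binarized weights.
    xb = binarize(x)  # in the full method this follows activation and batch norm
    feats = [conv2d_same(xb, binarize(k)) for k in group_kernels]
    # Second layer: a 1x1 convolution, i.e. a weighted sum across the
    # per-group feature maps, producing the output feature map.
    return sum(w * f for w, f in zip(w_1x1, feats))

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
kernels = [rng.standard_normal((3, 3)), rng.standard_normal((5, 5))]  # different sizes per group
out = double_layer_block(x, kernels, w_1x1=[0.5, 0.5])
print(out.shape)  # (8, 8)
```

"Same" padding keeps every group's output at the input's spatial size, so feature maps from differently sized kernels can be combined by the 1×1 layer without cropping.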

[0027] Its hardware implementation steps include:

[0028] (1) After the hardware implements the nonlinear activation, batch normalization, and binary activation process, the convolution module of the first la...



Abstract

The invention discloses a double-layer same-or (XNOR) binary neural network compression method based on lookup table calculation. The compression method is carried out by a double-layer convolution structure, and the algorithm comprises the following steps: first, nonlinear activation, batch normalization, and binary activation are applied to the input feature map, and first-layer convolutions with different kernel sizes are performed in groups to obtain the first-layer output; then, a 1×1 second-layer convolution is applied to the first-layer output to obtain the output feature map. In the hardware implementation, the traditional double-layer sequential calculation mode is replaced by a three-input XOR operation that computes the improved double-layer convolution in parallel, and all double-layer convolution operations are calculated via lookup tables, increasing the utilization of hardware resources. The proposed compression method is an algorithm-hardware collaborative compression scheme that combines a full-precision, high-efficiency neural network technique with a lookup-table calculation mode; it achieves a good structural compression effect and also reduces logic resource consumption in hardware.
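The lookup-table idea in the abstract rests on a standard identity for binary networks: a dot product between ±1 vectors reduces to XNOR plus popcount, which maps directly onto FPGA LUTs. A minimal software sketch of that identity follows; the byte-wise table stands in for hardware LUTs, and all names and sizes here are assumptions rather than details from the patent:

```python
# Binary dot product as XNOR + popcount, evaluated through a lookup table.
# In hardware, each small group of XNOR outputs would feed one FPGA LUT;
# here a 256-entry popcount table plays the same role in software.
POPCOUNT = [bin(i).count("1") for i in range(256)]

def binary_dot(a_bits, w_bits, n_bits):
    """Dot product of two {+1,-1} vectors packed as integers, 1 bit per element.

    XNOR marks matching positions, so
    dot = (#matches) - (#mismatches) = 2 * popcount(~(a ^ w)) - n.
    """
    xnor = ~(a_bits ^ w_bits) & ((1 << n_bits) - 1)
    matches = 0
    while xnor:  # popcount via byte-wise table lookup, not bit-by-bit counting
        matches += POPCOUNT[xnor & 0xFF]
        xnor >>= 8
    return 2 * matches - n_bits

def pack(v):
    # Pack a {+1,-1} vector into an integer: +1 -> bit set, -1 -> bit clear.
    bits = 0
    for i, x in enumerate(v):
        if x > 0:
            bits |= 1 << i
    return bits

a = [1, -1, 1, 1, -1, 1, -1, -1, 1]   # 3x3 binarized input patch, flattened
w = [1, 1, 1, -1, -1, 1, 1, -1, 1]    # 3x3 binarized kernel, flattened
print(binary_dot(pack(a), pack(w), 9))  # -> 3, same as the naive +/-1 dot product
```

Because each multiply-accumulate collapses to a 1-bit comparison plus a table read, the multiplier-free structure is what lets the method trade DSP blocks for LUT resources on an FPGA.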

Description

Technical field

[0001] The invention relates to an FPGA optimization design technology for binary neural networks, belonging to the technical field of digital image processing.

Background technique

[0002] With the vigorous development of deep learning, convolutional neural networks (CNNs) have been widely used in digital image processing. From the classic AlexNet to the ResNet residual network proposed by Microsoft Research, deep convolutional neural networks have entered a period of rapid development, and their performance has steadily improved. In practical applications, Google has achieved remarkable results in autonomous driving and face recognition using convolutional neural networks. At the same time, convolutional neural networks have encountered challenges in the course of this development: for example, the high computational load and high complexity of the convolutional ne...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06N3/04, G06N3/063
CPC: G06N3/063, G06N3/045, Y02D10/00
Inventors: 张萌, 李建军, 李国庆, 沈旭照, 曹晗翔, 刘雪梅, 陈子洋
Owner SOUTHEAST UNIV