
A Convolutional Neural Network Accelerator Architecture with Binarized Weights and Activations

A convolutional neural network accelerator architecture in which both the weights and the activation values are binarized. It addresses the problem that computing speed is limited by the number of available multipliers, and achieves the effects of resolving read-write conflicts, saving hardware computing resources, and improving throughput.

Active Publication Date: 2022-03-08
北京中科胜芯科技有限公司

AI Technical Summary

Problems solved by technology

When an FPGA implements a convolutional neural network with 16-bit-wide data, the computing speed is limited by the number of multipliers available in the FPGA.



Examples


Embodiment

[0019] In this embodiment, the convolutional neural network accelerator architecture with binarized weights and activation values is shown in Figure 1. Weight data are stored in memory 101, memory 105, and memory 109 in the FPGA; feature map data are stored in memory 103, memory 104, memory 107, and memory 108 in the FPGA. Arithmetic unit 102, arithmetic unit 106, and arithmetic unit 110 are built from logic resources. The output of memory 101 is connected to arithmetic unit 102; the output of arithmetic unit 102 is connected to memory 103 and memory 104 respectively; the outputs of memory 103 and memory 104 are connected to arithmetic unit 106, as is the output of memory 105; the output of arithmetic unit 106 is connected to memory 107 and memory 108; the outputs of memory 107 and memory 108 are connected to arithmetic unit 110, and the output of sto...
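The paired feature-map memories (103/104 and 107/108) suggest a ping-pong (double-buffer) arrangement: while the upstream arithmetic unit writes one buffer of a pair, the downstream unit reads the other, which is one way the read-write conflicts mentioned above can be avoided. The following is a minimal software sketch of that dataflow under this assumption; the `stage` function is a hypothetical placeholder, not the patent's actual binarized convolution operator.

```python
def stage(x, w):
    # Hypothetical placeholder standing in for one binarized layer
    # (arithmetic units 102, 106, 110 in the embodiment).
    return [xi ^ w for xi in x]

def run_pipeline(frames, weights):
    """Stream input frames through a three-stage pipeline.

    `ping` models the memory pair 103/104 and `pong` the pair 107/108:
    on each tick, one half of a pair is written while the other half,
    filled on the previous tick, is read by the next stage.
    """
    ping = [None, None]  # buffers between stage 1 and stage 2
    pong = [None, None]  # buffers between stage 2 and stage 3
    out = []
    for t, frame in enumerate(frames):
        sel = t % 2  # which half of each pair is written this tick
        # Stage 3 reads the half of `pong` written on the previous tick.
        if pong[1 - sel] is not None:
            out.append(stage(pong[1 - sel], weights[2]))
        # Stage 2 reads the half of `ping` written on the previous tick.
        if ping[1 - sel] is not None:
            pong[sel] = stage(ping[1 - sel], weights[1])
        # Stage 1 writes its result into the current half of `ping`.
        ping[sel] = stage(frame, weights[0])
    return out
```

Because reads and writes of each pair always target opposite halves, no buffer is read and written in the same tick, mirroring how alternating block RAMs decouple producer and consumer in the hardware architecture.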



Abstract

The invention relates to a convolutional neural network accelerator architecture in which both the weights and the activation values are binarized, comprising: a first memory, a fifth memory, and a ninth memory for storing weight data; a third memory, a fourth memory, a seventh memory, and an eighth memory for storing feature map data; and a second arithmetic unit, a sixth arithmetic unit, and a tenth arithmetic unit. Because both operands are binary, data multiplication is realized by an XOR operation, which can be completed with logic resources alone, without using multipliers.

Description

Technical field

[0001] The invention relates to a convolutional neural network accelerator architecture in which both the weights and the activation values are binarized, and belongs to the technical field of integrated circuit design.

Background technique

[0002] In recent years, convolutional neural networks have been widely applied in many fields. Because a convolutional neural network contains a large number of parameters, storing each parameter as a floating-point number consumes considerable storage space. FPGA implementations therefore commonly represent each value as a fixed-point number.

[0003] Forward inference of a convolutional neural network places low demands on data precision. Some researchers have proposed representing data with 16-bit, 8-bit, or even lower bit widths without greatly affecting the final result. In the extreme case, 1-bit data can be used to represent the weight and the value ...
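The XOR trick mentioned in the abstract can be made concrete. Assuming the common binarization to {+1, -1} with the encoding +1 → bit 0 and -1 → bit 1 (an assumption for this sketch; the patent does not spell out its encoding), the product of two values is -1 exactly when their signs differ, so elementwise multiplication reduces to XOR, and a whole dot product reduces to XOR plus a population count:

```python
def bin_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two n-element {+1, -1} vectors packed as bits.

    Assumed encoding: bit 0 -> +1, bit 1 -> -1. Under this encoding the
    elementwise product is the XOR of the two bits: the product is -1
    (bit 1) exactly when the signs differ.
    """
    diff = (a_bits ^ b_bits) & ((1 << n) - 1)  # 1-bits mark -1 products
    neg = bin(diff).count("1")                 # popcount: number of -1 terms
    return n - 2 * neg                         # (+1)*(n - neg) + (-1)*neg

# Example: a = [+1, -1, +1, +1] -> 0b0010, b = [+1, -1, -1, +1] -> 0b0110
# elementwise products = [+1, +1, -1, +1], so bin_dot(...) == 2
```

This is why the architecture needs no DSP multipliers: XOR and popcount map directly onto FPGA lookup tables, so the whole multiply-accumulate is built from logic resources.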

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06N3/04; G06N3/063
CPC: G06N3/063; G06N3/045
Inventor: 毛宁, 黄志洪, 杨海钢
Owner: 北京中科胜芯科技有限公司