
Deep learning accelerator for accelerating BERT neural network operation

A neural network and deep learning technology, applied in the field of deep learning accelerators, which solves the problem of wasted storage units and achieves the effects of reducing the required storage space, off-chip data interaction, computing time, and power consumption

Active Publication Date: 2020-04-24
FUDAN UNIV

AI Technical Summary

Problems solved by technology

The output feature values of each branch must be temporarily stored in dedicated storage space while they are not yet participating in post-processing, which wastes storage units




Embodiment Construction

[0021] The invention is described more fully hereinafter with reference to the examples illustrated in the accompanying drawings, which provide preferred embodiments; however, the invention should not be construed as limited to the embodiments set forth herein.

[0022] The embodiment is a deep learning accelerator for accelerating BERT neural network operations; Figure 1 shows its top-level circuit block diagram.

[0023] The accelerator includes three feature-value memories, two weight memories, three matrix multiplication arrays, a Softmax and dot-product calculation unit, a controller, and on-chip/off-chip interfaces.
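As a rough structural sketch of this component list (a software illustration only; the class and attribute names are assumptions and the real design is a hardware circuit), the blocks can be modeled as follows:

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class BertAccelerator:
    """Illustrative model of the blocks listed in [0023]; all names are assumptions."""
    # three feature-value memories holding input and output feature values
    feature_memories: list = field(default_factory=lambda: [[] for _ in range(3)])
    # two weight memories holding branch / layer weights
    weight_memories: list = field(default_factory=lambda: [[] for _ in range(2)])

    def matmul_array(self, features, weights):
        # one of the three matrix multiplication arrays: multiply-accumulate over the shared dimension
        return np.asarray(features) @ np.asarray(weights)

    def softmax_dot_product(self, scores, values):
        # Softmax probability function followed by a dot product with the branch output
        scores = np.asarray(scores)
        p = np.exp(scores - scores.max(axis=-1, keepdims=True))
        p /= p.sum(axis=-1, keepdims=True)
        return p @ np.asarray(values)

    # The controller and the on-chip/off-chip interfaces would sequence data movement
    # between off-chip DRAM and the on-chip memories; they are omitted from this sketch.
```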

[0024] Figure 2 is the calculation flow chart of the accelerated BERT neural network. It mainly comprises two parts: the multi-head attention branch neural network structure and the feed-forward end-to-end neural network structure. When computing the multi-head attention neural network, the same input is matrix-multiplied with the weights of each of the three branches, and the output feature values obta...
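The paragraph above is truncated in the source. As a minimal sketch of the branch flow it describes (three branch matrix multiplications on the same input, followed by Softmax and dot-product post-processing), the computation can be written in NumPy as below; the shapes, variable names, and the scaling factor are illustrative assumptions from the standard attention formulation, not details taken from the patent text:

```python
import numpy as np

def attention_branch(x, w_q, w_k, w_v):
    """Sketch of the multi-head attention branch flow described in [0024]."""
    # The same input x is matrix-multiplied with the weights of the three branches.
    q = x @ w_q          # branch 1 (query)
    k = x @ w_k          # branch 2 (key)
    v = x @ w_v          # branch 3 (value)

    # Post-processing: Softmax probability function and dot product on the branch outputs.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    probs = np.exp(scores - scores.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    return probs @ v

# Example with assumed sizes: sequence length 4, feature width 8
x = np.random.randn(4, 8)
w_q, w_k, w_v = (np.random.randn(8, 8) for _ in range(3))
out = attention_branch(x, w_q, w_k, w_v)   # shape (4, 8)
```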


Abstract

The invention belongs to the technical field of integrated circuits, and particularly relates to a deep learning accelerator for accelerating BERT neural network operation. The accelerator comprises: three matrix multiply arrays used for calculating multiply-accumulate operations; a Softmax and dot-product calculation unit used for calculating the Softmax probability function and carrying out dot multiplication on the branch network outputs to obtain output feature values; three feature memories used for storing input and output feature values; two weight memories; and a controller and an on-chip/off-chip interface, wherein the controller controls the interaction between data in the off-chip DRAM and the on-chip data. The branch network structure in the neural network is optimized, so that the storage space for intermediate data is effectively reduced, the frequency of off-chip/on-chip data interaction is reduced, and power consumption is lowered; meanwhile, reconfigurable data interconnections between the storage units and the calculation units are configured, so that the computation requirements of the branch network structure in the BERT neural network are met, and the accelerator can also be used for end-to-end neural network calculation.
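The abstract's claim that optimizing the branch structure reduces intermediate storage can be illustrated very loosely (this is not the patent's actual dataflow, only the general idea) by processing one attention head at a time, so that a head's branch outputs are consumed by the Softmax/dot-product step immediately instead of every head's Q, K, V being buffered before post-processing:

```python
import numpy as np

def per_head_attention(x, w_q, w_k, w_v, num_heads):
    """Illustrative schedule only (assumed, not the patent's dataflow):
    each head's branch outputs are post-processed as soon as they are produced,
    so only one head's Q, K, V need to be live at a time."""
    d = x.shape[-1]
    hd = d // num_heads
    outputs = []
    for h in range(num_heads):
        cols = slice(h * hd, (h + 1) * hd)
        q = x @ w_q[:, cols]      # only this head's branch outputs are held
        k = x @ w_k[:, cols]
        v = x @ w_v[:, cols]
        s = q @ k.T / np.sqrt(hd)
        p = np.exp(s - s.max(axis=-1, keepdims=True))
        p /= p.sum(axis=-1, keepdims=True)
        outputs.append(p @ v)     # head result; this head's Q, K, V can now be discarded
    return np.concatenate(outputs, axis=-1)
```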

Description

Technical field

[0001] The invention belongs to the technical field of integrated circuits, and in particular relates to a deep learning accelerator for accelerating BERT neural network operations.

Background technique

[0002] With the development of the BERT neural network, it is widely used in natural language processing tasks such as automatic question answering, text translation, text classification, and speech recognition. The BERT neural network comprises a multi-head attention neural network part with a branch structure and an end-to-end feed-forward prediction neural network part. The former extracts the associated features between contexts, and the latter predicts the next speech or text output based on the extracted features.

[0003] Current computing platforms (CPU/GPU) and traditional custom-designed neural network accelerators can only accelerate the processing of end-to-end neural networks. For a neural network with a b...
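A minimal sketch of the end-to-end feed-forward prediction part mentioned in [0002] is given below; the two-layer shape and the GELU activation are standard-BERT assumptions rather than details stated in this text:

```python
import numpy as np

def feed_forward(features, w1, b1, w2, b2):
    """Sketch of the end-to-end feed-forward part: two dense layers that map the
    features extracted by the attention part to a prediction.
    The GELU activation is the usual BERT choice (an assumption here)."""
    h = features @ w1 + b1
    # tanh approximation of GELU
    h = 0.5 * h * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (h + 0.044715 * h ** 3)))
    return h @ w2 + b2
```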

Claims


Application Information

IPC(8): G06N3/063; G06N3/04; G06N3/08
CPC: G06N3/063; G06N3/08; G06N3/045; Y02D10/00
Inventor: 刘诗玮, 张怡云, 史传进
Owner: FUDAN UNIV